2412.08586v1
http://arxiv.org/abs/2412.08586v1
Asymptotically good CSS-T codes exist
\documentclass[12pt,letterpaper]{article} \usepackage{graphicx} \usepackage{mathtools} \usepackage{amsthm} \usepackage{amssymb} \usepackage{tikz} \usepackage{tikz-cd} \usepackage{braket} \usepackage{hyperref} \usepackage[capitalize]{cleveref} \usepackage{authblk} \usepackage{multirow} \usepackage[normalem]{ulem} \usepackage{setspace}\doublespacing \usepackage[margin=1in]{geometry} \providecommand{\keywords}[1] { \small \textbf{\textit{Keywords---}} #1 } \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{proposition}{Proposition} \newtheorem{remark}{Remark} \newtheorem{claim}{Claim} \newcommand\ie{\textit{i.e.,~}} \newcommand\eg{\textit{e.g.,~}} \newcommand{\wt}{{\textrm wt}} \newcommand{\en}{{\textrm En}} \newcommand{\Span}{{\textrm Span }} \newcommand{\Supp}{{\textrm Supp}} \newcommand{\F}{\mathbb{F}} \newcommand{\Fq}{\mathbb{F}_q} \newcommand{\defeq}{\vcentcolon=} \newcommand\elena[1]{{\textcolor{blue}{\textsc{Elena}: #1}}} \newcommand\shreyas[1]{{\textcolor{purple}{\textsc{Shreyas}: #1}}} \newcommand\reza[1]{{\textcolor{red}{\textsc{Reza}: #1}}} \newcommand\olatz[1]{{\textcolor{green}{\textsc{Olatz}: #1}}} \newcommand\josu[1]{{\textcolor{orange}{\textsc{Josu}: #1}}} \title{Asymptotically good CSS-T codes exist} \author[1]{Elena Berardini\thanks{\emph{Email addresses:} [email protected], [email protected],[email protected]}\thanks{[email protected], [email protected]}\thanks{The order of the authors is purely alphabetical. All authors contributed to the paper to the same extent.}} \author[3]{Reza Dastbasteh} \author[3]{Josu Etxezarreta Martinez} \author[1,2]{Shreyas Jain} \author[3]{Olatz Sanz Larrarte} \affil[1]{CNRS, IMB, University of Bordeaux, France} \affil[2]{Indian Institute of Science Education and Research, Mohali, India} \affil[3]{Department of Basic Sciences, Tecnun - University of Navarra, San Sebastian, Spain} \makeatletter \def\thanks#1{\protected@xdef\@thanks{\@thanks \protect\footnotetext{#1}}} \makeatother \date{} \begin{document} \sloppy \maketitle \begin{abstract} We give a new construction of binary quantum codes that enables the generation of a CSS-T code from any given CSS code. Using this construction, we prove the existence of asymptotically good binary CSS-T codes, resolving a previously open problem in the literature. Furthermore, we demonstrate that the same result holds for binary quantum low-density parity check CSS-T codes, and establish the existence of asymptotically good CSS codes that support any given $Z$ rotation transversally. Finally, we analyze the structure of the logical operators corresponding to certain non-Clifford gates supported by the quantum codes obtained from our construction. \end{abstract} \keywords{quantum code, CSS code, CSS-T code, asymptotically good codes, LDPC code, transversal gate} \section*{Introduction} Quantum error correction is a well-known and widely used technique for protecting quantum information from corruption by noise and, consequently, enabling the possibility of constructing fault-tolerant quantum computers. An $[\![n,k,d]\!]$ binary quantum error-correcting code encodes $k$ logical qubits of information into $n$ physical qubits and can correct up to $\lfloor \frac{d-1}{2} \rfloor$ errors. By itself, a quantum error correction code acts as a memory. However, operations over the logical level are required in order to obtain fault-tolerant quantum computers capable of realizing tasks that may exhibit quantum advantage. 
In this sense, logical operations are realized by manipulating the physical qubits encoding the information so that the desired logical operator is applied to the encoded logical qubits. Importantly, the operations applied to the physical qubits should preserve the code space, otherwise, the way in which the logical qubit is protected would essentially be lost. Furthermore, those logical operators must be induced in a fault-tolerant manner, implying that the overall quantum computation on the logical level is reliable enough. The most natural approach to fault tolerance is the design of transversal gates, which are formed by the tensor product of single-qubit gates. A key feature of transversal operators is that single errors do not propagate within the code blocks during their operation, making errors trackable. In general, the Eastin--Knill theorem states that no universal gate set can be implemented on the logical level in a transversal manner for binary quantum error-correcting codes with $d>1$ \cite{eastin2009restrictions}. Recently, a way to circumvent such a no-go theorem has been proposed under certain restrictions on the error model, but its application seems to be restricted to $d$-dimensional quantum systems \cite{EastinCircumvent}. There exist several universal quantum gate sets, with the Clifford+$T$ gate set being the most studied one \cite{solovayKitaev}. Quantum circuits composed purely of Clifford gates are efficiently classically simulable as a result of the Gottesman--Knill theorem \cite{Gottesman:1998hu}. Implementing only transversal Clifford gates is both easier and less resource-intensive. In fact, finding codes accepting logical Clifford gates in a transversal manner is relatively straightforward. For example, any Calderbank--Shor--Steane (CSS) code constructed from a self-orthogonal binary cyclic code, or more generally doubly even self-orthogonal code, and its dual, admits transversal Clifford operators \cite{dastbasteh2024quantum,grassl2000cyclic,S99_nature}. Nevertheless, those codes would lack the $T$-gate, implying that the quantum computations that can be realized would be classically simulable. A way to circumvent this problem is by means of magic state distillation and injection \cite{Campbell2017}. In this paradigm, magic state resources are injected into the ``memory code'' via a teleportation protocol resulting in the application of such operation on the code space in a fault-tolerant manner and the consumption of the magic resource \cite{terhalQmemories}. The logical operation induced to the logical qubits will depend on the magic state resource injected, a common choice for the latter being $\ket{T}$ magic states that result in the $T$ gate, which is the main non-Clifford gate discussed in this paper. In order to apply the $T$ gate in a fault-tolerant manner, \ie maintaining the error suppression level obtained by the code, magic states with a sufficient level of fidelity (of the order of the memory code) must be obtained. To do so, magic state distillation protocols such as the Bravyi--Haah protocols are used \cite{bravyi2012magic}. Such protocols make use of a certain amount of noisy magic states to distill a smaller number of magic states with higher fidelities. Magic state distillation protocols are closely connected to quantum error correction codes that admit the transversal implementation of a $T$-gate, \ie such operation does not change the code space. These codes are commonly referred to as quantum codes that support transversal $T$. 
Motivated by this application, numerous works in the literature have focused on constructing and characterizing stabilizer codes that support transversal $T$ gates; see, for instance, \cite{andrade2023css,camps2024toward,CMLMRS24,camps2024binary,RCNP20,RCNP20b}. In particular, \cite{RCNP20} introduces CSS codes supporting transversal $T$ gates, known as CSS-T codes, which are the main object studied in this paper. The same authors demonstrate in \cite{RCNP20b} that CSS-T codes are optimal among all non-degenerate stabilizer codes supporting transversal $T$ gates. However, a limitation of all currently known infinite families of CSS-T codes is that they exhibit either a vanishing rate or a vanishing relative distance. An intriguing open problem, proposed in \cite{RCNP20}, is to construct asymptotically good CSS-T codes, which simultaneously achieve a non-vanishing rate and relative distance. \paragraph{Our contribution.} Motivated by the open problem mentioned above, we introduce a construction that produces a binary CSS-T code from any given binary CSS code. We employ this construction to prove the existence of asymptotically good binary CSS-T codes. Additionally, we show that our construction preserves the low-density parity check (LDPC) property, proving the existence of asymptotically good LDPC codes that are also CSS-T. We also demonstrate that a generalization of our construction yields asymptotically good classes of binary codes that support transversal gates corresponding to any $Z$ rotation angle. Furthermore, we study the structure of the logical operators corresponding to the transversal $T$ gate, finer $Z$ rotations, and the $CCZ$ gate on our proposed CSS codes, which have the desired symmetry for these transversal gates. \paragraph{Organization of the paper.} In Section \ref{sec:preliminaries}, we set the notations and give definitions and known results on classical linear codes and quantum gates and codes. We also recall the definitions of CSS and CSS-T codes, which are the main focus of this paper. Section \ref{sec:CCSTconstruction} presents our construction of CSS-T codes from CSS ones and their resulting parameters and properties. In particular, we prove here the existence of asymptotically good CSS-T codes. In Section \ref{sec:transversalgates}, we discuss how to generalize our construction to support other transversal gates and examine their corresponding logical operators. Finally, in Section \ref{sec:conclusion}, we wrap up our work and present future research questions. \section{Preliminaries}\label{sec:preliminaries} In this section, we give a short introduction to classical and quantum error-correcting codes and their interplay. This allows us to fix the notations we will use for the rest of the paper. We refer the reader to \cite{HP10} and to \cite{gottesman1997stabilizer,NC02,Scherer19} for a detailed presentation of classical and quantum codes, respectively. \subsection{Classical linear codes} We work over the binary field $\F_2$. A (\emph{linear}) \emph{code} $C$ is an $\F_2$-linear subspace of $\F_2^n$, where $n$ is a positive integer. A vector $x\in C$ will be called a \emph{codeword}. The \emph{length} of the code $C$ is the dimension of the ambient space, namely $n$. The \emph{dimension} of $C$ is its dimension as a linear subspace over $\F_2$. A linear code $C$ can be defined using a {\em generator matrix}, that is, a binary matrix with the row space equal to $C$. A linear code $D \subseteq C$ is called a \emph{subcode} of $C$. 
The \emph{support} of a vector $x \in \F_2^n$ is the set $ \Supp(x):=\{i \in \{1,\dots, n\} \, | \, x_i \neq 0\}$. The (\emph{Hamming}) \emph{weight} of a codeword $x \in C$ is defined by $\wt(x) := |\Supp(x)|$. If $\wt(x)\equiv 0 \pmod 2$, then $x$ is called an {\em even (weight) vector}. A linear code $C\subseteq \F_2^n$ is called an {\em even code} if it only contains even vectors. For a set of codewords $S$ we use the notation $\wt(S)\coloneqq\{\wt(x) \mid x\in S, x\neq \mathbf{0}\}$, where $\boldsymbol{0}$ denotes the all-zeros vector. Finally, the \emph{minimum distance} $d(C)$ of a code $C$ is defined as $$ d(C)\coloneqq\min\{\wt(x) \mid x \in C, \, x \neq \boldsymbol{0}\}.$$ We will refer to a code of length $n$, dimension $k$ and minimum distance $d$ as an $[n,k,d]$ code. The \emph{(Euclidean) dual} of $C\subseteq\F_2^n$ is the linear code \[C^\perp\coloneqq\{ y\in\F_2^n \mid x \cdot y=0 \mbox{ for all } x \in C\},\] where $x \cdot y$ denotes the {\em Euclidean inner product} of the vectors $x$ and $y$. If $C$ has dimension $k$, then $C^\perp$ has dimension $n-k$. A code $C$ is said to be \emph{self-orthogonal} if $C\subseteq C^\perp$, and \emph{self-dual} if $C=C^\perp$. The component-wise multiplication of two length $n$ vectors $x$ and $y$, also known as the {\em Schur product}, is defined as $$(x_1,x_2,\ldots,x_n)\star(y_1,y_2,\ldots,y_n)\coloneqq(x_1y_1,x_2y_2,\ldots,x_n y_n).$$ The Schur product of two codes $C,D\subseteq\F_2^n$ is then defined as \[C\star D\coloneqq \Span\{x\star y \mid x\in C, y\in D\}.\] The Schur product $C\star C$ is denoted $C^{\star 2}$. Note that for a binary linear code $C$, we have $x\star x= x$ for any $x\in C$, whence $C\subseteq C^{\star 2}$. We refer to \cite{Randriam15} for an extensive survey on the Schur product of linear codes. \subsection{Quantum gates and quantum error correction} Throughout the paper, $\mathcal{H}$ is the complex Hilbert space $\mathbb{C}^2$ endowed with the inner product $$\langle u,v \rangle=\sum_{i=1}^{n} \overline{u_i} v_i,$$ where $v,u \in \mathcal{H}$ and $\overline{u_i}$ is the complex conjugate. A \emph{qubit} of length one (or $1$-qubit) is an element of $\mathcal{H}$ of norm one. We show the set of one qubit Pauli operators by \begin{center} $I=\begin{bmatrix}1&0\\0&1\end{bmatrix}$, $X=\begin{bmatrix}0&1\\1&0\end{bmatrix}$, $Y=\begin{bmatrix}0&-i\\i&0\end{bmatrix}$, and $Z=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$. \end{center} These operators form a basis for all $2\times 2$ matrices acting over $\mathcal H$. Hence an arbitrary one qubit error operator, \ie a~$2\times 2$ unitary matrix over $\mathcal H$, can be represented as a linear combination of the mentioned matrices. Let $\{\ket 0,\ket 1\}$ be a basis for $\mathcal{H}$, where $\ket 0=\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\ket 1=\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. An arbitrary state of a closed 1-qubit quantum system is defined by $$\ket \varphi=\alpha \ket 0+\beta \ket 1,$$ where $\alpha,\beta \in \mathbb{C}$ and $|\alpha|^2+|\beta|^2=1$. The action of each of the Pauli operators $P$ on $\ket{\varphi}$ is defined by $P\ket{\varphi}$. In particular, $X\ket{\varphi}=\beta \ket 0+\alpha \ket 1$ and $Z\ket{\varphi}=\alpha \ket 0-\beta \ket 1$. 
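To make the action of the Pauli operators concrete, the following short Python sketch (a minimal illustration added here, using arbitrary amplitudes) checks numerically the formulas $X\ket{\varphi}=\beta \ket 0+\alpha \ket 1$ and $Z\ket{\varphi}=\alpha \ket 0-\beta \ket 1$ recalled above.
\begin{verbatim}
# Minimal numerical sketch (illustration only): the Pauli matrices and their
# action on a 1-qubit state |phi> = alpha|0> + beta|1>.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

alpha, beta = 0.6, 0.8j                   # any amplitudes with |alpha|^2 + |beta|^2 = 1
phi = np.array([alpha, beta])

assert np.allclose(I @ phi, phi)
assert np.allclose(X @ phi, [beta, alpha])          # X swaps the amplitudes
assert np.allclose(Z @ phi, [alpha, -beta])         # Z flips the sign of the |1> amplitude
assert np.allclose(Y @ phi, 1j * (X @ (Z @ phi)))   # Y = iXZ
print("Pauli actions verified")
\end{verbatim}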
Recall that the $Z$ rotation by the angle $\frac{2\pi}{2^{\ell}}$ for some $\ell \ge 0$ is given by the gate $$\varGamma^{(\ell)}=\begin{bmatrix}1&0\\0&e^{\frac{2\pi i}{2^\ell}}\end{bmatrix}.$$ Then, the action of one qubit gate $\varGamma^{(\ell)}$ on the basis states is \begin{equation*} \begin{split} &\varGamma^{(\ell)} |0\rangle \longmapsto |0\rangle \ \text{and} \ \varGamma^{(\ell)} |1\rangle \longmapsto \mathrm{e}^{\frac{2\pi i}{2^{\ell}}}|1\rangle. \\ \end{split} \end{equation*} If we fix $\ell = 0,1,2,3$, then the $Z$ rotation $\varGamma^{(\ell)}$ is in correspondence to $I,Z,S$, and $T$, respectively. For any $n\in\mathbb{N}$, we let $\mathcal{H}^n=\mathcal{H}\otimes \dots \otimes \mathcal{H}$ be the $2^n$-dimensional complex Hilbert space $\mathbb{C}^{2^n}$, where $\otimes$ denotes the usual tensor product of elements in $\mathbb{C}^2$. The inner product defined above for $\mathcal{H}$ generalizes to elements of $\mathcal{H}^n$. Qubits of length $n$ (or $n$-qubits) are elements of $\mathcal{H}^n$ of norm one. For the rest of this paper, we fix $\{\ket a \mid a\in \F_2^n \}$ to be a basis for $\mathcal{H}^{n}$. \subsection{From classical to quantum codes} We start by reviewing the construction of quantum CSS codes from a pair of classical codes. The following theorem summarizes the CSS construction of binary quantum codes, which is a commonly used technique in the literature. \begin{theorem}[\cite{CS96}]\label{th:css} Let $C_2,C_1 \subseteq \mathbb{F}_2^n$ be two linear codes such that $C_2 \subseteq C_1$. Then, there exists an $[\![n,k,d]\!]$ quantum code, called CSS and denoted by $(C_1,C_2)_\mathrm{CSS}$, where $k=\dim(C_1)-\dim(C_2)$ and $d=\min\{\wt(C_1\setminus C_2),\wt(C_2^\perp\setminus C_1^\perp)\}$. \end{theorem} The quantum CSS code of \cref{th:css} has a {\em parity check matrix} of the form $$\begin{bmatrix} H_X & 0\\ 0 & H_Z \end{bmatrix},$$ where $H_X$ and $H_Z$ are generator matrices of $C_2$ and $C_1^\perp$, respectively. Throughout the paper, we will use the notation $d_X$ and $d_Z$ for $\min\{\wt(C_1\setminus C_2)\}$ and $\min\{\wt(C_2^\perp\setminus C_1^\perp)\}$, respectively, and refer to them as the $X$-distance and the $Z$-distance, respectively. Note that since $d_X\geq \min\left\{d(C_1)\right\}$ and $d_Z\geq \min\left\{d(C_2^\bot)\right\}$, we always have $d\geq \min\left\{d(C_1),d(C_2^\bot)\right\}$ for a quantum code $(C_1,C_2)_\mathrm{CSS}$. When equality holds in the latter inequality, we say that the quantum CSS code is {\em non-degenerate}. Otherwise, the code will be called {\em degenerate}. Let $C_2 \subseteq C_1$ be binary linear codes of length $n$ that define a binary quantum CSS code with parameters $[\![n,k,d]\!]$. Let $C_1=C_2\oplus \Span\{x_1,x_2,\ldots,x_k\}$, for some $x_1,x_2,\ldots,x_k \in C_1 \setminus C_2$. Let $H$ be a $k \times n$ binary matrix having $x_1,x_2,\ldots,x_k$ as its rows. A common encoding approach for this binary quantum code is defined by $\en:\mathcal{H}^{k} \rightarrow \mathcal{H}^{n}$, where \begin{equation}\label{encoding} \en \ket{u}=\frac{1}{\sqrt{|C_2|}}\sum_{x\in C_2} \ket{x+uH}, \end{equation} where $u\in \F_2^{k}$. Note that the quantum state on the left-hand side of Equation \eqref{encoding}, namely $\ket u$, is a $k$-qubit state, while the right-hand side quantum states are $n$-qubit states. To distinguish them, from now on, we show the logical states using the notation $\ket{u}_L$. Moreover, to simplify our computations, {\em we discard the normalization factor}. 
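As a concrete illustration of Theorem~\ref{th:css} (on a toy pair of classical codes chosen purely for illustration, namely the classical $[7,4,3]$ Hamming code and its dual), the following Python sketch recovers by brute force the parameters of the resulting $[\![7,1,3]\!]$ Steane code.
\begin{verbatim}
# Minimal brute-force sketch (illustration only): the CSS construction applied
# to C1 = [7,4,3] Hamming code and C2 = C1^perp, which satisfies C2 <= C1.
import itertools
import numpy as np

G1 = np.array([[1, 0, 0, 0, 0, 1, 1],   # generator matrix of the [7,4,3] Hamming code
               [0, 1, 0, 0, 1, 0, 1],
               [0, 0, 1, 0, 1, 1, 0],
               [0, 0, 0, 1, 1, 1, 1]])

def span(G):
    """All F_2-linear combinations of the rows of G, as tuples of ints."""
    return {tuple(int(t) for t in np.mod(np.array(m) @ G, 2))
            for m in itertools.product([0, 1], repeat=len(G))}

def dual(C, n):
    """Brute-force Euclidean dual of a set of codewords C in F_2^n."""
    return {v for v in itertools.product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

C1 = span(G1)
C2 = dual(C1, 7)                        # the [7,3,4] simplex code
assert C2 <= C1                         # C2 is contained in C1, so the theorem applies

wt = sum
k = int(np.log2(len(C1)) - np.log2(len(C2)))            # k = dim C1 - dim C2
d_X = min(wt(v) for v in C1 - C2)                       # min wt(C1 \ C2)
d_Z = min(wt(v) for v in dual(C2, 7) - dual(C1, 7))     # min wt(C2^perp \ C1^perp)
print(f"[[7,{k},{min(d_X, d_Z)}]] quantum code, d_X = {d_X}, d_Z = {d_Z}")
\end{verbatim}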
Each operation on such a quantum code takes the form of an $n$-qubit operation that preserves the code space, \ie $\en(\mathcal{H}^{k})$. In general, each error in an $n$-qubit operation can propagate within several of the code blocks. Therefore, a fault-tolerant approach is to investigate operations that can be decomposed into a tensor product of $n$ single-qubit gates, where each single error causes an error in only one of the code blocks. Such $n$-qubit gates are called {\em transversal gates}. For instance, $T^{\otimes n}$ is an $n$-qubit transversal gate, which will be called the transversal $T$ gate in this paper. Recently, Rengaswamy \emph{et al.}~characterized all stabilizer codes in which the transversal $T$ gate, and some other variants of it, preserves the code space \cite{RCNP20b}. In the literature, stabilizer codes with this property, specifically those named \textit{triorthogonal}, have been proposed for magic state distillation \cite{bravyi2012magic}. In \cite{RCNP20b}, it was also shown that CSS codes are optimal among all non-degenerate stabilizer codes with this property. The restriction of their characterization to CSS codes, which in this setting are commonly known as {\em CSS-T codes}, is stated below. \begin{definition}[\cite{RCNP20}]\label{def:csst} A CSS code $(C_1,C_2)_\mathrm{CSS}$ is called a \emph{CSS-T code} if the following hold: \begin{enumerate} \item the code $C_2$ is even, \ie$\wt(x) \equiv 0 \pmod 2$ for each $x\in C_2$; \item for each $x\in C_2$, there exists a self-dual code in $C_1^\perp$ of dimension $\wt(x)/2$ that is supported on $x$, \ie there exists $C_x \subseteq C_1^\perp$ such that $|C_x| = 2^{\wt(x)/2}$, $C_x = C_x^{\perp_x}$, where $\perp_x$ denotes the dual taken in $\F_2^{\wt(x)}$, and for each $z \in C_x$ we have $\Supp(z) \subseteq \Supp(x)$. \end{enumerate} \end{definition} A reformulation of the above conditions in terms of the Schur product of the ingredient classical codes was proved in \cite{CMLMRS24} and is recalled below. \begin{theorem}[{\cite[Theorem 2.3]{CMLMRS24}}]\label{th:csstchar} A pair of binary linear codes $C_1$ and $C_2$ generates a CSS-T code if and only if $C_1^\perp + C_1^{\star 2} \subseteq C_2^\perp$. \end{theorem} It is worth noting that if we consider a pair of linear codes $C_2\subseteq C_1$ giving a CSS code, we obtain for free the inclusion $C_1^\perp \subseteq C_2^\perp$. Therefore, a given CSS code $(C_1,C_2)_\mathrm{CSS}$ is a CSS-T code if and only if $C_1^{\star 2} \subseteq C_2^\perp$. \smallskip Some known constructions and several examples of CSS-T codes are provided in \cite{andrade2023css,camps2024toward,CMLMRS24,camps2024binary,RCNP20,RCNP20b}. All currently known families of CSS-T codes have either a vanishing rate or a vanishing relative distance. An interesting open problem, which will be answered in the next section, is to construct an infinite family of asymptotically good CSS-T codes with non-vanishing rate and relative distance \cite{RCNP20}. \section{Constructing CSS-T codes from CSS codes}\label{sec:CCSTconstruction} In this section, we present a method to construct a CSS-T code from any given CSS code. This allows us to prove in \cref{th:goodcodes} and \cref{C:Asymp1} the existence of families of asymptotically good CSS-T codes. In particular, our construction preserves the LDPC property of the original CSS code, enabling us to show in \cref{T:Sparsity} and \cref{cor:asympldpc} the existence of asymptotically good quantum LDPC codes that are CSS-T.
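Before describing the construction, we illustrate the criterion of Theorem~\ref{th:csstchar} on a toy example (again the Hamming pair used in the earlier sketch, a choice made purely for illustration). The check below fails, which is consistent with the well-known fact that the $[\![7,1,3]\!]$ Steane code does not support the transversal $T$ gate.
\begin{verbatim}
# Minimal sketch (illustration only): check the CSS-T criterion, i.e.
# (x * y) . z = 0 over F_2 for all x, y in C1 and z in C2.
import numpy as np

G1 = np.array([[1, 0, 0, 0, 0, 1, 1],   # generators of C1 = [7,4,3] Hamming code
               [0, 1, 0, 0, 1, 0, 1],
               [0, 0, 1, 0, 1, 1, 0],
               [0, 0, 0, 1, 1, 1, 1]])
G2 = np.array([[0, 1, 1, 1, 1, 0, 0],   # generators of C2 = C1^perp (simplex code)
               [1, 0, 1, 1, 0, 1, 0],
               [1, 1, 0, 1, 0, 0, 1]])

def is_css_t(C1_gens, C2_gens):
    # The map (x, y, z) -> sum_i x_i y_i z_i mod 2 is F_2-trilinear, so it
    # suffices to test it on generators of C1, C1 and C2.
    return all((x * y) @ z % 2 == 0
               for x in C1_gens for y in C1_gens for z in C2_gens)

print(is_css_t(G1, G2))   # False: the pair (Hamming, simplex) is not CSS-T
\end{verbatim}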
Our approach is based on a technique that doubles the ingredient pair of classical codes. This entire section builds up to the following result. \begin{theorem}\label{T:CSST} For any given $[\![n,k,d]\!]$ binary CSS code $(C_1, C_2)_\mathrm{CSS}$, one can construct a $[\![2n, k,\geq d]\!]$ binary CSS-T code. \end{theorem} In the remainder of this section, we provide more details about our construction and prove the above mentioned result. \begin{definition}\label{def:phi2} Let $C\subseteq\F_2^n$ be a linear code. Let $\phi:C\to\F_2^n$ be a map. We define $$C^{N_\phi} \defeq \left\{(x,\phi(x)) \mid x\in C\right\}.$$ \end{definition} In what follows, we focus on using a CSS code $(C_1,C_2)_\mathrm{CSS}$ and an appropriate map $\phi$ such that $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)$ forms a CSS-T code. In the following proposition, we give necessary and sufficient conditions on the map $\phi$ so that $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)$ forms a CSS-T code. \begin{proposition}\label{prop:len-dim} Let $(C_1, C_2)_\mathrm{CSS}$ be a CSS code of length $n$. Then $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)$ forms a CSS-T code if and only if $\phi : C_1 \rightarrow \F_2^n$ is a linear map such that for each $x,y \in C_1$ and $z \in C_2$ we have $$\wt(x \star y \star z) + \wt(\phi (x) \star \phi (y) \star \phi (z))\equiv 0 \mod{2}.$$ \end{proposition} \begin{proof} By definition, $ C_2^{N_\phi} \subseteq C_1^{N_\phi}$. Hence, $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)$ is a CSS code if and only if $ C_1^{N_\phi}$ and $ C_2^{N_\phi}$ are linear codes, that is, if and only if for any $x,y \in C_1$ we have $(x,\phi(x)) + (y,\phi(y)) = (x + y, \phi(x) + \phi(y)) = (x + y, \phi(x+y))$, where the second equality follows by the definition of $C_1^{N_\phi}$. Thus, $\phi$ must be linear on $C_1$. Now, $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)_\mathrm{CSS}$ is CSS-T if and only if $\left({C_1^{N_\phi}}\right)^{\star 2} \subseteq \left({C_2^{N_\phi}}\right)^\perp$ by \cref{th:csstchar}. So $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)_\mathrm{CSS}$ is CSS-T if and only if for each $x,y \in C_1$ and each $z\in C_2$ we have \begin{align*} ((x,\phi(x))\star (y,\phi(y)))\cdot (z,\phi(z)) &= (x\star y)\cdot z + (\phi(x)\star \phi(y))\cdot \phi(z) \\ &= \sum_{i=1}^nx_iy_iz_i + \sum_{i=1}^n\phi(x)_i\phi(y)_i\phi(z)_i = 0. \end{align*} The last equality holds if and only if $\wt(x \star y \star z) + \wt(\phi (x) \star \phi (y) \star \phi (z)) \equiv 0 \pmod{2}$. This completes the proof. \end{proof} \begin{proposition}\label{prop:par} Let $\phi$ be a map satisfying the conditions of \cref{prop:len-dim}. If $(C_1, C_2)_\mathrm{CSS}$ is an $[\![n,k]\!]$ CSS code, then $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)$ is a $[\![2n,k]\!]$ CSS-T code. \end{proposition} \begin{proof} This follows immediately by the properties of $\phi$ and the definition of $C^{N_\phi}$. \end{proof} Any map $\phi$ satisfying the conditions of \cref{prop:len-dim} allows constructing a CSS-T code from a CSS one. Let us point out that permutations of the coordinates, including the identity, satisfy those properties. Furthermore, using different permutations as the $\phi$ map will result in CSS codes with the same parameters, as shown in the following proposition. \begin{proposition}\label{prop:permutations} Let $C_2 \subseteq C_1 \subseteq \F_2^n$ be a pair of binary codes, and let $I$ and $\phi$ be the identity map and a permutation map on $\F_2^n$, respectively. 
Then, the CSS codes $\left(C_1^{N_I}, C_2^{N_I}\right)_\mathrm{CSS}$ and $\left(C_1^{N_\phi}, C_2^{N_\phi}\right)_\mathrm{CSS}$ have the same parameters. \end{proposition} \begin{proof} Clearly, the two CSS codes have the same length and the same dimension. So, we only need to deal with the minimum distance. Let $\Phi=(I,\phi)$ be the permutation of $\F_2^{2n}$ that leaves the first $n$ coordinates unchanged and applies $\phi$ to the second $n$ coordinates. Then $\Phi\left(C_1^{N_I}\right)=C_1^{N_\phi}$ and $\Phi\left(C_2^{N_I}\right)=C_2^{N_\phi}$. Therefore, $\Phi\left(C_1^{N_I} \setminus C_2^{N_I}\right)=C_1^{N_\phi} \setminus C_2^{N_\phi}$. Hence, both CSS codes have the same $X$-distance. Since $\Phi$ is a permutation, we have $\Phi\left(\left({C_i^{N_I}}\right)^\bot\right)=\left(\Phi\left(C_i^{N_I}\right)\right)^\bot=\left(C_i^{N_\phi}\right)^\bot$ for $i \in \{1,2\}$. Thus $\Phi\left({\left(C_2^{N_I}\right)}^\bot \setminus {\left(C_1^{N_I}\right)}^\bot\right)={\left(C_2^{N_\phi}\right)}^\bot \setminus {\left(C_1^{N_\phi}\right)}^\bot$. Hence, both CSS codes have the same $Z$-distance. \end{proof} In the rest of the section, we fix $\phi$ to be the identity permutation $I$, and consider the code \begin{equation}\label{E:CN} C^{N_I} \defeq \{(x,x) \mid x\in C\}, \end{equation} which we will denote simply by $C^N$. \begin{remark}\label{rmk:phi} Other choices of $\phi$ are possible and may lead to CSS-T codes with different parameters. As an example, consider the map $\phi: x \mapsto x+a$ for some $a \in C_1 \cap (C_1^{\star 2})^\perp$. Finding maps $\phi$ giving CSS-T codes with better parameters than the choice $\phi=I$ is an interesting open question. We return to this discussion in the conclusion. \end{remark} By \cref{prop:par} we know that $\left(C_1^{N},C_2^{N}\right)_\mathrm{CSS-T}$ has length $2n$ and dimension $k=\dim(C_1)-\dim(C_2)$. Hence, in what follows, we focus on studying the minimum distance of this code. To this end, we set $d^N = \min\left\{d_X^N, d_Z^N\right\}$ for the CSS-T code $\left(C_1^N, C_2^N\right)_\mathrm{CSS-T}$, where $d_X^N=\min\left\{\wt\left(C_1^N\setminus C_2^N\right)\right\}$ and $d_Z^N=\min\left\{\wt\left((C_2^N)^\perp\setminus (C_1^N)^\perp\right)\right\}$. \begin{lemma}\label{L:xdis} The CSS-T code $\left(C_1^N, C_2^N\right)_\mathrm{CSS}$ satisfies $d_X^N = 2d_X$. \end{lemma} \begin{proof} Let $x^N =(x,x)$ be a minimum weight codeword of $C_1^N \setminus C_2^N$, where $x \in C_1$. Note that $x \in C_1 \setminus C_2$, as otherwise $x^N \in C_2^N$. The fact that $x^N$ has minimum weight implies that $x$ has weight $d_X$. Therefore, $\wt(x^N)=2d_X$ and we have $d_X^N = 2d_X$. \end{proof} To calculate $d_Z^N$, we need the following technical lemma. \begin{lemma}\label{C:properties} The following statements hold. \begin{enumerate} \item\label{lem1} Let $x = (x_1,x_2) \in \left(C_2^N\right)^\perp,$ for some $ x_i \in \mathbb{F}_2^n$. Then $x'=(x_1 + x_2, 0) \in \left(C_2^N\right)^\perp$. In particular, $x_1 + x_2 \in C_2^\bot$ and $\wt(x')\leq \wt(x)$. \item\label{lem2} For each $x = (a,a) \in \left(C_2^N\right)^\perp$, we have $x \in \left(C_1^N\right)^\perp$. \item\label{lem3} Let $x_1, x_2 \in \mathbb{F}_2^n$. Then $x_1 + x_2 \in C_1^\perp$ if and only if $x' = (x_1 + x_2, 0) \in \left(C_1^N\right)^\perp$.
\end{enumerate} \end{lemma} \begin{proof} \eqref{lem1} For each $ y = (a,a) \in C_2^N$ we have $$x\cdot y = x_1\cdot a+ x_2\cdot a= (x_1+x_2)\cdot a =x' \cdot y = 0.$$ The fact that $x_1 + x_2 \in C_2^\bot$ and $\wt(x')\leq \wt(x)$ follows immediately. \eqref{lem2} Let $y = (b,b) \in C_1^N$. Then $x \cdot y = a \cdot b + a \cdot b = 0$. Hence $x \in \left(C_1^N\right)^\perp$. \eqref{lem3} The statement is proved by the following sequence of equivalences: \begin{align*} x_1 + x_2 \in C_1^\perp &\iff \forall a\in C_1, (x_1 + x_2) \cdot a = 0 \iff \forall y =(a,a) \in C_1^N, x'\cdot y = 0\\ &\iff x' = (x_1 + x_2, 0) \in \left(C_1^N\right)^\perp. \end{align*} \end{proof} Next, we use the above lemma to compute $d_Z^N$. \begin{proposition}\label{L:zdis} The CSS-T code $\left(C_1^N, C_2^N\right)_\mathrm{CSS}$ satisfies $d_Z^N =d_Z$. \end{proposition} \begin{proof} Let $x = (x_1,x_2) \in \left(C_2^N\right)^\perp\setminus \left(C_1^N\right)^\perp$, with $x_1,x_2 \in \mathbb{F}_2^n$, be a minimum weight vector. Since $x \notin \left(C_1^N\right)^\perp$, Lemma \ref{C:properties}\eqref{lem2} implies that $x_1 \neq x_2$. Next, Lemma \ref{C:properties}\eqref{lem1} implies that $x'=(x_1 + x_2, 0) \in \left(C_2^N\right)^\perp$. Moreover, the fact that $x \notin \left(C_1^N\right)^\perp$ implies the existence of $0 \neq c\in C_1$ such that $x \cdot (c,c)=x_1 \cdot c + x_2 \cdot c \neq 0$. Therefore, $x'\cdot (c,c) \neq 0$, which implies $x' \notin \left(C_1^N\right)^\perp$. Now, Lemma \ref{C:properties}\eqref{lem1} and \eqref{lem3} imply that $x_1 + x_2 \in C_2^\perp \setminus C_1^\perp$. This shows that \begin{equation}\label{E:disz} d_Z^N=\wt(x) \ge \wt(x')=\wt(x_1 + x_2) \ge d_Z. \end{equation} For the other inequality, let $x$ be a vector of minimum weight in $C_2^\perp \setminus C_1^\perp$. Then, observe that $(x,0) \in \left(C_2^N\right)^\perp \setminus \left(C_1^N\right)^\perp$ as for each $(y,y) \in C_2^N$, where $y \in C_2$, we have $(x,0)\cdot (y,y) = x\cdot y = 0.$ The fact that $(x,0) \not\in (C_1^N)^\perp$ follows from Lemma \ref{C:properties}\eqref{lem3}. Thus \begin{equation}\label{E:disz2} \wt((x,0))=d_Z \ge d_Z^N. \end{equation} Finally, Equations \eqref{E:disz} and \eqref{E:disz2} complete the proof. \end{proof} Finally, we are ready to prove Theorem \ref{T:CSST}. \begin{proof}[Proof of Theorem \ref{T:CSST}] Let $(C_1,C_2)_\mathrm{CSS}$ have parameters $[\![n,k,d]\!]$. The combination of Propositions \ref{prop:par} and \ref{L:zdis} and Lemma \ref{L:xdis} shows that the CSS-T code $(C_1^N,C_2^N)$ has parameters $[\![2n,k,\geq d]\!]$. This concludes the proof. \end{proof} \begin{remark} Let $(C_1, C_2)_{\mathrm{CSS}}$ be a CSS code with the $X$ and $Z$ distances $d_X$ and $d_Z$, respectively. \begin{enumerate} \item When $d_X < d_Z$ the CSS-T code of Theorem \ref{T:CSST} has a strictly larger minimum distance than the initial CSS code. On the other hand, if $d_X > d_Z$ the CSS-T code has the same minimum distance as the initial CSS code. \item When $d_X \neq d_Z$, one can apply the result of Theorem \ref{T:CSST} to construct a CSS-T code with twice the length, improved minimum distance, and the same dimension as the ingredient CSS code. In particular, if $d_X < d_Z$ in a CSS code $(C_1, C_2)_{\mathrm{CSS}}$, then applying Theorem \ref{T:CSST} directly to this code gives the desired CSS-T code. In the case $d_X > d_Z$, one can instead consider $C_1'=C_2^\perp$ and $C_2'=C_1^\perp$, that is, exchange the $X$ and $Z$ stabilizers of the CSS code.
Then, applying Theorem \ref{T:CSST} to $C_2'\subseteq C_1'$ gives the desired result. \item If $d_X,d_Z>2$, then the CSS-T code of Theorem \ref{T:CSST} is a degenerate quantum code. In particular, $\left(C_1^N\right)^\perp$ contains all weight-two vectors of the form $(a,a)$, where $a\in\F_2^n$ is a weight-one vector, while $d^N>2$. \end{enumerate} \end{remark} Table \ref{TL:1} provides a list of examples of binary CSS-T codes that we obtained by applying the construction of Theorem \ref{T:CSST} to binary linear cyclic codes. \begin{table}[ht] \begin{center} \begin{tabular}{|p{1.6 cm} p{1.6 cm} p{1.6 cm} p{1.6 cm}|} \hline \multicolumn{4}{|p{7.4cm}|}{\centering CSS-T Parameters} \\ \hline $[\![14, 3, 3]\!]$ & $[\![18, 2, 3]\!]$ & $[\![30, 1, 6]\!]$ & $[\![30, 2, 6]\!]$ \\ $[\![30, 8, 4]\!]$ & $[\![30, 10, 3]\!]$ & $[\![42, 1, 6]\!]^\ast$ & $[\![42, 6, 6]\!]$ \\ $[\![42, 7, 5]\!]$ & $[\![42, 12, 4]\!]$ & $[\![46, 1, 7]\!]$ &$[\![54, 1, 6]\!]^\ast$ \\ $[\![62, 1, 11]\!]$ & $[\![62, 15, 6]\!]$ & $[\![66, 2, 10]\!]^\ast$ & \\ \hline \end{tabular} \end{center} \caption{Small length binary CSS-T codes from cyclic codes. All the CSS-T codes are $Z$-degenerate. The codes marked with $\ast$ are also $X$-degenerate.} \label{TL:1} \end{table} \subsection{Asymptotically good families of CSS-T codes} Recall that the rate and relative distance of a binary quantum code with parameters $[\![n,k,d ]\!]$ are defined as $R=\frac{k}{n}$ and $\delta=\frac{d}{n}$, respectively. In the literature there exist several infinite families of quantum CSS codes with non-vanishing asymptotic rate and relative distance, commonly known as asymptotically good quantum codes, see for example \cite[Theorem 1]{ashikhmin2001asymptotically}, \cite[Theorem 1.2]{chen2001asymptotically}, and \cite[Theorem 2]{panteleev2022asymptotically}. The existence of such families combined with our Theorem \ref{T:CSST} allows us to construct asymptotically good CSS-T codes. \begin{theorem}\label{th:goodcodes} From any given asymptotically good family of binary CSS codes it is possible to derive an asymptotically good family of binary CSS-T codes. \end{theorem} \begin{proof} Let $\{(C_1,C_2)_i\}$ be a family of asymptotically good CSS codes, where $i>0$ is an integer, with rate and relative distance $\frac{k_i}{n_i}$ and $\frac{d_i}{n_i}$, respectively, for each $i$. Let the asymptotic rate and relative distance be $R=\displaystyle\lim_{i \to \infty} \frac{k_i}{n_i}>0$ and $\delta=\displaystyle\lim_{i \to \infty} \frac{d_i}{n_i}>0.$ Then applying the result of Theorem \ref{T:CSST} gives the infinite family $\left\{\left({C_1}^{N},{C_2}^N\right)_i\right\}$ with asymptotic rate $R'=\frac{R}{2}>0$ and asymptotic relative distance $\delta'\ge\frac{\delta}{2}>0$. \end{proof} \begin{corollary}\label{C:Asymp1} Infinite families of quantum CSS-T codes with non-vanishing asymptotic rate and relative distance exist. \end{corollary} \begin{proof} Examples of infinite asymptotically good families of binary CSS codes are presented, for instance, in \cite[Theorem 1]{ashikhmin2001asymptotically}, \cite[Theorem 1.2]{chen2001asymptotically}, and \cite[Theorem 2]{panteleev2022asymptotically}. The result follows from Theorem \ref{th:goodcodes}. \end{proof} \subsection{Asymptotically good families of LDPC CSS-T codes} In this subsection, we show that the CSS-T construction of Theorem \ref{T:CSST} preserves the low-density parity check (LDPC) property. An immediate application of this is the existence of asymptotically good quantum LDPC CSS-T codes.
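Before turning to sparsity, here is a small computational sketch of the doubling construction, again on the toy Hamming pair from the earlier sketches (an example of illustration only): after doubling as in Equation~\eqref{E:CN}, the criterion of Theorem~\ref{th:csstchar} is satisfied, $d_X$ doubles, and $d_Z$ is unchanged, in line with Lemma~\ref{L:xdis} and Proposition~\ref{L:zdis}.
\begin{verbatim}
# Minimal sketch (toy example): the doubling C -> C^N = {(x,x)} of Equation
# (E:CN) applied to C1 = [7,4,3] Hamming code and C2 = C1^perp.
import itertools
import numpy as np

G1 = np.array([[1, 0, 0, 0, 0, 1, 1], [0, 1, 0, 0, 1, 0, 1],
               [0, 0, 1, 0, 1, 1, 0], [0, 0, 0, 1, 1, 1, 1]])
G2 = np.array([[0, 1, 1, 1, 1, 0, 0], [1, 0, 1, 1, 0, 1, 0], [1, 1, 0, 1, 0, 0, 1]])

def span(G):
    return [tuple(int(t) for t in np.mod(np.array(m) @ G, 2))
            for m in itertools.product([0, 1], repeat=len(G))]

def dual(C, n):
    return [v for v in itertools.product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)]

wt = sum
C1, C2 = span(G1), span(G2)
C1N, C2N = [c + c for c in C1], [c + c for c in C2]   # doubled codes of length 14

# The CSS-T criterion now holds: (a * b) . c = 0 for all a, b in C1^N and
# c in C2^N, because every coordinate is repeated twice.
assert all(sum(x * y * z for x, y, z in zip(a, b, c)) % 2 == 0
           for a in C1N for b in C1N for c in C2N)

d_X  = min(wt(v) for v in set(C1)  - set(C2))                       # = 3
d_XN = min(wt(v) for v in set(C1N) - set(C2N))                      # = 6, cf. Lemma L:xdis
d_Z  = min(wt(v) for v in set(dual(C2, 7))  - set(dual(C1, 7)))     # = 3
d_ZN = min(wt(v) for v in set(dual(C2N, 14)) - set(dual(C1N, 14)))  # = 3, cf. Prop. L:zdis
print(d_XN == 2 * d_X, d_ZN == d_Z)   # True True: a [[14,1,>=3]] CSS-T code from [[7,1,3]]
\end{verbatim}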
Quantum codes with sparse parity check matrices are especially interesting in the context of quantum computing. The LDPC property implies that the computation of each parity constraint imposed by the code involves a small number of qubits. As a result, the qubits of the code need to interact by means of a small number of two-qubit gates. Since the gates used for syndrome extraction are also noisy, having sparse parity constraints is beneficial for practical implementations of quantum error correction codes \cite{BBcodes,breukLDPC}. Furthermore, sparse parity check matrices are also important for decoding purposes \cite{bpotf_2024,roffeDecodeSparse}. Thus, we discuss how our construction can be used to yield asymptotically good quantum LDPC codes. Before stating this result, we need some basic definitions and elementary results. Let $C_2 \subseteq C_1 \subseteq \F_2^n$ be binary linear codes. Recall that the corresponding quantum CSS code has a {\em parity check matrix} $$H=\begin{bmatrix} H_X & 0\\ 0 & H_Z \end{bmatrix},$$ where $H_X$ and $H_Z$ are generator matrices of $C_2$ and $C_1^\perp$, respectively. A quantum code with a sparse parity check matrix $H$ is called a {\em quantum LDPC} code. Recall also that the following definition was given in Equation \eqref{E:CN}: $$C_i^{N_I}=\left\{(x,x): x \in C_i\right\}$$ for each $i \in \{1,2\}$. In the following, we will use the shorter notation $C_i^{N}$ for $C_i^{N_I}$. The next theorem shows that applying the result of Theorem \ref{T:CSST} to a CSS code preserves its sparsity by sending its parity check matrix to another sparse parity check matrix. \begin{theorem}\label{T:Sparsity} Let $(C_1, C_2)_\mathrm{CSS}$ be a CSS code, and let $$H=\begin{bmatrix} H_X & 0\\ 0 & H_Z \end{bmatrix},$$ where $H_X, H_Z$ generate $C_2$ and $C_1^\perp$, respectively. Then, the CSS-T code $\left(C_1^N,C_2^N\right)$ has parity check matrix $$H^N=\begin{bmatrix} H_X^N & 0\\ 0 & H_Z^N \end{bmatrix},$$ where \begin{center} $H^N_X=\begin{bmatrix} H_X & H_X \end{bmatrix}$ and $H^N_Z=\begin{bmatrix} H_Z & 0\\ I & I \end{bmatrix}.$ \end{center} In particular, if $r_X$ and $r_Z$ are the maximum row weights of $H_X$ and $H_Z$, respectively, then the maximum row weights of $H_X^N$ and $H_Z^N$ are $2r_X$ and $\max\{r_Z,2\}$. \end{theorem} \begin{proof} The definition of $C_2^N$ shows that $H_X^N$ is a generator matrix for $C_2^N$. Moreover, one can easily check that the linear span of the rows of $H_Z^N$ is a subspace of $\left(C_1^N\right)^\bot$. Next, we show that the row space of $H_Z^N$ is in fact $(C_1^N)^\bot$. Let $(x,y) \in \left(C_1^N\right)^\perp$ for some $x,y \in \F_2^n$. Let $e_i^N \defeq (e_i,e_i)$, where $e_i\in \F_2^n$ is the standard basis vector whose $j$-th entry is $$({e_i})_j= \begin{cases} 1 & \text{ if }i=j,\\ 0 & \text{ if }i\ne j. \end{cases}$$ Choosing $\alpha_i=y_i \in \F_2$ for each $1\le i \le n$, we obtain $$(x,y) + \sum_{i=1}^n\alpha_ie_i^N =(x,y)+(y,y)=(x+y, 0) \in \left(C_1^N\right)^\bot,$$ since each $e_i^N$ belongs to $\left(C_1^N\right)^\bot$. Now Lemma \ref{C:properties}\eqref{lem3} implies that $x+y \in C_1^\bot$ and consequently $(x+y, 0)$ is in the row space of $\begin{bmatrix} H_Z & 0 \end{bmatrix}$. Since $(y,y)$ belongs to the row space of $\begin{bmatrix} I & I \end{bmatrix}$, we conclude that $(x,y)$ is in the row space of $H_Z^N$. The structure of $H_X^N$ shows that its maximum row weight is $2r_X$. Moreover, the last $n$ rows of $H_Z^N$ all have weight $2$. The maximum weight of the remaining rows of $H_Z^N$ is $r_Z$.
Thus the maximum row weight of $H_Z^N$ is $\max\{r_Z,2\}$. \end{proof} Recall that there exist several families of good quantum LDPC codes in the literature, for instance, see \cite{DinurAsymptoticallyGood,gooLDPChomological,LeverrierAsymptoticallyGood,hsiehGood,physicsGood1,physicsGood2}. An immediate consequence of the above theorem is that applying the construction of Theorem \ref{T:CSST} to an LDPC CSS code preserves its sparsity. We use this property to establish the next result. \begin{corollary}\label{cor:asympldpc} There exists an asymptotically good family of quantum LDPC codes that are also CSS-T. \end{corollary} \begin{proof} The proof follows by applying the quantum CSS-T construction of Theorem \ref{T:CSST} to any of the asymptotically good quantum LDPC families cited above. Theorem \ref{T:Sparsity} ensures that the resulting CSS-T codes are LDPC. \end{proof} \section{Logical operators of different transversal gates}\label{sec:transversalgates} In this section, we first develop the idea used in the previous section to produce asymptotically good CSS codes for any given $Z$ rotation. Secondly, we show that for any $(C_1,C_2)_\mathrm{CSS-T}$ with an even code $C_1$, the logical operator corresponding to the transversal $T$ has multiplicative order dividing four. Therefore, the transversal $T$ gate never realizes a logical $T$ gate on such CSS codes. Finally, we prove that the CSS-T codes from the construction of Theorem \ref{T:CSST} also support the transversal $CCZ$ gate. In this case, the corresponding logical operator is the identity. \subsection{Good CSS codes supporting transversal Z rotations} As we already stated in \cref{C:Asymp1}, one can construct a family of asymptotically good CSS codes that supports the transversal $T$ gate. Obviously, the same family supports the coarser $Z$ rotations, namely the transversal $Z$ and $S$ gates. In this subsection, we discuss the construction of another asymptotically good CSS family that supports the transversal application of finer $Z$ rotation gates. First, recall that the $Z$ rotation gate by the angle $\frac{2\pi}{2^{\ell}}$ for some $\ell \ge 0$ is $$\varGamma^{(\ell)}=\begin{bmatrix}1&0\\0&e^{\frac{2\pi i}{2^\ell}}\end{bmatrix}.$$ Next, we show that applying the construction of Theorem \ref{T:CSST} repeatedly yields codes supporting finer $Z$ rotations transversally. \begin{theorem}\label{thm:lconstruction} Let $\ell > 3$ be an integer. Applying the transversal $\varGamma^{(\ell)}$ gate to the binary quantum CSS code $\left(C_{1}^{\ell}, C_{2}^{\ell}\right)$, obtained by applying Theorem \ref{T:CSST} $\ell$ times to $C_{2} \subseteq C_{1} \subseteq \F_2^n$, results in the logical identity operator. \end{theorem} \begin{proof} Let $u \in \mathbb{F}_{2}^{k}$, where $k$ is the number of logical qubits in the quantum code, and let $H$ be a binary matrix having a basis of $C_3^\ell$ in its rows, where $C_1^{\ell}= C_2^{\ell} \oplus C_3^{\ell}$. Note that the number of physical qubits is $2^{\ell} n$.
Next, we show that the following diagram commutes \begin{equation}\label{E:commuting diagram} \begin{tikzcd}[row sep=1.2cm, column sep=1.2cm] \ket{u}_L \arrow[r,rightarrow, "\en"] \arrow[d,"I"] & \displaystyle\sum_{v \in C_2^{\ell}}\ket{v+uH} \arrow[d, "\varGamma^{(\ell)^{\otimes{2^{\ell}n}}}"] \\ \ket{u}_L \arrow[r,rightarrow,"\en" ]& \displaystyle\sum_{v \in C_2^{\ell}}\varGamma^{(\ell)^{\otimes{2^{\ell}n}}}\ket{v+uH} \end{tikzcd} \end{equation} The action of applying $\varGamma^{(\ell)}$ to all $2^{\ell}n$ physical qubits can be described as: \begin{equation}\label{eq:eq1} \varGamma^{(\ell)^{\otimes{2^{\ell}n}}} \left( \sum_{v \in C_{2}^{\ell}} |v + uH\rangle \right) = \sum_{v \in C_{2}^{\ell}} \varGamma^{(\ell)^{\otimes{2^{\ell}n}}} |v + uH \rangle. \end{equation} Recall that the term $v + uH \in C_{1}^{\ell}$ for each selection of $u$ and $v$. Now, applying the operator $ \varGamma^{(\ell)^{\otimes{2^{\ell}n}}}$ to each term in Equation \eqref{eq:eq1} gives \begin{equation*} \sum_{v \in C_{2}^{\ell}} \varGamma^{(\ell)^{\otimes{2^{\ell}n}}} |v + uH \rangle=\sum_{v \in C_{2}^{\ell}} \left( \mathrm{e}^{\frac{2\pi i}{2^{\ell}}} \right)^{ \wt{(v + uH)}} |v + uH \rangle, \end{equation*} where $\wt{(v + uH)}$ is the weight of the corresponding vector in $C_{1}^{\ell}$. Moreover, since $C_{1}^{\ell}$ is obtained by applying the procedure of Theorem \ref{T:CSST} $\ell$ times, its codewords are of the form $$C_1^\ell=\{(x,x,\ldots,x): x \in C_1\},$$ where the number of repetitions is $2^{\ell}$. Hence the weight of each codeword of $C_1^{\ell}$ is divisible by $2^{\ell}$. Thus \begin{equation*} \sum_{v \in C_{2}^{\ell}} \varGamma^{(\ell)^{\otimes{2^{\ell}n}}} |v + uH \rangle=\sum_{v \in C_{2}^{\ell}} \left( \mathrm{e}^{\frac{2\pi i}{2^{\ell}}} \right)^{\wt{(v + uH)}} |v + uH \rangle = \sum_{v \in C_{2}^{\ell}} |v + uH \rangle = |u\rangle_L. \end{equation*} This shows that the diagram of Equation \eqref{E:commuting diagram} commutes, and completes the proof. \end{proof} Note that applying Theorem \ref{T:CSST} to $C_2 \subseteq C_1$ fewer than $\ell$ times may already yield a code supporting the transversal $\varGamma^{(\ell)}$ gate, possibly with a different logical operator. However, using $\ell$ repetitions keeps the proof simpler and more intuitive. An application of Theorem \ref{thm:lconstruction} is the following interesting result. The proof is similar to that of Corollary \ref{C:Asymp1}, so we omit it here. \begin{corollary} Let $\ell>0$ be an integer. Then there exists a family of asymptotically good CSS codes that supports the transversal $\varGamma^{(\ell)}$ gate. \end{corollary} \subsection{Logical operator of CSS-T codes with even weights} Recall that applying the $T$ gate transversally to all the physical qubits of a CSS-T code realizes a logical gate. In this subsection, we investigate the structure of this logical operator when $C_2 \subseteq C_1$ are both even and form a CSS-T code. We show that the multiplicative order of such a logical operator is always a divisor of four. In particular, such CSS-T codes never realize a logical $T$ gate in this way, as the $T$ gate has multiplicative order eight. \begin{theorem}\label{T:ord} Let $(C_1,C_2)$ be a CSS-T code with length $n$ such that $\wt(x)\equiv 0 \pmod 2$ for each $x\in C_1$. Then, the logical operator corresponding to $T^{\otimes n}$ has order dividing four. \end{theorem} \begin{proof} Let $H$ be a binary matrix having a basis of $C_3$ in its rows, where $C_1= C_2 \oplus C_3$.
The code space of the CSS code $(C_1,C_2)_\mathrm{CSS}$ is preserved by the transversal $T$ gate, implying that the following diagram commutes: \begin{equation} \begin{tikzcd}[row sep=1.2cm, column sep=1.2cm] \ket{u}_L \arrow[r,rightarrow, "\en"] \arrow[d,"OP"] & \displaystyle\sum_{v \in C_2}\ket{v+uH} \arrow[d, "T^{\otimes n}"] \\ \ket{u'}_L \arrow[r,rightarrow,"\en" ]& \displaystyle\sum_{v \in C_2}\ket{v+u'H}=\displaystyle\sum_{v \in C_2}T^{\otimes n}\ket{v+uH}, \end{tikzcd} \end{equation} where $OP$ is the corresponding logical operator. Applying the operator $OP$ four times yields the next diagram \begin{equation}\label{E:tra} \begin{tikzcd}[row sep=1.2cm, column sep=1.2cm] \ket{u}_L \arrow[r,rightarrow, "\en"] \arrow[d,"OP^4"] & \displaystyle\sum_{v \in C_2}\ket{v+uH} \arrow[d, "{T^4}^{\otimes n}"] \\ \ket{u''}_L \arrow[r,rightarrow,"\en" ]& \displaystyle\sum_{v \in C_2}{T^4}^{\otimes n}\ket{v+uH}. \end{tikzcd} \end{equation} Moreover, $${T^4}^{\otimes n}\ket{v+uH}= \mathrm{e}^{\pi i\, \wt(v+uH)}\ket{v+uH}=\ket{v+uH}$$ as $\wt(v+uH)$ is even. So the bottom-left entry of diagram \eqref{E:tra} must coincide with the top-left entry (\ie $\ket{u''}_L=\ket{u}_L$), which implies that $OP^4$ is the logical identity. \end{proof} Recall that $C_2^N \subseteq C_1^N$ is the pair of classical codes used in Theorem \ref{T:CSST}. Their definition shows that $C_1^N$ also has only even weight codewords. Therefore, the logical operator corresponding to the transversal $T$, when applied to the quantum code of Theorem \ref{T:CSST}, has order dividing four. All the CSS-T codes obtained from the extended construction of classical codes, as well as the CSS-T Reed--Muller (RM) codes (but not the punctured RM codes), satisfy the conditions of Theorem \ref{T:ord}. \subsection{Application of the transversal CCZ gate} Let $u, v, w \in \mathbb{F}_2^{n}$. Recall that the transversal $CCZ$ gate acts as follows: \begin{equation*} CCZ^{\otimes n} |u\rangle |v\rangle |w\rangle = (-1)^{\displaystyle\sum_{i = 1}^{n} (u_{i}v_{i}w_{i})}|u\rangle |v\rangle |w\rangle. \end{equation*} In this subsection, we show that the quantum CSS code obtained from Theorem \ref{T:CSST} also supports the transversal $CCZ$ gate. First, recall that for each $C_2 \subseteq C_1$, we have set \[C_i^N \defeq \{(z,z): z \in C_i\}\] for each $i\in \{1,2\}$. We also fix $H$ to be a binary matrix having a basis of $C_3^N$ in its rows, where $C_1^{N}= C_2^{N} \oplus C_3^{N}$. \begin{theorem} Let $C_2 \subseteq C_1 \subseteq \F_2^n$ be two binary codes. The quantum CSS-T code $\left(C_1^N,C_2^N\right)_\mathrm{CSS-T}$ has the following property: applying the transversal $CCZ$ gate to all $2n$ physical qubits realizes the logical identity gate. \end{theorem} \begin{proof} Let $u, x, w \in \mathbb{F}_2^k$, where $k$ is the dimension of the CSS code $\left(C_1^N,C_2^N\right)$. Applying the $CCZ$ gate to all the physical qubits sends the logical state $\ket{u}_L\ket{x}_L\ket{w}_L$ to the state \begin{equation}\label{E:CCZ equi} CCZ^{\otimes 2n} \left( \sum_{v \in C_2^N} |v + uH \rangle \right) \left( \sum_{v' \in C_2^N} |v' + xH \rangle \right) \left( \sum_{v'' \in C_2^N} |v'' + wH \rangle \right). \end{equation} Expanding this gives: \begin{equation*} CCZ^{\otimes 2n}\left(\sum_{v, v', v'' \in C_2^N} |v + uH \rangle |v' + xH \rangle |v'' + wH \rangle \right).
\end{equation*} Now, applying $CCZ^{\otimes 2n}$ to each term in the summation gives: \begin{equation*} \begin{split} &\sum_{v, v', v'' \in C_2^N} CCZ^{\otimes 2n} |v + uH \rangle |v' + xH \rangle |v'' + wH \rangle \\&= \sum_{v, v', v'' \in C_2^N} CCZ^{\otimes 2n} |a \rangle |b \rangle |c \rangle = \sum_{v, v', v'' \in C_2^N} (-1)^{\displaystyle\sum_{i=1}^{2n} a_i b_i c_i} |a\rangle |b\rangle |c\rangle, \end{split} \end{equation*} where $a = v + uH=(\alpha,\alpha)$, $b = v' + xH=(\beta,\beta)$, and $c = v'' + wH=(\gamma,\gamma)$ are codewords of $C_1^N$ for some $\alpha,\beta,\gamma \in \F_2^n$. Thus $$\sum_{i=1}^{2n} a_i b_i c_i=2\sum_{i=1}^{n}\alpha_{i}\beta_{i}\gamma_{i} \equiv 0 \pmod 2.$$ This implies that $(-1)^{\displaystyle\sum_{i=1}^{2n} a_i b_i c_i} = 1$. Thus, the result of (\ref{E:CCZ equi}) is \begin{equation*} \left( \sum_{v \in C_2^N} |v + uH \rangle \right) \left( \sum_{v' \in C_2^N} |v' + xH \rangle \right) \left( \sum_{v'' \in C_2^N} |v'' + wH \rangle \right). \end{equation*} This shows that $CCZ^{\otimes 2n}$ acts as the identity on the code space, and hence realizes the logical identity. This completes the proof. \end{proof} The above theorem can be extended to the case when there are more than two controls. However, we omit that case here. \section{Conclusion}\label{sec:conclusion} In this paper, we introduced a construction for generating binary CSS-T codes from any given binary CSS code. This construction enabled us to prove the existence of asymptotically good binary CSS-T codes, thereby resolving an open problem proposed in \cite{RCNP20}. Moreover, we demonstrated that our construction preserves the LDPC property of the ingredient CSS code, leading to the existence of asymptotically good binary quantum LDPC codes that are also CSS-T. Finally, we extended our construction to support finer transversal $Z$ rotations and the transversal $CCZ$ gate, and analyzed their corresponding logical operators. Our construction, as detailed in Section \ref{sec:CCSTconstruction}, relies on the use of a map $\phi$ satisfying the conditions of \cref{prop:len-dim}. In this paper, we chose $\phi$ to be the identity map, since other permutation maps induce CSS-T codes with the same parameters. An interesting remaining question is to find alternative maps $\phi$ satisfying the conditions of \cref{prop:len-dim} and leading to CSS-T codes with better parameters or new applications (see also \cref{rmk:phi}). Finally, another interesting follow-up would be to use the proposed CSS-T construction to design magic state distillation protocols that may improve the overhead of existing protocols. Since most of the magic state distillation protocols are related to triorthogonal codes, it would be relevant to see how more general codes such as CSS-T codes may be used for such a task. While the transversal application of some of the non-Clifford gates considered in this paper results in the logical identity, such transformations still have the potential to be used for magic state distillation \cite{litinski1,litinski2}. Furthermore, it would be relevant to study how the proposed codes can yield triorthogonal codes that have a more direct relationship with current magic state distillation protocols. Since our construction yields asymptotically good codes, successfully using those for magic state distillation tasks could result in making the magic state yield parameter arbitrarily small, which is still an open question for the qubit case.
\section*{Acknowledgment} This work was supported by the Spanish Ministry of Economy and Competitiveness through the MADDIE project (Grant No. PID2022-137099NBC44), by the Spanish Ministry of Science and Innovation through the project ``Few-qubit quantum hardware, algorithms and codes, on photonic and solid-state systems'' (PLEC2021-008251), by the Ministry of Economic Affairs and Digital Transformation of the Spanish Government through the QUANTUM ENIA project call - QUANTUM SPAIN project, and by the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2026 Agenda, and by the grant ANR-21-CE39-0009-BARRACUDA from the French National Research Agency. J.E.M. is funded by the Spanish Ministry of Science, Innovation and Universities through a Jose Castillejo mobility grant for his stay at the Cavendish Laboratory of the University of Cambridge. \bibliographystyle{abbrv} \bibliography{biblio_CSST} \end{document}
2412.08621v2
http://arxiv.org/abs/2412.08621v2
The separating Noether number of small groups
\documentclass[12pt, oneside]{amsart} \allowdisplaybreaks \usepackage{hyperref} \usepackage{geometry,comment} \geometry{letterpaper} \usepackage{graphicx} \usepackage{amssymb} \setlength{\oddsidemargin}{-.2cm} \setlength{\evensidemargin}{-.6cm} \setlength{\topmargin}{-1cm} \setlength{\textheight}{23cm} \setlength{\textwidth}{460pt} \renewcommand{\baselinestretch}{1.1} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{lemma}[theorem]{\bf Lemma} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{conjecture}[theorem]{\bf Conjecture} \newtheorem{example}[theorem]{\bf Example} \newtheorem{examples}[theorem]{\bf Examples} \newtheorem{definition}[theorem]{\bf Definition} \newtheorem{remark}[theorem]{\bf Remark} \newtheorem{remarks}[theorem]{\bf Remarks} \newtheorem{problem}[theorem]{\bf Problem} \newtheorem{question}[theorem]{\bf Question} \newenvironment{proofof}[1]{\noindent{\it Proof of #1.}}{\hfill$\square$\\\mbox{}} \def\field{\mathbb{K}} \def\sepbeta{\beta_{\mathrm{sep}}} \title{The separating Noether number of small groups} \author[M. Domokos]{M\'aty\'as Domokos} \address{HUN-REN Alfr\'ed R\'enyi Institute of Mathematics, Re\'altanoda utca 13-15, 1053 Budapest, Hungary, ORCID iD: https://orcid.org/0000-0002-0189-8831} \email{[email protected]} \author[B. Schefler]{Barna Schefler} \address{E\"otv\"os Lor\'and University, P\'azm\'any P\'eter s\'et\'any 1/C, 1117 Budapest, Hungary} \email{[email protected]} \subjclass[2020]{Primary 13A50; Secondary 13P15, 20C15} \keywords{polynomial invariants, separating sets, degree bounds, finite groups} \begin{document} \maketitle \begin{abstract} The separating Noether number of a finite group is the minimal positive integer $d$ such that for any finite dimensional complex linear representation of the group, any two distinct orbits can be distinguished by a polynomial invariant of degree at most $d$. In the present paper its exact value is determined for all groups of order less than $32$, for all finite groups with a cyclic subgroup of index at most $2$, and for the direct product of a dihedral group with the $2$-element group. The results show in particular that unlike the ordinary Noether number, the separating Noether number of a non-abelian finite group may well be equal to the separating Noether number of a proper direct factor of the group. Most of the results are proved for the case of a general (possibly finite) base field containing an element whose multiplicative order equals the size of the group. \end{abstract} \section{Introduction} Let $V$ be an $n$-dimensional vector space over a field $\field$, and write $\field[V]$ for the coordinate ring of $V$ identified with the $n$-variable polynomial algebra $\field[x_1,\dots,x_n]$, where the variables $x_1,\dots,x_n$ represent the coordinate functions on $V$ with respect to a chosen basis. Let $G$ be a finite group, and $\rho:G\to \mathrm{GL}(V)$ a representation of $G$ on $V$. We shall say that \emph{$(V,\rho)$ is a $G$-module} in this case; mostly we shall suppress $\rho$ from the notation and we will say that \emph{$V$ is a $G$-module}. We shall write $V_\field$ instead of $V$ when we want to make explicit the actual base field. The group $G$ acts on $\field[V]$ via $\field$-algebra automorphisms, such that the variables $x_1,\dots,x_n$ span a $G$-submodule isomorphic to $V^*$ (the $G$-module dual to $V$), and $\field[V]$ is the symmetric tensor algebra of $V^*$. Consider \[\field[V]^G:=\{f\in \field[V]\mid \forall g\in G:\quad g\cdot f=f\},\] the \emph{subalgebra of $G$-invariants in $\field[V]$}.
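As a toy illustration of these notions (a minimal example for orientation only): for $G=\mathrm{C}_2$ acting on $V=\field^2$ by swapping the coordinates, the invariant algebra is $\field[V]^G=\field[x_1+x_2,\,x_1x_2]$, by the fundamental theorem of symmetric polynomials. The following short Python sketch verifies the invariance symbolically.
\begin{verbatim}
# Toy illustration: G = C_2 acts on K^2 by swapping the coordinates; the
# elementary symmetric polynomials x1 + x2 and x1*x2 are G-invariant (and
# generate the invariant algebra).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
swap = {x1: x2, x2: x1}          # action of the non-trivial element of C_2
e1, e2 = x1 + x2, x1 * x2

for f in (e1, e2):
    assert sp.expand(f.subs(swap, simultaneous=True) - f) == 0   # f is invariant

assert sp.expand(x1.subs(swap, simultaneous=True) - x1) != 0     # x1 alone is not
print("x1 + x2 and x1*x2 are C_2-invariant")
\end{verbatim}
In this example a generator of degree $2=|\mathrm{C}_2|$ is needed, in line with the classical degree bound $\beta^{\mathbb{C}}(G)\le |G|$ recalled below.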
For $v\in V$ we shall denote by $G\cdot v$ the $G$-orbit of $v$, and by $\mathrm{Stab}_G(v):=\{g\in G\mid gv=v\}$ the \emph{stabilizer subgroup in $G$ of $v$}. It is well known that for $v,w\in V$ with $G\cdot v\neq G\cdot w$, there exists an $f\in \field[V]^G$ with $f(v)\neq f(w)$ (see for example \cite[Theorem 3.12.1]{derksen-kemper}, \cite[Theorem 16]{kemper} or \cite[Lemma 3.1]{draisma-kemper-wehlau}). Therefore the general notion of a ``separating set of invariants'' (see \cite[Definition 2.4.1]{derksen-kemper}) reduces to the following in the case of finite groups: \begin{definition}\label{def:separating set} A subset $S\subseteq \field[V]^G$ is called a \emph{separating set} if for any $v,w\in V$, $f(v)=f(w)$ for all $f\in S$ implies that $v$ and $w$ have the same $G$-orbit. \end{definition} In particular, any generating set of the algebra $\field[V]^G$ is a separating set. In the structural study of algebras of polynomial invariants of finite groups the focus on separating sets (instead of the more special generating sets) turned out to be useful in finding strengthened versions of known theorems in the invariant theory of finite groups, see for example \cite{dufresne}, \cite{dufresne-elmer-kohls}. Considering explicit computations of invariants, the class of modular representations for which generators of the corresponding algebra of invariants is known is very limited. On the other hand, in several such cases explicit finite separating sets are known, see for example \cite{sezer}, \cite{kohls-sezer}. The minimal possible size of a separating set is investigated in \cite{dufresne-jeffries}. In the non-modular case, an explicit separating set of multisymmetric polynomials is given in \cite{lopatin-reimers}. The systematic study of separating invariants over finite fields began recently in \cite{kemper-lopatin-reimers}, \cite{lopatin-muniz}. \subsection{Degree bounds} Consider the standard grading on $\field[V]$. We shall denote by $\deg(f)$ the degree of a homogeneous element $f\in\field[V]$. The subalgebra $\field[V]^G$ is spanned by homogeneous elements, so it inherits the grading from $\field[V]=\bigoplus_{d\ge 0}\field[V]_d$. We shall write $\field[V]^G_{\le d}$ for the sum of the homogeneous components of degree at most $d$. Degree bounds for generators of algebras of invariants have always been in the focus of invariant theory (see e.g. \cite{wehlau}). We set \[\beta(G,V):=\min\{d\mid \field[V]^G_{\le d}\text{ generates the }\field\text{-algebra }\field[V]^G\},\] \[\beta^\field(G):=\sup\{\beta(G,V)\mid V\text{ is a finite dimensional }G\text{-module over }\field\}.\] The Noether number $\beta^\field(G)$ is finite in the non-modular case (that is, when the order of $G$ is invertible in $\field$) because of the classical inequality $\beta^{\mathbb{C}}(G)\le |G|$ due to E. Noether \cite{noether} (the inequality was extended to the non-modular case in \cite{fleischmann}, \cite{fogarty}). Its systematic study was initiated in \cite{schmid}. The results of \cite{schmid} were improved first in \cite{domokos-hegedus}, \cite{sezer:1} and later in \cite{CzD:1}, \cite{cziszter-domokos:indextwo}. In particular, \cite[Table 1]{cziszter-domokos-szollosi} gives the exact value of the Noether number for all non-abelian groups of order less than $32$, and \cite{cziszter-domokos:indextwo} gives the Noether number for all groups containing a cyclic subgroup of index two. Following \cite[Definition 2]{kohls-kraft} (see also \cite{kemper}) we set \[\sepbeta(G,V):=\min\{d\mid \field[V]^G_{\le d}\text{ is a separating set}\},\] and \[\sepbeta^\field(G):=\sup\{\sepbeta(G,V)\mid V\text{ is a finite dimensional }G\text{-module over }\field\}.\] We shall refer to the above quantities as the \emph{separating Noether number}. As different orbits of a finite group can be separated by polynomial invariants, we have the obvious inequality \begin{equation*} \sepbeta(G,V)\le \beta(G,V) \qquad\text{ for any }V, \end{equation*} and hence \begin{equation}\label{eq:sepbeta<beta} \sepbeta^\field(G)\le \beta^\field(G).
\end{equation} For modular $\field$ (i.e. when $\mathrm{char}(\field)$ divides $|G|$) we have $\beta^\field(G)=\infty$ by \cite{richman}, whereas $\sepbeta^\field(G)\le |G|$ holds also in the modular case (see for example the proof of \cite[Theorem 3.12.1]{derksen-kemper}, \cite[Theorem 16]{kemper} or \cite[Lemma 3.1]{draisma-kemper-wehlau}). However, in the present paper our focus is on the non-modular setup, when the characteristic of the base field does not divide the order of $G$. In this context an interesting and generally open question is the following: \begin{question} To what extent is the change from ``generating sets'' to ``separating sets'' reflected in the corresponding degree bounds? \end{question} Moreover, interest in these degree bounds arose also in recent applications for data science and machine learning, see \cite{bandeire-blumsmith-kileel-niles-weed-perry-wein}, \cite{cahill-contreras-hip}, \cite{cahill-iverson-mixon}, \cite{cahill-iverson-mixon-packer}. We mention also that problems in signal processing motivated the study of a variant of the Noether number for rational invariants in \cite{blumsmith-garcia-hidalgo-rodriguez}. The exact value of $\sepbeta^\field(G)$ for certain finite abelian groups $G$ and an algebraically closed base field of non-modular characteristic was determined recently in \cite{schefler_c_n^r}, \cite{schefler_rank2}. As far as we know, the only non-abelian finite groups whose separating Noether number in the non-modular case is determined in the literature are the non-abelian groups of order $3p$ for some prime $p$ congruent to $1$ modulo $3$ (see \cite{cziszter:C7rtimesC3}). For some results on the modular case see \cite[Theorem C]{kohls-kraft}, \cite{elmer-kohls}. The aim of the present work is to produce a counterpart for the separating Noether number of the results on the Noether number from \cite{cziszter-domokos:indextwo} and \cite{cziszter-domokos-szollosi}. {\bf{Running assumption.}} Throughout this paper $G$ will stand for a finite group, and $\field$ for a field. Unless explicitly stated otherwise, it is assumed that $\mathrm{char}(\field)$ does not divide $|G|$. \section{Main results}\label{sec:main results} Our first result builds on \cite{cziszter-domokos:indextwo} (see the equality \eqref{eq:beta(index two cyclic)}) and shows that for a group with a cyclic subgroup of index $2$ (their classification will be recalled in Section~\ref{sec:indextwo}) the inequality \eqref{eq:sepbeta<beta} holds with equality, namely: \begin{theorem}~\label{thm:sepbeta index two} Let $G$ be a non-cyclic finite group with a cyclic subgroup of index $2$, and assume that $\field$ contains an element of multiplicative order $|G|$. Then we have the equality \[\sepbeta^\field(G)=\frac{1}{2} |G| + \begin{cases} 2 & \text{ if } G=\mathrm{Dic}_{4m}, \text{ $m>1$};\\ 1 & \text{ otherwise. } \end{cases}\] \end{theorem} Using \cite[Corollary 5.5]{cziszter-domokos:indextwo} we are able to determine the separating Noether number for another infinite sequence of non-abelian finite groups: \begin{theorem}\label{thm:sepbeta(D2nxC2)} For $n\ge 2$ even consider the direct product $G:=\mathrm{D}_{2n}\times \mathrm{C}_2$ of the dihedral group of order $2n$ and the cyclic group $\mathrm{C}_2$ of order $2$. Assume that $\field$ has an element of multiplicative order $n$. Then we have \[\sepbeta^\field(\mathrm{D}_{2n}\times \mathrm{C}_2)=n+2.\] \end{theorem} There are $16$ non-abelian groups of order less than $32$ not covered by Theorem~\ref{thm:sepbeta index two} or Theorem~\ref{thm:sepbeta(D2nxC2)}.
We discuss them in the following theorem: \begin{theorem}\label{thm:sepbeta(<32)} Let $G$ be a non-abelian group of order less than $32$ that does not contain a cyclic subgroup of index at most $2$, and $G$ is not isomorphic to $\mathrm{D}_{2n}\times \mathrm{C}_2$. Assume that $\field$ contains an element of multiplicative order $|G|$. Then $\sepbeta^\field(G)$ is given in the following table, in some cases under the additional assumption that $\mathrm{char}(\field)=0$ (this is indicated in the corresponding line of the table): \[ \begin{array}{c|c|c|c|c} \text{ID} & G & \beta^\field(G) & \sepbeta^\field(G) &\text{reference for }\sepbeta^\field(G) \\ \hline (12,3)&\mathrm{A}_4 & 6 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(A4)}} \\ (16,3) & (\mathrm{C}_2\times \mathrm{C}_2) \rtimes \mathrm{C}_4 = (\mathrm{C}_4 \times \mathrm{C}_2) \rtimes_{\psi} \mathrm{C}_2 & 6 & 6 & \mathrm{Theorem~\ref{thm:sepbeta((C2xC2)rtimesC4)}}\\ (16,4) & \mathrm{C}_4 \rtimes \mathrm{C}_4 & 7 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(C4rtimesC4)}}\\ (16,12) & \mathrm{Dic}_8 \times \mathrm{C}_2 & 7 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(Dic8xC2)}}\\ (16,13)& (Pauli) \; = \; (\mathrm{C}_4 \times \mathrm{C}_2) \rtimes_{\phi} \mathrm{C}_2 & 7 & 7 & \mathrm{Theorem~\ref{thm:sepbeta(Pauli)}}\\ (18,3) &\mathrm{S}_3 \times \mathrm{C}_3 & 8 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(S3xC3)}}\\ (18,4) & (\mathrm{C}_3\times \mathrm{C}_3) \rtimes_{-1} \mathrm{C}_2 & 6 & 6 & \mathrm{Theorem~\ref{thm:sepbeta((C3xC3)rtimesC2)}} \\ (20,3)& \mathrm{C}_5 \rtimes \mathrm{C}_4, \ \mathrm{char}(\field)=0 & 8 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(C5rtimesC4)}}\\ (21,1)& \mathrm{C}_7 \rtimes \mathrm{C}_3 & 9 & 8 & \text{\cite[Theorem 4.1]{cziszter:C7rtimesC3}} \\ (24,3)& \mathrm{SL}_2(\mathbb{F}_3) = \tilde{\mathrm{A}}_4 & 12 & 12 & \mathrm{Theorem~\ref{thm:sepbeta(A4tilde)}} \\ (24,7)& \mathrm{Dic}_{12}\times \mathrm{C}_2, \ \mathrm{char}(\field)=0 & 9 & 8 & \mathrm{Theorem~\ref{thm:sepbeta(Dic12xC2)}}\\ (24,8)& \mathrm{C}_3 \rtimes \mathrm{D}_8 = (\mathrm{C}_6 \times \mathrm{C}_2) \rtimes \mathrm{C}_2 & 9 & 9 & \mathrm{Theorem~\ref{thm:sepbeta((C6xC2)rtimesC2)}}\\ (24,12)& \mathrm{S}_4 & 9 &9 & \mathrm{Theorem~\ref{thm:sepbeta(S4)}}\\ (24,13) & \mathrm{A}_4 \times \mathrm{C}_2 & 8 & 6 & \mathrm{Theorem~\ref{thm:sepbeta(A4xC2)}}\\ (27,3)& \mathrm{H}_{27}=\mathrm{UT}_3(\mathbb{F}_3) & 9 & 9 & \mathrm{Theorem~\ref{thm:sepbeta(H27)}} \\ (27,4)& \mathrm{M}_{27}=\mathrm{C}_9\rtimes \mathrm{C}_3, \ \mathrm{char}(\field)=0 & 11 & 10 & \mathrm{Theorem~\ref{thm:sepbeta(M27)}}\\ \hline \end{array} \] \end{theorem} The first column in the above table contains the identifier of $G$ in the Small Groups Library of GAP (see \cite{GAP4}); that is, a pair of the form \texttt{[order, i]}, where the GAP command $\mathsf{SmallGroup(id)}$ returns the $\texttt{i}$-th group of order $\texttt{order}$ in the catalogue. The second column gives $G$ in a standard notation or indicates its structure as a direct product or semidirect product (the symbol $\rtimes$ always stands for a semidirect product that is not a direct product, and $\psi$ and $\phi$ in the rows $(16,3)$ and $(16,13)$ stand for two different involutive automorphisms of $\mathrm{C}_4\times \mathrm{C}_2$). The presentation of $G$ in terms of generators and relations can be found at the beginning of the section where $\sepbeta^\field(G)$ is determined. The third column of the table contains the Noether number $\beta^\field(G)$; these values are taken from \cite{cziszter-domokos-szollosi}. The fourth column contains the separating Noether number $\sepbeta^\field(G)$. The values of $\sepbeta^\field(G)$ given in the table are in fact valid under weaker assumptions on $\field$ than the restrictions stated in Theorem~\ref{thm:sepbeta(<32)}. The precise conditions on $\field$ needed for the validity of our arguments are contained in the statement quoted in the last column of the table. In a few instances we used computer calculations to determine generators of concrete invariant algebras.
This reduces the validity of some of our results to the case of a base field of characteristic zero. The use of the computer could have been avoided at the price of a significant increase of the length of the paper. For the sake of completeness we include a similar table with the values of $\beta^\field(G)$ and $\sepbeta^\field(G)$ for those abelian groups $G$ of order less than $32$ that do not have a cyclic subgroup of index at most $2$, under the assumption that $\field$ contains an element whose multiplicative order equals the exponent of $G$. These results follow from \cite{domokos:abelian}, \cite{schefler_c_n^r}, \cite{schefler_rank2}, and the explanations in Section~\ref{sec:dependence on base field}. \[ \begin{array}{c|c|c|c|c} \text{ID} & G & \beta^\field(G) & \sepbeta^\field(G) &\text{reference for }\sepbeta^\field(G) \\ \hline (8,5) & \mathrm{C}_2\times \mathrm{C}_2\times \mathrm{C}_2 & 4 & 4 & \text{\cite[Theorem 3.10]{domokos:abelian}} \\ (9,2) &\mathrm{C}_3\times \mathrm{C}_3 & 5 & 4 & \text{\cite[Theorem 1.2]{schefler_c_n^r}} \\ (16,2) & \mathrm{C}_4\times \mathrm{C}_4 & 7 & 6 & \text{\cite[Theorem 1.2]{schefler_c_n^r}} \\ (16,10) & \mathrm{C}_2\times \mathrm{C}_2\times \mathrm{C}_4 & 6 & 6 & \text{\cite[Theorem 3.10]{domokos:abelian}} \\ (16,14) & \mathrm{C}_2\times \mathrm{C}_2\times \mathrm{C}_2\times \mathrm{C}_2 & 5 & 5 & \text{\cite[Theorem 3.10]{domokos:abelian}} \\ (18,5) & \mathrm{C}_3\times \mathrm{C}_6 & 8 & 7 & \text{\cite[Theorem 1.1]{schefler_rank2}} \\ (24,15) & \mathrm{C}_2\times \mathrm{C}_2\times \mathrm{C}_6 & 8 & 8 & \text{\cite[Theorem 3.10]{domokos:abelian}} \\ (25,2) & \mathrm{C}_5\times \mathrm{C}_5 & 9 & 6 & \text{\cite[Theorem 1.2]{schefler_c_n^r}} \\ (27,2) & \mathrm{C}_3\times \mathrm{C}_9 & 11 & 10 & \text{\cite[Theorem 1.1]{schefler_rank2}} \\ (27,5) & \mathrm{C}_3\times \mathrm{C}_3\times \mathrm{C}_3 & 7 & 6 & \text{\cite[Theorem 1.2]{schefler_c_n^r}}\\ \hline \end{array} \] \subsection{Failure of strict monotonicity} For a normal subgroup $N$ of $G$, any representation of $G/N$ can be viewed as a representation of $G$ whose kernel contains $N$. Therefore we have the obvious inequalities \begin{equation}\label{eq:sepbeta(G/N)} \beta^\field(G)\ge \beta^\field(G/N) \quad\text{ and }\quad \sepbeta^\field(G)\ge \sepbeta^\field(G/N). \end{equation} Moreover, both the Noether number and the separating Noether number are monotone for subgroups as well; that is, for any subgroup $H$ of a finite group $G$ we have \begin{equation}\label{eq:beta(H)} \beta^\field(G)\ge \beta^\field(H) \end{equation} (see \cite{schmid}) and \begin{equation}\label{eq:sepbeta(H)} \sepbeta^\field(G)\ge \sepbeta^\field(H) \end{equation} (see \cite[Theorem B]{kohls-kraft}). The inequality \eqref{eq:beta(H)} was sharpened in \cite[Theorem 1.2]{cziszter-domokos:lower bound}, where the following strict monotonicity of the Noether number was proved: \begin{align}\label{eq:strict monotonicity} \beta^\field(G)&>\beta^\field(H) \text{ for any subgroup }H\neq G; \\ \notag \beta^\field(G)&>\beta^\field(G/N) \text{ for any normal subgroup }N\neq \{1_G\} \text{ of }G. \end{align} The results of Theorem~\ref{thm:sepbeta index two} and Theorem~\ref{thm:sepbeta(<32)} show that the analogues of \eqref{eq:strict monotonicity} (i.e. strict monotonicity) do not hold for the separating Noether number. Indeed, for $\field$ as in the cited theorems, we have \[\sepbeta^\field(\mathrm{Dic}_8)=6=\sepbeta^\field(\mathrm{Dic}_8\times \mathrm{C}_2),\] and $\mathrm{Dic}_8\times \mathrm{C}_2$ has a proper direct factor isomorphic to $\mathrm{Dic}_8$. Another similar example is \[\sepbeta^\field(\mathrm{A}_4)=6=\sepbeta^\field(\mathrm{A}_4\times\mathrm{C}_2).\] Furthermore, \[\sepbeta^\field(\mathrm{S}_3\times \mathrm{C}_3)=6=\sepbeta^\field(\mathrm{C}_6),\] and the group $\mathrm{S}_3\times \mathrm{C}_3$ has a subgroup isomorphic to $\mathrm{C}_6$. Similar examples exist among the abelian groups.
For example, we have \[\sepbeta^\field(\mathrm{C}_6\oplus \mathrm{C}_6\oplus \mathrm{C}_6)=12=\sepbeta^\field(\mathrm{C}_6\oplus \mathrm{C}_6\oplus \mathrm{C}_3)\] by \cite[Theorem 6.1]{schefler_c_n^r}. \subsection{Comments on the methods used} We make essential use of the known results on the exact values of the Noether numbers (as summarized in \cite{cziszter-domokos-szollosi}) of the groups considered here. In particular, in each case when the Noether number and the separating Noether number happen to coincide, we just need to come up with a representation and two points with distinct orbits that can not be separated by invariants of strictly smaller degree than the Noether number. In all such cases it turns out that rather small dimensional representations suffice to provide such examples. On the other hand, settling the cases when the separating Noether number is strictly smaller than the Noether number requires elaborate work: we need a thorough analysis of concrete representations, discussions of stabilizers, group automorphisms, construction of invariants and relative invariants, as well as several ad hoc ideas that help to understand the set of solutions of systems of polynomial equations. What makes these computations feasible is that by general principles (see the notion of Helly dimension and Lemma~\ref{lemma:helly} below) it is sufficient to deal with representations having a small number of irreducible summands. \subsection{Organization of the paper} We begin in Sections~\ref{sec:dependence on base field} and \ref{sec:dependence on field of beta} by discussing the dependence of the separating Noether number on the base field $\field$, and by comparing it with the dependence on the base field of the ordinary Noether number $\beta^\field(G)$. In the rest of Section~\ref{sec:prel} we introduce notation and collect general facts used throughout the paper. We turn in Section~\ref{sec:indextwo} to the proof of Theorem~\ref{thm:sepbeta index two} and Theorem~\ref{thm:sepbeta(D2nxC2)}. In Section~\ref{sec:easy groups} we deal with the groups of order less than $32$ for which the separating Noether number equals the Noether number. In Section~\ref{sec:HxC2} we cover three groups of the form $H\times \mathrm{C}_2$ with $\sepbeta^\field(H\times \mathrm{C}_2)=\sepbeta^\field(H)$. Section~\ref{sec:S3xC3} is devoted to the study of the separating Noether number of $\mathrm{S}_3\times \mathrm{C}_3$. In the final Section~\ref{sec:CnrtimesCk} the separating Noether number is computed for four groups of the form $G\cong \mathrm{C}_n\rtimes \mathrm{C}_k$ with $\sepbeta^\field(G)$ strictly smaller than $\beta^\field(G)$. \section{Preliminaries} \label{sec:prel} \subsection{Dependence on the base field of $\sepbeta^\field(G)$} \label{sec:dependence on base field} \begin{lemma}\label{lemma:spanning invariants} Let $L$ be an extension field of $\field$, and let $V=V_\field$ be a $G$-module. Consider the $G$-module $V_L:=L\otimes_\field V$ over $L$. \begin{itemize} \item[(i)] For any non-negative integer $d$, the $L$-vector space $L[V_L]^G_d$ is spanned by its subset $K[V_K]^G_d$. \item[(ii)] We have the inequality $\sepbeta(G,V_\field)\le \sepbeta(G,V_L)$. \item[(iii)] We have the inequality $\sepbeta^\field(G)\le \sepbeta^L(G)$. \end{itemize} \end{lemma} \begin{proof} (i) It is well known, and follows from the fact that the space of solutions over $L$ of a system of linear equations with coefficients in $\field$ is spanned by the solutions over $\field$. (ii) Take $v,v'\in V_\field\subseteq V_L$ with $G\cdot v\neq G\cdot v'$. Then there exists an $f\in L[V_L]^G_d$ with $d\le \sepbeta(G,V_L)$ such that $f(v)\neq f(v')$. It follows by (i) that there exists an $h\in K[V]^G_d$ with $h(v)\neq h(v')$. This clearly shows the desired inequality. (iii) is an immediate consequence of (ii). \end{proof} \begin{remark}\label{remark:strict ineq} The inequalities in Lemma~\ref{lemma:spanning invariants} (ii), (iii) may be strict: (i) First we mention an example in the modular case.
Consider the $\mathrm{S}_n$-module $\mathbb{F}_q^n$ endowed with the natural permutation representation, where $q$ is a prime power and $\mathbb{F}_q$ is the finite field with $q$ elements. Separating $\mathrm{S}_n$-invariants on $\mathbb{F}_q^n$ were studied (much before the notion of 'separating invariants' was formally introduced) in \cite{fine}, \cite{aberth}, and recently in \cite{kemper-lopatin-reimers}, \cite{domokos-miklosi}. In particular, we have $\sepbeta(\mathrm{S}_3,\mathbb{F}_2^3)=2$ by \cite[Lemma 4.3]{kemper-lopatin-reimers} and $\sepbeta(\mathrm{S}_3,\mathbb{F}_4^3)=3$ by \cite[Corollary 4.9]{domokos-miklosi}. eld)$ does not divide $|G|$ will be provided in the present paper in Proposition~\ref{prop:H27}. In the subsequent paper \cite{domokos-schefler} we shall give examples where eld)$ does not divide $|G|$. \end{remark} Lemma~\ref{lemma:spanning invariants} and Remark~\ref{remark:strict ineq} raise the following question (not answered in the present paper): \begin{question} Can the inequality in Lemma~\ref{lemma:spanning invariants} (ii) be strict eld$ infinite? \end{question} \begin{proposition}\label{prop:abelian-field-dependence} Let $G$ be a finite abelian group. Then we have \begin{align}\label{eq:sepbetaK<sepbetaC} eld(G)\le \sepbeta^{\mathbb{C}}(G), \text{ with equality when } eld \text{ contains an element } \\ \notag \text{whose multiplicative order equals the exponent of }G. \end{align} \end{proposition} \begin{proof} In \cite[Corollary 2.6]{domokos:abelian}, a characterization purely in terms of eld(G)$, valid when eld$ contains an element whose multiplicative order equals the exponent of $G$ (in fact \cite{domokos:abelian} assumes that the base field is algebraically closed, but the proof of the quoted result eld$). eld(G)=\sepbeta^{\mathbb{C}}(G)$ when eld$ contains an element of multiplicative order $|G|$. eld}}(G)=\sepbeta^\mathbb{C}(G)$, eld}$ is the algebraic closure eld$ (recall the running assumption that $|G|$ eld$). Note finally that by Lemma~\ref{lemma:spanning invariants} (iii) we have eld}}(G)$. \end{proof} \subsection{Comparison with the dependence eld(G)$.} \label{sec:dependence on field of beta} eld)$, so eld)=p>0; eld)=0. \end{cases}\] Moreover, by \cite[Theorem 4.7]{knop}, \begin{equation}\label{eq:knop} \beta^{\mathbb{F}_p}(G)\ge \beta^\mathbb{C}(G) \text{ for all }p\nmid |G|, \text{ with equality for all but finitely many }p.\end{equation} It is interesting to compare this with \eqref{eq:sepbetaK<sepbetaC}, where the inequality for the corresponding separating Noether numbers is proved in the reverse direction. We mention that no example is known for $p$ and $G$ (where $p\nmid |G|$) for which \eqref{eq:knop} is a strict inequality. In fact \eqref{eq:knop} holds with equality for all $p\nmid |G|$ when $G$ is abelian, or when $G$ is one of the groups for which the exact value of their Noether number is given in \cite{cziszter-domokos-szollosi}. For sake of completeness of the picture, we repeat that $\beta^{\mathbb{F}_p}(G)=\infty$ if $p$ divides $|G|$ by \cite{richman}, eld$ (including the case when eld)$ divides $|G|$). \subsection{Direct sum decompositions and multigradings} \label{subsec:direct sum decomp} Assume that we have a $G$-module direct sum decomposition \begin{equation}\label{eq:decomp} V=V_1\oplus\cdots\oplus V_k. \end{equation} View the dual space $V_i^*$ as the subspace of $V^*$ consisting of the linear forms vanishing on $V_j$ for $j\neq i$. Then $V^*=\bigoplus_{i=1}^kV_i^*$. 
Choose a basis $x_i^{(j)}$ ($i=1,\dots,\dim(V_j)$) in $V_i^*$, then eld[V]$ is $\{x_i^{(j)}\mid \ j=1,\dots,k;\ i=1,\dots,\dim(V_j)\}$. eld[V]$ is spanned by the monomials having degree $\alpha_j$ in the variables belonging to $V_j^*$ for each $j=1,\dots,k$. eld[V]$ preserves this multigrading, and thus eld[V]^G$ is spanned by multihomogeneous elements. In the sequel whenever we mention a `multigrading' not explicitly defined, this should refer to the multigrading that comes from a direct sum decomposition as in \eqref{eq:decomp}, which should be clear from the context. A typical way by which we shall exploit the multigrading is captured by the following obvious statement: \begin{lemma}\label{lemma:lower-bound} eld(G)\ge d$ if and only if there exists a $G$-module $V=V_1\oplus\cdots\oplus V_k$, eld[V]^G$ with $h(v)\neq h(v')$, such that for any multihomogeneous eld[V]^G$ with $\deg(f)<d$ we have $f(v)=f(v')$. \end{lemma} eld[V]$ is induced by the following eld^\times$ ($k$ direct factors) on $V$: for $\lambda=(\lambda_1,\dots,\lambda_k)\in T$ and $v=(v_1,\dots,v_k)\in V$ set $\lambda\cdot v:=(\lambda_1 v_1,\dots,\lambda_k v_k)$. This action commutes with the $G$-action on $V$. Consequently, for $v,v'\in V$ we have that \begin{equation}\label{eq:rescaling} G\cdot v=G\cdot v' \text{ if and only if }G\cdot (\lambda v)= G\cdot (\lambda v'). \end{equation} Later in the text when we say that 'after rescaling, we may assume that $v$ has some special form', then we mean that we replace $v$ by an appropriate element in its $T$-orbit, and we refer to \eqref{eq:rescaling}. \subsection{Reduction to multiplicity-free representations} We record a consequence of the results of \cite{draisma-kemper-wehlau} and Lemma~\ref{lemma:spanning invariants}: \begin{lemma} \label{lemma:multfree} Let $V_1,\dots,V_q$ be a complete irredundant list of representatives of the isomorphism classes of simple $G$-modules. Assume that \[|K|>(\max\{\dim(V_j)\mid j=1,\dots,q\}-1)|G|.\] Then we have the equality eld(G)=\sepbeta(G,V_1\oplus\cdots \oplus V_q).\] \end{lemma} \begin{proof} An arbitrary $G$-module $V$ is isomorphic to $V_1^{m_1}\oplus\cdots\oplus V_q^{m_q}$, where $W^m$ stands for the direct sum $W\oplus\cdots \oplus W$ of $m$ copies of $W$. By \cite[Corollary 2.6]{draisma-kemper-wehlau} we have $\sepbeta(G,V)\le \sepbeta(G,V_1^{\dim(V_1)}\oplus\cdots\oplus V_q^{\dim(V_q)})$, and by \cite[Theorem 3.4 (ii)]{draisma-kemper-wehlau} and \cite[Proposition 3.3 (ii)]{draisma-kemper-wehlau} we have $\sepbeta(G,V_1^{\dim(V_1)}\oplus\cdots\oplus V_q^{\dim(V_q)})= \sepbeta(G,V_1\oplus \cdots \oplus V_q)$. \end{proof} eld(G)$ it is sufficient to deal with multiplicity-free representations of $G$ (unless possibly if $K$ is a finite field with too few elements). \subsection{Helly dimension} Lemma~\ref{lemma:multfree} can be improved for groups whose Helly dimension is 'small'. The Helly dimension $\kappa(G)$ is the minimal positive integer $k$ having the following property: any set of cosets in $G$ with empty intersection contains a subset of at most $k$ cosets with empty intersection (see \cite[Section 4]{domokos:typical}). \begin{lemma}\label{lemma:helly} Under the assumptions and notation of Lemma~\ref{lemma:multfree}, there exists a $k\le\kappa(G)$ and $1\le j_1<\dots<j_k\le q$ with eld(G)=\sepbeta(G,V_{j_1}\oplus\cdots\oplus V_{j_k})$. \end{lemma} \begin{proof} This follows from Lemma~\ref{lemma:multfree} by the same argument as the proof of \cite[Lemma 4.1]{domokos-szabo} (formulated in a bit more general setup). 
\end{proof} In view of Lemma~\ref{lemma:helly}, it is helpful to find upper bounds on the Helly dimension. By \cite[Lemma 4.2]{domokos:typical} we have the inequality \begin{equation}\label{eq:kappa-mu} \kappa(G)\le \mu(G)+1, \end{equation} where $\mu(G)$ stands for the maximal size of an intersection independent set of subgroups of $G$. Recall that a set of subgroups of $G$ is \emph{intersection independent} if none contains the intersection of the others. \begin{lemma}\label{lemma:prime power} If a set $S$ of intersection independent subgroups in a group contains a cyclic subgroup of prime power order, then $|S|\le 2$. \end{lemma} \begin{proof} Let $S$ be an intersection independent set of subgroups of a group $G$ with $|S|\ge 2$ and $H\in S$ with $|H|$ a prime power. Take $J\in S$ with $J\cap H$ of minimal possible order. Since the lattice of subgroups of $H$ is a chain, for any $L\in S$ we have $L\cap H\supseteq H\cap J$, hence in particular $L\supseteq H\cap J$. Since $S$ is intersection independent, it follows that $L\in \{H,J\}$. As $L\in S$ was arbitrary, it means that $S=\{H,J\}$. \end{proof} \subsection{Notational convention.} \label{subsec:convention} eld^\times$, written multiplicatively). For $\chi\in \widehat G$ denote by $U_\chi$ the $1$-dimensional eld$ endowed with the representation $\chi$; that is, eld$. In several cases we shall deal with a $G$-module \begin{equation}\label{eq:V+U} V=W\oplus U, \qquad W=W_1\oplus\cdots\oplus W_l, \qquad U= U_{\chi_1} \oplus \cdots \oplus U_{\chi_m}, \end{equation} where the $W_i$ are pairwise non-isomorphic irreducible $G$-modules of dimension at least $2$, the $\chi,\dots,\chi_m$ are distinct characters of $G$. We shall denote by $x_1,x_2,\dots$ the coordinate functions on $W_1$, by $y_1,y_2,\dots$ the coordinate functions on $W_2$, $z_1,z_2,\dots$ the coordinate functions on $W_3$, etc., and denote by $t_\chi$ the coordinate function on $U_\chi$. According to the above conventions, we have $g\cdot t_\chi=\chi(g^{-1})t_\chi$. Associated to the above direct sum decomposition of $V$ is eld[V]$ explained at the beginning of Section~\ref{sec:prel}. The phrase `multihomogeneous' will refer to this multigrading. Usually we shall specify a concrete representation of $G$ (in other words, a $G$-module) eld)$. eld^n$, on which $g\in G$ operates via multiplication by the matrix $\psi(g)$. The action of $G$ on the variables $x_1,\dots,x_n$ (which form a basis in the dual space $V^*$ of $V$) is given explicitly by the following formula: \[g\cdot x_j=\sum_{i=1}^n\psi(g^{-1})_{ji}x_i.\] We shall write $A^T$ for the transpose of the matrix (e.g. row vector) $A$. \subsection{Relative invariants and the Hilbert ideal} \label{sec:Davenport} eld[V]^G$. A homogeneous invariant eld[V]^G$ is said to be \emph{indecomposable} if eld[V]^G$ is generated by the positive degree homogeneous indecomposable invariants. The so-called \emph{Hilbert ideal} $\mathcal{H}(G,V)$ eld[V]$ generated by the positive degree homogeneous $G$-invariants. The Hilbert ideal plays an essential role in studying generators of the algebra of invariants, because eld[V]^G$ is indecomposable if and only if $f$ is not contained in eld[V]$. More precisely, a set of eld[V]^G$ if and only if it gives a vector space basis in a vector space direct complement of eld[V]^G_+$ (the sum of the homogeneous components of $\field[V]^G$ of positive degree). 
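To make the notion of the Hilbert ideal concrete in a minimal example (assuming $\mathrm{char}(\field)\neq 2$), let $G=\mathrm{C}_2$ act on $V=\field^2$ so that the non-trivial group element multiplies both coordinates by $-1$. Then $\field[V]^G$ is spanned by the monomials of even total degree, and it is generated by the three quadratic invariants $x_1^2$, $x_1x_2$, $x_2^2$. Each of them is indecomposable, because $(\field[V]^G_+)^2$ is contained in degrees at least $4$; accordingly the Hilbert ideal is $\mathcal{H}(G,V)=(x_1^2,x_1x_2,x_2^2)=(x_1,x_2)^2$, and $\beta(G,V)=2$.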
eld[V]$ is a \emph{relative invariant of weight $\chi\in \widehat G$} if $g\cdot f=\chi(g^{-1})f$ for all $g\in G$ (so $f(gv)=\chi(g)f(v)$ for all $g\in G$ and $v\in V$). We set eld[V]\mid f \text{ is a relative invariant of weight }\chi\}.\] For example, if $V$ is as in \eqref{eq:V+U}, then the variable $t_\chi$ is a relative invariant of weight $\chi$. Moreover, for a sequence $\chi^{(1)},\dots,\chi^{(k)}$ of elements of $\widehat G$, $t_{\chi^{(1)}}\cdots t_{\chi^{(k)}}$ is an indecomposable $G$-invariant if and only if $\chi^{(1)}\cdots \chi^{(k)}=1\in \widehat G$ and there is no $\ell<k$, $1\le i_1<\cdots<i_{\ell}\le k$ with $\chi^{(i_1)}\cdots \chi^{(i_\ell)}=1\in \widehat G$ (we say in this case that $\chi^{(1)},\dots,\chi^{(k)}$ is an \emph{irreducible product-one sequence over $\widehat G$}, and we call $k$ its \emph{length}). We say that $\chi^{(1)},\dots,\chi^{(k)}$ is a \emph{product-one free sequence over $\widehat G$} if there is no $1\le i_1<\dots <i_\ell\le k$ with $\chi^{(i_1)}\cdots \chi^{(i_\ell)}=1\in \widehat G$. The \emph{Davenport constant} $\mathsf{D}(\widehat G)$ of the finite abelian group $G$ is defined as the maximal length of an irreducible product-one sequence over $G$. Clearly, the maximal possible length of a product-one free sequence is $\mathsf{D}(\widehat G)-1$. For more information about the Davenport constant and its relevance for the invariant theory of abelian groups see for example \cite{cziszter-domokos-geroldinger}. \begin{lemma}\label{lemma:V+U} Let $V=W\oplus U$ be a $G$-module as in \eqref{eq:V+U}. eld[W]^G$, and for $\chi\in \{\chi_1,\dots,\chi_m\}$ let eld[W]^{G,\chi}$ be a set of homogeneous relative invariants of weight $\chi$ eld[W]^{G,\chi}$ of eld[W]^G_+ \field[W]^{G,\chi}$. Set \[B:=\{t_{\chi^{(1)}}\cdots t_{\chi^{(k)}}\mid \chi^{(1)},\dots \chi^{(k)} \text{ is an irreducible product-one sequence over }\widehat G\},\] and \begin{align*}C:=\{ht_{\chi^{(1)}}\cdots t_{\chi^{(k)}}\mid \chi^{(1)},\dots \chi^{(k)} \text{ is a product-one free sequence over }\widehat G, \\ \quad h\in A_\chi \text{ where }\chi^{-1}=\chi^{(1)}\cdots \chi^{(k)}\}.\end{align*} Then $A\cup B\cup C$ is a homogeneous generating system of eld[V]^G$. \end{lemma} \begin{proof} eld[V]^G$ is generated by indecomposable multihomogeneous invariants. A multihomogeneous invariant $f$ belongs to eld[U]^G$ or is of the form eld[U]$ of positive degree, eld[W]$. eld$-subalgebra eld[V]$ generated by $A\cup B$. In the third case $t=t_{\chi^{(1)}}\cdots t_{\chi^{(k)}}$, where $\chi^{(1)},\dots \chi^{(k)}$ is a product-one free sequence over $\widehat G$. Then $t$ is a relative $G$-invariant of some weight $\chi$, and eld[V]^{G,\chi^{-1}}$. There exists a linear combination $a$ of elements from eld[W]^{G,\chi^{-1}}$. eld[V]$ generated by eld[V]^G_+)^2$. So we showed that any indecomposable eld[V]^G$ can be reduced modulo eld$-subalgebra of $\field[V]$ generated by $A\cup B\cup C$. This clearly implies our statement. \end{proof} eld[V]$ of polynomials write \[\mathcal{V}(S):=\{v\in V\mid \forall f\in S\colon f(v)=0\}\] for the common zero locus in $V$ of the elements of $S$. Next we restate \cite[Lemma 4.3]{cziszter:C7rtimesC3} in a sharp form: \begin{lemma}\label{lemma:common zero locus} We have the equality eld[V]^{G,\chi})=\{v\in V\mid \mathrm{Stab}_G(v)\nsubseteq \ker(\chi)\}.\] \end{lemma} \begin{proof} Suppose that $v\in V$, $g \in G$ with $g\cdot v=v$. eld[V]^{G,\chi}$ we have \[f(v)=f(g\cdot v)=\chi(g)f(v),\] therefore $\chi(g)\neq 1$ implies $f(v)=0$. 
This shows the inclusion $\{v\in V\mid \mathrm{Stab}_G(v)\nsubseteq \ker(\chi)\}\subseteq eld[V]^{G,\chi}$. The reverse inclusion is stated and proved in \cite[Lemma 4.3]{cziszter:C7rtimesC3}. \end{proof} \subsection{The use of automorphisms} The natural action of the automorphism group of $G$ on the isomorphism classes of representations of $G$ helps to save some computations later, thanks to the following statements: \begin{lemma}\label{lemma:auto} Let $(V,\rho)$ be a $G$-module and $\alpha$ an automorphism of the group $G$. \begin{itemize} \item[(i)] For any $\chi\in \widehat G$ we have eld[(V,\rho)]^{G,\chi}= eld[(V,\rho\circ\alpha)]^{G,\chi\circ\alpha}.\] In particular, eld[(V,\rho)]^G= eld[(V,\rho\circ\alpha)]^G.\] \item[(ii)] We have \[\beta(G,(V,\rho))=\beta(G,(V,\rho\circ\alpha)) \text{ and }\sepbeta(G,(V,\rho))=\sepbeta(G,(V,\rho\circ\alpha)).\] \item[(iii)] Assume that $(V,\rho)\cong (V,\rho\circ\alpha)$ as $G$-modules, so there exists a eld$-vector space isomorphism $T:V\to V$ with $T\circ \rho(g)=(\rho\circ\alpha)(g)\circ T$ for all $g\in G$. eld$-vector space isomorphism eld[[(V,\rho)]^{G,\chi\circ\alpha}.\] \end{itemize} \end{lemma} \begin{proof} (i) and (ii): The subset $\rho(G)$ of $\mathrm{GL}(V)$ coincides with the subset $(\rho\circ\alpha)(G)$ of $\mathrm{GL}(V)$, therefore the representations $\rho$ and $\rho\circ\alpha$ yield the same partition of $V$ into the union of $G$-orbits, eld[V]$ are invariants (respectively relative invariants) for the $G$-action given by $\rho$ as for the $G$-action given by $\rho\circ\alpha$. eld[(V,\rho)]^{G,\chi}$, $g\in G$ and $v\in V$, then $f((\rho\circ\alpha)(g)(v))=f(\rho(\alpha(g))(v)=\chi(\alpha(g))f(v)=(\chi\circ\alpha)(g)f(v)$. eld[V]$ has weight $\chi$ with respect to $\rho$, then it has weight $\chi\circ\alpha$ with respect to $\rho\circ\alpha$. This explains both statements. (iii): For each $g\in G$ we have \[(f\circ T)(gv)=f((T\circ\rho(g))(v)) =f((\rho\circ\alpha)(g)(T(v)) =\chi(\alpha(g))f(T(v))= (\chi\circ\alpha)(g)(f\circ T)(v). \] \end{proof} \section{Groups with a cyclic subgroup of index two}\label{sec:indextwo} Denote by $\mathrm{C}_n$ the cyclic group of order $n$. We use the notation for semidirect products \[ \mathrm{C}_m \rtimes_d \mathrm{C}_n = \langle a,b \mid a^m=1, b^n=1, bab^{-1}=a^d \rangle \quad \text{ where } d \in \mathbb{N}\mbox{ is coprime to }m. \] We need the following classification of finite groups with a cyclic subgroup of index $2$ (see \cite[Section 10]{cziszter-domokos:indextwo} for details and references): \begin{proposition}\label{prop:class index 2 cyclic} Any finite group containing a cyclic subgroup of index two is isomorphic to \begin{equation}\label{eq:H} \mathrm{C}_s\times (\mathrm{C}_r\rtimes_{-1} H)\end{equation} where $r,s$ are coprime odd integers, $\mathrm{C}_r\rtimes_{-1} H$ is the semidirect product where the kernel of the action of $H$ on $\mathrm{C}_r$ contains an index two cyclic subgroup $\langle a \rangle$ of $H$, $b\in H\setminus \langle a \rangle$ acts via inversion on $\mathrm{C}_r$ (i.e. 
$bxb^{-1}=x^{-1}$ for all $x\in \mathrm{C}_r$), and $H$ is one of the following $2$-groups: \begin{itemize} \item[(i)] $\mathrm{C}_{2^n}$ \quad ($n\geq 1$); \item[(ii)] $\mathrm{C}_{2^{n-1}}\times \mathrm{C}_2$ \quad ($n\geq 2$); \item[(iii)] $\mathrm{M}_{2^n} := \mathrm{C}_{2^{n-1}} \rtimes_d \mathrm{C}_2, \qquad d={2^{n-2}+1}$ \quad ($n\geq 3$); \item[(iv)] $\mathrm{D}_{2^n} := \mathrm{C}_{2^{n-1}} \rtimes_{-1} \mathrm{C}_2$ \quad ($n\geq 4$); \item[(v)] $\mathrm{SD}_{2^n} := \mathrm{C}_{2^{n-1}} \rtimes_d \mathrm{C}_2, \qquad d={2^{n-2}-1}$ \quad ($n\geq 4$); \item[(vi)] $\mathrm{Dic}_{2^n} := \langle a,b\mid a^{2^{n-1}}=1, b^2=a^{2^{n-2}}, bab^{-1}= a^{-1}\rangle$ \quad ($n\geq 3$). \end{itemize} \end{proposition} Among the groups listed in Proposition~\ref{prop:class index 2 cyclic} are the dicyclic groups. For a positive integer $m>1$, the \emph{dicyclic group} is \[\mathrm{Dic}_{4m}=\begin{cases} \mathrm{C}_r\rtimes_{-1} \mathrm{Dic}_{2^n} &\text{ for }m=2^{n-2}r \text{ even }\\ \mathrm{C}_m\rtimes_{-1} \mathrm{C}_4 &\text{ for }m>1 \text{ odd. }\end{cases}\] Note that $\mathrm{Dic}_8$ is the \emph{quaternion group} of order $8$. \begin{proposition}\label{prop:betasep-index2} Let $G$ be a non-cyclic group with a cyclic subgroup of index two. Assume that eld$ has an element of multiplicative order $|G|$. Then eld(G) \ge \frac{1}{2} |G| + \begin{cases} 2 & \text{ if } G=\mathrm{Dic}_{4m}, \text{ $m>1$};\\ 1 & \text{ otherwise. } \end{cases}\] \end{proposition} \begin{proof} Let $G$ be a group as in \eqref{eq:H} in Proposition~\ref{prop:class index 2 cyclic}, where $H$ is a semidirect product of a cyclic group and the two-element group (so $H$ is $\mathrm{C}_2$ or $H$ is of type (ii), (iii), (iv), or (v) from Proposition~\ref{prop:betasep-index2}). Then $G$ has a matrix representation generated by the matrices \[\ \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad \begin{bmatrix} \xi & 0 & 0 \\ 0 & \xi^k & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} \rho & 0 & 0 \\ 0 & \rho^{-1} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} \varepsilon & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & 1 \end{bmatrix}, \] where $k$ is some positive integer coprime to $2^{n-1}$ depending on the type of $H$, and $\xi$, $\rho$, $\varepsilon$ are roots of $1$ of multiplicative order $2^{n-1}$, eld^3$, on which the group $G$ acts via the given matrix representation. Setting $m:=sr2^{n-1}$, the monomials $x_1^m$ and $x_2^m$ are fixed by the cyclic index two subgroup $A$ of $G$ generated by the second, third, and fourth matrices above. Moreover, they are interchanged by the first matrix. eld^3$. Note that our representation is the direct sum of a $2$-dimensional and a $1$-dimensional representation. So by Lemma~\ref{lemma:lower-bound} it is sufficient to show that $v$ and $v'$ can not be separated by a \emph{multihomogeneous} invariant (cf. Section~\ref{subsec:direct sum decomp}) of degree at most $m$. Suppose for contradiction that $f(v)\neq f(v')$, where $f$ is a multihomogeneous invariant of degree at most $m$. So $f=t^df_1(x_1,x_2)$. Since the projections of these two vectors in the $x_1,x_2$ coordinate plane are the same, no invariant depending only on $x_1$ and $x_2$ can separate them. Thus we have $d>0$. The invariants depending only on $t$ are the polynomials in $t^2$, which take the same value (namely $1$) on these vectors, hence $\deg(f_1)>0$. Since $t$ is $A$-invariant, eld[x_1,x_2]^A$, so $f_1$ is eld$-linear combination of $A$-invariant monomials in $x_1$ and $x_2$. 
The monomials having positive degree in $x_2$ vanish on both $v$ and $v'$. It follows that $f_1$ must have an $A$-invariant monomial depending only on $x_1$. The only such monomials are the powers of $x_1^m$. Thus $\deg(f)\ge m+d>m$, a contradiction. Assume next that $G$ is a dicyclic group $\mathrm{Dic}_{4m}$. It is isomorphic to the matrix group generated by \[\begin{bmatrix} 0 & \mathrm{i}\\ \mathrm{i}& 0 \\ \end{bmatrix}, \quad \begin{bmatrix} \rho & 0 \\ 0 & \rho^{-1} \\ \end{bmatrix} \] eld$ with multiplicative order $4$ (so $\mathrm{i}^2=-1$) eld$ of multiplicative order $2m$. It is an easy exercise to show that the corresponding algebra of invariants is generated by $x_1^{2m}-x_2^{2m}$, $(x_1x_2)^2$, $(x_1^{2m}+x_2^{2m})x_1x_2$ when $m$ is odd, and by $x_1^{2m}+x_2^{2m}$, $(x_1x_2)^2$, $(x_1^{2m}-x_2^{2m})x_1x_2$ when $m$ is even. If $m$ is odd, the vectors $[1,1]^T$ and $[1,-1]^T$ are separated by the degree $2m+2$ generator, but all the smaller degree generators agree on them. eld$ has an element $\nu$ of multiplicative order $4m=|G|$. For even $m$ the vectors $[\nu,1]^T$ and $[\nu,-1]^T$ are separated by the degree $2m+2$ generator, whereas the smaller degree generators coincide on them. A matrix representation of $\mathrm{C}_s\times \mathrm{Dic}_{4m}$ (where $s>1$ is odd and is co-prime to $m$) is generated by \[ \begin{bmatrix} 0 & \mathrm{i}& 0 \\ \mathrm{i}& 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad \begin{bmatrix} \rho & 0 & 0 \\ 0 & \rho^{-1} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} \varepsilon & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & 1 \end{bmatrix},\] where $\rho$ and $\varepsilon$ are roots of $1$ with multiplicative order $2m$ and $s$. The vectors $v:=[1,0,1]^T$ and $v':=[1,0,-1]^T$ are separated by the invariant $(x_1^{2ms}-x_2^{2ms})t$ when $m$ is odd and by $(x_1^{2ms}+x_2^{2ms})t$ when $m$ is even. On the other hand, any invariant of degree at most $2ms$ takes the same value on $v$ and $v'$ (this can be seen similarly to the argument in the second paragraph of the proof, using Lemma~\ref{lemma:lower-bound}). It remains to deal with the case when $G\cong \mathrm{C}_s\times (\mathrm{C}_r\rtimes_{-1} \mathrm{C}_{2^n})$ where $n\ge 3$. A matrix representation of this group is generated by \[ \begin{bmatrix} 0 & \omega & 0 \\ \omega & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad \begin{bmatrix} \rho & 0 & 0 \\ 0 & \rho^{-1} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} \varepsilon & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & 1 \end{bmatrix},\] where $\omega$, $\rho$ and $\varepsilon$ are roots of $1$ with multiplicative order $2^n$, $r$, and $s$. The square of the first matrix together with the second and the third generate a cyclic subgroup $A$ of $G$ of order $m:=2^{n-1}rs$, acting on the line spanned by $[1,0,0]^T$ via a character of order $m$. The invariant $(x_1^m+x_2^m)t$ separates $v:=[1,0,1]^T$ and $v':=[1,0,-1]^T$, and any invariant of degree at most $m$ takes the same value on $v$ and $v'$ (this can be seen similarly to the argument in the second paragraph of the proof, using Lemma~\ref{lemma:lower-bound}). 
\end{proof} \begin{proofof}{Theorem~\ref{thm:sepbeta index two}} The Noether number $\beta^\field(G)$ was computed for all groups with a cyclic subgroup of index $2$ in \cite[Theorem 10.3]{cziszter-domokos:indextwo}, where (as a special case of a statement on the so-called generalized Noether number) it was shown that if $G$ is a non-cyclic group with a cyclic subgroup of index two and $\mathrm{char}(\field)$ is not a divisor of $|G|$, then \begin{equation}\label{eq:beta(index two cyclic)} \beta^\field(G) = \frac{1}{2} |G| + \begin{cases} 2 & \text{ if } G=\mathrm{Dic}_{4m}, \text{ $m>1$};\\ 1 & \text{ otherwise. }\end{cases} \end{equation} The theorem follows from Proposition~\ref{prop:betasep-index2}, the inequality \eqref{eq:sepbeta<beta} $\sepbeta^\field(G)\le \beta^\field(G)$, and \eqref{eq:beta(index two cyclic)}. \end{proofof} \subsection{The groups $\mathrm{D}_{2n}\times \mathrm{C}_2$} In this section $n\ge 2$ is even, and \[G=\mathrm{D}_{2n}\times \mathrm{C}_2=\langle a,b,c\mid a^n=b^2=c^2=1,\ bab=a^{-1},\ ac=ca,\ bc=cb\rangle.\] \begin{proposition}\label{prop:D2nxC2} Assume that $\field$ contains an element of multiplicative order $n$. Then $\sepbeta^\field(G)\ge n+2$. \end{proposition} \begin{proof} Fix an element $\rho\in\field$ of multiplicative order $n$. Take the direct sum of the representations \[ a\mapsto \begin{bmatrix} \rho & 0 \\ 0 & \rho^{-1} \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}, \] \[a\mapsto 1, \quad b\mapsto -1, \quad c\mapsto -1 \qquad \text{ and } \qquad a\mapsto 1, \quad b\mapsto 1, \quad c\mapsto -1.\] Recall that $x_1,x_2$ stand for the standard coordinate functions on the $2$-dimensional summand, and $t_1,t_2$ stand for the coordinate functions on the $1$-dimensional summands. The invariant $(x_1^n-x_2^n)t_1t_2$ separates the points $v:=[1,0,1,1]^T$ and $v':=[1,0,1,-1]^T$ of $\field^4$. It is sufficient to show that $v$ and $v'$ can not be separated by an invariant of degree at most $n+1$. Suppose for contradiction that $h(v)\neq h(v')$ for some homogeneous $h\in \field[x_1,x_2,t_1,t_2]^G$ of degree at most $n+1$. We may assume by Lemma~\ref{lemma:lower-bound} that $h$ is multihomogeneous (so it is homogeneous both in $t_1$ and $t_2$), and has minimal possible total degree. So $h=t_1^{k_1}t_2^{k_2}h_1(x_1,x_2)$. Since the $(x_1,x_2,t_1)$-coordinates of $v$ and $v'$ are the same, we have $k_2>0$. Since $h_1$ is $\langle c\rangle$-invariant, $k_1+k_2$ must be even. The invariants depending only on $t_1,t_2$ belong to $\field[t_1^2,t_2^2]$, and $t_1^2(v)=1=t_1^2(v')$, $t_2^2(v)=1=t_2^2(v')$, so $\deg(h_1)>0$, and (since $h$ has minimal possible degree) $h=t_1t_2h_1$. Thus $h_1$ is a relative $G$-invariant of degree at most $n-1$, on which $G$ acts via the character $a\mapsto 1$, $b\mapsto -1$, $c\mapsto 1$. The action of $G$ on $\field^2$ factors through the action of the dihedral group $\mathrm{D}_{2n}\cong G/\langle c\rangle$, and it is well known (and easy to see) that the minimal degree non-zero relative invariant in $\field[x_1,x_2]$ with the above weight is $x_1^n-x_2^n$. We reached the desired contradiction. \end{proof} \begin{proofof}{Theorem~\ref{thm:sepbeta(D2nxC2)}} We have the isomorphism \[\mathrm{D}_{2n}\times \mathrm{C}_2\cong (\mathrm{C}_n\times \mathrm{C}_2)\rtimes_{-1}\mathrm{C}_2.\] Therefore by \cite[Corollary 5.5]{cziszter-domokos:indextwo} we have \[\beta^\field(\mathrm{D}_{2n}\times \mathrm{C}_2)= \mathsf{D}(\mathrm{C}_n\times \mathrm{C}_2)+1=n+2,\] hence $\sepbeta^\field(\mathrm{D}_{2n}\times \mathrm{C}_2)\le n+2$. The reverse inequality holds by Proposition~\ref{prop:D2nxC2}. \end{proofof} \section{Groups with $\sepbeta^\field(G)=\beta^\field(G)$}\label{sec:easy groups} \subsection{The alternating group $\mathrm{A}_4$} \begin{proposition} \label{prop:alt_n} For $n\ge 4$ we have $\sepbeta^\field(\mathrm{A}_n)\ge n(n-1)/2$. \end{proposition} \begin{proof} The running assumption that $\mathrm{char}(\field)$ does not divide $|G|$ forces for $G=\mathrm{A}_n$ that $|\field|\ge n$ if $n\ge 4$. Hence $\field$ has $n$ distinct elements $v_1,\dots,v_n$.
Set $v:=[v_1,v_2,v_3,\dots,v_n]^T$ and $v':=[v_2,v_1,v_3,\dots,v_n]^T$. Then $v$ and $v'$ belong to the same orbit of $\mathrm{S}_n$ acting on $\field^n$ via permuting the coordinates, but $v$ and $v'$ have different orbits with respect to the alternating subgroup $\mathrm{A}_n$. It is well known that the corresponding algebra of invariants is \[\field[x_1,\dots,x_n]^{\mathrm{A}_n}=\field[x_1,\dots,x_n]^{\mathrm{S}_n}\oplus \Delta\cdot \field[x_1,\dots,x_n]^{\mathrm{S}_n},\] where $\Delta:=\prod_{1\le i<j\le n}(x_i-x_j)$. This shows that all $\mathrm{A}_n$-invariants of degree less than $n(n-1)/2$ are in fact $\mathrm{S}_n$-invariants. So they can not separate $v$ and $v'$, which have different $\mathrm{A}_n$-orbits. \end{proof} It is known that $\beta^\field(\mathrm{A}_4)=6$ (see \cite[Theorem 3.4]{CzD:1}), hence Proposition~\ref{prop:alt_n} has the following consequence: \begin{theorem}\label{thm:sepbeta(A4)} We have the equality $\sepbeta^\field(\mathrm{A}_4)=6$. \end{theorem} \subsection{The binary tetrahedral group} In this section $G:=\widetilde{\mathrm{A}}_4$ is the binary tetrahedral group: the special orthogonal group $\mathrm{SO}(3,\mathbb{R})$ has a subgroup isomorphic to $\mathrm{A}_4$ (the group of rotations mapping a regular tetrahedron into itself), and $G$ is isomorphic to the preimage of $\mathrm{A}_4$ under the classical two-fold covering $\mathrm{SU}(2,\mathbb{C})\to \mathrm{SO}(3,\mathbb{R})$. So $G$ can be embedded as a subgroup into the special unitary group $\mathrm{SU}(2,\mathbb{C})$, acting via its defining representation on $V:=\mathbb{C}^2$. Thus $V$ is a $G$-module, and it is well known that $\mathbb{C}[V]^G$ is generated by three homogeneous invariants $f_6$, $f_8$, $f_{12}$, having degree $6$, $8$, and $12$ (see \cite[Lemma 4.1]{huffman} or \cite[Section 0.13]{popov-vinberg}). Since $G$ (as a subgroup of $\mathrm{GL}(V)$) is not generated by pseudo-reflections, $\mathbb{C}[V]^G$ does not contain an algebraically independent separating set by \cite{dufresne}, so the invariants $f_6$ and $f_8$ can not separate all the $G$-orbits in $V$. This implies that $\sepbeta^\mathbb{C}(G,V)\ge 12$. A second proof valid for more general base fields is given below. \begin{theorem} \label{thm:sepbeta(A4tilde)} Assume that $\field$ contains an element $\xi$ of multiplicative order $8$. Then $\sepbeta^\field(\widetilde{\mathrm{A}}_4)=12$. \end{theorem} \begin{proof} An alternative way to think of $G=\widetilde{\mathrm{A}}_4$ is that $G\cong \mathrm{Dic}_8\rtimes \mathrm{C}_3$, the non-abelian semidirect product of the quaternion group of order $8$ and the $3$-element group. Denote by $a,b$ generators of $G$, where $a^4=1$, $b^3=1$, and $\langle a,bab^{-1}\rangle\cong \mathrm{Dic}_8$, such that $a,bab^{-1},b^2ab^{-2}$ correspond to the elements of the quaternion group traditionally denoted by $\mathrm{i}$, $\mathrm{j}$, $\mathrm{k}$. Then $G$ has the representation \[\psi:a\mapsto \begin{bmatrix} \xi^2 & 0 \\ 0 & -\xi^2 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} \frac 12(-1-\xi^2) & \frac 12(1+\xi^2) \\ \frac 12(-1+\xi^2) & \frac 12(-1+\xi^2) \\ \end{bmatrix} \] (indeed, $\psi(b)^3$ is the identity matrix, and $\psi(b)\psi(a)\psi(b)^{-1}= \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}$ together with $\psi(a)$ generate a subgroup in $\mathrm{GL}(\field^2)$ isomorphic to the quaternion group). Let $V$ be the $G$-module $\field^2$ endowed with the representation $\psi$. The algebra $\field[V]^{\mathrm{Dic}_8}$ is generated by $f_1:=(x_1x_2)^2$, $f_2:=(x_1^2+x_2^2)^2$, $f_3:=x_1x_2(x_1^4-x_2^4)$. The generator $f_3$ is fixed by $b$ as well, hence $f_3$ is $G$-invariant, whereas $f_1$ and $f_2$ span a $\langle b\rangle$-invariant subspace in $\field[x_1,x_2]$. The matrix of the linear transformation $b$ on $\field\{f_1,f_2\}$ with respect to the basis $4f_1,f_2$ is $\begin{bmatrix}
0 & 1 \\ -1 & -1 \\ \end{bmatrix}.$ eld[s_1,s_2,s_3]$ with the action of $\langle b\rangle\cong \mathrm{C}_3$ given by $s_1\mapsto -s_2$, $s_2\mapsto s_1-s_2$, $s_3\mapsto s_3$. eld$-algebra homomorphism given by $s_1\mapsto 4f_1$, $s_2\mapsto f_2$, $s_3\mapsto f_3$ restricts to a surjective eld[s_1,s_2,s_3]^{\langle b\rangle}$ onto eld[V]^G$. eld[s_1,s_2,s_3]^{\langle b\rangle}$ is generated in degree at most $\mathsf{D}(\mathrm{C}_3)=3$, an easy direct computation yields eld[s_1,s_2,s_3]^{\langle b\rangle}= eld[s_1^2 - s_1s_2 + s_2^2, s_1^2s_2 - s_1s_2^2, s_1^3 - 3s_1s_2^2 + s_2^3, s_3].\] eld[V]^G$ is generated by \begin{align*} h_8&:=(4f_1)^2 - 4f_1f_2 + f_2^2 \\ h_6&:=x_1x_2(x_1^2+x_2^2)(x_1^2-x_2^2) \\ h_{12}^{(1)}&:=(4f_1)^2f_2 - 4f_1f_2^2 \\ h_{12}^{(2)}&:=(4f_1)^3 - 12f_1f_2^2 + f_2^3 \end{align*} having degrees $4,6,12,12$. In fact $h_{12}^{(1)}$ can be omitted, because we have $h_{12}^{(1)}=-4h_6^2$. Set $v:=[1,\xi^2]^T$ and $v':=\xi v$. Then $(x_1^2+x_2^2)(v)=0=(x_1^2+x_2^2)(v')$, hence $f_2(v)=0=f_2(v')$. It follows that $h_6(v)=0=h_6(v')$. Moreover, $h_8(v)=16(\xi^4)^2=16=16(\xi^8)^2=h_8(v')$. Thus no invariants of degree less than $12$ separate $v$ and $v'$. However, they have different $G$-orbit, since $h_{12}^{(2)}(v)=4^3(\xi^4)^3=-64$, whereas $h_{12}^{(2)}(v')=4^3(\xi^8)^3=64$. Thus we proved the inequality eld(G)\ge 12$. On the other hand, eld(G)=12$, implying the reverse inequality eld(G)\le 12$. \end{proof} \subsection{The symmetric group $\mathrm{S}_4$ of degree $4$.} \begin{theorem}\label{thm:sepbeta(S4)} eld$ has an element of multiplicative order eld)\neq 5$. Then we have eld(\mathrm{S}_4)=9$. \end{theorem} \begin{proof} eld(\mathrm{S}_4)=9$ by \cite[Example 5.3]{cziszter-domokos-geroldinger}, eld(\mathrm{S}_4)\le 9$. eld^4$ via permuting the coordinates. Thus $\pi \in \mathrm{S}_4$ maps the variable $x_j$ to $\mathrm{sign}(\pi)x_{\pi(j)}$. Denote by $\sigma_j$ ($j=1,2,3,4)$ the elementary symmetric polynomial of degree $j$, and set $\Delta:=\prod_{1\le i<j\le 4}(x_i-x_j)$. eld[V]^{\mathrm{S}_4}$ is minimally generated by $\sigma_1^2$, $\sigma_2$, $\sigma_1\sigma_3$, $\sigma_4$, $\sigma_3^2$, $\sigma_1\Delta$, $\sigma_3\Delta$ (see \cite[Example 5.3]{cziszter-domokos-geroldinger}). We shall show that there exists a $v\in V$ with \begin{equation}\label{eq:S4 good v} \sigma_1(v)=0, \quad \Delta(v)\neq 0 \quad \text{ and } \sigma_3(v)\neq 0. \end{equation} Then setting $v'=-v$ we have that all the even degree invariants agree on $v$ and $v'$. The only odd degree invariant of degree less than $9$ is $\sigma_1\Delta$ (up to non-zero scalar multiples), and it vanishes both on $v$ and $v'$ by $\sigma_1(v)=0=\sigma_1(v')$. On the other hand, $\Delta(v)=\Delta(v')\neq 0$, $\sigma_3(v)=-\sigma_3(v')\neq 0$ imply that $(\sigma_3\Delta)(v)\neq (\sigma_3\Delta)(v')$. eld(\mathrm{S}_4)\ge 9$. It remains to prove the existence of $v$ satisfying \eqref{eq:S4 good v}. eld)\neq 5$, then $v:=[0,1,2,-3]^T\in V$ satisfies \eqref{eq:S4 good v} eld)\notin \{2,3\}$). eld$ contains an element eld$ contains $\mathbb{F}_{25}$, the field of $25$ elements. Let $v_1,v_2$ be the roots eld$, and let $v_3,v_4$ be the roots of eld$. Set $v:=[v_1,v_2,v_3,v_4]^T$. Note that eld[x], \] showing that $\sigma_1(v)=0$, $\sigma_3(v)=-1$. Moreover, $v_1,v_2,v_3,v_4$ are distinct, so $\Delta(v)\neq 0$, and thus \eqref{eq:S4 good v} holds. \end{proof} \subsection{The Pauli group} In this section $G$ stands for the \emph{Pauli group} $G$ of order $16$. 
eld$ has an element $\mathrm{i}$ of multiplicative order $4$. Then $G$ is generated by the matrices \[\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad \begin{bmatrix} 0 & -\mathrm{i} \\ \mathrm{i} & 0 \\ \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \] eld^2)$. \begin{theorem} \label{thm:sepbeta(Pauli)} eld$ has an element of multiplicative order $4$. We have the equality eld(G)=7$. \end{theorem} \begin{proof} Since $G$ is generated by reflections, by the Sheppard-Todd-Chevalley Theorem (see e.g. \cite[Theorem 7.2.1]{benson} or \cite[Section 3.9.4]{derksen-kemper}), eld[x_1,x_2]^G$ is a polynomial algebra with two generators. In fact it is generated by $x_1^4+x_2^4$ and $(x_1x_2)^2$. eld[x_1,x_2]$ modulo the Hilbert ideal (the ideal generated by $x_1^4+x_2^4$ and $(x_1x_2)^2$) is isomorphic to the regular representation of $G$ (see \cite{chevalley}). The element $f:=(x_1^4-x_2^4)x_1x_2$ has the property that $g\cdot f=-f$ where $g$ is any of the above three generators of $G$. One can easily see that $f$ does not belong to the Hilbert ideal. Take an extra indeterminate $t$ and extend the action of eld[x_1,x_2,t]$ by setting $g\cdot t=-t$ for any of the above three generators of $G$. Then $ft$ is a degree $7$ $G$-invariant separating $v:=[1,1,1]^T$ and $v':=[1,1,-1]^T$. We claim that $v$ and $v'$ can not be separated by an invariant of degree at most $6$. Suppose for contradiction that $h$ is a multihomogeneous $G$-invariant of degree at most $6$ with $h(v)\neq h(v')$ (cf. Lemma~\ref{lemma:lower-bound}). Then $h$ is homogeneous in $t$. eld[x_1,x_2]$ takes the same value on them, so $h$ has positive degree in $t$. Moreover, since $t^2$ eld[x_1,x_2]$, where $h_1$ is homogeneous of degree at most $5$. By $G$-invariance of $h$ we must have that $g\cdot h_1=-h_1$ for any of the above three generators $g$ of $G$. However, this $1$-dimensional representation of $G$ occurs only once as a summand in eld[x_1,x_2]$ modulo the Hilbert ideal, and we found already one occurrence in the eld(G)\ge 7$. eld(G)=7$ proved in \cite[Example 5.4]{cziszter-domokos-geroldinger}. \end{proof} \subsection{The Heisenberg group $\mathrm{H}_{27}$} In this section \[G=\mathrm{H}_{27}=\langle a,b,c \mid a^3=b^3=c^3=1,\ a^{-1}b^{-1}ab=c,\ ac=ca,\ bc=cb \rangle\] is the \emph{Heisenberg group} of order $27$. It is generated by $a,b$, and its commutator subgroup $\langle c\rangle\cong \mathrm{C}_3$, whereas $G/G'\cong \mathrm{C}_3\times \mathrm{C}_3$. eld$ has an element $\omega$ of multiplicative order $3$, and consider the following irreducible $3$-dimensional representation of $G$: \[ \psi: a\mapsto \begin{bmatrix} 1 & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega^2 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad c\mapsto \begin{bmatrix} \omega & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega \end{bmatrix}.\] eld^3$ endowed with the representation $\psi$. \begin{proposition}\label{prop:H27} eld|=4$. \end{proposition} \begin{proof} Denote by $H$ the subgroup of $G$ generated by $a$ and $c$. eld[V]^H$ is generated by the monomials $x_1^3$, $x_2^3$, $x_3^3$, $x_1x_2x_3$. It follows that each homogeneous $G$-invariant has degree divisible by $3$. The element $b$ fixes $x_1x_2x_3$, and permutes cyclically eld[x_1^3,x_2^3,x_3^3]^{\langle b\rangle}$. 
One easily deduces that the $\langle b\rangle$-invariants eld[V]$) of degree at most $6$ are contained in the subalgebra generated by $f_1:=x_1x_2x_3$, $f_2:=x_1^3+x_2^3+x_3^3$, $f_3:=x_1^3x_2^3+x_1^3x_3^3+x_2^3x_3^3$. eld$ with $\lambda^3\neq 1$. Now $f_1$, $f_2$, and $f_3$ are all symmetric in the variables $x_1,x_2,x_3$, therefore they agree on $v:=[1,\lambda,0]^T$ and $v':=[\lambda,1,0]^T$. On the other hand, $v$ and $v'$ have different $G$-orbits. Indeed, the algebra eld[V]^G$ contains also $f_4:=x_1^3x_2^6 + x_1^6x_3^3 + x_2^3x_3^6$. We have $f_4(v)=\lambda^6$, whereas $f_4(v')=\lambda^3$, so $f_4(v)\neq f_4(v')$. Thus $G\cdot v\neq G\cdot v'$, and $v$, $v'$ can not be separated by invariants of degree $<9$. eld|= 4$, then for any positive integer $n$, the polynomial $x_j^{3n}$ induces the same function on $V$ as $x_j^3$ ($j=1,2,3$). Therefore all functions on $V$ that are induced by some element of eld[x_1^3,x_2^3,x_3^3]$ can be induced by a polynomial in eld\{1,x_1^3,x_2^3,x_3^3,x_1^3x_2^3,x_1^3x_3^3,x_2^3x_3^3, x_1^3x_2^3x_3^3\}.\] Consequently, $x_1x_2x_3$ together with the $G$-invariants of degree at most $6$ in the above space of polynomials eld[V]^G$. In particular, we have $\sepbeta(G,V)\le 6$. To see the reverse inequality, note that the degree $3$ invariants are linear combinations of $x_1x_2x_3$ and $x_1^3+x_2^3+x_3^3$, and they agree on $v:=[0,0,0]^T$ and $v':=[1,1,0]^T$. However, $v$ and $v'$ have different $G$-orbits, eld|=4$. \end{proof} \begin{theorem}\label{thm:sepbeta(H27)} eld$ contains an element of multiplicative order $3$ eld|\neq 4$. eld(\mathrm{H}_{27})=9$. \end{theorem} \begin{proof} eld(G)=9$, therefore the result follows by Proposition~\ref{prop:H27}. \end{proof} \subsection{The non-abelian group $(\mathrm{C}_2\times \mathrm{C}_2)\rtimes \mathrm{C}_4$} In this section \[G:=\langle a,b,c\mid a^2=b^2=c^4=1, \quad ab=ba, \quad cac^{-1}=b, \quad cbc^{-1}=a\rangle.\] \begin{theorem} \label{thm:sepbeta((C2xC2)rtimesC4)} eld$ has an element $\mathrm{i}$ of multiplicative order $4$. Then we have the equality eld((\mathrm{C}_2\times\mathrm{C}_2)\rtimes \mathrm{C}_4)=6$. \end{theorem} \begin{proof} eld(G)=6$ is proved in \cite[Proposition 3.10]{cziszter-domokos-szollosi}, so it is sufficient eld(G)\ge 6$. The element $c^2$ belongs to the center of $G$, and the factor group $G/\langle c^2\rangle$ is isomorphic to the dihedral group $\mathrm{D}_8$. The representation \[ a\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & -1 \\ -1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \] has $c^2$ in its kernel and gives the defining representation of $\mathrm{D}_8$ as the group of symmetries of a square in the euclidean plane. The element $ab$ also belongs to the center of $G$, and the factor group $G/\langle ab\rangle \cong \mathrm{C}_4\times \mathrm{C}_2$ is generated by the coset of $c$ and the coset of $a$. Thus $G$ has the $1$-dimensional representations \[a\mapsto 1, \quad b\mapsto 1, \quad c\mapsto \mathrm{i} \qquad \text{ and } \qquad a\mapsto -1, \quad b\mapsto -1, \quad c\mapsto \mathrm{i}.\] Take the direct sum of the above $2$-dimensional and $1$-dimensional representations, and denote by $x_1,x_2$ the standard coordinate functions on the $2$-dimensional summand, and by $t_1,t_2$ the coordinate functions on the $1$-dimensional summands. eld[x_1,x_2,t_1,t_2]^G$ of invariants contains eld^4$. 
eld(G)\ge 6$, it is sufficient to show that $v$ and $v'$ can not be separated by a multihomogeneous invariant (cf. Section~\ref{subsec:direct sum decomp}) of degree at most $5$. Suppose for contradiction that $f(v)\neq f(v')$, where $f$ is a multihomogeneous invariant of degree at most $5$. It has positive degree in $t_2$, because the $(x_1,x_2,t_1)$ coordinates of $v$ and $v'$ are the same. Any $G$-invariant depending only on $t_1$ and $t_2$ is a polynomial in $t_1^4$, $t_1^2t_2^2$, $t_2^4$, and all these polynomials take the value $1$ both at $v$ and $v'$. So $f=t_1^{k_1}t_2^{k_2}f_1(x_1,x_2)$, where $0<k_1+k_2\le 4$, $\deg(f_1)>0$, and $k_2$ is odd. Note that $t_1^{k_1}t_2^{k_2}$ is a relative $G$-invariant, hence $f_1(x_1,x_2)$ must be a relative $G$-invariant, on which $G$ acts by the inverse character. Since $c^2$ fixes $f_1$, it must fix also $t_1^{k_1}t_2^{k_2}$, so $k_1+k_2$ is even. eld[x_1,x_2]$ is a relative $\mathrm{D}_8\cong G/\langle c^2\rangle$-invariant of degree at most $5-(k_1+k_2)$. eld[x_1,x_2]$, hence $t_1^{k_1}t_2^{k_2}=t_1t_2$, and so $f$ is a relative invariant of degree at most $3$. It is easy to see that up to non-zero scalar multiples, the only relative $G$-invariants in eld[x_1,x_2]_{\le 3}$ with non-trivial weight are $x_1x_2$ and $x_1^2-x_2^2$, but none of $x_1x_2t_1t_2$ or $(x_1^2-x_2^2)t_1t_2$ is a $G$-invariant. We arrived at the desired contradiction, finishing the proof. \end{proof} \subsection{The group $(\mathrm{C}_3\times \mathrm{C}_3)\rtimes_{-1}\mathrm{C}_2$}\label{sec:C3xC3rtimesC2} In this section \[G:=(\mathrm{C}_3\times \mathrm{C}_3)\rtimes_{-1}\mathrm{C}_2=\langle a,b,c\mid 1=a^3=b^3=c^2,\ ab=ba,\ cac^{-1}=a^{-1},\ cbc^{-1}=b^{-1}\rangle.\] eld$ contains an element $\xi$ of multiplicative order $6$. So $\omega:=\xi^2$ has multiplicative order $3$, and consider the following irreducible $2$-dimensional representations of $G$: \[ \psi_1: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^2 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \] \[\psi_2: a\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^2 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}. \] eld^2$ endowed with the representation $\psi_j$ for $j=1,2$ (see Section~\ref{subsec:convention}). \begin{proposition}\label{prop:C3xC3rtimesC2} We have $\sepbeta(G,W_1\oplus W_2)\ge 6$. \end{proposition} \begin{proof} Note that $\ker(\psi_1)=\langle b\rangle$ and $\ker(\psi_2)=\langle a\rangle$. We have $G/\ker(\psi_1)\cong \mathrm{S}_3$ and $G/\ker(\psi_2)\cong \mathrm{S}_3$. Both these representations factor through the irreducible $2$-dimensional representation of $\mathrm{S}_3$. eld[W_j]/\mathcal{H}(G,W_j)$ as an $\mathrm{S}_3=G/\ker(\psi_j)$-module is isomorphic to the regular representation of $\mathrm{S}_3$. Thus $x_1^3-x_2^3$ spans the only minimal non-trivial $G$-invariant subspace in a eld[W_1]$ on which $\ker(\psi_2)$ acts trivially, and $y_1^3-y_2^3$ spans the only minimal non-trivial $G$-invariant subspace in a eld[W_2]$ on which $\ker(\psi_1)$ acts trivially. eld[W_1\oplus W_2]^G$ is generated by eld[W_2]^G$, and $f:=(x_1^3-x_2^3)(y_1^3-y_2^3)$. Consider the points $v=(w_1,w_2)=([1,0]^T,[1,0]^T)$ and $v'=(w'_1,w'_2)=([1,0]^T,[0,1]^T)$. We claim that all invariants of degree at most $5$ agree on $v$ and on $v'$, eld[W_2]^G$ agree on $v$ and on $v'$. On the other hand, $f(v)=1$ and $f(v')=-1$. 
The proof is finished. \end{proof} \begin{theorem} \label{thm:sepbeta((C3xC3)rtimesC2)} eld$ has an element of multiplicative order $6$. eld((\mathrm{C}_3\times \mathrm{C}_3)\rtimes_{-1} \mathrm{C}_2)=6$. \end{theorem} \begin{proof} eld(G)=6$. Thus the result follows from Proposition~\ref{prop:C3xC3rtimesC2}. \end{proof} \subsection{The group $\mathrm{C}_3\rtimes \mathrm{D}_8\cong (\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2$.} In this section \[G=\mathrm{C}_3\rtimes \mathrm{D}_8=(\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2=\langle a,b,c\mid a^6=b^2=c^2=1,\ ba=ab,\ cac=a^{-1},\ cbc=a^3b\rangle.\] eld$ contains an element $\omega$ of multiplicative order $6$, and consider the following irreducible $2$-dimensional representation of $G$: \[ \psi: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^{-1} \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}. \] The commutator subgroup of $G$ is $\langle a\rangle$. Denote by $\chi$ the $1$-dimensional representation of $G$ given by \[\chi:a\mapsto 1,\ b\mapsto -1,\ c\mapsto -1\] eld$ endowed with the representation $\chi$. \begin{proposition} \label{prop:mingen(C6xC2)rtimesC2)} eld[W\oplus U_\chi]^G$ is minimally generated by $(x_1x_2)^2$, $x_1^6+x_2^6$, $t^2$, $(x_1^6-x_2^6)x_1x_2t$. \end{proposition} \begin{proof} eld x_2$, $\field t$ in $\field[W\oplus U_\chi]$, hence $\field[W\oplus U_\chi]^{\langle a,b\rangle}$ is generated by $\langle a,b\rangle$-invariant monomials (see Lemma~\ref{lemma:V+U}). So $\field[W\oplus U_\chi]^{\langle a,b\rangle}$ is generated by $x_1^6$, $x_2^6$, $(x_1x_2)^2$, $t^2$, $x_1x_2t$. The element $c\in G$ fixes $(x_1x_2)^2$ and $t^2$, interchanges $x_1^6$ and $x_2^6$, and $c\cdot (x_1x_2t)=-x_1x_2t$. eld[W\oplus U_\chi]^G$ is generated by $(x_1x_2)^2$, $t^2$, $x_1^6+x_2^6$, $(x_1^6-x_2^6)x_1x_2t$, and $(x_1^6-x_2^6)^2$. The latter generator can be omitted, since we have $(x_1^6-x_2^6)^2=(x_1^6+x_2^6)^2-4(x_1^2x_2^2)^3$. \end{proof} \begin{theorem}\label{thm:sepbeta((C6xC2)rtimesC2)} eld$ contains an element $\xi$ of multiplicative order $12$. eld((\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2)=9$. \end{theorem} \begin{proof} Set $v:=([1,\xi]^T,1)\in W\oplus U_\chi$ and $v':=([1,\xi]^T,-1)\in W\oplus U_\chi$. The invariant $(x_1^6-x_2^6)x_1x_2t$ has different values on $v$ and $v'$. By Proposition~\ref{prop:mingen(C6xC2)rtimesC2)} we see that all $G$-invariants of degree less than $9$ agree on $v$ and $v'$. This shows that eld((\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2)\ge 9$. eld((\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2)=9$ by \cite[Proposition 3.5]{cziszter-domokos-szollosi}, implying the reverse inequality eld((\mathrm{C}_6\times \mathrm{C}_2)\rtimes \mathrm{C}_2)\le 9$. \end{proof} eld(H)$} \label{sec:HxC2} \subsection{The group $\mathrm{Dic}_8\times \mathrm{C}_2$} In this section \[G=\mathrm{Dic}_8\times \mathrm{C}_2=\langle a,b\mid a^4=1,\ b^2=a^2,\ ba=a^3b\rangle \times \langle c\mid c^2=1\rangle\] is the direct product of the quaternion group of order $8$ and the group of order $2$. eld$ has an element $\mathrm{i}$ of multiplicative order $4$. 
Then $G$ has two irreducible $2$-dimensional representations (up to isomorphism), namely \[ \psi_1: a\mapsto \begin{bmatrix} \mathrm{i}& 0 \\ 0 & -\mathrm{i}\\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} \] (this is the only $2$ dimensional representation of $\mathrm{Dic}_8$ lifted to $G$ by the projection from $G$ to its direct factor $\mathrm{C}_2$), and \[\psi_2: a\mapsto \begin{bmatrix} \mathrm{i}& 0 \\ 0 &-\mathrm{i}\\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix},\] the representation $\psi_1$ tensored with the non-trivial representation $c\mapsto -1$ of $\mathrm{C}_2$. The other irreducible representations of $G$ are $1$-dimensional, and can be labeled by eld^\times\times \field^\times$ as follows: eld^\times$ given by \[\chi:a\mapsto \chi_1,\quad b\mapsto \chi_2, \quad c\mapsto \chi_3.\] eld$ endowed with the representation $\chi$. Set \[U:=\bigoplus_{\chi\in \widehat G}U_\chi.\] The coordinate functions on $W_1$ are denoted by $x_1,x_2$, the coordinate functions on $W_2$ are $y_1,y_2$, and the coordinate function on $U_\chi$ is $t_\chi$ for $\chi\in \widehat G$ (see Section~\ref{subsec:convention}). \begin{proposition}\label{prop:Vi+U} For $i=1,2$ we have $\beta(G,W_i \oplus U)=6$. \end{proposition} \begin{proof} First we deal with the case $i=1$. It is well known (see for example the proof of Proposition~\ref{prop:betasep-index2}) that \begin{equation}\label{eq:Dic8 mingen} eld[(x_1x_2)^2,\ x_1^4+x_2^4,\ x_1x_2(x_1^4-x_2^4)].\end{equation} It is easy to verify that a Gr\"obner basis with respect to the lexicographic term order induced by $x_1>x_2$ of the Hilbert ideal $\mathcal{H}(G,W_1)$ is $x_1^2x_2^2$, $x_1^4+x_2^4$, $x_2^6$, $x_1x_2^5$. Using this Gr\"obner basis one can show that setting \[s_{(1,-1,1)}:=x_1x_2,\quad s_{(-1,1,1)}:=x_1^2+x_2^2, \quad s_{(-1,-1,1)}:=x_1^2-x_2^2,\] eld[x_1,x_2]$ of the Hilbert ideal $\mathcal{H}(G,W_1)$ is \goodbreak eld\{x_1,x_2\}\oplus \field s_{(1,-1,1)}\oplus \field s_{(-1,1,1)}\oplus eld\{x_1^3,x_2^3\} eld\{x_1^2x_2,x_1x_2^2\} eld s_{(1,-1,1)}s_{(-1,-1,1)} eld s_{(-1,1,1)}s_{(-1,-1,1)} eld\{x_1^5,x_2^5\}. \end{align*} The direct summands above are minimal $G$-invariant subspaces. eld s_\chi$ via the character $\chi$. eld[W_1\oplus U]^G$ is generated over eld[W_1]^G$ by products of elements from $\{s_\chi, \quad t_{\chi'}\mid \chi\in\{(1,-1,1),(-1,1,1),(-1,-1,1)\}, \ \chi'\in \widehat G\}$, with at most two factors of the form $s_\chi$, and no $G$-invariant proper subproduct. Since the Davenport constant of the group $\widehat G$ is $4$, it is sufficient to consider products with at most $4$ factors. All of them have degree at most $6$. The case $i=2$ is essentially the same. Indeed, eld[W_1]$ and $\field[W_2]$, which is also a $\mathrm{Dic}_8$-module isomorphism. The subgroup $\langle c\rangle$ acts trivially on $\field[W_1]$, and acts by multiplication by $-1$ of the variables $y_1,y_2$ in $\field[W_2]$. Therefore replacing $x_1$ by $y_1$ and $x_2$ by $y_2$ in the corresponding formulae above, we get the generators of eld[(y_1y_2)^2,\ y_1^4+y_2^4,\ y_1y_2(y_1^4-y_2^4)]$, eld[W_2]$ of the Hilbert ideal. So in the same way as in the above paragraph, we get the conclusion $\beta(W_2\oplus U)=6$. \end{proof} \begin{proposition} \label{prop:Q8xC2,V1+V2} We have $\beta(G,W_1\oplus W_2)=6$. 
\end{proposition} \begin{proof} The element $a^2\in G$ multiplies each of the indeterminates $x_1,x_2,y_1,y_2$ by $-1$. Since $G$-invariants are $a^2$-invariant, they must involve only monomials of even degree. eld(G)=7$ by \cite[Proposition 3.2]{cziszter-domokos-szollosi}. We conclude that $\beta(G,W_1\oplus W_2)\le 6$. Note also that $\beta(G,W_1\oplus W_2)\ge \beta(G,W_1)$, and we saw in the course of the proof of Proposition~\ref{prop:Vi+U} that $\beta(G,W_1)=6$ (the third generator in \eqref{eq:Dic8 mingen} is not symmetric in $x_1$ and $x_2$, hence can not be expressed by the first two). \end{proof} \begin{proposition}\label{prop:ralative invariants Q8xC2} Let $v:=(w_1,w_2)\in W_1\oplus W_2$ with $w_1\neq 0$ and $w_2\neq 0$. Then for any $\chi\in \widehat G$ there exists a relative invariant $f$ of weight $\chi$ such that $\deg(f)\le 4$ and $f(v)\neq 0$. \end{proposition} \begin{proof} \[\begin{array}{c|c} \chi & \text{relative invariants} \\ \hline \hline (-1,1,1) & f_1:=x_1^2+x_2^2, \quad f_2:=x_1x_2(x_1^2-x_2^2) \\ \hline (-1,1,-1) & f_1:=x_1y_1+x_2y_2,\quad f_2:=x_1x_2(x_1y_1-x_2y_2), \quad f_3:=x_2^3y_1-x_1^3y_2 \\ \hline (1,1,-1) & f_1:=x_1y_2-x_2y_1,\quad f_2:=x_1x_2(x_1y_2+x_2y_1), \quad f_3:=x_2^3y_2+x_1^3y_1 \end{array}\] A row in the above table contains a character $\chi\in \widehat G$ and relative invariants of weight $\chi$ of degree at most $4$, such that the common zero-set of these relative invariants is contained in the union of $W_1\oplus \{0\}$ and $W_2\oplus \{0\}$. For the other non-trivial weights the result can be deduced without further computation using Lemma~\ref{lemma:auto}. Indeed, the automorphism group of $G$ contains the automorphism group of its subgroup $\mathrm{Dic}_8$ as a subgroup. The automorphism group of $\mathrm{Dic}_8$ acts (on the right) therefore on $\widehat G$ (an automorphism $\alpha$ sends $\chi$ to $\chi\circ\alpha$), and any non-trivial $\chi \in \widehat G$ is in the orbit of one of the three weights in the above table. Moreover, for an automorphism $\alpha$ of $\mathrm{Dic}_8$ (viewed as an automorphism of $G$) we have that $\psi_1\circ \alpha\cong \psi_1$ and $\psi_2\circ\alpha\cong \psi_2$. Observe that for $(w_1,w_2)\in W_1\oplus W_2$, the condition that none of $w_i$ ($i=1,2)$ is zero is equivalent to the condition that $\{g\cdot (w_1,w_2)\mid g\in G\}$ spans eld$-vector space. Therefore by Lemma~\ref{lemma:auto} (iii), no $(w_1,w_2)$ with $w_1\neq 0$, $w_2\neq 0$ is contained in the common zero locus eld[W_1\oplus W_2]^{G,\chi\circ\alpha}, \ \deg(f)\le 4)$ if no such element of $W_1\oplus W_2$ is contained in eld[W_1\oplus W_2]^{G,\chi}, \ \deg(f)\le 4)$. eld[W_1\oplus W_2]^G$. \end{proof} \begin{theorem}\label{thm:sepbeta(Dic8xC2)} eld$ contains an element of multiplicative order $8$. eld(\mathrm{Dic}_8\times \mathrm{C}_2)=6$. \end{theorem} \begin{proof} By Theorem~\ref{thm:sepbeta index two} eld(\mathrm{Dic}_8)=6$, and since $\mathrm{Dic}_8$ is a direct factor of $G$, eld(\mathrm{Dic}_8)=6$. Now we turn to the reverse inequality. By Lemma~\ref{lemma:spanning invariants} (iii) it is sufficient to deal with the case when eld|$ is large enough so that we can apply Lemma~\ref{lemma:multfree}. Therefore it remains to prove that $\sepbeta(G,V)\le 6$ where $V=W_1\oplus W_2\oplus U$. That is, we have to show that if all invariants of degree at most $6$ take the same value on $v,v'\in V$, then $G\cdot v=G\cdot v'$. So $v=(w_1,w_2,u_{\chi}\mid \chi\in \widehat G)$ and $v'=(w'_1,w'_2,u'_\chi \mid \chi\in \widehat G)$. 
It follows from Proposition~\ref{prop:Q8xC2,V1+V2} that replacing $v'$ by an appropriate element in its $G$-orbit, we may assume that $w_1=w'_1$ and $w_2=w'_2$. If $w_1=0$ or $w_2=0$, then $v$ and $v'$ belong to the same orbit by Proposition~\ref{prop:Vi+U}. For any $\chi\in \widehat G$ there exists a relative invariant eld[W_1\oplus W_2]^{G,\chi^{-1}}$ with $\deg(f)\le 4$ and $f(w_1,w_2)\neq 0$ by Proposition~\ref{prop:ralative invariants Q8xC2}. eld[V]^G$: it has degree at most $5$, and so from $(ft_{\chi})(v)=(ft_{\chi})(v')$ we deduce $u_\chi=u'_\chi$. This holds for all $\chi\in\widehat G$, thus we showed $v=v'$, as claimed. \end{proof} \subsection{The group $\mathrm{Dic}_{12}\times \mathrm{C}_2$.} In this section \[G=\mathrm{Dic}_{12}\times \mathrm{C}_2=\langle a,b\mid a^6=1,\ b^2=a^3,\ ba=a^{-1}b\rangle \times \langle c\mid c^2=1\rangle.\] eld$ has an element $\xi$ of multiplicative order $12$; set $\omega:=\xi^2$ and $\mathrm{i}:=\xi^3$, so $\omega$ has multiplicative order $6$ and $\mathrm{i}$ has multiplicative order $4$. Consider the following irreducible $2$-dimensional representations of $G$: \[ \psi_1: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^ {-1}\\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix}\] \[\psi_2: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^{-1} \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} \] \[\psi_3: a\mapsto \begin{bmatrix} \omega^2 & 0 \\ 0 & \omega^{-2} \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix} \] \[\psi_4: a\mapsto \begin{bmatrix} \omega^2 & 0 \\ 0 & \omega^{-2} \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}. \] The other irreducible representations of $G$ are $1$-dimensional, and can be labelled by eld^\times$, where $\chi=(\chi_1,\chi_2)\in \widehat G$ is identified with the representation \[\chi:a\mapsto \chi_1^2,\ b\mapsto \chi_1,\ c\mapsto \chi_2\] (note that $\langle a^2 \rangle$ is the commutator subgroup of $G$, so $a^2$ is in the kernel of any $1$-dimensional representation of $G$). eld$ endowed with the representation $\chi$, and set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. The following result can be easily obtained using the CoCalc platform \cite{CoCalc}: \begin{proposition}\label{prop:Dic12xC2,V3+V4+U} eld$ has characteristic $0$. Then we have the equality $\beta(G,V)=8$ if $V$ is any of the $G$-modules \begin{align*}W_1\oplus W_4 \oplus U,\ W_2\oplus W_4\oplus U,\ W_3\oplus W_4\oplus U, \\ \ W_1\oplus W_2\oplus W_4,\ W_1\oplus W_3\oplus W_4, \ W_2\oplus W_3\oplus W_4. \end{align*} \end{proposition} \begin{remark} Proposition~\ref{prop:Dic12xC2,V3+V4+U} implies in particular that $\beta(W_i\oplus W_j)\le 8$ for all $1\le i<j\le 3$. On the other hand, also using computer we found that $\beta(G,W_1\oplus W_2\oplus W_3)=9$, and Proposition~\ref{prop:Dic12xC2,V3+V4+U} can not yield immediately a better upper bound for $\sepbeta(G,W_1\oplus W_2\oplus W_3)$. Indeed, consider $v=(w_1,w_2,w_3):=([1,0]^T,[1,0]^T,[1,1]^T)\in W_1\oplus W_2\oplus W_3$ and $v'=(w'_1,w'_2,w'_3):=([1,0]^T,[1,0]^T,[-1,-1]^T)\in W_1\oplus W_2\oplus W_3$. Then $(w_1,w_2)=(w'_1,w'_2)$, $b^2c\cdot (w_1,w_3)=(w'_1,w'_3)$, $c\cdot (w_2,w_3)=(w'_2,w'_3)$. 
So all pairs $(w_i,w_j)$ and $(w'_i,w'_j)$ have the same $G$-orbit. However, $v$ and $v'$ have different $G$-orbits, because the $G$-invariant $x_2y_2z_1+x_1y_1z_2$ separates them. \end{remark} \begin{proposition}\label{prop:Dic12xC2,V1+V2+V3} eld$ has characteristic $0$. For $V:=W_1\oplus W_2\oplus W_3$ we have the inequality $\sepbeta(G,V)\le 8$. \end{proposition} \begin{proof} Assume that for $v=(w_1,w_2,w_3)\in V$, $v'=(w'_1,w'_2,w'_3)\in V$ we have eld[V]^G$ with $\deg(f)\le 8$. We need to show that $G\cdot v=G\cdot v'$. By Proposition~\ref{prop:Dic12xC2,V3+V4+U} we know that $\beta(G,W_i\oplus W_j)\le 8$ for all $i,j\in \{1,2,3,4\}$. In particular, $G\cdot (w_1,w_2)=G\cdot (w'_1,w'_2)$. So replacing $v'$ by an appropriate element in its orbit we may assume that $w'_1=w_1$, $w'_2=w_2$, and moreover, both $w_1$ and $w_2$ are non-zero. We shall show that necessarily $w_3=w'_3$. eld[V]^G$: \begin{align*} f_1:=x_2y_2z_1+x_1y_1z_2, \quad f_2:=y_1y_2(x_2y_2z_1-x_1y_1z_2), \quad f_3:=x_1^3y_1z_1+x_2^3y_2z_2, \\ \ f_4:=x_2^3y_1z_1-x_1^3y_2z_2, \qquad f_5:=x_1y_2^3z_1-x_2y_1^3z_2. \end{align*} For $w\in V$ consider the matrices \begin{align*}M_1(w)&:=\begin{bmatrix} x_2(w)y_2(w)& x_1(w)y_1(w) \\ x_2(w)y_1(w)y_2^2(w) & -x_1(w)y_1^2(w)y_2(w) \end{bmatrix} \\ M_2(w)&:=\begin{bmatrix} x_2(w)y_2(w)& x_1(w)y_1(w) \\ x_1^3(w)y_1(w) & x_2^3(w)y_2(w) \end{bmatrix} \\ M_3(w)&:=\begin{bmatrix} x_2^3(w)y_1(w) & -x_1^3(w)y_2(w) \\ x_1(w)y_2^3(w) & -x_2(w)y_1^3(w) \end{bmatrix}. \end{align*} The definition of the $f_j$ implies that for any $w\in V$ we have the matrix equalities \begin{align*} M_1(w)\cdot \begin{bmatrix} z_1(w)\\ z_2(w)\end{bmatrix}&= \begin{bmatrix} f_1(w) \\ f_2(w) \end{bmatrix} \\ M_2(w)\cdot \begin{bmatrix} z_1(w)\\ z_2(w)\end{bmatrix}&= \begin{bmatrix} f_1(w) \\ f_3(w) \end{bmatrix} \\ M_3(w)\cdot \begin{bmatrix} z_1(w)\\ z_2(w)\end{bmatrix}&= \begin{bmatrix} f_4(w) \\ f_5(w) \end{bmatrix} \end{align*} Note that $M_j(v)=M_j(v')$ for $j=1,2,3$ (because $(w_1,w_2)=(w'_1,w'_2)$), and since $\deg(f_j)\le 5$, by assumption, we have $f_j(v)=f_j(v')$ for $j=1,2,3,4,5$. By basic linear algebra we conclude that $\begin{bmatrix}z_1(v) \\ z_2(v)\end{bmatrix} =\begin{bmatrix} z_1(v') \\ z_2(v')\end{bmatrix}$, unless the matrix $M_j(v)$ has zero determinant for all $j\in \{1,2,3\}$. We claim that this is not the case. Suppose to the contrary that $\det M_j(v)=0$ for $j=1,2,3$. Then $\det M_1(v)=0$ says that one of $x_1(v)$, $x_2(v)$, $y_1(v)$, $y_2(v)$ equals zero. Suppose for example that $x_1(v)=0$. Then $x_2(v)\neq 0$ (as $w_2\neq 0$), and $\det M_2(v)=0$ yields $y_2(v)=0$, implying in turn that $y_1(v)\neq 0$. Then $\det M_3(v) \neq 0$, a contradiction. The cases when $x_2(v)$, $y_1(v)$, or $y_2(v)$ is zero can be dealt with similarly. \end{proof} \begin{proposition}\label{prop:Dic12xC2,3 summands} eld$ has characteristic $0$. Let $i,j\in \{1,2,3\}$, $i\neq j$, and $\chi\in \widehat G$. Then $\sepbeta(G,W_i\oplus W_j\oplus U_\chi)\le 8$. 
\end{proposition} \begin{proof} Using the CoCalc platform \cite{CoCalc} we verified that for a $G$-module of the form $V=W_i\oplus W_j\oplus U_\chi$ we have $\beta(G,V)>8$ if and only if $V$ is as in the table below: \[\begin{array}{c|c} eld[V]^G \\ \hline W_2\oplus W_3\oplus U_{(\mathrm{i},-1)} & (y_1z_1+\mathrm{i}y_2z_2)t,\ y_1y_2(y_1z_1-\mathrm{i}y_2z_2)t,\ (y_2z_1^5-\mathrm{i}y_1z_2^5)t \\ W_2\oplus W_3\oplus U_{(-\mathrm{i},-1)} & (y_1z_1-\mathrm{i}y_2z_2)t,\ y_1y_2(y_1z_1+\mathrm{i}y_2z_2)t,\ (y_2z_1^5+\mathrm{i}y_1z_2^5)t \\ W_1\oplus W_3\oplus U_{(\mathrm{i},1)} & (x_1z_1+\mathrm{i}x_2z_2)t,\ x_1x_2(x_1z_1-\mathrm{i}x_2z_2)t,\ (x_2z_1^5-\mathrm{i}x_1z_2^5)t \\ W_1\oplus W_3\oplus U_{(-\mathrm{i},1)} & (x_1z_1-\mathrm{i}x_2z_2)t,\ x_1x_2(x_1z_1+\mathrm{i}x_2z_2)t,\ (x_2z_1^5+\mathrm{i}x_1z_2^5)t \\ W_1\oplus W_2\oplus U_{(-1,-1)} & (x_2y_1+x_1y_2)t,\ x_1x_2(x_2y_1-x_1y_2)t,\ (x_1y_1^5-x_2y_2^5)t \\ W_1\oplus W_2\oplus U_{(1,-1)} & (x_2y_1-x_1y_2)t,\ x_1x_2(x_2y_1+x_1y_2)t,\ (x_1y_1^5+x_2y_2^5)t \end{array} \] We shall show that $\sepbeta(G,V)\le 8$ for $V=W_2\oplus W_3\oplus U_{(\mathrm{i},-1)}$; the argument for the other $V$ in the table above is the same. Take $v=(w_2,w_3,u)\in V$ and $v'=(w'_2,w'_3,u')\in V$, and assume that eld[V]^G$ with $\deg(f)\le 8$. We need to show that then $G\cdot v=G\cdot v'$. Recall that the coordinate functions on $W_2$ are denoted by $y_1,y_2$, on $W_3$ by $z_1,z_2$, and on $U_{(\mathrm{i},-1)}$ by $t$. By Proposition~\ref{prop:Dic12xC2,V3+V4+U} we know that if $w_2=0$, then $w'_2=0$, and $(w_3,u)$ and $(w'_3,u')$ have the same $G$-orbit, Similarly, if $w_3=0$, then $G\cdot v=G\cdot v'$. So we may assume that $w_2\neq 0$ and $w_3\neq 0$, and again by Proposition~\ref{prop:Dic12xC2,V3+V4+U}, by replacing $v'$ by an appropriate element in its $G$-orbit, we may assume that $w_2=w'_2$ and $w_3=w'_3$. It remains to show that $u=u'$. Denote by $tf_1$, $tf_2$, $tf_3$ the $G$-invariants on $V$ given in the table. All have degree less than $8$, so $(tf_j)(v)=(tf_j)(v')$ holds for $j=1,2,3$. One can easily deduce from $w_2\neq 0$ and $w_3\neq 0$ that $f_j(w_2,w_3)\neq 0$ for some $j\in \{1,2,3\}$, hence $t(v)=t(v')$, i.e. $u=u'$. \end{proof} \begin{theorem}~\label{thm:sepbeta(Dic12xC2)} eld(\mathrm{Dic}_{12}\times \mathrm{C}_2)=8$. \end{theorem} \begin{proof} Since $\mathrm{Dic}_{12}$ is a homomorphic image of $G$, we have the obvious inequality eld(\mathrm{Dic}_{12})$, and by Theorem~\ref{thm:sepbeta index two} eld(\mathrm{Dic}_{12})=8$. By Lemma~\ref{lemma:multfree} it remains to prove that $\sepbeta(G,V)\le 8$, where $V$ is the $G$-module $V:=W_1\oplus W_2\oplus W_3\oplus W_4\oplus U$. Let $v=(w_1,w_2,w_3,w_4,u)\in V$, $v'=(w'_1,w'_2,w'_3,w'_4,u')\in V$, and assume that \begin{equation} \label{eq:Dec12xC2,f(v)=f(v')} eld[V]^G\text{ with }\deg(f)\le 8. \end{equation} We shall show that $v$ and $v'$ have the same $G$-orbit. \emph{Case 1:} There exists some $i,j\in \{1,2,3\}$, $i\neq j$, such that $w_i\neq 0$ and $w_j\neq 0$. If $w_1\neq 0$ then $\mathrm{Stab}_G(w_1)=\ker(\psi_1)=\langle b^2c\rangle$ has order $2$. If $w_2\neq 0$ then $\mathrm{Stab}_G(w_2)=\ker(\psi_2)=\langle c\rangle$ has order $2$. Since $\langle b^2c\rangle\cap \langle c\rangle=\{1_G\}$ and no non-zero element of $W_3$ is fixed by $b^2c$ or $c$, we have that $\mathrm{Stab}_G(w_i,w_j)$ is trivial. 
By Proposition~\ref{prop:Dic12xC2,V3+V4+U} we know that $G\cdot (w_i,w_j)=G\cdot (w'_i,w'_j)$, so replacing $v'$ by an appropriate element in its $G$-orbit, we may assume that $w'_i=w_i$ and $w'_j=w_j$. Take a component $w\in\{w_1,w_2,w_3,w_4,u_\chi\mid \chi\in \widehat G\}$ of $v$ different from $w_i$ and $w_j$, and let $w'$ be the corresponding component of $v'$. By Proposition~\ref{prop:Dic12xC2,V3+V4+U}, Proposition~\ref{prop:Dic12xC2,V1+V2+V3} and Proposition~\ref{prop:Dic12xC2,3 summands} we conclude that $G\cdot (w_i,w_j,w)=G\cdot (w_i,w_j,w')$, so $w'$ is in the orbit of $w$ with respect to the stabilizer of $(w_i,w_j)$, implying in turn that $w=w'$. Since this holds for all components $w$ and $w'$ of $v$ and $v'$, we have $v=v'$. \emph{Case 2:} At most one of $w_1,w_2,w_3$ is non-zero, so there exists an $i\in \{1,2,3\}$ such that $w_j=0$ for each $j\in \{1,2,3\}\setminus \{i\}$. As $\sepbeta(W_j)\le 8$ for all $j\in \{1,2,3\}\setminus \{i\}$, we conclude that also $w'_j=0$ for all $j\in \{1,2,3\}\setminus \{i\}$. Thus $v$ and $v'$ belong to a submodule $X$ of $V$ for which $\beta(G,X)\le 8$ by Proposition~\ref{prop:Dic12xC2,V3+V4+U}, and hence \eqref{eq:Dec12xC2,f(v)=f(v')} implies $G\cdot v=G\cdot v'$. \end{proof} \subsection{The group $\mathrm{A}_4\times \mathrm{C}_2$.} In this section \[G=\mathrm{A}_4\times \mathrm{C}_2=\langle a,b,c\mid a^2=b^2=c^3=1,\ ab=ba,\ cac^{-1}=b, \ cbc^{-1}=ab \rangle \times \langle d\mid d^2=1\rangle.\] Consider the following irreducible $3$-dimensional representations of $G$: \[ \psi_1: a\mapsto \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad d \mapsto \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\] \[\psi_2: a\mapsto \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad d \mapsto \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.\] eld^3$ endowed with the representation $\psi_j$. eld$ contains an element $\omega$ of multiplicative order $3$. Then the other irreducible representations of $G$ are $1$-dimensional, and can be labelled by eld^\times$, where $\chi=(\chi_1,\chi_2)\in \widehat G$ is identified with the representation \[\chi:a\mapsto 1,\ b\mapsto 1,\ c\mapsto \chi_1, \ d\mapsto \chi_2\] (note that $\langle a,b \rangle$ is the commutator subgroup of $G$). eld$ endowed with the representation $\chi$, and set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. \begin{proposition}\label{prop:A4xC2,Vi,U} We have the equalities \begin{itemize} \item[(i)] $\beta(G,W_1)=6$. \item[(ii)] $\beta(G,W_2\oplus U)=6$. \end{itemize} \end{proposition} eld[W_1]^{\langle a,b,d\rangle}$ is generated by the monomials $x_1^2$, $x_2^2$, $x_3^2$. The group element $c$ permutes cyclically these monomials, hence eld[W_1]^G$ is generated by $x_1^2+x_2^2+x_3^2$, $x_1^2x_2^2+x_2^2x_3^2+x_1^2x_3^2$, $x_1^2x_2^2x_3^2$, $x_1^4x_2^2+x_2^4x_3^2+x_1^2x_3^4$, and since the elements with degree less than $6$ are symmetric, they can not form a generating set. Therefore we have (i). 
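Invariance of the four polynomials listed above under $\psi_1(G)$ only involves the sign changes coming from $a$, $b$, $d$ and the cyclic permutation of the coordinates coming from $c$; this can be double-checked with a short symbolic computation, for instance the following sympy sketch (the substitution dictionaries are our own encoding of this action, and invariance under the cyclic substitution used below is equivalent to invariance under its inverse).
\begin{verbatim}
from sympy import symbols, expand

x1, x2, x3 = symbols('x1 x2 x3')

gens = [x1**2 + x2**2 + x3**2,
        x1**2*x2**2 + x2**2*x3**2 + x1**2*x3**2,
        x1**2*x2**2*x3**2,
        x1**4*x2**2 + x2**4*x3**2 + x1**2*x3**4]

# sign changes induced by psi_1(a), psi_1(b), psi_1(d),
# and a cyclic permutation of the coordinates for psi_1(c)
subs_list = [{x1: -x1, x2: -x2},
             {x2: -x2, x3: -x3},
             {x1: -x1, x2: -x2, x3: -x3},
             {x1: x2, x2: x3, x3: x1}]

for g in gens:
    for s in subs_list:
        assert expand(g.xreplace(s) - g) == 0
\end{verbatim}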
(ii) By a similar reasoning, eld[W_2]^G$ is generated by $y_1^2 + y_2^2 + y_3^2$, $y_1y_2y_3$, $y_1^2y_2^2 + y_2^2y_3^2 + y_1^2y_3^2$, $y_1^4y_2^2 + y_2^4y_3^2 + y_3^4y_1^2$, hence $\beta(G,W_2)= 6$. The factor group $G/G'=G/\langle a,b\rangle$ is cyclic of order $6$, hence its Davenport constant is $6$, implying that $\beta(G,U)=6$. Following the notation of Lemma \ref{lemma:V+U}, write $A_\chi$ for a set of eld[W_2]^{G,\chi}$ of $\field[W_2]^{G,\chi}\cap \mathcal{H}(G,W_2)$. By Lemma \ref{lemma:V+U} it is sufficient to show that (for some choice of the sets $A_\chi$) the polynomials in the following set have degree at most $6$: \begin{center} $C:=\{ht_{\chi^{(1)}}\cdots t_{\chi^{(k)}}\mid \chi^{(1)},\dots \chi^{(k)}$ is a product-one free sequence over $\widehat G$, \\ $h\in A_\chi$ where $\chi^{-1}=\chi^{(1)}\cdots \chi^{(k)}\}.$ \end{center} eld[W_2]^{G,\chi}$ is non-zero only if $\chi_2=1$. Moreover, $a,b\in \ker(\chi)$ for any $\chi\in \widehat G$, hence eld[W_2]^{\langle a,b\rangle}$. eld[W_2]^{\langle a,b\rangle}$ is generated by $y_1^2,y_2^2,y_3^2,y_1y_2y_3$, and eld[y_1^2,y_2^2,y_3^2,y_1y_2y_3]^{\langle c\rangle}$ is generated by $y_1^2+y_2^2+y_3^2$, $y_1^2y_2^2+y_2^2y_3^2+y_1^2y_3^2$, $y_1^4y_2^2+y_2^4y_3^2+y_1^2y_3^4$, $y_1y_2y_3$. Set \[s_{\omega}:=y_1^2+\omega y_2^2+\omega^2 y_3^2,\quad s_{\omega^2}:=y_1^2+\omega^2 y_2^2+\omega y_3^2.\] We may take $A_{(\omega,1)}:=\{s_{\omega}, s^2_{\omega^2}\}$ and $A_{(\omega^2,1)}:=\{s_{\omega^2}, s^2_{\omega}\}$ (this can be verified by computing a Gr\"obner basis in $\mathcal{H}(G,W_2)$). Now consider an element $f=ht_{\chi^{(1)}}\cdots t_{\chi^{(k)}}\in C$. Here $\deg(h)=2$, or $\deg(h)=4$ and $h$ is the product of two $\langle a,b,d\rangle$-invariants. After a possible renumbering we may assume that $\chi^{(i)}_2=-1$ for each $i=1,...,\ell$ and $\chi^{(j)}_2=1$ for each $j=\ell+1,...,k$. Since $h$ is a $\langle d\rangle$-invariant, $\ell=2\ell'$ must be even. We have $f=h\prod_{i=1}^{\ell'}(t_{\chi^{(i)}}t_{\chi^{(i+\ell')}})\prod_{j=\ell+1}^{k}t_{\chi^{(j)}}$. So the $G$-invariant $f$ is written as a product of relative invariants of weight $\chi\in \langle (\omega,1)\rangle\cong\mathrm{C}_3$, where the number of factors is $k-\ell'+1$ or $k-\ell'+2$, depending on whether $\deg(h)=2$ or $\deg(h)=4$. On the other hand, the number of factors is at most $\mathsf{D}(\mathrm{C}_3)=3$, since $ \chi^{(1)},\dots \chi^{(k)}$ is a product-one free sequence over $\widehat G$. Since $\deg(f)=\deg(h)+k$, we conclude in both cases that $\deg(f)\le 6$, and we are done. \end{proof} \begin{lemma}\label{lemma:A4xC2,stabilizer} For a non-zero $v\in W_1$ we have \begin{align*} |\mathrm{Stab}_G(v)|=\begin{cases} 3 &\text{ if } x_1^2(v)=x_2^2(v)=x_3^2(v) \\ 2 &\text{ if }x_j(v)=0 \text{ for a unique }j\in \{1,2,3\} \\ 4 &\text{ if } x_j(v)\neq 0 \text{ for a unique }j\in \{1,2,3\} \\ 1 &\text{ otherwise.}\end{cases}. \end{align*} \end{lemma} \begin{proof} One can easily find the non-zero elements of $W_1$ that occur as an eigenvector with eigenvalue $1$ for some non-identity element of $G$. \end{proof} \begin{proposition}\label{prop:A4xC2,V1+V2} We have the inequality $\sepbeta(G,W_1\oplus W_2)\le 6$. \end{proposition} \begin{proof} Take $v=(w_1,w_2), \ v'=(w'_1,w'_2)\in W_1\oplus W_2$ such that $f(v)=f(v')$ for all eld[W_1\oplus W_2]^G$ with $\deg(f)\le 6$. We need to show that $G\cdot v=G\cdot v'$. By Proposition~\ref{prop:A4xC2,Vi,U}, $w_1$ and $w'_1$ have the same $G$-orbit, so we assume that $w_1=w'_1$ (i.e. 
$x_1(v)=x_1(v')$, $x_2(v)=x_2(v')$, $x_3(v)=x_3(v')$). Moreover, we may assume that $G\cdot w_2=G\cdot w'_2$, and it is sufficient to deal with the case when both $w_1$ and $w_2$ are non-zero. By Lemma~\ref{lemma:A4xC2,stabilizer}, the stabilizer of $w_1$ in $G$ is non-trivial if and only if $(x_1x_2x_3)(v)=0$ or $x_1^2(v)=x_2^2(v)=x_3^2(v)$. This dictates the distinction of several cases below. \emph{Case I:} $(x_1x_2x_3)(v)\neq 0$ and $x_i^2(v)\neq x_j^2(v)$ if $i\neq j$. Consider the $G$-invariants \[f_1:=y_1^2+y_2^2+y_3^2,\qquad f_2:=x_1^2y_1^2+x_2^2y_2^2+x_3^2y_3^2, \qquad f_3:=x_1^4y_1^2+x_2^4y_2^2+x_3^4y_3^2.\] For $w\in V$ set \[M(w):=\begin{bmatrix} 1& 1& 1\\ x_1^2(w)& x_2^2(w)& x_3^2(w) \\ x_1^4(w)& x_2^4(w)& x_3^4(w) \end{bmatrix}.\] We have $\det M(w)=(x_1^2(w)-x_2^2(w))(x_1^2(w)-x_3^2(w))(x_2^2(w)-x_3^2(w))$, so $\det M(v)\neq 0$ by assumption, and as $M(v)$ depends only on $w_1=w'_1$, we have $M(v)=M(v')$. Using that $f_i(v)=f_i(v')$ for $i=1,2,3$ (since $\deg(f_i)\le 6$), we conclude \[\begin{bmatrix} y_1^2(v) \\ y_2^2(v)\\ y_3^2(v)\end{bmatrix} =M(v)^{-1}\cdot \begin{bmatrix} f_1(v)\\ f_2(v)\\ f_3(v)\end{bmatrix} =M(v')^{-1}\cdot \begin{bmatrix} f_1(v')\\ f_2(v')\\ f_3(v')\end{bmatrix} =\begin{bmatrix} y_1^2(v') \\ y_2^2(v')\\ y_3^2(v')\end{bmatrix}.\] Thus $y_i(v)=\pm y_i(v')$ for $i=1,2,3$. Taking into account that $G\cdot w_2=G\cdot w'_2$, one can easily see that either $y_i(v)=y_i(v')$ for $i=1,2,3$, so $v=v'$, and we are done, or for some $j\in \{1,2,3\}$ we have that $y_j(v)=y_j(v')$ and $y_i(v)=-y_i(v')$ for all $i\in \{1,2,3\}\setminus \{j\}$. By symmetry we may assume that \begin{equation}\label{eq:y3(v)=y3(v')} y_3(v)=y_3(v'), \qquad y_1(v)=-y_1(v'), \qquad y_2(v)=-y_2(v'). \end{equation} Consider the $G$-invariants \[f_4:=x_2x_3y_1+x_1x_3y_2+x_1x_2y_3, \qquad f_5:=x_1x_2x_3(x_1y_1+x_2y_2+x_3y_3).\] Then $f_4$ and $f_5$ have degree less than $6$, hence $f_4(v)=f_4(v')$ and $f_5(v)=f_5(v')$, implying by $y_3(v)=y_3(v')$ (see \eqref{eq:y3(v)=y3(v')}) that \[\begin{bmatrix} (x_2x_3)(v) & (x_1x_3)(v) \\ x_1(v) & x_2(v)\end{bmatrix} \cdot \begin{bmatrix} y_1(v) \\ y_2(v)\end{bmatrix} = \begin{bmatrix} (x_2x_3)(v) & (x_1x_3)(v) \\ x_1(v) & x_2(v)\end{bmatrix} \cdot \begin{bmatrix} y_1(v') \\ y_2(v')\end{bmatrix}.\] The assumptions for Case I guarantee that the determinant of the $2\times 2$ coefficient matrix above is non-zero, hence $\begin{bmatrix} y_1(v) \\ y_2(v)\end{bmatrix}=\begin{bmatrix} y_1(v') \\ y_2(v')\end{bmatrix}$, and so $v=v'$. \emph{Case II:} $(x_1x_2x_3)(v)\neq 0$ and $|\{x_1^2(v),x_2^2(v),x_3^2(v)\}|=2$, say $x_1^2(v)=x_2^2(v)\neq x_3^2(v)$. Similarly to Case I, by basic linear algebra we conclude from $f_1(v)=f_1(v')$ and $f_2(v)=f_2(v')$ that \begin{equation}\label{eq:y3^2(v)=y3^2(v')} (y_1^2+y_2^2)(v)=(y_1^2+y_2^2)(v') \text{ and } y_3^2(v)=y_3^2(v').\end{equation} Consider the $G$-invariant \[f_6:=x_3^2y_1^2+x_1^2y_2^2+x_2^2y_3^2.\] Now \eqref{eq:y3^2(v)=y3^2(v')}, $f_6(v)=f_6(v')$ imply that $y_1^2(v)=y_1^2(v')$ and $y_2^2(v)=y_2^2(v')$. So we have that \[y_1(v)=\pm y_1(v'),\quad y_2(v)=\pm y_2(v'),\quad y_3(v)=\pm y_3(v').\] Taking into account that $G\cdot w_2=G\cdot w'_2$ we conclude that either $w_2=w'_2$, and we are done, or for some $j\in \{1,2,3\}$ we have that $y_j(v)=y_j(v')$ and $y_i(v)=-y_i(v')$ for all $i\in \{1,2,3\}\setminus \{j\}$. So we have to deal with the cases II.a, II.b, II.c below: \emph{Case II.a:} $y_1(v)=-y_1(v')$, $y_2(v)=-y_2(v')$, $y_3(v)=y_3(v')$.
Consider the invariant \[f_7:=x_2x_3^3y_1+x_1^3x_3y_2+x_1x_2^3y_3.\] It has degree less than $6$, and from $f_4(v)=f_4(v')$, $f_7(v)=f_7(v')$ we conclude \[\begin{bmatrix} (x_2x_3)(v) & (x_1x_3)(v)\\ (x_2x_3^3)(v) & (x_1^3x_3)(v)\end{bmatrix} \cdot \begin{bmatrix} y_1(v) \\ y_2(v)\end{bmatrix}= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.\] The determinant of the $2\times 2$ matrix above is $x_1(v)x_2(v)x_3(v)^2(x_1(v)^2-x_3(v)^2)$, which is non-zero by the assumptions for Case II. Consequently, $y_1(v)=y_2(v)=0$, implying in turn that $v=v'$. \emph{Case II.b:} $y_1(v)=-y_1(v')$, $y_2(v)=y_2(v')$, $y_3(v)=-y_3(v')$. Similar to Case II.a, using invariants $f_4$ and $f_7$. \emph{Case II.c:} $y_1(v)=y_1(v')$, $y_2(v)=-y_2(v')$, $y_3(v)=-y_3(v')$. Similar to Case II.a, but instead of $f_7$ we have to use the invariant $x_2^3x_3y_1+x_1x_3^3y_2+x_1^3x_2y_3$. \emph{Case III.a:} Two of $x_1(v)$, $x_2(v)$, $x_3(v)$ are zero, say $x_1(v)=x_2(v)=0$, $x_3(v)\neq 0$. So $\mathrm{Stab}_G(w_1)$ has order $4$ by Lemma~\ref{lemma:A4xC2,stabilizer}, in fact $\mathrm{Stab}_G(w_1)=\langle a,bd\rangle$. Then $f_2(v)=f_2(v')$ implies $y_3(v)^2=y_3(v')^2$, $f_6(v)=f_6(v')$ implies $y_1(v)^2=y_1(v')^2$, and then $f_1(v)=f_1(v')$ implies $y_2(v)^2=y_2(v')^2$. Taking into account that $G\cdot w_2=G\cdot w'_2$ we conclude that either $w_2=w'_2$, and we are done, or for some $j\in \{1,2,3\}$ we have that $y_j(v)=y_j(v')$ and $y_i(v)=-y_i(v')$ for all $i\in \{1,2,3\}\setminus \{j\}$. In the latter case we have $w_2$ and $w'_2$ belong to the same orbit under $\langle a,bd\rangle=\mathrm{Stab}_G(w_1)$, hence $G\cdot v=G\cdot v'$. \emph{Case III.b:} $x_i(v)=0$ for a unique $i\in \{1,2,3\}$, say $x_3(v)=0$. Then $f_4(v)=f_4(v')$ implies $y_3(v)=y_3(v')$. The $G$-invariant \[f_8:=x_1^2x_3^2y_1^2+x_1^2x_2^2y_2^2+x_2^2x_3^2y_3^2\] shows that $y_2^2(v)=y_2^2(v')$, hence by $f_1(v)=f_1(v')$ we get $y_1^2(v)=y_1^2(v')$. Recall that $G\cdot w_2=G\cdot w'_2$, hence either $v=v'$ and we are done, or $y_1(v)=-y_1(v')$ and $y_2(v)=-y_2(v')$. Then $ad\cdot v=v'$, so $v$ and $v'$ have the same $G$-orbit. \emph{Case IV:} $x_1^2(v)=x_2^2(v)=x_3^2(v)$. Applying an element of $G$ and a rescaling on $W_1$ (see Section~\ref{subsec:direct sum decomp}) we may assume that $w_1=[1,1,1]^T$. We have the $G$-invariants \[f_9:=x_1x_2y_1y_2+x_1x_3y_1y_3+x_2x_3y_2y_3,\qquad f_{10}:=y_1y_2y_3.\] The multiset $\{f_4(v)=f_4(v'),f_9(v)=f_9(v'),f_{10}(v)=f_{10}(v')\}$ consists of the elementary symmetric polynomials of $y_1(v),y_2(v),y_3(v)$ (respectively of $y_1(v'),y_2(v'),y_3(v')$). Thus $w'_2$ is obtained from $w_2$ by permuting its coordinates. In fact $w_2$ can be taken to $w'_2$ by an even permutation of the coordinates, because $f_{11}(v)=f_{11}(v')$, where $f_{11}$ is the $G$-invariant \[f_{11}:=x_2x_3y_1y_2^2+x_1x_2y_1^2y_3+x_1x_3y_2y_3^2.\] It means that $w'_2$ belongs to the $\langle c\rangle$-orbit of $w_2$. Since $\mathrm{Stab}_G(w_1)=\langle c\rangle$, we conclude that $G\cdot v=G\cdot v'$. \end{proof} \begin{lemma}\label{lemma:A4xC2,stab(v1)trivial} Let $w_1$ be a non-zero element in $W_1$, and $w_2\in W_2$. \begin{itemize} \item[(i)] If $|\mathrm{Stab}_G(w_1)|\neq 3$, then for any $\chi\in \{(\omega,1),(\omega^2,1),(1,1)\}$ eld[W_1]^{G,\chi}$ with $\deg(f)\le 4$ such that $f(w_1)\neq 0$. \item[(ii)] If $|\mathrm{Stab}_G(w_1)|\notin \{2,4\}$, then for any $\chi\in \{(1,-1),(1,1)\}$ there exists a homogeneous eld[W_1]^{G,\chi}$ with $\deg(f)\le 3$ and $f(w_1)\neq 0$. 
\item[(iii)] If there exist $i,j\in\{1,2,3\}$ with $y_i^2(w_2)\neq y_j^2(w_2)$, then for any $\chi\in \{(\omega,1),(\omega^2,1),(1,1)\}$ eld[W_2]^{G,\chi}$ with $\deg(f)\le 4$ such that $f(w_2)\neq 0$. \end{itemize} \end{lemma} \begin{proof} Set \begin{align*} r_{(\omega,1)}:=x_1^2+\omega x_2^2+\omega^2 x_3^2 &\qquad \qquad s_{(\omega,1)}:=y_1^2+\omega y_2^2+\omega^2 y_3^2 \\ r_{(\omega^2,1)}:=x_1^2+\omega^2 x_2^2+\omega x_3^2 &\qquad \qquad s_{(\omega^2,1)}:=y_1^2+\omega^2 y_2^2+\omega y_3^2 \\ r_{(1,-1)}&:=x_1x_2x_3. \end{align*} Then $r_\chi$ and $s_\chi$ are relative $G$-invariants of weight $\chi$. (i) Take first $\chi=(\omega,1)$. Then $r_{(\omega,1)}$, $(r_{(\omega^2,1)})^2$ are both homogeneous relative invariants of degree at most $4$ and weight $\chi$, and their common zero locus in $W_1$ is $\{w\in W_1\mid x_1^2(w)=x_2^2(w)=x_3^2(w)\}$. Lemma~\ref{lemma:A4xC2,stabilizer} implies that $w_1$ does not belong to this common zero locus by the assumption on its stabilizer. The proof for $\chi=(\omega^2,1)$ is similar, one uses the relative invariants $r_{(\omega^2,1)}$, $(r_{(\omega,1)})^2$. For $\chi=(1,1)$ we can take $f=1$. (ii) For $\chi=(1,-1)$ we can take $f=r_{(1,-1)}$, because Lemma~\ref{lemma:A4xC2,stabilizer} implies $(x_1x_2x_3)(v_1)\neq 0$ by the assumption on the stabilizer of $w_1$. For $\chi=(1,1)$ we can take $f=1$. (iii) Similarly to the proof of (i), the common zero locus of the weight $(\omega,1)$ relative invariants $s_{(\omega,1)}$, $(s_{(\omega^2,1)})^2$ is $\{w\in W_2\mid y_1^2(w)=y_2^2(w)=y_3^2(w)\}$. Now $w_2$ does not belong to this common zero locus by assumption. The case of $\chi=(\omega^2,1)$ is settled using the relative invariants $s_{(\omega^2,1)}$, $(s_{(\omega,1)})^2$. \end{proof} \begin{theorem}\label{thm:sepbeta(A4xC2)} eld$ contains an element eld(\mathrm{A}_4\times \mathrm{C}_2)=6$. \end{theorem} eld(\mathrm{A}_4)$, and by Proposition~\ref{prop:alt_n} eld(\mathrm{A}_4)\ge 6$. By Lemma~\ref{lemma:spanning invariants} (iii), to prove the reverse inequality eld(G)\le 6$ it is sufficient to show that $\sepbeta^L(G)\le 6$ for eld$ is large enough to apply Lemma~\ref{lemma:multfree}, and so it is sufficient to prove $\sepbeta(G,V)\le 6$ for $V:=W_1\oplus W_2\oplus U$. Take $v=(w_1,w_2,u)\in V$, $v'=(w'_1,w'_2,u')\in V$, and assume that $f(v)=f(v')$ for all eld[V]^G_{\le 6}$. We need to show that then $G\cdot v=G\cdot v'$. By Proposition~\ref{prop:A4xC2,V1+V2} we know that $G\cdot (w_1,w_2)=G\cdot (w'_1,w'_2)$, so replacing $v'$ by an appropriate element in its $G$-orbit we may assume that $w'_1=w_1$, $w'_2=w_2$. By Proposition~\ref{prop:A4xC2,Vi,U} (ii) it is sufficient to deal with the case $w_1\neq 0$. The Davenport constant of $G/G'$ is $6$, therefore $G\cdot u=G\cdot u'$. By Proposition~\ref{prop:A4xC2,V1+V2} it is sufficient to deal with the case when $u\neq 0$, moreover, when $u$ has a non-zero component $u_\chi$ for some non-identity character $\chi\in \widehat G$. \emph{Case I:} $\mathrm{Stab}_G(w_1)$ is trivial. We claim that $u_{\chi}=u'_{\chi}$ for all $\chi\in \widehat G$, so $v=v'$. Since $\chi^2\in \langle (\omega,1)\rangle$ (respectively, $\chi^3\in \langle (1,-1)\rangle$), by Lemma~\ref{lemma:A4xC2,stab(v1)trivial} there exists a relative invariant eld[W_1]^{G,\chi^{-2}}$ eld[W_1]^{G,\chi^{-3}}$) such that $ft_\chi^2$ (respectively, $ft_\chi^3$) is a homogeneous $G$-invariant of degree at most $6$, and $f(v)=f(w_1)\neq 0$. 
From $(ft_\chi^2)(v)=(ft_\chi^2)(v')$ (respectively, $(ft_\chi^3)(v)=(ft_\chi^3)(v')$) we conclude that both $t_\chi^2(u)=t_\chi^2(u')$ and $t_\chi^3(u)=t_\chi^3(u')$. It follows that $t_\chi(u)=t_\chi(u')$, i.e. $u_\chi=u'_\chi$. \emph{Case II.a:} $|\mathrm{Stab}_G(w_1)|=3$ (consequently, by Lemma~\ref{lemma:A4xC2,stabilizer} we have $x_1^2(v)=x_2^2(v)=x_3^2(v)\neq 0$) and there exist $i,j\in \{1,2,3\}$ with $y_i^2(v)\neq y_j^2(v)$. By Lemma~\ref{lemma:A4xC2,stab(v1)trivial} (iii) for $\chi \in \langle (\omega,1)\rangle$ we conclude the existence of a homogeneous relative invariant $f\in \field[W_2]^{G,\chi}$ of degree at most $4$ with $f(w_2)\neq 0$. Moreover, the degree $3$ relative invariant $x_1x_2x_3\in \field[W_1]^{G,(1,-1)}$ does not vanish at $v$. In the same way as in Case I we conclude that for all $\chi\in \widehat G$ we have $t_\chi^2(v)=t_\chi^2(v')$ and $t_\chi^3(v)=t_\chi^3(v')$, implying in turn $t_\chi(v)=t_\chi(v')$, i.e. $u_\chi=u'_\chi$. This holds for all $\chi$, thus $v=v'$ in this case. \emph{Case II.b:} $|\mathrm{Stab}_G(w_1)|=3$ (consequently, by Lemma~\ref{lemma:A4xC2,stabilizer} we have $x_1^2(v)=x_2^2(v)=x_3^2(v)\neq 0$, hence in particular, $(x_1x_2x_3)(v)\neq 0$) and $\mathrm{Stab}_G(w_1)\subseteq \mathrm{Stab}_G(w_2)$. Write $H:=\mathrm{Stab}_G(w_1)$. Then $H\cong HG'/G'$ (since $|H|=3$ is coprime to $|G'|=4$). Every $H$-invariant monomial $h\in \field[U]$ is either $G$-invariant or is a relative $G$-invariant with weight $(1,-1)$. Hence $h$ or $x_1x_2x_3h$ is a $G$-invariant, implying by $(x_1x_2x_3)(v)\neq 0$ that $h(u)=h(u')$ holds for every $H$-invariant monomial $h\in \field[U]$ of degree at most $3=\mathsf{D}(H)=\sepbeta(H)$. Consequently, there exists an element $g\in H$ with $g\cdot u=u'$. Now $g\cdot (w_1,w_2,u)=(w_1,w_2,u')$, so $G\cdot v=G\cdot v'$. \emph{Case II.c:} $|\mathrm{Stab}_G(w_1)|=3$ (consequently, by Lemma~\ref{lemma:A4xC2,stabilizer} we have $x_1^2(v)=x_2^2(v)=x_3^2(v)\neq 0$) and $y_1^2(v)=y_2^2(v)=y_3^2(v)\neq 0$. Replacing the pair $(v,v')$ by an appropriate element in its $G$-orbit we may assume that $w_1$ is a non-zero scalar multiple of $[1,1,1]^T$. If $w_2$ is also a scalar multiple of $[1,1,1]^T$, then we are in Case II.b. So we may assume that $w_2$ is not a scalar multiple of $[1,1,1]^T$. Then necessarily $w_2$ is a non-zero scalar multiple of one of $[-1,1,1]^T$, $[1,-1,1]^T$, $[1,1,-1]^T$. Then the relative invariants \begin{align*} x_1y_1+x_2y_2+x_3y_3&\in \field[W_1\oplus W_2]^{G,(1,-1)}, \\ x_1y_1+\omega x_2y_2+\omega^2 x_3y_3&\in \field[W_1\oplus W_2]^{G,(\omega,-1)}, \\ x_1y_1+\omega^2 x_2y_2+\omega x_3y_3&\in \field[W_1\oplus W_2]^{G,(\omega^2,-1)} \end{align*} do not vanish at $(w_1,w_2)$. Multiplying them by $x_1x_2x_3$ we get degree $5$ relative invariants of weight $(1,1)$, $(\omega,1)$, $(\omega^2,1)$. It follows that for any $\chi\in \widehat G$ there exists a relative invariant $f\in \field[W_1\oplus W_2]^{G,\chi^{-1}}$ of degree at most $5$ with $f(w_1,w_2)\neq 0$; then $(ft_\chi)(v)=(ft_\chi)(v')$, implying $u_\chi=u'_\chi$. This holds for all $\chi$, so $u=u'$ and thus $v=v'$ in this case. \emph{Case III:} $|\mathrm{Stab}_G(w_1)|=4$. By Lemma~\ref{lemma:A4xC2,stabilizer} exactly one of $x_1(v)$, $x_2(v)$, $x_3(v)$ is non-zero; by symmetry we may assume that $x_1(v)\neq 0$ and $x_2(v)=x_3(v)=0$. By Lemma~\ref{lemma:A4xC2,stab(v1)trivial} (i) for any $\chi\in \langle (\omega,1)\rangle$ there exists a homogeneous $f\in \field[W_1\oplus W_2]^{G,\chi}$ with $\deg(f)\le 4$ and $f(w_1,w_2)\neq 0$. If there exists a homogeneous $h\in \field[W_1\oplus W_2]^{G,(1,-1)}$ with $\deg(h)\le 3$ and $h(w_1,w_2)\neq 0$ then we are done as in Case I. This happens in the following case: \emph{Case III.a:} $y_1(v)\neq 0$ or $(y_2y_3)(v)\neq 0$. Then we can take $h:=x_1y_1+x_2y_2+x_3y_3$ or $h:=x_1y_2y_3+x_2y_3y_1+x_3y_1y_2$. Otherwise we are in the following case: \emph{Case III.b:} $y_1(v)=0$ and $(y_2y_3)(v)=0$.
If $y_3(v)=0$, then $H:=\langle abd\rangle \subseteq \mathrm{Stab}_G(w_1,w_2)$. Note that $H\cong HG'/G'$. Every monomial $m\in \field[U]^H$ is a relative $G$-invariant with weight in $\langle (\omega,1)\rangle$. If $\deg(m)\le 2$, then by Lemma~\ref{lemma:A4xC2,stab(v1)trivial} (iii) we have a $G$-invariant $mf$ with $\deg(mf)\le 6$ and $f(w_1,w_2)\neq 0$. It follows that $m(u)=m(u')$ for every $H$-invariant monomial $m\in \field[U]$ with $\deg(m)\le 2=\mathsf{D}(H)=\sepbeta(H)$. Consequently, $H\cdot u=H\cdot u'$, and as $H$ stabilizes $(w_1,w_2)$, we conclude $G\cdot v=G\cdot v'$. The case $y_2(v)=0$ is similar; we just need to take $H:=\langle ad\rangle\subseteq \mathrm{Stab}_G(w_1,w_2)$ in the above argument. \emph{Case IV:} $|\mathrm{Stab}_G(w_1)|=2$. By Lemma~\ref{lemma:A4xC2,stabilizer} exactly one of $x_1(v)$, $x_2(v)$, $x_3(v)$ is zero; by symmetry we may assume that $x_1(v)\neq 0$, $x_2(v)\neq 0$, and $x_3(v)=0$. Note that this implies that $H:=\mathrm{Stab}_G(w_1)=\langle ad\rangle$ is not contained in $G'$, hence $H\cong HG'/G'$. By Lemma~\ref{lemma:A4xC2,stab(v1)trivial} (i) for any $\chi\in \langle (\omega,1)\rangle$ there exists a homogeneous $f\in \field[W_1\oplus W_2]^{G,\chi}$ with $\deg(f)\le 4$ and $f(w_1,w_2)\neq 0$. If there exists a homogeneous $h\in \field[W_1\oplus W_2]^{G,(1,-1)}$ with $\deg(h)\le 3$ and $h(w_1,w_2)\neq 0$ then we are done as in Case I, whereas if $H\subseteq \mathrm{Stab}_G(w_2)$ then we can finish as in Case III.b. \emph{Case IV.a:} $y_1(v)=y_2(v)=0$. Then $ad\cdot w_2=w_2$, hence $H\subseteq \mathrm{Stab}_G(w_2)$, so we are done, as we pointed out above. \emph{Case IV.b:} Exactly one of $y_1(v)$, $y_2(v)$ is zero. Then $h:=x_1y_1+x_2y_2+x_3y_3$ is a relative $G$-invariant of degree at most $3$ with weight $(1,-1)$ not vanishing at $(w_1,w_2)$, so we are done, as we explained in the beginning of Case IV. \emph{Case IV.c:} Both of $y_1(v)$, $y_2(v)$ are non-zero. We claim that for any non-trivial weight $\chi\in \widehat G$ there exists a relative invariant $f\in \field[W_1\oplus W_2]^{G,\chi^{-1}}$ with $\deg(f)\le 4$ such that $f(w_1,w_2)\neq 0$. Then $ft_\chi$ is a $G$-invariant of degree $\le 5$, so $(ft_\chi)(v)=(ft_\chi)(v')$ implies $u_\chi=u'_\chi$ for all non-trivial $\chi\in \widehat G$, thus $v=v'$. For the weights $\chi\in \langle (\omega,1)\rangle$ the claim follows from Lemma~\ref{lemma:A4xC2,stab(v1)trivial} (i). For $\chi=(1,-1)$ we may take $f:=x_1^2x_2y_2+x_2^2x_3y_3+x_3^2x_1y_1$. For $\chi=(\omega,-1)$ consider the relative invariants \begin{align*} r_{(\omega,-1)}^{(1)}:=x_1y_1+\omega x_2y_2+\omega^2x_3y_3 \\ r_{(\omega,-1)}^{(2)}:=x_1y_1^3+\omega x_2y_2^3+\omega^2 x_3y_3^3 \\ r_{(\omega,-1)}^{(3)}:=x_1^3y_1+\omega x_2^3y_2+\omega^2x_3^3y_3 \end{align*} These belong to $\field[W_1\oplus W_2]^{G,\chi}$. One of them does not vanish at $(w_1,w_2)$. Indeed, suppose for contradiction that all of them vanish at $(w_1,w_2)$. From $r_{(\omega,-1)}^{(1)}(w_1,w_2)=0=r_{(\omega,-1)}^{(2)}(w_1,w_2)$ we deduce $y_1^2(w_2)=y_2^2(w_2)$. Thus $[y_1(v),y_2(v)]$ is a non-zero scalar multiple of $[1,1]$ or $[1,-1]$. After that, from $r_{(\omega,-1)}^{(1)}(w_1,w_2)=0=r_{(\omega,-1)}^{(3)}(w_1,w_2)$ we conclude $x_1^2(v)=x_2^2(v)$ and hence $x_1(v)=\pm x_2(v)$, which clearly contradicts the assumption that $r_{(\omega,-1)}^{(1)}(w_1,w_2)=0$. The case of the weight $\chi=(\omega^2,-1)$ can be settled similarly, using the relative invariants $x_1y_1+\omega^2 x_2y_2+\omega x_3y_3$, $x_1y_1^3+\omega^2 x_2y_2^3+\omega x_3y_3^3$, $x_1^3y_1+\omega^2 x_2^3y_2+\omega x_3^3y_3$.
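The elimination carried out in Case IV.c (where $x_3(v)=0$) rests on two polynomial identities, which can be double-checked with a short symbolic computation; in the following sympy sketch the symbol $w$ stands for $\omega$.
\begin{verbatim}
from sympy import symbols, expand

x1, x2, y1, y2, w = symbols('x1 x2 y1 y2 w')

# the relative invariants r^{(1)}, r^{(2)}, r^{(3)} specialized at x3 = 0
r1 = x1*y1 + w*x2*y2
r2 = x1*y1**3 + w*x2*y2**3
r3 = x1**3*y1 + w*x2**3*y2

# r1 = r2 = 0 forces y1^2 = y2^2, and r1 = r3 = 0 forces x1^2 = x2^2,
# provided w*x2*y2 != 0:
assert expand(y1**2*r1 - r2 - w*x2*y2*(y1**2 - y2**2)) == 0
assert expand(x1**2*r1 - r3 - w*x2*y2*(x1**2 - x2**2)) == 0
\end{verbatim}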
\end{proof} \section{The group $\mathrm{S}_3\times \mathrm{C}_3$} \label{sec:S3xC3} In this section \[G=\mathrm{S}_3\times \mathrm{C}_3=\langle a,b\mid a^3=b^2=1,\ ba=a^2b\rangle \times \langle c\mid c^3=1\rangle.\] eld$ contains an element $\omega$ of multiplicative order $3$, and consider the following irreducible $2$-dimensional representations of $G$: \[ \psi_1: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^2 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}, \] \[\psi_2: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^2 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega \\ \end{bmatrix}, \] \[\psi_3: a\mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^2 \\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \quad c\mapsto \begin{bmatrix} \omega^2 & 0 \\ 0 & \omega^2 \\ \end{bmatrix}. \] The other irreducible representations of $G$ are $1$-dimensional, and can be labelled by the group $\widehat G=\{\pm 1\}\times \{1,\omega,\omega^2\}$, where $\chi=(\chi_1,\chi_2)\in \widehat G$ is identified with the representation \[\chi:a\mapsto 1,\ b\mapsto \chi_1,\ c\mapsto \chi_2\] (note that $\langle a\rangle$ is the commutator subgroup of $G$, so $a$ is in the kernel of any $1$-dimensional representation of $G$). eld$ endowed with the representation $\chi$, and set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. For $\xi,\eta,\zeta\in \{x,y,z\}$ set \[q_{\xi\eta}:=\frac 12 (\xi_1\eta_2+\xi_2\eta_1) \quad \text{ and } \quad p_{\xi\eta\nu}:=\xi_1\eta_1\zeta_1+\xi_2\eta_2\zeta_2.\] For example, $q_{xx}=x_1x_2$, $q_{yz}=\frac 12(y_1z_2+y_2z_1)$, $p_{xxx}=x_1^3+x_2^3$, $p_{xxy}=x_1^2y_1+x_2^2y_2$. It is well known (see for example \cite[Theorem 4.1]{hunziker}) that the elements $q_{\xi\eta}$, $p_{\xi\eta\zeta}$ eld[W_1\oplus W_2\oplus W_3]^{\mathrm{S}_3}$, where $\mathrm{S}_3$ is identified with the subgroup $\langle a,b\rangle$ of $G$. Set \begin{align*}A_{(1,1)}&:=\{q_{xx},q_{yz},p_{xxx},p_{yyy},p_{zzz},p_{xyz}\} \\ A_{(1,\omega)}&:=\{q_{xy},q_{zz},p_{xxy},p_{xzz},p_{yyz}\} \\ A_{(1,\omega^2)}&:=\{q_{xz},q_{yy},p_{xxz},p_{xyy},p_{yzz}\}. \end{align*} For $\chi\in\{(1,1),(1,\omega),(1,\omega^2)\}\subset \widehat G$ the elements of $A_\chi$ are relative $G$-invariants of weight $\chi$. \begin{proposition}\label{prop:S3xC3,V1+V2+V3mingen} eld[W_1\oplus W_2\oplus W_3]^G$ is \begin{align}\label{eq:S3xC3,V1+V2+V3mingen} B:=A_{(1,1)}\cup \{f_1f_2\mid f_1\in A_{(1,\omega)},f_2\in A_{(1,\omega^2)}\} \cup\{q_{xy}^3,q_{xy}^2q_{zz},q_{xy}q_{zz}^2,q_{zz}^3\} \\ \notag \cup\{q_{xz}^3,q_{xz}^2q_{yy},q_{xz}q_{yy}^2,q_{yy}^3\} \cup \{q_{zz}^2p_{xzz},\ q_{yy}^2p_{xyy}\}. \end{align} \end{proposition} \begin{proof} If $\chi^{(1)},\dots,\chi^{(k)}$ is an irreducible product-one sequence over the subgroup $\langle (1,\omega)\rangle$ of $\widehat G$ and $f_j\in A_{\chi^{(j)}}$ for $j=1,\dots k$, then $f_1\dots f_k$ is a $G$-invariant, and the invariants of this form eld[W_1\oplus W_2\oplus W_3]^G$ (see the discussion in Section~\ref{sec:Davenport} about the relevance of the Davenport constant for our study). eld[W_1\oplus W_2\oplus W_3]^G$ is generated by \begin{align}\label{eq:S3xC3,V1+V2+V3gens} A:=A_{(1,1)}\cup\{f_1f_2\mid f_1\in A_{(1,\omega)}, f_2\in A_{(1,\omega^2)}\} \\ \notag \cup \{f_1f_2f_3\mid f_1,f_2,f_3\in A_{(1,\omega)} \text{ or } f_1,f_2,f_3\in A_{(1,\omega^2)}\}. 
\end{align} By \cite[Proposition 3.3]{cziszter-domokos-szollosi} we have the inequality eld(G)=8$, so we can omit the degree $9$ elements from the generating system $A$. Therefore eld[W_1\oplus W_2\oplus W_3]^G$, it is sufficient to show that all the elements of $A\setminus B$ of degree $7$ or $8$ eld[W_1\oplus W_2\oplus W_3]$ generated by the invariants of degree at most $6$. All elements of degree $7$ or $8$ in $A\setminus B$ are products of the form $f_1f_2f_3$ where $\deg(f_1)=2$, $\deg(f_2)=3$, and both of $f_1$ and $f_2$ belong either to $A_{(1,\omega)}$ or to $A_{(1,\omega^2)}$. The equalities \begin{align*} 2q_{xy}p_{xxy}&=p_{xxx}q_{yy}+q_{xx}p_{xyy} \\ q_{xy}p_{xzz}&=q_{xx}p_{yzz}+q_{yz}p_{xxz}-p_{xyz}q_{xz} \\ q_{xy}p_{yyz}&=p_{yyy}q_{xz}-q_{yz}p_{xyy}+p_{xyz}q_{yy} \\ q_{zz}p_{xxy}&=2p_{xyz}q_{xz}-q_{xx}p_{yzz} \\ q_{zz}p_{yyz}&=2q_{yz}p_{yzz}-p_{zzz}q_{yy} \end{align*} show that with the only exception of $q_{zz}p_{xzz}$, all products $f_1f_2$ with $f_1,f_2\in A_{(1,\omega)}$, $\deg(f_1)=2$, $\deg(f_2)=3$ are contained in the ideal generated by the $G$-invariants of degree at most $3$. Furthermore, we have \[p_{xzz}^2=4q_{xx}q_{zz}^2-4q_{xz}^2q_{zz}+p_{zzz}p_{xxz},\] showing that $q_{zz}p_{xzz}p_{xzz}$ is contained in the ideal generated by the $G$-invariants of degree at most $4$. Thus the elements of $A\setminus B$ of degree $7$ or $8$ that are the product of three factors from $A_{(1,\omega)}$ are contained in the subalgebra of $G$-invariants of degree at most $6$. Similar considerations work for products of the form $f_1f_2f_3$ with $f_i\in A_{(1,\omega^2)}$, one just needs to interchange the roles of $y$ and $z$. This shows that $B$ is indeed a homogeneous generating system of eld[W_1\oplus W_2\oplus W_3]^G$. \end{proof} \begin{proposition}\label{prop:S3xC3,V1+V2+V3} We have the inequality $\sepbeta(G,W_1\oplus W_2\oplus W_3)\le 6$. \end{proposition} \begin{proof} Let $v=(w_1,w_2,w_3)$, $v'=(w'_1,w'_2,w'_3)\in W_1\oplus W_2\oplus W_3$, and suppose that eld[W_1\oplus W_2\oplus W_3]^G$ with $\deg(f)\le 6$. By Proposition~\ref{prop:S3xC3,V1+V2+V3mingen} it is sufficient to show that \begin{align}\label{eq:seven}(q_{zz}^2p_{xzz})(v)&=(q_{zz}^2p_{xzz})(v') \\ \label{eq:2seven} (q_{yy}^2p_{xyy})(v)&=(q_{yy}^2p_{xyy})(v').\end{align} We show \eqref{eq:seven}, the argument for \eqref{eq:2seven} is obtained by interchanging the roles of $y$ and $z$. eld[W_3]^G$ is generated in degree $\le 6$. So by assumption $w_3$ and $w'_3$ belong to the same $G$-orbit. Replacing $w$ by an appropriate element in its orbit we may assume that $w_3=w'_3$. eld[W_1]^G$ is generated in degree at most $3$, so $G\cdot w_1=G\cdot w'_1$. If $w_1=0$, then $w'_1=0$, and both sides of \eqref{eq:seven} are zero. From now on we assume that $w_1\neq 0$, and so equivalently, $w'_1\neq 0$. If $z_1(w_3)=0$ or $z_2(w_3)=0$, then $q_{zz}(v)=0=q_{zz}(v')$, hence $(q_{zz}^2p_{xzz})(v)=0=(q_{zz}^2p_{xzz})(v')$. In particular, \eqref{eq:seven} holds. From now on we assume in addition that \begin{equation}\label{eq:nonzero} z_1(w_3)\neq 0 \text{ and }z_2(w_3)\neq 0.\end{equation} By assumption we have \[(q_{zz}q_{xz})(v)=(q_{zz}q_{xz})(v').\] hence \[q_{xz}(v)=q_{xz}(v').\] Again by assumption we have \[(p_{xzz}q_{xz})(v)=(p_{xzz}q_{xz})(v').\] Therefore if $q_{xz}(v)\neq 0$, then we have \[p_{xzz}(v)=p_{xzz}(v'),\] implying in turn the desired equality \eqref{eq:seven}. It remains to deal with the case \[q_{xz}(v)=0=q_{xz}(v').\] Now $q_{xz}(v)=0$ implies that eld^\times [-z_1(w_3),z_2(w_3)]^T. 
\end{equation} Consequently, $w'_1$ is a non-zero scalar multiple of $w_1$. On the other hand, $w'_1\in G\cdot w_1$, thus $w_1$ is an eigenvector of some matrix in $\{\psi_1(g)\mid g\in G\}=\psi_1(G)$. If the corresponding eigenvalue is $1$, then $(w_1,w_3)=(w'_1,w'_3)$ and \eqref{eq:seven} holds. If the eigenvalue is not $1$, then $w_1$ is the $-1$-eigenvector of some non-diagonal element of $\psi_1(G)$ (since the eigenvectors with eigenvalue different from $1$ of the diagonal elements in $\psi_1(G)$ are scalar multiples of $[1,0]^T$ or $[0,1]^T$, and $w_1$ is not of this form by \eqref{eq:S3xC3,-z1,z2} and \eqref{eq:nonzero}). After rescaling $W_1$ and $W_3$ (see Section~\ref{subsec:direct sum decomp}), we have that \begin{align*} (w_1,w_3)=([1,-1]^T,[1,1]^T) \text{ and } (w'_1,w'_3)=([-1,1]^T,[1,1]^T), \text{ or } \\ (w_1,w_3)=([\omega,-1]^T,[\omega,1]^T) \text{ and } (w'_1,w'_3)=([-\omega,1]^T,[\omega,1]^T), \text{ or } \\ (w_1,w_3)=([\omega^2,-1]^T,[\omega^2,1]^T) \text{ and } (w'_1,w'_3)=([-\omega^2,1]^T,[\omega^2,1]^T) \end{align*} Now one can directly check that $p_{xzz}(v)=0=p_{xzz}(v')$ in each of these cases (in fact, $(w'_1,w'_3)=g\cdot (w_1,w_3)$ for some $g\in \{b,ab,a^2b\}$ in these cases), and the desired \eqref{eq:seven} holds. \end{proof} For $\xi,\eta,\zeta\in \{x,y,z\}$ set \[q^-_{\xi\eta}:=\frac 12 (\xi_1\eta_2-\xi_2\eta_1) \quad \text{ and } \quad p^-_{\xi\eta\nu}:=\xi_1\eta_1\zeta_1-\xi_2\eta_2\zeta_2,\] and define \begin{align*}A_{(-1,1)}&:=\{q^-_{yz},p^-_{xxx},p^-_{yyy},p^-_{zzz},p^-_{xyz}\} \\ A_{(-1,\omega)}&:=\{q^-_{xy},p^-_{xxy},p^-_{xzz},p^-_{yyz}\} \\ A_{(-1,\omega^2)}&:=\{q^-_{xz},p^-_{xxz},p^-_{xyy},p^-_{yzz}\}. \end{align*} The elements of $A_\chi$ are relative invariants of weight $\chi$. eld[W_1\oplus W_2\oplus W_3]$ is a linear combination of products of elements from $\bigcup_{\chi\in \widehat G}A_\chi$, but we do not need this.) \begin{proposition}\label{prop:S3xC3,U} We have the equality $\beta(G,U)=5$. \end{proposition} \begin{proof} The action of $G$ on $U$ factors through the regular representation of $\mathrm{C}_3\times \mathrm{C}_3\cong G/G'$, the factor of $G$ modulo its commutator subgroup. Therefore $\beta(G,U)=\mathsf{D}(\mathrm{C}_3\times \mathrm{C}_3)=5$. \end{proof} \begin{proposition}\label{prop:S3xC3,V1+U} We have the inequality $\beta(W_1\oplus U)\le 6$. \end{proposition} \begin{proof} eld p^-_{xxx}\oplus (\mathcal{H}(G,W_1)\cap eld[W_1]^{G,\chi}=\{0\}$ for all $\chi\in \widehat G\setminus \{(\pm 1,1)\}$. Therefore by Lemma~\ref{lemma:V+U}, $\field[W_1\oplus U]^G$ is generated by eld[U]^G$, and \begin{align*} \{p^-_{xxx}t_{\chi^{(1)}}\cdots t_{\chi^{(k)}}\mid \chi^{(1)}, \dots, \chi^{(k)} \text{ is a product-one free sequence over }\widehat G, \\ \chi^{(1)}\cdots \chi^{(k)}=(-1,1)\}. \end{align*} Since $\mathsf{D}(\mathrm{C}_6/\mathrm{C}_2)=\mathsf{D}(\mathrm{C}_3)=3$, for $k\ge 3$ the sequence $\chi^{(1)}, \dots \chi^{(k)}$ necessarily has a subsequence with product $(\pm 1,1)$, so the above generators of the form $p^-_{xxx}t_{\chi^{(1)}}\cdots t_{\chi^{(k)}}$ have degree at most $3+3$. eld[q_{xx},p_{xxx}]$, and $\beta(G,U)=\mathsf{D}(\mathrm{C}_6)=6$. eld[W_1\oplus U]^G$ has degree at most $6$. \end{proof} \begin{lemma} \label{lemma:S3xC3stabilizer} For a non-zero $w_1\in W_1$ we have $|\mathrm{Stab}_G(w_1)/\langle c\rangle|\in \{1,2\}$, and \begin{align*}|\mathrm{Stab}_G(w_1)/\langle c\rangle|=2\iff p^-_{xxx}(w_1)= 0\iff \mathrm{Stab}_G(w_1)\in \{\langle b,c\rangle, \langle ab,c\rangle, \langle a^2b,c\rangle\}. 
\end{align*} For a non-zero $w_j\in W_j$ ($j\in \{2,3\}$) we have $|\mathrm{Stab}_G(w_j)|\in \{1,2,3\}$, and \begin{itemize} \item[(i)] for $j=2$: \begin{align*}|\mathrm{Stab}_G(w_2)|=2\iff p^-_{yyy}(w_2)=0\iff \mathrm{Stab}_G(w_2)\in \{\langle b\rangle, \langle ab\rangle, \langle a^2b\rangle\} \\ |\mathrm{Stab}_G(w_2)|=3\iff q_{yy}(w_2)=0\iff \mathrm{Stab}_G(w_2)\in \{\langle ac \rangle, \langle ac^2\rangle\}. \end{align*} \item[(ii)] for $j=3$: \begin{align*}|\mathrm{Stab}_G(w_3)|=2\iff p^-_{zzz}(w_3)= 0\iff \mathrm{Stab}_G(w_3)\in \{\langle b\rangle, \langle ab\rangle, \langle a^2b\rangle\} \\ |\mathrm{Stab}_G(w_3)|=3\iff q_{zz}(w_3)= 0 \iff \mathrm{Stab}_G(w_3)\in \{\langle ac \rangle, \langle ac^2\rangle\}. \end{align*} \end{itemize} \end{lemma} \begin{proof} The statement can be verified by straightforward direct computation. \end{proof} \begin{lemma}\label{lemma:S3xC3,stab-inv} Let $W$ stand for $W_2$ or $W_3$. Let $w\in W$, $u,u'\in U$ such that eld[W\oplus U]^G$ with $\deg(f)\le 6$. Then $u$ and $u'$ belong to the same $\mathrm{Stab}_G(w)$-orbit. \end{lemma} \begin{proof} Consider first the case $W=W_2$. Assume that $\mathrm{Stab}_G(w)=\{1_G\}$. Then $q_{yy}(w)\neq 0$ and $p^-_{yyy}(w)\neq 0$ by Lemma~\ref{lemma:S3xC3stabilizer}. If $\chi\in\{(1,\omega),(-1,1),(1,\omega^2),(-1,\omega)\}$, then there is an $f\in \{q_{yy},p^-_{yyy},q_{yy}^2,q_{yy}p^-_{yyy}\}$ such that $ft_\chi$ is a $G$-invariant of degree at most $6$, hence $t_\chi(u)=t_\chi(u')$. If $\chi=(-1,\omega^2)$, then $q_{yy}t_\chi^2$ and $p^-_{yyy}t_\chi^3$ are $G$-invariants showing that $t_\chi(u)=t_\chi(u')$. Finally, if $\chi=(1,1)$, then $t_\chi$ is a $G$-invariant of degree $1$, so $t_\chi(u)=t_\chi(u')$. Summarizing, we have $u=u'$. Assume next that $|\mathrm{Stab}_G(w)|=2$; this means that $H:=\mathrm{Stab}_G(w)$ is $\langle b\rangle$, $\langle ab\rangle$, or $\langle a^2b\rangle$, and $q_{yy}(w)\neq 0$. For any $H$-invariant monomial $T:=t_\chi\cdots t_{\chi'}$ with $\deg(T)\le 2$, either $T$, $q_{yy}T$, or $q_{yy}^2T$ is a $G$-invariant of degree at most $6$. It follows that $T(u)=T(u')$ for any $H$-invariant eld[U]$ of degree at most $2$. Therefore there exists an $h\in H$ with $h\cdot u=u'$ eld(H)=2$). Assume next that $|\mathrm{Stab}_G(w)|=3$; this means that $H:=\mathrm{Stab}_G(w)$ is $\langle ac\rangle$ or $\langle ac^2\rangle$, and $p^-_{yyy}(w)\neq 0$. For any $H$-invariant monomial $T:=t_\chi\cdots t_{\chi'}$ with $\deg(T)\le 3$, either $T$ or $p^-_{yyy}T$ is a $G$-invariant of degree at most $6$. It follows that $T(u)=T(u')$ for any $H$-invariant eld[U]$ of degree at most $3$. Therefore there exists an $h\in H$ with $h\cdot u=u'$ eld(H)=3$). By Lemma~\ref{lemma:S3xC3stabilizer}, the only remaining case is when $w=0$, and then the result follows from Proposition~\ref{prop:S3xC3,U}. The case $W=W_3$ follows by Lemma~\ref{lemma:auto}. Indeed, the group $G$ has the automorphism $\alpha:a\mapsto a$, $b\mapsto b$, $c\mapsto c^{-1}$. We have $\psi_2=\psi_3\circ \alpha$, so $\alpha$ interchanges $W_2$ and $W_3$, and permutes the $1$-dimensional representations. \end{proof} \begin{lemma}\label{lemma:S3xC3,V2+V1+U} Let $v=(w_i,w_j)\in V=W_i\oplus W_j$, where $(i,j)\in \{(1,2),(1,3),(2,3)\}$. Suppose that $\mathrm{Stab}_G(w_i)$, $\mathrm{Stab}_G(w_j)$ are both nontrivial, and $\mathrm{Stab}_G(v)=\{1_G\}$. Then for any $\chi\in\widehat G$ there exists a eld[V]^{G,\chi}$ with $\deg(f)\le 4$ and $f(v)\neq 0$. \end{lemma} eld[V]^G$. \emph{Case 1:} $V=W_2\oplus W_3$. 
Then the assumptions on the stabilizers imply by Lemma~\ref{lemma:S3xC3stabilizer} that the stabilizers of $w_2$ and $w_3$ both have order $2$ or $3$, moreover, exactly one of $q_{yy}(w_2)$ and $p^-_{yyy}(w_2)$ is zero, and exactly one of $q_{zz}(w_3)$ and $p^-_{zzz}(w_3)$ is zero. \emph{Case 1.a:} $q_{yy}(w_2)=0=q_{zz}(w_3)$. Then both $w_2$ and $w_3$ are non-zero scalar multiples of $[1,0]^T$ or $[0,1]^T$. eld^\times [0,1]^T$ is similar). Then $\mathrm{Stab}_G(w_2)=\langle ac^2\rangle$, and $w_3$ can not be a scalar multiple of $[0,1]^T$ (since the stabilizer of $[0,1]^T\in W_3$ is also $\langle ac^2\rangle$). This means that $w_3$ is a non-zero scalar multiple of $[1,0]^T$ as well, and so none of $\{p_{yyz}, p_{yzz}, p^-_{yyz}, p^-_{yzz}, p^-_{zzz}\}$ vanishes on $v=(w_2,w_3)$. These are relative invariants of degree $3$, and their weights represent all the five non-trivial characters. \emph{Case 1.b:} $q_{yy}(w_2)=0=p^-_{zzz}(w_3)$. Then $w_2$ is a non-zero scalar multiple of $[1,0]^T$ or $[0,1]^T$, and $w_3$ is a non-zero scalar multiple of $[1,1]^T$, $[1,\omega]^T$, $[1,\omega^2]^T$. It follows that none of $\{p_{yyz}, p_{yzz}, p^-_{yyz}, p^-_{yzz}, p^-_{yyy}\}$ vanishes on $v$. These are relative invariants of degree $3$, and their weights represent all the five non-trivial characters. \emph{Case 1.c:} $p^-_{yyy}(w_2)=0=q_{zz}(w_3)$: same as Case 1.b, we just need to interchange the roles of $W_2$ and $W_3$. \emph{Case 1.d:} $p^-_{yyy}(w_2)=0=p^-_{zzz}(w_3)$. Then both of $w_2,w_3$ are non-zero scalar multiples of $[1,1]^T$, $[1,\omega]^T$, or $[1,\omega^2]^T$. Moreover, $w_2$ and $w_3$ are linearly independent (otherwise they would have the same stabilizer). Thus none of $\{q_{yy},q_{zz},q^-_{yz},q_{yy}q^-_{yz},q_{zz}q^-_{yz}\}$ vanishes on $v$. These are relative invariants of degree at most $4$, and their weights represent all the five non-trivial characters. \emph{Case 2:} $V=W_1\oplus W_2$. Then the assumptions on the stabilizers imply by Lemma~\ref{lemma:S3xC3stabilizer} that exactly one of $q_{yy}(w_2)$ and $p^-_{yyy}(w_2)$ is zero. \emph{Case 2.a:} $q_{yy}(w_2)=0$ and $p^-_{yyy}(w_2)\neq 0$. Then $w_2$ is a non-zero scalar multiple of $[1,0]^T$ or $[0,1]^T$, say $w_2$ is a non-zero scalar multiple of $[1,0]^T$. If $w_1$ is a scalar multiple of $[0,1]^T$, then none of $q_{xy}$, $q^-_{xy}$, $q_{xy}q^-_{xy}$, $q_{xy}^2$, $p^-_{yyy}$ vanishes on $v=(w_1,w_2)$, and these relative invariants on $V$ of degree at most $4$ exhaust all the five non-trivial weights, so we are done. Otherwise the first coordinate of $w_1$ is non-zero, and thus $p_{xxy}$, $p_{xyy}$, $p^-_{xxy}$, $p^-_{xyy}$ are relative invariants on $V$ of degree $3$ not vanishing on $v$ (just like $p^-_{yyy}$). So we have exhausted all the five non-trivial weights, and we are done. \emph{Case 2.b:} $p^-_{yyy}(w_2)=0$ and $q_{yy}(w_2)\neq 0$. Then $w_2$ is a non-zero scalar multiple of $[1,1]^T$, $[1,\omega]^T$, or $[1,\omega^2]^T$. Moreover, $w_1$ is not a scalar multiple of $w_2$ (since otherwise its stabilizer would contain the stabilizer of $w_2$). Then $q_{yy}$, $q^-_{xy}$, $q_{yy}^2$, $q^-_{xy}q_{yy}$, $p^-_{xyy}$ are non-zero on $v=(w_1,w_2)$, and these relative invariants on $W_1\oplus W_2$ of degree at most $4$ exhaust all the five non-trivial weights. \emph{Case 3:} $V=W_1\oplus W_3$. The group $G$ has the automorphism $\alpha:a\mapsto a$, $b\mapsto b$, $c\mapsto c^2$. We have $\psi_3=\psi_2\circ \alpha$, so $\alpha$ interchanges $W_2$ and $W_3$, and permutes the $1$-dimensional representations. 
So Case 3 follows from Case 2 by Lemma~\ref{lemma:auto}. \end{proof}

\begin{theorem}\label{thm:sepbeta(S3xC3)} Assume that the base field $\field$ has an element of multiplicative order $6$. Then we have the equality $\sepbeta(\mathrm{S}_3\times \mathrm{C}_3)=6$. \end{theorem}
\begin{proof} The cyclic group $\mathrm{C}_6$ is a homomorphic image of $G$, hence $\sepbeta(G)\ge \sepbeta(\mathrm{C}_6)=\mathsf{D}(\mathrm{C}_6)=6$. It remains to prove that $\sepbeta(G)\le 6$. We may assume that $\field$ is large enough so that Lemma~\ref{lemma:multfree} applies and we can reduce to the study of multiplicity-free representations. Set $V:=W_1\oplus W_2\oplus W_3\oplus U$, take $v:=(w_1,w_2,w_3,u) \in V$ and $v':=(w_1',w_2',w_3',u')\in V$ such that $f(v)=f(v')$ for all $f\in \field[V]^G$ with $\deg(f)\le 6$. We need to show that $G\cdot v=G\cdot v'$.

By Proposition~\ref{prop:S3xC3,V1+V2+V3}, $(w_1,w_2,w_3)$ and $(w_1',w_2',w_3')$ have the same $G$-orbit in $W_1\oplus W_2\oplus W_3$. So replacing $v'$ by an appropriate element in its $G$-orbit, we may assume that $w_1=w_1'$, $w_2=w_2'$, $w_3=w_3'$. If $w_2=w_3=0$, then $v$ and $v'$ are in the same $G$-orbit by Proposition~\ref{prop:S3xC3,V1+U}. Otherwise there exists a $k\in \{2,3\}$ such that $w_k\neq 0$, and thus $|\mathrm{Stab}_G(w_k)|\in\{1,2,3\}$. If $\mathrm{Stab}_G(w_2)=\{1_G\}$ or $\mathrm{Stab}_G(w_3)=\{1_G\}$, then $u=u'$ (and hence $v=v'$) by Lemma~\ref{lemma:S3xC3,stab-inv}. Similarly, if $\mathrm{Stab}_G(w_l)\supseteq \mathrm{Stab}_G(w_k)$ for both $l\in \{1,2,3\}\setminus \{k\}$, then $\mathrm{Stab}_G(w_k)\subseteq \mathrm{Stab}_G(w_1,w_2,w_3)$. By Lemma~\ref{lemma:S3xC3,stab-inv} we have a $g\in \mathrm{Stab}_G(w_k)$ with $g\cdot u=u'$. Then $g\cdot v=v'$, and we are done. It remains to deal with the case when there exists a pair $(i,j)\in \{(1,2),(1,3),(2,3)\}$ such that $(w_i,w_j)$ satisfies the assumptions of Lemma~\ref{lemma:S3xC3,V2+V1+U}. Therefore for any $\chi\in \widehat G$ there exists an $f\in \field[W_1\oplus W_2\oplus W_3]^{G,\chi^{-1}}$ with $f(w_1,w_2,w_3)\neq 0$ and $\deg(f)\le 4$. Then $ft_\chi\in \field[V]^G$ has degree at most $5$, thus $(ft_\chi)(v)=(ft_\chi)(v')$, implying in turn that \[t_\chi(u)=\frac{(ft_\chi)(v)}{f(w_1,w_2,w_3)}= \frac{(ft_\chi)(v')}{f(w_1,w_2,w_3)}=t_\chi(u').\] This holds for all $\chi\in \widehat G$, so $u=u'$ and hence $v=v'$. \end{proof}

\section{Some groups $G=\mathrm{C}_n\rtimes \mathrm{C}_k$ and their separating Noether number $\sepbeta(G)$}
\label{sec:CnrtimesCk}

\subsection{The group $\mathrm{C}_4\rtimes \mathrm{C}_4$}
In this section \[G=\mathrm{C}_4\rtimes \mathrm{C}_4=\langle a,b\mid a^4=1=b^4,\quad bab^{-1}=a^3\rangle.\] We assume that the base field $\field$ has an element $\mathrm{i}$ of multiplicative order $4$. Then $G$ has two irreducible $2$-dimensional representations (up to isomorphism), namely \[\psi_1:a\mapsto \begin{bmatrix} \mathrm{i}& 0 \\ 0 & -\mathrm{i}\\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, \qquad \psi_2: a\mapsto \begin{bmatrix} \mathrm{i}& 0 \\ 0 & -\mathrm{i}\\ \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}.\] The other irreducible representations of $G$ are $1$-dimensional, and can be labeled by the pairs $(\chi_1,\chi_2)$ with $\chi_1\in\{1,-1\}$ and $\chi_2\in\{1,-1,\mathrm{i},-\mathrm{i}\}$ as follows: identify $\chi=(\chi_1,\chi_2)\in \widehat G$ with the representation $\chi:G\to \field^\times$ given by \[\chi:a\mapsto \chi_1,\quad b\mapsto \chi_2.\] For $j\in\{1,2\}$ denote by $W_j$ the $G$-module $\field^2$ endowed with the representation $\psi_j$, and for $\chi\in\widehat G$ denote by $U_\chi$ the $G$-module $\field$ endowed with the representation $\chi$. Set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. Note that none of $\psi_1$ or $\psi_2$ is faithful: $\ker(\psi_1)=\langle b^2\rangle$, $G/\ker(\psi_1)\cong \mathrm{D}_8$, and $\psi_1$ is the lift to $G$ of the only $2$-dimensional irreducible representation of $\mathrm{D}_8$.
On the other hand, $\ker(\psi_2)=\langle a^2b^2\rangle$, $G/\ker(\psi_2)\cong \mathrm{Dic}_8$, and $\psi_2$ is the lift to $G$ of the only $2$-dimensional irreducible representation of the quaternion group. The coordinate functions on $W_1$ are denoted by $x_1,x_2$, the coordinate functions on $W_2$ are $y_1,y_2$, and the coordinate function on $U_\chi$ is $t_\chi$ for $\chi\in \widehat G$. It is well known and easy to see that \begin{align*} \field[W_1]^G&=\field[x_1x_2,x_1^4+x_2^4], \\ \field[W_2]^G&=\field[(y_1y_2)^2,\ y_1^4+y_2^4,\ y_1y_2(y_1^4-y_2^4)]. \end{align*} Consider the following relative invariants (their index indicates their weight): \begin{align*}r_{(-1,1)}:=x_1^2+x_2^2, \quad r_{(-1,-1)}:=x_1^2-x_2^2, \quad r_{(1,-1)}:=r_{(-1,1)}r_{(-1,-1)}=x_1^4-x_2^4 \\ s_{(1,-1)}:=y_1y_2,\quad s_{(-1,1)}:=y_1^2+y_2^2, \quad s_{(-1,-1)}:=y_1^2-y_2^2. \end{align*} Using the above observations, one can prove by arguments similar to the proof of Proposition~\ref{prop:Vi+U} the following:

\begin{proposition}\label{prop:V1+U} For $i=1,2$ we have $\beta(G,W_i\oplus U)=6$. \end{proposition}

\begin{proposition} \label{prop:V1+V2} We have $\beta(G,W_1\oplus W_2)\le 6$. \end{proposition}
\begin{proof} We have $\beta(G)\le 7$ by \cite[Proposition 3.1]{cziszter-domokos-szollosi}. On the other hand, the element $a^2\in G$ acts on $W_1\oplus W_2$ as multiplication by the scalar $-1$, hence any homogeneous element of $\field[W_1\oplus W_2]^G$ has even degree. It follows that $\beta(G,W_1\oplus W_2)$ is an even number, less than or equal to $7$. Consequently, $\beta(G,W_1\oplus W_2)\le 6$. \end{proof}

\begin{proposition}\label{prop:V1+V2+U(1,i)} We have $\sepbeta(G,W_1\oplus W_2\oplus U_{(1,\mathrm{i})})\le 6$. \end{proposition}
\begin{proof} Let $v=(w_1,w_2,u)\in V$ and $v'=(w'_1,w'_2,u')\in V$ be such that $f(v)=f(v')$ for all $f\in \field[V]^G$ with $\deg(f)\le 6$ (here $w_i,\ w'_i\in W_i$ and $u,u'\in U$). We need to show that $v$ and $v'$ have the same $G$-orbit. By Proposition~\ref{prop:V1+V2}, $(w_1,w_2)$ and $(w'_1,w'_2)$ are in the same $G$-orbit. Replacing $v$ by an appropriate element in its $G$-orbit, we may assume that $w_1=w'_1$ and $w_2=w'_2$. If $w_1=0$ or $w_2=0$, then we have $G\cdot v=G\cdot v'$ by Proposition~\ref{prop:V1+U}. From now on we assume that none of $w_1$ or $w_2$ is zero. Set \[r^{(1)}_{(1,-\mathrm{i})}:=x_1y_2+\mathrm{i}x_2y_1, \quad r^{(2)}_{(1,-\mathrm{i})}:=x_1y_1^3-\mathrm{i}x_2y_2^3, \quad r_{(1,\mathrm{i})}:=x_1y_2-\mathrm{i}x_2y_1.\] Then $r^{(1)}_{(1,-\mathrm{i})}$, $r^{(2)}_{(1,-\mathrm{i})}$, $t^3$ are relative invariants of weight $(1,-\mathrm{i})\in \widehat G$, whereas $r_{(1,\mathrm{i})}$, $t$ are relative invariants of weight $(1,\mathrm{i})$. Therefore we have the invariants \[f_1:=t^4,\quad f_2:=r^{(1)}_{(1,-\mathrm{i})}t,\quad f_3:=r^{(2)}_{(1,-\mathrm{i})}t,\quad f_4:=r_{(1,\mathrm{i})}t^3.\] All of the above invariants have degree at most $5$. By assumption, $f_j(v)=f_j(v')$ for $j=1,2,3,4$. The equality $t^4(v)=t^4(v')$ implies that $u'\in \{u,-u,\mathrm{i}u,-\mathrm{i}u\}$. In particular, if $u=0$, then $u'=0$, and hence $v=v'$. It remains to deal with the case $u\neq 0$. Moreover, $u=u'$ if and only if $t(v)=t(v')$ if and only if $t^3(v)=t^3(v')$. Suppose for contradiction that $u\neq u'$. Then $f_j(v)=f_j(v')$ for $j=2,3,4$ and $(w_1,w_2)=(w'_1,w'_2)$ imply that \[0=r^{(1)}_{(1,-\mathrm{i})}(v)=r^{(2)}_{(1,-\mathrm{i})}(v)=r_{(1,\mathrm{i})}(v).\] From $0=r^{(1)}_{(1,-\mathrm{i})}(v)=r_{(1,\mathrm{i})}(v)$ we conclude $0=x_1(v)y_2(v)=x_2(v)y_1(v)$. Taking into account that $w_1\neq 0$ and $w_2\neq 0$, we conclude that $0=x_1(v)=y_1(v)$ or $0=x_2(v)=y_2(v)$.
Then $r^{(2)}_{(1,-\mathrm{i})}(v)=0$ yields that both $x_1(v)y_1(v)$ and $x_2(v)y_2(v)$ are zero. We deduce $w_1=0$ or $w_2=0$, a contradiction. \end{proof}

\begin{proposition} \label{prop:V1+V2+U(-1,1)} We have $\sepbeta(G,W_1\oplus W_2\oplus U_{(-1,1)})\le 6$ and $\sepbeta(G,W_1\oplus W_2\oplus U_{(1,-1)})\le 6$. \end{proposition}
\begin{proof} First we deal with $V:=W_1\oplus W_2\oplus U_{(-1,1)}$. Let $v=(w_1,w_2,u)\in V$ and $v'=(w'_1,w'_2,u')\in V$ be such that $f(v)=f(v')$ for all $f\in \field[V]^G$ with $\deg(f)\le 6$. We need to show that $v$ and $v'$ have the same $G$-orbit. In the same way as in the proof of Proposition~\ref{prop:V1+V2+U(1,i)}, we may assume that $w_1=w'_1$, $w_2=w'_2$ and none of $w_1$ or $w_2$ is zero. It follows that \[r_{(-1,1)}=x_1^2+x_2^2,\quad r_{(-1,-1)}s_{(1,-1)}=(x_1^2-x_2^2)y_1y_2, \quad s_{(-1,1)}=y_1^2+y_2^2\] cannot all vanish at $v$. Suppose that $f$ is one of the above polynomials, with $f(v)\neq 0$. Since $f$ is a relative invariant of weight $(-1,1)$ of degree at most $4$, the product $ft\in \field[V]^G$ has degree at most $5$. Moreover, the equality $(ft)(v)=(ft)(v')$ implies $u=u'$, hence $v=v'$. The proof for $V=W_1\oplus W_2\oplus U_{(1,-1)}$ is similar, the only difference is that we use the relative invariants \[s_{(1,-1)}=y_1y_2, \quad s_{(-1,1)}r_{(-1,-1)}=(y_1^2+y_2^2)(x_1^2-x_2^2), \quad s_{(-1,-1)}r_{(-1,1)}=(y_1^2-y_2^2)(x_1^2+x_2^2).\] \end{proof}

\begin{proposition}\label{prop:mu(C4rtimesC4)} We have $\mu(G)\le 2$, and hence $\kappa(G)\le 3$. \end{proposition}
\begin{proof} Let $S$ be an intersection independent set of subgroups of $G$ with $|S|=\mu(G)$. Clearly $G\notin S$. If $S$ contains a cyclic subgroup of prime power order, then $|S|\le 2$ by Lemma~\ref{lemma:prime power}. From now on assume that $S$ contains no cyclic subgroup of prime power order. All such proper subgroups of $G$ contain the subgroup $\langle a^2,b^2\rangle$. If $\langle a^2,b^2\rangle\in S$, then $|S|=1$. Otherwise every element of $S$ is an order $8$ subgroup of $G$; any two distinct order $8$ subgroups intersect in $\langle a^2,b^2\rangle$, hence intersection independence forces $|S|\le 2$. In either case, $|S|\le 2$. The inequality $\mu(G)\le 2$ is proved, and hence $\kappa(G)\le 1+\mu(G)\le 3$ holds by \eqref{eq:kappa-mu}. \end{proof}

\begin{theorem}\label{thm:sepbeta(C4rtimesC4)} Assume that the base field $\field$ has an element of multiplicative order $8$. Then we have $\sepbeta(\mathrm{C}_4\rtimes \mathrm{C}_4)=6$. \end{theorem}
\begin{proof} The factor group of $G$ modulo its central normal subgroup $\langle a^2b^2\rangle$ is isomorphic to $\mathrm{Dic}_8$, hence $\sepbeta(G)\ge \sepbeta(\mathrm{Dic}_8)=6$ (in fact we have $\sepbeta(G,W_2)=6$, as we saw in Section~\ref{sec:indextwo}). We may assume that $\field$ is large enough so that Lemma~\ref{lemma:multfree} applies and we can reduce to the study of multiplicity-free representations. By Proposition~\ref{prop:mu(C4rtimesC4)} we have $\kappa(G)\le 3$, and therefore by Lemma~\ref{lemma:helly} it is sufficient to show that $\sepbeta(G,V)\le 6$ if $V$ is the direct sum of three pairwise non-isomorphic $G$-modules. Assume that $V$ is such a $G$-module. By Propositions~\ref{prop:V1+V2} and~\ref{prop:V1+U} we know that $\sepbeta(G,V)\le \beta(G,V)\le 6$, unless $V=W_1\oplus W_2\oplus U_{\chi}$ for some non-trivial character $\chi \in \widehat G$. The cases when $\chi \in \{(1,\mathrm{i}),\ (-1,1),\ (1,-1)\}$ are covered by Propositions~\ref{prop:V1+V2+U(1,i)} and~\ref{prop:V1+V2+U(-1,1)}. The remaining representations are of the form $\rho\circ \alpha$, where $\rho$ is one of the above three representations, and $\alpha$ is an automorphism of $G$. So the proof is complete by Lemma~\ref{lemma:auto}.
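The linear algebra underlying this subsection can also be double-checked mechanically. The following short SymPy sketch (an illustrative check only, with ad hoc variable names) verifies that the displayed matrices for $\psi_1$ and $\psi_2$ satisfy the defining relations of $\mathrm{C}_4\rtimes \mathrm{C}_4$, and that $a^2$ acts as $-1$ on $W_1$ and $W_2$, which is the parity fact used in the proof of Proposition~\ref{prop:V1+V2}.
\begin{verbatim}
import sympy as sp

I = sp.I
A  = sp.diag(I, -I)                    # psi_1(a) = psi_2(a)
B1 = sp.Matrix([[0, 1], [1, 0]])       # psi_1(b)
B2 = sp.Matrix([[0, -1], [1, 0]])      # psi_2(b)

for B in (B1, B2):
    assert A**4 == sp.eye(2) and B**4 == sp.eye(2)
    assert B * A * B**-1 == A**3       # the defining relation b a b^{-1} = a^3
assert A**2 == -sp.eye(2)              # a^2 acts as -1 on W_1 and on W_2
\end{verbatim}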
\end{proof} \subsection{The group $\mathrm{C}_5\rtimes \mathrm{C}_4$} In this section \[G=\mathrm{C}_5\rtimes \mathrm{C}_4=\langle a,b\mid a^5=b^4=1,\ bab^{-1}=a^2 \rangle.\] eld$ has an element $\xi$ of multiplicative order $20$. Then $\omega:=\xi^4$ has multiplicative order $5$, and $\mathrm{i}:=\xi^5$ has multiplicative order $4$. Consider the following irreducible $4$-dimensional representation of $G$: \[ \psi: a\mapsto \begin{bmatrix} \omega & 0 & 0 & 0 \\ 0 & \omega^2 & 0 & 0 \\ 0 & 0 & \omega^4 & 0 \\ 0 & 0 & 0 & \omega^3 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix}.\] The other irreducible representations of $G$ are $1$-dimensional, and can be labelled by eld^\times$, where $\chi\in \widehat G$ is identified with the representation \[\chi:a\mapsto 1,\ b\mapsto \chi\] (note that $\langle a \rangle$ is the commutator subgroup of $G$). eld^4$ endowed with the representation $\psi$, eld$ endowed with the representation $\chi$, and set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. Set \begin{align*} h_0:=x_1x_3-x_2x_4,\qquad h_1:=x_1x_2^2+x_3x_4^2,\qquad h_2:=x_2x_3^2+x_4x_1^2, \\ h_3:=x_1^3x_2+x_3^3x_4,\qquad h_4:=x_2^3x_3+x_4^3x_1, \end{align*} eld[W]^G$: \begin{align*} & f_1:=x_1x_3+x_2x_4,\qquad f_2:=x_1x_2x_3x_4, \\ &f_3:=h_1+h_2,\qquad f_4:=h_1h_2, \qquad f_5:=(h_1-h_2)h_0, \\ &f_6:=h_3+h_4,\qquad f_7:=x_1^5+x_2^5+x_3^5+x_4^5, \\ &f_8:=x_1^6x_3+x_1x_3^6+x_2^6x_4+x_2x_4^6 \\ &f_9:=(h_3-h_4)h_0. \end{align*} The following result was obtained using the CoCalc platform \cite{CoCalc}: \begin{proposition}\label{prop:C5rtimesC4,mingen} eld$ has characteristic zero. eld$-algebra eld[W]^G$. \end{proposition} It turns out that the elements of degree at most $6$ from the above generating system are sufficient to separate $G$-orbits in $W$: \begin{proposition}\label{prop:C5rtimesC4,V} If for $v,v'\in W$ we have that $f_i(v)=f_i(v')$ for all $i=1,\dots,7$, then $f_8(v)=f_8(v')$. eld$ has characteristic zero, then the $G$-invariants $f_1,\dots,f_7$ separate the $G$-orbits in $W$. \end{proposition} \begin{proof} Let $v,v'\in W$ such that $f_j(v)=f_j(v')$ for $j=1,\dots,7$. In order to prove $f_8(v)=f_8(v')$, we will prove the existence of a chain of subsets eld[V]\] and elements $v'=w^{(0)},\dots,w^{(k)}$ in the $G$-orbit of $v'$ such that \begin{itemize} \item $f(v)=f(w^{(j)})$ for all $f\in S^{(j)}$ and all $j=0,\dots,k$; \item $f_8\in S^{(k)}.$ \end{itemize} Then $f_8(v)=f_8(w^{(k)})$, and since $f_8$ is a $G$-invariant and $G\cdot v'=G\cdot w^{(k)}$, we have $f_8(w^{(k)})=f_8(v')$ eld)=0$). eld[V]$ generated by $S^{(j)}$; obviously, $f(v)=f(w^{(j)})$ holds for all $f\in T^{(j)}$ as well, so any element of $T^{(j)}$ can be added to $S^{(j)}$. From $f_1(v)=f_1(v')$ and $f_2(v)=f_2(v')$ we deduce \[\{(x_1x_3)(v),(x_2x_4)(v)\}=\{(x_1x_3)(v'),(x_2x_4)(v')\}.\] So either $(x_1x_3)(v)=(x_1x_3)(v')$ and $(x_2x_4)(v)=(x_2x_4)(v')$, or $(x_1x_3)(v)=(x_1x_3)(b\cdot v')$ and $(x_2x_4)(v)=(x_2x_4)(b\cdot v')$. Therefore we can take $w^{(1)}=v'$ or $w^{(1)}=b\cdot v'$, and \[S^{(1)}=S^{(0)}\cup \{x_1x_3,x_2x_4\}.\] In particular, this implies $h_0\in T^{(1)}$. \emph{Case I.:} $f_2(v)\neq 0$ and $h_0(v)\neq 0$. From $f_3(v)=f_3(w^{(1)})$, $f_5(v)=f_5(w^{(1)})$, and $h_0(v)=h_0(w^{(1)})\neq 0$ we infer that $h_1(v)=h_1(w^{(1)})$ and $h_2(v)=h_2(w^{(1)})$. The product of the two summands of $h_1$ (respectively $h_2$) is $(x_1x_3)(x_2x_4)^2$ (respectively $(x_1x_3)^2(x_2x_4)$), eld[V]$. 
It follows that \begin{align*}\{(x_1x_2^2)(v),(x_3x_4^2)(v)\}=\{(x_1x_2^2)(w^{(1)}),(x_3x_4^2)(w^{(1)})\}\qquad \text{ and } \\ \{(x_2x_3^2)(v),(x_4x_1^2)(v)\}=\{(x_2x_3^2)(w^{(1)}),(x_4x_1^2)(w^{(1)})\}.\end{align*} Similarly, from $f_6(v)=f_6(w^{(1)})$ and $f_9(v)=f_9(w^{(1)})$ (note that $f_9=f_1f_6-2f_4$, hence $f_9(v)=f_9(v')=f_9(w^{(1)})$) we get \begin{align*}\{(x_1^3x_2)(v),(x_3^3x_4)(v)\}=\{(x_1^3x_2)(w^{(1)}),(x_3^3x_4)(w^{(1)})\} \end{align*} (and also $\{(x_2^3x_3)(v),(x_4^3x_1)(v)\}=\{(x_2^3x_3)(w^{(1)}),(x_4^3x_1)(w^{(1)})\}$, but we do not use it below). Note that the elements of $S^{(1)}$ are $b^2$-invariant, whereas $b^2$ interchanges $x_1x_2^2$ and $x_3x_4^2$, $x_2x_3^2$ and $x_4x_1^2$, $x_1^3x_2$ and $x_3^3x_4$. It follows that with $w^{(2)}=w^{(1)}$ or $w^{(2)}=b^2\cdot w^{(1)}$ one of the following sets can be taken as $S^{(2)}$: \begin{enumerate} \item[(i)] $S^{(2)}=S^{(1)}\cup \{x_1x_2^2,x_3x_4^2,x_2x_3^2,x_4x_1^2\}$ \item[(ii)] $S^{(2)}=S^{(1)}\cup \{x_1x_2^2,x_3x_4^2,x_1^3x_2,x_3^3x_4\}$ \item[(iii)] $S^{(2)}=S^{(1)}\cup \{x_2x_3^2,x_4x_1^2,x_1^3x_2,x_3^3x_4\}$ \end{enumerate} In case (i), we have $(x_1x_2^2)^2(x_2x_3^2)=x_2^5(x_1x_3)^2\in T^{(2)}$. Since $0\neq (x_1x_3)(v)=(x_1x_3)(w^{(2)})$, we conclude that $x_2(v)^5=x_2(w^{(2)})^5$. For an appropriate $s\in \{0,1,2,3,4\}$, we have $x_2(v)=x_2(a^s\cdot w^{(2)})$. Set $w^{(3)}:=a^s\cdot w^{(2)}$. Since the elements of $S^{(2)}$ are $\langle a\rangle$-invariant, we may take \[S^{(3)}:=S^{(2)}\cup \{x_2\}.\] Then from $(x_1x_2^2)(v)=(x_1x_2^2)(w^{(3)})$ and $x_2(v)=x_2(w^{(3)})\neq 0$ we get $x_1(v)=x_1(w^{(3)})$. Given that, from $(x_1x_3)(v)=(x_1x_3)(w^{(3)})$ and $x_1(v)=x_1(w^{(3)})\neq 0$ we infer $x_3(v)=x_3(w^{(3)})$. Finally, from $(x_2x_4)(v)=(x_2x_4)(w^{(3)})$ and $x_2(v)=x_2(w^{(3)})\neq 0$ we get $x_4(v)=x_4(w^{(3)})$. So keeping $w^{(4)}:=w^{(3)}$ we can take \[S^{(4)}:=S^{(3)}\cup \{x_1,x_3,x_4\}.\] Then $f_8\in T^{(4)}$, hence with $w^{(5)}=w^{(4)}$ we can take $S^{(5)}=S^{(4)}\cup\{f_8\}$, and we are done. The cases when $S^{(2)}$ is as in (ii) or (iii) can be dealt with similarly. \emph{Case II.:} $h_0(v)=0$. Then we have \begin{equation}\label{eq:C5rtimesC4,x1x3=x2x4} (x_1x_3)(w^{(1)})=(x_1x_3)(v)=(x_2x_4)(v)=(x_2x_4)(w^{(1)}). \end{equation} Observe that \begin{equation}\label{eq:f8=x1x3f7} f_8=(x_1x_3)(x_1^5+x_3^5)+(x_2x_4)(x_2^5+x_4^5).\end{equation} Therefore by \eqref{eq:C5rtimesC4,x1x3=x2x4} we obtain \[f_8(v)=(x_1x_3)(v)f_7(v)=(x_1x_3)(w^{(1)})f_7(w^{(1)})=f_8(w^{(1)}),\] so with $w^{(2)}=w^{(1)}$ we can take $S^{(2)}=S^{(1)}\cup \{f_8\}$. \emph{Case III.:} $h_0(v)\neq 0$ and $f_2(v)=0$. By symmetry, we may assume that $x_1(v)=0$. From $(x_1x_3)(v)=(x_1x_3)(w^{(1)})$ we deduce that $x_1(w^{(1)})=0$ or $x_3(w^{(3)})=0$. Since $b^2$ interchanges $x_1$ and $x_3$, and fixes the elements of $S^{(1)}$, we may take $w^{(2)}=w^{(1)}$ or $w^{(2)}=b^2\cdot w^{(1)}$ and \[S^{(2)}=S^{(1)}\cup \{x_1\}.\] Moreover, we have \begin{equation} \label{eq:C5rtimesC4,x1neq0} x_1(v)=0=x_1(w^{(2)}) \end{equation} and hence \begin{equation}\label{eq:C5rtimesC4,x2x4neq0} 0\neq h_0(v)=-(x_2x_4)(v)=-(x_2x_4)(w^{(2)})=h_0(w^{(2)}). \end{equation} We have \[h_1(v)-h_2(v)=\frac{f_5(v)}{h_0(v)}= \frac{f_5(w^{(2)})}{h_0(w^{(2)})}=h_1(w^{(2)})-h_2(w^{(2)}).\] Together with $h_1(v)+h_2(v)=f_3(v)=f_3(w^{(2)}) =h_1(w^{(2)})+h_2(w^{(2)})$ and with \eqref{eq:C5rtimesC4,x1neq0} this implies \begin{align*} (x_3x_4^2)(v)=h_1(v)=h_1(w^{(2)}) =(x_3x_4^2)(w^{(2)}) \text{ and } \\ (x_2x_3^2)(v)=h_2(v)=h_2(w^{(2)})=(x_2x_3^2)(w^{(2)}). 
\end{align*} Therefore with $w^{(3)}=w^{(2)}$ we can take \[S^{(3)}=S^{(2)}\cup \{x_2x_3^2,x_3x_4^2\}.\] The equality \[(x_2x_3^2)^2(x_3x_4^2)=(x_3^5)(x_2x_4)^2\in T^{(3)}\] with $(x_2x_4)(v)=(x_2x_4)(w^{(3)})\neq 0$ (see \eqref{eq:C5rtimesC4,x2x4neq0}) imply that \begin{equation}\label{eq:C5rtimesC4,x3^5} x_3^5(v)=x_3^5(w^{(3)}). \end{equation} So we may take $w^{(4)}=w^{(3)}$ and \[S^{(4)}:=S^{(3)}\cup\{x_3^5\}.\] Then $f_7$, $x_1$, $x_3^5$ all belong to $S^{(4)}$, implying that $x_2^5+x_4^5=f_7-x_1^5-x_3^5\in T^{(4)}$. So we may take $w^{(5)}=w^{(4)}$ and \[S^{(5)}=S^{(4)}\cup \{x_2^5+x_4^5\}\] Thus $x_1x_3$, $x_2x_4$, $x_1^5+x_3^5$, $x_2^5+x_4^5$ all belong to $T^{(5)}$, hence equality \eqref{eq:f8=x1x3f7} shows that $f_8\in T^{(5)}$. Therefore $f_8(v)=f_8(w^{(5)})$, and we are done. \end{proof} \begin{theorem}\label{thm:sepbeta(C5rtimesC4)} eld$ has characteristic $0$, and contains an element of multiplicative order $20$. eld(\mathrm{C}_5\rtimes \mathrm{C}_4)=6$. \end{theorem} \begin{proof} The subgroup $\langle a,b^2\rangle$ of $G$ is isomorphic to $\mathrm{D}_{10}$, the dihedral group of order $10$, hence by equation \eqref{eq:sepbeta(H)} and eld(G)\ge 6$. In view of Lemma~\ref{lemma:multfree}, to prove the reverse inequality take $v=(w,u),v'=(w',u')\in V=W\oplus U$ such that \begin{equation}\label{eq:proofC5rtimesC4assumption} eld[V]^G \text{ with } \deg(f)\le 6. \end{equation} We have to show that $G\cdot v=G\cdot v'$. By Proposition~\ref{prop:C5rtimesC4,V} we have $G\cdot w=G\cdot w'$, so replacing $v'$ by an appropriate element in its $G$-orbit we may assume that $w=w'$. Moreover, $G/G'=G/\langle a\rangle\cong \langle b\rangle\cong \mathrm{C}_4$, and $\mathsf{D}(\mathrm{C}_4)=4$, so $G\cdot u=G\cdot u'$ as well, implying that $u'_{\mathrm{i}}\in \{\pm u_{\mathrm{i}}, \pm\mathrm{i}u_{\mathrm{i}}\}$, $u'_{-1}=\pm u_{-1}$, and $u'_1=u_1$. So it is sufficient to deal with the case when $w\neq 0$ and $u_\chi\neq 0$ for some $\chi\in \widehat G\setminus \{1\}$. The eigenvalues of $\psi(g)$ for a non-identity element $g\in \langle a\rangle$ differ from $1$, therefore $\mathrm{Stab}_G(w)\cap \langle a\rangle=\{1_G\}$, and hence $\mathrm{Stab}_G(w)$ is isomorphic to a subgroup of $\mathrm{C}_4$. \emph{Case I.:} $\mathrm{Stab}_G(w)\cong \mathrm{C}_4$. Then $\mathrm{Stab}_G(w)G'=G$, hence from $G\cdot u=G\cdot u'$ it follows that there exists an element $h\in \mathrm{Stab}_G(w)$ with $h\cdot u=u'$. Thus we have $h\cdot (w,u)=(w,u')$, and we are done. \emph{Case II.:} $\mathrm{Stab}_G(w)\cong \mathrm{C}_2$. Then $\mathrm{Stab}_G(w)$ is conjugate in $G$ to $\langle b^2\rangle$, so $\mathrm{Stab}_G(w)=g\langle b^2 \rangle g^{-1}$ for some $g\in G$. Replacing $(w,u)$ and $(w,u')$ by $g^{-1}\cdot (w,u)$ and $g^{-1}\cdot (w,u')$ we may assume that $\mathrm{Stab}_G(w)=\langle b^2\rangle$. eld^4=V$, where $\nu\neq \eta$. eld[U]$ of degree at most $2$, then either it is $b$-invariant, or $b\cdot m=-m$. So if $m$ is not $b$-invariant, then both $h_0m$ and $(h_1-h_2)m$ are $G$-invariants of degree at most $5$. Note that $h_0(v)=\nu^2-\eta^2$ and $(h_1-h_2)(v)=2\nu\eta(\nu-\eta)$, so $\nu\neq\eta$ implies that $h_0(v)$ and $(h_1-h_2)(v)$ can not be simultaneously zero. We infer by \eqref{eq:proofC5rtimesC4assumption} that $m(u)=m(u')$ holds for all $\langle b^2\rangle$-invariant eld[U]$ of degree at most $2$. 
As $\mathsf{D}(\mathrm{C}_2)=2$, we deduce that $u$ and $u'$ belong to the same orbit under $\langle b^2\rangle=\mathrm{Stab}_G(w)$, implying in turn that $v=(w,u)$ and $v'=(w,u')$ belong to the same $G$-orbit. \emph{Case III.:} $\mathrm{Stab}_G(w)=\{1_G\}$. Clearly it is sufficient to show that $u_\chi=u'_\chi$ for all $\chi\in\widehat G\setminus \{1\}=\{-1,\pm\mathrm{i}\}$. First we deal with $\chi=-1$. Using the CoCalc platform \cite{CoCalc} we obtained that eld[W\oplus U_{-1}]^G$ consists of the generators $f_1,\dots,f_8$ eld[V]^G$ from Proposition~\ref{prop:C5rtimesC4,mingen}, together with $t_{-1}^2$, $h_0t_{-1}$, $(h_1-h_2)t_{-1}$, $(h_3-h_4)t_{-1}$, $(x_1^5-x_2^5+x_3^5-x_4^5)t_{-1}$. So all the generators involving $t_{-1}$ have degree at most $6$. This implies that $G\cdot (w,u_{-1})=G\cdot (w,u'_{-1})$. Taking into account that the stabilizer of $w$ is trivial, this means that $u_{-1}=u'_{-1}$. Next we deal with $\chi=-\mathrm{i}$ and show that that $u_{-\mathrm{i}}=u'_{-\mathrm{i}}$ (the argument for $u_\mathrm{i}=u'_\mathrm{i}$ is obtained by obvious modification). Since we know already $G\cdot u_{-\mathrm{i}}=G\cdot u'_{-\mathrm{i}}$, i.e. $t_{-\mathrm{i}}^4(u)=t_{-\mathrm{i}}^4(u')$, our claim is equivalent to \begin{equation}\label{eq:C5rtimesC4,t or t^3} t_{-\mathrm{i}}(u)=t_{-\mathrm{i}}(u') \text{ or }t_{-\mathrm{i}}^3(u)=t_{-\mathrm{i}}^3(u'). \end{equation} The latter holds if there exists a relative invariant eld[W]^{G,-\mathrm{i}}$) with $\deg(f)\le 5$ (respectively, $\deg(f)\le 3$) with $f(w)\neq 0$, because then $ft_{-\mathrm{i}}$ (respectively, $ft_{-\mathrm{i}}^3$) is a $G$-invariant of degree a most $6$, and therefore $f(w)t_{-\mathrm{i}}(u)=f(w)t_{-\mathrm{i}}(u')$ (respectively, $f(w)t_\mathrm{i}^3(u)=f(w)t_\mathrm{i}^3(u')$), so \eqref{eq:C5rtimesC4,t or t^3} holds. Suppose for contradiction that $w$ belongs to the common zero locus of eld[W]^{G,\mathrm{i}}_{\le 5}$ eld[W]^{G,-\mathrm{i}}_{\le 3}$. Set \begin{align*} k_1:=x_1x_2^2-x_3x_4^2, \qquad k_2:=x_2x_3^2-x_4x_1^2 \\ k_3:=x_1^3x_2-x_3^3x_4,\qquad k_4:=x_2^3x_3-x_4^3x_1 \\ k_5:=x_1^5-x_3^5, \qquad k_6:=x_2^5-x_4^5. \end{align*} Then \begin{align*} k_1-\mathrm{i}k_2,\quad k_3-\mathrm{i}k_4,\quad k_5-\mathrm{i}k_6\in eld[V]^{G,\mathrm{i}}_{\le 5} \\ eld[V]^{G,-\mathrm{i}}_{\le 3}. \end{align*} So the above four relative invariants all vanish at $w$. In particular, it follows that $k_1(w)=0$ and $k_2(w)=0$, i.e. \begin{equation}\label{eq:C5rtimesC4,x1x2^2} (x_1x_2^2)(w)=(x_3x_4^2)(w) \text{ and } (x_2x_3^2)(w)=(x_4x_1^2)(w).\end{equation} \emph{Case III.a:} $x_j(w)=0$ for some $j\in \{1,2,3,4\}$; by symmetry we may assume that $x_1(w)=0$. By \eqref{eq:C5rtimesC4,x1x2^2} we conclude $x_3(w)=0$ or $x_4(w)=0=x_2(w)$. In the latter case by $(k_5-\mathrm{i}k_6)(w)=0$ we deduce $x_3(w)=0$, leading to the contradiction that $w=0$. So $x_1(w)=0=x_3(w)$. Then $(k_5-\mathrm{i}k_6)(w)=0$ imply $x_2^5(w)=x_4^5(w)$, so (as $w\neq 0$) we have that $x_4(w)=\omega^jx_2(w)$ for some $j\in \{0,1,2,3,4\}$. Then one can easily check that the stabilizer of $w$ is a conjugate of the subgroup $\langle b^2\rangle$ of $G$, a contradiction. \emph{Case III.b:} $(x_1x_2x_3x_4)(w)\neq 0$. 
From \eqref{eq:C5rtimesC4,x1x2^2} we deduce \[x_3(w)=x_1(w)x_2(w)^2x_4(w)^{-2}\text{ and then } x_4(w)=x_2(w)(x_1(w)x_2(w)^2x_4(w)^{-2})^2x_1(w)^{-2}.\] The latter equality implies \[x_2(w)^5=x_4(w)^5,\] which together with $(k_5-\mathrm{i}k_6)(w)=0$ yields \[x_1(w)^5=x_3(w)^5.\] So there exist unique $j,k\in\{0,1,2,3,4\}$ with \[x_3(w)=\omega^jx_1(w)\text{ and }x_4(w)=\omega^kx_2(w).\] From \eqref{eq:C5rtimesC4,x1x2^2} it follows that \[(j,k)\in \{(0,0),(1,2),(2,4),(3,1),(4,3)\}.\] If $(j,k)=(0,0)$, then $b^2\in \mathrm{Stab}_G(w)$, a contradiction. If $(j,k)=(1,2)$, then $a^4b^2\in \mathrm{Stab}_G(w)$, a contradiction. Similarly we get to a contradiction for all other possible $(j,k)$. \end{proof}

\subsection{The group $\mathrm{C}_7\rtimes \mathrm{C}_3$}
In this section \[G=\mathrm{C}_7\rtimes \mathrm{C}_3 =\langle a,b\mid a^7=b^3=1_G,\quad bab^{-1}=a^2\rangle\] is the only non-abelian group of order $21$. It is proved in \cite[Theorem 4.1]{cziszter:C7rtimesC3} that for any prime $p$ congruent to $1$ modulo $3$, the separating Noether number of the non-abelian semidirect product $\mathrm{C}_p\rtimes \mathrm{C}_3$ equals $p+1$, provided that the base field $\field$ contains an element of multiplicative order $3p$. Actually, \cite{cziszter:C7rtimesC3} works over an algebraically closed base field, but the representation providing the exact lower bound for the separating Noether number is defined over any field containing an element of multiplicative order $3p$, and in order to prove that $p+1$ is also an upper bound, we may pass to the algebraic closure using Lemma~\ref{lemma:spanning invariants}. As the special case $p=7$, we get the following:

\begin{corollary} Assume that $\field$ contains an element of multiplicative order $21$. Then $\sepbeta(\mathrm{C}_7\rtimes \mathrm{C}_3)=8$. \end{corollary}

\subsection{The group $\mathrm{M}_{27}$}
In this section \[G=\mathrm{M}_{27}=\langle a,b \mid a^9=b^3=1,\ bab^{-1}=a^4 \rangle\cong \mathrm{C}_9\rtimes \mathrm{C}_3\] is the non-abelian group of order $27$ with an index $3$ cyclic subgroup. Assume that $\field$ contains an element $\omega$ of multiplicative order $9$, and consider the following irreducible $3$-dimensional representations of $G$: \[ \psi_1: a\mapsto \begin{bmatrix} \omega & 0 & 0 \\ 0 & \omega^4 & 0 \\ 0 & 0 & \omega^7 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.\] \[ \psi_2: a\mapsto \begin{bmatrix} \omega^2 & 0 & 0 \\ 0 & \omega^8 & 0 \\ 0 & 0 & \omega^5 \end{bmatrix}, \quad b\mapsto \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.\] The commutator subgroup of $G$ is $G'=\langle a^3\rangle$, and $G/G'\cong \mathrm{C}_3\times \mathrm{C}_3$. So the remaining irreducible representations of $G$ are $1$-dimensional and can be labeled by the pairs $\chi=(\chi_1,\chi_2)$ with $\chi_1,\chi_2\in\{1,\varepsilon,\varepsilon^2\}\subset \field^\times$, where $\varepsilon:=\omega^3$ has multiplicative order $3$. Identify $\chi=(\chi_1,\chi_2)\in \widehat G$ with the representation \[\chi:a\mapsto\chi_1,\qquad b\mapsto \chi_2.\] For $j\in\{1,2\}$ denote by $W_j$ the $G$-module $\field^3$ endowed with the representation $\psi_j$, and for $\chi\in\widehat G$ denote by $U_\chi$ the $G$-module $\field$ endowed with the representation $\chi$; set $U:=\bigoplus_{\chi\in \widehat G}U_\chi$. The following result was obtained using the CoCalc platform \cite{CoCalc}:

\begin{proposition}\label{prop:M27,V1+V2} Assume that $\mathrm{char}(\field)=0$. Then we have $\beta(G,W_1\oplus W_2)=9$. \end{proposition}

\begin{lemma}\label{lemma:M27,stabilizers} If $\mathrm{Stab}_G(v)$ is non-trivial for some $v\in W_1\oplus W_2$, then either $v=0$ (and then $\mathrm{Stab}_G(v)=G$), or $v\neq 0$ and then $\mathrm{Stab}_G(v)\in \{\langle b\rangle, \langle a^3b\rangle, \langle a^6b\rangle\}$ (in particular, then $|\mathrm{Stab}_G(v)|=3$).
Moreover, for a non-zero $w_1\in W_1$ and $w_2\in W_2$ we have that
\begin{itemize}
\item $\mathrm{Stab}_G(w_1)=\langle b\rangle \iff w_1\in \field\, [1,1,1]^T$;
\item $\mathrm{Stab}_G(w_2)=\langle b\rangle \iff w_2\in \field\, [1,1,1]^T$;
\item $\mathrm{Stab}_G(w_1)=\langle a^3b\rangle \iff w_1\in \field\, [\varepsilon^2,\varepsilon,1]^T$;
\item $\mathrm{Stab}_G(w_2)=\langle a^3b\rangle \iff w_2\in \field\, [\varepsilon,\varepsilon^2,1]^T$;
\item $\mathrm{Stab}_G(w_1)=\langle a^6b\rangle \iff w_1\in \field\, [\varepsilon,\varepsilon^2,1]^T$;
\item $\mathrm{Stab}_G(w_2)=\langle a^6b\rangle \iff w_2\in \field\, [\varepsilon^2,\varepsilon,1]^T$.
\end{itemize} \end{lemma}
\begin{proof} One can check by basic linear algebra that $\psi_j(g)$ has the eigenvalue $1$ for some $g\in G$ and $j\in \{1,2\}$ if and only if $g$ belongs to one of the order $3$ subgroups $\langle b\rangle$, $\langle a^3b\rangle$, $\langle a^6b\rangle$ of $G$. Computing the corresponding eigenvectors we obtain the result. \end{proof}

For a subset $S$ of the polynomial ring $\field[V]$, $\mathcal{V}(S)$ stands for the common zero locus in $V$ of the elements of $S$.

\begin{lemma}\label{lemma:M27,common zero locus} \begin{itemize} \item[(i)] For $\chi\in \{(\varepsilon,1),(\varepsilon^2,1)\}$ we have \[\mathcal{V}(\{f\in \field[W_1\oplus W_2]^{G,\chi}\mid \deg(f)\le 6\})=\{0\}.\] \item[(ii)] For $\chi\in \{(\varepsilon,\varepsilon),(\varepsilon,\varepsilon^2),(\varepsilon^2,\varepsilon),(\varepsilon^2,\varepsilon^2),(1,\varepsilon),(1,\varepsilon^2)\}$ we have \[\mathcal{V}(\{f\in \field[W_1\oplus W_2]^{G,\chi}\mid \deg(f)\le 9\})=\{v\in W_1\oplus W_2\mid \mathrm{Stab}_G(v)\neq \{1_G\}\}.\] \end{itemize} \end{lemma}
\begin{proof} (i) For $\chi=(\varepsilon,1)$, consider the following relative invariants in $\field[W_1\oplus W_2]^{G,(\varepsilon,1)}$: \begin{align*} f_{(\varepsilon,1)}^{(1)}:=x_1x_2x_3, &\qquad h_{(\varepsilon,1)}^{(1)}:=(y_1y_2y_3)^2, \\ f_{(\varepsilon,1)}^{(2)}:=x_1^3+x_2^3+x_3^3, &\qquad h_{(\varepsilon,1)}^{(2)}:=y_1^6+y_2^6+y_3^6, \\ f_{(\varepsilon,1)}^{(3)}:=x_1^5x_3+x_2^5x_1+x_3^5x_2, &\qquad h_{(\varepsilon,1)}^{(3)}:=y_1^2y_2+y_2^2y_3+y_3^2y_1. \end{align*} It is easy to see that $0=f_{(\varepsilon,1)}^{(1)}(v) =f_{(\varepsilon,1)}^{(2)}(v) =f_{(\varepsilon,1)}^{(3)}(v)$ implies $0=x_1(v)=x_2(v)=x_3(v)$, and similarly $0=h_{(\varepsilon,1)}^{(1)}(v) =h_{(\varepsilon,1)}^{(2)}(v) =h_{(\varepsilon,1)}^{(3)}(v)$ implies $0=y_1(v)=y_2(v)=y_3(v)$. For $\chi=(\varepsilon^2,1)$ we just need to interchange the roles of the variable sets $\{x_1,x_2,x_3\}$ and $\{y_1,y_2,y_3\}$ in the relative invariants constructed above.

(ii) Take $\chi\in \{(\varepsilon,\varepsilon),(\varepsilon,\varepsilon^2),(\varepsilon^2,\varepsilon),(\varepsilon^2,\varepsilon^2),(1,\varepsilon),(1,\varepsilon^2)\}$. None of $b,a^3b,a^6b$ belongs to $\ker(\chi)$, hence by Lemma~\ref{lemma:M27,stabilizers}, $\mathrm{Stab}_G(v)\neq \{1_G\}$ if and only if $\mathrm{Stab}_G(v)\nsubseteq \ker(\chi)$. Therefore the inclusion ``$\supseteq$'' holds by Lemma~\ref{lemma:common zero locus}. We turn to the proof of the reverse inclusion ``$\subseteq$''. So assume that all elements of $\field[W_1\oplus W_2]^{G,\chi}_{\le 9}$ vanish at $(w_1,w_2)$. We have to show that $\mathrm{Stab}_G(w_1,w_2)\neq\{1_G\}$.

\emph{Case I.:} $\chi=(\varepsilon,\varepsilon)$.
Consider the following relative invariants in eld[W_1\oplus W_2]^{G,(\varepsilon,\varepsilon)}$: \begin{align*} f_{(\varepsilon,\varepsilon)}^{(1)}&:=x_1^3+\varepsilon^2 x_2^3+\varepsilon x_3^3 \qquad &h_{(\varepsilon,\varepsilon)}^{(1)}:=y_1^6+\varepsilon^2 y_2^6+\varepsilon y_3^6 \\ f_{(\varepsilon,\varepsilon)}^{(2)}&:=x_1^5x_3+\varepsilon^2 x_2^5x_1+\varepsilon x_3^5x_2 \qquad &h_{(\varepsilon,\varepsilon)}^{(2)}:=y_1^2y_2+\varepsilon^2 y_2^2y_3+\varepsilon y_3^2y_1 \\ f_{(\varepsilon,\varepsilon)}^{(3)}&:= x_1^7x_3^2+\varepsilon^2 x_2^7x_1^2+\varepsilon x_3^7x_2^2 \qquad &h_{(\varepsilon,\varepsilon)}^{(3)}:= y_1^5y_3^4+\varepsilon^2 y_2^5y_1^4+\varepsilon y_3^5y_2^4 \\ k_{(\varepsilon,\varepsilon)}^{(1)}&:= x_1y_1+\varepsilon^2 x_2y_2+\varepsilon x_3y_3 \qquad &h_{(\varepsilon,\varepsilon)}^{(4)}:= y_1y_2y_3(y_1^3+\varepsilon^2 y_2^3+\varepsilon y_3^3) \\ k_{(\varepsilon,\varepsilon)}^{(2)}&:=x_1^2y_1^5+\varepsilon^2 x_2^2y_2^5+\varepsilon x_3^2y_3^5 \qquad & \end{align*} The above relative invariants have degree at most $9$, so each of them vanishes at $(w_1,w_2)\in W_1\oplus W_2$. In particular, $0=f_{(\varepsilon,\varepsilon)}^{(1)}(w_1)= f_{(\varepsilon,\varepsilon)}^{(2)}(w_1) =f_{(\varepsilon,\varepsilon)}^{(3)}(w_1)$. Then \[0=\det\begin{bmatrix} x_1^3 & x_2^3 & x_3^3\\ x_1^5x_3 & x_2^5x_1 & x_3^5x_2 \\ x_1^7x_3^2 & x_2^7x_1^2 & x_3^7x_2^2 \end{bmatrix}(w_1) =(x_1x_2x_3)^4(x_1x_2-x_3^2)(x_1x_3-x_2^2)(x_2x_3-x_1^2))(w_1).\] If $x_j(w_1)=0$ for some $j\in\{1,2,3\}$, then $0=f_{(\varepsilon,\varepsilon)}^{(1)}(w_1)= f_{(\varepsilon,\varepsilon)}^{(2)}(w_1) =f_{(\varepsilon,\varepsilon)}^{(3)}(w_1)$ implies $0=x_1(w_1)=x_2(w_1)=x_3(w_1)$, and so $\mathrm{Stab}_G(w_1,w_2)=\mathrm{Stab}_G(w_2)$. Suppose next that $(x_1x_2x_3)(w_1)\neq 0$. Then one of $x_1x_2-x_3^2$, $x_1x_3-x_2^2$, $x_2x_3-x_1^2$ vanishes at $w_1$. Assume that \begin{equation}\label{eq:M27,x1x2-x3^2} (x_1x_2-x_3^2)(w_1)=0\end{equation} (the other cases follow by cyclic symmetry). From \eqref{eq:M27,x1x2-x3^2} and $f_{(\varepsilon,\varepsilon)}^{(1)}(w_1)=0$ we deduce \begin{equation}\label{eq:M27,x1x2x3} \varepsilon (x_1x_2x_3)(w_1)=-x_1^3(w_1)-\varepsilon^2 x_2^3(w_1). \end{equation} From \eqref{eq:M27,x1x2-x3^2}, \eqref{eq:M27,x1x2x3} and $f_{(\varepsilon,\varepsilon)}^{(2)}(w_1)=0$ we deduce \begin{align*}0&=(x_1^5x_3+\varepsilon^2 x_2^5x_1+\varepsilon (x_1x_2)^2x_3x_2)(w_1) \\ &=(x_1^5x_3+\varepsilon^2 x_2^5x_1-(x_1^3+\varepsilon^2 x_2^3)x_1x_2^2)(w_1) \\ &=x_1^4(w_1)(x_1x_3-x_2^2)(w_1), \end{align*} so we have \begin{equation}\label{eq:M27,x1x3-x2^2} (x_1x_3)(w_1)=(x_2^2)(w_1)=0. \end{equation} From $0=f_{(\varepsilon,\varepsilon)}^{(3)}(w_1)$, \eqref{eq:M27,x1x2-x3^2} and \eqref{eq:M27,x1x3-x2^2} we deduce \begin{align*} 0&=(x_1^5(x_1x_3)^2+\varepsilon^2 x_2^7x_1^2+\varepsilon (x_1x_2)^3x_2^2x_3)(w_1) \\ &=(x_1^5x_2^4+\varepsilon^2 x_2^7x_1^2+\varepsilon x_2^2x_1^2x_2^5)(w_1) \\ &=(x_1^2x_2^4)(w_1)(x_1^3-x_2^3)(w_1), \end{align*} so we have \begin{equation}\label{eq:M27,x1^3-x_2^3} x_1^3(w_1)=x_2^3(w_1). \end{equation} So $x_1(w_1)=x_2(w_1)$ or $x_1(w_1)=\varepsilon x_2(w_1)$ or $x_1(w_1)=\varepsilon^2 x_2(w_1)$. Taking into account \eqref{eq:M27,x1x2-x3^2} and \eqref{eq:M27,x1x3-x2^2} we conclude that $w_1$ is a non-zero scalar multiple of $[1,1,1]^T$,$[\varepsilon^2,\varepsilon,1]^T$, or $[\varepsilon, \varepsilon^2,1]^T\}$. By Lemma~\ref{lemma:M27,stabilizers} it follows that $\mathrm{Stab}_G(w_1)\in \{\langle b\rangle,\langle a^3b\rangle, \langle a^6b\rangle\}$. 
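The determinant factorization displayed above is easy to get wrong by hand; the following minimal SymPy sketch (an illustrative symbolic check only) confirms it:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
M = sp.Matrix([[x1**3,        x2**3,        x3**3],
               [x1**5*x3,     x2**5*x1,     x3**5*x2],
               [x1**7*x3**2,  x2**7*x1**2,  x3**7*x2**2]])
rhs = (x1*x2*x3)**4 * (x1*x2 - x3**2) * (x1*x3 - x2**2) * (x2*x3 - x1**2)
assert sp.expand(M.det() - rhs) == 0   # the factorization used above
\end{verbatim}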
In particular, we are done if $w_2=0$, since then $\mathrm{Stab}_G(w_1,w_2)=\mathrm{Stab}_G(w_1)$. Assume next that $w_2\neq 0$. Let us take into account that $0=h_{(\varepsilon,\varepsilon)}^{(1)}(w_2)=h_{(\varepsilon,\varepsilon)}^{(2)}(w_2)= h_{(\varepsilon,\varepsilon)}^{(3)}(w_2)=h_{(\varepsilon,\varepsilon)}^{(4)}(w_2)$. Since $w_2\neq 0$, the above equalities imply that $y_j(w_2)\neq 0$ for $j=1,2,3$. Therefore $0=h_{(\varepsilon,\varepsilon)}^{(4)}(w_2)$ implies \begin{equation}\label{eq:M27,y1^3+...} (y_1^3+\varepsilon^2 y_2^3+\varepsilon y_3^3)(w_2)=0. \end{equation} It follows that \[0=\det\begin{bmatrix}1&1&1\\y_1^3&y_2^3&y_3^3 \\y_1^6&y_2^6&y_3^6\end{bmatrix}(w_2) =(y_2^3-y_1^3)(w_2)(y_3^3-y_1^3)(w_2)(y_3^3-y_2^3)(w_2).\] Assume that \begin{equation}\label{eq:M27,y1^3=y2^3} y_1^3(w_2)=y_2^3(w_2) \end{equation} (the cases when $y_1^3(w_2)=y_3^3(w_2)$ or $y_2^3(w_2)=y_3^3(w_2)$ follow by cyclic symmetry). From \eqref{eq:M27,y1^3+...}, \eqref{eq:M27,y1^3=y2^3} and $0=h_{(\varepsilon,\varepsilon)}^{(2)}(w_2)$ we deduce \[0=\det\begin{bmatrix} 1&1&1\\ y_1^2y_2&y_2^2y_3&y_3^2y_1\\ y_1^3&y_2^3&y_3^3\end{bmatrix}(w_2)= \det\begin{bmatrix}1&0&1\\y_1^2y_2&y_2(y_2y_3-y_1^2)&y_3^2y_1\\y_1^3&0&y_3^3\end{bmatrix}(w_2),\] hence \begin{equation}\label{eq:M27,y1^2=y2y3} y_1^2(w_2)=(y_2y_3)(w_2) \end{equation} or \begin{equation}\label{eq:M27,y1^3-y3^3} y_1^3(w_2)=y_3^3(w_2). \end{equation} Now \eqref{eq:M27,y1^3=y2^3}, \eqref{eq:M27,y1^2=y2y3} clearly imply that $w_2$ is a non-zero scalar multiple of $[1,1,1]^T$, $[\varepsilon,\varepsilon^2,1]^T$, or $[\varepsilon^2,\varepsilon,1]^T$, implying in turn by Lemma~\ref{lemma:M27,stabilizers} that $\mathrm{Stab}_G(w_2)\in \{\langle b\rangle, \langle a^3b\rangle, \langle a^6b\rangle\}$. If \eqref{eq:M27,y1^3-y3^3} holds, then together with \eqref{eq:M27,y1^3=y2^3} it implies that $y_2(w_2)=\nu_2y_1(w_2)$ and $y_3(w_2)=\nu_3y_1(w_2)$, where $\{\nu_2,\nu_3\}\in\{1,\varepsilon,\varepsilon^2\}$. Recall that $y_1(w_2)\neq 0$. From $h_{(\varepsilon,\varepsilon)}^{(2)}(w_2)=0$ we infer $\nu_2+\varepsilon^2 \nu_2^2\nu_3+\varepsilon \nu_3^2=0$, and hence $\nu_2\nu_3=1$ or $\nu_2\nu_3=\varepsilon^2$. On the other hand, $h_{(\varepsilon,\varepsilon)}^{(3)}(w_2)=0$ yields $\nu_3+\varepsilon^2 \nu_2^2+\varepsilon \nu_3^2\nu_2=0$, and hence $\nu_2\nu_3=1$ or $\nu_2\nu_3=\varepsilon$. Thus necessarily we have $\nu_2\nu_3=1$. So eld[1,1,1]^T$) or $(\nu_2,\nu_3)=(\varepsilon,\varepsilon^2)$ (i.e. $w_2\in eld[\varepsilon,\varepsilon^2,1]^T$) or $(\nu_2,\nu_3)=(\varepsilon^2,\varepsilon)$ (i.e. $w_2\in eld[\varepsilon^2,\varepsilon,1]^T$). It follows by Lemma~\ref{lemma:M27,stabilizers} that $\mathrm{Stab}_G(w_2)\in \{\langle b\rangle, \langle a^3b\rangle,\langle a^6b\rangle\}$. Thus we showed that both $w_1$ and $w_2$ are scalar multiples of $[1,1,1]^T$, $[\varepsilon^2,\varepsilon,1]^T$, or $[\varepsilon,\varepsilon^2,1]^T$. If $w_1=0$ or $w_2=0$, then we have the desired $\mathrm{Stab}_G(w_1,w_2)\neq \{1_G\}$ by Lemma~\ref{lemma:M27,stabilizers}. If both $w_1$ and $w_2$ are non-zero scalar multiples of $[1,1,1]^T$, $[\varepsilon^2,\varepsilon,1]^T$, or $[\varepsilon,\varepsilon^2,1]^T$, then again using Lemma~\ref{lemma:M27,stabilizers} one can verify by direct computation that for such a pair $(w_1,w_2)$, we have \begin{equation*}\label{eq:M27,k} k_{(\varepsilon,\varepsilon)}^{(1)}(w_1,w_2)=0=k_{(\varepsilon,\varepsilon)}^{(2)}(w_1,w_2) \iff \mathrm{Stab}_G(w_1)=\mathrm{Stab}_G(w_2). 
\end{equation*} Therefore $\mathrm{Stab}_G(w_1,w_2)=\mathrm{Stab}_G(w_1)\neq \{1_G\}$. \emph{Case II.:} $\chi=(1,\varepsilon)$. Consider the following relative invariants in eld[W_1\oplus W_2]^{G,(1,\varepsilon)}$: \begin{align*} f_{(1,\varepsilon)}^{(1)}&:=(x_1x_2x_3)^2(x_1^3+\varepsilon^2 x_2^3+\varepsilon x_3^3) \qquad & h_{(1,\varepsilon)}^{(1)}:= y_1^2y_3+\varepsilon^2 y_2^2y_1+\varepsilon y_3^2y_2 \\f_{(1,\varepsilon)}^{(2)}&:=x_1x_2x_3(x_1^6+\varepsilon^2 x_2^6+\varepsilon x_3^6) \qquad &h_{(1,\varepsilon)}^{(2)}:= y_1^4y_3^2+\varepsilon^2 y_2^4y_1^2+\varepsilon y_3^4y_2^2 \\ f_{(1,\varepsilon)}^{(3)}&:=x_1x_2^2+\varepsilon^2 x_2x_3^2+\varepsilon x_3x_1^2 \qquad &h_{(1,\varepsilon)}^{(3)}:=y_1^9+\varepsilon^2 y_2^9+ \varepsilon y_3^9 \\ f_{(1,\varepsilon)}^{(4)}&:=x_1^9+\varepsilon^2 x_2^9+\varepsilon x_3^9 \qquad &k_{(1,\varepsilon)}^{(1)}:=x_1y_2+\varepsilon^2 x_2y_3+\varepsilon x_3y_1 \\ & \qquad & k_{(1,\varepsilon)}^{(2)}:=x_1^2y_2^2+\varepsilon^2 x_2^2y_3^2+\varepsilon x_3^2y_1^2. \end{align*} The above relative invariants have degree at most $9$, so each of them vanishes at $(w_1,w_2)\in W_1\oplus W_2$. If $x_j(w_1)=0$ for some $j\in \{1,2,3\}$, then $0=f_{(1,\varepsilon)}^{(3)}(w_1)=f_{(1,\varepsilon)}^{(4)}(w_1)$ implies $w_1=0$, and thus $\mathrm{Stab}_G(w_1,w_2)= \mathrm{Stab}_G(w_2)$. Otherwise $(x_1x_2x_3)(w_1)\neq 0$, and $0=f_{(1,\varepsilon)}^{(1)}(w_1)=f_{(1,\varepsilon)}^{(2)}(w_1)$ implies \[0=\det\begin{bmatrix} 1&1&1 \\ x_1^3(w_1) & x_2^3(w_1) & x_3^3(w_1) \\ x_1^6(w_1) & x_2^6(w_1) & x_3^6(w_1)\end{bmatrix}=(x_2^3-x_1^3)(x_3^3-x_1^3)(x_3^3-x_2^3)(w_1).\] By symmetry it is sufficient to deal with the case when \begin{equation*}\label{eq:M27,x1^3=x2^3,2} x_1^3(w_1)=x_2^3(w_1). \end{equation*} By $0=f_{(1,\varepsilon)}^{(1)}(w_1)$, $0=f_{(1,\varepsilon)}^{(3)}$ and $(x_1x_2x_3)^2(w_1)\neq 0$ we have \[0=\det\begin{bmatrix} 1 & 1 & 1\\ x_1^3 & x_2^3 & x_3^3 \\ x_1x_2^2 & x_2x_3^2 & x_3x_1^2\end{bmatrix}(w_1).\] Taking into account $x_1^3(w_1)=x_2^3(w_1)$ we end up with \begin{equation}\label{eq:oct4} 0=x_2(x_1x_2-x_3^2)(w_1)(x_3^3-x_1^3)(w_1).\end{equation} If $(x_1x_2)(w_1)=x_3^2(w_1)$, then we have \[0=f_{(1,\varepsilon)}^{(3)}(w_1)= x_1x_2^2+\varepsilon^2 x_2(x_1x_2) +\varepsilon x_3x_1^2)(w_1) =\varepsilon x_1(w_1)(x_1x_3-x_2^2)(w_1).\] Thus $(x_1x_3)(w_1)=x_2^2(w_1)$, and this together with $(x_1x_2)(w_1)=x_3^2(w_1)$ implies that $w_1$ is a non-zero scalar multiple of $[1,1,1]^T$, $[\varepsilon,\varepsilon^2,1]^T$, or $[\varepsilon^2,\varepsilon,1]^T$. If $x_3^3(w_1)=x_1^3(w_1)$ (the other alternative from \eqref{eq:oct4}, then (recall that $x_2^3(w_1)=x_1^3(w_1)$) there exist some cubic roots $\nu_2,\nu_3$ of $1$ such that $w_1$ is a non-zero scalar multiple of $[1,\nu_2,\nu_3]$. Then $0=f_{(1,\varepsilon)}^{(3)}(w_1)$ reduces to \[\nu_2^2+\varepsilon^2 \nu_2\nu_3^2 +\varepsilon \nu_3=0.\] The sum of three cubic roots of $1$ is zero only if the three summands are $1,\varepsilon,\varepsilon^2$ in some order. We conclude that $w_1$ is a non-zero scalar multiple of $[1,1,1]^T$, $[\varepsilon,\varepsilon^2,1]^T$, or $[\varepsilon^2,\varepsilon,1]^T$. In particular, if $w_2=0$, then $\mathrm{Stab}_G(w_1,w_2)=\mathrm{Stab}_G(w_1)\neq \{1_G\}$ by Lemma~\ref{lemma:M27,stabilizers}, and we are done. Assume next that $w_2\neq 0$. Then by $0=h_{(1,\varepsilon)}^{(1)}(w_2)= h_{(1,\varepsilon)}^{(3)}(w_2)$ we deduce that none of $y_1(w_2)$, $y_2(w_2)$, $y_3(w_2)$ is zero. 
From $0=h_{(1,\varepsilon)}^{(1)}(w_2)= h_{(1,\varepsilon)}^{(2)}(w_2)$ we get \[0=\det\begin{bmatrix} 1 & 1 & 1 \\ y_1y_2^2 & y_2y_3^2 & y_3y_1^2 \\ y_1^2y_2^4 & y_2^2y_3^4 & y_3^2y_1^4 \end{bmatrix}(w_2) =(y_1y_2y_3)(y_1y_3-y_2^2)(y_3^2-y_1y_2)(y_1^2-y_2y_3)(w_2).\] By symmetry we may assume that $y_2^2(w_2)=(y_1y_3)(w_2)$. Then we have \[0=h_{(1,\varepsilon)}^{(1)}(w_2)= (y_1^2y_3+\varepsilon^2 (y_1y_3)y_1+\varepsilon y_3^2y_2)(w_2)= \varepsilon y_3(w_2)(y_2y_3-y_1^2)(w_2).\] Thus we have \[\frac{y_2^2(w_2)}{y_1(w_2)}=y_3(w_2)= \frac{y_1^2(w_2)}{y_2(w_2)},\] and so $y_1^3(w_2)=y_2^3(w_2)$. Obviously, this together with $y_2^2(w_2)=y_1(w_2)y_3(w_2)$ means that $w_2$ is a non-zero scalar multiple of $[1,1,1]^T$, $[\varepsilon,\varepsilon^2,1]^T$, or $[\varepsilon^2,\varepsilon,1]^T$. eld[1,1,1]^T\cup eld [\varepsilon^2,\varepsilon,1]^T$. If $w_1=0$ or $w_2=0$, then $\mathrm{Stab}_G(w_1,w_2)\neq \{1_G\}$ by Lemma~\ref{lemma:M27,stabilizers}. If both $w_1$ and $w_2$ are non-zero, then again using Lemma~\ref{lemma:M27,stabilizers} one can verify by direct computation that \begin{equation*}\label{eq:M27,k,2} k_{(1,\varepsilon)}^{(1)}(w_1,w_2)=0=k_{(1,\varepsilon)}^{(2)}(w_1,w_2)\iff \mathrm{Stab}_G(w_1)=\mathrm{Stab}_G(w_2). \end{equation*} Therefore $\mathrm{Stab}_G(w_1,w_2)=\mathrm{Stab}_G(w_1)\neq \{1_G\}$. The map $a\mapsto a^2$, $b\mapsto b$ extends to an automorphism $\alpha$ of $G$, and for $\chi=(\varepsilon,\varepsilon)$ we have $\chi\circ\alpha=(\varepsilon^2,\varepsilon)$, whereas for $\chi=(\varepsilon,1)$ we have $\chi\circ\alpha=(\varepsilon^2,1)$. So by Lemma~\ref{lemma:auto}, statement (ii) holds also for the weight $\chi=(\varepsilon^2,\varepsilon)$ and $\chi=(\varepsilon^2,1)$. The only information on $\varepsilon$ used in the constructions of relative invariants above was that it has multiplicative order $3$. Therefore we can replace it by the other element of multiplicative order $3$, namely by $\varepsilon^2$, and we get that (ii) holds also for the weights $\chi\in\{\varepsilon,\varepsilon^2), (\varepsilon^2,\varepsilon^2),(1,\varepsilon^2)\}$. \end{proof} \begin{theorem}\label{thm:sepbeta(M27)} eld$ has characteristic zero, and it contains an element of multiplicative order $9$. Then we have the equality eld(\mathrm{M}_{27})=10$. \end{theorem} \begin{proof} In view of Lemma~\ref{lemma:multfree}, take $v=(w_1,w_2,u),v'=(w'_1,w'_2,u')\in W_1\oplus W_2\oplus U$ with \begin{equation}\label{eq:proofM27assumption} eld[W_1\oplus W_2\oplus U]^G \text{ where } \deg(f)\le 10. \end{equation} We need to show that $G\cdot v=G\cdot v'$. Replacing $v'$ by an appropriate element in its $G$-orbit, we may assume by Proposition~\ref{prop:M27,V1+V2} that $(w_1,w_2)=(w'_1,w'_2)$. Moreover, $G\cdot u=G\cdot u'$, since $\mathsf{D}(G/G')=\mathsf{D}(\mathrm{C}_3\times\mathrm{C_3})=5$. Therefore it is sufficient to deal with the case when $(w_1,w_2)\in W_1\oplus W_2$ is non-zero. \emph{Case I.:} $\mathrm{Stab}_G(w_1,w_2)\neq \{1_G\}$. eld[W_1\oplus W_2\oplus U]^{G,\chi}$ eld[W_1\oplus W_2\oplus U]^{G,\chi^{-1}}$ of degree at most $6$ with $f(w_1,w_2)\neq 0$. Note that eld[W_1\oplus W_2\oplus U]^G$ has degree at most $9$. It follows by \eqref{eq:proofM27assumption} that $(fm)(v)=(fm)(v')$, implying in turn that $m(u)=m(u')$. This holds for all monomials eld[U]^{\langle b\rangle}$ with $\deg(m)\le 3$. Since $\mathsf{D}(\langle b\rangle)=3$, we conclude that $u$ and $u'$ belong to the same $\langle b\rangle$-orbit. 
Note that $\mathrm{Stab}_G(w_1,w_2)G'=\langle b\rangle G'$ by Lemma~\ref{lemma:M27,stabilizers}. So the $\langle b\rangle$-orbits of $u$ and $u'$ coincide with their $\mathrm{Stab}_G(w_1,w_2)$-orbits. Thus $\mathrm{Stab}_G(w_1,w_2)\cdot u=\mathrm{Stab}_G(w_1,w_2)\cdot u'$, and therefore $G\cdot (w_1,w_2,u)=G\cdot (w'_1,w'_2,u')$. \emph{Case II.:} $\mathrm{Stab}_G(w_1,w_2)=\{1_G\}$. We claim that $u=u'$; that is, we claim that $u_\chi=u'_\chi$ for all $\chi\in \widehat G$. This is obvious for $\chi=(1,1)\in \widehat G$, since $u$ and $u'$ eld[W_1\oplus W_2\oplus U]^{G,\chi^{-1}}$ with $\deg(f)\le 9$ and $f(w_1,w_2)\neq 0$. Then $ft_\chi$ is a $G$-invariant of degree at most $10$. Thus by \eqref{eq:proofM27assumption} we have $(ft_\chi)(w_1,w_2,u)=(ft_\chi)(w_1,w_2,u')$, implying in turn that $t_\chi(u)=t_\chi(u')$, i.e. $u_\chi=u'_\chi$. This finishes the proof of the inequality eld(G)\le 10$. To see the reverse inequality, consider the $G$-module $W_1\oplus U_{(1,\varepsilon^2)}$. Consider the vectors $v:=([1,0,0]^T,\varepsilon)$ and $v ':=([1,0,0]^T,\varepsilon^2)$. We have $(f_{(1,\varepsilon)}^{(4)}t_{(1,\varepsilon^2)})(v)=\varepsilon$, whereas $(f_{(1,\varepsilon)}^{(4)}t_{(1,\varepsilon^2)})(v')=\varepsilon^2$, so the invariant $(f_{(1,\varepsilon)}^{(4)} t_{(1,\varepsilon^2)}$) separates $v$ and $v'$. We claim that no $G$-invariant of degree at most $9$ separates $v$ and $v'$. The elements of eld[U_{(1,\varepsilon^2)}]^G= eld[t_{(1,\varepsilon)}^3]$ agree on $v$ and $v'$. Suppose for contradiction that there exists a multihomogeneous invariant $f=ht_{(1,\varepsilon^2)}$ (respectively $f=ht_{(1,\varepsilon^2)}^2$) of degree at most $9$ with $f(v)\neq f(v')$, where eld[W_1]^{G,(1,\varepsilon)}$ (respectively eld[W_1]^{G,(1,\varepsilon^2)}$). Then $h$ has a monomial of the form $x^d$ with non-zero coefficients (since $0=x_2(v)=x_3(v)= x_2(v')=x_3(v')$). Moreover, $x_1^d$ must be an $\langle a\rangle$-invariant monomial. However, the smallest $d$ for which $x_1^d$ is $\langle a\rangle$-invariant is $9$. This is a contradiction, because the degree of $h$ is strictly less than $9$. The inequality $\sepbeta(G,W_1\oplus U_{(1,\varepsilon)})\ge 10$ is proved. \end{proof} \section*{Acknowledgements} This research was partially supported by the Hungarian National Research, Development and Innovation Office, NKFIH K 138828. \begin{thebibliography}{ccc} \bibitem{aberth} O. Aberth, The elementary symmetric functions in a finite field of prime order, Illinois J. Math. 8 (1964), 132-138. \bibitem{bandeire-blumsmith-kileel-niles-weed-perry-wein} A. S. Bandeira, B. Blum-Smith, J. Kileel, J. Niles-Weed, A. Perry, A. S. Wein, Estimation under group actions: recovering orbits from invariants, Appl. Comput. Harmon. Anal. 66 (2023), 236-319. \bibitem{benson} D. J. Benson, Polynomial Invariants of Finite Groups, London Math. Society. Lecture Notes Series 190, Cambridge University Press, 1993. \bibitem{blumsmith-garcia-hidalgo-rodriguez} B. Blum-Smith, T. Garcia, R. Hidalgo, C. Rodriguez, Degree bounds for fields of rational invariants of $\mathbb{Z}/p\mathbb{Z}$ and other finite groups, J. Pure Appl. Algebra 228 (2024), Article ID 107693. \bibitem{cahill-contreras-hip} J. Cahill, A. Contreras, A. C. Hip, Stable separation of orbits for finite abelian group actions, J. Fourier Anal. Appl. 30 (2024), Paper No. 12, 16 p. \bibitem{cahill-iverson-mixon} J. Cahill, J. W. Iverson, D. G. Mixon, Towards a bilipschitz invariant theory, Appl. Comput. Harmon. Anal. 72 (2024), Article ID 101669, 27 p. 
\bibitem{cahill-iverson-mixon-packer} J. Cahill, J. W. Iverson, D. G. Mixon, D. Packer, Group-invariant max filtering, Foundations of Computational Mathematics (2024), https://doi.org/10.1007/s10208-024-09656-9 \bibitem{chevalley} C. Chevalley, Invariants of finite groups generated by reflections, Amer. J. Math. 77 (1955), 778-792. \bibitem{cziszter:C7rtimesC3} K. Cziszter, The Noether number of the non-abelian group of order $3p$, Period. Math. Hung. 68 (2014), 150-159. \bibitem{cziszter:p-group} K. Cziszter, The Noether number of $p$-groups, J. Algebra Appl. 18, (2019), article id. 1950066, 14 p. \bibitem{CzD:1} K. Cziszter and M. Domokos, Groups with large Noether bound, Ann. Inst. Fourier (Grenoble) 64, no. 3 (2014), 909-944. \bibitem{cziszter-domokos:indextwo} K. Cziszter, M. Domokos, The Noether number for the groups with a cyclic subgroup of index two, J. Algebra 399 (2014), 546-560. \bibitem{cziszter-domokos:lower bound} K. Cziszter, M. Domokos, Lower bounds on the Noether number, Transform. Groups 24 (2019), 823-834. \bibitem{cziszter-domokos-geroldinger} K. Cziszter, M. Domokos, A. Geroldinger, The interplay of invariant theory with multiplicative ideal theory and with arithmetic combinatorics, in: Scott T. Chapman, M. Fontana, A. Geroldinger, B. Olberding (Eds.), Multiplicative Ideal Theory and Factorization Theory, Springer-Verlag, 2016, pp. 43-95. \bibitem{cziszter-domokos-szollosi} K. Cziszter, M. Domokos, I. Sz\"oll\H osi, The Noether numbers and the Davenport constants of the groups of order less than 32, J. Algebra 510 (2018), 513-541. \bibitem{derksen-kemper} H. Derksen, G. Kemper, Computational Invariant Theory, Second Edition, Encyclopaedia of Mathematical Sciences 130, Invariant Theory of Algebraic Transformation Groups VIII, Springer-Verlag, Berlin, Heidelberg, 2015. \bibitem{domokos:typical} M. Domokos, Typical separating invariants, Transform. Groups 12 (2007), 49-63. \bibitem{domokos:abelian} M. Domokos, Degree bound for separating invariants of finite abelian groups, Proc. Amer. Math. Soc. 145 (2017), 3695-3708. \bibitem{domokos-hegedus} M. Domokos, P. Heged\H us, Noether's bound for polynomial invariants of finite groups, Arch. Math. 74 (2000), 161-167. \bibitem{domokos-miklosi} M. Domokos, B. Mikl\'osi, Symmetric polynomials over finite fields, Finite Fields Appl. 89 (2023), Article ID 102224, 16p. \bibitem{domokos-schefler} M. Domokos, B. Schefler, On the separating Noether number of small groups over finite fields in non-modular characteristic, in preparation. \bibitem{domokos-szabo} M. Domokos, E. Szab\'o, Helly dimension of algebraic groups, J. London Math. Soc. (2) 84 (2011), 19-34. \bibitem{draisma-kemper-wehlau} J. Draisma, G. Kemper, D. Wehlau, Polarization of separating invariants, Canad. J. Math. 60 (2008) 556-571. \bibitem{dufresne} E. Dufresne, Separating invariants and finite reflection groups, Adv. Math. 221 (2009), 1979-1989. \bibitem{dufresne-elmer-kohls} E. Dufresne, J. Elmer, M. Kohls, The Cohen-Macaulay property of separating invariants of finite groups, Transform. Groups 14 (2009), 771-785. \bibitem{dufresne-jeffries} E. Dufresne, J. Jeffries, Separating invariants and local cohomology, Adv. Math. 270 (2015), 565-581. \bibitem{elmer-kohls} J. Elmer, M. Kohls, Zero-separating invariants for finite groups, J. Algebra 411 (2014), 92-113. \bibitem{fine} N. J. Fine, On the asymptotic distribution of the elementary symmetric function (mod $p$), Trans. Amer. Math. Soc. 69 (1950), 109-129. \bibitem{fleischmann} P. 
Fleischmann, The Noether bound in invariant theory of finite groups, Adv. Math. 156 (2000), 23-32. \bibitem{fogarty} J. Fogarty, On Noether's bound for polynomial invariants of a finite group, Electron. Res. Announc. Amer. Math. Soc. 7 (2001), 5-7. \bibitem{GAP4} The GAP~Group: GAP -- Groups, Algorithms, and Programming, Version 4.12.2; 2022, \url{https://www.gap-system.org}. \bibitem{huffman} W. C. Huffman, Polynomial invariants of finite linear groups of degree two, Canad. J. Math. 32 (1980), 317-330. \bibitem{hunziker} M. Hunziker, Classical invariant theory for finite reflection groups, Transform. Groups 2 (1997), 147-163. \bibitem{kemper} G. Kemper, Separating invariants, J. Symb. Comput. 44 (2009),1212-1222. \bibitem{kemper-lopatin-reimers} G. Kemper, A. Lopatin, F. Reimers, Separating invariants over finite fields, J. Pure Appl. Algebra 226 (2022), paper no. 106904 \bibitem{knop} F. Knop, On Noether's and Weyl's bound in positive characteristic, in "Invariant Theory in All Characteristics", (Ed.: H. E. A. Eddy Campbell and D. L. Wehlau), CRM Proceedings and Lecture Notes 35, Amer. Math. Soc., Providence, Rhode Island, pp. 175-188, 2004. \bibitem{kohls-kraft} M. Kohls, H. Kraft, Degree bounds for separating invariants, Math. Res. Lett. 17 (2010), 1171-1182. \bibitem{kohls-sezer} M. Kohls, M. Sezer, Separating invariants for the Klein four group and cyclic groups, Int. J. Math. 24 (2013), Article ID 1350046. \bibitem{lopatin-muniz} A. Lopatin, P. A. Muniz, Separating invariants for two-dimensional orthogonal groups over finite fields, Lin. Alg. Appl. 692 (2024), 71-83. \bibitem{lopatin-reimers} A. Lopatin, F. Reimers, Separating invariants for multisymmetric polynomials, Proc. Amer. Math. Soc. 149 (2021), 497-508. \bibitem{noether} E. Noether, Der Endlichkeitssatz der Invarianten endlicher Gruppen, Math. Ann. 77 (1916), 89-92. \bibitem{popov-vinberg} V. L. Popov, E. B. Vinberg, Invariant Theory, in Algebraic Geometry IV, Encyclopedia of Mathematical Sciences, vol. 55, Springer-Verlag, Berlin-Heidelberg, 1994. \bibitem{richman} D. R. Richman, Invariants of finite groups over fields of characteristic $p$, Adv. Math. 124 (1996), 25-48. \bibitem{CoCalc} Sagemath, Inc., CoCalc – Collaborative Calculation and Data Science, 2020, \url{https://cocalc.com}. \bibitem{schefler_c_n^r} B. Schefler, The separating Noether number of the direct sum of several copies of a cyclic group, Proc. Amer. Math. Soc. 153 (2025), 69–79. \bibitem{schefler_rank2} B. Schefler, The separating Noether number of abelian groups of rank two, J. Comb. Theory, Ser. A 209 (2025), Paper no. 105951. \bibitem{schmid} B.~J. Schmid, Finite groups and invariant theory, in ``Topics in invariant theory'' (M.-P. Malliavin, ed.), Lecture notes in mathematics, no. 1478, Springer, 1989-90, pp.~35-66. \bibitem{sezer:1} M. Sezer, Sharpening the generalized Noether bound in the invariant theory of finite groups, J. Algebra 254 (2002), 252-263. \bibitem{sezer} M. Sezer, Explicit separating invariants for cyclic $P$-groups, J. Comb. Theory, Ser. A (2011), 681-689. \bibitem{wehlau} D. L. Wehlau, The Noether number in invariant theory, C. R. Math. Acad. Sci., Soc. R. Can. 28 (2006), 39-62. \end{thebibliography} \end{document}
2412.08600v2
http://arxiv.org/abs/2412.08600v2
Chebotarev's theorem for roots of unity of square free order
\documentclass[a4paper,12pt,reqno]{amsart} \usepackage{latexsym} \usepackage{amsmath,amsthm,amssymb,amscd} \usepackage{fullpage} \usepackage{xcolor} \usepackage{parskip} \usepackage{mathtools} \usepackage{thmtools} \mathtoolsset{showonlyrefs} \usepackage{csquotes} \usepackage{comment} \usepackage[activate={true,nocompatibility},final,tracking=true,kerning=true,spacing=true]{microtype} \usepackage[pdfencoding=unicode,pdftex,bookmarks=true,bookmarksnumbered]{hyperref} \definecolor{citecolour}{rgb}{0.0, 0.0, 0.8} \colorlet{linkcolour}{green!50!black} \hypersetup{colorlinks,breaklinks, linkcolor=linkcolour,citecolor=citecolour, filecolor=linkcolour, urlcolor=linkcolour} \usepackage{txfonts,pxfonts,tikz} \usepackage[T1]{fontenc} \newtheorem{prevtheorem}{Theorem} \renewcommand*{\theprevtheorem}{\Alph{prevtheorem}} \newtheorem{theorem}{Theorem} \newtheorem*{propositionno}{Proposition} \newtheorem{proposition}{Proposition} \newtheorem{claim}{Claim} \newtheorem{property}{Property} \newtheorem{remark}{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{conjecture}{Conjecture} \newtheorem{question}{Question} \DeclareMathOperator{\inj}{Inj} \DeclareMathOperator{\adj}{adj} \newenvironment{proofofA}{{\bf {Proof of Theorem \ref{Thm:A}.} }}{\hfill $\blacksquare$ \\} \newenvironment{proofofB}{{\bf {Proof of Theorem \ref{Thm:B}.} }}{\hfill $\blacksquare$ \\} \newenvironment{proofofC}{{\bf {Proof of Theorem \ref{Thm:injectorratio}.} }}{\hfill $\blacksquare$ \\} \newenvironment{proofof}{{\bf {Proof.} }}{\hfill $\blacksquare$ \\} \newenvironment{proofofPropA}{{\bf {Proof of \autoref{Prop:Unc-A}.} }}{\hfill $\blacksquare$ \\} \def\Ker{\mathrm{Ker}} \def\Z{\mathbf{Z}} \def\C{\mathbf{C}} \def\N{\mathbf{N}} \def\Q{\mathbf{Q}} \def\R{\mathbf{R}} \def\F{\mathbf{F}} \def\supp{\mathrm{supp}} \def\chb{\mathrm{chb}} \def\ord{\mathrm{ord}} \usepackage{graphicx} \begin{document} \title{ Chebotarev's theorem for roots of unity of square free order} \author{Maria Loukaki} \address{Department of Mathematics \& Applied Mathematics, University of Crete, Greece} \email{[email protected]} \keywords{Chebotarev's Theorem, Principal Matrices, Roots of Unity, Uncertainty Principle} \subjclass[2020]{15A15, 11R04, 11Z05, 42A99} \begin{abstract} Let $p$ be a prime number and $\zeta_p$ a primitive $p$-th root of unity. Chebotarev's theorem states that every square submatrix of the $p \times p$ matrix $(\zeta_p^{ij})_{i,j=0}^{p-1}$ is non-singular. In this paper we prove the same for principal submatrices of $(\zeta_n^{ij})_{i,j=0}^{n-1}$, when $n=pr$ is the product of two distinct primes, and $p$ is a large enough prime that has order $r-1$ in $\Z_r^*$. As an application, an uncertainty principle for cyclic groups of order $n$ is established when $n=pr$ as described above. \end{abstract} \maketitle \section{Introduction} For any integer $ n$, let $ \zeta_n$ denote a primitive $n$-th root of unity. For a prime $p$ and a positive integer $k$, we denote the field with $ p^k $ elements as $\F_{p^k}$. The matrix $(\zeta_p^{ij})$ with $i,j \in \{ 0, 1, \cdots, p-1\}$ is a Vandermonde matrix with non-zero determinant. It was Chebotarev, in 1926, who first noticed that any square submatrix of the above inherits the same property (see \cite{Lenstra}). He proved \begin{theorem}\label{Thm:Chebotarev} If $I, J \subseteq \{ 0, 1, 2, \cdots p-1 \}$ with $|I|=|J|$ then the matrix $(\zeta_p^{ij})_{i\in I, j \in J}$ has non-zero determinant. 
\end{theorem} After Chebotarev's initial proof, several others have emerged in the literature using different approaches; Dieudonne \cite{Dieudonne}, Evans and Isaacs \cite{EvIs}, Goldstein et al. \cite{Gold}, Tao \cite{Tao}, Frenkel \cite{Frenkel}, just to mention a few. If $n$ is not a single prime, we still have the Vandermonde matrix $(\zeta_n^{ij})_{0 \leq i, j \leq n-1}$ which is nonsingular, but we can find examples of square submatrices of the original that are singular (see \cite{CL.24}). On the other hand, no counterexample of a singular principal submatrix when $n$ is a square-free integer is known. (By definition, a principal submatrix of $(\zeta_n^{ij})_{0 \leq i,j \leq n-1}$ is any submatrix $(\zeta_n^{ij})_{i,j \in I}$ for some $I \subseteq \{0, 1, \cdots, n-1 \}$). The following conjecture was first stated in \cite{Cabreli}, with a more general version presented in \cite{CL.24}. \begin{conjecture}\label{Conjecture} If $n$ is square-free then all principal submatrices of $(\zeta_n^{ij})$ with $i,j \in \{ 0, 1, \cdots, n-1\}$ have nonzero determinant. \end{conjecture} In \cite{Cabreli}, the conjecture was confirmed for all $2 \times 2$ principal submatrices. Even more was actually proved: for $n \geq 2$, all principal $2 \times 2$ submatrices of $(\zeta_n^{ij})_{0\leq i,j \leq n-1}$ are invertible if and only if $n$ is square-free (see Corollary 2.14 in \cite{Cabreli}). In addition to the above, in \cite{CL.24} it was proved that, for $n >4$, all principal $3 \times 3$ submatrices of $(\zeta_n^{ij})_{0\leq i,j \leq n-1}$ are invertible if and only if $n$ is square free (see Theorem 1.1 in \cite{CL.24}). These results also imply that any $(n-2) \times (n-2)$ and any $(n-3) \times (n-3)$ principal submatrix of $(\zeta_n^{ij})_{0\leq i,j \leq n-1}$ is non-singular, see Section \ref{Complementary}. The truth of Conjecture \ref{Conjecture} has important implications for problems connected to bases of finite dimensional linear spaces. For instance, in \cite{Cabreli}, it is shown that two ordered bases of $\mathbf{C}^n$ are \textit{woven} (this means that we can drop some elements of one basis as long as we replace them with the corresponding elements of the other basis, and so keep the basis property) if and only if all principal minors of the change-of-basis matrix are nonsingular. When the two bases of $\mathbf{C}^n$ in consideration are the usual basis $\{e_1, \ldots, e_n\}$ and the Fourier basis $\{u_i = (1, \zeta_n^i, \zeta_n^{2i}, \ldots, \zeta_n^{(n-1)i}), i=0, 1, \ldots, n-1\}$ the validity of Conjecture \ref{Conjecture} for the dimension $n$ implies that for any subset $I$ of $\{0, 1, \ldots, n-1\}$ a vector $x \in \mathbf{C}^n$ can be reconstructed by the values $x_i$, $i \in I$, and $\widehat{x}_j$, $j \notin I$. Here $\widehat{x}$ is the Fourier Transform of the vector $x$ (in the cyclic group of order $n$) or, in other words, the coefficients of $x$ written in the Fourier basis $\{u_i\}$. If $n=p r$ with $p,r$ distinct primes, then $\Z_p \times \Z_r \cong \Z_n$ under the group isomorphism that sends \begin{equation}\label{eq:isomo} \Z_p \times \Z_r \ni (i_p, \, i_r) \longrightarrow i \in \Z_n, \end{equation} with $i \equiv i_p \cdot r +i_r \cdot p \pmod{ pr}$. So $i \equiv i_p r \pmod p$ and $i \equiv i_r p \pmod r$. Assume now that a primitive $n$-th root $\zeta_n$ of $1$ is fixed. 
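Before moving on, we remark that \autoref{Conjecture} is easy to test numerically in small square-free cases. The following short Python snippet is included purely as an illustrative floating-point sanity check (it is not part of any argument, and the choice $n=15$ and the bound on the order of the minors are arbitrary); it confirms that, in this experiment, all principal minors of $(\zeta_{15}^{ij})$ of order at most $4$ stay well away from zero.
\begin{verbatim}
# Illustration only: a floating-point check of the conjecture for n = 15.
import itertools
import numpy as np

n = 15
zeta = np.exp(2j * np.pi / n)
F = np.array([[zeta ** (i * j) for j in range(n)] for i in range(n)])

smallest = min(
    abs(np.linalg.det(F[np.ix_(I, I)]))   # principal minor indexed by I
    for size in range(1, 5)
    for I in itertools.combinations(range(n), size)
)
print(smallest)   # observed to be bounded well away from zero
\end{verbatim}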
Using the isomorphism above, for every $i \in \Z_n$, we have \begin{equation}\label{eq:zn^i} \zeta_n^i = (\zeta_n^r)^{i_p} \cdot (\zeta_n^p)^{i_r}= \zeta_p^{i_p} \cdot \zeta_r^{i_r}, \end{equation} where $\zeta_p = \zeta_n^r$ and $\zeta_r = \zeta_n^p$ are primitive $p$-th and $r$-th roots of unity, respectively. Note also that for every $i, j \in \Z_n$ we get \[ \zeta_n^{ij}= \zeta_p^{ri_pj_p} \cdot \zeta_r^{pi_rj_r}. \] If $I \subseteq \Z_n$ (where $n = pr$), we define \[ I_r^k = \{ i \in I \mid i_r = k \} \] for each $k = 0, 1, \ldots, r-1$. Observe that $I = \dot\bigcup_{k}\, I_r^k$. In this paper, we heavily use the approach introduced by P. E. Frenkel in \cite{Frenkel} to partially address \autoref{Conjecture}. Our first main theorem deals with the case where \( n=2p \) for some odd prime \( p \). \begin{prevtheorem}\label{Thm:A} Assume that $n=2p$ with $p$ an odd prime and let $I, J \subseteq \Z_{n}$ with $|I_2^k| = |J_2^k|$, for $k=0,1$. Then the matrix $(\zeta_{n}^{ij})_{i \in I, j\in J}$ has nonzero determinant. \end{prevtheorem} As principal submatrices are exactly those with $I = J$ (in the notation of \autoref{Thm:A}) we get: \begin{corollary}\label{Cor:n=2p} If $n=2p$, where $p$ is an odd prime, then every principal submatrix of $(\zeta_n^{ij})_{0 \leq i,j\leq n-1}$ has non-zero determinant. \end{corollary} In \cite{Tao}, T. Tao presented a new proof of Chebotarev's theorem, demonstrating its equivalence to an improved uncertainty principle for complex-valued functions on cyclic groups of prime order. This principle asserts that \[ |\supp(f)| + |\supp(\widehat{f})| \geq p + 1, \] for every non-zero complex function $f : \Z_p \to \C$, where $\widehat{f}$ is the Fourier transform of $f$. A. Biró \cite{Biro} and R. Meshulam \cite{Meshulam} also independently established this lower bound on the sum of the supports of $f$ and $\widehat{f}$. In the same spirit, using \autoref{Thm:A} we can show \begin{proposition}\label{Prop:Unc-A} Let $ p$ be an odd prime. If $f: \Z_2 \times \Z_p \to \C $ is a non-zero function and $S_i := \{i\} \times \Z_p = \{(i, k) \mid k \in \Z_p\}$ for $ i=0,1$, then for at least one of $i=0 $ or $ i=1 $, we have \[ |\supp(f) \cap S_i| + |\supp(\widehat{f} \, ) \cap S_i| \geq p + 1. \] \end{proposition} In the general case where \( n = pr \) and \( p, r \) are distinct odd primes, we can establish an analogue of \autoref{Thm:A} only under two additional assumptions about \( p \) and \( r \). First, \( p \) must be a primitive element in \( \Z_r \) (that is, $p$ is a generator of $\Z_r^*$ or, equivalently, the order of $p \pmod r$ is $r-1$). Second, \( p \) must exceed a constant \( \Gamma_r \) that depends on \( r \) (details about \( \Gamma_r \) are provided in \autoref{Sec:n=pr}). The constant $\Gamma_r$ is required in our theorem because we are relying on the following theorem of G. Zhang (Theorem A in \cite{Zhang}), which is the analogue of Chebotarev's theorem for finite fields. \begin{theorem}[Zhang]\label{Zhang} Let $p, r$ be distinct odd primes with $p$ primitive in $\Z_r$ and $p > \Gamma_r$. Suppose that $\omega$ is a primitive $r$-th root of unity in $\F_{p^{r-1}}$. Then all square submatrices of $(\omega^{ij})_{i,j= 0}^{r-1}$ have nonzero determinants. \end{theorem} Our main theorem reads: \begin{prevtheorem} \label{Thm:B} Let $ n = pr$, where $ p $ and $r$ are distinct odd primes such that $p$ is primitive in $\Z_r$ and $ p >\Gamma_r$.
If $I, J \subseteq \Z_{n}$ with $|I_r^k| = |J_r^k|$, for all $k=0,1, \cdots, r-1$, then the matrix $(\zeta_{n}^{ij})_{i \in I, j\in J}$ has nonzero determinant. \end{prevtheorem} As with \autoref{Cor:n=2p}, \autoref{Thm:B} implies: \begin{corollary}\label{Cor:n=pr} Assume $ n = pr$, where $ p $ and $r$ are distinct odd primes such that $p$ is primitive in $\Z_r$ and $ p >\Gamma_r$. Then every principal submatrix of $(\zeta_n^{ij})_{0 \leq i,j\leq n-1}$ has non-zero determinant. \end{corollary} For complex-valued functions $f$ defined on the cyclic group $\Z_r \times \Z_p$ with $p, r$ as in \autoref{Thm:B}, the following uncertainty principle holds. \begin{proposition}\label{Prop:Unc-B} Let $p$ and $r$ be distinct odd primes, with $p$ being primitive in $\Z_r$ and $p > \Gamma_r$. For each $i = 0, 1, \ldots, r-1$, define $S_i := \{i\} \times \Z_p= \{(i, k) \mid k \in \Z_p\} \subseteq \Z_r \times \Z_p$. If $f: \Z_r \times \Z_p \to \C$ is a non-zero function, then for at least one $i \in \{0, 1, \ldots, r-1\}$, we have \[ |\supp(f) \cap S_i| + |\supp(\widehat{f}) \cap S_i| \geq p + 1. \] \end{proposition} {\bf Acknowledgment.} I thank J. Antoniadis for his support and valuable discussions, and G. Pfander for introducing me to the problem. \section{Preliminaries}\label{Sec:prelim} We begin with a lemma whose proof relies on fundamental concepts from algebraic number theory, briefly outlined here. Assume that $K$ is an algebraic number field of degree $[K:\Q]=n$ with $K/\Q$ a Galois extension. If $R$ is the ring of algebraic integers of $K$ then any prime $p \in \Z$ satisfies \[ pR = \prod_{i=1}^r P_i^e, \] where $\{ P_i \}$ are the distinct prime ideals of $R$ lying above the ideal $p\Z$, and $e:= e(P_i/p)$ is the ramification index of $P_i$ in $K/\Q$, which is the same for every $i$ as the extension is Galois. Additionally, the residual degree $f_i:= f(P_i/p)$ of $P_i$ in $K/\Q$ is the same constant $f$ for each $i=1, \ldots, r$, and satisfies $f = [R/P_i : \Z/p\Z]$. Furthermore, \begin{equation}\label{eq:efr} n=efr. \end{equation} Both the ramification index and the residual degree satisfy transitivity rules, in the sense that if $L/K$ is a Galois extension, $S$ is the ring of algebraic integers of $L$, $P$ is one of the prime ideals $P_i$ of $R$ above $p$, and $Q$ is a prime ideal of $S$ above $P$, then \begin{align*} f(Q/p)&= f(Q/P)\cdot f(P/p), \\ e(Q/p)&= e(Q/P)\cdot e(P/p). \end{align*} The transitivity holds more generally for separable extensions, but we only need it here for Galois extensions. All the above, and much more, can be found in any book on algebraic number theory; see for example Chapter 11 in \cite{Ribe}. Assume now that $n=pm$ with $m$ coprime to $p$ and let $K= \Q(\zeta_n)$ be a cyclotomic field with $R= \Z[\zeta_n]$ its ring of algebraic integers. In this special case the decomposition of $pR$ into prime ideals in $R$ is given as \[ pR= \prod_{i=1}^r P_i^e \] where $e= \phi(p)=p-1$ and $r= \frac{\phi(m)}{h}$ with $h$ being the order of $p$ in the multiplicative group $\Z_m^*$. Furthermore, $f(P_i/p) = h$ and the norm of the ideals $P_i$ equals $N(P_i)=p^h$, for all $i=1, \cdots, r$. For the general theorem regarding the factorization of a rational prime $p$ into prime ideals in the ring of integers $\Z[\zeta_t]$ of a cyclotomic field $\Q(\zeta_t)$ with $p \mid t$, see Theorems 8.7 and 8.8 in \cite{Mann}. We are now ready to prove our first lemma, which generalizes Lemma 1 in \cite{Frenkel}. \begin{lemma}\label{Lem:MaximalIdeal} Let $p$ be an odd prime and $n = pm$ for some integer $m$ such that $(p, m) = 1$.
The ideal $\langle 1 - \zeta_p \rangle = (1 - \zeta_p)\mathbb{Z}[\zeta_n]$ is prime (and hence maximal) in $\mathbb{Z}[\zeta_n]$ if and only if the order of $p$ in $\mathbb{Z}_m^*$ is $\phi(m)$ and so $\mathbb{Z}_m^*$ is cyclic and $p$ is one of its generators. In this case, the quotient \[ \mathbb{Z}[\zeta_{n}]/\langle 1 - \zeta_p \rangle = \F_{p^{\phi(m)}} \] is a finite field of characteristic $p$ and order $p^{\phi(m)}$. Moreover, $m$ can only be one of $m=1,2,4$, $q^k$, or $2q^k$ for some odd prime $q$ and a positive integer $k$. \end{lemma} \begin{proof} Let $ L = \Q(\zeta_n) $ and $ K = \Q(\zeta_p) $ be cyclotomic fields, with \( S = \Z[\zeta_n] \) and \( R = \Z[\zeta_p] \) their ring of algebraic integers. Clearly $[L:\Q]= \phi(n) = \phi(p) \phi(m)$, $[K:\Q]= \phi(p)$ and thus $[L:K]= \phi(m)$. Furthermore, $L= K(\zeta_m)$ is also a Galois extension of $K$ of degree $\phi(m)$. It is well known, see for example Section 11.3 Proposition N in \cite{Ribe}, that the ideal $P = ( 1 - \zeta_p ) R$ is a prime ideal of $R$, and it is the only ideal of $R$ above $p\Z$. In addition \[ pR= P^{p-1}. \] Hence the ramification index $e(P/p)$ of $P$ in $K/\Q$ is $e(P/p)=p-1$ and its residual degree $f(P/p)= 1$. Now, if $\{ Q_i\}_{i=1}^r$ is the set of prime ideals of $S$ lying above $p$ then \[ pS= \prod_{i=1}^r Q_i^{e(Q/p)} \] where $Q=Q_1$, $e(Q/p)=p-1$ and $r= \frac{\phi(m)}{h}$ where $h$ is the order of $p$ in the multiplicative group $\Z_m^*$, by our preliminary remarks on the decomposition of a rational prime into prime ideals in cyclotomic fields. In addition, $f(Q/p)=h$ and the norm of $Q_i$ is $N(Q_i)= p^h$. Observe now that $\{Q_i\}_{i=1}^r$ is exactly the set of prime ideals of $S$ above $P$ as well, while the extension $L/K$ is also Galois, and thus \[ PS = \prod_{i=1}^r Q_i^{e(Q/P)}. \] We conclude that the ideal $PS= (1-\zeta_p)\Z[\zeta_n]$ is prime in $S= \Z[\zeta_n]$ if and only if $1=r =\frac{\phi(m)}{h}$. Hence the first part of the lemma follows. By the transitivity of the ramification index we get \[ p-1 = e(Q/p) = e(Q/P)\cdot e(P/p)= e(Q/P) \cdot (p-1). \] Hence $e(Q/P)=1$. Thus, in the case that $r=1$ we get $PS=Q$ and so \[ h=f(Q/p) = [S/Q:\Z/p\Z]= [ \mathbb{Z}[\zeta_{n}]/\langle 1 - \zeta_p \rangle :\Z_p]. \] We conclude that $\mathbb{Z}[\zeta_{n}]/\langle 1 - \zeta_p \rangle $ is a finite field of order $p^h= p^{\phi(m)}$. The final part of the lemma states the well-known fact, first proved by Gauss, that the only integers $m$ for which $ \Z_m^* $ is cyclic are $1,2, 4, q^k $, or $ 2q^k $ for some odd prime $ q $ and integer $ k$. \end{proof} The above lemma easily implies: \begin{lemma} \label{Lem:z_p-primitive} Assume $p, r$ are distinct odd primes such that $p$ is primitive in $\Z_r$. Then \[ \Z[\zeta_{pr}]/\langle 1- \zeta_p \rangle = \F_{p^{r-1}} \] and the image $\bar{\zeta_r}$ of $\zeta_r \in \Z[\zeta_{pr}]$ in $\Z[\zeta_{pr}]/\langle 1- \zeta_p \rangle$ is also a primitive $r$-th root of unity in the field $\Z[\zeta_{pr}]/\langle 1- \zeta_p \rangle$. \end{lemma} \begin{proof} We only need to show that $\bar{\zeta_r}$ has order $r$ in $\Z[\zeta_{pr}]/\langle 1- \zeta_p \rangle$. Clearly $\bar{\zeta_r}^r= \bar{1}$ and suppose that $\bar{\zeta_r}^k= \bar{1}$ for some $0 < k < r$. Hence $\zeta_r^k-1 \equiv 0 \pmod{(1- \zeta_p)}$ and so $(1- \zeta_p) \mid (\zeta_r^k-1)$ in $Z[\zeta_{pr}]$. If $L = \Q[\zeta_{pr}]$, $M= \Q[\zeta_r]$ and $K= \Q[\zeta_p]$ then $[L:M]= p-1$ and $[L:K]= r-1$. 
Additionally, \( N_{L/\Q}(1- \zeta_p) = p^{r-1} \): this follows from \autoref{Lem:MaximalIdeal}, or it can be computed directly as follows: \[ N_{L/\Q}(1- \zeta_p) = N_{K/\Q}\bigl(N_{L/K}(1 - \zeta_p)\bigr) = N_{K/\Q}\bigl((1 - \zeta_p)^{r-1}\bigr) = \left(\prod_{i=1}^{p-1} (1-\zeta_p^i)\right)^{r-1} = \Phi_p(1)^{r-1} = p^{r-1}. \] For $0 < k < r$, as $i$ ranges from $1$ to $r-1$, the elements $\zeta_r^{ik}$ run over all primitive $r$-th roots of unity, so that $N_{M/\Q}(1 - \zeta_r^k)= \prod_{i=1}^{r-1} (1-\zeta_r^{ik}) = \Phi_r(1)= r$. Hence \[N_{L/\Q}(1- \zeta_r^k)= N_{M/\Q}\bigl(N_{L/M}(1 - \zeta_r^k)\bigr)= N_{M/\Q}\bigl((1 - \zeta_r^k)^{p-1}\bigr)= r^{p-1}. \] If $(1 - \zeta_p) \mid (\zeta_r^k - 1)$ in $\Z[\zeta_{pr}]$ we would have \[ p^{r-1}= N_{L/\Q} (1-\zeta_p) \, \big| \, N_{L/\Q}(\zeta_r^k-1)= r^{p-1}. \] But this divisibility would have to hold in $\Z$, which is clearly impossible; this contradiction completes the proof of the lemma. \end{proof} We conclude the algebraic number theory preliminaries with two remarks. \begin{remark}\label{remark1} For every odd prime $p$, the element $1 - \zeta_p$ does not divide $2$ in $\Z[\zeta_p]$. If it did, then $N_{\Q(\zeta_p)/\Q}(1 - \zeta_p)$ would divide $N_{\Q(\zeta_p)/\Q}(2)$ in $\Z$, leading to a contradiction since $N_{\Q(\zeta_p)/\Q}(1 - \zeta_p) = p$ and $N_{\Q(\zeta_p)/\Q}(2) = 2^{p-1}$. \end{remark} \begin{remark}\label{remark2} Let $D$ be a Dedekind domain and $P = \langle a \rangle \neq 0$ a prime principal ideal of $D$. For any non-zero element $d \in D$, there is a largest integer $k \geq 0$ such that $a^k$ divides $d$. This follows from the unique factorization of the ideal $\langle d \rangle$ into a product of finite powers of prime ideals; indeed, $a^k \mid d$ if and only if $P^k$ appears in the prime factorization of $\langle d \rangle$, and thus $0 \leq k < \infty$. \end{remark} For any ring $R$ and any polynomial $g(x) \in R[x]$ we denote by $|\supp(g(x))|$ the number of nonzero coefficients of $g(x)$. The following is Lemma 2 in \cite{Frenkel} and for completeness we include its proof here. \begin{lemma}[Frenkel]\label{Lem:Frenkel} Let $ \F$ be a finite field of characteristic $p$ and $0 \neq g(x) \in \F[x]$ be a polynomial of degree $<p$. If $0 \neq a \in \F$ is a root of $g(x) $ with multiplicity $t$ then \[ t< |\supp(g(x))|. \] \end{lemma} \begin{proof} We induct on the degree of $g(x)$, with the base case, that of constant polynomials, being trivially true. Assume now that the lemma holds for all polynomials of degree $<k$, for some fixed $1 \leq k <p$, and take $g(x)$ of degree $k$. If $g(0)= 0 $, then $|\supp(g(x))|= |\supp(g(x)/x)|$ and $a$ is also a root of $g(x)/x$ with multiplicity $t$. By induction the lemma holds for $g(x)/x$ and thus for $g(x)$. So we may assume that $g(0) \neq 0 $. Then $|\supp(g'(x))| = |\supp(g(x))|-1$ and $a$ is a root of $g'(x)$ with multiplicity $\geq t-1$. Because $g(x)$ is of positive degree $k < p$, its derivative $g'(x) \neq 0$, so by induction $t-1 < |\supp(g'(x))| = |\supp(g(x))|-1$, that is, $t < |\supp(g(x))|$. \end{proof} \section{The case $n=2p$ }\label{Sec:n=2p} Throughout this section $n=2p$ with $p$ being an odd prime. Observe that $\zeta_n= -\zeta_p$ for a primitive $p$-th root of unity $\zeta_p$, and hence $\Z[\zeta_n]=\Z[\zeta_p]$. \begin{proofofA} The theorem is equivalent to saying that if numbers $z_j \in \Q(\zeta_n)$ $(j \in J)$ satisfy $\sum_{j \in J}z_j \zeta_n^{ij}=0$ for every $i \in I$, then all $z_j$ must be zero. Clearing denominators, we may assume that $z_j \in \Z[\zeta_p]$.
The ideal $P= \langle 1 - \zeta_p \rangle $ is prime in the Dedekind domain $\Z[\zeta_p]$ and thus \autoref{remark2} implies that the maximum power of $1 - \zeta_p$ dividing $z_j$ is some non-negative integer $k_j$. Dividing, if necessary, by $(1-\zeta_p)^{\min_j\{k_j\}}$, we may assume that there is at least one $z_j$ that is not divisible by $ 1 - \zeta_p$. Thus we obtain the polynomial \begin{equation}\label{eq:s(x)} r(x) = \sum_{j \in J }z_jx^{j} \in \Z[\zeta_p][x] \end{equation} which vanishes at $\zeta_n^i$ for all $i \in I$, and is such that not all $z_j$ are divisible by $1-\zeta_p$. Using the isomorphism and notation from \eqref{eq:isomo} and \eqref{eq:zn^i} with $r=2$, we get $\zeta_{2p}^i= \zeta_p^{i_p} \cdot (-1)^{i_2}$, for every $i \in I$. Next we define a polynomial in two variables \begin{equation}\label{eq:g(x)} g(x,y) = \sum_{j \in J }z_jx^{j_p}y^{j_2} \in \Z[\zeta_p][x,y]. \end{equation} We clearly have \begin{equation}\label{eq:rootsOfg(x)} g(\zeta_p^{2i_p}, (-1)^{i_2}) = \sum_{j \in J }z_j \zeta_p^{2i_pj_p} (-1)^{i_2j_2} = \sum_{j \in J }z_j \zeta_n^{ij}= 0 \end{equation} for all $i \in I$. Consider the sets \begin{equation}\label{eq:I^k} J_2^k = \{ j \in J \mid j_2 = k \}, \end{equation} for $k=0,1$. {\bf Case 1. } Assume first that $J = J_2^1$ and thus $J_2^0 = \emptyset$. Consequently, since $|I_2^k| = |J_2^k|$, we have $I = I_2^1$ and $I_2^0 = \emptyset$. Thus the polynomial \[ T(x):= g(x,-1) = - \sum_{j \in J }z_jx^{j_p} \in \Z[\zeta_p][x] \] satisfies $T(\zeta_p^{2i_p})=0$ for all $i \in I$. So $T(x)$ is divisible by $\prod_{i \in I}(x- \zeta_p^{2i_p})$. Observe that all elements $\{ i_p \mid i \in I_2^1\}$ are distinct in $\Z_p$ and thus all $\zeta_p^{2i_p}$ are also distinct $p$-th roots of unity (as $\zeta_p^2$ is also a primitive $p$-th root). Now we pass to the quotient $\Z[\zeta_p] /\langle 1 - \zeta_p \rangle $ by applying the homomorphism $\Z[\zeta_p] \to \Z[\zeta_p]/\langle 1 - \zeta_p \rangle \cong \F_p $ to the coefficients of $T(x)$ to get the polynomial $\bar{T}(x) \in \F_p[x]$. Clearly $(x- \bar{1})^{|I|}$ divides $\bar{T}(x)$. As $|\supp(\bar{T}(x))| \leq |J|$ and $|I|=|J|$, \autoref{Lem:Frenkel} implies that $\bar{T}(x) = \bar{0}$ in $ \Z[\zeta_p]/\langle 1 - \zeta_p \rangle $. Therefore, all coefficients $z_j$ of $T(x)$ are divisible by $1-\zeta_p$, contradicting the initial selection of $z_j$ in $r(x)$. We conclude that Case 1 is impossible. The case $J = J_2^0$ similarly results in a contradiction, so we move on to the next case. {\bf Case 2. } Neither $J_2^0$ nor $J_2^1$ is empty. Consider the polynomials \begin{align} T_0(x)&:=g(x,1)= \sum_{j \in J} z_j x^{j_p}= S_0(x) +S_1(x)\\ T_1(x)&:=g(x,-1)= \sum_{j \in J} z_j x^{j_p} (-1)^{j_2}=S_0(x) - S_1(x), \end{align} where $S_0(x):= \sum_{j \in J_2^0} z_j x^{j_p}$ and $S_1(x):= \sum_{j \in J_2^1} z_j x^{j_p}$. In view of equation \eqref{eq:rootsOfg(x)}, $T_0(\zeta_p^{2i_p}) = 0$ for all $i \in I_2^0$, and similarly $T_1(\zeta_p^{2i_p})= 0$ for all $i \in I_2^1$. As in Case 1, the elements $\{\zeta_p^{2i_p}\} $ are distinct for all $i \in I_2^0$, and the same holds for $\{ \zeta_p^{2i_p} \mid i \in I_2^1\}$. Hence $\prod_{i \in I_2^0}(x- \zeta_p^{2i_p})$ divides $T_0(x)$ and $\prod_{i \in I_2^1}(x- \zeta_p^{2i_p})$ divides $T_1(x)$.
Passing again to the quotient $\Z[\zeta_p]/\langle 1 - \zeta_p \rangle $ and to the corresponding polynomials there, we conclude that \begin{align} (x-\bar{1})^{|I_2^0|}\, \, & \big| \, \, \bar{T}_0(x) = \bar{S}_0(x) + \bar{S}_1(x), \\ (x-\bar{1})^{|I_2^1|}\, \, & \big| \, \, \bar{T}_1(x) = \bar{S}_0(x) - \bar{S}_1(x). \end{align} Without loss of generality, assume $|I_2^0 | \leq |I_2^1|$. Then $(x-\bar{1})^{|I_2^0|}$ divides $\bar{T}_0(x) +\bar{T}_1(x)= 2 \bar{S}_0(x)$ while $|\supp(2\bar{S}_0(x))| \leq |J_2^0|= |I_2^0|$. Applying \autoref{Lem:Frenkel}, we find that $2 \bar{S}_0(x) = \bar{0}$ in $\Z[\zeta_p]/\langle 1 - \zeta_p \rangle$. Since $2$ is not divisible by $1 - \zeta_p$ (as noted in \autoref{remark1}), it follows that $1 - \zeta_p$ divides $z_j$ for all $j \in J_2^0$. This implies $\bar{S}_0(x) = \bar{0}$, so that $\bar{T}_1(x) = -\bar{S}_1(x)$ satisfies $|\supp(\bar{T}_1(x))| \leq |J_2^1| = |I_2^1|$. However, $(x - \bar{1})^{|I_2^1|}$ divides $\bar{T}_1(x)$, which forces $\bar{T}_1(x) = \bar{0}$ in $\Z[\zeta_p]/\langle 1 - \zeta_p \rangle$ by \autoref{Lem:Frenkel}. Consequently, $1 - \zeta_p$ divides $z_j$ for all $j \in J_2^1$. Since $J = J_2^0 \cup J_2^1$, we conclude that $1 - \zeta_p$ divides $z_j$ for all $j \in J$, contradicting the choice of $r(x)$. The proof of the theorem is now complete. \end{proofofA} \section{The case $n=pr$}\label{Sec:n=pr} We begin this section with the definition of $\Gamma_r$ as provided in Section 3 of \cite{Zhang}; interested readers can refer to \cite{Zhang} for further details and motivation. Denote by $V_n(x_1, x_2, \cdots, x_n)$ the determinant of the $n\times n$ Vandermonde matrix whose $i, j$-entry is given as $x_i^{j-1}$, for $1 \leq i, j \leq n$. So, \[ V_n(x_1, \cdots, x_n)= \prod_{1 \leq i < j \leq n} (x_j - x_i). \] For any odd prime $r$ and any fixed $2 \leq n \leq r-1$, let \[ \gamma_n:= \max \Bigl\{ \frac{V_n(a_1, a_2, \cdots, a_n)}{V_n(0,1,\cdots,n-1) } \Big| \, 0 \leq a_1 <a_2 < \cdots <a_n \leq r-1 \Bigr\}. \] Then \[ \Gamma_r := \max \big\{ \gamma_n \, \big| \, \, 2 \leq n \leq r-1 \big\}. \] Note that $\Gamma_r$ grows at least exponentially in $r$. If we choose $n=\frac{r-1}{2}$ and take $a_i = 2(i-1)$, for $i=1, \cdots , n$, it is easy to see that $\Gamma_r$ is at least $2^{n \choose 2}$. Nevertheless, as was already noted in \cite{Zhang} (Remark 3.3), for every prime $r$ there are infinitely many primes $p$ that are primitive in $\Z_r$. \begin{proofofB} As in the case of $n=2p$, we need to show that if the numbers $z_j \in \Q(\zeta_n)$ for $j \in J$ satisfy $\sum_{j \in J} z_j \zeta_n^{ij} = 0$ for every $i \in I$, then all $z_j$ must be zero. We can assume without loss of generality that $z_j \in \Z[\zeta_n]$. Note that $\langle 1 - \zeta_p \rangle$ is a prime ideal in $\Z[\zeta_n]$ (by \autoref{Lem:MaximalIdeal}), allowing us to apply \autoref{remark2} in $\Z[\zeta_n]$. Consequently, we obtain the polynomial \begin{equation}\label{eq:r(x)-n} r(x) = \sum_{j \in J} z_j x^{j} \in \Z[\zeta_n][x] \end{equation} which vanishes at $\zeta_n^i$ for all $i \in I$, and is such that not all coefficients $z_j$ are divisible by $1 - \zeta_p$. Using the notation from \eqref{eq:isomo} and \eqref{eq:zn^i}, we define a polynomial in two variables \begin{equation}\label{eq:g(x)-n} g(x,y) = \sum_{j \in J }z_jx^{j_p}y^{j_r} \in \Z[\zeta_n][x,y].
\end{equation} We clearly have \begin{equation}\label{eq:rootsOfg(x)-n} g(\zeta_p^{ri_p},\zeta_r^{pi_r}) = \sum_{j \in J }z_j \zeta_p^{ri_pj_p}\zeta_r^{pi_rj_r} = \sum_{j \in J }z_j \zeta_n^{ij}= 0 \end{equation} for all $i \in I$. Consider the sets \begin{equation} J_r^k = \{ j \in J \mid j_r = k \}, \end{equation} and let $L \subseteq \{ 0, 1, \cdots, r-1\}$ consisting of those integers $k$ with $J_r^k \neq \emptyset$. By assumption, $|J_r^k|= |I_r^k|$ for all $k \in \{0, 1, \cdots, r-1\}$, so $L$ also identifies the integers $k$ with $I_r^k \neq \emptyset$. Write $L = \{ k_1, k_2, \cdots, k_{|L|} \}$ so that $|I_r^{k_1}| \leq |I_r^{k_2}| \leq \cdots \leq |I_r^{k_{|L|}}|$. We define the polynomials \[ T_t(x) := g(x, \zeta_r^{pt}) \in \Z[\zeta_n][x], \] for each $t \in L$. Then \[ T_t(x) = \sum_{j \in J} z_j \zeta_r^{ptj_r} x^{j_p} = \sum_{k \in L} \zeta_r^{ptk} \sum_{j \in J_r^k} z_j x^{j_p}. \] In view of \eqref{eq:rootsOfg(x)-n} we get $ T_t(\zeta_p^{ri_p})= 0$, for all $i \in I_r^t$ and thus \begin{equation}\label{eq:dividesT} \prod_{i \in I_r^t} (x- \zeta_p^{ri_p}) \, \big| \, T_t(x), \end{equation} for every $t \in L$. Observe that all elements $\{ i_p | i \in I_r^t \}$ are distinct in $\Z_p$, which means $\zeta_p^{ri_p}$ are also distinct $p$-th roots of unity, as $\zeta_p^r$ is a primitive $p$-th root of unity. For every $k \in L$, we consider the polynomials $S_k(x) := \sum_{j \in J_r^k} z_j x^{j_p}$ of $ \Z[\zeta_n][x]$ and express $T_t(x)$, as \begin{equation}\label{eq:T-S} T_t(x) = \sum_{k\in L} \zeta_r^{ptk} S_k(x). \end{equation} This way we have produced the $|L| \times |L|$ system \begin{equation}\label{eq:system} \left( \begin{matrix} T_1(x)\\ \vdots\\ T_{|L|}(x)\\ \end{matrix} \right) = R \cdot \left( \begin{matrix} S_1(x)\\ \vdots\\ S_{|L|}(x)\\ \end{matrix} \right) \end{equation} with matrix $R= (R_{t,k})=(\zeta_r^{ptk})_{t, k \in L}= (\omega_r^{tk})_{t, k \in L}$, where $\omega_r= \zeta_r^p$ is also a primitive $r$-th root of unity. Now we pass to the quotient $\Z[\zeta_n]/\langle 1- \zeta_p \rangle \cong \F_{p^{r-1}}$. Applying the homomorphism $\Z[\zeta_n] \to \Z[\zeta_n]/\langle 1- \zeta_p \rangle \cong \F_{p^{r-1}}$ to the coefficients of all the polynomials involved we get $\bar{T}_t(x)$ and $\bar{S}_k(x) \in \F_{p^{r-1}}[x]$ satisfying the system \begin{equation}\label{eq:systemInF_p} \left( \begin{matrix} \bar{T}_1(x)\\ \vdots\\ \bar{T}_{|L|}(x)\\ \end{matrix} \right) = \bar{R} \cdot \left( \begin{matrix} \bar{S}_1(x)\\ \vdots\\ \bar{S}_{|L|}(x)\\ \end{matrix} \right) \end{equation} with matrix $\bar{R}= (\bar{R}_{t,k})=(\bar{\omega}_r^{tk})_{t, k \in L}$. Observe also that \begin{equation}\label{eq:bar{s}} \bar{S}_k(x)=\sum_{j \in J_r^k} \bar{z}_j x^{j_p}. \end{equation} Based on \autoref{Lem:z_p-primitive}, the element $\bar{\omega}_r$ in $\Z[\zeta_n]/\langle 1- \zeta_p \rangle $ is a primitive $r$-th root of unity. Therefore, $\bar{R}$ is a square submatrix of $(\bar{\omega}_r^{ij})_{i,j = 0}^{r-1}$ and all the hypothesis of \autoref{Zhang} are satisfied. Consequently, $\bar{R}$ is invertible and so \begin{equation*} \left( \begin{matrix} \bar{S}_1(x)\\ \vdots\\ \bar{S}_{|L|}(x)\\ \end{matrix} \right) = \bar{R}^{-1} \cdot \left( \begin{matrix} \bar{T}_1(x)\\ \vdots\\ \bar{T}_{|L|}(x)\\ \end{matrix} \right). \end{equation*} In particular, \begin{equation}\label{eq:S_k1} \bar{S}_{k_1}(x) = \sum_{t \in L} \bar{a}_t \bar{T}_t(x), \end{equation} where $\bar{a}_t \in \Z[\zeta_n]/ \langle 1- \zeta_p \rangle$. 
According to \eqref{eq:dividesT}, $\bar{T}_t(x)$ is divisible by $(x-\bar{1})^{|I_r^t|}$, which means that $(x-\bar{1})^{|I_r^{k_1}|}$ divides $\bar{T}_t(x)$ for every $t \in L$, given that $|I_r^{k_1}|$ is the minimum in the set $\{ |I_r^k| \}_{k \in L}$. Consequently, $(x-\bar{1})^{|I_r^{k_1}|}$ also divides $\bar{S}_{k_1}(x)$. Furthermore, equation \eqref{eq:bar{s}} implies that $|\supp(\bar{S}_{k_1}(x))| \leq |J_r^{k_1}| = |I_r^{k_1}|$. Using Frenkel's Lemma, we conclude that $\bar{S}_{k_1}(x) = \bar{0}$. Thus, $z_j$ is divisible by $1-\zeta_p$ for all $j \in J_r^{k_1}$. Since we have established that $\bar{S}_{k_1}(x)=\bar{0}$, we can deduce from the system \eqref{eq:systemInF_p} that \begin{equation} \left( \begin{matrix} \bar{T}_{k_2}(x)\\ \vdots\\ \bar{T}_{k_{|L|}}(x)\\ \end{matrix} \right) = \bar{D} \cdot \left( \begin{matrix} \bar{S}_{k_2}(x)\\ \vdots\\ \bar{S}_{k_{|L|}}(x)\\ \end{matrix} \right) \end{equation} where $\bar{D}= (\bar{D}_{k_t,k_l})=(\bar{\omega}_r^{k_tk_l})_{k_t, k_l \in L'}$ for $L'= L \setminus \{ k_1 \}$. All hypotheses of \autoref{Zhang} still hold, ensuring that the matrix $\bar{D}$ is invertible. Repeating the previous argument yields $\bar{S}_{k_2}(x) = \bar{0}$, and thus $z_j$ is divisible by $1 - \zeta_p$ for all $j \in J_r^{k_2}$. By continuing this process and reducing the matrix dimensions by one each time, we conclude that $z_j$ is divisible by $1 - \zeta_p$ for all $j \in \bigcup_{i=1}^{|L|} J_r^{k_i} = J$. This clearly contradicts the choice of $r(x)$, and the proof of the theorem is complete. \end{proofofB} \begin{remark} In \cite{Zhang}, Examples 4.1 and 4.3 demonstrate the necessity of the second condition in \autoref{Zhang}; however, the resulting singular submatrices are not principal. \end{remark} \begin{remark} Explicit examples of primes that meet the criteria of \autoref{Zhang} and \autoref{Thm:B} were also presented in \cite{Zhang}. For instance, $\Gamma_3= 2$, $\Gamma_5=8$ and $\Gamma_7=75$. This means that, for example, \autoref{Thm:B} is applicable for $n=3\cdot 5$ (as $5$ is primitive in $\Z_3$ and greater than $\Gamma_3$) but not for $n= 3 \cdot 7$: on the one hand, the order of $7$ in $\Z_3^*$ is $1$, and on the other hand, $3$ is not greater than $\Gamma_7$, although $3$ is primitive in $\Z_7$. \end{remark} \section{An uncertainty principle} The proof of the uncertainty principle provided by Tao in \cite{Tao} was based on the following observation: Assume that $G =\Z_n$ is a cyclic group of order $n$ and let \( f: G \to \C \) be a non-zero function. If \( |\supp(f)| + |\supp(\widehat{f} \, )| \leq n \), then \[ |\supp(f)| \leq n - |\supp(\widehat{f} \, )| = |\{ \lambda \in \widehat{G} \mid \widehat{f} (\lambda) = 0 \}|. \] Thus, if $ A = \supp(f)$, there exists a subset $ B \subseteq G \cong \widehat G $ such that $|A| = |B| $ and $ \widehat{f}(b) = 0 $ for all $ b \in B $. In particular, the linear map $T: l^2(A) \to l^2(B)$ defined by $g|_A \mapsto \widehat{g} |_B $ is singular, as $T(f|_A)=0$ but $f|_A \neq 0$ (here $l^2(A)$ denotes the set of functions $g: G \to \C$ which vanish outside of $A$). Observe now that the matrix of $T$ is precisely the submatrix $(\zeta_n^{ij})_{i\in A, j \in B}$ of $(\zeta_n^{ij})_{0 \leq i , j \leq n-1}$. This observation, together with \autoref{Thm:A}, will prove \autoref{Prop:Unc-A} as demonstrated below. \begin{proofofPropA} Assume for contradiction that the proposition fails for some odd prime $p$ and some non-zero function $ f: \Z_2 \times \Z_p \to \C$.
Then for both $i=0,1$ we have $|\supp(f) \cap S_i| + |\supp(\widehat{f} \, ) \cap S_i| \leq p $. Let $A_i = \supp(f) \cap S_i$ for $i=0,1$. Then there exist sets $B_i \subseteq S_i$ such that $|A_i| = |B_i|$ and $\widehat f (b) = 0$ for all $b \in B_i$, for $i=0,1$. Define $A = A_0 \cup A_1$ and $B = B_0 \cup B_1$ and so $A, B \subseteq \Z_2 \times \Z_p$. According to the notation in \autoref{Thm:A}, we have $A^i_2 = A_i$ and $B^i_2 = B_i$ for $i=0,1$. Hence $A, B$ satisfy \autoref{Thm:A}, and so the matrix $(\zeta_{2p}^{ij})_{i \in A, j \in B}$ is non-singular. However, the linear map $T: l^2(A) \to l^2(B)$ defined by this matrix sends any $g \in l^2(A)$ to $\widehat{g}|_B$, and thus maps the non-zero $f$ to $\widehat{f} |_B = 0$. This contradiction completes the proof of the proposition. \end{proofofPropA} The proof of \autoref{Prop:Unc-B} follows the same argument as above, using the sets $A_i = \supp(f) \cap S_i$ for $i=0,1, \cdots, r-1$, so we omit it. \section{Complementary matrices} \label{Complementary} Let $A$ be an $n \times n $ matrix and write $[n]$ for the set $\{ 1, 2, \cdots n \}$. For every $I \subseteq [n]$ we denote by $I^c$ the complementary subset $[n] \setminus I $. If $I, J \subseteq [n]$ with $|I|=|J|$ we write $A_{I, J}$ for the $|I| \times |I|$ submatrix of $A$ obtained from $A$ by removing all rows whose indices do not belong to $I$ and all columns whose indices do not belong to $J$. Observe that $A_{I,J} = A^t_{J,I}$ for all $I, J$. If $I = J$ we simply write $A_I$ and this is a principal submatrix of $A$. It is known, that a principal submatrix of a matrix $A$ is non-singular if and only if its complementary principal submatrix is also non-singular(see for example Proposition 5.4 in \cite{Brow} for a more general result). We present here a simple proof of this fact based on Jacobi's complementary minor theorem, which states: \begin{theorem}[Jacobi] Assume $A $ is an invertible $ n \times n$ matrix over a field $K$ and let $I, J \subseteq [n]$ with $|I|=|J|$. Then \[ \det (A_{I,J}) = (-1)^{\sum I +\sum J} \cdot \det A \cdot \det( (A^{-1})_{J^c, I^c}). \] \end{theorem} With the use of the adjoint $\adj A$ of A we have $A^{-1} = \frac{1}{\det A} \cdot \adj A$ and so \[ (A^{-1})_{J^c, I^c}= \frac{1}{\det A} \cdot (\adj A)_{J^c, I^c}= \frac{1}{\det A} \cdot ((\adj A)^t)_{I^c, J^c} \] Hence Jacobi's formula is translated to \begin{equation}\label{eq:jacobi-adj} \det (A_{I,J})= (-1)^{\sum I +\sum J} \cdot \frac{\det(A)}{\det(A)^{|I^c|}} \cdot \det(((\adj A)^t)_{I^c, J^c}). \end{equation} We can now prove \begin{proposition} Let $A =(\omega^{k,l})_{0 \leq k, l \leq n-1} $ where $\omega$ a primitive $n$-th root of unity. For any $I, J \subseteq [n]$ with $|I|=|J|$ the submatrix $A_{I, J}$ is invertible if and only if $A_{I^c,J^c} $ is invertible. In particular, for $I=J$ we have that the principal minor $\det{A_I}$ is non zero if and only if the principal minor $\det{A_{I^c}}$ is non zero. \end{proposition} \begin{proof} The matrix $A$ is the character table of the cyclic group $C_n$ of order $n$ and as such it is invertible and satisfies \[ A \cdot \bar{A}^t= n \cdot I_n. \] This along with the formula for the adjoint $\adj A$ of $A$ implies that $\det(A) \cdot \bar{A} = n \cdot (\adj A)^t$ and thus \[ det(A) \cdot (\bar{A})_{I^c,J^c} = n \cdot ((\adj A)^t)_{I^c,J^c} \] for any $I, J \subseteq [n]$ with $|I|=|J|$. 
Setting $k = n-|I|$ and taking determinants in the above equation, we get \begin{equation}\label{eq:Det-adj} (\det(A))^k \cdot \det((\bar{A})_{I^c,J^c}) = n^k \cdot \det(((\adj A)^t)_{I^c,J^c}). \end{equation} Clearly $\det((\bar{A})_{I^c,J^c})=\overline{ \det(A_{I^c,J^c})} $, and substituting into Jacobi's formula \eqref{eq:jacobi-adj} we get \[ \det (A_{I,J})=(-1)^{\sum I +\sum J}\cdot \frac{\det(A)}{(\det(A))^{k}} \cdot \frac{(\det(A))^k}{n^k} \cdot \overline{ \det(A_{I^c,J^c})} = (-1)^{\sum I +\sum J} \cdot \frac{\det(A)}{n^k} \cdot \overline{ \det(A_{I^c,J^c})}. \] Hence $\det (A_{I,J}) \neq 0 $ if and only if $\det(A_{I^c,J^c}) \neq 0 $, and the proposition follows. \end{proof} \begin{thebibliography}{{R.M}96}
\bibitem{Biro} A. Bir\'o, Schweitzer Competition, Problem 3, 1998.
\bibitem{Brow} M. Bownik, P. Casazza, A. W. Marcus, D. Speegle, Improved bounds in Weaver and Feichtinger conjectures, J. Reine Angew. Math. {\bf 749}, 267-293, (2019), https://doi.org/10.1515/crelle-2016-0032
\bibitem{Cabreli} C. Cabrelli, U. Molter, F. Negreira, Weaving Riesz bases, J. Fourier Anal. Appl. {\bf 31}, 4 (2025).
\bibitem{CL.24} A. Caragea and D. G. Lee, On the principal minors of Fourier matrices, https://arxiv.org/abs/2409.09793v1
\bibitem{Dieudonne} J. Dieudonn\'e, Une propri\'et\'e des racines de l'unit\'e, Collection of articles dedicated to Alberto Gonz\'alez Dom\'inguez on his sixty-fifth birthday, Rev. Un. Mat. Argentina {\bf 25}, 1-3 (1970/71)
\bibitem{EvIs} R. J. Evans and I. M. Isaacs, Generalized Vandermonde determinants and roots of unity of prime order, Proc. Amer. Math. Soc. {\bf 58}, 51-54, (1976)
\bibitem{Frenkel} P. E. Frenkel, Simple proof of Chebotarev's theorem on roots of unity, https://arxiv.org/abs/math/0312398
\bibitem{Gold} D. Goldstein, R. M. Guralnick, I. M. Isaacs, Inequalities for finite group permutation modules, Trans. Amer. Math. Soc. {\bf 357}, 4017-4042, (2005)
\bibitem{Mann} H. B. Mann, Introduction to Algebraic Number Theory, Ohio State University Press, Columbus, Ohio, 1955
\bibitem{Meshulam} R. Meshulam, An uncertainty inequality for finite abelian groups, Eur. J. Comb. {\bf 27} (1), 63-67, (2006)
\bibitem{Ribe} P. Ribenboim, Classical Theory of Algebraic Numbers, 2001, https://api.semanticscholar.org/CorpusID:117566621
\bibitem{Lenstra} P. Stevenhagen, H. W. Lenstra Jr., Chebotar\"ev and his density theorem, Math. Intelligencer {\bf 18}, no. 2, 26-37, (1996)
\bibitem{Tao} T. Tao, An uncertainty principle for cyclic groups of prime order, Math. Research Letters {\bf 12}, 121-127, (2003)
\bibitem{Zhang} G. Zhang, On the Chebotarev theorem over finite fields, Finite Fields and Their Applications {\bf 56}, 97-108 (2019).
\end{thebibliography} \end{document}
2412.08727v1
http://arxiv.org/abs/2412.08727v1
Lengths of saddle connections on random translation surfaces of large genus
\documentclass{amsart} \usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathtools} \usepackage{graphicx} \usepackage{xspace} \usepackage{pinlabel} \usepackage{enumerate} \usepackage{bbm} \usepackage{xfrac} \usepackage[margin=1.3in]{geometry} \setlength{\textheight}{21cm} \setlength{\textwidth}{14cm} \usepackage{tikz} \usetikzlibrary{calc,through,backgrounds, arrows} \usetikzlibrary{decorations.pathmorphing, decorations.pathreplacing,positioning,fit} \newcommand{\cherry}{{\begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .2 mm] \node [vertex] (x) at (0,0) {}; \node [vertex] (y) at (-.06,-.1) {}; \node [vertex] (z) at (0.06,-0.1) {}; \draw (y) -- (x) -- (z);\end{tikzpicture}}} \newcommand{\homsc}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .2 mm] \node [vertex] (x) at (0,0) {}; \node [vertex] (y) at (0.2,0) {}; \draw (x) to [bend right] (y); \draw (y) to [bend right] (x); \end{tikzpicture}}} \newcommand{\homsconeone}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .2 mm] \clip (-0.25,-0.1) rectangle (0.4, 0.1); \node [vertex] (x) at (0,0) {}; \draw (x) node [left=-0.8mm, font=\scriptsize] {$1$}; \node [vertex] (y) at (0.2,0) {}; \draw (y) node [right=-0.8mm, font=\scriptsize] {$1$}; \draw (x) to [bend right] (y); \draw (y) to [bend right] (x); \end{tikzpicture}}} \newcommand{\anyloop}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.22,-0.11) rectangle (0.03, 0.11); \node [vertex] (x) at (0,0) {}; \draw (x) arc (0:360:0.1); \end{tikzpicture}}} \newcommand{\mloop}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.22,-0.11) rectangle (0.3, 0.11); \node [vertex] (x) at (0,0) {}; \draw (x) node [right=-0.5mm, font=\scriptsize] {$m$}; \draw (x) arc (0:360:0.1); \end{tikzpicture}}} \newcommand{\oneloop}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.22,-0.11) rectangle (0.19, 0.11); \node [vertex] (x) at (0,0) {}; \draw (x) node [right=-0.5mm, font=\scriptsize] {$1$}; \draw (x) arc (0:360:0.1); \end{tikzpicture}}} \newcommand{\twoloop}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.22,-0.11) rectangle (0.2, 0.11); \node [vertex] (x) at (0,0) {}; \draw (x) node [right=-0.5mm, font=\scriptsize] {$2$}; \draw (x) arc (0:360:0.1); \end{tikzpicture}}} \newcommand{\twotoone}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.22,-0.1) rectangle (0.35, 0.1); \node [vertex] (x) at (0,0) {}; \draw (x) node [left=-0.6mm, font=\scriptsize] {$2$}; \node [vertex] (y) at (0.2,0) {}; \draw (y) node [right=-0.8mm, font=\scriptsize] {$1$}; \draw (x) to (y); \end{tikzpicture}}} \newcommand{\twototwo}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.21,-0.1) rectangle (0.39, 0.1); \node [vertex] (x) at (0,0) {}; \draw (x) node [left=-0.6mm, font=\scriptsize] {$2$}; \node [vertex] (y) at (0.2,0) {}; \draw (y) node [right=-0.6mm, font=\scriptsize] {$2$}; \draw (x) to (y); \end{tikzpicture}}} \newcommand{\twotostar}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip 
(-0.21,-0.1) rectangle (0.38, 0.1); \node [vertex] (x) at (0,0) {}; \draw (x) node [left=-0.6mm, font=\scriptsize] {$2$}; \node [vertex] (y) at (0.2,0) {}; \draw (y) node [right=-0.6mm, font=\scriptsize] {$\ast$}; \draw (x) to (y); \end{tikzpicture}}} \newcommand{\monetomtwo}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.5,-0.1) rectangle (0.6, 0.1); \node [vertex] (x) at (0,0) {}; \draw (x) node [left=-0.8mm, font=\scriptsize] {$m_1$}; \node [vertex] (y) at (0.2,0) {}; \draw (y) node [right=-0.8mm, font=\scriptsize] {$m_2$}; \draw (x) to (y); \end{tikzpicture}}} \newcommand{\closedgeod}{{ \begin{tikzpicture} \tikzstyle{vertex} =[circle,draw,fill=black,thick, inner sep=0pt,minimum size= .3 mm] \clip (-0.12,-0.11) rectangle (0.13, 0.11); \node [vertex] (x) at (0.1,0) {}; \draw (-115:0.1) arc (-115:115:0.1); \draw[densely dotted] (115:0.1) arc (115:245:0.1); \end{tikzpicture}}} \usepackage{hyperref} \usepackage[noabbrev, capitalize]{cleveref} \newcommand{\calA}{\mathcal{A}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calC}{\mathcal{C}} \newcommand{\calD}{\mathcal{D}} \newcommand{\calE}{\mathcal{E}} \newcommand{\calF}{\mathcal{F}} \newcommand{\calG}{\mathcal{G}} \newcommand{\calH}{\mathcal{H}} \newcommand{\calI}{\mathcal{I}} \newcommand{\calJ}{\mathcal{J}} \newcommand{\calK}{\mathcal{K}} \newcommand{\calL}{\mathcal{L}} \newcommand{\calM}{\mathcal{M}} \newcommand{\calN}{\mathcal{N}} \newcommand{\calO}{\mathcal{O}} \newcommand{\calP}{\mathcal{P}} \newcommand{\calQ}{\mathcal{Q}} \newcommand{\calR}{\mathcal{R}} \newcommand{\calS}{\mathcal{S}} \newcommand{\calT}{\mathcal{T}} \newcommand{\calU}{\mathcal{U}} \newcommand{\calW}{\mathcal{W}} \newcommand{\calX}{\mathcal{X}} \newcommand{\calY}{\mathcal{Y}} \newcommand{\calZ}{\mathcal{Z}} \renewcommand{\AA}{\mathbb{A}} \newcommand{\BB}{\mathbb{B}} \newcommand{\CC}{\mathbb{C}} \newcommand{\DD}{\mathbb{D}} \newcommand{\EE}{\mathbb{E}} \newcommand{\FF}{\mathbb{F}} \newcommand{\HH}{\mathbb{H}} \newcommand{\KK}{\mathbb{K}} \newcommand{\MM}{\mathbb{M}} \newcommand{\NN}{\mathbb{N}} \newcommand{\OO}{\mathbb{O}} \newcommand{\PP}{\mathbb{P}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\RR}{\mathbb{R}} \renewcommand{\SS}{\mathbb{S}} \newcommand{\TT}{\mathbb{T}} \newcommand{\VV}{\mathbb{V}} \newcommand{\WW}{\mathbb{W}} \newcommand{\XX}{\mathbb{X}} \newcommand{\YY}{\mathbb{Y}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\gothic}{\mathfrak} \newcommand{\Ga}{{\gothic a}} \newcommand{\Gb}{{\gothic b}} \newcommand{\Gc}{{\gothic c}} \newcommand{\Gd}{{\gothic d}} \newcommand{\Ge}{{\gothic e}} \newcommand{\Gf}{{\gothic f}} \newcommand{\Gg}{{\gothic g}} \newcommand{\Gh}{{\gothic h}} \newcommand{\Gi}{{\gothic i}} \newcommand{\Gj}{{\gothic j}} \newcommand{\Gk}{{\gothic k}} \newcommand{\Gl}{{\gothic l}} \newcommand{\Gm}{{\gothic m}} \newcommand{\gn}{{\gothic n}} \newcommand{\go}{{\gothic o}} \newcommand{\Gp}{{\gothic p}} \newcommand{\Gq}{{\gothic q}} \newcommand{\Gr}{{\gothic r}} \newcommand{\Gs}{{\gothic s}} \newcommand{\Gt}{{\gothic t}} \newcommand{\Gu}{{\gothic u}} \newcommand{\gv}{{\gothic v}} \newcommand{\Gw}{{\gothic w}} \newcommand{\gx}{{\gothic x}} \newcommand{\gy}{{\gothic y}} \newcommand{\Gz}{{\gothic z}} \newcommand{\GS}{{\gothic S}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} 
\newtheorem{question}[theorem]{Question} \newtheorem{introthm}{Theorem} \newtheorem{introcor}[introthm]{Corollary} \renewcommand{\theintrothm}{\Alph{introthm}} \renewcommand{\theintrocor}{\Alph{introcor}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{claim}[theorem]{Claim} \newtheorem{subclaim}[theorem]{Subclaim} \newtheorem*{claim*}{Claim} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{example}[theorem]{Example} \newtheorem{fact}[theorem]{Fact} \newtheorem*{question*}{Question} \newtheorem*{answer*}{Answer} \newtheorem*{application*}{Application} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem*{remark*}{Remark} \newcommand{\secref}[1]{Section~\ref{#1}} \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\corref}[1]{Corollary~\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\propref}[1]{Proposition~\ref{#1}} \newcommand{\clmref}[1]{Claim~\ref{#1}} \newcommand{\remref}[1]{Remark~\ref{#1}} \newcommand{\exaref}[1]{Example~\ref{#1}} \newcommand{\facref}[1]{Fact~\ref{#1}} \newcommand{\exref}[1]{Exercise~\ref{#1}} gref}[1]{Figure~\ref{#1}} \newcommand{\defref}[1]{Definition~\ref{#1}} \newcommand{\probref}[1]{Problem~\ref{#1}} \newcommand{\eqnref}[1]{Equation~\eqref{#1}} \DeclareMathOperator{\Mod}{Mod} \DeclareMathOperator{\I}{i} \DeclareMathOperator{\Vol}{Vol} \DeclareMathOperator{\Area}{Area} \DeclareMathOperator{\Cone}{Cone} \newcommand{\Teich}{Teich\-m\"u\-ller\ } \newcommand{\Ham}{Ham\-en\-st\"adt\ } \hyphenation{geo-desics} \newcommand{\radius}{{\sf r}} \newcommand{\dL}{{d_L}} \newcommand{\dT}{{d_T}} \newcommand{\dS}{{d_S}} \newcommand{\balpha}{{\overline \alpha}} \newcommand{\bbeta}{{\overline \beta}} \newcommand{\bgamma}{{\overline \gamma}} \newcommand{\bphi}{{\overline \phi}} \newcommand{\bnu}{{\overline \nu}} \newcommand{\ba}{{\overline a}} \newcommand{\bb}{{\overline b}} \newcommand{\bc}{{\overline c}} \newcommand{\bd}{{\overline d}} \newcommand{\be}{{\overline e}} \newcommand{\bI}{{\overline I}} \newcommand{\bk}{{\overline k}} \newcommand{\bp}{{\overline p}} \newcommand{\bq}{{\overline q}} \newcommand{\bw}{{\overline w}} \newcommand{\bx}{{\overline x}} \newcommand{\by}{{\overline y}} \newcommand{\bz}{{\overline z}} \newcommand{\ua}{{\underline a}} \newcommand{\ub}{{\underline b}} \newcommand{\param}{{\mathchoice{\mkern1mu\mbox{\raise2.2pt\hbox{$ \centerdot$}} \mkern1mu}{\mkern1mu\mbox{\raise2.2pt\hbox{$\centerdot$}}\mkern1mu}{ \mkern1.5mu\centerdot\mkern1.5mu}{\mkern1.5mu\centerdot\mkern1.5mu}}} \DeclarePairedDelimiter\abs{\lvert}{\rvert} \DeclarePairedDelimiter\norm{\lVert}{\rVert} \newcommand{\ang}[1]{\langle #1 \rangle} \renewcommand{\setminus}{{\smallsetminus}} \renewcommand{\th}{{\rm th}} \newcommand{\st}{\mathbin{\mid}} \newcommand{\ST}{\mathbin{\Big|}} \newcommand{\from}{\colon\thinspace} \newcommand{\ep}{\epsilon} \newcommand{\bdy}{\partial} \newcommand{\One}{{\mathbbm{1}}} \newcommand{\mz}{{m\mbox{-\tiny zeros}}} \newcommand{\Hgen}{{\calH_{\rm Gen}}} \newcommand{\Hng}{{\calH_{\rm NG}}} \newcommand{\sys}{{\rm sys}} \begin{document} \title [Lengths of saddle connections] {Lengths of saddle connections \\ on random translation surfaces of large genus} \author {Howard Masur} \address{Department of Mathematics, University of Chicago, Chicago, Il.} \email{[email protected]} \author {Kasra Rafi} \address{Department of Mathematics, University of Toronto, Toronto, ON } \email{[email protected]} \author {Anja Randecker} \address{Institute for Mathematics, Heidelberg University, Heidelberg \newline \indent 
Department of Mathematics, Saarland University, Saarbrücken} \email{[email protected]} \date{December 10, 2024} \subjclass[2020]{32G15 (30F60, 57M50)} \begin{abstract} We determine the distribution of the number of saddle connections on a random translation surface of large genus. More specifically, for genus $g$ going to infinity, the number of saddle connections with lengths in a given interval $[\sfrac{a}{g}, \sfrac{b}{g}]$ converges in distribution to a Poisson distributed random variable. Furthermore, the numbers of saddle connections associated to disjoint intervals are independent. \end{abstract} \maketitle \section{Introduction} We study the distribution of short saddle connections on a random large-genus translation surface. This is part of a continuing effort to understand the geometry of a random surface as the genus of the surface goes to infinity. Much is known about the shape of random hyperbolic surfaces but most questions about random translation surfaces remain open. Here, we aim to emulate the celebrated results of Mirzakhani--Petri \cite{mirzakhani_petri_19} regarding the number of short curves in a random large-genus hyperbolic surface. The space of translation surfaces of genus $g$ can be identified with the space of abelian differentials $(X, \omega)$ where $X$ is a Riemann surface of genus $g$ and $\omega$ is a holomorphic $1$--form on $X$. The space of abelian differentials can be decomposed into strata depending on the number and the order of zeros of $\omega$. Let $\calH_g=\calH_g(1,1, \dots, 1)$ be the principle stratum of unit-area abelian differentials on a closed surface of genus $g$ where there are~$2g-2$ zeros of order~$1$. We equip the stratum with the normalized Lebesgue measure $\Vol(\param)$ as in~\cite{Masur-82, Veech-82}. Then $\calH_g$ has a finite total volume \cite{Masur-82, Veech-82} and we can define the probability of a measurable subset $E \subseteq \calH_g$ by \[ \PP_g(E) = \frac{\Vol(E)}{\Vol(\calH_g)}. \] Since the total area of a surface in $\calH_g$ is $1$ and the number of zeros of an abelian differential in $\calH_g$ is $2g-2$, the expected number of saddle connections of length at most $\ep$ on a random translation surface in $\calH_g$ goes to infinity as $g \to \infty$. Therefore, we need to choose the right scale at which the expected number of saddle connections is a finite, positive real number. We show that this correct scale is $\sfrac 1g$ in the following sense. Given $(X, \omega) \in \calH_g$ and an interval $[a, b] \subseteq \RR_+$, let $N_{g,[a,b]}(X,\omega)$ denote the number of saddle connections on $(X, \omega)$ with lengths in the interval~$\left[ \sfrac ag, \sfrac bg \right]$. \begin{theorem} \label{Thm:Main} Let $[a_1, b_1], [a_2, b_2], \dots, [a_k, b_k] \subseteq \RR_+$ be disjoint intervals. Then, as $g \to \infty$, the vector of random variables \[ \left( N_{g,[a_1,b_1]},\dots ,N_{g,[a_k,b_k]} \right) \from \calH_g \to \NN_0^k \] converges jointly in distribution to a vector of random variables with Poisson distributions of means $\lambda_{[a_i,b_i]}$, where \[ \lambda_{[a_i, b_i]}= 8\pi(b_i^2-a_i^2) \] for $i = 1,\dots,k$. That is, \[ \lim_{g \to \infty} \PP_g \left( N_{g,[a_1,b_1]} = n_1,..., N_{g,[a_k,b_k]} = n_k \right) = \prod_{i=1}^k \frac{\lambda_{[a_i, b_i]}^{n_i} e^{-\lambda_{[a_i,b_i]}}}{n_i!}. \] \end{theorem} \cref{Thm:Main} has the following consequence: Let $l_{\rm min}(X, \omega)$ be the length of the shortest saddle connection on $(X, \omega)$. 
Then \begin{equation*} \lim_{g \to \infty} \PP_g \left( l_{\rm min} < \sfrac{\ep}{g} \right) = 1 - \PP \left( N_{g,[0,\ep]} = 0 \right) = 1 - e^{-\lambda_{[0,\ep]}} \approx 8\pi \ep^2 \quad \text{as } \ep \to 0 . \end{equation*} This is in line with the fact that for a fixed stratum $\calH$ and a small $\ep > 0$, the thin part $\calH_{\rm thin}^{\ep} \subseteq \calH$ which contains all translation surfaces with a saddle connection of length at most~$\ep$ satisfies \begin{equation*} \Vol(\calH_{\rm thin}^{\ep} ) = O(\ep^2) \cdot \Vol(\calH) . \end{equation*} For one short saddle connection, this can be proven using Siegel--Veech constants. However, also for the set of translation surfaces with two short saddle connections, there exists a similar result on asymptotics by \cite{masur_smillie_91} which can be made more concrete with \cref{Thm:Main}. \subsection*{Other strata} For a given genus, the principal stratum has the highest dimension and includes all other strata for that genus on its boundary. Thus, although \cref{Thm:Main} is stated for the principal stratum, it also applies to the space of all unit-area translation surfaces of a given genus. For non-principal strata, the result does not hold universally; however, in certain cases, similar conclusions can be drawn, which we now explore. The first variation is for strata where the number of non-simple zeros is growing slow enough. \begin{theorem} \label{thm:slow-growing_exceptions} Let $f, \ell \colon \NN_0 \to \RR$ be functions such that $\frac{f(g)^2 \cdot \ell(g)}{g} \to 0$ for $g\to \infty$. If $\mathcal{H}_g$ is replaced by the stratum $\calH_g(m_1, m_2, \ldots, m_{\ell(g)}, 1, \ldots, 1)$ of unit-area translation surfaces of genus~$g$ with at most $\ell(g)$ zeros which are not simple and $m_1,\ldots, m_{\ell(g)} \leq f(g)$, then the same statement as in \cref{Thm:Main} is true for the same $\lambda_{[a_i, b_i]} = 8\pi (b_i^2 - a_i^2)$. \end{theorem} We can also consider strata where none of the zeros is simple but all zeros have the same, fixed~order. \begin{theorem} \label{thm:order_m} Fix $m\in \NN$ odd and consider only such $g\in \NN$ for which $2g-2$ is divisible by~$m$. If $\mathcal{H}_g$ is replaced by the stratum $\calH_g(m, m, \dots, m)$ of unit-area translation surfaces of genus~$g$ where there are $\frac{2g-2}{m}$ zeros of order $m$, then the same statement as in \cref{Thm:Main} is true with means \[ \lambda_{[a_i, b_i]}= \left( \frac{m+1}{m}\right)^2 \cdot 2\pi(b_i^2-a_i^2) \] for $i = 1,\dots,k$. \end{theorem} Other slight variations of the theorem can also be derived with the same method. However, our method does not work for the minimal stratum $\calH_g(2g-2)$ when there is only one zero. The reason is that for a surface in the minimal stratum, every saddle connection is a closed loop and cannot be collapsed. However, the scale $\sfrac 1g$ still seems to be correct. \begin{question} Is there an analogue of \cref{Thm:Main} for the space $\calH_g(2g-2)$? \end{question} \subsection*{History and tools} Large-genus asymptotics in the context of surfaces have been studied for some years primarily for hyperbolic surfaces. Mirzakhani started a program in \cite{mirzakhani_13} in which she determined the expected systole, Cheeger constant, diameter, and other geometric properties of random surfaces of large genus. Independently, Guth--Parlier--Young \cite{guth_parlier_young_11} considered geometric invariants related to pants decompositions of random hyperbolic surfaces. 
Subsequently, many properties have been studied such as lengths of separating curves \cite{nie_wu_xue_23} or the first non-zero Laplacian eigenvalue \cite{wu_xue_22, lipnowski_wright_24, anantharaman_monk_24}. The inspiration for the article at hand is the work of Mirzakhani--Petri \cite{mirzakhani_petri_19} in which they determined the distribution of the number of short curves in a random hyperbolic surface of large genus. For translation surfaces, less is still known than for hyperbolic surfaces. However, for the volumes of strata of translation surfaces, there has been a recent increase in results. Based on work of Eskin--Okounkov \cite{eskin_okounkov_01}, Eskin--Zorich predicted in \cite{eskin_zorich_15} the volumes of strata of translation surfaces and Aggarwal confirmed these \cite{aggarwal_20}. A more precise error term for the volume estimates has been determined by Chen--Möller--Sauvaget--Zagier~\cite{chen_moeller_sauvaget_zagier_20}. Also for half-translation surfaces, first results have been obtained by Chen--Möller--Sauvaget~\cite{chen_moeller_sauvaget_23}. By work of Eskin--Masur--Zorich \cite{eskin_masur_zorich_03}, volumes of strata are strongly related to Siegel--Veech constants (see \cref{sec:Siegel-Veech} for definitions and more details). Asymptotics of Siegel--Veech constants for large genus have been calculated by many different authors in recent years, e.g.\ Chen--Möller--Zagier \cite{chen_moeller_zagier_18} for principal strata, Sauvaget \cite{sauvaget_18} for minimal strata, Zorich in an appendix to \cite{aggarwal_20} and Aggarwal \cite{aggarwal_19} for all strata of translation surfaces, and Chen--Möller--Sauvaget--Zagier \cite{chen_moeller_sauvaget_zagier_20} for principal strata of half-translation surfaces. Furthermore, also geometric properties of random translation surfaces of large genus have been studied: An upper bound for the covering radius is given in \cite{masur_rafi_randecker_22} and Delecroix--Goujard--Zograf--Zorich have determined geometric and combinatorial properties for large-genus square-tiled surfaces in a series of articles, see \cite{delecroix_goujard_zograf_zorich_22, delecroix_goujard_zograf_zorich_23} and the references therein. In an upcoming work, Bowen--Rafi--Vallejos \cite{bowen_rafi_vallejos_24} show that a sequence of random translation surfaces with area equal to genus Benjamini--Schramm converges as genus tends to infinity. For the proof of our main theorem, we will strongly use the recent works on large-genus asymptotics for volumes of strata and for Siegel--Veech constants that are outlined above. \subsection*{Further questions and remarks} Let $\sys(X, \omega)$ be the length of the shortest closed geodesic in~$(X, \omega)$. To examine $\sys$, we need to work at a different scale as the expected number of closed geodesics of length $\sfrac 1g$ converges to zero. It turns out that the correct scale for this setup is~$\sfrac 1{\sqrt g}$. Let $L_{g,[a,b]}(X,\omega)$ denote the number of closed geodesics on~$(X, \omega)$ with lengths in the interval $\left[ \sfrac a{\sqrt g}, \sfrac b{\sqrt g} \right]$. \begin{question} \label{question:closed_geodesics} Does $L_{g,[a,b]}(X,\omega)$ converge in distribution to a random variable with a Poisson distribution? \end{question} We can not directly apply the same methods as in this paper, again because our methods involve the collapsing of saddle connections which does not work for closed geodesics. 
Another natural setting is the space of quadratic differentials (or half-translation surfaces). Let $\calQ_g= \calQ_g(1,1, \dots, 1)$ be the principle stratum of unit-area quadratic differentials. Again the correct scale seems to be $\sfrac 1g$. \begin{question} Does an analogue of \cref{Thm:Main} and/or an analogue of \cref{question:closed_geodesics} hold for the space $\calQ_g$? \end{question} Again, our methods fail for the analogue of \cref{Thm:Main} since collapsing a saddle connection in a quadratic differential is not a local construction. Furthermore, much less is known about the volumes of strata of quadratic differentials. \subsection*{Outline of the paper} In the next four sections, we recall the background needed for our arguments: In \cref{sec:moments}, we describe the method of moments which is used to deduce that a variable is Poisson distributed. In \cref{sec:translation_surfaces_strata}, we introduce translation surfaces and their strata. The method of Siegel--Veech constants and some relevant values of them are collected in \cref{sec:Siegel-Veech}. In \cref{sec:collapsing}, we recall the construction for collapsing a saddle connection and describe a variation of it to collapse two saddle connections sharing a zero. We start the proof of \cref{Thm:Main} in \cref{sec:generic} by defining a generic subset of the stratum, on which we describe a simultaneous collapsing procedure in \cref{sec:collapsing_simultaneously}. This construction is used in \cref{sec:length_spectrum} to determine the factorial moments of $N_{g,[a,b]}$ on the generic subset. In \cref{sec:finish}, we explain how to extend the result from the generic subset to the whole stratum. We finish with arguments to prove the variations of the main theorem in \cref{sec:variations}. \subsection*{Acknowledgements} The authors are very grateful to Anton Zorich for many helpful conversations during the work on this project as well as for factors of $2$ and detailed comments on an earlier draft of this work. K.~R.\ was partially supported by NSERC Discovery Grant RGPIN 05507 and the Simons Fellowship Program. The work of A.~R.\ was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- 441856315. \section{The method of moments} \label{sec:moments} Similar to the proof in \cite{mirzakhani_petri_19}, we use the method of moments for showing that a random variable is Poisson distributed. Recall that a random variable $N \from \Omega \to \NN_0$ on a probability space $(\Omega, \PP)$ is said to be \emph{Poisson distributed with mean~$\lambda \in (0, \infty)$} if \[ \PP ( N=k ) = \frac{\lambda^k e^{-\lambda}}{k!} \qquad \text{for all $k \in \NN_0$}. \] The moments of a Poisson distributed random variable are slightly complicated terms, so we instead consider the factorial moments here. Given a random variable $N \from \Omega \to \NN_0$ and $r \in \NN$, we define the random variable \[ (N)_r =N(N-1)\dots(N-r+1). \] If its expectation $\EE (N)_r$ exists, it is called the \emph{$r$--th factorial moment} of $N$. The $r$--th factorial moment of a Poisson distributed variable with mean $\lambda$ is equal to $\lambda^r$ for all $r \in \NN$. It turns out that this is an if and only if condition. \begin{theorem}[The method of moments {\cite[Theorem 1.23]{bollobas_01}}] \label{Thm:moments} Let $\{(\Omega_i,\PP_i)\}_{i\in\NN}$ be a sequence of probability spaces. 
For $k \in \NN$, let $N_{1,i}, \dots , N_{k,i} \from \Omega_i \to \NN_0$ be random variables for all $i\in \mathbb{N}$ and suppose there exist $\lambda_1, \dots , \lambda_k \in (0, \infty)$ such that \[ \lim_{i\to \infty} \EE ( (N_{1,i})_{r_1} \dots (N_{k,i})_{r_k} ) = \lambda_1^{r_1}\dots \lambda_k^{r_k} \] for all $r_1,\dots, r_k \in \NN$. Then \[ \lim_{i \to \infty} \PP ( N_{1,i} = n_1,...,N_{k,i} = n_k ) = \prod_{j=1}^k \frac{\lambda_j^{n_j} e^{-\lambda_j}}{n_j!} \] for all $n_1,\dots , n_k \in \NN$. In other words, the vector $(N_{1,i},\dots,N_{k,i}) \from \Omega_i \to \NN_0^k$ converges jointly in distribution to a vector which is independently Poisson distributed with means~$\lambda_1,\dots,\lambda_k$. \end{theorem} \section{Translation surfaces and strata} \label{sec:translation_surfaces_strata} A \emph{translation surface} $(X, \omega)$ is a pair consisting of a compact, connected Riemann surface~$X$ and a holomorphic $1$--form $\omega$ on $X$. By integrating the $1$--form, we obtain a metric on $X$ which is locally isometric to the Euclidean plane except in a neighbourhood of the zeros of $\omega$. The neighbourhood of a zero of order $k$ is isometric to a neighbourhood of the branching point of a $(k+1)$--covering of a Euclidean disk. The moduli space of all translation surfaces of a given genus $g$ decomposes into strata. We focus only on those translation surfaces whose area is $1$. Given a partition of $2g-2 = \sum_{i=1}^\ell m_i$, the \emph{stratum} $\calH_g( m_1, \ldots, m_\ell)$ is the space of all unit-area translation surfaces of genus $g$ with $\ell$ zeros of order $m_1, \ldots, m_\ell$. A \emph{saddle connection} in $(X,\omega)$ is a geodesic segment that starts and ends in a zero and does not contain a zero in its interior. To any given (oriented) saddle connection $\gamma$ on~$(X, \omega)$, we associate the vector~$\int_\gamma \omega$ in $\mathbb{C}$, called the \emph{holonomy vector} of the saddle connection. On a given translation surface, the number of saddle connections grows quadratically in their length \cite{masur_88, masur_90}. A saddle connection can also be thought of as an element of the relative homology group of~$(X, \omega)$ relative to the set of zeros. If $\calB$ is a set of saddle connections that form a basis for the relative homology, then the holonomy vectors of the elements of $\calB$ determine $(X, \omega)$. For every $(X,\omega)$ and $\calB$, there is a neighbourhood $U$ of $(X, \omega)$ in the moduli space such that for every $(X', \omega')$ in $U$, all elements of~$\calB$ (thought of as elements in the relative homology group) can still be represented in $(X', \omega')$ as saddle connections. Then the set of holonomy vectors of saddle connections in $\calB$ give coordinates for translation surfaces in $U$ (which in general do not have area $1$). We refer to this set of holonomy vectors as \emph{period coordinates} around $(X,\omega)$. The period coordinates give an embedding from $U$ to $\mathbb{C}^{2g+\ell-1}$. The associated pullback measure, denoted $\mu$, of Lebesgue measure on $\mathbb{C}^{2g+\ell-1}$, on the moduli space was studied by Masur \cite{Masur-82} and Veech \cite{Veech-82}. It also defines a measure on the stratum $\calH_g(m_1,\ldots,m_\ell)$ in the following way. 
For an open set $V \subseteq \calH_g(m_1,\ldots,m_\ell)$, its volume is defined by \begin{equation*} \Vol(V) \coloneqq 2(2g + \ell -1) \cdot \mu(\Cone(V)) \end{equation*} where $\Cone(V)$ is the cone of $V$ in the moduli space and $2(2g + \ell -1)$ is the real dimension of the cone. We refer to this measure on $\calH_g(m_1,\ldots,m_\ell)$ when we write $\Vol(\param)$ in the following discussion. Calculating volumes of strata explicitly is difficult in general. However, using an algorithm of Eskin--Okounkov \cite{eskin_okounkov_01}, Eskin--Zorich predicted in \cite{eskin_zorich_15} and Aggarwal confirmed in \cite[Theorem 1.4]{aggarwal_20} that we have \begin{equation} \label{eq:volume_stratum} \Vol(\calH_g( m_1, \ldots, m_\ell) ) = \frac{4}{\prod_{i=1}^\ell (m_i + 1)} \cdot \left( 1+ O\left(\frac1g \right) \right) . \end{equation} Recall that strata are not necessarily connected, they can have up to three connected components \cite{kontsevich_zorich_03}. However, the strata that we consider in this article are all connected. \section{The method of Siegel--Veech constants} \label{sec:Siegel-Veech} A useful method to calculate the probability of having a saddle connection of a certain length is by Siegel--Veech constants. To do this, we fix a stratum $\calH$ of unit-area translation surfaces. For $X \in \calH$, $V_{sc}(X)$ denotes the set of (holonomy vectors of) saddle connections on $X$. A~configuration $\calC$ of a saddle connection includes the multiplicity of the saddle connection, the order of the zeros that the saddle connection connects, and whether the zeros are different or not. Let $V_{\calC}(X) \subseteq V_{sc}(X)$ denote the subset of holonomy vectors whose saddle connections are in the configuration $\calC$. For an integrable function $f:\RR^2 \rightarrow \RR$ with compact support, we define the \textit{Siegel--Veech transform} $\hat f_{\calC}:\calH \rightarrow \RR$ via \[ \hat f_{\calC} (X) = \sum_{v \in V_{\calC}(X)} f(v). \] \begin{theorem}[Siegel--Veech formula {\cite{veech_98}}] \label{Thm:SV-formula} Let $\calH$ be a connected stratum of unit-area translation surfaces and~$\calC$ a configuration of saddle connections. Then there exists a constant~$c(\calC, \calH)$ such that for every integrable $f:\RR^2 \rightarrow \RR$ with compact support, \begin{equation*} \mathbb{E} \left( \hat f_{\calC} \right) = \frac{\int_{\calH} \hat f_{\calC}}{\Vol(\calH)} = c(\calC, \calH) \cdot \int_{\RR^2}f . \end{equation*} The integral on the left is integration with respect to the normalized Lebesgue measure on $\calH$, and the integral on the right is with respect to Lebesgue measure on $\RR^2$. \label{thm:siegel-veech-formula} \end{theorem} The constant $c(\calC, \calH)$ is called the \emph{Siegel--Veech constant} of the configuration and the stratum. It does not only appear when studying the average number of saddle connections for translation surfaces in a given stratum but also when counting saddle connections on a fixed translation surface of this stratum. \begin{theorem}[Quadratic growth of number of saddle connections \cite{eskin_masur_01}] Let $\calH$, $\calC$, and~$c(\calC, \calH)$ as in \cref{Thm:SV-formula}. Then for almost every $(X,\omega) \in \calH$, we have \begin{equation*} \lim_{T \to \infty} \frac{ \left| V_{\calC}(X) \cap B(0,T) \right| }{\pi T^2} = c(\calC, \calH) . \end{equation*} \end{theorem} \Cref{thm:siegel-veech-formula} allows us to estimate expected values of various quantities on a stratum in terms of Siegel--Veech constants. 
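To see how this formula is used for expected counts, here is a back-of-the-envelope consistency check (a minimal numerical sketch in Python; it is purely illustrative and not part of the arguments). Taking $f$ to be the indicator of the annulus $\{ v \in \RR^2 : \sfrac{a}{g} \leq |v| \leq \sfrac{b}{g} \}$ gives $\int_{\RR^2} f = \pi (b^2-a^2)/g^2$, and for a saddle connection joining a fixed pair of distinct zeros of order $1$ the large-genus Siegel--Veech constant is approximately $(m_1+1)(m_2+1) = 4$, as recalled in \eqref{Eq:sc} below. Summing over the roughly $2g^2$ unordered pairs of zeros of the principal stratum recovers, up to the neglected $O(\sfrac1g)$ corrections and counting conventions, the Poisson mean $8\pi(b^2-a^2)$ of \cref{Thm:Main}.
\begin{verbatim}
import math

def sv_expected_count(c_sv, a, b, g):
    # Siegel--Veech formula with f = indicator of the annulus a/g <= |v| <= b/g:
    # E(f_hat) = c_sv * integral(f) = c_sv * pi * (b^2 - a^2) / g^2
    return c_sv * math.pi * (b**2 - a**2) / g**2

a, b, g = 1.0, 2.0, 10**6
pairs = (2 * g - 2) * (2 * g - 3) / 2            # unordered pairs of simple zeros
total = pairs * sv_expected_count(4.0, a, b, g)  # c_sv ~ 4 for a fixed pair
print(total, 8 * math.pi * (b**2 - a**2))        # both are approximately 75.4
\end{verbatim}
The exact computation, including the combinatorial factors and the error terms, is carried out in \cref{Prop:E}.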
Eskin--Masur--Zorich developed in \cite{eskin_masur_zorich_03} a method for computing Siegel--Veech constants using combinatorial constructions and surgeries as well as the Siegel--Veech formula above. Based on their recursive formulas, asymptotics of Siegel--Veech constants for large genus have been more recently calculated by many different authors, e.g.\ \cite{chen_moeller_zagier_18, sauvaget_18, aggarwal_20, aggarwal_19, chen_moeller_sauvaget_zagier_20}. We recall now the Siegel--Veech constants for the configurations that are relevant for our proofs. Let $\calC_{\! \monetomtwo}$ be the configuration of a single saddle connection connecting a fixed zero of order $m_1$ to a fixed, different zero of order $m_2$. Then the corresponding Siegel--Veech constant is \cite[Corollary 1]{aggarwal_20}, \cite[Theorem 1.2]{aggarwal_19} \begin{equation} \label{Eq:sc} c_{\! \monetomtwo} = (m_1+1)(m_2+1) \cdot \left(1 + O \left( \frac{1}{g} \right) \right). \end{equation} Let $\calC_{\! \homsconeone}$ be the configuration of two homologous saddle connections connecting a fixed zero of order $1$ to a fixed, different zero of order $1$. Then the corresponding Siegel--Veech constant is \cite[Proposition 3.4]{aggarwal_19} \begin{equation} \label{Eq:hom} c_{\! \homsconeone} = O \left(\frac{1}{g^2} \right). \end{equation} Let $\calC_{\mloop}^{\rm non-hom}$ be the configuration in $\calH_g(m, 1, \ldots, 1)$ of a saddle connection of multiplicity~$1$, connecting a fixed zero of order $m$ to itself. Note that such a saddle connection is a loop which could either bound a cylinder or not, giving different cases for the formula for the Siegel--Veech constant. A combination of \cite[Corollary 3, 4, and 5]{aggarwal_20} yields that the Siegel--Veech constant is \begin{align*} c_{\mloop}^{\rm non-hom} & = \frac{(m+1)(m-1)}{2} \left(1 + O \left( \frac{1}{g} \right) \right) + (m+1) \cdot \left(1 + O \left( \frac{1}{g} \right) \right) + O \left( \frac{1}{g} \right) \\ & = \frac{(m+1)^2}{2} \left(1 + O \left( \frac{1}{g} \right) \right) . \end{align*} We will also need an estimate for $c_{\mloop}$ where we allow saddle connections of higher multiplicity in $\calH_g(m, 1, \ldots, 1)$ as well as in $\calH_g(m, 2, \ldots, 2, 1, \ldots, 1)$. With the formula from~\cite[Section 13]{eskin_masur_zorich_03}, this can be estimated as in the proof of \cite[Corollary 3,~4, and 5]{aggarwal_20} to be \begin{equation} \label{eq:loop} c_{\mloop} = (m+1)^2 \cdot O(1) . \end{equation} A detailed calculation of this Siegel--Veech constant can be found in \cite[Appendix 1]{vallejos_24}. \section{Collapsing and opening up zeros} \label{sec:collapsing} Our calculations depend heavily on the estimation of volumes of subsets of strata which contain only translation surfaces with short saddle connections. For this, we vary a construction from \cite[Sections 8.1 and 8.2]{eskin_masur_zorich_03} that collapses a saddle connection between two different zeros of order $1$. We describe here the construction and its inverse. Let $(X,\omega)$ be a translation surface with two zeros $v_1$ and $v_2$ of order $1$ that are connected by a saddle connection $\alpha$ of length $\delta$. We assume for the description of the construction that~$\alpha$ is horizontal. There are four horizontal separatrices starting in $v_2$, one of them being~$\alpha$. Let $\gamma$ be a geodesic segment of length $\delta$ on the horizontal separatrix starting in~$v_2$ which forms an angle of $2\pi$ with $\alpha$. We call $\gamma$ the \emph{ghost double} of $\alpha$. 
Note that $\gamma$ might not always exist and we will discuss in detail later what to do when it does not exist. We now cut open along $\alpha$ and $\gamma$, obtaining two copies of $\alpha$ and $\gamma$ each in the metric completion of $X \setminus (\alpha \cup \gamma)$. We denote the upper copy of $\alpha$ by $\alpha_1$ and the lower copy by~$\alpha_2$, and correspondingly for $\gamma$. Then we reglue $\alpha_1$ with~$\gamma_1$ and reglue $\alpha_2$ with $\gamma_2$ as in \cref{fig:collapsing_one}.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=1]
\draw[very thick] (-1.5,0) -- node[above]{$\alpha_1$} node[below]{$\alpha_2$} (0,0) ;
\draw[very thick, densely dashed] (0,0) -- node[above]{$\gamma_1$} node[below]{$\gamma_2$} (1.5,0);
\fill (-1.5,0) circle (2pt);
\draw (-1.5,0) + (-15:0.3) arc (-15:-180:0.3) node[left]{$4\pi$} arc (180:15:0.3);
\fill (0,0) circle (2pt);
\draw (165:0.3) arc (165:90:0.3) node[above] {$2\pi$} arc (90:15:0.3);
\draw (-165:0.3) arc (-165:-90:0.3) node[below] {$2\pi$} arc (-90:-15:0.3);
\draw (1.5,0) + (160:0.3) arc (160:0:0.3) node[right]{$2\pi$} arc (0:-160:0.3);
\begin{scope}[xshift=5cm]
\draw[very thick, densely dashed] (0,0) -- node[left]{$\alpha_1$} node[right]{$\gamma_1$} (0,1.5) ;
\draw[very thick, densely dashed] (0,0) -- node[left]{$\alpha_2$} node[right]{$\gamma_2$} (0,-1.5);
\fill (0,0) circle (2pt);
\draw (105:0.3) arc (105:180:0.3) node[left]{$4\pi$} arc (-180:-105:0.3);
\draw (-75:0.3) arc (-75:0:0.3) node[right]{$2\pi$} arc (0:75:0.3);
\draw (0,1.5) + (-75:0.3) arc (-75:90:0.3) node[above]{$2\pi$} arc (90:255:0.3);
\draw (0,-1.5) + (75:0.3) arc (75:-90:0.3) node[below]{$2\pi$} arc (-90:-255:0.3);
\end{scope} \end{tikzpicture} \caption{Collapsing a zero into another zero.} \label{fig:collapsing_one} \end{figure}
Note that the two points corresponding to $v_2$ under this cutting-and-gluing are regular points and that~$v_1$ turns into a new zero of order $2$. Therefore we call this construction \emph{collapsing the zero~$v_2$ into the zero $v_1$ (along $\alpha$)} and the inverse of it \emph{opening up a zero of order $2$}. The data for opening up a zero of order $2$ is the holonomy vector of the saddle connection $\alpha$ and combinatorial information arising from choosing two out of the three geodesic segments with the given holonomy vector starting in the given zero of order $2$. We consider now $\calH_g=\calH_g(1,1, \dots, 1)$, the principal stratum of unit-area translation surfaces of genus $g$ with~$2g-2$ zeros of order~$1$. The process of collapsing a zero into another zero defines a map \begin{equation} \label{eq:measure-preserving-map} \calH^\epsilon \times \{\pm 1\} \to \calH_g(2,1, \ldots, 1) \times M \times \mathbb{D}_{\epsilon} \end{equation} where $\calH^\epsilon$ is the subset of $\calH_g$ of translation surfaces with exactly one saddle connection of length at most~$\epsilon$ and the choice of $1$ or $-1$ indicates the orientation of this saddle connection. Furthermore, $M$ is the set of possible combinatorial data of size $(2g-2)(2g-3)$ which records the names of the zeros that were connected by the saddle connection, and~$\mathbb{D}_{\epsilon}$ is the disk of radius~$\epsilon$ in $\mathbb{R}^2$. This map is well-defined and its inverse, defined by opening up the unique zero of order $2$, is also well-defined up to the three possible choices of two geodesic segments. Hence, the map is $3$--to--$1$.
Furthermore, the map is locally measure-preserving: Let $(X,\omega) \in \calH^\epsilon$ and $\calB$ be a basis of relative homology for $(X, \omega)$ which contains the unique shortest saddle connection~$\alpha$. Then $\calB$ defines period coordinates around $(X,\omega)$ and the image of $\calB \setminus \alpha$ defines period coordinates around the translation surface in the image of~$(X,\omega)$, whereas the holonomy vector of $\alpha$ defines coordinates for $\mathbb{D}_{\epsilon}$. As the real dimensions of domain and image are both $2(4g - 3)$ and the lengths of the vectors in a basis of domain and image coincide, the measures of a neighbourhood of $(X,\omega)$ and of its image also coincide; see \cite[Section~8.2]{eskin_masur_zorich_03} for more details. If we consider several saddle connections which do not intersect (including not sharing a zero as end point) and whose ghost doubles do not intersect each other nor the saddle connections, we can collapse all pairs of zeros simultaneously, see \cref{sec:collapsing_simultaneously}. For intersecting saddle connections, this is in general not true. However, for a specific configuration, we can use the following simultaneous collapsing~procedure. Let $(X,\omega)$ be a translation surface with three zeros $v_1$, $v_2$, and $v_3$ of order $1$, $\alpha$ a saddle connection between $v_1$ and~$v_2$ of length~$\delta_1$, and $\beta$ a saddle connection between $v_2$ and $v_3$ of length $\delta_2$. To collapse $v_1$ and $v_3$ into $v_2$, we consider a ghost double of~$\alpha$, starting in~$v_1$ and forming angle $2\pi$ with $\alpha$, and a ghost double of~$\beta$, starting in $v_3$ and forming angle~$2\pi$ with~$\beta$. Note that we have to make sure that none of the ghost doubles and saddle connections intersect each other. The cases in which they do intersect will again be excluded later by other volume~estimates. We cut open along $\alpha$, $\beta$, and their ghost doubles and reglue as indicated in \cref{fig:collapsing_two}. Then~$v_1$ and $v_3$ turn into regular points and $v_2$ turns into a zero of order $3$.
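The dimension bookkeeping behind these locally measure-preserving maps can be checked mechanically. The following small sketch (in Python; an illustration of the count $2(2g+\ell-1)$ only, not part of the proofs) verifies that collapsing one zero of order $1$, respectively two zeros as above, lowers the dimension of period coordinates by exactly the $2$, respectively $4$, real parameters that are recovered by the holonomy vectors stored in the disks.
\begin{verbatim}
def dim_period_coords(g, num_zeros):
    # real dimension of period coordinates: 2 * (2g + number of zeros - 1)
    return 2 * (2 * g + num_zeros - 1)

for g in range(3, 9):
    principal    = dim_period_coords(g, 2 * g - 2)  # H_g(1, ..., 1)
    one_collapse = dim_period_coords(g, 2 * g - 3)  # H_g(2, 1, ..., 1)
    two_collapse = dim_period_coords(g, 2 * g - 4)  # H_g(3, 1, ..., 1)
    assert principal == 2 * (4 * g - 3)
    assert principal == one_collapse + 2            # one holonomy vector
    assert principal == two_collapse + 4            # two holonomy vectors
\end{verbatim}
The same count underlies the simultaneous collapsing of $K$ saddle connections in \cref{sec:collapsing_simultaneously}, where the $2K$ missing real parameters are recovered by $K$ holonomy vectors.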
\begin{figure}[h] \centering \begin{tikzpicture}[scale=1]
\draw[very thick] (0,0) -- node[above]{$\alpha_1$} node[below]{$\alpha_2$} (1.7,0) ;
\draw[very thick, densely dashed] (1.7,0) -- node[above]{$\gamma_1$} node[below]{$\gamma_2$} (3.4,0);
\draw[very thick] (0,0) -- node[left]{$\beta_1$} node[right]{$\beta_2$} (120:2.3) ;
\draw[very thick, densely dashed] (120:2.3) -- node[left]{$\gamma_3$} node[right]{$\gamma_4$} +(120:2.3);
\fill (0,0) circle (2pt);
\draw (-15:0.3) arc (-15:-120:0.3) node[left]{$4\pi-\theta$} arc (-120:-225:0.3);
\draw (15:0.3) arc (15:60:0.3) node[above right]{$\theta$} arc (60:105:0.3);
\fill (1.7,0) circle (2pt);
\draw (1.7,0) + (165:0.3) arc (165:90:0.3) node[above] {$2\pi$} arc (90:15:0.3);
\draw (1.7,0) + (-165:0.3) arc (-165:-90:0.3) node[below] {$2\pi$} arc (-90:-15:0.3);
\draw (3.4,0) + (160:0.3) arc (160:0:0.3) node[right]{$2\pi$} arc (0:-160:0.3);
\fill (120:2.3) circle (2pt);
\draw (120:2.3) + (-45:0.3) arc (-45:30:0.3) node[right]{$2\pi$} arc (30:105:0.3);
\draw (120:2.3) + (-75:0.3) arc (-75:-150:0.3) node[left]{$2\pi$} arc (-150:-225:0.3);
\draw (120:4.6) + (-45:0.3) arc (-45:120:0.3) node[above]{$2\pi$} arc (-240:-75:0.3);
\begin{scope}[xshift=7cm, yshift=2cm]
\draw[very thick, densely dashed] (0,0) -- node[pos=0.6, above]{$\alpha_1$} node[pos=0.6, below]{$\gamma_1$} +(0:1.7) ;
\draw[very thick, densely dashed] (0,0) -- node[left]{$\alpha_2$} node[right]{$\gamma_2$} (0,-1.7);
\draw[very thick, densely dashed] (0,0) -- node[above]{$\gamma_3$} node[below]{$\beta_1$} +(150:2.3) ;
\draw[very thick, densely dashed] (0,0) -- node[left]{$\gamma_4$} node[right]{$\beta_2$} +(60:2.3);
\fill (0,0) circle (2pt);
\draw (-195:0.3) arc (-195:-150:0.3) node[left]{$4\pi-\theta$} arc (-150:-105:0.3);
\draw (-75:0.3) node[right]{$2\pi$} arc (-75:-45:0.3) arc (-45:-15:0.3);
\draw (135:0.3) arc (135:105:0.3) node[above]{$2\pi$} arc (105:75:0.3);
\draw (45:0.3) node[right]{$\theta$} arc (45:30:0.3) arc (30:15:0.3);
\draw (0:1.7) + (-165:0.3) arc (-165:0:0.3) node[right]{$2\pi$} arc (0:165:0.3);
\draw (0,-1.7) + (75:0.3) arc (75:-90:0.3) node[below]{$2\pi$} arc (-90:-255:0.3);
\draw (150:2.3) + (-15:0.3) arc (-15:150:0.3) node[left] {$2\pi$} arc (150:315:0.3);
\draw (60:2.3) + (-105:0.3) arc (-105:60:0.3) node[above] {$2\pi$} arc (60:235:0.3);
\end{scope} \end{tikzpicture} \caption{Collapsing two zeros into a third one, along two saddle connections.} \label{fig:collapsing_two} \end{figure}
\section{The generic and the non-generic subsets of \texorpdfstring{$\calH_g$}{the stratum}} \label{sec:generic} Our strategy to prove \cref{Thm:Main} is to show the statement first for a generic subset $\Hgen \subseteq \calH_g$. Then we show that the proportion of the measure of the non-generic set $\Hng$ in $\calH_g$ goes to zero as~$g \to \infty$. \begin{definition} \label{Def:generic} Fix $B \in \mathbb{R}_+$. We define the following subsets of $\calH_g$: \begin{align*} \calH_\anyloop & \coloneqq \left\{ (X,\omega) \in \calH_g : \, X \text{ contains a loop of length at most $\sfrac{6B}{g}$} \right\} \\ \calH_\cherry &\coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH_g : \, & X \text{ contains a saddle connection of length at most $\sfrac{B}{g}$ and} \\ & \qquad \text{a saddle connection of length at most $\sfrac{2B}{g}$ that share a zero} \end{aligned} \right\} \end{align*} where a \emph{loop} is a saddle connection that connects a zero to itself.
Furthermore, we define the \emph{non-generic subset} of $\calH_g$ to be $\Hng = \calH_{\anyloop} \cup \calH_{\cherry}$ and the \emph{generic subset} to be the complement of the non-generic set: $\Hgen = \calH_g \setminus \Hng$. \end{definition} The next proposition justifies calling the subsets generic and non-generic. \begin{proposition} \label{Prop:generic-is-generic} Let $\Hgen$ as in \cref{Def:generic}. Then \begin{equation*} \frac{\Vol(\Hgen)}{\Vol(\calH_g)} \to 1 \qquad\text{as}\qquad g \to \infty. \end{equation*} More specifically, \[ \frac{\Vol(\calH_{\rm NG})}{\Vol(\calH_g)} = O\left( \frac1g \right) \] where the implied constant does not depend on $g$. \end{proposition} \begin{proof} We first use the method of Siegel--Veech constants to obtain an estimate on the volume of $\calH_{\anyloop}$. For this, let $f \colon \RR^2 \to \RR$ be the indicator function of the ball of radius $\sfrac{6B}{g}$ centred at~$0$. Choose~$\calC_\oneloop$ to be the configuration of a saddle connections which forms a loop, starting at a zero of order $1$. Applying \cref{Thm:SV-formula} and the Siegel--Veech constant from Equation~\eqref{eq:loop}, we obtain \begin{equation} \Vol (\calH_\anyloop) = \int_{\calH_\anyloop } \One \leq \int_{\calH_g} \hat{f}_{\calC_\oneloop} = (2g-2) \cdot c_\oneloop \cdot \int_{\RR^2} f \cdot \Vol(\calH_g) = O\left( \frac{B^2}{g} \right) \cdot \Vol(\calH_g) , \label{eq:volume_loop} \end{equation} where the factor $2g-2$ corresponds to the number of possible choices for the zero at which the loop is based. We will later in the proof need similar estimates as \eqref{eq:volume_loop} for the following subset of $\calH_g$ \begin{align*} \calH_\homsc &\coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH_g : \, &X \text{ contains a pair of homologous saddle connections }\\ & \qquad \text{of length at most $\sfrac{B}{g}$ between different zeros} \end{aligned}\right\} \end{align*} and the following two subsets of $\calH_g(2,1,\ldots,1)$ \begin{align*} \calH_{\twoloop} & \coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH_g(2,1,\ldots,1) : \, & X \text{ contains a loop of length at most $\sfrac{6B}{g}$} \\ & \qquad \text{starting at the zero of order } 2 \end{aligned} \right\} \\ \calH_\twotoone &\coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH_g(2,1,\ldots,1) : \, & X \text{ contains a saddle connection of length at most $\sfrac{3B}{g}$} \\ & \qquad \text{from the zero of order } 2 \text{ to a zero of order } 1 \end{aligned} \right\} \!. \end{align*} Analogously to \eqref{eq:volume_loop}, we obtain with the Siegel--Veech constant from ~\eqref{Eq:hom} \begin{align} \Vol (\calH_\homsc) & = O\left( \frac{B^2}{g^2} \right) \cdot \Vol(\calH_g) , \label{eq:volume_hom} \\ \intertext{ with the Siegel--Veech constant from~\eqref{eq:loop} for $m=2$} \Vol ( \calH_{\twoloop} ) & = O \left( \frac{B^2}{g^2} \right) \cdot \Vol(\calH_g(2,1,\ldots,1)) , \label{eq:volume_loops_order-2} \\ \intertext{and with the Siegel--Veech constant from~\eqref{Eq:sc} for $m_1=2$ and $m_2=1$} \Vol (\calH_\twotoone ) &= O \left( \frac{B^2}{g} \right) \cdot \Vol(\calH_g(2,1,\ldots,1)) . \label{eq:volume_order-2} \end{align} To estimate the volume of $\calH_\cherry$, the idea is to simultaneously collapse the two saddle connections that share a zero. We introduce another subset of $\calH_g$ which contains some of the cases for which the collapsing is not possible. 
Let \begin{equation*} \calH_{\closedgeod} \coloneqq \left\{ % \begin{aligned} (X,\omega) \in \calH_g : X \text{ contains a closed geodesic of length at most } \sfrac{6B}{g} \right\} . \end{equation*} Note that a closed geodesic on a translation surface in $\calH_\closedgeod$ is either a chain of saddle connections with angles at least $\pi$ between them or is contained in a cylinder. In the latter case, we can represent the closed geodesic as a boundary curve of the cylinder and hence we have a chain of saddle connections again. We already have an upper bound for the volume of $\calH_\anyloop$. To determine an upper bound for the volume of $\Hng = \calH_\anyloop \cup \calH_{\cherry}$, we consider now $\calH_{\cherry} \setminus (\calH_\anyloop \cup \calH_\closedgeod)$, $\calH_{\closedgeod} \setminus (\calH_\anyloop \cup \calH_\homsc)$ and~$\calH_\homsc$ separately. First, for a translation surface in $\calH_\cherry \setminus (\calH_\anyloop \cup \calH_\closedgeod)$, let $\alpha$ be the shortest saddle connection that shares a zero with a saddle connection of length at most $\sfrac{2B}{g}$. Furthermore, let $\beta$ be the shortest saddle connection that $\alpha$ shares a zero with. Then we can collapse two zeros along $\alpha$ and $\beta$ into the shared zero. As the length of $\alpha$ is at most~$\sfrac{B}{g}$ and the length of $\beta$ is at most $\sfrac{2B}{g}$, this gives us the following locally measure-preserving~map: \begin{equation*} \calH_{\cherry} \setminus (\calH_\anyloop \cup \calH_\closedgeod) \to M_{\cherry} \times \calH_g(3,1,\ldots, 1) \times \mathbb{D}_{\sfrac{B}{g}} \times \mathbb{D}_{\sfrac{2B}{g}} \end{equation*} Note that this map is well-defined on a subset of full measure: The saddle connections $\alpha$ and~$\beta$ are uniquely determined except for a subset of measure $0$. And if $\alpha$ and $\beta$ or their ghost doubles would intersect, $\alpha$ would be a loop or we would have a closed geodesic of length at most~$\sfrac{6B}{g}$ that shares a zero with $\alpha$, so the translation surface would be in $\calH_\anyloop \cup \calH_\closedgeod$. The set~$M_{\cherry}$ here expresses the possible choices of the three zeros involved and hence is of order $g^3$. We can define an inverse of the map up to a finite choice of the geodesic segments which are to be cut open. Hence, the map is finite--to--$1$ and we obtain with Equation~\eqref{eq:volume_stratum} \begin{align*} \frac{ \Vol (\calH_{\cherry} \setminus (\calH_\anyloop \cup \calH_\closedgeod) )}{ \Vol (\calH_g) } & = O(1) \cdot |M_{\cherry}| \cdot \frac{ \Vol (\calH_g(3,1,\ldots, 1))}{ \Vol (\calH_g) } \cdot \Vol \left( \mathbb{D}_{\sfrac{B}{g}} \times \mathbb{D}_{\sfrac{2B}{g}} \right) \\ & = O \left( g^3 \cdot \frac{4}{2^{2g-5} \cdot 4} \cdot \frac{2^{2g-2}}{4} \cdot \frac{B^4}{g^4} \right) = B^4 \cdot O\left( \frac{1}{g} \right) . \end{align*} For the set $\calH_\closedgeod \setminus (\calH_\anyloop \cup \calH_\homsc)$, we approach the volume estimate in two steps: For a translation surface in $\calH_\closedgeod \setminus (\calH_\anyloop \cup \calH_\homsc)$, choose the shortest closed geodesic. Let $\alpha$ be the shortest saddle connection on this closed geodesic. Then $\alpha$ is not a loop and we can collapse a zero along $\alpha$ into another zero such that we obtain a closed geodesic of length at most~$\sfrac{6B}{g}$ containing a zero of order $2$. 
This gives us the following finite--to--$1$, locally measure-preserving map: \begin{equation*} \calH_{\closedgeod}\setminus (\calH_\anyloop \cup \calH_\homsc) \to M_{\closedgeod} \times \left( \calH_{\twoloop} \cup \calH_\twotoone \right) \times \mathbb{D}_{\sfrac{6B}{g}} \end{equation*} Note that this map is well-defined on a subset of full measure: The ghost double of $\alpha$ would not be defined only if there is a saddle connection of shorter length with the same direction (which only happens for a set of measure zero) or the ghost double defines a saddle connection which is homologous to $\alpha$ (which would mean that we are in $\calH_\homsc$). Here, $M_\closedgeod$ is the combinatorial set of order $g^2$ that expresses the choices of zeros that are collapsed into each other. We can estimate the volume of $ \calH_{\twoloop} \cup \calH_\twotoone$ using Equations~\eqref{eq:volume_loops_order-2} and \eqref{eq:volume_order-2} as \begin{equation*} \frac{ \Vol\left( \calH_{\twoloop} \cup \calH_\twotoone \right) }{\Vol(\calH_g(2,1,\ldots,1))} = O\left( \frac{B^2}{g^2} \right) + O\left( g \cdot \frac{B^2}{g^2} \right) = B^2 \cdot O\left( \frac{1}{g} \right) . \end{equation*} Therefore, we obtain with Equation~\eqref{eq:volume_stratum} \begin{equation*} \frac{ \Vol (\calH_\closedgeod \setminus (\calH_\anyloop \cup \calH_\homsc)) }{ \Vol (\calH_g) } = O(1) \cdot |M_\closedgeod| \cdot \frac{ \Vol \left( \calH_{\twoloop} \cup \calH_\twotoone \right) }{ \Vol (\calH_g) } \cdot \Vol \left( \mathbb{D}_{\sfrac{6B}{g}} \right) = B^4 \cdot O\left( \frac{1}{g} \right). \end{equation*} Finally, with Equations~\eqref{eq:volume_loop} and~\eqref{eq:volume_hom}, we have \begin{align*} \frac{\Vol(\calH_{\rm NG})}{\Vol(\calH_g)} & \leq \frac{ \Vol(\calH_\anyloop )}{\Vol( \calH_g)} + \frac{ \Vol (\calH_{\cherry} \setminus (\calH_\anyloop \cup \calH_\closedgeod ))}{ \Vol (\calH_g) } + \frac{ \Vol (\calH_\closedgeod \setminus (\calH_\anyloop \cup \calH_\homsc)) }{ \Vol (\calH_g) } + \frac{ \Vol ( \calH_\homsc) }{ \Vol (\calH_g) } \\ & = B^2 \cdot O\left( \frac1g \right) + B^4 \cdot O\left( \frac{1}{g} \right) + B^4 \cdot O\left( \frac{1}{g} \right) + B^2 \cdot O\left( \frac{1}{g^2} \right) \\ & = \max\{B^2, B^4\} \cdot O\left( \frac1g \right) . \qedhere \end{align*} \end{proof} We will also need a variation of \cref{Prop:generic-is-generic} for strata different from the principal stratum, specifically for the strata to which translation surfaces belong after collapsing along $K$ saddle connections. \begin{definition} Fix $B \in \mathbb{R}_+$ as well as $K \in \NN, K <g$ and let $\calH' = \calH_g(2, \dots, 2, 1, \dots, 1)$ be the stratum with~$K$ zeros of order $2$ and $2g-2-2K$ zeros of order $1$. 
We define the following subsets of $\calH'$: \begin{align*} \calH'_{\oneloop} & \coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH' : \, & X \text{ contains a loop of length at most $\sfrac{6B}{g}$} \\ & \qquad \text{starting at a zero of order } 1 \end{aligned} \right\} \\ \calH'_\cherry &\coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH' : \, & X \text{ contains a saddle connection of length at most $\sfrac{B}{g}$} \\ & \qquad \text{and a saddle connection of length at most $\sfrac{2B}{g}$}\\ & \qquad \text{that share a zero where all zeros are of order $1$} \end{aligned} \right\} \\ \calH'_{\twoloop} & \coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH' : \, & X \text{ contains a loop of length at most $\sfrac{2B}{g}$} \\ & \qquad \text{starting at a zero of order } 2 \end{aligned} \right\} \\ \calH'_\twotostar &\coloneqq \left\{ \begin{aligned} (X,\omega) \in \calH' : \, & X \text{ contains a saddle connection of length at most $\sfrac{2B}{g}$ } \\ & \qquad \text{from a zero of order $2$ to another zero} \end{aligned} \right\} \end{align*} Similarly to before, we define the \emph{non-generic subset} of $\calH'$ to be \begin{equation*} \calH'_{\rm NG} = \calH'_\oneloop \cup \calH'_{\cherry} \cup \calH'_\twoloop \cup \calH'_\twotostar \end{equation*} and the \emph{generic subset} to be the complement of the non-generic set: $\calH_{\rm Gen}' = \calH' \setminus \calH'_{\rm NG}$. \end{definition} We can prove as in \cref{Prop:generic-is-generic} that the volume of $\calH'_{\rm NG}$ is small: \begin{proposition} \label{prop:generic-prime-is-generic} For fixed $K\in \NN$, we have \begin{equation*} \frac{\Vol(\calH_{\rm Gen}' )}{\Vol(\calH')} \to 1 \qquad\text{as}\qquad g \to \infty. \end{equation*} More specifically, \[ \frac{\Vol(\calH_{\rm NG}') }{\Vol(\calH')} = O\left( \frac{K}{g} \right) \] where the implied constant does not depend on $K$ or $g$. \end{proposition} \begin{proof} Similar to Equations~\eqref{eq:volume_loops_order-2} and \eqref{eq:volume_order-2}, we can use Equations~\eqref{eq:loop} and~\eqref{Eq:sc} to obtain \begin{equation*} \frac{\Vol(\calH'_\twoloop \cup \calH'_\twotostar ) }{\Vol(\calH')} = 9B^2 \cdot O\left( K \cdot \frac{1}{g^2} \right) + 6B^2 \cdot O\left( K g \cdot \frac{1}{g^2} \right) + 9B^2 \cdot O\left( K^2 \cdot \frac{1}{g^2} \right) , \end{equation*} where the third (and first) term is dominated by the second as $K< g$. Furthermore, as in the proof of \cref{Prop:generic-is-generic}, we can show that \begin{equation*} \frac{\Vol(\calH'_{\cherry} \cup \calH'_\oneloop )}{\Vol(\calH')} = B^4 \cdot K \cdot O \left( \frac1g \right) + B^2 \cdot O\left( \frac{1}{g^2}\right) . \end{equation*} Note that the additional factor of $K$ comes from the fact that we now have $K+1$ zeros of order~$2$ in ${\calH'}_{\twoloop} \cup {\calH'}_\twotostar$ instead of only one in $\calH_{\twoloop} \cup \calH_\twotoone$. In summary, we can deduce that \begin{equation*} \frac{\Vol(\calH'_{\rm NG})}{\Vol(\calH')} = \max\{B^2, B^4\} \cdot K \cdot O \left( \frac1g \right) \qquad\text{and}\qquad \frac{\Vol(\calH_{\rm Gen}' )}{\Vol(\calH')} \to 1. \qedhere \end{equation*} \end{proof} \section{Simultaneous collapsing} \label{sec:collapsing_simultaneously} Fix now $0 < a_1 < b_1 \leq a_2 <b_2 \leq \dots \leq a_k <b_k \in \RR_+$ and choose $B \coloneqq b_k$. Furthermore, fix $r_1,\dots, r_k \in \NN$ and define $K \coloneqq \sum_{i=1}^k r_i$. 
To apply \cref{Thm:moments}, we have to determine \begin{equation*} \int_{\Hgen} (N_{g,[a_1,b_1]})_{r_1} \cdot \ldots \cdot (N_{g,[a_k,b_k]})_{r_k} . \end{equation*} Instead of calculating this integral directly, we replace $\Hgen$ by a feasible cover $\widehat \calH$, such that we can replace counting saddle connections on elements of $\Hgen$ by calculating the volume of $\widehat \calH$. For this, let $\widehat \calH$ be the set of tuples~$(X, \omega, \Gamma)$ where $(X, \omega) \in \Hgen$, $\Gamma=(\Gamma_1, \dots, \Gamma_k)$ and $\Gamma_i=(\alpha_{i,1}, \dots, \alpha_{i, r_i})$ is an ordered list of disjoint oriented saddle connections with lengths in the interval $[\sfrac{a_i}g, \sfrac{b_i}g]$ for every $i\in \{1,\ldots,k\}$. We sometimes think of $\Gamma$ as a set and refer to a saddle connection~$\alpha \in \Gamma$, which means $\alpha$ belongs to some~$\Gamma_i \in \Gamma$. Note that none of the saddle connections in $\Gamma$ is a loop and that no two saddle connections in $\Gamma$ or their ghost doubles intersect each other: If the latter happened, there would exist another saddle connection of length at most $\sfrac{2B}{g}$ that shares a zero with one of the saddle connections in~$\Gamma$, so $(X, \omega)$ would be in $\calH_{\rm NG}$. Therefore, we can simultaneously collapse all pairs of zeros that are connected by saddle connections in $\Gamma$ to obtain a~map \[ \Phi \from \widehat \calH \to M_{r_1, \dots, r_k} \times \calH' \times \prod_{i=1}^k A_{a_i, b_i}^{r_i} . \] Here $\calH' = \calH_g(2, \dots, 2, 1, \dots, 1)$ is the stratum of translation surfaces of genus $g$ with~$K$ zeros of order $2$ and the rest of order $1$. The set $M_{r_1, \dots, r_k}$ consists of the possible combinatorial data needed to record the labels of the (ordered) pairs of zeros of order $1$ in~$\widehat \calH$ which give rise to zeros of order $2$ in $\calH'$. Finally, the set $A_{a_i, b_i} \subseteq \RR^2$ is an annulus in~$\RR^2$ centred at the origin with inner radius $\sfrac{a_i}{g}$ and outer radius $\sfrac{b_i}{g}$ and the notation $A_{a_i, b_i}^{r_i}$ means that we take~$r_i$ copies of this annulus. With the same arguments as for the map in~\eqref{eq:measure-preserving-map}, the map $\Phi$ is $3^K$--to--$1$ and locally measure-preserving. We use this now to estimate the volume of $\widehat{\calH}$. \begin{lemma} \label{lem:image-phi} For $\widehat{\calH}$ as above, we have \begin{equation*} \Vol( \widehat{\calH}) = 3^K \cdot \Vol( \calH_{\rm Gen}' ) \cdot \left( 1 + K \cdot O \left( \frac1g \right)\right) \cdot \prod_{i=1}^k \left( \frac{ \pi(b_i^2-a_i^2)}{g^2}\right)^{r_i} \cdot |M_{r_1, \ldots, r_k}| \end{equation*} where the implied constant does not depend on $K$ or $g$. \begin{proof} For any $(X', \omega') \in \calH_{\rm Gen}' $, any element of $M_{r_1, \dots, r_k}$ and any $K$--tuple of vectors in $\prod_{i=1}^k A_{a_i, b_i}^{r_i}$, the $K$ zeros of order $2$ can be opened up (in $3^K$ different ways) in the direction of vectors in the chosen tuple and labelled according to the element of $M_{r_1, \dots, r_k}$ to obtain~$(X, \omega)$ in~$\Hgen$. Furthermore, by choosing the oriented saddle connections in $X$ obtained from opening up zeros of order~$2$ as the elements of the tuples~$\Gamma_i$, we obtain a point in $\widehat \calH$. That is, the chosen point $(X', \omega')$ together with the further data is in the image of $\Phi$. 
Therefore, we have \begin{equation*} \calH_{\rm Gen}' \times \prod A_{a_i, b_i}^{r_i} \times M_{r_1, \dots, r_k} \subseteq \Phi( \widehat \calH) \subseteq \calH' \times \prod A_{a_i, b_i}^{r_i} \times M_{r_1, \dots, r_k} . \end{equation*} As the map $\Phi$ is $3^K$--to--$1$, locally measure-preserving and nearly onto as just described and with the estimates on the volume of the complement of $\calH_{\rm Gen}'$ from \cref{prop:generic-prime-is-generic}, we obtain the claimed volume estimate. \end{proof} \end{lemma} \section{The Length Spectrum} \label{sec:length_spectrum} In this section, we check the assumptions of \cref{Thm:moments} for the set $\Hgen$. As before, fix $0 < a_1 < b_1 \leq a_2 <b_2 \leq \dots \leq a_k <b_k \in \RR_+$. For positive integers $r_1, \dots , r_k \in \NN$, define \[ Y_{g,r_1,...,r_k} := (N_{g,[a_1,b_1]})_{r_1} \dots (N_{g,[a_k,b_k]})_{r_k} \from \calH_g \to \NN_0. \] Then $Y_{g,r_1,...,r_k}$ counts the number of (ordered) lists of length $k$ where the $i$--th item is an ordered $r_i$--tuple of saddle connections with lengths in $\left[\sfrac{a_i}g, \sfrac{b_i}g \right]$ on a surface $(X, \omega) \in \calH_g$. Define \[ \EE_{\rm Gen}(\param) = \frac{1}{\Vol(\Hgen)} \int_{\Hgen} \param. \] \begin{proposition} \label{Prop:E} For fixed $0 < a_1 < b_1 \leq \dots \leq a_k <b_k \in \RR_+$, we have \[ \lim_{g \to \infty} \EE_{\rm Gen}(Y_{g,r_1,\dots,r_k})= \prod_{i=1}^k \lambda_{[a_i, b_i]}^{r_i} \qquad \text{with} \qquad \lambda_{[a_i, b_i]}= 8 \pi(b_i^2-a_i^2) . \] \end{proposition} \begin{proof} Let $K \coloneqq \sum_{i=1}^k r_i$ as before. For every $(X,\omega) \in \Hgen$, every suitable set of $K$ saddle connections gives rise to $2^K$ elements in $\widehat \calH$ with different choices of orientations for the saddle connections in $\Gamma$. Hence we have \[ \EE_{\rm Gen}(Y_{g,r_1,\dots,r_k}) = \frac 1{\Vol{\Hgen}} \cdot \frac{1}{2^K} \int_{\widehat \calH} \One. \] The volume estimate from \cref{lem:image-phi} implies \begin{equation*} \int_{\widehat \calH} \One = 3^K \cdot \Vol(\calH_{\rm Gen}') \cdot \prod_{i=1}^k \left( \frac{\pi(b_i^2-a_i^2)}{g^2}\right)^{r_i} \cdot |M_{r_1, \ldots, r_k}| \cdot \left( 1 + K \cdot O \left(\frac1g \right) \right) \end{equation*} with the notation as in \cref{sec:collapsing_simultaneously}. An element in $M_{r_1, \ldots, r_k}$ is a way to choose $2K$ zeros (with labels) successively out of a set of $2g-2$ zeros with labels, hence \[ |M_{r_1, \dots, r_k}| = \frac{ (2g-2)! }{(2g-2-2K)!} . \] Combining these formulas, we get \begin{equation*} \EE_{\rm Gen} (Y_{g,r_1,\dots,r_k}) = \frac{\Vol(\calH_{\rm Gen}')}{\Vol(\Hgen)} \cdot \frac{3^K}{2^K} \cdot \frac{ (2g-2)! }{ (2g-2-2K)!} \cdot \prod_{i=1}^k \left( \frac{\pi(b_i^2-a_i^2)}{g^2}\right)^{r_i} \cdot \left( 1 + K \cdot O \left( \frac1g \right) \right) . \end{equation*} Furthermore, we have from \cref{Prop:generic-is-generic,prop:generic-prime-is-generic} and Equation~\eqref{eq:volume_stratum} that \begin{equation*} \lim_{g \to \infty } \frac{\Vol(\calH_{\rm Gen}')}{\Vol(\Hgen)} = \lim_{g \to \infty} \frac{\Vol(\calH')}{\Vol(\calH_g)} = \frac{4}{3^K \cdot 2^{2g-2-2K}} \cdot \frac{2^{2g-2}}{4} = \frac{2^{2K}}{3^K} = \left( \frac43\right)^K. 
\end{equation*} Taking the limit of $\EE_{\rm Gen} (Y_{g,r_1,\dots,r_k})$ for $g\to\infty$ and using $K = \sum_{i=1}^k r_i$ yields \begin{align*} \lim_{g \to \infty} \EE_{\rm Gen} (Y_{g,r_1,\dots,r_k}) & = \lim_{g \to \infty} \left( \frac43\right)^K \cdot \frac{3^K}{2^K} \cdot (2g)^{2K} \cdot \frac{1}{g^{2K}} \cdot \prod_{i=1}^k \left( \pi(b_i^2-a_i^2) \right)^{r_i} \\ & = \prod_{i=1}^k \left( 8 \pi(b_i^2-a_i^2) \right)^{r_i} . \qedhere \end{align*} \end{proof} \section{ From the generic set to the whole stratum} \label{sec:finish} Let $\PP(\param)$ denote the probability of an event in $\calH_g$ and let $\PP_{\rm Gen}(\param)$ denote the probability of an event in $\Hgen$. For a random variable $N \from \calH_g \to \NN_0$ and $k \in \NN$, we have \[ \PP_{\rm Gen} (N=k) = \frac{\Vol \Big\{ (X, \omega) \in \Hgen \ST N(X, \omega) =k \Big\} }{ \Vol(\Hgen) } . \] To compare $\PP(\param)$ and $\PP_{\rm Gen}(\param)$, note that \begin{align*} \Vol \Big\{ (X, \omega) \in \calH_g \ST N(X, \omega) =k \Big\} - \Vol(\Hng) & \leq \Vol \Big\{ (X, \omega) \in \Hgen \ST N(X, \omega) =k \Big\} \\ & \leq \Vol \Big\{ (X, \omega) \in \calH_g \ST N(X, \omega) =k \Big\} . \end{align*} From \cref{Prop:generic-is-generic}, we know that \[ \frac{\Vol(\Hgen)}{ \Vol(\calH_g) } \to 1 \qquad\text{and}\qquad \frac{\Vol(\Hng)}{ \Vol(\calH_g) } \to 0 \qquad\text{as}\qquad g \to \infty. \] Hence, if $\lim_{g \to \infty} \PP_{\rm Gen} (N=k)$ exists, so does $\lim_{g \to \infty} \PP (N=k)$ and the limits are equal. The same holds for any event involving multiple random variables. In the case discussed here, from \cref{Prop:E} and \cref{Thm:moments}, it follows that \[ \lim_{g \to \infty} \PP_{\rm Gen} \left( N_{g,[a_1,b_1]} = n_1,..., N_{g,[a_k,b_k]} = n_k \right) = \prod_{i=1}^k \frac{\lambda_{[a_i, b_i]}^{n_i} e^{-\lambda_{[a_i,b_i]}}}{n_i!} . \] Therefore, \[ \lim_{g \to \infty} \PP \left( N_{g,[a_1,b_1]} = n_1,..., N_{g,[a_k,b_k]} = n_k \right) = \prod_{i=1}^k \frac{\lambda_{[a_i, b_i]}^{n_i} e^{-\lambda_{[a_i,b_i]}}}{n_i!} . \] This finishes the proof of \cref{Thm:Main}. \section{Proof of \texorpdfstring{\cref{thm:slow-growing_exceptions,thm:order_m}}{the variations of the main theorem}} \label{sec:variations} In this section, we outline how to prove the two versions of the main theorem for other strata than the principal stratum. We start with $\calH_g(m,\ldots,m)$ where $m$ is a fixed odd number that divides $2g-2$. To adapt the proof of \cref{Prop:generic-is-generic} to this situation, we replace all the zeros of order~$1$ by zeros of order $m$. The collapsing procedure still works when considering all of the potential~$m$ ghost doubles simultaneously. As $m$ is fixed, the values of the Siegel--Veech constants and the combinatorial constants are changed but not the order of $g$ in these terms. Hence, the statement of \cref{Prop:generic-is-generic} and the analogous version of \cref{prop:generic-prime-is-generic} is also true for~$\calH_g(m, \ldots, m)$. The map $\Phi$ from \cref{sec:collapsing_simultaneously} is replaced by the map \begin{equation*} \widehat \calH \to M_{r_1, \dots, r_k} \times \calH'' \times \prod_{i=1}^k A_{a_i, b_i}^{r_i} , \end{equation*} where $\calH'' = \calH_g(2m,\ldots,2m,m,\ldots,m)$ is the stratum of translation surfaces of genus $g$ with~$K$ zeros of order $2m$ and the rest of order $m$. 
Note that this map is $(2m+1)^K$--to--$1$ which changes the volume of $\widehat{\calH}$ to \begin{equation*} \Vol( \widehat{\calH}) = (2m+1)^K \cdot \Vol( \calH_{\rm Gen}'' ) \cdot \left( 1 + K \cdot O \left( \frac1g \right)\right) \cdot \prod_{i=1}^k \left( \frac{\pi(b_i^2-a_i^2)}{g^2}\right)^{r_i} \cdot |M_{r_1, \ldots, r_k}| . \end{equation*} With this, we can now do the same calculation as in \cref{Prop:E}. In particular, we have \begin{equation*} |M_{r_1, \dots, r_k}| = \frac{ \left( \frac{2g-2}{m} \right)! }{\left( \frac{2g-2}{m}-2K \right)!} \end{equation*} and \begin{equation*} \lim_{g \to \infty } \frac{\Vol(\calH_{\rm Gen}'')}{\Vol(\Hgen)} = \frac{4}{(2m+1)^K \cdot (m+1)^{\frac{2g-2}{m}-2K}} \cdot \frac{(m+1)^{\frac{2g-2}{m}}}{4} = \left( \frac{(m+1)^2}{2m+1}\right)^K. \end{equation*} Hence with $K = \sum_{i=1}^k r_i$, we obtain \begin{align*} \lim_{g \to \infty} \EE_{\rm Gen} (Y_{g,r_1,\dots,r_k}) & = \! \lim_{g \to \infty} \! \left( \frac{(m+1)^2}{2m+1} \right)^K \!\!\! \cdot \frac{(2m+1)^K}{2^K} \! \cdot \! \left( \frac{2g-2}{m} \right)^{2K} \!\!\! \cdot \! \frac{1}{g^{2K}} \cdot \! \prod_{i=1}^k \left( \pi(b_i^2-a_i^2) \right)^{r_i} \\ & = \prod_{i=1}^k \left( \left( \frac{m+1}{m} \right)^2 2\pi(b_i^2-a_i^2) \right)^{r_i} \end{align*} which proves \cref{thm:order_m}. For $\calH_g(m_1, m_2, \dots, m_{\ell(g)}, 1, \dots, 1)$ where $m_i \leq f(g)$, $i=1, \dots, \ell(g)$, we consider the following: We use Equation~\eqref{Eq:sc} to show that the volume proportion of translation surfaces with a saddle connection of length at most $\sfrac{B}{g}$ from a non-simple zero to any other zero is at most of the order of \begin{equation*} (f(g)+1)^2 \cdot \ell(g) \cdot g \cdot \left( \frac{B}{g} \right)^2 = O \left( \frac{f(g)^2 \cdot \ell(g)}{g} \right) . \end{equation*} Therefore, if $\frac{f(g)^2 \cdot \ell(g)}{g} \to 0$ for $g\to \infty$, the occurrence of these saddle connections is dominated by the occurrence of saddle connections from a simple zero to another simple zero that are used in the volume calculation in \cref{Prop:E}. Hence, the same reasoning as in the principal stratum works to obtain \cref{thm:slow-growing_exceptions}. \bibliographystyle{alpha} \bibliography{literature} \end{document}
2412.08884v1
http://arxiv.org/abs/2412.08884v1
Multiple partial rigidity rates in low complexity subshifts
\documentclass[reqno]{amsart} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{pgf,pgfarrows,pgfnodes,pgfautomata,pgfheaps,pgfshade,hyperref, amssymb} \usepackage{amssymb} \usepackage{enumitem} \usepackage[english]{babel} \usepackage[capitalize]{cleveref} \usepackage{mathtools,tikz} \usepackage[colorinlistoftodos]{todonotes} \usepackage{soul} \usepackage{tikz} \usepackage{xcolor} \hypersetup{ colorlinks, linkcolor={blue!30!black}, citecolor={green!50!black}, urlcolor={blue!80!black} } \usepackage{mathrsfs} \usepackage{dsfont} \newcommand{\supp}{\operatorname{supp}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newcounter{thmcounter} \renewcommand{\thethmcounter}{\Alph{thmcounter}} \newtheorem{thmintro}[thmcounter]{Theorem} \newcounter{introthmcounter} \renewcommand*{\theintrothmcounter}{\Alph{introthmcounter}} \newtheorem{Maintheorem}[introthmcounter]{Theorem} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{definition*}{Definition} \newtheorem{question}[theorem]{Question} \newtheorem*{question*}{Question} \newcounter{proofcount} \AtBeginEnvironment{proof}{\stepcounter{proofcount}} \newtheorem{claim}{Claim} \makeatletter \@addtoreset{claim}{proofcount}\makeatother \theoremstyle{remark} \newtheorem{problem}[theorem]{Problem} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{exercise}[theorem]{Exercise} \newtheorem*{remark*}{Remark} \newtheorem*{example*}{Example} \newcommand{\edit}[3]{\color{#1}{#3}\color{black}\marginpar{\textcolor{#1}{[[#2]]}}} \newcommand{\ale}[1]{\edit{red!60}{AM}{#1}} \newcommand{\seba}[1]{\edit{green!60!black}{SD}{#1}} \newcommand{\tristan}[1]{\edit{blue!60}{TR}{#1}} \newcommand{\tristanii}[1]{\edit{purple!60}{TR}{#1}} \newcommand{\sebat}[1]{\todo[color=green!50]{#1}} \newcommand{\tristant}[1]{\todo[color=blue!50]{#1}} \newcommand{\alet}[1]{\todo[color=red!50]{#1}} \def\R{{\mathbb R}} \def\Z{{\mathbb Z}} \def\H{{\mathbb H}} \def\C{{\mathbb C}} \def\N{{\mathbb N}} \def\G{{\mathbb G}} \def\S{{\mathbb S}} \def\F{{\mathbb F}} \def\K{{\mathbb K}} \def\T{{\mathbb T}} \def\cD{{\mathcal D}} \def\cH{{\mathcal H}} \def\cP{{\mathcal P}} \def\cF{{\mathcal F}} \def\cE{{\mathcal E}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}} \def\cA{{\mathcal A}} \def\cL{{\mathcal L}} \def\cT{{\mathcal T}} \def\cY{{\mathcal Y}} \def\cN{{\mathcal N}} \def\cM{{\mathcal M}} \def\cG{{\mathcal G}} \def\cK{{\mathcal K}} \def\cR{{\mathcal R}} \def\cS{{\mathcal S}} \def\cX{{\mathcal X}} \def\cW{{\mathcal W}} \def\ie{{i.e.}} \def\sT{{\mathscr T}} \def\sP{{\mathscr P}} \def\freq{{\rm freq}} \newcommand{\1}{\ensuremath{\mathds{1}}} \def\kh{{\mathfrak h}} \def \Q {{\bf Q}} \def \RP {{\bf RP}} \def \id {{\rm id}} \def \e {\epsilon} \def \ND {\operatorname{ND}_{\ell_2}} \def \NE {\operatorname{NE}} \def\dist{{\rm dist}} \title[Multiple partial rigidity rates in low complexity subshifts]{Multiple partial rigidity rates in low complexity subshifts} \author{Trist\'an Radi\'c} \address{Department of mathematics, Northwestern University, 2033 Sheridan Rd, Evanston, IL, United States of America} \email{[email protected]} \thanks{Northwestern University} \subjclass[2020]{Primary: 37A05; Secondary: 37B10,37B02} \keywords{partial rigidity, partial rigidity rate, S-adic subshifts} \begin{document} \date{\today} \maketitle \begin{abstract} Partial rigidity is a quantitative 
notion of recurrence and provides a global obstruction which prevents the system from being strongly mixing. A dynamical system $(X, \cX, \mu, T)$ is partially rigid if there is a constant $\delta >0$ and a sequence $(n_k)_{k \in \N}$ such that $\displaystyle \liminf_{k \to \infty } \mu(A \cap T^{n_k}A) \geq \delta \mu(A)$ for every $A \in \cX$, and the partial rigidity rate is the largest $\delta$ achieved over all sequences. For every integer $d \geq 1$, via an explicit construction, we prove the existence of a minimal subshift $(X,S)$ with $d$ ergodic measures having distinct partial rigidity rates. The systems built are $\cS$-adic subshifts of finite alphabetic rank that have non-superlinear word complexity and, in particular, have zero entropy. \end{abstract} \section{Introduction} For measure-preserving systems, partial rigidity quantitatively captures recurrence along a particular trajectory. Roughly speaking, this measurement ensures that at least a proportion $\delta \in (0,1]$ of any measurable set $A$ returns to $A$ along some sequence of iterates. The notion was introduced by Friedman \cite{Friedman_partial_mixing_rigidity_factors:1989} and defined formally by King \cite{King_joining-rank_finite_mixing:1988}. An important property of partially rigid systems is that, besides the trivial system, they are not strongly mixing. Although the converse does not hold, many common examples of non-mixing systems are partially rigid; see for example \cite{Dekking_Keane_mixing_substitutions:1978,Katok_interval_exchange_not_mixing:1980,Cortez_Durand_Host_Maass_continuous_measurable_eigen_LR:2003,Bezuglyi_Kwiatkowski_Medynets_Solomyak_Finite_rank_Bratteli:2013,Danilenko_finite_rank_rationalerg_partial_rigidity:2016,Creutz_mixing_minimal_comp:2023, Goodson_Ryzhikov_conj_joinings_producs_rank1:1997}. To be more precise, a measure-preserving system $(X, \cX, \mu, T)$ is \emph{partially rigid} if there exists $\delta > 0$ and an increasing sequence $(n_k)_{k \in \N}$ of integers such that \begin{equation} \label{eq p rigid} \liminf_{k \to \infty} \mu (A \cap T^{-n_k}A) \geq \delta \mu(A) \end{equation} for every measurable set $A$. A constant $\delta>0$ and a sequence $(n_k)_{k \in \N}$ satisfying \eqref{eq p rigid} are respectively called a \emph{constant of partial rigidity} and a \emph{partial rigidity sequence}. Once we know that a system is partially rigid, computing the largest value of $\delta$ provides valuable information on how strongly the system exhibits recurrent behavior. In particular, as was remarked by King in 1988 \cite[Proposition 1.13]{King_joining-rank_finite_mixing:1988}, this constant is invariant under measurable isomorphisms and increases under factor maps. We call this constant the \emph{partial rigidity rate}; we denote it by $\delta_{\mu}$ and it is given by \begin{equation*} \delta_{\mu} = \sup \{ \delta >0 \mid \delta \text{ is a partial rigidity constant for some sequence } (n_k)_{k \in \N} \}, \end{equation*} with the convention that $\delta_{\mu} = 0$ whenever the system is not partially rigid. There are only a limited number of partially rigid systems for which this constant is known. One major case is that of \emph{rigid systems}, that is, when $\delta_{\mu}=1$.
Such systems have been well studied since Furstenberg and Weiss introduced them in \cite{Furstenberg_Weiss77}, see for instance \cite{Bergelson_delJunco_Lemanczyk_Rosenblatt_rigidity_nonrecurrence:2014,Coronel_Maass_Shao_seq_entropy_rigid:2009,Donoso_Shao_uniform_rigid_models:2017,Fayad_Kanigowski_rigidity_wm_rotation:2015,Glasner_Maon_rigidity_topological:1989}. The only non-rigid examples for which the partial rigidity rates have been calculated are some specific substitution subshifts studied in \cite[Section 7]{donoso_maass_radic2023partial}. Since minimal substitution subshifts are uniquely ergodic, it is natural to ask whether it is possible to construct a minimal, low-complexity system with more than one ergodic measure and distinct partial rigidity rates. Via an explicit construction, we fully resolve this question. More precisely, we show \begin{theorem} \label{main thrm} For any natural number $d\geq 2$, there exists a minimal subshift with non-superlinear complexity that has $d$ distinct ergodic measures $\mu_0, \ldots, \mu_{d-1}$ for which the partial rigidity rates $0< \delta_{\mu_0} < \ldots < \delta_{\mu_{d-1}} < 1$ are also distinct. Moreover, the partial rigidity sequence $(n_k)_{k \in \N}$ associated to each $\delta_{\mu_i}$ is the same for all $i \in \{0,\ldots, d-1\}$. \end{theorem} Constructing measures all of which share the same partial rigidity sequence is a key aspect because, in general, an invariant measure can be partially rigid for two different sequences $(n_k)_{k \in \N}$ and $(n'_k)_{k \in \N}$ and have different partial rigidity constants $\delta$ and $\delta'$ for each sequence. For instance, in \cite[Theorem 7.1]{donoso_maass_radic2023partial} it is proven that for the Thue-Morse substitution subshift equipped with its unique invariant measure $\nu$, $\delta_{\nu} = 2/3$ and its associated partial rigidity sequence is $(3 \cdot 2^n)_{n \in \N}$. A similar proof shows that the largest constant of partial rigidity for the sequence $(2^n)_{n \in \N}$ is $1/3$. In contrast, the discrepancy between the values in \cref{main thrm} is not due to quantifying along a different trajectory, but rather to the fact that for each measure the returning mass takes a different value. The system constructed to prove \cref{main thrm} is an $\cS$-adic subshift, that is, a symbolic system formed as a limit of morphisms $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N}$ (see \cref{section prelimanries} for the precise definitions). We introduce a novel technique that allows us to build a minimal $\cS$-adic subshift with $d$ ergodic measures, where each ergodic measure ``behaves like'' a substitution subshift for which we already know its partial rigidity rate. The idea is that the measures of the cylinder sets ``closely approximate'' the values assigned by the unique invariant measure of the substitution subshift that it is ``imitating''. For the precise statement, see \cref{thrm gluing technique}. This gluing technique is of interest on its own, as it gives a general way of controlling distinct ergodic measures in certain specific $\cS$-adic subshifts. For each ergodic measure $\mu_i$, with $i \in \{0,\ldots,d-1\}$, the gluing technique gives us a lower bound for the partial rigidity rate (see \cref{cor delta smaler}). The lower bound corresponds to the partial rigidity rate associated to the uniquely ergodic system that the measure $\mu_i$ is ``imitating''. In \cref{section computation partial rigidity}, we restrict to a specific example in which that lower bound is achieved.
In that section, we prove that the number of morphisms needed for building the $\cS$-adic subshift can be reduced to three. Combining results from Sections \ref{section gluing technique} and \ref{section computation partial rigidity}, we complete the proof of \cref{main thrm}. An extended version of the theorem that includes the values of $\delta_{\mu_i}$ for $i \in \{0, \ldots,d-1\}$ and the partial rigidity sequence is stated in \cref{thrm final result}. \textbf{Acknowledgments.} The author thanks B. Kra for her careful reading and helpful suggestions on the earlier versions of this paper. He is also grateful to A. Maass and S. Donoso for their insights in the early stages of this project, and extends his thanks to F. Arbulu for providing valuable references. Special thanks to S. Petite, who, during the author's first visit to the UPJV in Amiens, asked whether an example with multiple partial rigidity rates, such as the one described in this paper, could be constructed. \section{Preliminaries and notation} \label{section prelimanries} \subsection{Topological and symbolic dynamical systems} In this paper, a {\em topological dynamical system} is a pair $(X,T)$, where $X$ is a compact metric space and $T \colon X \to X$ is a homeomorphism. We say that $(X,T)$ is {\em minimal} if for every $x \in X$ the orbit $\{T^n x: n\in \Z\}$ is dense in $X$. A continuous and onto map $\pi \colon X_1 \to X_2$ between two topological dynamical systems $(X_1, T_1)$ and $(X_2,T_2)$ is a \emph{factor map} if for every $x \in X_1$, $T_2 \circ \pi (x) = \pi \circ T_1 (x) $. We focus on a special family of topological dynamical systems: symbolic systems. To define them, let $A$ be a finite set that we call an {\em alphabet}. The elements of $A$ are called {\em letters}. For $\ell \in \N$, the set of concatenations of $\ell$ letters is denoted by $A^{\ell}$ and $w = w_1 \ldots w_{\ell} \in A^{\ell}$ is a {\em word} of length $\ell$. The length of a word $w$ is denoted by $|w|$. We set $A^* = \bigcup_{\ell \in \N} A^{\ell}$ and, by convention, $A^0 = \{ \varepsilon \}$ where $\varepsilon$ is the {\em empty word}. For a word $w = w_1 \ldots w_{\ell}$ and two integers $1 \leq i < j \leq \ell$, we write $w_{[i, j+1)} = w_{[i, j]} = w_i \ldots w_j$. We say that $u$ {\em appears} or {\em occurs} in $w $ if there is an index $ 1 \leq i \leq |w|$ such that $u=w_{[i,i+|u|)}$ and we denote this by $u \sqsubseteq w$. The index $i$ is an {\em occurrence} of $u$ in $w$ and $|w|_u$ denotes the number of (possibly overlapping) occurrences of $u$ in $w$. We also write $\freq(u,w) = \frac{|w|_u}{|w|}$, the \emph{frequency of} $u$ \emph{in} $w$. Let $A^{\Z}$ be the set of two-sided sequences $(x_n)_{n \in \Z}$, where $x_n \in A$ for all $n \in \Z$. Like for finite words, for $x \in A^{\Z}$ and $- \infty < i < j < \infty$ we write $x_{[i,j]}= x_{[i,j+1)}$ for the finite word given by $x_ix_{i+1} \ldots x_j$. The set $A^{\Z}$ endowed with the product topology is a compact and metrizable space. The {\em shift map} $S\colon A^{\Z} \to A^{\Z}$ is the homeomorphism defined by $S((x_n)_{n \in \Z})= (x_{n+1})_{n \in \Z}$. Notice that the collection of {\em cylinder sets} $\{ S^j[w] \colon w \in A^*, j \in \Z \}$, where $[w] = \{ x \in A^{\Z} \colon x_{[0, |w|) } = w\} $, is a basis of clopen subsets for the topology of $A^{\Z}$. A {\em subshift} is a topological dynamical system $(X,S)$, where $X$ is a closed and $S$-invariant subset of $A^{\Z}$.
In this case the topology is also given by cylinder sets, denoted $[w]_X = [w] \cap X$, but when there is no ambiguity we just write $[w]$. Given an element $x \in X$, the \emph{language} $\cL(x)$ is the set of all words appearing in $x$ and $\cL(X) = \bigcup_{x \in X} \cL(x)$. Notice that $[w]_X \neq \emptyset$ if and only if $w \in \cL(X)$. Also, $(X,S)$ is minimal if and only if $\cL(X)=\cL(x)$ for all $x \in X$. Let $A$ and $B$ be finite alphabets and $\sigma\colon A^* \to B^*$ be a \emph{morphism} for the concatenation, that is, $\sigma(uw) = \sigma(u)\sigma(w)$ for all $u,w \in A^*$. A morphism $\sigma\colon A^* \to B^*$ is completely determined by the values of $\sigma(a)$ for every letter $a \in A$. We only consider \emph{non-erasing} morphisms, that is, $\sigma(a) \neq \varepsilon$ for every $a \in A$, where $\varepsilon$ is the empty word in $B^*$. A morphism $\sigma \colon A^* \to A^*$ is called a \emph{substitution} if for every $a \in A$, $\displaystyle \lim_{n \to \infty} |\sigma^n(a)| = \infty$. A \emph{directive sequence} $\boldsymbol \sigma = (\sigma_n\colon A^*_{n+1} \to A^*_n )_{n \in \N}$ is a sequence of (non-erasing) morphisms. Given a directive sequence $\boldsymbol \sigma$ and $n \in \N$, define $\cL^{(n)}(\boldsymbol \sigma)$, the \emph{language of level} $n$ \emph{associated to} $\boldsymbol \sigma $, by \begin{equation*} \cL^{(n)}(\boldsymbol \sigma) = \{ w \in A_n^* : w \sqsubseteq \sigma_{[n,N)}(a) \text{ for some } a \in A_N \text{ and } N>n \} \end{equation*} where $\sigma_{[n,N)} = \sigma_n \circ \sigma_{n+1} \circ \ldots \circ \sigma_{N-1}$. For $n \in \N$, we define $X_{\boldsymbol \sigma}^{(n)}$, the $n$-\emph{th level subshift generated by} $\boldsymbol \sigma$, as the set of elements $x \in A_n^{\Z}$ such that $\cL(x) \subseteq \cL^{(n)}(\boldsymbol \sigma)$. For the special case $n=0$, we write $X_{\boldsymbol \sigma}$ instead of $X_{\boldsymbol \sigma}^{(0)}$ and we call it the $\cS$-\emph{adic subshift} generated by $\boldsymbol \sigma$. A morphism $\sigma \colon A^* \to B^*$ has a \emph{composition matrix} $M(\sigma) \in \N^{B \times A} $ given by $M(\sigma)_{b,a} = |\sigma(a)|_b$ for all $b \in B$ and $a \in A$. If $\tau \colon B^* \to C^*$ is another morphism, then $M(\tau \circ \sigma) = M (\tau) M(\sigma)$. Therefore, for a substitution $\sigma\colon A^* \to A^*$, $M(\sigma^2) = M(\sigma)^2$. We say that $\boldsymbol \sigma$ is {\em primitive} if for every $n \in \N$ there exists $k \geq 1$ such that the matrix $M (\sigma_{[n,n+k]}) = M(\sigma_n)M(\sigma_{n+1}) \cdots M( \sigma_{n+k})$ has only positive entries. When $\boldsymbol \sigma$ is primitive, for every $n \in \N$ the system $(X_{\boldsymbol \sigma}^{(n)},S)$ is minimal and $\cL(X^{(n)}_{\boldsymbol \sigma}) = \cL^{(n)}(\boldsymbol \sigma)$. When $\boldsymbol \sigma$ is the constant directive sequence $\sigma_n = \sigma$ for all $n \in \N$, where $\sigma \colon A^* \to A^*$ is a substitution, then $X_{\boldsymbol \sigma}$ is denoted $X_{\sigma}$ and it is called a \emph{substitution subshift}. Similarly, $\cL(\boldsymbol \sigma)$ is denoted $\cL(\sigma)$. Also, if in that context $\boldsymbol \sigma$ is primitive, we say that the substitution $\sigma$ itself is primitive, which is equivalent to saying that the composition matrix $M(\sigma)$ is primitive. We also say that the substitution $\sigma$ is positive if $M(\sigma)$ only has positive entries. By definition, every positive substitution is also primitive.
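The following elementary example (with an arbitrary choice of substitution) illustrates the previous notions. \begin{example*} Consider the substitution $\sigma \colon \{a,b\}^* \to \{a,b\}^*$ given by $\sigma(a)=ab$ and $\sigma(b)=ba$. Its composition matrix is $M(\sigma) = \left(\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right)$, which only has positive entries, so $\sigma$ is positive and hence primitive. Moreover, $M(\sigma^2) = M(\sigma)^2 = \left(\begin{smallmatrix} 2 & 2 \\ 2 & 2 \end{smallmatrix}\right)$, in accordance with the fact that $\sigma^2(a) = abba$ and $\sigma^2(b) = baab$ contain two occurrences of each letter. \end{example*}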
A morphism $\sigma\colon A^* \to B^*$ has constant length if there exists a number $\ell \geq 1$ such that $|\sigma(a)| = \ell$ for all $a \in A$. In this case, we write $| \sigma| = \ell$. More generally, a directive sequence $\boldsymbol \sigma = (\sigma_n\colon A^*_{n+1} \to A^*_n)_{n \in \N}$ is of \emph{constant-length} if each morphism $\sigma_n$ is of constant length. Notice that we do not require that $|\sigma_n| = |\sigma_m|$ for distinct $n,m\in \N$. We define the \emph{alphabet rank} $AR$ of $\boldsymbol \sigma = (\sigma_n\colon A^*_{n+1} \to A^*_n )_{n \in \N}$ as $\displaystyle AR(\boldsymbol \sigma) = \liminf_{n \to \infty} |A_n|$. Having finite alphabet rank has many consequences; for instance, if $AR(\boldsymbol \sigma) < \infty$ then $X_{\boldsymbol \sigma}$ has zero topological entropy. For a general subshift $(X, S)$, let $p_X \colon \N \to \N$ denote \emph{the word complexity function} of $X$ given by $p_X (n) = |\cL_n (X)|$ for all $n \in \N$. Here $\cL_n(X) = \{ w \in \cL(X) \colon |w|=n\}$. If $\displaystyle \liminf_{n \to \infty} \frac{p_X(n)}{n} = \infty$ we say that $X$ has \emph{superlinear complexity}. Otherwise we say $X$ has \emph{non-superlinear complexity}. We say that a primitive substitution $\tau \colon A^* \to A^*$ is \emph{right prolongable} (resp. \emph{left prolongable}) on $u \in A^*$ if $\tau(u)$ starts (resp. ends) with $u$. If, for every letter $a \in A$, $\tau \colon A^* \to A^*$ is left and right prolongable on $a$, then $\tau \colon A^* \to A^*$ is said to be \emph{prolongable}. A word $w=w_1 \ldots w_{\ell}\in A^*$ is \emph{complete} if $\ell \geq 2$ and $w_1 = w_{\ell}$. Notice that if a substitution $\tau \colon A^* \to A^*$ is primitive and prolongable, then $\tau(a)$ is a complete word for every $a \in A$. If $W$ is a set of words, then we denote by \begin{equation} \label{eq complete W} \cC W = \{w \in W \colon |w| \geq 2, w_1 = w_{|w|} \} \end{equation} the set of complete words in $W$. In particular, for $k \geq 2$, $\cC A^k$ is the set of complete words of length $k$ with letters in $A$; for example, $\cC\{a,b\}^3= \{aaa,aba,bab,bbb\}$. Finally, when the alphabet has two letters $\cA= \{a,b\}$, the \emph{complement} of a word $w = w_1 \ldots w_{\ell} \in \cA^*$, denoted $\overline{w}$, is given by $\overline{w}_1 \ldots \overline{w}_{\ell}$ where $\overline{a}= b$ and $\overline{b}=a$. A morphism $\tau \colon \cA^* \to \cA^*$ is said to be a mirror morphism if $\tau(\overline{w}) = \overline{\tau(w)}$ for every $w \in \cA^*$ (the name is taken from \cite[Chapter 8.2]{Queffelec1987} with a slight modification). \subsection{Invariant measures} \label{section invariant measures} A \emph{measure preserving system} is a tuple $(X,\mathcal{X},\mu,T)$, where $(X,\mathcal{X},\mu)$ is a probability space and $T\colon X\to X$ is a measurable and measure preserving transformation. That is, $T^{-1}A\in\mathcal{X}$ and $\mu(T^{-1}A)=\mu(A)$ for all $A\in \cX$, and we say that $\mu$ is $T$\emph{-invariant}. An invariant measure $\mu$ is said to be {\em ergodic} if whenever $A \subseteq X$ is measurable and $\mu(A\Delta T^{-1}A)=0$, then $\mu(A)=0$ or $1$. Given a topological dynamical system $(X,T)$, we denote by $\cM(X,T)$ (resp. $\cE(X,T)$) the set of Borel $T$-invariant probability measures (resp. the set of ergodic probability measures). For any topological dynamical system, $\cE(X,T)$ is nonempty and when $\cE(X,T) = \{ \mu\}$ the system is said to be {\em uniquely ergodic}.
If $(X,S)$ is a subshift over an alphabet $A$, then any invariant measure $\mu \in \cM(X,S)$ is uniquely determined by the values of $\mu([w]_X)$ for $w \in \cL(X)$. Since $X \subset A^{\Z}$, $\mu \in \cM(X,S)$ can be extended to $A^{\Z}$ by $\Tilde{\mu}( B) = \mu ( B \cap X) $ for all $B \subset A^{\Z} $ measurable. In particular, $\Tilde{\mu}([w]) = \mu ([w]_{X})$ for all $w \in A^*$. We use this extension many times, making a slight abuse of notation and not distinguishing between $\mu$ and $\Tilde{\mu}$. Moreover, for $w \in A^*$, since there is no ambiguity with the value of the cylinder set we write $\mu(w)$ instead of $\mu([w])$. This can also be done when we deal with two alphabets $A \subset B$: every invariant measure $\mu$ in $A^{\Z}$ can be extended to an invariant measure in $B^{\Z}$, where, in particular, $\mu(b) =0 $ for all $b \in B\backslash A$. A sequence of non-empty subsets of the integers, $\boldsymbol{\Phi}= (\Phi_n)_{n\in \N} $, is a F\o lner sequence if for all $t \in \Z$, $\displaystyle \lim_{n \to \infty} \frac{|\Phi_n \Delta (\Phi_n+t)|}{|\Phi_n |} = 0$. Let $(X,T)$ be a topological system and let $\mu$ be an invariant measure; an element $x \in X$ is said to be \emph{generic} along $\boldsymbol \Phi$ if for every continuous function $f \in C(X)$ \begin{equation*} \lim_{n \to \infty} \frac{1}{|\Phi_n| } \sum_{k \in \Phi_n} f(T^k x) = \int_X f d\mu. \end{equation*} Every point in a minimal system is generic for some F\o lner sequence $\boldsymbol \Phi$; more precisely: \begin{proposition} \label{prop furstenberg generic}\cite[Proposition 3.9]{Furstenbergbook:1981} Let $(X,T)$ be a minimal system and $\mu$ an ergodic measure. Then for every $x \in X$ there exist sequences $(m_n)_{n \in \N}, (m'_n)_{n \in \N} \subset \N$ with $m_n < m'_n$ for every $n \in \N$ and $\displaystyle \lim_{n \to \infty} m'_n - m_n = \infty$, such that $x$ is generic along $\boldsymbol \Phi = (\{m_n , \ldots, m'_n\})_{n \in \N}$. \end{proposition} In particular, for an $\cS$-adic subshift with primitive directive sequence $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N}$, when the infinite word $\boldsymbol w = \displaystyle \lim_{n \to \infty} \sigma_0 \circ \sigma_1 \circ \cdots \circ \sigma_{n-1}(a_n)$ is well-defined, then every invariant measure $\mu \in \cM(X_{\boldsymbol \sigma},S)$ is given by \begin{equation} \label{equation empiric measure} \mu(u) = \lim_{n \to \infty} \frac{|\boldsymbol{w}_{[m_n,m'_n]} |_u }{m'_n-m_n +1} = \lim_{n \to \infty} \freq(u,\boldsymbol{w}_{[m_n,m'_n]}) \quad \forall u \in \cL(X_{\boldsymbol \sigma}), \end{equation} for some $(m_n)_{n \in \N}, (m'_n)_{n \in \N} \subset \N$ as before. Notice that such an infinite word $\boldsymbol w$ is well-defined, for example, when $A_n = A$, $a_n = a$ and $\sigma_n \colon A^* \to A^*$ is prolongable, for all $n \in \N$, where $A$ and $a \in A$ are a fixed alphabet and letter respectively. These are the conditions fulfilled by the construction of the system announced in \cref{main thrm}. We remark that for a primitive substitution $\sigma \colon A^* \to A^*$, the substitution subshift $(X_{\sigma},S)$ is uniquely ergodic and the invariant measure is given by any limit of the form \eqref{equation empiric measure}.
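To illustrate \eqref{equation empiric measure}, we include an elementary computation for a (uniquely ergodic) substitution subshift; the substitution is chosen only for illustration. \begin{example*} Let $\sigma \colon \{a,b\}^* \to \{a,b\}^*$ be the primitive substitution given by $\sigma(a)=ab$ and $\sigma(b)=ba$. Since the image of each letter contains exactly one $a$ and one $b$, an induction gives $|\sigma^n(a)|_a = |\sigma^n(a)|_b = 2^{n-1}$ for all $n \geq 1$, so $\freq(a, \sigma^n(a)) = \freq(b, \sigma^n(a)) = \frac{1}{2}$. Since $\sigma(a)$ starts with $a$, the one-sided infinite word $\boldsymbol w = \displaystyle \lim_{n \to \infty} \sigma^n(a)$ is well-defined and, by the remark above, the unique invariant measure $\nu$ of $(X_{\sigma},S)$ satisfies $\nu(a) = \nu(b) = \frac{1}{2}$. \end{example*}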
\subsection{Partial rigidity rate for $\cS$-adic subshifts} Every $\cS$-adic subshift can be endowed with a natural sequence of Kakutani-Rokhlin partitions; see for instance \cite[Lemma 6.3]{Berthe_Steiner_Thuswaldner_Recognizability_morphism:2019}, \cite[Chapter 6]{Durand_Perrin_Dimension_groups_dynamical_systems:2022} or \cite[Section 5]{donoso_maass_radic2023partial}. To do this appropriately, one requires \emph{recognizability} of the directive sequence $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N} $, where we are using the term recognizable as defined in \cite{Berthe_Steiner_Thuswaldner_Recognizability_morphism:2019}. We do not define it here, but if every morphism $\sigma_n \colon A_{n+1}^* \to A_n^*$ is left-permutative, that is, the first letter of $\sigma_n(a)$ is distinct from the first letter of $\sigma_n(a')$ for all $a \neq a'$ in $A_{n+1}$, then the directive sequence is recognizable. In this case we say that the directive sequence $\boldsymbol \sigma$ itself is left-permutative. If $\tau \colon A^* \to A^*$ is prolongable, then it is left-permutative. Once we use the Kakutani-Rokhlin partition structure, $X^{(n)}_{\boldsymbol \sigma}$ can be identified as the induced system on the $n$-th base and, for every invariant measure $\mu'$ in $X^{(n)}_{\boldsymbol \sigma}$, there is an invariant measure $\mu$ in $X_{\boldsymbol \sigma}$ such that $\mu'$ is the induced measure of $\mu$ in $X^{(n)}_{\boldsymbol \sigma}$. We write $ \mu' = \mu^{(n)}$ and this correspondence is one-to-one. This is a crucial fact for computing the partial rigidity rate for an $\cS$-adic subshift; for instance, if $\boldsymbol \sigma$ is a directive sequence of constant-length, $\delta_{\mu} = \delta_{\mu^{(n)}}$ for all $\mu \in \cE(X_{\boldsymbol \sigma}, S)$ and $n \geq 1$ (see \cref{theorem constant length delta mu}). Since the aim of this paper is building a specific example, we give a way to characterize $\mu^{(n)}$ for a more restricted family of $\cS$-adic subshifts that allows us to carry out computations. In what follows, we restrict the analysis to less general directive sequences $\boldsymbol \sigma$. To do so, from now on, $\cA$ always denotes the two-letter alphabet $\{a,b\}$. Likewise, for $d \geq 2$, $\cA_i = \{a_i, b_i\}$ for $i \in \{0, \ldots, d-1\}$ and $ \Lambda_d= \bigcup_{i=0}^{d-1} \cA_{i}$. We cite a simplified version of \cite[Theorem 4.9]{bezuglyi_karpel_kwiatkowski2019exact}; the original proposition is stated for Bratteli-Vershik transformations, but under recognizability it can be stated for $\cS$-adic subshifts, see \cite[Theorem 6.5]{Berthe_Steiner_Thuswaldner_Recognizability_morphism:2019}. \begin{lemma} \label{lemma BKK} Let $\boldsymbol \sigma = (\sigma_n \colon \Lambda_d^* \to \Lambda_d^*)_{n \geq 1} $ be a recognizable, constant-length and primitive directive sequence, such that for all $i \in \{0, \ldots, d-1\}$, \begin{equation} \label{eqa} \lim_{n \to \infty}\frac{1}{|\sigma_n|} \sum_{j \neq i } |\sigma_n(a_i)|_{a_j} + |\sigma_n(a_i)|_{b_j} + |\sigma_n(b_i)|_{a_j} + |\sigma_n(b_i)|_{b_j} = 0 \end{equation} \begin{equation} \label{eqc} \sum_{n \geq 1} \left( 1- \min_{c \in \cA_i} \frac{1}{|\sigma_n|} \left( |\sigma_n(c)|_{a_i} + |\sigma_n(c)|_{b_i} \right) \right) < \infty \end{equation} \begin{equation} \label{eqd} \text{and } \quad \lim_{n \to \infty} \frac{1}{| \sigma_n|} \max_{c,c' \in \cA_i} \sum_{e \in \Lambda_d} | |\sigma_n(c)|_e - |\sigma_n(c')|_e | =0.
\end{equation} Then the system $(X_{\boldsymbol \sigma},S)$ has $d$ ergodic measures $\mu_0, \ldots, \mu_{d-1}$. Moreover, for $N \in \N$ sufficiently large, the measures $\mu^{(n)}_i$ are characterized by $\mu^{(n)}_i(a_i) + \mu^{(n)}_i (b_i) = \max \{ \mu' (a_i)+ \mu'(b_i) \colon \mu' \in \cM(X_{\boldsymbol \sigma}^{(n)},S) \}$ for all $n \geq N$. Also, for all $j \neq i$, $$ \lim_{n \to \infty} \mu_i^{(n)}(a_j) + \mu_i^{(n)}(b_j) = 0.$$ \end{lemma} Whenever $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N}$ is a constant-length directive sequence, we write $h^{(n)} = |\sigma_{[0,n)}|$, where we recall that $\sigma_{[0,n)} = \sigma_0 \circ \sigma_1 \circ \cdots \circ \sigma_{n-1}$. \begin{theorem} \cite[Theorem 7.1]{donoso_maass_radic2023partial} \label{theorem constant length delta mu} Let $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N}$ be a recognizable, constant-length and primitive directive sequence. Let $\mu$ be an $S$-invariant ergodic measure on $X_{\boldsymbol \sigma}$. Then \begin{equation} \label{eq Toeplitz delta mu} \delta_{\mu} = \lim_{n \to \infty } \sup_{k \geq 2} \left\{ \sum_{w \in \cC A^k_n} \mu^{(n)} (w) \right\}, \end{equation} where $\cC A^k_n$ is defined in \eqref{eq complete W}. Moreover, if $(k_n)_{n \in \N}$ is a sequence of integers (possibly constant), with $k_n \geq 2$ for all $n \in \N$, such that \begin{equation} \label{eq constant length p rig rates} \delta_{\mu} = \lim_{n \to \infty } \left\{ \sum_{w \in \cC A_n^{k_n }} \mu^{(n)} (w) \right\}, \end{equation} then the partial rigidity sequence is $((k_n-1) h^{(n)})_{n \in \N} $. \end{theorem} Another useful characterization of the invariant measures is given by explicit formulas between the invariant measures of $X_{\boldsymbol \sigma}^{(n)}$ and $X_{\boldsymbol \sigma}^{(n+1)}$. To do so we combine \cite[Proposition 1.1, Theorem 1.4]{bedaride_hilion_lusting_2023measureSadic} and \cite[Proposition 1.4]{bedaride_hilion_lusting_2022measureMonoid}. In the original statements one needs to normalize the measures to get a probability measure (see \cite[Proposition 1.3]{bedaride_hilion_lusting_2022measureMonoid}), but for constant-length morphisms the normalization constant is precisely the length of the morphism. Before stating the lemma, for $\sigma \colon A^* \to B^*$, $w \in A^*$ and $u \in B^*$, we define $\lfloor \sigma(w) \rfloor_u$, the number of \emph{essential occurrences of} $u$ \emph{in} $\sigma(w)$, that is, the number of occurrences of $u$ in $\sigma(w)$ for which the first letter of $u$ occurs in the image of the first letter of $w$ under $\sigma$, and the last letter of $u$ occurs in the image of the last letter of $w$ under $\sigma$. \begin{example*} Let $\sigma \colon \cA^* \to \cA^*$ be given by $\sigma(a)=abab$ and $\sigma(b)=babb$. Then $\sigma(ab)=ababbabb$ and $|\sigma(ab)|_{abb} =2 $ but $\lfloor \sigma(ab) \rfloor_{abb}=1$. \end{example*} \begin{lemma} \label{lemma directive sequence measure formula} Let $\boldsymbol \sigma = (\sigma_n \colon A_{n+1}^* \to A_n^*)_{n \in \N}$ be a recognizable, constant-length and primitive directive sequence and fix an arbitrary $n \in \N$. Then there is a bijection between $\cM (X_{\boldsymbol \sigma}^{(n)},S)$ and $\cM (X_{\boldsymbol \sigma}^{(n+1)},S)$.
Moreover, for every invariant measure $\mu' \in \cM (X_{\boldsymbol \sigma}^{(n)},S)$, there is an invariant measure $\mu \in \cM (X_{\boldsymbol \sigma}^{(n+1)},S)$ such that for all words $u \in A_n^*$, \begin{equation} \label{eq formula1} \mu'(u) = \frac{1}{|\sigma_n|} \sum_{w \in W(u)} \lfloor \sigma_n(w) \rfloor_{u} \cdot \mu (w), \end{equation} where $ \displaystyle W(u) = \left\{ w \in A_{n+1}^* \colon |w| \leq \frac{|u|-2}{|\sigma_n|} + 2 \right\}$. Finally, if $\mu$ is ergodic, then $\mu'$ is also ergodic. \end{lemma} \begin{corollary} Let $\boldsymbol \sigma = (\sigma_n \colon \Lambda_d^* \to \Lambda_d^*)_{n \in \N} $ be a recognizable, constant-length and primitive directive sequence that fulfills \eqref{eqa}, \eqref{eqc} and \eqref{eqd} from \cref{lemma BKK}. Let $\mu_0, \ldots, \mu_{d-1}$ denote the $d$ ergodic measures. Then, for $n\in \N$ sufficiently large, \begin{equation} \label{eq formula2} \mu^{(n)}_i(u) = \frac{1}{|\sigma_n|} \sum_{w \in W(u)} \lfloor \sigma_n(w) \rfloor_{u} \cdot \mu^{(n+1)}_i (w) \quad \forall u \in \Lambda_d^*. \end{equation} \end{corollary} \begin{proof} By the characterization given by \cref{lemma BKK} and using \eqref{eq formula1}, \begin{align*} \mu^{(n)}_i(a_i) &+ \mu^{(n)}_i(b_i) = \max \{ \nu (a_i) + \nu (b_i) \colon \nu \in \cM(X_{\boldsymbol \sigma}^{(n)},S) \} \\ &= \frac{1}{|\sigma_n|} \max\left\{ \sum_{c \in \Lambda_d} (| \sigma_n(c) |_{a_i} + | \sigma_n(c) |_{b_i}) \cdot \nu'(c) \mid \nu' \in \cM(X_{\boldsymbol \sigma}^{(n+1)},S) \right\}. \end{align*} Using \eqref{eqc}, for big enough $n \in \N$, the invariant measure $\nu'$ that maximizes this expression has to be the invariant measure that maximizes $\nu'(a_i)+\nu'(b_i)$, which is in fact $\mu^{(n+1)}_i$. \end{proof} \begin{remark} \label{rmk letters to letters} When $\phi \colon A^* \to B^*$ is a letter-to-letter morphism, that is, $|\phi(c)|=1$ for all $c \in A$, we have that $\phi$ induces a continuous map from $A^{\Z}$ to $B^{\Z}$ and that if $\mu$ is an invariant measure in $A^{\Z}$, then $ \mu' (w) = \displaystyle \sum_{u \in \phi^{-1}(w)} \mu (u)$, for $w \in B^*$, corresponds to the pushforward measure $\phi_* \mu$. \end{remark} \section{The gluing technique and lower bound for the partial rigidity rates} \label{section gluing technique} We recall that $\cA_i = \{a_i, b_i\}$ and $\Lambda_d = \bigcup_{i=0}^{d-1} \cA_i$. Let $\kappa \colon \Lambda^*_d \to \Lambda_d^*$ be the function such that, for every word of the form $ua_i$ (resp. $ub_i$) with $u\in \Lambda_d^*$, $\kappa(ua_i) = ua_{i+1}$ (resp. $\kappa(ub_i) = ub_{i+1}$), where the index $i \in \{0, \ldots,d-1\}$ is taken modulo $d$. For example, if $d=2$, $\kappa(a_0a_0) = a_0a_1 $, $\kappa(a_0b_0) = a_0b_1 $, $\kappa(a_0a_1) = a_0a_0 $ and $\kappa(a_0b_1) = a_0b_0 $. We highlight that the function $\kappa \colon \Lambda^*_d \to \Lambda_d^*$ is not a morphism. For a finite collection of substitutions $\{ \tau_i \colon \cA_i^* \to \cA_i^* \mid i =0, \ldots, d-1\}$ we call the morphism $ \sigma = \Gamma( \tau_0, \ldots, \tau_{d-1}) \colon \Lambda_d^* \to \Lambda_d^*$ given by \begin{align*} \sigma(a_i) &= \kappa(\tau_i(a_i)) \\ \sigma(b_i) &= \kappa(\tau_i(b_i)) \end{align*} for all $i \in \{0,\ldots,d-1\}$, the \emph{glued substitution}. This family of substitutions is the main ingredient for our construction.
\begin{example*} Let $d=2$, and let $\tau_0 \colon \cA_0^* \to \cA_0^*$ and $\tau_1 \colon \cA_1^* \to \cA_1^*$ be the substitutions given by \begin{equation*} \begin{array}{cccc} \tau_0(a_0)&= a_0b_0b_0a_0 & \tau_0(b_0)&= b_0a_0a_0b_0,\\ \tau_1(a_1)&= a_1b_1b_1b_1 & \tau_1(b_1)&= b_1a_1a_1a_1. \end{array} \end{equation*} Then $\sigma = \Gamma (\tau_0, \tau_1) \colon \Lambda_2^* \to \Lambda_2^*$ is given by \begin{equation*} \begin{array}{cccc} \sigma(a_0)&= a_0b_0b_0a_1 & \sigma(b_0)&= b_0a_0a_0b_1,\\ \sigma(a_1)&= a_1b_1b_1b_0 & \sigma(b_1)&= b_1a_1a_1a_0. \end{array} \end{equation*} \end{example*} \begin{lemma} \label{prop glued morphism} Let $\tau_i \colon \cA_i^* \to \cA_i^*$ for $i = 0, \ldots, d-1$ be a collection of positive and prolongable substitutions. Let $\boldsymbol \sigma = (\sigma_n \colon \Lambda_d^* \to \Lambda_d^*)_{n \in \N}$ be the directive sequence for which $\sigma_n = \Gamma (\tau^{n+1}_0, \ldots, \tau^{n+1}_{d-1})$, that is \begin{align*} \sigma_n(a_i) &= \kappa(\tau_i^{n+1}(a_i)) \\ \sigma_n(b_i) &= \kappa(\tau_i^{n+1}(b_i)) \end{align*} for all $i \in \{0, \ldots, d-1\}$. Then $\boldsymbol \sigma$ is primitive and left-permutative. \end{lemma} \begin{proof} Firstly, $\tau_0, \ldots, \tau_{d-1}$ are prolongable, in particular they are left-permutative and $\min\{|\tau_i(a_i)|,|\tau_i(b_i)|\} \geq 2$ for all $i \in \{0,\ldots,d-1\}$. Since the function $\kappa \colon \Lambda^*_d \to \Lambda^*_d$ does not change the first letter and every $\tau_i$ is defined over a different alphabet, the left-permutativity is preserved. Secondly, $M(\sigma_n)_{c,c'} = M(\tau_i^{n+1})_{c,c'} - \1_{c=c'}$ if $c,c'$ are in the same alphabet $\cA_i$, $M(\sigma_n)_{a_{i+1},a_i} = M(\sigma_n)_{b_{i+1},b_i} =1$ and $M(\sigma_n)_{c,c'} = 0$ otherwise. Notice that by positivity and prolongability, the sub-blocks $(M(\sigma_n)_{c,c'})_{c,c' \in \cA_i}$ are positive and therefore, for every $n \in \N$, $M(\sigma_{[n,n+d)})$ only has positive entries. \end{proof} \begin{theorem} \label{thrm gluing technique} Let $\tau_i \colon \cA_i^* \to \cA_i^*$ for $i = 0, \ldots, d-1$ be a collection of positive and prolongable substitutions. Suppose that every substitution $\tau_i$ is of constant length, with the same length for all $i$. Let $\boldsymbol \sigma = (\sigma_n \colon \Lambda_d^* \to \Lambda_d^*)_{n \in \N}$ be the directive sequence of glued substitutions $\sigma_n = \Gamma (\tau^{n+1}_0, \ldots, \tau^{n+1}_{d-1})$. Then the $\cS$-adic subshift $(X_{\boldsymbol \sigma},S)$ is minimal and has $d$ ergodic measures $\mu_0, \ldots, \mu_{d-1}$ such that for every $i \in \{0,\ldots,d-1\}$ \begin{align} \label{eq limit} \lim_{n \to \infty} \mu^{(n)}_i(w) = \nu_i(w) \quad \text{ for all } w \in \cA_i^* \end{align} where $\nu_i$ is the unique invariant measure of the substitution subshift given by $\tau_i$. \end{theorem} \begin{remark*} From \eqref{eq limit}, we get that $\displaystyle \lim_{n \to \infty} \mu^{(n)}_i(a_i) + \mu_i^{(n)}(b_i) = 1$ and therefore \\ $\displaystyle \lim_{n \to \infty} \mu^{(n)}_i(w) =0$ for all $w \not \in \cA_i^*$. \end{remark*} Before proving the theorem, we want to emphasize that this gluing technique can be easily generalized. Indeed, many of the hypotheses are not necessary, but we include them to simplify notation and computations. For instance, restricting the analysis to substitutions defined over two-letter alphabets is arbitrary. Also, the function $\kappa \colon \Lambda^*_d \to \Lambda_d^*$ could change more than one letter at the end of words.
Furthermore, with an appropriate control of the growth, the number of letters replaced could even increase with the levels. One fact that seems critical for the conclusion of \cref{thrm gluing technique} is that $\boldsymbol \sigma$ is a constant-length directive sequence and that $\frac{1}{|\sigma_n|}M(\sigma_n)_{c,c'}$, for two letters $c$ and $c'$ in distinct alphabets $\cA_i$, $\cA_j$, goes to zero when $n$ goes to infinity. \begin{proof} By \cref{prop glued morphism}, $(X_{\boldsymbol \sigma},S)$ is minimal. Let $|\tau_i|= \ell$, which is well defined because the substitutions $\tau_0, \ldots, \tau_{d-1}$ all have the same length. Then, for every $n \in \N$, $\sigma_n = \Gamma(\tau_0^{n+1},\ldots, \tau_{d-1}^{n+1})$ has constant length $\ell^{n+1}$. We need to prove that $(X_{\boldsymbol \sigma},S)$ has $d$ ergodic measures, and so we check the hypotheses of \cref{lemma BKK}, \begin{align*} &\lim_{n \to \infty}\frac{1}{|\sigma_n|} \sum_{j \neq i } |\sigma_n(a_i)|_{a_j} + |\sigma_n(a_i)|_{b_j} + |\sigma_n(b_i)|_{a_j} + |\sigma_n(b_i)|_{b_j} \\ &= \lim_{n \to \infty}\frac{1}{\ell^{n+1}} (|\sigma_n(a_i)|_{a_{i+1}} + |\sigma_n(b_i)|_{b_{i+1}}) = \lim_{n \to \infty}\frac{2}{\ell^{n+1}} = 0. \end{align*} This verifies \eqref{eqa}. Similarly for \eqref{eqc}, \begin{equation*} \sum_{n \geq 1} \left( 1- \frac{1}{\ell^{n+1}} (|\sigma_n(a_i)|_{a_i} + |\sigma_n(a_i)|_{b_i}) \right) = \sum_{n \geq 1} \left( 1- \frac{\ell^{n+1}-1}{\ell^{n+1}} \right) < \infty. \end{equation*} For \eqref{eqd}, notice that $|\sigma_n(a_i)|_{a_i} = |\tau_{i}^{n+1}(a_i)|_{a_i} -1$, therefore $\frac{1}{\ell^{n+1}} |\sigma_n(a_i)|_{a_i} = \freq (a_i, \tau_i^{n+1}(a_i)) - \frac{1}{\ell^{n+1}}$. Similarly for $|\sigma_n(a_i)|_{b_i}, |\sigma_n(b_i)|_{a_i}$ and $|\sigma_n(b_i)|_{b_i}$. Therefore \begin{align*} &\lim_{n \to \infty} \frac{1}{\ell^{n+1}} ||\sigma_n(a_i)|_{a_i} - |\sigma_n(b_i)|_{a_i} | \\ =& \lim_{n \to \infty} |\freq(a_i, \tau_i^{n+1}(a_i)) - \freq(a_i, \tau_i^{n+1} (b_i)) | = \nu_i(a_i) - \nu_i(a_i) =0. \end{align*} Likewise $\displaystyle \lim_{n \to \infty} \frac{1}{\ell^{n+1}} ||\sigma_n(a_i)|_{b_i} - |\sigma_n(b_i)|_{b_i} | = \nu_i(b_i) - \nu_i(b_i) = 0$. Thus, by \cref{lemma BKK}, there are $d$ ergodic measures $\mu_0, \ldots, \mu_{d-1}$, which are characterized by \begin{equation} \label{eq measure charact} \mu^{(n)}_i(a_i) + \mu^{(n)}_i (b_i) = \max \{ \mu' (a_i)+ \mu'(b_i) \colon \mu' \in \cM(X_{\boldsymbol \sigma}^{(n)},S) \} \end{equation} for sufficiently large $n \in \N$. The invariant measure that reaches the maximum in \eqref{eq measure charact} can be characterized as a limit as in \eqref{equation empiric measure}. Indeed, fix $n \in \N$ sufficiently large, $i \in \{0, \ldots, d-1\}$ and define the infinite one-sided word $\displaystyle \boldsymbol w^{(n)} = \lim_{k \to \infty} \sigma_{[n,n+k]} (a_i) = \lim_{k \to \infty} (\sigma_n \circ \cdots \circ \sigma_{n+k}) (a_i)$ and the number $N_k^{(n)}= |\sigma_{[n,n+k]} (a_i)|$ for every $k \in \N$. Let $\mu_n \in \cM(X_{\boldsymbol\sigma}^{(n)},S)$ be the measure given by \begin{equation*} \label{eq de mu_n} \mu_n(u) = \lim_{k \to \infty} \frac{1}{N^{(n)}_k} \left|\boldsymbol{w}^{(n)}_{[1,N^{(n)}_k]} \right|_u = \lim_{k \to \infty} \freq(u, \sigma_{[n,n+k]}(a_i)) \end{equation*} for all $u \in \Lambda_d^*$.
Notice that for any other F\o lner sequence of the form $(\{m_k, m_k+1, \ldots, m'_k\})_{k \in \N}$, $\displaystyle \lim_{k \to \infty} \frac{1}{m'_k-m_k} \left( \left|\boldsymbol{w}^{(n)}_{[m_k,m'_k)} \right|_{a_i} + \left|\boldsymbol{w}^{(n)}_{[m_k,m'_k)} \right|_{b_i} \right) \leq \mu_n(a_i) + \mu_n(b_i)$. Thus, if $\mu'$ is given by $\displaystyle \mu'(u) = \lim_{k \to \infty} \frac{1}{m'_k-m_k} \left|\boldsymbol{w}^{(n)}_{[m_k,m'_k)} \right|_{u} $, we get that $\mu'(a_i) + \mu'(b_i) \leq \mu_n(a_i) + \mu_n(b_i)$ and, since every invariant measure $\mu' \in \cM(X_{\boldsymbol \sigma}^{(n)},S)$ has this form, $\mu_n = \mu_i^{(n)}$ by \eqref{eq measure charact}. To prove \eqref{eq limit}, fix $w \in \cA_i^*$ and $n \in \N$ large enough, then \begin{align} \mu_i^{(n)}(w) &= \lim_{k \to \infty} \frac{|\sigma_{[n,n+k]}(a_i)|_w}{|\sigma_{[n,n+k]}(a_i)|} = \lim_{k \to \infty} \frac{|\sigma_{[n,n+k)} \circ \kappa (\tau_i^{n+k+1}(a_i))|_w}{|\sigma_{[n,n+k]}(a_i)|} \notag \\ &\geq \lim_{k \to \infty} \frac{1}{|\sigma_{[n,n+k]}(a_i)|} \left( |\sigma_{[n,n+k)}(\tau_i^{n+k+1}(a_i))|_w - 1 + |\sigma_{[n,n+k)} (a_{i+1})|_w \right) \notag \\ &\geq \lim_{k \to \infty} \frac{|\sigma_{[n,n+k)}(\tau_i^{n+k+1}(a_i))|_w }{|\sigma_{[n,n+k]}(a_i)|}, \label{ineq freq} \end{align} where in the last inequality we use that $|\sigma_{[n,n+k]}| = \ell^{n+1} \cdot \ell^{n+2}\cdots \ell^{n+k+1}$ and therefore $\frac{|\sigma_{[n,n+k)}|}{|\sigma_{[n,n+k]}|} = \frac{1}{\ell^{n+k+1}} \xrightarrow{k \to \infty} 0$. Notice that \begin{align*} |\sigma_{[n,n+k)}(\tau_i^{n+k+1}(a_i))|_w &\geq |\sigma_{[n,n+k)}(a_i)|_w |\tau_i^{n+k+1}(a_i)|_{a_i} \\&+ |\sigma_{[n,n+k)}(b_i)|_w |\tau_i^{n+k+1}(a_i)|_{b_i} \end{align*} and since $|\tau_i^{n+k+1}(a_i)|_{a_i} + |\tau_i^{n+k+1}(a_i)|_{b_i} = \ell^{n+k+1}$ there exists $\lambda \in (0,1)$ such that \begin{equation*} |\sigma_{[n,n+k)}(\tau_i^{n+k+1}(a_i))|_w \geq \ell^{n+k+1} \left( \lambda |\sigma_{[n,n+k)}(a_i)|_w + (1-\lambda) |\sigma_{[n,n+k)}(b_i)|_w \right). \end{equation*} Combining the previous inequality with \eqref{ineq freq} and supposing, without loss of generality, that $\displaystyle|\sigma_{[n,n+k)}(a_i)|_w = \min \{ |\sigma_{[n,n+k)}(a_i)|_w, |\sigma_{[n,n+k)}(b_i)|_w\}$, we get that $$ \mu_i^{(n)} (w) \geq \lim_{k \to \infty} \frac{ \ell^{n+k+1}}{|\sigma_{[n,n+k]}(a_i)|} |\sigma_{[n,n+k)}(a_i)|_w. $$ Now inductively \begin{align*} \mu_i^{(n)}(w) &\geq \lim_{k \to \infty} \frac{\ell^{n+2} \ell^{n+3} \cdots \ell^{n+k+1}} {|\sigma_{[n,n+k]}(a_i)|} |\tau_i^{n+1}(a_i)|_w = \frac{ |\tau_i^{n+1}(a_i)|_w }{\ell^{n+1}}, \end{align*} where in the last equality we use again that $|\sigma_{[n,n+k]}| = \ell^{n+1} \cdot \ell^{n+2}\cdots \ell^{n+k+1}$. We conclude that $ \displaystyle \mu_i^{(n)}(w) \geq \freq (w, \tau_i^{n+1}(a_i) )$, and then taking $n \to \infty$, \begin{equation} \label{ineq final} \lim_{n \to \infty} \mu_i^{(n)}(w) \geq \lim_{n \to \infty} \freq (w, \tau_i^n(a_i)) = \nu_i(w). \end{equation} Since $w \in \cA_i^*$ was arbitrary, \eqref{ineq final} holds for every word with letters in $\cA_i$. In particular, for every $k \geq 1$, $\displaystyle 1 = \sum_{u \in \cA_i^k} \nu_i(u) \leq \lim_{n \to\infty} \sum_{u \in \cA_i^k} \mu_i^{(n)}(u) \leq 1$, which implies that the inequality in \eqref{ineq final} is an equality for every word $w \in \cA_i^*$. \end{proof} In what follows, every system $(X_{\boldsymbol \sigma}, S)$ and every family of substitutions $\tau_i \colon \cA^*_i \to \cA^*_i$ for $i = 0, \ldots,d-1$ satisfy the assumptions of \cref{thrm gluing technique}.
\begin{corollary} $(X_{\boldsymbol \sigma},S)$ has non-superlinear complexity. \end{corollary} \begin{proof} This follows directly from \cite[Corollary 6.7]{Donoso_Durand_Maass_Petite_interplay_finite_rank_Sadic:2021}, where $\cS$-adic subshifts with finite alphabet rank and constant-length primitive directive sequences are shown to have non-superlinear complexity. \end{proof} \begin{corollary} \label{cor delta smaler} If $\mu_0, \ldots, \mu_{d-1}$ are the ergodic measures of $(X_{\boldsymbol \sigma},S)$, then \begin{equation} \label{eq lower bound delta} \delta_{\nu_i} \leq \delta_{\mu_i} \end{equation} for all $i \in \{0,\ldots,d-1\}$, where each $\nu_i$ is the unique invariant measure of $X_{\tau_i}$. \end{corollary} \begin{proof} By \cref{theorem constant length delta mu}, equation \eqref{eq constant length p rig rates}, there exists a sequence $(k_t)_{t \in \N}$ such that \begin{equation*} \delta_{\nu_i} = \lim_{t \to \infty} \sum_{w \in \cC \cA_i^{k_t}} \nu_i (w) \end{equation*} and, by \eqref{eq limit}, for every $t \in \N$ there exists $n_t$ such that \begin{equation*} \sum_{w \in \cC \cA_i^{k_t}} \mu_i^{(n)} (w) \geq \sum_{w \in \cC \cA_i^{k_t}} \nu_i (w) - \frac{1}{t} \quad \text{ for all } n \geq n_t. \end{equation*} Taking limits we have \begin{equation*} \delta_{\mu_i} \geq \lim_{t \to \infty} \left( \sum_{w \in \cC \cA_i^{k_t}} \nu_i (w) - \frac{1}{t} \right) = \delta_{\nu_i}. \qedhere \end{equation*} \end{proof} We finish this section with a case where the lower bound in \eqref{eq lower bound delta} is trivially achieved. For that, when we define a substitution $\tau \colon \cA^* \to \cA^*$ we abuse notation and write $\tau \colon \cA_i^* \to \cA_i^*$, by replacing the letters $a$ and $b$ by $a_i$ and $b_i$ respectively. Using that abuse of notation, for $i \neq j$, we say that $\tau \colon \cA_i^* \to \cA_i^*$ and $\tau \colon \cA_j^* \to \cA_j^*$ are the \emph{same substitution} even though they are defined over different alphabets. We write $\Gamma(\tau,d) \colon \Lambda_d^* \to \Lambda_d^*$ when we are gluing $d$ copies of the same substitution. In the next corollary we prove that if we glue the same substitution $d$ times, then the bound is achieved. \begin{corollary} \label{cor one substitution} Let $\tau \colon \cA^* \to \cA^*$ be a positive, prolongable and constant-length substitution. Let $\boldsymbol \sigma = (\sigma_n \colon \Lambda_d^* \to \Lambda_d^*)_{n \in \N}$ be the directive sequence of glued substitutions $\sigma_n = \Gamma (\tau^{n+1},d)$. Then $(X_{\boldsymbol \sigma},S)$ has $d$ ergodic measures with the same partial rigidity rate $\delta_{\nu}$, where $\nu$ denotes the unique invariant measure of the substitution subshift $(X_{\tau},S)$. \end{corollary} \begin{proof} The letter-to-letter morphism $\phi \colon \Lambda_d^* \to \cA^*$ given by $a_i \mapsto a$ and $b_i \mapsto b$ for all $i=0,\ldots,d-1$ induces a factor map from $X_{\boldsymbol \sigma}$ to $X_{\tau}$ and therefore $\delta_{\mu} \leq \delta_{\nu}$ for all $\mu \in \cE(X_{\boldsymbol \sigma}, S)$ (see \cite[Proposition 1.13]{King_joining-rank_finite_mixing:1988}). The opposite inequality is given by \cref{cor delta smaler}. \end{proof} \section{Computation of the partial rigidity rates} \label{section computation partial rigidity} \subsection{Decomposition of the directive sequence} We maintain the notation, using $\cA_i = \{a_i,b_i \} $ and $\Lambda_d = \bigcup_{i=0}^{d-1} \cA_i$, and we also fix $\cA_i' = \{a_i', b_i'\}$, $\Lambda_d' = \bigcup_{i=0}^{d-1} \cA_i \cup \cA_i'$.
In this section, $\tau_i \colon \cA^*_i \to \cA_i^*$ for $i = 0, \ldots, d-1$ is a collection of mirror substitutions satisfying the hypotheses of \cref{thrm gluing technique}, $\ell = |\tau_i|$ and $\boldsymbol \sigma = ( \Gamma(\tau_0^{n+1}, \ldots, \tau_{d-1}^{n+1}))_{n \in \N}$, that is \begin{align*} \sigma_n(a_i) &= \kappa(\tau_i^{n+1}(a_i)) \\ \sigma_n(b_i) &= \kappa(\tau_i^{n+1}(b_i)) \end{align*} for all $i \in \{0, \ldots,d-1\}$. We also write $\cE$ instead of $\cE(X_{\boldsymbol \sigma}, S)= \{\mu_0, \ldots, \mu_{d-1}\}$ for the set of ergodic measures. \begin{proposition} The directive sequence $\boldsymbol \sigma$ can be decomposed using $3$ morphisms in the following way: for every $n \in \N$, $\sigma_n = \phi \circ \rho^{n} \circ \psi$ where \begin{align*} \psi \colon \Lambda_d^* \to (\Lambda_d')^* & \quad a_i \mapsto u_i a_{i+1}' \\ & \quad b_i \mapsto v_i b_{i+1}'\\ \\ \rho \colon (\Lambda_d')^* \to (\Lambda_d')^* & \quad a_i \mapsto \tau_i(a_i) \quad a_i' \mapsto u_{i-1} a_i' \\ & \quad b_i \mapsto \tau_i (b_i) \quad b_i' \mapsto v_{i-1} b_i' \\ \\ \phi \colon (\Lambda_d')^* \to \Lambda_d^* & \quad a_i \mapsto a_i \quad a_i' \mapsto a_{i} \\ & \quad b_i \mapsto b_i \quad b_i' \mapsto b_{i}, \end{align*} with $u_i = \tau_i(a_i)_{[1,\ell)}$, $v_i = \tau_i(b_i)_{[1,\ell)}$, and where the indices are taken modulo $d$. \end{proposition} \begin{proof} Fix $i \in \{0,\ldots,d-1\}$. Consider first that for every $n \geq 1$, $\rho^n(a_{i+1}') = \rho^{n-1}(u_i)\rho^{n-1}(a_{i+1}')= \tau_i^{n-1}(u_i)\rho^{n-1}(a_{i+1}')$, therefore by induction $$\rho^n(a_{i+1}') = \tau_i^{n-1}(u_i)\tau_i^{n-2}(u_{i}) \cdots \tau_i(u_i)u_ia_{i+1}' .$$ Since, by assumption, the last letter of $\tau_i(a_i)$ is $a_i$, one gets that $\tau_i^{n-1}(u_i)\tau_i^{n-2}(u_{i}) $ $ \cdots \tau_i(u_i)u_i = \tau_i^{n}(a_i)_{[1,\ell^n)}$ and then $\rho^n(a_{i+1}') = \tau_i^{n}(a_i)_{[1,\ell^n)} a_{i+1}'$. Also, we notice that $\psi(a_i) = \rho(a_{i+1}')$ and therefore $\rho^n \circ \psi(a_i) = \rho^{n+1}(a_{i+1}') = \tau_i^{n+1}(a_i)_{[1,\ell^{n+1})} a_{i+1}' $. Finally, $\displaystyle \phi \circ \rho^n \circ \psi(a_i) = \phi( \tau_i^{n+1}(a_i)_{[1,\ell^{n+1})}) \phi(a_{i+1}') = \tau_i^{n+1}(a_i)_{[1,\ell^{n+1})} a_{i+1} = \kappa(\tau_i^{n+1}(a_i))= \sigma_n(a_i) .$ We conclude by noticing that the same proof works for $b_i$. \end{proof} With this decomposition, we make an abuse of notation and define a directive sequence $\boldsymbol \sigma '$ over an index set $Q$ different from $\N$. Setting $\displaystyle Q = \{0\} \cup \bigcup_{n \geq 1} \left\{ n + \frac{m}{n+2}: m = 0, \ldots, n+1 \right\} $, we define the directive sequence $\boldsymbol \sigma' $ indexed by $Q$ given by \begin{equation*} \sigma'_q = \begin{cases} \begin{array}{cc} \phi & \text{ if } q=n \\ \rho & \text{ if } q=n + m/(n+2) \text{ for } m=1, \ldots, n \\ \psi & \text{ if } q=n + (n+1)/(n+2) \end{array} \end{cases} \end{equation*} for all $n \geq 1$. We use this abuse of notation in order to get $X^{(n)}_{\boldsymbol \sigma} = X^{(n)}_{\boldsymbol \sigma'}$ for every positive integer $n$, and therefore we maintain the notation for $\mu^{(n)}_i$. The advantage of decomposing the directive sequence is that every morphism in $\boldsymbol \sigma'$ has constant length, either $\ell$ in the case of $\psi$ and $\rho$ or $1$ in the case of $\phi$. This simplifies the study of the complete words at each level.
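To fix ideas, we illustrate the decomposition on a concrete (and arbitrary) choice of substitutions. \begin{example*} Let $d = 2$ and let $\tau_0$ and $\tau_1$ be copies of the same positive, prolongable and mirror substitution of length $\ell = 4$, namely $\tau_i(a_i) = a_ib_ib_ia_i$ and $\tau_i(b_i) = b_ia_ia_ib_i$ for $i = 0,1$. Then $u_i = a_ib_ib_i$, $v_i = b_ia_ia_i$, and in particular $\psi(a_0) = a_0b_0b_0a_1'$, $\rho(a_1') = a_0b_0b_0a_1'$ and $\phi(a_1') = a_1$. For instance, $\phi \circ \psi (a_0) = a_0b_0b_0a_1 = \kappa(\tau_0(a_0)) = \sigma_0(a_0)$ and $\phi \circ \rho \circ \psi(a_0) = \phi\left(\tau_0(a_0)\tau_0(b_0)\tau_0(b_0)\, a_0b_0b_0\, a_1'\right) = \tau_0^2(a_0)_{[1,16)}\, a_1 = \kappa(\tau_0^2(a_0)) = \sigma_1(a_0)$, in accordance with the proposition above. \end{example*}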
Notice that the morphisms $\phi$, $\rho$ and $\psi$ are not positive; otherwise the $\cS$-adic subshift would automatically be uniquely ergodic (see \cite{Durand2000}), which, as we show in \cref{thrm gluing technique}, is not the case. \subsection{Recurrence formulas for complete words} The formulas in this section are analogous to those presented in \cite[Lemma 7.7]{donoso_maass_radic2023partial}, and aside from technicalities, the proofs are not so different. We define four sets of words that are useful in what follows, \begin{align} C_k^i&= \{ w \in \Lambda_d^k \colon w_1,w_k \in \cA_i \cup \cA_{i+1}', w_1 = w_k\} \label{equation C}\\ D_k^i&= \{ w \in (\Lambda_d')^k \colon w_1,w_k \in \cA_i \cup \cA_{i+1}', \eta(w_1) = \eta(w_k)\} \label{equation D}\\ \overline{C}_k^i&= \{ w \in \Lambda_d^k \colon w_1,w_k \in \cA_i \cup \cA_{i+1}', w_1 = \overline{w_k} \} \\ \overline{D}_k^i&= \{ w \in (\Lambda_d')^k \colon w_1,w_k \in \cA_i \cup \cA_{i+1}', \eta(w_1) = \overline{\eta(w_k)}\} \label{equation D bar} \end{align} where $\eta \colon \Lambda_{d}' \to \Lambda_{d}$ is the letter-to-letter function for which $a_i \mapsto a_i$, $b_i \mapsto b_i$, $a_{i+1}' \mapsto a_{i}$ and $b_{i+1}' \mapsto b_i$. For instance, if $w \in D_k^i$ and $w_1 = a_i$, then $w_k \in \{a_i, a_{i+1}'\}$. To simplify the notation, we enumerate the index set $Q = \{q_m \colon m \in \N\}$ where $q_{m} < q_{m+1}$ for all $m \in \N$. We continue using the abuse of notation $\mu(w) = \mu([w])$ and, for a set of words $W$, $\displaystyle \mu(W) = \mu \left(\bigcup_{w \in W} [w]\right)$. For $i \in \{0, \ldots, d-1\}$, fix the word $v= \tau_i(a_i)$ and define $\delta_{j,j'}^{i} = \1_{v_j = v_{j'}}$ for $j, j' \in \{1,\ldots, \ell\}$, where $\ell = |v|$. Notice that if one defines $\delta_{j,j'}^{i}$ with the word $\tau_i(b_i)$ instead of $\tau_i(a_i)$, by the mirror property, the value remains the same. Now, for $j \in \{ 1, \ldots, \ell\}$, we define \begin{equation*} r_j^{i} = \sum^{j}_{j'=1} \delta_{\ell-j + j', j'}^i \quad \text{ and } \quad \Tilde{r}_j^{i} = \sum^{\ell-j}_{j'=1} \delta_{j', j+j'}^i. \end{equation*} \begin{lemma} \label{lemma complete rho} If $\boldsymbol \sigma' = (\sigma'_q)_{q \in Q}$ and $\mu \in \cE$, then for every $n \in \N$, and every $q_m = n + \frac{m'}{n+2}$ for $m' \in \{1, \ldots, n\}$, \begin{align*} \ell \cdot \mu^{(q_m)} (D^i_{\ell k + j }) = & r^i_j \cdot \mu^{(q_{m+1})} (D^i_{k+2}) + \Tilde{r}^i_j \cdot \mu^{(q_{m+1})} (D^i_{k+1}) \\ &+ (j -r^i_j) \mu^{(q_{m+1})} (\overline{D}^i_{k+2}) + (\ell-j-\Tilde{r}^i_j) \mu^{(q_{m+1})} (\overline{D}^i_{k+1}) \\ \\ \ell \cdot \mu^{(q_m)} (\overline{D}^i_{\ell k + j }) = & (j - r^i_j) \mu^{(q_{m+1})} (D^i_{k+2}) + (\ell-j- \Tilde{r}^i_j) \mu^{(q_{m+1})} (D^i_{k+1}) \\ &+ r^i_j \cdot \mu^{(q_{m+1})} (\overline{D}^i_{k+2}) + \Tilde{r}^i_j \cdot \mu^{(q_{m+1})} (\overline{D}^i_{k+1}) \end{align*} for $j \in \{1, \ldots, \ell\}$, where the set $D^i_k$ was defined in \eqref{equation D}. \end{lemma} \begin{proof} Notice that in this case $\sigma'_{q_m} = \rho $. If $w \in \cL(X^{(q_m)}_{\boldsymbol{\sigma'}})$ is such that $w_1 \in \cA_i \cup \cA_{i+1}'$, then $w \sqsubseteq \rho(u)$, where $u \in \cL(X^{(q_{m+1})}_{\boldsymbol{\sigma'}})$ and $u_1 \in \cA_i \cup \cA_{i+1}'$. This is equivalent to the condition $\eta(u_1) \in \cA_i$.
Since $\eta(\rho(a_i)) =\eta(\rho(a_{i+1}')) = \tau_i(a_i)$ and $\eta(\rho(b_i)) = \eta(\rho(b_{i+1}')) = \tau_i(b_i)$, for $u \in \cL(X^{(q_{m+1})}_{\boldsymbol{\sigma'}})$ satisfying $\eta(u_1) \in \cA_i$, we deduce that if $|u|=k+2$ with $\eta(u_1) = \eta(u_{k+2})$, then \begin{equation*} r^i_j = \sum_{j'=1}^j\1_{\eta(\rho(u_1)_{\ell -j +j'}) = \eta(\rho(u_{k+2})_{j'}) } \end{equation*} and when we consider $\eta(u_1) = \overline{\eta(u_{k+2})}$, $\displaystyle j - r^i_j = \sum_{j'=1}^j \1_{\eta(\rho(\overline{u}_1)_{\ell -j +j'}) = \eta(\rho(u_{k+2})_{j'}) }$. If $|u|=k+1$ with $\eta(u_1) = \eta(u_{k+1})$, \begin{equation*} \Tilde{r}^i_j = \sum_{j'=1}^{\ell-j} \1_{\eta(\rho(u_1)_{j'}) = \eta(\rho(u_{k+1})_{j+j'}) } \end{equation*} and when we consider $\eta(u_1) = \overline{\eta(u_{k+1})}$, $\displaystyle \ell - j - \Tilde{r}^i_j = \sum_{j'=1}^{\ell-j} \1_{\eta(\rho(\overline{u}_1)_{j'}) = \eta(\rho(u_{k+1})_{j+j'}) }$. Thus, the first equality of the lemma is a direct consequence of \eqref{eq formula2} and the second equality is completely analogous. \end{proof} \begin{lemma} \label{lemma complete psi} If $\boldsymbol \sigma' = (\sigma'_q)_{q \in Q}$ and $\mu \in \cE$, then for every $n \in \N$ and $q_m = n + \frac{n+1}{n+2}$, we get \begin{align*} \ell \cdot \mu^{(q_m)} (D^i_{\ell k + j }) = & r^i_j \cdot \mu^{(q_{m+1})} (C^i_{k+2}) + \Tilde{r}^i_j \cdot \mu^{(q_{m+1})} (C^i_{k+1}) \\ &+ (j -r^i_j) \mu^{(q_{m+1})} (\overline{C}^i_{k+2}) + (\ell-j-\Tilde{r}^i_j) \mu^{(q_{m+1})} (\overline{C}^i_{k+1}) \\ \\ \ell \cdot \mu^{(q_m)} (\overline{D}^i_{\ell k + j }) = & (j - r^i_j) \mu^{(q_{m+1})} (C^i_{k+2}) + (\ell-j- \Tilde{r}^i_j) \mu^{(q_{m+1})} (C^i_{k+1}) \\ &+ r^i_j \cdot \mu^{(q_{m+1})} (\overline{C}^i_{k+2}) + \Tilde{r}^i_j \cdot \mu^{(q_{m+1})} (\overline{C}^i_{k+1}) \end{align*} for $j \in \{1, \ldots, \ell\}$. \end{lemma} \begin{proof} Noting that $\sigma'_{q_m} = \psi $ and that $\psi(a_i)=\rho(a_{i+1}')$ for all $i \in \{0, \ldots, d-1\}$, one can repeat the steps of the proof of \cref{lemma complete rho} and deduce the formula. \end{proof} \begin{lemma} \label{lemma complete phi} If $\boldsymbol \sigma' = (\sigma'_q)_{q \in Q}$ and $\mu \in \cE$, then for every $q_m = n \in \N$, \begin{align} \mu^{(n)} (C^i_{k}) &\leq \mu^{(q_{m+1})} (D^i_{k}) + \frac{2}{\ell^{n+1}} \label{ineq C_k}\\ \mu^{(n)} (\overline{C}^i_{k}) &\leq \mu^{(q_{m+1})} (\overline{D}^i_{k}) + \frac{2}{\ell^{n+1}} \label{ineq over C_k} \end{align} \end{lemma} \begin{proof} Notice that $\sigma'_{n} = \phi $ is letter-to-letter, so by \cref{rmk letters to letters} \begin{equation*} \mu^{(n)} (w) = \sum_{u \in \phi^{-1}(w)} \mu^{(q_{m+1})} (u). \end{equation*} The set $\phi^{-1}(C_k^i)$ is contained in $U \cup U'$, where $U$ is the set of complete words $u$ of length $k$ with first letter in $\cA_i$ and $U'$ is the set of words $u$ of length $k$ with first or last letter in $\cA_i'$. With that, \begin{align*} \mu^{(n)} (C_k^i) \leq& \mu^{(q_{m+1})} (U) + \mu^{(q_{m+1})} (U') \\ \leq & \mu^{(q_{m+1})}(D^i_k) + 2( \mu^{(q_{m+1})}(a_i') + \mu^{(q_{m+1})}(b_i')) \leq \mu^{(q_{m+1})}(D^i_k) + \frac{2}{\ell^{n+1}}, \end{align*} where the last inequality uses that, by induction, $ \mu^{(q_{m+1})}(a_i') = \frac{1}{\ell^{n+1}} \mu^{(n+1)}(a_{i-1}) \leq \frac{1}{2 \ell^{n+1}}$. Likewise, $ \mu^{(q_{m+1})}(b_i') \leq \frac{1}{2 \ell^{n+1}}$. Inequality \eqref{ineq over C_k} follows from the same reasoning.
\end{proof} \subsection{Upper bounds} Recall the definition of $C^i_k$, $D^i_k$, $\overline{C}^i_k$ and $\overline{D}^i_k$ given by the equations \eqref{equation C} to \eqref{equation D bar}. \begin{lemma} \label{lemma i constant length bound} For every $\mu \in \cE$, $n \in \N$ and $k \geq 2$, \begin{equation} \label{ineq max all levels} \mu^{(n)} (C^i_{k}) \leq \max_{\substack{k' =2, \ldots, \ell \\ q \in Q, q\geq n} } \{ \mu^{(q)} (D^i_{k'}) , \mu^{(q)} (\overline{D}^i_{k'}) \} + \frac{\ell }{\ell -1 }\frac{2}{\ell^{n+1}}. \end{equation} \end{lemma} \begin{remark*} Following the discussion in \cref{section invariant measures}, in the right-hand side of \eqref{ineq max all levels}, if $q$ is an integer, then $\mu^{(q)}$ is supported in $\Lambda_d^{\Z}$ and therefore it can be studied as a measure in $(\Lambda_d')^{\Z}$. In that context, $\mu^{(q)}(D^i_{k'}) = \mu^{(q)}(C^i_{k'}) $ and $\mu^{(q)}(\overline{D}^i_{k'}) = \mu^{(q)}(\overline{C}^i_{k'}) $, because $\mu^{(q)}(w) = 0$ whenever $w$ contains a letter in $\Lambda_d' \backslash \Lambda_d$. \end{remark*} \begin{proof} Combining Lemmas \ref{lemma complete rho} and \ref{lemma complete psi}, we deduce that for $q_m \in Q \backslash \N$, $\mu^{(q_m)} (D^i_{\ell k + j })$ and $\mu^{(q_m)} (\overline{D}^i_{\ell k + j })$ are convex combinations of $\mu^{(q_{m+1})} (D^i_{k + s })$ and $\mu^{(q_{m+1})} (\overline{D}^i_{k + s})$ for $s=1,2$. Therefore, if $q_m \in Q \backslash \N$, \begin{equation*} \mu^{(q_m)} (D^i_{\ell k + j }) \leq \max_{s=1,2}\{ \mu^{(q_{m+1})} (D^i_{k + s }), \mu^{(q_{m+1})} (\overline{D}^i_{k + s})\} \end{equation*} and the same bound holds for $\mu^{(q_m)} (\overline{D}^i_{\ell k + j })$. Likewise, using \cref{lemma complete phi} for $q_m \in\N$, \begin{align*} \mu^{(q_m)} (D^i_{k}) & \leq \mu^{(q_{m+1})} (D^i_{k }) + \frac{2}{\ell^{n+1}} \\ \mu^{(q_m)} (\overline{D}^i_{k}) &\leq \mu^{(q_{m+1})} (\overline{D}^i_{k }) + \frac{2}{\ell^{n+1}}. \end{align*} Notice that for $2 \leq k \leq \ell$ the statement is trivial. Thus, fix $k > \ell $; there exist an integer $k_1 \in \N$ and $m_1 \in \{1, \ldots, \ell\}$ such that $k = \ell \cdot k_1 + m_1 $. Now, take $q_m = n \in \N$; then, by the previous inequalities, \begin{align*} \mu^{(n)} (C^i_{k}) & \leq \mu^{(q_{m+1})} (D^i_{k}) + \frac{2}{\ell^{n+1}} \\ \mu^{(q_{m+1})} (D^i_{k}) & \leq \max_{s=1,2}\{ \mu^{(q_{m+2})} (D^i_{k_1 + s }), \mu^{(q_{m+2})} (\overline{D}^i_{k_1 + s})\}. \end{align*} If $k_1 \in \{1, \ldots, \ell -2\}$, we are done. If $k_1 = \ell -1$, we need to control the values indexed by $k_1+2 = \ell +1$, but for that we need to iterate the argument one more time. Otherwise, that is if $k_1 \geq \ell $, we can find $k_2 \geq 1$ and $m_2 \in \{1, \ldots, \ell\}$ such that $k_1 + 1 = \ell k_2 + m_2$ (similarly for $k_1 + 2 = \ell k_2 + m_2 +1$ or, if $m_2 = \ell$, $k_1 + 2 = \ell (k_2+1) + 1$). With that decomposition one can bound the right-hand side of the second inequality by $\displaystyle \max_{s = 1, 2, 3} \{ \mu^{(q_{m+3})} (D^i_{k_2 + s}), \mu^{(q_{m+3})} (\overline{D}^i_{k_2 + s}) \}$. Consider the sequences $(k_t)_{t \in \N}$ and $(m_t)_{t \geq 1}$, with $k_t \geq 0$ and $m_t \in \{1,\ldots, \ell \}$, defined as follows: $k_0 = k = \ell k_1 + m_1$ and, inductively, $k_t = \ell (k_{t+1} + t) + m_t $. Then eventually $k_t = 0$ for some $t \in \N$. With that, one can iterate the previous argument finitely many times and express everything in terms of values $k' \in \{2, \ldots, \ell \}$ only.
The only problem is when $n \leq \overline{n} = q_{m+t} \in \N$; in that case, we are forced to add the term $ 2/ \ell^{\overline{n}+1}$. So we get \begin{equation*} \mu^{(n)} (C^i_{k}) \leq \max_{\substack{k' =2, \ldots, \ell \\ q \in Q, n \leq q < N} } \{ \mu^{(q)} (D^i_{k'}) , \mu^{(q)} (\overline{D}^i_{k'}) \} + \frac{2}{\ell^{n+1}} + \frac{2}{\ell^{n+2}} + \cdots + \frac{2}{\ell^{N}} \end{equation*} for some $N \geq n$, but that value is bounded by $$\max_{\substack{k' =2, \ldots, \ell \\ q \in Q, q \geq n} } \{ \mu^{(q)} (D^i_{k'}) , \mu^{(q)} (\overline{D}^i_{k'}) \} + \sum_{s \geq 1} \frac{2}{\ell^{n+s}}, $$ which finishes the proof. \vspace{-0.5em} \end{proof} \begin{proposition} \label{thrm combination bound max} For every $i \in \{0, \ldots, d-1\}$, \begin{equation*} \delta_{\mu_i} \leq \max_{k=2, \ldots, \ell } \left\{ \sum_{ w \in \cC \cA_i^k} \nu_i ( w) ,\sum_{w \in \overline{\cC} \cA_i^k} \nu_i (w) \right\} \end{equation*} where the notation $\cC \cA_i^k$ is introduced in \eqref{eq complete W} and $\overline{\cC}\cA^k_i$ is the set of words $w \in \cA_i^*$ of length $k$ such that $w_1 = \overline{w}_k$. \end{proposition} \begin{proof} First notice that, for every possibly constant sequence of integers $(k_t)_{t \in \N}$ with $k_t \geq 2$ for all $t \in \N$, \begin{align*} \lim_{t \to \infty} \sum_{w \in \cC \Lambda_d^{k_t}} \mu_i^{(t)} (w) &= \lim_{t \to \infty} \sum_{w \in \cC \Lambda_d^{k_t}, w_1 \in \cA_i} \mu_i^{(t)} (w) + \lim_{t \to \infty} \sum_{w \in \cC \Lambda_d^{k_t}, w_1 \not \in \cA_i} \mu_i^{(t)} (w) \\ &\leq \lim_{t \to \infty} \mu_i^{(t)} (C_{k_t}^i) + \lim_{t \to \infty} \sum_{c \in \Lambda_d \backslash \cA_i} \mu_i^{(t)} (c) = \lim_{t \to \infty} \mu_i^{(t)} (C_{k_t}^i). \end{align*} Therefore, by \cref{theorem constant length delta mu}, we get that there exists a possibly constant sequence of integers $(k_t)_{t \in \N}$ with $k_t \geq 2$ such that \begin{align*} \delta_{\mu_i} &= \lim_{t \to \infty} \sum_{w \in \cC \Lambda_d^{k_t}} \mu_i^{(t)} (w) \leq \lim_{t \to \infty} \mu_i^{(t)} (C_{k_t}^i) \leq \lim_{t \to \infty} \max_{\substack{k' =2, \ldots, \ell \\ q \in Q, q\geq t} } \{ \mu^{(q)} (D^i_{k'}) , \mu^{(q)} (\overline{D}^i_{k'}) \}, \end{align*} where the last inequality is a consequence of \eqref{ineq max all levels}. Thus, we only have to control the values of $\mu^{(q)}(D^i_k)$ and $\mu^{(q)}(\overline{D}^i_k)$ for $k \in \{2, \ldots, \ell\}$ and large $q \in Q$. This is already controlled when $q$ is an integer because \cref{thrm gluing technique} implies that for every $\varepsilon>0$ there exists $N\geq 1$ such that, for every $n \geq N$, $\mu_i^{(n)}(w) \leq \nu_i(w) + \varepsilon$ for every word $w \in \cA^*_i$ with $|w|\leq \ell$, and $\mu_i^{(n)}(w) \leq \frac{\varepsilon}{2}$ for every word $w \not \in \cA_i^*$ with $|w|\leq \ell$. Now, fix $q = n_1 + \frac{m'}{n_1 + 2} \not \in \N$ with $n_1 \geq N$, and notice that for $j \neq i$, $$\mu^{(q)}_i(D^j_k) \leq \sum_{c \in \cA_j \cup \cA_{j+1}'} \mu^{(q)}_i(c) \leq \mu_i^{(n_1 +1)}(a_j) + \mu_i^{(n_1 +1)}(b_j) \leq \varepsilon.$$ If one repeats an argument similar to the proof of \cref{thrm gluing technique} for the subshift $\eta(X_{\boldsymbol \sigma'}^{(q)})$, we get that for every $w \in \cA^*_i$ with $|w|\leq \ell$, $\eta_*\mu_i^{(q)}(w) \leq \nu_i(w) + \varepsilon$.
Noting that, for $k' \leq \ell$, if $w \in D^i_{k'}$ then $\eta(w) \in \cC \cA_i^{k'}$ we deduce \begin{equation*} \mu^{(q)}_i (D^i_{k'}) \leq \eta_* \mu^{(q)}_i (\cC \cA_i^{k'}) \leq \sum_{u \in \cC \cA_i^{k'}} (\nu_i (u) + \varepsilon) \leq 2^{k'} \varepsilon + \nu_i (\cC \cA_i^{k'}). \end{equation*} Similarly $\mu^{(q)}_i (\overline{D}^i_{k'}) \leq 2^{k'} \varepsilon + \nu_i (\overline{\cC} \cA_i^{k'})$. Therefore for every $\varepsilon >0$ there exists $N$, such that for every $n \geq N$ \begin{equation*} \max_{\substack{k' =2, \ldots, \ell \\ q \in Q, q\geq n} } \{ \mu^{(q)} (C^i_{k'}) , \mu^{(q)} (\overline{C}^i_{k'}) \} \leq 2^{\ell} \varepsilon + \max_{k=2, \ldots, \ell } \left\{\nu_i (\cC \cA_i^{k'}),\nu_i (\overline{\cC} \cA_i^{k'}) \right\} \end{equation*} Thus taking limit $n \to \infty$ and $\varepsilon \to 0$ and we conclude. \end{proof} \subsection{System with multiple partial rigidity rates} We use the result of the last section of \cite{donoso_maass_radic2023partial}, for that fix $L \geq 6$ and let $\zeta_L \colon \cA^* \to \cA^*$ given by \begin{align*} a \mapsto a^Lb \\ b \mapsto b^La. \end{align*} In particular $\zeta_L^2 $ is a prolongable and mirror morphism. \begin{proposition}\cite[Proposition 7.17]{donoso_maass_radic2023partial} \label{prop very rigid family} Fix $L \geq 6$ and let $(X_{\zeta_{L}}, \cB, \nu, S)$ be the substitution subshift given by $\zeta_L \colon \cA^* \to \cA^*$, then \begin{equation*} \delta_{\nu} = \nu(aa) + \nu(bb) = \max_{k\geq 2 } \left\{ \sum_{w \in \cC \cA^k} \nu (w) ,\sum_{w \in \overline{\cC} \cA^k} \nu (w) \right\} = \frac{L-1}{L+1} \end{equation*} \end{proposition} Now we can give a detailed version of \cref{main thrm} stated in the introduction. For that, as for \cref{cor one substitution}, we write $\zeta_L \colon \cA_i^* \to \cA_i^*$ even if it is originally define in the alphabet $\cA$. \begin{theorem} \label{thrm final result} For $L \geq 6$, let $\boldsymbol \sigma $ be the directive sequence of glued substitutions $ \boldsymbol \sigma = ( \Gamma(\zeta_{L^{2^{i+1}}}^{(n+1)2^{d-i}} \colon i =0, \ldots,d-1))_{n \in \N}$. That is \begin{equation*} \begin{array}{cc} \sigma_n(a_i) &= \kappa(\zeta_{L^{2^{i+1}}}^{(n+1)2^{d-i}}(a_i))\\ \sigma_n(b_i) &= \kappa(\zeta_{L^{2^{i+1}}}^{(n+1)2^{d-i}}(b_i)) \end{array} \quad \text{ for } i \in \{0 , \ldots, d-1\}. \end{equation*} Then, \begin{equation} \label{final eq} \delta_{\mu_i} = \frac{L^{2^{i+1}}-1}{L^{2^{i+1}}+1} \end{equation} and the rigidity sequence is $(h^{(n)})_{n \in \N}$. \end{theorem} \begin{remark*} The directive sequence $\boldsymbol \sigma$ in the statement fullfils all the hypothesis of \cref{thrm gluing technique} with $\tau_i = \zeta_{L^{2^{i+1}}}^{2^{d-i}} $ for $i = 0, \ldots, d-1$. In particular, $|\zeta_{L^{2^{i+1}}}^{2^{d-i}}(a_i)|= (L^{2^{i+1}})^{2^{d-i}} = L^{2^{i+1} \cdot 2^{d-i}} = L^{2^{d+1}}$, therefore the morphism $\sigma_n$ is indeed of constant length, for all $n \in \N$. In this case $h^{(n)} = (L^{2^{d+1}})^{n+1}\cdot(L^{2^{d+1}})^{n} \cdots L^{2^{d+1}}$. \end{remark*} \begin{proof} By \cref{prop very rigid family} \begin{equation*} \max_{k=2, \ldots, L^{2^{d+1}} } \left\{ \nu (\cC \cA_i^k) , \nu ( \overline{\cC}\cA_i^k) \right\} = \nu_i (a_ia_i) + \nu_i (b_ib_i)= \frac{L^{2^{i+1}}-1}{L^{2^{i+1}}+1} = \delta_{\nu_i}. \end{equation*} Therefore, by \cref{cor delta smaler} and \cref{thrm combination bound max}, $ \displaystyle \delta_{\mu_i} = \delta_{\nu_i}$, concluding \eqref{final eq}. 
Since \begin{equation*} \lim_{n \to \infty} \sum_{j=0}^{d-1} \mu_i^{(n)}(a_ja_j) + \mu_i^{(n)}(b_jb_j) = \lim_{n \to \infty} \mu_i^{(n)}(a_ia_i) + \mu_i^{(n)}(b_ib_i) = \delta_{\mu_i}, \end{equation*} by \cref{theorem constant length delta mu}, the partial rigidity sequence is given by $(h^{(n)})_{n \in \N}$. \end{proof} \begin{remark*} The construction in \cref{thrm final result} relies on the fact that the family of substitutions $\zeta_L$, for $L\geq 6$, had been previously studied. A similar result could be achieved including $\zeta_2$, which corresponds to the Thue-Morse substitution, also studied in \cite{donoso_maass_radic2023partial}, but in that case the partial rigidity sequence for its corresponding ergodic measure would have been $(3 \cdot h^{(n)})_{n \in \N}$ instead of $(h^{(n)})_{n \in \N}$. In general, studying the partial rigidity rates of more substitution subshifts should allow us to construct more examples like the one in \cref{thrm final result}. In particular, it would be interesting to construct systems with algebraically independent partial rigidity rates, but even in the uniquely ergodic case, there is no example in the literature of a system with an irrational partial rigidity rate. We also notice that with the gluing technique introduced in \cref{thrm gluing technique} one can only build constant-length $\cS$-adic subshift, which have non-trivial rational eigenvalues, that is $m \geq 2$ and $1\leq k <m$ such that for some non-zero function $f \in L^2(X, \mu)$, $f \circ S = e^{2\pi i k /m} f$. Thus, a refinement of \cref{main thrm} would be to construct a minimal system with distinct weak-mixing measures and distinct partial rigidity rates. To construct an explicit example following a similar approach to the one outlined in this paper, it would be necessary to use a non-constant-length directive sequence and then being forced to use the general formula for the partial rigidity rate from \cite[Theorem B]{donoso_maass_radic2023partial}. Additionally, the equation \eqref{eq formula1} no longer holds in the non-constant-length case. \end{remark*} \bibliographystyle{abbrv} \bibliography{refs} \end{document}
2412.08917v1
http://arxiv.org/abs/2412.08917v1
Lefschetz properties through a topological lens
\documentclass[12pt, reqno]{amsart} \usepackage{comment} \usepackage{amssymb,amsmath,amsthm,wasysym} \usepackage{graphicx} \usepackage{paralist} \usepackage{stackrel} \usepackage{tikz-cd} \usepackage{tikz} \usepackage[all,cmtip]{xy} \usetikzlibrary{arrows} \usepackage{tcolorbox} \makeatletter \renewcommand*\env@matrix[1][*\c@MaxMatrixCols c]{ \hskip -\arraycolsep \let\@ifnextchar\new@ifnextchar \array{#1}} \makeatother \renewcommand{\o}{\omega} \def\a{\alpha} \def\b{\beta} \def\d{\delta} \def\td{\tilde{\delta}} \def\e{\epsilon} \def\g{\gamma} \def\t{\theta} \def\oo{\overline{\omega}} \def\s{\sigma} \def\z{\zeta} \def\cP{\mathcal P} \def\cE{\mathcal E} \def\cL{\mathcal L} \def\cJ{\mathcal J} \def\cJmor{\cJ^\mor} \def\ctJ{\tilde{\mathcal J}} \def\tPhi{\tilde{\Phi}} \def\cA{\mathcal A} \def\cB{\mathcal B} \def\cC{\mathcal C} \def\cZ{\mathcal Z} \def\cD{\mathcal D} \def\cF{\mathcal F} \def\cG{\mathcal G} \def\cO{\mathcal O} \def\cI{\mathcal I} \def\cS{\mathcal S} \def\cT{\mathcal T} \def\cM{\mathcal M} \def\cN{\mathcal N} \def\cK{\mathcal K} \def\cKH{\mathcal KH} \def\cH{\mathcal H} \def\cV{\mathcal V} \newcommand{\Q}{\mathbb{Q}} \newcommand{\bP}{\mathbb{P}} \newcommand{\bM}{\mathbb{M}} \newcommand{\A}{\mathbb{A}} \newcommand{\bH}{{\mathbb{H}}} \newcommand{\G}{\mathbb{G}} \newcommand{\bR}{{\mathbb{R}}} \newcommand{\bL}{{\mathbb{L}}} \newcommand{\R}{{\mathbb{R}}} \newcommand{\F}{\mathbb{F}} \newcommand{\E}{\mathbb{E}} \newcommand{\bF}{\mathbb{F}} \newcommand{\bE}{\mathbb{E}} \newcommand{\bK}{\mathbb{K}} \newcommand{\bD}{\mathbb{D}} \newcommand{\bS}{\mathbb{S}} \newcommand{\bN}{\mathbb{N}} \newcommand{\bG}{\mathbb{G}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \renewcommand{\H}{\mathbb{H}} \newcommand{\M}{\mathcal{M}} \newcommand{\W}{\mathcal{W}} \newcommand{\fa}{{\mathfrak a}} \newcommand{\fc}{{\mathfrak c}} \newcommand{\fp}{{\mathfrak p}} \newcommand{\fm}{{\mathfrak m}} \newcommand{\fq}{{\mathfrak q}} \newcommand{\fg}{{\mathfrak g}} \newcommand{\gl}{\mathfrak {gl}} \def\perm{\operatorname{perm}} \def\ad{\operatorname{ad}} \def\Adj{\operatorname{Ad}} \def\Ann{\operatorname{Ann}} \def\Biad{\operatorname{Biad}} \def\Aut{\operatorname{Aut}} \def\Inn{\operatorname{Inn}} \def\Out{\operatorname{Out}} \def\uHom{\operatorname{\underline{Hom}}} \def\uSpec{\operatorname{\underline{Spec}}} \def\Tor{\operatorname{Tor}} \def\Ext{\operatorname{Ext}} \def\Mat{\operatorname{M}} \def\NLL{\operatorname{NLL}} \def\End{\operatorname{End}} \def\Frac{\operatorname{Frac}} \def\Fun{\operatorname{Fun}} \def\Hess{\operatorname{Hess}} \def\hess{\operatorname{hess}} \def\Perm{\operatorname{Perm}} \def\Sper{\operatorname{Sperner}} \def\coSper{\operatorname{CoSperner}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\SL}{\operatorname{SL}} \def\im{\operatorname{Im}} \def\ker{\operatorname{Ker}} \def\coker{\operatorname{coker}} \def\dm{\operatorname{dim}} \def\rank{\operatorname{rank}} \def\trace{\operatorname{trace}} \def\chr{\operatorname{char}} \def\chrpoly{\operatorname{char poly}} \def\Gal{\operatorname{Gal}} \def\cont{\operatorname{cont}} \def\te{\tilde{e}} \def\gcd{\operatorname{gcd}} \def\stab{\operatorname{stab}} \def\ptstab{\operatorname{PtStab}} \def\setstab{\operatorname{SetStab}} \def\lcm{\operatorname{lcm}} \def\norm{\mathrel{\unlhd}} \def\chr{\operatorname{char}} \def\inc{{\mathrm inc}} \def\sign{{\operatorname {sign}}} \def\Orb{\operatorname{Orbit}} \def\Stab{\operatorname{Stab}} \def\Syl{\operatorname{Syl}} \def\sdp{\rtimes} \def\M{\operatorname{M}} 
\def\tr{\operatorname{tr}} \def\lt{\operatorname{lt}} \def\init{\operatorname{in}} \def\weight{\operatorname{weight}} \def\reg{\operatorname{reg}} \def\lra{\longrightarrow} \def\ra{\rightarrow} \def\into{\hookrightarrow} \def\onto{\twoheadrightarrow} \newcommand{\xra}[1]{\xrightarrow{#1}} \newcommand{\xla}[1]{\xleftarrow{#1}} \newcommand{\xroa}[1]{\overset{#1}{\twoheadrightarrow}} \def\ov#1{\overline{#1}} \newcommand\mapsfrom{\mathrel{\reflectbox{\ensuremath{\mapsto}}}} \newcommand{\tensor}{\otimes} \newcommand{\homotopic}{\simeq} \newcommand{\homeq}{\cong} \newcommand{\iso}{\approx} \newcommand{\dual}{\vee} \newcommand{\id}{\text{id}} \def\nsg{\unlhd} \def\can{{\pi}} \def\sl{\mathfrak{sl}} \newcommand{\vectwo}[2]{\begin{bmatrix} #1 \\ #2 \end{bmatrix}} \newcommand{\contract}{\bullet} \usepackage[colorlinks=true, linkcolor=blue, anchorcolor=black, citecolor=blue, filecolor=blue, menucolor= blue, urlcolor=blue]{hyperref} \usepackage[nameinlink, capitalize]{cleveref} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{axiom}[thm]{Axiom} \newtheorem{thmdef}{TheoremDefinition} \newtheorem*{thm*}{Theorem} \newtheorem{question}[thm]{Question} \newtheorem*{question*}{Question} \newtheorem{cor}[thm]{Corollary} \newtheorem*{cor*}{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{fact}[thm]{Fact} \newtheorem{conj}[thm]{Conjecture} \newtheorem{quest}[thm]{Question} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem*{defn*}{Definition} \newtheorem{chunk}{} \newtheorem{construction}[thm]{Construction} \newtheorem{ex}[thm]{Example} \newtheorem{exer}[thm]{Exercise} \newtheorem*{exer*}{Exercise} \newtheorem{notation}[thm]{Notation} \newtheorem*{notation*}{Notation} \newtheorem{conv}[thm]{Convention} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem*{rem*}{Remark} \newtheorem{terminology}[thm]{Terminology} \def\C{{\mathbb{C}}} \def\F{{\mathbb{F}}} \def\P{{\mathbb{P}}} \def\R{{\mathbb{R}}} \def\cB{{\mathcal{B}}} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Span}{Span} \def\ra{\rightarrow} \def\la{\leftarrow} \usepackage[top=1.2in, bottom=1.2in, left=1.2in, right=1.2in]{geometry} \makeatletter \newcommand*{\toccontents}{\@starttoc{toc}} \makeatother \setcounter{tocdepth}{1} \begin{document} \title{Lefschetz properties through a topological lens} \author{Alexandra Seceleanu} \address{Mathematics Department, University of Nebraska--Lincoln, 203 Avery Hall} \email{[email protected]} \date{} \thanks{Partially supported by NSF DMS--2401482.} \maketitle \begin{abstract} These notes were prepared for the {\em Lefschetz Preparatory School}, a graduate summer course held in Krakow, May 6--10, 2024. They present the story of the algebraic Lefschetz properties from their origin in algebraic geometry to some recent developments in commutative algebra. The common thread of the notes is a bias towards topics surrounding the algebraic Lefschetz properties that have a topological flavor. These range from the Hard Lefschetz Theorem for cohomology rings to commutative algebraic analogues of these rings, namely artinian Gorenstein rings, and topologically motivated operations among such rings. \end{abstract} \tableofcontents \pagenumbering{arabic} \section{Introduction} The topic of these notes is the algebraic Lefschetz properties, which are abstractions of the important Hard Lefschetz Theorem from complex geometry. \cref{s: 1} explains the topological context of this result. 
\cref{s: 3} introduces the algebraic Lefschetz properties and their relevance to commutative algebra. \cref{s: 4} establishes a correspondence between the strong Lefschetz property and an action of the Lie algebra $\sl_2$. \cref{s: 6} focuses on the class of Gorenstein rings and their construction via Macaulay's inverse system. \cref{s: 5} and \cref{s: 7} investigate how various constructions of new rings from old interact with the Lefschetz properties. These notes are deeply influenced by the monograph \cite{book} by T.~Harima, T.~Maeno, H.~Morita, Y.~Numata, A.~Wachi and J.~Watanabe. This reference contains a parallel description of many topics in these notes except for \cref{s: 7}, which describes more recent developments based on \cite{IMMSW}. The treatment of earlier chapters, while deeply influenced by \cite{book}, reflects the author's mathematical taste. \section{Cohomology rings and the Hard Lefschetz Theorem} \label{s: 1} This section gives an introduction to the origins of the algebraic Lefschetz properties. The motivation for this topic comes from algebraic topology, so we will spend some time looking at how the Lefschetz property arises there. \subsection{Cohomology rings} Let $\F$ be a field and let $X$ be a topological space (such as projective space $\P^n$ or the $n$-dimensional sphere $S^n$). We recall the notion of cohomology of $X$ with coefficients in $\F$. First, one can think of $X$ as being made out of simple cells (or at least one can approximate $X$ in this manner). This endows $X$ with a {\em cell complex (CW-complex)} structure. \begin{ex}[CW structures on sphere] \label{ex:CW sphere} The 2-dimensional sphere $S^2$ can be obtained by taking a point (0-dimensional cell) and gluing a 2-dimensional disc onto it along its entire boundary. So the CW-structure of $S^2$ is $$S^2= \text{pt} + \text{2-dimensional disc}.$$ More generally one can do the same for the $n$-dimensional sphere $S^n$: $$S^n= \text{pt} + n\text{-dimensional disc}.$$ There is another, less economical way to give the sphere a CW-structure. For $S^2$ one takes two $0$-dimensional cells and connects them using two line segments ($1$-dimensional cells) to form a circle $S^1$. Then one can glue two $2$-dimensional discs via their boundaries to the circle to form $S^2$. Similarly, there is a CW-structure on $S^n$ with two cells in each dimension summarized by $$S^n= 2\times \text{pt} + 2\times \text{1-dimensional disc} + 2\times \text{2-dimensional disc} + \cdots +2\times \text{n-dimensional disc}.$$ \end{ex} \begin{ex}[CW structure on the real projective space] \label{ex: CW proj} Consider first $\P_\R^n$. It can be written as $S^n/ \{\pm 1\}$. If we take a CW structure on $S^n$ with two cells in each dimension, then the action of $-1$ swaps the cells, thus they become identified in the quotient. Due to this, $\P_\R^n$ has a CW structure with one cell in each dimension. $$\P_\R^n= \text{pt} + \text{1-dimensional cell}+ \dots +\text{n-dimensional cell}.$$ Next consider $\P_\C^n$. This has a cell in every even (real) dimension: $$\P_\C^n= \text{pt} + \text{2-dimensional cell}+ \dots +\text{2n-dimensional cell}.$$ \end{ex} Proceeding towards homology, we define a {\em chain complex} ${\bf C_\bullet}(X)$ by letting $C_n(X)$ be the $\F$-vector space generated by the $n$-dimensional cells of $X$.
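For instance, for the CW structure on $S^2$ with two cells in each dimension from \cref{ex:CW sphere} one gets $C_0(S^2)=C_1(S^2)=C_2(S^2)=\F^2$, while for the minimal CW structure one gets $C_0(S^2)=C_2(S^2)=\F$ and $C_1(S^2)=0$.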
There are so-called boundary maps\footnote{We will not describe the boundary maps here.}, which fit into the following sequence $${\bf C_\bullet}(X): 0\la \F^{\# \text{0-cells}} \la \F^{\# \text{1-cells}} \la \cdots \la \F^{\# \dim(X)\text{-cells}}\la 0.$$ There is also a dual version called the {\em cochain complex} of $X$ with coefficients in $\F$ $${\bf C^\bullet}(X)=\Hom({\bf C_\bullet}(X),\F): 0\ra \F^{\# \text{0-cells}} \stackrel{\partial_1}{\ra} \F^{\# \text{1-cells}} \stackrel{\partial_2}{\ra} \cdots \stackrel{\partial_n}{\ra} \F^{\# \dim(X)\text{-cells}}\ra 0.$$ \begin{defn} The {\em cohomology groups} of $X$ are defined as $$H^i(X,\F)=H^i\left({\bf C^\bullet}(X)\right)=\ker{\partial_{i+1}}/\im{\partial_{i}}.$$ \end{defn} \begin{ex}\label{ex: H sphere} Based on \cref{ex:CW sphere} we have the following cochain complexes, which lead to easy computations of the corresponding cohomology groups. $${\bf C^\bullet}(S^n): 0\ra \F \ra 0 \ra 0 \ra \dots \ra \F\ra 0 $$ $$ H^i(S^n,\F)=\begin{cases} \F & i=0,n \\0 & \text{ otherwise} \end{cases}$$ $${\bf C^\bullet}(\P_\C^n): 0\ra \F\ra 0 \ra \F \ra 0 \ra \F \ra \dots\ra \F \ra 0 $$ $$ H^i(\P_\C^n,\F)=\begin{cases} \F & i \text{ even},\ 0\leq i\leq 2n \\ 0 & \text{ otherwise.} \end{cases}$$ \end{ex} The special property of these cohomology groups that allows us to study them using tools from ring theory is that they can be assembled into a graded ring. \begin{defn} The {\em cohomology ring} of $X$ is $$H^\bullet(X,\F)=\bigoplus_{i\geq 0}H^i(X,\F).$$ To study multiplication on this ring we need to define a map called the {\em cup product} $$H^m(X,\F)\times H^n(X,\F)\to H^{m+n}(X,\F).$$ For this recall the K\"unneth isomorphism: for two topological spaces $X$ and $Y$, if one of $X$ or $Y$ has torsion-free homology (this holds when working over a field $\F$) and has finitely many cells in each dimension, there is an isomorphism $\mu:H^\bullet(X\times Y,\F)\cong H^\bullet(X,\F)\otimes_\F H^\bullet(Y,\F)$. The composite with the diagonal map $$ H^\bullet(X,\F)\otimes_\F H^\bullet(X,\F)\stackrel{\cong}{\ra} H^\bullet(X\times X,\F)\stackrel{\Delta^*}{\ra} H^\bullet(X,\F)$$ defines the cup product by $x \cup y = \Delta^*(\mu^{-1}(x \otimes y))$. The cup product is not commutative, but it is what we call {\em graded commutative}. This means that the ring is graded so that \begin{quote} if $x\in H^m(X,\F)$, $|x|=m$ denotes the {\em cohomological degree} of $x$, \end{quote} and any homogeneous elements $x,y$ in this ring satisfy \begin{equation} \label{eq:gradedcomm} x \cup y = (-1)^{|x||y|}y \cup x. \end{equation} Note that in a graded commutative ring even degree elements commute with all other elements, while odd degree elements anti-commute with other odd degree elements. \end{defn} \begin{ex}[Cohomology ring of a sphere] From \cref{ex: H sphere} we have \[ H^\bullet(S^n,\F)=\F\oplus \F. \] Set $1$ and $e$ to be basis elements of $H^0(S^n,\F)$ and $H^n(S^n,\F)$ as $\F$-vector spaces, respectively. Then $1$ is the multiplicative identity of the ring $H^\bullet(S^n,\F)$ and $e^2=e \cup e \in H^{2n}(S^n,\F) = 0$, so $$H^\bullet(S^n,\F)=\F[e]/(e^2) \text{ with } |e|=n.$$ \end{ex} \begin{ex}[Cohomology ring of a torus] \label{ex:torus} Applying the K\"unneth formula to the torus $T^n=S^1\times \cdots \times S^1$ gives for elements $e_1,\ldots, e_n$ with $|e_i|=1$ \[ H^\bullet(T^n,\F)=\F[e_1]/(e_1^2)\otimes_\F \F[e_2]/(e_2^2)\otimes_\F \cdots \otimes_\F \F[e_n]/(e_n^2)=\bigwedge_\F\langle e_1,\ldots, e_n\rangle.
\] Note that the tensor product above is taken in the category of graded-commutative algebras, which implies that $e_ie_j=-e_je_i$ as expected since $|e_i|=|e_j|=1$. If the characteristic of $\F$ is not equal to 2 then this implies $e_i^2=0$ for all $i$. The ring above, denoted $\bigwedge_\F\langle e_1,\ldots, e_n\rangle$, is called an {\em exterior algebra}. As an $\F$-vector space, a basis of the exterior algebra is given by all the square-free monomials in the variables $e_1, \ldots, e_n$. \end{ex} \begin{ex}[Cohomology ring of projective space]\label{ex: H proj space} From \cref{ex: H sphere} we have $H^\bullet(\P_\C^n,\F)=\F\oplus \F \oplus \cdots \oplus \F$, with $n+1$ summands in degrees $0,2, \ldots, 2n$. Set $x$ to be a generator of $H^2(\P_\C^n,\F)$. It turns out that $x^i\neq 0$ in $H^{2i}(\P_\C^n,\F)$ for $1\leq i\leq n$, so $x^i$ generates $H^{2i}(\P_\C^n,\F)$. Moreover $x^{n+1}=0$ since $H^{2n+2}(\P_\C^n,\F)=0$. Thus we have $$H^\bullet(\P_\C^n,\F)=\F[x]/(x^{n+1}), \text{ with }|x|=2.$$ We can apply the K\"unneth formula to compute \begin{eqnarray*} H^\bullet(\P_\C^{d_1}\times \P_\C^{d_2}\times \cdots \times\P_\C^{d_n},\F) &\cong& \F[x_1]/(x_1^{d_1+1})\otimes_\F \F[x_2]/(x_2^{d_2+1})\otimes_\F \cdots \otimes_\F \F[x_n]/(x_n^{d_n+1})\\ &\cong& \F[x_1,\ldots,x_n]/(x_1^{d_1+1}, \ldots, x_n^{d_n+1}), \text{ with }|x_i|=2. \end{eqnarray*} \end{ex} \subsection{The Hard Lefschetz Theorem} Solomon Lefschetz (1888--1972) was a prominent mathematician who did fundamental work on algebraic topology, its applications to algebraic geometry, and the theory of non-linear ordinary differential equations. His career, including transitions from industry to mathematics and from working in Nebraska and Kansas to Princeton University, is beautifully summarized in his own words in \cite{Lefschetzbio}. He was also a great supporter of mathematics in developing countries, helping train a great number of Mexican mathematicians. Lefschetz understood that cohomology rings can be used to study questions in algebraic geometry. Speaking about his work, Lefschetz states: \begin{quote} {\em ``The harpoon of algebraic topology was planted in the body of the whale of algebraic geometry."} \end{quote} We now come to the main result that we have been building up to. Let $X$ be an algebraic subvariety of $\P_\C^n$ and let $H$ denote a general hyperplane in $\P_\C^n$. Then $X \cap H$ is a subvariety of $X$ of real codimension two and thus, by a standard construction in algebraic geometry, represents a cohomology class $L \in H^2(X,\R)$ called the class of a hyperplane section. In more detail, $L$ is an $\F$-linear homomorphism that takes a subvariety of $X$ of real dimension 2 (or a 2-dimensional cell of $X$ if we view this as a CW-complex), intersects it with $H$ and returns the number of points of intersection. This function is then extended $\F$-linearly. \begin{thm}[Hard Lefschetz Theorem] \label{HLT} Let $X$ be a smooth irreducible complex projective variety of complex dimension $n$ (real dimension $2n$), $H^\bullet(X)=H^\bullet(X,\R)$, and let $L \in H^2(X,\R)$ be the class of a general hyperplane section. Then for $0\leq i\leq n$ the following maps are isomorphisms $$L^{i}:H^{n-i}(X)\to H^{n+i}(X), \text{ where } L^i(x)=\underbrace{L\cup \cdots \cup L}_{i \text{ times}} \cup \, x.$$ \end{thm} \begin{rem} The Hard Lefschetz theorem holds for $H^\bullet(X,\F)$ where $\F$ is any field of characteristic zero (not just $\R$), but the conclusion of the theorem can fail in positive characteristic.
\end{rem} The theorem above was first stated by Lefschetz in \cite{Lefschetz}, but his proof was not entirely rigorous. The first complete proof of \cref{HLT} was given by Hodge \cite{Hodge}. The standard proof given nowadays uses the representation theory of the Lie algebra $\sl_2(\C)$ and is due to Chern \cite{Chern}. Lefschetz's original proof was only recently made rigorous by Deligne \cite{Deligne}, who extended it to positive characteristic. \begin{ex}[The Hard Lefschetz theorem in action] See \Cref{ex: H proj space} for context. For $H^\bullet(\P_\C^n)=\F[x]/(x^{n+1})$ the class of a hyperplane is $L=x$ (recall that $|x|=2$) and for $i\equiv n\pmod{2}$ it gives isomorphisms \begin{eqnarray*} H^{n-i}(\P_\C^n)=x^{\frac{n-i}{2}}\F\xrightarrow{\times x^i} H^{n+i}(\P_\C^n)= x^{\frac{n+i}{2}}\F \\ x^{\frac{n-i}{2}}y\mapsto x^i(x^{\frac{n-i}{2}}y)=x^{\frac{n+i}{2}}y. \end{eqnarray*} \end{ex} Cohomology rings of $n$-dimensional smooth complex projective varieties $X$ with coefficients in a field $\F$ satisfy the following properties: \begin{enumerate} \item[(1)] $H^\bullet(X,\F)$ is a graded commutative ring\footnote{It is worth cautioning that the phrase graded commutative ring has a very different meaning from commutative graded ring. In the former odd degree elements anti-commute while in the latter all elements commute regardless of their degree.} in the sense of \eqref{eq:gradedcomm}. Its even part $A:=H^{2\bullet}(X,\F)=\bigoplus_{i\geq 0} H^{2i}(X,\F)$ is a commutative graded ring as defined in the next chapter. We can re-grade this ring by setting $|x|=i$ if $x\in H^{2i}(X,\F)$. With this convention we have $|L|=1$. \item[(2)] $H^\bullet(X,\F)$ and $A$ are finite dimensional $\F$-vector spaces (so $A$ is an artinian ring, cf.~\cref{def: artinian}). \item[(3)] $H^\bullet(X,\F)$ and $A$ satisfy Poincar\'e duality (hence $A$ is a Gorenstein ring, cf.~\cref{prop: Poincare}). \end{enumerate} The main goal of these notes is to demonstrate how one may hope to extend the Hard Lefschetz theorem (and some weaker versions thereof) to arbitrary rings which may not necessarily be cohomology rings, but satisfy at least some of the properties above. Thus we are motivated by the following question. \begin{question} Which commutative graded rings $A$ that are artinian, or both artinian and Gorenstein, also satisfy the conclusion of the Hard Lefschetz theorem? \end{question} \section{Classes of graded rings} \label{s: 2} From now on all rings will be commutative unless specified otherwise. \subsection{Artinian algebras} \begin{defn}[Graded ring] A commutative ring $A$ is an ($\N$--){\em graded ring} provided it decomposes as $$A=\bigoplus_{i\geq 0}A_i$$ with $A_i$ abelian groups such that $A_iA_j\subseteq A_{i+j}$ for all $i,j \in \N$ ($a\in A_i, b\in A_j \Rightarrow ab\in A_{i+j}$). \end{defn} From now on we restrict to graded rings $A$ with $A_0=\F$ a field. I will refer to these as $\F$-{\em algebras}. Note that in particular such a ring $A$ and each of its homogeneous components $A_i$ are $\F$-vector spaces. Such an $\N$--graded ring $A$ has a unique homogeneous maximal ideal, namely $\fm=\bigoplus_{i\geq 1}A_i$. \begin{ex} $A=\F[x_1,\dots,x_n]$ is the fundamental example of a graded ring with $A_i=$ the set of homogeneous polynomials of degree $i$. Note that the degree of $x_i$ is allowed to be an arbitrary positive integer. \end{ex} \begin{exer} Show that if $A$ is a commutative, Noetherian, graded $\F$-algebra then $\dim_\F A_i$ is finite for each $i$.
\end{exer} \begin{defn}[Hilbert function] The {\em Hilbert function} of a Noetherian graded $\F$-algebra $A$ is the function $$h_A:\N\to \N, h_A(i)=\dim_\F A_i.$$ The {\em Hilbert series} of $A$ is the power series $H_A(t)=\sum_{i\geq 0} h_A(i)t^i$. \end{defn} \begin{exer} Prove that the Hilbert function of the polynomial ring $R=\F[x_1,\ldots, x_n]$ is given by \[ h_R(i)=\binom{n+i-1}{i}, \, \forall i\geq 0 \] and the Hilbert series is \[ H_R(t)=\frac{1}{(1-t)^n}. \] \end{exer} \begin{ex} \label{ex: truncated poly} The Hilbert function of the truncated polynomial ring $A=\frac{\F[x_1,\ldots, x_n]}{(x_1,\ldots, x_n)^d}$ is given by \[ h_A(i)=\begin{cases} \binom{n+i-1}{i} & \text{ if } 0\leq i < d\\ 0 & \text{ if } i \geq d. \end{cases} \] Thus $H_A(t)=\sum_{i=0}^{d-1}\binom{n+i-1}{i} t^i$. \end{ex} \begin{ex} \label{ex:222} Consider a field $\F$ and let $A=\F[x,y,z]/(x^2,y^2,z^2)$. Clearly, $A$ is a finite dimensional $\F$-vector space with basis given by the monomials $\{1,x,y,z,xy,yz,xz,xyz\}$. We see that the elements of $A$ have only four possible degrees 0,1,2,3 and moreover \begin{eqnarray*} A_0&=& \Span_\F\{1\}\cong \F \Rightarrow h_A(0)=1\\ A_1&=& \Span_\F\{x,y,z\}\cong \F^3 \Rightarrow h_A(1)=3\\ A_2&=& \Span_\F\{xy,yz,xz\}\cong \F^3 \Rightarrow h_A(2)=3\\ A_3&=& \Span_\F\{xyz\}\cong \F \Rightarrow h_A(3)=1\\ A_i&=& 0, \forall i\geq 4 \Rightarrow h_A(i)=0, \forall i\geq 4 \end{eqnarray*} Thus $H_A(t)=1+3t+3t^2+t^3$. \end{ex} In \cref{ex: truncated poly} and \cref{ex:222} the Hilbert series was in fact a polynomial, equivalently the Hilbert function was eventually equal to zero. We now define a class of graded rings which satisfy this property. \begin{defn}[Artinian ring] \label{def: artinian} A (local or) graded $\F$-algebra $A$ with (homogeneous) maximal ideal $\fm$ is {\em artinian} if any of the following equivalent conditions holds. \begin{itemize} \item[(a)] $A$ is finite dimensional as a $\F$-vector space. \item[(b)] $A$ has Krull dimension zero. \item[(c)] $\mathfrak m^d = 0$ for some (hence all sufficiently large) integers $d \geq 1$. If $A$ is graded this can be restated as $A_d=0$ for sufficiently large integers $d\geq 1$. \item[(d)] $A$ satisfies the descending chain condition on ideals. \item[(e)] There exists a descending sequence of ideals $$ A=\fa_0 \supseteq \fa_1 \supseteq \fa_2 \supseteq \dots \supseteq \fa_\ell=0 \text{ such that } \fa_{i-1}/\fa_i\cong\F.$$ Such a sequence of ideals is called a {\em composition series}. \end{itemize} Moreover, if $R=\F[x_1,\ldots, x_n]$ is a polynomial ring and $A=R/I$ for some homogeneous ideal $I$ of $R$ then the conditions above are also equivalent to \begin{itemize} \item[(f)] For each $0 \leq i \leq n$ there is some integer $p_i$ such that $x_i^{p_i} \in I$. \item[(g)] If $\F$ is algebraically closed, another equivalent condition is that the vanishing locus of $I$ in projective space is empty. \end{itemize} \end{defn} \subsection{Artinian Gorenstein rings and complete intersections} \begin{defn}[Socle]\label{def: AG} For a graded artinian $\F$ algebra the maximal integer $d$ such that $A_d \neq 0$ is called the {\em maximal socle degree} of $A$. The {\em socle} of $A$ is the ideal $$(0:_A\fm)=\{x\in A \mid xy=0, \forall y\in \fm\}.$$ There is always a containment $A_d\subseteq (0:_A\fm)$, where $d$ denotes the maximal socle degree of $A$. When the converse holds, namely $ (0:_A\fm) \subseteq A_d$, then $A$ is called a {\em level algebra}. This condition means that the socle is concentrated in a single degree. 
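For a quick illustration of these notions, consider $A=\F[x,y]/(x^2,xy,y^3)$: here $(0:_A\fm)=\Span_\F\{x,y^2\}$, which meets both degree $1$ and the maximal socle degree $2$, so the socle is not concentrated in a single degree and $A$ is not a level algebra.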
\end{defn} \begin{defn}[Artinian Gorenstein ring] A graded $\F$-algebra is {\em artinian Gorenstein} (AG) if its socle is a one dimensional $\F$-vector space. \end{defn} An equivalent characterization of AG algebras is given by the following proposition. \begin{prop}[Poincar\'e duality] \label{prop: Poincare} A graded $\F$-algebra $A$ of maximal socle degree $d$ is AG if and only if for each nonzero element $a_{soc}$ of $A_d$ there exists an $\F$-vector space homomorphism $\int_A: A \to \F$ called an {\em orientation}, satisfying the following properties: \begin{enumerate} \item if $b\in A_i$ for some $i<d$ then $\int_A b=0$, \item the orientation induces an isomorphism $A_d\cong \F$ such that $\int_A a_{soc}=1$, \item for each $0\leq i\leq d$ and each element $a\in A_i$ there exists a unique element $b\in A_{d-i}$ so that $\int_A ab=1$. \end{enumerate} \end{prop} In \cref{s: BUG} we will use the notation $a_{soc}$ implicitly to mean fixing the unique orientation on $A$ that satisfies $\int_A a_{soc}=1$. \begin{ex} Continuing with \cref{ex:222}, the socle is $(0:_A\fm)=\Span_{\F}\{xyz\}$, a 1-dimensional $\F$-vector space. This shows that $A$ is Gorenstein. Take the orientation on $A$ to be specified by $\int_A xyz=1$. We see that the $\F$-basis elements $\{1,x,y,z, xy, yz, xz, xyz\}$ of $A$ form pairs with respect to the given orientation in the following manner \begin{eqnarray*} \int_A 1\cdot xyz &=& 1\\ \int_A x\cdot yz &=& 1\\ \int_A y\cdot xz &=& 1\\ \int_A z\cdot xy &=& 1. \end{eqnarray*} \end{ex} \begin{exer} \label{exer: Gor} Show that the following is an artinian Gorenstein ring \[ R=\frac{\F[x,y,z]}{(xy,xz,yz, x^2-y^2,x^2-z^2)}. \] \end{exer} \begin{exer} Prove that if $A$ is a graded AG algebra of maximum socle degree $d$ then $h_A(i)=h_A(d-i)$ for each $0\leq i \leq d$. This is usually stated by saying AG algebras have symmetric Hilbert function. \end{exer} \begin{defn} A graded {\em artinian} $\F$-algebra is a {\em complete intersection} (CI) if $A=R/I$ where $R=\F[x_1,\ldots, x_n]$ and $I=(f_1, \ldots, f_n)$, that is, $I$ is a homogeneous ideal generated by as many elements as there are variables in $R$. \end{defn} \begin{ex} \label{ex: monomial CI} The rings $A=\F[x_1,\ldots, x_n]/(x_1^{d_1}, \ldots, x_n^{d_n})$, where $d_1,\ldots, d_n\geq 1$ are integers, are called {\em monomial complete intersections}. \end{ex} \begin{exer} Prove that the rings in \cref{ex: monomial CI} are the only artinian Gorenstein rings of the form $R/I$ where $R=\F[x_1,\ldots, x_n]$ and $I$ is an ideal generated by monomials. \end{exer} All CI rings are Gorenstein, but not all Gorenstein rings are CI, as exemplified by the ring in \cref{exer: Gor}. \section{The Lefschetz properties} \label{s: 3} \subsection{Weak Lefschetz property and consequences} \begin{defn}[Weak Lefschetz property] Let $A=\bigoplus_{i=0}^cA_i$ be a graded artinian $\F$-algebra. We say that $A$ has the {\bf \em weak Lefschetz property (WLP)} if there exists an element $L\in A_1$ such that for $0\leq i \leq c-1$ each of the multiplication maps \[ \times L: A_i\to A_{i+1}, x\mapsto Lx \quad \text{ is injective or surjective}. \] We call $L$ with this property a {\bf \em weak Lefschetz element}. The WLP says that $\times L$ has the maximum possible rank, which is referred to as {\em full rank}. 
\end{defn} \begin{defn} The {\bf \em non-weak Lefschetz locus} of a graded artinian $\F$-algebra $A$ is the set (more accurately the algebraic set) \[ NLL_w(A)=\{(a_1,\ldots, a_n)\in \F^n \mid L=a_1x_1+\cdots+a_nx_n \text{ not a weak Lefschetz element on } A\}. \] \end{defn} \begin{exer}[Equivalent WLP statements]\label{exer: equivalent WLP} Prove that for an artinian graded $\F$-algebra $A$ the following are equivalent: \begin{enumerate} \item $L\in A_1$ is a weak Lefschetz element for $A$. \item For all $0\leq i \leq c-1$ the map $\times L: A_i\to A_{i+1}$ has rank $\min\{h_A(i),h_A(i+1)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([(L)]_{i+1})=\min\{h_A(i),h_A(i+1)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([A/(L)]_{i+1})=\max\{0,h_A(i+1)-h_A(i)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([0:_A L]_{i})=\max\{0,h_A(i)-h_A(i+1)\}$. \end{enumerate} \end{exer} \begin{exer} Show that the non-weak Lefschetz locus is a Zariski closed set. \end{exer} \begin{ex} Take $A=\C[x,y]/(x^2,y^2)$ with the standard grading $|x|=|y|=1$ and $L=x+y$. Then the multiplication map $\times L$ gives the following matrices with respect to the monomial bases $\{1\},\{x,y\}$ and $\{xy\}$: \begin{center} \begin{tabular}{llll} map & matrix & rank & injective/ surjective \\ \hline $A_0\to A_1$ &$\begin{bmatrix} 1 \\ 1\end{bmatrix}$ & 1 & injective\\ $A_1\to A_2$ & $\begin{bmatrix} 1 & 1\end{bmatrix}$ & 1 & surjective\\ $A_i\to A_{i+1}, i\geq 2 $ & $\begin{bmatrix} 0\end{bmatrix}$ & 0 & surjective\\ \end{tabular} \end{center} We conclude that $A$ has the WLP and $x+y$ is a Lefschetz element on $A$. \end{ex} \begin{ex}[Dependence on characteristic] Take $A=\F[x,y,z]/(x^2,y^2,z^2)$ with the standard grading $|x|=|y|=1$ and $L=ax+by+cz$. Then the multiplication map $\times L$ is represented by the following matrices with respect to the monomial bases $1$ for $A_0$, $\{x,y,z\}$ for $A_1$, $\{xy,xz,yz\}$ for $A_2$, and $xyz$ for $A_3$: \begin{eqnarray*} \times L:A_0\to A_1& M=\begin{bmatrix} a\\ b\\ c \end{bmatrix}, & \text{ injective unless } a=b=c=0 \\ \times L:A_1\to A_2 & M=\begin{bmatrix} b & a & 0\\ c & 0 & a \\ 0 & c & b \end{bmatrix}, & \det(M)=-2abc\\ \times L:A_2\to A_3& M=\begin{bmatrix} a& b& c \end{bmatrix}, & \text{ surjective unless } a=b=c=0. \end{eqnarray*} The map $\times L:A_1\to A_2$ has full rank if and only if $\chr(\F)\neq 2$ and $a\neq 0, b\neq 0, c\neq 0$. We conclude that $A$ has the WLP if and only if $\chr(\F)\neq 2$ because in that case $L=x+y+z$ is a weak Lefschetz element. The {\em non-(weak) Lefschetz locus} of $A$ in this example is \begin{eqnarray*} \NLL_w(A) &=& \{(a,b,c)\in \F^3 \mid L=ax+by+cz \text{ is not a weak Lefschetz element on } A\} \\ &=& V(abc)= \{(a,b,c)\in \F^3 \mid a=0 \text{ or } b=0 \text{ or } c=0\} \\ &=& \text{ the union of the three coordinate planes in } \F^3. \end{eqnarray*} \end{ex} \begin{defn} A sequence of numbers $h_1,\ldots,h_c$ is called {\em unimodal} if there is an integer $j$ such that $$h_1\leq h_2 \leq \dots \leq h_j \geq h_{j+1}\geq \dots \geq h_c.$$ \end{defn} \begin{lem} If $B$ is a standard graded $\F$-algebra and $B_j=0$ for some $j\in\N$ then $B_i=0$ for all $i\geq j$. \end{lem} \begin{proof} $B$ standard graded means that $B=\F[B_1]=\F[x_1,\ldots,x_n]/I$ where $x_1,\ldots,x_n$ are an $\F$-basis for $B_1$, which means $|x_1|=\dots=|x_n|=1$, and $I$ is a homogeneous ideal. Then it follows that $B_i=\Span_\F\{B_{i-j}B_j\}=\Span_\F\{0\}=0$ for any $i\geq j$. 
\end{proof} \begin{prop}Suppose that $A$ is a standard graded artinian algebra over a field $\F$. If $A$ has the weak Lefschetz property then $A$ has a unimodal Hilbert function. \end{prop} \begin{proof} Let $j$ be the smallest integer such that $\dim_\F A_j > \dim_\F A_{j+1}$ and let $L$ be a Lefschetz element on $A$. Then $\times L:A_j\to A_{j+1}$ is surjective i.e. $LA_j=A_{j+1}$. Now consider the cokernel $A/(L)$ of the map $A\stackrel{\times L}{\lra}A$. We have that $\left(A/(L)\right)_{j+1}=A_{j+1}/LA_j=0$, so by the previous Lemma $\left[A/(L)\right]_{i}=0$ for $i\geq j+1$. This means that $\times L: A_i\to A_{i+1}$ is surjective for $i\geq j$ and so we have $$h_0(A)\leq h_1(A)\leq \dots \leq h_j(A)>h_{j+1}(A)\geq h_{j+2}(A)\geq \dots \geq h_c(A). \qedhere$$ \end{proof} The proof above yields: \begin{cor} For a standard graded artinian algebra $A$ with the weak Lefschetz property there exists $j\in \N$ such that the multiplication maps by a weak Lefschetz element $\times L:A_i\to A_{i+1}$ are injective for $i<j$ after which they become surjective for $i\geq j$. \end{cor} \begin{ex}[Dependence on grading] Recall from Example 2.18 that the algebra $A=\F[x,y]/(x^2,y^2)$ with $|x|=|y|=1$ is standard graded and has WLP and notice that the Hilbert function of $A$, $1,2,1$ is unimodal. Take $B=\C[x,y]/(x^2,y^2)$ with $|x|=1, |y|=3$. Then $B$ is a graded algebra with nonunimodal Hilbert function $1, 1, 0, 1, 1$, but $x$ is a weak Lefschetz element on $B$. Take $C=\C[x,y]/(x^2,y^2)$ with $|x|=1,|y|=2$. Then $C$ has a unimodal Hilbert function $1,1,1,1$ but does not have the WLP because the only degree one elements are multiples of $x$, which do not have maximal rank with respect to multiplication from degree one to degree two. \end{ex} \subsection{Strong Lefschetz property and consequences} \begin{defn}[Strong Lefschetz property] Let $A=\bigoplus_{i=1}^cA_i$ be a graded artinian $\F$-algebra. We say that $A$ has the {\bf \em strong Lefschetz property (SLP)} if there exists an element $L\in A_1$ such that for all $1\leq d\leq c$ and $0\leq i \leq c-d$ each of the multiplication maps \[ \times L^d: A_i\to A_{i+d}, x\mapsto L^dx \quad \text{ is injective or surjective.} \] We call $L$ with this property a {\bf \em strong Lefschetz element}. \end{defn} \begin{rem} An element $L\in A_1$ is strong Lefschetz on $A$ if and only if for all $1\leq d\leq c$ and $0\leq i \leq c-d$ the maps $\times L^d: A_i\to A_{i+d}$ have rank $\min\{h_A(i),h_A(d+i)\}$. \end{rem} \begin{defn} The {\bf \em non-strong Lefschetz locus} of a graded artinian $\F$-algebra $A$ is the set (more accurately the algebraic set) \[ NLL_s(A)=\{(a_1,\ldots, a_n)\in \F^n \mid L=a_1x_1+\cdots+a_nx_n \text{ not a strong Lefschetz element on } A\}. \] \end{defn} \begin{rem} The non-strong Lefschetz locus is a Zariski closed set. \end{rem} \begin{rem}[SLP $\Rightarrow$ WLP] If $A$ satisfies SLP then $A$ satisfies WLP (the $d=1$ case). \end{rem} The following exercise shows this implication is not reversible. \begin{exer} Let $\F$ be a field of characteristic zero and let \[ A=\frac{\F[x,y,z]}{(x^3,y^3,z^3,(x+y+z)^3)}. \] \begin{enumerate} \item Find the Hilbert function of $A$. \item Prove that $A$ satisfies WLP but not SLP. \end{enumerate} {\em Hint:} One can use a computer algebra system such as {\em Macaulay2} \cite{M2} to prove that an algebra satisfies WLP or SLP by finding a linear form that conforms to the respective definition. Usually such form is found by trial and picking at random from the set of linear forms. 
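For instance, one possible Macaulay2 session for the algebra of this exercise, checking criterion (4) of \Cref{exer: equivalent WLP} against a random linear form, is sketched below (the commands are only meant as a starting point, not as a complete solution):
\begin{verbatim}
R = QQ[x,y,z];
A = R/ideal(x^3, y^3, z^3, (x+y+z)^3);
L = random(1, A);        -- a random linear form in A
B = A/ideal(L);
-- criterion (4): dim [A/(L)]_(i+1) should equal max(0, h_A(i+1)-h_A(i))
apply(7, i -> hilbertFunction(i, A))
apply(7, i -> hilbertFunction(i, B))
\end{verbatim}
For the SLP one compares in the same way with \verb|B = A/ideal(L^d)| for the relevant powers $d$.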
Macaulay2 has a function \texttt{random()} which is helpful in this regard. One can also prove that an algebra does or does not satisfy WLP or SLP by working over an extension of the coefficient field $\F$ of $A$. In the example above, one would work over the field extension $K=\F(a,b,c)$, defined in Macaulay2 as \texttt{K=frac(QQ[a,b,c])}. To compute the rank of multiplication by $L=ax+by+cz$ and its powers on the artinian algebra $A'=K\otimes_{\F}A$ in Macaulay2 for $\F=\Q$, use the criteria in \Cref{exer: equivalent WLP} to express the desired ranks in terms of the Hilbert functions of the cyclic modules $A'/(L^j)$. \end{exer} \begin{ex}[Dependence on characteristic] Take $A=\F[x,y]/(x^2,y^2)$ with the standard grading $|x|=|y|=1$ and $L=ax+by$. Then the multiplication map $\times L^2$ gives the following matrices with respect to the monomial bases $\{1\},\{x,y\}$ and $\{xy\}$: \begin{center} \begin{tabular}{llll} map & matrix & rank & injective/ surjective \\ \hline $A_0\to A_2$ &$\begin{bmatrix} 2ab \end{bmatrix}$ & $\begin{cases} 1 & \chr(\F)\neq 2 \text{ and } ab\neq 0 \\ 0 & \text{otherwise} \end{cases}$ & $\begin{cases} \text{bijective} & \chr(\F)\neq 2 \text{ and } ab\neq 0 \\ \text{neither} & \text{otherwise} \end{cases}$\\ $A_i\to A_{i+2}, i\geq 1 $ & $\begin{bmatrix} 0\end{bmatrix}$ & 0 & surjective\\ \end{tabular} \end{center} \noindent If $\chr(\F)\neq 2$ we conclude that $A$ has the SLP and that $ax+by$ with $a\neq 0, b\neq 0$ is a strong Lefschetz element on $A$. The {\em non-(strong) Lefschetz locus} is the union of the coordinate axes in $\F^2$: $$NLL_s(A)=V(ab)=\{(a,b)\in \F^2 \mid a=0 \text{ or } b=0\}.$$ However, $A$ does not have the SLP if $\chr(\F)=2$, so in that case $\NLL_s(A)=\F^2$. \end{ex} \begin{prop} Let $A$ be a (not necessarily standard) graded artinian $\F$-algebra which satisfies the SLP. Then $A$ has unimodal Hilbert function. \end{prop} \begin{proof} Suppose that the Hilbert function of $A$ is not unimodal. Then there are integers $k < l <m$ such that $\dim_\F A_k > \dim_\F A_l < \dim_\F A_m$. Recall that the rank of a composite map is bounded above by the ranks of its components. Hence the multiplication map $\times L^{m-k}:A_k \to A_m$ cannot have full rank for any linear element $L\in A$ because it is the composition of $\times L^{l-k}:A_k \to A_l$ and $\times L^{m-l}:A_l \to A_m$, each of which has rank at most $\dim_\F A_l$, which is strictly less than $\min\{\dim_\F A_k, \dim_\F A_m\}$. Thus $A$ cannot have the SLP. \end{proof} \begin{defn} Let $A=\bigoplus_{i=0}^cA_i$ be a graded artinian $\F$-algebra. We say that $A$ has the {\bf \em strong Lefschetz property in the narrow sense (SLPn)} if there exists an element $L\in A_1$ such that the multiplication maps $\times L^{c-2i}: A_i\to A_{c-i}, x\mapsto L^{c-2i}x$ are bijections for all $0\leq i \leq \lfloor c/2\rfloor$. \end{defn} \begin{rem} SLP in the narrow sense is the closest property to the conclusion of the Hard Lefschetz \cref{HLT}. \end{rem} \begin{defn} We say that a graded artinian algebra $A=\bigoplus_{i=0}^c A_i$ of maximum socle degree $c$ has a {\em symmetric Hilbert function} if $h_A(i)=h_A(c-i)$ for $0\leq i \leq \lceil c/2\rceil$. \end{defn} \begin{prop}\label{equiv narrow SLP} If a graded artinian $\F$-algebra $A$ has the strong Lefschetz property in the narrow sense, then the Hilbert function of $A$ is unimodal and symmetric. Moreover, we have the equivalence: $A$ has SLP and symmetric Hilbert function if and only if $A$ has SLP in the narrow sense.
\end{prop} \begin{proof} ($\Leftarrow$) The fact that SLP in the narrow sense implies symmetric Hilbert function follows from the definition because the bijections give $\dim_\F A_i=\dim_\F A_{c-i}$. The fact that SLP in the narrow sense implies SLP can be seen by considering the maps $\times L^d:A_i\to A_{i+d}$. For each such $d,i$ we distinguish two cases: \begin{itemize} \item if $i\leq (c-d)/2$ then $c-2i-d \geq 0$ and $(\times L^{c-2i-d}) \circ (\times L^{d}) =\times L^{c-2i}$ being a bijection implies that $\times L^{d}\colon A_i\to A_{i+d}$ is injective, hence has full rank; \item if $i> (c-d)/2$ and $i+d\leq c$, then $j=c-i-d$ satisfies $0\leq j<i$ and $(\times L^{d}) \circ (\times L^{i-j}) =\times L^{c-2j}$ being a bijection implies that $\times L^{d}\colon A_i\to A_{i+d}=A_{c-j}$ is surjective, hence has full rank; if $i+d>c$ then $A_{i+d}=0$ and $\times L^{d}$ is trivially surjective. \end{itemize} ($\Rightarrow$) The fact that SLP together with a symmetric Hilbert function implies SLPn is clear from the definitions: each map $\times L^{c-2i}:A_i\to A_{c-i}$ has full rank and connects spaces of equal dimension, hence is a bijection. \end{proof} \begin{ex} The standard graded algebra $A=\F[x,y]/(x^2,xy,y^a)$ with $a>3$ has non-symmetric Hilbert function $1,2,1^{a-2}$ (here $1^{a-2}$ means $1$ repeated $a-2$ times). Notice that $A$ has the SLP because $L=x+y$ is a strong Lefschetz element. However, $A$ does not satisfy SLPn because its Hilbert function is not symmetric. \end{ex} \subsection{Stanley's Theorem} The most famous theorem in the area of investigation of the algebraic Lefschetz properties, and also the theorem which started this line of investigation, is the following: \begin{thm}[Stanley's theorem] \label{thm:Stanley} If $\chr(\F)=0$, then all monomial complete intersections, i.e. $\F$-algebras of the form $$A=\frac{\F[x_1,\ldots, x_n]}{(x_1^{d_1},\ldots,x_n^{d_n})}$$ with $d_1,\ldots,d_n\in \N$ have the SLP. \end{thm} \begin{proof} Recall that $H^\bullet(\P_\C^{d-1},\F)=\F[x]/(x^d)$, so by K\"unneth we have $$H^\bullet(\P_\C^{d_1-1}\times \P_\C^{d_2-1}\times \cdots \times \P_\C^{d_n-1},\F)=\F[x_1]/(x_1^{d_1})\otimes_\F\F[x_2]/(x_2^{d_2})\otimes_\F \cdots\otimes_\F \F[x_n]/(x_n^{d_n})=A.$$ Since $X=\P_\C^{d_1-1}\times \P_\C^{d_2-1}\times \cdots \times \P_\C^{d_n-1}$ is a smooth irreducible complex projective variety, the Hard Lefschetz theorem says that $A$ has SLP in the narrow sense, which implies that $A$ has SLP. \end{proof} Alternate proofs of this theorem have been given by Watanabe in \cite{Watanabe} using representation theory, by Reid, Roberts and Roitman in \cite{RRR} with algebraic methods, and also by Lindsey \cite{Lindsey} and by Herzog and Popescu \cite{HP}. We will give a different proof of Stanley's theorem later in these notes in \Cref{Stanley second}. \begin{exer} With help from a computer, make conjectures regarding the WLP and SLP for monomial complete intersections in positive characteristic. A characterization is known for SLP, but not for WLP. See \cite{Cook, Nicklasson} for related work. \end{exer} \subsection{Combinatorial applications} The following discussion of a spectacular application of SLP is taken from \cite{StanleyICM}. A {\em polytope} is a convex body in Euclidean space which is bounded and has finitely many vertices. Let $\cP$ be a $d$-dimensional simplicial convex polytope with $f_i$ $i$-dimensional faces, $0\leq i\leq d-1$. We call the vector $f(\cP)= (f_0,\ldots,f_{d-1})$ the {\em $f$-vector} of $\cP$. The problem of obtaining information about such vectors goes back to Descartes and Euler. In 1971 McMullen \cite{McMullen} gave a remarkable condition on a vector $(f_0,\ldots,f_{d-1})$ which he conjectured was equivalent to being the $f$-vector of some simplicial convex polytope.
To describe this condition, define a new vector $h(\cP) = (h_0, ...,h_d)$, called the {\em $h$-vector} of $\cP$, by \[ h_i=\sum_{j=0}^i \binom{d-j}{d-i}(-1)^{i-j}f_{j-1} \] where we set $f_{-1} = 1$. McMullen's conditions, though he did not realize it, turn out to be equivalent to $h_i=h_{d-i}$ for all $i$ together with the existence of a standard graded commutative algebra $A$ with $A_0=\F$ and $h_A(i)=h_i-h_{i-1}$ for $1\leq i<\lfloor d/2\rfloor$. Stanley \cite{Stanley} constructed from $\cP$ a $d$-dimensional complex projective toric variety $X(\cP)$ for which $\dim_\C H^{2i}(X(\cP))= h_i$. Although $X(\cP)$ need not be smooth, its singularities are sufficiently nice that the hard Lefschetz theorem continues to hold. Namely, $X(\cP)$ locally looks like $\C^n/G$, where $G$ is a finite group of linear transformations. Ta\-king $A=H^{*}(X(\cP))/(L)$ with degrees scaled by $1/2$, where $L$ is the class of a hyperplane section, the necessity of McMullen's condition follows from \cref{exer: equivalent WLP}(4). Sufficiency was proved about the same time by Billera and Lee \cite{BL}. \section{Lefschetz property via representation theory of $\sl_2$} \label{s: 4} \subsection{The Lie algebra $\sl_2$ and its representations} \phantom{a} Some of the exercises in this section are taken from \cite{Robles}. Throughout this section let $\F$ be an algebraically closed field of characteristic zero. \begin{defn} A {\em Lie algebra} is a vector space $\fg$ equipped with a bilinear operator $[-,-]:\fg\times \fg \to \fg$ satisfying the following two conditions : \begin{itemize} \item $[x,y]=-[y,x]$ \item $[[x,y],z]+[[y,z],x]+[[z,x],y]=0$. \end{itemize} The bilinear operator $[-,-]$ is called the bracket product, or simply the {\em bracket}. The second identity in the definition is called the {\em Jacobi identity}. \end{defn} \begin{rem} Any associative algebra has a Lie algebra structure with the bracket product defined by commutator $[x,y]=xy-yx$. Associativity implies the Jacobi identity. \end{rem} The set of $n\times n$ matrices $\mathcal{M}_n(\F)$ forms a Lie algebra since it is associative. This Lie algebra is denoted by $\gl_n(\F)$. \begin{defn} Since the set of matrices of trace zero is closed under the bracket of $\gl_n(\F)$ (because $\tr(AB)=\tr(BA)$ for any matrices $A,B$), it forms a Lie subalgebra \[ \sl_n(\F)=\{M\in \gl_n(\F) \mid \tr(M)=0\}. \] \end{defn} \begin{ex}[The Lie algebra $\sl_2(\F)$] In the case where $n=2$, $\sl_2(\F)$ is three-dimensional, with basis \[ E= \begin{bmatrix} 0 &1\\ 0 &0 \end{bmatrix}, \quad H= \begin{bmatrix} 1 & 0\\ 0 &-1 \end{bmatrix}, \quad F=\begin{bmatrix} 0 &0 \\ 1 & 0 \end{bmatrix} \] The three elements $E,H,F$ are called the {\em $\sl_2$-triple}. These elements satisfy the following three relations, which we call the fundamental relations: \begin{equation} \label{eq:sl2} [E,F]=H, \quad [H,E]=2E, \quad [H,F]=-2F. \end{equation} The algebra $\sl_2(\F)$ is completely determined by these relations. \end{ex} We are interested in representations of $\sl_2$. \begin{defn}[Lie algebra representation] Let $V$ be an $\F$-vector space. Then $\End(V)$ is a Lie algebra with the bracket defined by $[f,g]=f\circ g-g\circ f$. A {\em representation} of a Lie algebra $\fg$ is vector space $V$ endowed with a Lie algebra homomorphism \[\rho:\fg\to \End(V),\] i.e. a vector space homomorphism which satisfies \[\rho([x,y])=[\rho(x),\rho(y)].\] A representation is called {\em irreducible} if it contains no trivial (nonzero) subrepresentation i.e. 
if $W\subsetneq V$ is such that $\rho(W)\subseteq W$ then $W=0$. \end{defn} In the case of $\fg=\sl_2(\F)$, we abuse notation and call the set of elements $\rho(E), \rho(H), \rho(F)$ just $E, H, F$ and say they form an {\em $\sl_2$-triple}. \begin{exer}\label{ex: Sd irrep} Let $\F[x,y]_d$ be the vector space of homogeneous polynomials of degree $d$ in $\F[x,y]$. Prove that \begin{enumerate} \item $E=x\frac{\partial}{\partial y}, H=x\frac{\partial}{\partial x}-y\frac{\partial}{\partial y}, F=y\frac{\partial}{\partial x}$ form an $\sl_2$-triple. \item Prove that the monomial $x^ay^b$ is an eigenvector of $H$ with eigenvalue $a-b\in\Z$. In particular the eigenvalues of $H$ on $\F[x,y]_d$ are $d, d-2, d-4, \ldots, 4-d, 2-d, -d$. \item Prove that a basis of $\F[x,y]_d$ is $y^d, E(y^d), E^2(y^d), \ldots, E^d(y^d)$. \end{enumerate} Pictorially this can be summarized as \smallskip \begin{tikzpicture}[->,>=stealth',auto,node distance=2.5cm, thick] \node (1) {$0$}; \node (2) [right of=1] {$\F y^d$}; \node (3) [right of=2] {$\F xy^{d-1}$}; \node (4) [right of=3] {$\cdots$}; \node (5) [right of=4] {$\F x^{d-1}y$}; \node (6) [right of=5] {$\F x^d$}; \node (7) [right of=6] {$0$}; \path[every node/.style={font=\small}] (2) edge[bend right] node [midway,below] {E} (3) (3) edge[bend right] node [midway,below] {E} (4) (4) edge[bend right] node [midway,below] {E} (5) (5) edge[bend right] node [midway,below] {E} (6) (6) edge node [midway,below] {E} (7) (2) edge node [midway,above] {F} (1) (3) edge[bend right] node [midway,above] {F} (2) (4) edge[bend right] node [midway,above] {F} (3) (5) edge[bend right] node [midway,above] {F} (4) (6) edge[bend right] node [midway,above] {F} (5); \end{tikzpicture} \end{exer} We will soon see that the vector space in \cref{ex: Sd irrep} is the basic building block of all other representations of $\sl_2$. An important result on Lie algebra representations is: \begin{thm}[Weyl's Theorem] Any Lie algebra representation decomposes uniquely up to isomorphism as a direct sum of irreducible representations. \end{thm} \begin{defn}[Weight vectors] Let $\rho:\sl_2(\F)\to \End(V)$ be a representation. The eigenvalues of $H$ are called {\em weights} and the eigenvectors are called {\em weight vectors}. In particular an eigenvector $u$ is called a {\em lowest weight vector} if $Fu=0$ and is called a {\em highest weight vector} if $Eu=0$. \end{defn} \begin{ex} In the representation introduced in \cref{ex: Sd irrep} the highest weight vectors are the elements of $\F x^d$ and the lowest weight vectors are the elements of $\F y^d$. \end{ex} To justify the name of highest weight we state the following theorem: \begin{thm}[Irreducible representations of $\sl_2$] \label{thm:sl2reps} Let $\rho:\sl_2(\F)\to \End(V)$ be an irreducible representation with $\dim(V)=d+1$. Then there exist a basis $\cB=\{v_0, \ldots, v_d\}$ for $V$ such that \begin{enumerate} \item each $v_i$ is an eigenvector for $H$ with eigenvalue $-d+2i$, i.e. $Hv_i=(-d+2i)v_i$ \item $Ev_i=v_{i+1}$ for $i<d$, $Ev_d=0$ \item $Fv_i=i(d-i+1)v_{i-1}$ for $i>0$, $Fv_0=0$. 
\end{enumerate} In particular, the elements $E,H,F$ are represented with respect to the basis $\cB$ by the matrices \begin{eqnarray} [E]_\cB &=& \begin{bmatrix} 0& 0 & \cdots & 0 &0 \\ 1& 0& \cdots & 0 &0 \\ \vdots & \ddots &\ddots & \vdots & \vdots \\ 0 & 0 &\cdots & 1 &0 \end{bmatrix}, \\ {[H]_{\cB}} &=& \begin{bmatrix} -d& 0 & \cdots & 0 \\ 0& -d+2& \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & 0 &\cdots & d \end{bmatrix}, \label{eq: H} \\ {[F]_\cB} &=& \begin{bmatrix} 0& 1\cdot d & \cdots & 0 &0 \\ 0& 0& 2(d-1) & 0 &0 \\ \vdots & \ddots &\ddots & \vdots & \vdots \\ 0 & 0& \cdots&0 & d\cdot 1 \\ 0 & 0 &\cdots & 0 &0 \end{bmatrix}.\label{eq: F} \end{eqnarray} \end{thm} \begin{exer} Find a basis that satisfies the properties given by \cref{thm:sl2reps} for the representation $\F[x,y]_d$ introduced in \cref{ex: Sd irrep}. \end{exer} \cref{thm:sl2reps} above says in particular that there is only one irreducible representation of $\sl_2(\F)$ of dimension $d+1$ (up to isomorphism). A representative for this isomorphism class can be chosen to be the representation $\F[x,y]_d$ in \cref{ex: Sd irrep}. Furthermore, any finite-dimensional representation of $\sl_2(\F)$ has a basis consisting of weight vectors. This justifies the following: \begin{defn} Let $V$ be a representation of $\sl_2$ and let $W_\lambda(V)=\{v\in V \mid Hv=\lambda v\}$ be the eigenspace corresponding to a weight (eigenvalue) $\lambda$ for $H$. Then there is a decomposition $V=\bigoplus_{\lambda} W_\lambda(V)$ called the {\em weight space decomposition} of $V$. \end{defn} \begin{rem} If $V$ is an irreducible representation for $\sl_2(\F)$ and $\dim(V)=n+1$ then the weight spaces are the 1-dimensional spaces $W_{-n+2i}(V)=\F v_i$, with $v_i$ as in \cref{thm:sl2reps}. \end{rem} \begin{exer} \begin{enumerate} \item Suppose that $V$ is a representation of $\sl_2$ and that the eigenvalues of $H$ on $V$ are $2, 1, 1, 0, -1, -1, -2$. Show that the irreducible decomposition of $V$ is $V\cong \F[x,y]_2\oplus \F[x,y]_1\oplus \F[x,y]_1$. \item Prove that if $V$ is any representation of $\sl_2$ then its irreducible decomposition is determined by the eigenvalues of $H$. \end{enumerate} \end{exer} \begin{exer} Let $V$ be an $\sl_2$ representation and set $W_k=\{v\in V\mid H(v)=kv\}$. \begin{enumerate} \item Show that $\dim_\F W_k=\dim_\F W_{-k}$. \item Prove that $E^k:W_{-k}\to W_k$ is an isomorphism. \item Show that $\dim_\F W_{k+2}\leq \dim_\F W_k$ for all $k\geq 0$, that is, the two sequences \[ \ldots, \dim_\F W_4, \dim_\F W_2, \dim_\F W_0, \dim_\F W_{-2}, \dim_\F W_{-4},\ldots \] \[ \ldots, \dim_\F W_3, \dim_\F W_1, \dim_\F W_{-1}, \dim_\F W_{-3}, \ldots \] are unimodal. \end{enumerate} \end{exer} \subsection{Weight space decompositions and the narrow sense of SLP} We now show that there is a close connection between artinian algebras satisfying SLPn and the representations of $\sl_2$. \begin{rem} If $A$ is a graded artinian $\F$-algebra and $L$ is a linear form, then we can view $A$ as an $\F[L]$-module since by the universal mapping property of polynomial rings there exists a well-defined ring homomorphism $\F[L]\to A$ which maps $L\mapsto L$. Since $\F[L]$ is a PID and $A$ is a module over it, the structure theorem for modules over PIDs says that there is a module isomorphism \[ A\cong \F[L] / ( p_1^{e_1} ) \oplus \cdots \oplus \F[L] / ( p_k^{e_k} ) \] where each $p_i$ is a prime element of $\F[L]$ (there is no free part since $A$ is finite dimensional).
Since $A$ is furthermore graded, the elementary divisors $p_i^{e_i}$ must be homogeneous elements of $\F[L]$, thus $p_i=L$ for all $i$. This implies that $A$ decomposes as a direct sum \[ A\cong S^{(1)} \oplus \cdots \oplus S^{(k)}, \text{ with } S^{(i)}\cong \F[L] / ( L^{e_i} ). \] The cyclic $\F[L]$-modules $S^{(i)}$ are the {\em strands} of multiplication by $L$ on $A$, which are essential in defining the notion of Jordan type, a finer invariant than the Lefschetz properties. The {\em Jordan type} of the pair $(A,L)$ is the multi-set of sizes of Jordan blocks $e_1, e_2, \ldots, e_k$ ordered decreasingly; indeed, the action of $L$ on $S^{(i)}$ is given by a single Jordan block of size $e_i$. For a general linear form $L$ this multi-set is called the generic Jordan type of $A$. \end{rem} Here is the connection between SLPn and the representations of $\sl_2(\F)$: \begin{cor} Let $S$ be a graded $\F[L]$-module with $\dim_\F S=d$. The following are equivalent: \begin{enumerate} \item $S$ is a cyclic graded $\F[L]$-module, i.e. $S\cong \F[L] / ( L^{d} )$ (via a not necessarily degree-preserving isomorphism); \item $S\cong \F[x,y]_{d-1}$ as an irreducible representation of $\sl_2$ with $Es=Ls$. \end{enumerate} \end{cor} \begin{proof} This follows because the action of $L$ on $S$ as well as the action of $E$ on $\F[x,y]_{d-1}$ are each given by a single nilpotent Jordan block. Once the basis of $S$ has been fixed to be $1,L, L^2, \ldots, L^{d-1}$, the action of $H$ and $F$ can simply be defined by the matrices displayed in \cref{thm:sl2reps}. \end{proof} If we put the $\sl_2(\F)$-module structures on the individual strands together we obtain: \begin{thm}[SLPn via weight decomposition]\label{thm:sl2andSLPn} Let $A$ be a graded artinian algebra of socle degree $c$ and let $L\in A_1$. The following are equivalent: \begin{enumerate} \item $L$ is a strong Lefschetz element on $A$ in the narrow sense, \item $A$ is an $\sl_2(\F)$-representation with $E=\times L$ and the weight space decomposition of $A$ coincides with the grading decomposition via ${\rm weight}(v)=2\deg(v)-c$. This means that \[ A=\bigoplus_{i=0}^c A_i=\bigoplus_{i=0}^c W_{2i-c}(A), \text{ where } A_i=W_{2i-c}(A). \] \end{enumerate} \end{thm} \begin{proof}[Proof sketch] Suppose $L$ is a strong Lefschetz element on $A$ in the narrow sense. We construct an $\sl_2(\F)$-triple in $\End_\F(A)$ as follows: let $E=\times L:A\to A$. Consider the Jordan decomposition of $A$ with respect to the endomorphism $E$, written as $A=\bigoplus V_j$; that is, the subspaces $V_j$ are $E$-invariant subspaces on each of which $E$ acts by a single nilpotent Jordan block (these are the strands of multiplication by $L$). For each $V_j$, let $F_j,H_j:V_j\to V_j$ be the endomorphisms of $V_j$ given, with respect to the basis in which $E\big |_{V_j}$ is in Jordan form, by the matrices in \eqref{eq: H} and \eqref{eq: F}, respectively, where $d=\dim(V_j)-1$. Setting $H=\bigoplus H_j$ and $F=\bigoplus F_j$, one can check that $E, H, F$ form an $\sl_2(\F)$-triple. Furthermore, from the properties of Jordan type one knows that the Jordan blocks of $L$ are centered around the middle degree of $A$; see \cite[Proposition 2.38]{IMM}. It follows that if $v$ is a weight vector of weight $2k-d$ lying in a strand of dimension $d+1$, then it is in degree $ (c-d)/2 +k$ (note that $c\equiv d\pmod{2}$). Substituting $i=(c-d)/2 +k$ it follows that $W_{2i-c}(A)=A_i$. Conversely, suppose $A$ is an $\sl_2(\F)$-representation with $E=\times L$. Then one can use the information about the grading to verify that the Jordan blocks are centered around degree $\lfloor c/2\rfloor$. Thus the Jordan degree type is the transpose of the Hilbert function of $A$.
By \cite[Proposition 3.64 (2)]{book} it follows that $L$ is a strong Lefschetz element on $A$ in the narrow sense. \end{proof} \subsection{Tensor products}\label{s: 5} From \cref{thm:sl2andSLPn}, we can deduce how SLPn behaves when we take tensor products. We need the following lemma. \begin{lem} If $\F$ is an algebraically closed field of characteristic zero and $A,A'$ are associative algebras which are representations of $\sl_2(\F)$ , then so is $A\otimes_\F A'$ with the action $g\cdot(v\otimes v')=(gv)\otimes v' + v\otimes (gv')$. If $v,v'$ are weight vectors then $v\otimes v'$ is also a weight vector with $\weight(v\otimes v')=\weight(v)+\weight(v')$. \end{lem} \begin{proof} We show the statement about weights only: say $\weight(v)=\lambda$ and $\weight(v')=\lambda'$ so that $Hv=\lambda v, Hv'=\lambda v'$. Then \[ H(v\otimes v')=(Hv)\otimes v' + v\otimes (Hv')=\lambda v\otimes v'+v\otimes \lambda'v'=(\lambda+\lambda')v\otimes v' \] shows that $v\otimes v'$ is a weight vector with weight $\lambda+\lambda'$. \end{proof} \begin{thm} \label{thm:tensor} Let $\F$ be an algebraically closed field of characteristic zero. If $L$ is a strong Lefschetz element in the narrow sense on $A$ and if $L'$ is a strong Lefschetz element in the narrow sense on $A'$ then $L\otimes 1+1\otimes L'$ is a strong Lefschetz element in the narrow sense on $A\otimes_\F A'$. \end{thm} \begin{proof} By \cref{thm:sl2andSLPn} we have that if $c, c'$ are the socle degrees of $A,A'$, respectively, then $A_i=W_{2i-c}(A)$ and $A'_j=W_{2j-c'}(A')$, so \begin{equation}\label{eq: 1} A=\bigoplus_{i=0}^c A_i=\bigoplus_{i=0}^c W_{2i-c}(A) \quad \text{and} \quad A'=\bigoplus_{j=0}^{c'} A'_j=\bigoplus_{j=0}^{c'} W_{2j-c'}(A') \end{equation} imply \begin{equation}\label{eq: 2} A\otimes_\F A'=\bigoplus_{i=0, j=0}^{c,c'} A_i\otimes_\F A'_j=\bigoplus_{i=0, j=0}^{c,c'} W_{2i-c}(A)\otimes_\F W_{2j-c'}(A'). \end{equation} From the fact that $\deg(v\otimes v')=\deg(v)+\deg(v')$ and \eqref{eq: 1} we deduce that \[ (A\otimes_\F A')_k=\bigoplus_{i=0}^c A_i\otimes_F A'_{k-i}. \] Note that the maximum socle degree of $A\otimes_\F A'$ is $c+c'$. From the identity $$\weight(v\otimes v')=\weight(v)+\weight(v')$$ and \eqref{eq: 2} we deduce that \[ W_{2k-c-c'}(A\otimes_\F A')=\bigoplus_{i=0}^c W_{2i-c}(A)\otimes_\F W_{2(k-i)-c'}(A')=\bigoplus_{i=0}^c A_i\otimes_\F A'_{k-i}. \] Comparing, we see that $(A\otimes_\F A')_k=W_{2k-c-c'}(A\otimes_\F A')$, where the weight spaces of $A\otimes_\F A'$ correspond to the action \[ E(v\otimes v')=Ev\otimes v'+v\otimes Ev'=Lv\otimes v'+v\otimes L'v'=(L\otimes 1+1\otimes L')v\otimes v'. \] \cref{thm:sl2andSLPn} gives that $L\otimes 1+1\otimes L'$ is a strong Lefschetz element on $A\otimes_\F A'$. \end{proof} A corollary of \cref{thm:tensor} is the following \begin{cor}[Tensor product preserves SLPn] If $\F$ is a field of characteristic zero and $A,A'$ are graded artinian $\F$-algebras which satisfy SLPn, then $A\otimes_\F A'$ also satisfies SLPn. \end{cor} From the above corollary one can easily deduce Stanley's theorem applying induction on the embedding dimension $n$. \begin{cor}[Stanley's Theorem - second proof]\label{Stanley second} If $\F$ has characteristic 0, then the algebra $A=\F[x_1,\ldots,x_n]/(x_1^{d_1}, \ldots, x_n^{d_n})=\F[x_1]/(x_1^{d_1}) \otimes_\F \cdots \otimes_\F \F[x_n]/(x_n^{d_n}) $ satisfies SLP in the narrow sense. \end{cor} \begin{rem} \hfill \begin{enumerate} \item While the symmetric unimodality of Hilbert functions is preserved under taking tensor product, just unimodality is not. 
For example for \[ A=\F[x,y,z]/(x^2,xy,y^2,xz,yz,z^5) \] with Hilbert function $1, 3, 1, 1, 1$ we have that the Hilbert function of $A\otimes_\F A$ is $1, 6, 11, 8, 9, 8, 3, 2, 1$. \item While the SLPn is preserved under taking tensor product, the SLP (not in the narrow sense) is not preserved by tensor product. In the example above $A$ has SLP but since its Hilbert function is not unimodal, $A\otimes_\F A$ cannot have the SLP. \end{enumerate} \end{rem} The issue in part 2 of the remark is remedied by restricting to Gorenstein algebras, which have symmetric Hilbert function. Recall that for algebras with symmetric Hilbert function the SLP is equivalent to SLPn by \Cref{equiv narrow SLP}. Thus we have: \begin{cor}[{\cite[Theorem 6.1]{MW}}] If $\F$ is a field of characteristic zero and $A,A'$ are graded artinian Gorenstein $\F$-algebras which satisfy SLP, then $A\otimes_\F A'$ also satisfies SLP. \end{cor} \section{Gorenstein rings via Macaulay inverse systems} \label{s: 6} The description of the dual ring of the polynomial ring in \cref{DUAL} is taken from \cite{DNS}. The material in \cref{MIS} follows Eisenbud's Commutative Algebra book \cite{Eisenbud} and Geramita's lectures \cite[Lecture~9]{Ger95}. The material on Hessians in \cref{HESS} follows \cite{MW}. \subsection{The graded dual of the polynomial ring}\label{DUAL} Recall the notion of a dual for an $\F$-vector space: \begin{defn} Let $V$ be an $\F$-vector space. Its dual is \[V^*=\Hom_\F(V,\F)=\{\varphi: V\to \F \mid \varphi \text{ is } \F-\text{linear}\}, \] the vector space of linear functionals on $V$. \end{defn} \begin{exer}\label{exer: double dual} If $V$ is a finite dimensional vector space, there is a natural isomorphism of vector spaces $V\cong V^{**}$. \end{exer} We extend this idea to construct duals of rings and modules. \begin{defn}[Divided power algebra] \label{def:gradedhom} Say $R=\F[x_1,\ldots,x_n]$ is the polynomial ring. Let $$R^*:=\Hom_\F^{\rm gr}(R,\F)=\bigoplus_{i\geq 0} \Hom_\F(R_i,\F).$$ We use a standard shorthand for monomials: if $\mathbf{a}=(a_1,\ldots,a_n)\in \Z^{n}_{\ge 0}$, then $x^\mathbf{a}=x_1^{a_1}\cdots x_n^{a_n}$ is the corresponding monomial in $R$. If $x^\mathbf{a}$ is in $R_d$, we write $X^{[\mathbf{a}]}$ for the functional (in $R^*_d$) on $R_d$ which sends $x^{\mathbf{a}}$ to $1$ and all other monomials in $R_d$ to $0$. We'll make the convention from now on to write $X_i$ for the duals of the elements $x_i$ in $R_1^*$. As a vector space, $R^*$ is isomorphic to a polynomial ring in the $n$ variables $X_1, \ldots, X_n$. However, as we recall shortly, $R^*$ has the multiplicative structure of a \textit{divided power algebra}. For this reason, we call $X^{[\mathbf{a}]}$ a \textit{divided} monomial and we write $R^*=\F[X_1,\ldots, X_n]_{DP}$. \end{defn} The ring $R$ acts on $R^*$ by \textit{contraction}, which we denote by $\contract$. That is, if $x^{\mathbf{a}}$ is a monomial in $R$ and $X^{[\mathbf{b}]}$ is a divided monomial in $R^*$, then \[ x^{\mathbf{a}}\contract X^{[\mathbf{b}]}=\begin{cases} X^{[\mathbf{b}-\mathbf{a}]} & \text{ if } \mathbf{b} \ge \mathbf{a},\\ 0& \text{ otherwise}. \end{cases} \] This action is extended linearly to all of $R$ and $R^*$. This action of $R$ on $R^*$ gives a perfect pairing of vector spaces $R_d\times R^*_d \to\F$ for any degree $d\ge 0$. Suppose $U$ is a subspace of $R_d$. We define \[ U^{\perp}=\{g\in R^*_d: f\contract g=0 \mbox{ for all } f\in U\}. 
\] Macaulay \cite{Macaulay} introduced the {\em inverse system} of an ideal $I$ of $R$ to be \[ I^{-1}:= \mbox{Ann}_{R^*}(I)=\{g\in R^*: f\contract g=0 \mbox{ for all } f\in I\}. \] If $I$ is a homogeneous ideal of $R$ then the inverse system $I^{-1}$ can be constructed degree by degree using the identification $(I^{-1})_d=I_d^{\perp}$. We return to this notion in \cref{defn: inverse system}. A priori, $R^*$ is simply a graded $R$-module. However, $R^*$ can be equipped with a multiplication which makes it into a ring. Suppose $\mathbf{a}=(a_1,\ldots,a_n),\mathbf{b}=(b_1,\ldots,b_n)\in \Z^{n}_{\ge 0}$. The multiplication in $R^*$ is defined on monomials by \begin{equation}\label{eq:dividedmultiplication} X^{[\mathbf{a}]}X^{[\mathbf{b}]}= {\mathbf{a}+\mathbf{b} \choose \mathbf{a} } X^{[\mathbf{a+b}]}, \end{equation} where \begin{equation}\label{eq:multifactorial} \mathbf{a}!=\prod_{i=1}^n a_i ! \quad\mbox{and}\quad \binom{\mathbf{a}+\mathbf{b}}{\mathbf{a}}=\prod_{i=1}^n \binom{a_i+b_i}{a_i}. \end{equation} This multiplication is extended linearly to all of $R^*$. We see from the above definition that if $\mathbf{a}=(a_1,\ldots,a_n)$ then $X^{[\mathbf{a}]}=\prod_{i=1}^n X_i^{[a_i]}$. \begin{exer}\label{ex: 6.4} Now set $X^{\mathbf{a}}=\prod_{i=1}^n X_i^{a_i}$, where the multiplication occurs in the divided power algebra as defined above. Deduce from the above definition that \begin{equation}\label{eq:regularmonomialtodividedmonomial} X^{\mathbf{a}}=\mathbf{a}! X^{[\mathbf{a}]}. \end{equation} \end{exer} \begin{rem} In characteristic zero, $\mathbf{a}!$ never vanishes and so, by \eqref{eq:regularmonomialtodividedmonomial}, $R^*$ is generated as an algebra by $X_1,\ldots,X_n$, just like the polynomial ring. However, in characteristic $p>0$, $R^*$ is not finitely generated: as an algebra it is generated by the divided power monomials $X_j^{[p^{k}]}$ for $j=1,\ldots ,n$ and $k\ge 0$. The exercise below justifies this last assertion. \end{rem} \begin{exer} Prove that in characteristic $p$, for any $\mathbf{a}=(a_1,\ldots,a_n)$ where $a_j=\sum_i a_{ij}p^i$, we have \[ X^{[\mathbf{a}]}=\prod_{j=1}^n \prod_{i} (X_j^{[p^i]})^{a_{ij}}. \] {\em Hint:} Use Lucas' identity: given base $p$ expansions $a=\sum a_ip^i$ and $b=\sum b_ip^i$ for $a, b\in\N$, we have \[ {b \choose a} \equiv \prod_{i=0}^\infty {b_i \choose a_i} \pmod p. \] \end{exer} We now revisit the characteristic zero case. Suppose $\F$ is a field of characteristic zero and let $S=\F[X_1,\ldots,X_n]$ be a polynomial ring. Consider the action of $R$ on $S$ by partial differentiation, which we represent by `$\circ$'. That is, if $\mathbf{a}=(a_1,\ldots,a_n)\in \Z^{n}_{\ge 0}$, $x^\mathbf{a}=x_1^{a_1}\cdots x_n^{a_n}$ is a monomial in $R$, and $g\in S$, we write \[ x^{\mathbf{a}}\circ g=\frac{\partial^{\mathbf{a}}g}{\partial X^{\mathbf{a}}} \] for the action of $x^{\mathbf{a}}$ on $g$ (extended linearly to all of $R$). In particular, if $\mathbf{a}\le \mathbf{b}$, then \[ x^{\mathbf{a}}\circ X^{\mathbf{b}}=\frac{\mathbf{b}!}{(\mathbf{b}-\mathbf{a})!}X^{\mathbf{b}-\mathbf{a}}, \] where we use the conventions in~\eqref{eq:multifactorial}. This action gives a perfect pairing $R_d\times S_d\to\F$, and, given a homogeneous ideal $I\subset R$, we define $I_d^{\perp}$ and $I^{-1}$ in the same way as we do for contraction. Since we are in characteristic zero, the map of rings $\Phi:S\to R^*$ defined by $\Phi(X_i)=X_i$ extends to all monomials via~\eqref{eq:regularmonomialtodividedmonomial} to give $\Phi(X^\mathbf{a})=\mathbf{a}!\, X^{[\mathbf{a}]}$.
Thus $S$ and $R^*$ are isomorphic rings in view of \Cref{ex: 6.4}. Moreover, if $F\in R$ and $g\in S$, then $\Phi(F\circ g)=F\contract\Phi(g)$~\cite[Theorem~9.5]{Ger95}, so $S$ and $R^*$ are isomorphic as $R$-modules. \subsection{Macaulay inverse systems}\label{MIS} \begin{defn}[Dualizing functor] Let $M$ be a finitely generated $R$-module. Define the dual of $M$ to be $ D(M) = \Hom_R(M, R^*) $. Let $f:M\to N$ be an $R$-module homomorphism. Define $D(f)$ to be the induced $R$-module homomorphism \[D(f):D(N)=\Hom_R(N, R^*)\to D(M)=\Hom_R(M, R^*)\] given by \[ D(f)(\varphi)=\varphi\circ f. \] This makes $D$ into a contravariant functor in the category of finitely generated $R$-modules. \end{defn} \begin{exer} Let $M$ be a finitely generated $R$-module. Recall that we defined $D(M)=\Hom_R(M, R^*)$. In this exercise we also consider the set $M^*=\Hom_\F(M,\F)$ with its two structures induced from the $R$-module structure of $M$ by setting \[ r\phi(x)=\phi(r\cdot x), \quad \forall r\in R, x\in M. \] Show that $D(M)\cong \Hom_\F(M,\F)$ as $R$-modules, so an equivalent way to define the dual module dual to $M$ is $M^*$ (with its $R$-module structure). {\em Hint:} Hom-tensor adjointness may come in handy. \end{exer} We now come to a form of duality that involves the above defined functor. \begin{thm}[Matlis duality]\label{thm:Matlis} \label{thm: Matlis} The functor $D$ induces an anti-equivalence of categories between \[ \{\text{noetherian $R$-modules}\} \leftrightarrow \{\text{artinian $R$-submodules of $R^*$}\} \] given by sending $M\mapsto D(M)$. \end{thm} Next we wish to make the meaning of $D(M)$ more concrete in the special case when $M=R/I$ is a cyclic $R$-module. \begin{lem} \label{lem:DR/I} Suppose $I$ is a homogeneous ideal of a polynomial ring $R$. We compute $$D(R/I)=\Hom_R(R/I, R^*)\cong \Ann_{R^*}(I) =(0:_{R^*} I)=\{g\in R^* \mid f \contract g=0 \ \forall f\in I\}.$$ \end{lem} \begin{defn}[Inverse system]\label{defn: inverse system} Suppose $I$ is a homogeneous ideal of a polynomial ring $R$. The {\em inverse system} of $I$ is the vector space \[ I^{-1}=\{g\in R^*\mid f\contract g=0, \forall f\in I\}. \] \end{defn} \begin{rem} Don't let the notation deceive you! If $I$ is an ideal of $R$, it does not mean that $I^\perp$ is an ideal (or $R^*$-submodule) of $R^*$. It is just an $R$-module which happens to be a subset of $R^*$. \end{rem} \begin{ex} Concretely, say \begin{enumerate} \item $I=(x^2,y^3)\subseteq R=\F[x,y]$. Then \[ I^{-1}=(0:_{R^*} I)=\Span\{XY^2, XY, Y^2, X, Y, 1\}=R\contract XY^2 \] is the $R$-submodule of $R^*$ generated by $XY^2$. \item $I=(x^2,xy^2,y^3)\subseteq R=\F[x,y]$. Then \[ I^{-1}=(0:_{R^*} I)=\Span\{XY, Y^2, X, Y, 1\}=R\contract XY +R\contract Y^2 \] is an $R$-submodule of $R^*$ with two generators. \end{enumerate} \end{ex} Next, take \begin{ex}\label{ex: perps} \begin{enumerate} \item $I=(x)\subseteq R=\F[x,y]$. Then \[ I^{-1}=(0:_{R^*} I)=\Span\{Y^i\mid i\geq 0\}. \] \item $I=(x^d)\subseteq R=\F[x,y]$. Then \begin{eqnarray*} I^{-1}=(0:_{R^*} I) &=& \Span\{X^iY^j\mid 0\leq i\leq d-1, j\geq 0\} \\ &=& R^*_0\oplus R^*_1 \oplus R^*_2 \oplus \cdots \oplus R^*_{d-1} \oplus YR^*_{d-1} \oplus Y^2R^*_{d-1}\oplus \cdots \oplus Y^kR^*_{d-1}\oplus \cdots \end{eqnarray*} \end{enumerate} \end{ex} Both of the above $(0:_{R^*} I)$ are non-finitely generated $R$-modules. \Cref{thm:Matlis} shows that this corresponds to $R/I$ not being artinian. 
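These computations are easy to check by machine. The following \emph{Macaulay2} \cite{M2} sketch is an illustration only: we identify the divided power variables $X,Y$ with $x,y$, and use the built-in function \texttt{contract}, which implements exactly the contraction action used above. It verifies the first example: both generators of $I=(x^2,y^3)$ annihilate $XY^2$ under contraction, and $\dim_\F R/I=6$ matches the six divided monomials spanning $I^{-1}$.
\begin{verbatim}
-- illustration: I = (x^2, y^3) and its inverse system generated by X Y^2
R = QQ[x,y];
F = x*y^2;                          -- stands for the dual monomial X Y^2
I = ideal(x^2, y^3);
contract(gens I, matrix{{F}}) == 0  -- true: both generators annihilate F
A = R/I;
numcols basis A                     -- 6, the number of divided monomials in I^{-1}
\end{verbatim}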
\begin{exer} Generalize \cref{ex: perps} to find the inverse system of the ideal defining a point in projective $n$-space and the inverse systems of all of the powers of this ideal. \end{exer} We now wish to study the inverse functor involved in the Matlis duality \cref{thm: Matlis}. In order to do this we define the inverse system of an $\F$-subspace of $R^*$. \begin{defn} Let $V$ be an $\F$-vector subspace of the $\F$-algebra $R^*$. The {\em inverse system} of $V$ is \[ V^\perp=\Ann_R(V)=\{f\in R\mid f\circ v=0, \forall v\in V\}. \] We will be most interested in the case when $V=\Span\{F\}$ is a 1-dimensional $\F$-vector space and thus \[ F^\perp=\Ann_R(F)=\{f\in R\mid f\circ F=0\}. \] \end{defn} Macaulay inverse system duality is a concrete version of Matlis duality \cref{thm: Matlis} which can be stated in terms of the inverse systems defined above as follows: \begin{thm}[Macaulay inverse system duality] \label{thm:Mac} With notation as above, there are bijective correspondence between \begin{eqnarray*} \{R-\text{modules } M\subseteq R^*\} &\leftrightarrow & \{R/I \mid I\subseteq R \text{ homogeneous ideal}\} \\ M &\mapsto & R/\Ann_R(M)\\ I^\perp=D(R/I) &\mapsfrom& R/I. \end{eqnarray*} Furthermore, we have the additional correspondences \begin{center} \begin{tabular}{cccc} (a) & $M$ finitely generated & $\iff$ & $R/\Ann_R(M)$ artinian\\ (b) & $M=R\circ F$ cyclic& $\iff$ & $R/\Ann_R(F)$ artinian Gorenstein \\ & & & $\deg(F)=$ socle degree of $R/\Ann_R(F)$. \end{tabular} \end{center} \end{thm} The value of \cref{thm:Mac} often lies in producing examples of artinian Gorenstein rings. \begin{defn} In view of statement (b) in \cref{thm:Mac}, the polynomial $F\in R^*$ is called a {\em Macaulay dual generator} for $R/\Ann_R(F)$. \end{defn} \begin{ex} The artinian Gorenstein algebra with Macaulay dual generator $$F=X^2+Y^2+Z^2$$ is the ring of \cref{ex: Gor} $$\F[x,y,z]/\Ann_{\F[x,y,z]}(F)=\F[x,y,z]/(x^2-y^2,y^2-z^2,xy,xz,yz).$$ \end{ex} \begin{ex} The artinian Gorenstein algebra with Macaulay dual generator $$F=X_1^{d_1}\cdots X_n^{d_n}$$ is the monomial complete intersection $$\F[x_1,\ldots, x_n]/\Ann_{\F[x_1,\ldots,x_n]}(F)=\F[x_1,\ldots, x_n]/(x_1^{d_1+1},\ldots,x_n^{d_n+1}).$$ \end{ex} \begin{defn} For a graded module $M$ and an integer $d$, define $M(d)$ to be the module $M$ with grading modified such that $M(d)_i=M_{d+i}$. \end{defn} \begin{exer} \label{ex: Gor} For any homogeneous polynomial $F\in R^*$ of degree $d$, prove \begin{enumerate} \item $\Ann_R(F)^\perp=R\circ F$. This statement is an instance of Macaulay's double annihilator theorem. \item the cyclic ring $A=R/\Ann_R(F)$ is artinian Gorenstein if and only if the function $A\to D(A)(-d)$, $a\mapsto [b\mapsto (ab)\circ F]$ is an isomorphism. \end{enumerate} {\em Hint for (1):} Start by showing the equality is true in degree $d$, then use the $R$-module structure.\\ {\em Hint for (2):} Use \cref{prop: Poincare}. Prove that the function $a \mapsto (a\contract F)(0)$ is an orientation on $A$ and that $A$ satisfies Poincar\'e duality with respect to this orientation. \end{exer} \noindent In view of \cref{ex: Gor} we can state an alternate definition of graded Gorenstein rings. \begin{defn} An artinian graded ring $A$ is {\em Gorenstein} of socle degree $d$ if and only if $A\cong D(A)(-d)$ as graded $A$-modules (degree preserving isomorphism). \end{defn} \subsection{SLP for Gorenstein rings via Hessian matrices}\label{HESS} For this section let $R=\F[x_1,\dots,x_n]$ be a polynomial ring and $R^*$ its graded dual. 
We will further assume that $\chr(\F)=0$, so that $R^*$ is isomorphic to the polynomial ring $\F[X_1,\ldots, X_n]$ with $R$-action $x_i\circ F=\frac{\partial F}{\partial X_i}$; we use this description of $R^*$ throughout this section. \begin{lem}\label{lem: 6.24} Let $F\in R^*_c$ and let $L=a_1x_1+\dots+a_nx_n\in R_1$. Then $$L^c\circ F=c!\cdot F(a_1,\ldots, a_n).$$ \end{lem} \begin{proof} \[ L^c\circ F=\sum_{i_1+\cdots+i_n=c}\frac{c!}{i_1!\cdots i_n!}a_1^{i_1}\cdots a_n^{i_n} x_1^{i_1}\cdots x_n^{i_n}\circ F=c!\cdot F(a_1,\ldots, a_n). \] \end{proof} \begin{defn}[Higher Hessians] Let $F\in R^*$ be a homogeneous polynomial and let $B=\{b_1,\ldots, b_s\}\subseteq R_d$ be a finite set of homogeneous polynomials of degree $d \geq 0$. We call \[ \Hess_B^d(F)=\left[b_ib_j \circ F\right]_{1\leq i,j\leq s} \text{ and } \hess_B^d(F)=\det \Hess_B^d(F) \] the $d$-th {\em Hessian matrix} and the $d$-th {\em Hessian determinant} of $F$ with respect to $B$, respectively. \end{defn} \begin{rem} If $B=\{x_1,\ldots, x_n\}$ then $\Hess_B^1(F)=\left[x_ix_j \circ F\right]_{1\leq i,j\leq n}=\left[\frac{\partial^2 F}{\partial X_i \partial X_j}\right]_{1\leq i,j\leq n}$ is the classical Hessian of $F$. \end{rem} Hessians are useful in establishing the SLP for artinian Gorenstein rings. \begin{thm}[Hessian criterion for SLP \cite{MW}]\label{thm: hessian criterion} Assume $\F$ is a field of characteristic zero. Let $A$ be a graded artinian Gorenstein ring with Macaulay dual generator $F\in R^*_c$. Then $A$ has the SLP if and only if \[ \hess^i_{B_i}(F)\neq 0 \text{ for } 0\leq i\leq \lfloor \frac{c}{2} \rfloor \] where $B_i$ is some (equivalently, any) basis of $A_i$. \end{thm} \begin{proof} From the hypothesis and \cref{thm:Mac} we have that $A=R/\Ann_R(F)$ has socle degree $c=\deg(F)$. Since $A$ is Gorenstein, $A$ has symmetric Hilbert function, so by \Cref{equiv narrow SLP} $A$ has SLP if and only if $A$ has SLP in the narrow sense, i.e. there exists $L\in A_1$ such that for any $0\leq i\leq \lfloor \frac{c}{2}\rfloor$ the multiplication maps $L^{c-2i}:A_i\to A_{c-i}$ are vector space isomorphisms. Say $L=a_1x_1+\dots+a_nx_n$. Recall that the isomorphism $A\cong D(A)(-c)=(R\circ F)(-c), a\mapsto a\circ F$ induces vector space isomorphisms $A_{c-i}\cong A_{i}^*$ also defined by $a\mapsto \left[b\mapsto b\circ (a\circ F)=(ba)\circ F\right]$; see \Cref{ex: Gor}. This isomorphism is denoted $- \circ F$ in the sequence displayed below. The composite map \[ T_i:A_i\stackrel{\times L^{c-2i}}{\lra} A_{c-i} \stackrel{-\circ F}{\lra} A_i^* \] is an isomorphism if and only if multiplication by $L^{c-2i}$ is an isomorphism. Let $B_i$ be a basis for $A_i$ and let $B^*_i$ be its dual, which is a basis for $A_i^*$. The matrix $[t^{(i)}_{jk}]$ for $T_i$ with respect to these bases is defined as follows \[ T_i(b_j)=\sum_{k=1}^s t^{(i)}_{jk}b_k^*. \] Hence we compute \[ t^{(i)}_{jk}=T_i(b_j)(b_k)=(L^{c-2i}b_j b_k) \circ F=L^{c-2i}\circ(b_j b_k \circ F). \] Using \Cref{lem: 6.24} the formula above becomes \[ t^{(i)}_{jk}=(c-2i)!\,(b_jb_k\circ F)(a_1,\ldots, a_n). \] Thus $T_i$ is an isomorphism if and only if $$\hess^i_{B_i}F(a_1,\ldots, a_n)=\det\left[(b_ib_j \circ F)(a_1,\ldots, a_n)\right]_{1\leq i,j\leq s}\neq 0.$$ Overall, the SLP holds if and only if for $ 0\leq i\leq \lfloor \frac{c}{2} \rfloor$ the Hessian determinant $\hess^i_{B_i}F$ does not vanish identically. \end{proof} \begin{ex} Say $F=X^{2}+Y^{2}+Z^{2}$.
Then with respect to the standard monomial basis for each $R_i$ \begin{eqnarray*} \hess^0(F) &=&F \\ \hess^1(F) &=&\det \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 &0 \\ 0 & 0 & 2\end{bmatrix} =8\\ \hess^i(F) &=&0 \text{ for }i\geq 2. \end{eqnarray*} This shows the algebra in \cref{ex: Gor} has SLP in characteristic zero. \end{ex} \begin{ex}[H. Ikeda \cite{Ikeda}] \label{ex:Ikeda} Let $G=XYW^3+X^3ZW+Y^3Z^2$. Then $A=R/ \Ann_R(G)$ has Hilbert function $(1, 4, 10, 10, 4, 1)$ and a basis for $A_1$ is $B_1=\{x, y, z,w\}$ whereas a basis for $A_2$ is $B_2=\{x^2, xy, xz, xw, y^2, yz, yw, z^2, zw, w^2\}$. Furthermore \begin{eqnarray*} \hess^0(G) &=&G \\ \hess^1_{B_1}(G) &=&\det \begin{pmatrix} 6XZW & W^3 & 3X^2W & 3X^2Z+3YW^2\\ W^3 & 6YZ^2 & 6Y^2Z &3XW^2\\ 3X^2W & 6Y^2Z & 2Y^3 & X^3\\ 3X^2Z+3YW^2 & 3XW^2 &X^3 & 6XYW \end{pmatrix}\neq 0 \\ \hess^2_{B_2}(G) &=&\det \left(\begin{matrix} 0&0&6\,W&6\,Z&0&0&0&0&6\,X&0\\ 0&0&0&0&0&0&0&0&0&6\,W\\ 6\,W&0&0&6\,X&0&0&0&0&0&0\\ 6\,Z&0&6\,X&0&0&0&6\,W&0&0&6\,Y\\ 0&0&0&0&0&12\,Z&0&12\,Y&0&0\\ 0&0&0&0&12\,Z&12\,Y&0&0&0&0\\ 0&0&0&6\,W&0&0&0&0&0&6\,X\\ 0&0&0&0&12\,Y&0&0&0&0&0\\ 6\,X&0&0&0&0&0&0&0&0&0\\ 0&6\,W&0&6\,Y&0&0&6\,X&0&0&0\\ \end{matrix}\right) =0. \end{eqnarray*} We conclude that the multiplication map $\times L:A_2\to A_3$ fails to have maximal rank for every $L\in A_1$, so $A$ does not have the WLP. However, the map $\times L^3:A_1\to A_4$ does have maximal rank for a general $L$. (These Hessian computations are easily carried out by machine; see the Macaulay2 sketch at the end of this subsection.) \end{ex} \begin{exer}[R. Gondim \cite{Gondim}]\label{ex: Gondim} Let $x_1,\ldots, x_n$ and $u_1, \ldots, u_m$ be two sets of indeterminates with $n \geq m \geq 2$. Let $f_i \in \F[x_1, \ldots, x_n]_k$ and $g_i \in \F[u_1,\ldots, u_m]_e$ for $1\leq i\leq s$ be linearly independent forms with $1 \leq k<e$. If $s >\binom{m-1+k}{k}$, then \[ F=f_1g_1+\cdots+f_sg_s \] is called a Perazzo form and $A=\F[x_1, \ldots, x_n,u_1,\ldots,u_m]/\Ann(F)$ is called a Perazzo algebra. \begin{enumerate} \item Show that $\hess^k(F)=0$ and so $A$ does not have SLP. \item Make conjectures regarding the Hilbert functions of Perazzo algebras. \item Make conjectures regarding the WLP for Perazzo algebras. \item Do there exist two Perazzo algebras $A$ and $B$ having the same Hilbert function so that $A$ has WLP and $B$ does not? \end{enumerate} Some answers to (2) and (3) can be found in \cite{AADFMMMN}. Part (4) is an open problem suggested by Lisa Nicklasson. \end{exer} \begin{cor} Let $F\in \F[X_1,\ldots, X_n], G\in \F[Y_1,\ldots, Y_m]$ be homogeneous polynomials of the same degree. Then $A=\F[x_1,\ldots, x_n]/\Ann(F)$ and $B= \F[y_1,\ldots, y_m]/\Ann(G)$ both have SLP if and only if \[ C= \F[x_1,\ldots, x_n,y_1,\ldots, y_m]/\Ann(F+G) \text{ satisfies SLP}, \] where each annihilator is taken in the corresponding polynomial ring. \end{cor} \begin{proof} It turns out that for $1\leq i< \deg(F)$ a basis $\beta$ of $C_i$ is given by the union of a basis $\beta'$ of $A_i$ and a basis $\beta''$ of $B_i$ (for a proof of this refer to \cref{prop:ConnSumF} and \cref{exactFP}) and hence the Hessians of $F+G$ look like \begin{eqnarray*} \Hess^i(F+G) &=& \begin{bmatrix} b_ib_j\circ (F+G)\end{bmatrix}_{b_i,b_j\in \beta}=\begin{bmatrix} b'_ib'_j\circ F & 0\\ 0 & b''_ib''_j\circ G \end{bmatrix}_{b'_i,b'_j\in \beta',b''_i,b''_j\in \beta''}\\ & =&\begin{bmatrix} \Hess^i(F) & 0 \\ 0 & \Hess^i(G)\end{bmatrix}\\ \hess^i(F+G) &=&\hess^i(F)\hess^i(G). \end{eqnarray*} Now we see that $\hess^i(F+G)\neq 0$ if and only if $\hess^i(F)\neq 0$ and $\hess^i(G)\neq 0$, which gives the desired conclusion. \end{proof} We take the preceding construction up again in the following section, generalizing the construction that produces $C$ from $A$ and $B$ in \Cref{Def_CS}.
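Before moving on, we note that the Hessian computations in this subsection are easily automated. The following \emph{Macaulay2} \cite{M2} sketch is an illustration only: it identifies $R^*$ with $R$ (we are in characteristic zero), so that $b\circ F$ is ordinary differentiation, implemented by \texttt{diff}. It reproduces $\hess^1(X^2+Y^2+Z^2)=8$ and the vanishing of $\hess^2_{B_2}(G)$ for Ikeda's polynomial $G$, where $B_2$ consists of all degree two monomials since $h_A(2)=10$.
\begin{verbatim}
-- first Hessian of F = X^2 + Y^2 + Z^2 with respect to the basis of R_1
R = QQ[X,Y,Z];
F = X^2 + Y^2 + Z^2;
B1 = basis(1, R);
det diff(transpose B1, diff(B1, F))   -- 8, as computed above

-- second Hessian of Ikeda's G; here B_2 is all of R_2 since h_A(2) = 10
S = QQ[X,Y,Z,W];
G = X*Y*W^3 + X^3*Z*W + Y^3*Z^2;
B2 = basis(2, S);
det diff(transpose B2, diff(B2, G))   -- 0: the second Hessian vanishes
\end{verbatim}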
\section{Topological ring constructions and the Lefschetz properties} \label{s: 7} We have seen in \cref{s: 1} that the Lefschetz properties emerged from algebraic topology. Now we return to this idea implementing some constructions that originate in topology at the ring level. The material in \cref{s:FP} is taken from \cite{IMS} and the material in \cref{s: BUG} is taken from \cite{IMMSW}. \subsection{Fiber products and connected sums}\label{s:FP} We first consider the operation termed connected sum. A connected sum of manifolds along a disc is obtained by identifying a disk in each (with opposite orientations). One can more generally take connected sums by identifying two homeomorphic sub-manifolds, one from each summand. If the cohomology rings of the two summands are $A$ and $B$ and the cohomology ring of the common submanifold is $T$, then it turns out that the cohomology ring of the connected sum is $A\#_TB$, a ring that we term the connected sum of $A$ and $B$ over $T$ in \cref{Def_CS}. To define a connected sum of rings we need a preliminary construction. Recall that an {\em oriented} AG algebra is a pair $(A,\int_A)$ with $A$ an AG algebra and $\int_A$ an orientation as in \cref{prop: Poincare}. A choice of orientation on $A$ also corresponds to a choice of Macaulay dual generator. \begin{exer} Every orientation on $A$ can be written as the function $\int_A:A\to K$ defined by $\int_A g =(g\circ F)(0)$ for some Macaulay dual generator $F$ of $A$. The notation $(g\circ F)(0)$ refers to evaluating the element $g\circ F$ of $R'$ at $X_1=\cdots=X_n=0$. \end{exer} Next we discuss how the orientations of two AG algebras relate. \begin{defn}[Thom class] \label{def:ThomClass} Let $(A,\int_A)$ and $(T,\int_T)$ be two oriented AG $K$-algebras with socle degree $d$ for $A$ and $k$ for $T$, respectively, with $d\geq k$. Let $\pi: A \to T$ be a graded map. By \cite[Lemma 2.1]{IMS}, there exists a unique homogeneous element $\tau_A \in A_{d-k}$ such that $\int_A(\tau _A a)=\int_T(\pi (a))$ for all $a\in A$ ; we call it the {\em Thom class} for $\pi : A \to T$. \end{defn} Note that the Thom class for $\pi : A \to T$ depends not only on the map $\pi $, but also on the orientations chosen for $A$ and $T$. \begin{ex} Let $(A,\int_A)$ be an oriented AG $K$-algebra with socle degree $d$. Consider $(K,\int_K)$ where $f_K:K\to K$ is the identity map. Then the Thom class for the canonical projection $\pi:A\to K$ is the unique element $a_{soc}\in A_d$ such that $\int_A a_{soc}=1$. \end{ex} \begin{exer} Given a homomorphism $\pi: A\to T$ of AG algebras having dual generators $F, H$ of degrees $d$ and $k$, respectively, with $d\geq k$, show that the Thom class of \cref{def:ThomClass} is the unique element $\tau$ of $A_{d-k}$ such that $\tau\circ F=H$. \end{exer} \begin{defn} \label{def:fiberProduct} Given graded $\F$-algebras $A$, $B$, and $T$, and graded $\F$-algebra maps $\pi_A\colon A\rightarrow T$ and $\pi_B\colon B\rightarrow T$, the \emph{fiber product} of $A$ and $B$ over $T$ (with respect to $\pi_A$ and $\pi_B$) is the graded $\F$-subalgebra of $A\oplus B$ $$A\times_T B=\left\{(a,b)\in A\oplus B \ \left| \ \pi_A(a)=\pi_B(b)\right.\right\}.$$ \end{defn} Let $\rho_1\colon A\times_TB\rightarrow A$ and $\rho_2\colon A\times_T B\rightarrow B$ be the natural projection maps. It is well known that fiber products are pullbacks in the category of $\F$ algebras and hence they satisfy the following universal property. 
\begin{lem} \label{lem:UnivProp} The fiber product $A\times_TB$ satisfies the following universal property: If $C$ is another $\F$-algebra with maps $\phi_1\colon C\rightarrow A$ and $\phi_2\colon C\rightarrow B$ such that $\pi_A\circ\phi_1(c)=\pi_B\circ\phi_2(c)$ for all $c\in C$, then there is a unique $\F$-algebra homomorphism $\Phi\colon C\rightarrow A\times_TB$ which makes the diagram below commute: \begin{equation} \label{eq:UP} \xymatrix{C \ar@{-->}[dr]^{\Phi} \ar@/^1pc/[drr]^-{\phi_1}\ar@/_1pc/[ddr]_-{\phi_2}& & \\ & A\times_TB\ar[r]^-{\rho_1}\ar[d]_-{\rho_2} & A\ar[d]^-{\pi_A}\\ & B\ar[r]_-{\pi_B} & T.\\} \end{equation} \end{lem} By \cite[Lemma 3.7]{IMS} the fiber product is characterized by the following exact sequence of vector spaces: \begin{equation}\label{exactFP} 0 \to A \times _T B \to A\oplus B \xrightarrow{\pi_A-\pi_B} T\to 0, \end{equation} whence the Hilbert function of the fiber product satisfies \begin{equation}\label{eq: HF FP} H_{ A \times _T B}=H_A+H_B-H_T. \end{equation} Henceforth we assume that $\pi_A(\tau_A) = \pi_B(\tau_B)$, so that $(\tau_A,\tau_B) \in A \times _T B$. \begin{defn}\label{Def_CS} The {\em connected sum} of the oriented AG $K$-algebras $A$ and $B$ over $T$ is the quotient ring of the fiber product \[A\times _T B := \{(a, b) \in A\oplus B \mid \pi_A(a) = \pi _B(b) \}\] by the principal ideal generated by the pair of Thom classes $(\tau_A, \tau_B)$, i.e. $$ A \#_TB = (A \times_T B)/ \langle (\tau_A, \tau_B) \rangle .$$ \end{defn} By \cite[Lemma 3.7]{IMS} the connected sum is characterized by the following exact sequence of vector spaces: \begin{equation}\label{exactCS} 0 \to T(k - d) \to A \times _T B \to A\# _T B \to 0. \end{equation} Therefore, the Hilbert series of the connected sum satisfies \begin{equation}\label{HilbertCS} HF_{A\# _T B}( t) = HF_A( t) + HF_B( t)- (1 + t^{d-k})HF_T( t). \end{equation} When $T=\F$ we have an easy description of the fiber product and connected sum. \begin{prop}\label{prop:ConnSumF} Let $R=\F[x_1,\ldots, x_n]$, $R'=\F[y_1,\ldots, y_m]$ be polynomial rings. Let $\left(A=R/I,\int_A\right)$ and $\left(B=R'/I',\int_B\right)$ be oriented AG algebras each with socle degree $d$, and let $\pi_A\colon A\rightarrow\F$ and $\pi_B\colon B\rightarrow \F$ be the natural projection maps with Thom classes $\tau_A\in A_d$ and $\tau_B\in B_d$. Then the fiber product $A\times_\F B$ has a presentation $$A\times_\F B\cong \frac{\F[x_1,\ldots,x_n,y_1,\ldots, y_m]}{(x_iy_j\mid 1\leq i\leq n, 1\leq j\leq m)+I+I'}$$ and the connected sum $A\#_\F B$ has a presentation $$A\#_\F B\cong \frac{\F[x_1,\ldots,x_n,y_1,\ldots, y_m]}{(x_iy_j\mid 1\leq i\leq n, 1\leq j\leq m)+I+I'+(\tau_A+\tau_B)}.$$ In particular, if $A$ and $B$ are standard graded then so are $A\times_\F B$ and $A\#_\F B$. \end{prop} \begin{ex}[Standard graded fiber product and connected sum]\label{ex:FPEx} Let $A=\F[x,y]/(x^2,y^4)$ and $B=\F[u,v]/(u^3,v^3)$ each with the standard grading $\deg(x)=\deg(y)=\deg(u)=\deg(v)=1$. Let $T=\F[z]/(z^2)$, and define maps $\pi_A:A\rightarrow T$, $\pi_A(x)=z, \ \pi_A(y)=0$ and $\pi_B\colon B\rightarrow T$, $\pi_B(u)=z, \ \pi_B(v)=0$. Then the fiber product $A\times_TB$ is generated as an algebra by elements $z_1=(y,0)$, $z_2=(x,u)$, and $z_3=(0,v)$, all having degree one. One can check that it has the following presentation: \begin{equation} \label{eq:FPEx} A\times_TB=\frac{\F[z_1,z_2,z_3]}{\left\langle z_1^4,z_2^3,z_3^3, z_1z_3,z_1z_2^2\right\rangle}. 
\end{equation} The Hilbert function of the fiber product is \begin{align*} H(A\times_TB)= & (1,3,5,4,2)\\ = & (1,2,2,2,1)+(1,2,3,2,1)-(1,1,0,0,0)\\ = & H(A)+H(B)-H(T). \end{align*} Fix orientations on $A$, $B$, and $T$ by $\int_A\colon xy^3\mapsto 1$, $\int_B\colon u^2v^2\mapsto 1$, and $\int_T\colon z\mapsto 1$, respectively. Then the Thom classes for $\pi_A\colon A\rightarrow T$ and $\pi_B\colon B\rightarrow T$ are, respectively, $\tau_A=y^3$, $\tau_B=uv^2$. Note that $\pi_A(\tau_A)=0=\pi_B(\tau_B)$, hence $(\tau_A,\tau_B)\in A\times_TB$, and in terms of Presentation \eqref{eq:FPEx} we have $(\tau_A,\tau_B)=z_1^3+z_2z_3^2$. Therefore we see that \begin{equation} \label{eq:CSEx} A\#_TB=\frac{\F[z_1,z_2,z_3]}{\left\langle z_1^4,z_2^3,z_3^3, z_1z_3,z_1z_2^2,z_1^3+z_2z_3^2\right\rangle}. \end{equation} The Hilbert function of the connected sum is \begin{align*} H(A\#_TB)= & (1,3,5,3,1)\\ = & (1,2,2,2,1)+(1,2,3,2,1)-(1,1,0,0,0)-(0,0,0,1,1)\\ = & H(A)+H(B)-H(T)-H(T)[3], \end{align*} where $H(T)[3]$ is the Hilbert function of $T(-3)$. \end{ex} However, if $T\neq \F$ the presentation of the connected sum and fiber product can be complicated and they need not be standard graded. \begin{ex}[Non-standard graded fiber product and connected sum] \label{ex:FP3} Let $$A=\F[x]/(x^4), \ B=\F[u,v]/(u^3,v^2), \ T=\F[z]/(z^2),$$ have Hilbert functions $H(A)=(1,1,1,1)$ and $H(B)=(1,2,2,1)$. Define maps $\pi_A\colon A\rightarrow T$, $\pi_A(x)=z$ and $\pi_B\colon B\rightarrow T$, $\pi_B(u)=z$, $\pi_B(v)=0$. Then the fiber product has the presentation $$A\times_TB=\frac{\F[z_1,z_2,z_3]}{\left(z_1^4,z_2^2,z_3^2,z_1z_3,z_1^2z_2-z_2z_3\right)}, \ \ \text{where} \begin{cases} z_1= & (x,u)\\ z_2= & (0,v)\\ z_3= & (0,u^2). \end{cases}$$ Here $z_1,z_2$ have degree one, and $z_3$ has degree two. We then have a presentation for the connected sum $C=A\#_TB=A\times_T B/(\tau)$, where $\tau=(\tau_A,\tau_B)=(x^2,uv)=(z_1^2-z_3)+z_1z_2$, whence \begin{align*} A\#_TB \cong & \frac{\F[z_1,z_2,z_3]}{\left(z_1^4,z_2^2,z_3^2,z_1z_3,z_1^2z_2-z_2z_3,(z_1^2-z_3)+z_1z_2\right)}\cong \frac{\F[z_1,z_2]}{(z_1^3+z_1^2z_2,z_2^2)}. \end{align*} It has Hilbert function $H(C)=(1,2,2,1)=H(A)+H(B)-H(T)-H(T)[2]$ as in \eqref{HilbertCS}, where $H(T)[2]$ is the Hilbert function of $T(-2)$. It is interesting to note that the connected sum $A\#_TB$ has a standard grading whereas the fiber product $A\times_TB$ does not. \end{ex} Finally, we have the following result which shows how the Lefschetz properties of the components influence the Lefschetz properties of the fiber product and connected sum. \begin{thm} \label{prop:SLPFP} \begin{enumerate} \item If $A$ and $B$ are artinian Gorenstein algebras of the same socle degree that each have the SLP, then the fiber product $D=A\times_\F B$ over a field $\F$ also has the SLP. If $A$ and $B$ have the standard grading, then the converse holds as well. \item If $A$ and $B$ both have the SLP, then the connected sum $C=A\#_\F B$ over a field $\F$ also has the SLP. If $A$ and $B$ have the standard grading, then the converse holds as well. \item Let $A, T$ be artinian Gorenstein algebras with socle degrees $d, k$ respectively and let $\pi_A\colon A\rightarrow T$ be a surjective ring homomorphism such that its Thom class $\tau_A$ satisfies $\pi_A(\tau_A)=0$. Let $x$ be an indeterminate of degree one, set $B=T[x]/(x^{d-k+1})$, and define $\pi_B\colon B\rightarrow T$ to be the natural projection map satisfying $\pi_B(t)=t$ and $\pi_B(x)=0$. In this setup, if $A$ and $T$ both satisfy the SLP, then the fiber product $A\times_T B$ also satisfies the SLP.
Moreover if the field $\F$ is algebraically closed, then the connected sum $A\#_T B$ also satisfies the SLP. \item Let $A$ and $B$ be standard graded artinian Gorenstein algebras of socle degree $d$ satisfying the SLP, and let $T$ be a graded artinian Gorenstein algebra of socle degree $k$, with $k<\lfloor \frac{d-1}{2} \rfloor$, endowed with surjective $\F$-algebra homomorphisms $\pi_A:A\to T$ and $\pi_B:B\to T$. Then the resulting fiber product $A\times_TB$ and the connected sum $A\#_T B$ both satisfy the WLP. \end{enumerate} \end{thm} \begin{ex} Take $$F=XY(XZ-YT)\in K[X,Y,Z,T]$$ and set $A=K[x,y,z,t]/\Ann(F)$. Then \[ \Ann(F)=(zt, xz + yt,x^{2}t,y^{2}z,x^{2}y^{2},x^{3},y^{3},z^{2},t^{2}), \] $A$ is a connected sum \[ A=K[x,y,z]/\Ann(X^2YZ)\#_{K[x,y]/\Ann(XY)} K[x,y,t]/\Ann(XY^2T), \] and the Hilbert function of $A$ is $(1, 4, 6, 4, 1)$. By \cref{prop:SLPFP} (4), since the summands of $A$ are monomial complete intersections, $A$ has WLP if the characteristic of $\F$ is 0. \end{ex} \begin{ex}[\cite{ADFMMSV}] Take $$F=X^3YZ-XY^3T=XY(X^2Z-Y^2T)\in K[X,Y,Z,T]$$ and set $A=K[x,y,z,t]/\Ann(F)$. Then $$\Ann(F)=(z^2,t^2,tz,x^2t,y^2z,x^2z+y^2t,y^4,x^2y^2,x^4),$$ $A$ is a connected sum \[ A=K[x,y,z]/\Ann(X^3YZ)\#_{K[x,y]/\Ann(XY)} K[x,y,t]/\Ann(XY^3T), \] and the Hilbert function of $A$ is $(1, 4, 7, 7, 4, 1)$. The Hessian matrix of $F$ of order two is of the following form $$ \Hess^2(F)=6\begin{pmatrix} 0&y&x&z&0&0&0\\ y&0&0&x&0&0&0\\ x&0&0&0&0&0&0\\ z&x&0&0&0&-y&-t\\ 0&0&0&0&0&0&-y\\ 0&0&0&-y&0&0&-x\\ 0&0&0&-t&-y&-x&0\\ \end{pmatrix} $$ and it has vanishing determinant. According to the Hessian criteria \cref{thm: hessian criterion} $A$ does not have WLP because in this case the second Hessian corresponds to the multiplication map from degree 2 to degree 3. Note that the socle degrees don't satisfy the condition in \cref{prop:SLPFP} since $2=k=\left \lfloor\frac{d-1}{2} \right\rfloor=\frac{5-1}{2}$. \end{ex} \subsection{Cohomological blowups}\label{s: BUG} The second construction is inspired by the geometric operation of blowing up a smooth projective algebraic variety. The blow-up of such a space at a point replaces the point with the set of all directions through the point, that is, a projective space. More generally one can blow up a subset and replace it with another space called an {\em exceptional divisor}. The cohomology ring of the blow-up can be determined based on the cohomology ring of the original variety (called $A$ below), that of the subvariety being blown up (called $T$ below) and the way the latter sits inside the former, specifically captured via the cohomology class of the {\em normal bundle} of the subvariety, encoded via a polynomial $f_A(\xi)$ below. We now explain the algebraic construction for the cohomology ring of a blowup. \begin{defn}[Cohomological Blow-Up] \label{def:blowup} For oriented artinian Gorenstein algebras $A$ and $T$ of socle degrees $d>k$, respectively, and surjective degree-preserving algebra map $\pi\colon A\rightarrow T$ with Thom class $\tau\in A_n$ where $n=d-k$, set $K=\ker(\pi)$. Given a homogeneous monic polynomial $f_A(\xi)=\xi^n+a_1\xi^{n-1}+\cdots+a_n\in A[\xi]$ of degree $n$ with homogeneous elements $a_i\in A_i$ for $1\leq i\leq n$ and with $a_n=\lambda\cdot \tau$ for some non-zero constant $\lambda$, we call the artinian Gorenstein algebra $\tilde{A}$ below a \emph{cohomological blow up of $A$ along $\pi$} or BUG for short \[ \tilde{A}=\frac{A[\xi]}{(\xi \cdot K,\underbrace{\xi^n+a_1\xi^{n-1}+\cdots+\lambda\cdot\tau}_{f_A(\xi)})}. 
\] Setting $t_i=\pi(a_i)$ for $1\leq i\leq n-1$, the AG algebra \[ \tilde{T}=\frac{T[\xi]}{(\underbrace{\xi^n+t_1\xi^{n-1}+\cdots+\lambda\cdot \pi(\tau)}_{f_T(\xi)})} \] is called the \emph{exceptional divisor of $T$ with parameters $(t_1,\ldots,t_{n-1},\lambda)$}. These algebras fit in the following commutative diagram, where we refer to $A$ as the \emph{cohomological blow-down of $\tilde{A}$ along $\hat{\pi}$}. \begin{equation*} \label{eq:cd} \xymatrix{A\ar[r]^-{}\ar[d]_-{\pi} & \tilde{A}\ar[d]^-{\hat{\pi}}\\ T\ar[r]_-{} & \tilde{T}.} \end{equation*} Since $\tilde{T}$ is a quotient of a 1-dimensional Gorenstein ring by a nonzerodivisor, it is clear that $\tilde{T}$ is artinian Gorenstein. It is shown in \cite{IMMSW} that the condition that the last term of $f_A(\xi)$ be a scalar multiple of the Thom class $\tau$ is precisely equivalent to $\tilde{A}$ being AG. \begin{ex} \label{ex:notGor} Let $$A=\frac{\F[x,y]}{(x^3,y^3)} \ \overset{\pi}{\rightarrow} \ T=\frac{\F[x,y]}{(x^2,y)}$$ where $\pi(x)=x$ and $\pi(y)=0$. Note $K=\ker(\pi)=(x^2,y)$. Orient $A$ and $T$ with socle generators $a_{soc}=x^2y^2$ and $t_{soc}=x$; then the Thom class of $\pi$ is $\tau=xy^2\in A_3$. Set $f_T(\xi)=\xi^3+x\xi^2\in T[\xi]$ and let $\tilde{T}$ be the associated exceptional divisor algebra: $$\tilde{T}=\frac{T[\xi]}{(f_T(\xi))}=\frac{\F[x,y,\xi]}{(x^2,y,\xi^3+x\xi^2)}.$$ Consider $f_A(\xi)=\xi^3+x\xi^2+xy^2\in A[\xi]$. This gives rise to the BUG $$\tilde{A}=\frac{A[\xi]}{(\xi \cdot K,f_A(\xi))}=\frac{\F[x,y,\xi]}{(x^3,y^3,x^2\xi,y\xi,\xi^3+x\xi^2+xy^2)}$$ which has basis $$\left\{1,x,y,\xi,x^2,xy,y^2,x\xi,\xi^2,x^2y,xy^2,x\xi^2,x^2y^2\right\}$$ and Hilbert function $H(\tilde{A})=(1,3,5,3,1).$ Here the socle of $\tilde{A}$ is generated by $\tilde{a}_{soc}=a_{soc}=x^2y^2$, hence $\tilde{A}$ is Gorenstein, as expected. \end{ex} We are now ready to discuss the Lefschetz properties for cohomological blow-up algebras. \begin{thm}[{\cite[Theorem 8.5]{IMMSW}}] \label{thm:SLPblowup} Let $\F$ be an infinite field and let $\pi:A\to T$ be a surjective homomorphism of graded AG $\F$-algebras of socle degrees $d>k$, respectively, such that both $A$ and $T$ have SLP. Assume that the characteristic of $\F$ is either zero or $p>d$. Then every cohomological blow-up algebra of $A$ along $\pi$ satisfies SLP. \end{thm} The following example shows that the converse of \cref{thm:SLPblowup} is not true: if the cohomological blow-up $\tilde{A}$ has SLP it does not follow that $A$ has SLP. In other words, while the process of blowing up preserves SLP, the process of blowing down does not preserve SLP, nor even WLP. \begin{ex} \label{ex:Rodrigo} As in \cref{ex: Gondim}, the following example, originally due to U. Perazzo \cite{Perazzo}, but re-examined more recently by R. Gondim and F. Russo \cite{GR}, is an artinian Gorenstein algebra with unimodal Hilbert function which does not have SLP or WLP: \begin{eqnarray*} A &=& \frac{\F[x,y,z,u,v]}{\operatorname{Ann}(XU^2+YUV+ZV^2)} \\ &=&\frac{\F[x,y,z,u,v]}{\left(x^2,xy,y^2,xz,yz,z^2,u^3,u^2v,uv^2,v^3,xv,zu,xu-yv,zv-yu\right)}.
\end{eqnarray*} Taking the quotient $T$ of $A$ given by the Thom class $\tau=u^2$ yields $$T=\frac{\F[x,y,z,u,v]}{\operatorname{Ann}(X)}=\frac{\F[x,y,z,u,v]}{\left(x^2,y,z,u,v\right)}\cong\frac{\F[x]}{(x^2)}.$$ Fix a parameter $\lambda\in\F$ and define polynomials $f_T(\xi)\in T[\xi]$ and $f_A(\xi)\in A[\xi]$ by $$f_T(\xi)=\xi^2-\lambda x\xi \quad \text{ and } f_A(\xi)=\xi^2-\lambda x\xi+u^2.$$ Denoting the ideal of relations of $A$ by $I$ we obtain the cohomological blowup $$\tilde{A}=\frac{\F[x,y,z,u,v, \xi]}{I+\xi(y,z,u,v)+(f_A(\xi))}, $$ which has Hilbert function $H(\tilde{A})=H(A)+H(T)[1]=(1,6,6,1)$. Fix $\F$-bases $$\tilde{A}_1=\operatorname{span}_{\F}\left\{x,y,z,u,v, \xi\right\}, \ \ \text{and} \ \ \tilde{A}_2=\operatorname{span}_{\F}\left\{u^2,uv,v^2,yv,yu,-x\xi\right\}$$ and let $\ell\in\tilde{A}_1$ be a general linear form $$\ell=ax+by+cz+du+ev+f\xi.$$ Then the matrix for the Lefschetz map $\times\ell\colon \tilde{A}_1\rightarrow\tilde{A}_2$ and its determinant are given by $$M = \left(\begin{array}{cccccc} 0 & 0 & 0 & d & 0 & -f\\ 0 & 0 & 0 & e & d & 0\\ 0 & 0 & 0 & 0 & e & 0\\ d & e & 0 & a & b & 0\\ 0 & d & e & b & c & 0\\ -f & 0 & 0 & 0 & 0 & -(a+\lambda f)\\ \end{array}\right)\Rightarrow \det(M)=f^2e^4.$$ Thus $\ell$ is a strong Lefschetz element for $\tilde{A}$ if and only if $e\cdot f\neq 0$. In particular $\tilde{A}$ satisfies SLP and also WLP. \end{ex} \noindent Surprisingly, the analogous result to \cref{thm:SLPblowup} does not hold for the weak Lefschetz property. We now give an example illustrating that blowing up does not preserve WLP. \begin{exer}\label{8.7ex} Consider the following algebra \begin{eqnarray*} A &=& \frac{\F[x,y,z,u,v]}{\operatorname{Ann}(XU^6+YU^4V^2+ZU^5V)}\\ &=& \frac{\F[x,y,z,u,v]}{\left(yz,xz,xy,vy-uz,vx,ux-vz,u^5y,u^5v^2,u^6v,u^7,v^3,x^2,y^2,z^2\right)} \end{eqnarray*} and its quotient corresponding to the Thom class $\tau=u^3$ \begin{eqnarray*} T &=& \frac{\F[x,y,z,u,v]}{\operatorname{Ann}(XU^3+YUV^2+ZU^2V)}\\ &=& \frac{\F[x,y,z,u,v]}{\left(z^2,yz,xz,y^2,xy,vy-uz,x^2,vx,ux-vz,u^2y,v^3,u^2v^2,u^3v,u^4 \right)} \end{eqnarray*} Consider also the cohomological blowup $$\tilde{A}=\frac{\F[x,y,z,u,v, \xi]}{I+\xi\cdot K+(\xi^3-u^3)}, \quad \text{where } K=\ker(A\twoheadrightarrow T).$$ \begin{enumerate}[(a)] \item Compute the Hilbert functions of $A$ and $T$ respectively. \item Show that both $A$ and $T$ satisfy WLP, but not SLP. \item Show that the BUG $\tilde{A}$ does not satisfy WLP. \end{enumerate} \end{exer} In \cref{8.7ex} the Thom class of the map $A\to T$ has degree 3. This is the minimal possible value for such an example based on the following result. \begin{thm}[{\cite[Theorem 8.9]{IMMSW}}] Let $\F$ be an infinite field and let $\pi:A\to T$ be a surjective homomorphism of graded AG $\F$-algebras such that the difference between the socle degrees of $A$ and $T$ is at most 2 and $A$ and $T$ both satisfy WLP. Then every cohomological blow-up algebra of $A$ along $\pi$ satisfies WLP. \end{thm} \vspace{-0.3em} \begin{thebibliography}{99} \bibitem{AADFMMMN} N.~Abdallah, N.~ Altafi, P.~De Poi, L.~Fiorindo, A.~Iarrobino, P.~Macias Marques, E.~Mezzetti, R.~ M. Mir\'o-Roig, L.~Nicklasson, {\em Hilbert functions and Jordan type of Perazzo Artinian algebras}, Lefschetz properties, 59--80, Springer INdAM Ser., 59 Springer, Singapore, (2024). \bibitem{ADFMMSV} N.~Altafi, R.~Dinu, S.~Faridi, S.~Masuti, R.~Mir\'o-Roig, A.~Seceleanu, and N.~Villamizar, {\em Artinian Gorenstein algebras having binomial Macaulay dual generator}, in preparation. 
\bibitem{BL} L. J.~Billera , C. W.~Lee , {\em A proof of the Sufficiency of McMullen's Conditions for f-Vectors of Simplicial Convex Polytopes}, J. Combimtorial Theory (A) 31 (1981), pp. 237--255. \bibitem{Chern} S.-S.~Chern , {\em On a Generalization of K\"ahler Geometry} In: R. H. Pox, et al. (eds.), Algebraic Geometry and Topology, Princeton University Press, Princeton, N. J., 1951, pp. 103--124. \bibitem{Cook} D.~Cook II, {\em The Lefschetz properties of monomial complete intersections in positive characteristic}, J. Algebra 369 (2012), 42--58. \bibitem{Deligne} P.~Deligne, {\em La conjecture de Weil. II.}, Inst. Hautes \'Etudes Sci. Publ. Math.(1980), no.52, 137--252. \bibitem{DNS} M.~DiPasquale, T.~Nguy$\tilde{\text{\^e}}$n, A.~Seceleanu, {\em Duality for asymptotic invariants of graded families}, Adv. Math. 430 (2023), Paper No. 109208. \bibitem{Eisenbud} D.~Eisenbud, {\em Commutative algebra. With a view toward algebraic geometry}, Grad. Texts in Math., 150 Springer-Verlag, New York, 1995. \bibitem{Ger95} A.~V.~Geramita, {\em Inverse systems of fat points: Waring's problem, secant varieties of Veronese varieties and parameter spaces for Gorenstein ideals} In The Curves Seminar at Queen's, vol102 of Queen's Papers in Pure and Appl. Math., p. 2--114. Queen's Univ., Kingston, ON, 1996. \bibitem{Gondim} R.~Gondim, {\em On higher Hessians and the Lefschetz properties}, J. Algebra 489 (2017), 241--263. \bibitem{GR} R.~Gondim, F.~Russo, {\em Cubic hypersurfaces with vanishing Hessian}, J. Pure Appl. Algebra 219 (2015), 779--806. \bibitem{M2} D.~Grayson, M.~Stillman, {\em Macaulay2}, a software system for research in algebraic geometry, available at \url{http://www2.macaulay2.com}. \bibitem{book} T.~Harima, T.~Maeno, H.~Morita, Y.~Numata, A.~Wachi and J.~Watanabe, {\em The Lefschetz properties}, Lecture Notes in Math., 2080, Springer, Heidelberg, 2013. \bibitem{HW} T.~Harima, J.~Watanabe, {\em The strong Lefschetz property for Artinian algebras with non-standard grading}, J. Algebra 311 (2007) 511?537. \bibitem{HP} J.~Herzog and D.~Popescu, {\em The strong Lefschetz property and simple extensions}, ArXiv:0506.5537. \bibitem{Hodge} W.~V.~D.~Hodge, {\em The Theory and Applications of Harmonic Integrals}, 2nd ed., Cambridge University Press, London, 1952. \bibitem{IMM} A.~Iarrobino, P.~Macias Marques, and C.~McDaniel, {\em Artinian algebras and Jordan type}, J. Commut. Algebra 14 (3), 365--414, (2022). \bibitem{IMS} A.~Iarrobino, C.~McDaniel, and A.~Seceleanu, {\em Connected sums of graded artinian Gorenstein algebras and Lefschetz properties}, J. Pure Appl. Algebra 226 (2022), no. 1, 106787. \bibitem{IMMSW} A.~Iarrobino, P.~Macias Marques, C.~McDaniel, A.~Seceleanu and J.~Watanabe, {\em Cohomological blow ups of graded artinian Gorenstein algebras along surjective maps}, Int. Math. Res. Not. IMRN 2023, no. 7, 5816--5886. \bibitem{Ikeda}H.~Ikeda, {\em Results on Dilworth and Rees numbers of Artinian local rings}, Jpn. J. Math. (N.S.) 22(1), 147--158 (1996). \bibitem{Lefschetz} S.~Lefschetz., {\em L'Analysis situs et la G\'eometrie Alg\'ebrique}, Gauthiers--Villars, Paris, 1924; reprinted in: Selected Papers, Chelsea, New York, 1971. \bibitem{Lefschetzbio} S.~Lefschetz., {\em Reminiscences of a mathematical immigrant in the United States}, American Mathematical Monthly, vol.77, 1970, pp. 344--350. \bibitem{Lindsey} M.~Lindsey, {\em A class of Hilbert series and the strong Lefschetz property}, Proc. Amer. Math. Soc. 139 (2011), 79 92. 
\bibitem{Macaulay} F.~S.~Macaulay, {\em The algebraic theory of modular systems}, Cambridge Math. Lib., Cambridge University Press, Cambridge, 1994. \bibitem{MW} T.~Maeno, J.~Watanabe, {\em Lefschetz elements of Artinian Gorenstein algebras and Hessians of homogeneous polynomials}, Illinois J. Math. 53 (2009), no. 2, 591--603. \bibitem{McMullen} P.~McMullen, {\em The numbers of faces of simplicial polytopes}, Israel J. Math. 9 (1971), pp. 559--570. \bibitem{MMN} J.~Migliore, R.~M.~Mir\'o-Roig, and U.~Nagel, {\em Monomial ideals, almost complete intersections and the Weak Lefschetz property}, Trans. Amer. Math. Soc. 363 (2011), 229--257. \bibitem{Nicklasson} L.~Nicklasson, {\em The strong Lefschetz property of monomial complete intersections in two variables}, Collect. Math. 69 (2018), no. 3, 359--375. \bibitem{Perazzo} U.~Perazzo, {\em Sulle variet\`a cubiche la cui hessiana svanisce identicamente}, Giornale di Matematiche (Battaglini) 38 (1900), 337--354. \bibitem{RRR} L.~Reid, L.~Roberts and M.~Roitman, {\em On complete intersections and their Hilbert functions}, Canad. Math. Bull. 34 (1991), 525--535. \bibitem{Robles} C.~Robles, {\em Linear structures of Hodge theory}, available at \url{https://sites.math.duke.edu/~robles/21.02-ICERM.pdf}. \bibitem{Stanley} R.~Stanley, {\em The Number of Faces of a Simplicial Convex Polytope}, Advances in Math. 35 (1980), pp. 236--238. \bibitem{StanleyICM} R.~Stanley, {\em Combinatorial applications of the hard Lefschetz theorem}, Proceedings of the International Congress of Mathematicians. Held in Warsaw, August 16--24, 1983. PWN---Polish Scientific Publishers, Warsaw, 1984, 447--453. \bibitem{Watanabe} J.~Watanabe, {\em The Dilworth number of Artinian rings and finite posets with rank function}, in Commutative algebra and combinatorics, Adv. Stud. Pure Math. 11, Kinokuniya Co., North Holland, Amsterdam, 1987. \end{thebibliography} \newpage \begin{appendix} \section{Exercise session 1: Computations in Macaulay2} {\em Use Macaulay2 to solve the exercises in this section!} \begin{exer} Determine whether the algebra \[ \frac{\Q[x,y,z]}{(x^2+y^2+z^2,\ xyz,\ z^4-3xz^3)} \] is artinian. \end{exer} \begin{exer}\label{exer:monomial_ACI} Compute the Hilbert series of the algebra \[ A=\frac{\Z/3\Z[x,y,z]}{(x^{10},y^{10},z^{10},x^3y^3z^3)}. \] Is $x+y+z$ a weak Lefschetz element of $A$? \end{exer} \begin{exer} Let $R=\Q[x_1,x_2,x_3,x_4]$ and $\fm=(x_1,x_2,x_3,x_4)$. Does the algebra $R/(\fm^5+x_1 \fm^2 + (x_2^3))$ have WLP? \end{exer} \begin{exer}\label{exer:WLP_function} Build a function in Macaulay2 that takes as input an artinian standard graded algebra $A$ and an element $\ell \in A_1$ and returns {\tt true} or {\tt false} according to whether $\ell$ is a weak Lefschetz element of $A$. {\it Hint}: use the result of \cref{exer: equivalent WLP} (also stated as \cref{ex:4.3 again} below); one possible skeleton is sketched at the end of this exercise session. \end{exer} \begin{exer}\label{exer:almost_monomial_ACI} Use your function from \cref{exer:WLP_function} to explore the WLP of the algebra \[ \frac{\Z/p\Z[x_1, \ldots, x_n]}{(x_1^n , \ldots, x_n^n,\ x_1 \cdots x_{n-1}(x_1+x_n))} \] for some integer $n \ge 3$ and some prime number $p$. For which $n$ and $p$ can you detect WLP? \end{exer} Results related to \cref{exer:monomial_ACI} and \cref{exer:almost_monomial_ACI} can be found in \cite{MMN}.
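As one possible skeleton for \cref{exer:WLP_function} (a sketch only, not a complete or optimized solution; the function name and implementation choices are ours, and we assume a standard graded artinian algebra over a field), one can test criterion (4) of \cref{ex:4.3 again} degree by degree. The usage example below calls it on the algebra of \cref{exer:monomial_ACI}.
\begin{verbatim}
-- sketch: is ell a weak Lefschetz element of the artinian graded algebra A?
-- tests dim [A/(ell)]_(i+1) == max(0, h_A(i+1) - h_A(i)) for 0 <= i <= c-1
isWeakLefschetz = (A, ell) -> (
    B := A / ideal(ell);
    c := max flatten degrees source basis A;   -- socle degree of A
    all(toList(0..c-1), i ->
        hilbertFunction(i+1, B) ==
            max(0, hilbertFunction(i+1, A) - hilbertFunction(i, A)))
);
-- example of use:
R = ZZ/3[x,y,z];
A = R / ideal(x^10, y^10, z^10, x^3*y^3*z^3);
use A;                     -- so that x, y, z denote their images in A
isWeakLefschetz(A, x + y + z)
\end{verbatim}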
\section{Exercise session 2: Lefschetz Properties} {\em A $(*)$ denotes that at least some portion of the exercise is an open (research) question.} \begin{exer} The rings $A=\F[x_1,\ldots, x_n]/(x_1^{d_1}, \ldots, x_n^{d_n})$, where $d_1,\ldots, d_n\geq 1$ are integers, are called monomial complete intersections. \begin{enumerate} \item Prove that monomial complete intersections are the only complete intersection rings of the form $\F[x_1,\ldots, x_n]/I$ where $I$ is an ideal generated by monomials. \item Prove that monomial complete intersections are the only artinian Gorenstein rings of the form $\F[x_1,\ldots, x_n]/I$ where $I$ is an ideal generated by monomials. \end{enumerate} \end{exer} \begin{exer}[Equivalent WLP statements]\label{ex:4.3 again} Prove that for an artinian graded $\F$-algebra $A$ the following are equivalent: \begin{enumerate} \item $L\in A_1$ is a weak Lefschetz element for $A$. \item For all $0\leq i \leq c-1$ the map $\times L: A_i\to A_{i+1}$ has rank $\min\{h_A(i),h_A(i+1)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([(L)]_{i+1})=\min\{h_A(i),h_A(i+1)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([A/(L)]_{i+1})=\max\{0,h_A(i+1)-h_A(i)\}$. \item For all $0\leq i \leq c-1$ $\dim_\F([0:_A L]_{i})=\max\{0,h_A(i)-h_A(i+1)\}$. \end{enumerate} \end{exer} \begin{exer} Let $\F$ be a field of characteristic zero and let \[ A=\frac{\F[x,y,z]}{(x^3,y^3,z^3,(x+y+z)^3)}. \] \begin{enumerate} \item Find the Hilbert function of $A$. \item Prove that $A$ satisfies $WLP$ but not $SLP$. \end{enumerate} \end{exer} \begin{exer} $(*)$ With help from a computer make conjectures regarding the WLP and SLP for monomial complete intersections in positive characteristics. A characterization is known for SLP, but not for WLP. See \cite{Cook, Nicklasson} for related work. \end{exer} \begin{exer} Let $\F[x,y]_d$ be the vector space of polynomials of degree $d$ in $\F[x,y]$. Prove: \begin{enumerate} \item $E=x\frac{\partial}{\partial y}, H=x\frac{\partial}{\partial x}-y\frac{\partial}{\partial y}, F=y\frac{\partial}{\partial x}$ form an $\sl_2$-triple. \item Prove that the monomial $x^ay^b$ is an eigenvector of $H$ with eigenvalue $a-b\in\Z$. In particular the eigenvalues of $H$ on $\F[x,y]_d$ are $d, d-2, d-4, \ldots, 4-d, 2-d, -d$. \item Prove that a basis of $\F[x,y]_d$ is $y^d, E(y^d), E^2(y^d), \ldots, E^d(y^d)$. \item Find a basis that satisfies the properties given by \cref{thm:sl2reps}. \end{enumerate} Pictorially this can be summarized as \begin{tikzpicture}[->,>=stealth',auto,node distance=2.5cm, thick] \node (1) {$0$}; \node (2) [right of=1] {$\F y^d$}; \node (3) [right of=2] {$\F xy^{d-1}$}; \node (4) [right of=3] {$\cdots$}; \node (5) [right of=4] {$\F x^{d-1}y$}; \node (6) [right of=5] {$\F x^d$}; \node (7) [right of=6] {$0$}; \path[every node/.style={font=\small}] (2) edge[bend right] node [midway,below] {E} (3) (3) edge[bend right] node [midway,below] {E} (4) (4) edge[bend right] node [midway,below] {E} (5) (5) edge[bend right] node [midway,below] {E} (6) (6) edge node [midway,below] {E} (7) (2) edge node [midway,above] {F} (1) (3) edge[bend right] node [midway,above] {F} (2) (4) edge[bend right] node [midway,above] {F} (3) (5) edge[bend right] node [midway,above] {F} (4) (6) edge[bend right] node [midway,above] {F} (5); \end{tikzpicture} \end{exer} \begin{exer} \begin{enumerate} \item Suppose that $V$ is a representation of $\sl_2$ and that the eigenvalues of $H$ on $V$ are $2, 1, 1, 0, -1, -1, -2$. 
Show that the irreducible decomposition of $V$ is $V\cong \F[x,y]_2\oplus \F[x,y]_1\oplus \F[x,y]_1$. \item Prove that if $V$ is any representation of $\sl_2$ then its irreducible decomposition is determined by the eigenvalues of $H$. \end{enumerate} \end{exer} \begin{exer} Let $V$ be an $\sl_2$ representation and set $W_k=\{v\in V\mid H(v)=kv\}$. \begin{enumerate} \item Show that $\dim_\F W_k=\dim_\F W_{-k}$. \item Prove that $E^k:W_{-k}\to W_k$ is an isomorphism. \item Show that $\dim_\F W_{k+2}\leq \dim_\F W_k$ for all $k\geq 0$, that is, the two sequences \[ \ldots, \dim_\F W_4, \dim_\F W_2, \dim_\F W_0, \dim_\F W_{-2}, \dim_\F W_{-4},\ldots \] \[ \ldots, \dim_\F W_3, \dim_\F W_1, \dim_\F W_{-1}, \dim_\F W_{-3}, \ldots \] are unimodal. \end{enumerate} \end{exer} \section{Exercise session 3: Gorenstein rings} {\em A $(*)$ denotes that at least some portion of the exercise is an open (research) question.} \begin{exer} Generalize \cref{ex: perps} to find the inverse system of the ideal defining a point in projective $n$-space and the inverse systems of all of the powers of this ideal. \end{exer} \begin{exer} For any homogeneous polynomial $F\in R^*$ of degree $d$, prove \[ \Ann_R(F)^\perp:=\{g\in R^* \mid f\contract g=0, \, \forall f\in \Ann_R(F)\} \] is equal to $R\contract F$. This is an instance of Macaulay's double annihilator theorem. \end{exer} \noindent {\em Hint:} start by showing the equality is true in degree $d$, then use the $R$-module structure. \begin{exer}[R. Gondim \cite{Gondim}]\label{ex: Gondim} Let $x_1,\ldots, x_n$ and $u_1, \ldots, u_m$ be two sets of indeterminates with $n \geq m \geq 2$. Let $f_i \in \F[x_1, \ldots, x_n]_k$ and $g_i \in \F[u_1,\ldots, u_m]_e$ for $1\leq i\leq s$ be linearly independent forms with $1 \leq k<e$. If $s >\binom{m-1+k}{k}$, then \[ F=f_1g_1+\cdots+f_sg_s \] is called a Perazzo form and $A=\F[x_1, \ldots, x_n,u_1,\ldots,u_m]/\operatorname{Ann}(F)$ is called a Perazzo algebra. \begin{enumerate} \item Show that $\hess_k(F)=0$ and so $A$ does not have SLP. \item Make conjectures regarding the Hilbert functions of Perazzo algebras. \item Make conjectures regarding the WLP for Perazzo algebras. \item[(4*)] Do there exist two Perazzo algebras $A$ and $B$ having the same Hilbert function so that $A$ has WLP and $B$ does not? \end{enumerate} Some answers to (2) and (3) can be found in \cite{AADFMMMN}. \end{exer} \begin{exer} Show that every orientation on an AG algebra $A$ can be written as the function $\int_A:A\to K$ defined by $\int_A g =(g\circ F)(0)$ for some Macaulay dual generator $F$ of $A$, where $(g\circ F)(0)$ refers to evaluating the element $g\circ F$ of $R'$ at $X_1=\cdots=X_n=0$. \end{exer} \begin{exer} Show that there exists a surjective homomorphism $\pi: A\to T$ of AG algebras having dual generators $F, H$ of degrees $d \geq k$ if and only if there exists $\tau\in A_{d-k}$ such that $\tau\circ F=H$. {\em Hint:} You may use the Thom class in \cref{def:ThomClass}. \end{exer} \begin{exer} Consider the following algebra \begin{eqnarray*} A &=& \frac{\F[x,y,z,u,v]}{\operatorname{Ann}(XU^6+YU^4V^2+ZU^5V)}\\ &=& \frac{\F[x,y,z,u,v]}{\left(yz,xz,xy,vy-uz,vx,ux-vz,u^5y,u^5v^2,u^6v,u^7,v^3,x^2,y^2,z^2\right)} \end{eqnarray*} and its quotient corresponding to the Thom class $\tau=u^3$ \begin{eqnarray*} T &=& \frac{\F[x,y,z,u,v]}{\operatorname{Ann}(XU^3+YUV^2+ZU^2V)}\\ &=& \frac{\F[x,y,z,u,v]}{\left(z^2,yz,xz,y^2,xy,vy-uz,x^2,vx,ux-vz,u^2y,v^3,u^2v^2,u^3v,u^4 \right)} \end{eqnarray*} Let $K$ be the kernel of the canonical surjection $\pi:A\to T$.
Consider the cohomological blow up $$\tilde{A}=\frac{\F[x,y,z,u,v, \xi]}{I+\xi\cdot K+(\xi^3-u^3)}.$$ \begin{enumerate}[(a)] \item Compute the Hilbert functions of $A$ and $T$ respectively. Feel free to use {\em Macaulay2}. \item Show that both $A$ and $T$ satisfy WLP, but not SLP. You may use {\em Macaulay2}. \item Show that the BUG $\tilde{A}$ does not satisfy WLP. \end{enumerate} \end{exer} \end{appendix} \end{document}
2412.09067v1
http://arxiv.org/abs/2412.09067v1
On higher-dimensional symmetric designs
\documentclass[12pt,reqno]{amsart} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{color} \usepackage{hyperref} \usepackage{lscape} \usepackage{multirow} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \def\F{\mathbb{F}} \def\Z{\mathbb{Z}} \def\P{\mathcal{P}} \def\C{\mathcal{C}} \def\H{\mathcal{H}} \def\D{\mathcal{D}} \DeclareMathOperator{\dev}{dev} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Atop}{Atop} \DeclareMathOperator{\Apar}{Apar} \begin{document} \title{On higher-dimensional symmetric designs} \author[V.~Kr\v{c}adinac and M.~O.~Pav\v{c}evi\'{c}]{Vedran Kr\v{c}adinac$^1$ and Mario Osvin Pav\v{c}evi\'{c}$^2$} \address{$^1$Faculty of Science, University of Zagreb, Bijeni\v{c}ka cesta~30, HR-10000 Zagreb, Croatia} \address{$^2$Faculty of Electrical Engineering and Computing, University of Zagreb, Unska~3, HR-10000 Zagreb, Croatia} \email{[email protected]} \email{[email protected]} \thanks{This work has been supported by the Croatian Science Foundation under the project $9752$.} \subjclass{05B05, 05B10, 05B20} \keywords{higher-dimensional design, symmetric design, difference set} \date{December 12, 2024} \begin{abstract} We study two kinds of generalizations of symmetric block designs to higher dimensions, the so-called $\C$-cubes and $\P$-cubes. For small parameters all examples up to equivalence are determined by computer calculations. Known properties of automorphisms of symmetric designs are extended to autotopies of $\P$-cubes, while counterexamples are found for $\C$-cubes. An algorithm for the classification of $\P$-cubes with prescribed autotopy groups is developed and used to construct more examples. A linear bound on the dimension of difference sets for $\P$-cubes is proved and shown to be tight in elementary abelian groups. The construction is generalized to arbitrary groups by introducing regular sets of (anti)automorphisms. \end{abstract} \maketitle \section{Introduction} Two different generalizations of symmetric block designs to higher dimensions have recently been studied in~\cite{KPT24} and~\cite{KR24}. A symmetric $(v,k,\lambda)$ design can be represented by its incidence matrix, i.e.\ by a $v\times v$ matrix~$A$ with $\{0,1\}$-entries satisfying \begin{equation}\label{eqsbibd} A A^t = (k-\lambda)I + \lambda J. \end{equation} Here $I$ is the identity matrix and $J$ is the all-ones matrix. An \emph{$n$-cube of $(v,k,\lambda)$ designs}~\cite{KPT24} is an $n$-dimensional $v\times \cdots \times v$ matrix such that its every $2$-section is an incidence matrix of a $(v,k,\lambda)$ design. Sections of dimension~$2$ are submatrices obtained by fixing all but two coordinates. This generalization is a special case of W.~de~Launey's proper $n$-dimensional transposable designs; see \cite[Definitions 2.1, 2.6, 2.7, and Example 2.2]{dL90}. The set of all $n$-cubes of $(v,k,\lambda)$ designs was denoted by $\C^n(v,k,\lambda)$ in~\cite{KPT24}. We shall refer to them as $\C$-cubes. The second generalization was introduced in~\cite{KR24} under the name of \emph{$(v,k,\lambda)$ projection $n$-cubes}. These are $n$-dimensional $v\times \cdots \times v$ matrices with $\{0,1\}$-entries such that every $2$-projection is an incidence matrix of a $(v,k,\lambda)$ design. 
If~$C$ is an $n$-dimensional matrix and $1\le x<y\le n$, the projection $\Pi_{xy}(C)$ is the $2$-dimensional matrix with $(i_x,i_y)$-entry \begin{equation*}\label{projsum} \sum_{1\le i_1,\ldots,i_{x-1},i_{x+1},\ldots,i_{y-1},i_{y+1},\ldots,i_n\le v} C(i_1,\ldots,i_n). \end{equation*} The sum is taken over all $n$-tuples $(i_1,\ldots,i_n)\in \{1,\ldots,v\}^n$ with the coordinates $i_x$ and $i_y$ fixed, and is computed in a field of characteristic~$0$. This definition was inspired by Room squares~\cite{JHD07}, which are generalized to $n$-dimensional Room cubes in an analogous way. In~\cite{KR24}, the set of all $(v,k,\lambda)$ projection $n$-cubes was denoted by $\P^n(v,k,\lambda)$. We shall refer to them as $\P$-cubes. The purpose of this paper is to study both types of cubes and to compare their properties. For dimension $n=2$ both are just incidence matrices of symmetric designs, but for $n\ge 3$ there are significant differences. For example, it was shown in~\cite[Theorem 2.8]{KR24} that the dimension of $(v,k,\lambda)$ projection cubes with $k\ge 2$ is bounded by \begin{equation}\label{dimbound} n\le \frac{v(v+1)}{2}. \end{equation} The largest integer~$n$ such that $\P^n(v,k,\lambda)$-cubes exist is denoted by $\nu(v,k,\lambda)$. In contrast, the dimension of $\C$-cubes can be arbitrarily large for fixed parameters $(v,k,\lambda)$. The layout of our paper is as follows. In Section~\ref{sec2} we recall the definitions of isotopy and equivalence of $n$-dimensional incidence cubes. We discuss the numbers of inequivalent cubes in $\C^n(v,k,\lambda)$ and $\P^n(v,k,\lambda)$ for $k=1$ and $k=2$. The main result in this section is Theorem~\ref{class731}, giving a complete enumeration up to equivalence of $\P$-cubes for parameters $(7,3,1)$, $(7,4,2)$ and all dimensions~$n$. The proof is a computer calculation based on an algorithm that successively increases the dimension. The largest possible dimensions are $\nu(7,3,1)=7$ and $\nu(7,4,2)=9$. Autotopies of $\C$- and $\P$-cubes are studied in Section~\ref{sec3}. Results about the action of automorphisms of symmetric designs on the points and blocks carry over to the action of autotopies of $\P$-cubes on each coordinate. These results do not hold for autotopies of $\C$-cubes. The main computational result in this section is Theorem~\ref{v11aut}, giving a complete classification of $\P^n(11,5,2)$-cubes with nontrivial autotopies. Cubes in $\P^3(16,6,2)$ with an autotopy of order~$8$ acting semiregularly are classified in Proposition~\ref{v16aut8}. Among them are examples with three non-isomorphic $(16,6,2)$ designs as projections, answering a question posed in~\cite{KR24}. In Section~\ref{sec4} we study $\P^n(v,k,\lambda)$-cubes constructed from higher-dimensional difference sets. Theorem~\ref{dsbound} gives a much sharper bound than~\eqref{dimbound} on the dimension: $n\le v$. Building on results from~\cite{KR24}, we compute new values of $\mu_G(v,k,\lambda)$, the largest dimension of $(v,k,\lambda)$ difference sets in the group~$G$. For elementary abelian groups~$G$, Theorem~\ref{tmelab} shows that the bound is tight, i.e.\ $\mu_G(v,k,\lambda)=v$ holds whenever difference sets exist. Theorem~\ref{tmreg} generalizes the construction to arbitrary groups~$G$ based on the notion of regular sets of (anti)automorphisms. A nice consequence is Corollary~\ref{cycdim}, showing that cyclic $(v,k,\lambda)$ difference sets extend at least to dimension~$p$, where~$p$ is the smallest prime divisor of~$v$.
In the final Section~\ref{sec5} we put forward some observations based on data in Table~\ref{tab4}. The table contains numbers of inequivalent $\P^n(v,k,\lambda)$-cubes constructed from difference sets. We have only partial explanations for apparent symmetries of the numbers and hope that the observations could lead to new theorems. Some of our results have traditional formal proofs, while others are proved by computer calculations. There are numerous connections between the two types of results in both directions. In Section~\ref{sec3}, formal results about autotopies enable computer classifications of $\P$-cubes with prescribed autotopy groups. In Section~\ref{sec4}, computer classifications of higher-dimensional difference sets provide impetus for formal results, notably Theorems~\ref{tmelab} and~\ref{tmreg}. Higher-dimensional incidence cubes are most easily explored by studying examples and performing experiments on a computer. Tools for examining $\C$- and $\P$-cubes of symmetric designs are available in the GAP package \emph{Prescribed automorphism groups}~\cite{GAP, PAG}. Plenty of examples are given in this paper, as well as in~\cite{KPT24, KR24}. \section{Equivalence and classification}\label{sec2} Formally, incidence matrices of symmetric $(v,k,\lambda)$ designs are functions $A:\{1,\ldots,v\}\times \{1,\ldots,v\} \to \{0,1\}$ satisfying equation~\eqref{eqsbibd}. An important consequence of the equation is non-singularity. \begin{lemma}\label{nonsing} Incidence matrices of symmetric $(v,k,\lambda)$ designs are invertible. \end{lemma} The lemma implies that the transposed matrix~$A^t$ is also an incidence matrix of a $(v,k,\lambda)$ design, called the \emph{dual design}. We refer to the monographs~\cite{IS06, EL83} and the book~\cite{BJL99} for proofs. Symmetric designs with incidence matrices~$A$ and~$A'$ are \emph{isomorphic} if there are permutations $\pi_1, \pi_2\in S_v$ such that $A'(i,j)=A(\pi_1^{-1}(i),\pi_2^{-1}(j))$, $\forall i,j\in \{1,\ldots,v\}$. This can be written in matrix form as $A'=P_1AP_2^t$, where $P_1$ and $P_2$ are permutation matrices corresponding to~$\pi_1$ and~$\pi_2$. We call~$A$ and~$A'$ \emph{equivalent} if~$A'$ is isomorphic to $A$ or $A^t$. The equivalence classes are orbits of the wreath product $S_v\wr S_2$ acting on the set of all incidence matrices and represent symmetric designs up to isomorphism and duality. Both $\C$- and $\P$-cubes are defined as $n$-dimensional incidence matrices of order~$v$, i.e.\ as functions $C:\{1,\ldots,v\}^n \to \{0,1\}$ with the Cartesian $n$-ary power of $\{1,\ldots,v\}$ as domain. Isomorphism of $n$-dimensional matrices is called \emph{isotopy}: $C$ and $C'$ are isotopic if there are permutations $\pi_1,\dots,\pi_n\in S_v$ such that $C'(i_1,\ldots,i_n)=C(\pi_1^{-1}(i_1),\ldots,\pi_n^{-1}(i_n))$, $\forall i_1,\ldots,i_n\in \{1,\ldots,v\}$. Now the order of the coordinates can be permuted by any $\gamma \in S_n$. This is called \emph{conjugation}: $$C^\gamma (i_1,\ldots,i_n) = C(i_{\gamma^{-1}(1)},\ldots,i_{\gamma^{-1}(n)}),\kern 2mm \forall i_1,\ldots,i_n \in \{1,\ldots,v\}.$$ The cubes $C$ and $C'$ are \emph{equivalent} or \emph{paratopic} if $C'$ is isotopic to a conjugate $C^\gamma$. The equivalence classes are orbits of the wreath product $S_v\wr S_n$ acting on $\C^n(v,k,\lambda)$ or $\P^n(v,k,\lambda)$. The terminology is borrowed from latin squares~\cite{KD15}, where equivalence classes of paratopy are usually called \emph{main classes}. 
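Before turning to the classification, we record a small illustrative sketch (in Python; the function names and conventions are ours, and this is not part of the GAP package~\cite{PAG} used for our computations) of the two membership tests implicit in the definitions above: a $\C$-cube must have every $2$-section, and a $\P$-cube every $2$-projection, satisfying equation~\eqref{eqsbibd}. The two test cubes are built from the $(7,3,1)$ difference set $\{1,2,4\}\subseteq\Z_7$ by the difference-set constructions recalled in Section~\ref{sec4}; the $3$-dimensional set $\{(0,1,3),(0,2,6),(0,4,5)\}$ used for the $\P$-cube is our own small example.
\begin{verbatim}
# Illustrative sketch only (not the authors' GAP/PAG code).  Tests whether a
# {0,1} array is a C-cube (all 2-sections) or a P-cube (all 2-projections),
# using the symmetric design equation  A A^t = (k - lambda) I + lambda J.
import numpy as np
from itertools import product

def is_design_matrix(A, k, lam):
    """A must be a v x v {0,1} matrix with A A^t = (k-lam) I + lam J."""
    v = A.shape[0]
    if not np.isin(A, (0, 1)).all():
        return False
    target = (k - lam) * np.eye(v, dtype=int) + lam * np.ones((v, v), dtype=int)
    return np.array_equal(A @ A.T, target)

def is_P_cube(C, k, lam):
    """Every 2-projection (sum over the remaining coordinates) is a design."""
    n = C.ndim
    return all(is_design_matrix(
        C.sum(axis=tuple(a for a in range(n) if a not in (x, y))), k, lam)
        for x in range(n) for y in range(x + 1, n))

def is_C_cube(C, k, lam):
    """Every 2-section (all but two coordinates fixed) is a design."""
    v, n = C.shape[0], C.ndim
    for x in range(n):
        for y in range(x + 1, n):
            rest = [a for a in range(n) if a not in (x, y)]
            for vals in product(range(v), repeat=len(rest)):
                idx = [slice(None)] * n
                for a, t in zip(rest, vals):
                    idx[a] = t
                if not is_design_matrix(C[tuple(idx)], k, lam):
                    return False
    return True

v, D = 7, {1, 2, 4}                      # a (7,3,1) difference set in Z_7
Ccube = np.zeros((v, v, v), dtype=int)   # difference cube: [g1+g2+g3 in D]
for g in product(range(v), repeat=3):
    Ccube[g] = 1 if sum(g) % v in D else 0
Pcube = np.zeros((v, v, v), dtype=int)   # development of a 3-dimensional set
for (d1, d2, d3) in [(0, 1, 3), (0, 2, 6), (0, 4, 5)]:
    for g in range(v):
        Pcube[(d1 + g) % v, (d2 + g) % v, (d3 + g) % v] = 1
print(is_C_cube(Ccube, 3, 1), is_P_cube(Pcube, 3, 1))   # True True
\end{verbatim}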
The classification problem is to determine the equivalence classes of $\C$- and $\P$-cubes for given parameters $(v,k,\lambda)$ and dimension~$n$. We start with the degenerate case $k=1$. Incidence matrices of symmetric $(v,1,0)$ designs are permutation matrices of order~$v$. They are all equivalent, since the rows and columns can be permuted to get the identity matrix~$I$. This extends straightforwardly to $\P^n(v,1,0)$-cubes, which are all equivalent with $$C(i_1,\ldots,i_n)=\left\{\begin{array}{ll} 1, & \mbox{ if } i_1=\ldots=i_n,\\[1mm] 0, & \mbox{otherwise.}\\ \end{array}\right.$$ On the other hand, the classification of $\C^n(v,1,0)$-cubes is a difficult problem. For $n=3$ they are in $1$-to-$1$ correspondence with latin squares $L=(\ell_{i_1i_2})$ of order~$v$ by $C(i_1,i_2,i_3)=[\ell_{i_1 i_2}=i_3]$. The square bracket $[P]$ is the \emph{Iverson symbol}, taking the value~$1$ if $P$ is true, and~$0$ otherwise~\cite{DK92}. The number of main classes of latin squares has been determined by computer calculations up to $v=11$~\cite{HKO11, MMM07}. Cubes in $\C^n(v,1,0)$ of arbitrary dimension~$n$ are in $1$-to-$1$ correspondence with latin hypercubes of order~$v$ and dimension~$n-1$. There have been several definitions of latin hypercubes in the literature; the suitable definition for our purpose is used in~\cite{MW08}. There, classification is performed for $n=4$, $v\le 6$ and $n=5,6$, $v\le 5$. For $k=2$, the only feasible parameters of symmetric designs are $(3,2,1)$. The number of $\C^n(3,2,1)$ cubes can be determined by complementation, i.e.\ by exchanging $0\leftrightarrow 1$. This is a bijection between $\C^n(v,k,\lambda)$ and $\C^n(v,v-k,v-2k+\lambda)$. \begin{proposition} The total number of $\C^n(3,2,1)$-cubes is $3\cdot 2^{n-1}$ and they are all isotopic. \end{proposition} \begin{proof} By complementation, it suffices to count cubes in $\C^n(3,1,0)$. They are in $1$-to-$1$ correspondence with latin hypercubes of order $v=3$ and dimension $n-1$. The number of such hypercubes is known to be $|\H_3^{n-1}|=3\cdot 2^{n-1}$, see~\cite[page 726]{MW08} or~\cite[Theorem~1]{ES06}. The proof in~\cite{MW08} relies on the fact that all latin hypercubes of order $v\le 3$ are linear. Linear hypercubes are easy to count and they are all isotopic. Therefore, the cubes in $\C^n(3,2,1)$ are also all isotopic. \end{proof} \begin{proposition}\label{nocompl} The complement of a $\P$-cube of dimension $n\ge 3$ is not a $\P$-cube. \end{proposition} \begin{proof} By~\cite[Proposition~2.3]{KR24}, $\P^n(v,k,\lambda)$-cubes have $vk$ incidences ($1$-entries) regardless of the dimension~$n$. The number of incidences in the complement is $v^n-vk=v(v^{n-1}-k)$. This is too large for any $\P^n(v,k',\lambda')$-cube if $n>2$. \end{proof} Cubes in $\P^n(3,2,1)$ have been classified directly in~\cite{KR24}. The result is reproduced in the first row of Table~\ref{tab1}. We see that $\nu(3,2,1)=5$; the bound~\eqref{dimbound} gives $\nu(3,2,1)\le 6$ and is not tight. The next feasible parameters are $(7,3,1)$. It is well known that the Fano plane is the unique $(7,3,1)$ design. In~\cite{KR24}, two inequivalent $\P^3(7,3,1)$-cubes are presented as Examples~2.2 and~2.6. Here we present a full classification of $\P^n(7,3,1)$ and $\P^n(7,4,2)$-cubes. \begin{theorem}\label{class731} The numbers of cubes in $\P^n(7,3,1)$ and $\P^n(7,4,2)$ up to equivalence are given in Table~\textup{\ref{tab1}}. In particular, $\nu(7,3,1)=7$ and $\nu(7,4,2)=9$. 
\end{theorem} \begin{table}[!h] \begin{tabular}{|c|ccccccccc|} \hline & \multicolumn{9}{c|}{$n$} \\[1mm] $(v,k,\lambda)$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \rule{0mm}{5mm}$(3,2,1)$ & 1 & 2 & 1 & 1 & 0 & & & & \\[1mm] $(7,3,1)$ & 1 & 13 & 20 & 4 & 3 & 2 & 0 & 0 & 0 \\[1mm] $(7,4,2)$ & 1 & 877 & 884 & 74 & 19 & 9 & 6 & 5 & 0 \\[1mm] \hline \end{tabular} \vskip 2mm \caption{Numbers of $\P^n(v,k,\lambda)$-cubes up to equivalence.}\label{tab1} \end{table} The proof is a computer calculation that was carried out using the \emph{orthogonal array representation} of $\P$-cubes. A function $C:\{1,\ldots,v\}^n\to \{0,1\}$ is the characteristic function of a set of $n$-tuples $$\overline{C}=\{(i_1,\ldots,i_n)\in \{1,\ldots,v\}^n \mid C(i_1,\ldots,i_n)=1\}.$$ If $C\in \P^n(v,k,\lambda)$, then $\overline{C}$ is an orthogonal array $OA(vk,n,v,1)$ \cite[Corollary~2.5]{KR24}. A general orthogonal array $OA(N,n,v,t)$ of \emph{size}~$N$, \emph{degree}~$n$, \emph{order}~$v$, and \emph{strength}~$t$ is an $N$-set of $n$-tuples from $\{1,\ldots,v\}^n$ such that for any choice of~$t$ coordinates, each $t$-tuple from $\{1,\ldots,v\}^t$ appears as a restriction of the $n$-tuples to the chosen coordinates exactly~$\lambda$ times. This parameter is called the \emph{index} of the orthogonal array and is given by $\lambda=N/v^t$. In our case the index is~$k$, meaning that each element from $\{1,\ldots,v\}$ appears exactly~$k$ times in every coordinate. This property is not sufficient; a characterization of $OA(vk,n,v,1)$'s representing $\P^n(v,k,\lambda)$-cubes is given in \cite[Proposition~2.4]{KR24}. The OA-representation of $\P$-cubes is convenient for classification because the size $N=vk$ remains constant across all dimensions~$n$. We can take the $21$ incident pairs of the Fano plane and extend them to triples representing $\P^3(7,3,1)$-cubes, and continue increasing the dimension by~$1$ in this way. If the conditions from \cite[Proposition~2.4]{KR24} are not taken into account, the number of extensions of an $OA(vk,n,v,1)$ is $(vk)!/(k!)^v$. For parameters $(7,3,1)$ this is already too large for an exhaustive computer search, even if we do check the conditions for partial extensions using a backtracking algorithm. We remedy this by performing isomorph rejection. As an intermediate step, we add the entry~$1$ to the new coordinate in ${vk\choose k}$ ways and check the necessary and sufficient conditions. We then eliminate equivalent copies among the valid partial extensions, and proceed by adding the next entry~$2$ in the same way. When we reach the last entry~$v$, we have all OA-representations of $\P^{n+1}(v,k,\lambda)$-cubes up to equivalence. We performed this calculation for parameters $(7,3,1)$ and $(7,4,2)$, increasing the dimension until no further extension is possible. We relied on canonical labelings produced by nauty and Traces~\cite{MP14} to eliminate equivalent (partial) OA-representations of cubes. The results $\nu(7,3,1)=7$ and $\nu(7,4,2)=9$ are considerably smaller than the bound~\eqref{dimbound}, which gives~$28$ for $v=7$. Online versions of Table~\ref{tab1} and other tables in this paper are available on the web page \begin{center} \url{https://web.math.pmf.unizg.hr/~krcko/results/pcubes.html} \end{center} The online tables contain links to files with OA-representations of the corresponding cubes that can be used to verify our calculations. The files are given in GAP~\cite{GAP} format.
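As a small illustration of the OA-representation just described (an illustrative sketch in Python, with our own function names, and not the code used to prove Theorem~\ref{class731}), the following fragment builds the $21$ incident pairs of the Fano plane as the development of the difference set $\{1,2,4\}\subseteq\Z_7$, verifies that they form an $OA(21,2,7,1)$, and evaluates the naive extension count $(vk)!/(k!)^v$ that makes isomorph rejection necessary.
\begin{verbatim}
# Illustrative sketch only.  The support of a cube in P^n(v,k,lambda) is an
# OA(vk, n, v, 1): every symbol occurs exactly k times in every coordinate.
from collections import Counter
from math import factorial

def is_OA_strength1(rows, v, k):
    n = len(next(iter(rows)))
    return len(rows) == v * k and all(
        Counter(r[x] for r in rows) == Counter({s: k for s in range(v)})
        for x in range(n))

def naive_extension_count(v, k):
    """(vk)!/(k!)^v ways to append a new coordinate, ignoring all conditions."""
    return factorial(v * k) // factorial(k) ** v

# the 21 incident (point, block) pairs of the Fano plane, as dev of {1,2,4} in Z_7
fano = {((d + g) % 7, g) for d in (1, 2, 4) for g in range(7)}
print(is_OA_strength1(fano, v=7, k=3))    # True
print(naive_extension_count(7, 3))        # 21!/3!^7 = 182509367040000
\end{verbatim}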
The GAP package \emph{Prescribed automorphism groups}~\cite{PAG} contains functions for working with $\P$- and $\C$-cubes; see the package manual. We were not able to perform a full classification for the next parameters $(11,5,2)$, but this might be within reach of today's computer technology. Instead, we classify $\P^n(11,5,2)$-cubes with nontrivial autotopy groups in Theorem~\ref{v11aut}. Computer classification of $\C$-cubes is a much more difficult problem. $\C^n(v,k,\lambda)$-cubes can also be represented as orthogonal arrays $OA(kv^{n-1},n,v,n-1)$ of index $k$, see~\cite[Section~2]{KPT24}. However, the size $N=kv^{n-1}$ grows exponentially with the dimension~$n$, so another approach is needed. We have not found a feasible classification strategy even for $\C^3(7,3,1)$. \section{Autotopies}\label{sec3} Isotopy from an incidence cube to itself is called \emph{autotopy}. The set of all autotopies of~$C$ forms a group with coordinatewise composition, called the \emph{full autotopy group} and denoted by $\Atop(C)$. This is a generalization of the full automorphism group of a symmetric design. Automorphisms of symmetric designs are usually defined as single permutations of points, because the corresponding permutations of blocks are uniquely determined. This is a consequence of Lemma~\ref{nonsing}: if $A=P_1AP_2^t$ holds for permutation matrices~$P_1$ and~$P_2$, then $P_2=A^{-1}P_1A$. The next two propositions describe how this carries over to higher-dimensional $\P$- and $\C$-cubes. \begin{proposition} Let $(\pi_1,\ldots,\pi_n)$ be an autotopy of $C\in \P^n(v,k,\lambda)$. Then any component $\pi_x$ uniquely determines all other components. \end{proposition} \begin{proof} For any other component $\pi_y$, the pair $(\pi_x,\pi_y)$ is an automorphism of the symmetric design $\Pi_{xy}(C)$. Therefore, $\pi_y$ is uniquely determined by~$\pi_x$ and $C$. \end{proof} \begin{proposition} Let $(\pi_1,\ldots,\pi_n)$ be an autotopy of $C\in \C^n(v,k,\lambda)$. Then any component $\pi_x$ is uniquely determined by the $n-1$ other components. \end{proposition} \begin{proof} Suppose we want to determine $\pi_n$ from $\pi_1,\ldots,\pi_{n-1}$ and~$C$. Let~$A'$ and~$A$ be the $2$-sections of~$C$ obtained by fixing the first $n-2$ coordinates to $1,\ldots,1$ and $\pi_1^{-1}(1),\ldots,\pi_{n-2}^{-1}(1)$, respectively. According to the definition, they are incidence matrices of $(v,k,\lambda)$ designs. The pair $(\pi_{n-1},\pi_{n})$ is an isomorphism between~$A$ and~$A'$: \begin{align*} A'(i,j) &= C(1,\ldots,1,i,j)= C(\pi_1^{-1}(1),\ldots,\pi_{n-2}^{-1}(1),\pi_{n-1}^{-1}(i),\pi_n^{-1}(j))\\[1mm] &= A(\pi_{n-1}^{-1}(i),\pi_n^{-1}(j)), \kern 2mm \forall i,j\in\{1,\ldots,v\}. \end{align*} We can write this in matrix form as $A'=P_{n-1}AP_n^t$. Thus, $\pi_n$ is uniquely determined by $P_n=(A')^{-1}P_{n-1}A$. \end{proof} The previous proposition is best possible, in the sense that~$\pi_x$ is not determined by fewer than $n-1$ other components of the autotopy. By Theorem~3.4 of~\cite{KPT24}, a $\C$-cube of dimension~$n$ constructed from a $(v,k,\lambda)$ difference set in~$G$ has an autotopy group isomorphic to $G^{n-1}$. In this group there are~$v$ different choices for $\pi_{n-1}$ and $\pi_n$ for any given components $\pi_1,\ldots,\pi_{n-2}$. It is known that automorphisms of symmetric $(v,k,\lambda)$ designs fix as many points as blocks. The analogous statement is true for $\P$-cubes. \begin{proposition}\label{fixp} Let $\pi\in \Atop(C)$ be an autotopy of $C\in \P^n(v,k,\lambda)$. 
Then every component $\pi_x$ has the same number of fixed points. \end{proposition} \begin{proof} The claim follows from the observation that $(\pi_x,\pi_y)$ is an automorphism of the symmetric design $A=\Pi_{xy}(C)$. If $P_x$ and $P_y$ are the corresponding permutation matrices, then $A=P_x A P_y^t$ holds. By Lemma~\ref{nonsing} we can write this as $P_y=A^{-1}P_x A$. Now $P_x$ and $P_y$ have the same trace, and this is the number of fixed points of $\pi_x$ and $\pi_y$. \end{proof} As a consequence, autotopy groups of $\P$-cubes have the same number of orbits on each coordinate. \begin{proposition} Let $G\le \Atop(C)$ be an autotopy group of $C\in \P^n(v,k,\lambda)$. For any $x\in \{1,\ldots,n\}$, $G_x=\{\pi_x \mid \pi\in G\}$ is a permutation group acting on $\{1,\ldots,v\}$; the number of orbits of $G_x$ does not depend on the choice of~$x$. \end{proposition} \begin{proof} By the Burnside--Cauchy--Frobenius lemma, the number of orbits can be expressed as $$\frac{1}{|G_x|} \sum_{\pi_x \in G_x} f(\pi_x) = \frac{1}{|G|} \sum_{\pi \in G} f(\pi_x).$$ Here $f(\pi_x)$ is the number of fixed points and does not depend on~$x$ by Proposition~\ref{fixp}. \end{proof} Now it is clear that an autotopy group $G\le \Atop(C)$ of a $\P$-cube~$C$ acts (sharply) transitively on one coordinate if and only if it acts (sharply) transitively on every coordinate; cf.\ \cite[Proposition~3.7]{KR24}. Furthermore, bounds on the number of fixed points of automorphisms of symmetric $(v,k,\lambda)$ designs apply to all components $\pi_x$ of autotopies of $\P^n(v,k,\lambda)$-cubes. For example, $f(\pi_x)\le v/2$ by \cite[Theorem~3]{WF70} and $f(\pi_x)\le k-\sqrt{k-\lambda}$ by \cite[Corollary~2.5.6]{SBW83}. On the other hand, autotopies of $\C$-cubes can have components with different numbers of fixed points. The $\C^3(7,3,1)$ cube of~\cite[Example~2.2]{KPT24} has an autotopy of order~$7$ fixing $(0,0,7)$ points per component. In~\cite[Propositions~5.1 and 5.3]{KPT24} a total of $2396$ inequivalent $\C^3(16,6,2)$-cubes have been constructed. They have autotopies fixing $(0,0,f)$ points per component for $f=2,4,6,8,12,14,16$. This also provides counterexamples for bounds on the number of fixed points and other claims stated above. Our next goal is to classify $\P^n(11,5,2)$-cubes with nontrivial autotopy groups. It suffices to consider autotopies of prime orders~$p$. By~\cite[Theorem~2.7]{MA71}, either $p$ divides $v$ or $p\le k$. The unique $(11,5,2)$ design has full automorphism group of order $660$, hence all possible orders $p\in \{2,3,5,11\}$ occur for dimension $n=2$. Cubes with autotopies of order~$11$ are obtained from higher-dimensional $(11,5,2)$ difference sets and have already been classified in~\cite{KR24}; the result is reproduced in the first row of Table~\ref{tab2}. Interestingly, each one of these cubes has full autotopy group of order~$55$ isomorphic to $\Z_{11}\rtimes \Z_5$. For $p=5$ we adopt the classification strategy from the previous section of successively increasing the dimension. By Proposition~\ref{fixp} we know that every component of an autotopy of order~$5$ has one fixed point and two cycles of length~$5$. Crucially, autotopies are preserved when $\P^n(11,5,2)$-cubes are restricted to $n-1$ dimensions by deleting a coordinate in the OA-representation. We can therefore extend the $\P^{n-1}(11,5,2)$-cubes with autotopy of order~$5$, respecting the autotopy on the added coordinate and the conditions from \cite[Proposition~2.4]{KR24}. 
If we do this exhaustively, we get all $\P^n(11,5,2)$-cubes with the prescribed autotopy. The procedure was implemented in the C programming language and required a modest amount of CPU time to perform a full classification for $p=5$. Nauty and Traces~\cite{MP14} were used to eliminate equivalent cubes and to compute full autotopy groups. \begin{proposition}\label{v11p5} Up to equivalence, the numbers of $\P^n(11,5,2)$-cubes with an autotopy of order~$5$ are given in the second row of Table~\textup{\ref{tab2}}. \end{proposition} \begin{table}[t] \begin{tabular}{|c|ccccccccccc|} \hline & \multicolumn{11}{c|}{$n$}\\[1mm] $p$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12\\ \hline \rule{0mm}{5mm}$11$ & 1 & 2 & 4 & 6 & 6 & 4 & 2 & 1 & 1 & 1 & 0 \\[1mm] $5$ & 1 & 283 & 443 & 8 & 7 & 4 & 2 & 1 & 1 & 1 & 0 \\[1mm] $3$ & 1 & 4758 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[1mm] $2$ & 1 & 5142 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[1mm] \hline \rule{0mm}{5mm}Total & 1 & 10178 & 443 & 8 & 7 & 4 & 2 & 1 & 1 & 1 & 0 \\[1mm] \hline \end{tabular} \vskip 2mm \caption{The $\P^n(11,5,2)$-cubes with nontrivial autotopies.}\label{tab2} \end{table} All the new examples from the previous proposition have full autotopy groups of order~$5$. Of course, the examples with full autotopy groups of order~$55$ were also recovered. The next case $p=3$ was settled by a similar calculation. \begin{proposition} The number of inequivalent $\P^n(11,5,2)$-cubes with an autotopy of order~$3$ is $4758$ for $n=3$ and $0$ for $n\ge 4$; see the third row of Table~\textup{\ref{tab2}}. \end{proposition} \begin{proof} Automorphisms of order~$3$ of the unique $(11,5,2)$ design have $2$ fixed points, each incident with $2$ fixed blocks. By Proposition~\ref{fixp} we can assume that all components of the autotopy are $\pi_x=(3,4,5)(6,7,8)(9,10,11)$. An exhaustive computer search, systematically extending the $55$ incident pairs of the $(11,5,2)$ design, produced $4758$ inequivalent OA-representations of $\P^3(11,5,2)$-cubes invariant under the prescribed autotopy. A few hours of CPU time were required. In the next step none of the $3$-cubes extend to~$4$ dimensions. This can be inferred directly by considering the fixed elements. For $n=2$, incidences of the fixed elements are $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$. The pairs can be extended to triples $(1,1,1)$, $(1,2,2)$, $(2,1,2)$, $(2,2,1)$, but not to quadruples of fixed elements satisfying all requirements. \end{proof} Five $3$-cubes from the previous proposition have full autotopy groups of order~$12$ isomorphic to $(\Z_2\times \Z_2)\rtimes \Z_3$. The $4753$ other $3$-cubes have full autotopy groups of order~$3$. It remains to classify cubes with autotopies of order~$p=2$, i.e.\ involutions. \begin{proposition}\label{v11p2} Up to equivalence, the number of $\P^n(11,5,2)$-cubes with an involutory autotopy is $5142$ for $n=3$ and $0$ for $n\ge 4$; see the fourth row of Table~\textup{\ref{tab2}}. \end{proposition} \begin{proof} Automorphisms of order $p=2$ of the $(11,5,2)$ design have~$3$ fixed points and blocks, forming $3$ incident pairs. Now incidences of the fixed elements can be extended to any dimension. The cubes exist for $n=3$; we classified them by the algorithm described above. The resulting $5142$ inequivalent $3$-cubes cannot be extended to dimension $n=4$. This was established by running the algorithm in parallel and required some $2$ years of CPU time in total. Not a single extension was found, so $\P^n(11,5,2)$-cubes with an involutory autotopy do not exist for $n\ge 4$.
\end{proof} The five $\P^3(11,5,2)$-cubes with full autotopy groups of order~$12$ were also found in Proposition~\ref{v11p2}. Of the remaining cubes, $71$ have full autotopy groups of order~$4$ isomorphic to $\Z_2\times \Z_2$ and $5066$ have full autotopy groups of order~$2$. We calculated the total numbers of $\P^n(11,5,2)$-cubes with nontrivial autotopies by concatenating the lists for $p=11$, $5$, $3$, $2$ and eliminating equivalent copies. \begin{table}[!b] \begin{tabular}{|c|cccccccccc|c|} \hline $(T,P)$ & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_R,\D_R)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_R,\D_G)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_R,\D_B)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_G,\D_G)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_G,\D_B)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_R,\D_B,\D_B)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_G,\D_G,\D_G)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_G,\D_G,\D_B)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_G,\D_B,\D_B)$\rule{1mm}{0mm}} & \rotatebox{90}{\rule{-1.5mm}{0mm}$(\D_B,\D_B,\D_B)$\rule{1mm}{0mm}} & Total \\[1mm] \hline \hline \rule{0mm}{5mm}$(8,1)$ & 4 & 48 & 76 & 124 & 152 & 56 & 102 & 136 & 48 & 35 & 781 \\[1mm] $(8,2)$ & 21 & 0 & 8 & 72 & 0 & 64 & 0 & 8 & 0 & 6 & 179 \\[1mm] $(8,3)$ & 0 & 0 & 0 & 0 & 0 & 0 & 6 & 0 & 0 & 5 & 11 \\[1mm] $(8,6)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 3 \\[1mm] $(16,1)$ & 23 & 8 & 0 & 16 & 0 & 0 & 12 & 0 & 0 & 0 & 59 \\[1mm] $(16,2)$ & 18 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 26 \\[1mm] $(16,3)$ & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 4 \\[1mm] $(16,6)$ & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 \\[1mm] $(32,1)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] $(32,2)$ & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 \\[1mm] $(32,3)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] $(32,6)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] $(48,2)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] $(48,6)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] $(96,6)$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\[1mm] \hline \rule{0mm}{5mm}Total & 80 & 56 & 84 & 220 & 152 & 120 & 124 & 144 & 48 & 48 & 1076 \\[1mm] \hline \end{tabular} \vskip 2mm \caption{$\P^3(16,6,2)$-cubes with an autotopy of order~$8$ acting semiregularly.}\label{tab3} \end{table} \begin{theorem}\label{v11aut} Cubes in $\P^n(11,5,2)$ with nontrivial autotopy groups exist if and only if $2\le n\le 11$. The number of such cubes up to equivalence is $1$, $10178$, $443$, $8$, $7$, $4$, $2$, $1$, $1$, and $1$ for successive dimensions~$n$ in this range. \end{theorem} In \cite{KPT24} and \cite{KR24}, constructions of $C^3(16,6,2)$ and $\P^3(16,6,2)$-cubes with prescribed autotopy groups $G$ were performed by a different approach. Essentially, all orthogonal arrays $OA(1536,3,16,2)$ and $OA(96,$ $3,16,1)$ invariant under~$G$ were constructed and the ones not representing $\C$- and $\P$-cubes were discarded. Because of high proportions of ``bad'' OAs, rather large prescribed groups had to be used: $|G|\ge 512$ in~\cite{KPT24} and $|G|\ge 18$ in~\cite{KR24}. We now have a more efficient approach for $\P$-cubes and can handle smaller groups. As an example we present the following result. 
\begin{figure}[!b] \begin{center} \includegraphics[width=127mm]{16-6-2rgb.jpg} \end{center} \caption{A $\P^3(16,6,2)$-cube with non-isomorphic projections.}\label{fig1} \end{figure} \begin{proposition}\label{v16aut8} There are exactly $1076$ inequivalent $\P^3(16,6,2)$-cubes with an autotopy of order~$8$ acting in two cycles on each coordinate. \end{proposition} \begin{proof} Three $(16,6,2)$ designs exist~\cite{MR07}. We will call them the \emph{red}, \emph{green} and \emph{blue} design and denote them by $\D_R$, $\D_G$, and $\D_B$. The full automorphism groups are of orders $|\Aut(\D_R)| = 11520$, $|\Aut(\D_G)| = 768$, and $|\Aut(\D_B)| = 384$. All three designs have automorphisms of order~$8$ acting in two cycles of length~$8$ on the points and blocks. We ran the dimension-increasing algorithm and found that they extend to $440$, $744$, and $596$ inequivalent $\P^3(16,6,2)$-cubes invariant under the prescribed autotopy, respectively. The total number of extensions was determined by concatenating the lists and eliminating equivalent cubes. \end{proof} Detailed statistics of the $1076$ cubes are given in Table~\ref{tab3}. The cubes are divided according to the size of the full auto(para)topy group and the $2$-projections. The \emph{full autoparatopy group} $\Apar(C)$ contains all combinations of isotopy and conjugation mapping~$C$ onto itself. The group size is given as $(T,P)$, where $T=|\Atop(C)|$ and $P=|\Apar(C)| / T$. The question whether $\P^3(16,6,2)$-cubes with three non-isomorphic projections exist was raised in~\cite{KR24}. From the table we see that there are $152$ cubes with projections $(\D_R,\D_G,\D_B)$ and the prescribed autotopy of order~$8$. One example is shown in Figure~\ref{fig1}, rendered in the ray tracing software POV-Ray~\cite{POVRay}. Red, green, and blue light sources were placed so that the projections appear as shadows in the appropriate color. \section{Difference sets}\label{sec4} Let $G$ be an additively written group of order~$v$, not necessarily abelian. Henceforth we will index cubes with the elements of~$G$ instead of the integers $\{1,\ldots,v\}$. Thus, $\C$- and $\P$-cubes are functions $C:G^n\to \{0,1\}$ with all $2$-sections or $2$-projections being incidence matrices of symmetric $(v,k,\lambda)$ designs. A \emph{$(v,k,\lambda)$ difference set} in~$G$ is a $k$-subset $D\subseteq G$ such that every element $g\in G\setminus\{0\}$ can be written as $g=a-b$ with $a,b\in D$ in exactly~$\lambda$ ways. By \cite[Theorem~3.1]{KPT24}, difference sets give rise to $\C$-cubes of arbitrary dimension $n$. Using the Iverson bracket, a cube $C\in \C^n(v,k,\lambda)$ is given by $$C(g_1,\ldots,g_n)=[g_1+\ldots+g_n\in D].$$ This is a special case of \cite[Theorem~2.9]{dL90}. Properties of these \emph{difference cubes} were studied in Section~3 of~\cite{KPT24}. In Section~4 of~\cite{KPT24}, the construction was generalized to the so-called \emph{group cubes} \cite[Theorem~4.1]{KPT24}. This construction gives examples not equivalent to any difference cube, although it still requires difference sets. The only known examples of $\C$-cubes not coming from \cite[Theorem~4.1]{KPT24} are the $\C^3(16,6,2)$-cubes of \cite[Proposition~5.3]{KPT24}. A stronger kind of difference set is needed for $\P$-cubes. An \emph{$n$-dimensional $(v,k,\lambda)$ difference set} \cite[Definition~3.1]{KR24} is a $k$-subset of $n$-tuples $D\subseteq G^n$ such that $\{d_x-d_y \mid d\in D\}$ is an ``ordinary'' $(v,k,\lambda)$ difference set for every pair of coordinates $1\le x<y\le n$.
By~\cite[Proposition~3.2]{KR24}, the development $$\dev D = \{(d_1+g,\ldots,d_n+g) \mid g\in G,\, d\in D\}\subseteq G^n$$ is an OA-representation~$\overline{C}$ of a cube $C\in \P^n(v,k,\lambda)$ indexed by~$G$. Properties of these cubes were studied in Section~3 of~\cite{KR24}. There it was shown that $n$-dimensional difference sets can be normalized so that all $n$-tuples in~$D$ start with a $0$ coordinate. We will now prove a linear bound on the dimension of~$D$ using a different kind of normalization. \begin{theorem}\label{dsbound} If an $n$-dimensional $(v,k,\lambda)$ difference set~$D\subseteq G^n$ exists, then $n\le v$. \end{theorem} \begin{proof} For a group element $g\in G$ and an $n$-tuple $d=(d_1,\ldots,d_n)\in G^n$, write $g+_x d=(d_1,\ldots,d_{x-1},g+d_x,d_{x+1},\ldots,d_n)$. We claim that $D'=\{g+_x d\mid d\in D\}$ is also an $n$-dimensional $(v,k,\lambda)$ difference set. For any other index $y\in \{1,\ldots,n\}$, the set $\{d_x-d_y \mid d\in D'\}=\{g+d_x-d_y \mid d\in D\}$ is a left translate of the difference set $\{d_x-d_y\mid d\in D\}$, hence also a $(v,k,\lambda)$ difference set. By repeated application of this transformation we can make an $n$-dimensional difference set~$D'$ containing the $n$-tuple $(0,\ldots,0)$. Now any other $n$-tuple $d\in D'$ must have distinct coordinates, because $d_x=d_y$ would imply that the set of differences $\{d_x-d_y \mid d\in D'\}$ contains fewer than~$k$ elements of~$G$. Therefore, the number of coordinates~$n$ cannot exceed $v=|G|$. \end{proof} The normalization of $n$-dimensional difference sets from~\cite{KR24} does not change the development~$\dev D$. The new normalization can change the development, but $\dev D$ and $\dev D'$ are always isotopic. In~\cite[Propositions~3.5 and~3.7]{KR24}, $\P$-cubes coming from difference sets were characterized as having an autotopy group acting regularly on the coordinates. Theorem~\ref{class731} shows that the bound $n\le v$ does not hold for general $\P^n(v,k,\lambda)$-cubes. The largest integer~$n$ such that an $n$-dimensional $(v,k,\lambda)$ difference set in~$G$ exists was denoted by $\mu_G(v,k,\lambda)$ in~\cite{KR24}. To determine values of this function, a computer classification of small $n$-dimensional difference sets was performed using the library of difference sets available in the GAP package~\cite{DP19}. In~\cite{KR24}, these calculations were performed in GAP. Exact values of $\mu_G(16,6,2)$ were determined in~\cite[Table 3]{KR24} for $10$ of the $14$ groups of order~$16$, and lower bounds were given for the remaining $4$ groups. We have implemented the classification algorithm in the C programming language and determined more exact values of~$\mu$ and better lower bounds. Detailed results of our calculations are presented in Table~\ref{tab4}. The newly established values of~$\mu$ are summarized in Theorem~\ref{newmuval}. When there is only one group of order~$v$ up to isomorphism we write $\mu(v,k,\lambda)$, and when there are several groups we identify them by their ID in the GAP library of small groups~\cite{GAP}. \begin{theorem}\label{newmuval} For groups of order~$16$ with IDs $10$ and $13$ the values of $\mu_G(16,6,2)$ are $4$ and $8$. For groups with IDs $2$, $3$, $4$, $5$, $6$, and $8$ the values of $\mu_G(16,10,6)$ are $6$, $6$, $6$, $5$, $4$, and $6$, respectively. Furthermore, $\mu(13,9,6)=13$, $\mu(19,9,4)=\mu(19,10,5)=19$, $\mu(23,11,5)=\mu(23,12,6)=23$, and $\mu(31,6,1)=31$.
\end{theorem} The previously known values of~$\mu$ are given in \cite[Tables 2 and 3]{KR24} and can also be read from Table~\ref{tab4}. The groups of order $16$ with IDs $1$, $7$, $12$, and $14$ are missing from Table~\ref{tab4}. The former two groups are the cyclic group $\Z_{16}$ and the dihedral group $D_{16}$, which do not contain difference sets. For the latter two groups we could not completely classify $n$-dimensional difference sets due to large numbers of inequivalent $\P$-cubes arising from them. Example~\ref{exid12} establishes an improved lower bound $\mu_G(16,6,2)\ge 11$ for the group with ID $12$. The group with ID $14$ is the elementary abelian group $\Z_2^4$ and will be dealt with in Corollary~\ref{mu16elab}. \begin{example}\label{exid12} The group of order $16$ with GAP ID $12$ is isomorphic to $G=\Z_2\times Q_8$, where $Q_8$ is the quaternion group. If elements of the factor groups are $\Z_2=\{0,1\}$ and $Q_8=\{1,i,j,k,-1,-i,-j,-k\}$, the following is a $11$-dimensional $(16,6,2)$ difference set in~$G$: {\footnotesize \begin{align*} \{\, & ( (0,1), (0,1), (0,1), (0,j), (0,i), (0,j), (0,k), (1,1), (1,i), (0,1), (1,-i) ),\\[0.3mm] & ( (0,1), (0,j), (1,1), (0,1), (0,-i), (1,1), (0,j), (0,k), (1,1), (1,i), (1,-j) ),\\[0.3mm] & ( (0,1), (0,i), (0,j), (1,1), (1,-j), (0,i), (1,1), (0,j), (0,-1), (1,1), (1,i) ),\\[0.3mm] & ( (0,1), (1,1), (0,i), (1,-k), (0,1), (0,-i), (0,-1), (0,-1), (0,j), (1,-i), (0,j) ),\\[0.3mm] & ( (0,1), (0,-1), (1,-k), (0,-1), (1,1), (0,1), (0,1), (1,i), (0,k), (1,-j), (0,1) ),\\[0.3mm] & ( (0,1), (1,-k), (0,-1), (0,i), (0,j), (1,-j), (1,i), (0,1), (0,1), (0,j), (1,1) )\, \}. \end{align*}} \end{example} We see that equality is reached quite often in the upper bound $\mu_G(v,k,\lambda)\le v$ of Theorem~\ref{dsbound}. For parameters $(q,(q-1)/2,(q-3)/4)$ this is explained by the construction of $q$-dimensional Paley difference sets in the additive group of $\F_q$, $q\equiv 3 \pmod{4}$; see \cite[Theorem~4.1]{KR24}. It was noted that the same construction works for cyclotomic difference sets ($4$th and $8$th powers in~$\F_q$ for appropriate orders~$q$), but this does not explain the equality $\mu(13,4,1)= \mu(13,9,6)=13$. We will now generalize this construction to difference sets with arbitrary parameters in elementary abelian groups. 
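Before stating the general result, we record a quick computational check of the Paley-type extension mentioned above. The following Python fragment (an illustrative sketch with our own conventions, not the GAP or C code behind Table~\ref{tab4}) verifies the defining property of an $n$-dimensional difference set for the $(7,3,1)$ set $\{1,2,4\}\subseteq\Z_7$, extended to dimension~$7$ by coordinatewise multiplication with $(0,1,\alpha,\ldots,\alpha^5)$ for the primitive root $\alpha=3$; this is an instance of the construction proved in general in Theorem~\ref{tmelab} below.
\begin{verbatim}
# Illustrative sketch only.  An n-dimensional (v,k,lambda) difference set in Z_v:
# for every pair of coordinates x < y, the differences d_x - d_y form an ordinary
# (v,k,lambda) difference set.
from collections import Counter

def is_difference_set(S, v, lam):
    diffs = Counter((a - b) % v for a in S for b in S if a != b)
    return len(set(S)) == len(S) and all(diffs[g] == lam for g in range(1, v))

def is_ndim_difference_set(D, v, lam):
    n = len(D[0])
    return all(is_difference_set([(d[x] - d[y]) % v for d in D], v, lam)
               for x in range(n) for y in range(x + 1, n))

w = [0] + [pow(3, i, 7) for i in range(6)]          # (0, 1, 3, 2, 6, 4, 5)
vecD = [tuple(d * wi % 7 for wi in w) for d in (1, 2, 4)]
print(is_ndim_difference_set(vecD, v=7, lam=1))     # True
\end{verbatim}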
\begin{landscape} \begin{table}[!h] \vskip 9mm \begin{tabular}{|cc|cccccccccccccccccc|} \hline & & \multicolumn{18}{c|}{$n$} \\[1mm] $G$ & $(v,k,\lambda)$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\[1mm] \hline \hline \multirow{2}{*}{$\Z_7$} & \rule{0mm}{4.5mm}$(7,3,1)$ & 1 & 2 & 2 & 1 & 1 & 1 & & & & & & & & & & & & \\[1mm] & $(7,4,2)$ & 1 & 2 & 2 & 1 & 1 & 1 & & & & & & & & & & & & \\[0.5mm] \hline \multirow{2}{*}{$\Z_{11}$} & \rule{0mm}{4.5mm}$(11,5,2)$ & 1 & 2 & 4 & 6 & 6 & 4 & 2 & 1 & 1 & 1 & & & & & & & & \\[1mm] & $(11,6,3)$ & 1 & 2 & 4 & 6 & 6 & 4 & 2 & 1 & 1 & 1 & & & & & & & & \\[0.5mm] \hline \multirow{2}{*}{$\Z_{13}$} & \rule{0mm}{4.5mm}$(13,4,1)$ & 1 & 3 & 7 & 10 & 14 & 14 & 10 & 7 & 3 & 1 & 1 & 1 & & & & & & \\[1mm] & $(13,9,6)$ & 1 & 146 & 422 & 652 & 305 & 60 & 13 & 8 & 3 & 1 & 1 & 1 & & & & & & \\[0.5mm] \hline \multirow{2}{*}{$\Z_{15}$} & \rule{0mm}{4.5mm}$(15,7,3)$ & 1 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & & \\[1mm] & $(15,8,4)$ & 1 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & & \\[0.5mm] \hline \multirow{2}{*}{ID2} & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 31 & 81 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 1 & 2565 & 152314 & 12115 & 36 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \multirow{2}{*}{ID3} & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 16 & 55 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 1 & 6638 & 462880 & 111294 & 196 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \multirow{2}{*}{ID4} & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 38 & 113 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 1 & 6516 & 389060 & 34076 & 53 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \end{tabular} \vskip 5mm \caption{Numbers of inequivalent $\P^n(v,k,\lambda)$-cubes obtained from difference sets.}\label{tab4} \end{table} \addtocounter{table}{-1} \setlength{\tabcolsep}{3.8pt} \begin{table}[!h] \vskip 9mm \begin{tabular}{|cc|cccccccccccccccccc|} \hline & & \multicolumn{18}{c|}{$n$} \\[1mm] $G$ & $(v,k,\lambda)$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\[1mm] \hline \hline \multirow{2}{*}{ID5} & \rule{0mm}{4.5mm}$(16,6,2)$ & 2 & 56 & 140 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 2 & 10680 & 323520 & 6874 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \multirow{2}{*}{ID6} & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 8 & 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 1 & 506 & 1192 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \multirow{2}{*}{ID8} & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 18 & 44 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[1mm] & $(16,10,6)$ & 1 & 3746 & 76580 & 5444 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline ID9 & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 38 & 112 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline ID10 & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 86 & 1941 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline ID11 & \rule{0mm}{4.5mm}$(16,6,2)$ & 1 & 24 & 88 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline ID13 & \rule{0mm}{4.5mm}$(16,6,2)$ & 2 & 129 & 4960 & 19734 & 8106 & 374 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\[0.5mm] \hline \multirow{2}{*}{$\Z_{19}$} & \rule{0mm}{4.5mm}$(19,9,4)$ & 1 & 8 & 14 & 36 & 86 & 154 & 228 & 280 & 280 & 
228 & 154 & 86 & 36 & 14 & 4 & 1 & 1 & 1 \\[1mm] & $(19,10,5)$ & 1 & 8 & 14 & 36 & 86 & 154 & 228 & 280 & 280 & 228 & 154 & 86 & 36 & 14 & 4 & 1 & 1 & 1 \\[0.5mm] \hline $\Z_{21}$ & \rule{0mm}{4.5mm}$(21,5,1)$ & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\cdots$ \\[0.5mm] \hline $F_{21}$ & \rule{0mm}{4.5mm}$(21,5,1)$ & 1 & 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\cdots$ \\[0.5mm] \hline \end{tabular} \vskip 5mm \caption{Numbers of inequivalent $\P^n(v,k,\lambda)$-cubes obtained from difference sets (continued).} \end{table} \clearpage \end{landscape} \addtocounter{table}{-1} \begin{table}[!t] \begin{tabular}{|cc|cccccccccc|} \hline & & \multicolumn{10}{c|}{$n$} \\[1mm] $G$ & $(v,k,\lambda)$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\[1mm] \hline \hline \multirow{2}{*}{$\Z_{23}$} & \rule{0mm}{4.5mm}$(23,11,5)$ & 1 & 11 & 20 & 69 & 207 & 492 & 984 & 1630 & 2282 & 2694 \\[1mm] & $(23,12,6)$ & 1 & 11 & 20 & 69 & 207 & 492 & 984 & 1630 & 2282 & 2694 \\[0.5mm] \hline $\Z_{31}$ & \rule{0mm}{4.5mm}$(31,6,1)$ & 1 & 10 & 49 & 195 & 812 & \raisebox{0.25pt}{\footnotesize 2846} & \raisebox{0.25pt}{\footnotesize 8528} & \raisebox{0.5pt}{\scriptsize 21731} & \raisebox{0.5pt}{\scriptsize 47801} & \raisebox{0.5pt}{\scriptsize 91148} \\[0.5mm] \hline \end{tabular} \vskip 2mm \setlength{\tabcolsep}{4.6pt} \begin{tabular}{|cc|cccccccc|} \hline & & \multicolumn{8}{c|}{$n$} \\[1mm] $G$ & $(v,k,\lambda)$ & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\[1mm] \hline \hline \multirow{2}{*}{$\Z_{23}$} & \rule{0mm}{4.5mm}$(23,11,5)$ & 2694 & 2282 & 1630 & 984 & 492 & 207 & 69 & 20 \\[1mm] & $(23,12,6)$ & 2694 & 2282 & 1630 & 984 & 492 & 207 & 69 & 20 \\[0.5mm] \hline $\Z_{31}$ & \rule{0mm}{4.5mm}$(31,6,1)$ & \raisebox{0.85pt}{\tiny 151924} & \raisebox{0.85pt}{\tiny 221959} & \raisebox{0.85pt}{\tiny 285357} & \raisebox{0.85pt}{\tiny 323396} & \raisebox{0.85pt}{\tiny 323396} & \raisebox{0.85pt}{\tiny 285357} & \raisebox{0.85pt}{\tiny 221959} & \raisebox{0.85pt}{\tiny 151924} \\[0.5mm] \hline \end{tabular} \vskip 2mm \setlength{\tabcolsep}{3.3pt} \begin{tabular}{|cc|cccccccccccc|} \hline & & \multicolumn{12}{c|}{$n$} \\[1mm] $G$ & $(v,k,\lambda)$ & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 \\[1mm] \hline \hline \multirow{2}{*}{$\Z_{23}$} & \rule{0mm}{4.5mm}$(23,11,5)$ & 4 & 1 & 1 & 1 & & & & & & & & \\[1mm] & $(23,12,6)$ & 4 & 1 & 1 & 1 & & & & & & & & \\[0.5mm] \hline $\Z_{31}$ & \rule{0mm}{4.5mm}$(31,6,1)$ & \raisebox{0.5pt}{\scriptsize 91148} & \raisebox{0.5pt}{\scriptsize 47801} & \raisebox{0.5pt}{\scriptsize 21731} & \raisebox{0.25pt}{\footnotesize 8528} & \raisebox{0.25pt}{\footnotesize 2846} & 811 & 187 & 38 & 6 & 1 & 1 & 1 \\[0.5mm] \hline \end{tabular} \vskip 4mm \caption{Numbers of inequivalent $\P^n(v,k,\lambda)$-cubes obtained from difference sets (continued).} \end{table} \begin{theorem}\label{tmelab} Let $G$ be an elementary abelian group, i.e.\ the additive group of a finite field~$\F_q$. Then any $(q,k,\lambda)$ difference set in~$G$ extends to dimension~$q$. \end{theorem} \begin{proof} Led $D=\{d_1,\ldots,d_k\}\subseteq G$ be a $(q,k,\lambda)$ difference set, $\alpha\in \F_q$ a primitive element, and $w=(0,1,\alpha,\alpha^2,\ldots,\alpha^{q-2}) \in G^q$. An extension of~$D$ to dimension~$q$ is $\vec D = \{d_1 w,\ldots, d_k w\}$, where each coordinate of~$w$ is multiplied by an element of~$D$. 
To prove that $\vec D$ is indeed a $q$-dimensional $(q,k,\lambda)$ difference set, we have to check that the differences of any two coordinates are difference sets in~$G$. The first coordinate of any vector in $\vec D$ is $0$, and if we subtract the coordinate with $\alpha^y$ in~$w$, we get $\{-\alpha^y d_1,\ldots,-\alpha^y d_k\}$. This is a $(q,k,\lambda)$ difference set in~$G$ because it is the image of~$D$ by the group automorphism $\varphi:G\to G$, $\varphi(g)=-\alpha^y g$. Next we subtract two non-zero coordinates, say with~$\alpha^x$ and~$\alpha^y$ in~$w$ for some $x<y$. We get the set $$\{\alpha^x (1-\alpha^{y-x})d_1,\ldots,\alpha^x (1-\alpha^{y-x})d_k\}.$$ This is again the image of~$D$ by a group automorphism $\psi:G\to G$, $\psi(g)=\alpha^x(1-\alpha^{y-x})g$. \end{proof} Two missing values of $\mu_G$ follow from Theorem~\ref{tmelab}. \begin{corollary}\label{mu16elab} For $G=\Z_2^4$, $\mu_G(16,6,2)=\mu_G(16,10,6)=16$. \end{corollary} A further generalization of Theorem~\ref{tmelab} applies to arbitrary groups~$G$. An \emph{antiautomorphism} of~$G$ is a bijection $\varphi:G\to G$ such that $$\varphi(g+h)=\varphi(h)+\varphi(g),\kern 3mm \forall g,h\in G.$$ If~$G$ is abelian, automorphisms and anti\-auto\-morphisms coincide. Antiautomorphisms of non-abelian groups are the functions $-\varphi$, where~$\varphi$ is an automorphism. The crucial fact in the proof of Theorem~\ref{tmelab} is that the image $\varphi(D)$ of a $(v,k,\lambda)$ difference set~$D$ by an automorphism~$\varphi$ is also a $(v,k,\lambda)$ difference set. The same holds true for antiautomorphisms of non-abelian groups. \begin{definition} We say that $R=\{\varphi_1,\ldots,\varphi_{n-1}\}$ is a \emph{regular set of (anti)automorphisms} of~$G$ if each $\varphi_x:G\to G$ is an automorphism or antiautomorphism, and each difference $\varphi_x-\varphi_y$ is an automorphism or antiautomorphism for $1\le x<y\le n-1$. \end{definition} Given such a set~$R$ of size $n-1$ and an element $d\in D$, denote by $\vec d$ the $n$-tuple $(0,\varphi_1(d),\ldots,\varphi_{n-1}(d))\in G^n$. Then the set $\vec D = \{\vec d \mid d\in D\}$ is an $n$-dimensional $(v,k,\lambda)$ difference set. This proves the following theorem. \begin{theorem}\label{tmreg} If a group $G$ allows a regular set of (anti)automorphisms of size~$n-1$, then any $(v,k,\lambda)$ difference set in~$G$ extends to dimension~$n$. \end{theorem} Theorem~\ref{tmelab} can be seen as a special case of Theorem~\ref{tmreg} by using multiplication in $\F_q$. The functions $\varphi_x:\F_q\to \F_q$, $\varphi_x(g)=\alpha^x g$ for $x=0,\ldots,q-2$ constitute a regular set of automorphisms of the additive group, yielding the construction of Theorem~\ref{tmelab}. Kerdock sets provide more examples of regular sets in elementary abelian $2$-groups \cite{AMK72, CS73, DG75, WMK82a, WMK82b} and $3$-groups \cite{NJP76}. Theorem~\ref{tmreg} also applies to groups that are not elementary abelian. For example, $\varphi_1(g)=g$ and $\varphi_2(g)=2g$ are automorphisms of the group~$\Z_{15}$. The difference $(\varphi_1-\varphi_2)(g)=-g$ is an auto\-morphism as well. Thus, $\{\varphi_1,\varphi_2\}$ is a regular set and all $(15,7,3)$ and $(15,8,4)$ difference sets extend at least to dimension~$3$. In Table~\ref{tab4} we see that there is a $4$-dimensional $(15,8,4)$ difference set, but it cannot be obtained from Theorem~\ref{tmreg} because~$\Z_{15}$ does not allow regular sets of size~$3$. The next result applies to difference sets in arbitrary cyclic groups~$\Z_v$.
\begin{corollary}\label{cycdim} Let $p$ be the smallest prime divisor of $v$. Then any cyclic $(v,k,\lambda)$ difference set extends to dimension~$p$. \end{corollary} \begin{proof} Automorphisms of $\Z_v$ are of the form $\varphi_a(g)=ag$, where $a\in \Z_v\setminus \{0\}$ is relatively prime to~$v$. The set $\{\varphi_1,\ldots,\varphi_{p-1}\}$ is a regular set of size~$p-1$. Indeed, every $a\in\{1,\ldots,p-1\}$ and every difference $x-y$ with $1\le y<x\le p-1$ is smaller than~$p$, hence relatively prime to~$v$, so all $\varphi_a$ and all $\varphi_x-\varphi_y=\varphi_{x-y}$ are automorphisms of~$\Z_v$. Hence, any difference set in~$\Z_v$ extends to dimension~$p$ by Theorem~\ref{tmreg}. \end{proof} For a non-abelian example we turn to groups of order~$27$. There are five such groups, but only two of them contain nontrivial difference sets~\cite{DP19}. These are the elementary abelian group $\Z_3\times\Z_3\times\Z_3$ (GAP group ID~5) and a semidirect product $\Z_9\rtimes \Z_3$ (GAP group ID~4). Difference sets in the former group extend to dimension~$27$ by Theorem~\ref{tmelab}. A regular set in the latter group is given in the next example, using multiplicative notation. \begin{example} The group of order $27$ with GAP ID $4$ can be presented as $G=\langle a, b \mid a^9=b^3=1,\, ba=a^4b\rangle$. Let $\varphi_1(g)=g$ be the identity automorphism of~$G$ and $\varphi_2(g)=g^{-1}$ the corresponding antiautomorphism. The ``difference'' $\varphi_1(g)(\varphi_2(g))^{-1}=g^2$ is an antiautomorphism, namely the one corresponding to $\varphi\in \Aut(G)$, $\varphi(a)=a^7$, $\varphi(b)=b$. Thus, $\{\varphi_1,\varphi_2\}$ is a regular set of (anti)automorphisms of~$G$. \end{example} By Theorem~\ref{tmreg}, all $(27,13,6)$ and $(27,14,7)$ difference sets in~$G$ extend to dimension~$3$. \section{Final observations}\label{sec5} A curious fact visible in Table~\ref{tab4} is equality of the numbers for some complementary parameters. Ordinary ($2$-dimensional) difference sets and designs are rarely discussed for both sets of complementary parameters, because complementation is a bijection. This is also true for higher-dimensional $\C$-cubes, but not for $\P$-cubes as we have seen in Proposition~\ref{nocompl}. However, the numbers in Table~\ref{tab4} coincide for complementary Paley--Hadamard parameters $\P^n(4m-1,2m-1,m-1)$ and $\P^n(4m-1,2m,m)$ when $4m-1=p^s$ is a prime power and $G=(\Z_p)^s$ is an elementary abelian group. For small parameters, this can be explained by the following bijection. Every $(4m-1,2m-1,m-1)$ difference set $D\subseteq G$ can be uniquely extended to a $(4m-1,2m,m)$ difference set $D\cup\{a\}$ by adding the element $a=-2\sum_{d\in D} d$. We have checked that this is a bijection between $(4m-1,2m-1,m-1)$ and $(4m-1,2m,m)$ difference sets in the groups $\Z_7$, $\Z_{11}$, $\Z_{19}$, and $\Z_{23}$ from Table~\ref{tab4} and also in $G=(\Z_3)^3$. We don't have a proof for arbitrary elementary abelian groups. However, if this is indeed a bijection between ordinary difference sets, then it is easy to establish a $1$-to-$1$ correspondence between $n$-dimensional difference sets. Given an $n$-dimensional $(4m-1,2m-1,m-1)$ difference set $D\subseteq G^n$, one simply adds the $n$-tuple $\left(-2\sum_{d\in D} d_1,\ldots,-2\sum_{d\in D} d_n\right)$ to obtain an $n$-dimensional $(4m-1,2m,m)$ difference set. If a $(4m-1,2m-1,m-1)$ difference set $D\subseteq G$ extends to a $(4m-1,2m,m)$ difference set $D\cup\{a\}$, then the complement $D'=G\setminus (D\cup\{a\})$ is also a $(4m-1,2m-1,m-1)$ difference set in~$G$. Then $(D-a)\cup (D'-a) \cup \{0\}$ is a tiling of~$G$ by two difference sets, and $D-a$ is a skew Hadamard difference set by \cite[Theorem~8]{CKZ15}.
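For the small cyclic groups the extension $D\mapsto D\cup\{a\}$ with $a=-2\sum_{d\in D}d$ is easy to check directly. The following Python fragment (an illustrative sketch with our own conventions, separate from the GAP and C code used for our classifications) does so for translates of the Paley difference sets in $\Z_7$ and $\Z_{11}$.
\begin{verbatim}
# Illustrative sketch only: extending a (4m-1, 2m-1, m-1) difference set D in Z_v
# by a = -2*sum(D) should give a (4m-1, 2m, m) difference set.
from collections import Counter

def is_difference_set(S, v, lam):
    diffs = Counter((a - b) % v for a in S for b in S if a != b)
    return len(set(S)) == len(S) and all(diffs[g] == lam for g in range(1, v))

def extend(D, v):
    a = (-2 * sum(D)) % v
    return sorted(set(D) | {a})

# translates of the Paley difference sets in Z_7 and Z_11
for v, D, lam in [(7, [2, 3, 5], 1), (11, [2, 4, 5, 6, 10], 2)]:
    E = extend(D, v)
    print(v, E, is_difference_set(D, v, lam), is_difference_set(E, v, lam + 1))
\end{verbatim}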
A long-standing conjecture was that the classical Paley difference sets are the only skew Hadamard difference sets in elementary abelian groups, but it has been refuted in~\cite{DY06, DWX07} for groups of orders $3^m$, $m\ge 5$ odd. It might also be the case that our bijection holds up only for elementary abelian groups of small orders, but we don't have a counterexample. Yet another apparent symmetry in Table~\ref{tab4} is equality of the numbers for ``complementary dimensions'' $n$ and $v-n$. Equality holds for some small parameters $(v,k,\lambda)$ when the upper bound of Theorem~\ref{dsbound} is reached: $(7,3,1)$, $(7,4,2)$, $(11,5,2)$, $(11,6,3)$, and $(13,4,1)$. It breaks for the complementary parameters $(13,9,6)$, while for $(19,9,4)$, $(19,10,5)$, $(23,11,5)$, and $(23,12,6)$ equality holds except for dimension $n=3$. For $(31,6,1)$ it breaks for $n=3$, $4$, $5$, $6$ and holds for the remaining dimensions. We don't have an explanation for this phenomenon. \begin{thebibliography}{20} \bibitem{MA71} M.~Aschbacher, \emph{On collineation groups of symmetric block designs}, J.\ Combin.\ Theory Ser.\ A \textbf{11} (1971), 272--281. \url{https://doi.org/10.1016/0097-3165(71)90054-9} \bibitem{BJL99} T.~Beth, D.~Jungnickel, H.~Lenz, \emph{Design theory. Second edition. Volume~I}, Cambridge University Press, Cambridge, 1999. \url{https://doi.org/10.1017/CBO9780511549533} \bibitem{CS73} P.~J.~Cameron, J.~J.~Seidel, \emph{Quadratic forms over $GF(2)$}, Nederl.\ Akad.\ Wetensch.\ Proc.\ Ser.\ A \textbf{76} Indag.\ Math.\ \textbf{35} (1973), 1--8. \bibitem{CKZ15} A.~\'{C}usti\'{c}, V.~Kr\v{c}adinac, Y.~Zhou, \emph{Tiling groups with difference sets}, Electron.\ J.\ Combin.\ \textbf{22} (2015), no.\ 2, Paper 2.56, 13 pp. \url{https://doi.org/10.37236/5157} \bibitem{dL90} W.~de~Launey, \emph{On the construction of $n$-dimensional designs from $2$-dimensional designs}, Australas.\ J.\ Combin.\ \textbf{1} (1990), 67--81. \url{https://ajc.maths.uq.edu.au/pdf/1/ocr-ajc-v1-p67.pdf} \bibitem{dLH93} W.~de~Launey, K.~J.~Horadam, \emph{A weak difference set construction for higher-dimensional designs}, Des.\ Codes Cryptogr.\ \textbf{3} (1993), no.\ 1, 75--87. \url{https://doi.org/10.1007/BF01389357} \bibitem{DG75} P.~Delsarte, J.-M.~Goethals, \emph{Alternating bilinear forms over GF(q)}, J.\ Combin.\ Theory Ser.\ A \textbf{19} (1975), 26--50. \url{https://doi.org/10.1016/0097-3165(75)90090-4} \bibitem{DY06} C.~Ding, J.~Yuan, \emph{A family of skew Hadamard difference sets}, J.\ Combin.\ Theory Ser.\ A \textbf{113} (2006), no.\ 7, 1526--1535. \url{https://doi.org/10.1016/j.jcta.2005.10.006} \bibitem{DWX07} C.~Ding, Z.~Wang, Q.~Xiang, \emph{Skew Hadamard difference sets from the Ree-Tits slice symplectic spreads in PG$(3,3^{2h+1})$}, J.\ Combin.\ Theory Ser.\ A \textbf{114} (2007), no.\ 5, 867--887. \url{https://doi.org/10.1016/j.jcta.2006.09.008} \bibitem{JHD07} J.~H.~Dinitz, \emph{Room squares}, in: Handbook of combinatorial designs. Second edition (eds.\ C.~J.~Colbourn, J.~H.~Dinitz), Chapman \& Hall/CRC, Boca Raton, FL, 2007, pp.\ 584--590. \url{https://doi.org/10.1201/9781420010541} \bibitem{WF70} W.~Feit, \emph{Automorphisms of symmetric balanced incomplete block designs}, Math.\ Z.\ \textbf{118} (1970), 40--49. \url{https://doi.org/10.1007/BF01109892} \bibitem{GAP} The GAP Group, \emph{GAP -- Groups, Algorithms, and Programming}, Version 4.14.0, 2024. 
\url{https://www.gap-system.org} \bibitem{HKO11} A.~Hulpke, P.~Kaski, P.~R.~J.~\"{O}sterg\aa{}rd, \emph{The number of Latin squares of order $11$}, Math.\ Comp.\ \textbf{80} (2011), no.\ 274, 1197--1219. \url{https://doi.org/10.1090/S0025-5718-2010-02420-2} \bibitem{IS06} Y.~J.~Ionin, M.~S.~Shrikhande, \emph{Combinatorics of symmetric designs}, Cambridge University Press, Cambridge, 2006. \url{https://doi.org/10.1017/CBO9780511542992} \bibitem{WMK82a} W.~M.~Kantor, \emph{Spreads, translation planes and Kerdock sets. I}, SIAM J.\ Algebraic Discrete Methods \textbf{3} (1982), no.\ 2, 151--165. \url{https://doi.org/10.1137/0603015} \bibitem{WMK82b} W.~M.~Kantor, \emph{Spreads, translation planes and Kerdock sets. II}, SIAM J.\ Algebraic Discrete Methods \textbf{3} (1982), no.\ 3, 308--318. \url{https://doi.org/10.1137/0603032} \bibitem{KD15} A.~D.~Keedwell, J.~D\'{e}nes, \emph{Latin squares and their applications}, second ed., Elsevier/North-Holland, Amsterdam, 2015. \url{https://doi.org/10.1016/C2014-0-03412-0} \bibitem{AMK72} A.~M.~Kerdock, \emph{A class of low-rate nonlinear binary codes}, Information and Control \textbf{20} (1972), 182--187; \textbf{21} (1972), 395. \bibitem{DK92} D.~E.~Knuth, \emph{Two notes on notation}, Amer.\ Math.\ Monthly \textbf{99} (1992), no.\ 5, 403--422. \url{https://doi.org/10.1080/00029890.1992.11995869} \bibitem{PAG} V.~Kr\v{c}adinac, \emph{PAG -- Prescribed Automorphism Groups}, Version 0.2.4, 2024. \url{https://vkrcadinac.github.io/PAG/} \bibitem{KPT24} V.~Kr\v{c}adinac, M.~O.~Pav\v{c}evi\'{c}, K.~Tabak, \emph{Cubes of symmetric designs}, Ars Math.\ Contemp.\ (2024). \url{https://doi.org/10.26493/1855-3974.3222.e53} \bibitem{KR24} V.~Kr\v{c}adinac, L.~Reli\'{c}, \emph{Projection cubes of symmetric designs}, preprint, 2024. \url{https://arxiv.org/abs/2411.06936} \bibitem{EL83} E.~S.~Lander, \emph{Symmetric designs: an algebraic approach}, Cambridge University Press, Cambridge, 1983. \url{https://doi.org/10.1017/CBO9780511662164} \bibitem{MR07} R.~Mathon, A.~Rosa, \emph{$2$-$(v,k,\lambda)$ designs of small order}, in: Handbook of combinatorial designs. Second edition (eds.\ C.~J.~Colbourn, J.~H.~Dinitz), Chapman \& Hall/CRC, Boca Raton, FL, 2007, pp.\ 25--58. \url{https://doi.org/10.1201/9781420010541} \bibitem{MMM07} B.~D.~McKay, A.~Meynert, W.~Myrvold, \emph{Small Latin squares, quasigroups, and loops}, J.\ Combin.\ Des.\ \textbf{15} (2007), no.\ 2, 98--119. \url{https://doi.org/10.1002/jcd.20105} \bibitem{MP14} B.~D.~McKay, A.~Piperno, \emph{Practical graph isomorphism, II}, \emph{J.\ Symbolic Comput.} \textbf{60} (2014), 94--112. \url{https://doi.org/10.1016/j.jsc.2013.09.003} \bibitem{MW08} B.~D.~McKay, I.~M.~Wanless, \emph{A census of small Latin hypercubes}, SIAM J.\ Discrete Math.\ \textbf{22} (2008), no.\ 2, 719--736. \url{https://doi.org/10.1137/070693874} \bibitem{NJP76} N.~J.~Patterson, \emph{A four-dimensional Kerdock set over $GF(3)$}, J.\ Combin.\ Theory Ser.\ A \textbf{20} (1976), no.\ 3, 365--366. \url{https://doi.org/10.1016/0097-3165(76)90031-5} \bibitem{DP19} D.~Peifer, \emph{DifSets, an algorithm for enumerating all difference sets in a group}, Version 2.3.1, 2019. \url{https://dylanpeifer.github.io/difsets} \bibitem{POVRay} Persistence of Vision Raytracer, Version 3.7, 2013. Persistence of Vision Pty. Ltd., Williamstown, Victoria, Australia. 
\url{http://www.povray.org/} \bibitem{SBW83} J.~J.~Seidel, A.~Blokhuis, H.~A.~Wilbrink, \emph{Graphs and association schemes, algebra and geometry}, Tech.\ Report 83-WSK-02, Eindhoven University of Technology, 1983. \url{https://research.tue.nl/en/publications/graphs-and-association-schemes-algebra-and-geometry} \bibitem{ES06} E.~Soedarmadji, \emph{Latin hypercubes and MDS codes}, Discrete Math.\ \textbf{306} (2006), no.\ 12, 1232--1239. \url{https://doi.org/10.1016/j.disc.2006.02.011} \end{thebibliography} \end{document}
2412.09080v1
http://arxiv.org/abs/2412.09080v1
On the number of modes of Gaussian kernel density estimators
\documentclass[a4paper,11pt]{article} \usepackage{cmap} \usepackage[usenames,dvipsnames]{xcolor} \definecolor{myblue}{rgb}{0.21, 0.34, 0.74} \definecolor{mygrey}{rgb}{0.55, 0.57, 0.67} \definecolor{myred}{rgb}{0.79, 0.0, 0.09} \definecolor{JuliaRed}{RGB}{204, 52, 51} \usepackage[pdftex, colorlinks=true, bookmarksopen=true, linkcolor=myblue, citecolor=myblue]{hyperref} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{lmodern} \usepackage{libertine} \usepackage{bbm} \usepackage{mathrsfs} \usepackage{amssymb} \usepackage{amsmath} \usepackage{upgreek} \usepackage{mathtools} \usepackage{amsthm} \usepackage[nottoc,notlot,notlof]{tocbibind} \usepackage{comment} \usepackage[font=small,labelfont=bf]{caption} \usepackage{authblk} \renewcommand\Affilfont{\itshape\small} \setlength{\affilsep}{0.0em} \setcounter{Maxaffil}{10} \usepackage{subcaption} \usepackage[svgnames,pdf]{pstricks} \usepackage{wrapfig} \usepackage{upgreek} \usepackage{bm} \usepackage[scr=boondoxo]{mathalfa} \usepackage{comment} \usepackage[font=small,labelfont=bf]{caption} \interfootnotelinepenalty=10000 \usepackage[noabbrev,capitalize,nameinlink]{cleveref} \usepackage{tikz} \usepackage{bbm} \usepackage[T1]{fontenc} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usetikzlibrary{arrows.meta} \newcommand{\cmark}{\text{\ding{51}}} \newcommand{\xmark}{\text{\ding{55}}} \usepackage{environ} \usepackage{framed} \usepackage{url} \usepackage[labelfont=bf]{caption} \usepackage{cite} \usepackage{framed} \usepackage[framemethod=tikz]{mdframed} \usepackage{appendix} \usepackage{graphicx} \setlength{\marginparwidth}{2cm} \usepackage[textsize=tiny]{todonotes} \usepackage{tcolorbox} \usepackage{multicol} \usepackage{enumerate} \allowdisplaybreaks[1] \usepackage{enumerate} \usepackage{stmaryrd} \usepackage{upgreek} \usepackage[shortlabels]{enumitem} \crefformat{enumi}{#2#1#3} \crefrangeformat{enumi}{#3#1#4 to~#5#2#6} \crefmultiformat{enumi}{#2#1#3} { and~#2#1#3}{, #2#1#3}{ and~#2#1#3} \DeclareSymbolFont{symbolsC}{U}{pxsyc}{m}{n} \SetSymbolFont{symbolsC}{bold}{U}{pxsyc}{bx}{n} \DeclareFontSubstitution{U}{pxsyc}{m}{n} \DeclareMathSymbol{\medcircle}{\mathbin}{symbolsC}{7} \crefname{equation}{}{} \AtBeginEnvironment{appendices}{\crefalias{section}{appendix}} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{remark}[theorem]{Remark} \newtheorem{observation}[theorem]{Observation} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{question*}{Question} \newtheorem{fact}[theorem]{Fact} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{problem}{Problem} \newtheorem{question}[theorem]{Question} \newtheorem*{definition*}{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{setup}[theorem]{Setup} \theoremstyle{remark} \newtheorem*{notation}{Notation} \newcommand{\norm}[1]{\bigg\lVert#1\bigg\rVert} \newcommand{\snorm}[1]{\lVert#1\rVert} \newcommand{\sang}[1]{\langle #1 \rangle} \newcommand{\pvec}[1]{{\vec{#1}\,}'} \newcommand{\avec}[1]{{\vec{#1}\,}^\ast} \newcommand{\evec}[2]{{\vec{#1}\,}^{#2}} \newcommand{\bs}{\boldsymbol} \newcommand{\mb}{\mathbb} \newcommand{\mbf}{\mathbf} \newcommand{\mbm}{\mathbbm} \newcommand{\mc}{\mathcal} \newcommand{\mf}{\mathfrak} \newcommand{\mr}{\mathrm} \newcommand{\msc}{\mathscr} 
\newcommand{\ol}{\overline} \newcommand{\on}{\operatorname} \newcommand{\wh}{\widehat} \newcommand{\wt}{\widetilde} \newcommand{\edit}[1]{{\color{red}#1}} \newcommand{\diff}{\, \mathrm{d}} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \newcommand{\eps}{\varepsilon} \let\originalleft\left \let\originalright\right \renewcommand{\left}{\mathopen{}\mathclose\bgroup\originalleft} \renewcommand{\right}{\aftergroup\egroup\originalright} \allowdisplaybreaks \newif\ifpublic \publictrue \ifpublic \newcommand{\ignore}[1]{} \global\long\def\ms#1{\ignore{#1}} \global\long\def\mms#1{\ignore{#1}} \global\long\def\as#1{\ignore{#1}} \else \renewcommand{\L}{\wh{L}} \renewcommand{\P}{\wh{P}} \definecolor{MIT}{cmyk}{.24, 1.00, .78, .17} \newcommand{\ndpr}[1]{ {\colorbox{MIT}{\color{white} \textsf{PR}} \color{MIT}{#1}} } \newcommand{\nbyp}[1]{{\color{myred}[Kimi: #1]}} \newcommand{\nbbg}[1]{ {\colorbox{myblue}{\color{white} \textsf{BG}} \color{myblue}{#1}} } \title{On the number of modes of Gaussian kernel density estimators} \author{Borjan Geshkovski} \affil{Inria \& Sorbonne Université} \author{Philippe Rigollet} \affil{MIT} \author{Yihang Sun} \affil{Stanford University} \date{ \today } \begin{document} \setlist[itemize,enumerate]{left=0pt} \maketitle \begin{abstract} We consider the Gaussian kernel density estimator with bandwidth $\beta^{-\frac12}$ of $n$ iid Gaussian samples. Using the Kac-Rice formula and an Edgeworth expansion, we prove that the expected number of modes on the real line scales as $\Theta(\sqrt{\beta\log\beta})$ as $\beta,n\to\infty$ provided $n^c\lesssim \beta\lesssim n^{2-c}$ for some constant $c>0$. An impetus behind this investigation is to determine the number of clusters to which Transformers are drawn in a metastable state. \bigskip \noindent \textbf{Keywords.}\quad Kernel density estimator, Kac-Rice formula, Edgeworth expansion, self-attention, mean-shift. \medskip \noindent \textbf{\textsc{ams} classification.}\quad \textsc{62G07, 60G60, 60F05, 68T07}. \end{abstract} \thispagestyle{empty} \setcounter{tocdepth}{2} \tableofcontents \section{Introduction} \subsection{Setup and main result} \label{sec:setup} For $\beta >0$ and $X_1, \dots, X_n \stackrel{\text{iid}}{\sim} N(0, 1)$, the \emph{Gaussian kernel density estimator (KDE)} with bandwidth $h=\beta^{-1/2}$ is defined as \begin{equation}\label{eq:gkde} \wh{P}_n(t):= \frac{1}{n}\sum_{i=1}^n \mathsf{K}_{h}\ast\delta_{X_i}(t) = \frac{\sqrt{\beta}}{n\sqrt{2\pi}}\sum_{i=1}^n e^{-\frac{\beta}{2}(t-X_i)^2}, \hspace{1cm} t\in\mb{R}. \end{equation} Here, ``Gaussian'' refers to the choice of kernel $\mathsf{K}_h$. In this paper we are interested in determining the expected number of modes (local maxima) of $\wh{P}_n$ over $\mb{R}$. While this is a classical question, addressed in even more general settings than \eqref{eq:gkde}---such as non-Gaussian kernels, compactly supported samples, and higher dimensions \cite{mammen91,mammen95,mammen97}---a definite answer has not been given in the literature. Indeed, the best-known results fall into one of two settings: either considering samples drawn from a compactly supported density (instead of $N(0,1)$ as done here), or counting the modes within a fixed compact interval. In the special case of the Gaussian KDE \eqref{eq:gkde}, one has in the latter setting for instance \begin{theorem}[{\!\cite[Thm.~1]{mammen95}}] \label{thm:mammen} Let $\wh{P}_n$ be the Gaussian KDE defined in \eqref{eq:gkde}, with bandwidth $h>0$, of $X_1, \dots, X_n \stackrel{\text{iid}}{\sim} N(0, 1)$. 
Asymptotically as $n\to\infty$, the expected number of modes of $\wh{P}_n$ in a fixed interval $[a, b]$ is $1+o(1)$ if $1\ll\beta \ll n^{2/3}$, and $\wt{\Theta}\left(\sqrt{\beta}\right)$ if $n^{2/3}\lesssim \beta \ll n^2/\log^6 n$. \end{theorem} In \cite{mammen91, mammen95, mammen97}, the authors additionally conduct more refined casework on the bandwidth to provide more precise estimates, such as pinpointing the leading constants. In fact, \cite{mammen91} \emph{does} count modes in \(\mathbb{R}\), but the underlying distribution of the samples \(X_i\) is supported on a closed interval (thus excluding $N(0, 1)$), so there are no modes outside the interval anyway. Our main result provides the answer to the case of counting modes of \eqref{eq:gkde} over $\mb{R}$, and reads as follows. In particular, it generalizes \cref{thm:mammen}. \begin{theorem}\label{thm:main-result} Let $\wh{P}_n$ be the Gaussian KDE defined in \eqref{eq:gkde}, with bandwidth $\beta^{-1/2}$, of $X_1, \dots, X_n \stackrel{\text{iid}}{\sim} N(0, 1)$. Suppose that $n^c\lesssim \beta \lesssim n^{2-c}$ for arbitrarily small $c>0$ and that \Cref{ass: assumption.1} holds. Then, asymptotically as $n,\beta\to\infty$, \begin{enumerate} \item In expectation over $X_i$, the number of modes of $\wh{P}_n$ is $\Theta\left( \sqrt{\beta\log \beta} \right)$. \item Almost all modes lie in two intervals of length $\Theta\left(\sqrt{\log\beta}\right)$---namely, the expected number of modes $t\in\mb{R}$ such that $t^2\not\in \left[2\log n-3\log\beta,2\log n-\log\beta\right]$ is $o\left(\sqrt{\beta\log\beta}\right)$. \end{enumerate} \end{theorem} \begin{figure} \centering \includegraphics[scale=0.24]{figures/kde_100-10000.pdf} \includegraphics[scale=0.24]{figures/kde_300-10000.pdf} \caption{A realization of the random function \eqref{eq:gkde} for $n=10^4$, with $\beta=100$ (left) and $\beta=300$ (right).} \label{fig: kde} \end{figure} \Cref{ass: assumption.1} is related to the decay of the tails of the modulus of continuity of $\widehat{P}_n''(\cdot)$, and is needed to apply the \emph{Kac-Rice formula} (\Cref{thm:kac-rice}), which expresses the expected number of modes of $\widehat{P}_n$ as an integral involving the joint law of $(\widehat{P}_n'(t), \widehat{P}_n''(t))$. We postpone a further discussion to \Cref{sec: kac.rice.application}. \begin{remark} \label{rem:minimax} \begin{itemize} \item To better appreciate the range of values for $\beta$ in this theorem as well as subsequent ones, we use minimax theory as a benchmark; see, e.g.,~\cite{Tsy09}. The reparametrization $h=\beta^{-1/2}$ is motivated by the connection to the Transformer model described in \Cref{sec:transformers}. Using an optimal bias-variance tradeoff~\cite[Chapter~1]{Tsy09}, we see that the optimal scaling of the bandwidth parameter $h$ depends on the smoothness of the underlying density of interest: if the underlying density has $s$ bounded (fractional) derivatives, then the optimal choice of $h$ is given by $h \asymp n^{-\frac{1}{2s+1}}$. This gives $\beta \asymp n^\frac{2}{2s+1}$. For $s \in (0, \infty)$, we get $\beta \in [n^{c}, n^{2-c}]$ for some $c>0$. In particular, the transition of the number of modes from 1 to $\sqrt{\beta}$ in \Cref{thm:mammen} is achieved for $\beta \approx n^{2/3}$, which is the optimal choice for Lipschitz densities.
The message of our main Theorem~\ref{thm:main-result} is that this scaling in $\sqrt{\beta}$ is the prevailing one for the whole range $\beta \in [n^{c}, n^{2-c}]$ if one does not restrict to counting modes in a bounded interval $[a,b]$. \item Point 2. in \Cref{thm:main-result} shows that most of the modes are at distance at least $C\log n$ from the origin provided $\beta > n^{\frac{2-C}{3}}$ for $C>0$ small. This corresponds to a choice of a bandwidth adapted to smoothness $s<1$. This result is in agreement with and completes the picture drawn by \Cref{thm:mammen}. \end{itemize} \end{remark} \begin{remark} Through refined computations, one can determine the number of modes in the regime $1 \ll \beta \ll n^2$ and also pinpoint the leading constant. For the sake of clarity, we stick to the regime where \[2\log n-\log \beta \asymp \log\beta\asymp \log n,\] and comment on how to expand the regime in appropriate places. \end{remark} \begin{remark} \label{rmk:empirical} We further motivate Point 2. in \cref{thm:main-result} by considering a qualitative picture of the distribution of the modes displayed in \Cref{figure:approx}. \begin{itemize} \item Near the origin, we find most of the samples $X_i$ and they are densely packed in the shape of a Gaussian. The corresponding Gaussian summands in \cref{eq:gkde} cancel to create one mode, as shown already in \cref{thm:mammen}. \item In the two intervals of length $\Theta\left(\sqrt{\log \beta}\right)$, the samples $X_i$ are separated enough that the corresponding Gaussian summands do not cancel, but rather form $\Omega\left(\sqrt{\log\beta}\right)$ \emph{isolated bumps}, as discussed in more generality in \cite[Section 9.3]{devgyo}. \item Further away at the tails, the phenomenon of isolated bumps still occurs, but there are so few samples $X_i$ that the number of modes created is a negligible fraction. \end{itemize} We revisit this discussion and \Cref{figure:approx} in \cref{rmk:delicate}. \end{remark} To prove~\cref{thm:main-result}, we truncate the real line to the interval \begin{equation}\label{eq:T} T:=\left[-\sqrt{2\log n-\log\beta-\omega(\beta)}, \sqrt{2\log n-\log\beta-\omega(\beta)}\right] \end{equation} where $\omega$ is a fixed, slowly growing function such that $$1\ll \omega(\beta)\ll \log\log \beta,$$ and so $T$ is well-defined for large enough $\beta$. Motivated by \cref{thm:main-result}, we also define the interval \begin{equation}\label{eq:T'} T':=\left[-\sqrt{2\log n-3\log\beta}, \sqrt{2\log n-3\log\beta}\right] \end{equation} if $\beta \le n^{2/3}$ and define $T'=\varnothing$ if $\beta >n^{2/3}$. We now explain how \cref{thm:main-result} implies \cref{thm:mammen} with $h=\beta^{-1/2}$. From the former, we see that almost all the modes lie in $T\setminus T'$. If $ h\gg n^{-1/3}$ so that $\sqrt{2\log n-3\log \beta}\gg 1$, then $T\setminus T'$ is disjoint from $[a,b]$, so there are few modes in $[a, b]$; if $h\ll n^{-1/3}$ so that $\sqrt{2\log n-3\log \beta}\ll 1$, the fixed interval $[a, b]$ contains a near constant fraction of the length of $T\setminus T'$ and thus a near constant fraction of the modes. \begin{figure}[!ht] \centering \includegraphics[scale=0.285]{figures/nolog_n=1000.pdf} \hspace{0.1cm} \includegraphics[scale=0.285]{figures/modes_n=100.pdf} \includegraphics[scale=0.285]{figures/nolog_n=10000.pdf} \hspace{0.1cm} \includegraphics[scale=0.285]{figures/modes_n=1000.pdf} \caption{(Left) Plot of the average number of modes as a function of $\beta$ for $n=10^3$ (top) and $n=10^4$ (bottom).
(Right) Log-log plot for $n=10^3$ (top) and $n=10^4$ (bottom); the fitted linear regression line ({\color{JuliaRed}red}) corroborates a power-law of the form $\text{average \# of modes} \approx 0.179 \cdot \beta^{0.504}$, in line with \Cref{thm:main-result}.} \label{fig: 1} \end{figure} \subsection{Motivation} \label{sec:transformers} The question of estimating the number of modes as a function of the bandwidth has a plethora of applications in statistical inference and multimodality tests---see \cite{mammen91, mammen95, mammen97} and the references therein. Another application which has stimulated some of the recent progress on the topic is data clustering. The latter can be achieved nonparametrically using a KDE, whose modes, and hence clusters, can be detected using the \emph{mean-shift algorithm} \cite{fukunaga1975estimation, cheng1995mean, comaniciu2002mean, carreira2000mode, carreira2003number, carreira2007gaussian, rodriguez2014clustering, carreira2015review}, which can essentially be seen as iterative local averaging. The main idea in mean-shift clustering is to perform a mean-shift iteration starting from each data point and then define each mode as a cluster, with all points converging to the same mode grouped into the same cluster. The analysis of this algorithm has led to upper bounds on the number of modes of \eqref{eq:gkde} \cite{carreira2003number}. \begin{figure}[!ht] \centering \includegraphics[scale=0.425]{figures/11-16-040.pdf} \includegraphics[scale=0.425]{figures/11-16-0450.pdf} \includegraphics[scale=0.425]{figures/11-16-04200.pdf} \caption{Metastability of self-attention dynamics at temperature $\beta=81$ initialized with $n$ iid uniform points on the circle, with $n=200$ (top) and $n=1000$ (bottom). The number of clusters appears of the correct order $\sim\sqrt{\beta}$. (Code available at \href{https://github.com/borjanG/2023-transformers-rotf}{\color{myblue}github.com/borjanG/2023-transformers-rotf}.)} \includegraphics[scale=0.425]{figures/11-38-180.pdf} \includegraphics[scale=0.425]{figures/11-38-1850.pdf} \includegraphics[scale=0.425]{figures/11-38-18200.pdf} \label{fig: 2} \end{figure} We were instead brought to this problem from another perspective, motivated by the study of \emph{self-attention dynamics} \cite{sander2022sinkformers, geshkovski2023mathematical, geshkovski2024emergence, geshkovski2024measure}---a toy model for \emph{Transformers}, the deep neural network architecture that has driven the success of large language models \cite{vaswani2017attention}. These dynamics form a mean-field interacting particle system \begin{equation*} \frac{\diff}{\diff\tau}x_i(\tau) = \sum_{j=1}^n \frac{e^{\beta\langle x_i(\tau), x_j(\tau)\rangle}}{\displaystyle \sum_{k=1}^n e^{\beta\langle x_i(\tau), x_k(\tau)\rangle}} \mathsf{P}_{x_i(\tau)}^\perp(x_j(\tau)), \end{equation*} evolving on the unit sphere $\mathbb{S}^{d-1}$ thanks to the projection $\mathsf{P}_x^\perp := I_d-xx^\top$. Here, $\tau\ge0$ plays the role of layers, and the $n$ particles $x_i(\tau)$ represent tokens evolving through a dynamical system. This system is characterized by a temperature parameter $\beta \ge 0$ that governs the spatial localization of particle interactions. One sees that all particles move in time by following the field $\nabla \log (\mathsf{K}_{\beta^{-1/2}} \ast \mu_\tau)$; here, $\mu_\tau$ is the empirical measure of the particles $x_1(\tau), \ldots, x_n(\tau)$ at time $\tau$.
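As an illustrative aside, the following Python sketch (ours; it is independent of the repository linked in \Cref{fig: 2}, and the step size, horizon, and tolerance are arbitrary choices) discretizes these dynamics on the circle with an explicit Euler scheme and crudely counts the resulting clusters.
\begin{verbatim}
# Illustrative sketch: explicit Euler discretization of the self-attention
# dynamics on the circle; parameters (dt, number of steps, tolerance) are ours.
import numpy as np

def attention_step(X, beta, dt):
    # X: (n, 2) array of unit vectors; one Euler step of the dynamics
    logits = beta * X @ X.T
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    W = np.exp(logits)
    W /= W.sum(axis=1, keepdims=True)                  # softmax over j
    V = W @ X                                          # sum_j A_ij x_j
    V -= np.sum(V * X, axis=1, keepdims=True) * X      # projection P_{x_i}^perp
    X = X + dt * V
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, beta = 200, 81
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta)])    # iid uniform on the circle
for _ in range(2000):
    X = attention_step(X, beta, dt=0.1)

# crude cluster count: gaps larger than a tolerance between sorted angles
ang = np.sort(np.mod(np.arctan2(X[:, 1], X[:, 0]), 2.0 * np.pi))
gaps = np.diff(np.append(ang, ang[0] + 2.0 * np.pi))
print("clusters:", int(np.sum(gaps > 0.05)), "   sqrt(beta):", round(np.sqrt(beta), 1))
\end{verbatim}
This is only meant to make the clustering behaviour of \Cref{fig: 2} easy to reproduce qualitatively.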
It is shown that for almost every initial configuration $x_1(0),\ldots,x_n(0)$, and for any $\beta \ge 0$ in dimension $\ge 3$, or for $\beta \le 1$ or $\beta \gtrsim n^2$ in dimension $2$, all particles converge to a single cluster in infinite time~\cite{geshkovski2023mathematical}. Rather than converging quickly, the dynamics instead manifest \emph{metastability}, as proved in \cite{geshkovski2024dynamic}: particles quickly approach a few clusters, remain in the vicinity of these clusters for a very long period, and eventually coalesce into a single cluster in infinite time. Concurrently, and using different methods, the authors in \cite{bruno2024emergence} show a similar result: starting from a perturbation of the uniform distribution, beyond a certain time, the empirical measure of the $n$ particles approaches an empirical measure of $O(\sqrt{\beta})$-equidistributed points on the circle in the mean-field limit, and stays near it for a long time. This is done by a study of the linearized system and leveraging nonlinear stability results from \cite{grenier2000nonlinear}. Our interest lies in counting the number of clusters during the first metastable phase in dimension $d=2$. At time $\tau = 0$, particles are initialized at $n$ iid points from the uniform distribution on the circle. In turn, the stationary points of $\mathsf{K}_{\beta^{-1/2}} \ast \mu_0$ partition the circle into intervals, with points clustering within their respective interval. This highlights the importance of counting the number of stationary points. Here, we focus on a simplified setting by working on the real line instead of the circle (or higher-dimensional spheres), but we believe the analysis could be extended to these cases pending technical adaptations. Nevertheless, \Cref{thm:main-result} reflects what is seen in simulations (\Cref{fig: 2}). \subsection{Sketch of the proof} \label{sec: sketch} The spirit of the proof of results such as \Cref{thm:mammen} and others presented in \cite{mammen91,mammen95,mammen97} is similar to ours---one applies the Kac-Rice formula (\Cref{thm:kac-rice}) to a Gaussian approximation of $(\wh{P}_n'(t), \wh{P}_n^{''}(t))$ and argues its validity. However, the main limitation of these works is that modes are counted in a fixed and finite interval $[a, b]$ (and $[0,1]^d$ in the higher dimensional cases). Extending these techniques to the whole real line calls for different, significantly stronger approximation results using Edgeworth expansions. We sketch the key ideas that allow us to count modes over $\mb{R}$. We use the Kac-Rice formula to compute the expected number of modes of $\wh{P}_n$ in the symmetric intervals $T$ and $T'$ defined in \cref{eq:T,eq:T'}. All asymptotics are as $n, \beta\to \infty$. \begin{proposition}\label{prop:main-int} If $n^c\lesssim \beta \lesssim n^{2-c}$ for arbitrarily small $c>0$, then under \Cref{ass: assumption.1}, \begin{enumerate} \item In expectation over $X_i$, the number of modes of $\wh{P}_n$ in $T$ is $\Theta\left( \sqrt{\beta\log \beta} \right)$. \item In expectation over $X_i$, the number of modes of $\wh{P}_n$ in $T'$ is $O\left( e^{-\frac{\omega(\beta)}{4}} \sqrt{\beta\log\beta} \right)$. \end{enumerate} \end{proposition} The Kac-Rice computation appears tractable only when the joint distribution of $(\wh{P}_n'(t), \wh{P}_n''(t))$ is Gaussian, which it is not. To overcome this obstacle, we apply the Kac-Rice formula over a Gaussian approximation of the joint distribution in \cref{sec:normal}.
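Before turning to the details, the content of \cref{prop:main-int} (and of \cref{prop:main-tail} below) can be probed numerically. The following Python sketch (ours; the grid resolution and the values of $n$ and $\beta$ are illustrative) counts the modes of $\wh{P}_n$ on a fine grid and reports the total count against $\sqrt{\beta\log\beta}$, together with the fraction of modes whose square lies in the window $[2\log n-3\log\beta,\,2\log n-\log\beta]$ of \Cref{thm:main-result}.
\begin{verbatim}
# Illustrative sketch: count the modes of the Gaussian KDE on a fine grid and
# compare with sqrt(beta*log(beta)); parameter values and grid are ours.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 10_000, 300
X = rng.standard_normal(n)

t = np.linspace(-5.0, 5.0, 20_001)       # grid spacing well below the bandwidth
P = np.empty_like(t)
for i in range(0, t.size, 1_000):        # evaluate in blocks to limit memory
    block = t[i:i + 1_000, None] - X[None, :]
    P[i:i + 1_000] = np.exp(-0.5 * beta * block ** 2).sum(axis=1)  # constants do not affect modes

is_mode = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])   # strict local maxima on the grid
modes = t[1:-1][is_mode]

lo, hi = 2 * np.log(n) - 3 * np.log(beta), 2 * np.log(n) - np.log(beta)
print("modes found:          ", modes.size)
print("sqrt(beta*log(beta)): ", round(np.sqrt(beta * np.log(beta)), 1))
print("fraction with t^2 in the window:",
      round(np.mean((modes**2 >= lo) & (modes**2 <= hi)), 3))
\end{verbatim}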
For the specific underlying density and KDE in \cref{eq:gkde}, we are able to justify in \cref{sec:error} the approximation for all $t$ in the growing interval $T$ instead of a fixed interval. The lack of such a justification is precisely why \cref{thm:mammen} only counts modes in a fixed interval. To show the validity of the Gaussian approximation, we use the \emph{Edgeworth expansion} of the joint distribution of $(\wh{P}_n'(t), \wh{P}_n''(t))$ around the Gaussian distribution with matching first two moments. We bound the error due to the third order term of the expansion directly, and deal with the higher order terms by appealing to error bounds on densities in the Edgeworth approximation similar to \cite[Theorem 19.2]{bhattacharya2010normal}. We note that the authors of \cite{mammen97} employ the same theorem to justify the Gaussian process approximation over a fixed interval. In doing so, we will see that the normal approximation is invalid outside of $T$ (see \cref{rmk:delicate}), but crucially $T$ is sufficiently large to cover almost all modes, as observed empirically in \cref{rmk:empirical} and \Cref{figure:approx}, and as stated below. \begin{proposition} \label{prop:main-tail} If $n^c\lesssim \beta \lesssim n^{2-c}$ for arbitrarily small $c>0$, then the expectation over $X_i$ of the number of modes of $\wh{P}_n$ that lie outside of $T$ is $O\left( \sqrt{\beta \exp(\omega(\beta))} \right)$. \end{proposition} We prove this in \cref{sec:tail} with an argument from scale-space theory: we bound the number of modes outside $T$ by the number of samples $X_i$ outside $T$, which we then bound naively. This is precisely the argument used by \cite[Theorem 2]{carreira2003number} to show that Gaussian mixtures over $\mb{R}$ with $n$ components must have at most $n$ modes. This argument crucially relies on the kernel density estimator being Gaussian (see \cref{rmk:gkde}). Now, \cref{thm:main-result} follows from \cref{prop:main-int,prop:main-tail} upon recalling that $1\ll \omega(\beta)\ll \log \log \beta$. \subsection{Notation} We adopt standard notation from asymptotic analysis: we write $f(x)\ll g(x)$ or $f(x)=o(g(x))$ if $f(x)/g(x)\to 0$ as $x\to\infty$; $f(x) \lesssim g(x)$ or $f(x)=O(g(x))$ if there exists a finite, positive constant $C$ such that $f(x)\le Cg(x)$; and we write $f(x)\asymp g(x)$ or $f(x)=\Theta(g(x))$ if $f(x)\lesssim g(x)$ and $g(x)\lesssim f(x)$. \section{Kac-Rice for the normal approximation} \label{sec:normal} \subsection{The Kac-Rice formula} \label{sec:kac-rice} We say that $\Psi:\mb{R}\to\mb{R}$ has an \emph{upcrossing of level $u$} at $t\in\mb{R}$ if $\Psi(t)=u$ and $\Psi'(t)>0$. The Kac-Rice formula allows us to compute the expected number of upcrossings when $\Psi$ is a random field (i.e., a stochastic process). \begin{theorem}[Kac-Rice, {\cite[pp. 62]{azais2009level}, \cite[Section 11.1]{adler2009random}}] \label{thm:kac-rice} Consider a random $\Psi:\mb{R}\to\mb{R}$, some fixed $u\in\mathbb{R}$ and a compact $T\subset\mathbb{R}$. Suppose \begin{enumerate} \item $\Psi$ is a.s.
in $\mathscr{C}^1(\mathbb{R})$, and $\Psi, \Psi'$ both have finite variance over $T$; \item The law of $\Psi(t)$ admits a density $p_t^{[1]}(x)$ which is continuous for $t\in T$ and $x$ in a neighborhood of $u$; \item The joint law of $(\Psi(t), \Psi'(t))$ admits a density $p_t(x,y)$ which is continuous for $t\in T$, $x$ in a neighborhood of $u$, and $y\in\mb{R}$; \item $\mathbb{P}(\upomega(\eta)>\varepsilon)=O(\eta)$ as $\eta\to 0^+$ for any $\varepsilon>0$, where $\upomega(\cdot)$ denotes the modulus of continuity\footnote{defined, for $f:\mb{R}\to\mb{R}$, as $\upomega(\eta)= \sup_{t, s\colon |t-s|\leq\eta}|f(t)-f(s)|$.} of $\Psi'(\cdot)$. \end{enumerate} Define the number of upcrossings in $T$ of $\Psi$ at level $u\in\mathbb{R}$ as \begin{equation*} U_u(\Psi, T) := \left|\{t\in T:\Psi(t)=u, \Psi'(t)>0\}\right|. \end{equation*} Then, with expectation taken over the randomness of $\Psi$, \begin{equation} \label{eq:krf} \mb{E}U_u(\Psi, T) = \int_T \int_0^\infty yp_t(u, y)\diff y\diff t. \end{equation} \end{theorem} The Kac-Rice formula extends to any dimension $d\ge 1$, and also to manifolds other than $\mathbb{R}^d$---see \cite[Section 11.1]{adler2009random}. It is the classical tool for computing the expected number of critical points of random fields, with many recent applications including spin glasses \cite{auffinger2013random, fan2021tap} and landscapes of loss functions arising in machine learning \cite{maillard2020landscape}. While the method applies to general densities, the conditional expectation appears infeasible to compute or estimate beyond the Gaussian case. For the KDE $\wh{P}_n$ defined in \cref{eq:gkde}, define the random function $F_n:\mb{R}\to\mb{R}$ by \begin{equation} \label{eq:Fn} F_n(t) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left(t-X_i\right)e^{-\frac{\beta}{2}\left(t-X_i\right)^2}= -\sqrt{\frac{2\pi n}{\beta^3}}\wh{P}_n'(t). \end{equation} Then $F_n$ has an upcrossing of level $0$ at $t\in\mb{R}$ if and only if $F_n(t)=0$ and $F'_n(t)>0$. This is equivalent to $\wh{P}'_n(t)=0$ and $\wh{P}_n''(t)<0$, i.e. $t$ is a mode of $\wh{P}_n$. Thus, the number of modes of $\wh{P}_n$ in $T$ is given by $U_0(F_n, T)$. For $T, T'$ defined in \eqref{eq:T}--\eqref{eq:T'}, \cref{prop:main-int,prop:main-tail} yield \begin{equation} \label{eq:main-eq-form} \begin{aligned} \mb{E}U_0\left(F_n, T\right) & \asymp \sqrt{\beta\log\beta},\\ \mb{E}U_0\left(F_n, T'\right) & \lesssim e^{-\frac{\omega(\beta)}{4}}\sqrt{\beta\log\beta},\\ \mb{E}U_0\left(F_n, \mb{R}\setminus T\right) & \lesssim e^{\frac{\omega(\beta)}{2}}\sqrt{\beta}. \end{aligned} \end{equation} \subsection{Computing the Gaussian approximation} Without loss of generality, fix $t\in T$ with $t\ge 0$. We can rewrite $F_n(t)$ from \cref{eq:Fn} and compute its derivative: for independent copies $(G_i, G_i')$ of \begin{equation} \label{eq: Gt} \begin{bmatrix} G(t) \\ G'(t) \end{bmatrix}= e^{-\frac{\beta}{2} (t-X)^2}\begin{bmatrix} t-X \\ 1-\beta (t-X)^2 \end{bmatrix}, \end{equation} where $X\sim N(0,1)$, we have \begin{equation*} \begin{bmatrix} F_n(t) \\ F_n'(t) \end{bmatrix} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \begin{bmatrix} G_i(t) \\ G_i'(t) \end{bmatrix}\sim p_t. \end{equation*} We prove that $p_t$ is a well-defined density in \Cref{lem: pt.bdd}, and defer the following computation to \cref{sec:moments}.
\begin{fact} \label{fact:moments-p} The mean and covariance matrix of the random vector $(F_n(t), F_n'(t))$ are given respectively by \begin{equation} \label{eq:moments-p} \begin{aligned} \mu_t &:= \sqrt{n}\begin{bmatrix} \mb{E}G(t)\\ \mb{E}G'(t) \end{bmatrix} \asymp n^{\frac12}\beta^{-\frac32}e^{-\frac{t^2}{2}}\begin{bmatrix} t \\ -t^2 \end{bmatrix} \\ \Sigma_t &:= \begin{bmatrix} \on{Var}G(t) & \on{Cov}(G(t), G'(t))\\ \on{Cov}(G(t), G'(t)) & \on{Var}G'(t) \end{bmatrix} \asymp \beta^{-\frac32} e^{-\frac{t^2}{2}}\begin{bmatrix} 1 & -t \\ -t & \beta \end{bmatrix}. \end{aligned} \end{equation} \end{fact} We proceed by centering and rescaling the density $p_t$. Let $Y_i(t)$, $i\in[n]$, be independent copies of \begin{equation} \label{eq:Yi} Y(t) = \Sigma_t^{-\frac12}\begin{bmatrix} G(t) - \mb{E}G(t) \\ G'(t) - \mb{E}G'(t) \end{bmatrix}. \end{equation} Let $q_t$ denote the density of $\frac{1}{\sqrt{n}} \sum_{i=1}^n Y_i(t)$. By construction $q_t$ has mean $0$ and covariance $I_2$. Moreover, by the change-of-variables formula, it holds \begin{equation} \label{eq:qt} p_t(x, y) = \left(\det\Sigma_t\right)^{-\frac12}q_t\left(\Sigma_{t}^{-\frac12}[(x, y)-\mu_t]\right). \end{equation} Now, let $\varphi:\mb{R}^2\to\mb{R}$ be the density of $N(0, I_2)$, i.e., \begin{equation*} \varphi(x) := \frac{1}{2\pi} e^{-\frac{\Vert x\Vert^2}{2}}. \end{equation*} We aim to approximate the Kac-Rice integral \cref{eq:krf} as follows: \begin{equation} \label{eq:approx} \int_T\int_0^\infty yp_t(0, y)\diff y\diff t \approx \int_T\int_0^\infty y \left(\det\Sigma_t\right)^{-\frac12}\varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y\diff t. \end{equation} The validity of this approximation is deferred to \cref{sec:error}. In the remainder of this section, we solely focus on computing the right hand side integral. \begin{lemma} \label{lem:phi-t} There exists some $A_t\asymp \beta^{-\frac32}nt^2e^{-\frac{t^2}{2}}$ such that \begin{equation} \label{eq:phi-t} \begin{aligned} \varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right) & \asymp \exp\left(-A_t-\Theta\left(\beta^{\frac12}e^{\frac{t^2}{2}}\right)y^2\right),\\ \int_0^\infty y \varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y & \asymp \beta^{-\frac12}e^{-\frac{t^2}{2}}e^{-A_t}. \end{aligned} \end{equation} \end{lemma} \begin{proof}[Proof of \Cref{lem:phi-t}] Since $\det \Sigma_t\asymp \beta^{-2}e^{-t^2}$, we have $$ \Omega:=\Sigma_t^{-1}\asymp \beta^{\frac12}e^{\frac{t^2}{2}} \begin{bmatrix} \beta & t \\ t &1 \end{bmatrix}. $$ Now as $t\in T$ and so $t^2\ll\beta$, we find \begin{align*} \left\Vert \Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right\Vert^2 & = \left\langle (-\mu_{t, 1}, y-\mu_{t, 2}), \Sigma_t^{-1}(-\mu_{t, 1}, y-\mu_{t, 2})\right\rangle \\ & \asymp \Omega_{11}\mu_{t, 1}^2-2\Omega_{12}\mu_{t, 1}(y-\mu_{t, 2})+\Omega_{22}(y-\mu_{t, 2})^2 \\ & \asymp \beta^{\frac12}e^{\frac{t^2}{2}}\mu_{t, 1}^2\left[\beta-2t\left(\frac{y}{\mu_{t, 1}}+t\right)+\left(\frac{y}{\mu_{t, 1}}+t\right)^2\right] \\ & \asymp \beta^{\frac12}e^{\frac{t^2}{2}}\mu_{t, 1}^2(\beta-t^2)+\beta^{\frac12}e^{\frac{t^2}{2}}y^2 \\ & \asymp \beta^{-\frac32}nt^2e^{-\frac{t^2}{2}}+\beta^{\frac12}e^{\frac{t^2}{2}}y^2. \end{align*} This shows the first statement in \cref{eq:phi-t}.
For the second, we have \begin{align*} \int_0^\infty y \varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y & \asymp e^{-A_t}\int_0^\infty y e^{-\Theta\left(\beta^{1/2}e^{t^2/{2}}\right)y^2} \diff y\\ &\asymp \beta^{-\frac12}e^{-\frac{t^2}{2}}e^{-A_t} \end{align*} by Gaussian integral computations (see \cref{fact:gaussian-int}). \end{proof} \subsection{\texorpdfstring{The Kac-Rice integral over $\varphi$}{The Kac-Rice}} \begin{figure}[ht!] \centering \includegraphics[scale=0.29]{figures/hist-b=100.pdf} \hspace{0.2cm} \includegraphics[scale=0.29]{figures/At-100.pdf} \includegraphics[scale=0.29]{figures/hist-b=300.pdf} \hspace{0.2cm} \includegraphics[scale=0.29]{figures/At-300.pdf} \caption{$n=10^5$ is fixed throughout. (Left) Empirical distribution of the modes of $\widehat P_n$ over $T$ for $\beta=100$ (top) and $\beta=300$ (bottom). (Right) The function $t\mapsto \sqrt{\beta}\exp(-A_t)$ for $\beta=100$ (top) and $\beta=300$ (bottom), which, due to the Kac-Rice formula, is an approximation for the distribution of the number of modes of $\widehat P_n$ in $T$. Shaded in grey is the interval $T$. (Code available at \href{https://github.com/KimiSun18/2024-gauss-kde-attention}{\color{myblue}github.com/KimiSun18/2024-gauss-kde-attention}.)} \label{figure:approx} \end{figure} We compute \cref{eq:krf} under the approximation \cref{eq:approx}. By \cref{eq:phi-t} and \cref{eq:moments-p}, we have that \begin{equation} \label{eq:int-phi-final} \int_S\int_0^\infty y \left(\det\Sigma_t\right)^{-\frac12}\varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y\diff t\asymp \sqrt{\beta}\int_S e^{-A_t} \diff t \end{equation} for any measurable $S\subset \mb{R}$. Assuming validity of the Gaussian approximation (see \cref{sec:error}), it follows from the Kac-Rice formula that the density of modes at $t \in \mb{R}$ is proportional to $\sqrt{\beta}e^{-A_t}$. We plot this density in \Cref{figure:approx} with the same choice of $n$ and $\beta$ as in the empirical distribution. We see that they match on the highlighted interval $T$, but not outside of $T$, where the Gaussian approximation breaks down---see \cref{rmk:delicate}. We compute \cref{eq:int-phi-final} explicitly for $S=T$ and $S=T'$. \begin{lemma} \label{lem:main-int-phi} If $n^c\lesssim \beta \lesssim n^{2-c}$ for some $c>0$, then \begin{align*} \int_T\int_0^\infty y \left(\det\Sigma_t\right)^{-\frac12}\varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y\diff t & \asymp \sqrt{\beta\log\beta}, \\ \int_{T'}\int_0^\infty y \left(\det\Sigma_t\right)^{-\frac12}\varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t] \right)\diff y\diff t & \lesssim \sqrt{\beta}. \end{align*} \end{lemma} \begin{proof}[Proof of \Cref{lem:main-int-phi}] Recall $A_t$ from \cref{lem:phi-t}. By \cref{eq:int-phi-final}, it suffices to show that \begin{equation} \label{eq:int-phi-b} \int_Te^{-A_t}\diff t \asymp \sqrt{\log \beta}\quad\text{and}\quad \int_{T'} e^{-A_t}\diff t \lesssim 1. \end{equation} As $A_t>0$, we have $e^{-A_t}\le 1$, so the first integral is at most the length of $T$, which is $O(\sqrt{\log\beta})$ by \cref{eq:T}. Let $t_s := \sqrt{2\log n - s\log \beta}$.
As the integrand is positive and $t\mapsto t^2e^{-\frac{t^2}{2}}$ is decreasing on $[t_{5/2},t_2]\subset T$, for constants $C, C'>0$ \begin{align*} \int_Te^{-A_t}\diff t & \ge \int_{t_{5/2}}^{t_2}\exp\left(-C\beta^{-\frac32}nt^2e^{-\frac{t^2}{2}}\right)\diff t \\ & \ge \left(t_2-t_{5/2}\right) \exp\left(-C\beta^{-\frac32}nt_{5/2}^2e^{-\frac{t_{5/2}^2}{2}}\right) \\ & \gtrsim \sqrt{\log \beta}\exp \left(-C'\beta^{-\frac14}\log n\right) \\ & \gtrsim \sqrt{\log\beta} \end{align*} as $n, \beta\to\infty$ with $\log n\asymp \log\beta$. Now if $t\in T'=[-t_3, t_3]$, we have $$ e^{-\frac{t^2}{2}}\ge e^{-\frac{t_3^2}{2}}=\beta^{\frac32}n^{-1}. $$ Hence \[ \int_{T'} e^{-A_t}\diff t \lesssim \int_{0}^{t_3}\exp\left(-C\beta^{-\frac32}nt^2e^{-\frac{t^2}{2}}\right)\diff t \le \int_{0}^{t_3}e^{-Ct^2}\diff t \lesssim 1.\qedhere\] \end{proof} \section{Leveraging the Edgeworth expansion} \label{sec:error} In this section, we show that the Gaussian approximation of $p_t$ is valid on $T$ by showing \begin{equation} \label{eq:error-goal} \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y\left|q_t-\varphi\right|\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \ll \sqrt{\beta\log \beta}. \end{equation} One natural idea is to use some asymptotic series to expand $q_t$ around $\varphi$, e.g. the Edgeworth expansion, to argue that $|q_t-\varphi|\ll \varphi$ in the sense of the integral over $y$. We discuss the two major obstacles we have to overcome in order to implement this approach. \begin{itemize} \item Firstly, all known results on the validity of asymptotic series such as Edgeworth expansions treat densities $q_t$ and $ \varphi$ that are independent of $n$. As $\beta$ grows in $n$, we will need to re-derive these results and carefully track the dependence on $\beta$. This will give extra constraints on $(t, \beta, n)$ for the validity of such an asymptotic series. Fortunately, these constraints will be satisfied precisely when $t\in T$. \item Secondly, even without the $n$-dependence of $\beta$, expanding $q_t$ to first order, that is, as $\varphi$ plus an error term as in, for example, \cite[Theorem 19.2]{bhattacharya2010normal}, gives an error of roughly \[|q_t-\varphi|(\mbf{x})\lesssim \frac{1}{1+\Vert\mbf{x}\Vert^2} \hspace{1cm} \text{ for all } \mbf{x}\in\mb{R}^2,\] but then the integral over $y$ in \cref{eq:error-goal} fails to converge. Therefore, we will need to go to the next term $\psi$ in the Edgeworth series. We control its integral over $y$ and $t$ in \cref{eq:error-goal} manually in \cref{sec:error-3}. Then, we control the higher order terms in \cref{sec:error-higher} using the approach motivated above, so that our error now converges upon integrating over $y$, i.e. roughly \[\left|q_t-\varphi-n^{-\frac12}\psi\right|(\mbf{x})\lesssim \frac{1}{1+\Vert \mbf{x}\Vert^3} \hspace{1cm} \text{ for all } \mbf{x}\in\mb{R}^2.\] \end{itemize} \subsection{Bounding the third order error} \label{sec:error-3} Recall $Y$ from \cref{eq:Yi}. For a multi-index $\alpha\in\mb{Z}_{\ge 0}^2$, let $\kappa^\alpha_t$ be the cumulant of $Y$ indexed by $\alpha$, which depends on $t$. Let $H^\alpha$ denote the standard Hermite polynomial with index $\alpha$. The next term in the Edgeworth series is $n^{-1/2}\psi$ with \begin{equation} \label{eq:psi} \psi(\mbf{x}) := \frac{\varphi(\mbf{x})}{6}\sum_{k=0}^3 \kappa_t^{(k, 3-k)} H^{(k, 3-k)}(\mbf{x})\quad\text{where}\quad H^{\alpha} := (-1)^{|\alpha|}\frac{\partial^\alpha \varphi}{\varphi}. \end{equation} If $|\alpha|=3$, $\kappa_t^\alpha$ are the third moments of $Y$, which we bound in \cref{sec:moments}. \begin{fact} \label{fact:eta} Let $\eta_3$ be the largest third cumulant of $Y$.
Then \[ \eta_3 :=\max\left\{ \kappa_t^{(k, 3-k)}: k\in \{0, 1, 2, 3\}\right\} \lesssim \beta^{\frac14}e^{\frac{t^2}{4}}.\] \end{fact} We bound $H^{(k, 3-k)}\left(\Sigma_{t}^{-1/2}[(0, y)-\mu_t]\right)$ by a polynomial in $\tilde{y}=\Theta(\beta^{1/4}e^{{t^2}/{4}}y)$. Now, by a method similar to that of \cref{lem:phi-t}, we obtain the following bound. It says that when we integrate the Edgeworth series $q_t = \varphi +n^{-1/2}\psi+\dots$ over $y$ and $t$, the contribution $\varphi$ dominates $n^{-1/2}\psi$, hinting at the validity of the approximation. \begin{lemma} \label{lem:error-3} Recall $T$ from \cref{eq:T} and $A_t$ from \cref{lem:phi-t}. Then \[ \int_T\int_0^\infty y\left(n\det\Sigma_t\right)^{-\frac12}\psi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \lesssim e^{-\frac{\omega(\beta)}{4}} \sqrt{\beta\log\beta}. \] \end{lemma} \begin{proof}[Proof of \Cref{lem:error-3}] By the proof of \cref{lem:phi-t}, \cref{eq:psi}, and \cref{fact:eta} \begin{align*} & \int_T\int_0^\infty y\left(n\det\Sigma_t\right)^{-\frac12}\psi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & = \frac{1}{6}\sum_{k=0}^3 \int_T \left(n\det\Sigma_t\right)^{-\frac12} \kappa_t ^{(k, 3-k)}\int_0^\infty y\left[\varphi H^{(k, 3-k)}\right]\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right) \diff y\diff t \\ & \asymp \int_T \left(n\det\Sigma_t\right)^{-\frac12} \eta_3 e^{-A_t} \int_0^\infty y\left(\beta^{\frac14}e^{\frac{t^2}{4}}y\right)^{O(1)} e^{-\Theta\left(\beta^{\frac12}e^{\frac{t^2}{2}}\right)y^2}\diff y\diff t \\ & \asymp \int_T \left(n\det\Sigma_t\right)^{-\frac12} e^{-A_t}\eta_3 \beta^{-\frac12}e^{-\frac{t^2}{2}}\left(\int_0^\infty \tilde{y}^{O(1)}e^{-\frac{\tilde{y}^2}{2}}d\tilde{y}\right)\diff t \\ & \lesssim n^{-\frac12}\beta^{\frac34}\sup_{t\in T}\left(e^{\frac{t^2}{4}}\right)\int_T e^{-A_t}\diff t \\ & \lesssim e^{-\frac{\omega(\beta)}{4}} \sqrt{\beta\log\beta} \end{align*} where the last step follows from \cref{eq:int-phi-b} and the definition \cref{eq:T} of $T$. \end{proof} \begin{remark} \label{rmk:delicate} One can see that this is actually an asymptotic equality by checking that the Gaussian integrals in the proof above are of their typical order (i.e. no cancellation of leading terms). Hence, the decay is only a factor of $e^{-\omega (\beta)/4}$. For $t\not\in T$, even for $t=\sqrt{2\log n-0.99\log\beta}$, the last inequality in \cref{lem:error-3} fails and we get a bound polynomially larger than $\sqrt{\beta}$. Then the third order error is asymptotically larger than the contribution of the Gaussian approximation, so the normal approximation is no longer valid. This can be seen by comparing the plots in \Cref{figure:approx}. \end{remark} \subsection{Bounding higher order errors} \label{sec:error-higher} We follow the classical proof of the validity of the Edgeworth expansion as an asymptotic series to bound the higher order pointwise error of the density as follows. \begin{lemma} \label{lem:error-higher} Assume that $p_t$ is bounded almost everywhere for any fixed $t\in T$. If $n^c\lesssim \beta \lesssim n^{2-c}$ for some $c>0$, then for any $t\in T$ and $\mbf{x}\in\mb{R}^2$, we have that \begin{equation} \label{eq:error-higher} \left(1+\Vert\mbf{x}\Vert^3\right)\left|q_t-\varphi-n^{-\frac12}\psi\right|(\mbf{x}) \lesssim e^{-\frac{\omega(\beta)}{4}}. \end{equation} \end{lemma} We defer the discussion of the assumption to \cref{ass: assumption.1} and defer the proof to \cref{sec:pf-error-higher}.
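As a rough numerical sanity check of the quantities entering this control (ours, with illustrative values of $\beta$, $t$, and sample size), one can estimate the third cumulants of the standardized vector $Y(t)$ by Monte Carlo and compare them with the bound of \Cref{fact:eta}. For simplicity, the sketch standardizes with a Cholesky factor of $\Sigma_t^{-1}$ rather than the symmetric square root $\Sigma_t^{-\frac12}$; the two differ by an orthogonal transformation, which affects the size of the third cumulants only up to a constant factor.
\begin{verbatim}
# Rough Monte Carlo check of the third cumulants of the standardized vector Y(t);
# beta, t and the sample size are illustrative choices of ours.
import numpy as np

rng = np.random.default_rng(2)
beta, t, m = 81.0, 2.0, 2_000_000
X = rng.standard_normal(m)
z = t - X
G = np.exp(-0.5 * beta * z**2)[:, None] * np.column_stack([z, 1.0 - beta * z**2])  # (G(t), G'(t))

Gc = G - G.mean(axis=0)                       # center
Sigma = np.cov(Gc, rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(Sigma))  # L L^T = Sigma^{-1}
Y = Gc @ L                                    # mean ~0, covariance ~I_2

# for a mean-zero vector, third cumulants equal third moments
kappa = [np.mean(Y[:, 0]**k * Y[:, 1]**(3 - k)) for k in range(4)]
print("max |third cumulant| (MC):", round(max(abs(c) for c in kappa), 2))
print("beta^(1/4) * exp(t^2/4)  :", round(beta**0.25 * np.exp(t**2 / 4), 2))
\end{verbatim}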
Here, we comment on the differences with \cite[Theorem 19.2]{bhattacharya2010normal} for $s=3$, which we follow in spirit but modify to allow $n$ dependence in $q_t$ in the form of $\beta$. There, it is shown that the left hand side is $o(n^{-1/2})$ provided the third moments satisfy $\eta_3 = O(1)$. With the bound of \cref{fact:eta} on $\eta_3$, which in our case is not constant, the error from the asymptotic series decays not in powers of $n^{-1/2}$ but in powers of $O(n^{-1/2}e^{{t^2}/{4}}\beta^{1/4}) =O(\exp(-\omega(\beta)/4))$ for $t\in T$. \begin{corollary} \label{cor:error-higher} If $n^c\lesssim\beta\lesssim n^{2-c}$ for $c>0$, then asymptotically as $n, \beta\to\infty$ \[ \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y\left|q_t-\varphi-n^{-\frac12}\psi\right|\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \lesssim e^{-\frac{\omega(\beta)}{4}} \sqrt{\beta\log\beta}. \] \end{corollary} \begin{proof}[Proof of \Cref{cor:error-higher}] By \cref{lem:error-higher} and the computations in the proof of \cref{lem:phi-t}, \begin{align*} & \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y\left|q_t-\varphi-n^{-\frac12}\psi\right|\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & \lesssim e^{-\frac{\omega(\beta)}{4}}\int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12} y\left(1+\left\Vert \Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right\Vert ^3\right)^{-1}\diff y\diff t \\ & \lesssim e^{-\frac{\omega(\beta)}{4}}\int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12} y\left(1+\Theta \left(\beta^{-\frac32}nt^2e^{-\frac{t^2}{2}}+\beta^{\frac12}e^{\frac{t^2}{2}}y^2 \right) ^{\frac32}\right)^{-1}\diff y\diff t \\ &\lesssim e^{-\frac{\omega(\beta)}{4}} \int_T \left(\det\Sigma_t\right)^{-\frac12} \left(\beta^{\frac14}e^{\frac{t^2}{4}}\right)^{-2}\int_0^\infty \frac{\tilde{y}}{1+\tilde{y}^3}\diff\tilde{y}\,\diff t \\ & \asymp e^{-\frac{\omega(\beta)}{4}}\sqrt{\beta\log\beta} \end{align*} for $\tilde{y}\asymp \beta^{1/4}e^{{t^2}/{4}}y$, where we note that $T$ has length $\Theta(\sqrt{\log \beta})$ by \cref{eq:T} and \[ \int_0^\infty \frac{\tilde{y}}{1+\tilde{y}^3}\diff\tilde{y} = \frac{2\pi}{3\sqrt{3}} = O(1).\qedhere \] \end{proof} \section{\texorpdfstring{Proof of \Cref{thm:main-result}}{Proof of}} We prove \cref{prop:main-int,prop:main-tail} by checking \cref{eq:main-eq-form}, thereby proving \cref{thm:main-result}. \subsection{\texorpdfstring{Proof of \cref{prop:main-int}}{Proof of}} \label{sec: kac.rice.application} To prove \Cref{prop:main-int} we seek to apply \Cref{thm:kac-rice} to $\mb{E}U_0(F_n, T)$. This in turn requires checking all the assumptions of \Cref{thm:kac-rice}. We have \begin{proposition} \label{lem: pt.bdd} Let $\beta, n$ be as in \Cref{thm:main-result}, and fix $t\in T$. Let $\upmu_t$ denote the law of $(F_n(t), F_n'(t))$ defined in \eqref{eq:Fn}. Then $\upmu_t$ admits a density $p_t\in\mathscr{C}^0(\mb{R}^2)$ satisfying $p_t({\bf{x}})\to0$ as $\|\bf{x}\|\to\infty$. Moreover, conditions {\it 1, 2} in \Cref{thm:kac-rice} also hold for $\Psi(t) = F_n(t)$. \end{proposition} We defer the proof to \Cref{sec: proof.pt.bdd}. We work under the following assumption. \begin{assumption} \label{ass: assumption.1} Consider the setting of \Cref{lem: pt.bdd}. We assume that condition {\it 4} of \Cref{thm:kac-rice} holds for $\Psi(t) = F_n(t)$. \end{assumption} To check conditions on moduli of continuity such as {\it 4} in \Cref{thm:kac-rice} in the Gaussian setting, one usually resorts to using results such as the Borell-TIS inequality \cite[Theorem 2.1.1]{adler2009random}.
Checking the validity of this assumption in the present, non-Gaussian, setting does not appear straightforward. With \Cref{lem: pt.bdd} and \Cref{ass: assumption.1}, we deduce \begin{lemma} \label{lem:kr-appl} With the notation as in \cref{thm:kac-rice}, \begin{equation} \mb{E}U_0\left(F_n, T\right) = \int_T\int_0^\infty yp_t(0, y)\diff y\diff t. \end{equation} \end{lemma} \begin{proof}[Proof of \cref{prop:main-int}] Combining \cref{lem:main-int-phi,lem:error-3,cor:error-higher} gives \begin{align*} \mb{E}U_0\left(F_n, T\right) & = \int_T\int_0^\infty yp_t(0, y)\diff y\diff t \\ & = \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y q_t\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & = \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y\varphi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & \quad + \int_T\int_0^\infty \left(n\det\Sigma_t\right)^{-\frac12}y\psi\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & \quad + \int_T\int_0^\infty \left(\det\Sigma_t\right)^{-\frac12}y\left[q_t-\varphi-n^{-\frac12}\psi\right]\left(\Sigma_{t}^{-\frac12}[(0, y)-\mu_t]\right)\diff y\diff t \\ & \asymp \sqrt{\beta\log \beta} \end{align*} as the first summand is $\Theta\left(\sqrt{\beta\log \beta}\right)$ while the last two are $o\left(\sqrt{\beta\log \beta}\right)$. Replacing $T$ by $T'=[-\sqrt{2\log n-3\log\beta}, \sqrt{2\log n-3\log\beta}]$, the first summand is $o\left(\sqrt{\beta\log \beta}\right)$ by \cref{lem:main-int-phi}. By positivity of the integrand, we bound the last two integrals over $T'$ by those over $T$, which are themselves $o\left(\sqrt{\beta\log \beta}\right)$. Now, all three summands are $o\left(\sqrt{\beta\log \beta}\right)$, as desired in \cref{prop:main-int}. \end{proof} \begin{figure}[!ht] \centering \includegraphics[scale=0.3]{figures/density_F_F_prime_b=81_t=0.pdf} \includegraphics[scale=0.3]{figures/density_F_F_prime_b=81_t=1.pdf} \includegraphics[scale=0.3]{figures/density_F_F_prime_b=81_t=3.pdf} \includegraphics[scale=0.3]{figures/density_F_F_prime_b=81_t=2.pdf} \caption{An estimate of the density $p_t=p_t(x,y)$ of $(F_n(t), F_n'(t))$ for $t=0, 1, 2, 3$ (clockwise from top left), where $\beta=81$ and $n=6500$, so that $\sqrt{2\log n -\log \beta}\approx 3$. (Code available at \href{https://github.com/KimiSun18/2024-gauss-kde-attention}{\color{myblue}github.com/KimiSun18/2024-gauss-kde-attention}.)} \label{fig: kimi} \end{figure} \subsection{\texorpdfstring{Proof of \cref{prop:main-tail}}{Proof of}} \label{sec:tail} In this section, we prove \cref{prop:main-tail}. \begin{lemma}\label{lem:scale-space} For any $a>0$ and $X_1, \dots , X_n\in \mb{R}$, the number of modes of $\wh{P}_n$ in $(a, \infty)$ is at most $|I|$ where $I=\{i\in [n]:X_i\ge a\}$. \end{lemma} \begin{proof}[Proof of \Cref{lem:scale-space}] Note that $\wh{P}_n (t) =\sum_{i=1}^n g_i(t)$ where for $i\in [n]$ we define \begin{equation} g_i(t):=\frac{1}{n}\, \mathsf{K}_{\beta^{-1/2}}\left(t-X_i\right)=\sqrt{\frac{\beta}{2\pi n^2}}\, e^{-\frac{\beta}{2}\left(t-X_i\right)^2}.\end{equation} For $i\not\in I$, $g_i$ is monotonically decreasing on $[X_i, \infty)\supset (a, \infty)$, so $\sum_{i\not\in I}g_i(t)$ has no modes in $(a, \infty)$. To this Gaussian mixture, we add in $g_i(t)$ for $i\in I$ one-by-one. By \cite[Theorem 2]{carreira2003number}, each time the number of modes in $(a, \infty)$ increases by at most one. In $|I|$-many steps, there are at most $|I|$ such modes. \end{proof} \begin{remark} \label{rmk:gkde} This argument crucially relies on the KDE being Gaussian.
As discussed in \cite{carreira2003number}, the Gaussian kernel is the only kernel where for any fixed samples the number of modes of the KDE is non-increasing in the bandwidth $h$. For other kernels, we do suspect the analog of \cref{lem:scale-space} to hold, but a different argument is needed. In particular, \cite{mammen91,mammen95,mammen97} avoid this problem by counting modes on compact sets. \end{remark} \begin{proof}[Proof of \cref{prop:main-tail}] By \cref{lem:scale-space}, symmetry of $T$ in \cref{eq:T} around $t=0$, linearity of expectations, and the tail bound $\mb{P}(|X|\ge a)\le 2e^{-a^2/2}$ for $X\sim N(0,1)$ \begin{equation} \begin{aligned} \mb{E}U_0\left(F_n, \mb{R}\setminus T\right) & \le \mb{E}\left|\{i:X_i\not\in T\} \right| \\ & = n\mb{P}\left(X\not\in T\right) \\ &\le 2n\exp\left(-\frac{2\log n-\log\beta-\omega(\beta)}{2}\right) \\ & = 2\sqrt{\beta \exp(\omega(\beta))} \\ & \ll \sqrt{\beta\log\beta} \end{aligned} \end{equation} by the definition of $\omega(\beta)$, proving \cref{prop:main-tail}. \end{proof} This concludes the proof of \cref{thm:main-result}. \section{Concluding remarks} We showed that the expected number of modes of a Gaussian KDE with bandwidth $\beta^{-\frac12}$ of $n\ge1$ samples drawn iid from $N(0, 1)$ is of order $\Theta(\sqrt{\beta\log \beta})$ for $n^c\lesssim \beta\lesssim n^{2-c}$, where $c>0$ is arbitrarily small. We also provided a precise picture of where the modes are located. The question in the higher-dimensional case, as well as on the unit sphere $\mathbb{S}^{d-1}$ with uniformly distributed samples, remains open. \subsection*{Acknowledgments} The authors would like to thank Enno Mammen for useful discussions and for sharing important references. We also thank Dan Mikulincer for discussions on Gaussian approximation using Edgeworth expansions, Valeria Banica for comments on the method of stationary phase, and Alexander Zimin for providing \Cref{fig: 1}. \medskip \noindent {\it Funding.} P.R. was supported by NSF grants DMS-2022448, CCF-2106377, and a gift from Apple. Y.S. was supported by the MIT UROP and MISTI France Programs. \appendix \section{Additional proofs} \subsection{\texorpdfstring{Proof of \Cref{fact:moments-p}}{Proof of}} \label{sec:moments} In this section we compute the first two moments of $(G, G')$ to prove \cref{fact:moments-p}. Note that if $n^c\lesssim \beta\lesssim n^{2-c}$ for some $c>0$ and $t\in T$, then $t^2/\beta\to0$, so $\exp \Theta(t^2/\beta) \to 1$. This implies that the exponentials appearing in the moments below are all asymptotically $e^{-{t^2}/{2}}$. We frequently use the following fact about Gaussian integrals both in exact and asymptotic forms. \begin{fact} \label{fact:gaussian-int} Let $\Gamma$ denote the Gamma function. For any $\alpha >0$ and integer $m\ge 0$, \[\int _{0}^{\infty }u^{m}e^{-\alpha u^{2}}\diff u=\frac{1}{2}\Gamma\left(\frac{m+1}{2}\right) \alpha^{-\frac{m+1}{2}}.\] \end{fact} We first compute $\mu_t$.
Completing the square gives \[ \frac{\beta}{2}z^2+\frac{1}{2}(z-t)^2 = \frac{\beta+1}{2}u^2+\frac{\beta t^2}{2(\beta+1)}\quad \text{where}\quad u = z-\frac{t}{\beta+1}.\] Hence, using \cref{fact:gaussian-int} we compute { \[\begin{aligned}\mb{E}G(t) = \int_{-\infty}^\infty ze^{-\frac{\beta}{2}z^2}\diff\gamma_{t, 1}(z)& = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty ze^{-\frac{\beta}{2} z^2 -\frac12 (z-t)^2}\diff z\\ & = \frac{e^{-\frac{\beta}{2(\beta+1)}t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(u+\frac{t}{\beta+1}\right)e^{-\frac{\beta+1}{2}u^2}\diff u \\ & = \frac{e^{-\frac{\beta}{2(\beta+1)}t^2}}{\sqrt{2\pi}} \left(\frac{t}{\beta+1}\right) \frac{\sqrt{\pi}}{\left(\frac{\beta+1}{2}\right)^{\frac12}} \\ & = \frac{e^{-\frac{\beta}{2(\beta+1)}t^2}t}{(\beta+1)^{\frac32}}, \end{aligned}\]} as well as { \[\begin{aligned} \mb{E}G'(t) & = \int _{-\infty}^\infty (1-\beta z^2)e^{-\frac{\beta}{2}z^2}\diff\gamma_{t, 1}(z) \\ & = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(1-\beta z^2\right)e^{-\frac{\beta}{2}z^2-\frac{1}{2}(z-t)^2}\diff z\\ & =\frac{e^{-\frac{\beta}{2(\beta+1)}t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty \left[1-\beta\left(u+\frac{t}{\beta+1}\right)^2\right]e^{-\frac{\beta+1}{2}u^2}\diff u \\ & = \frac{e^{-\frac{\beta}{2(\beta+1)} t^2}}{\sqrt{2\pi}} \left[\left(1-\frac{\beta t^2}{(\beta+1)^2}\right)\frac{\sqrt{\pi}}{\left(\frac{\beta+1}{2}\right)^{\frac12}}-\frac{\beta\sqrt{\pi}}{2\left(\frac{\beta+1}{2}\right)^{\frac32}} \right]\\ &= \frac{e^{-\frac{\beta}{ 2(\beta+1)}t^2}}{(\beta+1)^{\frac52}}\left((\beta+1)^2-\beta t^2-\beta(1+\beta)\right)\\ &= \frac{e^{-\frac{\beta}{ 2(\beta+1)}t^2}}{(\beta+1)^{\frac52}}\left(1+\beta-\beta t^2\right). \end{aligned}\] } From these computations, and the remark after \cref{fact:gaussian-int}, we readily obtain the asymptotics of $\mu_t$ as in \cref{fact:moments-p} upon multiplying by $\sqrt{n}$. We now compute $\Sigma_t$.
Completing the square gives \[\beta z^2+\frac{1}{2}(z-t)^2 =\frac{2\beta +1}{2}u^2+\frac{\beta t^2}{2\beta +1}\quad \text{where}\quad u = z-\frac{t}{2\beta+1}.\] Hence, using \cref{fact:gaussian-int} we compute { \[ \begin{aligned} \mb{E}G^2(t) & = \frac{1}{\sqrt{2\pi}}\int _{-\infty}^\infty z^2e^{-\beta z^2-\frac12(z-t)^2}\diff z \\ & = \frac{e^{-\frac{\beta}{(2\beta+1)}t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(u+\frac{t}{2\beta+1}\right)^2e^{-\frac{1+2\beta}{2}u^2}\diff u \\ & = \frac{e^{-\frac{\beta}{(2\beta+1)}t^2}}{\sqrt{2\pi}} \left[\left(\frac{t}{2\beta+1}\right)^2 \frac{\sqrt{\pi}}{\left(\frac{2\beta+1}{2}\right)^{\frac12}}+\frac{\sqrt{\pi}}{2\left(\frac{2\beta+1}{2}\right)^{\frac32}} \right] \\ & = \frac{e^{-\frac{\beta}{(2\beta+1)}t^2}}{(2\beta +1)^{\frac52}} \left(t^2+2\beta+1\right), \end{aligned} \] } as well as { \[ \begin{aligned} &\mb{E}\left[G(t)G'(t)\right] = \frac{1}{\sqrt{2\pi}}\int _{-\infty}^\infty z(1-\beta z^2)e^{-\beta z^2-\frac12(z-t)^2}\diff z \\ & = \frac{e^{-\frac{\beta}{(2\beta+1)}t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(u+\frac{t}{2\beta+1}-\beta\left(u+\frac{t}{2\beta+1}\right)^3\right)e^{-\frac{1+2\beta}{2}u^2}\diff u \\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{\sqrt{2\pi}} \left[\left(\frac{t}{2\beta+1}-\beta\left(\frac{t}{2\beta+1}\right)^3\right) \frac{\sqrt{\pi}}{\left(\frac{2\beta+1}{2}\right)^{\frac12}}-\left(\frac{3t\beta}{2\beta+1}\right)\frac{\sqrt{\pi}}{2\left(\frac{2\beta+1}{2}\right)^{\frac32}} \right] \\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{(2\beta +1)^{\frac72}} \left[t(2\beta+1)^2 -\beta t^3 -3t\beta(2\beta+1)\right]\\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{(2\beta +1)^{\frac72}} \left(-2\beta^2t+\beta t-\beta t^3+t\right), \end{aligned} \]} and, finally, { \[ \begin{aligned} &\mb{E}G'^2(t) = \frac{1}{\sqrt{2\pi}}\int _{-\infty}^\infty (1-\beta z^2)^2e^{-\beta z^2-\frac12(z-t)^2}\diff z \\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(1-\beta\left(u+\frac{t}{2\beta+1}\right)^2\right)^2e^{-\frac{1+2\beta}{2}u^2}\diff u \\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{\sqrt{2\pi}} \Bigg[\left(1-\frac{\beta t^2}{(2\beta+1)^2}\right)^2\frac{\sqrt{\pi}}{\left(\frac{2\beta+1}{2}\right)^{\frac12}} \\ &\hspace{2.75cm}+ \left(\frac{6\beta^2t^2}{(2\beta+1)^2}-2\beta\right) \frac{\sqrt{\pi}}{2\left(\frac{2\beta+1}{2}\right)^{\frac32}}+\beta^2\cdot \frac{3\sqrt{\pi}}{4\left(\frac{2\beta+1}{2}\right)^{\frac52}} \Bigg] \\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{(2\beta +1)^{\frac92}} \Big[((2\beta+1)^2-\beta t^2)^2+(2\beta+1)6\beta^2t^2-2\beta(2\beta+1)^3+3\beta^2(2\beta+1)^2\Big]\\ & = \frac{e^{-\frac{\beta}{2\beta+1}t^2}}{(2\beta +1)^{\frac92}} \left(12\beta^4+4\beta^3(t^2+5)+\beta^2(t^4-2t^2+15)-2\beta (t^2-3)+1 \right). \end{aligned}\] } We check that the entries of $\Sigma_t$ are asymptotically the corresponding second moments. Indeed, suppressing the dependence on $t$, we have that \begin{itemize} \item $\mb{E}G^2\asymp \beta^{-\frac52}(t^2+\beta)e^{-\frac{t^2}{2}}\gg \beta^{-3}t^2E^2_t \asymp (\mb{E}G)^2,$ \item $\mb{E}GG' \asymp -\beta^{-\frac72}e^{-\frac{t^2}{2}}(\beta^2t+\beta t^3)\gg\beta^{-4}E^2_t t(\beta-\beta t^2)\asymp (\mb{E}G)(\mb{E}G'),$ \item $\mb{E}G'^2\asymp \beta^{-\frac92}e^{-\frac{t^2}{2}}(\beta^4+\beta^2t^4)\gg \beta^{-5}E^2_t\beta^2(1+t^4)\asymp (\mb{E}G')^2,$ \end{itemize} where we bound $e^{-\frac{t^2}{2}}\le 1$.
From these computations, and the remark after \cref{fact:gaussian-int}, we readily obtain the asymptotics of $\Sigma_t$ as indicated in \cref{fact:moments-p}.\qed \subsection{\texorpdfstring{Proof of \Cref{fact:eta}}{Proof of}} In this section, we prove \cref{fact:eta} on third moments of $Y = \Sigma_t^{-1/2}\left(G-\mb{E}G, G'-\mb{E}G'\right)$. Since we only need an upper bound, we do not have to track the leading coefficients to ensure that they do not vanish when we combine applications of \cref{fact:gaussian-int}. Recalling $\Sigma_t^{-1}$ from the proof of \cref{lem:phi-t}, we upper bound asymptotically via H\"{o}lder's inequality: \begin{align*} \eta_3 & = \max_{k} \left|\mb{E}Y^{(k, 3-k)}\right| \\ & \le \mb{E} \Vert Y\Vert ^3 \\ & \le \mb{E}\left\Vert\left(G-\mb{E}G, G'-\mb{E}G'\right)^\intercal \Sigma_t^{-1}\left(G-\mb{E}G, G'-\mb{E}G'\right)\right\Vert^{\frac32} \\ & \lesssim \beta^{\frac34} e^{\frac{3t^2}{4}} \mb{E}\left|\beta (G-\mb{E}G)^2+2t(G-\mb{E}G)(G'-\mb{E}G')+(G'-\mb{E}G')^2\right|^{\frac32} \\ & \lesssim \beta^{\frac34} e^{\frac{3t^2}{4}} \left(\beta^{\frac32}\mb{E}|G|^3+\mb{E}|G'|^3\right) \\ & \lesssim \beta^{\frac34} e^{\frac{3t^2}{4}} \int_{-\infty}^\infty \left(\beta^{\frac32}|z|^3+|1-\beta z^2|^3\right)e^{-\frac{3\beta}{2}z^2-\frac12(z-t)^2}\diff z \\ &\lesssim \beta^{\frac34} e^{\frac{t^2}{4}} \int_{0}^\infty h\left(u+\frac{t}{3\beta+1}\right)e^{-\frac{3\beta+1}{2}u^2}\diff u, \end{align*} where we factor $e^{-\frac{3\beta}{2(3\beta+1)}t^2}\asymp e^{-\frac{t^2}{2}}$ out of the integral by substituting \[ h(z)=\beta^{\frac32}z^3+(1+\beta z^2)^3\quad\text{and}\quad u = z-\frac{t}{3\beta+1}.\] By linearity of integration and \cref{fact:gaussian-int}, each monomial $u^n$ integrates to $O(\beta^{-(n+1)/2})$. By monotonicity of $h$ on $\mb{R}_{\ge 0}$ and since $t\in T$, for some constant $C>0$, \begin{align*} \eta_3 \lesssim \beta^{\frac34} e^{\frac{t^2}{4}} \int_{0}^\infty h\left(u+\frac{t}{3\beta+1}\right)e^{-\frac{3\beta+1}{2}u^2}\diff u & \lesssim \beta^{\frac14} e^{\frac{t^2}{4}} h\left(C\beta^{-\frac12}+\frac{t}{3\beta+1}\right) \\ & \lesssim \beta^{\frac14} e^{\frac{t^2}{4}} h\left(2C\beta^{-\frac12}\right) \\ & \lesssim \beta^{\frac14} e^{\frac{t^2}{4}} \end{align*} upon noting that $w\mapsto h(w/\sqrt{\beta})$ has constant coefficients. This proves \cref{fact:eta}.\qed \subsection{\texorpdfstring{Proof of \Cref{lem:error-higher}}{Proof of}} \label{sec:pf-error-higher} Fix $n, \beta$ sufficiently large. For a multi-index $\alpha$, define \begin{align*} h(\mbf{x})&=\mbf{x}^\alpha \left(q_t-\varphi-n^{-\frac12}\psi\right)(\mbf{x}),\\ \mathscr{F}h(\mbf{z}) &=\partial ^\alpha \left(\mathscr{F}(q_t)-\frac{\varphi}{6\sqrt{n}}\sum_{k=0}^3 \kappa_t^{(k, 3-k)} H^{(k, 3-k)} \right)(\mbf{z}), \end{align*} where \begin{equation*} (\mathscr{F}h)(\mbf{z}) = \int_{\mb{R}^2} e^{-\mathrm{i}\langle \mbf{x},\mbf{z}\rangle}h(\mbf{x})\diff \mbf{x} \end{equation*} denotes the Fourier transform of $h$. By Fourier inversion, it suffices to show that for any multi-index $\alpha$ with order $|\alpha|\le 3$, the right-hand side of \begin{equation} \label{eq:higher-error-goal} \left\vert h(\mbf{x})\right| = \left|\frac{1}{(2\pi)^{2}}\int_{\mb{R}^2} e^{-\mathrm{i}\langle \mbf{z}, \mbf{x}\rangle}\mathscr{F}{h}(\mbf{z})\diff \mbf{z}\right|\lesssim \int_{\mb{R}^2} \left|\mathscr{F}{h}(\mbf{z})\right|\diff \mbf{z} \end{equation} is $O(\exp(-\omega(\beta)/2))$. We apply \cite[Theorem 9.12]{bhattacharya2010normal}---which is not asymptotic and has explicit constants---so we may use it even though $q_t$ depends on $n$.
Upon verifying the conditions on $Y$ via \cref{fact:moments-p} and \cref{fact:eta}, we have that \[ \left|\mathscr{F}{h}(\mbf{z})\right| \lesssim n^{-\frac12}\eta_{3}^{\frac12} \Vert \mbf{z}\Vert^{O(1)}e^{-\frac{\Vert \mbf{z}\Vert^2}{4}} \] provided $\Vert \mbf{z}\Vert\le a\sqrt{n}$ for some $a\asymp\eta_{3}^{-1/2}$. By \cref{fact:eta}, we have that \begin{equation} \label{eq:small-z} \int_{\Vert \mbf{z}\Vert \le a\sqrt{n}}\left|\mathscr{F}{h}(\mbf{z})\right| \diff\mbf{z}\lesssim n^{-\frac12}\eta_{3}^{\frac12} \int_{\mb{R}^2}\Vert \mbf{z}\Vert^{O(1)}e^{-\frac{\Vert\mbf{z}\Vert^2}{4}} \diff \mbf{z}\lesssim e^{-\frac{\omega(\beta)}{2}}. \end{equation} Recall that $q_t$ is the density of $n^{-1/2}\sum_{i=1}^n Y_i$, and let $f$ denote the density of $Y$. Now, we proceed as the proof of \cite[Theorem 19.2]{bhattacharya2010normal}. As $p_t$ is bounded, so is $f$, and so \cite[Theorem 19.1]{bhattacharya2010normal} gives $\msc{F}f\in L^1(\mb{R}^2)$ and \[ \delta := \sup_{\Vert \mbf{z}\Vert >a} \left|\mathscr{F}{f}(\mbf{z})\right| <1. \] By properties of the Fourier transform and the product rule, \begin{equation} \label{eq:big-z-exp} \begin{aligned} \int_{\Vert \mbf{z}\Vert > a\sqrt{n}} \left|\partial^\alpha \mathscr{F}{q_t}(\mbf{z})\right|\diff\mbf{z} & \lesssim \eta_{|\alpha|}n^{\frac{|\alpha|}{2}}\delta^{n-|\alpha|-1}\int_{\mb{R}^2}\left|\mathscr{F}{f}\left(\frac{\mbf{z}}{\sqrt{n}}\right)\right| \diff\mbf{z} \\ & \lesssim \left(n\beta e^{\frac{t^2}{2}}\right)^{O(1)} \delta^{n-O(1)} \\ & \lesssim e^{-\frac{\omega(\beta)}{4}} \end{aligned} \end{equation} for sufficiently large $n$. Finally, we bound similar to \cref{lem:error-3}: \begin{equation} \label{eq:big-z-poly} \begin{aligned} \int_{\Vert \mbf{z}\Vert > a\sqrt{n}} &\left|\partial^\alpha \frac{\varphi}{6\sqrt{n}}\sum_{k=0}^3 \kappa_t^{(k, 3-k)} H^{(k, 3-k)}\right|(\mbf{z})\diff\mbf{z} \\ & \lesssim n^{-\frac12}\sum_{k=0}^3\kappa_{t}^{(k, 3-k)}\int_{\mb{R}^2}\left|\partial^{\alpha}H^{(k, 3-k)}\varphi\right|(\mbf{z}) \diff\mbf{z} \\ & \lesssim n^{-\frac12}\eta_3 \int_{\mb{R}^2} \Vert \mbf{z}\Vert ^{O(1)}e^{-\frac{\Vert \mbf{z}\Vert^2}{2}}\diff\mbf{z} \\ & \lesssim e^{-\frac{\omega(\beta)}{4}}. \end{aligned} \end{equation} Combining \cref{eq:small-z,eq:big-z-exp,eq:big-z-poly} proves \cref{eq:higher-error-goal} and hence \cref{lem:error-higher}.\qed \subsection{\texorpdfstring{Proof of \Cref{lem: pt.bdd}}{Proof of} } \label{sec: proof.pt.bdd} Point {\it 1} in \Cref{thm:kac-rice} can readily be seen to hold because of the explicit form of both of the fields. We focus on showing Point {\it 3}, the proof of which can be repeated essentially verbatim to deduce Point {\it 2}. Observe that $\upmu_t=\upnu_t^{\ast n}$, where $\upnu_t$ is the law of \begin{equation*} \label{eq: Gt-prime} \begin{bmatrix} G(t)\\ G'(t) \end{bmatrix} = \begin{bmatrix} g(Z)\\ g'(Z) \end{bmatrix} \end{equation*} with $Z\sim N(t,1)$ and $g(z) = ze^{-{\beta} z^2/2}$. (Also, for $n=1$ we have $\upmu_t=\upnu_t$, and $\upnu_t$ cannot have a continuous density on $\mb{R}^2$, since both components of a drawn random vector $(G(t), G'(t))$ are functions of the same one-dimensional Gaussian random variable.) We first show that $\mathscr{F}(\upnu_t^{\ast n}) = (\mathscr{F}\upnu_t)^n\in L^1(\mb{R}^2)$ for any fixed $t\in T$, and without loss of generality we take $t=0$. This would imply that $\upmu_t$ has a density $p_t\in\mathscr{C}^0(\mb{R}^2)$ satisfying $p_t(\mbf{x})\to0$ as $\|\mbf{x}\|\to\infty$ by virtue of Fourier inversion and the Riemann-Lebesgue lemma. 
We also perform computations as if $\upnu_t^{\ast n}$ were already a function, and all arguments can be justified by appealing to the framework of Schwarz distributions $\mathcal{S}'(\mb{R}^2)$ and duality. We have \begin{align*} \int_{\mb{R}^2}\left|\mathscr{F}(\upnu_t)(\xi)^n\right|\diff\xi&= \int_{\mb{R}^2}\left|\mathscr{F}(\upnu_t)(\xi)\right|^n\diff\xi \\ &=\int_{\mb{R}^2}\left|\int_{\mb{R}} e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}e^{-\frac{x^2}{2}}\diff x\right|^n\diff\xi. \end{align*} Recalling that $n\to\infty$ in our regime, we can suppose $n\ge 4$, and for the above integral to be finite, it suffices to show that \begin{equation*} \left|\int_{\mb{R}} e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}e^{-\frac{x^2}{2}}\diff x\right|\lesssim\frac{1}{\sqrt{\xi_1+\xi_2}} \hspace{1cm} \text{ as } \xi_1,\xi_2\to\infty. \end{equation*} Observe that the critical points of $g$ are $\pm\sqrt{\frac{1}{\beta}}$, whereas those of $g'$ are $0$ and $\pm\sqrt{\frac{3}{\beta}}$. Since $\beta\to\infty$ as well, pick $\varepsilon>0$ sufficiently small and such that $\varepsilon>\sqrt{\frac{3}{\beta}}$. We first see that \begin{align*} &\left|\int_{|x|>\varepsilon} e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}e^{-\frac{x^2}{2}}\diff x\right|\\ &= \left|\int_{|x|>\varepsilon}\frac{1}{\mathrm{i}}\frac{1}{\xi_1g'(x)+\xi_2 g''(x)}\frac{\diff}{\diff x}\left(e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}\right)e^{-\frac{x^2}{2}}\diff x\right|\\ &\lesssim\frac{1}{\xi_1+\xi_2}, \end{align*} where we used integration by parts to obtain the last inequality---this is in fact an elementary version of the method of non-stationary phase. For the integral over $\{|x|\le\varepsilon\}$, we look to use the method of stationary phase as $\xi_1, \xi_2\to\infty$, by distinguishing the three regimes $\xi_1\gg \xi_2$, $\xi_2\gg \xi_1$, and $\xi_1\sim \xi_2$. When $\xi_1\gg \xi_2$, we have \begin{align*} &\left|\int_{|x|\le\varepsilon} e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}e^{-\frac{x^2}{2}}\diff x\right|\\ &=\left|\int_{|x|\le\varepsilon} e^{-\mathrm{i}(\xi_1+\xi_2)\left(\frac{\xi_1}{\xi_1+\xi_2}g(x)+\frac{\xi_2}{\xi_1+\xi_2}g'(x)\right)} e^{-\frac{x^2}{2}}\diff x \right|\\ &=\left|\int_{|x|\le\varepsilon} e^{-\mathrm{i}(\xi_1+\xi_2)(g(x)+O(\xi_1\varepsilon))} e^{-\frac{x^2}{2}}\diff x \right|\\ &\lesssim \frac{1}{\sqrt{\xi_1+\xi_2}} + o\left(\frac{1}{\sqrt{\xi_1+\xi_2}}\right) \end{align*} by the method of stationary phase applied to the phase $g$, since $g''$ is non-degenerate at the critical points $\pm\sqrt{\frac{1}{\beta}}$. Similarly when $\xi_2\gg\xi_1$, we have \begin{align*} \left|\int_{|x|\le\varepsilon} e^{-\mathrm{i}(\xi_1g(x)+\xi_2g'(x))}e^{-\frac{x^2}{2}}\diff x\right|&=\left|\int_{|x|\le\varepsilon} e^{-\mathrm{i}(\xi_1+\xi_2)(g'(x)+O(\xi_2\varepsilon))} e^{-\frac{x^2}{2}}\diff x \right|\\ &\lesssim \frac{1}{\sqrt{\xi_1+\xi_2}} + o\left(\frac{1}{\sqrt{\xi_1+\xi_2}}\right) \end{align*} by the method of stationary phase applied to the phase $g'$, since $g'''$ is non-degenerate at the critical points $0$ and $\pm\sqrt{\frac{3}{\beta}}$. The same argument then carries through when $\xi_1\sim\xi_2$, using the phase $g+g'$. This yields the desired conclusion. 
To deduce that $(t,\mbf{x})\mapsto p_t(\mbf{x})$ is continuous on $T\times\mb{R}^2$, we note that \begin{equation*} p_t(\mbf{x}) = \frac{1}{(2\pi)^2}\int_{\mb{R}^2} e^{\mathrm{i}\langle\mbf{x}, \mbf{z}\rangle}\int_{\mb{R}^n} \exp\left(-\mathrm{i}\left\langle\mbf{z}, \begin{bmatrix} \sum_{j=1}^n g(\xi_j)\\ \sum_{j=1}^n g'(\xi_j)\end{bmatrix}\right\rangle\right) \gamma_t(\xi_1)\cdots\gamma_t(\xi_n)\diff\xi\diff\mbf{z}. \end{equation*} We can conclude by the Lebesgue dominated convergence theorem.\qed \bibliographystyle{alpha} \bibliography{refs}{} \bigskip \begin{minipage}[t]{.5\textwidth} {\footnotesize{\bf Borjan Geshkovski}\par Inria \& Laboratoire Jacques-Louis Lions\par Sorbonne Université\par 4 Place Jussieu\par 75005 Paris, France\par \par e-mail: \href{mailto:[email protected]}{\textcolor{blue}{\scriptsize [email protected]}} } \end{minipage}\begin{minipage}[t]{.5\textwidth} {\footnotesize{\bf Philippe Rigollet}\par Department of Mathematics\par Massachusetts Institute of Technology\par 77 Massachusetts Ave\par Cambridge 02139 MA, United States\par \par e-mail: \href{mailto:blank}{\textcolor{blue}{\scriptsize [email protected]}} } \end{minipage} \begin{center} \begin{minipage}[t]{.5\textwidth} {\footnotesize{\bf Yihang Sun}\par Department of Mathematics\par Stanford University\par 450 Jane Stanford Way Building 380\par Stanford, CA 94305, United States\par \par e-mail: \href{mailto:blank}{\textcolor{blue}{\scriptsize [email protected]}} } \end{minipage}\end{center} \end{document}
2412.12172v1
http://arxiv.org/abs/2412.12172v1
Inner-outer factorization of analytic matrix-valued functions
\documentclass[12pt,a4paper]{article} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage{floatflt} \usepackage{setspace} \usepackage{pdfpages} \usepackage{fancyhdr} \usepackage{makeidx} \makeindex \lhead{\rightmark} \chead{} \rhead{} \newcommand{\vecN}[1][2]{\N\hspace{0.25mm}^{#1}} \newcommand{\vecNz}[1][2]{\N\hspace{0.25mm}_0^{#1}} \newcommand{\absatz}[0]{\vspace{1em}\\} \newcommand{\setNn}[1][0]{\N\hspace{0.25mm}_{#1}} \newcommand{\ginv}{\dagger} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{hopefully-a-thm}{Wish}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{observation}{Observation}[section] \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{remarks}{Remarks} \newtheorem*{example}{Example} \newcommand{\N}{\mathbf{N}} \newcommand{\R}{\mathbf{R}} \newcommand{\Z}{\mathbf{Z}} \newcommand{\C}{\mathbf{C}} \newcommand{\Q}{\mathbf{Q}} \newcommand{\D}{\mathbf{D}} \newcommand{\Con}{\mathcal{C}} \newcommand{\T}{\mathbf{T}} \newcommand{\tr}{\mathrm{tr}\;} \newcommand{\Hpl}{\mathbf{H}} \renewcommand{\H}{\mathcal{H}} \newcommand{\mint}{\overset{\curvearrowright}{\int}} \newcommand{\prodr}{\overset{\curvearrowright}{\prod}} \newcommand{\prodl}{\overset{\curvearrowleft}{\prod}} \newcommand{\diag}{\mathrm{diag}} \renewcommand{\Re}{\mathrm{Re}\,} \renewcommand{\Im}{\mathrm{Im}\,} \newcommand{\bigslant}[2]{{\raisebox{.2em}{$#1$}\left/\raisebox{-.2em}{$#2$}\right.}} \newcommand{\chara}{\mathrm{char}\;} \newcommand{\defin}[1]{{\em #1}} \newcommand{\adj}{{\mbox{*}}} \newcommand{\tinypf}[1]{} \newcommand{\sigmamax}{\sigma_{\text{max}}} \newcommand{\Sch}{\mathcal{S}} \newcommand{\Img}{\mathrm{Im}\,} \newcommand{\SU}{\mathrm{SU}} \newcommand{\BV}{\mathrm{BV}} \newcommand{\var}{\mathrm{var}} \newcommand{\osc}{\mathrm{osc}} \newcommand{\ch}{\mathbf{1}} \newif\ifdetail } } \detailtrue \renewcommand{\l}{\ell} \onehalfspacing \numberwithin{equation}{section} \makeatletter \newcommand*{\betreuer}[1]{\def\@betreuer{#1}} \betreuer{} \newcommand*{\ausarbeitungstyp}[1]{\def\@ausarbeitungstyp{#1}} \ausarbeitungstyp{} \newcommand*{\institut}[1]{\def\@institut{#1}} \institut{} \newcommand*{\authornew}[1]{\def\@authornew{#1}} \authornew{} \renewcommand\maketitle{\begin{titlepage} \let\footnotesize\small \let\footnoterule\relax \let \footnote \thanks \null\vfil \begin{center} \parbox{12cm}{\centering\LARGE\bfseries \@title \par}\\ {\large \vspace{.917em} \@authornew}\\ \vspace{.917em} \vspace{.917em} {\normalsize \@date} \vspace{13.75em} {\normalsize \@ausarbeitungstyp}\\ \vspace{.917em} {\normalsize \@betreuer}\\ \vspace{.917em} \centerline{{\normalsize\sc \@institut}} \vspace{13.75em} \centerline{{\normalsize\sc Mathematisch-Naturwissenschaftliche Fakult\"at der}} \vspace{.917em} \centerline{{\normalsize\sc Rheinischen Friedrich-Wilhelms-Universit\"at Bonn}} \end{center} \clearpage \thispagestyle{empty}\mbox{} \clearpage \pagenumbering{arabic} \end{titlepage} \setcounter{footnote}{0} \global\let\maketitle\relax \global\let\@author\@empty \global\let\@date\@empty \global\let\@title\@empty \global\let\title\relax \global\let\author\relax \global\let\date\relax \global\let\and\relax } \makeatother \authornew{Joris Roos} \date{4th February 2014} \betreuer{Advisor: Prof. Dr. 
Christoph Thiele} \institut{Mathematical Institute} \title{Inner-outer factorization of analytic matrix-valued functions} \ausarbeitungstyp{Master's Thesis Mathematics} \begin{document} \maketitle \pagestyle{fancy} \setcounter{page}{3} \tableofcontents \newpage \section{Introduction} It is well known \cite{Garnett}, \cite{Rudin2} that an analytic function $f\not\equiv 0$ on the unit disk $\D$ in the complex plane, satisfying $|f(z)|\le 1$ for all $z\in\D$, can be uniquely factored as \begin{align} \label{eqn:scinnerouter} f=m_f\cdot q_f \end{align} where $m_f$ is an \emph{inner function}\index{scalar inner function}, i.e. a bounded analytic function satisfying $|m_f(z)|=1$ almost everywhere on $\T=\partial\D$ and $q_f$ is an \emph{outer function}\index{scalar outer function}, i.e. $-\log|q_f|$ is the Poisson extension of an absolutely continuous positive measure. Inner functions can be further factored into a Blaschke product\index{Blaschke product}, that is, a convergent product of functions of the form\footnote{With obvious modifications if $z_0=0$.} $b_n(z)=\frac{|z_0|}{z_0}\frac{z_0-z}{1-z\overline{z_0}}$, $z_0\in\D$, and a non-vanishing inner function. This work describes that factorization for the case that $f$ is a bounded analytic matrix-valued function on the unit disk (we will abbreviate the term \emph{matrix-valued function} by \emph{mvf}\index{mvf} from now on). It should be noted that a factorization of the type \eqref{eqn:scinnerouter} can be achieved also for a more general class of functions taking as values operators on a Hilbert space. This theory was developed by Sz.-Nagy et al. \cite{Sz.-Nagy} and uses abstract machinery of operator theory. However, our goal is to provide a more explicit understanding for the case of matrices. An appropriate generalization of the Poisson integral representations for scalar inner and outer functions turns out to be given by multiplicative integrals. These are defined in a similar manner to classical Riemann-Stieltjes integrals, where the usual Riemann sums are replaced by products and we are integrating over matrices instead of complex numbers. A motivation for considering multiplicative integrals originates in the study of the nonlinear Fourier transform or scattering transform \cite{TaoThieleTsai}, a certain discrete nonlinear analogue of the classical Fourier transform. Multiplicative integrals also arise naturally in the theory of canonical systems of ordinary differential equations, as they can be interpreted as monodromy matrices of such systems \cite{Arov-DymI}, \cite{Arov-DymII}. In physics, particularly in quantum field theory, multiplicative integrals play a central role and are known as time-ordered or path-ordered exponentials. Multiplicative representations for certain classes of analytic mvfs have been developed by V.P. Potapov \cite{Potapov}. The inner-outer factorization of bounded analytic mvfs was found by Ginzburg \cite{Ginzburg}, but he omits the details of his proofs. We are trying to fill in the gaps. Let us proceed to describe the main result. This requires a few definitions, which we will briefly state now. They will be repeated and covered in greater detail in later sections. A bounded analytic mvf is a mvf whose entries are bounded analytic functions. A Blaschke-Potapov product is a possibly infinite product of mvfs on the unit disk of the form \begin{align} b(z)=I-P+\frac{|z_0|}{z_0}\frac{z_0-z}{1-z\overline{z_0}}P \end{align} for some $z_0\in\D$ and an orthogonal projection $P$. 
By convention, we also allow a Blaschke-Potapov product to be multiplied by a unitary constant. We call a mvf $A$ on an interval $[a,b]$ increasing if it is Hermitian and $A(t)-A(s)$ is positive semidefinite for all $t,s\in[a,b]$ with $t\ge s$. By an outer mvf we mean a bounded analytic mvf on the unit disk of the form \begin{align} \label{eqn:introouter} E(z)=U\mint_0^{2\pi} \exp\left(\frac{z+e^{i\varphi}}{z-e^{i\varphi}} M(\varphi)d\varphi\right) \end{align} where $U$ is a unitary constant and $M$ an integrable Hermitian mvf whose least eigenvalue is bounded from below. The symbol $\mint$ denotes a multiplicative integral, which we will discuss in detail in Section \ref{sect:multint}. A pp-inner function is a mvf on the unit disk of the form \begin{align} \label{eqn:introppinner} S_{pp}(z)=U\prod_{k=1}^m \mint_0^{l_k} \exp\left(\frac{z+e^{i\theta_k}}{z-e^{i\theta_k}}\,dE_k(t)\right) \end{align} for a unitary constant $U$, $m\in\N\cup\{\infty\}$, $l_k>0$, $\theta_k\in[0,2\pi)$ and increasing mvfs $E_k$ with $\tr E_k(t)=t$. By an sc-inner function we mean a mvf on the unit disk which can be written as \begin{align} \label{eqn:introscinner} S_{sc}(z)=U\mint_0^{2\pi} \exp\left(\frac{z+e^{i\varphi}}{z-e^{i\varphi}}dS(\varphi)\right) \end{align} for a unitary constant $U$ and a singular continuous increasing mvf $S$. More details and equivalent characterizations for these definitions will be given in Section \ref{sect:innerouterfact}. We can now state the main theorem. \begin{thm} \label{thm:main} Let $A$ be a bounded analytic mvf on $\D$ such that $\det A~\not\equiv~0$. Then there is a Blaschke-Potapov product $B$, a pp-inner mvf $S_{pp}$, an sc-inner mvf $S_{sc}$ and an outer mvf $E$ such that \begin{align} A(z)=B(z)S_{pp}(z)S_{sc}(z)E(z) \end{align} for all $z\in\D$. Moreover, this factorization is unique in the sense that the factors are uniquely determined up to multiplication by a unitary constant. Also, the function $M$ in the representation \eqref{eqn:introouter} is uniquely determined up to changes on a set of measure zero. \end{thm} It is a natural question whether the functions $E_k$ and $S$ in \eqref{eqn:introppinner} and \eqref{eqn:introscinner} are also uniquely determined. We will see that the answer to that question is negative, at least in the case of the functions $E_k$. This thesis is structured as follows. In Section \ref{sect:multint} we develop the theory of multiplicative Riemann-Stieltjes integrals to an extent sufficient for our purpose. Section \ref{sect:contractivemvfs} presents Potapov's fundamental theorem on multiplicative representations of contractive mvfs (Theorem \ref{thm:potapov}). We also discuss convergence and uniqueness questions on Blaschke-Potapov products. The next section is devoted to the proofs of Theorem \ref{thm:main} and some additional properties of inner and outer mvfs. In the appendix we have assembled several basic facts which are needed in the text. This includes in particular a proof of Herglotz' representation theorem for mvfs, which is used in a crucial step in the proof of Potapov's fundamental theorem.\\ {\bf Acknowledgement.} I would like to express my gratitude to my advisor Prof. Christoph Thiele, who has supported me throughout the work on this thesis and provided me with helpful comments. \section{Multiplicative integrals} \label{sect:multint} The multiplicative integrals which we are concerned with are certain multiplicative analogues of classical Riemann-Stieltjes integrals.
Multiplicative integrals originated in the work of V. Volterra, who considered them in 1887 for the purpose of studying systems of ordinary differential equations. L. Schlesinger later formulated Volterra's concepts in a rigorous framework, cf. \cite{Schlesinger1}. An overview of the subject is given in \cite{Slavik}. However, the focus there is on multiplicative Riemann and Lebesgue integrals. Multiplicative integrals of Stieltjes type are discussed in \cite[Appendix \S 1]{Potapov} and \cite[\S 25]{Brodskii}. \subsection{Definition} \label{multint} Let us first fix some notation and conventions which will be used not only in this section, but throughout the entire text. The space of $n\times n$ matrices with entries in $\C$ will be denoted by $M_n$. We equip $M_n$ with the matrix norm\index{matrix norm} $\|A\|=\sup_{\|x\|_2=1} \|Ax\|_2$ where $\|\cdot\|_2$ denotes the Euclidean norm in $\C^n$. Several properties and estimates for this norm are given in Appendix \ref{subsect:matrixnorm}. They will be used without further reference throughout the text. We call a matrix $A\in M_n$ \defin{positive}\index{positive matrix} and write $A\ge 0$, if it is Hermitian and positive semidefinite. For a positive definite Hermitian matrix $A$ we write $A>0$ and call it \defin{strictly positive}\index{strictly positive matrix}. A mvf $A:[a,b]\rightarrow M_n$ is called \defin{increasing}\index{increasing} if it is Hermitian and monotonically increasing, i.e. $A(t)\ge A(s)$ whenever $t\ge s$. Likewise, $A$ is called \defin{strictly increasing}\index{strictly increasing} if $A(t)>A(s)$ whenever $t>s$. The terms \defin{decreasing}\index{decreasing} and \defin{strictly decreasing}\index{strictly decreasing} are defined accordingly. \begin{definition} Let $a\le b$. A \defin{subdivision}\index{subdivision} or \defin{partition}\index{partition} $\tau$ of the interval $[a,b]$ is a finite set of real numbers $\{t_i\in[a,b]\,:\,i=0,\dots,m\}$ such that $$a=t_0\le t_1\le\cdots\le t_m=b$$ Define $\Delta_i \tau=t_i-t_{i-1}$ for $i=1,\dots,m$ and $\nu(\tau)=\max_i \Delta_i \tau$. Moreover, given a mvf $E:[a,b]\rightarrow M_n$, we set $\Delta_i E=\Delta_i^\tau E=E(t_i)-E(t_{i-1})$ for $i=1,\dots,m$. Also define $$\var_{[a,b]}^\tau E = \sum_{i=1}^m \|\Delta_i E\|$$ Then $E$ is said to be of \defin{bounded variation}\index{bounded variation}\index{$\var_{[a,b]} E$} if $$\var_{[a,b]} E=\sup_{\tau\in\mathcal{T}^b_a} \var^\tau_{[a,b]} E<\infty$$ The space of bounded variation functions (\defin{BV-functions})\index{$\mathrm{BV}([a,b]; M_n)$} with values in $M_n$ is denoted by $\BV([a,b]; M_n)$. If $n=1$, we write $\BV([a,b])$. For $E\in\BV([a,b]; M_n)$, we call $$|E|(t)=\var_{[a,t]} E$$ the \defin{total variation function}\index{total variation}\index{$|E|(t)$} of $E$. It should not be confused with $\|E(t)\|$, which is the matrix norm of $E(t)$.\\ Given a subdivision $\tau$, choose intermediate points $\xi=(\xi_i)_{i=1,\dots,m}$ with $\xi_i\in[t_{i-1},t_i]$. By $\mathcal{T}^b_a$\index{$\mathcal{T}^b_a$} we denote the set of tagged partitions \index{tagged partition} $(\tau,\xi)$ such that $\tau$ is a subdivision of the interval $[a,b]$ and $\xi$ is a choice of corresponding intermediate points. Given also a function $f$ on $[a,b]$ with values in $\C$ or $M_n$ we define\index{$P(f,E,\tau,\xi)$} $$P(f,E,\tau,\xi)=P(\tau,\xi)=\prodr^m_{i=1} \exp(f(\xi_i)\Delta_i E)$$ Here $\prodr_{i=1}^m A_i=A_1\cdot A_2\cdots A_m$ denotes multiplication of the matrices $(A_i)_i$ from left to right.
We will also often simply write $\prod_{i=1}^m A_i$ for $\prodr_{i=1}^m A_i$. Similarly, $\prodl_{i=1}^m A_i~=~A_m\cdot~A_{m-1}~\cdots~A_1$ denotes multiplication from right to left. \end{definition} The $P(\tau,\xi)$ form a net with respect to the directed set $(\mathcal{T}^b_a,\le)$, where we say that $(\tau,\xi)\le (\tau^\prime, \xi^\prime)$ if $\tau^\prime\subset\tau$. Note that $\tau^\prime\subset\tau$ implies $\nu(\tau)\le\nu(\tau^\prime)$, but the converse is not true. \begin{definition}Let $X=\C$ or $X=M_n$. For a matrix $P\in M_n$ and functions $f~:~[a,b]\rightarrow X$, $E:[a,b]\rightarrow M_n$ we say that $P$ is the \defin{(right) multiplicative Stieltjes integral}\index{multiplicative Stieltjes integral} corresponding to this data if the net $(P(\tau,\xi))_{(\tau,\xi)\in\mathcal{T}^b_a}$ converges to $P$: \begin{align} \label{eqn:netlimit} P=\lim_{\nu(\tau)\rightarrow 0} P(\tau,\xi) \end{align} i.e. for every $\epsilon>0$ there exists a $(\tau_0,\xi_0)\in\mathcal{T}^b_a$ such that $$\|P(\tau,\xi)-P\|<\epsilon\;\text{for every}\;(\tau,\xi)\le (\tau_0,\xi_0)$$ In words, we will often refer to this as ``the limit as $\nu(\tau)\rightarrow 0$''. We introduce the notation $$P=\mint_a^b \exp(f\,dE)=\mint_a^b e^{f\,dE}$$ For short we also write $f\in\mathcal{M}^b_a[E]$ to denote the existence of $\mint^b_a\exp(f\,dE).$ \end{definition} For the remainder of this section we will be concerned with criteria for the existence of multiplicative integrals. \begin{lemma}[Cauchy criterion]\index{Cauchy criterion} \label{lemma:cauchycrit} Suppose that $f,E,a,b$ are as above and that for every $\epsilon>0$ there exists a $(\tau_0,\xi_0)\in\mathcal{T}^b_a$ such that for all $(\tau,\xi),(\tau^\prime,\xi^\prime)\le (\tau_0,\xi_0)$ we have $$\|P(f,E,\tau,\xi)-P(f,E,\tau^\prime,\xi^\prime)\|<\epsilon$$ Then the integral $\mint^b_a \exp(f\,dE)$ exists. The converse also holds. \end{lemma} \begin{proof} The condition in the lemma means that $(P(\tau,\xi))_{(\tau,\xi)}$ is a Cauchy net in the complete space $M_n$. Hence it converges. \end{proof} \begin{prop} \label{prop:mintexist1} Let $f:[a,b]\rightarrow\C$ be Riemann integrable and $E:[a,b]\rightarrow M_n$ Lipschitz continuous. Then the multiplicative integral $\mint^b_a\exp(f\,dE)$ exists. \end{prop} \begin{proof} By Lebesgue's criterion\index{Lebesgue's criterion} for Riemann integrability we can choose $M~>~0$ such that $|f(t)|\le M$ for all $t\in[a,b]$. Let $L>0$ be a Lipschitz constant for $E$ and set $C=M\cdot L$. Now consider $(\tau,\xi),(\tau^\prime,\xi^\prime)\in\mathcal{T}^b_a$ and assume that $\tau^\prime$ coincides with $\tau$ except on the subinterval $[t_{k-1},t_k]$ for some fixed $k=1,\dots,m$, where it is given by $$t_{k-1}=s_0\le s_1\le \cdots \le s_l=t_k$$ We denote the intermediate points $\xi^\prime$ in $[t_{k-1},t_k]$ by $\zeta_j\in [s_{j-1},s_j]$ for $j=1,\dots,l$. Then \begin{align} \label{PPest1} &\begin{aligned} &\|P(\tau,\xi)-P(\tau^\prime,\xi^\prime)\|\\ &\qquad \le e^{\sum_{j\not=k}|f(\xi_j)|\cdot\|\Delta_j E\|}\|\exp(f(\xi_k)\Delta_k E)-\prod_{j=1}^l \exp(f(\zeta_j)(E(s_j)-E(s_{j-1})))\|\\ &\qquad \le e^{C(b-a)}\|\exp(f(\xi_k)\Delta_k E)-\prod_{j=1}^l \exp(f(\zeta_j)(E(s_j)-E(s_{j-1})))\| \end{aligned} \end{align} Set $A_j=f(\zeta_j)(E(s_j)-E(s_{j-1}))$.
Note that \begin{align} \label{sumAjest} \sum_{j=1}^l \|A_j\|\le C\sum_{j=1}^l (s_j-s_{j-1})=C\Delta_k\tau \end{align} We now use the power series expansion of $\exp$ to see \begin{align} \label{exp2exp} \prod^l_{j=1}\exp(A_j)=I+\sum^l_{j=1} A_j+R \end{align} Apply \eqref{sumAjest} to estimate the remainder term $R$ as follows \begin{align} \|R\|\le \exp\left(\sum^l_{j=1} \|A_j\|\right)-1-\sum_{j=1}^l \|A_j\| \le C^2 (\Delta_k\tau)^2 e^{C(b-a)} \end{align} Similarly we obtain \begin{align} \label{exp1exp} \exp(f(\xi_k)\Delta_k E)=I+f(\xi_k)\Delta_k E+R^\prime \end{align} where $R^\prime$ also satisfies $\|R^\prime\|\le C^2(\Delta_k\tau)^2 e^{C(b-a)}$. Plugging \eqref{exp1exp} and \eqref{exp2exp} into \eqref{PPest1} yields \begin{align} \label{PPest2} \begin{aligned} \|P(\tau,\xi)-P(\tau^\prime,\xi^\prime)\| &\le Le^{C(b-a)}\sum_{j=1}^l |f(\xi_k)-f(\zeta_j)|(s_j-s_{j-1}) + 2C^2(\Delta_k\tau)^2e^{2C(b-a)}\\ &\le Le^{C(b-a)}\osc_{[t_{k-1},t_k]} f\cdot \Delta_k \tau+2C^2(\Delta_k\tau)^2e^{2C(b-a)} \end{aligned} \end{align} where for an interval $I\subseteq[a,b]$, $\osc_I f=\sup\{|f(t)-f(s)|\,:\,s,t\in I\}=\sup_I f - \inf_I f$ denotes the oscillation\index{oscillation} of $f$ on $I$.\\ Now let $(\tau^\prime,\xi^\prime)\in\mathcal{T}^b_a$ be an arbitrary refinement of $(\tau,\xi)$ and write $\tau^\prime=\{s^{(k)}_j\,:\,k=1,\dots,m,\,j=0,\dots,l_k\}$ where $$a=t_0=s^{(1)}_0\le s^{(1)}_1\le \cdots \le s^{(1)}_{l_1}=t_1=s^{(2)}_0\le\cdots\le s^{(m)}_{l_m}=t_m=b$$ Then we apply the above estimate \eqref{PPest2} $m$ times to obtain \begin{align} \label{PPest3} \|P(\tau,\xi)-P(\tau^\prime,\xi^\prime)\| \le Le^{C(b-a)}\sum_{k=1}^m \osc_{[t_{k-1},t_k]} f\cdot \Delta_k \tau+2C^2e^{2C(b-a)}(b-a)\nu(\tau) \end{align} Since $\osc_I f=\sup_I f-\inf_I f$, the Darboux definition of Riemann integrability by upper and lower sums implies that we can make the sum on the right hand side arbitrarily small for small enough $\nu(\tau)$. The claim now follows from Lemma \ref{lemma:cauchycrit}. \end{proof} \begin{prop} \label{prop:mintexist2} Let $f:[a,b]\rightarrow\C$ be continuous and $E:[a,b]\rightarrow M_n$ be a mvf of bounded variation. Then the multiplicative integral $\mint^b_a\exp(f\,dE)$ exists. \end{prop} The proof is very similar to that of the last proposition (see \cite[Appendix \S 1.1]{Potapov}), so we omit it. We do not claim that these existence results are in any way optimal. However, they are sufficient for the purpose of this thesis. \subsection{Properties} In this section we state and prove several important properties for multiplicative integrals. Among them are in particular a formula for the determinant, a change of variables formula and some estimates relating multiplicative integrals to (additive) Riemann-Stieltjes integrals.\\ First let us consider the case when the multiplicative integral reduces to an additive integral. \begin{prop} If $f\in\mathcal{M}_a^b[E]$ and the family of matrices $\{E(t)\,:\,t~\in~[a,b]\}$ commutes, then $$\mint_a^b \exp(f(t)dE(t))=\exp\left(\int_a^b f(t) dE(t)\right)$$ In particular, this is always the case if $n=1$. \end{prop} \begin{proof} This follows from the relation $e^{A+B}=e^A e^B$ which holds if $AB~=~BA$. \end{proof} The next property will be of fundamental importance in later arguments and allows us to decompose multiplicative integrals into products with respect to a decomposition of the interval. \begin{prop} \label{prop:mintsep} Let $a\le b\le c$.
If $f\in\mathcal{M}^c_a[E]$, then $f\in\mathcal{M}^b_a[E]\cap\mathcal{M}^c_b[E]$ and $$\mint_a^c \exp(f\,dE)=\mint_a^b \exp(f\,dE) \mint_b^c \exp(f\,dE)$$ \end{prop} \begin{proof} To a partition of $[a,b]$ or $[b,c]$ we can always add one more point (namely $c$ or $a$, respectively) to make it a partition of $[a,c]$. Therefore Lemma \ref{lemma:cauchycrit} implies $f\in\mathcal{M}^b_a[E]~\cap~\mathcal{M}^c_b[E]$.\\ For brevity let us write $I_a^c, I^b_a, I^c_b$, respectively, for the three multiplicative integrals in the claim. Given $(\tau,\xi)\in\mathcal{T}^c_a$ with $b\in\tau$, we find $(\tau_0,\xi_0)\in\mathcal{T}^b_a$, $(\tau_1,\xi_1)\in\mathcal{T}^c_b$ such that $\nu(\tau_i)\le \nu(\tau)$ for $i=0,1$ and $P(\tau,\xi)=P(\tau_0,\xi_0)P(\tau_1,\xi_1)$. Given $\epsilon>0$, choose $\delta>0$ such that for $(\tau,\xi)\in\mathcal{T}^c_a$ with $\nu(\tau)<\delta$ we have $\|P(\tau,\xi)-I_a^c\|<\epsilon$ and the same holds correspondingly for such partitions of the subintervals $[a,b]$, $[b,c]$. Then, $$\|I_a^c - I_a^b I_b^c\|\le \|I_a^c-P(\tau,\xi)\|+\|I_a^bI_b^c-P(\tau,\xi)\|<\epsilon+\|I_a^bI_b^c-P(\tau_0,\xi_0)P(\tau_1,\xi_1)\|$$ Now applying the identity $xy-zw=\frac{1}{2}\left((x-z)(y+w)+(y-w)(x+z)\right)$, we can estimate the remaining term as $\|I_a^bI_b^c-P(\tau_0,\xi_0)P(\tau_1,\xi_1)\|\le C\epsilon$, where $C>0$ is an appropriate constant. Letting $\epsilon\rightarrow 0$ we obtain $I_a^c=I_a^bI_b^c$. \end{proof} \begin{prop} \label{prop:mintconst} Let $A$ be a constant matrix. Then $$\mint_a^b \exp(A\,d(tI)) = \exp((b-a)A)$$ \end{prop} \begin{proof} Let $(\tau,\xi)\in\mathcal{T}^b_a$. Then $$P(\tau,\xi)=\prod^m_{j=1}\exp(A\Delta_j\tau)=\exp(A(b-a))$$ \end{proof} \begin{example} Let $A=\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}$. Then $$\mint_0^1 \exp(A d(tI))=\exp(A)=\begin{pmatrix}\cosh(1) & \sinh(1)\\ \sinh(1) & \cosh(1)\end{pmatrix}$$ \end{example} Unfortunately, there is no general formula for integrating a constant with respect to a general integrator. Typically we have \begin{align} \label{eqn:neqmintconst} \mint_a^b \exp(A\,dE(t))\not=\exp(A\cdot (E(b)-E(a))) \end{align} \begin{example} Let $A$ be as in the last example and $B=\mathrm{diag}(1,2)$. Then $AB~\not=~BA$. Set $E(t)=tA$ for $t\in[-1,0)$ and $E(t)=tB$ for $t\in [0, 1]$. Then $$\mint_{-1}^1 \exp(dE(t))=e^A\cdot e^B\not= e^{A+B}=e^{E(1)-E(-1)}$$ \end{example} Of course, equality holds in \eqref{eqn:neqmintconst} if the family of matrices $\{A\}\cup\{E(t)\,:\,t\in[a,b]\}$ commutes. \begin{prop}[Determinant formula]\index{determinant formula} \label{prop:mintdet} For $f\in\mathcal{M}^b_a[E]$ we have $$\det\mint_a^b \exp(f(t)\,dE(t))=\exp\left(\int_a^b f(t)\,d\tr E(t)\right)$$ In particular, multiplicative integrals always yield invertible matrices. \end{prop} \begin{proof} For $(\tau,\xi)\in\mathcal{T}^b_a$ we have $$\det P(\tau,\xi)=\prod^m_{j=1} \det\exp(f(\xi_j)\Delta_j E) = \exp\left(\sum^m_{j=1} f(\xi_j) \Delta_j (\tr E)\right)$$ The claim now follows by continuity of the determinant. \end{proof} \begin{prop} \label{prop:mintunitaryconst} Let $f\in\mathcal{M}^b_a[E]$ and $U$ a constant invertible matrix. Then $$U\mint^b_a \exp(f\,dE)U^{-1}=\mint^b_a\exp(f\,d(UEU^{-1}))$$ \end{prop} \begin{proof} This follows from $Ue^AU^{-1}=e^{UAU^{-1}}$. \end{proof} The next fact will be useful to determine when a multiplicative integral is unitary. \begin{prop} \label{prop:mintgram} Suppose $E$ is Hermitian and $f\in\mathcal{M}^b_a[E]$.
Let $$A=\mint_a^b \exp(f\,dE)$$ Then $$AA^*=\mint_a^b \exp(2\Re f\,dE)$$ \end{prop} \begin{proof} Let $(\tau,\xi)\in\mathcal{T}^b_a$. We have $$\left(\prodr_{k=1}^m \exp(f(\xi_k)\Delta_k E)\right)^*=\prodl_{k=1}^m \exp(\overline{f(\xi_k)} \Delta_k E)$$ Further $$\left(\prodr_{k=1}^m \exp(f(\xi_k)\,\Delta_k E)\right)\left(\prodl_{k=1}^m \exp(\overline{f(\xi_k)}\,\Delta_k E)\right)=\prodr_{k=1}^m \exp((f(\xi_k)+\overline{f(\xi_k)})\,\Delta_k E)$$ Letting $\nu(\tau)\rightarrow 0$ gives the claim. \end{proof} Now we prove two important estimates for multiplicative integrals in terms of additive Riemann-Stieltjes integrals. \begin{prop} \label{prop:mintest} Assume that $f\in\mathcal{M}^b_a[E]$, $E\in\BV([a,b]; M_n)$ and $|f|$ is Stieltjes integrable with respect to $|E|$. Then $$ \left\|\mint_a^b \exp(f(t)\,dE(t))\right\|\le \exp\left(\int^b_a |f(t)| d|E|(t)\right)$$ \end{prop} \begin{proof} Choose $(\tau,\xi)\in\mathcal{T}^b_a$. Then $$\|P(\tau,\xi)\|\le \prod_{j=1}^m \exp(|f(\xi_j)| \|\Delta_j E\|)=\exp\left(\sum_{j=1}^m |f(\xi_j)| \|\Delta_j E\|\right).$$ Also, we have $\|\Delta_j E\|\le \var_{[t_{j-1},t_j]} E=|E|(t_j)-|E|(t_{j-1})=\Delta_j |E|$. Letting $\nu(\tau)\rightarrow 0$ we obtain the claim. \end{proof} \begin{prop}\index{multiplicative integral estimate} \label{prop:minttaylorest} Let $f\in\mathcal{M}^b_a[E]$, $E\in\BV([a,b]; M_n)$ and $|f|$ be Stieltjes integrable with respect to $|E|$. Then $$ \mint^b_a \exp(f(t)\,dE(t))=I+\int_a^b f(t)\,dE(t)+R $$ where $R$ is a matrix that satisfies \begin{align} \label{eqn:minttayloresterr} \|R\|\le\sum_{\nu=2}^\infty \frac{1}{\nu!}\left(\int^b_a |f(t)|\,d|E|(t)\right)^\nu \end{align} If $\int_a^b |f| d|E|\le 1$, then there exists $0<C<1$ such that $$\|R\|\le C\left(\int^b_a |f(t)|\,d|E|(t)\right)^2$$ \end{prop} \begin{proof} Let $(\tau,\xi)\in\mathcal{T}^b_a$. We expand the product $P(\tau,\xi)$ using the exponential series: $$P(\tau,\xi)=\prod_{j=1}^m \sum_{\nu=0}^\infty \frac{(f(\xi_j)\Delta_j E)^\nu}{\nu!}=I+\sum_{j=1}^m f(\xi_j)\Delta_j E+R$$ where the remainder term $R=R(\tau,\xi)$ is of the form $R=\sum_{\nu=2}^\infty T_\nu$ with the terms $T_\nu$ of order $\nu$ being $$T_\nu=\sum_{\nu_1+\cdots+\nu_m=\nu}\prodr_{j=1}^m \frac{(f(\xi_j)\Delta_j E)^{\nu_j}}{\nu_j!}$$ We use the triangle inequality and $\|\Delta_j E\|\le \Delta_j |E|$ to estimate \begin{align*} \|T_\nu\|&\le \sum_{\nu_1+\cdots+\nu_m=\nu} \prod_{j=1}^m \frac{(|f(\xi_j)|\cdot \Delta_j |E|)^{\nu_j}}{\nu_j!}\\ &=\frac{1}{\nu!}\sum_{\nu_1+\cdots+\nu_m=\nu}{\nu\choose \nu_1,\cdots,\nu_m}\prod_{j=1}^m (|f(\xi_j)|\cdot\Delta_j |E|)^{\nu_j}\\ &=\frac{1}{\nu!}\left(\sum_{j=1}^m |f(\xi_j)| \cdot\Delta_j |E|\right)^\nu \end{align*} Letting $\nu(\tau)\rightarrow 0$ we obtain the first part of the claim.\\ If now $\int_a^b |f| d|E|\le 1$, then $$\|R\|\le C\left(\int^b_a |f(t)|\,d|E|(t)\right)^2$$ holds with $C=\sum_{\nu=2}^\infty \frac{1}{\nu!}=e-2$. \end{proof} Lastly, we give a change of variables formula for multiplicative integrals. Note that we allow the variable transformation to have jump discontinuities. \begin{prop}[Change of variables\index{Change of variables}] If $\varphi:[a,b]\rightarrow[\alpha,\beta]$ is a strictly increasing function with $\varphi(a)=\alpha$ and $\varphi(b)=\beta$, $f$ continuous on $[\alpha,\beta]$ and $E$ a continuous increasing mvf on $[a,b]$, then $$ \mint^b_a \exp(f(\varphi(t))\,dE(t))=\mint^\beta_\alpha \exp(f(s)\,dE(\varphi^\dagger(s))) $$ assuming that the integrals exist.
\end{prop} Here, $\varphi^\dagger$ is the \defin{generalized inverse}\index{generalized inverse} of $\varphi$ given by $\varphi^\dagger(s)=\inf\{t\in [a,b]\,:\,\varphi(t)\ge s\}$, which gives a natural notion of inverse for a general increasing function. For strictly increasing continuous functions we have $\varphi^\dagger=\varphi^{-1}$. Jump discontinuities of $\varphi$ translate into intervals of constancy of $\varphi^\dagger$ and intervals of constancy of $\varphi$ become jump discontinuities of $\varphi^\dagger$. The generalized inverse is normalized in the sense that it is left-continuous. \begin{proof} Set $g=f\circ\varphi$ and $F=E\circ\varphi^\dagger$. Given a tagged partition $(\tau, \xi)\in \mathcal{T}_{a}^{b}$ on $[a,b]$, we can apply $\varphi$ to produce a tagged partition of $[\alpha,\beta]$ given by $(\tau^\prime, \xi^\prime)=(\varphi(\tau), \varphi(\xi))\in \mathcal{T}_\alpha^\beta$. This mapping and its inverse respect the partial order on the set of tagged partitions. That is, refinements of $(\tau,\xi)$ map to refinements of $(\tau^\prime,\xi^\prime)$ and vice versa. Now it only remains to notice that $$P(f,F,\tau^\prime,\xi^\prime)=\prod_{i=1}^m \exp(f(\xi_i^\prime)\Delta^{\tau^\prime}_i F) =\prod_{i=1}^m \exp(g(\xi_i)\Delta^{\tau}_i E)=P(g, E, \tau,\xi)$$ and similarly the other way around. \end{proof} As opposed to the additive case, this formula does not extend to increasing functions which have intervals of constancy. This corresponds to the difficulties in computing the multiplicative integral of constants. \subsection{The multiplicative Lebesgue integral} The additive Lebesgue integral can be written as a Riemann-Stieltjes integral. We use this observation to define a multiplicative version of the Lebesgue integral and show that it behaves as expected. In particular, there is a Lebesgue differentiation theorem. Let us first recall some definitions. We denote the Lebesgue measure on the real line by $\lambda$. We say that a mvf $A$ on $[a,b]$ is \defin{Lebesgue integrable}\index{Lebesgue integrable} if $$\int_a^b \|A(t)\|d\lambda(t)<\infty$$ In that case we write $A\in L^1([a,b]; M_n)$. As usual, we identify two such functions if they differ only on a set of measure zero. That is, $L^1$ functions are strictly speaking equivalence classes of functions. A mvf $E$ on $[a,b]$ is \defin{absolutely continuous}\index{absolutely continuous} if for all $\epsilon>0$ there exists $\delta>0$ such that for every finite collection of pairwise disjoint subintervals $(a_k,b_k)\subseteq[a,b]$, $k=1,\dots,m$, with $\sum_{k=1}^m (b_k-a_k)<\delta$ we have $$\sum_{k=1}^m \|E(b_k)-E(a_k)\|<\epsilon$$ As in the scalar case, absolutely continuous mvfs are differentiable almost everywhere with integrable derivative. In particular, they are of bounded variation. \begin{definition}[Multiplicative Lebesgue integral] Let $A\in L^1([a,b]; M_n)$. Define $$E(t)=\int^t_a A(s)\,d\lambda(s)$$ for $t\in[a,b]$. Then $E$ is absolutely continuous. We define the expression $$\mint^b_a \exp(A(t)\,dt)=\mint^b_a \exp(dE(t))$$ to be the \defin{multiplicative Lebesgue integral}\index{multiplicative Lebesgue integral} of $A$. It is well-defined and also exists by Proposition \ref{prop:mintexist2}. \end{definition} As for ordinary Riemann-Stieltjes integrals, multiplicative integrals with an absolutely continuous integrator can be rewritten as multiplicative Lebesgue integrals.
\begin{prop} If $E$ is an absolutely continuous, increasing mvf and $f$ is a continuous scalar function, then $$\mint_a^b \exp(f(t)dE(t))=\mint_a^b \exp(f(t)E^\prime(t)dt)$$ \end{prop} \begin{proof} As for scalar functions, $E$ is differentiable almost everywhere and the derivative $E^\prime$ is in $L^1([a,b]; M_n)$. Setting $F(t)=\int_a^t f(s)E^\prime(s)d\lambda(s)$, the claim is equivalent to \begin{align} \label{eqn:lebesguepf1} \mint_a^b\exp(dF(t))=\mint_a^b \exp(f(t)dE(t)) \end{align} Pick a partition $\tau=\{a=t_0<\dots<t_m=b\}$. Apply the mean value theorem for integrals to choose $\xi_j\in[t_{j-1},t_j]$ such that $f(\xi_j)\int_{t_{j-1}}^{t_j} E^\prime(s)d\lambda(s)=\int_{t_{j-1}}^{t_j} f(s)E^\prime(s)d\lambda(s)$. This is possible because $f$ is continuous. Then \begin{align*} \prod_{j=1}^m \exp(\Delta_j F)&=\prod_{j=1}^m \exp\left(\int_{t_{j-1}}^{t_j} f(s)E^\prime(s)d\lambda(s)\right)\\ &=\prod_{j=1}^m \exp\left(f(\xi_j)\int_{t_{j-1}}^{t_j}E^\prime(s)d\lambda(s)\right)\\ &=\prod_{j=1}^m \exp(f(\xi_j)\Delta_j E) \end{align*} In the limit $\nu(\tau)\rightarrow 0$, this implies \eqref{eqn:lebesguepf1}. \end{proof} Multiplicative integrals can also be viewed as solutions of certain ordinary differential equations\index{ordinary differential equation}, as we will see from the next proposition, which we can also view as a multiplicative version of the classical Lebesgue differentiation theorem\index{Lebesgue differentiation theorem}. \begin{prop} Let $A\in L^1([a,b]; M_n)$. Then the function $$F(x)=\mint_a^x \exp(A(t)dt)$$ is differentiable almost everywhere in $(a,b)$ and \begin{align} \label{mintode} \frac{dF}{dx}(x)=F(x)A(x) \end{align} for almost every $x\in (a,b)$. \end{prop} \begin{remark} The uniqueness and existence theory for ordinary differential equations shows that we could have also \emph{defined} the multiplicative integral $\mint_a^x \exp(A(t)dt)$ as the unique solution of the Cauchy problem\index{Cauchy problem} given by \eqref{mintode} and the initial value condition $F(a)=I_n$. \end{remark} \begin{proof} By Propositions \ref{prop:mintsep} and \ref{prop:minttaylorest} we have for $x\in(a,b)$ and $h>0$ small enough: $$F(x+h)=F(x)\mint_x^{x+h} \exp(A(t)dt)=F(x)(I+\int_x^{x+h} A(t)dt+R)$$ where $\|R\|\le C\left(\int_x^{x+h} \|A(t)\|dt\right)^2$. Therefore $$\frac{1}{h}(F(x+h)-F(x))=F(x)\left(\frac{1}{h}\int_x^{x+h} A(t)dt+\frac{R}{h}\right)$$ A similar calculation works for $h<0$. By the Lebesgue differentiation theorem, $$\lim_{h\rightarrow 0}\frac{1}{h}\int_x^{x+h} A(t)dt=A(x)$$ for almost every $x\in(a,b)$. Also, $$\frac{\|R\|}{h}\le Ch\left(\frac{1}{h}\int_x^{x+h}\|A(t)\|dt\right)^2\longrightarrow 0$$ as $h\rightarrow 0$ for almost every $x\in (a,b)$. The claim follows. \end{proof} \subsection{Helly's theorems} \label{hellythm} Our theory of multiplicative integrals is still missing a convergence theorem. The aim of this section is to fill in this gap. The convergence theorem we will obtain is an analogue of Helly's convergence theorem for scalar Riemann-Stieltjes integrals (see Theorem \ref{thm:hellyconvsc} in the appendix). First, we prove two auxiliary statements. A useful trick when estimating differences of products is the following.
\begin{lemma}[Telescoping identity] \label{lemma:telescoping}\index{telescoping identity} For matrices $Q_1,\dots,Q_m,P_1,\dots,P_m$ we have\begin{align} \label{eqn:telescoping} \prod^m_{\nu=1} P_\nu - \prod^m_{\nu=1} Q_\nu =\sum_{l=1}^m \left(\prod^{l-1}_{\nu=1} P_\nu\right) (P_l-Q_l) \left(\prod^m_{\nu=l+1} Q_\nu\right) \end{align} \end{lemma} \begin{proof} We do an induction on $m$. For $m=1$ there is nothing to show. Setting $A=\prod_{\nu=1}^{m-1} P_\nu$, $B=P_{m}$, $C=\prod_{\nu=1}^{m-1} Q_\nu$ and $D=Q_{m}$ we have $$\prod_{\nu=1}^{m} P_\nu-\prod_{\nu=1}^{m} Q_\nu=AB-CD=(A-C)D+A(B-D)$$ By the induction hypothesis, the right hand side is equal to $$\sum_{l=1}^{m} \left(\prod^{l-1}_{\nu=1} P_\nu\right) (P_l-Q_l) \left(\prod^{m}_{\nu=l+1} Q_\nu\right)$$ \end{proof} The following is a simple estimate for the additive matrix-valued Riemann-Stieltjes integral. \begin{lemma}\index{Stieltjes integral estimate} \label{lemma:addstieltjesest} If $E$ is an increasing mvf on $[a,b]$ and $f$ a bounded scalar function which is Stieltjes integrable with respect to $E$, then $$\left\|\int_a^b f\,dE\right\|\le C \|E(b)-E(a)\|$$ for some constant $C>0$. \end{lemma} \begin{proof} For $n=1$ the statement is true. Choose $C>0$ such that $|f(t)|\le C$ for $t\in[a,b]$. Let $v\in\C^n$. Since $v^*Ev$ is an increasing scalar function, \begin{align*} \left|v^*\left(\int_a^b f\,dE\right)v\right| &=\left|\int_a^b f(t) d(v^*E(t)v)\right|\le \int_a^b |f(t)|d(v^*E(t)v)\\ &\le C (v^* (E(b)-E(a))v) \end{align*} Taking the supremum over all $v$ with $\|v\|=1$ we obtain the claim. \end{proof} \begin{thm}[Helly's convergence theorem for multiplicative integrals]\index{Helly's convergence theorem}\index{Helly's second theorem} \label{thm:hellyconvmvf} Let $(E_k)_k$ be a sequence of increasing and uniformly Lipschitz continuous mvfs on $[a,b]$ which converge pointwise to the function $E$ on $[a,b]$. Suppose also that $(f_k)_k$ is a sequence of uniformly bounded Riemann integrable scalar functions on $[a,b]$ which converges pointwise to the Riemann integrable function $f$. Then \begin{align*} \lim_{k\rightarrow\infty} \mint^b_a \exp(f_k\,dE_k)=\mint^b_a \exp(f\,dE) \end{align*} \end{thm} The prerequisites in this theorem are of course not the mildest possible to arrive at the desired conclusion, but they allow a relatively easy proof and are sufficient for our applications. \begin{proof} Let us assume without loss of generality that $[a,b]=[0,1]$ and that the uniform Lipschitz constant for all the $E_k$ is $1$. Then, since $\|E_k(t)-E_k(s)\|\le |t-s|$ for all $k$, also the pointwise limit $E$ is Lipschitz continuous with Lipschitz constant $1$. In particular $f\in\mathcal{M}^1_0[E]$. Choose a constant $K>0$ such that $|f_k(t)|\le K$ and $|f(t)|\le K$ for all $t$ and $k$. Now we begin by estimating $$\left\| \mint_0^1 e^{f\,dE}-\mint_0^1 e^{f_k\,dE_k}\right\|\le \left\|\mint_0^1 e^{f\,dE}- \mint_0^1 e^{f\,dE_k}\right\|+\left\|\mint_0^1 e^{f\,dE_k}-\mint_0^1 e^{f_k\,dE_k} \right\|$$ The differences on the right hand side we name $d_1$ and $d_2$, respectively. Let us first estimate $d_1$. By Proposition \ref{prop:mintest} we have \begin{align} \label{eqn:hellypf0} \left\|\mint_a^b \exp(f \,dE_k)\right\|\le \exp\left(\int_a^b |f(t)| dt\right)\le M_0 \end{align} for every $0\le a\le b\le 1$ and $k\in\N\cup\{\infty\}$, where we set $E_\infty=E$ and $M_0>0$ is a constant, not depending on $a,b$. Now let us choose a partition $\tau$ of $[0,1]$ by setting $t_i=\frac{i}{m}$ for $i=0,\dots,m$. 
Also set $I_i=[t_{i-1},t_i]$ for $i=1,\dots,m$. We apply the telescoping identity \eqref{eqn:telescoping}\index{telescoping identity} and the previous estimate \eqref{eqn:hellypf0} to see \begin{align} \label{eqn:hellypf1} d_1=\left\|\prod_{i=1}^m\mint_{t_{i-1}}^{t_i} e^{f\,dE}-\prod_{i=1}^m\mint_{t_{i-1}}^{t_i} e^{f\,dE_k}\right\| \le M\sum_{i=1}^m \left\|\mint_{t_{i-1}}^{t_i} e^{f\,dE} -\mint_{t_{i-1}}^{t_i} e^{f\,dE_k}\right\| \end{align} where $M=M_0^2$. For large enough $m$ we have $\int_{t_{i-1}}^{t_i} |f(t)| dt\le 1$. Then Proposition \ref{prop:minttaylorest} applied to the multiplicative integrals on the right hand side of \eqref{eqn:hellypf1} gives \begin{align} \label{eqn:hellypf2} d_1 &\le M\sum_{i=1}^m \left\|\int_{t_{i-1}}^{t_i} f dE - \int_{t_{i-1}}^{t_i} f dE_k\right\|+2C\left(\int_{t_{i-1}}^{t_i} |f(t)| dt\right)^2 \end{align} To see this, note that the Lipschitz condition on $E_k$ implies $|E_k|(t)-|E_k|(s)\le t-s$ for $t\ge s$ and $k\in\N\cup\{\infty\}$. We estimate further \begin{align} \left\|\int_{t_{i-1}}^{t_i} f dE - \int_{t_{i-1}}^{t_i} f dE_k\right\| &\le \left\|\int_{t_{i-1}}^{t_i} f dE-f(i/m)\Delta_i E\right\|+ \left\|f(i/m)\Delta_i E - f(i/m)\Delta_i E_k\right\|\nonumber\\ \label{eqn:hellypf3} &+\left\|f(i/m)\Delta_i E_k-\int_{t_{i-1}}^{t_i} f dE_k\right\| \end{align} Using Lemma \ref{lemma:addstieltjesest}, we get \begin{align} \label{eqn:hellypf4} \left\|\int_{t_{i-1}}^{t_i} f dE-f(i/m)\Delta_i E\right\|&=\left\|\int_{t_{i-1}}^{t_i}(f(t)-f(i/m))dE(t)\right\|\\ &\le (\osc_{I_i} f)\|\Delta_i E\|\le \frac{\osc_{I_i} f}{m} \end{align} By the same argument also \begin{align} \label{eqn:hellypf5} \left\|f(i/m)\Delta_i E_k-\int_{t_{i-1}}^{t_i} f dE_k\right\|\le \frac{\osc_{I_i} f}{m} \end{align} Combining \eqref{eqn:hellypf3}, \eqref{eqn:hellypf4} and \eqref{eqn:hellypf5}, we see from \eqref{eqn:hellypf2}, that \begin{align} \label{eqn:hellypf6} d_1\le MK\sum_{i=1}^m \|\Delta_i E-\Delta_i E_k\|+2M\sum_{i=1}^m (\osc_{I_i} f)\cdot \frac{1}{m}+\frac{2CK^2}{m^2} \end{align} Let $\epsilon>0$. Since $f$ is Riemann integrable, we can choose $m$ large enough such that \begin{align} \label{eqn:hellypf7} 2M\sum_{i=1}^m (\osc_{I_i} f)\cdot \frac{1}{m}+\frac{2CK^2}{m^2}\le\frac{\epsilon}{3} \end{align} With $m$ fixed like that we now use the pointwise convergence of $E_k$ and make $k$ large enough such that \begin{align} \label{eqn:hellypf8} MK\sum_{i=1}^m \|\Delta_i E-\Delta_i E_k\|\le\frac{\epsilon}{6} \end{align} This is possible, since $E-E_k$ is only being evaluated at finitely many points. Whenever we make $m$ even larger later on during the estimate of $d_2$, we silently also increase $k$ accordingly such that \eqref{eqn:hellypf8} holds. Combining \eqref{eqn:hellypf6},\eqref{eqn:hellypf7} and \eqref{eqn:hellypf8}, we get $d_1\le\frac{\epsilon}{2}$.\\ Let us now estimate $d_2$. 
Similarly as for $d_1$, the telescoping identity and Proposition \ref{prop:minttaylorest} imply \begin{align} d_2 &\le M\sum_{i=1}^m \left(\left\|\int_{t_{i-1}}^{t_i} f\,dE_k-\int_{t_{i-1}}^{t_i} f_k\,dE_k\right\| +C\left(\int_{t_{i-1}}^{t_i} |f(t)|dt\right)^2+C\left(\int_{t_{i-1}}^{t_i} |f_k(t)|dt\right)^2\right)\nonumber\\ &\le M\sum_{i=1}^m \int_{t_{i-1}}^{t_i} |f(t)-f_k(t)| dt+\frac{2MCK^2}{m}\nonumber\\ \label{eqn:hellypf9} &=M\int_0^1 |f(t)-f_k(t)|dt+\frac{2MCK^2}{m} \end{align} Note that $f,f_k$ are measurable, therefore we may use Egorov's theorem\index{Egorov's theorem} to conclude that for every $\delta>0$, there exists a measurable set $Q\subset[0,1]$ such that $\lambda([0,1]\backslash Q)~\le~\delta$, where $\lambda$ denotes the Lebesgue measure on $[0,1]$, and $f_k\rightarrow f$ uniformly on $Q$. Let us choose $\delta=\frac{\epsilon}{12MK}$ and make $k$ large enough such that $|f(t)-f_k(t)|\le\frac{\epsilon}{6M}$ for all $t\in Q$. Then \begin{align} M\int_0^1 |f(t)-f_k(t)| dt &= M\int_Q |f-f_k| d\lambda+M\int_{[0,1]\backslash Q} |f-f_k| d\lambda\nonumber\\ \label{eqn:hellypf10} &\le M\left(\frac{\epsilon}{6M}+2K\delta\right)=\frac{\epsilon}{3} \end{align} Note that we have reinterpreted the Riemann-Stieltjes integral as a Lebesgue integral. Now \eqref{eqn:hellypf9} and \eqref{eqn:hellypf10} imply $d_2 \le \frac{\epsilon}{2}$ for large enough $m$. Altogether we proved that $$\left\| \mint_0^1 e^{f\,dE}-\mint_0^1 e^{f_k\,dE_k}\right\|\le d_1+d_2\le \epsilon$$ for sufficiently large $k$. Since $\epsilon$ was arbitrary, the claim follows. \end{proof} We will now also prove an analogue of Helly's selection theorem (see Theorem \ref{thm:hellyselsc} in the appendix). It is not directly related to multiplicative integrals, but it is a natural addendum to the previous theorem and important for the proof of Potapov's theorem in Section \ref{sect:contractivemvfs}. \begin{thm}[Helly's selection theorem for matrix-valued functions]\index{Helly's selection theorem}\index{Helly's first theorem} \label{thm:hellyselmvf} Let $(E^{(k)})_k$ be a uniformly bounded sequence of increasing mvfs on $[a,b]$. Then there exists a subsequence $(E^{(k_j)})_j$ such that $E^{(k_j)}$ converges pointwise to an increasing mvf $E$ on $[a,b]$. \end{thm} \begin{proof} Choose $C>0$ such that $\|E^{(k)}(t)\|\le C$ for all $k$ and $t\in[a,b]$. We claim that for all $i,j$, the entries $E_{ij}^{(k)}$ form a sequence of BV-functions with uniformly bounded total variation. Let $\tau=\{a=t_0\le t_1\le \cdots\le t_m=b\}$ be a partition of $[a,b]$. Then we estimate \begin{align} \label{eqn:hellyselmvfpf1} |\Delta_{l} E_{ij}^{(k)}|\le \|\Delta_l E^{(k)}\|\le \tr \Delta_l E^{(k)}=\Delta_l \tr E^{(k)} \end{align} for $l=1,2,\dots,m$. In the second inequality we have used positivity of $\Delta_l E^{(k)}$. Therefore $$\var_{[a,b]}^\tau E_{ij}^{(k)}=\sum_{l=1}^m |\Delta_{l} E_{ij}^{(k)}|\le \sum_{l=1}^m \Delta_l \tr E^{(k)}=\tr (E^{(k)}(b)- E^{(k)}(a))$$ Using the estimate $\tr A=\sum_{i=1}^n A_{ii}\le n\|A\|$ for $A\ge 0$ we see that $$\var_{[a,b]} E_{ij}^{(k)}\le 2nC$$ Hence we can repeatedly apply Helly's scalar selection Theorem \ref{thm:hellyselsc} (to the real and imaginary parts of the entries) to find a subsequence such that all the entries $E^{(k_j)}_{ij}$ converge pointwise to BV-functions $E_{ij}$. The resulting mvf $E=(E_{ij})_{ij}$ is also increasing as a pointwise limit of increasing mvfs. \end{proof} Both theorems were given by Potapov in \cite[Appendix \S 2]{Potapov}, but with a slightly different proof for the selection theorem.
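As a simple sanity check for Theorem \ref{thm:hellyconvmvf}, consider the scalar case $n=1$, in which the multiplicative integral is just the exponential of the corresponding Riemann-Stieltjes integral.
\begin{example}
Take $[a,b]=[0,1]$, $E_k(t)=E(t)=t$ and $f_k(t)=t^k$ for $k\ge 1$. The functions $E_k$ are increasing and Lipschitz continuous with constant $1$, while the $f_k$ are Riemann integrable, satisfy $|f_k|\le 1$ and converge pointwise to the Riemann integrable function $f$ given by $f(t)=0$ for $t<1$ and $f(1)=1$. In this commutative situation
$$\mint^1_0 \exp(f_k\,dE_k)=\exp\left(\int_0^1 t^k\,dt\right)=e^{\frac{1}{k+1}}\longrightarrow 1=\exp\left(\int_0^1 f(t)\,dt\right)=\mint^1_0 \exp(f\,dE)$$
as $k\rightarrow\infty$, in accordance with the theorem.
\end{example}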
\section{Contractive analytic mvfs} \label{sect:contractivemvfs} By an \defin{analytic mvf}\index{analytic mvf} on the unit disk $\D\subset\C$ we mean a function $A:\D\rightarrow M_n$ all components of which are holomorphic throughout $\D$. A mvf $A:\D\rightarrow M_n$ is called \defin{contractive}\index{contractive mvf} if $$A(z)A^*(z)\le I$$ for all $z\in\D$, i.e. the Hermitian matrix $I-A(z)A^*(z)$ is positive semidefinite. An equivalent condition is that $\|A(z)\|\le 1$ for all $z\in\D$. This and some other basic facts are proven in Appendix \ref{sect:background}. We say that $A$ is \defin{bounded}\index{bounded mvf} if $$\|A\|_\infty=\sup_{z\in\D}\|A(z)\|<\infty$$ We denote the space of bounded analytic matrix functions on $\D$ whose determinant does not vanish identically by $\H^\infty$\index{$\H^\infty$}. The subset of those mvfs in $\H^\infty$ which are also contractive on $\D$ will be denoted by $\Sch\subset\H^\infty$\index{$\Sch$}\index{Schur class}, where the letter $\Sch$ is chosen in honour of I. Schur, who studied this class of functions in the case $n=1$. The goal of this section is to prove a theorem of V.P. Potapov on the multiplicative structure of functions in the class $\Sch$. In his paper \cite{Potapov}, Potapov considered the more general case of $J$-contractive mvfs\index{$J$-contractive mvf}, however we only require the case $J=I$. \begin{definition} Let \begin{align} \label{betadef} \beta_{z_0}(z)=\left\{\begin{array}{cc} \frac{z_0-z}{1-\overline{z_0}z}\frac{|z_0|}{z_0}, & \text{if}\,z_0\not=0\\ z, & \text{if}\,z_0=0 \end{array}\right. \end{align} A \defin{Blaschke-Potapov factor} (B.P. factor)\index{Blaschke-Potapov factor}\index{B.P. factor} is a function $b:\D\rightarrow M_n$ such that \begin{align} \label{BPfact1} b(z)=U\left(\begin{array}{cc}\beta_{z_0}(z) I_r & 0\\0 & I_{n-r}\end{array}\right)U^* \end{align} for all $z\in\D$, where $U$ is a constant unitary matrix, $0<r\le n$ and $z_0\in\D$. We call $r$ the \defin{rank} of the B.P. factor. A \defin{Blaschke-Potapov product} (B.P. product)\index{Blaschke-Potapov product}\index{B.P. product} is a possibly infinite product of B.P. factors, which may be multiplied from the right or left by a constant unitary matrix. By convention, a unitary constant alone is also considered a B.P. product. \end{definition} Equivalently, we can define a B.P. factor by \begin{align} \label{BPfact2} b(z)=I-P+\beta_{z_0}(z)P \end{align} where $P\in M_n$ is an orthogonal projection and $z_0\in\D$. The connection to \eqref{BPfact1} is that $P$ projects onto the $r$-dimensional image of $I-b(0)$, i.e. $P=U\left(\begin{array}{cc}I_r & 0\\0 & 0\end{array}\right)U^*$. This shows that B.P. factors are uniquely determined by the zero $z_0$ and the choice of the subspace that $P$ projects onto. The condition for the convergence of a B.P. product turns out to be the same condition as for the scalar analogue (see Theorem \ref{bpconv}). Whenever in the following we speak of a B.P. product as a function, it is understood implicitly that the B.P. product really is convergent in the sense defined later in this section. Let us denote the Herglotz kernel\index{Herglotz kernel} by $$h_z(\theta)=\frac{z+e^{i\theta}}{z-e^{i\theta}}$$ for $z\in\D$ and $\theta\in[0,2\pi]$. \begin{thm}[Potapov]\index{Potapov's fundamental theorem} \label{thm:potapov} Let $A\in\Sch$. Then $A$ can be written as \begin{align} \label{eqn:potapovrepr} A(z)=B(z)\cdot \mint^L_0 \exp\left(h_z(\theta(t))\,dE(t)\right) \end{align} Here, $B$ is a B.P.
product corresponding to the zeros of $\det A$, $0\le L<\infty$, $E$ an increasing mvf such that $\tr E(t)=t$ for $t\in[0,L]$ and $\theta:[0,L]\rightarrow [0,2\pi]$ a right-continuous increasing function. \end{thm} The proof of the theorem will proceed in two steps. First we detach a maximal Blaschke-Potapov product to obtain a mvf with non-vanishing determinant. In the second step we use an approximation by rational functions and Helly's theorems to obtain the desired multiplicative integral representation. \subsection{Blaschke-Potapov products} In this section we discuss the convergence of B.P. products and prove a factorization of an arbitrary function in $\H^\infty$ into a maximal Blaschke-Potapov product and a function with non-vanishing determinant. By a \defin{zero}\index{zero of a mvf} of a function $A\in\Sch$, we always mean a zero of $\det A$, i.e. a point at which $A$ becomes singular. If $z$ is a zero of $A$, then the dimension of $\ker A$ is called the \defin{rank}\index{rank of a zero} of the zero. \subsubsection{Convergence of B.P. products} A scalar Blaschke product $b(z)=z^m\prod^\infty_{i=1}\frac{z_i-z}{1-\overline{z_i}z}\frac{|z_i|}{z_i}$ converges if and only if the \defin{Blaschke condition}\index{Blaschke condition} $$\sum_{i\ge 1} (1-|z_i|)<\infty$$ is fulfilled (see \cite[Chapter II.2]{Garnett}). We will prove that the same is true for B.P. products. It is clear that the Blaschke condition is necessary. For, if $A\in\H^\infty$, then $\det A$ is a scalar bounded analytic function, whence it follows that its zeros satisfy the Blaschke condition. We will now prove the converse by adapting the corresponding scalar proof as presented in \cite{Rudin2}. Let us call an infinite product $P=\prod^\infty_{i=1} B_i$ of matrices $B_i$ \defin{convergent}\index{convergent product} if the sequence given by $P_k=\prod^k_{i=1}B_i=B_1\cdot B_2\cdots B_k$ converges in the $\|\cdot\|$ norm. The limit is then denoted by $\prod^\infty_{i=1}B_i$. The notion of uniform convergence for an infinite product of matrix-valued functions is defined accordingly. \begin{lemma}\label{estimatelemma} Let $(B_i)_{i=1,\dots,k}$ be given matrices. Then $$\|I-\prod_{i=1}^k B_i\|\le \prod^k_{i=1}\left(1+\|I-B_i\|\right)-1$$ \end{lemma} \begin{proof} We do an induction on $k$. For $k=1$ there is nothing to show. Writing $A_k=\prod_{i=1}^k B_i$ and $a_k=\prod_{i=1}^k (1+\|I-B_i\|)$ we see that $$A_{k+1}-I=(A_k-I)(I+(B_{k+1}-I))+B_{k+1}-I$$ and thus $$\left\|I-A_{k+1}\right\| \le (a_k-1)(1+\|I-B_{k+1}\|)+\|I-B_{k+1}\|=a_{k+1}-1$$ by the induction hypothesis. \end{proof} \begin{thm}\label{thm:prodconv}Let $(B_i)_i$ be matrix-valued functions on $\D$ such that \begin{enumerate} \item[(a)] $\sum_{i\ge 1}\|I-B_i(z)\|$ converges uniformly on compact subsets of $\D$ and \item[(b)] the sequence $P_k(z)=\prod^k_{i=1}B_i(z)$ is uniformly bounded on $\D$. \end{enumerate} Then $P(z)=\prod^\infty_{i=1}B_i(z)$ converges uniformly on compact subsets of $\D$. The same theorem holds if we replace $\prod$ everywhere by $\prodl$. \end{thm} \begin{proof} Let $K\subset\D$ be compact. By assumption there is $C>0$ such that $\|P_k(z)\|\le C$ for all $z\in\D$ and $k\ge 1$. Choose $\epsilon>0$ and $N$ so large that $$\sup_{z\in K}\sum^l_{i=k+1}\|I-B_i(z)\|<\epsilon$$ for $l\ge k\ge N$. 
Then we have by Lemma \ref{estimatelemma} \begin{align*} \|P_k(z)-P_l(z)\| &\le \|P_k(z)\|\cdot\|I-\prod_{i=k+1}^l B_i(z)\|\\ &\le C\left(\prod^l_{i=k+1}(1+\|I-B_i(z)\|)-1\right)\\ &\le C\left(\exp\left(\sum^l_{i=k+1}\|I-B_i(z)\|\right)-1\right)\\ &\le C(e^\epsilon-1) \end{align*} for $z\in K$. The right hand side converges to $0$ as $\epsilon\rightarrow 0$. The proof for $\prodl$ works analogously using also an analogous version of Lemma \ref{estimatelemma}. \end{proof} \begin{thm}\label{bpconv}\index{Blaschke condition} If $B$ is a formal B.P. product corresponding to the zeros $z_1,z_2,\dots$ and the Blaschke condition holds, then $B$ converges (uniformly on compact sets) to an analytic matrix function in $\D$ and $\|B\|_\infty=1$. \end{thm} \begin{proof} We may assume $z_i\not=0$. Now we want to apply Theorem \ref{thm:prodconv}. Let $B_k(z)=\prod^k_{i=1}b_i(z)$ be the $k$th partial product of $B$ and $$b_i(z)=U_i\left(\begin{array}{cc}\frac{z_i-z}{1-\overline{z_i}z}\frac{|z_i|}{z_i} I_{r_i} & 0\\0 & I_{n-r_i}\end{array}\right)U_i^{-1}=I-U_i \left(\begin{array}{cc}\left(1-\frac{z_i-z}{1-\overline{z_i}z}\frac{|z_i|}{z_i}\right) I_{r_i} & 0\\0 & 0\end{array}\right)U_i^{-1}$$ Then for $|z|\le r<1$ we get $$\|I-b_i(z)\|=\left|1-\frac{z_i-z}{1-\overline{z_i}z}\frac{|z_i|}{z_i}\right| =\left|\frac{z_i+z|z_i|}{z_i-z|z_i|^2}\right|(1-|z_i|)\le\frac{1+r}{1-r}(1-|z_i|)$$ which implies condition (a). Note that each factor $b_i(z)$ is contractive on $\D$, so $\|B_k(z)\|\le 1$ and condition (b) is satisfied. By the theorem and Lemma \ref{unifconvlemma} we obtain $B$ as an analytic matrix function with $\|B\|_\infty\le 1$. Moreover, $\det B$ is a convergent scalar Blaschke product, so $\sup_{z\in\D}|\det B(z)|=1$; together with $|\det B(z)|\le\|B(z)\|^n$ this gives $\|B\|_\infty=1$. \end{proof} \subsubsection{Factorization} Our next objective is the factorization of an arbitrary bounded analytic matrix function into a B.P. product and a function with non-vanishing determinant. To do this we will first analyze the detachment of a single Blaschke-Potapov factor from a given contractive analytic function. \begin{definition}If $A\in\Sch$ has a zero at some $z_0\in\D$, we call a B.P. factor $b$ with $\det b(z_0)=0$ \defin{detachable} from $A$ if $b^{-1}A\in\Sch$. \end{definition} \begin{lemma}[Detachability condition]\index{detachability condition} \label{detachlemma} Suppose $A\in\Sch$ and $\det A(z_0)=0$ for $z_0\in\D$. Then a B.P. factor $b$ given by \eqref{BPfact1} is detachable from $A$ if and only if \begin{align} \label{detachcrit} U^*A(z_0)=\left(\begin{array}{cc}0_r & 0\\ * & * \end{array}\right) \end{align} where $0_r$ denotes the $r\times r$ zero matrix and the $*$ denote arbitrary block matrices of the appropriate dimensions. \end{lemma} \begin{proof} We use a Taylor expansion to write $$A(z)=A(z_0)+R(z)(z-z_0)$$ for $z$ in some neighborhood of $z_0$ small enough to be contained in $\D$, where $R$ is a holomorphic mvf.
Writing $U^*A(z_0)=\left(\begin{array}{cc}B & C\\ * & * \end{array}\right)$ for some $r\times r$ matrix $B$ and $r\times (n-r)$ matrix $C$ we get \begin{align*} b^{-1}(z)A(z) &= U\left(\begin{array}{cc}\beta_{z_0}(z)^{-1} I_r & 0\\0 & I_{n-r}\end{array}\right)\left(\begin{array}{cc}B & C\\ * & * \end{array}\right)+U\left(\begin{array}{cc}\beta_{z_0}(z)^{-1}(z-z_0)I_r & 0\\0 & (z-z_0)I_{n-r}\end{array}\right)U^* R(z)\\ &= U\left(\begin{array}{cc}\beta_{z_0}(z)^{-1}B & \beta_{z_0}(z)^{-1}C\\ * & * \end{array}\right)+U\left(\begin{array}{cc}\beta_{z_0}(z)^{-1}(z-z_0)I_r & 0\\0 & (z-z_0)I_{n-r}\end{array}\right)U^* R(z) \end{align*} Recalling that $\beta_{z_0}$ is given by \eqref{betadef}, the function $\beta_{z_0}(z)^{-1}$ has a simple pole at $z_0$, while $\beta_{z_0}(z)^{-1}(z-z_0)$ is holomorphic there. Hence the second summand is always holomorphic at $z_0$, and $b^{-1}A$ is holomorphic at $z_0$ if and only if $B=0$ and $C=0$, i.e. if and only if \eqref{detachcrit} holds. It remains to show that $b^{-1}A$ is in that case contractive. Let $\epsilon>0$. Then we can choose $0\le \rho<1$ so close to $1$ that $$\|b^{-1}(z)\|=|\beta_{z_0}(z)|^{-1}=\left|\frac{1-\overline{z_0}z}{z_{0}-z}\right|\le 1+\epsilon$$ for $|z|\ge \rho$. This can be seen from the properties of the pseudohyperbolic distance as described in \cite[Chapter I.1]{Garnett}. Since $A$ is contractive by assumption this implies $$\|b^{-1}(z)A(z)\|\le 1+\epsilon$$ for $|z|\ge \rho$. Because the norm of an analytic mvf is subharmonic\index{subharmonic} (see the appendix for a proof of this), we conclude by the maximum principle\index{maximum principle} (see \cite[Theorem I.6.3]{Garnett}) that this estimate holds also for $|z|<\rho$. As $\epsilon$ was arbitrary, we obtain $\|b^{-1}(z)A(z)\|\le 1$ for every $z\in\D$. \end{proof} Using the alternative formulation \eqref{BPfact2} for $b$, the detachability condition\index{detachability condition} \eqref{detachcrit} becomes \begin{align} \label{detachcrit2} \Img A(z_0)\subseteq \ker P \end{align} or equivalently, $\Img A(z_0)\perp\Img P$. If we also require the rank $r$ of $P$ to be maximal, the condition becomes \begin{align} \label{maxdetachcrit} \Img A(z_0)=\ker P \end{align} so the B.P. factor is in that case uniquely determined. We are now ready to prove the main result of this section. \begin{thm}\label{thm:bpfactor}\index{B.P. product} Given $A\in\Sch$, there exists a B.P. product $B$ and $\tilde{A}\in\Sch$ without zeros, such that $A=B\cdot\tilde{A}$. Moreover, $B$ is uniquely determined up to multiplication with a constant unitary matrix. \end{thm} The uniqueness statement says that $B$ is uniquely determined as a function on $\D$, \emph{not} as a formal product. That is, the individual B.P. factors may be quite different depending on the order in which we detach the zeros. {\em Proof of existence.} Let $z_1,z_2,\dots$ be the zeros of $\det A$ in no particular order, counted according to their multiplicities. Let $0\le r<1$. By the Blaschke condition, there can only be finitely many zeros such that $|z_i|\le r$. We now construct sequences of B.P. factors $(b_k)_k$ and functions $(A_k)_k$ in $\Sch$ by the following inductive process starting with $A_0=A$ and $k=1$:\\ If $\det A_{k-1}$ has a zero at $z_k$ then $A_{k-1}(z_k)$ has a kernel of dimension $0< r_k\le n$ and by singular value decomposition we obtain \begin{align} \label{svd} A_{k-1}(z_k)=U_k\left(\begin{array}{cc} 0 & 0\\ 0 & D\\ \end{array}\right)V_k \end{align} where $D$ is a $(n-r_k)\times(n-r_k)$ diagonal matrix with non-zero entries and $U_k$ and $V_k$ are unitary matrices.
Now set \begin{align} \label{defbkak} b_k(z)=U_k\left(\begin{array}{cc} \beta_{z_k}(z)I_{r_k} & 0\\ 0 & I_{n-r_k}\end{array}\right)U_k^{-1}\,\,\,\mbox{and}\,\,\, A_k(z)=b^{-1}_k(z)A_{k-1}(z) \end{align} for $z\in\D$. By Lemma \ref{detachlemma}, the factor $b_k$ is detachable from $A_{k-1}$, since \eqref{svd} shows that $U_k^*A_{k-1}(z_k)$ has the form required in \eqref{detachcrit}; hence $A_k\in\Sch$.\\ Now we continue from the start with $k+1$ instead of $k$, where we skip those $k$ such that $\det A_{k-1}(z_k)\not=0$, which may happen since the individual zeros occur as often as their multiplicity dictates. From the equation $$\det A_{k-1}(z)=\beta_{z_k}(z)^{r_k} \det A_{k}(z)$$ we see that each zero $z_i$ will be ``consumed'' eventually, i.e. there exists $N=N(i)$ such that $\det A_k(z_i)\not=0$ for $k\ge N$. Also, the process will end after finitely many steps if and only if there are finitely many zeros. In the case of infinitely many zeros, we know from Theorem \ref{bpconv} that $B_k(z)=\prod^k_{i=1}b_i(z)$ will converge to a B.P. product $B$. We claim that also the sequence $(A_k)_k$ converges to a bounded analytic function $\tilde{A}$. For the proof note that \eqref{defbkak} implies $$A_k(z)=\left(\prodl_{i=1}^k b_i(z)^{-1}\right)A(z)$$ Also a calculation shows that for $|z|\le r<1$ $$\|I-b_k(z)^{-1}\|=\left|1-\frac{1-\overline{w}z}{w-z}\frac{w}{|w|}\right| =\left|\frac{w+z|w|}{w|w|-z|w|}\right|(1-|w|)\le \frac{1}{1-r}\cdot(1-|w|)$$ where $w=z_k\not=0$. Hence we can apply Theorem \ref{thm:prodconv} to conclude convergence of the partial products $A_k$ to a function $\tilde{A}\in\Sch$ which satisfies $A=B\cdot\tilde{A}$.\qed To prove uniqueness we require the following observation. \begin{lemma} \label{detachlemma2} Let $A,A_1,A_2\in\Sch$ satisfy $A(z)=A_1(z)A_2(z)$. Furthermore, suppose that $\det A_1$ has a zero at $z_0\in\D$ and $\det A_2(z_0)\not=0$. Let $b$ be a Blaschke-Potapov factor for $z_0$, which is detachable from $A$ in the sense defined above. Then it is also detachable from $A_1$. \end{lemma} \begin{proof}The claim follows from Lemma \ref{detachlemma} and $$ U^*A_1(z_0)=\left(\begin{array}{cc}0_r & 0\\ * & *\end{array}\right)A_2^{-1}(z_0)=\left(\begin{array}{cc}0_r & 0\\ * & *\end{array}\right)$$ \end{proof} \noindent Now we can complete the proof of Theorem \ref{thm:bpfactor}. {\em Proof of uniqueness.} Suppose that $$A=B^{(1)}\cdot \tilde{A}^{(1)}=B^{(2)}\cdot \tilde{A}^{(2)}$$ are two factorizations such that for $i=1,2$, $B^{(i)}$ is a B.P. product and $\tilde{A}^{(i)}$ a contractive analytic mvf with non-vanishing determinant. Without loss of generality we write $$B^{(i)}(z)=\prod^\infty_{j=1}b^{(i)}_j(z)\;\;\;\text{and}\;\;\; B^{(i)}_k(z)=\prod^k_{j=1}b^{(i)}_j(z)\;\;\;(z\in\D)$$ for $i=1,2$, where the $b_j^{(i)}$ are B.P. factors. In case the $B^{(i)}$ come with constant unitary right factors, we can include those in the $\tilde{A}^{(i)}$. Clearly, $b^{(2)}_{k+1}$ is detachable from $$B_k^{(2)}(z)^{-1} B^{(2)}(z)=\prod_{j=k+1}^\infty b^{(2)}_j(z)$$ By Lemma \ref{detachlemma2}, it is also detachable from $$B_k^{(2)}(z)^{-1} B^{(1)}(z)=B_k^{(2)}(z)^{-1} B^{(2)}(z)\tilde{A}^{(2)}(z)\tilde{A}^{(1)}(z)^{-1}$$ Letting $k\rightarrow\infty$ we obtain that $F=(B^{(2)})^{-1}B^{(1)}\in\Sch$. By symmetry, also $F^{-1}=(B^{(1)})^{-1}B^{(2)}$ is contractive. Hence we have $I-F(z)^{-1}F^*(z)^{-1}\ge 0$ for $z\in\D$, so also $$0 \le F(z)(I-F(z)^{-1}F^*(z)^{-1})F^*(z)=F(z)F^*(z)-I$$ But at the same time we know $I-F(z)F^*(z)\ge 0$, so $F$ must be unitary everywhere in $\D$.
By Corollary \ref{cor:unitaryconst} in the appendix, $F$ is a constant unitary matrix. This concludes the proof of Theorem \ref{thm:bpfactor}.\qed \subsubsection{Finite B.P. products} A consequence of the above factorization is the following characterization of finite B.P. products which turns out to be the same as in the scalar case. \begin{lemma}\index{B.P. product} \label{lemma:finitebp} A mvf $A\in\Sch$ is a finite Blaschke-Potapov product if and only if it extends continuously to $\overline{\D}$ and takes unitary values on $\T$. \end{lemma} \begin{proof} Suppose that $A\in\Sch$ extends continuously to $\overline{\D}$ and takes unitary values on $\T$. Then $\det A$ has only finitely many zeros: otherwise the zeros would accumulate at some point of $\T$, where $\det A$ would vanish by continuity, which is impossible because $A$ is unitary on the unit circle. By Theorem \ref{thm:bpfactor} there exists a finite Blaschke-Potapov product $B$ and $\tilde{A}\in\Sch$ such that $A=B\cdot\tilde{A}$ and $\det\tilde{A}$ is non-vanishing. Hence $\tilde{A}^{-1}$ is analytic on $\D$, and $\tilde{A}^{-1}=B^{-1}\cdot A$ also extends continuously to $\T$ with unitary values there. In particular, $\|\tilde{A}^{-1}(z)\|=1$ for $z\in\T$. By the maximum principle for subharmonic functions, $\tilde{A}^{-1}$ is contractive. Summarizing, we have for arbitrary $z\in\D$ $$I-\tilde{A}(z)\tilde{A}^*(z)\ge 0\,\,\,\text{and}\,\,\,I-\tilde{A}(z)^{-1}(\tilde{A}^*(z))^{-1}\ge 0$$ whence it follows that $\tilde{A}$ is unitary at $z$. By Corollary \ref{cor:unitaryconst}, $\tilde{A}$ is therefore equal to a constant unitary matrix, which proves the claim. The other implication follows directly from the definition of a B.P. factor. \end{proof} \subsubsection{B.P. products for $n=2$} In the case of $2\times 2$ matrices, the B.P. factors can have only rank $1$ or $2$. B.P. factors of rank $2$ are just scalar Blaschke factors times the identity matrix. Thus we can factor out a maximal scalar Blaschke product to obtain a function which has only zeros of rank 1. \begin{lemma}\index{B.P. product} Let $n=2$ and $A\in\Sch$. Assume that $A$ has no zeros of rank 2. Let $z_1,z_2,\dots$ be an enumeration of the zeros of $A$ counted with multiplicities. Then there exist uniquely determined B.P. factors $b_1,b_2,\dots$ and $\tilde{A}\in\Sch$ without zeros such that $b_k(z_k)=0$ and $$A(z)=\prod_{k=1}^N b_k(z)\cdot \tilde{A}(z)$$ where $N\in\N\cup\{\infty\}$ is the number of zeros (including multiplicities). \end{lemma} This is clear in view of the above. In fact, condition \eqref{maxdetachcrit} allows for the $b_k$ to be expressed explicitly: Let $$b_k(z)=I-P_k+\beta_{z_k}(z)P_k$$ be the $k$th B.P. factor and $A_k$ be recursively given by $$A_1(z)=A(z)\,\,\text{and}\,\,A_{k+1}(z)=b_k^{-1}(z)A_k(z)\,\,\text{for}\;k\ge 1$$ Then $b_k$ is determined by $\ker P_k=\Img A_k(z_k)$. \subsection{Multiplicative representation} The key ingredient for obtaining a multiplicative representation of a contractive analytic matrix function with non-vanishing determinant will be the following approximation theorem. The proof is from \cite[Chapter V]{Potapov}. \begin{thm}\index{rational approximation} \label{thm:rationalapprox} For every $A\in\Sch$ there exists a sequence of rational contractive mvfs $(A_k)_{k\ge 1}$, unitary on $\T$, such that $A_k$ converges to $A$ uniformly on compact sets in $\D$ as $k\rightarrow\infty$. \end{thm} By a rational mvf, we mean a matrix-valued function the entries of which are scalar rational functions.
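For concreteness, we record one simple specimen of the kind of mvf appearing in Theorem \ref{thm:rationalapprox}, namely a rational contractive mvf which is unitary on $\T$.
\begin{example}
The polynomial, hence rational, mvf $$A(z)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} z & z^2\\ -1 & z\end{array}\right)$$ satisfies $A(z)A^*(z)=I$ for $|z|=1$, as a direct computation shows. Since $\|A(z)\|$ is subharmonic and $A$ is continuous on $\overline{\D}$, the maximum principle gives $\|A(z)\|\le 1$ for $z\in\D$, so $A$ is a rational contractive mvf which is unitary on $\T$. By Lemma \ref{lemma:finitebp} it is necessarily a finite Blaschke-Potapov product; indeed $\det A(z)=z^2$, so its only zero lies at $z=0$.
\end{example}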
Combined with Lemma \ref{lemma:finitebp}, this theorem implies that every contractive analytic matrix function can be uniformly approximated by finite Blaschke-Potapov products. \begin{proof} We may choose a constant $w\in\C$ with $|w|=1$ such that $\det(wI-A(0))\not=0$ since $A(0)$ can have at most $n$ distinct eigenvalues. This implies that the holomorphic function $\det(wI-A(z))$ is not identically zero, so the matrix $wI-A(z)$ is regular for all but countably many points $(\mu_j)_j$ in $\D$. Thus we may define a (possibly meromorphic) mvf by \begin{align} \label{Cayley} T(z)=i(wI-A(z))^{-1}(wI+A(z)) \end{align} This transformation is a matrix-valued analogue of a conformal mapping from the unit disk to the upper half-plane which is sometimes referred to as Cayley transform\index{Cayley transform}. The inverse transformation is given by \begin{align} \label{CayleyInv} A(z)=w(T(z)-iI)(T(z)+iI)^{-1} \end{align} A calculation shows $$\Im T(z)=(wI-A(z))^{-1}(I-A(z)A^*(z))(\overline{w}I-A^*(z))^{-1}\ge 0$$ because $I-A(z)A^*(z)\ge 0$ by assumption. By the open mapping principle, we know that a scalar meromorphic function which maps into the closed upper half-plane is actually holomorphic. The same holds for meromorphic mvfs with non-negative imaginary part, because the diagonal entries are scalar functions mapping into the closed upper half-plane and the off-diagonal entries can be bounded in terms of the diagonal entries. Consequently, the singularities $(\mu_j)_j$ are removable and $T$ is holomorphic. By the Herglotz representation theorem (see the appendix for a proof of this theorem) we can write $T$ as $$T(z)=T_0+i\int^{2\pi}_0 \frac{e^{it}+z}{e^{it}-z}\,d\sigma(t)$$ where $T_0$ is a constant Hermitian matrix and $\sigma$ an increasing mvf. This integral representation allows us to approximate $T$ uniformly by Riemann-Stieltjes sums. Choose $0<r_k<1$ for $k=1,2,\dots$ such that $r_k\nearrow 1$ as $k\rightarrow\infty$ and none of the points $\mu_j$ lies on any of the circles $\{z:|z|=r_k\}$. For each $k=1,2,\dots$ we also pick an appropriate subdivision $0\le t^{(k)}_0\le t^{(k)}_1\le \dots \le t^{(k)}_{m_k} = 2\pi$ of the interval $[0,2\pi]$ such that the Riemann-Stieltjes sum \begin{align} \label{Tkdef} T_k(z)=T_0+i\sum^{m_k-1}_{\nu=0} \frac{e^{it^{(k)}_\nu}+z}{e^{it^{(k)}_\nu}-z}(\sigma(t^{(k)}_{\nu+1})-\sigma(t^{(k)}_\nu)) \end{align} satisfies the estimate \begin{align} \label{TTkestimate} \|T(z)-T_k(z)\|\le \frac{1}{k}\hspace{1cm}\text{for all}\,|z|\le r_k \end{align} By construction, the rational functions $(T_k)_k$ are holomorphic on $\D$ and converge to $T$ uniformly on compact sets. The meromorphic function \begin{align} \label{TpIeq} T(z)+iI=2iw(wI-A(z))^{-1} \end{align} is only singular at the points $(\mu_j)_j$. Hence $\det(T_k(z)+iI)$ can at most vanish at countably many points for large enough $k$. Thus it makes sense to define \begin{align} \label{Akdef} A_k(z)=w(T_k(z)-iI)(T_k(z)+iI)^{-1} \end{align} From \eqref{Tkdef} we see that \begin{align} \label{ImTkeq} \Im T_k(z)=\sum^{m_k-1}_{\nu=0}\frac{1-|z|^2}{|e^{it^{(k)}_\nu}-z|^2}(\sigma(t^{(k)}_{\nu+1}) -\sigma(t^{(k)}_\nu)) \end{align} which implies $\Im T_k(z)\ge 0$. It follows that $$I-A_k(z)A_k^*(z)=4(T_k(z)+iI)^{-1}\cdot\Im T_k(z)\cdot(T_k^*(z)-iI)^{-1}\ge 0$$ so $A_k$ is a rational contractive mvf which has no poles in $\D$. Also, \eqref{ImTkeq} implies that $A_k(z)$ is unitary for $|z|=1$. We claim that $A_k$ converges to $A$ uniformly on compact sets.
To prove this, let $K\subset\D$ be compact and $N$ large enough such that $K$ is contained in the disk $|z|\le r_N$. By choice of the $(r_k)_k$ we may select a $\delta>0$ such that none of the singularities $(\mu_j)_j$ lie in the annulus $R=\{z:r_N-\delta\le|z|\le r_N+\delta\}\subset\D$. Then, by the identity $$A(z)-A_k(z)=2i(T(z)+iI)^{-1}(T(z)-T_k(z))(T_k(z)+iI)^{-1}$$ and using \eqref{TTkestimate}, \eqref{TpIeq} we obtain \begin{align} \label{AAkestimate1} \|A(z)-A_k(z)\|\le \frac{2}{k}\cdot \|(T_k(z)+iI)^{-1}\| \end{align} for $z\in R$ and $k$ so large that $T_k(z)+iI$ is invertible in $R$ and $\{z:|z|~\le~r_k\}~\supset~R$. Since $T_k(z)+iI$ converges uniformly to $T(z)+iI$, the matrix norm $\|T_k(z)+iI\|$ is bounded uniformly in $k$ and $z$. By the matrix norm estimate $\|A^{-1}\|~\le~\frac{\|A\|^{n-1}}{|\det A|}$ (see the appendix for a proof of this) it follows that \begin{align} \label{Tkinvest} \|(T_k(z)+iI)^{-1}\|\le \frac{C_0}{|\det(T_k(z)+iI)|} \end{align} for some large enough $C_0>0$. Since $\det(T_k(z)+iI)\not=0$ in $R$ for large enough $k$ and also $\det(T(z)+iI)\not=0$ in $R$ we can infer that $|\det(T_k(z)+iI)|$ is bounded from below by some positive constant. Combining this with \eqref{AAkestimate1} and \eqref{Tkinvest} we get \begin{align} \label{AAkestimate2} \|A(z)-A_k(z)\|\le \frac{C}{k} \end{align} for some large enough $C>0$ and $z\in R$. From Lemma \ref{subharmlemma} and the maximum principle for subharmonic functions we conclude that \eqref{AAkestimate2} holds throughout $\{z:|z|\le r_N+\delta\}\supset K$. \end{proof} We will now describe how to obtain the multiplicative representation, proceeding as in \cite[Introduction, Chapter V]{Potapov}. Let $A\in\Sch$ have no zeros. Applying Theorem \ref{thm:rationalapprox} and Lemma \ref{lemma:finitebp}, we can choose a sequence \begin{align} \label{Akeq} A_k(z)=b_1^{(k)}(z)b_2^{(k)}(z)\cdots b_{m_k}^{(k)}(z)U_k \end{align} where the $b_j^{(k)}$, $1\le j\le m_k$ are B.P. factors and the $U_k$ unitary matrices, such that $A_k\rightarrow A$ uniformly on compact subsets. Let also \begin{align} \label{bjkequ1} b_j^{(k)}(z) &= U^{(k)}_j\left(\begin{array}{cc}\beta_{z^{(k)}_j}(z) I_{r^{(k)}_j} & 0\\0 & I_{n-r^{(k)}_j}\end{array}\right)(U^{(k)}_j)^* \end{align} with $U^{(k)}_j$ unitary and\footnote{We tacitly suppose that $z^{(k)}_j\not=0$. This is true anyway for large enough $k$.} $z^{(k)}_j=\rho^{(k)}_j e^{i\theta^{(k)}_j},\,j=1,2,\dots,m_k$ with $\rho^{(k)}_j>0$ and $0\le\theta^{(k)}_j<2\pi$. We may assume that the $z^{(k)}_j$ are arranged in order of increasing $\theta^{(k)}_j$. Now define \begin{align} \label{Hdef} H^{(k)}_j=U^{(k)}_j\left(\begin{array}{cc} (1-|z_j^{(k)}|)I_{r^{(k)}_j} & 0\\0 & 0\end{array}\right)(U^{(k)}_j)^* \end{align} Notice that $\|H_j^{(k)}\|=1-|z_j^{(k)}|$. Now \eqref{betadef}, \eqref{bjkequ1} and \eqref{Hdef} imply \begin{align} \label{bjkequ2} b_j^{(k)}(z)=I-\frac{z_j^{(k)}+|z_j^{(k)}|z}{z_j^{(k)}-|z_j^{(k)}|^2 z}H_j^{(k)} \end{align} The sequence $(\det A_k(0))_k$ converges to $\det A(0)\not=0$, hence $|\det A_k(0)|$ is bounded from below by a positive constant. Since $$\prod^{m_k}_{\nu=1}|z_\nu^{(k)}|^{r_\nu^{(k)}}=|\det A_k(0)|$$ and $1-x\le\log\frac{1}{x}$ for $0<x\le 1$, there exists a constant $C>0$ such that \begin{align} \label{Cinequ} \sum^{m_k}_{\nu=1}(1-|z_\nu^{(k)}|)\le \sum^{m_k}_{\nu=1} r_\nu^{(k)}\log\frac{1}{|z_\nu^{(k)}|}=\log\frac{1}{|\det A_k(0)|}\le C \end{align} We would like to write the right hand side of \eqref{Akeq} as a multiplicative integral. However, this is not possible because multiplicative integrals are invertible everywhere while $\det A_k(z)$ has zeros.
To remedy this situation, we consider the modified factors \begin{align} \label{bjktildedef} \tilde{b}_j^{(k)}(z)=\exp\left(-\frac{e^{i\theta_j^{(k)}}+z}{e^{i\theta_j^{(k)}}-z}H_j^{(k)}\right)=\exp(h_z(\theta_j^{(k)})H_j^{(k)}) \end{align} and accordingly \begin{align} \label{Aktildedef} \tilde{A}_k(z)=\tilde{b}_1^{(k)}(z)\cdots\tilde{b}_{m_k}^{(k)}(z)U_k \end{align} We now show that the non-vanishing of $\det A$ implies that we may work with $\tilde{A}_k$ instead. \begin{lemma} \label{lemma:Atildelemma} With $A_k,\tilde{A}_k$ given as above, we have that $\|A_k-\tilde{A}_k\|\rightarrow 0$ uniformly on compact sets in $\D$. \end{lemma} \begin{proof} Let $0<r<1$ and let $k$ be large enough such that none of the points $(z_j^{(k)})_j$ lie in the disk $\{z:|z|\le r\}$. By \eqref{bjkequ2} we can estimate \begin{align} \label{bjkest} \|b_j^{(k)}(z)\|\le 1+\frac{1+r}{1-r}\|H_j^{(k)}\|\le e^{\frac{1+r}{1-r}\|H_j^{(k)}\|}=e^{\frac{1+r}{1-r}(1-|z_j^{(k)}|)} \end{align} for $|z|\le r$. The same conclusion holds with $b_j^{(k)}$ replaced by $\tilde{b}_j^{(k)}$. Now the telescoping identity\index{telescoping identity} from Lemma \ref{lemma:telescoping} implies via \eqref{bjkest} and \eqref{Cinequ} that \begin{align} \label{AAtildeest} \|A_k(z)-\tilde{A}_k(z)\|\le e^{C\frac{1+r}{1-r}}\sum_{\nu=1}^{m_k}\|b_\nu^{(k)}(z)-\tilde{b}^{(k)}_\nu(z)\| \end{align} for $|z|\le r$. Define $$c_j^{(k)}(z)=I-\frac{e^{i\theta_j^{(k)}}+z}{e^{i\theta_j^{(k)}}-z}H_j^{(k)}$$ We write $$ \|b_j^{(k)}(z)-\tilde{b}^{(k)}_j(z)\|\le \|b_j^{(k)}(z)-c_j^{(k)}(z)\| +\|c_j^{(k)}(z)-\tilde{b}^{(k)}_j(z)\| $$ and estimate the two differences separately. First, \begin{align} \label{bjkcjkest} \|b_j^{(k)}(z)-c_j^{(k)}(z)\| &= \left| \frac{e^{i\theta}+z}{e^{i\theta}-\rho z}- \frac{e^{i\theta}+z}{e^{i\theta}-z} \right|\cdot \|H_j^{(k)}\|\notag\\ &\le 2\frac{1-\rho}{(1-r)^2}\|H_j^{(k)}\|=2\left(\frac{1-|z_j^{(k)}|}{1-r}\right)^2 \end{align} where $\rho=\rho_j^{(k)}=|z_j^{(k)}|$, $\theta=\theta_j^{(k)}$ and $|z|\le r$. Secondly, \begin{align} \label{cbtildeest} \|c_j^{(k)}(z)-\tilde{b}_j^{(k)}(z)\| &=\left\|\sum_{\nu=2}^\infty \frac{(h_z(\theta_j^{(k)})H_j^{(k)})^\nu}{\nu!}\right\|\le \sum_{\nu=2}^\infty \frac{1}{\nu!}\left(\frac{1+r}{1-r}\right)^\nu \|H_j^{(k)}\|^\nu\notag\\ &\le 4e^{C\frac{1+r}{1-r}}\left(\frac{1-|z_j^{(k)}|}{1-r}\right)^2 \end{align} where $|z|\le r$. Setting $$ M=M(r)=\frac{2}{(1-r)^2}\cdot e^{C\frac{1+r}{1-r}}\max\{1,2e^{C\frac{1+r}{1-r}}\} $$ and applying \eqref{bjkcjkest}, \eqref{cbtildeest} to \eqref{AAtildeest}, we see \begin{align} \label{AAtildeest2} \|A_k(z)-\tilde{A}_k(z)\|\le M \sum_{\nu=1}^{m_k} (1-|z_\nu^{(k)}|)^2 \le CM \max_{\nu=1,\dots,m_k} (1-|z_\nu^{(k)}|) \end{align} in the disk $\{z:|z|\le r\}$. Since $A_k\rightarrow A$ uniformly on compact sets and $\det A$ has no zeros in $\D$, for every $0<r'<1$ the zeros of $\det A_k$ eventually satisfy $|z_\nu^{(k)}|>r'$ (otherwise a subsequence of them would converge to a zero of $\det A$ in $\D$). Hence $\max_{\nu=1,\dots,m_k}(1-|z_\nu^{(k)}|)\rightarrow 0$ and the right hand side of \eqref{AAtildeest2} converges to $0$ as $k\rightarrow\infty$. \end{proof} Now we will write \eqref{Aktildedef} as a multiplicative Stieltjes integral on some interval $[0,L]$.
Define $t_0^{(k)}=0$ and \begin{align} \label{tjkdef} t_j^{(k)}=\sum^{j}_{\nu=1}\tr H_\nu^{(k)}\;\;\;\text{for}\;j=1,\dots,m_k \end{align} From \eqref{tjkdef} and \eqref{Hdef} we see \begin{align*} t_j^{(k)}\le\sum_{\nu=1}^{m_k} r_\nu^{(k)}(1-|z_\nu^{(k)}|)\le \sum_{\nu=1}^{m_k} r_\nu^{(k)}\log\frac{1}{|z_\nu^{(k)}|}=\log\frac{1}{|\det A_k(0)|}\le L \end{align*} for a constant $L>0$ which does not depend on $k$ (compare the argument leading to \eqref{Cinequ}). Finally, we define \begin{align} \label{Ekdef} E^{(k)}(t)=\left\{\begin{array}{ll}\sum^{j-1}_{\nu=1} H^{(k)}_\nu + \frac{t-t^{(k)}_{j-1}}{t_j^{(k)}-t_{j-1}^{(k)}}H_j^{(k)}, & \text{if}\;t_{j-1}^{(k)}\le t<t_j^{(k)}\;\text{for some}\;j=1,\dots,m_k\\ \sum^{m_k}_{\nu=1}H_\nu^{(k)}, & \text{if}\;t_{m_k}^{(k)}\le t\le L\end{array}\right.\\ \label{thetadef} \theta^{(k)}(t)=\theta^{(k)}_j\;\text{for}\;t_{j-1}^{(k)}\le t<t_j^{(k)}\;\;\text{and}\;\;\theta^{(k)}(t)=\theta^{(k)}_{m_k}\;\text{for}\; t_{m_k}^{(k)}\le t\le L \end{align} $E^{(k)}$ is chosen such that it is an increasing mvf on $[0,L]$ and the equation $$\tr E^{(k)}(t)=t$$ is satisfied for $t\in[0,t_{m_k}^{(k)}]$. Note also that for $0\le s\le t\le L$ we have $$\|E^{(k)}(t)-E^{(k)}(s)\|\le \tr (E^{(k)}(t)-E^{(k)}(s)) \le t-s$$ so $E^{(k)}$ is Lipschitz continuous. The function $\theta^{(k)}:[0,L]\rightarrow[0,2\pi)$ is increasing and right-continuous. Thus, $h_z(\theta^{(k)}(t))$ is Riemann integrable on $[0,L]$ as a function of $t$. By Proposition \ref{prop:mintconst}, \eqref{bjktildedef} and \eqref{Ekdef} we can write $$ \tilde{b}_j^{(k)}(z) = \mint_{t_{j-1}^{(k)}}^{t_j^{(k)}} \exp\left(\frac{h_z(\theta^{(k)}(t))}{t_j^{(k)}-t_{j-1}^{(k)}}H_j^{(k)}dt\right) =\mint_{t_{j-1}^{(k)}}^{t_j^{(k)}} \exp\left(h_z(\theta^{(k)}(t))\,dE^{(k)}(t)\right)$$ for $j=1,\dots,m_k$. We write all the factors on the right hand side of \eqref{Aktildedef} in a single multiplicative integral using Proposition \ref{prop:mintsep} to obtain \begin{align} \label{Akmint} \tilde{A}_k(z)=\mint^L_0 \exp\left(h_z(\theta^{(k)}(t))\,dE^{(k)}(t)\right) \end{align} Now we apply Helly's selection Theorem\index{Helly's selection theorem} \ref{thm:hellyselmvf} twice to extract a common subsequence such that both $(E^{(k_j)})_j$ and $(\theta^{(k_j)})_j$ converge to respective limit functions $E$ and $\theta$. Then $E$ is an increasing mvf with $\|E(t)-E(s)\|\le t-s$ for $t\ge s$. Moreover, $\theta$ is a bounded increasing function. In particular, it has only countably many discontinuities. Thus $h_z(\theta(t))$ is Riemann integrable in $t$ on $[0,L]$. By Helly's convergence theorem\index{Helly's convergence Theorem} for multiplicative integrals (Theorem \ref{thm:hellyconvmvf}) we conclude that $$\lim_{j\rightarrow\infty}\tilde{A}_{k_j}(z)= \mint_0^L \exp(h_z(\theta(t))\,dE(t))$$ But by Lemma \ref{lemma:Atildelemma}, $\tilde{A}_{k_j}(z)$ converges also to $A(z)$. Consequently, $$A(z)=\mint_0^L \exp(h_z(\theta(t))\,dE(t))$$ Should $\theta$ not be right-continuous we can change its values at the discontinuities such that it will be right-continuous. Since this changes $\theta$ only at countably many places, the value of the integral stays the same. We have obtained the desired multiplicative representation and thereby finished the proof of Theorem \ref{thm:potapov}. \section{Inner-outer factorization} \label{sect:innerouterfact} \subsection{Existence} The inner-outer factorization of a bounded analytic matrix-valued function was discovered by Y.P. Ginzburg in \cite{Ginzburg}, where he also indicates the basic steps for the proof.
It should be noted that these factorization theorems can be achieved in a much more general setting without the use of multiplicative integrals. This was done for example by Sz.-Nagy and Foias in \cite{Sz.-Nagy}. However, their treatment is quite abstract and our goal here are specifically the explicit multiplicative representations. Let us first remark that for a bounded analytic matrix function $A$, it makes sense to speak of its values almost everywhere on the circle, defined for instance radially by $$A(e^{i\theta})=\lim_{r\rightarrow 1-}A(re^{i\theta})$$ This limit exists for a.e. $\theta\in[0,2\pi)$ since the components of $A$ are bounded analytic scalar functions. We will denote the radial limit\index{radial limit} function also by $A$. \begin{definition}\label{def:inner} A function $A\in\H^\infty$ is called \defin{inner}\index{inner function}, if $A$ is unitary almost everywhere on $\T$. An inner function without zeros is a \defin{singular inner}\index{singular inner function} function. \end{definition} By the maximum principle for subharmonic functions applied to $\|A\|$, inner functions are contractive. In the scalar case, we can split up singular inner functions further with respect to the decomposition of the corresponding singular measure into pure point and singular continuous components. In the matrix-valued case this leads to the following definitions. \begin{definition} A mvf $A\in\H^\infty$ is called \defin{pp-inner}\index{pp-inner} (short for pure point inner), if there exist a unitary constant $U$, $m\in\N\cup\{\infty\}$, $l_k>0$, $\theta_k\in[0,2\pi)$ and increasing mvfs $E_k$ with $\tr E_k(t)=t$ such that \begin{align} \label{eqn:ppinner} A(z)=U\prod_{k=1}^m \mint_0^{l_k} \exp(h_z(\theta_k)\,dE_k(t)) \end{align} for $z\in\D$. \end{definition} A pp-inner function is inner. It suffices to check this for the case that $A(z)~=~\mint_0^{l} \exp(h_z(\theta)\,dE(t))$ for a constant $\theta\in[0,2\pi)$ and an increasing mvf $E$. By Proposition \ref{prop:mintgram} we have $$A(z)A^*(z)=\mint_0^{l} \exp(2\Re h_z(\theta)\,dE(t))$$ Set $z=re^{i\varphi}$. If $\varphi\not=\theta$, then $\lim_{r\rightarrow 1-}\Re h_z(\theta)=0$. Therefore, the radial limit $A(e^{i\varphi})$ is unitary. We call a mvf \defin{singular continuous}\index{singular continuous} if it is nonconstant, continuous, increasing and has derivative $0$ almost everywhere. \begin{definition} A mvf $A\in\H^\infty$ is called \defin{sc-inner}\index{sc-inner} (short for singular continuous-inner), if there exist a unitary constant $U$ and a singular continuous mvf $S$ such that \begin{align} \label{eqn:scinner} A(z)=U\mint_0^{2\pi} \exp(h_z(\varphi)dS(\varphi)) \end{align} for $z\in\D$. \end{definition} Let us check that an sc-inner function is really an inner function. Applying Proposition \ref{prop:mintgram} and Proposition \ref{prop:minttaylorest} we have $$A(z)A^*(z)=I+2\int_0^{2\pi} \Re h_z(\varphi) dS(\varphi)+R$$ where the error term $R$ satisfies the estimate \eqref{eqn:minttayloresterr}. 
But $$\left\|\int_0^{2\pi} \Re h_z(\varphi) dS(\varphi)\right\|\le \int_0^{2\pi} |\Re h_z(\varphi)| d|S|(\varphi)$$ The estimate \eqref{eqn:hellyselmvfpf1} applied to $S$ gives $$\int_0^{2\pi} |\Re h_z(\varphi)| d|S|(\varphi)\le \int_0^{2\pi} |\Re h_z(\varphi)| d\tr S(\varphi)$$ Combining the last three estimates with \eqref{eqn:minttayloresterr} we get \begin{align} \label{eqn:scinnerisinnereq1} \|I-A(z)A^*(z)\| \le \exp\left(\int_0^{2\pi} |\Re h_z(\varphi)| d\tr S(\varphi)\right)-1 \end{align} Now notice that $\tr S$ is a scalar singular continuous function. Set $z=re^{i\theta}$. By the scalar theory (compare \cite{Rudin2}), the left hand side of \eqref{eqn:scinnerisinnereq1} converges to $0$ for almost every $\theta$ as $r$ approaches $1$. We conclude that sc-inner functions are really inner functions. \begin{definition} We call $A\in\H^\infty$ \defin{outer}\index{outer function} if \begin{align} \label{eqn:outer} A(z)=U\mint^{2\pi}_0 \exp(h_z(\varphi)M(\varphi)\,d\varphi) \end{align} where $U$ is a unitary constant and $M\in L^1([0,2\pi]; M_n)$ a Hermitian mvf, whose least eigenvalue is bounded from below. \end{definition} Later, we will give equivalent characterizations for outer functions. Let us remark that all these definitions agree in the case $n=1$ with the corresponding scalar concepts. Our goal is a canonical factorization of any $A\in\H^\infty$ into a B.P. product, a pp-inner, an sc-inner and an outer function. To achieve this, we will manipulate the multiplicative integral representation obtained from Potapov's fundamental Theorem \ref{thm:potapov}. We need two lemmas. The first one says roughly that we can commute two functions in $\Sch$ as long as we only care about their determinants. This was given by Ginzburg, but unfortunately he did not provide a proof. \begin{lemma} \label{lemma:ginzburg} Let $A_1,A_2\in\Sch$. Then there exist $\tilde{A}_1,\tilde{A}_2\in\Sch$ such that $$A_1\cdot A_2=\tilde{A}_2\cdot\tilde{A}_1$$ and $\det A_j=\det\tilde{A}_j$ for $j=1,2$. \end{lemma} \begin{proof} By Theorem \ref{thm:rationalapprox} and Lemma \ref{lemma:finitebp}, we can choose a sequence of finite B.P. products $$A_2^{(k)}(z)=b^{(k)}_1(z)\cdots b^{(k)}_{m_k}(z)$$ such that $A_2^{(k)}$ converges to $A_2$ uniformly on compact sets. Let us assume without loss of generality that $A_1$ and $A_2^{(k)}$ have no common zeros. From the mvf $A_1A_2^{(k)}$, we detach a B.P. product corresponding to the zeros of $A_2^{(k)}$ which we call $\tilde{A}_2^{(k)}$. That is, we obtain \begin{align} \label{ginzburglemma1} A_1 A_2^{(k)}=\tilde{A}_2^{(k)} \tilde{A}_1^{(k)} \end{align} where $\tilde{A}_2^{(k)}$ is a finite B.P. product such that $\det A_2^{(k)}=\det \tilde{A}_2^{(k)}$ and $\tilde{A}_1^{(k)}\in\Sch$ the remainder. By Montel's theorem\index{Montel's theorem} (see Theorem \ref{thm:montel}), there exists a subsequence $(\tilde{A}^{(k_i)}_2)_i$ which converges on compact sets to some analytic mvf $\tilde{A}_2$. We also get $\tilde{A}_2\in\Sch$ and $\det\tilde{A}_2=\det A_2$. The identity \eqref{ginzburglemma1} implies that also $\tilde{A}^{(k_m)}_1$ converges to some $\tilde{A}_1\in\Sch$ with $\det A_1=\det\tilde{A}_1$. \end{proof} The next lemma says that the angular function $\theta$ in Potapov's multiplicative representation \eqref{eqn:potapovrepr} is already determined by $\det A$. The point of the proof is to use uniqueness of the measures in the scalar inner-outer factorization. 
\begin{lemma} \label{lemma:theta} Suppose that $A\in\Sch$ and \begin{align} \label{eqn:lemma2pf1} \det A(z)=\exp\left(\int_0^L h_z(\theta(t))dt\right) \end{align} for $z\in\D$, where $L>0$ and $\theta:[0,L]\rightarrow[0,2\pi]$ is a right-continuous, increasing function. Then $$A(z)=U\mint_0^L\exp(h_z(\theta(t))dE(t))$$ for some unitary constant $U$ with $\det U=1$ and an increasing mvf $E$ with $\tr E(t)=t$. \end{lemma} \begin{proof} By Potapov's Theorem\index{Potapov's fundamental theorem} \ref{thm:potapov} we know that \begin{align} \label{eqn:lemma2pf2} A(z)=U\mint_0^{\tilde{L}} \exp(h_z(\tilde{\theta}(t))dE(t)) \end{align} for a right-continuous, increasing function $\tilde{\theta}$ and an increasing mvf $E$ with $\tr E(t)=t$. Taking the determinant we see \begin{align} \label{eqn:lemma2pf3} \det A(z)=\det(U)\exp\left(\int_0^{\tilde{L}} h_z(\tilde{\theta}(t))dt\right) \end{align} Plugging in the value $z=0$ and comparing with \eqref{eqn:lemma2pf1} yields $\det U=1$ and $\tilde{L}=L$. Now it suffices to show that $\theta$ in the representation \eqref{eqn:lemma2pf1} is uniquely determined. By a change of variables we get \begin{align} \label{eqn:thetauniquepf2} \int^{L}_0 h_z(\theta(t))\,dt=\int^{2\pi}_0 h_z(\varphi)\,d\theta^\dagger(\varphi) \end{align} where $\theta^\dagger$ is the left-continuous generalized inverse\index{generalized inverse} of $\theta$ given by $\theta^\dagger(\varphi)=\inf\{t\in[0,L]\,:\,\theta(t)\ge \varphi\}$. Let $\mu$ be the unique positive Borel measure such that $\int_0^{2\pi} f(\varphi) d\theta^\dagger(\varphi)=\int_0^{2\pi} f(\varphi) d\mu(\varphi)$ for all continuous functions $f$. Note that the map $\theta\mapsto \theta^\dagger\mapsto \mu$ is injective. But by the scalar inner-outer factorization we know that the measure $\mu$ in $$\det A(z)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\mu(\varphi)\right)$$ is uniquely determined (see \cite{Garnett}, \cite{Rudin2}). Therefore also $\theta$ in \eqref{eqn:lemma2pf1} is uniquely determined. \end{proof} \begin{cor} \label{cor:thetacont} Let $A\in\Sch$ and assume $$\det A(z)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\psi(\varphi)\right)$$ for some continuous increasing function $\psi$. Then $$A(z)=U\mint_0^{2\pi} \exp(h_z(\varphi)\, dE(\psi(\varphi)))$$ for some unitary constant $U$ with $\det U=1$ and an increasing mvf $E$ satisfying $\tr E(t)=t$. \end{cor} \begin{proof} Since $\psi$ is continuous, it is the generalized inverse of some right-continuous and strictly increasing $\theta$. Changing variables, applying Lemma \ref{lemma:theta} and changing variables again yields the claim. \end{proof} The following observation lies in the same vein. \begin{lemma} \label{lemma:detinnerouter} Let $A\in\Sch$. Then $A$ is outer (resp. sc-inner) if and only if $\det A$ is outer (resp. sc-inner). \end{lemma} \begin{proof} If $A$ is outer or sc-inner, respectively, then $\det A$ is by the determinant formula also outer or sc-inner, respectively. If on the other hand $\det A$ is outer or sc-inner, respectively, then we find $$\det A(z)=c\cdot\exp\left(\int_0^{2\pi} h_z(\varphi)\,d\psi(\varphi)\right)$$ with $|c|=1$ and $\psi$ being an absolutely continuous or singular continuous increasing function, respectively. Therefore we can conclude by Corollary \ref{cor:thetacont} that $$A(z)=U\mint_0^{2\pi}\exp\left(h_z(\varphi)\,dE(\psi(\varphi))\right)$$ where $U$ is a unitary constant and $E$ an increasing mvf satisfying $\tr E(t)~=~t$.
Note that $E$ is Lipschitz continuous, because for $t\ge s$ we have $$\|E(t)-E(s)\|\le \tr (E(t)-E(s))=t-s$$ It follows that if $\psi$ is absolutely continuous, then also $E\circ\psi$ is absolutely continuous. On the other hand, if $\psi$ is singular continuous, then $E\circ\psi$ is also singular continuous. That is, $A$ is outer or sc-inner, respectively. \end{proof} Now we can proceed to proving Theorem \ref{thm:main}. In the first step we show how to detach the pp-inner factor. \begin{lemma} \label{lemma:ppinnerfact} Let $A\in\Sch$ and assume that $A$ has no zeros. Then there exists a pp-inner function $S_{pp}$ and a continuous increasing mvf $\Sigma$ such that \begin{align} A(z)=S_{pp}(z)\cdot\mint_0^{2\pi} \exp(h_z(\varphi)d\Sigma(\varphi)) \end{align} for all $z\in\D$. \end{lemma} \begin{proof} By Potapov's Theorem \ref{thm:potapov} \begin{align} A(z)=U\mint^L_0 \exp\left(h_z(\theta(t))\,dE(t)\right) \end{align} where $U$ is a unitary constant, and $E,L,\theta$ are as stated in that theorem. Let $\{(a_k,b_k)\,:\,k=1,\dots,m\}$ with $m\in\N\cup\{\infty\}$ be a complete enumeration of all the intervals on which $\theta$ is constant and let $\theta_k$ be the value of $\theta$ on the interval $(a_k,b_k)$. The length of the $k$th interval will be denoted by $\ell_k=b_k-a_k$. For any increasing function $\psi$, we denote the total length of intervals on which $\psi$ is constant by $\ell(\psi)$. That is, $\ell(\theta)=\sum_{i=1}^m \ell_i$. Note that $\ell(\theta)\le L<\infty$. We inductively construct sequences $(S_j)_j, (A_j)_j$ of contractive mvfs such that $S_0=I$, $A_0=A$, $S_j\cdot A_j=A$ and \begin{align} \label{eqn:ppinnerfact1} S_j(z)=\prod_{i=1}^j\mint_0^{\ell_i} \exp(h_z(\theta_i)dE_i(t)),\,\,A_j(z)=U_j\mint_0^{L_j}\exp(h_z(\theta^{(j)}(t))dF_j(t)) \end{align} for $j\ge 1$ with $U_j$ unitary constants, $L_j=L-\sum_{i=1}^{j-1} \ell_i$, $\theta^{(j)}$ increasing functions and $F_j, E_j$ increasing mvfs satisfying $\tr E_j(t)=\tr F_j(t)=t$. We also want the intervals of constancy of $\theta^{(j)}$ to have exactly the lengths $\ell_{j+1},\ell_{j+2},\dots,\ell_m$ and that $\theta^{(j)}$ assumes the values $\theta_{j+1},\theta_{j+2},\dots,\theta_m$ on them, respectively. Let $0\le k< m$ and assume we have constructed $A_j,S_j$ already for all $0\le j\le k$. Then $A_{k}$ has an interval of constancy of length $\ell_{k+1}$, say $(a,b)$. We rewrite $A_k$ as $$A_k(z)=\mint_0^a e^{h_z(\theta^{(k)}(t))dF_{k}(t)} \cdot \mint_a^b e^{h_z(\theta_{k+1})dF_{k}(t)} \cdot \mint_b^{L_{k}}e^{h_z(\theta^{(k)}(t))dF_{k}(t)}$$ By Lemma \ref{lemma:ginzburg}, we can interchange the first two factors while preserving their determinants. Using Lemma \ref{lemma:theta} and Proposition \ref{prop:mintunitaryconst} on the modified factors yields \begin{align} \label{eqn:ppinnerfact3} A_k(z)=\mint_0^{\ell_{k+1}} e^{h_z(\theta_{k+1})dG(t)}\cdot \tilde{U}\cdot\mint_0^a e^{h_z(\theta^{(k)}(t))dH(t)} \cdot \mint_b^{L_{k}}e^{h_z(\theta^{(k)}(t))dF_{k}(t)} \end{align} where $\tilde{U}$ is a unitary constant with $\det\tilde{U}=1$ and $G,H$ are increasing mvfs with $\tr G(t)=\tr H(t)=t$. Now we define $E_{k+1}=G$ and define $S_{k+1}$ as prescribed in \eqref{eqn:ppinnerfact1}. Also set \begin{align} \label{eqn:ppinnerfact4} A_{k+1}(z)=\tilde{U}\cdot\mint_0^a e^{h_z(\theta^{(k)}(t))dH(t)} \cdot \mint_b^{L_{k}}e^{h_z(\theta^{(k)}(t))dF_{k}(t)} \end{align} Now it remains to check that $A_{k+1}$ has the form prescribed in \eqref{eqn:ppinnerfact1}. 
Computing the determinant gives \begin{align*} \det A_{k+1}(z) &=\exp\left(\int_0^a h_z(\theta^{(k)}(t))dt + \int_b^{L_{k}}h_z(\theta^{(k)}(t))dt\right)\\ &=\exp\left(\int_0^{L_{k+1}} h_z(\theta^{(k+1)}(t)) dt\right) \end{align*} where we have set $L_{k+1}=L_k-b+a$ and $$\theta^{(k+1)}(t)=\left\{\begin{array}{ll} \theta^{(k)}(t),\, & \text{for}\;t\in[0,a)\\ \theta^{(k)}(t+b-a),\, & \text{for}\;t\in[a,L_{k+1}] \end{array}\right. $$ By Lemma \ref{lemma:theta}, $A_{k+1}$ can be written as $$A_{k+1}(z)=U_{k+1}\mint_0^{L_{k+1}}\exp(h_z(\theta^{(k+1)}(t))dF_{k+1}(t))$$ where $U_{k+1}$ is a unitary constant and $F_{k+1}$ an increasing mvf with $\tr F_{k+1}(t)~=~t$. If $m<\infty$, then this process terminates after $m$ steps. Then $A=S_mA_m$ is the desired factorization up to the unitary constants $U_k$ which we can pull up to the front using Proposition \ref{prop:mintunitaryconst}. So from now on we assume $m=\infty$. We claim that the infinite product $$S_\infty(z)=\prod_{i=1}^\infty \mint_0^{\ell_i} \exp(h_z(\theta_i)dE_i(t))$$ converges. To verify that, we invoke Theorem \ref{thm:prodconv}. Condition (b) is trivially satisfied because all the factors are contractive mvfs. Denote the factors by $B_i(z)=\mint_0^{\ell_i} \exp(h_z(\theta_i)dE_i(t))$. To check condition (a), we write $$B_i(z)=I+\int_0^{\ell_i} h_z(\theta_i) dE_i(t)+R_i$$ where the remainder term $R_i$ satisfies \eqref{eqn:minttayloresterr}. Suppose that $K\subset\D$ is compact. Then there is $C>0$ such that $\frac{1+r}{1-r}\le C$ for all $z=re^{i\varphi}\in K$. Now estimate $$\left\|\int_0^{\ell_i} h_z(\theta_i) dE_i(t)\right\|\le \int_0^{\ell_i} |h_z(\theta_i)| dt \le C \ell_i$$ Let $N$ be large enough such that $C\ell_i\le 1$ for all $i\ge N$. Then $R_i\le C^2 \ell_i^2$ for $i\ge N$ and $$\sum_{i=N}^\infty \|I-B_i(z)\|\le \sum_{i=N}^\infty \int_0^{\ell_i}|h_z(\theta_i)| dt + R_i \le \sum_{i=N}^\infty C\ell_i +C^2\ell_i^2<\infty$$ because $\sum_{i=1}^\infty \ell_i\le L<\infty$ converges. This proves the prerequisites of Theorem \ref{thm:prodconv}. Therefore $(S_k)_k$ converges uniformly on compact sets to the pp-inner function $S_\infty$. Therefore also $A_k=S_k^{-1}A$ converges to a contractive mvf $A_\infty=S_\infty^{-1}A$. Applying the determinant formula gives \begin{align} \det A_\infty(z)&=(\det S_\infty(z))^{-1} \det A(z)\\ &=c\cdot \exp\left(-\sum_{i=1}^\infty \ell_i h_z(\theta_i)+\int_0^L h_z(\theta(t))dt\right)\label{eqn:ppinnerfact5} \end{align} where $|c|=1$. As in the proof of Lemma \ref{lemma:theta} we write \begin{align} \label{eqn:ppinnerfact6} \int_0^L h_z(\theta(t)) dt=\int_0^{2\pi} h_z(\varphi)d\mu(\varphi) \end{align} where $\mu$ is the positive Borel measure corresponding to the increasing function $\theta^\dagger$. Decompose $\mu=\mu_p+\mu_c$ where $\mu_p$ is a pure point measure and $\mu_c$ an atomless measure. The pure point component $\mu_p$ is given by the jump discontinuities of $\theta^\dagger$ which in turn correspond to the intervals of constancy of $\theta$ the size of the jump being the length of the interval. That is, \begin{align} \label{eqn:ppinnerfact7} \mu_p=\sum_{i=1}^\infty \ell_i \delta_{\theta_i} \end{align} where $\delta_\varphi$ denotes the Dirac measure at $\varphi$. 
Combining \eqref{eqn:ppinnerfact5}, \eqref{eqn:ppinnerfact6}, \eqref{eqn:ppinnerfact7} we get \begin{align} \det A_\infty(z)=c\cdot \exp\left(\int_0^{2\pi} h_z(\varphi)d\mu_c(\varphi)\right) \end{align} By Corollary \ref{cor:thetacont} we get \begin{align} A_\infty(z)=U_\infty\mint_0^{2\pi} \exp(h_z(\varphi)\,d\Sigma(\varphi)) \end{align} for a unitary constant $U_\infty$ and a continuous increasing mvf $\Sigma$. After moving the unitary constant $U_\infty$ to the front, $A=S_\infty A_\infty$ is the desired factorization. \end{proof} \begin{thm} \label{thm:ginzburg-fact} Let $A\in\H^\infty$. Then there exist a B.P. product $B$, a pp-inner function $S_{pp}$, an sc-inner function $S_{sc}$ and an outer function $E$ such that \begin{align} \label{eqn:ginzburg-fact} A(z)=B(z)S_{pp}(z)S_{sc}(z)E(z)\,\,\,\,\text{for}\;z\in\D \end{align} \end{thm} \begin{proof} Assume without loss of generality that $A\in\Sch$ (multiply with a positive multiple of the identity matrix). The B.P. product can be detached using Theorem \ref{thm:bpfactor}. Thus we can assume that $A$ has no zeros. Lemma \ref{lemma:ppinnerfact} reduces the claim to showing that a contractive mvf $A$ of the form \begin{align} \label{eqn:ginzburgpf1} A(z)=\mint^{2\pi}_0 \exp\left(h_z(\varphi)\,d\Sigma(\varphi)\right) \end{align} where $\Sigma$ is a continuous increasing mvf, can be factored into an sc-inner and an outer function. To achieve this let us decompose $\Sigma=\Sigma_s+\Sigma_a$ where $\Sigma_s$ is a singular-continuous and $\Sigma_a$ an absolutely continuous function (see e.g. \cite{SteinShakarchi3}). By continuity, we can approximate $\Sigma_s$ uniformly by a sequence of step functions $(T_k)_k$ which we may assume to be of the form $$T_k=\sum_{i=1}^{m_k}T^{(i)}_k\ch_{(t_k^{(i-1)},t_k^{(i)}]}$$ for $m_k\in\N, 0=t_k^{(0)}<t_k^{(1)}<\cdots<t_k^{(m_k)}=2\pi$ and $T_k^{(i)}$ Hermitian matrices such that $T_k^{(1)}=0$ and $\Delta_k^{(i)}=T_k^{(i)}-T_k^{(i-1)}\ge 0$ for $m_k\ge i> 1$. Set $\Delta_k^{(1)}=0$ for convenience. Let $\Sigma^{(k)}=T_k+\Sigma_a$. Then $$\mint^{t_k^{(i)}}_{t_k^{(i-1)}} e^{h_z(\varphi)\,d\Sigma^{(k)}(\varphi)} =e^{h_z(t_k^{(i-1)})\Delta_k^{(i)}}\mint^{t_k^{(i)}}_{t_k^{(i-1)}} e^{h_z(\varphi)\,d\Sigma_a(\varphi)}$$ for $i=1,\dots,m_k$. This implies \begin{align} \label{eqn:factpf2} \mint_{0}^{2\pi} e^{h_z(\varphi)\,d\Sigma^{(k)}(\varphi)} =\prod_{i=1}^{m_k}e^{h_z(t_k^{(i-1)})\Delta_k^{(i)}}\mint^{t_k^{(i)}}_{t_k^{(i-1)}} e^{h_z(\varphi)\,d\Sigma_a(\varphi)}\end{align} We use Lemma \ref{lemma:ginzburg} to move all the factors $e^{h_z(t_k^{(i-1)})\Delta_k^{(i)}}$ up to the front. Thereby, we obtain \begin{align} \label{eqn:factpf3} \mint^{2\pi}_{0} e^{h_z(\varphi)\,d\Sigma^{(k)}(\varphi)}=S_k(z)E_k(z) \end{align} where $S_k$, $E_k$ are contractive mvfs satisfying \begin{align} \label{eqn:factpf5} \det E_k(z)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\tr\Sigma_a(\varphi)\right)\;\text{and} \end{align} \begin{align} \label{eqn:factpf6} \det S_k(z)=\exp\left(\sum_{i=1}^{m_k} h_z(t_k^{(i)})\tr\Delta_k^{(i)}\right)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\tr T_k(\varphi)\right) \end{align} By Montel's theorem, there exists a subsequence $(k_j)_j$ such that $(S_{k_j})_j$ and $(E_{k_j})_j$ both converge uniformly on compact sets to contractive mvfs $S$ and $E$, respectively. 
Equation \eqref{eqn:factpf5} implies \begin{align} \label{eqn:factpf7} \det E(z)=\lim_{j\rightarrow\infty}\det E_{k_j}(z)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\tr\Sigma_a(\varphi)\right) \end{align} Because the trace of an absolutely continuous function is absolutely continuous, $\det E$ is an outer function. By Lemma \ref{lemma:detinnerouter}, also $E$ must be an outer function. We also know that $\tr T_k$ converges uniformly to $\tr\Sigma_s$ as $k\rightarrow\infty$. Therefore \eqref{eqn:factpf6} gives \begin{align} \label{eqn:factpf8} \det S(z)=\lim_{j\rightarrow\infty}\det S_{k_j}(z)=\exp\left(\int_0^{2\pi} h_z(\varphi) d\tr \Sigma_s(\varphi)\right) \end{align} The trace of a singular continuous mvf is singular continuous. Hence Lemma \ref{lemma:detinnerouter} shows that $S$ is an sc-inner mvf. By Helly's convergence Theorem \ref{thm:hellyconvmvf}, the left hand side of \eqref{eqn:factpf3} converges to $A(z)$ for all $z\in\D$. Therefore we obtain the desired factorization: \begin{align} A(z)=S(z)E(z) \end{align} \end{proof} We proceed proving some additional properties of inner and outer functions. \begin{lemma} \label{lemma:innerouterconst} A function $A\in\H^\infty$ that is both inner and outer must be a unitary constant. \end{lemma} \begin{proof} If $A$ is inner and outer, then also $\det A$ is a scalar function which is inner and outer. Hence $\det A(z)$ is a unimodular constant. By Potapov's Theorem \ref{thm:potapov}, $A$ is given by \eqref{eqn:potapovrepr}. Plugging in the value $z=0$ gives $$1=|\det A(0)|=\exp(-L)$$ and therefore $L=0$. \end{proof} \begin{thm} \label{thm:innermintrep} Let $A\in\H^\infty$. Then $A$ is an inner function if and only if there exist a B.P. product $B$, a pp-inner function $S_{pp}$ and an sc-inner function $S_{sc}$ such that \begin{align} \label{eqn:innerfct1} A(z)=B(z)S_{pp}(z)S_{sc}(z) \end{align} for all $z\in\D$. \end{thm} \begin{proof}We already saw that functions of the form \eqref{eqn:innerfct1} are inner. The inner-outer factorization \eqref{eqn:ginzburg-fact} and Lemma \ref{lemma:innerouterconst} imply the other direction. \end{proof} \begin{thm} \label{thm:outermintrep} A function $A\in\H^\infty$ is outer if and only if for every $B\in\H^\infty$ with the property $B^*(e^{i\theta})B(e^{i\theta})=A^*(e^{i\theta})A(e^{i\theta})$ for almost every $\theta\in[0,2\pi]$, we have that \begin{align} \label{eqn:outerdef2} B^*(z)B(z)\le A^*(z)A(z) \end{align} for all $z\in\D$. \end{thm} Ginzburg \cite{Ginzburg} uses this as the definition of outerness. The condition can be shown to be equivalent to a Beurling-type definition\index{Beurling} of outer functions by invariant subspaces of a certain shift operator (see \cite{Sz.-Nagy}). \begin{proof} Suppose that $A$ is an outer function and let $B\in\H^\infty$ be such that $A^*A=B^*B$ holds a.e. on $\T$.\\ By Theorems \ref{thm:ginzburg-fact} and \ref{thm:innermintrep} we have $B=X\cdot E$ for an inner function $X$ and an outer function $E$. Then $A^*A=E^* X^* X E=E^*E$ a.e. on $\T$. Therefore, $Y=A\cdot E^{-1}$ is an inner function. But it is also outer. So $A(z)=U\cdot E(z)$ for a unitary constant $U$. Since $X$ is an inner function, $X^*X\le I$ on $\D$. Hence $$B^*B=E^* X^* X E \le E^* E = A^* A.$$ \end{proof} Note that the proof also shows that outer functions are, up to a unitary constant, uniquely determined by their radial limit functions. \begin{thm} \label{thm:detinnerouter} A mvf $A\in\H^\infty$ is inner (resp. outer) if and only if $\det A$ is a scalar inner (resp. outer) function. 
\end{thm} \begin{proof} We already showed the part for outer functions in Lemma \ref{lemma:detinnerouter}. The part for inner functions follows similarly using Theorem \ref{thm:innermintrep}. \end{proof} We still have to show that the factorization \eqref{eqn:ginzburg-fact} obtained in Theorem \ref{thm:ginzburg-fact} is unique. \begin{thm} \label{thm:fact-unique} The functions $B, S_{pp}, S_{sc}, E$ in Theorem \ref{thm:ginzburg-fact} are uniquely determined up to multiplication by a unitary constant. \end{thm} \begin{proof} Uniqueness of the B.P. product was shown already in Theorem \ref{thm:bpfactor}. Suppose that $$S_{pp}S_{sc}E=\tilde{S}_{pp}\tilde{S}_{sc}\tilde{E}$$ are two factorizations into pp-inner, sc-inner and outer factors. Then $$\tilde{S}_{sc}^{-1}\tilde{S}_{pp}^{-1}S_{pp}S_{sc}=\tilde{E}\cdot E^{-1}$$ The left hand side is an inner function and the right hand side an outer function. By Lemma \ref{lemma:innerouterconst}, $\tilde{E}(z)=U\cdot E(z)$ for some unitary constant $U$. Now we have $\tilde{S}_{pp}^{-1}S_{pp}=\tilde{S}_{sc}U S_{sc}^{-1}$. The left hand side is a pp-inner function and the right hand side an sc-inner function. This holds also for their determinants. By the scalar uniqueness, both sides must be unitary constants. \end{proof} \noindent Theorems \ref{thm:ginzburg-fact} and \ref{thm:fact-unique} prove the first part of Theorem \ref{thm:main}. \subsection{Uniqueness} Now we proceed to prove the remaining claim of Theorem \ref{thm:main}. \begin{thm}Let $A_i\in\H^\infty$, $i=1,2$ with $$A_i(z)=\mint^{2\pi}_0 \exp(h_z(\varphi)M_i(\varphi)d\varphi)$$ for $z\in\D$ and $M_i$ Lebesgue integrable Hermitian mvfs. Assume $A_1\equiv A_2$ on $\D$. Then $M_1=M_2$ almost everywhere on $[0,2\pi]$. That is, the function $M$ in the representation \eqref{eqn:outer} of an outer function is uniquely determined up to changes on a set of measure zero. \end{thm} \begin{proof} Set $$A_i(z,t)=\mint_0^t \exp(h_z(\varphi)M_i(\varphi)d\varphi)\,\,\text{and}\,\, \tilde{A}_i(z,t)=\mint_t^{2\pi} \exp(h_z(\varphi)M_i(\varphi)d\varphi) $$ for $z\in\D$, $t\in[0,2\pi]$, $i=1,2$. These are outer functions for every fixed $t$. Then $$A_1(z,t)\tilde{A}_1(z,t)=A_1(z)=A_2(z)=A_2(z,t)\tilde{A}_2(z,t)$$ and consequently \begin{align} \label{eqn:uniquenesspf} X(z,t)=A_2(z,t)^{-1}A_1(z,t)=\tilde{A}_2(z,t)\tilde{A}_1(z,t)^{-1} \end{align} for all $z\in\D$, $t\in[0,2\pi]$. For every $t$, $X(\cdot,t)$ is an outer function. But for every $\theta\in(t,2\pi)$, the radial limit $A_i(e^{i\theta},t)$ is unitary. Consequently, $X(e^{i\theta},t)$ is unitary for $\theta\in (t,2\pi)$. Likewise, $\tilde{A}_i(e^{i\theta},t)$ is unitary for all $\theta\in(0,t)$. Together, \eqref{eqn:uniquenesspf} implies that $X(e^{i\theta},t)$ is unitary for almost all $\theta\in[0,2\pi]$, i.e. $X(\cdot,t)$ is an inner function. Therefore, $X(\cdot,t)$ must be a unitary constant. That is, there exist unitary matrices $U(t)$ for $t\in[0,2\pi]$ such that \begin{align} \label{eqn:uniquenesspf2} A_1(z,t)=A_2(z,t)U(t) \end{align} for all $z\in\D$. Since $A_i(0,t)=\mint_0^{t}\exp(-M_i(\varphi)d\varphi)$ are positive matrices, $U(t)$ is also positive, i.e. $U(t)=I$. So we conclude \begin{align} \mint_0^{t} \exp(-M_1(\varphi)d\varphi)=\mint_0^{t} \exp(-M_2(\varphi)d\varphi) \end{align} Differentiating this equation with respect to $t$ gives \begin{align} -M_1(t)A_1(0,t)=-M_2(t)A_2(0,t) \end{align} for almost all $t\in[0,2\pi]$. Since $A_1(0,t)=A_2(0,t)$, the assertion follows.
\end{proof} It is natural to ask whether the $E_k$ and $S$ in the representations of pp-inner and sc-inner functions, respectively, are uniquely determined. The question for uniqueness of $S$ in the representation of an sc-inner function remains unresolved for now. For the pp-inner part, the answer is negative. To see that there is no uniqueness for the $E_k$ let us consider the following. \begin{example} Let $$E_1(t)=\begin{pmatrix}t^2/2 & 0\\0 & t-t^2/2\end{pmatrix}\;\text{and}\; E_2(t)=\begin{pmatrix}t-t^2/2 & 0\\ 0 & t^2/2\end{pmatrix}$$ for $t\in[0,1]$. The mvfs $E_1$, $E_2$ are increasing and satisfy $\tr E_1(t)=\tr E_2(t)=t$. Then $$\mint_0^1 \exp\left(\frac{z+1}{z-1}\, dE_i(t)\right)=\begin{pmatrix}e^{\frac{z+1}{2(z-1)}} & 0\\ 0 & e^{\frac{z+1}{2(z-1)}}\end{pmatrix}$$ for $i=1,2$, but $E_1(t)\not=E_2(t)$ for almost all $t\in[0,1]$. \end{example} Despite the apparent non-uniqueness, it turns out that sometimes $E_k$ \emph{is} uniquely determined and in fact there exists a criterion to decide when that is the case \cite[Theorem 0.2]{Arov-DymII}, \cite[Theorem 30.1]{Brodskii}. The proof of this however relies on an elaborate operator theoretic framework, which would lead us too far astray here. \appendix \section{Background} \label{sect:background} In this section we collect some basic facts on matrix norms and analytic matrix-valued functions. \subsection{Properties of the matrix norm} \label{subsect:matrixnorm} The matrix norm\index{matrix norm} we use throughout this work is the operator norm\index{operator norm} given by $$\|A\|=\sup_{\|v\|=1}\|Av\|$$ for $A\in M_n$, where $\|v\|$ denotes the Euclidean norm of $v\in\C^n$. For reasons which are apparent from Lemma \ref{lemma:matrixnorm}, this norm is often referred to as spectral norm\index{spectral norm}. \begin{lemma} \label{lemma:matrixnorm} Let $U\in M_n$ be unitary, $A,B\in M_n$ arbitrary matrices, $D=\diag(\lambda_1,\dots,\lambda_n)$ a diagonal matrix and $v\in\C^n$. Then \begin{enumerate} \item[(1)] $\|AB\|\le\|A\|\cdot\|B\|$ \tinypf{$\|ABv\|\le \|A\| \|Bv\|$ holds for all $\|v\|=1$.} \item[(2)] $\|A\|=\|A^*\|$ \tinypf{$\|Av\|^2=\langle Av,Av\rangle=\langle v,A\adj Av\rangle\le\|v\|\cdot\|A\adj\|\cdot\|Av\|$ $\Rightarrow$ $\|A\|\le\|A\adj\|$. '$\ge$' follows by symmetry.} \item[(3)] $\|AU\|=\|UA\|=\|A\|$ \tinypf{(a) $\|UAv\|^2=\langle UAv,UAv\rangle=\langle Av,Av\rangle=\|Av\|^2$ $\Rightarrow$ $\|UA\|=\|A\|$. (b) $\|AU\|=\sup_{v\not=0} \frac{\|AUv\|}{\|v\|}=\sup_{w\not=0} \frac{\|Aw\|}{\|U\adj w\|}=\|A\|$} \item[(4)] $\|D\|=\max_{i}|\lambda_i|$ \tinypf{'$\ge$': $\|D\|\ge \|De_i\|=\|\lambda_i\|$. '$\le$': $v=(x_1,\dots,x_n)^\perp$ $\Rightarrow$ $\|Dv\|^2=\sum_i |\lambda_i x_i|^2\le \max_i |\lambda_i|^2 \cdot \|v\|^2$} \item[(5)] $A$ Hermitian $\Rightarrow$ $\|A\|=\sigmamax(A)$ \tinypf{$A$ is unitarily diagonalizable $\Rightarrow$ $A=UDU\adj$ $\Rightarrow$ $\|A\|\overset{(3)}{=}\|D\|\overset{(4)}{=}\sigmamax(A)$} \item[(6)] $\|A\|=\sqrt{\sigmamax(AA^*)}=\sqrt{\|AA^*\|}$ \tinypf{$\|AA\adj\|=\sigmamax(AA\adj)$ by (5). First $\|AA\adj\|\le\|A\|\cdot\|A\adj\|=\|A\|^2$. For $\|v\|=1$ we have $\|A\adj v\|^2=\langle A\adj v,A\adj v\rangle=\langle v,AA\adj v\rangle\le\|AA\adj v\|\le\|AA\adj\|$, so $\|A\|^2=\|A\adj\|^2\le\|AA\adj\|$.} \item[(7)] $\|U\|=1$ \tinypf{$\|U\|\overset{(6)}{=}\sqrt{\|UU\adj\|}=1$} \item[(8)] $A\ge 0$ $\Rightarrow$ $\|A\|\le\tr A$ \tinypf{$\|A\|=\sigmamax(A)\le\tr(A)$} \item[(9)] $\|A\|=\sup_{\|x\|=1}\sqrt{|x^* AA^*x|}$ \tinypf{By (6) it suffices to show $\|A\|=\sup_{\|x\|=1}|x^*Ax|$ for $A$ Hermitian. 
For $\|x\|=1$ note $x^*Ax=\sum_{i,j}A_{ij}\overline{x}_i x_j=\sum_{i} (Ax)_i \overline{x}_i=\langle Ax,x\rangle$. So $|x^*Ax|\le \|A\|$. Taking the supremum gives '$\ge$' of the claim. The other direction follows from (5) by plugging in an eigenvector corresponding to the greatest eigenvalue of $A$.} \item[(10)] $\|A\|=\sup_{\|x\|=\|y\|=1}|x^* Ay|$ \tinypf{Same argument as in (9).} \item[(11)] $\|A\|\ge \sqrt{\sum_j |A_{ij}|^2}$ for all $i=1,\dots,n$. \tinypf{Plug in $x=e_i$ in (9).} \item[(12)] $\|A\|\ge |A_{ij}|$ for all $i,j=1,\dots,n$. \tinypf{Plug in $x=e_i$, $y=e_j$ in (10).} \item[(13)] $\|A^{-1}\|\le \frac{\|A\|^{n-1}}{|\det A|}$ if $\det A\not=0$ \tinypf{By homogeneity we may assume $\|A\|=1$. Let $\lambda$ be the smallest eigenvalue of $AA\adj$. Then $\lambda\ge \det(AA^*)=(\det A)^2$ by (6). Note that $\lambda^{-1}$ is the greatest eigenvalue of $(AA\adj)^{-1}$, so $\lambda^{-1}=\|(A^{-1})(A^{-1})\adj\|=\|A^{-1}\|^2$. Together $\|A^{-1}\|=\lambda^{-1/2}\le \frac{1}{|\det A|}$.} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(1)] $\|ABv\|\le \|A\| \|Bv\|$ holds for all $\|v\|=1$. \item[(2)] $\|Av\|^2=\langle Av,Av\rangle=\langle v,A\adj Av\rangle\le\|v\|\cdot\|A\adj\|\cdot\|Av\|$ $\Rightarrow$ $\|A\|\le\|A\adj\|$. '$\ge$' follows by symmetry. \item[(3)] (a) $\|UAv\|^2=\langle UAv,UAv\rangle=\langle Av,Av\rangle=\|Av\|^2$ $\Rightarrow$ $\|UA\|=\|A\|$. (b) $\|AU\|=\sup_{v\not=0} \frac{\|AUv\|}{\|v\|}=\sup_{w\not=0} \frac{\|Aw\|}{\|U\adj w\|}=\|A\|$ \item[(4)] '$\ge$': $\|D\|\ge \|De_i\|=\|\lambda_i\|$. '$\le$': $v=(x_1,\dots,x_n)^\perp$ $\Rightarrow$ $\|Dv\|^2=\sum_i |\lambda_i x_i|^2\le \max_i |\lambda_i|^2 \cdot \|v\|^2$ \item[(5)] $A$ is unitarily diagonalizable $\Rightarrow$ $A=UDU\adj$ $\Rightarrow$ $\|A\|\overset{(3)}{=}\|D\|\overset{(4)}{=}\sigmamax(A)$ \item[(6)] $\|AA\adj\|=\sigmamax(AA\adj)$ by (5). First $\|AA\adj\|\le\|A\|\cdot\|A\adj\|=\|A\|^2$. For $\|v\|=1$ we have $\|A\adj v\|^2=\langle A\adj v,A\adj v\rangle=\langle v,AA\adj v\rangle\le\|AA\adj v\|\le\|AA\adj\|$, so $\|A\|^2=\|A\adj\|^2\le\|AA\adj\|$. \item[(7)] $\|U\|\overset{(6)}{=}\sqrt{\|UU\adj\|}=1$ \item[(8)] $\|A\|=\sigmamax(A)\le\tr(A)$ \item[(9)] By (6) it suffices to show $\|A\|=\sup_{\|x\|=1}|x^*Ax|$ for $A$ Hermitian. Let $\|x\|=1$. Note $x^*Ax=\sum_{i,j}A_{ij}\overline{x}_i x_j=\sum_{i} (Ax)_i \overline{x}_i=\langle x,Ax\rangle$. So $|x^*Ax|\le \|A\|$. Taking the supremum gives '$\ge$' of the claim. The other direction follows from (5) by plugging in an eigenvector corresponding to the greatest eigenvalue of $A$. \item[(10)] Same argument as in (9). \item[(11)] Plug in $x=e_i$ in (9). \item[(12)] Plug in $x=e_i$, $y=e_j$ in (10). \item[(13)] By homogeneity we may assume $\|A\|=1$. Let $\lambda$ be the smallest eigenvalue of $AA\adj$. Then $\lambda\ge \det(AA^*)=(\det A)^2$ by (6). Note that $\lambda^{-1}$ is the greatest eigenvalue of $(AA\adj)^{-1}$, so $\lambda^{-1}=\|(A^{-1})(A^{-1})\adj\|=\|A^{-1}\|^2$. Together $\|A^{-1}\|=\lambda^{-1/2}\le \frac{1}{|\det A|}$. \end{enumerate} \end{proof} The eigenvalues of the positive matrix $AA^*$ are also called the \defin{singular values}\index{singular value} of $A$. So property (6) says that the matrix norm of $A$ is the square root of the largest singular value.\\ The following fact is often useful when dealing with contractive mvfs and makes the particular choice of the spectral norm especially convenient. \begin{lemma} \label{lemma:normcontr} A matrix $A\in M_n$ satisfies $\|A\|\le 1$ if and only if $I-AA^*\ge 0$. 
\end{lemma} \begin{proof} Let $AA^*=UDU^*$ with $U$ a unitary and $D$ a diagonal matrix. Then $I-AA^*\ge 0$ is equivalent to $I-D\ge 0$ and this is equivalent to $\|A\|\le 1$ since the diagonal entries of $D$ are the singular values of $A$. \end{proof} \subsection{Analytic matrix-valued functions} \label{analyticmatrixfcts} By an \defin{analytic matrix-valued function (analytic mvf)}\index{analytic mvf} in the unit disk $\D\subset\C$ we mean a function $A:\D\rightarrow M_n$ all components of which are holomorphic throughout $\D$. The terms \defin{rational}, \defin{continuous}, \defin{differentiable}, \defin{meromorphic} and \defin{harmonic} are defined the same way for mvfs. It is clear that the basic tools of ``non-multiplicative'' complex analysis, i.e. Laurent development, Cauchy and Poisson integral formula, mean value property etc., immediately carry over to the matrix-valued case. An analytic mvf $A$ is called \defin{bounded} if $$\|A\|_\infty=\sup_{z\in\D}\|A(z)\|<\infty$$ The space $\H^\infty$ is defined as the set of bounded analytic mvfs on the unit disk whose determinant does not vanish identically. \begin{lemma} \label{unifconvlemma} Let $(A_k)_k$ be a sequence of analytic mvfs on $\D$ which converges uniformly on compact subsets to a mvf $A$. Then $A$ is analytic. \end{lemma} \begin{proof}This follows just as in the scalar case, using the appropriate analogues of Cauchy's and Morera's theorems (cf. \cite[Theorem 10.28]{Rudin2}). \end{proof} \begin{lemma}\index{subharmonic} \label{subharmlemma} Let $A$ be an analytic mvf on $\D$. Then $\|A\|$ is subharmonic. \end{lemma} \begin{proof} Let $z_0\in\D$ and $\gamma(t)=z_0+re^{it}$ for $t\in[0,2\pi)$ where $0\le r<1$ is such that $\gamma(t)\in\D$ for all $t\in[0,2\pi)$. Then by the Cauchy integral formula applied to the components of $A$ we get $$A(z_0)=\frac{1}{2\pi i}\int_\gamma \frac{A(z)}{z-z_0} dz =\frac{1}{2\pi}\int_0^{2\pi} A(\gamma(t))\,dt$$ Hence the triangle inequality gives $\|A(z_0)\|\le\frac{1}{2\pi}\int_0^{2\pi}\|A(z_0+re^{it})\|\,dt$ which proves the claim. \end{proof} By saying that $A:\D\rightarrow M_n$ is a \emph{contractive} mvf\index{contractive mvf}, we mean that $\|A(z)\|\le 1$ for all $z\in\D$. The space of all contractive analytic mvfs whose determinant does not vanish identically is denoted by $\Sch\subset\H^\infty$.\index{Schur class} The following rank invariance statement is from \cite[Chapter I]{Potapov}. \begin{lemma}\index{rank invariance lemma} \label{rankinvar} Let $A\in\Sch$ be such that $I-A(z_0)^*A(z_0)$ has rank $0\le r<n$ for some $z_0\in\D$. Then there exist unitary matrices $U,V$ and a contractive analytic mvf $\tilde{A}:\D\rightarrow M_r$ such that $$A(z)=U\left(\begin{array}{cc}\tilde{A}(z) & 0\\0 & I_{n-r}\end{array}\right)V$$ for all $z\in\D$. In particular, $I-A(z)A^*(z)$ has rank $r$ throughout $\D$. \end{lemma} \begin{proof} By assumption, $1$ is an eigenvalue of $A(z_0)A^*(z_0)$ and the dimension of the corresponding eigenspace is $n-r$. Hence, by singular value decomposition, we can find unitary matrices $U, V$ and an $r\times r$ diagonal matrix $D$ such that $$A(z_0)=U\left(\begin{array}{cc}D & 0\\0 & I_{n-r}\end{array}\right)V$$ Now consider the contractive analytic matrix function $B(z)=U^* A(z) V^*$. Since $|B_{ij}(z)|\le\|B(z)\|=\|A(z)\|\le 1$, the entries of $B$ are analytic functions mapping $\D$ to $\overline{\D}$. We have by construction that $B_{ii}(z_0)=1$ for $i=r+1,\dots,n$ which implies by the maximum principle that $B_{ii}(z)=1$ for all $z\in\D$, $i=r+1,\dots,n$.
Finally, property (11) from above implies that the off-diagonal entries in the rows and columns $r+1,\dots,n$ of $B$ are all constant $0$. Summing up, $B$ is of the form $$B(z)=\left(\begin{array}{cc}\tilde{A}(z) & 0\\0 & I_{n-r}\end{array}\right)$$ for some contractive analytic $r\times r$ matrix-valued function $\tilde{A}$. \end{proof} We can use this to obtain the following analogue of the strong maximum principle for holomorphic functions on the unit disk. \begin{cor}\label{cor:unitaryconst}\index{maximum principle} A contractive analytic mvf, which is unitary at some $z\in\D$, is constant.\end{cor} \begin{proof}This is the case $r=0$ in the previous lemma.\end{proof} An important tool for extracting convergent subsequences from contractive analytic mvfs is Montel's theorem. \begin{thm}[Montel]\index{Montel's theorem} \label{thm:montel} Let $(A_k)_k$ be a sequence of contractive analytic mvfs. Then there exists a subsequence $(A_{k_j})_j$ which converges uniformly on compact sets to a contractive analytic mvf. \end{thm} \begin{proof} This is a consequence of the theorem of Arzel\`a-Ascoli. Equicontinuity of the sequence follows, because it is uniformly bounded and also the sequence of its derivatives is uniformly bounded by the Cauchy integral formula. \end{proof} \section{The Herglotz representation theorem for mvfs} \label{sect:herglotz} The aim of this section is to prove a generalization of the Herglotz representation theorem for positive harmonic functions (see \cite[Theorem I.3.5 (c)]{Garnett}) to the matrix-valued case. The proof uses Helly's classical (scalar) theorems, which we will prove first. \begin{thm}[Herglotz] \label{thm:herglotz}\index{Herglotz representation theorem} Let $A:\D\rightarrow M_n$ be holomorphic and $\Im A(z)=\frac{1}{2i}(A(z)-A^*(z))\ge 0$ for $z\in\D$. Then there exists a Hermitian matrix $A_0$ and an increasing mvf $\sigma:[0,2\pi]\rightarrow M_n$ such that $$A(z)=A_0+i\int^{2\pi}_0 \frac{e^{it}+z}{e^{it}-z}\,d\sigma(t)\hspace{1cm}(z\in\D)$$ \end{thm} The integral used in this theorem is the classical Riemann-Stieltjes integral with respect to a matrix-valued function. Integrals of this type are discussed for instance in \cite{Rudin1}, \cite{Apostol} and in a more general setting in \cite[\S 4]{Brodskii}. \subsection{Helly's classical theorems} The next proposition is also known as Helly's first theorem and provides a compactness result for BV-functions with respect to pointwise convergence. \begin{thm}[Helly's selection theorem, scalar version] \label{thm:hellyselsc}\index{Helly's selection theorem}\index{Helly's first theorem} Let $(f_k)_k$ be a sequence of functions in $\BV([a,b])$ with uniformly bounded total variation. Then there exists a subsequence $(f_{k_j})_j$ such that $f_{k_j}$ converges pointwise to some function $f\in\BV([a,b])$. \end{thm} \begin{proof} Let us first prove the statement in the case that $(f_k)_k$ is a uniformly bounded sequence of increasing functions. By the usual diagonal subsequence argument we can use the uniform boundedness to extract a subsequence $(f_{k_j})_j$ which converges at all rational points in $[a,b]$ to an increasing function $f$ on $[a,b]\cap\Q$. We extend $f$ to all of $[a,b]$ by $$f(t)=\inf\{f(q)\,:\,q\in[t,b]\cap\Q\}\hspace{1cm}(t\in[a,b])$$ Now $f$ is by construction increasing on $[a,b]$. It remains to discuss the convergence of $f_{k_j}(t)$ for $t\in[a,b]\cap\Q^c$. If $f$ is continuous at $t$, the density of $\Q\cap[a,b]$ in $[a,b]$ implies $f_{k_j}(t)\rightarrow f(t)$. 
Since $f$ is discontinuous at only countably many points, we may achieve convergence everywhere by repeating the diagonal subsequence argument on the set of discontinuities, thereby choosing a further subsequence. This finishes the proof for increasing functions. Now assume that $(f_k)_k$ is a sequence of BV-functions with uniformly bounded total variation. We decompose $f_k$ into increasing functions by writing $$f_k=g_k-h_k$$ with $g_k(t)=\var_{[a,t]} f_k$ and $h_k=g_k-f_k$. Assume $|f_k(t)|\le\var_{[a,b]} f_k\le C$ for all $k$. Then also $|g_k(t)|\le C$ and $|h_k(t)|\le 2C$ for all $k$ and $t\in[a,b]$. Thus we can use the statement for increasing functions to choose a subsequence such that both $(g_{k_j})_j$ and $(h_{k_j})_j$ converge pointwise to increasing functions $g$ and $h$, respectively. That implies $f_{k_j}\rightarrow f=g-h$ pointwise. Finally, $f$ is also of bounded variation since for every fixed partition $\tau$ of $[a,b]$ we have that $\lim_{j\rightarrow\infty} \var_{[a,b]}^\tau f_{k_j}=\var_{[a,b]}^\tau f$. Therefore $\var_{[a,b]} f\le C$. \end{proof} The next theorem is a convergence theorem for Riemann-Stieltjes integrals. It is also known as Helly's second theorem. \begin{thm}[Helly's convergence theorem, scalar version] \label{thm:hellyconvsc}\index{Helly's convergence theorem}\index{Helly's second theorem} Let $(f_k)_k$ be a sequence in $\BV([a,b])$ with uniformly bounded total variation which converges pointwise to some function $f$ on $[a,b]$. Then $f\in\BV([a,b])$ and for every $\varphi\in C([a,b])$ we have \begin{align*} \lim_{k\rightarrow\infty} \int^b_a \varphi(t)\,df_k(t)=\int^b_a \varphi(t)\,df(t) \end{align*} \end{thm} \begin{proof} Choose $C>0$ such that $\var_{[a,b]} f_k\le C$ for all $k$. First note that $f\in\BV([a,b])$, since for any partition $\tau$ of $[a,b]$, $\var_{[a,b]}^\tau f=\lim_{k\rightarrow\infty} \var_{[a,b]}^\tau f_k$ and therefore also $\var_{[a,b]} f\le C$. Let $\epsilon>0$. Since $\varphi$ is uniformly continuous on $[a,b]$, there exists $\delta>0$ such that $|\varphi(t)-\varphi(s)|<\frac{\epsilon}{2C}$ for $|t-s|<\delta$. Now whenever $(\tau,\xi)\in\mathcal{T}_a^b$ is a tagged partition such that $\nu(\tau)<\delta$, we get \begin{align*} \left|\int_a^b \varphi df-\sum_{i=1}^m \varphi(\xi_i)\Delta_i f\right| &=\left|\sum_{i=1}^m\int_{t_{i-1}}^{t_i} (\varphi(t)-\varphi(\xi_i))df(t)\right|\le \left(\frac{\epsilon}{2C}\right) \sum_{i=1}^m \var_{[t_{i-1},t_i]} f\\ &=\frac{\epsilon}{2C}\cdot \var_{[a,b]} f\le \frac{\epsilon}{2} \end{align*} The same calculation also gives $$\left|\int_a^b \varphi df_k - \sum_{i=1}^m \varphi(\xi_i)\Delta_i f_k\right|\le \frac{\epsilon}{2}$$ for all $k$. Therefore \begin{align*} \left|\int^b_a \varphi\,df_k - \int^b_a\varphi\,df\right| &\le \left|\int^b_a \varphi\, df_k - \sum^m_{i=1}\varphi(\xi_i)\Delta_i f_k\right|+\left|\sum^m_{i=1}\varphi(\xi_i)(\Delta_i f_k-\Delta_i f)\right|\\ &+\left|\sum^m_{i=1}\varphi(\xi_i)\Delta_i f-\int^b_a \varphi\,df\right| \le \epsilon + \|\varphi\|_\infty \sum^m_{i=1} |\Delta_i f_k-\Delta_i f| \end{align*} Since $f_k\rightarrow f$ pointwise, we can let $k\rightarrow\infty$, while keeping the tagged partition $(\tau,\xi)$ fixed, and obtain $$\limsup_{k\rightarrow\infty} \left|\int^b_a \varphi\,df_k - \int^b_a\varphi\,df\right|\le \epsilon$$ Since $\epsilon$ was arbitrary, the theorem is proven. \end{proof} One can use Helly's theorems to prove the familiar characterization of continuous functionals on $C([a,b])$ as being given by Stieltjes-integrating against $\BV$-functions.
This is the classical Riesz representation theorem.\index{Riesz representation theorem} In view of that, Helly's theorems also provide a more explicit insight in the weak-* compactness of the unit ball in $C([a,b])^\prime$ than is offered by the fairly abstract proof of the Banach-Alaoglu theorem.\index{Banach-Alaoglu theorem} \subsection{Proof of the theorem} In this section we prove the Herglotz representation theorem. The proof is based on the proof in the scalar case, just that we use Helly's theorems instead of Riesz' representation theorem and Banach-Alaoglu. This has the benefit that we are simply dealing with pointwise converging sequences of functions. \emph{Proof of Theorem \ref{thm:herglotz}.} We assume $A(0)=0$. Set $T(z)=\Im A(z)$ for $z\in\D$ and $T^{(r)}(t)=T(re^{it})$ for $t\in[0,2\pi]$ and $0<r<1$. Then the scalar mean value property for harmonic functions gives \begin{align*} \frac{1}{2\pi}\int^{2\pi}_0 \|T(re^{it})\|\,dt\le \frac{1}{2\pi}\int^{2\pi}_0 \tr T(re^{it})\,dt=\tr T(0)=C \end{align*} Hence also $\|T^{(r)}_{ij}\|_1\le C$ for all $1\le i,j\le n$ where $\|\cdot\|_1$ denotes the $1$-norm w.r.t. the Lebesgue measure on $[0,2\pi]$. Now define $$\sigma^{(r)}_{ij}(t)=\frac{1}{2\pi}\int_0^{t}T_{ij}^{(r)}(s)\,ds$$ Note that $\sigma_{ij}^{(r)}$ are BV-functions with $\var_{[0,2\pi]} \sigma_{ij}^{(r)}\le C$ and the mvfs $\sigma^{(r)}(t)=(\sigma_{ij}^{(r)}(t))_{ij}$ are increasing, because $T^{r}(t)\ge 0$. Now we can apply Helly's selection Theorem \ref{thm:hellyselsc} to obtain a sequence $(r_k)_k$ with $r_k\rightarrow 1-$ and a $\BV$-function $\sigma_{ij}$ such that $$\sigma_{ij}^{(r_k)}\longrightarrow \sigma_{ij}$$ pointwise on $[0,2\pi]$. By taking appropriate subsequences we can assume without loss of generality that this holds for all $(i,j)$. The mvf $\sigma=(\sigma_{ij})_{ij}$ is increasing, as it is the pointwise limit of increasing mvfs. The functions $A^{(r)}(z)=A(rz),\,z\in\D$ are holomorphic on $\D$ and continuous on $\overline{\D}$. Hence the Poisson integral formula for holomorphic functions implies \begin{align*} A^{(r)}(z)=\frac{i}{2\pi}\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}T^{(r)}(t)\,dt \end{align*} Let $z\in\D$ be fixed. Applying the above to $f(t)=\frac{e^{it}+z}{e^{it}-z}$ we get \begin{align*} A(z) &= \lim_{k\rightarrow\infty} A^{(r_k)}(z) = \lim_{k\rightarrow\infty} i\int_0^{2\pi} \frac{e^{it}+z}{e^{it}-z}\,d\sigma^{(r_k)}(t) = i\int_0^{2\pi} \frac{e^{it}+z}{e^{it}-z}\,d\sigma(t) \end{align*} where the last equality follows from Helly's convergence Theorem \ref{thm:hellyconvsc}. \qed \section*{Notation} \addcontentsline{toc}{section}{Notation} \begin{tabular}{ll} $\D$ & unit disk around the origin in $\C$\\ $\T$ & unit circle around the origin in $\C$\\ $\Hpl$ & upper half plane in $\C$\\ $M_n$ & space of $n\times n$ matrices with entries in $\C$\\ $I=I_n$ & unit matrix in $M_n$\\ $A^*$ & conjugate transpose of $A\in M_n$\\ $A_{ij}$ & $(i,j)$th entry of the matrix $A$\\ $A\ge 0$ & the Hermitian matrix $A$ is positive semidefinite\\ $A>0$ & the Hermitian matrix $A$ is positive definite\\ $\|A\|$ & operator norm of the matrix $A$\\ $\tr A$ & trace of the matrix $A$\\ $\sigma(A)$ & set of eigenvalues of $A$\\ $\sigmamax(A)$ & spectral radius of $A$, i.e. 
largest eigenvalue\\ $\H^\infty$ & bounded analytic matrix functions on $\D$ with $\det A\not\equiv0$\\ $\Sch\subset\H^\infty$ & subspace of contractive functions\\ $\prodr=\prod$ & product of matrices ordered from left to right\\ $\prodl$ & product of matrices ordered from right to left\\ $\mint$ & (left-)multiplicative integral\\ $\Re A$ & real part of the matrix $A$, given by $\frac{1}{2}(A+A^*)$\\ $\Im A$ & imaginary part of the matrix $A$, given by $\frac{1}{2i}(A-A^*)$\\ $C(K)$ & space of continuous function on a compact set $K\subset\R^n$\\ $\BV([a,b];M_n)$ & space of matrix-valued bounded variation functions on $[a,b]\subset\R$\\ $\BV([a,b])$ & short for $\BV([a,b];M_1)$\\ $\var_{[a,b]} f$ & variation of a matrix-valued function $f$ on the interval $[a,b]$\\ $\osc_{[a,b]} f$ & oscillation of a function $f$ on the interval $[a,b]$\\ $f\in\mathcal{M}^b_a[E]$ & the multiplicative integral $\mint^b_a \exp(f\,dE)$ exists\\ $h_z(\theta)$ & the Herglotz kernel $h_z(\theta)=\frac{z+e^{i\theta}}{z-e^{i\theta}}$\\ $\ch_M$ & characteristic function of the set $M$\\ $\theta^\dagger$ & generalized inverse of the increasing function $\theta$\\ $\lambda$ & Lebesgue measure on $\R$ \end{tabular} \newpage \addcontentsline{toc}{section}{References} \bibliographystyle{plain} \bibliography{References} \newpage \addcontentsline{toc}{section}{Index} \printindex \end{document}
2412.09159v1
http://arxiv.org/abs/2412.09159v1
Hessian curvature hypersurfaces with prescribed Gauss image
\documentclass{amsart} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{latexsym,amsmath,amssymb} \usepackage{xcolor} \usepackage{ulem} \usepackage{soul} \textwidth 5.5 true in \oddsidemargin 0.35 true in \evensidemargin 0.35 true in \setcounter{section}{0} \pagestyle{myheadings} \footskip=50pt \renewcommand{\epsilon}{\varepsilon} \newcommand{\newsection}[1] {\section{#1}\setcounter{theorem}{0} \setcounter{equation}{0} \par\noindent} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \renewcommand{\thesection}{\arabic{section}} \newtheorem{theorem}{Theorem}[section] \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \newtheorem{example}[theorem]{example} \newtheorem{Example}[theorem]{Example} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{Corr}[theorem]{Corr} \newtheorem{Corollary}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{deff}[theorem]{Definition} \newtheorem{rem}[theorem]{Remark} \newcommand{\bth}{\begin{theorem}} \newcommand{\ble}{\begin{lemma}} \newcommand{\bcor}{\begin{corr}} \newcommand{\bdeff}{\begin{deff}} \newcommand{\bprop}{\begin{proposition}} \newcommand{\ele}{\end{lemma}} \newcommand{\ecor}{\end{corr}} \newcommand{\edeff}{\end{deff}} \newcommand{\ii}{i} \newcommand{\eprop}{\end{proposition}} \newcommand{\rlnu}{{R_{\lambda}^{\nu}}} \newcommand{\trlnu}{{\tilde{R}_{\lambda}^{\nu}}} \newcommand{\tlnu}{{T_{\lambda}^{\nu}}} \newcommand{\ttlnu}{{\tilde{T}_{\lambda}^{\nu}}} \newcommand{\slnu}{{S_{\lambda}^{\nu}}} \newcommand{\slnut}{{}^t\!\slnu} \newcommand{\cd}{\, \cdot\, } \newcommand{\mlnu}{{m_{\lambda}^{\nu}}} \newcommand{\psilnu}{\psi_{\lambda}^{\nu}} \newcommand{\xilnu}{\xi_{\lambda}^{\nu}} \newcommand{\nlnu}{N_{\lambda}^{\nu}} \newcommand{\nl}{N_{\lambda}} \newcommand{\Rn}{{\mathbb R}^n} \newcommand{\Rf}{{\mathbb R}^4} \newcommand{\jump}{{}} \newcommand{\la}{\lambda} \newcommand{\eps}{\varepsilon} \newcommand{\e}{\varepsilon} \renewcommand{\l}{\lambda} \newcommand{\loc}{{\text{\rm loc}}} \newcommand{\comp}{{\text{\rm comp}}} \newcommand{\Coi}{C^\infty_0} \newcommand{\supp}{\text{supp }} \renewcommand{\Pi}{\varPi} \renewcommand{\Re}{\rm{Re} \,} \renewcommand{\Im}{\rm{Im} \,} \renewcommand{\epsilon}{\varepsilon} \newcommand{\sgn}{{\text {sgn}}} \newcommand{\Gmid}{\Gamma_{\text{mid}}} \newcommand{\Rt}{{\mathbb R}^3} \newcommand{\R}{{\mathbb R}} \newcommand{\Mdel}{{{\cal M}}^\alpha} \newcommand{\dist}{{\text{dist}}} \newcommand{\Adel}{{{\cal A}}_\delta} \newcommand{\Kob}{{\mathcal K}} \newcommand{\Kout}{\Rn\backslash \Kob} \newcommand{\Kfout}{{\mathbb R}^4\backslash \Kob} \newcommand{\Dia}{\overline{\mathbb E}^{1+3}} \newcommand{\Diap}{\overline{\mathbb E}^{1+3}_+} \newcommand{\Cyl} {{\mathbb E}^{1+3}} \newcommand{\Cylp}{{\mathbb E}^{1+3}_+} \newcommand{\Penrose}{{\cal P}} \newcommand{\Rplus}{{\mathbb R}_+} \newcommand{\parital}{\partial} \newcommand{\tidle}{\tilde} \newcommand{\Stk}{S_T^\Kob} \newcommand{\extn}{{\R^n\backslash\mathcal{K}}} \newcommand{\T}{T_\varepsilon} \newcommand{\p}{\partial} \numberwithin{equation}{section} \newtheorem{THM}{{\!}}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{cor}[lem]{Corollary} \newtheorem{them}[lem]{Theorem} \newtheorem{definition}[lem]{Definition} \pagestyle{plain} \thanks{The first author is supported by NSFC Grant, No.12101145 and Guangxi Science, Technology Project, Grant No. GuikeAD22035202. The second and last authors are supported by NSFC Grant No. 12431008. 
The third author is supported by NSFC Grant No.12141105.} \title [Hessian curvature hypersurfaces with prescribed Gauss image]{Hessian curvature hypersurfaces with prescribed Gauss image } \author{Rongli Huang} \address{School of Mathematics and Statistics, Guangxi Normal University, Guilin, Guangxi 541004, China} \email{[email protected]} \author{Changzheng Qu} \address{School of Mathematics and Statistics, Ningbo University, Ningbo, China} \curraddr{} \email{[email protected]} \thanks{} \author{ZhiZhang Wang} \address{School of Mathematical Science, Fudan University, Shanghai 200433, China} \email{[email protected]} \author{Weifeng Wo} \address{School of Mathematics and Statistics, Ningbo University, Ningbo, China} \curraddr{} \email{[email protected]} \thanks{} \begin{document} \maketitle \begin{abstract} In this paper, we investigate Hessian curvature hypersurfaces with prescribed Gauss images. Given a strictly convex bounded domain $\Omega$ in $\mathbb{R}^n$ and a geodesically strictly convex bounded domain $\tilde{\Omega}$ in the unit hemisphere, we prove that there is a strictly convex graphic hypersurface defined in $\Omega$ with prescribed $k$-Hessian curvatures such that its Gauss image is $\tilde{\Omega}$. Our proof relies on a novel $C^2$ boundary estimate which utilizes the orthogonal invariance of hypersurfaces. Indeed, we employ some special vector fields generated by the infinitesimal rotations in $\mathbb{R}^{n+1}$ to establish the boundary $C^2$ estimates. This new approach enables us to handle the additional negative terms that arise when taking second order derivatives near the boundary. \end{abstract} \section{Introduction} Suppose $\mathbb{R}^{n+1}$ is the $(n+1)$-dimensional Euclidean space. We always assume $\Omega$ is a strictly convex bounded domain in $\mathbb{R}^n$. Suppose $u$ is a function defined on $\Omega$ and $M_u=\{X=(x,u(x)); x\in\Omega\}$ is a graphic hypersurface in $\mathbb{R}^{n+1}$. Let \(N\) be the upward unit normal vector to \(M_u\). Then $N$ is the Gauss map for $M_u$ and its image lies in the unit hemisphere $$\mathbb{S}^n_+=\{x=(x_1,\cdots,x_{n+1})\in \mathbb{S}^n; x_{n+1}>0\}.$$ Assume $\tilde{\Omega}$ is a (geodesically) strictly convex bounded domain in $\mathbb{S}^n_+$. We can pose the second boundary value problem for the prescribed $k$-Hessian curvature equations as follows: Given $\Omega,\tilde{\Omega}$, can one find a positive constant $c$ and a strictly convex graphic hypersurface $M_u$ defined by $u$ such that \begin{equation}\label{e1.1} \sigma_{k}(\kappa[M_u])=c \quad \text{in} \ \Omega, \end{equation} and the Gauss image of $M_u$ is $\tilde{\Omega}$? Here $\kappa[M_u]=(\kappa_{1},\kappa_{2},\cdots,\kappa_n)$ are the principal curvatures of $M_u$ and $\sigma_{k}$ denotes the $k$-th elementary symmetric function \begin{equation*} \sigma_{k}(\kappa)=\sum_{1\leq i_{1}< i_{2}<\cdots< i_{k}\leq n}\kappa_{i_{1}}\kappa_{i_{2}}\cdots\kappa_{i_{k}}. \end{equation*} By a projection map, which is a diffeomorphism, one can map $\mathbb{S}^n_+$ onto $\mathbb{R}^n$. Therefore it maps $\tilde{\Omega}$ to some strictly convex domain $\Omega^*$ in $\mathbb{R}^n$. That is, \begin{equation}\label{bc} Du(\Omega)=\Omega^*, \end{equation} where $Du$ is the gradient map. Condition \eqref{bc} is known as the second boundary value condition. We will explain this in more detail in the next section. Thus prescribing $\tilde{\Omega}$ is equivalent to prescribing $\Omega^*$. The problem of finding hypersurfaces with prescribed curvatures is a classical topic in differential geometry.
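For instance, in the simplest case $k=n$, the quantity $\sigma_n(\kappa[M_u])$ is the Gauss-Kronecker curvature of the graph $M_u$, and, by the formula for the principal curvatures of a graph recalled in Section 2, the problem \eqref{e1.1}-\eqref{bc} reduces to the second boundary value problem for a Monge-Amp\`ere type equation,
\begin{equation*}
\det D^2u=c\left(1+|Du|^2\right)^{\frac{n+2}{2}} \quad \text{in} \ \Omega, \qquad Du(\Omega)=\Omega^*;
\end{equation*}
this special case already connects the problem with the classical second boundary value problems reviewed below.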
Caffarelli-Nirenberg-Spruck \cite{Caffarelli1986} studied the existence of star-shaped closed hypersurfaces with prescribed Hessian curvature. For further studies of closed hypersurfaces related to prescribed curvature problem, please see \cite{GuanGuan2002, GuanLi2012,Guan2015, Jiao2022,Sheng2004,Urbas2000} and the references therein. Caffarelli-Nirenberg-Spruck \cite{Caffarelli1988} and Ivochkina \cite{Ivochkina1989, Ivochkina1990} considered the homogeneous and nonhomogeneous Dirichlet problem for $k$-Hessian curvature equations. The Neumann boundary problems for mean curvature equation and Gauss curvature equation were investigated by Ma-Xu \cite{Ma2016} and Lions-Trudinger-Urbas \cite{Lions1986} respectively. For more results on Dirichlet or Neumann boundary problems of curvature equations, we refer to \cite{Jiao2022, Ma2018, Urbas1996}, and the reference therein. For the second boundary value problem, Pogorelov \cite{Pogorelov1964} first studied the Monge-Amp\`ere equations with \eqref{bc}. Caffarelli \cite{Caffarelli1996} and Urbas \cite{Urbas1997, Urbas2001} established the existence of globally smooth solutions with \eqref{bc} for Monge-Amp\`ere equations and Hessian equations. One may see \cite{Chen2021,Chen2016, Jiang2018, von2010} and the reference therein for recent progress. Such boundary condition has attracted increased attention, since it is connected to mass transfer problems \cite{Caffarelli1992} and minimal Lagrangian diffeomorphisms \cite{Wolfson1997, Urbas2007}. Second boundary value problems also appear in various geometric problems, such as prescribed curvature problem, minimal Lagrangian submanifolds, and geometric flows. Urbas \cite{Urbas2002} first studied Weingarten hypersurfaces with prescribed Gauss images. However, the general $k$-Hessian curvature equations ($k<n$) were not addressed there. Brendle-Warren \cite{Brendle2010} studied minimal Lagrangian graphs in strictly convex domains. Recent progress on curvature equations with prescribed gradient images can be found in \cite{Huang2022} and \cite{Wang2023}. Schn\"urer-Smoczyk \cite{Schnurer2002, Schnurer2003} investigated Gauss curvature flows and Weingarten curvature flows with second boundary conditions. Further research on curvature flows can be found in \cite{ Huang2015,Kitagawa2012, Wang2023, Wang2024}. Now we state our first result: \begin{theorem}\label{t1.1} Assume $\Omega$ and $\tilde{\Omega}$ are bounded, (geodesically) strictly convex domains with smooth boundaries in $\mathbb{R}^n$ and $\mathbb{S}^n_+$ respectively. Then, up to a constant, there exists a unique smooth strictly convex graphic hypersurface $M_u$ determined by a smooth function $u$ and a unique positive constant $c$ satisfying \eqref{e1.1} such that the Gauss image of $M_u$ is $\tilde{\Omega}$. \end{theorem} In \cite{Urbas2001}, Urbas discussed the $k$-Hessian equations in general form. Theorem 1.5 in \cite{Urbas2001} can be generalized to the corresponding prescribed curvature equation by using our approach. 
Specifically, we consider the following equation: \begin{equation}\label{1.2} \sigma_k^{1/k}(\kappa[M_u]) =\psi(\langle X,N\rangle, N)\quad \text{in} \ \ \Omega, \end{equation} where $\langle X,N\rangle$ is the support function of $M_u$ and $\psi(z,p)\in C^{\infty}(\mathbb{R}\times \tilde{\Omega})$ is a positive function satisfying the following conditions, \begin{align} & \psi_z\leq 0 \text{ on } \mathbb{R}\times \tilde{\Omega}; \label{Cond1}\\ & \psi(z,p)\rightarrow +\infty, \text{ as } z\rightarrow -\infty;\ \ \ \psi(z,p)\rightarrow 0 \text{ as } z\rightarrow +\infty \label{Cond2}. \end{align} Thus we have the following theorem: \begin{theorem}\label{t1.2} Assume $\Omega$ and $\tilde{\Omega}$ are bounded, (geodesically) strictly convex domains with smooth boundaries in $\mathbb{R}^n$ and $\mathbb{S}^n_+$ respectively. Assume $\psi\in C^{\infty}(\mathbb{R}\times \tilde{\Omega})$ is a positive function satisfying \eqref{Cond1} and \eqref{Cond2}. Then, there exists a smooth strictly convex graphic hypersurface $M_u$ determined by a smooth function $u$ satisfying \eqref{1.2} such that the Gauss image of $M_u$ is $\tilde{\Omega}$. If \eqref{Cond1} changes to $\psi_z<0$ on $\mathbb{R}\times \tilde{\Omega}$, the solution is unique in the class of strictly convex functions. \end{theorem} If $\psi$ only depends on $N$, i.e. $\psi=\psi(N)$, we also have an existence result as Theorem \ref{t1.1}: \begin{theorem}\label{t1.3} Assume $\Omega$ and $\tilde{\Omega}$ are bounded, (geodesically) strictly convex domains with smooth boundaries in $\mathbb{R}^n$ and $\mathbb{S}^n_+$ respectively. Assume $\psi\in C^{\infty}(\tilde{\Omega})$ is a positive function. Then, up to a constant, there exist a unique smooth strictly convex graphic hypersurface $M_u$ determined by a smooth function $u$ and a unique positive constant $c$ satisfying \begin{equation}\label{1.3} \sigma_k^{1/k}(\kappa[M_u]) =c\psi(N)\quad \text{in} \ \ \Omega \end{equation} such that the Gauss image of $M_u$ is $\tilde{\Omega}$. \end{theorem} \begin{rem} In Theorem \ref{t1.1} and \ref{t1.3}, the uniqueness is in the class of strictly convex functions. \end{rem} We now highlight the novelty of our proof. To find strictly convex solutions, we consider the dual problem via the Legendre transform. In fact, the dual equation of \eqref{1.2} is a quotient Hessian equation: \begin{equation}\label{Legendre1} F^*\Big(w^* \sum_{k,l}b^*_{ik} (D_{kl}u^*) b^*_{lj}\Big)=\left[\frac{\sigma_n}{\sigma_{n-k}}\Big(\la^*\Big[w^*\sum_{k,l} b^*_{ik}(D_{kl}u^*)b^*_{lj}\Big]\Big)\right]^{\frac{1}{k}}=\psi^*(y,u^*), \end{equation} where $y=Du(x)$, $u^*$ is the Legendre transform of $u$, $F^*$ represents the dual operator and $\psi^*$ is the corresponding dual function. The terms $w^*$, $b^*_{ik}$, and $D_{kl} u^*$ come from the Legendre transform of the original functions. This is the central equation to our approach. We will provide more details in the next section. One main difficulty arises when taking second derivatives near the boundary. Direct differentiation yields additional negative terms $-C\sum_i F^{*ii}\Delta u^*$, which cannot be eliminated by simply adding a test function due to the structure of the quotient Hessian. To overcome the difficulty, our strategy is to take derivative along some special vector fields, which are generated by infinitesimal rotations in $\mathbb{R}^{n+1}$. Here we have used the orthogonal invariance of the hypersurfaces. 
Note that the method to establish the second-order estimate in \cite{Urbas2001,Urbas2002} is not applicable to our $k$-Hessian curvature equations. The structure of the paper is as follows. In Section 2, we present preliminaries, including the notation and the three different models for representing the curvature equations: the vertical graph, the Gauss map and the Legendre transform. In Section 3, we obtain the $C^0$ estimate and use the continuity method and degree theory to prove our main existence theorems. In Section 4, we will prove the strict obliqueness by modifying the argument in \cite{Urbas2001}. Section 5 is dedicated to studying the vector field generated by the rotations in $\mathbb{R}^{n+1}$, where we construct a special family of rotations and the corresponding vector field. Finally, in Section 6, we establish the $C^2$ estimates, including global and boundary $C^2$ estimates for \eqref{Legendre1}, by using the vector field constructed in Section 5. \section{Preliminaries}\label{Pre} Assume that $u$ is a sufficiently smooth function defined in some bounded domain $\Omega\subset\mathbb{R}^{n}$. Let $D$ be the connection with respect to the standard Euclidean metric. We always denote $$u_i=D_{i}u=\dfrac{\partial u}{\partial x_{i}}, u_{ij}=D_{ij}u=\dfrac{\partial^{2}u}{\partial x_{i}\partial x_{j}}, u_{ijk}=D_{ijk}u=\dfrac{\partial^{3}u}{\partial x_{i}\partial x_{j}\partial x_{k}}, \cdots $$ and $|Du|=\sqrt{\sum_{i=1}^{n}|D_{i}u|^{2}}.$ \subsection{Vertical graph} We follow \cite{Caffarelli1986} to give various geometric quantities associated with $M_u$. In the standard coordinates of $\mathbb{R}^{n+1}$, the induced metric of $M_u$ is \begin{equation}\label{e2.2} g_{ij}=\delta_{ij}+D_{i}uD_{j}u,\quad 1\leq i,j\leq n. \end{equation} The inverse of $g_{ij}$ and the second fundamental form of $M_u$ are given by \begin{equation}\label{e2.3} g^{ij}=\delta_{ij}-\frac{D_{i}uD_{j}u}{1+|Du|^{2}},\quad 1\leq i,j\leq n, \end{equation} and \begin{equation}\label{e2.4} h_{ij}=\frac{D_{ij}u}{\sqrt{1+|Du|^{2}}},\quad 1\leq i,j\leq n. \end{equation} The upward unit normal vector to $M_u$ is expressed by \begin{equation}\label{e2.5} N(X)=\frac{(-Du,1)}{\sqrt{1+|Du|^{2}}}. \end{equation} The principal curvatures of $M_u$ are the eigenvalues of $h^{j}_{i}\equiv h_{ik}g^{kj}$, which, by \cite{Caffarelli1986}, are the eigenvalues of the symmetric matrix \begin{equation}\label{e2.17} a_{ij}=\frac{1}{w}b^{ik}(D_{kl}u)b^{lj}, \end{equation} where $w=\sqrt{1+|Du|^{2}}$ and $b^{ij}=\delta_{ij}-\frac{D_{i}uD_{j}u}{w(1+w)}$ is the square root of $g^{ij}$. Then $b_{ij}$ is the inverse of $b^{ij}$ expressed as $b_{ij}=\delta_{ij}+\frac{D_{i}uD_{j}u}{1+w}$. Let $\mathcal{S}$ be the vector space of $n\times n$ symmetric matrices and \[\mathcal{S}_+=\{A\in \mathcal{S}: \lambda(A)\in \Gamma_n\},\] where $\Gamma_n:=\{\lambda\in\R^n:\,\,\mbox{each component $\lambda_i>0$}\}$ is the convex cone, and $\lambda(A)=(\lambda_1, \cdots, \lambda_n)$ denotes the eigenvalues of $A.$ Define a function $F$ by \[F(A)=\sigma^{\frac{1}{k}}_k(\lambda(A)),\,\, A\in\mathcal{S}_+,\] then \eqref{1.2} can be written as \begin{equation}\label{main equation graph} F\left(\frac{1}{w}b^{ik}(D_{kl}u)b^{lj}\right)=\psi(\langle X,N\rangle, N). \end{equation} Note that, in fact, the function $F$ is well defined on $\mathcal{S}_k=\{A\in \mathcal{S}: \lambda(A)\in \Gamma_k\},$ where $\Gamma_k$ is the G{\aa}rding cone (see \cite{Caffarelli1985}).
However, in this paper, we only study strictly convex hypersurfaces, thus we restrict ourselves to $\mathcal{S}_+.$ Throughout this paper we denote \[F^{ij}(A)=\frac{\partial F}{\partial a_{ij}}(A),\,\,F^{ij, kl}=\frac{\partial^2 F}{\partial a_{ij}\partial a_{kl}}.\] \subsection{The Gauss map} Let $M_u$ be an entire, strictly convex hypersurface and let $N(X)$ be the upward unit normal vector to $M_u$ at $X.$ We define the Gauss map: \[G: M_u\rightarrow \mathbb{S}^n_+;\,\, X\mapsto N(X).\] Take the hyperplane $\mathbb{P}:=\{X=(x_1, \cdots, x_{n}, x_{n+1}) |\, x_{n+1}=1\}$ and consider the projection of $\mathbb{S}^n_+$ from the origin into $\mathbb{P}$. Then $\mathbb{S}^n_+$ is mapped in a one-to-one fashion onto $\mathbb{R}^n$. The map $P$ is given by \begin{equation}\label{proj} P: \mathbb{S}^n_+\rightarrow \mathbb{R}^n;\,\,(x_1, \cdots, x_{n+1})\mapsto (y_1, \cdots, y_n), \end{equation} where $x_{n+1}=\sqrt{1-x_1^2-\cdots-x_n^2},$ $y_i=-\frac{x_i}{x_{n+1}}.$ Thus it is easy to check that \begin{eqnarray} P\circ G:& (x,u(x))\in M_u\;\; \rightarrow\;\; D u(x)\in \mathbb{R}^n. \end{eqnarray} Hence the gradient map $D u$ is equivalent to the Gauss map. Next let us consider the support function of $M_u.$ We denote \[v:=-\langle X, N\rangle=\frac{1}{\sqrt{1+|Du|^2}}\left(\sum_ix_i\frac{\partial u}{\partial x_i}-u\right).\] Here $\langle \cdot, \cdot\rangle$ denotes the standard Euclidean inner product. Let $\{e_1, \cdots, e_n\}$ be an orthonormal frame on $\mathbb{S}^n$ and $\nabla$ be the standard Levi-Civita connection of $\mathbb{S}^n$. We denote \begin{equation}\label{sphere} \Lambda_{ij}=\nabla_{ij}v+v\delta_{ij} \end{equation} to be the spherical Hessian. Here $\nabla_iv, \nabla_{ij}v$ denote the first and second covariant derivatives with respect to the standard spherical metric. In convex geometry, it is well known that $$\nabla_iv=-\langle X, e_i\rangle, \ \ X=-\sum_i(\nabla_iv)e_i-vN.$$ Moreover we have \begin{eqnarray*} g_{ij}=\sum_k\Lambda_{ik}\Lambda_{kj},\ \ h_{ij}=\Lambda_{ij}. \end{eqnarray*} This implies that the eigenvalues of the spherical Hessian are the curvature radii of $M_u$. That is, if the principal curvatures of $M_u$ are $(\la_1, \cdots, \la_n),$ then the eigenvalues of the spherical Hessian are $\left(\la_1^{-1}, \cdots, \la_n^{-1}\right).$ Therefore, equation \eqref{1.2} can be written as \begin{equation}\label{main equation hyperbolic} F^*(\nabla_{ij}v+v\delta_{ij})=\tilde{\psi}(x,v),\;\;x\in\mathbb{S}^n_+, \end{equation} where $F^*(A)=\left[\frac{\sigma_n}{\sigma_{n-k}}(\lambda(A))\right]^{\frac{1}{k}}$ and \begin{equation}\label{tildepsi} \tilde{\psi}(x,v)=\frac{1}{\psi(v, x)}. \end{equation} \subsection{The Legendre transform} Suppose $M_u$ is an entire, strictly convex graphic hypersurface defined by a function $u$. Then we have \[x_{n+1}=\left<X, \mathbf{E}\right>=u(x_1, \cdots, x_n),\] where $\mathbf{E}=(0, \cdots, 0, 1).$ Introduce the Legendre transform \[y_i=\frac{\p u}{\p x_i},\,\, u^*=\sum_{i=1}^n x_iy_i-u.\] From the theory of convex bodies, we know that \[\Omega^*=Du(\Omega)\] is a convex domain. It is well known that $$\left(\frac{\p^2 u}{\p x_i\p x_j}\right)=\left(\frac{\p^2 u^*}{\p y_i\p y_j}\right)^{-1}.$$ Using the coordinates $(y_1,y_2,\cdots,y_n)$, the first and the second fundamental forms can be rewritten as $$g_{ij}=\delta_{ij}+y_iy_j, \text{ and\,\, } h_{ij}=\frac{u^{* ij}}{\sqrt{1+|y|^2}},$$ where $\left(u^{* ij}\right)$ denotes the inverse matrix of $(u^*_{ij})$ and $|y|^2=\sum_iy_i^2$.
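As a simple illustration of these formulas, take the quadratic $u(x)=\frac{a}{2}|x|^{2}$ with an arbitrary constant $a>0$, chosen only for illustration. Then
\begin{equation*}
y=Du(x)=ax,\qquad u^{*}(y)=\sum_{i=1}^n x_iy_i-u(x)=\frac{|y|^{2}}{2a},\qquad \left(\frac{\p^2 u}{\p x_i\p x_j}\right)=aI_n=\left(\frac{\p^2 u^*}{\p y_i\p y_j}\right)^{-1},
\end{equation*}
and the gradient image is $Du(\Omega)=a\Omega$, so in this case $\Omega^{*}$ is simply a dilation of $\Omega$.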
Now let $W$ denote the Weingarten matrix of $M_u,$ then $$(W^{-1})_{ij}=\sqrt{1+|y|^2}\sum_kg_{ik}u^*_{kj}.$$ From the discussion above, we can see that if $M_u=\{(x, u(x)) | x\in\R^n\}$ is an entire, strictly convex hypersurface satisfying \eqref{main equation graph}, then the Legendre transform of $u$ denoted by $u^*$ satisfies \begin{equation}\label{Legendre} F^*\Big(w^* \sum_{k,l}b^*_{ik} (D_{kl}u^*) b^*_{lj}\Big)=\left[\frac{\sigma_n}{\sigma_{n-k}}\Big(\la^*\Big[w^*\sum_{k,l} b^*_{ik}(D_{kl}u^*)b^*_{lj}\Big]\Big)\right]^{\frac{1}{k}}=\psi^*(y,u^*). \end{equation} Here $w^*=\sqrt{1+|y|^2}$, $b^*_{ij}=\delta_{ij}+\frac{y_iy_j}{1+w^*}$ is the square root of the matrix $g_{ij},$ and \begin{equation}\label{psi*} \psi^*(y,u^*)=\frac{1}{\psi \left(\frac{u^*}{w^*}, \frac{(-y,1)}{w^*} \right)}. \end{equation} \section{$C^0$ estimates and existence} In this section, we prove the main theorems. The existence of solutions is established by using the continuity method, the Leray-Schauder degree theory and the a priori $C^0-C^2$ estimates. The $C^0$ estimate will be carried out here. The $C^1$ estimate follows from the boundedness of $\Omega$ and $\Omega^*$. The $C^2$ estimate, being the most challenging, is proved in the next three sections. {\bf The proof of Theorem \ref{t1.2}}: Suppose $f(\lambda)$ is a symmetric concave function, where $\lambda=(\lambda_1,\cdots,\lambda_n)$. By the concavity of $f$, we have $$f(\lambda)\leq f(1,\cdots,1)+\sum_i\frac{\p f}{\p \lambda_i}(1,\cdots,1) (\lambda_i-1).$$ Since $\sigma_k^{1/k}$ and $\left(\frac{\sigma_n}{\sigma_{n-k}}\right)^{1/k}$ are concave functions, it follows that \begin{equation}\label{ineq:F} F\left(\frac{1}{w}b^{ik}(D_{kl}u)b^{lj}\right)\leq C\sigma_1\left(\frac{1}{w}b^{ik}(D_{kl}u)b^{lj}\right)+C \end{equation} and \begin{equation}\label{ineq:F*} F^*\left(w^* \sum_{k,l}b^*_{ik} (D_{kl}u^*) b^*_{lj}\right)\leq C\sigma_1\left(w^* \sum_{k,l}b^*_{ik} (D_{kl}u^*) b^*_{lj}\right)+C. \end{equation} Therefore integrating \eqref{ineq:F}, we get $$\int_{\Omega}\psi(\langle X,N\rangle, N)\leq C|\Omega|+\int_{\Omega}D\left(\frac{Du}{w}\right)=C|\Omega|-\int_{\p\Omega}\frac{D_{\nu}u}{w}\leq C $$ by the boundedness of $\Omega^*$. Here $\nu$ is the unit interior normal of $\Omega$. By \eqref{Cond2}, there exists a point $p \in \bar{\Omega}$ such that $$\langle X,N\rangle \geq -C,$$ which implies $u(p)\leq C$. Since $Du$ is bounded, we get \begin{equation}\label{sup} \sup_{\Omega}u\leq C. \end{equation} Integrating \eqref{ineq:F*}, we get \begin{eqnarray*} \int_{\Omega^*}\psi^*(y,u^*)&\leq& C|\Omega^*|+\int_{\Omega^*}\sum_{i,j}w^*g_{ij}u^*_{ij}\\ &=&C|\Omega^*|-\int_{\Omega^*}\sum_{i,j}(w^*g_{ij})_ju^*_i-\int_{\p\Omega^*}\sum_{i,j}w^*g_{ij}\nu_j^*u^*_i\leq C \end{eqnarray*} by the boundedness of $\Omega$. Here $\nu^*$ is the unit interior normal of $\Omega^*$. By the definition of $\psi^*(y,z)$, we have \begin{align*} & \psi^*_z\geq 0 \text{ on } \Omega^*\times \mathbb{R}, \\ & \psi^*(y, z)\rightarrow +\infty, \text{ as } z\rightarrow +\infty,\\ & \psi^*(y,z)\rightarrow 0 \text{ as } z\rightarrow -\infty . \end{align*} Therefore the same argument as before shows $$\sup_{\Omega^*} u^*\leq C.$$ Since $u^*(y)=x\cdot y-u(x)$, we obtain the uniform lower bound of $u$ \begin{equation}\label{inf} \inf_{\Omega} u\geq -C. \end{equation} Combining \eqref{sup} and \eqref{inf}, we have the $C^0$ estimate of $u$. The $C^1$ estimate comes from the boundedness of $\Omega,\Omega^*$. In the following Sections 4-6, we will establish the $C^2$ estimate of \eqref{Legendre}.
By the Newton-MacLaurin inequality, the upper bound for \eqref{Legendre} gives a uniform positive lower bound. Consequently, applying the continuity method, the Leray-Schauder degree theory and Hopf's boundary point lemma, we obtain the existence and uniqueness of solutions to \eqref{1.2}. This completes the proof of Theorem \ref{t1.2}. \bigskip We now turn to the proofs of Theorems \ref{t1.1} and \ref{t1.3}. If $\psi$ is a constant function in \eqref{1.3}, then \eqref{1.3} becomes \eqref{e1.1}. Thus, we only need to prove Theorem \ref{t1.3}. {\bf The proof of Theorem \ref{t1.3}}: We consider the approximate equation: \begin{equation}\label{appro} \sigma_k^{1/k}(\kappa[M_u])=e^{-\epsilon\frac{\langle X,N\rangle}{\langle N, \mathbf{E}\rangle}}\psi(N), \end{equation} for any positive constant $\epsilon$ with $Du_{\epsilon}(\Omega)=\Omega^*$. Here $\mathbf{E}=(0, \cdots, 0, 1).$ The right hand side obviously satisfies conditions \eqref{Cond1} and \eqref{Cond2}. Therefore according to Theorem \ref{t1.2}, we can find a solution $u_{\epsilon}$ of \eqref{appro}. In view of the $C^0$ estimate in the proof of Theorem \ref{t1.2}, we get \begin{equation}\label{eu} |\epsilon u_{\epsilon}|\leq C. \end{equation} The $C^1$ estimate is straightforward. The $C^2$ estimate does not depend on the $C^0$ norm of $u_{\epsilon}$. Thus, if we let \begin{equation}\label{hatu} \hat{u}_{\epsilon}=u_{\epsilon}-\frac{1}{|\Omega|}\int_{\Omega}u_{\epsilon}, \end{equation} then we have $$\|\hat{u}_{\epsilon}\|_{C^2(\bar{\Omega})}\leq C.$$ By the Evans-Krylov theory, we get $$\|\hat{u}_{\epsilon}\|_{C^{2,\alpha}(\bar{\Omega})}\leq C$$ for some $0<\alpha<1$. Thus we can assume $$\lim_{\epsilon\rightarrow 0}\hat{u}_{\epsilon}=\hat{u},$$ in the space $C^{2,\beta}(\bar{\Omega})$ for $\beta<\alpha$. Thus $\epsilon \hat{u}_{\epsilon}\rightarrow 0$. Using \eqref{hatu} and \eqref{eu}, we get $$\frac{\epsilon}{|\Omega|}\left|\int_{\Omega}u_{\epsilon}\right|\leq C.$$ Thus, taking a subsequence, we can prove that $\epsilon u_{\epsilon}$ converges to some constant $\log c$. Now letting $\epsilon\rightarrow 0$ in \eqref{appro}, we get \eqref{1.3}. This completes the proof of Theorem \ref{t1.3}. \section{Strict obliqueness} In this section, we will prove the strict obliqueness of the problem \eqref{e1.1}. Since our argument is close to those in \cite{Urbas2002}, \cite{Urbas1997} and \cite{Urbas2001}, we just provide a brief outline. \begin{deff} Suppose $\Omega$ is a strictly convex bounded domain. A uniformly concave function $h:\mathbb{R}^n\rightarrow\mathbb{R}$ is called the defining function of $\Omega$ if $\Omega=\{p\in\mathbb{R}^{n} : h(p)>0\}$ and $|Dh|=1$ on $\partial \Omega$. \end{deff} \begin{lemma}\label{3.4} Suppose $\Omega$, $\Omega^*$ are bounded strictly convex domains with smooth boundary in $\mathbb{R}^{n}$. Suppose $h(p_1,\cdots,p_n)$ is the uniformly concave defining function of $\Omega^*$; then $$\nu^*(p)=Dh(p_1,\cdots,p_n)=\left(\frac{\p h}{\p p_1},\cdots, \frac{\p h}{\p p_n}\right)$$ is the interior unit normal of $\partial\Omega^*$. Let $\nu$ be the unit interior normal of $\partial\Omega$. If $u$ is a strictly convex solution to (\ref{e1.1}) with $\Omega^*=Du(\Omega)$, then the strict obliqueness estimate \begin{equation}\label{ee3.9} \langle\nu^*(Du(x)), \nu(x)\rangle\geq c_0>0 \end{equation} holds for any $x\in\partial \Omega$. Here $c_0$ is some constant only depending on $\Omega$ and $\Omega^*$.
\end{lemma} \begin{proof} Let $$\chi(x)=\langle\nu^*(Du(x)),\nu(x)\rangle=\sum_kh_{p_k}(Du(x))\nu_k(x),$$ where $h_{p_k}=\frac{\p h}{\p p_k}$ and $\nu=(\nu_1,\cdots,\nu_n)$. By the same argument as in \cite{Urbas2001}, we get $\chi\geq 0$ on $\p \Omega$ and we can prove \begin{eqnarray}\label{a} \chi=\sqrt{u^{ij}\nu_i\nu_j(D_{kl}u) h_{p_k}h_{p_l}} \ \ \text{ on }\ \ \p\Omega. \end{eqnarray} Suppose $x_0\in\p\Omega$ is the minimum value point of the function $\chi |_{\p\Omega}$. Let $$\omega(x)=\chi(x)+A h(Du(x)),$$ where $A$ is a sufficiently large positive constant. By the same argument as in \cite{Urbas2001}, to prove that $\sum_{k,l}(D_{kl}u) h_{p_k}h_{p_l}$ has a uniform positive lower bound, we only need to prove \begin{eqnarray}\label{a1} \omega_n(x_0)\geq -C, \end{eqnarray} where $n$ is the unit outward normal of $\p\Omega$. Suppose $h^*(x)$ is the uniformly concave defining function of $\Omega$, so that $\nu = Dh^*$. Let $u^*$ be the Legendre transform of $u$ satisfying \eqref{Legendre}. We define $$\chi^*(y)=\langle\nu^*(y),\nu(Du^*(y))\rangle, $$ then $\chi^*(y)=\chi(x)$, where $y=Du(x)$. Let $$\omega^*(y)=\chi^*(y)+A^* h^*(Du^*(y)).$$ Similarly, to prove $u^{ij}\nu_i\nu_j$ has a uniform positive lower bound, we only need to prove \begin{eqnarray}\label{a2} \omega^*_{n^*}(y_0)\geq -C, \end{eqnarray} where $n^*$ is the unit outward normal of $\p\Omega^*$ and $y_0=Du(x_0)$. To obtain \eqref{ee3.9}, it suffices to prove \eqref{a1} and \eqref{a2}. First we give the proof of \eqref{a1}. Denote $$G(Du,D^2u)=\sigma_k(a_{ij}) \text{ and } \mathcal{L}f=G^{ij}D_{ij}f+(G^s-\psi^s)D_sf$$ for any function $f$, where $$G^{ij}=\frac{\p G}{\p u_{ij}}=\sum_{p,q}\sigma_k^{pq}\frac{1}{w}b^{ip}b^{qj},$$ and \begin{eqnarray*} G^s&=&\frac{\p G}{\p u_s}=-\frac{u_s}{w} \sum_{i,j}G^{ij}u_{ij}-\frac{2}{w(1 + w)}\sum_{t,j} \sigma^{ij}_k a_{it}( wu_t b^{sj} + u_jb^{ts}),\\ \psi^s&=&\frac{\p \psi}{\p u_s}=\psi_z\left(\frac{x_s}{w}-\frac{\langle x,Du\rangle-u}{w^3}u_s\right)+\sum_{i\neq n+1}\psi_{p_i}\left(\frac{-\delta_{is}}{w}+\frac{u_iu_s}{w^3}\right)-\psi_{p_{n+1}}\frac{u_s}{w^3}. \end{eqnarray*} Near $x_0$, we can calculate \begin{equation*}\label{e3.10} \begin{aligned} \mathcal{L}\omega=&\sum_{l,m,k}G^{ij}u_{il}u_{jm}(h_{p_{k}p_{l}p_{m}}\nu_{k}+A h_{p_{l}p_{m}})\\ &+2\sum_{l,k}G^{ij}h_{p_{k}p_{l}}u_{li}\nu_{kj}+\sum_kG^{ij}h_{p_{k}}\nu_{kij}+\sum_kG^{s}h_{p_{k}}\nu_{ks}\\ \leq& \sum_{l,m,k}(h_{p_{k}p_{l}p_{m}}\nu_{k}+A h_{p_{l}p_{m}}+\delta_{lm})G^{ij}u_{il}u_{jm}+C\mathcal{T}+C, \end{aligned} \end{equation*} where $\mathcal{T}=\sum_iG^{ii}$. Since $D^{2}h\leq-\theta I$ for some positive constant $\theta$, we may choose sufficiently large $A$ satisfying $$(h_{p_{k}p_{l}p_{m}}\nu_{k}+A h_{p_{l}p_{m}}+\delta_{lm})<0,$$ which implies \begin{equation}\label{e3.11} \mathcal{L}\omega\leq C\mathcal{T} \ \text{ in } \ \ \Omega, \end{equation} since $\mathcal{T}\geq c_1>0$ for some constant $c_1$. Without loss of generality, we may assume $x_{0}$ is the origin and that the positive $x_{n}$-axis is in the interior normal direction to $\partial\Omega$ at the origin. Suppose that, near the origin, the boundary $\partial\Omega$ is given by \begin{equation}\label{e3.12} x_{n}=\rho(x')=\frac{1}{2}\sum_{\alpha<n}\kappa^{b}_{\alpha}x^{2}_{\alpha}+O(|x'|^{3}), \end{equation} where $\kappa^{b}_{1},\kappa^{b}_{2},\cdots,\kappa^{b}_{n-1}$ are the principal curvatures of $\partial\Omega$ at the origin and $x'=(x_{1},x_{2},\cdots,x_{n-1})$.
Denote a neighborhood of $x_{0}$ in $\Omega$ by $$\Omega_{r}:=\{x\in\Omega:\rho(x')< x_{n}<\rho(x')+r^{2}, |x'|<r\},$$ where $r$ is a small positive constant to be chosen. Define a barrier function \begin{equation}\label{boundary} \varphi(x)=-\rho(x')+ x_{n}+\delta|x'|^{2}-Kx^{2}_{n}, \end{equation} where $\delta=\frac{1}{6}\min\{\kappa^{b}_{1},\kappa^{b}_{2},\cdots,\kappa^{b}_{n-1}\}$ and $K$ is some large undetermined positive constant. It is clear that $-\varphi$ is strictly convex when $r$ is sufficiently small. By the concavity of $\sigma_k^{1/k}$ and Proposition 2.1 in \cite{Jiao2022}, one can show that \begin{equation*} \begin{aligned} G^{ij}\varphi_{ij}\leq& -kG^{1-1/k}(Du,D^2u)G^{1/k}(-D^2\varphi, Du)\\ \leq& -kG^{1-1/k}(Du,D^2u)\frac{\sigma_k^{1/k}(-D^2\varphi)}{w^{1+2/k}}\\ \leq& -k\eta_0G^{1-1/k}(Du,D^2u)\frac{K^{1/k}}{w^{1+2/k}}, \end{aligned} \end{equation*} where $\eta_0$ is some small positive constant. Thus, since $|G^s|$ is bounded, we get \begin{equation}\label{e2.45} \mathcal{L}\varphi \leq -c_3\mathcal{T},\,\,\,x\in\Omega_{r}, \end{equation} when $K$ is sufficiently large and $r$ is sufficiently small. Note that the boundary $\partial \Omega_r$ consists of three parts: $\partial \Omega_r = \partial_1 \Omega_r \cup \partial_2 \Omega_r \cup \partial_3 \Omega_r$, where $\partial_1 \Omega_r$ and $\partial_2 \Omega_r$ are defined by $\{x_n=\rho\}\cap\bar{\Omega}_r$ and $\{x_n=\rho +r^2\}\cap\bar{\Omega}_r$ respectively, and $\partial_3 \Omega_r$ is defined by $\{|x'| = r\}\cap\bar{\Omega}_r$. Thus, when $r$ is sufficiently small (depending on $\delta$ and $K$), we have \begin{equation} \label{BC2-12} \begin{aligned} \varphi \geq & \frac{\delta}{2} |x'|^2, \mbox{ on } \partial_1 \Omega_r,\\ \varphi \geq & \frac{r^2}{2},\ \ \ \ \mbox{ on } \partial_2 \Omega_r,\\ \varphi \geq & \frac{\delta r^2}{2}, \ \ \ \mbox{ on } \partial_3 \Omega_r. \end{aligned} \end{equation} Therefore we have $$\varphi\geq 0, \,\,\,\text{on } \partial\Omega_{r}.$$ In order to obtain the desired results, it suffices to consider the auxiliary function $$\Phi(x)=\omega(x)-\omega(x_{0})+B\varphi(x),$$ where $B$ is some large positive constant to be determined. Combining (\ref{e3.11}) with (\ref{e2.45}), a direct computation yields \begin{equation*} \mathcal{L}(\Phi(x))\leq (C-B c_{3})\mathcal{T}<0, \end{equation*} when $B$ is large. On $\partial_1\Omega_r$, it is clear that $\Phi \geq0$. On $\partial_2 \Omega_r \cup \partial_3 \Omega_r$, we also have $\Phi\geq 0$ if $B$ is sufficiently large. By the maximum principle, we arrive at \begin{equation}\label{e3.15aaaa} \Phi(x)\geq 0 \ \ \text{ on } \bar{\Omega}_r. \end{equation} Combining with $\Phi(x_0)=0$, we have $\partial_n\Phi(x_0)\geq 0$, which gives \eqref{a1}. Finally, we turn to the proof of \eqref{a2}, which is similar to that of \eqref{a1}. Define $$\mathcal{L}^*=G^{*ij}D_{ij},$$ where $$G^*(y, D^2 u^*)=\frac{\sigma_n}{\sigma_{n-k}}\Big(\la^*\Big[w^* \sum_{k,l}b^*_{ik}u^*_{kl}b^*_{lj}\Big]\Big)\text{ and } G^{*ij}=\frac{\p G^*}{\p u^*_{ij}}.$$ Here, $w^*,b^*_{ij}$ are defined in subsection 2.3. A calculation similar to the above shows \begin{equation}\label{e3.16} \mathcal{L^*}\omega^*\leq C\mathcal{T}^*, \end{equation} where $\mathcal{T}^*=\sum_iG^{*ii}$. Since our equation is invariant under rotations in $\mathbb{R}^n$, we may assume that the positive $y_{n}$-axis is the interior normal direction to $\partial\Omega^*$ at $y_0=((y_0)_1,(y_0)_2,\cdots,(y_0)_n)$.
Suppose near $y_0$, the boundary $\partial\Omega^*$ is given by \begin{equation}\label{e3.12n} \bar{y}_n=y_{n}-(y_0)_n=\rho^*(y')=\frac{1}{2}\sum_{\alpha<n}\kappa^{b*}_{\alpha}(y_{\alpha}-(y_0)_{\alpha})^2+O(|y'|^{3}), \end{equation} where $\kappa^{b*}_{1},\kappa^{b*}_{2},\cdots,\kappa^{b*}_{n-1}$ are the principal curvatures of $\partial\Omega^*$ at $y_0$ and $y'=(y_{1}-(y_0)_1,y_{2}-(y_0)_2,\cdots,y_{n-1}-(y_0)_{n-1})$. Denote a neighborhood of $y_{0}$ in $\Omega^*$ by \begin{equation}\label{Omega*} \Omega^*_{r}:=\{y\in\Omega^*:\rho^*(y')< \bar{y}_{n}<\rho^*(y')+r^{2}, |y'|<r\}, \end{equation} where $r$ is a small positive constant to be chosen. Define a barrier function \begin{equation}\label{boundary*} \varphi^*(y)=-\rho^*(y')+ \bar{y}_{n}+\delta^*|y'|^{2}-K^*\bar{y}_{n}^2, \end{equation} where $\delta^*=\frac{1}{6}\min\{\kappa^{b*}_{1},\kappa^{b*}_{2},\cdots,\kappa^{b*}_{n-1}\}$ and $K^*$ is some large undetermined positive constant. Then we have \begin{equation*} \mathcal{L^*}\varphi^* \leq -c_4\mathcal{T}^*,\,\,\,y\in\Omega^*_{r}, \ \ \text{ and } \varphi^*\geq 0 \text{ on } \p\Omega_r^*. \end{equation*} Therefore, for the auxiliary function $$\Psi(y)=\omega^*(y)-\omega^*(y_{0})+B^*\varphi^*(y),$$ we have $\mathcal{L}^*\Psi\leq 0$ in $\Omega^*_r$ if $B^*$ is sufficiently large, while, arguing as before, $\Psi\geq 0$ on $\p\Omega_r^*$; hence $\Psi\geq 0$ on $\bar{\Omega}_r^*$ by the maximum principle. Combining this with $\Psi(y_0)=0$, we get \eqref{a2}. \end{proof} \section{The derivatives from rotations} We first prove a technical lemma, following the notation introduced in Section \ref{Pre}. \begin{lem}\label{4.1} Suppose $(y_1,\cdots,y_n)$ are the rectangular coordinates of $\mathbb{R}^n$. Suppose $\nabla$ is the standard Levi-Civita connection on the unit sphere. Let \[ e_i = \sum_kw^* \, b_{ik}^* \frac{\partial}{\partial y_k}. \] Then $\{e_1,\cdots,e_n\}$ is an orthonormal frame on $\mathbb{S}^n$. Moreover, for any function $v$ defined on some subset of $\mathbb{R}^n$, we get \begin{eqnarray}\label{form} \sum_{k,l}w^* \, b_{ik}^* D_{kl}v \, b_{lj}^*=\nabla^2_{ij}\frac{v}{w^*}+\frac{v}{w^*}\delta_{ij}. \end{eqnarray} \end{lem} \begin{proof} Denote $g_{ij} = \delta_{ij} + y_i y_j$ and $g^{ij} = \delta_{ij} - \frac{y_i y_j}{1+|y|^2}$. We find that the metric on $\mathbb{R}^n$ induced from the unit sphere by the map $P$ is given by \[ \tilde{g} = \sum_{i,j}\frac{1}{1+|y|^2}\Big(\delta_{ij} - \frac{y_i y_j}{1+|y|^2}\Big) d y_i \otimes d y_j = \frac{g^{ij}}{{w^*}^2} d y_i \otimes d y_j. \] Thus, we get $$\tilde{g}(e_i,e_j)=g^{pq}b^*_{ip}b^*_{qj}=\delta_{ij}.$$ Now we calculate the Christoffel symbols $\Gamma_{ij}^k$ of $\tilde{g}$. First we have \[ \frac{\partial \tilde{g}_{li}}{\partial y_j} = - \frac{2 y_j g^{li}}{{w^*}^4} + \frac{1}{{w^*}^2} \Big(\frac{- \delta_{lj}y_i - \delta_{ij} y_l}{{w^*}^2} + \frac{2 y_l y_i y_j}{{w^*}^4}\Big). \] Therefore we get \[ \begin{aligned} \Gamma_{ij}^k = \,& \frac{1}{2} \tilde{g}^{kl} \Big(\frac{\partial \tilde{g}_{li}}{\partial y_j} + \frac{\partial \tilde{g}_{lj}}{\partial y_i} - \frac{\partial \tilde{g}_{ij}}{\partial y_l}\Big)= - \frac{1}{{w^*}^2} (y_i \delta_{kj} + y_j \delta_{ki}). \end{aligned} \] Denote $\tilde{v} = \frac{v}{w^*}$.
Then we have \[ \frac{\partial \tilde{v}}{\partial y_k} = \frac{1}{w^*} \frac{\partial v}{\partial y_k} - \frac{v y_k}{{w^*}^3} \] and \[ \frac{\partial^2 \tilde{v}}{\partial y_k \partial y_l} = \frac{1}{w^*} \frac{\partial^2 v}{\partial y_k \partial y_l} - \frac{y_k}{{w^*}^3} \frac{\partial v}{\partial y_l} - \frac{y_l}{{w^*}^3} \frac{\partial v}{\partial y_k} - \frac{\delta_{kl} v}{{w^*}^3} + 3 \frac{y_k y_l}{{w^*}^5} v. \] Consequently, we have \[ \begin{aligned} \nabla_{ij} \tilde{v} = \,& \sum_{k,l}{w^*}^2 b^*_{ik} b^*_{jl} \nabla_{k l} \tilde{v}\\ = \,& \sum_{k,l}{w^*}^2 b^*_{ik} b^*_{jl} \Big(\frac{\partial^2 \tilde{v}}{\partial y_k \partial y_l} - \Gamma_{kl}^s \frac{\partial \tilde{v}}{\partial y_s}\Big)\\ = \,& \sum_{k,l}{w^*}^2 b^*_{ik} b^*_{jl} \Big[\frac{v_{kl}}{w^*} - \frac{y_k v_l}{{w^*}^3} - \frac{y_l v_k}{{w^*}^3} - \frac{\delta_{kl} v}{{w^*}^3}\\ & + 3 \frac{y_k y_l v}{{w^*}^5} +\sum_s \frac{y_k \delta_{sl} + y_l \delta_{sk}}{{w^*}^2} \Big(\frac{v_s}{w^*} - \frac{v y_s}{{w^*}^3}\Big)\Big]\\ = \,& \sum_{k,l}w^* b^*_{ik} b^*_{jl} v_{kl} - \frac{v}{w^*} \delta_{ij}. \end{aligned} \] Here, in the first equality, $\nabla_{ij}$ on the left-hand side denotes covariant differentiation with respect to $e_i,e_j$, while $\nabla_{kl}$ on the right-hand side denotes covariant differentiation with respect to $\frac{\p}{\p y_k},\frac{\p}{\p y_l}$. Thus, \eqref{form} follows immediately. \end{proof} Throughout, we assume that $\epsilon_0$ is a small positive constant. Suppose $\{A_t\}, t\in[0,\epsilon_0]$ is a family of orthogonal matrices in $\mathbb{R}^{n+1}$. For any $y\in\mathbb{R}^n$, we define \begin{eqnarray}\label{sigmat} \sigma_t(y)=PA_t^{-1}P^{-1}(y), \end{eqnarray} where $P: \mathbb{S}^n_+\rightarrow \mathbb{R}^n$ is the projection map defined by \eqref{proj}, and $P^{-1}, A_t^{-1}$ are the inverse maps of $P$ and $A_t$, respectively. Suppose $u^*$ is a solution of \eqref{Legendre}. We then define \begin{equation}\label{ust} \tilde{u}_t^*(y)=\frac{\sqrt{1+|y|^2}}{\sqrt{1+|\sigma_t y|^2}}u^*(\sigma_t(y)), \text{ and } u_t^*=u^*(\sigma_t(y)). \end{equation} We derive the equation satisfied by $\tilde{u}^*_t$ in the following lemma. \begin{lem} For any $t\in[0,\epsilon_0]$, $\tilde{u}_t^*$ satisfies \begin{equation}\label{Fst} F^*\left( \sum_{k,l}w^* \, b_{ik}^* (\tilde{u}^*_t)_{kl} \, b_{lj}^* \right) (y)= \psi^*(\sigma_t y, u^*_t(y)). \end{equation} \end{lem} \begin{proof} For the graphic hypersurface $M_u$, suppose its position vector is $X=(\xi,u(\xi))$, where $\xi\in\mathbb{R}^n$. Suppose $x\in \mathbb{S}^n_+$ is its unit normal. We can view $X$ as a vector depending on $x$. Then the support function of $M_u$ is defined by $$v(x)=-\langle X(x), x\rangle.$$ Let $X_t=A_t(X)$, where $A_t$ is an orthogonal matrix. The support function $v_t$ for $X_t$ is given by $$v_t(x)=-\langle X_t(x), x\rangle.$$ We know that $$(A_t(X))(A_tx)=A_t(X(x)).$$ Thus we get \begin{eqnarray}\label{vt} \begin{aligned} v_t(x)=&-\langle A_t(X)(x), x\rangle \\ =&-\langle A_t(X)(A_tA_t^{-1}(x)), A_tA_t^{-1}x\rangle\nonumber\\ =&-\langle A_t(X(A_t^{-1}(x))), A_t(A_t^{-1}x)\rangle\nonumber\\ =&-\langle X(A_t^{-1}(x)), A_t^{-1}(x)\rangle \nonumber\\ =&\;v(A_t^{-1}(x))\nonumber. \end{aligned} \end{eqnarray} Suppose $\{e_1,\cdots,e_n\}$ is an orthonormal frame of $\mathbb{S}^n$.
Then we have \begin{align} \nabla_iv_t(x)&=dv_t(x)(e_i)=dv(A_t^{-1}x)(A_t^{-1}e_i),\\ \nabla^2_{ij}v_t(x)&=e_j((v_t)_i)-\nabla_{e_j}e_i v_t\label{D2v}\\ &=d[dv(A_t^{-1}x)(A_t^{-1}e_i)](A_t^{-1}e_j)-dv_t(x)(\nabla_{e_j}e_i)\nonumber\\ &=\nabla^2v(A_t^{-1}x)(A_t^{-1}e_i,A_t^{-1}e_j)+dv(A_t^{-1}x)(\nabla_{A_t^{-1}e_j}A_t^{-1}e_i)\nonumber\\ &-dv(A_t^{-1}x)(A_t^{-1}\nabla_{e_j}e_i)\nonumber\\ &=\nabla^2v(A_t^{-1}x)(A_t^{-1}e_i, A_t^{-1}e_j)\nonumber. \end{align} Here $\nabla$ is the canonical connection on $\mathbb{S}^n$ and we have used the fact that $$A_t^{-1}\nabla_{e_j}e_i=\nabla_{A_t^{-1}e_j}(A_t^{-1}e_i).$$ Note that, in the above equality, the left-hand side represents the derivative taken at the point $x$ and the right-hand side represents the derivative taken at the point $A_t^{-1} x$. Since $v$ satisfies equation \eqref{main equation hyperbolic}, using \eqref{vt} and \eqref{D2v}, we obtain \[ F^*((v_t)_{i,j} + v_t\delta_{i,j})(x) = \tilde{\psi}(A_t^{-1}x, v_t(x)). \] By \eqref{ust} and \eqref{vt}, we derive \begin{align} \label{uts} \tilde{u}^*_t(y)&=\frac{\sqrt{1+|y|^2}}{\sqrt{1+|\sigma_t y|^2}}u^*_t(y)=\frac{\sqrt{1+|y|^2}}{\sqrt{1+|PA_t^{-1}P^{-1}y|^2}}u^*(PA_t^{-1}P^{-1}y)\\ &=\sqrt{1+|y|^2} v(A_t^{-1}P^{-1}y)=\sqrt{1+|y|^2} \, v_t \left(P^{-1} y\right). \nonumber \end{align} Applying Lemma \ref{4.1}, we have \[ \sum_{k,l}w^* \, b_{ik}^* (\tilde{u}^*_t)_{kl} \, b_{lj}^* = (v_t)_{i,j} + v_t \delta_{i,j}, \] which implies \eqref{Fst}. \end{proof} Suppose the components of $\sigma_t$ are \begin{equation}\label{sigmacom} \sigma_{t}(y) = (g_1(t,y), \ldots, g_n(t,y)). \end{equation} Now, taking first and second derivatives with respect to $t$ in \eqref{Fst}, we get \begin{equation}\label{D1} \sum_{k,l}F^{*ij} w^* \, b_{ik}^* \left( \frac{d}{dt} \tilde{u}_t^* \right)_{kl} b_{lj}^* = \sum_m\psi^*_{y_m}\frac{\p g_m}{\p t}+\psi^*_{u^*} \frac{d}{dt} u_t^* , \end{equation} and \begin{align}\label{D2} &\sum_{k,l}F^{*ij} w^* b_{ik}^* \left( \frac{d^2}{dt^2} \tilde{u}_t^* \right)_{kl} b_{lj}^* \\ &= -F^{*ij, rs} \left[\sum_{k,l}w^* b_{ik}^* \left( \frac{d}{dt} \tilde{u}_t^* \right)_{kl} b_{lj}^*\right]\left[ \sum_{p,q}w^* b_{rp}^* \left( \frac{d}{dt} \tilde{u}_t^* \right)_{pq} b_{qs}^* \right]+\psi^*_{u^*u^*}\left(\frac{d u_t^*}{dt}\right)^2\nonumber\\ &\qquad+\sum_m\psi^*_{y_m}\frac{\p^2 g_m}{\p t^2}+\sum_{m,n}\psi^*_{y_my_n}\frac{\p g_m}{\p t}\frac{\p g_n}{\p t}+2\sum_m\psi^*_{y_m}\psi^*_{u^*}\frac{\p g_m}{\p t} \frac{d}{dt} u_t^* +\psi^*_{u^*} \frac{d^2}{dt^2} u_t^* . \nonumber \end{align} Set $T$ to be the tangential vector field generated by $\sigma_t$, namely, \begin{equation}\label{T} T = \sum_{m=1}^n \left. \frac{\partial g_m}{\partial t} \right|_{t=0}\frac{\partial}{\partial y_m}. \end{equation} To proceed, we need the following proposition. \begin{prop}\label{prop} Suppose $\{\sigma_t, t\in[0,\epsilon_0]\}$ is a one-parameter transformation group of $\mathbb{R}^n$, namely, for any $0\leq s,t\leq \epsilon_0$, \begin{equation*} \sigma_{t+s} = \sigma_t \circ \sigma_s = \sigma_s \circ \sigma_t, \text{ and } \sigma_{0}=id. \end{equation*} For any smooth function $h(y)$, we define $h_t(y)=h(\sigma_t(y))$. Then we have \begin{equation}\label{prop1} \left. \frac{d}{dt} h_t \right|_{t=0}(y) = T h(y) \ \ \text{ and } \ \ \left. \frac{d^2}{dt^2} h_t \right|_{t=0} (y) = T^2 h (y),\end{equation} where $T$ is defined by \eqref{sigmacom} and \eqref{T}. \end{prop} \begin{proof} The first equality of \eqref{prop1} is obvious.
To prove the second equality of \eqref{prop1}, we only need to show that \[ \left. \frac{d^2}{dt^2} h_t \right|_{t=0} (y) = \left. \frac{d}{dt} \left[ T h (\sigma_t (y)) \right] \right|_{t=0}, \] and use the first equality. Since \[ \left. \frac{d^2}{dt^2} h_t \right|_{t=0} (y) =\lim_{t \to 0} \frac{1}{t} \left[ \frac{d}{dt} h_t (y) - \left. \frac{d}{dt} h_t (y) \right|_{t=0} \right]\] and \[ \left. \frac{d}{dt} \left( T h (\sigma_t (y)) \right) \right|_{t=0} = \lim_{t \to 0} \frac{1}{t} \left[ T h (\sigma_t (y)) - T h (y) \right], \] it suffices to prove \[ \frac{d}{dt} h_t (y)= T h (\sigma_t (y)). \] Indeed we have \begin{align*} \frac{d}{dt} h_t (y)&=\lim_{s \to 0} \frac{1}{s} \left[ h (\sigma_t(\sigma_s(y))) - h (\sigma_t(y)) \right] \\ &= \lim_{s \to 0} \frac{1}{s} \left[ h (\sigma_s (\sigma_t (y))) - h (\sigma_t (y)) \right]\\ &=\left. \frac{d}{ds} h_s (\sigma_t (y)) \right|_{s=0} \\ &= T h (\sigma_t (y)). \end{align*} \end{proof} Using \eqref{prop1}, we get \begin{align}\label{newut} \left.\frac{d}{dt} \tilde{u}_t^*\right|_{t=0} &=w^*\left.\frac{d}{dt} \frac{u_t^*}{\sqrt{1+|\sigma_t(y)|^2}}\right|_{t=0}=w^*T\frac{u^*}{w^*},\\ \left.\frac{d^2}{dt^2} \tilde{u}_t^*\right|_{t=0} &=w^*\left.\frac{d^2}{dt^2} \frac{u_t^*}{\sqrt{1+|\sigma_t(y)|^2}}\right|_{t=0}=w^*T^2\frac{u^*}{w^*}.\nonumber \end{align} Thus, letting $t=0$ in \eqref{D1}, \eqref{D2}, then using \eqref{newut}, Proposition \ref{prop} and the concavity of $F^*$, we obtain the following lemma: \begin{lem}\label{lem4.3} Suppose $u^*$ is the solution of \eqref{Legendre}. Let $T$ be a vector field generated by some one-parameter transformation group $\{\sigma_t, t\in[0,\epsilon_0]\}$ of $\mathbb{R}^n$. Namely $T$ is defined by \eqref{sigmacom} and \eqref{T}. Then we have \begin{align} \label{T2} \\ \sum_{k,l}F^{*ij} w^* \, b_{ik}^* \left(w^*T \frac{u^*}{w^*}\right)_{kl} b_{lj}^*&= T\psi^*+\psi^*_{u^*}Tu^*, \nonumber\\ \sum_{k,l}F^{*ij} w^* \, b_{ik}^* \left(w^*T^2 \frac{u^*}{w^*}\right)_{kl} b_{lj}^* &\geq T^2\psi^*+2\psi^*_{u^*}T\psi^*Tu^*+\psi^*_{u^*}T^2u^*+\psi^{*}_{u^*u^*}(Tu^*)^2.\nonumber \end{align} \end{lem} Now we construct a ``canonical'' vector field generated by rotations. Let $dP$ be the differential of the map $P$. We would like to construct, near any given boundary point $y_0$, a vector field which is parallel to a given tangential vector at $y_0$. \begin{prop}\label{propT} Given a boundary point $y_0\in\p\Omega^*$ and a unit tangential vector $\xi=(\xi_1,\cdots,\xi_n)$ of $\p\Omega^*$ at $y_0$, we can construct a vector field $T=\sum_mT_m\frac{\p}{\p y_m}$ near $y_0$ satisfying \begin{equation}\label{5.2} T_m(y_0)=\sqrt{1+|y_0|^2}\xi_m \end{equation} and \begin{equation}\label{5.21} |T(y)|^2=\sum_mT_m^2(y)\leq 1+|y|^2 \text{ for any } y. \end{equation} Here the equality of \eqref{5.21} holds when $y=y_0$. Moreover, \eqref{T2} holds for the special $T$ constructed here. \end{prop} \begin{proof} Let $x_0 = P^{-1}(y_0)$ and let $e_1$ be the unit vector in the direction of $(dP)^{-1}(\xi)$. We extend $e_1$ to an orthonormal set $\{e_1, e_2, \ldots, e_{n-1}, e_n\}$ such that $\{e_1,\cdots,e_{n-1}\}$ spans the tangent space of $\p\tilde{\Omega}$ at $x_0$. For any $x\in \mathbb{S}^n_+$, suppose $$x= a_0(x) x_0 + \sum_{i=1}^n a_i(x)e_i, $$ where $a_0(x),a_1(x),\cdots, a_n(x)$ are the components of $x$.
Given a constant $\epsilon_0>0$, for any $0\leq t\leq \epsilon_0$, we define a family of rotations: \begin{equation*} \begin{aligned} A_t (x_0) &= \cos t \, x_0 + \sin t \, e_1, \\ A_t (e_1) &= -\sin t \, x_0 + \cos t \, e_1, \\ A_t (e_i) &= e_i, \quad \text{for } i \geq 2. \end{aligned} \end{equation*} Then, we get \begin{equation*} A_t (x) = a_0 A_t (x_0) + \sum_{i=1}^n a_i A_t (e_i). \end{equation*} Next we let $$\sigma_t (y) = P A_t P^{-1} (y) = (g_1(t,y), \cdots, g_n(t,y)),$$ and define the vector field \begin{equation*} T = \sum_{m=1}^n \frac{\partial g_m}{\partial t} \Bigg|_{t=0} \frac{\partial}{\partial y_m}. \end{equation*} Suppose $x=P^{-1} (y) = (\tilde{x}, x_{n+1})$; then we see that \begin{equation*} x_{n+1} =\frac{1}{\sqrt{1+\lvert y \rvert^2}}, \quad \tilde{x} =- \frac{y}{\sqrt{1+\lvert y \rvert^2}}. \end{equation*} Thus we have \begin{equation*} a_0 (y) = \langle P^{-1} (y), x_0\rangle, \quad a_i (y) = \langle P^{-1} (y), e_i\rangle. \end{equation*} Therefore \begin{equation*} A_t P^{-1} (y) = (a_0(y) \cos t - a_1(y) \sin t) \, x_0 + (a_0(y) \sin t + a_1(y) \cos t) \, e_1 + \sum_{i=2}^n a_i(y) e_i. \end{equation*} Let $E_1,\cdots,E_{n+1}$ denote the standard orthonormal basis of $\mathbb{R}^{n+1}$. Thus we get \begin{equation*} g_m (t,y) = -\frac{\langle A_t \, P^{-1} (y), E_m\rangle}{\langle A_t \, P^{-1} (y), E_{n+1}\rangle}. \end{equation*} Then, taking the derivative with respect to $t$, we have \begin{equation*} \frac{d}{dt} A_t \, P^{-1} (y) \Bigg|_{t=0} = - a_1 (y) \, x_0 + a_0 (y) e_1 \end{equation*} and \begin{align}\label{5.4} &T_m(y) =\left. \frac{\partial g_m}{\partial t} \right|_{t=0} \\ &= -\frac{\left.\frac{d}{dt} \langle A_t P^{-1}(y), E_m\rangle \right|_{t=0}}{\langle P^{-1}(y), E_{n+1}\rangle} +\frac{\langle A_t P^{-1}(y),E_m\rangle}{ \langle A_t P^{-1}(y), E_{n+1} \rangle^2} \left. \frac{d}{dt} \langle A_t P^{-1}(y), E_{n+1}\rangle \right|_{t=0}\nonumber\\ & = \frac{a_1(y) \, \langle x_0,E_m\rangle - a_0(y) \, \langle e_1,E_m\rangle}{\frac{1 }{\sqrt{1 + |y|^2}} } -y_m\sqrt{1 + |y|^2} \left[ -a_1(y) \, \langle x_0, E_{n+1}\rangle + a_0(y) \, \langle e_1, E_{n+1}\rangle \right]\nonumber\\ &= \sqrt{1 + |y|^2} \Big\{ \langle P^{-1}(y), e_1\rangle \big( \langle x_0, E_m\rangle + y_m \langle x_0, E_{n+1}\rangle \big)\nonumber\\ &\qquad\qquad -\langle P^{-1}(y), x_0\rangle \big( \langle e_1, E_m\rangle + y_m \langle e_1, E_{n+1} \rangle\big) \Big\}.\nonumber \end{align} Indeed, $T_m(y)$ is a polynomial in $y$ of degree $2$. It is clear that \[ dP^{-1} \left( \frac{\partial}{\partial y_k} \right) = \frac{\p}{\p y_k} P^{-1}= -\frac{1}{\sqrt{1 + |y|^2}} E_k - \frac{ y_k}{1 + |y|^2} P^{-1}(y). \] Then we can calculate \[ \langle P^{-1}(y), dP^{-1}\left( \frac{\partial}{\partial y_k} \right) \rangle= \frac{y_k}{1+|y|^2} - \frac{y_k}{1+|y|^2} = 0. \] Next we have \begin{eqnarray*} \langle dP^{-1} \left( \frac{\partial}{\partial y_k} \right), ( E_m + y_m E_{n+1})\rangle = -\frac{1}{\sqrt{1+|y|^2}} \delta_{mk}.
\end{eqnarray*} Letting $\tilde{\xi}=(\tilde{\xi}_1,\cdots,\tilde{\xi}_n) =dP(e_1)$ and using \eqref{5.4} and $\langle x_0, e_1\rangle=0$, we have \begin{align}\label{newTm} T_m(y_0) &= -\sqrt{1+|y_0|^2} \left( \langle e_1,E_m\rangle + (y_0)_m \langle e_1,E_{n+1}\rangle \right)\\ &= -\sqrt{1+|y_0|^2} \langle dP^{-1} (\tilde{\xi}), \left( E_m + (y_0)_m E_{n+1} \right)\rangle \nonumber\\ &= -\sqrt{1+|y_0|^2} \sum_k \tilde{\xi}_k \langle dP^{-1} \left( \frac{\partial}{\partial y_k} \right), \left( E_m + (y_0)_m E_{n+1} \right)\rangle \nonumber\\ &= \sum_k \tilde{\xi}_k \delta_{mk}=\tilde{\xi}_m.\nonumber \end{align} Here we let $y_0=((y_0)_1,(y_0)_2,\cdots, (y_0)_n)$. Let us first prove \eqref{5.21}. Note that \begin{equation}\label{4.101} P^{-1}(y)=a_0(y)x_0+\sum_{i\geq 1}a_i(y)e_i, \end{equation} where $a_0(y_0)=1$ and $a_i(y_0)=0$ for $i\geq 1$. Define the function \[ \tilde{\eta} (y) = a_1(y) x_0 - a_0(y) e_1. \] Then, using \eqref{5.4}, we get \begin{eqnarray*} T_m(y) &=& \sqrt{1+|y|^2}\langle \tilde{\eta}(y), (E_m + y_m E_{n+1})\rangle. \end{eqnarray*} We denote \[ \tilde{e}_m = E_m + y_mE_{n+1} - y_m P^{-1}(y). \] Then, in view of $\langle\tilde{\eta}, P^{-1}(y) \rangle= 0 $, we have \[ \langle\tilde{\eta}(y),\tilde{e}_m \rangle= \langle\tilde{\eta}(y), (E_m + y_m E_{n+1})\rangle. \] This allows us to conclude that \[ T_m(y) = \sqrt{1+|y|^2} \langle\tilde{\eta}(y), \tilde{e}_m\rangle. \] Using the fact that $\langle P^{-1}(y),(E_m+y_mE_{n+1})\rangle=0,$ we find that \[ \tilde{e}_m \cdot \tilde{e}_{k} = \delta_{km} + y_k y_m - y_m y_k = \delta_{km}. \] Thus we can derive \begin{align}\label{TTT} \sum_{m=1}^{n} T_m^2 &=\left(1+|y|^2 \right) \sum_{m=1}^{n} \langle\tilde{\eta}, \tilde{e}_m\rangle^2 = \left( 1+|y|^2 \right) |\tilde{\eta}|^2 \\ &= \left( 1+|y|^2 \right) (a_0^2(y) + a_1^2(y))\leq 1+|y|^2,\nonumber \end{align} where the equality holds when $y=y_0$. Here we have used $a_0^2(y)+a^2_1(y)\leq |P^{-1}(y)|^2=1$. This completes the proof of \eqref{5.21}. Next, from \eqref{newTm} and noting that $\tilde{\xi}$ is parallel to $\xi$ and $|\xi|=1$, we obtain \eqref{5.2}. Finally, for any $0\leq t,s\leq \epsilon_0$, since $A_{t+s} = A_t \cdot A_s = A_s \cdot A_t$ and $A_0=I$, we get \[ \sigma_{t+s} = \sigma_t \circ \sigma_s = \sigma_s \circ \sigma_t, \text{ and } \sigma_0=id. \] Therefore, $\{\sigma_t, t\in[0,\epsilon_0]\}$ is a one-parameter transformation group. Thus, we get \eqref{T2}. \end{proof} \section{$C^2$ estimates} In this section, we continue to use the notation of Section \ref{Pre}. The main result of this section is the following proposition. \begin{prop}Suppose $\Omega$, $\Omega^*$ are bounded strictly convex domains with smooth boundary in $\mathbb{R}^{n}$. If $u^*$ is a strictly convex solution to \eqref{Legendre} with $\Omega=Du^*(\Omega^*)$, then we have the $C^2$ estimate \begin{equation*} |D^2u^*|\leq C, \end{equation*} where $C$ is a constant depending on $\Omega^*$, $\sup_{\Omega^*}|u^*|$ and $\sup_{\Omega^*}|Du^*|$. \end{prop} \subsection{The global $C^2$ estimate} To establish the estimate in $\Omega^*$, we utilize the equation \eqref{main equation hyperbolic}. Let $\tilde{\Omega} = P^{-1}(\Omega^*)$. Suppose $\{e_1,\cdots, e_n\}$ is an orthonormal frame on $\mathbb{S}^n_+$. The spherical Hessian $\Lambda_{ij} $ defined by \eqref{sphere} satisfies \[ \nabla_k\Lambda_{ij} =\nabla_j \Lambda_{ik} \] and \[ \nabla_{jj}\Lambda_{ii} -\nabla_{ii} \Lambda_{jj} = -\Lambda_{jj} + \Lambda_{ii}.
\] We consider the following problem \[ \sup_{x\in \tilde{\Omega},\ \eta \in T_{x} \mathbb{S}^n, |\eta|=1} \Lambda_{\eta\eta}(x). \] Without loss of generality, we can assume $x_0, e_1$ are the maximum point and direction of the above problem. Moreover we can assume $\Lambda_{ij}$ is a diagonal matrix at $x_0$. Consequently, we have \[ \Lambda_{\eta \eta} \leq \Lambda_{11}(x_0). \] Next we will consider the test function: \[ \varphi = \Lambda_{11}. \] By using \eqref{main equation hyperbolic}, we have $$F^{*ii}\Lambda_{ii11}=\nabla_{11}\tilde{\psi}+2(\nabla_1\tilde{\psi})\tilde{\psi}_{v}\nabla_1v+\tilde{\psi}_{v}\nabla_{11}v+\tilde{\psi}_{vv}(\nabla_1v)^2.$$ Since $|X|^2=|x|^2+u^2=|\nabla v|^2+v^2$, the above equation gives $$F^{*ii}\Lambda_{ii11}\geq \tilde{\psi}_{v}\Lambda_{11}-C,$$ where $C$ depends on $\Omega^*$, $\sup_{\Omega^*}|u^*|$ and $\sup_{\Omega^*}|Du^*|$. Then we have \begin{align*} F^{*ij} \varphi_{ij} &= F^{*ii} \Lambda_{11ii} = F^{*ii} \Lambda_{ii11} + \sum_i F^{*ii} \Lambda_{11} - \sum_i F^{*ii} \Lambda_{ii}\\ &\geq\Lambda_{11} \sum_i F^{*ii} +\tilde{\psi}_v\Lambda_{11}- C. \end{align*} By \eqref{tildepsi} and \eqref{Cond1}, we have $$\tilde{\psi}_v=-\frac{\psi_z}{\psi^2}\geq 0.$$ Combining this with $\sum F^{*ii} \geq F^*=\tilde{\psi}$, we get \[ \Lambda_{11}(x_0) \leq C, \] provided $x_0$ is an interior point of $\tilde{\Omega}$; otherwise, $x_0$ lies on $\partial \tilde{\Omega}$. Therefore, we obtain \[ \sup_{\Omega^*} \sum_{k,l}w^* b_{ik}^* D_{kl}^2 u^* b_{jl}^* \leq C + \sup_{\partial \Omega^*} \sum_{k,l}w^* b_{ik}^* D_{kl}^2 u^* b_{lj}^*, \] which implies \begin{equation}\label{5.1} \sup_{\Omega^*} |D^2 u^*| \leq C \left( 1 + \sup_{\partial \Omega^*} |D^2 u^*| \right). \end{equation} \subsection{$C^2$ boundary estimate} Suppose $h^*$ is the uniformly concave defining function of $\Omega$, so that $h^*(Du^*)= 0$ on $\partial \Omega^*$. Let $\beta=(h^*_{p_1}, \ldots, h^*_{p_n})$. Denote by $\nu$ the interior unit normal of $\p\Omega$. Thus we have $\beta(y)=\nu(Du^*(y))$. By Lemma \ref{3.4}, it follows that $\beta \cdot \nu^* > 0$, where $\nu^*$ is the interior unit normal of $\p\Omega^*$. \textbf{Step 1}: The Tangential-Normal Estimates. We first establish that \begin{equation}\label{tn} D_{\tau\beta} u^* = 0, \ \ \text{on}\ \ \partial \Omega^*, \end{equation} where $\tau$ is any tangential vector field of $\partial \Omega^*$. Indeed, differentiating the boundary condition $h^*(Du^*)=0$ along $\tau$ gives $0=D_{\tau}\big(h^*(Du^*)\big)=\sum_kh^*_{p_k}D_{\tau k}u^*=D_{\tau\beta}u^*$. \textbf{Step 2}: The Double Normal Estimates. Let $H^* = h^*(Du^*)$. We compute the derivatives: \[ D_i H^* = \sum_kh^*_{p_k} D_{i k} u^* \quad \text{and} \quad D_{ij} H^* = \sum_{k}h^*_{p_k} D_{ij k} u^* +\sum_{k,l} h^*_{p_k p_l} u^*_{i k} u^*_{j l}. \] For any function $f$, we still denote \begin{equation}\label{L*} \mathcal{L^*} f = \sum_{k,l}F^{*ij} w^*\, b_{ik}^* f_{kl} b_{lj}^*. \end{equation} Differentiating \eqref{Legendre} with respect to $y_k$, we get \begin{eqnarray*} \begin{aligned} \sum_{p,q}F^{*ij} w^* \, b_{i p}^* u^*_{pq k} b_{qj}^* + \frac{y_k}{w^*} \sum_{p,q}F^{*ij} b_{i p}^*u^*_{pq} b_{qj}^*+2 \sum_{p,q}F^{*ij} w^* \frac{\partial b^*_{ip}}{\partial y_k} u_{pq}^* b_{qj}^*= \psi^*_{y_k}+\psi^*_{u^*}u_k^*. \end{aligned} \end{eqnarray*} Using \eqref{L*}, the above equation reduces to \begin{eqnarray*} \psi^*_{y_k}+\psi^*_{u^*}u_k^* =\mathcal{L}^* u_k^* + \frac{y_k}{w^{*2}} \psi^*+ 2\sum_{p,q} F^{*ij} w^* \frac{\partial b^*_{ir}}{\partial y_k} ( b^*)^{rs} b^*_{s p}u^*_{pq} b_{qj}^*.
\end{eqnarray*} It is clear that \[ \frac{\partial b^*_{ir}}{\partial y_k} = \frac{y_i \delta_{k r} + y_{r} \delta_{i k}}{1 + w^*} - \frac{y_i y_r}{(1 + w^*)^2} \frac{y_k}{w^*} \] and \[ (b^*)^{ rs} = \delta_{rs} - \frac{y_{r} y_s}{w^* (1 + w^*)}. \] Thus, by using the convexity of $u^*$, we conclude that \begin{equation}\label{uk} |\mathcal{L}^* u^*_k| \leq C, \end{equation} in view of the fact that $|\sum_{p,q}F^{*ij} w^* b^*_{s p}u^*_{pq} b_{qj}^*|$ is uniformly bounded for $1\leq i,s\leq n$. Then we have \begin{equation}\label{L*H} \mathcal{L}^* H^* =\sum_k h^*_{p_k} \mathcal{L}^* u^*_k + \sum_{k,l,p,q}h^*_{p_k p_l}F^{*ij} w^* b_{i p}^* u^*_{ pk} u^*_{q l} b_{qj}^*. \end{equation} We have the following identity: \[ \sum_{p,q}F^{*ij} w^* b_{i p}^* u^*_{ pk} u^*_{q l}b_{qj}^*=\sum_{p,q,s,n}F^{*ij} w^* b_{i p}^* u^*_{p s} b_{s t}^* (b^*)^{tk} (b^*)^{lm} b^*_{mn} u^*_{ nq} b_{ qj}^*. \] From this, we can derive \begin{equation}\label{ukk} \left| \sum_{k,l,p,q}h^*_{p_k p_l} F^{*ij} w^* b_{i p}^* u^*_{ pk} u^*_{q l} b_{q j}^* \right| \leq C \sum_i F^{*ii} \lambda_i^2, \end{equation} where $\lambda_1,\cdots,\lambda_n$ are the curvature radii of $M_u$, which are also the eigenvalues of the matrix $(\sum_{k,l}w^* b_{ik}^* u^*_{k l} b_{lj}^*)$. Consequently, we obtain \begin{equation}\label{5.6} \left| \mathcal{L}^* H^*\right| \leq C \left(1 + \sum_i F^{*ii} \lambda_i^2\right). \end{equation} By using \cite{Urbas2001}, we know that, for any $\epsilon>0$, \[ \sum_i F^{*ii} \lambda_i^2 \leq \left( C(\epsilon) + \epsilon M \right) \sum_i F^{*ii}, \] where $C(\epsilon)$ is a constant depending only on $\epsilon$ and \[ M = \sup_{y\in\Omega^*}\{\lambda_1(y),\cdots,\lambda_n(y)\}. \] For any boundary point $\hat{y}\in\p\Omega^*$, as discussed in Section 4, we can define a neighborhood $\Omega^*_r$ of $\hat{y}$ and a barrier function $\varphi^*$ using \eqref{Omega*} and \eqref{boundary*}. For a positive constant $B$, we consider the following test function: \[ \Phi = H^* -B \left( C(\epsilon) + \epsilon M \right) \varphi^* \] in $\Omega^*_r$. Since $-D^2\varphi^*\geq CI$, by using \eqref{5.6}, we have \begin{align*} \mathcal{L}^* \Phi &\geq \mathcal{L}^*H^* + BC \left( C(\epsilon) + \epsilon M \right) F^{*ij} w^* b_{ik}^* b_{kj}^*\\ &\geq -C \left( C(\epsilon) + \epsilon M \right) \sum_i F^{*ii} + BC \left(C(\epsilon) + \epsilon M \right)\sum_i F^{*ii}\\ &\geq 0. \end{align*} Moreover, if $r$ is sufficiently small and $B$ is sufficiently large, we have $\Phi\leq 0$ on $\p\Omega^*_r$ and $\Phi(\hat{y})=0$. It follows that $\Phi$ achieves its maximum value over $\bar{\Omega}^*_r$ at $\hat{y}$. Consequently, at $\hat{y}$, we get \[ D_{\nu^*} H^* \leq C \left( C(\epsilon) + \epsilon M \right). \] Since $\hat{y}$ is arbitrary, we conclude that \begin{equation}\label{nn} 0 \leq D_{\beta \beta} u^* \leq C(\epsilon) + \epsilon M \quad \text{on} \quad \partial \Omega^*. \end{equation} \textbf{Step 3}: The Double Tangential Estimates. We consider the maximization problem: \[ \tilde{M}=\max_{y\in \partial \Omega^*, \eta \in T_{y}(\partial \Omega^*), |\eta|=1}(1 + |y|^2) D_{\eta\eta} u^* (y). \] Let $y_0\in\p\Omega^*$ and the unit tangential vector $\xi \in T_{y_0}(\partial \Omega^*)$ be the maximum point and direction of the above problem. Then we have \[ \tilde{M}=(1+|y_0|^2) D_{\xi\xi}u^*(y_0). \] Let $x_0=P^{-1}(y_0)$ and $e_1$ be the unit vector which has the same direction as $dP^{-1}(\xi)$. Using Proposition \ref{propT}, we can construct a vector field $T$ around $y_0$ with $T(y_0)=\sqrt{1+|y_0|^2}\xi$. Moreover we have $|T(y)|^2\leq 1+|y|^2$.
On $\partial \Omega^*$, we decompose $T$ as \[ T = \tau(T) + \frac{\langle\nu^*,T\rangle}{\langle\beta,\nu^*\rangle} \beta, \] where $\tau(T)$ is tangential to $\partial \Omega^*$ and $\nu^*$ is the unit interior normal of $\p\Omega^*$. Next we can write \[ \beta = \beta^t + \langle\beta,\nu^*\rangle\nu^*, \] where $\beta^t$ is the tangential component of $\beta$ on the tangential space of $\p\Omega^*$. Then, we get \[ \tau(T) = T - \langle\nu^*,T\rangle \nu^* - \frac{\langle \nu^*,T\rangle}{\langle\beta,\nu^*\rangle} \beta^t. \] Using \eqref{tn}, we have \begin{equation}\label{B1} D_{TT} u^* = D_{\tau(T) \tau(T)} u^* + \frac{\langle\nu^*,T\rangle^2}{\langle\beta,\nu^*\rangle^2} D_{\beta \beta} u^*. \end{equation} It is clear that \[ |\tau(T)|^2 = |T|^2 + \langle\nu^*,T\rangle^2 + \left( \frac{\langle\nu^*,T\rangle}{\langle\beta,\nu^*\rangle} \right)^2 |\beta^t|^2 - 2 \langle\nu^*, T\rangle^2 - 2 \frac{\langle\nu^*, T\rangle}{\langle \beta,\nu^*\rangle} \langle T, \beta^t\rangle, \] which implies \[ |\tau(T)|^2 \leq |T|^2 + C \langle\nu^*, T\rangle^2 - 2 \langle\nu^*, T\rangle \frac{\langle\beta^t, T\rangle}{\langle\beta, \nu^*\rangle}. \] We denote $\tilde{T} = \frac{1}{\sqrt{1+|y|^2}}T$; then the above inequality becomes \[ |\tau(T)|^2 \leq \left(1+|y|^2\right) \left( |\tilde{T}|^2 + C \langle\nu^*, \tilde{T}\rangle^2 - 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle} \right). \] Let $\eta=\tau(T)/|\tau(T)|$ be the unit tangential vector field. Using the above inequality, we get \[ D_{\tau(T)\tau(T)} u^*(y) \leq \left( |\tilde{T}|^2 + C \langle\nu^*, \tilde{T}\rangle^2 - 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle} \right) \left( 1 + |y|^2 \right) D_{\eta \eta} u^*(y). \] Now, applying equations \eqref{nn}, \eqref{B1} and the definition of $\tilde{M}$, we have \begin{eqnarray*} D_{TT} u^* &\leq& \left( |\tilde{T}|^2 + C \langle\nu^*, \tilde{T}\rangle^2 - 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle} \right) \tilde{M}+(C(\epsilon) +\epsilon M)\left( \frac{\langle\nu^*, T\rangle}{\langle\beta, \nu^*\rangle} \right)^2. \end{eqnarray*} This implies that, on $\p\Omega^*$, \begin{equation}\label{B2} \frac{D_{TT} u^*}{\tilde{M}}\leq |\tilde{T}|^2 + C \langle\nu^*, \tilde{T}\rangle^2 - 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle} + \frac{C}{\tilde{M}}(C(\epsilon) +\epsilon M)\langle\nu^*, \tilde{T}\rangle^2 . \end{equation} Here we have used the fact that $\langle\beta,\nu^*\rangle$ has a positive lower bound on $\p\Omega^*$. Next we define a function in a tubular neighborhood of $\p\Omega^*$: \begin{equation}\label{funw} w = \frac{D_{TT} u^*}{\tilde{M}} + 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle}- AH^*, \end{equation} where $A$ is a positive constant to be determined. At the point $y=y_0$, since $T(y_0)=\sqrt{1+|y_0|^2}\xi$, according to the definition of $\tilde{M}$, we find \[ \tilde{T} = \xi, \quad \text{and} \quad D_{TT} u^* = \tilde{M}, \] which implies \begin{equation}\label{B21} w(y_0) = 1. \end{equation} Here we have used $h^*=0$ on $\p\Omega^*$.
On the other hand, for any $y \in\p\Omega^*$ and $y\neq y_0$, by \eqref{B2}, we have \begin{equation}\label{B3} w \leq |\tilde{T}|^2 + C \left( 1 + \frac{C (\epsilon) + \epsilon M}{\tilde{M}} \right) \langle\nu^*, \tilde{T}\rangle^2. \end{equation} Since $|T(y)|^2\leq 1+|y|^2$, we have $|\tilde{T}|^2 \leq 1$. Moreover, since the function $\langle \nu^*,\tilde{T}\rangle$ is smooth, for $y\in\p\Omega^*$, we obtain \begin{equation}\label{B4} |\langle\nu^*, \tilde{T}\rangle(y)| = |\langle\nu^*, \tilde{T}\rangle(y) - \langle\nu^*, \tilde{T}\rangle (y_0)| \leq C |y-y_0| \quad \text{near} \quad y_0. \end{equation} Combining \eqref{B3} with \eqref{B4}, we conclude that \begin{equation}\label{B5} w \leq 1 + C \left( 1 + \frac{C (\epsilon) + \epsilon M}{\tilde{M}} \right) |y-y_0|^2. \end{equation} By \eqref{5.1}, \eqref{nn} and the definitions of $M$ and $\tilde{M}$, we have \begin{equation}\label{MM} M \leq C \left( C(\epsilon)+\epsilon M + \tilde{M} \right). \end{equation} Assuming $\tilde{M} \geq 1$, we can derive from \eqref{B5} and the inequality above that, for any $y \in \partial \Omega^*$, \begin{equation}\label{ww} w \leq 1 + C_1 (\epsilon) \left| y - y_0 \right|^2 \quad \text{near} \,\ y_0, \end{equation} if $\epsilon$ is sufficiently small, where $C_1(\epsilon)$ is a constant only depending on $\epsilon$. As in Section 4, we can also define a neighborhood $\Omega^*_r$ of $y_0$ and a barrier function $\varphi^*$ by using \eqref{Omega*} and \eqref{boundary*}. Then we define a test function: \[ \Psi = w - C_1(\epsilon) |y - y_0|^2 - B_1(C(\epsilon) +\epsilon M)\varphi^*, \] where $B_1$ is some undetermined positive constant. Thus, by using \eqref{B21} and $\varphi^*(y_0)=0$, we have $$\Psi(y_0) = 1.$$ Moreover, since $\varphi^*\geq 0$ on $\p\Omega^*_r\cap \p\Omega^*$ for $r$ sufficiently small, \eqref{ww} shows that $\Psi\leq 1=\Psi(y_0)$ on $\p\Omega^*_r\cap \p\Omega^*$; that is, on this part of the boundary, $\Psi$ achieves its maximum value at $y_0$. On $\p\Omega_r^*\cap\Omega^*$, $\varphi^*$ has a positive lower bound. Thus, if $B_1$ is chosen sufficiently large, we have $\Psi\leq 1$ on $\p\Omega_r^*\cap\Omega^*$. Now we estimate $\mathcal{L}^*\Psi$ in $\Omega^*_r$. Using the notation defined by \eqref{L*}, we have \begin{equation}\label{L1} \mathcal{L}^* (|y - y_0|^2) = 2 \sum_kF^{*ij} w^* b_{ik}^* b_{kj}^* \leq C \sum_i F^{*ii}, \quad \text{and}\quad \mathcal{L}^* (-\varphi^*) \geq \theta \sum_i F^{*ii}, \end{equation} where $\theta$ is a positive constant. Next we define $\chi$ as follows: $$\chi = 2 \langle\nu^*, \tilde{T}\rangle \frac{\langle\beta^t, \tilde{T}\rangle}{\langle\beta, \nu^*\rangle}.$$ Then $\chi(y,p) $ is a smooth function depending on $y$ and $p=D u^*$. Thus, by \eqref{funw}, we have \begin{equation}\label{L2} \mathcal{L}^* w = \frac{1}{\tilde{M}} \mathcal{L}^*D_{TT} u^* + \mathcal{L}^* \chi - A \mathcal{L}^* H^*. \end{equation} We will now compute each term in $\mathcal{L}^* w$ individually. It is clear that \[ D_{TT} u^* = T^2 u^* - (D_T T) u^*. \] Moreover we note that \begin{eqnarray*} \begin{aligned} w^*T\frac{u^*}{w^*}=&\;Tu^*-\frac{u^*}{w^*}Tw^* \text{ and } \\ w^*T^2\frac{u^*}{w^*}=&\;T^2u^*-2Tu^*\frac{Tw^*}{w^*}-u^*\frac{T^2w^*}{w^*}+2u^*\frac{(Tw^*)^2}{(w^*)^2}.
\end{aligned} \end{eqnarray*} Therefore, by using \eqref{T2} and the above two equalities, we get \begin{align*} \mathcal{L}^* (D_{TT} u^*) &\geq \mathcal{L}^* \left(2Tu^*\frac{Tw^*}{w^*}+u^*\frac{T^2w^*}{w^*}-2u^*\frac{(Tw^*)^2}{(w^*)^2}-D_T T (u^*)\right)\\ &\qquad +T^2\psi^*+2\psi^*_{u^*}T\psi^*Tu^*+\psi^*_{u^*}T^2u^*+\psi^{*}_{u^*u^*}(Tu^*)^2\\ &\geq\mathcal{L}^* \left(2Tu^*\frac{Tw^*}{w^*}-D_T T (u^*)\right)+\psi^*_{u^*}T^2u^*-C\sum_iF^{*ii}-C. \end{align*} We write \[ D_T T-2 \frac{Tw^*}{w^*} T= \sum_k t_k \frac{\partial}{\partial y_k}. \] Thus we get \begin{align*} \mathcal{L}^*\left(D_TTu^*-2 \frac{Tw^*}{w^*} Tu^*\right)&=\mathcal{L}^* \left( \sum_{k} t_k u^*_k \right) \\ &= \sum_{k,p,q} F^{*ij} w^* b^*_{ip} (t_k u^*_k)_{pq} b^*_{qj}\\ &= \sum_{k}t_k \mathcal{L}^* u^*_k + 2 \sum_{k,p,q} F^{*ij} w^* b^*_{ip} D_pt_kD_{kq}u^* b_{qj}^* \\ &+ \sum_{k,p,q} F^{*ij} w^* b^*_{ip} (t_{k})_{pq} u^*_kb_{qj}^*\\ &\leq C + C \sum_i F^{*ii}+ 2 \sum_{k,p,q} F^{*ij} w^* b^*_{ip} D_pt_s (b^*)^{sr}(b^*)_{rk}D_{kq}u^* b_{qj}^*\\ &\leq C + C \sum_i F^{*ii}, \end{align*} where we have used \eqref{uk} and $$\left|\sum_{k,q} F^{*ij} w^*(b^*)_{rk}D_{kq}u^* b_{qj}^*\right|\leq C$$ for any $1\leq i,r\leq n$. Thus, by combining the above four formulae and $D_{TT}u^* >0$, we obtain \begin{equation}\label{L3} \mathcal{L}^* (D_{TT} u^*) \geq - C - C \sum_i F^{*ii}+\psi^*_{u^*}D_{TT}u^*\geq - C - C \sum_i F^{*ii}. \end{equation} Here we have used $\psi^*_{u^*}=-\frac{\psi_z}{\psi^2}\geq 0$ in view of \eqref{psi*} and \eqref{Cond1}. A straightforward calculation shows \begin{align*} \chi_{i} &= \chi_{y_i} + \sum_s\chi_{p_s} u^*_{si},\\ \chi_{ij} &= \chi_{y_iy_j} +\sum_s \chi_{y_i p_s} u^*_{sj} + \sum_s\chi_{p_s y_j}u^*_{si}+ \sum_{s,t}\chi_{p_sp_t} u^*_{si}u^*_{tj}+\sum_s\chi_{p_s}u^*_{sij}. \end{align*} Then, using \eqref{uk} and \eqref{ukk}, we have \begin{align} \label{L4} \mathcal{L}^* \chi &\geq -C \sum_i F^{*ii} + 2 \sum_{k,l,s}F^{*ij} w^* b^*_{ik} \chi_{y_kp_s} u^*_{sl} b_{lj}^*\\ &+ \sum_{s,t,k,l} \chi_{p_sp_t} F^{*ij} w^* b^*_{ik} u^*_{sk} u^*_{tl}b^*_{lj} - C\nonumber\\ & \geq-C \sum_i F^{*ii} - C-C \sum_{s,k,l} F^{*ij} w^* b^*_{ik} u^*_{sk} u^*_{sl}b^*_{lj} .\nonumber \end{align} On the other hand, by using \eqref{L*H}, \eqref{uk} and the concavity of $h^*$, we get \begin{equation}\label{L5} \mathcal{L}^* (-H^*) \geq \hat{\theta} \sum_{s,k,l}F^{*ij} w^* b^*_{ik} u^*_{ks} u^*_{sl} b^*_{lj} - C, \end{equation} where $\hat{\theta}$ is a small positive constant. Thus, if $A, B_1$ are sufficiently large, combining \eqref{L1}-\eqref{L5}, we obtain \[ \mathcal{L}^* \Psi \geq 0. \] Thus we deduce that $\Psi$ achieves its maximum value over $\bar{\Omega}^*_r$ at $y_0$. Therefore we have \[ D_\beta w(y_0) \leq C(C(\epsilon) + \epsilon M). \] This leads to \[ D_{TT\beta} u^*(y_0) \leq C(C(\epsilon) + \epsilon M) \tilde{M}, \] which implies \begin{equation}\label{LL1} D_{\xi\xi \beta} u^*(y_0) \leq C(C(\epsilon) + \epsilon M) \tilde{M}, \end{equation} in view of $T(y_0)=\sqrt{1+|y_0|^2}\xi$. Differentiating the boundary condition $h^*(Du^*) = 0$ twice in the direction $\xi$ at $y_0$, we get \begin{equation}\label{LL2} D_{\xi\xi \beta} u^* + \sum_{k,l}h^*_{p_kp_l} D_{\xi k} u^* D_{\xi l} u^* +II(\xi,\xi)D_{\nu^* \beta}u^*=0,\end{equation} where $II$ is the second fundamental form of $\p\Omega^*$ with respect to $\nu^*$. Combining \eqref{LL1} with \eqref{LL2}, we get \[ - \sum_{k,l}h^*_{p_kp_l} D_{\xi k} u^* D_{\xi l} u^* \leq C(C(\epsilon) + \epsilon M) \tilde{M}.
\] Therefore, using the concavity of $h^*$, we have \[ (D_{\xi\xi} u^*)^2 \leq C \left( C(\epsilon) + \epsilon M \right) \tilde{M}. \] According to the definition of $\tilde{M}$, we get \[ \tilde{M} \leq C \left( C(\epsilon) + \epsilon M \right). \] Combining this with \eqref{MM} and choosing $\epsilon$ sufficiently small, we conclude that \[ M\leq C(\epsilon). \] This completes the proof. \vspace{5mm} \begin{thebibliography}{DU} \bibitem{Brendle2010} S. Brendle and M. Warren, {\it A boundary value problem for minimal Lagrangian graphs}, J. Differential Geom., {\bf 84} (2010), 267-287. \bibitem{Caffarelli1985} L. Caffarelli, L. Nirenberg, and J. Spruck, {\it The Dirichlet problem for nonlinear second order elliptic equations, III: Functions of the eigenvalues of the Hessian}, Acta Math. {\bf 155} (1985), 261-301. \bibitem{Caffarelli1986} L. Caffarelli, L. Nirenberg, and J. Spruck, {\it Nonlinear second order elliptic equations. IV. Starshaped compact Weingarten hypersurfaces}, in Current topics in partial differential equations, Kinokuniya, Tokyo, 1986, 1-26. \bibitem{Caffarelli1988} L. Caffarelli, L. Nirenberg, and J. Spruck, {\it Nonlinear second-order elliptic equations. V. The Dirichlet problem for Weingarten hypersurfaces}, Comm. Pure Appl. Math. {\bf 41} (1988), 47-70. \bibitem{Caffarelli1992} L. Caffarelli, {\it Boundary regularity of maps with convex potentials}, Comm. Pure Appl. Math. {\bf 45} (1992), 1141-1151. \bibitem{Caffarelli1996} L. Caffarelli, {\it Boundary regularity of maps with convex potentials}, II, Ann. of Math. {\bf 144} (1996), 453-496. \bibitem{Chen2021} S. Chen, J. Liu, and X.J. Wang, {\it Global regularity for the Monge-Amp\`{e}re equation with natural boundary condition}, Ann. of Math. {\bf 194} (2021), 745-793. \bibitem{Chen2016} S. Chen and A. Figalli, {\it Stability results on the smoothness of optimal transport maps with general costs}, J. Math. Pures Appl. {\bf 106} (2016), 280-295. \bibitem{GuanGuan2002} B. Guan and P. Guan, {\it Convex hypersurfaces of prescribed curvatures}, Ann. of Math. {\bf 156} (2002), 655-673. \bibitem{GuanLi2012} P. Guan, J. Li, and Y. Li, {\it Hypersurfaces of Prescribed Curvature Measure}, Duke Math. J. {\bf 161} (2012), 1927-1942. \bibitem{Guan2002} B. Guan and J. Spruck, {\it The existence of hypersurfaces of constant Gauss curvature with prescribed boundary}, J. Differential Geom. {\bf 62} (2002), 259-287. \bibitem{Guan2021} P. Guan and X. Zhang, {\it A class of curvature type equations}, Pure and Applied Mathematics Quarterly, {\bf 17} (2021), 3. \bibitem{Guan2015} P. Guan, C.~Y. Ren and Z. Wang, {\it Global $C^2$-estimates for convex solutions of curvature equations}, Comm. Pure Appl. Math. {\bf 68} (2015), 1287-1325. \bibitem{Huang2015} R. Huang, {\it On the second boundary value problem for Lagrangian mean curvature flow}, J. Funct. Anal. {\bf 269} (2015), 1095-1114. \bibitem{Huang2022} R. Huang and S. Li, {\it On the second boundary value problem for special Lagrangian curvature potential equation}, Math. Z. {\bf 302} (2022), 391-417. \bibitem{Huang2024} R. Huang, D. Wei, and Y. Ye, {\it The constant mean curvature hypersurfaces with prescribed gradient image}, arXiv preprint arXiv:2411.00817, 2024. \bibitem{Ivochkina1989} N. Ivochkina, {\it Solution of the Dirichlet problem for equations of mth order curvature}, Math. USSR-Sb. {\bf 67} (1990), 317-339; translated from Mat. Sb. {\bf 180} (1989), 867-887. \bibitem{Ivochkina1990} N. Ivochkina, {\it The Dirichlet problem for the curvature equation of order m}. Leningrad Math. J.
{\bf 2} (1991), 631-654; translated from Algebra i Analiz {\bf 2} (1990), 192-217. \bibitem{Jiang2018} F. Jiang and N. Trudinger, {\it On the second boundary value problem for Monge-Amp\`{e}re type equations and geometric optics}, Arch. Ration. Mech. Anal. {\bf 229} (2018), 547-567. \bibitem{Jiao2022} H. Jiao and Z. Wang, {\it The Dirichlet problem for degenerate curvature equations}, J. Funct. Anal. {\bf 283} (2022), 109485. \bibitem{Kitagawa2012} J. Kitagawa, {\it A parabolic flow toward solutions of the optimal transportation problem on domains with boundary}, J. Reine Angew. Math., {\bf 672} (2012), 127-160. \bibitem{Lions1986} P. Lions, N. Trudinger and J. Urbas, {\it The Neumann problem for equations of Monge-Amp\`{e}re type}, Comm. Pure Appl. Math. {\bf 39} (1986), 539--563. \bibitem{Ma2016} X. Ma and J. Xu, {\it Gradient estimates of mean curvature equations with Neumann boundary value problems}, Adv. Math. {\bf 290} (2016), 1010-1039. \bibitem{Ma2018} X. Ma, P. Wang and W. Wei, {\it Constant mean curvature surfaces and mean curvature flow with non-zero Neumann boundary conditions on strictly convex domains}, J. Funct. Anal. {\bf 274} (2018), 252-277. \bibitem{Nirenberg1953} L. Nirenberg, {\it The Weyl and Minkowski problems in differential geometry in the large}, Comm. Pure Appl. Math. {\bf 6} (1953), 337-394. \bibitem{Pogorelov1952} A. Pogorelov, {\it Regularity of a convex surface with given Gaussian curvature}, Mat. Sb. {\bf 31} (1952), 88-103. \bibitem{Pogorelov1964} A. Pogorelov, L. Boron, and A. Rabenstein, {\it Monge-Amp\`{e}re equations of elliptic type}. Groningen, Noordhoff, 1964. \bibitem{Schnurer2002} O. Schn$\ddot{\text{u}}$rer, {\it Translating solutions to the second boundary value problem for curvature flows}, Manuscripta Math. {\bf 108} (2002), 319-347. \bibitem{Schnurer2003} O. Schn$\ddot{\text{u}}$rer and K. Smoczyk, {\it Neumann and second boundary value problems for Hessian and Gauss curvature flows}, Ann. Inst. H. Poincar\'{e} C Anal. Non Lin\'{e}aire {\bf 20} (2003), 1043-1073. \bibitem{Sheng2004} W. Sheng, N. Trudinger, and X.J. Wang, {\it Convex hypersurfaces of prescribed Weingarten curvatures}, Commun. Anal. Geom. {\bf 12} (2004), 213-232. \bibitem{Urbas1996} J. Urbas, {\it Nonlinear oblique boundary value problems for two-dimensional curvature equations}, Adv. Differential Equations {\bf 1} (1996), 301-336. \bibitem{Urbas1997} J. Urbas, {\it On the second boundary value problem for equations of Monge-Amp\`{e}re type}, J. Reine Angew. Math. {\bf 487} (1997), 115-124. \bibitem{Urbas2000} J. Urbas, {\it An interior curvature bound for hypersurfaces of prescribed k-th mean curvature}, J. Reine Angew. Math. {\bf 519} (2000), 41-57. \bibitem{Urbas2001} J. Urbas, {\it The second boundary value problem for a class of Hessian equations}, Comm. Partial Differential Equations, {\bf 26} (2001), 859-882. \bibitem{Urbas2002} J. Urbas, {\it Weingarten hypersurfaces with prescribed gradient image}, Math. Z. {\bf 240} (2002), 53-82. \bibitem{Urbas2007} J. Urbas, {\it A remark on minimal Lagrangian diffeomorphisms and the Monge-Amp\`{e}re equation}, Bull. Austral. Math. Soc. {\bf 76} (2007), 215-218. \bibitem{von2010} G. T. von Nessi, {\it On the second boundary value problem for a class of modified-Hessian equations}, Comm. Partial Differential Equations {\bf 35} (2010), 745-785. \bibitem{Wang2023} C. Wang, R. Huang, and J. Bao, {\it On the second boundary value problem for Lagrangian mean curvature equation}, Calc. Var. Partial Differential Equations {\bf 62} (2023), Paper No.
74. \bibitem{Wang2024} C. Wang, R. Huang, and J. Bao, {\it On the second boundary value problem for a class of fully nonlinear flow III}, J. Evol. Equ. {\bf 24} (2024), Paper No. 52. \bibitem{Wolfson1997} J. Wolfson, {\it Minimal Lagrangian diffeomorphisms and the Monge-Amp\`ere equation}, J. Differential Geom. {\bf 46} (1997), 335-373. \end{thebibliography} \end{document}
2412.09201v1
http://arxiv.org/abs/2412.09201v1
A new type of minimizers in lattice energy and its application
\documentclass[reqno,english]{amsart} \usepackage{amsfonts,amsmath,latexsym,verbatim,amscd,mathrsfs,color,array} \usepackage{amsmath,amssymb,amsthm,amsfonts,graphicx,color} \usepackage{amssymb} \usepackage{pdfsync} \usepackage{epstopdf} \usepackage[colorlinks=true]{hyperref} \usepackage{subfigure} \makeatletter \def\@currentlabel{2.1}\label{e:dispaa} \def\@currentlabel{2.21}\label{e:dispau} \def\@currentlabel{2.22}\label{e:dispav} \def\@currentlabel{2.23}\label{e:dispaw} \def\@currentlabel{2.24}\label{e:dispax} \def\theequation{\thesection.\@arabic\c@equation} \makeatother \let\oldbibliography\thebibliography \renewcommand{\thebibliography}[1]{\oldbibliography{#1}\setlength{\itemsep}{0pt}} \oddsidemargin 0.25in \evensidemargin 0 cm \marginparsep 0pt \topmargin -0.1in \textheight 22.0 cm \textwidth 15 cm \long\def\red#1{{\color{red}#1}} \long\def\blue#1{{\color{black}#1}} \long\def\green#1{{\color{green}#1}} \long\def\comment#1{\marginpar{\raggedright\small$\bullet$\ #1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newtheorem{lemma}{Lemma}[section] \newtheorem{coro}{Corollary}[section] \newtheorem{definition}{Definition} \newtheorem{teo}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{remark}{Remark}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{open problem}{Open Problem}[section] \newtheorem{open question}{Open Question}[section] \newcommand{\bremark}{\begin{remark} \em} \newcommand{\eremark}{\end{remark} } \newtheorem{claim}{Claim} \newtheorem{numerical/experimental results}{Numerical/Experimental results}[section] \newtheorem{theorem}{Theorem}[section] \newcommand{\R}{ {\mathbb R}} \newcommand{\BE}{\begin{equation}} \newcommand{\BEN}{\begin{equation*}} \newcommand{\EE}{\end{equation}} \newcommand{\EEN}{\end{equation*}} \newcommand{\BL}{\begin{lemma}} \newcommand{\EL}{\end{lemma}} \newcommand{\BT}{\begin{theorem}} \newcommand{\ET}{\end{theorem}} \newcommand{\BP}{\begin{proposition}} \newcommand{\EP}{\end{proposition}} \newcommand{\BC}{\begin{corollary}} \newcommand{\EC}{\end{corollary}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \usepackage{amsmath} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \begin{document} \title{A new type of minimizers in lattice energy and its application} \author{Kaixin Deng} \author{Senping Luo} \address[K.~Deng]{School of Mathematics and Statistics, Jiangxi Normal University, Nanchang, 330022, China} \address[S.~Luo]{School of Mathematics and Statistics, Jiangxi Normal University, Nanchang, 330022, China} \email[S.~Luo]{[email protected]} \email[K.~Deng]{[email protected]} \begin{abstract} Let $z\in \mathbb{H}:=\{z= x+ i y\in\mathbb{C}: y>0\}$ and $$ \mathcal{K}(\alpha;z):=\sum_{ (m,n)\in \mathbb{Z} ^2 }\frac{{\left| mz+n \right|}^2}{{{\Im}(z)}}e^{-\pi\alpha\frac{ \left|mz+n\right|^2}{\Im(z)}}. $$ In this paper, we characterize the following minimization problem: \begin{equation}\aligned\nonumber \min_{ \mathbb{H} } \big(\mathcal{K}(\alpha;z)-b\mathcal{K}(2\alpha;z)\big). \endaligned\end{equation} We prove that the minimizers exhibit a transition from hexagonal to skinny-rhombic lattices, which is a novel finding in the literature.
\end{abstract} \maketitle \section{Introduction and Statement of Main Results} \setcounter{equation}{0} {\it The fact that chemical elements combine to form crystals, periodic objects where the atoms are arranged in a periodic lattice of points with a limited set of symmetries, has been a basic belief for more than two centuries} (Bindi \cite{Bindi2020}). However, a fundamental problem about crystals remains largely open: what are the fundamental mechanisms behind the spontaneous arrangement of atoms into periodic configurations at low temperatures? (Radin \cite{Radin1987}). This famous problem, known as the `Crystallization Conjecture', was proposed by Radin in 1987; a recent review of this conjecture can be found in Blanc-Lewin \cite{Blanc2015}. We are particularly interested in two-dimensional crystals. These systems capture many essential features of higher dimensions without the added complexity. Two-dimensional crystals play a crucial role in various physical systems, such as lipid monolayers on the surface of water \cite{Kaganer1999}, a monolayer of electrons on the surface of liquid helium, rare-gas clusters \cite{Schwerdtfeger2006}, colloidal systems in two dimensions \cite{Peeters1987}, and dusty plasmas \cite{Nosenko2004}. For a comprehensive overview of these applications, see B\'etermin \cite{Bet2015,Bet2016,Bet2018,BP2017,Betermin2021AHP}. A fundamental mathematical and physical model for the Crystallization Conjecture and two-dimensional crystals is given by: \begin{equation}\aligned\label{EFL} \min_L E_f(L ), \;\;\hbox{where}\;\;E_f(L):=\sum_{\mathbb{P}\in L \backslash\{0\}} f(|\mathbb{P}|^2),\; |\cdot|\;\hbox{is the Euclidean norm on}\;\mathbb{R}^2. \endaligned\end{equation} Here $E_f(L )$ denotes the lattice energy per particle of the crystal, and the summation ranges over all the lattice points except the origin $0$; $f$ is the background potential of the system and $L$ denotes the lattice. For the Riesz potential $f(r^2)=|r|^{-2s}$ with $s>1$, the lattice energy $E_f(L)$ becomes the classical Epstein zeta function; Rankin \cite{Rakin1953}, Cassels \cite{Cassels1963}, Ennola \cite{Ennola1964} and Diananda \cite{Dia1964} proved that the hexagonal lattice is the unique minimizer of the Epstein zeta function up to the action of the modular group. For the Gaussian potential $f(r^2)=e^{-\pi \alpha r^2}$ with $ \alpha>0$, the lattice energy becomes the classical theta function, and Montgomery \cite{Mon1988} proved that the hexagonal lattice still minimizes the lattice energy. In fact, Montgomery's theorem \cite{Mon1988} implies the results of Rankin \cite{Rakin1953}, Cassels \cite{Cassels1963}, Ennola \cite{Ennola1964} and Diananda \cite{Dia1964}. The motivation for this research came from purely number-theoretic interest, and it turns out that these theorems have deep applications to lattice energies. In this paper, we are interested in the lattice energy of the following difference form: \begin{equation}\aligned\label{EFL1} \min_L E_{f}(L ), \;\;\hbox{where}\;\;E_f(L):=E_{f_1}(L)-E_{f_2}(L),\; f=f_1-f_2, \endaligned\end{equation} where $f_1, f_2$ are two potentials of simple form. Mathematically, the problem \eqref{EFL1} is equivalent to the problem \eqref{EFL}, but physically, it introduces new insights.
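As a brief sketch of the equivalence just mentioned (assuming, as is the case for the potentials considered below, that the lattice sums of $f_1$ and $f_2$ converge absolutely), one simply splits the summation:
\begin{equation}\aligned\nonumber
E_{f}(L)=\sum_{\mathbb{P}\in L \backslash\{0\}} \big(f_1(|\mathbb{P}|^2)-f_2(|\mathbb{P}|^2)\big)=E_{f_1}(L )-E_{f_2}(L ).
\endaligned\end{equation}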
For more details on this model and its applications in physics, we refer to B\'etermin \cite{Bet2018,Betermin2021JPA,Betermin2021AHP,Betermin2021Arma}, B\'etermin-Petrache \cite{Bet2019AMP}, B\'etermin-Faulhuber-Kn\"{u}pfer \cite{Bet2020}, and B\'etermin-Friedrich-Stefanelli \cite{Betermin2021LMP}. The study of problem \eqref{EFL1} was initiated with differences of Yukawa potentials (B\'etermin \cite{Bet2016}), the Lennard-Jones potential (B\'etermin \cite{Bet2018}), and the Morse potential (B\'etermin \cite{Bet2019}). In this paper, we consider a potential of the form \begin{equation}\aligned\label{Potential} f=f_1-f_2,\;\;\hbox{where}\;f_1(r^2)=r^2e^{-\pi \alpha r^2},\;\;f_2(r^2)=br^2e^{-\pi 2\alpha r^2},\;\hbox{and}\;\alpha\geq2,\;b>0. \endaligned\end{equation} Let $ z\in \mathbb{H}:=\{z\in\mathbb{C}: \Im(z)>0\}$ and $\Lambda =\frac{1}{\sqrt{\Im(z)}}\big({\mathbb Z}\oplus z{\mathbb Z}\big)$ be the lattice of unit density in $ \mathbb{R}^2$. Define \begin{equation}\aligned\label{define} \mathcal{K}(\alpha;z):=\underset{\mathbb{P}\in \Lambda}{\sum}{\left| \mathbb{P} \right|}^2{e}^{-\pi\alpha{\left| \mathbb{P} \right|}^2}=\sum_{ (m,n)\in \mathbb{Z} ^2 }\frac{{\left| mz+n \right|}^2}{{{\Im}(z)}}e^{-\pi\alpha\frac{ \left|mz+n\right|^2}{\Im(z)}}. \endaligned\end{equation} Then the lattice energy \eqref{EFL1} under potential \eqref{Potential} becomes \begin{equation}\aligned\nonumber \mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z),\;\hbox{where}\;\alpha\geq2,\;b>0. \endaligned\end{equation} It is interesting to study the phase transitions of the lattice energy under the potential \eqref{Potential}. To this end, we prove the following theorem: \begin{theorem}\label{Th1} Assume that $ \alpha \geq 2$. Consider the lattice energy: \begin{equation}\aligned\nonumber \underset{z\in \mathbb{H}}{\min} \big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big). \endaligned\end{equation} Then, up to the action of the modular group, there exist two thresholds $b_{c_1}<b_{c_2}$, where $b_{c_1}$ depends on $\alpha$ with bound $b_{c_1}\in(2,2\sqrt2)$ and $b_{c_2}=2\sqrt2$ is independent of $\alpha$. Specifically, the following holds: \begin{itemize} \item [(1)] For $b\leq b_{c_1}$, the minimizer is $e^{i\frac{\pi}{3}}$, corresponding to a hexagonal lattice. \item [(2)] For $b_{c_1}<b<b_{c_2}$, the minimizer is $\frac{1}{2}+i y_{b}$ with $y_b>\frac{\sqrt3}{2}$, corresponding to a skinny-rhombic lattice. Furthermore, as $b$ approaches $2\sqrt{2}$, $y_{b}$ approaches $+\infty$. \item [(3)] For $b\geq b_{c_2}$, the minimizer does not exist. \end{itemize} \end{theorem} B\'etermin discovered the hexagonal-rhombic-square-rectangular phase transitions of the system under Lennard-Jones potential and Morse potential in \cite{Bet2018} and \cite{Bet2019}, respectively. A rigorous proof of such phase transitions can be found for tri-copolymer systems (Luo-Ren-Wei \cite{Luo2019}) and for Bose-Einstein condensates (Luo-Wei \cite{Luo2022}), respectively. In Theorem \ref{Th1}, we uncover a new type of phase transition from hexagonal to skinny-rhombic lattices. \begin{figure} \centering \includegraphics[scale=0.45]{domain.png} \caption{The existing minimizers and a new type of minimizers.} \label{d} \end{figure} \vskip0.1in This paper is organized as follows. In Section 2, we introduce the symmetries of the function $\mathcal{K}(\alpha;z)$, so that the problem can be reduced to the fundamental domain. In Section 3, we prove that the minimization problem on the fundamental domain can be reduced to its right boundary.
In Section 4, we partially explain the case where the minimizer is the hexagonal point; the methods can be applied to related problems. Finally, in Section 5, we prove Theorem \ref{Th1}. \section{Preliminaries} In this section, we collect some basic properties of the theta function $\theta(\alpha;z):=\sum_{ (m,n)\in \mathbb{Z} ^2 }e^{-\pi\alpha\frac{ \left|mz+n\right|^2}{\Im(z)}}$, along with some useful estimates for the Jacobi theta function. We also introduce a group related to the functional $\theta(\alpha;z)$. The generators of the group are given by \begin{equation}\aligned\label{GroupG1} \mathcal{G}: \hbox{the group generated by} \;\;\tau\mapsto -\frac{1}{\tau},\;\; \tau\mapsto \tau+1,\;\;\tau\mapsto -\overline{\tau}. \endaligned\end{equation} By definition \eqref{GroupG1}, the fundamental domain associated to the modular group $\mathcal{G}$ is \begin{equation}\aligned\label{Fd1} \mathcal{D}_{\mathcal{G}}:=\{ z\in\mathbb{H}: |z|>1,\; 0<x<\frac{1}{2} \}. \endaligned\end{equation} The following lemma characterizes the invariance properties of the theta function under the action of the group $\mathcal{G}$: \begin{lemma}[B\'etermin \cite{Bet2018}, Luo-Wei \cite{Luo2022}]\label{G111} For any $\alpha>0$, any $\gamma\in \mathcal{G}$ and $z\in\mathbb{H}$, $\theta (\alpha; \gamma(z))=\theta (\alpha; z)$. \end{lemma} $\mathcal{K}(\alpha;z)$ has a close connection to the theta function. \begin{lemma}[The relationship between the function $\mathcal{K}(\alpha;z)$ and the theta function]\label{2lem2} \begin{equation}\aligned\nonumber \mathcal{K}(\alpha;z)=-\frac{1}{\pi} \frac{\partial}{\partial{\alpha}}\theta (\alpha;z). \endaligned\end{equation} \end{lemma} The Jacobi theta function is defined as follows: \begin{equation}\aligned\label{Jacobi}\nonumber \vartheta_J(z;\tau):=\sum_{n=-\infty}^\infty e^{i\pi n^2 \tau+2\pi i n z}. \endaligned\end{equation} The classical one-dimensional theta function is given by \begin{equation}\aligned\label{TXY} \vartheta(X;Y):=\vartheta_J(Y;iX)=\sum_{n=-\infty}^\infty e^{-\pi n^2 X} e^{2n\pi i Y}. \endaligned\end{equation} By the Poisson summation formula, it holds that \begin{equation}\aligned\label{Poisson} \vartheta(X;Y)=X^{-\frac{1}{2}}\sum_{n=-\infty}^\infty e^{-\pi \frac{(n-Y)^2}{X}} . \endaligned\end{equation} To estimate quotients of derivatives of $\vartheta(X;Y)$, we set \begin{equation}\aligned\nonumber \underline\vartheta_{1}(X)&:=4\pi e^{-\pi X}\big(1-\mu(X)\big), \:\:\:\:\:\:\:\:\:\:\overline{\vartheta}_{1}(X):=4\pi e^{-\pi X}\big(1+\mu(X)\big),\:\:\:\:\:\:\\ \underline\vartheta_{2}(X)&:=\pi e^{-\frac{\pi}{4X}}X^{-\frac{3}{2}},\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\overline{\vartheta}_{2}(X):=X^{-\frac{3}{2}},\:\:\:\:\:\:\:\:\:\:\:\:\:\\ \endaligned\end{equation} and \begin{equation}\aligned\label{duct} \mu(X)&:=\sum_{n=2}^{\infty}n^2e^{-\pi (n^2-1) X},\:\:\:\:\:\:\:\:\:\:\:\:\hat{\mu}(X):=\sum_{n=2}^{\infty}n^2e^{-\pi (n^2-1) X}(-1)^{n+1},\\ \nu(X)&:=\sum_{n=2}^{\infty}n^4e^{-\pi (n^2-1) X},\:\:\:\:\:\:\:\:\:\:\:\:\hat{\nu}(X):=\sum_{n=2}^{\infty}n^4e^{-\pi (n^2-1) X}(-1)^{n+1}.\\ \endaligned\end{equation} There are some useful estimates for quotients of 1-d theta functions and their derivatives. \begin{lemma}[Luo-Wei \cite{Luo2022}]\label{2lem4} Assume that $\sin(2\pi Y)>0$.
It holds that
\begin{itemize}
\item [(1)] if $ X>\frac{1}{5}$, then $ -\overline{\vartheta}_{1}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta(X;Y)\leq-\underline\vartheta_{1}(X)\sin(2\pi Y);$
\item [(2)] if $X< \frac{\pi}{\pi+2} $, then $-\overline\vartheta_{2}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta(X;Y)\leq-\underline\vartheta_{2}(X)\sin(2\pi Y).$
\end{itemize}
\end{lemma}
\begin{remark}\label{2rem1}
By Lemma \ref{2lem4}, $-\frac{\partial}{\partial Y}\vartheta(X;Y)>0$ for $X>0,Y\in(0,\frac{1}{2})$.
\end{remark}
Furthermore, the following estimates for higher order derivatives of quotients of 1-d theta functions hold:
\begin{lemma}[\cite{Luo2023,DK2024}]\label{2lem4add}
Assume that $Y>0$ and $k\in \mathbb{N}^{+}$. It holds that
\begin{itemize}
\item [(1)] if $ X>\frac{1}{5}$, then $\left|\frac{\vartheta_{Y}(X;kY)}{\vartheta_{Y}(X;Y)} \right|\leq k\cdot \frac{1+\mu(X)}{1-\mu(X)};$
\item [(2)] if $X< \frac{\pi}{\pi+2} $, then $\left|\frac{\vartheta_{Y}(X;kY)}{\vartheta_{Y}(X;Y)} \right|\leq k\cdot \frac{1}{\pi}e^{\frac{\pi}{4X}};$
\item [(3)] if $X\geq \frac{1}{5}$, then $-{\pi}\cdot \frac{1+{\nu}(X)}{1+{\mu}(X)}\leq \frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)} \leq -{\pi} \cdot\frac{1+\hat{\nu}(X)}{1+\hat{\mu}(X)};$
\item [(4)] if $0<X\leq\frac{1}{2}$, then $\frac{\frac{3}{4}{X}^2+2{\pi}^2 e^{-\frac{\pi}{X}}}{-\frac{1}{2}{X}^3+2\pi{X}^2e^{-\frac{\pi}{X}}}\leq \frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)} \leq \frac{\pi}{4{X}^2};$
\item [(5)] if $ X\geq \frac{1}{5}$, then $\left|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)} \right|\leq k\pi \cdot \frac{1+\nu(X)}{1-\mu(X)};$
\item [(6)] if $0<X\leq \frac{9}{20}$, then $\left|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)} \right|\leq \frac{k}{4X^2}e^{\frac{\pi}{4X}}.$
\end{itemize}
\end{lemma}
\section{The transversal monotonicity}
We define the vertical line $\Gamma_{c}$ in the upper half-plane $\mathbb{H}$ as follows:
$$\Gamma_{c}:=\{z\in\mathbb{H}: \Re(z)=\frac{1}{2},\; \Im(z)\geq\frac{\sqrt3}{2}\}.$$
By the group invariance (Lemma \ref{2lem3}), one has
\begin{equation}\aligned\label{T1}
\underset{z\in \mathbb{H}}{\min}(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z))=\underset{z\in \overline{\mathcal{D}_{\mathcal{G}}}}{\min}(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)).
\endaligned\end{equation}
In this section, we aim to establish the following theorem:
\begin{theorem}\label{3Thm1}
Assume that $ \alpha \geq \frac{3}{2}$. Then for $b\leq 2\sqrt{2}$, it holds that
\begin{equation}\aligned\nonumber
\underset{z\in \mathbb{H}}{\min}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big)&=\underset{z\in \overline{\mathcal{D_{\mathcal{G}}}}}{\min}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big)\\
&=\underset{z\in \Gamma_{c}}{\min}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big).
\endaligned\end{equation}
\end{theorem}
To prove Theorem \ref{3Thm1}, we establish the following monotonicity result:
\begin{proposition}\label{Thm2}
Assume that $ \alpha \geq \frac{3}{2}$ and $b\leq 2\sqrt{2}$. Then it holds that
\begin{equation}\aligned\nonumber
\frac{\partial}{\partial{x}}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big)< 0\quad\text{for } z\in \mathcal{D_{G}}.
\endaligned\end{equation}
\end{proposition}
Note that Proposition \ref{Thm2} implies Theorem \ref{3Thm1}. The rest of this section is devoted to proving Proposition \ref{Thm2}.
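Before turning to the proof, Proposition \ref{Thm2} can be spot-checked numerically. The sketch below is only a heuristic check and not a substitute for the estimates that follow: the truncation level, the finite-difference step, and the sample grid are ad hoc choices. It approximates $\frac{\partial}{\partial x}\big(\mathcal{K}(\alpha;z)-b\,\mathcal{K}(2\alpha;z)\big)$ by central differences at sample points of the fundamental domain and reports the largest value found, which is expected to be negative.
\begin{verbatim}
import numpy as np

def K(alpha, z, N=30):
    # Truncated lattice sum K(alpha; z), as in the sketch above.
    m, n = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    q = np.abs(m * z + n) ** 2 / z.imag
    return np.sum(q * np.exp(-np.pi * alpha * q))

def dE_dx(alpha, b, x, y, h=1e-4, N=30):
    # Central difference in x of K(alpha; z) - b*K(2*alpha; z) at z = x + i*y.
    E = lambda t: K(alpha, t + 1j * y, N) - b * K(2 * alpha, t + 1j * y, N)
    return (E(x + h) - E(x - h)) / (2 * h)

if __name__ == "__main__":
    alpha, b = 2.0, 2.0 * np.sqrt(2.0)
    worst = -np.inf
    # sample points of D_G: 0 < x < 1/2, |z| > 1 (here 1.05 <= y <= 3)
    for x in np.linspace(0.05, 0.45, 9):
        for y in np.linspace(1.05, 3.0, 9):
            worst = max(worst, dE_dx(alpha, b, x, y))
    print(worst)  # expected to be negative, consistent with the proposition
\end{verbatim}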
\subsection{The estimates}
Using the ideas from B\'etermin \cite{Bet2016} and Luo-Wei \cite{LW2022,Luo2023,Luo2024+}, we will utilize the exponential expansion of the theta function, so that Proposition \ref{Thm2} is reduced to the study of the Jacobi theta function.
\begin{lemma}[\cite{Luo2022,Mon1988}]\label{3Lemma1} We have the exponential expansion of $\theta (\alpha;z)$. For $\alpha, y>0$, it holds that
\begin{equation}\aligned\nonumber
\theta (\alpha;z) &=\sqrt{\frac{y}{\alpha}}\sum_{n\in\mathbb{Z}}e^{-\alpha \pi y n^2}\vartheta(\frac{y}{\alpha};nx)\\
&=2\sqrt{\frac{y}{\alpha}}\sum_{n=1}^\infty e^{-\alpha \pi y n^2}\vartheta(\frac{y}{\alpha};nx)+\sqrt{\frac{y}{\alpha}}\vartheta(\frac{y}{\alpha};0).
\endaligned\end{equation}
\end{lemma}
Based on Lemmas \ref{2lem2} and \ref{3Lemma1}, we derive the exponential expansion of the function $(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z))$ in Lemma \ref{3Lemma2}. The proof of Lemma \ref{3Lemma2} is straightforward, hence we omit it.
\begin{lemma}\label{3Lemma2}
We have the following expression of $(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z))$: for $\alpha,y>0$, it holds that
\begin{equation}\aligned\nonumber
\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)=&\frac{1}{\pi}{2}^{-\frac{5}{2}}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}}\Big( 2\sqrt{2} \alpha\sum_{n\in\mathbb{Z}} e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};nx) -b \alpha \sum_{n\in\mathbb{Z}} e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};nx)\\
&+4\sqrt{2}\pi \alpha^2y\sum_{n\in\mathbb{Z}} n^2e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};nx) -4 \pi b \alpha^2y\sum_{n\in\mathbb{Z}} n^2e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};nx)\\
&+4\sqrt{2}y\sum_{n\in\mathbb{Z}} e^{-\pi\alpha y n^2}\vartheta_X(\frac{y}{\alpha};nx) -b y\sum_{n\in\mathbb{Z}} e^{-2\pi\alpha y n^2}\vartheta_X(\frac{y}{2 \alpha};nx)\Big).
\endaligned\end{equation}
\end{lemma}
By Lemma \ref{3Lemma2}, we have:
\begin{lemma}\label{3Lemma3}
We have the following identity for the partial $x$-derivative of the function $(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z))$:
\begin{equation}\aligned\nonumber
-\frac{\partial}{\partial{x}}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big)=\frac{\sqrt{2}}{4\pi}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}}e^{-\pi\alpha y} \cdot \big(I(\alpha;z;b)+\mathcal{E}(\alpha;z;b)\big),
\endaligned\end{equation}
where
\begin{equation}\aligned\label{3eq2}
I(\alpha;z;b):=&-2\sqrt{2}\alpha \vartheta_{Y}(\frac{y}{\alpha};x)-4\sqrt{2}\pi \alpha^2y\vartheta_{Y}(\frac{y}{\alpha};x)-4\sqrt{2}y\vartheta_{XY}(\frac{y}{\alpha};x)\\
&+b \alpha {e}^{-\pi\alpha y}\vartheta_{Y}(\frac{y}{2 \alpha};x)+4\pi b\alpha^2ye^{-\pi\alpha y }\vartheta_{Y}(\frac{y}{2 \alpha};x) +by e^{-\pi\alpha y}\vartheta_{XY}(\frac{y}{2 \alpha};x),\\
\endaligned\end{equation}
and
\begin{equation}\aligned\label{3eq3}
\mathcal{E}(\alpha;z;b):=&-2\sqrt{2}\alpha\underset{n=2}{\overset{\infty}{\sum}}n {e}^{-\pi\alpha y (n^2-1)}\vartheta_{Y}(\frac{y}{\alpha};nx) +b \alpha \underset{n=2}{\overset{\infty}{\sum}} n {e}^{-\pi\alpha y (2 n^2-1)}\vartheta_{Y}(\frac{y}{2 \alpha};nx)\\
&-4\sqrt{2}\pi \alpha^2y\underset{n=2}{\overset{\infty}{\sum}} n^3e^{-\pi\alpha y (n^2-1)}\vartheta_{Y}(\frac{y}{\alpha};nx) +4\pi b\alpha^2y\underset{n=2}{\overset{\infty}{\sum}} n^{3}e^{-\pi\alpha y (2{n}^2-1)}\vartheta_{Y}(\frac{y}{2 \alpha};nx)\\
&-4\sqrt{2}y\underset{n=2}{\overset{\infty}{\sum}} n e^{-\pi\alpha y (n^2-1)}\vartheta_{XY}(\frac{y}{\alpha};nx) +by\underset{n=2}{\overset{\infty}{\sum}}n e^{-\pi\alpha y(2 n^2-1)}\vartheta_{XY}(\frac{y}{2 \alpha};nx).
\endaligned\end{equation}
\end{lemma}
In view of Lemma \ref{3Lemma3}, Proposition \ref{Thm2} is equivalent to the following lemma:
\begin{lemma}\label{lem3.4}
Assume that $ \alpha \geq \frac{3}{2},\: b\leq 2\sqrt{2}$. Then for $z\in \mathcal{D_{G}}$, it holds that
\begin{equation}\aligned\nonumber
I(\alpha;z;b)+\mathcal{E}(\alpha;z;b)>0.
\endaligned\end{equation}
\end{lemma}
To prove Lemma \ref{lem3.4}, we will show that $I(\alpha;z;b)$ is the major term and is positive, while $\mathcal{E}(\alpha;z;b)$ is an error term that does not change the sign of the whole expression. Based on \eqref{3eq2} and \eqref{3eq3}, we will provide a deformation of $I(\alpha;z;b)$ and an upper bound for $\mathcal{E}(\alpha;z;b)$ in the following lemma:
\begin{lemma}\label{3Lemma4}
Assume that $\alpha>0,\:b\leq 2\sqrt{2}$.
Then for $z\in \mathcal{D_{G}}$, it holds that \begin{itemize} \item [(1)] Deformation of $I(\alpha;z;b):$ \begin{equation}\aligned\nonumber I(\alpha;z;b)=2\sqrt{2}\cdot\big(-\vartheta_{Y}(\frac{y}{\alpha};x)\big)\cdot\big(\alpha+2\pi{\alpha}^2y+2y\:\frac{\vartheta_{XY}(\frac{y}{\alpha};x)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\big)\\ +b {e}^{-\pi\alpha y}\vartheta_{Y}(\frac{y}{2 \alpha};x)\cdot\big( \alpha+4\pi{\alpha}^2 y+y\: \frac{\vartheta_{XY}(\frac{y}{2\alpha};x)}{\vartheta_{Y}(\frac{y}{2\alpha};x)}\big); \endaligned\end{equation} \item [(2)] Upper bound for $\left|\mathcal{E}(\alpha;z;b)\right|:$ \begin{equation}\aligned\nonumber &\left|\mathcal{E}(\alpha;z;b)\right|\leq2\sqrt{2}\big(-\vartheta_{Y}(\frac{y}{\alpha};x)\big)\Big(\alpha\underset{n=2}{\overset{\infty}{\sum}}n {e}^{-\pi\alpha y (n^2-1)}\left|\frac{\vartheta_{Y}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\right| \\ &\:\:\:\:\:\:\:\:+2\pi \alpha^2y\underset{n=2}{\overset{\infty}{\sum}} n^3e^{-\pi\alpha y (n^2-1)}\left|\frac{\vartheta_{Y}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\right|+2y\underset{n=2}{\overset{\infty}{\sum}} n e^{-\pi\alpha y (n^2-1)}\left|\frac{\vartheta_{XY}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\right|\Big)\\ &\:\:\:\:\:\:\:\:+2\sqrt{2}\big(-\vartheta_{Y}(\frac{y}{2 \alpha};x)\big)\Big( \alpha \underset{n=2}{\overset{\infty}{\sum}} n {e}^{-\pi\alpha y (2 n^2-1)}\left|\frac{\vartheta_{Y}(\frac{y}{2 \alpha};nx)}{\vartheta_{Y}(\frac{y}{2 \alpha};x)}\right|\\ &\:\:\:\:\:\:\:\:+4\pi \alpha^2y\underset{n=2}{\overset{\infty}{\sum}} n^{3}e^{-\pi\alpha y (2{n}^2-1)}\left|\frac{\vartheta_{Y}(\frac{y}{2 \alpha};nx)}{\vartheta_{Y}(\frac{y}{2 \alpha};x)}\right| +y\underset{n=2}{\overset{\infty}{\sum}}n e^{-\pi\alpha y(2 n^2-1)}\left|\frac{\vartheta_{XY}(\frac{y}{2 \alpha};nx)}{\vartheta_{Y}(\frac{y}{2 \alpha};x)}\right|\Big). \endaligned\end{equation} \end{itemize} \end{lemma} \begin{proof} Item (1) follows by \eqref{3eq2}. Item (2) is derived from \eqref{3eq3}, Remark \ref{2rem1}, and $b\leq 2\sqrt{2}$. \end{proof} We will divide the proof of Lemma \ref{lem3.4} into three cases, where {\bf Case A}: $\frac{y}{\alpha}\geq\frac{1}{2}$, {\bf Case B}: $\frac{y}{\alpha}\leq\frac{1}{4}$ and {\bf Case C}: $\frac{y}{\alpha}\in[\frac{1}{4},\frac{1}{2}]$, which will be presented separately in the next three subsections. \subsection{Case A of Lemma \ref{lem3.4}: $\frac{y}{\alpha}\geq\frac{1}{2}$} In this subsection, we shall prove that \begin{lemma}\label{3Lemma9} Assume that $\alpha\geq \frac{5}{4},\:\frac{y}{\alpha}\geq\frac{1}{2}\:\:and\:\:b\leq 2\sqrt{2}$, then for $z\in \mathcal{D_{G}}$, it holds that \begin{itemize} \item [(1)] The lower bound function of $I(\alpha;z;b):$ \begin{equation}\aligned\nonumber I(\alpha;z;b)\geq &8\sqrt{2}\pi e^{-\pi\frac{y}{\alpha}}\sin(2\pi x)\big(1-\mu(\frac{1}{2})\big)\big(\alpha+2\pi{\alpha}^2y-2\pi y\:\frac{1+\nu(\frac{1}{2})}{1+\mu(\frac{1}{2})}\big)\\ &-8\sqrt{2}\pi e^{-\pi y(\alpha+\frac{1}{2\alpha})}\sin(2\pi x)\big(1+\mu(\frac{1}{4})\big)\big(\alpha+4\pi{\alpha}^2y-\pi y\:\frac{1+\hat{\nu}(\frac{1}{4})}{1+\hat{\mu}(\frac{1}{4})}\big). 
\endaligned\end{equation} \item [(2)] The upper bound function of $\left|\mathcal{E}(\alpha;z;b)\right|:$ $$\left|\mathcal{E}(\alpha;z;b)\right|\leq 56\sqrt{2}\pi e^{-\pi \frac{y}{\alpha}}\sin(2\pi x)\cdot 10^{-3}.$$ \item [(3)] $I(\alpha;z;b)+ \mathcal{E}(\alpha;z;b)\geq \frac{2 \sqrt{2}}{25}\pi e^{-\pi \frac{y}{\alpha}}\sin(2\pi x)>0.$ \end{itemize} \end{lemma} \begin{proof} By Lemmas \ref{3Lemma4}, \ref{2lem4} and \ref{2lem4add}, one has \begin{equation}\aligned\label{3.2eq1} I(\alpha;z;b)\geq &8\sqrt{2}\pi e^{-\pi\frac{y}{\alpha}}\sin(2\pi x)\big(1-\mu(\frac{y}{\alpha})\big)\big(\alpha+2\pi{\alpha}^2y-2\pi y\:\frac{1+\nu(\frac{y}{\alpha})}{1+\mu(\frac{y}{\alpha})}\big)\\ &-8\sqrt{2}\pi \: e^{-\pi y(\alpha+\frac{1}{2\alpha})}\sin(2\pi x)\big(1+\mu(\frac{y}{2\alpha})\big)\big(\alpha+4\pi{\alpha}^2y-\pi y\:\frac{1+\hat{\nu}(\frac{y}{2\alpha})}{1+\hat{\mu}(\frac{y}{2\alpha})}\big). \endaligned\end{equation} Notice that $\mu(X)$ and $\nu(X)$ are decreasing as $X\geq\frac{1}{4}$. Then as $\frac{y}{\alpha}\geq\frac{1}{2}$, one has \begin{equation}\aligned\label{3.2eq2} \mu(\frac{y}{\alpha})\leq \mu(\frac{1}{2})=0.0359\cdots,\:\:\:\:\mu(\frac{y}{2\alpha})\leq \mu(\frac{1}{4})=0.3960\cdots. \endaligned\end{equation} Notice that for $X\geq\frac{1}{4}$, we have \begin{equation}\aligned\label{3.2eq3} \frac{1+\nu(X)}{1+\mu(X)},\;\; \frac{1+\mu(X)}{1-\mu(X)}, \;\;\frac{1+\nu(X)}{1-{\mu}(X)}\;\;\mathrm{\:\:are\: \:decreasing },\:\:\frac{1+\hat{\nu}(X)}{1+\hat{\mu}(X)} \mathrm{\:\:is\: \:increasing}. \endaligned\end{equation} Then, there holds that \begin{equation}\aligned\label{3.2eq4} -0.5759\cdots=\frac{1+\hat{\nu}(\frac{1}{4})}{1+\hat{\mu}(\frac{1}{4})}\leq \frac{1+\hat{\nu}(\frac{y}{2\alpha})}{1+\hat{\mu}(\frac{y}{2\alpha})},\;\;\;\; \frac{1+\nu(\frac{y}{\alpha})}{1+\mu(\frac{y}{\alpha})}\leq\frac{1+\nu(\frac{1}{2})}{1+\mu(\frac{1}{2})}=1.1042\cdots. \endaligned\end{equation} Therefore, \eqref{3.2eq1}-\eqref{3.2eq4} yield item (1). By Lemmas \ref{3Lemma4} and \ref{2lem4}, one gets \begin{equation}\aligned\label{3.2eq5} &\left|\mathcal{E}(\alpha;z;b)\right|\leq 8\sqrt{2}\pi e^{-\pi \frac{y}{\alpha}}\sin(2\pi x)\Big(\big(1+\mu(\frac{y}{2\alpha})\big)\big( \alpha\:\underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{y}(\alpha(2n^2-1)-\frac{1}{2\alpha})}\cdot\frac{1+\mu(\frac{y}{2\alpha})}{1-\mu(\frac{y}{2\alpha})}\\ &\;\;+4\pi{\alpha}^2 y \underset{n=2}{\overset{\infty}{\sum}}n^4 e^{-\pi{y}(\alpha(2n^2-1)-\frac{1}{2\alpha})}\frac{1+\mu(\frac{y}{2\alpha})}{1-\mu(\frac{y}{2\alpha})}+\pi y \underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{y}(\alpha(2n^2-1)-\frac{1}{2\alpha})}\frac{1+\nu(\frac{y}{2\alpha})}{1-\mu(\frac{y}{2\alpha})}\big)\\ &\;\;+\big(1+\mu(\frac{y}{\alpha})\big)\cdot\big( \alpha\cdot \mu(\alpha y)\cdot\frac{1+\mu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})} +2\pi{\alpha}^2 y \cdot \nu(\alpha y)\cdot \frac{1+\mu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}+2\pi y \cdot\mu(\alpha y)\cdot \frac{1+\nu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}\big) \Big). \endaligned\end{equation} Note that $z\in \mathcal{D_{G}}$ implies $y>\frac{\sqrt{3}}{2}$. Then for $\alpha\geq \frac{5}{4}$ and $\frac{y}{\alpha}\geq\frac{1}{2}$, by \eqref{3.2eq3}, \eqref{3.2eq5}, along with the monotonicity of $\mu(X)$ and $\nu(X)$, item (2) is deduced. Items (1) and (2) yield item (3). 
\end{proof} \subsection{Case B of Lemma \ref{lem3.4}: $\frac{y}{\alpha}\leq\frac{1}{4}$} In this subsection, we shall prove that \begin{lemma}\label{3Lemma14} Suppose $\alpha\geq 1,\:\frac{y}{\alpha}\in (0,\frac{1}{4}]\:\:and\:\:b\leq 2\sqrt{2}$, then for $z\in \mathcal{D_{G}}$, it holds that \begin{itemize} \item [(1)] The lower bound function of $I(\alpha;z;b):$ \begin{equation}\aligned\nonumber I(\alpha;z;b)\geq &\sin(2\pi x)\big(\frac{\alpha}{y}\big)^{\frac{3}{2}}\big(2\sqrt{2}\pi e^{-\frac{\pi\alpha}{4y}}(\alpha+2\pi{\alpha}^2 y- y\cdot\frac{3(\frac{y}{\alpha})^2+8{\pi}^2 e^{-4\pi}}{(\frac{y}{\alpha})^3-\frac{\pi}{4} e^{-4\pi}} )\\ &\;\;-8 e^{-\pi\alpha y}(\alpha+4\pi{\alpha}^2 y+\frac{\pi{\alpha}^2}{y})\big). \endaligned\end{equation} \item [(2)] The upper bound function of $\left|\mathcal{E}(\alpha;z;b)\right|:$ $$\left|\mathcal{E}(\alpha;z;b)\right|\leq \frac{\sqrt{2}\pi }{25}e^{-\frac{\pi\alpha}{4y}}\sin(2\pi x)\big(\frac{\alpha}{y}\big)^{\frac{3}{2}}.$$ \item [(3)] $I(\alpha;z;b)+ \mathcal{E}(\alpha;z;b)\geq 2\sqrt{2}\pi e^{-\frac{\pi\alpha}{4y}}\sin(2\pi x)\big(\frac{\alpha}{y}\big)^{\frac{3}{2}}>0.$ \end{itemize} \end{lemma} \begin{proof} (1). By Lemmas \ref{3Lemma4}, \ref{2lem4} and \ref{2lem4add}, one has \begin{equation}\aligned\nonumber I(\alpha;z;b)\geq &\sin(2\pi x)\big(\frac{\alpha}{y}\big)^{\frac{3}{2}}\Big(2\sqrt{2}\pi e^{-\frac{\pi\alpha}{4y}}\big(\alpha+2\pi{\alpha}^2 y- y\:\frac{3(\frac{y}{\alpha})^2+8{\pi}^2 e^{-\frac{\pi\alpha}{y}}}{(\frac{y}{\alpha})^3-4\pi(\frac{y}{\alpha})^2 e^{-\frac{\pi\alpha}{y}}} \big)\\ &\;\;\;\;\;\;\;-8 e^{-\pi\alpha y}\big(\alpha+4\pi{\alpha}^2 y+\frac{\pi{\alpha}^2}{y}\big)\Big). \endaligned\end{equation} Note that $\frac{\alpha}{y}\geq4$, then $e^{-\frac{\pi\alpha}{y}}\leq e^{-4\pi}$ and $(\frac{y}{\alpha})^2 e^{-\frac{\pi\alpha}{y}}\leq \frac{1}{16}e^{-4\pi}$. Thus, (1) is obtained. (2). Notice that $z\in \mathcal{D_{G}}$ implies $y>\frac{\sqrt{3}}{2}$. By Lemmas \ref{3Lemma4} and \ref{2lem4}-\ref{2lem4add}, one has \begin{equation}\aligned\label{3lemma16eq} \left|\mathcal{E}(\alpha;z;b)\right|\leq &2\sqrt{2}\pi e^{-\frac{\pi\alpha}{4y}}\sin(2\pi x)\big(\frac{\alpha}{y}\big)^{\frac{3}{2}}\big( \frac{\alpha}{{\pi}^2} \underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{\alpha}[y(n^2-1)-\frac{1}{2y}]}+\frac{2}{\pi}{\alpha}^2y \underset{n=2}{\overset{\infty}{\sum}}n^4 e^{-\pi{\alpha}[y(n^2-1)-\frac{1}{2y}]}\\ &+\frac{{\alpha}^2}{2\pi y}\underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{\alpha}[y(n^2-1)-\frac{1}{2y}]} +\frac{2\sqrt{2}}{{\pi}^2}\alpha\:\underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{\alpha}[y(2n^2-1)-\frac{3}{4y}]}\\ &+\frac{8\sqrt{2}}{\pi}{\alpha}^2y\underset{n=2}{\overset{\infty}{\sum}}n^4 e^{-\pi{\alpha}[y(2n^2-1)-\frac{3}{4y}]}+ \frac{2\sqrt{2}}{\pi}\alpha^2 y^{-1}\underset{n=2}{\overset{\infty}{\sum}}n^2 e^{-\pi{\alpha}[y(2n^2-1)-\frac{3}{4y}]} \big).\\ \endaligned\end{equation} Given that the terms in \eqref{3lemma16eq} are exponentially decaying, they can be effectively bounded. (3). Item (3) follows by items (1) and (2). \end{proof} \subsection{Case C of Lemma \ref{lem3.4}: $\frac{1}{4}\leq\frac{y}{\alpha}\leq \frac{1}{2}$} In this subsection, we shall prove that \begin{lemma}\label{3Lemma19} Assume that $\alpha\geq \frac{3}{2},\:\frac{1}{4}\leq\frac{y}{\alpha}\leq \frac{1}{2}$, and $b\leq 2\sqrt{2}$. 
Then for $z\in \mathcal{D_{G}}$, it holds that
\begin{itemize}
\item [(1)] Lower bound of $I(\alpha;z;b):$
\begin{equation}\aligned\nonumber
I(\alpha;z;b)\geq & 8e^{-\pi\frac{y}{\alpha}}\sin(2\pi x)\Big(\sqrt{2}\pi \big(1-\mu(\frac{1}{4})\big)\big(\alpha+2\pi{\alpha}^2 y-2\pi y\cdot\frac{1+\nu(\frac{1}{4})}{1+\mu(\frac{1}{4})}\big)\\
&-e^{\frac{\pi}{2}}\cdot e^{-\pi \alpha y} {\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}(1+4\pi\alpha y+\pi\alpha {y}^{-1})\Big).
\endaligned\end{equation}
\item [(2)] Upper bound of $\left|\mathcal{E}(\alpha;z;b)\right|:$
$$\left|\mathcal{E}(\alpha;z;b)\right|\leq \frac{6 \sqrt{2}\pi}{125} e^{-\pi \frac{y}{\alpha}}\sin(2\pi x).$$
\item [(3)] $I(\alpha;z;b)+ \mathcal{E}(\alpha;z;b)\geq \frac{2\sqrt{2}\pi}{25} e^{-\pi\frac{y}{\alpha}}\sin(2\pi x)>0.$
\end{itemize}
\end{lemma}
\begin{proof}
(1). For $I(\alpha;z;b)$, as $b\leq 2\sqrt{2}$, by Lemmas \ref{3Lemma4}, \ref{2lem4}, \ref{2lem4add}, one has
\begin{equation}\aligned\nonumber
I(\alpha;z;b)\geq & 8e^{-\pi\frac{y}{\alpha}}\sin(2\pi x)\Big(\sqrt{2}\pi \big(1-\mu(\frac{1}{4})\big)\big(\alpha+2\pi{\alpha}^2 y-2\pi y\cdot\frac{1+\nu(\frac{1}{4})}{1+\mu(\frac{1}{4})}\big)\\
&\;\;-e^{\frac{\pi y}{\alpha}} \cdot e^{-\pi \alpha y} \cdot{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}(1+4\pi\alpha y+\pi\alpha {y}^{-1})\Big).
\endaligned\end{equation}
Since $\frac{1}{4}\leq\frac{y}{\alpha}\leq\frac{1}{2}$, one has $e^{\frac{\pi y}{\alpha}}\leq e^{\frac{\pi }{2}}.$ Thus, item (1) is obtained.
(2). The estimate for item (2) is similar to that for item (2) in Lemma \ref{3Lemma9}; hence we omit the details.
(3). Note that $1-\mu(\frac{1}{4})=0.6039\cdots$ and $\frac{1+\nu(\frac{1}{4})}{1+\mu(\frac{1}{4})}=1.9123\cdots.$ Item (3) is deduced from items (1) and (2).
\end{proof}
\section{Minimization on the vertical line $\Gamma_{c}$}
Recalling Theorem \ref{3Thm1}, for $ \alpha \geq \frac{3}{2}$ and $b\leq 2\sqrt{2}$, we have
\begin{equation}\aligned\nonumber
\underset{z\in \mathbb{H}}{\min}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big) =\underset{z\in \Gamma_{c}}{\min}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big).
\endaligned\end{equation}
In this section, we aim to establish the following result:
\begin{theorem}\label{4Thm1}
Assume that $ \alpha \geq 2$. Then, for $b\leq 2$, up to the action of the modular group,
\begin{equation}\aligned\nonumber
\argmin_{z\in\Gamma_{c}}\big(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\big)=e^{i\frac{\pi}{3}}.
\endaligned\end{equation}
\end{theorem}
Regarding $\mathcal{K}(\alpha;z)$, the following result is known:
\begin{proposition}[Luo-Wei \cite{Luo2023}]\label{Pro} For $\alpha\geq2$, up to the action of the modular group,
\begin{equation}\aligned\nonumber
\argmin_{z\in\Gamma_{c}}\mathcal{K}(\alpha;z)=e^{i\frac{\pi}{3}}.
\endaligned\end{equation}
\end{proposition}
By Proposition \ref{Pro} and the decomposition
$$\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)=\mathcal{K}(\alpha;z)-2 \mathcal{K}(2\alpha;z)+(2-b)\mathcal{K}(2\alpha;z),$$
to prove Theorem \ref{4Thm1} it suffices to prove the following proposition:
\begin{proposition}\label{PropK2}
Assume that $ \alpha \geq 2$. Then, up to the action of the modular group,
\begin{equation}\aligned\nonumber
\argmin_{z\in\Gamma_{c}}\big(\mathcal{K}(\alpha;z)-2\mathcal{K}(2\alpha;z)\big)=e^{i\frac{\pi}{3}}.
\endaligned\end{equation}
\end{proposition}
The proof of Proposition \ref{PropK2} is based on Propositions \ref{4prop1} and \ref{4prop2} (an illustration of the proof can be found in Figure \ref{a}).
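The reduction to the vertical line $\Gamma_{c}$ also makes a direct numerical exploration straightforward. The sketch below is illustrative only: the truncation level, the grid in $y$, and the sample values of $b$ are heuristic choices, and the printed locations are numerical minimizers of a truncated energy, not rigorous ones. For small $b$ the minimum is expected at $y=\frac{\sqrt{3}}{2}$ (the hexagonal point), while for $b$ close to the threshold $2\sqrt{2}$ it is expected to drift to larger $y$, in line with Theorem \ref{Th1}.
\begin{verbatim}
import numpy as np

def K(alpha, z, N=30):
    # Truncated lattice sum K(alpha; z).
    m, n = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    q = np.abs(m * z + n) ** 2 / z.imag
    return np.sum(q * np.exp(-np.pi * alpha * q))

def energy_on_line(alpha, b, y, N=30):
    # K(alpha; z) - b*K(2*alpha; z) on the vertical line z = 1/2 + i*y.
    z = 0.5 + 1j * y
    return K(alpha, z, N) - b * K(2 * alpha, z, N)

if __name__ == "__main__":
    alpha = 2.0
    ys = np.linspace(np.sqrt(3) / 2, 8.0, 2000)
    for b in (1.0, 2.0, 2.7, 2.8):
        vals = [energy_on_line(alpha, b, y) for y in ys]
        print(b, ys[int(np.argmin(vals))])  # location of the numerical minimum
\end{verbatim}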
\begin{figure}
\centering
\includegraphics[scale=0.4]{axis.png}
\caption{A diagram of the proof of Theorem \ref{4Thm1}.}
\label{a}
\end{figure}
\begin{proposition}\label{4prop1}Suppose that $\alpha\geq 2$. Then for $y\in[\frac{\sqrt{3}}{2},2]$, it holds that
\begin{equation}\aligned\nonumber
\frac{\partial}{\partial{y}}\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)\geq 0.
\endaligned\end{equation}
\end{proposition}
For $y\geq2$, we use a direct method as follows:
\begin{proposition}[A direct comparison]\label{4prop2}Suppose that $\alpha\geq 2$. Then for $y\geq 2$, it holds that
\begin{equation}\aligned\nonumber
\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)> \mathcal{K}(\alpha;\frac{1}{2}+i\frac{\sqrt{3}}{2})-2 \mathcal{K}(2\alpha;\frac{1}{2}+i\frac{\sqrt{3}}{2}).
\endaligned\end{equation}
\end{proposition}
We provide the proof of Propositions \ref{4prop1} and \ref{4prop2} in Subsections 4.1 and 4.2, respectively.
\subsection{Proof of Proposition \ref{4prop1}}
By Proposition 3.4 of B\'etermin \cite{Bet2018},
$$y^2\frac{\partial}{\partial{y}}\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)\big|_{y=\frac{\sqrt{3}}{2}}=0,\;\;\mathrm{for\;}\alpha\geq2.$$
Combining this with the identity $\frac{1}{y^2}\frac{\partial}{\partial{y}}(y^2\frac{\partial}{\partial{y}})=\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}}$, we see that to prove Proposition \ref{4prop1} it suffices to prove the following lemma:
\begin{lemma}\label{4lemma1}Assume that $ \alpha \geq 2$. Then for $y\in[\frac{\sqrt{3}}{2},2]$,
\begin{equation}\aligned\nonumber
\big(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}}\big)\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)>0.
\endaligned\end{equation}
\end{lemma}
To better illustrate the proof of Lemma \ref{4lemma1}, we set
\begin{equation}\aligned\label{4eq1}
\mathcal{S}_{1}(\alpha;y)&:= \underset{(n,m)\in\mathbb{Z}^2}{\sum}n^2e^{-\pi\alpha(yn^2+\frac{(m+\frac{n}{2})^2}{y})},\\
\mathcal{S}_{2}(\alpha;y)&:=\underset{(n,m)\in\mathbb{Z}^2}{\sum}(n^2-\frac{(m+\frac{n}{2})^2}{y^2})^2e^{-\pi\alpha(yn^2+\frac{(m+\frac{n}{2})^2}{y})},\\
\mathcal{S}_{3}(\alpha;y)&:=\underset{(n,m)\in\mathbb{Z}^2}{\sum}n^2(yn^2+\frac{(m+\frac{n}{2})^2}{y}) e^{-\pi\alpha(yn^2+\frac{(m+\frac{n}{2})^2}{y})},\\
\mathcal{S}_{4}(\alpha;y)&:=\underset{(n,m)\in\mathbb{Z}^2}{\sum}(n^2-\frac{(m+\frac{n}{2})^2}{y^2})^2(yn^2+\frac{(m+\frac{n}{2})^2}{y}) e^{-\pi\alpha(yn^2+\frac{(m+\frac{n}{2})^2}{y})}.\\
\endaligned\end{equation}
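Since the proof of the identity recorded in Lemma \ref{4lemma2} below is omitted, the following sketch may serve as a numerical sanity check; it is not part of the argument, and the truncation level, the finite-difference step, and the sample points are heuristic choices. It compares the combination of truncations of \eqref{4eq1} appearing in that identity with a finite-difference approximation of $\big(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}}\big)\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)$.
\begin{verbatim}
import numpy as np

def S(i, alpha, y, N=30):
    # Truncations (|n|, |m| <= N) of the sums S_1, ..., S_4 defined above.
    n, m = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    q = y * n**2 + (m + n / 2) ** 2 / y
    qy = n**2 - (m + n / 2) ** 2 / y**2
    w = np.exp(-np.pi * alpha * q)
    if i == 1:
        return np.sum(n**2 * w)
    if i == 2:
        return np.sum(qy**2 * w)
    if i == 3:
        return np.sum(n**2 * q * w)
    return np.sum(qy**2 * q * w)  # i == 4

def K_line(alpha, y, N=30):
    # Truncated K(alpha; 1/2 + i*y) written with the same parametrization.
    n, m = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    q = y * n**2 + (m + n / 2) ** 2 / y
    return np.sum(q * np.exp(-np.pi * alpha * q))

def lhs(alpha, y, h=1e-4):
    # (d^2/dy^2 + (2/y) d/dy)(K(alpha) - 2 K(2 alpha)) by central differences.
    g = lambda t: K_line(alpha, t) - 2 * K_line(2 * alpha, t)
    d1 = (g(y + h) - g(y - h)) / (2 * h)
    d2 = (g(y + h) - 2 * g(y) + g(y - h)) / h**2
    return d2 + 2 * d1 / y

def rhs(alpha, y):
    # The combination of S_1, ..., S_4 appearing in the lemma below.
    a, pi = alpha, np.pi
    return (2 / y) * S(1, a, y) + 8 * pi * a * S(2, 2 * a, y) \
        + (8 * pi * a / y) * S(3, 2 * a, y) + pi**2 * a**2 * S(4, a, y) \
        - (4 / y) * S(1, 2 * a, y) - 2 * pi * a * S(2, a, y) \
        - (2 * pi * a / y) * S(3, a, y) - 8 * pi**2 * a**2 * S(4, 2 * a, y)

if __name__ == "__main__":
    for y in (np.sqrt(3) / 2, 1.3, 2.0):
        print(y, lhs(2.0, y), rhs(2.0, y))  # the last two columns should agree
\end{verbatim}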
Using the expression of $\mathcal{K}(\alpha;z)$ given by \eqref{define}, we give the expression of $(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}})\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)$ in the following lemma. The proof of this lemma is straightforward and thus omitted.
\begin{lemma}[The expression of $(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}})(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy))$]\label{4lemma2}
\begin{equation}\aligned\nonumber
&\big(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}}\big)\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)\\
=&\:\:\frac{2}{y}\mathcal{S}_{1}(\alpha;y)+8\pi\alpha\mathcal{S}_{2}(2\alpha;y)+\frac{8\pi\alpha}{y}\mathcal{S}_{3}(2\alpha;y)+{\pi}^2{\alpha}^2\mathcal{S}_{4}(\alpha;y)\\
&-\frac{4}{y}\mathcal{S}_{1}(2\alpha;y)-2\pi\alpha\mathcal{S}_{2}(\alpha;y)-\frac{2\pi\alpha}{y}\mathcal{S}_{3}(\alpha;y)-8{\pi}^2{\alpha}^2\mathcal{S}_{4}(2\alpha;y).
\endaligned\end{equation}
\end{lemma}
In view of the expression provided in Lemma \ref{4lemma2}, we first provide the lower bound estimates of $\mathcal{S}_{i}(\alpha;y)$ $(i=1,\cdots,4)$.
\begin{lemma}[The lower bound estimates]\label{lem4.4} For $\alpha,y>0$, it holds that
\begin{itemize}
\item [(1)] $\mathcal{S}_{1}(\alpha;y)\geq 4e^{-\pi\alpha(y+\frac{1}{4y})};$
\item [(2)] $\mathcal{S}_{2}(2\alpha;y)\geq \frac{2}{y^4}e^{-2\pi\frac{\alpha}{y}}+4(1-\frac{1}{4y^2})^2e^{-2\pi\alpha(y+\frac{1}{4y})};$
\item [(3)] $\mathcal{S}_{3}(2\alpha;y)\geq (4y+\frac{1}{y})e^{-2\pi\alpha(y+\frac{1}{4y})};$
\item [(4)] $\mathcal{S}_{4}(\alpha;y)\geq \frac{2}{y^5}e^{-\frac{\pi\alpha}{y}}+4(1-\frac{1}{4y^2})^2(y+\frac{1}{4y})e^{-\pi\alpha(y+\frac{1}{4y})}.$
\end{itemize}
\end{lemma}
The following results were derived in Luo-Wei \cite{LW2022}.
\begin{lemma}[Luo-Wei \cite{LW2022}]\label{4lemma8} For $\alpha\geq2,y\in[\frac{\sqrt{3}}{2},2]$, it holds that
\begin{itemize}
\item [(1)] $\mathcal{S}_{1}(2\alpha;y)\leq4e^{-2\pi\alpha(y+\frac{1}{4y})}\cdot(1+\epsilon_{a});$
\item [(2)] $\mathcal{S}_{2}(\alpha;y)\leq\frac{2}{y^4}e^{-\pi\frac{\alpha}{y}}\cdot(1+\epsilon_{b}),$
\end{itemize}
where
\begin{equation}\aligned\label{4lem7eq1}\nonumber
{\epsilon}_{a}:=&2e^{-2\pi\alpha(3y-\frac{1}{4y})}(1+\underset{n=2}{\overset{\infty}{\sum}}n^2e^{-8\pi\alpha y(n^2-1)})\cdot(1+2\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi{n}^2\frac{2\alpha}{y}})+\underset{n=2}{\overset{\infty}{\sum}}e^{-\frac{2\pi\alpha}{y}n\cdot(n-1)}\\
&+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^{2}e^{-8{\pi}{\alpha}y(n-1)\cdot n} +\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^{2}e^{-8{\pi}{\alpha}y(n-1)\cdot n}\underset{m=2}{\overset{\infty}{\sum}}e^{-\frac{2\pi\alpha}{y}m\cdot(m-1)},\\
\endaligned\end{equation}
and $\epsilon_{b}:=\epsilon_{b,1}+\epsilon_{b,2}+\epsilon_{b,3}+\epsilon_{b,4}$, with each $\epsilon_{b,i}\:(i=1,2,3,4)$ given by
\begin{equation}\aligned\nonumber
\epsilon_{b,1} :&=2y^4 e^{-\pi\alpha y}\cdot(1+\sum_{n=2}^\infty e^{-\frac{4\pi\alpha}{y}n\cdot(n-1)})\cdot (1+\sum_{n=2}^\infty (2n-1)^4 e^{-4\pi\alpha y n(n-1)}),\\
\epsilon_{b,2} :&=\frac{1}{8}e^{-\pi\alpha y}\cdot(1+\sum_{n=2}^\infty (2n-1)^4 e^{-\frac{4\pi\alpha}{y}n(n-1)}) \cdot(1+\sum_{n=2}^\infty e^{-4\pi\alpha yn(n-1)}),\\
\epsilon_{b,3} :&=16y^4 e^{-\pi \alpha (4y-\frac{1}{y})}\cdot(1+\sum_{n=2}^\infty n^4e^{-4\pi\alpha y(n^2-1)})\cdot(1+2\sum_{n=1}^\infty e^{-\pi\frac{\alpha}{y}n^2}),\\
\epsilon_{b,4} :&=y^4 e^{-\pi \alpha (4y-\frac{1}{y})}\cdot(1+\sum_{n=2}^\infty e^{-4\pi\alpha y(n^2-1)})\cdot(1+2\sum_{n=1}^\infty \frac{n^4}{y^4}e^{-\pi\frac{\alpha}{y}n^2}).
\endaligned\end{equation} Numerically, ${\epsilon}_{a}\leq4\cdot 10^{-6},$ and $\epsilon_{b}\leq6\cdot 10^{-3}.$ \end{lemma} Next, we provide upper bound of $\mathcal{S}_{j}(\alpha;y), j=3,4$. \begin{lemma}[Upper bound of $\mathcal{S}_{j}(\alpha;y), j=3,4$]\label{4lemma10} For $\alpha\geq2,y\in[\frac{\sqrt{3}}{2},2]$, it holds that \begin{itemize} \item [(1)] $\mathcal{S}_{3}(\alpha;y)\leq4ye^{-\pi\alpha(y+\frac{1}{4y})}\cdot(1+\epsilon_{c}),$ \item [(2)] $\mathcal{S}_{4}(2\alpha;y) \leq \frac{2}{y^5}e^{-\frac{2\pi\alpha}{y}}\cdot(1+\epsilon_{d})+4ye^{-2\pi\alpha(y+\frac{1}{4y})}\cdot(1-\frac{1}{4y^2}-\frac{1}{16y^4}+\epsilon_{e}).$ \end{itemize} where ${\epsilon}_{c}:=\underset{j=1}{\overset{6}{\sum}}\epsilon_{c,j}$. Here each $\epsilon_{c,j}\:(j=1,2,\cdots,6)$ is small and expressed by \begin{equation}\aligned\nonumber \epsilon_{c,1}:&=\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^4e^{-4{\pi}{\alpha}y(n-1)\cdot n},\:\:\:\: \epsilon_{c,2}:=\underset{n=2}{\overset{\infty}{\sum}}e^{-\pi\alpha\frac{(n-1)\cdot n}{y}},\:\:\:\:\: \epsilon_{c,3}:=\epsilon_{c,1}\cdot\epsilon_{c,2},\\ \epsilon_{c,4}:&=\frac{1}{4{y}^2}(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^2e^{-4{\pi}{\alpha}y(n-1)\cdot n})\cdot(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^2e^{-\pi\alpha\frac{(n-1)\cdot n}{y}}),\\ \epsilon_{c,5}:&=8 e^{-{\pi}{\alpha}(3y-\frac{1}{4y})}(1+\underset{n=2}{\overset{\infty}{\sum}}n^4e^{-4{\pi}{\alpha}y(n^2-1)})\cdot(1+2\underset{m=1}{\overset{\infty}{\sum}}e^{-\pi\alpha\frac{m^2}{y}}),\\ \epsilon_{c,6}:&=\frac{4}{{y}^2}e^{-3{\pi}{\alpha}(y+\frac{1}{4y})}(1+\underset{n=2}{\overset{\infty}{\sum}}n^2e^{-4\pi\alpha{y}(n^2-1)})\cdot(1+\underset{m=2}{\overset{\infty}{\sum}} m^2e^{-\pi\alpha\frac{m^2-1}{y}}).\\ \endaligned\end{equation} and \begin{equation}\aligned\nonumber {\epsilon}_{d}:=&2\underset{n=1}{\overset{\infty}{\sum}}e^{-8{\pi}{\alpha}y{n}^2}+\underset{n=2}{\overset{\infty}{\sum}}n^6e^{-2\pi\alpha\frac{n^2-1}{y}}+ 2\underset{n=1}{\overset{\infty}{\sum}}e^{-8{\pi}{\alpha}y{n}^2} \underset{m=2}{\overset{\infty}{\sum}}m^6e^{-2\pi\alpha\frac{m^2-1}{y}}\;\;\;\;\;\;\;\;\;\;\\ &+64{y}^6e^{-2{\pi}{\alpha}(4y-\frac{1}{y})}(1+\underset{n=2}{\overset{\infty}{\sum}}n^{6}e^{-8{\pi}{\alpha}y(n^2-1)})\cdot(1+2\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi{n}^2\frac{2\alpha}{y}}), \endaligned\end{equation} \begin{equation}\aligned\nonumber {\epsilon}_{e}:=&\:\:\frac{1}{64{y}^6}(1+\underset{n=2}{\overset{\infty}{\sum}}e^{-8{\pi}{\alpha}y(n-1)\cdot n})(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^{6}e^{-{2\pi}{\alpha}\frac{(n-1)\cdot n}{y}})+\underset{n=2}{\overset{\infty}{\sum}}e^{-2{\pi}{\alpha}\frac{(n-1)\cdot n}{y}}\\ &+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^{6}e^{-8{\pi}{\alpha}y(n-1)\cdot n}+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^{6}e^{-8{\pi}{\alpha}y(n-1)\cdot n}\underset{m=2}{\overset{\infty}{\sum}}e^{-2{\pi}{\alpha}\frac{(m-1)\cdot m}{y}}. \endaligned\end{equation} Numerically, $\epsilon_{c}\leq\frac{2}{5}$, $\epsilon_{d}\leq 5\cdot10^{-7},$ and $\epsilon_{e}\leq 4\cdot 10^{-2}.$ \end{lemma} \begin{proof} Since the proofs of the two items in Lemma \ref{4lemma10} are similar, we provide the proof for item (1) only to avoid repetition. 
For simplicity, we denote: \begin{equation}\aligned\label{qiao} \mathcal{S}_{3,a}(\alpha;y)&:=\underset{p\equiv q\equiv0(mod2) }{\sum} y {p}^4 e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})},\:\:\:\: \mathcal{S}_{3,b}(\alpha;y):=\underset{p\equiv q\equiv1(mod2) }{\sum} y{p}^4 e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})},\\ \mathcal{S}_{3,c}(\alpha;y)&:=\underset{p\equiv q\equiv0(mod2) }{\sum} \frac{p^2{q}^2}{4y} e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})},\:\: \mathcal{S}_{3,d}(\alpha;y):=\underset{p\equiv q\equiv1(mod2) }{\sum} \frac{p^2{q}^2}{4y} e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})}. \endaligned\end{equation} Then, by \eqref{4eq1}, the sum $\mathcal{S}_{3}(\alpha;y)$ can be divided into the following four parts: \begin{equation}\aligned\label{moon} \mathcal{S}_{3}(\alpha;y)=\underset{p\equiv q(mod2) }{\sum} p^2(yp^2+\frac{q^2}{4y}) e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})}=\mathcal{S}_{3,a}(\alpha;y)+\mathcal{S}_{3,b}(\alpha;y)+\mathcal{S}_{3,c}(\alpha;y)+\mathcal{S}_{3,d}(\alpha;y). \endaligned\end{equation} Next, we shall calculate the four parts respectively. Using \eqref{qiao}, one has \begin{equation}\aligned\label{moon1} \mathcal{S}_{3,a}(\alpha;y)&=\underset{p=2n,q=2m }{\sum} y {p}^4 e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})} =16y\:\underset{n }{\sum} n^4e^{-4\pi{\alpha}y{n}^2}\underset{m }{\sum}e^{-\pi\alpha\frac{m^2}{y}}\\ &=32y\:e^{-4\pi\alpha{y}}\cdot\big(1+\underset{n=2}{\overset{\infty}{\sum}}n^{4}e^{-4{\pi}{\alpha}y(n^2-1)} \big)\cdot(1+2\underset{m=1}{\overset{\infty}{\sum}}e^{-\pi\alpha\frac{m^2}{y}})=\:\:4 ye^{-\pi\alpha(y+\frac{1}{4y})}\cdot\epsilon_{c,5}, \endaligned\end{equation} and \begin{equation}\aligned\label{moon2} \mathcal{S}_{3,b}(\alpha;y)&=\underset{p=2n-1,q=2m-1 }{\sum} y {p}^4 e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})}=4y\:\underset{n=1 }{\overset{\infty}{\sum}}(2n-1)^4e^{-\pi{\alpha}y(2n-1)^2}\underset{m=1 }{\overset{\infty}{\sum}}e^{-\pi{\alpha}\frac{(2m-1)^2}{4y}}\\ &=4y\:e^{-\pi\alpha(y+\frac{1}{4y})}\big(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^4e^{-4{\pi}{\alpha}y(n-1)\cdot n} \big)\cdot\big(1+\underset{m=2}{\overset{\infty}{\sum}}e^{-\pi\alpha\frac{(m-1)\cdot m}{y}} \big)\\ &=4y\:e^{-\pi\alpha(y+\frac{1}{4y})}\cdot(1+\epsilon_{c,1})\cdot(1+\epsilon_{c,2})=4ye^{-\pi\alpha(y+\frac{1}{4y})} \cdot(1+\epsilon_{c,1}+\epsilon_{c,2}+\epsilon_{c,3}). 
\endaligned\end{equation} Similarly, \begin{equation}\aligned\label{moon3} \mathcal{S}_{3,c}(\alpha;y)&=\underset{p=2n,q=2m }{\sum} \frac{p^{2}q^{2}}{4y} e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})}=\frac{16}{y}\underset{n=1 }{\overset{\infty}{\sum}}n^2e^{-4\pi{\alpha}y{n}^2}\underset{m=1 }{\overset{\infty}{\sum}}m^2e^{-\pi{\alpha}\frac{m^2}{y}}\\ &=\frac{16}{y}e^{-4\pi\alpha(y+\frac{1}{4y})}\big(1+\underset{n=2}{\overset{\infty}{\sum}}n^2e^{-4\pi\alpha{y}(n^2-1)}\big)\big(1+\underset{m=2}{\overset{\infty}{\sum}} m^2e^{-\pi\alpha\frac{m^2-1}{y}}\big)\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\\ &=4ye^{-\pi\alpha(y+\frac{1}{4y})}\cdot\epsilon_{c,6}, \endaligned\end{equation} and \begin{equation}\aligned\label{moon4} \mathcal{S}_{3,d}(\alpha;y)&=\underset{p=2n-1,q=2m-1 }{\sum} \frac{p^{2}q^{2}}{4y} e^{-\pi\alpha(y{p}^2+\frac{q^2}{4y})}\\ &=\frac{1}{y}\underset{n=1 }{\overset{\infty}{\sum}}(2n-1)^2e^{-\pi{\alpha}y(2n-1)^2}\underset{m=1 }{\overset{\infty}{\sum}}(2m-1)^{2}e^{-\pi{\alpha}\frac{(2m-1)^2}{4y}}\\ &=\frac{1}{y}e^{-\pi\alpha(y+\frac{1}{4y})}(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^2e^{-4{\pi}{\alpha}y(n-1)\cdot n})\cdot(1+\underset{n=2}{\overset{\infty}{\sum}}(2n-1)^2e^{-\pi\alpha\frac{(n-1)\cdot n}{y}})\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\\ &=4 y\cdot e^{-\pi\alpha(y+\frac{1}{4y})}\cdot{\epsilon}_{c,4}. \endaligned\end{equation} Thus, \eqref{moon}-\eqref{moon4} together yield item (1). \end{proof} With Lemmas \ref{4lemma2}-\ref{4lemma10} established, we are now ready to prove Lemma \ref{4lemma1}. \begin{proof} By Lemmas \ref{4lemma2}-\ref{4lemma10}, one has \begin{equation}\aligned\label{4.1eq1} \big(\frac{\partial^2}{\partial{y}^2}+\frac{2}{y}\frac{\partial}{\partial{y}}\big)\big(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)\big)\geq \frac{4\pi\alpha}{y^4}e^{-\frac{\pi\alpha}{y}}\big(A(\alpha;y)+B(\alpha;y)\big), \endaligned\end{equation} where \begin{equation}\aligned\nonumber A(\alpha;y):=&\frac{\pi\alpha}{2y}-(1+\epsilon_{b})-4e^{-\frac{\pi\alpha}{y}}\big(\frac{\pi\alpha}{y}(1+\epsilon_{d})-1\big),\:\:B(\alpha;y):=B_{1}(\alpha;y)-B_{2}(\alpha;y), \endaligned\end{equation} and \begin{equation}\aligned\nonumber B_{1}(\alpha;y):=&2y^4e^{-\pi\alpha(y-\frac{3}{4y})}\big(\frac{\pi\alpha y}{2}(1-\frac{1}{4y^2})^2(1+\frac{1}{4y^2})+\frac{1}{\pi\alpha y}-1-\epsilon_{c}\big),\\ B_{2}(\alpha;y):=&8y^4e^{-2\pi\alpha(y-\frac{1}{4y})}\big(\pi\alpha y(1+\epsilon_{e} -\frac{1}{4y^2} -\frac{1}{16y^4})+\frac{1}{2\pi\alpha y}(1+\epsilon_{a})-(1-\frac{1}{4y^2})^2-1-\frac{1}{4y^2}\big). \endaligned\end{equation} Here, \begin{equation}\aligned\nonumber \epsilon_{a}\leq4\cdot 10^{-6},\:\epsilon_{b}\leq6\cdot 10^{-3},\:\epsilon_{c}\leq4\cdot 10^{-1},\:\epsilon_{d}\leq 5\cdot10^{-7},\:\epsilon_{e}\leq 4\cdot 10^{-2}. \endaligned\end{equation} Note that $ \alpha \geq 2$ and $y\in[\frac{\sqrt{3}}{2},2]$ implies that $\frac{\alpha}{y}\geq 1$. Using this observation along with an elementary inequality, \begin{equation}\aligned\label{4.1eq2} \frac{\pi}{2}x-(1+\epsilon_{b})-4e^{-\pi x}\big(\pi x(1+\epsilon_{d})-1\big)\geq10^{-1},\:\:\:\:\mathrm{for}\:\:x\geq1, \endaligned\end{equation} we obtain \begin{equation}\aligned\label{4.1eq3} A(\alpha;y)\geq 10^{-1}. \endaligned\end{equation} For $B(\alpha;y)$, given that $\alpha \geq 2, y\in[\frac{\sqrt{3}}{2},2]$, one has \begin{equation}\aligned\label{4.1eq4} B_{1}(\alpha;y)\geq0,\:\:B_{2}(\alpha;y)\leq B_{2}(2,\frac{\sqrt{3}}{2})\leq 5\cdot 10^{-3}. 
\endaligned\end{equation}
Therefore, by \eqref{4.1eq1} and \eqref{4.1eq4}, we obtain
\begin{equation}\aligned\nonumber
A(\alpha;y)+B(\alpha;y)\geq10^{-1}-5\cdot 10^{-3}>0,
\endaligned\end{equation}
which yields the result.
\end{proof}
\subsection{Proof of Proposition \ref{4prop2}}
We first give the exponential expansion of $\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)$.
\begin{lemma}\label{4lemma0}For $\alpha,y>0$, we have the following expression of $(\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)):$
\begin{equation}\aligned\nonumber
\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)=&\frac{1}{\pi}{2}^{-\frac{5}{2}}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}} \Big(\sum_{n\in\mathbb{Z}}2\sqrt{2} \alpha(1+2\pi\alpha y {n}^2) e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2})\\
&+\sum_{n\in\mathbb{Z}} 4\sqrt{2}ye^{-\pi\alpha y n^2}\vartheta_X(\frac{y}{\alpha};\frac{n}{2})- \sum_{n\in\mathbb{Z}}2 ye^{-2\pi\alpha y n^2}\vartheta_X(\frac{y}{2 \alpha};\frac{n}{2})\\
&- \sum_{n\in\mathbb{Z}}2 \alpha(1+4\pi\alpha y{n}^2) e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2})\Big).\\
\endaligned\end{equation}
\end{lemma}
In view of Lemma \ref{4lemma0}, in order to organize the proof of Proposition \ref{4prop2}, we decompose the expression $\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)$ as follows. We set
\begin{equation}\aligned\label{4eq4}
M_{1}(\alpha;y):=&2\sqrt{2}\alpha\:\vartheta(\frac{y}{\alpha};0)-2\alpha\:\vartheta(\frac{y}{2\alpha};0),\:\:\:\: M_{2}(\alpha;y):=4\sqrt{2}y\:\vartheta_X(\frac{y}{\alpha};0)-2 y\:\vartheta_X(\frac{y}{2 \alpha};0),\\
M_{3}(\alpha;y):=&4\sqrt{2}\alpha(1+2\pi\alpha y)e^{-\pi\alpha y}\vartheta(\frac{y}{\alpha};\frac{1}{2})-4\alpha(1+4\pi\alpha y)e^{-2\pi\alpha y}\vartheta(\frac{y}{2\alpha};\frac{1}{2}),\\
M_{4}(\alpha;y):=&8\sqrt{2}y e^{-\pi\alpha y} \vartheta_X(\frac{y}{\alpha};\frac{1}{2})-4y e^{-2\pi\alpha y}\vartheta_X(\frac{y}{2 \alpha};\frac{1}{2}),\\
\endaligned\end{equation}
and
\begin{equation}\aligned\label{4eq5}
\mathcal{E}_{1}(\alpha;y):=&4\sqrt{2} \alpha\sum_{n\geq2}(1+2\pi\alpha y {n}^2) e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2}),\:\:\:\:\:\: \mathcal{E}_{2}(\alpha;y):=8\sqrt{2}y\sum_{n\geq2} e^{-\pi\alpha y n^2}\vartheta_X(\frac{y}{\alpha};\frac{n}{2}),\\
\mathcal{E}_{3}(\alpha;y):=&-4 \alpha \sum_{n\geq2}(1+4\pi\alpha y {n}^2) e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2}),\:\:\: \mathcal{E}_{4}(\alpha;y):=-4 y\sum_{n\geq2} e^{-2\pi\alpha y n^2}\vartheta_X(\frac{y}{2 \alpha};\frac{n}{2}).
\endaligned\end{equation}
By \eqref{4eq4} and \eqref{4eq5}, we further set
\begin{equation}\aligned\label{4eq3}
M(\alpha;y):=\underset{i=1}{\overset{4}{\sum}}M_{i}(\alpha;y),\:\:\mathcal{E}(\alpha;y):=\underset{i=1}{\overset{4}{\sum}}\mathcal{E}_{i}(\alpha;y).
\endaligned\end{equation}
Thus, by Lemma \ref{4lemma0} and \eqref{4eq4}-\eqref{4eq3}, we have
\begin{equation}\aligned\label{4eq2}
\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)=\frac{1}{\pi}{2}^{-\frac{5}{2}}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}}\big(M(\alpha;y)+\mathcal{E}(\alpha;y)\big).
\endaligned\end{equation}
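The expansion \eqref{4eq2} (equivalently, Lemma \ref{4lemma0} combined with \eqref{4eq4}--\eqref{4eq3}) can also be checked numerically. The sketch below is illustrative only, and the truncation levels are heuristic choices. It compares a direct truncation of the lattice sum defining $\mathcal{K}(\alpha;\frac{1}{2}+iy)-2 \mathcal{K}(2\alpha;\frac{1}{2}+iy)$ with the correspondingly truncated right-hand side of \eqref{4eq2}.
\begin{verbatim}
import numpy as np

def theta1d(X, Y, M=60):
    # Truncation of the one-dimensional theta function vartheta(X; Y).
    n = np.arange(1, M + 1)
    return 1 + 2 * np.sum(np.exp(-np.pi * n**2 * X) * np.cos(2 * np.pi * n * Y))

def theta1d_X(X, Y, M=60):
    # X-derivative of the truncated one-dimensional theta function.
    n = np.arange(1, M + 1)
    return -2 * np.pi * np.sum(n**2 * np.exp(-np.pi * n**2 * X)
                               * np.cos(2 * np.pi * n * Y))

def expansion(alpha, y, n_max=8):
    # Truncated right-hand side of the expansion on the line x = 1/2:
    # n = 0, +-1 give the M_i, the terms with |n| >= 2 give the E_i.
    total = 0.0
    for n in range(-n_max, n_max + 1):
        e1 = np.exp(-np.pi * alpha * y * n**2)
        e2 = np.exp(-2 * np.pi * alpha * y * n**2)
        total += 2 * np.sqrt(2) * alpha * (1 + 2 * np.pi * alpha * y * n**2) \
                 * e1 * theta1d(y / alpha, n / 2) \
               + 4 * np.sqrt(2) * y * e1 * theta1d_X(y / alpha, n / 2) \
               - 2 * alpha * (1 + 4 * np.pi * alpha * y * n**2) \
                 * e2 * theta1d(y / (2 * alpha), n / 2) \
               - 2 * y * e2 * theta1d_X(y / (2 * alpha), n / 2)
    return total * np.sqrt(y) / (np.pi * 2 ** 2.5 * alpha ** 2.5)

def direct(alpha, y, N=30):
    # Direct truncated lattice sum of K(alpha; 1/2+iy) - 2 K(2 alpha; 1/2+iy).
    m, n = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    q = np.abs(m * (0.5 + 1j * y) + n) ** 2 / y
    Ksum = lambda a: np.sum(q * np.exp(-np.pi * a * q))
    return Ksum(alpha) - 2 * Ksum(2 * alpha)

if __name__ == "__main__":
    for y in (np.sqrt(3) / 2, 1.5, 3.0):
        print(y, direct(2.0, y), expansion(2.0, y))  # both values should agree
\end{verbatim}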
By \eqref{4eq2}, Proposition \ref{4prop2} is equivalent to the following lemma.
\begin{lemma}\label{4lemma}Assume that $ \alpha\geq 2$ and $y\geq2$. Then it holds that
\begin{equation}\aligned\nonumber
y^{\frac{1}{2}}\big(M(\alpha;y)+ \mathcal{E}(\alpha;y)\big)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\big(M(\alpha;\frac{\sqrt{3}}{2})+ \mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\big)>0.
\endaligned\end{equation}
\end{lemma}
To prove Lemma \ref{4lemma}, we aim to show that the term
\begin{equation}\nonumber
y^{\frac{1}{2}}M(\alpha;y) - (\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})
\end{equation}
is positive and serves as the principal term, while the term
\begin{equation}\nonumber
y^{\frac{1}{2}}\mathcal{E}(\alpha;y) - (\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})
\end{equation}
is significantly smaller in comparison when $\alpha,y\geq2$. Given the complexity of this problem, our proof will be divided into two cases: $\frac{y}{\alpha}\geq1$ and $\frac{y}{\alpha}\in(0,1)$. The proofs for the cases $\frac{y}{\alpha}\geq 1$ and $\frac{y}{\alpha}\in(0,1)$ will be given in Lemmas \ref{4.2lemma2} and \ref{4.3lemma2}, respectively.
\begin{lemma}\label{4.2lemma2}Assume that $ y\geq\alpha \geq 2$. Then
\begin{itemize}
\item [(1)] $ y^{\frac{1}{2}} M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})\geq \frac{1}{500} \sqrt{y}\alpha>0.$
\item [(2)] $\left|\frac{ \mathcal{E}(\alpha;y)}{\alpha}\right|+\frac{{3}^{\frac{1}{4}}}{2}\left|\frac{\mathcal{E}(\alpha;\frac{\sqrt{3}}{2}) }{\alpha}\right|\leq10^{-6}.$
\item [(3)] $\left|\frac{ y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{y^{\frac{1}{2}}M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})}\right| \leq 10^{-3}.$
\item [(4)] $y^{\frac{1}{2}}\big(M(\alpha;y)+ \mathcal{E}(\alpha;y)\big)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\big(M(\alpha;\frac{\sqrt{3}}{2})+ \mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\big)\geq \frac{1}{500} \sqrt{y}\alpha\:(1-10^{-3})>0.$
\end{itemize}
\end{lemma}
\begin{proof}
(1). Notice that $y\geq\alpha \geq 2$. By Lemmas \ref{lemma4.19} and \ref{lemma4.20}, we have
\begin{equation}\aligned\nonumber
\sqrt{y} M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})\geq \sqrt{y}\alpha\big(\frac{1}{5} -y^{-\frac{1}{2}}(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\cdot \frac{3}{10}\big)\geq\frac{1}{500} \sqrt{y}\alpha.
\endaligned\end{equation}
(2).
By \eqref{4eq5} and Lemma \ref{lem4.23}, we have
\begin{equation}\aligned\nonumber
\left|\frac{\mathcal{E}_{1}(\alpha;y) }{\alpha}\right| =&4\sqrt{2}e^{-4\pi\alpha y}\vartheta(\frac{y}{\alpha};1)\sum_{n\geq2}(1+2\pi\alpha y {n}^2) e^{-\pi\alpha y (n^2-4)}\left|\frac{\vartheta(\frac{y}{\alpha};\frac{n}{2})}{\vartheta(\frac{y}{\alpha};1)}\right|\\
\leq& 4\sqrt{2}e^{-4\pi\alpha y}\vartheta(\frac{y}{\alpha};1)\sum_{n\geq2}(1+2\pi\alpha y {n}^2) e^{-\pi\alpha y (n^2-4)}\\
\leq& 4\sqrt{2}e^{-4\pi\alpha y}\vartheta(\frac{y}{\alpha};1)\Big(1+8\pi\alpha y +\sum_{n\geq3}(1+2\pi\alpha y {n}^2) e^{-\pi\alpha y (n^2-4)}\Big).\\
\endaligned\end{equation}
Given that $y\geq \alpha\geq2$, we recall from \eqref{TXY} that
$$\vartheta(\frac{y}{\alpha};1)=1+2\underset{n\geq1}{\sum}e^{-\pi n^2\frac{y}{\alpha}}\leq 1+2\underset{n\geq1}{\sum}e^{-\pi n^2}.$$
Moreover, we have the elementary bounds
$$\sum_{n\geq1} e^{-\pi n^2}\leq4.33\cdot 10^{-2},\;\sum_{n\geq3}(1+8\pi {n}^2) e^{-4 \pi (n^2-4)}\leq 10^{-24}.$$
Therefore, combining these together, we obtain that
\begin{equation}\aligned\label{M2aaa}
\left|\frac{\mathcal{E}_{1}(\alpha;y) }{\alpha}\right|\leq \frac{31}{5}(1+8\pi\alpha y)e^{-4\pi\alpha y}.
\endaligned\end{equation}
Arguing similarly for the remaining terms (we omit the repetitive details), we obtain
\begin{equation}\aligned\label{M2aab}
\left|\frac{\mathcal{E}(\alpha;y)}{\alpha}\right|\leq &\underset{i=1}{\overset{4}{\sum}}\left|\frac{\mathcal{E}_{i}(\alpha;y)}{\alpha}\right|\leq2\pi(1+8\pi\alpha y+\frac{y}{2\alpha})e^{-4\pi\alpha y},\\
\left|\frac{\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{\alpha}\right|\leq & \underset{j=1}{\overset{4}{\sum}}\left|\frac{\mathcal{E}_{j}(\alpha;\frac{\sqrt{3}}{2})}{\alpha}\right|\leq4\pi{\alpha}^{\frac{1}{2}}(1+2\sqrt{3}\pi\alpha)e^{-2\sqrt{3}\pi\alpha}.
\endaligned\end{equation}
\eqref{M2aaa} and \eqref{M2aab} yield (2).
(3). Noting $y\geq2$, by item (1), one has
\begin{equation}\aligned\label{4.2lemaddeq1}
\left|\frac{ y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{y^{\frac{1}{2}}M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})}\right| \leq&\left| \frac{y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2}) }{\frac{1}{500}\sqrt{y}\alpha}\right|\\
\leq& 500\bigg(\left|\frac{ \mathcal{E}(\alpha;y)}{\alpha}\right|+(\frac{\sqrt{3}}{2})^{\frac{1}{2}}y^{-\frac{1}{2}}\left|\frac{\mathcal{E}(\alpha;\frac{\sqrt{3}}{2}) }{\alpha}\right|\bigg)\\
\leq& 500\bigg(\left|\frac{ \mathcal{E}(\alpha;y)}{\alpha}\right|+\frac{{3}^{\frac{1}{4}}}{2}\left|\frac{\mathcal{E}(\alpha;\frac{\sqrt{3}}{2}) }{\alpha}\right|\bigg).
\endaligned\end{equation}
Then \eqref{4.2lemaddeq1} and item (2) yield item (3). Items (1) and (3) yield item (4).
\end{proof}
Recall that $M(\alpha;y)=\underset{i=1}{\overset{4}{\sum}}M_{i}(\alpha;y).$ In the following lemma, we provide a lower bound for $M(\alpha;y)$, which was used in the proof of item (1) of Lemma \ref{4.2lemma2}.
\begin{lemma}\label{lemma4.19} Assume that $ y\geq \alpha \geq2$. Then $M(\alpha;y)\geq\frac{1}{5}\alpha$. More precisely, the following holds:
\begin{itemize}
\item [(1)] $M_{1}(\alpha;y)\geq \frac{1}{5}\alpha.$
\item [(2)] $M_{2}(\alpha;y)>0.$
\item [(3)] $M_{3}(\alpha;y)>0.$
\item [(4)] $M_{4}(\alpha;y)>0.$
\end{itemize}
\end{lemma}
\begin{proof}
(1).
By \eqref{TXY} and \eqref{4eq4}, one has
\begin{equation}\aligned\nonumber
M_{1}(\alpha;y) = (2\sqrt{2}-2)\alpha-4\alpha\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi n^2\frac{y}{2\alpha}}(1-\sqrt{2}e^{-\pi n^2\frac{y}{2\alpha}}).
\endaligned\end{equation}
Since $\frac{y}{\alpha}\geq 1$, one gets
\begin{equation}\aligned\nonumber
\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi n^2\frac{y}{2\alpha}}(1-\sqrt{2}e^{-\pi n^2\frac{y}{2\alpha}})\leq \underset{n=1}{\overset{\infty}{\sum}}e^{-\frac{\pi n^2}{2}}(1-\sqrt{2}e^{-\frac{\pi n^2}{2}})=0.1486\cdots.
\endaligned\end{equation}
Hence $M_{1}(\alpha;y)\geq(2\sqrt{2}-2)\alpha-4\alpha\cdot0.1487>\frac{1}{5}\alpha$, which gives item (1).
(2). For $M_{2}(\alpha;y)$, by \eqref{TXY}, one has
\begin{equation}\aligned\label{lem4.17eq1}
{\vartheta}_{X}(X;Y)=-2\pi\underset{n=1}{\overset{\infty}{\sum}}n^2e^{-\pi n^2 X}\cos(2n\pi Y).
\endaligned\end{equation}
By \eqref{4eq4}, \eqref{lem4.17eq1} and using $\frac{y}{\alpha}\geq1$, one has
\begin{equation}\aligned\nonumber
M_{2}(\alpha;y)=&4\pi y\underset{n=1}{\overset{\infty}{\sum}}n^2 e^{-\pi n^2 \frac{y}{2\alpha}}(1-2\sqrt{2} e^{-\pi n^2 \frac{y}{2\alpha}})\geq 4\pi y\underset{n=1}{\overset{\infty}{\sum}}n^2 e^{-\pi n^2 \frac{y}{2\alpha}}(1-2\sqrt{2} e^{-\frac{\pi }{2}})>0.
\endaligned\end{equation}
(3). For $M_{3}(\alpha;y)$, by \eqref{TXY} and \eqref{4eq4}, one has
\begin{equation}\aligned\nonumber
M_{3}(\alpha;y)=&4\sqrt{2}\alpha(1+2\pi\alpha y)e^{-\pi\alpha y}(1+2\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi n^2\frac{y}{\alpha}}(-1)^{n})\\
& -4\alpha(1+4\pi\alpha y)e^{-2\pi\alpha y}(1+2\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi n^2\frac{y}{2\alpha}}(-1)^{n})\\
\geq& 4\sqrt{2}\alpha(1+2\pi\alpha y)e^{-\pi\alpha y}(1-2e^{-\pi\frac{y}{\alpha}})-4\alpha(1+4\pi\alpha y)e^{-2\pi\alpha y}.
\endaligned\end{equation}
Notice that $ y\geq\alpha \geq 2$. Then $M_{3}(\alpha;y)>0$.
(4). For $ M_{4}(\alpha;y)$, by \eqref{4eq4} and \eqref{lem4.17eq1}, one has
\begin{equation}\aligned\nonumber
M_{4}(\alpha;y) =&16\sqrt{2}\pi y e^{-\pi\alpha y} \underset{n=1}{\overset{\infty}{\sum}}n^2 e^{-\pi n^2 \frac{y}{\alpha}}(-1)^{n+1}+8\pi y e^{-2\pi\alpha y}\underset{n=1}{\overset{\infty}{\sum}}n^2 e^{-\pi n^2 \frac{y}{2\alpha}}(-1)^{n}\\
\geq &16\sqrt{2}\pi y e^{-\pi y(\alpha+\frac{1}{\alpha})}(1-4 e^{-3\pi \frac{y}{\alpha}}-\frac{\sqrt{2}}{4}e^{-\pi y(\alpha-\frac{1}{2\alpha})}).
\endaligned\end{equation}
Since $y\geq\alpha \geq 2$, we have $e^{-3\pi \frac{y}{\alpha}}\leq e^{-3\pi }$ and $e^{-\pi y(\alpha-\frac{1}{2\alpha})}\leq e^{-\frac{7}{2}\pi}$. Thus $ M_{4}(\alpha;y)>0.$
\end{proof}
Before providing an upper bound function of $M(\alpha;\frac{\sqrt{3}}{2})$, we denote that
\begin{equation}\aligned\label{lem4.15eq0}
\epsilon_{1}:=&\underset{n=3}{\overset{\infty}{\sum}}e^{-\frac{2\sqrt{3}\pi\alpha (n^2-4)}{3}},\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\: \epsilon_{2}:=\underset{n=1}{\overset{\infty}{\sum}}e^{-\frac{2\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}},\\
\epsilon_{3}:=&\underset{n=2}{\overset{\infty}{\sum}}n^2e^{-\frac{2\sqrt{3}\pi\alpha ({n}^2-1)}{3}},\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\: \epsilon_{4}:=\underset{n=2}{\overset{\infty}{\sum}}4(n-\frac{1}{2})^2e^{-\frac{2\sqrt{3}\pi\alpha(n-1)n}{3}}.\\
\endaligned\end{equation}
\begin{lemma}[An upper bound function of $M(\alpha;\frac{\sqrt{3}}{2})$]\label{lemma4.20}Assume that $ \alpha \geq 2$.
Then \begin{itemize} \item [(1)] $M_{1}(\alpha;\frac{\sqrt{3}}{2})\leq 8{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}\big(e^{-\frac{2\sqrt{3}\pi\alpha }{3}}-e^{-\frac{4\sqrt{3}\pi\alpha}{3}}+e^{-\frac{8\sqrt{3}\pi\alpha }{3}}(1+\epsilon_{1})\big).$ \item [(2)] $M_{2}(\alpha;\frac{\sqrt{3}}{2}) \leq 8{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}(e^{-\frac{4\sqrt{3}\pi\alpha} {3}}-e^{-\frac{2\sqrt{3}\pi\alpha}{3}}) +32\pi{\alpha}^{\frac{5}{2}}3^{-\frac{3}{4}}\big(e^{-\frac{2\sqrt{3}\pi\alpha }{3}}(1+\epsilon_{3})-2e^{-\frac{4\sqrt{3}\pi\alpha}{3}}\big).$ \item [(3)] $M_{3}(\alpha;\frac{\sqrt{3}}{2}) \leq 16{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}e^{-\frac{\sqrt{3}}{2}\pi\alpha}\big( (1+\sqrt{3}\pi\alpha)\epsilon_{2}-(1+2\sqrt{3}\pi\alpha)e^{-\frac{5\sqrt{3}\pi\alpha}{6}}\big).$ \item [(4)] $M_{4}(\alpha;\frac{\sqrt{3}}{2})\leq 16\pi{\alpha}^{\frac{5}{2}}3^{-\frac{3}{4}}e^{-\frac{2\sqrt{3}\pi\alpha}{3}}(1+\epsilon_{4}).$ \item [(5)] $M(\alpha;\frac{\sqrt{3}}{2})\leq \frac{792}{25}{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}e^{-\frac{\sqrt{3}}{2}\pi\alpha}\big( 1+\frac{\sqrt{3}\pi\alpha}{11}\big).$ Trivially, $M(\alpha;\frac{\sqrt{3}}{2})\leq\frac{3}{10}\alpha.$ \end{itemize} \end{lemma} \begin{proof} (1). For $M_{1}(\alpha;\frac{\sqrt{3}}{2})$, by \eqref{Poisson} and \eqref{4eq4}, one has \begin{equation}\aligned\nonumber M_{1}(\alpha;\frac{\sqrt{3}}{2})=&8 {\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}\underset{n=1}{\overset{\infty}{\sum}}(e^{-\frac{2\sqrt{3}\pi\alpha n^2}{3}}-e^{-\frac{4\sqrt{3}\pi\alpha n^2}{3}})\\ \leq &8 {\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}(e^{-\frac{2\sqrt{3}\pi\alpha }{3}}+e^{-\frac{8\sqrt{3}\pi\alpha }{3}}(1+\epsilon_{1})-e^{-\frac{4\sqrt{3}\pi\alpha}{3}}).\\ \endaligned\end{equation} (2). For $M_{2}(\alpha;\frac{\sqrt{3}}{2})$, by \eqref{Poisson}, one has \begin{equation}\aligned\label{lem4.22eq1} \vartheta_{X}(X;Y)=-\frac{1}{2}X^{-\frac{3}{2}}\sum_{n\in\mathbb{Z}} e^{-\pi \frac{(n-Y)^2}{X}} +\pi {X}^{-\frac{5}{2}}\sum_{n\in\mathbb{Z}} (n-Y)^2e^{-\pi \frac{(n-Y)^2}{X}}. \endaligned\end{equation} Then by \eqref{4eq4} and \eqref{lem4.22eq1}, we have \begin{equation}\aligned\nonumber M_{2}(\alpha;\frac{\sqrt{3}}{2})=&8{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}\underset{n=1}{\overset{\infty}{\sum}}(e^{-\frac{4\sqrt{3}\pi\alpha {n}^2}{3}}-e^{-\frac{2\sqrt{3}\pi\alpha {n}^2}{3}})\\ &+32\pi{\alpha}^{\frac{5}{2}}3^{-\frac{3}{4}}\underset{n=1}{\overset{\infty}{\sum}}(n^2e^{-\frac{2\sqrt{3}\pi\alpha {n}^2}{3}}-2 n^2e^{-\frac{4\sqrt{3}\pi\alpha {n}^2}{3}})\\ \leq& 8{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}(e^{-\frac{4\sqrt{3}\pi\alpha} {3}}-e^{-\frac{2\sqrt{3}\pi\alpha}{3}})+32\pi{\alpha}^{\frac{5}{2}}3^{-\frac{3}{4}}\big(e^{-\frac{2\sqrt{3}\pi\alpha }{3}}(1+\epsilon_{3})-2e^{-\frac{4\sqrt{3}\pi\alpha}{3}}\big). \endaligned\end{equation} (3). For $M_{3}(\alpha;\frac{\sqrt{3}}{2})$, by \eqref{Poisson} and \eqref{4eq4}, we have \begin{equation}\aligned\label{lem4.22eq2} M_{3}(\alpha;\frac{\sqrt{3}}{2})=&16{\alpha}^{\frac{3}{2}}3^{-\frac{1}{4}}e^{-\frac{\sqrt{3}}{2}\pi\alpha}\big((1+\sqrt{3}\pi\alpha)\cdot\epsilon_{2} -(1+2\sqrt{3}\pi\alpha)e^{-\frac{\sqrt{3}\pi\alpha}{2}}\underset{n=1}{\overset{\infty}{\sum}}e^{-\frac{4\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}}\big), \endaligned\end{equation} which yields the result.\\ (4). 
For $M_{4}(\alpha;\frac{\sqrt{3}}{2})$, by \eqref{4eq4} and \eqref{lem4.22eq1}, we have \begin{equation}\aligned\nonumber M_{4}(\alpha;\frac{\sqrt{3}}{2})=&64\pi{\alpha}^{\frac{5}{2}}{3}^{-\frac{3}{4}}e^{-\frac{\sqrt{3}\pi\alpha}{2}}(\underset{n=1}{\overset{\infty}{\sum}}(n-\frac{1}{2})^{2}e^{-\frac{2\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}}-\frac{\sqrt{3}}{4\pi\alpha}\underset{n=1}{\overset{\infty}{\sum}}e^{-\frac{2\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}})\\ &+16 {\alpha}^{\frac{3}{2}}{3}^{-\frac{1}{4}}e^{-\sqrt{3}\pi\alpha}\underset{n=1}{\overset{\infty}{\sum}}(1-\frac{8\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3})e^{-\frac{4\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}}\\ \leq&64\pi{\alpha}^{\frac{5}{2}}{3}^{-\frac{3}{4}}e^{-\frac{\sqrt{3}\pi\alpha}{2}}\underset{n=1}{\overset{\infty}{\sum}}(n-\frac{1}{2})^{2}e^{-\frac{2\sqrt{3}\pi\alpha(n-\frac{1}{2})^2}{3}} \leq 16\pi{\alpha}^{\frac{5}{2}}3^{-\frac{3}{4}}e^{-\frac{2\sqrt{3}\pi\alpha}{3}}(1+\epsilon_{4}). \endaligned\end{equation} (5). Combining items (1)-(4) and noting that $\alpha\geq2$, one has item (5). \end{proof} We then give the proof of item (2) in Lemma \ref{4.2lemma2}. \begin{lemma}\label{lem4.23} Assume that $n\in\mathbb{Z}$. Then it holds that \begin{itemize} \item [(1)]If $X>0$, then $\left|\frac{\vartheta(X;\frac{n}{2})}{\vartheta(X;1)}\right|\leq1.$ \item [(2)] If $X>0$, then $\left|\frac{\vartheta_{X}(X;\frac{n}{2})}{\vartheta_{X}(X;1)}\right|\leq 2.$ \end{itemize} \end{lemma} \begin{proof} For simplicity, we denote that \begin{equation}\aligned\label{lem4.23eq1} f(X;Y):=\underset{m\in \mathbb{Z}}{\sum}e^{-\frac{\pi(m-Y)^2}{X}},\:\:g(X;Y):=\underset{m\in \mathbb{Z}}{\sum}(m-Y)^2e^{-\frac{\pi(m-Y)^2}{X}}. \endaligned\end{equation} A direct calculation shows that \begin{equation}\aligned\label{lem4.23eq1add} f(X;Y)=f(X;Y+1),\:\:g(X;Y)=g(X;Y+1). \endaligned\end{equation} As $X\in (0,1]$, by \eqref{lem4.23eq1}, one has \begin{equation}\aligned\label{lem4.23eq2} \frac{f(X;\frac{1}{2})}{f(X;1)}=\frac{2\underset{m=1}{\overset{\infty}{\sum}}e^{-\frac{\pi(m-\frac{1}{2})^2}{X}}} {1+2\underset{m=2}{\overset{\infty}{\sum}}e^{-\frac{\pi(m-1)^2}{X}}}\leq 2\underset{m=1}{\overset{\infty}{\sum}}e^{-\pi(m-\frac{1}{2})^2}<1. \endaligned\end{equation} By \eqref{Poisson} and \eqref{lem4.23eq1add}-\eqref{lem4.23eq2}, one has \begin{equation}\aligned\label{lem4.23eq3} \left|\frac{\vartheta(X;\frac{n}{2})}{\vartheta(X;1)}\right| =\frac{f(X;\frac{n}{2})}{f(X;1)} =\left\{ \begin{array}{ll} \frac{f(X;\frac{1}{2})}{f(X;1)},& \hbox{if}\:\:\:n\:\: is \:\:odd;\\ \:\:\:\:\:1,& \hbox{if}\:\:\:n\:\:is \:\:even.\\ \end{array} \right. \endaligned\end{equation} Given that $X\geq1$, by \eqref{TXY}, one has \begin{equation}\aligned\label{lem4.23eq3add} \left|\frac{\vartheta(X;\frac{n}{2})}{\vartheta(X;1)}\right|=\frac{1+2\sum_{m=1}^{\infty}e^{-\pi m^2 X} \cos (mn\pi)}{1+2\sum_{m=1}^{\infty}e^{-\pi m^2 X}}\leq 1. \endaligned\end{equation} Then \eqref{lem4.23eq2}-\eqref{lem4.23eq3add} yield item (1). By \eqref{lem4.22eq1}, one has \begin{equation}\aligned\label{lem4.23eq4} \vartheta_{X}(X;Y)=-\frac{1}{2}X^{-\frac{3}{2}}f(X;Y)+\pi{X}^{-\frac{5}{2}}g(X;Y). 
\endaligned\end{equation} Then for $X\in(0,1],n\in\mathbb{Z}$, by \eqref{lem4.23eq1add} and \eqref{lem4.23eq4}, we have \begin{equation}\aligned\label{lem4.23eq5} \left|\frac{\vartheta_{X}(X;\frac{n}{2})}{\vartheta_{X}(X;1)}\right| =\left|\frac{\vartheta_{X}(X;\frac{n}{2}+1)}{\vartheta_{X}(X;1)}\right| =\left\{ \begin{array}{ll} \left|\frac{\vartheta_{X}(X;\frac{1}{2})}{\vartheta_{X}(X;1)}\right|,& \hbox{if}\:\:\:n\:\: is \:\:odd ;\\ \:\:\:\:\:1,& \hbox{if}\:\:\:n\:\:is \:\:even.\\ \end{array} \right. \endaligned\end{equation} By \eqref{lem4.22eq1}, one has \begin{equation}\aligned\label{lem4.23add0eq1} \left|\vartheta_{X}(X;\frac{1}{2})\right|=& 2\pi {X}^{-\frac{5}{2}}\sum_{n=1}^{\infty} (n-\frac{1}{2})^2e^{-\pi \frac{(n-\frac{1}{2})^2}{X}}-X^{-\frac{3}{2}}\sum_{n=1}^{\infty} e^{-\pi \frac{(n-\frac{1}{2})^2}{X}}\\ \leq&\frac{\pi}{2}X^{-\frac{5}{2}}e^{-\frac{\pi}{4X}}(1+4\sum_{n=2}^{\infty} (n-\frac{1}{2})^2e^{-\pi \frac{(n-1)\cdot n}{X}})-X^{-\frac{3}{2}}e^{-\frac{\pi}{4X}}, \endaligned\end{equation} and \begin{equation}\aligned\label{lem4.23add0eq2} \left|\vartheta_{X}(X;1)\right|=\frac{1}{2}X^{-\frac{3}{2}}\Big(1-\sum_{n=2}^{\infty}\big(4\pi(n-1)^2X^{-1}-2\big)e^{-\frac{\pi(n-1)^2}{X}}\Big). \endaligned\end{equation} Here, since $X\in(0,1]$, one has \begin{equation}\aligned\label{lem4.23add0eq3} 4\sum_{n=2}^{\infty} (n-\frac{1}{2})^2e^{-\pi \frac{(n-1)\cdot n}{X}}\leq4\sum_{n=2}^{\infty} (n-\frac{1}{2})^2e^{-\pi\cdot(n-1)\cdot n}\leq \frac{1}{50}, \endaligned\end{equation} and \begin{equation}\aligned\label{lem4.23add0eq4} \sum_{n=2}^{\infty}\big(4\pi(n-1)^2X^{-1}-2\big)e^{-\frac{\pi(n-1)^2}{X}}\leq \sum_{n=2}^{\infty}\big(4\pi(n-1)^2-2\big)e^{-\pi(n-1)^2}\leq \frac{1}{2}. \endaligned\end{equation} Then for $X\in(0,1]$, by \eqref{lem4.23add0eq1}-\eqref{lem4.23add0eq4}, one has \begin{equation}\aligned\label{lem4.23add0eq4add} \left|\frac{\vartheta_{X}(X;\frac{1}{2})}{\vartheta_{X}(X;1)}\right|\leq e^{-\frac{\pi}{4X}}\big(2\pi{X}^{-1}(1+\frac{1}{50}) -4 \big)\leq2. \endaligned\end{equation} As $X\geq1,\;n\in\mathbb{Z}$, by \eqref{TXY}, one has \begin{equation}\aligned\label{lem4.23eq5add} \left|\frac{\vartheta_{X}(X;\frac{n}{2})}{\vartheta_{X}(X;1)}\right|=\left|\frac{\sum_{m=1}^{\infty}m^2e^{-\pi m^2 X} \cos (mn\pi)}{\sum_{m=1}^{\infty}m^2e^{-\pi m^2 X}}\right|\leq1. \endaligned\end{equation} Then combining \eqref{lem4.23eq5} with \eqref{lem4.23add0eq4add}-\eqref{lem4.23eq5add}, we obtain item (2). \end{proof} Finally, we provide the proof Lemma \ref{4lemma} for $\frac{\alpha}{y}\geq1$. It is stated as follows: \begin{lemma}\label{4.3lemma2}Assume that $ \alpha\geq y\geq 2$. Then \begin{itemize} \item [(1)] Lower bound estimate of the major term. \begin{equation}\aligned\nonumber y^{\frac{1}{2}} M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})\geq 8\pi{\alpha}^{\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}>0. \endaligned\end{equation} \item [(2)] An auxiliary estimate about the error term. \begin{equation}\aligned\nonumber \left| y^{\frac{1}{2}}{\alpha}^{-\frac{3}{2}} e^{\frac{\pi\alpha}{y}}\mathcal{E}(\alpha;y)\right|+\left| (\frac{\sqrt{3}}{2})^{\frac{1}{2}} {\alpha}^{-\frac{3}{2}}e^{\frac{\pi\alpha}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\right|\leq 10^{-5}. \endaligned\end{equation} \item [(3)] Comparison of the error term with the major term. 
\begin{equation}\aligned\nonumber \left|\frac{ y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{y^{\frac{1}{2}}M(\alpha;y) -(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})}\right|\leq 10^{-5}. \endaligned\end{equation} \item [(4)] $y^{\frac{1}{2}}\big(M(\alpha;y)+ \mathcal{E}(\alpha;y)\big)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\big(M(\alpha;\frac{\sqrt{3}}{2})+ \mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\big)\geq 25{\alpha}^{\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}(1-10^{-5})>0.$ \end{itemize} \end{lemma} \begin{proof} Note that $\frac{\alpha}{y}\geq 1$. By Lemmas \ref{4.3lemma3} and \ref{lemma4.20}, one has \begin{equation}\aligned\nonumber \sqrt{y} M(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})\geq& 10\pi{\alpha}^{\frac{5}{2}}y^{-1}e^{-\frac{\pi\alpha}{y}}- \frac{396\sqrt{2}}{25}{\alpha}^{\frac{3}{2}}e^{-\frac{\sqrt{3}}{2}\pi\alpha}\big( 1+\frac{\sqrt{3}\pi\alpha}{11}\big)\\ \geq& 10\pi{\alpha}^{\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}\big(1-\frac{198\sqrt{2}}{ 125\pi}e^{-\pi\alpha(\frac{\sqrt{3}}{2}-\frac{1}{y})}(1+\frac{\sqrt{3}\pi\alpha}{11})\big).\\ \geq&8\pi{\alpha}^{\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}. \endaligned\end{equation} which yields item (1). By Lemma \ref{lem4.23} and \eqref{Poisson}, we have \begin{equation}\aligned\label{4.3lemma2error} &\left| y^{\frac{1}{2}}{\alpha}^{-\frac{3}{2}} e^{\frac{\pi\alpha}{y}}\mathcal{E}(\alpha;y)\right|\leq 4\pi(1+4\pi\alpha y)e^{-\pi\alpha(4y-\frac{1}{y})},\\ &\left| (\frac{\sqrt{3}}{2})^{\frac{1}{2}} {\alpha}^{-\frac{3}{2}}e^{\frac{\pi\alpha}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\right|\leq (\frac{\sqrt{3}}{2})^{\frac{1}{2}}\cdot 4\pi (1+2\sqrt{3}\pi\alpha)e^{-(2\sqrt{3}-\frac{1}{2})\pi\alpha}. \endaligned\end{equation} Item (2) follows by \eqref{4.3lemma2error} and the conditions that $\alpha\geq 2, y\geq2.$ Combining item (1) with $y\geq2$, one has \begin{equation}\aligned\label{4.3lemma3eq1} \left|\frac{ y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{y^{\frac{1}{2}}M(\alpha;y) -(\frac{\sqrt{3}}{2})^{\frac{1}{2}}M(\alpha;\frac{\sqrt{3}}{2})}\right|\leq&\left|\frac{ y^{\frac{1}{2}}\mathcal{E}(\alpha;y)-(\frac{\sqrt{3}}{2})^{\frac{1}{2}}\mathcal{E}(\alpha;\frac{\sqrt{3}}{2})}{8\pi{\alpha}^{\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}}\right|\\ \leq&\frac{1}{8\pi}\bigg( \left|y^{\frac{1}{2}} \alpha^{-\frac{3}{2}}e^{\frac{\pi\alpha}{y}}\mathcal{E}(\alpha;y) \right| +\left|(\frac{\sqrt{3}}{2})^{\frac{1}{2}} \alpha^{-\frac{3}{2}}e^{\frac{\pi\alpha}{y}} \mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\right| \bigg)\\ \leq&\frac{1}{8\pi}\bigg( \left|y^{\frac{1}{2}} \alpha^{-\frac{3}{2}}e^{\frac{\pi\alpha}{y}}\mathcal{E}(\alpha;y) \right| +\left|(\frac{\sqrt{3}}{2})^{\frac{1}{2}} \alpha^{-\frac{3}{2}}e^{\frac{\pi\alpha}{2}} \mathcal{E}(\alpha;\frac{\sqrt{3}}{2})\right| \bigg).\\ \endaligned\end{equation} Then, \eqref{4.3lemma3eq1} and item (2) together yield item (3). Items (1) and (3) yield item (4). \end{proof} For simplicity, we denote that $ \delta(y):=\underset{n=2}{\overset{\infty}{\sum}}e^{-\pi({n}^2-1)y}. $ The following lemma provides the bounds used in Lemma \ref{4.3lemma2}. \begin{lemma} \label{4.3lemma3} Assume that $ \alpha\geq y\geq 2$. 
Then \begin{itemize} \item [(1)] $M_{1}(\alpha;y)\geq 4\sqrt{2}{\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}}\big(e^{-\pi\frac{\alpha}{y}}-e^{-2\pi\frac{\alpha}{y}}(1+\delta(2))\big);$ \item [(2)] $M_{2}(\alpha;y)\geq 4\sqrt{2}{\alpha}^{\frac{3}{2}}{y}^{-\frac{1}{2}}\big(e^{-2\pi \frac{\alpha}{y}}-e^{-\pi \frac{\alpha}{y}}(1+\delta(1))\big) +8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}{y}^{-\frac{3}{2}}\big(e^{-\pi \frac{\alpha}{y}}-2e^{-2\pi \frac{\alpha}{y}}(1+\mu(2))\big).$ \item [(3)]$M_{3}(\alpha;y)>0;$ \item [(4)] $M_{4}(\alpha;y)>0.$ \item [(5)] $M(\alpha;y)\geq 10\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}.$ \end{itemize} Numerically, $\delta(1)\leq9\cdot10^{-5}$, $\delta(2)\leq7\cdot10^{-9},$ $\mu(2)\leq3\cdot10^{-8}.$ \end{lemma} \begin{proof} We provide the bounds before the proof. Noting that $\frac{\alpha}{y}\geq1$, we calculate directly that \begin{equation}\aligned\label{note1} \underset{n=1}{\overset{\infty}{\sum}}e^{-2\pi{n}^2\frac{\alpha}{y}}&=e^{-2\pi\frac{\alpha}{y}}(1+\underset{n=2}{\overset{\infty}{\sum}}e^{-2\pi({n}^2-1)\frac{\alpha}{y}})\leq e^{-2\pi\frac{\alpha}{y}}(1+\delta(2)),\\ \underset{n=1}{\overset{\infty}{\sum}}e^{-\pi {n}^2\frac{\alpha}{y}}&\leq e^{-\pi \frac{\alpha}{y}}(1+\delta(1)),\:\:\:\: \underset{n=1}{\overset{\infty}{\sum}}n^2e^{-2\pi {n}^2\frac{\alpha}{y}}\leq e^{-2\pi \frac{\alpha}{y}}(1+\mu(2)), \endaligned\end{equation} and \begin{equation}\aligned\label{note3} &\underset{n=2}{\overset{\infty}{\sum}}e^{-2\pi(n-1)n\cdot\frac{\alpha}{y}}\leq \underset{n=2}{\overset{\infty}{\sum}}e^{-2\pi(n-1)n}\leq 10^{-5},\;\;\frac{y}{\alpha}\underset{n=1}{\overset{\infty}{\sum}} e^{-\pi(n-\frac{1}{2})^2\frac{\alpha}{y}}\leq e^{-\frac{\pi\alpha}{4y}}(1+\underset{n=2}{\overset{\infty}{\sum}} e^{-\pi(n-1)n}),\\ &e^{-\pi\alpha y}\underset{n=1}{\overset{\infty}{\sum}} (n-\frac{1}{2})^2e^{-2\pi(n-\frac{1}{2})^2\frac{\alpha}{y}}\leq\frac{1}{4}e^{-\pi\alpha(y+\frac{1}{2y})}(1+\underset{n=2}{\overset{\infty}{\sum}}4(n-\frac{1}{2})^2 e^{-2\pi(n-1)n}). \endaligned\end{equation} (1). For $ M_{1}(\alpha;y)$, by \eqref{Poisson}, \eqref{4eq4} and \eqref{note1}, one has \begin{equation}\aligned\nonumber M_{1}(\alpha;y)=4\sqrt{2}{\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}}(\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi{n}^2\frac{\alpha}{y}}-\underset{n=1}{\overset{\infty}{\sum}}e^{-2\pi{n}^2\frac{\alpha}{y}}) \geq 4\sqrt{2}{\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}}\big(e^{-\pi\frac{\alpha}{y}}-e^{-2\pi\frac{\alpha}{y}}(1+\delta(2))\big). \endaligned\end{equation} (2). For $M_{2}(\alpha;y)$, by \eqref{4eq4} and \eqref{lem4.22eq1}, one has \begin{equation}\aligned\label{item2} M_{2}(\alpha;y)=&4\sqrt{2}{\alpha}^{\frac{3}{2}}{y}^{-\frac{1}{2}}(\underset{n=1}{\overset{\infty}{\sum}}e^{-2\pi {n}^2\frac{\alpha}{y}}-\underset{n=1}{\overset{\infty}{\sum}}e^{-\pi {n}^2\frac{\alpha}{y}})\\ &+8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}{y}^{-\frac{3}{2}}(\underset{n=1}{\overset{\infty}{\sum}}n^2e^{-\pi {n}^2\frac{\alpha}{y}}-2\underset{n=1}{\overset{\infty}{\sum}}n^2e^{-2\pi {n}^2\frac{\alpha}{y}}).\\ \endaligned\end{equation} which yields item (2). (3). 
For $ M_{3}(\alpha;y)$, by \eqref{Poisson} and \eqref{4eq4}, one has \begin{equation}\aligned\label{lem4.24eq1} M_{3}(\alpha;y)= &8\sqrt{2}{\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}} e^{-\pi\alpha(y+\frac{1}{4y})}\big( (1+2\pi\alpha y)(1+\underset{n=2}{\overset{\infty}{\sum}}e^{-\pi(n-1)n\cdot\frac{\alpha}{y}})\\ & \;\;\;\;-(1+4\pi\alpha y) e^{-\pi\alpha(y+\frac{1}{4y})}(1+\underset{n=2}{\overset{\infty}{\sum}}e^{-2\pi(n-1)n\cdot\frac{\alpha}{y}})\big).\\ \endaligned\end{equation} Therefore, given that $\alpha\geq y\geq 2$, \eqref{lem4.24eq1} and \eqref{note3} yield item (3).\\ (4). For $M_{4}(\alpha;y)$, by \eqref{4eq4} and \eqref{lem4.22eq1}, one has \begin{equation}\aligned\label{lem4.31eq1} M_{4}(\alpha;y)=&16\sqrt{2}\pi {\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}e^{-\pi\alpha y}\underset{n=1}{\overset{\infty}{\sum}} \Big( \big((n-\frac{1}{2})^2-\frac{1}{2\pi}\frac{y}{\alpha}\big)e^{\pi(n-\frac{1}{2})^2\frac{\alpha}{y}} -2e^{-\pi \alpha y}(n-\frac{1}{2})^2 \Big)e^{-2\pi(n-\frac{1}{2})^2\frac{\alpha}{y}}\\ &+8\sqrt{2}{\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}}e^{-2\pi\alpha y}\underset{n=1}{\overset{\infty}{\sum}} e^{-2\pi(n-\frac{1}{2})^2\frac{\alpha}{y}}.\\ \endaligned\end{equation} Since $\alpha\geq y\geq 2$, trivially, we have \begin{equation}\aligned\label{lem4.31eq1bb} \big((n-\frac{1}{2})^2-\frac{1}{2\pi}\frac{y}{\alpha}\big)e^{\pi(n-\frac{1}{2})^2\frac{\alpha}{y}} -2e^{-\pi \alpha y}(n-\frac{1}{2})^2 >0. \endaligned\end{equation} \eqref{lem4.31eq1} and \eqref{lem4.31eq1bb} give that $M_{4}(\alpha;y)>0.$\\ (5). By \eqref{4eq3} and items (1)-(4), one has \begin{equation}\aligned\nonumber M(\alpha;y)\geq &8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}(1-\frac{y}{2\pi\alpha}\delta(1)-2(1+\mu(2))e^{-\frac{\pi\alpha}{y}} -\frac{1}{2\pi}\cdot\delta(2)\cdot\frac{y}{\alpha}e^{-\frac{\pi\alpha}{y}})\\ \geq & 8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}e^{-\frac{\pi\alpha}{y}}(1-\frac{1}{2\pi}\delta(1)-2(1+\mu(2))e^{-\pi} -\frac{1}{2\pi}\cdot \delta(2)e^{-\pi})>0.\\ \endaligned\end{equation} (6). Item (6) follows by Lemma \ref{lemma4.20}. \end{proof} \section{Proof of Theorem \ref{Th1}} We start with a nonexistence result. \begin{proposition} \label{Prop51} Assume that $\alpha\geq2$, Then for $b\geq 2\sqrt{2}$, $\argmin_{z\in\mathbb{H}}(\mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z))$ does not exist. \end{proposition} \begin{proof} By Lemma \ref{3Lemma2}, one has \begin{equation}\aligned\label{5eq1} \mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z) =&\frac{1}{\pi}{2}^{-\frac{5}{2}}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}}\big((2\sqrt{2}-b)\alpha+2be^{-\pi\frac{y}{2\alpha}}(\pi y-\alpha) \\ &+4\sqrt{2}e^{-\pi\frac{y}{\alpha}}(\alpha-2\pi y)+o(1)\big),\:\:\:\:\mathrm{as}\:\: y\rightarrow +\infty. \endaligned\end{equation} When $b>2\sqrt{2},$ by \eqref{5eq1}, $ \mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\rightarrow -\infty$, as $y\rightarrow +\infty$, hence the minimizer does not exist. When $b=2\sqrt{2}$, $ \mathcal{K}(\alpha;z)-b \mathcal{K}(2\alpha;z)\rightarrow 0^{+}$, as $y\rightarrow +\infty$, therefore Theorem \ref{3Thm1} and Lemma \ref{lem5.1} yield the result. \end{proof} We observe that \begin{lemma}\label{lem5.1} Assume that $ \alpha\geq 2$. Then for $z=\frac{1}{2}+iy\in \Gamma_{c}$, it holds that \begin{equation}\aligned\nonumber \mathcal{K}(\alpha;\frac{1}{2}+iy)-2\sqrt{2} \mathcal{K}(2\alpha;\frac{1}{2}+iy)>0. 
\endaligned\end{equation} \end{lemma} Recalling Lemma \ref{3Lemma2}, for $\alpha\geq2,\:y\geq\frac{\sqrt{3}}{2}$, one has \begin{equation}\aligned\label{lem5.1denote} \mathcal{K}(\alpha;\frac{1}{2}+iy)-2\sqrt{2} \mathcal{K}(2\alpha;\frac{1}{2}+iy) =\frac{1}{\pi}{2}^{-\frac{5}{2}}\alpha^{-\frac{5}{2}}y^{\frac{1}{2}}\big(\mathcal{P}(\alpha;y)+\tilde{\mathcal{E}}(\alpha;y)\big), \endaligned\end{equation} where \begin{equation}\aligned\nonumber \mathcal{P}(\alpha;y):=&2\sqrt{2} \alpha\sum_{|n|\leq1} e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2}) -2\sqrt{2} \alpha \sum_{|n|\leq1} e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2})\\ &+4\sqrt{2}\pi \alpha^2y\sum_{|n|\leq1} n^2e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2}) -8\sqrt{2} \pi \alpha^2y\sum_{|n|\leq1} n^2e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2})\\ &+4\sqrt{2}y\sum_{|n|\leq1} e^{-\pi\alpha y n^2}\vartheta_X(\frac{y}{\alpha};\frac{n}{2}) -2\sqrt{2} y\sum_{|n|\leq1} e^{-2\pi\alpha y n^2}\vartheta_X(\frac{y}{2 \alpha};\frac{n}{2}), \endaligned\end{equation} and \begin{equation}\aligned\nonumber \tilde{\mathcal{E}}(\alpha;y):=&2\sqrt{2} \alpha\sum_{|n|\geq2} e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2}) -2\sqrt{2} \alpha \sum_{|n|\geq2} e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2})\\ &+4\sqrt{2}\pi \alpha^2y\sum_{|n|\geq2} n^2e^{-\pi\alpha y n^2}\vartheta(\frac{y}{\alpha};\frac{n}{2}) -8\sqrt{2} \pi \alpha^2y\sum_{|n|\geq2} n^2e^{-2\pi\alpha y n^2}\vartheta(\frac{y}{2 \alpha};\frac{n}{2})\\ &+4\sqrt{2}y\sum_{|n|\geq2} e^{-\pi\alpha y n^2}\vartheta_X(\frac{y}{\alpha};\frac{n}{2}) -2\sqrt{2} y\sum_{|n|\geq2} e^{-2\pi\alpha y n^2}\vartheta_X(\frac{y}{2 \alpha};\frac{n}{2}). \endaligned\end{equation} To prove Lemma \ref{lem5.1}, we will seek an appropriate lower bound for $\mathcal{P}(\alpha;y)$ and an upper bound for $\tilde{\mathcal{E}}(\alpha;y)$. To state the proof clearly, we decompose $\mathcal{P}(\alpha;y)$ into several parts. We denote that \begin{equation}\aligned\nonumber \mathcal{P}_{1}(\alpha;y):=&2\sqrt{2} \alpha\vartheta(\frac{y}{\alpha};0) -2\sqrt{2} \alpha \vartheta(\frac{y}{2 \alpha};0) +4\sqrt{2}y\vartheta_X(\frac{y}{\alpha};0) -2\sqrt{2} y\vartheta_X(\frac{y}{2 \alpha};0),\\ \mathcal{P}_{2}(\alpha;y):=&4\sqrt{2} \alpha(1+2\pi\alpha y) e^{-\pi\alpha y }\vartheta(\frac{y}{\alpha};\frac{1}{2}) -4\sqrt{2} \alpha e^{-2\pi\alpha y }(1+4\pi\alpha y)\vartheta(\frac{y}{2 \alpha};\frac{1}{2}),\\ \mathcal{P}_{3}(\alpha;y):=&8\sqrt{2}y e^{-\pi\alpha y }\vartheta_X(\frac{y}{\alpha};\frac{1}{2}) -4\sqrt{2} y e^{-2\pi\alpha y }\vartheta_X(\frac{y}{2 \alpha};\frac{1}{2}). \endaligned\end{equation} Then \begin{equation}\aligned\label{lem5.1P} \mathcal{P}(\alpha;y)=\mathcal{P}_{1}(\alpha;y)+\mathcal{P}_{2}(\alpha;y)+\mathcal{P}_{3}(\alpha;y). \endaligned\end{equation} Lemma \ref{lem5.1} is implied by \eqref{lem5.1denote} and the following lemma. \begin{lemma}\label{lem5.2} Assume that $ \alpha\geq 2$ and $y\geq\frac{\sqrt{3}}{2}$. 
\begin{itemize} \item [(1)] If $\frac{y}{\alpha}\geq1,$ then \begin{itemize} \item [(1)] $\mathcal{P}(\alpha;y)\geq 4\sqrt{2}\alpha e^{-\pi\frac{y}{2\alpha}}\big(\frac{\pi y}{\alpha}-1 -( \frac{2\pi y}{\alpha}-1) e^{-\pi \frac{y}{2\alpha}}\big).$ \item [(2)] $|\tilde{\mathcal{E}}(\alpha;y)|\leq(2\pi\alpha+112\pi {\alpha}^2 y+8 \sqrt{2} y)e^{-4\pi\alpha y}.$ \end{itemize} \item [(2)] If $\frac{y}{\alpha}\in(0,1)$, then \begin{itemize} \item [(1)] $ \mathcal{P}(\alpha;y)\geq 8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}(e^{-\frac{\pi\alpha}{y}}-2\sqrt{2}e^{-\frac{2\pi\alpha}{y}}).$ \item [(2)] $|\tilde{\mathcal{E}}(\alpha;y)|\leq 4\pi {\alpha}^{\frac{3}{2}}y^{-\frac{1}{2}}(1+16\alpha y)e^{-4\pi\alpha y}.$ \end{itemize} \item [(3)] $\mathcal{P}(\alpha;y)+\tilde{\mathcal{E}}(\alpha;y)>0.$ \end{itemize} \end{lemma} \begin{proof} (1). Combining \eqref{TXY} and $\frac{y}{\alpha}\geq1$, one has \begin{equation}\aligned\label{lem5.2eq1} \mathcal{P}_{1}(\alpha;y) =&4\sqrt{2}\alpha\Big(e^{-\pi\frac{y}{2\alpha}}\big(\frac{\pi y}{\alpha}-1 -( \frac{2\pi y}{\alpha}-1) e^{-\pi \frac{y}{2\alpha}}\big)\\ &\;\;+ \sum_{n=2}^{\infty}e^{-\pi{n}^2\frac{y}{2\alpha}}\big( \frac{\pi y}{\alpha}n^2-1- ( \frac{2\pi y}{\alpha}n^2-1) e^{-\pi{n}^2\frac{y}{2\alpha}}\big)\Big)\\ \geq& 4\sqrt{2}\alpha e^{-\pi\frac{y}{2\alpha}}\big(\frac{\pi y}{\alpha}-1 -( \frac{2\pi y}{\alpha}-1) e^{-\pi \frac{y}{2\alpha}}\big)>0.\\ \endaligned\end{equation} For $\mathcal{P}_{2}(\alpha;y)$ and $\mathcal{P}_{3}(\alpha;y)$, similar to the analysis of positiveness of $M_{3}(\alpha;y)$ and $M_{4}(\alpha;y)$ in Lemma \ref{lemma4.19}, one can get $\mathcal{P}_{2}(\alpha;y)>0$ and $\mathcal{P}_{3}(\alpha;y)>0$. Then, by \eqref{lem5.1P} and \eqref{lem5.2eq1}, we obtain the lower bound estimate of $\mathcal{P}(\alpha;y)$. Similar to the proof of item (2) of Lemma \ref{4.2lemma2}, using \eqref{TXY}, \eqref{Poisson} and Lemma \ref{lem4.23}, we can get an upper bound estimate of $\tilde{\mathcal{E}}(\alpha;y)$. (2). As $\frac{\alpha}{y}\geq1$, for $\mathcal{P}_{1}(\alpha;y)$, by \eqref{Poisson}, one has \begin{equation}\aligned\label{lem5.3eq1} \mathcal{P}_{1}(\alpha;y)=&8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}\big(\sum_{n=1}^{\infty}n^2 e^{-\pi n^2 \frac{\alpha}{y}}-2\sqrt{2}\sum_{n=1}^{\infty}n^2 e^{-2\pi n^2 \frac{\alpha}{y}}\big)\\ =&8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}\big( e^{-\pi \frac{\alpha}{y}}-2\sqrt{2}e^{-2\pi \frac{\alpha}{y}}+ \sum_{n=2}^{\infty}n^2 e^{-\pi n^2 \frac{\alpha}{y}}(1-2\sqrt{2} e^{-\pi n^2 \frac{\alpha}{y}} )\big)\\ \geq& 8\sqrt{2}\pi{\alpha}^{\frac{5}{2}}y^{-\frac{3}{2}}(e^{-\frac{\pi\alpha}{y}}-2\sqrt{2}e^{-\frac{2\pi\alpha}{y}}). \endaligned\end{equation} Similar to the analysis of the positiveness of $M_{3}(\alpha;y)$ and $M_{4}(\alpha;y)$ in Lemma \ref{4.3lemma3}, one gets $\mathcal{P}_{2}(\alpha;y)>0$ and $\mathcal{P}_{3}(\alpha;y)>0.$ Thus, by \eqref{lem5.1P}, \eqref{lem5.3eq1} and the positiveness of $\mathcal{P}_{2}(\alpha;y)$ and $\mathcal{P}_{3}(\alpha;y)$, we get the estimate for $\mathcal{P}(\alpha;y)$. Similar to the proof of item (2) of Lemma \ref{4.2lemma2}, we get the upper bound estimate of $\mathcal{E}(\alpha;y)$. Items (1) and (2) yield item (3). \end{proof} With the previous preparation, we are ready to prove our main theorem. 
Case 1: By Theorems \ref{3Thm1} and \ref{4Thm1}, we obtain that, up to the action by the modular group, \begin{equation}\aligned\label{Lemma511} \underset{z\in \mathbb{H}}{\arg \min}\;\big(\mathcal{K}(\alpha;z)-b\mathcal{K}(2\alpha;z)\big)=e^{i\frac{\pi}{3}} \;\;\hbox{for}\;\;\alpha\geq2\;\hbox{and}\;b\leq2. \endaligned\end{equation} Therefore, we define \begin{equation}\label{define} b_{c_{1}}:=\max\big\{ b\;\big|\;\underset{z\in \Gamma_c}{\arg \min}\;\big(\mathcal{K}(\alpha;z)-b\mathcal{K}(2\alpha;z)\big)=e^{i\frac{\pi}{3}} \;\;\hbox{for}\;\;\alpha\geq2 \big\}. \end{equation} Then by Proposition \ref{Prop51} and \eqref{Lemma511}, $b_{c_1}\in(2,2\sqrt2)$. The threshold $b_{c_1}$ varies with $\alpha$. For example, $b_{c_{1}}\approx2.8061$ when $\alpha=2$, and $b_{c_{1}}\approx2.8242$ when $\alpha=2.5.$ Case 2: $b\in (b_{c_{1}},b_{c_2}).$ When $b<b_{c_2}=2\sqrt{2}$, by Theorem \ref{3Thm1} and \eqref{5eq1}, the minimizer always exists. Furthermore, by Theorem \ref{3Thm1} and the definition of $b_{c_{1}}$ in \eqref{define}, the minimizer is $\frac{1}{2}+iy_b$ $(y_b>\frac{\sqrt{3}}{2})$, corresponding to the skinny-rhombic lattice. A numerical simulation can be found in Figure \ref{excel}. Furthermore, the monotonicity result in Luo-Wei-Zou \cite{LW2021} implies that when $b$ approaches $2\sqrt{2}$, $y_{b}$ approaches $+\infty$. Case 3: $b\geq b_{c_2}=2\sqrt{2}.$ It follows by Proposition \ref{Prop51}. \begin{figure} \centering \includegraphics[scale=0.42]{excel.png} \caption{The minimizers $z=\frac{1}{2}+i y_{b}$ corresponding to skinny-rhombic lattices.} \label{excel} \end{figure} \vskip0.1in {\bf Acknowledgements.} The research of S. Luo is partially supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 12261045 and 12001253, and by the Jiangxi Jieqing Fund under Grant No. 20242BAB23001. \begin{thebibliography}{10} \bibitem{Bet2015} L. B\'etermin and P. Zhang, Minimization of energy per particle among Bravais lattices in $\mathbb{R}^2$: Lennard-Jones and Thomas-Fermi cases, {\em Commun. Contemp. Math.}, 17(6) (2015), 1450049. \bibitem{Bet2016} L. B\'etermin, Two-dimensional theta functions and crystallization among Bravais lattices, {\em SIAM Journal on Mathematical Analysis}, 48(5) (2016), 3236-3269. \bibitem{Bet2018} L. B\'etermin, Local variational study of 2d lattice energies and application to Lennard-Jones type interactions, {\em Nonlinearity}, 31(9) (2018), 3973-4005. \bibitem{Bet2019} L. B\'etermin, Minimizing lattice structures for Morse potential energy in two and three dimensions, {\em Journal of Mathematical Physics}, 60(10) (2019), 102901. \bibitem{BP2017} L. B\'etermin and M. Petrache, Dimension reduction techniques for the minimization of theta functions on lattices, {\em Journal of Mathematical Physics}, 58(7) (2017), 071902. \bibitem{Bet2020} L. B\'etermin, M. Faulhuber and H. Kn\"{u}pfer, On the optimality of the rock-salt structure among lattices with charge distributions, {\em Mathematical Models and Methods in Applied Sciences}, 31(2) (2021), 293-325. \bibitem{Bet2019AMP} L. B\'etermin and M. Petrache, Optimal and non-optimal lattices for non-completely monotone interaction potentials, {\em Analysis and Mathematical Physics}, 9(4) (2019), 2033-2073. \bibitem{Betermin2021JPA} L. B\'etermin, On energy ground states among crystal lattice structures with prescribed bonds, {\em Journal of Physics A}, 54(24) (2021), 245202. \bibitem{Betermin2021AHP} L. B\'etermin, Effect of periodic arrays of defects on lattice energy minimizers, {\em Ann. Henri Poincar\'e}, 22 (2021), no.
9, 2995-3023. \bibitem{Betermin2021Arma} L. B\'etermin, L. De Luca and M. Petrache, Crystallization to the Square Lattice for a Two-Body Potential, {\em Arch. Rational Mech. Anal.}, 240 (2021), 987-1053. \bibitem{Betermin2021LMP} L. B\'etermin, M. Friedrich and U. Stefanelli, Lattice ground states for embedded-atom models in 2D and 3D, {\em Letters in Mathematical Physics}, 111, article number 107 (2021). \bibitem{Bindi2020} L. Bindi, What Are Quasicrystals and Why They Are so Important? In: Natural Quasicrystals. SpringerBriefs in Crystallography. Springer, Cham, 2020. \bibitem{Blanc2015} X. Blanc and M. Lewin, The crystallization conjecture: a review, {\em EMS Surv. Math. Sci.}, 2 (2015), no. 2, 255-306. \bibitem{Cassels1963} J. W. S. Cassels, On a problem of Rankin about the Epstein zeta function, {\em Proc. Glasgow Math. Assoc.}, 4 (1959), 73-80. (Corrigendum, ibid. 6 (1963), 116.) \bibitem{Dia1964} P. H. Diananda, Notes on two lemmas concerning the Epstein zeta-function, {\em Proc. Glasgow Math. Assoc.}, 6 (1964), 202-204. \bibitem{DK2024} K. Deng and S. Luo, Minimizing lattice energy and hexagonal crystallization. \bibitem{Kaganer1999} V. M. Kaganer, H. M{\"o}hwald, and P. Dutta, Structure and phase transitions in Langmuir monolayers, {\em Rev. Mod. Phys.}, 71, 779, 1999. \bibitem{Luo2019} S. Luo, X. Ren and J. Wei, Non-hexagonal lattices from a two species interacting system, {\em SIAM J. Math. Anal.}, 52(2) (2020), 1903-1942. \bibitem{Luo2022} S. Luo and J. Wei, On minima of sum of theta functions and application to Mueller-Ho conjecture, {\em Arch. Ration. Mech. Anal.}, 243 (2022), no. 1, 139-199. \bibitem{LW2022} S. Luo and J. Wei, On minima of difference of theta functions and application to hexagonal crystallization, {\em Math. Ann.}, 387 (2023), no. 1-2, 499-539. \bibitem{LW2021} S. Luo, J. Wei and W. Zou, On universally optimal lattice phase transitions and energy minimizers of completely monotone potentials, arXiv:2110.08728. \bibitem{Luo2023} S. Luo and J. Wei, On lattice hexagonal crystallization for non-monotone potentials, {\em J. Math. Phys.}, 65 (2024), no. 7, Paper No. 071901, 28 pp. \bibitem{Luo2024+} S. Luo and J. Wei, On a variational model for the continuous mechanics exhibiting hexagonal to square phase transitions, arXiv:2312.02497. \bibitem{Mon1988} H. Montgomery, Minimal theta functions, {\em Glasgow Math. J.}, 30 (1988), 75-85. \bibitem{Nosenko2004} V. Nosenko, K. Avinash, J. Goree, and B. Liu, Nonlinear Interaction of Compressional Waves in a 2D Dusty Plasma Crystal, {\em Phys. Rev. Lett.}, 92, 085001, 2004. \bibitem{Peeters1987} F. M. Peeters and Xiaoguang Wu, Wigner crystal of a screened-Coulomb-interaction colloidal system in two dimensions, {\em Phys. Rev. A}, 35, 3109, 1987. \bibitem{Radin1987} C. Radin, Low temperature and the origin of crystalline symmetry, {\em International Journal of Modern Physics B}, Vol. 01, No. 05n06, pp. 1157-1191 (1987). \bibitem{Rakin1953} R. A. Rankin, A minimum problem for the Epstein zeta function, {\em Proc. Glasgow Math. Assoc.}, 1 (1953), 149-158. \bibitem{Schwerdtfeger2006} P. Schwerdtfeger, N. Gaston, R. P. Krawczyk, R. Tonner, and G. E. Moyano, Extension of the Lennard-Jones potential: Theoretical investigations into rare-gas clusters and crystal lattices of He, Ne, Ar, and Kr using many-body interaction expansions, {\em Phys. Rev. B}, 73, 064112, 2006. \bibitem{Ennola1964} V. Ennola, A lemma about the Epstein zeta function, {\em Proc. Glasgow Math. Assoc.}, 6 (1964), 198-201.
\end{thebibliography} \end{document}
2412.09322v1
http://arxiv.org/abs/2412.09322v1
Equivariant Q-sliceness of strongly invertible knots
\PassOptionsToPackage{dvipdfm}{graphicx} \documentclass[11pt]{amsart} \setlength\textheight{7.7in} \setlength\textwidth{6.5in} \setlength\oddsidemargin{0in} \setlength\evensidemargin{0in} \setlength\parindent{0.25in} \setlength\marginparwidth{0.8in} \usepackage{listings} \usepackage{amsmath, amssymb, amscd, amsthm, mathrsfs, url ,pinlabel, verbatim, lipsum, wrapfig} \usepackage[text={6.5in,9.5in}, centering, letterpaper, dvips]{geometry} \usepackage{bmpsize} \usepackage{color, multirow, latexsym, float, xcolor} \usepackage{caption, tikz, tikz-cd, setspace, enumitem} \captionsetup{font=small} \usetikzlibrary{positioning, arrows.meta, knots, braids} \usepackage{scalerel,stackengine} \stackMath \newcommand\reallywidehat[1]{\savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{#1}}]{\kern-.6pt\bigwedge\kern-.6pt} {\rule[-\textheight/2]{1ex}{\textheight}} }{\textheight}}{0.5ex}}\stackon[1pt]{#1}{\tmpbox}} \usepackage[backref=page, linktocpage=true]{hyperref} \renewcommand*{\backref}[1]{} \renewcommand*{\backrefalt}[4]{ \ifcase #1 (Not cited.) \or (Cited on page~#2.) \else (Cited on pages~#2.)} \definecolor{maroon}{rgb}{0.5, 0.0, 0.0} \definecolor{darkblue}{rgb}{0.03, 0.27, 0.49} \hypersetup{ colorlinks = true, urlcolor = maroon, linkcolor = darkblue, citecolor = maroon } \newcommand{\CommaPunct}{\mathpunct{\raisebox{0.5ex}{,}}} \usepackage[T1]{fontenc} \usepackage{lmodern} \DeclareUrlCommand{\bfurl}{\def\UrlFont{\ttfamily}} \DeclareMathOperator{\Fix}{Fix} \DeclareMathOperator{\Diffeo}{Diffeo} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Coker}{Coker} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Img}{Im} \newtheorem{thm}{Theorem}[section] \newtheorem*{nota}{Notation} \newtheorem*{claim}{Claim} \newtheorem{assum}[thm]{Assumption} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{ques}[thm]{Question} \newtheorem*{conj}{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem*{deftn}{Definition} \newtheorem{exmp}{Example}[section] \newtheorem{introthm}{Theorem} \newtheorem{introcor}[introthm]{Corollary} \newtheorem{introconj}[introthm]{Conjecture} \newtheorem{introprob}{Problem} \newtheorem{introques}{Question} \renewcommand{\theintrothm}{\Alph{introthm}} \renewcommand{\theintrocor}{\Alph{introcor}} \renewcommand{\theintroconj}{\Alph{introconj}} \renewcommand{\theintroprob}{\Alph{introprob}} \renewcommand{\theintroques}{\Alph{introques}} \theoremstyle{remark} \newtheorem{rem}{Remark} \newcommand{\T}{\mathcal{T}} \renewcommand{\L}{\mathcal{L}} \newcommand\Z{\mathbb{Z}} \newcommand\Q{\mathbb{Q}} \newcommand\R{\mathbb{R}} \newcommand\C{\mathcal{C}} \newcommand\CQ{\mathcal{C}_{\mathbb{Q}}} \newcommand\CT{\widetilde{\mathcal{C}}} \newcommand\CTQ{\widetilde{\mathcal{C}}_{\mathbb{Q}}} \newcommand\J{\mathcal{J}} \newcommand\JO{\mathcal{J}_{\mathrm{odd}}} \newcommand\JE{\mathcal{J}_{\mathrm{even}}} \usepackage[colorinlistoftodos,textwidth=3cm]{todonotes} \newcommand{\noteOS}[1]{\todo[author=OS,color=green!30,inline]{#1}} \newcommand{\noteAdP}[1]{\todo[author=AdP,color=blue!30,inline]{#1}} \newlength\Colsep \setlength\Colsep{10pt} \title{Equivariant $\mathbb{Q}$-sliceness of strongly invertible knots} \author{Alessio Di Prisa} \address{Scuola Normale Superiore, 56126 Pisa, Italy} \email{\url{[email protected]}} \urladdr{\url{https://sites.google.com/view/alessiodiprisa}} \author{O{\u{g}}uz \c{S}avk} \address{CNRS and Laboratorie de Math\'ematiques Jean 
Leray, Nantes Universit\'e, 44322 Nantes, France} \email{\url{[email protected]}} \urladdr{\url{https://sites.google.com/view/oguzsavk}} \date{} \begin{document} \begin{abstract} We introduce and study the notion of equivariant $\mathbb{Q}$-sliceness for strongly invertible knots. On the constructive side, we prove that every \emph{Klein amphichiral knot}, which is a strongly invertible knot admitting a \emph{compatible} negative amphichiral involution, is equivariant $\mathbb{Q}$-slice in a single $\Q$-homology $4$-ball, by refining Kawauchi's construction and generalizing Levine's uniqueness result. On the obstructive side, we show that the equivariant version of the classical Fox-Milnor condition, proved recently by the first author in \cite{DP23b}, also obstructs equivariant $\Q$-sliceness. We then introduce the equivariant $\mathbb{Q}$-concordance group and study the natural maps between concordance groups as an application. We also list some open problems for future study. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} A knot $K \subset S^3$ is called \emph{invertible} if $K$ is isotopic to its reverse $-K$. Similarly, $K$ is said to be \emph{negative amphichiral} if $K$ is isotopic to the reverse of its mirror image $-\overline{K}$. When the isotopy maps $\rho$ and $\tau$ are further chosen to be involutions (see {\sc\S}\ref{sec:symmetry}), the pairs $(K, \rho)$ and $(K, \tau)$ are called \emph{strongly invertible} and \emph{strongly negative amphichiral}, respectively. The search for sliceness notions for strongly invertible and strongly negative amphichiral knots dates back to the nineteen-eighties. In his influential article \cite{Sak86}, Sakuma introduced the notion of \emph{equivariant sliceness} for strongly invertible knots, by requiring them to bound equivariant disks smoothly and properly embedded in $B^4$. Cochran and Kawauchi first observed the \emph{$\Q$-sliceness} for the figure-eight knot and the $(2,1)$-cable of the figure-eight knot, respectively, in the sense that these knots bound disks smoothly properly embedded in some $\Q$-homology $4$-balls. Kawauchi's observation appeared in an unpublished note \cite{Kaw80}, and Cochran used the earlier work of Fintushel and Stern \cite{FS84}, which was never published. Later, Kawauchi \cite{Kaw09} proved his famous characterization result, showing that every strongly negative amphichiral knot is $\Q$-slice. See {\sc\S}\ref{sec:applications} for more details. The main objective of this article is to merge the concepts of equivariant sliceness and $\Q$-sliceness and to introduce the study of equivariant $\mathbb{Q}$-sliceness. We call a strongly invertible knot $(K, \rho)$ \emph{equivariant $\Q$-slice} if $K$ bounds an equivariant disk smoothly properly embedded in a $\Q$-homology $4$-ball (see {\sc\S}\ref{sec:equivariant-slice}). \subsection{Fundamentals} \label{sec:fundamentals} Combining these two notions leads us to study the relations between knot symmetries and introduce a specific type of symmetry, which we call \emph{Klein amphichirality}. A \emph{Klein amphichiral} knot $K \subset S^3$ is a triple $(K, \rho, \tau)$ such that the strongly invertible involution $\rho$ and strongly negative amphichiral involution $\tau$ commute with each other. 
This explains the motivation behind the name (see {\sc\S}\ref{sec:equivariant-slice} for more details), since the maps $\rho$ and $\tau$ together generate the \emph{Klein four group} (\emph{Vierergruppe} in German) in the symmetry group of the knot: $$V = \Z / 2\Z \times \Z / 2\Z \ \leq \ \mathrm{Sym} (S^3, K) .$$ Our first theorem provides a refinement of Kawauchi's characterization of the $\Q$-sliceness of strongly negative amphichiral knots. Kawauchi constructed a $\Q$-homology $4$-ball $Z_K$, a priori depending on the given knot, in which $(K,\tau)$ is slice. However, Levine \cite{Lev23} surprisingly proved that such manifolds $Z_K$ are actually all diffeomorphic to a single $\Q$-homology ball $Z$, called the \emph{Kawauchi manifold}. Moreover, our theorem also extends Levine's crucial result to the equivariant case, showing that the pair $(Z,\rho_K)$, corresponding to a $\Q$-homology $4$-ball for a Klein amphichiral knot $(K, \rho, \tau)$, is also unique. \begin{introthm} \label{thm:mainthm1} Let $(K, \rho, \tau)$ be a Klein amphichiral knot. Then $(K, \rho)$ is equivariant $\Q$-slice. In fact, $(K, \rho)$ bounds a slice disk in the Kawauchi manifold $Z$ which is invariant under an involution $\rho_K$ of $Z$, extending $\rho$. Moreover, the involution $\rho_K$ does not depend on $(K,\rho,\tau)$ up to conjugacy in $\operatorname{Diff}(Z)$, where $\operatorname{Diff}(Z)$ denotes the group of diffeomorphisms of $Z$. \end{introthm} Inspired by the earlier work of Cochran, Franklin, Hedden, and Horn \cite{CFHH13}, we provide a fundamental obstruction, which was already shown in \cite{DP23b} to obstruct equivariant sliceness. It is an equivariant and rational version of the Fox-Milnor condition, which plays a key role in comparing the notions of $\Q$-sliceness and equivariant $\Q$-sliceness. It is sufficient to obstruct well-known strongly invertible $\Q$-slice knots, including the figure-eight knot, from being equivariant $\Q$-slice (see {\sc\S}\ref{sec:obstructions} for more details). \begin{introthm} \label{thm:mainthm2} If $(K,\rho)$ is equivariant $\Q$-slice, then its Alexander polynomial $\Delta_K(t)$ is a square. \end{introthm} Note that every Klein amphichiral knot is clearly \emph{strongly positive amphichiral}, i.e., isotopic to its mirror image via an involution. Furthermore, it is well known from the classical result of Hartley and Kawauchi \cite{HK79} that the Alexander polynomial of a strongly positive amphichiral knot is a square. Therefore, it is interesting to compare Theorem~\ref{thm:mainthm1} with Theorem~\ref{thm:mainthm2}, see Problem~\ref{prob:+amphichiral}. \subsection{Applications} \label{sec:applications} Using our main theorems in {\sc\S}\ref{sec:fundamentals}, we further investigate the natural maps between concordance groups. To do so, we introduce the \emph{equivariant $\Q$-concordance group} $\CTQ$ (see {\sc\S}\ref{sec:equivariant-concordance}) by analyzing the equivariant $\Q$-concordance classes of strongly invertible knots. Recall that two knots in $S^3$ are said to be \emph{concordant} if they cobound a smoothly properly embedded annulus in $S^3 \times [0,1]$. The set of oriented knots modulo concordance forms a countable abelian group, namely the \emph{concordance group} $\C$, under the operation induced by connected sum. The trivial element in $\C$ is the concordance class of the unknot. The knots lying in this concordance class are the so-called \emph{slice knots}, and they bound smoothly properly embedded disks in $B^4$.
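To make the scope of Theorem~\ref{thm:mainthm2} concrete before recalling these groups in more detail, we record an illustrative instance (a sketch only; here $\Delta_K$ denotes the symmetrized Alexander polynomial, determined up to multiplication by units $\pm t^{\pm 1}$).

\begin{exmp}
The figure-eight knot $4_1$ is strongly invertible and $\Q$-slice, and its Alexander polynomial is
$$\Delta_{4_1}(t) = t^2 - 3t + 1 \quad (\text{up to units}),$$
which is irreducible over $\Q$ and hence not a square. Therefore, by Theorem~\ref{thm:mainthm2}, the knot $4_1$ equipped with any strong inversion is not equivariant $\Q$-slice, even though it is $\Q$-slice.
\end{exmp}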
The concordance group and the notion of sliceness were defined in the seminal work of Fox and Milnor \cite{FM66}. Since then, they have been central objects of active research in knot theory and low-dimensional topology, see the survey articles \cite{Liv05, Hom17, Sav24}. Cha's monograph \cite{Cha07} systematically developed the \emph{$\Q$-concordance} of knots and the \emph{$\Q$-concordance group} $\CQ$. Recently, there has been a great deal of interest in these concepts as well, see \cite{KW18, Mil22, HKPS22, Lev23, Lee24}. In \cite{Sak86}, Sakuma introduced the \emph{equivariant concordance group} $\CT$ by studying strongly invertible knots under equivariant connected sum. It is again known to be countable, but until the recent work of the first author \cite{DP23} and his joint work with Framba \cite{DPF23}, the structure of $\CT$ was completely mysterious. Unlike $\C$, it turns out that $\CT$ is non-abelian and in fact non-solvable. The comprehensive study of $\CT$ and its invariants has been the subject of various recent articles \cite{Wat17, BI22, DMS23, HHS23, MP23}. Considering the four concordance groups and their natural maps, we therefore have the following commutative diagram. Since two concordant (resp. equivariant concordant) knots are $\Q$-concordant (resp. equivariant $\Q$-concordant), we have the surjective maps $\psi$ and $\Psi$. Moreover, $\mathfrak{f}$ and $\mathfrak{f}_\Q$ are both \emph{forgetful} maps, forgetting the additional structures. \[ \begin{tikzcd} \CT \arrow{r}{\Psi} \arrow[swap]{d}{\mathfrak{f}} & \CTQ \arrow{d}{\mathfrak{f}_\Q} \\\C \arrow{r}{\psi}& \CQ \end{tikzcd} \] Using the new characterization in Theorem~\ref{thm:mainthm1}, we show that the kernel of the map $\Psi$ has a rich algebraic structure. Our constructions use certain Klein amphichiral Turk's head knots $J_n = Th (3,n)$, which are defined as the braid closures of the $3$-braids $(\sigma_1 {\sigma_2}^{-1})^n$ (see \cite{DPS24} and {\sc\S}\ref{sec:constructions} for more details). For the obstructions, we rely on the moth polynomials introduced in the recent work of Framba and the first author \cite{DPF23b} and on the application of Milnor invariants to equivariant concordance, as described in \cite{DPF23}. \begin{introthm} \label{thm: mainthm3} There exists a nonabelian subgroup $\mathcal{J}$ of $\mathrm{Ker}(\Psi)$ such that its abelianization $\mathcal{J}^{ab}$ is isomorphic to $\Z^\infty$. Moreover, the image of $\mathcal{J}$ under $\mathfrak{f}$ is a $2$-torsion subgroup of $\C$. \end{introthm} Using the fundamental obstruction in Theorem~\ref{thm:mainthm2}, we are able to prove another interesting result about equivariant knots. Here, the constructive part follows from Cha's result (see {\sc\S}\ref{sec:obstructions}). \begin{introthm} \label{thm: mainthm4} There exists a subgroup $\mathcal{K}$ of $\Ker (\mathfrak{f}_\Q)$ which surjects onto $(\Z/2\Z)^\infty$. \end{introthm} Previously, the other maps in the above diagram have been studied extensively. In \cite{Cha07}, Cha showed that $\mathrm{Ker}(\psi)$ has a $(\Z/2\Z)^\infty $ subgroup, generated by non-slice amphichiral knots, containing the figure-eight knot (see {\sc\S}\ref{sec:obstructions}). Recently, Hom, Kang, Park, and Stoffregen \cite{HKPS22} proved that $(2n-1, 1)$-cables of the figure-eight knot for $n \geq 2$ generate a $\Z^\infty $ subgroup in $\mathrm{Ker}(\psi)$. On the other hand, Livingston \cite{Liv83} (cf.
\cite{Kim23}) proved that the map $\mathfrak{f}$ is not surjective, exhibiting knots which are not concordant to their reverses. This result was later improved in the work of Kim and Livingston \cite{KL22}, by showing the existence of topologically slice knots which are not concordant to their reverses. More recently, Kim \cite{Kim23} showed the existence of knots that are not $\Q$-concordant to their reverses, showing that also the map $\mathfrak{f}_\Q$ is not surjective. Potential counterexamples to the slice-ribbon conjecture based on certain cables of $\Q$-slice knots were recently eliminated by the work of Dai, Kang, Mallick, Park, and Stoffregen \cite{DKMPS22}. The core example was the $(2,1)$-cable of the figure-eight knot $K = 4_1$, denoted by $K_{2,1}$. Their strategy for obstructing the sliceness of $K_{2,1}$ was to show that its double branched cover bounds no equivariant homology $4$-ball, remembering the data of the branching involution. This is closely related to the doubling construction (see {\sc\S}\ref{sec:constructions}) by means of the Montesinos trick which provides the diffeomorphism $\Sigma_2 ({K}_{2,1}) \cong S^3_{+1} (K \# \overline{K})$. More precisely, this diffeomorphism identifies the branching involution on $\Sigma_2 (K_{2,1})$ with the involution on the surgered manifold $S^3_{+1} (K \# \overline{K})$, induced from the strong inversion on $K \# \overline{K}$. More obstructions were recently obtained by Kang, Park, and Taniguchi \cite{KPT24}. Both $\Q$-slice and strongly invertible knots have been the subject of interesting constructions of $3$- and $4$-manifolds. For example, $\Q$-slice knots were used to exhibit Brieskorn spheres, which bound $\Q$-homology $4$-balls but not $\Z$-homology $4$-balls, see the articles by Akbulut and Larson \cite{AL18} and the second author \cite{Sav20}. Another instance was the work of Dai, Hedden, and Mallick \cite{DHM23}, which used strongly invertible slice knots to produce infinite families of new corks. More recently, Dai, Mallick, and Stoffregen \cite{DMS23} provided a new detection of exotic pairs of smooth disks in $B^4$ by relying on strong inversions of slice knots in $S^3$. \subsection{Open Problems} Finally, we would like to list some basic open problems for the future study of the new group $\CTQ$ and the behavior of its elements. The first problem aims to measure the difference between Theorem~\ref{thm:mainthm1} and Theorem~\ref{thm:mainthm2}. \begin{introprob} \label{prob:+amphichiral} Is every equivariant $\Q$-slice knot equivariant concordant to a Klein amphichiral knot? \end{introprob} The second problem concerns the structure of $\CTQ$, and we expect affirmative answers to both. \begin{introprob} Is $\CTQ$ non-abelian? Is $\CTQ$ non-solvable? \end{introprob} The next problem is about the potential complexity of equivariant $\Q$-slice knots. Following the paper of Boyle and Issa \cite{BI22}, recall that given a strongly invertible knot $(K, \rho)$, the \emph{equivariant $4$-genus} $\widetilde{g_4}$ of $K$ is the minimal genus of an orientable, smoothly properly embedded surface $S \subset B^4$ with boundary $K$ for which $\rho$ extends to an involution $\widetilde{\rho}: (B^4, S) \to (B^4, S)$. Previously, using Casson-Gordon invariants, Miller \cite{Mil22} proved that there are $\Q$-slice knots with arbitrarily large $g_4$, namely the classical smooth $4$-genus. \begin{introprob} Are there equivariant $\Q$-slice knots with arbitrarily large $\widetilde{g_4}$?
\end{introprob} Theorem \ref{thm:mainthm2} provides a first obstruction to equivariant $\Q$-sliceness, which can be seen as an \emph{algebraic concordance} obstruction in this setting. We would like to ask whether it is possible to define other obstructions and invariants, similar to the ones obtained in \cite{Lev69b,Lev69,Cha07,DP23b}. See also Remark~\ref{rem:algebraic concordance}. \begin{introprob} \label{prob:algebraic concordance} Can we define a notion of equivariant algebraic $\Q$-concordance? \end{introprob} The knot Floer theoretic invariants have had several important applications to the study of knot concordance, see Hom's surveys \cite{Hom17,Hom23} for more details. The two famous invariants -- $\tau$ and $\epsilon$ -- are also known to be $\Q$-concordance invariants. More recently, Dai, Mallick, and Stoffregen \cite{DMS23} also provided several equivariant concordance invariants using knot Floer homology. As a final problem, we would like to ask: \begin{introprob} \label{prob:floer_homology} Can we define equivariant $\Q$-concordance invariants using knot Floer homology? \end{introprob} \subsection*{Organization} In {\sc\S}\ref{sec: equivariant}, we study equivariant $\Q$-concordances from a broad perspective. We review symmetries of knots and introduce the notion of Klein amphichirality in {\sc\S}\ref{sec:symmetry}. Then, in {\sc\S}\ref{sec:equivariant-slice}, we prove Theorem~\ref{thm:mainthm1}. Next, we introduce the equivariant $\Q$-concordance group in {\sc\S}\ref{sec:equivariant-concordance}. In {\sc\S}\ref{sec:constructions}, we construct equivariant $\Q$-slice knots by using Klein amphichiral Turk's head knots. Finally, we prove Theorem~\ref{thm:mainthm2} in {\sc\S}\ref{sec:obstructions}, and give examples of knots that are not equivariant $\Q$-slice. We close the section by proving Theorem~\ref{thm: mainthm4}. In {\sc\S}\ref{sec:independence}, we focus on the obstructions. After discussing preliminary notions such as weighted graphs, Gordon-Litherland forms, and moth polynomials, we show independence of certain equivariant $\Q$-slice knots in $\CT$. We finally prove Theorem~\ref{thm: mainthm3}. \subsection*{Acknowledgements} Our project started during the events Winter Braids XIII in Montpellier and Between the Waves 4 in Besse, so we would like to thank the organizers of both meetings. We are grateful to Leonardo Ferrari for the useful discussions on hyperbolic knot symmetries. \section{Equivariant Q-Sliceness and Q-Concordance} \label{sec: equivariant} \subsection{Symmetries of Knots and Klein Amphichirality} \label{sec:symmetry} Following Kawauchi's book \cite[{\sc\S}10]{Kaw90}, we consider the following important symmetries of knots. Furthermore, we use the resolution of the Smith conjecture \cite{W69, MB84} to identify the fixed point set of a given involution, denoted by $\mathrm{Fix} (\cdot)$. We also set the notation $\operatorname{Diff}(\cdot)$ (resp. $\operatorname{Diff}^+(\cdot)$) for the group of diffeomorphisms (resp. orientation-preserving diffeomorphisms) of a manifold, and $\operatorname{Diff}^-(\cdot) \doteq \operatorname{Diff}(\cdot) \setminus \operatorname{Diff}^+(\cdot)$, i.e., the set of orientation-reversing diffeomorphisms of a manifold. \begin{defn} A knot $K$ in $S^3$ is said to be: \begin{itemize}[leftmargin=2em] \item \emph{invertible} if there is a map $\rho \in \operatorname{Diff}^+(S^3)$ such that $\rho (K) = -K$. If $\rho$ is further an involution, then $(K, \rho)$ is called \emph{strongly invertible}.
In this case, we have $\mathrm{Fix}(\rho) = S^1$ and $\mathrm{Fix}(\rho) \cap K = S^0$. Moreover, the knots $(K, \rho)$ and $(K', \rho')$ are called \emph{equivalent} if there is a map $f \in \operatorname{Diff}^+(S^3) $ such that $f(K) = K'$ and $f \circ \rho \circ f^{-1} = \rho'.$ \item \emph{negative amphichiral} if there is a map $\tau \in \operatorname{Diff}^-(S^3)$ such that $\tau (K) = -K$. If $\tau$ is further an involution, $(K, \tau)$ is called \emph{strongly negative amphichiral}. In this case, we have either $\mathrm{Fix}(\tau) = S^0$ or $\mathrm{Fix}(\tau)=S^2$. \item \emph{positive amphichiral} if there is a map $\delta \in \operatorname{Diff}^-(S^3)$ such that $\delta(K) = K$. If $\delta$ is further an involution, then $(K, \delta)$ is called \emph{strongly positive amphichiral}. In this case, we have $\mathrm{Fix}(\delta) = S^0$. \item \emph{$n$-periodic} if there is a map $\theta \in \operatorname{Diff}^+(S^3)$ such that $\theta (K) = K$, $\Fix(\theta)\cap K=\emptyset$ and $\theta$ has period $n$, i.e., $n$ is the minimal positive integer such that $\theta^n = \mathrm{id} \in \operatorname{Diff}^+(S^3) $. If $\mathrm{Fix}(\theta) = S^1$ (resp. $\Fix(\theta)=\emptyset$), then we say that $(K,\theta)$ is \emph{cyclically periodic} (resp. \emph{freely periodic}). \end{itemize} \end{defn} \begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{images/10_123_sym.pdf} \caption{From left to right: strongly positive amphichiral, strongly negative amphichiral and strongly invertible symmetries on $10_{123}$. In the first two cases, the involution is given by the $\pi$-rotation around the blue dot composed with the reflection along the plane of the diagram. The third symmetry is given by the $\pi$-rotation around the red axis.} \label{fig:Th(3,5)} \end{figure} In order to compare the symmetries of knots in our new context, we introduce the following crucial notion. \begin{defn} \label{defn:klein} A knot $K$ in $S^3$ is said to be \emph{Klein amphichiral} if there exist two involutions $\rho,\tau: S^3 \to S^3$ such that \begin{itemize}[leftmargin=2em] \item $(K,\rho)$ is strongly invertible, \item $(K,\tau)$ is strongly negative amphichiral, \item $\rho \circ \tau = \tau \circ \rho$. \end{itemize} \end{defn} The \emph{symmetry group} of a knot $K$ in $S^3$, which is denoted by $\mathrm{Sym} (S^3, K)$, is defined to be the mapping class group of the knot exterior $S^3 \setminus \nu(K)$, see \cite[{\sc\S}10.6]{Kaw90}. Denote by $\mathrm{Sym}^+(S^3,K)$ the subgroup consisting of orientation-preserving maps. Observe that since the maps $\tau$ and $\rho$ of a Klein amphichiral knot commute, they together generate the \emph{Klein four group} $$ V = \Z/2\Z \times \Z/2\Z ,$$ hence the composition map $\tau \circ \rho$ is a strongly positive amphichiral involution for $K$. See Figure~\ref{fig:10_123_sym} for an example of a Klein amphichiral knot. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{images/klein_J5.pdf} \caption{Klein amphichiral symmetry on $10_{123}$: $\rho$ is given by a $\pi$-rotation around the red axis, while $\tau$ is the point reflection around the two blue points.} \label{fig:10_123_sym} \end{figure} Notice that there exist two distinct types of Klein amphichiral symmetries, distinguished by the fixed point set of $\tau$: \begin{enumerate} \item $\Fix(\tau)\cong S^2$, which forces $K=J\widetilde{\#}J^{-1}$ for some DSI knot $J$, \item $\Fix(\tau)\cong S^0$.
\end{enumerate} In the first case, $K$ is clearly equivariant slice, which we will regard as the trivial case. Therefore, we will always assume that the symmetry falls in the second case. \begin{rem} The recent work of Boyle, Rouse, and Williams \cite{BRW23} provided a classification result for symmetries of knots in $S^3$. In their terminology, a Klein amphichiral knot corresponds to a $D_2$-symmetric knot of type SNASI-(1). \end{rem} A knot $K$ is called \emph{hyperbolic} if the knot complement $S^3 \setminus \nu(K)$ admits a complete metric of constant curvature $-1$ and finite volume. Now, we recall the following proposition relating the symmetries of hyperbolic knots with their symmetry groups. \begin{prop} \label{prop:equivariant} Let $K$ be a hyperbolic, invertible, and negative amphichiral knot in $S^3$. Suppose that $\mathrm{Sym} (S^3, K) = D_{2m}$ with $m$ odd, where $D_n$ is the dihedral group with $2n$ elements. Then $K$ admits a unique strongly invertible involution $\rho$ (up to \emph{equivalence}, see {\sc\S}\ref{sec:constructions}) and a strongly negative amphichiral involution $\tau$ such that $(K,\rho,\tau)$ is Klein amphichiral. \end{prop} \begin{proof} Let $K$ be a hyperbolic, invertible and negative amphichiral knot in $S^3$. Since $K$ is a hyperbolic knot, by the work of Kawauchi \cite[Lemma~1]{Kaw79}, we have a pair of strongly negative amphichiral and strongly invertible involutions $\tau, \rho \in \mathrm{Sym} (S^3, K)$ for the knot $K$. As $K$ is both strongly negative amphichiral and strongly invertible, we know that $\mathrm{Sym} (S^3, K) = D_{2m}$ for some $m$, due to the result of Kodama and Sakuma \cite[Lemma~1.1]{KS92}. Now, assume that $\mathrm{Sym} (S^3, K) = D_{2m}=\langle s,t\;|\;t^{2m}=s^2=1, sts=t^{-1}\rangle$ with $m$ odd. It is a well-known fact that the involutions of $D_{2m}$ split into three conjugacy classes, namely $\{t^m\}$, $\{st^{2i}\}_{0\leq i< m}$ and $\{st^{2i+1}\}_{0\leq i<m}$. Since $\tau$ and $\rho$ are, respectively, orientation reversing and preserving, they are not conjugate. If one of them corresponds to $t^m$, which is central, then they commute. Otherwise, they lie respectively in $\{st^{2i}\}_{0\leq i< m}$ and $\{st^{2i+1}\}_{0\leq i< m}$ (or vice versa). By replacing $\rho$ and $\tau$ with suitable conjugates, we can suppose $\tau=st$ and $\rho=st^{2i}$ so that $2i\equiv 1\mod{m}$ (which exists since $m$ is odd). It is now easy to check that $\rho \circ \tau = \tau \circ \rho = t^m$, where $t^m$ is a strongly positive amphichiral involution. Therefore, $(K,\rho,\tau)$ is Klein amphichiral. \end{proof} Now we fix a standard model for the Klein amphichiral symmetry. To do so, we can think of $S^3=\{(z,w)\in\mathbb{C}^2\;|\;|z|^2+|w|^2=1\}$. Now consider $\rho$ and $\tau$ to be the following involutions \begin{align*} \rho: S^3 \to S^3, \quad (z,w)\mapsto(-z,w), \\ \tau:S^3 \to S^3, \quad (z,w)\mapsto(\overline{z},-w), \end{align*} so that $$ (\rho \circ \tau) (z,w) = (-\overline{z}, -w) = ( \tau \circ \rho) (z,w) .$$ According to \cite{BRW23}, up to conjugation in $\operatorname{Diff}^+(S^3)$, we can always suppose that the Klein amphichiral symmetry is given by the action of $\rho$ and $\tau$ above. Then the fixed point sets of these involutions are given by \begin{align*} \Fix(\rho)&=\{(0,w)\;|\;w\in S^1\},\\ \Fix(\tau)&=\{(\pm 1,0)\},\\ \Fix(\rho \circ \tau)&=\{(\pm i,0)\}.
\end{align*} Let $N_{\rho}$, $N_\tau$, and $N_{\rho \circ \tau}$ be small equivariant tubular neighbourhoods of $\Fix(\rho)$, $\Fix(\tau)$, and $\Fix(\rho\circ \tau)$, respectively. Then we have: \begin{itemize}[leftmargin=2em] \item $N_\rho\cong D^2\times S^1$ and $\rho_{|N_\rho}=(-\id_{D^2},\id_{S^1})$, \item $N_\rho\cap K=\{(t\cdot z_i,w_i)\;|\;t\in[-1,1],\ i=0,1\}$ for some $z_0,z_1,w_0,w_1\in S^1$, \item $N_\tau\cong B^3_1\sqcup B^3_{-1}$, where $B^3_{\pm 1}$ is a small ball centered at $\pm 1$ and $\tau_{|B^3_{\pm 1}}=-\id$, \item $B_{\pm 1}^3\cap K=\{t\cdot p\;|\;t\in [-1,1]\}$ for some $p\in S^2$, \item $N_{\rho \circ \tau}\cap K=\emptyset$. \end{itemize} Let $Y=S^3\setminus \operatorname{int}(N_\rho\cup N_\tau\cup N_{\rho \circ \tau})$. Observe that $Y$ is simply a solid torus with four small balls removed and that $K\cap Y$ consists of four arcs: two of the arcs connect $\partial N_\rho$ to $\partial B^3_1$ and the other two connect $\partial N_\rho$ to $\partial B^3_{-1}$, see Figure~\ref{fig:ext_fix}. \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth]{images/ext_fix.pdf} \caption{The manifold $Y$, given by removing the fixed point sets.} \label{fig:ext_fix} \end{figure} Define $\overline{Y}=Y/V$ to be the quotient of $Y$ by the action of $V$. It is not difficult to see that $\overline{Y}$ is a non-orientable $3$-manifold with boundary, whose boundary components are given as follows: \begin{itemize}[leftmargin=2em] \item a Klein bottle, which is the image of the toric boundary component of $Y$, denoted by $A$, \item two $\mathbb{RP}^2$'s, which are the images of $\partial N_\tau$ and $\partial N_{\rho \circ \tau}$, respectively. Denote the image of $N_\tau$ by $B$. \end{itemize} In particular, one can see that $\overline{Y}\cong (\mathbb{RP}^2\times I)\natural(\mathbb{RP}^2\times I)$. Now, denote the image of $K\cap Y$ in $\overline{Y}$ by $\overline{K}$. Then $\overline{K}$ is a simple properly embedded arc in $\overline{Y}$ starting at $A$ and ending at $B$, see Figure~\ref{fig:ext_fix_quo}. \begin{figure}[ht] \centering \includegraphics[width=0.3\linewidth]{images/ext_fix_quo.pdf} \caption{The quotient manifold $\overline{Y}$. Both the upper and lower punctured disks are glued to themselves by $-\id$. In red, there is an example of an arc $\overline{K}$ going from the Klein bottle boundary component to one $\mathbb{RP}^2$ boundary component.} \label{fig:ext_fix_quo} \end{figure} Define now $$\pi_1(\overline{Y},A,B) = \{ \gamma:[0,1]\to\overline{Y} \ \vert \ \gamma(0)\in A, \ \gamma(1)\in B \} / \text{homotopy}.$$ Notice that every class in $\pi_1(\overline{Y},A,B)$ can be represented by a simple properly embedded arc. Observe that any homotopy between two properly embedded arcs in $\overline{Y}$ connecting $A$ to $B$ lifts to an equivariant homotopy in $Y$. This can be closed up to an equivariant homotopy between the corresponding Klein amphichiral knots in $S^3$ in the following way. Let $\gamma_t$ be the lift of the homotopy at time $t$. Then $\gamma_t$ is given by four proper arcs in $Y$ invariant under the action of $V$. We extend the homotopy over $N_\rho$ and $N_\tau$ following the local models described above. Observe that $\gamma_t\cap\partial N_\rho$ is given by $4$ points: $(z_0,w_0), (-z_0,w_0), (z_1,w_1), (-z_1,w_1)\in S^1\times S^1$.
We then connect the components of $\gamma_t$ by adding the arcs $(s\cdot z_0,w_0)$ and $(s\cdot z_1,w_1)$, $s\in [-1,1]$ inside $N_\rho$. Similarly, consider the intersection $\gamma_t\cap \partial N_\tau$, which is given by four points $p,-p\in \partial B_{+1}^3$ and $q,-q\in\partial B_{-1}^3$. Then the components of $\gamma_t$ are connected by adding the arcs $\{s\cdot p\}_{s\in [-1,1]}\subset B_{+1}^3$ and $\{s\cdot q\}_{s\in [-1,1]}\subset B_{-1}^3$. In this way, we obtain an equivariant homotopy of Klein amphichiral knots in $S^3$. We can sum up the discussion above in the following lemma: \begin{lem} The natural map $$ \{\text{Klein amphichiral knots}\}/\text{equivariant homotopy}\longrightarrow\pi_1(\overline{Y},A,B)$$ is a bijection. \end{lem} In order to prove the refinement of Levine's theorem given in Theorem \ref{thm:mainthm1}, the essential ingredient is the following lemma. \begin{lem}\label{lem:equiv_crossing} Every Klein amphichiral knot can be turned into the standard Klein amphichiral unknot by a finite number of $V$-equivariant crossing changes. \end{lem} \begin{proof} Let $K_0$ and $K_1$ be two Klein amphichiral knots, and let $\overline{K_0}$ and $\overline{K_1}$ be the corresponding arcs in $\overline{Y}$. Suppose that there exists a homotopy $\overline{K}_t$, $t\in[0,1]$ between $\overline{K_0}$ and $\overline{K_1}$. Then up to a small perturbation, we can suppose that $\overline{K_t}$ is a simple properly embedded arc connecting $A$ to $B$ for all $t\in[0,1]$ except for finitely many times, at which $\overline{K_t}$ has a point of double transverse self-intersection. Let $K_t$, $t\in[0,1]$ be the corresponding homotopy between $K_0$ and $K_1$. Then for all $t$, we have that $K_t$ is a Klein amphichiral knot, except at finitely many times, at which it has some points of double transverse self-intersection. Such double points either arise from the double points of $\overline{K_t}$ or they might come from the closing up procedure of the homotopy described above. Using the notation in the discussion above, for example, it might happen that $w_0=w_1$, leading to a new double point inside $N_\rho$ during the homotopy. Thus, it suffices to show that $\pi_1(\overline{Y},A,B)$ consists of a single point. For this purpose, pick two points $p\in A$ and $q\in B$, and consider the set $$ \pi_1(\overline{Y},p,q)=\{\gamma:[0,1]\to\overline{Y}\;|\;\gamma(0)=p, \ \gamma(1)=q\}/\text{homotopy}. $$ Notice that since $A$ and $B$ are connected, the natural map $$ \iota: \pi_1(\overline{Y},p,q)\to\pi_1(\overline{Y},A,B) $$ induced by the inclusion is surjective. Observe then that \begin{itemize}[leftmargin=2em] \item both $\pi_1(A,p)$ and $\pi_1(B,q)$ act on $\pi_1(\overline{Y},p,q)$ by pre-composition and post-composition, respectively, \item the image of a class under $\iota$ is not changed by the action of $\pi_1(A,p)$ and $\pi_1(B,q)$. \end{itemize} \noindent This reduces our argument to showing that the combined action of $\pi_1(A,p)$ and $\pi_1(B,q)$ is transitive on $\pi_1(\overline{Y},p,q)$, which is a consequence of the following two facts: \begin{itemize}[leftmargin=2em] \item $\pi_1(\overline{Y},p)$ acts transitively on $\pi_1(\overline{Y},p,q)$ by pre-composition, \item $\pi_1(\overline{Y})=\langle\pi_1(A),\pi_1(B)\rangle$. \end{itemize} \noindent Therefore, $\pi_1(\overline{Y},A,B)=\{*\}$, as desired.
\end{proof} \subsection{The Equivariant Q-Slice Knots} \label{sec:equivariant-slice} An \emph{equivariant $\Q$-slice knot} is a strongly invertible knot $(K, \rho_K )$ in $S^3$ that bounds a disk $D$ smoothly and properly embedded in a $\Q$-homology $4$-ball $W$ (a $4$-manifold having the $\Q$-homology of $B^4$) equipped with an orientation-preserving involution $\rho : W \to W$ such that $$ \partial D = K, \ \ \ \rho_{\vert_{\partial W = S^3}} = \rho_K, \ \ \ \text{and} \ \ \ \rho (D) = D ,$$ cf. the conditions (\ref{defn:p3}) and (\ref{defn:p4}) below. Recall that Kawauchi's famous result \cite[{\sc\S}2]{Kaw09} provided a characterization for $\Q$-slice knots, showing that every strongly negative amphichiral knot is $\Q$-slice. Now, we prove an equivariant refinement of Kawauchi's theorem by showing that every Klein amphichiral knot is equivariant $\Q$-slice. This is essentially the first half of Theorem~\ref{thm:mainthm1}. \begin{proof}[Proof of Theorem \ref{thm:mainthm1}, the $1^{st}$ half] Consider the product extensions of $\rho$ and $\tau$ on $S^3\times[0,1]$, which we will still denote by the same letters. Let $X_K$ be the $4$-manifold obtained from $S^3\times [0,1]$ by attaching a $0$-framed $2$-handle along $K\subset S^3\times\{1\}$. We can find a $\langle \rho,\tau\rangle$-invariant neighbourhood $N(K)\cong S^1\times D^2$ of $K$ such that \begin{itemize}[leftmargin=2em] \item $\rho(z,w)=(\overline{z},\overline{w})$, for $(z,w)\in S^1\times D^2$, \item $\tau(z,w)=(\overline{z},-w)$ for $(z,w)\in S^1\times D^2$. \end{itemize} Therefore, we can extend $\rho$ and $\tau$ over the $2$-handle $D^2\times D^2$ by using the obvious extensions of the formulas above. Denote the core disk of the $2$-handle by $D=D^2\times \{0\}$. Observe in particular that $\rho(D)=D$. Let $S^3_0(K)$ be the $0$-surgery of $K$. Notice that the restriction of $\tau$ to the $S^3_0(K)$ component of $\partial X_K$ is fixed-point free and orientation-reversing. Therefore, we can glue $X_K$ to itself by identifying $x\sim\tau(x)$ for all $x\in S^3_0(K)$, and obtain an oriented $4$-manifold $Z_K$ with $\partial Z_K = S^3$, namely the Kawauchi manifold. Let $D'$ be the disk obtained by gluing the product cylinder $K\times [0,1]$ with $D$. In \cite[Lemma~2.3, Theorem~1.1]{Kaw09}, Kawauchi proves that $D'$ is a slice disk for $K$ in $Z_K$, and $Z_K$ is a $\Q$-homology $4$-ball. Finally, since $\rho$ and $\tau$ commute, the action of $\rho$ on $X_K$ induces an involution $\rho_{Z_K}$ on $Z_K$ such that $\rho_{Z_K} (D') = D'$. This shows that $D'$ is actually an equivariant slice disk for $(K,\rho)$ in $Z_K$. Therefore, $(K,\rho)$ is equivariantly $\Q$-slice in the Kawauchi manifold $Z_K=Z$. \end{proof} Due to its construction, a priori the Kawauchi manifold $Z$ seems to depend on the strongly negative amphichiral knot. However, Levine \cite{Lev23} proved that $Z$ is a unique manifold in the sense that all strongly negative amphichiral knots bound smooth disks in $Z$. Our proof extends Levine's theorem to the equivariant case by showing that every Klein amphichiral knot $(K, \rho, \tau)$ bounds an equivariant disk in $(Z, \rho_Z)$, where $\rho_Z$ is an involution of $Z$ extending $\rho$. This is the remaining part of Theorem~\ref{thm:mainthm1}.
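\begin{rem} For concreteness, and only as a sanity check of the construction in the proof above, the obvious extensions of $\rho$ and $\tau$ over the $2$-handle are given by $$\rho(u,w)=(\overline{u},\overline{w}) \quad \text{and} \quad \tau(u,w)=(\overline{u},-w), \qquad (u,w)\in D^2\times D^2.$$ A direct computation gives $(\rho\circ\tau)(u,w)=(u,-\overline{w})=(\tau\circ\rho)(u,w)$, so the two extensions commute on the handle, and both preserve the core disk $D=D^2\times\{0\}$, in accordance with the equality $\rho(D)=D$ used above. \end{rem}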
\begin{proof}[Proof of Theorem \ref{thm:mainthm1}, the $2^{nd}$ half] Observe that Levine's original proof \cite{Lev23} has three main ingredients: a $5$-dimensional handlebody argument, an equivariant unknotting theorem of Boyle and Chen \cite[Proposition~3.12]{BC24}, and the equivariant isotopy extension theorem of Kankaanrinta \cite[Theorem~8.6]{Kan07}. The first and last ingredients are essentially the same for our extended argument. In this setting, the second ingredient is simply replaced by Lemma~\ref{lem:equiv_crossing}. \end{proof} \subsection{The Equivariant Q-Concordance Group} \label{sec:equivariant-concordance} A \emph{direction} on a given strongly invertible knot $(K,\rho)$ is a choice of oriented \emph{half-axis} $h$, i.e., the choice of an oriented connected component of $\mathrm{Fix} (\rho)\setminus K$. We will call a triple $(K,\rho,h)$ a \emph{directed strongly invertible knot} (a \emph{DSI knot} in short). We say that two DSI knots $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ are \emph{equivariantly isotopic} if there exists a $\varphi\in\operatorname{Diff}^+(S^3)$ such that $\varphi(K_0)=K_1$, $\varphi\circ\rho_0=\rho_1\circ\varphi$ and $\varphi(h_0)=h_1$ as oriented half-axes. Given two DSI knots $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$, we may form their \emph{equivariant connected sum} using the operation $\widetilde{\#}$ introduced by Sakuma \cite[{\sc\S}1]{Sak86}. This yields a potentially new DSI knot $( K_0 \widetilde{\#} K_1,\rho_0 \widetilde{\#} \rho_1, h_0\widetilde{\#}h_1 )$ whose oriented half-axis starts from the tail of the half-axis for $K_0$ and ends at the head of the half-axis for $K_1$, see Figure~\ref{fig:conn_sum}. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[width=0.5\linewidth]{images/connected_sum.pdf}}; \draw (4.3,2.3) node[scale=2] (a) {$=$}; \draw (4.3,4.5) node[scale=2] (b) {$\widetilde{\#}$}; \end{tikzpicture} \caption{An example of equivariant connected sum.} \label{fig:conn_sum} \end{figure} Let $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ be two DSI knots in $S^3$. We call $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ \emph{equivariant $\Q$-concordant} and denote it by $$(K_0,\rho_0,h_0) \thicksim_\Q (K_1,\rho_1,h_1)$$ if there is a pair of smooth manifolds $\left ( W, A \right ) $ satisfying the following conditions: \begin{enumerate} \item $W$ is a $\Q$-homology cobordism from $S^3$ to itself, i.e., $H_* (W; \Q ) \cong H_* (S^3 \times [0,1]; \Q )$, \item $A$ is a submanifold of $W$ with $A \cong S^1 \times [0,1]$ and $\partial A \cong -(K_0) \cup K_1$, \item \label{defn:p3} There is an involution $\rho_W: W \to W$ that extends $\rho_0$ and $\rho_1$, and has $\rho_W (A) = A$, \item \label{defn:p4} Denote by $F$ the fixed-point surface of $\rho_W$. Then the half-axes $h_0,h_1$ lie in the same boundary component of $F-A$ and their orientations induce the same orientation on $F-A$. \end{enumerate} \begin{rem} Observe that if $W$ as above is not a $\Z/2\Z$-homology cobordism then $\Fix(\rho_W)$ might not be an annulus. This implies that the equivariant $\Q$-sliceness for a directed strongly invertible knot is not equivalent to the equivariant $\Q$-sliceness of its \emph{butterfly link} (see Definition \ref{butterfly_link}) or \emph{moth link} (see \cite[Definition 5.2]{DPF23}), contrary to the non-rational case, as proved in \cite[Proposition 4.2]{BI22} and \cite[Proposition 5.4]{DPF23}.
As a consequence, several invariants obtained from these links, such as the butterfly and moth polynomials, do not automatically vanish for equivariant $\Q$-slice knots (compare with the computations in Section \ref{ssec:buttefly_moth}). \end{rem} As a natural extension of the equivariant concordance group $\CT$ defined by Sakuma \cite[{\sc\S}4]{Sak86} (see also Boyle and Issa \cite[{\sc\S}2]{BI22}), we introduce the \emph{equivariant $\Q$-concordance group} $\CTQ$ as $$\CTQ \doteq \left ( \left \{ \text{DSI knots in} \ S^3 \right \} / \thicksim_\Q , \ \widetilde{\#} \right ) .$$ \noindent We have the following main properties: \begin{itemize}[leftmargin=2em] \item The operation is induced by the equivariant connected sum $\widetilde{\#}$. \item The identity element is the equivariant $\Q$-concordance class of the unknot $(U, \rho_U , h_U)$\footnote{The unknot $U$ in $S^3$ has a unique strong inversion, see \cite[Definition~1.1, Lemma~1.2]{Sak86}.}, see Figure~\ref{fig:unknot}. \item The inverse element for $(K, \rho , h)$ is given by the axis-inverse of the mirror of $K$, i.e., $(\overline{K}, \rho , -h)$. \end{itemize} \begin{figure}[ht] \centering \includegraphics[width=0.15\linewidth]{images/unknot.pdf} \caption{The DSI unknot.} \label{fig:unknot} \end{figure} \subsection{A Construction for Equivariant Q-Slice Knots} \label{sec:constructions} In this subsection, we construct examples of equivariant $\Q$-slice knots. We now introduce the first family of examples, which is obtained by using the following doubling argument. Given an oriented knot $K$, its \emph{double} $\mathfrak{r}(K)$ is the DSI knot obtained from $K\#r(K)$ (where $r(K)$ is the \emph{reverse} of $K$, i.e., the same knot with the opposite orientation) together with the involution $\rho$ that exchanges $K$ and $r(K)$ (namely the $\pi$-rotation around the vertical axis in Figure~\ref{rK}). The direction on $\mathfrak{r}(K)$ is given as follows: the connected sum can be performed by a suitable band move along a grey band $B$ where $\Fix(\rho)\cap B$ is the half-axis $h$. Here, $h$ is oriented as the portion of $B$ lying on $K$. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.3]{images/r_K.png}}; \node[label={$K$}] at (1.2,1.7){}; \node[label={$h$}] at (4.1,1.7){}; \node[label={$r(K)$}] at (7.8,1.7){}; \end{tikzpicture} \caption{The DSI knot $\mathfrak{r}(K)$ with the solid chosen half-axis.} \label{rK} \end{figure} As proven by Boyle and Issa \cite[{\sc\S}2]{BI22}, $\mathfrak{r}$ defines an injective homomorphism $\mathfrak{r}:\C \longrightarrow \CT.$ The same argument actually shows that $\mathfrak{r}$ also induces a homomorphism $\mathfrak{r}_{\Q}:\CQ\to\CTQ$ (not necessarily injective), which fits into the following commutative diagram: \begin{center} \begin{tikzcd} \C\ar[r,"\mathfrak{r}"]\ar[d,"\psi"]&\CT\ar[d,"\Phi"]\\ \CQ\ar[r,"\mathfrak{r}_{\Q}"]&\CTQ. \end{tikzcd} \end{center} Therefore, the first family of DSI knots is trivial in the sense that these knots lie in $\Ker (\Phi)$ and are simply given by the image of $\mathrm{Ker}(\psi)$ under $\mathfrak{r}$. It is known that the algebraic structure of $\mathrm{Ker}(\psi)$ is very complicated. In particular, $\mathrm{Ker}(\psi)$ contains a $\Z^\infty \oplus (\Z/2\Z)^\infty$ subgroup due to the work of Cha \cite{Cha07} and Hom, Kang, Park, and Stoffregen \cite{HKPS22}. At the moment, the structure of $\CT$ is quite mysterious.
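Before proceeding, let us record a concrete and well-known example illustrating the first family above. The figure-eight knot $4_1$ is strongly negative amphichiral, hence $\Q$-slice by Kawauchi's theorem, but it is not slice, since its Alexander polynomial $-t+3-t^{-1}$ does not satisfy the Fox-Milnor condition. Therefore $4_1$ represents a nontrivial element of $\mathrm{Ker}(\psi)$, and its double $\mathfrak{r}(4_1)$ is nontrivial in $\CT$ (by the injectivity of $\mathfrak{r}$) while lying in $\Ker(\Phi)$, i.e., it is trivial in $\CTQ$.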
The first author proved that $\CT$ is non-abelian \cite{DP23} and, together with Framba, that $\CT$ is also non-solvable \cite{DPF23}. He also observes in \cite[Corollary~1.16]{DP23b} that the equivariant concordance group splits as the following direct sum $$\CT= \mathrm{Ker} (\mathfrak{b})\oplus\mathfrak{r}(\mathcal{C}),$$ where $\mathrm{Ker} (\mathfrak{b})$ is the subgroup of $\CT$ formed by DSI knots $K$ such that their \emph{$0$-butterfly links} $L_b^0(K)$ (see Definition \ref{butterfly_link} and \cite{BI22}) have slice components (but they are not necessarily slice as links). Now, we construct non-trivial examples of equivariant $\Q$-slice knots in $S^3$. Let $$\J = \{ n \in \mathbb{N} \ \vert \ n>1 \ \text{and} \ n \not \equiv 0 \mod 3 \} .$$ We can write $$\J = \JE \sqcup \JO = \{ n \in \J \ \vert \ n \ \text{is even} \} \sqcup \{ n \in \J \ \vert \ n \ \text{is odd} \}.$$ \begin{defn} \label{defn:turks-head} For $n \in \J$, the \emph{Turk's head knots} $J_n = Th (3,n)$ are defined as the $3$-braid closures $$J_n \doteq \reallywidehat{(\sigma_1 {\sigma_2}^{-1})^n } ,$$ where a single $3$-braid $\sigma_1 {\sigma_2}^{-1}$ is depicted as follows: \vspace{0.7 em} \begin{center} \begin{tikzpicture} \pic[ rotate=0, braid/.cd, every strand/.style={thick}, strand 1/.style={black}, strand 2/.style={black}, strand 3/.style={black}, ] {braid={s_1^{-1} s_2 }}; \end{tikzpicture} \end{center} \end{defn} \vspace{0.7 em} The Turk's head knots $J_n$ are known to be alternating, cyclically $n$-periodic, fibered, hyperbolic, prime, strongly invertible, and strongly negative amphichiral; see our recent survey paper \cite{DPS24}. Using KnotInfo \cite{knotinfo} and Knotscape \cite{knotscape}, we can identify the Turk's head knots $J_n$ with a small number of crossings. In particular, we have $$J_2 = 4_1, \ J_4 = 8_{18}, \ J_5 = 10_{123}, \ J_7 = 14_{a19470}, \ \text{and} \ J_8 = 16_{a275159} .$$ We will need the following three important properties of $J_n$. Their symmetry groups were computed by Sakuma and Weeks \cite[Proposition~I.2.5]{SW95}. The recent work of AlSukaiti and Chbili \cite[Corollary~3.5, Proposition~5.1]{AC23} provided their determinants and the roots of their Alexander polynomials. For more references, one can consult our recent survey \cite{DPS24}. \begin{enumerate} \item \label{property:SW} For a fixed value of $n$, we have $$\mathrm{Sym} (S^3 , J_n) \cong D_{2n} \CommaPunct$$ where $D_{m}\cong\Z/m\Z\rtimes\Z/2\Z$ denotes the dihedral group of $2m$ elements. \item \label{eq:determinant} Let $L_k$ denote the $k^{th}$ Lucas number.\footnote{The Lucas numbers are defined recursively as $L_0 = 2, L_1 =1$, and $L_k = L_{k-1} + L_{k-2}$ for $k \geq 2$.} Then we have $$\mathrm{det}(J_n) = \Delta_{J_n} (-1) = L_{2n} - 2 .$$ For instance, $\mathrm{det}(J_2) = L_4 - 2 = 5$ and $\mathrm{det}(J_4) = L_8 - 2 = 45$, in accordance with the identifications above. \item \label{eq:roots} The roots of the Alexander polynomial $\Delta_{J_n} (t)$ are of the form: $$z = -\frac{1}{2} \left( 2\cos \left( \frac{2k}{n} \pi \right ) -1 \pm \sqrt{ \left (2\cos \left (\frac{2k}{n} \pi \right ) -1 \right ) ^2 -4 } \right) \CommaPunct $$ with $1 \leq k \leq \lfloor n/2 \rfloor.$ \end{enumerate} In the following proposition, we explicitly determine the number of strong inversions for the Turk's head knots $J_n$. \begin{prop} \label{prop:equivalent} The Turk's head knot $J_n$ has at most two inequivalent (i.e., not conjugate in $\mathrm{Sym}^+(S^3,J_n)$) strong inversions, say $\rho_1$ and $\rho_2$.
In particular, \begin{itemize}[leftmargin=2em] \item if $n \in \JO$, then the strong inversion is unique and $J_n$ is Klein amphichiral, \item if $n\in \JE$, then $J_n$ has exactly two inequivalent strong inversions, which are conjugate by an element of $\mathrm{Sym}(S^3,J_n)$. \end{itemize} \end{prop} \begin{proof} According to \cite[Proposition 3.4]{Sak86}, $J_n$ admits at most two inequivalent strong inversions, since it is invertible, amphichiral, and hyperbolic. More precisely, Sakuma proved that the following are equivalent: \begin{itemize}[leftmargin=2em] \item $J_n$ admits a period $2$ symmetry, \item $\rho_1$ and $\rho_2$ are not equivalent. \end{itemize} Since by property \ref{property:SW} we have $\mathrm{Sym}(S^3,J_n)\cong D_{2n}$ and $n$ is odd, we know that we only have three conjugacy classes of involutions in $\mathrm{Sym}(S^3,J_n)$. By Proposition \ref{prob:+amphichiral}, these classes are exactly given by a strong inversion $\rho$, a strongly negative amphichiral involution $\tau$, and a strongly positive amphichiral involution $\delta$. In particular, $J_n$ cannot admit a period $2$ symmetry (cf. \cite[p.~332]{KS92}). In Figure~\ref{fig:klein_Jn} we can explicitly see that the maps $\tau$ and $\rho$ commute. Therefore, $\rho$ is the unique strong inversion. On the other hand, if $n$ is even, then $J_n$ admits an obvious period $2$ symmetry, which can be seen, for example, from the braid description in Definition \ref{defn:turks-head}. Therefore, $\rho_1$ and $\rho_2$ are inequivalent strong inversions. \end{proof} \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=1]{images/klein_Jn.pdf}}; \node at (2.6,5.4) (b1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (2.6,2.2) (b2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (8,5.4) (c1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (8,2.2) (c2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (2.6,3.8) (a1) {$\alpha$}; \node at (8,3.8) (a2) {$\beta$}; \end{tikzpicture} \caption{The Turk's head knot $J_n$ for $n$ odd. If $n=4k+1$ then $\alpha=\sigma_1$ and $\beta=\sigma_2^{-1}$, while for $n=4k-1$ we have $\alpha=\sigma_2$ and $\beta=\sigma_1^{-1}$. Its Klein amphichiral symmetry is represented in Figure \ref{fig:10_123_sym}.} \label{fig:klein_Jn} \end{figure} \begin{rem} We fix here the choice of direction on $J_n$ for $n$ odd. We will always consider $J_n$ as a DSI knot with the involution $\rho$ given by the $\pi$-rotation around the red axis in Figure \ref{fig:klein_Jn}, and the chosen half-axis $h$ is the bounded one in the figure, oriented from left to right. In the following, we will not recall these choices explicitly. Observe that the strongly negative amphichiral involution $\tau$ (see Figure~\ref{fig:klein_Jn}) maps the DSI knot $(J_n,\rho,h)$ to $(\overline{J_n}, \rho, -h')$, where $-h'$ is the complementary half-axis endowed with the opposite orientation. Therefore, while in general these choices would be relevant for the computations in Section \ref{sec:independence}, in this specific case changing the DSI structure changes the invariants computed there at most by a sign. \end{rem} \begin{cor} If $n \in \JO$, then the Turk's head knots $J_n$ are equivariant $\Q$-slice. \end{cor} \begin{proof} We know that $J_n$ is always strongly negative amphichiral. Since $n \in \JO$, by Proposition~\ref{prop:equivalent}, the knots $(J_n , \rho, \tau)$ are all Klein amphichiral. Then from Theorem~\ref{thm:mainthm1} we know that $(J_n,\rho)$ is equivariantly $\Q$-slice.
\end{proof} \begin{rem} Let $\mathrm{gcd} (p,q) = 1$. If $p$ and $q$ are both odd integers, then the Turk's head knots $Th(p,q)$ are strongly invertible and strongly negative amphichiral, see \cite[{\sc\S}3.4]{DPS24}. A positive answer to \cite[Conjecture~B]{DPS24} would also prove that the knots $Th(p,q)$ are all equivariant $\Q$-slice. \end{rem} \subsection{An Obstruction for Equivariant Q-Slice Knots} \label{sec:obstructions} In this subsection, we prove an equivariant rational version of the classical Fox-Milnor condition, which is a generalization of the result by Cochran, Franklin, Hedden, and Horn \cite{CFHH13}. Here, we normalize the Alexander polynomial of a knot $K$ such that $$\Delta_K (1) = 1 \quad \text{and} \quad \Delta_K (t) = \Delta_K (t^{-1}).$$ Now, we prove Theorem~\ref{thm:mainthm2}, claiming that the Alexander polynomial of an equivariant $\Q$-slice knot must be a square. \begin{proof}[Proof of Theorem~\ref{thm:mainthm2}] Let $(K,\rho_K)$ be an equivariant $\Q$-slice knot with a slice disk $D\subset Z$ in a $\Q$-homology $4$-ball $Z$, and let $\rho$ be an involution on $Z$ extending $\rho_K$ such that $\rho(D)=D$. Let $Z_D = Z \setminus N(D)$ be the equivariant slice disk exterior, where $N(D)$ is a $\rho$-invariant tubular neighborhood of $D$. Denote the inclusion of the boundary by $i:\partial Z_D\cong S^3_0(K)\to Z_D$. Since $H_1(Z_D ;\Q) \cong H_1(S^1 \times B^3 ;\Q)$, we have $H_1(Z_D ;\Z)/ \mathrm{torsion} \cong\Z$. Let $$\phi:\pi_1(Z_D)\to \Z$$ be the map induced by the projection onto homology modulo torsion. Given a space $X$ and a map $\epsilon:\pi_1(X)\to\Z=\langle t\rangle$ we denote by $H_*(X,\epsilon)$ the integral homology of $X$ twisted by $\epsilon$, which is naturally a $\Z[t^{\pm1}]$-module. By \cite[Proposition~4.6]{CFHH13}, the order of $H_1(S^3_0(K),\phi\circ i)$ is $\Delta_K(t^n)$ where $n$ is the complexity of the slice disk, i.e., the absolute value of the element represented by a meridian of $D$ in $H_1(Z_D ;\Z)/ \mathrm{torsion} \cong\Z$. Let $M$ be the kernel of $$ i_*:H_1(S^3_0(K),\phi\circ i)\to H_1(Z_D,\phi), $$ and denote its order by $f(t)$. Then, by using \cite[Proposition~4.5]{CFHH13}, we see that $\Delta_K(t^n)=f(t)f(t^{-1})$. Now, observe that $\phi\circ\rho_*=(-id)\circ\phi$: indeed, $\rho$ is orientation-preserving on $Z$, preserves $D$, and reverses the orientation of $K=\partial D$, so it sends a meridian of $D$ to an oppositely oriented meridian, and the following diagram commutes: \begin{center} \begin{tikzcd} \pi_1(Z_D) \arrow{r}{\phi} \arrow[swap]{d}{\rho_*} & \Z \arrow{d}{-id } \\ \pi_1(Z_D) \arrow{r}{\phi} & \Z \end{tikzcd} \end{center} Therefore the order of $\rho(M)$ is $f(t^{-1})$. Since the inclusion map $i$ is $\rho$-equivariant, we have that $\rho(M)=M$, hence $f(t)=f(t^{-1})$. Therefore $\Delta_K(t^n)$ is a square. In turn, this implies that $\Delta_K(t)$ is also a square. \end{proof} Using the equivariant Fox-Milnor condition in Theorem~\ref{thm:mainthm2}, we now prove that the other half of the knots $J_n$ are not trivial in $\CTQ$. \begin{prop} If $n \in \JE$, then the Turk's head knots $J_n$ are not equivariant $\Q$-slice. \end{prop} \begin{proof} It is a well-known fact that Lucas numbers satisfy the following equality: $$ (L_n)^2=L_{2n}+(-1)^n\cdot2. $$ If $n\in \JE$, then $\det(J_n)=L_{2n}-2=(L_n)^2-4$ by property (\ref{eq:determinant}) and the identity above. Since $L_n\geq3$ for every $n\in\JE$, the number $(L_n)^2-4$ lies strictly between $(L_n-1)^2$ and $(L_n)^2$, so the determinant is never a perfect square and $J_n$ is not equivariant $\Q$-slice by Theorem \ref{thm:mainthm2}. \end{proof} Generalizing a Kirby calculus argument of Fintushel and Stern \cite{FS84}, Cha exhibits in \cite[Theorem~4.14]{Cha07} a family of infinitely many $\Q$-slice knots $K_n$ in $S^3$, depicted in Figure~\ref{fig:cha_knots}.
Since the knots $K_n$ are clearly strongly negative amphichiral (see \cite[Figure~5]{Cha07}), Cha's result can also be recovered from Kawauchi's characterization. Note that $K_1$ is the figure-eight knot. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.6]{images/cha_knots.pdf}}; \node[scale=1.7] at (1.9,2) (b4){$-n$}; \node[scale=1.7] at (5.8,1.99) (b37){$n$}; \end{tikzpicture} \caption{The knots $K_n$. The square box with the integer $n$ (resp. $-n$) represents $n$ right-handed (resp. left-handed) full twists. The strong inversion is the $\pi$-rotation around the dashed axis.} \label{fig:cha_knots} \end{figure} Now, we are ready to prove Theorem \ref{thm: mainthm4}, showing that Cha's knots $K_n$ are also not equivariant $\Q$-slice. \begin{proof}[Proof of Theorem \ref{thm: mainthm4}] Assume that $K_n$ is equivariant $\Q$-slice for some $n \geq 1$. By \cite[Theorem~4.14]{Cha07}, we know that $$\Delta_{K_n} (t) = -n^2 t^{-1} + 2n^2 + 1 - n^2 t .$$ Then we get $$\mathrm{det} (K_n) = \Delta_{K_n} (-1) = 4n^2 + 1 = (2n)^2 + 1.$$ Then, by Theorem \ref{thm:mainthm2}, the determinant must be a perfect square; since $(2n)^2+1$ lies strictly between $(2n)^2$ and $(2n+1)^2$, this is a contradiction. Therefore, the knots $K_n$ are not equivariant $\Q$-slice. Let $H$ be the subgroup of $\widetilde{\C}_\Q$ spanned by $K_n$ for $n\geq1$ (fix any choice of strong inversion and direction on $K_n$). Let $K=\widetilde{\#}^{a_1}K_{n_1}\widetilde{\#}\dots\widetilde{\#}^{a_l}K_{n_l}$, where the equivariant connected sum is taken with respect to any ordering of the knots. Since the polynomials $\Delta_{K_n}(t)$ are quadratic, it is not difficult to check that they are pairwise coprime. Therefore, $\Delta_K(t)$ is a square if and only if $a_i\equiv0\pmod{2}$ for $i=1,\dots,l$. Hence, the group $H$ surjects onto $(\Z/2\Z)^\infty$. \end{proof} \begin{rem} \label{rem:algebraic concordance} One can easily check by using the so-called \emph{equivariant signature jumps homomorphism} $\widetilde{J}_\lambda:\widetilde{\C}\to\Z$ introduced in \cite[Definition 6.1]{DP23b} and by applying \cite[Theorem 6.9]{DP23b} that the knots $K_n$ span a subgroup of $\widetilde{\C}$ which surjects onto $\Z^\infty$. As the classical Levine-Tristram signatures provide invariants of $\Q$-concordance (see \cite{CK02}), we expect the maps $\widetilde{J}_\lambda$ to factor through $\widetilde{\C}_\Q$. This would prove that the subgroup $H\subset\widetilde{\C}_\Q$ described in the proof of Theorem~\ref{thm: mainthm4} actually surjects onto $\Z^\infty$. Compare with Problem~\ref{prob:algebraic concordance}. \end{rem} \section{Proof of Theorem \ref{thm: mainthm3}} \label{sec:independence} In this section, we recall some preliminary notions and we prove the intermediate results needed in order to prove Theorem \ref{thm: mainthm3}. \subsection{Weighted Graphs, Spanning Trees and the Gordon-Litherland Form} In this subsection, we recall some useful results about the Gordon-Litherland form and graph theory. \label{sec:graphs} \begin{defn} A \emph{weighted graph} $\Gamma=(V,E)$ is a simple graph with vertex set $V$ and edge set $E$, together with the additional data of a \emph{weight} $\lambda_{e}=\lambda_{ij}=\lambda_{ji}\in\R$ for each edge $e\in E$ connecting $i,j\in V$. If two vertices are not connected by an edge, the weight is understood to be zero.
We associate a \emph{Laplacian matrix} $\L(\Gamma)$ with each weighted graph by letting $$ \L(\Gamma)_{i,j}=\begin{cases} -\lambda_{ij}\quad\text{if}\quad i\neq j\\ \sum_{k\neq i}\lambda_{ik}\quad\text{if}\quad i=j. \end{cases} $$ In the following, we will denote by $\L(\Gamma;i)$ the square matrix obtained from $\L(\Gamma)$ by removing the $i$-th column and row. \end{defn} \begin{defn} Let $\Gamma=(V,E)$ be a weighted graph. A \emph{spanning tree} of $\Gamma$ is a subgraph $T=(V_T,E_T)$ of $\Gamma$ such that \begin{itemize} \item $T$ is a tree, \item the vertex set of $T$ is equal to $V$. \end{itemize} We denote the \emph{weight} of $T$ by $$ w(T)=\prod_{e\in E_T}\lambda_e. $$ Finally, we define the \emph{(weighted) number of spanning trees} of $\Gamma$ as $$ \T(\Gamma)=\sum_{T\subset\Gamma\;\text{spanning tree}}w(T). $$ \end{defn} \begin{thm}\cite[Theorem VI.29]{Tut01}\label{thm:matrix_tree} Let $\Gamma=(V,E)$ be a weighted graph. Then for every $i\in V$, we have $$ \det(\L(\Gamma;i))=\T(\Gamma). $$ \end{thm} \begin{thm}\cite[Theorem 6.1.1 (Gershgorin's Theorem)]{HJ85}\label{thm:gershgorin} Let $A=(a_{i,j})$ be an $n\times n$ complex matrix, and set $R_i(A)=\sum_{j\neq i}|a_{i,j}|$. Then the eigenvalues of $A$ are contained in the following set: $$ \bigcup_{i}\{z\in\mathbb{C}\;|\;|z-a_{i,i}|\leq R_i(A)\}. $$ \end{thm} \begin{defn}\label{def:dom_diag} Let $A=(a_{i,j})$ be as above. We say that $A$ is \emph{dominant diagonal} if for every $1\leq i\leq n$ we have $$ |a_{i,i}|\geq R_i(A). $$ Moreover, if there exists an index $i$ such that $|a_{i,i}|>R_i(A)$, then we say that $A$ is \emph{strongly dominant diagonal}. \end{defn} \begin{cor}\label{cor:pos_def} If $A=(a_{i,j})$ is a symmetric, strongly dominant diagonal matrix with positive entries on the diagonal, then $A$ is positive definite. \end{cor} \begin{exmp} Let $\Gamma=(V,E)$ be a connected weighted graph such that $\lambda_e>0$ for every $e\in E$. Then for every $i\in V$ the matrix $\L(\Gamma;i)$ is strongly dominant diagonal and all the diagonal entries are positive, hence $\L(\Gamma;i)$ is positive definite. \end{exmp} \begin{defn} Let $\Gamma=(V,E)$ be a weighted graph. Given an edge $e\in E$ we denote by $\Gamma\setminus e=(V,E\setminus\{e\})$ the weighted graph obtained from $\Gamma$ by \emph{edge deletion} of $e$. Let $i,j$ be the endpoints of $e$. We define $\Gamma/e=(\overline{V},\overline{E})$ as the graph obtained from $\Gamma$ by \emph{edge contraction} of $e$, where \begin{itemize} \item $\overline{V}=V/{i\sim j}$, \item if an edge $e'\in E$ has endpoints $h,k\in V\setminus\{i,j\}$, then $e'\in\overline{E}$ with the same label, \item for every $h\in V\setminus\{i,j\}$ there is an edge $e'\in\overline{E}$ joining $[i\sim j]$ to $h$ of weight $\lambda_{e'}=\lambda_{ih}+\lambda_{jh}$. \end{itemize} \end{defn} \begin{thm}[\cite{tutte2004graph}]\label{thm:DCT} Let $\Gamma=(V,E)$ be a weighted graph. Then for every $e\in E$, we have $$ \T(\Gamma)=\T(\Gamma\setminus e)+\lambda_e\cdot\T(\Gamma/e). $$ \end{thm} \subsubsection{Gordon-Litherland Form}\label{sec:GL_form} Let $L\subset S^3$ be a link and let $D\subset S^2$ be a connected diagram for $L$. Recall that by coloring $S^2\setminus D$ in a checkerboard fashion, we determine two spanning surfaces for $L$. Denote by $F$ the surface given by the black coloring. Then, we can describe the \emph{Gordon-Litherland form} conveniently in terms of a weighted graph $\Gamma=(V,E)$ associated with $F$, as follows. See \cite{GL78} for more details. Let $V$ be the set of white regions of $D$.
Given two white regions $R_1,R_2$, we have an edge $e$ connecting them if they share at least one crossing of $D$. The label of $e$ is given by the sum, over all the crossings of $D$ shared by $R_1$ and $R_2$, of $+1$ for each right-handed half-twist and $-1$ for each left-handed half-twist (see Figure \ref{fig:goeritz_crossings}). \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.7]{images/goeritz_crossings.pdf}}; \node at (1.2,-0.5) (b1){$+1$}; \node at (0.5,1.5) (b3){$R_1$}; \node at (1.7,1.5) (b4){$R_2$}; \node at (5.3,1.5) (b37){$R_1$}; \node at (6.5,1.5) (b43){$R_2$}; \node at (6,-0.5) (b2){$-1$}; \end{tikzpicture} \caption{Sign convention for the crossings.} \label{fig:goeritz_crossings} \end{figure} Then, for any $R\in V$ the matrix $\L(\Gamma;R)$ represents the Gordon-Litherland form of $F$. In particular, the determinant of the link $L$ is given by the absolute value of $\det(\L(\Gamma;R))=\T(\Gamma)$. \subsection{Butterfly and Moth Links}\label{ssec:buttefly_moth} We are now going to recall the definition of the \emph{$n$-butterfly link}, defined by the first author in \cite{DP23} as a generalization of the \emph{butterfly link} introduced by Boyle and Issa in \cite{BI22}. \begin{defn}\label{butterfly_link} Let $(K,\rho,h)$ be a DSI knot. Take a $\rho$-invariant band $B$, containing the half-axis $h$, which attaches to $K$ at the two fixed points. Performing a band move on $K$ along $B$ produces a $2$-component link, which has a natural semi-orientation induced by the unique semi-orientation on $K$. The linking number between the components of the link depends on the number of twists of the band $B$. Then the \emph{$n$-butterfly link} $L_b^n(K)$ is the $2$-component $2$-periodic link (i.e., the involution $\rho$ exchanges its components) obtained from such a band move on $K$, so that the linking number between its components is $n$. \end{defn} \begin{rem} In order to avoid confusion, we want to remark that the definition of $L_b^n(K)$ given in Definition \ref{butterfly_link} above actually coincides with the definition of $\widehat{L}_b^{-n}(K)$ given in \cite[Definition 1.9]{DP23}. We choose to use a different notation to improve readability. \end{rem} In \cite{DPF23b}, Framba and the first author associate with a DSI knot the so-called \emph{moth link}, which we are now going to recall (see \cite[Definition 5.2]{DPF23b} for details). \begin{defn}\label{def:moth_link} Let $(K,\rho,h)$ be a DSI knot and let $B$ be the invariant band giving the $0$-butterfly link $L_b^0(K)$, as in Definition \ref{butterfly_link}. Observe that we can undo the band move on $B$ by attaching another invariant band $B^*$ to $L_b^0(K)$. Then we define the \emph{moth link} $L_m(K)$ as the \emph{strong fusion} (see \cite{Kai92}) of $L_b^0(K)$ along the band $B^*$. \end{defn} Now, the \emph{moth polynomial} of $(K,\rho,h)$ is defined as the Kojima-Yamasaki eta-function (see \cite{KY79}) of the moth link of $K$. \begin{prop}\cite[Proposition 5.6]{DPF23b} The moth polynomial induces a group homomorphism \begin{align*} \eta_m : \ & \CT \to \Q(t) \\ & K \mapsto \eta(L_m(K))(t). \end{align*} \end{prop} \begin{prop}\cite[Proposition 5.7]{DPF23b}\label{prop:eta_conway} The moth polynomial of a DSI knot $K$ can be computed by the following formula: $$ \eta_m(K)(t)=\frac{\nabla_{L_b^0(K)}(z)}{z\nabla_K(z)}, $$ where $\nabla_L(z)$ is the Conway polynomial of a (semi)-oriented link $L$ and $z=i(2-t-t^{-1})^{1/2}$.
\end{prop} \begin{rem} In the following, we will regard the moth polynomial as a group homomorphism $$ \eta_m: \CT \to \Q(z) $$ defined by the formula in Proposition \ref{prop:eta_conway}. \end{rem} \begin{lem}\label{lemma:skein_conway} Let $K$ be a DSI knot. Then for any $p,q\in\Z$ $$ \nabla_{K}(z)|\nabla_{L_b^p(K)}(z)\iff\nabla_{K}(z)|\nabla_{L_b^q(K)}(z).$$ \end{lem} \begin{proof} Let $p\in\Z$ be any integer. It is sufficient to prove that $$\nabla_{K}(z)|\nabla_{L_b^p(K)}(z)\iff\nabla_{K}(z)|\nabla_{L_b^{p+1}(K)}(z) .$$ In order to do so, we apply the skein relation for the Conway polynomial as indicated in Figure \ref{fig:skein_butterfly} to obtain $$ \nabla_{L_b^{p+1}(K)}(z)=\nabla_{L_b^p(K)}(z)+z\nabla_K(z). $$ \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.5]{images/skein_butterfly.pdf}}; \node[scale=1.7] at (1.6,1) (b4){$p$}; \node[scale=1.7] at (5.7,1) (b37){$p$}; \node[scale=1.7] at (10.1,1) (b7){$p$}; \node[scale=1.3] at (1.6,-0.5) (a){$L^{p+1}_b(K)$}; \node[scale=1.3] at (5.7,-0.5) (b){$L^{p}_b(K)$}; \node[scale=1.3] at (10.1,-0.5) (c){$K$}; \end{tikzpicture} \caption{The three terms appearing in the skein relation. The box denotes $p$ full twists.} \label{fig:skein_butterfly} \end{figure} \end{proof} \begin{cor}\label{cor:eta_determinant} Let $K$ be a DSI knot and let $\eta_m(K)(z)=f(z)/g(z)$, where $f(z),g(z)\in\Z[z]$ are coprime polynomials. Suppose that for some $p\in\Z$ we have that $\det(K)$ does not divide $\det(L_b^p(K))$. Then $\deg g(z)>0$ and $g(z)|\nabla_K(z)$. \end{cor} \begin{proof} Recall that for a link $L$ we have that $\det(L)=|\nabla_L(-2i)|$. Since $\det(K)$ does not divide $\det(L_b^p(K))$, it follows that $\nabla_K(z)$ does not divide $\nabla_{L_b^{p}(K)}(z)$. Hence, by Lemma \ref{lemma:skein_conway}, $\nabla_K(z)$ does not divide $\nabla_{L_b^{0}(K)}(z)$ either. Observe that $z|\nabla_{L_b^0(K)}(z)$, since $L_b^0(K)$ is a link with more than one component. Writing $\nabla_{L_b^0(K)}(z)=z\cdot h(z)$, we get $\eta_m(K)(z)=h(z)/\nabla_K(z)$; since $\nabla_K(0)=1$, the polynomial $\nabla_K(z)$ does not divide $h(z)$, so after reducing the fraction the denominator $g(z)$ has positive degree and divides $\nabla_K(z)$. \end{proof} We are now going to use the moth polynomial to prove the main result of this section. \begin{rem} In \cite{Sak86} Sakuma introduced an equivariant concordance invariant $\eta_{(K,\rho)}$ obtained by taking the Kojima-Yamasaki eta-function of a certain link associated with $(K,\rho)$. However, thanks to the symmetries of $J_n$ and \cite[Proposition 3.4]{Sak86}, we know that Sakuma's eta-polynomial always vanishes for $J_n$, $n\in\JO$. \end{rem} \begin{lem}\label{lem:det_int} Let $n \in \JO $. Then there exists $p\in\Z$ such that $$ \frac{\det(L_b^p(J_n))}{\det(J_n)}\text{ is not an integer}. $$ \end{lem} \begin{proof} Let $F_n$ be the spanning surface for $J_n$ depicted in Figure \ref{fig:spanning_Jn}, where $n=2k+1$. \vspace{1em} \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.6]{images/spanning_surface_Jn.pdf}}; \node at (1,7) (b1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (1,2.6) (b2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (11,6.3) (c1){$\sigma_1\sigma_2^{-1}=$}; \node at (11,3.2) (c2){$\sigma_2^{-1}\sigma_1=$}; \node at (4,5) (h){$h$}; \end{tikzpicture} \caption{The spanning surface $F_n$ for $J_n$.} \label{fig:spanning_Jn} \end{figure} Observe that by cutting $F_n$ along the half-axis $h$, we get a spanning surface $\overline{F_n}$ for the $p$-butterfly link of $J_n$ for some $p\in\Z$.
We are now going to show that \begin{equation}\label{eq:det_ineq} 2\det(J_n)<\det(L^p_b(J_n))<4\det(J_n), \end{equation} which is sufficient, since $\det(J_n)$ is odd while $\det(L_b^p(J_n))$ is even, so it cannot be that $$\det(L_b^p(J_n))=3\det(J_n).$$ Let $\Gamma_n$ (see Figure \ref{fig:goeritz_graph}) be the graph associated with $F_n$ (see Section \ref{sec:GL_form}). \begin{figure}[ht] \centering \begin{tikzpicture}[thick,main/.style = {draw, circle},minimum width =8mm] \node[main] at (0,0) (b){$b$}; \node[main] at (0,-2) (a){$a$}; \node[main] at (0,2) (c){$c$}; \node[main] at (2,0) (v1){$v_1$}; \node[main] at (4,0) (v2){$v_2$}; \node[main] at (7,0) (vk){$v_k$}; \node[main] at (-2,0) (w1){$w_1$}; \node[main] at (-4,0) (w2){$w_2$}; \node[main] at (-7,0) (wk){$w_k$}; \draw[-] (a) -- (b); \draw[-] (b) -- (v1); \draw[-] (v1) -- (v2); \draw[-] (v2) -- (5,0); \draw[dotted] (5,0) -- (6,0); \draw (6,0) -- (vk); \draw[-] (b) -- (w1); \draw[-] (w1) -- (w2); \draw[-] (a) -- (v1); \draw[-] (a) -- (v2); \draw[-] (a) -- (w1); \draw[-] (a) -- (w2); \draw[-] (w2) -- (-5,0); \draw[dotted] (-5,0) -- (-6,0); \draw (-6,0) -- (wk); \draw (vk) -- (c) -- (wk); \draw[-] (a) -- node[midway, below left] {$2$} (wk) ; \draw[-] (a) -- node[pos=0.7, below right] {$2$} (vk) ; \draw (a) .. controls +(14,1) and +(7,0) .. (c) node[pos=0.8, above right] {$-1$}; \end{tikzpicture} \caption{The graph associated with $F_n$.} \label{fig:goeritz_graph} \end{figure} The corresponding graph $\overline{\Gamma_n}$ for $\overline{F_n}$ is easily obtained from $\Gamma_n$ by identifying the vertices $b$ and $c$. Recall that from Theorem \ref{thm:matrix_tree} and the discussion in Section \ref{sec:GL_form}, we have \begin{equation} \label{eq:det_eq} \det(J_n)=\T(\Gamma_n)\quad\text{and}\quad\det(L_b^p(J_n))=\T(\overline{\Gamma_n}). \end{equation} Denote by $\Gamma_n(x)$ the graph obtained by adding an edge $e$ with label $x$ between $b$ and $c$ to $\Gamma_n$. Applying Theorem \ref{thm:DCT} to the edge $e$ of $\Gamma_n(x)$, we get \begin{equation}\label{eq:gamma_nx} \T(\Gamma_n(x))=\T(\Gamma_n)+x\T(\overline{\Gamma_n}). \end{equation} Using (\ref{eq:det_eq}) and (\ref{eq:gamma_nx}), we can see that the inequalities (\ref{eq:det_ineq}) are equivalent to showing that $$ \T(\Gamma_n(-1/4))>0\quad\text{and}\quad\T(\Gamma_n(-1/2))<0. $$ Observe now that the matrix $\L(\Gamma_n(-1/4);a)$ (see Section \ref{sec:graphs}) is positive definite: multiplying by $3$ both the row and column corresponding to the vertex $c$ we get a dominant diagonal matrix with positive diagonal entries, which has all positive eigenvalues by Gershgorin's Theorem. Hence $\T(\Gamma_n(-1/4))=\det(\L(\Gamma_n(-1/4);a))>0$. On the other hand, it is not difficult to see that $\L(\Gamma_n(-1/2);a)$ has inertia $(n,1,0)$, i.e., it is nonsingular and it has $n$ positive eigenvalues and $1$ negative eigenvalue. Removing the column and row corresponding to $c$ we get a positive definite matrix by Gershgorin's Theorem; hence, by eigenvalue interlacing, $\L(\Gamma_n(-1/2);a)$ has at least $n$ positive eigenvalues. However, $\L(\Gamma_n(-1/2);a)$ has at least (and hence exactly) one negative eigenvalue: the restriction to the subspace spanned by the vertices $c,b,v_k,w_k$ is given by $$ \begin{pmatrix} 1/2&1/2&-1&-1\\ 1/2&5/2&0&0\\ -1&0&4&0\\ -1&0&0&4 \end{pmatrix} $$ which has negative determinant. In particular, $\T(\Gamma_n(-1/2))=\det(\L(\Gamma_n(-1/2);a))<0$.
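For completeness, here is a sanity check of the last step. Writing the displayed matrix in block form $\begin{pmatrix} B & C\\ C^{T} & D\end{pmatrix}$ with $B=\begin{pmatrix} 1/2&1/2\\ 1/2&5/2\end{pmatrix}$, $C=\begin{pmatrix} -1&-1\\ 0&0\end{pmatrix}$, and $D=\operatorname{diag}(4,4)$, its determinant equals $$ \det(D)\cdot\det\left(B-CD^{-1}C^{T}\right)=16\cdot\det\begin{pmatrix} 0&1/2\\ 1/2&5/2\end{pmatrix}=-4<0. $$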
\end{proof} \begin{prop}\label{prop:lin_indep} Let $\mathcal{F} \subset \JO$ be an infinite family such that if $m,n\in\mathcal{F}$ and $m\neq n$, then $m$ and $n$ are coprime. Let $J_\mathcal{F}$ be the subgroup of $\CT$ generated by $\{J_n\;|\;n\in\mathcal{F}\}$. Then $$ J^{ab}_\mathcal{F} = J_\mathcal{F}/\left[J_\mathcal{F},J_\mathcal{F}\right]\cong \Z^\infty. $$ \end{prop} \begin{proof} We prove that $\{\eta_m(J_n)(z)\;|\;n\in\mathcal{F}\}$ are $\Z$-linearly independent in $\Q(z)$. If $\operatorname{gcd} (p,q)=1$, by property (\ref{eq:roots}), the sets of roots of the Alexander polynomials of $J_p$ and $J_q$ do not intersect, hence $\Delta_{J_p}(t)$ and $\Delta_{J_q}(t)$ are coprime polynomials. This in turn implies that the Conway polynomials $\nabla_{J_p}(z)$ and $\nabla_{J_q}(z)$ are coprime. Then the linear independence follows from Lemma \ref{lem:det_int} and Corollary \ref{cor:eta_determinant}: for every $n\in\mathcal{F}$, the moth polynomial $\eta_m(J_n)(z)$ has a denominator of positive degree dividing $\nabla_{J_n}(z)$, and these denominators are pairwise coprime. \end{proof} \subsection{String Links and Milnor Invariants} In \cite{DPF23} Framba and the first author used the close relation between strongly invertible knots and \emph{string links} to prove that the equivariant concordance group $\CT$ is not solvable. In particular, they considered a homomorphism $$ \widetilde{\C}\xrightarrow{\varphi\circ\pi}\C(2), $$ where $\C(2)$ is the \emph{concordance group of string links on 2 strings}, introduced in \cite{LeD88} (see \cite[Section 3]{DPF23} for details), and they proved that $\widetilde{\C}$ is not solvable by using Milnor invariants for string links (see \cite{HL98}). We now use a similar approach in order to prove that two given knots in $\widetilde{\C}$ do not commute: we determine their image in $\C(2)$ and then we compute the Milnor invariants of their commutator to show that it is nontrivial. For details, one can consult \cite{DPF23}. \begin{lem}\label{lemma:commutator} The commutator between $J_5$ and $J_7$ is not equivariantly slice. \end{lem} \begin{proof} Following the procedure described in \cite[Remark 4.5]{DPF23}, we determine the image of $[J_5,J_7]=J_5\widetilde{\#}J_7\widetilde{\#}J_5^{-1}\widetilde{\#}J_7^{-1}$ in $\C(2)$, depicted in Figure \ref{fig:commutator}. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.5]{images/commutator_5_7.pdf}}; \node at (1.2,4) (b3){$J_5$}; \node at (3.5,4) (b4){$J_7$}; \node at (5.7,4) (b37){$J_5^{-1}$}; \node at (7.8,4) (b43){$J_7^{-1}$}; \end{tikzpicture} \vspace{3mm} \raggedright \begin{itemize} \color{red} \item[\texttt{l1: }]\texttt{30 26 51 43 65 61 84 76} \color{black} \item[\texttt{l2: }] \texttt{30 34 35 4 24 4 22 27 51 47 46 7 56 53 37 7 39 42 64 69 11 67 11 56 57 61 85 88 16 90 74 71 16 81 80 76} \end{itemize} \caption{The $2$-string link representing $\varphi\circ\pi([J_5,J_7])$.} \label{fig:commutator} \end{figure} Using the computer program \texttt{stringcmp} \cite{TKS13}, we find that $\varphi\circ\pi([J_5,J_7])$ has nontrivial Milnor invariants. The first nontrivial invariant is in degree 6. In Figure \ref{fig:commutator}, we also report the input data needed to run \texttt{stringcmp}, which encodes the longitudes of the two components of the string link. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: mainthm3}] Let $\mathcal{J}$ be the subgroup of $\widetilde{\C}$ generated by $\{J_p\;|\;p\geq5\text{ prime}\}$. By Proposition \ref{prop:lin_indep} (applied to the family $\mathcal{F}=\{p\geq5\ \text{prime}\}\subset\JO$, whose elements are pairwise coprime), we know that $\mathcal{J}^{ab}\cong\Z^\infty$.
Moreover, it is spanned by negative amphichiral knots, therefore $\mathfrak{f}(\mathcal{J})\subset\C$ is $2$-torsion. Finally, by Lemma \ref{lemma:commutator}, we know that $\mathcal{J}$ is not abelian. \end{proof} \bibliography{references} \bibliographystyle{amsalpha} \end{document} \section{Equivariant Q-Sliceness and Q-Concordance} \label{sec: equivariant} \subsection{Symmetries of Knots and Klein Amphichirality} \label{sec:symmetry} Following Kawauchi's book \cite[{\sc\S}10]{Kaw90}, we consider the following important symmetries of knots. Furthermore, we use the resolution of the Smith conjecture \cite{W69, MB84} to identify the fixed point set of a given involution, denoted by $\mathrm{Fix} (\cdot)$. We also set the notation $\operatorname{Diff}(\cdot)$ (resp. $\operatorname{Diff}^+(\cdot)$) for the group of diffeomorphisms (resp. orientation-preserving diffeomorphisms) of a manifold, and $\operatorname{Diff}^-(\cdot) \doteq \operatorname{Diff}(\cdot) \setminus \operatorname{Diff}^+(\cdot)$, i.e., the set of orientation-reversing diffeomorphisms of a manifold. \begin{defn} A knot $K$ in $S^3$ is said to be: \begin{itemize}[leftmargin=2em] \item \emph{invertible} if there is a map $\rho \in \operatorname{Diff}^+(S^3)$ such that $\rho (K) = -K$. If $\rho$ is further an involution, then $(K, \rho)$ is called \emph{strongly invertible}. In this case, we have $\mathrm{Fix}(\rho) = S^1$ and $\mathrm{Fix}(\rho) \cap K = S^0$. Moreoever, the knots $(K, \rho)$ and $(K', \rho')$ are called \emph{equivalent} if there is a map $f \in \operatorname{Diff}^+(S^3) $ such that $f(K) = K'$ and $f \circ \rho \circ f^{-1} = \rho^{-1}.$ \item \emph{negative amphichiral} if there is a map $\tau \in \operatorname{Diff}^-(S^3)$ such that $\tau (K) = -K$. If $\tau$ is further an involution, $(K, \tau)$ is called \emph{strongly negative amphichiral}. In this case, we have either $\mathrm{Fix}(\tau) = S^0$ or $\mathrm{Fix}(\tau)=S^2$. \item \emph{positive amphichiral} if there is a map $\delta \in \operatorname{Diff}^-(S^3)$ such that $\delta(K) = K$. If $\delta$ is further an involution, then $(K, \delta)$ is called \emph{strongly positive amphichiral}. In this case, we have $\mathrm{Fix}(\tau) = S^0$. \item \emph{$n$-periodic} if there is a map $\theta \in \operatorname{Diff}^+(S^3)$ such that $\theta (K) = K$, $\Fix(\theta)\cap K=\emptyset$ and $\theta$ is period of $n$, i.e., $n$ is the minimal number so that $\theta^n = \mathrm{id} \in \operatorname{Diff}^+(S^3) $. If $\mathrm{Fix}(\theta) = S^1$ (resp. $\Fix(\theta)=\emptyset$), then we say that $(K,\theta)$ is \emph{cyclically periodic} (resp. \emph{freely periodic}). \end{itemize} \end{defn} \begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{images/10_123_sym.pdf} \caption{From left to right: strongly positive amphichiral, strongly negative amphichiral and strongly invertible symmetries on $10_{123}$. In the first two cases, the involution is given by the $\pi$-rotation around the blue dot composed with the reflection along the plane of the diagram. The third symmetry is given by the $\pi$-rotation around the red axis.} \label{fig:Th(3,5)} \end{figure} In order to compare the symmetries of knots in our new context, we introduce the following crucial notion. 
\begin{defn} \label{defn:klein} A knot $K$ in $S^3$ is said to be \emph{Klein amphichiral} if there exist two involutions $\rho,\tau: S^3 \to S^3$ such that \begin{itemize}[leftmargin=2em] \item $(K,\rho)$ is strongly invertible, \item $(K,\tau)$ is strongly negative amphichiral, \item $\rho \circ \tau = \tau \circ \rho$. \end{itemize} \end{defn} The \emph{symmetry group} of a knot $K$ in $S^3$, which is denoted by $\mathrm{Sym} (S^3, K)$, is defined to be the mapping class group of the knot exterior $S^3 \setminus \nu(K)$, see \cite[{\sc\S}10.6]{Kaw90}. Denote by $\mathrm{Sym}^+(S^3,K)$ the subgroup consisting of orientation-preserving maps. Observe that since the maps $\tau$ and $\rho$ of a Klein amphichiral knot commute, they together generate the \emph{Klein four group} $$ V = \Z/2\Z \times \Z/2\Z ,$$ hence the composition map $\tau \circ \rho$ is a strongly positive amphichiral involution for $K$. See Figure~\ref{fig:10_123_sym} for an example of a Klein amphichiral knot. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{images/klein_J5.pdf} \caption{Klein amphichiral symmetry on $10_{123}$: $\rho$ is given by a $\pi$-rotation around the red axis, while $\tau$ is the point reflection around the two blue points.} \label{fig:10_123_sym} \end{figure} Notice that there exist two distinct types of Klein amphichiral symmetries, distinguished by the fixed point set of $\tau$: \begin{enumerate} \item $\Fix(\tau)\cong S^2$, which forces $K=J\widetilde{\#}J^{-1}$ for some DSI knot $J$, \item $\Fix(\tau)\cong S^0$. \end{enumerate} In the first case, $K$ is clearly equivariant slice, which we will regard as the trivial case. Therefore, we will always assume that the symmetry falls in the second case. \begin{rem} The recent work of Boyle, Rouse, and Williams \cite{BRW23} provided a classification result for symmetries of knots in $S^3$. In their terminology, a Klein amphichiral knot corresponds to a $D_2$-symmetric knot of type SNASI-(1). \end{rem} A knot $K$ is called \emph{hyperbolic} if the knot complement $S^3 \setminus \nu(K)$ admits a complete metric of constant curvature $-1$ and finite volume. Now, we recall the following proposition relating the symmetries of hyperbolic knots with their symmetry groups. \begin{prop} \label{prop:equivariant} Let $K$ be a hyperbolic, invertible, and negative amphichiral knot in $S^3$. Suppose that $\mathrm{Sym} (S^3, K) = D_{2m}$ with $m$ odd, where $D_n$ is the dihedral group with $2n$ elements. Then $K$ admits a unique strongly invertible involution $\rho$ (up to \emph{equivalence}, see {\sc\S}\ref{sec:constructions}) and a strongly negative amphichiral involution $\tau$ such that $(K,\rho,\tau)$ is Klein amphichiral. \end{prop} \begin{proof} Let $K$ be a hyperbolic, invertible and negative amphcheiral knot in $S^3$. Since $K$ is a hyperbolic knot, by the work of Kawauchi \cite[Lemma~1]{Kaw79}, we have a pair of strongly negative amphichiral and strongly invertible involutions $\tau, \rho \in \mathrm{Sym} (S^3, K)$ for the knot $K$. As $K$ is both strongly negative amphichiral and strongly invertible, we know that $\mathrm{Sym} (S^3, K) = D_{2m}$ for some $m$, due to the result of Kodama and Sakuma \cite[Lemma~1.1]{KS92}. Now, assume that $\mathrm{Sym} (S^3, K) = D_{2m}=\langle s,t\;|\;t^{2m}=s^2=1, sts=t^{-1}\rangle$ with $m$ odd. It is a well-known fact that the involutions of $D_{2m}$ split into three conjugacy classes, namely $\{t^m\}$, $\{st^{2i}\}_{0\leq i< m}$ and $\{st^{2i+1}\}_{0\leq i<m}$. 
Since $\tau$ and $\rho$ are, respectively, orientation reversing and preserving, they are not conjugate. If one of them corresponds to $t^m$, which is central, then they commute. Otherwise, they lie respectively in $\{st^{2i}\}_{0\leq i\leq m}$ and $\{st^{2i+1}\}_{0\leq i\leq m}$ (or vice versa). By changing $\rho$ and $\tau$ with some conjugates, we can suppose $\tau=st$ and $\rho=st^{2i}$ so that $2i\equiv 1\mod{m}$ (which exists since $m$ is odd). It is now easy to check that $\rho \circ \tau = \tau \circ \rho = t^m$, where $t^m$ is a strongly positive amphichiral involution. Therefore, $(K,\rho,\tau)$ is Klein amphichiral. \end{proof} Now we fix a standard model for the Klein amphichiral symmetry. To do so, we can think of $S^3=\{(z,w)\in\mathbb{C}^2\;|\;|z|^2+|w|^2=1\}$. Now consider $\rho$ and $\tau$ to be the following involutions \begin{align*} \rho: S^3 \to S^3, \quad (z,w)\mapsto(-z,w), \\ \tau:S^3 \to S^3, \quad (z,w)\mapsto(\overline{z},-w). \end{align*} so that $$ (\rho \circ \tau) (z,w) = (-\overline{z}, -w) = ( \tau \circ \rho) (z,w) .$$ According to \cite{BRW23}, up to conjugation in $\operatorname{Diff}^+(S^3)$, we can always suppose that the Klein amphichiral symmetry is given by the action of $\rho$ and $\tau$ above. Then the fixed point sets of these involutions are given by \begin{align*} \Fix(\rho)&=\{(0,w)\;|\;w\in S^1\},\\ \Fix(\tau)&=\{(\pm 1,0)\},\\ \Fix(\rho \circ \tau)&=\{(\pm i,0)\}. \end{align*} Let $N_{\rho}$, $N_\tau$, and $N_{\rho \circ \tau}$ be small equivariant tubular neighbourhoods of $\Fix(\rho)$, $\Fix(\tau)$, and $\Fix(\rho\circ \tau)$, respectively. Then we have: \begin{itemize}[leftmargin=2em] \item $N_\rho\cong D^2\times S^1$ and $\rho_{|N_\rho}=(-\id_{D^2},\id_{S^1})$, \item $N_\rho\cap K=\{(t\cdot z_i,w_i)\;|\;t\in[-1,1]\}$ for some $z_0,z_1,w_0,w_1\in S^1$, \item $N_\tau\cong B^3_1\sqcup B^3_{-1}$, where $B^3_{\pm 1}$ is a small ball centered at $\pm 1$ and $\tau_{|B^3_{\pm 1}}=-\id$, \item $B_{\pm 1}^3\cap K=\{t\cdot p\;|\;t\in [-1,1]\}$ for some $p\in S^2$, \item $N_{\rho \circ \tau}\cap K=\emptyset$. \end{itemize} Let $Y=S^3\setminus \operatorname{int}(N_\rho\cup N_\tau\cup N_{\rho \circ \tau})$. Observe that $Y$ is simply a solid torus with four small balls removed and that $K\cap Y$ consists of four arcs: two of the arcs connect $\partial N_\rho$ to $\partial B^3_1$ and the other two connect $\partial N_\rho$ to $\partial B^3_{-1}$, see Figure~\ref{fig:ext_fix}. \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth]{images/ext_fix.pdf} \caption{The manifold $Y$, given by removing the fixed point sets.} \label{fig:ext_fix} \end{figure} Define $\overline{Y}=Y/V$ to be the quotient of $Y$ by the action of $V$. It is not difficult to see that $\overline{Y}$ is a non-orientable $3$-manifold with boundary, whose boundary components are given as follows:In particular, one can see that $\overline{Y}\cong (\mathbb{RP}^2\times I)\natural(\mathbb{RP}^2\times I)$. \begin{itemize}[leftmargin=2em] \item a Klein bottle, which is the image of the toric boundary component of $Y$, denoted by $A$, \item two $\mathbb{RP}^2$'s, which are the images of $\partial N_\tau$ and $\partial N_{\rho \circ \tau}$, respectively. Denote the image of $N_\tau$ by $B$. \end{itemize} In particular, one can see that $\overline{Y}\cong (\mathbb{RP}^2\times I)\natural(\mathbb{RP}^2\times I)$. Now, denote the image of $K\cap Y$ in $\overline{Y}$ by $\overline{K}$. 
Then $\overline{K}$ is a simple properly embedded arc in $\overline{Y}$ starting at $A$ and ending in $B$, see Figure~\ref{fig:ext_fix_quo}. \begin{figure}[ht] \centering \includegraphics[width=0.3\linewidth]{images/ext_fix_quo.pdf} \caption{The quotient manifold $\overline{Y}$. Both the upper and lower punctured disks are glued to themselves by $-\id$. In red, there is an example of an arc $\overline{K}$ going from the Klein bottle boundary component to one $\mathbb{RP}^2$ boundary component.} \label{fig:ext_fix_quo} \end{figure} Define now $$\pi_1(\overline{Y},A,B) = \{ \gamma:[0,1]\to\overline{Y} \ \vert \ \gamma(0)\in A, \ \gamma(1)\in B \} / \text{homotopy}.$$ Notice that every class in $\pi_1(\overline{Y},A,B)$ can be represented by a simple properly embedded arc. Observe that any homotopy between two properly embedded arcs in $\overline{Y}$ connecting $A$ to $B$ is lifted to an equivariant homotopy in $Y$. This can be closed up to an equivariant homotopy between the corresponding Klein amphichiral knots in $S^3$ in the following way. Let $\gamma_t$ be the lift of the homotopy at time $t$. Then $\gamma_t$ is given by four proper arcs in $Y$ invariant under the action of $V$. We extend the homotopy over $N_\rho$ and $N_\tau$ following the local models described above. Observe that $\gamma_t\cap\partial N_\rho$ is given by $4$ points: $(z_0,w_0), (-z_0,w_0), (z_1,w_1), (-z_1,w_1)\in S^1\times S^1$. We connect then the components of $\gamma_t$ by adding the arcs $(s\cdot z_0,w_0)$ and $(s\cdot z_1,w_1)$, $s\in [-1,1]$ inside $N_\rho$. Similarly, consider the intersection $\gamma_t\cap \partial N_\tau$, which is given by four points $p,-p\in \partial B_{+1}^3$ and $q,-q\in\partial B_{-1}^3$. Then the components of $\gamma_t$ are connected by adding the arcs $\{s\cdot p\}_{s\in [-1,1]}\subset B_{+1}^3$ and $\{s\cdot q\}_{s\in [-1,1]}\subset B_{-1}^3$. In this way, we obtain an equivariant homotopy of Klein amphichiral knots in $S^3$. We can sum up the discussion above in the following lemma: \begin{lem} The natural map $$ \{\text{Klein amphichiral knots}\}/\text{equivariant homotopy}\longrightarrow\pi_1(\overline{Y},A,B)$$ is a bijection. \end{lem} In order to prove the refinement of Levine's theorem given in Theorem \ref{thm:mainthm1}, the essential ingredient is the following lemma. \begin{lem}\label{lem:equiv_crossing} Every Klein amphichiral knot can be turned into the standard Klein amphichiral unknot by a finite number of $V$-equivariant crossing changes. \end{lem} \begin{proof} Let $K_0$ and $K_1$ be two Klein amphichiral knots, and let $\overline{K_0}$ and $\overline{K_1}$ be the corresponding arcs in $\overline{Y}$. Suppose that there exists a homotopy $\overline{K}_t$, $t\in[0,1]$ between $\overline{K_0}$ and $\overline{K_1}$. Then up to a small perturbation, we can suppose that $\overline{K_t}$ is a simple properly embedded arc connecting $A$ to $B$ for all $t\in[0,1]$ except for finitely many times, for which $\overline{K_t}$ has a point of double transverse self-intersection. Let $K_t$, $t\in[0,1]$ be the corresponding homotopy between $K_0$ and $K_1$. Then for all $t$, we have that $K_t$ is a Klein amphichiral knot, except at finitely many times, for which it has some points of double transverse self-intersection. Such double points either arise from the double points of $\overline{K_t}$ or they might come from the closing up procedure of the homotopy described above. 
Using the notation in the discussion above, for example, it might happen that $w_0=w_1$, leading to a new double point inside $N_\rho$ during the homotopy. Thus, it suffices to show that $\pi_1(\overline{Y},A,B)$ consists of a single point. For this purpose, pick two points $p\in A$ and $q\in B$, and consider the set $$ \pi_1(\overline{Y},p,q)=\{\gamma:[0,1]\to\overline{Y}\;|\;\gamma(0)=p, \ \gamma(1)=q\}/\text{homotopy}. $$ Notice that since $A$ and $B$ are connected, the natural map $$ \iota: \pi_1(\overline{Y},p,q)\to\pi_1(\overline{Y},A,B) $$ induced by the inclusion is surjective. Observe then that \begin{itemize}[leftmargin=2em] \item both $\pi_1(A,p)$ and $\pi_1(B,q)$ act on $\pi_1(\overline{Y},p,q)$ by pre-composition and post-composition, respectively, \item the image of a class under $\iota$ is not changed by the action of $\pi_1(A,p)$ and $\pi_1(B,q)$. \end{itemize} \noindent This reduces our argument to show that the combined action of $\pi_1(A,p)$ and $\pi_1(B,q)$ is transitive on $\pi_1(\overline{Y},p,q)$, which is a consequence of the following two facts: \begin{itemize}[leftmargin=2em] \item $\pi_1(\overline{Y},p)$ acts transitively on $\pi_1(\overline{Y},p,q)$ by pre-composition, \item $\pi_1(\overline{Y})=\langle\pi_1(A),\pi_1(B)\rangle$. \end{itemize} \noindent Therefore, $\pi_1(\overline{Y},A,B)=\{*\}$, as we desired. \end{proof} \subsection{The Equivariant Q-Slice Knots} \label{sec:equivariant-slice} An \emph{equivariant $\Q$-slice slice knot} is a strongly invertible knot $(K, \rho_K )$ in $S^3$ that bounds a disk $D$ smoothly properly embedded in a $\Q$-homology $4$-ball (a $4$-manifold having the $\Q$-homology of $B^4$) with an orientation preserving involution $\rho : W \to W$ such that $$ \partial D = K, \ \ \ \rho_{\vert_{\partial W = S^3}} = \rho_K, \ \ \ \text{and} \ \ \ \rho (D) = D ,$$ cf. the conditions (\ref{defn:p3}) and (\ref{defn:p4}) below. Recall that Kawauchi's famous result \cite[{\sc\S}2]{Kaw09} provided a characterization for $\Q$-slice knots, showing that every strongly negative amphichiral knot is $\Q$-slice. Now, we prove an equivariant refinement of Kawauchi's theorem by proving that every Klein amphichiral knot is equivariant $\Q$-slice. This is essentially the first half of Theorem~\ref{thm:mainthm1}. \begin{proof}[Proof of Theorem \ref{thm:mainthm1}, the $1^{st}$ half] Consider the product extension of $\rho$ and $\tau$ on $S^3\times[0,1]$ , which we will still denote them using the same letters. Let $X_K$ be the $4$-manifold obtained from $S^3\times [0,1]$ by attaching a $0$-framed $2$-handle along $K\subset S^3\times\{1\}$. We can find a $\langle \rho,\tau\rangle$-invariant neighbourhood $N(K)\cong S^1\times D^2$ of $K$ such that \begin{itemize}[leftmargin=2em] \item $\rho(z,w)=(\overline{z},\overline{w})$, for $(z,w)\in S^1\times D^2$, \item $\tau(z,w)=(\overline{z},-w)$ for $(z,w)\in S^1\times D^2$. \end{itemize} Therefore, we can extend $\rho$ and $\tau$ over the $2$-handle $D^2\times D^2$ by using the obvious extensions of the formulas above. Denote the core disk of the $2$-handle by $D=D^2\times \{0\}$. Observe in particular that $\rho(D)=D$. Let $S^3_0(K)$ be the $0$-surgery of $K$. Notice that the restriction of $\tau$ on the $S^3_0(K)$ component of $\partial X_K$ is fixed-point free and orientation-reversing. Therefore, we can glue $X_K$ to itself by identifying $x\sim\tau(x)$ for all $x\in S^3_0(K)$, and obtain an oriented $4$-manifold $Z_K$ with $\partial Z_K = S^3$, namely the Kawauchi manifold. 
Let $D'$ be the disk obtained by gluing the product cylinder $K\times [0,1]$ with $D$. In \cite[Lemma~2.3, Theorem~1.1]{Kaw09}, Kawauchi proves that $D'$ is a slice disk for $K$ in $Z_K$, and $Z_K$ is a $\Q$-homology $4$-ball. Finally, since $\rho$ and $\tau$ commute, the action of $\rho$ on $X_K$ induces an involution $\rho_{Z_K}$ on $Z_K$ such that $\rho_{Z_K} (D') = D'$. This shows that $D'$ is actually an equivariant slice disk for $(K,\rho)$ in $Z_K$. Therefore, $(K,\rho)$ is equivariantly $\Q$-slice in the Kawauchi manifold $Z_K=Z$. \end{proof} Due to its construction, a priori the Kawauchi manifold $Z$ seems to depend on the strongly negative amphichiral knot. However, Levine \cite{Lev23} proved that $Z$ is a unique manifold in the sense that all strongly negative amphichiral knots bound smooth disks in $Z$. Our proof extends Levine's theorem to the equivariant case by showing that every Klein amphichiral knot $(K, \rho, \tau)$ bounds an equivariant disk in $(Z, \rho_Z)$, where $\rho_Z$ is an involution of $Z$ extending $\rho$. This is the remaining part of Theorem~\ref{thm:mainthm1}. \begin{proof}[Proof of Theorem \ref{thm:mainthm1}, the $2^{nd}$ half] Observe that Levine's original proof \cite{Lev23} has three main ingredients: a $5$-dimensional handlebody argument, an equivariant unknotting theorem of Boyle and Chen \cite[Proposition~3.12]{BC24}, and the equivariant isotopy extension theorem of Kankaanrinta \cite[Theorem~8.6]{Kan07}. The first and last ingredients are essentially the same for our extended argument. In this setting, the second ingredient is simply replaced by Lemma~\ref{lem:equiv_crossing}. \end{proof} \subsection{The Equivariant Q-Concordance Group} \label{sec:equivariant-concordance} A \emph{direction} on a given strongly invertible knot $(K,\rho)$ is a choice of oriented \emph{half-axis} $h$, i.e. the choice of an oriented connected component of $\mathrm{Fix} (\rho)\setminus K$. We will call a triple $(K,\rho,h)$ a \emph{directed strongly invertible knot} (a \emph{DSI knot} in short). We say that two DSI knots $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ are \emph{equivariantly isotopic} if there exists $\varphi\in\operatorname{Diff}^+(S^3)$ such that $\varphi(K_0)=K_1$, $\varphi\circ\rho_0=\rho_1\circ\varphi$ and $\varphi(h_0)=h_1$ as oriented half-axes. Given two DSI knots $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$, we may form the \emph{equivariant connected sum} operation $\widetilde{\#}$, introduced by Sakuma \cite[{\sc\S}1]{Sak86}. This yields a potentially new DSI knot $( K_0 \widetilde{\#} K_1,\rho_0 \widetilde{\#} \rho_1, h_0\widetilde{\#}h_1 )$ whose oriented half-axis starts from the tail of the half-axis for $K_0$ and ends at the head of the half-axis for $K_1$, see Figure~\ref{fig:conn_sum}. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[width=0.5\linewidth]{images/connected_sum.pdf}}; \draw (4.3,2.3) node[scale=2] (a) {$=$}; \draw (4.3,4.5) node[scale=2] (b) {$\widetilde{\#}$}; \end{tikzpicture} \caption{An example of equivariant connected sum.} \label{fig:conn_sum} \end{figure} Let $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ be two DSI knots in $S^3$. 
We call $(K_0,\rho_0,h_0)$ and $(K_1,\rho_1,h_1)$ \emph{equivariant $\Q$-concordant} and denote it by $$(K_0,\rho_0,h_0) \thicksim_\Q (K_1,\rho_1,h_1)$$ if there is a pair of smooth manifolds $\left ( W, A \right ) $ satisfying the following conditions: \begin{enumerate} \item $W$ is a $\Q$-homology cobordism from $S^3$ to itself, i.e., $H_* (W; \Q ) \cong H_* (S^3 \times [0,1]; \Q )$, \item $A$ is a submanifold of $W$ with $A \cong S^1 \times [0,1]$ and $\partial A \cong -(K_0) \cup K_1$, \item \label{defn:p3} There is an involution $\rho_W: W \to W$ that extends $\rho_0$ and $\rho_1$, and has $\rho_W (A) = A$, \item \label{defn:p4} Denote by $F$ the fixed-point surface of $\rho_W$. Then the half-axes $h_0,h_1$ lie in the same boundary component of $F-A$ and their orientations induce the same orientation on $F-A$. \end{enumerate} \begin{rem} Observe that if $W$ as above is not a $\Z/2\Z$-homology cobordism, then $\Fix(\rho_W)$ might not be an annulus. This implies that the equivariant $\Q$-sliceness of a directed strongly invertible knot is not equivalent to the equivariant $\Q$-sliceness of its \emph{butterfly link} (see Definition \ref{butterfly_link}) or \emph{moth link} (see \cite[Definition 5.2]{DPF23}), contrary to the non-rational case, as proved in \cite[Proposition 4.2]{BI22} and \cite[Proposition 5.4]{DPF23}. As a consequence, several invariants obtained from these links, such as the butterfly and moth polynomials, do not automatically vanish for equivariant $\Q$-slice knots (compare with the computations in Section \ref{ssec:buttefly_moth}). \end{rem} As a natural extension of the equivariant concordance group $\CT$ defined by Sakuma \cite[{\sc\S}4]{Sak86} (see also Boyle and Issa \cite[{\sc\S}2]{BI22}), we introduce the \emph{equivariant $\Q$-concordance group} $\CTQ$ as $$\CTQ \doteq \left ( \left \{ \text{DSI knots in} \ S^3 \right \} / \thicksim_\Q , \ \widetilde{\#} \right ) .$$ \noindent We have the following main properties: \begin{itemize}[leftmargin=2em] \item The operation is induced by the equivariant connected sum $\widetilde{\#}$. \item The identity element is the equivariant $\Q$-concordance class of the unknot $(U, \rho_U , h_U)$\footnote{The unknot $U$ in $S^3$ has a unique strong inversion, see \cite[Definition~1.1, Lemma~1.2]{Sak86}.}, see Figure~\ref{fig:unknot}. \item The inverse element for $(K, \rho , h)$ is given by the axis-inverse of the mirror of $K$, i.e., $(\overline{K}, \rho , -h)$. \end{itemize} \begin{figure}[ht] \centering \includegraphics[width=0.15\linewidth]{images/unknot.pdf} \caption{The DSI unknot.} \label{fig:unknot} \end{figure} \subsection{A Construction for Equivariant Q-Slice Knots} \label{sec:constructions} In this subsection, we construct examples of equivariant $\Q$-slice knots. We now introduce the first family of examples, which is obtained by using the following doubling argument. Given an oriented knot $K$, its \emph{double} $\mathfrak{r}(K)$ is the DSI knot obtained from $K\#r(K)$ (where $r(K)$ is the \emph{reverse} of $K$, i.e. the same knot with the opposite orientation) together with the involution $\rho$ that exchanges $K$ and $r(K)$ (namely the $\pi$-rotation around the vertical axis in Figure~\ref{rK}). The direction on $\mathfrak{r}(K)$ is given as follows: the connected sum can be performed by a suitable band move along a grey band $B$ where $\Fix(\rho)\cap B$ is the half-axis $h$. Here, $h$ is oriented as the portion of $B$ lying on $K$. 
\begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.3]{images/r_K.png}}; \node[label={$K$}] at (1.2,1.7){}; \node[label={$h$}] at (4.1,1.7){}; \node[label={$r(K)$}] at (7.8,1.7){}; \end{tikzpicture} \caption{The DSI knot $\mathfrak{r}(K)$ with the chosen half-axis drawn solid.} \label{rK} \end{figure} As proven by Boyle and Issa \cite[{\sc\S}2]{BI22}, $\mathfrak{r}$ defines an injective homomorphism $\mathfrak{r}:\C \longrightarrow \CT.$ The same argument actually shows that $\mathfrak{r}$ also induces a homomorphism $\mathfrak{r}_{\Q}:\CQ\to\CTQ$ (not necessarily injective), which fits into the following commutative diagram: \begin{center} \begin{tikzcd} \C\ar[r,"\mathfrak{r}"]\ar[d,"\psi"]&\CT\ar[d,"\Phi"]\\ \CQ\ar[r,"\mathfrak{r}_{\Q}"]&\CTQ. \end{tikzcd} \end{center} Therefore, the first family of DSI knots is trivial in the sense that these knots lie in $\mathrm{Ker} (\Phi)$ and are simply given by the image of $\mathrm{Ker}(\psi)$ under $\mathfrak{r}$. It is known that the algebraic structure of $\mathrm{Ker}(\psi)$ is very complicated. In particular, $\mathrm{Ker}(\psi)$ contains a $\Z^\infty \oplus (\Z/2\Z)^\infty$ subgroup due to the work of Cha \cite{Cha07} and of Hom, Kang, Park, and Stoffregen \cite{HKPS22}. At the moment, the structure of $\CT$ is quite mysterious. The first author proves that $\CT$ is non-abelian \cite{DP23} and, together with Framba, that $\CT$ is also non-solvable \cite{DPF23}. He also observes in \cite[Corollary~1.16]{DP23b} that the equivariant concordance group splits as the following direct sum $$\CT= \mathrm{Ker} (\mathfrak{b})\oplus\mathfrak{r}(\mathcal{C}),$$ where $\mathrm{Ker} (\mathfrak{b})$ is the subgroup of $\CT$ formed by DSI knots $K$ such that their \emph{$0$-butterfly links} $L_b^0(K)$ (see Definition \ref{butterfly_link} and \cite{BI22}) have slice components (but they are not necessarily slice as links). Now, we construct non-trivial examples of equivariant $\Q$-slice knots in $S^3$. Let $$\J = \{ n \in \mathbb{N} \ \vert \ n>1 \ \text{and} \ n \not \equiv 0 \mod 3 \} .$$ We can write $$\J = \JE \sqcup \JO = \{ n \in \J \ \vert \ n \ \text{is even} \} \sqcup \{ n \in \J \ \vert \ n \ \text{is odd} \}.$$ \begin{defn} \label{defn:turks-head} For $n \in \J$, the \emph{Turk's head knots} $J_n = Th (3,n)$ are defined as the $3$-braid closures $$J_n \doteq \reallywidehat{(\sigma_1 {\sigma_2}^{-1})^n } ,$$ where a single $3$-braid $\sigma_1 {\sigma_2}^{-1}$ is depicted as follows: \vspace{0.7 em} \begin{center} \begin{tikzpicture} \pic[ rotate=0, braid/.cd, every strand/.style={thick}, strand 1/.style={black}, strand 2/.style={black}, strand 3/.style={black}, ] {braid={s_1^{-1} s_2 }}; \end{tikzpicture} \end{center} \end{defn} \vspace{0.7 em} The Turk's head knots $J_n$ are known to be alternating, cyclically $n$-periodic, fibered, hyperbolic, prime, strongly invertible, and strongly negative amphichiral; see our recent survey paper \cite{DPS24}. Using Knotinfo \cite{knotinfo} and Knotscape \cite{knotscape}, we can identify the Turk's head knots $J_n$ with a small number of crossings. In particular, we have $$J_2 = 4_1, \ J_4 = 8_{18}, \ J_5 = 10_{123}, \ J_7 = 14_{a19470}, \ \text{and} \ J_8 = 16_{a275159} .$$ We will need the following three important properties of $J_n$. Their symmetry groups were computed by Sakuma and Weeks \cite[Proposition~I.2.5]{SW95}. 
The recent work of AlSukaiti and Chbili \cite[Corollary~3.5, Proposition~5.1]{AC23} provided their determinants and the roots of their Alexander polynomials. For more references, one can consult our recent survey \cite{DPS24}. \begin{enumerate} \item \label{property:SW} For a fixed value of $n$, we have $$\mathrm{Sym} (S^3 , J_n) \cong D_{2n} \CommaPunct$$ where $D_{m}\cong\Z/m\Z\rtimes\Z/2\Z$ denotes the dihedral group with $2m$ elements. \item \label{eq:determinant} Let $L_k$ denote the $k^{th}$ Lucas number.\footnote{The Lucas numbers are defined recursively as $L_0 = 2, L_1 =1$, and $L_k = L_{k-1} + L_{k-2}$ for $k \geq 2$.} Then we have $$\mathrm{det}(J_n) = \Delta_{J_n} (-1) = L_{2n} - 2 .$$ \item \label{eq:roots} The roots of the Alexander polynomial $\Delta_{J_n} (t)$ are of the form: $$z = -\frac{1}{2} \left( 2\cos \left( \frac{2k}{n} \pi \right ) -1 \pm \sqrt{ \left (2\cos \left (\frac{2k}{n} \pi \right ) -1 \right ) ^2 -4 } \right) \CommaPunct $$ with $1 \leq k \leq \lfloor n/2 \rfloor.$ \end{enumerate} In the following proposition, we explicitly determine the number of strong inversions of the Turk's head knots $J_n$. \begin{prop} \label{prop:equivalent} The Turk's head knot $J_n$ has at most two inequivalent (i.e. not conjugate in $\mathrm{Sym}^+(S^3,J_n)$) strong inversions, say $\rho_1$ and $\rho_2$. In particular, \begin{itemize}[leftmargin=2em] \item if $n \in \JO$, then the strong inversion is unique and $J_n$ is Klein amphichiral, \item if $n\in \JE$, then $J_n$ has exactly two inequivalent strong inversions, which are conjugated by an element of $\mathrm{Sym}(S^3,J_n)$. \end{itemize} \end{prop} \begin{proof} According to \cite[Proposition 3.4]{Sak86}, $J_n$ admits at most two inequivalent strong inversions, since it is invertible, amphichiral, and hyperbolic. More precisely, Sakuma proved that the following are equivalent: \begin{itemize}[leftmargin=2em] \item $J_n$ admits a period $2$ symmetry, \item $\rho_1$ and $\rho_2$ are not equivalent. \end{itemize} Since by property \ref{property:SW} we have $\mathrm{Sym}(S^3,J_n)\cong D_{2n}$ and $n$ is odd, we know that we only have three conjugacy classes of involutions in $\mathrm{Sym}(S^3,J_n)$. By Proposition \ref{prob:+amphichiral}, these classes are exactly given by a strong inversion $\rho$ and by strongly negative and positive amphichiral involutions $\tau$ and $\delta$, respectively. In particular, $J_n$ cannot admit a period $2$ symmetry (cf. \cite[p.~332]{KS92}). In Figure~\ref{fig:klein_Jn} we can explicitly see that the maps $\tau$ and $\rho$ commute. Therefore, $\rho$ is the unique strong inversion. On the other hand, if $n$ is even, then $J_n$ admits an obvious period $2$ symmetry, which can be seen, for example, from the braid description in Definition \ref{defn:turks-head}. Therefore, $\rho_1$ and $\rho_2$ are inequivalent strong inversions. \end{proof} \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=1]{images/klein_Jn.pdf}}; \node at (2.6,5.4) (b1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (2.6,2.2) (b2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (8,5.4) (c1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (8,2.2) (c2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (2.6,3.8) (a1) {$\alpha$}; \node at (8,3.8) (a2) {$\beta$}; \end{tikzpicture} \caption{The Turk's head knot $J_n$ for $n$ odd. If $n=4k+1$ then $\alpha=\sigma_1$ and $\beta=\sigma_2^{-1}$, while for $n=4k-1$ we have $\alpha=\sigma_2$ and $\beta=\sigma_1^{-1}$. 
Its Klein amphichiral symmetry is represented in Figure \ref{fig:10_123_sym}.} \label{fig:klein_Jn} \end{figure} \begin{rem} We fix here the choice of direction on $J_n$ for $n$ odd. We will always consider $J_n$ as a DSI knot with the involution $\rho$ given by the $\pi$-rotation around the red axis in Figure \ref{fig:klein_Jn}; the chosen half-axis $h$ is the bounded one in the figure, oriented from left to right. In what follows, we will not recall these choices explicitly. Observe that the strongly negative amphichiral involution $\tau$ (see Figure~\ref{fig:klein_Jn}) maps the DSI knot $(J_n,\rho,h)$ to $(\overline{J_n}, \rho, -h')$, where $-h'$ is the complementary half-axis endowed with the opposite orientation. Therefore, while in general these choices would be relevant for the computations in Section \ref{sec:independence}, in this specific case changing the DSI structure would change the invariants computed in Section \ref{sec:independence} at most by a sign. \end{rem} \begin{cor} If $n \in \JO$, then the Turk's head knots $J_n$ are equivariant $\Q$-slice. \end{cor} \begin{proof} We know that $J_n$ is always strongly negative amphichiral. Since $n \in \JO$, by Proposition~\ref{prop:equivalent}, the knots $(J_n , \rho, \tau)$ are all Klein amphichiral. Then from Theorem~\ref{thm:mainthm1} we know that $(J_n,\rho)$ is equivariantly $\Q$-slice. \end{proof} \begin{rem} Let $\mathrm{gcd} (p,q) = 1$. If $p$ and $q$ are both odd integers, then the Turk's head knots $Th(p,q)$ are strongly invertible and strongly negative amphichiral, see \cite[{\sc\S}3.4]{DPS24}. A positive answer to \cite[Conjecture~B]{DPS24} would also prove that the knots $Th(p,q)$ are all equivariant $\Q$-slice. \end{rem} \subsection{An Obstruction for Equivariant Q-Slice Knots} \label{sec:obstructions} In this subsection, we prove an equivariant rational version of the classical Fox-Milnor condition, which is a generalization of the result by Cochran, Franklin, Hedden, and Horn \cite{CFHH13}. Here, we normalize the Alexander polynomial of a knot $K$ such that $$\Delta_K (1) = 1 \quad \text{and} \quad \Delta_K (t) = \Delta_K (t^{-1}).$$ Now, we prove Theorem~\ref{thm:mainthm2}, claiming that the Alexander polynomial of an equivariant $\Q$-slice knot must be a square. \begin{proof}[Proof of Theorem~\ref{thm:mainthm2}] Let $(K,\rho_K)$ be an equivariant $\Q$-slice knot with a slice disk $D\subset Z$ in a $\Q$-homology $4$-ball $Z$, and let $\rho$ be an involution on $Z$ extending $\rho_K$ such that $\rho(D)=D$. Let $Z_D = Z \setminus N(D)$ be the equivariant slice disk exterior, where $N(D)$ is a $\rho$-invariant tubular neighborhood of $D$. Denote the inclusion of the boundary by $i:\partial Z_D\cong S^3_0(K)\to Z_D$. Since $H_1(Z_D ;\Q) = H_1(S^1 \times B^3 ;\Q)$, we have $H_1(Z_D ;\Z)/ \mathrm{torsion} \cong\Z$. Let $$\phi:\pi_1(Z_D)\to \Z$$ be the projection map onto the homology modulo torsion. Given a space $X$ and a map $\epsilon:\pi_1(X)\to\Z=\langle t\rangle$, we denote by $H_*(X,\epsilon)$ the integral homology of $X$ twisted by $\epsilon$, which is naturally a $\Z[t^{\pm1}]$-module. By \cite[Proposition~4.6]{CFHH13}, the order of $H_1(S^3_0(K),\phi\circ i)$ is $\Delta_K(t^n)$, where $n$ is the complexity of the slice disk, i.e., the absolute value of the element represented by a meridian of $D$ in $H_1(Z_D ;\Z)/ \mathrm{torsion} \cong\Z$. Let $M$ be the kernel of $$ i_*:H_1(S^3_0(K),\phi\circ i)\to H_1(Z_D,\phi), $$ and denote its order by $f(t)$. 
Then, by using \cite[Proposition~4.5]{CFHH13}, we see that $\Delta_K(t^n)=f(t)f(t^{-1})$. Now, observe that $\phi\circ\rho_*=(-id)\circ\phi$, i.e., the following diagram commutes: \begin{center} \begin{tikzcd} \pi_1(Z_D) \arrow{r}{\phi} \arrow[swap]{d}{\rho_*} & \Z \arrow{d}{-id } \\ \pi_1(Z_D) \arrow{r}{\phi} & \Z \end{tikzcd} \end{center} Therefore the order of $\rho(M)$ is $f(t^{-1})$. Since the inclusion map $i$ is $\rho$-equivariant, we have that $\rho(M)=M$, hence $f(t)=f(t^{-1})$. Hence $\Delta_K(t^n)$ is a square. In turn, this implies that $\Delta_K(t)$ is also a square. \end{proof} Using the equivariant Fox-Milnor condition in Theorem~\ref{thm:mainthm2}, we now prove that the other half of the knots $J_n$ are not trivial in $\CTQ$. \begin{prop} If $n \in \JE$, then the Turk's head knots $J_n$ are not equivariant $\Q$-slice. \end{prop} \begin{proof} It is a well-known fact that the Lucas numbers satisfy the following equality: $$ (L_n)^2=L_{2n}+(-1)^n\cdot2. $$ If $n\in \JE$, then $\det(J_n)=L_{2n}-2=(L_n)^2-4$ by property (\ref{eq:determinant}) and the identity above. Since $(L_n)^2-4$ lies strictly between $(L_n-1)^2$ and $(L_n)^2$ (as $L_n\geq 3$), the determinant is not a perfect square, and hence $J_n$ is not equivariant $\Q$-slice by Theorem \ref{thm:mainthm2}. \end{proof} Generalizing a Kirby calculus argument by Fintushel and Stern \cite{FS84}, Cha exhibits in \cite[Theorem~4.14]{Cha07} a family of infinitely many $\Q$-slice knots $K_n$ in $S^3$, depicted in Figure~\ref{fig:cha_knots}. Since the knots $K_n$ are clearly strongly negative amphichiral (see \cite[Figure~5]{Cha07}), Cha's result can be reproven by Kawauchi's characterization. Note that $K_1$ is the figure-eight knot. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.6]{images/cha_knots.pdf}}; \node[scale=1.7] at (1.9,2) (b4){$-n$}; \node[scale=1.7] at (5.8,1.99) (b37){$n$}; \end{tikzpicture} \caption{The knots $K_n$. The square box with the integer $n$ (resp. $-n$) represents the right-handed (resp. the left-handed) $n$ full twists. The strong inversion is the $\pi$-rotation around the dashed axis.} \label{fig:cha_knots} \end{figure} Now, we are ready to prove Theorem \ref{thm: mainthm4}, showing that Cha's knots $K_n$ are also not equivariant $\Q$-slice. \begin{proof}[Proof of Theorem \ref{thm: mainthm4}] Fix $n \geq 1$ and assume that $K_n$ is equivariant $\Q$-slice. By \cite[Theorem~4.14]{Cha07}, we know that $$\Delta_{K_n} (t) = -n^2 t^{-1} + 2n^2 + 1 - n^2 t .$$ Then we get $$\mathrm{det} (K_n) = \Delta_{K_n} (-1) = 4n^2 + 1 = (2n)^2 + 1.$$ Then, by Theorem \ref{thm:mainthm2}, the determinant must be a square, which is a contradiction, since $(2n)^2+1$ lies strictly between $(2n)^2$ and $(2n+1)^2$. Therefore, the knots $K_n$ are not equivariant $\Q$-slice. Let $H$ be the subgroup of $\widetilde{\C}_\Q$ spanned by the knots $K_n$ for $n\geq1$ (fix any choice of strong inversion and direction on $K_n$). Let $K=\widetilde{\#}^{a_1}K_{n_1}\widetilde{\#}\dots\widetilde{\#}^{a_l}K_{n_l}, $ where the equivariant connected sum is taken with respect to any ordering of the knots. Since the polynomials $\Delta_{K_n}(t)$ are quadratic, it is not difficult to check that they are pairwise coprime. Therefore, $\Delta_K(t)$ is a square if and only if $a_i\equiv 0\pmod{2}$ for $i=1,\dots,l$. Hence, the group $H$ surjects onto $(\Z/2\Z)^\infty$. 
\end{proof} \begin{rem} \label{rem:algebraic concordance} One can easily check, by using the so-called \emph{equivariant signature jump homomorphisms} $\widetilde{J}_\lambda:\widetilde{\C}\to\Z$ introduced in \cite[Definition 6.1]{DP23b} and by applying \cite[Theorem 6.9]{DP23b}, that the knots $K_n$ span a subgroup of $\widetilde{\C}$ which surjects onto $\Z^\infty$. As the classical Levine-Tristram signatures provide invariants of $\Q$-concordance (see \cite{CK02}), we expect the maps $\widetilde{J}_\lambda$ to factor through $\widetilde{\C}_\Q$. This would prove that the subgroup $H\subset\widetilde{\C}_\Q$ described in the proof of Theorem~\ref{thm: mainthm4} actually surjects onto $\Z^\infty$. Compare with Problem~\ref{prob:algebraic concordance}. \end{rem} \section{Proof of Theorem \ref{thm: mainthm3}} \label{sec:independence} In this section, we recall some preliminary notions and we prove the intermediate results needed in order to prove Theorem \ref{thm: mainthm3}. \subsection{Weighted Graphs, Spanning Trees and Gordon-Litherland Form} In this subsection, we recall some useful results about the Gordon-Litherland form and graph theory. \label{sec:graphs} \begin{defn} A \emph{weighted graph} $\Gamma=(V,E)$ is a simple graph with vertex set $V$ and edge set $E$, together with the additional data of a \emph{weight} $\lambda_{e}=\lambda_{ij}=\lambda_{ji}\in\R$ for each edge $e\in E$ connecting $i,j\in V$. If two vertices are not connected by an edge, the weight is understood to be zero. We associate a \emph{Laplacian matrix} $\L(\Gamma)$ with each weighted graph by letting $$ \L(\Gamma)_{i,j}=\begin{cases} -\lambda_{ij}\quad\text{if}\quad i\neq j\\ \sum_{k\neq i}\lambda_{ik}\quad\text{if}\quad i=j. \end{cases} $$ In the following, we will denote by $\L(\Gamma;i)$ the square matrix obtained from $\L(\Gamma)$ by removing the $i$-th column and row. \end{defn} \begin{defn} Let $\Gamma=(V,E)$ be a weighted graph. A \emph{spanning tree} of $\Gamma$ is a subgraph $T=(V_T,E_T)$ of $\Gamma$ such that \begin{itemize} \item $T$ is a tree, \item the vertex set of $T$ is equal to $V$. \end{itemize} We denote the \emph{weight} of $T$ by $$ w(T)=\prod_{e\in E_T}\lambda_e. $$ Finally, we define the \emph{(weighted) number of spanning trees} of $\Gamma$ as $$ \T(\Gamma)=\sum_{T\subset\Gamma\;\text{spanning tree}}w(T). $$ \end{defn} \begin{thm}\cite[Theorem VI.29]{Tut01}\label{thm:matrix_tree} Let $\Gamma=(V,E)$ be a weighted graph. Then for every $i\in V$, we have $$ \det(\L(\Gamma;i))=\T(\Gamma). $$ \end{thm} \begin{thm}\cite[Theorem 6.1.1 (Gershgorin's Theorem)]{HJ85}\label{thm:gershgorin} Let $A=(a_{i,j})$ be an $n\times n$ complex matrix, and set $R_i(A)=\sum_{j\neq i}|a_{i,j}|$. Then the eigenvalues of $A$ are contained in the following set $$ \bigcup_{i}\{z\in\mathbb{C}\;|\;|z-a_{i,i}|\leq R_i(A)\}. $$ \end{thm} \begin{defn}\label{def:dom_diag} Let $A=(a_{i,j})$ be as above. We say that $A$ is \emph{dominant diagonal} if for every $1\leq i\leq n$ we have $$ |a_{i,i}|\geq R_i(A). $$ Moreover, if there exists $i$ such that $|a_{i,i}|>R_i(A)$, then we say that $A$ is \emph{strongly dominant diagonal}. \end{defn} \begin{cor}\label{cor:pos_def} If $A=(a_{i,j})$ is a symmetric, strongly dominant diagonal matrix with positive entries on the diagonal, then $A$ is positive definite. \end{cor} \begin{exmp} Let $\Gamma=(V,E)$ be a connected and weighted graph such that $\lambda_e>0$ for every $e\in E$. 
Then for every $i\in V$ the matrix $\L(\Gamma;i)$ is strongly dominant diagonal and all the diagonal entries are positive, hence $\L(\Gamma;i)$ is positive definite. \end{exmp} \begin{defn} Let $\Gamma=(V,E)$ be a weighted graph. Given an edge $e\in E$ we denote by $\Gamma\setminus e=(V,E\setminus{e})$ the weighted graph obtained from $\Gamma$ by \emph{edge deletion} of $e$. Let $i,j$ be the endpoints of $e$. We define $\Gamma/e=(\overline{V},\overline{E})$ as the graph obtained from $\Gamma$ by \emph{edge contraction} of $e$, where \begin{itemize} \item $\overline{V}=V/{i\sim j}$, \item if $e\in E$ has endpoints $h,k\in V\setminus\{i,j\}$ then $e\in\overline{E}$ with the same label, \item for every $h\in V\setminus\{i,j\}$ there is an edge $e\in\overline{E}$ joining $[i\sim j]$ to $h$ of weight $\lambda_e=\lambda_{ih}+\lambda_{jh}$. \end{itemize} \end{defn} \begin{thm}[\cite{tutte2004graph}]\label{thm:DCT} Let $\Gamma=(V,E)$ be a weighted graph. Then for every $e\in E$, we have $$ \T(\Gamma)=\T(\Gamma\setminus e)+\lambda_e\cdot\T(\Gamma/e). $$ \end{thm} \subsubsection{Gordon-Litherland Form}\label{sec:GL_form} Let $L\subset S^3$ be a link and let $D\subset S^2$ be a connected diagram for $L$. Recall that by coloring $S^2\setminus D$ in a checkerboard fashion, we determine two spanning surfaces for $L$. Denote by $F$ the surface given by the black coloring. Then, we can describe the \emph{Gordon-Litherland form} conveniently in terms of a weighted graph $\Gamma=(V,E)$ associated with $F$, as follows. See \cite{GL78} for more details. Let $V$ be the set of white regions of $D$. Given two white regions $R_1,R_2$, we have an edge $e$ connecting them if they share at least one crossing of $D$. The label of $e$ is given by the sum over all the crossing of $D$ shared by $R_1$ and $R_2$ of $+1$ for a right-handed half-twist and $-1$ for a left-handed half-twist (see Figure \ref{fig:goeritz_crossings}). \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.7]{images/goeritz_crossings.pdf}}; \node at (1.2,-0.5) (b1){$+1$}; \node at (0.5,1.5) (b3){$R_1$}; \node at (1.7,1.5) (b4){$R_2$}; \node at (5.3,1.5) (b37){$R_1$}; \node at (6.5,1.5) (b43){$R_2$}; \node at (6,-0.5) (b2){$-1$}; \end{tikzpicture} \caption{Sign convention for the crossings.} \label{fig:goeritz_crossings} \end{figure} Then, for any $R\in V$ the matrix $\L(\Gamma;R)$ represents the Gordon-Litherland form of $F$. In particular, the determinant of the link $L$ is given by the absolute value of $\det(\L(\Gamma;R))=\T(\Gamma)$. \subsection{Butterfly and Moth Links}\label{ssec:buttefly_moth} We are now going to recall the definition of \emph{$n$-butterfly link} that is defined by the first author in \cite{DP23} as a generalization of the \emph{butterfly link} introduced by Boyle and Issa in \cite{BI22}. \begin{defn}\label{butterfly_link} Let $(K,\rho,h)$ be a DSI knot. Take a $\rho$-invariant band $B$, containing the half-axis $h$, which attaches to $K$ at the two fixed points. Performing a band move on $K$ along $B$ produces a $2$-component link, which has a natural semi-orientation induced by the unique semi-orientation on $K$. The linking number between the components of the link depends on the number of twists of the band $B$. Then, the \emph{$n$-butterfly link} $L_b^n(K)$, is the $2$-component $2$-periodic link (i.e. the involution $\rho$ exchanges its components) obtained from such a band move on $K$, so that the linking number between its components is $n$. 
\end{defn} \begin{rem} In order to avoid confusion, we want to remark that the definition of $L_b^n(K)$ given in Definition \ref{butterfly_link} above actually coincide with the definition of $\widehat{L}_b^{-n}(K)$ given in \cite[Definition 1.9]{DP23}. We choose to use a different notation to improve readability. \end{rem} In \cite{DPF23b}, Framba and the first author associate with a DSI knot the so called \emph{moth link} which we are now going to recall (see \cite[Definition 5.2]{DPF23b} for details). \begin{defn}\label{def:moth_link} Let $(K,\rho,h)$ be a DSI knot and let $B$ be the invariant band giving the $0$-butterfly link $L_b^0(K)$, as in Definition \ref{butterfly_link}. Observe that we can undo the band move on $B$ by attaching another invariant band $B^*$ on $L_b^0(K)$. Then we define the \emph{moth link} $L_m(K)$ as the \emph{strong fusion} (see \cite{Kai92}) of $L_b^0(K)$ along the band $B^*$. \end{defn} Now, the \emph{moth polynomial} of $(K,\rho,h)$ is defined as the Kojima-Yamasaki eta-function (see \cite{KY79}) of the moth link of $K$. \begin{prop}\cite[Proposition 5.6]{DPF23b} The moth polynomial induces a group homomorphism \begin{align*} \eta_m : \ & \CT \to \Q(t) \\ & K \mapsto \eta(L_m(K))(t). \end{align*} \end{prop} \begin{prop}\cite[Proposition 5.7]{DPF23b}\label{prop:eta_conway} The moth polynomial of a DSI knot $K$ can be computed by the following formula: $$ \eta_m(K)(t)=\frac{\nabla_{L_b^0(K)}(z)}{z\nabla_K(z)}, $$ where $\nabla_L(z)$ is the Conway polynomial of a (semi)-oriented link $L$ and $z=i(2-t-t^{-1})^{1/2}$. \end{prop} \begin{rem} In the following, we will regard the moth polynomial as a group homomorphism $$ \eta_m: \CT \to \Q(z) $$ defined by the formula in Proposition \ref{prop:eta_conway}. \end{rem} \begin{lem}\label{lemma:skein_conway} Let $K$ be a DSI knot. Then for any $p,q\in\Z$ $$ \nabla_{K}(z)|\nabla_{L_b^p(K)}(z)\iff\nabla_{K}(z)|\nabla_{L_b^q(K)}(z).$$ \end{lem} \begin{proof} Let $p\in\Z$ be any integer. It is sufficient to prove that $$\nabla_{K}(z)|\nabla_{L_b^p(K)}(z)\iff\nabla_{K}(z)|\nabla_{L_b^{p+1}(K)}(z) .$$ In order to do so we can apply the skein relation for the Conway polynomial as indicated in Figure \ref{fig:skein_butterfly} to obtain $$ \nabla_{L_b^{p+1}(K)}(z)=\nabla_{L_b^p(K)}(z)+z\nabla_K(z). $$ \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.5]{images/skein_butterfly.pdf}}; \node[scale=1.7] at (1.6,1) (b4){$p$}; \node[scale=1.7] at (5.7,1) (b37){$p$}; \node[scale=1.7] at (10.1,1) (b7){$p$}; \node[scale=1.3] at (1.6,-0.5) (a){$L^{p+1}_b(K)$}; \node[scale=1.3] at (5.7,-0.5) (b){$L^{p}_b(K)$}; \node[scale=1.3] at (10.1,-0.5) (c){$K$}; \end{tikzpicture} \caption{The three terms appearing in the skein relation. The box denotes $p$ full twists.} \label{fig:skein_butterfly} \end{figure} \end{proof} \begin{cor}\label{cor:eta_determinant} Let $K$ be a DSI knot and let $\eta_m(K)(z)=f(z)/g(z)$, where $f(z),g(z)\in\Z[z]$ are coprime polynomials. Suppose that for some $p\in\Z$ we have that $\det(K)$ does not divide $\det(L_b^p(K))$. Then $\deg g(z)>0$ and $g(z)|\nabla_K(z)$. \end{cor} \begin{proof} Recall that for a link $L$ we have that $\det(L)=|\nabla_L(-2i)|$. Since $\det(K)$ does not divide $\det(L_b^p(K))$, it follows that $\nabla_K(z)$ does not divide $\nabla_{L_b^{p}(K)}(z)$. Hence, by Lemma \ref{lemma:skein_conway}, $\nabla_K(z)$ does not divide $\nabla_{L_b^{0}(K)}(z)$ either. 
The proof follows by observing that $z|\nabla_{L_b^0(K)}(z)$, since $L_b^0(K)$ is a link with more than one component. \end{proof} We are now going to use the moth polynomial to prove the main result of this section. \begin{rem} In \cite{Sak86} Sakuma introduced an equivariant concordance invariant $\eta_{(K,\rho)}$ obtained by taking the Koijima-Yamasaki eta-function of a certain link associated with $(K,\rho)$. However, thanks to the symmetries of $J_n$ and \cite[Proposition 3.4]{Sak86}, we know that Sakuma's eta-polynomial always vanishes for $J_n$, $n\in\JO$. \end{rem} \begin{lem}\label{lem:det_int} Let $n \in \JO $. Then there exists $p\in\Z$ such that $$ \frac{\det(L_b^p(J_n))}{\det(J_n)}\text{ is not an integer}. $$ \end{lem} \begin{proof} Let $F_n$ be the spanning surface for $J_n$ depicted in Figure \ref{fig:spanning_Jn}, where $n=2k+1$. \vspace{1em} \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.6]{images/spanning_surface_Jn.pdf}}; \node at (1,7) (b1){$(\sigma_1\sigma_2^{-1})^k$}; \node at (1,2.6) (b2){$(\sigma_2^{-1}\sigma_1)^k$}; \node at (11,6.3) (c1){$\sigma_1\sigma_2^{-1}=$}; \node at (11,3.2) (c2){$\sigma_2^{-1}\sigma_1=$}; \node at (4,5) (h){$h$}; \end{tikzpicture} \caption{The spanning surface $F_n$ for $J_n$} \label{fig:spanning_Jn} \end{figure} Observe that by cutting $F_n$ along the half-axis $h$, we get a spanning surface $\overline{F_n}$ for the $p$-butterfly link of $J_n$ for some $p\in\Z$. We are now going to show that \begin{equation}\label{eq:det_ineq} 2\det(J_n)<\det(L^p_b(J_n))<4\det(J_n), \end{equation} which is sufficient, since $\det(J_n)$ is odd, while $\det(L_b^p(J_n))$ is even, hence it cannot be that $$\det(L_b^p(J_n))=3\det(J_n).$$ Let $\Gamma_n$ (see Figure \ref{fig:goeritz_graph}) be the graph associated with $F_n$ (see Section \ref{sec:GL_form}). \begin{figure}[ht] \centering \begin{tikzpicture}[thick,main/.style = {draw, circle},minimum width =8mm] \node[main] at (0,0) (b){$b$}; \node[main] at (0,-2) (a){$a$}; \node[main] at (0,2) (c){$c$}; \node[main] at (2,0) (v1){$v_1$}; \node[main] at (4,0) (v2){$v_2$}; \node[main] at (7,0) (vk){$v_k$}; \node[main] at (-2,0) (w1){$w_1$}; \node[main] at (-4,0) (w2){$w_2$}; \node[main] at (-7,0) (wk){$w_k$}; \draw[-] (a) -- (b); \draw[-] (b) -- (v1); \draw[-] (v1) -- (v2); \draw[-] (v2) -- (5,0); \draw[dotted] (5,0) -- (6,0); \draw (6,0) -- (vk); \draw[-] (b) -- (w1); \draw[-] (w1) -- (w2); \draw[-] (a) -- (v1); \draw[-] (a) -- (v2); \draw[-] (a) -- (w1); \draw[-] (a) -- (w2); \draw[-] (w2) -- (-5,0); \draw[dotted] (-5,0) -- (-6,0); \draw (-6,0) -- (wk); \draw (vk) -- (c) -- (wk); \draw[-] (a) -- node[midway, below left] {$2$} (wk) ; \draw[-] (a) -- node[pos=0.7, below right] {$2$} (vk) ; \draw (a) .. controls +(14,1) and +(7,0) .. (c) node[pos=0.8, above right] {$-1$}; \end{tikzpicture} \caption{The graph associated with $F_n$.} \label{fig:goeritz_graph} \end{figure} The corresponding graph $\overline{\Gamma_n}$ for $\overline{F_n}$ is easily obtained from $\Gamma_n$ by identifying the vertices $b$ and $c$. Recall that from Theorem \ref{thm:matrix_tree} and the discussion in Section \ref{sec:GL_form}, we have \begin{equation} \label{eq:det_eq} \det(J_n)=\T(\Gamma_n)\quad\text{and}\quad\det(L_p(J_n))=\T(\overline{\Gamma_n}). \end{equation} Denote by $\Gamma_n(x)$ be the graph obtained by adding an edge $e$ with label $x$ between $b$ and $c$ to $\Gamma_n$. 
Applying Theorem \ref{thm:DCT} to the edge $e$ of $\Gamma_n(x)$, we get \begin{equation}\label{eq:gamma_nx} \T(\Gamma_n(x))=\T(\Gamma_n)+x\T(\overline{\Gamma_n}). \end{equation} Using (\ref{eq:det_eq}) and (\ref{eq:gamma_nx}), we can see that the inequalities (\ref{eq:det_ineq}) are equivalent to showing that $$ \T(\Gamma_n(-1/4))>0\quad\text{and}\quad\T(\Gamma_n(-1/2))<0. $$ Observe now that the matrix $\L(\Gamma_n(-1/4);a)$ (see Section \ref{sec:graphs}) is positive definite: multiplying both the row and the column corresponding to the vertex $c$ by $3$, we get a dominant diagonal matrix with positive diagonal entries, which has all positive eigenvalues by Gershgorin's Theorem. Hence $\T(\Gamma_n(-1/4))=\det(\L(\Gamma_n(-1/4);a))>0$. On the other hand, it is not difficult to see that $\L(\Gamma_n(-1/2);a)$ has inertia $(n,1,0)$, i.e., it is nonsingular and it has $n$ positive eigenvalues and $1$ negative eigenvalue. Removing the column and row corresponding to $c$, we get a positive definite matrix by Gershgorin's Theorem, hence $\L(\Gamma_n(-1/2);a)$ has at least $n$ positive eigenvalues. However, $\L(\Gamma_n(-1/2);a)$ has at least (and hence exactly) one negative eigenvalue: the restriction to the subspace spanned by the vertices $c,b,v_k,w_k$ is given by $$ \begin{pmatrix} 1/2&1/2&-1&-1\\ 1/2&5/2&0&0\\ -1&0&4&0\\ -1&0&0&4 \end{pmatrix} $$ which has negative determinant, and hence is not positive semidefinite. In particular, $\T(\Gamma_n(-1/2))=\det(\L(\Gamma_n(-1/2);a))<0$. \end{proof} \begin{prop}\label{prop:lin_indep} Let $\mathcal{F} \subset \JO$ be an infinite family such that if $m,n\in\mathcal{F}$ and $m\neq n$, then $m$ and $n$ are coprime. Let $J_\mathcal{F}$ be the subgroup of $\CT$ generated by $\{J_n\;|\;n\in\mathcal{F}\}$. Then $$ J^{ab}_\mathcal{F} = J_\mathcal{F}/\left[J_\mathcal{F},J_\mathcal{F}\right]\cong \Z^\infty. $$ \end{prop} \begin{proof} We prove that $\{\eta_m(J_n)(z)\;|\;n\in\mathcal{F}\}$ are $\Z$-linearly independent in $\Q(z)$. If $\operatorname{gcd} (p,q)=1$, by property (\ref{eq:roots}), the sets of roots of the Alexander polynomials of $J_p$ and $J_q$ do not intersect, hence $\Delta_{J_p}(t)$ and $\Delta_{J_q}(t)$ are coprime polynomials. This in turn implies that the Conway polynomials $\nabla_{J_p}(z)$ and $\nabla_{J_q}(z)$ are coprime. Then the linear independence follows by using Lemma \ref{lem:det_int} and Corollary \ref{cor:eta_determinant}. \end{proof} \subsection{String Links and Milnor Invariants} In \cite{DPF23} Framba and the first author used the close relation between strongly invertible knots and \emph{string links} to prove that the equivariant concordance group $\CT$ is not solvable. In particular, they considered a homomorphism $$ \widetilde{\C}\xrightarrow{\varphi\circ\pi}\C(2), $$ where $\C(2)$ is the \emph{concordance group of string links on 2 strings}, introduced in \cite{LeD88} (see \cite[Section 3]{DPF23} for details), and they proved that $\widetilde{\C}$ is not solvable by using Milnor invariants for string links (see \cite{HL98}). We now use a similar approach in order to prove that two given knots in $\widetilde{\C}$ do not commute: we determine their image in $\C(2)$ and then we compute the Milnor invariants of their commutator to show that it is nontrivial. For details, one can consult \cite{DPF23}. \begin{lem}\label{lemma:commutator} The commutator between $J_5$ and $J_7$ is not equivariantly slice. 
\end{lem} \begin{proof} Following the procedure described in \cite[Remark 4.5]{DPF23}, we determine the image of $[J_5,J_7]=J_5\widetilde{\#}J_7\widetilde{\#}J_5^{-1}\widetilde{\#}J_7^{-1}$ in $\C(2)$, depicted in Figure \ref{fig:commutator}. \begin{figure}[ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0){\includegraphics[scale=0.5]{images/commutator_5_7.pdf}}; \node at (1.2,4) (b3){$J_5$}; \node at (3.5,4) (b4){$J_7$}; \node at (5.7,4) (b37){$J_5^{-1}$}; \node at (7.8,4) (b43){$J_7^{-1}$}; \end{tikzpicture} \vspace{3mm} \raggedright \begin{itemize} \color{red} \item[\texttt{l1: }]\texttt{30 26 51 43 65 61 84 76} \color{black} \item[\texttt{l2: }] \texttt{30 34 35 4 24 4 22 27 51 47 46 7 56 53 37 7 39 42 64 69 11 67 11 56 57 61 85 88 16 90 74 71 16 81 80 76} \end{itemize} \caption{The $2$-string link representing $\varphi\circ\pi([J_5,J_7])$.} \label{fig:commutator} \end{figure} Using the computer program \texttt{stringcmp} \cite{TKS13}, we find that $\varphi\circ\pi([J_5,J_7])$ has nontrivial Milnor invariants. The first nontrivial invariant is in degree 6. In Figure \ref{fig:commutator}, we also report the input data needed to run \texttt{stringcmp}, which encodes the longitudes of the two components of the string link. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: mainthm3}] Let $\mathcal{J}$ be the subgroup of $\widetilde{\C}$ generated by $\{J_p\;|\;p\geq5\text{ prime}\}$. By Proposition \ref{prop:lin_indep}, we know that $\mathcal{J}^{ab}\cong\Z^\infty$. Moreover, it is spanned by negative amphichiral knots, therefore $\mathfrak{f}(\mathcal{J})\subset\C$ is $2$-torsion. Finally, by Lemma \ref{lemma:commutator}, we know that $\mathcal{J}$ is not abelian. \end{proof}
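The Lucas-number and determinant computations used in the last two sections are also easy to verify numerically. The following short Python script is only a sanity check of those formulas and plays no role in the proofs; the helper names are ours, and the reference determinants $5$, $45$, and $121$ are the KnotInfo values for $4_1$, $8_{18}$, and $10_{123}$.
\begin{verbatim}
# Sanity check: Lucas numbers, det(J_n) = L_{2n} - 2, and the perfect-square
# obstructions used with the equivariant Fox-Milnor condition.
from math import isqrt

def lucas(k):
    # k-th Lucas number: L_0 = 2, L_1 = 1, L_k = L_{k-1} + L_{k-2}
    a, b = 2, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

# Lucas identity (L_n)^2 = L_{2n} + (-1)^n * 2.
for n in range(1, 40):
    assert lucas(n) ** 2 == lucas(2 * n) + (-1) ** n * 2

# det(J_n) = L_{2n} - 2, cross-checked for J_2 = 4_1, J_4 = 8_18, J_5 = 10_123.
assert [lucas(2 * n) - 2 for n in (2, 4, 5)] == [5, 45, 121]

# For even n > 1 with n not divisible by 3, det(J_n) = (L_n)^2 - 4 is not a square.
for n in (2, 4, 8, 10, 14, 16, 20, 22):
    assert not is_square(lucas(2 * n) - 2)

# Cha's knots: det(K_n) = 4n^2 + 1 = (2n)^2 + 1 is never a perfect square.
for n in range(1, 10000):
    assert not is_square((2 * n) ** 2 + 1)

print("all determinant checks passed")
\end{verbatim}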
2412.09515v1
http://arxiv.org/abs/2412.09515v1
Skew Laurent Series Ring Over a Dedekind Domain
\documentclass[a4paper, 12pt]{amsart} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{enumitem} \usepackage{stmaryrd} \usepackage{hyperref} \usepackage{microtype} \usepackage{xparse} \usepackage{cite} \usepackage{scalerel} \usepackage{graphicx} \setlength{\textwidth}{400pt} \newcommand{\ov}[1]{\overline{#1}} \allowdisplaybreaks[2] \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem{question}[theorem]{Question} \numberwithin{equation}{section} \newtheorem{mainthm}{Theorem} \renewcommand{\themainthm}{\Alph{mainthm}} \newtheorem{conjecture}[mainthm]{Conjecture} \newcommand{\R}{\mathbb R} \newcommand{\N}{\mathbb N} \newcommand{\No}{\N_0} \newcommand{\Z}{\mathbb Z} \newcommand{\C}{\mathbb C} \newcommand{\Q}{\mathbb Q} \newcommand{\HH}{\mathbb H} \newcommand{\F}{\mathbb F} \newcommand{\sa}{a} \newcommand{\sac}{\overline{a}} \newcommand{\sbb}{b} \newcommand{\sbc}{\overline{b}} \newcommand{\sbwi}{\widetilde{b}^{(1)}} \newcommand{\sbwii}{\widetilde{b}^{(2)}} \newcommand{\st}{\overline{t}} \newcommand{\stc}{t} \newcommand{\stt}{\overline{\alphaaa}} \newcommand{\sttc}{\alphaaa} \newcommand{\stwi}{\alphaaaw^{(1)}} \newcommand{\stwii}{\alphaaaw^{(2)}} \newcommand{\aaa}{\textbf{a}} \newcommand{\aaaw}{\widetilde{\aaa}} \newcommand{\aaac}{\overline{\textbf{a}}} \newcommand{\aaact}{\aaac^\perp} \newcommand{\aaat}{\aaa^\perp} \newcommand{\bbb}{\textbf{b}} \newcommand{\bbbw}{\widetilde{\bbb}} \newcommand{\bbbt}{\bbb^\perp} \newcommand{\ccc}{\textbf{c}} \newcommand{\ccct}{\ccc^\perp} \newcommand{\geng}{\textbf{g}} \newcommand{\gengt}{\geng^\perp} \newcommand{\sss}{\textbf{s}} \newcommand{\ddd}{\textbf{d}} \newcommand{\eee}{\textbf{e}} \newcommand{\nnn}{\textbf{n}} \newcommand{\rrr}{\textbf{r}} \newcommand{\uuu}{\textbf{u}} \newcommand{\vvv}{\textbf{v}} \newcommand{\www}{\textbf{w}} \newcommand{\xxx}{\textbf{x}} \newcommand{\yyy}{\textbf{y}} \newcommand{\zzz}{\textbf{z}} \newcommand{\veca}{\vec{a}} \newcommand{\vecb}{\vec{b}} \newcommand{\vecc}{\vec{c}} \newcommand{\vecd}{\vec{d}} \newcommand{\vece}{\vec{e}} \newcommand{\vecu}{\vec{u}} \newcommand{\vecv}{\vec{v}} \newcommand{\vecw}{\vec{w}} \newcommand{\vecz}{\vec{z}} \newcommand{\alphaaa}{\textbf{t}} \newcommand{\alphaaaw}{\widetilde{\alphaaa}} \newcommand{\III}{\mathfrak{I}} \newcommand{\JJJ}{\mathfrak{L}} \newcommand{\AAA}{\mathfrak{A}} \newcommand{\BBB}{\mathfrak{B}} \newcommand{\TTT}{T} \newcommand{\TTTw}{\widetilde{\TTT}} \newcommand{\UUU}{U} \newcommand{\UUUw}{\widetilde{\UUU}} \newcommand{\FFFw}{\widetilde{H}} \newcommand{\MMM}{M} \newcommand{\Laurent}[3]{#1( \! 
( #2; #3 )\!)} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\ch}{char} \DeclareMathOperator{\K}{\scalerel*{\kappa}{C}} \DeclareMathOperator{\gld}{gld} \DeclareMathOperator{\glr}{glr} \DeclareMathOperator{\sr}{sr} \DeclareMathOperator{\const}{const} \ExplSyntaxOn \NewDocumentCommand{\cycle}{ O{\;} m } { ( \alec_cycle:nn { #1 } { #2 } ) } \seq_new:N \l_alec_cycle_seq \cs_new_protected:Npn \alec_cycle:nn #1 #2 { \seq_set_split:Nnn \l_alec_cycle_seq { , } { #2 } \seq_use:Nn \l_alec_cycle_seq { #1 } } \ExplSyntaxOff \title{Skew Laurent Series Ring Over a Dedekind Domain} \author{Daniel Z.\ Vitas} \address{Department of Mathematics, Faculty of Mathematics and Physics, University of Ljubljana, Slovenia \newline \indent Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia} \email{[email protected]} \thanks{\emph{Mathematics Subject Classification} (2020). 16E60, 16N60, 16P40, 19A49} \keywords{Skew Laurent series rings, noncommutative Dedekind domains, ideal class groups} \begin{document} \begin{abstract} We show that the formal skew Laurent series ring $R = \Laurent{D}{x}{\sigma}$ over a commutative Dedekind domain $D$ with an automorphism $\sigma$ is a noncommutative Dedekind domain. If $\sigma$ acts trivially on the ideal class group of $D$, then $K_0(R)$, the Grothendieck group of $R$, is isomorphic to $K_0(D)$. Furthermore, we determine the Krull dimension, the global dimension, the general linear rank, and the stable rank of $R$. \end{abstract} \maketitle \section{Introduction} Let $D$ be a commutative ring with an automorphism $\sigma\colon D \rightarrow D$ and let $R = \Laurent{D}{x}{\sigma}$ be the formal skew Laurent series ring over $D$, i.e., the ring that consists of the series $\sum_{i=k}^\infty a_i x^i$ with $k \in \Z$ and $a_i \in D$, and is subject to the rule $x a = \sigma(a) x$ for all $a \in D$. Hilbert used this construction in 1899 to provide the first known example of a centrally infinite division ring. This led several authors, including Dickson, Hahn, Schur, Mal'cev, and Neumann, to study this and similar rings. In \cite[Section 14]{Lam}, Lam gives a detailed account of the early development of the skew Laurent series ring and similar constructions. It is well known that if $D$ is noetherian (resp., an integral domain, a field), then $R$ is also (left and right) noetherian (resp., a domain, a division ring) (see \cite[Example 1.8]{Lam} and \cite{Tu}). Several authors have studied extensions of various properties; Tuganbaev shows that $R$ is right serial right artinian if and only if $D$ is \cite{Tu4}, Romaniv and Sagan show that $R$ is $\omega$-Euclidean if so is $D$ \cite{RS}, Mazurek and Ziembowski determine when $R$ satisfies an ascending chain condition on principal right ideals \cite{MZ} (while in \cite{MZ2} they also characterize generalized power series rings that are semilocal right Bézout; see also \cite{Sal1, Sal2}), Letzter and Wang determine the Goldie rank of $R$ \cite{LW}, Majidinya and Moussavi study the Baer property of $R$ \cite{Ma1, Ma2}, and Annin describes the associated primes of $R$ \cite{Ann}. However, there are very few established results in the case where $D$ is a Dedekind domain; in the commutative setting (i.e., if $\sigma = {\rm id}$), \cite[Ch 10.3, Ex 3]{Jac} shows that $R$ is also a Dedekind domain (see \cite{PO} for the generalized power series case). A similar problem in the noncommutative setting is studied in \cite[Section 7.9]{McR}. 
It is established that the skew polynomial ring $S = D[x,x^{-1}; \sigma]$, the subring of $R$ consisting of all the series with only finitely many nonzero terms, is a noncommutative Dedekind domain if $D$ is a Dedekind domain. Furthermore, there is a surjective map $$ \phi \colon K_0(D) \rightarrow K_0(S)$$ with the kernel generated by $[M^\sigma] - [ M ]$ for all finitely generated projective $D$-modules $M$. Here, $K_0(S)$ denotes the Grothendieck group of $S$, $[ M ]$ denotes the isomorphism class of $M$, and $M^\sigma$ denotes the $\sigma$-skewed module we get from $M$ (see Section \ref{sec1} and \cite[Section 12.5]{McR} for details). Also, note that the Grothendieck group of $D$ splits as $$ K_0(D) = G(D) \oplus \Z \text{,}$$ where $G(D)$ is the ideal class group of $D$ \cite[Section 35]{LR}. We say that $\sigma$ \emph{acts trivially on} $G(D)$ if $\sigma(\III)$ is isomorphic to $\III$ for any ideal $\III \lhd D$. In this paper, we will prove the following theorem. \begin{mainthm}\label{mainthm} If $D$ is a commutative Dedekind domain with an automorphism $\sigma$, then the skew Laurent series ring $R = \Laurent{D}{x}{\sigma}$ is a noncommutative Dedekind domain. Furthermore, if $\sigma$ acts trivially on $G(D)$, then $$ K_0(R) \cong K_0(D) \text{.}$$ \end{mainthm} We will actually show something more about the ideal structure of $R$, namely, we will prove that every right ideal $I_R \leq R_R$ is isomorphic to the extended ideal $\III R$ for some $\III \lhd D$ (Proposition \ref{extension lemma}); yet not every right ideal of $R$ is of this form (e.g., the ideal $(2+x)R$ for $D = \Z$). With this we also get the following theorem. \begin{mainthm}\label{mainthm2} Let $D$ be a commutative Dedekind domain with an automorphism $\sigma$ that acts trivially on $G(D)$, and let $R = \Laurent{D}{x}{\sigma}$. Then any two stably isomorphic finitely generated projective $R$-modules are isomorphic. \end{mainthm} Note that this is not in general true for all noncommutative Dedekind domains; the first Weyl algebra $A_1(k)$ is a simple noncommutative Dedekind domain, but has stably free modules that are not free \cite[Corollary 11.2.11]{McR}. In the setting of classical maximal orders in central simple algebras over number fields the situation is subtle: stable isomorphism implies isomorphism unless the algebra is a totally definite quaternion algebra. (This is a consequence of strong approximation.) In the exceptional case of totally definite quaternion algebras, on the other hand, there are only finitely many instances where stable isomorphism still implies isomorphism. These have all been classified \cite{Sm3}. Finally, we also compute a few invariants of $R$; namely, its right Krull dimension $\K(R)$, right global dimension $\gld(R)$, general linear rank $\glr(R)$, and stable rank $\sr(R)$. \begin{mainthm} Let $D$ be a commutative Dedekind domain, that is not a field, with an automorphism $\sigma$, and let $R = \Laurent{D}{x}{\sigma}$. Then $\K(R) = 1$, $\gld(R) = 1$, and $\sr(R) = 2$. If $\sigma$ acts trivially on $G(D)$, then $\glr(R)=1$. \end{mainthm} \section{Preliminaries} \label{sec1} Let $D$ be a commutative Dedekind domain, that is, an integral domain such that every nonzero ideal $\III \lhd D$ is invertible, i.e., there exists a finitely generated $D$-module $\JJJ \subseteq K$ (we call such modules \emph{fractional ideals}) such that $\III \JJJ = D$. Let $\sigma \colon D \rightarrow D$ be an automorphism of $D$. 
The skew Laurent series ring over $D$ is the set $$ R = \Laurent{D}{x}{\sigma}= \Big\{ \,\sum_{i = k}^\infty a_i x^i \,\, \Big\vert \,\, a_i \in D, \,\, k \in \Z \, \Big\}$$ with addition defined term-wise and multiplication defined by using the distributive property and the formula $$ x^i a = \sigma^i (a) x^i$$ for any $a \in D$ and $i \in \Z$. We will often write $K$ for the ring of quotients of $D$ and $Q$ for $\Laurent{K}{x}{\sigma}$ (note that this is not the ring of quotients of $R$, but it is a division ring containing $R$). For a Laurent series \begin{align} \label{right-form} f = \sum_{i=k}^\infty a_i x^i \in R \end{align} with $a_k \neq 0$ we define the \emph{constant coefficient of $f$} to be $ \const(f) = a_k$ and write $\const(f) = 0$ if $f = 0$. For a right ideal $I_R \leq R_R$ the set $$ \const(I) = \left\{ \, \const(f) \,\, \middle\vert \,\, f \in I \, \right\} $$ is an ideal of $D$; we call it \emph{the constant ideal of $I$}. The ideal $\const(I)$ plays an important role in studying the ideal $I$. \begin{remark} We will establish our first result only for right ideals. To invoke the left-right symmetry and use the same result for left ideals, we have to adjust our terminology slightly. We can write every element of $R$ as in \eqref{right-form} in the \emph{left normal form} as $$ f = \sum_{i=k}^\infty x^i b_i$$ for $b_i = \sigma^{-i}(a_i)$. It is easy to check that elements in the left normal form multiply according to the symmetric rule (with $\sigma^{-1}$ applied to the elements of $D$ instead of $\sigma$). By defining the \emph{\textup{(}left\textup{)} constant term of $f$} to be $\const_l(f) = \sigma^{-k}(a_k)$ and writing $$\const_l(J) = \left\{ \, \const_l(f) \,\, \middle\vert \,\, f \in J \, \right\}$$ for a left ideal $\prescript{}{R}{J} \leq \prescript{}{R}{R}$, we obtain a notion symmetric to $\const(I)$ defined for a right ideal $I_R \leq R_R$. In this context, applying all the steps in future proofs symmetrically from the left will yield a symmetric result for the left ideals, despite the fact that the proofs are only written for the right ideals. \end{remark} The following lemma is found in \cite[Lemma 4.2]{Tu} and will be important later on. In particular, it already shows that if $D$ is a PID, then $R$ is also a PID. \begin{lemma} \label{gen lemma} Let $I_R \leq R_R$ be a right ideal and let $$ \const(I) = a_1 D + \dots + a_n D$$ for some $a_i \in D$. Then there exist $f_1, \ldots, f_n \in R$ with $\const(f_i) = a_i$ such that $$ I = f_1 R + \dots + f_n R \text{.}$$ \end{lemma} Let us recall a few notions. Let $S$ be an arbitrary (unital) ring. We say that $S$ is a \emph{noncommutative Dedekind domain} if it is a domain, it is noetherian (i.e., both left and right ideals are finitely generated), it is hereditary (i.e., both left and right ideals are projective), and it is an Asano order (i.e., two-sided nonzero ideals are both left and right invertible); see \cite[Definition 23.5]{LR}. (The left hereditary property follows from the right hereditary property, as long as the ring $S$ is left and right noetherian \cite[Corollary 7.65]{Lam2}.) Note that for a commutative ring $S$, this notion coincides with the standard notion of a commutative Dedekind domain. Rings of this type admit factorization properties of modules similar to those of commutative Dedekind domains (see \cite[Section 5.7]{McR} or \cite[Sections 33, 35]{LR}). In particular, every finitely generated module is a direct sum of a projective module and a torsion module. 
Furthermore, every finitely generated projective module $M$ is isomorphic to \begin{align} \label{ideal property} M \simeq I \oplus S^n \end{align} for some right ideal $I_S \leq S_S$ and $n \in \N_0 = \N \cup \{ 0\}$. Noncommutative Dedekind domains also admit a cancellation property. More precisely, if $$ I \oplus S^n \simeq J \oplus S^n$$ for some right ideals $I$ and $J$ and $n \in \N$, then \begin{align} \label{canc property} I \oplus S \simeq J \oplus S \text{.} \end{align} See \cite[Corollary 11.7.14]{McR} for details. For any ring $S$, not necessarily a Dedekind domain, we say that two right $S$-modules $M_S$ and $N_S$ are \emph{stably isomorphic} if $$ M \oplus S^n \simeq N \oplus S^n$$ for some $n \in \N_0$. In general, the Grothendieck group $K_0(S)$ of any ring $S$ is a group generated by stable isomorphism classes of finitely generated projective (right) modules $M_S$ (we denote the stable isomorphism class of $M$ by $[M]$), where the addition is given by $$ [ M ] +[ N ] = [ M \oplus N ] \text{.}$$ For a noncommutative Dedekind domain $S$, we can define a homomorphism $$\psi \colon K_0(S) \rightarrow \Z \text{,}$$ mapping an element of the form $[I \oplus S^n] - [ J \oplus S^m]$ for some nonzero ideals $I$ and $J$ to $n-m$. The kernel of $\psi$ consists of elements of the form $[I]-[S]$ (this follows from \cite[Theorem 11.7.13(ii)]{McR}), where $I$ is a right ideal of $S$; we denote it by $$ G(S) = \ker \psi$$ and call it \emph{the (right) ideal class group} of $S$. Since $\Z$ is projective, the morphism $\psi$ splits and we have $$ K_0(S) \cong G(S) \oplus \Z \text{.}$$ Note that if $S$ is a commutative ring, then $G(S)$ coincides with the usual ideal class group. See \cite[Section 12.1]{McR} for details. \section{Extension Proposition} Recall that $D$ is a commutative Dedekind domain with an automorphism $\sigma \colon D \rightarrow D$ and $R = \Laurent{D}{x}{\sigma}$ is the skew Laurent series ring. This section is dedicated to proving the following proposition. \begin{proposition} \label{extension lemma} Let $I_R \leq R_R$ be a right ideal. Then $$ I \simeq \III R$$ as right $R$-modules, where $\III = \const(I) \lhd D$. \end{proposition} \begin{remark} In future, we will use the fact that $$ \III R = \Laurent{\III}{x}{\sigma} = \Big\{ \, \sum_{i = k}^\infty a_i x^i \,\, \Big\vert \,\, a_i \in \III, \,\, k \in \Z \, \Big\}$$ without any further reference. To check that this is true, note that the ideal $\III \lhd D$ is generated by two elements, say $g_1$ and $g_2$ (this holds for any ideal of $D$, since $D$ is a commutative Dedekind domain). The left-to-right inclusion is obvious, so let us take an $$ a = \sum_{i = k}^\infty a_i x^i$$ with $a_i \in \III$, and show that $a \in \III R$. Since $g_1$ and $g_2$ generate $\III$, we can write each $a_i$ as $$ a_i = g_1 a_{i1} + g_2 a_{i2}$$ for some $a_{i1}, a_{i2} \in D$. Thus, $$ a = g_1 \left( \sum_{i = k}^\infty a_{i1} x^i \right) + g_2 \left( \sum_{i = k}^\infty a_{i2} x^i \right)\text{,}$$ which is now obviously an element of $\III R$. \end{remark} From now on let $I_R \leq R_R$ be a right ideal, and write $\III$ for $\const(I)$. Since $D$ is a Dedekind domain, every ideal in $D$ is generated by two elements; let $a_1,a_2 \in D$ generate $\III$. Set $$\geng_0 = \begin{bmatrix} a_1 & a_2 \end{bmatrix} \quad \text{and} \quad \gengt_0 = \begin{bmatrix} a_2 \\ -a_1 \end{bmatrix}\text{.}$$ By Lemma \ref{gen lemma}, there are $f_1, f_2 \in I$ with $\const(f_i) = a_i$ that generate $I$. 
Without loss of generality we may assume that $$ f_i = a_i + \sum_{j=1}^\infty a_{ij} x^j$$ for some $a_{ij} \in D$. Define $$ \geng_j = \begin{bmatrix} a_{1j} & a_{2j} \end{bmatrix}$$ for every $j \in \N$ and set $\geng = \sum_{j=0}^\infty \geng_j x^j \text{.}$ Since the entries of $\geng$ generate $I$, and the entries of $\geng_0$ generate $\III$, we have the following relation on the rows $\geng_i$. \begin{lemma} \label{lemma s_i} For every $n \in \N_0$ and $$\sss_0, \sss_1, \dots, \sss_{n-1} \in \begin{bmatrix} D \\ D \end{bmatrix}$$ such that \begin{align} \label{predp0} \sum_{i=0}^k \geng_i \sigma^i ( \sss_{k-i} ) = 0 \end{align} for every $k = 0, 1, \ldots, n-1$, we have $$ \sum_{i=1}^n \geng_i \sigma^i ( \sss_{n-i} ) \in \III \text{.}$$ \end{lemma} \begin{proof} Set $ \sss = \sum_{i=0}^{n-1} \sss_{i} x^i\text{.}$ Since the entries of $\geng$ generate $I$, we have $$ \geng \sss \in I \text{.}$$ Computing this expression yields \begin{align*} \geng \sss &= \left( \sum_{i=0}^\infty \geng_i x^i \right) \left( \sum_{i=0}^{n-1} \sss_i x^i \right) \\ &= \sum_{k=0}^{n-1} \sum_{i=0}^k \geng_i \sigma^i ( \sss_{k-i} ) x^k + \sum_{k=n}^{\infty} \sum_{i=k-n+1}^k \geng_i \sigma^i ( \sss_{k-i} ) x^k \text{.} \end{align*} By assumption \eqref{predp0}, the first summand is zero, so we have that $$ \geng \sss = \sum_{i=1}^n \geng_i \sigma^i ( \sss_{n-i} ) x^n + h x^{n+1} \in I$$ for some $h = \sum_{i=0}^\infty h_i x^i$ with $h_i \in D$. By definition of the constant ideal, this means that $$\sum_{i=1}^n \geng_i \sigma^i ( \sss_{n-i} ) \in \III \text{.} \eqno \qedhere$$ \end{proof} The following lemma gives us sufficient conditions on $\geng_i$ for the ideal $I$ to be isomorphic to $\III R$. \begin{lemma} \label{lemma A_i} If there exist matrices $A_i \in M_2(D)$, with $A_0$ being the identity matrix, such that \begin{align} \label{predp} \sum_{i=0}^k \geng_i \sigma^i \left( A_{k-i} \right)\sigma^k(\gengt_0) = 0\end{align} holds for all $k \in \N_0$, then $ I \simeq \III R$. \end{lemma} \begin{proof} Recall that we write $K$ for the ring of quotients of $D$ and $Q$ for $\Laurent{K}{x}{\sigma}$. Set $$ A = \sum_{i=0}^\infty A_i x^i \in M_2(R)$$ and let $$q = 1 + \sum_{i=1}^\infty q_i x^i \in Q \text{,}$$ where $q_i \in K$ have yet to be determined. We will define $q_i$ so that we have \begin{align} \label{q} q \geng_0 = \geng A \text{.}\end{align} Since the matrix $A$ is invertible, the two entries of $\geng A$ generate $I$. Therefore \eqref{q} implies that $\III R$ is isomorphic to $I$ (via multiplication by $q$). The equation \eqref{q} is equivalent to the system \begin{align*} q_k \sigma^k (\geng_0) = \sum_{i=0}^k \geng_i \sigma^i (A_{k-i}) \end{align*} for $k \in \N$. Since these are $2$-dimensional rows and $\sigma^k (\geng_0)$ is nonzero, the existence of such $q_i \in K$ is equivalent to the condition \eqref{predp}. \end{proof} Now all that is left is to construct the matrices $A_i$ from the previous lemma. \begin{proof}[Proof of Proposition \ref{extension lemma}] We assume that $\III \neq 0$, since otherwise $I = 0$ as well and the statement is clear. We will prove the existence of matrices $A_i$ from Lemma \ref{lemma A_i} by induction. The equality \eqref{predp} for $k=0$ is trivial (since $A_0$ is the identity matrix), so assume that $n \geq 1$ and that there exist matrices $A_1$,~\ldots,~$A_{n-1}$ such that \eqref{predp} holds for every $k=0,1,\ldots, n-1$. 
Since $D$ is a Dedekind domain and $\III$ is a nonzero ideal of $D$, there is a fractional ideal $\III^{-1} \subseteq K$ (i.e., a finitely generated $D$-module) such that \begin{align} \label{inverse} \III \III^{-1} = D \text{.} \end{align} We will first show that \begin{align}\label{ideal cont} \sum_{i=1}^n \geng_i \sigma^i (A_{n-i}) \sigma^n ( \gengt_0) \sigma^n (\III^{-1}) \subseteq \III \text{.} \end{align} To prove this take a $q \in \III^{-1}$ and let $$\sss_i = A_i \sigma^i (\gengt_0) \sigma^i (q) \in \begin{bmatrix} D \\ D \end{bmatrix}$$ for $i=0,1,\ldots,n-1$. Then \eqref{predp} implies \eqref{predp0}, and by Lemma \ref{lemma s_i}, we have $$ \sum_{i=1}^n \geng_i \sigma^i ( \sss_{n-i} ) = \sum_{i=1}^n \geng_i \sigma^i (A_{n-i}) \sigma^n ( \gengt_0) \sigma^n (q) \in \III \text{.}$$ Since this holds for every $q \in \III^{-1}$, inclusion \eqref{ideal cont} follows. By multiplying inclusion \eqref{ideal cont} with the ideal $\sigma^n(\III)$, we get $$\sum_{i=1}^n \geng_i \sigma^i (A_{n-i}) \sigma^n ( \gengt_0) \sigma^n (\III^{-1} \III) \subseteq \III \sigma^n (\III) \text{.}$$ By \eqref{inverse}, we have $\sigma^n(\III^{-1} \III) = D$, which implies $$ \sum_{i=1}^n \geng_i \sigma^i (A_{n-i}) \sigma^n ( \gengt_0) \in \III \, \sigma^n ( \III ) \text{.}$$ Since the entries of $\geng_0$ and $\sigma^n (\gengt_0)$ generate $\III$ and $\sigma^n (\III)$, respectively, this means that there exists a matrix $A_n \in M_2(D)$ such that $$ -\geng_0 A_n \sigma^n ( \gengt_0) = \sum_{i=1}^n \geng_i \sigma^i (A_{n-i}) \sigma^n ( \gengt_0 ) \text{.}$$ This concludes the induction step. Applying Lemma \ref{lemma A_i} concludes the proof. \end{proof} \section{Laurent series ring is a Dedekind domain} We are now in position to establish one of the main results. \begin{theorem} \label{dedekind} Let $D$ be a commutative Dedekind domain with an automorphism $\sigma$. Then $R = \Laurent{D}{x}{\sigma}$ is a noncommutative Dedekind domain. Furthermore, the ring $R$ is simple if and only if $\sigma(\III) \neq \III$ for every nonzero proper ideal $\III \lhd D$. \end{theorem} \begin{proof} The fact that $R$ is a domain is a standard result. The ring $R$ is noetherian due to Proposition \ref{extension lemma}. To see that $R$ is hereditary, first we prove that it is flat as a left $D$-module. Indeed, direct products of flat modules are flat over a noetherian ring $D$, and $R$ can be written as a union of $$ R_k = \Big\{ \, \sum_{i = k}^\infty a_i x^i \,\, \Big\vert \,\, a_i \in D \, \Big\} \simeq \prod_{i=k}^{\infty} D$$ as left $D$-modules. In fact, $R$ is a direct limit of $R_k$ with the natural embeddings and is thus itself flat. Now take any right ideal $I_R \leq R_R$ and let $\III = \const(I)$. Proposition \ref{extension lemma} shows that $I \simeq \III R$. Since $R$ is a flat $D$-module, we have that $\III R \simeq \III \otimes_D R$. The projective property is preserved by tensoring with flat modules, hence $I$ is projective. This shows that $R$ is right hereditary; the left hereditary property follows by symmetry (or, alternatively, it follows from \cite[Corollary 7.65]{Lam2}). Now, we will prove that $R$ is an Asano order. Take a nonzero two-sided ideal $I \lhd R$. Since $I$ is (in particular) a right ideal, Proposition \ref{extension lemma} implies that there is a $q \in Q$ and an ideal $\III \lhd D$ such that $ I = q \III R$. 
Consider the left fractional ideal $J = R \III^{-1} q^{-1} R \text{.}$ Since $R I = I$, we see that \begin{align} \label{idealeq1} J I = R \text{.} \end{align} Similarly, since $I$ is also a left ideal, the left symmetric version of Proposition \ref{extension lemma} implies that there is a $p \in Q$ and an ideal $\JJJ \lhd D$ such that $I = R \JJJ p$. For a right fractional ideal $J' = R p^{-1} \JJJ^{-1} R$, we have \begin{align} \label{idealeq2} I J' = R \text{.} \end{align} Multiplying equation \eqref{idealeq1} from the right by $J'$ and using \eqref{idealeq2} yields $J = J'$. Therefore $J$ is a two-sided fractional ideal with $IJ = JI=R$. Finally, if $I \lhd R$ is a two-sided ideal of $R$, then $x I = I$, which implies that $\sigma(\III) = \III$ for the ideal $\III = \const(I) \lhd D$. So if there are no such nonzero proper ideals, then either $I = 0$ or $I = R$. Conversely, if $\III \lhd D$ is a nonzero proper ideal with $\sigma(\III) = \III$, then $I = \III R$ is a nonzero proper two-sided ideal of $R$. \end{proof} For the ring $R$, we will now determine its Krull dimension $\K(R)$, right global dimension $\gld(R)$, and stable rank $\sr(R)$. For definitions see \cite[Sections 6.2.2, 7.1.8, 11.3.4]{McR}. \begin{theorem} Let $D$ and $R$ be as in the previous theorem, and assume that $D$ is not a field. Then $\K(R) = 1$, $\gld(R) = 1$, and $\sr(R) = 2$. \end{theorem} \begin{proof} By Theorem \ref{dedekind}, $R$ is hereditary, noetherian, and prime, and thus by \cite[Corollary 6.2.8]{McR}, we have $\K(R) \leq 1$. If $\K(R) = 0$, then $R$ would be (right) artinian, hence $D$ would be an artinian Dedekind domain, and thus a field. Therefore $\K(R) = 1$. By Theorem \ref{dedekind}, $R$ is (right) hereditary, and thus $\gld(R) \leq 1$. Since $R$ is not (right) artinian, as before, we have $\gld(R) = 1$. By \cite[Theorem 11.3.7]{McR}, we have that $\sr(R) \leq \K(R) +1 = 2$. We will prove that $\sr(R) \geq 2$ by finding a two-dimensional unimodular row $\aaa$ that is not stable, i.e., $$ \aaa = \begin{bmatrix} a_1 & a_2\end{bmatrix} \in \begin{bmatrix} R & R \end{bmatrix}$$ such that $$ a_1 R + a_2 R = R \text{,}$$ but $a_1 + a_2 r$ is not invertible for any $r \in R$. To find such an $\aaa$, let $a \in D$ be a nonzero element that is not invertible. For $b = 1 + \sigma(a)$, the row $$ \aaa = \begin{bmatrix} a + x & a^2 + bx\end{bmatrix}$$ has the desired properties. Indeed, $\aaa$ is unimodular, since $$ (a+x)a - (a^2+bx) = (\sigma(a) - b) x = - x$$ is an invertible element of $R$. We will prove that $\aaa$ is not stable. For contradiction, assume that there is an $r \in R$ such that $$ (a+x) + (a^2+bx) r \in R$$ is invertible. Since $a$ is neither invertible nor zero, we must have that $r = \sum_{i=0}^\infty r_i x^i$ for some $r_i \in D$ with $r_0 \neq 0$. This implies that $$ a + a^2 r_0 \in D$$ is either invertible or zero, but this is impossible: since $a + a^2 r_0 = a(1 + a r_0)$ and $D$ is a domain, either case would force $a$ to be invertible. \end{proof} We have yet to compute the general linear rank of $R$, defined in \cite[Section 11.1.14]{McR}. In the following section we will show that $\glr(R) = 1$. \section{Computing the ideal class group} Since, by Theorem \ref{dedekind}, $R$ is a Dedekind domain, we have that $K_0 (R) \cong G(R) \oplus \Z$, where $G(R)$ denotes the (right) ideal class group of $R$. Therefore, determining $K_0(R)$ is the same as determining $G(R)$.
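For instance, if the class group $G(D)$ is trivial (as happens for $D = \Z$ or for a polynomial ring in one variable over a field), then every automorphism $\sigma$ of $D$ acts trivially on $G(D)$, and Theorem \ref{G theorem} below yields $G(R) = 0$, so that $K_0(R) \cong \Z$.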
We can verify that $G(R)$ is isomorphic to the group of stable isomorphism classes $[I]$ of (right) ideals $I_R \leq R_R$, where the addition is given by $$ [I] + [J] = [K] \quad \text{if and only if} \quad [I \oplus J] = [K \oplus R] \text{.}$$ Proposition \ref{extension lemma} shows that the natural map $\phi \colon G(D) \rightarrow G(R)$, mapping $[ \III ]$ to $[\III R]$, is surjective. The kernel of $\phi$ obviously contains all the elements of the form \begin{align}\label{ker} [ \sigma(\III) ] - [ \III ] \text{,}\end{align} since $\sigma(\III)R \simeq \III R$ via multiplication by $x^{-1}$. Note that in a commutative Dedekind domain $D$ stable isomorphism classes are exactly the same as isomorphism classes, while in a noncommutative Dedekind domain $R$ the former can be strictly larger than the latter. Throughout this section assume that $\sigma$ acts trivially on the ideal class group of $D$, i.e., $\sigma(\III)$ is isomorphic to $\III$ for any ideal $\III \lhd D$. This is equivalent to $\JJJ^{-1} \sigma(\JJJ)$ being a principal fractional ideal for any nonzero fractional ideal $\JJJ$. In this case, the elements in \eqref{ker} vanish. The following theorem, that we will prove at the end of this section, states that the kernel of $\phi$ is then trivial. \begin{theorem} \label{G theorem} Let $D$ be a commutative Dedekind domain with an automorphism $\sigma$ that acts trivially on $G(D)$, and let $R = \Laurent{D}{x}{\sigma}$. Then $\phi \colon G(D) \rightarrow G(R)$, sending $[\III]$ to $[\III R]$, is an isomorphism. In that case, $$ K_0(R) \cong K_0(D) \text{.}$$ \end{theorem} Let us also point out a corollary to this theorem, which shows that in $R$ stable isomorphism yields an isomorphism. \begin{corollary} Let $D$ and $R$ be as in Theorem \ref{G theorem}. Then any two stably isomorphic finitely generated projective $R$-modules are isomorphic. \end{corollary} \begin{proof} Take $M$ and $N$ to be stably isomorphic finitely generated projective right $R$-modules. If either $M$ or $N$ are zero, the statement trivially follows, so assume that they are both nonzero. Since $R$ is a noncommutative Dedekind domain, by \eqref{ideal property}, there are nonzero (right) ideals $I_R, J_R \leq R_R$ such that \begin{align} \label{ex formula} M \simeq I \oplus R^n \quad \text{and} \quad N \simeq J \oplus R^m \end{align} for some $n, m \in \N_0$. By computing the uniform dimension, since $N$ and $M$ are stably isomorphic, we see that $n = m$. By Proposition \ref{extension lemma}, there are ideals $\III, \JJJ \lhd D$ such that \begin{align} \label{repr formula} I \simeq \III R \quad \text{and} \quad J \simeq \JJJ R \text{.} \end{align} Since $M$ and $N$ are stably isomorphic, in particular, $\III R$ and $\JJJ R$ are also stably isomorphic, which implies that $[ \III ] - [ \JJJ ]$ lies in the kernel of $\phi$. Since $\phi$ is injective by Theorem \ref{G theorem}, we have $\III \simeq \JJJ$. The modules $M$ and $N$ are then clearly isomorphic by \eqref{ex formula} and \eqref{repr formula}. \end{proof} In particular, the above corollary states that every finitely generated stably free $R$-module is actually free. The following observation follows from \cite[Proposition 11.1.12]{McR}. \begin{corollary} Let $D$ and $R$ be as in Theorem \ref{G theorem}. Then $\glr(R) = 1$. \end{corollary} The rest of this section is dedicated to the proof of Theorem \ref{G theorem}. We will require a few auxiliary definitions and lemmas. Let $I_R \leq R_R$ be a nonzero right ideal of $R$. 
By $I^{-1}$ denote the set $$ I^{-1} = (R :_{l} I) = \{ \, q \in Q \mid q I \subseteq R \, \} \text{.}$$ For example, if we consider the ideal $I = \III R$ for some ideal $\III \lhd D$, we have $$ I^{-1} = \Big\{ \, \sum_{i=k}^{\infty} \sigma^i(q_i) x^i \,\, \Big\vert \,\, q_i \in \III^{-1} , \,\, k \in \Z \, \Big\} \text{.}$$ Let $J$ be another right ideal of $R$. Essentially by \cite[Proposition 3.1.15]{McR}, we have that ${\rm Hom} (I, J) \cong J I^{-1}$ and we can identify ${\rm Hom}(R \oplus I, R \oplus J)$ with $2 \times 2$ matrices with entries in $$\begin{bmatrix} R & I^{-1} \\ J & J I^{-1} \end{bmatrix} \text{.}$$ In particular, $${\rm End}(R \oplus I) \cong \begin{bmatrix} R & I^{-1} \\ I & I I^{-1} \end{bmatrix} \text{,}$$ where the right-hand-side ring is a unital ring as well. We will say that a row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is \emph{invertible} if there is another row $\bbb \in \begin{bmatrix} I & I I^{-1} \end{bmatrix}$ such that the matrix $$ A = \begin{bmatrix} \aaa \\ \bbb \end{bmatrix} \in \begin{bmatrix} R & I^{-1} \\ I & I I^{-1} \end{bmatrix}$$ is invertible (in this ring). \begin{lemma} \label{irrelevant lemma} Let $I$ be a nonzero right ideal of $R$. A row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is invertible if and only if there is an invertible matrix $$T \in \begin{bmatrix} R & I^{-1} \\ I & I I^{-1} \end{bmatrix}$$ such that $ \aaa T = \begin{bmatrix} 1 &0\end{bmatrix}$. \end{lemma} \begin{proof} Assume that $\aaa$ is invertible and let $A$ be the matrix from the definition before the lemma. Set $T = A^{-1}$ and note that $T$ is an invertible matrix. Since $\aaa$ is the first row of the matrix $A$, we have $ \aaa T = \begin{bmatrix} 1 &0\end{bmatrix}$ as required. Now assume that there is an invertible matrix $$T \in \begin{bmatrix} R & I^{-1} \\ I & I I^{-1} \end{bmatrix}$$ with $ \aaa T = \begin{bmatrix} 1 &0\end{bmatrix}$. Multiplying this equation by $T^{-1}$, we see that $\aaa$ is the first row of $T^{-1}$, which proves that $\aaa$ is invertible. \end{proof} We will say that a row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is \emph{unimodular} if there is a column $$ \alphaaa \in \begin{bmatrix} R \\ I \end{bmatrix}$$ with $$ \aaa \alphaaa = 1 \text{.}$$ The following lemma provides a sufficient condition for a right ideal $I_R \leq R_R$ to have the property that any stable isomorphism yields an isomorphism (see \cite[Proposition 11.1.12]{McR} for the case where $I =R$). \begin{lemma} \label{stab imp iso} Let $I$ be a nonzero right ideal of $R$ such that every unimodular row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is invertible. Then, if $R \oplus I \simeq R \oplus J$ for some other right ideal $J_R \leq R_R$, we must have $I \simeq J$. \end{lemma} \begin{proof} Assume that $R \oplus I \simeq R \oplus J$. By computing the uniform dimension, we see that $J$ is nonzero. Therefore, by the identification with $2 \times 2$ matrices mentioned before, there exist matrices $$ A \in \begin{bmatrix} R & I^{-1} \\ J & J I^{-1} \end{bmatrix} \quad \text{and} \quad B \in \begin{bmatrix} R & J^{-1} \\ I & I J^{-1} \end{bmatrix} \text{,}$$ which are inverse to each other, i.e., $AB = BA = I_{2\times 2}$. Denote by $\aaa$ the first row of $A$ and by $\alphaaa$ the first column of $B$. Since $\aaa \alphaaa = 1$, the row $\aaa$ is unimodular, and thus, by assumption, it is invertible. 
By Lemma \ref{irrelevant lemma}, there is an invertible matrix $$ T \in \begin{bmatrix} R & I^{-1} \\ I & I I^{-1} \end{bmatrix}$$ such that $ \aaa T = \begin{bmatrix} 1 &0\end{bmatrix}$. Then the matrix $A' = A T$ is a lower triangular matrix with the (lower triangular) inverse $B' = T^{-1} B$. In particular, for the bottom right entry of $A'$, say $a' \in J I^{-1}$, and the bottom right entry of $B'$, say $b' \in I J^{-1}$, we have $a' b' =1$, which implies $I \simeq J$. \end{proof} To prove Theorem \ref{G theorem} we will prove that any unimodular row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is invertible. By Proposition \ref{extension lemma}, it is enough to consider ideals of the form $I = \III R$ for some nonzero $\III \lhd D$. We have $\aaa = \sum_{i=k}^{\infty} \aaa_i x^i$ for some $$\aaa_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1})\end{bmatrix} \text{.}$$ Without loss of generality, we may assume that $k = 0$ and $\aaa_0 \neq 0$. Indeed, a row $\aaa$ is unimodular if and only if $x^{-k}\aaa$ is. To see this, let $$\alphaaa \in \begin{bmatrix} R \\ I \end{bmatrix}$$ be such that $\aaa \alphaaa = 1$. Then $$\alphaaa x^k \in \begin{bmatrix} R \\ I \end{bmatrix}$$ and we have $\left( x^{-k}\aaa \right) \left( \alphaaa x^{k} \right) = 1$, proving that $x^{-k} \aaa$ is unimodular. The reverse implication follows by symmetry. Similarly, a row $\aaa$ is invertible if and only if $x^{-k}\aaa$ is. Thus, we can really assume that $k=0$ and $\aaa_0 \neq 0$. We start with a very general definition. It might not be clear at first why this definition is needed (namely, why is the ideal $\AAA$ appearing in the definition), but it turns out to be a crucial generalization for proving the wanted theorem. \begin{definition} \label{def1} Let $n \in \N_0$, and let $\AAA \lhd D$ be a nonzero ideal. For each $i \in \N_0$, let $\aaa_i \in \begin{bmatrix} \sigma^{i}(\AAA^{-1})& \sigma^{i}(\III^{-1}\AAA)\end{bmatrix}$. We say that $\aaa = \sum_{i=0}^\infty \aaa_i x^i$ is \emph{$n$-unimodular} with respect to $\AAA$ if and only if there is a column $\alphaaa = \sum_{i=0}^{\infty} \alphaaa_i x^i$ with $$\alphaaa_i \in \begin{bmatrix} \AAA \\ \III \AAA^{-1}\end{bmatrix}$$ such that $$ \aaa \alphaaa = x^n + \sum_{i=n+1}^\infty h_i x^{i}$$ for some $h_i \in D$. \end{definition} \begin{lemma} \label{lem1} If a row $\aaa = \sum_{i=0}^{\infty} \aaa_i x^i \in \begin{bmatrix} R & I^{-1}\end{bmatrix}$ is unimodular, it is $n$-unimodular with respect to $\AAA = D$ for some $n \in \N_0$. \end{lemma} \begin{proof} Assume that $\aaa$ is unimodular, i.e., there is a vector $$\alphaaa = \sum_{i=-n}^\infty \alphaaa_i x^i \in \begin{bmatrix} R \\ I\end{bmatrix}$$ such that $\aaa \alphaaa = 1$. Let $\alphaaa' = \alphaaa x^{n} = \sum_{i=0}^\infty \alphaaa_{i-n} x^i$. Since $$\alphaaa_{i-n} \in \begin{bmatrix} D \\ \III \end{bmatrix}$$ for every $i$ and $$\aaa \alphaaa' = x^n\text{,}$$ this shows that $\aaa$ is $n$-unimodular with respect to $\AAA = D$. \end{proof} We can define a generalization of invertibility of a row in a similar way. \begin{definition} \label{def} Let $n \in \N_0$, and let $\AAA \lhd D$ be a nonzero ideal. For each $i \in \N_0$, let $\aaa_i \in \begin{bmatrix} \sigma^{i}(\AAA^{-1})& \sigma^{i}(\III^{-1}\AAA)\end{bmatrix}$. 
We say that $\aaa = \sum_{i=0}^\infty \aaa_i x^i$ is \emph{$n$-invertible} with respect to $\AAA$ if and only if there is a row $\bbb = \sum_{i=0}^\infty \bbb_i x^i$ with $\bbb_i \in \begin{bmatrix} \III\sigma^{i}(\AAA^{-1})& \III\sigma^{i}(\III^{-1}\AAA)\end{bmatrix}$ and a matrix $T = \sum_{i=0}^{\infty} T_i x^i$ with $$T_i \in \begin{bmatrix} \AAA& \sigma^{i}(\III^{-1}) \AAA \\ \III \AAA^{-1} & \III \sigma^{i}(\III^{-1})\AAA^{-1}\end{bmatrix}$$ such that $$ \begin{bmatrix} \aaa \\ \bbb \end{bmatrix} T = \sum_{i=n}^{\infty} H_i x^{i}$$ for some matrices $$H_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1}) \\ \III & \III \sigma^i (\III^{-1}) \end{bmatrix}$$ with $\det(H_n)$ generating $\III \sigma^n(\III^{-1})$. \end{definition} \begin{lemma} \label{lem2} If a row $\aaa = \sum_{i=0}^{\infty} \aaa_i x^i \in \begin{bmatrix} R & I^{-1}\end{bmatrix}$ is $n$-invertible with respect to $\AAA = D$ for some $n \in \N_0$, then it is invertible. \end{lemma} \begin{proof} Assume that $\aaa$ is $n$-invertible with respect to $\AAA = D$. Take $\bbb$, $T$ and $H_i$ to be as in the previous definition. Let $\mu = 1/\det(H_n)$, denote $M = \diag(1, \mu)$, and let $$ S = \begin{bmatrix} \aaa \\ \bbb \end{bmatrix} T x^{-n} M = H_n M + \sum_{i=1}^\infty H_{i+n} \sigma^i(M) x^i \text{.}$$ Since $$H_n M \in \begin{bmatrix} D & \III^{-1} \\ \III & \III \III^{-1} \end{bmatrix}$$ has determinant $1$, it is an invertible matrix in this ring, and thus, the matrix $S$ is invertible in $$\begin{bmatrix} R &I^{-1} \\ I & I I^{-1} \end{bmatrix} \text{.}$$ Therefore, $T' = T x^{-n} M S^{-1}$ is an inverse of the matrix $$A = \begin{bmatrix} \aaa \\ \bbb \end{bmatrix}\text{,}$$ which shows that $\aaa$ is invertible. \end{proof} In our next step we will, for a row $\aaa$, define the row $\aaaw$. We will show that if $\aaa$ is $n$-unimodular, then $\aaaw$ is $n-1$-unimodular, and that if $\aaaw$ is $n-1$-invertible, then $\aaa$ is $n$-invertible (with respect to a certain ideal). This will enable us to prove that $R$ satisfies the assumption of Lemma \ref{stab imp iso} by a simple induction on $n$. But before that, we need to decompose rows in a suitable way. For a row $\ccc = \begin{bmatrix} c^{(1)} & c^{(2)} \end{bmatrix} \in \begin{bmatrix} D & D \end{bmatrix}$, denote by $\ccct$ the orthogonal column $$ \ccct = \begin{bmatrix} c^{(2)} \\ -c^{(1)} \end{bmatrix} \text{,}$$ which satisfies $ \ccc \ccct = 0$. Let $\aaa = \sum_{i=0}^{\infty} \aaa_i x^i$ with $\aaa_i \in \begin{bmatrix} \sigma^{i}(\AAA^{-1})& \sigma^{i}(\III^{-1}\AAA)\end{bmatrix}$ for $i \in \N_0$ and $\aaa_0 \neq 0$. Then $\aaa_0 = \begin{bmatrix} s & q \end{bmatrix}$ for some $s \in \AAA^{-1}$ and $q \in \III^{-1} \AAA$. Let $$\BBB = s \AAA + q \III \AAA^{-1} \lhd D \text{.}$$ Since $s \AAA \BBB^{-1} + q \III \AAA^{-1} \BBB^{-1} = D$, there is a row $\aaac_0 \in \begin{bmatrix} \III \AAA^{-1} \BBB^{-1} & \AAA \BBB^{-1} \end{bmatrix}$ such that $$ \det \begin{bmatrix} \aaa_0 \\ \aaac_0 \end{bmatrix} = \aaa_0 \aaact_0 = 1 \text{.}$$ Fix such an $\aaac_0$. Note that the matrix \begin{align} \label{A} A = \aaact_0 \aaa_0 - \aaat_0 \aaac_0 \end{align} satisfies $\aaa_0 A = \aaa_0$ and $\aaac_0 A = \aaac_0$, and since $\aaa_0$ and $\aaac_0$ are linearly independent, this implies that $A$ is the identity matrix.
This enables us to decompose rows in a suitable manner, namely, we can write rows $\aaa_i \in \begin{bmatrix} \sigma^{i}(\AAA^{-1})& \sigma^{i}(\III^{-1}\AAA)\end{bmatrix}$ as $$ \aaa_i = \aaa_i \sigma^i (A) = \sa_i \sigma^i(\aaa_0) - \sac_i \sigma^i(\aaac_0)$$ with $\sa_i = \aaa_i \sigma^i(\aaact_0)\in \sigma^i(\BBB^{-1})$ and $\sac_i = \aaa_i \sigma^i(\aaat_0)\in \sigma^i(\III^{-1} \BBB)$. Fix a $\lambda \in K$ such that \begin{align}\label{lambda} \lambda D = \III \BBB^{-1} \sigma(\III^{-1} \BBB)\end{align} and denote $$\aaaw_i = \begin{bmatrix} \sa_i & \sac_{i+1} / \sigma^{i}(\lambda) \end{bmatrix} \in \begin{bmatrix} \sigma^i(\BBB^{-1}) & \sigma^i (\III^{-1} \BBB) \end{bmatrix} \text{.}$$ Let $\aaaw = \sum_{i=0}^\infty \aaaw_i x^i$. The next two lemmas connect the previously defined properties of a row $\aaa$ to the row $\aaaw$. \begin{lemma} \label{lemma uni} Let $n \geq 1$. If a row $\aaa$ is $n$-unimodular with respect to $\AAA$, then $\aaaw$ is $n-1$-unimodular with respect to $\BBB$. \end{lemma} \begin{proof} Assume that $\aaa$ is $n$-unimodular with respect to $\AAA$, i.e., there is a column $\alphaaa = \sum_{i=0}^{\infty} \alphaaa_i x^i$ with $$\alphaaa_i \in \begin{bmatrix} \AAA \\ \III \AAA^{-1}\end{bmatrix}$$ such that $$ \aaa \alphaaa = x^n + \sum_{i=n+1}^{\infty} h_i x^i$$ for some $h_i \in D$. This is equivalent to the system of equations \begin{align} \label{system1} \sum_{i=0}^k \aaa_i \sigma^i(\alphaaa_{k-i}) = \delta_{kn} \end{align} for $k = 0, 1, \ldots, n$. Here, $\delta_{kn} = 1$ if $k = n$, and $\delta_{kn} = 0$ otherwise. For the matrix $A$ from \eqref{A}, we have $$\alphaaa_i = A \alphaaa_i = \aaact_0 \st_i + \aaat_0 \stc_i$$ for $\st_i = \aaa_0 \alphaaa_i \in \BBB$ and $\stc_i= -\aaac_0 \alphaaa_i \in \III \BBB^{-1}$. Then, the system of equations \eqref{system1} can be written as \begin{align} \label{system2} \sum_{i=0}^k \sa_i \sigma^i(\st_{k-i}) + \sac_i \sigma^i(\stc_{k-i}) = \delta_{kn} \end{align} for $k = 0, 1, \ldots, n$. In particular, taking into account that $\sa_0 = 1$ and $\sac_0 = 0$, the $k=0$ equation of \eqref{system2} states $$ \st_0 = 0 \text{.}$$ Considering this, we can rewrite the system \eqref{system2} as \begin{align} \label{system3} \sum_{i=0}^{k-1} \sa_i \sigma^i(\st_{k-i}) + \sac_{i+1} \sigma^{i+1}(\stc_{k-i-1}) = \delta_{kn} \end{align} for $k=1,\ldots,n$. For the $\lambda$ from \eqref{lambda}, let $$ \alphaaaw_i = \begin{bmatrix} \st_{i+1} \\ \lambda \, \sigma(\stc_{i}) \end{bmatrix} \in \begin{bmatrix} \BBB \\ \III \BBB^{-1} \end{bmatrix} \text{.}$$ Then, we can write the system of equations \eqref{system3} as \begin{align*} \sum_{i=0}^{k} \aaaw_i \sigma^{i}( \alphaaaw_{k-i})= \delta_{k,n-1} \end{align*} for $k = 0, 1, \ldots, n-1$, which shows that $\aaaw$ is $n-1$-unimodular with respect to $\BBB$. \end{proof} \begin{lemma} \label{lemma inv} Let $n \geq 1$. If $\aaaw$ is $n-1$-invertible with respect to $\BBB$, then $\aaa$ is $n$-invertible with respect to $\AAA$.
\end{lemma} \begin{proof} Assume that $\aaaw$ is $n-1$-invertible with respect to $\BBB$, i.e., there is a row $\bbbw = \sum_{i=0}^\infty \bbbw_i x^i$ with $\bbbw_i \in \begin{bmatrix} \III\sigma^{i}(\BBB^{-1})& \III\sigma^{i}(\III^{-1}\BBB)\end{bmatrix}$ and a matrix $\TTTw = \sum_{i=0}^{\infty} \TTTw_i x^i$ with $$\TTTw_i \in \begin{bmatrix} \BBB& \sigma^{i}(\III^{-1}) \BBB \\ \III \BBB^{-1} & \III \sigma^{i}(\III^{-1})\BBB^{-1}\end{bmatrix}$$ such that $$ \begin{bmatrix} \aaaw \\ \bbbw \end{bmatrix} \TTTw = \sum_{i = n -1}^\infty \FFFw_i x^i$$ where $$\FFFw_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1}) \\ \III & \III \sigma^{i} (\III^{-1}) \end{bmatrix}$$ with $\det(\FFFw_{n-1})$ generating $\III \sigma^{n-1}(\III^{-1})$. This is equivalent to the system of equations \begin{align} \label{sys1} \sum_{i=0}^k \begin{bmatrix} \aaaw_i \\ \bbbw_i \end{bmatrix} \sigma^i(\TTTw_{k-i}) = \FFFw_{n-1} \delta_{k,n-1} \end{align} for $k = 0,1,\ldots,n-1$. Write $\sbwi_i$ and $\sbwii_i$ for the entries of $\bbbw_i$. For the $\lambda$ from \eqref{lambda}, let $$\sbb_i = \sbwi_i \quad \text{and} \quad \sbc_i = \sbwii_{i-1} \sigma^{i-1}(\lambda)$$ for $i \in \N_0$ with $\sbc_0 = 0$, and set $$\bbb_i = \sbb_i \sigma^i(\aaa_0) - \sbc_i \sigma^i(\aaac_0) \in \begin{bmatrix} \III\sigma^{i}(\AAA^{-1})& \III\sigma^{i}(\III^{-1}\AAA)\end{bmatrix} \text{.}$$ The row $\sum_{i=0}^\infty \bbb_i x^i$ will turn out to be the row $\bbb$ from Definition \ref{def}. We have yet to construct the matrix $\TTT$ form the definition. For this purpose, let $\mu$ be a generator for $\III \sigma(\III^{-1})$ and denote $\MMM = \diag(1, \mu)$. Write $\stwi_i$ and $\stwii_i$ for the rows of $\TTTw_i$. Let $$\stt_i = \stwi_{i-1} \sigma^{i-1}(M) \quad \text{and} \quad \sttc_i = \sigma^{-1}(\stwii_i/\lambda) \sigma^{i-1}(M)$$ for $i \in \N_0$ with $ \stt_0 = 0$, and set $$ \TTT_i = \aaact_0 \stt_i + \aaat_0 \sttc_i \in \begin{bmatrix} \AAA& \sigma^{i}(\III^{-1}) \AAA \\ \III \AAA^{-1} & \III \sigma^{i}(\III^{-1})\AAA^{-1}\end{bmatrix} \text{.}$$ For $k = 0,1, \ldots,n-1$ we can compute \begin{align} \label{sys2} \begin{split} \sum_{i=0}^{k+1} \begin{bmatrix} \aaa_i \\ \bbb_i \end{bmatrix} \sigma^i(\TTT_{k+1-i}) &= \sum_{i=0}^{k+1} \begin{bmatrix} \sa_i \sigma^i(\stt_{k+1-i}) + \sac_i \sigma^i(\sttc_{k+1-i}) \\ \sbb_i \sigma^i(\stt_{k+1-i}) + \sbc_i \sigma^i(\sttc_{k+1-i}) \end{bmatrix} \\ &= \sum_{i=0}^{k} \begin{bmatrix} \sa_i \sigma^i(\stt_{k+1-i}) + \sac_{i+1} \sigma^{i+1}(\sttc_{k-i}) \\ \sbb_i \sigma^i(\stt_{k+1-i}) + \sbc_{i+1} \sigma^{i+1}(\sttc_{k-i}) \end{bmatrix} \\ &= \sum_{i=0}^{k} \begin{bmatrix} \aaaw_i \\ \bbbw_i \end{bmatrix} \sigma^i \left( \begin{bmatrix} \stt_{k+1-i} \\ \lambda \, \sigma(\sttc_{k-i}) \ \end{bmatrix} \right) \text{,} \end{split} \end{align} where in the second line we used the fact that $\sac_0 = \sbc_0 = 0$ and $\stt_0 = 0$. Since we have $$ \begin{bmatrix} \stt_{k+1-i} \\ \lambda \, \sigma(\sttc_{k-i}) \ \end{bmatrix} = \TTTw_{k-i} \sigma^{k-i}(\MMM)\text{,}$$ the system of equations \eqref{sys2} can be written as $$\sum_{i=0}^{k+1} \begin{bmatrix} \aaa_i \\ \bbb_i \end{bmatrix} \sigma^i(\TTT_{k+1-i}) = \left( \sum_{i=0}^{k} \begin{bmatrix} \aaaw_i \\ \bbbw_i \end{bmatrix} \sigma^i(\TTTw_{k-i})\right) \sigma^k (\MMM) = \FFFw_{n-1} \sigma^k(M) \delta_{k,n-1}$$ for $k = 0, 1, \ldots, n-1$. Let $$H_n = \FFFw_{n-1} \sigma^{n-1}(M) \in \begin{bmatrix} D & \sigma^{n}(\III^{-1}) \\ \III & \III \sigma^{n} (\III^{-1}) \end{bmatrix}$$ and note that $\det (H_n)$ generates $\III \sigma^{n}(\III^{-1})$. 
Using $$ \begin{bmatrix} \aaa_0 \\ \bbb_0 \end{bmatrix} \TTT_{0} = 0 \text{,}$$ we see that $$ \begin{bmatrix} \aaa \\ \sum_{i=0}^{\infty} \bbb_i x^i \end{bmatrix} \sum_{i=0}^{\infty} \TTT_{i} x^i = H_n x^n + \sum_{i=n+1}^\infty H_i x^i$$ for some matrices $$H_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1}) \\ \III & \III \sigma^{i} (\III^{-1}) \end{bmatrix} \text{,}$$ which proves that $\aaa$ is $n$-invertible with respect to $\AAA$. \end{proof} This final lemma before the proof of the main theorem essentially shows that the ring $R$ satisfies the assumption of Lemma \ref{stab imp iso}. \begin{lemma} \label{induction} If $\aaa$ is $n$-unimodular with respect to $\AAA$, then it is $n$-invertible with respect to $\AAA$. \end{lemma} \begin{proof} We proceed by induction on $n$, where $\AAA$ varies over all non-zero ideals of $D$ and $\aaa$ over all suitable rows. First consider the $n=0$ case. Assume that $\aaa$ is $0$-unimodular with respect to $\AAA$, i.e., there is a column $\alphaaa = \sum_{i=0}^{\infty} \alphaaa_i x^i$ with $$\alphaaa_i \in \begin{bmatrix} \AAA \\ \III \AAA^{-1}\end{bmatrix}$$ such that $$ \aaa \alphaaa = 1 + \sum_{i=1}^{\infty} h_i x^i$$ for some $h_i \in D$. Take any nonzero $s \in \III$ and $b \in \AAA$. Then $s b \in \AAA$. Since $D$ is a Dedekind domain, there is an $a \in \AAA$ such that $a$ and $sb$ generate $\AAA$. Then there exist $p, q \in \AAA^{-1}$ such that $$ a p + sbq = 1 \text{.}$$ Thus, the matrix $$ \TTT_0 = \begin{bmatrix} a & b \\ -sq & p \end{bmatrix} \in \begin{bmatrix} \AAA& \III^{-1} \AAA \\ \III \AAA^{-1} & \AAA^{-1}\end{bmatrix}$$ has determinant $1$. Let $\bbb_0 \in \begin{bmatrix} \III\AAA^{-1}& \AAA \end{bmatrix}$ be a row such that $$ \alphaaa_0 = \bbbt_0 \text{.}$$ Taking $\TTT = \TTT_0$ and $\bbb = \bbb_0$, we see that $$ \begin{bmatrix} \aaa \\ \bbb \end{bmatrix} \TTT = \begin{bmatrix} \aaa_0 \\ \bbb_0 \end{bmatrix} \TTT_0 + \sum_{i=1}^\infty H_i x^i$$ for some matrices $$H_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1}) \\ \III & \III \sigma^{i} (\III^{-1}) \end{bmatrix} \text{,}$$ and since $$\det \left( \begin{bmatrix} \aaa_0 \\ \bbb_0 \end{bmatrix} \TTT_0\right) = \aaa_0 \alphaaa_0 \det ( \TTT_0) = 1 \text{,}$$ this shows that $\aaa$ is $0$-invertible with respect to $\AAA$. Now let $n \geq 1$ and assume that the lemma holds for $n-1$ (and all nonzero ideals $\AAA$ and suitable rows $\aaa$). Let $\aaa$ be $n$-unimodular with respect to $\AAA$. By Lemma \ref{lemma uni}, $\aaaw$ is $n-1$-unimodular with respect to $\BBB$. By induction hypothesis, $\aaaw$ is $n-1$-invertible with respect to $\BBB$, and finally, by Lemma \ref{lemma inv}, $\aaa$ is $n$-invertible with respect to $\AAA$. This concludes the induction step. \end{proof} We are now in position to prove the main theorem. \begin{proof}[Proof of Theorem \ref{G theorem}] Take any nonzero ideal $\III \lhd D$ and denote $I = \III R$. First, we will show that we can use Lemma \ref{stab imp iso}, i.e., we will show that any unimodular row $\aaa \in \begin{bmatrix} R & I^{-1} \end{bmatrix}$ is invertible. As pointed out in a paragraph before Definition \ref{def1}, we can assume that $\aaa = \sum_{i=0}^{\infty} \aaa_i x^i$ for some $\aaa_i \in \begin{bmatrix} D & \sigma^{i}(\III^{-1})\end{bmatrix}$ and $\aaa_0 \neq 0$. By Lemma \ref{lem1}, $\aaa$ is $n$-unimodular with respect to $\AAA = D$ for some $n \in \N_0$, by Lemma \ref{induction}, $\aaa$ is then $n$-invertible with respect to $\AAA = D$, and finally, by Lemma \ref{lem2}, $\aaa$ is invertible. 
This shows that the assumption of Lemma \ref{stab imp iso} is satisfied. Take any other ideal $\JJJ \lhd D$ and denote $J = \JJJ R$. Assume that $I$ and $J$ are stably isomorphic in $R$. Since $R$ is a noncommutative Dedekind domain, the cancellation property \eqref{canc property} implies that $R \oplus I \simeq R \oplus J$. Lemma \ref{stab imp iso} shows that $I \simeq J$, i.e., there is a $q = \sum_{i=k}^\infty q_i x^i \in Q$ with $q_k \neq 0$ such that $$ q I = J \text{.}$$ In particular, this implies that $q_k \sigma^k(\III) = \JJJ$. Therefore, we have $\sigma^k(\III) \simeq \JJJ$, but since $\sigma$ acts trivially on $G(D)$, this shows that $\III \simeq \JJJ$. Therefore, the map $\phi \colon G(D) \rightarrow G(R)$ is injective, and thus, by Proposition \ref{extension lemma}, an isomorphism. \end{proof} \section*{Acknowledgment} The author would like to thank his supervisor Daniel Smertnig for his guidance throughout this work. The author was supported by the Slovenian Research and Innovation Agency (ARIS) program P1-0288. \bibliographystyle{alphaabbr} \bibliography{references} \end{document}
2412.09695v2
http://arxiv.org/abs/2412.09695v2
Codes in algebras of direct products of groups and associated quantum codes
\documentclass{article} \usepackage{geometry} \geometry{hmargin={2.5cm,2.5cm},height=24cm} \usepackage[main=british, spanish]{babel} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \setlength{\parskip}{5pt} \usepackage{tikz-cd} \usetikzlibrary{babel} \usepackage{tabularray} \UseTblrLibrary{diagbox} \usepackage{multirow} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \allowdisplaybreaks \usepackage[shortlabels]{enumitem} \usepackage[hidelinks]{hyperref} \usepackage[capitalise, nameinlink]{cleveref} \crefname{subsection}{Subsection}{Subsections} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\thesubsection}{\thesection.\arabic{subsection}} \newtheoremstyle{estiloteorema} {\topsep} {\topsep} {\em} {} {\bfseries} {.} { } {\thmname{#1}\thmnumber{ #2}\thmnote{ (#3)}} \theoremstyle{estiloteorema} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{problem}{Problem}[section] \newtheorem{conjecture}[problem]{Conjecture} \theoremstyle{definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newlist{propenum}{enumerate}{1} \setlist[propenum]{label=\roman*), ref=\theproposition(\roman*)} \crefalias{propenumi}{proposition} \newlist{corolenum}{enumerate}{1} \setlist[corolenum]{label=\roman*), ref=\thecorollary(\roman*)} \crefalias{corolenumi}{corollary} \usepackage{colortbl} \usepackage[colorinlistoftodos,textwidth=20mm]{todonotes} \setlength{\marginparwidth}{2cm} \newcommand{\todomiguel}[1]{\todo[color=purple!10]{\color{black} Miguel: \small{{#1}}}} \newcommand{\todovictor}[1]{\todo[color=blue!40]{\color{black} Víctor: \small{{#1}}}} \newcommand{\todoxaro}[1]{\todo[color=purple!20]{\color{black} Xaro: \small{{#1}}}} \newcommand{\F}{\mathbb{F}} \newcommand{\Fp}{\mathbb{F}_p} \newcommand{\Fpr}[1]{\mathbb{F}_{p^{#1}}} \newcommand{\Fq}{\mathbb{F}_q} \newcommand{\Fqk}{\mathbb{F}_{q^k}} \newcommand{\Fqn}{\mathbb{F}_{q^n}} \newcommand{\Fqr}[1]{\mathbb{F}_{q^{#1}}} \DeclareMathOperator{\lcm}{lcm} \DeclareMathOperator{\gal}{Gal} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\Image}{Im} \DeclareMathOperator{\card}{card} \newcommand{\rsp}{\operatorname{rowsp}} \newcommand{\rk}{\operatorname{rk}} \newcommand{\mx}{\mathrm{x}} \def\cU{ {\cal U} } \def\cV{ {\cal V} } \def\cC{ {\cal C} } \newcommand{\cG}{\mathcal{G}} \newcommand{\Red}[1]{\textcolor{red}{#1}} \newcommand{\then}{\Longrightarrow} \newcommand{\sii}{\Longleftrightarrow} \usepackage[sorting=nty, maxbibnames=6, giveninits=true, sortcites]{biblatex} \addbibresource{ref.bib} \DeclareNameAlias{default}{family-given} \usepackage{csquotes} \begin{document} \title{Codes in algebras of direct products of groups and associated quantum codes \thanks{This study forms part of the Quantum Communication programme and was supported by MCIN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat Valenciana COMCUANTICA/008. This work is also partially supported by the Ministerio de Ciencia e Innovaci\'on project PID2022-142159OB-I00. The second author is supported by the Generalitat Valenciana project CIAICO/2022/167. The first and second authors are supported by the I+D+i projects VIGROB23-287 and UADIF23-132 of Universitat d'Alacant. 
The last author is supported by Ayuda a Primeros Proyectos de Investigación (PAID-06-23) from Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV).} } \author{\renewcommand\thefootnote{\arabic{footnote}} Miguel Sales-Cabrera\footnotemark[1], \renewcommand\thefootnote{\arabic{footnote}} Xaro Soler-Escriv\`{a}\footnotemark[1], \renewcommand\thefootnote{\arabic{footnote}} V\'ictor Sotomayor\footnotemark[2]} \footnotetext[1]{Departament de Matem\`{a}tiques, Universitat d'Alacant. Ap.\ Correus 99, E-03080, Alacant (Spain).} \footnotetext[2]{Instituto Universitario de Matem\'atica Pura y Aplicada (IUMPA-UPV), Universitat Polit\`ecnica de Val\`encia, Camino de Vera s/n, 46022 Valencia (Spain). \\ E-mail adresses: \texttt{[email protected], [email protected], [email protected]}} {\small \date{\today}} \maketitle \begin{abstract} In this paper we obtain the Wedderburn-Artin decomposition of a semisimple group algebra associated to a direct product of finite groups. We also provide formulae for the number of all possible group codes, and their dimensions, that can be constructed in a group algebra. As particular cases, we present the complete algebraic description of the group algebra of any direct product of groups whose direct factors are cyclic, dihedral, or generalised quaternion groups. Finally, in the specific case of semisimple dihedral group algebras, we give a method to build quantum error-correcting codes, based on the CSS construction. \bigskip \noindent \textbf{Keywords:} Linear codes, Group algebras, Direct products of groups, CSS quantum codes \noindent \textbf{MSC 2020:} 94B05, 16S34, 16D25, 81P73 \end{abstract} \section{Introduction} Let $\Fq$ be a finite field of $q$ elements, where $q$ is a prime power. Given a finite group $G$ of order $n$, a (left) group code, or simply a $G$-code, of length $n$ is a linear code of $\Fq^n$ which is the image of a (left) ideal of the group algebra $\Fq[G]$ via an isomorphism which maps $G$ to the standard basis of $\Fq^n$. In the setting of linear coding theory, a considerable amount of codes, as generalised Reed-Solomon codes, Reed-Muller codes, and many other optimal codes, have been shown to be group codes (see for instance \cite{bernal2009intrinsical, borello2021dihedral, landrock1992, mcloughlin2008} and the references therein). The interest in the study of group codes is clearly linked to their powerful algebraic structure, which allows valuable information about the parameters of the code to be obtained, as well as to provide efficient coding and decoding algorithms (cf. \cite{martinez2023}). Most of the research that has been done on group codes deals with the case where the group $G$ is abelian. However, in recent years the study of the non-abelian case has been gaining an increasing interest (\cite{borello2021dihedral,Gao2021,Gao2020}, to name a few). There are several reasons for this. First, non-commutativity could possibly improve the security of code-based encryption protocols, which are one of the few mathematical techniques that enables the construction of public-key cryptosystems that are secure against an adversary equipped with a quantum computer (cf. \cite{Sendrier}). Second, a non-abelian group algebra has a richer algebraic structure, so we can construct linear codes that cannot be obtained using abelian groups (see \cite{gonzalez2019group} for instance). Let us assume hereafter that the size of the field and the group order are relatively prime. 
Hence the group algebra $\mathbb{F}_q[G]$ is semisimple by a celebrated theorem due to Maschke. Moreover, as a consequence of the Wedderburn-Artin Theorem and Wedderburn's Little Theorem, $\Fq[G]$ is isomorphic to a direct sum of some matrix rings over finite extensions of $\Fq$ (cf. \cite{doerk1992}). Thus the ideals of $\mathbb{F}_q[G]$ are principal and can be seen as sums of matrix ideals over finite fields. It turns out that all possible $G$-codes can be determined whenever the Wedderburn-Artin decomposition of the group algebra $\mathbb{F}_q[G]$ and some specific ideal generators are known. In particular, these generators can be realised as matrices over finite fields. In the last decade, this has been done for certain non-abelian groups, such as (generalised) dihedral, (generalised) quaternion, metacyclic, symmetric and alternating groups (cf. \cite{Brochero2015, Gao2020, Gao2021, Brochero2022, Ricardo2023}). Besides, the authors of \cite{Vedenev2019_2} considered the direct product of two dihedral groups $D_{n}\times D_{m}$ of orders $2n$ and $2m$, respectively, such that $m$ divides $q-1$. The aim of this paper is to take a step further in the aforementioned research line. More concretely, we provide the Wedderburn-Artin decomposition of the semisimple algebra of any direct product of finite groups $G_1\times \cdots \times G_r$ based on the structure of the corresponding group algebras of the direct factors $G_1, \ldots, G_r$. This information will be utilised to compute the associated group codes, the number of such codes, and their dimensions. It is worthwhile to mention that, in contrast to \cite{Vedenev2019_2}, our study does not depend on either the number of direct factors or the dihedral structure, so that some of their results appear as particular cases. As a direct consequence, we get the full description of semisimple group algebras of direct products of dihedral, cyclic, or generalised quaternion groups. We illustrate with examples that some linear codes that achieve the best-known minimum distance for their dimension can be seen as group codes that arise from semisimple algebras of direct products of groups. Finally, in the specific case of a dihedral group algebra, we will show a construction of quantum error-correcting codes, via the CSS method, by using the matrix ring decomposition of the corresponding group algebra. The paper is organised as follows. In Section \ref{sec_prel} we collect some preliminary results concerning tensor products, the Wedderburn-Artin decomposition of a semisimple group algebra, group codes, and quantum CSS codes. Later we present our main contributions. More specifically, the general structure of a semisimple algebra associated to a direct product of finite groups is determined in Section \ref{sec_direct}. In particular we compute the number of group codes that can be constructed, as well as their dimension. As a consequence, we obtain the full description of the group algebra associated to the direct product of either a dihedral group and a cyclic group, two dihedral groups, or a dihedral group and a generalised quaternion group. Finally, in Section \ref{sec_examples}, we show applications of the theoretical results stated in this paper. To be more precise, we construct several group codes arising from algebras of direct products that involve dihedral groups, and we also develop a method to obtain quantum CSS codes from the dihedral group algebra.
\section{Preliminaries} \label{sec_prel} All groups considered in this paper are supposed to be finite. We denote by $\Fq$ the finite field of $q$ elements, where $q$ is a power of a prime $p$, whereas $\mathbb{K}$ will denote an arbitrary field. Given an arbitrary ring $R$, we write $M_{n}(R)$ for the ring of $(n\times n)$-matrices over $R$. If $X$ is a non-empty subset of $R$, then $\langle X\rangle_R$ is the ideal of $R$ generated by $X$ (we will simply write $\langle X\rangle$ when the ambient ring $R$ is clear enough). Moreover, all algebras and rings considered in this paper are associative and unitary. The remaining unexplained notation and terminology are standard in the context of coding theory and group theory. \subsection{On tensor products} Below, several properties about tensor products that will be of importance for the development of this paper are listed. They can be found in many books covering tensor products of algebras, such as \cite{Bourbaki1973, Adkins1992, doerk1992}. Recall that given a commutative ring $R$, an $R$-algebra $A$ is a ring which is also an $R$-module. In particular, in what follows, all $R$-algebras are always unitary rings. Moreover, $A$ will be a commutative $R$-algebra if $A$ is a commutative ring. An homomorphism of $R$-algebras is an $R$-linear ring homomorphism. Given two $R$-algebras $A$ and $B$, we will denote by $A\otimes_R B$ its tensor product over $R$. Defining the product on elements of the form $a\otimes_R b$ by $(a_1\otimes_R b_1)(a_2\otimes_R b_2)=a_1a_2\otimes_R b_1b_2$, it turns out that $A\otimes_R B$ is an $R$-algebra too. Moreover, the tensor product of $R$-algebras is associative and it is commutative whenever $A$, $B$ and $R$ are commutative. Next we list some other properties on tensor product of $R$-algebras that will be needed in the sequel. As usual, we will denote by $\oplus$ the direct sum of $R$-modules. \color{black} \begin{proposition}\label{tpproperties} Let $R$ be a commutative ring. \begin{propenum} \item Assume that $M_i, N_i$ are $R$-algebras such that $M_i \cong N_i$ as $R$-algebras, for $i=1,2$. Then $M_1 \otimes_R M_2 \cong N_1 \otimes_R N_2$ as $R$-algebras. \item\label{tpproperties:directsum} Given two $R$-algebras $M$ and $N$ such that $M \cong \bigoplus_{i \in I} M_i$ and $N \cong \bigoplus_{j \in J} N_j$ as $R$-algebras, for some families of $R$-algebras $\{M_i\ |\ i\in I\}$ and $\{N_j\ |\ j\in J\}$, then the following isomorphism of $R$-algebras holds: $$M \otimes_R N \cong \bigoplus_{(i, j) \in I \times J} \left( M_i \otimes_R N_j\right).$$ \item\label{tpproperties:function} Given $f: M_1 \longrightarrow N_1$ and $g: M_2 \longrightarrow N_2$ two $R$-algebra homomorphisms, the map \[ \begin{array}{cccc} f \otimes_R g: & M_1 \otimes_R M_2 & \longrightarrow & N_1 \otimes_R N_2\\ & a \otimes_R b & \longmapsto &f(a) \otimes_R g(b) \end{array} \] defines an homomorphism of $R$-algebras. As a consequence, we obtain the canonical homomorphism of $R$-modules \begin{equation}\label{Hom_lambda} \lambda: \Hom_R(M_1, N_1) \otimes_R \Hom_R(M_2, N_2) \longrightarrow \Hom_R(M_1 \otimes_R M_2, N_1 \otimes_R N_2). \end{equation} \end{propenum} \end{proposition} In the particular case when $R$ is a field, the homomorphism $\lambda$ given in \eqref{Hom_lambda} can be used to produce the following well-known result concerning the tensor product of matrices over a field. Since we were unable to find it stated in this way, we present it and include a proof, although it is elementary. 
Given a $\mathbb{K}$-algebra $A$, we will denote by $A^{\oplus n}$ the direct sum of $n$ copies of $A$. The set of all $A$-linear endomorphisms of $A^{\oplus n}$ will be denoted as $\End_{A}\left(A^{\oplus n}\right)$. Notice that $\End_{A}\left(A^{\oplus n}\right)$ is, in particular, an $A$-algebra. \begin{lemma}\label{tpmatrices} Let $A_1, A_2$ be two commutative algebras over a field $\mathbb{K}$. Then the following isomorphism of $\mathbb{K}$-algebras holds: \[ M_{n_1}\left(A_1\right) \otimes_{\mathbb{K}} M_{n_2}\left(A_2\right) \cong M_{n_1n_2} \left( A_1 \otimes_{\mathbb{K}} A_2 \right). \] \end{lemma} \begin{proof} Since $\mathbb{K}$ can be embedded into $A_1$ and $A_2$, it turns out that $\End_{A_i}\left(A_i^{\oplus n_i}\right)$ can be realised as a $\mathbb{K}$-subalgebra of $\End_{\mathbb{K}}\left(A_i^{\oplus n_i}\right)$, for $i=1, 2$. Therefore, according to \cite[Lemma 1, p.~214]{Bourbaki2023}, the homomorphism $\lambda$ defined in \eqref{Hom_lambda} induces the isomorphism of $\mathbb{K}$-vector spaces \[ \varphi: \End_{A_1}\left(A_1^{\oplus n_1}\right) \otimes_{\mathbb{K}} \End_{A_2}\left(A_2^{\oplus n_2}\right) \longrightarrow \End_{A_1 \otimes A_2}\left(A_1^{\oplus n_1} \otimes_{\mathbb{K}} A_2^{\oplus n_2}\right). \] Let us see that $\varphi$ is a ring homomorphism too. Let $f_1, g_1 \in \End_{A_1}\left(A_1^{\oplus n_1}\right)$ and $f_2, g_2 \in \End_{A_2}\left(A_2^{\oplus n_2}\right)$. For any $a \in A_1^{\oplus n_1}$ and $b \in A_2^{\oplus n_2}$, applying \cref{tpproperties:function}, one gets \[\arraycolsep=2pt \def\arraystretch{1.5} \begin{array}{rcccl} \left((f_1 \otimes_{\mathbb{K}} f_2)\circ (g_1 \otimes_{\mathbb{K}} g_2)\right)(a \otimes_{\mathbb{K}} b) & = & (f_1 \otimes_{\mathbb{K}} f_2)(g_1(a) \otimes_{\mathbb{K}} g_2(b)) & = & \\ & = & f_1(g_1(a)) \otimes_{\mathbb{K}} f_2(g_2(b)) & = & \\ & = & (f_1 \circ g_1)(a) \otimes_{\mathbb{K}} (f_2 \circ g_2)(b) & = &((f_1 \circ g_1) \otimes_{\mathbb{K}} (f_2 \circ g_2))(a \otimes_{\mathbb{K}} b), \end{array} \] so $\varphi$ is in fact a $\mathbb{K}$-algebra isomorphism. Now, taking into account that $\End_{A_i}\left(A_i^{\oplus n_i}\right) \cong M_{n_i}\left(A_i\right)$ for $i=1, 2$ (see \cite[Chapter 4, Corollary 3.9, p.~219]{Adkins1992}), and that $A_1^{\oplus n_1} \otimes_{\mathbb{K}} A_2^{\oplus n_2} \cong \left(A_1 \otimes_{\mathbb{K}} A_2\right)^{\oplus n_1n_2}$ (\cref{tpproperties:directsum}), the result follows. \end{proof} \subsection{On the decomposition of a group algebra}\label{sec:decomposition} \label{group-algebra-decomposition} Given a group $G$, the set of all formal $\mathbb{F}_q$-linear combinations of elements of $G$, i.e. \[ \Fq[G]=\left\{\displaystyle\sum_{g\in G} \alpha_g g\ |\ \alpha_g\in \Fq\ \right\}, \] is an $\Fq$-vector space with basis the elements of $G$. Moreover, by considering the multiplication \[ \left(\sum_{g\in G}\alpha_g g \right)\cdot \left(\sum_{g\in G}\beta_g g\right)=\sum_{g\in G}\left(\sum_{h\in G}\alpha_h\beta_{h^{-1}g}\right) g, \] we obtain an $\Fq$-algebra which is called the \emph{group algebra} of $G$ over $\Fq$. In this paper we will always deal with group algebras $\Fq[G]$ that are semisimple, that is, they can be realised as a direct sum of simple $\Fq[G]$-modules. Recall that Maschke's theorem states that $\Fq[G]$ is a semisimple $\Fq$-algebra if and only if the characteristic of the field does not divide the order of $G$ (cf. \cite{doerk1992,Bourbaki2023}). 
This is the reason why, from now on, the order of the considered groups $G$ will never be divided by the characteristic of the field $\Fq$. Moreover, when we talk about ideals of $\Fq[G]$ we always assume that they are left ideals. Semisimple $\Fq$-algebras have many important properties. For instance, assuming that $\Fq[G]$ is semisimple, every ideal of $\Fq[G]$ is always a direct summand. As a consequence, every ideal of the algebra must be principal. Moreover, the algebra can be realised as a direct sum of matrix algebras. This is the well-known Wedderburn-Artin theorem, which is stated below for finite group algebras (see \cite[Theorem 4.4, p.~112]{doerk1992}). \begin{theorem}[Wedderburn-Artin decomposition for finite group algebras]\label{Wedderburn} Let $G$ be a finite group such that $\Fq[G]$ is a semisimple group algebra. Then $\Fq[G]$ is isomorphic, as $\Fq$-algebra, to the direct sum of some matrix rings over suitable extensions of $\Fq$. Specifically, one has: \[ \Fq[G] \cong \bigoplus_{i=1}^s M_{n_i} \left( \Fqr{r_i} \right) \] satisfying $|G| = \sum_{i=1}^{s} n_i^2 r_i$. \end{theorem} For some semisimple group algebras over finite fields, the Wedderburn-Artin decomposition is known. Below we list some of them: \begin{itemize} \item Let $C_n$ be the cyclic group of order $n$. Assume that the polynomial $\mx^n-1\in\Fq[\mx]$ can be factorised as $\mx^n-1 = \prod_{j=1}^r f_j$, where $f_j$ is irreducible over $\Fq[\mx]$. Then the Wedderburn-Artin decomposition of $\Fq[C_n]$ can be obtained by applying the Chinese Remainder Theorem: \begin{equation}\label{eq:descomp_ciclic} \Fq[C_n] \cong \dfrac{\Fq[\mx]}{\langle\mx^n-1\rangle} \cong \bigoplus_{i=1}^{r} \dfrac{\Fq[\mx]}{\langle f_i\rangle} \cong \bigoplus_{i=1}^{r} \Fqr{\deg{f_i}}. \end{equation} \end{itemize} The generalisation of the previous result for abelian groups is computed in \cite{Perlis1950}, which is the so-called Perlis-Walker Theorem. \begin{itemize} \item If $G$ is an abelian group of order $n$, then the Wedderburn-Artin decomposition of $\Fq[G]$ is as follows: \begin{equation}\label{eq:descomp_abelia} \Fq[G] \cong \bigoplus_{d | n} (\mathbb{F}_{q^{t_d}})^{\oplus a_d}, \end{equation} where $t_d=|\Fq(\alpha_d) : \Fq|$, with $\alpha_d$ a primitive $d$-th root of unity, $a_d = \frac{n_d}{t_d}$, and $n_d$ the number of elements of order $d$ in $G$. \end{itemize} In \cite[Theorem 3.1]{Brochero2015}, the Wedderburn-Artin decomposition of dihedral group algebras is given. For every non-zero polynomial $g\in \Fq[\mx]$, we denote by $g^*$ the {\em reciprocal} polynomial of $g$, i.e., $g^*(\mx) =\mx^{deg(g)}g(\mx^{-1})$. The polynomial $g$ is said to be {\em auto-reciprocal} if $g=g^*$. In this case, $g$ always has even degree \cite[Remark 3.2]{Brochero2015}. Consequently $\mx^n-1$ can be factorised over $\Fq[\mx]$ into irreducible monic polynomials as \[\mx^n-1 = f_1f_2 \cdots f_r f_{r+1}f_{r+1}^* \cdots f_{r+s}f_{r+s}^*, \] where $f_1 = \mx-1$, $f_2=\mx+1$ if $n$ is even, and $f_j=f_{j}^*$ for $1\leqslant j\leqslant r$. In this way, $r$ is the number of auto-reciprocal factors in the factorisation and $2s$ the number of factors that are not auto-reciprocal. \begin{itemize} \item Let $D_n=\langle x, y \, | \, x^{n} =y^2= 1, y^{-1}xy = x^{-1} \rangle$ be the dihedral group of order $2n$. Set $\zeta(n) = 2$ if $n$ is even and $\zeta(n) = 1$ otherwise. 
Then \begin{equation}\label{eq:descomp_diedric} \Fq[D_n] \cong \bigoplus_{i=1}^{r+s}A_i , \mbox{ \ \ where } \quad A_i= \begin{cases} \Fq \oplus \Fq & \text{if } 1 \leqslant i \leqslant \zeta(n) \\[1mm] M_2 \left(\Fqr{\deg(f_i)/2} \right) & \text{if } \zeta(n) +1\leqslant i \leqslant r \\[2mm] M_2 \left(\Fqr{\deg(f_i)} \right) & \text{if } r+1 \leqslant i \leqslant r + s \end{cases}. \end{equation} \end{itemize} Pursuing this line of work, in \cite[Theorems 3.1 and 3.6]{Gao2021}, we find the Wedderburn-Artin decomposition for generalised quaternion group algebras. In this case, we need to consider both the factorisation into irreducible monic polynomials over $\Fq[\mx]$ of $\mx^n-1$ given in the dihedral case, and also the factorisation into irreducible monic polynomials of the polynomial $\mx^n+1 = g_1g_2 \cdots g_t g_{t+1}g_{t+1}^* \cdots g_{t+k}g_{t+k}^*$, where $g_1 = \mx+1$ if $n$ is odd. \begin{itemize} \item Let $Q_{n}$ be the generalised quaternion group of order $4n$, with $n\geq 1$, which admits the presentation $Q_{n} = \langle x, y \, | \, x^{2n} = 1, y^2 = x^n, y^{-1}xy = x^{-1} \rangle$. Set $\mu(n) = 0$ if $n$ is even and $\mu(n) = 1$ otherwise. Then \begin{equation}\label{eq:descomp_quaterni} \Fq[Q_{n}] \cong \bigoplus_{i=1}^{r+s}A_i\oplus \bigoplus_{i=1}^{t+k}B_i, \end{equation} where every $A_i$ is given as in the dihedral case (see (\ref{eq:descomp_diedric})) and \[ B_i= \begin{cases} \Fq \oplus \Fq & \text{if } 1\leq i \leqslant \mu(n) \\[1mm] M_2\left(\Fqr{\deg(g_i)/2}\right) & \text{if } \mu(n) + 1\leqslant i \leqslant t \\[2mm] M_2\left(\Fqr{\deg(g_i)}\right) & \text{if } t+1 \leqslant i \leqslant t + k \end{cases} . \] \end{itemize} It is worth noting that when $n$ is an even integer and we set $n=2t$, one has a criterion to decide when the group algebras $\Fq[D_{n}]$ and $\Fq[Q_{t}]$ are isomorphic or not. This is done in \cite{Flaviana2009}, where it is shown that $\Fq[D_{n}]$ and $\Fq[Q_{t}]$ are isomorphic $\Fq$-algebras if and only if $2|t$ or $q\equiv 1 \mod{4}$ (regardless of whether they are semisimple or not). There are other groups for which the Wedderburn-Artin decomposition over finite fields is also known. For instance, in \cite{Gao2020} and \cite{Brochero2022}, the authors compute the Wedderburn-Artin decomposition for generalised dihedral group algebras and some metacyclic group algebras respectively. Besides, by adapting known results about $\mathbb{Q}[S_n]$ and $\mathbb{Q}[A_n]$, the Wedderburn-Artin decompositions of $\Fq[S_n]$ and $\Fq[A_n]$ are obtained in \cite{Ricardo2023}. \subsection{Group Codes} In coding theory, a linear code $\mathcal{C}$ of $\Fq^n$ is said to be \emph{cyclic} if it satisfies that a word $(c_1, c_2, \dots, c_n) \in \cC$ if and only if $(c_2,\dots, c_n, c_1)\in \cC$. Cyclic codes have nice properties, and efficient decoding algorithms have been developed for them. The key fact is that a cyclic code can be seen as an ideal, generated by a monic polynomial dividing $\mx^n-1$, in the quotient ring $\Fq[\mx]/\langle \mx^n-1\rangle$, which is a ring of principal ideals. The notion of group code is a natural extension of the one of cyclic code, so that a cyclic code is a group code when the associated group is cyclic. Set $G=\{g_1,\dots , g_n\}$. 
Following \cite{gonzalez2019group, bernal2009intrinsical}, we say that a linear code $\cC\subseteq \Fq^n$ is a (left) {\em $G$-code} if there exists a bijection $\theta: \{1,\dots, n\} \longrightarrow G$ such that the set \[ \left\{\sum_{i=1}^n a_i \theta(i)\ |\ (a_1, \dots, a_n) \in \cC \right\} \] is a (left) ideal of the group algebra $\Fq[G]$. In this way, a linear code $\cC$ over $\Fq$ will be a \emph{group code} if there exists a finite group $G$ such that $\cC$ is a $G$-code. In practice, we usually identify $\cC$ with its corresponding ideal of $\Fq[G]$. When the group $G$ is abelian (resp. non-abelian) we say that $\cC$ is an abelian (resp. non-abelian) group code. Not all linear codes can be realised as group codes; in fact, in \cite{bernal2009intrinsical} the reader can find a criterion to decide when a linear code is a group code. Moreover, notice that a given linear code $\cC$ can be seen as a group code over two different groups: for instance $\cC = \{000000,111111\}\subseteq \mathbb{F}_2^6$ is a $G$-code for any group $G$ of order $6$. When $\Fq[G]$ is semisimple, the corresponding group codes are always principal ideals. Moreover, we will use hereafter that, by virtue of the Wedderburn-Artin decomposition, we can always see such a group code as a sum of principal ideals of matrix rings over finite extensions of $\Fq$. \subsection{Quantum CSS codes} If $x=(x_1,x_2,...,x_n)$ and $y=(y_1,y_2,...,y_n)$ are two elements in $\mathbb{F}_q^n$, then their inner product is defined as $x\cdot y=\sum_{i=1}^n x_i y_i$. In this way, given a linear code $\mathcal{C}$ of $\Fq^n$, the {\em dual code} of $\mathcal{C}$ is $\mathcal{C}^{\perp}=\{y\in\mathbb{F}_q^n \; : \; x\cdot y =0, \;\forall \, x\in\mathcal{C}\}$. We collect below some basic facts concerning quantum group codes, and we refer readers interested in additional details to the book \cite{Lidar2013}. Let $\mathbb{C}$ be the complex field and $V_n=\mathbb{C}^q\otimes \cdots \otimes \mathbb{C}^q=(\mathbb{C}^q)^{\otimes n}$ be the Hilbert space of dimension $q^n$. A \emph{quantum error-correcting code} (QECC) can be seen as a subspace of $V_n$. An important subclass of quantum codes is that of the so-called {\em quantum CSS codes}, whose name is due to Calderbank, Shor and Steane. These authors exhibited in \cite{CS, Steane} the first construction of (binary) quantum codes in the literature (the so-called CSS construction), which was later generalised to non-binary alphabets (cf. \cite{CRSS, KKKS}). It utilises a pair of classical linear codes to address the problem of correcting phase and flip quantum errors. Specifically, in this paper we will use the following version of the general construction. Given two linear codes $\mathcal{C}_1, \mathcal{C}_2\subseteq \mathbb{F}_q^n$ of dimensions $k_1$ and $k_2$ respectively, and such that $\mathcal{C}_2^{\perp}\subseteq \mathcal{C}_1$, it is possible to construct a QECC from them, denoted $\text{CSS}(\mathcal{C}_1, \mathcal{C}_2)$, which will be a subspace of $V_n$ of dimension $q^{k_1+k_2-n}$. Moreover, the minimum distance of $\text{CSS}(\mathcal{C}_1, \mathcal{C}_2)$ is $d$ whenever all errors acting on at most $d-1$ of the $n$ subsystems (tensor components) can be detected or act trivially on the code. We say then that $\text{CSS}(\mathcal{C}_1, \mathcal{C}_2)$ is a $[[n,k_1+k_2-n,d]]_q$ quantum code.
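To illustrate the construction just described on a concrete instance, one can take $\mathcal{C}_1=\mathcal{C}_2$ to be the binary $[7,4,3]$ Hamming code, which contains its dual code (the $[7,3,4]$ simplex code); the resulting code $\operatorname{CSS}(\mathcal{C}_1, \mathcal{C}_2)$ is the well-known $[[7,1,3]]_2$ Steane code. The short Python sketch below is included purely as an illustration of this recipe (the particular generator and parity-check matrices, as well as the auxiliary routine \texttt{rank\_gf2}, are ad hoc choices and are not part of the constructions developed later in this paper): it verifies the containment $\mathcal{C}_2^{\perp}\subseteq \mathcal{C}_1$ by checking that the rows of the parity-check matrix lie in the row space of the generator matrix, and then reads off the parameters $[[n,k_1+k_2-n]]_2$.
\begin{verbatim}
import numpy as np

# Generator matrix of the binary [7,4,3] Hamming code in standard form [I | A].
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]], dtype=int)

# Parity-check matrix H = [A^T | I]; its rows generate the dual code C^perp.
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]], dtype=int)

def rank_gf2(M):
    """Rank of a 0/1 matrix, via Gaussian elimination over GF(2)."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2   # clear column c in the other rows
        rank += 1
    return rank

# CSS condition with C1 = C2 = C: C^perp <= C, i.e. stacking the rows of H
# on top of those of G must not increase the rank.
n = G.shape[1]
k1 = k2 = rank_gf2(G)
assert rank_gf2(np.vstack([G, H])) == k1, "C2^perp is not contained in C1"
print("CSS parameters: [[%d, %d]]_2" % (n, k1 + k2 - n))   # [[7, 1]]_2
\end{verbatim}
The same check carries over to the $q$-ary setting, with the Gaussian elimination performed over $\Fq$ instead of modulo $2$; Theorem \ref{quantum_CSS} below records the resulting parameters, including the minimum distance.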
\begin{theorem} \label{quantum_CSS} If $\mathcal{C}_1, \mathcal{C}_2\subseteq \mathbb{F}_q^n$ are linear codes of dimensions $k_1$ and $k_2$, respectively, such that $\mathcal{C}_2^{\perp}\subseteq\mathcal{C}_1$, then there exists a quantum error-correcting code $\mathcal{Q}=\operatorname{CSS}(\mathcal{C}_1, \mathcal{C}_2)$ with parameters $[[n,k_1+k_2-n,d_{\mathcal{Q}}]]_q$, where $d_{\mathcal{Q}}$ is the minimum of the weights of the codewords $c$ lying in $(\mathcal{C}_1\smallsetminus\mathcal{C}_2^{\perp})\cup (\mathcal{C}_2\smallsetminus\mathcal{C}_1^{\perp})$. \end{theorem} \section{Group codes from direct products of groups} \label{sec_direct} In this section we focus on group codes that arise in a group algebra of type $\Fq[G_1\times \cdots \times G_r] $, when the group algebra corresponding to each factor $G_i$ is semisimple. To this end, starting from the Wedderburn-Artin decomposition of each $\Fq[G_i]$, we analyse the specific decomposition of the group algebra of the direct product, also as a direct sum of matrix rings over finite fields (Theorem \ref{Wedderburntp}). Then we apply this result to some specific direct products of groups involving dihedral groups. Subsection \ref{group_codes_from_group_algebras} is devoted to analysing the structure of the ideals of a group algebra when the Wedderburn-Artin decomposition is known, in order to provide the number of group codes that can be constructed, as well as their dimension. In the last part of this section we collect some examples of group codes that can be obtained by using the previous techniques. \subsection{The group algebra of a direct product of groups} The aim of this section is to obtain the Wedderburn-Artin decomposition of the group algebra corresponding to a direct product of groups having a semisimple finite group algebra. Specifically, given two finite groups $G$ and $H$ such that both $\Fq[G]$ and $\Fq[H]$ can be realised as a direct sum of matrix rings over finite fields (Theorem \ref{Wedderburn}), we are going to see how the tensor product of $\Fq$-algebras can be used in order to obtain the corresponding decomposition of the group algebra $\Fq[G\times H]$ as a direct sum of matrix rings too. First of all, the map $(g,h)\mapsto g\otimes h$, for $g\in G$ and $h\in H$, is a homomorphism from $G\times H$ to the group of units of $\Fq[G] \otimes_{\Fq} \Fq[H]$, which can be extended by linearity to an $\Fq$-algebra isomorphism (see \cite[Lemma 3.4, p.~25]{Passman1977}): \begin{equation}\label{isom_direct_tens} \Fq[G \times H ] \cong \Fq[G] \otimes_{\Fq} \Fq[H] . \end{equation} Next, we delve deeper into the structure of $ \Fq[G] \otimes_{\Fq} \Fq[H]$. For this, we need to give an explicit description of the tensor product of two finite fields as a direct sum of fields. Although we believe this must be a known result, we have not been able to find a proof of it anywhere, so we include it here. Recall that, given an extension of finite fields $\Fqr{t}/\Fq$, its Galois group is a cyclic group of order $t$, namely, $\gal(\Fqr{t}/\Fq) = \langle \sigma \rangle$, where $\sigma(x) = x^{q}$, for all $x\in \Fqr{t}$. \begin{proposition}\label{tpfields} Let $\Fqr{n}$ and $\Fqr{m}$ be two finite fields, and denote $d = \gcd(n, m)$ and $\ell = \lcm(n, m)$. Then, the tensor product $\Fqr{n} \otimes_{\Fq} \Fqr{m}$ is isomorphic to the direct sum of $d$ copies of the field $\Fqr{\ell}$, that is, \begin{equation}\label{eqtpfields} \Fqr{n} \otimes_{\Fq} \Fqr{m}\cong \left(\Fqr{\ell}\right)^{\oplus d}.
\end{equation} \end{proposition} \begin{proof} Let $\alpha, \beta$ such that $\Fqr{n} = \Fq(\alpha)$ and $\Fqr{m} = \Fq(\beta)$ and take into account the following field extension diagram: \begin{center} \begin{tikzcd} & {\Fqr{\ell} = \Fq(\alpha, \beta)} \arrow[rd, "d_1", no head] \arrow[dd, no head] \arrow[ld, "d_2"', no head] & \\ \Fqr{m} = \Fq(\beta) \arrow[rd, "d_1"', no head] \arrow[rdd, "{[\Fqr{m}:\Fq] = m}"', no head, bend right] & & \Fqr{n} = \Fq(\alpha) \arrow[ld, "d_2", no head] \arrow[ldd, "{[\Fqr{n}:\Fq] = n}", no head, bend left] \\ & \Fqr{d} \arrow[d, "d", no head] & \\ & \Fq & \end{tikzcd} \end{center} Notice that $\ell d=nm$ and there exist integers $d_1,d_2$ such that $\ell=d_1 n=d_2 m$, $n=d_2d$, $m=d_1d$. If we consider the Galois group $\gal(\Fqr{\ell} / \Fq) = \langle \sigma \rangle$ of the extension $\Fqr{\ell} / \Fq$, then $\gal(\Fqr{\ell} / \Fqr{n}) = \langle \sigma^n\rangle$. Let $p(\mx) \in \Fq[\mx]$ be the minimal polynomial of $\beta$ over $\Fq$. We are going to consider the action of $\langle \sigma^n\rangle$ on the set of roots of $p(\mx)$, which is $R_p = \left\{ \beta, \beta^{q}, \beta^{q^2}, \dots, \beta^{q^{m-1}}\right\}\subseteq \Fqr{m}$. Since the order of $\sigma^n$ is $d_1$, its action on $R_p$ will provide $d$ orbits of $d_1$ elements each. To see this, first observe that, $\langle \sigma^m \rangle\leq \Stab_{\langle\sigma\rangle}(\omega)$, for any root $\omega$ of $p(\mx)$. Thus, taking $k_1, k_2 \in \mathbb{Z}$ such that $d=nk_1+mk_2$, one has \[ \omega^{q^d} = \sigma^d(\omega) = \sigma^{nk_1+mk_2}(\omega) = \sigma^{nk_1}\left(\sigma^{mk_2}(\omega)\right) = \sigma^{nk_1}(\omega)\in \mathcal{O}_{\langle \sigma^n \rangle}(\omega). \] Therefore, we obtain $d_1$ different elements in this orbit and, as a result, we conclude that \[ \left\{ \omega^{q^{rd}}: 0 \leqslant r \leqslant d_1-1 \right\} = \mathcal{O}_{\langle \sigma^n \rangle}(\omega)= \left\{ \sigma^{rn}(\omega): 0 \leqslant r \leqslant d_1-1 \right\}. \] Let $\omega_k = \beta^{q^{k-1}} \in \Fqr{\ell}$ for $k=1, \dots, d$. Then $R_p=\bigcup_{k=1}^d \mathcal{O}_{\langle \sigma^n \rangle}(\omega_k)$ and we can write $p(\mx) = \prod_{k=1}^d p_k(\mx)$, where every $p_k(\mx) \in \Fqr{n}[\mx]$ is irreducible over $\Fqr{n}$, has degree $d_1$ and roots $R_{p_k}=\mathcal{O}_{\langle \sigma^n \rangle}(\omega_k)$. Now, consider the following map \[\arraycolsep=2pt \begin{array}{rccl} \nu: & \Fqr{n} \otimes_{\Fq} \Fqr{m} & \longrightarrow & \left(\Fqr{\ell}\right)^{\oplus d}\\[1mm] & \alpha^i \otimes_{\Fq} \beta^j & \longmapsto & \alpha^i \big( \omega_1^j, \dots, \omega_d^j \big)=\big( \alpha^i \omega_1^j, \dots, \alpha^i \omega_d^j \big). \end{array} \] Since $\left\{ \alpha^i \otimes \beta^j: 0 \leqslant i \leqslant n-1, 0 \leqslant j \leqslant m-1 \right\}$ is an $\Fq$-basis of $\Fqr{n} \otimes_{\Fq} \Fqr{m}$, extending by $\Fq$-linearity, one has that $\nu$ defines an $\Fq$-vector space homomorphism. Moreover, it can be easily checked that $\nu$ respects multiplication. Thus, $\nu$ is an $\Fq$-algebra homomorphism. We will show that it is indeed an isomorphism, and the proof will be complete. Let $u = \sum_{i, j} \lambda_{ij} (\alpha^i \otimes \beta^j) \in \ker(\nu)$, where $\lambda_{ij} \in \Fq$. Then \begin{equation}\label{eqdemotensor} \nu(u) = \sum_{i, j} \lambda_{ij} \alpha^i \big( \omega_1^j, \dots, \omega_d^j \big) = (0,\dots, 0). \end{equation} Consider the polynomial $P_\alpha(\mx) = \sum_{i,j} \lambda_{ij} \alpha^i \mx^j \in \Fqr{n}[\mx]$, which satisfies $\deg(P_\alpha(\mx)) \leqslant m-1$. 
By \eqref{eqdemotensor}, we obtain that $P_\alpha(\omega_k) = 0$, for all $k \in \{1, \dots, d\}$. Therefore, every $p_k(\mx)$ must divide $P_\alpha(\mx)$, and so must their product $\prod_{k=1}^d p_k(\mx) = p(\mx)$. If $P_\alpha(\mx)$ were not the zero polynomial, then the following contradiction would be reached: $m = \deg(p(\mx)) \leqslant \deg(P_\alpha(\mx)) \leqslant m-1$. Thus, $P_\alpha(\mx)$ must be the zero polynomial and then $\sum_{i=0}^{n-1} \lambda_{ij} \alpha^i = 0$, for all $j \in \{0, \dots, m-1\}$. Since $\{ \alpha^i : 0 \leqslant i \leqslant n-1\}$ is an $\Fq$-basis of $\Fqr{n}$, it can only happen that $\lambda_{ij} = 0$, for all $i \in \{0, \dots, n-1\}$ and $j \in \{0, \dots, m-1\}$. This means that $\ker(\nu) = \{0\}$, that is, $\nu$ is injective. In addition, $\dim_{\Fq}(\Fqr{n} \otimes_{\Fq} \Fqr{m}) = \dim_{\Fq}(\Fqr{n})\dim_{\Fq}(\Fqr{m}) = nm=\ell d=\dim_{\Fq}\left(\left(\Fqr{\ell}\right)^{\oplus d}\right)$. Consequently, $\nu$ is an isomorphism of $\Fq$-algebras. \end{proof} We are now in a position to state the main outcome of this section. \begin{theorem}\label{Wedderburntp} Let $\Fq[G]$ and $\Fq[H]$ be two group algebras with the following Wedderburn-Artin decompositions: \begin{align*} \Fq[G] \overset{\scriptscriptstyle \psi_1}{\cong} & \bigoplus_{i=1}^{s_G} M_{n_i} \left( \Fqr{r_i} \right) \\ \Fq[H] \overset{\scriptscriptstyle \psi_2}{\cong} & \bigoplus_{j=1}^{s_H} M_{m_j} \big( \Fqr{t_j} \big). \end{align*} Then, $\Fq[G \times H]$ can be decomposed as: \[ \Fq[G \times H] \cong \bigoplus_{i=1}^{s_G} \bigoplus_{j=1}^{s_H} \left( M_{n_im_j} \left( \Fqr{\lcm(r_i, t_j)} \right) \right)^{\oplus \gcd(r_i, t_j)} . \] \end{theorem} \begin{proof} If we apply the isomorphism provided in \eqref{isom_direct_tens}, together with \cref{tpfields,tpproperties,tpmatrices}, then we have the following chain of $\Fq$-algebra isomorphisms: \begin{center} \begin{tblr}{colspec={rcl}, cells={mode=dmath}} \Fq[G \times H] & \overset{\scriptscriptstyle \eqref{isom_direct_tens}}{\cong} & \Fq[G] \otimes_{\Fq} \Fq[H]\\ & \overset{\scriptscriptstyle \psi_1 \otimes \psi_2}{\cong} & \left( \bigoplus_{i=1}^{s_G} M_{n_i} (\Fqr{r_i}) \right) \otimes_{\Fq} \left(\bigoplus_{j=1}^{s_H} M_{m_j} ( \Fqr{t_j}) \right)\\ &\overset{\scriptscriptstyle \text{Prop.} \ref{tpproperties}}{\cong} & \bigoplus_{i=1}^{s_G} \bigoplus_{j=1}^{s_H} \left( M_{n_i} \left( \Fqr{r_i} \right) \otimes_{\Fq} M_{m_j} ( \Fqr{t_j})\right)\\ & \overset{\scriptscriptstyle \text{Lem.} \ref{tpmatrices}}{\cong} & \bigoplus_{i=1}^{s_G} \bigoplus_{j=1}^{s_H} M_{n_im_j} \left( \Fqr{r_i} \otimes_{\Fq} \Fqr{t_j}\right)\\ & \overset{\scriptscriptstyle \eqref{eqtpfields}}{\cong} & \bigoplus_{i=1}^{s_G} \bigoplus_{j=1}^{s_H} M_{n_im_j} \left((\Fqr{\lcm(r_i, t_j)} )^{\oplus \gcd(r_i, t_j) } \right)\\ & \cong & \bigoplus_{i=1}^{s_G} \bigoplus_{j=1}^{s_H} \left(M_{n_im_j} \left( \Fqr{\lcm(r_i, t_j)} \right)\right)^{\oplus \gcd(r_i, t_j)} . \end{tblr} \end{center} \end{proof} \begin{remark} The above result tells us that, if the algebras $\Fq[G]$ and $\Fq[H]$ can be decomposed as direct sums of matrix algebras over finite fields, then so can $\Fq[G\times H]$. In fact, after reindexing the terms, we get $\Fq[G\times H]\cong \bigoplus_{i=1}^{s_{(G\times H)}} M_{\ell_i} (\Fqr{s_i})$, for certain positive integers $s_{(G\times H)}, \ell_i, s_i$.
Thus, given a direct product of finite groups $G_1\times \cdots\times G_r$ such that each algebra $\Fq[G_i]$ can be decomposed as a direct sum of matrix algebras, the recursive application of Theorem \ref{Wedderburntp} yields the corresponding decomposition of the group algebra $\Fq[G_1\times \cdots \times G_r]$. \end{remark} As a consequence of \cref{Wedderburntp} and the specific group algebra decompositions listed in Section \ref{sec:decomposition}, we are able to decompose the group algebra of any direct product of groups whose factors are cyclic groups, dihedral groups or generalised quaternion groups. In short, what we obtain, for this type of groups, is an expression of the group algebra as a direct sum of rings of matrices over finite fields. As an example of the application of this technique, below we present the corresponding decomposition for the group algebras of the direct product of a dihedral group and a cyclic group, the direct product of two dihedral groups or the direct product of a dihedral group and a generalised quaternion group. Let us first introduce the following elementary lemma. \begin{lemma}\label{lem:gcd} Let $m$, $n$ be two positive integers such that $m$ is even. Denote $\alpha=\gcd(m,n)$ and $\beta=\lcm(m,n)$. Then \begin{enumerate}[$(i)$] \item If $2\alpha \mid m$, then $\gcd(\frac{m}{2},n)=\alpha$ and $\lcm(\frac{m}{2},n)=\frac{\beta}{2}$. \item If $2\alpha\nmid m$, then $\gcd(\frac{m}{2},n)=\frac{\alpha}{2}$ and $\lcm(\frac{m}{2},n)=\beta$. \item If $n$ is also an even number, then $\gcd(\frac{m}{2}, \frac{n}{2})=\frac{\alpha}{2}$ and $\lcm(\frac{m}{2},\frac{n}{2})=\frac{\beta}{2}$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ If $2\alpha \mid m$, then $\alpha\mid \frac{m}{2}$, since $m$ is even. Therefore $\alpha$ divides $\gcd(\frac{m}{2},n)$, which also divides $\alpha$. Thus, we obtain the first equality. For the second one, we notice that \[ \lcm\left(\frac{m}{2},n\right) \gcd\left(\frac{m}{2},n\right)=\frac{mn}{2} \then \lcm\left(\frac{m}{2},n\right)=\frac{mn}{2\alpha}=\frac{\beta}{2}. \] $(ii)$ If $2\alpha \nmid m$, then $\alpha \nmid \frac{m}{2}$, since $m$ is even. Moreover, notice that $\alpha$ must be also an even number in this case. Therefore $\frac{\alpha}{2}$ must divide $ \frac{m}{2}$ and we conclude that $\frac{\alpha}{2}$ is a divisor of $\gcd\left(\frac{m}{2},n\right)$. In turn, notice that $\gcd\left(\frac{m}{2},n\right)$ is a proper divisor of $\alpha$. Thus, necessarily, $\frac{\alpha}{2}=\gcd\left(\frac{m}{2},n\right)$. Finally, we observe that \[ \lcm\left(\frac{m}{2},n\right) \gcd\left(\frac{m}{2},n\right)=\frac{mn}{2} \then \lcm\left(\frac{m}{2},n\right)=\frac{mn}{\alpha}=\beta. \] $(iii)$ This follows from well-known properties of gcd and lcm. \end{proof} \begin{notation} \label{notation} Let us denote by $C_a, D_n, Q_m$ the cyclic group of order $a$, the dihedral group of order $2n$, and the generalised quaternion group of order $4m$, respectively. Set $\zeta(n) = 2$ if $n$ is even and $\zeta(n) = 1$ otherwise; and set $\mu(m) = 0$ if $m$ is even and $\mu(m) = 1$ otherwise. In addition, for each of the previous groups, we consider the following factorisation into irreducible monic polynomials (recall that $f^*$ denotes the reciprocal polynomial of $f$): \vspace*{-3mm} \begin{itemize} \setlength{\itemsep}{-0.5mm} \item $\mx^a-1 = p_1p_2 \cdots p_b$; \item $\mx^n-1 = f_1f_2 \cdots f_r f_{r+1}f_{r+1}^* \cdots f_{r+s}f_{r+s}^*$, where $f_1 = \mx-1$ and $f_2=\mx+1$ if $n$ is even. 
\item $\mx^m-1 = g_1g_2 \cdots g_t g_{t+1}g_{t+1}\cdots g_{t+u}g_{t+u}^*$, where $g_1 = \mx-1$, and $g_2=\mx+1$ if $m$ is even. \item $\mx^m+1 = h_1h_2 \cdots h_l h_{l+1}h_{l+1}\cdots h_{l+k}h_{l+k}^*$, where $h_1=\mx+1$ if $m$ is odd. \end{itemize} \end{notation} \begin{corollary} With the Notation \ref{notation}, the following Wedderburn-Artin decompositions hold. \begin{corolenum} \item \label{corolCxD} Denote $\ell_{ij} = \lcm(\deg(f_i), \deg(p_j))$ and $a_{ij} = \gcd(\deg(f_i), \deg(p_j))$ for $1\leqslant i\leqslant r+s$ and $1\leqslant j\leqslant b$. Then $$\Fq[D_n \times C_a] \cong \displaystyle\bigoplus_{i=1}^{r+s} \bigoplus_{j=1}^{b} \left( A_{ij}^{\oplus d_{ij}} \right),$$ where \[ A_{ij} = \begin{cases} \Fqr{\deg(p_j)} \oplus \Fqr{\deg(p_j)} & \text{if } 1 \leqslant i \leqslant \zeta(n) \\[1mm] M_2 \left(\Fqr{\ell_{ij}/2} \right) & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } 2a_{ij} \, | \, \deg(f_i) \\[2mm] M_2 \left(\Fqr{\ell_{ij}} \right) & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } 2a_{ij} \nmid \deg(f_i) \\[2mm] M_2 \left(\Fqr{\ell_{ij}} \right) & \text{if } r+1 \leqslant i \leqslant r + s \end{cases} \] and \[ d_{ij} = \begin{cases} \frac{a_{ij}}{2} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } 2 a_{ij} \nmid \deg(f_i) \\[1mm] a_{ij} & \text{otherwise} \end{cases} \] \item \label{corolDxD} Denote $\ell_{ij} = \lcm(\deg(f_i), \deg(g_j))$ and $a_{ij} = \gcd(\deg(f_i), \deg(g_j))$ for $1\leqslant i\leqslant r+s$ and $1\leqslant j\leqslant t+u$. Then \[ \Fq[D_n \times D_m] \cong \bigoplus_{i=1}^{r+s} \bigoplus_{j=1}^{t+u} \left( A_{ij}^{\oplus d_{ij}} \right), \] where \[ A_{ij} = \begin{cases} \left(\Fq\right)^{\oplus 4} & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } 1 \leqslant j \leqslant \zeta(m) \\[2mm] \multirow{2}{*}{$M_2 \left(\Fqr{\ell_{ij}/2} \right)$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } 1 \leqslant j \leqslant \zeta(m) \\ & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } \zeta(m) +1\leqslant j \leqslant t \\[2mm] \multirow{2}{*}{$M_2 \left(\Fqr{\ell_{ij}} \right)$} & \text{if } r + 1\leqslant i \leqslant r + s \text{ and } 1 \leqslant j \leqslant \zeta(m) \\ & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } t +1\leqslant j \leqslant t + u \\[2mm] \multirow{3}{*}{$M_4 \left(\Fqr{\ell_{ij}/2} \right)$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } \zeta(m) +1\leqslant j \leqslant t \\ & \text{if }\zeta(n) +1\leqslant i \leqslant r, \ t+1\leqslant j\leqslant t+u \text{ and } 2a_{ij} \, | \, \deg(f_i)\\ & \text{if } r+1\leqslant i\leqslant r+s, \ \zeta(m) +1\leqslant j \leqslant t \text{ and } 2a_{ij} \, | \, \deg(g_j)\\[2mm] M_4 \left(\Fqr{\ell_{ij}} \right) & \text{ otherwise } \end{cases}, \] and \[ d_{ij} = \begin{cases} \multirow{2}{*}{$2a_{ij}$} & \text{ if } 1\leqslant i\leqslant \zeta(n) \text{ and } \zeta(m)+1\leqslant j\leqslant t+u\\ & \text{ if } \zeta(n)+1\leqslant i\leqslant r+s \text{ and } 1\leqslant j\leqslant \zeta(m)\\[2mm] \multirow{3}{*}{$\dfrac{a_{ij}}{2}$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } \zeta(m) +1\leqslant j \leqslant t \\ & \text{if }\zeta(n) +1\leqslant i \leqslant r, \ t+1\leqslant j\leqslant t+u \text{ and } 2a_{ij} \, \nmid \, \deg(f_i)\\ & \text{if } r+1\leqslant i\leqslant r+s, \ \zeta(m) +1\leqslant j \leqslant t \text{ and } 2a_{ij} \, \nmid \, \deg(g_j)\\[2mm] a_{ij} & \text{otherwise} \end{cases} \] \item Denote $\ell_{ij} = \lcm(\deg(f_i), \deg(h_j))$ and $a_{ij} = \gcd(\deg(f_i), 
\deg(h_j))$ for $1\leqslant i\leqslant r+s$ and $1\leqslant j\leqslant l+k$. Then \[ \Fq[D_n \times Q_m] \cong \Fq[D_n \times D_m] \oplus \bigoplus_{i=1}^{r+s} \bigoplus_{j=1}^{l+k} \left( B_{ij}^{\oplus d_{ij}} \right), \] where \[ B_{ij} = \begin{cases} \left(\Fq\right)^{\oplus 4} & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } 1 \leqslant j \leqslant \mu(m) \\[2mm] \multirow{2}{*}{$M_2 \left(\Fqr{\ell_{ij}/2} \right)$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } 1 \leqslant j \leqslant \mu(m) \\ & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } \mu(m) +1\leqslant j \leqslant l \\[2mm] \multirow{2}{*}{$M_2 \left(\Fqr{\ell_{ij}} \right)$} & \text{if } r + 1\leqslant i \leqslant r + s \text{ and } 1 \leqslant j \leqslant \mu(m) \\ & \text{if } 1 \leqslant i \leqslant \zeta(n) \text{ and } l +1\leqslant j \leqslant l + k \\[2mm] \multirow{3}{*}{$M_4 \left(\Fqr{\ell_{ij}/2} \right)$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } \mu(m) +1\leqslant j \leqslant l \\ & \text{if }\zeta(n) +1\leqslant i \leqslant r, \ l+1\leqslant j\leqslant l+k \text{ and } 2a_{ij} \, | \, \deg(f_i)\\ & \text{if } r+1\leqslant i\leqslant r+s, \ \mu(m) +1\leqslant j \leqslant l \text{ and } 2a_{ij} \, | \, \deg(h_j) \\[2mm] M_4 \left(\Fqr{\ell_{ij}} \right) & \text{ otherwise } \end{cases} \] and \[ d_{ij} = \begin{cases} \multirow{2}{*}{$2a_{ij}$} & \text{ if } 1\leqslant i\leqslant \zeta(n) \text{ and } \mu(m)+1\leqslant j\leqslant l+k\\ & \text{ if } \zeta(n)+1\leqslant i\leqslant r+s \text{ and } 1\leqslant j\leqslant \mu(m)\\[2mm] \multirow{3}{*}{$\dfrac{a_{ij}}{2}$} & \text{if } \zeta(n) +1\leqslant i \leqslant r \text{ and } \mu(m) +1\leqslant j \leqslant l \\ & \text{if }\zeta(n) +1\leqslant i \leqslant r, \ l+1\leqslant j\leqslant l+k \text{ and } 2a_{ij} \, \nmid \, \deg(f_i)\\ & \text{if } r+1\leqslant i\leqslant r+s, \ \mu(m) +1\leqslant j \leqslant l \text{ and } 2a_{ij} \, \nmid \, \deg(h_j)\\[2mm] a_{ij} & \text{otherwise} \end{cases} \] \end{corolenum} \end{corollary} \begin{proof} $ $ $(i)$ We will use the decompositions given in $\eqref{eq:descomp_ciclic}$ and $\eqref{eq:descomp_diedric}$, for $ \Fq[C_a]$ and $ \Fq[D_n]$ respectively. Applying \cref{tpproperties} and the isomorphism given in \cref{isom_direct_tens}, we obtain that \begin{center} \begin{tblr}{colspec={lcr}, cells={mode=dmath}} \Fq[D_n \times C_a] & \overset{\scriptscriptstyle\eqref{isom_direct_tens}}{\cong} & \Fq[D_n] \otimes_{\Fq} \Fq[C_a] \\ & \cong & \left( \bigoplus_{i=1}^{r+s} A_i \right) \otimes_{\Fq} \left(\bigoplus_{j=1}^{b} \Fqr{\deg{(p_j)}} \right) \\ & \overset{\scriptscriptstyle \text{Prop. \ref{tpproperties}}}{\cong} & \bigoplus_{i=1}^{r+s} \bigoplus_{j=1}^{b} \left(A_i \otimes_{\Fq} \Fqr{\deg(p_j)}\right). \end{tblr} \end{center} We will now study the tensor product $A_i \otimes_{\Fq} \Fqr{\deg(p_j)}$ for the different values of $i$. \begin{itemize} \item If $1 \leqslant i \leqslant \zeta(n)$, then $A_i = \Fq \oplus \Fq$, and \cref{tpfields,tpproperties:directsum} imply that \[ A_i \otimes_{\Fq} \Fqr{\deg(p_j)} = \left( \Fq \oplus \Fq \right) \otimes_{\Fq} \Fqr{\deg(p_j)} \cong \Fqr{\deg(p_j)} \oplus \Fqr{\deg(p_j)} \] Therefore, $A_{ij} = \Fqr{\deg(p_j)} \oplus \Fqr{\deg(p_j)}$. Moreover, since $f_i\in\{\mx-1,\mx+1\}$, it follows that $ a_{ij}= 1 = d_{ij}$. 
\item If $\zeta(n) +1\leqslant i \leqslant r$, then $A_i = M_2 \left(\Fqr{\deg(f_i)/2} \right)$ and \cref{tpmatrices} implies that \[ A_i \otimes_{\Fq} \Fqr{\deg({p_j})} = M_2 \left(\Fqr{\deg(f_i)/2} \right) \otimes_{\Fq} \Fqr{\deg(p_j)} \cong M_2 \left(\Fqr{\deg(f_i)/2} \otimes_{\Fq} \Fqr{\deg(p_j)} \right) \] Now, we use Lemma \ref{lem:gcd} and \cref{tpfields} to conclude this case: \begin{itemize} \item If $2 a_{ij} \, | \, \deg(f_i)$, then $$\begin{cases} \lcm \left(\frac{\deg(f_i)}{2}, \deg(p_j) \right) = \dfrac{\ell_{ij}}{2} \\ \gcd \left(\frac{\deg(f_i)}{2}, \deg(p_j) \right) = a_{ij} \end{cases}.$$ Therefore $A_{ij} = M_2 \left(\Fqr{\ell_{ij}/2} \right)$ and $d_{ij} =a_{ij}$. \item If $2 a_{ij} \, \nmid \, \deg(f_i)$, then $$\begin{cases} \lcm \left(\frac{\deg(f_i)}{2}, \deg(p_j) \right) = \ell_{ij} \\ \gcd \left(\frac{\deg(f_i)}{2}, \deg(p_j) \right) = \frac{a_{ij}}{2} \end{cases}.$$ Therefore, $A_{ij} = M_2 \left(\Fqr{\ell_{ij}} \right)$ and $d_{ij} = \frac{a_{ij}}{2}$. \end{itemize} \item If $r+1 \leqslant i \leqslant r + s$, then $A_i = M_2 \left(\Fqr{\deg(f_i)} \right)$ and \cref{tpfields,tpmatrices} imply that \[ A_i \otimes_{\Fq} \Fqr{\deg(p_j)} = M_2 \left(\Fqr{\deg(f_i)} \right) \otimes_{\Fq} \Fqr{\deg(p_j)} \cong M_2 \left(\Fqr{\deg(f_i)} \otimes_{\Fq} \Fqr{\deg(p_j)} \right) \cong M_2 \left(\Fqr{\ell_{ij}} \right)^{\oplus a_{ij}} \] Therefore, $A_{ij} = M_2 \left(\Fqr{\ell_{ij}} \right)$ and $d_{ij} =a_{ij}$. \end{itemize} We can argue in a similar way to obtain the decompositions given in statements $(ii)$ and $(iii)$. \end{proof} \subsection{Group codes in semisimple group algebras} \label{group_codes_from_group_algebras} Given a group $G$, in this section we focus on $G$-codes when the Wedderburn-Artin decomposition of $\Fq[G]$ is known. We will compute the number of $G$-codes we can construct, as well as their dimension. Assume that the $\Fq$-algebra of a group $G$ is semisimple. Then we can write $\Fq[G] \cong \bigoplus_{i=1}^s M_{n_i} \left( \Fqr{r_i} \right)$ for some positive integers $n_i, r_i$, and any ideal of $\Fq[G]$ can be seen as a sum $I_1\oplus\cdots\oplus I_s$, where $I_i$ is an ideal in $M_{n_i} \left( \Fqr{r_i} \right)$, for $i \in \{1, \dots, s\}$. Moreover, this decomposition is unique. This is the reason why, in order to study the parameters and properties of group codes of $\Fq[G]$, we start by exploring the ideals of an arbitrary ring of matrices $M_n(\Fqr{t})$, over a finite field $\Fqr{t}$. Since $M_n(\Fqr{t})$ is a ring of principal ideals, every ideal $I$ of $M_n(\Fqr{t})$ is generated by a matrix $M\in M_n(\Fqr{t})$, that is, \begin{equation}\label{eq:idealgenerat} I=\langle M\rangle=\{XM\ |\ X\in M_n(\Fqr{t}) \}. \end{equation} From the point of view of ring theory, the rank of the ideal $I$ is just the rank of any generator matrix $M$ of $I$. On the other hand, if we want to compute the dimension of the ideal $I$ as an $\Fq$-vector space, then we must first take into account the vector space over $\Fqr{t}$ generated by the rows of $M$, that is, $\rsp(M)=\{xM\ |\ x\in\Fqr{t}^n \}$, since then $\rk(M)$ is exactly its dimension, that is, \[ \rk(M)=\dim_{\Fqr{t}}\rsp(M). \] Now, note that $M_n(\Fqr{t}) \cong \left(\Fqr{t}^n\right)^{\oplus n}$ as $\Fqr{t}$-vector spaces through the isomorphism \[ \begin{array}{cccc} \varphi: & M_n(\Fqr{t}) & \longrightarrow & \left(\Fqr{t}^n\right)^{\oplus n}\\[1mm] & X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} & \longmapsto & x_1 \oplus x_2 \oplus \cdots \oplus x_n \end{array} .
\] In this way, $\varphi(I)=\varphi(\langle M\rangle )=\rsp(M)\oplus\cdots\oplus \rsp(M)$ and therefore $\dim_{\Fqr{t}}(I)=n\rk(M)$. As a result, the dimension of $I=\langle M\rangle$ as an $\Fq$-vector space is given in terms of $\rk(M)$ as follows. \begin{lemma} The dimension of an ideal $I=\langle M\rangle$ of $M_n(\Fqr{t})$ as an $\Fq$-vector space is $\dim_{\Fq}(I)=t n \rk(M)$. \end{lemma} Now we are ready to compute the dimension of any $G$-code when $\Fq[G]$ can be realised as a sum of matrix algebras over finite fields. \begin{theorem}\label{code_dim} Let $G$ be a group such that $\Fq[G]\overset{\scriptscriptstyle \psi}{\cong} \bigoplus_{i=1}^s M_{n_i} \left( \Fqr{r_i} \right)$ for some positive integers $n_i, r_i$, for $1\leqslant i\leqslant s$, and for some isomorphism $\psi$ of algebras. Let $\mathcal{C} = \langle u\rangle\subseteq \Fq[G]$ be a $G$-code, for some element $u\in \Fq[G]$. If $\psi(u) = \bigoplus_{i=1}^s B_i$ then \[ \dim_{\Fq}(\mathcal{C}) = \sum_{i=1}^s n_i r_i \rk(B_i). \] \end{theorem} In addition to being able to give the dimension of any $G$-code, the decomposition of the group algebra $\Fq[G]$ into rings of matrices over finite fields also allows us to count the total number of $G$-codes we can construct within this group algebra. To do so, note that in an arbitrary ring of matrices $M_n(\Fq)$, each ideal is generated by exactly one matrix $M\in M_n(\Fq)$ in row reduced echelon form (see \eqref{eq:idealgenerat}). In turn, each matrix of $M_n(\Fq)$ in row reduced echelon form having rank $k$ determines exactly one $k$-dimensional vector subspace of $\Fq^n$ (its row space). Thus, there is a bijection between the set of ideals of rank $k$ in $M_n(\Fq)$ and the set of $k$-dimensional vector subspaces of $\Fq^n$, which is the Grassmann variety ${\cal G}_{q}(k,n)$. As is well known, the cardinality of ${\cal G}_{q}(k,n)$ is given by the $q$-ary Gaussian coefficient (\cite[Ch. 24]{van2001course}): \[ \begin{bmatrix} n \\ k \end{bmatrix}_q =\dfrac{(q^n-1)(q^{n-1}-1) \cdots (q^{n-k+1}-1)}{(q^k-1)(q^{k-1}-1) \cdots (q-1)} . \] Therefore, this is also the number of ideals of rank $k$ in $M_n(\Fq)$. It follows then that the total number of ideals we can construct in $M_n(\Fq)$, denoted by ${\cal I}_q(n)$, is exactly the cardinality of the projective geometry ${\cal P}(\Fq^n)$ (that is, the set of all subspaces) associated to the $\Fq$-vector space $\Fq^n$. This number is: \[ {\cal I}_q(n)=|{\cal P}(\Fq^n)|=\sum_{k=0}^n \begin{bmatrix} n \\ k \end{bmatrix}_q . \] Now, we can compute the exact number of $G$-codes we can construct in $\Fq[G]$ when this algebra is semisimple. \begin{theorem}\label{numberofcodes} If $\Fq[G] \cong \bigoplus_{i=1}^s M_{n_i} \left( \Fqr{r_i} \right)$, then the number of $G$-codes over $\Fq$ is given by the formula \[ \prod_{i=1}^s {\cal I}_{q^{r_i}}(n_i). \] \end{theorem} In particular, if $G$ is a dihedral group, then using the decomposition of the semisimple group algebra $\mathbb{F}_q[G]$ given in (\ref{eq:descomp_diedric}) we recover the number $\mathcal{N}$ of dihedral codes given in \cite[Theorem 3.1]{CaoCaoMa}.
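Before turning to the examples, we include a short Python sketch (our own illustration, not part of the original text) that implements the $q$-ary Gaussian coefficient and the count ${\cal I}_q(n)$ above, and reproduces the per-block numbers used in the example that follows.
\begin{verbatim}
from math import prod

def gaussian_binomial(n, k, q):
    # q-ary Gaussian coefficient [n choose k]_q
    return prod(q**n - q**i for i in range(k)) // prod(q**k - q**i for i in range(k))

def num_ideals(n, q):
    # I_q(n): number of (left) ideals of M_n(F_q) = number of subspaces of F_q^n
    return sum(gaussian_binomial(n, k, q) for k in range(n + 1))

print(num_ideals(1, 3), num_ideals(2, 3), num_ideals(2, 3**4), num_ideals(4, 3))
# -> 2 6 84 212, the per-block counts appearing in the table below

# Number of group codes in F_3[D_4 x C_5], using the product formula above:
print(num_ideals(1, 3)**4 * num_ideals(1, 3**4)**4 * num_ideals(2, 3) * num_ideals(2, 3**4))
# -> 129024
\end{verbatim}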
\begin{example} We can use \cref{numberofcodes} to compute the number of group codes in the following examples: \begin{center} \begin{tblr}{colspec={ccc}, row{2-Z}={mode=dmath}, vlines, hlines} Group algebra & Wedderburn-Artin decomposition & Number of group codes \\ \mathbb{F}_3[C_5] & \mathbb{F}_3 \oplus\mathbb{F}_{3^4} & 2 \cdot 2 = 4 \\ \mathbb{F}_3[D_4] & 4\mathbb{F}_3 \oplus M_2(\mathbb{F}_3) & 2^4 \cdot 6 = 96 \\ \mathbb{F}_3[D_4 \times C_5] & 4\mathbb{F}_3 \oplus 4\mathbb{F}_{3^4} \oplus M_2(\mathbb{F}_3) \oplus M_2(\mathbb{F}_{3^4}) & 2^4 \cdot 2^4 \cdot 6 \cdot 84 = 129024 \\ \mathbb{F}_3[D_4 \times D_4] & 16\mathbb{F}_3 \oplus 8M_2(\mathbb{F}_3) \oplus M_4(\mathbb{F}_3) & 2^{16} \cdot 6^8 \cdot 212 = 23335966605312 \end{tblr} \end{center} \end{example} \section{Some explicit constructions of codes involving dihedral groups} \label{sec_examples} The aim of this section is to show how the theoretical results stated in this paper can be applied. In Subsection \ref{exemples_prod_directes} we will construct several group codes arising from the group algebras $\mathbb{F}_q[D_n\times C_t]$ and $\mathbb{F}_q[D_n\times D_m]$. After that, in Subsection \ref{sec_quantum}, we will develop a method for constructing quantum error-correcting codes by using group codes from $\mathbb{F}_q[D_n]$. In both cases, the dihedral group $D_n$ and the Wedderburn-Artin decomposition of its group algebra $\mathbb{F}_q[D_n]$ presented in Subsection \ref{group-algebra-decomposition} will play an essential role. For this reason, we will first give an explicit description of every dihedral code arising from $\mathbb{F}_q[D_n]$ as well as its corresponding dual code. Using the notation of Subsection \ref{group-algebra-decomposition}, if $1\leqslant j \leqslant \zeta(n)$, then $A_j=\mathbb{F}_q \oplus \mathbb{F}_q$, and thus it is clear that any non-zero proper ideal of $A_j$ has the form $\mathbb{F}_q\oplus \pmb{0}$ or $\pmb{0} \oplus\mathbb{F}_q$. If $\zeta(n)+1\leqslant j \leqslant r$, then $A_j=M_2 \left(\Fqr{\deg(f_j)/2} \right)$; since every ideal of such a ring is generated by a matrix in row reduced echelon form (see \cref{group_codes_from_group_algebras}), the non-zero proper ideals of $A_j$ in this case are generated by one of the following matrices: $$ \begin{pmatrix} 1&0\\0&0 \end{pmatrix}, \qquad \begin{pmatrix} 0&1\\0&0 \end{pmatrix}, \qquad \begin{pmatrix} 1&\lambda\\0&0 \end{pmatrix}, $$ where $0\neq \lambda\in \mathbb{F}_{q^{\operatorname{deg}(f_j)/2}}$. This last assertion also holds whenever $r+1\leqslant j\leqslant r+s$, but taking $0\neq \lambda\in \mathbb{F}_{q^{\operatorname{deg}(f_j)}}$, since in that case $A_j=M_2 \left(\Fqr{\deg(f_j)} \right)$. Besides, for each group code in $\mathbb{F}_q[D_n]$, the corresponding dual code is described in \cite{Vedenev2021}. This information is summarised below, and it will be used in the examples we are going to build later. \medskip \begin{theorem} \label{theo_dual} Let $\operatorname{gcd}(q,2n)=1$.
Then $\mathcal{C}\subseteq \mathbb{F}_q[D_n]$ is a group code if and only if it satisfies that \begin{equation} \label{eq1} \mathcal{C}\cong \displaystyle\bigoplus_{j=1}^{\zeta(n)} \left(x_{j_1}\mathbb{F}_q\oplus x_{j_2}\mathbb{F}_q\right) \oplus \bigoplus_{j=\zeta(n)+1}^{r+s} \langle M_j\rangle_{A_j} \end{equation} where $M_j\in \left\lbrace \begin{pmatrix} 1&0\\0&1 \end{pmatrix}, \begin{pmatrix} 1&0\\0&0 \end{pmatrix}, \begin{pmatrix} 0&1\\0&0 \end{pmatrix}, \begin{pmatrix} 1&\lambda_j\\0&0 \end{pmatrix}, \begin{pmatrix} 0&0\\0&0 \end{pmatrix}\right\rbrace$, $x_{j_1}, x_{j_2}\in \{0,1\}$, $0\neq \lambda_j\in \mathbb{F}_{q^{\operatorname{deg}(f_j)/2}}$ if $j\leq r$, and $0\neq \lambda_j\in \mathbb{F}_{q^{\operatorname{deg}(f_j)}}$ otherwise. Moreover, its dual code $\mathcal{C}^{\perp}\subseteq \mathbb{F}_q[D_n]$ verifies that $$ \mathcal{C}^{\perp}\cong \displaystyle\bigoplus_{j=1}^{\zeta(n)}\left((1-x_{j_1})\mathbb{F}_q\oplus (1-x_{j_2})\mathbb{F}_q \right) \oplus \displaystyle\bigoplus_{j=\zeta(n)+1}^{r+s} \langle \widehat{M}_j\rangle_{A_j}$$ where: \begin{propenum} \item for $\zeta(n)+1\leq j \leq r$ and a root $\alpha_j$ of $f_j$: \begin{equation} \label{eq2} \begin{footnotesize} \widehat{M}_j=\left\lbrace\begin{array}{ll} \begin{pmatrix} 1&0\\0&1 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 0&0\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} 2&-\alpha_j-\alpha_j^{-1}\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 0&1\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} \alpha_j+\alpha_j^{-1}&-2\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 1&0\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} \alpha_j+\alpha_j^{-1}+2\lambda_j &-2-(\alpha_j+\alpha_j^{-1})\lambda_j\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix}1&\lambda_j\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} 0&0\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 1&0\\0&1 \end{pmatrix} \end{array}\right. \end{footnotesize} \end{equation} \item for $r+1\leq j \leq r+s$: \begin{equation} \label{eq3} \begin{footnotesize} \widehat{M}_j=\left\lbrace\begin{array}{ll} \begin{pmatrix} 1&0\\0&1 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 0&0\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} 0&1\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 0&1\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix} 1&0\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 1&0\\0&0 \end{pmatrix} \\*[4mm] \begin{pmatrix}1&-\lambda_j\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix}1&\lambda_j\\0&0 \end{pmatrix}\\*[4mm] \begin{pmatrix} 0&0\\0&0 \end{pmatrix}, & \text{if } M_j=\begin{pmatrix} 1&0\\0&1 \end{pmatrix} \end{array}\right. \end{footnotesize} \end{equation} \end{propenum} \end{theorem} \begin{proof} Equation (\ref{eq1}) is clear from the discussion at the beginning of the section, whilst Equations (\ref{eq2}) and (\ref{eq3}) directy follows from \cite[Theorem 5]{Vedenev2021}. \end{proof} \subsection{Group codes arising from $\mathbb{F}_q[D_n\times C_t]$ and $\mathbb{F}_q[D_n\times D_m]$}\label{exemples_prod_directes} Let $q=3$ and consider the group algebra $\mathbb{F}_3[D_4\times C_4]$, where $D_4=\langle x, y \, | \, x^4 =y^2= 1, y^{-1}xy = x^{-1} \rangle$ and $C_4=\langle z \, | \, z^4 = 1\rangle$. According to \cref{corolCxD}, the Wedderburn-Artin decomposition of $\mathbb{F}_3[D_4\times C_4]$ depends on the factorisation of the polynomial $\mx^4-1 $ in irreducible polynomials of $\mathbb{F}_3[\mx]$. 
Following Notation \ref{notation} and Corollary \ref{corolCxD}, one has $\zeta(4)=2, \, r=b=3, \, s=0$ and $\mx^4-1=p_1p_2p_3=f_1f_2f_3$, where \begin{gather*} f_1 = p_1 = \mx-1 \\ f_2 = p_2 = \mx+1 \\ f_3 = p_3 = \mx^2+1. \end{gather*} The values of $a_{ij}, d_{ij}$ and $\ell_{ij}$ are organised in the following table: \begin{center} \begin{tblr}{colspec={cccc}, columns={mode=dmath}, vlines} \hline \diagbox{i}{j} & 1 & 2 & 3 \\ \hline \SetCell[r=3]{m} 1 & a_{11} = 1 & a_{12} = 1 & a_{13} = 1 \\ & d_{11} = 1 & d_{12} = 1 & d_{13} = 1 \\ & \ell_{11} = 1 & \ell_{12} = 1 & \ell_{13} = 2 \\ \hline \SetCell[r=3]{m} 2 & a_{21} = 1 & a_{22} = 1 & a_{23} = 1 \\ & d_{21} = 1 & d_{22} = 1 & d_{23} = 1 \\ & \ell_{21} = 1 & \ell_{22} = 1 & \ell_{23} = 2 \\ \hline \SetCell[r=3]{m} 3 & a_{31} = 1 & a_{32} = 1 & a_{33} = 2 \\ & d_{31} = 1 & d_{32} = 1 & d_{33} = 1 \\ & \ell_{31} = 2 & \ell_{32} = 2 & \ell_{33} = 2 \\ \hline \end{tblr} \end{center} Therefore, the blocks $A_{ij}$ are \begin{center} \begin{tblr}{colspec={cccc}, rows={mode=dmath}, vlines, hlines} \diagbox{i}{j} & 1 & 2 & 3 \\ 1 & \mathbb{F}_3 \oplus \mathbb{F}_3 & \mathbb{F}_3 \oplus \mathbb{F}_3 & \mathbb{F}_{3^2} \oplus \mathbb{F}_{3^2} \\ 2 & \mathbb{F}_3 \oplus \mathbb{F}_3 & \mathbb{F}_3 \oplus \mathbb{F}_3 & \mathbb{F}_{3^2} \oplus \mathbb{F}_{3^2} \\ 3 & M_2\left( \mathbb{F}_3 \right) & M_2\left( \mathbb{F}_3 \right) & M_2\left( \mathbb{F}_{3^2} \right) \\ \end{tblr} \end{center} The information in both tables yields, after a proper reordering of the summands, that $\mathbb{F}_3[D_4\times C_4]$ can be decomposed as: \begin{equation}\label{descomp_D4xC4} \mathbb{F}_3[D_4\times C_4] \overset{\scriptscriptstyle \psi}{\cong} 8\mathbb{F}_3 \oplus 4\mathbb{F}_{3^2} \oplus 2 M_2(\mathbb{F}_3) \oplus M_2(\mathbb{F}_{3^2}). \end{equation} We will also need the expression for the isomorphism $\psi$. The isomorphism $\psi_1$ between $\mathbb{F}_3[D_4]$ and its decomposition can be found in \cite{Brochero2015}, whereas the isomorphism $\psi_2$ between $\mathbb{F}_3[C_4]$ and its decomposition is a direct consequence of the Chinese Remainder Theorem. By proceeding as in the proof of \cref{Wedderburntp}, $\psi$ can be explicitly determined through the images of the generators of the product group algebra: \begin{gather*} \psi(x) = 1 \oplus 1 \oplus 1 \oplus 1 \oplus -1 \oplus -1 \oplus -1 \oplus -1 \oplus 1 \oplus 1 \oplus -1 \oplus -1 \oplus \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \oplus \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \oplus \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \\[1mm] \psi(y) = 1 \oplus -1 \oplus 1 \oplus -1 \oplus 1 \oplus -1 \oplus 1 \oplus -1 \oplus 1 \oplus -1 \oplus 1 \oplus -1 \oplus \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \oplus \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \oplus \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\[1mm] \psi(z) = -1 \oplus -1 \oplus 1 \oplus 1 \oplus -1 \oplus -1 \oplus 1 \oplus 1 \oplus \xi^2 \oplus \xi^2 \oplus \xi^2 \oplus \xi^2 \oplus \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \oplus \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \oplus \begin{pmatrix} \xi^2 & 0 \\ 0 & \xi^2 \end{pmatrix} \end{gather*} where $\xi$ a primitive element of $\mathbb{F}_{3^2}$ such that $\xi^2-\xi-1=0$. 
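Since the block images above completely determine $\psi$, one can verify the defining relations of $D_4\times C_4$ numerically. The following Python sketch (ours, only a sanity check, and restricted to the $2\times 2$ blocks over $\mathbb{F}_3$; the $M_2(\mathbb{F}_{3^2})$ block would additionally require arithmetic with $\xi$) confirms that the assigned matrices satisfy $x^4=y^2=z^4=1$, $y^{-1}xy=x^{-1}$, and that the image of $z$ commutes with those of $x$ and $y$.
\begin{verbatim}
import numpy as np

def mod3(A):
    return np.mod(A, 3)

I = np.identity(2, dtype=int)
X = np.array([[0, 1], [-1, 0]])    # 2x2 block image of x
Y = np.array([[1, 0], [0, -1]])    # 2x2 block image of y
Z = -np.identity(2, dtype=int)     # image of z in the first M_2(F_3) block

assert (mod3(np.linalg.matrix_power(X, 4)) == I).all()                 # x^4 = 1
assert (mod3(Y @ Y) == I).all()                                        # y^2 = 1
assert (mod3(Y @ X @ Y) == mod3(np.linalg.matrix_power(X, 3))).all()   # y^{-1} x y = x^{-1}
assert (mod3(np.linalg.matrix_power(Z, 4)) == I).all()                 # z^4 = 1
assert (mod3(X @ Z) == mod3(Z @ X)).all() and (mod3(Y @ Z) == mod3(Z @ Y)).all()
\end{verbatim}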
Now, considering the decomposition of $\mathbb{F}_3[D_4\times C_4]$ given in $(\ref{descomp_D4xC4})$, we take for instance $\mathcal{C}$ the code associated to the ideal $$\mathbb{F}_3 \oplus \mathbb{F}_3 \oplus \pmb{0} \oplus \mathbb{F}_3 \oplus \pmb{0} \oplus \mathbb{F}_3 \oplus \pmb{0} \oplus \pmb{0} \oplus \pmb{0} \oplus \mathbb{F}_{3^2} \oplus \mathbb{F}_{3^2} \oplus \mathbb{F}_{3^2} \oplus \pmb{0} \oplus M_2(\mathbb{F}_3) \oplus \left\langle \begin{pmatrix} 1 & \xi \\ 0 & 0 \end{pmatrix} \right\rangle_{M_2\left(\mathbb{F}_{3^2}\right)}.$$ Its dimension can be easily computed utilising \cref{code_dim}. Indeed, since \begin{gather*} \dim_{\mathbb{F}_3}(\mathbb{F}_3) = 1, \qquad \dim_{\mathbb{F}_3}(\mathbb{F}_{3^2}) = 2, \qquad \dim_{\mathbb{F}_3}\left(M_2(\mathbb{F}_3)\right) = 4, \qquad \dim_{\mathbb{F}_3}\left(\left\langle \begin{pmatrix} 1 & \xi \\ 0 & 0 \end{pmatrix} \right\rangle_{M_2\left(\mathbb{F}_{3^2}\right)}\right) = 4, \end{gather*} we can conclude that the dimension of $\mathcal{C}$ is 18. Using the software \textsc{Magma} \cite{Magma}, we were able to construct this concrete $\psi$ and its inverse, and so we computed a generator element of $\mathcal{C}$ in $\mathbb{F}_3[D_4\times C_4]$, which is $x+z-z^2+x^2+xy-xz+xz^2-x^3+yz^2-x^2y+z^3-x^2z^2-xyz^2+xz^3+x^3z-x^2yz^2+xyz^3+x^3yz-x^3z^3$. Moreover, we obtained its minimum distance, which is 8. We point out that $\mathcal{C}$ achieves the best known minimum distance for a $[32, 18]_3$ linear code according to Grassl's tables \cite{Grasslcodetables}. Below we present other examples of group codes over $\mathbb{F}_3$ associated with the direct product of two specific groups, being one of them always a dihedral group. All are group codes with the best-known minimum distance for their length and dimension (cf. \cite{Grasslcodetables}). The results were obtained using \textsc{Magma} \cite{Magma} by random search. Let $D_n=\langle x, y \, | \, x^{n} =y^2= 1, y^{-1}xy = x^{-1} \rangle$ and $C_t=\langle z \, | \, z^t = 1\rangle$. Consider the group algebra $\mathbb{F}_3[D_n\times C_t]$. The following table displays some of the codes we computed. \begin{center} \begin{tblr}[caption={Some good codes in $\mathbb{F}_3[D_n \times C_t]$, where $D_n=\langle a,b \, | \, a^n=b^2=1, bab=a^{-1}\rangle$ and $C_t=\langle c \, | \, c^t = 1 \rangle$}]{|Q[r, m, 0.57\textwidth]|c|c|c|c|c|} \hline Generator of an ideal of $\mathbb{F}_3[D_n\times C_t]$ & $n$ & $t$ & Length & Dimension & Distance \\ \hline $x+z-z^2+x^2+xy-xz+xz^2-x^3+yz^2-x^2y+z^3-x^2z^2-xyz^2+xz^3+x^3z-x^2yz^2+xyz^3+x^3yz-x^3z^3$ & 4 & 4 & 32 & 18 & 8 \\ \hline $y+z-x-yz+yz^2+x^4y+z^3+xz-yz^3-x^3y-x^2z-x^3-x^3yz+x^2y+x^2z^3-xy+x^3z^3+x^4z-x^4z^3-xyz^3$ & 5 & 4 & 40 & 21 & 10 \\ \hline $1-y-x+x^2-yz^2+x^2y-x^2z^2+xy-yz^4+xz^4-x^2z^4+x^3yz^4-yz-xz-x^3z^4-x^2z-x^3yz-xyz^4-yz^3-xz^3+x^3z-x^2z^3-x^3yz^3+xyz+x^3z^3+xyz^3$ & 4 & 5 & 40 & 25 & 8 \\ \hline $1+z^2-yz^2-x^4y+z^4-xz^2-yz^4-x^3y+z-xz^4+x^3-yz+x^3yz^2+z^3-xz-x^3z^2+x^4-yz^3+x^3yz^4-x^2z-x^3z^4+x^4yz^3+xyz^2+x^3yz^3+x^2yz+xyz^4-x^3z^3+xyz-x^4z^3$ & 5 & 5 & 50 & 29 & 10 \\\hline \end{tblr} \end{center} Now, let $D_n=\langle x, y \, | \, x^{n} =y^2= 1, y^{-1}xy = x^{-1} \rangle$ and $D_m=\langle c,d\, | \, c^m=d^2=1, dcd=c^{-1}\rangle$, and consider the group algebra $\mathbb{F}_3[D_n\times D_m]$. Then we have the following table. 
\begin{center} \begin{tblr}[caption={Some good codes in $\mathbb{F}_3[D_n \times D_m]$, where $D_n=\langle x, y \, | \, x^n=y^2= 1, y^{-1}xy = x^{-1} \rangle$ and $C_t=\langle c \, | \, c^t = 1 \rangle$}]{|Q[r, m, 0.57\textwidth]|c|c|c|c|c|} \hline Generator of an ideal of $\mathbb{F}_3[D_n\times D_m]$ & $n$ & $m$ & Length & Dimension & Distance \\ \hline $-x^4cd + x^4ycd + x^5cd - x^3yc + x^5c - x^3yd - x^2ycd - x^5ycd - x^6cd + x^3c - x^5yc + x^3d - x^2yd + x^6d + x^2cd + x^6ycd - xyc + x^2d - x^6yd - ycd + xc + yc + yd + cd - y - c - d - 1$ & 7 & 2 & 56 & 31 & 11 \\ \hline $-1+x+d-x^2-xd-x^3-x^5+x^4y+cd-x^2c+x^4d-x^6-xyc+xyd+x^7y-x^5y+xcd+x^3c-x^5c+x^3d-x^6yc-x^4yc+x^6yd+x^4cd-x^6c+xycd-x^7yc-x^5yc-x^7yd+x^5yd-x^3y-x^5cd-x^7d-x^6ycd+x^6yd-x^7ycd-x^3yd+x^7cd+x^3ycd+x^2yd$ & 8 & 2 & 64 & 35 & 12 \\ \hline $-x^2yc^2d + x^2yc^3 + x^3c^2 + x^2ycd + x^3cd -x^2c^2d -xyc^2d -x^3yc^2d + x^2c^3 + xyc^3 + x^3yc^3 + x^3c + xyc^2 + x^3yc^2 - x^2cd - xycd + x^3d - x^2c^3d + yc^2d + yc^3 + x^2y + x^3 + x^2c + xyc -x^3yc + xc^2 - xcd + xyd - yc^3d - x^3y - xc - c^2 - y - 1$ & 4 & 4 & 64 & 44 & 8 \\ \hline $x^2yc^2d - x^3c^2d - x^2yc^3 - x^3c^3 - x^2yc^2 - x^3cd + x^2yc^3d - x^3c^3d + x^3yc^2d - x^3yc^3 - x^3c + x^2c^2 - xyc^2 + x^3yc^2 - xycd + x^3ycd + x^2c^3d - xyc^3d - yc^2d + xc^3 - x^3 + x^2c + x^3yc - xc^2 - yc^2 + xcd + ycd + x^2d - xyd + x^3yd - yc^3d + c^3 - x^2 + xy - x^3y + yc - c^2 + cd - xd - c^3d + y - c - d$ & 4 & 4 & 64 & 45 & 8 \\ \hline \end{tblr} \end{center} \subsection{On Quantum error-correcting Dihedral Codes} \label{sec_quantum} \setlength{\arraycolsep}{2pt} Given a finite group $G$, recall that every $G$-code can be determined whenever the Wedderburn-Artin decomposition of the group algebra is known, since each $G$-code is isomorphic to a direct sum of ideals in the corresponding matrix rings (see \cref{group_codes_from_group_algebras}). Hence, if the corresponding dual code can also be computed, then we can utilise this information to obtain quantum error-correcting $G$-codes via the CSS construction (cf. \cref{quantum_CSS}). Using Theorem \ref{theo_dual}, in this section we are going to apply this method when the considered group is the dihedral group $D_n$. \begin{example} Let us consider $q=3$ and $n=16$, so we work on the group algebra $\mathbb{F}_3[D_{16}]$. Following the notation in \cref{group-algebra-decomposition}, it is clear that $\zeta(n)=2$, and it can be checked that $$\mx^{16}-1=(\mx-1)(\mx+1)(\mx^2+1)(\mx^2+\mx-1)(\mx^2-\mx-1)(\mx^4+\mx^2-1)(\mx^4-\mx^2-1)=f_1f_2f_3f_4f_4^*f_5f_5^*,$$ so $r=3$ and $s=2$. Consider $\beta$ a primitive element of $\mathbb{F}_{3^4}$. 
Take $$\mathcal{C}\cong(\mathbb{F}_3\oplus \mathbb{F}_3)\oplus(\pmb{0}\oplus \mathbb{F}_3)\oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^2})}\oplus \left\langle \begin{pmatrix} 1&-\beta\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})}.$$ In virtue of Theorem \ref{theo_dual}, $\mathcal{C}$ is a $D_{16}$-code of dimension $19$ (by \cref{code_dim}), and $$ \mathcal{C}^{\perp}\cong (\pmb{0} \oplus \pmb{0}) \oplus (\mathbb{F}_3 \oplus \pmb{0}) \oplus \left\langle \begin{pmatrix} 0&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^2})}\oplus \left\langle \begin{pmatrix} 1&\beta\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})} $$ Note that with each $D_{16}$-code containing $\mathcal{C}^{\perp}$ we can construct a dihedral quantum code applying the CSS construction (cf. \cref{quantum_CSS}). We consider for instance: $$ \mathcal{D}\cong (\mathbb{F}_3\oplus \mathbb{F}_3) \oplus (\mathbb{F}_3 \oplus \pmb{0}) \oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^2})}\oplus \left\langle \begin{pmatrix} 1&\beta\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})}. $$ It holds that $\mathcal{D}$ is a $D_{16}$-code of dimension $23$. By \cref{quantum_CSS} and making computations with \textsc{Magma} \cite{Magma}, we obtain that ${\cal Q}_1=\operatorname{CSS}(\mathcal{D}, \mathcal{C})$ is a quantum $D_{16}$-code with parameters $[[32,10,4]]_{3}$. \end{example} \setlength{\arraycolsep}{4pt} \begin{example} Now we consider the group algebra $\mathbb{F}_3[D_{20}]$, so $n=20$ and $q=3$. Thus $\zeta(n)=2$, and since \begin{eqnarray*}\mx^{20}-1&=&(\mx-1)(\mx+1)(\mx^2+1)(\mx^4+\mx^3+\mx^2+\mx+1) \cdot \\ & \cdot & (\mx^4-\mx^3+\mx^2-\mx+1)(\mx^4+\mx^3-\mx+1)(\mx^4-\mx^3+\mx+1)= \\ &=&f_1f_2f_3f_4f_5f_6f_6^*,\end{eqnarray*} we deduce that $r=5$ and $s=1$. For $i=1,2$, let $\alpha_i$ be a root of $f_i$. By \cite[Remark 3.2]{Brochero2015} there exists a polynomial $h_i$ in $\mathbb{F}_3[\mx]$ having degree $2$ and $\alpha_i+\alpha_i^{-1}$ as a root. Thus, we can take \[ \lambda_4=-2/(\alpha_4+\alpha_4^{-1})\in \mathbb{F}_3(\alpha_4+\alpha_4^{-1})\cong \mathbb{F}_{3^2} \] and \[ \lambda_5=-(\alpha_5+\alpha_5^{-1})/2\in \mathbb{F}_3(\alpha_5+\alpha_5^{-1})\cong \mathbb{F}_{3^2}. 
\] Let us take the $D_{20}$-code \begin{eqnarray*}\mathcal{C}\cong(\mathbb{F}_3\oplus \mathbb{F}_3)&\oplus&(\pmb{0}\oplus \mathbb{F}_3)\oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&\lambda_4\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_4+\alpha_4^{-1}))} \oplus \\&\oplus& \left\langle \begin{pmatrix} 1&\lambda_5\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_5+\alpha_5^{-1}))}\oplus \left\langle \begin{pmatrix} 1&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})}.\end{eqnarray*} By Theorems \ref{theo_dual} and \ref{code_dim}, we get that $\mathcal{C}$ is a $D_{20}$-code of dimension $23$, and \begin{eqnarray*}\mathcal{C}^{\perp}\cong(\pmb{0}\oplus \pmb{0})&\oplus&(\mathbb{F}_3 \oplus \pmb{0})\oplus \left\langle \begin{pmatrix} 0&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_4+\alpha_4^{-1}))}\oplus \\&\oplus& \left\langle \begin{pmatrix} 0&1\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_5+\alpha_5^{-1}))}\oplus \left\langle \begin{pmatrix} 1&0\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})}.\end{eqnarray*} As in the previous example, any $D_{20}$-code containing $\mathcal{C}^{\perp}$ allows us to apply the CSS construction in order to obtain a dihedral quantum code. More concretely, we choose the following $D_{20}$-code $\mathcal{D}$ such that $\mathcal{C}^{\perp}\subseteq \mathcal{D}$: \begin{eqnarray*}\mathcal{D}\cong(\mathbb{F}_3\oplus \mathbb{F}_3)&\oplus&(\mathbb{F}_3 \oplus \pmb{0})\oplus \left\langle \begin{pmatrix} 0&1\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3})}\oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_4+\alpha_4^{-1}))}\oplus \\&\oplus& \left\langle \begin{pmatrix} 0&1\\0&0 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3}(\alpha_5+\alpha_5^{-1}))}\oplus \left\langle \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \right\rangle_{M_2(\mathbb{F}_{3^4})}.\end{eqnarray*} It holds that $\mathcal{D}$ is a $D_{20}$-code of dimension $33$. Consequently, by \cref{quantum_CSS} and through computations with \textsc{Magma} \cite{Magma} to obtain the distance, we conclude that ${\cal Q}_2=\operatorname{CSS}({\cal D}, {\cal C})$ is a quantum CSS $D_{20}$-code with parameters $[[40,16,4]]_{3}$. \end{example} According to the database \url{http://quantumcodes.info} we can compare the quantum codes $[[12,3,4]]_{3}$ and $[[16,4,4]]_{3}$ with our dihedral quantum CSS codes computed in the previous two examples. All these codes have minimum distance 4. However, our codes have a larger code rate, namely $k/n=10/32 = 0.3125$ in the first example and $k/n=16/40=0.4$ in the second example, in contrast to $3/12=0.25=4/16$. Hence, our dihedral quantum codes perform better than these ones. \renewcommand*{\mkbibcompletename}[1]{\textsc{#1}} \printbibliography \end{document}
2412.09713v1
http://arxiv.org/abs/2412.09713v1
Fractal analysis of canard cycles and slow-fast Hopf points in piecewise smooth Liénard equations
\documentclass[a4paper]{article} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{graphicx} \usepackage{authblk} \usepackage{hyperref} \usepackage{multirow} \usepackage{cite} \usepackage{comment} \usepackage{overpic} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}{Conjecture} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \newenvironment{vf}{\left\{\begin{array}{rcl}}{\end{array}\right.} \DeclareMathOperator{\divergenceOperator}{div} \DeclareMathOperator{\cycl}{Cycl} \newcommand\norm[1]{\left\lVert#1\right\rVert} \newcommand{\sgn}{\mathop{\mathrm{sgn}}} \newcommand{\cdim}{\mathop{\mathrm{co\textrm{-}dim}}} } \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \author[1]{Renato Huzak} \author[2]{Ansfried Janssens\footnote{Corresponding author, {\tt [email protected]}}} \author[3]{Otavio Henrique Perez} \author[4]{Goran Radunovi\'{c}} \affil[1,2,3]{Hasselt University, Campus Diepenbeek, Agoralaan Gebouw D, 3590 Diepenbeek, Belgium} \affil[3]{Universidade de S\~{a}o Paulo (USP), Instituto de Ci\^{e}ncias Matem\'aticas e de Computa\c{c}\~{a}o (ICMC). Avenida Trabalhador S\~{a}o Carlense, 400, CEP 13566-590, S\~{a}o Carlos, S\~{a}o Paulo, Brazil.} \affil[4]{University of Zagreb, Faculty of Science, Horvatovac 102a, 10000 Zagreb, Croatia} \affil[1]{{\tt [email protected]}} \affil[2]{{\tt [email protected]}} \affil[3]{{\tt [email protected]}} \affil[4]{{\tt [email protected]}} \title{Fractal analysis of canard cycles and slow-fast Hopf points in piecewise smooth Li\'{e}nard equations} \date{} \begin{document} \maketitle \begin{abstract} The main goal of this paper is to give a complete fractal analysis of piecewise smooth (PWS) slow-fast Li\'{e}nard equations. For the analysis, we use the notion of Minkowski dimension of one-dimensional orbits generated by slow relation functions. More precisely, we find all possible values for the Minkowski dimension near PWS slow-fast Hopf points and near bounded balanced crossing canard cycles. We study fractal properties of the unbounded canard cycles using PWS classical Li\'{e}nard equations. We also show how the trivial Minkowski dimension implies the non-existence of limit cycles of crossing type close to Hopf points. This is not true for crossing limit cycles produced by bounded balanced canard cycles, i.e. we find a system undergoing a saddle-node bifurcation of crossing limit cycles and a system without limit cycles (in both cases, the Minkowski dimension is trivial). We also connect the Minkowski dimension with upper bounds for the number of limit cycles produced by bounded canard cycles. 
\end{abstract} \textit{Keywords:} canard cycles; Minkowski dimension; piecewise smooth slow-fast Hopf point; piecewise smooth slow-fast Li\'{e}nard equations; slow relation function\newline \textit{2020 Mathematics Subject Classification:} 34E15, 34E17, 34C40, 28A80, 28A75 \tableofcontents \section{Introduction}\label{Section-Introduction} The main purpose of this paper is to give a fractal classification of piecewise smooth (PWS) continuous slow-fast Li\'enard equations \begin{align}\label{PWSLienard-Intro} \begin{cases} \dot x=y-F(x) ,\\ \dot y=\epsilon G(x), \end{cases} \end{align} with \begin{align}\label{PWSLienard-Intro-FG} F(x)=\begin{cases} F_-(x), \ x\le 0,\\ F_+(x), \ x\ge 0, \end{cases} \quad G(x)=\begin{cases} G_-(x), \ x\le 0,\\ G_+(x), \ x\ge 0, \end{cases} \end{align} where $\epsilon\ge 0$ is a singular perturbation parameter kept small, $F_\pm$ and $G_\pm$ are $C^\infty$-smooth functions, $F_\pm(0)=F_\pm'(0)=0$ and $G_\pm(0)=0$. The set $\Sigma = \{x=0\}$ is called the switching line or switching manifold. For the classification, we will use the notion of Minkowski dimension (always equal to the box dimension \cite{Falconer,tricot}) of one-dimensional monotone orbits generated by so-called slow relation (or entry-exit) function (see Section \ref{sec-Motivation}). We refer to \cite{Benoit,Dbalanced,BoxNovo} and references therein for the definition of the notion of slow relation function in smooth planar slow-fast systems. \smallskip One of the important properties of such one-dimensional monotone orbits is their density. The density is usually measured by calculating the length of $\delta$-neighborhood of orbits as $\delta\to 0$ and comparing the length with $\delta^{1-s}$, $0\le s\le 1$. In this way we obtain the Minkowski dimension of orbits, taking values between $0$ and $1$ (for more details, see Section \ref{Minkowski-def-separated}). The bigger the Minkowski dimension of the orbits, the higher the density of orbits. Following \cite{EZZ,ZupZub,MRZ}, the Minkowski dimension of orbits generated by the Poincar\'{e} map near foci, limit cycles, homoclinic loops, etc., is closely related to the number of limit cycles produced in bifurcations (roughly speaking, the bigger the Minkowski dimension, the more limit cycles can be born). Similarly, in smooth planar slow-fast systems, the Minkowski dimension of orbits generated by a slow relation function plays an important role in detecting the codimension of singular Hopf bifurcations in a coordinate-free way \cite{BoxNovo}, finding the maximum number of limit cycles produced by canard cycles \cite{BoxRenato,BoxDomagoj,BoxVlatko}, etc. For a more detailed motivation we refer the interested reader to \cite[Section 1]{BoxNovo} and \cite[Section 1]{MinkLienard}. Since these papers deal only with smooth slow-fast systems, it is natural to ask whether we can use similar methods to study fractal properties of \textit{PWS slow-fast systems}. \begin{figure}[htb] \begin{center} \includegraphics[width=2.3cm,height=3.5cm]{examplecrossing.png} {\footnotesize \put(-13,98){$\Sigma$} \put(-80,78){$X_-$} \put(33,78){$X_+$} } \end{center} \caption{A crossing periodic orbit where $\Sigma$ is the switching manifold.} \label{fig-crossing-example} \end{figure} Piecewise smooth systems \cite{filippov1988differential} are an active field of recent research. 
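Before going further, to fix ideas, the following Python sketch (ours; the concrete choices $F_\pm(x)=x^2$, $G_\pm(x)=-x$ and $\epsilon=0.01$ are illustrative assumptions and not taken from this paper) integrates \eqref{PWSLienard-Intro} numerically and confirms that, after a fast transient, orbits stay $O(\epsilon)$-close to the attracting branch of the curve of singularities $\{y=F(x)\}$ before drifting towards the origin.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01  # singular perturbation parameter (illustrative value)

def rhs(t, u):
    x, y = u
    # F_-(x) = F_+(x) = x^2 and G_-(x) = G_+(x) = -x: a smooth representative of (1)
    return [y - x**2, -eps * x]

sol = solve_ivp(rhs, (0.0, 200.0), [1.5, 1.0], rtol=1e-9, atol=1e-12, max_step=0.05)
x, y = sol.y
mask = (sol.t > 20.0) & (x > 0.2)   # after the fast transient, on the attracting branch
print("max |y - F(x)| along the attracting branch:", np.abs(y - x**2)[mask].max())
\end{verbatim}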
The determination of crossing limit cycles, for example, is an important problem in PWS theory in the plane (see \cite{carmona2023a,llibre2013a,LlibreOrd,HuanYang2,freire2013a} and references therein). Such cycles intersect the switching manifold $\Sigma$ at points where the vector fields $X_-$ and $X_+$ point in the same direction relative to $\Sigma$ (see Figure \ref{fig-crossing-example}). In this paper we are interested in the following limit periodic sets of the PWS slow-fast system \eqref{PWSLienard-Intro} (we also refer to Figure \ref{fig-Motivation} in Section \ref{sec-Motivation}). Assume that the curve of singularities of \eqref{PWSLienard-Intro} contains a normally attracting branch $\{y=F_+(x),x>0\}$ and a normally repelling branch $\{y=F_-(x),x<0\}$. Then the balanced canard cycle $\Gamma_{\hat y}$, with $\hat y>0$ when $\epsilon = 0$, consists of a portion of the normally attracting branch, a portion of the normally repelling branch and a horizontal fast orbit at level $y=\hat y$. These canard cycles may produce crossing limit cycles after a perturbation of \eqref{PWSLienard-Intro} if the slow dynamics of \eqref{PWSLienard-Intro} defined along the curve of singularities points from the attracting branch to the repelling branch and has a regular extension through the origin $(x,y)=(0,0)$ (see Section \ref{section-applications}). This can be done by merging two smooth slow-fast Hopf points \cite{DRbirth} into a so-called PWS slow-fast Hopf point located at the origin $(x,y)=(0,0)$ (see assumption \eqref{assum1} in Section \ref{sec-Motivation}). The PWS slow-fast Hopf point is the limit of $\Gamma_{\hat y}$ when $\hat y\to 0$. If $\hat y\to \infty$, we can have an unbounded canard cycle, which will be denoted by $\Gamma_\infty$, consisting of the curve of singularities and a part at infinity (see Section \ref{sectionunbounded}). The main goal is to give a \textit{complete} fractal classification (i.e., we find all possible Minkowski dimensions) of the PWS slow-fast Hopf point and $\Gamma_{\hat y}$ described in the previous paragraph, related to the PWS continuous Li\'enard family \eqref{PWSLienard-Intro}. The fractal classification of the PWS slow-fast Hopf point is given in Theorems \ref{thm1} and \ref{thm2} in Section \ref{sectionHopf}, whereas the possible Minkowski dimensions of $\Gamma_{\hat y}$ are given in Theorem \ref{thm3} in Section \ref{sectionbounded}. We assume that $\Gamma_{\hat y}$ is balanced, that is, the slow divergence integral computed along the portion of the curve of singularities contained in $\Gamma_{\hat y}$ is zero (see Section \ref{sectionbounded} and \cite{Dbalanced}). In Theorem \ref{thm4} in Section \ref{sectionunbounded} we compute Minkowski dimensions near $\Gamma_\infty$ when \eqref{PWSLienard-Intro} is a PWS classical Li\'enard system (that is, $F_\pm$ are polynomials of degree $n+1$, $n\ge 1$, and $G_\pm$ are linear). Theorems \ref{thm1} to \ref{thm4} are proven in Section \ref{section-proof-all}. In Section \ref{section-applications}, the link between the Minkowski dimensions computed in Section \ref{section-proof-all} and the number of crossing limit cycles of a perturbation of \eqref{PWSLienard-Intro} near $\Gamma_{0}=\{(0,0)\}$, $\Gamma_{\hat{y}}$ and $\Gamma_{\infty}$ is addressed (see system \eqref{eq-pws-lienard-general} in Section \ref{section-applications}). We focus our study on the case in which the Minkowski dimensions are trivial (that is, equal to zero).
Geometrically speaking, close to a PWS Hopf point, trivial Minkowski dimension of orbits tending to $\Gamma_{0}$ means that the connection between the center manifolds is broken in the blow-up locus, so we cannot expect crossing limit cycles (see Section \ref{sec-blow-up} and Figure \ref{fig-pws-lienard}). However, trivial Minkowski dimension of orbits tending to $\Gamma_{\hat{y}}$ does not imply the absence of limit cycles. Indeed, we present an example in which the system undergoes a saddle-node bifurcation of limit cycles (see Section \ref{section-bounded-LC}). Finally, in Section \ref{sec-LC-unbound} we give examples in which the Minkowski dimension of orbits tending to $\Gamma_{\infty}$ is trivial, but, nevertheless, one can expect crossing limit cycles. The number of limit cycles related to higher Minkowski dimensions is a topic for future study (see Remark \ref{remark-nonzeroMD} in Section \ref{section-applications} for some results in that direction). In the smooth setting, that is, when the functions $F$ and $G$ in \eqref{PWSLienard-Intro-FG} are $C^\infty$-smooth, we deal with a smooth slow-fast Hopf point at the origin $(x,y)=(0,0)$ and the following discrete set of values of the Minkowski dimension can be produced (see \cite{BoxNovo}): $\frac{1}{3},\frac{3}{5},\frac{5}{7},\dots,1$. From these values, which can be computed numerically \cite{BoxNovo}, we can read off upper bounds for the number of limit cycles produced by the smooth slow-fast Hopf point (for more details see \cite[Theorem 3.4]{BoxNovo}). Theorems \ref{thm1} and \ref{thm2} imply that the PWS slow-fast Hopf point produces infinitely many new values: $0,\frac{1}{2},\frac{2}{3},\frac{3}{4},\dots$. We strongly believe that these values give information about the number of limit cycles produced by the PWS slow-fast Hopf point. This is a topic of further study. Similarly, besides old values of the Minkowski dimension when $F$ is a polynomial of even degree $n+1$ and $G$ is linear ($\frac{1}{2},\frac{3}{4},\dots,\frac{n-2}{n-1}$, see \cite[Remark 1]{MinkLienard}), in the piecewise smooth setting, $\Gamma_\infty$ produces the following new values: $0,\frac{4}{5},\frac{6}{7},\dots,\frac{n-1}{n}$. We refer to Theorem \ref{thm4} for more details. The main reason why we assume that $F_\pm$ and $G_\pm$ are $C^\infty$-smooth is that we want to detect all possible Minkowski dimensions of orbits. We need higher-order Taylor expansions (i.e., higher degrees of smoothness of $F_\pm$ and $G_\pm$) in order to find larger Minkowski dimensions of orbits (see Sections \ref{proofThm1} and \ref{proofThm3}). Let us highlight the differences between our approach and those already presented in the literature concerning PWS slow-fast systems. In \cite{2016Roberts, 2024CX} the authors studied the existence of crossing canard limit cycles in piecewise smooth Li\'enard equations in which the origin is a corner point of the critical manifold (or curve of singularities) positioned in $\Sigma$, in the sense that $F'(0)$ in \eqref{PWSLienard-Intro} is not well defined. Moreover, the critical manifold of the models studied in such references presents a ``van der Pol-like'' shape. On the other hand, in \cite{2013DFHPT} the authors also address the existence of canard limit cycles, but they consider a three-zoned piecewise smooth Li\'enard equation instead. In \cite{2023CFGT, 2016FGDKT} the authors studied the existence of canard cycles in four-zoned and three-zoned piecewise linear (PWL) systems, respectively.
In all those references, \cite{2023CFGT, 2024CX, 2013DFHPT, 2016FGDKT, 2016Roberts}, the authors fixed a linear function $G$ in \eqref{PWSLienard-Intro} (that is, $G_{-} = G_{+}$) and defined $F$ in a piecewise smooth way. Moreover, in their models, the critical manifold loses smoothness in the intersection with the switching manifold. On the other hand, in this paper, we allow both $F$ and $G$ to be defined in a piecewise smooth way. Moreover, in the study of the Hopf point and the bounded canard cycle, we do not require $G$ to be linear. In addition, the intersection between the critical manifold and $\Sigma$ is not a corner point. Finally, our main tools are fractal geometry, slow divergence integrals and slow relation functions, which were not used in those previous references. A connection between sliding canard cycles in regularized PWS systems and the slow divergence integral can be found in \cite{RHKK2023}. \section{Minkowski dimension}\label{Minkowski-def-separated} Let $U\subset\mathbb R^N$ be a bounded set. One defines its $\delta$-neighborhood (or $\delta$-parallel body) as $ U_\delta:=\{x\in\mathbb R^N \ | \ \text{dist}(x,U)\le\delta\} $, where $\text{dist}(x,U)$ denotes the Euclidean distance from $x$ to the set $U$. Denote the Lebesgue measure of $U_\delta$ by $|U_\delta|$. For $s\ge0$, we introduce the lower $s$-dimensional Minkowski content of $U$ $$ \mathcal M_*^s(U)=\liminf_{\delta\to 0}\frac{|U_\delta|}{\delta^{N-s}}, $$ and similarly, the upper $s$-dimensional Minkowski content $\mathcal M^{*s}(U)$ (replacing $\liminf_{\delta\to0}$ with $\limsup_{\delta\to0}$ above). We then define the lower and upper Minkowski (or box-counting, since they always coincide) dimensions of $U$ as: $$ \underline\dim_BU=\inf\{s\ge0 \ | \ \mathcal M_*^s(U)=0\}, \ \overline\dim_BU=\inf\{s\ge0 \ | \ \mathcal M^{*s}(U)=0\}. $$ When the upper and lower dimensions coincide, we refer to their common value as the Minkowski dimension of $U$, denoted by $\dim_BU$. For a comprehensive treatment of the Minkowski dimension, we direct the reader to \cite{Falconer,tricot} and the references therein. Furthermore, if there exists a $d$ such that $0<\mathcal M_*^d(U)\le\mathcal M^{*d}(U)<\infty$, we say that $U$ is Minkowski nondegenerate, in which case necessarily $d=\dim_B U$. Consider a bi-Lipschitz mapping $\Phi:U \subset \mathbb{R}^{N}\rightarrow \mathbb{R}^{N_1}$, i.e., there exists a constant $\rho>0$ such that $$ \rho\left\|x-y\right\|\leq\left\|\Phi(x)-\Phi(y)\right\| \leq\frac{1}{\rho} \left\|x-y\right\| $$ for all $x,y\in U$. Then it is well known that $$ \underline\dim_{B}U=\underline\dim_{B}\Phi(U), \ \overline\dim_{B}U=\overline\dim_{B}\Phi(U). $$ Moreover, if $U$ is Minkowski nondegenerate, then $\Phi(U)$ is also Minkowski nondegenerate (refer to \cite[Theorem 4.1]{ZuZuR^3}). \smallskip We also introduce here some notation used throughout this paper. For two sequences of positive real numbers $(a_l)_{l\in\mathbb{N}}$ and $(b_l)_{l\in\mathbb{N}}$ converging to zero, we write $a_l\simeq b_l$ as $l\to\infty$ if there exists a small positive constant $\rho$ such that $\frac{a_l}{b_l}\in[\rho,\frac{1}{\rho}]$ for all $l\in\mathbb{N}$.
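The $\delta$-neighborhood characterization above also lends itself to simple numerical experiments. The following \texttt{Python} sketch is an illustration only and is not used anywhere in the paper: under the hypothetical choice $U=\{l^{-a} \ | \ l\ge 1\}$, truncated at a large index, it estimates $\dim_BU$ from the length $|U_\delta|$ of the $\delta$-neighborhood, and the value it should approach is $\frac{1}{1+a}$ (a classical example; see, e.g., \cite{Falconer,tricot}).
\begin{verbatim}
# Illustrative sketch only: estimate dim_B of U = {l^(-a) : l = 1..N} from
# the length |U_delta| of its delta-neighborhood; expected value: 1/(1+a).
import numpy as np

def neighbourhood_length(points, delta):
    # length of the union of the intervals [p - delta, p + delta]
    pts = np.sort(points)
    left, right = pts[0] - delta, pts[0] + delta
    total = 0.0
    for p in pts[1:]:
        if p - delta > right:      # disjoint interval: close current block
            total += right - left
            left = p - delta
        right = p + delta
    return total + (right - left)

a = 2.0
U = np.array([l**(-a) for l in range(1, 20000)])
deltas = np.logspace(-7, -4, 20)
lengths = [neighbourhood_length(U, d) for d in deltas]
# |U_delta| ~ delta^(1-s), so the slope of log|U_delta| vs log(delta) is 1-s
slope = np.polyfit(np.log(deltas), np.log(lengths), 1)[0]
print("estimated dim_B =", 1.0 - slope, " expected =", 1.0/(1.0 + a))
\end{verbatim}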
Note that the Minkowski dimension has proven to be a useful tool in the fractal analysis of various dynamical systems: it enables one to extract information about the cyclicity of a system directly from the Minkowski dimension of one of its orbits \cite{zbMATH02196683,GoranInf}, or even just from the Minkowski dimension of a discrete orbit generated by a suitable Poincar\'e map \cite{EZZ,ZupZub} or Dulac map \cite{zbMATH07307367}. Furthermore, since the Minkowski dimension is always equal to the box dimension \cite{Falconer}, which can be effectively computed numerically \cite{WU2020100106,FREIBERG2021105615,MEISEL19971565,10.1007/978-3-030-64616-5_8,RuizDeMiras2020,Panigrahy2019DifferentialBC,10.1063/5.0160394,BoxNovo}, it is natural to expect that numerical methods for determining cyclicity via the Minkowski dimension can be developed, which adds further value to the results in our paper. It has also been shown that the Minkowski dimension provides a novel tool for the formal and analytic classification of parabolic diffeomorphisms in the complex plane. Even for the formal classification of parabolic germs one first needs to extend the definition of the Minkowski dimension, either as in \cite{zbMATH06224533} or, alternatively, by also looking at higher-order terms in the asymptotic series of the $\delta$-neighborhood of the orbit as $\delta$ tends to zero \cite{zbMATH07584629}. The latter approach is closely connected to the theory of complex (fractal) dimensions and the associated fractal zeta functions introduced by Lapidus and van Frankenhuijsen \cite{LF12} for subsets of $\mathbb{R}$ and then extended to the general case of subsets of $\mathbb{R}^N$ in \cite{Goran}. Furthermore, in order to tackle the analytic classification of parabolic germs one needs to further extend and adapt the theory of complex dimensions as in \cite{KMRR25}. Finally, note that, in contrast to the Minkowski dimension, the Hausdorff dimension would not give us any relevant information about orbits of dynamical systems. The reason for this stems from the countable stability of the Hausdorff dimension, which forces every orbit of a dynamical system to have dimension $1$ in the continuous case and dimension $0$ in the discrete case. On the other hand, the lack of countable stability of the Minkowski dimension is exactly what makes it interesting and useful for the fractal analysis of orbits of dynamical systems. Of course, as is well known, an attractor of a dynamical system can have nontrivial Hausdorff dimension (strange attractors such as the Lorenz or H\'enon attractors, etc.). However, in all cases mentioned above, as well as in this paper, the attractor is either a point (possibly at infinity) or a piecewise smooth curve, hence of trivial Hausdorff dimension. \section{PWS slow-fast Li\'enard systems and statement of results} \label{sec-Motivation} We consider a PWS slow-fast Li\'enard equation \begin{align}\label{PWSLienard} X_-: \begin{cases} \dot x=y-F_-(x) ,\\ \dot y=\epsilon G_-(x), \end{cases} \ \text{for }x\le 0,\quad X_+: \begin{cases} \dot x=y-F_+(x) ,\\ \dot y=\epsilon G_+(x), \end{cases} \ \text{for }x\ge 0, \end{align} where $0<\epsilon\ll 1$ is a singular perturbation parameter and $F_\pm$ and $G_\pm$ are $C^\infty$-smooth functions. We assume that $X_-$ and $X_+$ have a slow-fast Hopf point at the origin $(x, y) = (0,0)$, that is, they satisfy (see also \cite[Definition 1.1]{DRbirth}) \begin{equation} \label{assum1} F_\pm(0)=F_\pm'(0)=G_\pm(0)=0, \ F_\pm''(0)>0 \text{ and } G_\pm'(0)<0.
\end{equation} We say that system \eqref{PWSLienard} satisfying \eqref{assum1} has a PWS slow-fast Hopf point at $(x,y)=(0,0)$. For $\epsilon=0$, system \eqref{PWSLienard} has the curve of singularities $$ S=\{(x,F_-(x)) \ | \ x< 0\}\cup \{(0,0)\}\cup\{(x,F_+(x)) \ | \ x> 0\}.$$ We denote by $ S_-$ (resp. $ S_+$) the branch of $S$ contained in $x< 0$ (resp. $x> 0$). We refer to Figure \ref{fig-Motivation}. From \eqref{assum1}, it follows that (near the PWS slow-fast Hopf point) $ S_-$ (resp. $ S_+$) consists of normally repelling (resp. attracting) singularities. Then we can define the slow vector field of \eqref{PWSLienard} along $S$, near $(x,y)=(0,0)$, as the following PWS vector field \begin{equation}\label{PWSslowdyn} X_-^s: \ \frac{dx}{d\tau}=\frac{G_-(x)}{F_-'(x)}, \ x\le 0, \ \ \quad X_+^s: \ \frac{dx}{d\tau}=\frac{G_+(x)}{F_+'(x)}, \ x\ge 0, \end{equation} where $\tau=\epsilon t$ is the slow time ($t$ denotes the fast time in \eqref{PWSLienard}). Its flow is called the slow dynamics. Using \eqref{assum1}, it is clear that \eqref{PWSslowdyn} has a removable singularity in $x=0$ and the slow dynamics is regular and it points from the attracting branch $S_+$ to the repelling branch $ S_-$. Notice that the slow vector field \cite[Chapter 3]{DDR-book-SF} of the smooth slow-fast system $X_-$ (resp. $X_+$) along $ S_-$ (resp. $ S_+$) is given by $X_-^s$ (resp. $X_+^s$) defined in \eqref{PWSslowdyn}. See also \cite{DHGener}. \smallskip In this paper we focus on fractal analysis of $3$ different types of limit periodic sets of \eqref{PWSLienard}, for $\epsilon=0$ (see Figure \ref{fig-Motivation}): (a) the PWS slow-fast Hopf point at $(x,y)=(0,0)$ (Section \ref{sectionHopf}), (b) bounded canard cycles $\Gamma_{\hat y}$, $\hat y>0$, consisting of the fast horizontal orbit of \eqref{PWSLienard} passing through the point $(0,\hat y)$ and the portion of $S$ between the $\omega$-limit point $(\omega(\hat y),\hat y)\in S_+$ and the $\alpha$-limit point $(\alpha(\hat y),\hat y)\in S_-$ of that orbit (Section \ref{sectionbounded}), and (c) an unbounded canard cycle consisting of $S$ and a part at infinity (Section \ref{sectionunbounded}). When we deal with the canard cycles in (b) and (c), we need some additional assumptions on the functions $F_\pm$ and $G_\pm$: \begin{equation} \label{assum2} F_-'(x)<0, \ G_-(x)>0, \ \forall x\in L_-, \quad F_+'(x)>0, \ G_+(x)<0, \ \forall x\in L_+, \end{equation} where $L_-=[\alpha(\hat y),0[$ and $L_+=]0,\omega(\hat y)]$ in case (b) and $L_-=]-\infty,0[$ and $L_+=]0,\infty[$ in case (c). The assumptions in \eqref{assum2} imply that the slow vector field \eqref{PWSslowdyn} is well-defined on the closure $\overline {L_-\cup L_+}$ and it has no singularities. \begin{figure}[htb] \begin{center} \includegraphics[width=6.9cm,height=6.5cm]{Motivation.png} {\footnotesize \put(-29,148){$S_+$} \put(-111,188){$x=0$} \put(-71,99){$\Gamma_{\hat y}$} \put(-100,147){$y_0$} \put(-100,159){$y_1$} \put(-100,173){$y_2$} \put(-100,119){$y_0$} \put(-100,106){$y_1$} \put(-100,99){$y_2$} \put(-101,49){$y_0$} \put(-101,40){$y_1$} \put(-101,33){$y_2$} \put(-59,62){$I_+$} \put(-152,62){$I_-$} \put(-44,94){$(\omega (\hat y),\hat y)$} \put(-191,94){$(\alpha (\hat y),\hat y)$} \put(-158,133){$S_-$} } \end{center} \caption{The phase portrait of \eqref{PWSLienard} for $\epsilon=0$, with indication of the slow dynamics along the curve of singularities $S$. 
Orbits $U=\{y_0,y_1,\dots\}$ generated by the slow relation function $H$ can converge to the PWS slow-fast Hopf point $(x,y)=(0,0)$, a canard cycle $\Gamma_{\hat y}$ (green) or the unbounded canard cycle.} \label{fig-Motivation} \end{figure} The canard cycles considered throughout this paper are, in fact, crossing canard cycles according to Filippov's convention \cite{filippov1988differential}. More precisely, when one deals with piecewise smooth vector fields, one can define sewing and sliding regions in the switching locus $\Sigma = \{x = 0\}$, which are given by \begin{equation*} \begin{array}{ccc} \Sigma^{w} & = & \big{\{}(x,y)\in\Sigma \ ; \ (y - F_{-}(x))(y - F_{+}(x)) > 0\big{\}}, \\ \Sigma^{s} & = & \big{\{}(x,y)\in\Sigma\ ; \ (y - F_{-}(x))(y - F_{+}(x)) < 0\big{\}}, \end{array} \end{equation*} respectively. It follows directly from the assumptions in \eqref{assum1} that $\Sigma^{s} = \emptyset$ and $\Sigma^{w} = \Sigma\backslash\{0\}$. Slow divergence integrals (see \cite[Chapter 5]{DDR-book-SF} and \cite{DHGener}) play an important role in the fractal analysis of the limit periodic sets defined above. The slow divergence integrals of $X_-$ (resp. $X_+$) associated with the segment $[\alpha(y),0]$ (resp. $[0,\omega(y)]$) are given by \begin{equation}\label{PWSSDI} I_-(y):=-\int_{\alpha(y)}^{0}\frac{F'_-(x)^2}{G_-(x)}dx<0, \ I_+(y):=-\int_{\omega(y)}^{0}\frac{F_+'(x)^2}{G_+(x)}dx<0, \end{equation} respectively, where $\alpha(y)<0$, $F_-(\alpha(y))=y$, $\omega(y)>0$ and $F_+(\omega(y))=y$. The argument $y>0$ of $I_\pm$ is close to $y=0$ (case (a)), $y=\hat y$ (case (b)) or large enough (case (c)). It is not difficult to see that $I_\pm'(y)<0$ and $I_\pm(y)\to 0$ as $y$ tends to $0$. We also define \begin{equation} \label{SDI-total} I(y):=I_+(y)-I_-(y). \end{equation} Our goal is to compute the Minkowski dimension of orbits generated by the so-called slow relation function $H$ (or its inverse), defined by \begin{equation}\label{slow-relation-def} I_-(H(y))=I_+(y). \end{equation} See \cite[Section 4]{Dbalanced} for more details concerning the slow relation function in the framework of smooth slow-fast systems. We denote by $U$ the orbit of $y_0>0$ by $H$, that is, $U=\{y_l=H^l(y_0) \ | \ l\in\mathbb N\}$, where $H^l$ is the $l$-fold composition of $H$. In Section \ref{sectionHopf} (resp. Sections \ref{sectionbounded} and \ref{sectionunbounded}) we consider orbits $U$ that tend to $0$ (resp. $\hat y$ and $\infty$). \smallskip \subsection{Fractal analysis of the PWS slow-fast Hopf point}\label{sectionHopf} In this section we consider \eqref{PWSLienard} in a small neighborhood of the PWS slow-fast Hopf point $(x,y)=(0,0)$. Since the integrals $I_-$ and $I_+$ are (continuous) decreasing functions and tend to zero as $y\to 0$, it is clear that, for each $y>0$ small enough, there is a unique $H(y)>0$ such that \eqref{slow-relation-def} holds. Analogously, for each $y>0$ small enough, there is a unique $H^{-1}(y)>0$ such that $I_-(y)=I_+(H^{-1}(y))$. Furthermore, we assume that there is a small $y^*>0$ such that $I$ defined in \eqref{SDI-total} is nonzero in the open interval $]0,y^*[$. Now, given $y_0\in ]0,y^*[$, if $I>0$ (resp. $I<0$) on $]0,y^*[$, then we denote by $U$ the orbit $\{y_0,y_1,y_2,\dots\}$ defined by \begin{center} $I_-(y_{l+1})=I_+(y_l)$ \ (resp. $I_-(y_{l})=I_+(y_{l+1})$), \ with $l\ge 0$. \end{center} Observe that $U$ is the orbit of $y_0$ by $H$ (resp. $H^{-1}$) and it tends monotonically to $0$ as $l\to\infty$. Conversely, if $y_0>0$ and the orbit of $y_0$ by $H$ (resp.
$H^{-1}$) tends monotonically to $0$, then $I>0$ (resp. $I<0$) in the open interval $]0,y^*[$, for a small $y^*>0$. \begin{theorem}\label{thm1} Consider a PWS slow-fast Li\'enard system \eqref{PWSLienard} and assume that \eqref{assum1} is satisfied. Given $y_0\in ]0,y^*[$, let $U$ be the orbit with the initial point $y_0$ (as defined above). Then $\dim_B U$ exists and \begin{equation} \label{formula-BOX} \dim_BU\in\left \{\frac{m-1}{m+1} \ | \ m=1,2,\dots\right\}\cup\{1\}. \end{equation} If $\dim_BU\ne 0,1$, then $U$ is Minkowski nondegenerate. These results do not depend on the choice of the initial point $y_0\in ]0,y^*[$. \end{theorem} \begin{remark}\label{remark-old-new} From \eqref{formula-BOX} in Theorem \ref{thm1} it follows that $\dim_BU$ can take the following discrete set of values: $0,\frac{1}{3},\frac{1}{2},\frac{3}{5},\frac{2}{3},\frac{5}{7},\dots,1$. We point out that the values $\frac{1}{3},\frac{3}{5},\frac{5}{7},\dots$ ($m$ even) and $1$ are found near smooth slow-fast Hopf points (see \cite{BoxNovo}). Besides these old values of the Minkowski dimension, the PWS slow-fast Hopf point in \eqref{PWSLienard} also produces infinitely many new values: $0,\frac{1}{2},\frac{2}{3},\dots$ ($m$ odd). See also Theorem \ref{thm2} and Remark \ref{remark-normal}. \end{remark} The proof of Theorem \ref{thm1} goes as follows (for more details we refer to Section \ref{proofThm1}). Firstly, using assumptions \eqref{assum1} we can write $$F_\pm(x)=x^2f_\pm(x),$$ where $f_-$, $f_+$ are $C^\infty$-smooth functions and $f_\pm(0)>0$. Then, it can be easily checked that the homeomorphism \begin{align}\label{PWShomeo} T(x,y)= \begin{cases} (x\sqrt{f_-(x)},y), \ x<0,\\ (x,y), \ x=0,\\ (x\sqrt{f_+(x)},y), \ x>0, \end{cases} \end{align} is a $\Sigma$-equivalence in the sense of \cite[Definition 2.20]{GST}, which brings $X_-$ (resp. $X_+$) defined in \eqref{PWSLienard}, locally near $(x,y)=(0,0)$, into $\widetilde X_-$ (resp. $\widetilde X_+$), after multiplication by $\psi'_->0$ (resp. $\psi'_+>0$), where \begin{align}\label{PWSLienardNormal} \widetilde X_-: \begin{cases} \dot x=y-x^2 ,\\ \dot y=\epsilon G_-(\psi_-(x))\psi'_-(x), \end{cases} x\le 0,\ \widetilde X_+: \begin{cases} \dot x=y-x^2 ,\\ \dot y=\epsilon G_+(\psi_+(x))\psi'_+(x), \end{cases} x\ge 0, \end{align} and $\psi_\pm$ is the inverse of $x\to x\sqrt{f_\pm(x)}$. System \eqref{PWSLienardNormal} has a Li\'enard form similar to \eqref{PWSLienard}, with a PWS slow-fast Hopf point at $(x,y)=(0,0)$. We show that \eqref{PWSLienard} and \eqref{PWSLienardNormal} have the same slow relation function (Section \ref{proofThm1}), and therefore they produce the same orbits. Thus, it suffices to give a complete fractal classification of \eqref{PWSLienardNormal}, using the Minkowski dimension. Such fractal classification is given in Theorem \ref{thm2} below. In summary, Theorem \ref{thm1} follows from Theorem \ref{thm2}. We define \begin{equation}\label{FunctionMultip} \bar{G}(x):=G_-(\psi_-(x))\psi'_-(x)+G_+(\psi_+(-x))\psi'_+(-x). \end{equation} In what follows, $m_0(\bar G)$ denotes the multiplicity of the zero $x=0$ of $\bar G$, and $g_{m_0(\bar G)}\ne 0$ denotes the $m_0(\bar G)$-th Taylor coefficient of $\bar G$ about $x=0$, if $m_0(\bar G)$ is finite. In Section \ref{proofThm1} we prove the following result. \begin{theorem}\label{thm2} Consider \eqref{PWSLienardNormal} and let $H$ be its slow relation function. Given $y_0\in ]0,y^*[$, then the following statements hold. \begin{enumerate} \item Suppose that $1\le m_0(\bar G)<\infty$. 
If $(-1)^{m_0(\bar G)+1}g_{m_0(\bar G)}>0$ (resp. $<0$), then the orbit $U=\{y_l \ | \ l\in\mathbb N\}$ of $y_0$ by $H$ (resp. $H^{-1}$) tends monotonically to $0$ and the following bijective correspondence holds \begin{equation} \label{1-1} m_0(\bar G)=\frac{1+\dim_BU}{1-\dim_B U}. \end{equation} Moreover, if $1< m_0(\bar G)<\infty$, then $U$ is Minkowski nondegenerate. \item If $m_0(\bar G)=\infty$, then $\dim_BU=1$. \end{enumerate} The above results do not depend on the choice of the initial point $y_0\in ]0,y^*[$. \end{theorem} \begin{remark} \label{remark-normal} From \eqref{1-1}, it follows that $$\dim_BU=\frac{m_0(\bar G)-1}{m_0(\bar G)+1}.$$ This and Statement 2 of Theorem \ref{thm2} imply \eqref{formula-BOX}. If we assume that $F$ and $G$ in \eqref{PWSLienard-Intro-FG} are $C^\infty$-smooth, then the function $\bar G$ defined in \eqref{FunctionMultip} is even and, as a consequence of Theorem \ref{thm2}, we have the following sequence of values of $\dim_BU$: $1, \frac{1}{3},\frac{3}{5},\frac{5}{7},\dots$. \end{remark} \subsection{Fractal analysis of bounded balanced canard cycles}\label{sectionbounded} In this section we focus on the fractal analysis of bounded balanced canard cycles $\Gamma_{\hat y}$, with $\hat y>0$. We call $\Gamma_{\hat y}$ a balanced canard cycle if $I(\hat y)=0$, with $I$ being the integral defined in \eqref{SDI-total}. From now on, we assume that $\Gamma_{\hat y}$ is balanced. By \eqref{assum2}, \eqref{PWSSDI} and the Implicit Function Theorem, there is a function $H(y)$ such that $H(\hat y)=\hat y$ and $$I_-(H(y))=I_+(y),$$ for $y$ kept close to $\hat y$ (see \eqref{slow-relation-def}). If we differentiate this last equation, we obtain $H'>0$ (recall that $I_\pm'(y)<0$). This implies that orbits generated by $H$ are monotone. Assume that there exists a small $y^*>0$ such that $I$ is nonzero in the open interval $]\hat y,\hat y+y^*[$. Given $y_0\in ]\hat y,\hat y+y^*[$, if $I>0$ (resp. $I<0$) in $]\hat y,\hat y+y^*[$, then $U$ denotes the orbit $\{y_0,y_1,y_2,\dots\}$ defined by \begin{center} $I_-(y_{l+1})=I_+(y_l)$ \ (resp. $I_-(y_{l})=I_+(y_{l+1})$), \ with $l\ge 0$. \end{center} Notice that $U$ is the orbit of $y_0$ by $H$ (resp. $H^{-1}$) and it converges monotonically to $\hat y$ as $l\to\infty$. See also Section \ref{sectionHopf}. Denote by $m_{\hat y}(I)$ the multiplicity of the zero $y=\hat y$ of $I$. When $m_{\hat y}(I)$ is finite, $i_{m_{\hat y}(I)}\ne 0$ denotes the $m_{\hat y}(I)$-th Taylor coefficient of $I$ about $y=\hat y$. The fractal classification of $\Gamma_{\hat{y}}$ is given in Theorem \ref{thm3}, which is proved in Section \ref{proofThm3}. \begin{theorem} \label{thm3} Consider system \eqref{PWSLienard} and let $H$ be its slow relation function defined near a balanced canard cycle $\Gamma_{\hat y}$. If $y_0\in ]\hat y,\hat y+y^*[$, then the following statements are true. \begin{enumerate} \item Suppose that $1\le m_{\hat y}(I) <\infty$. If $i_{m_{\hat y}(I)}>0$ (resp. $<0$), then the orbit $U=\{y_l \ | \ l\in\mathbb N\}$ of $y_0$ by $H$ (resp. $H^{-1}$) tends monotonically to $\hat y$ and the following bijective correspondence holds \begin{equation} \label{1-1-balanced-PWS} m_{\hat y}(I)=\frac{1}{1-\dim_B U}. \end{equation} Moreover, if $1< m_{\hat y}(I)<\infty$, then $U$ is Minkowski nondegenerate. \item If $m_{\hat y}(I)=\infty$, then $\dim_BU=1$. \end{enumerate} The above results do not depend on the choice of the initial point $y_0\in ]\hat y,\hat y+y^*[$.
\end{theorem} \begin{remark}\label{remark-balanceddd} Using \eqref{1-1-balanced-PWS}, it follows that $$\dim_B U=\frac{m_{\hat y}(I)-1}{m_{\hat y}(I)}.$$ This result, combined with Statement 2 of Theorem \ref{thm3}, produces the following set of values of $\dim_B U$: $0,\frac{1}{2},\frac{2}{3},\dots,1$. We highlight that, in the PWS setting, we do not obtain new values of the Minkowski dimension in comparison with the smooth setting, due to the $C^\infty$-smoothness of $I_-$ and $I_+$ at $y=\hat y$. In other words, the above values can also be produced by balanced canard cycles in the smooth setting (see also \cite{BoxRenato}). \end{remark} \subsection{Fractal analysis of PWS classical Li\'enard equations near infinity}\label{sectionunbounded} In this section we consider a special case of \eqref{PWSLienard}, namely PWS classical Li\'{e}nard equations of degree $n+1$, which are given by \begin{equation} \label{PWSLienard_inf} \begin{vf} \dot{x} &=& y-\sum\limits_{k=2}^{n+1} B_k^-x^k, \\ \dot{y} &=&-\epsilon A_1^{-}x, \end{vf} \ \ x\le 0, \ \ \begin{vf} \dot{x} &=& y-\sum\limits_{k=2}^{n+1} B_k^+x^k, \\ \dot{y} &=&-\epsilon A_1^{+}x, \end{vf} \ \ x\ge 0, \end{equation} where $n \in\mathbb N_1$, $B_{n+1}^\pm\ne 0$, $A_{1}^{\pm}> 0$ and $B_{2}^{\pm}>0$. It is clear that system \eqref{PWSLienard_inf} satisfies \eqref{assum1}. Additionally, it is assumed that \eqref{PWSLienard_inf} satisfies \eqref{assum2} with $L_-=]-\infty,0[$ and $L_+=]0,\infty[$. As a direct consequence of \eqref{assum2}, we have $(-1)^{n+1}B_{n+1}^->0$ and $B_{n+1}^+>0$. After a rescaling $(x,t)=(a^{\pm}X,c^{\pm}T)$, we can bring \eqref{PWSLienard_inf} into \begin{equation} \label{model-Lienard1} \begin{vf} \dot{x} &=& y- \left((-1)^{n+1}x^{n+1}+\sum\limits_{k=2}^{n} b_k^-x^k \right)\\ \dot{y} &=&-\epsilon a_1^{-}x \end{vf} \ \ x\le 0, \quad \begin{vf} \dot{x} &=& y -\left(x^{n+1}+\sum\limits_{k=2}^{n} b_k^+x^k \right)\\ \dot{y} &=& -\epsilon a_{1}^{+}x \end{vf} \ \ x\ge 0, \end{equation} where $a_{1}^{\pm}> 0$, $b_{2}^{\pm}>0$ (when $n=1$, we have $b_{2}^{\pm}=1$) and we denote $(X,T)$ again by $(x,t)$. It can be easily checked that \eqref{PWSLienard_inf} and \eqref{model-Lienard1} have the same slow relation function, and therefore it suffices to give a complete fractal classification of \eqref{model-Lienard1} near infinity. Denote $F_-(x)=(-1)^{n+1}x^{n+1}+\sum\limits_{k=2}^{n} b_k^-x^k$, $F_{+}(x)=x^{n+1}+\sum\limits_{k=2}^{n} b_k^+x^k$ and $G_\pm(x)=- a_{1}^{\pm}x$. Then we have $$\frac{F'_\pm(x)^2}{G_\pm(x)}=-\frac{(n+1)^2}{a_1^\pm}x^{2n-1}\left(1+o(1)\right), \ x\to\pm\infty.$$ This and \eqref{PWSSDI} imply that $I_\pm(y)\to -\infty$ as $y\to +\infty$. Let us recall that $I_\pm$ are (strictly) decreasing functions and $I_\pm(y)\to 0$ as $y$ tends to $0$. We conclude that the slow relation function $H$ (see \eqref{slow-relation-def}) is well-defined for all $y>0$. We suppose that there exists $y^*>0$ large enough such that $I=I_+-I_-$ is nonzero in the open interval $]y^*,\infty[$. Given $y_0\in ]y^*,\infty[$, if $I<0$ (resp. $I>0$) on $]y^*,\infty[$, we define the orbit $U=\{y_0,y_1,y_2,\dots\}$ by \begin{center} $I_-(y_{l+1})=I_+(y_l)$ \ (resp. $I_{-} (y_{l})=I_+(y_{l+1})$), \ with $l\ge 0$. \end{center} Then $U$ is the orbit of $y_0$ by $H$ (resp. $H^{-1}$) and it tends monotonically to $+\infty$.
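Such orbits can be generated numerically. The following \texttt{Python} sketch is an illustration only and uses the hypothetical parameter values $n=2$, $b_2^-=1$, $b_2^+=2$ and $a_1^-=a_1^+=1$ (so that $f_2=1$ and $k_0=2$ in the notation introduced below): it evaluates the slow divergence integrals \eqref{PWSSDI} by numerical quadrature and iterates $I_-(y_{l+1})=I_+(y_l)$; for these values, Theorem \ref{thm4} below predicts $\dim_BU=\frac{1}{2}$.
\begin{verbatim}
# Illustrative sketch only (hypothetical parameters): n = 2,
# F_-(x) = -x^3 + x^2, F_+(x) = x^3 + 2x^2, G_-(x) = G_+(x) = -x,
# i.e. b_2^- = 1, b_2^+ = 2, a_1^- = a_1^+ = 1, so f_2 = 1 and k_0 = 2.
from scipy.integrate import quad
from scipy.optimize import brentq

Fm = lambda x: -x**3 + x**2                 # F_- on x <= 0
Fp = lambda x: x**3 + 2.0*x**2              # F_+ on x >= 0
# For these choices F'_pm(x)^2 / G_pm(x) simplifies to -x*(2 - 3x)^2 and
# -x*(3x + 4)^2, which removes the harmless 0/0 at x = 0 from the integrand.
integrand_m = lambda x: -x * (2.0 - 3.0*x)**2
integrand_p = lambda x: -x * (3.0*x + 4.0)**2

def alpha(y):                               # alpha(y) < 0, F_-(alpha(y)) = y
    return brentq(lambda x: Fm(x) - y, -1.0 - y, 0.0)

def omega(y):                               # omega(y) > 0, F_+(omega(y)) = y
    return brentq(lambda x: Fp(x) - y, 0.0, 1.0 + y)

def I_minus(y):                             # slow divergence integral I_-(y) < 0
    return -quad(integrand_m, alpha(y), 0.0)[0]

def I_plus(y):                              # slow divergence integral I_+(y) < 0
    return -quad(integrand_p, omega(y), 0.0)[0]

y = 10.0                                    # hypothetical initial point y_0
for l in range(15):                         # orbit of the slow relation function
    y_next = brentq(lambda Y: I_minus(Y) - I_plus(y), y, 100.0*y + 100.0)
    r, r_next = y**(-1.0/3.0), y_next**(-1.0/3.0)
    print(l, y, (r - r_next)/r**2)
    y = y_next
\end{verbatim}
The printed quantity $(r_l-r_{l+1})/r_l^{2}$, with $r_l=y_l^{-1/3}$, is expected to stay bounded away from $0$ and $\infty$ for these hypothetical parameters, in line with the nondegeneracy statement of Theorem \ref{thm4} below.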
We define the lower Minkowski dimension of $U$ by \begin{equation} \label{Minkowski-def-inf} \underline\dim_BU=\underline\dim_B \Big\{\frac{1}{y_l^\frac{1}{n+1}} \ | \ l\in \mathbb{N} \Big\}, \end{equation} and similarly for the upper Minkowski dimension $\overline\dim_BU$. If $\underline\dim_BU=\overline\dim_BU$, then we denote it by $\dim_BU$ and call it the Minkowski dimension of $U$. In Section \ref{proofThm4}, system \eqref{model-Lienard1} will be studied on the Poincar\'e--Lyapunov disc of degree $(1, n + 1)$ and the exponent of $y_l$ in \eqref{Minkowski-def-inf} is related to that degree (see also \cite[Section 2]{MinkLienard} for more details). We introduce the notation \begin{align} F_{+}(x)-F_{-}(-x)&= \sum\limits_{k=2}^{n} \left(b_k^++(-1)^{k+1}b_k^-\right)x^k \nonumber \\ &=(b_{2}^{+}-b_{2}^{-})x^2+(b_{3}^{+}+b_{3}^{-})x^3+\dots+(b_{n}^{+}+(-1)^{n+1}b_{n}^{-})x^n \nonumber\\ &=:f_{2}x^2+f_{3}x^3+\dots+f_{n}x^{n}. \label{Function_f_n} \end{align} In the case $n=1$, we have $F_{+}(x)-F_{-}(-x)=0$. When one of the coefficients $f_k$ in \eqref{Function_f_n} is nonzero, we denote by $k_0$ the maximal $k$ with the property $f_k\ne 0$. Now we are able to state the main result of this section. Theorem \ref{thm4} is proved in Section \ref{proofThm4}. \begin{theorem}\label{thm4} Consider system \eqref{model-Lienard1} and the associated slow relation function $H$. Given $y_0\in ]y^*,\infty[$, the following statements hold. \begin{enumerate} \item Suppose that $a_{1}^{-}\neq a_{1}^{+}$. If $a_{1}^{-}> a_{1}^{+}$ (resp. $a_{1}^{-}< a_{1}^{+}$), then the orbit $U=\{y_l \ | \ l\in\mathbb N\}$ of $y_0$ by $H$ (resp. $H^{-1}$) tends monotonically to $+\infty$ and \begin{equation*} \dim_{B}U=0. \end{equation*} \item Suppose that $a_{1}^{-}= a_{1}^{+}$ and that one of the coefficients $f_k$ is nonzero with $k_0\ne n-1$. If $f_{k_0}(1+k_0-n)>0$ (resp. $f_{k_0}(1+k_0-n)<0$), then the orbit $U=\{y_l \ | \ l\in\mathbb N\}$ of $y_0$ by $H$ (resp. $H^{-1}$) tends monotonically to $+\infty$, $U$ is Minkowski nondegenerate and \begin{equation} \label{formula-BOX_inf} \dim_{B} U=\frac{n+1-k_0}{n+2-k_0}. \end{equation} \end{enumerate} These results do not depend on the choice of the initial point $y_0$. \end{theorem} If $a_{1}^{-}= a_{1}^{+}$ and $f_2=\dots=f_n=0$, then we have $H(y)=y$ ($I\equiv 0$) and each orbit $U$ is a fixed point of $H$ with the trivial Minkowski dimension ($\dim_{B}U=0$). \begin{remark} Suppose that $F_-=F_+$ and it is a polynomial of even degree $n+1$ ($n$ odd) and $G_-=G_+$ ($a_{1}^{-}= a_{1}^{+}$). Then \eqref{Function_f_n} implies that $f_k=0$ for $k$ even (hence, $k_0$ is odd and $k_0\ne n-1$). From \eqref{formula-BOX_inf} it follows that we have the following Minkowski dimensions of $U$: $\frac{1}{2},\frac{3}{4},\dots,\frac{n-4}{n-3},\frac{n-2}{n-1}$ (see also \cite[Remark 1]{MinkLienard}). In the PWS setting, we get for $n$ odd or even: $0,\frac{1}{2},\frac{3}{4},\frac{4}{5},\dots,\frac{n-2}{n-1},\frac{n-1}{n}$. We refer to Statement 1 and Statement 2 of Theorem \ref{thm4}. The case $k_0= n-1$ is a topic of further study. We believe that in this case the Minkowski dimension is different from $\frac{2}{3}$ and that one has to deal with more complicated expansions of the functions involved. \end{remark} \section{Proof of Theorems \ref{thm1}--\ref{thm4}}\label{section-proof-all} \subsection{Proof of Theorems \ref{thm1} and \ref{thm2}}\label{proofThm1} We consider \eqref{PWSLienard} and assume that \eqref{assum1} is satisfied.
Recall that the integrals $I_-$ and $I_+$ are defined in \eqref{PWSSDI} and $I$ in \eqref{SDI-total}. We denote by $\widetilde I_-,\widetilde I_+,\widetilde I$ the integrals $I_-,I_+,I$ computed for system \eqref{PWSLienardNormal}: \begin{equation}\label{PWSSDINormal1} \widetilde I_\pm(y)=-4\int_{\pm\sqrt y}^{0}\frac{x^2}{G_\pm(\psi_\pm(x))\psi'_\pm(x)}dx, \ \widetilde I(y)=\widetilde I_+(y)-\widetilde I_-(y). \end{equation} Since the slow divergence integral is invariant under changes of coordinates and time reparameterizations (see \cite[Chapter 5]{DDR-book-SF}), we have $\widetilde I_\pm (y)=I_\pm(y)$ (this can be easily seen if we use the change of variable $s=\psi_\pm(x)$ in $\widetilde I_\pm$). Now, it is clear that $H$ defined in \eqref{slow-relation-def} is also the slow relation function of \eqref{PWSLienardNormal} ($\widetilde I_-(H(y))=\widetilde I_+(y)$). This implies that Theorem \ref{thm1} follows from Theorem \ref{thm2}. Therefore, in the rest of this section we prove Theorem \ref{thm2} and, from now on, when we refer to \eqref{PWSSDINormal1}, we use $I_-,I_+,I$ instead of $\widetilde I_-,\widetilde I_+,\widetilde I$. We will prove Theorem \ref{thm2} for $I>0$ on $]0,y^*[$. Let $y_0\in ]0,y^*[$. Then the orbit $U$ of $y_0$ by $H$ tends monotonically to $0$ as $l\to \infty$ ($I_-(y_{l+1})=I_+(y_l)$ with $l\ge 0$, see Section \ref{sectionHopf}). The case where $I<0$ on $]0,y^*[$ can be treated in a similar fashion. We introduce the notation $$g_\pm(x)=G_\pm(\psi_\pm(x))\psi'_\pm(x),$$ and therefore Equation \eqref{FunctionMultip} can be written as $$\bar{G}(x) = g_{-}(x) + g_{+}(-x).$$ From assumption \eqref{assum1} and the definition of $\psi_\pm$ after \eqref{PWSLienardNormal}, it follows that $g_\pm(0)=0$ and $g'_\pm(0)<0$. Moreover, using the Taylor series of $g_{\pm}$ at the origin, one can write the integrand in \eqref{PWSSDINormal1} as \begin{align}\label{integrand1} \frac{x^2}{g_\pm(x)}&=\frac{1}{g'_\pm(0)}x+O(x^2). \end{align} For the integral $I(y_l)$ we get \begin{align}\label{intEE1} I(y_l)=& I_+(y_l)-I_-(y_{l+1})+I_-(y_{l+1})-I_-(y_{l})\nonumber \\ =&4\int_{-\sqrt {y_l}}^{-\sqrt {y_{l+1}}}\frac{x^2}{g_-(x)}dx\nonumber\\ =& -\frac{2}{g_-'(0)}y_l\int_{\frac{y_{l+1}}{y_l}}^{1}\left (1+O(\sqrt{y_l s})\right)ds. \end{align} In the second step in \eqref{intEE1} we use $I_-(y_{l+1})=I_+(y_l)$ and in the last step we use \eqref{integrand1} and then the change of variable $s=\frac{x^2}{y_l}$. The idea is to compare \eqref{intEE1} with equivalent asymptotic expansions of \eqref{PWSSDINormal1}. There are three cases that must be considered: $m_0(\bar G)=1$, $1< m_0(\bar G)<\infty$ and $m_0(\bar G)=\infty$. \\ \\ (a) $m_0(\bar G)=1$. This holds if, and only if, $g'_-(0)\ne g'_+(0)$. Using \eqref{PWSSDINormal1} and \eqref{integrand1}, we get \begin{align}\label{intasymp1} I(y)=&-4\int_{\sqrt y}^{0}\frac{x^2}{g_+(x)}dx+4\int_{-\sqrt y}^{0}\frac{x^2}{g_-(x)}dx\nonumber\\ =& 2\left(\frac{1}{g_+'(0)}-\frac{1}{g_-'(0)}\right)y(1+o(1)), \end{align} where $o(1)$ tends to zero as $y\to 0$. It is clear from \eqref{intasymp1} that $I>0$ is equivalent to $g_-'(0)>g_+'(0)$ (and recall that $g'_\pm(0)<0$). From \eqref{intasymp1} with $y=y_l$, \eqref{intEE1} and $y_l\to 0$ as $l\to\infty$ it follows that $$\lim_{l\to\infty}\frac{y_{l+1}}{y_l}=\frac{g_-'(0)}{g_+'(0)}\in ]0,1[.$$ This implies that the orbit $U$ converges exponentially to zero as $l\to \infty$, that is, there exist $\lambda\in]0,1[$ and a constant $c>0$ such that $0<y_l\le c\lambda^l$ for all $l$.
It follows from \cite[Lemma 1]{EZZ} that $\dim_BU=0$ and the Minkowski dimension does not depend on the choice of $y_0$. This completes the proof of Statement 1 of Theorem \ref{thm2} when $m_0(\bar G)=1$. \\ \\ (b) $1< m_0(\bar G)<\infty$. This holds if, and only if, $g'_-(0)= g'_+(0)$. Using the Taylor series at $0$, one can write the integrand of \eqref{PWSSDINormal1} as \begin{align}\label{integrand2} \frac{x^2}{g_\pm(x)}&=\frac{x^2}{T_\pm(x)+\gamma_\pm x^{m_0(\bar G)}+O(x^{m_0(\bar G)+1})} \nonumber\\ &=\frac{x^2}{T_\pm(x)}-\frac{\gamma_\pm}{\left(g'_\pm(0)\right)^2} x^{m_0(\bar G)}+O(x^{m_0(\bar G)+1}), \end{align} where $T_\pm$ is the Taylor polynomial of degree $m_0(\bar G)-1$ of $g_\pm$ at $x=0$, $T_\pm'(0)=g'_\pm(0)$ and $\gamma_\pm$ is the $m_0(\bar G)$-th Taylor coefficient of $g_\pm$ about $x=0$. Notice that \begin{equation}\label{gamma+-} g_{m_0(\bar G)}=\gamma_-+(-1)^{m_0(\bar G)}\gamma_+\ne 0, \end{equation} where $g_{m_0(\bar G)}$ denotes the $m_0(\bar G)$-th Taylor coefficient of $\bar G$ about $x=0$ (recall its definition after Equation \eqref{FunctionMultip}). A direct consequence of \eqref{gamma+-} is \begin{equation}\label{eq-gamma+-further} (-1)^{m_0(\bar G)+1}g_{m_0(\bar G)} = (-1)^{m_0(\bar G)+1}\gamma_{-} - \gamma_{+}. \end{equation} The integral $I$ can be written as \begin{align}\label{intasymp2} I(y)=&-4\int_{\sqrt y}^{0}\frac{x^2}{T_+(x)}dx+4\int_{-\sqrt y}^{0}\frac{x^2}{T_-(x)}dx\nonumber\\ \ & +\frac{4(-1)^{m_0(\bar G)+1}g_{m_0(\bar G)}}{\left(g_+'(0)\right)^2(m_0(\bar G)+1)}y^\frac{m_0(\bar G)+1}{2}(1+o(1))\nonumber\\ =& \frac{4(-1)^{m_0(\bar G)+1}g_{m_0(\bar G)}}{\left(g_+'(0)\right)^2(m_0(\bar G)+1)}y^\frac{m_0(\bar G)+1}{2}(1+o(1)), \end{align} where $o(1)$ tends to zero as $y\to 0$. In the first step in \eqref{intasymp2} we use \eqref{PWSSDINormal1}, \eqref{integrand2}, \eqref{gamma+-} and the fact that $g'_-(0)= g'_+(0)$ (because $m_0(\bar G)>1$). We also used the relation \eqref{eq-gamma+-further}. Moreover, in the second step we use the fact that $$\int_{\sqrt y}^{0}\frac{x^2}{T_+(x)}dx=\int_{-\sqrt y}^{0}\frac{x^2}{T_-(x)}dx.$$ This follows directly from $T_-(x)=-T_+(-x)$ (observe that the Taylor polynomial $T_-(x)+T_+(-x)$ of degree $m_0(\bar G)-1$ of $\bar G$, defined in \eqref{FunctionMultip}, at $x=0$ is identically zero). A simple consequence of \eqref{intasymp2} is that $I>0$ if, and only if, $(-1)^{m_0(\bar G)+1}g_{m_0(\bar G)}>0$. Finally, \eqref{intasymp2} and the Mean Value Theorem for Integrals applied to \eqref{intEE1} yield \begin{equation}\label{eqsim} y_l-y_{l+1}\simeq y_l^\frac{m_0(\bar G)+1}{2}, \ l\to\infty, \end{equation} where the notation $\simeq$ was introduced in Section \ref{Minkowski-def-separated}. Note that $\nu:=\frac{m_0(\bar G)+1}{2}>1$. It follows from Equation \eqref{eqsim} and \cite[Theorem 1 and Remark 1]{EZZ} that the orbit $U$ is Minkowski nondegenerate and $$\dim_BU=1-\frac{1}{\nu}=\frac{m_0(\bar G)-1}{m_0(\bar G)+1},$$ and these results are independent of the choice of $y_0$. We have proved Statement 1 of Theorem \ref{thm2} for $1< m_0(\bar G)<\infty$.\\ \\ (c) $m_0(\bar G)=\infty$. Using steps similar to the ones in case (b), it is not difficult to see that $I(y)=O(y^{\widetilde \nu})$ as $y\to 0$ for every $\widetilde \nu>0$. This and \eqref{intEE1} imply that $y_l-y_{l+1}=O(y_l^{\widetilde \nu})$ as $l\to\infty$ for every $\widetilde \nu>0$. Following \cite[Theorem 6]{EZZ} we have $\dim_BU=1$, and once again the Minkowski dimension does not depend on $y_0$.
This completes the proof of Statement 2 of Theorem \ref{thm2}. \subsection{Proof of Theorem \ref{thm3}}\label{proofThm3} Suppose that $\Gamma_{\hat y}$ is a balanced canard cycle with the associated slow relation function $H$ defined in Section \ref{sectionbounded}. We prove Theorem \ref{thm3} for $I>0$ in the open interval $]\hat y,\hat y+y^*[$. In this case, the orbit $U$ of $y_0\in ]\hat y,\hat y+y^*[$ by $H$ converges monotonically to $\hat y$ as $l\to \infty$. We have $\hat y < H(y) < y$, for each $y\in ]\hat y,\hat y+y^*[$. The case $I<0$ on $]\hat y,\hat y+y^*[$ can be treated in a similar way. We have \begin{align*} I(y)&=I_+(y)-I_-(y)\\ &=I_+(y)-I_-(H(y))+I_-(H(y))-I_-(y)\\ &=\int_{\alpha(y)}^{\alpha(H(y))}\frac{F'_-(x)^2}{G_-(x)}dx. \end{align*} In the last step, we use $I_-(H(y))=I_+(y)$ and \eqref{PWSSDI}. From \eqref{assum2} and the Fundamental Theorem of Calculus, it follows that $\alpha'(y)<0$ and $$I(y)=\zeta(y)(y-H(y)),$$ where $y$ is kept near $\hat y$ and $\zeta$ is a positive smooth function. This implies that $y=\hat y$ is a zero of multiplicity $m$ of $I(y)$ if, and only if, $y=\hat y$ is a zero of multiplicity $m$ of $y-H(y)$. The following result is known (see \cite{EZZ,MRZ}). \begin{theorem}\label{th-EZZ} Let $\tilde F$ be a smooth function on $[0,\tilde y[$, $\tilde F(0)=0$ and $0<\tilde F(y)<y$ for each $y\in ]0,\tilde y[$. Define $\tilde H=\mathrm{id}-\tilde F$ and let $U$ be the orbit of $y_0\in ]0,\tilde y[$ by $\tilde H$. Then $\dim_BU$ is independent of the initial point $y_0$ and, for $1\le m\le \infty$, the following bijective correspondence holds: \begin{equation} m=\frac{1}{1-\dim_BU}\nonumber \end{equation} where $m$ is the multiplicity of the zero $y=0$ of $\tilde F$. If $m=\infty$, then $\dim_BU=1$. \end{theorem} Theorem \ref{thm3} follows from Theorem \ref{th-EZZ} with $\tilde y=y^*$, $\tilde F(y)=y+\hat y-H(y+\hat y)$ and $\tilde H(y)=H(y+\hat y)-\hat y$. When $1\le m_{\hat y}(I) <\infty$, it is clear that $I>0$ if, and only if, $i_{m_{\hat y}(I)}>0$. Moreover, for $1< m_{\hat y}(I)<\infty$, the orbit $U$ is Minkowski nondegenerate (see e.g. \cite[Theorem 1]{EZZ}). \subsection{Proof of Theorem \ref{thm4}}\label{proofThm4} In this section we focus on system \eqref{model-Lienard1} and prove Theorem \ref{thm4}. In Section \ref{m<-section-infty}, we study the Poincar\'{e}--Lyapunov compactification of \eqref{model-Lienard1} and detect the unbounded canard cycle. Section \ref{subsubsec-analysis} is devoted to a fractal classification of \eqref{model-Lienard1} at infinity in the positive $y$-direction. In Section \ref{complete-proof}, we complete the proof of Theorem \ref{thm4} by using the invariance of the slow divergence integral under changes of coordinates and changes of time. \subsubsection{Poincar\'{e}--Lyapunov compactification}\label{m<-section-infty} This section is devoted to the study of the dynamics of \eqref{model-Lienard1} near infinity on the Poincar\'{e}--Lyapunov disc of degree $(1,n+1)$. In the positive $x$-direction we use the transformation \begin{equation} x=\frac{1}{r}, \ y=\frac{\bar y}{r^{n+1}}, \nonumber \end{equation} with $r>0$ small and $\bar y$ kept in a large compact set.
In the new coordinates $(r,\bar y)$, system \eqref{model-Lienard1} becomes (after multiplication by $r^{n}$) \begin{equation} \label{model-infinity m^+< pos x} \begin{vf} \dot{r} & = & -r\left(\bar y-1-\sum\limits_{k=2}^{n}b_k^+ r^{n+1-k} \right), \\ \dot{\bar y} & = &-\epsilon a_{1}^{+} r^{2n} -(n+1)\bar y \left(\bar y-1-\sum\limits_{k=2}^{n}b_k^+ r^{n+1-k} \right). \end{vf} \end{equation} Suppose that $\epsilon=0$. On the line $\{r=0\}$ system \eqref{model-infinity m^+< pos x} has two singular points: $\bar y=0$ and $\bar y=1$. The eigenvalues of the linear part at $\bar y=0$ are given by $(1,n+1)$ and the eigenvalues of $\bar y=1$ are given by $(0,-(n+1))$. This implies that $\bar y=0$ is a hyperbolic repelling node and $\bar y=1$ is a semi-hyperbolic singularity with the $\bar y$-axis as the stable manifold and the curve of singular points $\bar y=1+\sum_{k=2}^{n}b_k^+ r^{n+1-k}$ as center manifold. It is not difficult to see, using the invariance under the flow and asymptotic expansions in $\epsilon$, that center manifolds of \eqref{model-infinity m^+< pos x}$+0\frac{\partial}{\partial \epsilon}$ at $(r,\bar y, \epsilon)=(0,1,0)$ are given by \[\bar y=1+\sum_{k=2}^{n} b_k^+ r^{n+1-k}-r^{2n}\left(\frac{a_{1}^{+}}{n+1}+O(r)\right)\epsilon+O(\epsilon^2).\] If we substitute this for $\bar y$ in the first component of \eqref{model-infinity m^+< pos x}, divide out $\epsilon$ and let $\epsilon\to 0$, we obtain the slow dynamics \[r'=r^{2n+1}\left(\frac{a_{1}^{+}}{n+1} +O(r)\right).\] Since $a_{1}^{+}>0$, the slow dynamics points away from the singularity $r=0$. In the negative $x$-direction we use the transformation \begin{equation} x=\frac{-1}{r}, \ y=\frac{\bar y}{r^{n+1}},\nonumber \end{equation} and system \eqref{model-Lienard1} changes (after multiplication by $r^{n}$) into \begin{equation} \label{model-infinity m^-< neg x} \begin{vf} \dot{r} &=& r\left(\bar y - 1 -\sum\limits_{k=2}^{n}b_k^-(-1)^k r^{n+1-k} \right), \\ \dot{\bar y} &=&\epsilon a_{1}^{-} r^{2n} +(n+1)\bar y \left(\bar y - 1 - \sum\limits_{k=2}^{n}b_k^- (-1)^k r^{n+1-k} \right) . \end{vf} \end{equation} When $\epsilon=0$, the line $\{r=0\}$ contains two singularities of \eqref{model-infinity m^-< neg x}: $\bar y = 0$ and $\bar y = 1$. The eigenvalues of the linear part at $\bar y=0$ (resp. $\bar y = 1$) are $(-1, -(n+1))$ (resp. $(0,n+1)$). Hence, $\bar y=0$ is a hyperbolic attracting node and $\bar y = 1$ is a semi-hyperbolic singularity with the $\bar y$-axis as the unstable manifold. The curve of singularities is given by $\bar y = 1 + \sum_{k=2}^{n}b_k^-(-1)^k r^{n+1-k}$. The slow dynamics along the curve of singularities is given by \[r'=r^{2n+1}\left(\frac{-a_{1}^{-}}{n+1} +O(r)\right),\] and it points towards the origin $r=0$ because $a_{1}^{-}>0$. In the positive and negative $y$-direction, there are no extra singularities. After putting all the information together, we get the phase portrait near infinity of \eqref{model-Lienard1} for $\epsilon=0$, including direction of the slow dynamics (see Figure \ref{fig_PWS}). Now, it is clear that the unbounded canard cycle consists of the curve of singularities of \eqref{model-Lienard1}, denoted by $S$, and the regular orbit connecting the two semi-hyperbolic singularities at infinity (these two points are the end points of $S$). \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{PWS_infinity.png} {\footnotesize \put(-87,-6){$x=0$} } \end{center} \caption{The phase portrait near infinity of \eqref{model-Lienard1}, with $\epsilon=0$. 
We have a crossing near the switching line $x=0$.}\label{fig_PWS} \end{figure} \subsubsection{Fractal Analysis Near Infinity}\label{subsubsec-analysis} Since the attracting branch and the repelling branch of the curve of singularities are visible in the positive $y$-direction (see Figure \ref{fig_PWS}), it is natural to present the fractal analysis in the positive $y$-direction. We use the transformation \begin{equation} x=\frac{\bar x}{r}, \ y=\frac{1}{r^{n+1}},\nonumber \end{equation} and system \eqref{model-Lienard1} becomes (after multiplication by $r^{n}$) \begin{equation} \label{PL_negative} \begin{cases} \dot{r} =\frac{\epsilon a_{1}^{-}}{n+1} r^{2n+1}\bar{x},\\ \dot{\bar{x}} = 1-\left((-1)^{n+1}\bar{x}^{n+1}+\sum\limits_{k=2}^{n} b_k^- r^{n+1-k} \bar{x}^k \right) +\frac{\epsilon a_{1}^{-}}{n+1} r^{2n} \bar{x}^2, \end{cases} \ \ \bar x\le 0, \end{equation} \begin{equation} \label{PL_positive} \begin{cases} \dot{r} = \frac{\epsilon a_{1}^{+}}{n+1}r^{2n+1}\bar{x}, \\ \dot{\bar{x}} = 1-\left(\bar{x}^{n+1}+ \sum\limits_{k=2}^{n} b_k^+r^{n+1-k}\bar{x}^k \right)+\frac{\epsilon a_{1}^{+}}{n+1} r^{2n}\bar{x}^2, \end{cases} \ \ \bar x\ge 0. \end{equation} When $\epsilon=0$, system (\ref{PL_negative}) has a normally repelling curve of singularities $\Bar{x}=\Phi_{-}(r)$ satisfying $\Phi_{-}(0)=-1$ and \begin{equation}\label{phidef-} 1-(-1)^{n+1}\Phi_{-}(r)^{n+1}-\sum_{k=2}^{n} b_k^- r^{n+1-k}\Phi_{-}(r)^k=0, \end{equation} and (\ref{PL_positive}) has a normally attracting curve of singularities $\Bar{x}=\Phi_{+}(r)$ satisfying $\Phi_{+}(0)=1$ and \begin{equation}\label{phidef+} 1-\Phi_{+}(r)^{n+1}-\sum_{k=2}^{n} b_k^+r^{n+1-k}\Phi_{+}(r)^k=0. \end{equation} Recall from Equation \eqref{Function_f_n} that $f_k=b_k^++(-1)^{k+1}b_k^-$, for $k=2,\dots,n$. The following lemma gives a useful connection between $\Phi_{-}$ and $\Phi_{+}$. \begin{lemma} \label{lemma-Phi} The functions $\Phi_{-}(r)$ and $\Phi_{+}(r)$ defined in \eqref{phidef-} (resp. \eqref{phidef+}) satisfy \begin{equation}\label{expansionPhi_PWS} \Phi_{+}(r)=-\Phi_{-}(r) -\displaystyle\frac{1}{n+1} \sum_{k=2}^{n} f_k r^{n+1-k} \left(1+ O(r) \right). \nonumber \end{equation} \end{lemma} \begin{proof} The lemma can be easily proved using $\Phi_{+}(r)=-\Phi_{-}(r)$ when $f_2=\dots=f_n=0$ (see \eqref{phidef-} and \eqref{phidef+}). \end{proof} If we substitute $\Phi_{-}(r)$ (resp. $\Phi_{+}(r)$) for $\bar x$ in the $r$-component of (\ref{PL_negative}) (resp. (\ref{PL_positive})) and divide out $\epsilon$, then we obtain the slow dynamics along $\bar{x}=\Phi_{\pm}(r)$: \begin{equation}\label{slow-dyn-infty} r'=\frac{a_{1}^{\pm}}{n+1}r^{2n+1}\Phi_{\pm}(r). \end{equation} Let $\Tilde{r}>0$ be small and fixed. For $r \in ]0,\Tilde{r}[$, we define the slow divergence integral of (\ref{PL_negative}) along the portion $[r,\Tilde r]$ of $\Bar{x}=\Phi_{-}(r)$ as \begin{equation} \label{J_-} J_{-}(r)=-(n+1)\int_{r}^{\Tilde{r}}\frac{(n+1)(-1)^{n+1}\Phi_{-}(s)^{n}+\sum\limits_{k=2}^{n} kb_k^{-}s^{n+1-k}\Phi_{-}(s)^{k-1}}{a_{1}^{-} s^{2n+1}\Phi_{-}(s)} ds < 0, \end{equation} and the slow divergence integral of (\ref{PL_positive}) along the portion $[r,\Tilde r]$ of $\Bar{x}=\Phi_{+}(r)$ as \begin{equation} \label{J_+} J_{+}(r)=-(n+1)\int_{r}^{\Tilde{r}}\frac{(n+1)\Phi_{+}(s)^{n}+\sum\limits_{k=2}^{n} kb_k^{+}s^{n+1-k}\Phi_{+}(s)^{k-1}}{a_{1}^{+}s^{2n+1}\Phi_{+}(s)} ds < 0.
\end{equation} Observe that $J_+$ is the integral of the divergence of (\ref{PL_positive}) for $\epsilon=0$, computed at the singular points $(s,\Phi_{+}(s))$, where the variable of integration is the time variable of the slow dynamics \eqref{slow-dyn-infty}. A similar remark holds for the integral $J_-$ (but it is computed in backward time). From \eqref{J_-} and \eqref{J_+} it follows that $J_{\pm}(r)\to -\infty$ as $r \rightarrow 0$. Let $\Tilde{J}\in\mathbb R$ be arbitrary but fixed. For $r_0 \in ]0,\Tilde{r}[$, we suppose that the sequence $(r_l)_{l \in \N}$, defined by \begin{equation*} J_{+}(r_l)-J_{-}(r_{l+1})=\Tilde{J}, \ l\ge 0, \end{equation*} tends monotonically to $0$ as $l \rightarrow \infty$. The case where $(r_l)_{l \in \N}$, tending to $0$ as $l \rightarrow \infty$, is defined by $J_{+}(r_{l+1})-J_{-}(r_{l})=\Tilde{J}$ can be treated in a similar way. \begin{remark} In Section \ref{complete-proof} the constant $\Tilde J$ will be equal to the slow divergence integral $-I\left(\frac{1}{\tilde r^{n+1}}\right)$, with $I$ defined in \eqref{SDI-total}. \end{remark} We can write \begin{equation} \label{J-equation} J_{+}(r_l)-J_{-}(r_l)=\Tilde{J}+J_{-}(r_{l+1})-J_{-}(r_l). \end{equation} Let us first study $J_{+}(r_l)-J_{-}(r_l)$. For the sake of simplicity, we write $\Phi_{\pm}=\Phi_{\pm}(s)$. From Lemma \ref{lemma-Phi} and the relation $b_{k}^{+} = f_{k} + (-1)^{k}b_{k}^{-}$, one can check that \begin{align}\label{pomocno-integrand} &\frac{(n+1)\Phi_{+}^{n}+\sum\limits_{k=2}^{n} kb_k^{+} s^{n+1-k}\Phi_{+}^{k-1}}{a_{1}^{+} \Phi_{+}}\nonumber\\ &=\frac{(n+1)(-\Phi_{-})^{n}+\sum\limits_{k=2}^{n} kb_k^{+} s^{n+1-k}(-\Phi_{-})^{k-1}}{-a_{1}^{+} \Phi_{-}} - \sum_{k=2}^{n} f_k s^{n+1-k} \left(\frac{n-1}{a_{1}^{+}}+{O}(s) \right)\nonumber\\ &=\frac{-(n+1)(-1)^{n+1}\Phi_{-}^{n} \! - \! \sum\limits_{k=2}^{n} kb_k^{-} s^{n+1-k}\Phi_{-}^{k-1} \! + \! \sum\limits_{k=2}^{n} k f_k s^{n+1-k}(-\Phi_{-})^{k-1} }{-a_{1}^{-}\Phi_{-} +(a_{1}^{-}-a_{1}^{+})\Phi_{-}}\nonumber\\ & \ \ \ \ - \sum_{k=2}^{n} f_k s^{n+1-k} \left(\frac{n-1}{a_{1}^{+}}+{O}(s) \right) \nonumber\\ &=\frac{(n+1)(-1)^{n+1}\Phi_{-}^{n}+\sum\limits_{k=2}^{n} kb_k^{-}s^{n+1-k}\Phi_{-}^{k-1}}{a_{1}^{-}\Phi_{-}} \nonumber \\ & \ \ \ - \sum_{k=2}^{n} f_k s^{n+1-k} \left(\frac{n-1-k}{a_{1}^{+}}+{O}(s) \right) +(a_{1}^{-}-a_{1}^{+})\left( \frac{n+1}{a_{1}^{-}a_{1}^{+}}+{O}(s) \right). \end{align} Using the integrals (\ref{J_-}) and (\ref{J_+}) together with the integrand \eqref{pomocno-integrand}, it follows that \begin{align}\label{integral-pomocno} J_{+}(r_l)-J_{-}(r_l)&= (n+1)\int_{r_l}^{\Tilde{r}}\frac{1}{s^{2n+1}} \sum_{k=2}^{n} f_k s^{n+1-k} \left(\frac{n-1-k}{a_{1}^{+}}+{O}(s) \right)ds\nonumber \\ & \ \ \ -(n+1)\int_{r_l}^{\Tilde{r}}\frac{1}{s^{2n+1}}(a_{1}^{-}-a_{1}^{+})\left( \frac{n+1}{a_{1}^{-}a_{1}^{+}}+{O}(s) \right)ds\nonumber\\ &=\sum_{k=2}^{n}f_k r_l^{1-n-k}\left(\frac{(n+1)(1+k-n)}{a_{1}^{+}(1-n-k)}+o(1)\right)\nonumber\\ & \ \ \ +(a_{1}^{-}-a_{1}^{+})r_l^{-2n}\left( -\frac{(n+1)^2}{2na_{1}^{-}a_{1}^{+}}+o(1) \right)+\hat J, \end{align} where $o(1) \rightarrow 0$ as $r_l \rightarrow 0$ and $\hat{J}$ is a constant independent of $r_l$. Now, consider the term $J_{-}(r_{l+1})-J_{-}(r_l)$ in \eqref{J-equation}.
Using \eqref{J_-}, we have \begin{align} J_{-}(r_{l+1})-J_{-}(r_l)&=-\frac{(n+1)^2}{a_{1}^{-}}\int_{r_{l+1}}^{r_l} \frac{1}{s^{2n+1}} \left(1+{O}(s) \right) ds\nonumber\\ &=-\frac{(n+1)^2}{a_{1}^{-}r_l^{2n}}\int_{\frac{r_{l+1}}{r_l}}^{1} \frac{1}{\tilde s^{2n+1}} \left(1+{O}(r_l\tilde s) \right) d\tilde s, \label{Integral_left_to_right} \end{align} where in the last step we use the change of variable $s=r_l\tilde s$. Now, we use \eqref{integral-pomocno} and \eqref{Integral_left_to_right} in the Equation \eqref{J-equation} and then we get \begin{align} -\frac{(n+1)^2}{a_{1}^{-}}&\int_{\frac{r_{l+1}}{r_l}}^{1} \frac{1}{\tilde s^{2n+1}} \left(1+{O}(r_l\tilde s) \right) d\tilde s\nonumber\\ &=\sum_{k=2}^{n}f_k r_l^{n+1-k}\left(\frac{(n+1)(1+k-n)}{a_{1}^{+}(1-n-k)}+o(1)\right)\nonumber\\ & \ \ \ +(a_{1}^{-}-a_{1}^{+})\left( -\frac{(n+1)^2}{2na_{1}^{-}a_{1}^{+}}+o(1) \right)+r_l^{2n}(\hat J-\Tilde{J}). \label{Recursion-konacno} \end{align} We distinguish between two cases: $a_{1}^{-}\ne a_{1}^{+}$ and $a_{1}^{-}=a_{1}^{+}$. Recall that $a_1^\pm>0$. \paragraph{Case $a_{1}^{-}\ne a_{1}^{+}$.} Since $(r_l)_{l \in \N}$ tends monotonically to $0$ as $l \rightarrow \infty$, it can be easily seen that \eqref{Recursion-konacno} implies $$\lim_{l \rightarrow \infty}\frac{r_{l+1}}{r_l}=\left(\frac{a_{1}^{+}}{a_{1}^{-}}\right)^\frac{1}{2n} \in ]0,1[.$$ Since $a_{1}^{-}> a_{1}^{+}$, we conclude that $\dim_B(r_l)_{l \in \N}=0$ and the Minkowski dimension does not depend on the choice of the initial point $r_0 \in ]0,\Tilde{r}[$ (see Section \ref{proofThm1}). We remark that when the sequence $(r_l)_{l \in \N}$, converging monotonically to $0$ as $l \rightarrow \infty$, is defined by $J_{+}(r_{l+1})-J_{-}(r_{l})=\Tilde{J}$, we have $a_{1}^{-}< a_{1}^{+}$. \paragraph{Case $a_{1}^{-}=a_{1}^{+}$.}Now the right-hand side of \eqref{Recursion-konacno} tends to 0 as $l\to \infty$, and we get \begin{equation}\label{quotient=1}\lim_{l \rightarrow \infty}\frac{r_{l+1}}{r_l}=1.\end{equation} Suppose that one of the coefficients $f_k$ is nonzero. Then $k_0$ is well-defined (see Section \ref{sectionunbounded}) and from \eqref{Recursion-konacno} it follows that \begin{align} \int_{\frac{r_{l+1}}{r_l}}^{1} \frac{1}{\tilde s^{2n+1}} \left(1+{O}(r_l\tilde s) \right) d\tilde s=f_{k_0} r_l^{n+1-k_0}\left(\frac{1+k_0-n}{(n+k_0-1)(n+1)}+o(1)\right), \label{Recursion-konacno-1} \end{align} where $o(1) \rightarrow 0$ as $r_l \rightarrow 0$. Notice that $$\kappa\le \frac{1}{\tilde s^{2n+1}} \left(1+{O}(r_l\tilde s) \right) \le \frac{1}{\kappa}\left(\frac{r_l}{r_{l+1}}\right)^{2n+1}, \ \ \forall\tilde s\in [\frac{r_{l+1}}{r_l},1],$$ for $\kappa>0$ small enough. This, \eqref{quotient=1} and \eqref{Recursion-konacno-1} imply \begin{equation} \label{finally_C} r_l-r_{l+1}\simeq r_l^{n+2-k_0}, \ l\to \infty,\nonumber \end{equation} if $f_{k_0}(1+k_0-n)>0$. The notation $\simeq$ was introduced in Section \ref{Minkowski-def-separated}. As $n+2-k_0>1$ (recall that $ k_0 \leq n$), we have that $(r_l)_{l\in\mathbb{N}}$ is Minkowski nondegenerate, \begin{equation}\label{box-dim-inftyyy} \dim_B(r_l)_{l \in \N}=1-\frac{1}{n+2-k_0}=\frac{n+1-k_0}{n+2-k_0}, \end{equation} and this is independent of the choice of $r_0 \in ]0,\Tilde{r}[$ (see Section \ref{proofThm1}). When $(r_l)_{l \in \N}$, tending (monotonically) to $0$ as $l \rightarrow \infty$, is defined by $J_{+}(r_{l+1})-J_{-}(r_{l})=\Tilde{J}$, we assume $f_{k_0}(1+k_0-n)<0$. 
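As a simple consistency check (not needed for the proof), take $n=1$ in \eqref{model-Lienard1}, so that $F_\pm(x)=x^2$ and $G_\pm(x)=-a_1^\pm x$ with $a_1^\pm>0$ and $a_1^-\ne a_1^+$. In this case \eqref{PWSSDI} gives $$I_\pm(y)=-\int_{\pm\sqrt y}^{0}\frac{4x^2}{-a_1^\pm x}\,dx=-\frac{2}{a_1^\pm}\,y,$$ so the recursion $I_-(y_{l+1})=I_+(y_l)$ becomes $y_{l+1}=\frac{a_1^-}{a_1^+}\,y_l$, which tends monotonically to $+\infty$ precisely when $a_1^->a_1^+$. With $r_l=y_l^{-\frac{1}{n+1}}=y_l^{-\frac{1}{2}}$ this yields $\frac{r_{l+1}}{r_l}=\left(\frac{a_1^+}{a_1^-}\right)^{\frac{1}{2}}=\left(\frac{a_1^+}{a_1^-}\right)^{\frac{1}{2n}}$ at every step, in agreement with the limit obtained above; the identification of the recursion in $y$ with the recursion $J_{+}(r_l)-J_{-}(r_{l+1})=\Tilde{J}$ is made precise in Section \ref{complete-proof}.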
\subsubsection{Completing the proof of Theorem \ref{thm4}} \label{complete-proof} Using $x=\frac{\bar x}{r}, \ y=\frac{1}{r^{n+1}}$ we have the following relation between $F_\pm$, defined in Section \ref{sectionunbounded}, and $\Phi_{\pm}$ satisfying \eqref{phidef-} and \eqref{phidef+}: \begin{equation}\label{y=F(x)} \frac{1}{r^{n+1}}=F_\pm\left(\frac{\Phi_{\pm}(r)}{r}\right). \end{equation} The invariance of the slow divergence integral under changes of coordinates and time reparameterizations implies that \begin{equation}\label{invariance-int} J_\pm(r)=-\int_{\frac{\Phi_{\pm}(r)}{r}}^{\frac{\Phi_{\pm}(\tilde r)}{\tilde r}}\frac{F'_\pm(x)^2}{G_\pm(x)}dx, \ \forall r \in ]0,\Tilde{r}[, \end{equation} with $J_\pm$ defined in \eqref{J_-} and \eqref{J_+} and $G_\pm(x)=- a_{1}^{\pm}x$ (we use the change of variable $x=\frac{\Phi_{\pm}(s)}{s}$). Assume that $I=I_+-I_-$ defined in \eqref{SDI-total} is negative on $]y^*,\infty[$, where $y^*>0$ is large enough, and let $y_0\in ]y^*,\infty[$. (We take $\tilde r$ such that $y^*=\frac{1}{\tilde r^{n+1}}$.) Then the orbit $U=\{y_0,y_1,y_2,\dots\}$ generated by $I_-(y_{l+1})=I_+(y_l)$, $l\ge 0$, tends monotonically to $+\infty$ (see Section \ref{sectionunbounded}). If we write $r_l:=\frac{1}{y_l^\frac{1}{n+1}}$ (see \eqref{Minkowski-def-inf}), then it is clear that $(r_l)_{l \in \N}$ tends monotonically to $0$, and \begin{align*} I_+(y_l)-I_-(y_{l+1})&= I_+\left(\frac{1}{r_l^{n+1}}\right)-I_-\left(\frac{1}{r_{l+1}^{n+1}}\right)\\ &=-\int_{\frac{\Phi_{+}(r_l)}{r_l}}^{0}\frac{F'_+(x)^2}{G_+(x)}dx+\int_{\frac{\Phi_{-}(r_{l+1})}{r_{l+1}}}^{0}\frac{F'_-(x)^2}{G_-(x)}dx\\ &=J_+(r_l)-\int_{\frac{\Phi_{+}(\tilde r)}{\tilde r}}^{0}\frac{F'_+(x)^2}{G_+(x)}dx-J_-(r_{l+1})+\int_{\frac{\Phi_{-}(\tilde r)}{\tilde r}}^{0}\frac{F'_-(x)^2}{G_-(x)}dx\\ &=J_+(r_l)-J_-(r_{l+1})+I\left(\frac{1}{\tilde r^{n+1}}\right) \end{align*} where we use \eqref{y=F(x)} and \eqref{invariance-int}. This implies that $(r_l)_{l \in \N}$ is generated by $J_{+}(r_l)-J_{-}(r_{l+1})=\Tilde{J}$, where $\Tilde J:=-I\left(\frac{1}{\tilde r^{n+1}}\right)$ and we can therefore use the results of Section \ref{subsubsec-analysis}. Statement 1 (resp. Statement 2) of Theorem \ref{thm4} follows from \eqref{Minkowski-def-inf} and the case $a_{1}^{-}\ne a_{1}^{+}$ (resp. $a_{1}^{-}=a_{1}^{+}$) in Section \ref{subsubsec-analysis}. (If $I$ is positive on $]y^*,\infty[$, then we use $I_-(y_{l})=I_+(y_{l+1})$ and $J_{+}(r_{l+1})-J_{-}(r_{l})=\Tilde{J}$.) This completes the proof of Theorem \ref{thm4}. \section{Crossing limit cycles and Minkowski dimension}\label{section-applications} In this section we consider the piecewise smooth system of Li\'{e}nard equations \begin{equation}\label{eq-pws-lienard-general} \left\{ \begin{array}{rcl} \dot{x} & = & y - F_{-}(x), \\ \dot{y} & = & \epsilon^{2}(\epsilon\alpha_{-}+G_-(x)), \end{array} \right. \ x\leq 0, \quad \left\{ \begin{array}{rcl} \dot{x} & = & y - F_{+}(x), \\ \dot{y} & = & \epsilon^{2}(\epsilon\alpha_{+}+G_+(x)), \end{array} \right. \ x\geq 0, \end{equation} where $0<\epsilon \ll 1$, $\alpha_\pm$ are regular parameters kept near $0$ and $C^\infty$-smooth functions $F_\pm$ and $G_\pm$ satisfy \eqref{assum1} (see Section \ref{sec-Motivation}). We define \begin{equation}\label{const-beta}\beta_\pm:=-\frac{2G_\pm'(0)}{F_\pm''(0)}>0. 
\end{equation} A natural question that arises is how to link the Minkowski dimension of orbits defined in Sections \ref{sectionHopf} to \ref{sectionunbounded} with the number of crossing limit cycles that \eqref{eq-pws-lienard-general} can have for $\epsilon > 0$. If the Minkowski dimension of orbits tending monotonically to $y=0$ is trivial or, equivalently, $\beta_-\ne\beta_+$ (see Theorem \ref{thm2} and note that $\beta_-\ne\beta_+$ if, and only if, $m_0(\bar G)=1$), then there are no limit cycles near the PWS Hopf point (for the precise statement see Proposition \ref{prop-no-LC} in Section \ref{sec-blow-up}). The condition $\beta_-\ne\beta_+$ means that, after blowing-up the vector field $\eqref{eq-pws-lienard-general} + 0\frac{\partial}{\partial \epsilon}$, the connection on the blow-up locus between the attracting branch $\{y = F_{+}(x)\}$ and the repelling branch $\{y = F_{-}(x)\}$ does not exist (see Figure \ref{fig-pws-lienard}). In Section \ref{section-bounded-LC} we show that trivial Minkowski dimension of orbits tending monotonically to a balanced bounded canard cycle (see Section \ref{sectionbounded}) is not equivalent to $\beta_-\ne\beta_+$. Indeed, we find a system \eqref{eq-pws-lienard-general} undergoing a saddle-node bifurcation of limit cycles and a system without limit cycles in which, in both cases, the Minkowski dimension is trivial. Finally, in Section \ref{sec-LC-unbound} we provide examples of PWS classical Li\'{e}nard equations \eqref{eq-pws-lienard-general} such that the Minkowski dimension of the unbounded canard cycle $\Gamma_{\infty}$ is trivial. It is convenient to set $\epsilon^2$ in \eqref{eq-pws-lienard-general} instead of $\epsilon$ when we introduce a family blow-up at the PWS Hopf point (see Section \ref{sec-blow-up}). \subsection{Limit cycles near the PWS Hopf point}\label{sec-blow-up} Our goal is to study limit cycles of \eqref{eq-pws-lienard-general} in a small $\epsilon$-uniform neighborhood of the origin $(x,y)=(0,0)$ (i.e., the neighborhood does not shrink to the origin as $\epsilon\to 0$). For this purpose, we start our analysis by performing a blow-up transformation $$(x,y,\epsilon) = (\rho\bar{x},\rho^{2}\bar{y},\rho\bar{\epsilon}),$$ with $(\bar{x},\bar{y},\bar{\epsilon})\in\mathbb S^2$, $\rho\ge 0$ and $\bar\epsilon\ge 0$. We work with different charts. We point out that only the phase directional chart $\{\bar y=1\}$ and the family chart $\{\bar \epsilon=1\}$ are relevant for the study of limit cycles of \eqref{eq-pws-lienard-general}, produced by the PWS Hopf point or (bounded and unbounded) canard cycles (see Figure \ref{fig-pws-lienard}). \subsubsection{Dynamics in the chart $\bar{y} = 1$}\label{sec-phase-direc-y} Using the coordinate change $(x,y,\epsilon) = (\rho\bar{x},\rho^{2},\rho\bar{\epsilon})$, with small $\rho\ge 0$ and $\epsilon\ge 0$ and $\bar x$ kept in a large compact subset of $\mathbb R$, we obtain (after division by $\rho$) \begin{equation}\label{eq-blowup-y1} \left\{ \begin{array}{rcl} \dot{\bar{x}} & = & 1-\frac{F_\pm''(0)}{2}\bar{x}^{2} +O(\rho\bar x^3)-\frac{1}{2}\bar x\bar\epsilon^2\left(\bar\epsilon\alpha_\pm+G_\pm'(0)\bar x+O(\rho\bar x^2)\right), \\ \dot{\rho} & = &\frac{1}{2}\rho\bar\epsilon^2 \left(\bar\epsilon\alpha_\pm+G_\pm'(0)\bar x+O(\rho\bar x^2)\right), \\ \dot{\bar{\epsilon}} & = & -\frac{1}{2}\bar\epsilon^3\left(\bar\epsilon\alpha_\pm+G_\pm'(0)\bar x+O(\rho\bar x^2)\right). \end{array} \right. \end{equation} System \eqref{eq-blowup-y1} with $-$ (resp. $+$) is defined on $\bar x\le 0$ (resp. 
$\bar x\ge 0$). Let us briefly describe the dynamics of \eqref{eq-blowup-y1}. On the invariant line $\{\rho = \bar{\epsilon} = 0\}$ there are two singularities $\bar{x} = \pm \sqrt{\frac{2}{F_\pm''(0)}}$, which we will denote by $p_{\pm}$. Both singularities have two-dimensional center manifolds, and the unstable (resp. stable) manifold of $p_{-}$ (resp. $p_{+}$) is the $\bar{x}$-axis. Such singularities correspond to the intersection of the critical manifold with the blow-up locus. The center behavior near $p_\pm$ is given by \begin{equation} \left\{ \begin{array}{rcl} \dot{\rho} & = &\frac{1}{2}\rho\bar\epsilon^2 \left(\pm G_\pm'(0)\sqrt{\frac{2}{F_\pm''(0)}}+O(\rho,\bar\epsilon)\right),\nonumber \\ \dot{\bar{\epsilon}} & = & -\frac{1}{2}\bar\epsilon^3\left(\pm G_\pm'(0)\sqrt{\frac{2}{F_\pm''(0)}}+O(\rho,\bar\epsilon)\right).\nonumber \end{array} \right. \end{equation} \subsubsection{Dynamics in the chart $\bar{\epsilon} = 1$} Using the rescaling $(x,y) = ({\epsilon}\bar{x},{\epsilon}^{2}\bar{y})$, with $(\bar x,\bar y)$ kept in a large compact set, one obtains (after division by $\epsilon$) the PWS system \begin{equation}\label{eq-pws-blownup} \left\{ \begin{array}{rcl} \dot{\bar{x}} & = & \bar{y} - \frac{F_-''(0)}{2}\bar{x}^{2} + O(\epsilon\bar x^3), \\ \dot{\bar{y}} & = & \alpha_{-} +G_-'(0)\bar x+O(\epsilon\bar x^2), \end{array} \right. \quad \left\{ \begin{array}{rcl} \dot{\bar{x}} & = & \bar{y} - \frac{F_+''(0)}{2}\bar{x}^{2} + O(\epsilon\bar x^3), \\ \dot{\bar{y}} & = & \alpha_{+} +G_+'(0)\bar x+O(\epsilon\bar x^2). \end{array} \right. \end{equation} Let us describe the dynamics in the top of the blow-up locus. For ${\epsilon} = 0$ and $\alpha_\pm=0$, the PWS system \eqref{eq-pws-blownup} has the following form: \begin{equation}\label{eq-hamiltonian} \left\{ \begin{array}{rcl} \dot{\bar{x}} & = & \bar{y} - \frac{F_-''(0)}{2}\bar{x}^{2}, \\ \dot{\bar{y}} & = & G_-'(0)\bar x, \end{array} \right. \ \bar x\le 0, \quad \left\{ \begin{array}{rcl} \dot{\bar{x}} & = & \bar{y} - \frac{F_+''(0)}{2}\bar{x}^{2}, \\ \dot{\bar{y}} & = &G_+'(0)\bar x, \end{array} \right. \ \bar x\ge 0. \end{equation} A first integral of \eqref{eq-hamiltonian} is given by $$H_{\pm}(\bar x,\bar y) = e^{-\frac{2\bar y}{\beta_{\pm}}}(\frac{\bar{y}}{\beta_{\pm}} + G_\pm'(0) \frac{\bar{x}^{2}}{\beta_{\pm}^2} + \frac{1}{2}),$$ where $\beta_\pm$ were defined in \eqref{const-beta}. Note that $p_-$ (resp. $p_+$) defined in Section \ref{sec-phase-direc-y} is the end point of the invariant curve $\{\bar{y}= - G_-'(0) \frac{\bar{x}^{2}}{\beta_{-}} - \frac{\beta_-}{2},\bar x\le 0\}$ (resp. $\{\bar{y}= - G_+'(0) \frac{\bar{x}^{2}}{\beta_{+} }- \frac{\beta_+}{2},\bar x\ge 0 \}$), corresponding to the level $H_{-}(\bar x,\bar y)=0$ (resp. $H_{+}(\bar x,\bar y)=0$). Moreover, the curve intersects the switching locus at $(0,-\frac{\beta_{-}}{2})$ (resp. $(0,-\frac{\beta_{+}}{2})$). The origin $(\bar x,\bar y)=(0,0)$ is a center for both vector fields in \eqref{eq-hamiltonian} and it corresponds to the levels $H_{\pm}(\bar x,\bar y) = \frac{1}{2}$. The origin has a ``focus-like'' behavior for $\beta_{-} \neq \beta_{+}$ and a ``center-like'' behavior for $\beta_{-}=\beta_{+}$. See Figure \ref{fig-pws-lienard}. 
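The first-integral property of $H_{\pm}$ can be checked by a direct computation, or symbolically as in the following minimal sketch (it assumes the Python library SymPy and is only an independent check, not part of the argument):
\begin{verbatim}
# Symbolic sketch (assumes SymPy): check that H is a first integral of
#   xdot = y - (F2/2) x^2,  ydot = G1 x,
# where F2 = F''(0), G1 = G'(0) and beta = -2 G1/F2 as in the text.
import sympy as sp

x, y, F2, G1 = sp.symbols('x y F2 G1', nonzero=True)
beta = -2*G1/F2
H = sp.exp(-2*y/beta)*(y/beta + G1*x**2/beta**2 + sp.Rational(1, 2))
dHdt = sp.diff(H, x)*(y - F2*x**2/2) + sp.diff(H, y)*(G1*x)
print(sp.simplify(dHdt))   # prints 0: H is constant along orbits
\end{verbatim}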
\begin{figure}[htb] \begin{center} \begin{overpic}[width=0.8\textwidth]{fig-pws-lienard.pdf} \put(21,-4){(a)} \put(75,-4){(b)} \put(5,41){$p_-$} \put(38,41){$p_+$} \put(59,41){$p_-$} \put(92,41){$p_+$} \end{overpic} \end{center} \caption{Phase portraits of the blown-up vector field \eqref{eq-hamiltonian} for $\beta_{-} \neq \beta_{+}$ (Figure (a)) and $\beta_{-} = \beta_{+}$ (Figure (b)). Each of the charts $\{\bar x=\pm 1\}$ contains one extra singularity of hyperbolic type. In the chart $\{\bar y=-1\}$ the dynamics is regular pointing from the right to the left.}\label{fig-pws-lienard} \end{figure} It is now clear that, for $\beta_{-}=\beta_{+}$, the study of crossing limit cycles of \eqref{eq-pws-lienard-general} in a small $\epsilon$-uniform neighborhood of the origin $(x,y)=(0,0)$ has three components: (1) the study near the origin $(\bar x,\bar y)=(0,0)$ inside the family \eqref{eq-pws-blownup}, (2) the study near the singular polycycle (it consists of $p_-$ and $p_+$ and the regular orbits that are heteroclinic to them), combining \eqref{eq-blowup-y1} and \eqref{eq-pws-blownup}, and (3) the study near ``ovals'' surrounding the origin inside the family \eqref{eq-pws-blownup}. This is a topic of further study. Suppose that $\beta_{-}\ne\beta_{+}$. Then the connection between $p_+$ and $p_-$ is broken (see Figure \ref{fig-pws-lienard}(a)). We consider the half return maps $\Pi_{\pm}:]0,\infty[\rightarrow ]-\frac{\beta_{\pm}}{2},0[$. We define $\Pi_{+}$ by considering the flow of \eqref{eq-hamiltonian} in forward time and $\Pi_{-}$ by considering the flow of \eqref{eq-hamiltonian} in backward time. \begin{proposition}\label{prop-pws-lienard} Consider system \eqref{eq-hamiltonian}. If $\beta_{-} \neq \beta_{+}$, then the difference map $\Delta(\bar y) = \Pi_{+}(\bar y) - \Pi_{-}(\bar y)$ does not have zeros in the interval $]0,\infty[$. \end{proposition} \begin{proof} For any $\bar y\in ]0,\infty[$, the points $(0,\bar y)$ and $(0,\Pi_{\pm}(\bar y))$ belong to the same level curve $H_{\pm}(\bar x,\bar y) = h_{\pm}$, with $0 < h_{\pm} < \frac{1}{2}$. If we write $H_\pm(\bar y):=H_{\pm}(0,\bar y)$, then we have $H_{\pm}(\bar y) = H_{\pm}( \Pi_{\pm}(\bar y) )$. This implies \begin{equation}\label{eq-derivative-half-return} H_{\pm}'(\bar y) = H_{\pm}'( \Pi_{\pm}(\bar y) )\Pi_{\pm}'(\bar y). \end{equation} From \eqref{eq-derivative-half-return} it follows that $u=\Pi_{\pm}(\bar y)$ is the $\bar y>0$-subset of the stable manifold of the hyperbolic saddle $(u, \bar y) = (0, 0)$ of \begin{equation}\label{eq-auxiliary-system} \left\{ \begin{array}{rcl} \dot{u} & = & \bar ye^{-\frac{2\bar y}{\beta_{\pm}}}, \\ \dot{\bar y} & = & ue^{-\frac{2u}{\beta_{\pm}}}. \end{array} \right. \end{equation} We remark that $u=\Pi_{\pm}(\bar y)$ are contained in the second quadrant $\{u<0,\bar y>0\}$. The result follows by showing that $u=\Pi_{-}(\bar y)$ and $u=\Pi_{+}(\bar y)$ do not have intersection points in the second quadrant. Suppose by contradiction that they intersect in the second quadrant. This would imply the existence of contact points between the orbits of system \eqref{eq-auxiliary-system} with $\beta_-$ and the separatrix $u=\Pi_{+}(\bar y)$. 
However, observe that $$(\bar ye^{-\frac{2\bar y}{\beta_{-}}}, ue^{-\frac{2u}{\beta_{-}}}) \cdot (ue^{-\frac{2u}{\beta_{+}}},-\bar ye^{-\frac{2\bar y}{\beta_{+}}}) = u\bar y\Big{(}e^{-\frac{2\bar y}{\beta_{-}}-\frac{2u}{\beta_{+}}} - e^{-\frac{2\bar y}{\beta_{+}}-\frac{2u}{\beta_{-}}}\Big{)}.$$ For $\beta_{-} \neq \beta_{+}$, the last equation only vanishes in the set $\{u\bar y(u-\bar y) = 0\}$, and therefore there are no contact points in the second quadrant. \end{proof} A direct consequence of Proposition \ref{prop-pws-lienard} is the following. \begin{proposition}\label{prop-no-LC} Consider the PWS Li\'{e}nard equation \eqref{eq-pws-lienard-general} with $\beta_{-}\neq\beta_{+}$. Let $[\bar y_1,\bar y_2]\subset ]0,\infty[$. Then there exist $\epsilon_{0} > 0$ and small neighborhoods $\mathcal U_\pm$ of $\alpha_\pm=0$ such that for each $(\epsilon,\alpha_-,\alpha_+)\in [0,\epsilon_0]\times \mathcal U_-\times \mathcal U_+$ the difference map $\Delta_{\epsilon,\alpha_-,\alpha_+}(\bar y)$ associated to \eqref{eq-pws-blownup} does not have zeros in $[\bar y_1,\bar y_2]$. \end{proposition} Observe that we do not study limit cycles bifurcating close to the origin of the phase space of \eqref{eq-pws-blownup}. \subsection{Limit cycles near bounded canard cycles}\label{section-bounded-LC} Consider the PWS Li\'{e}nard equation \begin{align}\label{PWSLienard-example-balanced} \begin{cases} \dot x=y-\frac{x^{2}}{2} ,\\ \dot y=\epsilon \left(-(1+\delta) x - \frac{x^{2}}{2} + x^{4}\right), \end{cases} \ x\le 0,\quad \begin{cases} \dot x=y-\frac{x^{2}}{2} ,\\ \dot y=\epsilon \left(-x - \frac{x^{2}}{2} + x^{4}\right), \end{cases} \ x\ge 0, \end{align} with $\delta\in\mathbb R$ kept close to $0$. This section is devoted to proving the following proposition. \begin{proposition}\label{prop-bounded-CL} There is a continuous function $\hat y:]-\delta_0,\delta_0[\to \mathbb R$ with $\delta_0 > 0$ small and satisfying $\hat y(0)>0$ such that, for each $\delta\in ]-\delta_0,\delta_0[$, $y=\hat y(\delta)$ is a simple zero of the slow divergence integral $I_\delta$ of \eqref{PWSLienard-example-balanced}, defined in \eqref{SDI-example-1}. \end{proposition} Concerning system \eqref{PWSLienard-example-balanced}, Proposition \ref{prop-bounded-CL} and Theorem \ref{thm3} (see also Remark \ref{remark-balanceddd}) imply that, for each $\delta\in ]-\delta_0,\delta_0[$, the Minkowski dimension of any entry-exit orbit tending monotonically to $\hat y(\delta)$ is equal to $0$. It can be checked that system \eqref{PWSLienard-example-balanced} satisfies \eqref{assum1} for $\delta$ sufficiently small, and the curve of singularities is given by $S=\{y=\frac{x^{2}}{2}\}$. The associated slow dynamics is given by (see also \eqref{PWSslowdyn}) $$x'=-1-\delta-\frac{x}{2}+x^3, \text{ for } x\le 0, \ x'=-1-\frac{x}{2}+x^3, \text{ for } x\ge 0. $$ The slow dynamics has a simple zero $x_0>0$ and it is strictly negative for all $x\in]-x_0,x_0[$ and $\delta$ small enough. One can compute $x_{0}$ numerically, and we obtain $x_{0} \approx 1.16537$. Using \eqref{SDI-total} we get \begin{equation} \label{SDI-example-1} I_\delta(y)=\int_0^{\sqrt{2y}}\frac{xdx}{-1-\frac{x}{2}+x^3}+\int_{-\sqrt{2y}}^0\frac{xdx}{-1-\delta-\frac{x}{2}+x^3}, \ y\in]0,\frac{x_0^2}{2}[. \end{equation} Following \cite[Section 5.3]{BoxDomagoj}, we know that $I_0$ has a simple zero $\hat y_0\in]0,\frac{x_0^2}{2}[$, and it follows from the Implicit Function Theorem that this zero persists for all $\delta$ sufficiently close to $0$. One can compute zeroes of $I_{0}$ numerically. 
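Both $x_0$ and this zero can be reproduced with standard quadrature and root-finding routines; a minimal numerical sketch (it assumes the Python libraries NumPy and SciPy, and the root-finding brackets are chosen by inspection) is:
\begin{verbatim}
# Numerical sketch (assumes NumPy/SciPy): reproduce the approximations of
# x_0 and of the zero of I_0.  For delta = 0 both integrands in the slow
# divergence integral coincide.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

slow = lambda x: x**3 - x/2 - 1.0            # slow dynamics (delta = 0)
x0 = brentq(slow, 1.0, 2.0)                  # x_0 ~ 1.16537

def I0(y):
    # I_0(y) = int_{-sqrt(2y)}^{sqrt(2y)} x/(x^3 - x/2 - 1) dx
    return quad(lambda x: x/slow(x), -np.sqrt(2*y), np.sqrt(2*y))[0]

yhat0 = brentq(I0, 0.1, 0.67)                # hat y_0 ~ 0.608853
print(x0, yhat0, x0**2/2)                    # ~1.16537, ~0.608853, ~0.679047
\end{verbatim}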
Indeed, by approximating the integrand of \eqref{SDI-example-1} using Taylor series and then evaluating the integral, one obtains $\hat y_0 \approx 0.608853$. Observe that $\frac{x_0^2}{2} \approx 0.679047$. Now, we define \begin{equation}\label{eq-pws-bounded} \begin{cases} \dot x=y-\frac{x^{2}}{2} ,\\ \dot y=\epsilon^2 \left(\epsilon\alpha_--(1+\delta) x - \frac{x^{2}}{2} + x^{4}\right), \end{cases} \ \begin{cases} \dot x=y-\frac{x^{2}}{2} ,\\ \dot y=\epsilon^2 \left(\epsilon\alpha_+ -x - \frac{x^{2}}{2} + x^{4}\right), \end{cases} \end{equation} with $\alpha_{\pm}$ kept near $0$. System \eqref{eq-pws-bounded} with $-$ (resp. $+$) corresponds to the vector field defined in $x \leq 0$ (resp. $x \geq 0$). We have $$\beta_-=2(1+\delta) \text{ and } \beta_+=2,$$ with $\beta_\pm$ defined in \eqref{const-beta}. Let $\hat\delta\in ]-\delta_0,\delta_0[$, $\hat\delta\ne 0$. Then \eqref{eq-pws-bounded} has no crossing limit cycles Hausdorff close to the balanced canard cycle $\Gamma_{\hat y(\hat\delta)}$, for $\epsilon>0$, $\epsilon\sim 0$, $\alpha_\pm\sim 0$ and $\delta\sim\hat\delta$. Indeed, notice that the connection on the blow-up locus between the attracting branch $S_+=\{y=\frac{x^{2}}{2},x>0\}$ and the repelling branch $S_-=\{y=\frac{x^{2}}{2},x<0\}$ of $S$ is broken, because $\beta_-\ne\beta_+$ for $\delta=\hat\delta$ (see Figure \ref{fig-pws-lienard}(a)). Suppose that $\delta = 0$ and $\alpha_{\pm} = \alpha$. Then \eqref{eq-pws-bounded} becomes a smooth slow-fast system and therefore we are in the framework of \cite[Section 5.3]{BoxDomagoj}. Thus, for each $\epsilon>0$ and $\epsilon\sim 0$, \eqref{eq-pws-bounded} undergoes a saddle-node bifurcation of crossing limit cycles, Hausdorff close to $\Gamma_{\hat y(0)}$, when we vary $\alpha\sim 0$. \begin{remark}\label{remark-nonzeroMD} If $y=\hat y$ is a zero of $I$ of multiplicity $m_{\hat y}(I)$, then \eqref{eq-pws-lienard-general} can have at most $m_{\hat y}(I)+1$ limit cycles Hausdorff close to the canard cycle $\Gamma_{\hat y}$, for $\epsilon>0$, $\epsilon\sim 0$ and $\alpha_\pm\sim 0$. Moreover, if $\beta_\pm(\delta)$ are functions of $\delta$, $\beta_-(0)=\beta_+(0)$ and $\beta_-'(0)\ne \beta_+'(0)$ (the connection between $p_+$ and $p_-$ is broken in a regular way), and $I$ has a simple zero at $y=\hat y$, then for each $\epsilon>0$, $\epsilon\sim 0$ and $\alpha_\pm\sim 0$, \eqref{eq-pws-lienard-general} undergoes a saddle-node bifurcation of (crossing) limit cycles, near $\Gamma_{\hat y}$, when we vary $\delta\sim 0$. (We can apply this to \eqref{eq-pws-bounded}.) These and other cyclicity results will be proved in a separate paper. \end{remark} \subsection{Limit cycles near the unbounded canard cycle}\label{sec-LC-unbound} Consider the classical PWS Li\'{e}nard equation \begin{align}\label{PWSLienard-example-unbounded} \begin{cases} \dot x=y-(x^4+2x^2),\\ \dot y=-\epsilon 2x, \end{cases} \ x\le 0,\quad \begin{cases} \dot x=y-(x^4+\delta x^2),\\ \dot y=-\epsilon x, \end{cases} \ x\ge 0, \end{align} with $\delta\in\mathbb R$ kept close to $1$. System \eqref{PWSLienard-example-unbounded} is a special case of \eqref{model-Lienard1} and it satisfies \eqref{assum1} and \eqref{assum2} with $L_-=]-\infty,0[$ and $L_+=]0,\infty[$. Statement 1 of Theorem \ref{thm4} implies that, for each $\delta$ close to $1$, the Minkowski dimension of any entry-exit orbit tending (monotonically) to $\infty$ is equal to $0$. 
We focus now on \begin{align}\label{PWSLienard-example-unbounded-perturbed} \begin{cases} \dot x=y-(x^4+2x^2),\\ \dot y=\epsilon^2(\epsilon\alpha_- -2x), \end{cases} \ x\le 0,\quad \begin{cases} \dot x=y-(x^4+\delta x^2),\\ \dot y=\epsilon^2(\epsilon\alpha_+- x), \end{cases} \ x\ge 0, \end{align} where $\alpha_\pm$ are close to zero. We have (see \eqref{const-beta}) $$\beta_-=1 \text{ and } \beta_+=\frac{1}{\delta}.$$ Take $\hat \delta\ne 1$. Then $\beta_-\ne \beta_+$ and \eqref{PWSLienard-example-unbounded-perturbed} has no limit cycles Hausdorff close to the unbounded canard cycle defined in Section \ref{m<-section-infty}, for $\epsilon>0$, $\epsilon\sim 0$, $\alpha_\pm\sim 0$ and $\delta\sim\hat\delta$ (see Figure \ref{fig-pws-lienard}(a)). For $\delta=1$, we have the orbit on the blow-up locus connecting $p_+$ and $p_-$ (see Figure \ref{fig-pws-lienard}(b)), and the unbounded canard cycle may produce limit cycles of \eqref{PWSLienard-example-unbounded-perturbed} for $\epsilon>0$, $\epsilon\sim 0$, $\alpha_\pm\sim 0$ and $\delta\sim 1$. \section*{Declarations} \textbf{Ethical Approval} \ Not applicable. \\ \\ \textbf{Competing interests} \ The authors declare that they have no conflict of interest.\\ \\ \textbf{Authors' contributions} \ All authors conceived of the presented idea, developed the theory, performed the computations and contributed to the final manuscript. \\ \\ \textbf{Funding} \ The research of R. Huzak and G. Radunovi\'{c} was supported by: Croatian Science Foundation (HRZZ) grant IP-2022-10-9820. Additionally, the research of G. Radunovi\'{c} was partially supported by the Horizon grant 101183111-DSYREKI-HORIZON-MSCA-2023-SE-01. Otavio Henrique Perez is supported by Sao Paulo Research Foundation (FAPESP) grants 2021/10198-9 and 2024/00392-0.\\ \\ \textbf{Availability of data and materials} \ Not applicable. \bibliographystyle{plain} \bibliography{bibtex} \end{document}
2412.09767v1
http://arxiv.org/abs/2412.09767v1
Generalized Fiber Contraction Mapping Principle
\documentclass[11pt]{article} \usepackage[a4paper,margin=1in,footskip=0.25in]{geometry} \usepackage{verbatim} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm} \usepackage{tikz-cd} \usepackage{pgfplots} \pgfplotsset{compat=1.11} \usepgfplotslibrary{fillbetween} \usetikzlibrary{intersections} \usepackage{bbm} \usepackage{hyperref} \usepackage{comment} \usepackage{stmaryrd} \usepackage[new]{old-arrows} \usepackage{graphicx} \graphicspath{ {./Images/} } \usepackage{todonotes} \usepackage [english]{babel} \usepackage [autostyle, english = american]{csquotes} \MakeOuterQuote{"} \numberwithin{equation}{section} \theoremstyle{definition} \newtheorem*{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem*{example}{Example} \newtheorem{claim}{Claim} \newtheorem{theorem}{Theorem} \newtheorem*{main theorem}{Main Theorem} \newtheorem{problem}{Problem} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{conjecture}{Conjecture} \newtheorem*{theorem*}{Theorem} \newtheorem*{unnumprop}{Proposition} \pgfdeclarelayer{bg} \pgfsetlayers{bg,main} \begin{document} \title{Generalized Fiber Contraction Mapping Principle} \author{Alexandro Luna and Weiran Yang} \date{} \maketitle \begin{abstract} We prove a generalized non-stationary version of the fiber contraction mapping theorem. It was originally used in \cite{hp} to prove that the stable foliation of a $C^2$ Anosov diffeomorphism of a surface is $C^1$. Our generalized principle is used in \cite{Lu2}, where an analogous regularity result for stable foliations of non-stationary systems is proved. The result is stated in a general setting so that it may be used in future dynamical results in the random and non-stationary settings, especially for graph transform arguments. \end{abstract} \section*{Introduction} The contraction mapping principle is a classical result in mathematical analysis that has numerous applications in the theory of iterated function systems, Newton's method, the Inverse and Implicit Function Theorems, ordinary and partial differential equations, and more (see \cite{BS}, and references therein, for a survey of applications). Many versions of this principle and its converses have been studied and examined in different spaces. A detailed historical note on this theorem can be found in \cite{JJT}. This principle is frequently used in various areas of dynamical systems, especially in smooth dynamics. Since the early 1970s, it has been used in graph transform arguments to prove various existence and regularity results of stable foliations of hyperbolic systems \cite{hp, HPS}. Recently, hyperbolic dynamics have been used in studying the so-called trace maps \cite{C, Ca, DG1}. Understanding the dynamical behavior of these maps is a useful tool in deriving spectral properties of discrete Schr\"odinger operators with Sturmian potential \cite{bist, D2000, degt, dg3, DGY, GJK, L, M}. One promising approach to advance these results is to further develop the theory in the non-stationary case. In the random and non-stationary settings, existence and smoothness of stable manifolds are well understood (see for example Chapter 7 of \cite{Ar}). In the non-stationary or non-autonomous settings, questions regarding dynamical properties of Anosov families such as existence of stable manifolds \cite{Mu}, openness in the space of two-sided sequences of diffeomorphisms \cite{Mu1}, and structural stability \cite{CRV, Mu2} have been addressed. 
When it comes to regularity of non-stationary stable foliations, only partial results are available, such as when the sequence of maps has a constant tail \cite{S} or for a neighborhood of a common fixed point of the maps \cite{ZLZ}. Our overall goal is to derive regularity results for these foliations that are currently not available in the literature. Our primary motivation comes from questions on spectral properties of Sturmian Hamiltonians. This note is dedicated to providing the preliminary technical contraction mapping principles that will be useful in these non-stationary settings. In \cite{Lu2}, it is proved that the non-stationary stable foliation of a collection of diffeomorphisms of $\mathbb T^2$ that satisfy a common cone condition, and have uniformly bounded $C^2$ norms, is a $C^1$ foliation of $\mathbb T^2$. This result generalizes the classical version in \cite{hp}, where it is proved that the stable foliation of a $C^2$ Anosov diffeomorphism of a surface is $C^1$. In \cite{hp}, a fibered version of the contraction mapping principle is used to prove this $C^1$ smoothness, and this paper is dedicated to supplying the appropriate generalized version of this principle to be applicable in \cite{Lu2} and future analogous results. Given complete metric spaces $X$ and $Y$, we consider a sequence of maps $(f_n)$, $f_n:X\rightarrow X$, and for each $x\in X$, a sequence of maps $\left(h_n^x\right)$, $h_n^x:Y\rightarrow Y$. Given the sequence of skew maps $(F_n)$ defined via $$F_n:X\times Y\rightarrow X\times Y, (x,y)\mapsto \left(f_n(x), h_n^x(y)\right),$$ we show that, under uniform contraction rates and reasonable continuity and bounded orbit assumptions, there is a $\left(x^*, y^*\right)\in X\times Y$ such that $$\lim\limits_{n\rightarrow\infty} F_1\circ \cdots \circ F_n(x,y)=\left(x^*, y^*\right)$$ for all $(x,y)\in X\times Y$. Outside of its direct application in \cite{Lu2}, this result has the potential to be used for various dynamical techniques in the random or non-stationary settings. This paper is a result of an undergraduate research project, supervised by the first author, that occurred during the Summer and Fall quarters of 2024. \subsection*{Background and Main Results} Given a metric space $(X,d)$ and a mapping $f:X\rightarrow X$, we define the \textit{Lipschitz constant} of $f$ to be $$\text{Lip}(f):=\sup_{x_1\neq x_2} \frac{d\left(f(x_1), f(x_2)\right)}{d(x_1,x_2)}.$$ If $\text{Lip}(f)<1$, then we say that $f$ is a \textit{contraction} on $X$. An element $x^*\in X$ is a \textit{fixed point} of $f$ if $$f\left(x^*\right)=x^*.$$ \begin{theorem}[Contraction Mapping Principle] If $X$ is a complete metric space and $f:X\rightarrow X$ is a contraction on $X$, then $f$ has a unique fixed point $x^*\in X$ and moreover, $$\lim\limits_{n\rightarrow\infty} f^n(x)=x^*$$ for all $x\in X$. \end{theorem} Our goal is to generalize the following fibered version of this principle. \begin{theorem}[Fiber Contraction Principle \cite{hp}]\label{Fiber Contraction Theorem} Let $X$ be a space, $Y$ be a metric space, $f:X\rightarrow X$ a mapping, and $\{g_x\}_{x\in X}$ a family of maps $g_x:Y\rightarrow Y$ such that $$F:X\times Y\rightarrow X\times Y, \ (x,y)\mapsto \left( f(x), g_x(y)\right)$$ is continuous. Suppose that $p\in X$ is a fixed point of $f$ satisfying $\lim\limits_{n\rightarrow\infty}f^n(x)=p$ for all $x\in X$, $q\in Y$ is a fixed point of $g_p$, and $$\limsup\limits_{n\rightarrow\infty} \text{Lip} \left(g_{f^n(x)}\right)<1$$ for all $x\in X$. 
Then, $(p,q)\in X\times Y$ is a fixed point of $F$ satisfying $$\lim\limits_{n\rightarrow\infty}F^n(x,y)=(p,q),$$ for all $(x,y)\in X\times Y$. \end{theorem} The following theorem is a non-stationary version of the Contraction Mapping Principle. \begin{theorem}\label{base lemma} Let $(X,d)$ be a complete metric space and $(f_n)$, $f_n:X\rightarrow X$, a sequence of contractions. If $$\mu:=\sup_{n\in\mathbb N}\text{Lip}(f_n)<1$$ and there is an $x_0\in X$ such that $\left(d(f_n(x_0), x_0)\right)$ is bounded, then there is an $x^*\in X$ such that $$\lim\limits_{n\rightarrow\infty}f_1\circ \cdots \circ f_n(x)=x^*$$ for all $x\in X$. \end{theorem} The next theorem is a non-stationary version of the Fiber Contraction Mapping Principle. \begin{theorem}\label{main theorem} Let $X$ and $Y$ be complete metric spaces. Suppose that $(f_n)$, $f_n:X\rightarrow X$, is a sequence of mappings such that \begin{equation}\label{base contraction rate} \mu := \sup\limits_{n\in\mathbb N}\text{Lip}\left(f_n\right)<1. \end{equation} For each $x\in X$, let $\left(h_n^x\right)$, $h_n^x:Y\rightarrow Y$, be a sequence of mappings such that \begin{equation}\label{fiber contraction rate} \lambda := \sup_{n\in\mathbb N}\sup_{x\in X}\text{Lip}\left(h_n^x\right)<1. \end{equation} Suppose that \begin{itemize} \item[(1)] There is an $x_0\in X$ such that $\left\{f_n(x_0)\right\}_{n\in\mathbb N}$ is bounded in $X$; \item[(2)] For any bounded set $\Omega\subset X\times Y$, the set $\left\{h_n^x(y)\right\}_{(x,y)\in \Omega, n\in\mathbb N}$ is bounded in $Y$; \item[(3)] For any bounded set $K\subset Y$ and any $n\in\mathbb N$, we have $\lim\limits_{d_X(x,x')\rightarrow0}d_Y\left(h_n^x(y), h_n^{x'}(y)\right)=0$, and this limit is uniform in $y\in K$. \end{itemize} Then, for the skew-maps $F_n:X\times Y\rightarrow X\times Y$, defined via $F_n(x,y):=\left(f_n(x), h_n^x(y)\right)$, there is a $\left(x^*,y^*\right)\in X\times Y$ such that $$\lim\limits_{n\rightarrow \infty}F_1\circ\cdots\circ F_n(x,y)=\left(x^*,y^*\right)$$ for all $(x,y)\in X\times Y$. \end{theorem} \section{Proofs of the Main Theorems} We first prove Theorem \ref{base lemma}. \begin{proof}[Proof of Theorem \ref{base lemma}.] Denote $$M:=\sup_{n\in\mathbb N}\left\{d(f_n(x_0), x_0)\right\}.$$ Suppose $n,m\in\mathbb N$ with $m>n$. Then, \begin{align} \label{Cauchy seq boundary} d\left(f_1\circ\cdots \circ f_n(x_0),f_1\circ\cdots \circ f_m(x_0)\right)\leq \mu^n d(f_{n+1} \circ \ldots \circ f_m(x_0),x_0), \end{align} and by the triangle inequality, \begin{align*}\notag &\hspace{0.2in}d(f_{n+1} \circ \ldots \circ f_m(x_0),x_0)\\ &\leq d\left(f_{n+1}(x_0), x_0\right)+\sum_{i=2}^{m-n} d\left(f_{n+1}\circ\cdots \circ f_{n+i}(x_0), f_{n+1}\circ\cdots \circ f_{n+i-1}(x_0)\right) \\ \notag &\leq d\left(f_{n+1}(x_0), x_0\right)+\sum_{i=2}^{m-n} \mu^{i-1} d\left(f_{n+i}(x_0), x_0\right)\\ \notag &\leq M\sum_{i=1}^{m-n}\mu^{i-1}\leq M\sum_{i=1}^{\infty} \mu^{i-1}. \end{align*} It follows that $$ d\left(f_1\circ\cdots \circ f_n(x_0),f_1\circ\cdots \circ f_m(x_0)\right)\leq \mu^n M \sum_{i=1}^{\infty} \mu^{i-1}\rightarrow 0,$$ as $n,m\rightarrow\infty$, so that $\left(f_1\circ\cdots\circ f_n(x_0)\right)$ is a Cauchy sequence. 
Since $X$ is complete, there is an $x^*\in X$ such that $$\lim\limits_{n\rightarrow\infty}f_1\circ \cdots \circ f_n(x_0)=x^*.$$ If $x\in X$, then $$d\left(f_1\circ\cdots\circ f_n(x), f_1\circ\cdots\circ f_n(x_0) \right)\leq \mu^nd(x,x_0)\rightarrow 0$$ as $n\rightarrow \infty$, and hence $$\lim\limits_{n\rightarrow\infty}f_1\circ \cdots \circ f_n(x)=x^*.$$ \end{proof} \begin{remark}\label{remark on boundedness assumption in base lemma} Notice that from the proof, we deduced that the condition that $\left(d(f_n(x_0), x_0)\right)$ is a bounded sequence implies that the set $\{f_k\circ\cdots \circ f_l( x_0)\}_{k\leq l}$ is bounded in $X$, due to the uniform contraction rates of the maps. We also note that this condition cannot be removed. As an example, let $X=\mathbb R$ and define $f_n:\mathbb R\rightarrow \mathbb R$ via $f_n(x):=\frac{1}{2} x+ 3^n$. Then, we have that $$f_1\circ\cdots\circ f_n(0)=\sum_{i=1}^n \frac{3^i}{2^{i-1}}\rightarrow \infty$$ as $n\rightarrow \infty$. \end{remark} We now prove Theorem \ref{main theorem}. \begin{proof}[Proof of Theorem \ref{main theorem}.] If $\pi_Y:X\times Y\rightarrow Y$ is the projection map $\pi_Y(x,y)=y$, then for each $k\leq n$, we have \begin{equation}\label{projected coordinate} \pi_Y\circ F_k\circ \cdots \circ F_n(x,y)=h_k^{f_{k+1}\cdots f_nx}\cdots h_n^x(y). \end{equation} From Remark \ref{remark on boundedness assumption in base lemma}, we know there is an $M=M(x_0)>0$ such that \begin{equation}\label{base iteration bound} d_{X}\left(f_k\circ \cdots \circ f_l (x_0), x_0\right)<M \end{equation} for all $k\leq l$. Fix $y_0\in Y$. By condition (2), there is an $S>0$ such that $$d_Y\left(h_n^{x}(y_0), y_0\right)<S$$ for all $n\in\mathbb N$ and $x\in B_M(x_0)$. From the triangle inequality, (\ref{fiber contraction rate}), and (\ref{base iteration bound}), if $k\leq n$, we have \begin{align*} &d_Y\left(h_k^{f_{k+1}\cdots f_n x_0}\cdots h_n^{x_0}(y_0), y_0\right)\leq d_Y\left( h_k^{f_{k+1}\cdots f_n x_0}(y_0), y_0\right) \\ &\hspace{2cm}+\sum_{j=2}^{n-k}d_{Y}\left(h_k^{f_{k+1}\cdots f_n x_0}\cdots h_{k+j}^{f_{k+j+1}\cdots f_n x_0}(y_0), h_k^{f_{k+1}\cdots f_n x_0}\cdots h_{k+j-1}^{f_{k+j}\cdots f_n x_0}(y_0)\right)\\ &\leq d_Y\left( h_k^{f_{k+1}\cdots f_n x_0}(y_0), y_0\right)+\sum_{j=2}^{n-k} \lambda^{j-1} d_Y\left( h_{k+j}^{f_{k+j+1}\cdots f_n x_0}(y_0), y_0\right)\leq S\sum_{i=1}^{n-k}\lambda ^{i-1}. \end{align*} That is, \begin{equation}\label{Upper bound for iterations in Y} d_Y\left( h_k^{f_{k+1}\circ\cdots\circ f_n x_0}\circ\cdots\circ h_{n-1}^{f_nx_0}\circ h_n^{x_0}(y_0) ,y_0\right) < L:=S\sum_{i=1}^{\infty} \lambda^{i-1} \end{equation} for all $k\leq n$. \begin{claim}\label{existence of limit} There is a $y^*\in Y$ such that $$\lim\limits_{n\rightarrow\infty}h_1^{f_2\cdots f_n x_0}\circ \cdots \circ h_n^{x_0}(y_0)=y^*.$$ \end{claim} \begin{proof}[Proof of Claim \ref{existence of limit}.] Let $\epsilon>0$. Choose $N_0$ such that \begin{equation}\label{choice of N_0 claim 1} 2\lambda^{N_0-1}L<\epsilon. \end{equation} By condition (3), there is a $\delta>0$ such that if $d_X(x,x')<\delta$, $x,x'\in B_M(x_0)$, then \begin{equation}\label{Uniform continuity claim 1} d_{Y}\left(h_n^{x}(y),h_n^{x'}(y)\right)<\epsilon, \end{equation} for all $y\in B_L(y_0)$ and $n=1,2,\dots, N_0$. Choose $N_1$ such that \begin{equation}\label{Choice of N_1 in claim 1} \mu^{N_1}M<\delta. \end{equation} Now, suppose $m,n>\tilde N:=N_0+N_1$ with $m>n$. 
For $j=1,\dots, N_0$, define $$A_{j}:=d_{Y}\left(\pi_Y\circ F_j\circ\cdots\circ F_m(x_0,y_0), h_j^{f_{j+1}\cdots f_m(x_0)}\left( \pi_Y\circ F_{j+1}\circ\cdots \circ F_n(x_0,y_0)\right)\right),$$ $$B_j:=d_Y\left(h_j^{f_{j+1}\cdots f_m(x_0)}\circ \pi_Y \circ F_{j+1}\circ\cdots \circ F_n(x_0,y_0), \pi_Y \circ F_j\circ \cdots \circ F_n(x_0,y_0)\right) ,$$ and $$C_j:=d_Y\left( \pi_Y\circ F_j\circ\cdots\circ F_m(x_0,y_0), \pi_Y\circ F_j\circ \cdots \circ F_n(x_0,y_0)\right).$$ We will show that $$C_1\leq \left(\text{Constant}\right) \cdot \epsilon $$ for some constant that only depends on $\lambda$. First, we derive relations between the quantities $A_j, \ B_j$ and $C_j$. Notice that from (\ref{projected coordinate}) and (\ref{Upper bound for iterations in Y}), we have \begin{equation}\label{Upper bound for C claim 1} C_{N_0}\leq 2L \end{equation} and by the triangle inequality, we have \begin{equation}\label{C compared to A and B} C_j\leq A_j+B_j. \end{equation} In addition, since \begin{align*} A_j=d_{Y}\left(h_j^{f_{j+1}\cdots f_m(x_0)}\left(\pi_Y\circ F_{j+1}\circ\cdots\circ F_m(x_0,y_0)\right), h_j^{f_{j+1}\cdots f_m(x_0)}\left( \pi_Y\circ F_{j+1}\circ\cdots \circ F_n(x_0,y_0)\right)\right), \end{align*} by (\ref{fiber contraction rate}), this implies that \begin{equation}\label{A compared to C} A_j\leq \lambda C_{j+1}. \end{equation} Also, since $j\leq N_0$, we have that $$n-{j}\geq \tilde N-{N_0}=N_1$$ so from (\ref{base contraction rate}), (\ref{base iteration bound}), and (\ref{Choice of N_1 in claim 1}), we have $$d_X\left(f_{j+1}\circ\cdots \circ f_m(x_0),f_{j+1}\circ \cdots \circ f_{n}(x_0)\right)\leq \mu^{n-{j}}d_X(f_{n+1}\circ\cdots\circ f_m(x_0),x_0) \leq \mu^{N_1}M<\delta.$$ Since $$B_j=d_Y\left(h_j^{f_{j+1}\cdots f_m(x_0)}\left(\pi_Y \circ F_{j+1}\circ\cdots \circ F_n(x_0,y_0)\right), h_j^{f_{j+1}\cdots f_n(x_0)} \circ\left(\pi_Y \circ F_{j+1}\circ \cdots \circ F_n(x_0,y_0)\right)\right),$$ in combination with (\ref{Upper bound for iterations in Y}) and (\ref{Uniform continuity claim 1}), this implies that \begin{equation}\label{B less than epsilon} B_j<\epsilon, \end{equation} for all $j\leq N_0$. By a repeated application of equations (\ref{C compared to A and B}), (\ref{A compared to C}), and (\ref{B less than epsilon}), we have \begin{align*} &d_Y\left(\pi_Y\circ F_1\circ\cdots\circ F_m(x_0,y_0), \pi_Y\circ F_1\circ\cdots\circ F_n(x_0,y_0)\right)=C_1\\ &\leq A_1+B_1\leq \lambda C_2+\epsilon\\ &\leq \lambda(A_2+B_2)+\epsilon\leq \lambda^2C_3+\lambda\epsilon+\epsilon\\ &\leq \lambda^2(A_3+B_3)+\lambda\epsilon+\epsilon\leq \lambda^3C_4+\lambda^2\epsilon+\lambda\epsilon+\epsilon\\ &\leq \dots\leq \lambda^{N_0-1}C_{N_0}+\lambda^{n-2}\epsilon+\cdots+\lambda\epsilon+\epsilon\\ &\leq 2\lambda^{N_0-1}L+\lambda^{n-2}\epsilon+\cdots+\lambda\epsilon+\epsilon\\ &\leq \epsilon+\epsilon \sum_{i=0}^{n-2}\lambda^i\leq \left(1+\sum_{i=0}^{\infty}\lambda^i\right)\cdot \epsilon, \end{align*} where the third to last inequality follows from (\ref{Upper bound for C claim 1}) and the second to last follows from (\ref{choice of N_0 claim 1}). We conclude that $\left(\pi_Y\circ F_1\circ\cdots \circ F_n(x_0,y_0)\right)$ is a Cauchy sequence in $Y$ and hence convergent, since $Y$ is complete. 
\end{proof} Denoting the limit of the sequence $\left(\pi_Y\circ F_1\circ\cdots \circ F_n(x_0,y_0)\right)_{n\in\mathbb N}$ by $y^*$, we see that for any $y\in Y$, we have $$d_Y\left(\pi_Y\circ F_1\circ\cdots \circ F_n(x_0,y), \pi_Y\circ F_1\circ\cdots \circ F_n(x_0,y_0)\right)\leq \lambda^nd_Y(y,y_0)\rightarrow 0$$ as $n\rightarrow \infty$, so that \begin{equation}\label{independence of y_0} \lim\limits_{n\rightarrow\infty} \pi_Y\circ F_1\circ\cdots \circ F_n(x_0,y)=y^*. \end{equation} \begin{claim}\label{limit independent of x} For each $x\in X$ and $y\in Y$, we have $$\lim\limits_{n\rightarrow\infty}\pi_Y\circ F_1\circ\cdots\circ F_n(x,y)=y^*.$$ \end{claim} \begin{proof}[Proof of Claim \ref{limit independent of x}] Let $\epsilon > 0$, $x \in X$, $y \in Y$. Set $L':=d_Y(y,y_0)+L$. By condition (2), there is an $S'>0$ such that if $d_X\left(x',x''\right)<d_X(x,x_0)$, then \begin{equation}\label{bound S' in claim 2} d_Y\left(h_n^{x'}(y), h_n^{x''}(y)\right)\leq S' \end{equation} for all $y\in B_{L'}(y_0)$ and $n\in\mathbb N$. Choose $N_0$ so large that \begin{equation}\label{choice of N_0 claim 2} S' \sum^{\infty}_{i =N_0} \lambda^{i} < \epsilon. \end{equation} By condition (3), there is a $\delta>0$ such that if $d_X(x',x'')<\delta$, then \begin{equation}\label{Uniform continuity claim 2} d_{Y}\left(h_n^{x'}(y),h_n^{x''}(y)\right)<\epsilon, \end{equation} for all $y\in B_{L'}(y_0)$ and $n=1,2,\dots, N_0$. Choose $N_1$ so large that \begin{equation}\label{Choice of N_1 claim 2} \mu^{N_1}d(x,x_0)<\delta. \end{equation} Suppose $n > \tilde N:=N_0 + N_1$. Let us adopt the notation $$T_k^x:=h_1^{f_2\cdots f_n x}\circ\cdots \circ h_k^{f_{k+1}\cdots f_n x}$$ so that $$\pi_Y\circ F_1\circ\cdots \circ F_n(x,y)=T_{k-1}^{x}\left(\pi_Y\circ F_k\circ\cdots \circ F_n(x,y)\right)$$ for each $k\leq n$. Define $$A_k:=d_Y\left(T_{k-1}^x\left(\pi_Y\circ F_{k}\circ \cdots \circ F_n(x,y)\right), T_{k-1}^x\left(\pi_Y\circ F_{k}\circ \cdots \circ F_n(x_0,y) \right) \right)$$ $$B_k:=d_Y\left(h_k^{f_{k+1}\cdots f_nx}\left(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y)\right), h_k^{f_{k+1}\cdots f_nx_0} \left(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y)\right)\right)$$ and $$C_k := d_Y\left(T_{k}^x\left(\pi_Y\circ F_{k+1}\circ \cdots \circ F_n(x_0,y) \right),T^{x_0}_n(y_0) \right)$$ for $k\leq n$. We will show that $$C_n\leq (\text{Constant}) \cdot \epsilon$$ for some constant that only depends on $\lambda$, but we first establish relations between the quantities $A_k$, $B_k$ and $C_k$. First notice that by definition of $A_k$ and $B_k$, and (\ref{fiber contraction rate}), we have \begin{equation}\label{relationship A and B Claim 2} A_k \leq \lambda^{k-1}B_k, \end{equation} and by the triangle inequality, \begin{equation}\label{relationship A and C claim 2} C_k \leq A_k + C_{k-1}. \end{equation} From (\ref{Upper bound for iterations in Y}), (\ref{fiber contraction rate}) and the triangle inequality, we have \begin{align*} &d_Y(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y),y_0)\\ &\leq d_Y(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y),\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y_0))\\ & \hspace{5 cm} +d_Y(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y_0),y_0)\\ &\leq \lambda^{n-k}d_Y(y,y_0)+d_Y(\pi_Y\circ F_{k+1}\circ\cdots \circ F_n(x_0,y_0),y_0)\\ &\leq d_Y(y,y_0)+L=L' \end{align*} and also $$d_X\left( f_{k+1}\circ\cdots \circ f_n(x), f_{k+1}\circ\cdots \circ f_n(x_0)\right)\leq \mu^{n-k}d(x,x_0)\leq d_X(x,x_0)$$ for all $k\leq n$. 
So, from (\ref{bound S' in claim 2}), we have \begin{equation}\label{Bound for B claim 2} B_k\leq S', \end{equation} for all $k\leq n$. Also, for $k \leq N_0$, we have that $$n-{k}\geq \tilde N-{N_0}=N_1,$$ so from (\ref{base contraction rate}) and (\ref{Choice of N_1 claim 2}), we know that $$d_X\left(f_{k+1}\circ\cdots \circ f_n(x_0),f_{k+1}\circ \cdots \circ f_{n}(x)\right)\leq \mu^{n-{k}}d_X(x,x_0)\leq \mu^{N_1}d(x,x_0)<\delta.$$ Thus, from (\ref{Uniform continuity claim 2}), we have \begin{align}\label{B less than epsilon claim 2} B_k \leq \epsilon \end{align} for all $k\leq N_0$. Now, using a repeated application of (\ref{relationship A and B Claim 2}) and (\ref{relationship A and C claim 2}), we have \begin{align*} &d_Y\left(\pi_Y\left(F_1\circ\cdots \circ F_n(x,y)\right), \pi_Y(F_1\circ\cdots \circ F_n(x_0,y_0))\right)= C_n\\ &\leq A_n+C_{n-1}\leq \lambda^{n-1}B_n+A_{n-1} + C_{n-2} \\ &\leq \lambda^{n-1}B_n + \lambda^{n-2}B_{n-1} + A_{n-2} + C_{n-3}\\ &\leq \lambda^{n-1}B_n + \lambda^{n-2}B_{n-1} + \lambda^{n-3}B_{n-2} + A_{n-3}+C_{n-4}\\ &\leq \dots\leq \lambda^{n-1}B_n+\lambda^{n-2}B_{n-1}+\cdots+\lambda B_2+B_1 \end{align*} and from (\ref{Bound for B claim 2}), (\ref{B less than epsilon claim 2}) and (\ref{choice of N_0 claim 2}), the last quantity satisfies \begin{align*} &\sum_{k=1}^{n}\lambda^{k-1}B_k \leq \sum^{N_0}_{k=1}\lambda^{k-1}B_k + \sum^{n}_{k=N_0+1}\lambda^{k-1}B_k \\ &\leq \epsilon \sum^{N_0}_{k=1}\lambda^{k-1} + \epsilon\leq \left(1+\sum_{k=1}^{\infty}\lambda^{k-1}\right)\cdot \epsilon. \end{align*} Therefore, $$ \lim_{n \rightarrow\infty}d_Y\left(\pi_Y\left(F_1\circ\cdots \circ F_n(x,y)\right), \pi_Y(F_1\circ\cdots \circ F_n(x_0,y_0))\right) = 0,$$ so from (\ref{independence of y_0}), the claim holds. \end{proof} Combining these claims with Theorem \ref{base lemma} implies that there is some $x^*\in X$ such that $$\lim\limits_{n\rightarrow\infty}F_1\circ \cdots \circ F_n(x,y)=\left(x^*,y^*\right)$$ for all $(x,y)\in X\times Y$. \end{proof} \begin{remark} Notice that condition (3) can be reformulated as the assumption that, for bounded $K\subset Y$, the family of maps $\left\{x\mapsto h_n^{x}(y)\right\}_{y\in K}$ is uniformly equicontinuous for each $n\in\mathbb N$. This is analogous to the continuity assumption in the Fiber Contraction Theorem. \end{remark} We now give examples demonstrating that conditions (2) and (3) cannot be removed in Theorem \ref{main theorem}. Notice that, by Remark \ref{remark on boundedness assumption in base lemma}, condition (1) cannot be removed. \begin{example} Condition (2) cannot be removed. Indeed, if we set $X=Y=\mathbb R$ and, for all $x,y \in \mathbb{R}$ and $n \in \mathbb{N}$, set $f_n(x)=\frac{1}{2}x$ and $h_n^x(y)=\frac{1}{2}y+3^n$, then $$\pi_Y\circ F_1\circ\cdots\circ F_n(0,0)=\sum_{i=1}^n \frac{3^i}{2^{i-1}}\rightarrow \infty$$ as $n\rightarrow \infty$. \end{example} \begin{example} Condition (3) cannot be removed. Indeed, if we set $X=Y=\mathbb R$ and, for all $x,y\in \mathbb R$ and $n\in\mathbb N$, set $f_n(x)=\frac{1}{2}{x}$ and $$h_n^x(y)=\begin{cases} 0 & x=0 \\ \frac{1}{2}\left(y-\frac{1}{4}\right)+\frac{1}{4} & x\neq 0 \end{cases},$$ then $$\lim\limits_{n\rightarrow\infty} F_1\circ\cdots\circ F_n(x,y)=\begin{cases} (0,0) & \text{if} \ x= 0 \\ \left(0,\frac{1}{4}\right) & \text{if} \ x\neq 0 \end{cases}.$$ \end{example} \section*{Acknowledgements} We would like to thank Anton Gorodetski for checking the validity of our statements and offering suggestions to improve the quality of the text. 
The first author was supported by NSF grant DMS-2247966 (PI: A. Gorodetski). \footnotesize \begin{thebibliography}{99} \bibitem[Ar]{Ar} Arnold L., Random dynamical systems, {\it Springer Monographs in Mathematics}, Springer-Verlag, Berlin, 1998. xvi+586 pp. \bibitem[BIST]{bist} Bellissard J., Iochum B., Scoppola E., and Testard D., Spectral properties of one-dimensional quasicrystals, {\it Commun. Math. Phys.}, vol. 125 (1989), pp. 527--543. \bibitem[BS]{BS} Brooks R.M., Schmitt K. The contraction mapping principle and some applications. (2009). Electronic Journal of Differential Equations, 1(Mon. 01-09), Mon. 09, 1-90. https://doi.org/10.58997/ejde.mon.09 \bibitem[C]{C} Casdagli M., Symbolic dynamics for the renormalization map of a quasiperiodic Schr\"odinger equation, \textit{Comm.\ Math.\ Phys.}\ \textbf{107} (1986), 295--318. \bibitem[Ca]{Ca} Cantat S., Bers and H\'enon, Painlev\'e and Schr\"odinger, \textit{Duke Math.\ J.}\ \textbf{149} (2009), 411--460. \bibitem[CRV]{CRV} Castro A., Rodrigues F.B., Varandas P., Stability and limit theorems for sequences of uniformly hyperbolic dynamics, \textit{Journal of Mathematical Analysis and Applications}, Volume 480, Issue 2, 2019. \bibitem[D]{D2000} Damanik D., Substitution Hamiltonians with bounded trace map orbits, {\it J. Math. Anal. Appl.}, vol. 249 (2000), no. 2, pp. 393--411. \bibitem[DEGT]{degt} Damanik D., Embree M., Gorodetski A., and Tcheremchantsev S., The Fractal Dimension of the Spectrum of the Fibonacci Hamiltonian, {\it Communications in Mathematical Physics}, vol. 280 (2008), no. 2, pp. 499-516. \bibitem[DG1]{DG1} Damanik D., Gorodetski A., Hyperbolicity of the Trace Map for the Weakly Coupled Fibonacci Hamiltonian, {\it Nonlinearity,} vol. 22 (2009), pp. 123--143. \bibitem[DG2]{dg3} Damanik D., Gorodetski A., Spectral and Quantum Dynamical Properties of the Weakly Coupled Fibonacci Hamiltonian, {\it Communications in Mathematical Physics}, vol. 305 (2011), pp. 221--277. \bibitem[DGY]{DGY} Damanik D., Gorodetski A., Yessen W.N., The Fibonacci Hamiltonian. \textit{Inventiones mathematicae}, 206 (2014), 629-692. \bibitem[GJK]{GJK} Gorodetski A., Jang S.u., Kleptsyn V., Sturmian Trace Maps, in preparation. \bibitem[HP]{hp} Hirsch M., Pugh C., Stable manifolds and hyperbolic sets. 1970 {\it Global Analysis (Proc. Sympos. Pure Math., Vol. XIV, Berkeley, Calif., 1968)} pp. 133--163 {\it Amer. Math. Soc., Providence, R.I.} \bibitem[HPS]{HPS} Hirsch M., Pugh C., Shub M., Invariant Manifolds, Lecture Notes in Math. \textbf{583}, Springer-Verlag, Berlin, 1977. \bibitem[J]{J} Jachymski J., Continuous dependence of fixed points on parameters, remetrization theorems, and an application to the initial value problem. \textit{Math. Methods Appl. Sci.} 46 (2023). \bibitem[JJT]{JJT} Jachymski J., Jóźwik I., Terepeta M. The Banach Fixed Point Theorem: selected topics from its hundred-year history. \textit{Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat.} 118, 140 (2024). https://doi.org/10.1007/s13398-024-01636-6 \bibitem[KH]{KH} Katok A., Hasselblat B., Introduction to the Modern Theory of Dynamical System, Cambridge Univ. Press, 1995. \bibitem[L1]{L} Luna A., On the spectrum of Sturmian Hamiltonians in a small coupling regime, (2024). arXiv:2408.01637 \bibitem[L2]{Lu2} Luna A., Regularity of Non-stationary Stable Manifolds of Toral Anosov Maps, Pre-print (2024). \bibitem[M]{M} M.\,Mei, Spectral properties of discrete Schr\"odinger operators with potentials generated by primitive invertible substitutions, {\it J. Math. Phys}. 
55, 082701 (2014). \bibitem[Mu1]{Mu1} Muentes Acevedo J. de J., Openness of Anosov Families. Journal of the Korean Mathematical Society, 55(3) (2018), 575–591. https://doi.org/10.4134/JKMS.J170312 \bibitem[Mu2]{Mu2} Muentes Acevedo J. de J., Structural stability and a characterization of Anosov families. \textit{Dynamical Systems}, 34(3) (2018), 399–421. https://doi.org/10.1080/14689367.2018.1546380 \bibitem[Mu3]{Mu} Muentes Acevedo J. de J., Local stable and unstable manifolds for Anosov families, \textit{Hokkaido Math. J.}, 48 (2019), 513--535. \bibitem[S]{S} Stenlund M., Non-stationary compositions of Anosov diffeomorphisms, \textit{Nonlinearity}, \textbf{24}, No.10 (2011) \bibitem[ZLZ]{ZLZ} Zhang W., Lu K., Zhang W., Smooth invariant foliations without a bunching condition and Belitskii's $C^1$ linearization for random dynamical systems, (2023). arXiv:2307.11284 \end{thebibliography} \newcommand{\Addresses}{{ \bigskip \footnotesize \textsc{Department of Mathematics, University of California, Irvine, CA 92697, USA}\par\nopagebreak \textit{E-mail address}: \texttt{[email protected]}\\ \\ \textsc{Department of Mathematics, University of California, Irvine, CA 92697, USA}\par\nopagebreak \textit{E-mail address}: \texttt{[email protected]} }} \Addresses \end{document}
2412.09848v2
http://arxiv.org/abs/2412.09848v2
Polarized cylinders in Du Val del Pezzo surfaces of degree two
\documentclass[a4paper,11pt]{amsart} \usepackage{amscd,amssymb} \usepackage{mathrsfs} \usepackage{bm} \usepackage{color} \usepackage{multirow,bigdelim} \usepackage[all]{xy} \usepackage{hyperref} \usepackage{tikz} \usepackage{geometry} \geometry{left=25mm,right=25mm,top=30mm,bottom=30mm} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{conj}[thm]{Conjecture} \newtheorem{claim}[thm]{Claim} \theoremstyle{definition} \newtheorem{defin}[thm]{Definition} \newtheorem{eg}[thm]{Example} \newtheorem*{organization}{Organization of article} \newtheorem*{conventions}{Conventions} \newtheorem*{notation}{Notation} \newtheorem*{acknowledgment}{Acknowledgment} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem*{remark}{Remark} \numberwithin{equation}{section} \newcommand{\bA}{\mathbb{A}} \newcommand{\bC}{\mathbb{C}} \newcommand{\fC}{\mathfrak{C}} \newcommand{\bD}{\mathbb{D}} \newcommand{\fD}{\mathfrak{D}} \newcommand{\bF}{\mathbb{F}} \newcommand{\sF}{\mathscr{F}} \newcommand{\bG}{\mathbb{G}} \newcommand{\bk}{\Bbbk} \newcommand{\sL}{\mathscr{L}} \newcommand{\wsL}{\widetilde{\mathscr{L}}} \newcommand{\sM}{\mathscr{M}} \newcommand{\fm}{\mathfrak{m}} \newcommand{\sO}{\mathscr{O}} \newcommand{\bP}{\mathbb{P}} \newcommand{\fp}{\mathfrak{p}} \newcommand{\wtp}{\widetilde{p}} \newcommand{\bQ}{\mathbb{Q}} \newcommand{\bR}{\mathbb{R}} \newcommand{\fS}{\mathfrak{S}} \newcommand{\fU}{\mathscr{U}} \newcommand{\wU}{\widetilde{U}} \newcommand{\bV}{\mathbb{V}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\Amp}{\mathrm{Amp}} \newcommand{\Ampc}{\mathrm{Amp}^{\mathrm{cyl}}} \newcommand{\Aut}{\mathrm{Aut}} \newcommand{\Bl}{\mathrm{Bl}} \newcommand{\Bs}{\mathrm{Bs}} \newcommand{\cont}{\mathrm{cont}} \newcommand{\Cl}{\mathrm{Cl}} \newcommand{\Div}{\mathrm{Div}} \newcommand{\Ker}{\mathrm{Ker}} \newcommand{\NE}{\overline{\mathrm{NE}}} \newcommand{\mult}{\mathrm{mult}} \newcommand{\Pic}{\mathrm{Pic}} \newcommand{\Picq}{\mathrm{Pic}(S) \otimes _{\mathbb{Z}}\mathbb{Q}} \newcommand{\Proj}{\mathrm{Proj}} \newcommand{\rank}{\mathrm{rank}} \newcommand{\red}{\mathrm{red}} \newcommand{\Sing}{\mathrm{Sing}} \newcommand{\Sm}{\mathrm{Sm}} \newcommand{\Spec}{\mathrm{Spec}} \newcommand{\Supp}{\mathrm{Supp}} \newcommand{\wA}{\widetilde{A}} \newcommand{\wB}{\widetilde{B}} \newcommand{\wC}{\widetilde{C}} \newcommand{\wD}{\widetilde{D}} \newcommand{\wE}{\widetilde{E}} \newcommand{\wF}{\widetilde{F}} \newcommand{\wR}{\widetilde{R}} \newcommand{\wS}{\widetilde{S}} \newcommand{\wGamma}{\widetilde{\Gamma}} \newcommand{\wDelta}{\widetilde{\Delta}} \newcommand{\hD}{\hat{D}} \newcommand{\hF}{\hat{F}} \newcommand{\hM}{\hat{M}} \newcommand{\hR}{\hat{R}} \newcommand{\hS}{\hat{S}} \newcommand{\hGamma}{\hat{\Gamma}} \newcommand{\baE}{\bar{E}} \newcommand{\baM}{\bar{M}} \newcommand{\baGamma}{\bar{\Gamma}} \newcommand{\cF}{\check{F}} \newcommand{\cM}{\check{M}} \newcommand{\cGamma}{\check{\Gamma}} \newcommand{\sA}{\textsf{A}} \newcommand{\sD}{\textsf{D}} \newcommand{\sE}{\textsf{E}} \renewcommand\thefootnote{*\arabic{footnote}} \begin{document} \title{Polarized cylinders in Du Val del Pezzo surfaces of degree two} \author{} \address{} \curraddr{} \email{} \thanks{} \author{Masatomo Sawahara} \address{Faculty of Education, Hirosaki University, Bunkyocho 1, Hirosaki-shi, Aomori 036-8560, JAPAN} \curraddr{} \email{[email protected]} \thanks{} \subjclass[2020]{14C20, 14E05, 14J17, 14J26, 14J45, 14R25. 
} \keywords{cylinder, rational surface, $\mathbb{P}^1$-fibration, Du Val singularity. } \date{} \dedicatory{} \begin{abstract} Let $S$ be a del Pezzo surface of degree $2$ with at worst Du Val singularities such that $S$ admits a $(-K_S)$-polar cylinder. In this article, we construct an $H$-polar cylinder for any ample $\mathbb{Q}$-divisor $H$ on $S$. \end{abstract} \maketitle \setcounter{tocdepth}{1} Throughout this article, all considered varieties are assumed to be algebraic and defined over an algebraically closed field $\bk$ of characteristic $0$. \section{Introduction}\label{1} Let $X$ be a normal projective variety. An open subset $U$ in $X$ is called a {\it cylinder} if $U$ is isomorphic to $\bA ^1_{\bk} \times Z$ for some variety $Z$. Moreover, letting $H$ be an ample $\bQ$-divisor on $X$, we say that a cylinder $U$ in $X$ is an {\it $H$-polar cylinder} if there exists an effective $\bQ$-divisor $D$ on $X$ such that $D \sim _{\bQ} H$ and $U = X \backslash \Supp (D)$. The motivation for studying polarized cylinders comes from the study of $\bG _a$-actions on the corresponding affine cones, which are special kinds of affine varieties. \begin{thm}[{\cite{KPZ11,KPZ14}}] Let $X$ be a normal projective variety, and let $H$ be an ample $\bQ$-divisor on $X$. Then the following affine variety: \begin{align*} \Spec \left( \bigoplus _{n \ge 0} H^0(X,\sO _X(nH)) \right) \end{align*} admits a non-trivial $\bG _a$-action if and only if $X$ contains an $H$-polar cylinder. \end{thm} Moreover, the geometry of polarized cylinders in projective varieties can be applied to determining the flexibility of affine cones and the existence of $\bG _a$-actions on the complements of hypersurfaces (see {\cite{Pre13,PW16,CDP18,Par22}}). Meanwhile, the connection between anti-canonical polarized cylinders in Fano varieties and their $\alpha$-invariants is also known (see {\cite[Theorem 1.26]{CPPZ21}}). In this article, we study polarized cylinders in log del Pezzo surfaces. Here, a log del Pezzo surface means a del Pezzo surface (see \S \S \ref{2-2} for the definition) with at worst klt singularities. We introduce the following notation: \begin{defin} Let $S$ be a normal projective rational surface. Then the following set is called the {\it cylindrical ample set} of $S$: \begin{align*} \Ampc (S) := \{ H \in \Amp (S) \,|\, \text{there is an $H$-polar cylinder on $S$}\}, \end{align*} where $\Amp (S)$ is the ample cone of $S$. \end{defin} Cheltsov, Park and Won proposed the following conjecture on polarized cylinders in log del Pezzo surfaces: \begin{conj}[{\cite{CPW17}}]\label{conj} Let $S$ be a log del Pezzo surface. Then $-K_S \in \Ampc (S)$ if and only if $\Ampc (S) = \Amp (S)$. \end{conj} Note that Conjecture \ref{conj} is true for smooth del Pezzo surfaces. Indeed, it follows from the following two theorems: \begin{thm}[{\cite{KPZ11,KPZ14,CPW16a}}] Let $S$ be a smooth del Pezzo surface of degree $d \in \{ 1,\dots ,9\}$ (see \S \S \ref{2-2} for the definition). Then $-K_S \in \Ampc (S)$ if and only if $d \ge 4$. \end{thm} \begin{thm}[{\cite{CPW17}}] Let $S$ be a smooth del Pezzo surface of degree $\ge 4$. Then $\Ampc (S) = \Amp (S)$. \end{thm} \begin{rem} {\cite{CPW17}} further studies the structure of cylindrical ample sets of smooth del Pezzo surfaces of degree $\le 3$. Moreover, {\cite{KP21,KW25}} study polarized cylinders in smooth del Pezzo surfaces of degree $2$. See also {\cite{MW18,Che21}} for polarized cylinders in smooth rational surfaces. 
\end{rem} From now on, we consider polarized cylinders in del Pezzo surfaces with at worst Du Val singularities, which are so-called Du Val del Pezzo surfaces. Notice that every Du Val del Pezzo surface is a log del Pezzo surface because all Du Val singularities are klt singularities. The condition for a Du Val del Pezzo surface to contain anti-canonical polar cylinders is completely determined. More precisely, the following theorem holds: \begin{thm}[{\cite{CPW16b}}]\label{CPW} Let $S$ be a Du Val del Pezzo surface of degree $d \in \{ 1,\dots ,9\}$. Then: \begin{enumerate} \item If $d \ge 4$, then $-K_S \in \Ampc (S)$. \item If $d=3$, then $-K_S \in \Ampc (S)$ if and only if $S$ has a singular point. \item If $d=2$, then $-K_S \in \Ampc (S)$ if and only if $S$ has a singular point, which is not of type $\sA _1$. \item If $d=1$, then $-K_S \in \Ampc (S)$ if and only if $S$ has a singular point, which is not of type $\sA _1$, $\sA _2$, $\sA _3$ nor $\sD _4$. \end{enumerate} \end{thm} \begin{rem} For a Du Val del Pezzo surface $S$, Belousov shows that $\Ampc (S) \not= \emptyset$ if and only if ${\rm Dyn}(S) \not= 4\sA _2$, $2\sA _3 + 2\sA _1$ or $2\sD _4$ (see {\cite{Bel17,Bel23}}). \end{rem} \begin{rem} There are infinitely many log del Pezzo surfaces without anti-canonical polar cylinders (see {\cite{KKW}}). However, the condition for whether a (general) log del Pezzo surface contains anti-canonical polar cylinders is still unknown. \end{rem} In {\cite{Saw1}}, the author studies polarized cylinders in Du Val del Pezzo surfaces of degree $\ge 3$. This article will develop this work; that is, we will study polarized cylinders in Du Val del Pezzo surfaces of degree $2$ with anti-canonical polar cylinders. As a result, we obtain the following theorem, which is our main result: \begin{thm}\label{main(1)} Let $S$ be a Du Val del Pezzo surface of degree $\ge 2$. Then $-K_S \in \Ampc (S)$ if and only if $\Ampc (S) = \Amp (S)$. \end{thm} Theorem \ref{main(1)} provides a partially positive answer to Conjecture \ref{conj} as follows: \begin{cor} Conjecture \ref{conj} is true for Du Val del Pezzo surfaces with degree $\ge 2$. \end{cor} \begin{organization} In Section \ref{2}, we prepare results on cylinders in Hirzebruch surfaces and singular fibers of $\bP ^1$-fibrations. Moreover, we review the relationship between Du Val del Pezzo surfaces and weak del Pezzo surfaces and then discuss the singularity types of Du Val del Pezzo surfaces. In Section \ref{3}, we find a special kind of $(-1)$-curves on Du Val del Pezzo surfaces of degree $2$ with a singular point of type $\sA _n$ $(n \ge 3)$ referring to {\cite[\S 4]{Saw24}}. Using this result, we can obtain $\bP ^1$-fibrations from these surfaces, which can construct polarized cylinders. In Section \ref{4}, we present a method to construct $H$-polar cylinders for every ample $\bQ$-divisor $H$ on normal rational surfaces with specific $\bP ^1$-fibration structures. Our argument is similar to {\cite[\S 3]{Saw1}} but requires a more intricate discussion. In the last Section \ref{5}, we prove Theorem \ref{main(1)}, which is our main result. In more detail, using results in Section \ref{3}, we will show that every Du Val del Pezzo surface of degree $2$ with anti-canonical polar cylinders has a specific $\bP ^1$-fibration in the above sense. Namely, Theorem \ref{main(1)} is shown by this argument combined with the result of Section \ref{4}. 
\end{organization}
\begin{conventions}
For an integer $m$, an $m$-curve on a smooth projective surface is a smooth projective rational curve with self-intersection number $m$. For any weighted dual graph, a vertex $\circ$ with the number $m$ corresponds to an $m$-curve. Exceptionally, we omit this weight (resp. we omit this weight and use the vertex $\bullet$ instead of $\circ$) if $m=-2$ (resp. $m=-1$).
\end{conventions}
\begin{notation}
We employ the following notations:
\begin{itemize}
\item $\bA ^d_{\bk}$: the affine space of dimension $d$.
\item $\bP ^d_{\bk}$: the projective space of dimension $d$.
\item $\bF _n$: the Hirzebruch surface of degree $n$; i.e., $\bF _n := \bP (\sO _{\bP ^1_{\bk}} \oplus \sO _{\bP ^1_{\bk}}(n))$.
\item $\bA ^1_{\ast ,\bk}$: the affine line with one closed point removed, i.e., $\bA ^1_{\ast ,\bk} = \Spec (\bk [t^{\pm 1}])$.
\item $\Cl (X)$: the divisor class group of $X$.
\item $\Cl (X)_{\bQ} := \Cl (X) \otimes _{\bZ} \bQ$.
\item $\rho (X)$: the Picard number of $X$.
\item $K_X$: the canonical divisor on $X$.
\item $\Sing (X)$: the set of singular points on $X$.
\item $D_1\sim D_2$: $D_1$ and $D_2$ are linearly equivalent.
\item $D_1\sim _{\bQ} D_2$: $D_1$ and $D_2$ are $\bQ$-linearly equivalent.
\item $(D_1 \cdot D_2)$: the intersection number of $D_1$ and $D_2$.
\item $(D)^2$: the self-intersection number of $D$.
\item $\varphi ^{\ast}(D)$: the total transform of $D$ by a morphism $\varphi$.
\item $\psi _{\ast}(D)$: the direct image of $D$ by a morphism $\psi$.
\item $\Supp (D)$: the support of $D$.
\item $|D|$: the complete linear system of $D$.
\item $\sharp D$: the number of all irreducible components in $\Supp (D)$.
\item $\delta _{i,j}$: the Kronecker delta.
\end{itemize}
\end{notation}
\begin{acknowledgment}
The author would like to thank Doctor Jaehyun Kim and Doctor Dae-Won Lee for their valuable discussions and comments at Ewha Womans University. Moreover, he is also grateful to Professor Joonyeong Won for providing an excellent place for these discussions. The author is supported by JSPS KAKENHI Grant Number JP24K22823.
\end{acknowledgment}
\section{Preliminaries}\label{2}
\subsection{Basic results}
In this subsection, we summarize the following results, which are basic but important.
\begin{lem}\label{lem(2-1-1)}
Let $\hM$ and $\hF$ be the minimal section and a general fiber on the Hirzebruch surface $\bF _n$ of degree $n$, respectively. Then the following assertions hold:
\begin{enumerate}
\item Let $\hF_1,\dots ,\hF _r$ be distinct fibers on $\bF _n$ other than $\hF$. Then $\bF _n \backslash (\hM \cup \hF \cup \hF _1 \cup \dots \cup \hF _r) \simeq \bA ^1_{\bk} \times (\bA ^1_{\bk} \backslash \{ r\text{ points}\})$.
\item Let $\hGamma$ be a smooth rational curve with $\hGamma \sim \hM + n\hF$. Then $\bF _n \backslash (\hGamma \cup \hM \cup \hF ) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast ,\bk}$.
\item Let $\hGamma$ be a smooth rational curve with $\hGamma \sim \hM + (n+1)\hF$. Then $\bF _n \backslash (\hGamma \cup \hM \cup \hF _0 ) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast ,\bk}$, where $\hF _0$ is the fiber satisfying $\hM \cap \hGamma \cap \hF _0 \not= \emptyset$.
\end{enumerate}
\end{lem}
\begin{proof}
See {\cite[Lemma 2.3]{Saw1}} (see also {\cite{Koj02}}).
\end{proof}
\begin{lem}\label{lem(2-1-2)}
Let $\wS$ be a smooth projective surface with a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$. Assume that $g$ admits a section $\wD _0$ and a singular fiber $\wF$, which consists only of $(-1)$-curves and $(-2)$-curves.
Then the weighted dual graph of $\wF + \wD _0$ is one of the following:
\begin{align}
\label{I-1} &\xygraph{\circ ([]!{+(0,.25)} {^{\wD_0}}) -[rr] \bullet -[r] \circ -[r] \cdots ([]!{+(0,-.35)} {\underbrace{\ \quad \qquad \qquad \qquad}_{\ge 0}}) -[r] \circ -[r] \bullet} \tag{I-1}\\
\label{I-2} &\xygraph{\circ ([]!{+(0,.25)} {^{\wD_0}}) -[rr] \circ (- []!{+(1,0.5)} \circ -[r] \cdots ([]!{+(0,-.35)} {\underbrace{\ \quad \qquad \qquad \qquad}_{\ge 0}}) -[r] \circ -[r] \bullet, - []!{+(1,-.5)} \circ -[r] \cdots ([]!{+(0,-.35)} {\underbrace{\ \quad \qquad \qquad \qquad}_{\ge 0}}) -[r] \circ -[r] \bullet)} \tag{I-2}\\
\label{II} & \xygraph{\bullet -[l] \circ -[l] \cdots ([]!{+(0,-.4)} {\underbrace{\ \quad \qquad \qquad \qquad}_{\ge 0}}) -[l] \circ (-[]!{+(-1,-0.5)} \circ , -[]!{+(-1,0.5)} \circ -[ll] \circ ([]!{+(0,.25)} {^{\wD_0}}))} \tag{II}
\end{align}
Here, each vertex with label $\wD _0$ in the above graphs corresponds to the curve $\wD _0$.
\end{lem}
\begin{proof}
See {\cite[Lemma 1.5]{Zha88}}.
\end{proof}
\subsection{Du Val del Pezzo surfaces and Weak del Pezzo surfaces}\label{2-2}
A {\it del Pezzo surface} is a normal projective surface whose anti-canonical divisor is ample. Moreover, we say that a {\it Du Val del Pezzo surface} is a del Pezzo surface with at worst Du Val singularities. See, e.g., {\cite{Dur79}} for details of Du Val singularities. For a Du Val del Pezzo surface $S$, ${\rm Dyn} (S)$ denotes the Dynkin type of its singularities; e.g., ${\rm Dyn}(S) = \sA _2+2\sA _1$ implies that $\Sing (S)$ consists of a Du Val singular point of type $\sA _2$ and two Du Val singular points of type $\sA _1$.
On the other hand, a {\it weak del Pezzo surface} is a smooth projective surface whose anti-canonical divisor is nef and big. Notice that the minimal resolution of every Du Val del Pezzo surface is a weak del Pezzo surface.
We summarize some properties of weak del Pezzo surfaces. Let $\wS$ be a weak del Pezzo surface. We note $(-K_{\wS})^2 > 0$ and $(-K_{\wS} \cdot \wC) \ge 0$ for every curve $\wC$ on $\wS$ because $-K_{\wS}$ is nef and big.
\begin{lem}\label{lem(2-2-1)}
$\wS \simeq \bP ^1_{\bk} \times \bP ^1_{\bk}$, $\wS \simeq \bF _2$, or there exists a birational morphism $h:\wS \to \bP ^2_{\bk}$. In particular, $\wS$ is rational.
\end{lem}
\begin{proof}
See {\cite[Theorem 8.1.15]{Dol12}}.
\end{proof}
By Lemma \ref{lem(2-2-1)} combined with $(-K_{\wS})^2 > 0$, we know that $(-K_{\wS})^2$ is an integer between $1$ and $9$. The number $(-K_{\wS})^2$ is called the {\it degree} of $\wS$. Moreover, the {\it degree} of a Du Val del Pezzo surface is the degree of its minimal resolution.
An irreducible curve $\wC$ on $\wS$ is a {\it negative curve} if $(\wC ) ^2 < 0$. Then the following lemma on negative curves holds:
\begin{lem}\label{lem(2-2-2)}
The following assertions hold:
\begin{enumerate}
\item Every negative curve of $\wS$ is either a $(-1)$-curve or a $(-2)$-curve.
\item The number of all $(-2)$-curves on $\wS$ is at most $9-(-K_{\wS})^2$.
\item An irreducible curve $\wC$ on $\wS$ is a $(-2)$-curve if and only if $(\wC \cdot -K_{\wS}) = 0$.
\end{enumerate}
\end{lem}
\begin{proof}
In (1) and (2), see {\cite[Lemma 8.1.13]{Dol12}} and {\cite[Proposition 8.2.25]{Dol12}}, respectively. In (3), let $\wC$ be an irreducible curve on $\wS$. If $\wC$ is a $(-2)$-curve, then $(\wC \cdot -K_{\wS}) = 0$ by the adjunction formula. Conversely, assume that $(\wC \cdot -K_{\wS}) = 0$.
Since $(-K_{\wS})^2 > 0$, we know that $\wC$ is a negative curve by the Hodge index theorem. Hence, $\wC$ is a $(-2)$-curve by virtue of (1) combined with $(\wC \cdot -K_{\wS}) = 0$.
\end{proof}
Now, $d$ denotes the degree of $\wS$. By Lemma \ref{lem(2-2-2)} (2), there are at most finitely many $(-2)$-curves on $\wS$. Let $\wD$ be the reduced divisor consisting of all $(-2)$-curves on $\wS$. It is known that the dual graph of $\wD$ corresponds to a subsystem of the root systems of types $\sE _8$, $\sE _7$, $\sE _6$, $\sD _5$, $\sA _4$, $\sA _2 + \sA _1$ and $\sA _1$ for $d=1,\dots ,7$, respectively, with the exceptions that the subsystems $8\sA _1$, $7\sA _1$ and $\sD _4+4\sA _1$ do not occur for $d=1$ and $7\sA _1$ does not occur for $d=2$ (see {\cite{CPW16b,CT88,BW79,Ura81}}). In particular, the intersection matrix of $\wD$ is negative definite. Hence, we obtain the contraction $f:\wS \to S$ of $\wD$. Then we can easily see that $-K_S$ is ample. Namely, $S$ is a Du Val del Pezzo surface. Therefore, Du Val del Pezzo surfaces are in one-to-one correspondence with weak del Pezzo surfaces via minimal resolutions.
\subsection{Singularity types of Du Val del Pezzo surfaces}\label{2-3}
Note that Du Val del Pezzo surfaces are classified (see, e.g., {\cite{CT88,BW79,Ura81}}). In this subsection, we recall a classification of Du Val del Pezzo surfaces. To consider types of Du Val del Pezzo surfaces, we prepare the following definition:
\begin{defin}[{\cite[Definition 2.8]{CPW16b}}]
Let $S_1$ and $S_2$ be two Du Val del Pezzo surfaces, and let $f_1:\wS _1 \to S_1$ and $f_2:\wS _2 \to S_2$ be the minimal resolutions. Then we say that $S_1$ and $S_2$ have the {\it same singularity type} if there exists an isomorphism $\Pic (\wS _1) \simeq \Pic (\wS _2)$ preserving the intersection form that gives a bijection between their sets of classes of negative curves.
\end{defin}
Note that any negative curve on every weak del Pezzo surface is a $(-1)$-curve or a $(-2)$-curve by Lemma \ref{lem(2-2-2)} (1). Hence, for two Du Val del Pezzo surfaces $S_1$ and $S_2$ with the same singularity type, we know $(-K_{S_1})^2 = (-K_{S_2})^2$ and ${\rm Dyn}(S_1) = {\rm Dyn}(S_2)$. However, the converse does not hold. In other words, there exist two Du Val del Pezzo surfaces $S_1$ and $S_2$ such that $(-K_{S_1})^2 = (-K_{S_2})^2$ and ${\rm Dyn}(S_1) = {\rm Dyn}(S_2)$ but they do not have the same singularity type. More precisely, the pair consisting of the degree and the Dynkin type of a Du Val del Pezzo surface is one of the following if and only if there exist two Du Val del Pezzo surfaces with that same pair of degree and Dynkin type which do not have the same singularity type:
\begin{align*}
&(6,\sA _1),\\
&(4,\sA _3),\ (4,2\sA _1),\\
&(2,\sA _5+\sA _1),\ (2,\sA _5),\ (2,\sA _3+2\sA _1),\ (2,\sA _3+\sA _1),\ (2,4\sA _1),\ (2,3\sA _1),\\
&(1,\sA _7),\ (1,\sA _5+\sA _1),\ (1,2\sA _3),\ (1,\sA _3+2\sA _1),\ (1,4\sA _1).
\end{align*}
Let $S$ be a Du Val del Pezzo surface of degree $2$ with ${\rm Dyn}(S) = \sA _5+\sA _1$, $\sA _5$, $\sA _3+2\sA _1$ or $\sA _3+\sA _1$, and let $f:\wS \to S$ be the minimal resolution. Notice that the singularity type of $S$ is not uniquely determined. Thus, we shall introduce a notation to distinguish the singularity types of $S$.
Assume that ${\rm Dyn}(S) = \sA _5+\sA _1$ or $\sA _5$. Let $\wD$ be the reduced divisor on $\wS$ that is contracted to the singular point of type $\sA _5$, and let $\wD _0$ be the central component of $\wD$; i.e., $\wD _0$ is the irreducible component of $\wD$ such that $\wD - \wD _0$ can be contracted to two singular points of type $\sA _2$.
Then we say that $S$ is of type $(\sA _5+\sA _1)'$ (resp. $(\sA _5)'$) if ${\rm Dyn}(S) = \sA _5+\sA _1$ (resp. $\sA _5$) and there exists a $(-1)$-curve on $\wS$ meeting $\wD _0$. On the other hand, we say that $S$ is of type $(\sA _5+\sA _1)''$ (resp. $(\sA _5)''$) if ${\rm Dyn}(S) = \sA _5+\sA _1$ (resp. $\sA _5$) and there is no such $(-1)$-curve on $\wS$.
In what follows, assume that ${\rm Dyn}(S) = \sA _3+2\sA _1$ or $\sA _3+\sA _1$. Let $\wD$ be the reduced divisor on $\wS$ that is contracted to the singular point of type $\sA _3$, and let $\wD _0$ be the central component of $\wD$; i.e., $\wD _0$ is the irreducible component of $\wD$ such that $\wD - \wD _0$ can be contracted to two singular points of type $\sA _1$. Then we say that $S$ is of type $(\sA _3+2\sA _1)'$ (resp. $(\sA _3+\sA _1)'$) if ${\rm Dyn}(S) = \sA _3+2\sA _1$ (resp. $\sA _3+\sA _1$) and there exists a $(-1)$-curve on $\wS$ meeting $\wD _0$ and a $(-2)$-curve not contained in $\Supp (\wD)$. On the other hand, we say that $S$ is of type $(\sA _3+2\sA _1)''$ (resp. $(\sA _3+\sA _1)''$) if ${\rm Dyn}(S) = \sA _3+2\sA _1$ (resp. $\sA _3+\sA _1$) and there is no such $(-1)$-curve on $\wS$.
\begin{rem}
We can consider that the number of primes in the above notation corresponds to the number of special $(-1)$-curves on $\wS$. Indeed, if $S$ is of type $(\sA _5+\sA _1)'$, $(\sA _5)'$, $(\sA _3+2\sA _1)'$ or $(\sA _3+\sA _1)'$, then there exists a $(-1)$-curve on $\wS$ meeting the central component of $\wD$. On the other hand, if $S$ is of type $(\sA _5+\sA _1)''$ or $(\sA _5)''$, then there exist two $(-1)$-curves on $\wS$ which respectively meet the two distinct $(-2)$-curves of $\wD$ that meet the central component of $\wD$. Moreover, if $S$ is of type $(\sA _3+2\sA _1)''$ or $(\sA _3+\sA _1)''$, then there exist two $(-1)$-curves on $\wS$ meeting the central component of $\wD$. These results will be proved in Section \ref{3}.
\end{rem}
\section{$(-1)$-curves on Du Val del Pezzo surfaces of degree two}\label{3}
Let $S$ be a Du Val del Pezzo surface of degree $2$. Assume that $S$ has a singular point $P$ of type $\sA _n$ with $n \ge 3$. Let $f:\wS \to S$ be the minimal resolution, let $\wD$ be the reduced exceptional divisor of $f$, let $\wD = \sum _{i=1}^m \wD ^{(i)}$ be the decomposition of $\wD$ into connected components, and let $\wD ^{(i)} = \sum _{j=1}^{n(i)}\wD _j^{(i)}$ be the decomposition of $\wD ^{(i)}$ into irreducible components for $i=1,\dots ,m$. Without loss of generality, we may assume $f(\Supp (\wD ^{(1)})) = \{ P\}$. Namely, $n(1) = n$. For simplicity, we put $\wD _j := \wD _j^{(1)}$ for $j=1,\dots ,n$.
\begin{prop}\label{prop(3-0)}
With the same notations as above, one of the following assertions holds:
\begin{enumerate}
\item[(A)] There exist two $(-1)$-curves $\wE _1$ and $\wE _2$ on $\wS$ such that the weighted dual graph of $\wE _1 + \wE _2 + \wD ^{(1)}$ looks like that in Figure \ref{fig(3-1)} (A).
\item[(B)] $n=5$ and there exists a $(-1)$-curve $\wE$ on $\wS$ such that the weighted dual graph of $\wE + \wD ^{(1)}$ looks like that in Figure \ref{fig(3-1)} (B).
\item[(C)] $n=3$ and there exist an irreducible component $\wD _4$ of $\wD - \wD ^{(1)}$ and a $(-1)$-curve $\wE$ on $\wS$ such that $(\wD _4 \cdot \wD - \wD _4) = 0$ and the weighted dual graph of $\wE + \wD ^{(1)} + \wD _4$ looks like that in Figure \ref{fig(3-1)} (C).
\end{enumerate}
\end{prop}
\begin{figure}
\begin{center}\scalebox{0.8}{\begin{tikzpicture}
\node at (0,2) {\large (A)};
\node at (3,1) {\xygraph{\circ -[r] \circ (-[d] \bullet, -[r] \cdots -[r] \circ (-[d] \bullet ,-[r] \circ )) }};
\node at (7,2) {\large (B)};
\node at (10,1) {\xygraph{\circ -[r] \circ -[r] \circ (-[d] \bullet ([]!{+(.3,0)} {\wE}) ,-[r] \circ -[r] \circ ) }};
\node at (14,2) {\large (C)};
\node at (15.75,0.4) {\xygraph{ \circ -[r] \circ (-[d] \bullet ([]!{+(.3,0)} {\wE}) -[d] \circ ([]!{+(.3,0)} {\wD _4}) ,-[r] \circ ) }};
\end{tikzpicture}}\end{center}
\caption{The weighted dual graphs in Proposition \ref{prop(3-0)}}\label{fig(3-1)}
\end{figure}
The result of Proposition \ref{prop(3-0)} seems to be well known to those who study Du Val del Pezzo surfaces of low degree, but we could not find a proof in the literature. Hence, the purpose of this section is to prove this proposition for the readers' convenience. Here, our argument is based on {\cite[\S 4]{Saw24}}.
Note that every singular point on $S$ is cyclic by the classification of Du Val del Pezzo surfaces of degree $2$ (see, e.g., {\cite{Ura81}} or {\cite[\S 8.7]{Dol12}}) because $S$ has the singular point of type $\sA _n$ with $n \ge 3$. Hence, for every $i=1,\dots ,m$ we may assume that the dual graph of $\wD ^{(i)}$ is the following:
\begin{align*}
\xygraph{\circ ([]!{+(0,.25)} {^{\wD_1^{(i)}}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_2^{(i)}}}) -[r] \cdots -[r] \circ ([]!{+(0,.25)} {^{\wD_{n(i)}^{(i)}}}) }.
\end{align*}
We consider the following two basic lemmas:
\begin{lem}[{\cite[Lemma 4.5]{Saw24}}]\label{lem(3-1)}
Let $a_1,\dots ,a_n$ be positive integers satisfying $a_j \ge 2$ for every $j=2,\dots ,n-1$, and set the effective $\bZ$-divisor $\wA := \sum _{j=1}^n a_j \wD _j$ on $\wS$. Then $(\wA ) ^2 \le -4$. Moreover, $(\wA ) ^2 = -4$ if and only if $a_1=a_n=1$ and $a_2 = \dots = a_{n-1} = 2$.
\end{lem}
\begin{proof}
We have:
\begin{align*}
(\wA ) ^2 = -(a_1^2+a_n^2) - \sum _{j=1}^{n-1}(a_j-a_{j+1})^2.
\end{align*}
Noticing $a_1,\dots ,a_n$ are positive integers, we can easily show this lemma.
\end{proof}
\begin{lem}[{\cite[Lemma 4.1]{Saw24}}]\label{lem(3-2)}
Fix an integer $i$ with $1 \le i \le m$ and an integer $\ell$ with $1 \le \ell \le n(i)$. If an effective $\bQ$-divisor $\wB _{\ell}^{(i)} := \sum _{j=1}^{n(i)}b_{i,j}\wD _j^{(i)}$ on $\wS$ satisfies $(-\wB _{\ell}^{(i)} \cdot \wD _j^{(i)}) = \delta _{j,\ell}$ for every $j=1,\dots ,n(i)$, then we have:
\begin{align*}
\wB _{\ell}^{(i)} = \frac{n(i)-\ell +1}{n(i)+1} \sum _{j=1}^{\ell}j\wD _j^{(i)} + \frac{\ell}{n(i)+1} \sum _{j=1}^{n(i)-\ell}j\wD _{n(i)-j+1}^{(i)}
\end{align*}
and:
\begin{align*}
(\wB _{\ell}^{(i)})^2 = -\frac{(n(i)-\ell +1)\ell}{n(i)+1}.
\end{align*}
\end{lem}
\begin{proof}
We can easily show this lemma, since it is enough to compute some intersection numbers directly.
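For instance, as an illustration of this computation, consider the case $n(i) = 3$ and $\ell = 2$: the stated formula gives
\begin{align*}
\wB _2^{(i)} = \frac{2}{4}\left( \wD _1^{(i)} + 2\wD _2^{(i)} \right) + \frac{2}{4}\wD _3^{(i)} = \frac{1}{2}\wD _1^{(i)} + \wD _2^{(i)} + \frac{1}{2}\wD _3^{(i)},
\end{align*}
and, since the $\wD _j^{(i)}$ form a chain of $(-2)$-curves, one directly checks $(-\wB _2^{(i)} \cdot \wD _1^{(i)}) = 1 - 1 = 0$, $(-\wB _2^{(i)} \cdot \wD _2^{(i)}) = -\frac{1}{2} + 2 - \frac{1}{2} = 1$, $(-\wB _2^{(i)} \cdot \wD _3^{(i)}) = -1 + 1 = 0$ and $(\wB _2^{(i)})^2 = -\frac{1}{2} - 2 - \frac{1}{2} + 2 = -1 = -\frac{(3-2+1)\cdot 2}{3+1}$, as claimed.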
\end{proof}
In Lemma \ref{lem(3-2)}, if $(-\wB _{\ell}^{(i)}\cdot \wD _j^{(i)}) = \delta _{j,\ell}$ for every $j=1,\dots ,n(i)$, then the value of $(\wB _{\ell}^{(i)})^2$ is explicitly summarized in Table \ref{A-list} depending on the values of $n(i)$ and $\ell$:
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
$n(i) \backslash \ell$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\ \hline \hline
$1$ & $-\frac{1}{2}$ & & & & & & \\ \hline
$2$ & $-\frac{2}{3}$ & $-\frac{2}{3}$ & & & & & \\ \hline
$3$ & $-\frac{3}{4}$ & $-1$ & $-\frac{3}{4}$ & & & & \\ \hline
$4$ & $-\frac{4}{5}$ & $-\frac{6}{5}$ & $-\frac{6}{5}$ & $-\frac{4}{5}$ & & & \\ \hline
$5$ & $-\frac{5}{6}$ & $-\frac{4}{3}$ & $-\frac{3}{2}$ & $-\frac{4}{3}$ & $-\frac{5}{6}$ & & \\ \hline
$6$ & $-\frac{6}{7}$ & $-\frac{10}{7}$ & $-\frac{12}{7}$ & $-\frac{12}{7}$ & $-\frac{10}{7}$ & $-\frac{6}{7}$ & \\ \hline
$7$ & $-\frac{7}{8}$ & $-\frac{3}{2}$ & $-\frac{15}{8}$ & $-2$ & $-\frac{15}{8}$ & $-\frac{3}{2}$ & $-\frac{7}{8}$ \\ \hline
\end{tabular}
\end{center}
\caption{The value of $(\wB _{\ell}^{(i)})^2$ in Lemma \ref{lem(3-2)}}\label{A-list}
\end{table}
From now on, we set the divisor $\wDelta := -K_{\wS} - \wD _1 -2(\wD _2 + \dots + \wD _{n-1}) - \wD _n$ on $\wS$.
\begin{lem}[{\cite[Lemma 4.6]{Saw24}}]\label{lem(3-3)}
With the same notations as above, we have $|\wDelta | \not= \emptyset$.
\end{lem}
\begin{proof}
Note that $\wS$ is rational by Lemma \ref{lem(2-2-1)}. Hence, by the Riemann-Roch theorem we have:
\begin{align*}
\dim |\wDelta| \ge \frac{1}{2}(\wDelta \cdot (\wDelta - K_{\wS})) = 0.
\end{align*}
This implies that $|\wDelta | \not= \emptyset$.
\end{proof}
By Lemma \ref{lem(3-3)}, there exist two effective $\bZ$-divisors $\wDelta _+$ and $\wDelta _0$ on $\wS$ such that $\wDelta \sim \wDelta _+ + \wDelta _0$ and every irreducible component $\wC_+$ of $\wDelta _+$ (resp. every irreducible component $\wC_0$ of $\wDelta _0$) satisfies $(\wC _+ \cdot -K_{\wS})>0$ (resp. $(\wC _0 \cdot -K_{\wS})=0$). Note that all irreducible components of $\wDelta _0$ are $(-2)$-curves on $\wS$ by Lemma \ref{lem(2-2-2)} (3). Hence, we can write:
\begin{align*}
\wDelta - \wDelta _0 = -K_{\wS} - \sum _{i=1}^m \wA ^{(i)},
\end{align*}
where $\wA ^{(i)}$ is an effective $\bZ$-divisor, which is a $\bZ$-linear combination of $\wD _1^{(i)},\dots ,\wD _{n(i)}^{(i)}$, on $\wS$. Note that:
\begin{align}\label{(3-1)}
\wA ^{(1)} = (c_1+1)\wD _1 + (c_2+2)\wD _2 + \dots + (c_{n-1}+2)\wD _{n-1} + (c_n+1)\wD _n
\end{align}
for some non-negative integers $c_1,\dots ,c_n$.
\begin{lem}[{\cite[Lemma 4.7]{Saw24}}]\label{lem(3-4)}
With the same notations as above, we have $(\wDelta _+)^2 \le -2$.
\end{lem}
\begin{proof}
We can write:
\begin{align*}
\wDelta _+ \sim \wDelta - \wDelta _0 = -K_{\wS} - \sum _{i=1}^m \wA ^{(i)}.
\end{align*}
By (\ref{(3-1)}) and Lemma \ref{lem(3-1)}, we have $(\wA ^{(1)})^2 \le -4$. Hence, we have $(\wDelta _+ )^2 \le 2 - 4 + 0 = -2$ because the intersection matrix of $\wD - \wD ^{(1)}$ is negative definite.
\end{proof}
Note that any irreducible component of $\wDelta _+$ has self-intersection number $\ge -1$ because it is not a $(-2)$-curve on $\wS$. Hence, $\wDelta _+$ is reducible by Lemma \ref{lem(3-4)}. Moreover, since $(\wDelta _+ \cdot -K_{\wS}) = (\wDelta \cdot -K_{\wS}) = 2$, there exist two irreducible curves $\wE _1$ and $\wE _2$ on $\wS$ such that $\wDelta _+ = \wE _1+\wE _2$. By Lemmas \ref{lem(2-2-2)} (1) and (3) and \ref{lem(3-4)}, $\wE _1$ and $\wE _2$ are $(-1)$-curves on $\wS$.
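Here, for the reader's convenience, we spell out the equality $(\wDelta \cdot -K_{\wS}) = 2$ used above: since $\wS$ has degree $2$ and each $\wD _j$ is a $(-2)$-curve (see Lemma \ref{lem(2-2-2)} (3)), we have
\begin{align*}
(\wDelta \cdot -K_{\wS}) = (-K_{\wS})^2 - ((\wD _1 + 2(\wD _2 + \dots + \wD _{n-1}) + \wD _n) \cdot -K_{\wS}) = 2 - 0 = 2,
\end{align*}
and $(\wDelta _+ \cdot -K_{\wS}) = (\wDelta \cdot -K_{\wS}) - (\wDelta _0 \cdot -K_{\wS}) = 2$ because every irreducible component of $\wDelta _0$ is a $(-2)$-curve.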
Now, we shall deal with the result based on {\cite[Proposition 4.9]{Saw24}}. This result consists of two propositions; that is, Propositions \ref{prop(3-1)} and \ref{prop(3-2)}. \begin{prop}\label{prop(3-1)} With the same notations as above, assume further that $\wE _1 \not= \wE _2$. Then we have the following assertions: \begin{enumerate} \item $(\wE _1 \cdot \wE _2) = 0$. \item $\wDelta _0 = 0$; in other words, $\wDelta \sim \wE _1 + \wE _2$. \item $(\wE _1 + \wE _2 \cdot \wD _j) = \delta _{2,j} + \delta _{n-1,j}$ for $j = 1,\dots ,n$. \item $(\wE _1 \cdot \wD ^{(1)}) = (\wE _2 \cdot \wD ^{(1)}) = 1$. \end{enumerate} \end{prop} \begin{proof} In (1), this assertion follows from Lemma \ref{lem(3-4)}. In (2), we can write: \begin{align*} -K_{\wS} -\wE _1 - \wE _2 \sim \sum _{i=1}^m \wA ^{(i)}. \end{align*} Note that $(-K_{\wS} -\wE _1 - \wE _2)^2 = -4$ by virtue of (1). Since the intersection matrix of $\wD - \wD ^{(1)}$ is negative definite, we have $(\wA ^{(1)})^2 \ge -4$. Hence, we know: \begin{align*} \wA ^{(1)} = \wD _1 + 2\wD _2 + \dots + 2\wD _{n-1} + \wD _n \end{align*} by (\ref{(3-1)}) and Lemma \ref{lem(3-1)}; moreover, $\wA ^{(i)} = 0$ for every $i=2,\dots ,m$. Namely, $\wDelta _0 = 0$. In (3), we know $(\wE _1 + \wE _2 \cdot \wD _j) = (\wDelta \cdot \wD _j) = \delta _{2,j} + \delta _{n-1,j}$ for $j = 1,\dots ,n$ by virtue of (2). In (4), by using (1) and (2), we have: \begin{align*} -1 = (\wE _1)^2 = (\wE _1 \cdot \wDelta - \wE _2) = (\wE _1 \cdot -K_{\wS}) - 2(\wE _1 \cdot \wD ^{(1)}) + (\wE _1 \cdot \wD _1 + \wD _n). \end{align*} Hence, we obtain $(\wE _1 \cdot \wD ^{(1)}) = 1$ by virtue of (3). Similarly, we also obtain $(\wE _2 \cdot \wD ^{(1)}) = 1$. \end{proof} By Proposition \ref{prop(3-1)} (1), (3) and (4), if $\wE _1 \not= \wE _2$, then we know that the weighted dual graph of $\wE _1 + \wE _2 + \wD ^{(1)}$ looks like that in Figure \ref{fig(3-1)} (A). From now on, we thus assume that $\wE _1 = \wE _2$. For simplicity, we put $\wE := \wE _1$. Since $2\wE \sim \wDelta - \wDelta _0$, we can write: \begin{align*} \wE \sim _{\bQ} -\frac{1}{2}K_{\wS} - \sum _{i=1}^m \frac{1}{2}\wA ^{(i)}. \end{align*} \begin{lem}[{\cite[Lemmas 4.11--4.13]{Saw24}}]\label{lem(3-5)} With the same notations and assumptions as above, we have $(\wE \cdot \wD ^{(1)}) = 1$. \end{lem} \begin{proof} Since $\wA ^{(1)} \not= 0$ (see (\ref{(3-1)})), we know: \begin{align*} (\wE \cdot \wD ^{(1)}) = -\frac{1}{2}(\wA ^{(1)} \cdot \wD ^{(1)}) > 0 \end{align*} because the intersection matrix of $\wD ^{(1)}$ is negative definite. Hence, we have $(\wE \cdot \wD ^{(1)}) \ge 1$ because $(\wE \cdot \wD ^{(1)})$ is an integer. In what follows, we shall show $(\wE \cdot \wD ^{(1)}) \le 1$. For $\ell =1,\dots ,n$, let $\wB _{\ell}$ be the $\bQ$-divisor, which is a $\bQ$-linear combination of $\wD _1,\dots ,\wD _n$, on $\wS$ such that $(-\wB _{\ell} \cdot \wD _j)=\delta _{j,\ell}$ for $j=1,\dots ,n$. Note that such a $\bQ$-divisor $\wB _{\ell}$ uniquely exists and each coefficient of $\wB _{\ell}$ is determined by Lemma \ref{lem(3-2)}. In particular, any coefficient of $\wB _{\ell}$ is at most $-\frac{1}{2}$ (see Table \ref{A-list}). Hence, we have $(\wB _{\ell} \cdot \wB _{\ell '}) \le -\frac{1}{2}$ for any $\ell , \ell '=1,\dots ,n$. On the other hand, since the intersection matrix of $\wD ^{(1)}$ is negative definite, we obtain: \begin{align*} \frac{1}{2}\wA ^{(1)} = \sum _{j=1}^n(\wE \cdot \wD _j) \wB _j. \end{align*} Suppose that $(\wE \cdot \wD ^{(1)}) \ge 2$. 
If there exists $\ell _0$ such that $(\wE \cdot \wD _{\ell _0}) \ge 2$, then we have:
\begin{align*}
-1 = (\wE )^2 \le \frac{1}{2} + (\wE \cdot \wD _{\ell _0})^2(\wB _{\ell _0})^2 \le \frac{1}{2} + 4 \cdot \left( -\frac{1}{2} \right) = -\frac{3}{2},
\end{align*}
which is absurd. Meanwhile, if there exist two distinct integers $\ell _1$ and $\ell _2$ such that $(\wE \cdot \wD _{\ell _1}) = (\wE \cdot \wD _{\ell _2})=1$, then we have:
\begin{align*}
-1 = (\wE )^2 \le \frac{1}{2} +(\wB _{\ell _1})^2 + (\wB _{\ell _2})^2 + 2(\wB _{\ell _1} \cdot \wB _{\ell _2}) \le \frac{1}{2} -\frac{1}{2} -\frac{1}{2} + 2 \cdot \left( -\frac{1}{2}\right) = -\frac{3}{2},
\end{align*}
which is absurd. Therefore, we obtain $(\wE \cdot \wD ^{(1)}) = 1$.
\end{proof}
\begin{prop}\label{prop(3-2)}
With the same notations and assumptions as above, we have $n=5$ or $n=3$. Furthermore, the following assertions hold:
\begin{itemize}
\item Assume that $n=5$. Then $(\wE \cdot \wD _j) = \delta _{3,j}$ for $j=1,\dots ,5$. Moreover, $2\wE \sim -K_{\wS} -\wD _1-2\wD_2 - 3\wD _3 - 2\wD _4 -\wD _5$.
\item Assume that $n=3$. Then $(\wE \cdot \wD _j) = \delta _{2,j}$ for $j=1,2,3$. Moreover, there exists an irreducible component $\wD _4$ of $\wD$ such that $(\wD _4 \cdot \wD - \wD _4) = 0$ and $2\wE \sim -K_{\wS} -\wD _1-2\wD_2 - \wD _3 - \wD _4$.
\end{itemize}
\end{prop}
\begin{proof}
For simplicity, we put $\wB ^{(i)} := \frac{1}{2}\wA ^{(i)}$ for $i=1,\dots ,m$. Since:
\begin{align*}
\sum _{i=1}^m \wB ^{(i)} \sim _{\bQ} -\frac{1}{2}K_{\wS} - \wE,
\end{align*}
we have:
\begin{align*}
\sum _{i=1}^m (\wB ^{(i)})^2 = -\frac{3}{2}.
\end{align*}
By Lemmas \ref{lem(3-2)} and \ref{lem(3-5)}, we know $(\wB ^{(i)})^2 \le -\frac{1}{2}$ for every $i = 1,\dots ,m$ (see also Table \ref{A-list}). Moreover, we notice $(\wB ^{(1)})^2 < -\frac{1}{2}$ because $n(1) = n \ge 3$. Hence, we may assume:
\begin{align*}
\wE \sim _{\bQ} -\frac{1}{2}K_{\wS} - \wB ^{(1)} - \wB ^{(2)}.
\end{align*}
We consider the following two cases separately.
\smallskip
\noindent {\bf Case 1:} ($\wB ^{(2)} = 0$).
In this case, we have $(\wB ^{(1)})^2 = -\frac{3}{2}$. Hence, we know $n=5$ or $n=7$ by looking at Table \ref{A-list}. We consider the following two subcases separately.
\smallskip
\noindent {\bf Subcase 1-1:} ($n=5$).
In this subcase, $(-\wB ^{(1)} \cdot \wD _j) = \delta _{3,j}$ for $j=1,\dots ,5$ by looking at Table \ref{A-list}. Hence:
\begin{align*}
\wB ^{(1)} \sim _{\bQ} \frac{1}{2}\wD _1 + \wD _2 + \frac{3}{2}\wD _3 + \wD _4 + \frac{1}{2}\wD _5
\end{align*}
by Lemma \ref{lem(3-2)}. In particular, $2\wE \sim -K_{\wS} - \wD _1 -2 \wD _2 -3 \wD _3 -2 \wD _4 - \wD _5$.
\smallskip
\noindent {\bf Subcase 1-2:} ($n=7$).
In this subcase, $(-\wB ^{(1)} \cdot \wD _j) = \delta _{2,j}$ or $\delta _{6,j}$ for $j=1,\dots ,7$ by looking at Table \ref{A-list}. By symmetry, we may assume $(-\wB ^{(1)} \cdot \wD _j) = \delta _{2,j}$ for $j=1,\dots ,7$. Hence:
\begin{align*}
\wB ^{(1)} \sim _{\bQ} \frac{3}{4}\wD _1 + \frac{3}{2} \wD _2 + \frac{5}{4}\wD _3 + \wD _4 + \frac{3}{4}\wD _5 + \frac{1}{2} \wD _6 + \frac{1}{4} \wD _7
\end{align*}
by Lemma \ref{lem(3-2)}. In particular, $2\wE \sim \wDelta - \wDelta _0 = -K_{\wS} - \frac{3}{2} \wD _1 -3 \wD _2 -\frac{5}{2} \wD _3 -2 \wD _4 -\frac{3}{2} \wD _5 - \wD _6 -\frac{1}{2} \wD _7$. However, this contradicts the fact that $\wDelta - \wDelta _0$ is a $\bZ$-divisor. Thus, this subcase does not occur.
\smallskip
\noindent {\bf Case 2:} ($\wB ^{(2)} \not= 0$).
In this case, we have $(\wB ^{(1)})^2 + (\wB ^{(2)})^2 = -\frac{3}{2}$. Hence, the pair $((\wB ^{(1)})^2, (\wB ^{(2)})^2)$ equals one of $(-1,-\frac{1}{2})$, $(-\frac{5}{6},-\frac{2}{3})$ and $(-\frac{3}{4},-\frac{3}{4})$ by looking at Table \ref{A-list}, where we note $n \ge 3$. We consider the following two subcases separately.
\smallskip
\noindent {\bf Subcase 2-1:} ($(\wB ^{(1)})^2 = -1$).
In this subcase, $n=3$, $n(2) = 1$, $(-\wB ^{(1)} \cdot \wD _j) = \delta _{2,j}$ for $j=1,2,3$, and $(-\wB ^{(2)} \cdot \wD _1^{(2)}) = 1$ by looking at Table \ref{A-list}. We put $\wD _4 := \wD _1^{(2)}$. Then $(\wD _4 \cdot \wD - \wD _4) = 0$ because $n(2)=1$. Moreover:
\begin{align*}
\wB ^{(1)} \sim _{\bQ} \frac{1}{2}\wD _1 + \wD _2 + \frac{1}{2}\wD _3, \quad \wB ^{(2)} = \frac{1}{2} \wD _4
\end{align*}
by Lemma \ref{lem(3-2)}. In particular, $2\wE \sim -K_{\wS} - \wD _1 -2 \wD _2 - \wD _3 - \wD _4$.
\smallskip
\noindent {\bf Subcase 2-2:} ($(\wB ^{(1)})^2 \not= -1$).
In this subcase, by the same argument as in Subcase 1-2, we see that $\wDelta - \wDelta _0$ would not be a $\bZ$-divisor, which is a contradiction. Thus, this subcase does not occur.
\smallskip
The proof of Proposition \ref{prop(3-2)} is thus completed.
\end{proof}
By Proposition \ref{prop(3-2)}, if $\wE _1 = \wE _2$, then $n=5$ or $n=3$. Furthermore, if $n=5$ (resp. $n=3$), then we see that the weighted dual graph of $\wE + \wD ^{(1)}$ (resp. $\wE + \wD ^{(1)} + \wD _4$) looks like that in Figure \ref{fig(3-1)} (B) (resp. (C)), where $\wE := \wE _1$. Therefore, the proof of Proposition \ref{prop(3-0)} is completed.
\section{Construction of cylinders in rational surfaces}\label{4}
\subsection{Notation on some curves}\label{4-1}
Let $S$ be a normal projective rational surface, let $f:\wS \to S$ be the minimal resolution, and let $\wD$ be the reduced exceptional divisor of $f$. Assume that there exists a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$. In this section, we will construct polarized cylinders in $S$ under several conditions on the $\bP ^1$-fibration $g$ (see \S\S \ref{4-2}--\ref{4-7}). Hence, we shall prepare some notation in this subsection.
\begin{defin}
Let the notation be the same as above. Then:
\begin{enumerate}
\item We say that $g$ satisfies the condition $(\ast)$ if the following two conditions hold:
\begin{itemize}
\item There exists exactly one irreducible component $\wD _0$ of $\wD$ which is a section of $g$. Moreover, $(-K_{\wS})^2 = 4-m_0$ holds, where $m_0 := -(\wD _0)^2 \ge 2$.
\item Any irreducible component of $\wD - \wD _0$ is contained in singular fibers of $g$.
\end{itemize}
\item We say that $g$ satisfies the condition $(\ast \ast)$ if the following two conditions hold:
\begin{itemize}
\item There exist exactly two irreducible components $\wD _0$ and $\wD _{\infty}$ of $\wD$ which are sections of $g$. Moreover, $(-K_{\wS})^2 = 6-m_0-m_{\infty}$ holds, where $m_0 := -(\wD _0)^2 \ge 2$ and $m_{\infty} := -(\wD _{\infty})^2 \ge 2$.
\item Any irreducible component of $\wD - (\wD _0+\wD _{\infty})$ is contained in singular fibers of $g$.
\end{itemize}
\end{enumerate}
\end{defin}
In \S \ref{5}, we will show that almost every Du Val del Pezzo surface of degree $2$ admits a $\bP ^1$-fibration satisfying either condition $(\ast )$ or $(\ast \ast )$ (see Lemmas \ref{lem(4-1)}, \ref{lem(4-2)}, \ref{lem(4-3)}, \ref{lem(4-4)}, \ref{lem(4-5)} and \ref{lem(4-6)}).
In what follows, assume that there exists a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ satisfying either condition $(\ast )$ or $(\ast \ast )$.
Namely, $g$ admits a section $\wD _0$, which is an irreducible component of $\wD$, and $(\wD _0)^2 = -m_0$ holds. From now on, we prepare some notation to be used in \S\S \ref{4-2}--\ref{4-7}. First, $\wF$ denotes a general fiber of $g$. In what follows, we consider the notation for singular fibers of $g$. By Lemma \ref{lem(2-1-2)}, there are three kinds of singular fibers of $g$. Let $r$, $s$ and $t$ be the numbers of singular fibers of $g$ corresponding to (\ref{I-1}), (\ref{I-2}) and (\ref{II}), respectively, and let $\wF _1,\dots ,\wF _{r+s+t}$ be all singular fibers of $g$. Here, if $r>0$ (resp. $s>0$, $t>0$), singular fibers $\{ \wF _i\} _{1 \le i \le r}$ (resp. $\{ \wF _{r+i} \} _{1 \le i \le s}$, $\{ \wF _{r+s+i} \} _{1 \le i \le t}$) correspond to (\ref{I-1}) (resp. (\ref{I-2}), (\ref{II})). Suppose $r>0$. Then, for $i=1,\dots ,r$ let $\{ \wD _{i,\lambda}\} _{0 \le \lambda \le \alpha _i}$ be all irreducible components of $\wF _i$, where we assume that the weighted dual graph of $\wD _0 + \wF _i$ is as follows: \begin{align*} \xygraph{\circ ([]!{+(0,-.3)} {^{-m_0}}) ([]!{+(0,.25)} {^{\wD_0}}) -[rr] \bullet ([]!{+(0,.25)} {^{\wD_{i,0}}}) - []!{+(1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{i,1}}}) - []!{+(1.5,0)} \cdots - []!{+(1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{i,\alpha _i-1}}}) - []!{+(1.5,0)} \bullet([]!{+(0,.25)} {^{\wD_{i,\alpha _i}}})} \end{align*} Hence, notice that $\sharp \wF _i = \alpha _i + 1$. Furthermore, we put $\wE _i' := \wD _{i,0}$ and $\wE _i := \wD _{i,\alpha _i}$. Suppose $s>0$. Then, for $i=1,\dots ,s$ let $\{ \wD _{r+i,\lambda}\} _{0 \le \lambda \le \beta _i + \beta _i'}$ be all irreducible components of $\wF _{r+i}$, where we assume that the weighted dual graph of $\wD _0 + \wF _{r+i}$ is as follows: \begin{align*} \xygraph{\circ ([]!{+(0,-.3)} {^{-m_0}}) ([]!{+(0,.25)} {^{\wD_0}}) -[rr] \circ ([]!{+(0,.25)} {^{\wD_{r+i,0}}}) (- []!{+(1.5,0.5)} \circ ([]!{+(0,.25)} {^{\wD_{r+i,1}}}) - []!{+(1.5,0)} \cdots - []!{+(1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{r+i,\beta _i-1}}}) - []!{+(1.5,0)} \bullet ([]!{+(0,.25)} {^{\wD_{r+i,\beta _i}}}), - []!{+(1.5,-.5)} \circ ([]!{+(0,.25)} {^{\wD_{r+i,\beta _i+1}}}) - []!{+(1.5,0)} \cdots - []!{+(1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{r+i,\beta _i + \beta _i' -1}}}) - []!{+(1.5,0)} \bullet ([]!{+(0,.25)} {^{\wD_{r+i,\beta _i + \beta _i'}}}))} \end{align*} and assume further that $\beta _i \ge \beta _i'$. Hence, notice that $\sharp \wF _{r+i} = \beta _i + \beta _i' + 1$. Furthermore, we put $\wE _{r+i} := \wD _{r+i,\beta _i}$ and $\wE _{r+i}' := \wD _{r+i,\beta _i + \beta _i'}$. Suppose $t>0$. Then, for $i=1,\dots ,t$ let $\{ \wD _{r+s+i,\lambda}\} _{0 \le \lambda \le \gamma _i}$ be all irreducible components of $\wF _{r+s+i}$, where we assume that the weighted dual graph of $\wD _0 + \wF _{r+s+i}$ is as follows: \begin{align*} \xygraph{\bullet ([]!{+(0,.25)} {^{\wD_{r+s+i,\gamma _i}}}) - []!{+(-1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{r+s+i,\gamma _i-1}}}) - []!{+(-1.5,0)} \cdots - []!{+(-1.5,0)} \circ ([]!{+(0,.25)} {^{\wD_{r+s+i,2}}}) (- []!{+(-1.5,-0.5)} \circ ([]!{+(0,.25)} {^{\wD_{r+s+i,1}}}),- []!{+(-1.5,0.5)} \circ ([]!{+(0,.25)} {^{\wD_{r+s+i,0}}}) -[ll] \circ ([]!{+(0,-.3)} {^{-m_0}}) ([]!{+(0,.25)} {^{\wD_0}}))} \end{align*} Hence, notice that $\sharp \wF _{r+s+i} = \gamma _i +1$. Furthermore, we put $\wE _{r+s+i} := \wD _{r+s+i,\gamma _i}$. Now, set $\alpha := \sum _{i=1}^r\alpha _i$, $\beta := \sum _{i=1}^s\beta _i$, $\beta ':= \sum _{i=1}^s\beta _i'$ and $\gamma := \sum _{i=1}^t\gamma _i$. 
Then we notice $(-K_{\wS})^2 = 8- (\alpha + \beta + \beta ' + \gamma ) = 6-m_0-m_{\infty}$, where $m_{\infty} := 2$ if $g$ satisfies $(\ast)$. Put $F := f_{\ast}(\wF)$, $D_0 := f_{\ast}(\wD _0)$, $E_i := f_{\ast}(\wE _i)$ for $i=1,\dots ,r+s+t$ and $E_j' := f_{\ast}(\wE _j')$ for $j=1,\dots ,r+s$ (if $r+s>0$). Then we know $E _i' \sim _{\bQ} F - E_i$ for $i=1,\dots ,r+s$ (if $r+s>0$) and $2E_{r+s+j} \sim _{\bQ} F$ for $j=1,\dots ,t$ (if $t>0$). \subsection{Type $\sD _n$ $(n \not = 5)$ or $\sE _n$ case}\label{4-2} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast)$ and one of the following conditions holds: \begin{enumerate} \item $\gamma \ge 5$; \item $\gamma > 0$ and $\beta ' \ge 2$; \item $\gamma =0$ and $\beta ' \ge 3$, \end{enumerate} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(4-1)} With the same notations and assumptions as above, we have $\Ampc (S) = \Amp (S)$. \end{prop} Proposition \ref{prop(4-1)} can be shown by a similar argument to {\cite[\S 4.1]{Saw1}}. For the reader's convenience, we present the proof of this proposition. Let $H \in \Amp (S)$. Since $\Amp (S)$ is contained in $\Cl (S)_{\bQ} = \bQ [F] \oplus \left( \bigoplus _{i=1}^r\bQ [E_i] \right) \oplus \left( \bigoplus _{j=1}^s\bQ [E_{r+j}] \right)$, we can write: \begin{align*} H \sim _{\bQ} aF + \sum _{i=1}^rb_iE_i + \sum _{j=1}^sc_jE_{r+j} \end{align*} for some rational numbers $a,b_1,\dots ,b_r,c_1,\dots ,c_s$. Then we note $b_i = -\alpha _i(H \cdot E_i)<0$ for $i=1,\dots ,r$ if $r>0$. Put $s' := \sharp \{ j \in \{1,\dots ,s\} \,|\, c_j <0 \}$, where $s' := 0$ if $s=0$. In what follows, we assume $c_j<0$ for $j=1,\dots ,s'$ when $s'>0$. \begin{lem}\label{lem(3-1-1)} We set the divisor: \begin{align*} \wDelta &:= 2\wD _0 + 2m_0\wF - \sum _{i=1}^r \sum _{\lambda =1}^{\alpha _i}2\lambda \wD _{i,\lambda} \\ &\qquad - \sum _{j=1}^{s'} \sum _{\mu =1}^{\beta _j}2\mu \wD _{r+j,\mu} - \sum _{j'=s'+1}^s \sum _{\mu' =1}^{\beta _{j'}'}2\mu' \wD _{r+j',\beta_{j'}+\mu'} - \sum _{k=1}^t\sum _{\nu =1}^{\gamma _k}\nu \wD _{r+s+k,\nu} \end{align*} on $\wS$. Then the following assertions hold: \begin{enumerate} \item $\dim |\wDelta| \ge 3m_0+2 -3\alpha -3\beta -\gamma$. \item If $t=0$, then $\frac{1}{2}\wDelta$ is a $\bZ$-divisor and further $\dim |\frac{1}{2}\wDelta| \ge m_0+1-\alpha -\beta$. \end{enumerate} \end{lem} \begin{proof} Note that: \begin{align*} (\wDelta)^2 &= 4m_0 - \sum _{i=1}^r 4\alpha _i - \sum _{j=1}^{s'} 4\beta _j - \sum _{j'=s'+1}^s 4\beta _{j'}' - \sum _{k=1}^t \gamma _k \ge 4m_0 -4(\alpha + \beta) - \gamma ,\\ (\wDelta \cdot -K_{\wS}) &= 2(m_0+2) - \sum _{i=1}^r 2\alpha _i - \sum _{j=1}^{s'} 2\beta _j - \sum _{j'=s'+1}^s 2\beta _{j'}' - \sum _{k=1}^t \gamma _k \ge 2(m_0+2) -2(\alpha + \beta) - \gamma \end{align*} because $\beta _{j'} \ge \beta _{j'}'$ for every $j' = s'+1,\dots ,s$ (if $s' \not= s$). By the Riemann-Roch theorem and rationality of $\wS$, we have: \begin{align*} \dim |\wDelta| \ge \frac{1}{2}(\wDelta \cdot \wDelta - K_{\wS}) = 3m_0+2 -3\alpha -3\beta -\gamma. \end{align*} Hence, we obtain the assertion (1). In order to show the assertion (2), we assume $t=0$. Then $\frac{1}{2}\wDelta$ is obviously a $\bZ$-divisor on $\wS$ by the definition of $\wDelta$. The remaining assertion follows from the same argument as the above argument. \end{proof} \begin{rem} For the reader's convenience, we present figures of general members in linear systems related to Lemma \ref{lem(3-1-1)}. 
Let $\wDelta$ be the same as in Lemma \ref{lem(3-1-1)}. Then: \begin{itemize} \item The configuration of a general member $\wC \in |\wDelta|$ is given as Figure \ref{C2} (a). \item Assuming $t=0$, the configuration of a general member $\wC \in |\frac{1}{2}\wDelta|$ is given as Figure \ref{C2} (b). \end{itemize} \end{rem} \begin{figure} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.7}{\begin{tikzpicture} \draw [thick] (-1,0)--(19,0); \node at (-1.4,0) {$\wD _0$}; \node at (-2,6.3) {\Large (a)}; \node at (1.5,3.4) {\large $\cdots \cdots$}; \node at (5,3) {\Large $\cdots$}; \node at (10,2.7) {\Large $\cdots$}; \node at (15,3) {\large $\cdots \cdots$}; \draw [very thick] (-0.25,5.25)--(19,5.25); \draw [very thick] (-0.25,5.75)--(13,5.75); \draw [very thick] (-0.25,5.25) .. controls (-0.5,5.25) and (-0.5,5.75) .. (-0.25,5.75); \node at (-0.8,5.5) {$\wC$}; \node at (0,-1.5) {}; \draw [dashed] (0,-0.25)--(0.5,1); \draw (0.5,0.75)--(0,2); \draw (0,1.75)--(0.5,3); \draw (0.5,3.75)--(0,5); \draw [dashed] (0,4.75)--(0.5,6); \node at (0.5,3.45) {$\vdots$}; \node at (0,-0.6) {\footnotesize $\wE _1'$}; \node at (0.5,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (2,-0.25)--(2.5,1); \draw (2.5,0.75)--(2,2); \draw (2,1.75)--(2.5,3); \draw (2.5,3.75)--(2,5); \draw [dashed] (2,4.75)--(2.5,6); \node at (2.5,3.5) {$\vdots$}; \node at (2,-0.6) {\footnotesize $\wE _r'$}; \node at (2.5,6.3) {\footnotesize $\wE _r$}; \draw (3.5,-0.25)--(3.5,1); \draw (3.5,1) .. controls (3.5,1.25) .. (3.75,1.25); \draw (3.75,1.25)--(4.5,1.25); \draw (3.75,1)--(4,1.75); \draw (4.25,1)--(4.5,1.75); \draw (3.75,2.25)--(4,1.5); \draw (4.25,2.25)--(4.5,1.5); \node at (3.75,2.75) {$\vdots$}; \node at (3.75,3.25) {$\vdots$}; \node at (4.25,2.75) {$\vdots$}; \node at (4.25,3.25) {$\vdots$}; \draw (3.75,3.5)--(4,4.25); \draw (4.25,3.5)--(4.5,4.25); \draw (3.75,4.75)--(4,4); \draw [dashed] (4.25,4.75)--(4.5,4); \draw [dashed] (3.75,4.25)--(4.25,6); \node at (3.5,-0.6) {\footnotesize $\wD _{r+1,0}$}; \node at (4.9,4.5) {\footnotesize $\wE_{r+1}'$}; \node at (4.5,6.3) {\footnotesize $\wE _{r+1}$}; \draw (5.5,-0.25)--(5.5,1); \draw (5.5,1) .. controls (5.5,1.25) .. (5.75,1.25); \draw (5.75,1.25)--(6.5,1.25); \draw (5.75,1)--(6,1.75); \draw (6.25,1)--(6.5,1.75); \draw (5.75,2.25)--(6,1.5); \draw (6.25,2.25)--(6.5,1.5); \node at (5.75,2.75) {$\vdots$}; \node at (5.75,3.25) {$\vdots$}; \node at (6.25,2.75) {$\vdots$}; \node at (6.25,3.25) {$\vdots$}; \draw (5.75,3.5)--(6,4.25); \draw (6.25,3.5)--(6.5,4.25); \draw (5.75,4.75)--(6,4); \draw [dashed] (6.25,4.75)--(6.5,4); \draw [dashed] (5.75,4.25)--(6.25,6); \node at (5.5,-0.6) {\footnotesize $\wD _{r+s',0}$}; \node at (6.9,4.5) {\footnotesize $\wE_{r+s'}'$}; \node at (6.5,6.3) {\footnotesize $\wE _{r+s'}$}; \draw (8.25,-0.25)--(8.25,1); \draw (8.25,1) .. controls (8.25,1.25) .. (8.5,1.25); \draw (8.5,1.25)--(9.25,1.25); \draw (8.5,1)--(8.75,1.75); \draw (9,1)--(9.25,1.75); \draw (8.5,2.25)--(8.75,1.5); \draw (9,2.25)--(9.25,1.5); \node at (8.5,2.75) {$\vdots$}; \node at (9,2.75) {$\vdots$}; \draw (8.5,3)--(8.75,3.75); \draw (9,3)--(9.5,4.75); \draw (8.5,4.25)--(8.75,3.5); \draw [dashed] (8.75,4.75)--(8.5,4); \draw [dashed] (9,6)--(9.5,4.25); \node at (8.25,-0.6) {\footnotesize $\wD _{r+s'+1,0}$}; \node at (9.2,6.3) {\footnotesize $\wE_{r+s'+1}'$}; \node at (8,4.75) {\footnotesize $\wE _{r+s'+1}$}; \draw (10.75,-0.25)--(10.75,1); \draw (10.75,1) .. controls (10.75,1.25) .. 
(11,1.25); \draw (11,1.25)--(11.75,1.25); \draw (11,1)--(11.25,1.75); \draw (11.5,1)--(11.75,1.75); \draw (11,2.25)--(11.25,1.5); \draw (11.5,2.25)--(11.75,1.5); \node at (11,2.75) {$\vdots$}; \node at (11.5,2.75) {$\vdots$}; \draw (11,3)--(11.25,3.75); \draw (11.5,3)--(12,4.75); \draw (11,4.25)--(11.25,3.5); \draw [dashed] (11.25,4.75)--(11,4); \draw [dashed] (11.5,6)--(12,4.25); \node at (10.75,-0.6) {\footnotesize $\wD _{r+s,0}$}; \node at (11.5,6.3) {\footnotesize $\wE_{r+s}'$}; \node at (10.7,4.75) {\footnotesize $\wE _{r+s}$}; \draw (13,-0.25)--(13,1.5); \draw (12.75,1.25)--(14,1.25); \draw (13.75,1.5)--(13.75,0.25); \draw (13.25,1)--(13.5,1.75); \draw (13.25,2.25)--(13.5,1.5); \node at (13.25,2.75) {$\vdots$}; \node at (13.25,3.25) {$\vdots$}; \draw (13.25,3.5)--(13.5,4.25); \draw (13.25,4.75)--(13.5,4); \draw [dashed] (13.25,4.25)--(13.75,6); \node at (13,-0.6) {\footnotesize $\wD _{r+s+1,0}$}; \node at (14.7,1.25) {\footnotesize $\wD _{r+s+1,2}$}; \node at (14.6,0.5) {\footnotesize $\wD _{r+s+1,1}$}; \node at (13.75,6.3) {\footnotesize $\wE _{r+s+1}$}; \draw (16,-0.25)--(16,1.5); \draw (15.75,1.25)--(17,1.25); \draw (16.75,1.5)--(16.75,0.25); \draw (16.25,1)--(16.5,1.75); \draw (16.25,2.25)--(16.5,1.5); \node at (16.25,2.75) {$\vdots$}; \node at (16.25,3.25) {$\vdots$}; \draw (16.25,3.5)--(16.5,4.25); \draw (16.25,4.75)--(16.5,4); \draw [dashed] (16.25,4.25)--(16.75,6); \node at (16,-0.6) {\footnotesize $\wD _{r+s+t,0}$}; \node at (17.7,1.25) {\footnotesize $\wD _{r+s+t,2}$}; \node at (17.6,0.5) {\footnotesize $\wD _{r+s+t,1}$}; \node at (16.75,6.3) {\footnotesize $\wE _{r+s+t}$}; \end{tikzpicture}}\end{center}\end{minipage} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.7}{\begin{tikzpicture} \draw [thick] (-1,0)--(13,0); \node at (-1.4,0) {$\wD _0$}; \node at (-2,6.3) {\Large (b)}; \node at (1.5,3.4) {\large $\cdots \cdots$}; \node at (5,3) {\Large $\cdots$}; \node at (10,2.7) {\Large $\cdots$}; \draw [very thick] (-0.25,5.55)--(13,5.5); \node at (-0.6,5.5) {$\wC$}; \draw [dashed] (0,-0.25)--(0.5,1); \draw (0.5,0.75)--(0,2); \draw (0,1.75)--(0.5,3); \draw (0.5,3.75)--(0,5); \draw [dashed] (0,4.75)--(0.5,6); \node at (0.5,3.45) {$\vdots$}; \node at (0,-0.6) {\footnotesize $\wE _1'$}; \node at (0.5,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (2,-0.25)--(2.5,1); \draw (2.5,0.75)--(2,2); \draw (2,1.75)--(2.5,3); \draw (2.5,3.75)--(2,5); \draw [dashed] (2,4.75)--(2.5,6); \node at (2.5,3.5) {$\vdots$}; \node at (2,-0.6) {\footnotesize $\wE _r'$}; \node at (2.5,6.3) {\footnotesize $\wE _r$}; \draw (3.5,-0.25)--(3.5,1); \draw (3.5,1) .. controls (3.5,1.25) .. (3.75,1.25); \draw (3.75,1.25)--(4.5,1.25); \draw (3.75,1)--(4,1.75); \draw (4.25,1)--(4.5,1.75); \draw (3.75,2.25)--(4,1.5); \draw (4.25,2.25)--(4.5,1.5); \node at (3.75,2.75) {$\vdots$}; \node at (3.75,3.25) {$\vdots$}; \node at (4.25,2.75) {$\vdots$}; \node at (4.25,3.25) {$\vdots$}; \draw (3.75,3.5)--(4,4.25); \draw (4.25,3.5)--(4.5,4.25); \draw (3.75,4.75)--(4,4); \draw [dashed] (4.25,4.75)--(4.5,4); \draw [dashed] (3.75,4.25)--(4.25,6); \node at (3.5,-0.6) {\footnotesize $\wD _{r+1,0}$}; \node at (4.9,4.5) {\footnotesize $\wE_{r+1}'$}; \node at (4.5,6.3) {\footnotesize $\wE _{r+1}$}; \draw (5.5,-0.25)--(5.5,1); \draw (5.5,1) .. controls (5.5,1.25) .. 
(5.75,1.25); \draw (5.75,1.25)--(6.5,1.25); \draw (5.75,1)--(6,1.75); \draw (6.25,1)--(6.5,1.75); \draw (5.75,2.25)--(6,1.5); \draw (6.25,2.25)--(6.5,1.5); \node at (5.75,2.75) {$\vdots$}; \node at (5.75,3.25) {$\vdots$}; \node at (6.25,2.75) {$\vdots$}; \node at (6.25,3.25) {$\vdots$}; \draw (5.75,3.5)--(6,4.25); \draw (6.25,3.5)--(6.5,4.25); \draw (5.75,4.75)--(6,4); \draw [dashed] (6.25,4.75)--(6.5,4); \draw [dashed] (5.75,4.25)--(6.25,6); \node at (5.5,-0.6) {\footnotesize $\wD _{r+s',0}$}; \node at (6.9,4.5) {\footnotesize $\wE_{r+s'}'$}; \node at (6.5,6.3) {\footnotesize $\wE _{r+s'}$}; \draw (8.25,-0.25)--(8.25,1); \draw (8.25,1) .. controls (8.25,1.25) .. (8.5,1.25); \draw (8.5,1.25)--(9.25,1.25); \draw (8.5,1)--(8.75,1.75); \draw (9,1)--(9.25,1.75); \draw (8.5,2.25)--(8.75,1.5); \draw (9,2.25)--(9.25,1.5); \node at (8.5,2.75) {$\vdots$}; \node at (9,2.75) {$\vdots$}; \draw (8.5,3)--(8.75,3.75); \draw (9,3)--(9.5,4.75); \draw (8.5,4.25)--(8.75,3.5); \draw [dashed] (8.75,4.75)--(8.5,4); \draw [dashed] (9,6)--(9.5,4.25); \node at (8.25,-0.6) {\footnotesize $\wD _{r+s'+1,0}$}; \node at (9.2,6.3) {\footnotesize $\wE_{r+s'+1}'$}; \node at (8,4.75) {\footnotesize $\wE _{r+s'+1}$}; \draw (10.75,-0.25)--(10.75,1); \draw (10.75,1) .. controls (10.75,1.25) .. (11,1.25); \draw (11,1.25)--(11.75,1.25); \draw (11,1)--(11.25,1.75); \draw (11.5,1)--(11.75,1.75); \draw (11,2.25)--(11.25,1.5); \draw (11.5,2.25)--(11.75,1.5); \node at (11,2.75) {$\vdots$}; \node at (11.5,2.75) {$\vdots$}; \draw (11,3)--(11.25,3.75); \draw (11.5,3)--(12,4.75); \draw (11,4.25)--(11.25,3.5); \draw [dashed] (11.25,4.75)--(11,4); \draw [dashed] (11.5,6)--(12,4.25); \node at (10.75,-0.6) {\footnotesize $\wD _{r+s,0}$}; \node at (11.5,6.3) {\footnotesize $\wE_{r+s}'$}; \node at (10.7,4.75) {\footnotesize $\wE _{r+s}$}; \end{tikzpicture}} \caption{The configuration of $\wC$ in Lemma \ref{lem(3-1-1)}}\label{C2} \end{center}\end{minipage} \end{figure} For simplicity, we set $d_{r,s'} := a + \sum _{i=1}^rb_i + \sum _{j=1}^{s'}c_j $, where we consider $\sum _{j=1}^{s'}c_j = 0$ if $s'=0$. \begin{lem}\label{lem(3-1-2)} If one of the following conditions holds: \begin{enumerate} \item $\gamma \ge 5$; \item $\gamma > 0$ and $\beta ' \ge 2$; \item $\gamma =0$ and $\beta ' \ge 3$, \end{enumerate} then $d_{r,s'} >0$. \end{lem} \begin{proof} Let $\wDelta$ be the same as in Lemma \ref{lem(3-1-1)}. In this proof, we will use the formula $(-K_{\wS})^2 = 8 - (\alpha + \beta + \beta ' + \gamma ) = 4-m_0$. In (1) and (2), we then obtain $\dim |\wDelta| \ge 3\beta ' + 2\gamma -10 \ge 0$ by Lemma \ref{lem(3-1-1)} (1) combined with $-(\alpha + \beta) = -m_0 -4 + \beta ' + \gamma$. Here, we note that $\gamma \ge 2$ provided $\gamma > 0$. Hence, $|\wDelta| \not= \emptyset$, so that we can take a general member $\wC$ of $|\wDelta|$. Notice $f_{\ast}(\wC ) \sim _{\bQ} 2m_0F - \sum _{i=1}^r 2\alpha _i E_i - \sum _{j=1}^{s'} 2\beta _j E_{r+j} - \sum _{j'=s'+1}^s 2\beta _{j'}' E_{r+j'}' - \sum _{k=1}^t \gamma _k E_{r+s+k} \not\sim _{\bQ} 0$. Thus, we obtain $0 < \frac{1}{2}(H \cdot f_{\ast}(\wC )) = d_{r,s'}$ (see also Figure \ref{C2} (a)). In (3), we then obtain $\dim | \frac{1}{2}\wDelta| \ge -3+\beta ' \ge 0$ by Lemma \ref{lem(3-1-1)} (2) combined with $-(\alpha + \beta) \ge -m_0 -4 + \beta '$. Hence, $|\frac{1}{2}\wDelta| \not= \emptyset$, so that we can take a general member $\wC$ of $|\frac{1}{2}\wDelta|$. 
Notice $f_{\ast}(\wC ) \sim _{\bQ} m_0F - \sum _{i=1}^r \alpha _i E_i - \sum _{j=1}^{s'} \beta _j E_{r+j} - \sum _{j'=s'+1}^s \beta _{j'}' E_{r+j'}' \not\sim _{\bQ} 0$. Thus, we obtain $0 < (H \cdot f_{\ast}(\wC )) = d_{r,s'}$ (see also Figure \ref{C2} (b)). \end{proof} Now, we shall consider $U := f(\wS \backslash \Supp (\wD _0 + \wF + \wF _1 +\dots + \wF _{r+s+t}))$. Notice that $U$ is a cylinder in $S$. Indeed: \begin{align*} U \simeq \wS \backslash \Supp \left( \wD _0 + \wF + \wF _1 + \dots + \wF _{r+s+t} \right) \simeq \bA ^1_{\bk} \times (\bA ^1_{\bk} \backslash \{ (r+s+t)\text{ points}\}) \end{align*} by Lemma \ref{lem(2-1-1)} (1). Then the following result holds: \begin{lem}\label{lem(3-1-3)} If $d_{r,s'} >0$, then $U$ is an $H$-polar cylinder. \end{lem} \begin{proof} We take the effective $\bQ$-divisor: \begin{align*} D := &\sum _{i=1}^r (-b_i) E_i' + \sum _{j=1}^{s'} (-c_j) E_{r+j}' + \sum _{j'=s'+1}^s c_{j'} E_{r+j'} \\ &\quad +\frac{d_{r,s'}}{r+s+t+1} \left \{F + \sum _{k=1}^{r+s}(E_k+E_k') + \sum _{\ell =1}^t2E_{r+s+\ell} \right\} \end{align*} on $S$. Then we know $D \sim _{\bQ} H$ and $S \backslash \Supp (D) = U$. Thus, $U$ is an $H$-polar cylinder. \end{proof} Proposition \ref{prop(4-1)} follows from Lemmas \ref{lem(3-1-2)} and \ref{lem(3-1-3)}. \subsection{Type $\sD _5$ case}\label{4-3} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast)$, $(s,t) = (0,1)$ and $\gamma = 4$. Namely, $m_0 = \alpha$. By a similar argument to {\cite[Lemma 3.4]{Saw1}}, there exists the $(-1)$-curve $\wGamma$ on $\wS$ such that $\wGamma \sim \wD _0 + m_0\wF -\sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} -\sum _{\mu = 1}^4 \wD _{r+1,\mu}$. Note that the configuration of the $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ is given as in Figure \ref{Case2}. 
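Let us also record, for the reader's convenience, why the equality $m_0 = \alpha$ holds in this subsection: since $g$ satisfies $(\ast)$ and $(s,t) = (0,1)$ with $\gamma = 4$, we have $\beta = \beta ' = 0$, so that the equality $(-K_{\wS})^2 = 8 - (\alpha + \beta + \beta ' + \gamma )$ noted in \S \S \ref{4-1} reads
\begin{align*}
4 - m_0 = (-K_{\wS})^2 = 8 - (\alpha + 4) = 4 - \alpha ,
\end{align*}
whence $m_0 = \alpha$.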
\begin{figure}\begin{center}\scalebox{0.7} {\begin{tikzpicture} \draw [very thick] (-1,0)--(7,0); \node at (-1.4,0) {$\wD _0$}; \node at (1.75,3) {\Large $\cdots \cdots$}; \draw [thick][dashed] (-1,5.5)--(7,5.5); \node at (-1.4,5.5) {$\wGamma$}; \node at (7.6,0) {$-m_0$}; \draw [dashed] (0.5,-0.25)--(0,1); \draw (0,0.75)--(0.5,2); \draw (0.5,1.75)--(0,3); \draw (0,3.75)--(0.5,5); \draw [dashed] (0.5,4.75)--(0,6); \node at (0,3.5) {$\vdots$}; \node at (0.5,-0.6) {\footnotesize $\wE _1'$}; \node at (0.7,1.25) {\footnotesize $\wD _{1,1}$}; \node at (-0.1,2.25) {\footnotesize $\wD _{1,2}$}; \node at (-0.5,4.3) {\footnotesize $\wD _{1,\alpha _1-1}$}; \node at (0,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (4,-0.25)--(3.5,1); \draw (3.5,0.75)--(4,2); \draw (4,1.75)--(3.5,3); \draw (3.5,3.75)--(4,5); \draw [dashed] (4,4.75)--(3.5,6); \node at (3.5,3.5) {$\vdots$}; \node at (4,-0.6) {\footnotesize $\wE _r'$}; \node at (4.2,1.25) {\footnotesize $\wD _{r,1}$}; \node at (3.4,2.25) {\footnotesize $\wD _{r,2}$}; \node at (3,4.3) {\footnotesize $\wD _{r,\alpha _r-1}$}; \node at (3.5,6.3) {\footnotesize $\wE _r$}; \draw (5.5,-0.25)--(6.5,2); \draw (6.3,0.9)--(6.3,4.85); \draw (6.5,3.75)--(5.5,6); \node at (5.5,-0.6) {\footnotesize $\wD _{r+1,0}$}; \node at (5.5,6.3) {\footnotesize $\wD _{r+1,1}$}; \node at (6.9,2.5) {\footnotesize $\wD _{r+1,2}$}; \draw (5,3)--(7,3); \draw [dashed] (5.5,1.5)--(5.5,4.5); \node at (4.7,3.3) {\footnotesize $\wD _{r+1,3}$}; \node at (5,4.2) {\footnotesize $\wE _{r+1}$}; \end{tikzpicture}} \caption{The configuration of $g : \wS \to \bP ^1_{\bk}$ in Subsection \ref{4-3}.}\label{Case2} \end{center}\end{figure} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(4-2)} With the same notations and assumptions as above, we have $\Ampc (S) = \Amp (S)$. \end{prop} In what follows, we shall prove the above result. Let $H \in \Amp (H)$. Since $\Amp (H)$ is contained in $\Cl (S)_{\bQ} = \bQ [F] \oplus \left( \bigoplus _{i=1}^r \bQ [E_i] \right)$, we can write: \begin{align*} H \sim _{\bQ} aF + \sum _{i=1}^rb_iE_i \end{align*} for some rational numbers $a,b_1,\dots ,b_r$. Without loss of generality, we may assume $\frac{b_1}{\alpha _1} \le \frac{b_2}{\alpha _2} \le \dots \le \frac{b_r}{\alpha _r}$. Then the following lemma holds: \begin{lem}\label{lem(3-2-1)} $2a + \sum _{i=1}^r2b_i - \frac{b_r}{\alpha_r} > 0$. \end{lem} \begin{proof} Consider the following divisor on $\wS$: \begin{align*} \wDelta := 2\wD _0 + 2m_0\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} 2\lambda \wD _{i,\lambda} + \wE _r - \sum _{\mu = 1}^4\mu \wD _{r+1,\mu} \end{align*} Since $(\wDelta \cdot K_{\wS} - \wF) = -2(m_0-\alpha)-3 = -3$, $(\wDelta )^2 = 4(m_0-\alpha)-1 = -1$ and $(\wDelta \cdot -K_{\wS}) = 2(m_0-\alpha)+1 = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Hence, by virtue of $(H \cdot E_i) = -\frac{b_i}{\alpha _i}$ for $i=1,\dots ,r$ we have: \begin{align*} 0 < (H \cdot f_{\ast}(\wDelta)) = a(F \cdot f_{\ast}(\wDelta)) + \sum _{i=1}^rb_i(E_i \cdot f_{\ast}(\wDelta)) = 2a + \sum _{i=1}^r2b_i - \frac{b_r}{\alpha_r}. \end{align*} \end{proof} We note $2E_{r+1} \sim _{\bQ} F$ because $\wD _{r+1,0} \sim \wF - \wD _{r+1,1} - 2 (\wD _{r+1,2} + \wD _{r+1,3} + \wE _{r+1})$. Put $\Gamma := f_{\ast}(\wGamma ) \sim _{\bQ} -\sum _{i=1}^r\alpha _iE_i + (2\alpha -1)E_{r+1}$. Note $-\frac{b_r}{\alpha _r} = (H \cdot E_r) > 0$. Assume $\frac{b_1}{\alpha _1} = \frac{b_r}{\alpha _r}$. 
For simplicity, we set $d_0 := 2a + \frac{2\alpha b_r}{\alpha_r} - \frac{b_r}{\alpha _r}$. Notice that: \begin{align*} d_0 = 2a + \sum _{i=1}^r \frac{2\alpha _i b_r}{\alpha_r} - \frac{b_r}{\alpha _r} = 2a + \sum _{i=1}^r 2b_i - \frac{b_r}{\alpha _r} > 0 \end{align*} by Lemma \ref{lem(3-2-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{d_0}{2m_0-1}$, we take the effective $\bQ$-divisor: \begin{align*} D := \left( -\frac{b_r}{\alpha _r} + \varepsilon \right) \Gamma + \sum _{i=1}^r\alpha _i \varepsilon E_i + \left\{ d_0-(2m_0-1)\varepsilon \right\} E_{r+1} \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wGamma + \wD _0 + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. In what follows, we can assume $\frac{b_1}{\alpha _1} < \frac{b_r}{\alpha _r}$. Put $r' := \max \{ i \in \{ 1,\dots ,r-1\} \,|\, \frac{b_i}{\alpha _i} < \frac{b_r}{\alpha _r} \}$. For simplicity, we put $\alpha ' := \sum _{i=1}^{r'}\alpha _i$. Then $2m_0-1 - 2\alpha ' > 0$ by virtue of $m_0 = \alpha > \alpha '$. Moreover, we set $d_{r'} := 2a + \sum _{i=1}^{r'}2b_i + \frac{2(\alpha - \alpha ')b_r}{\alpha_r} - \frac{b_r}{\alpha _r}$. Notice that: \begin{align*} d_{r'} = 2a + \sum _{i=1}^{r'}2b_i + \sum _{j=r'+1}^r \frac{2\alpha _j b_r}{\alpha_r} - \frac{b_r}{\alpha _r} \ge 2a + \sum _{i=1}^r 2b_i - \frac{b_r}{\alpha _r} > 0 \end{align*} by Lemma \ref{lem(3-2-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{d_{r'}}{2m_0-1-2\alpha '}$, we take the effective $\bQ$-divisor: \begin{align*} D := \left( -\frac{b_r}{\alpha _r} + \varepsilon \right) \Gamma + \sum _{i=1}^{r'} \alpha _i \left( \frac{b_r}{\alpha _r} -\frac{b_i}{\alpha _i} - \varepsilon \right) E_i' + \sum _{j=r'+1}^r\alpha _j \varepsilon E_j + \left\{ d_{r'}-(2m_0-1 - 2\alpha ') \varepsilon \right\} E_{r+1} \end{align*} on $S$. Then $H \sim _{\bQ} D$. Moreover, we know: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wGamma + \wD _0 + \sum _{i=1}^{r'} (\wF _i - \wE _i) + \sum _{j=r'+1}^r (\wF _j - \wE _j') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. The proof of Proposition \ref{prop(4-2)} is thus completed. \subsection{Type $(\sA _5)'$ case}\label{4-4} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast \ast)$, $(s,t) = (0,1)$ and $\gamma = 3$. Namely, $-m_{\infty} = m_0 - \alpha - 1$. Note that the configuration of the $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ is given as in Figure \ref{Case4}. 
\begin{figure}\begin{center}\scalebox{0.7} {\begin{tikzpicture} \draw [very thick] (-1,0)--(7,0); \node at (-1.4,0) {$\wD _0$}; \node at (1.75,3) {\Large $\cdots \cdots$}; \draw [very thick] (-1,5.5)--(7,5.5); \node at (-1.4,5.5) {$\wD _{\infty}$}; \node at (7.6,0) {$-m_0$}; \node at (7.6,5.5) {$-m_{\infty}$}; \draw [dashed] (0.5,-0.25)--(0,1); \draw (0,0.75)--(0.5,2); \draw (0.5,1.75)--(0,3); \draw (0,3.75)--(0.5,5); \draw [dashed] (0.5,4.75)--(0,6); \node at (0,3.5) {$\vdots$}; \node at (0.5,-0.6) {\footnotesize $\wE _1'$}; \node at (0.7,1.25) {\footnotesize $\wD _{1,1}$}; \node at (-0.1,2.25) {\footnotesize $\wD _{1,2}$}; \node at (-0.5,4.3) {\footnotesize $\wD _{1,\alpha _1-1}$}; \node at (0,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (4,-0.25)--(3.5,1); \draw (3.5,0.75)--(4,2); \draw (4,1.75)--(3.5,3); \draw (3.5,3.75)--(4,5); \draw [dashed] (4,4.75)--(3.5,6); \node at (3.5,3.5) {$\vdots$}; \node at (4,-0.6) {\footnotesize $\wE _r'$}; \node at (4.2,1.25) {\footnotesize $\wD _{r,1}$}; \node at (3.4,2.25) {\footnotesize $\wD _{r,2}$}; \node at (3,4.3) {\footnotesize $\wD _{r,\alpha _r-1}$}; \node at (3.5,6.3) {\footnotesize $\wE _r$}; \draw (5.5,-0.25)--(6.5,2); \draw (6.3,0.9)--(6.3,4.85); \draw (6.5,3.75)--(5.5,6); \node at (5.5,-0.6) {\footnotesize $\wD _{r+1,0}$}; \node at (5.5,6.3) {\footnotesize $\wD _{r+1,1}$}; \node at (5.7,2.3) {\footnotesize $\wD _{r+1,2}$}; \draw [dashed] (5,3)--(7,3); \node at (5,3.3) {\footnotesize $\wE _{r+1}$}; \end{tikzpicture}} \caption{The configuration of $g : \wS \to \bP ^1_{\bk}$ in Subsection \ref{4-4}.}\label{Case4} \end{center}\end{figure} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(3-4)} With the same notations and assumptions as above, we have $\Ampc (S) = \Amp (S)$. \end{prop} In what follows, we shall prove the above result. Let $H \in \Amp (S)$. Since $\Amp (S)$ is contained in $\Cl (S)_{\bQ} = \bigoplus _{i=1}^r \bQ [E_i]$, we can write: \begin{align*} H \sim _{\bQ} \sum _{i=1}^r a_iE_i \end{align*} for some rational numbers $a_1,\dots ,a_r$. Without loss of generality, we may assume $\frac{a_1}{\alpha _1} \le \frac{a_2}{\alpha _2} \le \dots \le \frac{a_r}{\alpha _r}$. Since $\alpha + 1 = m_0+m_{\infty}$ and $m_{\infty} \ge 2$, we have $\alpha _1 + \dots + \alpha _r > m_0$. We set $r' := \min \{ i \in \{ 1,\dots ,r\} \,|\, \alpha _1 + \dots + \alpha _i \ge m_0\}$. For simplicity, we put $\alpha ' := \sum _{i=1}^{r'-1}\alpha _i$, where $\alpha ' = 0$ provided $r'=1$. Then the following lemma holds: \begin{lem}\label{lem(3-4-1)} $2(a_1+\dots +a_i) + \frac{\left(2(m_0-\alpha _1 - \dots - \alpha _i)-1\right)a_{r'}}{\alpha _{r'}} > 0$ for every $i=1,\dots ,r'-1$. Moreover, $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{lem} \begin{proof} Consider the following divisor on $\wS$: \begin{align*} \wDelta &:= 2\wD _0 + 2m_0\wF \\ &\qquad- \sum _{i=1}^{r'} \sum _{\lambda = 1}^{\alpha _i} 2\lambda \wD _{i,\lambda} + \sum _{\mu = m_0-\alpha '}^{\alpha _{r'}}(2\mu - 2m_0 + 2\alpha ' + 1)\wD _{r',\mu} - (\wD _{r+1,1}+2\wD _{r+1,2}+3\wE _{r+1}). \end{align*} Since $(\wDelta \cdot K_{\wS}-\wF) = -4 <0$, $(\wDelta)^2 = 0$ and $(\wDelta \cdot -K_{\wS}) = 2$, we obtain $\dim |\wDelta| \ge 1$ by the Riemann-Roch theorem and rationality of $\wS$. At first, we shall show the first assertion with $i=r'-1$. 
Since $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = (\wDelta \cdot \wD _{r+1,0}) = 0$, we have: \begin{align*} (E _i \cdot f_{\ast}(\wDelta)) = \left\{ \begin{array}{cc} 2 & \text{if}\ i <r' \\ \frac{2(m_0-\alpha ')-1}{\alpha _{r'}} & \text{if}\ i =r' \\ 0 & \text{if}\ i >r' \end{array} \right. \end{align*} for $i=1,\dots ,r$. Hence: \begin{align*} 0 < (H \cdot f_{\ast}(\wDelta)) = \sum _{i=1}^ra_i(E_i \cdot f_{\ast}(\wDelta)) = 2(a_1 + \dots + a_{r'-1}) + \frac{\left(2(m_0-\alpha ')-1\right)a_{r'}}{\alpha _{r'}} . \end{align*} In what follows, we shall show the first assertion for the general case. By using the above result combined with $\frac{a_{i+1}}{\alpha _{i+1}} \le \dots \le \frac{a_{r'-1}}{\alpha _{r'-1}} \le \frac{a_{r'}}{\alpha _{r'}}$, we have: \begin{align*} 0 &< 2(a_1 + \dots + a_{r'-1}) + \frac{\left(2(m_0-\alpha ')-1\right)a_{r'}}{\alpha _{r'}} \\ &\le 2(a_1 + \dots +a_i) + \frac{\left( 2(\alpha _{i+1} + \dots + \alpha _{r'-1} + m_0 - \alpha ')-1\right)a_{r'}}{\alpha _{r'}} \\ &= 2(a_1 + \dots +a_i) + \frac{\left( 2(m_0 - \alpha _1 -\dots - \alpha _i)-1\right)a_{r'}}{\alpha _{r'}}. \end{align*} We shall prove the remaining assertion. Similarly, we have: \begin{align*} 0 < 2(a_1 + \dots + a_{r'-1}) + \frac{\left(2(m_0-\alpha ')-1\right)a_{r'}}{\alpha _{r'}} \le \frac{(2m_0-1)a_{r'}}{\alpha _{r'}}. \end{align*} By virtue of $2m_0-1>0$, we thus obtain $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{proof} Note $2E_{r+1} \sim _{\bQ} F$ and $\frac{2m_0-1}{2}F \sim _{\bQ} \sum _{i=1}^r\alpha _iE_i$ because $\wD _{r+1,0} \sim \wF -\wD _{r+1,1} - 2\wE _{r+1}$ and $\wD _{\infty} \sim \wD _0 + m_0\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} -(\wD _{r+1,1}+\wD_{r+1,2}+\wE_{r+1})$. Assume $\frac{a_1}{\alpha _1} = \frac{a_{r'}}{\alpha _{r'}}$. Note $\frac{a_{r'}}{\alpha _{r'}} > 0$ by Lemma \ref{lem(3-4-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{a_{r'}}{\alpha _{r'}}$, we take the effective $\bQ$-divisor: \begin{align*} D := \sum _{i=1}^r\alpha _i \left( \frac{a_i}{\alpha _i} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_i + (2m_0-1) \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right)E_{r+1} \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. In what follows, we can assume $\frac{a_1}{\alpha _1} < \frac{a_{r'}}{\alpha _{r'}}$. Put $r'' := \max \{ i \in \{ 1,\dots ,r'-1\} \,|\, \frac{a_i}{\alpha _i} < \frac{a_{r'}}{\alpha _{r'}} \}$. For simplicity, we put $\alpha '' := \sum _{i=1}^{r''}\alpha _i$ and $a'' := \sum _{i=1}^{r''}a_i$. Then we note $m_0 - \alpha '' > 0$. Moreover, by using Lemma \ref{lem(3-4-1)} we have: \begin{align*} \frac{1}{2m_0-1-2\alpha''}\left( a'' + \frac{(2m_0-1-2\alpha'')a_{r'}}{\alpha _{r'}} \right) > 0. 
\end{align*} Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \min \left\{ \frac{a_{r'}}{\alpha _{r'}} - \frac{a_1}{\alpha _1},\ \frac{1}{2m_0-1-2\alpha''}\left( a'' + \frac{(2m_0-1-2\alpha'')a_{r'}}{\alpha _{r'}} \right) \right\} , \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D &:= \sum _{i=1}^{r''} \alpha _i \left( \frac{a_{r'}}{\alpha _{r'}} -\frac{a_i}{\alpha _i} - \varepsilon \right)E_i' + \sum _{j=r''+1}^r\alpha _j \left( \frac{a_j}{\alpha _j} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_j \\ &\qquad + \left\{ 2a'' + (2m_0-1-2\alpha'')\left( \frac{a_{r'}}{\alpha _{r'}}-\varepsilon \right) \right\} E_{r+1} \end{align*} on $S$. Then $H \sim _{\bQ} D$. Moreover, we know: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^{r''} (\wF _i - \wE _i) + \sum _{j=r''+1}^r (\wF _j - \wE _j') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. The proof of Proposition \ref{prop(3-4)} is thus completed. \subsection{Type $(\sA _3+\sA _1)'$ case}\label{4-5} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast \ast)$, $(s,t) = (0,1)$ and $\gamma = 2$. Namely, $-m_{\infty} = m_0 - \alpha$. Note that the configuration of the $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ is given as in Figure \ref{Case3}. \begin{figure}\begin{center}\scalebox{0.7} {\begin{tikzpicture} \draw [very thick] (-1,0)--(7,0); \node at (-1.4,0) {$\wD _0$}; \node at (1.75,3) {\Large $\cdots \cdots$}; \draw [very thick] (-1,5.5)--(4.5,5.5); \draw [very thick] (5,0.5)--(7,0.5); \draw [very thick] (4.5,5.5) .. controls (4.75,5.5) and (4.75,0.5) .. (5,0.5); \node at (-1.4,5.5) {$\wD _{\infty}$}; \node at (0,-1.5) {}; \node at (7.6,0) {$-m_0$}; \node at (7.6,0.5) {$-m_{\infty}$}; \draw [dashed] (0.5,-0.25)--(0,1); \draw (0,0.75)--(0.5,2); \draw (0.5,1.75)--(0,3); \draw (0,3.75)--(0.5,5); \draw [dashed] (0.5,4.75)--(0,6); \node at (0,3.5) {$\vdots$}; \node at (0.5,-0.6) {\footnotesize $\wE _1'$}; \node at (0.7,1.25) {\footnotesize $\wD _{1,1}$}; \node at (-0.1,2.25) {\footnotesize $\wD _{1,2}$}; \node at (-0.5,4.3) {\footnotesize $\wD _{1,\alpha _1-1}$}; \node at (0,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (4,-0.25)--(3.5,1); \draw (3.5,0.75)--(4,2); \draw (4,1.75)--(3.5,3); \draw (3.5,3.75)--(4,5); \draw [dashed] (4,4.75)--(3.5,6); \node at (3.5,3.5) {$\vdots$}; \node at (4,-0.6) {\footnotesize $\wE _r'$}; \node at (4.2,1.25) {\footnotesize $\wD _{r,1}$}; \node at (3.4,2.25) {\footnotesize $\wD _{r,2}$}; \node at (3,4.3) {\footnotesize $\wD _{r,\alpha _r-1}$}; \node at (3.5,6.3) {\footnotesize $\wE _r$}; \draw (5.5,-0.25)--(6.5,2); \draw [dashed] (6.3,0.9)--(6.3,4.85); \draw (6.5,3.75)--(5.5,6); \node at (5.5,-0.6) {\footnotesize $\wD _{r+1,0}$}; \node at (5.5,6.3) {\footnotesize $\wD _{r+1,1}$}; \node at (5.8,2.5) {\footnotesize $\wE _{r+1}$}; \end{tikzpicture}} \caption{The configuration of $g : \wS \to \bP ^1_{\bk}$ in Subsection \ref{4-5}.}\label{Case3} \end{center}\end{figure} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(3-3)} With the same notations and assumptions as above, we have $\Ampc (S) = \Amp (S)$. \end{prop} In what follows, we shall prove the above result. Let $H \in \Amp (S)$. 
Since $\Amp (S)$ is contained in $\Cl (S)_{\bQ} = \bigoplus _{i=1}^r \bQ [E_i]$, we can write: \begin{align*} H \sim _{\bQ} \sum _{i=1}^r a_iE_i \end{align*} for some rational numbers $a_1,\dots ,a_r$. Without loss of generality, we may assume $\frac{a_1}{\alpha _1} \le \frac{a_2}{\alpha _2} \le \dots \le \frac{a_r}{\alpha _r}$. Since $\alpha = m_0+m_{\infty}$ and $m_{\infty} >0$, we have $\alpha _1 + \dots + \alpha _r \ge m_0$. We set $r' := \min \{ i \in \{ 1,\dots ,r\} \,|\, \alpha _1 + \dots + \alpha _i \ge m_0\}$. For simplicity, we put $\alpha ' := \sum _{i=1}^{r'-1}\alpha _i$, where $\alpha ' = 0$ provided $r'=1$. Then the following lemma holds: \begin{lem}\label{lem(3-3-1)} $a_1+\dots +a_i + \frac{(m_0-\alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}} > 0$ for every $i=1,\dots ,r'-1$. Moreover, $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{lem} \begin{proof} Consider the following divisor on $\wS$: \begin{align*} \wDelta := \wD _0 + m_0\wF - \sum _{i=1}^{r'} \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} + \sum _{\mu = m_0-\alpha '}^{\alpha _{r'}}(\mu - m_0 + \alpha ')\wD _{r',\mu} - (\wD _{r+1,1}+\wE _{r+1}). \end{align*} Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. At first, we shall show the first assertion with $i=r'-1$. Since $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = (\wDelta \cdot \wD _{r+1,0}) = 0$, we have: \begin{align*} (E _i \cdot f_{\ast}(\wDelta)) = \left\{ \begin{array}{cc} 1 & \text{if}\ i <r' \\ \frac{m_0-\alpha '}{\alpha _{r'}} & \text{if}\ i =r' \\ 0 & \text{if}\ i >r' \end{array} \right. \end{align*} for $i=1,\dots ,r$. Hence: \begin{align*} 0 < (H \cdot f_{\ast}(\wDelta)) = \sum _{i=1}^ra_i(E_i \cdot f_{\ast}(\wDelta)) = a_1 + \dots + a_{r'-1} + \frac{(m_0-\alpha ')a_{r'}}{\alpha _{r'}}. \end{align*} In what follows, we shall show the first assertion for the general case. By using the above result combined with $\frac{a_{i+1}}{\alpha _{i+1}} \le \dots \le \frac{a_{r'-1}}{\alpha _{r'-1}} \le \frac{a_{r'}}{\alpha _{r'}}$, we have: \begin{align*} 0 &< a_1+\dots +a_{r'-1} + \frac{(m_0-\alpha')a_{r'}}{\alpha _{r'}} \\ &\le a_1 + \dots +a_i + \frac{(\alpha _{i+1} + \dots + \alpha _{r'-1} + m_0 - \alpha ')a_{r'}}{\alpha _{r'}} \\ &= a_1 + \dots +a_i + \frac{(m_0- \alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}}. \end{align*} We shall prove the remaining assertion. Similarly, we have: \begin{align*} 0 < a_1+\dots +a_{r'-1} + \frac{(m_0-\alpha')a_{r'}}{\alpha _{r'}} \le \frac{m_0a_{r'}}{\alpha _{r'}}. \end{align*} By virtue of $m_0 \ge 2$, we thus obtain $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{proof} Note $2E_{r+1} \sim _{\bQ} F$ and $m_0F \sim _{\bQ} \sum _{i=1}^r\alpha _iE_i$ because $\wD _{r+1,0} \sim \wF -\wD _{r+1,1} - 2\wE _{r+1}$ and $\wD _{\infty} \sim \wD _0 + m_0\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda}$. Assume $\frac{a_1}{\alpha _1} = \frac{a_{r'}}{\alpha _{r'}}$. Then we note $\frac{a_{r'}}{\alpha _{r'}} > 0$ by Lemma \ref{lem(3-3-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{a_{r'}}{\alpha _{r'}}$, we take the effective $\bQ$-divisor: \begin{align*} D := \sum _{i=1}^r\alpha _i \left( \frac{a_i}{\alpha _i} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_i + 2m_0 \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right)E_{r+1} \end{align*} on $S$. 
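For the reader's convenience, we note that the $\bQ$-linear equivalence $H \sim _{\bQ} D$ used in the next step can be checked directly from the two relations recorded above: putting $c := \frac{a_{r'}}{\alpha _{r'}} - \varepsilon$, we have: \begin{align*} D = \sum _{i=1}^r a_iE_i - c\sum _{i=1}^r \alpha _iE_i + m_0c \left( 2E_{r+1} \right) \sim _{\bQ} \sum _{i=1}^r a_iE_i - c\sum _{i=1}^r \alpha _iE_i + c\, m_0F \sim _{\bQ} \sum _{i=1}^r a_iE_i \sim _{\bQ} H . \end{align*}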
Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. In what follows, we can assume $\frac{a_1}{\alpha _1} < \frac{a_{r'}}{\alpha _{r'}}$. Put $r'' := \max \{ i \in \{ 1,\dots ,r'-1\} \,|\, \frac{a_i}{\alpha _i} < \frac{a_{r'}}{\alpha _{r'}} \}$. For simplicity, we put $\alpha '' := \sum _{i=1}^{r''}\alpha _i$ and $a'' := \sum _{i=1}^{r''}a_i$. Then we note $m_0 - \alpha '' > 0$. Moreover, by Lemma \ref{lem(3-3-1)} we have: \begin{align*} \frac{1}{m_0-\alpha''}\left( a'' + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right) > 0. \end{align*} Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \min \left\{ \frac{a_{r'}}{\alpha _{r'}} - \frac{a_1}{\alpha _1},\ \frac{1}{m_0-\alpha''}\left( a'' + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right) \right\} , \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D &:= \sum _{i=1}^{r''} \alpha _i \left( \frac{a_{r'}}{\alpha _{r'}} -\frac{a_i}{\alpha _i} - \varepsilon \right)E_i' + \sum _{j=r''+1}^r\alpha _j \left( \frac{a_j}{\alpha _j} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_j \\ &\qquad + 2\left\{ a'' + (m_0-\alpha '')\left( \frac{a_{r'}}{\alpha _{r'}}-\varepsilon \right) \right\}E_{r+1} \end{align*} on $S$. Then $H \sim _{\bQ} D$. Moreover, we know: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^{r''} (\wF _i - \wE _i) + \sum _{j=r''+1}^r (\wF _j - \wE _j') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. The proof of Proposition \ref{prop(3-3)} is thus completed. \subsection{Type $\sA _n$ $(n \ge 3)$ case}\label{4-6} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast \ast)$, $(s,t) = (1,0)$ and $\beta ' = 1$. Namely, $-m_{\infty} = m_0 - \alpha - (\beta -1)$. Note that the configuration of the $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ is given as in Figure \ref{Case6}. \begin{figure}\begin{center}\scalebox{0.7} {\begin{tikzpicture} \draw [very thick] (-1,0)--(7,0); \node at (-1.4,0) {$\wD _0$}; \node at (1.75,3) {\Large $\cdots \cdots$}; \draw [very thick] (-1,5.5)--(4.5,5.5); \draw [very thick] (5,4.5)--(7,4.5); \draw [very thick] (4.5,5.5) .. controls (4.75,5.5) and (4.75,4.5) .. 
(5,4.5); \node at (-1.4,5.5) {$\wD _{\infty}$}; \node at (0,-1.5) {}; \node at (7.6,0) {$-m_0$}; \node at (7.6,4.5) {$-m_{\infty}$}; \draw [dashed] (0.5,-0.25)--(0,1); \draw (0,0.75)--(0.5,2); \draw (0.5,1.75)--(0,3); \draw (0,3.75)--(0.5,5); \draw [dashed] (0.5,4.75)--(0,6); \node at (0,3.5) {$\vdots$}; \node at (0.5,-0.6) {\footnotesize $\wE _1'$}; \node at (0.7,1.25) {\footnotesize $\wD _{1,1}$}; \node at (-0.1,2.25) {\footnotesize $\wD _{1,2}$}; \node at (-0.5,4.3) {\footnotesize $\wD _{1,\alpha _1-1}$}; \node at (0,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (4,-0.25)--(3.5,1); \draw (3.5,0.75)--(4,2); \draw (4,1.75)--(3.5,3); \draw (3.5,3.75)--(4,5); \draw [dashed] (4,4.75)--(3.5,6); \node at (3.5,3.5) {$\vdots$}; \node at (4,-0.6) {\footnotesize $\wE _r'$}; \node at (4.2,1.25) {\footnotesize $\wD _{r,1}$}; \node at (3.4,2.25) {\footnotesize $\wD _{r,2}$}; \node at (3,4.3) {\footnotesize $\wD _{r,\alpha _r-1}$}; \node at (3.5,6.3) {\footnotesize $\wE _r$}; \draw (5.5,-0.25)--(6.5,1.5); \draw [dashed] (6.5,0.25)--(5.5,1); \draw (6.5,1.25)--(5.5,3); \draw (5.5,3.75)--(6.5,5.5); \draw [dashed] (6.5,4.75)--(5.5,6); \node at (5.5,3.4) {$\vdots$}; \node at (5.5,-0.6) {\footnotesize $\wD_ {r+1,0}$}; \node at (6.7,0.6) {\footnotesize $\wE _{r+1}'$}; \node at (6.5,2.5) {\footnotesize $\wD _{r+1,1}$}; \node at (6.5,4) {\footnotesize $\wD _{r+1,\beta -1}$}; \node at (5.5,6.3) {\footnotesize $\wE _{r+1}$}; \end{tikzpicture}} \caption{The configuration of $g : \wS \to \bP ^1_{\bk}$ in Subsection \ref{4-6}.}\label{Case6} \end{center}\end{figure} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(3-6)} With the same notations and assumptions as above, we have the following: \begin{enumerate} \item If $\beta = 1$, then $\Ampc (S) = \Amp (S)$. \item If $\beta \ge 2$ and $m_0=2$, then $\Ampc (S) = \Amp (S)$. \end{enumerate} \end{prop} In what follows, we shall prove the above result. Let $H \in \Amp (S)$. Since $\Amp (S)$ is contained in $\Cl (S)_{\bQ} = \bigoplus _{i=1}^{r+1} \bQ [E_i]$, we can write: \begin{align*} H \sim _{\bQ} \sum _{i=1}^r a_iE_i + bE_{r+1} \end{align*} for some rational numbers $a_1,\dots ,a_r,b$. Without loss of generality, we may assume $\frac{a_1}{\alpha _1} \le \frac{a_2}{\alpha _2} \le \dots \le \frac{a_r}{\alpha _r}$. At first, we consider the case $\beta = 1$. Since $\alpha = m_0+m_{\infty}$ and $m_{\infty} > 0$, we have $\alpha _1 + \dots + \alpha _r \ge m_0$. We set $r' := \min \{ i \in \{ 1,\dots ,r\} \,|\, \alpha _1 + \dots + \alpha _i \ge m_0\}$. For simplicity, we put $\alpha ' := \sum _{i=1}^{r'-1}\alpha _i$, where $\alpha ' = 0$ provided $r'=1$. Then we obtain the following lemma: \begin{lem}\label{lem(3-6-1)} Assume that $\beta = 1$. Then the following assertions hold: \begin{enumerate} \item $a_1+\dots +a_i + \frac{(m_0-\alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}} > 0$ for every $i=1,\dots ,r'-1$. Moreover, $\frac{a_{r'}}{\alpha _{r'}} > 0$. \item $a_1+\dots +a_i + \frac{(m_0-\alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}} + b > 0$ for every $i=1,\dots ,r'-1$. Moreover, $\frac{m_0a_{r'}}{\alpha _{r'}} + b > 0$. 
\end{enumerate} \end{lem} \begin{proof} Consider the following divisors on $\wS$: \begin{align*} \wDelta &:= \wD _0 + m_0\wF - \sum _{i=1}^{r'} \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} + \sum _{\mu = m_0-\alpha '}^{\alpha _{r'}}(\mu - m_0 + \alpha ')\wD _{r',\mu} - \wE _{r+1}' ,\\ \wDelta' &:= \wD _0 + m_0\wF - \sum _{i=1}^{r'} \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} + \sum _{\mu = m_0-\alpha '}^{\alpha _{r'}}(\mu - m_0 + \alpha ')\wD _{r',\mu} - \wE _{r+1} .\\ \end{align*} Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Similarly, we also obtain $\dim |\wDelta '| \ge 0$. We only show the assertion (1). Indeed, the assertion (2) can be shown by the same argument replacing $\wDelta$ by $\wDelta '$ in (1). At first, we shall show the first assertion with $i=r'-1$. Since $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = (\wDelta \cdot \wD _{r+1,0}) = (\wDelta \cdot \wD _{r+1,1}) = \dots = (\wDelta \cdot \wD _{r+1,\beta - 1}) = 0$, we have: \begin{align*} (E _i \cdot f_{\ast}(\wDelta)) = \left\{ \begin{array}{cc} 1 & \text{if}\ i <r' \\ \frac{m_0-\alpha '}{\alpha _{r'}} & \text{if}\ i =r' \\ 0 & \text{if}\ i >r' \end{array} \right. \end{align*} for $i=1,\dots ,r,r+1$. Hence: \begin{align*} 0 < (H \cdot f_{\ast}(\wDelta)) = \sum _{i=1}^ra_i(E_i \cdot f_{\ast}(\wDelta)) + b(E_{r+1} \cdot f_{\ast}(\wDelta)) = a_1 + \dots + a_{r'-1} + \frac{(m_0-\alpha ')a_{r'}}{\alpha _{r'}} . \end{align*} In what follows, we shall show the first assertion for the general case. By using the above result combined with $\frac{a_{i+1}}{\alpha _{i+1}} \le \dots \le \frac{a_{r'-1}}{\alpha _{r'-1}} \le \frac{a_{r'}}{\alpha _{r'}}$, we have: \begin{align*} 0 &< a_1 + \dots + a_{r'-1} + \frac{(m_0-\alpha ')a_{r'}}{\alpha _{r'}} \\ &\le a_1 + \dots +a_i + \frac{(\alpha _{i+1} + \dots + \alpha _{r'-1} + m_0 - \alpha ')a_{r'}}{\alpha _{r'}} \\ &= a_1 + \dots +a_i + \frac{(m_0 - \alpha _1 -\dots - \alpha _i)a_{r'}}{\alpha _{r'}}. \end{align*} We shall prove the remaining assertion. Similarly, we have: \begin{align*} 0 < a_1 + \dots + a_{r'-1} + \frac{(m_0-\alpha ')a_{r'}}{\alpha _{r'}} \le \frac{m_0a_{r'}}{\alpha _{r'}}. \end{align*} By virtue of $m_0 > 0$, we thus obtain $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{proof} Then the following result holds: \begin{lem}\label{lem(3-6-2)} If $\beta = 1$, then $H \in \Ampc (S)$. \end{lem} \begin{proof} Since $\beta = 1$, we note $F \sim _{\bQ} E_{r+1} + E_{r+1}'$ and $m_0F \sim _{\bQ} \sum _{i=1}^r\alpha _iE_i$ because $\wF \sim \wE _{r+1}' + \wD _{r+1,0} + \wE _{r+1}$ and $\wD _{\infty} \sim \wD _0 + m_0\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda}$. Suppose that $\frac{a_1}{\alpha _1} = \frac{a_{r'}}{\alpha _{r'}}$. Note that $\frac{a_{r'}}{\alpha _{r'}} > 0$ and $b + \frac{m_0a_{r'}}{\alpha _{r'}} > 0$ by Lemma \ref{lem(3-6-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{a_{r'}}{\alpha _{r'}}$, we take the effective $\bQ$-divisor: \begin{align*} D := \sum _{i=1}^r\alpha _i \left( \frac{a_i}{\alpha _i} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_i + \left\{ b + m_0 \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right)\right\} E_{r+1} + m_0 \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right)E_{r+1}' \end{align*} on $S$. 
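As in the previous subsection, the $\bQ$-linear equivalence $H \sim _{\bQ} D$ used in the next step can be verified directly from the two relations recorded at the beginning of this proof: putting $c := \frac{a_{r'}}{\alpha _{r'}} - \varepsilon$, we have: \begin{align*} D &= \sum _{i=1}^r a_iE_i + bE_{r+1} - c\sum _{i=1}^r \alpha _iE_i + m_0c \left( E_{r+1} + E_{r+1}' \right) \\ &\sim _{\bQ} \sum _{i=1}^r a_iE_i + bE_{r+1} - c\sum _{i=1}^r \alpha _iE_i + c\, m_0F \sim _{\bQ} \sum _{i=1}^r a_iE_i + bE_{r+1} \sim _{\bQ} H . \end{align*}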
Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. In what follows, we can assume that $\frac{a_1}{\alpha _1} < \frac{a_{r'}}{\alpha _{r'}}$. Put $r'' := \max \{ i \in \{ 1,\dots ,r'-1\} \,|\, \frac{a_i}{\alpha _i} < \frac{a_{r'}}{\alpha _{r'}} \}$. For simplicity, we put $\alpha '' := \sum _{i=1}^{r''}\alpha _i$ and $a'' := \sum _{i=1}^{r''}a_i$. Then we note $m_0 - \alpha '' > 0$. Moreover, by using Lemma \ref{lem(3-6-1)} we have: \begin{align*} \frac{1}{m_0-\alpha''}\left( a'' +b + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right) > 0 \end{align*} and: \begin{align*} \frac{1}{m_0-\alpha''}\left( a'' + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right) > 0. \end{align*} Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \min \left\{ \frac{a_{r'}}{\alpha _{r'}} - \frac{a_1}{\alpha _1},\ \frac{1}{m_0-\alpha''}\left( a'' + b + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right),\ \frac{1}{m_0-\alpha''}\left( a'' + \frac{(m_0-\alpha'')a_{r'}}{\alpha _{r'}} \right) \right\} , \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D &:= \sum _{i=1}^{r''} \alpha _i \left( \frac{a_{r'}}{\alpha _{r'}} -\frac{a_i}{\alpha _i} - \varepsilon \right)E_i' + \sum _{j=r''+1}^r\alpha _j \left( \frac{a_j}{\alpha _j} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_j \\ &\qquad + \left\{ a'' + b + (m_0 - \alpha '') \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right) \right\} E_{r+1} + \left\{ a'' + (m_0 - \alpha '') \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right) \right\} E_{r+1}' \end{align*} on $S$. Then $H \sim _{\bQ} D$. Moreover, we know: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^{r''} (\wF _i - \wE _i) + \sum _{j=r''+1}^r (\wF _j - \wE _j') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. \end{proof} From now on, we thus consider the case $\beta \ge 2$. Assume $m_0=2$ in what follows. Then we obtain the following lemma: \begin{lem}\label{lem(3-6-3)} The following assertions hold: \begin{enumerate} \item If $\alpha _1 > 1$, then $a_1>0$. \item If $\alpha > 1$ and $\alpha _1 = 1$, then $a_1+\frac{a_2}{\alpha _2}>0$. \item If $\beta \ge 3$, then $b>0$. \item If $\beta \ge 2$, then $\frac{(\beta -1)a_1}{\alpha _1} + b > 0$. \end{enumerate} \end{lem} \begin{proof} In (1), consider the divisor $\wDelta := \wD _0 + 2\wF - \wD _{1,1} - \sum _{\lambda=2}^{\alpha _1}2\wD _{1,\lambda} - \wE _{r+1}'$. Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Hence, we have $0 < (H \cdot f_{\ast}(\wDelta)) = \frac{2a_1}{\alpha _1}$. Namely, we obtain $a_1>0$. In (2), consider the divisor $\wDelta := \wD _0 + 2\wF - \wE _1 - \sum _{\lambda = 1}^{\alpha _2}\wD _{2,\lambda} - \wE _{r+1}'$. Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Hence, we have $0 < (H \cdot f_{\ast}(\wDelta)) = a_1 + \frac{a_2}{\alpha _2}$. In (3), we first assume $\beta = 3$.
Then consider the divisor $\wDelta := \wD _0 + 2\wF - \sum _{\mu = 1}^3\mu \wD_{r+1,\mu}$. Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta | \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Hence, we have $0 < (H \cdot f_{\ast}(\wDelta)) = b$ because $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = 0$. In what follows, we can assume $\beta \ge 4$. Then consider the divisor $\wDelta := (\beta -1)\wD _0 + 2(\beta -1)\wF - \sum _{\mu =1}^{\beta} 2\mu \wD _{r+1,\mu} - (\beta -3)\wE _{r+1}'$. Since $(\wDelta \cdot K_{\wS} - \wF) = -2(\beta - 1) < 0$, $(\wDelta )^2 = \beta ^2 -2\beta - 7$ and $(\wDelta \cdot -K_{\wS}) = \beta - 1$, we obtain $\dim |\wDelta| \ge \frac{1}{2}(\beta ^2 - \beta - 8) \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. Hence, we have $0 < \frac{1}{2}(H \cdot f_{\ast}(\wDelta)) = b$ because $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = 0$. In (4), consider the divisor $\wDelta := (\beta -1)\wD _0 + 2(\beta -1)\wF - \sum _{\lambda = 1}^{\alpha _1}(\beta -1)\wD _{1,\lambda} - \sum _{\mu =1}^{\beta} \mu \wD _{r+1,\mu} - (\beta -2)\wE _{r+1}'$. Since $(\wDelta \cdot K_{\wS} - \wF) = -2(\beta - 1) < 0$, $(\wDelta )^2 = \beta - 3$ and $(\wDelta \cdot -K_{\wS}) = \beta - 1$, we obtain $\dim |\wDelta| \ge \beta - 2 \ge 0$ by the Riemann-Roch theorem. Hence, we have $0 < (H \cdot f_{\ast}(\wDelta)) = \frac{(\beta -1)a_1}{\alpha _1} + b$ because $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = 0$. \end{proof} We note $E_{r+1}' \sim _{\bQ} F - E_{r+1}$ and $2F \sim _{\bQ} \sum _{i=1}^r\alpha _iE_i + (\beta - 1)E_{r+1}$ because $\wF \sim \wE _{r+1}' + \sum _{\mu =1}^{\beta}\mu \wD_{r+1,\mu}$ and $\wD _{\infty} \sim \wD _0 + 2\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i}\lambda \wD _{i,\lambda} - \sum _{\mu = 1}^{\beta}\mu \wD _{r+1,\mu} + \wE _{r+1}$. Then the following result holds: \begin{lem}\label{lem(3-6-4)} If $\beta \ge 2$ and $m_0=2$, then $H \in \Ampc (S)$. \end{lem} \begin{proof} We consider the following two cases separately. \smallskip \noindent {\bf Case 1:} ($a_1>0$). In this case, we note $\frac{a_1}{\alpha _1} > 0$. We consider the following two subcases separately. \smallskip \noindent {\bf Subcase 1-1:} ($\beta = 2$). In this subcase, we note $\frac{a_1}{\alpha _1}+b>0$ by Lemma \ref{lem(3-6-3)} (4). Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \min \left\{ \frac{a_1}{\alpha _1} + b, \frac{a_1}{\alpha _1} \right\}$, we take the effective $\bQ$-divisor: \begin{align*} D := \sum _{i=1}^r \alpha _i \left( \frac{a_i}{\alpha _i} - \frac{a_1}{\alpha _1} + \varepsilon \right) E_i + \left( \frac{a_1}{\alpha _1} + b -\varepsilon \right) E_{r+1} + 2 \left( \frac{a_1}{\alpha _1} - \varepsilon \right) E_{r+1}' \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. \smallskip \noindent {\bf Subcase 1-2:} ($\beta \ge 3$). In this subcase, we note $b>0$ by Lemma \ref{lem(3-6-3)} (3). Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \left\{ \begin{array}{ll} \frac{a_1}{\alpha _1} & \text{if}\ \beta = 3\\ \min \left\{ \frac{a_1}{\alpha _1},\ \frac{b}{\beta-3} \right\} & \text{if}\ \beta > 3 \end{array}\right.
, \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D := \sum _{i=1}^r \alpha _i \left( \frac{a_i}{\alpha _i} - \varepsilon \right) E_i + \{b - (\beta -3)\varepsilon \} E_{r+1} + 2 \varepsilon E_{r+1}' \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \sum _{i=1}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. \smallskip \noindent {\bf Case 2:} ($a_1 \le 0$). In this case, we notice $\alpha _1 = 1$ by Lemma \ref{lem(3-6-3)} (1). In particular, $(\beta -1)a_1+b>0$ by using Lemma \ref{lem(3-6-3)} (4). We consider the following two subcases separately. \smallskip \noindent {\bf Subcase 2-1:} ($\alpha = 1$). In this subcase, we notice $r=1$ and $\beta \ge 4$. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{(\beta - 1)a_1+b}{\beta -2}$, we take the effective $\bQ$-divisor: \begin{align*} D := (-2a_1+\varepsilon)E_1' + \{(\beta -1)a_1+b - (\beta -2) \varepsilon \}E_2 + \varepsilon E_2' \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \wE _1' + \wF _2 \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. \smallskip \noindent {\bf Subcase 2-2:} ($\alpha > 1$). In this subcase, we have $a_1+\frac{a_2}{\alpha _2} > 0$ by Lemma \ref{lem(3-6-3)} (2). Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \left\{ \begin{array}{ll} a_1+\frac{a_2}{\alpha _2} & \text{if}\ \beta = 2 \\ \min \left\{ a_1+\frac{a_2}{\alpha _2},\ \frac{(\beta - 1)a_1+b}{\beta -2} \right\} & \text{if}\ \beta \ge 3 \end{array}\right. , \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D := (-2a_1+\varepsilon)E_1' + \sum _{i=2}^r \alpha _i \left( a_1 + \frac{a_i}{\alpha _i} - \varepsilon \right) E_i + \{(\beta -1)a_1+b - (\beta -2) \varepsilon \}E_{r+1} + \varepsilon E_{r+1}' \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \wE _1' + \sum _{i=2}^r (\wF _i - \wE _i') + \wF _{r+1} \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (2). Thus, $H \in \Ampc (S)$. \smallskip Lemma \ref{lem(3-6-4)} is thus verified. \end{proof} Proposition \ref{prop(3-6)} follows from Lemmas \ref{lem(3-6-2)} and \ref{lem(3-6-4)}. The proof of Proposition \ref{prop(3-6)} is thus completed. \subsection{Type $\sA_2$ case}\label{4-7} In this subsection, we keep the notation from \S \S \ref{4-1} and assume further that $g$ satisfies $(\ast \ast)$ and $(s,t) = (0,0)$. Namely, $-m_{\infty} = (m_0+2) - \alpha$. Note that the configuration of the $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ is given as in Figure \ref{Case5}. 
\begin{figure}\begin{center}\scalebox{0.7} {\begin{tikzpicture} \draw [very thick] (-1,0)--(7,0); \node at (-1.4,0) {$\wD _0$}; \node at (4.25,3) {\Large $\cdots \cdots$}; \draw [very thick] (1.25,5.5)--(7,5.5); \draw [very thick] (-0.5,-0.25)--(1.25,5.5); \node at (0.6,5) {$\wD _{\infty}$}; \node at (7.6,0) {$-m_0$}; \node at (7.6,5.5) {$-m_{\infty}$}; \draw [dashed] (3,-0.25)--(2.5,1); \draw (2.5,0.75)--(3,2); \draw (3,1.75)--(2.5,3); \draw (2.5,3.75)--(3,5); \draw [dashed] (3,4.75)--(2.5,6); \node at (2.5,3.5) {$\vdots$}; \node at (3,-0.6) {\footnotesize $\wE _1'$}; \node at (3.2,1.25) {\footnotesize $\wD _{1,1}$}; \node at (2.4,2.25) {\footnotesize $\wD _{1,2}$}; \node at (2,4.3) {\footnotesize $\wD _{1,\alpha _1-1}$}; \node at (2.5,6.3) {\footnotesize $\wE _1$}; \draw [dashed] (6.54,-0.25)--(6,1); \draw (6,0.75)--(6.5,2); \draw (6.5,1.75)--(6,3); \draw (6,3.75)--(6.5,5); \draw [dashed] (6.5,4.75)--(6,6); \node at (6,3.5) {$\vdots$}; \node at (6.5,-0.6) {\footnotesize $\wE _r'$}; \node at (6.7,1.25) {\footnotesize $\wD _{r,1}$}; \node at (5.9,2.25) {\footnotesize $\wD _{r,2}$}; \node at (5.5,4.3) {\footnotesize $\wD _{r,\alpha _r-1}$}; \node at (6,6.3) {\footnotesize $\wE _r$}; \end{tikzpicture}} \caption{The configuration of $g : \wS \to \bP ^1_{\bk}$ in Subsection \ref{4-7}.}\label{Case5} \end{center}\end{figure} Then we have the following result, which is the main result of this subsection: \begin{prop}\label{prop(3-5)} With the same notations and assumptions as above, we have $\Ampc (S) = \Amp (S)$. \end{prop} In what follows, we shall prove the above result. Let $H \in \Amp (S)$. Since $\Amp (S)$ is contained in $\Cl (S)_{\bQ} = \bigoplus _{i=1}^r \bQ [E_i]$, we can write: \begin{align*} H \sim _{\bQ} \sum _{i=1}^r a_iE_i \end{align*} for some rational numbers $a_1,\dots ,a_r$. Without loss of generality, we may assume $\frac{a_1}{\alpha _1} \le \frac{a_2}{\alpha _2} \le \dots \le \frac{a_r}{\alpha _r}$. Since $\alpha - 2 = m_0+m_{\infty}$ and $m_{\infty}>0$, we have $\alpha _1 + \dots + \alpha _r \ge m_0+1$. We set $r' := \min \{ i \in \{ 1,\dots ,r\} \,|\, \alpha _1 + \dots + \alpha _i \ge m_0+1\}$. For simplicity, we put $\alpha ' := \sum _{i=1}^{r'-1}\alpha _i$, where $\alpha ' = 0$ provided $r'=1$. Then the following lemma holds: \begin{lem}\label{lem(3-5-1)} $a_1+\dots +a_i + \frac{(m_0 + 1 -\alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}} > 0$ for every $i=1,\dots ,r'-1$. Moreover, $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{lem} \begin{proof} Consider the following divisor on $\wS$: \begin{align*} \wDelta := \wD _0 + m_0\wF - \sum _{i=1}^{r'} \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda} + \sum _{\mu = m_0-\alpha '+1}^{\alpha _{r'}}(\mu - m_0 -1 + \alpha ')\wD _{r',\mu}. \end{align*} Since $(\wDelta \cdot K_{\wS}-\wF) = -3 <0$, $(\wDelta)^2 = -1$ and $(\wDelta \cdot -K_{\wS}) = 1$, we obtain $\dim |\wDelta| \ge 0$ by the Riemann-Roch theorem and rationality of $\wS$. At first, we shall show the first assertion with $i=r'-1$. Since $(\wDelta \cdot \wD _0) = (\wDelta \cdot \wD _{\infty}) = 0$, we have: \begin{align*} (E _i \cdot f_{\ast}(\wDelta)) = \left\{ \begin{array}{cc} 1 & \text{if}\ i <r' \\ \frac{m_0+1-\alpha '}{\alpha _{r'}} & \text{if}\ i =r' \\ 0 & \text{if}\ i >r' \end{array} \right. \end{align*} for $i=1,\dots ,r$. Hence: \begin{align*} 0 < (H \cdot f_{\ast}(\wDelta)) = \sum _{i=1}^ra_i(E_i \cdot f_{\ast}(\wDelta)) = a_1 + \dots + a_{r'-1} + \frac{(m_0+1-\alpha ')a_{r'}}{\alpha _{r'}}. 
\end{align*} In what follows, we shall show the first assertion for the general case. By using the above result combined with $\frac{a_{i+1}}{\alpha _{i+1}} \le \dots \le \frac{a_{r'-1}}{\alpha _{r'-1}} \le \frac{a_{r'}}{\alpha _{r'}}$, we have: \begin{align*} 0 &< a_1+\dots +a_{r'-1} + \frac{(m_0+1-\alpha')a_{r'}}{\alpha _{r'}} \\ &\le a_1 + \dots +a_i + \frac{(\alpha _{i+1} + \dots + \alpha _{r'-1} + m_0 + 1 - \alpha ')a_{r'}}{\alpha _{r'}} \\ &= a_1 + \dots +a_i + \frac{(m_0 + 1 - \alpha _1 - \dots - \alpha _i)a_{r'}}{\alpha _{r'}}. \end{align*} We shall prove the remaining assertion. Similarly, we have: \begin{align*} 0 < a_1+\dots +a_{r'-1} + \frac{(m_0+1-\alpha')a_{r'}}{\alpha _{r'}} \le \frac{(m_0+1)a_{r'}}{\alpha _{r'}}. \end{align*} By virtue of $m_0+1 \ge 3$, we thus obtain $\frac{a_{r'}}{\alpha _{r'}} > 0$. \end{proof} Note $(m_0+1)F \sim _{\bQ} \sum _{i=1}^r \alpha _i E_i$ because $\wD _{\infty} \sim \wD _0 + (m_0+1)\wF - \sum _{i=1}^r \sum _{\lambda = 1}^{\alpha _i} \lambda \wD _{i,\lambda}$. Since $(\wD _0 \cdot \wD _{\infty}) = 1$, there exists a unique fiber $\wF _0$ of $g$ such that $\wD _0 \cap \wD _{\infty} \cap \wF _0 \not= \emptyset$. Put $F_0 := f_{\ast}(\wF _0)$. Assume $\frac{a_1}{\alpha _1} = \frac{a_{r'}}{\alpha _{r'}}$. Note that $\frac{a_{r'}}{\alpha _{r'}} > 0$ by Lemma \ref{lem(3-5-1)}. Letting $\varepsilon$ be a positive rational number satisfying $\varepsilon < \frac{a_{r'}}{\alpha _{r'}}$, we take the effective $\bQ$-divisor: \begin{align*} D := (m_0+1) \left( \frac{a_{r'}}{\alpha _{r'}} - \varepsilon \right)F_0 + \sum _{i=1}^r\alpha _i \left( \frac{a_i}{\alpha _i} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_i \end{align*} on $S$. Then we know $H \sim _{\bQ} D$ and: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \wF _0 + \sum _{i=1}^r (\wF _i - \wE _i') \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (3). Thus, $H \in \Ampc (S)$. In what follows, we can assume $\frac{a_1}{\alpha _1} < \frac{a_{r'}}{\alpha _{r'}}$. Put $r'' := \max \{ i \in \{ 1,\dots ,r'-1\} \,|\, \frac{a_i}{\alpha _i} < \frac{a_{r'}}{\alpha _{r'}} \}$. For simplicity, we put $\alpha '' := \sum _{i=1}^{r''}\alpha _i$ and $a'' := \sum _{i=1}^{r''}a_i$. Then we note $m_0 + 1 - \alpha '' > 0$. Moreover, by Lemma \ref{lem(3-5-1)} we have: \begin{align*} \frac{1}{m_0+1-\alpha''}\left( a'' + \frac{(m_0+1-\alpha'')a_{r'}}{\alpha _{r'}} \right) > 0. \end{align*} Letting $\varepsilon$ be a positive rational number satisfying: \begin{align*} \varepsilon < \min \left\{ \frac{a_{r'}}{\alpha _{r'}} - \frac{a_1}{\alpha _1},\ \frac{1}{m_0+1-\alpha''}\left( a'' + \frac{(m_0+1-\alpha'')a_{r'}}{\alpha _{r'}} \right) \right\} , \end{align*} we take the effective $\bQ$-divisor: \begin{align*} D &:= \left\{ a'' + (m_0+1-\alpha '') \left( \frac{a_{r'}}{\alpha _{r'}}-\varepsilon \right) \right\} F_0 \\ &\qquad + \sum _{i=1}^{r''} \alpha _i \left( \frac{a_{r'}}{\alpha _{r'}} -\frac{a_i}{\alpha _i} - \varepsilon \right)E_i' + \sum _{j=r''+1}^r\alpha _j \left( \frac{a_j}{\alpha _j} - \frac{a_{r'}}{\alpha _{r'}} + \varepsilon \right)E_j \end{align*} on $S$. Then $H \sim _{\bQ} D$. Moreover, we know: \begin{align*} S \backslash \Supp (D) \simeq \wS \backslash \Supp \left( \wD _0 + \wD _{\infty} + \wF _0 + \sum _{i=1}^{r''} (\wF _i - \wE _i) + \sum _{j=r''+1}^r (\wF _j - \wE _j') \right) \simeq \bA ^1_{\bk} \times \bA ^1_{\ast , \bk} \end{align*} by Lemma \ref{lem(2-1-1)} (3). Thus, $H \in \Ampc (S)$. 
The proof of Proposition \ref{prop(3-5)} is thus completed. \section{Proof of Theorem \ref{main(1)}}\label{5} In this section, we shall prove Theorem \ref{main(1)}. Let $S$ be a Du Val del Pezzo surface of degree $d \ge 2$. If $\Ampc (S) = \Amp (S)$, then we obtain that $-K_S \in \Ampc (S)$ because $-K_S$ is ample. From now on, we assume that $-K_S \in \Ampc (S)$. If $d \ge 3$, then we obtain that $\Ampc (S) = \Amp (S)$ by {\cite{Saw1}}. Hence, we can assume further that $d=2$. If $\rho (S) = 1$, then we know that $\Ampc (S) = \Amp (S) = \bQ _{>0}[-K_S]$. In what follows, we thus assume that $\rho (S) \ge 2$. Let $f:\wS \to S$ be the minimal resolution, and let $\wD$ be the reduced exceptional divisor of $f$. \begin{lem}\label{lem(4-1)} If $S$ has a non-cyclic quotient singular point that is not of type $\sD _5$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} By assumption and $\rho (S) \ge 2$, ${\rm Dyn}(S) = \sD _4$, $\sD _4+\sA _1$, $\sD _4+2\sA _1$, $\sD _6$ or $\sE _6$. Indeed, it can be seen from the classification of Du Val del Pezzo surfaces of degree $2$ (see, e.g., {\cite{Ura81}} or {\cite[\S 8.7]{Dol12}}). We proceed according to the Dynkin type of $S$. In the case ${\rm Dyn}(S) = \sD _4$, by the configuration of this Du Val del Pezzo surface (cf.\ {\cite[p.\ 1220]{CPW16b}}), we can find a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ such that exactly one $(-2)$-curve is a section of $g$ and the other $(-2)$-curves are fiber components of $g$ (see Figure \ref{fig(4-1)} (1)). In particular, $g$ satisfies $(\ast )$. Moreover, we have $(\alpha , \beta ,\beta' ,\gamma) = (0,3,3,0)$ provided that we use notations from \S \S \ref{4-1}. In the case ${\rm Dyn}(S) = \sD _4 + \sA _1$, by the configuration of this Du Val del Pezzo surface (cf.\ {\cite[p.\ 1217]{CPW16b}}), we can find a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ such that exactly one $(-2)$-curve is a section of $g$ and the other $(-2)$-curves are fiber components of $g$ (see Figure \ref{fig(4-1)} (2)). In particular, $g$ satisfies $(\ast )$. Moreover, we have $(\alpha , \beta ,\beta' ,\gamma) = (0,2,2,2)$ provided that we use notations from \S \S \ref{4-1}. In the case ${\rm Dyn}(S) = \sD _4 + 2\sA _1$, by the configuration of this Du Val del Pezzo surface (cf.\ {\cite[p.\ 1217]{CPW16b}}), we can find a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ such that exactly one $(-2)$-curve is a section of $g$ and the other $(-2)$-curves are fiber components of $g$ (see Figure \ref{fig(4-1)} (3)). In particular, $g$ satisfies $(\ast )$. Moreover, we have $(\alpha , \beta ,\beta' ,\gamma) = (0,1,1,4)$ provided that we use notations from \S \S \ref{4-1}. In the case ${\rm Dyn}(S) = \sD _6$, by the configuration of this Du Val del Pezzo surface (cf.\ {\cite[p.\ 1216]{CPW16b}}), we can find a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ such that exactly one $(-2)$-curve is a section of $g$ and the other $(-2)$-curves are fiber components of $g$ (see Figure \ref{fig(4-1)} (4)). In particular, $g$ satisfies $(\ast )$. Moreover, we have $(\alpha , \beta ,\beta' ,\gamma) = (0,1,1,4)$ provided that we use notations from \S \S \ref{4-1}. In the case ${\rm Dyn}(S) = \sE _6$, by the configuration of this Du Val del Pezzo surface (cf.\ {\cite[p.\ 1216]{CPW16b}}), we can find a $\bP ^1$-fibration $g:\wS \to \bP ^1_{\bk}$ such that exactly one $(-2)$-curve is a section of $g$ and the other $(-2)$-curves are fiber components of $g$ (see Figure \ref{fig(4-1)} (5)). In particular, $g$ satisfies $(\ast )$.
Moreover, we have $(\alpha , \beta ,\beta' ,\gamma) = (1,0,0,5)$ provided that we use notations from \S \S \ref{4-1}. Hence, in every case, we obtain $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(4-1)}. \end{proof} \begin{figure} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.6}{\begin{tikzpicture} \node at (-1,6) {\Large (1)}; \node at (-1,-2) {\ }; \node at (0.5,0) {\Large $\ast$}; \draw [thick] (0,0)--(5,0); \draw [thick] (1,-0.5)--(1,5); \draw [thick] (4,-0.5)--(4,5); \draw [thick] (2.5,-0.5)--(2.5,5); \draw [thick,dashed] (0.5,4)--(1.5,4); \draw [thick,dashed] (2,4)--(3,4); \draw [thick,dashed] (3.5,4)--(4.5,4); \draw [thick,dashed] (0.5,2)--(1.5,2); \draw [thick,dashed] (2,2)--(3,2); \draw [thick,dashed] (3.5,2)--(4.5,2); \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6) {\Large (2)}; \node at (-1,-2) {\ }; \draw [thick] (0,0)--(5,0); \node at (0.5,0) {\Large $\ast$}; \draw [thick] (1,-0.5)--(1,5); \draw [thick] (2.5,-0.5)--(2.5,5); \draw [thick] (3.7,-0.5)--(4.3,1.5); \draw [thick] (3.7,5)--(4.3,3); \draw [thick,dashed] (4.15,0.75)--(4.15,3.75); \draw [thick,dashed] (0.5,4)--(1.5,4); \draw [thick,dashed] (2,4)--(3,4); \draw [thick,dashed] (0.5,2)--(1.5,2); \draw [thick,dashed] (2,2)--(3,2); \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6) {\Large (3)}; \node at (-1,-2) {\ }; \draw [thick] (0,0)--(5,0); \node at (0.5,0) {\Large $\ast$}; \draw [thick] (1,-0.5)--(1,5); \draw [thick] (3.7,-0.5)--(4.3,1.5); \draw [thick] (3.7,5)--(4.3,3); \draw [thick,dashed] (4.15,0.75)--(4.15,3.75); \draw [thick] (2.2,-0.5)--(2.8,1.5); \draw [thick] (2.2,5)--(2.8,3); \draw [thick,dashed] (2.65,0.75)--(2.65,3.75); \draw [thick,dashed] (0.5,4)--(1.5,4); \draw [thick,dashed] (0.5,2)--(1.5,2); \end{tikzpicture}}\end{center}\end{minipage} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.6}{\begin{tikzpicture} \node at (-1,6) {\Large (4)}; \node at (-1,-2) {\ }; \draw [thick] (0,0)--(7,0); \node at (0.5,0) {\Large $\ast$}; \draw [thick] (1,-0.5)--(1,5); \draw [thick] (5.5,-0.5)--(6.3,1.5); \draw [thick] (5.5,5)--(6.3,3); \draw [thick] (6.15,0.75)--(6.15,3.75); \draw [thick,dashed] (0,4)--(2,4); \draw [thick] (4,2.25)--(7,2.25); \draw [thick,dashed] (5,0.75)--(5,3.75); \draw [thick,dashed] (0,2)--(2,2); \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6) {\Large (5)}; \node at (-1,-2) {\ }; \draw [thick] (0,0)--(7,0); \node at (0.5,0) {\Large $\ast$}; \draw [thick,dashed] (1.3,-0.5)--(0.7,3); \draw [thick,dashed] (0.7,2)--(1.3,5); \draw [thick] (5.5,-0.5)--(6.3,1.5); \draw [thick] (5.5,5)--(6.3,3); \draw [thick] (6.15,0.75)--(6.15,3.75); \draw [thick] (4.75,2)--(7,2.5); \draw [thick] (5.75,2)--(3.25,2.5); \draw [thick,dashed] (1.75,2)--(4.25,2.5); \end{tikzpicture}} \caption{Lemma \ref{lem(4-1)}; A dotted line (resp. a solid line) stands for a $(-1)$-curve (resp. a $(-2)$-curve); A line with $\ast$ means a non-fiber component of the $\bP ^1$-fibrations from $\wS$. 
}\label{fig(4-1)} \end{center}\end{minipage} \end{figure} \begin{figure} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.6}{\begin{tikzpicture} \node at (-1,6.5) {\large (a)}; \node at (-1,-2) {\ }; \node at (0.5,0) {\Large $\ast$}; \draw [thick] (0,0)--(5,0); \node at (-0.4,0) {$\wD _5$}; \draw [thick] (2,-0.25)--(4,2); \draw [thick] (3.5,1)--(3.5,5); \draw [thick] (4,3.75)--(2,6); \node at (1.75,-0.6) {$\wD _4$}; \node at (1.75,6.3) {$\wD _2$}; \node at (3.5,5.5) {$\wD _3$}; \draw [thick] (0,3)--(4,3); \draw [thick,dashed] (1,1.5)--(1,4.5); \node at (-0.4,3) {$\wD _1$}; \node at (1,5) {$\wE _1$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6.5) {\large (b)}; \node at (-1,-2) {\ }; \node at (0.5,0) {\Large $\ast$}; \node at (0.5,5.75) {\Large $\ast$}; \draw [thick] (0,5.75)--(5,5.75); \draw [thick] (0,0)--(5,0); \node at (-0.4,5.75) {$\wD _1$}; \node at (-0.4,0) {$\wD _5$}; \draw [thick] (2,-0.25)--(4,2); \draw [thick] (3.5,1)--(3.5,5); \draw [thick] (4,3.75)--(2,6); \node at (1.75,-0.6) {$\wD _4$}; \node at (1.75,6.3) {$\wD _2$}; \node at (4,5) {$\wD _3$}; \draw [thick,dashed] (0,3)--(4,3); \node at (-0.4,3) {$\wE _3$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6.5) {\large (c)}; \node at (-1,-2) {\ }; \node at (0.5,0) {\Large $\ast$}; \node at (0.5,5.75) {\Large $\ast$}; \draw [thick] (0,5.75)--(1,5.75); \draw [thick] (2,0.5)--(5,0.5); \draw [very thick] (1,5.75) .. controls (1.5,5.75) and (1.5,0.5) .. (2,0.5); \draw [thick] (0,0)--(5,0); \node at (-0.4,5.75) {$\wD _1$}; \node at (-0.4,0) {$\wD _3$}; \draw [thick] (2.5,-0.25)--(4,2); \draw [thick,dashed] (3.5,1)--(3.5,5); \draw [thick] (4,3.75)--(2.5,6); \node at (2.25,-0.6) {$\wD _2$}; \node at (2.25,6.3) {$\wD _4$}; \node at (4,5) {$\wE _2$}; \end{tikzpicture}}\end{center}\end{minipage} \begin{minipage}[c]{1\hsize}\begin{center}\scalebox{0.6}{\begin{tikzpicture} \node at (-1,6.5) {\large (d)}; \node at (0.5,0) {\Large $\ast$}; \node at (0.5,5.75) {\Large $\ast$}; \draw [thick] (0,5.75)--(5,5.75); \draw [thick] (0,0)--(5,0); \node at (-0.4,5.75) {$\wD _1$}; \node at (-0.4,0) {$\wD _3$}; \draw [thick] (2.5,-0.5)--(2.5,6.25); \node at (3,-0.6) {$\wD _2$}; \draw [thick,dashed] (0,2)--(4,2); \draw [thick,dashed] (0,4)--(4,4); \node at (-0.4,2) {$\wE _2'$}; \node at (-0.4,4) {$\wE _2$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6.5) {\large (e)}; \node at (0.5,0) {\Large $\ast$}; \node at (0.5,5.75) {\Large $\ast$}; \draw [thick] (0,5.75)--(5,5.75); \draw [thick] (0,0)--(5,0); \node at (-0.4,5.75) {$\wD _1$}; \node at (-0.4,0) {$\wD _n$}; \draw [thick] (2.5,-0.25)--(4,1.5); \draw [thick,dashed] (1.25,2)--(3.5,0.5); \draw [thick] (2.5,2)--(4,1); \draw [thick,dashed] (1.25,3.75)--(3.5,5.25); \draw [thick] (4,4.25)--(2.5,6); \draw [thick] (4,4.75)--(2.5,3.75); \node at (2.25,-0.6) {$\wD _{n-1}$}; \node at (2.25,6.3) {$\wD _2$}; \node at (0.75,3.75) {$\wE _2$}; \node at (0.75,2) {$\wE _{n-1}$}; \node at (2.5,3.25) {$\vdots$}; \node at (2.5,2.75) {$\vdots$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \node at (-1,6.5) {\large (f)}; \node at (4.5,0) {\Large $\ast$}; \node at (4.5,5.75) {\Large $\ast$}; \draw [thick] (3,5.75)--(5,5.75); \draw [thick] (0.5,-0.25)--(1.5,4.75); \draw [very thick] (1.5,4.75) .. controls (1.5,5) and (2,5.75) .. 
(3,5.75); \draw [thick] (0,0)--(5,0); \node at (0.5,-0.6) {$\wD _1$}; \node at (-0.4,0) {$\wD _2$}; \end{tikzpicture}} \caption{Lemmas \ref{lem(4-2)}, \ref{lem(4-3)}, \ref{lem(4-4)}, \ref{lem(4-5)} and \ref{lem(4-6)}; Configurations of some sections and fiber components of $g$; A line with $\ast$ means a non-fiber component of $g$. }\label{fig(4-2)} \end{center}\end{minipage} \end{figure} \begin{lem}\label{lem(4-2)} If $S$ has a singular point of type $\sD _5$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} By assumption and $\rho (S) \ge 2$, ${\rm Dyn}(S) = \sD _5$ or $\sD _5+\sA _1$. Indeed, it can be seen from the classification of Du Val del Pezzo surfaces of degree $2$ (see, e.g., {\cite{Ura81}} or {\cite[\S 8.7]{Dol12}}). Let $P$ be a singular point on $S$ of type $\sD _5$ and let $\wD _1+\dots + \wD _5$ be the connected component of $\wD$ corresponding to $P$ such that the weighted dual graph of $\wD _1+\dots + \wD _5$ is given as follows: \begin{align*} \xygraph{\circ ([]!{+(0,.25)} {^{\wD_5}}) -[l] \circ ([]!{+(0,.25)} {^{\wD_4}}) -[l] \circ ([]!{+(0,.25)} {^{\wD_3}}) (-[]!{+(-1,-0.5)} \circ ([]!{+(0,.25)} {^{\wD_2}}), -[]!{+(-1,0.5)} \circ ([]!{+(0,.25)} {^{\wD_1}}))} \end{align*} By using the list of the configurations of all $(-2)$-curves and some $(-1)$-curves on weak del Pezzo surfaces of degree $2$ (see {\cite[p.\ 20]{PW10}}), we know that there exists a $(-1)$-curve $\wE _1$ on $\wS$ such that $(\wE _1 \cdot \wD _i) = \delta _{1,i}$ for $i=1,\dots ,5$. Then a divisor $\wF := \wD_2+2(\wE_1+\wD_1+\wD_3)+\wD_4$ defines a $\bP ^1$-fibration $g := \Phi _{|\wF|}: \wS \to \bP ^1_{\bk}$, $\wD_5$ becomes a section of $g$ and each component of $\wD-\wD_5$ is a fiber component of $g$. Note that the configuration of $\wF + \wD _5$ looks like that in Figure \ref{fig(4-2)} (a). Moreover, $g$ satisfies $(\ast )$, $(s,t)=(0,1)$ and $\gamma = 4$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(4-2)}. \end{proof} In what follows, we assume that $S$ has only cyclic quotient singular points. Thus, $S$ has only singular points of types $\sA _1$, $\sA _2$, $\sA _3$, $\sA _4$, $\sA _5$ and $\sA _6$. Here, note that $S$ has no singular point of type $\sA _n$ with $n \ge 7$ because $\rho (S) \ge 2$. Moreover, $S$ has a singular point that is not of type $\sA _1$ by Theorem \ref{CPW}. \begin{lem}\label{lem(4-3)} If $S$ is of type $(\sA _5)'$ or $(\sA _5+\sA _1)'$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} By assumption, $S$ has a singular point $P$ of type $\sA _5$. Let $\wD _1+\dots + \wD _5$ be the connected component of $\wD$ corresponding to $P$ such that the weighted dual graph of $\wD _1+\dots + \wD _5$ is given as follows: \begin{align*} \xygraph{\circ ([]!{+(0,.25)} {^{\wD_1}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_2}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_3}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_4}})-[r] \circ ([]!{+(0,.25)} {^{\wD_5}})} \end{align*} By assumption, there exists a $(-1)$-curve $\wE _3$ on $\wS$ such that $(\wD _i \cdot \wE _3) = \delta _{i,3}$ for $i=1,\dots ,5$. Then a divisor $\wF := \wD_2+2(\wD_3+\wE_3)+\wD_4$ defines a $\bP ^1$-fibration $g := \Phi _{|\wF|}: \wS \to \bP ^1_{\bk}$, $\wD_1$ and $\wD_5$ become sections of $g$ and each component of $\wD-(\wD_1+\wD_5)$ is a fiber component of $g$. Note that the configuration of $\wF + \wD _1 + \wD _5$ looks like that in Figure \ref{fig(4-2)} (b).
Moreover, $g$ satisfies $(\ast \ast )$, $(s,t)=(0,1)$ and $\gamma = 3$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(3-4)}. \end{proof} \begin{lem}\label{lem(4-4)} If $S$ is of type $(\sA _3+\sA _1)'$ or $(\sA _3+2\sA _1)'$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} By assumption, $S$ has two singular points $P_1$ and $P_2$ of types $\sA _3$ and $\sA _1$, respectively. Let $\wD _1+\wD _2 + \wD _3$ (resp. $\wD _4$) be the connected component of $\wD$ corresponding to $P_1$ (resp. $P_2$) such that the weighted dual graph of $(\wD _1+\wD _2 + \wD _3) + \wD_4$ is given as follows: \begin{align*} \xygraph{\circ ([]!{+(0,.25)} {^{\wD_1}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_2}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_3}}) [r] \circ ([]!{+(0,.25)} {^{\wD_4}})} \end{align*} By assumption, there exists a $(-1)$-curve $\wE _2$ on $\wS$ such that $(\wD _i \cdot \wE _2) = \delta _{i,2} + \delta _{i,4}$ for $i=1,\dots ,4$. Then a divisor $\wF := \wD_2+2\wE_2+\wD_4$ defines a $\bP ^1$-fibration $g := \Phi _{|\wF|}: \wS \to \bP ^1_{\bk}$, $\wD_1$ and $\wD_3$ become sections of $g$ and each component of $\wD-(\wD_1+\wD_3)$ is a fiber component of $g$. Note that the configuration of $\wF + \wD _1 + \wD _3$ looks like that in Figure \ref{fig(4-2)} (c). Moreover, $g$ satisfies $(\ast \ast )$, $(s,t)=(0,1)$ and $\gamma = 2$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(3-3)}. \end{proof} \begin{lem}\label{lem(4-5)} If $S$ has a singular point of type $\sA _3$, $\sA _4$, $\sA _5$ or $\sA _6$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} If $S$ is of type $(\sA _5)'$, $(\sA _5+\sA _1)'$, $(\sA _3+\sA _1)'$ or $(\sA _3+2\sA _1)'$, then we know $\Ampc (S) = \Amp (S)$ by Lemmas \ref{lem(4-3)} and \ref{lem(4-4)}. Hence, we assume that $S$ is not of the above types in what follows. By assumption, $S$ has a singular point $P$ of type $\sA _n$ for some $n=3,4,5,6$. Let $\wD _1+\dots + \wD _n$ be the connected component of $\wD$ corresponding to $P$ such that the weighted dual graph of $\wD _1+\dots + \wD _n$ is given as follows: \begin{align*} \xygraph{\circ ([]!{+(0,.25)} {^{\wD_1}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_2}}) -[r] \cdots -[r] \circ ([]!{+(0,.25)} {^{\wD_n}})} \end{align*} We first consider the case $n=3$. Since $S$ is neither of type $(\sA _3+\sA _1)'$ nor of type $(\sA _3+2\sA _1)'$, there exist two distinct $(-1)$-curves $\wE _2$ and $\wE _2'$ on $\wS$ such that $(\wD _i \cdot \wE _2) = (\wD _i \cdot \wE _2') = \delta _{i,2}$ for $i=1,2,3$ by Proposition \ref{prop(3-0)}. Then a divisor $\wF := \wE _2+\wD_2+\wE_2'$ defines a $\bP ^1$-fibration $g := \Phi _{|\wF|}: \wS \to \bP ^1_{\bk}$, $\wD_1$ and $\wD_3$ become sections of $g$ and each component of $\wD-(\wD_1+\wD_3)$ is a fiber component of $g$. Note that the configuration of $\wF + \wD _1 + \wD _3$ looks like that in Figure \ref{fig(4-2)} (d). Moreover, $g$ satisfies $(\ast \ast )$, $(s,t)=(1,0)$, $\beta ' = 1$ and $\beta = 1$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(3-6)} (1). In what follows, we consider the remaining case. Since $S$ is neither of type $(\sA _5)'$ nor of type $(\sA _5+\sA _1)'$, there exist two distinct $(-1)$-curves $\wE _2$ and $\wE _{n-1}$ on $\wS$ such that $(\wD _i \cdot \wE _j) = \delta _{i,j}$ for $i=1,\dots ,n$ and $j=2,n-1$ by Proposition \ref{prop(3-0)}.
Then a divisor $\wF := \wE _2 + \wD_2 + \dots + \wD _{n-1} + \wE _{n-1}$ defines a $\bP ^1$-fibration $g := \Phi _{|\wF|}: \wS \to \bP ^1_{\bk}$, $\wD_1$ and $\wD_n$ become sections of $g$ and each component of $\wD-(\wD_1+\wD_n)$ is a fiber component of $g$. Note that the configuration of $\wF + \wD _1 + \wD _n$ looks like that in Figure \ref{fig(4-2)} (e). Moreover, $g$ satisfies $(\ast \ast )$, $(s,t)=(1,0)$, $\beta ' = 1$ and $\beta \ge 2$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(3-6)} (2). \end{proof} \begin{lem}\label{lem(4-6)} If $S$ has a singular point of type $\sA _2$, then $\Ampc (S) = \Amp (S)$. \end{lem} \begin{proof} Let $P$ be a singular point on $S$ of type $\sA _2$ and let $\wD _1+\wD _2$ be the connected component of $\wD$ corresponding to $P$ such that the weighted dual graph of $\wD _1+\wD _2$ is given as follows: \begin{align*} \xygraph{\circ ([]!{+(0,.25)} {^{\wD_1}}) -[r] \circ ([]!{+(0,.25)} {^{\wD_2}})} \end{align*} Consider the divisor $\wDelta := -K_{\wS} - (\wD _1 + \wD _2)$ on $\wS$. By the Riemann-Roch theorem, we have $\dim |\wDelta| \ge 1$. Moreover, $(\wDelta )^2 = 0$ and $(\wDelta \cdot -K_{\wS}) = 2$. Since $\wS$ is a weak del Pezzo surface, we know that there exists a $0$-curve $\wC$ included in $|\wDelta|$ (see also {\cite[Lemma 5.2]{Saw24}}). Hence, $\wC$ defines a $\bP ^1$-fibration $g := \Phi _{|\wC|}: \wS \to \bP ^1_{\bk}$, $\wD_1$ and $\wD_2$ become sections of $g$ and each component of $\wD-(\wD_1+\wD_2)$ is a fiber component of $g$. Note that the configuration of $\wD _1 + \wD _2$ looks like that in Figure \ref{fig(4-2)} (f). Moreover, $g$ satisfies $(\ast \ast )$ and $(s,t)=(0,0)$ provided that we use notations from \S \S \ref{4-1}. Hence, we obtain that $\Amp (S) = \Ampc (S)$ by Proposition \ref{prop(3-5)}. \end{proof} Hence, in every case, we obtain $\Amp (S) = \Ampc (S)$ by Lemmas \ref{lem(4-1)}, \ref{lem(4-2)}, \ref{lem(4-3)}, \ref{lem(4-4)}, \ref{lem(4-5)} and \ref{lem(4-6)}. The proof of Theorem \ref{main(1)} is thus completed. \begin{thebibliography}{99} \bibitem{Bel17} G. Belousov, {\em Cylinders in del Pezzo surfaces with du Val singularities}, Bull. Korean Math. Soc. {\bf 54} (2017), 1655--1667. \bibitem{Bel23} G. Belousov, {\em Cylinders in del Pezzo surfaces of degree two}, In: {\em Birational Geometry, K\"{a}hler-Einstein Metrics and Degenerations}, Springer Proc. Math. Stat., Vol. 409, Springer, Cham, 2023, 17--70. \bibitem{BW79} J. W. Bruce and C. T. C. Wall, {\em On the classification of cubic surfaces}, J. Lond. Math. Soc. {\bf 19} (1979), 245--256. \bibitem{Che21} I. Cheltsov, {\em Cylinders in rational surfaces}, Sb. Math. {\bf 212} (2021), 399--415. \bibitem{CDP18} I. Cheltsov, A. Dubouloz and J. Park, {\em Super-rigid affine Fano varieties}, Compos. Math. {\bf 154} (2018), 2462--2484. \bibitem{CPPZ21} I. Cheltsov, J. Park, Y. Prokhorov and M. Zaidenberg, {\em Cylinders in Fano varieties}, EMS Surv. Math. Sci. {\bf 8} (2021), 39--105. \bibitem{CPW16a} I. Cheltsov, J. Park and J. Won, {\em Affine cones over smooth cubic surfaces}, J. Eur. Math. Soc. {\bf 18} (2016), 1537--1564. \bibitem{CPW16b} I. Cheltsov, J. Park and J. Won, {\em Cylinders in singular del Pezzo surfaces}, Compos. Math. {\bf 152} (2016), 1198--1224. \bibitem{CPW17} I. Cheltsov, J. Park and J. Won, {\em Cylinders in del Pezzo surfaces}, Int. Math. Res. Not. {\bf 2017} (2017), 1179--1230. \bibitem{CT88} D. F. Coray and M. A. 
Tsfasman, {\em Arithmetic on singular Del Pezzo surfaces}, Proc. Lond. Math. Soc. (3) {\bf 57} (1988), 25--87. \bibitem{Dur79} A. H. Durfee, {\em Fifteen characterizations of rational double points and simple critical points}, Enseign. Math. {\bf 25} (1979), 131--163. \bibitem{Dol12} I. V. Dolgachev, {\em Classical Algebraic Geometry: a modern view}, Cambridge Univ. Press, Cambridge, 2012. \bibitem{KKW} I.-K. Kim, J. Kim and J. Won, {\em $K$-unstable singular del Pezzo surfaces without anticanonical polar cylinder}, Int. Math. Res. Not. {\bf 2024} (2024), 12599--12619. \bibitem{KP21} J. Kim and J. Park, {\em Generic flexibility of affine cones over del Pezzo surfaces of degree $2$}, Int. J. Math. {\bf 32} (2021), Article ID 2150104, 18pp. \bibitem{KW25} J. Kim and J. Won, {\em Cylinders in smooth del Pezzo surfaces of degree $2$}, Adv. Geom. {\bf 25} (2025), 71--91. \bibitem{KPZ11} T. Kishimoto, Y. Prokhorov and M. Zaidenberg, {\em Group actions on affine cones}, In: {\em Affine Algebraic Geometry}, CRM Proc. Lecture Notes, Vol. 54, American Mathematical Society, Providence, RI, 2011, 123--163. \bibitem{KPZ14} T. Kishimoto, Y. Prokhorov and M. Zaidenberg, {\em Unipotent group actions on del Pezzo cones}, Algebr. Geom. {\bf 1} (2014), 46--56. \bibitem{Koj02} H. Kojima, {\em Algebraic compactifications of some affine surfaces}, Algebra Colloq. {\bf 9} (2002), 417--425. \bibitem{MW18} L. Marquand and J. Won, {\em Cylinders in rational surfaces}, Eur. J. Math. {\bf 4} (2018), 1161--1196. \bibitem{Par22} J. Park, {\em $\mathbb{G}_a$-actions on the complements of hypersurfaces}, Transform. Groups {\bf 27} (2022), 651--657. \bibitem{PW10} J. Park and J. Won, {\em Log canonical thresholds on del Pezzo surfaces of degree $\ge 2$}, Nagoya Math. J. {\bf 200} (2010), 1--26. \bibitem{PW16} J. Park and J. Won, {\em Flexible affine cones over del Pezzo surfaces of degree $4$}, Eur. J. Math. {\bf 2} (2016), 304--318. \bibitem{Pre13} A. Y. Perepechko, {\em Flexibility of affine cones over del Pezzo surfaces of degree 4 and 5}, Funct. Anal. Appl. {\bf 47} (2013), 284--289. \bibitem{Saw24} M. Sawahara, {\em Cylinders in canonical del Pezzo fibrations}, Ann. Inst. Fourier {\bf 74} (2024), 1--69. \bibitem{Saw1} M. Sawahara, {\em Cylindrical ample divisors on Du Val del Pezzo surfaces}, Forum Math. (Online first), 23pp. DOI: \href{https://doi.org/10.1515/forum-2024-0279}{10.1515/forum-2024-0279} \bibitem{Ura81} T. Urabe, {\em On singularities on degenerate del Pezzo surfaces of degree 1, 2}, In: {\em Singularities, Part 2 (Arcata, Calif., 1981)}, Proc. Sympos. Pure Math., Vol. 40, American Mathematical Society, Providence, RI, 1983, 587--591. \bibitem{Zha88} D.-Q. Zhang, {\em Logarithmic del Pezzo surfaces of rank one with contractible boundaries}, Osaka J. Math. {\bf 25} (1988) 461--497. \end{thebibliography} \end{document}
2412.10475v1
http://arxiv.org/abs/2412.10475v1
Quasispecies dynamics with time lags and periodic fluctuations in replication
\documentclass[11pt,reqno]{amsart} \usepackage{amssymb,amsmath,amsthm,graphics,epsfig} \usepackage{amsfonts} \usepackage{graphicx,cite} \usepackage{amssymb} \usepackage[pagewise]{lineno} \usepackage{mathrsfs} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \usepackage{booktabs} \usepackage{graphicx} \usepackage{tikz} \usepackage{subfigure} \usepackage{subcaption} \usepackage{multicol} \usepackage{cases} \usepackage{booktabs} \usepackage{hyperref} \usepackage{arydshln} \usepackage[colorinlistoftodos]{todonotes} \usepackage[normalem]{ulem} \usepackage{color} \usepackage{amsaddr} \setcounter{tocdepth}{4} \setcounter{secnumdepth}{4} \usepackage[margin=2.7cm]{geometry} \setlength{\textheight}{8.7in} \setlength{\topmargin}{0in} \setlength{\headsep}{0in} \pagestyle{plain} \usepackage{lineno} \newtheorem{Lem}{Lemma} \newtheorem{The}{Theorem} \newtheorem{Pro}{Proposition} \newtheorem{Cor}{Corollary} \usepackage{float} \theoremstyle{definition} \newtheorem*{Con}{Open question} \newtheorem{Rem}{Remark} \newtheorem{Exa}{Example} \newtheorem{Prik}{Counter--example} \newtheorem{Def}{Definition} \newtheorem{nota}{Notation} \newtheorem{Obs}{Observation} \newtheorem{Prop}{Propotition} \newenvironment{Proof}{\removelastskip \noindent {\bf Proof~:} } ll} {\bf q.e.d.} \bigskip \noindent} \definecolor{ColorEdward}{rgb}{0.4,0.6,0.7} \newcommand{\ed}[1]{\textcolor{ColorEdward}{#1}} \newcommand{\edbar}[1]{\textcolor{ColorEdwardi!80}{\sout{#1}}} \newcommand{\edcom}[2][noinline]{\todo[#1, color=ColorEdward!20!white]{\small \texttt{Edward}: #2}\medskip} \definecolor{ColorFrancisco}{rgb}{0.956,0.878,0.5} \newcommand{\fr}[1]{\textcolor{ColorFrancisco}{#1}} \newcommand{\frbar}[1]{\textcolor{ColorFranciscoi!80}{\sout{#1}}} \newcommand{\frcom}[2][noinline]{\todo[#1, color=ColorFrancisco!20!white]{\small \texttt{Francisco}: #2}\medskip} \definecolor{ColorJosep}{rgb}{0.2,0.5,0.1} \newcommand{\js}[1]{\textcolor{ColorJosep}{#1}} \newcommand{\jsbar}[1]{\textcolor{ColorJosepi!80}{\sout{#1}}} \newcommand{\jscom}[2][noinline]{\todo[#1, color=ColorJosep!20!white]{\small \texttt{Josep}: #2}\medskip} \newcommand{\sgn}{\operatorname{sgn}} \DeclareMathOperator{\R}{\mathbb{R}} \begin{document} \title{Quasispecies dynamics with time lags and periodic fluctuations in replication} \author{Edward A. Turner$^{1,2,*}$, Francisco crespo$^{2}$, Josep Sardanyés$^{3}$\\ \MakeLowercase{and} Nolbert Morales$^4$} \address{$^{1}${Universidad de Concepción, Departamento de Ciencias Básicas, Campus Los Ángeles,\\ {\small \itshape{Av. J.A. Coloma 0201, Los Ángeles, Chile}}}} \address{$^{2}${Universidad del Bío-Bío, Grupo de investigaci\'on en Sistemas Din\'amicos\\ y Aplicaciones (GISDA), {\small \itshape{Casilla 5-C, Concepci\'on, Chile}}}} \address{$^{3}$Centre de Recerca Matem\`atica (CRM), {\small \itshape{08193 Cerdanyola del Vall\`es,\\ Barcelona, Catalonia, Spain }}} \address{$^{4}$Universidad San Sebastián, Facultad de Ingenier\'ia, Arquitectura y Diseño,\\ {\small \itshape{Lago Panguipulli 1390, Puerto Montt, Chile }}} \thanks{$^{*}$Corresponding author: \texttt{[email protected]}} \subjclass{92D25, 34K13, 47H11.} \keywords{Quasispecies, delay differential equation, periodic solutions, degree theory.} \begin{abstract} Quasispecies theory provides the conceptual and theoretical bases for describing the dynamics of biological information of replicators subject to large mutation rates. 
This theory, initially conceived within the framework of prebiotic evolution, is also being used to investigate the evolutionary dynamics of RNA viruses and heterogeneous cancer cell populations. In this sense, efforts to approximate the initial quasispecies theory to more realistic scenarios have been made in recent decades. Despite this, how time lags in RNA synthesis and periodic fluctuations impact quasispecies dynamics remains poorly studied. In this article, we combine the theory of delayed ordinary differential equations and topological Leray-Schauder degree to investigate the classical quasispecies model in the single-peak fitness landscape considering time lags and periodic fluctuations in replication. First, we prove that the dynamics with time lags under the constant population constraint remains in the simplex in both forward and backward time. With backward mutation and periodic fluctuations, we prove the existence of periodic orbits regardless of time lags. Nevertheless, without backward mutation, neither periodic fluctuation nor the introduction of time lags leads to periodic orbits. However, in the case of periodic fluctuations, solutions converge exponentially to a periodic oscillation around the equilibria associated with a constant replication rate. We check the validity of the error catastrophe hypothesis assuming no backward mutation; we determine that the error threshold remains sound both for periodic fitness and for time lags with constant fitness. Finally, our results show that the error threshold is not found with backward mutations. \end{abstract} \maketitle \section{Introduction} Quasispecies theory was conceived in the 1970s by Manfred Eigen~\cite{Eigen1971} and Peter Schuster~\cite{Eigen1988}. This theory was developed to investigate the dynamics of biological information for replicators subject to large mutation rates and was initially applied within the framework of prebiotic evolution. Later on, this theory was adapted to systems of replicons evolving under large mutation rates such as RNA viruses~\cite{Mas2004,Perales2020,Revull2021} and cells with remarkable genetic instability in cancer~\cite{Sole2003,Sole2004,Brumera2006}. More recently, a conceptual parallelism between viral quasispecies and the conformational heterogeneity of prions has been established~\cite{Li2010,Weissmann2011,Weissmann2012}. The investigations into experimental quasispecies involving bacterial, animal, and plant RNA viruses, along with their implications for RNA genetics, were summarized early on in Refs.~\cite{Domingo1985,Domingo1988}. The concept of quasispecies refers to highly heterogeneous populations of replicators composed of the so-called master or wild-type (wt) sequence, which is surrounded by a cloud of mutants (mutant spectrum) that stabilizes at the mutation-selection balance~\cite{Eigen1971,Eigen1988,Nowak1991,Bull2005}. Hence, selection acts on the quasispecies as a whole more than on a particular sequence or set of sequences. The integration of quasispecies theory into virology has fundamentally transformed our comprehension of the composition and dynamics of viral populations during disease onset. The presence of a mutant spectrum was initially demonstrated through clonal analyses of RNA bacteriophage Q$\beta$ populations in an infection initiated with a single viral particle~\cite{Domingo1978}.
Since this finding, viral quasispecies have been identified and quantified in a multitude of viruses such as foot-and-mouth disease virus~\cite{Domingo1980,Sobrino1983}, vesicular stomatitis virus~\cite{Holland1979,Holland1982}, hepatitis viruses~\cite{Martell1992,Davis1999,Mas2004,Perales2020}, or SARS-CoV-2~\cite{Domingo2023}, to cite some examples. One of the most ground-breaking predictions of quasispecies theory is the so-called error threshold or error catastrophe~\cite{Eigen1971,Eigen1988,Biebricher2005}. This notion has also led to other important concepts such as lethal mutagenesis and lethal defection. In the classic quasispecies model, the error threshold is the mutation rate beyond which the master sequence, i.e., the sequence with the highest replicative capacity, fades out and the population is exclusively composed of mutant sequences~\cite{Eigen1971,Biebricher2005}. In contrast to lethal mutagenesis (see below), which involves an effective extinction of sequences, the error threshold involves a shift in the sequence space once the critical mutation rate is surpassed, resulting in a population composed entirely of mutants. It is well known that, under the single-peak fitness landscape assumption, the error threshold is governed by a transcritical bifurcation~\cite{Sole2004, Sole2006,Castillo2017}. Increased mutation rates typically result in decreased population fitness as most mutations with phenotypic effects are detrimental. This principle underlies the concept of lethal mutagenesis, where a viral population can be eradicated by intentionally inducing mutations through mutagenic agents~\cite{Bull2008}. Evidence of lethal mutagenesis has been provided, i.a., for human immunodeficiency virus type 1~\cite{Loeb1999,Dapp2013}, poliovirus~\cite{Crotty2001}, foot-and-mouth disease virus~\cite{Perales2011,Avila2017}, and hepatitis C virus~\cite{Prieto2013,Avila2016} in cell cultures (see also~\cite{Perales2019}). Lethal defection, which involves the extinction of viral populations due to the appearance of defective viral genomes at low amounts of mutagen, has been identified for lymphocytic choriomeningitis virus in cell cultures~\cite{Grande2005}. Initial models of viral quasispecies inherited assumptions of quasispecies theory developed for prebiotic replicators. These included, for instance, deterministic dynamics (infinite populations) and geometric replication~\cite{Eigen1971,Eigen1988}, and simple fitness landscapes such as the Swetina-Schuster landscape~\cite{Swetina1982,Sole2003,Sole2004,Sole2006}. This simple fitness landscape considers two different populations, given by the master sequence and the pool of mutants, the latter being lumped into a single averaged variable. Despite the simplicity of this approach, a simple model with such assumptions qualitatively explained complexity features tied to the error threshold in hepatitis C-infected patients~\cite{Sole2006,Mas2004}. However, RNA viruses unfold an enormous complexity at the genetic and population dynamics levels~\cite{Sole2021,Sanjuan2021,Aylward2022}. During the last decades, considerable efforts have been made to bring the initial quasispecies theory closer to more realistic scenarios for RNA viruses and for quasispecies theory in general.
These new models have considered, either separately or in combination: finite populations~\cite{Nowak1989,Sole2004,Sardanyes2008}; stochastic effects~\cite{Sole2004,Sardanyes2011,Ari2016}; spatially-embedded quasispecies~\cite{Altemeyer2001,Sardanyes2008,Sardanyes2011}; more complex fitness landscapes including dynamic ones~\cite{Wilke2001a,Wilke2001c}, epistasis~\cite{Sardanyes2009,Elena2010}, and mutational fitness effects~\cite{Josep2014}; viral complementation~\cite{Sardanyes2010}; the survival-of-the-flattest effect (with empirical evidence in the evolution of computer programs~\cite{Wilke2001} and viroids~\cite{Codoner2006}), see also~\cite{Wilke2001b,Sardanyes2008}; and asymmetric modes of replication~\cite{Sardanyes2009,Sardanyes2011}, among others. Despite these efforts, there are still some features of viral (and RNA) dynamics that have not been considered within quasispecies theory. One example is the impact of time lags in the replication of RNAs. Quasispecies theory, and most of the models for RNA virus replication, consider instantaneous synthesis of full genomes (either wild-type or distinct mutants), thus ignoring the time needed for the synthesis of the genomes. However, viral genomes are synthesized by the sequential incorporation of nucleotides by the RNA-dependent RNA-polymerase in the replication complex. Several studies have quantified the rates of elongation of viral genomes. For instance, recent studies have revealed that the SARS-CoV-2 replication complex has an elongation rate of $150$ to $200$ nucleotides per second~\cite{Campagnola2022}. This elongation rate is more than twice as fast as that of the poliovirus polymerase complex. According to these rates, a full SARS-CoV-2 RNA genome ($\approx 29.9$ Kb) is synthesized in about $2.5$ to $3.32$ minutes. A full poliovirus genome ($\approx 7.4$ Kb) is synthesized in about $1.5$ minutes. Another important, and often ignored, effect is the fluctuation that key parameters such as replication rates may undergo. For RNA viruses, especially those infecting plants, replication processes may follow periodic fluctuations at the within-tissue or within-host levels at different time scales, mainly due to changes in temperature~\cite{Obreepalska2015,Honjo2020}. Moreover, it is known that the mammalian brain has an endogenous central circadian clock that regulates central and peripheral cellular activities. At the molecular level, this day-night cycle induces the expression of upstream and downstream transcription factors that influence the immune system and the severity of viral infections over time. In addition, there are also circadian effects on host tolerance pathways. This stimulates adaptation to normal changes in environmental conditions and requirements (including light and food). These rhythms influence the pharmacokinetics and efficacy of therapeutic drugs and vaccines. The importance of circadian systems in regulating viral infections and the host response to viruses is currently of great importance for clinical management~\cite{Zandi2023}. In this article we aim to cover this gap in quasispecies models by studying the impact of time lags and fluctuations in RNA replication. To do so we use the Swetina-Schuster fitness landscape~\cite{Swetina1982,Sole2006} as a first approach to this problem. The article is organised as follows. In Section~\ref{sec:general_model} we introduce the classical quasispecies model and prove that the constant population assumption remains valid in the presence of time lags.
Section~\ref{sec:landscape} introduces the model under the single-peak fitness landscape assumption. Here, we first investigate the dynamics without backward mutation considering periodic replicative fitness and time lags in replication separately. Finally, we also consider the case where phenotypic reversions occur, considering periodic replication rates. \section{General quasispecies model with time lags and periodic fitness} \label{sec:general_model} The classical quasispecies model was initially formulated within the framework of well-stirred populations of replicators assuming a constant population (CP)~\cite{Eigen1971,Eigen1988}. This system can be modelled by the following system of autonomous ordinary differential equations: \begin{equation}\label{eq:Quasi} \dot x_{i}=\sum_{j=0}^n f_jQ_{ji}x_j - \tilde\Phi(\mathbf{x})x_{i}. \end{equation} The state variables $x_i$ denote the concentration (population numbers) of the $i$-th replicator species. Parameter $f_j$ is the replication rate of the $j$-th population of replicators, $Q_{ji}$ is the entry of the matrix denoting the transitions from the $j$-th to the $i$-th population of replicators due to mutation, and $\tilde\Phi(\mathbf{x})=\sum_{j=0}^nf_jx_j$ is the out-flux term ensuring a CP, i.e., $\sum_i \dot{x}_i = 0$ and $\sum_i x_i = \text{constant}$. Note that under the CP setting $\sum_i x_i = 1$, the out-flux term is given by the average fitness associated to the population vector $\mathbf{x}=(x_0,\dots,x_n).$ The CP condition for Eqs.~\eqref{eq:Quasi} makes the orbits span the simplex \begin{equation}\label{eq:CP} \Sigma_{n+1}=\left\lbrace \mathbf x\in\mathbb R^{n+1}:x_0+x_1+\dots+x_n=1,x_0,\dots,x_n>0 \right\rbrace. \end{equation} This feature is proved in the following proposition. \begin{Pro}\label{Prop:1} The simplex \eqref{eq:CP} is invariant under the flow of system \eqref{eq:Quasi}. \end{Pro} \begin{proof} Let us observe that, along the flow, $\frac{d}{dt}\left(\sum_{i=0}^{n} x_{i}\right)=\sum_{i=0}^{n}\dot x_{i}.$ Then, by adding the equations of \eqref{eq:Quasi} and considering that $\sum_{i=0}^n Q_{ji}=1$ for $j=0,\dots,n,$ we obtain that \begin{align*} \sum_{i=0}^n\dot x_i&=\tilde\Phi(\mathbf x)\left(1-\sum_{i=0}^nx_i\right)=0. \end{align*} Thus, $\Sigma_{n+1}$ is invariant. \end{proof} As we highlighted in the Introduction, the quasispecies model assumes that the replication of the populations is instantaneous. However, many biological processes suffer time lags. For instance, the replication of RNAs or the proliferation of cancer cells do not occur instantaneously. If we focus on RNA viruses, viral RNA genomes are synthesized by the RNA-dependent RNA polymerase, which takes some time to synthesize the full RNA sequences. This polymerization and other phenomena such as RNA folding and maturation introduce time lags in the production of a mature, replicating RNA sequence. Moreover, viral replication can also change due to circadian cycles or day-night temperature fluctuations. Incorporating these effects into the quasispecies model is fundamental for a more comprehensive understanding and accurate representation of complex biological phenomena.
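
As a complementary illustration (not part of the original analysis), the following Python sketch integrates system \eqref{eq:Quasi} for a hypothetical three-type quasispecies and reports how far the sum $\sum_i x_i$ drifts from one along the orbit, in agreement with Proposition~\ref{Prop:1}; the replication rates and the row-stochastic mutation matrix below are illustrative choices.
\begin{verbatim}
# Minimal numerical check of the simplex invariance stated in Proposition 1,
# assuming an illustrative three-type quasispecies with a row-stochastic Q.
import numpy as np
from scipy.integrate import solve_ivp

f = np.array([1.0, 0.6, 0.4])            # replication rates f_j (illustrative)
Q = np.array([[0.90, 0.07, 0.03],        # Q[j, i]: mutation from type j to type i;
              [0.05, 0.90, 0.05],        # each row sums to 1
              [0.02, 0.08, 0.90]])

def quasispecies(t, x):
    phi = np.dot(f, x)                   # out-flux term Phi(x) = sum_j f_j x_j
    return (f * x) @ Q - phi * x         # dx_i/dt = sum_j f_j Q_{ji} x_j - Phi x_i

x0 = np.array([0.5, 0.3, 0.2])           # initial condition on the simplex
sol = solve_ivp(quasispecies, (0.0, 50.0), x0, rtol=1e-10, atol=1e-12)
print("max |sum_i x_i - 1| along the orbit:",
      np.max(np.abs(sol.y.sum(axis=0) - 1.0)))
\end{verbatim}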
The quasispecies model to explore these features in a qualitative manner is given by the following non-autonomous delay differential equation \begin{equation}\label{eq:QuasiDelay} \dot x_{i}(t)=\sum_{j=0}^n f_j(t)Q_{ji}x_j(t-r_j)-x_{i}(t)\Phi(\mathbf x), \end{equation} where now $f_j(t)$ is a periodic function, the $r_j$ introduce the time lags (see below), and the outflow term reads $\Phi(\mathbf x)=\sum_{j=0}^n f_j(t)\,x_j(t-r_j)$. As we mentioned above, the CP condition largely determines the dynamics of the quasispecies model, which is tied to the simplex space~\eqref{eq:CP}. Next, we will show that this condition still remains valid in the presence of time lags. It is well-known that time lags can introduce major changes into a system of differential equations. Usually, key features of the system are modified and many techniques employed to analyze ordinary differential systems are no longer applicable. However, the CP constraint is compatible with the delayed model \eqref{eq:QuasiDelay}. Hence, Proposition \ref{Prop:1} can be extended to system \eqref{eq:QuasiDelay}. For this purpose, we proceed in two stages; first, Proposition~\ref{vari_inva} shows that solutions whose initial conditions have right endpoint in $\Sigma_{n+1}$ remain in $\Sigma_{n+1}$ for all positive time. In other words, it is not necessary for the entire initial condition to reside completely within $\Sigma_{n+1}$. Later, Proposition~\ref{Perm_esp_inv} indicates that any completely positive $T$-periodic solution of the system must necessarily reside in $\Sigma_{n+1}$. \begin{Pro}\label{vari_inva} Let us consider $\phi_i\in C(\mathbb R)$ and $\gamma=\max_{0\leq j\leq n}\{r_{j}\}$. If $(\phi_0(t_0),\dots,\phi_n(t_0))\in \Sigma_{n+1}$ with $t_0\in\R$, then $\Sigma_{n+1}$ is an invariant set for the delay differential system \begin{equation} \begin{split}\label{Sist_Gen} &\dot x_{i}(t)=\sum_{j=0}^n f_j(t)Q_{ji}x_j(t-r_j)-x_{i}(t)\Phi(\mathbf x),\\ &x_i(t)=\phi_i(t),\quad t\in\left[t_0-\gamma,t_0\right], \quad i=0,\dots,n. \end{split} \end{equation} \end{Pro} \begin{proof} Let $\mathbf x(t)=(x_0(t),\dots,x_n(t))\in C\left([t_0,\infty[,\R^{n+1}\right)$ be a solution of the system \eqref{Sist_Gen}. As in the proof of Proposition \ref{Prop:1}, $$\frac{d}{dt}\left(\displaystyle\sum_{i=0}^nx_i(t) \right)=\left(1-\displaystyle\sum_{i=0}^nx_i(t) \right)\displaystyle\sum_{j=0}^nf_j(t)x_j(t-r_{j}).$$ If we consider $z(t)=\sum_{i=0}^nx_i(t)$ and apply the method of steps to system \eqref{Sist_Gen}, this system reduces, for $t\in\left[t_0, t_0+\gamma\right]$, to the ordinary differential equation \begin{equation}\label{EDO} \dot z(t) =(1-z(t))\displaystyle\sum_{j=0}^nf_j(t) \phi_j(t-r_{j}), \end{equation} with $z(t_0)=1$. Then, by the existence and uniqueness theorem, we have that $z(t)=1$ for $t\in [t_0,t_0+\gamma]$. Applying the method of steps again for $t\in[t_0+\gamma,t_0+2\gamma]$, we obtain equation \eqref{EDO} with the initial condition $z(t_0+\gamma)=1$, from where again $z(t)=1$ for $t\in [ t_0+\gamma,t_0+2\gamma]$. Continuing in this way by induction, we obtain that $z(t)=\sum_{i=0}^nx_i(t)=1$ for all $t\geq t_0$. \end{proof} \begin{Pro}\label{Perm_esp_inv} If $\phi_i\in C([0,T],\R)$ are positive $T$-periodic solutions of the system \eqref{Sist_Gen} in $]0,1[^{n+1}$, then $(\phi_0(t),...,\phi_n(t))\in \Sigma_{n+1}$ for all $t\in\R$. \end{Pro} \begin{proof} Let us assume that $\phi_i \in C([0,T],\mathbb{R})$ with $i=0,...,n$ are positive $T$-periodic solutions of the system \eqref{Sist_Gen} in $]0,1[^{n+1}$, not contained in $\Sigma_{n+1}$.
Then, the function $z(t) = \sum_{i=0}^n \phi_i(t)$ is also a positive periodic function that satisfies the differential equation \begin{equation}\nonumber \dot z(t) =(1-z(t))\displaystyle\sum_{j=0}^nf_j(t)\phi_j(t-r_{j}). \end{equation} Since $z$ is a $T$-periodic function, its derivative vanishes at some $t_0\in[0,T]$, so that \begin{equation}\nonumber 0 =(1-z(t_0))\displaystyle\sum_{j=0}^nf_j(t_0)\phi_j(t_0-r_{j}), \end{equation} and, since the sum on the right-hand side is positive, $z(t_0)=1$; therefore $(\phi_0(t_0),...,\phi_n(t_0))\in \Sigma_{n+1}$. Then, consider the initial value problem \begin{equation}\nonumber \dot x_i(t)=\sum_{j=0}^nf_j(t)x_j(t-r_{j})(Q_{ji}-x_i(t)), \quad x_i(t)=\phi_i(t),\quad t\in\left[t_0-T,t_0\right]. \end{equation} From Proposition \ref{vari_inva} we obtain that $(x_0(t),...,x_n(t)) \in \Sigma_{n+1}$ for all $ t\geq t_0.$ Then, due to the uniqueness of solutions of delay differential equations, we can conclude that $(x_0(t),...,x_n(t))=(\phi_0(t),...,\phi_n(t))\in \Sigma_{n+1}$ for all $t\in \R$. \end{proof} These statements guarantee that the model retains its biological interpretation, both with and without time lags. \section{Quasispecies model for the single-peak fitness landscape} \label{sec:landscape} In this section we introduce the quasispecies model in a simple fitness landscape which assumes that mutations generate deleterious mutants. This allows us to divide the population of sequences into two state variables: the master sequence $x_0$, with a high fitness, and the pool of mutants, which is lumped together into an average sequence $x_1$ with lower fitness. This fitness landscape, also known as the Swetina-Schuster landscape~\cite{Swetina1982,Sole2003}, is explored here for two reasons: (i) it provides a suitable mathematical framework allowing for analytical derivations, especially under the CP, for which the dynamical system can be reduced by a degree of freedom; (ii) such a landscape recovered quasispecies complexity features in clinical data~\cite{Sole2006}. This system is obtained by considering $n=1$ in \eqref{eq:QuasiDelay}: \begin{equation}\label{eq} \begin{split} \dot x_{0}&=f_{0}(1-\mu)x_{0}(t-r_0)+f_1\xi x_1(t-r_1)-x_{0}(t)\left(f_0x_0(t-r_0)+f_1x_1(t-r_1)\right),\\ \dot x_{1}&=f_{0}\mu x_{0}(t-r_0)+f_{1}(1-\xi)x_{1}(t-r_1)-x_{1}(t)\left(f_0x_0(t-r_0)+f_1x_1(t-r_1)\right), \end{split} \end{equation} where $Q_{01}=\mu,$ $Q_{00}=1-\mu,$ $Q_{11}=(1-\xi),$ and $Q_{10}=\xi.$ In the subsequent sections, we also employ the following compact notation for \eqref{eq} $$\dot{\mathbf{x}}(t) = F(\mathbf x(t),\mathbf x(t-r_0),\mathbf x(t-r_1)),\quad \mathbf x=(x_0,x_1),$$ which is a nonlinear retarded functional differential system with two bounded lags, and the map $F:\mathbb R^6\to\mathbb R^2$. As mentioned, $f_i$ denotes the replication rates, and $\mu$ and $\xi$ are the forward and backward mutation rates, respectively. For $n=1$, the dynamics span the segment $\Sigma_{2}$, hereafter denoted as $\Sigma$ for simplicity. It is important to note that our analyses will distinguish the absence and presence of backward mutation, which correspond with $\xi = 0$ and $\xi \neq 0$, respectively. Some works have studied the single-peak fitness landscape without backward mutations~\cite{Sole2003,Sole2004,Sole2006}. This approach assumes that mutations occur from the master to the mutant populations but not in the reverse sense. The enormous size of the sequence space makes this assumption a good first approximation. However, models with backward mutations have also been explored~\cite{Sardanyes2009,Sardanyes2011,Josep2014}.
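
As an illustrative aside (not part of the original analysis), the following Python sketch integrates system \eqref{eq} with a simple fixed-step scheme and a stored history buffer, a discrete version of the method of steps, and reports how far $x_0+x_1$ drifts from one, in agreement with Proposition~\ref{vari_inva}; the delays, mutation rates and periodic replication rates below are illustrative assumptions.
\begin{verbatim}
# Fixed-step integration of the two-type delayed system (eq) with a history
# buffer; all parameter values below are illustrative assumptions.
import numpy as np

h = 1e-3                                   # step size
r0, r1 = 0.5, 0.8                          # time lags
mu, xi = 0.1, 0.05                         # forward and backward mutation rates
f0 = lambda t: 1.0 + 0.2 * np.cos(t)       # periodic replication rates
f1 = lambda t: 0.6 + 0.1 * np.cos(t)

n0, n1 = int(round(r0 / h)), int(round(r1 / h))
m = max(n0, n1)                            # history length in steps
steps = int(50.0 / h)
x = np.empty((steps + 1, 2))
x[: m + 1] = [0.7, 0.3]                    # constant history on [-max(r0, r1), 0]

for k in range(m, steps):
    t = (k - m) * h
    x0d, x1d = x[k - n0, 0], x[k - n1, 1]  # delayed values x0(t-r0), x1(t-r1)
    phi = f0(t) * x0d + f1(t) * x1d
    dx0 = f0(t) * (1 - mu) * x0d + f1(t) * xi * x1d - x[k, 0] * phi
    dx1 = f0(t) * mu * x0d + f1(t) * (1 - xi) * x1d - x[k, 1] * phi
    x[k + 1] = x[k] + h * np.array([dx0, dx1])   # explicit Euler step

print("max |x0 + x1 - 1| along the orbit:", np.max(np.abs(x.sum(axis=1) - 1.0)))
\end{verbatim}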
Although backward point mutations restoring the original master sequence may be very improbable, beneficial mutations causing phenotypic reversions, which produce sequences with the same fitness as the master sequence, may occur. Moreover, our investigation primarily explores how periodic fitness and time lags affect the qualitative behavior of the flow, specifically through the identification of oscillatory phenomena for both cases. \subsection{Dynamics with no backward mutation} The single-peak landscape model without backward mutation is obtained by setting $\xi=0$ in \eqref{eq}, which yields \begin{equation}\label{eq:16} \begin{split} \dot x_{0}&=f_{0}(t)(1-\mu)x_{0}(t-r_0)-x_{0}(t)\Phi(\mathbf x), \\ \dot x_{1}&=f_{0}(t)\mu x_{0}(t-r_0)+f_{1}(t)x_{1}(t-r_1)-x_{1}(t)\Phi(\mathbf x), \end{split} \end{equation} where $f_j$ represents positive $T$-periodic continuous functions, and $\mu \in \left[0,1\right]$ denotes the mutation rate of $x_{0}$, here with $\Phi(\mathbf x) =f_0(t)x_0(t-r_0)+f_1(t)x_1(t-r_1)$. \begin{figure} \begin{center} \subfigure[$\mu<\mu_c$]{ \begin{tikzpicture}[scale=0.76] \draw[->] (-2,0) -- (4,0); \draw[->] (0,-1) -- (0,5); \draw [] (0,3) -- (3,0); \draw[dashed] (-2,5) -- (0,3); \draw[dashed] (3,0) -- (4,-1); \filldraw[color=black] (0,3) circle (1.5pt); \filldraw[color=black] (1.5,1.5) circle (1.5pt); \coordinate [label=above:\textcolor{black}{\small$\Sigma$}] (E2) at (2.5,0.8); \coordinate [label=below:\textcolor{black}{\small$x_0$}] (E2) at (4,0); \coordinate [label=left:\textcolor{black}{\small$x_1$}] (E2) at (0,5); \coordinate [label=left:\textcolor{black}{\small$\mathbf x_1^*$}] (E2) at (0,3); \coordinate [label=left:\textcolor{black}{\small$\mathbf x_2^*$}] (E2) at (1.5,1.5); \coordinate [label=left:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (-0.5,3.9); \coordinate [label=below:\textcolor{black}{\rotatebox{135}{\small$<$}}] (E2) at (0.7,2.7); \coordinate [label=below:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (2.2,1.2); \end{tikzpicture}} \subfigure[$\mu=\mu_c$]{ \begin{tikzpicture}[scale=0.8] \draw[->] (-2,0) -- (4,0); \draw[->] (0,-1) -- (0,5); \draw [] (0,3) -- (3,0); \draw[dashed] (-2,5) -- (0,3); \draw[dashed] (3,0) -- (4,-1); \filldraw[color=black] (0,3) circle (1.5pt); \coordinate [label=above:\textcolor{black}{\small$\Sigma$}] (E2) at (2.5,0.76); \coordinate [label=below:\textcolor{black}{\small$x_0$}] (E2) at (4,0); \coordinate [label=left:\textcolor{black}{\small$x_1$}] (E2) at (0,5); \coordinate [label=left:\textcolor{black}{\small$\mathbf x_1^*=\mathbf x_2^*$}] (E2) at (0,3); \coordinate [label=below:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (0.7,2.7); \coordinate [label=left:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (-0.5,3.9); \end{tikzpicture}} \subfigure[$\mu>\mu_c$]{ \begin{tikzpicture}[scale=0.76] \draw[->] (-2,0) -- (4,0); \draw[->] (0,-1) -- (0,5); \draw [] (0,3) -- (3,0); \draw[dashed] (-2.5,5.5) -- (0,3); \draw[dashed] (3,0) -- (4,-1); \filldraw[color=black] (0,3) circle (1.5pt); \filldraw[color=black] (-1.5,4.5) circle (1.5pt); \coordinate [label=above:\textcolor{black}{\small$\Sigma$}] (E2) at (2.5,0.77); \coordinate [label=below:\textcolor{black}{\small$x_0$}] (E2) at (4,0); \coordinate [label=left:\textcolor{black}{\small$x_1$}] (E2) at (0,5); \coordinate [label=left:\textcolor{black}{\small$\mathbf x_1^*$}] (E2) at (0,3); \coordinate [label=left:\textcolor{black}{\small$\mathbf x_2^*$}] (E2) at (-1.5,4.5); \coordinate [label=below:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (0.7,2.7);
\coordinate [label=left:\textcolor{black}{\rotatebox{135}{\small$<$}}] (E2) at (-0.5,3.9); \coordinate [label=below:\textcolor{black}{\rotatebox{-45}{\small$<$}}] (E2) at (-2.4,5.8); \end{tikzpicture}} \end{center} \captionsetup{width=\linewidth} \caption{Transcritical bifurcation associated with the error threshold in the quasispecies model. The segment $\Sigma$ is the biologically meaningful region, which is contained in the basin of attraction of the equilibrium $\mathbf x_1^*$ for $\mu\geq\mu_c$.} \label{fig:Trans} \end{figure} \begin{Rem} \label{Remark:Erro} The classical quasispecies model corresponds to $r_i=0$ and constant fitness $f_i(t)=f_i$. This system has three equilibrium points: $\mathbf x_0^*=(0,0)\notin \Sigma,$ $\mathbf x_1^*=(0,1)$ and \begin{equation}\label{eq:equilibioEsp} \mathbf x_2^*=(x_2^*,y_2^*)=\left(\frac{{f_0}-{f_1}-{f_0}\mu}{{f_0}-{f_1}},\frac{{f_0}\mu}{f_0-{f_1}}\right), \end{equation} where $\mathbf x_1^*,\mathbf x_2^*\in\Sigma$. A transcritical bifurcation occurs at the critical mutation rate $\mu_c=1-f_1/f_0$, indicating a threshold beyond which selection becomes impossible, leading the system into a drift phase~\cite{Sole2006,Sole2021}. The equilibrium $\mathbf x_1^*$ is unstable and $\mathbf x_2^*$ is globally asymptotically stable for $\mu < \mu_c$. At $\mu = \mu_c$ both equilibria $\mathbf x_1^*$ and $\mathbf x_2^*$ collide and interchange stability in a transcritical bifurcation. Hence, for $\mu > \mu_c$, $\mathbf x_1^*$ is globally asymptotically stable and $\mathbf x_2^*$ unstable (Fig.~\ref{fig:Trans}). The error threshold is significantly relevant within the quasispecies model. However, when $\xi\neq 0$, this threshold is never reached and $\mathbf x_1^*$ is not an equilibrium point of the system, as discussed in \cite{CrespoCancer2020}. \end{Rem} \subsubsection{Periodic replication rates with no time lags} This section analyzes the dynamics considering periodic fluctuations in the replication rate. One might expect that a periodic replicative fitness would lead to periodic behavior of the solutions. However, our study will prove otherwise. If we consider $f_i(t)=m_i+n\cos(t)$, the CP condition $x_1=1-x_0$ and a reparametrization of the independent variable, we can express system \eqref{eq:16} in terms of the master sequence $x_0$, denoted by $x$ for simplicity. The model reads \begin{equation}\label{eq:sistred} \dot x=(1+a\cos(t))(1-\mu)x-x((1+a\cos(t))x+(r+a\cos(t))(1-x)), \end{equation} where $a=n/m_0$ and $r=m_1/m_0.$ This equation has only one equilibrium, $x^*=0$, whose stability changes with $\mu$. Precisely, following Remark~\ref{Remark:Erro}, we know that $x^*$ is unstable for $\mu\neq\mu_c=1-r$. However, inspecting the dynamics in $\Sigma$, the equilibrium $x^*$ is stable for $\mu\geq\mu_c$. Moreover, considering $x(0)=c$, the solution of \eqref{eq:sistred} can be explicitly computed and is given by \begin{eqnarray} \label{eq:solucion} x(t)&=& \dfrac{ce^{(\mu_c-\mu)t-a\mu\sin(t)}}{1+c\mu_c\displaystyle\int_0^t e^{(\mu_c-\mu)s-a\mu\sin(s)}ds}. \end{eqnarray} Note that for any initial condition and parameters, the numerator of \eqref{eq:solucion} is periodic, while the denominator is not. Therefore, the system does not possess periodic solutions for any value of the parameters.
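
For readers who wish to reproduce this behavior numerically, the following Python sketch (an added illustration; the parameter values are hypothetical and chosen so that $\mu<\mu_c$) integrates \eqref{eq:sistred} directly and evaluates the closed-form expression \eqref{eq:solucion} at the final time; the two values should agree up to the integration and quadrature tolerances.
\begin{verbatim}
# Cross-check of the closed-form solution (eq:solucion) against a direct
# numerical integration of (eq:sistred); illustrative parameters, mu < mu_c.
import numpy as np
from scipy.integrate import solve_ivp, quad

a, r, mu, c = 2/3, 2/3, 0.1, 0.4      # a = n/m0, r = m1/m0, mutation rate, x(0)
mu_c = 1.0 - r                        # critical mutation rate

def rhs(t, x):                        # right-hand side of (eq:sistred)
    return ((1 + a*np.cos(t))*(1 - mu)*x
            - x*((1 + a*np.cos(t))*x + (r + a*np.cos(t))*(1 - x)))

def x_exact(t):                       # closed-form expression (eq:solucion)
    E = lambda s: np.exp((mu_c - mu)*s - a*mu*np.sin(s))
    return c*E(t) / (1 + c*mu_c*quad(E, 0.0, t)[0])

T = 30.0
num = solve_ivp(rhs, (0.0, T), [c], rtol=1e-10, atol=1e-12)
print("numerical x(T) =", num.y[0, -1], "  closed form x(T) =", x_exact(T))
\end{verbatim}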
We will distinguish several regimes for $x(t)$ according to the sign of $\mu_c-\mu.$ \begin{figure} \begin{tikzpicture}[scale=1] \node[inner sep=0pt] (caso1) at (-4.3,0) {\includegraphics[scale=0.8]{images/1b.pdf}}; \node[inner sep=0pt] (caso1) at (4.3,0) {\includegraphics[scale=0.8]{images/1c.pdf}}; \coordinate [label=below:\textcolor{black}{\small$\mu>\mu_c$}] (E2) at (-4,2); \coordinate [label=below:\textcolor{black}{\small$\mu=\mu_c$}] (E2) at (4,2); \end{tikzpicture} \captionsetup{width=\linewidth} \caption{Drift phase arising beyond the critical mutation rate $\mu_c$, with the dominance of mutant sequences, i.e., $x(t)\to 0$, for $m_0=0.3,$ $m_1=0.2,$ $n=0.2,$ $\mu=0.1$.} \label{fig:3} \end{figure} If $\mu > \mu_c,$ previous treatment of the non-periodic case \cite{Eigen1971,Schuster1994} predicts a loss of genomic information as the population enters into a drift phase. This fact is preserved with a periodic replicative fitness. On the one hand, if $\mu=\mu_c$, we can bound the solution \eqref{eq:solucion} as follows \begin{equation} g_1(t)=\dfrac{c}{e^{a\mu \sin(t)}(1+c\mu_c e^{a\mu} t)} \leq x(t)\leq \dfrac{c}{e^{a\mu \sin(t)}(1+c\mu_c e^{-a\mu} t)}=g_2(t). \end{equation} Since $g_i(t)\to 0$ as $t\to\infty$ for $i=1,2$, we have that $x(t)\to0$. On the other hand, considering $\mu>\mu_c$, we have that the denominator in \eqref{eq:solucion} is a positive number greater than 1, and the numerator tends to zero as $t\to\infty$. Thus, we conclude that $x(t)\to0$. We illustrate this behavior in Figure~\ref{fig:3}. In the case where $\mu<\mu_c,$ the solution is bounded by \begin{equation} h_1(t)=\frac{e^{-a \mu \sin (t)}}{k_{11}+k_{12} e^{-t(\mu_c-\mu)}}\leq x(t) \leq \frac{e^{-a \mu \sin (t)}}{k_{21}+k_{22} e^{-t(\mu_c-\mu)}}=h_2(t), \end{equation} where \begin{equation} k_{11}=\frac{\mu_c e^{a \mu }}{\mu_c-\mu },\quad k_{12}=\frac{\mu_c e^{a \mu }}{\mu -\mu_c}+\frac{1}{c}\quad\text{and}\quad k_{21}=\frac{\mu_c e^{-a \mu }}{\mu_c-\mu },\quad k_{22}=\frac{\mu_c e^{-a \mu }}{\mu -\mu_c}+\frac{1}{c}. \end{equation} To understand the behavior of the bounds $h_1(t)$ and $h_2(t)$, we observe that these functions are not periodic, but they converge exponentially to the following periodic functions \begin{equation} p_i(t)=\frac{e^{-a \mu \sin (t)}}{k_{i1}},\quad i=1,2. \end{equation} Moreover, the functions $p_i(t)$ are related to the equilibrium $\mathbf x_2^*$ of the corresponding constant fitness model given in \eqref{eq:equilibioEsp}. In particular, we have \begin{equation} \label{eq:Promedios} \dfrac{1}{2\pi}\int_0^{2\pi} p_1(t) \,dt=k_{11}^{-1}=e^{-a \mu }\, x_2^*,\qquad \dfrac{1}{2\pi}\int_0^{2\pi} p_2(t) \,dt=k_{21}^{-1}=e^{a \mu } \,x_2^*. \end{equation} As a result, after an initial transient stage, the solution $x(t)$ shows a quasi-periodic behavior and is bounded by two quasi-periodic functions oscillating below and above the equilibrium of the constant fitness model. We illustrate this feature in Figure~\ref{fig:2}, which is a representative example of numerical simulations for $x(t)$. It is noteworthy that the bounds $h_i(t)$ converge to $p_i(t)$ independently of the initial condition $x(0)=c$. Therefore, the oscillation around the average values given in \eqref{eq:Promedios} plays the role of an attractor for arbitrary solutions. In practice, every solution exhibits a first stage moving toward the bounds imposed by $p_i(t)$. After that, it behaves almost periodically.
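
The following short Python check (an added illustration, using the same hypothetical parameter values as in the previous sketch) verifies numerically that a computed orbit of \eqref{eq:sistred} stays between the bounds $h_1(t)$ and $h_2(t)$ in the regime $\mu<\mu_c$.
\begin{verbatim}
# Numerical check that an orbit of (eq:sistred) stays between h_1(t) and h_2(t)
# when mu < mu_c; illustrative parameters as in the previous sketch.
import numpy as np
from scipy.integrate import solve_ivp

a, r, mu, c = 2/3, 2/3, 0.1, 0.4
mu_c = 1.0 - r
k11 = mu_c*np.exp(a*mu)/(mu_c - mu);  k12 = mu_c*np.exp(a*mu)/(mu - mu_c) + 1/c
k21 = mu_c*np.exp(-a*mu)/(mu_c - mu); k22 = mu_c*np.exp(-a*mu)/(mu - mu_c) + 1/c

rhs = lambda t, x: ((1 + a*np.cos(t))*(1 - mu)*x
                    - x*((1 + a*np.cos(t))*x + (r + a*np.cos(t))*(1 - x)))
h1 = lambda t: np.exp(-a*mu*np.sin(t))/(k11 + k12*np.exp(-t*(mu_c - mu)))
h2 = lambda t: np.exp(-a*mu*np.sin(t))/(k21 + k22*np.exp(-t*(mu_c - mu)))

t_eval = np.linspace(0.0, 60.0, 4001)
x = solve_ivp(rhs, (0.0, 60.0), [c], t_eval=t_eval, rtol=1e-10, atol=1e-12).y[0]
print("h1 <= x <= h2 everywhere:",
      bool(np.all(h1(t_eval) - 1e-8 <= x) and np.all(x <= h2(t_eval) + 1e-8)))
\end{verbatim}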
\begin{figure} \begin{tikzpicture}[scale=1] \node[inner sep=0pt] (caso1) at (0,0) {\includegraphics[scale=1]{images/1a.pdf}}; \coordinate [label=below:\textcolor{black}{\small$\mu<\mu_c$}] (E2) at (-0.5,2.3); \end{tikzpicture} \caption{Quasi-periodic behavior of the solutions of system \eqref{eq:sistred} for $m_0=0.3,$ $m_1=0.2,$ $n=0.2.$} \label{fig:2} \end{figure} \subsubsection{Dynamics with time lags and constant replication} This section analyzes the model with time lags in the synthesis of the sequences assuming a constant replicative fitness. Let us consider system \eqref{eq:16} with $f_0=1,$ $f_1<1$ and $r_0=r_1.$ To simplify the notation, we will henceforth denote $f_1=f$ and $r_i=r$, so the system takes the form \begin{equation}\label{eq:Delayconstante} \begin{split} \dot x_{0}&=(1-\mu)x_{0}(t-r)-x_{0}(t)\left(x_0(t-r)+f x_1(t-r)\right), \\ \dot x_{1}&=\mu x_{0}(t-r)+fx_{1}(t-r)-x_{1}(t)\left(x_0(t-r)+f x_1(t-r)\right). \end{split} \end{equation} This system has three equilibrium points: $\mathbf x_0^*=(0,0)\notin\Sigma,$ $\mathbf x_1^*=(x_1^*,y_1^*)=(0,1)$ and \begin{equation} \mathbf x_2^*=(x_2^*,y_2^*)=\left(\frac{\mu+f-1}{f-1},\frac{-\mu}{f-1}\right), \end{equation} where $\mathbf x_1^*,\mathbf x_2^*\in\Sigma.$ The transcritical bifurcation mentioned in Remark~\ref{Remark:Erro} remains in the system \eqref{eq:Delayconstante} for the critical value $\mu_c=1-f,$ giving us the conditions under which each equilibrium exists. Previously we showed that the simplex associated with the CP constraint is invariant even in the presence of lags. In particular, for system \eqref{eq:Delayconstante} the simplex $\Sigma$ is given by $$x_0(t)+x_1(t)=1.$$ Considering only the biologically meaningful domain, we are left with the following reduced system \begin{equation}\label{eq:1616} \dot{x}(t) = G(x(t), x(t-r))=-f x(t)+(1-\mu)x(t-r)+(f-1)x(t)x(t-r), \end{equation} where the master sequence $x_0$ is denoted by $x.$ The reduced system has the equilibrium points $x_1^*=0$ and $x_2^*=(\mu+f-1)/(f-1).$ Absolute stability is defined in \cite{Ruan2001}; this type of stability is the main obstruction to obtaining periodic orbits with the method given in \cite{Ruan2001}. Following that method, we linearize the equation around $x^*_1$ and $x_2^*$ and observe absolute stability for system \eqref{eq:1616}. \begin{The} Let $x_1^*$ and $x_2^*$ be the equilibria of the equation \eqref{eq:1616}. Then, \begin{enumerate} \item[\textit{i)}] If $1-\mu<f,$ $x_1^*$ is unstable and $x_2^*$ is absolutely stable, \item[\textit{ii)}] If $1-\mu>f,$ $x_1^*$ is absolutely stable and $x_2^*$ is unstable. \end{enumerate} \end{The} \begin{proof} On the one hand, the linearized equation around $x_1^*=0$ is $\dot x(t)=(1-\mu)x(t-r)-f x(t).$ If we consider $x=e^{zt},$ the characteristic equation becomes \begin{equation}\label{12} z+f=(1-\mu)e^{-rz}, \end{equation} which has infinitely many complex solutions. However, by Theorem 4.7 of \cite{smith2010introduction}, when $1-\mu<f,$ $x_1^*$ is unstable and $x_2^*$ is absolutely stable. On the other hand, the linearization around $x_2^*$ is obtained from \begin{equation} \begin{split} \frac{\partial G}{\partial x(t)}(x^*,x^*)&=-f+(f-1)x^*=\mu-1\\ \frac{\partial G}{\partial x(t-r)}(x^*,x^*)&=1-\mu+(f-1)x^*=f. \end{split} \end{equation} The system becomes $\dot x(t)=(\mu-1)x(t)+fx(t-r)$ and the associated characteristic equation is \begin{equation}\label{123} z+(1-\mu)=f e^{-zr}.
\end{equation} In the same way as for the previous equilibrium, given the form of \eqref{123} and by Theorem 4.7 of \cite{smith2010introduction}, when $1-\mu>f,$ $x_1^*$ is absolutely stable and $x_2^*$ is unstable. \end{proof} \begin{Rem} The previous theorem establishes conditions for instability and absolute stability. The latter prevents the existence of periodic orbits around the equilibrium at hand, while this matter remains inconclusive for an unstable equilibrium. A classical mechanism to establish the existence of periodic orbits is the Hopf bifurcation. However, this procedure does not apply in our model. \end{Rem} \subsection{Dynamics with backward mutation and periodic replication rates} \label{sec:4} In this section, we consider arbitrary periodic replication rates $f_0$ and $f_1$ in the presence of backward mutation. For this setting, we establish the existence of at least one positive $T$-periodic solution for the system \eqref{eq}. Our approach relies on topological Leray-Schauder degree arguments \cite{L_Schauder} and is valid whether or not time lags are considered. In what follows, we consider $\mu,\xi\in]0,1[$ and define the ratio $w:=f_{0}/f_{1}$. If we first restrict $f_0$ and $f_1$ to be constant functions (which are trivially $T$-periodic), it is easy to see that the system \eqref{eq} has three trivial periodic solutions, which are as follows: \begin{itemize} \item If $w \neq 1$, considering $\alpha_1=1+\xi-w (1-\mu)$ and $\alpha_2=1-\xi+w (1-\mu)-2w$, \begin{equation} \begin{split}\label{equilibria} \mathbf{x}^*_0 &= (0,0) \\ \mathbf{x}^*_1 &= \left(\dfrac{\alpha_1-\sqrt{\alpha_1^2-4\xi(1-w)}}{2(1-w)},\dfrac{\alpha_2+\sqrt{\alpha_1^2-4\xi(1-w)}}{2(1-w)}\right)\\ \mathbf{x}^*_2 &= \left(\dfrac{\alpha_1+\sqrt{\alpha_1^2-4\xi(1-w)}}{2(1-w)},\dfrac{\alpha_2-\sqrt{\alpha_1^2-4\xi(1-w)}}{2(1-w)}\right). \end{split} \end{equation} \item If $w=1$ (this case corresponds to neutral mutants), \begin{equation} \mathbf{x}^*_0 = (0,0), \quad \mathbf{x}^*_1 = \mathbf{x}^*_2 = \left(\dfrac{\xi}{\xi+\mu},\dfrac{\mu}{\xi+\mu}\right). \end{equation} \end{itemize} A basic analysis shows that $\mathbf{x}^*_1$ and $\mathbf{x}^*_2$ switch positions in the first quadrant as $w$ increases from $w<1$ to $w>1$. This fact ensures that at least one trivial solution always lies in the first quadrant. Moreover, we observe that by perturbing the values of $f_0$ and $f_1$ with non-constant periodic functions, the trivial periodic solutions mentioned above are no longer valid. Next, we will prove that these trivial periodic solutions located in the first quadrant become non-trivial periodic solutions for periodic replication rates $f_0$ and $f_1$. \begin{The}\label{TeoP} Let us consider $\xi\neq 1-\mu$. Then, system \eqref{eq} admits at least one non-trivial $T$-periodic positive solution that lies in $\Sigma$ for all $t\in[0,T]$ and satisfies \begin{equation}\nonumber \min\{1-\mu,\xi\}\leq x_0(t)\leq \max\{1-\mu,\xi\},\quad \min\{1-\xi,\mu\}\leq x_1(t)\leq \max\{1-\xi,\mu\}. \end{equation} \end{The} \begin{The}\label{Teo2} Let $x_0,x_1$ be positive $T$-periodic solutions of system \eqref{eq} with values in the interval $]0,1[$. Then $(x_0(t),x_1(t))\in\Sigma$, $\forall t\in[0,T]$, and \begin{itemize} \item[\textit{i)}] If $1-\mu>\xi$, then \begin{equation}\nonumber \xi<x_0(t)<(1-\mu),\ \forall t\in[0,T]\quad \text{and}\quad \mu<x_1(t)<1-\xi,\ \forall t\in[0,T]. \end{equation} \item[\textit{ii)}] If $1-\mu<\xi$, then \begin{equation}\nonumber (1-\mu)<x_0(t)<\xi,\ \forall t\in[0,T]\quad \text{and}\quad 1-\xi<x_1(t)<\mu,\ \forall t\in[0,T].
\end{equation} \item[\textit{iii)}] If $(1-\mu)=\xi$, then $x_0(t)=\xi$ and $x_1(t)= 1-\xi$ for all $t\in[0,T].$ \end{itemize} \end{The} We prove these theorems in what follows. \begin{Rem} Notice that the previous results not only establish the existence of periodic orbits in the biologically meaningful region $\Sigma$, but they also determine the bounds for the oscillations of these periodic orbits. \end{Rem} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.5] \draw[->] (0,0) -- (3.5,0); \draw[->] (0,0) -- (0,3.5); \draw [line width = 1pt] (0,3) -- (3,0); \draw [line width = 1pt, color = red] (0.75,2.25) -- (2.25,0.75); \draw[dashed] (0,2.25) -- (2.25,2.25); \draw[dashed] (0.75,0) -- (0.75,2.25); \draw[dashed] (0,0.75) -- (2.25,0.75); \draw[dashed] (2.25,0) -- (2.25,2.25); \coordinate [label=below:\textcolor{black}{\small$\Sigma$}] (E2) at (0.5,3.25); \coordinate [label=below:\textcolor{black}{\small$x_0$}] (E2) at (3.5,0); \coordinate [label=left:\textcolor{black}{\small$x_1$}] (E2) at (0,3.5); \coordinate [label=below:\textcolor{black}{\small $\nu$}] (E2) at (2.25,0); \coordinate [label=below:\textcolor{black}{\small$\eta$}] (E2) at (0.75,0); \coordinate [label=left:\textcolor{black}{\small$1-\eta$}] (E2) at (0,2.25); \coordinate [label=left:\textcolor{black}{\small$1-\nu$}] (E2) at (0,0.75); \fill[gray!35,nearly transparent] (0.75,0.75) -- (0.75,2.25) -- (2.25,2.25) -- (2.25,0.75) -- cycle; \end{tikzpicture} \end{center} \caption{In red, we can observe the segment containing the periodic orbit.} \label{fig:PeriodicOrbit} \end{figure} Firstly, we develop the theoretical framework to prove Theorem~\ref{TeoP} and Theorem~\ref{Teo2}. Then, the mentioned proofs are given at the end of this section. We aim to reformulate the task of discovering $T$-periodic solutions by framing it as a fixed-point problem associated with a completely continuous operator for the system given in equation \eqref{eq}. Let us define $C_T=\left\{ u \in C(\R): u(t)=u(t+T) \mbox{ for }t\in \R\right\}$ and the operator $$\mathcal{A} :C_T\times C_T\to C_T\times C_T,$$ \begin{equation} \quad \mathcal{A} [x_0,x_1] =\mathcal P[x_0,x_1]+\mathcal Q\mathcal N[x_0,x_1] +\mathcal K(I-\mathcal Q)\mathcal N[x_0,x_1] , \end{equation} where $\mathcal P[x_0,x_1]=(x_0(0),x_1(0)),$ $\mathcal Q[x_0,x_1]=(\overline{x_0},\overline{x_1}),$ $\overline{x_0}=\frac{1}{T}\int_0^Tx_0(t)dt,$ $$\mathcal N[x_0,x_1] (t)=\left(\begin{array}{lr} f_{0}(t)(1-\mu)x_{0}(t-r_0)+f_1(t)\xi x_1(t-r_1)-x_{0}(t)\left(f_0(t)x_0(t-r_0)+f_1(t)x_1(t-r_1)\right)\\ f_{0}(t)\mu x_{0}(t-r_0)+f_{1}(t)(1-\xi)x_{1}(t-r_1)-x_{1}(t)\left(f_0(t)x_0(t-r_0)+f_1(t)x_1(t-r_1)\right)\end{array}\right)$$ and $$\mathcal K[x_0,x_1]=\left(\int_0^t x_0(s)ds,\int_0^t x_1(s)ds\right).$$ Note that this operator is completely continuous, and thus the \textit{Leray-Schauder degree} is applicable. Furthermore, the periodic boundary value problem for \eqref{eq} is equivalently transformed into the fixed-point problem for an operator equation: \begin{equation*} (x_0,x_1) =\mathcal{A}[x_0,x_1] ,\quad (x_0,x_1) \in C_T\times C_T. \end{equation*} The crucial step in establishing the validity of Theorem \ref{TeoP} is to demonstrate that the degree of the operator $I-\mathcal{A}$ on a suitable open set is non-zero. Let $\eta, \nu \in \mathbb{R}$, $\eta =\min\{1 - \mu, \xi\}$, $\nu=\max\{1 - \mu, \xi\} $ and consider $\xi \neq 1 - \mu$.
Then, we define the open set \begin{gather*} \Omega:=\left\{(x_0,x_1)\in C_T\times C_T: \eta<x_0(t)<\nu, \mbox{ } 1-\nu<x_1(t)<1-\eta, \mbox{ }\forall t\in[0,T]\right\}\end{gather*} and the homotopy $\mathcal{H}:[0,1]\times C_T\times C_T\to C_T\times C_T,$ defined by \begin{equation*} \mathcal{H}[\lambda,x_0,x_1]=(x_0,x_1)-(\mathcal P[x_0,x_1]+\mathcal Q\mathcal N[x_0,x_1]+\lambda \mathcal K(I-\mathcal Q)\mathcal N[x_0,x_1]). \end{equation*} Before demonstrating the admissibility of the homotopy, we first compute $d_{LS}(\mathcal{H}[0,\cdot],\Omega,0)$. This computation is particularly straightforward due to the fact that $(I-\mathcal{H}[0,\cdot])(\overline{\Omega})\subseteq\mathbb{R}^2$. Exploiting this condition allows us to reduce its calculation to the finite-dimensional case, namely the Brouwer degree. To accomplish this, we formally establish the following result: \begin{Pro}\label{grado_finit} $d_{LS}(\mathcal{H}[0,\cdot],\Omega,0)=1$. \end{Pro} \begin{proof} As mentioned before, $(I-\mathcal{H}[0,\cdot])(\overline{\Omega})\subseteq\mathbb{R}^2$ implies that \begin{equation*} d_{LS}(\mathcal{H}[0,\cdot],\Omega,0)=d_{B}(\mathcal{H}[0,\cdot]_{\overline{\Omega\cap\mathbb{R}^2}},\Omega\cap\mathbb{R}^2,0)=d_{B}(- \mathcal{Q}\mathcal{N}[\cdot]_{\overline{\Omega\cap\mathbb{R}^2}},\Omega\cap\mathbb{R}^2,0). \end{equation*} If $\mathcal{Q}\mathcal{N}(x,y)=0$, then we obtain the system \begin{equation}\label{eq:SystemEquilibria} \begin{split} &\overline{f_0}(1-\mu)x+\overline{f_1} \xi y-x(\overline{f_0}x+\overline{f_1}y)=0, \\ &\overline{f_0}\mu x+\overline{f_1}(1-\xi)y-y(\overline{f_0}x+\overline{f_1}y)=0. \end{split} \end{equation} From system \eqref{eq:SystemEquilibria} we get \begin{equation}\label{y=} \overline{f_1}y(x-\xi)=\overline{f_0}x((1-\mu)-x) \end{equation} \begin{equation}\label{x=} \overline{f_0}x(y-\mu)= \overline{f_1}y((1-\xi)-y). \end{equation} Furthermore, from \eqref{y=} and \eqref{x=} we obtain, respectively, \begin{equation}\label{y'=} \dfrac{dy}{dx}=-\left(\frac{\overline{f_0}}{\overline{f_1}}\right)\frac{x(x-\xi) +\xi((1-\mu)-x)}{(x-\xi)^2},\quad \dfrac{dx}{dy}=-\left(\frac{\overline{f_1}}{\overline{f_0}}\right)\frac{y(y-\mu)+\mu((1-\xi)-y)}{(y-\mu)^2}. \end{equation} We will prove that $\mathcal{Q}\mathcal{N}(x,y)\neq 0$, $\forall (x,y)\in\partial\left([\eta,\nu]\times[1-\nu,1-\eta]\right)$. For this, we assume that there exists $(x_0,y_0)\in\partial\left([\eta,\nu]\times[1-\nu,1-\eta]\right)$ that satisfies system \eqref{eq:SystemEquilibria}, and we will rule out this possibility through the following cases. \begin{itemize} \item[Case 1.] If $\xi<1-\mu$, $y\in[1-\nu,1-\eta]$ and $x=\eta=\xi$ or $x=\nu=1-\mu$, then from \eqref{y=} we reach a contradiction.\\ \item[Case 2.] If $\xi<1-\mu$, $x\in[\eta,\nu]$ and $y=(1-\nu)=\mu$ or $y=(1-\eta)=1-\xi$, then from \eqref{x=} we reach a contradiction. \\ \item[Case 3.] If $\xi>1-\mu$, $y\in[1-\nu,1-\eta]$ and $x=\eta$ or $x=\nu$. This case is similar to case 1. \item[Case 4.] If $\xi>1-\mu$, $x\in[\eta,\nu]$ and $y=1-\nu$ or $y=1-\eta$. This case is similar to case 2. \end{itemize} Therefore, we can now compute the Brouwer degree. For this, we assert that the system of equations \eqref{eq:SystemEquilibria} has only one solution. In fact, from Cases 1, 2, 3, and 4 it follows that if $(x_0, y_0)$ satisfies \eqref{eq:SystemEquilibria}, then $(x_0, y_0) \in ]\eta, \nu[ \times ]1 - \nu, 1 - \eta[$.
Furthermore, in the case $\xi < 1 - \mu$, we observe the following: considering $(a,y_a) $ a point of \eqref{y=}, $(b,y_b) $ a point of \eqref{x=}, $w_0=\overline{f_0}/\overline{f_1}$ and the monotony given by\eqref{y'=} then if $a,b\to \xi^{+}$ we obtain that \begin{align*} y_b&\to \left(\frac{1}{2} \left(1-\xi (w_0+1)+\sqrt{4 \xi \mu w_0+(\xi (w_0+1)-1)^2} \right)\right)^-\\&<\frac{1}{2} \left(1-\xi (w_0+1)+\sqrt{(\xi (w_0-1)+1)^2}\right)\\ &=1-\xi\\ &<y_a\\ &\to+\infty. \end{align*} and if we consider that $a,b\to(1-\mu)^-$ we obtain that \begin{align*} y_b& \to \frac{1}{2} \left(\sqrt{((1-\xi)-(1-\mu)w_0)^2+4\mu (1-\mu) w_0}+(1-\xi)-(1-\mu)w_0 \right)^+ \\ & > \frac{1}{2} \left(\sqrt{(\mu-(1-\mu)w_0)^2+4 \mu(1-\mu) w_0}+(1-\xi)-(1-\mu)w_0 \right) \\ & = \frac{1}{2} \left(\mu+(1-\xi) \right) \\ &>\mu\\ &>y_a\\ &\to 0^+. \end{align*} Then from \eqref{y'=} we can conclude that there is a single point of intersection. A symmetric analysis is followed in the case $\xi > 1 - \mu$, leading to the same conclusion. Thus, we have the following: \begin{align*} &d_{B}(- \mathcal Q\mathcal N[\cdot]_{\overline{\Omega\cap\R^2}},\Omega\cap\R^2,0)\\ &\qquad\quad=\sgn\left(\ \overline{f_0}(1-\mu-2x_0)-\overline{f_1}y_0)(\overline{f_1}(1-\xi-2y_0)-\overline{f_0}x_0)\overline{f_0}~\overline{f_1}(\xi-x_0)(\mu-y_0) \right), \end{align*} and considering that $(x_0,y_0)$ satisfy \eqref{y=} and \eqref{x=} we have that \begin{align*} &d_{B}(- \mathcal Q\mathcal N[\cdot]_{\overline{\Omega\cap\R^2}},\Omega\cap\R^2,0)\\ &\qquad=\sgn\left( \overline{f_0}~\overline{f_1}\left(\frac{(1-\mu)-x_0}{x_0-\xi}+x_0\right)\left(\frac{(1-\xi)-y_0}{y_0-\mu}+y_0\right) -\overline{f_0}~\overline{f_1}(\xi-x_0)(\mu-y_0) \right)\\ &\qquad= 1. \end{align*} \end{proof} The upcoming lemma establishes the admissibility of the homotopy $\mathcal H$ and allow us to compute the degree of the operator $I-\mathcal{A}.$ \begin{Lem}\label{cots1} Considering all the hypotheses of Theorem \ref{TeoP}. Then, every solution of $$\mathcal H\left[\lambda, x_0, x_1\right] = 0$$ is contained in $\Omega,$ for all $\lambda\in[0, 1]$ and $t\in\mathbb R.$ \end{Lem} \begin{proof} Let $\Lambda$ denote the set of all solutions to $\mathcal H[\lambda, x_0, x_1] = 0$. Assuming by contradiction there exists $\lambda \in [0,1]$ and $(x_0, x_1) \in \Lambda$ such that $(x_0, x_1) \in \partial\Omega$. The case $\lambda=0$ is discarded from the consideration of Proposition \ref{grado_finit}. If $\lambda\in]0,1]$ from the definition of $\mathcal{H}$ we obtain \begin{equation*} \begin{split} \dot x_{0}&=\lambda\left(f_{0}(t)(1-\mu)x_{0}(t-r_0)+f_1(t)\xi x_1(t-r_1)-x_{0}(t)\left(f_0(t)x_0(t-r_0)+f_1(t)x_1(t-r_1)\right)\right),\\ \dot x_{1}&=\lambda\left(f_{0}(t)\mu x_{0}(t-r_0)+f_{1}(t)(1-\xi)x_{1}(t-r_1)-x_{1}(t)\left(f_0(t)x_0(t-r_0)+f_1(t)x_1(t-r_1)\right)\right). 
\end{split} \end{equation*} Let $ t_{x}, t_{y}\in [0,T]$ such that $\dot x_0(t_{x})=0$ and $\dot x_1(t_{y})=0$, \begin{eqnarray} \label{x'} x_1(t_x-r_{1}(t_x))(\xi-x_0(t_x))&=&w(t_x) x_0(t_x-r_{0}(t_x))(x_0(t_x)-1+\mu), \\ \label{y'} x_1(t_y-r_{1}(t_y))(1-\xi-x_1(t_y))&=&w(t_y)x_0(t_y-r_{0}(t_y))(x_1(t_y)-\mu), \end{eqnarray} where $w=f_0/f_1.$ We have the following cases: If $x_0(t_x)=\max_{t\in[0,T]}x_0(t)=\nu=\xi$ (similar in the case $\nu=1-\mu$) or $x(t_x)=\min_{t\in[0,T]}x_0(t)=\eta=1-\mu$ (similar in the case $\eta=\xi$), then from \eqref{x'} \begin{align*} 0= x_1(t_x-r_{1}(t_x))(\xi-\nu)&=w(t_x)x_0(t_x-r_{0}(t_x))(\nu-1+\mu)>0,\\ 0< x_1(t_x-r_{1}(t_x))(\xi-\eta)&=w(t_x)x_0(t_x-r_{0}(t_x))(\eta-1+\mu)=0, \end{align*} respectively, which implies a contradiction in both cases. On the other hand, if $x_1(t_y)=\max_{t\in[0,T]}x_1(t)=1-\eta=\mu$ (similar in the case $\eta=\xi$) or $x_1(t_y)=\min_{t\in[0,T]}x_1(t)=1-\nu=1-\xi$ (similar in the case $\nu=1-\mu$), then from \eqref{y'} \begin{align*} 0> x_1(t_y-r_{1}(t_y))(1-\xi-(1-\eta))&=w(t_y)x_0(t_y-r_{0}(t_y))((1-\eta)-\mu)=0,\\ 0= x_1(t_y-r_{1}(t_y))(1-\xi-(1-\nu))&=w(t_y)x_0(t_y-r_{0}(t_y))((1-\nu)-\mu)<0, \end{align*} respectively, which implies a contradiction in both cases. \end{proof} \begin{proof}[Proof of Theorem \ref{TeoP}] By applying Lemma \ref{cots1}, it follows that $\mathcal{H}$ is an admissible homotopy. Consequently, \begin{equation*} \begin{split} d_{LS}(I-\mathcal{A},\Omega,0)&=d_{LS}(\mathcal{H}[1,\cdot],\Omega,0)\\ &=d_{LS}(\mathcal{H}[0,\cdot],\Omega,0)\\ &=1. \end{split} \end{equation*} Therefore \eqref{eq} has at least one solution in $\Omega$. Additionally, considering Proposition \ref{Perm_esp_inv}, the proof of Theorem \ref{TeoP} is concluded. \end{proof} \begin{proof}[Proof of Theorem \ref{Teo2}] Assuming the fulfillment of all the hypotheses in Theorem \ref{Teo2}. The initial part of the proof follows from Proposition \ref{Perm_esp_inv}. To establish the remaining aspects, we will proceed by considering different cases: \begin{itemize} \item[Case 1.] Assume $1-\mu>\xi$ and $x_0'(t_x)=0$. If $x(t_x)=M_{x_0}=\max_{t\in[0,T]}\{x_0(t)\}\geq 1-\mu$ then from equation \eqref{x'} we get \[0> x_1\left(t_x-r_{1}(t_x)\right)(\xi-M_{x_0})=w(t_x)x_0(t_x-r_{0}(t_x))(M_{x_0}-1+\mu)\geq 0.\] This is a contradiction. Similarly if $x(t_x)=m_{x_0}=\min_{t\in[0,T]}\{x_0(t)\}\leq \xi$ then from equation \eqref{x'} we have \[0\leq x_1(t_x-r_{1}(t_x))(\xi-m_{x_0})=w(t_x)x_0(t_x-r_{0}(t_x))(m_{x_0}-1+\mu)< 0,\] which is also a contradiction. On the other hand, since $1-\mu>\xi$ then $1-\xi>\mu$. If we assume that $x_1(t_y)=m_{x_1}\leq \mu$ then from equation \eqref{y'} we obtain \[0<x_1(t_y-r_{1}(t_y))(1-\xi-m_{x_1} )=w(t_y)x_0(t_y-r_{0}(t_y))(m_{x_1}-\mu)\leq0,\] leading to a contradiction. Similarly if $x_1(t_y)=M_{x_1}\geq 1-\xi$ then from equation \eqref{y'} we obtain \[0\geq x_1(t_y-r_{1}(t_y))(1-\xi-M_{x_1} )=w(t_y)x_0(t_y-r_{0}(t_y))(M_{x_1}-\mu)>0.\] This is a contradiction. \item[Case 2.] Assume $1-\mu<\xi.$ The proof is analogous to the previous case. \item[Case 3.] Assume $1-\mu=\xi$ and suppose that $M_{x_0}> \xi$, then from equation \eqref{x'} we have that $$0> x_1(t_x-r_{1}(t_x))(\xi-M_{x_0})=w(t_x)x_0(t_x-r_{0}(t_x))(M_{x_0}-1+\mu)> 0.$$ This is a contradiction then $M_{x_0}\leq \xi$. Similarly if $m_{x_0}< \xi$, then from equation \eqref{x'} we have that $$0< x_1(t_x-r_{1}(t_x))(\xi-m_{x_0})=w(t_x)x_0(t_x-r_{0}(t_x))(m_{x_0}-1+\mu)< 0.$$ This is a contradiction then $\xi\leq m_{x_0}$. 
Therefore $x_0(t)=\xi,$ for all $ t\in[0,T]$. Similarly, it is shown that $x_1(t)=1-\xi,$ for all $ t\in[0,T]$. \end{itemize} \end{proof} \section{Conclusions} Quasispecies theory has fundamentally transformed our understanding of the dynamics of replicators such as macromolecules or cells subject to large mutation rates~\cite{Eigen1971,Eigen1988}. Initially applied to prebiotic evolution, the theory was later extended to RNA viruses~\cite{Revull2021,Mas2004,Perales2020} and genetically unstable cancer cells~\cite{Sole2003,Sole2006,Brumera2006}. Early investigations highlighted the heterogeneous nature of quasispecies, where a master sequence is surrounded by a cloud of mutants that stabilize at mutation-selection balance~\cite{Eigen1971,Eigen1988}. This insight has significantly impacted virology, revealing the intricate composition and dynamics of viral populations during infections, with initial experimental demonstrations involving RNA bacteriophage Q$\beta$~\cite{Domingo1978}. Subsequently, the population heterogeneity conceptualized by quasispecies theory has been validated in various viruses, including foot-and-mouth disease virus~\cite{Domingo1980,Sobrino1983}, vesicular stomatitis virus~\cite{Holland1979,Holland1982}, hepatitis viruses~\cite{Martell1992,Davis1999,Mas2004,Perales2020}, and SARS-CoV-2~\cite{Domingo2023}. A key prediction of quasispecies theory is the error threshold or error catastrophe~\cite{Eigen1971,Domingo1988,Biebricher2005}, where excessive mutation rates lead to the loss of the master sequence, leaving only mutant sequences~\cite{Eigen1971,Eigen1988,Bull2008}. This concept underlies lethal mutagenesis, a strategy to eradicate viral populations by inducing high mutation rates through mutagenic agents~\cite{Bull2008}. Evidence for lethal mutagenesis has been observed in several viruses, including HIV-1~\cite{Loeb1999,Dapp2013}, poliovirus~\cite{Crotty2001}, and hepatitis C virus~\cite{Prieto2013,Avila2016}, among others~\cite{Perales2020}. Moreover, lethal defection, involving the extinction of viral populations due to defective viral genomes, has been identified in lymphocytic choriomeningitis virus~\cite{Grande2005}. Although initial models of quasispecies theory assumed deterministic dynamics and simple fitness landscapes~\cite{Eigen1971,Eigen1988,Swetina1982}, recent efforts have incorporated more realistic scenarios such as finite populations~\cite{Nowak1989,Sardanyes2008,Sardanyes2009}, stochastic effects~\cite{Sole2004,Sardanyes2011,Ari2016}, more complex fitness landscapes~\cite{Wilke2001a,Wilke2001c,Sardanyes2009,Elena2010,Josep2014}, or viral complementation~\cite{Sardanyes2010}. These advancements continue to refine our understanding of RNA virus dynamics, although some aspects, like the impact of time lags in RNA synthesis and periodic replicative fitness, remain largely underexplored. In this article we investigate the quasispecies model under the single-peak fitness landscape framework. This fitness landscape is one of the simplest ones and allow for analytical exploration. Despite this simplicity, it has been used to characterize quasispecies complexity features in hepatitis C-infected patients~\cite{Mas2004,Sole2006}. We have analyzed different scenarios for this model considering both no backward and backward mutations. Specifically, we demonstrated that periodic orbits exist when backward mutation and periodic fluctuations are present, regardless of time lags. 
Without backward mutation, neither periodic fluctuations nor time lags result in periodic orbits. In scenarios with periodic fluctuations, solutions converge exponentially to a periodic oscillation around the equilibria associated with a constant replication rate. Concerning the error catastrophe, this hypothesis holds true without backward mutation. However, backward mutations under the given fitness landscape mitigate the error catastrophe bifurcation. \section*{Acknowledgment} The authors ET and FC thank Adri\'an G\'omez for fruitful discussion and guidance on time-delayed systems. Support from Research Agencies of Chile and Universidad del Bío-Bío is acknowledged, they came in the form of research projects GI2310532-VRIP-UBB and ANID PhD/2021-21210522. ET thanks the Centre de Recerca Matem\`atica and especially J. Sardany\'es for their hospitality and continuous support during the research stay in the first semester of 2024, which allowed us to carry out this research. JS wants to thank Esteban Domingo, Santiago Elena, Ricard Sol\'e, and Celia Perales for insightful discussions on quasispecies theory and viral quasispecies. JS has been funded by a Ramón y Cajal Fellowship (RYC-2017-22243) and by grant PID2021-127896OB-I00 funded by MCIN/AEI/10.13039/501100011033 ”ERDF A way of making Europe”. We also thank Generalitat de Catalunya CERCA Program for institutional support and the support from María de Maeztu Program for Units of Excellence in R$\&$D grant CEX2020-001084-M (JS). \bibliographystyle{abbrv} \bibliography{CuasiDelay} \end{document}
2412.09904v2
http://arxiv.org/abs/2412.09904v2
Quantum chromatic numbers of some graphs in Hamming schemes
\documentclass[preprint,12pt]{elsarticle} \usepackage{amssymb} \usepackage[colorlinks=true]{hyperref} \usepackage{geometry} \geometry{a4paper,scale=0.8} \usepackage{amsmath} \usepackage{booktabs} \usepackage[all]{xy} \usepackage{amsmath} \usepackage{accents} \newlength{\dhatheight} \newcommand{\doublewidehat}[1]{ \settoheight{\dhatheight}{\ensuremath{\widehat{#1}}} \addtolength{\dhatheight}{-0.3ex} \widehat{\vphantom{\rule{1pt}{\dhatheight}} \smash{\widehat{#1}}}} \usepackage{latexsym} \usepackage{mathrsfs} \usepackage{amsthm} \usepackage{graphicx} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsbsy} \let\labelindent\relax \usepackage[shortlabels]{enumitem} \usepackage{url} \usepackage{array} \usepackage{pdflscape} \usepackage{xcolor} \usepackage{stmaryrd} \newcommand\dsb[1]{\llbracket #1 \rrbracket} \newcommand{\todo}[1]{{\color{red} (TODO: #1) }} \newcommand{\new}[1]{{\color{blue}#1}} \setcounter{MaxMatrixCols}{40} \newcommand\Diag{\operatorname{Diag}} \usepackage{verbatim} \usepackage{epsfig} \usepackage{bookmark} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{defn}[thm]{Definition} \newtheorem{remark}[thm]{Remark} \newtheorem{exam}[thm]{Example} \def\Tr{\operatorname{Tr}} \def\tr{\operatorname{tr}} \begin{document} \begin{frontmatter} \title{Quantum chromatic numbers of some graphs in Hamming schemes} \author{Xiwang Cao$^{a,}$\footnote{The research of X. Cao is supported by National Natural Science Foundation of China, Grant No. 12171241. The research of K. Feng is supported by National Natural Science Foundation of China, Grant No. 12031011. The research of Y. Tan is supported by National Natural Science Foundation of China, Grant No. 12371339}, Keqin Feng$^{b}$, Ying-Ying Tan$^c$} \address{$^{a}$School of Mathematics, Nanjing University of Aeronautics and Astronautics, China\\ $^{b}$Department of Mathematics, Tsinghua University, China\\ $^c$Anhui Jianzhu University, China} \begin{abstract} The study of quantum chromatic numbers of graphs is a hot research topic in recent years. However, the infinite family of graphs with known quantum chromatic numbers are rare, as far as we know, the only known such graphs (except for complete graphs, cycles, bipartite graphs and some trivial cases) are the Hadamard graphs $H_n$ with $2^n$ vertices and $n$ a multiple of $4$. In this paper, we consider the graphs in Hamming schemes, we determined the quantum chromatic numbers of one class of such graphs. Notably, this is the second known family of graphs whose quantum chromatic numbers are explicitly determined except for some cases aforementioned. We also provide some bounds for the quantum chromatic numbers of some other graphs in Hamming schemes. Consequently, we can obtain the quantum chromatic numbers of products of some graphs. 
\end{abstract} \begin{keyword} chromatic number \sep quantum chromatic number \sep colouring \sep quantum colouring \MSC 05C15 \sep 05E30 \sep 94B25 \sep 97K30 \end{keyword} \end{frontmatter} \section{Introduction} \label{intro} In recent years, combinatorial designs and graph theory have become useful tools in the study of quantum communications and quantum information processing, mainly reflected in the following aspects: \begin{itemize} \item Quantum states constructed from graphs and hypergraphs are used to study entanglement phenomena and construct high-performance quantum error correction codes \cite{AHKS06,cameron}; \item Spherical designs are used to construct various types of unbiased bases for quantum measurement \cite{feng}; \item Classical combinatorial designs are extended to quantum versions (quantum orthogonal Latin squares, etc.) to study and construct various maximally entangled quantum states \cite{CW}; \item Quantum state transfer is employed for transmitting quantum information using quantum networks, for example, the so-called perfect state transfer, uniform mixing, etc. \cite{Ada}; \item Some parameters and concepts of classical graphs are extended to their quantum versions, such as quantum homomorphisms, quantum chromatic numbers, quantum independence numbers, etc. \cite{cameron}. \end{itemize} Let $\Gamma$ be a simple graph whose vertex set is $V$ and edge set $E$. A colouring of $\Gamma$ is an assignment of colours to the vertices of the graph such that the two vertices of each edge receive different colours. Graph colouring is of particular interest since it finds applications in quantum information theory and communication, as seen in \cite{AHKS06}. Classical graph colouring can be interpreted as a so-called non-local game, where two players Alice and Bob collaborate to answer pairs of questions without communication using some prior agreed-upon strategy. Quantum colouring of graphs is a modification of classical graph colouring in which the players may use ``quantum'' strategies, meaning that a shared entangled resource is available. In these colouring games, we are interested in the minimum number of colours needed to win. For a classical graph $\Gamma$, this minimum number is denoted by $\chi(\Gamma)$ and termed the chromatic number. For quantum strategies, it is denoted by $\chi_q(\Gamma)$ and called the quantum chromatic number. The mathematical definitions of these notions will be given in the next section. Notably, a tremendous quantum advantage was exhibited in \cite{AHKS06}: using a result in \cite{PF}, it was shown that Hadamard graphs provide examples of an exponential gap between the quantum and classical chromatic numbers. However, in general, it is difficult to determine the chromatic number of a given graph, and even harder to evaluate or estimate the quantum chromatic number of the graph. In \cite{ji}, Ji proved that determining these numbers is NP-hard. To date, except for complete graphs, cycles, bipartite graphs, Hadamard graphs $H_n$ and some trivial cases, only a few lower bounds on the quantum chromatic numbers are known for some sporadic graphs; as far as we know, the only non-trivial family of graphs whose quantum chromatic numbers are explicitly determined is the class of Hadamard graphs, which were defined by Ito \cite{Ito} in 1985. Using the representations of certain groups and extensive computations, Ito obtained the spectra of the Hadamard graphs.
Very recently, Menamara \cite{Mena} also calculated the quantum chromatic numbers of the Hadamard graphs of order $N = 2^n$ for $n$ a multiple of $4$ using character sums over finite fields and the upper bound derived by Avis et al.\ \cite{AHKS06}, as well as an application of the Hoffman-like lower bound of Elphick and Wocjan \cite{CW} that was generalized by Ganesan \cite{Gan} for quantum graphs. One of the main results in \cite{Mena} is as follows: \begin{thm}\cite{Mena}\label{thm-1} (Exact quantum chromatic number of Hadamard graphs). Let $H_n$ be the Hadamard graph on $2^n$ vertices, $n$ a multiple of 4. Then, \begin{equation}\label{f-1} \chi_q(H_n) = n. \end{equation} \end{thm} We note that the above result is already known; see for example \cite{CW}. Menamara \cite{Mena} gave a new proof of it by providing an explicit quantum colouring of $H_n$. In this paper, we give a new method for calculating the spectrum of the Hadamard graph in Theorem \ref{thm-1} by using some properties of Krawtchouk polynomials. We also determine the quantum chromatic numbers of some other graphs in Hamming schemes, and we provide some bounds on the quantum chromatic numbers of further graphs in Hamming schemes. The organization of the paper is as follows: In Sect. \ref{prelim}, we give some background on quantum information and quantum measurements, as well as some basic concepts of graph theory. In Sect. \ref{main results}, we consider the graphs in Hamming schemes. Using some spectral bounds on the quantum chromatic numbers, we obtain the quantum chromatic numbers of one class of such graphs; see Theorem \ref{main-1}. Some bounds on the quantum chromatic numbers of another class of graphs in Hamming schemes are provided as well (Theorem \ref{thm-3.5} and Proposition \ref{prop-3.6}). By utilizing the products of graphs, we can also get the quantum chromatic numbers of some graphs (Theorem \ref{thm-3.11}). \section{Preliminaries}\label{prelim} \subsection{Some basic concepts of quantum communication}\label{Some basic concepts of quantum communication} \subsubsection{ Quantum state} In digital communications, information is represented as a $K$-tuple $c=(c_0,c_1,\cdots,c_{K-1})$, where the entries $c_i$ belong to a $q$-ary set $Q$. In most cases, $Q$ is chosen as the finite field $\mathbb{F}_q$ with $q$ elements, or a cyclic group $\{0,1,\cdots,q-1\} \pmod q$. Then $c$ can be viewed as a vector in $Q^K$. In quantum communication, each qubit, denoted by $|v\rangle =(v_0,v_1,\cdots, v_{K-1})^T$, is a unit vector in the $K$-dimensional vector space $\mathbb{C}^K$. For every $|v\rangle =(v_0,v_1,\cdots, v_{K-1})^T$, $|u\rangle =(u_0,u_1,\cdots, u_{K-1})^T\in \mathbb{C}^K$, define the inner product \begin{equation*} \langle u|v\rangle=\sum_{i=0}^{K-1}u_iv_i^*. \end{equation*} If $\langle u|v\rangle=0$, we say that $|u\rangle$ and $|v\rangle$ are orthogonal. A quantum state is a vector in the space $\mathbb{C}^{d_1}\otimes \mathbb{C}^{d_2}\otimes \cdots \otimes\mathbb{C}^{d_K}$ which is the tensor product of complex spaces. Take $V_i=\mathbb{C}^{d_i}$, $1\leq i\leq K$, and choose an orthonormal basis of $V_i$ as $|0\rangle, |1\rangle, \cdots, |d_{i}-1\rangle$. Then $$\{|e_1\rangle\otimes\cdots\otimes|e_K\rangle: 0\leq e_i\leq d_i-1, (1\leq i\leq K)\}$$ forms an orthonormal basis of $\mathfrak{V}:=\mathbb{C}^{d_1}\otimes \mathbb{C}^{d_2}\otimes \cdots \otimes\mathbb{C}^{d_K}$.
Thus each quantum state in $\mathfrak{V}$ can be uniquely represented as \begin{equation*} |v\rangle=\sum_{0\leq e_i\leq d_i-1,1\leq i\leq K}a_{e_1,\cdots,e_K}|e_1\rangle\otimes\cdots\otimes|e_K\rangle, a_{e_1,\cdots,e_K}\in \mathbb{C}. \end{equation*} \subsubsection{ Quantum measurement} Let $H=(h_{ij})_{0\leq i,j\leq K-1}$ be a Hermitian matrix. Then the quantum measurement of $H$ on $|v\rangle\in \mathbb{C}^K$ is defined by $H|v\rangle$. In quantum communication, $H$ can be written as $H=\sum_{i,j=0}^{K-1}h_{ij}|i\rangle \langle j|$. Generally speaking, it is not easy to devise a measurement procedure which uniquely identifies the given quantum state from the statistical data produced by the measurements. For example, if the state of the quantum system is given by a $K \times K$ density matrix, the complete measurement statistics of one fixed {\it von Neumann} measurement is not sufficient to reconstruct the state; see e.g. \cite{kla}. However, it is possible to perform a somewhat more general measurement procedure on a quantum system, namely a {positive operator-valued measurement} (or POVM for short); see \cite{peres}. Mathematically, a POVM is a collection of positive semi-definite operators $E_i \geq 0$, where each $E_i$ is a $K \times K$ matrix called a POVM element, such that the sum of all these operators equals the identity matrix $I_K$. POVMs constitute a basic ingredient in many applications of quantum information processing: quantum tomography, quantum key distribution required in cryptography, discrete Wigner function, quantum teleportation, quantum error correction codes, dense coding, entanglement swapping, covariant cloning and so on; see for example \cite{NC}. \subsubsection{Projective measurement} In quantum measurements, one usually uses projective matrices $P=(p_{ij})_{1\leq i,j\leq K}: \mathbb{C}^K\rightarrow \mathbb{C}^K$. A Hermitian matrix $P$ is called projective if $P^2=P=P^*$. Suppose that $|v\rangle$ is contained in the image of $P$, that is, there is a vector $|a\rangle\in \mathbb{C}^K$ such that $P|a\rangle=|v\rangle$. Then \begin{equation*} P|v\rangle=P^2|a\rangle=P|a\rangle=|v\rangle. \end{equation*} Thus $P|_{{\rm Im}(P)}={\rm id}$. Then there exists a unitary matrix $U$ such that $U^*PU={\rm diag}(I_r,0)$, where $r={\rm rank}(P)$. Finally, a set of projective operators $\{P_1,P_2,\cdots, P_K\}$ in $\mathbb{C}^{K\times K}$ is called a complete POVM if $P_iP_j=0_K$ for every $1\leq i\neq j\leq K$, and $\sum_{i=1}^KP_i=I_K$. In this case, it can be proved that there exists a unitary matrix $U$ such that \begin{equation*} U^*P_iU={\rm diag}(0,0,\cdots,1,0,\cdots,0), 1\leq i\leq K, \end{equation*} where $1$ is in the $i$-th entry. Moreover, $\mathbb{C}^K={\rm Im}(P_1)\oplus{\rm Im}(P_2)\oplus \cdots \oplus {\rm Im}(P_K).$ \subsection{ Quantum homomorphism of graphs and graph colouring}\label{graph theory} Let $\Gamma=(V,E)$ be a simple graph with $n=|V|$ vertices and $m=|E|$ edges. A homomorphism $\varphi$ from a graph $\Gamma_1=(V_1,E_1)$ to a graph $\Gamma_2=(V_2,E_2)$ is a mapping $\varphi: \Gamma_1\rightarrow \Gamma_2$ satisfying $(\varphi(u),\varphi(v))\in E_2$ if $(u,v)\in E_1$. For example, if $\Gamma_2=K_c$ is a complete graph on $c$ vertices, then $\varphi: \Gamma=(V, E)\rightarrow K_c$ being a homomorphism means that if $(u,v)\in E$, then $\varphi(u)\neq \varphi(v)$. We name the minimum number $c$ such that there exists a homomorphism from $\Gamma$ to $K_c$ the chromatic number of $\Gamma$ and denote it by $\chi(\Gamma)$.
The maximum number $c$ such that there is a homomorphism from $K_c$ to $\Gamma$ is called the clique number of $\Gamma$ and denoted by $\omega(\Gamma)$. Let $\bar{\Gamma}$ be the complement graph of $\Gamma$. Then $\alpha(\Gamma):=\omega(\bar{\Gamma})$ is called the independence number of $\Gamma$. \begin{defn}A quantum homomorphism from a graph $\Gamma_1=(V_1,E_1)$ to a graph $\Gamma_2=(V_2,E_2)$ means that there is a positive integer $d$ such that for every $x\in V_1$, there exists a complete orthogonal projective system $\mathfrak{F}_x=\{P_{x,y}: y\in V_2\}$ satisfying the following two conditions: \begin{enumerate} \item (Completeness) For every $x\in V_1$, $\mathfrak{F}_x$ is a complete orthogonal system, namely, $P_{x,y}^2=P_{x,y}=P_{x,y}^*$ and, when $y\neq y'$, we have $P_{x,y}P_{x,y'}=0_d$. Moreover, $\sum_{y\in V_2}P_{x,y}=I_d$. \item (Orthogonality) For every $x,x'\in V_1$, $y,y'\in V_2$, we have $P_{x,y}P_{x',y'}=0_d$ whenever $(x,x')\in E_1$ and $(y,y')\not\in E_2$. \end{enumerate} \end{defn} It is easy to see that a classical graph homomorphism is actually a quantum homomorphism. We note that, in a recent paper \cite{Ada}, Chan et al.\ gave a definition of quantum isomorphism of graphs and proved that any two Hadamard graphs on the same number of vertices are quantum isomorphic. \begin{defn}The quantum chromatic number of a graph $\Gamma$, denoted by $\chi_q(\Gamma)$, is the minimum number $c$ such that there exists a quantum homomorphism from $\Gamma$ to the complete graph $K_c$.\end{defn} By definition, we see that for every graph $\Gamma$, \begin{equation}\label{f-2} \chi_q(\Gamma)\leq \chi(\Gamma). \end{equation} It seems very hard to determine the quantum chromatic number of a given graph. To date, except for some sporadic graphs on a small number of vertices and some trivial cases, the only known class of graphs whose quantum chromatic numbers are determined is the class of Hadamard graphs $H_n$ with $n$ a multiple of $4$. This situation motivates the study of quantum chromatic numbers of graphs. The following questions are of particular interest. \begin{itemize} \item For a specific class of graphs, determine their chromatic numbers; \item Separation problem: find graphs whose quantum chromatic numbers are strictly less than their chromatic numbers; note that $\chi_q(H_n)=n$ when $4|n$, whereas $\chi(H_n)\geq 2^{n/2}$ when $n$ is large enough; \item Find some lower or upper bounds for the chromatic numbers of some class of graphs. \end{itemize} For more information about quantum chromatic numbers, we refer the reader to \cite{cameron, CW, feng}. \subsection{Spectra of Cayley graphs and bounds on quantum chromatic numbers}\label{Spectrum of Cayley graphs} Let $\Gamma=(V,E)$ be a simple graph with $|V|=n$ and $|E|=m$, and let $A$ be its adjacency matrix. The spectrum of $A$ is also termed the spectrum of $\Gamma$. For every $x\in V$, the number of its neighbours is defined as its valency (or degree). Label the vertices of $\Gamma$ as $x_1,\cdots,x_n$ and denote the valency of $x_i$ by $k_i$. Then $D:={\rm diag}(k_1,\cdots,k_n)$ is the degree matrix of $\Gamma$, and $L=D-A$ (resp. $L^+=D+A$) is the Laplacian (resp. signless Laplacian) matrix of $\Gamma$. Suppose that the eigenvalues of $A$, $L$ and $L^+$ are $\lambda_1\geq \lambda_2\geq \cdots\geq \lambda_n$, $\theta_1\geq \theta_2\geq \cdots \geq \theta_n(= 0)$, and $\delta_1\geq \delta_2\geq \cdots \geq \delta_n$, respectively. The following result is known; see for example \cite{CW} and the references therein.
\begin{thm}\label{thm-2.3} Let notations be defined as above. Then \begin{equation}\label{f-3} \chi(\Gamma)\geq 1+\max\left\{\frac{\lambda_1}{|\lambda_n|}, \frac{2m}{2m-n\delta_n},\frac{\lambda_1}{\lambda_1-\delta_1+\theta_1},\frac{n^+}{n^-},\frac{n^-}{n^+},\frac{S^+}{S^-},\frac{S^-}{S^+}\right\}, \end{equation} where $n^+$ (resp. $n^-$) is the number of positive (resp. negative) eigenvalues of $\Gamma$, and $S^+$ (resp. $S^-$) is the summation of the squares of the positive (resp. negative) eigenvalues of $\Gamma$. \end{thm} Some quantum versions of Theorem \ref{thm-2.3} are known; for example, a spectral lower bound on the quantum chromatic number is provided in \cite{CW}. \begin{lem}\cite{CW}\label{lem-2.4} For any graph $\Gamma$ with eigenvalues $\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n$, we have \begin{equation}\label{f-4'} \chi_q(\Gamma)\geq 1+\frac{\lambda_1}{|\lambda_n|}. \end{equation} \end{lem} Let $G$ be a finite group. A { representation} of $G$ is a homomorphism $\rho: G \rightarrow GL(U)$ for a (finite-dimensional) non-zero vector space $U$. The dimension of $U$ is called the { degree} of $\rho$. Two representations $\rho: G\rightarrow GL(U)$ and $\varrho: G\rightarrow GL(W)$ are {\it equivalent}, denoted by $\rho\sim \varrho$, if there exists an isomorphism $T: U\rightarrow W$ such that $\rho_g=T^{-1}\varrho_g T$ for all $g\in G$. For every representation $\rho: G\rightarrow GL(U)$ of $G$, the { character} $\chi_\rho$ of $\rho$ is defined by \begin{equation*} \chi_\rho: G\rightarrow \mathbb{C}, \chi_\rho(g)=\tr(\rho(g)) \mbox{ for all $g\in G$}, \end{equation*} where $\tr(\rho(g))$ is the trace of the representation matrix with respect to a basis of $U$. A subspace $W$ of $U$ is said to be $G$-{invariant} if $\rho(g)\omega\in W$ for every $g\in G$ and $\omega\in W$. Obviously, $\{0\}$ and $U$ are $G$-invariant subspaces, called trivial subspaces. If $U$ has no non-trivial $G$-invariant subspaces, then $\rho$ is called an {irreducible representation} and $\chi_\rho$ an {irreducible character} of $G$. Let $S$ be a subset of $G$ with $S^{-1}:=\{s^{-1}: s\in S\}=S$. A Cayley graph over $G$ with connection set $S$ is defined by $\Gamma:={\rm Cay}(G,S)$, where the vertex set is $G$ and two elements $x,y\in G$ are adjacent if and only if $xy^{-1}\in S$. The set $S$ is said to be conjugation closed if for every $x\in G$ and $s\in S$ we have $x^{-1}sx\in S$; in this case, the Cayley graph ${\rm Cay}(G,S)$ is called normal. For normal Cayley graphs, the following result is well-known. \begin{lem}\label{lem-2.3}\cite[pp. 69-70]{stein} Let $G=\{g_1,\cdots,g_n\}$ be a finite group of order $n$ and let $\rho^{(1)},\cdots,\rho^{(s)}$ be a complete set of unitary representatives of the equivalence classes of irreducible representations of $G$. Let $\chi_i$ be the character of $\rho^{(i)}$ and $d_i$ be the degree of $\chi_i$. Let $S$ be a symmetric set and suppose further that $gSg^{-1}=S$ for all $g\in G$. Then the eigenvalues of the adjacency matrix $A$ of the Cayley graph ${\rm Cay}(G,S)$ with respect to $S$ are given by \begin{equation*} \lambda_k=\frac{1}{d_k}\sum_{g\in S}\chi_k(g), 1\leq k\leq s, \end{equation*} where each $\lambda_k$ has multiplicity $d_k^2$. Moreover, the vectors \begin{equation*} v_{ij}^{(k)}=\frac{\sqrt{d_k}}{|G|}\left(\rho_{ij}^{(k)}(g_1),\cdots,\rho_{ij}^{(k)}(g_n)\right)^T, 1\leq i,j\leq d_k \end{equation*} form an orthonormal basis for the eigenspace $V_{\lambda_k}$. \end{lem} Note that a proof of Lemma \ref{lem-2.3} can also be found in \cite[Theorem 9]{murty}.
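As a quick illustration of Lemma \ref{lem-2.3} in the abelian case, where every irreducible representation has degree one and the characters of $\mathbb{Z}_2^n$ are $\chi_a(x)=(-1)^{a\cdot x}$, the following Python sketch compares the character-sum eigenvalues of a small Cayley graph over $\mathbb{Z}_2^4$ with the numerically computed spectrum of its adjacency matrix. The choice of connection set (vectors of weight $n/2$, anticipating the Hadamard graphs below) and the use of the \texttt{numpy} library are our own illustrative assumptions and not part of the results of this paper.
\begin{verbatim}
import itertools
import numpy as np

n = 4
vertices = list(itertools.product([0, 1], repeat=n))
index = {v: i for i, v in enumerate(vertices)}

# Connection set: all vectors of Hamming weight n/2.
S = [v for v in vertices if sum(v) == n // 2]

# Adjacency matrix of the Cayley graph Cay(Z_2^n, S).
A = np.zeros((2 ** n, 2 ** n))
for x in vertices:
    for s in S:
        y = tuple((xi + si) % 2 for xi, si in zip(x, s))
        A[index[x], index[y]] = 1

# Eigenvalues predicted by the degree-one characters:
#   lambda_a = sum_{s in S} (-1)^(a.s),  a in Z_2^n.
predicted = sorted(
    sum((-1) ** sum(ai * si for ai, si in zip(a, s)) for s in S)
    for a in vertices
)

numerical = sorted(np.linalg.eigvalsh(A))
assert np.allclose(predicted, numerical)
print(sorted(set(predicted)))   # distinct eigenvalues: [-2, 0, 6]
\end{verbatim}
For $n=4$ the distinct eigenvalues are $6$, $0$ and $-2$, in agreement with formula (\ref{f-4}) below.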
As a consequence, if $G$ is a finite abelian group, decomposed as a direct sum of cyclic groups $G=\mathbb{Z}_{n_1}\oplus \cdots \oplus \mathbb{Z}_{n_r}$, then the spectrum of the Cayley graph $\Gamma={\rm Cay}(G,S)$ is given by \begin{equation}\label{f-4} \lambda_g=\sum_{s\in S}\chi_g(s), \end{equation} where $\chi_g(s)=\prod_{j=1}^{r}\xi_{n_j}^{g_js_j}$, $\forall g=(g_1,\cdots,g_r)\in G$, $s=(s_1,\cdots,s_r)\in S$, and $\xi_{n_j}$ is a primitive $n_j$-th root of unity in $\mathbb{C}$. Of course, (\ref{f-4}) can also be proved by using an elementary method. \subsection{Krawtchouk polynomials} For positive integers $n,\ell$, and $q$, the Krawtchouk polynomial of degree $\ell$ in the variable $x$ is defined by \begin{equation}\label{f-5} K_\ell^{n,q}(x)=\sum_{j=0}^\ell(-1)^j(q-1)^{\ell-j}\tbinom{x}{j}\tbinom{n-x}{\ell-j}. \end{equation} Krawtchouk polynomials are a family of orthogonal polynomials with many important applications in fields such as coding theory, functional analysis and approximation theory. For our purpose, we list some of the properties of these polynomials as follows. \begin{thm}\cite{Lev}\label{Krawchouk} The Krawtchouk polynomials have the following properties. \begin{enumerate} \item (Orthogonality Relations): For every $i,j=0,1,\cdots,n$, \begin{equation}\label{f-6} \sum_{d=0}^nK_i^n(d)K_j^n(d)(q-1)^d\tbinom{n}{d}=q^n(q-1)^i\tbinom{n}{i}\delta_{i,j}. \end{equation} \item (Recursive Relation): For any $k = 1,\cdots, n$ and any real $x$ \begin{eqnarray} K_k^n(x)&=& K_k^{n-1}(x-1)-K_{k-1}^{n-1}(x-1) \\ K_k^n(x) &=& K_k^{n-1}(x)+(q-1)K_{k-1}^{n-1}(x)\\ K_k^{n-1}(x)&=&\sum_{j=0}^kK_j^n(x)(1-q)^{k-j}. \end{eqnarray} \item (Reciprocal Law): \begin{equation}\label{f-14} (q-1)^i\tbinom{n}{i}K_d^n(i)=(q-1)^d\tbinom{n}{d}K_i^n(d). \end{equation} \item (Generating Function): \begin{equation}\label{f-15} \sum_{k=0}^{n}K_k^n(d)z^k=(1-z)^{d}(1+(q-1)z)^{n-d}. \end{equation} \item (Inversion Formula): \begin{equation}\label{f-16} f(x)=\sum_{j=0}^nf_jK_j^n(x) \end{equation} if and only if for every $i=0,1,\cdots,n$, \begin{equation}\label{f-17} f_i=q^{-n}\sum_{j=0}^nf(j)K_j^n(i). \end{equation} \end{enumerate} \end{thm} \subsection{Hamming schemes} Let $q\geq 2, n\geq 1$ be integers and let $Q$ be a set of $q$ elements, $Q^n=\{(x_1,x_2,\cdots, x_n): x_i\in Q\}$. For $x=(x_1,x_2,\cdots, x_n), y=(y_1,y_2,\cdots,y_n)\in Q^n$, the Hamming distance of $x,y$, denoted by $d(x,y)$, is the number of coordinates in which they differ. For every $1\leq \ell\leq n$, the graph $H(n,q,\ell)$ is defined as $H(n,q,\ell)=(V,E)$, where the vertex set is $V=Q^n$ and two vectors $x,y$ are adjacent if $d(x,y)=\ell$. Let $A_\ell$ be the adjacency matrix of $H(n,q,\ell)$. Then $\{A_\ell: 0\leq \ell\leq n\}$, where $A_0=I_{q^n}$, forms an association scheme, named the Hamming scheme. When $q$ is fixed, we write $H(n,q,\ell)$ simply as $H_{n,\ell}$. In this paper, we call $H_{n,\ell}$ a Hamming graph for each $\ell$. The eigenvalues of $A_\ell$, $0\leq \ell\leq n$, are well known. In fact, $H_{n,\ell}$ is a Cayley graph. Let $Q=\{0,1,2,\cdots,q-1\} \pmod q$ be a cyclic group of order $q$ and let $S=\{x\in Q^n: wt(x)=\ell\}$, where $wt(x)=d(x,0_n)$. Then $H_{n,\ell}={\rm Cay}(Q^n,S)$. Thus for every $a\in Q^n$, the corresponding eigenvalue is $\lambda_a=\sum_{x\in S}\xi_q^{a\cdot x}$, where $a\cdot x$ is the inner product of $x$ with $a$, namely, $(x_1,\cdots,x_n)\cdot (a_1,\cdots,a_n)=\sum_{i=1}^nx_ia_i$, and $\xi_q=e^{\frac{2\pi \sqrt{-1}}{q}}$ is a primitive $q$-th root of unity. Write $a=(a_0,\cdots,a_{n-1})$ and $wt(a)=r$.
Then \begin{equation*} \lambda_a=\sum_{x=(x_0,\cdots,x_{n-1})\in Q^n, wt(x)=\ell}\xi_q^{\sum_{i=0}^{n-1}x_ia_i}. \end{equation*} Since \begin{equation*} \sum_{0\neq x_i\in Q}\xi_q^{x_ia_i}=\left\{\begin{array}{ll} q-1, & \mbox{ if $a_i=0$}, \\ -1, & \mbox{ if $a_i\neq 0$ }, \end{array} \right. \end{equation*} we know that \begin{equation}\label{f-n1} \lambda_a=\sum_{j=0}^\ell(-1)^j(q-1)^{\ell-j}\tbinom{r}{j}\tbinom{n-r}{\ell-j}=K_\ell(r). \end{equation} Even though we have the above formula for computing the eigenvalues of $H_{n,\ell}$, it is not an explicit expression. In this paper, we will give some concise formulae for eigenvalues of Hamming graphs. \section{Main results}\label{main results} Let $V_n=\{(x_0,x_1,\cdots,x_{n-1}): x_i\in \mathbb{F}_2\}$, where $\mathbb{F}_2$ is the binary field. $V_n$ is a $n$-dimensional vector space over $\mathbb{F}_2$. For $x=(x_0,x_1,\cdots,x_{n-1})\in V_n$, the Hamming weight of $x$, denoted by $wt(x)$, is the number of nonzero coordinates of $x$, the support of $x$ is ${\rm supp}(x):=\{i: 0\leq i\leq n-1, x_i=1\}$. For $x,y\in V_n$, the Hamming distance between $x$ and $y$ is $d(x,y)=wt(x-y)$. The following defined Hadamard graph is isomorphic to that defined by Ito \cite{Ito}. \begin{defn}Let $n$ be a positive integer with $4|n$. Define the Hadamard graph $H_n=(V_n,E_n)$, where $V_n$ is the $n$-dimensional vector space over $\mathbb{F}_2$, two vectors $x,y\in V_n$ are adjacent if and only if $d(x,y)=n/2$.\end{defn} In this paper, we consider the graph $H_{n,\ell}$. That is, $H_{n,\ell}=(V_n,E_n^{(\ell)})$, $V_n$ is the $n$-dimensional vector space over $\mathbb{F}_2$, two vectors $x,y\in V_n$ are adjacent if and only if $d(x,y)=\ell$. Obviously, the Hadamard graph $H_n$ is $H_{n,n/2}$. Note that if $\ell$ is odd, then $H_{n,\ell}$ is a bipartite graph and then its quantum chromatic number is $2$. Thus in next sequel, we assume that $\ell$ is even. In this section, we first give a simple method to calculate the spectrum of $H_n$ and prove that $\chi_q(H_n)=n$. Then for the Hamming graphs, we present some new results on the quantum chromatic numbers of such graphs. \subsection{Proof of Theorem \ref{thm-1}}\label{proof of Thm-1} Firstly, it is easy to see that $H_n={\rm Cay}(V_n,S)$, where $S=\{x\in V_n: wt(x)=n/2\}$. The character group of $V_n$ (as an elementary commutative $2$-group of rank $n$) is $\widehat{V_n}=\{\phi_a: a\in V_n\}$, where $\phi_a(x)=(-1)^{x\cdot a}$, $x\cdot a$ is the inner product of $x$ and $a$, i.e., $x\cdot a=\sum_{i=0}^{n-1}x_ia_i$, $a=(a_0,\cdots,a_{n-1})$. By (\ref{f-4}), the eigenvalues of $H_n$ are \begin{equation}\label{f-18} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}, a\in V_n. \end{equation} Obviously, $\lambda_{0_n}=|S|=\tbinom{n}{n/2}$. Take $a=1_n:=(1,1,\cdots,1)$. Then \begin{equation*} \lambda_{1_n}=\sum_{s\in S}(-1)^{s\cdot 1_n}=\sum_{s\in S}(-1)^{wt(s)}=\sum_{s\in S}(-1)^{n/2}=\sum_{s\in S}1=|S|=\tbinom{n}{n/2}=\lambda_{0_n}. \end{equation*} And for every $a\in V_n$, $a\neq 0_n, 1_n$, then $\lambda_a<\lambda_{0_n}$. Thus $\lambda_{\max}=\tbinom{n}{n/2}$ with multiplicity $2$. $H_n$ has two isomorphic components. Below, we proceed to find the minimum eigenvalue $\lambda_{\min}$. For $a(\neq 0_n,1_n)\in V_n$, \begin{equation*} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}=\sum_{x\in V_n: wt(x)=n/2}(-1)^{x\cdot a}. \end{equation*} Suppose that $a=(a_0,\cdots,a_{n-1})\in V_n$, $wt(a)=r$, $1\leq wt(a)<n$. Assume that ${\rm supp}(a)=\{i_1,i_2,\cdots,i_r\}$. Let $x$ run through $V_n$ with weight $n/2$. 
If $|{\rm supp}(x)\cap {\rm supp}(a)|=j$, then $x\cdot a=j$. A simple combinatorial count shows that \begin{equation*} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}=\sum_{x\in V_n: wt(x)=n/2}(-1)^{x\cdot a}=\sum_{j=0}^{n/2}(-1)^j\tbinom{r}{j}\tbinom{n-r}{n/2-j}=K_{n/2}^n(r). \end{equation*} By using the Reciprocal Law of the Krawtchouk polynomials (see Theorem \ref{Krawchouk}), we have \begin{equation*} K_{n/2}^n(r)=\frac{\tbinom{n}{n/2}}{\tbinom{n}{r}}K_r^n(n/2). \end{equation*} Note that $K_r^n(n/2)$ is the coefficient of $x^r$ in $(1-x)^{n/2}(1+x)^{n-n/2}=(1-x^2)^{n/2}$. Thus, if $r=2j+1$ is odd, then $\lambda_a=K_{n/2}^n(2j+1)=0$; if $r=2j$ for some $j$, then \begin{equation}\label{f-19} \lambda_a=(-1)^j\frac{\tbinom{n}{n/2}\tbinom{n/2}{j}}{\tbinom{n}{2j}}. \end{equation} Now, it is easy to see that the minimum eigenvalue of $H_n$ is \begin{equation}\label{f-19'} \lambda_{\min}=-\frac{\tbinom{n}{n/2}\tbinom{n/2}{1}}{\tbinom{n}{2}}=-\frac{\tbinom{n}{n/2}}{{n-1}}=-\frac{\lambda_{\max}}{{n-1}}. \end{equation} Then, by the spectral bound in (\ref{f-4'}), we obtain \begin{equation}\label{f-20} \chi_q(H_n)\geq 1+\frac{\lambda_{\max}}{|\lambda_{\min}|}=n. \end{equation} Next, we show that $\chi_q(H_n)\leq n$. To this end, we need to find a quantum homomorphism from $H_n$ to the complete graph $K_n$. Very recently, Menamara \cite{Mena} found such a homomorphism. We provide his result for completeness. For every $x=(x_0,x_1,\cdots,x_{n-1})\in V_n$ and $0\leq \alpha\leq n-1$, we define the following operators: \begin{equation}\label{f-21} P_x^\alpha=(a_x^{\alpha}(i,j))_{0\leq i,j\leq n-1}, a_x^\alpha(i,j)=\frac{1}{n}\xi_n^{(j-i)\alpha}(-1)^{x_i+x_j}, \end{equation} where $\xi_n=e^{\frac{2 \pi \sqrt{-1}}{n}}$ is a primitive $n$-th root of unity in $\mathbb{C}$. Then it is obvious that $P_x^\alpha$ is a Hermitian matrix. Moreover, let $(P_x^\alpha)^2=(b(i,j))_{0\leq i,j\leq n-1}$. Then \begin{eqnarray*} b(i,j) &=& \sum_{k=0}^{n-1}a_x^\alpha(i,k)a_x^{\alpha}(k,j) \\ &=& \frac{1}{n^2}\sum_{k=0}^{n-1}\xi_n^{(k-i)\alpha}(-1)^{x_i+x_k}\xi_n^{(j-k)\alpha}(-1)^{x_j+x_k}\\ &=&\frac{1}{n^2}\xi_n^{(j-i)\alpha}(-1)^{x_i+x_j}\sum_{k=0}^{n-1}1\\ &=&a_x^{\alpha}(i,j). \end{eqnarray*} Thus $(P_x^\alpha)^2=P_x^\alpha$. That is, $P_x^{\alpha}$ is a projection. For every $x\in V_n$, let $\triangle_x=\{P_x^{\alpha}: 0\leq \alpha\leq n-1\}$. We aim to prove that $\triangle_x$ is a complete orthogonal system of $\mathbb{C}^{n\times n}$. Indeed, for every $0\leq \alpha\neq \alpha'\leq n-1$, denote $P_x^\alpha P_x^{\alpha'}=(c(i,j))$. Then \begin{eqnarray*} c(i,j) &=& \sum_{k=0}^{n-1}\frac{1}{n^2}\xi_n^{(k-i)\alpha}(-1)^{x_i+x_k}\xi_n^{(j-k)\alpha'}(-1)^{x_j+x_k} \\ &=&\frac{1}{n^2}\xi_n^{j\alpha'-i\alpha}(-1)^{x_i+x_j}\sum_{k=0}^{n-1}\xi_n^{k(\alpha-\alpha')}\\ &=&0. \end{eqnarray*} Therefore, $P_x^\alpha P_x^{\alpha'}=0$. Furthermore, we can prove that for every $x\in V_n$, the above defined $\triangle_x$ is complete, i.e., $\sum_{\alpha=0}^{n-1}P_x^{\alpha}=I_n$. Let $\sum_{\alpha=0}^{n-1}P_x^{\alpha}=(u (i,j))_{0\leq i,j\leq n-1}$. Then \begin{eqnarray*} u(i,j) &=& \sum_{\alpha=0}^{n-1}a_x^{\alpha}(i,j) \\ &=&\frac{1}{n}\sum_{\alpha=0}^{n-1}\xi_n^{(j-i)\alpha}(-1)^{x_i+x_j}\\ &=&\delta_{i,j}(-1)^{x_i+x_j}\\ &=&\delta_{i,j}, \end{eqnarray*} where $\delta_{i,j}=1$ if $i=j$, and $0$ otherwise. Thus $\sum_{\alpha=0}^{n-1}P_x^{\alpha}=I_n$. Finally, let $x,y\in V_n$ with $(x,y)\in E$ be an edge of $H_n$, that is, $d(x,y)=n/2=2t$, where we write $n=4t$.
Then \begin{eqnarray*} ( P_x^{\alpha}P_y^\alpha)(i,j)&=&\frac{1}{n^2}\sum_{k=0}^{n-1}\xi_n^{(k-i)\alpha}(-1)^{x_i+x_k}\xi_n^{(j-k)\alpha}(-1)^{y_j+y_k}\\ &=&\frac{1}{n^2}\xi_{n}^{(j-i)\alpha}(-1)^{x_i+y_j}\sum_{k=0}^{n-1}(-1)^{x_k+y_k}\\ &=&\frac{1}{n^2}\xi_{n}^{(j-i)\alpha}(-1)^{x_i+y_j}(-2t+4t-2t)\\ &=&0. \end{eqnarray*} Thus the set $\mathfrak{F}=\{\triangle_x: x \in V_n\}$ provides a quantum colouring of $H_n$. Therefore, by the definition of quantum chromatic numbers, we know that \begin{equation}\label{f-23} \chi_q(H_n)\leq n. \end{equation} Combining (\ref{f-20}) and (\ref{f-23}), we have $\chi_q(H_n)=n$ as required. \subsection{Some new results}\label{neq results} \subsubsection{Quantum chromatic numbers of a kind of Hamming graphs} Firstly, we have the following result: \begin{thm}\label{main-1}Let $V_n=\mathbb{F}_2^n$ be the $n$-dimensional vector space over $\mathbb{F}_2$ and let $S=\{x\in V_n: wt(x)=\ell\}$. Define a graph by $H_{n,\ell}:={\rm Cay}(V_n,S)$. If $n=4t-1$ and $\ell=2t$ for some positive integer $t$, then the spectrum of $H_{n,\ell}$ is \begin{equation}\label{f-36} \lambda_a=\left\{\begin{array}{cl} (-1)^j\frac{\tbinom{4t-1}{2t}\tbinom{2t-1}{j}}{\tbinom{4t-1}{2j}} & \mbox{ if $wt(a)=r=2j$, $0\leq j\leq 2t-1$,} \\ (-1)^{j+1}\frac{\tbinom{4t-1}{2t}{\tbinom{2t-1}{j}}}{\tbinom{4t-1}{2j+1}} & \mbox{ if $wt(a)=r=2j+1$, $0\leq j\leq 2t-1$}. \end{array} \right. \end{equation} Moreover, \begin{equation}\label{37} \chi_q(H_{n,\ell})=n+1. \end{equation} \end{thm} \begin{proof} For every $a=(a_0,a_1,\cdots,a_{n-1})\in V_n$, if $wt(a)=r$, the corresponding eigenvalue of $H_{n,\ell}$ is \begin{equation*} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}. \end{equation*} It is readily seen that the maximum eigenvalue of $H_{n,\ell}$ is $\lambda_{\max}=\tbinom{n}{\ell}=\lambda_{0_n}=\lambda_{1_n}$ since $\ell$ is even. For $a(\neq 0_n,1_n)\in V_n$, \begin{equation*} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}=\sum_{x\in V_n: wt(x)=\ell}(-1)^{x\cdot a}. \end{equation*} Moreover, \begin{equation}\label{f-23'} \lambda_{1_n-a}=\sum_{s\in S}(-1)^{s\cdot (1_n-a)}=\sum_{x\in V_n: wt(x)=\ell}(-1)^{x\cdot (1_n-a)}=(-1)^\ell\lambda_a=\lambda_a. \end{equation} An analysis similar to that in Section \ref{proof of Thm-1} leads to \begin{equation}\label{f-24} \lambda_a=\sum_{s\in S}(-1)^{s\cdot a}=\sum_{x\in V_n: wt(x)=\ell}(-1)^{x\cdot a}=K_\ell^n(r). \end{equation} By the Reciprocal Law of the Krawtchouk polynomials (see Theorem \ref{Krawchouk}), we have \begin{equation*} K_\ell^n(r)=\frac{\tbinom{n}{\ell}}{{\tbinom{n}{r}}}K_r^n(\ell). \end{equation*} Now, $K_r^n(\ell)$ is the coefficient of $x^r$ in the expansion of \begin{equation*} (1-x)^\ell(1+x)^{n-\ell}=(1-x^2)^{2t-1}(1-x)=\sum_{j=0}^{2t-1}(-1)^j\tbinom{2t-1}{j}(x^{2j}-x^{2j+1}). \end{equation*} Therefore, if $r=2j$ for some $j$, then \begin{equation}\label{f-25} \lambda_a=(-1)^j\frac{\tbinom{4t-1}{2t}\tbinom{2t-1}{j}}{\tbinom{4t-1}{2j}}. \end{equation} If $r=2j+1$ is odd, then \begin{equation}\label{f-26} \lambda_a=(-1)^{j+1}\frac{\tbinom{4t-1}{2t}\tbinom{2t-1}{j}}{\tbinom{4t-1}{2j+1}}. \end{equation} By (\ref{f-23'}), (\ref{f-25}) and (\ref{f-26}), one can check that \begin{equation}\label{f-27} \lambda_{\min}=-\frac{\tbinom{4t-1}{2t}}{4t-1}=-\frac{\lambda_{\max}}{4t-1}. \end{equation} In fact, by (\ref{f-25}) and (\ref{f-26}), $\lambda_a$ depends only on $wt(a)=r$. Write $\lambda_a=\rho(r)$. To find $\lambda_{\min}$, we just need to consider $\rho(4j+2)$ and $\rho(4j+1)$ for $0\leq j\leq \lfloor \frac{t}{2}\rfloor$.
Now, \begin{equation*} \frac{|\rho(4j+2)|}{|\rho(4j+1)|}=\frac{\tbinom{2t-1}{2j+1}\tbinom{4t-1}{4j+1}}{\tbinom{2t-1}{2j}\tbinom{4t-1}{4j+2}}=\frac{(4j+2)!(4t-4j-3)!(2j)!(2t-2j-1)!}{(4j+1)!(4t-4j-2)!(2j+1)!(2t-2j-2)!}=1. \end{equation*} Moreover, for $1\leq j\leq \lfloor\frac{t}{2}\rfloor$, \begin{equation*} \frac{|\rho(4j+2)|}{|\rho(4j-2)|}=\frac{\tbinom{2t-1}{2j+1}\tbinom{4t-1}{4j-2}}{\tbinom{2t-1}{2j-1}\tbinom{4t-1}{4j+2}}<1. \end{equation*} Hence the eigenvalues of $H_{4t-1,2t}$ with negative sign are \begin{equation*} \rho(1)=\rho(2)<\rho(5)=\rho(6)<\cdots<\rho\left(4\left\lfloor \frac{t}{2}\right\rfloor-3\right)=\rho\left(4\left\lfloor \frac{t}{2}\right\rfloor-2\right). \end{equation*} Note that we list just one half of them; the symmetric part is $\rho(n-1)=\rho(1)$, $\rho(n-5)=\rho(5)$, and so on. Thus, $\lambda_{\min}=\rho(1)=-\frac{\tbinom{4t-1}{2t}}{4t-1}=-\frac{\lambda_{\max}}{4t-1}.$ Thanks to Lemma \ref{lem-2.4}, we have \begin{equation}\label{f-28} \chi_q(H_{4t-1,2t})\geq 4t=n+1. \end{equation} In order to prove $\chi_q(H_{4t-1,2t})\leq 4t$, we need to find a proper quantum colouring of such a graph. To this end, for every $x=(x_0,x_1,\cdots,x_{n-1})\in V_n$, we embed it into $V_{n+1}$ by adding a coordinate at the end of $x$ as $\widetilde{x}=(x_0,x_1,\cdots,x_{n-1},0)$, i.e., $x_n=0$. Define the following set of operators on $\mathbb{C}^{4t}$: \begin{equation*} \mathfrak{F}:=\{P_x^{\alpha}: x\in V_n, 0\leq \alpha\leq 4t-1\}, \end{equation*} where \begin{equation*} P_x^{\alpha}=(p_x^{\alpha}(i,j))_{0\leq i,j\leq 4t-1}, p_x^{\alpha}(i,j)=\frac{1}{4t}\xi_{4t}^{(j-i)\alpha}(-1)^{x_i+x_j}. \end{equation*} It is obvious that each operator is Hermitian, and the $(i,j)$-th entry of $(P_x^\alpha)^2$ is \begin{equation*} ((P_x^\alpha)^2)(i,j)=\sum_{k=0}^{4t-1}p_x^\alpha(i,k)p_x^\alpha(k,j)=\frac{1}{(4t)^2}\xi_{4t}^{(j-i)\alpha}(-1)^{x_i+x_j}\sum_{k=0}^{4t-1}1= p_x^{\alpha}(i,j). \end{equation*} Thus $P_x^\alpha$ is a projection. Similarly, one can prove that \begin{equation*} P_x^\alpha P_x^{\alpha'}=0 \mbox{ if } \alpha\neq \alpha', \end{equation*} \begin{equation*} \mbox{and } \sum_{0\leq \alpha\leq 4t-1}P_x^{\alpha}=I_{4t}. \end{equation*} Now assume that $(x,y)\in E_{4t-1,2t}$ is an edge of $H_{4t-1,2t}$, i.e., $d(x,y)=2t$; we need to prove that $P_x^\alpha P_y^\alpha=0$, where $x=(x_0,x_1,\cdots,x_{4t-2})$, $y=(y_0,y_1,\cdots,y_{4t-2})$ and $x_{4t-1}=y_{4t-1}=0$. Since \begin{equation*} \sum_{k=0}^{4t-1}(-1)^{x_k+y_k}=1+\sum_{k=0}^{4t-2}(-1)^{x_k+y_k}=1-2t+2t-1=0, \end{equation*} it follows that \begin{equation*} (P_x^\alpha P_y^\alpha)_{i,j}=\sum_{k=0}^{4t-1}p_x^{\alpha}(i,k)p_y^{\alpha}(k,j)=\frac{1}{(4t)^2}\xi_{4t}^{(j-i)\alpha}(-1)^{x_i+y_j}\sum_{k=0}^{4t-1}(-1)^{x_k+y_k}=0. \end{equation*} Thus $P_x^{\alpha}P_y^{\alpha}=0$ if $(x,y)\in E_{4t-1,2t}$ is an edge. Therefore, $\mathfrak{F}$ gives a quantum colouring of $H_{4t-1,2t}$, and then \begin{equation}\label{f-30} \chi_q(H_{4t-1,2t})\leq 4t=n+1. \end{equation} Combining (\ref{f-28}) and (\ref{f-30}) yields $\chi_q(H_{4t-1,2t})=4t=n+1$. This completes the proof. \end{proof} \begin{remark}We note that, except for some trivial cases, the family of graphs in Theorem \ref{main-1} is the second known infinite family of graphs whose quantum chromatic numbers are exactly determined, to the best of our knowledge.\end{remark} \subsubsection{An upper bound for quantum chromatic numbers of Hamming graphs} Let $\ell$ be a positive integer and let $H_{n,\ell}={\rm Cay}(V_n,S)$, where $S=\{x\in V_n: wt(x)=\ell\}$.
If $\ell$ is odd, then $H_{n,\ell}$ is a bipartite graph and then $\chi_q(H_{n,\ell})=2$. Thus we assume that $\ell=2t$ for some integer $t$. In this case, we have the following result: \begin{thm}\label{thm-3.5}Let notations be defined as above. If $\ell\geq n/2$, then \begin{equation}\label{f-31} \chi_q(H_{n,\ell})\leq 2\ell. \end{equation} \end{thm} \begin{proof}Denote $d=2\ell(\geq n)$. For every $x=(x_0,x_1,\cdots,x_{n-1})\in V_n$, we let $x_{n}=\cdots =x_{d-1}=0$ if $d>n$. Define a set of operators as follows: \begin{equation*} \mathfrak{F}=\{P_x^{\alpha}: x\in V_n, 0\leq \alpha\leq d-1\}, \end{equation*} where $P_x^{\alpha}(i,j)=\frac{1}{d}\xi_d^{(i-j)\alpha}(-1)^{x_i+x_j}, 0\leq i,j\leq d-1$. Using a method similar to that in the previous section, we can prove that, for every $x\in V_n$, the family $\{P_x^{\alpha}: 0\leq \alpha\leq d-1\}$ forms a complete orthogonal system of $\mathbb{C}^{d\times d}$. We claim that if $x,y\in V_n$ with $d(x,y)=\ell$, then $P_x^{\alpha}P_y^{\alpha}=0$ for every $0\leq \alpha\leq d-1$. Note that for $y\in V_n$, we define $y_{n}=\cdots=y_{d-1}=0$. Indeed, we have \begin{equation*} P_x^{\alpha}P_y^{\alpha}(i,j)=\frac{1}{d^2}\sum_{k=0}^{d-1}\xi_d^{(i-k)\alpha}(-1)^{x_i+x_k}\xi_d^{(k-j)\alpha}(-1)^{y_k+y_j}=\frac{1}{d^2}\xi_d^{(i-j)\alpha}(-1)^{x_i+y_j}\sum_{k=0}^{d-1}(-1)^{x_k+y_k}. \end{equation*} Now, \begin{equation*} \sum_{k=0}^{d-1}(-1)^{x_k+y_k}= \sum_{k=0}^{n-1}(-1)^{x_k+y_k}+ \sum_{k=n}^{d-1}(-1)^{x_k+y_k}=(-\ell+n-\ell)+(d-n)=0. \end{equation*} Therefore, $\mathfrak{F}$ gives a proper quantum colouring of $H_{n,\ell}$, and then the desired result follows. \end{proof} In particular, if $n=4t+2$ and $\ell=2t+2$, we have \begin{prop}\label{prop-3.6} Let $H_{n,\ell}={\rm Cay}(V_n,S)$, where $S=\{x\in V_n: wt(x)=\ell\}$. If $n=4t+2,\ell=2t+2$, then the spectrum of $H_{n,\ell}$ is given by \begin{equation}\label{f-33} \lambda_a=\left\{\begin{array}{cl} (-1)^j\frac{\left[\tbinom{2t}{j}-\tbinom{2t}{j-1}\right]{\tbinom{n}{\ell}}}{\tbinom{n}{2j}} & \mbox{ if $wt(a)=r=2j$,} \\ (-1)^{j+1}\frac{2\tbinom{2t}{j}\tbinom{n}{\ell}}{\tbinom{n}{2j+1}} & \mbox{ if $wt(a)=r=2j+1$}. \end{array} \right. \end{equation} Moreover, \begin{equation}\label{f-34} \ell\leq \chi_q(H_{n,\ell})\leq 2\ell. \end{equation} \end{prop} \begin{proof}Since $\ell>n/2$, the upper bound $\chi_q(H_{n,\ell})\leq 2\ell$ follows from Theorem \ref{thm-3.5}. Now, we prove the lower bound. In order to prove it, we compute the eigenvalues of $H_{n,\ell}$. For every $a\in V_n$, if $wt(a)=r$, the eigenvalue of $H_{n,\ell}$ corresponding to $a$ is \begin{equation*} \lambda_a=\sum_{x\in V_n, wt(x)=\ell}(-1)^{x \cdot a}=K_\ell^n(r)=\frac{\tbinom{n}{\ell}}{\tbinom{n}{ r}}K_r^n(\ell). \end{equation*} When $n=4t+2,\ell=2t+2$, $K_r^n(\ell)$ is the coefficient of $x^r$ in the expansion of \begin{equation*} (1-x)^{2t+2}(1+x)^{2t} =1+x^{4t+2}+\sum_{j=1}^{2t}(-1)^j\left[\tbinom{2t}{j}-\tbinom{2t}{j-1}\right]x^{2j}-2\sum_{j=0}^{2t}(-1)^j\tbinom{2t}{j}x^{2j+1}. \end{equation*} Thus, we have \begin{equation}\label{f-35} \lambda_a=\left\{\begin{array}{cl} (-1)^j\frac{\left[\tbinom{2t}{j}-\tbinom{2t} {j-1}\right]\tbinom{n}{\ell}}{\tbinom{n}{2j}} & \mbox{ if $wt(a)=r=2j$,} \\ (-1)^{j+1}\frac{2\tbinom{2t}{j}\tbinom{n}{\ell}}{\tbinom{n}{2j+1}} & \mbox{ if $wt(a)=r=2j+1$}. \end{array} \right. \end{equation} It is routine to check that $\lambda_{\min}=-\tbinom{n}{\ell}/(2t+1)$, as we have done in previous sections. By the spectral bound on the quantum chromatic number (Lemma \ref{lem-2.4}), we have \begin{equation*} \chi_q(H_{4t+2,2t+2})\geq 2t+2=\ell.
\end{equation*} This completes the proof. \end{proof} \begin{remark}From the discussion above, we can see that for the Hamming graphs $H_{n,\ell}$ we can use the properties of Krawtchouk polynomials to evaluate the spectra of these graphs, and then, by the spectral bounds on quantum chromatic numbers, we can get a lower bound on such numbers, especially when $n-2\ell$ is small. If $\ell>n/2$, we have an upper bound for the quantum chromatic number, but for $\ell<n/2$ we do not know how to find such an upper bound. The following question is still open. \noindent{\bf Open question}: Find an upper bound on $\chi_q(H_{n,\ell})$ when $2\ell<n$. \end{remark} \subsection{Quantum chromatic numbers of products of graphs} Let $\Gamma_1$ and $\Gamma_2$ be two graphs. The product of $\Gamma_1$ and $\Gamma_2$, denoted by $\Gamma_1\times \Gamma_2$, is defined by $\Gamma_1\times \Gamma_2=(V,E)$, where $V=V(\Gamma_1)\times V(\Gamma_2)=\{(x_1,x_2): x_1\in V(\Gamma_1), x_2\in V(\Gamma_2)\}$ and $E=\{((x_1,x_2),(y_1,y_2)): (x_1,y_1)\in E(\Gamma_1) \mbox{ and } (x_2,y_2)\in E(\Gamma_2)\}$. Let the eigenvalues of $\Gamma_1$ be $\lambda_1\geq \cdots \geq \lambda_n$ and the eigenvalues of $\Gamma_2$ be $\mu_1\geq \cdots \geq \mu_m$. Then the eigenvalues of $\Gamma_1\times \Gamma_2$ are $\lambda_i\mu_j$, $1\leq i\leq n$, $1\leq j\leq m$. Thus $\lambda_{\max}(\Gamma_1\times \Gamma_2)=\lambda_1\mu_1$, and $\lambda_{\min}(\Gamma_1\times \Gamma_2)=\min\{\lambda_1\mu_m,\lambda_n\mu_1\}$. In \cite{SM}, an upper bound on the quantum chromatic numbers of products of graphs is given as follows: \begin{lem}\cite{SM} \label{lem-3.8} Let notations be defined as above. Then \begin{equation}\label{f-38} \chi_q(\Gamma_1\times \Gamma_2)\leq \min\{\chi_q(\Gamma_1), \chi_q(\Gamma_2)\}. \end{equation} \end{lem} By Lemma \ref{lem-3.8}, Elphick and Wocjan \cite{CW} proved the following result. \begin{thm}[\cite{CW}, Prop. 3.2]\label{thm-3.9} Suppose that the eigenvalues of $\Gamma_1$ are $\lambda_1\geq \cdots \geq \lambda_n$ and the eigenvalues of $\Gamma_2$ are $\mu_1\geq \cdots \geq \mu_m$. If $\frac{\lambda_1}{|\lambda_n|}\geq \frac{\mu_1}{|\mu_m|}$, then \begin{equation}\label{f-40} \chi_q(\Gamma_1\times \Gamma_2)\geq 1+\frac{\mu_1}{|\mu_m|}. \end{equation} \end{thm} As a consequence, we have \begin{cor}\label{cor-3.10} If $\chi_q(\Gamma_1)=1+\frac{\lambda_1}{|\lambda_n|}$ and $\chi_q(\Gamma_2)=1+\frac{\mu_1}{|\mu_m|}$, then \begin{equation}\label{f-41} \chi_q(\Gamma_1\times \Gamma_2)=\min\{\chi_q(\Gamma_1), \chi_q(\Gamma_2)\}. \end{equation} \end{cor} By Corollary \ref{cor-3.10}, we get the following result for products of Hamming graphs. \begin{thm}\label{thm-3.11} Let $s\geq t$ be positive integers. Then \begin{enumerate} \item $\chi_q(H_{4t,2t}\times H_{4s,2s})=4t$; \item $\chi_q(H_{4t-1,2t}\times H_{4s,2s})=4t$; \item $\chi_q(H_{4t,2t}\times H_{4s-1,2s})=4t$; \item $\chi_q(H_{4t-1,2t}\times H_{4s-1,2s})=4t$. \end{enumerate} \end{thm} \vskip 0.3 cm {\bf References} \begin{thebibliography}{00} \bibitem{AHKS06}David Avis, Jun Hasegawa, Yosuke Kikuchi, and Yuuya Sasaki, A quantum protocol to win the graph colouring game on all Hadamard graphs, IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E89-A, 5(2006), 1378-1381. \bibitem{cameron}Peter J. Cameron, Ashley Montanaro, Michael W. Newman, Simone Severini, and Andreas Winter, On the quantum chromatic number of a graph, Electron. J. Combin. 14(2007), No. 1, Research Paper 81, 15 pages. \bibitem{Ada}Ada Chan and William J.
Martin, Quantum isomorphism of graphs from association schemes, J. Combin. Theory, Ser. B, 164(2024), 340-363. \bibitem{CW}Clive Elphick and Pawel Wocjan, Spectral lower bounds for the quantum chromatic number of a graph, J. Combin. Theory, Ser. A, 168(2019), 338-347. \bibitem{feng}Keqin Feng, Quantum chromatic numbers: A survey (in Chinese), J. Hebei Normal University (Natural Science), 17(5)(2023), 433-446. \bibitem{PF}Peter Frankl and Vojtech R\"{o}dl, Forbidden intersections, Trans. Amer. Math. Soc. 300(1)(1987), 259-286. \bibitem{Gan} Priyanga Ganesan, Spectral bounds for the quantum chromatic number of quantum graphs, Linear Algebra Appl. 674(2023), 351-376. \bibitem{Ito}Noboru Ito, Hadamard graphs. I, Graphs Combin. 1 (1985), 57-64. \bibitem{ji}Zhengfeng Ji, Binary constraint system games and locally commutative reductions, ArXiv abs/1310.3794 (2013). \bibitem{kla} A. Klappenecker, M. R${\rm\ddot{o}}$tteler, I. E. Spharlinski, A Winterhof, { On approximately symmetric informationally complete positive operator-valued measures and related systems of quantum states}. J. Math Phys. A, {46}(2005): 082104. \bibitem{Lev}Vladimir I. Levenshtein, Krawtchouk polynomials and universal bounds for codes and designs in Hamming spaces, IEEE Trans. Inform. Theory, 41(1995), 1303-1321. \bibitem{Mena}A.M. Menamara, ArXiv: 2410. 0042 v2 [math. OA], 14, Oct. 2024. \bibitem{murty}M.R. Murty, Ramanujan graphs, J. Ramanujan Math. Sco. 18, No. 1(2003), 1-20. \bibitem{NC}M.A. Nilsen and I.L. Chuang, Quantum computation and quantum information, Cambridge University Press, 2000. \bibitem{peres} A. Peres, {Quantum Theory: Concepts and Methods}. Kluwer Academic Publishers, Boston, 1995. \bibitem{SM}R. Santiago, A.M. Mcnamara, Quantum chromatic numbers of products of graphs, ArXiv:2408.11911v1 [math.OA], 21 Aug 2024. \bibitem{stein}B. Steinberg, Representation Theory of Finite Groups: An Introductory Approach. Universitext. Springer New York, NY, 2011. DOI: 10.1007/978-1-4614-0776-8-7. \end{thebibliography} \section{Appendix} For readers convenience, we provide some tables on the spectra of some Hamming graphs. Note that if $wt(a)=r$, we denote by $\rho_\ell^n(r)(=K_\ell^n(r))$ the eigenvalue of $H_{n,\ell}$ corresponding to $a$. Table 1. $\rho_{\ell}^3(r)$ \begin{tabular}{|c|c|c|c|c|} \hline $n=3$ & $\ell=0$ & 1 & 2 & 3 \\ \hline $r=0$ & 1 & 3 & 3 & 1 \\ 1 & 1 & 1 & -1 & -1 \\ 2 & 1 & -1 & -1 & 1 \\ 3 & 1 & -3 & 3 & -1 \\ \hline \end{tabular} \vskip 0.2 cm Table 2. $\rho_\ell^4(r)$ \begin{tabular}{|c|c|c|c|c|c|} \hline $n=4$ & $\ell=0$ & 1 & 2 & 3 & 4 \\ \hline $r=0$ & 1 & 4 & 6 & 4 & 1 \\ 1 & 1 & 2 & 0 & -2 & -1 \\ 2 & 1 & 0 & -2 & 0 & 1 \\ 3 & 1 & -2 & 0 & 2 & -1 \\ 4 & 1 & -4 & 6 & -4 & 1 \\ \hline \end{tabular} \vskip 0.2 cm Table 3. $\rho_\ell^5(r)$ \begin{tabular}{|c|c|c|c|c|c|c|} \hline $n=5$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 \\ \hline $r=0$ & 1 & 5 & 10 & 10 & 5 & 1 \\ 1 & 1 & 3 & 2 & -2 & -3 & -1 \\ 2 & 1 & 1 & -2 & -2 & 1 & 1 \\ 3 & 1 & -1 & -2 & 2 & 1 & -1 \\ 4 & 1 & -3 & 2 & 2 & -3 & 1 \\ 5 & 1 & -5 & 10 & -10 & 5 & -1 \\ \hline \end{tabular} \vskip 0.2 cm Table 4. $\rho_\ell^6(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $n=6$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6\\ \hline $r=0$ & 1 & 6 & 15 & 20 & 15 & 6 &1\\ 1 & 1 & 4 & 5 & 0 & -5 & -4& -1\\ 2 & 1 & 2 & -1 & -4 & -1 & 2 &1\\ 3 & 1 & 0 & -3 & 0 & 3 & 0 &-1\\ 4 & 1 & -2 & -1 & 4 & -1 & -2&1 \\ 5 & 1 & -4 & 5 & 0 & -5 & 4& -1\\ 6 & 1 & -6 & 15 & -20 & 15 & -6&1 \\ \hline \end{tabular} \newpage Table 5. 
$\rho_\ell^7(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n=7$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7\\ \hline $r=0$ & 1 & 7 & 21 & 35 & 35 & 21 &7&1\\ 1 & 1 & 5 & 9 & 5 & -5 & -9& -5&-1\\ 2 & 1 & 3 & 1 & -5 & -5 & 1 &3&1\\ 3 & 1 & 1 & -3 & -3 & 3 & 3 &-1&-1\\ 4 & 1 & -1 & -3 & 3 & 3 & -3&-1& 1\\ 5 & 1 & -3 & 1 & 5 & -5 & -1& 3&-1\\ 6 & 1 & -5 & 9 & -5 & -5 & 9&-5 &1\\ 7 & 1 & -7 & 21 & -35 & 35 & -21&7 &-1\\ \hline \end{tabular} \vskip 0.2 cm Table 6. $\rho_\ell^8(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $n=8$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7&8\\ \hline $r=0$ & 1 & 8 & 28 & 56 &70 & 56 &28&8&1\\ 1 & 1 & 6 & 14 & 14 & 0 & -14& -14&-6&-1\\ 2 & 1 & 4 & 4 & -4 & -10 & -4 &4&4&1\\ 3 & 1 & 2 & -2 & -6 & 0 & 6 &2&-2&-1\\ 4 & 1 & 0 & -4 & 0 & 6 & 0&-4& 0&1\\ 5 & 1 & -2 & -2 & 6 & 0& -6& 2&2&-1\\ 6 & 1 & -4 & 4 & 4 & -10 & 4&4 &-4&1\\ 7 & 1 & -6 & 14 & -14 & 0 & 14&-14 &6&-1\\ 8 & 1 & -8 & 28 & -56 & -70 & -56&28 &-8&1\\ \hline \end{tabular} \vskip 0.2 cm Table 7. $\rho_\ell^9(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $n=9$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7&8&9\\ \hline $r=0$ & 1 & 9 & 36 & 84 &126 & 126 &84&36&9&1\\ 1 & 1 & 7 & 20 & 28 & 14 & -14& -28&-20&-7&-1\\ 2 & 1 & 5 & 8 & 0 & -14 & -14 &0&8&5&1\\ 3 & 1 & 3 & 0 & -8 & -6 & 6 &8&0&-3&-1\\ 4 & 1 & 1 & -4 & -4 & 6 & 6&-4& -4&1&1\\ 5 & 1 & -1 & -4 &4 &6 & -6& -4& 4&1&-1\\ 6 & 1 & -3 & 0 & 8 & -6 & -6&8 &0&-3&1\\ 7 & 1 & -5 & 8 & 0 & -14 & 14&0 &-8&5&-1\\ 8 & 1 & -7 & 20 & -28 & 14 & 14&-28 &20&-7&1\\ 9 & 1 & -9 & 36 & -84 & 126 & -126&84 &-36&9&-1\\ \hline \end{tabular} \newpage Table 8. $\rho_\ell^{10}(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n=10$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7&8&9&10\\ \hline $r=0$ & 1 & 10 & 45 & 120 &210 & 252 &210&120&45&10&1\\ 1 & 1 & 8 & 27 & 48 & 42 & 0& -42&-48&-27&-8&-1\\ 2 & 1 & 6 & 13 & 8 & -14 & -28 &-14&8&13&6&1\\ 3 & 1 & 4 & 3 &-8& -14 & 0 & 14 &8&-3&-4&-1\\ 4 & 1 & 2 & -3 & -8 & 2 & 12&2& -8&-3&2&1\\ 5 & 1 & 0 & -5 &0 &10& 0& -10& 0&5&0&-1\\ 6 & 1 & -2 & -3 & 8 & 2 & -12&2 &8&-3&-2&1\\ 7 & 1 & -4 & 3 & 8 &-14& 0 & 14&-8 &-3&4&-1\\ 8 & 1 & -6 & 13 & -8 & -14 & 28&-14 &-8&13&-6&1\\ 9 & 1 & -8 & 27 & -48 & 42 & 0&-42 &48&-27&8&-1\\ 10 & 1 & -10 & 45 & -120 & 210 & -252&210 &-120&45&-10&1\\ \hline \end{tabular} \vskip 0.2 cm Table 9. $\rho_\ell^{11}(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n=11$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7&8&9&10&11\\ \hline $r=0$ & 1 & 11 & 55 & 165 &330 & 462 &462&330&165&55&11&1\\ 1 & 1 & 9 & 35 & 75 & 90 & 42& -42&-90&-75&-35&-9&-1\\ 2 & 1 & 7 & 19 & 21 & -6 & -42 &-42&-6&21&19&7&1\\ 3 & 1 & 5 & 7 &-5& -22 & -14 & 14 &22&5&-7&-5&-1\\ 4 & 1 & 3 & -1 & -11 & -6 & 14&14& -6&-11&-1&3&1\\ 5 & 1 & 1 & -5 &-5 &10& 10& -10& -10&5&5&-1&-1\\ 6 & 1 & -1 & -5 & 5 & 10 & -10&-10 &10&5&-5&-1&1\\ 7 & 1 & -3 & -1 & 11 &-6& -14 & 14&6 &-11&1&3&-1\\ 8 & 1 & -5 & 7 & 5 & -22 & 14&14 &-22&5&7&-5&1\\ 9 & 1 & -7 & 19 & -21 & -6 & 42&-42 &6&21&-19&7&-1\\ 10 & 1 & -9 & 35 & -75 & 90 & -42&-42 &90&-75&35&-9&1\\ 11 & 1 & -11 & 55 & -165 & 330 & -462&462 &-330&165&-55&11&-1\\ \hline \end{tabular} \vskip 0.2 cm Table 10. 
$\rho_\ell^{12}(r)$ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n=12$ & $\ell=0$ & 1 & 2 & 3 & 4 & 5 &6&7&8&9&10&11 &12\\ \hline $r=0$ & 1 & 12 & 66 & 220 &495 & 792 &924&792&495&220&66&12 &1\\ 1 & 1 & 10 & 44 & 110 & 165 & 132& 0&-132&-165&-110&-44&-10 &-1\\ 2 & 1 & 8 & 26 & 40 & 15 & -48 &-84&-48&15&40&26&8 &1\\ 3 & 1 & 6 & 12 &2& -27 & -36 & 0 &36&27&-2&-12&-6 &-1\\ 4 & 1 & 4 & 2 & -12 & -17 & 8&28& 8&-17&-12&2&4 &1\\ 5 & 1 & 2 & -4 &-10 &5& 20& 0& -20&-5&10&4& -2&-1\\ 6 & 1 &0 & -6 & 0 & 15 & 0&-20 &0&15&0&-6&0 &1\\ 7 & 1 & -2 & -4 & 10 &5& -20 & 0&20&-5 &-10&4&2& -1\\ 8 & 1 & -4 & 2 & 12 & -17 & -8&28 &-8&-17&12&2&-4 &1\\ 9 & 1 & -6 & 12 & -2 & -27 & 36&0 &-36&27&2&-12& 6&-1\\ 10 & 1 & -8 & 26 & -40 & 15 & 48&-84 &48&15&-40&26&-8 &1\\ 11 & 1 & -10 & 44 & -110 & 165 & -132&0 &132&-165&110&-44&10 &-1\\ 12 & 1 & -12 & 66 & -220 & 495 & -792&924 &-792&495&-220&66& -12&1\\ \hline \end{tabular} \end{document} \endinput
2412.09885v1
http://arxiv.org/abs/2412.09885v1
Structure fault diameter of hypercubes
\documentclass[12pt,a4paper,twoside]{article} \usepackage{graphicx} \usepackage{times} \usepackage{mathptmx} \usepackage{cite} \usepackage[T1,OT1]{fontenc} \usepackage{textcomp} \usepackage{xcolor} \usepackage{multirow} \usepackage{mathrsfs,amssymb,amsthm,stmaryrd,amsmath,latexsym,indentfirst} \usepackage{stmaryrd} \usepackage{makecell} \usepackage{booktabs} \usepackage{xcolor} \usepackage{subfig} \usepackage{bm} \usepackage[ruled,linesnumbered,vlined]{algorithm2e} \setlength{\parindent}{3ex} \usepackage[symbol]{footmisc} \usepackage{cellspace} \usepackage[capitalise]{cleveref} \setcounter{page}{1} \newtheorem{lem}{Lemma}[section] \newtheorem{thm}[lem]{Theorem} \newtheorem{dfn}[lem]{Definition} \newtheorem{rem}{Remark} \textheight=22.5cm \textwidth=16cm \parskip = 0.1cm \topmargin=0cm \oddsidemargin=0cm \evensidemargin=0cm \newtheorem{mytheorem}{Theorem}[section] \newtheorem{mylemma}[mytheorem]{Lemma} \newtheorem{mycorollary}[mytheorem]{Corollary} \newtheorem{mydefinition}[mytheorem]{Definition} \newtheorem{myproposition}[mytheorem]{Proposition} \newtheorem{myconj}{Conjecture} \newtheorem{mycase}{Case} \newtheorem{myremark}{Remark} \newtheorem{myexample}[mytheorem]{Example} \newtheorem{myques}{Question} \begin{document} \title{{Structure fault diameter of hypercubes}\footnote{The research is supported by NSFC (No. 12261085)}} \author{Honggang Zhao$^{a}$, Eminjan Sabir$^{a,}$\footnote{Corresponding author: [email protected]} , and Cheng-Kuan Lin$^{b}$} \date{ $^a$College of Mathematics and System Sciences, Xinjiang University, \\Urumqi, 830046, P. R. China\\ $^b$Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan} \maketitle \renewcommand{\abstractname}{} \begin{abstract} \noindent {\bf Abstract:} { Structure connectivity and substructure connectivity are innovative indicators for assessing network reliability and fault tolerance. Similarly, fault diameter evaluates fault tolerance and transmission delays in networks. This paper extends the concept of fault diameter by introducing two new variants: structure fault diameter and substructure fault diameter, derived from structure connectivity and substructure connectivity respectively. For a connected graph $G$ with $W$-structure connectivity $\kappa(G;W)$ or $W$-substructure connectivity $\kappa^s(G;W)$, the $W$-structure fault diameter $D_f(G;W)$ and $W$-substructure fault diameter $D_f^s(G;W)$ are defined as the maximum diameter of any subgraph of $G$ resulting from removing fewer than $\kappa(G;W)-1$ $W$-structures or $\kappa^s(G;W)-1$ $W$-substructures. For the $n$-dimensional hypercube $Q_n$ with $n \geq 3$ and $1 \leq m \leq n - 2$, we determine both $D_f(Q_n;Q_m)$ and $D_f^s(Q_n;Q_1)$. These findings generalize existing results for the diameter and fault diameter of $Q_n$, providing a broader understanding of the hypercube's structural properties under fault conditions. } \begin{flushleft} \textbf{Keywords:} Connectivity; Structure connectivity; Substructure connectivity; Structure fault diameter; Substructure fault diameter; Hypercube \end{flushleft} \end{abstract} \section{Introduction} In the study of communication networks, graphs serve as powerful tools for modeling network structures and analyzing their properties. The \textit{connectivity} and \textit{diameter} are fundamental parameters to measure fault tolerance and communication delay. 
A reliable communication network must not only withstand faults but also maintain a minimal diameter to ensure efficient communication despite failures. This is particularly crucial in large-scale distributed systems, where disruptions can severely affect performance. To tackle this issue, the concept of \textit{fault diameter} has been introduced, which evaluates the impact of faults on a network's diameter. The fault diameter, $D_f(G)$, is defined as the maximum diameter of any subgraph of a connected graph $G$ obtained after removing up to $\kappa(G)-1$ vertices, where $\kappa(G)$ represents the graph's connectivity. The study of fault diameter provides critical insights into a network's resilience to failures and the impact of faults on communication delay. This is particularly relevant in applications such as data centers, cloud computing, and parallel processing, where maintaining low-latency communication is essential. Analyzing fault diameter deepens our understanding of graph structures and their robustness under adversarial conditions. This analysis provides valuable insights for designing resilient network topologies capable of effectively managing node failures. For example, hypercube networks and their variations are extensively employed in distributed computing due to their exceptional characteristics, such as symmetry, scalability, and inherent fault tolerance. A thorough understanding of their fault diameters is essential for optimizing these networks to maintain performance and reliability during failure scenarios. Krishnamoorthy and Krishnamurthy first introduced the concept of fault diameter, demonstrating that the fault diameter of the $n$-dimensional hypercube $Q_n$ is $n + 1$ \cite{03}. This foundational work has since been expanded to more intricate network structures. Tsai et al. studied the exchanged hypercube $EH(s, t)$ and discovered that after removing fewer than $s$ vertices, the diameter of the resulting graph is $s + t + 3$ for $3 \leq s \leq t$ \cite{08}. Qi and Zhu established upper bounds for the fault diameters of two families of twisted hypercubes, $H_n$ and $Z_{n, k}$ \cite{09}. Additionally, Day and Al-Ayyoub found that the fault diameter of the $k$-ary $n$-cube $Q_n^k$ increases by at most one compared to its fault-free diameter \cite{13}. Similar findings have been reported for other topologies, including star graphs \cite{15}, hierarchical cubic networks \cite{17}, and exchanged crossed cubes \cite{12}. Despite these advancements, there remains a need to investigate fault diameters across a wider range of graph structures, particularly within modern network models that incorporate complex and hierarchical designs. Such research not only enriches the theoretical understanding of network robustness but also provides practical insights for designing reliable and efficient communication systems in environments prone to faults. This paper aims to address this gap by introducing new fault diameter concepts based on structure connectivity and substructure connectivity, and applying these concepts to analyze the fault-tolerant properties of $Q_n$ under various fault conditions. By considering the impact of structures becoming faulty instead of individual vertices, Lin et al. introduced the notions of structure connectivity and substructure connectivity \cite{02}. For a connected graph $G$, let $W$ be a subgraph of $G$. Then $W$-\textit{structure connectivity} (resp. $W$-\textit{substructure connectivity}) of $G$, denoted $\kappa(G;W)$ (resp. 
$\kappa^s(G;W)$), is the minimum cardinality of a set of vertex-disjoint subgraphs $\mathcal{W} = \{W_1, W_2, \ldots, W_t\}$, such that each $W_k \in \mathcal{W}$ is isomorphic to $W$ (resp. each $W_k \in \mathcal{W}$ is a connected subgraph of $W$) for $k = 1, 2, \ldots, t$, and removing $\mathcal{W}$ disconnects $G$. They also determined $\kappa(Q_n; W)$ and $\kappa^s(Q_n; W)$ for each structure $W \in \{K_1, K_{1,1}, K_{1,2}, K_{1,3}, C_4\}$. Following this line of research, many scholars have contributed to this field. For instance, in the split-star networks $S^2_n$, Zhao and Wang determined both $\kappa(S^2_n; W)$ and $\kappa^s(S^2_n; W)$ for $W \in \{P_t, C_q\}$, where $4 \le t \le 3n - 5$ and $6 \le q \le 3n - 5$ \cite{22}. Ba et al. investigated $P_t$-structure connectivity and $P_t$-substructure connectivity of augmented $k$-ary $n$-cubes $AQ^k_n$ \cite{23}. Yang et al. proved that $\kappa(S_n; K_{1,m}) = \kappa^s(S_n; K_{1,m}) = n - 1$ for $n \ge 4$ and $0 \le m \le n - 1$, where $S_n$ is a star graph \cite{24}. Wang et al. proposed the concept of \textit{double-structure connectivity} and studied the double-structure connectivity of hypercubes \cite{21}. For the $n$-dimensional hypercube $Q_n$, Sabir and Meng considered a special kind of substructure connectivity, called \textit{$W$-subcube connectivity} $\kappa^{sc}(Q_n; W)$, by restricting the structure $W$ and its subgraphs to subcubes of $Q_n$ \cite{04}. In this paper, we propose two novel extensions of the fault diameter, defined based on the concepts of structure connectivity and substructure connectivity. The $W$-\textit{structure fault diameter}, denoted as $D_f(G;W)$, of a connected graph $G$ with $W$-structure connectivity $\kappa(G;W)$, is the maximum diameter of any subgraph of $G$ obtained by removing up to $\kappa(G;W) - 1$ $W$-structures. Similarly, the $W$-\textit{substructure fault diameter}, denoted as $D^s_f(G;W)$, of $G$ with $W$-substructure connectivity $\kappa^s(G;W)$, is the maximum diameter of any subgraph of $G$ obtained by removing up to $\kappa^s(G;W) - 1$ $W$-substructures. Importantly, when $W$ is a single vertex (i.e., $K_1$), the $W$-structure fault diameter and $W$-substructure fault diameter reduce to the traditional fault diameter. Furthermore, it can be observed from the definitions that $D^s_f(G;W) \geq D_f(G;W)$. The $n$-dimensional hypercube $Q_n$, known for its symmetry, scalability, and fault tolerance, is one of the most popular interconnection networks. It is well established that the diameter $D(Q_n)$ and the fault diameter $D_f(Q_n)$ of $Q_n$ are $n$ and $n + 1$, respectively. In this paper, we extend these results by proving the following: \begin{enumerate} \item $D_f(Q_n;Q_m) = n$ for $n = m + 2$ and $D_f(Q_n;Q_m) = n + 1$ for $n \geq m + 3$. \item $D^{sc}_f(Q_n;Q_m) = n + 1$ for $m \geq 0$ and $n \geq m + 3$, where $Q_0 \cong K_1$ and $D^{sc}_f(Q_n;Q_m)$ denotes the $Q_m$-subcube fault diameter introduced in Section 4. \end{enumerate} The rest of this paper is organized as follows. In Section 2, we introduce the definitions and notations used throughout this study. In Sections 3 and 4, we present our main results and proofs. Finally, in Section 5, we conclude the paper and discuss potential directions for future research. \section{Preliminaries} The graph definitions and notation are based on \cite{01}. Let $G=(V,E)$ be a \textit{graph} with vertex set $V$ and edge set $E$. A graph $G$ is \textit{vertex transitive} if, for any two vertices $u$ and $v$ of $G$, there is an isomorphism $f$ from $G$ to itself such that $f(u)=v$.
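For instance, the hypercube $Q_n$ introduced below is vertex transitive: given two vertices $\textbf{u}$ and $\textbf{v}$, viewed as $n$-bit binary strings, the map
\[
f(\textbf{x}) \;=\; \textbf{x}\oplus\textbf{u}\oplus\textbf{v},
\]
where $\oplus$ denotes bitwise addition modulo $2$, is an isomorphism from $Q_n$ to itself: it is a bijection, it preserves adjacency because it does not change the set of positions in which two strings differ, and it satisfies $f(\textbf{u})=\textbf{v}$.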
A graph $G$ is \textit{edge transitive} if, for any two edges $(u,v)$ and $(x,y)$ of $G$, there is an isomorphism $f$ from $G$ to itself such that $f((u,v))=(x,y)$. For a vertex $u$ in a graph $G$, $N_G(u)$ denotes the \textit{neighborhood} of $u$, which is the set $\{v \mid (u,v)\in E\}$. A \textit{path} $P$ is a sequence of vertices in which consecutive vertices are adjacent, written as $\langle u_1, u_2, \ldots, u_n \rangle$. The \textit{length} of a path $P$, denoted $l(\textit{P})$, is the number of edges in $P$. We also write the path $\langle u_1, u_2,\ldots, u_n \rangle$ as $\langle u_1, P_1, u_i, u_{i+1},\ldots, u_j, P_2, u_t,\ldots, u_n \rangle$, where $P_1$ is the path $\langle u_1, u_2,\ldots, u_i \rangle$ and $P_2$ is the path $\langle u_j, u_{j+1},\ldots, u_t \rangle$. Hence, it is possible to write a path as $\langle u_1, Q, u_1, u_2,\ldots, u_n \rangle$ if $l(Q)=0$. We use $d_G(u,v)$ to denote the \textit{distance} between $u$ and $v$, that is, the length of a shortest path joining $u$ and $v$ in $G$. The \textit{diameter} of a graph $G$, denoted $D(G)$, is defined as $\max\{d_G(u,v) \mid u,v \in V(G)\}$. We use $\langle u, P_s, v \rangle$ to denote a shortest path between $u$ and $v$ in a graph $G$. We use $K_n$ to denote the complete graph on $n$ vertices. An $n$-\textit{dimensional hypercube} is an undirected graph, $Q_n$, with $2^n$ vertices and $2^{n-1}n$ edges. Each vertex in $Q_n$ can be represented as an $n$-bit binary string. We use boldface to denote vertices in $Q_n$. For any vertex $\textbf{x}={x_1}{x_2}\cdots{x_n}$ in $Q_n$, we let $(\textbf{x})^i={x^i_1}{x^i_2}\cdots{x^i_n}$ denote the neighbor of $\textbf{x}$ in dimension $i$, where $x^i_j=x_j$ for every $j \ne i$ and $x^i_i=1-x_i$. In particular, $Q_0$ represents $K_1$ and $Q_1$ represents $K_2$. The entry $x_i$ in $\textbf{x}={x_1}{x_2}\cdots{x_n}$ is called the $i$th bit. Fig.~\ref{fig:1} shows $Q_n$ for $n\in\{1,2,3,4\}.$ By fixing the $n$th bit of the vertices in $Q_n$, we get two $(n-1)$-dimensional hypercubes, denoted ${Q^{\{0\}}_n}$ (whose vertices have $n$th bit $0$) and ${Q^{\{1\}}_n}$ (whose vertices have $n$th bit $1$), respectively. In this way, we divide $Q_n$ into two parts ${Q^{\{0\}}_n}$ and ${Q^{\{1\}}_n}$. For any vertex $\textbf{x}$ in ${Q^{\{0\}}_n}$ (resp. in ${Q^{\{1\}}_n}$), there exists a unique external neighbor $(\textbf{x})^n$ in ${Q^{\{1\}}_n}$ (resp. in ${Q^{\{0\}}_n}$). It is known that $Q_n$ has many attractive properties, such as being bipartite, $n$-regular, $n$-connected, vertex transitive and edge transitive \cite{18}. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{q4} \caption{The $n$-dimensional hypercube for $n\in\{1,2,3,4\}$.} \label{fig:1} \end{figure} The \textit{Cartesian product} of simple graphs $G$ and $H$ is the graph $G\Box H$ whose vertex set is $V(G)\times V(H)$ and whose edge set is the set of all pairs $(u_1v_1,u_2v_2)$ such that either $(u_1,u_2)\in E(G)$ and $v_1=v_2$, or $(v_1,v_2)\in E(H)$ and $u_1=u_2$ \cite{01}. Hypercubes can also be represented as a Cartesian product, i.e., $Q_n=\underbrace{K_2 \Box K_2 \Box \cdots \Box K_2}_n$ \cite{14}. In this way, we can decompose $Q_n=Q_m\Box Q_{n-m}$. Now, for any $\textbf{t}\in V(Q_{n-m})$ we denote by $(Q_m,\textbf{t})$ the subgraph of $Q_n$ induced by the vertices whose last $n-m$ bits form the tuple $\textbf{t}$. It is easy to observe that $(Q_m,\textbf{t})$ is isomorphic to $Q_m$. As $Q_{n-m}$ is $(n-m)$-regular and $(n-m)$-connected, every vertex in $V(Q_{n-m})$ is adjacent to exactly $n-m$ vertices in $Q_{n-m}$.
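For instance, take $n=5$ and $m=2$, so that $Q_5=Q_2\Box Q_3$ (see Fig.~\ref{fig:2}). For $\textbf{t}=000\in V(Q_3)$ we get
\[
V\big((Q_2,000)\big)=\{00000,\,10000,\,01000,\,11000\},
\]
which induces a $4$-cycle, that is, a copy of $Q_2$; as $\textbf{t}$ ranges over $V(Q_3)$, the eight subcubes $(Q_2,\textbf{t})$ partition $V(Q_5)$.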
Let $N_{Q_{n-m}}(\textbf{t})=\{\textbf{t}_1, \textbf{t}_2,\ldots, \textbf{t}_{n-m}\}$. Hence the induced subgraph $(Q_m,\textbf{t})$ of $Q_n$ is adjacent to exactly $n-m$ subcubes, namely $(Q_m,\textbf{t}_1)$, $(Q_m,\textbf{t}_2)$,$\ldots, (Q_m,\textbf{t}_{n-m})$. Clearly, $(Q_m,\textbf{t}_i)$ is not adjacent to $(Q_m,\textbf{t}_j)$ for $1\le i\ne j\le n-m$, and $(Q_m,\textbf{t})$ and $(Q_m,\textbf{t}_i)$ together form a subcube, denoted $(Q_m,\textbf{t}^*_i)$, which is isomorphic to $Q_{m+1}$. Fig.~\ref{fig:2} shows $Q_5=Q_2\Box Q_3$. \begin{figure} \centering \includegraphics[height=6cm]{q6} \caption[Fig.2]{$Q_5=Q_2\Box Q_3$.} \label{fig:2} \end{figure} \begin{figure} \centering \includegraphics[height=5cm]{q1} \caption[Fig.3]{An example of $| F^n_3| =6$, $| A^n_{3,0}| =3$, $| A^n_{3,1}| =1$ and $| B^n_3| =2$.} \label{fig:3} \end{figure} For any two vertices $\textbf{u}$, $\textbf{v}\in Q_n$, the \textit{Hamming distance} $H_{Q_n}(\textbf{u}$, $\textbf{v})$ is defined to be the number of positions in which the two strings differ. Then $\textbf{u}$ and $\textbf{v}$ are called \textit{symmetric} if $H_{Q_n}(\textbf{u}$, $\textbf{v})=n$, and $\textbf{u}$ and $\textbf{v}$ are called \textit{unsymmetric} if $H_{Q_n}(\textbf{u}$, $\textbf{v})\le n-1$. By the definition of hypercubes, any pair of vertices of $Q_n$ is either symmetric or unsymmetric. We list some symbols in Table 1 and their illustrations in \Cref{fig:3}. The following results play a crucial role in the proofs of our main results. \begin{mylemma}\label{lemma3.2}\cite{07} For $n\ge 2$, after the removal of $n-2$ or fewer vertices in $Q_n$, the diameter of the remaining graph is still $n$. \end{mylemma} \begin{mylemma}\label{lemma2.2} \cite{03} For $n\ge 3$, $D_f(Q_n)=n+1$. \end{mylemma} \begin{mylemma}\label{lemma2.3} \cite{02} For $n\ge 3$, $\kappa(Q_n;Q_1)=\kappa^s(Q_n;Q_1)=n-1$. \end{mylemma} \begin{mylemma}\label{lemma2.4} \cite{04} For $n\ge 3$ and $m\le n-2$, $\kappa^{sc}(Q_n;Q_m) = \kappa(Q_n;Q_m) = n-m$. \end{mylemma} \begin{mylemma}\label{lemma2.5} \cite{06} Any two vertices $\textbf{u}$ and $\textbf{v}$ in $Q_n$ ($n\ge 3$) have exactly $2$ common neighbors if they have any. Moreover, they have two common neighbors if and only if $((\textbf{u})^i)^j=\textbf{v}$, where $1\le i\ne j\le n$. \end{mylemma} Let $Q_m$ be a subcube of $Q_n$. For any two vertices $\textbf{u}$ and $\textbf{v}$ in $Q_m$ ($m\ge 2$), if $\textbf{u}$ and $\textbf{v}$ have common neighbors, by Lemma~\ref{lemma2.5}, they have exactly two common neighbors and $H_{Q_n}(\textbf{u},\textbf{v})=H_{Q_m}(\textbf{u},\textbf{v})=2$. Clearly, their common neighbors are in $Q_m$. Moreover, the two vertices of $Q_1$ have no common neighbors. Then we have the following corollary of Lemma~\ref{lemma2.5}.
\begin{table} \caption{Symbol table} \label{Table11} \centering \footnotesize \begin{tabular}{ll} \toprule {\bf Symbol} & {\bf Definition}\\ \midrule $\kappa(G;W)$ & $W$-structure connectivity of $G$\\ $\kappa^s(G;W)$ & $W$-substructure connectivity of $G$\\ $D_f(G;W)$ & $W$-structure fault diameter of $G$\\ $D^s_f(G;W)$ & $W$-substructure fault diameter of $G$\\ $Q_n$ & the $n$-dimensional hypercube\\ $\kappa^{sc}(Q_n;Q_m)$ & $Q_m$-subcube connectivity of $Q_n$\\ $D^{sc}_f(Q_n;Q_m)$ & $Q_m$-subcube fault diameter of $Q_n$\\ ${Q^{\{h\}}_n}$ & the $(n-1)$-dimensional hypercube with $V({Q^{\{h\}}_n})=\{\textbf{x}\mid\textbf{x}={x_1}{x_2}\cdots{x_n}$, $x_n=h\}$,\\ & where $h\in \{{0,1}\}$\\ $S_k(Q_n)$ & the set $\{ U \mid U \subseteq V(Q_n)$ and the subgraph induced by $U$ is isomorphic to $Q_k \}$\\ $\mathcal{F}_k^n$ & a vertex-disjoint subset of $\cup^k_{i=0} S_i(Q_n)$, i.e., any two distinct $A, B \in \mathcal{F}_k^n$\\ & have no common vertex\\ $\mathcal{A}^n_{k,h}$ & the set $\mathcal{F}^n_k\cap \cup^k_{i=0}S_i({Q^{\{h\}}_n})$\\ $\mathcal{B}^n_k$ & the set $\mathcal{F}^n_k\setminus (\mathcal{A}^n_{k,0}\cup \mathcal{A}^n_{k,1})$\\ $F_k^n$ & the subset of $\mathcal{F}^n_k$ such that $A\in S_k(Q_n)$ for any $A \in F_k^n$\\ $A^n_{k,h}$ & the set $F^n_k\cap S_k({Q^{\{h\}}_n})$\\ $B^n_k$ & the set $F^n_k\setminus (A^n_{k,0}\cup A^n_{k,1})$\\ $E^n$ & the set of edges which connect ${Q^{\{0\}}_n}$ and ${Q^{\{1\}}_n}$\\ \bottomrule \end{tabular} \end{table} \begin{mycorollary}\label{corollary2.6} Let $Q_m$ be a subcube of $Q_n$. Then, any two vertices of $Q_m$ have no common neighbor in $Q_n-Q_m$. \end{mycorollary} We easily get the following lemma by counting the pairs of symmetric vertices. \begin{mylemma}\label{lemma2.7} For $n\ge 2$, let $S$ be any vertex set of $Q_n$ with $| S|< 2^{n-1}$. If $Q_n-S$ is connected, then $D(Q_n-S)\ge n$. \end{mylemma} \section{$Q_1$-structure fault diameter and $Q_1$-substructure fault diameter} We provide some lemmas for later use. \begin{mylemma}\label{lemma3.1} Let $m\le n-3$ and $| \mathcal{F}^n_m|\le n-1$. For any two symmetric vertices $\textbf{u}$ and $\textbf{v}$ in ${Q_n}-\mathcal{F}^n_m$, there exists a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in ${Q_n}-\mathcal{F}^n_m$ for some $j\in \{{1,2,\ldots,n}\}$. \end{mylemma} \begin{proof} Let $(\textbf{u})^{j}$ and $(\textbf{v})^{k}$ respectively be neighbors of $\textbf{u}$ and $\textbf{v}$ in $Q_n$, where $j,k\in \{{1,2,\ldots,n}\}$. Then $H_{Q_n}((\textbf{u})^{j}$, $(\textbf{v})^{k})=n$ if $j=k$, and $H_{Q_n}((\textbf{u})^{j}$, $(\textbf{v})^{k})=n-2$ if $j\ne k$. Combining this with the condition $m\le n-3$, we infer that no subcube in $\mathcal{F}^n_m$ can contain both $(\textbf{u})^{j}$ and $(\textbf{v})^{k}$ simultaneously. By Corollary~\ref{corollary2.6}, no subcube in $\mathcal{F}^n_m$ can contain both $(\textbf{u})^{j}$ and $(\textbf{u})^{h}$ for $j\ne h$ simultaneously. The same holds for $(\textbf{v})^{j}$ and $(\textbf{v})^{h}$ for $j\ne h$. This implies that the removal of any subcube in $\mathcal{F}^n_m$ destroys at most one vertex of $N_{Q_n}(\textbf{u})\cup N_{Q_n}(\textbf{v})$, and hence at most one of the $n$ pairs $\{(\textbf{u})^{j},(\textbf{v})^{j}\}$ with $j\in \{{1,2,\ldots,n}\}$. However, $| \mathcal{F}^n_m|\le n-1$. So there must exist a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in ${Q_n}-\mathcal{F}^n_m$. \end{proof} \begin{mytheorem}\label{theorem3.3} $D^s_f(Q_3;Q_1)=3$. \end{mytheorem} \begin{proof} By Lemma~\ref{lemma2.3}, $\kappa^s(Q_3;Q_1) = 2$.
Thus, we need to consider the event $| \mathcal{F}^3_1|\le 1=\kappa^s(Q_3;Q_1)-1$. By Lemma~\ref{lemma3.2}, $D(Q_3-\mathcal{F}^3_1)=3$ if $| F^3_1|=0$. Below, we suppose that $| F^3_0|=0$ and $| F^3_1|=1$. Since $Q_3$ is vertex transitive and edge transitive, we may assume that $F^3_1=\{\{000,001\}\}$ is a faulty $Q_1$-structure in $Q_3$. From \Cref{fig:4}, we get that the diameter of $Q_3-F^3_1$ is $3$, and so $D^s_f(Q_3;Q_1)=3$. \end{proof} \begin{mylemma}\label{lemma3.4} For $n\ge 4$, $D_f(Q_n;Q_1)\ge n+1$ and $D^s_f(Q_n;Q_1)\ge n+1$. \end{mylemma} \begin{proof} Let $\textbf{x}=00 \cdots 0$ and $\textbf{z}=(\textbf{x})^{1}$. Then, we set $\mathcal{F}^n_1=\{\{(\textbf{x})^{i}, (\textbf{z})^{i}\}\mid2\le i\le n-1\}$. Obviously, $| \mathcal{F}^n_1|=n-2$, ${Q^{\{0\}}_n}-\mathcal{F}^n_1$ is disconnected and one of the components of ${Q^{\{0\}}_n}-\mathcal{F}^n_1$ contains $\{\textbf{x}, \textbf{z}\}$. Further, we let $\textbf{y}=11 \cdots 10$. Clearly, $\{\textbf{x}, \textbf{z}\}$ and $\textbf{y}$ are in distinct components of ${Q^{\{0\}}_n}-\mathcal{F}^n_1$. Since $Q_n-\mathcal{F}^n_1$ is connected and $N_{Q_n-\mathcal{F}^n_1}(\textbf{x})={\{\textbf{z},(\textbf{x})^{n}\}}$, there are two paths $\langle \textbf{x}, (\textbf{x})^{n}, \textit{P}_1, \textbf{y} \rangle$ and $\langle \textbf{x}, \textbf{z}, (\textbf{z})^{n}, \textit{P}_2, \textbf{y} \rangle$ between $\textbf{x}$ and $\textbf{y}$ in $Q_n-\mathcal{F}^n_1$. Note that $H_{Q_n}((\textbf{x})^{n},\textbf{y})=n$ and $H_{Q_n}((\textbf{z})^{n},\textbf{y})=n-1$, so we have $l(P_1)\ge n$ and $l(P_2)\ge n-1$, respectively. Thus, the length of any path between $\textbf{x}$ and $\textbf{y}$ is at least $n+1$. This implies $D_f(Q_n;Q_1)\ge n+1$. Moreover, since $D^s_f(Q_n;Q_1) \ge D_f(Q_n;Q_1)$, $D^s_f(Q_n;Q_1)\ge n+1$. \end{proof} \begin{figure} \centering \includegraphics[height=5cm]{f3} \caption[Fig.4]{An illustration of the proof of Theorem~\ref{theorem3.3}.} \label{fig:4} \end{figure} \begin{mylemma}\label{lemma3.5} $D^s_f(Q_4;Q_1)\le 5$. \end{mylemma} \begin{proof} By Lemma~\ref{lemma2.3}, $\kappa^s(Q_4;Q_1) = 3$. Thus, we need to consider the event $| \mathcal{F}^4_1|\le 2$. By Lemma~\ref{lemma2.2}, $D(Q_4-\mathcal{F}^4_1)\le 5$ if $| F^4_1|<2$. Thus, we suppose that $| F^4_0|=0$ and $| F^4_1|=2$. Since $Q_4$ is vertex transitive and edge transitive, we may assume that $\textbf{x}\neq (\textbf{y})^4$ for each $\{\textbf{x}, \textbf{y}\}\in F^4_1$. Thus, $| B^4_1|=0$ and $F^4_1=A^4_{1,0}\cup A^4_{1,1}$. Without loss of generality, we assume $| A^4_{1,0}|\ge | A^4_{1,1}|$. Let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_4-F^4_1$. Then we have the following cases: \noindent \textbf{Case 1}. $| A^4_{1,0}|=2$. Since $D(Q_3)=3$, $d_{Q_4-F^4_1}(\textbf{u},\textbf{v})\le 3$ if $\textbf{u}$,$\textbf{v}\in {Q^{\{1\}}_4}$. If $\textbf{u}$,$\textbf{v}\in {Q^{\{0\}}_4}$, then there exists a path $\langle \textbf{u}, (\textbf{u})^{4}, \textit{P}_s, (\textbf{v})^{4}, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_4-F^4_1$, where $P_s$ is in ${Q^{\{1\}}_4}$. Since $l(\textit{P}_s)\le D({Q^{\{1\}}_4})=3$, $d_{Q_4-F^4_1}(\textbf{u},\textbf{v})\le 5$. Obviously, if $\textbf{u}\in {Q^{\{0\}}_4}$ and $\textbf{v}\in {Q^{\{1\}}_4}$, then there exists a path $\langle \textbf{u}, (\textbf{u})^{4}, \textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_4-F^4_1$, where $P_s$ is in ${Q^{\{1\}}_4}$, and so $d_{Q_4-F^4_1}(\textbf{u},\textbf{v})\le 4$. \noindent \textbf{Case 2}. $| A^4_{1,0}|=1$ and $| A^4_{1,1}|=1$.
By Theorem~\ref{theorem3.3}, $d_{Q_4-F^4_1}(\textbf{u},\textbf{v})\le 3$ if $\textbf{u}$,$\textbf{v}\in {Q^{\{0\}}_4}$ or $\textbf{u}$,$\textbf{v}\in {Q^{\{1\}}_4}$. So we only need to consider that $\textbf{u}\in {Q^{\{0\}}_4}$ and $\textbf{v}\in {Q^{\{1\}}_4}$. Since $|N_{{Q^{\{0\}}_4}-A^4_{1,0}}(\textbf{u})|\ge 2$, we may choose two vertices $\textbf{u}_1,\textbf{u}_2\in N_{{Q^{\{0\}}_4}-A^4_{1,0}}(\textbf{u})$. Since $((\textbf{u}_1)^4)^i\ne (\textbf{u}_2)^4$ for each $i\in\{1,2,3\}$, there must exist $(\textbf{u}_1)^4\in {Q^{\{1\}}_4}-A^4_{1,1}$ or $(\textbf{u}_2)^4\in {Q^{\{1\}}_4}-A^4_{1,1}$. We suppose the former, i.e., $(\textbf{u}_1)^4\in {Q^{\{1\}}_4}-A^4_{1,1}$. Thus, there exists a path $\langle \textbf{u}, \textbf{u}_1, (\textbf{u}_1)^4,\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_4-F^4_1$, where $P_s$ is in ${Q^{\{1\}}_4}-A^4_{1,1}$. Clearly, $l(\textit{P}_s)\le D({Q^{\{1\}}_4-A^4_{1,1}})=3$, and so $d_{Q_4-F^4_1}(\textbf{u},\textbf{v})\le 5$. \end{proof} \begin{mylemma}\label{lemma3.6} $D^s_f(Q_n;Q_1)\le n+1$ for $n\ge 4$. \end{mylemma} \begin{proof} We prove this lemma by induction on $n$. By Lemma~\ref{lemma3.5}, the lemma holds for $n=4$. Thus, we assume that this lemma holds for $4\le k\le n-1$, i.e., $D^s_f(Q_k;Q_1)\le k+1$ for $4 \le k \le n-1$. Note that $\kappa^s(Q_n;Q_1)=n-1$ by Lemma~\ref{lemma2.3}, so $Q_n-\mathcal{F}^n_1$ is connected for $| \mathcal{F}^n_1|\le n-2$. Let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_n-\mathcal{F}^n_1$. \noindent \textbf{Case 1}. $\textbf{u}$ and $\textbf{v}$ are symmetric. We may assume $\textbf{u}=00 \cdots 0$ and $\textbf{v}=11 \cdots 1$. Since $n\ge 5$ and $| \mathcal{F}^n_1|\le n-2$, by Lemma~\ref{lemma3.1}, there must exist a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in $Q_n-\mathcal{F}^n_1$ for some $j\in \{1,2, \ldots,n\}$. Thus we may assume $(\textbf{u})^{n},(\textbf{v})^{n}\in Q_n-\mathcal{F}^n_1$. Note that $\mathcal{F}^n_1$ contains at most $2(n-2)$ vertices. So we may assume ${Q^{\{0\}}_n}$ has at most $n-2$ faulty vertices. Then there exists a path $\langle \textbf{u}, \textit{P}_s, (\textbf{v})^n, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in ${Q_n}-\mathcal{F}^n_1$, where $\textit{P}_s$ is in ${Q^{\{0\}}_n}-\mathcal{F}^n_1$. By Lemma~\ref{lemma2.2}, we have $D({Q^{\{0\}}_n}-\mathcal{F}^n_1)\le D_f(Q_{n-1})=n$. Thus, $l(P_s)\le n$ and $d_{{Q_n}-\mathcal{F}^n_1}(\textbf{u}, \textbf{v})\le n+1$. \noindent \textbf{Case 2}. $\textbf{u}$ and $\textbf{v}$ are unsymmetric. Without loss of generality, we may assume $\textbf{u}$, $\textbf{v}$ $\in$ ${Q^{\{0\}}_n}$. \noindent \textbf{Case 2.1}. $| \mathcal{A}^n_{1,1}|\ge 1$. Then $| \mathcal{A}^n_{1,0}|+|\mathcal{B}^n_1| \le n-3$. Thus, there exists a path $\langle \textbf{u},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_1$, where $P_s$ is in ${Q^{\{0\}}_n}-\mathcal{A}^n_{1,0}-\mathcal{B}^n_1$. By the induction hypothesis, we infer $l(\textit{P}_s)\le n$, and so we get $d_{Q_n-\mathcal{F}^n_1}(\textbf{u}$, $\textbf{v})\le n$. \noindent \textbf{Case 2.2}. $| \mathcal{A}^n_{1,1}|=0$. If $| \mathcal{A}^n_{1,0}|=0$, then there exists a path $\langle \textbf{u},\textit{P}_s,\textbf{v}\rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_1$, where $P_s$ is in ${Q^{\{0\}}_n}-\mathcal{B}^n_1$. Since $l(\textit{P}_s)\le D({Q^{\{0\}}_n}-\mathcal{B}^n_1)\le D_f(Q_{n-1})=n$, $d_{Q_n-\mathcal{F}^n_1}(\textbf{u}$, $\textbf{v})\le n$.
If $| \mathcal{A}^n_{1,0}|\ge 1$, then there exists a path $\langle \textbf{u}, (\textbf{u})^{n},\textit{P}_s, (\textbf{v})^{n},\textbf{v}\rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_1$, where $P_s$ is in ${Q^{\{1\}}_n}-\mathcal{B}^n_1$ and $|\mathcal{B}^n_1| \le n-3$. By Lemma~\ref{lemma3.2}, we have $D({Q^{\{1\}}_n}-\mathcal{B}^n_1)=n-1$. Then $l(\textit{P}_s)\le n-1$, and so $d_{Q_n-\mathcal{F}^n_1}(\textbf{u}$, $\textbf{v})\le n+1$. \end{proof} Combining Theorem~\ref{theorem3.3}, Lemma~\ref{lemma3.4}, and Lemma~\ref{lemma3.6} with the fact $D^s_f(G;W)\ge D_f(G;W)$, we have the following result. \begin{mytheorem}\label{theorem3.7} $D^s_f(Q_3;Q_1)=D_f(Q_3;Q_1)=3$ and $D^s_f(Q_n;Q_1)=D_f(Q_n;Q_1)=n+1$ if $n \geq 4$. \end{mytheorem} \section{$Q_m$-subcube fault diameter and $Q_m$-structure fault diameter} For the $Q_m$-subcube connectivity $\kappa^{sc}(Q_n;Q_m)$ of $Q_n$, we define the $Q_m$-\textit{subcube fault diameter} $D^{sc}_f(Q_n;Q_m)$ to be the maximum diameter of any subgraph of $Q_n$ obtained by removing $\kappa^{sc}(Q_n;Q_m)-1$ or fewer $Q_m$-subcubes. Since the $Q_m$-structure fault diameter is a special case of the $Q_m$-subcube fault diameter, we first determine the $Q_m$-subcube fault diameter $D_f^{sc}(Q_n;Q_m)$. \begin{mytheorem}\label{theorem3.20} $D^{sc}_f(Q_{m+2};Q_m)=m+2$ for $m\ge 0$. \end{mytheorem} \begin{proof} Firstly, we prove $D^{sc}_f(Q_{m+2};Q_m)\le m+2$ by induction on $m$. Clearly, $D^{sc}_f(Q_2;Q_0)=D_f(Q_2)=2$ and $D^{sc}_f(Q_3;Q_1)=D^{s}_f(Q_3;Q_1)=3$, so the statement is true for $m=0,1$. In the following, we assume that the statement holds for $1\le k\le m-1$, i.e., $D^{sc}_f(Q_{k+2};Q_k)\le k+2$ for $1\le k\le m-1$. Note that $\kappa^{sc}(Q_{m+2};Q_m) = 2$ by Lemma~\ref{lemma2.4}. For $| \mathcal{F}^{m+2}_m|\le 1$, without loss of generality, we may assume $\mathcal{F}^{m+2}_m\subseteq \cup^m_{i=0}S_i({Q^{\{0\}}_{m+2}})$. Let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_{m+2}-\mathcal{F}^{m+2}_m$. \noindent \textbf{Case 1}. $| F^{m+2}_m|=0$. Since $\mathcal{F}^{m+2}_m=F^{m+2}_m\cup\mathcal{F}^{m+2}_{m-1}$, $| \mathcal{F}^{m+2}_{m-1}|\le 1$. By the induction hypothesis, we have $D(Q^{\{0\}}_{m+2}-\mathcal{F}^{m+2}_{m-1})\le D^{sc}_f(Q_{m+1};Q_{m-1})\le m+1$. Then $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+1$ if $\textbf{u},\textbf{v}\in Q^{\{0\}}_{m+2}$. Obviously, $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+1$ if $\textbf{u},\textbf{v}\in Q^{\{1\}}_{m+2}$. Since $(\textbf{u},(\textbf{u})^{m+2})\in E(Q_{m+2}-\mathcal{F}^{m+2}_m)$, there exists a path $\langle \textbf{u}, (\textbf{u})^{m+2}, \textit{P}_s,\textbf{v}\rangle$ between $\textbf{u}\in Q^{\{0\}}_{m+2}$ and $\textbf{v}\in Q^{\{1\}}_{m+2}$ in $Q_{m+2}-\mathcal{F}^{m+2}_m$, where $P_s$ is in ${Q^{\{1\}}_{m+2}}$. Note that $l(\textit{P}_s)\le D(Q^{\{1\}}_{m+2})=m+1$, and so $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+2$. \noindent \textbf{Case 2}. $| F^{m+2}_m|=1$. Since $Q^{\{0\}}_{m+2}-F^{m+2}_m$ is connected and isomorphic to $Q_m$, $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+1$ if $\textbf{u},\textbf{v}\in Q^{\{0\}}_{m+2}$. With a similar argument to that used in Case 1, one can get that $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+1$ if $\textbf{u},\textbf{v}\in Q^{\{1\}}_{m+2}$, and $d_{Q_{m+2}-\mathcal{F}^{m+2}_m}(\textbf{u}$, $\textbf{v})\le m+2$ if $\textbf{u}\in Q^{\{0\}}_{m+2}$ and $\textbf{v}\in Q^{\{1\}}_{m+2}$.
Since there are $2^{m+1}$ pairs of symmetric vertices in $Q_{m+2}$ and removing a single $Q_m$-subcube deletes at most $2^m< 2^{m+1}$ vertices, we have $D^{sc}_f(Q_{m+2};Q_m)\ge m+2$ by Lemma~\ref{lemma2.7}. In summary, we get $D^{sc}_f(Q_{m+2};Q_m)=m+2$. \end{proof} \begin{mylemma}\label{lemma3.21} $D^{sc}_f(Q_{m+3};Q_m)\le m+4$ for $m\ge 0$. \end{mylemma} \begin{proof} We prove this lemma by induction on $m$. Clearly, $D^{sc}_f(Q_3;Q_0)=D_f(Q_3)=4$ and $D^{sc}_f(Q_4;Q_1)=D^s_f(Q_4;Q_1)\le 5$, so the result holds for $m=0,1$. In the following, we assume that the statement holds for $1\le k\le m-1$, i.e., $D^{sc}_f(Q_{k+3};Q_k)\le k+4$ for $1\le k\le m-1$. Note that $\kappa^{sc}(Q_{m+3};Q_m) = 3$. So we assume $| \mathcal{F}^{m+3}_m|\le 2$ and let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_{m+3}-\mathcal{F}^{m+3}_m$. \noindent \textbf{Case 1}. $\textbf{u}$ and $\textbf{v}$ are symmetric. Without loss of generality, we take $\textbf{u}=00 \cdots 0$ and $\textbf{v}=11 \cdots 1$. For $m\ge 2$ and $| \mathcal{F}^{m+3}_m|\le 2\le m+2$, by Lemma~\ref{lemma3.1}, there exists a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$. For convenience, we may assume $(\textbf{u})^{m+3},(\textbf{v})^{m+3}\in Q_{m+3}-\mathcal{F}^{m+3}_m$. \noindent \textbf{Case 1.1}. $| \mathcal{A}^{m+3}_{m,0}|+| \mathcal{B}^{m+3}_m|=2$. Then $| \mathcal{A}^{m+3}_{m,1}|=0$ and $| \mathcal{B}^{m+3}_m|\le 2$. Thus, there exists a path $\langle \textbf{u}, (\textbf{u})^{m+3},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$, where $\textit{P}_s$ is in ${Q^{\{1\}}_{m+3}}-\mathcal{B}^{m+3}_m$. By the induction hypothesis, we have $l(\textit{P}_s) \le D({Q^{\{1\}}_{m+3}}-\mathcal{B}^{m+3}_m) \le D^{sc}_f(Q_{m+2};Q_{m-1}) \le m+3$. Thus $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{u}$, $\textbf{v})\le m+4$. \noindent \textbf{Case 1.2}. $| \mathcal{A}^{m+3}_{m,0}|+| \mathcal{B}^{m+3}_m|\le 1$. Then there exists a path $\langle \textbf{u},\textit{P}_s,(\textbf{v})^{m+3}, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_{m+3}}-\mathcal{A}^{m+3}_{m,0}-\mathcal{B}^{m+3}_m$. By Theorem~\ref{theorem3.20}, $l(\textit{P}_s) \le D({Q^{\{0\}}_{m+3}}-\mathcal{A}^{m+3}_{m,0}-\mathcal{B}^{m+3}_m) \le D^{sc}_f(Q_{m+2};Q_m) = m+2$. So we have $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{u}$, $\textbf{v})\le m+3$. \noindent \textbf{Case 2}. $\textbf{u}$ and $\textbf{v}$ are unsymmetric. We may assume $\textbf{u}$, $\textbf{v}$ $\in$ ${Q^{\{0\}}_{m+3}}$. \noindent \textbf{Case 2.1}. $| \mathcal{A}^{m+3}_{m,0}|+| \mathcal{B}^{m+3}_m|=2$ and $| \mathcal{B}^{m+3}_m|=2$. Then $| \mathcal{A}^{m+3}_{m,0}|=0$. Thus, there exists a path $\langle \textbf{u},\textit{P}_s,\textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_{m+3}}-\mathcal{B}^{m+3}_m$. By the induction hypothesis, $l(\textit{P}_s) \le D({Q^{\{0\}}_{m+3}}-\mathcal{B}^{m+3}_m) \le D^{sc}_f(Q_{m+2};Q_{m-1}) \le m+3$. So $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{u}$, $\textbf{v})\le m+3$. \noindent \textbf{Case 2.2}. $| \mathcal{A}^{m+3}_{m,0}|+| \mathcal{B}^{m+3}_m|=2$ and $| \mathcal{B}^{m+3}_m|\le 1$. Note that $| \mathcal{A}^{m+3}_{m,1}|=0$.
Thus, there exists a path $\langle \textbf{u}, (\textbf{u})^{m+3},\textit{P}_s, (\textbf{v})^{m+3},\textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$, where $\textit{P}_s$ is in ${Q^{\{1\}}_{m+3}}-\mathcal{B}^{m+3}_m$. By Theorem~\ref{theorem3.20}, we have $l(\textit{P}_s) \le D({Q^{\{1\}}_{m+3}}-\mathcal{B}^{m+3}_m) \le D^{sc}_f(Q_{m+2};Q_m) =m+2$. So $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{u}$, $\textbf{v})\le m+4$. \noindent \textbf{Case 2.3.} $| \mathcal{A}^{m+3}_{m,0}|+| \mathcal{B}^{m+3}_m|\le 1$. There exists a path $\langle \textbf{u},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{m+3}-\mathcal{F}^{m+3}_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_{m+3}}-\mathcal{A}^{m+3}_{m,0}-\mathcal{B}^{m+3}_m$. By Case 1.2, we get $l(\textit{P}_s)\le m+2$, and thus $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{u}$, $\textbf{v})\le m+2$. \end{proof} \begin{mylemma}\label{lemma3.22} Let $m\ge 0$ and $n\ge m+3$. If $D^{sc}_f(Q_{n-1};Q_m)\leqslant n$, then $D(Q_{n-1}-\mathcal{F}^{n-1}_m)\le n-1$ for $| \mathcal{F}^{n-1}_m|\le n-m-3$. \end{mylemma} \begin{proof} For $| \mathcal{F}^{n-1}_m|\le n-m-3$, we prove $D(Q_{n-1}-\mathcal{F}^{n-1}_m)\le n-1$ by induction on $n$. Obviously, $D(Q_{m+2}-\mathcal{F}^{m+2}_m)=D(Q_{m+2})=m+2$ for $| \mathcal{F}^{m+2}_m|= 0$. Next, we prove $D(Q_{m+3}-\mathcal{F}^{m+3}_m)\le m+3$ for $| \mathcal{F}^{m+3}_m|\le 1$. Without loss of generality, we may assume $\mathcal{F}^{m+3}_m\subseteq \cup^m_{i=0}S_i({Q^{\{0\}}_{m+3}})$. Let $\textbf{x}$ and $\textbf{y}$ be two vertices in $Q_{m+3}-\mathcal{F}^{m+3}_m$. By Theorem~\ref{theorem3.20}, $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{x}$, $\textbf{y})\le m+3$ if $\textbf{x},\textbf{y}\in Q^{\{0\}}_{m+3}$ or $\textbf{x},\textbf{y}\in Q^{\{1\}}_{m+3}$. Since $(\textbf{x},(\textbf{x})^{m+3})\in E(Q_{m+3}-\mathcal{F}^{m+3}_m)$, $d_{Q_{m+3}-\mathcal{F}^{m+3}_m}(\textbf{x}$, $\textbf{y})\le m+3$ for $\textbf{x}\in Q^{\{0\}}_{m+3}$ and $\textbf{y}\in Q^{\{1\}}_{m+3}$. This means that the result holds for $n=m+3$ and $n=m+4$. Thus, we assume that the result also holds for $m+4\le k\le n-1$, i.e., $D(Q_{k-1}-\mathcal{F}^{k-1}_m)\le k-1$ for $m+4\le k\le n-1$. Let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_{n-1}-\mathcal{F}^{n-1}_m$. \noindent \textbf{Case 1}. $\textbf{u}$ and $\textbf{v}$ are symmetric. We set $\textbf{u}=00 \cdots 0$ and $\textbf{v}=11 \cdots 1$. For $m\le n-5$ and $| \mathcal{F}^{n-1}_m|\le n-m-3$, by Lemma~\ref{lemma3.1}, there exists a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in $Q_{n-1}-\mathcal{F}^{n-1}_m$. We may assume $(\textbf{u})^{n-1},(\textbf{v})^{n-1}\in Q_{n-1}-\mathcal{F}^{n-1}_m$. \noindent \textbf{Case 1.1}. $| \mathcal{A}^{n-1}_{m,0}|+| \mathcal{B}^{n-1}_m|=n-m-3$. Then $| \mathcal{A}^{n-1}_{m,1}|=0$ and $| \mathcal{B}^{n-1}_m|\le n-m-3$. This means that there exists a path $\langle \textbf{u}, (\textbf{u})^{n-1},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{n-1}-\mathcal{F}^{n-1}_m$, where $\textit{P}_s$ is in ${Q^{\{1\}}_{n-1}}-\mathcal{B}^{n-1}_m$. By the induction hypothesis, we have $ l(\textit{P}_s) \le D({Q^{\{1\}}_{n-1}}-\mathcal{B}^{n-1}_m) \le D(Q_{n-2}-\mathcal{F}^{n-2}_{m-1}) \le n-2$. Thus $d_{Q_{n-1}-\mathcal{F}^{n-1}_m}(\textbf{u}$, $\textbf{v})\le n-1$. \noindent\textbf{Case 1.2}. $| \mathcal{A}^{n-1}_{m,0}|+| \mathcal{B}^{n-1}_m|\le n-m-4$. 
Then there exists a path $\langle \textbf{u},\textit{P}_s,(\textbf{v})^{n-1}, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{n-1}-\mathcal{F}^{n-1}_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_{n-1}}- \mathcal{A}^{n-1}_{m,0}- \mathcal{B}^{n-1}_m$. By the induction hypothesis, we have $l(\textit{P}_s) \le D({Q^{\{0\}}_{n-1}}-\mathcal{A}^{n-1}_{m,0}- \mathcal{B}^{n-1}_m) \le D(Q_{n-2}-\mathcal{F}^{n-2}_{m}) \le n-2$. So $d_{Q_{n-1}-\mathcal{F}^{n-1}_m}(\textbf{u}$, $\textbf{v})\le n-1$. \noindent \textbf{Case 2}. $\textbf{u}$ and $\textbf{v}$ are unsymmetric. We may assume $\textbf{u}$, $\textbf{v}$ $\in$ ${Q^{\{0\}}_{n-1}}$. Note that $| \mathcal{A}^{n-1}_{m,0}|+| \mathcal{B}^{n-1}_m|\le n-m-3$ and $D^{sc}_f(Q_{n-1};Q_m)\le n$. Then there exists a path $\langle \textbf{u},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_{n-1}-\mathcal{F}^{n-1}_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_{n-1}}-\mathcal{A}^{n-1}_{m,0}-\mathcal{B}^{n-1}_m$. We have $d_{Q_{n-1}-\mathcal{F}^{n-1}_m}(\textbf{u}, \textbf{v}) \le D(Q^{\{0\}}_{n-1}-\mathcal{A}^{n-1}_{m,0}-\mathcal{B}^{n-1}_m) \le D^{sc}_f(Q_{n-2};Q_m) \le n-1$. \end{proof} \begin{mylemma}\label{lemma3.23} $D^{sc}_f(Q_n;Q_m)\le n+1$ for $m\ge 0$ and $n\ge m+3$. \end{mylemma} \begin{proof} We prove this lemma by induction on $n$. By Lemma~\ref{lemma3.21}, this lemma holds for $n=m+3$. Thus, we assume that this lemma holds for $m+3\le k\le n-1$, i.e., $D^{sc}_f(Q_k;Q_m)\le k+1$ for $m+3\le k\le n-1$. Note that $\kappa^{sc}(Q_n;Q_m) = n-m$. Then, for $| \mathcal{F}^n_m|\le n-m-1$, let $\textbf{u}$ and $\textbf{v}$ be any two vertices in $Q_n-\mathcal{F}^n_m$. \noindent \textbf{Case 1}. $\textbf{u}$ and $\textbf{v}$ are symmetric. Note that $m\le n-4$ and $| \mathcal{F}^n_m|\le n-m-1$. By Lemma~\ref{lemma3.1}, there must exist a pair of vertices $(\textbf{u})^{j}$ and $(\textbf{v})^{j}$ in $Q_n-\mathcal{F}^n_m$ for some $j\in \{1,2, \ldots,n\}$, thus we may assume $(\textbf{u})^{n},(\textbf{v})^{n}\in Q_n-\mathcal{F}^n_m$. \noindent \textbf{Case 1.1}. $| \mathcal{A}^n_{m,0}|+| \mathcal{B}^n_m|=n-m-1$. Then $| \mathcal{A}^n_{m,1}|=0$ and $| \mathcal{B}^n_m|\le n-m-1$. This implies that there exists a path $\langle \textbf{u}, (\textbf{u})^{n},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_m$, where $\textit{P}_s$ is in ${Q^{\{1\}}_n}-\mathcal{B}^n_m$. By the induction hypothesis, we have $ l(\textit{P}_s) \le D({Q^{\{1\}}_n}-\mathcal{B}^n_m) \le D^{sc}_f(Q_{n-1};Q_{m-1}) \le n$. So $d_{Q_n-\mathcal{F}^n_m}(\textbf{u}$, $\textbf{v})\le n+1$. \noindent \textbf{Case 1.2}. $| \mathcal{A}^n_{m,0}|+| \mathcal{B}^n_m|\le n-m-2$. Then there exists a path $\langle \textbf{u},\textit{P}_s,(\textbf{v})^{n}, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_n}-\mathcal{A}^n_{m,0}-\mathcal{B}^n_m$. By the induction hypothesis, we have $l(\textit{P}_s) \le D({Q^{\{0\}}_n}-\mathcal{A}^n_{m,0}-\mathcal{B}^n_m) \le D^{sc}_f(Q_{n-1};Q_m) \le n$. Thus, $d_{Q_n-\mathcal{F}^n_m}(\textbf{u}$, $\textbf{v})\le n+1$. \noindent \textbf{Case 2}. $\textbf{u}$ and $\textbf{v}$ are unsymmetric. Without loss of generality, we may assume $\textbf{u}$, $\textbf{v}$ $\in$ ${Q^{\{0\}}_n}$. \noindent \textbf{Case 2.1.} $| \mathcal{A}^n_{m,0}|+| \mathcal{B}^n_m|=n-m-1$, and $| \mathcal{A}^n_{m,0}|\ge 1$. Then $| \mathcal{A}^n_{m,1}|=0$ and $| \mathcal{B}^n_m|\le n-m-2$. 
Thus, there exists a path $\langle \textbf{u}, (\textbf{u})^{n},\textit{P}_s,(\textbf{v})^{n}, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_m$, where $\textit{P}_s$ is in ${Q^{\{1\}}_n}-\mathcal{B}^n_m$. By the induction hypothesis, we have $D^{sc}_f(Q_{n-1};Q_{m-1})\le n$. Then, by Lemma~\ref{lemma3.22}, $l(\textit{P}_s) \le D({Q^{\{1\}}_n}-\mathcal{B}^n_m) \le D(Q_{n-1}-\mathcal{F}^{n-1}_{m-1}) \le n-1$. So $d_{Q_n-\mathcal{F}^n_m}(\textbf{u}$, $\textbf{v})\le n+1$. \noindent \textbf{Case 2.2.} $| \mathcal{A}^n_{m,0}|+| \mathcal{B}^n_m|=n-m-1$, and $| \mathcal{A}^n_{m,0}|=0$. Then $| \mathcal{B}^n_m|=n-m-1$. So there exists a path $\langle \textbf{u},\textit{P}_s,\textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_n}-\mathcal{B}^n_m$. By the induction hypothesis, we have $l(\textit{P}_s) \le D({Q^{\{0\}}_n}-\mathcal{B}^n_m) \le D^{sc}_f(Q_{n-1};Q_{m-1}) \le n$. Thus $d_{Q_n-\mathcal{F}^n_m}(\textbf{u}$, $\textbf{v})\le n$. \noindent \textbf{Case 2.3.} $| \mathcal{A}^n_{m,0}|+| \mathcal{B}^n_m|\le n-m-2$. Then there exists a path $\langle \textbf{u},\textit{P}_s, \textbf{v} \rangle$ between $\textbf{u}$ and $\textbf{v}$ in $Q_n-\mathcal{F}^n_m$, where $\textit{P}_s$ is in ${Q^{\{0\}}_n}-\mathcal{A}^n_{m,0}-\mathcal{B}^n_m$. By Case 1.2, we get $l(\textit{P}_s)\le n$. Therefore $d_{Q_n-\mathcal{F}^n_m}(\textbf{u}$, $\textbf{v})\le n$. \end{proof} \begin{mylemma}\label{lemma3.24} For $m\ge 0$ and $n\ge m+3$, $D_f(Q_n;Q_m)\ge n+1$ and $D^{sc}_f(Q_n;Q_m)\ge n+1$. \end{mylemma} \begin{proof} We take $\textbf{x}=00\cdots0\in Q_m$ and assume $Q_m\subseteq {Q^{\{0\}}_n}$. Note that ${Q^{\{0\}}_n}=Q_m\Box Q_{n-m-1}$. Let $\textbf{t}\in V(Q_{n-m-1})$, and we take $N_{Q_{n-m-1}}(\textbf{t})=\{\textbf{t}_1, \textbf{t}_2,\ldots, \textbf{t}_{n-m-1}\}$. We set $\textbf{y}=11\cdots10\in {Q^{\{0\}}_n}$. Clearly, $\{(Q_m,\textbf{t}_i)\mid 1\le i\le n-m-1\}$ consists of $n-m-1$ vertex-disjoint $Q_m$-subcubes, and the removal of $\cup^{n-m-1}_{i=1}(Q_m,\textbf{t}_i)$ disconnects $Q^{\{0\}}_n$ such that $Q_m$ and $\textbf{y}$ are in distinct components. For any $\textbf{z}\in Q_m$, we have that $\textbf{z}$ and $\textbf{y}$ are also in distinct components of ${Q^{\{0\}}_n}-\cup^{n-m-1}_{i=1}(Q_m,\textbf{t}_i)$. Since ${Q_n}-\cup^{n-m-1}_{i=1}(Q_m,\textbf{t}_i)$ is connected, there is a path $\langle \textbf{x}, P_1, \textbf{z}, (\textbf{z})^{n}, \textit{P}_2, \textbf{y} \rangle$ between $\textbf{x}$ and $\textbf{y}$ in ${Q_n}-\cup^{n-m-1}_{i=1}(Q_m,\textbf{t}_i)$. Note that $H_{Q_n}((\textbf{z})^{n},\textbf{y})=n-s$ if $H_{Q_m}(\textbf{x},\textbf{z})=s$, and $0\le s\le m$. Then $l(P_1)\ge s$ and $l(P_2)\ge n-s$. Thus, the length of any path between $\textbf{x}$ and $\textbf{y}$ is at least $n+1$. This implies $D_f(Q_n;Q_m)\ge n+1$. Since $D_f(Q_n;Q_m)\le D^{sc}_f(Q_n;Q_m)$, $D^{sc}_f(Q_n;Q_m)\ge n+1$. \end{proof} By Lemmas~\ref{lemma3.23} and \ref{lemma3.24}, we have the following theorem: \begin{mytheorem}\label{theorem3.25} $D^{sc}_f(Q_n;Q_m)=n+1$ for $m\ge 0$ and $n\ge m+3$. \end{mytheorem} On one hand, according to Theorem~\ref{theorem3.25} and the fact $D_f(Q_n;Q_m)\le D^{sc}_f(Q_n;Q_m)$, we obtain $D_f(Q_n;Q_m)\le n+1$. On the other hand, by Lemma~\ref{lemma3.24}, we derive $D_f(Q_n;Q_m)\ge n+1$. Thus, $D_f(Q_n;Q_m)= n+1$.
Combining this with Theorem~\ref{theorem3.20}, we get the following theorem: \begin{mytheorem}\label{theorem3.26} For $n \geq 3$ and $0 \leq m \leq n - 2$, $D_f(Q_n;Q_m)=$ $\begin{cases} n & \textit{if } n=m+2,\\ n+1 & \textit{if } n\ge m+3. \end{cases}$ \end{mytheorem} \section{Concluding remarks} In this paper, we introduced two novel concepts of fault diameters for graphs: the \textit{structure fault diameter} and the \textit{substructure fault diameter}, applying these definitions to analyze the $n$-dimensional hypercube $Q_n$. Unlike the conventional wide diameter approach, our method focuses on determining the largest distance that can arise between any two surviving vertices under structure faults. A natural extension of this work would involve determining the $K_{1,n}$-structure fault diameter and $K_{1,n}$-substructure fault diameter of various graph structures, further enriching our understanding of network robustness in the presence of complex fault scenarios. \begin{thebibliography}{99} \bibitem{23} L. Ba, Y. Zhang, and H. Zhang, The path-structure connectivity of augmented $k$-ary $n$-cubes, The Computer Journal 66 (2023) 3119-3128. \bibitem{01} J. Bondy and U. Murty, Graph Theory with Applications, New York, Springer (2008). \bibitem{05} X. Chen, Hamiltonicity of hypercubes with faulty vertices, Information Processing Letters 116 (2016) 343-346. \bibitem{13} K. Day and A. E. Al-Ayyoub, Fault diameter of $k$-ary $n$-cube networks, IEEE Transactions on Parallel and Distributed Systems 8 (1997) 903-907. \bibitem{17} J.-S. Fu, G.-H. Chen, and D.-R. Duh, Node-disjoint paths and related problems on hierarchical cubic networks, Networks 40 (2002) 142-154. \bibitem{14} F. Harary, Graph Theory, Addison-Wesley, Reading (1969). \bibitem{03} M. S. Krishnamoorthy and B. Krishnamurthy, Fault diameter of interconnection networks, Computers and Mathematics with Applications 13 (1987) 577-582. \bibitem{07} T.-L. Kung, C.-K. Lin, T. Liang, L.-Y. Hsu, and J. J.-M. Tan, Fault diameter of hypercubes with hybrid node and link faults, Journal of Interconnection Networks 10 (2009) 233-242. \bibitem{18} F. T. Leighton, Introduction to Parallel Algorithms and Architecture: Arrays, Trees, Hypercubes, Morgan Kaufmann (1992). \bibitem{02} C.-K. Lin, L. Zhang, J. Fan, and D. Wang, Structure connectivity and substructure connectivity of hypercubes, Theoretical Computer Science 634 (2018) 97-107. \bibitem{27} Y. Lv, C.-K. Lin, J. Fan, and X. Jia, Hamiltonian cycle and path embeddings in $3$-ary $n$-cubes based on $K_{1,3}$-structure faults, Journal of Parallel and Distributed Computing 120 (2018) 1148-1158. \bibitem{28} Y. Lv, C.-K. Lin, and J. Fan, Hamiltonian cycle and path embeddings in $k$-ary $n$-cubes based on structure faults, The Computer Journal 60 (2017) 159-179. \bibitem{12} B. Niu, S. Zhou, T. Tian, and Q. Zhang, The wide diameter and fault diameter of exchanged crossed cube, International Journal of Foundations of Computer Science 35 (2024) 435-451. \bibitem{06} K. Pan, Star fault tolerance of hypercube, Theoretical Computer Science 972 (2023) 114052. \bibitem{09} H. Qi and X. Zhu, The fault-diameter and wide-diameter of twisted hypercubes, Discrete Applied Mathematics 235 (2018) 154-160. \bibitem{15} Y. Rouskov and P. K. Srimani, Fault diameter of star graphs, Information Processing Letters 48 (1993) 243-251. \bibitem{04} E. Sabir and J. Meng, Fault-tolerant hamiltonicity of hypercubes with faulty subcubes, Information Processing Letters 172 (2021) 106160. \bibitem{30} E. Sabir, J. Fan, J. Meng, and B.
Cheng, Structure fault-tolerant Hamiltonian cycle and path embeddings in bipartite $k$-ary $n$-cube networks, IEEE Transactions on Reliability 73 (2024) 257-269. \bibitem{08} T.-H. Tsai, Y.-C. Chen, and J. J.-M. Tan, Topological properties on the wide and fault diameters of exchanged hypercubes, IEEE Transactions on Parallel and Distributed Systems 25 (2014) 3317-3327. \bibitem{21} G. Wang, J. Yu, Y. Zou, J. Fan, and W. Cheng, A new measure of fault-tolerance for network reliability: double-structure connectivity, IEEE Transactions on Networking 32 (2024) 874-889. \bibitem{20} N. Wang, J. Meng, and Y. Tian, Reliability analyses of regular graphs based on edge-structure connectivity, Discrete Applied Mathematics 356 (2024) 329-342. \bibitem{24} Y. Yang, X. Hua, and L. Yang, Hyper $K_{1,r}$ and sub-$K_{1,r}$ fault tolerance of star graphs, Discrete Applied Mathematics 339 (2023) 172-177. \bibitem{29} Y. Zhang, W. Fan, Z. Han, Y. Song, and R. Wang, Fault-tolerant routing algorithm based on disjoint paths in 3-ary $n$-cube networks with structure faults, The Journal of Supercomputing 77 (2021) 13090-13114. \bibitem{22} L. Zhao and S. Wang, Structure connectivity and substructure connectivity of split-star networks, Discrete Applied Mathematics 341 (2023) 359-371. \end{thebibliography} \end{document}
2412.10001v1
http://arxiv.org/abs/2412.10001v1
On the Markov transformation of Gaussian processes
\documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{tabularx} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amstext} \usepackage{rotating} \usepackage{latexsym} \usepackage{epsfig} \usepackage{pdfpages}\usepackage{geometry} \usepackage{dsfont} \usepackage{xcolor} \usepackage{stmaryrd} \usepackage{tikz} \usepackage{comment} \usepackage{hyperref} \usepackage{mathtools} \usepackage[autostyle]{csquotes} \usepackage{lineno} \geometry{includehead,includefoot, left=2cm,right=2cm, top=2cm,bottom=2cm, headheight=1cm,headsep=1cm, footskip=1cm} \newtheorem{them}{Theorem} \newtheorem{pro}[them]{Proposition} \newtheorem{cor}[them]{Corollary} \newtheorem{lem}[them]{Lemma} \newtheorem*{org}{Organisation of the paper} \newtheorem*{theorem}{Theorem} \newtheorem{theoremA}{Theorem} \newtheorem{defipro}[them]{Definition/Proposition} \theoremstyle{definition} \newtheorem{defi}[them]{Definition} \newtheorem{remq}[them]{Remark} \newtheorem{ex}[them]{Example} \newtheorem{nott}[them]{Notation} \newtheorem{notdef}[them]{Notation/Definition} \renewcommand{\thetheoremA}{\Alph{theoremA}} \DeclareMathOperator{\var}{Var} \DeclareMathOperator{\card}{card} \newcommand{\dcv}[1]{\xrightarrow[#1]{}} \newcommand{\ens}[2]{\left\{ #1 ~;~#2 \right\} } \newcommand{\la}{\lambda} \newcommand{\g}{\gamma} \renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \renewcommand{\d}{\delta} \newcommand{\s}{\sigma} \newcommand{\Joint}{\textnormal{Joint}} \newcommand{\fd}{\textnormal{f.d.}} \newcommand{\sgn}{\textnormal{sgn}} \newcommand{\E}{\mathbb{E}} \newcommand{\G}{\mathbb{G}} \newcommand{\Gg}{\mathbb{\Gamma}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\V}{\mathbb{V}} \newcommand{\C}{\mathbb{C}} \newcommand{\B}{\mathbb{B}} \newcommand{\A}{\mathcal{A}} \newcommand{\II}{\mathcal{I}} \newcommand{\FF}{\mathcal{F}} \newcommand{\CC}{\mathcal{C}} \newcommand{\BB}{\mathcal{B}} \newcommand{\KK}{\mathcal{K}} \newcommand{\GG}{\mathcal{G}} \newcommand{\DD}{\mathcal{D}} \newcommand{\JJ}{\mathcal{J}} \newcommand{\TT}{\mathcal{T}} \newcommand{\PP}{\mathcal{P}} \newcommand{\LL}{\mathcal{L}} \newcommand{\OO}{\mathcal{O}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\Nor}{\mathcal{N}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \renewcommand{\SS}{\mathcal{S}} \newcommand{\ee}{\mathbb{\varepsilon}} \newcommand{\1}{\mathds{1}} \newcommand{\ps}{\textnormal{p.s.}} \newcommand{\ie}{\textnormal{i.e.}} \newcommand{\id}{\textnormal{id}} \newcommand{\ssi}{\textnormal{si et seulement si}} \newcommand{\supp}{\textnormal{supp}} \newcommand{\Marg}{\textnormal{Marg}} \newcommand{\argmin}{\textnormal{argmin}} \newcommand{\Ent}{\textnormal{Ent}} \newcounter{numeroexo} \newcommand{\comm}[1]{\textcolor{red}{#1}} \newcommand{\com}[1]{\textcolor{green}{#1}} \newcommand{\ti}[1]{\tilde{#1}} \newcommand{\Ima}{\textnormal{Im}} \newcommand{\Tr}{\textnormal{Tr}} \newcommand{\proj}{\textnormal{proj}} \newcommand{\GL}{\textnormal{GL}} \newcommand{\n}[2]{\left\|#1\right\|_{#2}} \newcommand\exo[1]{ \par \noindent \stepcounter{numeroexo} \hspace{-.25cm} \begin{center} \textbf{\arabic{numeroexo}. 
#1} \end{center} \noindent } \newcommand{\accol}[1]{\left\{\begin{array}{l}#1\end{array}\right .} \newcommand{\ent}[2]{\llbracket #1,#2\rrbracket} \newcommand\rappel{ \par \vspace{.5cm} \noindent \textit{Rappel : } \noindent } \begin{document} \pagestyle{plain} \vspace{.8cm} \begin{center} {\huge On the Markov transformation of Gaussian processes\\} \vspace{0.5cm} {\Large Armand Ley\\} \vspace{0.5cm} October 2024 \end{center} \textbf{Abstract:} {\small Given a Gaussian process $(X_t)_{t \in \R}$, we construct a Gaussian \emph{Markov} process with the same one-dimensional marginals using sequences of transformations of $(X_t)_{t \in \R}$ ``made Markov'' at finitely many times. We prove that there exists at least one such Markov transform of $(X_t)_{t \in \R}$. In the case where the instantaneous decorrelation rate of $(X_t)_{t \in \R}$ is continuous, we prove that the Markov transform is uniquely determined and characterized by the same instantaneous decorrelation rate. } \vspace{2cm} \section{Introduction}\label{sec:Intro} \subsection{Context and main results} During the last decades, partly under the impetus of mathematical finance, the question of mimicking stochastic processes has become a recurrent problem. Put in a very general way, the problem can be expressed as follows: Given a stochastic process $(X_t)_{t \in T}$, one can ask whether there exists a process $(Y_t)_{t \in T}$ preserving certain properties of $(X_t)_{t \in T}$ while satisfying some additional conditions. To motivate our problem of mimicking Gaussian processes, we now present a selection of four mimicking problems: \begin{enumerate} \item The Kellerer problem of mimicking a martingale by a \emph{Markov} martingale; \item The Gyöngy problem of mimicking an Itô process by the solution of an SDE; \item The problem of faking Brownian motion; \item The problem of mimicking an $\R^2$-valued process by an $\R^2$-valued order-preserving Markov process. \end{enumerate} We shall then present in more detail the problem of Boubel and Juillet of mimicking a process that is increasing for the stochastic order by a Markov process with non-decreasing trajectories. This mimicking problem is the most important one for this article, as Markov transformation, the construction method used to build its solution, is central to our work. In a seminal article, Strassen \cite{strassen_existence_1965} investigated whether, given a coupling $(X_t)_{t \in \{1,2\}}$, there exists a coupling $(Y_t)_{t \in \{1,2\}}$ with the same $1$-marginals satisfying the martingale property, \ie , \ $\E(Y_2|Y_1) = Y_1$. He proved that such a coupling exists if and only if $X_1$ is smaller than $X_2$ for the convex order, \ie ,\ $\E(f(X_1)) \leq \E(f(X_2))$ for every function $f$ that is convex\footnote{We refer to \cite{muller_comparison_2002,shaked_stochastic_2007} for more information about stochastic orders.}. Kellerer \cite{kellerer_markov-komposition_1972} generalized this result: A real-valued process $(X_t)_{t \in \R}$ can be mimicked by a \emph{Markov} martingale having the same $1$-marginals if and only if its $1$-marginals are increasing for the convex order (see also \cite{beiglbock_root_2016,boubel_markov-quantile_2022,hirsch_kellerers_2015}).
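For instance, in the Gaussian setting relevant to this article, this criterion takes a simple explicit form: a centered Gaussian law $\Nor(0,\s_1^2)$ is smaller than $\Nor(0,\s_2^2)$ for the convex order if and only if $\s_1^2 \leq \s_2^2$, and in that case an explicit martingale coupling is given by
\[
Y_1 \sim \Nor(0,\s_1^2), \qquad Y_2 := Y_1 + Z \quad \text{with } Z \sim \Nor(0,\s_2^2-\s_1^2) \text{ independent of } Y_1,
\]
so that $\E(Y_2|Y_1) = Y_1$.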
In a different vein, Gyöngy \cite{gyongy_mimicking_1986}, using an approach suggested by Krylov \cite{krylov_once_1985}, showed that any Itô process $(X_t)_{t \in \R}$ with coefficients satisfying certain conditions can be mimicked by a Markov process which is a solution of a stochastic differential equation and which has the same one-dimensional marginals. Another mimicking problem is to fake the Brownian motion, that is, to find a non-Brownian process that has as many common features with the Brownian motion as possible. It is known (see e.g. \cite{lowther_fitting_2008}) that the Brownian motion is the only continuous martingale with Brownian marginals that satisfies the \emph{strong} Markov property. Beiglböck \emph{et al.} \cite{beiglbock_faking_2024}, building on a line of previous articles (see their introduction for numerous references), finally proved that there exists a non-Brownian continuous martingale with Brownian marginals that is Markov. More recently, for processes valued in product spaces, Bérard and Frénais \cite{berard_comonotone_2024} studied the case of a homogeneous Markov process $(X^{x_1}_t, X^{x_2}_t)_{t \geq 0}$ starting at $(x_1,x_2)$ with $x_1 \leq x_2$ and whose marginal processes $ (X^{x_i}_t)_{t \geq 0}$ are governed by the same stochastically monotone Feller semi-group. They showed that it can be mimicked by a Feller process $(Y^{x_1}_t, Y^{x_2}_t)_{t \geq 0}$ starting at $(x_1,x_2)$, satisfying $\text{Law}\left((X^{x_i}_t)_{t \geq 0} \right) = \text{Law}\left((Y^{x_i}_t)_{t \geq 0}\right)$ for $i = 1,2$ and $Y_t^{x_1} \leq Y_t^{x_2}$ for every $t \in \R_+.$ Boubel and Juillet \cite{boubel_markov-quantile_2022} studied the problem of mimicking a stochastic process $(X_t)_{t \in \R}$ by a \emph{Markov} process with the same $1$-marginals and with non-decreasing trajectories. If we ignore the Markov property, it is well known that there exists a solution if and only if the marginals are increasing for the stochastic order, \ie , $\E(f(X_s)) \leq \E(f(X_t))$ for every $s<t \in \R^2$ and $f : \R \to \R$ non-decreasing. An explicit solution is then given by the quantile process, \ie , \ by $(G_t(U))_{t \in \R}$ where $U$ is a uniformly distributed random variable on $[0,1]$ and $G_t : q \in ]0,1[ \mapsto \inf \big( \ens{x \in \R}{\mu_t(]-\infty,x]) \geq q} \big) \in \R$ stands for the quantile function of $\mu_t$, the law of $X_t.$ However, as shown in \cite{juillet_peacocks_2016}, the quantile process is not Markov in general. To describe how their solution to the mimicking problem with the Markov property is obtained, we have to introduce the notions of ``transformation of a process made Markov at certain times'' and of ``Markov transforms''. Given a finite set of times $R \subset \R$ and a process $X = (X_t)_{t \in \R}$, we say that a process $X^R$ is the\footnote{As all the processes made Markov at times $R$ follow the same law, we will talk about {\emph{the}} transformation made Markov at times $R$. A rigorous definition of Markov transforms will be given in Definition \ref{def:made_markov}.} transformation of $X$ made Markov at times $R$ if: \begin{itemize} \item On every interval between two successive times of $R$, $X^R$ and $X$ have the same law; \item For every $r \in R$, $X^R$ is made Markov at time $r$: conditionally on the value $X^R_r$ of the trajectory at time $r$, the future $(X_t^R)_{t>r}$ of the trajectory is independent of the past $(X_t^R)_{t<r}$.
\end{itemize} A \emph{Markov transform of $X$} is then a Markov process $X'$ obtained as the limit (for the finite-dimensional topology) of a sequence of processes $\left(X^{R_n}\right)_{n \geq 1}$, with $(R_n)_{n \geq 1}$ an admissible\footnote{If we denote by $\s_{R} := \sup_{x \in R} d(x, R \setminus \{x\})$ the mesh of a finite set $R \subset \R$, a sequence of sets of times $(R_n)_{n \geq 1}$ is admissible if: $\lim_{n \to + \infty} \inf(R_n) = - \infty$, $\lim_{n \to + \infty} \sup(R_n) = + \infty$ and $\lim_{n \to + \infty} \s_{R_n} = 0$.} sequence of sets of times. Their solution to the mimicking problem, called the Markov-quantile process, is then obtained as a Markov transform of the quantile process. Since $X^R$ has the same $1$-marginals as $X$, a Markov transform is a Markov process that has the same $1$-marginals as $X$. This outlines a general construction of mimicking processes. The hope is that making a process Markov at certain times and passing to the limit preserves certain properties of the original process while adding the Markov property. In this article, we study this construction in the case of Gaussian processes, confirming its interest but also highlighting its limitations (see Section \ref{sec:counterexample}). As expected, it turns out that Markov transforms of ``regular'' Gaussian processes are solutions of the mimicking problem presented below in Theorem \ref{them:mimicking_intro}. Assume we have a Gaussian process $X = (X_t)_{ t \in \R}$ with a continuous covariance function $K$ and an instantaneous decorrelation rate (or instantaneous decay rate of the correlation) $\a^K$ given by \begin{equation}\label{eq:IDR} \a^K(t) := \lim_{h \to 0^+} \frac{1}{h}\left(1 - \frac{K(t,t+h)}{\sqrt{K(t,t)}\sqrt{K(t+h,t+h)}} \right), \end{equation} that is well defined\footnote{As we will see in Section \ref{sec:counterexample}, the decay rate of the correlation does not always converge when $h$ goes to $0^+$.} and continuous. The mimicking problem of interest is to find a Gaussian \emph{Markov} process that matches the $1$-marginals of $X$ and has the same instantaneous decorrelation rate. The following result solves this problem and establishes that, under a reinforced condition, the solution of this mimicking problem is a Markov transform. \begin{theoremA}\label{them:mimicking_intro} Let $X=(X_t)_{t \in \R}$ denote a Gaussian process with continuous covariance function $K$ and positive variance function. Assume $\a^K$ (recall \eqref{eq:IDR}) is well defined and continuous. \begin{enumerate} \item\label{Existence} \emph{Existence:} There exists a Gaussian process $Y =(Y_t)_{t \in \R}$ with covariance function $K'$ satisfying: \begin{itemize} \leftskip=0.3in \item[(1)] For every $t \in \R$, $\text{Law}(X_t) = \text{Law}(Y_t)$; \item[(2)] The process $Y$ has the same instantaneous decorrelation rate as $X$, \ie , \ \[\lim_{h \to 0^+} \frac{1}{h}\left(1 - \frac{K'(t,t+h)}{\sqrt{K'(t,t)}\sqrt{K'(t+h,t+h)}} \right)= \a^{K}(t);\] \item[(3)] $(Y_t)_{t \in \R}$ is a Markov process. \end{itemize} In the rest of the article, if a Gaussian process $Y$ satisfies $(1)$, $(2)$ and $(3)$, we simply say that $Y$ is a \emph{mimicking process} of $X.$ \item\label{Point:Uniqueness} \emph{Uniqueness in law:} Every mimicking process of $X$ with covariance function $K' :\R^2 \to \R$ has the same mean function as $(X_t)_{t \in \R}$ and, for every $s<t \in \R^2$, \begin{equation*}\label{eq:transfor_formule_cov} K'(s,t) = K(s,s)^{1/2}K(t,t)^{1/2} \exp\left(-\int_s^t \a^K(u) du\right).
\end{equation*} \item\label{Point:mim_markov_transf} \emph{The mimicking process is a Markov transform (under the following reinforced hypothesis):} Assume \begin{equation}\label{eq:correlation} \sup_{v \in [s,t]} \left| \a^K(v) - \frac{1}{h}\left(1 - \frac{K(v,v+h)}{\sqrt{K(v,v)}\sqrt{K(v+h,v+h)}} \right) \right| \dcv{h \to 0^+} 0 \end{equation} for every $s<t \in \R^2$. Let $(R_n)_{n \geq 1} \in \A$ be an admissible sequence and $Y$ be the mimicking process of $X$ (see Points $1$ and $2$). For every $n \geq 1$, we denote by $X^{R_n}$ the transformation of $X$ made Markov at times $R_n$. Then $(X^{R_n})_{n\geq 1}$ and $Y$ almost surely have continuous paths and $X^{R_n}$ converges weakly to $Y$ on compact sets. \end{enumerate} \end{theoremA} Note that, for every $s<t \in \R^2$, the correlation coefficient of $(Y_s,Y_t)$ equals $\exp\left(-\int_s^t \a^K(u)du\right)$, that is, the inverse of the exponential of the integral of the instantaneous decorrelation rate from $s$ to $t.$ Hence, informally, if $X$ is highly correlated between $s$ and $t$, the instantaneous decorrelation rate of $X$ will be small between $s$ and $t$, so that the correlation coefficient of $(Y_s,Y_t)$ will be high. Theorem \ref{them:mimicking_intro} will be proved in Theorem \ref{them:mimicking}, where we also prove that our mimicking process is the solution of an explicit stochastic differential equation (SDE). According to Theorem \ref{them:mimicking_intro}, under hypothesis \eqref{eq:correlation}, the general method of making a process Markov at certain times and passing to the limit behaves well: Our process $(X_t)_{t \in \R}$ admits a Markov transform and this Markov transform is a Markov process with the same $1$-marginals as $X$ that preserves its instantaneous decorrelation rate. However, in general, there is no reason why a process should admit a Markov transform and, if it does, no reason why it should be unique. Given a process $X$, there is no guarantee that one can find an admissible sequence $(R_n)_{n \geq 1}$ and a Markov process $X'$ such that $\lim_{n \to +\infty} X^{R_n} = X'$, nor that it is impossible to find two admissible sequences $(R_n^1)_{n \geq 1}$, $(R_n^2)_{n \geq 1}$ and two distinct Markov processes $X'^1$, $X'^2$ satisfying $\lim_{n \to + \infty} X^{R_n^1} = X'^1$ and $\lim_{n \to + \infty} X^{R_n^2} = X'^2$. As we will see, it is natural to distinguish two notions of Markov transform: $X'$ is a \emph{strong} Markov transform if \emph{for every} admissible sequence $(R_n)_{n \geq 1}$, $\lim_{n \to +\infty} X^{R_n} = X'$, whereas $X'$ is a \emph{weak} Markov transform of $X$ if \emph{there exists} an admissible sequence $(R_n)_{n \geq 1}$ such that $\lim_{n \to +\infty} X^{R_n} = X'$. We shall see that it is easier to study a local version of Markov transforms.
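Before turning to this local version, let us illustrate the covariance formula of Point \ref{Point:Uniqueness} of Theorem \ref{them:mimicking_intro} on a simple example, which the reader may verify directly: the Brownian motion restricted to $\R_+^*$, whose covariance function is $K(s,t) = \min(s,t)$. Using the expansion $(1+x)^{-1/2} = 1 - x/2 + o(x)$, we get, for $0<s<t$,
\[
\a^K(t) = \lim_{h \to 0^+} \frac{1}{h}\left(1 - \sqrt{\frac{t}{t+h}}\right) = \frac{1}{2t}
\qquad \text{and} \qquad
K'(s,t) = \sqrt{st}\,\exp\left(-\int_s^t \frac{du}{2u}\right) = \sqrt{st}\,\sqrt{\frac{s}{t}} = \min(s,t),
\]
so that the Brownian motion, being already Markov, is its own mimicking process, in accordance with the uniqueness stated in Point \ref{Point:Uniqueness} (compare also with the case $H = 1/2$ of the fractional Brownian motion treated in Section \ref{sec:Identification_criteria}). We now turn to the local formulation.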
Instead of requiring that $X'$ is the limit of $X^{R_n}$ for an admissible sequence $(R_n)_{n \geq 1}$, we rather require that, for each pair of times $s<t \in \R^2$, there exists a sequence of partitions $(R_n^{s,t})_{n \geq 1}$ of $[s,t]$ with mesh size going to $0$ and such that the laws of $\left( X^{R_n^{s,t}}_s,X^{R_n^{s,t}}_t \right)_{n \geq 1}$ converge to the law of $(X_s',X_t').$ In this case, we say that $X'$ is a \emph{weak local Markov transform} of $X.$ If every such sequence of partitions $(R_n^{s,t})_{n \geq 1}$ with mesh size going to $0$ leads to this convergence, we say that $X'$ is a \emph{strong local Markov transform} of $X.$ This local version of Markov transform is less stringent than the former global version, as we only ask for the convergence of the two-dimensional laws and, more importantly, the sets of times are allowed to depend on $(s,t)$\footnote{We finally end up with four notions of Markov transforms: weak local Markov transform, strong local Markov transform, weak global Markov transform and strong global Markov transform.}. Assuming condition \eqref{eq:correlation}, Theorem \ref{them:mimicking_intro} implies that there exists a (unique) strong global Markov transform, which is characterized as the unique solution of our mimicking problem. Similarly, Boubel and Juillet showed that every quantile process admits a unique weak (not strong) global Markov transform (the Markov-quantile process) and they give a characterization of this process in terms of stochastic orders. Hence, they asked \cite[$\mathsection 5.5.1$, Open Question $a)$]{boubel_markov-quantile_2022} whether the weak local Markov transform of a process (when it exists) is always unique and, if it is, how to characterize it. The following result shows that a (stationary) Gaussian process can admit infinitely many weak local Markov transforms and undermines the hope of finding a nice characterization of the set of weak local Markov transforms in general. \begin{theoremA}\label{them:counterexample_intro} There exists a stationary Gaussian process $(X_t)_{t \in \R} $ whose set of weak local Markov transforms is the set of all the processes $(Y_t)_{t \in \R}$ satisfying: \begin{enumerate} \item The process $(Y_t)_{t \in \R}$ is centered Gaussian with constant variance function equal to $1$; \item The covariance function of $(Y_t)_{t \in \R}$ is non-negative; \item The process $(Y_t)_{t \in \R}$ is Markov. \end{enumerate} \end{theoremA} Hence, the set of Markov transforms of $(X_t)_{t \in \R}$ is not reduced to a singleton but contains a large variety of processes. In particular, Theorem \ref{them:counterexample_intro} shows that a weak local Markov transform of a stationary process is not necessarily stationary. Before presenting the organization of the paper, note that all the concepts involved depend only on the laws of the processes involved. Hence, we shall work directly with measures on a product space instead of stochastic processes. \subsection{Organization of the article} In Section \ref{sec:preliminaries}, we introduce some notation and give some definitions. First, we recall the operations of concatenation and composition of transport plans, which will often be used in this paper. Then, we define the notions of weak local Markov transform and strong local Markov transform of a measure and see how these notions behave under some transformations. In Section \ref{seq:compo_gaussian}, we give some general results on Gaussian measures.
We begin by proving the concatenation formula, which is an explicit formula for the law obtained by concatenating Gaussian transport plans with non-singular marginals (Lemma \ref{lem:compo_gaussien}). This will enable us to give a criterion to determine whether a Gaussian measure with non-singular marginals is Markov (Proposition \ref{pro:criteria_gauss_Markov}), to show that the concatenation of Gaussian measures is a continuous operation (Lemma \ref{lem:compo_gaussien_stab}) and to prove that a weak local Markov transform of a Gaussian measure remains a Gaussian measure (Proposition \ref{pro:Gaussianite_limite}). Applying a result of Kellerer \cite[Theorem $1$]{kellerer_markov-komposition_1972}, we show the existence of a weak local Markov transform in the case of Gaussian measures with non-singular marginals (Theorem \ref{them:fonda_NB_gaussien}). This is the same conclusion as \cite[Theorem $2.26$]{boubel_markov-quantile_2022}, but in the context of Gaussian measures, our kernels do not need to be increasing and multi-dimensional marginals are allowed. In Section \ref{sec:Identification_criteria}, we establish a sufficient criterion to prove that a real-valued Gaussian process admits a strong local Markov transform (Theorem \ref{them:Markov_stationnaire}), which is also a preliminary version of Point $3$ of Theorem \ref{them:mimicking_intro}. If $K$ and $\a^K$ are defined as in Theorem \ref{them:mimicking_intro} and we assume that Hypothesis \eqref{eq:correlation} is satisfied, then $X$ admits a strong local Markov transform and this strong local Markov transform is also its mimicking process\footnote{Since weak convergence implies finite-dimensional convergence, this result is weaker than Point $3$ of Theorem \ref{them:mimicking_intro}.}. We also give a sufficient criterion to identify weak local Markov transforms of a \emph{stationary} Gaussian process, by looking at the cluster points of the decay rate of its correlation function (Theorem \ref{them:Markov_stationnaire_bis}): If $\a$ is a cluster point of this decay rate when $h \to 0^+$, the (renormalized) stationary Ornstein-Uhlenbeck process with parameter $\a$ is a weak local Markov transform of $X.$ We finally apply our criterion for strong local Markov transforms to examples of Gaussian processes, starting with the fractional Brownian motion. Section \ref{sec:counterexample} is devoted to the proof of Theorem \ref{them:counterexample_intro}. First, we use Weierstrass's continuous nowhere differentiable functions \cite{hardy_weierstrasss_1916} to construct a probability measure $\mu$ on $\R$ whose Fourier transform has a decay rate with infinitely many cluster points at $0^+$ (Lemma \ref{lem:va_fourier_preli} and Proposition \ref{pro:va_fourier}). Then, we apply a theorem of Bochner \cite{bochner_monotone_1933} to construct a stationary Gaussian process $(X_t)_{t \in \R}$ using $\mu$. We then apply the identification criterion of Markov transforms of stationary processes proved in Section \ref{sec:Identification_criteria} to establish Theorem \ref{them:counterexample_intro} (restated as Theorem \ref{them:contre_exemple}). Finally, in Section \ref{sec:global_markov_transforms}, after properly defining measures made Markov at times $R$ for a finite set $R \subset \R$ (Definition \ref{def:made_markov}), we study the link between local Markov transforms and global Markov transforms.
We shall see that, in the Gaussian case, there is no difference between a strong local Markov transform and a strong global Markov transform (Proposition \ref{pro:cov_proc_Markovinifie}). For stationary Gaussian processes, we show that our criterion to identify weak \emph{local} Markov transforms still holds for weak \emph{global} Markov transforms (Proposition \ref{pro:finite_dim_conv_stati}). Then, we apply some standard results about convergence of processes to upgrade the convergence in the finite-dimensional topology to convergence in the topology on continuous processes associated with the uniform norm (Theorem \ref{them:fd_to_wiener}). We finally state and prove Theorem \ref{them:mimicking_intro}, to which we add an SDE characterization of the mimicking process (Theorem \ref{them:mimicking}). \section{Preliminaries and (weak) Markov transformation}\label{sec:preliminaries} In this section, we fix some generic notation that will be used in the rest of the article. \begin{nott}\label{not:basique} We write $\ent{a}{b} := [a,b] \cap \N$ for $(a,b) \in \R^2$ and $\{t_1 < \dots < t_m\}$ (resp. $(t_1 < \dots < t_m)$) for the set $\{t_1, \dots , t_m\}$ (resp. the sequence $(t_1, \dots,t_m)),$ when $t_1 < \dots < t_m.$ For each measurable space $(E,\SS)$, we denote by $\PP(E)$ (resp. $\MM_+(E)$) the set of probability measures (resp. positive finite measures) on $E$. Classically, we endow every product space of measurable spaces with its cylindrical $\sigma$-algebra and every topological space with its Borel $\sigma$-algebra. For a product space $E = \prod_{t \in T} E_{t}$, if $T' \subset T$, we write $\proj^{T'}$ for the projection from $\prod_{ t \in T} E_t $ to $\prod_{ t \in T'} E_t$. We then denote by $f_{\#} \g : A \mapsto \g\left(f^{-1}(A)\right)$ the push-forward measure of $\g$ by $f$ and set $P^{T'} := \proj^{T'}_\# P \in \PP(\prod_{t \in T'} E_{t})$ for every $P \in \PP(\prod_{t \in T} E_{t})$. In case $T' = \{t_1,\dots,t_m\}$, we simply denote $P^{T'}$ by $P^{t_1, \dots , t_m}$. Furthermore, for $(\mu_t)_{ t \in T} \in \prod_{t \in T} \PP(E_t)$, we write $\Marg((\mu_t)_{t \in T}) := \ens{P \in \PP(\prod_{t \in T} E_{t})}{\forall t \in T, P^t = \mu_t}.$ \end{nott} \begin{notdef}[Concatenation and composition] \label{def:compo} Let $E_1, \dots, E_d$ be Polish spaces, $(\mu_i)_{i \in \ent{1}{d}} \in \prod_{i=1}^d \PP(E_i)$, and $(P_i)_{i \in \ent{1}{d-1}} \in \prod_{i = 1}^{d-1} \Marg(\mu_i,\mu_{i+1})$. The concatenation of $P_1, \dots, P_{d-1}$ is the probability measure $P_1 \circ \dots \circ P_{d-1} \in \PP(E_1 \times \cdots \times E_d)$ defined by \[ (P_1 \circ \dots \circ P_{d-1})(A_1 \times \dots \times A_d) = \int_{A_1 \times \dots \times A_d} \mu_1(dx_1)k^{1,2}(x_1,dx_2)\dots k^{d-1,d}(x_{d-1},dx_d), \] where $k^{i,i+1}$ is the probability kernel defined by the disintegration $P_i(dx_i,dx_{i+1}) = \mu_i(dx_i)k^{i,i+1}(x_i,dx_{i+1})$. Defining $k^{2,1}$ as the kernel given by the disintegration $P_{1}(dx_1,dx_2) = \mu_2(dx_2)k^{2,1}(x_2,dx_1)$, we leave it to the reader to verify that $(P_1\circ P_{2})(dx_1,dx_2,dx_3) = \mu_2 (dx_2) [k^{2,1}(x_2, \cdot) \otimes k^{2,3}(x_2, \cdot)](dx_1,dx_3)$. This means that, conditionally on the present, the future is independent of the past.
For $T \subset \R$ and $(E_t)_{t \in T}$ a family of Polish spaces, we say that a probability measure $P \in \PP( \prod_{t \in T} E_{t})$ is a Markov measure if for every finite subset $\{t_1< \dots <t_m\} \subset T$, $P^{t_1,\dots,t_m} = P^{t_1,t_2} \circ \cdots \circ P^{t_{m-1},t_m}.$ Of course the notion of Markov measure is related to the more usual notion of Markov process: we leave it to the reader to verify that a process is Markov (with respect to its canonical filtration) if and only if its law is a Markov measure. The composition $P_1 \cdot \ \dots \ \cdot P_{d-1} \in \PP(E_1 \times E_d)$ of $P_1, \dots, P_{d-1}$ is now defined by $ P_1 \cdot \ \dots \ \cdot P_{d-1} := \proj^{1,d}_\# (P_1 \circ \dots \circ P_{d-1}).$ For $s<t \in \R^2$, we write $\SS_{[s,t]}$ for the set of partitions of $[s,t]$, \ie , \ the sequences $(t_1 < \dots < t_m) \in \R^m$ with $t_1 = s$ and $t_m = t.$ If $R = (t_1 < \dots < t_m) \in \SS_{[s,t]}$, we set $P_{\{R\}}^{s,t} := P^{t_1,t_2} \cdot \ \dots \ \cdot P^{t_{m-1},t_m}$ and denote by $\sigma_R := \sup_{i \in \ent{1}{m-1}} |t_{i+1} - t_i|$ the mesh of $R$. \end{notdef} We now define (weak and strong) local Markov transforms of a measure. We stress that this notion is different from the notion of \emph{global} Markov transform introduced in Definition \ref{defi:global_markov_transform}. \begin{defi}[Local Markov transform]\label{def:Markovinified} Let us consider an interval $T \subset \R$, $d \geq 1$ and a measure $P \in \PP( (\R^d )^T )$. \begin{enumerate} \item We say that $P$ admits a weak local Markov transform if there exists a Markov measure $P'$ such that \[\forall s<t \in T^2, \exists (R_n)_{n \geq 1} \in {\left(\SS_{[s,t]} \right)}^{\N^*}, \accol{\lim_{n \to +\infty} P_{\{R_n\}}^{s,t} = P'^{s,t} \\ \lim_{n \to + \infty} \s_{R_n} = 0}.\] In this case, we say that $P'$ is a weak local Markov transform of $P.$ \item We say that $P$ admits a strong local Markov transform if there exists a Markov measure $P'$ such that \[ \forall s<t \in T^2, \forall (R_n)_{n \geq 1} \in {\left(\SS_{[s,t]} \right)}^{\N^*} : \lim_{n \to +\infty} \s_{R_n} = 0 \implies \lim_{n \to + \infty} P_{\{R_n\}}^{s,t} = P'^{s,t}.\] In this case, we say that $P'$ is a strong local Markov transform of $P.$ \item Given two $\R^d$-valued stochastic processes $(X_t)_{t \in T}$ and $(Y_t)_{t \in T}$, we say that $(Y_t)_{t \in T}$ is a weak (resp. strong) local Markov transform of $(X_t)_{t \in T}$ if the law of $(Y_t)_{t \in T}$ is a weak (resp. strong) local Markov transform of the law of $(X_t)_{t \in T}.$ \end{enumerate} \end{defi} \begin{remq}\label{remp:uniqueness_markov_transform} \begin{enumerate} \item If $P'$ is a strong local Markov transform of a measure $P \in \PP( (\R^d )^T )$ and $Q$ is a weak local Markov transform of $P$, then $P' = Q.$ Indeed, for every $s<t \in T^2$, there exists $(R_n)_{n \geq 1} \in \left( \SS_{[s,t]} \right)^{\N^*}$ such that $Q^{s,t} = \lim_{n \to + \infty} P^{s,t}_{\{R_n\}}$ and $\lim_{n \to + \infty} \s_{R_n} = 0.$ As $P'$ is a strong local Markov transform of $P$, we have $ {P'}^{s,t} =\lim_{n \to + \infty} P^{s,t}_{\{R_n\}}$, hence ${P'}^{s,t} = Q^{s,t}$.
Since a Markov measure is completely characterized by its two-dimensional laws, this implies $P' = Q.$ In particular, there is at most one strong local Markov transform, and if a measure $P$ admits a strong local Markov transform, we will talk about \emph{the} strong local Markov transform of $P.$ \item A strong local Markov transform of $P$ is clearly a weak local Markov transform of $P$, but the converse is false, even when the weak local Markov transform is unique. For instance, if $(Y_t)_{t \in \R}$ is a family of i.i.d. random variables with law $\NN(0,1)$ and $P$ is the law of the process $ X = (Y_0 1_{t \in \Q} + Y_t 1_{t \in \R \setminus \Q} )_{t \in \R}$, we leave it to the reader to verify that $P' := \otimes_{t \in \R} \ \NN(0,1)$ is the unique weak local Markov transform of $P$, but is not a strong local Markov transform of $P$. \item If $P$ is a Markov measure, then $P_{\{R\}}^{s,t} = P^{s,t}$ for every $R \in \SS_{[s,t]}$, so that $P$ is a strong local Markov transform of itself. In particular, $P$ is the only weak local Markov transform of $P.$ \end{enumerate} \end{remq} In \cite[Theorem $2.26$]{boubel_markov-quantile_2022}, Boubel and Juillet proved the existence of a weak local Markov transform when the measure $P$ has increasing kernels in the sense given below. We denote by $\leq_{st}$ the stochastic order on $\PP(\R)$, namely $\mu \leq_{st} \nu$ if, for every non-decreasing bounded function $f$, $\int_{\R} f d\mu \leq \int_{\R} f d\nu.$ \begin{defi}\label{def:increasing_kernel} \begin{enumerate} \item Consider $\mu, \nu \in \PP(\R)$ and $P \in \Marg(\mu,\nu).$ We say that $P$ has increasing kernels for the stochastic order if there exists a disintegration $P(dx,dy) = \mu(dx)k_x(dy)$ and a Borel set $\Gamma$ such that $\mu(\Gamma)=1$ and $k_x \leq_{st} k_y$ for every $x < y \in \Gamma^2$. \item Consider $T \subset \R$ and $P \in \PP\left(\R^{T}\right)$. We say that $P$ has increasing kernels for the stochastic order if, for every $s<t \in T^2$, $P^{s,t}$ has increasing kernels for the stochastic order. \end{enumerate} \end{defi} For more information about the stochastic order and other orders on probability spaces, we refer to the monographs \cite{muller_comparison_2002,shaked_stochastic_2007}. For additional information about increasing kernels, we refer to \cite[Proposition/Definition $3.11.$]{boubel_markov-quantile_2022}. \begin{them}[Boubel--Juillet]\label{them:fonda_NB} Consider an interval $T \subset \R$ and a probability measure $P \in \PP\left(\R^{T}\right)$. If $P$ has increasing kernels for the stochastic order, then $P$ admits a weak local Markov transform. \end{them} In \cite{boubel_markov-quantile_2022}, Theorem \ref{them:fonda_NB} was stated and proved for $T = \R$, but an easy modification shows that it remains true for any interval $T \subset \R$. The following proposition indicates how Markov transforms behave under a time change and a transformation applied \enquote{component by component}. Given $S,T \subset \R$, $d \geq 1$, $P \in \PP( ( \R^d )^T )$ and $\phi :S \to T$, we denote by $P^{\phi}$ the measure on $(\R^d)^S$ with finite-dimensional laws $(P^{\phi})^{s_1,\dots,s_m} := P^{\phi(s_1), \dots, \phi(s_m)}$, $s_1 < \dots < s_m \in S^m$. If $(X_t)_{t \in T}$ is a stochastic process with law $P$, then $P^{\phi}$ is the law of the time-changed process $\left(X_{\phi(s)}\right)_{s \in S}.$ \begin{pro}\label{pro:transformation} We denote by $T \subset \R$ an interval, we fix $d \geq 1$ and we consider $P, P' \in \PP((\R^d)^T)$.
\begin{enumerate} \item Let $(f_t)_{t \in T}$ be a family of continuous injective functions from $\R^d$ to $\R^d$ and set $ f := \left( \otimes_{t \in T} f_t \right): (x_t)_{t \in T} \in {(\R^d)}^T \mapsto (f_t(x_t))_{t \in T} \in {(\R^d)}^T$. If $P'$ is a weak (resp. the strong) local Markov transform of $P$, then ${f }_\# P'$ is a weak (resp. the strong) local Markov transform of ${ f}_\# P$. \item Let $S,T \subset \R$ be two intervals and $\phi :S \to T$ be a strictly increasing continuous function. If $P'$ is a weak (resp. the strong) local Markov transform of $P$, then $P'^{\phi} \in \PP((\R^d)^S)$ is a weak (resp. the strong) local Markov transform of $P^\phi \in \PP((\R^d)^S)$. \end{enumerate} \end{pro} \begin{proof} \begin{enumerate} \item Fix $s<t \in T^2$. We leave it to the reader to verify that, by injectivity of the functions $(f_t)_{t \in T}$, the Markov property is transferred from $P'$ to $f_\# P'$ and that $\left(f_\# Q \right)_{\{R\}}^{s,t} = (f_s \otimes f_t)_\# Q_{\{R\}}^{s,t}$ for every $ Q \in \PP( (\R^d)^T )$, $R \in \SS_{[s,t]}$. So, if $\lim_{n \to + \infty} P_{\{R_n\}}^{s,t} = P'^{s,t}$, the continuity of $f_s \otimes f_t$ leads to $\left(f_\# P \right)_{\{R_n\}}^{s,t} = (f_s \otimes f_t)_\# P_{\{R_n\}}^{s,t} \dcv{n \to +\infty} (f_s \otimes f_t)_\# P'^{s,t} = \left(f_\# P' \right)^{s,t}$. This shows the result. \item First, assume $P'$ is the \emph{strong local Markov transform} of $P$. Fix $s<t \in S^2$ and $(R_n)_{n \geq 1} \in \left(\SS_{[s,t]}\right)^{\N^*}$ such that $\lim_{n \to +\infty}\s_{R_n} = 0.$ Since $\phi$ is strictly increasing and uniformly continuous on $[s,t]$, $(\phi(R_n))_{n \geq 1} \in \left( \SS_{[\phi(s),\phi(t)]} \right)^{\N^*}$ and satisfies $ \lim_{n \to +\infty} \s_{\phi(R_n)} = 0.$ Thus $(P^{\phi})_{\{R_n\}}^{s,t} = P^{\phi(s),\phi(t)}_{\{\phi(R_n)\}} \dcv{n \to + \infty} \left(P'\right)^{\phi(s),\phi(t)} = {\left(P'^{\phi}\right)}^{s,t}$, which shows that $P'^\phi$ is the strong local Markov transform of $P^\phi.$ Now, assume $P'$ is a \emph{weak local Markov transform} of $P$ and fix $s<t \in S^2$. We have to find a sequence $(R_n)_{n \geq 1} \in \left( \SS_{[s,t]}\right)^{\N^*}$ such that $\lim_{n \to + \infty} \s_{R_n} = 0$ and $\lim_{n \to + \infty} \left(P^{\phi}\right)_{\{R_n\}}^{s,t} = \left(P'^{\phi}\right)^{s,t}.$ As $\phi(s) < \phi(t)$ and $P'$ is a weak local Markov transform of $P$, there exists a sequence $(S_n)_{n \geq 1} \in \left(\SS_{[\phi(s),\phi(t)]}\right)^{\N^*}$ such that $\lim_{n \to +\infty}\s_{S_n} = 0$ and $\lim_{n \to +\infty} P^{\phi(s),\phi(t)}_{\{S_n\}} = P'^{\phi(s),\phi(t)} = {\left(P'^{\phi}\right)}^{s,t}.$ We set $R_n := \phi^{-1}(S_n)$. Since $\phi^{-1} : [\phi(s),\phi(t)] \to [s,t]$ is strictly increasing, uniformly continuous and $\left(P^{\phi}\right)_{\{R_n\}}^{s,t} = P_{\{S_n\}}^{\phi(s),\phi(t)}$, the sequence $(R_n)_{n \geq 1}$ meets the requirement. \end{enumerate} \end{proof} \begin{remq}\label{remq:iden_mark_trans_def} \begin{itemize} \item In terms of random variables, Proposition \ref{pro:transformation} tells us that if the law of $(Y_t)_{t \in T}$ is a weak (resp. the strong) local Markov transform of the law of $(X_t)_{t \in T}$, then the law of $\left(f_{\phi(s)}(Y_{\phi(s)})\right)_{s \in S}$ is a weak (resp. the strong) local Markov transform of the law of $\left(f_{\phi(s)}(X_{\phi(s)})\right)_{s \in S}$. \item Consider $S,T$ two intervals, $u :S \to \R^*$ and $\phi : S \to T$ a strictly increasing continuous function.
Assume $P \in \PP(\R^T)$ is a centered Gaussian measure with covariance function $K$ and $P' \in \PP(\R^T)$ is a weak (resp. the strong) local Markov transform of $P$, with covariance function $K'$. Applying Proposition \ref{pro:transformation}, it is straightforward that the centered Gaussian process with covariance function $ (s,t) \in S^2 \mapsto u(s)u(t)K'(\phi(s),\phi(t))$ is a weak (resp. the strong) local Markov transform of the centered Gaussian process with covariance function $(s,t) \in S^2 \mapsto u(s)u(t)K(\phi(s),\phi(t)).$ \end{itemize} \end{remq} For $\g \in \PP(\R^d)$, we denote by $m_\g \in \R^d$ (resp. $\Sigma_{\g} \in \MM_d(\R)$) the expected value of $\g$ (resp. the covariance matrix of $\g$), \ie , \ the expected value (resp. the covariance matrix) of a $\R^d$-valued random variable with law $\g.$ \begin{remq}\label{remq:centered_enough} In the rest of the article, we work under the hypothesis of non-singular marginals, \ie , \ $\Sigma_{\mu_t} \in \GL_d(\R)$ for every $t \in T.$ In this case, we can apply Proposition \ref{pro:transformation} with $f_t : x_t \mapsto \Sigma_{\mu_t}^{-1/2}(x_t - m_{\mu_t})$ and $f_t^{-1} : u_t \mapsto \Sigma_{\mu_t}^{1/2}u_t + m_{\mu_t}$, where $\Sigma_{\mu_t}^{1/2}$ stands for the symmetric and positive-definite square root of $\Sigma_{\mu_t}$. So, for a given measure $P \in \PP( (\R^d)^T )$, $P'$ is a weak (resp. the strong) local Markov transform of $P$ if and only if ${\left(\otimes_{t \in T} f_t\right)}_\# P'$ is a weak (resp. the strong) local Markov transform of ${\left(\otimes_{t \in T} f_t\right)}_\# P$. Since $ {\left(\otimes_{t \in T} f_t\right)}_\# P' \in \Marg((\ti{\mu}_t)_{t \in T})$, where $\ti{\mu}_t := {f_t}_\# \mu_t$ satisfies $m_{\ti{\mu}_t} = 0$ and $\Sigma_{\ti{\mu}_t} = I_d$, we can restrict our study to Markov transforms of centered processes whose marginal covariance matrices are all equal to the identity matrix. \end{remq} \section{Composition and Markov transformation of Gaussian measures}\label{seq:compo_gaussian} For every $d \geq 1,$ we write $\GG_d$ for the set of centered Gaussian measures on $\R^d$ and $\GG_d^*$ for the set of centered Gaussian measures on $\R^d$ with invertible covariance matrix. Consider $(d_1,d_2) \in \N^* \times \N^*$, $\pi \in \PP(\R^{d_1} \times \R^{d_2})$ and $(X_1,X_2)$ a $(\R^{d_1} \times \R^{d_2})$-valued random variable with law $\pi$. We denote by $\Sigma_{\pi}^{d_1,d_2} \in \MM_{d_1,d_2}(\R)$ the covariance between $X_1$ and $X_2.$ The following lemma is well known and characterizes the weak convergence of (centered) Gaussian measures by the convergence of their covariance matrices. We refer for instance to \cite[Ch. 8, Theorem 3]{bergstrom_weak_1982} for a proof. \begin{lem}\label{lem:stab_gaussienne} Let us consider $(\mu_n)_{n \geq 1} \in \left({\GG_d}\right)^{\N^*}$ and $\mu \in \PP(\R^d).$ The following conditions are equivalent. \begin{enumerate} \item The measure $\mu$ is centered Gaussian and $\Sigma_{\mu} = \lim_{n \to + \infty} \Sigma_{\mu_n} .$ \item The sequence $(\mu_n)_{n \geq 1}$ weakly converges to $\mu$. \end{enumerate} \end{lem} We now recall a standard result about conditional laws of Gaussian vectors. We refer to \cite[Chapter $8$, Section $9$]{von_mises_mathematical_1964} for a proof. \begin{lem}[Conditioning of Gaussian measures]\label{lem:gauss_cond} Let $(m,n)$ be in ${\N^*} \times \N^*$. \begin{enumerate} \item Fix $\pi \in\PP(\R^{m+n})$ a Gaussian measure and set $\mu := {\proj^{1, \cdots,m}}_\# \pi$, $\nu := {\proj^{m+1, \cdots,m+n}}_\# \pi$, $\Sigma_{\pi} := \Sigma_{\pi}^{m,n} \in \MM_{m,n}(\R)$.
If $\mu \in \GG_m^*$ and $k : \R^m \to \PP(\R^n)$ is a probability kernel such that $\pi(dx,dy) = \mu(dx)k_x(dy)$, then \[\mu(dx)-\textnormal{a.s.} , \ k_x = \NN\left(\Sigma_{\pi}^t \Sigma_\mu^{-1}x, \Sigma_\nu- \Sigma_{\pi}^t \Sigma_\mu^{-1}\Sigma_{\pi}\right).\] \item Consider $\mu \in \GG_m^*$, $(A,\Gamma) \in \MM_{n,m}(\R) \times \MM_n(\R)$ and $k : x \in \R^m \to \NN(Ax,\Gamma) \in \PP(\R^n)$. Then, writing $\pi(dx,dy) = \mu(dx)k_x(dy)$, we have \[ \pi = \NN\left( 0_{\R^{m+n}},\begin{pmatrix} \Sigma_\mu & \Sigma_\mu A^t \\ A \Sigma_\mu & \Gamma + A \Sigma_\mu A^t \end{pmatrix}\right) \in \GG_{m+n}.\] \end{enumerate} \end{lem} For $\mu \in \PP(\R^m), \nu \in \PP(\R^n)$, we set $\GG(\mu,\nu) := \GG_{m+n} \cap \Marg(\mu,\nu)$. Using Lemma \ref{lem:gauss_cond}, we obtain an explicit formula for the concatenation and the composition of Gaussian measures. \begin{lem}\label{lem:compo_gaussien} Let us consider $(\mu_1, \dots, \mu_p) \in \GG_{d_1}^* \times \cdots \times \GG_{d_p}^*$ and $(P_i)_{i \in \ent{1}{p-1}} \in \prod_{i=1}^{p-1} \GG(\mu_i,\mu_{i+1})$. We denote $\Sigma_{P_i}^{d_i,d_{i+1}}$ by $\Sigma_{i,i+1}$ and $\Sigma_{\mu_i}$ by $\Sigma_{i,i}$. \begin{description} \leftskip=0.3in \item[Concatenation formula:] $P_1 \circ \cdots \circ P_{p-1} = \NN \left( 0, \left( A_{i,j} \right)_{1 \leq i,j \leq p} \right),$ where $A_{i,j} \in \MM_{d_i,d_j}(\R)$ is defined by \[A_{i,j} = \begin{cases} \Sigma_{i,i+1} \Sigma_{i+1,i+1}^{-1} \Sigma_{i+1,i+2} \cdots \Sigma_{j-1,j-1}^{-1} \Sigma_{j-1,j}\in \MM_{d_i,d_{j}}(\R) & \text{ if } i < j \\ \Sigma_{i,i} & \text{ if } i=j \\ A_{j,i}^t &\text{ if } j < i \\ \end{cases} \] \item[Composition formula:] \[ P_1 \cdot \ \dots \ \cdot P_{p-1} = \NN \left( 0 , \begin{pmatrix} \Sigma_{\mu_1} & \Sigma_{1,2} \Sigma_{2,2}^{-1} \Sigma_{2,3} \dots \Sigma_{p-1,p-1}^{-1} \Sigma_{p-1,p} \\ (\Sigma_{1,2} \Sigma_{2,2}^{-1} \Sigma_{2,3} \dots \Sigma_{p-1,p-1}^{-1} \Sigma_{p-1,p})^t & \Sigma_{\mu_p} \end{pmatrix}\right). \] \end{description} \end{lem} \begin{proof} The proof of the concatenation formula is based on an induction on $p$, Lemma \ref{lem:gauss_cond} and the decomposition $(P_{1} \circ P_{2})(dx_1,dx_2,dx_3) = \mu_2 (dx_2) [k^{2,1}(x_2, \cdot) \otimes k^{2,3}(x_2, \cdot)](dx_1,dx_3)$ (see Notation/Definition \ref{def:compo}). The composition formula is then immediately implied by the concatenation formula and the projection on $\R^{d_1}\times \R^{d_p}$. \end{proof} In the case where the covariance matrices $\Sigma_{i,i}$ are identity matrices, for every $i<j \in \ent{1}{p}^2$, $A_{i,j} = \Sigma_{i,i+1} \dots \Sigma_{j-1,j} \in \MM_{d_i,d_j}$ is the product of the $j-i$ successive matrices $\Sigma_{k,k+1}$ starting with $k =i$. Lemma \ref{lem:compo_gaussien} allows us to recover\footnote{This criterion seems to have been known for a long time \cite{borisov_criterion_1982}, but to the best of our knowledge, not its multi-dimensional version.} a characterization of the Markov property for Gaussian measures. \begin{pro}\label{pro:criteria_gauss_Markov} Fix $d \geq 1$, $T \subset \R$, $(\mu_t)_{t \in T} \in (\GG_d^*)^T$ and denote by $P \in \Marg((\mu_t)_{t \in T})$ a centered Gaussian measure. For $s<t \in T^2$, we set $\Sigma_{t,t} := \Sigma_{\mu_t}$ and $\Sigma_{s,t} := \Sigma_{P^{s,t}}^{d,d}$. The measure $P$ is Markov if and only if \begin{equation}\label{eq:comp_dim} \forall s<t<u \in T^3, \Sigma_{s,u} = \Sigma_{s,t} \Sigma_{t,t}^{-1} \Sigma_{t,u}.
\end{equation} \end{pro} \begin{proof} If $P$ is Markov, then for every $s<t<u \in T^3$, according to the composition formula, we have $\Sigma_{s,u} = \Sigma_{P^{s,u}}^{d,d} = \Sigma_{P^{s,t} \cdot P^{t,u}}^{d,d} = \Sigma_{P^{s,t}}^{d,d} \Sigma_{\mu_t}^{-1} \Sigma_{P^{t,u}}^{d,d} = \Sigma_{s,t}\Sigma_{t,t}^{-1} \Sigma_{t,u}.$ For the converse implication, we assume that Hypothesis \eqref{eq:comp_dim} is true and we want to show that $P^{t_0,\dots,t_m} = P^{t_0,t_1} \circ \dots \circ P^{t_{m-1},t_m}$ for all $t_0 < \dots < t_m \in T^{m+1}$. For every $i<j \in \ent{0}{m}^2$, applying \eqref{eq:comp_dim} recursively, we obtain $\Sigma_{t_i,t_{i+1}} \Sigma_{t_{i+1},t_{i+1}}^{-1} \Sigma_{t_{i+1},t_{i+2}} \cdots \Sigma_{t_{j-1},t_{j-1}}^{-1} \Sigma_{t_{j-1},t_j} = \Sigma_{t_i,t_j}.$ According to the concatenation formula, $P^{t_0,\dots,t_m}$ and $P^{t_0,t_1} \circ \dots \circ P^{t_{m-1},t_m}$ are two centered Gaussian measures with the same covariance matrix, hence are equal. \end{proof} Using Lemma \ref{lem:stab_gaussienne} and the concatenation formula of Lemma \ref{lem:compo_gaussien}, we obtain the continuity of concatenation and composition on product spaces of Gaussian transport plans. \begin{lem}\label{lem:compo_gaussien_stab} Let us consider $(\mu_1, \dots, \mu_p) \in \GG_{d_1}^* \times \cdots \times \GG_{d_p}^*$ and, for $i \in \ent{1}{p-1}$, $P_i \in \GG(\mu_i,\mu_{i+1})$ and $(P_i^n)_{n \geq 1} \in \GG(\mu_i,\mu_{i+1})^{\N^*}$. If $ \lim_{n \to + \infty} P^n_i = P_i$ for every $i \in \ent{1}{p-1}$, then $P_1 \circ \cdots \circ P_{p-1} = \lim_{n \to + \infty} P^n_1 \circ \cdots \circ P^n_{p-1} $ and $P_1 \cdot \ \dots \ \cdot P_{p-1} = \lim_{n \to + \infty} P^n_1 \cdot \ \dots \ \cdot P^n_{p-1}.$ \end{lem} \begin{proof} According to Lemma \ref{lem:stab_gaussienne}, for every $i<j \in \ent{1}{p}^2$, the function $A_{i,j} : (P_1, \dots, P_{p-1}) \in \GG(\mu_1,\mu_2) \times \dots \times \GG(\mu_{p-1},\mu_p) \mapsto \Sigma_{P_i}^{d_i,d_{i+1}} \Sigma_{\mu_{i+1}}^{-1} \Sigma_{P_{i+1}}^{d_{i+1},d_{i+2}} \dots \Sigma_{\mu_{j-1}}^{-1} \Sigma_{P_{j-1}}^{d_{j-1},d_{j}}$ is continuous. According to Lemma \ref{lem:stab_gaussienne} and the concatenation formula in Lemma \ref{lem:compo_gaussien}, the function $\mathcal{C} : (P_1, \dots, P_{p-1}) \in \GG(\mu_1,\mu_2) \times \dots \times \GG(\mu_{p-1},\mu_p) \mapsto P_1 \circ \dots \circ P_{p-1}$ is continuous. The continuity of the composition is then a consequence of the continuity of projections. \end{proof} Applying Lemma \ref{lem:compo_gaussien}, we obtain that a weak local Markov transform of a Gaussian measure is a Gaussian measure. \begin{pro}\label{pro:Gaussianite_limite} Fix an interval $T \subset \R$, $d \geq 1$, $(\mu_t)_{t \in T} \in \left( \GG_d^* \right)^T$ and denote by $P \in \Marg((\mu_t)_{t \in T})$ a Gaussian measure. If $P' \in \PP((\R^d)^T)$ is a weak local Markov transform of $P$, then $P'$ is a Gaussian measure. \end{pro} \begin{proof} According to Remark \ref{remq:centered_enough}, we can assume that the measures $(\mu_t)_{t \in T}$ are centered. Fix $s<t \in T^2$ and a sequence $(R_n)_{n \geq 1} \in {(\SS_{[s,t]})}^{\N^*}$ such that $\lim_{n \to + \infty} P_{\{R_n\}}^{s,t} = P'^{s,t}$. According to Lemma \ref{lem:compo_gaussien}, $P_{\{R_n\}}^{s,t} \in \GG(\mu_s,\mu_t).$ It is well known that $\Marg(\mu_s, \mu_t)$ is closed and, according to Lemma \ref{lem:stab_gaussienne}, a limit of a sequence of centered Gaussian measures is a centered Gaussian measure.
Thus $\GG(\mu_s,\mu_t)$ is closed and we obtain $P'^{s,t} \in \GG(\mu_s,\mu_t).$ Hence, for each $t_1 < \dots < t_p \in T^p$, since $P'$ is a Markov measure, $P'^{t_1,\dots,t_p} = P'^{t_1,t_2} \circ \cdots \circ P'^{t_{p-1},t_p}.$ According to the concatenation formula in Lemma \ref{lem:compo_gaussien}, $P'^{t_1,\dots,t_p}$ is Gaussian, which proves the desired result. \end{proof} In the context of Gaussian measures, the hypothesis of increasing kernels in Theorem \ref{them:fonda_NB} can be removed. For this purpose, we adapt the proof of Boubel--Juillet \cite[Theorem $2.26.$]{boubel_markov-quantile_2022}. We first recall a theorem of Kellerer, which is the main tool of the proof (see \cite[Theorem $1$]{kellerer_markov-komposition_1972}). This theorem is an existence result for a Markov measure satisfying certain constraints. It generalizes the standard Kolmogorov extension theorem, which provides a Markov process fitting a consistent family of two-dimensional laws (just take $\mathcal{N}_{s,t} = \{\mu_{s,t}\}$ below). \begin{them}\label{Them:Kellerer_consistency} Let $T$ be an interval, and $ (\mu_t)_{t \in {T}} $ a family of probability measures on some Polish space \( E \). For every \( s < t \in T^2\), we consider a subset $ \mathcal{N}_{s,t} $ of $ \mathcal{P}(E^2)$. Assume that, for every $s<t \in T^2$: \begin{enumerate} \item[(1)] $ \mathcal{N}_{s,t} $ is non-empty; \item[(2)] \( \mathcal{N}_{s,t} \subset \Marg(\mu_s, \mu_t) \); \item[(3)] \( \mathcal{N}_{s,t} \) is closed for the weak topology; \item[(4)] For every \( r < s < t \in T^3\) and \( (P, P') \in \mathcal{N}_{r,s} \times \mathcal{N}_{s,t} \), \( P \cdot P' \in \mathcal{N}_{r,t} \); \item[(5)] For every \( d \geq 1 \) and \( t_1 < \ldots < t_d \in T^d \), if for every $i \in \ent{1}{d-1}$ the sequences \( (Q^{n}_{t_i, t_{i+1}})_{n \geq 1} \in \left(\mathcal{N}_{t_i, t_{i+1}}\right)^{\N^*} \) converge weakly to \( Q_{t_i, t_{i+1}} \), then the sequence \( \left(Q^{n}_{t_1, t_2} \circ \ldots \circ Q^{n}_{t_{d-1}, t_d}\right)_{n \geq 1} \) converges weakly to \( Q_{t_1, t_2} \circ \ldots \circ Q_{t_{d-1}, t_d} \). \end{enumerate} Then, there exists a Markov measure \( P \in \Marg((\mu_t)_{t \in T}) \) satisfying \( \proj^{s,t}_{\#}P \in \mathcal{N}_{s,t} \) for every \( s < t \in T^2\). \end{them} If $R =(r_1, \dots, r_p)$ and $S = (s_1, \dots, s_q)$ are such that $r_p = s_1$, we denote $(r_1, \dots,r_p,s_2,\dots,s_q)$ by $R+S \in \SS_{[r_1,s_q]}.$ \begin{them}\label{them:fonda_NB_gaussien} Consider an interval $T \subset \R$ and $(\mu_t)_{t \in T} \in \left(\GG_d^*\right)^T$ . Then, every Gaussian measure $P \in \Marg((\mu_t)_{t \in T})$ admits a weak local Markov transform. \end{them} \begin{proof} For every $s<t \in T^2$ and $\s >0$, put $ \NN_{s,t}^{\s} := \ens{P^{s,t}_{\{R\}}}{R \in \SS_{[s,t]} \text{ and } \s_R \leq \s}$. For each $s<t$, we set $\NN_{s,t} := \cap_{\s >0} \hspace{0.05cm} \overline{\NN_{s,t}^\s}.$ In order to apply Theorem \ref{Them:Kellerer_consistency}, we establish that the conditions $(1)$ to $(5)$ are fulfilled. First recall that $\GG(\mu_s,\mu_t) \subset \Marg(\mu_s,\mu_t)$ is closed and $\Marg(\mu_s,\mu_t)$ is compact, so that $\GG(\mu_s,\mu_t)$ is compact.
For every $s<t \in T^2$ and $\s >0$, according to the composition formula of Lemma \ref{lem:compo_gaussien}, we get $ \emptyset \subsetneq \NN_{s,t}^\s \subset \GG(\mu_s,\mu_t)$, which implies $\overline{\NN_{s,t}^\s} \subset \GG(\mu_s,\mu_t).$ Hence, $\NN_{s,t}$ is a decreasing intersection of non-empty compact subsets of $\GG(\mu_s,\mu_t)$, thus a non-empty compact subset of $\GG(\mu_s,\mu_t).$ This establishes $(1), (2)$ and $(3)$. To prove $(4)$, consider $Q_1 \in \NN_{s,t}$, $Q_2 \in \NN_{t,u}$ and $\s > 0.$ There exist two sequences of partitions $(R_n)_{n \geq 1} \in {(\SS_{[s,t]})}^{\N^*}$ and $(S_n)_{n \geq 1} \in {(\SS_{[t,u]})}^{\N^*}$ such that $Q_1 = \lim_{n \to +\infty} P_{\{R_n\}}^{s,t} $, $Q_2 = \lim_{n \to +\infty} P_{\{S_n\}}^{t,u}$ and $\max(\s_{R_n},\s_{S_n}) \leq \s.$ According to Lemma \ref{lem:compo_gaussien_stab}, we get $P_{\{R_n + S_n\}}^{s,u} = P_{\{R_n\}}^{s,t} \cdot P_{\{S_n\}}^{t,u} \dcv{n \to + \infty} Q_1 \cdot Q_2.$ Since $\s_{R_n+ S_n} = \max(\s_{R_n},\s_{S_n}) \leq \s$, we have $Q_1 \cdot Q_2 \in \overline{\NN_{s,u}^\s}$. This being true for all $\s > 0$, $Q_1 \cdot Q_2 \in \NN_{s,u},$ so that $(4)$ is true. For $(5)$, just recall that $\NN_{s,t} \subset \GG(\mu_s,\mu_t)$ and apply Lemma \ref{lem:compo_gaussien_stab}. Thus, Theorem \ref{Them:Kellerer_consistency} applies and there exists a Markov measure $ P' \in \PP((\R^d)^T)$ satisfying $P'^{s,t} \in \NN_{s,t}$ for every $s<t \in T^2$. This means exactly that $P'$ is a weak local Markov transform of $P$. \end{proof} Theorem \ref{them:fonda_NB_gaussien} improves Theorem \ref{them:fonda_NB} for Gaussian measures. First, our result is valid for any $(\mu_t)_{ t \in T} \in \left(\GG_d^*\right)^{T}$ with $d \geq 1$, whereas Theorem \ref{them:fonda_NB} only applies when $d=1$. Moreover, if $d = 1$, we do not ask that $P \in \Marg((\mu_t)_{t \in T})$ has increasing kernels. For a Gaussian process, having increasing kernels means having a non-negative covariance function. Indeed, for $P \in \Marg((\mu_t)_{t \in T})$ with covariance function $K$, according to the first point of Lemma \ref{lem:gauss_cond}, $P^{s,t}(dx,dy) = \mu_s(dx)k^{s,t}_x(dy)$ with $k^{s,t}_x = \NN\left( K(s,t)/K(s,s)x, K(t,t)-K(s,t)^2/K(s,s) \right)$. Recall that, for every $(x, y , \s) \in \R^2 \times \R_+^*$, $\NN(x, \s) \leq_{st} \NN(y,\s)$ if and only if $x \leq y$. Hence, $P^{s,t}$ has increasing kernels if and only if $K(s,t) \geq 0$, which proves that $P$ has increasing kernels if and only if $K$ only takes non-negative values. \section{Identification criteria of Markov transforms for Gaussian measures.}\label{sec:Identification_criteria} From now on, we restrict ourselves to real-valued Gaussian processes. The proof of the following proposition is straightforward, but the result is nevertheless crucial. It relies on the fact that, for every Gaussian measure $P$, $s<t \in \R^2$ and $R = (t_0 < \dots < t_m)$, the correlation coefficient of $P_{\{R\}}^{s,t}$ is the product of the correlation coefficients of $P^{t_0,t_1}, \dots, P^{t_{m-1},t_m}$. \begin{pro}\label{pro:formulation_calculatoire} Fix $(\mu_t)_{t \in T} \in \left(\GG_1^*\right)^T$ and denote by $P,P' \in \Marg((\mu_t)_{t \in T})$ two Gaussian processes with covariance functions $K$ and $K'$ respectively.
For every $s<t \in T^2$ and $(R_n)_{n \geq 1} = \left((t_k^n)_{k \in \ent{0}{m_n}}\right)_{n \geq 1} \in (\SS_{[s,t]})^{\N^*},$ the following conditions are equivalent: \begin{enumerate} \item $P'^{s,t} = \lim_{n \to +\infty} P_{\{R_n\}}^{s,t}$, \item $ K'(s,t) = \lim_{n \to + \infty} \frac{\prod_{k=0}^{m_n-1} K(t_k^n,t_{k+1}^n)}{{\prod_{k=1}^{m_n-1}} K(t_k^n,t_k^n)} .$ \end{enumerate} \end{pro} \begin{proof} According to Lemma \ref{lem:stab_gaussienne}, $P'^{s,t} = \lim_{n \to + \infty} P_{\{R_n\}}^{s,t}$ if and only if \ $ \Sigma_{P'^{s,t}} = \lim_{n \to + \infty} \Sigma_{P_{\{R_n\}}^{s,t}}$. Hence, writing $u_n$ for the fraction appearing in the limit of Point $2$, the composition formula of Lemma \ref{lem:compo_gaussien} implies that $P'^{s,t} = \lim_{n \to + \infty} P_{\{R_n\}}^{s,t}$ boils down to \[ \lim_{n \to + \infty} \begin{pmatrix} \Sigma_{\mu_s} & u_n \\ u_n & \Sigma_{\mu_t} \\ \end{pmatrix} = \begin{pmatrix} \Sigma_{\mu_s} & K'(s,t) \\ K'(s,t) & \Sigma_{\mu_t} \\ \end{pmatrix},\] that is $K'(s,t) = \lim_{n \to + \infty} u_n .$ \end{proof} If one writes $c_K :(s,t) \mapsto K(s,t)/\sqrt{K(s,s)K(t,t)}$ for the correlation function of $P$ and $c_{K'} :(s,t) \mapsto K'(s,t)/\sqrt{K'(s,s)K'(t,t)}$ for the correlation function of $P'$, the second point becomes \[\lim_{n \to + \infty} c_K(t_0^n,t_1^n)\cdots c_K(t_{m_n-1}^n,t_{m_n}^n) = c_{K'}(s,t).\] According to Proposition \ref{pro:formulation_calculatoire}, being a local Markov transform of a Gaussian measure is a property depending only on the covariance functions of the involved measures. Thus, in order to find a criterion to identify Markov transforms of Gaussian measures, we focus on covariance functions of Gaussian measures, \ie ,\ positive semi-definite kernels. For $T \subset \R $ and $K : T \times T \to \R$, we recall that $K$ is said to be a positive semi-definite kernel if for all $(t_1,\dots,t_m) \in T^m$, the matrix $(K(t_i,t_j))_{1 \leq i,j \leq m}$ is symmetric and positive semi-definite. Moreover, this kernel is said to be stationary if there exists $\ti{K} :\R \to \R$ such that, for every $(s,t) \in T \times T$, we have $ K(s,t) = \ti{K}(t-s)$. It is a standard fact that a function $K : T \times T \to \R$ is a positive semi-definite kernel (resp. stationary positive semi-definite kernel) if and only if there exists a Gaussian measure (resp. a stationary Gaussian measure) on $\R^T$ with covariance function $K$. For a proof, one can e.g. refer to \cite[Chapter $3$]{von_mises_mathematical_1964}. Before stating our criteria to identify Markov transforms of Gaussian measures, we define the variance, correlation and instantaneous decorrelation rate associated with a positive semi-definite kernel. \begin{defi} Let $K : T \times T \to \R$ be a positive semi-definite kernel. We denote by $v_K : t \in T \mapsto K(t,t) \in \R$ the variance function of the kernel $K$ and by $\s_K := \sqrt{v_K}$ its standard deviation function. \begin{enumerate} \item The kernel $K$ is said to be non-singular if its variance function takes positive values. \item If $K$ is non-singular, we denote by $c_K :(s,t) \mapsto \s_K^{-1}(s)\s_K^{-1}(t)K(s,t)$ its correlation function. Moreover, for every point $t \in T \setminus \{\sup T \}$, we define the decay rate of the correlation of $K$ at point $t$ as the function $L^K_t : h \in (T-t) \cap \R_+^* \mapsto h^{-1}(1-{c_K}(t,t+h)) \in \R_+$.
\item If for every $t \in T \setminus\{\sup T\}$, $ \a^K(t) := \lim_{h \to 0^+} L_t^K(h)$ exists, then $\a^K : t \in T \setminus \{\sup T \} \mapsto \a^K(t) \in \R_+$ is well-defined and we call it the instantaneous decorrelation rate of $K.$ \end{enumerate} \end{defi} If $X = (X_t)_{t \in T}$ is a centered real-valued random process with covariance function $K$, then $v_K$ is the variance function of $X$, $c_K$ is the correlation function of $X$ and $K$ is non-singular if and only if no $X_t$ is almost surely equal to $0$. Denote by $\C(X_s,X_t)$ the correlation between $X_s$ and $X_t$. Then, the map $L_t^K : h \mapsto L_t^K(h) = h^{-1} \left( \C(X_t,X_t) - \C(X_{t+h},X_t)\right)$ is the decay rate of the function $s \mapsto \C(X_s, X_t)$, that is, the decay rate of the correlation with $X_t.$ Hence, $\a^K(t)$ is the instantaneous decay rate of the correlation with $X_t$, \ie , the instantaneous decorrelation rate of $X$ at time $t.$ \begin{pro}\label{pro:alpha_proces} Consider an interval $T \subset \R$ and a non-negative measurable map $\a : T \to [0,+\infty]$. Then $K_\a : (s,t) \in T \times T \mapsto \exp\left( - \int_{\min(s,t)}^{\max(s, t)} \a(u) du \right)$ is a positive semi-definite kernel and the centered Gaussian measure with covariance function $K_\a$ is Markov. We denote this measure by $P_\a \in \PP(\R^T).$ \end{pro} \begin{proof} We first show that the two-dimensional matrices associated with $K_\a$ are positive semi-definite. Fix $s<t \in T^2$ and set $A_{s,t} := \begin{pmatrix} K_\a(s,s) & K_\a(s,t) \\ K_\a(t,s) & K_\a(t,t) \end{pmatrix}.$ Since $\Tr(A_{s,t}) = K_\a(s,s) + K_\a(t,t) = 1 + 1 = 2 > 0 $ and $\det(A_{s,t}) = 1 - \exp\left(-2\int_{s}^{t}\a(v)dv\right) \geq 0$, the matrix $A_{s,t}$ is positive semi-definite and the probability measure $\mu_{s,t} := \NN(0,A_{s,t})$ is well defined. For every $s<t<u \in T^3$, the composition formula in Lemma \ref{lem:compo_gaussien} implies $\mu_{s,t} \cdot \mu_{t,u} = \NN(0,A),$ where \[A = \begin{pmatrix} K_\a(s,s) & K_\a(s,t)K_\a(t,u)/K_\a(t,t) \\ K_\a(s,t)K_\a(t,u)/K_\a(t,t) & K_\a(u,u) \end{pmatrix}.\] Since \begin{equation}\label{eq:Markov_P_alpha} \frac{K_\a(s,t)K_\a(t,u)}{K_\a(t,t)} = \exp\left( -\int_s^t \a(v) dv\right) \exp\left( -\int_t^u \a(v) dv\right)/1 = K_\a(s,u), \end{equation} we have $A = A_{s,u}$, which implies $\mu_{s,t} \cdot \mu_{t,u} = \mu_{s,u}.$ According to the Kolmogorov extension theorem, there exists a unique Markov measure $P_\a \in \PP(\R^T)$ such that $P_\a^{s,t} = \mu_{s,t}$ for every $s<t \in T^2.$ According to the concatenation formula in Lemma \ref{lem:compo_gaussien}, for every $t_1< \cdots < t_m \in T^m$, we have $P_\a^{t_1, \dots,t_m} = P_\a^{t_1,t_2} \circ \cdots \circ P_\a^{t_{m-1},t_m} \in \GG_m$. Hence $P_\a$ is a Gaussian process and its covariance function is $K_\a.$ In particular, $K_\a$ is a positive semi-definite kernel. Finally, according to Proposition \ref{pro:criteria_gauss_Markov} and Equation \eqref{eq:Markov_P_alpha}, $P_\a$ is Markov. \end{proof} \begin{remq}\label{remq:interpretation_sto_P_alpha} \begin{enumerate} \item If $\a$ is identically equal to $0$, then $K_\a(s,t) = 1$ for every $s<t \in \R^2$. Thus $P_\a$ is the law of a completely correlated process $ (Z)_{t \in \R}$, where $Z \sim \NN(0,1).$ \item If $\a$ is identically equal to $+\infty$, then $K_\a(s,t) = 0$ for every $s<t \in \R^2$. Hence, $P_{\a}$ is the law of a family $ (X_t)_{t \in \R}$ of i.i.d.
random variables with law $\NN(0,1).$ \item The stationary Ornstein-Uhlenbeck process with parameter $\a \in \R_+^*$ is defined as the solution to the stochastic differential equation \begin{equation} \begin{cases} dX_t = -\alpha X_t dt + d B_t \\ X_0 =Z \end{cases}, \end{equation} where $(B_t)_{t \geq 0}$ is a standard Brownian motion and $Z$ is a random variable independent of $(B_t)_{t \geq 0}$ with law $\NN(0,1/2\a)$. It is well known that a stationary Ornstein-Uhlenbeck process is a centered Gaussian process with covariance function $(s,t) \mapsto \frac{1}{2\alpha}\exp\left(-\a |t-s| \right).$ Thus $P_{\a}$ is the law of $(\sqrt{2\a} X_t)_{t \geq 0}$, where $(X_t)_{t \geq 0}$ is a stationary Ornstein-Uhlenbeck process with parameter $\a$. \end{enumerate} \end{remq} We can now state our criterion to identify strong local Markov transforms. Since the expansion of $\ln(1+x)$ at point $0$ will often be used, we fix some notation. \begin{nott}\label{nott:DL_Taylor} Let $\ee : ]-1,+\infty[ \to \R$ be the function defined by \[ \ee(x) = \begin{cases} \frac{\ln(1+x)-x}{x} &\text{if } x \in ]-1,+\infty[ \setminus \{0\} \\ 0 &\text{if } x = 0 \end{cases}.\] This function is continuous, $\ee(0) = 0$ and $\ln(1+x) = x + x \ee(x)$ for every $x \in ]-1,+ \infty[.$ \end{nott} \begin{them}\label{them:Markov_stationnaire} Consider an interval $T \subset \R$ and a continuous positive semi-definite kernel $K : T \times T \to \R$ with constant variance function equal to $1.$ We denote by $P$ the centered Gaussian process with covariance function $K.$ For every $(s,t,h^*) \in T \times T \times \R_+^*$, we set $$C_{s,t}(h^*) := \ens{(v,h)}{v \in [s,t[, v +h \in [s,t], h \in ] 0, h^*]}.$$ \begin{enumerate} \item Assume $\a^K$ is well defined, continuous and, for every $ s<t \in T^2$, \begin{equation}\label{eq:conv_fini} \sup_{(v,h) \in C_{s,t}(h^*)} \left|L^K_v(h) - \a^K(v)\right| \dcv{h^* \to 0^+} 0. \end{equation} Then $P_{\a^K}$ is the strong local Markov transform of $P.$ \item Assume, for every $s<t \in T^2$, \begin{equation}\label{eq:conv_infini} \inf_{(v,h) \in C_{s,t}(h^*)} L^K_v(h) \dcv{h^* \to 0^+} + \infty. \end{equation} Then $P_{+\infty}$ is the strong local Markov transform of $P.$ \end{enumerate} \end{them} \begin{proof} In order to prove the \emph{first point}, let us fix $s<t \in T^2$ and $(R_n)_{n \geq 1} \in \left(\SS_{[s,t]}\right)^{\N^*}$ such that $ \lim_{n \to +\infty} \s_{R_n} = 0.$ Put $L := L^K $, $\a := \a^K$, $R_n = (t_0^n,\dots,t_{m_n}^n)$ for $n \geq 1$ and $h^n_k := t_{k+1}^n - t_k^n$ for $k \in \ent{0}{m_n - 1}$. According to Proposition \ref{pro:formulation_calculatoire}, we have to establish the limit \[\prod_{k=0}^{m_n-1} K(t_k^n,t_{k+1}^n) \dcv{n \to +\infty} \exp\left(-\int_s^t\a(u)du\right),\] that is, \begin{equation}\label{eq:conv_num_schema} \sum_{k=0}^{m_n-1} \ln\left(K(t_{k}^n,t_{k}^n + h_k^n)\right) \dcv{n \to + \infty} -\int_s^t \a(u) du.
\end{equation} Defining $\ee$ as in Notation \ref{nott:DL_Taylor}, for every $(v,h) \in T \times \R_+^*$ such that $v+h \in T$ and $K(v,v+h)>0$, we get $\ln(K(v,v+h)) = (K(v,v+h)-1)\left[1+\ee\left(K(v,v+h)-1\right)\right] = -h L_v(h)\left[1+\ee\left(-h L_v(h)\right)\right].$ Thus, \begin{equation}\label{eq:dl} \sum_{k=0}^{m_n-1} \ln\left(K(t_{k}^n,t_{k}^n + h_k^n)\right) = -\sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n)\big[1 + \ee\left(-h_k^n L_{t_k^n}(h_k^n)\right)\big], \end{equation} which implies that $u_n := \Big{|} \underbrace{\sum_{k=0}^{m_n-1} \ln\left(K(t_{k}^n,t_{k}^n + h_k^n)\right)}_{\leq 0} - \underbrace{\left(-\int_s^t \a(u) du \right)}_{\leq 0} \Big{|}$ satisfies \begin{align*} u_n &= \left| \sum_{k=0}^{m_n-1} -h_k^n L_{t_k^n}(h_k^n)\left[1 + \ee\left( -h_k^nL_{t_k^n}(h_k^n) \right) \right] - \left( - \int_s^t \a(u) du \right) \right| \\ &= \left| \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) - \left( \int_s^t \a(u) du \right) \right. + \left. \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) \ee\left(-h_k^n L_{t_k^n}(h_k^n) \right) \right|\\ &= \left| \left( \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) - \sum_{k=0}^{m_n-1 } h_k^n \a(t_k^n)\right) + \left( \sum_{k=0}^{m_n-1} h_k^n \a(t_k^n) - \int_s^t \a(u)du \right) \right. + \left. \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n)\ee\left(-h_k^nL_{t_k^n}(h_k^n) \right) \right| \\ &\leq \underbrace{\left|\sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) - \sum_{k=0}^{m_n-1} h_k^n \a(t_k^n) \right|}_{:= \ a_n} +\underbrace{\left| \sum_{k=0}^{m_n-1} h_k^n \a(t_k^n) - \int_s^t \a(u) du \right|}_{:= \ b_n} + \underbrace{\left|\sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) \ee\left(-h_k^n L_{t_k^n}(h_k^n)\right) \right|}_{:= \ c_n} \\ \end{align*} We are now left to prove that $\lim_{n \to + \infty} a_n = \lim_{n \to + \infty} b_n = \lim_{n \to + \infty} c_n = 0.$ For every $n \geq 1$, we have \begin{equation*}\label{eq:maj_a_n} 0 \leq a_n \leq \sum_{k = 0}^{m_n- 1} h_k^n \left| L_{t_k^n}(h_k^n) - \a(t_k^n) \right| \leq |t-s| \sup_{(v,h) \in C_{s,t}(\s_{R_n})} |L_v(h) - \a(v) |. \end{equation*} According to Hypothesis \eqref{eq:conv_fini}, $\lim_{n \to + \infty} a_n = 0.$ Since $\a$ is continuous, its Riemann sums over partitions whose mesh goes to $0$ converge to its integral, so that $\lim_{n \to + \infty} b_n = 0.$ In order to prove that $\lim_{n \to + \infty} c_n = 0$, put $C_{s,t} := \ens{(v,h)}{v \in [s,t], v +h \in [s,t], 0 \leq h}$ and $L_v(0) := \a(v)$ for every $v \in [s,t]$. We want to prove that $L$ is continuous on the compact set $C_{s,t}.$ Since $K$ is continuous, we know that $L$ is continuous on $C_{s,t} \setminus \left([s,t] \times \{0\}\right)$ and it remains to prove the continuity on $[s,t] \times \{0\}$. Let us fix $ \ee >0$, $v^* \in [s,t]$ and consider $ \a_\ee \in \R_+^*$ such that \[ \begin{cases} \sup_{ (v,h) \in C_{s,t}(\a_\ee)} |L_v(h) - \a(v)| \leq \ee/2\\ \sup_{v \in [s,t] ; |v-v^*| \leq \a_\ee} |\a(v)-\a(v^*)| \leq \ee/2 \end{cases}. \] For every $(v,h) \in C_{s,t} \cap \left( [v^*-\a_\ee, v^* + \a_\ee] \times [-\a_\ee, \a_\ee] \right) \subset C_{s,t}(\a_\ee) \cup \left([v^*-\a_\ee,v^*+\a_\ee] \times \{0\}\right)$, we have \begin{align*} |L_v(h) - L_{v^*}(0) | &= |L_v(h) - \a(v^*) | \\ &\leq |L_v(h) - \a(v)| + |\a(v) - \a(v^*)| \\ &\leq \sup_{ (v,h) \in C_{s,t}(\a_\ee)} |L_v(h) - \a(v)| + \sup_{v \in [s,t];|v-v^*| \leq \a_\ee} |\a(v)-\a(v^*)| \\ &\leq \ee/2 + \ee/2 = \ee, \end{align*} which shows the desired continuity.
Hence $M := \sup_{(v,h) \in C_{s,t}} L_v(h)$ is finite and we have \begin{equation*} 0 \leq c_n \leq \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n) \left| \ee\left(- h_k^n L_{t_k^n}(h_k^n)\right) \right| \leq |t-s| M \sup_{ |x| \leq M \s_{R_n}} |\ee(x)|, \end{equation*} which implies $\lim_{n \to + \infty} c_n = 0$ and finishes the proof of the first point. We now prove the \emph{second point}. Using the same notation as before, we are left to prove \eqref{eq:conv_num_schema} with $\a$ identically equal to $+ \infty$, that is $\lim_{n \to +\infty} \sum_{k=0}^{m_n-1} \ln\left({K}(t^n_{k},t^n_{k+1})\right) = - \infty$. As for \eqref{eq:dl}, this amounts to showing \[ \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n)\big[1 + \ee\left(-h_k^n L_{t_k^n}(h_k^n)\right)\big] \dcv{n \to + \infty} + \infty. \] By uniform continuity of $K$ on $[s,t]^2$ and continuity of $\ee$ at $0$, one can find a rank $N \geq 1$ such that for every $n \geq N$ and $k \in \ent{0}{m_n-1}$, we have $ \ee\left(-h_k^n L_{t_k^n}(h_k^n)\right) = \ee\left(K(t_k^n,t_k^n + h_k^n)-1\right) \geq -1/2$. Thus, for every $n \geq N$, \begin{align*} \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n)\left[1 + \ee \left(-h_k^n L_{t_k^n}(h_k^n)\right)\right] &\geq \sum_{k=0}^{m_n-1} h_k^n L_{t_k^n}(h_k^n)2^{-1} \geq \frac{t-s}{2}v_n, \end{align*} where $v_n :=\inf_{(v,h) \in C_{s,t}(\s_{R_n})} L_v(h).$ According to Hypothesis \eqref{eq:conv_infini}, we get $\lim_{n \to + \infty} v_n = + \infty$, which finishes the proof. \end{proof} \begin{remq}\label{remq:commentaire_hyp_markovinification} \begin{enumerate} \item We ask that the variance function of $K$ is constantly equal to $1$ and that $P$ is centered only to simplify the statement of our result and the notation used in the proof. If we just assume that $K$ is non-singular, \ie ,\ the variance function of $K$ is positive, we obtain that the Gaussian process with covariance function $K' : (s,t) \mapsto \sqrt{K(s,s)K(t,t)}K_{\a^K}(s,t)$ and same mean function as $P$ is the strong local Markov transform of $P$. This is a straightforward consequence of Remark \ref{remq:centered_enough}. \item\label{point:cas_stationnaire} In the case of a stationary kernel $K : (s,t) \in T^2 \mapsto \ti{K}(t-s)$, the value of the map $L_t^K : h \mapsto h^{-1}\left(1-\ti{K}(0)^{-1}\ti{K}(h)\right)$ does not depend on $t$ and Equations \eqref{eq:conv_fini} and \eqref{eq:conv_infini} become $\lim_{h \to 0^+} h^{-1}(1-\ti{K}(h)) = \a \in [0,+\infty]$. In this case, Theorem \ref{them:Markov_stationnaire} states that, if $K$ is continuous, $\ti{K}(0) = 1$ and $\lim_{h \to 0^+} h^{-1}(1-\ti{K}(h)) = \a \in [0,+\infty],$ then $P_{\a}$ is the strong local Markov transform of $P.$ \end{enumerate} \end{remq} \begin{them}\label{them:Markov_stationnaire_bis} Consider an interval $T$ and a stationary positive semi-definite kernel $K :(s,t) \in T^2 \mapsto \ti{K}(t-s) \in \R$ that is continuous and satisfies $\ti{K}(0) = 1.$ We denote by $P$ a centered Gaussian measure with covariance function $K$ and set $L^K : h \in \R_+^* \mapsto h^{-1}(1-\ti{K}(h)).$ \begin{enumerate} \item\label{point:Markov_stationnaire_limite} If $\lim_{h \to 0^+} L^K(h) = \a \in [0, +\infty],$ then $P_\a$ is the strong local Markov transform of $P$. \item\label{point:Markov_stationnaire_va} Assume $\a \in [0,+\infty]$ is a cluster point of $L^K$ at $0^+$ and consider a sequence of positive numbers $(s_n)_{n \geq 1}$ converging to zero that satisfies $\lim_{n \to + \infty} L^K(s_n) = \a$. For every $s<t \in T^2$, writing $R_n^{s,t} := \left((s_n \Z) \cap ]s,t[\right) \cup \{s,t\}$, we obtain $\lim_{n \to + \infty} P_{\{R_n^{s,t}\}}^{s,t} = P_{\a}^{s,t}$.
In particular, $P_\a$ is a weak local Markov transform of $P.$ \end{enumerate} \end{them} \begin{proof} The first point is only a restatement of Point \ref{point:cas_stationnaire} of Remark \ref{remq:commentaire_hyp_markovinification} and we are left with the proof of the second point. We set $L := L^K$ and fix $s<t \in T^2$. Let $(s_n)_{n \geq 1}$ and $R_n^{s,t}$ be as in the statement. For every $n \geq 1$, we write $ R_n^{s,t} := (t_0^n, \dots, t_{m_n}^n) \in \SS_{[s,t]}.$ For every $k \in \{1, \dots, m_n-2\}$, we have \[\ln(\ti{K}(s_n)) = \ln(1 + [\ti{K}(s_n)-1]) = -s_nL(s_n)\left[1+ \ee\left(\ti{K}(s_n)-1\right)\right] = -(t_{k+1}^n - t_k^n)L(s_n)\left[1+ \ee\left(\ti{K}(s_n)-1\right)\right], \] where $\ee$ is defined in Notation \ref{nott:DL_Taylor}. Hence, \begin{align*} \sum_{k=0}^{m_n-1} \ln\left(K(t_{k}^n,t_{k+1}^n)\right) &= \sum_{k=0}^{m_n-1} \ln\left(\ti{K}(t_{k+1}^n-t_k^n)\right) \\ &= \ln(\ti{K}(t_1^n-s)) + \ln(\ti{K}(t-t_{m_n-1}^n)) + \sum_{k=1}^{m_n-2} \ln(\ti{K}(s_n)) \\ &= \ln(\ti{K}(t_1^n-s)) + \ln(\ti{K}(t-t_{m_n-1}^n)) - \sum_{k=1}^{m_n-2}(t_{k+1}^n - t_k^n)L(s_n)\left[1+ \ee\left(\ti{K}(s_n)-1\right)\right] \\ &= \ln(\ti{K}(t_1^n-s)) + \ln(\ti{K}(t-t_{m_n-1}^n)) - (t_{m_n-1}^n-t_1^n) L(s_n)\left[1+ \ee\left(\ti{K}(s_n)-1\right)\right].\\ \end{align*} Since $\ti{K}(0) = 1$ and $K$ is continuous, we obtain $ \lim_{n \to +\infty} \sum_{k=0}^{m_n-1} \ln\left(K(t_{k}^n,t_{k+1}^n)\right) = 0 + 0 - \a (t-s) = -\a(t-s)$, which implies the result according to Proposition \ref{pro:formulation_calculatoire}. \end{proof} We now apply Theorem \ref{them:Markov_stationnaire} to two different examples: We shall prove that they admit a strong local Markov transform and then compute it. We begin with the fractional Brownian motion. \begin{pro} Consider $H \in ]0,1[$ and denote by $\BB_H$ the fractional Brownian motion with Hurst parameter $H$, \ie ,\ the centered Gaussian measure with covariance function $K_H : (s,t) \in \R_+^* \times \R_+^* \mapsto \frac{1}{2}(|t|^{2H} + |s|^{2H} - |t-s|^{2H})$. The centered Gaussian measure $\BB_H'$ with covariance function \[K_{H}' : (s,t) \in \R_+^* \times \R_+^* \mapsto \begin{cases} s^{H}t^H\mathds{1}_{s = t} & \text{if } H < 1/2 \\ \min(s,t) &\text{if } H=1/2 \\ s^{H}t^{H}& \text{if } H > 1/2 \end{cases} \] is the strong local Markov transform of $\BB_H.$ \end{pro} \begin{proof} For $(s,t) \in \R_+^* \times \R_+^*$, \begin{equation}\label{eq:mbf_is_sta} K_H(s,t) = s^{H}t^{H} \cdot \frac{1}{2}\left( \left( \frac{t}{s} \right)^{H} + \left( \frac{s}{t} \right)^{H} - \left| \left( \frac{t}{s} \right)^{1/2} - \left( \frac{s}{t} \right)^{1/2} \right|^{2H} \right) = u(s)u(t) \ti{K}(\phi(t)-\phi(s)), \end{equation} where $u : t \in \R_+^* \mapsto t^{H} \in \R$, $\phi : s \in \R^*_+ \mapsto \frac{\ln(s)}{2}$ and $\ti{K} : x \mapsto \frac{1}{2} \left( e^{2H x} + e^{-2H x} - |e^x - e^{-x}|^{2H} \right).$ Let us define $P$ as the stationary centered Gaussian process with covariance function $K :(s,t) \mapsto \ti{K}(|t-s|).$ Using the Taylor expansion of the exponential function, we get \begin{equation*} h^{-1}(1-\ti{K}(h) ) = o(1) + (2h)^{2H - 1}(1+ o(1))^{2H} \dcv{h \to 0^+} \begin{cases} +\infty &\text{if } H < 1/2 \\1 &\text{if } H= 1/2 \\ 0 &\text{if }H > 1/2 \end{cases} =: \b. \end{equation*} Thus, according to Theorem \ref{them:Markov_stationnaire_bis}, $P_\b$ is the strong local Markov transform of $P$.
According to Equation \eqref{eq:mbf_is_sta} and Remark \ref{remq:iden_mark_trans_def}, the centered Gaussian process with kernel $(s,t) \mapsto u(s)u(t) \exp(-\b|\phi(t)-\phi(s)|)$ is the strong local Markov transform of $\BB_H$. Finally, it is straightforward that $u(s)u(t) \exp(-\b|\phi(t)-\phi(s)|) = K'_H(s,t)$, which gives the desired result. \end{proof} We now continue our sequence of examples with a class of non-stationary processes. \begin{pro}\label{pro:mark_bruit_gaussien} Consider two intervals $T \subset \R$, $J \subset \R_+^*$, a Brownian motion $(B_u)_{u \geq 0}$ on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a family $(k_t)_{t \in T}$ of elements of $L^2(J)$ such that $\int_{J} k_t(u)^2du = 1$ for every $t \in T$. Set $X_t := \int_J k_t(u) dB_u$ for every $t \in T.$ Then, the law $P \in \PP(\R^T)$ of $(X_t)_{t \in T}$ is a Gaussian measure with covariance function $K : (s,t) \in T \times T \mapsto \int_J k_s(u)k_t(u) du$. Moreover, set $C_{s,t}(h^*) := \ens{(v,h) \in [s,t[ \times ]0,h^*]}{v+h \in [s,t]}$ for every $(s,t,h^*) \in T \times T \times \R_+^*$ and assume: \vspace{0.2cm} \begin{description} \leftskip=0.3in \item[$(H_1)$ ] $\exists f : T \times J \to \R, \forall (s,t,u) \in T \times T \times J$, \begin{equation} \sup_{(v,h) \in C_{s,t}(h^*)} |h^{-1}(k_v(u)- k_{v+h}(u)) - f(v,u)| \dcv{ h^* \to 0^+} 0 \ ; \end{equation} \item[$(H_2)$ ] $\exists g \in L^1(J), \exists h_0 \in \R_+^*, \forall (h,t,u) \in ]0,h_0] \times T \times J,\left| h^{-1}(k_{t}(u)-k_{t+h}(u)) - f(t,u) \right| \leq g(u)$; \item[$(H_3)$ ] $\a : t \in T \mapsto \int_J k_t(u)f(t,u)du$ is continuous. \end{description} \leftskip=0in Then $P_\a$ is the strong Markov transform of $P.$ \end{pro} \begin{proof} The hypothesis $k_t \in L^2(J)$ ensures that $(X_t)_{t \in T}$ is well defined. The measure $P$ is clearly centered and Gaussian and, for every $(s,t) \in T^2$, \[K(s,t) = \E(X_sX_t) = \E\left(\int_J k_s(u)dB_u \int_J k_t(u)dB_u\right) = \E \left( \int_J k_s(u)k_t(u) d \textlangle B , B \textrangle_u \right)= \int_J k_s(u)k_t(u) du.\] Moreover, for $(v,h) \in C_{s,t}(h^*)$, $L^K_v(h) = h^{-1}\left(1-K(v,v+h) \right) = \int_J h^{-1} (k_v(u) - k_{v+h}(u))k_v(u) du.$ Hence, \begin{align*} |L^K_v(h)-\a(v)| &= \left| \int_J h^{-1}\left(k_v(u)-k_{v+h}(u)\right) k_v(u) du - \int_J f(v,u) k_v(u)du \right| \\ &\leq \int_J \left| h^{-1}(k_v(u)-k_{v+h}(u)) - f(v,u)\right| |k_v(u) |du \\ &\leq \left( \sup_{w \in [s,t]} |k_w(u) |\right) \int_J \sup_{(w,h) \in C_{s,t}(h^*)} \left\{ \left| h^{-1}(k_w(u)-k_{w+h}(u)) - f(w,u) \right| \right\} du, \end{align*} which implies \begin{equation}\label{eq:majoration_Lebesgue} \sup_{(v,h) \in C_{s,t}(h^*)} |L^K_v(h)-\a(v)| \leq \left( \sup_{w \in [s,t]} |k_w(u) | \right)\int_J \sup_{(w,h) \in C_{s,t}(h^*)} \left\{ \left| h^{-1}(k_w(u)-k_{w+h}(u)) - f(w,u) \right| \right\} du. \end{equation} According to $(H_1)$ and $(H_2)$, Lebesgue's dominated convergence theorem applies to the right-hand side of Equation \eqref{eq:majoration_Lebesgue} so that $\lim_{h^* \to 0^+} \sup_{(v,h) \in C_{s,t}(h^*)} |L^K_v(h)-\a(v)| = 0$. According to $(H_3)$, $\a$ is continuous and we have $K(t,t) = \int_{J} k_t(u)^2 du =1$. Hence, Theorem \ref{them:Markov_stationnaire} applies, which proves the result. \end{proof} \begin{ex} Let us apply Proposition \ref{pro:mark_bruit_gaussien} to the law $P$ of $(X_s)_{s > 0} := \left( \int_0^{+\infty} \sqrt{s}e^{-\frac{su}{2}} dB_u \right)_{s>0}$.
In order to satisfy $(H_2)$, we rather work on $(X_t)_{t \in [a,b]}$ for a fixed couple $a<b \in \left(\R_+^* \right)^2.$ Fix $s<t \in [a,b]^2$ and set $k_v(u) := \sqrt{v} e^{-vu/2}$ for every $(v,u) \in [s,t] \times \R_+^*$. Using the Taylor expansion of the exponential map and of $x \in ]-1,+\infty[ \mapsto \sqrt{1+x}$, we get \begin{align*} h^{-1}(k_v(u)-k_{v+h}(u)) &= k_v(u) \left[h^{-1} \left( 1- \sqrt{1+\frac{h}{v}} e^{\frac{-hu}{2}}\right) \right]\\ &= \frac{1}{2} k_v(u)\left( u- \frac{1}{v} +\theta(h) \right), \end{align*} where $\theta : \R \to \R$ satisfies $\lim_{h \to 0^+} \theta(h) = 0.$ Setting $f(v,u) := \frac{1}{2}k_v(u) \left(u-1/v \right) $, we get \begin{equation*} \left| h^{-1}(k_v(u)-k_{v+h}(u)) - f(v,u) \right| = \left|\frac{1}{2} k_v(u) \theta(h)\right| \leq \frac{\sqrt{t}}{2} |\theta(h)|, \end{equation*} for every $(v,u,h) \in [s,t] \times \R_+ \times \R_+^*$. Hence, Hypothesis $(H_1)$ is satisfied. Let $h_0 \in \R_+^*$ be such that $\sup_{ h \in ]0,h_0]}{|\theta(h)|} \leq 1.$ For every $h \in ]0,h_0]$ and $(v,u) \in [a,b] \times \R_+^*$, we have \begin{equation*} \left|h^{-1}(k_v(u)-k_{v+h}(u))-f(v,u) \right| = \left|\frac{1}{2} k_v(u) \theta(h)\right| \leq \frac{1}{2}\sqrt{b} e^{-\frac{au}{2}}=: g(u), \end{equation*} which shows $(H_2).$ Since $f(v,u)k_v(u) = \frac{1}{2}(u ve^{-vu}-e^{-vu} )$, we have $\a(v) = \int_0^{+\infty} f(v,u) k_v(u) du = \int_{0}^{+\infty} (uve^{-vu} - e^{-vu} ) du = 1-1= 0$ for every $v \in [a,b]$. Since $(H_3)$ is obviously true, Proposition \ref{pro:mark_bruit_gaussien} applies and the Markov transform of the law of $(X_t)_{t \in [a,b]}$ is the law of the completely correlated process $(Z)_{t \in [a,b]}$, where $Z \sim \NN(0,1).$ This being true for every $a<b \in \left(\R_+^* \right)^2$, the Markov transform of the law of $(X_t)_{t \in t>0}$ is $(Z)_{t >0}$. Applying Remark \ref{remq:centered_enough}, the strong local Markov transform of the law of $\left( \int_0^{+ \infty} e^{-tu}dB_u \right)_{t>0}$ is the law of $\left(Z/\sqrt{2t} \right)_{t>0}.$ \end{ex} \section{Default of uniqueness for a weak Markov transform.}\label{sec:counterexample} As already noticed in Remark \ref{remp:uniqueness_markov_transform}, strong local Markov transforms are unique. However, we shall see now that this fails in the case of weak local Markov transforms. In this section, our aim is to construct a (stationary) Gaussian measure which has several weak local Markov transforms. The guiding result for our construction is a theorem of Bochner which characterizes the continuous positive semi-definite stationary kernels. For a proof, we refer to \cite[Paragraph $8$]{bochner_monotone_1933} (or \cite[Page $208$]{gihman_theory_1974} for a more recent presentation). We denote by $\hat{\mu} : t \in \R \mapsto \int_{\R} \exp(itx) \mu(dx)$ the Fourier transform of a positive finite measure $\mu \in \MM_+(\R).$ \begin{them}[Bochner\footnote{In general, Theorem \ref{them:Bochner} is stated for $\mathbb{C}$-valued kernels, but it is straightforward that this implies our reformulation with $\R$-valued kernels.}]\label{them:Bochner} Let $K : \R^2 \to \R$ be a symmetric map. The following conditions are equivalent: \begin{enumerate} \item There exists $\mu \in \MM_+(\R)$ such that $K(s,t) = \hat{\mu}(t-s)$ for every $(s,t) \in \R^2.$ \item The function $K$ is a continuous positive semi-definite stationary kernel. \end{enumerate} The measure $\mu$ is then unique and is called the spectral measure associated with $K$. 
\end{them} We now claim that, in order to obtain a stationary Gaussian measure that has not a unique weak local Markov transform, it is sufficient to find a symmetric probability measure $\mu \in \PP(\R)$ with a Fourier transform whose growth rate at $0^+$ admits several cluster points. To justify this, notice that if $\mu$ is a real-valued symmetric probability measure, then the function $K : (s,t) \in \R^2 \mapsto \hat{\mu}(t-s)$ is real valued and symmetric. According to Theorem \ref{them:Bochner}, this implies that $K$ is a stationary positive semi-definite kernel, with constant variance equal to $\hat{\mu}(0) = \mu(\R) = 1$. Thus, for every cluster point $\a \in [0,+\infty]$ of the decay rate of $\hat{\mu}$ at point $0^+$, according to Theorem \ref{them:Markov_stationnaire_bis}, the centered Gaussian process $P$ with covariance function $K$ admits $P_{\a}$ as weak local Markov transform. In our construction, the set of cluster points of the decay rate of the Fourier transform of $\mu$ at point $0^+$ will be $[0,+\infty]$, which guarantees an infinity of weak local Markov transforms. Denoting by $\emph{\sgn} : \R \to \{-1,0,1\}$ the usual sign function, our strategy is to consider a symmetric probability measure of the form $\g := \sum_{|k| \geq 2 } a_{|k|} \d_{\sgn(k)b_{|k|}}$ whose Fourier transform has an infinite decay rate at $0^+$. Then, to obtain our measure $\mu$, we \enquote{mix} $\g$ with $\d_0$, whose Fourier transform has a decay rate converging to $0$ at point $0^+$. More precisely, we will recursively construct a set $S \subset \left(\N \cap [2,+\infty[ \right)$ and put \[y_k := \begin{cases} b_k &\text{if } k \in S \\ 0 &\text{otherwise} \end{cases},\] in order to define our measure $\mu$ by \[\mu := \sum_{|k| \geq 2 } a_{|k|} \d_{\sgn(k)y_{|k|}} = \sum_{|k| \in S} a_{|k|}\d_{\sgn(k)b_{|k|}} + \left( \sum_{|k| \notin S} a_{|k|} \right) \d_0.\] Since \begin{equation}\label{eq:trans_fourier_mu_gamma} \begin{cases}\hat{\g}(t) = \sum_{k \geq 2} a_k(e^{itb_k} + e^{-it b_k}) = 2 \sum_{ k \geq 2} a_k \cos(tb_k)\\ \hat{\mu}(t) = \sum_{k \geq 2} a_k(e^{ity_k} + e^{-it y_k}) = 2 \sum_{ k \geq 2} a_k \cos(ty_k) \end{cases}, \end{equation} the decay rates of $\hat{\g}$ and $\hat{\mu}$ at $0$ are given by \begin{equation}\label{eq:calcul_slope} \begin{cases} t^{-1}(1-\hat{\g}(t)) = 2 t^{-1}\sum_{k \geq 2} a_k \left( 1- \cos(t b_k) \right)\\ t^{-1}(1-\hat{\mu}(t)) = 2 t^{-1}\sum_{k \in S} a_k \left( 1- \cos(t b_k) \right) \end{cases}. \end{equation} Put $a := 1/2$, $b := 3$ and $a_k := a^k$, $b_k := b^k$ for every $k \geq 2.$ We have $ab>1$ and \eqref{eq:trans_fourier_mu_gamma} shows that $\hat{\g}/2$ is the well-known continuous nowhere differentiable Weierstrass function \cite{weierstras_uber_1872}. In Point \ref{point:cv_derivee_Fourier} of Lemma \ref{lem:va_fourier_preli} below, we rely on an article of Hardy \cite{hardy_weierstrasss_1916} to prove $\lim_{t \to 0^+} t^{-1}(1-\hat{\g}(t)) = + \infty$. We are left to find a recursive construction of $S$ such that the lacunary series $t^{-1}(1-\hat{\mu}(t))$ of $t^{-1}(1-\hat{\g}(t))$ admits the elements of $[0,+\infty]$ as cluster points. Points \ref{point:cv_derivee_Fourier_queue}-\ref{point:cv_derivee_Fourier_modif} of Lemma \ref{lem:va_fourier_preli} are useful to define the sequence $(y_k)_{k \geq 1}$ and to show that the resulting measure $\mu$ has the wanted property. The construction itself is done in Proposition \ref{pro:va_fourier}, using the tools of Lemma \ref{lem:va_fourier_preli}. 
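Before proving these statements, let us add a small numerical illustration, which plays no role in the arguments below. The following sketch (in Python; the truncation of the series at $k=80$ is our own choice, harmless since the weights $a^k = 2^{-k}$ are far below machine precision at that index) evaluates the decay rate $t^{-1}(1-\hat{\g}(t)) = 2t^{-1}\sum_{k \geq 2} a^k\left(1-\cos(tb^k)\right)$ along the lacunary scale $t = b^{-m}$:
\begin{verbatim}
import math

a, b = 0.5, 3.0   # weights a_k = a^k and frequencies b_k = b^k, k >= 2

def decay_rate(t, kmax=80):
    # 2 t^{-1} sum_{k=2}^{kmax-1} a^k (1 - cos(t b^k)), truncated at kmax
    s = sum(a**k * (1.0 - math.cos(t * b**k)) for k in range(2, kmax))
    return 2.0 * s / t

for m in range(2, 14):
    t = b**(-m)   # t -> 0^+ along the lacunary scale
    print(f"t = 3^(-{m}):  decay rate = {decay_rate(t):.2f}")
\end{verbatim}
The printed values grow steadily as $m$ increases, in accordance with Point \ref{point:cv_derivee_Fourier} of Lemma \ref{lem:va_fourier_preli}; setting $y_k = 0$ on a block of indices simply removes the corresponding terms from the sum, which is the mechanism exploited in the construction of Proposition \ref{pro:va_fourier}.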
\begin{lem}\label{lem:va_fourier_preli} Put $a := 1/2$, $b := 3$ and $a_k := a^k$, $b_k := b^k$ for every $k \geq 2.$ \begin{enumerate} \item Then $\lim_{x \to + \infty} x \sum_{k \geq 2} a_k \left(1 - \cos \left( \frac{b_k}{x} \right) \right) = +\infty.$ \item\label{point:cv_derivee_Fourier_queue} For any given sequence $(x_k)_{k \geq 1} \in \R^{\N^*}$, $\lim_{x \to +\infty} x \sum_{k \geq \lfloor x \rfloor +1} a_k \left(1 - \cos \left( \frac{x_k}{x} \right) \right) = 0.$ \item\label{point:cv_derivee_Fourier_milieu} For $p \geq 1$ and $((n_i,m_i))_{i \in \ent{1}{p}} \in {\left(\N^2\right)}^p,$ the map \[g_{(n_1,m_1),\dots,(n_p,m_p)} : x \in \R_+^* \mapsto x \sum_{l=1}^p \sum_{k=n_l}^{m_l} a_k \left( 1-\cos\left(\frac{b_k}{x}\right) \right) \in \R_+\] satisfies $\lim_{x \to + \infty} g_{(n_1,m_1),\dots,(n_p,m_p)}(x) = 0.$ \item\label{point:cv_derivee_Fourier_finale} For $n \geq 2$, the map \[f_n : x \in \R \mapsto x\sum_{k = n}^{\lfloor x \rfloor } a_k \left( 1 - \cos \left( \frac{b_k}{x} \right) \right) \in \R_+\] satisfies $ \lim_{x \to + \infty} f_n(x) = +\infty.$ \item\label{point:cv_derivee_Fourier_modif} Consider a increasing sequence of integers $(n_k)_{k \geq 1}$ such that $n_0 = 2$ and define the sequence $(y_k)_{k \geq 0}$ by \[ y_k = \begin{cases} b_k & \text{if } k \in \cup_{i \geq 0} \llbracket{n_{2i}},{ n_{2i+1}}\llbracket \\ 0 &\text{if } k \in \cup_{i \geq 0} \llbracket n_{2i+1}, n_{2(i+1)} \llbracket\end{cases}.\] Then the map \[f : x \in \R_+^* \mapsto x\sum_{k=2}^{ \lfloor x \rfloor } a_k \left( 1 - \cos \left( \frac{y_k}{x} \right) \right)\] satisfies \[f(x) = \begin{cases} g_{(n_0,n_1-1),\dots,(n_{2(i-1)},n_{2i-1}-1)}(x) + f_{n_{2i}}(x) & \text{if } x \in [n_{2i},n_{2i+1}[ \\ g_{(n_0,n_1-1),\dots,(n_{2i},n_{2i+1}-1)}(x) &\text{if } x \in [n_{2i+1},n_{2(i+1)}[ \end{cases}.\] \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item\label{point:cv_derivee_Fourier} According to \cite[Points $2.41$-$2.42$]{hardy_weierstrasss_1916}, $ \lim_{h \to 0^+} h^{-1} \sum_{k \geq 0} a^k (1- \cos(b^k \pi h)) = + \infty$. Since $\lim_{h \to 0^+} h^{-1} a^0 (1-\cos(b^0\pi h) = \lim_{h \to 0^+} h^{-1} a^1 (1-\cos(b^1\pi h) = 0$, the change of variable $x^{-1} = \pi h$ gives the wanted result. \item For $x \geq 2$, $ {\small 0 \leq x \sum_{k \geq \lfloor x \rfloor +1} a_k \left( 1 - \cos\left( \frac{x_k}{x} \right) \right) \leq 2x \sum_{k \geq \lfloor x \rfloor +1} a^k = 2x\frac{a^{\lfloor x \rfloor + 1}}{1-a}} ,$ which proves the result. \item For $x \in \R^*_+$, $ 0 \leq x \sum_{k=1}^p \sum_{l=n_k}^{m_k} a_l \left( 1-\cos\left(\frac{b_l}{x}\right) \right) \leq x \sum_{k=1}^p \sum_{l=n_k}^{m_k} a_l \frac{b_l^2/x^2}{2} = \frac{1}{x} \sum_{k=1}^p \sum_{l=n_k}^{m_k} \frac{a_l b_l^2}{2},$ which proves the result. \item According to Points \ref{point:cv_derivee_Fourier}, \ref{point:cv_derivee_Fourier_queue} and \ref{point:cv_derivee_Fourier_milieu}, the equality \[f_n(x) = x\sum_{k \geq 2}^{\infty} a_k \left(1 - \cos \left( \frac{b_k}{x} \right) \right) - x\sum_{k=2}^{n-1} a_k \left(1 - \cos \left( \frac{b_k}{x} \right) \right) - x \sum_{k \geq \lfloor x \rfloor + 1} a_k \left(1 - \cos \left( \frac{b_k}{x} \right) \right) \] implies $\lim_{n \to + \infty} f_n(x) = + \infty -0-0 = + \infty.$ \item The computation is straightforward. 
\end{enumerate} \end{proof} \begin{pro}\label{pro:va_fourier} There exists a sequence $(y_k)_{k \geq 1} \in \R_+^{\N^*}$ such that the cluster points of the decay rate at point $0^+$ of the Fourier transform of $\mu := \sum_{|k| \geq 2 } 2^{-|k|} \d_{\sgn(k)y_{|k|}}$ are the elements of $[0,+\infty]$. \end{pro} \begin{proof} We define the functions $f_n$ and $g_{(n_1,m_1), \dots , (n_p,m_p)}$ as in Lemma \ref{lem:va_fourier_preli}. According to Points \ref{point:cv_derivee_Fourier_milieu} and \ref{point:cv_derivee_Fourier_finale} of Lemma \ref{lem:va_fourier_preli}, we can define a sequence $(n_i)_{i \geq 2}$ by \[\accol{n_0 := 2 \\ \forall i \geq 0, n_{2i+1} := \inf \ens{n > n_{2i} }{ f_{n_{2i}}(n-1) > i }\\ \forall i \geq 0, n_{2(i+1)} := \inf \ens{n > n_{2i+1} }{g_{(n_0,n_1-1),\dots,(n_{2i},n_{2i+1}-1)}(n-1) < 1/i}}.\] As in Point \ref{point:cv_derivee_Fourier_modif} of Lemma \ref{lem:va_fourier_preli}, we associate a sequence $(y_k)_{k \geq 2}$ and a function $f$ to our sequence $(n_i)_{i \geq 2}$. Since $n_{2i+1} - 1 \in [n_{2i},n_{2i+1}[$ and $n_{2(i+1)} -1 \in [n_{2i+1}, n_{2(i+1)}[$, according to this same point, $$f(n_{2i+1}-1) = g_{(n_0,n_1-1),\dots ,(n_{2(i-1)},n_{2i-1}-1)}(n_{2i+1}-1) + f_{n_{2i}}(n_{2i+1}-1)\geq f_{n_{2i}}(n_{2i+1}-1) > i$$ and $$f(n_{2(i+1)}-1) = g_{(n_0,n_1-1),\dots,(n_{2i},n_{2i+1}-1)}(n_{2(i+1)}-1) < 1/i$$ for $i \geq 1$. Set $\mu := \sum_{|k| \geq 2 } 2^{-|k|} \d_{\sgn(k)y_{|k|}}$ and denote by $L :t \in \R_+^* \to t^{-1}(1-\hat{\mu}(t))$ the decay rate of $\hat{\mu}$ at $0^+$. As in Equation \eqref{eq:calcul_slope}, \[L(1/x) = 2x\sum_{k \geq 2} a_k \left( 1-\cos(y_k/x) \right) = f(x) + g(x),\] where $g(x) := \sum_{k \geq \lfloor x \rfloor + 1} a_k \left( 1- \cos(y_k/x) \right) $. According to Point \ref{point:cv_derivee_Fourier_queue} of Lemma \ref{lem:va_fourier_preli}, $\lim_{x \to +\infty} g(x) = 0.$ Thus, writing $s_i := \frac{1}{n_{2i+1}-1}$ and $t_i := \frac{1}{n_{2(i+1)}-1}$, we get $ L(s_i) = f(n_{2i+1}-1) + g(n_{2i+1}-1) \dcv{ i \to + \infty} + \infty + 0 = +\infty$ and $ L(t_i) = f(n_{2(i+1)}-1) + g(n_{2(i+1)}-1) \dcv{i \to + \infty} 0$, so that $0 $ and $+\infty$ are cluster points of $L$ at $0^+.$ Since $L$ is continuous, according to the intermediate value theorem, any elements of $[0,+\infty]$ is a cluster points of $L$. Since $L$ is non-negative, all its cluster points are elements of $[0,+\infty]$ which finishes the proof. \end{proof} The following Remark will be used in the construction and further in the article. \begin{remq}\label{rq:pos_markov} Let $T$ be an interval and $P \in \PP\left(\R^T\right)$ a Gaussian measure with non-singular covariance function $K : T \times T \to \R$. In \cite[Theorem $1$]{mehr_certain_1965}, the authors proved that if $P$ satisfies the Markov property and $K$ is continuous, then there exists two functions $f,g$ satisfying $K(s,t) =f(s)g(t)$ for every $s<t \in \R^2$. Since, for every $t \in \R$, $K(t,t) = f(t)g(t) > 0$, both $f$ and $g$ do not vanish. Since $f$ (resp. $g$) is continuous and does not vanish $f$ (resp. $g$) is either positive or negative. As $f(t) g(t) >0$ for every $t \in T$, either $f$ and $g$ are both positive or $f$ and $g$ are both negative. This implies $K$ is positive. Hence, every non-singular continuous covariance function of a Gaussian measure that satisfies the Markov property is positive. \end{remq} We can now construct a stationary Gaussian measure with an infinity of weak local Markov transforms. 
\begin{them}\label{them:contre_exemple} There exists a centered stationary Gaussian measure $P \in \PP\left(\R^\R\right)$ whose set of weak local Markov transforms is the set of all the measures $P'$ satisfying: \begin{enumerate} \item The measure $P'$ is centered Gaussian with constant variance function equal to $1$; \item The covariance function of $P'$ is non-negative; \item The measure $P'$ is Markov. \end{enumerate} \end{them} \begin{proof} According to Proposition \ref{pro:va_fourier}, we can find a symmetric probability measure $\mu$ such that the cluster points of the decay rate of $\hat{\mu}$ at $0^+$ are the elements of $[0,+\infty]$. Since $\mu$ is symmetric, the function $K : (s,t) \in \R^2 \mapsto \hat{\mu}(t-s)$ is real valued and symmetric. Hence, according to Theorem \ref{them:Bochner}, $K$ is a stationary positive semi-definite kernel, with constant variance equal to $\hat{\mu}(0) = \mu(\R) = 1$. Based on Theorem \ref{them:Bochner}, the Gaussian measure $P$ with covariance function $ K : (s,t) \in \R \times \R \mapsto \hat{\mu}(t-s)$ is a well defined stationary Gaussian measure such that $K(t,t) = \hat{\mu}(0) = 1$ for every $t \in \R$. We denote by $P$ the centered Gaussian measure with covariance function $K.$ Let $P'$ be a Gaussian and Markov measure with non-negative covariance function $K'$ and constant variance equal to $1$. In order to show that $P'$ is a weak local Markov transform of $P$, let fix $s<t \in \R^2$. Since $0 \leq K'(s,t) \leq K'(s,s)^{1/2}K'(t,t)^{1/2} = 1$, $\a := -|t-s|^{-1} \ln(K'(s,t))$ is well defined and we have $\a \in [0,+\infty]$. Moreover $K'(s,t) = \exp(-\a |t-s|) = K_\a(s,t)$. Since $\a$ is a cluster point of the decay rate of $\hat{\mu}$ at $0^+$, according to Point \ref{point:Markov_stationnaire_va} of Theorem \ref{them:Markov_stationnaire_bis}, $P_\a$ is a weak local Markov transform of $P$. Hence, there exists $(R_n)_{n \geq 1} = ((t_k^n)_{n \in \ent{1}{m_n}}) \in \left(\SS_{[s,t]}\right)^{\N^*}$ such that $\lim_{n \to +\infty} \s_{R_n} = 0$ and $\lim_{n \to + \infty} K(s,t_1^n) \dots K(t_{m_n-1}^n,t) = K_\a (s,t) = K'(s,t).$ According to Proposition \ref{pro:formulation_calculatoire}, $P'$ is a weak local Markov transform of $P.$ Conversely, let $P'$ be a weak local Markov transform of $P$ with covariance function $K'$. We want to show that $P'$ is a Gaussian and Markov measure with non-negative covariance function and constant variance function equal to $1$. First $P'$ is Markov by definition and Gaussian according to Proposition \ref{pro:Gaussianite_limite}. As $P'$ has the same variance function as $P$, its variance is constant equal to $1.$ Since $P'$ is a weak local Markov transform of $P$, there exists $(R_n)_{n \geq 1} = ((t_k^n)_{k \in \ent{1}{m_n}}) \in \left(\SS_{[s,t]}\right)^{\N^*}$ such that $\lim_{n \to +\infty} K(t_0^n,t_1^n) \cdots K(t_{m_n-1}^n,t_{m_n}^n) = K'(s,t).$ According to Remark \ref{rq:pos_markov}, $K$ is positive, which implies $K'(s,t) \geq 0$ and finishes the proof. \end{proof} In particular for a measure $P$ as in Theorem \ref{them:contre_exemple} and any measurable function $\a : \R \to [0,+\infty]$, $P_\a$ is a weak local Markov transform of $P.$ As one can see taking $\a : t \mapsto 0 \cdot 1_{t \leq 1} + (+ \infty) \cdot 1_{t>1}$, this also proves that a weak Markov transform of a stationary measure is not always a stationary measure. 
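To make the last claim completely explicit: for this choice of $\a$ and $s<t$, the covariance function of $P_\a$ is
\[ K_\a(s,t) = \exp\left(-\int_s^t \a(u)\,du\right) = \begin{cases} 1 & \text{if } t \leq 1, \\ 0 & \text{if } t > 1, \end{cases} \]
so that, for instance, $K_\a(0,1/2) = 1$ while $K_\a(10,10+1/2) = 0$. The covariance of $P_\a$ is therefore not a function of $t-s$ only, hence $P_\a$ is not stationary, although $P$ is.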
\section{Global Markov transform, weak convergence of the transformations and SDE characterization of the mimicking process}\label{sec:global_markov_transforms} In the previous sections, we gave some results about local Markov transforms. In this section, we shall study global Markov transforms and show how these results can be used to obtain results about them. As said in the introduction, a global Markov transform is a law $P'$ of a process $X'$ obtained as a limit of transformations of $X$ made Markov at certain times. We recall that, given a finite subset $R$ of $\R$, the transformation of $X$ made Markov at times $R$ is a process $X^R$ satisfying: \begin{itemize} \item On every interval between two successive times of $R$, $X^R$ and $X$ have the same law; \item For every $r \in R$, $X^R$ is made Markov at time $r$: if one knows the value of the trajectory at time $r$, then the future $(X_t^R)_{t>r}$ of the trajectory does not depend on the past $(X_t^R)_{t<r}$ of the trajectory. \end{itemize} It is possible to give a rigorous definition of the law $P_{[R]}$ of this process $X^R.$ This has been done by Boubel and Juillet \cite[Definition $4.18$]{boubel_markov-quantile_2022} using the Kolmogorov extension theorem. \begin{defipro}[Measure made Markov at times $R$]\label{def:made_markov} Let $T \subset \R$ be a set, $E$ a Polish space, $P \in \PP(E^T)$ a probability measure and $R = \{r_1 < \dots < r_m\} \subset T$ a finite set of times. For each finite set $S \subset T$, we write \[S \cup R = \{s_1^0 < \dots < s_{k_0}^0 < r_1 < s_1^1 < \cdots < s_{k_1}^1 < r_2< \cdots < s_1^{m-1} < \cdots < s_{k_{m-1}}^{m-1}< r_m < s_1^m < \cdots < s_{k_m}^m\}\] and \[\mu_S := \proj^S_{\#}[ P^{s_1^0,\dots,s^0_{k_0},r_1} \circ P^{r_1, s_1^1 , \dots, s^1_{k_1},r_2} \circ \dots \circ P^{r_m,s_1^m,\dots, s_{k_m}^m}].\] Denoting by $\SS$ the class of finite subsets of $T$, it is readily verified that $(\mu_S)_{S \in \SS}$ is a consistent family of measures. According to the Kolmogorov extension theorem, there exists a unique probability measure $P_{[R]}$ on $E^T$ such that $\proj^S_\# P_{[R]} = \mu_S$ for every $S \in \SS.$ We say that $P_{[R]} \in \PP(E^T)$ is the measure $P$ made Markov at times $R.$ Given a process $X$ with law $P$, we say that $X^R$ is the\footnote{We speak of \emph{the} transformation, even though we only have uniqueness in law.} transformation of $X$ made Markov at times $R$ if $X^R$ has law $P_{[R]}.$ \end{defipro} In order to simplify the following definitions and notation, we will assume that $T = \R$, but the results remain true for any interval $T \subset \R.$ A global Markov transform will be defined as a Markov limit of a sequence $P_{[R_n]}$ for an admissible sequence $(R_n)_{n \geq 1}$ of sets of times (see Notation \ref{def:admissible}). The topology that we will consider first is the topology of finite-dimensional convergence, that is to say the weak convergence on $\PP(E^\R)$, where $E^\R$ is endowed with the product topology. More explicitly, a sequence $(P_n)_{n \geq 1} \in \PP(E^\R)^{\N^*}$ converges to $P \in \PP(E^\R)$ for this topology if for all $s_1< \cdots < s_m \in \R^m$, $(P^{s_1,\dots,s_m}_n)_{n \geq 1}$ converges to $P^{s_1,\dots,s_m}$ for the weak topology. We denote this convergence by $P_n \xrightarrow[n \to + \infty]{\fd} P$. The following remark states that for Gaussian measures, this convergence is equivalent to the two-dimensional convergence.
\begin{remq}\label{remq:finite_dim_conv_gaussian} Let $\{P_n\}_{n \geq 1} \cup \{P'\} \subset \PP\left( \R^\R \right)$ be a set of centered Gaussian processes. Then $P_n \xrightarrow[n \to + \infty]{\fd} P'$ if and only if, for every $s<t \in \R^2$, $\lim_{n \to +\infty} \left(P_n\right)^{s,t} = \left(P'\right)^{s,t}$. The direct implication is trivial. Conversely, assume, for every $s<t \in \R^2$, $\lim_{n \to + \infty} \left(P_n\right)^{s,t} = \left(P'\right)^{s,t}$ and fix $t_1 < \cdots < t_m \in \R^m$. According to Lemma \ref{lem:stab_gaussienne}, $\lim_{n \to + \infty} P_n^{t_1, \dots, t_m} = P'^{t_1, \dots,t_m}$ if and only if $\Sigma_{P_n^{t_1, \dots, t_m}} \dcv{n \to + \infty} \Sigma_{\left(P'\right)^{t_1, \dots,t_m}}.$ As for every Gaussian process $Q \in \PP(\R^{\R})$ and $i<j \in \ent{1}{m}^2$, we have $\Sigma_{Q^{t_1,\dots,t_m}}(i,j) = \Sigma_{Q^{t_i,t_j}}(1,2)$ we get the converse implication. \end{remq} \begin{nott}\label{def:admissible} Fix $\R^k_{\shortuparrow} := \ens{(t_1,\dots,t_k) \in \R^k}{t_1< \dots <t_k}$, $\R_{\shortuparrow} := \cup_{k \geq 1} \R^k_{\shortuparrow}$ and denote by \[\A := \ens{(R_n)_{n \geq 1} \in \left(\R_{\shortuparrow}\right)^{\N^*}}{ \lim_{n \to + \infty }\s_{R_n} = 0, \ \lim_{n \to + \infty} \inf(R_n) = -\infty \text{ and } \lim_{n \to + \infty} \sup(R_n) = +\infty}\] the set of admissible sequences. \end{nott} We can now give the definition of both weak and strong global Markov transform. \begin{defi}\label{defi:global_markov_transform}[Global Markov transform] Let $E$ be a Polish space and $P$ a probability measure on $E^\R.$ \begin{enumerate} \item We say that $P$ admits a weak global Markov transform if there exists a Markov measure $P'$ on $E^\R$ and $(R_n)_{n \geq 1} \in \A$ such that $ P_{[R_n]} \xrightarrow[n \to + \infty]{\fd} P'.$ We say that $P'$ is a weak global Markov transform of $P.$ \item We say that $P$ admits a strong global Markov transform if there exists a Markov measure $P'$ on $E^\R$ such that for every $(R_n)_{n \geq 1} \in \A $, we have $P_{[R_n]} \xrightarrow[n \to + \infty]{\fd} P'.$ We say that $P'$ is the strong global Markov transform of $P.$ \end{enumerate} \end{defi} The two-dimensional laws at time $(s,t)$ of the measure made Markov at times $R$ can be obtained by composing the transition kernel passing trough the times of $R$. More explicitly, one can readily check that $\left(P_{[R]}\right)^{s,t} = P_{\{\{s,t\} \cup (R \cap ]s,t[)\}}^{s,t}$ (see Definition \ref{def:compo} for the right-hand side of the equality). In the following proposition, we verify that this implies that a weak (resp. strong) global Markov transforms of $P$ is a weak (resp. strong) local Markov transform of $P$. A natural question to ask is if local Markov transforms are also global Markov transform. This is less obvious and false in general. However, using Remark \ref{remq:finite_dim_conv_gaussian}, we shall prove that it is true for strong Markov transform of Gaussian measures: If a Gaussian measure $P'$ is a strong local Markov transform of $P$, then $P'$ is also its strong global Markov transform\footnote{According to Remark \ref{remp:uniqueness_markov_transform}, this justifies that we talk about \emph{the} strong global Markov transform of $P$.}. In the case of weak Markov transforms, it stays true for stationary Gaussian measures under the hypothesis of Theorem \ref{them:Markov_stationnaire_bis}. \begin{pro}\label{pro:cov_proc_Markovinifie} Let $P \in \PP(\R^\R)$ be a Gaussian measure and $P' \in \PP(\R^\R)$ be a Markov measure. 
\begin{enumerate} \item The measure $P'$ is the strong global Markov transform of $P$ if and only if it is the strong local Markov transform of $P.$ \item If $P'$ is a weak global Markov transform of $P$, then $P'$ is a weak local Markov transform of $P$. \end{enumerate} \end{pro} \begin{proof} \begin{enumerate} \item Assume $P'$ is the strong \emph{global} Markov transform of $P.$ In order to show that $P'$ is the strong \emph{local} Markov transform of $P$, fix $s<t \in \R^2$ and $(\ti{R}_n)_{n \geq 1} \in \left(\SS_{[s,t]}\right)^{\N^*}$ satisfying $\lim_{n \to +\infty} \s_{\ti{R}_n} = 0.$ For every $n \geq 1$, we set $R_n := \ti{R}_n \cup \big( ([-n,s[ \cup ]t,n])\cap \left(2^{-n} \mathbb{Z} \right) \big) $. We have $(R_n)_{n \geq 1} \in \A$, $(R_n \cap ]s,t[ ) \cup \{s,t\} = \ti{R}_n$ and $P_{[R_n]} \xrightarrow[n \to + \infty]{\fd} P$. Hence, we get $ P^{s,t}_{\{\ti{R}_n\}} = P^{s,t}_{(R_n \cap ]s,t[ ) \cup \{s,t\}} = \left(P_{[R_n]}\right)^{s,t} \dcv{n \to + \infty} {P'}^{s,t}$. Conversely, assume $P'$ is the strong \emph{local} Markov transform of $P$. According to Remark \ref{remq:finite_dim_conv_gaussian}, it is sufficient to show that, for every $s<t \in \R^2$, $\lim_{n \to + \infty} \left( P_{[R_n]} \right)^{s,t} = (P')^{s,t}$, \ie ,\ $ \lim_{n \to + \infty} P_{\{\ti{R}_n\}}^{s,t} = (P')^{s,t}$ where $\ti{R}_n := \{s,t\} \cup (R_n \cap ]s,t[).$ Since $(\ti{R}_n)_{n \geq 1} \in \left( \SS_{[s,t]} \right)^{\N^*}$, $\lim_{n \to + \infty} \s_{\ti{R}_n} = 0$ and $P'$ is a strong local Markov transform of $P$, we get $\left( P_{[R_n]} \right)^{s,t} = P_{\{\ti{R}_n\}}^{s,t} \dcv{n \to + \infty} \left(P'\right)^{s,t}$. Thus $P'$ is the strong global Markov transform of $P.$ \item Fix $(R_n)_{n \geq 1} \in \A $ such that $P_{[R_n]} \xrightarrow[{n \to + \infty}]{\fd} P'.$ Given $s<t \in \R^2$, put $\ti{R}_n := \{s,t\} \cup (R_n \cap [s,t]).$ We have $(\ti{R}_n)_{n \geq 1} \in \left(\SS_{[s,t]}\right)^{\N^*}$, $\lim_{n \to + \infty} \s_{\ti{R}_n} = 0$ and $P^{s,t}_{\{\ti{R}_n\}} = P^{s,t}_{(R_n \cap ]s,t[ ) \cup \{s,t\}} = \left(P_{[R_n]}\right)^{s,t} \dcv{n \to +\infty} \left(P'\right)^{s,t}$. As this is true for every $s<t \in \R^2$, $P'$ is a weak local Markov transform of $P.$ \end{enumerate} \end{proof} Notice that we did not use the fact that $P$ is Gaussian to prove that every weak (resp. strong) global Markov transforms of $P$ is a weak (resp. strong) local Markov transform of $P$, but used it to prove that every strong local Markov transforms of $P$ is the strong global Markov transform of $P$. Indeed, we used the Remark \ref{remq:finite_dim_conv_gaussian}, which is only valid in the Gaussian case. \begin{pro}\label{pro:finite_dim_conv_stati} Let $K :(s,t) \in T^2 \mapsto \ti{K}(t-s) \in \R$ be a continuous stationary positive semi-definite kernel that satisfies $\ti{K}(0) = 1.$ Let $P$ be a centered Gaussian measure with covariance function $K$ and set $L^K : h \mapsto h^{-1}(1-\ti{K}(h)).$ Assume $\a \in [0,+\infty]$ is a cluster point of $L^K$ at $0^+$ and let $(s_n)_{n \geq 1}$ be a positive sequence converging to zero such that $\lim_{n \to + \infty} L^K(s_n) = \a.$ Then, writing $R_n := \big( s_n \cdot \Z \big) \cap [-n,n]$, we have $P_{[R_n]} \xrightarrow[n \to + \infty]{\fd} P_\a$. In particular, the measure $P_\a$ is a weak global Markov transform of $P.$ \end{pro} \begin{proof} Let $(s_n)_{n \geq 1}$ and $(R_n)_{n \geq 1}$ be as in the statement. 
According to Remark \ref{remq:finite_dim_conv_gaussian}, we are left to prove that $\lim_{n \to +\infty} P_{[R_n]}^{s,t} = P_\a^{s,t}$ for every $s<t \in \R^2$. Since for every $n \geq \max(t,-s)$, we have $P_{[R_n]}^{s,t} = P_{\{\{s,t\} \cup \left(]s,t[ \cap R_n \right)\}}^{s,t} = P^{s,t}_{\{\{s,t\} \cup \left(]s,t[ \cap s_n\Z \cap ]-n,n[ \right)\}} = P^{s,t}_{\{\{s,t\} \cup \left(]s,t[ \cap s_n\Z \right)\}} $, this corresponds exactly to the second point of Theorem \ref{them:Markov_stationnaire_bis}. \end{proof} \begin{cor} Let $P$ be the stationary Gaussian process appearing in Theorem \ref{them:contre_exemple}. Then, for every $\a \in [0,+\infty]$, $P_\a$ is weak global Markov transform of $P.$ \end{cor} \begin{proof} By construction of $P$, every $\a \in [0,+ \infty]$ is a cluster point of the decay rate of the correlation function of $P$. According to Proposition \ref{pro:finite_dim_conv_stati}, for every $\a \in [0,+\infty]$, the process $P_\a$ is a weak global Markov transform of $P.$ \end{proof} Until now, we only considered the topology of finite-dimensional convergence. In the case where $a< b \in \R^2$, we can endow $\CC([a,b],\R)$ with the topology of the uniform convergence, inducing a topology on $\PP(\CC([a,b],\R))$, that we simply call weak topology. We shall write $ P_n \xrightarrow[n\to +\infty]{w} P$ to express that a sequence $(P_n)_{n \geq 1} \in \PP(\CC([a,b],\R))^{\N^*}$ converges to $P \in \PP(\CC([a,b],\R))$ for this topology, \ie ,\ $\int f d P_n \dcv{ n \to + \infty} \int f d P$ for every continuous bounded function $f : (\CC([a,b], \R), \n{ \cdot}{\infty} ) \to \R.$ Weak convergence implies convergence for the finite-dimensional topology, but in general the converse is false. However, if the family $(P_n)_{n \geq 1}$ is \emph{tight}\footnote{In this paper we only use tightness criteria not the definition of tightness. For completeness, we refer to \cite{billingsley_probability_1995}.}, then both convergences are equivalent. To carry out our convergence result from the finite-dimensional topology to the weak convergence topology, we need the Kolmogorov-Chenstov criterion to ensure that our measures $P$ and $P_{[R_n]}$ are concentrated on $\CC([a,b],\R)$ and the Kolmogorov tightness criterion to ensure that $\{P\} \cup \{P_{[R_n]} \ ; \ n \geq 1\}$ is tight. For a proof of the Kolmogorov continuity criterion, we refer to \cite[Theorem 2.9]{le_gall_brownian_2016}, whereas for the Kolmogorov tightness criteria we refer to \cite[Theorem 23.7]{kallenberg_foundations_2021}. \begin{them}\label{them:tighness_criteria} Let $T \subset \R$ be a compact set. \begin{description} \leftskip=0.3in \item[Kolmogorov-Chenstov criteria:] If $P \in \PP(\R^T)$ and \ \[\exists a,b,C \in \R_+^*, \forall (s,t) \in T^2, \int_{\R^2} |x-y|^b P^{s,t}(dx,dy) \leq C |t-s|^{1+a},\] then $P$ is concentrated on continuous paths. \item[Kolmogorov tightness criteria:] If $(P_n)_{n \geq 1} \in \PP(\CC(T,\R))^{\N^*}$ and \ \[\exists a,b,C \in \R_+^*, \forall (s,t) \in T^2, \sup_{n \geq 1} \int_{\R^2} |x-y|^b P_n^{s,t}(dx,dy) \leq C |t-s|^{1+a},\] then $(P_n)_{n \geq 1}$ is tight for the weak convergence. \end{description} \end{them} In order to prove the uniform bounds of Theorem \ref{them:tighness_criteria}, we first establish a technical lemma. \begin{lem}\label{lem:inequality_mark_variance} Consider a continuous semi-definite positive kernel $K : \R^2 \to \R$ such that $K(t,t) = 1$ for every $t \in \R$. 
Fix $a<b \in \R^2$ and denote by $P$ a centered Gaussian measure with covariance function $K.$ \begin{enumerate} \item Assume, for every $s<t \in \R^2$, \begin{equation}\label{eq:hyp_conv_unif_R_mesure} \sup_{v \in [s,t]} \left|L_v^K(h) - \a^K(v) \right| \dcv{h \to 0^+} 0. \end{equation} Then \[ \forall (R_n)_{n \geq 1} \in \A, \exists M \in \R_+, \forall (s,t) \in [a,b]^2,\forall n \geq 1, 1- K_n(s,t) \leq M|s-t|,\] where $K_n$ stands for the covariance function of $P_{[R_n]}.$ \item Assume $K : (s,t) \mapsto \ti{K}(t-s)$ is a stationary kernel, $ \a \in \R_+$ is a cluster point of $L^K : h \in \R_+^* \mapsto h^{-1}(1- \ti{K}(h)) $ and $L^K$ is bounded. Fix a positive sequence $(s_n)_{n \geq 1}$ converging to zero such that $\lim_{n \to + \infty} L^K(s_n) = \a$ and put $(R_n)_{n \geq 1} := \left( \left( s_n \cdot \Z \right) \cap [-n,n] \right)_{n \geq 1} \in \A$. Then, \[\exists M \in \R_+,\forall (s,t) \in [a,b]^2, \forall n \geq 1, 1- K_n(s,t) \leq M|s-t|,\] where $K_n$ stands for the covariance function of $P_{[R_n]}.$ \end{enumerate} \end{lem} \begin{proof} We successively prove both statement. \begin{enumerate} \item Fix $(R_n)_{n \geq 1} \in \A$ and $s<t \in [a,b]^2$. We write $L_t$ instead of $L_t^K$, $\a$ instead of $\a^K$ and, for every $v \in \R,$ we put $L_v(0) :=\a(v)$. Set $\sigma^* := \sup_{n \geq 1} \s_{R_n}$, $ M_1:= \sup_{0 \leq h \leq \sigma^*, v \in [a,b]} |L_v(h) - \a(v)|$, $M_2 := \sup_{x,y \in [a,b]} |\a(x)-\a(y) |$, $M_3 := \sup_{0 \leq h \leq \sigma^*, v \in [a,b]} L_v(h)$, $M_4 := \sup_{|x| \leq \sigma^* M_3} |\ee(x) |$ and $M_5 := M_1 + M_2 + M_3 M_4$, where $\ee$ is defined in Notation \ref{nott:DL_Taylor}. As in Theorem \ref{them:Markov_stationnaire}, hypothesis \eqref{eq:hyp_conv_unif_R_mesure} implies $L$ that is continuous on $[a,b] \times [0,\s^*]$ and thus $M_3< + \infty$. Since $K$, $\a$ and $\ee$ are continuous, $M_5 < + \infty$. For a fixed $n \geq 1$, we write $\{s,t\} \cup \left( R_n \cap ]s,t[ \right) = \{t_0 < \cdots < t_{m}\}$ and $h_k := t_{k+1} - t_k$ for every $k \in \{0, \dots , m-1\}.$ Since, $\left(P_{[R_n]}\right)^{s,t} = P_{\{\{s,t\} \cup \left( R_n \cap ]s,t[ \right)\}}^{s,t}$, the composition formula in Lemma \ref{lem:compo_gaussien} gives $K_n(s,t) = K(t_0,t_1) \cdots K(t_{m-1},t_m).$ As in the proof of Theorem \ref{them:Markov_stationnaire}, \begin{align*} \left| \ln(K_n(s,t)) - \left(-\int_s^t \a(u) du \right) \right| &\leq \left|\sum_{k=0}^{m-1} h_k L_{t_k}(h_k) - \sum_{k=0}^{m-1} h_k \a(t_k) \right| +\left| \sum_{k=0}^{m-1} h_k \a(t_k) - \int_s^t \a(u) du \right| \\ &~~~~~~~~~~~~~ + \left|\sum_{k=0}^{m_n-1} h_k L_{t_k}(h_k) \ee( -h_k L_{t_k}(h_k)) \right| \\ &\leq \sum_{k=0}^{m-1} h_k^n \left|L_{t_k}(h_k) - \a(t_k) \right| + \sum_{k=0}^{m-1} \int_{t_k}^{t_{k+1}} |\a(t_k) - \a(u)|du \\ &+ \sum_{k=0}^{m-1} h_k L_{t_k}(h_k)\left|\ee\left(- h_kL_{t_k}(h_k) \right)\right| \\ &\leq |t-s| M_1 + |t-s| M_2+ |t-s|M_3 M_4 \\ &= |t-s| M_5. \end{align*} Thus $\ln(K_n(s,t)) \geq - M_5|t-s| -\int_s^t \a(u) du \geq - M|t-s|$, where $M = M_5 + \sup_{u \in [a,b]} |\a(u)|$. Hence, for every $s<t \in [a,b]^2$, $K_n(s,t) \geq \exp(-M|t-s|) \geq 1-M|s-t|$ which proves our result. \item Let $(s_n)_{n \geq 1}$ and $R_n$ be as in the statement. Put $R_n = \{t_0 < \cdots < t_m\}$, $\s^* := \sup_{n \geq 1} \s_{R_n}$ and, for every $k \in \ent{0}{m-1}$, $h_k := t_{k+1} - t_k.$ Defining $\ee$ as in Notation \ref{nott:DL_Taylor}, for every $(v,h) \in \R \times \R_+^*$, $\ln \left( K(v,v+h) \right) = -hL^K(h)\left[1 + \ee\left(\ti{K}(h)\right)\right]$. 
Hence, \begin{align*} \big|\ln(K_n(s,t))\big| &= \left| \sum_{k=0}^{m-1} h_k L^K(h_k)\left[1 + \ee\left(\ti{K}(h_k)-1\right)\right] \right| \\ &\leq |t-s| M, \end{align*} where $M :=\sup_{0 \leq h \leq \s^*}|L^K(h)|\left(1+ \sup_{|x|\leq \s^*} |\ee(\ti{K}(x)-1)|\right)$. Since $L^K$ in bounded and $\ee, \ti{K}$ are continuous, $M$ is finite. So $\ln(K_n(s,t)) \geq -M|t-s|$, which implies $K_n(s,t) \geq e^{-M|t-s|} \geq 1-M|t-s|$ and shows the result. \end{enumerate} \end{proof} We can now apply our criteria to prove weak convergence. \begin{them}\label{them:fd_to_wiener} Let us consider a continuous positive semi-definite kernel $K : \R \times \R \to \R$ such that $K(t,t) = 1$ for every $t \in \R.$ We denote by $P$ a centered Gaussian measure with covariance function $K.$ \begin{enumerate} \item Assume, for every $s<t \in \R^2$, \begin{equation*} \sup_{v \in [s,t]} \left|L^K_v(h) - \a^K(v)\right| \dcv{h \to 0^+} 0. \end{equation*} Then \[ \forall (R_n)_{n \geq 1} \in \A, \forall a<b \in \R^2, \proj^{[a,b]}_\# P_{[R_n]} \xrightarrow[n \to + \infty]{w} \proj^{[a,b]}_\# P_{\a^K}.\] \item Assume $K : (s,t) \mapsto \ti{K}(t-s)$ is a stationary positive semi-definite kernel, $ \a \in \R_+$ is a cluster point of $L : h \in \R_+^* \mapsto h^{-1}(1- \ti{K}(h)) $ and $L$ is bounded. Fix $(s_n)_{n \geq 1}$ a positive sequence converging to zero such that $\lim_{n \to + \infty} L^K(s_n) = \a$ and set $(R_n)_{n \geq 1} := \left( \left( s_n \cdot \Z \right) \cap [-n,n] \right)_{n \geq 1} \in \A$. Then \[\forall a<b \in \R^2, \proj^{[a,b]}_\# P_{[R_n]} \xrightarrow[n \to + \infty]{w} \proj^{[a,b]}_\# P_\a.\] \end{enumerate} \end{them} \begin{proof} \begin{enumerate} \item Applying Lemma \ref{lem:inequality_mark_variance}, there exists $ M \in \R_+$ such that for every $ (s,t,n) \in [a,b]^2 \times \N^*$, we have $1-K_n(s,t) \leq M|t-s|.$ Hence, \[\int_{\R^2} |x-y|^2 dP_{[R_n]}^{s,t}(x,y) = K_n(s,s) + K_n(t,t) - 2 K_n(s,t) = 2(1-K_n(s,t)) \leq 2M\cdot |s-t|.\] Since for a centered Gaussian random variable $X$, one has $\E(X^4) = 3\E(X^2)^2$, \begin{equation}\label{eq:inequality_tight_mark} \int_{\R^2} |x-y|^4 dP_{[R_n]}^{s,t}(x,y) = 3 \left( \int_{\R^2} |x-y|^2 dP_{[R_n]}^{s,t}(x,y)\right)^2 \leq 12M^2|s-t|^2. \end{equation} As $\lim_{n \to +\infty} P_{[R_n]}^{s,t} = P'^{s,t}$ and $(x,y) \mapsto |x-y|^4$ is lower semicontinuous and bounded by below, the Portmanteau theorem gives $\int_{\R^2} |x-y|^4 dP^{s,t}(x,y) \leq \liminf_{n \to + \infty} \int_{\R^2} |x-y|^4 dP_{[R_n]}^{s,t}(x,y) \leq 12M^2|s-t|^2$. According to the Kolmogorov-Chenstov criterion, we obtain that $\proj_{\#}^{[a,b]} P'$ is concentrated on continuous paths and $(\proj_{\#}^{[a,b]}P_n)_{n \geq 1}$ is a sequence of measures concentrated on continuous paths. Since our measures are concentrated on continuous paths, according to Inequality \eqref{eq:inequality_tight_mark}, the Kolmogorov tightness criteria applies and $(\proj^{[a,b]}_\#P_{[R_n]})_{n \geq 1}$ is tight. According to Point $1.$ of Proposition \ref{pro:cov_proc_Markovinifie}, we have $\proj_{\#}^{[a,b]} P_{[R_n]} \xrightarrow[n \to +\infty]{\fd} \proj_\#^{[a,b]} P_{\a^K}$. Combined with tightness, this implies $\proj_{\#}^{[a,b]} P_{[R_n]} \xrightarrow[n \to +\infty]{w} \proj_\#^{[a,b]} P_\a^K$, \ie , the wanted result. \item Same proof as the first point, but applied to the sequence $(R_n)_{n \geq 1} \in \A$ defined by $R_n := s_n \Z \cap [-n,n]$. 
According to the second Point of Lemma \ref{lem:inequality_mark_variance}, we also obtain Inequality \eqref{eq:inequality_tight_mark}, whereas the finite-dimensional convergence is given by Point $2.$ of Proposition \ref{pro:finite_dim_conv_stati}. \end{enumerate} \end{proof} \begin{remq}\label{remq:fd_to_wiener_centered} In Theorem \ref{them:fd_to_wiener}, $P$ is centered and $K(t,t) = 1$ for every $t \in \R$. However, both hypotheses are only there to simplify the statement and the proof. Assume instead that $P$ is non-centered with mean function $m$ and that, for every $t \in \R$, we have $K(t,t) > 0$ . For every $a< b$, the map $f : (x_t)_{t \in [a,b]} \mapsto \left(\sqrt{K(t,t)}x_t + m(t) \right)_{t \in [a,b]}$ is $\left(\sup_{t \in [a,b]}\sqrt{K(t,t)}\right)$-Lipschitz, hence continuous for the uniform norm. Hence, one can apply Theorem \ref{remq:fd_to_wiener_centered} to the normalized measure $f^{-1}_\# P$ and push forward the obtained convergence results by $f$, which is continuous\footnote{To apply Theorem \ref{them:fd_to_wiener}, notice that $L^K$ and $\a^K$ are invariant by renormalization, \ie ,\ $L^{c_K} = L^K$ and $\a^{c_K} = \a^K $. }. At the end, we obtain \[ \forall (R_n)_{n \geq 1} \in \A, \forall a<b \in \R^2, \proj^{[a,b]}_\# P_{[R_n]} \xrightarrow[n \to + \infty]{w} \proj^{[a,b]}_\# Q,\] where $Q$ is the Gaussian process with mean function $m$ and with covariance function $K'$ given by \[K'(s,t) = K(s,s)^{1/2} K(t,t)^{1/2}\exp\left(-\int_s^t \a^K(u) du \right).\] \end{remq} We now prove Theorem \ref{them:mimicking_intro} of page \pageref{them:mimicking_intro} and add to it a result about the underlying dynamics the mimicking process. In Point \ref{Existence} and \ref{Point:Uniqueness}, we prove that our mimicking problem has a unique solution. In Point \ref{Point:SDE}, under a regularity assumption on the mean function and the variance function , we characterize the solution of this mimicking problem as the solution of a SDE. In Point \ref{Point:mim_markov_transf}, we show that the solution of the mimicking problem is obtained as the strong global Markov transform of the initial process. \begin{them}\label{them:mimicking} Let $X=(X_t)_{t \in \R}$ be a Gaussian process with continuous covariance function $K$ and positive variance function. Assume $\a^K$, the instantaneous decay rate of $K$, is well-defined and continuous. \begin{enumerate} \item\label{Existence} \emph{Existence:} There exists a Gaussian process $Y =(Y_t)_{t \in \R}$ with covariance $K'$ satisfying: \begin{itemize} \leftskip=0.3in \item[(1)] For every $t \in \R$, $\text{Law}(X_t) = \text{Law}(Y_t)$; \item[(2)] $\a^{K'} = \a^{K}$; \item[(3)] $(Y_t)_{t \in \R}$ is a Markov process. \end{itemize} Moreover, for every $s<t \in \R^2$, \begin{equation}\label{eq:univ_conv} \sup_{v \in [s,t] } \left| \a^K(v) - L_v^{K'}(h) \right| \dcv{h \to 0^+} 0. \end{equation} If a Gaussian process $Y$ satisfies $(1),(2)$ and $(3)$, we allow ourselves to say that $Y$ is a \emph{mimicking process} of $X.$ \item\label{Point:Uniqueness} \emph{Uniqueness in law:} Every mimicking process of $X$ with covariance function $K' :\R^2 \to \R$ has the same mean function as $(X_t)_{t \in \R}$ and, for every $s<t \in \R^2$, \begin{equation}\label{eq:transfor_formule_cov} K'(s,t) = K(s,s)^{1/2}K(t,t)^{1/2} \exp\left(-\int_s^t \a^K(u) du\right). \end{equation} \item\label{Point:SDE} \emph{Underlying dynamic of the mimicking process: } Assume $m : t \mapsto \E(X_t)$ and $\sigma : t \mapsto K(t,t)^{1/2}$ are continuously differentiable. 
Then strong existence and strong uniqueness \footnote{Strong uniqueness is meant in the sense of \cite[Chapter $5$, Definition $2.3$]{karatzas_brownian_1988}. By strong existence, we mean existence of a strong solution for any given brownian motion and independent condition. We refer to \cite[Chapter $5$, Definition $2.1$]{karatzas_brownian_1988} for the definition of a strong solution.} hold for the SDE \begin{equation} \begin{cases*}\label{eq:SDE_complet} Z_t =\left[m'(t) +(\sigma'(t)-\a^K(t))Z_t\right]dt + \s(t)\sqrt{2 \a^K(t)}dB_t \\ Z_0 \sim \NN(0,1) \end{cases*} \end{equation} Moreover, the\footnote{Since strong uniqueness holds, this process is unique up to indistinguishability.} law of its solution is the law of the mimicking process of $X$ (restricted to $\R_+$). \item\label{Point:mim_markov_transf} \emph{The mimicking process is a Markov transform (under a reinforced condition):} Assume \eqref{eq:correlation} is verified, \ie ,\ $$\sup_{v \in [s,t]} \left| \a^K(v) - \frac{1}{h}\left(1 - \frac{K(v,v+h)}{\sqrt{K(v,v)}\sqrt{K(v+h,v+h)}} \right) \right| \dcv{h \to 0^+} 0,$$ for every $s<t \in \R^2.$ Let $(R_n)_{n \geq 1} \in \A$ be an admissible sequence and $Y$ be a mimicking process. For every $n \geq 1$, we denote by $X^{R_n}$ the transformation of $X$ made Markov at times $R_n$. Then $(X^{R_n})_{n\geq 1}$ and $Y$ almost surely have continuous paths and $X^{R_n}$ converges weakly to $Y$ on compact sets\footnote{This means that, for every $a<b \in \R^2$, \ $\text{Law}\left((X^{R_n}_t)_{t \in [a,b]}\right) \xrightarrow[n \to + \infty]{w} \text{Law}\left((Y_t)_{t \in [a,b]} \right).$}. In particular, the strong global Markov transform of $X$ is the mimicking process of $X$. \end{enumerate} \end{them} \begin{proof} We write $\a$ instead of $\a^K$. To prove \emph{Point \ref{Existence}}, we denote by $P$ the law of $X$. According to Proposition \ref{pro:alpha_proces}, the Gaussian measure $Q$ with same mean function as $P$ and covariance function $K'$ given by Formula \eqref{eq:transfor_formule_cov} is well defined and Markov. We fix a process $Y = (Y_t)_{t \in \R}$ with law $Q.$ For every $t \in \R$, $\text{Law}(Y_t) = \NN\left(\E(Y_t),K'(t,t)\right) = \NN\left(\E(X_t),K(t,t)\right) = \text{Law}(X_t)$, so $Y$ satisfies Condition $(1)$. Let $\theta$ be defined by \begin{equation*} \theta : x \in \R \mapsto \begin{cases} \frac{e^x - 1 - x}{x} & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases} . \end{equation*} It is continuous and satisfies $e^x = 1 + x + x \theta(x)$ for every $x \in \R.$ Hence, for every $v \in \R$, \begin{align*} h^{-1}\left( 1 - c_{K'}(v,v+h) \right) &= h^{-1} \left( 1 - \exp\left(-\int_v^{v+h} \a(u) du \right) \right)\\ &= h^{-1}\int_v^{v+h} \a(u) dx + h^{-1}\left(\int_v^{v+h} \a(u) du \right) \theta\left(\int_v^{v+h} \a(u) du\right). 
\end{align*} To show \eqref{eq:univ_conv}, fix $s<t \in \R^2.$ For every $h \in ]0,1]$, set \[M_{h} := \sup \ens{\left| \a(v) - h^{-1} \int_v^{v+h} \a(u)du\right|}{v \in [s,t], 0 < h \leq h}.\]Since \begin{align*} M_{h} &\leq \sup_{v \in [s,t]} h^{-1} \int_v^{v+h} \left| \a(v) - \a(u) \right| du \\ &\leq \sup \ens{|\a(w)-\a(u)|}{(w,u) \in [s,t+1]^2, |w-u| \leq h} \end{align*} and $\a$ is uniformly continuous on $[s,t+1]$, we get $\lim_{h \to 0^+} M_{h} = 0.$ Hence, for every $v \in [s,t] $, \begin{align*} \left|\a(v) -L^{K'}(v,v+h) \right| &= \left| \a(v) - h^{-1}\int_v^{v+h} \a(u) du - h^{-1}\left(\int_v^{v+h} \a(u) du \right) \theta\left(-\int_v^{v+h} \a(u) du\right) \right| \\ &\leq \left| \a(v) - h^{-1}\int_v^{v+h} \a(u) du \right| + \left| h^{-1}\left(\int_v^{v+h} \a(u) du \right) \theta\left(-\int_v^{v+h} \a(u) du\right) \right| \\ &\leq M_{h} + \left(M_{h} + \sup_{u \in [s,t]} |\a(u) | \right) \sup \ens{|\theta(x)|}{|x| \leq h \left(M_{h} + \sup_{u \in [s,t]} |\a(u) | \right)} \\ & =: N_{h}, \end{align*} where we used $$\left| h^{-1}\int_v^{v+h} \a(u) du \right| \leq \left| h^{-1}\int_v^{v+h} \a(u) du - \a(v) \right| + |\a(v)| \leq M_{h^*} + \sup_{u \in [s,t]} |\a(u)|.$$ Hence, \begin{equation*} \sup_{v \in [s,t]} \left| \a^K(v) - L_v^{K'}(h) \right| \leq N_{h}. \end{equation*} Since $\lim_{h \to 0^+} N_{h}=0$, this proves Equation \ref{eq:univ_conv} and in particular Condition $(2).$ As $Q$ is Markov and $\text{Law}(Y) = Q$, the process $Y$ is Markov, \ie , condition $(3)$. To prove \emph{Point \ref{Point:Uniqueness}}, we consider a Gaussian process $Y$ with covariance function $K'$ satisfying Conditions $(1),(2)$ and $(3)$. According to hypothesis $(1)$, $Y$ clearly has the same mean function as $X$ and we are left to prove Formula \eqref{eq:transfor_formule_cov}. According to Remark \ref{rq:pos_markov}, we can set $g_s : t \in \R \mapsto -\log\left(c_{K'}(s,t)\right) \in \R_+$ for every $s \in \R$. We have $g_s(s) = 0$ and $c_{K'}(s,t+h) = c_{K'}(s,t)c_{K'}(t,t+h)$ for every $(t,h) \in \R \times \R_+^*$. By definition of $\a$, we have : \begin{align*} h^{-1} (g_s(t+h)-g_s(t)) &= - h^{-1}\left(\ln(c_{K'}(s,t+h)) - \ln(c_{K'}(s,t))\right) \\ &=-h^{-1} \ln(c_{K'}(s,t+h)/c_{K'}(s,t)) \\ &= -h^{-1}\ln(c_{K'}(t,t+h)) \\ &= -h^{-1} \left(c_{K'}(t,t+h)-1 \right) \left[1+\ee\left(c_{K'}(t,t+h)-1\right)\right]\\ &\dcv{h \to 0^+} \a(t), \end{align*} where $\ee$ is defined in Notation \ref{nott:DL_Taylor} and the convergence is obtained using Condition $(3).$ This implies that $g_s(t) = g_s(s) + \int_s^t \a(u)du = \int_s^t \a(u) du$, \ie ,\ Formula \eqref{eq:transfor_formule_cov}. To prove \emph{Point \ref{Point:SDE}}, put $b(t,x) := m'(t) +(\sigma'(t)-\a(t))x $ and $\sigma(t,x) := \sigma(t) \sqrt{2\a(t)}.$ We now prove strong uniqueness holds for $(b,\sigma)$. According to \cite[Chapter $5$,Theorem $2.5$]{karatzas_brownian_1988}, it is sufficient to prove that for every $T>0$ and $n \geq 0$, there exists $ K_{T,n} \in \R_+$ such that: \begin{equation}\label{eq:uniqueness_SDE} \forall x, y \in [-n,n], \forall t \in [0,T], \left|b(t,x) - b(t,y) \right| + \left|\s(t,x)- \sigma(t,y) \right| \leq K_{T,n} |x-y|. \end{equation} By continuity of $\a$, the constant $K_{T,n} := \sup_{t \in [0,T]} |\s'(t)-\a(t)|$ satisfies \eqref{eq:uniqueness_SDE} and strong uniqueness holds for $(b,\s)$. 
For strong existence, we have to prove the existence of a strong solution for any probability space $(\Omega,\FF,\mathbb{P})$ endowed with a brownian motion $(B_t)_{t \geq 0}$ and an initial condition $Z$ independant from the brownian motion. First, prove the existence of a strong solution for the SDE \begin{equation}\label{eq:SDE} \begin{cases*} d\ti{Z}_t = \left( - \a (t) \ti{Z}_t \right) dt + \sqrt{2 \a(t)} d B_t \\ \ti{Z}_0 = Z \end{cases*}. \end{equation} According to \cite[Chapter 5, Section 6, Page 354]{karatzas_brownian_1988}, a strong solution to the SDE \eqref{eq:SDE} is the process $(\ti{Z}_t)_{t \geq 0}$ defined by \[\ti{Z}_t := \Phi(t)Z + \int_0^t \frac{\Phi(t)}{\Phi(u)} \sqrt{2 \a(u)} d B_u,\] where $\Phi(t) := \exp\left( - \int_0^t \a(u) du \right).$ Hence, the process $(\ti{Z}_t)_{ t \geq 0}$ is centered Gaussian and its covariance function $K'' : (s,t) \mapsto \E(\ti{Z}_s\ti{Z}_t)$ is computed as follows. For every $s<t \in \R^2$, \begin{align*} K''(s,t) &= \E\left( \Phi(s) \Phi(t) Z^2 \right) + \E\left( \int_0^s \frac{\Phi(s)}{\Phi(u)} \sqrt{2\a(u)}dB_u \int_0^t \frac{\Phi(t)}{\Phi(v)} \sqrt{2\a(v)}dB_v\right) \\ &= \Phi(s)\Phi(t) + \Phi(s) \Phi(t) \int_0^s \frac{2 \a(u)}{\Phi(u)^2} du \\ &= \Phi(s) \Phi(t) \left( 1 + \int_0^s 2 \a(u) \exp \left( \int_0^u 2 \a(v) dv \right) du \right) \\ &= \Phi(s)\Phi(t) \left( 1 + \left[ \exp \left( \int_0^u 2 \a(v) dv \right)\right]_0^s\right) \\ &= \Phi(s)\Phi(t)\left(1 + \Phi(s)^{-2} -1 \right) = \Phi(t)/\Phi(s) = K_\a(s,t). \end{align*} Hence $P_\a$ and $\text{Law}((\ti{Z}_t)_{t \geq 0})$ are two Gaussian measures with same mean and covariance function, thus are equal. Consider now the process ${Z}_t := m(t) + \sigma(t)\ti{Z}_t.$ This process is Gaussian, has mean function $m$ and covariance function given by $\eqref{eq:transfor_formule_cov}$. To obtain the fact that $(Z_t)_{t \geq 0}$ is solution of the SDE \eqref{eq:SDE_complet}, we just apply the Itô formula with $f(t,x) = m(t) + \sigma(t)x$ and use the fact that $(\ti{Z}_t)_{t \geq 0}$ is a solution of the SDE \eqref{eq:SDE}. We are left to prove \emph{Point \ref{Point:mim_markov_transf}}. Since $\text{Law}(X^{R_n}) = P_{[R_n]} $, we have to prove $\proj^{[a,b]}_\# P_{[R_n]} \xrightarrow[n \to + \infty]{w} \proj^{[a,b]}_\# P'.$ This follows from Theorem \ref{them:fd_to_wiener} and Remark \ref{remq:fd_to_wiener_centered}. \end{proof} The following numerical simulation of trajectories of a Gaussian process with law $P_\a$ and trajectories of solutions to the SDE $\eqref{eq:SDE}$ give an illustration of Point $\ref{Point:mim_markov_transf}$ of Theorem \ref{them:mimicking}. The Gaussian process is simulated by discretizing time and using the Choleski decomposition, while our SDE is simulated with the Euler-Maruyama algorithm. \begin{center} \begin{figure} \includegraphics[scale = 0.9]{Comparaison_EDS_processus_gaussien.png} \caption{Comparison of the trajectories of the SDE \eqref{eq:SDE} with these of the Gaussian measure $P_\a$.} \end{figure} \end{center} \begin{remq}\label{them:time_changed_OU} Suppose $(X_t)_{t \in \R}$ is a standard stationary Ornstein-Uhlenbeck process with parameter $1$, $\a$ is a continuous non-negative function and set $ \Phi : t\in \R \mapsto \int_0^t \a(u) du$. Then $\left(X_{\Phi(t)}\right)_{t\in \R}$ has law $P_\a$. 
Indeed, $\left(X_{\Phi(t)}\right)_{t\in \R}$ is a centered Gaussian process satisfying $\E(X_\Phi(s)X_{\Phi(t)}) = \exp\left(-|\Phi(t)-\Phi(s)|\right) = \exp\left( -\int_s^t \a(u) du \right) .$ \end{remq} \textbf{\large{Acknowledgment.}} I would like to express my deep gratitude to Nicolas Juillet for introducing me to this research problem and for his constant support and valuable suggestions throughout the writing process. \vspace{0.5cm} \textbf{\large{Copyright notice}:} \begin{minipage}{1.8cm} \includegraphics[width=1.8cm]{by.pdf} \end{minipage} \vspace{0.1cm}This research was funded, in whole or in part, by the Agence nationale de la recherche (ANR), Grant ANR-23-CE40-0017. A CC-BY public copyright license has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission, in accordance with the grant’s open access conditions. \bibliographystyle{plain} \bibliography{document.bib} \vspace{0.5cm} \begin{minipage}{18cm} \emph{Armand Ley} -- IRIMAS, UR 7499, Université de Haute-Alsace \\ 18 rue des Frères Lumière, 68 093 Mulhouse, France \\ Email : \texttt{[email protected]} \end{minipage} \end{document}
2412.10067v1
http://arxiv.org/abs/2412.10067v1
On the embedding of weighted Sobolev spaces with applications to a planar nonlinear Schrödinger equation
\documentclass[11pt]{amsart} \usepackage[a4paper,margin=2.5cm]{geometry} \setlength{\parindent}{0pt} \usepackage{enumitem} \usepackage[normalem]{ulem} \usepackage[colorlinks=true,urlcolor=blue,citecolor=red,linkcolor=blue,linktocpage,pdfpagelabels,bookmarksnumbered,bookmarksopen]{hyperref} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{lmodern} \usepackage{mathrsfs} \usepackage[colorinlistoftodos]{todonotes} \makeatletter \providecommand\@dotsep{5} \def\listtodoname{List of Todos} \def\listoftodos{\@starttoc{tdo}\listtodoname} \makeatother \allowdisplaybreaks \newcommand{\e}{\varepsilon} \newcommand{\eps}{\varepsilon} \newcommand{\C}{\mathbb{C}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Rn}{{\mathbb{R}^n}} \newcommand{\RN}{{\mathbb{R}^N}} \newcommand{\RD}{{\mathbb{R}^2}} \newcommand{\RT}{{\mathbb{R}^3}} \newcommand{\RP}{{\mathbb{R}^N_+}} \newcommand{\RR}{{\R^+ \times \R}} \newcommand{\de}{\partial} \newcommand{\weakto}{\rightharpoonup} \DeclareMathOperator{\essinf}{ess\, inf} \DeclareMathOperator{\cat}{cat} \DeclareMathOperator{\dv}{div} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\meas}{meas} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\Span}{span} \DeclareMathOperator{\Id}{Id} \renewcommand{\le}{\leslant} \renewcommand{\ge}{\geslant} \renewcommand{\a }{\alpha } \newcommand{\ab }{\bar{\alpha}} \renewcommand{\b }{\beta } \newcommand{\bb }{\bar{\beta}} \renewcommand{\d }{\delta } \newcommand{\vfi}{\varphi} \newcommand{\g }{\gamma } \newcommand{\gb }{\gamma} \renewcommand{\l }{\lambda} \renewcommand{\ln }{\lambda_n} \newcommand{\n }{\nabla } \newcommand{\s }{\sigma } \renewcommand{\t}{\theta} \renewcommand{\O}{\Omega} \renewcommand{\OE}{\Omega_\varepsilon} \newcommand{\G}{\Gamma} \newcommand{\GT}{\tilde \Gamma} \renewcommand{\S}{\Sigma} \renewcommand{\L}{\Lambda} \newcommand{\Ab}{{\bar A}} \newcommand{\Bb}{{\bar B}} \newcommand{\Cb}{{\bar C}} \newcommand{\A}{{\cal A}} \newcommand{\F}{{\cal F}} \newcommand{\X}{\mathcal{X}} \newcommand{\Il}{I_{\l}} \newcommand{\Iln}{I_{\l_n}} \newcommand{\Itq}{I^T_{q}} \newcommand{\Iql}{I_{q,\l_n}} \newcommand{\Iq}{I_{q}} \newcommand{\cl}{{\cal L}} \newcommand{\calh}{\mathcal{H}} \newcommand{\ch}{\mathcal{H}} \newcommand{\chr}{\mathcal{H}_r^{2,p}} \renewcommand{\H}{H^1(\RD)} \newcommand{\HV}{H^1_V(\RD)} \newcommand{\Hr}{H^1_r(\RD)} \newcommand{\HT}{H^1(\RT)} \newcommand{\Ha}{\mathcal H} \newcommand{\Har}{\mathcal H_r} \newcommand{\HTr}{H^1_r(\RT)} \renewcommand{\P}{{\cal P}} \newcommand{\Ne}{\mathcal{N}} \newcommand{\E}{\mathcal{E}} \newcommand{\I}{\mathcal{I}} \newcommand{\T}{\mathcal{T}} \newcommand{\M}{\mathcal{M}} \renewcommand{\S}{\mathcal{S}} \newcommand{\N}{\mathbb{N}} \newcommand{\Iinf}{I^*_{\infty}} \renewcommand{\C}{\mathbb{C}} \renewcommand{\o}{\omega} \newcommand{\re}{\operatorname{Re}} \newcommand{\tmstrong}[1]{\textbf{#1}} \newcommand{\tmem}[1]{{\em #1\/}} \newcommand{\tmop}[1]{\operatorname{#1}} \newcommand{\bw}{\textbf{w}} \newcommand{\dz}{\dot{z}} \newcommand{\ut}{\tilde u} \newcommand{\vt}{\tilde v} \newcommand{\tf}{\tilde{\phi}} \newcommand{\vb }{\bar{v}} \newcommand{\wv}{\widetilde v} \newcommand{\wu}{\widetilde u} \newcommand{\wz}{\widetilde z} \newcommand{\wZ}{\widetilde Z} \newcommand{\ws}{\widetilde s} \newcommand{\D }{{\mathcal D}^{1,2}(\RT)} \newcommand{\Dr }{{\mathcal D}^{1,2}_r(\RT)} \newcommand{\inb }{\int_{B_R}} \newcommand{\ird }{\int_{\RD}} \newcommand{\irn }{\int_{\RN}} \newcommand{\irt }{\int_{\RT}} \newcommand{\iRc}{\int_{|x|>R}} \newcommand{\iR}{\int_{|x|\le 
R}} \newcommand{\ikc}{\int_{|x|>k}} \newcommand{\ik}{\int_{|x|\le k}} \def\bbm[#1]{\mbox{\boldmath $#1$}} \newcommand{\beq }{\begin{equation}} \newcommand{\eeq }{\end{equation}} \newcommand{\SU}{\sum_{i=1}^k} \newcommand{\Ce}{\mathcal{C}_\eps} \renewcommand{\le}{\leq} \renewcommand{\ge}{\geq} \newcommand{\dis}{\displaystyle} \newcommand{\ir}{\int_{-\infty}^{+\infty}} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\orange}[1]{{\color{orange}#1}} \newcommand{\blue}[1]{{\color{blue}#1}} \newcommand{\torange}[1]{\todo[inline]{#1}} \newcommand{\tcyan}[1]{\todo[inline,color=cyan]{#1}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{counterexample}[theorem]{Counterexample} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \allowdisplaybreaks \title[Embedding of weighted Sobolev spaces and applications]{On the embedding of weighted Sobolev spaces \\with applications to a planar\\ nonlinear Schr\"{o}dinger equation} \author[A. Azzollini]{Antonio Azzollini} \address{A. Azzollini \newline\indent Dipartimento di Matematica, Informatica ed Economia, \newline\indent Universit\`a degli Studi della Basilicata, \newline\indent Via dell'Ateneo Lucano 10, 85100 Potenza, Italy} \email{[email protected]} \author[A. Pomponio]{Alessio Pomponio} \address{A. Pomponio \newline\indent Dipartimento di Meccanica, Matematica e Management,\newline \indent Politecnico di Bari \newline\indent Via Orabona 4, 70125 Bari, Italy} \email{[email protected]} \author[S. Secchi]{Simone Secchi} \address{S. Secchi \newline\indent Dipartimento di Matematica e Applicazioni \newline\indent Università degli Studi di Milano - Bicocca \newline\indent Via Roberto Cozzi 55, 20125 Milano, Italy} \email{[email protected]} \thanks{A.A. and A.P. are partially supported by INdAM - GNAMPA Project 2024 ``Metodi variazionali e topologici per alcune equazioni di Schrodinger nonlineari" CUP E53C23001670001. A.P. is partially financed by European Union - Next Generation EU - PRIN 2022 PNRR ``P2022YFAJH Linear and Nonlinear PDE's: New directions and Applications". S.S. is partially supported by INdAM - GNAMPA Project 2024 ``Aspetti geometrici e analitici di alcuni problemi locali e non-locali in mancanza di compattezza'' CUP E53C23001670001} \subjclass[2020]{35J20, 35J60, 46E35} \keywords{weighted Sobolev spaces, embedding's properties, nonlinear Schr\"odinger equation} \begin{document} \begin{abstract} In this paper we study the embedding's properties for the weighted Sobolev space $H^1_V(\RN)$ into the Lebesgue weighted space $L^\tau_W(\RN)$. Here $V$ and $W$ are diverging weight functions. The different behaviour of $V$ with respect to $W$ at infinity plays a crucial role. Particular attention is paid to the case $V=W$. This situation is very delicate since it depends strongly on the dimension and, in particular, $N=2$ is somewhat a limit case. As an application, an existence result for a planar nonlinear Schr\"odinger equation in presence of coercive potentials is provided. \end{abstract} \maketitle \section{Introduction} This note is devoted to the description of some embedding theorems for weighted Sobolev spaces in $\mathbb{R}^N$ for a class of diverging weight functions. 
More precisely, we consider two continuous potentials $V\colon \RN\to \R$ and $W \colon \mathbb{R}^N \to \mathbb{R}$ bounded below by a positive constant and diverging at infinity. We investigate the continuous and the compact embeddings of $H^1_V(\RN)$ into $L^\tau_W(\RN)$, where $H_V^1 (\mathbb{R}^N)$ is the weighted Sobolev space defined as the completion of $C_0^\infty(\R^N)$ with respect to the norm
\begin{displaymath} \Vert u \Vert_V := \left( \int_{\mathbb{R}^N} \left( \vert \nabla u \vert^2 + V(x) \vert u \vert^2 \right) \, dx \right)^{1/2}, \end{displaymath}
and $L_W^\tau (\mathbb{R}^N)$ is the weighted Lebesgue space
\begin{displaymath} L^\tau_W(\RN):=\left\{u\in \mathscr{M}(\RN) \mid \int_{\mathbb{R}^N} W(x) \vert u \vert^{\tau} \, dx<+\infty\right\}, \end{displaymath}
endowed with the norm
\[ \|u\|_{W,\tau}:=\left(\int_{\mathbb{R}^N} W(x) \vert u \vert^{\tau} \, dx\right)^{\frac 1\tau}. \]
The symbol~$\mathscr{M}(\mathbb{R}^N)$ denotes the set of all measurable functions from $\mathbb{R}^N$ to $\mathbb{R}$. The embedding properties between weighted Sobolev spaces, possibly also with a weight in the gradient term of the norm, and weighted Lebesgue spaces have been extensively investigated: we refer the reader to the monograph \cite{opic} and the references therein. In this note we focus on the case of two coercive potentials, with a weight equal to one in the gradient term of the norm, a situation for which we were unable to find useful results in the literature. Our aim is, therefore, to analyse this situation and apply our results to the Schr\"odinger equation. \bigskip Roughly speaking, whenever $W$ \emph{dominates} $V$ at infinity, in general we cannot expect any embedding between $H_V^1 (\mathbb{R}^N)$ and $L_W^\tau (\mathbb{R}^N)$, so we need to restrict our attention either to the case of two potentials with a similar behaviour at infinity or to that in which $V$ \emph{prevails} over $W$ at infinity. In particular, we first consider suitable conditions on $V$ and $W$ which ensure that
\[ \lim_{|x|\to +\infty}\frac{V(x)}{W(x)}=+\infty. \]
This case is easier and, in Theorem \ref{thvwa}, we prove that $H_V^1 (\mathbb{R}^N)$ is compactly embedded into $L_W^\tau (\mathbb{R}^N)$, for suitable $\tau$. On the other hand, the case in which
\begin{equation}\label{eq:chall} \liminf_{|x|\to +\infty}\frac{V(x)}{W(x)}\in\R \end{equation}
appears to be challenging, even in the simplest case $V=W$. To the best of our knowledge, the only result in the literature is contained in \cite{ADP}. In \cite[Theorem 2.2]{ADP}, in order to deal with a planar Schr\"odinger equation with competing logarithmic self-interactions, the authors prove that, under a suitable relation between $V$ and its gradient, $H_V^1 (\mathbb{R}^2)$ is continuously embedded into $L_W^\tau (\mathbb{R}^2)$, for any $\tau \ge 2$. A precise statement appears in Theorem \ref{th:emb}. We point out that the analysis performed in \cite{ADP} is restricted to the continuous embedding and to dimension $N=2$.
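To fix ideas, a model potential fulfilling the gradient condition of Theorem \ref{th:emb}, namely $|\n V|\le C V^{\frac 32}$, is $V(x)=1+|x|^{\a}$ with $\a\ge 1$: indeed $|\n V(x)|=\a|x|^{\a-1}\le \a\, V^{\frac 32}(x)$ for a.e. $x\in\RD$, since $|x|^{\a-1}\le 1\le V^{\frac 32}(x)$ when $|x|\le 1$, while $|x|^{\a-1}\le |x|^{\frac{3\a}2}\le V^{\frac 32}(x)$ when $|x|\ge 1$.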
\\ One of the main aims of the present paper is to broaden the investigation initiated in \cite{ADP}, studying the influence of the dimension on both the continuous and the compact embedding of $H_V^1 (\mathbb{R}^N)$ into $L_V^\tau (\mathbb{R}^N)$.\\ Taking into account \cite[Theorem 2.2]{ADP}, by Theorem \ref{Vnon}, Proposition \ref{pr:general} and Theorem \ref{th:main1}, we conclude that, for a considerably large class of positive and diverging potentials, we have:
\begin{itemize}
\item if $N\ge 3$, $H_V^1 (\mathbb{R}^N)$ is \emph{not} contained in $L_V^\tau (\mathbb{R}^N)$, for any $\tau > 2$;
\item if $N=2$, $H_V^1 (\mathbb{R}^2)$ is continuously but \emph{not} compactly embedded into $L_V^\tau (\mathbb{R}^2)$, for any $\tau > 2$;
\item if $N=1$, $H_V^1 (\mathbb{R})$ is compactly embedded into $L_V^\tau (\mathbb{R})$, for any $\tau > 2$, and into $L^\infty(\R)$.
\end{itemize}
Roughly speaking, the presence of a coercive potential $V$ \emph{improves} the usual Sobolev embedding theorem in dimension $N=1$, leaves it \emph{unchanged} in dimension $N=2$, while it basically \emph{destroys} the embedding in dimension $N \geq 3$. We can say that $N = 2$ is a sort of limit case for the embedding properties of $H_V^1 (\mathbb{R}^N)$ into $L_V^\tau (\mathbb{R}^N)$. \medskip In addition, we analyse the case in which $V$ and $W$ are both radially symmetric functions. Denoting by $H^1_{V,{\rm rad}}(\RN)$ and $L^\tau_{W,{\rm rad}}(\RN)$, respectively, the subsets of radial functions contained in $H^1_{V}(\RN)$ and in $L^\tau_{W}(\RN)$, by means of the Strauss Radial Lemma (see \cite{strauss}) we show in Theorem \ref{thrad} that $H^1_{V,{\rm rad}}(\RN)$ is compactly embedded into $L^\tau_{W,{\rm rad}}(\RN)$, for suitable $\tau$ and for a large class of potentials. In particular, under certain conditions, $W$ could also dominate $V$ at infinity. \bigskip In light of the considerations developed in the first part, in the second part of the paper we focus our attention on the semilinear Schr\"odinger equation with external potentials. One can find many papers about the problem
\begin{equation} \label{eq:2} -\Delta u + V(x)u = W(x) \vert u \vert^{p-1}u \quad \hbox{in $\mathbb{R}^N$}, \end{equation}
where $V$ and $W$ are continuous real-valued functions. If we look for solutions $u \in H^1(\mathbb{R}^N)$ of \eqref{eq:2} by means of variational methods, the lack of compactness of the embedding $H^1(\mathbb{R}^N) \subset L^p(\mathbb{R}^N)$ for $p<2N/(N-2)$ is typically overcome by adding suitable assumptions on $V$ and $W$. Rabinowitz considered in \cite{rabinowitz} the case $W\equiv 1$ with the requirement
\begin{displaymath} 0<\inf_{x \in \mathbb{R}^N} V(x) < \liminf_{\vert x \vert \to +\infty} V(x). \end{displaymath}
The case of \emph{competing} potentials $V$ and $W$ is harder to deal with, and most existence results require $V$ and $W$ to behave essentially in \emph{opposite} ways. Roughly speaking, by P.L. Lions' concentration-compactness principle, sequences of ``almost critical'' points of the Euler functional associated to \eqref{eq:2} fail to be compact because they slide off to infinity. Compactness may be restored by assuming that $V$ and $W$ converge to some finite limits at infinity in such a way that this behaviour is not convenient for minimizing sequences. See, for example, \cite{CP} and the references therein.
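For instance, the model choice $V(x)=1+|x|^2$ and $W\equiv 1$ falls within the framework of \cite{rabinowitz}, since $0<\inf_{\RN}V=1<\liminf_{|x|\to+\infty}V(x)=+\infty$; in this case the coercivity of $V$ restores the compactness of the relevant embeddings (compare with Theorem \ref{thvwa} below, applied with a bounded weight $W$).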
The borderline case $V \equiv W$ under the assumption that $V$ diverges at infinity turns out to be very hard to handle: we do not have any problem at infinity to use as a barrier, and we do not have suitable compact embeddings in order to ensure the convergence of minimizing or Palais-Smale sequences. The construction of a solution $u \in H^1(\R^N)$ to \begin{equation*} -\Delta u + V(x)u = V(x) \vert u \vert^{p-1}u \end{equation*} when $\lim_{\vert x \vert \to +\infty} V(x)=+\infty$ remains essentially open. However, the continuous embedding of $H_V^1 (\mathbb{R}^2)$ into $L_V^{p+1} (\mathbb{R}^2)$ obtained in the first part of the paper suggests an \emph{ansatz} to overcome such difficulties. Then we consider in \eqref{eq:2} couples of potentials differing for a constant reducing the equation to the form \begin{equation}\label{eq:11} -\Delta u +( V(x)-\l) u=V(x)|u|^{p-1}u,\quad\hbox{ in }\RD, \end{equation} for some $\l\in \R$, whose solutions could be found as critical points of the functional \begin{displaymath} E(u) = \frac{1}{2} \int_{\mathbb{R}^2} \left( \vert \nabla u \vert^2 + \left( V(x) - \lambda \right) \vert u \vert^2 \right)\, dx - \frac{1}{p+1} \int_{\mathbb{R}^2} V(x) \vert u \vert^{p+1} \, dx. \end{displaymath} Even if the lack of the compact embedding of $\HV$ into $L^{p+1}_V(\RD)$ makes the variational approach to the problem not trivial, in Theorem \ref{th:main} we are able to provide an existence result for \eqref{eq:11} by reducing the equation to an eigenvalue problem for the nonlinear operator \begin{displaymath} u\in H_V^1 (\mathbb{R}^2)\mapsto-\Delta u + V(x)(1-|u|^{p-1})u\in (H_V^1 (\mathbb{R}^2))'. \end{displaymath} \medskip The paper is organized as follows. In Section \ref{se:func} we study the embedding properties of $H^1_V(\RN)$ into $L^\tau_V(\RN)$ under various behaviour at infinity of the potentials $V$ and $W$. In Section \ref{se:exi} we deal with the nonlinear eigenvalue problem \eqref{eq:11}. \section{Embedding properties of some weighted Sobolev spaces}\label{se:func} We introduce a practical shorthand to describe the class of potential functions we will be working with in this paper. \begin{definition} We will say that a function $V \colon \RN \to \R$ belongs to $\mathscr{C}^*(\RN)$ if $V$ is continuous and $\inf_{x \in \RN} V(x) >0$. \end{definition} In this section we study the embeddings of $H^1_V(\RN)$ into $L^\tau_W(\RN)$ assuming that $V$ is unbounded and both $V$ and $W$ belong to $\mathscr{C}^*(\RN)$. \subsection{The general setting}\label{sec:general} \ As a first step we consider the case of two potentials in $\mathscr{C}^*(\RN)$ with different behaviour at infinity. We will see that if $V$ {\em dominates} $W$ at infinity, namely under a condition guaranteeing $V(x)/W(x) \to +\infty$ as $\vert x \vert \to +\infty$, then $H^1_V(\RN)$ is compactly embedded into $L^\tau_W(\RN)$, for suitable $\tau$. More precisely, the following holds. \begin{theorem}\label{thvwa} Let $N \geq 1$, $V \in \mathscr{C}^*$, $W \in \mathscr{C}^*$, and suppose that there exist a number~$\a\in (0,1)$ and two positive constants $c$ and $C$ such that \begin{equation}\label{VWa}\tag{$\mathcal{VW}$} 0<c\le W(x)\le C \big(V(x)\big)^\a \quad \text{ for all } x\in \RN. \end{equation} If \( \lim_{\vert x \vert \to +\infty}V(x)=+\infty \), then $H^1_V(\RN)$ is compactly embedded in $L^\tau_W(\RN)$ for \begin{equation} \label{tau} \begin{cases} \tau \ge 2 & \text{if }N=1,2, \\ 2\le \tau < \dfrac{2N-4\a}{N-2} & \text{if }N\ge 3. 
\end{cases} \end{equation} The embedding is only continuous if $N\ge 3$ and $\tau = \frac{2N-4\a}{N-2}$. \end{theorem} \begin{proof} Let $u\in H^1_V(\RN)$ and take $\tau$ as in \eqref{tau}. We have
\begin{align*} \|u\|_{W,\tau}^\tau&=\irn W(x)|u|^\tau\, dx =\irn\frac{W(x)}{\big(V(x)\big)^\a}\big(V(x)\big)^\a|u|^{2\a}|u|^{\tau-2\a}\, dx \\ &\le C \left(\irn\left(\big(V(x)\big)^\a|u|^{2\a}\right)^\frac{1}{\a}\, dx\right)^\a \left(\irn|u|^\frac{\tau-2\a}{1-\a}\, dx\right)^{1-\a} \\ &= C \left(\irn V(x)|u|^{2}\, dx\right)^\a \left(\irn|u|^\frac{\tau-2\a}{1-\a}\, dx\right)^{1-\a} \\ &= C\|u\|_{V,2}^{2\a}\|u\|_{\frac{\tau-2\a}{1-\a}}^{\tau-2\a} \\ &\le C\|u\|_V^{2\a}\|u\|_{\frac{\tau-2\a}{1-\a}}^{\tau-2\a}. \end{align*}
Since $V$ is coercive and the exponent $\frac{\tau-2\a}{1-\a}$ belongs to the subcritical range whenever $\tau$ is as in \eqref{tau}, the space $H^1_V(\RN)$ is compactly embedded in $L^{\frac{\tau-2\a}{1-\a}}(\RN)$ (merely continuously in the limit case $N\ge 3$, $\tau=\frac{2N-4\a}{N-2}$), and the conclusion follows. \end{proof} We remark that a related result can be found in \cite{VSX}. In the previous theorem $W$ may or may not diverge at infinity; in any case, the situation $V=W$ is excluded. This is actually the most intriguing and difficult case, as the following result suggests. \begin{theorem}\label{Vnon} Let $N\ge 3$ and $V \in \mathscr{C}^*(\RN)$. Suppose that there exist positive constants $c_1$, $c_2$ and $m$, and a sequence $\{x_n\}_n$ in $\RN$ such that $V(x_n)\to +\infty$ and
\begin{equation*} c_1V(x_n)\le V(x)\le c_2V(x_n), \quad\text{ for all } \ x\in B_{\frac{m}{\sqrt{V(x_n)}}}(x_n). \end{equation*}
Then $H_V^1(\RN)\setminus L_V^\tau(\RN)$ is non-empty for all $\tau>2$. \end{theorem} \begin{proof} In the following, for $n\ge 1$, we denote $\mathcal{B}'_n:=B_{\frac{m}{2\sqrt{V(x_n)}}}(x_n)$, $\mathcal{B}''_n:=B_{\frac{m}{\sqrt{V(x_n)}}}(x_n)$ and $\mathcal{A}_n:=\mathcal{B}_n''\setminus \mathcal{B}_{n}'$. Since $V$ is continuous and $V(x_n)\to+\infty$, the sequence $\{x_n\}_n$ is unbounded, so we may assume that $\vert x_n \vert \to +\infty$. For any $n\ge 1$, let $u_n\in C^1\big(\RN,\big[0,\big(V(x_n)\big)^{\frac{N-2}4}\big]\big)$ be such that
\[ u_n(x):= \begin{cases} \big(V(x_n)\big)^{\frac{N-2}4} &\text{for }x\in \mathcal{B}_{n}', \\ 0 &\text{for }x\in \RN\setminus \mathcal{B}_{n}'', \end{cases} \]
and with $|\n u_n(x)|\le \frac 3m\big(V(x_n)\big)^{\frac N4}$, for $x\in\mathcal{A}_{n}$. \\ We have that
\begin{align*} \irn |\n u_n|^2\, dx &=\int_{\mathcal{A}_n}|\n u_n|^2\,dx \le c \big(V(x_n)\big)^{\frac N2} \meas(\mathcal{A}_n)\le c, \\ \irn V(x)|u_n|^2\,dx &\le \int_{\mathcal{B}_n''}V(x)u_n^2\,dx \le c_2 V(x_n)\big(V(x_n)\big)^{\frac{N-2}2} \meas(\mathcal{B}_n'')\le c, \end{align*}
and so $\{u_n\}_n$ is a bounded sequence in $H_V^1(\RN)$, while, for any $\tau>2$,
\[ \irn V(x)|u_n|^\tau\,dx\ge c_1V(x_n)\big(V(x_n)\big)^{\frac{(N-2)\tau}4} \meas(\mathcal{B}_n')\to +\infty, \qquad \text{as }n \to +\infty, \]
namely $\{u_n\}_n$ is unbounded in $L_V^\tau(\RN)$.\\ Now observe that, up to a subsequence, we can assume that the balls $\mathcal B''_n$ are pairwise disjoint, and that $\big(V(x_n)\big)^{\frac{(N-2)(\tau-2)}{4\tau}}\ge 2^n.$ If we define
\begin{displaymath} v_n(x):=\frac{u_n}{\big(V(x_n)\big)^{\frac{(N-2)(\tau-2)}{4\tau}}}, \end{displaymath}
there exist positive constants $C_1$ and $C_2$ such that, for all $n\ge 1$,
\begin{displaymath} \|v_n\|_V\le \frac{C_1}{V(x_n)^{\frac{(N-2)(\tau-2)}{4\tau}}}\le\frac{C_1}{2^n} \end{displaymath}
and $\|v_n\|_{V,\tau}\ge C_2$. The function $w=\sum_{n=1}^{\infty}v_n$ then belongs to $H_V^1(\RN)\setminus L_V^\tau(\RN)$: the series converges in $H_V^1(\RN)$ since $\sum_n\|v_n\|_V<+\infty$, while, the supports of the $v_n$ being pairwise disjoint, $\|w\|_{V,\tau}^\tau=\sum_{n=1}^{\infty}\|v_n\|_{V,\tau}^\tau=+\infty$. \end{proof} \begin{remark} Observe that any positive uniformly continuous and coercive potential satisfies all the conditions of Theorem \ref{Vnon}.
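For instance, $V(x)=1+|x|$ is positive, coercive and Lipschitz continuous, hence uniformly continuous; therefore, for $N\ge 3$, Theorem \ref{Vnon} applies and $H_V^1(\RN)\setminus L_V^\tau(\RN)$ is non-empty for every $\tau>2$.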
\end{remark} It seems to be more challenging to prove a \emph{positive} statement, that is, a continuous embedding result under reasonable assumptions on $V$. To the best of our knowledge this situation has been studied in \cite{ADP} only in dimension \(N=2\). More precisely, the following continuous embedding theorem is proved in \cite{ADP}. \begin{theorem}[\cite{ADP}]\label{th:emb} Let $V \in \mathscr{C}^*(\R^2)$. If the distributional derivatives of $V$ are functions satisfying
\begin{equation}\label{gradv}\tag{$\mathcal{V}$} |\n V(x)|\le C V^{\frac 32}(x)\quad \hbox{ for a.e. $x \in \RD$} \end{equation}
for some constant~$C>0$, then the space $H^1_V(\RD)$ is continuously embedded into $L^\tau_V(\RD)$ for all $\tau\ge 2$. \end{theorem} \begin{remark}\label{re:VW} If $V$ and $W$ are in $\mathscr{C}^*(\RD)$, $V$ satisfies \eqref{gradv} and $W(x)\leq C V(x)$ for all $x\in\RD$ and for a suitable positive constant $C$, then $L_V^\tau(\RD)$ is continuously embedded into $L_W^\tau(\RD)$. So $H^1_V(\RD)$ is continuously embedded into $L^\tau_W(\RD)$ for all $\tau\ge 2$. \end{remark} It seems very hard to isolate, in the existing literature, precise conditions which ensure the continuous embedding of general weighted Sobolev spaces into weighted Lebesgue spaces without restrictive assumptions. An interesting statement appears in \cite[Theorem 2.4]{Avantaggiati}. Adapting the notation therein to ours, the author proves that if $\Omega$ satisfies the cone property in $\mathbb{R}^N$, then
\begin{displaymath} W^{1,p_0}_V(\Omega) \subset L^q_W(\Omega) \end{displaymath}
continuously, provided that $1\leq p_0 <N$, $W \in C^1(\Omega) \cap L^\infty(\Omega)$ and $V$ are positive functions, and
\begin{displaymath} \left\vert \nabla W \right\vert \leq C V^{1/p_0}. \end{displaymath}
The assumptions of this result exclude the case $p_0=N=2$. However, it shows that assumptions similar to our \eqref{gradv} have already been considered in the literature. Roughly speaking, assumption \eqref{gradv} prevents the potential $V$ from \emph{oscillating too much} at infinity. For example, the potential
\begin{displaymath} V(x)=|x|^2(\sin (e^{|x|})+2)+1 \end{displaymath}
does not satisfy \eqref{gradv}. \bigskip Assuming in the sequel the validity of the continuous embedding $H^1_V(\RD) \hookrightarrow L^\tau_V(\RD)$, we now investigate whether such an embedding is also compact. As a first step we provide a general condition which ensures the failure of the compact embedding; then we study some more specific cases. \begin{proposition}\label{pr:general} Let $V\in \mathscr{C}^*(\RD)$ be such that $\HV\hookrightarrow L^\tau_V(\RD)$ for some $\tau>2$. If $V$ satisfies
\begin{enumerate}[label=$(\mathcal{V}_0)$,ref=$\mathcal{V}_0$] \setcounter{enumi}{-1} \item\label{ipotesiV} there exist $c_1>0$, $c_2>0$, $m>0$ and a sequence $\{x_n\}_n \subset\RD$ with $|x_n| \rightarrow +\infty$ and such that, for any $n\ge 1$, $$c_1 V(x_n)\le V(x)\le c_2 V(x_n), \quad\hbox{ in }B_{\frac {m}{\sqrt{V(x_n)}}}(x_n),$$ \end{enumerate}
then $\HV$ is not compactly embedded into $L^\tau_V(\RD)$.
\end{proposition} \begin{proof} In the following, for $n\ge 1$, we denote \begin{displaymath} \mathcal{B}'_n:=B_{\frac {m}{2\sqrt{V(x_n)}}}(x_n), \quad \mathcal{B}''_n:=B_{\frac {m}{\sqrt{V(x_n)}}}(x_n), \quad \mathcal{A}_n:=\mathcal{B}_n''\setminus \mathcal{B}_{n}' \end{displaymath} For any $n\ge1$, we consider a function $u_n\in C^1(\RD,[0,1])$ such that \(u_n=1\) in \(\mathcal{B}_{n}'\), \(u_n=0\) in \(\RD \setminus \mathcal{B}_n''\), and \(|\n u_n|\le (3/m) \sqrt{V(x_n)} \) in \(\mathcal{A}_{n}\). By assumption, we have that \begin{align*} &\ird |\n u_n|^2\, dx =\int_{\mathcal{A}_n}|\n u_n|^2\,dx \le \frac 9{m^2}V(x_n) \cdot \meas(\mathcal{A}_n)\simeq c>0, \\ &\ird V(x)|u_n|^2\,dx \le \int_{\mathcal{B}_n''}V(x)\,dx \le\sup_{\mathcal{B}_n''}V\cdot \meas(\mathcal{B}_n'')\simeq c''>0, \\ &\ird V(x)|u_n|^\tau\,dx \ge \int_{\mathcal{B}_n'}V(x)\,dx \ge\inf_{\mathcal{B}_n'}V \cdot \meas(\mathcal{B}_n')\simeq c'>0. \end{align*} We conclude that $\{u_n\}_n$ is a bounded sequence in $\HV$ such that the support of $u_n$ is \emph{travelling} at infinity and then $u_n\weakto 0$ in $\HV$ but $\|u_n\|_{V, \tau}$ is bounded away from zero. This shows that $\HV$ is not compactly embedded into $L^\tau_V(\RD)$. \end{proof} Here we present two specific cases where $V$ satisfies \eqref{ipotesiV}. \begin{corollary} Let $V\in \mathscr{C}^*(\RD)$. If $\HV\hookrightarrow L^\tau_V(\RD)$ for some $\tau>2$ and $V$ satisfies one of the following: \begin{enumerate}[label=\arabic{*}.,ref=\arabic{*}] \setcounter{enumi}{0} \item\label{caso1} $V$ is uniformly continuous; \item\label{caso2} there exists $m>0$ such that $V$ satisfies the following asymptotic locally Lipschitz-type property: \begin{equation}\label{supershit}\tag{$\mathcal{V}_1$} \sup_{\substack{y \neq x \\ |y-x|\le\frac {m}{\sqrt{V(y)}}}} \frac{|V(y)-V(x)|}{|y-x|}=o\big(V^{\frac 32}(y)\big)\quad \hbox{ for } |y|\to +\infty; \end{equation} \end{enumerate} then the embedding of $\HV$ in $L^\tau_V(\RD)$ is not compact. \end{corollary} \begin{proof} {\sc Case }\ref{caso1}. Let $\a>0$ be such that $V\ge \a$ in $\RD$. Since $V$ is uniformly continuous, there exists $\d>0$ such that, if $|x-y|<\d$, we have $|V(x)-V(y)|< \a/2$. Let $\{x_n\}_n$ be a divergent sequence. We set $m:=\d \sqrt{\a}$. For any $n\ge 1$, if $x\in B_{\frac {m}{\sqrt{V(x_n)}}}(x_n)$, we have $|x-x_n|<\d$ and so $$V(x)\le V(x_n)+|V(x)-V(x_n)|\le V(x_n)+\frac \a 2\le \frac 32 V(x_n)$$ and $$\frac 12 V(x_n)\le V(x_n)-\frac \a 2\le V(x_n)- |V(x)-V(x_n)|\le V(x)$$ and then property \eqref{ipotesiV} holds. \vskip .2cm {\sc Case }\ref{caso2}. Since \eqref{supershit} holds, there exists $y_0\in \RD$ with $|y_0|$ large enough such that, for any $|y|>|y_0|$ and $x\in B_{\frac {m}{\sqrt{V(y)}}}(y)$, $$|V(y)-V(x)|\le m' \big(V(y)\big)^{\frac 32}|x-y|$$ where $m'\in (0,1/m)$. Take any divergent sequence $\{x_n\}_n$ such that $|x_n|>|y_0|$, for all $n\ge 1.$ For any $x\in B_{\frac {m}{\sqrt{V(x_n)}}}(x_n)$, we have \begin{align*} V(x)&\le V(x_n)+|V(x)-V(x_n)|\le V(x_n) +m' \big(V(x_n)\big)^{\frac 32}|x-x_n|\le (1+mm') V(x_n)\\ V(x)&\ge V(x_n)-|V(x)-V(x_n)|\ge V(x_n) -m' \big(V(x_n)\big)^{\frac 32}|x-x_n|\ge (1-mm') V(x_n) \end{align*} and then property \eqref{ipotesiV} holds. \end{proof} \begin{remark} Even if assumption \eqref{supershit} seems to be quite technical, there exists a large class of potentials satisfying it. 
In particular observe that, if we assume that $V$ is coercive and $V\in C^1(\RD\setminus B_R)$, for some $R>0$, we just have to verify the simpler condition \begin{equation}\label{megashit}\tag{$\mathcal{V}_2$} \exists\, \eps>0 \hbox{ such that }\sup_{z\in B_\eps (y)}|\n V(z)|=o\big(V^{\frac 32}(y)\big), \quad \hbox{ for } |y|\to +\infty. \end{equation} Indeed, by coercivity, for an arbitrary $m>0$ there exists $y_0\in \RD$ with $|y_0|$ large enough such that for any $|y|>|y_0|$ we have $\frac m{\sqrt{V(y)}}<\eps$. Assuming \eqref{megashit}, as a consequence of multidimensional mean value theorem we obtain \eqref{supershit} as follows \begin{equation*} \sup_{\substack{y \neq x \\ |y-x|\le\frac {m}{\sqrt{V(y)}}}} \left|\frac{V(y)-V(x)}{|y-x|}\right|\le \sup_{z\in B_\eps (y)}|\n V(z)|=o\big(V^{\frac 32}(y)\big),\quad \hbox{ for } |y|\to +\infty. \end{equation*} Condition \eqref{megashit} is satisfied for example by all potentials of the type $|x|^\a$ with $\a>0$ but also by potentials growing exponentially fast likes $\exp \left( \vert x \vert^\alpha \right)$ with $\a>0$. \end{remark} We conclude with another counterexample where we assume neither regularity nor coercivity assumptions on the potential. \begin{proposition}\label{pr:noncomp} If $V\colon \RD\to \R$ is bounded below by a positive constant and is a measurable function such that $V(x)=n^2$, for all $n\ge 1$ and for all $x\in \RD$ such that $n-\frac 1n \le |x|\le n+\frac 1n$, then $\HV$ is not compactly embedded into $L^\tau_V(\RD)$, for $\tau>2$. \end{proposition} \begin{proof} In the following, for $n\ge 1$, we denote $\mathcal{B}'_n:=B_{\frac 1{2n}}(n,0)$, $\mathcal{B}''_n:=B_{\frac 1n}(n,0)$ and $\mathcal{A}_n:=\mathcal{B}_n''\setminus \mathcal{B}_{n}'$. For any $n\ge1$, pick a function $u_n\in C(\RD,[0,1])$ such that \[ u_n(x):= \begin{cases} 1 &\text{for }x\in \mathcal{B}_{n}', \\ 0 &\text{for }x\in \RD\setminus \mathcal{B}_{n}'', \end{cases} \] and with $|\n u_n(x)|\le 2n$, for $x\in\mathcal{A}_{n}$. \\ We have that, for any $\tau>2$, \begin{align*} \ird |\n u_n|^2\, dx &=\int_{\mathcal{A}_n}|\n u_n|^2\,dx =4n^2 \meas(\mathcal{A}_n)\simeq c>0, \\ \ird V(x)|u_n|^2\,dx &\le \int_{\mathcal{B}_n''}V(x)\,dx =n^2 \meas(\mathcal{B}_n'')\simeq c>0, \\ \ird V(x)|u_n|^\tau\,dx &\ge \int_{\mathcal{B}_n'}V(x)\,dx =n^2 \meas(\mathcal{B}_n')\simeq c'>0. \end{align*} We conclude as in Proposition \ref{pr:general}. \end{proof} Finally, we deal with the last remaining case, namely the one dimensional case. \begin{theorem}\label{th:main1} Let $V \in \mathscr{C}^*(\R)$ be coercive. If the distributional derivative of $V$ is a function satisfying \begin{equation}\label{gradv1}\tag{$\mathcal{V'}$} |V'(x)|\le C V^{\frac 32}(x)\quad \hbox{ for a.e. $x \in \R$} \end{equation} for some constant~$C>0$, then the space $H^1_V(\R) $ is compactly embedded into $L^\infty(\R)$ and into $L^\tau_V(\R)$ for all $\tau>2$. \end{theorem} \begin{proof} First we prove the continuous embeddings. Let $u\in C^1_c(\R)$, namely of class $C^1$ and with compact support, and let $w=\sqrt{V}u^2$. Observe that, for any $x\in \R$ we have \begin{equation} \label{eq:4} w(x)=\int_{-\infty}^{x}w'(t)\,dt=\int_{-\infty}^{x}\left(\frac{V'(t)}{2 \sqrt{V(t)}}u^2(t)+2 \sqrt{V(t)}u(t)u'(t)\right)dt. 
\end{equation} So, for any $x\in \R$, equation~\eqref{eq:4} yields \begin{align*} u^2(x)&\le C w(x) \le \int_{\R}|w'|\, dx \le C\int_{\R}\left(\frac{|V'(x)|}{ \sqrt{V(x)}}u^2(x)+ \sqrt{V(x)}|u(x)||u'(x)|\right)dx \nonumber \\ &\le C \left(\|u\|_{V,2}^2+\|u\|_{V,2}\|u'\|_2\right) \le C \|u\|_V^2. \end{align*} This implies that \begin{equation}\label{key} \|u\|_\infty \le C \|u\|_V, \qquad\text{for all }u\in C^1_c(\R). \end{equation} The embedding of $H_V^1(\R)$ into $L^\infty(\R)$ follows by density. Let now $\tau>2$ and $u\in H_V^1(\R)$. We have \begin{equation} \label{eq:8} \|u\|_{V,\tau}^\tau \le \|u\|_\infty^{\tau-2}\|u\|_{V,2}^2 \le C\|u\|_V^2, \end{equation} and the conclusion about the continuous embeddings follows easily. \medskip Let us now show the compact embeddings. Let us go back to equation \eqref{eq:4}. Fix $\varepsilon >0$, and choose $R>0$ so large that $1/V(x)<\varepsilon^4$ for each $x \in \mathbb{R} \setminus (-R,R)$. It is known (see \cite[Théorème VIII.7]{brezis}) that $H^1(-R,R)$ is compactly embedded into $L^\infty(-R,R)$. On the other hand, if $\vert x \vert \geq R$ then \eqref{eq:4} implies \begin{equation*} \sqrt{V(x)} u^2(x) \leq C \Vert u \Vert_V^2, \end{equation*} and thus \begin{equation} \label{eq:7} \vert u(x) \vert \leq \left( \sqrt{C} \Vert u \Vert_V \right) \varepsilon. \end{equation} It now follows from \eqref{eq:7} that $H_V^1(\mathbb{R}\setminus (-R,R))$ is compactly embedded into $L^\infty(\mathbb{R} \setminus (-R,R))$. We conclude that $H_V^1(\mathbb{R})$ is compactly embedded into $L^\infty(\mathbb{R})$, and also in $L_V^\tau(\mathbb{R})$ by \eqref{eq:8}, for all $\tau>2$. \end{proof} \begin{remark}\label{re:VW1} If $V$ and $W$ are two potentials from $\mathscr{C}^*(\R)$ such that $W(x)\leq C V(x)$ for all $x\in\R$ and for a suitable positive constant $C$, then $L_V^\tau(\R)$ is continuously embedded into $L_W^\tau(\R)$. Therefore, if $V \in \mathscr{C}^*(\R)$ is coercive and satisfies \eqref{gradv1} then $H^1_V(\R) $ is compactly embedded into $L^\tau_W(\R)$ for all $\tau> 2$. \end{remark} \subsection{The radial setting} \ It is well-known that, for $N\ge 2$, the constraint of radial symmetry improves the compactness properties of Sobolev spaces, see \cite{willem}. Here we study the embedding of $H^1_{V,{\rm rad}}(\RN)$ into $L^\tau_{W,{\rm rad}}(\RN)$, where $H^1_{V,{\rm rad}}(\RN)$ and $L^\tau_{W,{\rm rad}}(\RN)$ are, respectively, the subsets of radial functions contained in $H^1_{V}(\RN)$ and $L^\tau_{W}(\RN)$. \begin{theorem}\label{thrad} Let $N\ge 2$, $V$ and $W$ be two potentials in $\mathscr{C}^*(\RN)$ such that there exist $\phi:\RN \to R_+$ vanishing at infinity, $\bar \tau>2$, and $\tilde R>0$ with \begin{equation}\label{VWx} \tag{$\mathcal{VW}_{{\rm rad}}$} W(x)\le C \phi(x) V(x)|x|^{\frac{(N-1)(\bar\tau-2)}{2}}, \quad\text{ whenever }|x|\ge \tilde R \end{equation} Then the following compact embeddings hold $$H^1_{V,{\rm rad}}(\RN)\hookrightarrow \hookrightarrow L^\tau_{W,{\rm rad}}(\RN), \qquad\text{ for all } \begin{cases} \tau \ge \bar \tau & \text{if }N=2, \\ \bar \tau\le \tau < 2^* & \text{if }N\ge 3. \end{cases}$$ \end{theorem} \begin{proof} Let $\{u_n\}_n$ be a bounded sequence of $H^1_{V,{\rm rad}}(\RN)$. There exists $u\in H^1_{V,{\rm rad}}(\RN)$ such that $u_n \weakto u$ weakly in $H^1_{V,{\rm rad}}(\RN)$. By linearity, we can assume that $u=0$. \\ Let $R>1$. Since $V$ is bounded below by a positive constant, $\{u_n\}_n$ is a bounded sequence of $H^1_{\rm rad}(\RN)$ as well. 
So, by Strauss Radial Lemma (see \cite{strauss}) and \eqref{VWx}, we have for any $R\ge\tilde R$, \begin{align*} \int_{B_R^c} W(x)|u_n|^\tau\, dx &=\int_{B_R^c} W(x)|u_n|^{\tau-2}u_n^2\, dx \\ &\le C\int_{B_R^c} \frac{W(x)}{|x|^{\frac{(N-1)(\tau-2)}{2}}}u_n^2\, dx \\ &\le C\int_{B_R^c} \frac{\phi(x) V(x)|x|^{\frac{(N-1)(\bar\tau-2)}{2}}}{|x|^{\frac{(N-1)(\tau-2)}{2}}}u_n^2\, dx \\ &\le C\sup_{B_R^c}\phi\int_{B_R^c} V(x)u_n^2\, dx \le C\sup_{B_R^c}\phi. \end{align*} Therefore, for any $\eps>0$ we can find $R>0$ sufficiently large such that \[ \int_{B_R^c} W(x)|u_n|^\tau\, dx <\frac \eps 2 \qquad \text{for all }n\ge 1. \] On the other hand, since $V$ and $W$ are locally bounded and $H^1(\RN)$ is locally compact embedded into $L^\tau(\RN)$, we have that \[ \int_{B_R} W(x)|u_n|^\tau\, dx <\frac \eps 2 \qquad \text{for all $n$ sufficiently large}, \] and we conclude that \[ \irn W(x)|u_n|^\tau\, dx \to 0 \qquad \text{as $n \to +\infty$}. \] \end{proof} \begin{remark} Observe that under condition \eqref{VWx}, $\liminf_{|x|\to +\infty}\frac{V(x)}{W(x)}$ may be $+\infty$ or any non-negative number and so also zero. In particular $V$ and $W$ are allowed to be respectively bounded and coercive provided that \begin{displaymath} \limsup_{|x|\to+\infty}\frac{W(x)}{\phi(x)|x|^{\frac{ (N-1)(\bar \tau-2)}{2}}}<+\infty. \end{displaymath} \end{remark} \begin{remark} The class $\mathscr{C}^*(\RN)$ is made of \emph{continuous} functions. This is rather natural requirement for applications to PDEs. It should be noticed, however, that our assumptions on the potential functions $V$ and $W$ may be slightly relaxed. Continuity may be replaced by measurability in most statements, but of course the assumption $V \in L^1_{\mathrm{loc}}$ is needed whenever distributional derivatives of $V$ are taken into account. \end{remark} \section{Applications to a nonlinear eigenvalue problem}\label{se:exi} This section is devoted to the study of a planar nonlinear Schr\"odinger equation in presence of the same coercive potential in the linear term and in the nonlinearity. By what observed in Section \ref{se:func}, the fact that $\HV$ is continuously but not compactly embedded into $L_V^{p+1}(\RD)$ makes the problem quite tough. \begin{theorem} \label{th:main} Let $p\in (1,+\infty)$ and assume that $V\colon \RD\to\R$ is a $C^1$ positive coercive potential satisfying \eqref{gradv}. Then there exists a nontrivial pair $(\bar u,\lambda)\in \big(\HV\cap C^2(\RD)\big)\times\R_+$, $u\ge 0$, solving the problem \eqref{eq:11}. Explicitly, \begin{displaymath} \l=\frac2{p+3}\frac{\|\bar u\|_{V}^{2}}{\|\bar u\|_2^2}. \end{displaymath} \end{theorem} To achieve the proof, we define $I\colon \HV \to \R$ by \begin{equation}\label{I} I(u):=\frac 12 \ird \left(|\n u|^2 + V(x)u^2\right)dx -\frac 1{p+1}\|u\|_2^2\ird V(x)|u|^{p+1}\, dx, \end{equation} for all $u\in \HV$, and \begin{displaymath} \Ne:= \left\{ u\in H^1_V(\RD)\mid u\neq 0, \; J(u)=0 \right\}, \end{displaymath} where $J\colon \HV \to \R$ is such that \begin{displaymath} J(u):=I'(u)[u]=\ird \left(|\n u|^2 + V(x)u^2\right) dx -\frac {p+3}{p+1}\|u\|_2^2\ird V(x)|u|^{p+1}\, dx, \end{displaymath} for all $u\in \HV$. \begin{lemma}\label{le:ne} $\Ne$ is a natural constraint. \end{lemma} \begin{proof} The conclusion follows observing that \[ J'(u)[u]= -(p+3)\|u\|_2^2\ird V(x)|u|^{p+1}\, dx<0, \] for all $u\in \Ne$. \end{proof} \begin{lemma}\label{le:p+1} There exists $c>0$ such that $\|u\|_2\cdot\|u\|_{V,p+1}\ge c$, for any $u\in \Ne$. 
\end{lemma} \begin{proof} Since $\HV$ is continuously embedded into $L_V^{p+1}(\RD)$, we have \[ \|u\|_{V,p+1}^2\le C\|u\|_V^2=C'\|u\|_{2}^2\cdot \|u\|_{V,p+1}^{p+1}, \] and we conclude. \end{proof} We set \begin{displaymath} \s=\inf_{u\in\Ne}I(u). \end{displaymath} \begin{lemma}\label{le:sigma} We have that $\s>0$. \end{lemma} \begin{proof} Since, \begin{equation}\label{I|N} I(u)=\frac{p+1}{2(p+3)}\ird \left(|\n u|^2 + V(x)u^2\right) dx =\frac 12\|u\|_2^2\ird V(x)|u|^{p+1}\, dx \quad\text{for any }u \in \Ne, \end{equation} by Lemma \ref{le:p+1} we deduce that $\s>0$. \end{proof} \begin{lemma}\label{le:u_0} Let $\{u_n\}_n$ be a sequence in $\Ne$ such that $I(u_n)\to \s$, there exists $u_0\in \HV\setminus\{0\}$ such that $u_n\rightharpoonup u_0$ in $\HV$. \end{lemma} \begin{proof} By \eqref{I|N}, it is easy to see that $\{u_n\}_n$ is bounded in $\HV$ and then, up to a subsequence, there exists $u_0\in \HV$ such that $u_n\rightharpoonup u_0$ in $\HV$. \\ Since $V$ is coercive, $\HV$ is compactly embedded into $L^2(\RD)$ and so \begin{equation}\label{eq:str} u_n\to u_0 \hbox{ in }L^2({\RD}). \end{equation} Since $\{u_n\}_n$ is bounded in $\HV$ and hence also in $L^{p+1}_V(\RD)$, by Lemma \ref{le:p+1}, we deduce that $u_0\neq 0$. \end{proof} For any $n\ge 1$ and an arbitrary $\alpha \in \left(\frac 1{p+3},\frac 12\right)$ we define the positive measure \(\nu_n\) by \begin{equation*} \nu_n(\O):= \left(\frac 12 -\a\right)\int_\O\left(|\n u_n|^2 + V(x)u_n^2\right) dx+\frac{\alpha(p+3)-1}{p+1}\|u_n\|_2^2\int_\O V(x)|u_n|^{p+1}\, dx \end{equation*} for each measurable subset~$\O\subset \R^2$, and \begin{equation*} G(u):= \left(\frac 12 -\a\right)\ird\left(|\n u|^2 + V(x)u^2\right) dx+\frac{\alpha(p+3)-1}{p+1}\|u\|_2^2\ird V(x)|u|^{p+1}\, dx \end{equation*} for any $u\in H^1_V(\RD)$. Of course $\nu_n(\RD)=G(u_n)=I(u_n)=\s+o_n(1)$. By \cite{Lions1,Lions2} there are three possibilities: \begin{itemize} \item[1.] {\it concentration:} there exists a sequence $\{\xi_n\}_n$ in $\RD$ with the following property: for any $\epsilon> 0$, there exists $r = r(\epsilon) > 0$ such that \[ \nu_n(B_r(\xi_n))\ge \s-\epsilon; \] \item[2.] {\it vanishing:} for all $r > 0$ we have that \[ \lim_{n \to +\infty} \sup_{\xi\in\RD} \nu_n(B_r(\xi))=0; \] \item[3.] {\it dichotomy:} there exist two sequences of positive measures $\{\nu_n^1\}_n$ and $\{\nu_n^2\}_n$, a positively diverging sequence of numbers $\{R_n\}_n,$ and $\tilde \s \in (0,\s)$ such that \begin{align*} &0\le \nu_n^1 + \nu_n^2\le \nu_n,\quad \nu_n^1(\RD)\to \tilde \s,\quad \nu_n^2(\RD)\to \s-\tilde \s, \\ & \operatorname{supp} \nu_n^1\subset B_{R_n},\quad \operatorname{supp} \nu_n^2\subset B_{2R_n}^c. \end{align*} \end{itemize} We are going to rule out both vanishing and dychotomy. In order to exclude the vanishing, we need the equivalent of \cite[Lemma I.1]{Lions2} in $\HV$. \begin{lemma}\label{le:lions} Suppose that $V$ satisfies \eqref{gradv}. Let $\{u_n\}_n$ be a bounded sequence in $\HV$ such that \begin{equation}\label{lions} \lim_{n \to +\infty} \sup_{\xi\in\RD} \int_{B_r(\xi)} V(x) u_n^2\, dx=0, \end{equation} for some $r>0$, then $u_n \to 0$ strongly in $L_V^\tau(\RD)$, for all $\tau>2$, as $n \to +\infty$. \end{lemma} \begin{proof} We start with the following intermediate step. \vspace{.2cm} {\sc Claim:} $u_n \to 0$ strongly in $L_V^4(\RD)$, as $n \to +\infty$. 
\\ Since $W^{1,1}(B_r(\xi))\hookrightarrow L^2(B_r(\xi))$ and by \eqref{gradv}, we have \begin{align*} \int_{B_r(\xi)} V(x) u_n^4\, dx &=\|\sqrt{V}u_n^2\|_{L^2(B_r(\xi))}^2 \le C\|\sqrt{V}u_n^2\|_{W^{1,1}(B_r(\xi))}^2 \\ &\le C\left(\int_{B_r(\xi)} \Big(\frac{|\n V(x)|}{2\sqrt{V(x)}} u_n^2 +2\sqrt{V(x)} u_n |\n u_n| +\sqrt{V(x)} u_n^2\Big) dx\right)^2 \\ &\le C\left(\int_{B_r(\xi)} \Big(V(x) u_n^2+\sqrt{V(x)} u_n |\n u_n|\Big) dx\right)^2 \\ &\le C\left(\|u_n\|_{L_V^2(B_r(\xi))}^2 +\|u_n\|_{L_V^2(B_r(\xi))}\|\n u_n\|_{L^2(B_r(\xi))}\right)^2 \\ &\le C\|u_n\|_{L_V^2(B_r(\xi))}^2 \left(\|u_n\|_{L_V^2(B_r(\xi))}+\|\n u_n\|_{L^2(B_r(\xi))}\right)^2 \\ &\le C\|u_n\|_{L_V^2(B_r(\xi))}^2 \|u_n\|_{H_V^1(B_r(\xi))}^2 \\ &= o_n(1) \|u_n\|_{H_V^1(B_r(\xi))}^2. \end{align*} Then, covering $\RD$ by ball of radius $r$ in such a way that any point of $\RD$ is contained in at most $m$ balls (where $m$ is a prescribed integer), being $\{u_n\}_n$ bounded in $\HV$, we deduce \[ \|u_n\|_{V,4}^2\le m \, o_n(1)\|u_n\|_V^2=o_n(1). \] \vspace{.2cm} {\sc Claim:} $u_n \to 0$ strongly in $L_V^\tau(\RD)$, for any $\tau>2$, as $n \to +\infty$. \\ The conclusion follows by an interpolation argument by \cite[Lemma 2.1]{ADP}. \end{proof} \begin{lemma}\label{le:van} Vanishing does not hold. \end{lemma} \begin{proof} If vanishing holds, we have that \[ \lim_{n \to +\infty} \sup_{\xi\in\RD} \int_{B_r(\xi)} V(x) u_n^2\, dx=0 \] and so, by Lemma \ref{le:lions}, we deduce that ${u_n}\to 0 $ strongly in $L^{p+1}_V(\RD)$ and then, being $J(u_n)=0$, we would have $u_n\to 0$ in $\HV$, contradicting $I(u_n)\to \s>0$. \end{proof} \begin{lemma}\label{le:dicho} Dichotomy does not hold. \end{lemma} \begin{proof} As usual, we perform a proof by contradiction assuming that, on the contrary, dichotomy holds.\\ Consider a radial function $\rho_n\in C^1_0(\RD,[0,1])$ such that, for any $n\ge 1$, $\rho_n\equiv 1$ in $B_{R_n}$, $\rho_n\equiv 0$ in $ B_{2R_n}^c$ and $\sup_{x\in\RD}|\n\rho_n(x)|\le 2/R_n$. Moreover set $v_n=\rho_nu_n$ and $w_n=(1-\rho_n)u_n$. It is clear that $v_n, w_n\in H^1_V$. Now we proceed by steps. \vspace{.2cm} {\sc First step}: we claim that \begin{align}\label{eq:onu} &\left(\frac 12 -\a\right)\int_{\O_n}(|\n u_n|^2+V(x)u_n^2)\, dx+\frac{\alpha(p+3)-1}{p+1}\|u_n\|_2^2\int_{\O_n} V(x)|u_n|^{p+1}\, dx \to 0, \\\label{eq:onv} &\left(\frac 12 -\a\right)\int_{\O_n}(|\n v_n|^2+V(x)v_n^2)\, dx+\frac{\alpha(p+3)-1}{p+1}\|v_n\|_2^2\int_{\O_n} V(x)|v_n|^{p+1}\, dx \to 0, \\\label{eq:onw} &\left(\frac 12 -\a\right)\int_{\O_n}(|\n w_n|^2+V(x)w_n^2)\, dx+\frac{\alpha(p+3)-1}{p+1}\|w_n\|_2^2\int_{\O_n} V(x)|w_n|^{p+1}\, dx \to 0, \end{align} as $n \to +\infty$, where $\O_n:=\{x\in\RD: R_n\le |x|\le 2R_n\}$. Let's prove \eqref{eq:onu}. Observe that \begin{align*} \nu_n(\O_n)&=\s-\nu_n(B_{R_n})-\nu_n(B^c_{2R_n})+o_n(1)\\ &\le \s- \nu^1_n(B_{R_n})-\nu^2_n(B^c_{2R_n})+o_n(1)=o_n(1) \end{align*} and then we deduce \eqref{eq:onu}. Moreover, by simple computations \begin{align*} &\left(\frac 12 -\a\right)\int_{\O_n} \left(|\n v_n|^2+V(x)v_n^2 \right)\, dx+\frac{\alpha(p+3)-1}{p+1}\|v_n\|_2^2\int_{\O_n} V(x)|v_n|^{p+1}\, dx\\ &\qquad\le 2\left(\frac 12 -\a\right) \int_{\O_n}\left(|\n u_n|^2+\left(V(x)+\frac4{R^2_n}\right) u_n^2\right)\, dx \\ &\qquad\qquad {}+\frac{\alpha(p+3)-1}{p+1}\|u_n\|_2^2\int_{\O_n} V(x)|u_n|^{p+1}\, dx \\ &\qquad\le o_n(1) \end{align*} and then we have proved also \eqref{eq:onv}. The proof of \eqref{eq:onw} is analogous. 
\vspace{.2cm} {\sc Second step}: $\liminf_{n \to +\infty} G(v_n)=\tilde{\s}$.\\ Observe that \begin{equation}\label{superfico} G(v_n)\ge \nu_n (B_{R_n})\ge \nu_n^1 (B_{R_n})\to \tilde{\s}, \end{equation} Now, observe that, by the first step and considering that $\nu_n\ge\nu_n^2$, \begin{align*} \s&=\lim_{n \to +\infty}\nu_n(\RD)=\lim_{n \to +\infty} (\nu_n(B_{R_n})+\nu_n(B_{2R_n}^c))\\ &\ge \liminf_{n \to +\infty} G(v_n)+\lim_{n \to +\infty}\nu_n^2(B_{2R_n}^c). \end{align*} Since $\lim_{n \to +\infty}\nu_n^2(\RD)=\s-\tilde \s$ and $\supp \nu_n^2\subset B_{2R_n}^c$, we conclude that $$\liminf_{n \to +\infty} G(v_n)=\tilde \s.$$ \vspace{.2cm} {\sc Third step}: conclusion.\\ First of all observe that, since $u_n=v_n+w_n$, then by the first step \begin{equation}\label{eq:Gun} G(u_n)\ge G(v_n)+G(w_n)+o_n(1). \end{equation} Observe that, by first step, \begin{equation}\label{eq:Jun} 0=J(u_n)\ge J(v_n)+J(w_n)+o_n(1). \end{equation} For any $n\in \N$, let $t_n, s_n>0$ be the numbers, respectively, such that $t_nv_n\in \Ne$ and $s_nw_n\in \Ne$. There are now three possibilities. \textit{Case 1}: up to a subsequence, $J(v_n)\le 0$. \\ By simple computations we see that $t_n\le 1$ and then we have \begin{equation*} \s\le I(t_nv_n)=G(t_nv_n)\le G(v_n), \end{equation*} which, for a large $n\ge 1$, leads to a contradiction due to the fact that, by the second step, $$\s>\tilde \s= \liminf_{n \to +\infty} G(v_n).$$ {\it Case 2}: up to a subsequence, $J(w_n)\le 0.$\\ Then, proceeding as in the first case, by \eqref{superfico} and using \eqref{eq:Gun}, we have, for $n$ sufficiently large, \begin{equation*} \s\le I(s_nw_n)=G(s_nw_n)\le G(w_n)\le G(u_n)+o_n(1)=\s+o_n(1), \end{equation*} that is $\s=\lim_{n \to +\infty} G(w_n)$. Then, passing to the limit in \eqref{eq:Gun}, we have $$\s\ge \s + \liminf_{n \to +\infty} G(v_n)$$ which contradicts the result obtained in the second step. {\it Case 3}: there exists $n_0\ge 1$ such that for all $n\ge n_0$ both $J(v_n)>0$ and $J(w_n)>0$. \\ Then $\liminf_{n \to +\infty} t_n\ge1$ and, by \eqref{eq:Jun}, we also have that $J(v_n)=o_n(1)$. \\ If $ \liminf_{n \to +\infty} t_n = 1$, we can repeat the computations performed in the first case and get the contradiction. If $\liminf_{n \to +\infty} t_n >1$, from \begin{equation*} o_n(1) = J(v_n)-\frac 1{t_n^{p+3}}J(t_nv_n)=\left(1-\frac 1{t_n^{p+1}}\right)\| v_n\|^2_V, \end{equation*} we get a contradiction since $\liminf_{n \to +\infty} G(v_n)>0$ by the second step. \end{proof} \begin{proposition}\label{conc} Concentration holds and, moreover, the sequence $\{\xi_n\}_n$ is bounded. \end{proposition} \begin{proof} By Lemmas \ref{le:van} and \ref{le:dicho}, we deduce that concentration occurs. Now assume, by contradiction, that $\{\xi_n\}_n$ is unbounded. Since, by Lemma \ref{le:u_0}, $u_0\neq 0$, there exists $ R>0$ such that \[ m:=\frac{\alpha(p+3)-1}{p+1}\|u_0\|_2^2\int_{B_{R}} V(x)|u_0|^{p+1}\, dx>0. \] Since the embedding of $H^1_V(B_{ R})$ in $L^\tau_V(B_{ R})$ is compact, for all $\tau\ge 2$, we have \begin{equation*} \liminf_{n \to +\infty} \nu_n(B_{R})\ge \frac{\alpha(p+3)-1}{p+1}\|u_0\|_2^2\int_{B_{R}} V(x)|u_0|^{p+1}\, dx=m>0. \end{equation*} Consider $\eps=m/2$ and apply the concentration hypothesis. We would have for some $\tilde R>0$ that $\nu_n(B_{\tilde R}(\xi_n))\ge \s-\varepsilon$ for all $n\ge 1$. 
\\ Then, since for large $n\ge1$ we would have $B_{R}\cap B_{\tilde R}(\xi_n)=\emptyset$, the following should hold
\begin{displaymath} \s=\lim_{n \to +\infty}\nu_n(\RD)\ge \liminf_{n \to +\infty} \nu_n(B_{R}) + \liminf_{n \to +\infty} \nu_n(B_{\tilde R}(\xi_n)) \ge \s+\frac m2, \end{displaymath}
reaching a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main}] By Proposition \ref{conc}, $u_n$ tends to $u_0$ in $H^1_V(\RD)$. As a consequence $u_0$ solves the minimizing problem
\begin{displaymath} I(u_0)=\inf_{u\in \Ne}I(u). \end{displaymath}
Of course, since also $|u_0|$ solves the same minimizing problem, we may assume $u_0$ nonnegative. By Lemma \ref{le:ne} and standard arguments, it solves the problem
\begin{displaymath} -\Delta u_0 + \left(V(x) - \frac 2{p+1}\ird V(x)u_0^{p+1}\, dx\right) u_0=\|u_0\|_2^2 V(x)u_0^{p}. \end{displaymath}
We set
\begin{displaymath} \lambda = \frac 2{p+1}\ird V(x)u_0^{p+1}\, dx, \quad \mu = \|u_0\|_2^2. \end{displaymath}
With some simple computations, the function $\bar u=\mu^{\frac1{p-1}}u_0$ solves
\begin{displaymath} -\Delta \bar u + \left(V(x) - \lambda \right) \bar u= V(x)\bar u^{p} \end{displaymath}
and the relation between $\l$ and $\bar u$ is obtained by direct computations. Finally, since $H_V^1(\R^2)\hookrightarrow H^1(\R^2)$ and locally $V$ is strictly positive and bounded, the regularity of $\bar u$ follows as usual. \end{proof} \begin{remark}\label{re:th} Some comments are in order about Theorem \ref{th:main} and its proof. Our result, with the presence of the parameter $\l$, is a direct consequence of the choice of the functional $I$ defined in \eqref{I} and, in particular, of the presence of the multiplicative $L^2$-norm factor. This choice is, for sure, quite tricky, but it is strictly related to the lack of the compact embedding of $\HV$ into $L_V^{p+1}(\RD)$. Removing the multiplicative $L^2$-norm factor, we can still avoid the vanishing and the dichotomy for a minimizing sequence. Unfortunately, the sequence $\lbrace \xi_n\rbrace_n$ might well diverge to infinity: the minimizing sequence could behave as the sequence described in the proof of Proposition \ref{pr:general}. \end{remark} The difficulties described in Remark \ref{re:th} basically disappear when the compact embedding holds. \\ In view of this, we list the following three possibilities:
\begin{enumerate}[label=($H_\arabic{*}$),ref=$H_\arabic{*}$] \setcounter{enumi}{0} \item\label{H1} $N=1$, $p>2$, $V$ is coercive, $W(x)\le CV(x)$ for some $C>0$ and any $x\in\R$, and \eqref{gradv1} holds for $V$; \item\label{H2} $N\ge 1$, $p$ satisfying \eqref{tau} with $\alpha\in (0,1)$, $V$ and $W$ such that $V$ is coercive and \eqref{VWa} holds; \item\label{H3} $N\ge 2$, $V$ and $W$ are radial functions such that \eqref{VWx} holds for $V$ and $W$, and $p\in (2,2^*)$ ($2^*=\infty$ if $N=2$). \end{enumerate}
Taking into account Theorems \ref{thvwa}, \ref{th:main1} and \ref{thrad} and Remark \ref{re:VW1}, we have the following \begin{corollary} \label{th:main2} Assume that $V\colon \RN\to\R$ and $W\colon \RN\to\R$ are potentials in $\mathscr{C}^*(\RN)$ and one among \eqref{H1}, \eqref{H2} and \eqref{H3} holds. Then there exists a nontrivial solution $u \in H^1_{V}(\RN)$ (actually $u \in H^1_{V,{\rm rad}}(\RN)$ if \eqref{H3} holds) of \[ -\Delta u +V(x) u=W(x)|u|^{p-1}u \qquad\text{ in }\RN.
\] \end{corollary} \bibliographystyle{amsplain} \nocite{*} \bibliography{aps} \end{document} Finally, as a nontrivial application, we will consider the following nonlinear eigenvalue problem for the stationary Schrödinger operator \begin{equation}\label{eq:1}\tag{NLS} -\Delta u + \left(V(x) - \lambda\right) u=W(x)|u|^{p-1}u \qquad\text{ in }\RD, \end{equation} where $V\colon \R^2\to \R$ and $W \colon \mathbb{R}^2 \to \mathbb{R}$ are continuous potential functions which are bounded below by a positive constant and diverging at infinity. For $\lambda$ \emph{fixed}, It is evident that \eqref{eq:1} can be seen as a case of the general semilinear equation \begin{displaymath} -\Delta u = g(x,u), \end{displaymath} with the choice $g(x,u) = \left(\lambda-V(x) \right)u + W(x)\vert u \vert^{p-1}u$. The NLS equation in $\R^N$ has been studied intensively over the last decades. We refer to \cite{berlions1,berlions2} for a major breakthrough about variational methods for \eqref{eq:1} under general conditions. As far as we know, all existence results of variational solutions $u \in H^1(\R^N)$ to \eqref{eq:1} require the two potential functions $V$ and $W$ to behave in a qualitatively different way. While the case $V$ coercive and $W \equiv 1$ is a basic toy model of Calculus of Variations, see for example \cite{rabinowitz}, the presence of coercive potentials on both sides of the equation prevents most standard techniques from working. For simplicity, we focus our attention on the case $V=W$ and we consider only the limit case $N=2$ since, as already observed, for $N\ge 3$ and a large class of positive and diverging potentials $H_V^1 (\mathbb{R}^N)$ is not continuous embedded into $L_V^\tau (\mathbb{R}^N)$, for any $\tau > 2$, while for $N=1$ the embeddings are compact. So, assuming $V=W$, to solve \eqref{eq:1} for a \emph{fixed} value of $\lambda$ one is led to looking for critical points of the functional \begin{displaymath} E(u) = \frac{1}{2} \int_{\mathbb{R}^2} \left( \vert \nabla u \vert^2 + \left( V(x) - \lambda \right) \vert u \vert^2 \right)\, dx - \frac{1}{p+1} \int_{\mathbb{R}^2} V(x) \vert u \vert^{p+1} \, dx. \end{displaymath} Even if, under suitable assumptions, by our embedding theorems the functional is well defined in $\HV$, the lack of the compact embedding of $\HV$ into $L^{p+1}_V(\RD)$ makes the problem difficult to attach with variational techniques. On the contrary, in Theorem \ref{th:main} we are able to construct a solution $(\lambda,u)$ to \eqref{eq:1} in which $\lambda$ plays the role of a Lagrange multiplier. In Remark \ref{re:th} we comment on the obstacles which prevent us from solving directly the more natural equation \begin{equation*} -\Delta u + V(x)u = W(x) \vert u \vert^{p-1}u \qquad\text{ in }\RD. \end{equation*}
2412.10210v2
http://arxiv.org/abs/2412.10210v2
On diffeomorphisms of 4-dimensional 1-handlebodies
\documentclass[a4paper]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{tikz} \usepackage{verbatim} \usepackage[justification=centering]{caption} \usepackage{thmtools, thm-restate} \declaretheorem[numberwithin=section]{theorem} \textwidth 16cm \oddsidemargin 0cm \evensidemargin 0cm \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem*{theo}{Theorem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{convention}[theorem]{Convention} \numberwithin{equation}{section} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Zt}{\mathbb{Z}[t^{\pm1}]} \newcommand{\K}{\mathbb{K}} \newcommand{\F}{\mathbb{F}} \newcommand{\Aa}{\mathcal{A}} \newcommand{\Bb}{\mathcal{B}} \newcommand{\Cc}{\mathcal{C}} \newcommand{\Dd}{\mathcal{D}} \newcommand{\Ee}{\mathcal{E}} \newcommand{\Ff}{\mathcal{F}} \newcommand{\Gg}{\mathcal{G}} \newcommand{\Hh}{\mathcal{H}} \newcommand{\Jj}{\mathcal{J}} \newcommand{\Kk}{\mathcal{K}} \newcommand{\Ll}{\mathcal{L}} \newcommand{\Qq}{\mathcal{Q}} \newcommand{\Rr}{\mathcal{R}} \newcommand{\Ss}{\mathcal{S}} \newcommand{\Vv}{\mathcal{V}} \newcommand{\Ww}{\mathcal{W}} \newcommand{\Xx}{\mathcal{X}} \newcommand{\Yy}{\mathcal{Y}} \renewcommand{\Im}{\mathrm{Im}} \newcommand{\rk}{\mathop{rk}} \newcommand{\Int}{\mathrm{Int}} \newcommand{\ca}{\hspace{-2pt}\raisebox{-2pt}{ \begin{tikzpicture} [scale=0.3] \draw (0,-0.2) -- (-0.4,1) (0.3,-0.2) -- (-0.1,1); \end{tikzpicture} }\hspace{-2pt}} \newcommand{\wt}{\widetilde} \definecolor{vert}{RGB}{0,205,0} \newcommand\dm[1]{\textcolor{vert}{{\em #1}}} \begin{document} \title{On diffeomorphisms of $4$--dimensional $1$--handlebodies} \author{Delphine Moussard} \begin{abstract} We give a new proof of Laudenbach and Poénaru's theorem, which states that any diffeomorphism of the boundary of a $4$--dimensional $1$--handlebody extends to the whole handlebody. Our proof is based on the classification of Heegaard splittings of double handlebodies and a result of Cerf on diffeomorphisms of the $3$--ball. Further, we extend this theorem to $4$--dimensional compression bodies, namely cobordisms between $3$--manifolds constructed using only $1$--handles: when the negative boundary is the product of a compact surface with an interval, we show that every diffeomorphism of the positive boundary extends to the whole compression body. This involves a strong Haken theorem for sutured Heegaard splittings and a classification of sutured Heegaard splittings of double compression bodies. Finally, we show how this applies to the study of relative trisection diagrams for compact $4$--manifolds. \end{abstract} \maketitle \section{Introduction} A famous theorem of Laudenbach and Poénaru asserts that every diffeomorphism of the boundary of a $4$--dimensional $1$--handlebody extends to a diffeomorphism of the whole handlebody \cite{LP}. This result is of great importance in the theory of smooth $4$--manifolds because it implies that, given a handle decomposition of a closed $4$--manifold $X$, the attaching information for the $1$-- and $2$--handles contained in a Kirby diagram is sufficient to determine $X$ up to diffeomorphism. Likewise, in the theory of trisections, it implies that a trisection diagram determines a unique closed $4$--manifold up to isotopy.
We give here an alternative proof of Laudenbach--Poénaru's theorem, based on two main ingredients. The first one is the uniqueness of the minimal genus Heegaard splitting of the boundary of a $4$--dimensional $1$--handlebody, that is a double handlebody, due to Carvalho and Oertel \cite{CarOer}. The second ingredient is the fact that every diffeomorphism of a $3$--dimensional handlebody, that restricts to the identity on the boundary, is isotopic to the identity; this is based on a result of Cerf \cite{Cerf4}. From that point, our proof of Laudenbach--Poénaru's theorem is very short. Carvalho and Oertel actually used much of the same machinery that Laudenbach and Poénaru used in their original proof. In particular, both papers relied on Laudenbach’s results from \cite{Laudenbach}. So at first glance this new proof might be regarded as a repackaging of Laudenbach and Poénaru’s original proof. However, in \cite{HS}, Hensel and Schultens reprove Carvalho--Oertel's result using brief cut and paste arguments. Thus, we believe that this proof of Laudenbach--Poénaru’s theorem represents a true simplification of the original. We then extend the setting and consider $4$--dimensional compression bodies. Such a compression body is a cobordism between two $3$--manifolds, its negative boundary and its positive boundary, constructed using only $1$--handles. We further require that the negative boundary is a product of a compact surface and an interval. These compression bodies are the building blocks of the so-called relative trisections of compact $4$--manifolds with boundary. We generalize Laudenbach--Poénaru's theorem to compression bodies. \begin{theo}[Theorem~\ref{th:LPrel}] Let $V$ be a $4$--dimensional compression body. Assume the negative boundary of $V$ is a product $P\times I$, where $P$ is a compact oriented surface which contains no $2$--sphere. Then every diffeomorphism of the positive boundary of $V$ extends to a diffeomorphism of $V$. \end{theo} From this result, we recover the statement, due to Castro, Gay and Pinz\'on-Caicedo \cite{CGPC2}, that every relative trisection diagram determines a unique compact $4$--manifold up to diffeomorphism, and we extend it to the case when the page of the trisection (the surface $P$) is allowed to contain closed components. This fails when $P$ is allowed to contain $2$--spheres, as we show with an example which was communicated to us by David Gay. The proof of Theorem~\ref{th:LPrel} follows the lines of our proof of Laudenbach--Poénaru's theorem. The second ingredient is easily adapted to the relative case: given a $3$--dimensional compression body $C$, namely a cobordism between two compact surfaces constructed using only $1$--handles, we show that every diffeomorphism of $C$, which restricts to the identity on its positive boundary, is isotopic to the identity. The first ingredient requires more work. The positive boundary of a $4$--dimensional compression body is diffeomorphic to the connected sum of some products $F\times I$, where $F$ is a compact surface, and some copies of $S^1\times S^2$. We need to understand the sutured Heegaard splittings of these so-called double compression bodies. \begin{theo}[Theorem~\ref{th:doublecompHSfull}] Any two sutured Heegaard splittings of a double compression body with the same genus are isotopic. 
\end{theo} This will essentially follow from the strong Haken theorem due to Scharlemann \cite{Scharlemann}: every $2$--sphere embedded in a $3$--manifold equipped with a Heegaard splitting is isotopic to a $2$--sphere which intersects the Heegaard surface along a single circle. Another proof of this result was given by Hensel and Schultens in \cite{HS}, which appears surprisingly simple to us. We apply their technique to check that the strong Haken theorem remains true in the setting of sutured Heegaard splittings. To prove our relative Laudenbach--Poénaru's theorem, we only need the uniqueness of the minimal genus Heegaard splittings of double compression bodies. However, the general classification result is also useful in the theory of relative trisections. In the original definition of a relative trisection, one has to precisely describe a decomposition of the boundary of the building blocks, namely the $4$--dimensional compression bodies, in order to prescribe the way the different pieces should meet. We give a simpler definition of a relative trisection, and Theorem~\ref{th:doublecompHSfull} shows that the two definitions are equivalent. \subsection*{Plan of the paper} In Section~\ref{secLP}, we reprove Laudenbach--Poénaru's theorem. In Section~\ref{secHS}, we define sutured Heegaard splittings, we give a proof of the strong Haken theorem in this setting, and we classify the sutured Heegaard splittings of double compression bodies. In Section~\ref{secLPrel}, we prove Theorem~\ref{th:LPrel}, we apply it to the study of relative trisection diagrams, and we discuss its failure when there is a $2$--sphere in the page $P$. \subsection*{Conventions} The boundary of an oriented manifold with boundary is oriented using the outward normal first convention. If $M$ is an oriented manifold, $-M$ represents $M$ with the opposite orientation. If $M$ is a compact manifold and $N$ is a submanifold, we denote by $M\ca N$ the manifold ``$M$ cut along $N$'', which comes with a surjective map $\pi:M\ca N\to M$ such that $\pi$ is a diffeomorphism from $\pi^{-1}(M\setminus N)$ to $M\setminus N$ and a double cover from $\pi^{-1}(N)$ to $N$. \subsection*{Acknowledgements} I warmly thank Trenton Schirmer for many helpful conversations and for valuable comments on the first version of the paper. I am also grateful to David Gay for helpful conversations. \section{Proof of Laudenbach--Poénaru's theorem via Heegaard splittings} \label{secLP} A {\em genus--$g$ handlebody} is a $3$--manifold diffeomorphic to a $3$--ball with $g$ $1$--handles glued on its boundary. A {\em Heegaard splitting} of a closed $3$--manifold $M$ is a decomposition $M=H_1\cup_\Sigma H_2$, where $H_1$ and $H_2$ are handlebodies and $\Sigma=\partial H_1=-\partial H_2$. A {\em genus--$g$ double handlebody} is a $3$--manifold diffeomorphic to~$\sharp_{i=1}^g(S^1\times S^2)$ (it is the result of gluing of two copies of a genus--$g$ handlebody along their boundary {\em via} the identity map). The first preliminary result we need to reprove Laudenbach--Poénaru's theorem is the classification of minimal genus Heegaard splittings of double handlebodies. This result is recovered in Theorem~\ref{th:doublecompHS} within the more general setting of double compression bodies. \begin{theorem}[Carvalho--Oertel] \label{th:CO} A genus--$g$ double handlebody admits a unique genus--$g$ Heegaard splitting, up to isotopy. \end{theorem} The second preliminary result is based on the following theorem of Cerf \cite{Cerf4}. 
\begin{theorem}[Cerf] Every diffeomorphism of a $3$--ball, which is the identity on the boundary, is isotopic to the identity, relative to the boundary. \end{theorem} We shall generalize this fact to handlebodies of positive genus. A {\em defining disk system} for a $3$--dimensional handlebody $H$ is a union $\Dd$ of disjoint properly embedded disks such that $H\ca\Dd$ is a $3$--ball. \begin{lemma} Let $H$ be a $3$--dimensional handlebody. Every two defining disk systems for $H$ which coincide on the boundary are isotopic. \end{lemma} \begin{proof} This follows from a standard innermost disk argument, using the fact that handlebodies are irreducible. \end{proof} \begin{lemma} \label{lemma:diffeohandlebody} Let $H$ be a $3$--dimensional handlebody. Let $\varphi$ be a diffeomorphism of $H$. If $\varphi$ is the identity on $\partial H$, then $\varphi$ is isotopic to the identity, relative to the boundary. \end{lemma} \begin{proof} Pick a defining disk system $\Dd$ for $H$. Then $\varphi(\Dd)$ is another defining disk system with the same boundary, thus isotopic to $\Dd$. By the isotopy extension theorem, there is an ambient isotopy sending $\varphi(\Dd)$ to $\Dd$, keeping $\partial H$ fixed. Hence, up to isotoping $\varphi$, we can assume that $\varphi(\Dd)=\Dd$. Now, every diffeomorphism of a $2$--disk, which is the identity on the boundary, is isotopic to the identity (Smale, see \cite[p.132]{Cerf4}). Hence we can even assume that $\varphi$ is the identity on $\partial H\cup\Dd$. We are led to a diffeomorphism of a $3$--ball which is the identity on the boundary, and we apply Cerf's result. \end{proof} A {\em $4$--dimensional $1$--handlebody} is a compact oriented smooth $4$--manifold obtained from a $4$--ball by adding a finite number of $1$--handles. The number of $1$--handles glued is the {\em genus} of the handlebody. Note that the boundary of a $4$--dimensional $1$--handlebody of genus $g$ is a genus--$g$ double handlebody. \begin{theorem}[Laudenbach--Poénaru] \label{th:LP} Let $Z$ be a $4$--dimensional $1$--handlebody. Then every diffeomorphism of $\partial Z$ extends to a diffeomorphism of $Z$. \end{theorem} \begin{proof} Fix an identification of $Z$ with a product $H\times I$, with $H$ a $3$--dimensional genus--$g$ handlebody, where the vertical boundary $\partial H\times I$ has been collapsed along the $I$--factor. This induces a foliation of $Z$ by $3$--dimensional handlebodies $H_t$, $t\in[0,1]$, which meet exactly along their common boundary $\Sigma=\partial H_t$. This surface $\Sigma$ defines a minimal genus Heegaard splitting of $\partial Z$. Now take a diffeomorphism $\varphi$ of $\partial Z$. It sends the Heegaard splitting $\partial Z=H_0\cup_\Sigma H_1$ onto another splitting $\partial Z=\varphi(H_0)\cup_{\varphi(\Sigma)} \varphi(H_1)$, with the same genus. By Theorem~\ref{th:CO}, they are isotopic. Hence, realizing the isotopy in a collar neighborhood of $\partial Z$, we can assume that $\varphi(\Sigma)=\Sigma$ and $\varphi(H_t)=H_t$ for each $t=0,1$. Now $\varphi_{|H_0}$ and $\varphi_{|H_1}$ define two diffeomorphisms of $H$ which coincide on the boundary. By Lemma~\ref{lemma:diffeohandlebody}, there is an isotopy $\varphi_t$ of diffeomorphisms of $H$ from $\varphi_{|H_0}$ to $\varphi_{|H_1}$. The map $\phi:Z\to Z$ induced by the diffeomorphism $(x,t)\mapsto (\varphi_t(x),t)$ of $H\times I$ is the desired diffeomorphism. 
\end{proof} \section{Sutured Heegaard splittings} \label{secHS} \subsection{Compression bodies} \begin{definition} A \emph{compression body} $C$ is a cobordism from a connected compact oriented surface $\partial_+C$ to a compact oriented surface $\partial_-C$ which is constructed using only $2$--handles and $3$--handles, where enough $3$--handles are glued to avoid any $S^2$--component in $\partial_-C$. A \emph{lensed} compression body is then obtained by collapsing the vertical boundary of the cobordism so that the boundary of $\partial_+C$ becomes identified with the boundary of $\partial_-C$. \end{definition} Note that the definition includes the possibility that $\partial_-C$ be empty. Note also that a compression body can alternatively be constructed by adding $1$--handles, either to a thickening of the negative boundary, which is a compact oriented surface containing no $2$--sphere, or to a $3$--ball. In what follows, compression bodies are assumed to be lensed. \begin{definition} Let $C$ be a compression body. A \emph{defining disk system} for $C$ is a collection $\Dd$ of disjoint disks properly embedded in $C$ such that $C\ca\Dd$ is a thickening of $\partial_-C$, or a $3$--ball if $\partial_-C$ is empty. The boundary $\partial\Dd\subset\partial_+C$ is a {\em cut-system} for $C$. \end{definition} Note that defining disk systems do exist: take for instance the core disks of the $2$--handles in the definition (with a minimal number of $2$-- and $3$--handles). \begin{lemma} \label{lemma:compirr} Every compression body is irreducible. \end{lemma} \begin{proof} We start with a trivial compression body $P\times I$, where $P$ is a compact connected oriented surface different from $S^2$. Let $S$ be a $2$--sphere embedded in $P\times I$. We embed this product in $\R^3$, which is irreducible. Then $S$ bounds a $3$--ball $B$ in $\R^3$. If $\partial P$ is non-empty, then $\partial(P\times I)$ is connected, so that it is contained in~$\R^3\setminus B$, and $B\subset(P\times I)$. Now assume $P$ is closed. If $S$ separates the two boundary components of $P\times I$, then the retraction of $P\times I$ onto $P\times\{0\}$ provides a map $f:S\to P$ which induces an isomorphism $f_*:H_2(S)\to H_2(P)$. But $f$ lifts to a map $\tilde f:S\to\tilde P$, where $\tilde P$ is the universal cover of $P$, which is non-compact since $g(P)>0$. It follows that $f_*$ factors through $H_2(\tilde P)=0$. We get a contradiction and conclude that $S$ does not separate the boundary of $P\times I$, so that $B\subset(P\times I)$. Now let $S$ be a $2$--sphere embedded in a compression body~$C$. Let $\Dd$ be a defining collection of disks for~$C$. By the previous case, the components of $C\ca\Dd$ are irreducible. If $S$ meets $\Dd$, choose an intersection curve $\gamma$ which is innermost in $S$. Then $\gamma$ bounds a disk in $\Dd$ and a disk in $S$, which together form a $2$--sphere in $C\ca\Dd$, hence bounds a $3$--ball. Thus we can isotope $\Dd$ to remove this intersection curve. Iterating, we can assume $S$ is disjoint from $\Dd$ and we are done. \end{proof} Given a compact oriented $3$--manifold $M$, a {\em sutured decomposition} of $\partial M$ is a decomposition $\partial M=\partial_0 M\cup_s\partial_1 M$, where $\partial_0 M$ and $\partial_1 M$ are two compact surfaces oriented like $M$, and $s$ is their common boundary oriented as $\partial(\partial_1 M)$. The curve $s$ is called the {\em suture}. Note that, if $\partial_0 M$ and $\partial_1 M$ have no closed component, the datum of the oriented suture fully determines the sutured decomposition.
A {\em sutured $3$--manifold} is a manifold whose boundary is equipped with a sutured decomposition. \begin{definition} Let $M$ be a sutured $3$--manifold. A {\em sutured Heegaard splitting} of $M$ is a decomposition $M=C_1\cup_\Sigma C_2$ where $C_1$ and $C_2$ are compression bodies, $\Sigma=\partial_+ C_1=-\partial_+ C_2$ is a compact connected oriented surface, $\partial_-C_1=\partial_0 M$ and $\partial_-C_2=\partial_1 M$. \end{definition} Every sutured $3$--manifold admits a sutured Heegaard splitting, unique up to stabilization (see Juh\'asz \cite[Section~2]{Juh} or Dissler \cite[Section~3]{Dissler1}). \subsection{Strong Haken's theorem for sutured Heegaard splittings} \begin{definition} An essential embedded $2$--sphere in a $3$--manifold with a Heegaard splitting is {\em Haken} if it intersects the Heegaard surface transversely along a connected simple closed curve. \end{definition} \begin{definition} An embedded $2$--sphere $S$ in a compact $3$--manifold $M$ is {\em surviving} if $S$ is essential in the manifold obtained from $M$ by filling every $S^2$--component in $\partial M$ with a $3$--ball. \end{definition} The following theorem is due to Haken in the closed case \cite{Haken}. The proof we give here is taken from Jaco \cite{Jaco}. We reproduce it in order to take care of the small additional argument needed in the case of sutured Heegaard splittings. \begin{theorem}[Haken] \label{th:Haken} Let $M=C_1\cup_\Sigma C_2$ be a sutured Heegaard splitting. If there is an essential ({\em resp} surviving) embedded $2$--sphere $S\subset M$, then there is an essential ({\em resp} surviving) embedded $2$--sphere $S'\subset M$ which is Haken. \end{theorem} This result will follow from two lemmas. A {\em spanning annulus} in a compression body $C$ is a properly embedded annulus with a boundary component on each of $\partial_{\pm}C$. \begin{lemma} \label{lemma:Haken1} Let $C$ be a compression body. Let $S$ be a genus--$0$ compact surface with no closed component. Assume $(S,\partial S)$ is embedded in $(C,\partial_+C)$. If $S$ is incompressible and boundary-incompressible, then $S$ is a union of disks. \end{lemma} \begin{proof} We construct a family $\Ff$ of disks and annuli properly embedded in $C$, as follows. First take a defining disk system for $C$. Then, for each component $F$ of $\partial_-C$: \begin{itemize} \item if $F$ is closed, choose a collection of simple closed curves $(\alpha_i,\beta_i)_{i\in I}$ on $F$ which are pairwise disjoint, except that for all $i$, $\alpha_i$ intersects $\beta_i$ transversely once, and which cut $F$ into a disk; take spanning annuli $A_i\cong\alpha_i\times I$ and $B_i\cong\beta_i\times I$, \item if $F$ has a non-empty boundary, choose a collection of properly embedded arcs $(\gamma_i)_{i\in J}$ which cut $F$ into a disk; take spanning annuli $C_i\cong\gamma_i\times I$. \end{itemize} We assume that all these disks and annuli are pairwise disjoint, except each pair $(A_i,B_i)$ which intersects along an arc. Note that cutting $C$ along all the disks and annuli in $\Ff$ gives a $3$--ball. Suppose $S$ meets $F\in\Ff$ along a simple closed curve $\gamma$ that bounds a disk in $F$. Since $S$ is incompressible, $\gamma$ also bounds a disk in $S$. It then follows from the irreducibility of $C$ that $F$ can be isotoped in order to remove this intersection. By a standard innermost disk argument, we see that all such intersections can be removed while keeping the disjointness properties of $\Ff$.
Similarly, by $\partial$--incompressibility of $S$ and irreducibility of $C$, we can assume that $S$ never meets an $F\in\Ff$ along a properly embedded arc which is non-essential in $F$. The last possibility is that $S$ intersects an annulus $A_i$ ({\em resp} $B_i$) along an essential curve~$\xi$. This implies that $S$ also meets the annulus $B_i$ ({\em resp} $A_i$) along an essential curve $\zeta$ (the non-essential intersection curves have already been removed). But then a tubular neighborhood of $\xi\cup\zeta$ in $S$ is a punctured torus. This is impossible since $S$ is a genus--$0$ surface. Finally, we can assume that $S$ is disjoint from every element of $\Ff$. Cutting $C$ along all disks and annuli in $\Ff$ gives a $3$--ball $B$. Take a boundary component $\sigma$ of $S$ which is innermost in $\partial B$. The curve $\sigma$ bounds a disk in~$B$, so, since $S$ is incompressible, this component of $S$ is a disk. Iterating the argument, we see that $S$ is a disjoint union of disks. \end{proof} \begin{lemma}[\textup{\cite[Lemma II.8]{Jaco}}] \label{lemma:Haken2} Let $S$ be a genus--$0$ compact surface which is not a union of disks. Let $\alpha_1,\dots,\alpha_n$ be disjoint properly embedded arcs in $S$ such that: \begin{itemize} \item $S\ca\cup_{i=1}^n\alpha_i$ is a union of disks, \item for all $i$, $\alpha_i$ is essential in $S\ca\cup_{j=i+1}^n\alpha_j$. \end{itemize} Then $S\ca\cup_{i=1}^n\alpha_i$ has strictly fewer components than $\partial S$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{th:Haken}] For $i=1,2$, set $S_i=S\cap C_i$. Note that, if both $S_1$ and $S_2$ are unions of disks, then they are connected and $S$ is Haken. Assume $S_i$ is not a union of disks. Perform on $S_i$ as many compressions as possible. At each compression, $S$ is surgered into two spheres, at least one of which is essential ({\em resp} surviving); keep that one. Note that the number of components of $\partial S_i$ cannot increase during this process. If $S_i$ is still not a union of disks, perform on $S_i$ as many $\partial$--compressions as possible. Thanks to Lemma~\ref{lemma:Haken1}, $S_i$ is now a union of disks, and by Lemma~\ref{lemma:Haken2}, the number of components of $\partial S_i$ has strictly decreased. Iterate this process until $S\cap\Sigma$ is connected. \end{proof} We now prove a strong Haken's theorem for sutured Heegaard splittings, following Hensel and Schultens~\cite{HS}. We need a condition on the splitting. We say that a sutured Heegaard splitting $M=C_1\cup_\Sigma C_2$ is {\em admissible} if every $2$--sphere in $\partial M$ meets $\Sigma$ along a connected curve. \begin{theorem} \label{th:StrongHaken} Let $M=C_1\cup_\Sigma C_2$ be an admissible sutured Heegaard splitting. Every essential embedded $2$--sphere in $M$ is isotopic to a Haken sphere. \end{theorem} \begin{lemma} \label{lemma:non-surviving} In a sutured Heegaard splitting, every non-surviving sphere is isotopic to a Haken sphere. \end{lemma} \begin{proof} The proof of \cite[Lemma 3.7]{HS} applies verbatim. \end{proof} The proof of Theorem~\ref{th:StrongHaken} involves the graph of surviving spheres $\Ss(M)$, defined as follows. \begin{itemize} \item The vertices are the isotopy classes of surviving spheres in $M$. \item There is an edge between two vertices if the corresponding isotopy classes can be realized by disjoint spheres. \end{itemize} \begin{lemma} The graph $\Ss(M)$ is connected. \end{lemma} \begin{proof} Assume there are non-isotopic surviving spheres $S$ and $S'$ in $M$.
Assume they minimize their number of intersection curves within their isotopy classes. If they do intersect, choose an intersection curve $c$ which is innermost in $S$ and thus bounds a disk $\delta\subset S$ whose interior is disjoint from $S'$. Then surger $S'$ along $\delta$: this provides two embedded spheres in $M$, disjoint from $S'$ and having fewer intersection curves with $S$, at least one of which is surviving. Iterating, we get a path between the isotopy classes of $S$ and $S'$ in $\Ss(M)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:StrongHaken}] We proceed by induction on $(g,n)=(g(\Sigma),|\partial\Sigma|)$ with lexicographic order. If $g=0$, then $M$ is a punctured $S^3$, hence it contains no surviving sphere and Lemma~\ref{lemma:non-surviving} concludes. Now fix $(g,n)\succ(0,3)$. We first prove the following claim: if $S\subset M$ is a surviving Haken sphere and $T\subset M$ is a surviving sphere disjoint from $S$, then $T$ is isotopic to a Haken sphere. Indeed, let $N$ be the component of $M\ca S$ which contains $T$ (the manifold $M\ca S$ has one or two components). The Heegaard splitting induced on $N$ by that of $M$ has either a strictly lower genus, or a Heegaard surface with strictly fewer boundary components. Hence we can apply the inductive hypothesis. Now, the result is given by Lemma~\ref{lemma:non-surviving} if $M$ contains no surviving sphere. Otherwise, $M$ contains a Haken sphere $S_0$ by Theorem~\ref{th:Haken}. If $S$ is an essential $2$--sphere in $M$ non-isotopic to $S_0$, either it is non-surviving and we apply Lemma~\ref{lemma:non-surviving}, or it is surviving and there is a path of surviving spheres from $S_0$ to $S$ in which successive spheres are disjoint, and we apply the above claim. \end{proof} \subsection{Sutured Heegaard splittings of double compression bodies} \label{sec:doublecomp} We are interested in understanding the Heegaard splittings of {\em double compression bodies}, namely compact $3$--manifolds obtained by gluing two copies of a given compression body along their positive boundaries {\em via} the identity map. Such a double compression body can be written as $\big(\sharp(P\times I)\big)\sharp\big(\sharp^k (S^1\times S^2)\big)$, where $P$ is a compact surface, connected or not, possibly empty, containing no $2$--sphere, and $\sharp(P\times I)$ is the connected sum of all the components of $(P\times I)$. We define a sutured decomposition of $\partial M$ as follows: $P\times \{0\}\subset\partial_0 M$, $P\times\{1\}\subset\partial_1 M$, and the suture is $s=\partial P\times\{\frac12\}$. Thanks to Theorem~\ref{th:StrongHaken}, we essentially need to understand the splittings of products $P\times I$ with $P$ connected, possibly with punctures. \begin{proposition} \label{prop:HSpuncproduct} Let $P$ be a compact connected oriented surface of genus $g\geq0$. Let $B_1,\dots,B_k$ be disjoint closed $3$--balls embedded in the interior of $P\times I$, each of which intersects $P\times\{\frac12\}$ transversely along a $2$--disk, where $k\geq0$. Set $\Bb=\cup_{i=1}^kB_i$, $M=(P\times I)\setminus\Int(\Bb)$, and $\Sigma_0=M\cap(P\times\{\frac12\})$. Define a sutured decomposition of $\partial M$ with the suture $s=\partial\Sigma_0$, $P\times \{0\}\subset\partial_0 M$, and $P\times\{1\}\subset\partial_1 M$. Then the Heegaard splitting of $M$ defined by the Heegaard surface $\Sigma_0$ is the unique genus--$g$ Heegaard splitting of $M$ up to isotopy. It is the minimal genus Heegaard splitting of $M$.
\end{proposition} \begin{proof} Let $M=C_1\cup_\Sigma C_2$ be a Heegaard splitting of $M$. Then $\Sigma$ is the positive boundary of $C_1$, whose negative boundary is the union of a copy of $P$ and some $2$--disks. It follows that the genus of $\Sigma$ is at least that of $P$. Hence $\Sigma_0$ defines a minimal genus Heegaard splitting of $M$. We now assume that $g(\Sigma)=g(P)$. For $i=1,\dots,k$, choose a properly embedded arc $\gamma_i$ in $C_1$ with an end on $\partial B_i\cap C_1$ and the other end on $P\times\{0\}$ (hence the ends of $\gamma_i$ lie on different components of $\partial_-C_1$). Since $C_1$ is obtained from a thickening of $\partial_-C_1$ by gluing $k$ $1$--handles (which make $C_1$ connected), the arcs $\gamma_i$ can be chosen so that $C_1$ is a regular neighborhood of $\partial_-C_1\cup\big(\cup_{i=1}^k\gamma_i\big)$. Hence the uniqueness of the Heegaard splitting will follow from the uniqueness of the collection of arcs $(\gamma_i)$ up to isotopy. We consider here isotopies within similar collections of arcs, which means that the ends are not fixed, but they must remain in $\partial_-C_1$. Such collections of arcs are clearly homotopic, so we need to show that homotopic collections are isotopic. We shall see that we can allow a strand to cross another. Indeed, any point of an arc $\gamma_i$ is the center of a properly embedded disk in $C_1$ whose boundary $c$ is parallel to the component of the suture $s$ which lies on $\partial B_i$. This curve $c$ also bounds a disk $\delta$ in $C_2$. Hence we can slide part of any $\gamma_j$ (including $\gamma_i$) along this disk $\delta$ in order to realize a crossing with $\gamma_i$. \end{proof} \begin{theorem} \label{th:doublecompHS} Consider a double compression body $M=\big(\sharp(P\times I)\big)\sharp\big(\sharp^k (S^1\times S^2)\big)$, where $P$ is a compact surface which contains no $2$--sphere and $k\geq0$, with the sutured decomposition of $\partial M$ defined above. The sutured manifold $M$ admits a unique minimal genus Heegaard splitting up to isotopy, and this minimal genus is $g(P)+k$. \end{theorem} \begin{proof} If $F$ is a component of $P$, the minimal genus Heegaard splitting of $F\times I$ is given in Proposition~\ref{prop:HSpuncproduct}. The components $S^1\times S^2$ have a standard genus--$1$ splitting. The connected sum of all these splittings gives a Heegaard splitting of $M$ of genus $g(P)+k$. Now let $M=C_1\cup_\Sigma C_2$ be any Heegaard splitting of $M$. First assume that $k>0$. Then, thanks to Theorem~\ref{th:StrongHaken}, there is a non-separating essential sphere $S\subset M$ which is Haken. Cutting along this sphere and adding $S\cap\Sigma$ to the suture produces a Heegaard splitting of $\big(\sharp(P'\times I)\big)\sharp\big(\sharp^{k-1} (S^1\times S^2)\big)$, where $P'$ is the disjoint union of $P$ and two $2$--disks, and the genus has decreased by one. Hence, by an inductive argument, we are led to the case $k=0$. We proceed by induction on $(g,n)=(g(\Sigma),|\partial\Sigma|)$ with lexicographic order. The case $(g,n)=(0,0)$ is reduced to $M=S^3$, which has a unique genus--$0$ splitting. For $(g,n)\succ(0,0)$, first assume that $M$ contains an essential $2$--sphere $S$ which is non peripheral ({\em ie} non-parallel to a boundary component). By Theorem~\ref{th:StrongHaken}, we can assume that $S$ is Haken. Hence, cutting along $S$ provides two double compression bodies, with Heegaard splittings whose genera add up to give $g(\Sigma)$, and for both the value of $(g,n)$ has strictly decreased. 
We are led to the case when $M$ contains no essential non-peripheral $2$--sphere, which gives three possibilities: $P$ is connected, $P$ is the disjoint union of a connected surface and a disk, or $P$ is the disjoint union of three disks. All three cases are covered by Proposition~\ref{prop:HSpuncproduct}. \end{proof} In \cite{Wald}, Waldhausen classified the Heegaard splittings of the $3$--sphere. In \cite{ST}, Scharlemann and Thompson classified the Heegaard splittings of a product $S\times I$ where $S$ is a closed surface. Together with the strong Haken theorem and the above result, these classifications provide a full classification of Heegaard splittings of double compression bodies, with the sutured decomposition we have fixed. \begin{theorem}[Waldhausen] Every Heegaard splitting of $S^3$ of positive genus is stabilized. \end{theorem} This result and the Haken theorem provide a classification of Heegaard splittings of $\sharp^g(S^1\times S^2)$ up to diffeomorphism. As explained in \cite{HS}, one can get a classification up to isotopy, using the strong Haken theorem and the classification of genus--$0$ splittings of a punctured $3$--ball (\cite[Theorem~3.3]{HS}, also a particular case of Proposition~\ref{prop:HSpuncproduct}). This is recovered in Theorem~\ref{th:doublecompHSfull} below (with $P$ empty). \begin{theorem}[Scharlemann--Thompson] \label{th:ST} Let $M=P\times I$, where $P$ is a compact connected oriented surface of genus $g$, with the sutured decomposition of $\partial M$ defined above. Every Heegaard splitting of genus $n>g$ of $M$ is stabilized. \end{theorem} \begin{proof} This is proven in \cite{ST} for $P$ closed. The general case is deduced as follows. For each component $c$ of~$\partial P$, we fill in $c\times I$ with a solid tube and we cap off the Heegaard surface with a disk. The closed case tells us that the splitting we obtain is stabilized. The solid tubes we added do not interact with the stabilization, so the original Heegaard splitting was already stabilized. \end{proof} \begin{theorem} \label{th:doublecompHSfull} Consider a double compression body $M=\big(\sharp(P\times I)\big)\sharp\big(\sharp^k (S^1\times S^2)\big)$, where $P$ is a compact surface which contains no $2$--sphere and $k\geq0$, with the sutured decomposition of $\partial M$ defined above. For all $n\geq g(P)+k$, the sutured manifold $M$ admits a unique Heegaard splitting of genus $n$ up to isotopy. \end{theorem} \begin{proof} Assume $n>g(P)+k$. We need to prove that, in this case, the Heegaard splitting is stabilized. As in the proof of Theorem~\ref{th:doublecompHS}, cutting along non-separating Haken spheres, we reduce the problem to the case $k=0$. Thanks to Theorem~\ref{th:StrongHaken}, there are Haken spheres which realize $M$ as the connected sum of some products $F\times I$ with $F$ a connected surface. We cut along these Haken spheres and then fill in each created boundary with a $3$--ball containing a properly embedded $2$--disk which caps the Heegaard surface. We obtain a Heegaard splitting on each $F\times I$, at least one of which has genus greater than $g(F)$. By Theorem~\ref{th:ST}, this splitting is stabilized, so that our initial splitting of $M$ was already stabilized. \end{proof} \section{Diffeomorphisms of $4$--dimensional compression bodies and relative trisections} \label{secLPrel} \subsection{A relative Laudenbach--Poénaru's theorem} In this section, we adapt our proof of Laudenbach--Poénaru's theorem to a relative setting. That proof relied on two preliminary results.
The first one is the classification of minimal genus splittings of double handlebodies. We generalized it in Theorem~\ref{th:doublecompHS} to double compression bodies. The second one is the fact that every diffeomorphism of a $3$--dimensional handlebody which is the identity on the boundary is isotopic to the identity. We now generalize it to compression bodies. \begin{lemma} Let $C$ be a compression body. Every two defining disk systems for $C$ which coincide on the boundary are isotopic. \end{lemma} \begin{proof} We have proved in Lemma~\ref{lemma:compirr} that compression bodies are irreducible. Hence a standard innermost disk argument concludes. \end{proof} \begin{lemma} \label{lemma:diffeocompbody} Let $C$ be a compression body. Let $\varphi$ be a diffeomorphism of $C$. If $\varphi$ is the identity on $\partial_+ C$, then $\varphi$ is isotopic to the identity, relative to the positive boundary. \end{lemma} \begin{proof} Pick a defining disk system $\Dd$ for $C$. By the same reasoning as in the proof of Lemma~\ref{lemma:diffeohandlebody}, we can assume that $\varphi$ is the identity on $\partial_+C\cup\Dd$. We are led to a diffeomorphism of a product $\partial_-C\times I$ which is the identity on $\partial_-C\times\{1\}$. Interpolating with the identity provides the required isotopy. \end{proof} \begin{definition} A \emph{(lensed) hyper compression body} $V$ is a smooth connected manifold constructed as follows: \begin{itemize} \item start with $M\times I$ where $M$ is a compact oriented $3$--manifold, \item glue $4$--dimensional $1$--handles along $M\times\{1\}$, \item collapse the vertical boundary along the $I$--factor. \end{itemize} The {\em negative boundary} $\partial_-V$ is defined as $M\times\{0\}$ and the {\em positive boundary} is $M\times\{1\}$, so that $\partial V=\partial_-V\cup_\partial\partial_+V$. \end{definition} We will consider hyper compression bodies with a specific condition on the negative boundary. We say that a hyper compression body $V$ is {\em $P$--based} if $\partial_-V$ is a trivial compression body $P\times I$, where $P$ is a compact oriented surface. Note that it comes with a sutured decomposition of its boundary, defined as in Section~\ref{sec:doublecomp}. Further, the positive boundary is diffeomorphic to a double compression body $\big(\sharp(P\times I)\big)\sharp\big(\sharp^k (S^1\times S^2)\big)$, again with a sutured structure. \begin{theorem} \label{th:LPrel} Let $V$ be a $P$--based hyper compression body. Assume that $P$ contains no $2$--sphere. Then every diffeomorphism of $\partial_+V$ extends to a diffeomorphism of $V$. \end{theorem} \begin{proof} Like in the proof of Theorem~\ref{th:LP}, we see that there is a foliation of $V$ by compression bodies~$C_t$, $t\in[0,1]$, which intersect along their positive boundary $\Sigma=\partial_+C_t$, such that $C_0=\partial_0(\partial_+V)$ and $C_1=\partial_1(\partial_+V)$. We conclude with the very same argument, using Theorem~\ref{th:doublecompHS} instead of Theorem~\ref{th:CO} and Lemma~\ref{lemma:diffeocompbody} instead of Lemma~\ref{lemma:diffeohandlebody}. \end{proof} \subsection{Diagrams of $4$--dimensional multisections} In this section, we apply Theorem~\ref{th:LPrel} to the problematic of diagrams in the setting of relative multisections in the sense of Islambouli--Naylor \cite{IN}, a generalization of Gay and Kirby's trisections \cite{GayKirby}. 
\begin{definition}\label{def:Multisection} A \emph{multisection} of a compact oriented $4$--manifold $X$ is a decomposition $X=\cup_{i=1}^nX_i$ with the following properties (all arithmetic involving indices is mod $n$): \begin{enumerate} \item $\displaystyle\Sigma=\bigcap_{i=1}^n X_i$ is a compact connected oriented surface, \item when $|i-j|>1$, $X_i\cap X_j=\Sigma$. \item $C_i=X_i\cap X_{i+1}$ is a $3$--dimensional compression body satisfying $\partial_+C_i=\Sigma$ and $\partial_-C_i=C_i\cap \partial X$, \item there is a compact oriented surface $P$, which contains no $S^2$, such that each $X_i$ is a $P$--based hyper compression body with $\partial_+X_i=C_{i-1}\cup_\Sigma C_i$ and $\partial_-X_i=X_i\cap \partial X$, and the natural sutured decomposition of $\partial(\partial_-X_i)$ coincides with the decomposition $\partial(\partial_-X_i)=\partial_-C_{i-1}\cup\partial_-C_i$, \item the surface $\Sigma$ is smoothly properly embedded in $X$, the $C_i$ are submanifolds with corners whose codimension--$2$ stratum is $\partial\Sigma$, the $X_i$ are submanifolds with corners, whose codimension--$2$ stratum is $\Sigma\cup\partial_-C_{i-1}\cup\partial_-C_i$ and whose codimension--$3$ stratum is $\partial\Sigma$. \end{enumerate} A multisection is called a \emph{trisection} when $n=3$. \end{definition} \begin{figure}[htb] \begin{center} \begin{tikzpicture} [scale=0.25] \draw (0,0) circle (5); \foreach \s in{1,...,5,6} { \draw[rotate=60*(\s+1)] (0,0) -- (5,0); \draw[rotate=60*(\s+1)] (6,0) node {$C_\s$}; \draw[rotate=60*\s+30] (3,0) node {$X_\s$};} \draw (0,0) node {$\scriptstyle\bullet$} (0.2,-0.2) node[above right] {$\scriptstyle\Sigma$}; \end{tikzpicture} \end{center} \caption{Schematic of a multisection} \label{fig:multisection} \end{figure} A {\em diagram} of such a multisection is a tuple $(\Sigma;\alpha_1,\dots,\alpha_n)$ where $\Sigma$ is the central surface of the multisection and $\alpha_i$ is a cut-system for $C_i$. Note that the positive boundaries of the $X_i$ are double compression bodies, so that Theorem~\ref{th:doublecompHSfull} implies that each subdiagram $(\Sigma;\alpha_{i-1},\alpha_i)$ is handleslide diffeomorphic to a diagram as represented in Figure~\ref{fig:StandardDiagram}. Note that any abstract diagram satisfying this property is a diagram of some multisected $4$--manifold. When the surface $P$ has no closed component, Castro, Gay and Pinz\'on-Caicedo proved that a multisection diagram determines a unique $4$--manifold up to isotopy \cite{CGPC2}. Theorem~\ref{th:LPrel} allows us to give a simple proof of this fact and to extend it to any surface $P$ containing no $S^2$--component. We will see below that this is optimal. \begin{figure}[htb] \begin{center} \begin{tikzpicture} [xscale=0.3,yscale=0.28] \newcommand{\trou}{ (2,0) ..controls +(0.5,-0.25) and +(-0.5,-0.25) .. (4,0) (2.3,-0.1) ..controls +(0.6,0.2) and +(-0.6,0.2) .. (3.7,-0.1)} \draw (0,0) ..controls +(0,1) and +(-2,1) .. (4,2); \draw (4,2) ..controls +(1,-0.5) and +(-1,0) .. (6,1.25); \draw[dashed] (6,1.25); \draw (0,0) ..controls +(0,-1) and +(-2,-1) .. (4,-2); \draw (4,-2) ..controls +(1,0.5) and +(-1,0) .. (6,-1.25); \foreach \x/\y in {6/0,18/0} { \begin{scope} [xshift=\x cm,yshift=\y cm] \draw (0,1.25) ..controls +(1,0) and +(-2,1) .. (4,2); \draw (4,2) ..controls +(2,-1) and +(-2,-1) .. (8,2); \draw (8,2) ..controls +(2,1) and +(-1.2,0) .. (12,1.25); \draw (0,-1.25) ..controls +(1,0) and +(-2,-1) .. (4,-2); \draw (4,-2) ..controls +(2,1) and +(-2,1) .. (8,-2); \draw (8,-2) ..controls +(2,-1) and +(-1.2,0) .. 
(12,-1.25); \end{scope}} \foreach \x in {0,6,12,18,24} { \draw[xshift=\x cm] \trou;} \foreach \x in {0,6} { \draw[color=red,xshift=\x cm] (3,-0.2) ..controls +(0.2,-0.5) and +(0.2,0.5) .. (3,-2.3); \draw[dashed,color=red,xshift=\x cm] (3,-0.2) ..controls +(-0.2,-0.5) and +(-0.2,0.5) .. (3,-2.3); \draw[color=blue,xshift=\x cm] (3,0)ellipse(1.6 and 0.8);} \foreach \x in {11.9,17.9,23.9} { \draw[color=red,xshift=\x cm] (3,-0.2) ..controls +(0.2,-0.5) and +(0.2,0.5) .. (3,-2.3); \draw[dashed,color=red,xshift=\x cm] (3,-0.2) ..controls +(-0.2,-0.5) and +(-0.2,0.5) .. (3,-2.3);} \foreach \x in {12.2,18.2,24.2} { \draw[color=blue,xshift=\x cm] (3,-0.2) ..controls +(0.2,-0.5) and +(0.2,0.5) .. (3,-2.3); \draw[dashed,color=blue,xshift=\x cm] (3,-0.2) ..controls +(-0.2,-0.5) and +(-0.2,0.5) .. (3,-2.3);} \begin{scope}[yscale=0.8] \begin{scope} [xshift=38cm,yshift=3 cm] \draw (0,1.25) ..controls +(1,0) and +(-2,1) .. (4,2); \draw (4,2) ..controls +(2,-1) and +(-2,-1) .. (8,2); \draw (8,2) ..controls +(2,1) and +(-1.2,0) .. (12,1.25); \draw (0,-1.25) ..controls +(1,0) and +(-2,-1) .. (4,-2); \draw (4,-2) ..controls +(2,1) and +(-2,1) .. (8,-2); \draw (8,-2) ..controls +(2,-1) and +(-1.2,0) .. (12,-1.25); \end{scope} \foreach \x/\y in {38/3,44/3,38/-3,38/9} { \draw[xshift=\x cm,yshift=\y cm] \trou;} \foreach \x/\y in {44/-3,50/3}{ \begin{scope} [xshift=\x cm,yshift=\y cm] \draw (0,1.25) ..controls +(0.4,-1) and +(0.4,1) .. (0,-1.25); \draw[dashed] (0,1.25) ..controls +(-0.4,-1) and +(-0.4,1) .. (0,-1.25); \end{scope}} \begin{scope} [xshift=26cm,yshift=-3cm] \draw (14,2) ..controls +(2,1) and +(-1.2,0) .. (18,1.25); \draw (14,-2) ..controls +(2,-1) and +(-1.2,0) .. (18,-1.25); \draw (12,1.25) ..controls +(0.5,0) and +(-1,-0.5) .. (14,2); \draw (12,-1.25) ..controls +(0.5,0) and +(-1,0.5) .. (14,-2); \end{scope} \begin{scope} [xshift=26cm,yshift=9cm] \draw (14,2) ..controls +(2,1) and +(-1,0) .. (18,2); \draw (14,-2) ..controls +(2,-1) and +(-1,0) .. (18,-2); \draw (12,1.25) ..controls +(0.5,0) and +(-1,-0.5) .. (14,2); \draw (12,-1.25) ..controls +(0.5,0) and +(-1,0.5) .. (14,-2); \end{scope} \begin{scope} [xshift=26cm,yshift=-9cm] \draw (12,1.25) ..controls +(2,0) and +(-2,0) .. (18,2); \draw (12,-1.25) ..controls +(2,0) and +(-2,0) .. (18,-2); \end{scope} \foreach \y in {9,-9} { \begin{scope} [xshift=44cm,yshift=\y cm] \foreach \s in {1,-1} { \draw (0,2*\s) ..controls +(0.2,-0.5*\s) and +(0.2,0.5*\s) .. (0,0.5*\s); \draw[dashed] (0,2*\s) ..controls +(-0.2,-0.5*\s) and +(-0.2,0.5*\s) .. (0,0.5*\s);} \draw (0,0.5) arc (90:270:0.5); \end{scope}} \foreach \y in {-6,0,6} { \draw (38,1.75+\y) arc (90:270:1.75);} \foreach \s in {1,-1} { \draw (30,1.56*\s) .. controls +(3,0) and +(-5,0) .. (38,10.25*\s);} \foreach \y in {-9,-3,3} { \foreach \x/\c in {37.9/red,38.2/blue} { \begin{scope} [xshift=\x cm,yshift=\y cm,\c] \draw (0,1.25) ..controls +(0.4,-1) and +(0.4,1) .. (0,-1.25); \draw[dashed] (0,1.25) ..controls +(-0.4,-1) and +(-0.4,1) .. (0,-1.25); \end{scope}}} \end{scope} \end{tikzpicture} \end{center} \caption{Heegaard diagram for $C_{i-1}\cup C_i$} \label{fig:StandardDiagram} \end{figure} \begin{proposition} \label{prop:diagrams} A multisection diagram determines a unique compact $4$--manifold up to diffeomorphism. 
\end{proposition} \begin{proof} Let $X$ and $X'$ be multisected $4$--manifolds with diffeomorphic multisection diagrams $(\Sigma;\alpha_1,\dots,\alpha_n)$ and $(\Sigma';\alpha'_1,\dots,\alpha'_n)$, meaning that there is a diffeomorphism $\varphi:\Sigma\to\Sigma'$ such that $\varphi(\alpha_i)=\alpha'_i$. First extend $\varphi$ along defining disk systems for the $3$--dimensional compression bodies of the multisection. Then extend it to the whole compression bodies (which amounts to extending a diffeomorphism from $S^2$ to $B^3$ or from $P\times\{1\}$ to $P\times I$). Finally extend $\varphi$ to the $4$--dimensional compression bodies using Theorem~\ref{th:LPrel}. \end{proof} \subsection{The bad case} In this section, we discuss the failure of Proposition~\ref{prop:diagrams} in the case when the surface~$P$ is allowed to contain some $2$--spheres. Accordingly, we allow $2$--spheres in the negative boundary of a $3$--dimensional compression body. We will analyse an example pointed out by David Gay. Start with a torus $\Sigma=S^1\times S^1$ and three parallel essential simple closed curves $\alpha_i\subset\Sigma$. Form the product $\Sigma\times\Delta$ where $\Delta$ is a $2$--disk, see Figure~\ref{figXn}. Choose three distinct points $p_i\in\partial\Delta$, $i=1,2,3$, and glue a $4$--dimensional $2$--handle $D_i\times D^2$ along $\alpha_i\times\{p_i\}$ for each $i$. More precisely glue the handle along $A_i\times[x_i,y_i]$ where $A_i$ is a closed tubular neighborhood of $\alpha_i$ in $\Sigma$ and $[x_i,y_i]$ an interval around $p_i$ in~$\partial\Delta$, so that $D_i\times\{p_i\}$ is the core of the handle, see Figure~\ref{figXn} for the order of the points on the circle. It remains to glue some $3$--handles. \begin{figure}[htb] \begin{center} \begin{tikzpicture} \begin{scope} [scale=0.7] \foreach \s in {-1,1} \draw (0,0) .. controls +(0,\s) and +(-1,0) .. (3,1.5*\s) .. controls +(1,0) and +(0,\s) .. (6,0); \draw (2,0) ..controls +(0.5,-0.25) and +(-0.5,-0.25) .. (4,0); \draw (2.3,-0.1) ..controls +(0.6,0.2) and +(-0.6,0.2) .. (3.7,-0.1); \draw[blue] (3,-0.2) ..controls +(0.2,-0.5) and +(0.2,0.5) .. (3,-1.5); \draw[dashed,blue] (3,-0.2) ..controls +(-0.2,-0.5) and +(-0.2,0.5) .. (3,-1.5)node[below] {$\scriptstyle{\alpha_2}$}; \draw[red] (0.85,1) node[above] {$\scriptstyle{\alpha_1}$} .. controls +(0.5,0) and +(-0.3,0.5) .. (2.5,-0.03); \draw[red,dashed] (0.85,1) .. controls +(0.3,-0.5) and +(-0.5,0.1) .. (2.5,-0.03); \draw[green] (5.15,1) node[above] {$\scriptstyle{\alpha_3}$} .. controls +(-0.5,0) and +(0.3,0.5) .. (3.5,-0.03); \draw[green,dashed] (5.15,1) .. controls +(-0.3,-0.5) and +(0.5,0.1) .. 
(3.5,-0.03); \draw (3,0.8) node {$\scriptstyle{\widetilde\Sigma_1}$}; \draw (1.5,-0.4) node {$\scriptstyle{\widetilde\Sigma_2}$}; \draw (4.5,-0.4) node {$\scriptstyle{\widetilde\Sigma_3}$}; \end{scope} \begin{scope} [xshift=8cm] \draw (0,0) node {$\scriptstyle{\Sigma\times\Delta}$} circle (1) circle (2); \foreach \t/\i in {150/1,270/2,30/3} { \foreach \s in {-25,25} \draw[rotate=\t+\s] (1,0) -- (2,0); \draw[rotate=\t] (1.5,0) node {$\scriptstyle{D_\i\times D^2}$}; \draw[rotate=\t] (1,0) node {$\scriptscriptstyle{\bullet}$} (0.75,0) node {$\scriptstyle{p_\i}$}; \draw[rotate=\t-25] (1,0) node {$\scriptscriptstyle{\bullet}$} (0.75,0) node {$\scriptstyle{x_\i}$}; \draw[rotate=\t+25] (1,0) node {$\scriptscriptstyle{\bullet}$} (0.75,0) node {$\scriptstyle{y_\i}$}; \draw[rotate=\t-60] (1.5,0) node {$\scriptstyle{B_\i}$}; } \end{scope} \end{tikzpicture} \caption{Decomposition of the surface $\Sigma$ and schematic of the construction of the manifolds $X_n$} \label{figXn} \end{center} \end{figure} Define a foliation of $\Sigma$ by simple closed curves $\alpha_t$, $t\in[0,3]$, such that, for $i=1,2,3$, $\alpha_i$ is the curve previously defined, and $\alpha_0=\alpha_3$. From now on, the indices $i$ are considered modulo $3$. For each $i$, let $t\in[0,1]\mapsto q_i(t)\in\partial\Delta$ be an injective path from $y_{i-1}$ to $x_i$, chosen so that $q_1$, $q_2$ and $q_3$ are pairwise disjoint. Set $\Sigma_i=\cup_{t\in[0,1]}\{q_i(t)\}\times\alpha_{t+i-1}$ (Figure~\ref{figXn} represents the projection $\widetilde\Sigma_i$ of $\Sigma_i$ on $\Sigma$). Now let $D_i^x$ be a disk parallel to $D_i$ on the boundary of the $2$--handle and attached to $\alpha_i\times\{x_i\}$. Similarly define $D_{i-1}^y$. For $i=1,2$, glue a $3$--handle $B_i$ along $D_{i-1}^y-D_i^x+\Sigma_i$. In the gluing of $B_3$, we shall give more flexibility in order to produce a family of distinct manifolds. Fix an integer $n\geq0$. Define a path $q_3^n(t)$ from $y_2$ to $x_3$ in $\partial\Delta$ as a concatenation of $q_3$ and $n-1$ full turns around $\Delta$. Set $\Sigma_3^n=\cup_{t\in[0,1]}\{q_3^n(t)\}\times\alpha_{t+2}$, and glue a $3$--handle $B_3$ along $D_2^y-D_3^x+\Sigma_3^n$. We claim that this defines trisected manifolds with boundary $X_n$ sharing a common trisection diagram, namely that of Figure~\ref{figXn}. Since they are constructed by gluing handles, it is easy to compute their homology. We get $H_2(X_n)\cong\Z/n\Z$, so that the $X_n$ are non-homeomorphic manifolds. This failure of Proposition~\ref{prop:diagrams} can be analysed as follows. The $3$--dimensional pieces of the trisections we constructed are punctured solid tori, say $C_i$. These are non-irreducible, and each curve $\alpha_i$ on their positive boundary bounds a family of pairwise non-isotopic properly embedded disks indexed by~$\Z$. This implies that they admit diffeomorphisms that restrict to the identity on the positive boundary, but are non-isotopic to the identity. The non-existence of such diffeomorphisms was a key point in our proof of the relative Laudenbach--Poénaru theorem. One can check that the $X_n$ are related by the following move. Cut $X_n$ along one of the $C_i\setminus\partial_+C_i$ and reglue {\em via} a diffeomorphism of $C_i$ that restricts to the identity on the positive boundary, but is non-isotopic to the identity. 
\def\cprime{$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{thebibliography}{CGPC18} \bibitem[Cer68]{Cerf4} Jean Cerf, \emph{Sur les diff\'eomorphismes de la sph\`ere de dimension trois {$(\Gamma \sb{4}=0)$}}, Lecture Notes in Mathematics, vol. No. 53, Springer-Verlag, Berlin-New York, 1968. \bibitem[CGPC18]{CGPC2} Nickolas~Andres Castro, David~T Gay, and Juanita Pinz{\'o}n-Caicedo, \emph{Trisections of 4--manifolds with boundary}, Proceedings of the National Academy of Sciences \textbf{115} (2018), no.~43, 10861--10868. \bibitem[CO05]{CarOer} Leonardo~N Carvalho and Ulrich Oertel, \emph{A classification of automorphisms of compact 3--manifolds}, arXiv: math/0510610, 2005. \bibitem[Dis23]{Dissler1} Rudy Dissler, \emph{Relative trisections of fiber bundles over the circle}, arXiv:2304.09300, 2023. \bibitem[GK16]{GayKirby} David~T Gay and Robion Kirby, \emph{Trisecting 4--manifolds}, Geometry \& Topology \textbf{20} (2016), no.~6, 3097--3132. \bibitem[Hak68]{Haken} Wolfgang Haken, \emph{Some results on surfaces in 3--manifolds}, Studies in Modern Topology/Prentice-Hall (1968). \bibitem[HS24]{HS} Sebastian Hensel and Jennifer Schultens, \emph{The strong {H}aken theorem via sphere complexes}, Algebraic \& Geometric Topology \textbf{24} (2024), no.~5, 2707--2719. \bibitem[IN24]{IN} Gabriel Islambouli and Patrick Naylor, \emph{Multisections of 4--manifolds}, Transactions of the American Mathematical Society \textbf{377} (2024), no.~2, 1033--1068. \bibitem[Jac80]{Jaco} William Jaco, \emph{Lectures on three-manifold topology}, CBMS Regional Conference Series in Mathematics, vol.~43, American Mathematical Society, Providence, RI, 1980. \bibitem[Juh06]{Juh} Andr{\'a}s Juh{\'a}sz, \emph{Holomorphic discs and sutured manifolds}, Algebraic \& Geometric Topology \textbf{6} (2006), no.~3, 1429--1457. \bibitem[Lau73]{Laudenbach} Fran{\c{c}}ois Laudenbach, \emph{Sur les 2--sph\`eres d'une vari\'et\'e de dimension 3}, Annals of Mathematics \textbf{97} (1973), no.~1, 57--81. \bibitem[LP72]{LP} Fran{\c{c}}ois Laudenbach and Valentin Po{\'e}naru, \emph{A note on 4--dimensional handlebodies}, Bulletin de la Soci{\'e}t{\'e} Math{\'e}matique de France \textbf{100} (1972), 337--344. \bibitem[Sch24]{Scharlemann} Martin Scharlemann, \emph{A strong {H}aken theorem}, Algebraic \& Geometric Topology \textbf{24} (2024), no.~2, 717--753. \bibitem[ST93]{ST} Martin Scharlemann and Abigail Thompson, \emph{Heegaard splittings of (surface)$\times {I}$ are standard}, Mathematische Annalen \textbf{295} (1993), no.~1, 549--564. \bibitem[Wal68]{Wald} Friedhelm Waldhausen, \emph{{H}eegaard-{Z}erlegungen der 3--sph{\"a}re}, Topology \textbf{7} (1968), 195--203. \end{thebibliography} \end{document} \bibliographystyle{amsalpha} \bibliography{/home/delphine/Documents/Biblio/biblio.bib}
2412.10335v1
http://arxiv.org/abs/2412.10335v1
Regular Edges, Matchings and Hilbert Series
\documentclass[12pt]{amsart} \usepackage{amssymb,tikz,tikz-cd} \usepackage{graphicx} \pagestyle{myheadings} \setlength{\oddsidemargin}{0.25in} \setlength{\evensidemargin}{0.25in} \setlength{\textwidth}{6in} \setlength{\textheight}{21.5cm} \setlength{\parskip}{0.15cm} \setlength{\parindent}{0.5cm} \numberwithin{equation}{section} \begin{document} {\theoremstyle{theorem} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{conjecture}[theorem]{\bf Conjecture} \newtheorem{claim}{\bf Claim}[theorem] \newtheorem{lemma}[theorem]{\bf Lemma} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{notation}[theorem]{\bf Notation} } {\theoremstyle{remark} \newtheorem{remark}[theorem]{\bf Remark} \newtheorem{example}[theorem]{\bf Example} } {\theoremstyle{definition} \newtheorem{definition}[theorem]{\bf Definition} \newtheorem{question}[theorem]{\bf Question} } \newenvironment{dedication} {\vspace{6ex}\begin{quotation}\begin{center}\begin{em}} {\par\end{em}\end{center}\end{quotation}} \def\C{{\mathcal C}} \def\height{\operatorname{ht}} \def\Ass{\operatorname{Ass}} \def\Min{\operatorname{Min}} \def\depth{\operatorname{depth}} \def\Edepth{\operatorname{Edge depth}} \def\dim{\operatorname{dim}} \def\Deg{\operatorname{Deg}} \def\qed{\hfill$\Box$} \newcommand{\m}{\mathfrak{m}} \newcommand{\rar}{\rightarrow} \title{Regular Edges, Matchings and Hilbert Series} \author{Joseph Brennan} \address{Department of Mathematics\\ University of Central Florida\\ 4000 Central Florida Blvd.\\ Orlando, FL 32816-1364} \email{[email protected]} \author{Susan Morey} \address{Department of Mathematics \\ Texas State University\\ 601 University Drive\\ San Marcos, TX 78666} \email{[email protected]} \keywords{monomial ideals, edge ideals, Hilbert functions, Hilbert series, regular elements, regular sequences} \subjclass[2010]{13F55, 13D40, 05E40} \begin{abstract} When $I$ is the edge ideal of a graph $G$, we use combinatorial properties, particularly Property $P$ on connectivity of neighbors of an edge, to classify when a binomial sum of vertices is a regular element on $R/I(G)$. Under a mild separability assumption, we identify when such elements can be combined to form a regular sequence. Using these regular sequences, we show that the Hilbert series and corresponding $h$-vector can be calculated from a related graph using a simplified calculation on the $f$-vector, or independence vector, of the related graph. In the case when the graph is Cohen-Macaulay with a perfect matching of regular edges satisfying the separability criterion, the $h$-vector of $R/I(G)$ will be precisely the $f$-vector of the Stanley-Reisner complex of a graph with half as many vertices as $G$. \end{abstract} \maketitle \bibliographystyle{amsalpha} \section{Introduction} The second half of the twentieth century saw the rise of homological methods to address questions in commutative algebra. One of the key elements in the introduction of these methods is the use of induction to reduce questions to the case where a given homological invariant is of minimal value. The most ubiquitous application of this is to be found in the use of regular sequences. Such sequences allow for the iterated application of relations obtained from short exact sequences and hence induction on any invariant (such as the Krull dimension) that decreases on taking the quotient by a regular element.
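For instance, writing $H_S(t)$ for the Hilbert series of a graded ring $S$, if $\theta$ is a homogeneous linear form that is regular on $S$, then the graded short exact sequence
\[ 0 \longrightarrow S(-1) \stackrel{\cdot\theta}{\longrightarrow} S \longrightarrow S/\theta S \longrightarrow 0 \]
gives $H_{S/\theta S}(t)=(1-t)H_S(t)$. Iterating this standard relation along a regular sequence of linear forms is the type of reduction that drives the Hilbert series computations carried out later in this paper.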
Within the last fifty-five years there has also arisen a significant interest in combinatorial structures in commutative algebra. Starting with the work of Hochster, Reisner and Stanley, much effort has been placed upon the recovery of information about ring invariants from combinatorial structures associated to the rings. In particular, starting with the pioneering efforts of Simis, Vasconcelos, and Villarreal there has been a focus on understanding the relationship between structural invariants and properties of graphs and rings associated to a graph. Let \(G\) be a graph with vertex set \(V=V(G)\) and edge set \(E=E(G).\) Let \(k\) be a field. Associated to the graph \(G\) is the ideal \[I(G)=\left<\{xy\in k[V] \>|\> \{x,y\} \in E\}\right>\] and the ring \(k[V]/I(G)\). When studying graphs, there is also a method to employ induction. This involves using contraction and deletion of vertices or edges and studying how structural properties or invariants behave under such processes. This paper examines how these two paths of induction are related. For graphs, the process of contraction of an edge \(\{u,v\}\) is related to the process of taking the quotient by the element \(u-v\) in \(k[V]/I(G).\) We look at the question of when this element is a regular element. Having a regular sequence of linear binomial elements allows for an induction process to proceed on both the algebraic side and the graph theoretic side of the identification. The utility of this idea is showcased in the ability to relate the Hilbert series of one graph to structural invariants of a simpler, smaller graph obtained by edge contractions. In Section 2 we introduce terminology and notation that are critical to this paper. Property \(P\) is the primary graph property that we use. Property \(P\) was originally a property of perfect matchings of graphs. We define it as a property of an edge of the graph. We further give a history of the development and use of Property \(P\) in understanding perfect matchings and its sometimes hidden use in commutative algebra. Section 3 provides the reason that Property \(P\) is central to this paper. We define an edge \(\{u,v\}\) to be a {\em regular edge} in a way that is equivalent to the element \(u-v\) being a regular element of the ring \(k[V]/I(G).\) We then show in Theorem \ref{good=P} that an edge being a regular edge is equivalent to the edge having Property \(P.\) To carry out the induction in parallel between graphs and rings, we must address graphs with loops. Section 4 introduces Property \(P\) for graphs with loops and shows in Theorem \ref{good=P loops} the analogue of Theorem \ref{good=P} for graphs with loops. This is then used in Theorem \ref{reg seq} in Section 5 to connect (not necessarily perfect) matchings whose edges have Property \(P\) and a separability condition to regular sequences on the ring \(R/I(G).\) Section 6 asks and answers the question of the existence and characterization of linear binomial regular elements that are not associated with an edge. Theorem \ref{binomial regular} connects those elements with Property \(P.\) The preceding results are then used in Section 7 to compute the Hilbert series of the ring associated to a graph by reduction as established in \cite{Brennan-Morey}. The paper ends with some illustrative examples that we hope will spark the reader's interest in the methods developed in this paper.
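Before turning to the background material, we illustrate the correspondence between edge contraction and quotients by linear binomials on a small example. Let \(G\) be the \(4\)-cycle on the vertices \(x_1,x_2,x_3,x_4\), so that \(I(G)=(x_1x_2,\, x_2x_3,\, x_3x_4,\, x_4x_1)\). Setting \(x_1=x_2\), that is, passing to the quotient by \(x_1-x_2\), gives
\[k[x_1,x_2,x_3,x_4]/\left(I(G)+(x_1-x_2)\right)\cong k[x_2,x_3,x_4]/(x_2^2,\, x_2x_3,\, x_3x_4,\, x_2x_4),\]
which is the ring associated to the graph obtained from \(G\) by contracting the edge \(\{x_1,x_2\}\), provided the contracted edge is recorded as a loop at \(x_2\); this is one reason graphs with loops enter the picture in Section 4. Moreover, since the edge \(\{x_1,x_2\}\) has Property \(P\) in this example, the results of Section 3 will show that \(x_1-x_2\) is indeed a regular element on \(k[V]/I(G).\)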
\section{Background and Property \(P\)}\label{background} In this section we establish notation and provide graph theoretic terminology and definitions that will be used throughout. For further background and any unexplained terms we refer the reader to \cite{Chartrand-book,West-book}. Let \(G\) be a graph with vertex set \(V=V(G)=\{x_1, \ldots, x_n\}\) and edge set \(E=E(G).\) Let \(k\) be a field. The ring \(R=k[x_1, \ldots, x_n],\) sometimes denoted as \(k[V]\) in the literature, is the polynomial ring over \(k\) whose indeterminates are identified with the vertices of the graph. Associated to the graph \(G\) is the ring \(R/I(G)\) where \[I(G)=\left<\{xy\in R \>|\> \{x,y\} \in E\}\right>.\] A \emph{vertex cover} of the graph \(G\) is a collection of vertices \(X\subseteq V(G)\) such that every edge of \(G\) contains a vertex in \(X.\) The height of the edge ideal, \(\height(I(G)),\) is the cardinality of a minimum vertex cover of \(G\), which Villarreal \cite{MonomialAlgebras}, following Berge \cite{Berge}, denotes by \(\alpha_0\), as in some of the combinatorial optimization literature. This is also denoted by \(\beta\) in some of the graph theoretic literature \cite[Definition 3.1.12]{West-book}. Further, in \cite[(3.35)]{CombinatorialOptimizationA}, Schrijver denotes this by \(\tau(G)\). Since the notation varies and our perspective is algebraic, we will use the height notation. By abuse of notation, we will refer to this invariant as the height of \(G\), defining \(\height(G) = \height(I(G))\) in the ring \(R.\) An \emph{independent set of vertices} of a graph \(G\) is a set of vertices \(Y\subseteq V(G)\) such that no edge has both of its vertices in \(Y.\) A \emph{matching} of \(G\) is a collection of edges \(M\subseteq E(G)\) such that if \(e_1,e_2 \in M\) are distinct, then \(e_1\) and \(e_2\) do not share a common vertex. This is also called an independent set of edges in the literature. The collection of matchings is ordered by inclusion. The largest size of a maximal matching of the graph \(G\) is the matching number, which we will denote by \(\hbox{\rm mat}(G).\) The matching number appears in the graph-theoretic literature \cite[Definition 3.1.12]{West-book} as \(\alpha'(G)\), by Villarreal \cite[Definition 7.17]{MonomialAlgebras} as \(\beta_1(G)\) and by Schrijver \cite[(3.35)]{CombinatorialOptimizationA} as \(\nu(G).\) A \emph{perfect matching} is a matching \(M\) in which every vertex appears in some edge of \(M\). If a graph \(G\) has a perfect matching, then \(G\) does not contain isolated vertices and the cardinality of the matching is precisely one-half the number of vertices of \(G\). A graph \(G\) is \emph{well-covered} if every minimal vertex cover of \(G\) has the same cardinality. This is equivalent to every minimal prime, and thus every associated prime, of \(I(G)\) having the same height. Thus \(G\) is well-covered if and only if \(I(G)\) is unmixed. If, in addition, \(G\) does not have isolated vertices and every minimal vertex cover has precisely \(|V|/2\) elements, then \(G\) is \emph{very well-covered}. \medskip An important concept that has appeared in the study of graphs is called Property \(P.\) Although first defined for perfect matchings, we give a more general definition focused on single edges here.
\begin{definition}\label{Property P} Let \(G\) be a simple graph and \(e=\{x,y\} \in E(G).\) Then \(e\) is said to have {\em Property P} if for any \(a,b \in V(G)\) with \(\{a,x\}, \{y,b\} \in E(G)\) we have \(a \neq b\) and \(\{a,b\} \in E(G).\) \end{definition} A perfect matching has Property \(P\) if every edge in the matching has Property \(P\). The idea of Property \(P\) appeared in the thesis of Staples \cite{StaplesThesis} and independently in \cite{Ravindra}. Recall that the neighbor set of a vertex \(v\) is \( N_G(v)=\{ u \mid \{u,v\} \in E(G)\},\) and the closed neighbor set of a vertex \(v\) is \( N_G[v]=N_G(v) \cup \{v\}.\) Note that Property \(P\) can also be described as follows: for an edge \(\{x,y\}\) with Property \(P,\) the sets \(N_G(x)\setminus\{y\}\) and \(N_G(y)\setminus\{x\}\) are disjoint and the induced graph on their union contains the complete bipartite graph on those two vertex sets. Property \(P\) is closely related to other graph theoretic properties. For instance, the well-covered property can be characterized using Property \(P\). \begin{theorem} \cite{Ravindra,Staples} A bipartite graph without isolated vertices is well covered if and only if there is a perfect matching with Property \(P.\)\end{theorem} Favaron was the first to explicitly coin the term Property \(P\) as a property of perfect matchings. This was used to extend the above result to graphs without isolated vertices, where, for graphs that are not bipartite, very well-covered replaces well-covered. This connection was also independently shown by Staples. \begin{theorem} \cite{Favaron,Staples} A graph without isolated vertices is very well-covered if and only if there exists a perfect matching that satisfies Property \(P.\)\end{theorem} Later Levit and Mandrescu gave an additional criterion for graphs to be very well-covered. Let \(mat(G)\) be the size of a maximal matching of \(G\). \begin{theorem} \cite{Levit} A graph without isolated vertices is well covered if and only if \[ |V(G)| - \height(G)=|E(G)| - mat(G).\] \end{theorem} Subsequently, Rautenbach and Volkmann \cite{R-V} directly proved the equivalence of the Levit and Mandrescu criterion with the existence of a perfect matching with Property \(P.\) Herzog and Hibi used an ordering criterion on the vertices of a bipartite graph to classify Cohen-Macaulay bipartite graphs. \begin{theorem}\cite[Theorem 3.4]{H-H}\label{bipartite} Let \(G\) be a finite bipartite graph on the vertex set \[V(G)= \{x_1,\dots, x_n\} \cup\{y_1,\dots, y_n\}\] and suppose that \begin{itemize} \item for all \(1\leq i\leq n\) one has \(\{x_i,y_i\}\in E(G)\), \item if \(\{x_i,y_j\}\in E(G)\) then \(i\leq j\). \end{itemize} Then \(G\) is Cohen-Macaulay if and only if the condition \[\hbox{\rm If } \{x_i,y_j\}\in E(G) \, \hbox{\rm and } \{x_j, y_k\}\in E(G) \, \hbox{\rm with } i<j<k \, \hbox{\rm then } \{x_i, y_k\}\in E(G)\] holds. \end{theorem} Note that although the language used is different, the final condition above is equivalent to Property \(P\). Using the language of Theorem~\ref{bipartite}, in \cite[Theorem 1.1]{Villarreal}, inspired by the above result of Herzog and Hibi, Villarreal proved that for a bipartite graph \(G\) with a perfect matching, the edges of the perfect matching all satisfy Property \(P\) if and only if \(G\) is unmixed.
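For instance, every edge of the \(4\)-cycle with edges \(\{x_1,x_2\},\{x_2,x_3\},\{x_3,x_4\},\{x_4,x_1\}\) has Property \(P\); the edges \(\{x_1,x_2\}\) and \(\{x_3,x_4\}\) form a perfect matching, and indeed \(I(G)=(x_1x_2,x_2x_3,x_3x_4,x_4x_1)\) is unmixed, its minimal primes being \((x_1,x_3)\) and \((x_2,x_4)\). By contrast, the middle edge \(\{x_2,x_3\}\) of the path on the vertices \(x_1,x_2,x_3,x_4\) does not have Property \(P\), since \(x_1\in N_G(x_2)\), \(x_4\in N_G(x_3)\), and \(\{x_1,x_4\}\) is not an edge.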
Further elaboration by Castrill\'{o}n, Cruz, and Reyes \cite[Proposition 15]{CCR} showed that a K{\"o}nig graph without isolated vertices is unmixed provided that a matching of K{\"o}nig type has Property \(P.\) These ideas were extended to directed graphs in \cite{Pitones} and further extended in the oriented case in \cite{Cruz,CruzReyes,CruzThesis}. The following result of \cite{R-V} is a reformulation of the theorem of \cite{Favaron,Staples} linking Property \(P\) to very well-covered graphs. \begin{proposition}\cite[Corollary 18]{R-V} Let \(G\) be a graph without isolated vertices. Then \(G\) is very well-covered if and only if \begin{enumerate} \item there exists a perfect matching \(F\) with \(\height(G)= |F| =|V(G)|/2\), and \item for each edge \(e=\{x,y\}\in F,\) if \(S\subset V(G)\setminus \{x,y\}\) with \(|S| \leq 2\), then \(\{x,y\} \not\subseteq N_G(S).\) \end{enumerate} \end{proposition} \section{Regular edges}\label{regular} In this section, we show that edges that satisfy Property \(P\) can be used to form regular elements on $R/I(G)$. We first note that Property \(P\) can be reinterpreted as a property of induced subgraphs containing the edge. \begin{lemma}\label{induced} Let \(G\) be a graph and \(e=\{x,y\} \in E(G).\) Then \(e\) has Property \(P\) if and only if for every \(a \in N_G(x)\setminus \{y\}\) and \(b \in N_G(y)\setminus \{x\}\), the induced subgraph on \( \{a,x,y,b\} \) is a \(4\)-cycle. \end{lemma} \begin{proof} This follows directly from the definition of Property \(P\). \end{proof} We now define the key concept of a regular edge. \begin{definition}\label{regular edge} An edge \(e=\{a,b\}\) of a graph \(G\) is a {\em regular} edge if for every associated prime \(Q\) of \(I(G),\) \(ab \not\in Q^2.\) That is, precisely one of \(a\) or \(b\) is in \(Q,\) but not both. \end{definition} The reason for this nomenclature is clear from the following result. \begin{lemma}\label{regular sums} Let \(G\) be a graph and \(\{x,y\}\in E(G)\). Then the following are equivalent: \begin{itemize} \item[\((i)\)] \(\{x,y\}\) is a regular edge of \(G\), \item[\((ii)\)] \(x+y\) is a regular element of the ring \(R/I(G)\), \item[\((iii)\)] \(x-y\) is a regular element of the ring \(R/I(G)\). \end{itemize} \end{lemma} \begin{proof} Assume \(\{x,y\}\) is a regular edge of \(G\). Then by definition, for every associated prime \(Q\) of \(R/I(G)\), precisely one of \(x\) or \(y\) is in \(Q\). Thus \[x+y \not\in \bigcup_{\scalebox{0.6}{ $Q \in \Ass (R/I(G))$}} Q\] and so \(x+y, x-y\) are regular on \(R/I(G)\). Thus \((i) \Rightarrow (ii)\). The argument that \((i) \Rightarrow (iii)\) follows similarly. Assume \(x+y\) is a regular element of \(R/I(G)\). Then \(x+y \not\in Q\) for every \(Q \in \Ass(R/I(G))\). Since \(\{x,y\} \in E(G)\) and every associated prime contains a minimal prime, which corresponds to a minimal vertex cover of \(G\), at least one of \( x,y\) is in \(Q\). If both \( x,y \in Q\), then \( x+y \in Q\), a contradiction. Thus precisely one of \( x,y\) is in \(Q\), and \( xy \not\in Q^2\). Hence \(\{x,y\}\) is a regular edge of \(G\) and \((ii) \Rightarrow (i)\). The proof that \((iii) \Rightarrow (i)\) is similar. \end{proof} In order to establish a relationship between Property \(P\) and regular edges, a first step is to explore potential induced subgraphs containing a regular edge. We first show that a regular edge cannot be contained in an induced \(3\)-cycle.
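Before doing so, we record a small example, with vertex labels of our choosing, illustrating the notion of a regular edge and Lemma~\ref{regular sums}. \begin{example} Let \(G\) be the path on the vertices \(x_1,x_2,x_3,x_4\), so that \(I(G)=(x_1x_2,\,x_2x_3,\,x_3x_4)\). The minimal vertex covers of \(G\) are \(\{x_2,x_3\}\), \(\{x_1,x_3\}\), and \(\{x_2,x_4\}\), giving \[I(G)=(x_2,x_3)\cap(x_1,x_3)\cap(x_2,x_4).\] The edge \(\{x_2,x_3\}\) is contained in the minimal vertex cover \(\{x_2,x_3\}\), so it is not a regular edge; indeed \(x_2+x_3\) is a zerodivisor on \(R/I(G)\) since \((x_2+x_3)\cdot x_1x_4 \in I(G)\) while \(x_1x_4 \not\in I(G)\). On the other hand, each minimal vertex cover contains exactly one of \(x_1,x_2\), so \(\{x_1,x_2\}\) is a regular edge and \(x_1+x_2\) is regular on \(R/I(G)\) by Lemma~\ref{regular sums}. Note that this matches Property \(P\): the edge \(\{x_1,x_2\}\) has Property \(P\), while for \(\{x_2,x_3\}\) the neighbors \(x_1\) of \(x_2\) and \(x_4\) of \(x_3\) are not adjacent. \end{example}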
\begin{lemma}\label{no-triangle} If \(G\) is a graph and \(\{x,y\}\) is a regular edge of \(G,\) then \(\{x,y\}\) is not contained in any triangle in \(G.\) \end{lemma} \begin{proof} Suppose \(b \in V(G)\) is such that \(x,y,b\) form the vertices of a triangle in \(G.\) Set \(W=V(G)\setminus \{b\}.\) Then \(W\) is a vertex cover of \(G.\) Thus there exists \(Q \subseteq W\) such that \(Q\) is a minimal vertex cover of \(G.\) Since \(b \not\in W,\) then \(x\in Q\) because \(\{x,b\}\in E(G)\) and \(y \in Q\) because \(\{y,b\}\in E(G).\) This contradicts the definition of \(\{x,y\}\) being a regular edge. Thus no such \(b\) exists. \end{proof} We now consider potential induced subgraphs using distinct neighbors of the vertices of a regular edge. \begin{lemma}\label{no-3-path} If \(G\) is a graph and \(\{x,y\}\) is a regular edge of \(G,\) then for any \(a,b \in V(G)\) with \(a \in N_G(x)\setminus \{y\}\) and \(b \in N_G(y) \setminus \{x\},\) the induced subgraph of \(G\) on the vertices \(a,x,y,b\) is a \(4\)-cycle. That is, if \(a \in N_G(x)\) and \(b\in N_G(y),\) then \(\{a,b\}\in E(G).\) \end{lemma} \begin{proof} First, notice that if \(a = b,\) or \(\{a,y\}\in E(G),\) or \(\{x,b\}\in E(G)\) then there is a triangle of \( G\) containing \( \{x,y\}\), contradicting Lemma~\ref{no-triangle}. Thus \(a \ne b,\) and \( \{a,y\}\not\in E(G),\) and \(\{x,b\}\not\in E(G).\) Then if $\{a,b\}\not\in E(G)$, set $W=V\setminus \{a,b\}$. Since $W$ is a vertex cover of $G,$ it contains a minimal vertex cover $Q$. By definition, $x \in Q$ since $a \not\in Q$ and $\{x,a\}\in E(G)$ and $y\in Q$ since $b\not\in Q$ and $\{y,b\}\in E(G)$. Thus $x,y \in Q$, a contradiction. Hence $\{a,b\} \in E(G)$ and the induced subgraph of \( G\) on \(a,x,y,b\) is a \(4\)-cycle. \end{proof} Using this information about induced subgraphs, we now show that Property \(P\) is equivalent to regularity for an edge. \begin{theorem} \label{good=P} Let \(G\) be a graph and \(e\) an edge of \(G.\) Then \(e\) is a regular edge if and only if \(e\) satisfies Property \(P.\) \end{theorem} \begin{proof} First assume \(e=\{x,y\}\) is a regular edge of a graph \(G.\) Then by Lemmas~\ref{induced} and \ref{no-3-path}, the edge has Property \(P.\) Now assume \(e=\{x,y\}\) satisfies Property \(P\) and that there exists a minimal vertex cover \(Q\) with \(x,y \in Q.\) Note that neither \(x\) nor \(y\) can be a leaf of \(G\), that is, a vertex with a single neighbor, since \(Q\) is minimal. Moreover, since \(Q\) is minimal, there must exist an edge \(\{a,x\}\) with \(a \not\in Q\) and an edge \(\{y,b\}\) with \(b \not\in Q.\) By Property \(P,\) \(a \neq b\) and \(\{a,b\} \in E(G).\) But \(\{a,b\}\) is not covered by \(Q,\) a contradiction. Thus no minimal vertex cover can contain both \(x\) and \(y,\) so \(\{x,y\}\) is a regular edge. \end{proof} \begin{corollary}\label{leaf-cond} Let \(G\) be a graph and \(f = \{x,y\} \) an edge of \(G\) with \(y\) a leaf. Then an edge \(e=\{x,z\}\) is a regular edge if and only if \(z\) is a leaf. \end{corollary} \begin{proof} Note that if \(z\) is a leaf, then \(e\) vacuously satisfies Property \(P\) and so is a regular edge. For the other implication assume that \(z\) is not a leaf. Then there exists an edge \( \{z,w\}\) with \(w \ne x\). If \(e\) is a regular edge then by Theorem \ref{good=P}, \(e\) has Property \(P\) and hence there is an edge \(\{y,w\}\) contradicting the assumption that \(y\) is a leaf. So, if \(e\) is a regular edge then \(z\) is a leaf.
\end{proof} Corollary \ref{leaf-cond} has an interesting implication. An edge containing a leaf is always a regular edge since it will satisfy Property $P$ automatically. However, in seeking regular sequences formed by edges, Corollary \ref{leaf-cond} indicates that we need to focus on disjoint edges. Indeed, if $e=\{x,y\}$ is a regular edge of a graph $G$, then $R/(I,x+y)$ is effectively (up to isomorphism) the edge ring of a graph $G'$ with a loop replacing $e$ and $x$ and $y$ identified (see \cite{FHM}). After polarizing the loop, there is an edge $e'=\{x,y'\}$ where $y'$ is a leaf and $N_{G'}(x) = N_G(x) \cup N_G(y).$ Since no edge of $G'$ containing $x$ can be regular, this means no edge of $G$ containing $x$ or $y$ can be regular on (the polarization of) $R/(I,x+y)$, hence the need to focus on disjoint edges to form regular sequences. \begin{example} Let $G$ be a star graph with central vertex $x$ and leaves $y_1, \ldots, y_t.$ Then every edge $\{x,y_i\}$ is a regular edge, but since $\depth(R/I(G))=1$, these edges cannot be combined to form a regular sequence of length greater than one. Note that $R/(I,x+y_1) \cong R/J$ where $J$ is the graph with a loop shown below, and $J^{pol} = I$. $$ \begin{tabular}{cc} \begin{tikzpicture} \tikzstyle{point}=[inner sep=0pt] \node (x)[point, label=above:{$x$}] at (1,.9) {}; \node (y1)[point, label=left:{$y_1$}] at (0,0) {}; \node (y2)[point, label=left:{$y_2$}] at (2,0) {}; \node (y3)[point, label=right:{$y_3$}] at (2,1.5) {}; \node (yt)[point, label=left:{$y_t$}] at (0,1.5) {}; \node (dots)[point, label=below:{$\cdots$}] at (1,2) {}; \draw[black, fill=black] (x) circle(0.05); \draw[black, fill=black] (y1) circle(0.05); \draw[black, fill=black] (y2) circle(0.05); \draw[black, fill=black] (y3) circle(0.05); \draw[black, fill=black] (yt) circle(0.05); \draw (x.center) -- (y1.center); \draw (x.center) -- (y2.center); \draw (x.center) -- (y3.center); \draw (x.center) -- (yt.center); \end{tikzpicture} & \begin{tikzpicture}[every loop/.style={min distance=10mm, in=190, out=240, looseness=10}] \tikzstyle{point}=[inner sep=0pt] \node (x)[point, loop below, label=above:{$x$}] at (1,.9) {}; \node (y2)[point, label=left:{$y_2$}] at (2,0) {}; \node (y3)[point, label=right:{$y_3$}] at (2,1.5) {}; \node (yt)[point, label=left:{$y_t$}] at (0,1.5) {}; \node (dots)[point, label=below:{$\cdots$}] at (1,2) {}; \draw[black, fill=black] (x) circle(0.05); \draw[black, fill=black] (y2) circle(0.05); \draw[black, fill=black] (y3) circle(0.05); \draw[black, fill=black] (yt) circle(0.05); \draw (x) edge [anchor=center, loop below] (x); \draw (x.center) -- (y2.center); \draw (x.center) -- (y3.center); \draw (x.center) -- (yt.center); \end{tikzpicture} \\ $G$ a star graph & $J$ a graph with a loop \end{tabular} $$ \end{example} To avoid this type of situation, we will focus on disjoint edges in later sections when forming regular sequences. Note that when we quotient with the first regular edge, the graph associated to $R/(I,x+y)$ has a loop. Therefore, we first expand our focus to include graphs with loops. \section{Property $P$ and Graphs with Loops} The literature surrounding Property $P$ has focused on graphs without loops. In order to be able to apply the property inductively later in this paper, we will need to consider graphs that have loops. To this end, we extend the definition of Property $P$ and results from Section~\ref{regular} to graphs with loops. A graph $G$ has a loop at $x \in V(G)$ if $\{x,x\} \in E(G)$. We write $\{x^2\}\in E(G)$ in this case. 
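For instance, the graph with a loop arising from the star graph in the example above has edge ideal \[J=(x^2,\; xy_2,\ldots, xy_t),\] with the loop at \(x\) corresponding to the non-squarefree generator \(x^2\); polarizing this generator recovers (a relabeling of) the original edge ideal \(I\).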
We will not consider a vertex with a loop to be a leaf. That is, a leaf is a vertex contained in exactly one edge, and that edge is not a loop. \begin{definition}\label{Property P loops} Let \(G\) be a graph, potentially with loops, and \(e=\{x,y\} \in E(G).\) Then \(e\) is said to have {\em Property P} if $\{x^2\}, \{y^2\} \not\in E(G)$ and for any \(a,b \in V(G)\) with \(\{a,x\}, \{y,b\} \in E(G)\) we have \(a \neq b\) and \(\{a,b\} \in E(G).\) \end{definition} Note that with this definition, the situation with induced subgraphs is similar to that presented in Lemma~\ref{induced}, but the induced subgraph might also contain loops. More precisely, if \(G\) is a graph potentially with loops and \(e=\{x,y\} \in E(G)\) then \(e\) has Property \(P\) if and only if for every \(a \in N_G(x) \setminus \{y\}\) and \(b \in N_G(y) \setminus \{x\}\) the induced subgraph on \( \{a,x,y,b\} \) is a \(4\)-cycle, potentially with loops at $a$ and $b$. However, using Definition~\ref{Property P loops}, more care needs to be taken to ensure an edge with Property \(P\) is regular. As a first step, we observe that the vertices of a regular edge cannot have loops. \begin{lemma}\label{regular-no-loop} Let \(G\) be a graph, potentially with loops, and \(e=\{x,y\}\) an edge of \(G.\) If \(e\) is a regular edge, then \(G\) does not contain a loop at \(x\) or at \(y\). Moreover, at most one of \(N_G(x)\) or \(N_G(y)\) contains vertices with loops. \end{lemma} \begin{proof} First assume \(e=\{x,y\}\) is a regular edge of a graph \(G.\) Since $e$ is regular, for any associated prime $Q$, \(x\) and \(y\) cannot both be in $Q$. If $\{x^2\}$ is an edge of $G$, that is, $G$ has a loop at $x$, then there is an associated prime $Q$ of $R/I(G)$ that contains $N_G[x]$ by \cite[Corollary 4.14]{MV}. Since $x,y \in N_G[x]$, this is a contradiction to $e$ being regular, so $x^2 \not\in E(G)$. Similarly, $y^2 \not\in E(G)$. Now assume there exist $a \in N_G(x)$ and $b \in N_G(y)$ with $a^2, b^2 \in E(G)$. By \cite[Corollary 4.14]{MV}, there exists an associated prime $Q$ of $R/I(G)$ containing $N_G[a] \cup N_G[b]$. Since $x \in N_G[a]$ and $y\in N_G[b]$, this is a contradiction. Thus at most one of $N_G(x), N_G(y)$ contains loops. \end{proof} We now establish that the analogs of Lemmas~\ref{no-triangle} and~\ref{no-3-path} hold for graphs with loops. Note that both Definition~\ref{regular edge} and Lemma~\ref{regular sums} are based on associated primes and hold for graphs with loops. \begin{lemma}\label{no-triangle-loops} If \(G\) is a graph, potentially with loops, and \(\{x,y\}\) is a regular edge of \(G,\) then \(\{x,y\}\) is not contained in any triangle in \(G.\) \end{lemma} \begin{proof} Suppose \(b \in V(G)\) is such that \(x,y,b\) form the vertices of a triangle in \(G.\) Then \(x,y \in N_G[b]\). If \(G\) has a loop at \(b\), then there is an associated prime \(Q\) of \(R/I(G)\) that contains \(N_G[b]\) by \cite[Corollary 4.14]{MV}. Then \(x,y \in Q\), a contradiction. If \(G\) does not have a loop at \(b\), then \(W=V(G)\setminus \{b\}\) is a vertex cover of \(G.\) Thus there exists \(Q \subseteq W\) such that \(Q\) is a minimal vertex cover of \(G.\) Since \(b \not\in W,\) then \(x\in Q\) because \(\{x,b\}\in E(G)\) and \(y \in Q\) because \(\{y,b\}\in E(G).\) This contradicts the definition of \(\{x,y\}\) being a regular edge. Thus no such \(b\) exists.
\end{proof} \begin{lemma}\label{no-3-path-loops} If \(G\) is a graph, potentially with loops, and \(\{x,y\}\) is a regular edge of \(G,\) then for any \(a,b \in V(G)\) with \(a \in N_G(x)\setminus \{y\}\) and \(b \in N_G(y) \setminus \{x\},\) the induced subgraph of \(G\) on the vertices \(a,x,y,b\) is a \(4\)-cycle with at most one loop. In particular, if \(a \in N_G(x)\) and \(b\in N_G(y),\) then \(\{a,b\}\in E(G).\) \end{lemma} \begin{proof} First, notice that if \(a = b,\) or \(\{a,y\}\in E(G),\) or \(\{x,b\}\in E(G)\) then there is a triangle of \( G\) containing \( \{x,y\}\), contradicting Lemma~\ref{no-triangle-loops}. Thus \(a \ne b,\) and \( \{a,y\}\not\in E(G),\) and \(\{x,b\}\not\in E(G).\) By Lemma~\ref{regular-no-loop}, at most one of the vertices in the set \(\{a,x,y,b\}\) contains a loop. If such a loop exists, it is either a loop at \(a\) or at \(b\). First assume neither \(a\) nor \(b\) has a loop. Then if $\{a,b\}\not\in E(G)$, set $W=V\setminus \{a,b\}$. Since $W$ is a vertex cover of $G,$ it contains a minimal vertex cover $Q$. By definition, $x \in Q$ since $a \not\in Q$ and $\{x,a\}\in E(G)$ and $y\in Q$ since $b\not\in Q$ and $\{y,b\}\in E(G)$. Thus $x,y \in Q$, a contradiction. Assume \(G\) has a loop at \(a\) and \(\{a,b\} \not\in E(G).\) Consider \(W=V(G) \setminus (N_G[a] \cup \{b\})\). Then \(W\) contains a minimal vertex cover \(K\) of the induced subgraph of \(G\) on \(V(G) \setminus N_G[a].\) Now \(y\in K\) since \(\{b,y\} \in E(G)\) and \(b \not\in K.\) By \cite[Corollary 4.14]{MV}, there is an associated prime \(Q\) of \(R/I(G)\) containing \(N_{G}[a] \cup K.\) Since \(x \in N_G[a]\) and \(y\in K\), we have \(x,y \in Q\), a contradiction. Hence $\{a,b\} \in E(G)$ and the induced subgraph of \( G\) on \(a,x,y,b\) is a \(4\)-cycle with at most one loop. \end{proof} We are now ready to extend the connection between Property \(P\) and regularity to graphs with loops. \begin{theorem} \label{good=P loops} Let \(G\) be a graph, potentially with loops, and \(e=\{x,y\}\) an edge of \(G.\) Then \(e\) is a regular edge if and only if \(e\) satisfies Property \(P\) and at most one of \(N_G(x)\) or \(N_G(y)\) contains vertices with loops. \end{theorem} \begin{proof} First assume \(e=\{x,y\}\) is a regular edge of a graph \(G.\) By Lemma~\ref{regular-no-loop}, \(G\) does not contain a loop at \(x\) or \(y\) and at most one of \(N_G(x), N_G(y)\) contains loops. If \(a,b \in V(G)\) with \(\{a,x\},\{y,b\} \in E(G)\) then by Lemma~\ref{no-triangle-loops}, \(a \ne b\) and by Lemma~\ref{no-3-path-loops}, \(\{a,b\}\in E(G).\) Thus the conditions for Property \(P\) in Definition~\ref{Property P loops} are satisfied. Now assume \(e=\{x,y\}\) satisfies Property \(P\) and at most one of $N_G(x)$ or $N_G(y)$ contains vertices with loops. As in the proof of Theorem~\ref{good=P}, assume that there exists a minimal vertex cover \(Q\) with \(x,y \in Q.\) Note that neither \(x\) nor \(y\) can be a leaf of \(G\) since \(Q\) is minimal. Moreover, since \(Q\) is minimal, there must exist an edge \(\{a,x\}\) with \(a \not\in Q\) and an edge \(\{y,b\}\) with \(b \not\in Q.\) By Property \(P,\) \(a \neq b\) and \(\{a,b\} \in E(G).\) But \(\{a,b\}\) is not covered by \(Q,\) a contradiction. Thus no minimal vertex cover can contain both \(x\) and \(y.\) By \cite[Corollary 4.14]{MV}, the associated primes of \(R/I(G)\) that are not minimal are constructed from (unions of) closed neighborhoods of loops extended to vertex covers of \(G\).
By hypothesis, \(x\) and \(y\) are not both contained in the union of all closed neighborhoods of vertices with loops. Since \(x\) and \(y\) are not both contained in any minimal vertex cover, if \(x\) and \(y\) are both contained in an embedded associated prime \(Q\), then one of them, say \(x,\) is in the closed neighborhood of a vertex \(a\) with a loop and the other, say \(y,\) is necessary to extend the set formed from closed neighborhoods of loops to contain a vertex cover. That means there is an edge \(\{y,b\}\) with \(b \not\in Q\). By Property \(P\), \(\{a,b\}\in E(G)\) and \(b \in N[a] \subseteq Q\), a contradiction. Thus no associated prime of \(R/I(G)\) contains both \(x\) and \(y\), so \(\{x,y\}\) is a regular edge. \end{proof} We conclude this section with the analog of Corollary~\ref{leaf-cond} for graphs with loops. As before, this corollary is a motivating factor in focusing on disjoint edges. \begin{corollary}\label{leaf-cond loops} Let \(G\) be a graph, potentially with loops, and \(f = \{x,y\} \) an edge of \(G\) with \(y\) a leaf. Then an edge \(e=\{x,z\}\) is a regular edge if and only if \(z\) is a leaf and $x^2 \not\in E(G)$. \end{corollary} \begin{proof} Note that if \(z\) is a leaf and $x^2 \not\in E(G)$, then \(e\) satisfies Property \(P\). Since $z$ is a leaf, the conditions of Theorem~\ref{good=P loops} are satisfied and so $e$ is a regular edge. For the other implication assume that \(z\) is not a leaf. Then there exists an edge \( \{z,w\}\) with \(w \ne x\). If \(e\) is a regular edge then by Theorem \ref{good=P loops}, \(e\) has Property \(P\) and hence there is an edge \(\{y,w\}\) contradicting the assumption that \(y\) is a leaf. So, if \(e\) is a regular edge then \(z\) is a leaf. \end{proof} \section{Matchings and Regular sequences of edges}\label{Regular Sequences} The goal of this section is to form a special type of regular sequence using edges that satisfy Property $P$. In general, if $x+y$ is a regular element on $R/I(G)$ for a graph $G$, then $R/(I(G),x+y) \cong R/(J, x+y)$ where $J$ is the ideal formed from $I(G)$ by replacing $y$ by $x$ in each generator divisible by $y$ (see \cite[Theorem 2.6]{FHM}). Since the ideal $J$ is not square-free (indeed, it is the edge ideal of a graph with a loop at vertex $x$), we will define two new graphs that will allow us to toggle between working modulo the regular element and working with the loop-free graph corresponding to a polarization of $J$. \begin{definition}\label{Ge} Let $G$ be a graph, potentially with loops, and fix an edge $e=\{x,y\} \in E(G)$ with $x \neq y$. Define $G_e$ to be the graph with $V(G_e) = V(G)\setminus \{y\}$ and edges defined by: \begin{itemize} \item[(i)] $\{a,b\} \in E(G_e)$ if $\{a,b\} \in E(G)$ and $a,b \neq y$, or \item[(ii)] $\{a,x\}\in E(G_e)$ if $\{a,y\} \in E(G)$. \end{itemize} \end{definition} Note that since $\{x, y\}$ is an edge of $G$, the second criterion above gives that $\{x, x\}$ is an edge of $G_e$, that is, $G_e$ has a loop at $x$. In addition, using the notation from the start of this section, $J = I(G_e)$. \begin{definition}\label{polarizedGe} Let $G$ be a graph, potentially with loops, and fix an edge $e=\{x,y\} \in E(G)$ with $x \neq y$. Define $G^e$ to be the graph with $V(G^e) = V(G)$ and edges defined by: \begin{itemize} \item[(i)] $\{a,b\} \in E(G^e)$ if $\{a,b\} \in E(G)$ and $a,b \neq y$, or \item[(ii)] $\{a,x\}\in E(G^e)$ if $\{a,y\} \in E(G)$ and $a \ne x$, or \item[(iii)] $\{x,y\}\in E(G^e)$.
\end{itemize} \end{definition} Note that if $G$ does not have loops, then $I(G^e) = (I(G_e))^{pol}$. In general, $I(G^e)$ is formed from $I(G_e)$ by polarizing only the loop created from $e$. For convenience, we will denote $(G_{e_1})_{e_2}$ as $G_{e_1,e_2}$ and $(G^{e_1})^{e_2}$ as $G^{e_1,e_2}$ in the remainder of the paper. The next lemma shows how Property $P$ for an edge $f$ passes to $G_e$ and $G^e$ when $e$ and $f$ are disjoint edges. Note that in order to iterate the process of passing Property \(P\) through the contraction of multiple edges, the hypotheses of the following results allow for the graph $G$ to have loops. \begin{lemma}\label{P descends} Let $G$ be a graph, potentially with loops. Suppose $\{u,v\} \in E(G)$ has Property $P$ and $e=\{x,y\}$ is an edge of $G$ with $u,v,x,y$ distinct and the induced graph on $u,v,x,y$ is not a $4$-cycle, up to potential loops on \(x\) and \(y\). Then the image of $\{u,v\}$ has Property $P$ in both $G_e$ and $G^e$. \end{lemma} \begin{proof} Note that since $u,v,x,y$ are distinct, the image of $\{u,v\}$ in both $G_e$ and $G^e$ is again $\{u,v\}$ by condition $(i)$ of Definitions~\ref{Ge} and~\ref{polarizedGe}. Also by definition, since the vertices are distinct and $\{u,v\}$ has Property $P$ in $G$, then $\{u^2\}, \{v^2\}$ are not in $E(G_e)$ or $E(G^e)$. Now let $a,b \in V(G_e)$ with $\{a,u\}, \{v,b\} \in E(G_e)$. If neither $a$ nor $b$ is equal to $x$, then $\{a,u\}, \{v,b\} \in E(G)$ and thus $a \neq b$ and $\{a,b\}\in E(G)$ and so by Definition~\ref{Ge} $(i)$ $\{a,b\} \in E(G_e)$. If $a=x$ and $b=x$, then, since $\{u,v\}$ has Property $P$ in $G$ and thus neither $x$ nor $y$ is a common neighbor of $u$ and $v$, precisely one of $\{a,u\}$ or $\{v,b\}$ was produced by situation $(ii)$ of Definition~\ref{Ge}. Without loss of generality, assume $\{v,b\} = \{v,x\}$ was produced by the edge $\{v,y\}$ in $E(G)$. Then the induced graph in $G$ on $u,v,x,y$ contains a $4$-cycle. Note that since $\{u,v\}$ has Property $P$ in $G$, $u,v$ do not have a common neighbor, meaning that the $4$-cycle is induced up to potential loops on \(x\) and \(y\), a contradiction. Finally assume $a\neq x$ and $b = x$. Then either $\{v,x\}$ or $\{v,y\}$ is an edge of $G$. Then either $\{a,x\}$ or $\{a,y\}$ respectively is in $E(G)$ since $\{u,v\}$ has Property $P$. In either case, $\{a,x\}$ is an edge of $G_e$ and thus $\{u,v\}$ has Property $P$ in $G_e$. An analogous argument holds for $G^e$. \end{proof} The following is the key lemma in considering regular sequences of edges. For convenience, set \(L=\{v\in V(G) \mid v^2 \in E(G)\}\) to be the set of vertices with loops in \(G\). \begin{lemma}\label{two edges} Suppose $\{x_1,y_1\}$ and $\{x_2,y_2\}$ are disjoint regular edges of a graph $G$, potentially with loops. Then $\{x_2,y_2\}$ is a regular edge of $H=G_{\{x_1,y_1\}}$ if and only if \begin{itemize} \item if $N_G(x_2) \cap \{x_1,y_1\} \ne \emptyset$ then $N_G(y_2) \cap (\{x_1, y_1\} \cup L)=\emptyset$, and \item if $N_G(y_2) \cap \{x_1,y_1\} \ne \emptyset$ then $N_G(x_2) \cap (\{x_1, y_1\} \cup L)=\emptyset$. \end{itemize} Moreover, $x_1+y_1, x_2+y_2$ is a regular sequence on $R/I(G)$. \end{lemma} \begin{proof} Set $e=\{x_1,y_1\}$. We need to show the image of $\{x_2,y_2\}$ is a regular edge of $H=G_e$. By Theorem~\ref{good=P loops}, \(\{x_2,y_2\}\) has Property \(P\) in \(G\). By Definition~\ref{Property P loops}, neither $x_2$ nor $y_2$ has a loop in $G$. By definition, there is only one new loop in $G_e$, which is located at $x_1$.
Thus in $H$, neither $x_2$ nor $y_2$ has a loop. By Theorem~\ref{good=P loops} it is enough to show that $\{x_2,y_2\}$ satisfies Property $P$ in the graph $H$ and at most one of $N_H(x_2)$ and $N_H(y_2)$ contains a loop. By hypothesis, at most one of these neighbor sets contains $x_1$ and the other set does not contain a loop in $G$ or in $H$. Note that the induced graph in \(G\) on \(\{x_1,y_1,x_2,y_2\}\) is not a $4$-cycle by the conditions imposed on \(N_G(x_2)\) and \(N_G(y_2)\). By regularity, there are no loops on any of these $4$ vertices. Thus by Lemma~\ref{P descends}, \(\{x_2,y_2\}\) has Property \(P\) in \(H\). For the converse, assume \(\{x_2,y_2\}\) is a regular edge of \(H\) and \(N_G(x_2) \cap \{x_1,y_1\} \ne \emptyset\). Then \(\{x_1,x_2\} \in E(H)\). Recall that there is a loop at \(x_1\) in \(H\). Suppose \(N_G(y_2) \cap (\{x_1, y_1\} \cup L) \ne \emptyset\). Then there is a vertex \(a \in \{x_1, y_1\} \cup L\) with \(\{a,y_2\} \in E(G).\) If \(a=x_1\) or \(a=y_1\), then \(\{x_1,y_2\}\in E(H)\). Otherwise, \(\{a,y_2\}\in E(H).\) In either case, in \(H\), both \(x_2\) and \(y_2\) are adjacent to vertices with loops, a contradiction to Theorem~\ref{good=P loops} since \(\{x_2,y_2\}\) is a regular edge of \(H\). Finally, by Lemma~\ref{regular sums} and the comment following Lemma~\ref{regular-no-loop}, $x_1+y_1$ is regular on $R/I(G)$. Now $$R/(I(G),x_1+y_1) \cong R/(I(H), x_1+y_1) \cong R'/I(H)$$ where $R'$ is a polynomial ring in one fewer variable (see the proof of \cite[Theorem 2.6]{FHM}). More specifically, the variables of $R'$ are $V(H)=V(G_{\{x_1,y_1\}})=V(G)\setminus\{y_1\}$. Since $\{x_2,y_2\}$ is a regular edge of $H$, the final statement follows. \end{proof} Note that the edges being disjoint in Lemma~\ref{two edges} is important. If we start with two distinct edges $\{x_1,y_1\}$ and $\{x_1,y_2\}$ that are each regular in \(G\) but which share a vertex, then $\{x_1,y_2\}$ will not be regular in $H=G_{\{x_1,y_1\}}$, since there is a loop in \(H\) at \(x_1\). \begin{remark} The proof of Lemma~\ref{two edges} can be easily adapted to show that under the hypotheses, \(\{x_2,y_2\}\) is a regular edge of \(G^{\{x_1,y_1\}}\) as well. However, the converse no longer holds, as illustrated by the edges \(\{x_1,y_1\}, \{x_2,y_2\}\) in the graph \begin{center} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=above: $x_1$] at (1,1) {}; \node (b)[point,label=right:$x_2$] at (2,1){}; \node (c)[point,label=right:$y_2$] at (2,0){}; \node (d)[point,label=above right:$a$] at (1,0){}; \node (e)[point,label=below:$y_1$] at (0,1){}; \draw (a.center) -- (b.center); \draw (b.center) -- (c.center); \draw (c.center) -- (d.center); \draw (d.center) -- (a.center); \draw (a.center) -- (e.center); \draw (d) edge [anchor=center, loop left] (d); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \end{tikzpicture} \end{center} Note that \(\{x_2,y_2\}\) is a regular edge of \(G^{\{x_1,y_1\}}\) but both \(N_G(x_2) \cap \{x_1,y_1\}\ne \emptyset\) and \(N_G(y_2) \cap \{x_1, y_1, a\} \ne \emptyset.\) \end{remark} \begin{corollary}\label{reg 2} Let $G$ be a graph without loops and assume $\{x,y\}, \{u,w\}$ are disjoint regular edges of $G$ such that $\{x,y,u,w\}$ are not the vertices of a square. Then $x+y, u+w$ is a regular sequence on $R/I(G)$.
\end{corollary} \begin{proof} Note that $I(G_{\{x,y\}})$ is the edge ideal of a graph with precisely one loop, $\{x,x\}$. The result follows immediately from Lemma~\ref{two edges}. \end{proof} We next establish conditions that will allow us to build longer regular sequences from disjoint regular edges. Although stated in a slightly different format, when $t=2$ the conditions of the theorem below are equivalent to the hypotheses of Lemma~\ref{two edges} since at most one vertex of a regular edge of $G$ has a neighbor in $G$ with a loop. \begin{theorem}\label{reg seq} Let \(G\) be a graph, potentially with loops on vertices \(\{z_1, \ldots, z_r\}\). Assume \(\{x_1,y_1\}, \ldots ,\{x_t,y_t\}\) are disjoint regular edges of \(G\) such that for \(2 \leq i \leq t\) either \[ N_G(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}, z_1, \ldots, z_r\} = \emptyset, \,\, \mbox{\rm{or}}\] \[N_G(y_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1},z_1, \ldots, z_r\} = \emptyset.\] Then $x_1+y_1, x_2+y_2,\ldots,x_t+y_t$ is a regular sequence on $R/I(G)$. \end{theorem} \begin{proof} We proceed by induction on \(t\), with Lemma~\ref{regular sums} and the comment following Lemma~\ref{regular-no-loop} establishing the result for \(t=1\) and Lemma~\ref{two edges} establishing the result for \(t=2\). Assume \(t \geq 3\) and that the result holds for \(t-1\). Consider \[R/(I(G), x_1+y_1, \ldots , x_{t-1}+y_{t-1}) \cong R''/I(H)\] where $H = G_{\{x_1,y_1\},\{x_2,y_2\}, \ldots ,\{x_{t-1},y_{t-1}\}}$ and $R'' \cong R/(y_1, \ldots, y_{t-1})$. We will show that $\{x_t, y_t\}$ is a regular edge of $H$. First, by hypothesis, for $2 \leq s \leq t$, we have that $\{x_1,y_1\},\{x_s,y_s\}$ are disjoint regular edges of \(G\) and if \(N_G(x_s) \cap \{x_1,y_1\} \ne \emptyset\), then \[N_G(y_s) \cap \{x_1,y_1,\ldots , x_{s-1},y_{s-1},z_1, \ldots, z_r\} = \emptyset,\] so in particular \(N_G(y_s) \cap \{x_1, y_1, z_1, \ldots, z_r\}=\emptyset.\) A similar argument holds when \(N_G(y_s) \cap \{x_1,y_1\} \ne \emptyset\). Thus by Lemma~\ref{two edges}, \(\{x_s,y_s\}\) is regular in \(G_1 = G_{\{x_1,y_1\}}\) for all \(2 \leq s \leq t\). Inductively, define \(G_i = G_{\{x_1,y_1\},\{x_2,y_2\}, \ldots ,\{x_{i},y_{i}\}}\) for \(i \leq t-1\). Notice that \(G_i\) is a graph with loops at \(x_1, \ldots , x_{i},z_1,\ldots,z_r\) and no other loops. For \(i+2 \le s \le t\), \(\{x_{i+1},y_{i+1}\}\), \(\{x_s,y_s\}\) are disjoint edges of \(G_i\) that are regular in \(G_i\) by induction on \(i\). If \(N_{G_i}(x_s) \cap \{x_{i+1},y_{i+1}\} \ne \emptyset\), then \[N_{G_{i}}(y_s) \cap \{x_{i+1},y_{i+1},\ldots , x_{s-1},y_{s-1},x_1,\ldots, x_i, z_1, \ldots, z_r\} = \emptyset,\] so in particular \(N_{G_i}(y_s) \cap \{x_{i+1}, y_{i+1},x_1,\ldots,x_i, z_1, \ldots, z_r\}=\emptyset.\) A similar argument holds when \(N_{G_i}(y_s) \cap \{x_{i+1},y_{i+1}\} \ne \emptyset\). Thus by Lemma~\ref{two edges}, \(\{x_s,y_s\}\) is regular in \(G_{i+1}.\) In particular, \(\{x_t,y_t\}\) is regular in \(G_{t-1} = H\) as desired. Hence $x_t+y_t$ is regular on $R''/I(H)$ and the result follows. \end{proof} \begin{remark} If \(G\) is a graph without loops, the results of Theorem~\ref{reg seq} imply that if \(G\) has a perfect matching with Property \(P\) whose edges can be ordered so that each edge has (at least) one vertex not connected to the prior edges, then the corresponding linear binomials form a regular sequence, in which case \(G\) is Cohen-Macaulay. This is the case for well-known classes of graphs, such as suspensions of graphs and Cohen-Macaulay bipartite graphs.
In addition, by exchanging the labels of \(x_i\) and \(y_i\) if necessary, one can assume that \(x_i\) has no neighbors in the set \(\{x_1,y_1,\ldots,x_{i-1},y_{i-1}\}\) in this case, or that \[N_G(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}, z_1, \ldots, z_r\} = \emptyset\] in the more general setting of Theorem~\ref{reg seq}. \end{remark} In the special case where $G$ is a Cohen-Macaulay graph and the edges used in Theorem~\ref{reg seq} form a perfect matching, the precise form of $G$ is known. \begin{remark} If $\{x_1,y_1\}, \ldots, \{x_t,y_t\}$ in Theorem~\ref{reg seq} form a perfect matching, then $G$ is a very well-covered graph, and the results follow from \cite{Crupi2011}. To see this, note that by relabeling if necessary, we can assume $N(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset$ for all $i$. Thus $\{x_1, \ldots, x_t\}$ is an independent set. Indeed, if $\{x_i,x_j\} \in E(G)$ with $i<j$ then $N(x_j) \cap \{x_1,y_1,\ldots , x_{j-1},y_{j-1}\} \ne \emptyset$. Then $\{y_1, \dots ,y_t\}$ is a minimal vertex cover and, since the matching is perfect, $t = |V(G)|/2 = \height(I)$. \end{remark} \section{Linear binomial regular elements} Theorem~\ref{good=P} establishes a means of creating linear binomials that are regular on \(R/I(G)\) for any (simple) graph \(G\) containing an edge with Property \(P\). Theorem~\ref{good=P loops} does the same for graphs with loops. In this section, we show that the only linear binomials that are regular on \(R/I(G)\) correspond to either edges with Property \(P\) in \(G\) or pairs of non-adjacent vertices \(x,y\) for which adding an edge \(\{x,y\}\) to \(G\) results in the new edge having Property \(P\) in the augmented (simple) graph. An analogous statement holds for a graph with loops, assuming conditions similar to those in previous sections. \begin{theorem}\label{binomial regular} Let \(G\) be a graph with \(x,y \in V(G)\) but \(\{x,y\}\not\in E(G).\) Then the following are equivalent: \begin{enumerate} \item In the graph \(\widehat{G}\) with \(V(\widehat{G}) = V(G)\) and \(E(\widehat{G}) = E(G)\cup \{\{x,y\}\},\) the edge \(\{x,y\}\) has Property \(P\). \item The elements \(x+y\) and \(x-y\) are regular elements of \(R/I(G)\). \item The elements \(x+y\) and \(x-y\) are regular elements of \(R/I(\widehat{G})\). \item For any associated prime \(Q\) of \(R/I(G)\) the element \(xy\) is not an element of \(Q^2.\) \item For any associated prime \(Q\) of \(R/I(\widehat{G})\) the element \(xy\) is not an element of \(Q^2.\) \item No minimal vertex cover of \(G\) contains the set \(\{x,y\}.\) \item No minimal vertex cover of \(\widehat{G}\) contains the set \(\{x,y\}.\) \end{enumerate} \end{theorem} \begin{proof} As in Lemma~\ref{regular sums}, \(x+y\) and \(x-y\) are regular on \(R/I(G)\) if and only if no associated prime of \(I(G)\), or equivalently, no minimal vertex cover of \(G\), contains both \(x\) and \(y\). This shows the equivalence of \((2),(4),\) and \((6)\). By Theorem~\ref{good=P}, Lemma~\ref{regular sums}, and Definition~\ref{regular edge}, \((1), (3), (5),\) and \((7)\) are equivalent. Thus it suffices to show \((6)\) and \((7)\) are equivalent. Suppose that no minimal vertex cover of \(G\) contains the set \(\{x,y\}.\) Let \(X\) be a minimal vertex cover of \(\widehat{G}\) that contains the set \(\{x,y\}\).
Then \(X\) is a vertex cover of \(G\) and so contains a minimal vertex cover \(Y\subset X\) of \(G.\) By the assumption that no minimal vertex cover of \(G\) contains the set \(\{x,y\},\) at most one of the vertices \(x,y\) is in the set \(Y\). Without loss of generality, assume \(x \not\in Y\). Then \(Y\cup\{y\}\) is a proper subset of \(X\) that is a cover for \(\widehat{G}\) contradicting the minimality of \(X.\) Thus \((6) \Rightarrow (7).\) Suppose that in the graph \(\widehat{G}\) , no minimal vertex cover of \(\widehat{G}\) contains the set \(\{x,y\}.\) Let \(Z\) be a minimal vertex cover of \(G\) that contains \(\{x,y\}.\) Then \(Z\) is a vertex cover of \(\widehat{G}\) that contains a minimal vertex cover \(Y\) of \(\widehat{G}.\) This is a contradiction, as \(Y\) is a vertex cover of \(\widehat{G}\), and so is a vertex cover of \(G\), but \(Y\) is a proper subcover of \(Z,\) contradicting the minimality of \(Z.\) Hence no minimal vertex cover of \(G\) contains the set \(\{x,y\}.\) Thus \((7) \Rightarrow (6).\) \end{proof} Note that Theorem~\ref{binomial regular} recovers the regularity of the sum of vertices forming a "leaf pair" as in \cite[Theorem 4.11]{FHM}. Since it classifies all binomial regular elements, it can be combined with \cite[Proposition 4.1]{FHM} to provide lower bounds on depths of edge ideals of graphs. \section{Application to Hilbert Series of Edge Ideals of Graphs} The regular sequences found in Section~\ref{Regular Sequences} consist of homogeneous forms of degree one. This allows one to use these regular sequences to simplify the computation of the Hilbert Series of $R/I$. In this section, we show how such an approach leads to a simplification of the standard correspondence between an $h$-vector of an ideal and the $f$-vector of an associated simplicial complex. Moreover, in the Cohen-Macaulay case, when there is a perfect matching formed by a regular sequence of edges satisfying Property $P$ with the necessary condition on the neighbor sets in Theorem~\ref{reg seq}, then we provide an interpretation of the Hilbert coefficients appearing in the $h$-vector. First recall that when $I$ is a square-free monomial ideal, the {\em Hilbert Series} of $R/I$ can be written as $$ HS_{R/I} = \sum_{s=0}^{\infty} \dim_k(R/I)_s t^s = \frac{h_0+h_1t+\cdots +h_dt^d}{(1-t)^d}$$ where $d = \dim(R/I).$ For ease of reference, we collect the coefficients of the numerator of this series in the {\em $h$-vector} of $R/I$, which is $(h_0,h_1,\ldots, h_d).$ Given an edge ideal $I(G)$, consider the simplicial complex $I_{\Delta}$ formed by the clique complex of the complement graph of $G$. That is, $\sigma = \{x_{i_1} , \ldots,x_{i_s}\}$ is a face of $I_{\Delta}$ if and only if $\sigma$ is an independent set. The {\em $f$-vector of $I_{\Delta}$} is the vector $(f_{-1}, f_0, f_1, \ldots, f_{d-1})$ where $f_{-1}=1$ represents the empty subset, $f_0=n$ is the number of vertices in $G$, and in general $f_i$ is the number of subsets of $V(G)$ containing $i+1$ independent vertices, which is the number of faces of dimension $i$ of $I_{\Delta}$. There is a well-known relationship between the $h$-vector of $I$ and the $f$-vector of the simplicial complex whose Stanley-Reisner ideal is $I$. While this relationship is well-known and holds for any square-free monomial ideal, we state it here for graphs for ease of reference. \begin{theorem}\cite{Stanley} Assume $I$ is an edge ideal of a graph $G$ and $(h_0, \ldots, h_d)$ is the $h$-vector of $I$. 
If $(f_{-1},f_0, \ldots, f_{d-1})$ is the $f$-vector of $I_{\Delta}$, then $$h_k = \sum_{i=0}^k (-1)^{k-i} \binom{d-i}{k-i}f_{i-1}.$$ \end{theorem} To better understand $h_i$, we define a simplicial complex that is smaller than $I_{\Delta}$ and relate $h_i$ to the $f$-vector of this smaller complex. \begin{lemma}\label{HS equal} If $G$ is a graph, potentially with loops, and $e=\{x,y\}$ is a regular edge of $G$, then $G$ and $G^e$ have identical Hilbert Series. Moreover, the $h$-vectors of $I(G), I(G_e),$ and $I(G^e)$ are all equal. \end{lemma} \begin{proof} Since $x+y$ is a homogeneous regular element on $R/I$ of degree $1$, then $$HS_{R/(I,x+y)} = (1-t) HS_{R/I}.$$ Recall that $$R/(I,x+y) = R/(I(G_e),x+y) \cong R'/I(G_e)$$ where $R'$ is a polynomial ring in one fewer variable. Thus $$HS_{R'/(I(G_e))} = (1-t) HS_{R/I}$$ and so the $h$-vectors of $I=I(G)$ and of $I(G_e)$ are the same. Similarly, since $G^e$ is the polarization of $G_e$, we have $$R/(I(G^e),x+y) = R/(I(G_e),x+y) \cong R'/I(G_e)$$ and so $$HS_{R'/(I(G_e))} = (1-t) HS_{R/I(G^e)}.$$ Hence $I(G_e)$ has the same $h$-vector as $I(G^e)$. Combining the above equalities yields $HS_{R/I} = HS_{R/I(G^e)}$ as well. \end{proof} This process can be iterated using a longer regular sequence. \begin{proposition}\label{HS} Let \(G\) be a graph, potentially with loops at \(z_1, \ldots, z_r,\) and assume $e_1=\{x_1,y_1\}, \ldots ,e_t=\{x_t,y_t\}$ are disjoint regular edges of \(G\) such that for \(2 \leq i \leq t\) either \[N(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1},z_1, \ldots, z_r\} = \emptyset, \,\, \mbox{\rm{or}}\] \[N(y_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1},z_1, \ldots, z_r\} = \emptyset.\] Then \(I(G_{e_1, \ldots, e_t})\) and \(I(G^{e_1, \ldots ,e_t})\) both have the same \(h\)-vector as \(I=I(G)\) and the Hilbert series of \(R/I\) is equal to the Hilbert Series of \(R/(I(G^{e_1, \ldots, e_t}))\). \end{proposition} \begin{proof} By Theorem~\ref{reg seq}, $x_1+y_1, x_2+y_2,\ldots,x_t+y_t$ is a regular sequence on $R/I(G)$. By repeated use of Lemma~\ref{HS equal}, the result follows. \end{proof} The proposition above allows one to essentially replace edges of a graph that have Property $P$ with whiskers, that is, edges with one vertex a leaf, as long as the condition on the neighbor sets is satisfied. This allows one to apply the results of \cite{Brennan-Morey} on whiskered graphs to a more general class of graphs. \begin{corollary}\label{CM} If $I=I(G)$ is Cohen-Macaulay and $e_1=\{x_1,y_1\}, \ldots ,e_d=\{x_d,y_d\}$ is a perfect matching of distinct regular edges such that for $2 \leq i \leq d$ either $N(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset$ or $N(y_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset$, then the $h$-vector of $I$ is the $f$-vector of the simplicial complex corresponding to the induced graph on $\{x_1, \ldots, x_d\}$ in the whiskered graph $G^{e_1, \ldots,e_d}$. \end{corollary} \begin{proof} This follows immediately from Proposition~\ref{HS} and \cite[Corollary 3.6]{Brennan-Morey}. \end{proof} The power of Corollary~\ref{CM} is that it directly interprets the entries of the $h$-vector as the entries of an $f$-vector of a smaller simplicial complex than $I_{\Delta}$. Moreover, there is a direct interpretation of the entries of this $f$-vector in terms of the edges of the perfect matching. As mentioned above, after potentially relabeling, one can assume the condition on the neighbor sets holds for \(x_i\). This leads to a main result of the paper.
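Before stating it, we illustrate Corollary~\ref{CM} on a small example; the graph and labels below are chosen only for illustration. \begin{example} Let \(G\) be the path with edges \(\{x_1,y_1\},\{x_1,y_2\},\{x_2,y_2\}\), so that \(I(G)=(x_1y_1,\,x_1y_2,\,x_2y_2)\), and take the perfect matching \(e_1=\{x_1,y_1\}\), \(e_2=\{x_2,y_2\}\). Both edges contain a leaf and hence are regular, and \(N(x_2)\cap\{x_1,y_1\}=\emptyset\), so \(x_1+y_1,\,x_2+y_2\) is a regular sequence of length \(2=\dim R/I(G)\) by Theorem~\ref{reg seq} and \(I(G)\) is Cohen-Macaulay. In the whiskered graph \(G^{e_1,e_2}\) the edge \(\{x_1,y_2\}\) is replaced by \(\{x_1,x_2\}\), so the induced graph on \(\{x_1,x_2\}\) is a single edge and the corresponding simplicial complex has \(f\)-vector \((1,2)\). Corollary~\ref{CM} then gives the \(h\)-vector \((1,2)\), in agreement with the Hilbert series \(\frac{1+2t}{(1-t)^2}\) of \(R/I(G)\). \end{example}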
\begin{theorem}\label{main} Let $G$ be a graph without loops and let \(M=\{\{x_1,y_1\}, \ldots,\{x_t,y_t\}\}\) be a perfect matching of \(G\) with Property \(P\) such that for \(2 \leq i \leq t, \,\, N(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset.\) Then \(h_i\) is the number of independent subsets of \(M\) of size $i$, that is, subsets of size \(i\) whose edges form an induced matching of \(G\). \end{theorem} \begin{proof} Set $e_i = \{x_i,y_i\}$. Since $M$ is a perfect matching with Property \(P\) satisfying the given condition on neighbor sets, \(G\) is Cohen-Macaulay by Theorem~\ref{reg seq}, so by Corollary~\ref{CM} we have \(h_i = f_{i-1}\) where \(f=(f_{-1}, \ldots ,f_{t-1})\) is the \(f\)-vector of the simplicial complex corresponding to the induced subgraph of $G^{e_1, \ldots, e_t}$ on the vertices $x_1, \ldots, x_t.$ Then $f_{i-1}$ is the number of subsets of size $i$ of $\{x_1, \ldots, x_t\}$ that are independent in $G^{e_1, \ldots, e_t}$. By the definition of $G^{e_1, \ldots,e_t}$, a set $\{x_{j_1}, \ldots, x_{j_i}\}$ is independent in $G^{e_1, \ldots,e_t}$ if and only if $e_{j_1}, \ldots, e_{j_i}$ is an induced matching of $G$. \end{proof} This leads to an interesting result on the degree of the numerator of the Hilbert series in this setting in terms of the induced matching number of $G$, which we will denote $ind(G)$. \begin{corollary} Assume $I=I(G)$ is Cohen-Macaulay and $e_1=\{x_1,y_1\}, \ldots ,e_d=\{x_d,y_d\}$ is a perfect matching of distinct regular edges such that for $2 \leq i \leq d$ either $N(x_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset$ or $N(y_i) \cap \{x_1,y_1,\ldots , x_{i-1},y_{i-1}\} = \emptyset$. If $s = \max\{i \mid h_i \ne 0\}$, then $s \leq ind(G)$. \end{corollary} \begin{proof} This follows immediately from Theorem~\ref{main} since in this setting $h_s$ is the number of subsets of $e_1, \ldots, e_d$ of size $s$ that form an induced matching. Since $h_s \ne 0$, we have $ind(G) \geq s$. \end{proof} Proposition~\ref{HS} can also be applied in the more general setting where the regular sequence does not form a perfect matching. The result will be a partially whiskered graph with the same Hilbert series, allowing the use of techniques in \cite[Section 4]{Brennan-Morey}. We conclude the paper with some illustrative examples. \begin{example} If \(G\) is a Cohen-Macaulay bipartite graph, then by \cite{H-H} \(G\) has a perfect matching \(\{x_1,y_1\}, \ldots, \{x_d,y_d\}\) such that if \(\{x_i,y_j\}\in E(G)\), then \(i\leq j\) and if \(\{x_i,y_j\},\{x_j,y_k\} \in E(G)\) then \(\{x_i,y_k\} \in E(G)\). The second condition implies that each edge of the matching has Property \(P\) and the first condition implies \(N_G(x_i) \cap \{x_1, y_1, \ldots, x_{i-1},y_{i-1}\} = \emptyset\) and so Theorem~\ref{reg seq} applies. Applying Corollary~\ref{CM}, one can then compute the $h$-vector directly from the $f$-vector of an appropriate simplicial complex, formed using the clique complex of the complement of the connectivity graph on the perfect matching. Equivalently, $h_i$ is the number of subsets of cardinality $i$ of the matching for which no two edges of the subset are connected in \(G\). For a specific example, consider the Cohen-Macaulay bipartite graph on \(8\) vertices depicted below. The center graph represents \(G_{\{x_1,y_1\},\{x_2,y_2\},\{x_3,y_3\},\{x_4,y_4\}}\) and the right hand graph depicts \(G^{\{x_1,y_1\},\{x_2,y_2\},\{x_3,y_3\},\{x_4,y_4\}}\), all of which have the same \(h\)-vector, \((1,4,1)\). Note the non-loop edges of the center graph encode connections between the edges of the perfect matching, while independent sets of these vertices (ignoring the loops) determine the $h$-vector.
\vspace{-.5in} \noindent\begin{minipage}{\textwidth} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture} \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=left: $x_1$] at (-1,1.3) {}; \node (b)[point,label=left:$x_2$] at (0,1.3){}; \node (c)[point,label=right:$x_3$] at (1,1.3){}; \node (d)[point,label=right:$x_4$] at (2,1.3){}; \node (e)[point,label=left:$y_1$] at (-1,0){}; \node (f)[point,label=left:$y_2$] at (0,0){}; \node (g)[point,label=right:$y_3$] at (1,0){}; \node (h)[point,label=right:$y_4$] at (2,0){}; \draw (a.center) -- (e.center); \draw (b.center) -- (f.center); \draw (c.center) -- (g.center); \draw (d.center) -- (h.center); \draw (a.center) -- (f.center); \draw (a.center) -- (g.center); \draw (a.center) -- (h.center); \draw (b.center) -- (g.center); \draw (b.center) -- (h.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \filldraw [black] (g.center) circle (1pt); \filldraw [black] (h.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=left: $x_1$] at (-1,1) {}; \node (b)[point,label=left:$x_2$] at (0,.3){}; \node (c)[point,label=right:$x_3$] at (1,0.3){}; \node (d)[point,label=right:$x_4$] at (2,1){}; \draw (a.center) -- (b.center); \draw (a.center) -- (c.center); \draw (a.center) -- (d.center); \draw (b.center) -- (c.center); \draw (b.center) -- (d.center); \draw (a) edge [anchor=center, loop above] (a); \draw (b) edge [anchor=center, loop below] (b); \draw (c) edge [anchor=center, loop below] (c); \draw (d) edge [anchor=center, loop above] (d); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=left: $x_1$] at (-1,.7) {}; \node (b)[point,label=left:$x_2$] at (0,0){}; \node (c)[point,label=right:$x_3$] at (1,0){}; \node (d)[point,label=right:$x_4$] at (2,.7){}; \node (e)[point,label=left:$y_1$] at (-1,1.5){}; \node (f)[point,label=left:$y_2$] at (0,1.2){}; \node (g)[point,label=right:$y_3$] at (1,1.2){}; \node (h)[point,label=right:$y_4$] at (2,1.5){}; \draw (a.center) -- (b.center); \draw (a.center) -- (c.center); \draw (a.center) -- (d.center); \draw (b.center) -- (c.center); \draw (b.center) -- (d.center); \draw (a.center) -- (e.center); \draw (b.center) -- (f.center); \draw (c.center) -- (g.center); \draw (d.center) -- (h.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \filldraw [black] (g.center) circle (1pt); \filldraw [black] (h.center) circle (1pt); \end{tikzpicture} \end{minipage} \end{minipage} \end{example} \vspace{-.5in} The following illustrates how the results can be applied when the graph is not Cohen-Macaulay, and thus does not have a perfect matching. \begin{example} Let \(I=(x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_6,x_6x_7,x_2x_7,x_2x_5)\) be the edge ideal of the graph depicted below.
Notice that \(\{x_1,x_2\}, \{x_3,x_4\},\{x_6,x_7\}\) is a maximal matching where each edge has Property \(P\) and satisfies the condition on neighbor sets necessary to apply Theorem~\ref{reg seq}. As in \cite[Proposition 4.4]{Brennan-Morey}, the $h$-vector of each of these three graphs is computed via a difference of two $f$-vectors: one that counts independent sets of size $i$ and the other (shifted) counting independent sets of size $i-1$ that do not contain \(x_5\). The result is the $h$-vector \((1,3,-2,-1)\), written below as a difference of polynomials to make the shift clear: \[(1+4t+t^2) - t(1+3t+t^2) = 1 + 3t -2t^2-t^3.\] \noindent\begin{minipage}{\textwidth} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture} \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=above: $x_1$] at (3.5, 1) {}; \node (b)[point,label=above:$x_2$] at (2.5,1){}; \node (c)[point,label=right:$x_3$] at (1.75,0){}; \node (d)[point,label=left:$x_4$] at (0.75,0){}; \node (e)[point,label=left:$x_5$] at (0,1){}; \node (f)[point,label=left:$x_6$] at (0.75,2){}; \node (g)[point,label=right:$x_7$] at (1.75,2){}; \draw (a.center) -- (b.center); \draw (b.center) -- (c.center); \draw (c.center) -- (d.center); \draw (d.center) -- (e.center); \draw (e.center) -- (f.center); \draw (f.center) -- (g.center); \draw (b.center) -- (g.center); \draw (b.center) -- (e.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \filldraw [black] (g.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (b)[point,label=above:$x_2$] at (2.5,1){}; \node (d)[point,label=left:$x_4$] at (1.25,0){}; \node (e)[point,label=left:$x_5$] at (0,1){}; \node (g)[point,label=right:$x_7$] at (1.25,2){}; \draw (b.center) -- (d.center); \draw (d.center) -- (e.center); \draw (e.center) -- (g.center); \draw (b.center) -- (g.center); \draw (b.center) -- (e.center); \draw (b) edge [anchor=center, loop right] (b); \draw (d) edge [anchor=center, loop below] (d); \draw (g) edge [anchor=center, loop above] (g); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (g.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=right:$x_1$] at (2.5,2){}; \node (b)[point,label=right:$x_2$] at (2.5,1){}; \node (c)[point,label=right:$x_3$] at (1.25,0.85){}; \node (d)[point,label=left:$x_4$] at (1.25,0){}; \node (e)[point,label=left:$x_5$] at (0,1){}; \node (f)[point,label=right:$x_6$] at (1.25,3){}; \node (g)[point,label=right:$x_7$] at (1.25,2){}; \draw (b.center) -- (d.center); \draw (d.center) -- (e.center); \draw (e.center) -- (g.center); \draw (b.center) -- (g.center); \draw (b.center) -- (e.center); \draw (a.center) -- (b.center); \draw (c.center) -- (d.center); \draw (f.center) -- (g.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \filldraw [black] (g.center) circle (1pt); \end{tikzpicture}
\end{minipage} \end{minipage} \end{example} We conclude with an example of how these techniques can be used when there is a known regular sequence consisting of linear binomials that do not necessarily all come from edges with Property \(P\). Examples of such regular elements can be found in \cite{FHM} and such binomials are classified in Theorem~\ref{binomial regular}. \begin{example} Let $I=(x_1x_2, x_2x_3,x_2x_4,x_4x_5,x_4x_6)$ be the edge ideal of the graph $G$ depicted below. Then $x_1+x_2, x_4+x_6, x_3+x_5$ form a regular sequence on $R/I$ (see \cite{FHM}). The graph corresponding to $R/(I,x_1+x_2, x_4+x_6, x_3+x_5)$ and its polarization are also depicted below. Note that the $h$-vector can again be computed as a difference via \cite[Proposition 4.4]{Brennan-Morey}. \noindent\begin{minipage}{\textwidth} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture} \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=right: $x_1$] at (0,0) {}; \node (b)[point,label=left:$x_2$] at (1,1){}; \node (c)[point,label=right:$x_3$] at (0,2){}; \node (d)[point,label=right:$x_4$] at (2,1){}; \node (e)[point,label=left:$x_5$] at (3,2){}; \node (f)[point,label=left:$x_6$] at (3,0){}; \draw (a.center) -- (b.center); \draw (b.center) -- (c.center); \draw (b.center) -- (d.center); \draw (d.center) -- (e.center); \draw (d.center) -- (f.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (e.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (b)[point,label=left:$x_1$] at (1,1){}; \node (c)[point,label=right:$x_3$] at (1.5,2){}; \node (d)[point,label=right:$x_4$] at (2,1){}; \draw (b.center) -- (c.center); \draw (b.center) -- (d.center); \draw (d.center) -- (c.center); \draw (b) edge [anchor=center, loop below] (b); \draw (d) edge [anchor=center, loop below] (d); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \end{tikzpicture} \end{minipage} \begin{minipage}[c][6cm][c]{0.3\textwidth} \begin{tikzpicture}[every loop/.style={}] \tikzstyle{point}=[inner sep=0pt] \node (a)[point,label=right: $x_2$] at (0,0) {}; \node (b)[point,label=left:$x_1$] at (1,1){}; \node (c)[point,label=right:$x_3$] at (1.5,2){}; \node (d)[point,label=right:$x_4$] at (2,1){}; \node (f)[point,label=left:$x_6$] at (3,0){}; \draw (a.center) -- (b.center); \draw (b.center) -- (c.center); \draw (b.center) -- (d.center); \draw (d.center) -- (c.center); \draw (d.center) -- (f.center); \filldraw [black] (a.center) circle (1pt); \filldraw [black] (b.center) circle (1pt); \filldraw [black] (c.center) circle (1pt); \filldraw [black] (d.center) circle (1pt); \filldraw [black] (f.center) circle (1pt); \end{tikzpicture} \end{minipage} \end{minipage} Then, as above, writing the $h$-vector as a polynomial in $t$ to emphasize the shifts, the numerator of the Hilbert series is $(1+3t) - t(1+2t) = 1+2t-2t^2$, where $1+3t$ encapsulates the independent sets of the triangle $x_1,x_3,x_4$ and $t(1+2t)$ counts independent sets not involving $x_3$. \end{example} \section{Declarations} No external funding was received to assist with the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose. There is no associated data with this manuscript.
\bibliography{bibliography} \end{document}
2412.10532v1
http://arxiv.org/abs/2412.10532v1
On Enumerating Higher Bruhat Orders Through Deletion and Contraction
\documentclass[12pt]{article} \newcommand{\cleverefoptions}{capitalise,noabbrev} \usepackage[amsmath, cleveref]{e-jc} \usepackage{enumitem} \usepackage{graphicx} \usepackage{lipsum} \usepackage{mathtools} \usepackage{tikz} \usepackage{xcolor} \usepackage[ruled]{algorithm2e} \usepackage{bm} \usetikzlibrary{positioning} \usepackage{standalone} \usepackage[ backend=biber, maxnames=10, minnames=10, style=alphabetic]{biblatex} \renewbibmacro{in:}{} \DeclareFieldFormat{doi}{ \ifhyperref {\href{https://doi.org/#1}{\texttt{[DOI]}}} {\texttt{[doi]}}} \addbibresource{main.bib} \usepackage{xurl} \hypersetup{breaklinks=true} \newcommand{\defn}[1]{\emph{\textbf{#1}}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \renewcommand{\S}{\mathfrak{S}} \renewcommand{\L}{\mathcal{L}} \newcommand{\X}{X} \newcommand{\ssep}{:} \newcommand{\ind}{\mathbb{1}} \newcommand{\A}[2]{\mathcal{A}(#1, #2)} \newcommand{\Astar}[2]{\mathcal{A}^*(#1, #2)} \newcommand{\B}[2]{\mathcal{B}(#1, #2)} \newcommand{\Bstar}[2]{\mathcal{B}^*(#1, #2)} \newcommand{\Blex}[2]{\mathcal{B}_{lex}(#1, #2)} \newcommand{\C}[2]{\mathcal{C}(#1, #2)} \newcommand{\Cstar}[2]{\mathcal{C}^*(#1, #2)} \newcommand{\CstarI}[3]{\mathcal{C}_{#1}^*(#2, #3)} \renewcommand{\P}{\mathcal{P}} \newcommand{\T}{\mathcal{T}} \newcommand{\delete}{\backslash} \newcommand{\contract}{/} \newcommand{\reviso}[2]{\Rev_{#1,#2}} \newcommand{\coreviso}[2]{\Corev_{#1,#2}} \newcommand{\textmultiset}[2]{\bigl(\!{\binom{#1}{#2}}\!\bigr)} \newcommand{\displaymultiset}[2]{\left(\!{\binom{#1}{#2}}\!\right)} \newcommand\multiset[2]{\mathchoice{\displaymultiset{#1}{#2}} {\textmultiset{#1}{#2}} {\textmultiset{#1}{#2}} {\textmultiset{#1}{#2}}} \DeclareMathOperator{\Inv}{Inv} \DeclareMathOperator{\coinv}{coinv} \DeclareMathOperator{\corev}{corev} \DeclareMathOperator{\Corev}{Corev} \DeclareMathOperator{\Rev}{Rev} \DeclareMathOperator{\width}{width} \definecolor{mygray}{HTML}{ACACAC} \definecolor{myred}{HTML}{E6194B} \definecolor{mygreen}{HTML}{3CB44B} \definecolor{myblue}{HTML}{4363D8} \definecolor{mypink}{HTML}{F032E6} \definecolor{myorange}{HTML}{F58231} \definecolor{mypurple}{HTML}{7F00FF} \definecolor{mybrown}{HTML}{954535} \usetikzlibrary{cd} \Crefname{algocf}{Algorithm}{Algorithms} \numberwithin{equation}{section} \title{On Enumerating Higher Bruhat Orders\\Through Deletion and Contraction} \author{Herman Chau} \date{\today} \authortext{}{Department of Mathematics, University of Washington (\email{[email protected]}). 
The author was partially supported by NSF Grant DMS-1764012.} \newcounter{x} \newcounter{y} \newcounter{z} \newcommand\xaxis{210} \newcommand\yaxis{-30} \newcommand\zaxis{90} \newcommand\topside[3]{ \fill[fill=yellow, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (30:1) -- (0,1) --(150:1)--(0,0); } \newcommand\leftside[3]{ \fill[fill=red, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (0,-1) -- (210:1) --(150:1)--(0,0); } \newcommand\rightside[3]{ \fill[fill=blue, draw=black,shift={(\xaxis:#1)},shift={(\yaxis:#2)}, shift={(\zaxis:#3)}] (0,0) -- (30:1) -- (-30:1) --(0,-1)--(0,0); } \newcommand\cube[3]{ \topside{#1}{#2}{#3} \leftside{#1}{#2}{#3} \rightside{#1}{#2}{#3} } \newcommand\planepartition[1]{ \setcounter{x}{-1} \foreach \a in {#1} { \addtocounter{x}{1} \setcounter{y}{-1} \foreach \b in \a { \addtocounter{y}{1} \setcounter{z}{-1} \foreach \c in {1,...,\b} { \addtocounter{z}{1} \cube{\value{x}}{\value{y}}{\value{z}} } } } } \begin{document} \maketitle \begin{abstract} The higher Bruhat orders $\B{n}{k}$ were introduced by Manin--Schechtman to study discriminantal hyperplane arrangements and subsequently studied by Ziegler, who connected $\B{n}{k}$ to oriented matroids. In this paper, we consider the enumeration of $\B{n}{k}$ and improve upon Balko's asymptotic lower and upper bounds on $|\B{n}{k}|$ by a factor exponential in $k$. A proof of Ziegler's formula for $|\B{n}{n-3}|$ is given and a bijection between a certain subset of $\B{n}{n-4}$ and totally symmetric plane partitions is proved. Central to our proofs are deletion and contraction operations for the higher Bruhat orders, defined in analogy with matroids. Dual higher Bruhat orders are also introduced, and we construct isomorphisms relating the higher Bruhat orders and their duals. Additionally, weaving functions are introduced to generalize Felsner's encoding of elements in $\B{n}{2}$ to all higher Bruhat orders $\B{n}{k}$. \end{abstract} \section{Introduction} The higher Bruhat orders $\B{n}{k}$ are a family of partially ordered sets introduced by Manin--Schechtman to study the combinatorics and topology of discriminantal hyperplane arrangements. Specifically, the elements in $\B{n}{k}$ are related to the order in which paths connecting antipodal chambers in a discriminantal arrangement pass through the hyperplanes in the arrangement \cite[Section 2.3]{maninschechtman89}. The intersection lattice of discriminantal arrangements was further studied by Falk \cite{falk-1994}, Bayer--Brandt \cite{bayer-brandt-1997}, and Athanasiadis \cite{athanasiadis-1999}. Furthermore, Laplante-Anfossi--Williams showed a correspondence between $\B{n}{k}$ and the cup-$i$ coproducts defining Steenrod squares in cohomology \cite{laplante-anfossi-williams-2023}. Despite the name, the higher Bruhat orders generalize the weak order on the symmetric group $\S_n$ and \emph{not} the Bruhat order. The poset $\B{n}{1}$ is precisely the weak order on $\S_n$. The poset $\B{n}{2}$ is related to primitive sorting networks \cite{knuth92}, commutation classes of reduced expressions of the longest permutation \cite{elnitsky97, tenner2006}, weakly separated set systems \cite{danilovkarzanovkoshevoy2010}, and rhombic tilings of a regular $2n$-gon \cite{escobar-pechenik-tenner-yong-2018}. In \cite{ziegler93}, Ziegler introduced a characterization of the higher Bruhat orders in terms of collections of $k$-subsets that satisfy a certain consistency property, partially ordered by single step inclusion.
See \cref{section:preliminaries} for the precise definition of consistency. The poset $\C{n}{k+1}$ on consistent subsets is isomorphic to $\B{n}{k}$, and the proof of the isomorphism of the two relies on the theory of single element extensions of alternating matroids \cite{ziegler93}. The goal of this paper is to obtain new enumerative and asymptotic results on the higher Bruhat orders. Along the way, the dual higher Bruhat orders $\Bstar{n}{k}$ and $\Cstar{n}{k}$ are introduced. Notably, $\Bstar{n}{k}$ and $\Cstar{n}{k}$ differ from the usual notion of poset duality, which reverses the partial order on elements. Deletion and contraction on elements of $\B{n}{k}$ and $\C{n}{k}$ are defined in analogy with the corresponding operations in matroids. As one would expect, deletion and contraction are dual to each other in the higher Bruhat orders. The main structural result is the commutative diagram of isomorphisms in \eqref{equation:commutative-diagram} relating $\B{n}{k}$, $\C{n}{k+1}$ and their duals. The commutative diagram is similar to the one in Falk's note on the duality of deletion and contraction in \cite[Remark 2.6]{falk-1994}, but applied to the higher Bruhat orders instead of the generic stratum in the Grassmannian. See \cref{section:dual-orders} for the precise construction of the isomorphisms. \begin{theorem} \label{theorem:fundamental-duality} For all integers $1 \le k < n$, there exist poset isomorphisms $\reviso{n}{k}$, $\coreviso{n}{k}$, $\beta_{n,k}$, and $\gamma_{n,k}$ such that the following diagram commutes: \begin{equation} \label{equation:commutative-diagram} \begin{tikzcd}[row sep=normal, column sep=huge] \B{n}{k} \arrow[r, "\reviso{n}{k}"] \arrow[d, "\beta_{n,k}"] & \C{n}{k+1} \arrow[d, "\gamma_{n,k+1}"]\\ \Bstar{n}{n-k} \arrow[r, "\coreviso{n}{n-k}"] & \Cstar{n}{n-k-1}. \end{tikzcd} \end{equation} \end{theorem} Little is known about the exact enumeration of the higher Bruhat orders. Since $\B{n}{1}$ is the weak order on $\S_n$, it is clear that $|\B{n}{1}| = n!$, but already there is no known closed formula for $|\B{n}{2}|$. The sequence $|\B{n}{2}|$ begins 1, 2, 8, 62, 908, $\ldots$ and can be found in the OEIS sequence \href{https://oeis.org/A006245}{A006245}. The prime factors of $|\B{n}{2}|$ grow quickly relative to $n$, so a product formula for $|\B{n}{2}|$ is unlikely. On the other hand, for $|\B{n}{n-k}|$ the following exact formulas appear in \cite{ziegler93} for $k = 0, 1, 2, 3$: $|\B{n}{n}| = 1$, $|\B{n}{n-1}| = 2$, $|\B{n}{n-2}| = 2n$, and $|\B{n}{n-3}| = 2^n + n2^{n-2}-2n$. Given the difficulty in computing a closed formula, one might ask for asymptotics on $|\B{n}{k}|$ instead. When $k = 2$, the best upper bound to date is $\log_2 |\B{n}{2}| \le 0.6571n^2$ due to Felsner and Valtr \cite{felsnervaltr2011}, and the best lower bound to date is $0.2721n^2 \le \log_2 |\B{n}{2}|$ due to K\"uhnast, Dallant, Felsner, and Scheucher \cite{kuhnast-dallant-felsner-scheucher}. Furthermore, Balko \cite[Theorem 3]{balko2019} recently showed that for $k \ge 2$ and sufficiently large $n \gg k$, we have \begin{equation} \frac{n^k}{(k+1)^{4(k+1)}} \le \log_2 |\B{n}{k}| \le \frac{2^{k-1}n^k}{k!}. \end{equation} The following theorem improves upon Balko's asymptotic bounds. A more precise statement is given in \cref{theorem:higher-bruhat-order-asymptotics}. The proof uses a new encoding of elements in $\B{n}{k}$ termed \emph{weaving functions}, which generalize an encoding of $\B{n}{2}$ due to Felsner's \cite{felsner1997}. 
The same methods are also used to prove \cref{theorem:dual-higher-bruhat-order-asymptotics}, which gives similar asymptotic bounds on $|\B{n}{n-k}| = |\Bstar{n}{k}|$. \begin{theorem} \label{theorem:higher-bruhat-order-asymptotics-intro} For every integer $k \ge 2$, there exists a constant $c_k$ such that for sufficiently large $n \gg k$, we have \begin{equation} \frac{c_kn^k}{k!(k+1)!} \le \log_2 |\B{n}{k}| \le \frac{n^k}{k!\log{2}}. \end{equation} Furthermore, the constant $c_k$ satisfies $\displaystyle \lim_{k \to \infty} \frac{c_k}{k!/\sqrt{24 \pi k}} = 1$. \end{theorem} Partitioning the elements of $\B{n}{k}$ by their deletion leads to the formula for $|\B{n}{n-3}|$ that appeared in \cite{ziegler93} without proof. By enumerating elements in $\B{n}{n-4}$ with a fixed deletion set, one also obtains a connection between the higher Bruhat orders and the celebrated totally symmetric plane partitions (TSPPs), which have been studied by Andrews, Paule, and Schneider \cite{andrews-paule-schneider-2005}, Mills, Robbins, and Rumsey Jr. \cite{mills-robbins-rumsey}, and Stembridge \cite{stembridge95}, among many others. \begin{theorem} \label{theorem:TSPP-bijection} For every integer $n \ge 5$, the poset of TSPPs $\T_{n-3}$, partially ordered by inclusion, is isomorphic to the subposet of $\B{n}{n-4}$ obtained by restricting to elements whose deletion is the commutation class of the lexicographic order on $\binom{[n-1]}{k}$. \end{theorem} This paper is structured as follows. In Section 2, the definitions and properties of the weak order on $\mathfrak{S}_n$, the partial orders $\B{n}{k}$ and $\C{n}{k}$, and totally symmetric plane partitions are reviewed. In Section 3, the dual partial orders $\Bstar{n}{k}$ and $\Cstar{n}{k}$ are introduced and \cref{theorem:fundamental-duality} is proved. In Section 4, the deletion and contraction operations are defined, and their fundamental properties are proved. In Section 5, weaving functions are introduced, and it is proved that distinct elements of $\B{n}{k}$ have distinct weaving functions. In Section 6, the asymptotic bounds in \cref{theorem:higher-bruhat-order-asymptotics} and \cref{theorem:dual-higher-bruhat-order-asymptotics} are proved. In Section 7, the known formula for $|\B{n}{n-3}|$ is proved and the isomorphism between a subposet of $\B{n}{n-4}$ and $\T_{n-3}$ is stated and proved. In Section 8, open problems for future study are proposed. \section{Preliminaries} \label{section:preliminaries} \subsection{Weak Order on the Symmetric Group} For a positive integer $n$, let $[n]$ denote the set $\{1, 2, \ldots, n\}$. By convention, $[0] = \varnothing$. Given an unordered set $X = \{x_1, x_2, \ldots, x_n\}$, let $(x_1, x_2, \ldots, x_n)$ denote $X$ as an ordered set. When $(x_1, x_2, \ldots, x_n)$ are integers in increasing order, the notation $[x_1, x_2, \ldots, x_n]$ will be used to emphasize that the elements are in increasing order. In order to avoid ambiguity between $[n]$ as a set of positive integers and $[n]$ as an ordered set of size one, the latter will be written as the singleton set $\{n\}$. For an integer $k$ such that $1 \le k \le n$, let $\binom{[n]}{k}$ denote the collection of ordered subsets of $k$ distinct elements in $[n]$, ordered in increasing order. For example, $\binom{[4]}{2} = \{[1,2], [1,3], [1,4], [2,3], [2,4], [3,4]\}$. By convention, $\binom{[n]}{0} = \{\varnothing\}$, and if $k$ is a positive integer such that $k > n$, then $\binom{[n]}{k} = \varnothing$. 
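For concreteness, the bracket notation for $\binom{[n]}{k}$ can be generated mechanically; the following short Python sketch (the function name is ours, introduced only for this illustration) lists $\binom{[4]}{2}$ with each subset written in increasing order.
\begin{verbatim}
from itertools import combinations

def ordered_subsets(n, k):
    # binom([n], k): all k-subsets of [n] = {1, ..., n},
    # each returned as an increasing tuple.
    return [c for c in combinations(range(1, n + 1), k)]

# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)],
# matching binom([4], 2) = {[1,2], [1,3], [1,4], [2,3], [2,4], [3,4]}.
print(ordered_subsets(4, 2))
\end{verbatim}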
A collection $\mathcal{S}$ of sets is said to be partially ordered by \defn{inclusion} if for any two sets $X,Y \in \mathcal{S}$, $X \le Y$ if and only if $X \subseteq Y$. The symmetric group on $[n]$ is denoted $\mathfrak{S}_n$. The one-line notation of a permutation $\rho = (\rho_1, \rho_2, \ldots, \rho_n)$ in $\mathfrak{S}_n$ denotes the permutation that maps $i \mapsto \rho_i$ for $i \in [n]$. For example, $(2,3,1) \in \mathfrak{S}_3$ is the permutation sending $1 \mapsto 2$, $2 \mapsto 3$, and $3 \mapsto 1$. For $i \in [n-1]$, the adjacent transposition $\tau_i \in \mathfrak{S}_n$ is the permutation that swaps $i$ and $i+1$ while fixing the other elements in $[n]$. The inversion set of a permutation $\rho \in \mathfrak{S}_n$ is the subset of $\binom{[n]}{2}$ given by \[ \Inv(\rho) = \left\{[i,j] \in \binom{[n]}{2} : \text{$\rho^{-1}(i) > \rho^{-1}(j)$}\right\}. \] The \defn{(right) weak order} on $\mathfrak{S}_n$ is the partial order obtained by taking the transitive closure of the cover relations where $\rho \lessdot \sigma$ if and only if $\sigma = \rho \tau_i$ for some $i \in [n-1]$ such that $\rho_i < \rho_{i+1}$. One can check that if $\rho \lessdot \sigma$ and $\sigma = \rho\tau_i$, then $\Inv(\sigma) = \Inv(\rho) \cup \{[\rho_i, \rho_{i+1}]\}$. It is well-known that the weak order on $\mathfrak{S}_n$ is isomorphic to the poset of inversion sets of permutations in $\mathfrak{S}_n$, partially ordered by inclusion \cite{bjornerbrenti2005}. \begin{definition}[\cite{ziegler93}] \label{definition:single-step-inclusion} For a collection $\mathcal{S}$ of sets, the \defn{single step inclusion} partial order on $\mathcal{S}$ is the partial order obtained by taking the transitive closure of the cover relations where $X \lessdot Y$ if and only if $Y = X \cup \{y\}$ for some $y \not\in X$. \end{definition} \begin{lemma}[\cite{bjornerbrenti2005,ziegler93}] \label{lemma:weak_order_isomorphic_to_single_step_inclusion} The weak order on $\S_n$ is isomorphic to the single step inclusion partial order on $\{\Inv(w) : w \in \S_n\}$. \end{lemma} \begin{lemma}[\cite{ziegler93}] \label{lemma:inversion_set_characterization} A subset $I \subseteq \binom{[n]}{2}$ is the inversion set of some permutation in $\S_n$ if and only if $I$ satisfies the following two properties: \begin{itemize} \item $(i,j), (j,k) \in I$ implies $(i,k) \in I$ for all $i < j < k \in [n]$, \item $(i,j), (j,k) \not\in I$ implies $(i,k) \not\in I$ for all $i < j < k \in [n]$. \end{itemize} \end{lemma} In general, the inclusion partial order and the single step inclusion partial order on a collection of sets are not isomorphic. However, the two partial orders coincide in the case of the inversion sets of permutations in $\mathfrak{S}_n$. Further details on the weak order can be found in \cite{bjornerbrenti2005}. \subsection{Higher Bruhat Orders} \label{subsection:higher-bruhat-order-intro} In this subsection, the higher Bruhat orders are reviewed, according to the definitions of Manin--Schechtman and Ziegler. For an ordered set $X = [x_1, \ldots, x_k] \in \binom{[n]}{k}$ and a sequence of positive integers $1 \le i_1 < i_2 < \cdots < i_j \le k$, the notation $X_{i_1, \ldots, i_j}$ denotes the ordered set $X \setminus \{x_{i_1}, \ldots, x_{i_j}\}$. For example, if $X = [1, 5, 6, 8, 9]$, then $X_{3,5} = [1,5,8]$. The \defn{packet} of an ordered set $X$ is the unordered set of all size $(k-1)$ subsets of $X$, denoted $P(X) = \{\X_1, \X_2, \ldots, \X_k\}$. 
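Both the inversion set of a permutation and the subscript notation $X_{i_1, \ldots, i_j}$ are straightforward to compute; the following Python sketch (the helper names are ours, chosen only for this illustration) computes $\Inv(\rho)$ from one-line notation, tests the two conditions of \cref{lemma:inversion_set_characterization}, and checks the worked instance $X_{3,5} = [1,5,8]$ for $X = [1,5,6,8,9]$.
\begin{verbatim}
from itertools import combinations, permutations

def inversion_set(rho):
    # Inv(rho) for a permutation rho in one-line notation, e.g. (2, 3, 1).
    position = {v: i for i, v in enumerate(rho)}   # position[v] = rho^{-1}(v) - 1
    return {(i, j) for i, j in combinations(sorted(rho), 2)
            if position[i] > position[j]}

def satisfies_characterization(I, n):
    # The two closure conditions characterizing inversion sets in S_n.
    for i, j, k in combinations(range(1, n + 1), 3):
        if (i, j) in I and (j, k) in I and (i, k) not in I:
            return False
        if (i, j) not in I and (j, k) not in I and (i, k) in I:
            return False
    return True

def delete_positions(X, positions):
    # X_{i_1,...,i_j}: remove the entries of X at the given (1-based) positions.
    return [x for p, x in enumerate(X, start=1) if p not in positions]

assert inversion_set((2, 3, 1)) == {(1, 2), (1, 3)}
assert all(satisfies_characterization(inversion_set(p), 4)
           for p in permutations(range(1, 5)))
assert not satisfies_characterization({(1, 3)}, 3)   # (1,2),(2,3) absent but (1,3) present
assert delete_positions([1, 5, 6, 8, 9], {3, 5}) == [1, 5, 8]
\end{verbatim}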
For a total order $\rho$ on a set $\mathcal{S}$ and a subset $I \subseteq \mathcal{S}$, the \defn{restriction} $\rho|_I$ is the total order on $I$ inherited from $\rho$. The \defn{lexicographic} (lex) order on $\binom{[n]}{k}$ is the total order on $\binom{[n]}{k}$ where, for any two elements $[i_1, \ldots, i_k], [j_1, \ldots, j_k] \in \binom{[n]}{k}$, $[i_1, \ldots, i_k] < [j_1, \ldots, j_k]$ if and only if there exists $r \in [k]$ such that $i_r < j_r$, and $i_1 = j_1$, $i_2 = j_2$, $\cdots$, $i_{r-1} = j_{r-1}$. The \defn{antilexicographic} (antilex) order is the total order on $\binom{[n]}{k}$ obtained by reversing the lex order. For a packet $P(X)$, the lex order is $(\X_k, \X_{k-1}, \ldots, \X_1)$, and the antilex order is $(\X_1, \X_2, \ldots, \X_k)$. Notice that the indices of $X_k, X_{k-1}, \ldots, X_1$ are in decreasing order for the lex order on $P(X)$ since omitting the largest element $x_k$ corresponds to the lexicographically first element $X_k = [x_1, x_2, \ldots, x_{k-1}]$. A \defn{prefix} of $P(X)$ in lex order is a (possibly empty) ordered subset of the form $(X_k, X_{k-1}, \ldots, X_i)$, and a \defn{suffix} of $P(X)$ in lex order is a (possibly empty) ordered subset of the form $(X_i, X_{i-1}, \ldots, X_1)$. \begin{example} The lex order on $\binom{[4]}{2}$ is the total order \[ [1,2] < [1,3] < [1,4] < [2,3] < [2,4] < [3,4], \] which is identified with the ordered sequence \[ ([1,2], [1,3], [1,4], [2,3], [2,4], [3,4]). \] When unambiguous, the brackets and commas are omitted from an ordered set. Thus, the lex order on $\binom{[4]}{2}$ may be written more compactly as $(12, 13, 14, 23, 24, 34)$. The antilex order on $\binom{[4]}{2}$ is $(34, 24, 23, 14, 13, 12)$. The restriction of the antilex order on $\binom{[4]}{2}$ to the packet $P(123)$ is $(23, 13, 12)$. \end{example} \begin{definition}[\cite{maninschechtman89}] A total order $\rho$ on $\binom{[n]}{k}$ is \defn{admissible} if for every $X \in \binom{[n]}{k+1}$, the restriction $\rho|_{P(X)}$ is either the lex or antilex order on $P(X)$. The set of admissible orders on $\binom{[n]}{k}$ is denoted $\mathbf{\A{n}{k}}$. Given an admissible order $\rho \in \A{n}{k}$, its \defn{reversal set} is \begin{equation} \boldsymbol{\reviso{n}{k}(\rho)} = \left\{X \in \binom{[n]}{k+1} \ssep \text{$\rho|_{P(X)}$ is the antilex order}\right\}. \end{equation} \end{definition} The subscript in $\Rev_{n,k}$ is dropped when $k$ and $n$ are clear from context. For integers $1 \le k \le n$, the lex (resp. antilex) order on $\binom{[n]}{k}$ is always an admissible order in $\A{n}{k}$ since the restriction to any packet is always the lex (resp. antilex) order on the packet. The reversal set of the lex order on $\binom{[n]}{k}$ is the empty set since there are no packets in antilex order. The reversal set of the antilex order on $\binom{[n]}{k}$ is $\binom{[n]}{k+1}$ since the restriction to every packet is the antilex order. \begin{example} Consider the total order \begin{equation} \label{equation:example-admissible-order} \rho = (23, 24, 25, 45, 13, 15, 35, 14, 34, 12). \end{equation} The restriction of $\rho$ to the packet $P(145) = \{14, 15, 45\}$ is $\rho|_{P(145)} = (45, 15, 14)$, which is the antilex order on $P(145)$. One can check that for every $X \in \binom{[5]}{3}$, the restriction of $\rho$ to $P(X)$ is either the lex or antilex order on $P(X)$. Thus, $\rho$ is in $\A{5}{2}$. Its reversal set is \begin{equation} \label{equation:example-reversal-set} \Rev(\rho) = \{123, 124, 125, 145, 345\}. 
\end{equation} \end{example} \begin{example} As a nonexample, consider transposing $13$ and $15$ in the total order $\rho$ of \eqref{equation:example-admissible-order}. This yields the total order \begin{equation} \tau = (23, 24, 25, 45, 15, 13, 35, 14, 34, 12). \end{equation} The restriction of $\tau$ to the packet $P(135) = \{13, 15, 35\}$ is $\tau|_{P(135)} = (15, 13, 35)$, which is neither the lex order nor the antilex order on $P(135)$. Thus, $\tau$ is \emph{not} an admissible order. Note that $P(13) \cap P(15) \neq \varnothing$. However, $P(45) \cap P(13) = \varnothing$ and one can check that transposing $45$ and $13$ in $\rho$ does yield an admissible total order. \end{example} Subsets $X,Y \in \binom{[n]}{k}$ are said to \defn{commute} if $P(X) \cap P(Y) = \varnothing$. If $\rho = (\rho_1, \ldots, \rho_{\binom{n}{k}}) \in \A{n}{k}$ and $\rho_i, \rho_{i+1}$ commute, then one can observe that the total order obtained by transposing $\rho_i$ and $\rho_{i+1}$ is also an admissible order. Two admissible orders $\rho, \rho' \in \A{n}{k}$ are \defn{commutation equivalent}, denoted $\boldsymbol{\rho \sim \rho'}$, if $\rho'$ can be obtained from $\rho$ by a sequence of commutations. The \defn{commutation class} of an admissible order $\rho \in \A{n}{k}$, denoted $\boldsymbol{[\rho]}$, is the set of admissible orders that are commutation equivalent to $\rho$. Note that reversal sets are invariant under commutations, so $\Rev([\rho])$ is well-defined for a commutation class $[\rho]$. Let $\rho = (\rho_1, \ldots, \rho_{\binom{n}{k}}) \in \A{n}{k}$ and $X \in \binom{[n]}{k+1}$, such that $\rho|_{P(X)}$ is a saturated chain $(\rho_i, \rho_{i+1}, \ldots, \rho_{i+k})$ of $\rho$. Reversing the order on the saturated chain yields a new total order $\sigma = (\rho_1, \ldots, \rho_{i-1}, \rho_{i+k}, \ldots, \rho_i, \rho_{i+k+1}, \ldots, \rho_{\binom{n}{k}})$. If $\rho|_{P(X)}$ is the lex (resp. antilex) order on $P(X)$, then $\sigma$ is said to be obtained from $\rho$ by a \defn{packet flip of $P(X)$ from lex to antilex} (resp. antilex to lex). One can observe that the new total order $\sigma$ is also an admissible order. \begin{definition}[\cite{maninschechtman89}] \label{definition:higher-bruhat-order-1} For integers $1 \le k \le n$, the \defn{higher Bruhat order} $\mathbf{\B{n}{k}}$ is the poset on commutation classes $\A{n}{k}/\hspace{-0.2em}\sim$ obtained by taking the transitive closure of the cover relations where $[\rho] \lessdot [\sigma]$ if and only if there exist $\rho' \in [\rho]$ and $\sigma' \in [\sigma]$ such that $\sigma'$ is obtained from $\rho'$ by a packet flip from lex to antilex. \end{definition} \begin{example} \label{example:total-order-reversal-set} The total order $\rho = (23, 13, 24, 14, 12, 34)$ is an admissible order in $\A{4}{2}$ with reversal set $\Rev(\rho) = \{123, 124\}$. The commutation class $[\rho]$ is \[ \{(23, 24, 13, 14, 12, 34), (23, 13, 24, 14, 34, 12), (23, 24, 13, 14, 34, 12), (23, 13, 24, 14, 12, 34)\}, \] obtained by commuting only $(13, 24)$, only $(12, 34)$, both, or neither. The admissible order $\sigma = (23, 24, 34, 14, 13, 12)$ is obtained from $(23, 24, 13, 14, 34, 12) \in [\rho]$ by a packet flip of $P(134) = \{13, 14, 34\}$ from lex to antilex. Thus, $[\rho] \lessdot [\sigma]$ in $\B{4}{2}$. The Hasse diagram of $\B{4}{2}$ is depicted on the left in \cref{fig:b-4-2} with the corresponding reversal sets on the right, ordered by single step inclusion. 
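The commutation class and reversal set in this example can also be recovered mechanically. The following Python sketch (the helper names are ours; subsets are encoded as increasing tuples) generates the commutation class of $\rho$ by repeatedly transposing adjacent commuting entries and recomputes $\Rev(\rho)$.
\begin{verbatim}
from itertools import combinations

def packet(X):
    # P(X): all subsets of X of size |X| - 1, as increasing tuples.
    return {X[:i] + X[i + 1:] for i in range(len(X))}

def commute(A, B):
    return not (packet(A) & packet(B))

def commutation_class(rho):
    # All total orders reachable from rho by swapping adjacent commuting entries.
    seen, stack = {rho}, [rho]
    while stack:
        tau = stack.pop()
        for i in range(len(tau) - 1):
            if commute(tau[i], tau[i + 1]):
                new = tau[:i] + (tau[i + 1], tau[i]) + tau[i + 2:]
                if new not in seen:
                    seen.add(new)
                    stack.append(new)
    return seen

def reversal_set(rho, n, k):
    # Rev(rho): (k+1)-sets whose packet appears in antilex (reverse lex) order in rho.
    rev = set()
    for X in combinations(range(1, n + 1), k + 1):
        restriction = [Y for Y in rho if set(Y) <= set(X)]
        if restriction == sorted(restriction, reverse=True):
            rev.add(X)
    return rev

rho = ((2, 3), (1, 3), (2, 4), (1, 4), (1, 2), (3, 4))
assert len(commutation_class(rho)) == 4                    # the four orders listed above
assert reversal_set(rho, 4, 2) == {(1, 2, 3), (1, 2, 4)}   # Rev(rho) = {123, 124}
\end{verbatim}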
\begin{figure}[!ht] \centering \scalebox{1.0}{ \begin{tikzpicture}[every node/.style={font=\small}] \node (a1) at (0, 2.5) {$[(34, 24, 23, 14, 13, 12)]$}; \node (a2) at (-2,1.25) {$[(23, 24, 34, 14, 13, 12)]$}; \node (a3) at (2,1.25) {$[(34, 24, 14, 12, 13, 23)]$}; \node (a4) at (-2,0) {$[(23, 13, 24, 14, 12, 34)]$}; \node (a5) at (2,0) {$[(12, 34, 14, 13, 24, 23)]$}; \node (a6) at (-2,-1.25) {$[(23, 13, 12, 14, 24, 34)]$}; \node (a7) at (2,-1.25) {$[(12, 13, 14, 34, 24, 23)]$}; \node (a8) at (0,-2.5) {$[(12, 13, 14, 23, 24, 34)]$}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a6); \draw (a5) -- (a7); \draw (a6) -- (a8); \draw (a7) -- (a8); \end{tikzpicture} } \scalebox{1.0}{ \begin{tikzpicture}[every node/.style={font=\small}] \node (a1) at (0, 2.5) {$\{123, 124, 134, 234\}$}; \node (a2) at (-2,1.25) {$\{123, 124, 134\}$}; \node (a3) at (2,1.25) {$\{124, 134, 234\}$}; \node (a4) at (-2,0) {$\{123, 124\}$}; \node (a5) at (2,0) {$\{134, 234\}$}; \node (a6) at (-2,-1.25) {$\{123\}$}; \node (a7) at (2,-1.25) {$\{234\}$}; \node (a8) at (0,-2.5) {$\varnothing$}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a6); \draw (a5) -- (a7); \draw (a6) -- (a8); \draw (a7) -- (a8); \end{tikzpicture} } \caption{The partial orders $\B{4}{2}$ on the left and $\C{4}{3}$ on the right.} \label{fig:b-4-2} \end{figure} \end{example} \begin{definition}[\cite{ziegler93}] For integers $1 \le k \le n$, a subset $I \subseteq \binom{[n]}{k}$ is \defn{consistent} if for every $X \in \binom{[n]}{k+1}$, the intersection $P(X) \cap I$ is either a prefix or a suffix of $P(X)$ in lex order. The poset of all consistent subsets of $\binom{[n]}{k}$ ordered by single step inclusion is denoted $\boldsymbol{\C{n}{k}}$. \end{definition} \begin{example} One can check that the reversal set $\Rev(\rho)$ in \eqref{equation:example-reversal-set} is a consistent subset in $\C{5}{3}$ by considering $\Rev(\rho) \cap P(X)$ for all $X \in \binom{[5]}{4}$. For example, the intersection $\Rev(\rho) \cap P(1234) = (123, 124)$ is a prefix of $P(1234)$ in lex order, and the intersection $\Rev(\rho) \cap P(1345) = (145, 345)$ is a suffix of $P(1345)$ in lex order. The remaining 3 subsets are left to the reader. \end{example} Total orders on $\binom{[n]}{1}$ are easily identified with permutations in $\S_n$. Under this identification, each permutation is in its own commutation class, and packet flips correspond to adjacent transpositions. Furthermore, reversal sets of total orders of $\binom{[n]}{1}$ correspond to inversion sets of permutations in $\S_n$, which are the consistent subsets of $\binom{[n]}{2}$. One can check from the definitions that $\B{n}{1}$ is isomorphic to $\S_n$ under the weak order. By \cref{lemma:weak_order_isomorphic_to_single_step_inclusion} and \cref{lemma:inversion_set_characterization}, $\C{n}{2}$ is also isomorphic to $\S_n$ under the weak order, and hence $\B{n}{1}$ and $\C{n}{2}$ are isomorphic as posets. This isomorphism motivated the following theorem due to Ziegler. \begin{theorem}[{\cite[Theorem~4.1]{ziegler93}}] \label{theorem:higher-bruhat-order-isomorphism} For integers $1 \le k < n$, the map $\reviso{n}{k}$ is a poset isomorphism between $\B{n}{k}$ and $\C{n}{k+1}$. \end{theorem} \begin{example} The Hasse diagram of $\C{4}{3}$ is drawn on the right in \cref{fig:b-4-2}. 
Per \cref{theorem:higher-bruhat-order-isomorphism}, $\B{4}{2}$ and $\C{4}{3}$ are isomorphic as posets, with the isomorphism sending $[\rho] \mapsto \Rev([\rho])$, which can be seen from the figure. \end{example} \subsection{Totally Symmetric Plane Partitions} For positive integers $r$, $s$, and $t$, a \defn{plane partition} that fits in an $r \times s \times t$ box is a subset $T$ of the Cartesian product $[r] \times [s] \times [t]$ such that, if $(x_1, x_2, x_3) \in T$, then $[x_1] \times [x_2] \times [x_3] \subseteq T$. For plane partitions, the inclusion partial order and single step inclusion partial order are isomorphic. The number of plane partitions that fit inside an $n \times n \times n$ box is given by a celebrated product formula due to MacMahon \cite{macmahon-1960} \begin{equation} \label{equation:macmahon-product-formula} \prod_{1 \le i,j,k \le n} \frac{i+j+k-1}{i+j+k-2}. \end{equation} \begin{definition} For a positive integer $n$, a \defn{totally symmetric plane partition} (TSPP) that fits in an $n \times n \times n$ box is a plane partition $T \subseteq [n]^3$ such that, for all $\sigma \in \mathfrak{S}_3$, if $(x_1,x_2,x_3) \in T$, then $(x_{\sigma(1)}, x_{\sigma(2)}, x_{\sigma(3)}) \in T$. \end{definition} Let $\T_n$ denote the set of all TSPPs that fit in an $n \times n \times n$ box. The number of TSPPs $|\T_n|$ is given by a product formula analogous to \eqref{equation:macmahon-product-formula} due to Stembridge \cite{stembridge95} \begin{equation} \label{equation:stembridge-product-formula} |\T_n| = \prod_{1 \le i \le j \le k \le n} \frac{i+j+k-1}{i+j+k-2}. \end{equation} Note that the two product formulas differ only in that $i,j,k$ are weakly increasing in \eqref{equation:stembridge-product-formula}. \begin{example} \label{example:TSPP} The set $T = \{(1,1,1), (1,1,2), (1,1,3), (1,2,1), (1,3,1), (2,1,1), (3,1,1)\}$ is a totally symmetric plane partition that fits in a $3 \times 3 \times 3$ box. One can visualize $T$ as a set of boxes in $\mathbb{R}^3$ as in \cref{figure:TSPP example}. \begin{figure}[!ht] \centering \begin{tikzpicture}[scale=0.8] \planepartition{{3,1,1},{1},{1}} \end{tikzpicture} \caption{The TSPP in \cref{example:TSPP}.} \label{figure:TSPP example} \end{figure} \end{example} \section{Dual Higher Bruhat Orders} \label{section:dual-orders} In \cref{subsection:higher-bruhat-order-intro}, the definitions of admissible orders in $\A{n}{k}$ or consistent sets in $\C{n}{k}$ involved packets $P(X)$ for sets $X \in \binom{[n]}{k+1}$ of cardinality $k+1$. By considering sets $X \in \binom{[n]}{k-1}$ of cardinality $k-1$, one is led to develop the notion of the dual higher Bruhat order. In this section, unless otherwise specified, $k$ and $n$ are fixed positive integers that satisfy $1 \le k \le n$. \begin{definition} For $X \in \binom{[n]}{k}$, the \defn{copacket} of $X$ is $P^*_n(X) = \{X \cup \{i\} : i \in [n] \setminus X\}$. \end{definition} Unlike $P(X)$, the copacket is dependent on the ambient set $[n]$. When it is clear from context, the subscript $n$ will be suppressed, so that $P^*(X)$ denotes the copacket of $X$ with respect to $[n]$. The $i$th element of $P^*(X)$ in lex order will be written as $X^i$. \begin{lemma} \label{lemma:commute-cocommute-coincide} For $X,Y \in \binom{[n]}{k}$, $P(X) \cap P(Y) = \varnothing$ if and only if $P^*(X) \cap P^*(Y) = \varnothing$.
\end{lemma} \begin{proof} It suffices to note that $P(X) \cap P(Y) = \varnothing$ and $P^*(X) \cap P^*(Y) = \varnothing$ are both equivalent to the condition $|X \setminus Y| > 1$ and $|Y \setminus X| > 1$. \end{proof} Recall that two elements $X,Y \in \binom{[n]}{k}$ commute if $P(X) \cap P(Y) = \varnothing$. It would be natural to say that $X,Y$ cocommute if $P^*(X) \cap P^*(Y) = \varnothing$, but \cref{lemma:commute-cocommute-coincide} implies that $X,Y$ cocommute if and only if $X,Y$ commute. Henceforth, even in the context of duality, elements will be said to commute since the definitions of cocommutation and commutation coincide. \begin{definition} \label{definition:coadmissible-order} A total order $\rho$ on $\binom{[n]}{k}$ is \defn{coadmissible} if the restriction $\rho|_{P^*(X)}$ is the lex or antilex order on $P^*(X)$ for every $X \in \binom{[n]}{k-1}$. The set of all coadmissible orders on $\binom{[n]}{k}$ is denoted $\mathbf{\Astar{n}{k}}$. The commutation class of a coadmissible order $\rho$ is written $\boldsymbol{[\rho]}$. \end{definition} It is not difficult to see that if $\rho \in \Astar{n}{k}$ is a coadmissible order, then transposing adjacent elements that commute yields a new coadmissible order. For $X \in \binom{[n]}{k-1}$, if the copacket $P^*(X) = (X^1, X^2, \ldots, X^{n-(k-1)})$ occurs as a saturated chain in $\rho$, then reversing the order of $P^*(X)$ in $\rho$ yields a new coadmissible order said to be obtained by a \defn{copacket flip of $P^*(X)$ from lex to antilex}. \begin{definition} The \defn{dual higher Bruhat order} $\Bstar{n}{k}$ is the partial order on commutation classes of elements in $\Astar{n}{k}$ obtained by taking the transitive closure of the cover relations where $[\rho] \lessdot [\sigma]$ if and only if there exist $\rho' \in [\rho]$ and $\sigma' \in [\sigma]$ such that $\sigma'$ is obtained from $\rho'$ by a copacket flip from lex order to antilex order. \end{definition} \begin{definition} \label{definition:coreversal-set} The \defn{coreversal set} of a coadmissible order $\rho \in \Astar{n}{k}$ is the set \[ \coreviso{n}{k}(\rho) = \left\{X \in \binom{[n]}{k-1} : \text{$\rho|_{P^*(X)}$ is in antilex order}\right\}. \] When $n$ and $k$ are clear from context, the subscript is omitted in $\coreviso{n}{k}$. \end{definition} \begin{definition} A set $I \subseteq \binom{[n]}{k}$ is \defn{coconsistent} if for all $X \in \binom{[n]}{k-1}$, $P^*(X) \cap I$ is a prefix or suffix of $P^*(X)$ in lex order. The poset of coconsistent subsets of $\binom{[n]}{k}$ partially ordered by single step inclusion is denoted $\Cstar{n}{k}$. \end{definition} \begin{example} For $k = 2$ and $n = 4$, consider the total order $\rho = (12, 34, 23, 13, 24, 14)$ on $\binom{[4]}{2}$. The restriction of $\rho$ to the copacket $P^*(\{3\}) = \{13, 23, 34\}$ is the antilex order $(34, 23, 13)$. One can check that the restrictions of $\rho$ to the other copackets $P^*(\{1\})$, $P^*(\{2\})$, and $P^*(\{4\})$ are all either the lex order or the antilex order. Thus, $\rho$ is a coadmissible order in $\Astar{4}{2}$. The coreversal set of $\rho$ is $\Corev(\rho) = \{\{3\}, \{4\}\}$. Observe that $\Corev(\rho)$ is a coconsistent set in $\Cstar{4}{1}$ since the intersection $\Corev(\rho) \cap P^*(\varnothing)$ is $\{\{3\},\{4\}\}$, which is a suffix of the lex order on $P^*(\varnothing)$. This is the only intersection one needs to check since $\binom{[4]}{0} = \{\varnothing\}$. \end{example} Recall that by \cref{theorem:higher-bruhat-order-isomorphism}, the map $\reviso{n}{k}$ is a well-defined isomorphism between $\B{n}{k}$ and $\C{n}{k+1}$.
Before proving \cref{theorem:fundamental-duality}, the maps $\beta_{n,k}$ and $\gamma_{n,k}$ need to be defined. The proof that $\coreviso{n}{k}$, $\beta_{n,k}$, and $\gamma_{n,k}$ are well-defined is deferred to \cref{lemma:well-defined-isomorphisms}. Define the maps \begin{equation} \label{equation:beta_gamma_maps} \begin{aligned} \beta_{n,k}&: \B{n}{k} \to \Bstar{n}{n-k} &&\text{ where } [(\rho_1, \ldots, \rho_{\binom{n}{k}})] \mapsto [([n] \setminus \rho_{\binom{n}{k}}, \ldots, [n] \setminus \rho_1)],\\ \gamma_{n,k}&: \C{n}{k} \to \Cstar{n}{n-k} &&\text{ where }I \mapsto \{[n] \setminus X : X \in I\}.\\ \end{aligned} \end{equation} For a commutation class $[\rho] \in \B{n}{k}$, the notation $[\rho]^*$ is used to denote $\beta_{n,k}([\rho]) \in \Bstar{n}{n-k}$. Similarly, for a consistent set $I \in \C{n}{k}$, the notation $I^*$ is used to denote $\gamma_{n,k}(I) \in \Cstar{n}{n-k}$. \begin{example} Let $\rho \in \A{5}{2}$ be the admissible order in \eqref{equation:example-admissible-order}, with reversal set in \eqref{equation:example-reversal-set}. The commutation class $\beta_{5,2}([\rho])$ is \begin{equation} \label{equation:example-coadmissible-order} \beta_{5,2}([\rho]) = [(345, 125, 235, 124, 234, 245, 123, 134, 135, 145)] \in \Bstar{5}{3}, \end{equation} and the coreversal set $\coreviso{5}{3}(\beta_{5,2}([\rho]))$ is \begin{equation} \coreviso{5}{3}(\beta_{5,2}([\rho])) = \{45, 35, 34, 23, 12\}. \end{equation} One can check that the set $\gamma_{5,3}(\Rev([\rho]))$ is coconsistent and equal to $\coreviso{5}{3}(\beta_{5,2}([\rho]))$. \end{example} \begin{example} \label{example:b-4-2-and-dual} The map $\beta_{4,2}$ is an isomorphism between $\B{4}{2}$ and $\Bstar{4}{2}$. The Hasse diagram of $\B{4}{2}$ is on the left in \cref{fig:b-4-2} and the Hasse diagram of $\Bstar{4}{2}$ is on the left in \cref{figure:dual-higher-bruhat-orders}. The map $\gamma_{4,3}$ is an isomorphism between $\C{4}{3}$ and $\Cstar{4}{1}$. The Hasse diagram of $\C{4}{3}$ is on the right in \cref{fig:b-4-2}, and the Hasse diagram of $\Cstar{4}{1}$ is on the right in \cref{figure:dual-higher-bruhat-orders}. Note that although the elements of $\B{4}{2}$ and $\Bstar{4}{2}$ are both commutation classes of total orders on $\binom{[4]}{2}$, some total orders such as $(23, 13, 24, 14, 12, 34)$ are admissible but not coadmissible, and some total orders such as $(12, 34, 23, 13, 24, 14)$ are coadmissible but not admissible.
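These two claims can be verified mechanically; the following Python sketch (the helper names are ours, and subsets are encoded as increasing tuples) tests admissibility and coadmissibility of both orders.
\begin{verbatim}
from itertools import combinations

def lex_or_antilex(rho, family):
    # Is the restriction of rho to the given family of subsets lex or antilex?
    restriction = [Y for Y in rho if Y in family]
    return (restriction == sorted(restriction)
            or restriction == sorted(restriction, reverse=True))

def is_admissible(rho, n, k):
    return all(lex_or_antilex(rho, {X[:i] + X[i + 1:] for i in range(k + 1)})
               for X in combinations(range(1, n + 1), k + 1))

def is_coadmissible(rho, n, k):
    return all(lex_or_antilex(rho, {tuple(sorted(X + (i,)))
                                    for i in range(1, n + 1) if i not in X})
               for X in combinations(range(1, n + 1), k - 1))

rho = ((2, 3), (1, 3), (2, 4), (1, 4), (1, 2), (3, 4))
tau = ((1, 2), (3, 4), (2, 3), (1, 3), (2, 4), (1, 4))
assert is_admissible(rho, 4, 2) and not is_coadmissible(rho, 4, 2)
assert is_coadmissible(tau, 4, 2) and not is_admissible(tau, 4, 2)
\end{verbatim}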
\begin{figure}[!ht] \centering \scalebox{1.0}{ \begin{tikzpicture}[every node/.style={font=\small}] \node (b1) at (0, 2.5) {$[(34, 24, 23, 14, 13, 12)]$}; \node (b2) at (-2,1.25) {$[(34, 24, 23, 12, 13, 14)]$}; \node (b3) at (2,1.25) {$[(14, 24, 34, 23, 13, 12)]$}; \node (b4) at (-2,0) {$[(12,34,23,13,24,14)]$}; \node (b5) at (2,0) {$[(14, 13, 24, 23, 12, 34)]$}; \node (b6) at (-2,-1.25) {$[(12, 13, 23, 34, 24, 14)]$}; \node (b7) at (2,-1.25) {$[(14, 13, 12, 23, 24, 34)]$}; \node (b8) at (0,-2.5) {$[(12, 13, 14, 23, 24, 34)]$}; \draw (b1) -- (b2); \draw (b1) -- (b3); \draw (b2) -- (b4); \draw (b3) -- (b5); \draw (b4) -- (b6); \draw (b5) -- (b7); \draw (b6) -- (b8); \draw (b7) -- (b8); \end{tikzpicture} } \scalebox{1.0}{ \begin{tikzpicture}[every node/.style={font=\small}] \node (a1) at (0, 2.5) {$\{\{1\},\{2\},\{3\},\{4\}\}$}; \node (a2) at (-2,1.25) {$\{\{2\},\{3\},\{4\}\}$}; \node (a3) at (2,1.25) {$\{\{1\},\{2\},\{3\}\}$}; \node (a4) at (-2,0) {$\{\{3\},\{4\}\}$}; \node (a5) at (2,0) {$\{\{1\},\{2\}\}$}; \node (a6) at (-2,-1.25) {$\{\{4\}\}$}; \node (a7) at (2,-1.25) {$\{\{1\}\}$}; \node (a8) at (0,-2.5) {$\varnothing$}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a6); \draw (a5) -- (a7); \draw (a6) -- (a8); \draw (a7) -- (a8); \end{tikzpicture} } \caption{The partial orders $\Bstar{4}{2}$ on the left and $\Cstar{4}{1}$ on the right.} \label{figure:dual-higher-bruhat-orders} \end{figure} \begin{lemma} \label{lemma:well-defined-isomorphisms} The maps $\coreviso{n}{k}$, $\beta_{n,k}$, and $\gamma_{n,k}$ are well-defined. \end{lemma} \begin{proof} First, consider $\coreviso{n}{k}$. For a commutation class $[\rho] \in \Bstar{n}{k}$, one can readily check from \cref{definition:coadmissible-order} and \cref{definition:coreversal-set} that $\coreviso{n}{k}(\rho') = \coreviso{n}{k}(\rho)$ for all $\rho' \in [\rho]$. Thus, the coreversal set is well-defined on commutation classes. It remains to check that the image of $\coreviso{n}{k}$ lies in $\Cstar{n}{k-1}$. To see this, let $[\rho] \in \Bstar{n}{k}$, $X \in \binom{[n]}{k-2}$, and $[n] \setminus X = [y_1 < y_2 < \cdots < y_{n-(k-2)}]$. For integers $r, s, t \in [n-(k-2)]$ with $r < s < t$, one can write $X^r$, $X^s$ and $X^t$ as \begin{align*} X^r &= X \cup \{y_r\},\\ X^s &= X \cup \{y_s\},\text{ and}\\ X^t &= X \cup \{y_t\}. \end{align*} Now suppose $X^r, X^t \in \coreviso{n}{k}([\rho])$. Then the restrictions of $\rho$ to $P^*(X^r)$ and $P^*(X^t)$ are the antilex orders on the respective copackets. Therefore, in $\rho$, \begin{equation} X^s \cup \{y_t\} = X^t \cup \{y_s\} < X^t \cup \{y_r\} = X^r \cup \{y_t\} < X^r \cup \{y_s\} = X^s \cup \{y_r\}. \end{equation} Since $X^s \cup \{y_t\} < X^s \cup \{y_r\}$ in $\rho$, it follows that the restriction of $\rho$ to $P^*(X^s)$ is the antilex order. Thus, if $X^r, X^t \in \coreviso{n}{k}([\rho])$ then $X^s \in \coreviso{n}{k}([\rho])$ also. A similar argument shows that if $X^r, X^t \not\in \coreviso{n}{k}([\rho])$, then $X^s \not\in \coreviso{n}{k}([\rho])$. Therefore, $\coreviso{n}{k}([\rho]) \subseteq \binom{[n]}{k-1}$ is coconsistent, so $\coreviso{n}{k}([\rho]) \in \Cstar{n}{k-1}$. Next, consider $\beta_{n,k}$. For a total order $\rho$ on $\binom{[n]}{k}$, the map sending $(\rho_1, \ldots, \rho_{\binom{n}{k}})$ to $([n] \setminus \rho_{\binom{n}{k}}, \ldots, [n] \setminus \rho_1)$ complements and reverses $\rho$.
Thus, if $\rho, \rho' \in \A{n}{k}$ differ by a commutation move, then the total orders on $\binom{[n]}{n-k}$ obtained by complementing and reversing $\rho$ and $\rho'$ also differ by a commutation move. Therefore, the commutation class $\beta_{n,k}([\rho])$ as defined in \cref{equation:beta_gamma_maps} is independent of the choice of representative in $[\rho]$. Furthermore, if $\rho$ is admissible, then for any $X \in \binom{[n]}{k+1}$, the restriction $\rho|_{P(X)}$ is the lex (resp. antilex) order on $P(X)$ so complementation and reversal of the restriction yields the lex (resp. antilex) order on $P^*([n] \setminus X)$. Thus, if $\rho$ is an admissible order, then $([n] \setminus \rho_{\binom{n}{k}}, \ldots, [n] \setminus \rho_1)$ is a coadmissible order. Therefore, $\beta_{n,k}([\rho]) \in \Bstar{n}{n-k}$ is a commutation class of coadmissible orders. Finally, consider $\gamma_{n,k}$. For $I \in \C{n}{k}$ and $X \in \binom{[n]}{k+1}$, the intersection $P(X) \cap I$ is a prefix or suffix of $P(X)$ in lex order. One can check that this implies that $P^*([n] \setminus X) \cap \gamma_{n,k}(I)$ is a prefix or suffix of $P^*([n] \setminus X)$ in lex order. Since the intersection of $\gamma_{n,k}(I)$ with any copacket $P^*([n] \setminus X)$ is a prefix or suffix, it follows that $\gamma_{n,k}(I) \in \Cstar{n}{n-k}$. \end{proof} \begin{proof}[Proof of \cref{theorem:fundamental-duality}] It suffices to prove that $\beta_{n,k}$ and $\gamma_{n,k}$ are poset isomorphisms and that $\coreviso{n}{n-k} = \gamma_{n,k+1} \circ \reviso{n}{k} \circ \beta_{n,k}^{-1}$. It then follows that the diagram commutes and that $\coreviso{n}{n-k}$ is also a poset isomorphism. Let $[\rho], [\sigma] \in \B{n}{k}$, and suppose $[\rho] \lessdot [\sigma]$, where $\sigma$ is obtained from $\rho$ by a packet flip of $P(X)$ from lex to antilex for some $X \in \binom{[n]}{k+1}$. The packet $P(X)$ bijects with the copacket $P^*([n] \setminus X)$ by taking complements \begin{equation} \label{equation:packet-copacket-bijection} X_i \leftrightarrow [n] \setminus X_i = ([n] \setminus X)^i. \end{equation} Under \eqref{equation:packet-copacket-bijection}, the packet $P(X)$ in lex order bijects to $P^*([n] \setminus X)$ in antilex order. Since $\beta_{n,k}$ complements each set and then reverses the total order, $\beta_{n,k}([\sigma])$ is obtained from $\beta_{n,k}([\rho])$ by a copacket flip of $P^*([n] \setminus X)$ from lex to antilex. Thus, $\beta_{n,k}([\rho]) \lessdot \beta_{n,k}([\sigma])$ in $\Bstar{n}{n-k}$. The inverse map $\beta_{n,k}^{-1}$ is also defined by complementation and reversal, sending \begin{equation} [(\rho_1, \ldots, \rho_{\binom{n}{n-k}})] \mapsto [([n] \setminus \rho_{\binom{n}{n-k}}, \ldots, [n] \setminus \rho_1)] \end{equation} for $[\rho] = [(\rho_1, \ldots, \rho_{\binom{n}{n-k}})] \in \Bstar{n}{n-k}$. By a similar argument, $\beta_{n,k}^{-1}$ also preserves covering relations. Thus, $\beta_{n,k}$ is a poset isomorphism between $\B{n}{k}$ and $\Bstar{n}{n-k}$. Next, to show that $\gamma_{n,k}$ is a poset isomorphism, Let $I,J \in \C{n}{k}$ such that $I \lessdot J$. Since the partial order on $\C{n}{k}$ is single step inclusion, $J = I \cup \{X\}$ for some $X \in \binom{[n]}{k} \setminus I$. Then $\gamma_{n,k}(J) = \gamma_{n,k}(I) \cup \{[n] \setminus X\}$. The partial order on $\Cstar{n}{n-k}$ is also single step inclusion, so $\gamma_{n,k}(I) \lessdot \gamma_{n,k}(J)$ in $\Cstar{n}{n-k}$. 
The inverse map $\gamma_{n,k}^{-1}$ is defined by sending $I \in \Cstar{n}{n-k}$ to $\{[n] \setminus X : X \in I\}$ and also preserves covering relations by a similar argument. Thus, $\gamma_{n,k}$ is a poset isomorphism between $\C{n}{k}$ and $\Cstar{n}{n-k}$. Finally, to prove that $\coreviso{n}{n-k} = \gamma_{n,k+1} \circ \reviso{n}{k} \circ \beta_{n,k}^{-1}$, let $[\rho] \in \Bstar{n}{n-k}$ and $X \in \binom{[n]}{n-k-1}$. Then the following chain of logical equivalences holds: \begin{align*} X \in \coreviso{n}{n-k}([\rho]) &\iff \text{$\rho|_{P^*(X)}$ is antilex} & \text{by definition of $\coreviso{n}{n-k}$}\\ &\iff \text{$\beta_{n,k}^{-1}([\rho])|_{P([n] \setminus X)}$ is antilex} & \text{by \eqref{equation:packet-copacket-bijection}}\\ &\iff [n] \setminus X \in (\reviso{n}{k} \circ \beta_{n,k}^{-1})([\rho]) & \text{by definition of $\reviso{n}{k}$}\\ &\iff X \in (\gamma_{n,k+1} \circ \reviso{n}{k} \circ \beta_{n,k}^{-1})([\rho]) & \text{by definition of $\gamma_{n,k}$.} \end{align*} Therefore, $\coreviso{n}{n-k} = \gamma_{n,k+1} \circ \reviso{n}{k} \circ \beta_{n,k}^{-1}$ and the diagram commutes. \end{proof} \section{Deletion and Contraction} Throughout this section, unless otherwise specified, let $k$ and $n$ be fixed integers such that $1 \le k \le n$ and $n \ge 2$. In this section, deletion and contraction are defined on total orders and subsets of $\binom{[n]}{k}$. For subsets of $\binom{[n]}{k}$, the definitions of deletion and contraction agrees with the standard definitions in matroid theory. A new poset $\P_I$ is associated to each consistent set $I \in \C{n}{k}$ and used in \cref{theorem:contraction-deletion-equation} to characterize the possible deletions and contractions of consistent sets. \begin{definition} For a set $I \subseteq \binom{[n]}{k}$, the \defn{deletion} of $I$ is \begin{equation} \boldsymbol{I \delete n} = I \cap \binom{[n-1]}{k}. \end{equation} The \defn{contraction} of $I$ is \begin{equation} \boldsymbol{I \contract n} = \{X \delete \{n\} : \text{$X \in I$ and $n \in X$}\}. \end{equation} \end{definition} \begin{definition} For a total order $\rho$ on $\binom{[n]}{k}$, the \defn{deletion} $\boldsymbol{\rho \delete n}$ is the total order obtained by restricting $\rho$ to $\binom{[n-1]}{k}$. The \defn{contraction} $\boldsymbol{\rho \contract n}$ is the total order on $\binom{[n-1]}{k-1}$, where $X < Y$ in $\rho \contract n$ if and only if $X \cup \{n\} < Y \cup \{n\}$ in $\rho$. \end{definition} \begin{lemma} \label{lemma:closed-under-deletion-contraction} The following statements hold. \begin{enumerate}[label=(\arabic*)] \item For all $\rho \in \A{n}{k}$, $\rho \delete n \in \A{n-1}{k}$ and $\rho \contract n \in \A{n-1}{k-1}$. Furthermore, $\Rev(\rho \delete n) = \Rev(\rho) \delete n$ and $\Rev(\rho \contract n) = \Rev(\rho) \contract n$. \item For all $\rho \in \Astar{n}{k}$, $\rho \delete n \in \Astar{n-1}{k}$ and $\rho \contract n \in \Astar{n-1}{k-1}$. Furthermore, $\Corev(\rho \delete n) = \Corev(\rho) \delete n$ and $\Corev(\rho \contract n) = \Corev(\rho) \contract n$. \item For all $I \in \C{n}{k}$, $I \delete n \in \C{n-1}{k}$ and $I \contract n \in \C{n-1}{k-1}$. \item For all $I \in \Cstar{n}{k}$, $I \delete n \in \Cstar{n-1}{k}$ and $I \contract n \in \Cstar{n-1}{k-1}$. \end{enumerate} \end{lemma} \begin{proof} \textbf{Proof of (1):} Let $\rho \in \A{n}{k}$ and $X \in \binom{[n-1]}{k+1}$. Then $(\rho \delete n)|_{P(X)} = \rho|_{P(X)}$. 
Since $\rho$ is admissible, $\rho|_{P(X)}$ is the lex or antilex order on $P(X)$, and hence so is $(\rho \delete n)|_{P(X)}$. Thus, $\rho \delete n \in \A{n-1}{k}$ and $\Rev(\rho \delete n) = \Rev(\rho) \delete n$. Next, let $Y \in \binom{[n-1]}{k}$ and $Z = Y \cup \{n\}$. If $\rho|_{P(Z)}$ is the lex order $(Z_{k+1}, \ldots, Z_1)$ on $P(Z)$, then $(\rho\contract n)|_{P(Y)}$ is the order $(Z_{k+1,k}, Z_{k+1,k-1}, \ldots, Z_{k+1,1})$. Since $Z_{k+1,i} = Y_i$ for $i \in [k]$, the restriction $(\rho\contract n)|_{P(Y)}$ is the lex order on $P(Y)$. Similarly, if $\rho|_{P(Z)}$ is the antilex order on $P(Z)$, then $(\rho\contract n)|_{P(Y)}$ is the antilex order on $P(Y)$. Thus, $\rho / n \in \A{n-1}{k-1}$ and $\Rev(\rho / n) = \Rev(\rho) / n$. \textbf{Proof of (2):} Let $\rho \in \Astar{n}{k}$ and $X \in \binom{[n-1]}{k-1}$. If $\rho|_{P_n^*(X)}$ is the lex order $(X^1, \ldots, X^{n-k+1})$ on $P_n^*(X)$, then $X^{n-k+1} = X \cup \{n\}$ and $(\rho \delete n)|_{P_{n-1}^*(X)}$ is the lex order $(X^1, \ldots, X^{n-k})$ on $P_{n-1}^*(X)$. Similarly, if $\rho|_{P_n^*(X)}$ is the antilex order on $P_n^*(X)$, then $(\rho \delete n)|_{P_{n-1}^*(X)}$ is the antilex order on $P_{n-1}^*(X)$. Thus, $\rho \setminus n \in \Astar{n-1}{k}$ and $\Corev(\rho \setminus n) = \Corev(\rho) \setminus n$. Next, let $Y \in \binom{[n-1]}{k-2}$ and $Z = Y \cup \{n\}$. If $\rho|_{P_n^*(Z)}$ is the lex order $(Z^1, \ldots, Z^{n-k+1})$ on $P_n^*(Z)$, then $(\rho \contract n)|_{P_{n-1}^*(Y)}$ is the lex order $(Y^1, \ldots, Y^{n-k+1})$ on $P_{n-1}^*(Y)$. Similarly, if $\rho|_{P_n^*(Z)}$ is the antilex order on $P_n^*(Z)$, then $(\rho \contract n)|_{P_{n-1}^*(Y)}$ is the antilex order on $P_{n-1}^*(Y)$. Thus, $\rho / n \in \Astar{n-1}{k-1}$ and $\Corev(\rho / n) = \Corev(\rho) / n$. \textbf{Proof of (3):} Let $I \in \C{n}{k}$ and $X \in \binom{[n-1]}{k+1}$. Then $P(X) \cap (I \delete n) = P(X) \cap I$. Since $I$ is consistent, $P(X) \cap I$ is either a prefix or suffix of $P(X)$ in lex order and hence so is $P(X) \cap (I \delete n)$. Thus, $I \setminus n \in \C{n-1}{k}$. Next, let $Y \in \binom{[n-1]}{k}$ and $Z = Y \cup \{n\}$. If $P(Z) \cap I$ is a prefix $(Z_{k+1}, \ldots, Z_{i})$, then $P(Y) \cap (I \contract n)$ is a prefix $(Z_{k+1, k}, \ldots, Z_{k+1,i}) = (Y_k, \ldots, Y_i)$. Similarly, if $P(Z) \cap I$ is a suffix of $P(Z)$ in lex order, then $P(Y) \cap (I \contract n)$ is a suffix of $P(Y)$ in lex order. Thus, $I / n \in \C{n-1}{k-1}$. \textbf{Proof of (4):} Let $I \in \Cstar{n}{k}$ and $X \in \binom{[n-1]}{k-1}$. If $P_n^*(X) \cap I$ is a prefix $(X^1, \ldots, X^i)$ of $P_n^*(X)$, then $P_{n-1}^*(X) \cap (I \delete n)$ is the prefix $(X^1, \ldots, X^{\min(i, n-k)})$ of $P_{n-1}^*(X)$. Similarly, if $P_n^*(X) \cap I$ is a suffix $(X^{n-k+1}, \ldots, X^i)$ of $P_n^*(X)$, then $P_{n-1}^*(X) \cap (I \delete n)$ is a suffix $(X^{n-k}, \ldots, X^i)$ of $P_{n-1}^*(X)$. Thus, $I \delete n \in \Cstar{n-1}{k}$. Next, let $Y \in \binom{[n-1]}{k-2}$ and $Z = Y \cup \{n\}$. If $P_n^*(Z) \cap I$ is a prefix $(Z^1, \ldots, Z^i)$ of $P_n^*(Z)$, then $P_{n-1}^*(Y) \cap (I \contract n)$ is the prefix $(Y^1, \ldots, Y^i)$ of $P_{n-1}^*(Y)$. Similarly, if $P_n^*(Z) \cap I$ is a suffix of $P_n^*(Z)$, then $P_{n-1}^*(Y) \cap (I\contract n)$ is a suffix of $P_{n-1}^*(Y)$. Thus, $I \contract n \in \Cstar{n-1}{k-1}$. \end{proof} A consequence of \cref{lemma:closed-under-deletion-contraction} is that the deletion and contraction of (co)commutation classes of (co)admissible orders are well-defined.
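On concrete data, part (1) of \cref{lemma:closed-under-deletion-contraction} can be checked directly; the following Python sketch (the helper names are ours) verifies $\Rev(\rho \delete 5) = \Rev(\rho) \delete 5$ and $\Rev(\rho \contract 5) = \Rev(\rho) \contract 5$ for the admissible order $\rho$ of \eqref{equation:example-admissible-order}.
\begin{verbatim}
from itertools import combinations

def delete_set(I, n):
    # I \ n: members of I not containing n.
    return {X for X in I if n not in X}

def contract_set(I, n):
    # I / n: members of I containing n, with n removed.
    return {tuple(y for y in X if y != n) for X in I if n in X}

def delete_order(rho, n):
    return tuple(X for X in rho if n not in X)

def contract_order(rho, n):
    return tuple(tuple(y for y in X if y != n) for X in rho if n in X)

def reversal_set(rho, n, k):
    rev = set()
    for X in combinations(range(1, n + 1), k + 1):
        restriction = [Y for Y in rho if set(Y) <= set(X)]
        if restriction == sorted(restriction, reverse=True):   # antilex restriction
            rev.add(X)
    return rev

rho = ((2, 3), (2, 4), (2, 5), (4, 5), (1, 3),
       (1, 5), (3, 5), (1, 4), (3, 4), (1, 2))
assert reversal_set(delete_order(rho, 5), 4, 2) == delete_set(reversal_set(rho, 5, 2), 5)
assert reversal_set(contract_order(rho, 5), 4, 1) == contract_set(reversal_set(rho, 5, 2), 5)
\end{verbatim}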
Accordingly, the deletion of a commutation class $[\rho] \in \B{n}{k}$ is denoted $[\rho] \delete n$, its contraction $[\rho] \contract n$, and similarly for $[\sigma] \in \Bstar{n}{k}$. \begin{lemma} \label{lemma:contraction-deletion-are-dual} The following equations hold, for $[\rho] \in \B{n}{k}$ and $I \in \C{n}{k}$. \begin{enumerate}[label=(\arabic*)] \item $\gamma_{n-1,k-1}(I \contract n) = \gamma_{n,k}(I) \delete n$, \item $\gamma_{n-1,k}(I \delete n) = \gamma_{n,k}(I) \contract n$, \item $\beta_{n-1,k}([\rho] \delete n) = \beta_{n,k}([\rho]) \contract n$, and \item $\beta_{n-1,k-1}([\rho] \contract n) = \beta_{n,k}([\rho]) \delete n$. \end{enumerate} \end{lemma} \begin{proof} The proofs of (1) and (2) are identical to the standard proofs of the duality of contraction and deletion in matroids. A proof of (3) is given, and the proof of (4) is similar. \textbf{Proof of (3):} By \cref{theorem:fundamental-duality}, it suffices to show that $\Corev(\beta_{n-1,k}([\rho] \setminus n)) = \Corev(\beta_{n,k}([\rho]) / n)$. The two coreversal sets are equal according to the following chain of equalities: \begin{align*} \Corev(\beta_{n-1,k}([\rho] \setminus n)) &= \gamma_{n-1,k+1}(\Rev([\rho] \setminus n)) &\text{by \cref{theorem:fundamental-duality}}\\ &= \gamma_{n-1,k+1}(\Rev([\rho]) \setminus n) &\text{by (1) of \cref{lemma:closed-under-deletion-contraction}}\\ &= \gamma_{n,k+1}(\Rev([\rho]))/n &\text{by (2) of this lemma}\\ &= \Corev(\beta_{n,k}([\rho])) / n &\text{by \cref{theorem:fundamental-duality}}\\ &= \Corev(\beta_{n,k}([\rho]) / n) & \text{by (2) of \cref{lemma:closed-under-deletion-contraction}.} \end{align*} \end{proof} \begin{definition} \label{definition:I-consistent-poset} For $I \in \C{n}{k}$, the \defn{consistent poset} $\P_I$ is the poset on $\binom{[n]}{k-1}$ whose partial order $\prec_I$ is the transitive closure of the relations \begin{enumerate}[label=(\arabic*)] \item $X_i \prec_I X_{i+1}$ for all $X \in I$ and $1 \le i < k$. \item $X_{i+1} \prec_I X_i$ for all $X \in \binom{[n]}{k} \setminus I$ and $1 \le i < k$. \end{enumerate} \end{definition} \begin{remark} For $I \in \Cstar{n}{k}$, one can define an analogous \defn{dual consistent poset} $\P^*_I$. In the dual consistent poset, relation (1) is replaced by $X^{i+1} \prec_I X^i$ for all $X \in I$ and $1 \le i < n-k$ and relation (2) is replaced by $X^i \prec_I X^{i+1}$ for all $X \in \binom{[n]}{k} \setminus I$ and $1 \le i < n-k$. \cref{lemma:upper-order-ideal-is-consistent} and \cref{theorem:contraction-deletion-equation} below also hold for dual consistent posets. \end{remark} The transitive closure of relations (1) and (2) in \cref{definition:I-consistent-poset} is acyclic because any admissible order $\rho \in \A{n}{k}$ satisfies the relations and is a linear extension of $\P_{\Rev(\rho)}$. In fact, the map $\reviso{n}{k}^{-1}: \C{n}{k+1} \to \B{n}{k}$ sends a consistent set $I \in \C{n}{k+1}$ to the commutation class consisting of all linear extensions of $\P_I$. \begin{example} \label{example:consistent-poset} Let $I = \{123, 124\} \in \C{4}{3}$ be the consistent set from \cref{example:total-order-reversal-set}. Then the consistent poset $\P_I$ is the transitive closure of the antilex relations \begin{align*} 23 \prec_I 13 \prec_I 12,\\ 24 \prec_I 14 \prec_I 12, \end{align*} and the lex relations \begin{align*} 13 \prec_I 14 \prec_I 34,\\ 23 \prec_I 24 \prec_I 34. \end{align*} The Hasse diagram of $\P_I$ is shown in \cref{figure:consistent-poset-example}. One can check that the upper and lower order ideals of $\P_I$ are consistent sets in $\C{4}{2}$.
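This check can be carried out exhaustively; the following Python sketch (the helper names are ours) builds the covering relations of $\P_I$, enumerates the upper and lower order ideals, and confirms that each one is a consistent subset of $\binom{[4]}{2}$.
\begin{verbatim}
from itertools import combinations

def consistent(I, n, k):
    # Is I a consistent subset of binom([n], k)?
    # (I must meet every packet in a prefix or suffix of its lex order.)
    for X in combinations(range(1, n + 1), k + 1):
        P = sorted(combinations(X, k))                 # P(X) in lex order
        hits = [Y in I for Y in P]
        m = hits.count(True)
        if not (all(hits[:m]) or all(hits[len(hits) - m:])):
            return False
    return True

def poset_relations(I, n, k):
    # Covering relations of P_I: antilex chains for X in I, lex chains otherwise.
    rels = set()
    for X in combinations(range(1, n + 1), k):
        chain = sorted(combinations(X, k - 1))         # (X_k, ..., X_1) in lex order
        if X in I:
            chain = chain[::-1]                        # X_1, X_2, ..., X_k
        rels |= set(zip(chain, chain[1:]))             # (a, b) means a precedes b in P_I
    return rels

I = {(1, 2, 3), (1, 2, 4)}
ground = list(combinations(range(1, 5), 2))
rels = poset_relations(I, 4, 3)

def upper(J):
    return all(b in J for (a, b) in rels if a in J)

def lower(J):
    return all(a in J for (a, b) in rels if b in J)

ideals = [set(J) for r in range(len(ground) + 1)
          for J in combinations(ground, r) if upper(set(J)) or lower(set(J))]
assert all(consistent(J, 4, 2) for J in ideals)
\end{verbatim}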
\begin{figure}[!ht] \centering \begin{tikzpicture} \node (a1) at (0, 0) {$23$}; \node (a2) at (-1, 1) {$13$}; \node (a3) at (1, 1) {$24$}; \node (a4) at (0, 2) {$14$}; \node (a5) at (-1, 3) {$12$}; \node (a6) at (1, 3) {$34$}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a6); \draw (a4) -- (a5); \draw (a4) -- (a6); \end{tikzpicture} \caption{The Hasse diagram of the consistent poset $\P_{\{123, 124\}}$ in \cref{example:consistent-poset}.} \label{figure:consistent-poset-example} \end{figure} \end{example} \begin{lemma} \label{lemma:upper-order-ideal-is-consistent} Let $I \in \C{n}{k}$. If $J$ is an upper or lower order ideal of $\P_I$, then $J \in \C{n}{k-1}$. \end{lemma} \begin{proof} Let $X \in \binom{[n]}{k}$. If $X \in I$, then (1) in \cref{definition:I-consistent-poset} implies that $X_1 \prec_I \cdots \prec_I X_k$. Otherwise, $X \in \binom{[n]}{k} \setminus I$, so (2) in \cref{definition:I-consistent-poset} implies that $X_k \prec_I \cdots \prec_I X_1$. Thus, the ordered sets $X_1, \ldots, X_{k}$ form a chain in either lex or antilex order in $\P_I$. Since $J$ is an upper or lower order ideal of $\P_I$, the intersection $J \cap P(X)$ is either a prefix or suffix of $P(X)$ in lex order. Therefore, $J$ is consistent. \end{proof} \begin{lemma} \label{theorem:contraction-deletion-equation} The map from $\C{n}{k}$ to $\C{n-1}{k} \times \C{n-1}{k-1}$ that sends $I$ to $(I \delete n, I\contract n)$ is injective, and its image is the set \begin{equation*} \{(S,T) : \text{$T$ is an upper order ideal of $\P_S$}\}. \end{equation*} \end{lemma} \begin{proof} A consistent set $I \in \C{n}{k}$ can be recovered from $(I \delete n, I \contract n)$ by the equation \begin{equation} \label{equation:recover-consistent-from-contraction-deletion} I = (I \delete n) \cup \{X \cup \{n\} : X \in I \contract n\}. \end{equation} Therefore, the map $I \mapsto (I \delete n, I \contract n)$ is injective. To show that $I \contract n$ is an upper order ideal of the consistent poset $\P_{I \delete n}$, suppose $X = [x_1, \ldots, x_k] \in \binom{[n-1]}{k}$. If $X \in I \delete n$, then $P(X)$ forms a chain in antilex order in $\P_{I \delete n}$. Since $X \in I \delete n \subset I$ and $X$ is the lex smallest element in $P(X \cup \{n\})$, the intersection $P(X \cup \{n\}) \cap I$ is a prefix of $P(X \cup \{n\})$ in lex order. It follows that $P(X) \cap (I \contract n)$ is a prefix of $P(X)$ in lex order. Thus, $P(X) \cap (I \contract n)$ is an upper order ideal of the subposet of $\P_{I \delete n}$ restricted to $P(X)$. A similar argument shows that if $X \not\in I \delete n$, then $P(X) \cap (I \contract n)$ is also an upper order ideal of $P(X)$ in $\P_{I \delete n}$. Therefore, for every $I \in \C{n}{k}$, $I \contract n$ is an upper order ideal of $\P_{I \delete n}$. Conversely, suppose $(S,T) \in \C{n-1}{k} \times \C{n-1}{k-1}$ and $T$ is an upper order ideal of $\P_S$. Let $I = S \cup \{X \cup \{n\} : X \in T\}$. Then by \eqref{equation:recover-consistent-from-contraction-deletion}, it suffices to show that $I$ is consistent. Let $X = [x_1, \ldots, x_{k+1}] \in \binom{[n]}{k+1}$. If $x_{k+1} < n$, then $P(X) \cap I = P(X) \cap S$. Since $S$ is consistent, $P(X) \cap S$ is a prefix or suffix of $P(X)$ in lex order. If $x_{k+1} = n$ and $X_{k+1} \in S$, then since $T$ is an upper order ideal of $\P_S$, $P(X_{k+1}) \cap T$ is a prefix of $P(X_{k+1})$ in lex order, and hence $P(X) \cap I$ is a prefix of $P(X)$ in lex order. 
Similarly, if $x_{k+1} = n$ and $X_{k+1} \not\in S$, then $P(X) \cap I$ is a suffix of $P(X)$ in lex order. Thus, $I$ is consistent, completing the proof. \end{proof} \begin{remark} Some of the ideas in \cref{theorem:contraction-deletion-equation} are present in Lemma 2.5 of \cite{ziegler93}. The new characterization in terms of consistent posets is based on joint work with Billey and Liu \cite{billey-chau-liu} and used in this paper to derive asymptotic results in Section 6. \end{remark} \section{Weaving Functions} \label{section:weaving-functions} In this section, a new computational tool called weaving functions is introduced. Weaving functions generalize an encoding of elements in $\B{n}{2}$ by Felsner in \cite{felsner1997} to an encoding of elements in $\B{n}{k}$ for integers $k$ and $n$ satisfying $1 \le k \le n$. Let $\{0,1\}^*$ denote the set of words in the alphabet $\{0,1\}$. The empty word is denoted $\varnothing$, and the concatenation of words $u,v \in \{0,1\}^*$ is denoted $uv$. \begin{definition} For integers $k,n$ with $1 \le k \le n$, the \defn{prefix-suffix indicator function} associated with $Y \in \binom{[n]}{k}$ is the function $\boldsymbol{w_Y}: \binom{[n]}{k-1} \to \{0,1\}^*$ defined by \begin{equation} w_{Y}(X) = \begin{cases} 0 & \text{if $X = Y_k$,}\\ 1 & \text{if $X = Y_1$,}\\ \varnothing & \text{otherwise.} \end{cases} \end{equation} The \defn{weaving function} associated with $\rho = (\rho_1, \ldots, \rho_{\binom{n}{k}}) \in \A{n}{k}$ is the function $\boldsymbol{W_\rho}: \binom{[n]}{k-1} \to \{0,1\}^*$ defined by \begin{equation} W_\rho(X) = w_{\rho_1}(X)w_{\rho_2}(X) \cdots w_{\rho_{\binom{n}{k}}}(X). \end{equation} \end{definition} \begin{example} Let $\rho = (23,24,25,45,13,15,35,14,34,12) \in \A{5}{2}$ be the admissible order from \eqref{equation:example-admissible-order}. Then \begin{align*} W_\rho(1) &= 0000,\\ W_\rho(2) &= 0001,\\ W_\rho(3) &= 1100,\\ W_\rho(4) &= 1011,\\ W_\rho(5) &= 1111. \end{align*} Let $\sigma = (123, 124, 125, 134, 135, 145, 234, 235, 245, 345) \in \A{5}{3}$. Some values of $W_\sigma$ are $W_\sigma(23) = 100$, $W_\sigma(24) = 10$ and $W_\sigma(34) = 110$. All other values of $W_\sigma$ consist solely of zeroes or solely of ones. Observe that all the values of $W_\rho$ are of the same length, but the same is not true of the values of $W_\sigma$. \end{example} \begin{lemma} \label{lemma:k-2-word-length} Let $n$ be an integer with $n \ge 2$. Then for $\rho \in \A{n}{2}$ and $i \in [n]$, $W_\rho(i)$ is a word of length $n-1$. \end{lemma} \begin{proof} For $[p,q] \in \binom{[n]}{2}$, $w_{[p,q]}(i) \neq \varnothing$ if and only if either $p = i$ or $q = i$. There are exactly $n-1$ sets in $\binom{[n]}{2}$ that contain $i$, so $W_\rho(i)$ is a word of length $n-1$. \end{proof} A natural question one can ask is which admissible orders in $\A{n}{k}$ have the same weaving functions. \cref{theorem:weaving-functions} below implies that two admissible orders have the same weaving function if and only if they are commutation equivalent. As a consequence, weaving functions are well-defined on commutation classes in $\B{n}{k}$ and the map $[\rho] \mapsto W_{\rho}$ is injective. \begin{lemma} \label{lem:weaving-fuctions-helper} Let $k,n$ be integers such that $1 \le k \le n$ and $\rho = (\rho_1, \ldots, \rho_{\binom{n}{k}})$, $\sigma = (\sigma_1, \ldots, \sigma_{\binom{n}{k}})$ be two admissible orders in $\A{n}{k}$ such that $W_\rho = W_\sigma$.
\begin{lemma} \label{lemma:k-2-word-length} Let $n$ be an integer with $n \ge 2$. Then for $\rho \in \A{n}{2}$ and $i \in [n]$, $W_\rho(i)$ is a word of length $n-1$. \end{lemma} \begin{proof} For $[p,q] \in \binom{[n]}{2}$, $w_{[p,q]}(i) \neq \varnothing$ if and only if either $p = i$ or $q = i$. There are exactly $n-1$ sets in $\binom{[n]}{2}$ that contain $i$, so $W_\rho(i)$ is a word of length $n-1$. \end{proof} A natural question one can ask is which admissible orders in $\A{n}{k}$ have the same weaving functions. \cref{theorem:weaving-functions} below implies that two admissible orders have the same weaving function if and only if they are commutation equivalent. As a consequence, weaving functions are well-defined on commutation classes in $\B{n}{k}$ and the map $[\rho] \mapsto W_{\rho}$ is injective. \begin{lemma} \label{lem:weaving-fuctions-helper} Let $k,n$ be integers such that $1 \le k \le n$, and let $\rho = (\rho_1, \ldots, \rho_{\binom{n}{k}})$, $\sigma = (\sigma_1, \ldots, \sigma_{\binom{n}{k}})$ be two admissible orders in $\A{n}{k}$ such that $W_\rho = W_\sigma$. If $(\rho_1, \ldots, \rho_{i-1}) = (\sigma_1, \ldots, \sigma_{i-1})$ for some integer $i$ such that $1 \le i \le \binom{n}{k}$, then there exists $\sigma' \in \A{n}{k}$ such that $\sigma \sim \sigma'$ and $(\rho_1, \ldots, \rho_i) = (\sigma'_1, \ldots, \sigma'_i)$. \end{lemma} \begin{proof} Let $j \ge i$ be the index in $\sigma$ such that $\rho_i = \sigma_j$. Suppose for contradiction that $\sigma_j$ does not commute with some element in $\{\sigma_i, \ldots, \sigma_{j-1}\}$. Then let $i'$ be the least integer such that $i \le i' < j$ and $\sigma_{i'}$ does not commute with $\sigma_j$, and let $j'$ be the index in $\rho$ such that $\rho_{j'} = \sigma_{i'}$. Let $X = \rho_i = \sigma_j$ and $Y = \rho_{j'} = \sigma_{i'}$. Since $X$ and $Y$ do not commute, there exists some $Z \in \binom{[n]}{k+1}$ such that $X,Y \in P(Z)$. By interchanging the roles of $\rho$ and $\sigma$ if necessary, one may assume that $X$ is lexicographically smaller than $Y$. Observe that $X$ occurs before $Y$ in $\rho$, whereas $Y$ occurs before $X$ in $\sigma$. Since $\rho$ and $\sigma$ are admissible orders, the restrictions $\rho|_{P(Z)}$ and $\sigma|_{P(Z)}$ must be the lex and antilex orders on $P(Z)$, respectively. Since $X = \rho_i$ and $(\rho_1, \ldots, \rho_{i-1}) = (\sigma_1, \ldots, \sigma_{i-1})$, it must be that $X = Z_{k+1}$. Since $Y = \sigma_{i'}$ and $i'$ was chosen to be the least integer between $i$ and $j$ such that $\sigma_{i'}$ does not commute with $\sigma_j$, it must be that $Y = Z_1$. The word $W_\rho(Z_{1,k+1})$ begins with the prefix $w_{\rho_1}(Z_{1,k+1}) \cdots w_{\rho_{i-1}}(Z_{1,k+1})$ and the word $W_\sigma(Z_{1,k+1})$ begins with the prefix $w_{\sigma_1}(Z_{1,k+1}) \cdots w_{\sigma_{i'-1}}(Z_{1,k+1})$. The choice of $i'$ implies that $\sigma_r \neq Z_{1,k+1} \cup \{z\}$ for any $z \in [n] \setminus Z_{1,k+1}$ and $i \le r < i'$. Therefore, $w_{\sigma_r}(Z_{1,k+1}) = \varnothing$ for $i \le r < i'$, and hence \[ w_{\rho_1}(Z_{1,k+1}) \cdots w_{\rho_{i-1}}(Z_{1,k+1}) = w_{\sigma_1}(Z_{1,k+1}) \cdots w_{\sigma_{i'-1}}(Z_{1,k+1}). \] However, $w_{\rho_i}(Z_{1,k+1}) = 1$ and $w_{\sigma_{i'}}(Z_{1,k+1}) = 0$. Therefore, $W_\rho(Z_{1,k+1}) \neq W_\sigma(Z_{1,k+1})$, which is a contradiction. It follows that $\sigma_j$ commutes with every element in $\{\sigma_i, \ldots, \sigma_{j-1}\}$, and performing these commutations yields an admissible order $\sigma' \sim \sigma$ such that $(\rho_1, \ldots, \rho_i) = (\sigma'_1, \ldots, \sigma'_i)$. \end{proof} \begin{theorem} \label{theorem:weaving-functions} For integers $1 \le k \le n$ and $[\rho], [\sigma] \in \B{n}{k}$, $[\rho] = [\sigma]$ if and only if $W_\rho = W_\sigma$. \end{theorem} \begin{proof} First, suppose $[\rho] = [\sigma]$. It suffices to prove that $W_\rho = W_\sigma$ in the case where $\rho$ and $\sigma$ differ by a single commutation. Let $\sigma$ be obtained from $\rho$ by commuting $X,Y \in \binom{[n]}{k}$. Since $X$ and $Y$ commute, $P(X) \cap P(Y) = \varnothing$. In particular, the sets $\{X_1, X_k\}$ and $\{Y_1, Y_k\}$ are disjoint. Thus, for any $Z \in \binom{[n]}{k-1}$, at least one of $w_X(Z)$ or $w_Y(Z)$ is the empty string $\varnothing$, which implies that $w_X(Z)w_Y(Z) = w_Y(Z)w_X(Z)$. Since $\rho$ and $\sigma$ differ only by commuting $X$ and $Y$, it follows that $W_\rho = W_\sigma$. Next, suppose that $W_\rho = W_\sigma$. To show that $[\rho] = [\sigma]$, let $1 \le i \le \binom{n}{k}$ be the maximum integer such that $(\rho_1, \ldots, \rho_{i-1}) = (\sigma_1, \ldots, \sigma_{i-1})$. If $i = \binom{n}{k}$, then $\rho = \sigma$ and hence $[\rho] = [\sigma]$.
Otherwise, $i < \binom{n}{k}$. Then by \cref{lem:weaving-fuctions-helper}, there exists $\sigma' \in \A{n}{k}$ such that $\sigma' \sim \sigma$ and $(\rho_1, \ldots, \rho_i) = (\sigma'_1, \ldots, \sigma'_i)$. Repeating the argument on $\rho$ and $\sigma'$ implies that $[\rho] = [\sigma']$. Since $\sigma \sim \sigma'$, it follows that $[\rho] = [\sigma]$. \end{proof} \section{Asymptotic Enumeration} In this section, asymptotics are obtained for $|\B{n}{k}|$ and $|\Bstar{n}{k}|$. The Eulerian number counting the number of permutations in $\S_n$ with $d$ descents is denoted $\boldsymbol{A(n,d)}$. The following theorem is the more precise statement of \cref{theorem:higher-bruhat-order-asymptotics-intro}. \begin{theorem} \label{theorem:higher-bruhat-order-asymptotics} For every integer $k \ge 2$ and sufficiently large $n \gg k$, we have \begin{equation} \label{equation:higher-bruhat-order-asymptotics} \frac{A(k, \lfloor (k-1)/2 \rfloor) n^k}{k!(k+1)!} + O(n^{k-1}) \le \log_2 |\B{n}{k}| \le \frac{n^k}{k!\log{2}} + O(n^{k-1}\log n). \end{equation} \end{theorem} \begin{proof} The upper bound is proved by induction on $k$. To prove the base case $k = 2$, consider the number of possible weaving patterns. By \cref{lemma:k-2-word-length} and \cref{theorem:weaving-functions}, $|\B{n}{2}|$ is bounded above by the number of functions $f: [n] \to \{0,1\}^*$ such that $f(i)$ is a binary word consisting of $(i-1)$ ones and $(n-i)$ zeroes. There are $\prod_{i=1}^n \binom{n-1}{i-1}$ such functions, a quantity which, by Theorem 3.2 of \cite{lagariasmehta2016}, has the asymptotic expression \[ \log_2 \left(\prod_{i=1}^n \binom{n-1}{i-1}\right) = \frac{n^2}{2\log{2}} + O(n\log n). \] For the inductive step, suppose that the upper bound holds for $\log_2|\B{m}{k-1}|$ for every $m \ge k-1$. Repeated application of \cref{theorem:contraction-deletion-equation} to $|\C{n}{k+1}|$ yields an upper bound of \begin{align*} |\C{n}{k+1}| &\le |\C{n-1}{k+1}| \cdot |\C{n-1}{k}|\\ &\le |\C{n-2}{k+1}| \cdot |\C{n-2}{k}| \cdot |\C{n-1}{k}|\\ &\qquad\vdots\\ &\le \prod_{m=k}^{n-1} |\C{m}{k}|. \end{align*} By \cref{theorem:fundamental-duality}, $|\C{n}{k+1}| = |\B{n}{k}|$ and $|\C{m}{k}| = |\B{m}{k-1}|$ for $m = k, k+1, \ldots, n-1$. Taking logarithms yields \begin{align*} \log_2 |\B{n}{k}| &\le \sum_{m=k}^{n-1} \log_2 |\B{m}{k-1}|\\ &\le \sum_{m=k}^{n-1} \left(\frac{m^{k-1}}{(k-1)! \log 2} + O(m^{k-2} \log m)\right)\\ &\le \frac{n^k}{k!\log 2} + O(n^{k-1}\log n), \end{align*} where the last step follows from Faulhaber's formula for the sum of the $(k-1)$th powers. Next, the lower bound is proved by an explicit construction. \cref{lemma:upper-order-ideal-is-consistent} implies that $|\C{n}{k+1}|$ is bounded below by the number of upper order ideals of $\P_\varnothing$. The partial order on $\P_\varnothing$ is equal to the Gale order where $[x_1, \ldots, x_{k+1}] \preceq [y_1, \ldots, y_{k+1}]$ if and only if $x_i \le y_i$ for $i = 1, 2, \ldots, k+1$. The number of upper order ideals is therefore at least $2^{\width(\P_\varnothing)}$ where $\width(\P_\varnothing)$ is the maximal size of an antichain in $\P_\varnothing$. Taking logarithms yields \begin{equation} \width(\P_\varnothing) \le \log_2 |\C{n}{k+1}|. \end{equation} To bound $\width(\P_\varnothing)$, for an integer $c$ consider the set \begin{equation} S_c = \left\{[x_1, \ldots, x_{k+1}] \in \binom{[n]}{k+1}: \sum_{i=1}^{k+1} x_i = c\right\}. \end{equation} Observe that $S_c$ is an antichain in $\P_\varnothing$.
The elements in $S_c$ are in bijection with partitions of $c-\binom{k+1}{2}$ that fit in a $(k+1) \times (n-(k+1))$ rectangle, with the explicit bijection map \begin{equation} [x_1, \ldots, x_{k+1}] \leftrightarrow [x_1 - 1, x_2 -2, \ldots, x_{k+1} - (k+1)]. \end{equation} The number of such restricted partitions is known to be given by the coefficient of $q^{c-\binom{k+1}{2}}$ in the $q$-binomial coefficient $\binom{n}{k+1}_q$. Thus, \begin{equation} [q^{\lfloor\frac{k+1}{2}\rfloor(n-k-1)}]\binom{n}{k+1}_q \le \max_i\ [q^i]\binom{n}{k+1}_q \le \width(\P_\varnothing), \end{equation} where the notation $[q^i]\binom{n}{k+1}_q$ denotes the coefficient of $q^i$ in $\binom{n}{k+1}_q$. By Theorem 2.4 of \cite{stanleyzanello2016} and the discussion of Euler-Frobenius numbers following the theorem, for a fixed integer $\alpha \ge 0$, the following asymptotic holds as $a \to \infty$ \begin{equation} \label{equation:stanley-zanello} [q^{\alpha a}]\binom{a+k+1}{k+1}_q = \frac{A(k, \alpha-1)a^k}{k!(k+1)!} + O(a^{k-1}). \end{equation} Notice that in \cite{stanleyzanello2016}, the Eulerian number $A(n,d)$ is defined to be the number of permutations in $\mathfrak{S}_n$ with $d-1$ descents as opposed to $d$ descents. This is accounted for in \eqref{equation:stanley-zanello} by writing $A(k, \alpha-1)$ as opposed to $A(k, \alpha)$. Finally, setting $\alpha = \lfloor \frac{k+1}{2} \rfloor$ and $a = n-k-1$ yields the desired lower bound \[ \frac{A(k, \lfloor (k-1)/2 \rfloor)n^k}{k!(k+1)!} + O(n^{k-1}) \le \log_2 |\C{n}{k+1}|. \] \end{proof} \begin{remark} The constant $c_k$ in \cref{theorem:higher-bruhat-order-asymptotics-intro} is given by the maximal Eulerian number $A(k, \lfloor (k-1)/2 \rfloor)$ (see OEIS sequence \href{https://oeis.org/A006551}{A006551}). From Equations 5.7 and 5.8 of \cite{bender1973}, the asymptotic behavior is \begin{equation} A(k, \lfloor (k-1)/2 \rfloor) \sim \frac{k!e^{-\alpha \lfloor(k-1)/2\rfloor}}{r^{k+1}\sigma_\alpha\sqrt{2\pi k}}, \end{equation} where \begin{equation} \label{equation:bender2} \begin{split} \frac{\lfloor (k-1)/2 \rfloor}{k} &= \frac{e^\alpha}{e^\alpha-1} - \frac{1}{\alpha}\\ r &= \frac{\alpha}{e^{\alpha}-1}\\ \sigma_\alpha^2 &= \frac{1}{\alpha^2} - \frac{e^{\alpha}}{(e^{\alpha}-1)^2}. \end{split} \end{equation} As $k \to \infty$, \eqref{equation:bender2} implies $\alpha \to 0$, $r \to 1$, and $\sigma_\alpha^2 \to \frac{1}{12}$. Thus, as $k$ grows large, $c_k$ tends to $k!\sqrt{\frac{6}{\pi k}}$ and the lower bound for $\log_2 |\B{n}{k}|$ tends to $\displaystyle \frac{n^k}{(k+1)!}\sqrt{\frac{6}{\pi k}}$. \end{remark}
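As an informal numerical check of this remark (a Python sketch of ours, not part of the proofs; the Eulerian numbers are computed from the standard recurrence $A(n,d) = (d+1)A(n-1,d) + (n-d)A(n-1,d-1)$), one can compare the maximal Eulerian number with the limiting expression above and print the leading constants of the two bounds in \cref{theorem:higher-bruhat-order-asymptotics}.

\begin{verbatim}
from functools import lru_cache
from math import factorial, log, pi, sqrt

@lru_cache(maxsize=None)
def eulerian(n, d):
    # Eulerian number A(n, d): permutations in S_n with exactly d descents.
    if d < 0 or d >= n:
        return 0
    if n == 1:
        return 1
    return (d + 1) * eulerian(n - 1, d) + (n - d) * eulerian(n - 1, d - 1)

for k in (5, 10, 15, 20):
    a_max = eulerian(k, (k - 1) // 2)             # maximal Eulerian number
    estimate = factorial(k) * sqrt(6 / (pi * k))  # limiting expression above
    lower = a_max / (factorial(k) * factorial(k + 1))  # coefficient of n^k, lower bound
    upper = 1 / (factorial(k) * log(2))                # coefficient of n^k, upper bound
    print(k, a_max, round(a_max / estimate, 3), lower, upper)
# The ratio in the third column approaches 1 slowly as k grows
# (with a visible odd/even parity effect for small k).
\end{verbatim}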
\begin{theorem} \label{theorem:dual-higher-bruhat-order-asymptotics} For integers $k \ge 3$ and $n \gg k$, we have \begin{equation} \label{equation:dual-higher-bruhat-order-asymptotics} \frac{A(k-2, \lfloor (k-3)/2 \rfloor)n^{k-2}}{(k-2)!(k-1)!} + O(n^{k-3}) \le \log_2 |\Bstar{n}{k}| \le \frac{n^{k-2}}{(k-2)!} + O(n^{k-3}\log n). \end{equation} \end{theorem} \begin{proof} When $k = 3$, the exact formula $|\B{n}{n-3}| = 2^n + n2^{n-2} - 2n$ is known (see \cref{theorem:codimension3-formula}). By \cref{theorem:fundamental-duality}, $|\Bstar{n}{3}| = |\B{n}{n-3}|$ and taking logarithms yields $\log_2 |\Bstar{n}{3}| = n + O(\log n)$. For $k = 3$, the coefficient on the lower bound is $\frac{A(1,0)}{1!2!} = \frac{1}{2}$ and the coefficient on the upper bound is $1$. Thus, the lower and upper bounds hold for $k = 3$. The upper bound is proved by induction on $k > 3$. Repeatedly applying the dual version of \cref{theorem:contraction-deletion-equation} gives an upper bound of \begin{equation} \label{equation:dual-upper-bound} |\Cstar{n}{k-1}| \le \prod_{m=k-1}^{n-1} |\Cstar{m}{k-2}|. \end{equation} By \cref{theorem:fundamental-duality}, $|\Cstar{n}{k-1}| = |\Bstar{n}{k}|$ and $|\Cstar{m}{k-2}| = |\Bstar{m}{k-1}|$. Substituting into \cref{equation:dual-upper-bound} and taking logarithms yields \begin{align*} \log_2 |\Bstar{n}{k}| &\le \sum_{m=k-1}^{n-1} \log_2 |\Bstar{m}{k-1}|\\ &\le \sum_{m=k-1}^{n-1} \left(\frac{1}{(k-3)!}m^{k-3} + O(m^{k-4} \log m)\right)\\ &\le \frac{1}{(k-2)!}n^{k-2} + O(n^{k-3} \log n). \end{align*} Next, \cref{lemma:upper-order-ideal-is-consistent} bounds $|\Cstar{n}{k-1}| = |\Bstar{n}{k}|$ from below by the number of upper order ideals of the dual consistent poset $\P^*_\varnothing$. The partial order on $\P^*_\varnothing$ is also equal to the Gale order, so exactly the same reasoning as in the proof of \cref{theorem:higher-bruhat-order-asymptotics} applies. Therefore, $\frac{A(k-2, \lfloor (k-3)/2\rfloor)}{(k-2)!(k-1)!}n^{k-2} + O(n^{k-3}) \le \log_2 |\Cstar{n}{k-1}|$. \end{proof} \section{Explicit Enumeration} \label{section:explicit-formulas} This section supplies a proof of the explicit formula for $|\B{n}{n-3}|$ that was first put forth in \cite{ziegler93}. The proof techniques generalize to enumerate subsets of $\B{n}{n-4}$. In particular, \cref{theorem:TSPP-bijection} is proved. For an element $I \in \Cstar{n-1}{k-1}$, let $\CstarI{I}{n}{k} = \{J \in \Cstar{n}{k} : J\contract n = I\}$. The sets $\CstarI{I}{n}{k}$ as $I$ ranges over all elements of $\Cstar{n-1}{k-1}$ partition the elements of $\Cstar{n}{k}$ by their contraction. It follows that \begin{equation} \label{equation:partition-by-contraction} |\Cstar{n}{k}| = \sum_{I \in \Cstar{n-1}{k-1}} |\CstarI{I}{n}{k}|. \end{equation} By \cref{theorem:fundamental-duality}, $|\B{n}{n-3}| = |\Cstar{n}{2}|$, so to enumerate $\B{n}{n-3}$, it suffices to enumerate $\Cstar{n}{2}$. The contraction of an element in $\Cstar{n}{2}$ is an element in $\Cstar{n-1}{1}$ and one can check that the elements in $\Cstar{n-1}{1}$ are either of the form $\{\{1\}, \ldots, \{r\}\}$ or $\{\{r+1\}, \ldots, \{n-1\}\}$ for some integer $r$ such that $0 \le r \le n-1$. If $r = 0$, then by convention $\{\{1\}, \ldots, \{r\}\} = \varnothing$ and if $r = n-1$, then by convention $\{\{r+1\}, \ldots, \{n-1\}\} = \varnothing$. For example, recall that the elements of $\Cstar{4}{1}$ are depicted on the right hand side of \cref{figure:dual-higher-bruhat-orders}. A subset $I \subseteq \binom{[n]}{2}$ can be visualized as a set of squares in the plane. The square in $\N^2$ with bottom left vertex at $(x_1, x_2)$ and top right vertex at $(x_1+1, x_2+1)$ is associated to each $[x_1,x_2] \in I$. If $I$ is a coconsistent subset in $\Cstar{n}{2}$, then the contraction $I/n$ can be determined from the $x$-coordinates of the squares in the top row of the visualization. \begin{example} \label{example:cn2_visualization} The coconsistent set $\binom{[6]}{2} \in \Cstar{6}{2}$ is depicted by the squares in \cref{figure:example-diagram} and $I = \{14, 15, 16, 23, 24, 25, 26, 34, 35, 36\} \in \Cstar{6}{2}$ is depicted by the shaded squares. The contraction $I/6$ is the element $\{\{1\}, \{2\}, \{3\}\} \in \Cstar{5}{1}$ so $I \in \CstarI{\{\{1\}, \{2\}, \{3\}\}}{6}{2}$.
\begin{figure}[ht] \centering \input{figures/diagram0} \caption{The shaded squares represent the element $I \in \Cstar{6}{2}$ of \cref{example:cn2_visualization}.} \label{figure:example-diagram} \end{figure} \end{example} \begin{lemma} \label{lemma:cn2-refinement-emptyset} For an integer $n \ge 2$, let $I(n-1) = \{\{1\}, \ldots, \{n-1\}\}$. Then the following formula holds \begin{equation} \label{equation:cn2-refinement-emptyset} |\CstarI{\varnothing}{n}{2}| = |\CstarI{I(n-1)}{n}{2}| = 2^{n-2}. \end{equation} \end{lemma} \begin{proof} One can check that the map sending $I \mapsto \binom{[n]}{2} \setminus I$ is a bijection between $\CstarI{\varnothing}{n}{2}$ and $\CstarI{I(n-1)}{n}{2}$. Thus, it suffices to prove that \cref{equation:cn2-refinement-emptyset} holds for $|\CstarI{\varnothing}{n}{2}|$. Let $J \in \CstarI{\varnothing}{n}{2}$. The contraction $J/n$ is empty so for any $\{x\} \in \binom{[n-1]}{1}$, $[x,n] \not\in J$. Since the intersection $P^*(\{x\}) \cap J$ does not contain the maximal element $[x,n]$ of $P^*(\{x\})$ and $J$ is coconsistent, the intersection $P^*(\{x\}) \cap J$ is a prefix of $P^*(\{x\})$ in lex order. Thus coconsistency of $J$ imposes two types of convexity conditions: \begin{enumerate} \item If $[x,y] \in J$ and $1 < x$, then $[x-1, y] \in J$. \item If $[x,y] \in J$ and $x < y-1$, then $[x, y-1] \in J$. \end{enumerate} Therefore every coconsistent set $J \in \CstarI{\varnothing}{n}{2}$ is determined by the northeast border of the associated squares. For example, the shaded squares in \cref{figure:c_s_i_empty} depict an element in $\CstarI{\varnothing}{8}{2}$ and the corresponding northeast border. Mapping an element $J \in \CstarI{\varnothing}{n}{2}$ to the northeast border of the squares associated to $J$ gives a bijection between $\CstarI{\varnothing}{n}{2}$ and lattice paths of length $n-2$ that start from $(1,n)$ and consist only of south and east steps of unit length. There are $2^{n-2}$ such lattice paths, so $|\CstarI{\varnothing}{n}{2}| = 2^{n-2}$. \end{proof} \begin{figure}[!ht] \centering \input{figures/diagram1b.tex}\quad \caption{The northeast border of an element in $\CstarI{\varnothing}{8}{2}$.} \label{figure:c_s_i_empty} \end{figure} \begin{lemma} \label{lemma:refinement} Let $r,n$ be integers such that $n \ge 2$ and $1 \le r \le n-2$. Let $I(r) = \{\{1\}, \ldots, \{r\}\}$. The following formula holds \begin{equation} \label{equation:Cn2_refinement} |\CstarI{I(r)}{n}{2}| = |\CstarI{I(n-1) \setminus I(r)}{n}{2}| = 2^{n-3} + \binom{n-1}{r} - 1. \end{equation} \end{lemma} \begin{proof} One can check that the map sending $I \mapsto \binom{[n]}{2} \setminus I$ is a bijection between $\CstarI{I(r)}{n}{2}$ and $\CstarI{I(n-1) \setminus I(r)}{n}{2}$. Thus, it suffices to prove that \cref{equation:Cn2_refinement} holds for $|\CstarI{I(r)}{n}{2}|$. The elements in $\CstarI{I(r)}{n}{2}$ can be partitioned into those $J \in \CstarI{I(r)}{n}{2}$ where $[r,r+1] \in J$ and those where $[r,r+1] \not\in J$. The number of elements in each case is computed respectively. \textbf{Case 1: $\boldsymbol{[r,r+1] \in J}$.} Since $J/n = \{\{1\}, \ldots, \{r\}\}$, the element $[r,n]$ is in $J$. Because both $[r,r+1], [r,n] \in J$, the coconsistency of $J$ implies that $[r,j] \in J$ for all $j$ satisfying $r+1 \le j \le n$. Furthermore, $[j, n] \not\in J$ for $r+1 \le j < n$, so the intersection $P^*(\{j\}) \cap J$ does not contain the lex maximal element of $P^*(\{j\})$. Thus, $P^*(\{ j \}) \cap J$ is a prefix of the copacket $P^*(\{j\})$ in lex order. 
It follows that $[i,j] \in J$ for all $1 \le i \le r$ and $r+1 \le j \le n$. Therefore, to enumerate the elements $J \in \CstarI{I(r)}{n}{2}$ that satisfy $[r,r+1] \in J$ amounts to enumerating the possible intersections \begin{align} \label{equation:case2a-top-right} J &\cap \{[i,j] : r+1 \le i < j \le n\},\\ \label{equation:case2a-bottom-left} J &\cap \{[i,j] : 1 \le i < j \le r\}. \end{align} The intersections in \cref{equation:case2a-top-right} and \cref{equation:case2a-bottom-left} are outlined in bold in the left diagram of \cref{figure:b_n_i}. Observe that the possible intersections in \cref{equation:case2a-top-right} are constrained by the coconsistency of $J$ with respect to intersections $J \cap P^*(\{i\})$ where $r+1 \le i \le n$. Similarly, the possible intersections in \cref{equation:case2a-bottom-left} are constrained by the coconsistency of $J$ with respect to intersections $J \cap P^*(\{i\})$ for $1 \le i \le r$. Thus, the intersections in \cref{equation:case2a-top-right} and \cref{equation:case2a-bottom-left} can be chosen independently of each other. Through considering \cref{figure:b_n_i} or the definition of coconsistency, one can check that the possible intersections in \cref{equation:case2a-top-right} are enumerated by the elements in $\CstarI{\varnothing}{n-r}{2}$. By \cref{lemma:cn2-refinement-emptyset}, $|\CstarI{\varnothing}{n-r}{2}| = 2^{n-r-2}$. Similarly, the possible intersections in \cref{equation:case2a-bottom-left} are enumerated by elements in $\CstarI{I(r)}{r+1}{2}$ and by \cref{lemma:cn2-refinement-emptyset}, $|\CstarI{I(r)}{r+1}{2}| = 2^{r-1}$. Therefore, there are $2^{n-r-2} \cdot 2^{r-1} = 2^{n-3}$ elements enumerated in this case. \begin{figure}[!ht] \centering \input{figures/diagram2}\quad \input{figures/diagram3} \caption{Case 1 (left) and case 2 (right) in the proof of \cref{lemma:refinement}.} \label{figure:b_n_i} \end{figure} \textbf{Case 2: $\boldsymbol{[r,r+1] \not\in J}$.} Since $J/n = \{\{1\}, \ldots, \{r\}\}$, $[r+1,n] \not\in J$. Since also $[r,r+1] \not\in J$ by hypothesis, $P^*(\{r+1\}) \cap J$ is a prefix of $P^*(\{r+1\})$ in lex order, so \begin{equation} J \cap \{[r+1, i] : r+1 < i \le n\} = \varnothing. \end{equation} Since $[r+1,i] \not\in J$ for all $i$ satisfying $r+1 < i \le n$, the coconsistency of $J$ implies that $P^*(\{i\}) \cap J$ is a prefix of $P^*(\{i\})$ in lex order. Therefore, $[i,j] \not\in J$ for all $j$ such that $i < j \le n$ and hence \begin{equation} \label{equation:case2-first-empty} J \cap \{[i,j] : r+1 \le i < j \le n\} = \varnothing. \end{equation} Next, $[r,n] \in J$ but $[r,r+1] \not\in J$, so the coconsistency of $J$ implies that $P^*(\{r\}) \cap J$ is a suffix of $P^*(\{r\})$ in lex order and furthermore, \begin{equation} J \cap \{[j,r] : 1 \le j < r\} = \varnothing. \end{equation} Therefore, for all $j$ such that $1 \le j < r$, $[j,r] \not\in J$ and $J \in \CstarI{I(r)}{n}{2}$, so we know that $[j,n] \in J$ and $P^*(\{j\}) \cap J$ is a suffix of $P^*(\{j\})$ in lex order. Thus, $[i,j] \not \in J$ for all $i$ such that $1 \le i < j$, and hence \begin{equation} \label{equation:case2-second-empty} J \cap \{[i,j] : 1 \le i < j \le r\} = \varnothing. \end{equation} Both intersections in \cref{equation:case2-first-empty} and \cref{equation:case2-second-empty} are empty. 
Therefore, to enumerate the sets $J \in \CstarI{I(r)}{n}{2}$ that satisfy $[r,r+1] \not\in J$ amounts to enumerating the possible intersections \begin{equation} \label{equation:case2b} J \cap \{[i,j] : \text{$1 \le i \le r$ and $r+1 \le j < n$}\}. \end{equation} The intersection in \cref{equation:case2b} is outlined in bold in the right diagram of \cref{figure:b_n_i}. Let $i,j$ be integers satisfying $1 \le i \le r$ and $r+1 \le j < n$. Observe that the coconsistency of $J$ imposes two types of convexity conditions: \begin{enumerate} \item If $i > 1$ and $[i,j] \in J$, then $[i-1,j] \in J$. \item If $j < n$ and $[i,j] \in J$, then $[i,j+1] \in J$. \end{enumerate} Therefore, the sets $J \in \CstarI{I(r)}{n}{2}$ satisfying $[r,r+1] \not\in J$ are in bijection with the set of lattice paths from $[1,r+1]$ to $[r+1,n]$ consisting of east and north steps of unit length, with the exception of the path consisting of all east steps followed by all north steps. A set $J$ is mapped to such a lattice path by taking the southeast border of the squares associated with $J$. There are thus $\binom{n-1}{r}-1$ elements enumerated in this case. Summing the enumeration in cases 1 and 2 together yields $|\CstarI{I(r)}{n}{2}| = 2^{n-3} + \binom{n-1}{r} - 1$. \end{proof} \begin{theorem}[{\cite[Proposition 7.1]{ziegler93}}] \label{theorem:codimension3-formula} For integers $n \ge 4$, $|\B{n}{n-3}| = 2^n + n2^{n-2} - 2n$. \end{theorem} \begin{proof} By \cref{theorem:fundamental-duality}, $|\B{n}{n-3}| = |\Bstar{n}{3}| = |\Cstar{n}{2}|$. Partitioning coconsistent sets by their contraction implies that \begin{align*} |\Cstar{n}{2}| &= \sum_{I \in \Cstar{n-1}{1}} |\CstarI{I}{n}{2}|\\ &= |\CstarI{\varnothing}{n}{2}| + |\CstarI{I(n-1)}{n}{2}| + \sum_{r=1}^{n-2} \left(|\CstarI{I(r)}{n}{2}| + |\CstarI{I(n-1) \setminus I(r)}{n}{2}|\right)\\ &= 2|\CstarI{\varnothing}{n}{2}| + 2\sum_{r=1}^{n-2} |\CstarI{I(r)}{n}{2}|, \end{align*} where the last equality follows by \cref{lemma:cn2-refinement-emptyset} and \cref{lemma:refinement}. By using the explicit formulas \cref{equation:cn2-refinement-emptyset} and \cref{equation:Cn2_refinement}, the summation on the right hand side can be computed as follows \begin{align*} |\CstarI{\varnothing}{n}{2}| + \sum_{r=1}^{n-2} |\CstarI{I(r)}{n}{2}| &= 2^{n-2} + \sum_{r=1}^{n-2} \left(2^{n-3} + \binom{n-1}{r} - 1\right)\\ &= 2^{n-2} + (n-2)2^{n-3} + (2^{n-1}-2) - (n-2)\\ &= 2^{n-1} + n2^{n-3} - n. \end{align*} Multiplying by 2 yields the desired formula. \end{proof}
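The formula is also easy to confirm by computer for small $n$. The Python sketch below is ours and assumes that, for $k = 2$, coconsistency amounts exactly to the prefix-or-suffix condition on every copacket that is used throughout this section; under that assumption it brute-forces $|\Cstar{n}{2}|$ and compares the result with the case-by-case sum from \cref{lemma:cn2-refinement-emptyset} and \cref{lemma:refinement} and with the closed formula.

\begin{verbatim}
from itertools import combinations
from math import comb

def count_coconsistent_k2(n):
    # Brute-force |C*(n,2)|: a subset J of the 2-subsets of [n] is kept if,
    # for every x in [n], J meets the copacket P*({x}) (all 2-sets containing x)
    # in a prefix or a suffix of its lex order.
    pairs = list(combinations(range(1, n + 1), 2))  # lex order
    copackets = [[i for i, p in enumerate(pairs) if x in p]
                 for x in range(1, n + 1)]

    def prefix_or_suffix(picks):
        c = sum(picks)
        return picks in ([1] * c + [0] * (len(picks) - c),
                         [0] * (len(picks) - c) + [1] * c)

    return sum(
        all(prefix_or_suffix([(mask >> i) & 1 for i in cp]) for cp in copackets)
        for mask in range(1 << len(pairs))
    )

for n in range(4, 7):
    closed_form = 2**n + n * 2**(n - 2) - 2 * n
    # summing the refined counts from the two lemmas, as in the proof above
    case_sum = 2 * 2**(n - 2) + 2 * sum(2**(n - 3) + comb(n - 1, r) - 1
                                        for r in range(1, n - 1))
    print(n, count_coconsistent_k2(n), case_sum, closed_form)
# All three counts agree, matching |B(n, n-3)| = 2^n + n 2^(n-2) - 2n.
\end{verbatim}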
\begin{proof}[Proof of \cref{theorem:TSPP-bijection}] By \cref{theorem:fundamental-duality} and \cref{lemma:contraction-deletion-are-dual}, the subposet of $\B{n}{n-4}$ obtained by restricting to elements whose deletion is the commutation class of the lexicographic order is isomorphic to the subposet of $\Cstar{n}{3}$ obtained by restricting to $\CstarI{\varnothing}{n}{3}$. By Theorem 4.5 in \cite{ziegler93}, single step inclusion and inclusion coincide for $\C{n}{n-3}$ and hence for $\Cstar{n}{3}$ and $\CstarI{\varnothing}{n}{3}$. Consider the map $\varphi: \T_{n-3} \to \CstarI{\varnothing}{n}{3}$ defined by \begin{equation} \varphi(T) = \{[x_1,x_2+1,x_3+2] : [x_1,x_2,x_3] \in T\}. \end{equation} To see that the image of $\varphi$ is $\CstarI{\varnothing}{n}{3}$, let $T \in \T_{n-3}$. For any $[x_1,x_2,x_3] \in T$, the inequalities $1 \le x_1 < x_2 < x_3 \le n-3$ hold. In particular, $x_3 +2 < n$. Therefore, $\varphi(T) / n = \varnothing$. Since $T$ is a plane partition, for any $1 \le i < j \le n$, the intersection $\varphi(T) \cap P^*([i,j])$ is a prefix of the copacket $P^*([i,j])$ in lex order. Thus, $\varphi(T)$ is coconsistent. The inverse map $\varphi^{-1}: \CstarI{\varnothing}{n}{3} \to \T_{n-3}$ is defined to be \begin{equation} \varphi^{-1}(I) = \{[x_{\pi(1)}, x_{\pi(2)}, x_{\pi(3)}] : \text{$[x_1,x_2,x_3] \in I$ and $\pi \in \S_3$}\}. \end{equation} It is clear that for any $T, T' \in \T_{n-3}$, the inclusion $T \subseteq T'$ holds if and only if the inclusion $\varphi(T) \subseteq \varphi(T')$ holds. Thus, $\varphi$ is an isomorphism between $\T_{n-3}$ and $\CstarI{\varnothing}{n}{3}$. \end{proof} \section{Future Work} We conclude with some directions for future study. Consistent posets were introduced to obtain a lower bound in \cref{theorem:higher-bruhat-order-asymptotics-intro} but are an interesting family of posets to study in their own right. We have verified the following conjecture on consistent posets $\P_S$ for all $S \in \C{n}{k}$ where $n \le 7$ and $2 \le k \le n$. \begin{conjecture} For integers $1 \le k \le n$ and $S \in \C{n}{k}$, the consistent poset $\P_S$ is ranked. \end{conjecture} The weaving functions introduced in \cref{section:weaving-functions} are also interesting objects of study. It is surprising that commutation classes in $\B{n}{k}$ can be recovered from a weaving function when much of the information in a commutation class appears to be ``forgotten'' by considering only binary words. \begin{problem} For integers $1 \le k \le n$, find a combinatorial criterion to determine which functions $W: \binom{[n]}{k-1} \to \{0,1\}^*$ are the weaving function of some $[\rho] \in \B{n}{k}$. \end{problem} Extensive computational data has been generated in the case $k = 2$ and is available as part of the \href{https://github.com/pnnl/ML4AlgComb/tree/master}{Algebraic Combinatorics Dataset Repository}. In particular, a transformer machine learning model trained on a subset of the weaving functions of $\B{7}{2}$ attains $99.1\%$ accuracy in identifying whether or not an arbitrary map $[7] \to \{0,1\}^*$ is a weaving function $W_\rho$ for some $[\rho] \in \B{7}{2}$. The parameter size of the trained model is significantly smaller than $|\B{7}{2}|$, suggesting that there is some simple learned embedding or heuristic in the model rather than memorization of the training data. \cref{theorem:TSPP-bijection} demonstrates that an explicit formula for $|\CstarI{\varnothing}{n}{3}|$ exists and is given by Stembridge's product formula for TSPPs. If one were able to obtain an explicit formula for $|\CstarI{I}{n}{3}|$ for general coconsistent sets $I$, then by \cref{equation:partition-by-contraction}, one may obtain a formula for $|\Cstar{n}{3}|$. \begin{problem} For a coconsistent set $I \in \Cstar{n-1}{2}$, find a formula for $|\CstarI{I}{n}{3}|$ analogous to the formula in \cref{equation:Cn2_refinement}. \end{problem} \subsection*{Acknowledgements} Many thanks to Sara Billey and Ben Elias for helpful discussions and introducing me to the higher Bruhat orders. Thanks to Kevin Liu for discussing weaving functions with me. Additional thanks to Sean Grate, Helen Jenne, Jamie Kimble, Henry Kvinge, Cordelia Li, Clare Minnerath, Michael Tang and Rachel Wei for feedback.
\newpage \emergencystretch=1em \printbibliography \end{document}
(3.85, 4.15) { $[r,r+1] \in J$}; \end{tikzpicture} \end{document} \documentclass{standalone} \usepackage{tikz} \definecolor{mygray}{HTML}{ACACAC} \definecolor{myred}{HTML}{E6194B} \definecolor{mygreen}{HTML}{3CB44B} \definecolor{myblue}{HTML}{4363D8} \definecolor{mypink}{HTML}{F032E6} \definecolor{myorange}{HTML}{F58231} \definecolor{mypurple}{HTML}{7F00FF} \definecolor{mybrown}{HTML}{954535} \begin{document} \begin{tikzpicture}[scale=0.5] \draw (0,9) -- (0,0) -- (9,0); \foreach \y in {1,2,...,9} { \node[anchor=east] at (0, \y) {$\y$}; \draw (0, \y) -- (0.1, \y); } \foreach \x in {1,2,...,8} { \draw (\x, 0) -- (\x, 0.1); \node[anchor=north] at (\x, 0) {$\x$}; } ll[mygray] (1,9) -- (4,9) -- (4,8) -- (1,8) -- cycle; ll[mygray] (\i+.5, 8.5) circle (3pt); ll[mygray] (\i+.5, 7.5) circle (3pt); ll[mygray] (4.5, 6.5) circle (3pt); ll[mygray] (5.5, 6.5) circle (3pt); ll[mygray] (4.5, 5.5) circle (3pt); ll[mygray] (3.5, 4.5) circle (3pt); ll[mygray] (1.5, 3.5) circle (3pt); ll[mygray] (2.5, 3.5) circle (3pt); ll[mygray] (1.5, 2.5) circle (3pt); \foreach \x in {1,2,3} { \foreach \y in {5,6,7} { \node[mygray] at (\x+.5, \y+.5) {\textbf{?}}; } } \node[mygray] at (1.5, 4.5) {\textbf{?}}; \node[mygray] at (2.5, 4.5) {\textbf{?}}; \node[anchor=north west] at (3.75, 4.25) {$[r,r+1] \not\in J$}; \foreach \i in {2,3,...,8} { \draw (1, \i) -- (\i, \i) -- (\i, 9); } \draw (1, 2) -- (1, 9) -- (8, 9); \draw[line width=3pt] (1,8) -- (4,8) -- (4,5) -- (3,5) -- (3,4) -- (1,4) -- cycle; \end{tikzpicture} \end{document} \documentclass{standalone} \usepackage{tikz} \definecolor{mygray}{HTML}{ACACAC} \definecolor{myred}{HTML}{E6194B} \definecolor{mygreen}{HTML}{3CB44B} \definecolor{myblue}{HTML}{4363D8} \definecolor{mypink}{HTML}{F032E6} \definecolor{myorange}{HTML}{F58231} \definecolor{mypurple}{HTML}{7F00FF} \definecolor{mybrown}{HTML}{954535} \begin{document} \begin{tikzpicture}[scale=0.5] \draw (0,7) -- (0,0) -- (7,0); \foreach \y in {1,2,...,7} { \node[anchor=east] at (0, \y) {$\y$}; \draw (0, \y) -- (0.1, \y); } \foreach \x in {1,2,...,6} { \draw (\x, 0) -- (\x, 0.1); \node[anchor=north] at (\x, 0) {$\x$}; } ll[mygray] (1,2) -- (1,6) -- (2,6) -- (2,5) -- (3,5) -- (3,3) -- (2,3) -- (2,2) -- cycle; \foreach \i in {1,2,...,5} { ll[mygray] (\i+.5, 6.5) circle (3pt); } \foreach \i in {2,3,4} { ll[mygray] (\i+.5, 5.5) circle (3pt); } ll[mygray] (3.5, 4.5) circle (3pt); \foreach \i in {2,3,...,6} { \draw (1, \i) -- (\i, \i) -- (\i, 7); } \draw (1, 2) -- (1, 7) -- (6, 7); \draw[line width=3pt] (1,6) -- (2,6) -- (2,5) -- (3,5) -- (3,4); ll (3,4) circle (6pt); ll (1,6) circle (6pt); \end{tikzpicture} \end{document} \documentclass{standalone} \usepackage{tikz} \definecolor{mygray}{HTML}{ACACAC} \definecolor{myred}{HTML}{E6194B} \definecolor{mygreen}{HTML}{3CB44B} \definecolor{myblue}{HTML}{4363D8} \definecolor{mypink}{HTML}{F032E6} \definecolor{myorange}{HTML}{F58231} \definecolor{mypurple}{HTML}{7F00FF} \definecolor{mybrown}{HTML}{954535} \begin{document} \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.8,xscale=0.8] \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (270,60) -- (310,70) -- (280,90) -- (240,80) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (270,20) -- (310,30) -- (280,50) -- (240,40) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 
255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (240,40) -- (280,50) -- (280,90) -- (240,80) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (270,20) -- (310,30) -- (310,70) -- (270,60) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (280,50) -- (310,30) -- (310,70) -- (280,90) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (240,40) -- (270,20) -- (270,60) -- (240,80) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (310,70) -- (350,80) -- (320,100) -- (280,90) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (310,30) -- (350,40) -- (320,60) -- (280,50) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (280,50) -- (320,60) -- (320,100) -- (280,90) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (310,30) -- (350,40) -- (350,80) -- (310,70) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (320,60) -- (350,40) -- (350,80) -- (320,100) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (280,50) -- (310,30) -- (310,70) -- (280,90) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (240,80) -- (280,90) -- (250,110) -- (210,100) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (240,40) -- (280,50) -- (250,70) -- (210,60) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (210,60) -- (250,70) -- (250,110) -- (210,100) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (240,40) -- (280,50) -- (280,90) -- (240,80) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,70) -- (280,50) -- (280,90) -- (250,110) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,60) -- (240,40) -- (240,80) -- (210,100) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,100) -- (250,110) -- (220,130) -- (180,120) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (210,60) -- (250,70) -- (220,90) -- (180,80) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (180,80) -- (220,90) -- (220,130) -- (180,120) -- 
cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,60) -- (250,70) -- (250,110) -- (210,100) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (220,90) -- (250,70) -- (250,110) -- (220,130) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,80) -- (210,60) -- (210,100) -- (180,120) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,120) -- (220,130) -- (190,150) -- (150,140) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (180,80) -- (220,90) -- (190,110) -- (150,100) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (150,100) -- (190,110) -- (190,150) -- (150,140) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,80) -- (220,90) -- (220,130) -- (180,120) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (190,110) -- (220,90) -- (220,130) -- (190,150) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (150,100) -- (180,80) -- (180,120) -- (150,140) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (320,100) -- (360,110) -- (330,130) -- (290,120) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (320,60) -- (360,70) -- (330,90) -- (290,80) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (290,80) -- (330,90) -- (330,130) -- (290,120) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (320,60) -- (360,70) -- (360,110) -- (320,100) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (330,90) -- (360,70) -- (360,110) -- (330,130) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (290,80) -- (320,60) -- (320,100) -- (290,120) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (350,80) -- (390,90) -- (360,110) -- (320,100) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (350,40) -- (390,50) -- (360,70) -- (320,60) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (320,60) -- (360,70) -- (360,110) -- (320,100) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill 
opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (350,40) -- (390,50) -- (390,90) -- (350,80) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (360,70) -- (390,50) -- (390,90) -- (360,110) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (320,60) -- (350,40) -- (350,80) -- (320,100) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,190) -- (250,200) -- (220,220) -- (180,210) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (210,150) -- (250,160) -- (220,180) -- (180,170) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (180,170) -- (220,180) -- (220,220) -- (180,210) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,150) -- (250,160) -- (250,200) -- (210,190) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (220,180) -- (250,160) -- (250,200) -- (220,220) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,170) -- (210,150) -- (210,190) -- (180,210) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,210) -- (220,220) -- (190,240) -- (150,230) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (180,170) -- (220,180) -- (190,200) -- (150,190) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (150,190) -- (190,200) -- (190,240) -- (150,230) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,170) -- (220,180) -- (220,220) -- (180,210) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (190,200) -- (220,180) -- (220,220) -- (190,240) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (150,190) -- (180,170) -- (180,210) -- (150,230) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,300) -- (220,310) -- (190,330) -- (150,320) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (180,260) -- (220,270) -- (190,290) -- (150,280) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (150,280) -- (190,290) -- (190,330) -- (150,320) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,260) -- 
(220,270) -- (220,310) -- (180,300) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (190,290) -- (220,270) -- (220,310) -- (190,330) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (150,280) -- (180,260) -- (180,300) -- (150,320) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,380) -- (220,390) -- (190,410) -- (150,400) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (180,340) -- (220,350) -- (190,370) -- (150,360) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (150,360) -- (190,370) -- (190,410) -- (150,400) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,340) -- (220,350) -- (220,390) -- (180,380) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (190,370) -- (220,350) -- (220,390) -- (190,410) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (150,360) -- (180,340) -- (180,380) -- (150,400) -- cycle ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=3] (285,75) -- (285,185) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=3] (285,185) -- (255,290) ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,200) -- (290,210) -- (260,230) -- (220,220) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,160) -- (290,170) -- (260,190) -- (220,180) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (220,180) -- (260,190) -- (260,230) -- (220,220) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,160) -- (290,170) -- (290,210) -- (250,200) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (260,190) -- (290,170) -- (290,210) -- (260,230) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (220,180) -- (250,160) -- (250,200) -- (220,220) -- cycle ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=3] (255,290) -- (210,280) ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,290) -- (290,300) -- (260,320) -- (220,310) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (250,250) -- (290,260) -- (260,280) -- (220,270) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw 
opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (220,270) -- (260,280) -- (260,320) -- (220,310) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,250) -- (290,260) -- (290,300) -- (250,290) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (260,280) -- (290,260) -- (290,300) -- (260,320) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (220,270) -- (250,250) -- (250,290) -- (220,310) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,280) -- (250,290) -- (220,310) -- (180,300) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (210,240) -- (250,250) -- (220,270) -- (180,260) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (180,260) -- (220,270) -- (220,310) -- (180,300) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,240) -- (250,250) -- (250,290) -- (210,280) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (220,270) -- (250,250) -- (250,290) -- (220,310) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (180,260) -- (210,240) -- (210,280) -- (180,300) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (280,90) -- (320,100) -- (290,120) -- (250,110) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (280,50) -- (320,60) -- (290,80) -- (250,70) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,70) -- (290,80) -- (290,120) -- (250,110) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (280,50) -- (320,60) -- (320,100) -- (280,90) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (290,80) -- (320,60) -- (320,100) -- (290,120) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,70) -- (280,50) -- (280,90) -- (250,110) -- cycle ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=3] (395,80) -- (325,185) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=3] (325,185) -- (245,165) ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (320,190) -- (360,200) -- (330,220) -- (290,210) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; 
blue, 128 } ,fill opacity=0.25 ] (320,150) -- (360,160) -- (330,180) -- (290,170) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (290,170) -- (330,180) -- (330,220) -- (290,210) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (320,150) -- (360,160) -- (360,200) -- (320,190) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (330,180) -- (360,160) -- (360,200) -- (330,220) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (290,170) -- (320,150) -- (320,190) -- (290,210) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (280,180) -- (320,190) -- (290,210) -- (250,200) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (280,140) -- (320,150) -- (290,170) -- (250,160) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,160) -- (290,170) -- (290,210) -- (250,200) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (280,140) -- (320,150) -- (320,190) -- (280,180) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (290,170) -- (320,150) -- (320,190) -- (290,210) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,160) -- (280,140) -- (280,180) -- (250,200) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (240,170) -- (280,180) -- (250,200) -- (210,190) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (240,130) -- (280,140) -- (250,160) -- (210,150) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (210,150) -- (250,160) -- (250,200) -- (210,190) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (240,130) -- (280,140) -- (280,180) -- (240,170) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,160) -- (280,140) -- (280,180) -- (250,200) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (210,150) -- (240,130) -- (240,170) -- (210,190) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,110) -- (290,120) -- (260,140) -- (220,130) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (250,70) -- (290,80) 
-- (260,100) -- (220,90) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (220,90) -- (260,100) -- (260,140) -- (220,130) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (250,70) -- (290,80) -- (290,120) -- (250,110) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ] (260,100) -- (290,80) -- (290,120) -- (260,140) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (220,90) -- (250,70) -- (250,110) -- (220,130) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (390,90) -- (430,100) -- (400,120) -- (360,110) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (390,50) -- (430,60) -- (400,80) -- (360,70) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (360,70) -- (400,80) -- (400,120) -- (360,110) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (390,50) -- (430,60) -- (430,100) -- (390,90) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=0.5 ][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ] (400,80) -- (430,60) -- (430,100) -- (400,120) -- cycle ; \draw [draw opacity=0][fill={rgb, 255:red, 128; green, 128; blue, 128 } ,fill opacity=0.25 ][dash pattern={on 4.5pt off 4.5pt}] (360,70) -- (390,50) -- (390,90) -- (360,110) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (147.5,400) .. controls (147.5,398.62) and (148.62,397.5) .. (150,397.5) .. controls (151.38,397.5) and (152.5,398.62) .. (152.5,400) .. controls (152.5,401.38) and (151.38,402.5) .. (150,402.5) .. controls (148.62,402.5) and (147.5,401.38) .. (147.5,400) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (147.5,320) .. controls (147.5,318.62) and (148.62,317.5) .. (150,317.5) .. controls (151.38,317.5) and (152.5,318.62) .. (152.5,320) .. controls (152.5,321.38) and (151.38,322.5) .. (150,322.5) .. controls (148.62,322.5) and (147.5,321.38) .. (147.5,320) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (217.5,310) .. controls (217.5,308.62) and (218.62,307.5) .. (220,307.5) .. controls (221.38,307.5) and (222.5,308.62) .. (222.5,310) .. controls (222.5,311.38) and (221.38,312.5) .. (220,312.5) .. controls (218.62,312.5) and (217.5,311.38) .. (217.5,310) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (176.5,300) .. controls (176.5,298.62) and (177.62,297.5) .. (179,297.5) .. controls (180.38,297.5) and (181.5,298.62) .. (181.5,300) .. controls (181.5,301.38) and (180.38,302.5) .. (179,302.5) .. controls (177.62,302.5) and (176.5,301.38) .. (176.5,300) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (287.5,210) .. controls (287.5,208.62) and (288.62,207.5) .. (290,207.5) .. controls (291.38,207.5) and (292.5,208.62) .. (292.5,210) .. controls (292.5,211.38) and (291.38,212.5) .. 
(290,212.5) .. controls (288.62,212.5) and (287.5,211.38) .. (287.5,210) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (287.5,120) .. controls (287.5,118.62) and (288.62,117.5) .. (290,117.5) .. controls (291.38,117.5) and (292.5,118.62) .. (292.5,120) .. controls (292.5,121.38) and (291.38,122.5) .. (290,122.5) .. controls (288.62,122.5) and (287.5,121.38) .. (287.5,120) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (358,109) .. controls (358,107.62) and (359.12,106.5) .. (360.5,106.5) .. controls (361.88,106.5) and (363,107.62) .. (363,109) .. controls (363,110.38) and (361.88,111.5) .. (360.5,111.5) .. controls (359.12,111.5) and (358,110.38) .. (358,109) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (318,99) .. controls (318,97.62) and (319.12,96.5) .. (320.5,96.5) .. controls (321.88,96.5) and (323,97.62) .. (323,99) .. controls (323,100.38) and (321.88,101.5) .. (320.5,101.5) .. controls (319.12,101.5) and (318,100.38) .. (318,99) -- cycle ; \draw (101,407.4) node [anchor=north west][inner sep=0.75pt] {$( 1,2,3)$}; \draw (101,327.4) node [anchor=north west][inner sep=0.75pt] {$( 1,2,4)$}; \draw (211,317.4) node [anchor=north west][inner sep=0.75pt] {$( 2,3,4)$}; \draw (155,277.4) node [anchor=north west][inner sep=0.75pt] {$( 1,3,4)$}; \draw (281,217.4) node [anchor=north west][inner sep=0.75pt] {$( 3,4,5)$}; \draw (289.5,123.4) node [anchor=north west][inner sep=0.75pt] {$( 3,4,6)$}; \draw (360,112.4) node [anchor=north west][inner sep=0.75pt] {$( 4,5,6)$}; \draw (295,77.4) node [anchor=north west][inner sep=0.75pt] {$( 3,5,6)$}; \end{tikzpicture} \end{document} \documentclass{standalone} \usepackage{tikz} \definecolor{mygray}{HTML}{ACACAC} \definecolor{myred}{HTML}{E6194B} \definecolor{mygreen}{HTML}{3CB44B} \definecolor{myblue}{HTML}{4363D8} \definecolor{mypink}{HTML}{F032E6} \definecolor{myorange}{HTML}{F58231} \definecolor{mypurple}{HTML}{7F00FF} \definecolor{mybrown}{HTML}{954535} \begin{document} \begin{tikzpicture}[scale=0.5] \fill[mygray] (0, 0.5) -- (1,0) -- (2,0.5) -- (1,1) -- cycle; \fill[mygray] (0, 3) -- (1,2.5) -- (2,3) -- (1,3.5) -- cycle; \fill[mygray] (1, 3.5) -- (2,3) -- (3,3.5) -- (2,4); \fill[mygray] (2, 3) -- (3,2.5) -- (4,3) -- (3,3.5); \fill[mygray] (4, 5.5) -- (5,5) -- (6,5.5) -- (5,6); \fill[mygray] (5, 8.5) -- (6,8) -- (7,8.5) -- (6,9); \fill[mygray] (6, 8) -- (7,7.5) -- (8,8) -- (7,8.5); \fill[mygray] (4, 8) -- (5,7.5) -- (6,8) -- (5,8.5); \foreach \z in {0,1,2,3} { \foreach \x in {0,...,\z} { \pgfmathsetmacro\y{\x * 0.5} \pgfmathsetmacro\zy{\z * 0.5} \pgfmathsetmacro\zz{\z * 2.5} \draw (\x, \y + 0.5 + \zz) -- (\x+\x+1, \zz) -- (\x+\z+2, \zz+0.5+\zy-\y); \draw (0, \zz+0.5) -- (\z+1, \zz+\zy+1) -- (\z+\z+2,\zz+0.5); } } \draw[dashed, mygray] (0,0.5) -- (0, 8); \draw[dashed, mygray] (1,0) -- (1, 7.5); \draw[dashed, mygray] (2,0.5) -- (2, 8); \draw[dashed, mygray] (3,2.5) -- (3, 7.5); \draw[dashed, mygray] (4,3) -- (4, 8); \draw[dashed, mygray] (5,5) -- (5, 7.5); \draw[dashed, mygray] (6,5.5) -- (6, 8); \filldraw (0, 0.5) circle (2pt); \node[anchor=east] at (0, 0.5) {$(1,2,3)$}; \filldraw (0, 3) circle (2pt); \node[anchor=east] at (0, 3) {$(1,2,4)$}; \filldraw (1, 3.5) circle (2pt); \node[anchor=south east] at (1.5, 3.5) {$(1,3,4)$}; \filldraw (2, 3) circle (2pt); \node[anchor=north] at (2, 2.7) {$(2,3,4)$}; \draw[myblue, thick] (7,8) -- (5, 5.5) -- (3, 6.5); \draw[myorange, thick] (4,8.5) -- (4, 6) -- (3, 3) -- (2, 3.5); \end{tikzpicture} \end{document}
2412.10602v2
http://arxiv.org/abs/2412.10602v2
Spectral Properties of Positive Definite Matrices over Symmetrized Tropical Algebras and Valued Ordered fields
\documentclass[11pt]{amsart} \usepackage[english]{babel} \usepackage[colorinlistoftodos,bordercolor=orange,backgroundcolor=orange!20,linecolor=orange,textsize=small]{todonotes} \usepackage{filecontents} \usepackage[useregional]{datetime2} \usepackage{fullpage} \usepackage{caption} \usepackage{subcaption} \captionsetup[subfigure]{subrefformat=simple,labelformat=simple} \usepackage{amsmath} \usepackage{amsthm} \usepackage{hyperref} \usepackage{cleveref} \usepackage{centernot} \usepackage{blkarray} \usepackage{float} \usepackage[font=footnotesize,labelfont=bf]{caption} \usepackage{tikz} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \usetikzlibrary{matrix,shapes,arrows,positioning} \definecolor{burntorange}{cmyk}{0,0.52,1,0} \usepackage{stackengine} \stackMath \usepackage{booktabs} \usepackage{matlab-prettifier} \providecommand{\arxiv}[1]{\href{http://www.arXiv.org/abs/#1}{arXiv:#1}} \newcommand{\tropprod}{\mathop{}} \newcommand{\bigtprod}{\mathop{{\prod}^{}}} \newcommand{\bigtsum}{\mathop{{\sum}^{\oplus}}} \newcommand{\tsum}{\sum^{\oplus}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{property}[theorem]{Property} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{xca}[theorem]{Exercise} \newtheorem{assumption}[theorem]{Assumption} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{\roman{enumi})} \usepackage[foot]{amsaddr} \makeatletter \renewcommand{\email}[2][]{ \@ifnotempty{#1}{\g@addto@macro\emails{\textrm{(#1)}\space}} \g@addto@macro\emails{#2}} \makeatother \usepackage{graphicx} \newcommand\smallO{ \mathchoice {{\scriptstyle\mathcal{O}}} {{\scriptstyle\mathcal{O}}} {{\scriptscriptstyle\mathcal{O}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} } \newcommand{\new}[1]{{\em #1}} \renewcommand\thesubfigure{(\Alph{subfigure})} \newcommand{\bbfamily}{\fontencoding{U}\fontfamily{bbold}\selectfont} \DeclareMathAlphabet{\mathbbold}{U}{bbold}{m}{n} \newcommand{\zero}{\mathbbold{0}} \newcommand{\unit}{\mathbbold{1}} \newcommand{\zeror}{\mathbbold{0}} \newcommand{\unitr}{\mathbbold{1}} \newcommand{\mv}{m} \newcommand{\support}{\operatorname{supp}} \newcommand{\C}{\mathbb{C}} \newcommand{\rel}{\mathcal{R}} \newcommand{\vl}{\mathrm{val}} \newcommand{\R}{\mathbb R} \newcommand{\vall}{\mathrm v} \newcommand{\sval}{\mathrm{sv}} \newcommand{\svP}{\mathbf{P}} \newcommand{\pbool}{\mathrm{Res}} \newcommand{\Val}{\vall} \newcommand{\dc}{\mathrm{dc}} \newcommand{\Dc}{\mathrm{Dc}} \newcommand{\smax}{\mathbb{S}_{\max}} \newcommand{\rmax}{\mathbb{R}_{\max}} \newcommand{\tmax}{\mathbb{T}_{\max}} \newcommand{\bmax}{\mathbb{B}_{\max}} \newcommand{\bmaxs}{{\mathbb B}_{{\mathrm s}}} \newcommand{\LL}{\mathbb{L}} \newcommand{\PF}{\mathcal{P}_{\!\mathrm{f}}}\newcommand{\Sp}{\mathfrak{S}} \newcommand{\Y}{\mathsf{Y}} \newcommand{\X}{\mathsf{X}} \newcommand{\Sv}{\mathrm{Sv}} \newcommand{\G}{{\mathcal G}} \newcommand{\per}{\mathrm{per}} \newcommand{\sdet}{{\mathop{\mathrm{det}}}_{\mathrm{s}}} \newcommand{\sper}{{\mathop{\mathrm{per}}}_{\mathrm{s}}} \newcommand{\psd}{\operatorname{\mathsf{TPSD}}} \newcommand{\pd}{{\operatorname{\mathsf{TPD}}}} \newcommand{\upd}{{\operatorname{\mathsf{UTP}}}} \newcommand{\ps}{P_{A}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\weak}{\prec^{\mathrm{w}}} 
\newcommand{\F}{\mathbb{F}} \newcommand{\mult}{\mathrm{mult}} \newcommand{\card}{\mathrm{card}} \newcommand{\sat}{\mathrm{sat}} \newcommand{\elf}{b} \newcommand{\balance}{\,\nabla\,} \newcommand{\notbalance}{\!\centernot{\,\balance}} \newcommand{\Pn}{\normalize{P}} \newcommand{\bp}{\bf{P}} \newcommand{\normalize}[1]{\,\overline{\!{#1}}} \newcommand{\surpass}{\trianglelefteq} \newcommand{\leqsign}{\leq} \newcommand{\geqsign}{\geq} \newcommand{\lsign}{<} \newcommand{\nlsign}{\not\lsign} \newcommand{\nleqsign}{\not\leqsign} \newcommand{\gsign}{>} \newcommand{\A}{\mathcal{A}} \newcommand{\T}{\mathcal{T}} \newcommand{\formE}{\mathfrak{E}} \newcommand{\formF}{\mathfrak{F}} \newcommand{\nul}{\mathrm{Null}} \newcommand{\K}{\mathbb{K}} \newcommand{\adj}{\mathrm{adj}} \newcommand{\spec}{\rho_{\max}} \newcommand{\graph}{\mathcal G} \newcommand{\Ab}{\mathbf{A}} \newcommand{\ab}{\mathbf{a}} \newcommand{\ext}{\mbox{$\bigwedge$}} \newcommand{\cycle}{\sigma} \newcommand{\gpath}{p} \newcommand{\permutation}{\pi} \newcommand{\Azero}{\underline{A}} \usepackage[scr=boondox,scrscaled=1.05]{mathalfa} \newcommand{\trop}[1][]{\ifthenelse{\equal{#1}{}}{ \mathbb{T} }{ \mathbb{T}(#1) }} \usepackage{amssymb} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \renewcommand{\succeq}{\succcurlyeq} \renewcommand{\le}{\leq} \renewcommand{\ge}{\geq} \newcommand{\botelt}{\bot} \newcommand{\topelt}{\top} \newcommand{\morphism}{\mu} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \DeclareMathOperator*{\lc}{\mathsf{lc}} \DeclareMathOperator{\uval}{ldeg} \newcommand{\coloneqq}{:=} \newcommand{\morphismsys}{\varphi} \newcommand{\resfield}{\mathscr{k}} \newcommand{\hahnseries}[2]{#1[[t^{#2}]]} \newcommand{\puiseuxseries}[1]{#1\{\{t\}\}} \newcommand{\semiring}{\mathcal{A}} \newcommand{\extension}{\mathcal{E}} \newcommand{\semiringvee}{\mathcal{A}^{\vee}} \newcommand{\tangible}{\mathcal{T}} \newcommand{\tangiblezero}{\mathcal{T}_{\zero}} \newcommand{\vfield}{\mathcal{K}}\newcommand{\rfield}{\mathcal{L}}\newcommand{\subgroup}{\mathcal G} \newcommand{\vgroup}{\Gamma} \newcommand{\vring}{\mathscr{O}} \newcommand{\videal}{\mathscr{M}} \newcommand{\res}{\operatorname{res}} \newcommand{\hyper}{\mathcal{H}} \newcommand{\rcfield}{\vfield} \newcommand{\angular}{\mathrm{ac}} \newcommand{\xsec}{\mathrm{cs}} \newcommand{\sign}{\mathrm{sgn}} \newcommand{\oag}{\Gamma} \newcommand{\doag}{\Gamma} \newcommand{\skewproductstar}[2]{#1{\rtimes}{#2}} \begin{document} \title{Spectral Properties of Positive Definite Matrices over Symmetrized Tropical Algebras and Valued Ordered fields} \author{Marianne Akian$^{\, 1}$} \author{Stephane Gaubert$^{\, 2}$} \author{Dariush Kiani$^{\, 3}$} \author{Hanieh Tavakolipour$^{\, 4}$} \address[$1,2$]{Inria and CMAP, Ecole polytechnique, CNRS, Institut Polytechnique de Paris} \address[$3,4$]{Amirkabir University of Technology, Department of Mathematics and Computer Science} \email[$1$]{[email protected]} \email[$2$]{[email protected]} \email[$3$]{[email protected]} \email[$4$]{[email protected]} \thanks{$(3,4)$ The study of third and forth authors was funded by Iran National Science Foundation (INSF) (Grant No. 
99023636).} \thanks{$(4)$ This work began when the fourth author was a postdoc at Inria and CMAP, Ecole polytechnique, CNRS, Institut Polytechnique de Paris} \date{\today} \maketitle \begin{abstract} We investigate the properties of positive definite and positive semi-definite symmetric matrices within the framework of symmetrized tropical algebra, an extension of tropical algebra adapted to ordered valued fields. We focus on the eigenvalues and eigenvectors of these matrices. We prove that the eigenvalues of a positive (semi)-definite matrix in the symmetrized tropical setting coincide with its diagonal entries. Then, we show that the images under the valuation of the eigenvalues of a positive definite matrix over a valued nonarchimedean ordered field coincide with the eigenvalues of an associated matrix in the symmetrized tropical algebra. Moreover, under a genericity condition, we characterize the images of the eigenvectors under the map keeping track both of the nonarchimedean valuation and of the sign, showing that they coincide with tropical eigenvectors in the symmetrized algebra. These results offer new insights into the spectral theory of matrices over tropical semirings, and provide combinatorial formul\ae\ for log-limits of eigenvalues and eigenvectors of parametric families of real positive definite matrices. \end{abstract} \subjclass[2020]{Primary 15A18, 12J15, 12J25, 15A80, 16Y60; Secondary 14T10, 16Y20} \keywords{Positive definite matrices; eigenvalues; eigenvectors; tropical algebra; max-plus algebra; symmetrized tropical semiring; hyperfields; valued fields; valuations; ordered fields.} \setcounter{tocdepth}{3} \section{Introduction} \subsection{Motivation} Tropical algebra has been introduced by several authors under various names, such as max-plus algebra or max algebra, and it has opened up new pathways in mathematical research, particularly in areas requiring a combinatorial or optimization-focused approach, but also in algebraic geometry, see for instance~\cite{baccelli1992synchronization,butkovivc2010max,viro2001dequantization,itenberg2009tropical,maclagan2015introduction}. The operations in tropical algebra over real numbers, denoted here $\rmax$, involve taking the maximum of real numbers in place of addition and using standard addition in place of multiplication. The absence of a negation and of term cancellation in traditional tropical algebra motivated the introduction in \cite{maxplus90b} of the symmetrized tropical algebra $\smax$, as an extension of $\rmax$, introducing a symmetry playing the role of a negation. There, this semiring was used as a tool to solve systems of linear equations. It also has numerous implications, particularly in the study of matrices, eigenvalues, eigenvectors, and polynomials, see for instance \cite{baccelli1992synchronization,cramer-guterman,adi,tavakolipour2021}. A related construction, called the real tropical hyperfield or the signed tropical hyperfield, was considered in the framework of hyperfields with the aim of studying real algebraic geometry, see \cite{viro2010hyperfields,viro2001dequantization}, and recent studies of this hyperfield include \cite{baker2018descartes,Lorsch22,gunn,gunn2}. Finally, $\smax$, with its associated partial order relations, can also be seen as a semiring system as in \cite{Rowen2,AGRowen}. Positive (semi-)definite symmetric matrices are of particular interest due to their role in various mathematical and engineering applications, such as stability analysis, optimization problems, and systems theory.
In \cite{yu2015tropicalizing}, Yu defined and characterized unsigned tropical positive definite matrices. In \cite{tropicalization}, the authors used $\smax$ to define signed tropical positive semi-definite symmetric matrices (see~\Cref{def:psd} for the definition), and gave in~\cite[Theorem 4.2]{tropicalization} a characterization of positive semi-definite matrices which involves ``minors'' of principal submatrices of size $1$ or $2$ (only). In classical algebra, the properties of positive definite matrices, particularly their eigenvalues and eigenvectors, are well understood and have been extensively studied. One of the aims of this paper is to introduce and study the eigenvalues and eigenvectors of tropical positive definite matrices in the context of symmetrized tropical algebra. Tropical algebra is intimately related to the notion of valuation over a field. Indeed, a valuation can be seen as a ``morphism'' from a field to the tropical algebra $\rmax$, and to make this morphism property rigorous, one can use the concepts of hyperfields or semiring systems (see for instance \cite{baker2018descartes,Rowen2,AGRowen}). Valuations are related to asymptotics, and the role of tropical algebra in asymptotics was recognized by Maslov \cite[Ch. VIII]{maslov1987methodes}, see also \cite{kolokoltsov2013idempotent}, and by Viro \cite{viro2001dequantization}. Valuations are also a way to define notions of tropical geometry, see for instance \cite{itenberg2009tropical,maclagan2015introduction}. Valuations with general ordered groups of values can also be considered, together with the associated tropical algebra or hyperfield or semiring system. As said before, the symmetrized tropical algebra $\smax$ can be seen as a semiring system and is related to the signed tropical hyperfield. The latter extends the tropical hyperfield, and its elements correspond to the signed elements in $\smax$, which form the whole set $\smax^\vee$. Then, signed valuations serve as morphisms from ordered valued fields to $\smax^\vee$. To any element of the field, they assign its valuation while also keeping track of its sign. Signed valuations are useful in the understanding of real algebraic geometry~\cite{Jell2020}. They are also useful in understanding usual optimization problems \cite{allamigeon2020tropical} and allow one to define tropical optimization problems, using signed tropical positive semi-definite symmetric matrices \cite{tropicalization}. When applied to polynomials, signed valuations reveal the ``signed roots''~\cite{gunn,gunn2,tavakolipour2021}. Applying the concepts and characterizations of eigenvalues and eigenvectors of tropical positive definite matrices over $\smax$, we will be able to characterize the signed valuations of the eigenvalues and eigenvectors of a positive definite matrix over a real closed field. \subsection{Main results} Our primary contribution is the proof that, in $\smax$, the eigenvalues of a positive (semi)-definite matrix are given by its diagonal entries, see~\Cref{sec:eig}. This result offers practical computational advantages, as it simplifies the determination of eigenvalues in the symmetrized tropical setting. We build upon the results presented in \cite{tavakolipour2021}, and especially \Cref{coro2-uniquefact}, to demonstrate that the characteristic polynomial of a positive definite matrix over $\smax$ admits a unique factorization.
This result helps us to define the multiplicity of the eigenvalues of such a matrix and to show that the multiplicity of any eigenvalue coincides with the number of its occurrences as a diagonal element of the matrix. Some notions of generalized eigenvectors associated to the eigenvalues over $\rmax$ have already been investigated in the literature, in particular in the work of Izhakian and Rowen~\cite{izhakianmatrix3} and in the works of Nishida and co-authors, see \cite{Nishida2020,Nishida2021,nishida2021independence}. In \Cref{eig_vec}, we define a (generalized) notion of geometric eigenvalue and eigenvector over $\smax$. Moreover, in \Cref{smaxeigenvector-ws}, we introduce the refined concepts of weak and strong eigenvectors. This offers more tools for analyzing the algebraic structure of matrices, and allows us in some cases to determine eigenvectors using the adjoint matrix (see \Cref{spec-eig-vector}). Using these tools, we identify candidates for all the eigenvectors of a positive definite matrix over $\smax$ (see \Cref{coro-unique-eigen}). Furthermore, in \Cref{subsec:kleen}, we characterize these candidate eigenvectors using the Kleene star operation. Such a characterization may be thought of as a generalization of the notion of eigenvector over $\rmax$ introduced in \cite{Nishida2020}. Then, in \Cref{sec-generic}, we show that generically these candidate eigenvectors are the unique eigenvectors. Finally, in \Cref{sec:apps}, we show that, generically, the signed valuations of the eigenvalues and eigenvectors of a positive definite matrix over a real closed field coincide with the signed tropical eigenvalues and eigenvectors of the signed valuation of the matrix. This can be compared to a characterization of the asymptotic behavior of the eigenvalues and eigenvectors of a parametric family of positive definite matrices over an ordered field, using the eigenvalues and eigenvectors of a positive definite matrix over $\smax$. This result provides new insights into the nature of eigenvalues and eigenvectors of usual positive (semi-)definite matrices. We also show a Gershgorin-type bound for the eigenvalues of a positive definite real matrix. \bigskip The paper is structured as follows. We begin with a review in \Cref{sec-elem} of the basic principles of tropical and symmetrized tropical algebra, and in \Cref{sec-matpol} of the definitions and known or elementary properties of the algebraic constructions within these frameworks, such as matrices, polynomials, eigenvalues and eigenvectors. We then explore in \Cref{sec:3} the concepts of positive (semi)-definite matrices over $\smax$, detailing the theoretical developments and methods used to derive our results. In particular, we characterize the eigenvalues of these matrices over $\smax$. In \Cref{sec:3p}, we give several characterizations of the eigenvectors of these matrices over $\smax$. Finally, in \Cref{sec:apps}, we examine the relationship between the eigenvalues of matrices over ordered fields and their counterparts in symmetrized tropical algebra. We finish by illustrating the results with some numerical examples on the eigenvalues and eigenvectors of parametric families of positive definite matrices. \section{Definitions and elementary properties}\label{sec-elem} In this section, we review some necessary definitions, notations and results on max-plus (or tropical) algebra and on its symmetrized version. See for example \cite{baccelli1992synchronization, butkovivc2010max} for more information.
\subsection{Preliminaries of max-plus or tropical algebra $\rmax$ and $\tmax$} \begin{definition} Let $\R$ be the set of real numbers. The tropical semiring, $\rmax$, is the set $\R \cup \{-\infty\}$ equipped with the addition $(a,b)\mapsto a\oplus b:=\max\{a,b\}$, with the zero element $\zero:=-\infty$ and the multiplication $(a,b)\mapsto a\odot b:=a+b$, with the unit element $\unit:=0$. \end{definition} \begin{example} Over $\rmax$, we have \begin{itemize} \item $1 \oplus -2 = 1$ \item $6 \odot 2 = 8$ \item $2^{ 3}= 2\odot 2\odot 2= 6$. \end{itemize} \end{example} We shall also use the more general family of tropical semifields defined as follows, see also \cite{tavakolipour2021}. \begin{definition} \label{tmax} Given a (totally) ordered abelian group $(\vgroup,+,0,\leq)$, we consider an element $\botelt$ satisfying $\botelt \leq a$ for all $a\in\vgroup$, and which does not belong to $\vgroup$. Then, the {\em tropical semifield} over $\vgroup$, denoted $\tmax(\vgroup)$, is the set $\vgroup \cup\{\botelt\}$, equipped with the addition $(a,b) \mapsto a\oplus b:= \max(a,b)$, with zero element $\zero:=\botelt$, and multiplication $(a,b)\mapsto a\odot b:= a+b$, and $\botelt \odot a=a \odot\botelt= \botelt$, for all $a,b\in \vgroup$, so with unit $\unit:=0$. \end{definition} In particular, the zero element $\botelt$ is absorbing. The $n$-th power of an element $a\in\vgroup$ for the multiplicative law $\odot$, $a^n:=a \odot \ldots \odot a$ ($n$-times), coincides with the sum $a+ \dots + a$ ($n$-times), also denoted by $na$. We say that the group $\vgroup$ is {\em divisible}, if for all $a\in \vgroup$ and for all positive integers $n$, there exists $b$ such that $nb=a$. In this case, $b$ is unique (since $\vgroup$ is ordered). We say that $\vgroup$ is {\em trivial} if it is equal to $\{0\}$. When $\vgroup=\R$, we recover $\rmax$. \subsection{Preliminaries of symmetrized max-plus algebra $\smax$} Here we recall the construction and basic properties of the symmetrized tropical semiring. We refer the reader to \cite{baccelli1992synchronization,gaubert1992theorie,cramer-guterman} for information at a more detailed level in the case where $\vgroup=\R$. We describe here the generalization to the case of any ordered group $\vgroup$, which was presented in \cite{tavakolipour2021}. Let us consider the set $\tmax^2:=\tmax\times \tmax$ endowed with operations $\oplus$ and $\odot$: \[(a_1,a_2) \oplus (b_1,b_2) =(a_1\oplus b_1, a_2 \oplus b_2),\] \[(a_1,a_2) \odot (b_1,b_2) = (a_1 b_1 \oplus a_2 b_2, a_1 b_2 \oplus a_2 b_1),\] with $\zero:=(\botelt,\botelt)$ as the zero element and $\unit:=(0, \botelt)$ as the unit element. Define the following three operators on $a= (a_1, a_2)\in \tmax^2$: \begin{center} \begin{tabular}{ll} $\ominus a = (a_2, a_1)$ & minus operator $\tmax^2\to \tmax^2$;\\ $|a| = a_1 \oplus a_2$ & absolute value $\tmax^2\to \tmax$;\\ $a^{\circ} = a\ominus a = (|a|, |a|)$& balance operator $\tmax^2\to \tmax^2$. \end{tabular} \end{center} The operator $\ominus$ satisfies all the properties of a minus sign except that $a\ominus a$ is not zero except when $a=\zero$. We also define the \new{balance relation} over $\tmax^2$ as follows: \[ (a_1, a_2) \balance (b_1, b_2) \Leftrightarrow a_1 \oplus b_2 = a_2 \oplus b_1\enspace .\] It satisfies \begin{equation}a \balance b \Leftrightarrow a \ominus b\balance \zero\enspace .\end{equation} Balance relation is reflexive, symmetric, and compatible with addition and multiplication of $\tmax^2$. 
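For concreteness, here is a small worked computation with these operations and operators, for $\vgroup=\R$ and with pairs chosen purely for illustration:
\[(3,1)\oplus(2,5)=(3,5),\qquad (3,1)\odot(2,5)=(3\odot 2\oplus 1\odot 5,\; 3\odot 5\oplus 1\odot 2)=(6,8),\]
\[\ominus(3,1)=(1,3),\qquad |(3,1)|=3,\qquad (3,1)^{\circ}=(3,3)\enspace,\]
and, for the balance relation, $(3,1)\balance(3,2)$ since $3\oplus 2=3=1\oplus 3$, whereas $(3,1)\notbalance(2,5)$ since $3\oplus 5=5\neq 1\oplus 2=2$.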
However, it is not an equivalence relation, because it lacks the expected transitive property. For example (for $\vgroup=\R$), we have $(1,2) \balance (3,3)$, $(3,3) \balance (1,1)$, but $(1,2)\notbalance(1,1)$. We then consider the following relation $\mathcal{R}$ on $\tmax^2$ which refines the balance relation: \[(a_1,a_2) \mathcal{R} (b_1,b_2) \Leftrightarrow \begin{cases} a_1 \oplus b_2 = a_2 \oplus b_1& \;\text{if}\; a_1 \neq a_2, \;b_1 \neq b_2,\\ (a_1,a_2)=(b_1,b_2)& \text{otherwise.} \end{cases} \] \begin{example} To better understand the difference between $\balance$ and $\rel$, in the following table we compare them on a few examples (with $\vgroup=\R$). \[\begin{array}{c|cccc} &(1,4)&(4,1)&(4,4)&(3,3)\\ \hline (1,4)&\balance,\rel&\notbalance, \centernot\rel& \balance,\centernot\rel&\notbalance, \centernot\rel\\ (4,1)&\notbalance, \centernot\rel&\balance,\rel&\balance,\centernot\rel&\notbalance, \centernot\rel\\ (4,4)&\balance, \centernot\rel&\balance, \centernot\rel&\balance, \rel&\balance, \centernot\rel\\ (3,3)&\notbalance, \centernot\rel&\notbalance, \centernot\rel&\balance, \centernot\rel&\balance, \rel \end{array}\] \end{example} One can check that $\mathcal{R}$ is transitive, and so is an equivalence relation on $\tmax^2$. It is also compatible with the operations $\oplus$ and $\odot$ of $\tmax^2$, with the relation $\balance$, and with the operators $\ominus$, $|\cdot|$ and $^{\circ}$, which can therefore be defined on the quotient $\tmax^2 / \mathcal{R}$. \begin{definition}[$\smax$]\label{def:sym_def} The \new{symmetrized tropical semiring} is the quotient semiring $(\tmax^2 / \mathcal{R},\oplus,\odot)$ and is denoted by $\smax$ or $\smax(\vgroup)$. We denote by $\zero:=\overline{(\botelt, \botelt)}$ the zero element and by $\unit:=\overline{(0, \botelt )}$ the unit element. We also use the notation $ab$ for $a\odot b$ with $a,b\in\smax$, and $a^n$ for the product $a\odot \cdots \odot a$ ($n$ times). \end{definition}\label{def:smax} We distinguish three kinds of equivalence classes (\cite{gaubert1992theorie}): \begin{center} \begin{tabular}{ll} $\overline{(c, \botelt)} = \{(c,a_2)\mid a_2<c\}, \; c\in \vgroup$ & positive elements \\ $\overline{(\botelt,c)}=\{(a_1, c)\mid a_1<c\}, \; c\in \vgroup$ & negative elements \\ $\overline{(c,c)}=\{(c,c)\}, \; c\in \vgroup\cup\{\botelt\}$ & balance elements. \end{tabular} \end{center} Then, we denote by $\smax^{\oplus}$, $\smax^{\ominus}$ and $\smax^{\circ}$ the set of positive or zero elements, the set of negative or zero elements, and the set of balance elements, respectively. Therefore, we have: \[\smax^{\oplus}\cup \smax^{\ominus}\cup \smax^{\circ}=\smax, \] where the pairwise intersection of any two of these three sets is reduced to $\{\zero\}$. \begin{property} The subsemiring $\smax^{\oplus} $ of $\smax$ can be identified with $\tmax$, by the morphism $c\mapsto \overline{(c, \botelt)}$. This allows one to write $a \ominus b$ instead of $\overline{(a, \botelt)} \oplus \overline{(\botelt,b)}$. \end{property} \begin{property}\label{prop-modulus} Using the above identification, the absolute value map $a\in \smax \mapsto |a|\in \smax^\oplus$ is a morphism of semirings. \end{property} \begin{definition}[Signed tropical elements]\label{signed_elements} The elements of $\smax^\vee:=\smax^{\oplus} \cup \smax^{\ominus}$ are called \new{signed tropical elements}, or simply \new{signed elements}. They are either positive, negative or zero. \end{definition} \begin{remark} The elements of $\smax^{\circ}$ play the role of the usual zero element.
Moreover, the set $\smax \setminus \smax^{\circ}=\smax^\vee\setminus\{\zero\}$ is the set of all invertible elements of $\smax$. \end{remark} \subsection{Relations over $\smax$} \begin{definition}\label{partial_order} We define the following relations, for $a,b \in \smax$: \begin{enumerate} \item $a \preceq b \iff b = a \oplus c \;\text{for some}\;c \in \smax \iff b=a\oplus b$ ; \item $a \prec b \iff a \preceq b, \; a \neq b$ ; \item $a \preceq^{\circ} b \iff b = a \oplus c \;\text{for some}\;c \in \smax^{\circ}$. \end{enumerate} \end{definition} The relations $\preceq$ and $\preceq^\circ$ in \Cref{partial_order} are partial orders (they are reflexive, transitive and antisymmetric). \begin{example} We have the following inequalities: \begin{enumerate} \item $\zero \preceq \ominus 2 \preceq \ominus 3,\;\zero \preceq 2 \preceq 3,\; 2 \preceq \ominus 3$ ; \item $3$ and $\ominus 3$ are not comparable for $\preceq$ ; \item $1\preceq^{\circ} 2^{\circ}$,\;$\ominus 1\preceq^{\circ} 2^{\circ}$,\; $\ominus 2 \preceq^{\circ} 2^{\circ}$ ; \item $3$ and $2^{\circ}$ are not comparable for $\preceq^{\circ}$. \end{enumerate} \end{example} \begin{property}\label{property-preceq}Let $a,b \in \smax$. \begin{enumerate} \item If $|a| \prec |b|$, then $a \oplus b = b$. \item If $a \preceq b$, $|a|=|b|$ and $b \in \smax^{\vee}$, then $a=b$. \item If $b \in \smax^{\vee}$, then $a \preceq^{\circ} b $ iff $a=b$. \item If $|a| \preceq |b|$ and $b \in \smax^{\circ}$, then $a \preceq^{\circ} b $ and so $a \preceq b$. \item $a \oplus b =b \Rightarrow |a| \preceq |b|$. \end{enumerate} \end{property} In \cite{tropicalization}, the authors equiped $\smax$ with other ``order'' relations, by using a relation on $\tmax^2$ and then quotienting, and used them to define positive semidefinite matrices over $\smax$. We give the definition directly on $\smax$ in \Cref{partial_order2} below, while replacing the notations $\preceq$ and $\prec$ of \cite{tropicalization} by the notations $\leqsign$ and $\lsign$, since we already used the notation $\preceq$ for the natural order of $\smax$. \begin{definition}\label{partial_order2}\cite{tropicalization}\ For $a,b \in \smax$: \begin{enumerate} \item $a \leqsign b \iff b \ominus a \in \smax^{\oplus}\cup \smax^{\circ}$ ; \item $a \lsign b \iff b \ominus a \in \smax^{\oplus}\setminus\{\zero\}$. \end{enumerate} \end{definition} \begin{example} Using the relations in \Cref{partial_order2}, we have the following properties: \begin{enumerate} \item $\ominus 3 \lsign \ominus 2 \lsign \zero \lsign 2 \lsign 3$\enspace; \item $\leqsign$ is not antisymmetric on $\smax$: $2 \leqsign 3^{\circ}$ and $3^{\circ} \leqsign 2$\enspace; \item $\leqsign$ is not transitive on $\smax$: $2 \leqsign 3^{\circ}, 3^{\circ} \leqsign 1$ but $2 \nleqsign 1$\enspace. \end{enumerate} \end{example} The relation $\leqsign$ is reflexive, but it is not antisymmetric, nor transitive on $\smax$, as shown in the examples above. However, on $\smax^{\vee}$, $\leqsign$ is a total order and $\lsign$ coincides with ``$\leqsign$ and $\neq$'', see \Cref{order_new} and \Cref{order-exp} below. \begin{proposition}\cite{tropicalization}\label{order_new} Let $a, b , c \in \smax$. \begin{enumerate} \item $a \leqsign a$ for any $a \in \smax$ ($\leqsign $ is reflexive); \item $a \leqsign b$ and $b \leqsign a$ if and only if $a \balance b$; hence $\leqsign $ is antisymmetric on $\smax^{\vee}$; \item If $a \leqsign b$ and $b \leqsign c$ and $b \in \smax^{\vee}$ then $a \leqsign c$; hence $\leqsign $ is transitive on $\smax^{\vee}$. 
\end{enumerate} \end{proposition} \begin{property}\label{order-exp} If we identify the elements of $\smax^\vee$ with elements of $\R$ by the map $\ominus a\mapsto -\exp(a)$, $\oplus a\mapsto \exp(a)$ and $\zero\mapsto 0$, then we get that the relations $ \leqsign $ and $\lsign$ on $\smax^\vee$ are the usual order $\leq$ and the strict relation $<$ on $\R$. Moreover, on $\smax^\oplus$, the relations $ \leqsign $ and $\lsign$ are equivalent to the relations $\preceq$ and $\prec$, and to the usual order and its strict version on the set $\tmax$. \end{property} We also have the following properties, which can easily be deduced from \Cref{partial_order2}. \begin{lemma}\label{product_order} Let $a, b, c\in \smax^{\vee}$. Then we have \begin{enumerate} \item $a \leqsign b, \;c \geqsign \zero \Rightarrow a c \leqsign b c\enspace,$ \item $a \lsign b, \;c \gsign \zero \Rightarrow a c \lsign b c\enspace.$ \hfill \qed \end{enumerate} \end{lemma} \begin{lemma}\label{modulus_order} Let $a, b\in \smax^{\vee}$. Then $a^{ 2} \lsign b^{ 2}$ if and only if $|a| \lsign |b|$. Similarly, $a^{ 2} \leqsign b^{ 2}$ if and only if $|a| \leqsign |b|$. \end{lemma} \begin{proof} Any $a\in \smax^{\vee}$ can be written as $a=|a|$ or $a=\ominus |a|$, using the above identifications. So $a^{ 2}=|a|^{ 2}$, where $|a|\in \smax^\oplus$. Then, we only need to check the equivalences of the lemma for $a,b\in \smax^\oplus$. Since in $\smax^\oplus$, $\lsign$ and $\leqsign$ are equivalent to $\prec$ and $\preceq$, respectively, or to the usual order and its strict version on $\tmax$, we obtain the result of the lemma. \end{proof} \begin{property} \label{equality_balance} The relation $\balance$ satisfies the following properties, for $a,b \in \smax$: \begin{enumerate} \item\label{pro1} We have $a \balance b \Leftrightarrow a \ominus b\balance \zero$. \item If $a,b \in \smax^{\vee}$ and $a \balance b$, then we have $a=b$. \item If $b \in \smax^{\vee}$, $a \balance b$ and $a\preceq b$, then we have $a=b$. \end{enumerate} \end{property} \section{Preliminaries on matrices and polynomials over $\smax$}\label{sec-matpol} \subsection{Matrices} Given any semiring $(\mathcal{S},\oplus,\zero,\odot,\unit)$ (such as $\rmax$, $\tmax$ or $\smax$), we denote by $\mathcal{S}^{n}$ and $\mathcal{S}^{n\times m}$ the sets of $n$-dimensional vectors and of $n\times m$ matrices with entries in $\mathcal{S}$. We also use the notation $ab$ for $a\odot b$ with $a,b\in \mathcal{S}$, and $a^n$ for the product $a\odot \cdots \odot a$ ($n$ times). Then, the finite sum $\tsum$ and product $\prod$ notations, and the matrix multiplication, addition and power operations over $\mathcal{S}$ are defined as in usual linear algebra. For example, if $A=(a_{ij}) \in \mathcal{S}^{n\times m}$ and $B=(b_{ij}) \in \mathcal{S}^{m\times p}$, then $A B\in \mathcal{S}^{n\times p}$ and has entries $(A B)_{ij}=\tsum_k a_{ik} b_{kj}$. Also, for any $n\geq 1$, we denote by $\zero$, and call the zero vector, the $n$-dimensional vector with all entries equal to $\zero$, and by $I$, the $n\times n$ identity matrix over $\mathcal{S}$ with diagonal entries equal to $\unit$ and off-diagonal entries equal to $\zero$. Finally, for a square $n\times n$ matrix $A$, we denote $A^{ 2}=A A$, etc., with $ A^{ 0}$ equal to the identity matrix $I$. For any positive integer $n$, denote by $[n]$ the set $\{1, \ldots, n\}$. We denote by $\Sp_{n}$ the set of all permutations of $[n]$.
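As a small illustration of the matrix operations above over $\rmax$ (the entries are chosen purely for illustration), one may check that \[ A=\begin{pmatrix} 0 & 2\\ \zero & 1\end{pmatrix},\qquad B=\begin{pmatrix} 1 & 0\\ 3 & \zero\end{pmatrix},\qquad A B=\begin{pmatrix} 0\odot 1\oplus 2\odot 3 & 0\odot 0\oplus 2\odot\zero\\ \zero\odot 1\oplus 1\odot 3 & \zero\odot 0\oplus 1\odot\zero\end{pmatrix} =\begin{pmatrix} 5 & 0\\ 4 & \zero\end{pmatrix}\enspace, \] so that, for instance, $(AB)_{11}=\max(0+1,\,2+3)=5$.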
Recall that a \new{cycle} in $[n]$ is a sequence $\cycle=(i_{1},i_{2},\ldots , i_{k})$ of different elements of $[n]$, with the convention that $i_{k+1}=i_1$, and that any permutation $\permutation$ of $[n]$ can be decomposed uniquely into disjoint cycles which cover $[n]$, meaning that $\permutation(i_\ell)= i_{\ell+1}$ for all $\ell\in [k]$ and all cycles $\cycle=(i_{1},i_{2},\ldots , i_{k})$ of $\permutation$. Let $A =(a_{ij}) \in \mathcal{S}^{n \times n}$ be a matrix. For any permutation $\permutation$ of $[n]$, the weight of $\permutation$ associated to $A$ is given by \[ w(\permutation)=\bigtprod_{i \in[n]}a_{i\permutation(i)}\enspace ,\] and the weight of any cycle $\cycle=(i_{1},i_{2},\ldots , i_{k})$ associated to $A$ is given by \[w(\cycle)=\bigtprod_{\ell\in [k]} a_{i_\ell i_{\ell+1}}\enspace .\] Then, as in usual algebra, the weight of a permutation is the product of the weights of its cycles. \begin{definition} \label{per}The \new{permanent} of a matrix $A=(a_{ij}) \in \mathcal{S}^{n \times n}$ is \[\per(A)= \bigtsum_{\permutation \in \Sp_{n}} \bigtprod_{i \in[n]}a_{i\permutation(i)} =\bigtsum_{\permutation \in \Sp_{n}} w(\permutation) \enspace . \] \end{definition} When the semiring $\mathcal{S}$ has a negation map, we can also define the determinant. We only give the definition in $\smax$. \begin{definition}[Determinant]\label{det_s} Let $A=(a_{ij})$ be an $n \times n$ matrix over $\smax$. The \new{determinant} is \[\det(A):= \bigtsum_{\permutation \in \Sp_n} \mathrm{sgn}(\permutation) \bigtprod_{i\in [n]} a_{i\permutation(i)} = \bigtsum_{\permutation \in \Sp_n} \mathrm{sgn}(\permutation) w(\permutation) \enspace ,\] where \[\mathrm{sgn}(\permutation)=\begin{cases} \unit & \;\text{if}\;\permutation \;\text{is even};\\ \ominus \unit & \text{otherwise}. \end{cases}\] \end{definition} This allows one to define also the adjugate matrix. \begin{definition}[Adjugate]\label{def-adjugate} The adjugate matrix of $A=(a_{ij}) \in \smax^{n \times n}$ is the matrix $A^{\mathrm{adj}}\in \smax^{n\times n}$ with entries: \[ (A^{\mathrm{adj}})_{i,j} := (\ominus 1)^{i+j} \det(A[\hat{j},\hat{i}])\enspace , \] where $A[\hat{j},\hat{i}]$ is the matrix obtained after eliminating the $j$-th row and the $i$-th column of $A$. \end{definition} For any matrix $A$ with entries in $\smax$, we denote by $|A|$ the matrix with entries in $\tmax$ obtained by applying the modulus map $|\cdot|$ entrywise. \begin{remark}\label{perdet} For $A \in (\smax)^{n \times n}$, we have $|\det(A)|=\per(|A|)$. \end{remark} \begin{lemma}[\protect{\cite{akian2009linear}}]\label{adj} Let $A \in (\smax^\vee)^{n \times n}$. Then the following balance relation holds \[A A^{\mathrm{adj}} \succeq^{\circ} \det(A) I .\] In particular if $\det(A) \balance \zero$ then $A A^{\mathrm{adj}} \balance \zero$. \end{lemma} We now recall some results about the solution of linear systems over $\smax$. \begin{theorem}[\cite{maxplus90b,cramer-guterman}]\label{cramer} Let $A \in (\smax)^{n \times n}$ and $b \in (\smax)^{n}$, then \begin{itemize} \item every solution $x \in (\smax^{\vee})^{n}$ of the linear system $A x \balance b$ satisfies the relation \begin{equation}\label{cram}\det(A) x \balance A^{\adj} b\enspace. \end{equation} \item If $A^{\adj} b \in (\smax^{\vee})^{n}$ and $\det(A)$ is invertible, then \[\tilde{x} = \det(A)^{ -1} A^{\adj} b\] is the unique solution of $A x \balance b$ in $(\smax^{\vee})^{n}$. 
\end{itemize} \end{theorem} \begin{remark}\label{ith_cramer} Let $D_{x_i}$, be the determinant of the matrix obtained by replacing the $i$-th column of $A$ with $b$. Then $(A^{\adj}b)_i=D_{x_i}$. When $\det(A)$ is invertible, \Cref{cram} is equivalent to $(\forall i) \;x_i \balance \det(A)^{-1}D_{x_i}$, where the right hand side of this equation is exactly the classical $i$-th Cramer formula. \end{remark} \begin{theorem}[\cite{maxplus90b,cramer-guterman}]\label{existence_signed} Let $A \in (\smax)^{n \times n}$. Assume that $\det(A)\neq \zero$ (but possibly $\det(A) \balance \zero$). Then for every $b \in (\smax)^{n}$ there exists a solution $x \in (\smax^{\vee})^n$ of $A x \balance b$, which can be chosen in such a way that $|x|=|\det(A)|^{ -1} |A^{\adj} b|$. \end{theorem} \begin{theorem}[Homogeneous systems over $\smax$ \protect{\cite[Th. 6.5]{maxplus90b}, see also \cite[Th. 6.1]{cramer-guterman}}]\label{homo} Let $A \in (\smax)^{n \times n}$, then there exists a solution $x \in (\smax^{\vee})^{n}\setminus\{\zero\}$ to the linear system $A x \balance \zero$ if and only if $\det(A)\balance \zero$. \end{theorem} We shall also use the following construction. The semirings $\rmax$, $\tmax$, and $\smax$ are all topological semirings (meaning that operations are compatible with the topology), when endowed with the topology of the order $\leq$ for $\tmax$ and $\preceq$ for $\smax$. They are also idempotent meaning that $a\oplus a=a$ for all $a$, so that the sum of elements is also the supremum. They are also relatively complete for their associated partial order, meaning that the supremum of an upper bounded set always exists, or that they become complete when adding a top element to them. In what follows, $\mathcal{S}$ will be $\rmax$, $\tmax$, and $\smax$, but it can be any idempotent semiring which is relatively complete for the associated partial order (such that $a\leq b$ if $a\oplus b=b$). \begin{definition}(Kleene's star)\label{star_smax} The Kleene's star of a matrix $A \in \mathcal{S}^{n \times n}$, denoted $A^*$, is defined as the sum $\tsum_{k\geq 0}A^{ k}$, if the series converges to a matrix over $\mathcal{S}$. Recall that $ A^{ 0}=I$ the identity matrix. \end{definition} To any matrix $A =(a_{ij}) \in \mathcal{S}^{n \times n}$, we associate the weighted directed graph $\graph(A)$ with set of nodes $[n]$, set of edges $E=\big\{(i,j): a_{ij}\neq \zero,\; i,j \in [n]\big\}$, and in which the weight of an edge $(i,j)$ is $a_{ij}$. Then, a path in $\graph(A)$ of length $k\geq 1$ is a sequence $(i_1, \ldots, i_{k+1})$ such that $(i_\ell,i_{\ell+1})\in E$, for all $\ell\in [k]$, it has initial node $i_1$, final node $i_{k+1}$, and weight $\bigtprod_{\ell\in [k]} a_{i_\ell i_{\ell+1}}$. By convention, a path of length $0$ has weight $\unit$ and its initial and final nodes are equal. We say that the matrix $A$ is irreducible if $\graph(A)$ is strongly connected, meaning that there is a path from each node to another node. \begin{property}\label{irreducible} Let $A =(a_{ij}) \in \mathcal{S}^{n \times n}$ be such that $A^*$ exists. Then, for all $i,j\in [n]$, the entry $A^*_{ij}$ is equal to the supremum of the weights of all paths with initial node $i$ and final node $j$. If $A$ is irreducible, then, $A^*$ has no zero entries. \end{property} \subsection{Polynomials over $\rmax$, $\tmax$ and $\smax$} \label{sec-polynomials} The following definitions are the same as in usual algebra. 
\begin{definition}[Formal polynomial] Given any semiring $(\mathcal{S},\oplus,\zero,\odot,\unit)$ (such as $\rmax$, $\tmax$ or $\smax$), a (univariate) \new{formal polynomial} $P$ over $\mathcal{S}$ is a sequence $(P_k)_{k\in \mathbb{N}}$ of elements of $\mathcal{S}$, where $\mathbb{N} $ is the set of natural numbers (including $0$), such that $P_k=\zero$ for all but finitely many values of $k$. We denote a formal polynomial $P$ as a formal sum, $P = \tsum_{k\in \mathbb{N}} P_{k} \X^{k}$, and the set of formal polynomials as $\mathcal{S}[\X]$. This set is endowed with the following two internal operations, which make it a semiring: the coefficient-wise sum, $(P \oplus Q)_k=P_k \oplus Q_k$; and the Cauchy product, $(P Q)_k= \tsum_{0 \leq i \leq k}P_i Q_{k-i}$. A formal polynomial reduced to a sequence of one element is called a \new{monomial}. \end{definition} When the semiring $\mathcal{S}$ is $\smax$, we apply the absolute value map $|\cdot|$, the balance relation $\balance$, and the relations of \Cref{partial_order} and \Cref{partial_order2} to formal polynomials coefficient-wise. \begin{example} $P=\X^4 \oplus \unit^{\circ}\X^{3} \oplus \unit^{\circ}\X^2 \oplus \unit^{\circ} \X \ominus \unit $ and $Q= \X^4 \ominus \unit$ are two examples of formal polynomials over $\smax$, and we have $Q\preceq^\circ P$ and $Q\lsign P$. \end{example} \begin{definition}[Degree, lower degree and support] The \new{degree} of $P$ is defined as \begin{equation}\label{deg}\deg(P):=\sup\{k \in \mathbb{N} \mid P_k \neq \zeror\},\end{equation} and the \new{lower degree} of $P$ is defined as \begin{equation}\label{valuation}\uval (P) := \inf\{k \in \mathbb{N}\;|\;P_k \neq \zeror\}.\end{equation} In the case where $P = \zeror$, we have $\deg(P)=0$ and $\uval(P) = +\infty$. We also define the \new{support} of $P$ as the set of indices of the non-zero elements of $P$, that is $\mathrm{supp}(P):=\{k\in \mathbb{N} \mid P_k \neq \zeror\}$. We say that a formal polynomial has \new{full support} if $P_k\neq \zeror$ for all $k$ such that $\uval(P) \leq k \leq \deg(P)$. \end{definition} \begin{definition}[Polynomial function] To any $P \in \mathcal{S}[\X]$, with degree $n$ and lower degree $\mv$, we associate a \new{polynomial function} \begin{equation}\label{widehat_p}\widehat{P}: \mathcal{S} \rightarrow \mathcal{S} \; ; \; x \mapsto \widehat{P}(x)= \bigtsum_{\mv\leq k\leq n}P_{k} x^{ k}.\end{equation} We denote by $\PF(\smax)$ the set of polynomial functions $\widehat{P}$. \end{definition} We now consider the special case where $\mathcal{S}$ is one of the semirings $\rmax$, $\tmax$ or $\smax$. From now on, we shall assume that $\vgroup$ is {\bf divisible}. \subsubsection{Roots of polynomials over $\rmax$ and $\tmax$} When the semiring $\mathcal{S}$ is $\rmax$ or $\tmax$, the addition in \eqref{widehat_p} is the maximization. Roots of a polynomial are defined as follows. \begin{definition}[$\rmax$ and $\tmax$-roots and their multiplicities] \label{def_corners} Given a formal polynomial $P$ over $\rmax$ (resp.\ $\tmax$), and its associated polynomial function $\widehat{P}$, the non-zero $\rmax$ (resp.\ $\tmax$)-\new{roots} of $P$ or $\widehat{P}$ are the points $x$ at which the maximum in the definition \eqref{widehat_p} of $\widehat{P}$, seen as a supremum of monomial functions, is attained at least twice (i.e.\ by at least two different monomials). Then, the multiplicity of $x$ is the difference between the largest and the smallest exponent of the monomials of $P$ which attain the maximum at $x$.
If $P$ has no constant term, then $\zero$ is also a $\rmax$ (resp.\ $\tmax$)-root of $P$, and its multiplicity is equal to the lower degree of $P$. \end{definition} Non-zero $\rmax$-roots of a formal polynomial $P$ are also the points of non-differentiability of $\widehat{P}$, and their multiplicity is also the change of slope of the graph of $\widehat{P}$ at these points. The following theorem states the fundamental theorem of tropical algebra which was shown by Cuninghame--Green and Meijer for $\rmax$ and stated in \cite{tavakolipour2021} for $\tmax$. \begin{theorem}[\cite{cuninghame1980algebra} for $\rmax$] Every formal polynomial $P \in \rmax[\X]$ (resp.\ $\tmax[\X]$) of degree $n$ has exactly $n$ roots $c_1\geq \cdots \geq c_n$ counted with multiplicities, and the associated polynomial function $\widehat{P}$ can be factored in a unique way as \[\widehat{P}(x)= P_n (x \oplus c_1) \cdots (x \oplus c_n) \enspace. \] \end{theorem} The following result was shown for $\rmax$ in \cite{baccelli1992synchronization} and stated for $\tmax$ in \cite{tavakolipour2021}. \begin{lemma}[See~\protect{\cite[p.\ 123]{baccelli1992synchronization}} for $\vgroup=\R$]\label{roots_poly} Consider a formal polynomial $P$ over $\rmax$ (resp.\ $\tmax$) of lower degree $\mv$ and degree $n$. \begin{itemize} \item If $P$ is of the form $P=P_n (\X \oplus c_1)\cdots (\X \oplus c_n)$ (where $c_i$ maybe equal to $\zeror$), then $P$ has full support and satisfies: \begin{equation} \label{concavepoly} P_{n-1}-P_n \geq P_{n-2}-P_{n-1} \geq \cdots \geq P_{\mv}-P_{\mv +1}.\end{equation} \item Conversely, if $P$ satisfies \eqref{concavepoly}, then $P$ has full support, the numbers $c_i \in \rmax$ defined by \[c_i := \begin{cases} P_{n-i} - P_{n-i+1}& 1 \leq i \leq n-\mv;\\ \zeror & n-\mv <i \leq n. \end{cases} \] are such that $c_1 \geq \cdots \geq c_n$ and $P$ can be factored as $P=P_n (\X \oplus c_1)\cdots (\X \oplus c_n)$. \end{itemize} If $P$ satisfies one of the above conditions, we shall say that $P$ is {\em factored}. \end{lemma} Over $\rmax$, the condition \eqref{concavepoly} means that the coefficient map from $\N$ to $\R\cup\{-\infty\}$ is concave. \subsubsection{Roots of polynomials over $\smax$} Let us denote by $\smax^\vee[\X]$ the subset of $\smax[\X]$ of formal polynomials over $\smax$ with coefficients in $\smax^\vee$. In \cite{tavakolipour2021}, we only considered roots of such polynomials and their multiplicities. Since characteristic polynomials of matrices need not have coefficients in $\smax^\vee$, one may need to generalize these notions. For this purpose, we shall consider below a notion equivalent to the notion of ``corner root'' introduced in \cite[Section 6]{adi} for a general semiring with a symmetry and a modulus, which is then used to define eigenvalues of matrices, and which applies in particular to the case of $\smax$ semiring. \begin{definition}[$\smax$ or $\smax^\vee$-roots and factorization] \label{def-smaxroots} Suppose that $P\in \smax[\X]$. Define $P^{\vee}$ as the element of $\smax^{\vee}[\X]$ such that for all $i\in \N$, $P^{\vee}_i=P_i$ if $P_i\in \smax^{\vee}$ and $P^{\vee}_i=\zero$ otherwise. Then, the $\smax$-\new{roots} (resp.\ $\smax^{\vee}$-\new{roots}) of $P$ are the signed elements $r \in \smax^{\vee}$ for which $\widehat{P}(r) \balance \zero$ (resp.\ $\widehat{P}(r)=\widehat{P^{\vee}}(r) \balance \zero$). When $P\in\smax^{\vee}[\X]$, $\smax^\vee$-\new{roots} of $\widehat{P}$ are defined as $\smax$-roots or equivalently $\smax^{\vee}$-roots of $P$. 
\end{definition} \begin{example}\label{tpsd_eig} \begin{enumerate} \item Let $P = \X^2 \ominus \X \oplus \unit^{\circ}$. Then there are infinitely many $\smax$-roots of $P$, since any $r$ with $|r|\leq \unit$ is a $\smax$-root of $P$. However, to be a $\smax^\vee$-root of $P$ (or a corner root in the sense of \cite[Section 6]{adi}), one needs that $x^2\ominus x = x^2 \ominus x \oplus \unit^{\circ}\balance \zero$, and the only solution is $\unit$. \item Let $P=\X^3\oplus \X^2\oplus 2^\circ \X\oplus 2^\circ$. Then, again any $r$ with $|r|\leq \unit$ is a $\smax$-root of $P$. However, $P$ has no $\smax^{\vee}$-root. \end{enumerate} \end{example} \begin{definition}(Factorable polynomial function) We say that the polynomial function $\widehat{P}$ can be factored (into linear factors) if there exist $r_i \in \smax^{\vee}$, for $i=1, \ldots, n$, such that \[ \widehat{P}(x)= P_n (x \ominus r_1) \cdots (x \ominus r_n)\enspace . \] \end{definition} \begin{theorem}[Sufficient condition for factorization, see \protect{\cite[Th.\ 4.4]{tavakolipour2021}}]\label{suf_cond} Let ${P} \in \smax^\vee[\X]$. A sufficient condition for $\widehat{P}$ to be factored is that the formal polynomial $|{P}|$ is factored (see \Cref{roots_poly}). In that case, we have $\widehat{P}(x)= P_n (x \ominus r_1) \cdots (x \ominus r_n)$, with $n=\deg(P)$, $r_i\in\smax^\vee$, $i\in [n]$, such that $r_i P_{n-i+1}= \ominus P_{n-i}$ for all $i\leq n-\uval(P)$ and $r_i= \zero$ otherwise. Moreover, $|r_1|\geq \cdots\geq |r_n|$ are the $\tmax$-roots of $|{P}|$, counted with multiplicities. \end{theorem} \begin{corollary}[Sufficient condition for unique factorization, see \protect{\cite[Cor.\ 4.6]{tavakolipour2021}}]\label{coro-uniquefact} Let ${P} \in \smax^\vee[\X]$. Assume that $|{P}|$ is factored (see \Cref{roots_poly}), and let the $r_i$ be as in \Cref{suf_cond}. If all the $r_i$ with the same modulus are equal, or equivalently if for each $\tmax$-root $c\neq \zeror$ of $|{P}|$, $c$ and $\ominus c$ are not both $\smax^\vee$-roots of $P$, then the factorization of $\widehat{P}$ is unique (up to reordering). \end{corollary} The following definition of multiplicities of roots of polynomials was introduced in \cite{baker2018descartes} in the framework of hyperfields, and adapted in \cite[\S 5]{tavakolipour2021} to the more general framework of semiring systems. We write it below over $\smax$. Note that it only applies to polynomials with coefficients in $\smax^\vee$. \begin{definition}[Multiplicity of $\smax^\vee$-roots, compare with \cite{baker2018descartes} and \protect{\cite[\S 5]{tavakolipour2021}}] \label{def-mult-BL} For a formal polynomial $P\in \smax^\vee[\X]$, and a scalar $r\in \smax^\vee$, we define the \new{multiplicity} of $r$ as a $\smax^{\vee}$-root of $P$, and denote it by $\mathrm{mult}_r(P)$, as follows. If $r$ is not a root of $P$, set $\mathrm{mult}_r(P)=0$. If $r$ is a root of $P$, then \begin{equation}\label{mult}\mathrm{mult}_r(P)=1+\max\{\mathrm{mult}_r(Q)\mid Q\in \smax^\vee[\X],\; P \balance (\X \ominus r) Q\}\enspace .\end{equation} \end{definition} Characterizations of multiplicities of polynomials over $\smax$ are given in \cite{tavakolipour2021} and in the work of Gunn~\cite{gunn,gunn2}. In the special case of \Cref{coro-uniquefact}, the computations can be reduced as follows. \begin{theorem}[Multiplicities and unique factorization, see \protect{\cite[Th.\ 6.7]{tavakolipour2021}}]\label{coro2-uniquefact} Let ${P} \in \smax^\vee[\X]$ satisfy the conditions of \Cref{coro-uniquefact}.
Then the multiplicity of a $\smax^\vee$-root $r$ of $P$ coincides with the number of occurrences of $r$ in the unique factorization of $\widehat{P}$. It also coincides with the multiplicity of the $\tmax$-root $|r|$ of $|{P}|$. \end{theorem} \subsection{Eigenvalues and eigenvectors over $\rmax$, $\tmax$ and $\smax$} \subsubsection{$\tmax$-eigenvalues} When $\vgroup=\R$, the following definitions coincide with the ones used in~\cite{izhakianmatrix3,akian2016non}, for instance. Let $A=(a_{ij}) \in (\tmax)^{n \times n}$. Then, the $\tmax$-formal \new{characteristic polynomial} of $A$ is: \[ P_A:=\per ( \X I\oplus A )=\bigtsum_{k=0,\ldots,n}(P_A)_k \X^{k} \in \tmax[\X] \enspace , \] in which the expression of $\per (\X I \oplus A)$ is developed formally. Equivalently, the coefficients of $P_A$ are given by $(P_A)_k =\tsum_{I\subset [n],\; \card (I)=n-k} \per(A[I,I])$, where $A[I,I]$ is the submatrix of $A$ with rows and columns in $I$. The polynomial function $\widehat{P_A}$ associated to $P_A$ is called the $\tmax$-\new{characteristic polynomial} function of $A$. \begin{definition}[$\tmax$-algebraic eigenvalue] \label{algebraic}Let $A \in (\tmax)^{ n \times n}$. The $\tmax$-\new{algebraic eigenvalues} of $A$, denoted by $\mu_{1}(A)\geq \cdots\geq \mu_{n}(A)$, are the $\tmax$-roots of its $\tmax$-characteristic polynomial. \end{definition} The term algebraic is used here since a $\tmax$-algebraic eigenvalue $\mu$ may not satisfy the eigenvalue-eigenvector equation $A u = \mu u$ for any $u \in (\tmax)^{n},\; u \neq \zero$. Nevertheless, the maximal $\mu$ for which such a vector $u$ exists is equal to the maximal algebraic eigenvalue $\mu_{1}(A)$, and is also equal to the maximal cycle mean of $A$. The $\tmax$-characteristic polynomial function, and therefore the $\tmax$-algebraic eigenvalues, of $A \in (\tmax)^{n \times n}$ can be computed in $O(n^4)$ time \cite{burkard2003finding}, which can be reduced to $O(n^3)$ using parametric optimal assignment techniques \cite{gassner2010fast}. However, no polynomial-time algorithm is known to compute all the coefficients of the $\tmax$-formal characteristic polynomial $P_A$ (see e.g.~\cite{butkovivc2007job}). The computational complexity of computing the $\tmax$-eigenvalues can be reduced to polynomial time when considering special classes of matrices, such as symmetric matrices over $\{0,-\infty\}$, pyramidal matrices, Monge and Hankel matrices, tridiagonal Toeplitz and pentadiagonal Toeplitz matrices (see \cite{butkovivc2007job}, \cite{tavakolipour2020asymptotics}, \cite{tavakolipour2018tropical}). As said before, for a general algebraic eigenvalue $\mu$, there may not exist a vector $u \in (\tmax)^{n},\; u \neq \zero$ such that $A u = \mu u$. Generalizations of the notion of eigenvectors have been considered in \cite{izhakianmatrix3}, by replacing the equalities in $A u = \mu u$ by the conditions ``the maximum is attained at least twice''; these are handled by embedding $\tmax$ into the supertropical semiring of Izhakian \cite{IR}. More special generalizations have been considered in \cite{Nishida2020,Nishida2021,nishida2021independence}, where a constructive change of side of the terms in each equation of $A u = \mu u$ is given, which depends on the eigenvalue $\mu$. In the next section, we shall consider another extension which uses signs and thus the embedding of $\tmax$ into $\smax$.
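Before moving on, let us illustrate these notions on a small instance, with $\vgroup=\R$ and a matrix chosen purely for illustration: for \[ A=\begin{pmatrix} 3 & 1\\ 2 & 0\end{pmatrix}\in (\rmax)^{2\times 2}\enspace, \] one gets $P_A=\per(\X I\oplus A)=\X^{2}\oplus 3\X\oplus 3$, since $(P_A)_1=3\oplus 0=3$ and $(P_A)_0=\per(A)=3\odot 0\oplus 1\odot 2=3$. The maximum in $\widehat{P_A}(x)=\max(2x,\,3+x,\,3)$ is attained at least twice at $x=3$ and at $x=0$, so the $\tmax$-algebraic eigenvalues are $\mu_{1}(A)=3$ and $\mu_{2}(A)=0$; the maximal one coincides, as stated above, with the maximal cycle mean of $A$, namely $\max\big(3,\,0,\,\tfrac{1+2}{2}\big)=3$.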
\subsubsection{$\smax$-eigenvalues and $\smax$-eigenvectors}\label{subsec:eigvec} \begin{definition}[$\smax$-formal characteristic polynomial]\label{charpoly_s} The $\smax$-\new{formal characteristic polynomial} of $A \in (\smax)^{n \times n}$ is $\ps:= \det( \X I\ominus A ) \in \smax[\X]$, and its $\smax$-\new{characteristic polynomial function} is $\widehat{P}_A(x) := \det(x I\ominus A)$. \end{definition} We can also write the coefficients of $\ps$ in terms of compound matrices of $A$. \begin{definition}($k$-th compound)\label{def-compound} For $k \in [n]$, the $k$-th \new{compound} of a matrix $A \in (\smax)^{n \times n}$ is the matrix $\ext^k A \in (\mathbb{S}_{\max})^{{n\choose k} \times {n \choose k}}$ whose rows and columns are indexed by the subsets $K$ and $K'$ of $[n]$ of cardinality $k$ ($\mathrm{card}(K)=\mathrm{card}(K')=k$), and whose entries are $\bigg(\ext^k A\bigg)_{K,K'}= \det(A[K,K'])$ where $A[K,K']$ is the $k \times k$ submatrix obtained by selecting from $A$ the rows $i \in K$ and columns $j \in K'$. We also set $\ext^0 A $ to be the $1\times 1$ identity matrix. \end{definition} \begin{definition}($k$-th trace)\label{def-trk} The $k$-th trace of $A \in (\smax)^{n \times n}$ is defined as \[\tr_{k} A =\tr\bigg(\ext^k A\bigg) = \bigtsum_{\substack{K \subset [n]\\\mathrm{card}(K)=k}} \det(A[K,K])\enspace ,\] for all $k \in [n]$, where $\ext^k A$ is the $k$-th compound of $A$, see \Cref{def-compound}. \end{definition} \begin{lemma}\label{comp_charpoly} For $A \in (\smax)^{n \times n}$ we have \[P_A = \bigtsum_{k=0,\ldots, n} \bigg((\ominus \unit)^{n-k} \tr_{n-k}A\bigg) \X^{k}\enspace .\] \end{lemma} \Cref{charpoly} is an example of computation of the $\smax$-characteristic polynomial by using \Cref{comp_charpoly}. \begin{definition}[$\smax$ and $\smax^\vee$-algebraic eigenvalues and their multiplicity]\label{s_eig} Let $A \in (\smax)^{n \times n}$. Then, the $\smax$-roots (resp.\ $\smax^\vee$-roots) of $P_A$ (see \Cref{def-smaxroots}) are called the \new{$\smax$ (resp.\ $\smax^\vee$)-algebraic eigenvalues} of $A$. If the characteristic polynomial $P_A$ has coefficients in $\smax^\vee$, then the multiplicity of $\gamma$ as a $\smax^\vee$-root of $P_A$ is called the \new{multiplicity} of $\gamma$ as a $\smax$ (or $\smax^\vee$)-algebraic eigenvalue of $A$. \end{definition} Here, we defined two different notions of eigenvalues of a matrix over $\smax$. In \cite[Section 6]{adi}, ``eigenvalues over $\smax$'' were defined as the corner roots of the characteristic polynomial, which correspond to $\smax^\vee$-algebraic eigenvalues in our definition. \begin{definition}[$\smax$-geometric eigenvalues and eigenvectors]\label{eig_vec} Let $A \in (\smax)^{n \times n}$. Let $ v \in (\smax^\vee)^{n}\setminus\{\zero\}$ and $\gamma\in \smax^\vee$. We say that $v$ is a \new{$\smax$-eigenvector} of $A$ associated with the \new{$\smax$-geometric eigenvalue} $\gamma$ if \begin{equation}\label{smaxeigenvector} A v \balance \gamma v\enspace.\end{equation} \end{definition} Since the last equation is equivalent to $(A \ominus \gamma I) v \balance \zero$, the following property follows from the property of homogeneous systems in $\smax$ recalled in \Cref{homo}. \begin{theorem}\label{existence} Let $A\in (\smax)^{n \times n}$ and $\gamma\in \smax^\vee$. 
Then, $\gamma$ is a $\smax$-algebraic eigenvalue if and only if there exists a $\smax$-eigenvector $v\in (\smax^{\vee})^n\setminus\{\zero\}$ associated to $\gamma$: $A v\balance \gamma v\enspace.$ \hfill \qed \end{theorem} This shows that $\gamma$ is a $\smax$-geometric eigenvalue if and only if it is a $\smax$-algebraic eigenvalue, as in usual algebra. Then $\gamma$ is called a \new{$\smax$-eigenvalue}. Note however that, even when $P_A$ has coefficients in $\smax^\vee$, the multiplicity of $\gamma$ as a $\smax$-geometric eigenvalue of $A$ is difficult to define since there are several notions of independence and thus of dimension over $\smax$ (see for instance~\cite{akian2009linear}). We can weaken or strengthen the notion of $\smax$-eigenvector as follows. \begin{definition}\label{smaxeigenvector-ws} Let $A \in (\smax)^{n \times n}$ and let $\gamma$ be a $\smax$-eigenvalue. \begin{description} \item[Weak eigenvector] If $v\in (\smax)^{n}$ has at least one coordinate in $\smax^\vee\setminus\{\zero\}$ and satisfies \eqref{smaxeigenvector}, then we say that $v$ is a \new{weak $\smax$-eigenvector}. \item[Strong eigenvector] If $v\in (\smax^\vee)^{n}\setminus\{\zero\}$ satisfies $A v = \gamma v$, then we say that $v$ is a \new{strong $\smax$-eigenvector} and that $\gamma$ is a \new{strong $\smax$-geometric eigenvalue}. \end{description} \end{definition} Using the above definitions, we have that a strong $\smax$-eigenvector is necessarily a $\smax$-eigenvector, and a $\smax$-eigenvector is necessarily a weak $\smax$-eigenvector. \subsubsection{Some special $\smax$-eigenvectors}\label{spec-eig-vector} One effective approach to compute a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$ is to use the columns of the adjugate of the matrix $\gamma I \ominus A$. The following proposition states this approach. \begin{proposition}\label{lem-Bk} Suppose that $A \in (\smax)^{n \times n}$, let $\gamma$ be a $\smax$-eigenvalue of $A$ and denote \[B=\gamma I \ominus A \enspace .\] Then \begin{equation}\label{adj_vec} A \, B^{\mathrm{adj}} \balance \gamma B^{\mathrm{adj}} \enspace. \end{equation} \end{proposition} \begin{proof} Since $\gamma$ is a $\smax$-eigenvalue of $A$, using \Cref{s_eig} we have $\det(B) \balance \zero$, and by \Cref{adj}, we have \[B \, B^{\mathrm{adj}} \succeq^{\circ} \det(B) I \succeq^{\circ} \zero\enspace.\] So \[A \, B^{\mathrm{adj}} \ominus \gamma B^{\mathrm{adj}} = \ominus B \, B^{\mathrm{adj}} \balance \zero\enspace.\] Then by \Cref{equality_balance}-\eqref{pro1}, we obtain \eqref{adj_vec}. \end{proof} Property \eqref{adj_vec} implies that all the columns of $B^{\mathrm{adj}}$ with at least one entry in $ \smax^\vee\setminus\{\zero\}$ are weak $\smax$-eigenvectors associated with the $\smax$-eigenvalue $\gamma$. In usual algebra, a necessary and sufficient condition to obtain an eigenvector in this way is that the (geometric) eigenvalue be simple, or equivalently that the matrix $B$ has rank $n-1$. In $\smax$, a similar condition, namely that there exists at least one $(n-1)\times (n-1)$ minor of $B$ in $\smax^\vee\setminus\{\zero\}$, or equivalently that $B^{\mathrm{adj}}$ has at least one entry in $\smax^\vee\setminus\{\zero\}$, is sufficient to obtain one weak $\smax$-eigenvector. However, it may not be sufficient to obtain one $\smax$-eigenvector in this way. Below we give a stronger condition which is sufficient. Let $C \in \smax^{n\times n}$. In the following, by $C_{i,:}$ and $C_{:,j}$ we mean the $i$-th row of $C$ and the $j$-th column of $C$, respectively.
Moreover, $C_{i,\hat{j}}$ (resp.\ $C_{\hat{i},j}$) stands for the submatrix of $C_{i,:}$ (resp.\ $C_{:,j}$) obtained by eliminating the $j$-th column (resp.\ the $i$-th row). Recall that $C[\hat{i},\hat{j}]$ is the submatrix of $C$ obtained by eliminating the $i$-th row and the $j$-th column. \begin{theorem}[A sufficient condition for geometric simple $\smax$-eigenvalue]\label{cond_unique} Consider $A$, $\gamma$ and $B$ as in \Cref{lem-Bk}, and let $v$ be a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$. \begin{enumerate} \item Assume that there exists an entry of $B^\adj$ which is invertible, that is $B^\adj_{i,j}\in \smax^{\vee}\setminus\{\zero\}$ for some $i,j\in [n]$. Then, there exists $\lambda\in \smax^\vee\setminus\{\zero\}$ such that $v\balance \lambda B^\adj_{:,j}$. \item Assume there exists a column $j$ of $B^\adj$ that is non-zero and has only $\smax^\vee$ entries: $B^\adj_{:,j}\in (\smax^{\vee})^{n} \setminus\{\zero\}$. Then $B^\adj_{:,j}$ is a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$, and there exists $\lambda\in \smax^\vee\setminus\{\zero\}$ such that $v= \lambda B^\adj_{:,j}$. \end{enumerate} \end{theorem} \begin{proof} First, $v$ is a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$ if and only if $v$ satisfies: \begin{equation}\label{equofeigenvector} v\in (\smax^{\vee})^{n} \setminus\{\zero\}\quad\text{and} \quad B v\nabla \zero\enspace .\end{equation} Moreover if $j\in [n]$ is such that $B^\adj_{:,j}\in (\smax^{\vee})^{n} \setminus\{\zero\}$, then, by \Cref{lem-Bk}, we know that $B^\adj_{:,j}$ is a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$ and thus a solution of \eqref{equofeigenvector}. Proof of i): Let $i,j\in [n]$ be such that $B^\adj_{i,j}\in \smax^{\vee}\setminus\{\zero\}$. Denote $F:=B[\hat{j},\hat{i}]$, $b:=B_{\hat{j},i}$. Denote also by $P$ and $Q$ the permutation matrices associated to the cycles $(1,\ldots, i)$ and $(1,\ldots, j)$ respectively. Then, applying these permutations to $B$, we obtain: \begin{equation}\label{bprime} B':= QBP^{-1}=\begin{pmatrix} * & *\\ b & F\end{pmatrix} \enspace.\end{equation} Applying the corresponding permutation on $v$, we obtain $v':= P v= \begin{pmatrix} v_{i}\\ \tilde{v} \end{pmatrix}$ where $\tilde{v}$ is obtained by eliminating the $i$-th entry of $v$. Then, we have: \begin{equation}\label{main_equ1} B v\nabla \zero\Leftrightarrow B'v'\nabla \zero \Rightarrow F \tilde{v} \nabla \ominus v_i b \enspace .\end{equation} We claim that \begin{equation}\label{formula-adj2} \begin{pmatrix}\det(F)\\ \ominus F^{\adj} b \end{pmatrix} = (\ominus \unit )^{i+j} P B^\adj_{:,j} \enspace .\end{equation} Let us first assume that \eqref{formula-adj2} holds and show that any $\smax$-eigenvector $v$ associated to the $\smax$-eigenvalue $\gamma$, or equivalently any solution of \eqref{equofeigenvector}, satisfies $v\balance \lambda B^\adj_{:,j}$ for some $\lambda\in \smax^\vee\setminus\{\zero\}$. Indeed, by \eqref{main_equ1}, any solution $v$ of \eqref{equofeigenvector} necessarily satisfies the equation $F \tilde{v} \nabla \ominus v_i b$. Then, applying the first part of \Cref{cramer} (Cramer's theorem), we deduce that $\det(F) \tilde{v} \balance F^{\adj} (\ominus v_i b) = \ominus v_i F^{\adj} b$. Since $B^\adj_{i,j}\in \smax^{\vee}\setminus\{\zero\}$, it is invertible, and it follows for instance from \eqref{formula-adj2} that $\det(F)= (\ominus \unit )^{i+j} B^\adj_{i,j}$, so $\det(F)$ is invertible as well. So, $\tilde{v} \balance \det(F)^{ -1} (\ominus v_i F^{\adj} b)$.
Using \eqref{formula-adj2}, we obtain that $Pv \balance \det(F)^{ -1} v_i \begin{pmatrix}\det(F)\\ \ominus F^{\adj} b \end{pmatrix}= \det(F)^{ -1} v_i (\ominus \unit )^{i+j} P B^\adj_{:,j} $. Therefore $v\balance \det(F)^{ -1} v_i (\ominus \unit )^{i+j} B^\adj_{:,j} $. In particular, if $v_i=\zero$, then $v\balance \zero$ and so $v$ is not in $(\smax^{\vee})^{n} \setminus\{\zero\}$, a contradiction with \eqref{equofeigenvector}. Therefore $v_i\in \smax^\vee\setminus\{\zero\}$, and we get that any solution $v$ of \eqref{equofeigenvector} satisfies $v\balance \lambda B^\adj_{:,j}$ for $\lambda=\det(F)^{ -1} v_i (\ominus \unit )^{i+j} \in \smax^\vee\setminus\{\zero\}$. Let us now show our claim, that is \eqref{formula-adj2}. First, we have that $(B')^\adj= (P^{-1})^{\adj}B^\adj Q^\adj = \det(P)^{-1} P B^\adj \det(Q) Q^{-1}$ since $P$ and $Q$ are invertible matrices (see for instance \cite[Cor.\ 2.35]{adi}). Therefore, we have $(B')^\adj_{:,1}= (\ominus \unit)^{i+j} (P B^\adj)_{:, j}$, which is the right-hand side of \eqref{formula-adj2}. The coordinates of $w=(B')^\adj_{:,1}$ are $w_k=(B')^\adj_{k,1}=(\ominus \unit)^{k+1} \det (B'[\hat{1},\hat{k}])$, $k\in [n]$. Using \eqref{bprime}, we have clearly $w_1=\det(F)$. For $k\in [n-1]$, let us denote by $F_k$ the matrix obtained from $F$ after replacing its $k$-th column with $b$. Then, by \Cref{ith_cramer}, we have that $(F^\adj b)_k= \det(F_k)$. Let $B'[\hat{1},:]$ be the matrix obtained from $B'$ after eliminating the first row; we have $B'[\hat{1},:]= \begin{pmatrix}b & F\end{pmatrix}$. Since $b$ is the first column of this matrix, we have that $F_k$ can also be obtained from the matrix $B'[\hat{1},:]$ after eliminating the $(k+1)$-th column, then performing $k-1$ swaps from the first column to the $k$-th column. So, $\det(F_k)=(\ominus \unit )^{k-1}\det(B'[\hat{1},\widehat{k+1}])$ and therefore, we have \[ \ominus (F^\adj b)_k=\ominus \det(F_k)= (\ominus \unit)^{k}\det(B'[\hat{1},\widehat{k+1}])= (B')^\adj_{k+1,1}\enspace .\] Proof of ii): If now $B^\adj_{:,j}\in (\smax^\vee)^n\setminus\{\zero\}$, with $(B^\adj)_{i,j}\neq \zero$, then Point i) shows that $v\balance \lambda B^\adj_{:,j}$ for $\lambda=\det(F)^{ -1} v_i (\ominus \unit )^{i+j} \in \smax^\vee\setminus\{\zero\}$. Since both sides of the balance equations are in $\smax^\vee$, the second part of \Cref{equality_balance} implies the equality, and so we get that $v= \lambda B^\adj_{:,j}$, which finishes the proof of Point ii). Note that this second part of the theorem can also be shown using the second part of \Cref{cramer} (Cramer's theorem). \end{proof} \begin{theorem}\label{cond_existence} Let $A$, $\gamma$ and $B$ be as in \Cref{lem-Bk}. Assume that there exists an entry of $B^\adj$ which is non-zero, that is $B^\adj_{i,j}\neq \zero$ for some $i,j\in [n]$. Then there exists a $\smax$-eigenvector $v$ associated to the $\smax$-eigenvalue $\gamma$ such that $|v|=|B^{\adj}_{:,j}|$ and $v_i=B^{\adj}_{i,j}$ for all $i\in [n]$ satisfying $B^{\adj}_{i,j}\in\smax^\vee$. \end{theorem} \begin{proof} Using the same arguments and notations as in the proof of the first point of \Cref{cond_unique}, we have that $v$ is a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$ if and only if the vector $\tilde{v}$ satisfies \eqref{main_equ1}. Moreover, $\det(F)= (\ominus \unit )^{i+j} B^\adj_{i,j}$, so that $\det(F)\neq \zero$.
Applying \Cref{existence_signed}, we get that for any $v_i\in\smax^\vee$, there exists $\tilde{v}$ satisfying \eqref{main_equ1} and $|\tilde{v}|=|\det(F)|^{-1} |F^\adj ( \ominus v_i b)|$. Using again the same arguments as in the proof of the first point of \Cref{cond_unique}, we deduce that $|Pv|=|v_i| |\det(F)|^{-1} |P B^\adj_{:,j}|$. Since $P$ is a permutation matrix, choosing $v_i= |\det(F)|$, we obtain $|v|= |B^\adj_{:,j}|$. Now by the first point of \Cref{cond_unique}, we know that there exists $\lambda\in\smax^\vee\setminus\{\zero\}$ such that $v\balance \lambda B^\adj_{:,j}$. If there exists $i\in [n]$ such that $B^{\adj}_{i,j}\in\smax^\vee\setminus\{\zero\}$, then by the second point of \Cref{equality_balance}, we have $v_i=\lambda B^{\adj}_{i,j}$ and since $|v_i|=|B^{\adj}_{i,j}|$, we deduce that $\lambda=\unit$ or $\ominus\unit$. Replacing $v$ by $\lambda^{-1} v$, we get that $v$ is a $\smax$-eigenvector associated to the $\smax$-eigenvalue $\gamma$ such that $v\balance B^\adj_{:,j}$ and $|v|= |B^\adj_{:,j}|$. Using again the second point of \Cref{equality_balance}, we deduce that $v_i=B^{\adj}_{i,j}$ for all $i\in [n]$ such that $B^{\adj}_{i,j}\in\smax^\vee$. \end{proof} \section{Tropical positive (semi-)definite matrices and their eigenvalues}\label{sec:3} Tropical positive semi-definite matrices were introduced in \cite{yu2015tropicalizing} and generalized in \cite{tropicalization}. Here, we also consider tropical positive definite matrices. \subsection{Tropical positive (semi-)definite matrices} \begin{definition}[$\pd$ and $\psd$ matrices, compared with \cite{tropicalization}]\label{def:psd} Let $A=(a_{ij} ) \in (\smax^\vee)^{n \times n}$ be a symmetric matrix. It is said to be \new{tropical positive definite} ($\pd$) if \begin{equation}\label{def_pd}\zero \lsign x^{T} A x,\; \text{that is}\; x^{T} A x \in \smax^{\oplus}\setminus\{\zero\},\; \text{for all}\; x \in (\smax^{\vee})^{n}\setminus\{\zero\}\enspace.\end{equation} If the strict inequality required in \Cref{def_pd} is weakened to $\zero \leqsign x^{T} A x$, then $A$ is said to be \new{tropical positive semi-definite} ($\psd$). \end{definition} Throughout the paper, the sets of $n\times n$ $\pd$ and $\psd$ matrices over $\smax^{\vee}$ are denoted by $\pd_n(\smax^{\vee})$ and $\psd_n(\smax^{\vee})$, respectively. Therefore we have $\pd_n(\smax^{\vee}) \subseteq \psd_n(\smax^{\vee})$. We recall in \Cref{def_psd1} below the characterization of tropical positive semi-definite matrices shown in \cite{tropicalization}. \begin{theorem}[\cite{tropicalization}]\label{def_psd1} The set $\psd_{n}(\smax^\vee)$ is equal to the set \[ \{A=(a_{ij}) \in (\smax^{\vee})^{n \times n} : \zero \leqsign a_{ii}\; \forall i \in [n],\; a_{ij}=a_{ji} \;\text{and}\; a_{ij}^{ 2} \leqsign a_{ii} a_{jj}\; \forall i,j \in [n], i \neq j\}\enspace . \] \end{theorem} Using \Cref{def_psd1}, one can obtain the following similar result for $\pd$ matrices. We give a detailed proof in the Appendix. \begin{theorem}\label{def_pd1} The set $\pd_{n}(\smax^\vee)$ is equal to the set \[ \{A=(a_{ij}) \in (\smax^{\vee})^{n \times n} : \zero \lsign a_{ii}\; \forall i \in [n],\; a_{ij}=a_{ji} \;\text{and}\; a_{ij}^{ 2} \lsign a_{ii} a_{jj}\; \forall i,j \in [n], i \neq j\}\enspace . \] \end{theorem} Note that, in the above characterizations of $\psd$ and $\pd$ matrices, the inequalities involve diagonal entries or the square of non-diagonal entries, which are all elements of $\smax^{\oplus}$.
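As a small illustration of \Cref{def_pd1}, take $\vgroup=\R$ and consider the symmetric matrix
\[ A = \begin{pmatrix} 3 &\ominus 2& 1\\ \ominus 2&2&1\\ 1&1&1 \end{pmatrix}\enspace ,\]
which will reappear in \Cref{ex_eig1} below. Its diagonal entries satisfy $\zero \lsign a_{ii}$ for all $i$, and moreover $a_{12}^{ 2}=4 \lsign 5 = a_{11} a_{22}$, $a_{13}^{ 2}=2 \lsign 4 = a_{11} a_{33}$ and $a_{23}^{ 2}=2 \lsign 3 = a_{22} a_{33}$, so that $A \in \pd_{3}(\smax^{\vee})$.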
\subsection{The $\smax$-characteristic polynomial of $\psd$ and $\pd$ matrices} The following result will help us to compute the characteristic polynomial. \begin{theorem}\label{trace} Let $A \in \psd_n(\smax^{\vee})$ with the diagonal elements $d_n \leqsign \cdots \leqsign d_1$. Then, we have $\tr_k A= \bigtprod_{i\in [k]}d_i \;\text{or} \;\tr_kA =( \bigtprod_{i\in [k]}d_i)^{\circ}$, so $\zero \leqsign \tr_k A$, and for $A \in \pd_n(\smax^{\vee})$ we have $\tr_kA= \bigtprod_{i\in [k]}d_i$, so $\zero \lsign \tr_k A$. \end{theorem} The proof follows from the following lemmas. \begin{lemma}\label{diag_cycle} Let $A=(a_{ij}) \in \psd_n(\smax^{\vee})$. Let $\cycle$ be a cycle $(j_{1},j_{2},\ldots ,j_{k})$ of length $k>1$ in $[n]$ and let us denote by $[\cycle]=\{j_{1},j_{2},\ldots ,j_{k}\}$ the set of its elements. Then \begin{enumerate} \item $|w(\cycle)| \leqsign \bigtprod_{i\in [\cycle]}a_{ii}.$ \item Moreover, if $A\in \pd_n(\smax^{\vee})$ we have $|w(\cycle)| \lsign \bigtprod_{i\in [\cycle]}a_{ii}$. \end{enumerate} \end{lemma} \begin{proof} {\bf Proof of Part 1}: Let $\cycle$ be the cycle $(j_{1},j_{2},\ldots ,j_{k})$. Since $A \in \psd_n(\smax^{\vee})$, by \Cref{def_psd1} we have \[\begin{array}{ccc} a_{j_1j_2}^{ 2}&\leqsign& a_{j_1j_1} a_{j_2j_2}\enspace,\\ a_{j_2j_3}^{ 2}&\leqsign &a_{j_2j_2} a_{j_3j_3}\enspace,\\ &\vdots&\\ a_{j_kj_1}^{ 2}&\leqsign& a_{j_kj_k} a_{j_1j_1}\enspace.\end{array} \] So, by the first part of \Cref{product_order}, we have $ a_{j_1j_2}^{ 2} a_{j_2j_3}^{ 2} \cdots a_{j_kj_1}^{ 2} \leqsign a_{j_1j_1}^{ 2} a_{j_2j_2}^{ 2} \cdots a_{j_kj_k}^{ 2}\enspace$. Finally, using \Cref{modulus_order}, \begin{eqnarray} |a_{j_1j_2} a_{j_2j_3} \cdots a_{j_kj_1}|&\leqsign& |a_{j_1j_1} a_{j_2j_2} \cdots a_{j_kj_k}| \nonumber\\\label{mar2} &=& a_{j_1j_1} a_{j_2j_2} \cdots a_{j_kj_k} \enspace,\nonumber \end{eqnarray} where the last equality is due to the positivity of the diagonal elements of $A$.\\ {\bf Proof of Part 2}: The proof of Part 2 is obtained similarly, by using the definition of $\pd$ matrices instead of $\psd$ matrices and the second part of \Cref{product_order}. \end{proof} \begin{lemma}\label{diag_cycle2} Let $A=(a_{ij}) \in \psd_n(\smax^{\vee})$. Let $\permutation$ be any permutation of $[n]$. Then \begin{enumerate} \item $|w(\permutation)| \leqsign \bigtprod_{i\in [n]}a_{ii},$ with equality when $\permutation$ is the identity permutation. \item Moreover, if $A\in \pd_n(\smax^{\vee})$ and $\permutation$ is different from the identity permutation, we have $|w(\permutation)| \lsign \bigtprod_{i\in [n]}a_{ii}.$ \end{enumerate} \end{lemma} \begin{proof} Since every permutation of $[n]$ can be decomposed uniquely into disjoint cycles which cover $[n]$, Part 1 of \Cref{diag_cycle} is true for any permutation, when replacing $[\cycle]$ by $[n]$. Moreover, if the permutation is different from identity, then applying Part 2 of \Cref{diag_cycle} to all the cycles of length $>1$, we get Part 2 of \Cref{diag_cycle2}. \end{proof} \begin{proof}[Proof of \Cref{trace}] Let $k \in [n]$ and $A \in \psd_n(\smax^{\vee})$. For any subset $K$ of $[n]$ with cardinality $k$, the submatrix $A[K,K]$ is a positive semi-definite matrix over $\smax^\vee$. Applying Part 1 of \Cref{diag_cycle2} to this matrix, we obtain that $|\det(A[K,K])|=\bigtprod_{i\in K}a_{ii}$. Then, by \Cref{def-trk}, and using that $d_1\geq \cdots\geq d_n$, we get that $|\tr_kA|= \bigtprod_{i\in [k]}d_i$. Since $\bigtprod_{i\in [k]}d_i$ is one of the summands in the formula for $\tr_kA$, we have $\tr_k A\succeq \bigtprod_{i\in [k]}d_i$.
Therefore, we conclude that there are two possible cases: $\tr_kA= \bigtprod_{i\in [k]}d_i \;\text{or} \;\tr_kA =( \bigtprod_{i\in [k]}d_i)^{\circ}$. Also, for $A \in \pd_n(\smax^{\vee})$, and any subset $K$ of $[n]$ with cardinality $k$, the submatrix $A[K,K]$ is a positive definite matrix over $\smax^\vee$. Therefore, applying Part 2 of \Cref{diag_cycle2} to this matrix, we obtain that there is no permutation $\permutation$ of $K$ such that $|w(\permutation)|=\bigtprod_{i\in K}a_{ii}$, other than the identity permutation. Hence, $\det(A[K,K])=\bigtprod_{i\in K}a_{ii}$. Since all the terms $\det(A[K,K])$ are in $\smax^\oplus$, we get that $\tr_kA$ is also in $\smax^\oplus$, and so $\tr_kA= \bigtprod_{i\in [k]}d_i$. \end{proof} \begin{corollary}\label{char_pd} For $A=(a_{ij}) \in \pd_n(\smax^{\vee})$ with the diagonal elements $d_n \leqsign \cdots \leqsign d_1$ we have \[ P_A = \bigtsum_{k=0}^{n} \bigg((\ominus \unit)^{n-k} (\bigtprod_{i\in [n-k]}d_i)\bigg)\X^{k}\enspace .\] \end{corollary} \begin{example}\label{balanc_char} Let $A= \begin{pmatrix} \unit&\unit\\ \unit&\unit \end{pmatrix} \in \psd_2(\smax^{\vee})$. By \Cref{comp_charpoly}, the formal characteristic polynomial of $A$ is $P_A = \X^2 \ominus \X \oplus \unit^{\circ}$,\; which shows that the formal characteristic polynomial associated to $\psd$ matrices may have balanced coefficients. In \Cref{tpsd_eig}, we considered the $\smax$-roots and $\smax^{\vee}$-roots of $P_A$, which are the same as the $\smax$- and $\smax^{\vee}$-algebraic eigenvalues of $A$. \end{example} \begin{remark} In usual algebra, semi-definite matrices which are not definite have the eigenvalue $0$; here, this is replaced by the fact that the characteristic polynomial has a balanced constant coefficient and that there is an infinite number of $\smax$-eigenvalues. \end{remark} \begin{example}\label{charpoly} Let $A = \begin{pmatrix} 3 &2& 1\\ 2&2&1\\ 1&1&1 \end{pmatrix}$. We have $A \in \pd_{3}(\smax^{\vee})$ and $\ext^1 A =\begin{pmatrix} 3 &2& 1\\ 2&2&1\\ 1&1&1 \end{pmatrix} $, \[\begin{array}{ccc} \ext^2 A& =&\begin{pmatrix} \det\begin{pmatrix} 3&2\\2&2 \end{pmatrix} &\det\begin{pmatrix} 3&1\\2&1 \end{pmatrix} & \det\begin{pmatrix} 2&1\\2&1 \end{pmatrix} \\[1em] \det\begin{pmatrix} 3&2\\1&1 \end{pmatrix} & \det\begin{pmatrix} 3&1\\1&1 \end{pmatrix} & \det\begin{pmatrix} 2&1\\1&1 \end{pmatrix} \\[1em] \det\begin{pmatrix} 2&2\\1&1 \end{pmatrix} &\det\begin{pmatrix} 2&1\\1&1 \end{pmatrix} &\det\begin{pmatrix} 2&1\\1&1 \end{pmatrix} \end{pmatrix} =\begin{pmatrix} 5 &4& 3^{\circ}\\ 4&4&3\\ 3^\circ&3&3 \end{pmatrix}, \end{array}\] and $\ext^3 A =\det\begin{pmatrix} 3 &2& 1\\ 2&2&1\\ 1&1&1 \end{pmatrix}=6$. Therefore $\tr_{0} A=\unit, \; \tr_{1} A= 3, \; \tr_{2} A= 5$ and $\tr_{3} A=6.$ So, we have $P_A = \X^3 \ominus 3 \X^2 \oplus 5\X \ominus 6$\enspace. \Cref{Fig:plot_poly} illustrates the plot of $P_A$.
\begin{figure}[!h] \small \centering \begin{tikzpicture}[scale=0.7] \draw[->] (-3.5,0) -- (3.5,0); \draw[->] (0,-6.5) -- (0,6.5); \draw[dotted](1,-1) -- (1,1); \draw[dotted] (2,-2) -- (2,2); \draw[dotted] (3,4) -- (3,-4); \draw[thick] (1,-1) -- (-1,-1); \draw[thick] (-1,-1) -- (-2,-2); \draw[thick] (-2,-2) -- (-3,-4); \draw[thick] (1,1) -- (2,2); \draw[thick] (2,-2) -- (3,-4); \draw[thick] (3,4) -- (3.5,6.5); \draw[thick] (-3,-4) -- (-3.5,-6.5); \fill (1,1) circle (3pt); \fill (1,-1) circle (3pt); \fill (3,4) circle (3pt); \fill (3,-4) circle (3pt); \fill (2,2) circle (3pt); \fill (2,-2) circle (3pt); \fill (-1,-1) circle (3pt); \fill (-2,-2) circle (3pt); \fill (-3,-4) circle (3pt); \fill (0.25,-0.25) node {\tiny$\zero$}; \fill (-4,-0.4) node {\tiny$\smax^{\ominus}$}; \fill (4,-0.4) node {\tiny$\smax^{\oplus}$}; \fill (0.5,6) node {\tiny$\smax^{\oplus}$}; \fill (0.5,-6) node {\tiny$\smax^{\ominus}$}; \fill (-1,-0.4) node {\tiny$\ominus 1$}; \fill (-2,-0.4) node {\tiny$\ominus 2$}; \fill (-3,-0.4) node {\tiny$\ominus 3$}; \fill (1.1,-0.4) node {\tiny$1$}; \fill (2.1,-0.4) node {\tiny$2$}; \fill (3.1,-0.4) node {\tiny $3$}; \fill (0.25,-1) node {\tiny$\ominus 6$}; \fill (0.25,-2) node {\tiny$\ominus 7$}; \fill (0.25,-4) node {\tiny$\ominus 9$}; \fill (0.25,1) node {\tiny$6$}; \fill (0.25,2) node {\tiny$7$}; \fill (0.25,4) node {\tiny$9$}; \end{tikzpicture}\caption{ Plot of $P_A=\X^3 \ominus 3 \X^2 \oplus 5\X \ominus 6$ in \Cref{charpoly}. The solid black line illustrates $\widehat{P_A}$. The points of discontinuity of $\widehat{P_A}$ are $1, 2$ and $3$, which are the roots of $P_A$\enspace. }\label{Fig:plot_poly} \end{figure} \end{example} \subsection{$\tmax$-Eigenvalues and $\smax$-Eigenvalues of $\psd$ and $\pd$ matrices}\label{sec:eig} Let $A$ be a $\psd$ matrix. In the following theorem, we compute the $\tmax$-eigenvalues of $|A|$. \begin{theorem}\label{tropical_eigs} Let $A=(a_{ij}) \in \psd_n(\smax^{\vee})$. Then the $\tmax$-eigenvalues of $|A|=(|a_{ij}|)\in (\tmax)^{n \times n}$ are the diagonal elements of $|A|$, counted with multiplicities. \end{theorem} \begin{proof} Let $d_1\geq d_2\geq \cdots \geq d_n$ be the diagonal elements of $|A|$. W.l.o.g.\ let $d_1 \neq \zero$. Therefore, for $i\in [n]$ we get that $\tr_iA\neq \zero$. Otherwise $d_1= \cdots=d_n=\zero$ and, since $A \in \psd_n(\smax^{\vee})$, we have $A=\zero$ and the proof is straightforward. Using \Cref{perdet} the characteristic polynomial of $|A|$ over $\tmax$ is $P_{|A|} = \tsum_{k=0,\ldots,n} (\tr_{n-k}A)\X^{k}$. By \Cref{trace} for $k=2, \ldots, n$ \[d_{k-1}= \tr_{k-1}A-\tr_{k-2}A \geq \tr_{k}A - \tr_{k-1}A=d_k.\] Finally, using \Cref{concavepoly} together with \Cref{roots_poly} we deduce the result. \end{proof} Let us consider \Cref{balanc_char} again. The $\smax$-characteristic polynomial of $A$ has the polynomial function $\widehat{P}_A(x) = x^2 \oplus \unit^{\circ} $ which is not a polynomial function in $\PF (\smax^{\vee})$. So we are not interested in considering the $\smax$-eigenvalues of $\psd$ matrices. From here on, we prove our results only for the case of $\pd$ matrices. \begin{theorem}\label{sym_eigs} Let $A \in \pd_n(\smax^{\vee})$. The diagonal elements of $A$ are precisely the $\smax$-eigenvalues of $A$, counted with multiplicities. \end{theorem} \begin{proof} Let $d_1\geq d_2\geq \cdots \geq d_n$ be the diagonal elements of $A$.
Using \Cref{char_pd}, we have \begin{equation}\label{factor_poly}P_A(\X)= \bigtsum_k ((\ominus \unit )^{ k} d_1 \cdots d_k) \X^{n-k}\end{equation} and therefore, by \Cref{concavepoly} and \Cref{roots_poly}, we have \[|P_A|(\X)= \bigtsum_k (d_1 \cdots d_k )\X^{n-k}= (\X \oplus d_1) \cdots (\X \oplus d_n). \] Moreover, using \Cref{factor_poly} we have $(P_A)_{n-i+1}= (\ominus \unit)^{i-1}\tr_{i-1}A$ and $\ominus (P_A)_{n-i} = (\ominus \unit)^{i+1} \tr_iA$. Therefore $d_i (P_A)_{n-i+1}= \ominus (P_A)_{n-i}$ and by \Cref{suf_cond}, $d_i,\; i\in [n]$ are the $\smax$-roots of $P_A$. Also, since all the diagonal elements of $A$ ($d_i,\; i\in [n]$) are positive, \Cref{coro-uniquefact} and \Cref{coro2-uniquefact} give us that $P_A$ has a unique factorization and that the multiplicity of a diagonal element as a $\smax$-eigenvalue of $A$ coincides with the number of its occurrences as a diagonal element. \end{proof} \section{Eigenvectors of tropical positive (semi-)definite matrices}\label{sec:3p} \subsection{$\smax$-Eigenvectors of $\pd$ matrices using the adjugate matrix} We now specialize some of the properties proved in \Cref{spec-eig-vector}. \begin{proposition}\label{balance-adj} Let $A\in \pd_n(\smax^\vee)$, and set $\gamma_{i}=a_{ii}$ for $i\in [n]$. Assume that $\gamma_{1}\succeq \gamma_{2} \succeq \cdots \succeq \gamma_{n}$, and define $B_k=\gamma_k I\ominus A$ for some $k \in [n]$. Then, all the diagonal entries of $(B_k)^{\mathrm{adj}}$ are non-zero and they are all in $\smax^\circ$ except possibly the $k$-th diagonal entry, which is also in $\smax^\circ$ if and only if $\gamma_k$ is not a simple $\smax$-eigenvalue. \end{proposition} \begin{proof} Note that all the $\gamma_i$, $i\in [n]$, are different from $\zero$. Indeed, the modulus of $B_k$ is a positive semi-definite matrix with diagonal entries equal to $\gamma_{1},\ldots, \gamma_{k-1}, \gamma_k,\ldots, \gamma_{k}$. So all $(n-1)\times (n-1)$ principal submatrices are also of the same type, and so each has a determinant modulus equal to the product of the moduli of its diagonal entries. Since the determinant is also $\succeq$ this product, it is non-zero, and we get that it is in $\smax^\circ$ if the product is in $\smax^\circ$. This is the case for all the principal submatrices which contain the $k$-th diagonal element of $B_k$. This is also the case when $\gamma_k$ is not a simple $\smax$-eigenvalue. If $\gamma_k$ is a simple $\smax$-eigenvalue, then one can show that the $k$-th diagonal entry of $(B_k)^{\mathrm{adj}}$ is equal to $(\ominus \unit)^{k-1} \gamma_{1}\cdots \gamma_{k-1} \gamma_k^{n-k}$, so is not in $\smax^\circ$. \end{proof} Note that $\gamma_k$ is a simple $\smax$-eigenvalue if and only if $\gamma_{k-1}\succ \gamma_k \succ \gamma_{k+1}$, with the convention $\gamma_{n+1}=\zero$. By \Cref{lem-Bk}, special weak $\smax$-eigenvectors associated to the eigenvalue $\gamma_k$ are the columns of $(B_k)^{\mathrm{adj}} $ which are not in $(\smax^\circ)^n$. When $\gamma_k$ is simple, the above result shows that among the columns of $(B_k)^{\mathrm{adj}} $, the $k$-th column is necessarily a weak $\smax$-eigenvector associated to $\gamma_k$, and that the other columns cannot be $\smax$-eigenvectors. Hence, the $k$-th column is the only candidate to be a $\smax$-eigenvector; we shall denote it by $v^{(k)}$. \begin{corollary}\label{coro-simple-eigen} Let $A\in \pd_n(\smax^\vee)$, and $\gamma=\gamma_k$ and $B=B_k$ be as in \Cref{balance-adj}. Assume that $\gamma$ is a simple $\smax$-eigenvalue. Let \begin{equation}\label{vk} v^{(k)}:= (B_k)_{:,k}^{\mathrm{adj}}.
\end{equation} Then we have the following properties: \begin{enumerate} \item $v^{(k)}$ is a weak $\smax$-eigenvector associated to $\gamma$, such that $v^{(k)}_k\in\smax^\vee\setminus\{\zero\}$. \item There exists a $\smax$-eigenvector $v$ associated to $\gamma$ such that $|v|=|v^{(k)}|$ and $v_i=v^{(k)}_i$ for all $i\in [n]$ satisfying $v^{(k)}_i\in\smax^\vee$, in particular for $i=k$. \item Any $\smax$-eigenvector $v$ associated to $\gamma$ satisfies $v\balance \lambda v^{(k)}$ for some $\lambda\in \smax^{\vee}\setminus\{\zero\}$. \end{enumerate} \end{corollary} \begin{proof} Since $\gamma$ is simple, \Cref{balance-adj} shows that $(B_k)_{k,k}^{\mathrm{adj}}\in \smax^\vee\setminus\{\zero\}$. Then, Point i) follows from \Cref{lem-Bk}. Point ii) follows from \Cref{cond_existence}, using that $(B_k)_{k,k}^{\mathrm{adj}}\neq \zero$, and the fact that $i=k$ is possible follows from $(B_k)_{k,k}^{\mathrm{adj}}\in \smax^\vee\setminus\{\zero\}$. Point iii) follows from the first part of~\Cref{cond_unique} using that $(B_k)_{k,k}^{\mathrm{adj}}\in \smax^\vee\setminus\{\zero\}$. \end{proof} \begin{remark} If $\gamma$ is not necessarily simple, then Point ii) in \Cref{coro-simple-eigen} still holds, except that $i=k$ may not satisfy the property. Indeed, this follows from \Cref{cond_existence}, using that $(B_k)_{k,k}^{\mathrm{adj}}\neq \zero$, and the latter is always true for a positive definite matrix $A$. Moreover, the same holds by replacing $v^{(k)}$ by any column of $(B_k)^{\mathrm{adj}}$, since all diagonal entries of $(B_k)^{\mathrm{adj}}$ are non-zero, by \Cref{balance-adj}. \end{remark} \begin{corollary}\label{coro-unique-eigen} Let $A\in \pd_n(\smax^\vee)$, and $\gamma=\gamma_k$ and $B=B_k$ be as in \Cref{balance-adj}. Assume there exists a column $j$ of $B^\adj$ which is in $(\smax^\vee)^n\setminus \{\zero\}$ (as in \Cref{cond_unique}). Then, $j=k$, and any $\smax$-eigenvector is a multiple of $B^\adj_{:,j}$ and $\gamma$ is a simple (algebraic) $\smax$-eigenvalue of $A$. \end{corollary} \begin{proof} Assume there exists a column $j$ of $B^\adj$ which is in $(\smax^\vee)^n\setminus \{\zero\}$. \Cref{balance-adj} shows that any column of $B^\adj$ different from the $k$-th column has a non-zero balanced entry, and so $j=k$. Also, if $\gamma$ is not simple, the same holds for the $j$-th column. This shows that $\gamma$ is a simple (algebraic) $\smax$-eigenvalue of $A$. Finally, by the second part of~\Cref{cond_unique}, any $\smax$-eigenvector associated to the eigenvalue $\gamma$ is a multiple of $B^\adj_{:,k}$. \end{proof} In \Cref{ex_eig} and \Cref{ex_eig2}, we shall see that even though the entries of $A$ are in $\smax^{\vee}$ and $A$ has $n$ distinct eigenvalues, there may exist eigenvalues $\gamma_k$ such that $v^{(k)}$ (and thus any column of $(B_k)^{\mathrm{adj}}$) is not a $\smax$-eigenvector, and that this may hold for the maximal eigenvalue, see \Cref{ex_eig2}. \begin{example}\label{ex_eig1}Let $A = \begin{pmatrix} 3 &\ominus 2& 1\\ \ominus 2&2&1\\ 1&1&1 \end{pmatrix}$. It is immediate to see that $A \in \pd_3(\smax^\vee)$ with the $\smax$-eigenvalues $\gamma_1=a_{11}=3$, $\gamma_2=a_{22}=2$ and $\gamma_3 = a_{33}= 1$.
We get \[B_1 = \gamma_{1} I \ominus A= \begin{pmatrix} 3^{\circ} &2& \ominus 1\\ 2& 3&\ominus1\\ \ominus 1&\ominus 1& 3 \end{pmatrix} \Rightarrow (B_1)^{\mathrm{adj}}= \begin{pmatrix} \mathbf{6}&\ominus 5&4\\ \mathbf{\ominus 5}&6^{\circ}&4^{\circ}\\ \mathbf{4}&4^{\circ}&6^{\circ} \end{pmatrix} \Rightarrow v^{(1)} = \begin{pmatrix} 6\\\ominus 5\\4\end{pmatrix}\] For the $\smax$-eigenvector associated to $\gamma_2=a_{22}$ we have \[B_2= \gamma_{2} I \ominus A =\begin{pmatrix} \ominus 3 & 2& \ominus 1\\ 2&2^{\circ} &\ominus 1\\ \ominus 1&\ominus 1& 2 \end{pmatrix} \Rightarrow(B_2)^{\mathrm{adj}}=\begin{pmatrix} 4^{\circ} &\mathbf{\ominus 4}&3^{\circ}\\ \ominus 4&\mathbf{\ominus 5}&\ominus 4\\ 3^{\circ}&\mathbf{\ominus 4}&5^{\circ} \end{pmatrix}\Rightarrow v^{(2)} = \begin{pmatrix} \ominus 4\\\ominus 5\\\ominus 4\end{pmatrix} \] Also, we have \[B_3=\gamma_{3} I \ominus A = \begin{pmatrix} \ominus 3 & 2& \ominus 1\\ 2& \ominus 2 & \ominus 1\\ \ominus 1& \ominus 1&1^{\circ} \end{pmatrix}\Rightarrow (B_3)^{\mathrm{adj}}=\begin{pmatrix} 3^{\circ}&3^{\circ}&\mathbf{\ominus 3}\\ 3^{\circ}&4^{\circ}&\mathbf{\ominus 4}\\ \ominus 3&\ominus 4&\mathbf{5} \end{pmatrix} \Rightarrow v^{(3)} = \begin{pmatrix} \ominus 3\\ \ominus 4\\5\end{pmatrix}.\] It is easy to see that $v^{(1)}\in (\smax^\vee)^{n}\setminus\{\zero\}$ and \[ A v^{(1)}=\gamma_1 v^{(1)}=\begin{pmatrix} 9&\ominus 8&7 \end{pmatrix}^T. \] Therefore $v^{(1)}$ is a strong $\smax$-eigenvector. Also, $v^{(2)}$ and $v^{(3)}$ are $\smax$-eigenvectors since $v^{(2)}$ and $v^{(3)}\in (\smax^\vee)^{n}\setminus\{\zero\}$ and \[ A v^{(2)}=\begin{pmatrix} 7^{\circ}& \ominus 7& \ominus 6 \end{pmatrix}^T \balance\;\gamma_2 v^{(2)}=\begin{pmatrix} \ominus 6& \ominus 7& \ominus 6 \end{pmatrix}^T, \] and \[ A v^{(3)}=\begin{pmatrix} 6^{\circ}& 6^{\circ}& 6 \end{pmatrix}^T \balance \;\gamma_3 v^{(3)}=\begin{pmatrix} \ominus 4& \ominus 5& 6 \end{pmatrix}^T. \] They are not strong $\smax$-eigenvectors. \end{example} \begin{example}\label{ex_eig} For another example, let $A = \begin{pmatrix} 3 &2& 1\\ 2&2&1\\ 1&1&1 \end{pmatrix}$. As in the previous example, $A \in \pd_3(\smax^\vee)$ with the $\smax$-eigenvalues $\gamma_1=a_{11}=3$, $\gamma_2=a_{22}=2$ and $\gamma_3 = a_{33}= 1$. We obtain this time: \[ v^{(1)} = \begin{pmatrix} 6\\5\\4\end{pmatrix}\; ,\quad v^{(2)} = \begin{pmatrix} 4\\\ominus 5\\\ominus 4\end{pmatrix} \; , \quad v^{(3)} = \begin{pmatrix} 3^{\circ}\\\ominus 4\\5\end{pmatrix}.\] It is easy to see that $v^{(1)}\in (\smax^\vee)^{n}\setminus\{\zero\}$ and \[ A v^{(1)}=\gamma_1 v^{(1)}=\begin{pmatrix} 9&8&7 \end{pmatrix}^T. \] Therefore $v^{(1)}$ is a strong $\smax$-eigenvector. Also, $v^{(2)}$ is a $\smax$-eigenvector, since $v^{(2)}\in (\smax^\vee)^{n}\setminus\{\zero\}$ and $A v^{(2)}\balance \gamma_2 v^{(2)}$, but it is not a strong one since \[ A v^{(2)}=\begin{pmatrix} 7^{\circ}& \ominus 7& \ominus 6 \end{pmatrix}^T \neq \;\gamma_2 v^{(2)}=\begin{pmatrix} 6& \ominus 7& \ominus 6 \end{pmatrix}^T, \] and $v^{(3)}$ is a weak $\smax$-eigenvector and not a $\smax$-eigenvector since it has a balanced entry. \end{example} \begin{example}\label{ex_eig2} Let $A = \begin{pmatrix} 3 &\ominus 2& 0\\ \ominus 2&2&1\\ 0&1&1 \end{pmatrix} \in \pd_3(\smax^\vee)$ with, again, the $\smax$-eigenvalues $\gamma_1=a_{11}=3$, $\gamma_2=a_{22}=2$ and $\gamma_3 = a_{33}= 1$.
We have $Av^{(1)}=\gamma_1v^{(1)}$, but \[ v^{(1)}=\begin{pmatrix} 6\\ \ominus 5\\ 3^{\circ} \end{pmatrix} \notin (\smax^{\vee})^n\setminus \{\zero\}\enspace .\] By \Cref{coro-simple-eigen} we know that there is at least one $\smax$-eigenvector of the form $ \begin{pmatrix} 6\\ \ominus 5\\ 3 \end{pmatrix}$ or $\begin{pmatrix} 6\\ \ominus 5\\ \ominus 3 \end{pmatrix}$. In this example, both are $\smax$-eigenvectors. \end{example} \subsection{Computing the leading $\smax$-eigenvector using Kleene's star} In \Cref{coro-unique-eigen} we gave a condition under which a $\smax$-eigenvector associated to a $\smax$-eigenvalue of a tropical positive definite matrix is unique up to a multiplicative constant. We shall give here another characterization of such a $\smax$-eigenvector using Kleene's star of matrices, see \Cref{star_smax}. We shall first consider the case when the eigenvalue is the greatest one, in which case we speak of a \new{leading $\smax$-eigenvector}. The following well-known result is usually written using the maximal cycle mean which is equal to the maximal (algebraic or geometric) eigenvalue of $A$. \begin{lemma}\label{leq_unit} For $A \in (\tmax)^{n \times n}$, $A^*$ exists (in $\tmax$) if and only if all its eigenvalues are $\leq \unit$, and then $A^*= I \oplus A \oplus \cdots \oplus A^{ n-1}$. \end{lemma} The following result follows from idempotency of addition in $\smax$. \begin{lemma}\label{eq_star} For $A \in (\smax)^{n \times n}$ we have $ \tsum_{k=0,\ldots,m} A^{ k} = (I \oplus A)^{ m}$. \hfill \qed \end{lemma} \begin{lemma}\label{existence_star} If $A \in (\smax)^{n \times n}$ and $|A|^*$ exists, then $A^{*} \in (\smax)^{n \times n}$ exists. \end{lemma} \begin{proof} $\{\tsum_{k=0,\ldots,m} A^{ k}\}_m$ is a non-decreasing sequence with respect to $m$ for the order relation $\preceq$ (\Cref{partial_order}), and its absolute value satisfies $|\tsum_{k=0,\ldots,m} A^{ k}|= \tsum_{k=0,\ldots,m} |A|^{ k}$, which is stationary for $m\geq n$ and equal to $|A|^*$, by \Cref{leq_unit}. So for $m \geq n$ the sequence is non-decreasing but can only take a finite number of values (the matrices $B$ such that $|B|=|A|^*$). Therefore, there exists $m_0\geq n$ such that $\tsum_{k=0,\ldots,m} A^{ k}$ is stationary for $m\geq m_0$. \end{proof} We first state the main result of this section, which computes the vector $v^{(1)}$ for a matrix $A\in \pd_n(\smax^\vee)$ as in \Cref{balance-adj}, using Kleene's star of the matrix $\gamma^{-1} A$. \begin{theorem}\label{result_pro} Let $A\in \pd_n(\smax^\vee)$, $\gamma_k$ and $B_k$ be as in \Cref{balance-adj}. Assume that $\gamma=\gamma_1$ is simple as an algebraic $\smax$-eigenvalue of $A$, that is, $\gamma_1\succ \gamma_2$. Then, we have \[ v^{(1)}=(\gamma I \ominus A )^{\adj}_{:,1}=\gamma^{n-1} (\gamma^{-1}A)^*_{:,1}\enspace .\] Moreover, $A v^{(1)}= \gamma v^{(1)}$. In particular, when $v^{(1)} \in (\smax^\vee)^n$, $v^{(1)}$ is the unique leading $\smax$-eigenvector, and it is a strong $\smax$-eigenvector. \end{theorem} We decompose the proof of \Cref{result_pro} into the following lemmas, which hold for $A$ as in the theorem. \begin{lemma}\label{lemmaIB} Let $\Azero$ be the matrix obtained by replacing the diagonal entries of $A$ by $\zero$. Then, we have $(I \ominus \gamma^{-1} A)^\adj_{:,1}=(I \ominus \gamma^{-1} \Azero)^{\adj}_{:,1}$. \end{lemma} \begin{proof} We have $A = \Azero \oplus D$ where $D$ is the diagonal matrix with the same diagonal entries as $A$, that is $D=(d_{ij})$ with $d_{ij}=\zero$ for $i \neq j$ and $d_{ij}=a_{ij}$ for $i=j$.
The matrix $ I\ominus\gamma^{-1} D$ is a diagonal matrix with diagonal entries equal to $\unit$ except for the first one, which is equal to $\unit\ominus \gamma^{-1} d_{11}=\unit^\circ$. So we get that \[ ( I \ominus \gamma^{-1} A)^{\adj}_{:,1}=( I \ominus \gamma^{-1} D \ominus \gamma^{-1}\Azero)^{\adj}_{:,1}\nonumber =(I \ominus \gamma^{-1}\Azero)^{\adj}_{:,1}\] where the last equality is by the fact that the entries of $I \ominus \gamma^{-1}\Azero$ and $I \ominus \gamma^{-1}D \ominus \gamma^{-1} \Azero$ only differ in the $(1,1)$ entry, which does not change the first column (and first row) of the adjugate matrix. \end{proof} \begin{definition}[Definite matrix] A matrix $F=(f_{ij}) \in (\smax)^{n \times n}$ is definite if $\det(F)=f_{ii}=\unit\; \forall i \in [n]$. \end{definition} \begin{lemma}\label{lemma325} Let $\Azero$ be as in \Cref{lemmaIB}. Then $I\ominus \gamma^{-1} \Azero$ is definite. \end{lemma} \begin{proof} Let $F=(f_{ij}):=I\ominus \gamma^{-1} \Azero$. We have $f_{ii}=\unit\; \forall i \in [n]$, and $f_{ij}=\ominus \gamma^{-1} a_{ij}$ for $i\neq j\in [n]$. Since $A$ is a $\pd$ matrix with all diagonal entries $\preceq \gamma$, we have $a_{ij}^{ 2}\lsign a_{ii} a_{jj}\preceq \gamma^{ 2}$ for all $i\neq j$, so that all off-diagonal entries of $\gamma^{-1} A$ have an absolute value $\prec \unit$. Therefore all weights of cycles of $\gamma^{-1} \Azero$ have an absolute value $\prec \unit$. This implies that $\det(F)=\unit$ and so $F$ is a definite matrix. \end{proof} \begin{lemma}[Corollary of \protect{\cite[Th.\ 2.39]{adi}}] \label{adj_star1} Let $A$ and $\Azero$ be as in \Cref{lemmaIB}. Then, $(\gamma^{-1} \Azero)^*$ exists and we have $(\gamma I \ominus \Azero)^{\adj}=\gamma^{n-1} ( \gamma^{-1}\Azero)^*$. \end{lemma} \begin{proof} As said in the previous proof, all weights of cycles of $\gamma^{-1} \Azero$ have an absolute value $\prec \unit$. Since $ \gamma^{-1} \Azero$ has diagonal entries equal to $\zero$, and $I\ominus \gamma^{-1} \Azero$ is definite by the previous lemma, we get the assertions in \Cref{adj_star1}, by applying \cite[Th.\ 2.39]{adi}. \end{proof} \begin{lemma}\label{star_star1} Let $A$ and $\Azero$ be as in \Cref{lemmaIB}. Then $ ( \gamma^{-1}\Azero)^*= (\gamma^{-1}A)^*$. \end{lemma} \begin{proof} Without loss of generality, we prove the result when $\gamma=\unit$. Using \Cref{leq_unit} together with \Cref{existence_star}, we get that $A^*$ exists. Suppose that the sequences $\{\tsum_{k=0,\ldots,m} A^{ k}\}_m$ and $\{\tsum_{k=0,\ldots,m} \Azero^{ k}\}_m$ are stationary for $m\geq m_1$ and $m \geq m_2$, respectively. Let $m'=\max\{m_1,m_2\}$. Then we have \[\Azero^*=\bigtsum_{k=0,\ldots, m'}\Azero^k = (I \oplus \Azero)^{m'} = (I \oplus A)^{m'} = \bigtsum_{k=0,\ldots,m'} A^k=A^*,\] where the second and fourth equalities are by \Cref{eq_star}, and the third one is by definition of $\Azero$. \end{proof} \begin{proof}[Proof of \Cref{result_pro}] Consider $A'=\gamma^{-1}A$. Using the multi-linearity of the determinant, or using \cite[Cor.\ 2.35]{adi}, we get \[v^{(1)}=(\gamma I\ominus A )^{\adj}_{:,1}=\gamma^{n-1} (I\ominus A')^{\adj}_{:,1}\] and using \Cref{lemmaIB}, \Cref{adj_star1} and \Cref{star_star1}, we get the respective equalities \[ (I\ominus A')^{\adj}_{:,1}= (I \ominus \gamma^{-1}\Azero)^{\adj}_{:,1}= (\gamma^{-1}\Azero)^*_{:,1}=( \gamma^{-1} A)^*_{:,1}= (A')^*_{:,1}.\] This shows the first assertion of \Cref{result_pro}.
Since $(A')^*=I\oplus A'(A')^*$ and $[A'(A')^*]_{11}\succeq A'_{11}=\unit$, we get that $ (A')^*_{11}=\unit \oplus [A'(A')^*]_{11}=[A'(A')^*]_{11}$, and so $A'(A')^*_{:,1}=(A')^*_{:,1}$, which, together with the first assertion, shows the second assertion of \Cref{result_pro}. The last assertion follows from \Cref{coro-unique-eigen}. \end{proof} \begin{corollary} Let $A$ be as in \Cref{result_pro}. Assume that all the entries of $A$ are positive or $\zero$, that is are in $\smax^{\oplus}$. Then, $v^{(1)}$ also has positive or $\zero$ entries, and thus it is necessarily a strong $\smax$-eigenvector. \end{corollary} \begin{remark} The previous result shows that for a positive definite matrix with positive or zero entries, the leading $\smax$-eigenvector of $A$ is positive. This is not the case in general for the other eigenvectors. For instance, in \Cref{ex_eig}, $v^{(2)}$ contains positive and negative coordinates, and since it is in $(\smax^\vee)^{n}$, it is the unique eigenvector associated to $\gamma_2$. Moreover, $v^{(3)}$ contains balanced entries.\end{remark} \begin{remark}\label{not_signed} Consider the matrix $A$ in \Cref{ex_eig2} and let $\gamma=3$. We have \[(\gamma^{-1}A)^*=(\gamma^{-1}\Azero)^*=\begin{pmatrix}0 & \ominus (-1)&(-3)^\circ \\\ominus (-1) &0 &-2 \\ (-3)^\circ & -2 &0 \end{pmatrix}\enspace.\] So by \Cref{result_pro}, $v^{(1)}=\gamma^2 (\gamma^{-1}A)^*_{:,1}= 6 \odot \begin{pmatrix} 0 & \ominus (-1) & (-3)^\circ \end{pmatrix}^\intercal$. \Cref{result_pro} also shows that $A v^{(1)}= \gamma v^{(1)}$. However, $v^{(1)}$ is not in $(\smax^\vee)^n$, and the two possible $\smax$-eigenvectors mentioned in \Cref{ex_eig2} are not strong $\smax$-eigenvectors. \end{remark} The latter remark is indeed a general result, as follows. \begin{corollary}\label{coro-strong1} Let $A$ and $\gamma$ be as in \Cref{result_pro}. If $v^{(1)}$ does not belong to $(\smax^\vee)^n$, then $A$ has no strong $\smax$-eigenvector associated to the eigenvalue $\gamma$. \end{corollary} \begin{proof} Assume that $w$ is a strong $\smax$-eigenvector associated to the eigenvalue $\gamma$. This means that $w\in (\smax^\vee)^n\setminus\{\zero\}$ and $\gamma w=A w$. This implies $w= (\gamma^{-1}A)^* w$ and so $w\succeq (\gamma^{-1}A)^*_{:,1} w_1$. Since $w$ is necessarily a $\smax$-eigenvector, \Cref{coro-simple-eigen} shows that there exists $\lambda \in \smax^\vee\setminus\{\zero\}$ such that $w\balance \lambda v^{(1)}$. We also have $v^{(1)}_1\in \smax^\vee\setminus\{\zero\}$, so $w_1=\lambda v^{(1)}_1$, by \Cref{equality_balance}. By \Cref{result_pro}, we have $v^{(1)}=\gamma^{n-1}(\gamma^{-1}A)^*_{:,1}= v^{(1)}_1(\gamma^{-1}A)^*_{:,1}$, therefore $ \lambda v^{(1)}= w_1 (\gamma^{-1}A)^*_{:,1}\preceq w$. Since $w\balance \lambda v^{(1)}$ and $w\in (\smax^\vee)^n\setminus\{\zero\}$, this implies, by \Cref{equality_balance}, that $w=\lambda v^{(1)}$ and so $v^{(1)}\in (\smax^\vee)^n\setminus\{\zero\}$, a contradiction. \end{proof} \subsection{Computing all $\smax$-eigenvectors using Kleene's star}\label{subsec:kleen} \begin{theorem}\label{star-general} Let $A$, $\gamma_k$ and $B_k$ be as in \Cref{balance-adj}. Assume that $\gamma=\gamma_k$ is simple as an algebraic $\smax$-eigenvalue of $A$. Let $D^{(k)}$ be the $n\times n$ diagonal matrix with diagonal entries $\gamma_1,\ldots, \gamma_{k-1}, \zero,\ldots, \zero $, let $A^{(k)}$ be the complement of $D^{(k)}$ in $A$, that is $A^{(k)}$ has entries $[A^{(k)}]_{ij}= a_{ij}$ when $i\neq j$ or $i=j\geq k$, and $\zero$ elsewhere, so that $A= D^{(k)}\oplus A^{(k)}$.
Then, we have \[ v^{(k)}=(B_k)^\adj_{:,k}= \lambda_k ((\gamma I\ominus D^{(k)})^{-1} A^{(k)})_{:,k}^*\quad \text{with}\; \lambda_k= (\ominus \unit)^{k-1} \gamma_1\cdots \gamma_{k-1}\gamma ^{n-k} \enspace ,\] and $v^{(k)}$ satisfies $A^{(k)} v^{(k)}=\gamma v^{(k)}\ominus D^{(k)} v^{(k)}$. Moreover, if $v^{(k)}\in (\smax^\vee)^n$, $v^{(k)}$ is the unique $\smax$-eigenvector associated to the eigenvalue $\gamma$ (up to a multiplicative factor). \end{theorem} \begin{proof} Let $A$, $\gamma_k$, $B_k$, $D^{(k)}$ and $A^{(k)}$ be as in the theorem. We have $A= D^{(k)}\oplus A^{(k)}$, so $B_k= \gamma I\ominus A= \gamma I\ominus D^{(k)}\ominus A^{(k)}$. Since $\gamma_1\succeq \cdots \succeq \gamma_{k-1}\succ \gamma_k=\gamma \succ \gamma_{k+1}\succeq\cdots\succeq \gamma_n$, the matrix $\gamma I\ominus D^{(k)}$ is diagonal with diagonal entries equal to $ \ominus \gamma_1, \ldots, \ominus \gamma_{k-1},\gamma, \ldots, \gamma$, so it is invertible. Hence, \begin{align*} (B_k)^\adj&=(I\ominus (\gamma I\ominus D^{(k)})^{-1} A^{(k)})^\adj (\gamma I\ominus D^{(k)})^\adj\\ &= (I\ominus (\gamma I\ominus D^{(k)})^{-1} A^{(k)})^\adj \det(\gamma I\ominus D^{(k)}) (\gamma I\ominus D^{(k)})^{-1} \enspace .\end{align*} Then, setting $\lambda_k= (\ominus \unit)^{k-1} \gamma_1\cdots \gamma_{k-1}\gamma ^{n-k}$, we get that the $k$-th column satisfies $v^{(k)}=(B_k)^\adj_{:,k}= \lambda_k (I\ominus (\gamma I\ominus D^{(k)})^{-1} A^{(k)})^\adj_{:,k}$. Let $A'= (\gamma I\ominus D^{(k)})^{-1} A^{(k)}$. Then, the formula of $v^{(k)}$ in the theorem holds as soon as $ (A')^*_{:,k}= (I\ominus A')^\adj_{:,k}$. One shows this property by the same arguments as for the proof of \Cref{result_pro}. First, writing $A'=\Azero'\oplus D$ where $D$ is the diagonal matrix with the same diagonal entries as $A'$, and $\Azero'=(\gamma I\ominus D^{(k)})^{-1} \Azero$, we see that $I\ominus D$ is a diagonal matrix with entries equal to $\unit$ except for the $k$-th entry, which is equal to $\unit^\circ$. So by the same arguments as in \Cref{lemmaIB}, we get $(I\ominus A')^\adj_{:,k}= (I\ominus \Azero')^\adj_{:,k}$. Second, although some off-diagonal entries of $A'$ may have an absolute value $\succeq \unit$, all weights of cycles of length $\geq 2$ have an absolute value $\prec \unit$. So, as in the proof of \Cref{lemma325}, we have that $I\ominus A'$ is definite. Then, the arguments of \Cref{adj_star1} and \Cref{star_star1} apply when replacing the matrix $\gamma^{-1} A$ by $A'$ and $\gamma^{-1} \Azero$ by $\Azero'$, and show that $(\Azero')^*$ exists, $(I \ominus \Azero')^{\adj}=(\Azero')^*$ and $ (\Azero')^*=(A')^*$. Altogether, this implies that $ (A')^*_{:,k}= (\Azero')^*_{:,k}=(I \ominus \Azero')^{\adj}_{:,k}=(I\ominus A')^\adj_{:,k}$, which is what we wanted to prove, and finishes the proof of the first assertion. For the second assertion, we proceed again as in the proof of \Cref{result_pro}. Since $(A')^*=I\oplus A'(A')^*$ and $[A'(A')^*]_{kk}\succeq A'_{kk}=\unit$, we get that $ (A')^*_{kk}=\unit \oplus [A'(A')^*]_{kk}=[A'(A')^*]_{kk}$, and so $A'(A')^*_{:,k}=(A')^*_{:,k}$, which, together with the first assertion, shows the second assertion of \Cref{star-general}. The last assertion follows from \Cref{coro-unique-eigen}. \end{proof} The construction of the system of equations $A^{(k)} v^{(k)}=\gamma v^{(k)}\ominus D^{(k)} v^{(k)}$ in \Cref{star-general} can be compared to the ones proposed in \cite{Nishida2020,Nishida2021,nishida2021independence} for the definition of ``algebraic eigenvectors''. The novelty here is to consider {\em signed} tropical matrices.
Moreover, considering symmetric positive definite matrices leads to a special form of this construction: the ``multi-circuits'' of the above references are only unions of loops, so that the equation involves a diagonal matrix. We can also show a similar result as \Cref{coro-strong1} for all the eigenvalues. \begin{corollary}Let $A$ and $\gamma$ be as in \Cref{star-general}. If $v^{(k)}$ does not belong to $(\smax^\vee)^n$, or if $A$ is irreducible, then $A$ has no strong $\smax$-eigenvector associated to the eigenvalue $\gamma$. \end{corollary} \begin{proof} Assume by contradiction that $w$ is a strong $\smax$-eigenvector associated to the eigenvalue $\gamma$. This means that $w\in (\smax^\vee)^n\setminus\{\zero\}$ and $\gamma w=A w$. Using the decomposition $A=A^{(k)}\oplus D^{(k)}$ of \Cref{star-general}, we see that $\gamma_k w\succeq D^{(k)} w$, hence $\gamma_k w_i\succeq \gamma_i w_i$, for $i=1,\ldots, k-1$. Since $\gamma_i\succ \gamma_k$, this implies that $w_i=\zero$, for all $i=1,\ldots, k-1$. Then $D^{(k)} w=\zero$ and we get $A^{(k)} w= Aw= \gamma w= (\gamma I\ominus D^{(k)})w$. This implies $w= ((\gamma I\ominus D^{(k)})^{-1}A^{(k)})^* w$. Using the same arguments as in the proof of \Cref{coro-strong1}, with \Cref{star-general} instead of \Cref{result_pro}, we deduce that $w=\lambda v^{(k)}$ for some $\lambda \in \smax^\vee\setminus\{\zero\}$. This implies that $v^{(k)}\in (\smax^\vee)^n\setminus\{\zero\}$ and that the first entries $v^{(k)}_i$ with $i=1,\ldots, k-1$ are equal to $\zero$. If $A$ is irreducible, then so is $(\gamma I\ominus D^{(k)})^{-1}A^{(k)}$, and by \Cref{irreducible}, $((\gamma I\ominus D^{(k)})^{-1}A^{(k)})^*$ has only non-zero entries. This implies that the vector $v^{(k)}=\lambda_k ((\gamma I\ominus D^{(k)})^{-1}A^{(k)})^*_{:,k}$ has no zero entries, a contradiction. \end{proof} \subsection{Generic uniqueness of $\smax$-eigenvalues and $\smax$-eigenvectors} \label{sec-generic} \Cref{ex_eig2} shows that a $\smax$-eigenvector is not necessarily unique up to a multiplicative constant, even when the corresponding $\smax$-eigenvalue is simple and even when this eigenvalue is the greatest one. Although there are many examples as above, we also have the following result, which shows that, generically, the $\smax$-eigenvalues are simple and the $\smax$-eigenvectors are unique up to a multiplicative constant. To any matrix $A=(a_{ij})_{i,j\in [n]} \in \pd_n(\smax^{\vee})$, we associate the vector $\Psi(A)=(a_{ij})_{1\leq i\leq j\leq n}$ of $(\smax^{\vee})^{n(n+1)/2}$. The map $\Psi$ is one-to-one and we denote by $\upd$ the image of $\pd_n(\smax^{\vee})$ by the map $\Psi$. Similarly to univariate polynomials, see \Cref{sec-polynomials}, multivariate formal polynomials and polynomial functions can be defined in any semiring ${\mathcal S}$. We can also consider formal Laurent polynomials, which are sequences $P_\alpha\in {\mathcal S}$ indexed by elements $\alpha\in \Z^d$ (where $d$ is the number of variables and $\Z$ is the set of integers), such that $P_\alpha=\zero$ for all but a finite number of indices $\alpha$. Then the Laurent polynomial function $\widehat{P}$ can only be applied to vectors $x\in ({\mathcal S}\setminus\{\zero\})^d$. The following result shows that generically, a matrix $A \in \pd_n(\smax^{\vee})$ has simple $\smax$-eigenvalues and unique $\smax$-eigenvectors associated to all its eigenvalues, up to multiplicative constants, and these $\smax$-eigenvectors are the vectors $v^{(k)}$.
\begin{theorem} \label{theoremgeneric} There exists a finite number of formal polynomials $(P_k)_{k\in I}$ over $\smax$, with coefficients in $\smax^\vee$ and $n(n+1)/2$ variables, such that for any matrix $A \in \pd_n(\smax^{\vee})$, and $x=\Psi(A)$, satisfying $\widehat{P_k}(x)\in \smax^{\vee}\setminus\{\zero\}$ for all $k\in I$, we have that $A$ has simple $\smax$-eigenvalues and all the vectors $v^{(k)}$ defined as above are in $(\smax^{\vee}\setminus\{\zero\})^n$. \end{theorem} \begin{proof} Let $x=\Psi(A)$ for some matrix $A \in \pd_n(\smax^{\vee})$, then $x=(x_{ij})_{1\leq i\leq j\leq n}\in (\smax^\vee)^{n(n+1)/2}$, $x_{ii}\in \smax^{\oplus}$, for all $i\in [n]$ and $x_{ij}^{ 2}\lsign x_{ii}\odot x_{jj}$ for all $1\leq i<j\leq n$. The $\smax$-eigenvalues of $A$ are simple if and only if $x_{ii}\neq x_{jj}$ for all $i\neq j$. Taking, for all $i\neq j$, the formal polynomial $P_{ij}$ equal to $x_{ii}\ominus x_{jj}$, we get that the latter holds if and only if $\widehat{P_{ij}}(x)$ is not balanced for all $i\neq j$. If this holds, then there is a permutation of the indices $i\in [n]$ such that $x$ satisfies $x_{11}>\cdots>x_{nn}$. Then, the vectors $v^{(k)}$ are given by the formula in \Cref{star-general}, that is, for all $i\neq k$, we have $v^{(k)}_i= \lambda_k ((\gamma_k I\ominus D^{(k)})^{-1} A^{(k)})_{i,k}^*$ where $\gamma_k=a_{kk}=x_{kk}$, $\lambda_k=v^{(k)}_k= (\ominus \unit)^{k-1} \gamma_1\cdots \gamma_{k-1}\gamma_k^{n-k}$, $\gamma_k I\ominus D^{(k)}$ is the diagonal matrix with diagonal entries $\ominus \gamma_1,\ldots, \ominus \gamma_{k-1},\gamma_k,\ldots, \gamma_k$, or equivalently $\ominus x_{11},\ldots,\ominus x_{k-1,k-1}, x_{kk},\ldots, x_{kk}$, and $A^{(k)}$ has entries $A^{(k)}_{ij}= a_{ij}=x_{ij}$ when $i<j$, or $i=j\geq k$, $A^{(k)}_{ij}= a_{ij}=x_{ji}$ when $j<i$, and $\zero$ elsewhere. Using that the above Kleene's star exists and is equal to the finite sum of the powers of the matrix up to power $n$, we deduce that the expressions of all the $v^{(k)}_i$ are Laurent polynomial functions of the entries of $x$. These polynomials correspond to the weights of paths from $i$ to $k$ in the graph of the matrix $(\gamma_k I\ominus D^{(k)})^{-1} A^{(k)}$, and since we assumed that the $\smax$-eigenvalues are simple, the above matrix has no circuit of weight $0$ (see the proof of \Cref{star-general}), so we can restrict the sum to elementary paths (that is, paths with no circuits). Let $Q_{ki}$ be the formal (Laurent) polynomial corresponding to the expression of $v^{(k)}_i$, then the above properties imply that each monomial in the coordinates of $x$ appears only once, and so the coefficients of all monomials are in $\smax^\vee$. Moreover, due to the multiplication of the weights by $\lambda_k$, the degree of each monomial with respect to each variable is $\geq 0$, that is, $Q_{ki}$ is an ordinary polynomial. Then, all the $v^{(k)}$ are in $(\smax^\vee)^n$ if and only if $\widehat{Q_{ki}}(x)\in \smax^\vee$ for all $k\neq i$. This property indeed holds if $x_{11}>\cdots>x_{nn}$. Now, applying any permutation $\sigma$ of the indices to the polynomials $Q_{ki}$, we obtain polynomials $Q^{\sigma}_{ki}$. The previous properties imply that if $\widehat{P_{ij}}(x)$ is not balanced for all $i\neq j$ and $\widehat{Q^{\sigma}_{ki}}(x)\in \smax^\vee$ for all $k\neq i$ and $\sigma$ a permutation of $[n]$, then $A$ has simple eigenvalues and all the vectors $v^{(k)}$ belong to $(\smax^\vee\setminus\{\zero\})^n$.
\end{proof} Another way to see this result is to consider polyhedral complexes, see for instance in \cite{maclagan2015introduction}. A polyhedral complex $\mathcal C$ in $\R^m$, $m\geq 1$, is a collection of polyhedra of $\R^m$, called cells, such that the intersection of any two polyhedra is a face of each of the two polyhedra or is empty. Then, the support of $\mathcal C$ is the union of all its cells, and its dimension is the maximal dimension $p\leq m$ of its cells (and its codimension is $m-p$). If the support is convex, then the dimension of the complex is the dimension of its support. The set of cells of dimension at most $p-1$ is a sub-polyhedral complex of $\mathcal C$ with dimension $p-1$, whose support is the complement, in the support of $\mathcal C$, of the union of the interiors of the cells of $\mathcal C$ of maximal dimension. Polyhedral complexes are useful to explain the geometric structure of tropical varieties or prevarieties. Given an $m$-variable formal (Laurent) polynomial $P$ over $\rmax$, the set of vectors $x\in \R^m$ such that the maximum in the expression of $\widehat{P}(x)$ is attained at least twice defines a tropical hypersurface. It defines a polyhedral complex $\mathcal C$ in $\R^m$ covering $\R^m$, such that the tropical hypersurface is exactly the support of the sub-polyhedral complex of $\mathcal C$ obtained by taking the cells of dimension at most $m-1$. In particular, the set of points for which the maximum in the expression of $\widehat{P}(x)$ is attained only once is the union of the interiors of cells of maximal dimension of $\mathcal C$. Now assume that $\vgroup=\R$. We denote by $\mu$ the map from $(\smax^\vee)^m$ to $\rmax^m$ which associates to any vector $x$ the vector $|x|$ obtained by applying the modulus map entrywise. Then, any element of $\rmax^m$ has a finite preimage by $\mu$ of cardinality at most $2^m$. Moreover, if $P$ is a formal polynomial with coefficients in $\smax^\vee$ and $m$ variables, and $Q$ denotes the formal polynomial with coefficients $Q_\alpha=|P_{\alpha}|$, then the set of points $x\in(\smax^\vee\setminus\{\zero\})^m$ such that $\widehat{P}(x)\in \smax^\circ$ is included in the preimage by $\mu$ of the set of points $x\in \R^m$ such that the maximum in the expression of $\widehat{Q}(x)$ is attained at least twice. Consider the set $K$ which is the image by $\Phi:=\mu\circ \Psi$ of $\pd_n(\smax^{\vee})$. We have \[ K\cap \R^{n(n+1)/2}=\{x=(x_{ij})_{1\leq i\leq j\leq n}\in \R^{n(n+1)/2}\mid 2 x_{ij}< x_{ii}+x_{jj}\;\forall\; 1\leq i<j\leq n\}\enspace ,\] where we used the usual notations of $\R$. Therefore, the closure of $K\cap \R^{n(n+1)/2}$ in $\R^{n(n+1)/2}$ is a convex polyhedron of dimension $n(n+1)/2$ that we shall denote by $\widetilde{K}$. The previous result implies the following one. \begin{corollary}\label{corogeneric} There exists a polyhedral complex $\mathcal C$ of $\R^{n(n+1)/2}$ with support $\widetilde{K}$, such that the image by $\Phi$ of the set of matrices $A \in \pd_n(\smax^{\vee})$ such that all the eigenvalues of $A$ are simple and all the vectors $v^{(k)}$ defined as above are in $(\smax^{\vee}\setminus\{\zero\})^n$ contains all the interiors of cells of maximal dimension of $\mathcal C$. \end{corollary} \begin{proof} By \Cref{theoremgeneric}, we have that if $x=\Psi(A)$ with $A \in \pd_n(\smax^{\vee})$, and $\widehat{P_k}(x)\in \smax^\vee\setminus\{\zero\}$ for all $k\in I$, then $A$ has simple $\smax$-eigenvalues and all the vectors $v^{(k)}$ defined as above are in $(\smax^{\vee}\setminus\{\zero\})^n$.
Consider the formal polynomial $Q_k=|P_k|$; then, by the above comments, $\widehat{P_k}(x)\in \smax^\vee\setminus\{\zero\}$ holds if $|x|\in \R^{n(n+1)/2}$ and the maximum in the expression of $\widehat{Q_k}(|x|)$ is attained only once. Each polynomial $Q_k$ defines a polyhedral complex ${\mathcal C}_k$ covering $\R^{n(n+1)/2}$, such that the set of points for which the maximum in the expression of $\widehat{Q_k}(x)$ is attained only once is the union of the interiors of cells of maximal dimension of $\mathcal C_k$. Let $\mathcal C$ be the refinement of all the polyhedral complexes ${\mathcal C}_k$. We get that if $|x|=\Phi(A)$ is in the interior of a cell of maximal dimension of $\mathcal C$, then $A$ has simple $\smax$-eigenvalues and all the vectors $v^{(k)}$ defined as above are in $(\smax^{\vee}\setminus\{\zero\})^n$. Taking the intersection of the cells with $\widetilde{K}$, we obtain a polyhedral complex with support equal to $\widetilde{K}$, and satisfying the property in the corollary. \end{proof} \section{Applications}\label{sec:apps} \subsection{Signed valuations of positive definite matrices} \begin{definition}[See also \protect{\cite{allamigeon2020tropical}}] \label{def-sign} For any ordered field $\rfield$, we define the {\em sign map} $\sign:\rfield \to \bmaxs^\vee=\{\zero, \ominus \unit, \unit\}$ as follows, for all $\elf\in \rfield$, \[ \sign(\elf):=\begin{cases} \unit&\elf>0,\\ \ominus \unit& \elf<0,\\ \zero& \elf=0. \end{cases}\] Assume that $\rfield$ is also a valued field, with a convex valuation $\vall$ with value group $\vgroup$. Then, the \textit{signed valuation} on $\rfield$ is the map $\sval:\rfield \to \smax=\smax(\vgroup)$ which associates with an element $\elf$ of $\rfield$ the element $\sign(\elf) \odot \vall(\elf)\in \smax^\vee$, where $\vall(\elf)$ is seen as an element of $\smax^{\oplus}$. \end{definition} Recall that the value group $\vgroup$ is an ordered group such that $\vall \colon \rfield \to \vgroup \cup \{\botelt \}$ is surjective, that $\vall$ is a (non-Archimedean) valuation if \[\begin{aligned} \vall(b) = \botelt &\iff b = 0 \, , \\ \forall b_{1}, b_{2} \in \rfield, \ \vall(b_{1}b_{2}) &= \vall(b_{1}) + \vall(b_{2}) \, , \\ \forall b_{1}, b_{2} \in \rfield, \ \vall(b_{1} + b_{2}) &\le \max(\vall(b_{1}),\vall(b_{2})) \, \end{aligned}\] and that the convexity of $\vall$ is equivalent to the condition that $\elf_{1}, \elf_{2} \in \rfield$ and $0 \le \elf_{2} \le \elf_{1}$ imply $\vall(\elf_{2})\leq \vall(\elf_{1})$. Then, the signed valuation is surjective, and it is a morphism of hyperfields or of systems, meaning, in the case of a map from a field $\rfield$ to $\smax$, that for all $a,a',b\in \rfield$ we have \begin{align*} &\sval(0)=\zero \\ & \sval(\rfield\setminus\{0\})\subset \smax^\vee\setminus\{\zero\}\\ &\sval(-a)=\ominus \sval(a)\\ &\sval(a + a')\preceq^\circ \sval(a)\oplus \sval(a')\\ &\sval(b a)=\sval(b)\odot \sval(a). \end{align*} The above construction applies in particular when $\rfield$ is a real closed valued field with a convex valuation, and so to the field of formal generalized Puiseux series. The signed valuation can be applied entrywise to matrices over $\rfield$ and to polynomials over $\rfield$. It is proved in \cite[Th.\ 4.8]{tropicalization} that any tropical positive semidefinite matrix is the image by the signed valuation of a positive semidefinite matrix over the field of formal generalized Puiseux series. We can obtain the same property for positive definite matrices and a general field $\rfield$.
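To fix ideas, here is a small illustration of \Cref{def-sign}; the specific series and matrix below are chosen only for illustration. Consider the field of real generalized Puiseux series in a parameter $t$, ordered according to the behavior as $t\to+\infty$ and equipped with the valuation given by the leading exponent. Then, for instance, $\sval(2t^{3}+t)=3$ and $\sval(3t^{2}-t^{5})=\ominus 5$. Applying $\sval$ entrywise to the symmetric matrix \[ {\bf A}=\begin{pmatrix} t^{2} & -t\\ -t & t^{3}\end{pmatrix}\enspace ,\] which is positive definite over this ordered field (its leading principal minors $t^{2}$ and $t^{5}-t^{2}$ are positive), we obtain \[ \sval({\bf A})=\begin{pmatrix} 2 & \ominus 1\\ \ominus 1 & 3\end{pmatrix}\in \pd_2(\smax^{\vee})\enspace ,\] since the diagonal entries belong to $\smax^{\oplus}\setminus\{\zero\}$ and $(\ominus 1)^{ 2}=2\lsign 2\odot 3=5$.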
\begin{proposition}[Compare with \protect{\cite[Th.\ 4.8]{tropicalization}}] Let $\rfield$ be an ordered field with a convex valuation with value group $\vgroup$, and let $A\in \pd_n(\smax^{\vee})$. Then, there exists an $n\times n$ symmetric positive definite matrix ${\bf A}$ over $\rfield$ such that $\sval({\bf A})= A$. \end{proposition} \begin{proof} By the surjectivity of the signed valuation, there exists an $n\times n$ symmetric matrix ${\bf A}$ over $\rfield$ such that $\sval({\bf A})= A$. Since $A$ is positive definite in $\smax$, \Cref{diag_cycle} shows that for any cycle $\cycle$ of length $k>1$ in $[n]$, with support $I$, we have $|w(\cycle)| \lsign \bigtprod_{i\in I}a_{ii}$. Then, the $I\times I$ submatrices of $A$ and ${\bf A}$ satisfy $\sval(\det({\bf A}[I,I]))=\det(A[I,I])=\bigtprod_{i\in I}a_{ii}\in \smax^{\oplus}\setminus \{\zero\}$. By definition of the signed valuation, this implies that $\det({\bf A}[I,I])>0$. Since this holds for all subsets $I$ of $[n]$ with at least two elements, and the same holds trivially for singletons $I=\{i\}$, we obtain that ${\bf A}$ has all its principal minors positive and so is positive definite in $\rfield$. \end{proof} \begin{theorem}\label{th_asymptot} Let $\rfield$ be a real closed valued field with convex valuation and value group $\vgroup$, and let $A\in \pd_n(\smax^{\vee})$. Let ${\bf A}$ be a symmetric positive definite matrix over $\rfield$ such that $\sval({\bf A})= A$. Then, the signed valuations of the eigenvalues of ${\bf A}$, counted with multiplicities, coincide with the algebraic $\smax$-eigenvalues of $A$, that is, the diagonal entries of $A$. \end{theorem} \begin{proof} Since ${\bf A}\in \rfield^{n\times n}$, the characteristic polynomial of ${\bf A}$, ${\bf Q}:= \det( \X I- {\bf A} )$, belongs to $\rfield[\X]$ and its coefficients are equal to ${\bf Q}_k =(-1)^{n-k}\tr_{n-k}({\bf A})$. Using the above morphism property of the signed valuation, we have $\sval({\bf Q}_k) \preceq^\circ (\ominus \unit)^{n-k}\tr_{n-k}(A)$. Since $A$ is symmetric positive definite, $(\ominus \unit)^{n-k}\tr_{n-k}(A)\in \smax^\vee$, so we get equality. This shows that the signed valuation of the characteristic polynomial ${\bf Q}$ of ${\bf A}$ is equal to the characteristic polynomial $P_A=\det( \X I\ominus A ) $ of $A$ over $\smax$. Since any symmetric matrix over a real closed field has $n$ eigenvalues counted with multiplicities, ${\bf Q}$ has $n$ roots. Using \cite[Prop.~B]{baker2018descartes} (see also \cite[Cor.\ 7.2]{tavakolipour2021}), we obtain that $P_A=\sval({\bf Q})$ has $n$ roots in $\smax$, which coincide with the signed valuations of the roots of ${\bf Q}$, counted with multiplicity. This means that the signed valuations of the eigenvalues of ${\bf A}$, counted with multiplicities, coincide with the algebraic $\smax$-eigenvalues of $A$. \end{proof} \Cref{th_asymptot} may be thought of as a non-Archimedean variation of Gershgorin's disk theorem. The latter shows that the eigenvalues of a matrix are not too far from its diagonal entries. This analogy is better explained by means of the following result, which differs from Gershgorin's theorem in that we use a ``modulus of tropical positive definiteness'' $\gamma$ instead of a notion of diagonal dominance. \begin{theorem} Suppose $A$ is a real symmetric matrix with positive diagonal entries such that $a_{ii}a_{jj}\geq \gamma^2 a_{ij}^2$ for all $i\neq j$, for some $\gamma>0$.
Then, \[ \operatorname{spec}(A) \subset \bigcup_i B(a_{ii}, a_{ii} (n-1)/\gamma)\enspace . \] \end{theorem} \begin{proof} Set $D:= \operatorname{diag}(a_{ii}^{-1/2})$, let $\lambda$ be an eigenvalue of $A$, and consider $B=D (A-\lambda I) D$. Observe that $|B_{ij}|\leq \gamma^{-1}$ for $i\neq j$, whereas $B_{ii}=1-\lambda/a_{ii}$. Since $\lambda$ is an eigenvalue of $A$, the matrix $B$ is singular; hence, it cannot have a dominant diagonal, which entails that there is an index $i\in[n]$ such that $|1-\lambda/a_{ii}|\leq (n-1) \gamma^{-1}$. It follows that $\lambda \in B(a_{ii},a_{ii}(n-1)\gamma^{-1})$. \end{proof} Now let us state a result concerning eigenvectors. \begin{theorem}\label{th_asymptot-vector} Let $\rfield$ be a real closed valued field with convex valuation and value group $\vgroup$. Let $A$, $\gamma_k$ and $B_k$ be as in \Cref{balance-adj}. Let ${\bf A}$ be a symmetric positive definite matrix over $\rfield$ such that $\sval({\bf A})= A$. Let $\lambda_1\geq \cdots \geq \lambda_n>0$ be the eigenvalues of ${\bf A}$. Assume that $\gamma=\gamma_k$ is simple as an algebraic $\smax$-eigenvalue of $A$. Then, $\lambda_k$ is a simple eigenvalue. Let ${\bf v}$ be the eigenvector of ${\bf A}$ associated to the eigenvalue $\lambda_k$ such that ${\bf v}_k=1$. Then $\sval({\bf v})\balance (v^{(k)}_k)^{-1} v^{(k)}$. Assume in addition that $v^{(k)}=(B_k)^\adj_{:,k}\in (\smax^\vee)^n$. Then, $\sval({\bf v})= (v^{(k)}_k)^{-1} v^{(k)}$. \end{theorem} \begin{proof} We have ${\bf A} {\bf v}=\lambda_k {\bf v}$. Taking the signed valuation, we have $\gamma \odot \sval( {\bf v})= \sval(\lambda_k) \sval( {\bf v})= \sval({\bf A} {\bf v})\preceq^\circ A \sval( {\bf v})$. So $\sval( {\bf v})$ is a $\smax$-eigenvector of $A$. From Point (iii) of \Cref{coro-simple-eigen}, we get that $\sval( {\bf v})\balance \mu v^{(k)}$ for some $\mu\in \smax^{\vee}\setminus\{\zero\}$. This implies in particular that $\sval({\bf v})_i= \mu v^{(k)}_i$ for all $i$ such that $v^{(k)}_i\in \smax^\vee$, and so for $i=k$. Since ${\bf v}_k=1$, we get that $\sval( {\bf v})_k=\unit= \mu v^{(k)}_k$, so $\mu = (v^{(k)}_k)^{-1}$. If we assume in addition that $v^{(k)}\in (\smax^\vee)^n$, then $\sval({\bf v})= \mu v^{(k)}=(v^{(k)}_k)^{-1} v^{(k)}$. \end{proof} Recall that by \Cref{theoremgeneric} or \Cref{corogeneric}, we have that generically in the moduli of the coefficients of the $\smax$-matrix $A$, all the eigenvalues $\gamma_k$ are simple and all the vectors $v^{(k)}$ are in $(\smax^\vee)^n$, in which case the above result allows one to obtain the valuation of all the eigenvalues and eigenvectors of $\bf A$. \subsection{Numerical examples} In the following, we illustrate the results of \Cref{th_asymptot} and \Cref{th_asymptot-vector} by some numerical examples. The computation of classical eigenvalues and eigenvectors is done with the \texttt{eig} command in MATLAB 2019b. \begin{example}\label{ex_eig_n} Consider \begin{equation} \label{defAex} A= \begin{pmatrix} 5&4&3 &2& 1\\ 4&4&3&2&1\\ 3&3&3&2&1\\ 2&2&2&2&1\\ 1&1&1&1&1 \end{pmatrix} \in \pd_5(\smax)\quad \text{and}\quad {\bf A}(t)=\begin{pmatrix} t^5&t^4&t^3 &t^2& t^1\\ t^4&t^4&t^3&t^2&t^1\\ t^3&t^3&t^3&t^2&t^1\\ t^2&t^2&t^2&t^2&t^1\\ t^1&t^1&t^1&t^1&t^1 \end{pmatrix} \enspace .\end{equation} Seeing ${\bf A}$ as a matrix in the field of generalized (formal or converging) Puiseux series, we have $\sval({\bf A})=A$.
For any function $f:\R_+\to \R$, we define the valuation as \[ \vall (f)=\limsup_{t\to\infty} \log|f(t)|/\log(t)\enspace .\] If $f$ is already a converging Puiseux series with parameter $t$ (which is the case when $f$ is a polynomial), then its valuation as a Puiseux series coincides with its valuation as a function, and it can be approximated by \[ \vall_t(f)=\log|f(t)|/\log(t) \] with $t$ large. We can then define $\sval$ and $\sval_t$ accordingly, and apply these maps entry-wise to $t$-parametric families of matrices and vectors. So, seeing now ${\bf A}$ as a family of matrices parameterized by $t$, we have \begin{equation}\label{def_sval}\sval({\bf A})=\sval_t({\bf A})= A \quad \forall t>0 \enspace.\end{equation} Denote by $\gamma_i,\; i=1,\ldots,n=5$, the $\smax$-eigenvalues of $A$. Using \Cref{sym_eigs}, the $\gamma_i$ are the diagonal entries of $A$. Denote also by $\lambda_1(t)\geq \cdots \geq \lambda_n(t)$ the (classical) eigenvalues of ${\bf A}(t)$. By \Cref{th_asymptot}, we have $\sval(\lambda_i)=\gamma_i$, for all $i=1,\ldots,5$. In \Cref{gamma}, we show the values of $\gamma_i$ and $\sval_t(\lambda_i)$ for $t=10$ and $t=100$, which show the practical convergence of $\sval_t(\lambda_i)$ towards $\gamma_i$. \begin{table}[ht] \caption{Values of $\gamma_i$ and $\sval_t(\lambda_i),\; i=1, \ldots, 5$, for the matrices in \eqref{defAex}.}\label{gamma} \begin{center} \begin{tabular}{c||c|c|c|c|c} &$i=1$&$i=2$&$i=3$&$i=4$&$i=5$\\ \hline \hline $\gamma_i$&$5$&$4$&$3$&$2$&$1$\\ \hline \hline $\sval_t(\lambda_i)$ ($t=10$)&5.0048&3.9543&2.9542&1.9542&0.9494\\ \hline $\sval_t(\lambda_i)$ ($t=100$)&5.0000&3.9978&2.9978&1.9978&0.9978\\ \end{tabular} \end{center} \end{table} \end{example} \begin{example} Assume that $A$ and ${\bf A}(t)$ are as in \eqref{defAex}. Consider, for all $i=1,\ldots, 5$, the $\smax$-vector $v^{(i)}$ defined in \Cref{vk}. By \Cref{coro-simple-eigen}, it is a weak $\smax$-eigenvector of $A$ associated to the $\smax$-eigenvalue $\gamma_i$. Denote, for all $i=1,\ldots, 5$, by ${\bf v}^{(i)}(t)$ the classical eigenvector of ${\bf A}(t)$ associated to the eigenvalue $\lambda_i(t)$, satisfying $({\bf v}^{(i)}(t))_i=1$. Then, by \Cref{th_asymptot-vector}, we have $\sval({\bf v}^{(i)})\balance (v^{(i)}_i)^{-1} v^{(i)}$. In \Cref{vi1}, we show the vectors $(v^{(i)}_i)^{-1} v^{(i)},\;i=1,\dots,5$ (in the first row) and the vectors $\sval_t({\bf v}^{(i)}),\; i=1, \ldots, 5$, for $t=10$ and $t=100$ (in the second and third rows, respectively). The results show the practical convergence of $\sval_t({\bf v}^{(i)})$ when $t$ goes to infinity, which holds because the vectors ${\bf v}^{(i)}(t)$ have a Puiseux series expansion in $t$. The limits of the vectors $\sval_t({\bf v}^{(i)})$ satisfy $\lim_{t\to\infty} \sval_t({\bf v}^{(i)})=\sval({\bf v}^{(i)})\balance (v^{(i)}_i)^{-1} v^{(i)}$, as expected. In particular, the $k$th entry of $\sval({\bf v}^{(i)})$ coincides with the $k$th entry of $(v^{(i)}_i)^{-1} v^{(i)}$ when the latter is in $\smax^\vee$. Note that in this (nongeneric) example, for each $k$ such that the $k$th entry of $(v^{(i)}_i)^{-1} v^{(i)}$ is in $\smax^\circ$, the valuation of the $k$th entry of ${\bf v}^{(i)}$ (that is, $|(\sval({\bf v}^{(i)}))_k|$, where the modulus is in the sense of $\smax$) is strictly lower than $|(v^{(i)}_i)^{-1} (v^{(i)})_k|$.
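The numerical values reported in \Cref{gamma} and \Cref{vi1} can be reproduced, for instance, with the following minimal MATLAB sketch, given only for the reader's convenience; the variable names are ours, and the sign and the modulus of each eigenvector entry are returned separately, to be read as a signed element of $\smax$.
\begin{lstlisting}[style=Matlab-editor]
t = 10;                                 % take t = 100 for the second rows
E = [5 4 3 2 1; 4 4 3 2 1; 3 3 3 2 1; 2 2 2 2 1; 1 1 1 1 1];
At = t.^E;                              % the classical matrix A(t)
[V,D] = eig(At);                        % classical eigenpairs of A(t)
[lam,idx] = sort(diag(D),'descend');    % lambda_1(t) >= ... >= lambda_5(t)
V = V(:,idx);
sval_lambda = log(abs(lam))/log(t);     % approximates gamma_i = diag(A)
for i = 1:5
    v = V(:,i)/V(i,i);                  % normalize so that the i-th entry is 1
    val_v = log(abs(v))/log(t);         % moduli (valuations) of the entries
    sgn_v = sign(v);                    % signs of the entries (-1 read as "ominus")
end
\end{lstlisting}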
\begin{table}[ht] \caption{Values of $(v^{(i)}_i)^{-1} v^{(i)}$ and $\sval_t({\bf v}^{(i)}),$ $i=1, \ldots,5$, for the matrices in \eqref{defAex}.}\label{vi1} \begin{center} \footnotesize \begin{tabular}{c||c|c|c|c|c} &$i=1$&$i=2$&$i=3$&$i=4$&$i=5$\\ \hline \hline $(v^{(i)}_i)^{-1} v^{(i)}$&$\begin{pmatrix} 0\\ -1\\ -2\\ -3\\ -4 \end{pmatrix}$&$\begin{pmatrix} \ominus -1\\ 0\\ -1\\ -2\\ -3 \end{pmatrix}$&$\begin{pmatrix} (-2)^{\circ}\\ \ominus -1\\ 0\\ -1\\ -2 \end{pmatrix}$&$\begin{pmatrix} (-3)^{\circ}\\ (-2)^{\circ}\\ \ominus -1\\ 0\\ -1 \end{pmatrix}$&$\begin{pmatrix} (-4)^{\circ}\\ (-3)^{\circ}\\ (-2)^{\circ}\\ \ominus -1\\ 0 \end{pmatrix}$\\ \hline $\sval_t({\bf v}^{(i)})$ ($t=10$)&$\begin{pmatrix} 0\\ -0.9591\\ -1.9552\\ -2.9548\\ -3.9547 \end{pmatrix}$&$\begin{pmatrix} \ominus-0.9542\\ 0\\ -0.9538\\ -1.9493\\ -2.9489 \end{pmatrix}$&$\begin{pmatrix} -2.9450\\ \ominus -0.9493\\ 0\\ -0.9538\\ -1.9494 \end{pmatrix}$&$\begin{pmatrix} \ominus -5.9443\\ -2.9447\\ \ominus -0.9494\\ 0\\ -0.9542 \end{pmatrix}$&$\begin{pmatrix} -9.9638\\ \ominus -5.9591\\ -2.9548\\ \ominus -0.9547\\ 0 \end{pmatrix}$\\ \hline $\sval_t({\bf v}^{(i)})$ ($t=100$)&$\begin{pmatrix} 0\\ -0.9978\\ -1.9978\\ -2.9978\\ -3.9978 \end{pmatrix}$&$\begin{pmatrix} \ominus-0.9978\\ 0\\ -0.9978\\ -1.9978\\ -2.9978 \end{pmatrix}$&$\begin{pmatrix} -2.9978\\ \ominus -0.9978\\ 0\\ -0.9978\\ -1.9978 \end{pmatrix}$&$\begin{pmatrix} \ominus -5.9978\\ -2.9978\\ \ominus -0.9978\\ 0\\ -0.9978 \end{pmatrix}$&$\begin{pmatrix} -9.9979\\ \ominus -5.9978\\ -2.9978\\ \ominus -0.9978\\ 0 \end{pmatrix}$ \end{tabular} \end{center} \end{table} \end{example} \begin{example}\label{ex_eig_n2} We generate a classical random positive definite matrix of size $100 \times 100$ by using the following MATLAB commands: \begin{lstlisting}[style=Matlab-editor] n = 100; s = 15; rng(s); C1 = rand(n); s = 12; rng(s); C2 = rand(n); C = C1-C2; B = C*C'; \end{lstlisting} Consider now, for any $t>0$, the map $\sval_t:\R\to\smax$ such that $\sval_t(a)=\sign(a) \log(|a|)/ \log(t)$, for all $a\neq 0$, and $\sval_t(0)=\zero$. This amounts to regarding $a$ as the value at $t$ of a function of $t$. We can extend again $\sval_t$ to matrices and vectors. We fix here $t=10$, and compute $A=\sval_t(B)$, for the matrix $B$ randomly generated as above. Then, generically, and so almost surely, $A$ is a positive definite $\smax$-matrix and satisfies the conditions of \Cref{th_asymptot-vector}. Let $\gamma_i$, $i=1,\ldots,n$, be the $\smax$-eigenvalues of $A$, and $\lambda_i$, $i=1,\ldots, n$, be the eigenvalues of $B$. In \Cref{rel_error_eigval}, we computed the relative error $\frac{|\sval_t(\lambda_i)-\gamma_i|}{\sval_t(\lambda_i)}$ for $i=1,\ldots, 100$. One can see that the errors are less than $1.6\times 10^{-13}$. \begin{figure}[h!] \centerline{\includegraphics[width=5in]{rel_error_eigval}} \caption{The values $\frac{|\sval_t(\lambda_i)-\gamma_i|}{\sval_t(\lambda_i)}$ for $i=1,\ldots, 100$ and $t=10$ in \Cref{ex_eig_n2}.}\label{rel_error_eigval} \end{figure} \end{example} \begin{thebibliography}{BCOQ92} \bibitem[AAGS23]{tropicalization} Marianne Akian, Xavier Allamigeon, Stéphane Gaubert, and Sergei Sergeev. \newblock Signed tropicalization of polars and application to matrix cones, 2023. \newblock Preprint \arxiv{2305.05637}. \bibitem[ABG16]{akian2016non} Marianne Akian, Ravindra Bapat, and St{\'e}phane Gaubert. \newblock Non-archimedean valuations of eigenvalues of matrix polynomials. \newblock {\em Linear Algebra and its Applications}, 498:592--627, 2016.
\bibitem[AGG09]{akian2009linear} Marianne Akian, St{\'e}phane Gaubert, and Alexander Guterman. \newblock Linear independence over tropical semirings and beyond. \newblock In {\em Proceedings of the International Conference on Tropical and Idempotent Mathematics}, volume 495 of {\em Contemp. Math.}, pages 1--38. AMS, 2009. \bibitem[AGG14]{cramer-guterman} Marianne Akian, Stéphane Gaubert, and Alexander Guterman. \newblock {Tropical Cramer Determinants Revisited}. \newblock In G.L. Litvinov and S.N. Sergeev, editors, {\em {Tropical and Idempotent Mathematics and Applications}}, volume 616 of {\em Contemporary Mathematics}, page~45. {AMS}, 2014. \bibitem[AGN18]{adi} Marianne Akian, St\'ephane Gaubert, and Adi Niv. \newblock Tropical compound matrix identities. \newblock {\em Linear Algebra and its Applications}, 551:162--206, 2018. \bibitem[AGR24]{AGRowen} Marianne Akian, Stephane Gaubert, and Louis Rowen. \newblock Semiring systems arising from hyperrings. \newblock {\em Journal of Pure and Applied Algebra}, 228(6):107584, 2024. \bibitem[AGS20]{allamigeon2020tropical} Xavier Allamigeon, St{\'e}phane Gaubert, and Mateusz Skomra. \newblock Tropical spectrahedra. \newblock {\em Discrete and Computational Geometry}, 63(3):507--548, 2020. \bibitem[AGT23]{tavakolipour2021} Marianne Akian, Stephane Gaubert, and Hanieh Tavakolipour. \newblock Factorization of polynomials over the symmetrized tropical semiring and descartes' rule of sign over ordered valued fields, 2023. \newblock Preprint \arxiv{2301.05483}. \bibitem[BB03]{burkard2003finding} R.~E Burkard and P~Butkovi{\v{c}}. \newblock Finding all essential terms of a characteristic maxpolynomial. \newblock {\em Discrete Applied Mathematics}, 130(3):367--380, 2003. \bibitem[BCOQ92]{baccelli1992synchronization} Fran{\c{c}}ois Baccelli, Guy Cohen, Geert~Jan Olsder, and Jean-Pierre Quadrat. \newblock {\em Synchronization and linearity: an algebra for discrete event systems}. \newblock John Wiley \& Sons Ltd, 1992. \bibitem[BL07]{butkovivc2007job} P~Butkovi{\v{c}} and S~Lewis. \newblock On the job rotation problem. \newblock {\em Discrete Optimization}, 4(2):163--174, 2007. \bibitem[BL21]{baker2018descartes} Matthew Baker and Oliver Lorscheid. \newblock Descartes' rule of signs, {N}ewton polygons, and polynomials over hyperfields. \newblock {\em J. Algebra}, 569:416--441, 2021. \bibitem[But10]{butkovivc2010max} P~Butkovi{\v{c}}. \newblock {\em Max-linear Systems: Theory and Algorithms}. \newblock Springer, 2010. \bibitem[CGM80]{cuninghame1980algebra} R.~A. Cuninghame-Green and P.F.J. Meijer. \newblock An algebra for piecewise-linear minimax problems. \newblock {\em Discrete Applied Mathematics}, 2(4):267--294, 1980. \bibitem[Gau92]{gaubert1992theorie} St{\'e}phane Gaubert. \newblock {\em Th{\'e}orie des syst{\`e}mes lin{\'e}aires dans les dio{\"\i}des}. \newblock PhD thesis, Paris, ENMP, 1992. \bibitem[GK10]{gassner2010fast} E~Gassner and B~Klinz. \newblock A fast parametric assignment algorithm with applications in max-algebra. \newblock {\em Networks}, 55(2):61--77, 2010. \bibitem[Gun21]{gunn} T.~Gunn. \newblock A {N}ewton polygon rule for formally-real valued fields and multiplicities over the signed tropical hyperfield, 2021. \newblock Preprint \arxiv{1911.12274v2}. \bibitem[Gun25]{gunn2} Sera Gunn. \newblock Tropical extensions and {Baker}-{Lorscheid} multiplicities for idylls. \newblock {\em Commun. Algebra}, 53(1):63--89, 2025. \bibitem[IMS09]{itenberg2009tropical} Ilia Itenberg, Grigory Mikhalkin, and Eugenii~I Shustin. 
\newblock {\em Tropical algebraic geometry}, volume~35. \newblock Springer Science \& Business Media, 2009. \bibitem[IR10]{IR} Z.~Izhakian and L.~Rowen. \newblock Supertropical algebra. \newblock {\em Advances in Mathematics}, 225(4):2222--2286, 2010. \bibitem[IR11]{izhakianmatrix3} Zur Izhakian and Louis Rowen. \newblock Supertropical matrix algebra. {III}: {Powers} of matrices and their supertropical eigenvalues. \newblock {\em J. Algebra}, 341(1):125--149, 2011. \bibitem[JSY20]{Jell2020} Philipp Jell, Claus Scheiderer, and Josephine Yu. \newblock Real tropicalization and analytification of semialgebraic sets. \newblock {\em International Mathematics Research Notices}, 2022(2):928--958, May 2020. \bibitem[KM13]{kolokoltsov2013idempotent} Vassili~N Kolokoltsov and Victor~P Maslov. \newblock {\em Idempotent analysis and its applications}, volume 401. \newblock Springer Science \& Business Media, 2013. \bibitem[Lor22]{Lorsch22} O.~Lorscheid. \newblock Tropical geometry over the tropical hyperfield. \newblock {\em Rocky Mountain J. of Math.}, 52:189--222, 2022. \bibitem[Mas87]{maslov1987methodes} V.~Maslov. \newblock {\em M{\'e}thodes op{\'e}ratorielles}. \newblock Mir Moscow, 1987. \bibitem[{Max}90]{maxplus90b} {Max Plus}. \newblock Linear systems in $(\max,+)$-algebra. \newblock In {\em Proceedings of the 29th Conference on Decision and Control}, Honolulu, Dec. 1990. \bibitem[MS15]{maclagan2015introduction} Diane Maclagan and Bernd Sturmfels. \newblock {\em Introduction to tropical geometry}, volume 161. \newblock American Mathematical Soc., 2015. \bibitem[NSW21]{Nishida2021} Yuki Nishida, Kohei Sato, and Sennosuke Watanabe. \newblock A min-plus analogue of the {Jordan} canonical form associated with the basis of the generalized eigenspace. \newblock {\em Linear Multilinear Algebra}, 69(15):2933--2943, 2021. \bibitem[NWW20]{Nishida2020} Yuki Nishida, Sennosuke Watanabe, and Yoshihide Watanabe. \newblock On the vectors associated with the roots of max-plus characteristic polynomials. \newblock {\em Appl. Math., Praha}, 65(6):785--805, 2020. \bibitem[NWW21]{nishida2021independence} Yuki Nishida, Sennosuke Watanabe, and Yoshihide Watanabe. \newblock Independence and orthogonality of algebraic eigenvectors over the max-plus algebra, 2021. \newblock Preprint \arxiv{2110.00285}. \bibitem[Row22]{Rowen2} L.H. Rowen. \newblock Algebras with a negation map. \newblock {\em Eur. J. Math.}, 8(1):62--138, 2022. \bibitem[TS18]{tavakolipour2018tropical} Hanieh Tavakolipour and Fatemeh Shakeri. \newblock On tropical eigenvalues of tridiagonal toeplitz matrices. \newblock {\em Linear Algebra and its Applications}, 539:198--218, 2018. \bibitem[TS20]{tavakolipour2020asymptotics} Hanieh Tavakolipour and Fatemeh Shakeri. \newblock Asymptotics of the eigenvalues for exponentially parameterized pentadiagonal matrices. \newblock {\em Numerical Linear Algebra with Applications}, 27(6):e2330, 2020. \bibitem[Vir01]{viro2001dequantization} Oleg Viro. \newblock Dequantization of real algebraic geometry on logarithmic paper. \newblock In {\em European Congress of Mathematics}, pages 135--146. Springer, 2001. \bibitem[Vir10]{viro2010hyperfields} Oleg Viro. \newblock Hyperfields for tropical geometry i. hyperfields and dequantization, 2010. \newblock Preprint \arxiv{1006.3034}. \bibitem[Yu15]{yu2015tropicalizing} Josephine Yu. \newblock Tropicalizing the positive semidefinite cone. \newblock {\em Proceedings of the American Mathematical Society}, 143(5):1891--1895, 2015. 
\end{thebibliography} \appendix \section{Detailed proofs of some results} Let us first prove the following result which corresponds to the statement of \Cref{def_pd1} for $n=2$. \begin{lemma}\label{lemma_sergey_pd} Let $a, b, c \in \smax^{\vee}$. Then, \[\zero \lsign (a x_1^{ 2}) \oplus (b x_1 x_2) \oplus (c x_2^{ 2})\quad \forall (x_1, x_2) \in(\smax^{\vee})^2\setminus \{(\zero,\zero)\}\] if and only if \[\zero \lsign a,\; \zero \lsign c, \;b^{ 2} \lsign a c\enspace .\] \end{lemma} \begin{proof} $(\Rightarrow)$ Considering $x_1 = \unit $ and $x_2=\zero$, we get $\zero \lsign a$. Similarly, by considering $x_1 = \zero $ and $x_2=\unit$, we have $\zero \lsign c$. This shows that $a,c \in \smax^{\oplus}\setminus \{\zero\}$. Then, taking $x_1=a^{ -\frac{1}{2}}$ and $x_2=\eta c^{ -\frac{1}{2}}$ with $\eta \in \{\ominus \unit, \unit\}$, we get \[\zero \lsign \unit \oplus (\eta b a^{ -\frac{1}{2}} c^{ -\frac{1}{2}})\oplus \unit =\unit \oplus (\eta b a^{ -\frac{1}{2}} c^{ -\frac{1}{2}}),\] which is possible only if $|b| a^{ -\frac{1}{2}} c^{ -\frac{1}{2}} \lsign \unit$, or equivalently if $b^{ 2} \lsign a c$, using the second part of \Cref{product_order} and \Cref{modulus_order}. $(\Leftarrow)$ Assume $a,c \in \smax^{\oplus} \setminus \{\zero\}$ and $b\in \smax^\vee$ are such that $b^{ 2} \lsign a c$. By the change of variables $x_1= y_1 a^{ -\frac{1}{2}}$ and $x_2= y_2 c^{ -\frac{1}{2}}$, we have $\zero \lsign (a x_1^{ 2}) \oplus (b x_1 x_2) \oplus (c x_2^{ 2})$ iff \begin{equation}\label{iff} \zero \lsign (y_1^{ 2}) \oplus (u y_1 y_2) \oplus (y_2^{ 2}), \end{equation} where $u = b a^{ -\frac{1}{2}} c^{ -\frac{1}{2}}$ is such that $|u|\lsign \unit$ (again by \Cref{product_order} and \Cref{modulus_order}). W.l.o.g.\ we may assume that $|y_2| \leqsign |y_1|$. This implies that $y_1\neq \zero$, since $(x_1,x_2)\neq (\zero,\zero)$, and that $y_2^{ 2} \leqsign y_1^{ 2}$, by \Cref{modulus_order}. Then $|u y_1 y_2|\leqsign |u| |y_1|^{ 2} \lsign |y_1|^{ 2} =|y_1^{ 2}|$ (again by \Cref{product_order}). So, since $\leqsign$ and $\prec$ coincide in $\smax^\oplus$, we get $|u y_1 y_2|\prec |y_1^{ 2}|$. Using \Cref{property-preceq}, we obtain that the right hand side of \eqref{iff} is equal to $y_1^{ 2} \gsign \zero$. \end{proof} \begin{proof}[Proof of \Cref{def_pd1}] Let $S$ be the set given in \Cref{def_pd1}, and let $A \in S$. We need to prove that $A$ is a $\pd$ matrix. Let $x\in (\smax^\vee)^n\setminus \{\zero\}$, and $i,j=1,\ldots, n$ with $i\neq j$. Applying the ``if'' part of \Cref{lemma_sergey_pd} to $a=a_{ii}$, $b=a_{ij}$ and $c=a_{jj}$, we have $\zero \lsign (x_i a_{ii} x_i ) \oplus (x_i a_{ij} x_j) \oplus (x_j a_{jj} x_j)$, if $(x_i,x_j)\neq (\zero,\zero)$. Moreover, the previous expression is equal to $\zero$ if $(x_i,x_j)= (\zero,\zero)$. By summing these inequalities over all $i\neq j$, and using that $x$ is not the $\zero$ vector and the idempotency of addition, we obtain $\zero \lsign x^T A x$. This shows that $S\subseteq \pd_{n}(\smax^\vee)$. Let us prove the reverse inclusion. Consider $A \in \pd_n(\smax^{\vee})$. Then, every $2 \times 2$ principal submatrix of $A$ is $\pd$, and therefore, by applying the ``only if'' part of \Cref{lemma_sergey_pd}, we obtain, for $i\neq j$, $\zero \lsign a_{ii}$, $\zero \lsign a_{jj}$ and $a_{ij}^{ 2} \lsign a_{ii} a_{jj}$. Hence, $A\in S$, which shows that $\pd_{n}(\smax^\vee)\subseteq S$ and concludes the proof of \Cref{def_pd1}. \end{proof} \end{document}
2412.13215v3
http://arxiv.org/abs/2412.13215v3
Scattering theory for the defocusing 3d NLS in the exterior of a strictly convex obstacle
\documentclass[a4paper,reqno, 10pt]{amsart} \usepackage{amsmath,amssymb,amsfonts,amsthm, mathrsfs} \usepackage{lmodern} \usepackage{makecell} \usepackage{diagbox} \usepackage{multirow} \usepackage{booktabs} \usepackage{verbatim,wasysym,cite} \newcommand{\xp}{x^{\perp}} \newcommand{\scaa}{L_{t,x}^\frac{5\alpha}{2}} \newcommand{\isca}{L_{t}^\frac{5\alpha}{2}L_x^{\frac{30\alpha}{15\alpha-8}}} \newcommand{\HH}{\R_+^3} \usepackage{microtype} \usepackage{color,enumitem,graphicx} \usepackage[colorlinks=true,urlcolor=blue, citecolor=red,linkcolor=blue, linktocpage,pdfpagelabels, bookmarksnumbered,bookmarksopen]{hyperref} \usepackage[english]{babel} \usepackage[symbol]{footmisc} \renewcommand{\epsilon}{{\varepsilon}} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{Conjection}{Conjecture}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \oddsidemargin .8cm \evensidemargin .8cm \marginparsep 10pt \topmargin 0.5cm \headsep10pt \headheight 10pt \textheight 9.2in \textwidth 5.8in \sloppy \newcommand{\A}{\mathbb A} \newcommand{\C}{\mathbb C} \newcommand{\D}{\mathbb D} \newcommand{\R}{\mathbb R} \newcommand{\N}{\mathbb N} \newcommand{\T}{\mathbb T} \newcommand{\Z}{\mathbb Z} \newcommand{\dis}{\displaystyle} \newcommand{\norm}{\big\|} \newcommand{\pn}{\phi_n} \newcommand{\cn}{\chi_n} \newcommand{\lamn}{\lambda_n} \newcommand{\psie}{\psi_{\varepsilon}} \newcommand{\Hsc}{\dot{H}^{s_c}} \newcommand{\Nsc}{\dot{N}^{s_c}} \newcommand{\Xsc}{\dot{X}^{s_c}} \newcommand{\Ssc}{\dot{H}^{s_c}} \newcommand{\vn}{\tilde{v}_n} \newcommand{\DeltaO}{\Delta_{\Omega}} \newcommand{\DeltaOn}{\Delta_{\Omega_n}} \newcommand{\RRT}{\R\times\R^3} \newcommand{\RO}{\R\times\Omega} \newcommand{\ROn}{\R\times\On} \newcommand{\On}{\Omega_n} \def\({\left(} \def\){\right)} \def\<{\left\langle} \def\>{\right\rangle} \def\Sch{{\mathcal S}}\def\Pch{{\mathcal P}} \def\O{\mathcal O} \def\B{\mathcal B} \def\F{\mathcal F} \def\K{\mathcal K} \def\L{\mathcal L} \def\EE{\mathcal E} \def\d{{\partial}} \def\eps{\varepsilon} \def\si{\sigma} \def\M{\mathcal M} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\Div}{div} \DeclareMathOperator{\RE}{Re} \DeclareMathOperator{\IM}{Im} \def\Eq#1#2{\mathop{\sim}\limits_{#1\rightarrow#2}} \def\Tend#1#2{\mathop{\longrightarrow}\limits_{#1\rightarrow#2}} \newcommand{\qtq}[1]{\quad\text{#1}\quad} \setlength{\textheight}{23.1cm} \setlength{\textwidth}{16cm} \hoffset=-1.7cm \begin{document} \title[3d NLS outside a convex obstacle] {Scattering theory for the defocusing 3d NLS in the exterior of a strictly convex obstacle } \author[X. Liu]{Xuan Liu} \address{School of Mathematics, Hangzhou Normal University, \ Hangzhou ,\ 311121, \ China} \email{[email protected]} \author{Yilin Song} \address{Yilin Song \newline \indent The Graduate School of China Academy of Engineering Physics, Beijing 100088,\ P. R. China} \email{[email protected]} \author{Jiqiang Zheng} \address{Jiqiang Zheng \newline \indent Institute of Applied Physics and Computational Mathematics, Beijing, 100088, China. 
\newline\indent National Key Laboratory of Computational Physics, Beijing 100088, China} \email{zheng\[email protected], [email protected]} \begin{abstract} In this paper, we investigate the global well-posedness and scattering theory for the defocusing nonlinear Schr\"odinger equation $iu_t + \Delta_\Omega u = |u|^\alpha u$ in the exterior domain $\Omega$ of a smooth, compact and strictly convex obstacle in $\mathbb{R}^3$. It is conjectured that in Euclidean space, if the solution has an a priori bound in the critical Sobolev space, that is, $u \in L_t^\infty(I; \dot{H}_x^{s_c}(\mathbb{R}^3))$ with $s_c := \frac{3}{2} - \frac{2}{\alpha} \in (0, \frac{3}{2})$, then $u$ is global and scatters. In this paper, assuming that this conjecture holds, we prove that if $u$ is a solution to the nonlinear Schr\"odinger equation in the exterior domain $\Omega$ with Dirichlet boundary condition and satisfies $u \in L_t^\infty(I; \dot{H}^{s_c}_D(\Omega))$ with $s_c \in \left[\frac{1}{2}, \frac{3}{2}\right)$, then $u$ is global and scatters. The proof of the main results relies on the concentration-compactness/rigidity argument of Kenig and Merle [Invent. Math. {\bf 166} (2006)]. The main difficulty is to construct minimal counterexamples when the scaling and translation invariance break down on $\Omega$. To achieve this, two key ingredients are required. First, we adopt the approach of Killip, Visan, and Zhang [Amer. J. Math. {\bf 138} (2016)] to derive the linear profile decomposition for the linear propagator $e^{it\Delta_\Omega}$ in $\dot{H}^{s_c}(\Omega)$. The second ingredient is the embedding of the nonlinear profiles. More precisely, we need to demonstrate that nonlinear solutions in the limiting geometries, which exhibit global spacetime bounds, can be embedded back into $\Omega$. Finally, to rule out the minimal counterexamples, we will establish long-time Strichartz estimates for the exterior domain NLS, along with spatially localized and frequency-localized Morawetz estimates. \vspace{0.3cm} \noindent \textbf{Keywords:} Schr\"odinger equation, well-posedness, scattering, critical norm, exterior domain. \end{abstract} \maketitle \tableofcontents \medskip \section{Introduction} We study the defocusing nonlinear Schr\"odinger equation in the exterior domain $\Omega$ of a smooth, compact, strictly convex obstacle in $\mathbb{R}^3$ with Dirichlet boundary condition: \begin{equation} \begin{cases} iu_t+\Delta_\Omega u=|u|^{\alpha }u,\\ u(0,x)=u_0(x),\\ u(t,x)|_{x\in \partial \Omega}=0, \end{cases}\label{NLS} \end{equation} where $u$ is a complex-valued function defined in $\mathbb{R} \times \Omega$ and $-\Delta_{\Omega}$ denotes the Dirichlet Laplacian on $\Omega$. The Dirichlet Laplacian is the unique self-adjoint operator on $L^2(\Omega)$ corresponding to the following quadratic form \[ Q : H_0^1(\Omega) \to [0,\infty) \quad \text{with} \quad Q(f) := \int_{\Omega} \overline{\nabla f(x)} \cdot \nabla f(x) \, dx. \] We take initial data $u_0\in \dot H^{s}_D(\Omega)$, where for $s\ge0$, the homogeneous Sobolev space is defined by the functional calculus as the completion of $C_c^{\infty}(\Omega)$ with respect to the norm \[ \|f\|_{\dot{H}^{s}_D(\Omega)} := \|(-\Delta_\Omega)^{s/2} f \|_{L^2(\Omega)}.
\] It is easy to verify that sufficiently smooth solutions $u$ to equation (\ref{NLS}) obey the mass and energy conservation laws: \[ M_{\Omega}[u(t)] := \int_{\Omega} |u(t,x)|^2 dx = M_\Omega[u_0], \] \[ E_{\Omega}[u(t)] := \frac{1}{2} \int_{\Omega} |\nabla u(t,x)|^2 dx + \frac{1}{\alpha +2} \int_{\Omega} |u(t,x)|^{\alpha +2} dx = E_\Omega[u_0]. \] When posed on the whole Euclidean space $\mathbb{R}^3$, the Cauchy problem \eqref{NLS} is scale-invariant. More precisely, the scaling transformation \[ u(t,x) \longmapsto \lambda^{\frac{2}{\alpha }} u(\lambda^2 t, \lambda x) \quad \text{for} \quad \lambda > 0, \] leaves the class of solutions to NLS$_{\mathbb{R} ^3}$ invariant. This transformation also identifies the critical space $\dot H^{s_c}_x$, where the critical regularity $s_c$ is given by $s_c:=\frac{3}{2}-\frac{2}{\alpha }$. We call \eqref{NLS} mass-critical if $s_c=0$, energy-critical if $s_c=1$, inter-critical if $0<s_c<1$ and energy-supercritical if $s_c>1$. Although the obstacle in the domain alters certain aspects of the equation, it does not affect the problem's inherent dimensionality. Therefore, (\ref{NLS}) maintains the same criticality and is classified as $\dot H^{s_c}_D(\Omega)$-critical. Throughout this paper, we restrict ourselves to the following notion of solution. \begin{definition}[Solution]\label{Defsolution} A function $ u : I \times \Omega \to \mathbb{C} $ on a non-empty interval $ I \ni 0 $ is called a \emph{solution} to (\ref{NLS}) if it satisfies $u \in C_t \dot{H}^{s_c}_D(K \times \Omega) \cap L^{\frac{5\alpha }{2}}_{t,x}(K \times \Omega)$ for every compact subset $K \subset I$ and obeys the Duhamel formula \[ u(t) = e^{it \Delta_\Omega} u_0 - i \int_0^t e^{i(t-s) \Delta_\Omega} (|u|^\alpha u)(s) \, ds \] for each $ t \in I $. We refer to the interval $I$ as the lifespan of $u$. We say that $ u $ is a maximal-lifespan solution if the solution cannot be extended to any strictly larger interval. We say that $u$ is a global solution if $I=\mathbb{R} $. \end{definition} The assumption that the solution lies in the space $L_{t,x}^{\frac{5\alpha }{2}}(I\times \Omega)$ locally in time is natural since by the Strichartz estimate (see Proposition \ref{PStrichartz} below), the linear flow always lies in this space. Also, if a solution $u$ to (\ref{NLS}) is global, with $ \|u\|_{L_{t,x}^{\frac{5\alpha }{2}}(\mathbb{R} \times \Omega)} < \infty $, then it \emph{scatters}; that is, there exist unique $ u_\pm \in \dot{H}^{s_c}_D(\Omega) $ such that \[ \lim_{t \to \pm \infty} \left\| u(t) - e^{it \Delta_\Omega} u_\pm \right\|_{\dot{H}^{s_c}_D(\Omega)} = 0. \] The study of NLS in exterior domains was initiated in \cite{BurqGerardTzvetkov2004}. The authors proved a local existence result for the 3d sub-cubic (i.e., $\alpha < 3$) NLS$_{\Omega}$ equation, assuming that the obstacle is non-trapping. Subsequently, Anton \cite{Anton2008} extended this result to the cubic nonlinearity, while Planchon-Vega \cite{PlanchonVega2009} extended it to the energy-subcritical NLS$_{\Omega}$ equation in dimension $d=3$. Later, Planchon and Ivanovici \cite{IvanoviciPlanchon2010} established the small data scattering theory for the energy-critical NLS$_\Omega$ equation in dimension $d = 3$. For NLS outside a smooth, compact, strictly convex obstacle $\Omega$ in $\mathbb{R} ^3$, Killip-Visan-Zhang \cite{KillipVisanZhang2016a} proved that for arbitrarily large initial data, the corresponding solutions to the defocusing energy-critical equation scatter in the energy space.
For related results in the focusing case, see e.g. \cite{DuyckaertsLandoulsiRoudenko2022JFA, KillipVisanZhang2016c, KYang, XuZhaoZheng}. In this paper, we investigate the $\dot H^{s_c}_D(\Omega)$-critical global well-posedness and scattering theory for the defocusing NLS (\ref{NLS}) in the exterior domain $\Omega$ of a smooth, compact and strictly convex obstacle in $\mathbb{R}^3$. To put the problem in context, let us first recall some earlier results for the equivalent problem posed in the whole Euclidean space $\mathbb{R}^d$. The study of global well-posedness and scattering theory for nonlinear Schr\"odinger equations \begin{equation} iu_t + \Delta u = \pm |u|^{\alpha }u,\qquad (t,x) \in \mathbb{R} \times \mathbb{R}^d \label{NLS0} \end{equation} in $\dot H^{s_c} $ has seen significant advancements in recent years. Due to the presence of conserved quantities at the critical regularity, the mass- and energy-critical equations have been the most widely studied. For the defocusing energy-critical NLS, it is now known that arbitrary data in $\dot H^1_x$ lead to solutions that are global and scatter. This was proven first for radial initial data by Bourgain \cite{Bourgain1999}, Grillakis \cite{Grillakis2000}, and Tao \cite{Tao2005}, and later for arbitrary data by Colliander-Keel-Staffilani-Takaoka-Tao \cite{Colliander2008}, Ryckman-Visan \cite{RyckmanVisan2007} and Visan \cite{Visan2007,Visan2012} (for results in the focusing case, see \cite{Dodson2019ASENS,KenigMerle2006,KillipVisan2010}). For the mass-critical NLS, it has also been established that arbitrary data in $L^2_x$ lead to solutions that are global and scatter. This was proven through the use of minimal counterexamples, first for radial data in dimensions $d\ge2$ (see \cite{TaoVisanZhang2007,KillipTaoVisan2009,KillipVisanZhang2008}), and later for arbitrary data in all dimensions by Dodson \cite{Dodson2012,Dodson2015,Dodson2016a,Dodson2016b}. Killip-Visan \cite{KillipVisan2012} and Visan \cite{Visan2012} revisited the defocusing energy-critical problem in dimensions $d \in \{3,4\}$ from the perspective of minimal counterexamples, utilizing techniques developed by Dodson \cite{Dodson2012}. In particular, they established a ``long-time Strichartz estimate'' for almost periodic solutions, which serves to rule out the existence of frequency-cascade solutions. Additionally, they derived a frequency-localized interaction Morawetz inequality (which may in turn be used to preclude the existence of soliton-like solutions). Unlike the energy- and mass-critical problems, for any other $s_c\neq 0,1$, there are no conserved quantities that control the growth in time of the $\dot H^{s_c}$ norm of the solutions. It is conjectured that, assuming some \textit{a priori} control of a critical norm, global well-posedness and scattering hold for any $s_c > 0$ and in any spatial dimension: \begin{Conjection}\label{CNLS0} Let $d \geq 1$, $\alpha \geq \frac{4}{d}$, and $s_c = \frac{d}{2} - \frac{2}{\alpha }$. Assume $u: I \times \mathbb{R}^d \rightarrow \mathbb{C}$ is a maximal-lifespan solution to (\ref{NLS0}) such that \begin{equation} u \in L_t^\infty \dot{H}_x^{s_c}(I \times \mathbb{R}^d), \notag \end{equation} then $u$ is global and scatters as $t \to \pm \infty$.
\end{Conjection} The first work dealing with Conjecture \ref{CNLS0} is attributed to Kenig and Merle \cite{KenigMerle2010} in the case $d = 3$, $s_c = \frac{1}{2}$, using their concentration-compactness method developed in \cite{KenigMerle2006} and the scaling-critical Lin-Strauss Morawetz inequality. Subsequently, Murphy \cite{Murphy2014b} extended the methods of \cite{KenigMerle2010} to higher dimensions, resolving Conjecture \ref{CNLS0} for $d \geq 3$ and $s_c = \frac{1}{2}$. In the inter-critical case ($0 < s_c < 1$), Murphy \cite{Murphy2014, Murphy2015} developed a long-time Strichartz estimate in the spirit of \cite{Dodson2012} and proved Conjecture \ref{CNLS0} for general data in the cases \begin{equation} \begin{cases} \frac{1}{2}\le s_c\le \frac{3}{4},\qquad &d=3\\ \frac{1}{2}\le s_c<1,&d=4\\ \frac{1}{2}<s_c<1,&d=5; \end{cases}\notag \end{equation} and for radial data in the case $d=3$, $s_c\in (0,\frac{1}{2})\cup (\frac{3}{4},1)$. Later, Gao-Miao-Yang \cite{GaoMiaoYang2019} resolved Conjecture \ref{CNLS0} for radial initial data in the case $d \geq 4$, $0 < s_c < \frac{1}{2}$; Gao-Zhao \cite{GaoZhao2019} resolved Conjecture \ref{CNLS0} for general initial data in the case $d \geq 5$, $\frac{1}{2} < s_c < 1$. See also \cite{XieFang2013} for earlier partial results regarding these cases. Recently, Yu \cite{Yu2021} resolved Conjecture \ref{CNLS0} in the case $d = 2$, $s_c = \frac{1}{2}$, by first developing a long-time Strichartz estimate in the spirit of \cite{Dodson2016a} and then utilizing the interaction Morawetz estimate from Planchon-Vega \cite{PlanchonVega2009} to exclude the minimal counterexamples. See Table \ref{table1}. In the energy-supercritical case ($s_c > 1$), Killip and Visan \cite{KillipVisan2010} were the first to resolve Conjecture \ref{CNLS0} for $d \ge 5$ under certain conditions on $s_c$. Subsequently, Murphy \cite{Murphy2015} addressed the conjecture for radial initial data in the case $d = 3$ and $s_c \in (1, \frac{3}{2})$. By developing long-time Strichartz estimates for the energy-supercritical regime, Miao-Murphy-Zheng \cite{MiaoMurphyZheng2014} and Dodson-Miao-Murphy-Zheng \cite{Dodson2017} resolved Conjecture \ref{CNLS0} for general initial data when $d = 4$ and $1 < s_c \le \frac{3}{2}$. For the case $d = 4$ and $\frac{3}{2} < s_c < 2$ with radial initial data, see the work of Lu and Zheng \cite{LuZheng2017}. More recently, Zhao \cite{Zhao2017AMS} and Li-Li \cite{LiLi2022SIAM} resolved Conjecture \ref{CNLS0} in the case $d \ge 5$ and $1 < s_c < \frac{d}{2}$. For $d \ge 8$, their results also required $\alpha$ to be an even number. See Table \ref{table2}.
\begin{table}[h] \centering \caption{Results for Conjecture \ref{CNLS0} in the sub-critical case: $0<s_c<1$}\label{table1} \begin{tabular}{|c|c|c|c|} \hline & $0 < s_c < \frac{1}{2}$ & $s_c=\frac{1}{2}$& $\frac{1}{2} < s_c < 1 $\\ \hline $d = 1 $& \text{\textcolor{blue}{no results}} & \diagbox{}{} & \diagbox{}{} \\ \hline $d = 2 $& \text{\textcolor{blue}{no results}} & Yu \cite{Yu2021}& \text{\textcolor{blue}{no results}} \\ \hline $d=3$ & \textcolor{blue}{radial}, Murphy \cite{Murphy2015}&Kenig-Merle \cite{KenigMerle2010} & \thead{$\frac{1}{2}<s_c\le \frac{3}{4}$, Murphy \cite{Murphy2014} \\\textcolor{blue}{radial}, $\frac{3}{4}<s_c<1$, Murphy \cite{Murphy2015}} \\ \hline $d\ge4$ & \textcolor{blue}{radial}, Gao-Miao-Yang \cite{GaoMiaoYang2019}& Murphy \cite{Murphy2014b} &Gao-Zhao \cite{GaoZhao2019}, Murphy \cite{Murphy2014}, Xie-Fang \cite{XieFang2013}\\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Results for Conjecture \ref{CNLS0} in the super-critical case: $1<s_c<\frac{d}{2}$}\label{table2} \begin{tabular}{|c|c|} \hline $d=3$ & $1<s_c<\frac{3}{2}$, \textcolor{blue}{radial}, Murphy \cite{Murphy2015}\\ \hline $d=4$ & \thead { $1<s_c<\frac{3}{2}$, Miao-Murphy-Zheng \cite{MiaoMurphyZheng2014}; $s_c=\frac{3}{2}$, Dodson-Miao-Murphy-Zheng \cite{Dodson2017}; \\ $\frac{3}{2}<s_c<2$, \textcolor{blue}{radial}, Lu-Zheng \cite{LuZheng2017}}\\ \hline $d\ge5$ & \thead {$1<s_c<\frac{d}{2}$, and \textcolor{blue}{assume $\alpha $ is even when $d\ge8$}, \\ Killip-Visan \cite{KillipVisan2010}, Zhao \cite{Zhao2017AMS}, Li-Li \cite{LiLi2022SIAM}}\\ \hline \end{tabular} \end{table} Analogous to Conjecture \ref{CNLS0}, it is conjectured that for the NLS in the exterior domain $\Omega$ of a smooth, compact, strictly convex obstacle in $\mathbb{R}^3$: \begin{Conjection}\label{CNLS} Let $\alpha >\frac{4}{3}$ and $s_c = \frac{3}{2} - \frac{2}{\alpha }$. Assume $u: I \times \Omega \rightarrow \mathbb{C}$ is a maximal-lifespan solution to (\ref{NLS}) such that \begin{equation} u \in L_t^\infty \dot{H}_D^{s_c}(I \times \Omega), \label{Ebound} \end{equation} then $u$ is global and scatters as $t \to \pm \infty$. \end{Conjection} Killip-Visan-Zhang \cite{KillipVisanZhang2016a} first resolved Conjecture \ref{CNLS} in the case $d = 3$ and $s_c = 1$. Since this corresponds to the energy-critical setting, the energy conservation law eliminates the need for the assumption (\ref{Ebound}); it suffices to require the initial data to belong to $\dot H^{1}_D(\Omega)$. In this paper, under the assumption that Conjecture \ref{CNLS0} holds in Euclidean space, we resolve Conjecture \ref{CNLS} in the case $d = 3$ and $\frac{1}{2} \le s_c < \frac{3}{2}$. Our main result is as follows: \begin{theorem}\label{T1} Let $s_c\in [\frac{1}{2},\frac{3}{2})$. Assume that Conjecture \ref{CNLS0} holds. Then Conjecture \ref{CNLS} holds. \end{theorem} \begin{remark} In Section \ref{S4}, we will embed the solutions in the limiting geometries into $\Omega$ via the stability result (Theorem \ref{TStability}). To achieve this, we need to assume that Conjecture \ref{CNLS0} holds true, so that the solutions in the limiting geometries satisfy uniform spacetime bounds; then the solutions to NLS$_{\Omega}$ will inherit these spacetime bounds. These solutions to NLS$_{\Omega}$ will appear again as nonlinear profiles in Proposition \ref{Pps}. \end{remark} \begin{remark} As mentioned earlier, Conjecture \ref{CNLS0} has been resolved for $s_c \in [\frac{1}{2}, \frac{3}{4}]$ and $s_c = 1$.
Furthermore, for $s_c \in (\frac{3}{4}, 1) \cup (1, \frac{3}{2})$, Murphy \cite{Murphy2015} addressed Conjecture \ref{CNLS0} in the case of radial initial data. Hence, in Theorem \ref{T1}, we only need to assume that Conjecture \ref{CNLS0} holds for non-radial initial data when $s_c \in (\frac{3}{4}, 1) \cup (1, \frac{3}{2})$. \end{remark} \subsection{Outline of the proof of Theorem \ref{T1}} We proceed by contradiction and assume that Theorem \ref{T1} is false. Observe that Theorem \ref{TLWP} guarantees global existence and scattering for sufficiently small initial data. From this we deduce the existence of a critical threshold size. Below this threshold, the theorem holds, but above it, solutions with arbitrarily large scattering size can be found. By employing a limiting argument, we establish the existence of minimal counterexamples, which are blowup solutions precisely at the critical threshold. Due to their minimality, these solutions exhibit compactness properties that ultimately conflict with the dispersive nature of the equation. Consequently, we can exclude their existence and conclude that Theorem \ref{T1} holds. A key characteristic of these minimal counterexamples is their almost periodicity modulo the symmetries of the equation. We briefly discuss this property and its immediate implications; for a detailed analysis, the reader is referred to \cite{KillipVisan2013}. \begin{definition} Let $s_c>0$. A solution $u:I\times \Omega\rightarrow \mathbb{C}$ to (\ref{NLS}) is called almost periodic if (\ref{Ebound}) holds and there exists a function $C : \mathbb{R}^+ \to \mathbb{R}^+$ such that for all $t \in I$ and all $\eta > 0$, \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u(t,x)\|_{L^2_x(\Omega\cap \{x:|x|>C(\eta)\})} + \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega_{>C(\eta)}u(t,x)\|_{L^2_x(\Omega)}<\eta,\notag \end{equation} where $P^{\Omega}_{>N} $ denotes the Littlewood-Paley projection adapted to the Dirichlet Laplacian on $\Omega$ (cf. (\ref{E11121})). We call $C$ the \emph{compactness modulus function}. \end{definition} \begin{remark} Using the equivalence of norms in Lemma \ref{LSquare function estimate}, it is straightforward to deduce that when $\{u(t):t\in I\}$ is precompact in $\dot H^{s_c}_D(\Omega)$, then $u:I\times \Omega\rightarrow \mathbb{C}$ is almost periodic and there exist functions $C, c : \mathbb{R}^+ \to \mathbb{R}^+$ such that for all $t \in I$ and all $\eta > 0$, \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega_{<c(\eta)}u(t,x)\|_{L^2_x(\Omega)} + \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega_{>C(\eta)}u(t,x)\|_{L^2_x(\Omega)}<\eta.\label{E10101} \end{equation} \end{remark} To proceed, we require the following result, which relates the interval length of an almost periodic solution to its Strichartz norms. This result can be established by adapting the proof of \cite[Lemma 5.21]{KillipVisan2013} (the only difference being that we need to use the chain rule (\ref{E12133}) instead of the chain rule in Euclidean space). \begin{lemma} \label{Lspace-time bound} Let $s_c\in [\frac{1}{2},\frac{3}{2})$, and suppose $u : I \times \Omega \to \mathbb{C}$ is an almost periodic solution to (\ref{NLS}). Then \[ |I|\lesssim _u \|(-\Delta _\Omega)^{\frac{s_c}{2}} u \|^2_{L^2_t L^6_x (I \times\Omega)} \lesssim_u 1 + |I|. \] \end{lemma} With these preliminaries established, we can now describe the first major step in the proof of Theorem \ref{T1}.
\begin{theorem}[Reduction to almost periodic solutions]\label{TReduction} Suppose that Theorem \ref{T1} fails for some $s_c\in [\frac{1}{2},\frac{3}{2})$. Then there exists a global solution $u : \mathbb{R} \times\Omega \to \mathbb{C}$ to \eqref{NLS} such that $u \in L_t^{\infty} \dot{H}_D^{s_c}(\mathbb{R} \times \Omega)$, whose orbit $\{u(t):t\in \mathbb{R} \}$ is precompact in $\dot H^{s_c}_D(\Omega)$ and there exists $R>0$ such that \begin{equation} \int _{\Omega\cap \{|x|\le R\}}|u(t,x)|^{\frac{3\alpha }{2}}dx\gtrsim1 \quad\text{uniformly for }\quad t\in \mathbb{R} .\label{E} \end{equation} \end{theorem} \begin{remark} Indeed, our proof shows that Theorem \ref{TReduction} is valid for all $s_c \in (0, \frac{3}{2})$. The restriction $ s_c \geq \frac{1}{2}$ in Theorem \ref{T1} arises from the limitations imposed by the indices in Theorem \ref{TEquivalence}, which make it challenging to exclude almost periodic solutions when $s_c\in (0,\frac{1}{2})$. See Remark \ref{R128} for more details. \end{remark} The reduction to almost periodic solutions is now widely regarded as a standard technique in the study of dispersive equations at critical regularity. Keraani \cite{Keraani2006JFA} was the first to prove the existence of minimal blowup solutions, while Kenig-Merle \cite{KenigMerle2006} were the first to use them to establish a global well-posedness result. Since then, this technique has proven to be extremely useful; see \cite{KenigMerle2010,KillipTaoVisan2009,KillipVisan2010,KillipVisan2010AJM,KillipVisan2013,KillipVisan2012,KillipVisanZhang2008,MiaoMurphyZheng2014,Murphy2014,Murphy2014b,Murphy2015} for many more examples of this technique in action (and note that this is by no means an exhaustive list). For a good introduction to these methods, see \cite{KillipVisan2013}. The proof of Theorem \ref{TReduction} relies on three key components. First, the linear and nonlinear profile decompositions are required. For the linear profile decomposition, the case $s_c = 1$ was established in \cite{KillipVisanZhang2016a}, and we will follow the methodology outlined in that work. The main tool used to derive the linear profile decomposition is the inverse Strichartz inequality. This inequality shows that a solution with non-trivial spacetime bounds must concentrate at least one bubble. By repeatedly applying the inverse Strichartz inequality, it can be demonstrated that the linear solution concentrates on multiple bubbles, with the remainder term vanishing after passing to a subsequence. After obtaining the linear profile decomposition, the next step is to construct the nonlinear profiles. These nonlinear profiles are solutions to NLS$_\Omega$ with initial data corresponding to the linear profiles. Due to the presence of the boundary, suitable scaling and spatial translations lead to the study of NLS in different geometries, which significantly distinguishes our setting from the Euclidean setting. The main challenge is that we cannot guarantee whether a profile with given initial data is entirely contained within the exterior domain. Additionally, the profile may exist at any scale and any possible location. To address this, we adopt the approach from \cite{KillipVisanZhang2016a}, which associates each profile with a specific limiting case. Moreover, we consider three scenarios arising from the scaling and spatial translation of $\Omega$. 
The rescaled domain is denoted as $\Omega_n = \lambda_n^{-1}(\Omega - \{x_n\})$ for the first two cases and $\Omega_n = \lambda_n^{-1} R_n^{-1}(\Omega - \{x_n^*\})$ for the third case, where $x_n^* \in \partial \Omega$, $|x_n - x_n^*| = \operatorname{dist}(x_n, \Omega^c)$, and $R_n \in \operatorname{SO}(3)$ satisfies $R_n e_3 = \frac{x_n - x_n^*}{|x_n - x_n^*|}$. These scenarios are as follows: \begin{enumerate} \item When $\lambda_n \to \infty$, the rescaled domain $\Omega_n$ approximates $\mathbb{R}^3$. \item When $\frac{\operatorname{dist}(x_n, \Omega^c)}{\lambda_n} \to \infty$, the domain $\Omega_n^c$ retreats to infinity. \item When $\lambda_n \to 0$ and $\frac{\operatorname{dist}(x_n, \Omega^c)}{\lambda_n} = K > 0$, the domain $\Omega_n$ approximates a half-space. \end{enumerate} The second ingredient is a stability result for the nonlinear equation (see e.g. Theorem \ref{TStability} below). The third ingredient is a decoupling statement for nonlinear profiles. The last two ingredients are closely related, in the sense that the decoupling must hold in a space that is dictated by the stability theory. More precisely, this means that the decoupling must hold in a space with $s_c$ derivatives. Keraani \cite{Keraani2001} showed how to prove such a decoupling statement in the context of the mass- and energy-critical NLS; however, these arguments rely on pointwise estimates to bound the difference of nonlinearities and hence fail to be directly applicable in the presence of fractional derivatives. In \cite{KillipVisan2010}, Killip and Visan devised a strategy that is applicable in the energy-supercritical setting, while Murphy \cite{Murphy2014} developed a strategy tailored to the energy-subcritical setting. In particular, by employing a Strichartz square function that provides estimates equivalent to those of $|\nabla|^{s_c}$, they reduce the problem to a framework where Keraani's arguments can be directly applied. In this paper, we adopt the strategies presented in \cite{KillipVisan2010,Murphy2014}. Specifically, by appropriately selecting the parameters and applying the equivalence theorem (Theorem \ref{TEquivalence}), we reduce the proof of the decoupling for nonlinear profiles to the cases addressed in \cite{KillipVisan2010,Murphy2014}. With all the necessary tools in place, we can now apply the standard arguments in \cite{KillipVisan2013} to establish Theorem \ref{TReduction}. Therefore, to complete the proof of Theorem \ref{T1}, it is sufficient to rule out the existence of the solutions described in Theorem \ref{TReduction}. For this purpose, we will utilize versions of the Lin-Strauss Morawetz inequality: \begin{equation} \iint_{I\times \Omega}\frac{|u(t,x)|^{\alpha +2}}{|x|}dxdt\lesssim \||\nabla |^{1/2}u\|_{L^\infty _tL_x^2(I\times \Omega)}^2, \label{E1242} \end{equation} which will be applied in Section \ref{S6} to exclude the existence of almost periodic solutions in Theorem \ref{TReduction} for the case $s_c = \frac{1}{2}$. However, when $s_c > \frac{1}{2}$, the estimate (\ref{E1242}) cannot be directly applied because the solutions considered only belong to $\dot H^{s_c}_D(\Omega)$, which means the right-hand side of (\ref{E1242}) might not be finite. For $s_c > \frac{1}{2}$, it is necessary to suppress the low-frequency components of the solutions to make use of the estimate (\ref{E1242}).
In the context of the 3D radial energy-critical NLS, Bourgain \cite{Bourgain1999} achieved this by proving a space-localized version of (\ref{E1242}) (see also \cite{Grillakis2000,TaoVisanZhang2007}). In Section \ref{S6}, we adopt a similar approach to preclude the existence of almost periodic solutions in Theorem \ref{TReduction} for the range $1 < s_c < 3/2$. However, since one of the error terms arising from space localization requires controlling the solution at the $\dot{H}_D^1$ level, a different strategy is needed for the range $\frac{1}{2} < s_c < 1$. To address this, in Section \ref{S1/2-1}, we develop a version of (\ref{E1242}) localized to high frequencies. This high-frequency localized version will be employed to exclude the existence of almost periodic solutions in Theorem \ref{TReduction} when $\frac{1}{2} < s_c < 1$. The structure of the paper is as follows: In Section \ref{S2}, we introduce the necessary notation and foundational materials for the analysis. This includes the equivalence of Sobolev spaces and the product rule for the Dirichlet Laplacian; Littlewood-Paley theory and Bernstein inequalities; Strichartz estimates; local and stability theories for (\ref{NLS}); local smoothing; the convergence of functions related to the Dirichlet Laplacian as the underlying domains converge; and the behavior of the linear propagator in the context of domain convergence. Section \ref{S3} begins with the proof of the refined and inverse Strichartz inequalities (Proposition \ref{PRefined SZ} and Proposition \ref{inverse-strichartz}). These results establish that linear evolutions with non-trivial spacetime norms must exhibit a bubble of concentration, which is then used to derive the linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ in $\dot{H}^{s_c}_D(\Omega)$ (see Theorem \ref{linear-profile}). In Section \ref{S4}, we show that nonlinear solutions in the limiting geometries can be embedded into $\Omega$. Since nonlinear solutions in the limiting geometries admit global spacetime bounds (Here we need to assume that Conjecture \ref{CNLS0} holds true), we deduce that solutions to NLS$_{\Omega}$, whose characteristic length scale and location conform closely with one of these limiting cases, inherit these spacetime bounds. These solutions to NLS$_{\Omega}$ will reappear as nonlinear profiles in Section \ref{S5}. Section \ref{S5} is dedicated to proving the existence of almost periodic solutions (Theorem \ref{TReduction}). The key step involves establishing the Palais-Smale condition (Proposition \ref{Pps}). This is achieved using the profile decomposition developed in Section \ref{S4}, the stability theorem (Theorem \ref{TStability}) from Section \ref{S2}, and techniques from \cite{KillipVisan2010, Murphy2014} to ensure the decoupling of nonlinear profiles. In Section \ref{S6}, we rule out almost periodic solutions described in Theorem \ref{TReduction} for $1 < s_c < \frac{3}{2}$ and $s_c = \frac{1}{2}$. The proof relies on a space-localized Lin-Strauss Morawetz inequality, following the method of Bourgain \cite{Bourgain1999}. Finally, in Section \ref{S1/2-1}, we exclude solutions as in Theorem \ref{TReduction} for $\frac{1}{2} < s_c < 1$. The main tool is the long-time Strichartz estimate (Proposition \ref{PLT2}), originally developed by Dodson \cite{Dodson2012} for the mass-critical NLS. Additionally, we establish a frequency-localized Lin-Strauss Morawetz inequality (Proposition \ref{PMorawetz}) to eliminate almost periodic solutions. 
This approach involves truncating the solution to high frequencies and employing Proposition \ref{PLT2} to handle the error terms introduced by frequency projection. \section{Preliminaries}\label{S2} \subsection{Notation and useful lemmas} We express $ X \lesssim Y $ or $ Y \gtrsim X $ to denote that $ X \leq CY $ for some absolute constant $ C > 0 $, which might change from line to line. If the implicit constant relies on additional variables, this will be shown with subscripts. We employ $ O(Y) $ to represent any quantity $ X $ such that $ |X| \lesssim Y $. The notation $ X \sim Y $ implies that $ X \lesssim Y \lesssim X $. The term $ o(1) $ is used to describe a quantity that converges to zero. We also write $s+$ (respectively $s-$) to denote $s+\varepsilon$ (respectively $s-\varepsilon$) for some sufficiently small $\varepsilon>0$. Throughout this paper, we let $s_c = \frac{3}{2} - \frac{2}{\alpha} \in (0, \frac{3}{2})$; this is the regularity whose homogeneous Sobolev norm is left invariant by the rescaling $u(t,x)\mapsto \lambda^{\frac{2}{\alpha}}u(\lambda^{2}t,\lambda x)$, which maps solutions of (\ref{NLS}) to solutions of the same equation on the rescaled domain $\lambda^{-1}\Omega$. Further restrictions on the range of $s_c$ are imposed only in Section \ref{S6} and Section \ref{S1/2-1}. $ \Omega $ will stand for the exterior domain of a smooth, compact, strictly convex obstacle in $ \mathbb{R}^3 $. Without loss of generality, we assume $0 \in \Omega^c$. The notation $\text{diam} := \text{diam}(\Omega^c)$ is used to denote the diameter of the obstacle, and $d(x) := \text{dist}(x, \Omega^c)$ denotes the distance from a point $x \in \mathbb{R}^3$ to the obstacle. We first state the Hardy inequality on the exterior domain. \begin{lemma}[Hardy's inequality, \cite{KillipVisanZhang2016b}] Let $1<p<\infty$ and $0<s<\min\{1+\frac{1}{p},\frac{3}{p}\}$. Then for any $f\in C_c^\infty(\Omega)$, we have \begin{align*} \Big\|\frac{f}{d(x)^{s}}\Big\|_{L^p(\Omega)}\lesssim\big\|(-\Delta_\Omega)^\frac{s}{2}f\big\|_{L^p(\Omega)}, \end{align*} where $d(x)=\operatorname{dist}(x,\Omega^c)$. \end{lemma} We will use the following refined version of Fatou's lemma due to Brezis and Lieb. \begin{lemma}[Refined Fatou, \cite{BrezisLieb1983}]\label{LRefinedFatou} Let $0 < p < \infty$ and assume that $\{f_n\} \subset L^p(\mathbb{R}^d)$ with $\limsup_{n \to \infty} \|f_n\|_p < \infty$. If $f_n \to f$ almost everywhere, then \[ \int_{\mathbb{R}^d} \left| |f_n|^p - |f_n - f|^p - |f|^p \right| dx \to 0 \quad \text{as} \quad n \to \infty. \] In particular, $\|f_n\|_{L^p}^p - \|f_n - f\|_{L^p}^p \to \|f\|_{L^p}^p$. \end{lemma} The following fractional difference estimate will be used in the proof of Lemma \ref{Lnonlinearestimate}. \begin{lemma}[Derivatives of differences, \cite{KillipVisan2010}]\label{LDerivatives of differences} Let $F(u) = |u|^p u$ with $p > 0$ and let $0 < s < 1$. Then for $1 < q, q_1, q_2 < \infty$ such that $\frac{1}{q} = \frac{p}{q_1} + \frac{1}{q_2}$, we have \[ \||\nabla|^s [F(u+v) - F(u)] \|_{L^q(\mathbb{R} ^d)} \lesssim \||\nabla|^s u\|_{L^{q_1}(\mathbb{R} ^d)}^{p} \|v\|_{L^{q_2}(\mathbb{R} ^d)} + \||\nabla|^s v\|_{L^{q_1}(\mathbb{R} ^d)}^{p}\|u+v\|_{L^{q_2}(\mathbb{R} ^d)}. \] \end{lemma} We will also use the following heat kernel estimate due to Q. S. Zhang \cite{Zhang2003}. \begin{lemma}[Heat kernel estimate \cite{Zhang2003}]\label{Lheatkernel} Let $\Omega$ denote the exterior of a smooth, compact, convex obstacle in $\mathbb{R}^d$ for $d \geq 3$.
Then there exists $c > 0$ such that \[ |e^{t\Delta_\Omega}(x,y)| \lesssim \left( \frac{d(x)}{\sqrt{t} \wedge \text{diam}} \wedge 1 \right) \left( \frac{d(y)}{\sqrt{t} \wedge \text{diam}} \wedge 1 \right) e^{-\frac{c|x - y|^2}{t}} t^{-\frac{d}{2}}, \] uniformly for $x, y \in \Omega$ and $t\ge0$; recall that $A\wedge B=\min \{A,B\}$. Moreover, the reverse inequality holds after suitable modification of $c$ and the implicit constant. \end{lemma} There is a natural family of Sobolev spaces associated with powers of the Dirichlet Laplacian. Our notation for these is as follows. \begin{definition} For $s \geq 0$ and $1 < p < \infty$, let $\dot{H}^{s,p}_D(\Omega)$ and $H^{s,p}_D(\Omega)$ denote the completions of $C_c^{\infty}(\Omega)$ under the norms \[ \|f\|_{\dot{H}^{s,p}_D(\Omega)} := \|(-\Delta_{\Omega})^{s/2} f\|_{L^p} \quad \text{and} \quad \|f\|_{H^{s,p}_D(\Omega)} := \|(1 - \Delta_{\Omega})^{s/2} f\|_{L^p}. \] When $p = 2$ we write $\dot{H}^s_D(\Omega)$ and $H^s_D(\Omega)$ for $\dot{H}^{s,2}_D(\Omega)$ and $H^{s,2}_D(\Omega)$, respectively. \end{definition} The following result from \cite{KillipVisanZhang2016c} establishes a connection between Sobolev spaces defined with respect to the Dirichlet Laplacian and those defined through conventional Fourier multipliers. The constraints on regularity $ s $ are important, as shown by counterexamples in \cite{KillipVisanZhang2016c}. \begin{theorem}[Equivalence of Sobolev spaces,\cite{KillipVisanZhang2016c}]\label{TEquivalence} Let $ d \geq 3 $ and let $ \Omega $ denote the complement of a compact convex body $ \Omega^c \subset \mathbb{R}^d $ with smooth boundary. Let $ 1 < p < \infty $. If $ 0 \leq s < \min \left\{ 1 + \frac{1}{p}, \frac{d}{p} \right\} $, then \[ \|(-\Delta_{\mathbb{R}^d})^{s/2} f\|_{L^p} \sim_{d,p,s} \|(-\Delta_{\Omega})^{s/2} f\|_{L^p} \quad \text{for all } f \in C_c^\infty(\Omega). \] \end{theorem} This result allows us to transfer the $L^p$-product rule for fractional derivatives and the chain rule directly from the Euclidean setting, provided we respect the restrictions on $s$ and $p$. \begin{lemma}\label{LFractional product rule} For all $f, g \in C_c^\infty(\Omega)$, we have \[ \|(-\Delta_\Omega)^{s/2} (fg)\|_{L^p(\Omega)} \lesssim \|(-\Delta_\Omega)^{s/2} f\|_{L^{p_1}(\Omega)} \|g\|_{L^{p_2}(\Omega)} + \|f\|_{L^{q_1}(\Omega)} \|(-\Delta_\Omega)^{s/2} g\|_{L^{q_2}(\Omega)} \] with the exponents satisfying $1 < p, p_1, q_2 < \infty$, $1 < p_2, q_1 \leq \infty$, \[ \frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{q_1} + \frac{1}{q_2},\quad\text{and}\quad 0 < s < \min \left\{ 1 + \frac{1}{p_1}, 1 + \frac{1}{q_2}, \frac{3}{p_1}, \frac{3}{q_2} \right\}. \] \end{lemma} \begin{lemma}\label{LChainrule} Suppose $G\in C^2(\mathbb{C})$ and $1<p,p_1,p_2<\infty $ are such that $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$. 
Then for all $0<s<\min \left\{ 2,\frac{3}{p_2} \right\}$, \begin{equation} \|(-\Delta _\Omega)^{\frac{s}{2}}G(u)\|_{L^p(\Omega)}\lesssim \|G'(u)\|_{L^{p_1}(\Omega)} \|(-\Delta _\Omega)^{\frac{s}{2}}u\|_{L^{p_2}(\Omega)}.\notag \end{equation} \end{lemma} In particular, in Section \ref{S1/2-1}, we will use the following fractional chain rule: \begin{corollary} Given $u\in L_t^{\infty }\dot H^{s_c}_D (I\times \Omega)\cap L_t^{2}\dot H^{s_c,6}_D(I\times \Omega)$, \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}}(|u|^{\alpha }u)\|_{L_t^{2}L_x^{\frac{6}{5}}(I\times \Omega)}\lesssim \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\infty }L_x^{2}}^{\alpha } \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^{2}L_x^{6}(I\times \Omega)}.\label{E12133} \end{equation} \end{corollary} \begin{proof} Using the equivalence theorem \ref{TEquivalence}, the chain rule in Euclidean space, and applying the equivalence theorem \ref{TEquivalence} again, we obtain \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}}(|u|^{\alpha}u)\|_{L_t^{2}L_x^{\frac{6}{5}}(I \times \Omega)} \lesssim \|u\|_{L_t^{2\alpha}L_x^{3\alpha}(I \times \Omega)}^{\alpha} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\infty}L_x^{2}(I \times \Omega)}. \label{E12131} \end{equation} Moreover, by Sobolev embedding and H\"older's inequality, we have \begin{equation} \|u\|_{L_t^{2\alpha}L_x^{3\alpha}(I \times \Omega)}^{\alpha} \lesssim \|(-\Delta_\Omega)^{\frac{s_c}{2}}u\|_{L_t^{2\alpha}L_x^{\frac{6\alpha}{3\alpha - 2}}(I \times \Omega)}^{\alpha} \lesssim \|(-\Delta_\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\infty}L_x^{2}(I\times \Omega)}^{\alpha-1} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u\|_{L_t^{2}L_x^{6}(I \times \Omega)}. \label{E12132} \end{equation} Substituting (\ref{E12132}) into (\ref{E12131}), we obtain the desired inequality (\ref{E12133}). \end{proof} We will also use the local smoothing estimate. The particular version we need is \cite[Lemma 2.13]{KillipVisanZhang2016a}. \begin{lemma} \label{LLocalSmoothing} Let $u = e^{it\Delta_\Omega} u_0$. Then \[ \int_{\mathbb{R}} \int_\Omega |\nabla u(t, x)|^2 \langle R^{-1} (x-z) \rangle^{-3} dx dt \lesssim R \| u_0 \|_{L^2(\Omega)} \|\nabla u_0 \|_{L^2(\Omega)}, \] uniformly for $z \in \mathbb{R}^3$ and $R > 0$. \end{lemma} A direct consequence of the local smoothing estimate is the following result, which will be used to prove Lemma \ref{LDecoupling of nonlinear profiles}. \begin{corollary}\label{CLocalsmoothing} Given $w_0 \in \dot{H}^{s_c}_D(\Omega)$, \[ \|(-\Delta _\Omega)^{\frac{s_c}{2}} e^{it\Delta_\Omega} w_0 \|_{ L_{t,x}^{2}([\tau-T, \tau+T] \times \{|x-z| \leq R\})} \lesssim T^{\frac{2(5\alpha -4)}{10\alpha (s_c+2)}} R^{\frac{15\alpha -4}{10\alpha (s_c+2)}} \| e^{it\Delta_\Omega} w_0 \|^{\frac{1}{2(s_c+2)}}_{L_{t,x}^{\frac{5\alpha }{2}}(\mathbb{R} \times \Omega)} \| w_0 \|_{\dot{H}^{s_c}_D(\Omega)}^{1-\frac{1}{2(s_c+2)}}, \] uniformly in $w_0$ and the parameters $R, T > 0, \tau \in \mathbb{R}$, and $z \in \mathbb{R}^3$. \end{corollary} \begin{proof} Replacing $w_0$ by $e^{i\tau \Delta _\Omega}w_0$, we see that it suffices to treat the case $\tau=0$. 
Given $N > 0$, using the H\"older, Bernstein, and Strichartz inequalities, as well as the equivalence of Sobolev spaces, we have \begin{align*} \||\nabla |^{s_c}&e^{it\Delta_\Omega} P^{\Omega}_{<N} w_0\|_{L^2_{t,x}([-T,T] \times \{|x-z| \leq R\})} \notag\\ &\lesssim T^{\frac{5\alpha -4}{10\alpha }}R^{\frac{3(5\alpha +4)}{40\alpha }} \||\nabla|^{s_c} e^{it\Delta_\Omega} P^{\Omega}_{<N} w_0\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{40\alpha }{15\alpha -4}}} \\ &\lesssim T^{\frac{5\alpha -4}{10\alpha }}R^{\frac{3(5\alpha +4)}{40\alpha }} N^{\frac{s_c}{4}}\||\nabla|^{\frac{3}{4}s_c} e^{it\Delta_\Omega} P^{\Omega}_{<N} w_0\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{40\alpha }{15\alpha -4}}}\\ &\lesssim T^{\frac{5\alpha -4}{10\alpha }}R^{\frac{3(5\alpha +4)}{40\alpha }} N^{\frac{s_c}{4}} \|e^{it\Delta _\Omega}P^{\Omega}_{<N}w_0\|_{L_{t,x}^{\frac{5\alpha }{2}}}^{\frac{1}{4}} \||\nabla |^{s_c}e^{it\Delta _\Omega}P^\Omega_{\le N}w_0\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}}^{\frac{3}{4}}\\ &\lesssim T^{\frac{5\alpha -4}{10\alpha }}R^{\frac{3(5\alpha +4)}{40\alpha }} N^{\frac{s_c}{4}} \|e^{it\Delta _\Omega}P^{\Omega}_{<N}w_0\|_{L_{t,x}^{\frac{5\alpha }{2}}}^{\frac{1}{4}} \|w_0\|_{\dot H^{s_c}_D(\Omega)}^{\frac{3}{4}} . \end{align*} We estimate the high frequencies using Lemma \ref{LLocalSmoothing} and the Bernstein inequality: \begin{align*} \||\nabla|^{s_c} &e^{it\Delta_\Omega} P^{\Omega}_{\geq N} w_0\|_{L^2_{t,x}([-T,T] \times \{|x-z| \leq R\})}^2 \notag\\ &\lesssim R \|P^{\Omega}_{\geq N} |\nabla |^{s_c-1}w_0\|_{L_x^2} \||\nabla|^{s_c} P^{\Omega}_{\geq N} w_0\|_{L_x^2} \lesssim R N^{-1} \|w_0\|_{\dot{H}_D^{s_c}(\Omega)}^2. \end{align*} The desired estimate in Corollary \ref{CLocalsmoothing} now follows by optimizing in the choice of $N$. \end{proof} \subsection{Littlewood-Paley theory on exterior domains} Let $ \phi : [0, \infty) \to [0, 1]$ be a smooth, non-negative function satisfying \[ \phi(\lambda) = 1 \quad \text{for } 0 \leq \lambda \leq 1, \quad \text{and} \quad \phi(\lambda) = 0 \quad \text{for } \lambda \geq 2. \] For each dyadic number $N \in 2^\mathbb{Z}$, define \[ \phi_N(\lambda) := \phi(\lambda/N), \quad \psi_N(\lambda) := \phi_N(\lambda) - \phi_{N/2}(\lambda). \] Observe that the collection $\{\psi_N(\lambda)\}_{N \in 2^\mathbb{Z}}$ forms a partition of unity on $(0, \infty)$. Using these functions, we define the Littlewood-Paley projections adapted to the Dirichlet Laplacian on $\Omega$ through the functional calculus for self-adjoint operators: \begin{equation} P_{\leq N}^\Omega := \phi_N(\sqrt{-\Delta_\Omega}), \quad P_N^\Omega := \psi_N(\sqrt{-\Delta_\Omega}), \quad P_{> N}^\Omega := I - P_{\leq N}^\Omega. \label{E11121} \end{equation} For simplicity, we will frequently denote $f_N := P_N^\Omega f$ and similarly for other projections. We will also use $P_N^{\mathbb{R}^3}$ and similar notation to refer to the corresponding operators for the standard Laplacian on $\mathbb{R}^3$. Additionally, we will require analogous operators on the half-space $\mathbb{H} = \{x \in \mathbb{R}^3 : x \cdot e_3 > 0\}$, where $e_3 = (0, 0, 1)$. These operators are denoted by $P_N^\mathbb{H}$, and so on. Just like their Euclidean counterparts, the following two basic estimates are well-known. 
\begin{lemma}[Bernstein estimates,\cite{KillipVisanZhang2016c}]\label{LBernstein estimates} For any $f \in C_c^\infty(\Omega)$, we have \[ \|P_{\leq N}^\Omega f\|_{L^p(\Omega)} + \|P_N^\Omega f\|_{L^p(\Omega)} \lesssim \|f\|_{L^p(\Omega)} \quad \text{for } 1 < p < \infty, \] \[ \|P_{\leq N}^\Omega f\|_{L^q(\Omega)} + \|P_N^\Omega f\|_{L^q(\Omega)} \lesssim N^{3\left(\frac{1}{p} - \frac{1}{q}\right)} \|f\|_{L^p(\Omega)} \quad \text{for } 1 \leq p < q \leq \infty, \] \[ N^s \|P_N^\Omega f\|_{L^p(\Omega)} \sim \|(-\Delta_\Omega)^{s/2} P_N^\Omega f\|_{L^p(\Omega)} \quad \text{for } 1 < p < \infty \text{ and } s \in \mathbb{R}. \] Here, the implicit constants depend only on $p$, $q$, and $s$. \end{lemma} \begin{lemma}[Square function estimate,\cite{KillipVisanZhang2016c}]\label{LSquare function estimate} Fix $1 < p < \infty$. For all $f \in C_c^\infty(\Omega)$, \[ \|f\|_{L^p(\Omega)} \sim \left\|\left( \sum_{N \in 2^\mathbb{Z}} |P_N^\Omega f|^2 \right)^{\frac{1}{2}} \right\|_{L^p(\Omega)}. \] \end{lemma} \subsection{Strichartz estimates, local well-posedness, and the stability result} Strichartz estimates for domains exterior to a compact, smooth, strictly convex obstacle were proved by Ivanovici \cite{Ivanovici2010a} with the exception of the endpoint $L^2_tL^6_x$, see also \cite{BlairSmithSogge2012}. Subsequently, Ivanovici and Lebeau \cite{IvanoviciLebeau2017} proved the dispersive estimate for $d = 3 $. \begin{lemma}[Dispersive estimate, \cite{IvanoviciLebeau2017}]\label{LDispersive} \begin{equation} \| e^{it\Delta_{\Omega}} f \|_{L_x^{\infty}(\Omega)} \lesssim |t|^{-\frac{3}{2}} \|f\|_{L_x^1(\Omega)}.\label{E11122} \end{equation} \end{lemma} For $d \geq 4$, Ivanovici and Lebeau \cite{IvanoviciLebeau2017} also demonstrated through the construction of explicit counterexamples that the dispersive estimate no longer holds, even for the exterior of the unit ball. However, for $d=5,7$, Li-Xu-Zhang \cite{LiXuZhang2014} established the dispersive estimates for solutions with radially symmetric initial data outside the unit ball. Combining the dispersive estimate (\ref{E11122}) with the Theorem of Keel-Tao\cite{KeelTao1998AJM}, we obtain the following Strichartz estimates: \begin{proposition}[Strichartz estimates \cite{Ivanovici2010a,BlairSmithSogge2012,IvanoviciLebeau2017}]\label{PStrichartz} Let $q, \tilde{q} \geq 2$, and $2 \leq r, \tilde{r} \leq \infty$ satisfying \[ \frac{2}{q} + \frac{3}{r} = \frac{2}{\tilde{q}} + \frac{3}{\tilde{r}}= \frac{3}{2} . \] Then, the solution $u$ to $(i\partial_t + \Delta_\Omega)u = F$ on an interval $I \ni 0$ satisfies \[ \|u\|_{L_t^q L_x^r(I \times \Omega)} \lesssim \|u_0\|_{L_x^2(\Omega)} + \|F\|_{L_t^{\tilde{q}'} L_x^{\tilde{r}'}(I \times \Omega)}. \tag{2.3} \] \end{proposition} By the Strichartz estimate and the standard contraction mapping principle, we can establish the following local well-posedness result. \begin{theorem} \label{TLWP} Let $\Omega \subset \mathbb{R}^3$ be the exterior of a smooth compact strictly convex obstacle. There exists $\eta > 0$ such that if $u_0 \in \dot H_D^{s_c}(\Omega)$ obeys \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}} e^{it \Delta_\Omega} u_0 \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}(I \times \Omega)} \leq \eta \label{E10201} \end{equation} for some time interval $I \ni 0$, then there is a unique strong solution to (\ref{NLS}) on the time interval $I$; moreover, \[ \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}(I \times \Omega)} \lesssim \eta. 
\] \end{theorem} \begin{remark} \ \begin{enumerate} \item If $u_0$ has small $\dot{H}^{s_c}_D(\Omega)$ norm, then Proposition \ref{PStrichartz} guarantees that (\ref{E10201}) holds with $I = \mathbb{R}$. Thus global well-posedness for small data is a corollary of this theorem. \item For large initial data $u_0$, the existence of some small open interval $I \ni 0$ for which (\ref{E10201}) holds follows from combining the monotone convergence theorem with Proposition \ref{PStrichartz}. In this way, we obtain local well-posedness for all data in $\dot H^{s_c}_D(\Omega)$. \item The argument below holds equally well for initial data prescribed as $t \to \pm \infty$, thus proving the existence of wave operators. \end{enumerate} \end{remark} \begin{proof} Throughout the proof, all space-time norms will be on $I \times \Omega$. Consider the map \begin{equation} \Phi: u \mapsto e^{it\Delta _\Omega}u_0-i\int_0^te^{i(t-s)\Delta _\Omega}(|u|^{\alpha }u)(s)ds.\notag \end{equation} We will show this is a contraction on the ball \[ B := \left\{ u \in L_t^{\infty} \dot H_D^{s_c} \cap L_t^{ \frac{5\alpha }{2}} \dot H_D^{s_c, \frac{30\alpha }{15\alpha -8}} : \| (-\Delta _\Omega)^{\frac{s_c}{2}}u \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}} \leq 2\eta, \right. \] \[ \text{and }\left. \| u \|_{L_t^{\infty} \dot H_D^{s_c}} \leq 2 \| u_0 \|_{\dot H_D^{s_c}}, \| u \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}}}\leq 2C \eta \right\} \] under the metric given by \[ d(u,v) := \| u - v \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}}. \] To see that $\Phi$ maps the ball $B$ to itself, we use the Strichartz inequality followed by Lemma \ref{LFractional product rule}, (\ref{E10201}), Sobolev embedding, and then Theorem \ref{TEquivalence}: \begin{align} &\| (-\Delta _\Omega)^{\frac{s_c}{2}} \Phi(u) \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}} \notag\\ &\leq \| (-\Delta _\Omega)^{\frac{s_c}{2}} e^{it\Delta_{\Omega}} u_0 \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}} + C \left\| (-\Delta _\Omega)^{\frac{s_c}{2}} \left( |u|^{\alpha } u \right) \right\|_{L_t^{ \frac{5\alpha }{2(\alpha +1)}} L_x^{\frac{30\alpha }{27\alpha -8}}} \notag\\ &\leq \eta + C \| u \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}}} ^{\alpha }\| (-\Delta _\Omega)^{\frac{s_c}{2}} u \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}}\leq \eta + C \| (-\Delta _\Omega)^{\frac{s_c}{2}} u \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}}^{\alpha +1}\notag\\ &\le \eta +C(2\eta )^{\alpha +1}\le 2\eta,\notag \end{align} provided $\eta$ is chosen sufficiently small. Similar estimates give \[ \|\Phi(u)\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}}} \leq C\| (-\Delta _\Omega)^{\frac{s_c}{2}} \Phi(u) \|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}}\le 2C\eta, \] and \begin{align} \|\Phi(u)\|_{L^\infty _t\dot H^{s_c}_D(\Omega)}&\le \|u_0\|_{\dot H^{s_c}_D(\Omega)}+C \|(-\Delta _\Omega)^{\frac{s_c}{2}}(|u|^{\alpha }u)\|_{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}}\notag\\ &\le \|u_0\|_{\dot H^{s_c}_D(\Omega)}+C \|u\|^{\alpha }_{L_t^\frac{5\alpha }{2}L_x^{\frac{5\alpha }{2}}} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\| _{L_t^\frac{5\alpha }{2}L_x^{\frac{30\alpha }{15\alpha -8}}}\notag\\ &\le \|u_0\|_{\dot H^{s_c}_D(\Omega)} +C(2\eta)^{\alpha +1}\le 2 \|u_0\|_{\dot H^{s_c}_D(\Omega)}, \notag \end{align} provided $\eta$ is chosen small enough. This shows that $\Phi$ maps the ball $B$ to itself. 
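For the reader's convenience, we record the numerology behind the Sobolev embedding used repeatedly in the preceding estimates (a routine check: one passes to the Euclidean setting via Theorem \ref{TEquivalence}, whose hypotheses hold for this pair of exponents, and applies the standard Sobolev embedding):
\[
\dot H^{s_c,\frac{30\alpha}{15\alpha-8}}_D(\Omega)\hookrightarrow L^{\frac{5\alpha}{2}}_x(\Omega),
\qquad\text{since}\qquad
\frac{15\alpha-8}{30\alpha}-\frac{s_c}{3}=\frac{15\alpha-8}{30\alpha}-\frac{3\alpha-4}{6\alpha}=\frac{2}{5\alpha}.
\]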
Finally, to prove that $\Phi$ is a contraction, we argue as above: \begin{align} d(\Phi(u),\Phi(v)) &\leq C \||u|^{\alpha }u-|v|^{\alpha }v\| _{L_t^{ \frac{5\alpha }{2(\alpha +1)}} L_x^{\frac{30\alpha }{27\alpha -8}}}\notag\\ &\le Cd(u,v) \left( \| (-\Delta _\Omega)^{\frac{s_c}{2}}u \|_{L_t^\frac{5\alpha }{2}L_x^{\frac{30\alpha }{15\alpha -8}}}^{\alpha }+ \|(-\Delta _\Omega)^{\frac{s_c}{2}}v \|_{L_t^\frac{5\alpha }{2}L_x^{\frac{30\alpha }{15\alpha -8}}}^{\alpha } \right)\notag\\ &\le 2Cd(u,v)(2\eta )^{\alpha }\le \frac{1}{2}d(u,v),\notag \end{align} provided $\eta$ is chosen small enough. \end{proof} Below, we present the stability theorem for the Schr\"odinger equation in the exterior domain. Its proof relies on the following nonlinear estimate. \begin{lemma}\label{Lnonlinearestimate} For any $u, v \in L_t^{\frac{5\alpha }{2}}\dot H^{s_c,\frac{30\alpha }{15\alpha -8}}_D(I\times \Omega)$, the following inequality holds: \begin{align} & \|(-\Delta _\Omega)^{\frac{s_c}{2}}\left(|u+v|^{\alpha }(u+v)-|u|^{\alpha }u\right)\| _{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}} \notag\\ &\lesssim \left( \|u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1}+ \|v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1} \right)( \| (-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} + \| (-\Delta _\Omega)^{\frac{s_c}{2}}v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} )^2,\label{E1162} \end{align} where all the space-time integrals are over $I\times \Omega$. \end{lemma} Note that since $s_c > 0$, we have $\alpha > \frac{4}{3}$. \begin{proof} We first consider the case $s_c<1$. Applying Lemma \ref{LDerivatives of differences} and the equivalence theorem \ref{TEquivalence}, we obtain \begin{align} & \|(-\Delta _\Omega)^{\frac{s_c}{2}}\left(|u+v|^{\alpha }(u+v)-|u|^{\alpha }u\right)\| _{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}} \notag\\ &\lesssim \|v\|^\alpha _{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}}} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\| _{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}} } + \|u+v\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}} }^\alpha \|(-\Delta _\Omega)^{\frac{s_c}{2}}v\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}} }.\notag \end{align} Further using Sobolev embedding yields (\ref{E1162}). Next, we turn to the case $s_c>1$.
Writing $F(u) = |u|^{\alpha} u$, we have \begin{equation} |\nabla|^{s_c} \left(|u+v|^{\alpha }(u+v)-|u|^{\alpha }u\right) = |\nabla |^{s_c-1}\big[\big(F'(u+v)-F'(u)\big)\nabla u\big] + |\nabla |^{s_c-1}\big[F'(u+v)\nabla v\big].\notag \end{equation} Using the fractional differentiation rule and Sobolev embedding, we obtain \begin{align} & \||\nabla |^{s_c-1}[F'(u+v)\nabla v]\| _{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}} \notag\\ &\lesssim \||\nabla |^{s_c-1} F'(u+v)\|_{L_t^\frac{5}{2}L_x^{\frac{5\alpha }{2(\alpha -1)}}} \|\nabla v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{15\alpha }{5\alpha +6}}} + \|u+v\|^\alpha _{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} \||\nabla |^{s_c}v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} \notag\\ &\lesssim \|u+v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1} \||\nabla |^{s_c}(u+v)\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} \||\nabla |^{s_c}v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}}.\label{E1163} \end{align} Similarly, using the fractional differentiation rule, Sobolev embedding, and Lemma \ref{LDerivatives of differences}, we have \begin{align} &\||\nabla |^{s_c-1}[\left(F'(u+v)-F'(u)\right)\nabla u]\|_{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}}\notag\\ &\lesssim \||\nabla |^{s_c-1}\left(F'(u+v)-F'(u)\right) \|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{17\alpha -20}}} \|\nabla u\|_{L_t^{\frac{5\alpha }{2} }L_x^{\frac{15\alpha }{5\alpha +6}}}\notag\\ &\qquad + \|F'(u+v)-F'(u)\|_{L_t^{\frac{5}{2}}L_x^{\frac{5}{2}}} \||\nabla |^{s_c}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} \notag\\ &\lesssim \left(\||\nabla |^{s_c-1}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{5\alpha -8}}} \|v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1}+ \||\nabla |^{s_c-1}v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{5\alpha -8}}} \|u+v\|^{\alpha -1}_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} \right) \||\nabla |^{s_c}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}}\notag\\ &\qquad + \left(\|u+v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1} + \|v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1} \right) \|v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} \||\nabla |^{s_c}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}}\notag\\ &\lesssim \left( \|u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1}+ \|v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}} ^{\alpha -1} \right)( \||\nabla |^{s_c}u\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} + \||\nabla |^{s_c}v\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} )^2. \label{E1164} \end{align} Combining (\ref{E1163}) and (\ref{E1164}), and using the equivalence theorem \ref{TEquivalence}, we obtain (\ref{E1162}). \end{proof} Now, we are in a position to give the stability result for the Schr\"odinger equation (\ref{NLS}). \begin{theorem}[Stability result]\label{TStability} Let $\Omega$ be the exterior of a smooth compact strictly convex obstacle in $\mathbb{R}^3$. Let $I$ be a compact time interval and let $\tilde{u}$ be an approximate solution to (\ref{NLS}) on $I \times \Omega$ in the sense that \begin{equation} i\tilde{u}_t = -\Delta_\Omega \tilde{u} + |\tilde{u}|^{\alpha } \tilde{u} + e\label{E118w3} \end{equation} for some function $e$.
Assume that \[ \|\tilde{u}\|_{L_t^\infty \dot{H}_D^{s_c}(I \times \Omega)} \leq E \quad \text{and} \quad \|\tilde{u}\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}} (I \times \Omega)} \leq L \] for some positive constants $E$ and $L$. Assume also the smallness conditions \[ \|(-\Delta _\Omega)^{\frac{s_c}{2}}e^{i(t-t_0)\Delta_\Omega} (u_0 - \tilde{u}(t_0))\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}} (I\times \Omega)} \leq \varepsilon, \] \begin{equation} \|e\|_{\dot N^{s_c}((I\times \Omega))}:=\inf \left\{ \|(-\Delta _\Omega)^{\frac{s_c}{2}}e\|_{L_t^{q'}L_x^{r'}(I\times \Omega)}: \ \frac{2}{q}+\frac{3}{r}=\frac{3}{2} \right\} \le \varepsilon,\label{E1241} \end{equation} for some $0 < \varepsilon < \varepsilon_1 = \varepsilon_1(E, L)$. Then, there exists a unique strong solution $u : I \times \Omega \to \mathbb{C}$ to (\ref{NLS}) with initial data $u_0$ at time $t=t_0$ satisfying \[ \|(-\Delta _\Omega)^{\frac{s_c}{2}}(u - \tilde{u})\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}} (I\times \Omega)} \leq C(E, L) \varepsilon, \] \[ \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{30\alpha }{15\alpha -8}}(I\times \Omega) } \leq C(E, L). \] \end{theorem} \begin{proof} We provide only a brief outline of the proof; the standard proof can be found in \cite{Colliander2008, RyckmanVisan2007, TaoVisan2005}. Define $w = u - \widetilde{u}$ so that $(i\partial_{t} + \Delta_\Omega) w= |u|^{\alpha} u - |\widetilde{u}|^{\alpha} \widetilde{u} - e$. It then follows from Lemma \ref{Lnonlinearestimate}, the Strichartz estimate, and (\ref{E1241}) that \begin{align} \|(-\Delta _\Omega)^{\frac{s_c}{2}}w\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}}(I \times \Omega)} &\lesssim \varepsilon + \left( \|\widetilde{u}\|^{\alpha -1}_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{5\alpha}{2}}(I \times \Omega)} + \|w\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{5\alpha}{2}}(I \times \Omega)}^{\alpha - 1} \right) \notag\\ &\qquad \times \left( \|(-\Delta_\Omega)^{\frac{s_c}{2}} \widetilde{u}\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}}(I \times \Omega)} + \|(-\Delta_\Omega)^{\frac{s_c}{2}} w\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}}(I \times \Omega)} \right)^2. \notag \end{align} We first note that the above inequality implies that there exists $\delta > 0$ such that, under the additional assumption \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}} \widetilde{u}\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}} (I \times \Omega)} \le \delta, \label{E118w1} \end{equation} we can use the continuity method to obtain \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}} w\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}} (I \times \Omega)} \lesssim \varepsilon. \label{E118w2} \end{equation} This is the so-called ``short-time perturbation'' (see \cite[Lemma 3.13]{KillipVisan2013}). For the general case, we divide the interval $I$ into a finite number of smaller intervals $I_j$, $1 \le j \le n$, such that on each subinterval $I_j$, the $L_t^{\frac{5\alpha}{2}} L_x^{\frac{5\alpha}{2}}$ norm of $\widetilde{u}$ is sufficiently small. Then using equation (\ref{E118w3}), the Strichartz estimate, and the continuity method on each subinterval $I_j$, we know that (\ref{E118w1}) holds on each $I_j$, thus obtaining that (\ref{E118w2}) holds on each $I_j$. Summing the estimates over all $I_j$, we obtain the desired estimate in Theorem \ref{TStability}.
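For completeness, we record the standard pigeonhole count behind this subdivision: if $\eta_0>0$ denotes the smallness threshold required by the argument above, then, since $t\mapsto\|\widetilde{u}\|_{L_{t,x}^{\frac{5\alpha}{2}}([\inf I,\,t]\times\Omega)}^{\frac{5\alpha}{2}}$ is continuous and nondecreasing, the interval $I$ may be partitioned into
\[
n\leq 1+\Big(\tfrac{L}{\eta_0}\Big)^{\frac{5\alpha}{2}}
\]
consecutive subintervals $I_j$ on which $\|\widetilde{u}\|_{L_{t,x}^{\frac{5\alpha}{2}}(I_j\times\Omega)}\leq\eta_0$. In particular, the number of iterations, and hence the constant $C(E,L)$, is controlled in terms of $E$ and $L$ alone.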
\end{proof} \subsection{Convergence results} The region $\Omega$ is not preserved under scaling or translation. In fact, depending on the choice of such operations, the obstacle may shrink to a point, move off to infinity, or even expand to fill an entire half-space. In this subsection, we summarize some results from \cite{KillipVisanZhang2016a} regarding the behavior of functions associated with the Dirichlet Laplacian under these transformations, as well as the convergence of propagators in Strichartz spaces. These results are crucial for the proof of the linear profile decomposition (Proposition \ref{linear-profile}). Throughout this subsection, we denote the Green's function of the Dirichlet Laplacian in a general open set $\mathcal{O}$ by \begin{align*} G_{\mathcal{O}}(x, y; \lambda) := \left( - \Delta_{\mathcal{O}} - \lambda \right)^{-1}(x, y). \end{align*} \begin{definition}[\cite{KillipVisanZhang2016a}]\label{def-limit} Given a sequence $\{\mathcal{O}_n\}_n$ of open subsets of $\mathbb{R}^3$, we define \begin{align*} \widetilde{\lim} \, \mathcal{O}_n : = \left\{ x \in \mathbb{R}^3 : \liminf\limits_{n \to \infty } \operatorname{dist} \left(x, \mathcal{O}_n^c \right) > 0 \right\}. \end{align*} Writing $\tilde{O} = \widetilde{\lim} \, \mathcal{O}_n$, we say $\mathcal{O}_n \to \mathcal{O}$ if the following two conditions hold: the symmetric difference $\mathcal{O} \triangle \tilde{O}$ is a finite set and \begin{align}\label{eq3.1v65} G_{\mathcal{O}_n}(x,y; \lambda ) \to G_{\mathcal{O}} (x,y ; \lambda ) \end{align} for all $ \lambda \in (-2 , - 1)$, all $x \in \mathcal{O}$, and uniformly for $y$ in compact subsets of $\mathcal{O} \setminus \{x \}$. \end{definition} \begin{remark} We restrict $\lambda$ to the interval $(-2, -1)$ in (\ref{eq3.1v65}) for simplicity and because it allows us to invoke the maximum principle when verifying this hypothesis. Indeed, Killip-Visan-Zhang \cite[Lemma 3.4]{KillipVisanZhang2016a} proved that this convergence actually holds for all $\lambda \in \mathbb{C} \setminus [0, \infty)$. \end{remark} Given sequences of scaling and translation parameters $N_n \in 2^{\mathbb{Z}}$ and $x_n \in \Omega$, we would like to consider the domains $\Omega_n:=N_n \left( \Omega - \left\{x_n \right\} \right)$. When $\Omega_n\rightarrow\Omega_\infty$ in the sense of Definition \ref{def-limit}, Killip, Visan and Zhang\cite{KillipVisanZhang2016a} used the maximum principle to prove the convergence of the corresponding Green's functions. Then, by applying the Helffer-Sj\"ostrand formula and using the convergence of the Green's functions, they obtain the following two convergence results: \begin{proposition}\label{convergence-domain} Assume $\Omega_n \to \Omega_\infty$ in the sense of Definition \ref{def-limit} and let $\Theta \in C_0^\infty ((0, \infty))$. Then, \begin{align}\label{eq3.11v65} \left\| \left( \Theta \left( - \Delta_{\Omega_n} \right) - \Theta \left( - \Delta_{\Omega_\infty} \right) \right) \delta_y \right\|_{\dot{H}^{-s_c} ( \mathbb{R}^3 )} \to 0 \qtq{ when} n\to \infty, \end{align} uniformly for $y$ in compact subsets of $\widetilde{\lim}\, \Omega_n$. Moreover, for any fixed $t\in\R$ and $h\in C_0^\infty(\widetilde{\lim}\,\Omega_n)$, we have \begin{align*} \lim_{n\to\infty}\big\|e^{it\Delta_{\Omega_n}}h-e^{it\Delta_{\Omega_{\infty}}}h\big\|_{\dot{H}^{-s_c}(\R^3)}=0. \end{align*} \end{proposition} \begin{proposition}\label{P1} Let $\Omega_n\to\Omega_{\infty}$ in the sense of Definition \ref{def-limit}. 
Then we have \begin{align*} \big\|(-\Delta_{\Omega_n})^\frac{s_c}{2}f-(-\Delta_{\Omega_\infty})^\frac{s_c}2f\big\|_{L^2(\R^3)}\to0 \end{align*} for all $f\in C_0^\infty(\widetilde{\lim}\,\Omega_n)$. \end{proposition} \begin{remark} Killip, Visan and Zhang \cite{KillipVisanZhang2016a} proved Proposition \ref{convergence-domain} and Proposition \ref{P1} for the case when $s_c=1$. Using their results and interpolation, we can easily extend this to the general case where $s_c\in (0,\frac{3}{2})$. \end{remark} Next, we state the convergence of the Schr\"odinger propagators within the Strichartz norms. We rescale and translate the domain $\Omega$ to $\Omega_n=N_n(\Omega-\{x_n\})$, which depends on the parameters $N_n\in2^\Bbb{Z}$ and $x_n\in\Omega$ conforming to one of the following three scenarios (recall that $d(x_n):=\operatorname{dist}(x_n,\Omega^c)$): \begin{align*} \begin{cases} \text{(i) }N_n\to0\qtq{and}-N_nx_n\to x_\infty\in\R^3,\\ \text{(ii) }N_nd(x_n)\to\infty,\\ \text{(iii) } N_n\to\infty\qtq{and} N_nd(x_n)\to d_\infty>0. \end{cases} \end{align*} Indeed, in the linear profile decomposition, there are four cases that need to be discussed (see Theorem \ref{linear-profile} below). The first case will not be included in these three scenarios since there is no change of geometry in that case. In Cases (i) and (ii), $\Omega_n\to\R^3$, while in Case (iii), $\Omega_n\to\mathbb{H}$. After these preparations, we can state the convergence of the linear Schr\"odinger propagators. See Theorem 4.1 and Corollary 4.2 in Killip-Visan-Zhang \cite{KillipVisanZhang2016a}. \begin{theorem}\label{convergence-flow} Let $\Omega_n$ be as above and let $\Omega_\infty$ be such that $\Omega_n\rightarrow\Omega_\infty $. Then, for all $\phi\in C_0^\infty(\widetilde{\lim}\,\Omega_n)$, \begin{align*} \lim_{n\to\infty}\big\|e^{it\Delta_{\Omega_n}}\phi-e^{it\Delta_{\Omega_{\infty}}}\phi\big\|_{L_{t,x}^{\frac{5\alpha }{2}}(\R\times\R^3)}=0. \end{align*} \end{theorem} \section{Linear profile decomposition}\label{S3} In this section, we prove a linear profile decomposition for the Schr\"odinger propagator $e^{it\Delta_\Omega}$ for initial data $u_0\in\dot{H}_D^{s_c}(\Omega)$ with $s_c\in(0,\frac{3}{2})$. The case $s_c = 1$ has been established by Killip-Visan-Zhang \cite{KillipVisanZhang2016a}. In this section, we use the linear profile decomposition for $e^{it\Delta_{\R^d}}$ in $\dot H^{s_c}(\mathbb{R} ^d)$ as a black box (see e.g. \cite{Shao2009EJDE}), and extend the result of Killip-Visan-Zhang \cite{KillipVisanZhang2016a} to the general $\dot H^{s_c}_D(\Omega)$ setting. Throughout this section, we denote by $\Theta:\R^3\to[0,1]$ a smooth function satisfying \begin{align*} \Theta(x)=\begin{cases} 0, & |x|\leqslant\frac{1}{4}, \\ 1, & |x|\geqslant\frac{1}{2}. \end{cases} \end{align*} We start with a refined Strichartz estimate. \begin{proposition}[Refined Strichartz estimate]\label{PRefined SZ}Let $s_c\in(0,\frac{3}{2})$ and $f\in\dot{H}_D^{s_c}(\Omega)$. Then we have \begin{align}\label{refined-strichartz} \big\|e^{it\Delta_\Omega}f\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}\lesssim\|f\|_{\dot{H}_D^{s_c}}^{\frac{2}{q_0}}\sup_{N\in2^\Bbb{Z}}\|e^{it\Delta_\Omega}P_N^\Omega f \|_{L_{t,x}^{q_0}(\R\times\Omega)}^{1-\frac{2}{q_0}}, \end{align} where $q_0:=\frac{10}{3-2s_c}=\frac{5\alpha }{2}$. \end{proposition} \begin{proof} Throughout the proof, all space-time norms are taken over $\R\times\Omega$ and we set $u(t) = e^{it\Delta_\Omega}f$. We divide the proof of Proposition \ref{PRefined SZ} into two cases. \textbf{Case One}.
First suppose $s_c>\frac{1}{4}$, so that $q_0=\frac{10}{3-2s_c}>4$. By the square function estimate (Lemma~\ref{LSquare function estimate}), Bernstein inequality and Strichartz estimates, we have \begin{align*} \|u\|_{L_{t,x}^{q_0}}^{q_0} &\lesssim \iint\biggl[\sum_N |u_N|^2\biggr]^{\frac{q_0}{2}}\,dx\,dt \lesssim \sum_{N_1\leq N_2} \iint\biggl[\sum_N |u_N|^2\biggr]^{\frac{q_0}{2}-2} |u_{N_1}|^2|u_{N_2}|^2\,dx\,dt \\ & \lesssim \|u\|_{L_{t,x}^{q_0}}^{q_0-4}\sum_{N_1\leq N_2} \|u_{N_1}\|_{L_t^{q_0}L_x^{q_0+}}\|u_{N_2}\|_{L_t^{q_0}L_x^{q_0-}}\prod_{j=1}^2 \|u_{N_j}\|_{L_{t,x}^{q_0}} \\ & \lesssim \|f\|_{\dot H_D^{s_c}}^{q_0-4} \sup_N \|u_N\|_{L_{t,x}^{q_0}}^2 \sum_{N_1\leq N_2} \bigl(\tfrac{N_1}{N_2}\bigr)^{0+}\prod_{j=1}^2 \|u_{N_j}\|_{L_t^{q_0}\dot H_x^{s_c,r_0}} \\ & \lesssim \|f\|_{\dot H_D^{s_c}}^{q_0-4}\sup_N\|u_N\|_{L_{t,x}^{q_0}}^2 \sum_{N_1\leq N_2}\bigl(\tfrac{N_1}{N_2}\bigr)^{0+}\|f_{N_1}\|_{\dot H_x^{s_c}}\|f_{N_2}\|_{\dot H_x^{s_c}} \\ & \lesssim \|f\|_{\dot H_D^{s_c}}^{q_0-2}\sup_N\|e^{it\Delta_\Omega}f_N\|_{L_{t,x}^{q_0}}^2, \end{align*} where $r_0=\frac{30}{9+4s_c}$, so that $(q_0,r_0)$ is an admissible pair. This completes the proof in the first case. \textbf{Case Two}. Suppose $0<s_c\leqslant\frac{1}{4}$, so that $2<q_0\leq4$. Arguing similarly to the first case, we observe that \begin{align*} \|u\|_{L_{t,x}^{q_0}}^{q_0} &\lesssim \iint \biggl[\sum_N |u_N|^2\biggr]^{\frac{q_0}{2}}\,dx\,dt \lesssim \iint \biggl[\sum_N |u_N|^{\frac{q_0}{2}}\biggr]^2\,dx\,dt \\ & \lesssim\sum_{N_1\leq N_2} \iint |u_{N_1}|^{\frac{q_0}{2}}|u_{N_2}|^{\frac{q_0}{2}} \,dx \,dt \\ & \lesssim\sum_{N_1\leq N_2} \|u_{N_1}\|_{L_t^{q_0}L_x^{q_0+}}\|u_{N_2}\|_{L_t^{q_0}L_x^{q_0-}} \prod_{j=1}^2 \|u_{N_j}\|_{L_{t,x}^{q_0}}^{\frac{q_0}{2}-1} \\ & \lesssim \sup_N \|e^{it\Delta_\Omega}f_N\|_{L_{t,x}^{q_0}}^{q_0-2}\|f\|_{\dot H_D^{s_c}}^2, \end{align*} giving the desired result in this case. \end{proof} The refined Strichartz estimate above indicates that a linear solution with nontrivial spacetime norms must concentrate in an annular region. The following inverse Strichartz inequality further demonstrates that the linear solution contains at least one bubble near a specific spacetime point. \begin{proposition}[Inverse Strichartz estimate]\label{inverse-strichartz} Let $\{f_n\} \subset \dot{H}_D^{s_c}(\Omega)$. Assume that \begin{align}\label{inverse-con} \lim_{n\to\infty}\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}=A<\infty,\quad\text{and}\quad \lim_{n\to\infty}\big\|e^{it\Delta_{\Omega}}f_n\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}=\varepsilon>0.
\end{align} Then, there exists a subsequence $\{f_n\}$, along with $\{\phi_n\} \in \dot{H}_D^{s_c}(\Omega)$, $\{N_n\} \subset 2^{\mathbb{Z}}$, and $\{(t_n, x_n)\} \subset \mathbb{R} \times \Omega$, satisfying one of the four scenarios below, such that: \begin{gather} \liminf_{n\to\infty}\|\phi_n\|_{\dot{H}_D^{s_c}(\Omega)} \gtrsim \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}} ,\label{inverse-1}\\ \liminf_{n\to\infty}\big\{\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^2-\|f_n-\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2-\|\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2\big\} \gtrsim \varepsilon^\frac{15}{s_c(2s_c+2)}A^{\frac{4s_c^2+4s_c-15}{s_c(2s_c+2)}} ,\label{inverse-2}\\ \liminf_{n\to\infty}\left\{\big\|e^{it\Delta_\Omega}f_n\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}^{q_0}-\big\|e^{it\Delta_{\Omega}}(f_n-\phi_n)\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}^{q_0}\right\} \gtrsim \varepsilon^\frac{75}{2s_c(s_c+1)}A^{\frac{20s_c^2+20s_c-75}{2s_c(s_c+1)}} .\label{inverse-3} \end{gather} The four cases are as follows: \begin{itemize} \item \textbf{Case 1:} $N_n \equiv N_\infty \in 2^{\mathbb{Z}}$ and $x_n \to x_\infty \in \Omega$. Here, we select $\phi \in \dot{H}_D^{s_c}(\Omega)$ and a subsequence such that $e^{it_n\Delta_\Omega}f_n \rightharpoonup \phi$ weakly in $\dot{H}_D^{s_c}(\Omega)$, and define $\phi_n = e^{-it_n\Delta_\Omega}\phi$. \end{itemize} \begin{itemize} \item \textbf{Case 2:} $N_n \to 0$ and $-N_nx_n \to x_\infty \in \mathbb{R}^3$. In this case, we find $\tilde{\phi} \in \dot{H}^{s_c}(\mathbb{R}^3)$ and a subsequence such that \[ g_n(x) = N_n^{s_c-\frac{3}{2}}(e^{it_n\Delta_\Omega}f_n)(N_n^{-1}x+x_n) \rightharpoonup \tilde{\phi}(x) \] weakly in $\dot{H}^{s_c}(\mathbb{R}^3)$. We define \[ \phi_n(x) := N_n^{\frac{3}{2}-s_c}e^{-it_n\Delta_\Omega}[(\chi_n\tilde{\phi})(N_n(x-x_n))], \] where $\chi_n(x) = \chi(N_n^{-1}x+x_n)$ and $\chi(x) = \Theta\big(\frac{d(x)}{\operatorname{diam}(\Omega^c)}\big)$. \end{itemize} \begin{itemize} \item \textbf{Case 3:} $N_nd(x_n) \to \infty$. In this situation, we take $\tilde{\phi} \in \dot{H}^{s_c}(\mathbb{R}^3)$ and a subsequence such that \[ g_n(x) = N_n^{s_c-\frac{3}{2}}(e^{it_n\Delta_\Omega}f_n)(N_n^{-1}x+x_n) \rightharpoonup \tilde{\phi}(x) \] weakly in $\dot{H}^{s_c}(\mathbb{R}^3)$. We then define \[ \phi_n(x) := N_n^{\frac{3}{2}-s_c}e^{-it_n\Delta_\Omega}[(\chi_n\tilde{\phi})(N_n(x-x_n))], \] where $\chi_n(x) = 1-\Theta\big(\frac{|x|}{N_nd(x_n)}\big)$. \end{itemize} \begin{itemize} \item \textbf{Case 4:} $N_n \to \infty$ and $N_nd(x_n) \to d_\infty > 0$. Here, we find $\tilde{\phi} \in \dot{H}_D^{s_c}(\mathbb{H})$ and a subsequence such that \[ g_n(x) = N_n^{s_c-\frac{3}{2}}(e^{it_n\Delta_\Omega}f_n)(N_n^{-1}R_nx+x_n^*) \rightharpoonup \tilde{\phi}(x) \] weakly in $\dot{H}^{s_c}(\mathbb{R}^3)$. We define \[ \phi_n(x) = N_n^{\frac{3}{2}-s_c}e^{-it_n\Delta_\Omega}[\tilde{\phi}(N_nR_n^{-1}(\cdot-x_n^*))], \] where $R_n \in SO(3)$ satisfies $R_ne_3 = \frac{x_n-x_n^*}{|x_n-x_n^*|}$ and $x_n^* \in \partial\Omega$ such that $d(x_n) = |x_n-x_n^*|$. \end{itemize} \end{proposition} \begin{proof} Using the refined Strichartz estimate \eqref{refined-strichartz} and \eqref{inverse-con}, we see that for each $n$, there exists $N_n$ such that \begin{align*} \big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}&\gtrsim\big\|e^{it\Delta_\Omega}f_n\big\|_{L_{t,x}^{q_0}(\R\times\Omega)}^{\frac{q_0}{q_0-2}}\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^{-\frac{2}{q_0-2}} \gtrsim\varepsilon^{\frac{q_0}{q_0-2}}A^{-\frac{2}{q_0-2}}. 
\end{align*} By the Strichartz and Bernstein inequalities and (\ref{inverse-con}), we obtain \begin{align*} \big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^{\frac{10}{3}}(\R\times\Omega)}\lesssim N_n^{-s_c}A. \end{align*} Combining the above two estimates and using H\"older's inequality, we obtain \begin{align*} \varepsilon^{\frac{q_0}{q_0-2}}A^{-\frac{2}{q_0-2}}\lesssim\big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^{q_0}(\R\times\Omega)} &\lesssim\big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^\frac{10}{3}(\R\times\Omega)}^{1-\frac{2s_c}{3}}\big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^\infty(\R\times\Omega)}^{\frac{2s_c}{3}}\\ &\lesssim N_n^{-s_c(1-\frac{2}{3}s_c)}A^{1-\frac{2s_c}{3}}\big\|e^{it\Delta_\Omega}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^\infty(\R\times\Omega)}^{\frac{2s_c}{3}}, \end{align*} which implies \begin{align} \big\|e^{it\Delta_{\Omega}}P_{N_n}^\Omega f_n\big\|_{L_{t,x}^\infty(\R\times\Omega)}\gtrsim N_n^{\frac{3}{2}-s_c}\varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}.\notag \end{align} Thus there exist $x_n\in\Omega$ and $t_n\in\R$ such that \begin{align}\label{A} \big|(e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n)(x_n)\big|\gtrsim N_n^{\frac{3}{2}-s_c}\varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}. \end{align} Note that the four cases in Proposition \ref{inverse-strichartz} are completely determined by the behavior of $x_n$ and $N_n$. We first claim that \begin{align}\label{claim} N_nd(x_n)\gtrsim \varepsilon^\frac{15}{s_c(4s_c+4)}A^{-\frac{15}{2s_c(2s_c+2)}}. \end{align} Indeed, using the heat kernel bound (Lemma \ref{Lheatkernel}), we have \begin{align*} \int_{\Omega}|e^{\Delta_\Omega/N_n^2}(x_n,y)|^2dy&\lesssim N_n^6\int_{\Omega}\big|(N_nd(x_n))(N_nd(x_n)+N_n|x_n-y|)e^{-cN_n^2|x_n-y|^2}\big|^2dy\\ &\lesssim(N_nd(x_n))^2(N_nd(x_n)+1)^2N_n^3. \end{align*} Writing \begin{align*} (e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n)(x_n)=\int_{\Omega}[e^{\Delta_\Omega/N_n^2}(x_n,y)P^{\Omega}_{\leq 2N_n}e^{-\Delta_{\Omega}/N_n^2}e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n](y)dy, \end{align*} using \eqref{A} and the Cauchy-Schwarz inequality gives \begin{align*} N_n^{\frac{3}{2}-s_c}\varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}&\lesssim(N_nd(x_n))(N_nd(x_n)+1)N_n^\frac{3}{2}\|P_{\leq 2N_n}^\Omega e^{-\Delta_\Omega/N_n^2}e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n\|_{L^2(\Omega)}\\ &\lesssim (N_nd(x_n))(N_nd(x_n)+1)N_n^{\frac{3}{2}-s_c}A. \end{align*} Then the claim \eqref{claim} follows. Due to \eqref{claim}, after passing to a subsequence, we only need to consider the following four cases: \begin{enumerate} \item[Case 1.] $N_n\sim 1$ and $N_nd(x_n)\sim1$, \item[Case 2.] $N_n\to0$ and $N_nd(x_n)\lesssim1$, \item[Case 3.] $N_nd(x_n)\to\infty$ as $n\to\infty$, \item[Case 4.] $N_n\to\infty$ and $N_nd(x_n)\sim1$. \end{enumerate} We will treat these cases in order. \textbf{Case 1}. After passing to a subsequence, we may assume that \begin{align*} N_n\equiv N_\infty\in2^{\Bbb{Z}}\mbox{ and }x_n\to x_\infty\in\Omega. \end{align*} Let \begin{align*} g_n (x ): = N_n^{s_c-\frac{3}{2}} (e^{it_n\Delta _\Omega}f_n) \left(N_n^{-1} x + x_n \right). \end{align*} Since $f_n$ is supported in $\Omega$, $g_n$ is supported in $\Omega_n : = N_n ( \Omega - \{x_n\})$. Moreover, we have \begin{align*} \|g_n \|_{\dot{H}_D^{s_c}( \Omega_n)} = \|f_n \|_{\dot{H}_D^{s_c}( \Omega)} \lesssim A.
\end{align*} Passing to a further subsequence, we find a $\tilde{\phi}$ such that $g_n \rightharpoonup \tilde{\phi}$ in $\dot{H}^{s_c}( \R^3 )$ as $n \to \infty$. Rescaling this weak convergence, we have \begin{align}\label{B} e^{it_n\Delta _\Omega}f_n(x) \rightharpoonup \phi(x) : = N_\infty^{\frac{3}{2}-s_c} \tilde{\phi} (N_\infty (x-x_\infty)) \text{ in } \dot{H}_D^{s_c}(\Omega). \end{align} Since $\dot{H}_D^{s_c}( \Omega)$ is a weakly closed subset of $\dot{H}^{s_c}(\R^3)$, $\phi \in \dot{H}_D^{s_c}(\Omega)$. We now proceed to prove that $\phi$ is non-trivial. To this end, let $h := P_{N_\infty}^\Omega \delta_{x_\infty}$. By the Bernstein inequality, we have \begin{align}\label{eq5.7v65} \left\| \left(- \Delta_\Omega \right)^{-\frac{s_c}{2}} h \right\|_{L^2(\Omega)} = \left\| \left(- \Delta_\Omega \right)^{-\frac{s_c}{2}} P_{N_\infty}^\Omega \delta_{x_\infty} \right\|_{L^2(\Omega)} \lesssim N_\infty^{\frac{3}{2}-s_c}, \end{align} which shows that $h \in \dot{H}_D^{-s_c} (\Omega)$. On the other hand, we observe that \begin{align}\label{eq5.8v65} \langle \phi, h \rangle &= \lim\limits_{n \to \infty} \langle e^{it_n\Delta_\Omega}f_n, h \rangle = \lim\limits_{n \to \infty} \left\langle e^{it_n\Delta_\Omega}f_n, P_{N_\infty}^\Omega \delta_{x_\infty} \right\rangle \nonumber \\ &= \lim\limits_{n \to \infty} \left(e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n \right)(x_n) + \lim\limits_{n \to \infty} \left\langle e^{it_n\Delta_\Omega}f_n, P_{N_\infty}^\Omega \left( \delta_{x_\infty} - \delta_{x_n} \right) \right\rangle. \end{align} We first claim that the second term in \eqref{eq5.8v65} vanishes. Indeed, since $d(x_n) \sim 1$, the Bernstein inequality implies \begin{align*} \left\| P_{N_\infty}^\Omega e^{it_n\Delta_\Omega}f_n \right\|_{L_x^\infty} \lesssim N_\infty^{\frac{3}{2}-s_c} A, \quad \text{and} \quad \left\|\Delta P_{N_\infty}^\Omega e^{it_n\Delta_\Omega}f_n \right\|_{L_x^\infty} \lesssim N_\infty^{\frac{3}{2}+s_c} A. \end{align*} Using the fundamental theorem of calculus and the basic elliptic estimate \begin{align}\label{eq5.9v65} \| \nabla v \|_{L^\infty(|x| \leq R)} \lesssim R^{-1} \|v\|_{L^\infty(|x| \leq 2R)} + R \|\Delta v\|_{L^\infty(|x| \leq 2R)}, \end{align} it follows for sufficiently large $n$ that \begin{align}\label{eq5.10v65} \left| \left\langle e^{it_n\Delta_\Omega}f_n, P_{N_\infty}^\Omega \left( \delta_{x_\infty} - \delta_{x_n} \right) \right\rangle \right| &\lesssim |x_\infty - x_n| \left\|\nabla P_{N_\infty}^\Omega e^{it_n\Delta_\Omega} f_n \right\|_{L^\infty(|x| \leq R)} \notag\\ &\lesssim \Big( \frac{N_\infty^{\frac{3}{2}-s_c}}{d(x_n)} + N_\infty^{\frac{3}{2}+s_c} d(x_n) \Big) A |x_\infty - x_n|, \end{align} which converges to zero as $n \to \infty$. Therefore, it follows from \eqref{A}, \eqref{eq5.7v65}, \eqref{eq5.8v65}, and \eqref{eq5.10v65} that \begin{align}\label{eq5.11v65} N_\infty^{\frac{3}{2}-s_c} \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}} \lesssim |\langle \phi, h \rangle | \lesssim \|\phi \|_{\dot{H}_D^{s_c}( \Omega)} \|h \|_{\dot{H}_D^{-s_c} ( \Omega)} \lesssim N_\infty^{\frac{3}2-s_c} \|\phi \|_{\dot{H}_D^{s_c}( \Omega)}, \end{align} which gives \eqref{inverse-1}. Next, since $\dot{H}_D^{s_c}(\Omega)$ is a Hilbert space, \eqref{inverse-2} follows directly from \eqref{inverse-1} and \eqref{B}. It remains to establish the decoupling for the $L_x^{q_0}$ norm in \eqref{inverse-3}. Note that \begin{align*} (i\partial_t)^\frac{s_c}{2}e^{it\Delta_\Omega} = (-\Delta_\Omega)^\frac{s_c}{2}e^{it\Delta_\Omega}. 
\end{align*} Applying H\"older's inequality on a compact domain $K \subset \mathbb{R} \times \mathbb{R}^3$, we obtain \begin{align*} \big\|e^{it\Delta_\Omega}e^{it_n\Delta_{\Omega}}f_n\big\|_{H_{t,x}^{\frac{s_c}{2}}(K)} \lesssim \|\langle-\Delta_\Omega\rangle^{\frac{s_c}{2}}e^{i(t+t_n)\Delta_\Omega}f_n\|_{L_{t,x}^2(K)} \lesssim_K A. \end{align*} By the Rellich-Kondrachov compactness theorem and a diagonal argument, passing to a subsequence yields \begin{align*} e^{it\Delta_\Omega}e^{it_n\Delta_\Omega}f_n \to e^{it\Delta_\Omega}\phi \quad \text{strongly in } L^2_{t,x}(K), \end{align*} and \begin{align*} e^{it\Delta_\Omega}e^{it_n\Delta_\Omega}f_n \to e^{it\Delta_\Omega}\phi(x) \quad \text{a.e. in } \mathbb{R} \times \mathbb{R}^3. \end{align*} By the refined Fatou lemma (Lemma \ref{LRefinedFatou}) and a change of variables, we have \begin{align*} \lim\limits_{n \to \infty} \left( \|e^{it\Delta_\Omega}f_n \|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)}^{q_0} - \|e^{it\Delta_\Omega}(f_n - \phi_n) \|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)}^{q_0} \right) = \|e^{it\Delta_\Omega}\phi \|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)}^{q_0}, \end{align*} from which \eqref{inverse-3} will follow once we show that \begin{align}\label{eq5.12v65} \|e^{it\Delta_\Omega}\phi \|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)} \gtrsim \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}. \end{align} To prove \eqref{eq5.12v65}, the Mikhlin multiplier theorem provides the uniform estimate for $|t| \leq N_\infty^{-2}$: \begin{align*} \left\|e^{it\Delta_\Omega}P_{\leq 2 N_\infty}^\Omega \right\|_{L_x^{q_0^\prime} \to L_x^{q_0^\prime}} \lesssim 1, \quad \text{with} \quad q_0^\prime = \frac{10}{2s_c+7}. \end{align*} Combining this with the Bernstein inequality, we get \begin{align*} \|e^{it\Delta_\Omega}h \|_{L_x^{q_0^\prime}} \lesssim \left\|e^{it\Delta_\Omega}P_{\leq 2 N_\infty}^\Omega \right\|_{L_x^{q_0^\prime} \to L_x^{q_0^\prime}} \left\|P_{N_\infty}^\Omega \delta_\infty \right\|_{L_x^{q_0^\prime}} \lesssim N_\infty^{\frac{9-6s_c}{10}}. \end{align*} This, together with \eqref{eq5.11v65}, implies \begin{align*} N_\infty^{\frac{3}{2}-s_c} \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}} \lesssim |\langle\phi, h\rangle| = |\langle e^{it\Delta_\Omega}\phi, e^{it\Delta_\Omega}h \rangle| \lesssim N_\infty^{\frac{9-6s_c}{10}} \|e^{it\Delta_\Omega}\phi \|_{L_x^{q_0}(\mathbb{R} \times \Omega)}, \end{align*} uniformly for $|t| \leq N_\infty^{-2}$. Integrating over $t$ then establishes \eqref{eq5.12v65}. \textbf{Case 2}. As $N_n \to 0$, the condition $N_n d(x_n) \lesssim 1$ ensures that the sequence $\{N_n x_n\}_{n \geq 1}$ is bounded. Hence, up to a subsequence, we assume $-N_n x_n \to x_\infty \in \mathbb{R}^3$ as $n \to \infty$. Similar to Case 1, we define $\Omega_n := N_n (\Omega - \{x_n\})$. Since $N_n \to 0$, the rescaled obstacles $\Omega_n^c$ shrink to $x_\infty$ as $n \to \infty$. Because $f_n$ is bounded in $\dot{H}_D^{s_c}(\Omega)$, the sequence $g_n$ remains bounded in $\dot{H}_D^{s_c}(\Omega_n) \subset \dot{H}^{s_c}(\mathbb{R}^3)$. Passing to a subsequence, we find $\tilde{\phi}$ such that $g_n \rightharpoonup \tilde{\phi}$ in $\dot{H}^{s_c}(\mathbb{R}^3)$ as $n \to \infty$. Next, we claim that \begin{align}\label{eq5.13v65} \chi_n \tilde{\phi} \to \tilde{\phi}, \quad \text{or equivalently,} \quad \left(1 - \chi\left(N_n^{-1}x + x_n\right)\right)\tilde{\phi}(x) \to 0 \text{ in } \dot{H}^{s_c}(\mathbb{R}^3). 
\end{align} To show this, let \begin{align*} B_n := \left\{x \in \mathbb{R}^3 : \operatorname{dist}(x, \Omega_n^c) \leq \operatorname{diam}(\Omega_n^c) \right\}. \end{align*} The set $B_n$ contains $\supp(1 - \chi_n)$ and $\supp(\nabla \chi_n)$. Since $N_n \to 0$, the measure of $B_n$ tends to zero as $n \to \infty$. Thus, \eqref{eq5.13v65} follows from H\"older's inequality, Sobolev embedding, the fractional chain rule, and the monotone convergence theorem. With \eqref{eq5.13v65} established, the proofs of \eqref{inverse-1} and \eqref{inverse-2} proceed analogously to their counterparts in Case 1. First, we prove \eqref{inverse-1}. Define $h := P_1^{\mathbb{R}^3}\delta_0$. Then, \begin{align*} \left\langle \tilde{\phi}, h \right\rangle = \lim\limits_{n \to \infty} \langle g_n, h \rangle = \lim\limits_{n \to \infty} \left\langle g_n, P_1^{\Omega_n}\delta_0 \right\rangle + \lim\limits_{n \to \infty} \left\langle g_n, \left(P_1^{\mathbb{R}^3} - P_1^{\Omega_n}\right)\delta_0 \right\rangle. \end{align*} By Proposition \ref{convergence-domain} and the uniform boundedness of $\|g_n\|_{\dot{H}^{s_c}(\mathbb{R}^3)}$, the second term vanishes. Hence, using the definition of $g_n$ and a change of variables, we find \begin{align}\label{estimate-pair} \left|\left\langle \tilde{\phi}, h \right\rangle\right| &= \left|\lim\limits_{n \to \infty} \left\langle g_n, P_1^{\Omega_n}\delta_0 \right\rangle\right| = \left|\lim\limits_{n \to \infty} \left\langle f_n, N_n^{s_c+\frac{3}{2}}\left(P_1^{\Omega_n}\delta_0\right)(N_n(x-x_n)) \right\rangle\right| \notag \\ &= \left|\lim\limits_{n \to \infty} \left\langle f_n, N_n^{s_c-\frac{3}{2}}P_{N_n}^\Omega\delta_{x_n} \right\rangle\right| \gtrsim \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}, \end{align} where the final inequality follows from \eqref{A}. Thus, arguing as in \eqref{eq5.11v65}, we obtain \begin{align*} \|\tilde{\phi}\|_{\dot{H}^{s_c}(\mathbb{R}^3)} \gtrsim \varepsilon^\frac{15}{s_c(4s_c+4)}A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}, \end{align*} which, combined with \eqref{eq5.13v65}, establishes \eqref{inverse-1}. To establish the decoupling estimate in $\dot{H}_D^{s_c}(\Omega)$, we write \begin{align*} &\quad \|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \|f_n - \phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2 = 2\langle f_n, \phi_n \rangle_{\dot{H}_D^{s_c}(\Omega)} - \|\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2 \\ &= 2\left\langle N_n^{s_c-\frac{3}{2}} f_n (N_n^{-1} x + x_n), \tilde{\phi}(x) \chi(x) \right\rangle_{\dot{H}_D^{s_c}(\Omega_n)} - \|\chi_n \tilde{\phi}\|_{\dot{H}_D^{s_c}(\Omega_n)}^2 \\ &= 2\left\langle g_n, \tilde{\phi} \right\rangle_{\dot{H}^{s_c}(\mathbb{R}^3)} - 2\left\langle g_n, \tilde{\phi}(1 - \chi_n) \right\rangle_{\dot{H}^{s_c}(\mathbb{R}^3)} - \|\chi_n \tilde{\phi}\|_{\dot{H}_D^{s_c}(\Omega_n)}^2. \end{align*} Using the weak convergence of $g_n$ to $\tilde{\phi}$, \eqref{eq5.13v65}, and \eqref{inverse-1}, we deduce \begin{align*} \lim\limits_{n \to \infty} \left(\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \|f_n - \phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2\right) = \|\tilde{\phi}\|_{\dot{H}^{s_c}(\mathbb{R}^3)}^2 \gtrsim \varepsilon^\frac{15}{s_c(2s_c+2)} A^{\frac{4s_c^2+4s_c-15}{s_c(2s_c+2)}}. \end{align*} This verifies \eqref{inverse-2}. 
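We pause to record the elementary Hilbert-space identity underlying this (and every subsequent) $\dot H^{s_c}_D$ decoupling statement; it is nothing more than the polarization used in the computation above:
\[
\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^2-\|f_n-\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2-\|\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}^2
=2\operatorname{Re}\big\langle f_n-\phi_n,\phi_n\big\rangle_{\dot{H}_D^{s_c}(\Omega)}.
\]
Thus \eqref{inverse-2} follows from \eqref{inverse-1} as soon as the inner product on the right-hand side is shown to vanish in the limit, which is exactly what the weak convergence of $g_n$, transported back to $\Omega$ by the symmetries defining $\phi_n$, provides.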
Next, we establish the decoupling for the $L_{t,x}^{q_0}(\mathbb{R} \times \Omega)$ norm by proving \begin{align}\label{eq5.15v65} \liminf\limits_{n \to \infty} \left(\|e^{it\Delta_\Omega}f_n\|_{L_{t,x}^{q_0}}^{q_0} - \|e^{it\Delta_\Omega}(f_n - \phi_n)\|_{L_{t,x}^{q_0}}^{q_0}\right) = \|e^{it\Delta_\Omega}\tilde{\phi}\|_{L_{t,x}^{q_0}}^{q_0}. \end{align} From this, \eqref{inverse-3} follows by establishing the lower bound \begin{align}\label{eq5.16v65} \|e^{it\Delta_\Omega}\tilde{\phi}\|_{L_x^{q_0}}^{q_0} \gtrsim \left(\varepsilon^\frac{15}{s_c(4s_c+4)} A^{\frac{4s_c^2+4s_c-15}{2s_c(2s_c+2)}}\right)^{q_0}. \end{align} The proof of \eqref{eq5.16v65} is similar to that in Case 1 and is omitted here. It remains to verify \eqref{eq5.15v65}. Two key observations are required: \begin{align}\label{eq5.17v65} e^{it\Delta_{\Omega_n}}(g_n - \chi_n \tilde{\phi}) \to 0 \quad \text{a.e. in } \mathbb{R} \times \mathbb{R}^3, \end{align} and \begin{align}\label{eq5.18v65} \|e^{it\Delta_{\Omega_n}}\chi_n \tilde{\phi} - e^{it\Delta_{\mathbb{R}^3}}\tilde{\phi}\|_{L_{t,x}^{q_0}(\mathbb{R} \times \mathbb{R}^3)} \to 0. \end{align} For \eqref{eq5.17v65}, combining the definition of $\tilde{\phi}$ with \eqref{eq5.13v65}, we find \begin{align*} g_n - \chi_n \tilde{\phi} \rightharpoonup 0 \quad \text{weakly in } \dot{H}^{s_c}(\mathbb{R}^3). \end{align*} Using Lemma \ref{L:compact} and the fact that $(i\partial_t)^{s_c/2}e^{it\Delta_{\Omega_n}} = (-\Delta_\Omega)^{s_c/2}e^{it\Delta_{\Omega_n}}$, we conclude \eqref{eq5.17v65} by passing to a subsequence. For \eqref{eq5.18v65}, we apply \eqref{eq5.13v65}, the Strichartz inequality, and Theorem \ref{convergence-flow} to deduce the result. Combining \eqref{eq5.17v65} and \eqref{eq5.18v65}, and passing to a subsequence if necessary, we obtain \begin{align*} e^{it\Delta_{\Omega_n}}g_n - e^{it\Delta_{\mathbb{R}^3}}\tilde{\phi} \to 0 \quad \text{a.e. in } \mathbb{R} \times \mathbb{R}^3. \end{align*} By the refined Fatou lemma (Lemma \ref{LRefinedFatou}), we have \begin{align*} \liminf\limits_{n \to \infty} \left(\|e^{it\Delta_{\Omega_n}}g_n\|_{L_{t,x}^{q_0}}^{q_0} - \|e^{it\Delta_{\Omega_n}}g_n - e^{it\Delta_{\mathbb{R}^3}}\tilde{\phi}\|_{L_{t,x}^{q_0}}^{q_0}\right) = \|e^{it\Delta_{\mathbb{R}^3}}\tilde{\phi}\|_{L_{t,x}^{q_0}}^{q_0}. \end{align*} Combining this with \eqref{eq5.18v65}, \eqref{eq5.13v65}, and a rescaling argument, we conclude \eqref{eq5.15v65}. \textbf{Case 3}. The proof of this case closely follows the argument in \textit{Case 2}. The main difference lies in the geometry of the two cases, which affects the application of Proposition \ref{convergence-domain} and the analogue of \eqref{eq5.13v65}. Since these key results have already been established for all cases, it suffices to show \begin{align}\label{eq5.19v65} \chi_n \tilde{\phi} \to \tilde{\phi}, \quad \text{or equivalently,} \quad \Theta\left(\frac{|x|}{\operatorname{dist}(0, \Omega_n^c)}\right)\tilde{\phi}(x) \to 0 \text{ in } \dot{H}^{s_c}(\mathbb{R}^3). \end{align} To prove this, define \begin{align*} B_n := \left\{x \in \mathbb{R}^3 : |x| \geq \frac{1}{4} \operatorname{dist}(0, \Omega_n^c) \right\}. \end{align*} Using H\"older's inequality and Sobolev embedding, we estimate \begin{align*} \left\|\Theta\left(\frac{|x|}{\operatorname{dist}(0, \Omega_n^c)}\right)\tilde{\phi}(x)\right\|_{\dot{H}^{s_c}(\mathbb{R}^3)} \lesssim \left\|(-\Delta)^\frac{s_c}{2}\tilde{\phi}\right\|_{L^2(B_n)} + \left\|\tilde{\phi}\right\|_{L^\frac{6}{3-2s_c}(B_n)}. 
\end{align*} Since $\operatorname{dist}(0, \Omega_n^c) \to \infty$, the right-hand side converges to $0$ by the monotone convergence theorem. \medskip \textbf{Case 4}. By passing to a subsequence, we assume $N_n d(x_n) \to d_\infty > 0$. By the weak sequential compactness of bounded sequences in $\dot{H}^{s_c}(\mathbb{R}^3)$, there exists a subsequence and $\tilde{\phi} \in \dot{H}^{s_c}(\mathbb{R}^3)$ such that $g_n \rightharpoonup \tilde{\phi}$ in $\dot{H}^{s_c}(\mathbb{R}^3)$. Using the characterization of Sobolev spaces, \begin{align*} \dot{H}_D^{s_c}(\mathbb{H}) = \left\{g \in \dot{H}^{s_c}(\mathbb{R}^3) : \int_{\mathbb{R}^3} g(x) \psi(x) dx = 0 \text{ for all } \psi \in C_c^\infty(-\mathbb{H}) \right\}, \end{align*} we conclude that $\tilde{\phi} \in \dot{H}_D^{s_c}(\mathbb{H})$ because for any compact set $K$ in the complementary half-space $-\mathbb{H}$, we have $K \subset \Omega_n^c$ for sufficiently large $n$, where \begin{align*} \Omega_n := N_n R_n^{-1}(\Omega - \{x_n^*\}) \supset \supp(g_n). \end{align*} Moreover, by the definition of $R_n$ and $\mathbb{H}_n$, we have \begin{align*} x \in \mathbb{H} \Longleftrightarrow N_n^{-1}R_nx + x_n^* \in \mathbb{H}_n := \left\{y : \left(x_n - x_n^*\right) \cdot \left(y - x_n^*\right) > 0\right\} \subset \Omega, \end{align*} where $\partial \mathbb{H}_n$ represents the tangent plane to $\partial \Omega$ at $x_n^*$. This inclusion yields \begin{align}\label{eq5.20v65} \|\tilde{\phi}\|_{\dot{H}_D^{s_c}(\mathbb{H})} = \|\phi_n\|_{\dot{H}_D^{s_c}(\mathbb{H}_n)} = \|\phi_n\|_{\dot{H}_D^{s_c}(\Omega)}. \end{align} To establish \eqref{inverse-1}, we need a lower bound for $\|\tilde{\phi}\|_{\dot{H}_D^{s_c}(\mathbb{H})}$. Let $h := P_1^{\mathbb{H}}\delta_{d_\infty e_3}$. Using the Bernstein inequality, we have \begin{align}\label{eq5.21v65} \left\| \left(-\Delta_{\mathbb{H}}\right)^{-\frac{s_c}{2}} h \right\|_{L^2(\mathbb{H})} \lesssim 1, \end{align} which implies $h \in \dot{H}_D^{-s_c}(\mathbb{H})$. Now, define $\tilde{x}_n := N_nR_n^{-1}(x_n - x_n^*)$. By assumption, $\tilde{x}_n \to d_\infty e_3$. Using Proposition \ref{convergence-domain}, we compute \begin{align*} \langle \tilde{\phi}, h \rangle &= \lim\limits_{n \to \infty} \Big(\langle g_n, P_1^{\Omega_n} \delta_{\tilde{x}_n} \rangle + \langle g_n, (P_1^{\mathbb{H}} - P_1^{\Omega_n})\delta_{d_\infty e_3} \rangle + \langle g_n, P_1^{\Omega_n}(\delta_{d_\infty e_3} - \delta_{\tilde{x}_n}) \rangle\Big) \\ &= \lim\limits_{n \to \infty} \Big(N_n^{s_c - \frac{3}{2}}(e^{it_n\Delta_\Omega}P_{N_n}^\Omega f_n)(x_n) + \langle g_n, P_1^{\Omega_n}(\delta_{d_\infty e_3} - \delta_{\tilde{x}_n}) \rangle\Big). \end{align*} Following the argument in \eqref{eq5.10v65} and applying \eqref{eq5.9v65} to $v(x) = \left(P_1^{\Omega_n}g_n\right)(x + \tilde{x}_n)$ with $R = \frac{1}{2}N_n d(x_n)$, we obtain \begin{align*} \left| \left\langle g_n, P_1^{\Omega_n} \left(\delta_{d_\infty e_3} - \delta_{\tilde{x}_n}\right) \right\rangle \right| \lesssim A\left(d_\infty^{-1} + d_\infty\right)\left|d_\infty e_3 - \tilde{x}_n\right| \to 0 \quad \text{as } n \to \infty. \end{align*} Thus, we conclude \begin{align*} \left|\left\langle \tilde{\phi}, h \right\rangle\right| \gtrsim \varepsilon^\frac{15}{s_c(2s_c+2)}A^{\frac{4s_c^2+4s_c-15}{s_c(2s_c+2)}}, \end{align*} which, together with \eqref{eq5.20v65} and \eqref{eq5.21v65}, proves \eqref{inverse-1}. Finally, following the same reasoning as in Case 2, we establish \eqref{inverse-2}. This completes the proof of Proposition \ref{inverse-strichartz}.
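Before turning to the linear profile decomposition, we record, for later use, the elementary conjugation identity between the propagator on $\Omega$ and the propagators on the rescaled domains; it follows from the scaling of the Dirichlet Laplacian and is used implicitly in the manipulations below. With $[G_nf](x):=\lambda_n^{s_c-\frac{3}{2}}f\big(\frac{x-x_n}{\lambda_n}\big)$ and $\Omega_n:=\lambda_n^{-1}(\Omega-\{x_n\})$ (the notation of Theorem \ref{linear-profile} below), we have
\begin{align*}
e^{i\lambda_n^2t\Delta_\Omega}\,G_n=G_n\,e^{it\Delta_{\Omega_n}},
\qquad\text{equivalently,}\qquad
e^{it\Delta_{\Omega_n}}\,G_n^{-1}=G_n^{-1}\,e^{i\lambda_n^2t\Delta_\Omega}.
\end{align*}
Indeed, if $u(t)=e^{it\Delta_{\Omega_n}}f$, then $v(t,x):=\lambda_n^{s_c-\frac{3}{2}}u(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))$ solves the linear Schr\"odinger equation on $\Omega$ with Dirichlet boundary conditions and initial datum $v(0)=G_nf$, so that $v(t)=e^{it\Delta_\Omega}G_nf$.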
To establish the linear profile decomposition for the Schr\"odinger flow $e^{it\Delta_\Omega}$, we require the following two weak convergence results. \begin{lemma}[Weak convergence]\label{weak-convergence} Assume that $\Omega_n \equiv \Omega$ or $\{\Omega_n\}$ conforms to one of the last three cases in Proposition \ref{inverse-strichartz}. Let $f \in C_0^\infty(\widetilde{\lim}\,\Omega_n)$ and $\{(t_n, x_n)\}_{n \geq 1} \subset \mathbb{R} \times \mathbb{R}^3$. If either $|t_n| \to \infty$ or $|x_n| \to \infty$, then \begin{align}\label{weak} e^{it_n\Delta_{\Omega_n}}f(x + x_n) \rightharpoonup 0 \end{align} weakly in $\dot{H}^{s_c}(\mathbb{R}^3)$ as $n \to \infty$. \end{lemma} \begin{proof} Killip-Visan-Zhang \cite[Lemma 5.4]{KillipVisanZhang2016a} demonstrated that $\{e^{it_n\Delta_{\Omega_n}}f(x + x_n)\}_{n=1}^\infty$ converges weakly to zero in $\dot{H}^{1}(\mathbb{R}^3)$ as $n \to \infty$. Noting that $\{e^{it_n\Delta_{\Omega_n}}f(x + x_n)\}_{n=1}^\infty$ is also bounded in $\dot{H}^{s_c}(\mathbb{R}^3)$, we deduce that it converges weakly to zero in $\dot{H}^{s_c}(\mathbb{R}^3)$ as well. \end{proof} \begin{lemma}[Weak convergence]\label{L:compact} Assume $\Omega_n\equiv\Omega$ or $\{\Omega_n\}$ conforms to one of the last three scenarios considered in Proposition~\ref{inverse-strichartz}. Let $f_n\in \dot H_D^{s_c}(\Omega_n)$ be such that $f_n\rightharpoonup 0$ weakly in $\dot H^{s_c}(\R^3)$ and let $t_n\to t_\infty\in \R$. Then \begin{align*} e^{it_n\Delta_{\Omega_n}} f_n\rightharpoonup 0 \quad\text{weakly in}\quad \dot{H}^{s_c}(\R^3). \end{align*} \end{lemma} \begin{proof} Given any $\phi\in C_c^{\infty}(\R^3)$, \begin{align*} \big|\langle \big(e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}\big)f_n, \phi\rangle_{\dot H^{s_c}(\R^3)}\big| \lesssim |t_n-t_\infty|^{\frac{s_c}2} \|(-\Delta_{\Omega_n})^{\frac{s_c}2}f_n\|_{L^2} \|\phi\|_{\dot{H}^{2s_c}}, \end{align*} which converges to zero as $n\to \infty$. To obtain the last inequality above, we have used the spectral theorem together with the elementary inequality $|e^{it_n\lambda}-e^{it_\infty\lambda}|\lesssim |t_n-t_\infty|^{s_c/2}\lambda^{s_c/2}$ for $\lambda\geq 0$. Thus, we are left to prove \begin{align*} \int_{\R^3} |\nabla|^{s_c} \bigl[e^{it_\infty\Delta_{\Omega_n}} f_n\bigr](x) |\nabla|^{s_c} \bar\phi(x)dx = \int_{\R^3}e^{it_\infty\Delta_{\Omega_n}}f_n(x) (-\Delta)^{s_c}\bar\phi(x)dx\to0\quad\text{as}\quad n\rightarrow\infty \end{align*} for all $\phi\in C_0^\infty(\R^3)$. As $\{e^{it_\infty\Delta_{\Omega_n}} f_n\}_{n=1}^{\infty }$ is uniformly bounded in $\dot H^{s_c}(\mathbb{R} ^3)$, it suffices to show (using the fact that the measure of $\Omega_n\triangle(\widetilde{\lim}\,\Omega_n)$ converges to zero) \begin{align}\label{9:38am} \int_{\R^3} e^{it_\infty\Delta_{\Omega_n}} f_n (x) \bar\phi(x)\, dx \to 0 \qtq{as} n\to \infty \end{align} for all $\phi\in C_c^\infty(\widetilde{\lim} \Omega_n)$. To prove (\ref{9:38am}), we write \begin{align*} \langle e^{it_\infty\Delta_{\Omega_n}} f_n, \phi \rangle =\langle f_n, [e^{-it_\infty\Delta_{\Omega_n}} -e^{-it_\infty\Delta_{\Omega_\infty}}]\phi \rangle + \langle f_n,e^{-it_\infty\Delta_{\Omega_\infty}}\phi \rangle, \end{align*} where $\Omega_\infty$ denotes the limit of $\Omega_n$. The first term converges to zero by Proposition~\ref{convergence-domain}.
As $f_n\rightharpoonup 0$ in $\dot H^{s_c}(\R^3)$, to see that the second term converges to zero, we merely need to prove that $e^{-it_\infty\Delta_{\Omega_\infty}}\phi\in \dot H^{-s_c}(\R^3)$ for all $\phi\in C_0^\infty(\widetilde{\lim}\,\Omega_n)$. This in fact follows from the Mikhlin multiplier theorem and Bernstein's inequality: \begin{align*} \|e^{-it_\infty\Delta_{\Omega_\infty}}\phi\|_{\dot H^{-s_c}(\R^3)} &\lesssim\|e^{-it_\infty\Delta_{\Omega_\infty}}P_{\leq 1}^{\Omega_\infty} \phi\|_{L^{\frac6{2s_c+3}}(\R^3)}+\sum_{N\geq 1}\|e^{-it_\infty\Delta_{\Omega_\infty}}P_N^{\Omega_\infty}\phi\|_{L^{\frac6{2s_c+3}}(\R^3)}\\ &\lesssim \|\phi\|_{L^{\frac6{2s_c+3}}(\mathbb{R} ^3)} + \|(-\Delta_{\Omega_\infty})^2\phi\|_{L^{\frac6{2s_c+3}}(\mathbb{R} ^3)}\lesssim_\phi 1. \end{align*} This completes the proof of the lemma. \end{proof} Now, we are in position to give the linear profile decomposition for the Schr\"odinger propagator $e^{it\Delta_\Omega}$ in $\dot{H}_D^{s_c}(\Omega)$. Indeed, this follows from the application of Proposition \ref{refined-strichartz} and \ref{inverse-strichartz}. \begin{theorem}[$\dot{H}_D^{s_c}(\Omega)$ linear profile decomposition]\label{linear-profile} Let $\{f_n\}_{n\geq1}$ be a bounded sequence in $\dot{H}_D^{s_c}(\Omega)$. Passing to a subsequence, there exist $J^*\in\{0,1,\cdots,\infty\}$, $\{\phi_n^j\}_{j=1}^{J^*}\subset\dot{H}_D^{s_c}(\Omega)$, $\{\lambda_n^j\}_{j=1}^{J^*}\subset(0,\infty)$, and $\{(t_n^j, x_n^j)\}_{j=1}^{J^*}\subset\mathbb{R}\times\Omega$, such that for each $j$, one of the following cases holds: \begin{itemize} \item \textbf{Case 1.} $\lambda_n^j\equiv\lambda_\infty^j$, $x_n^j=x_\infty^j$ and there exists a $\phi^j\in\dot{H}_D^{s_c}(\Omega)$ such that \begin{align*} \phi_n^j=e^{it_n^j(\lambda_n^j)^2\Delta_{\Omega}}\phi^j. \end{align*} We define $[G_n^jf](x):=(\lambda_n^j)^{s_c-\frac{3}{2}}f\big(\frac{x-x_n^j}{\lambda_n^j}\big)$ and $\Omega_n^j=(\lambda_n^j)^{-1}(\Omega-\{x_n^j\})$. \end{itemize} \begin{itemize} \item \textbf{Case 2. } $\lambda_n^j\to\infty$, $-\frac{x_n^j}{\lambda_n^j}\to x_\infty^j\in\R^3$. 
There exists a $\phi^j\in\dot{H}^{s_c}(\R^3)$ such that \begin{align*} \phi_n^j(x)=G_n^j\big(e^{it_n^j\Delta_{\Omega_{n}^j}}(\chi_n^j\phi^j)\big)(x)\qtq{with}[G_n^jf](x):=(\lambda_n^j)^{s_c-\frac{3}{2}}f\big(\frac{x-x_n^j}{\lambda_n^j}\big), \end{align*} \begin{equation} \Omega_n^j=(\lambda_n^j)^{-1}(\Omega-\{x_n^j\}),\qquad \chi_n^j(x)=\chi(\lambda_n^jx+x_n^j)\qtq{and}\chi(x)=\Theta\big(\frac{d(x)}{\operatorname{diam}(\Omega^c)}\big).\notag \end{equation} \end{itemize} \begin{itemize} \item \textbf{Case 3.} $\frac{d(x_n^j)}{\lambda_n^j}\to\infty$ and there exists a $\phi^j\in\dot{H}^{s_c}(\R^3)$ such that \begin{align*} \phi_n^j(x):=G_n^j\big(e^{it_n^j\Delta_{\Omega_{n}^j}}(\chi_n^j\phi^j)\big)(x)\qtq{with}[G_n^jf](x):=(\lambda_n^j)^{s_c-\frac{3}{2}}f\big(\frac{x-x_n^j}{\lambda_n^j}\big), \end{align*} where \begin{equation} \Omega_n^j=(\lambda_n^j)^{-1}(\Omega-\{x_n^j\}),\quad\text{and}\quad \chi_n^j(x):=1-\Theta\big(\frac{\lambda_n^j|x|}{d(x_n^j)}\big).\notag \end{equation} \end{itemize} \begin{itemize} \item \textbf{Case 4.} $\lambda_n^j\to0$, $\frac{d(x_n^j)}{\lambda_n^j}\to d_\infty^j>0$ and there exists a $\phi^j\in\dot{H}_D^{s_c}(\mathbb{H})$ such that \begin{align*} \phi_n^j(x):=G_n^j\big(e^{it_n^j\Delta_{\Omega_n^j}}\phi^j\big)(x)\quad\text{with}\quad [G_n^jf](x)=(\lambda_n^j)^{s_c-\frac{3}{2}}f\big(\frac{(R_n^j)^{-1}(x-(x_n^j)^*)}{\lambda_n^j}\big), \end{align*} $\Omega_n^j=(\lambda_n^j)^{-1}(R_n^j)^{-1}(\Omega-\{(x_n^j)^*\})$, $(x_n^j)^*\in\partial\Omega$ is chosen by $d(x_n^j)=|x_n^j-(x_n^j)^*|$ and $R_n^j\in \operatorname{SO}(3)$ satisfies $R_n^je_3=\frac{x_n^j-(x_n^j)^*}{|x_n^j-(x_n^j)^*|}.$ \end{itemize} Moreover, for any finite $0 \leq J \leq J^*$, we have the profile decomposition \begin{align*} f_n = \sum_{j=1}^J \phi_n^j + W_n^J, \end{align*} where: \begin{itemize} \item For all $n$ and $J \geq 1$, $W_n^J \in \dot{H}_D^{s_c}(\Omega)$, and \begin{align}\label{profile-1} \lim_{J \to J^*} \limsup_{n \to \infty} \|e^{it\Delta_\Omega}W_n^J\|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)} = 0. \end{align} \item For any $J \geq 1$, we have the decoupling property: \begin{align}\label{profile-2} \lim_{n \to \infty} \left(\|f_n\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \sum_{j=1}^J \|\phi_n^j\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \|W_n^J\|_{\dot{H}_D^{s_c}(\Omega)}^2\right) = 0. \end{align} \item For any $1 \leq J \leq J^*$, \begin{align}\label{profile-3} e^{-it_n^J\Delta_{\Omega_n^J}}(G_n^J)^{-1}W_n^J \rightharpoonup 0 \quad \text{weakly in } \dot{H}^{s_c}(\mathbb{R}^3). \end{align} \item For all $j \neq k$, we have asymptotic orthogonality: \begin{align}\label{profile-4} \lim_{n \to \infty} \left(\frac{\lambda_n^j}{\lambda_n^k} + \frac{\lambda_n^k}{\lambda_n^j} + \frac{|x_n^j - x_n^k|^2}{\lambda_n^j\lambda_n^k} + \frac{|t_n^j(\lambda_n^j)^2 - t_n^k(\lambda_n^k)^2|}{\lambda_n^j\lambda_n^k}\right) = \infty. \end{align} \end{itemize} Finally, we may assume for each $j$ that either $t_n^j \equiv 0$ or $|t_n^j| \to \infty$. \end{theorem} \begin{proof} We employ an induction argument to complete the proof by extracting one bubble at a time. Initially, we set $W_n^0 := f_n$. Suppose that for some $J \geq 0$, we have a decomposition satisfying \eqref{profile-2} and \eqref{profile-3}. Passing to a subsequence if needed, define \begin{align*} A_J := \lim\limits_{n \to \infty} \left\|W_n^J\right\|_{\dot{H}_D^{s_c}(\Omega)} \quad \text{and} \quad \epsilon_J := \lim\limits_{n \to \infty} \left\|e^{it\Delta_{\Omega}}W_n^J\right\|_{L_{t,x}^{q_0}(\mathbb{R} \times \Omega)}.
\end{align*} If $\epsilon_J = 0$, the induction terminates, and we set $J^* = J$. Otherwise, we apply the inverse Strichartz inequality (see Proposition \ref{inverse-strichartz}) to $W_n^J$. After passing to a subsequence, we obtain $\{\phi_n^{J+1}\} \subseteq \dot{H}_D^{s_c}(\Omega)$, $\{\lambda_n^{J+1}\} \subseteq 2^{\mathbb{Z}}$, and $\{x_n^{J+1}\} \subseteq \Omega$, which correspond to one of the four cases described in the theorem. The parameters provided by Proposition \ref{inverse-strichartz} are renamed as follows: \[ \lambda_n^{J+1} := N_n^{-1} \quad \text{and} \quad t_n^{J+1} := -N_n^2 t_n. \] The profile $\tilde{\phi}^{J+1}$ is defined as a weak limit: \begin{align*} \tilde{\phi}^{J+1} = w\lim_{n \to \infty}(G_n^{J+1})^{-1}\left[e^{-it_n^{J+1}(\lambda_n^{J+1})^2\Delta_\Omega}W_n^J\right] = w\lim_{n \to \infty} e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}\left[\left(G_n^{J+1}\right)^{-1}W_n^J\right], \end{align*} where $G_n^{J+1}$ is defined in the theorem. In Cases 2, 3, and 4, we set $\phi^{J+1} := \tilde{\phi}^{J+1}$. For Case 1, we define: \[ \phi^{J+1}(x) := G_\infty^{J+1}\tilde{\phi}^{J+1}(x) := \left(\lambda_\infty^{J+1}\right)^{s_c-\frac{3}{2}} \tilde{\phi}^{J+1}\left(\frac{x - x_\infty^{J+1}}{\lambda_\infty^{J+1}}\right). \] Finally, $\phi_n^{J+1}$ is constructed as stated in the theorem. For Case 1, $\phi_n^{J+1}$ can be expressed as \[ \phi_n^{J+1} = e^{it_n^{J+1}(\lambda_n^{J+1})^2\Delta_{\Omega}}\tilde{\phi}^{J+1} = G_\infty^{J+1}e^{it_n^{J+1}\Delta_{\Omega_{\infty}^{J+1}}}\tilde{\phi}^{J+1}, \] where $\Omega_\infty^{J+1} := \left(\lambda_\infty^{J+1}\right)^{-1}\left(\Omega - \left\{x_\infty^{J+1}\right\}\right)$. In all four cases, we observe that \begin{align}\label{weakly-con-profile} \lim\limits_{n \to \infty} \left\| e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}\left(G_n^{J+1}\right)^{-1}\phi_n^{J+1} - \tilde{\phi}^{J+1} \right\|_{\dot{H}^{s_c}(\mathbb{R}^3)} = 0; \end{align} see also \eqref{eq5.13v65} and \eqref{eq5.19v65} for Cases 2 and 3. Next, define $W_n^{J+1} := W_n^J - \phi_n^{J+1}$. By \eqref{weakly-con-profile} and the construction of $\tilde{\phi}^{J+1}$ in each case, we have \[ e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}\left(G_n^{J+1}\right)^{-1}W_n^{J+1} \rightharpoonup 0 \quad \text{in } \dot{H}^{s_c}(\mathbb{R}^3) \quad \text{as } n \to \infty, \] which establishes \eqref{profile-3} at the level $J+1$. Moreover, from \eqref{inverse-2}, we deduce \[ \lim\limits_{n \to \infty} \left(\left\|W_n^J\right\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \left\|\phi_n^{J+1}\right\|_{\dot{H}_D^{s_c}(\Omega)}^2 - \left\|W_n^{J+1}\right\|_{\dot{H}_D^{s_c}(\Omega)}^2\right) = 0. \] This, combined with the inductive hypothesis, verifies \eqref{profile-2} at the level $J+1$. From Proposition \ref{inverse-strichartz}, passing to a further subsequence, we obtain \begin{align}\label{eq5.31v65} \begin{split} A_{J+1}^2 = \lim\limits_{n \to \infty}\left\|W_n^{J+1} \right\|_{\dot{H}_D^{s_c}(\Omega)}^2\leqslant A_J^2 \left(1-C\left(\frac{\epsilon_J}{A_J}\right)^\frac{15 }{s_c(2s_c+2)} \right) \le A_J^2, \\ \epsilon_{J+1}^{q_0}=\lim\limits_{n \to\infty} \left\|e^{it\Delta_\Omega}W_n^{J+1}\right\|_{L_{t,x}^{q_0}( \R\times\Omega)}^{q_0} \le \epsilon_J^{\frac{10}{3-2s_c}} \left( 1-C\left( \frac{\epsilon_J}{A_J} \right)^\frac{75}{s_c(2s_c+2)(3-2s_c)}\right). \end{split} \end{align} If $\epsilon_{J+1} = 0$, we terminate the process and set $J^* = J+1$; in this case, \eqref{profile-1} holds automatically. If $\epsilon_{J+1} > 0$, we proceed with the induction. 
Should the process continue indefinitely, we set $J^* = \infty$. In this scenario, \eqref{eq5.31v65} ensures that $\epsilon_J \xrightarrow{J \to \infty} 0$, which establishes (\ref{profile-1}). Next, we confirm the asymptotic orthogonality condition \eqref{profile-4} by contradiction. Suppose \eqref{profile-4} does not hold for some pair $(j, k)$. Without loss of generality, assume $j < k$ and that \eqref{profile-4} is valid for all pairs $(j, l)$ where $j < l < k$. Passing to a subsequence, we let \begin{equation} \frac{\lambda_n^j}{ \lambda_n^k} \to \lambda_0 \in (0, \infty), \quad \frac{x_n^j - x_n^k}{ \sqrt{\lambda_n^j \lambda_n^k} } \to x_0, \quad\text{and}\quad \frac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{\lambda_n^j\lambda_n^k}\to t_0\qtq{as}n\to\infty.\label{condition-profile} \end{equation} From the inductive relation \begin{align*} W_n^{k-1}= W_n^j-\sum\limits_{l = j+1}^{k - 1} \phi_n^l \end{align*} and the definition of $\tilde{\phi}^k$, we obtain \begin{align*} \tilde{\phi}^k&=w\lim_{n\to\infty} e^{-it_n^k\Delta_{\Omega_{n}^{k}}}\left[\left(G_n^k \right)^{-1} W_n^{k-1}\right]\\&= w\lim_{n\to\infty}e^{-it_n^k\Delta_{\Omega_{n}^{k}}}\left[\left(G_n^k \right)^{-1} W_n^{j}\right] - \sum\limits_{l = j+1}^{k-1} w\lim_{n\to\infty} e^{-it_n^k\Delta_{\Omega_{n}^{k}}}\left[\left(G_n^k \right)^{-1} \phi_n^l\right]\\&=:A_1+A_2. \end{align*} Next, we claim that the weak limits in $A_1$ and $A_2$ are zero, which would be a contradiction to $\tilde{\phi}^k\neq0$. Rewriting $A_1$ as follows: \begin{align*} e^{-it_n^k\Delta_{\Omega_n^k}}\left[\left(G_n^k\right)^{-1}W_n^j\right] &=e^{-it_n^k\Delta_{\Omega_n^k}}\left(G_n^k\right)^{-1}G_n^je^{it_n^j\Delta_{\Omega_n^j}}\left[e^{-it_n^j\Delta_{\Omega_n^j}}\left(G_n^j\right)^{-1}W_n^j\right]\\ &=\left(G_n^k\right)^{-1}G_n^je^{i\big(t_n^j-t_n^k\tfrac{(\lambda_n^k)^2}{(\lambda_n^j)^2}\big)\Delta_{{\Omega_n^j}}}\left[e^{-it_n^j\Delta_{\Omega_n^j}}\left(G_n^j\right)^{-1}W_n^j\right]. \end{align*} Note that by \eqref{condition-profile}, we have \begin{align} t_n^j - t_n^k \frac{(\lambda_n^k)^2}{(\lambda_n^j)^2} = \frac{t_n^j (\lambda_n^j)^2 - t_n^k (\lambda_n^k)^2}{\lambda_n^j \lambda_n^k} \cdot \frac{\lambda_n^k}{\lambda_n^j} \to \frac{t_0}{\lambda_0}.\label{E11131} \end{align} Using this, along with (\ref{profile-3}), Lemma \ref{L:compact}, and the fact that the adjoints of the unitary operators $(G_n^k)^{-1}G_n^{j}$ converge strongly, we deduce that $A_1 = 0.$ To complete the proof of \eqref{profile-4}, it remains to verify that $A_2 = 0$. For all $j < l < k$, we express \begin{align*} e^{-it_n^k{\Delta_{\Omega_n^k}}}\left[\left(G_n^k\right)^{-1}\phi_n^l\right] = \left(G_n^k\right)^{-1}G_n^j e^{i\big(t_n^j - t_n^k \tfrac{(\lambda_n^k)^2}{(\lambda_n^j)^2}\big)\Delta_{\Omega_n^j}}\left[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}\phi_n^l\right]. \end{align*} By (\ref{E11131}) and Lemma \ref{L:compact}, it suffices to show \begin{align*} e^{-it_n^j\Delta_{\Omega_n^j}}\left[\left(G_n^j\right)^{-1}\phi_n^l\right] \rightharpoonup 0 \quad \text{weakly in } \dot{H}^{s_c}(\mathbb{R}^3). \end{align*} By density, this reduces to proving the following: for all $\phi \in C_0^\infty \left( \widetilde{\lim} \, \Omega_n^l \right)$, \begin{align}\label{eq5.35v65} I_n : = e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}G_n^le^{it_n^l\Delta_{\Omega_n^l}}\phi\rightharpoonup 0 \qtq{weakly in} \dot H^{s_c}(\R^3)\qtq{as}n\to\infty. 
\end{align} Depending on which cases $j$ and $l$ fall into, we can rewrite $I_n$ as follows: \begin{itemize} \item Case (a): If both $j$ and $l$ conform to Case 1, 2, or 3, then \begin{align*} I_n=\bigg(\frac{\lambda_n^j}{\lambda_n^l}\bigg)^{\frac{3}{2}-s_c}\bigg[e^{i\big(t_n^l-t_n^j\big(\frac{\lambda_n^j} {\lambda_n^l}\big)^2\big)\Delta_{\Omega_n^l}}\phi\bigg]\bigg(\frac{\lambda_n^j x+x_n^j- x_n^l}{\lambda_n^l}\bigg). \end{align*} \end{itemize} \begin{itemize} \item Case (b): If $j$ conforms to Case 1, 2, or 3 and $l$ to Case 4, then \begin{align*} I_n=\bigg(\frac{\lambda_n^j}{\lambda_n^l}\bigg)^{\frac{3}{2}-s_c}\bigg[e^{i\big(t_n^l-t_n^j\big(\frac{\lambda_n^j} {\lambda_n^l}\big)^2\big) \Delta_{\Omega_n^l}}\phi\bigg]\bigg(\frac{(R_n^l)^{-1}(\lambda_n^j x+x_n^j-(x_n^l)^*)}{\lambda_n^l}\bigg). \end{align*} \end{itemize} \begin{itemize} \item Case (c): If $j$ conforms to Case 4 and $l$ to Case 1, 2, or 3, then \begin{align*} I_n=\bigg(\frac{\lambda_n^j}{\lambda_n^l}\bigg)^{\frac{3}{2}-s_c}\bigg[e^{i\big(t_n^l-t_n^j\big(\frac{\lambda_n^j} {\lambda_n^l}\big)^2\big) \Delta_{\Omega_n^l}}\phi\bigg]\bigg(\frac{R_n^j\lambda_n^j x+(x_n^j)^*-x_n^l}{\lambda_n^l}\bigg). \end{align*} \end{itemize} \begin{itemize} \item Case (d): If both $j$ and $l$ conform to Case 4, then \begin{align*} I_n=\bigg(\frac{\lambda_n^j}{\lambda_n^l}\bigg)^{\frac{3}{2}-s_c}\bigg[e^{i\big(t_n^l-t_n^j\big(\frac{\lambda_n^j} {\lambda_n^l}\big)^2\big) \Delta_{\Omega_n^l}}\phi\bigg]\bigg(\frac{(R_n^l)^{-1}(R_n^j\lambda_n^j x+(x_n^j)^*-(x_n^l)^*)}{\lambda_n^l}\bigg). \end{align*} \end{itemize} We first prove \eqref{eq5.35v65} in the case where the scaling parameters are not comparable, i.e., \begin{align}\label{A2} \lim\limits_{n \to \infty} \left( \frac{\lambda_n^j}{\lambda_n^l} + \frac{\lambda_n^l}{\lambda_n^j} \right) = \infty. \end{align} In this scenario, we handle all four cases simultaneously. Using the Cauchy-Schwarz inequality and \eqref{A2}, for any $\psi \in C_c^\infty(\mathbb{R}^3)$, we have \begin{align*} \left| \langle I_n, \psi \rangle_{\dot{H}^{s_c}(\mathbb{R}^3)} \right| &\lesssim \min \left( \|(-\Delta)^{s_c} I_n \|_{L^2(\mathbb{R}^3)} \|\psi \|_{L^2(\mathbb{R}^3)}, \|I_n \|_{L^2(\mathbb{R}^3)} \|(-\Delta)^{s_c} \psi \|_{L^2(\mathbb{R}^3)} \right) \\ &\lesssim \min \left( \left(\frac{\lambda_n^j}{\lambda_n^l}\right)^{s_c} \|(-\Delta)^{s_c} \phi \|_{L^2(\mathbb{R}^3)} \|\psi \|_{L^2(\mathbb{R}^3)}, \left(\frac{\lambda_n^l}{\lambda_n^j}\right)^{s_c} \|\phi \|_{L^2(\mathbb{R}^3)} \|(-\Delta)^{s_c} \psi \|_{L^2(\mathbb{R}^3)} \right), \end{align*} which tends to zero as $n \to \infty$. Therefore, in this case, $A_2 = 0$, leading to the desired contradiction. Now, we may assume \begin{align*} \lim_{n \to \infty} \frac{\lambda_n^j}{\lambda_n^l} = \lambda_0 \in (0, \infty). \end{align*} Proceeding as in the previous case, we further assume that the time parameters diverge, i.e., \begin{align}\label{A3} \lim_{n \to \infty} \frac{|t_n^j (\lambda_n^j)^2 - t_n^l (\lambda_n^l)^2|}{\lambda_n^j \lambda_n^l} = \infty. \end{align} Under this assumption, we have \begin{align*} \left| t_n^l - t_n^j \frac{(\lambda_n^j)^2}{(\lambda_n^l)^2} \right| = \frac{|t_n^l (\lambda_n^l)^2 - t_n^j (\lambda_n^j)^2|}{\lambda_n^j \lambda_n^l} \cdot \frac{\lambda_n^j}{\lambda_n^l} \to \infty \end{align*} as $n \to \infty$. First, we address Case (a). 
By \eqref{A3} and Lemma \ref{weak-convergence}, we obtain \begin{align*} \lambda_0^{\frac{3}{2}-s_c}\left(e^{i\big(t_n^l - t_n^j\big(\frac{\lambda_n^j}{\lambda_n^l}\big)^2\big)\Delta_{\Omega_n^l}}\phi\right)(\lambda_0 x + (\lambda_n^l)^{-1}(x_n^j - x_n^l)) \rightharpoonup 0 \quad \text{weakly in } \dot{H}^{s_c}(\mathbb{R}^3), \end{align*} which implies \eqref{eq5.35v65}. For Cases (b), (c), and (d), the proof proceeds similarly since $\operatorname{SO}(3)$ is a compact group. Indeed, by passing to a subsequence, we may assume that $R_n^j \to R_0$ and $R_n^l \to R_1$, placing us in a situation analogous to Case (a). Finally, consider the case where \begin{equation} \frac{\lambda_n^j}{\lambda_n^l} \to \lambda_0, \quad \frac{t_n^l(\lambda_n^l)^2 - t_n^j(\lambda_n^j)^2}{\lambda_n^j\lambda_n^l} \to t_0, \quad \text{but} \quad \frac{|x_n^j - x_n^l|^2}{\lambda_n^j\lambda_n^l} \to \infty. \end{equation} In this case, we also have $t_n^l - t_n^j \frac{(\lambda_n^j)^2}{(\lambda_n^l)^2} \to \lambda_0 t_0$. Thus, for Case (a), it suffices to show that \begin{equation} \lambda_0^{\frac{3}{2}-s_c} e^{it_0 \lambda_0 \Delta_{\Omega_n^l}}\phi(\lambda_0 x + y_n) \rightharpoonup 0 \quad \text{weakly in } \dot{H}^{s_c}(\mathbb{R}^3), \label{E1181} \end{equation} where \begin{align*} y_n := \frac{x_n^j - x_n^l}{\lambda_n^l} = \frac{x_n^j - x_n^l}{(\lambda_n^l\lambda_n^j)^{\frac{1}{2}}} \cdot \sqrt{\frac{\lambda_n^j}{\lambda_n^l}} \to \infty \quad \text{as } n \to \infty. \end{align*} The desired weak convergence \eqref{E1181} follows from Lemma \ref{weak-convergence}. In Case (b), since $\operatorname{SO}(3)$ is compact, the argument is similar if we can establish \begin{equation} \frac{|x_n^j - (x_n^l)^*|}{\lambda_n^l} \to \infty \quad \text{as } n \to \infty. \label{E1182} \end{equation} In fact, this follows from the triangle inequality: \begin{align*} \frac{|x_n^j - (x_n^l)^*|}{\lambda_n^l} \geq \frac{|x_n^j - x_n^l|}{\lambda_n^l} - \frac{|x_n^l - (x_n^l)^*|}{\lambda_n^l} \geq \frac{|x_n^j - x_n^l|}{\lambda_n^l} - 2d_\infty^l \to \infty. \end{align*} Case (c) is symmetric to Case (b), so the result for Case (c) follows immediately. Now, we handle case (d). For sufficiently large $n$, we have \begin{align*} \frac{|(x_n^j)^*-(x_n^l)^*|}{\lambda_n^l}&\geq\frac{|x_n^j-x_n^l|}{\lambda_n^l}-\frac{|x_n^j-(x_n^j)^*|}{\lambda_n^l}-\frac{|x_n^l-(x_n^l)^*|}{\lambda_n^l}\\ &\geq\frac{|x_n^j-x_n^l|}{\sqrt{\lambda_n^l\lambda_n^j}}\cdot\sqrt{\frac{\lambda_n^j}{\lambda_n^l}}-\frac{d(x_n^j)\lambda_n^j}{\lambda_n^j\lambda_n^l}-\frac{d(x_n^l)}{\lambda_n^l} \notag\\ &\ge \frac{1}{2}\sqrt{\lambda_0}\frac{|x_n^j-x_n^l|}{\sqrt{\lambda_n^l\lambda_n^j}}-2\lambda_0d_\infty ^j-2d_\infty ^l \rightarrow\infty \quad\text{as }\quad n\rightarrow\infty .\notag \end{align*} The desired weak convergence follows again from Lemma \ref{weak-convergence}. \end{proof} \section{Embedding of nonlinear profiles}\label{S4} In Section \ref{S5}, we will utilize the linear profile decomposition established in the previous section to prove Theorem \ref{TReduction}. The key challenge lies in deriving a Palais-Smale condition for minimizing sequences of blowup solutions to (\ref{NLS}). This task primarily involves proving a nonlinear profile decomposition for solutions to NLS$_\Omega$. A critical aspect of this process is addressing the scenario where the nonlinear profiles correspond to solutions of the $\dot H^{s_c}$-critical equation in \emph{distinct} limiting geometries. 
To handle this, we embed these nonlinear profiles, associated with different limiting geometries, back into $\Omega$, following the approach in \cite{KillipVisanZhang2016a}. As nonlinear solutions in the limiting geometries possess global spacetime bounds, we infer that the solutions to NLS$_\Omega$ corresponding to Cases 2, 3, and 4 in Theorem \ref{linear-profile} inherit these spacetime bounds. These solutions to NLS$_{\Omega}$ will reappear as nonlinear profiles in Proposition \ref{Pps}. This section presents three theorems: Theorems \ref{Tembbedding1}, \ref{Tembedding2}, and \ref{Embed3}, which correspond to Cases 2, 3, and 4 of Theorem \ref{linear-profile}, respectively. As in the previous section, we denote by $\Theta:\R^3\to[0,1]$ the smooth function such that \begin{align*} \Theta(x)=\begin{cases} 0,&|x|\leq\frac{1}{4},\\ 1,&|x|\geq\frac{1}{2}. \end{cases} \end{align*} Our first result in this section considers the scenario in which the rescaled obstacles $\Omega_n^{c}$ shrink to a point (i.e., Case 2 in Theorem \ref{linear-profile}). \begin{theorem}[Embedding nonlinear profiles for shrinking obstacles]\label{Tembbedding1} Let $\{\lambda_n\}\subset2^{\Bbb Z}$ be such that $\lambda_n\to\infty$. Let $\{t_n\}\subset\R$ be such that $t_n\equiv0$ or $|t_n|\to\infty$. Suppose that $\{x_n\}\subset\Omega$ satisfies $-\lambda_n^{-1}x_n\to x_\infty\in\R^3$. Let $\phi\in\dot{H}^{s_c}(\R^3)$ and \begin{align*} \phi_n(x):=\lambda_n^{s_c-\frac{3}{2}}e^{it_n\lambda_n^2\Delta_\Omega}\left[(\chi_n\phi)\left(\frac{x-x_n}{\lambda_n}\right)\right], \end{align*} where $\chi_n(x)=\chi(\lambda_n x+x_n)$ with $\chi (x)=\Theta (\frac{d(x)}{\text{diam }\Omega^c})$. Then for $n$ sufficiently large, there exists a global solution $v_n$ to (\ref{NLS}) with initial data $v_n(0)=\phi_n$ such that \begin{align*} \|v_n\|_{L_{t,x}^{\frac{5\alpha }{2}}(\R\times\Omega)}\lesssim1, \end{align*} with the implicit constant depending only on $\|\phi\|_{\dot{H}^{s_c}}$. Moreover, for any $\varepsilon>0$, there exist $N_\varepsilon\in\N$ and $\psi_\varepsilon\in C_0^\infty(\R\times\R^3)$ such that for all $n\ge N_\varepsilon $ \begin{align} \big\|(-\Delta _\Omega)^{\frac{s_c}{2}}[v_n(t-\lambda_n^2t_n,x+x_n)-\lambda_n^{s_c-\frac{3}{2}}\psi_\varepsilon(\lambda_n^{-2}t,\lambda_n^{-1}x)]\big\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\R\times\R^3)}<\varepsilon.\label{approximate-1} \end{align} \end{theorem} \begin{proof} Our proof follows the idea of \cite[Theorem 6.1]{KillipVisanZhang2016a}. In the first step, we construct the global solution to the $\dot{H}^{s_c}$-critical NLS in the limiting geometry of $\Omega_n$. \textbf{Step 1}: Constructing the global solution to NLS$_{\mathbb{R} ^3}$. Let $\theta=\frac{1}{100(\alpha +1)}$. The construction of the global solution on $\R^3$ depends on the choice of the time parameter $t_n$. If $t_n\equiv0$, let $w_n$ and $w_\infty$ be the solutions to NLS$_{\mathbb{R} ^3}$ with initial data $w_n(0)=\phi_{\le\lambda_n^\theta}$ and $w_\infty(0)=\phi$. Otherwise, if $t_n\to\pm\infty$, let $w_n$ be the solution to NLS$_{\mathbb{R} ^3}$ such that \begin{align*} \big\|w_n(t)-e^{it\Delta}\phi_{\le\lambda_n^\theta}\big\|_{\dot{H}^{s_c}(\R^3)}\to0,\qtq{as} t\to\pm\infty.
\end{align*} Similarly, we denote by $w_\infty$ the solution to NLS$_{\mathbb{R} ^3}$ such that \begin{equation} \big\|w_\infty(t)-e^{it\Delta}\phi\big\|_{\dot{H}^{s_c}(\R^3)}\to0,\qtq{as}t\to\pm\infty.\label{E11101} \end{equation} By \cite{Murphy2014} and the assumption made in Theorem \ref{T1}, both $w_n(t)$ and $w_\infty(t)$ are global solutions and satisfy \begin{equation} \|w_n\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}+\|w_\infty\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}\lesssim1.\label{E11102} \end{equation} Moreover, by the perturbation theory in \cite{Murphy2014}, \begin{align} \lim_{n\to\infty}\big\|w_n(t)-w_\infty(t)\big\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}=0.\label{perturb} \end{align} From the Bernstein inequality, we have \begin{align*} \|\phi_{\le \lambda_n^\theta}\|_{\dot{H}^s(\R^3)}\lesssim\lambda_n^{\theta(s-s_c)},\qtq{for any }s\geqslant s_c. \end{align*} The persistence of regularity yields that \begin{align*} \big\||\nabla|^{s}w_n\big\|_{\dot S^{s_c}(\R\times\R^3)}\lesssim\lambda_n^{\theta s} \qtq{for any}s\geqslant0, \end{align*} which together with the Gagliardo-Nirenberg inequality \[ \|f\|_{L^\infty(\R^3)}\lesssim \|f\|_{\dot{H}^{s_c}(\R^3)}^\frac{1}{2}\|f\|_{\dot{H}^{3-s_c}(\R^3)}^\frac{1}{2} \] implies that \begin{align}\label{key-1} \big\||\nabla|^{s}w_n\big\|_{L_{t,x}^\infty(\R\times\R^3)}\lesssim\lambda_n^{\theta(s+\frac{3}{2}-s_c)},\quad\text{for all} \quad s\ge0. \end{align} Finally, using the structure of the NLS$_{\R^3}$, we have \begin{align}\label{key-2} \|\partial_tw_n\|_{L_{t,x}^\infty(\R\times\R^3)}\lesssim\|\Delta w_n\|_{L_{t,x}^\infty(\R\times\R^3)}+\|w_n\|_{L_{t,x}^\infty(\R\times\R^3)}^{\alpha+1}\lesssim\lambda_n^{\theta(\frac{7}{2}-s_c)}. \end{align} \textbf{Step 2}. Constructing the approximate solution to (\ref{NLS}). As discussed in Case 2 of Proposition \ref{inverse-strichartz}, we let $\Omega_n=\lambda_n^{-1}(\Omega-\{x_n\})$. One may want to embed $w_n(t)$ into $\Omega_n$ by taking $\tilde{v}_n(t)=\chi_nw_n(t)$ directly. However, this does not yield a good approximate solution to (\ref{NLS}). Instead, we take \begin{align*} z_n(t):=i\int_{0}^{t}e^{i(t-\tau)\Delta_{\Omega_{n}}}(\Delta_{\Omega_{n}}\chi_n)w_n(\tau,-\lambda_n^{-1}x_n)d\tau. \end{align*} This allows us to control the reflected waves near the boundary. Moreover, we have the following properties. \begin{lemma}\label{zn} For all $T>0$, we have \begin{gather}\label{embed-lem-1} \limsup_{n\to\infty}\|(-\Delta _\Omega)^{\frac{s_c}{2}}z_n\|_{L_{t}^{\frac{5\alpha }{2} } L_{x}^{\frac{30\alpha }{15\alpha -8}}([-T,T]\times\Omega_{n})}=0,\\ \big\|(-\Delta_{\Omega_{n}})^\frac{s}{2}z_n\big\|_{L_{t}^\infty L_{x}^2([-T,T]\times\Omega_{n})}\lesssim\lambda_n^{s-\frac{3}{2}+\theta(\frac{7}{2}-s_c)}(T+\lambda_n^{-2\theta})\qtq{for all}0\le s<\frac{3}{2}.\label{embed-lem-2} \end{gather} \end{lemma} \begin{proof} Integrating by parts, we write \begin{align*} z_n(t)&=-\int_{0}^{t}\big(e^{it\Delta_{\Omega_{n}}}\partial_\tau e^{-i\tau\Delta_{\Omega_{n}}}\chi_n\big)w_n(\tau,-\lambda_n^{-1}x_n)d\tau\\ &=-\chi_nw_n(t,-\lambda_n^{-1}x_n)+e^{it\Delta_{\Omega_{n}}}\big(\chi_nw_n(0,-\lambda_n^{-1}x_n)\big)\\ &\hspace{3ex}+\int_{0}^{t}\big(e^{i(t-\tau)\Delta_{\Omega_{n}}}\chi_n\big)\partial_\tau w_n(\tau,-\lambda_n^{-1}x_n)d\tau.
\end{align*} By the Strichartz estimate, the equivalence of Sobolev norms, \eqref{key-1} and \eqref{key-2}, we have \begin{align*} &\big\|(-\Delta_{\Omega_{n}})^\frac{s}{2}z_n\big\|_{L_t^\infty L_x^2([-T,T]\times\Omega_{n})}\notag\\ &\lesssim\big\|(-\Delta)^\frac{s}{2}\chi_nw_n(t,-\lambda_n^{-1}x_n)\big\|_{L_t^\infty L_x^2([-T,T]\times\Omega_{n})} +\big\|(-\Delta_{\Omega_{n}})^\frac{s}{2}\chi_nw_n(0,-\lambda_n^{-1}x_n)\big\|_{L_x^2(\Omega_{n})}\\ &\hspace{3ex}+\big\|(-\Delta)^\frac{s}{2}\chi_n\partial_tw_n(t,-\lambda_n^{-1}x_n)\big\|_{L_t^1L_x^2([-T,T]\times\Omega_{n})}\\ &\lesssim\lambda_n^{s-\frac{3}{2}+\theta (\frac{3}{2}-s_c)}+T\lambda_n^{s-\frac32+\theta( \frac{7}{2}-s_c)}. \end{align*} This proves \eqref{embed-lem-2}. By a similar argument, we can prove (\ref{embed-lem-1}). This completes the proof of Lemma \ref{zn}. \end{proof} We are now prepared to construct the approximate solution \begin{align*} \tilde{v}_n(t,x) := \begin{cases} \lambda_n^{s_c-\frac{3}{2}}(\chi_n w_n + z_n)(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n)), & |t| \leqslant \lambda_n^2 T, \\ e^{i(t-\lambda_n^2T)\Delta_{\Omega}} \tilde{v}_n(\lambda_n^2T,x), & t > \lambda_n^2 T, \\ e^{i(t+\lambda_n^2T)\Delta_{\Omega}} \tilde{v}_n(-\lambda_n^2T,x), & t < -\lambda_n^2 T, \end{cases} \end{align*} where $T > 0$ is a parameter to be determined later. We first observe that $\tilde{v}_n$ has a finite scattering norm. Indeed, this follows from Lemma \ref{zn}, the Strichartz estimate, and a change of variables: \begin{align} \|\tilde{v}_n\|_{L_{t,x}^{\frac{5\alpha }{2}}(\R\times\Omega)}&\lesssim\big\|\chi_nw_n+z_n\big\|_{L_{t,x}^{\frac{5\alpha }{2}}([-T,T]\times \Omega_{n})}+\|(\chi_nw_n+z_n)(\pm T)\|_{\dot{H}_D^{s_c}(\Omega_{n})}\notag\\ &\lesssim\|w_n\|_{L_{t,x}^{\frac{5\alpha }{2}}(\R\times\R^3)}+\|z_n\|_{L_{t,x}^{\frac{5\alpha }{2}}([-T,T]\times \Omega_{n})}+\|\chi_n\|_{L_x^\infty(\Omega_{n})}\big\||\nabla|^{s_c}w_n\big\|_{L_t^\infty L_x^2(\R\times\R^3)}\notag\\ &\hspace{3ex} +\big\||\nabla|^{s_c}\chi_n\big\|_{L^{\frac{3}{s_c}}}\|w_n\|_{L_t^\infty L_x^{\frac{6}{3-2s_c}}(\R\times\R^3)}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}{2}z_n\big\|_{L_t^\infty L_x^2([-T,T]\times \Omega_{n})}\notag\\ &\lesssim 1+ \|z_n\|_{L_{t,x}^{\frac{5\alpha }{2}}([-T,T]\times \Omega_{n})}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}{2}z_n\big\|_{L_t^\infty L_x^2([-T,T]\times \Omega_{n})}<+\infty . \label{step-2} \end{align} \textbf{Step 3.} {Asymptotic agreement of the initial data.} In this step, we aim to show that \begin{align}\label{step-3} \lim_{T\to\infty} \limsup_{n\to\infty} \big\|e^{it\Delta_{\Omega}}\big(\tilde{v}_n(\lambda_n^2t_n) - \phi_n\big)\big\|_{L_t^{\frac{5\alpha}{2}}\dot{H}_D^{s_c,\frac{30\alpha}{15\alpha-8}}(\mathbb{R}\times\Omega)} = 0. \end{align} We first consider the case when $t_n \equiv 0$. Using H\"older's inequality, the Strichartz estimate, and a change of variables, we obtain \begin{align*} &\big\|e^{it\Delta_{\Omega}}\big(\tilde{v}_n(0) - \phi_n\big)\big\|_{L_t^{\frac{5\alpha}{2}}\dot{H}_D^{s_c,\frac{30\alpha}{15\alpha-8}}(\mathbb{R}\times\Omega)} \lesssim \|\tilde{v}_n(0) - \phi_n\|_{\dot{H}_D^{s_c}(\Omega)} \lesssim \|\chi_n \phi_{\le \lambda_n^\theta} - \chi_n \phi\|_{\dot{H}_D^{s_c}(\Omega)} \\ &\quad \lesssim \big\||\nabla|^{s_c}\chi_n\big\|_{L_x^{\frac{3}{s_c}}(\Omega)} \|\phi_{\le \lambda_n^\theta} - \phi\|_{L_x^{\frac{6}{3-2s_c}}(\Omega)} + \|\chi_n\|_{L_x^\infty(\Omega)} \big\||\nabla|^{s_c}(\phi_{\le \lambda_n^\theta} - \phi)\big\|_{L_x^2(\Omega)} \to 0, \quad \text{as } n \to \infty.
\end{align*} Next, we address the case when $|t_n| \to \infty$. By symmetry, it suffices to consider $t_n \to +\infty$, as the case $t_n \to -\infty$ can be treated analogously. Since $T > 0$ is fixed, for sufficiently large $n$, we have $t_n > T$, which implies \begin{align*} \tilde{v}_n(\lambda_n^2t_n, x) &= e^{i(t_n - T)\lambda_n^2\Delta_{\Omega}} \tilde{v}_n(\lambda_n^2T, x) \\ &= e^{i(t_n - T)\lambda_n^2\Delta_{\Omega}} \left[\lambda_n^{s_c - \frac{3}{2}} (\chi_n w_n + z_n)\big(T, \frac{x - x_n}{\lambda_n}\big)\right]. \end{align*} Applying a change of variables, H\"older's inequality, and the Strichartz estimate, we obtain \begin{align*} & \big\|(-\Delta_\Omega)^\frac{s_c}{2}e^{it\Delta_{\Omega}}\left[\tilde{v}_n(\lambda_n^2t_n)-\phi_n\right]\big\|_{L_{t}^{\frac{5\alpha}{2}}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega)}\\ &= \big\|(-\Delta_{\Omega_n})^\frac{s_c}{2}\left[e^{i(t-T)\Delta_{\Omega_{n}}}(\chi_nw_n+z_n)(T)-e^{it\Delta_{\Omega_{n}}}(\chi_n\phi)\right]\big\|_{L_{t}^{\frac{5\alpha}{2}}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega_n)}\\ &\lesssim\big\|(-\Delta_{\Omega_n})^\frac{s_c}2z_n(T)\big\|_{L^2_x}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2\big(\chi_n(w_n-w_\infty)(T)\big)\big\|_{L_x^2}\\ &\hspace{2ex}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2[e^{i(t-T)\Delta_{\Omega_{n}}}(\chi_nw_\infty)(T)-e^{it\Delta_{\Omega_{n}}}(\chi_n\phi)]\big\|_{L_{t}^{\frac{5\alpha}{2}}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega_n)}. \end{align*} Using \eqref{perturb} and \eqref{embed-lem-2}, we have \begin{align*} &\big\|(-\Delta_{\Omega_n})^\frac{s_c}2z_n(T)\big\|_{L_x^2}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2\big(\chi_n(w_n-w_\infty)(T)\big)\big\|_{L_x^2}\\ &\lesssim\lambda_n^{s_c-\frac{3}{2}+\theta(\frac{7}{2}-s_c)}(T+\lambda_n^{-2\theta})+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2)\chi_n\big\|_{L_x^\frac{3}{s_c}}\|w_n-w_\infty\|_{L_t^\infty L_x^{\frac{6}{3-2s_c}}}\\ &\hspace{3ex}+\|\chi_n\|_{L^\infty}\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2(w_n-w_\infty)\|_{L_t^\infty L_x^2}\to0\qtq{as}n\to\infty. \end{align*} Thus, we are left to verify that \begin{align*} \lim_{T\to\infty}\limsup_{n\to\infty}\big\|(-\Delta_{\Omega_{n}})^{\frac{s_c}2}\left[e^{i(t-T)\Delta_{\Omega_{n}}}(\chi_nw_\infty)(T)-e^{it\Delta_{\Omega_{n}}}(\chi_n\phi)\right]\big\|_{L_t^\frac{5\alpha}{2}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega_{n})}=0. \end{align*} By the triangle inequality and the Strichartz estimate, \begin{align*} &\hspace{3ex} \big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2e^{i(t-T)\Delta_{\Omega_{n}}}\big(\chi_nw_\infty(T)\big)-e^{it\Delta_{\Omega_{n}}}(\chi_n\phi)\big\|_{L_t^\frac{5\alpha}{2}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times \Omega_n)}\\ &\lesssim\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2\big(\chi_nw_\infty(T)\big)-\chi_n(-\Delta)^\frac{s_c}{2}w_\infty(T)\big\|_{L_x^2}\\ &\hspace{3ex}+\big\|[e^{i(t-T)\Delta_{\Omega_{n}}}-e^{i(t-T)\Delta}][\chi_n(-\Delta)^\frac{s_c}2w_\infty(T)]\big\|_{L_t^\frac{5\alpha}{2}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega_{n})}\\ &\hspace{3ex}+\big\|e^{-iT\Delta}[\chi_n(-\Delta)^\frac{s_c}{2}w_\infty(T)]-\chi_n(-\Delta)^\frac{s_c}{2}\phi\big\|_{L_x^2}\\ &\hspace{3ex}+\big\| [e^{it\Delta _{\Omega_n}}-e^{it\Delta }][\chi_n(-\Delta)^\frac{s_c}{2}\phi]\big\|_{L_t^{\frac{5\alpha}{2}}L_x^{\frac{30\alpha}{15\alpha-8}}(\R\times\Omega_{n})}\\ &\hspace{3ex}+\big\|(-\Delta_{\Omega_{n}})^\frac{s_c}2(\chi_n\phi)-\chi_n(-\Delta)^\frac{s_c}{2}\phi\big\|_{L_x^2}\\ &\stackrel{\triangle}{=}I_1+I_2+I_3+I_4+I_5. 
\end{align*} The fact that $I_2$ and $I_4$ converge to zero as $n \to \infty$ follows directly from Theorem \ref{convergence-flow} and the density in $L^2_x$ of $C_c^\infty$ functions supported in $\mathbb{R}^3$ minus a point. Next, we estimate $I_1$, $I_3$, and $I_5$. Using the triangle inequality, Proposition \ref{P1}, and the monotone convergence theorem, for any $f \in \dot{H}^{s_c}(\mathbb{R}^3)$, we obtain \begin{align} &\hspace{2ex} \big\|\big(-\Delta_{\Omega_{n}}\big)^\frac{s_c}{2}(\chi_n f) - \chi_n (-\Delta)^\frac{s_c}{2} f \big\|_{L^2_x} \notag \\ &\lesssim \big\|(1 - \chi_n)(-\Delta)^\frac{s_c}{2}f\big\|_{L^2_x} + \big\|(-\Delta)^\frac{s_c}{2}\big((1 - \chi_n)f\big)\big\|_{L^2_x} \notag \\ &\hspace{3ex} + \big\|(-\Delta_{\Omega_{n}})^\frac{s_c}{2}(\chi_n f) - (-\Delta)^\frac{s_c}{2}(\chi_n f)\big\|_{L^2_x} \to 0 \quad \text{as } n \to \infty. \notag \end{align} This completes the proof for $I_5$, and thus for $I_1$ as well. Finally, for the term $I_3$, we apply (\ref{E11101}) along with the monotone convergence theorem to find \begin{align*} I_3 &\lesssim \big\|(1 - \chi_n)(-\Delta)^\frac{s_c}{2}w_\infty(T)\big\|_{L^2_x} + \big\|(1 - \chi_n)(-\Delta)^\frac{s_c}{2}\phi\big\|_{L^2_x} \\ &\hspace{3ex} + \big\|e^{-iT\Delta}(-\Delta)^\frac{s_c}{2}w_\infty(T) - (-\Delta)^\frac{s_c}{2}\phi\big\|_{L^2_x} \to 0, \end{align*} first taking $n \to \infty$, and then $T \to \infty$. \textbf{Step 4}. We demonstrate that $\tilde{v}_n$ serves as an approximate solution to \eqref{NLS} in the sense that \begin{align*} i\partial_t\tilde{v}_n + \Delta_{\Omega}\tilde{v}_n = |\tilde{v}_n|^{\alpha}\tilde{v}_n + e_n, \end{align*} where $e_n$ satisfies the smallness condition \begin{equation} \lim_{T \to \infty} \limsup_{n \to \infty} \big\|e_n\big\|_{\dot{N}^{s_c}(\mathbb{R} \times \Omega)} = 0. \label{E1110x1} \end{equation} First, consider the case of a large time scale $t > \lambda_n^2 T$. By symmetry, the case $t < -\lambda_n^2 T$ can be handled similarly. Using the equivalence of Sobolev spaces, Strichartz estimates, and H\"older's inequality, we obtain \begin{align*} &\big\|(-\Delta _\Omega)^{\frac{s_c}{2}}e_n\big\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{27\alpha -8}}(\{t>\lambda_n^2 T\}\times\Omega)}\lesssim\big\|(-\Delta_{\Omega})^\frac{s_c}{2}(|\tilde{v}_n|^{\alpha}\tilde{v}_n)\big\|_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{27\alpha -8}}(\{t>\lambda_n^2 T\}\times\Omega)}\\ &\lesssim\big\|(-\Delta_{\Omega})^\frac{s_c}{2}\tilde{v}_n\big\|_{L_t^{\frac{5\alpha }{2}}L_x^{ \frac{30\alpha }{15\alpha -8}}(\{t>\lambda_n^2T\}\times\Omega)}\|\tilde{v}_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\{t>\lambda_n^2T\}\times\Omega)}^\alpha\\ &\lesssim\big\|(-\Delta_{\Omega})^\frac{s_c}{2}[\chi_nw_n(T)+z_n(T)]\big\|_{L_x^2}\|\tilde{v}_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\{t>\lambda_n^2T\}\times\Omega)}^\alpha\\ &\lesssim\big(1+\lambda_n^{s_c-\frac{3}{2}+\theta(\frac{7}{2}-s_c)}(T+\lambda_n^{-2\theta})\big)\|\tilde{v}_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\{t>\lambda_n^2T\}\times\Omega)}^\alpha. \end{align*} Therefore, to establish (\ref{E1110x1}), it suffices to prove that \begin{align}\label{convergence-6.1} \lim_{T\to\infty}\limsup_{n\to\infty}\big\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde{v}_n(\lambda_n^2T)\big\|_{L_{t,x}^\frac{5\alpha}{2}(\{t>\lambda_n^2T\}\times\Omega)}=0. \end{align} We now prove (\ref{convergence-6.1}). By the spacetime bounds (\ref{E11102}), the global solution $w_\infty $ scatters.
Let $w_+$ denote the forward asymptotic state, that is, \begin{align}\label{scattering} \big\|w_\infty(t)-e^{it\Delta}w_+\big\|_{\dot{H}^{s_c}(\R^3)}\to0,\qtq{as}t\to+\infty. \end{align} It then follows from the Strichartz estimate, H\"older's inequality, and a change of variables that \begin{align*} & \big\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde{v}_n(\lambda_n^2T)\big\|_{L_{t,x}^\frac{5\alpha}{2}(\{t>\lambda_n^2T\}\times\Omega)} \lesssim\big\|e^{it\Delta_{\Omega_n}}(\chi_nw_n(T)+z_n(T))\big\|_{L_{t,x}^\frac{5\alpha}{2}([0,\infty)\times\Omega_n)}\\ &\lesssim \big\|(-\Delta_{\Omega_n})^{\frac{s_c}2}z_n(T)\big\|_{L_x^2}+\big\|(-\Delta_{\Omega_n})^{\frac{s_c}2}[\chi_n(w_n(T)-w_\infty(T))]\big\|_{L_x^2}\\ &\quad+\big\|(-\Delta_{\Omega_n})^{\frac{s_c}2}[\chi_n(w_{\infty}(T)-e^{iT\Delta}w_+)]\big\|_{L_x^2}+\big\|e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\big\|_{L_{t,x}^{\frac{5\alpha}{2}}([0,\infty)\times\Omega_n)}\\ &\lesssim \lambda_n^{s_c-\frac{3}2+\theta(\frac72-s_c)}(T+\lambda_n^{-2\theta})+\big\|w_n(T)-w_\infty(T)\big\|_{\dot H^{s_c}}+\big\|w_\infty(T)-e^{iT\Delta}w_+\big\|_{\dot H^{s_c}}\\ &\quad+\big\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_ne^{iT\Delta}w_+]\big\|_{L_{t,x}^{\frac{5\alpha}{2}}([0,\infty)\times\R^3)} +\big\|(-\Delta)^{\frac{s_c}2} [(1-\chi_n)e^{iT\Delta}w_+]\big\|_{L_x^2}\\ &\quad+\big\|e^{it\Delta}w_+\big\|_{L_{t,x}^{\frac{5\alpha}{2}}((T,\infty)\times\R^3)}, \end{align*} which converges to zero, by first letting $n\rightarrow\infty$ and then $T\to\infty$, in view of (\ref{embed-lem-2}), \eqref{scattering}, Theorem \ref{convergence-flow}, and the monotone convergence theorem. Now, we consider the intermediate time scale $|t|\leq \lambda_n^2T$. For these times, a direct computation gives \begin{align*} e_n(t,x)&=[(i\partial_t+\Delta_\Omega )\tilde v_n- |\tilde v_n|^\alpha\tilde v_n](t,x)\\ &=-\lambda_n^{s_c-\frac72}[\Delta\chi_n](\lambda_n^{-1}(x-x_n))w_n(\lambda_n^{-2}t,-\lambda_n^{-1}x_n)+\lambda_n^{s_c-\frac72}[\Delta\chi_n w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\\ &\quad+2\lambda_n^{s_c-\frac72}(\nabla\chi_n\cdot\nabla w_n)(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n))\\ &\quad+\lambda_n^{s_c-\frac72}[\chi_n|w_n|^\alpha w_n-|\chi_nw_n+z_n|^\alpha(\chi_nw_n+z_n)](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n)). \end{align*} By a change of variables and the equivalence of Sobolev norms (Theorem \ref{TEquivalence}), we obtain \begin{align*} \big\|e_n\big\|_{\dot N^{s_c}(\R\times\Omega)} &\lesssim\big\|(-\Delta)^\frac{s_c}2[\Delta\chi_n(w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n))]\big\|_{L_t^{2}L_x^{\frac{6}{5}}([-T,T]\times\Omega_{n})}\\ &\hspace{3ex}+\big\|(-\Delta)^\frac{s_c}{2}\big(\nabla\chi_n\cdot\nabla w_n\big)\big\|_{L_t^{2}L_x^{\frac{6}{5}}([-T,T]\times\Omega_{n})}\\ &\hspace{3ex}+\big\|(-\Delta)^\frac{s_c}{2}\big[(\chi_n-\chi_n^{\alpha+1})|w_n|^{\alpha}w_n\big]\big\|_{L_t^{2}L_x^{\frac{6}{5}}([-T,T]\times\Omega_{n})}\\ &\hspace{3ex}+ \big\|(-\Delta )^{\frac{s_c}{2}} [|\chi_n w_n+z_n|^{\alpha }(\chi_n w_n +z_n)-|\chi_n w_n|^{\alpha }\chi_n w_n]\big\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}([-T,T]\times \Omega_n)} \notag\\ &\stackrel{\triangle}{=}J_1+J_2+J_3+J_4.
\end{align*} Using H\"older, the fundamental theorem of calculus, and \eqref{key-1}, we estimate \begin{align*} J_1&\lesssim T^{\frac{1}{2}}\big\|(-\Delta)^\frac{s_c}{2}(w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n))\big\|_{L_{t,x}^\infty}\|\Delta \chi_n\|_{L^\frac{6}{5}}\\ &\hspace{3ex}+T^\frac{1}{2}\|w_n-w_n(t,-\lambda_n^{-1}x_n)\|_{L_{t,x}^\infty(\mathbb{R} \times \text{supp}\Delta \chi_n)}\big\|(-\Delta)^{\frac{s_c}{2}}(\Delta\chi_n)\big\|_{L_x^\frac{6}{5}}\\ &\lesssim T^{\frac{1}{2}}\lambda_n^{-\frac{1}{2}+\frac{3}{2}\theta }+T^{\frac{1}{2}}\lambda_n^{-1+\theta (\frac{5}{2}-s_c)}\lambda_n^{s_c-\frac{1}{2}}\rightarrow0\quad\text{as}\quad n\rightarrow\infty . \end{align*} By a similar argument, we can show that $J_2\rightarrow0$ as $n\rightarrow\infty $ and we omit the details. Next, we turn our attention to $J_3$. By Lemma \ref{LFractional product rule}, H\"older's inequality and (\ref{key-1}), we have \begin{align*} J_3&\lesssim\big\||\nabla|^{s_c}\chi_n\big\|_{L_x^{\frac{6}{5}}}\|w_n\|_{L_t^\infty L_x^{\infty }}^{\alpha+1} +\big\|\chi_n-\chi_n^{\alpha+1}\big\|_{L_x^{\frac{6}{5}}}\|w_n\|_{L_t^\infty L_x^{\infty}}^\alpha\big\||\nabla|^{s_c}w_n\big\|_{L_t^\infty L_x^{\infty}}\\ &\lesssim\lambda_n^ {s_c-\frac{5}{2}+\theta (\alpha +1)(\frac{3}{2}-s_c)}+\lambda_n^{-\frac{5}{2}+\theta \alpha (\frac{3}{2}-s_c)+\frac{3}{2}\theta }\rightarrow0\quad\text{as} \quad n\rightarrow\infty .\notag \end{align*} Finally, we consider $J_4$. By Lemma \ref{Lnonlinearestimate}, \begin{align} J_4&\lesssim \left(\|\chi_n w_n\|^{\alpha -1}_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}([-T,T]\times \Omega_n)}+ \|z_n\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}([-T,T]\times \Omega_n)}^{\alpha -1} \right)\notag\\ &\qquad\times \left(\||\nabla |^{s_c}(\chi_n w_n)\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}([-T,T]\times \Omega_n) }+ \||\nabla |^{s_c}z_n\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}([-T,T]\times \Omega_n) }\right)^2.\label{E1110x2} \end{align} Using the fractional product rule and (\ref{E11102}), we have \begin{align} &\||\nabla |^{s_c}(\chi_n w_n)\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}([-T,T]\times \Omega_n) } \lesssim \||\nabla |^{s_c}\chi_n\|_{L_x^{\frac{30\alpha }{15\alpha -8}}} \|w_n\|_{L^\infty _tL^\infty _x}+ \|\chi_n\|_{L_x^{\frac{30\alpha }{15\alpha -8}}} \||\nabla |^{s_c}w_n\| _{L^\infty _tL^\infty _x}\notag\\ &\lesssim T^{\frac{2}{5\alpha }}\lambda_n^{s_c-\frac{15\alpha -8}{30\alpha }\times 3+\theta (\frac{3}{2}-s_c)}+T^{\frac{2}{5\alpha }}\lambda_n^{-\frac{15\alpha -8}{30\alpha }\times 3+\frac{3}{2}\theta }= T^{\frac{2}{5\alpha }}\lambda_n^{\frac{3(2s_c-3)}{10}+\theta (\frac{3}{2}-s_c)}+T^{\frac{2}{5\alpha }}\lambda_n^{-\frac{3}{2}+\frac{4}{5\alpha }+\frac{3}{2}\theta },\notag \end{align} which converges to $0$ as $n\rightarrow\infty $. This together with (\ref{E11102}), Lemma \ref{zn} and (\ref{E1110x2}) gives $J_4\rightarrow0$ as $n\rightarrow\infty $. This completes the proof of (\ref{E1110x1}). \textbf{Step 5.} Constructing $v_n$ and approximation by $C_c^{\infty }$ functions. By (\ref{step-2}), \eqref{step-3}, and applying the stability Theorem \ref{TStability}, we conclude that for sufficiently large $n$ and $T$, there exists a global solution $v_n$ to \eqref{NLS} with initial data $v_n(0) = \phi_n$. 
Moreover, this solution has a finite scattering norm and satisfies \begin{align}\label{approximate-2} \lim_{T \to \infty} \limsup_{n \to \infty} \big\|v_n(t - \lambda_n^2 t_n) - \tilde{v}_n(t)\big\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{5\alpha}{2}}(\mathbb{R} \times \Omega)} = 0. \end{align} Thus, to prove Theorem \ref{Tembbedding1}, it suffices to establish the approximation \eqref{approximate-1}. This result follows from a standard argument; see, for example, \cite{KillipVisan2013,KillipVisanZhang2016a}. Here, we provide only a brief outline of the proof. First, by a density argument, we select $\psi_\varepsilon \in C_0^\infty(\mathbb{R} \times \mathbb{R}^3)$ such that \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}}(w_\infty - \psi_\varepsilon)\|_{L_t^{\frac{5\alpha}{2}} L_x^{\frac{30\alpha}{15\alpha - 8}}(\mathbb{R} \times \mathbb{R}^3)} < \varepsilon. \label{E1110w1} \end{equation} Then, employing a change of variables and the triangle inequality, we derive \begin{align} &\hspace{3ex} \big\|(-\Delta _\Omega)^{\frac{s_c}{2}}[v_n(t - \lambda_n^2 t_n, x + x_n) - \lambda_n^{s_c - \frac{3}{2}} \psi_\varepsilon(\lambda_n^{-2}t, \lambda_n^{-1}x)]\big\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\mathbb{R} \times \mathbb{R}^3)} \notag\\ &\lesssim \big\|(-\Delta _\Omega)^{\frac{s_c}{2}}(w_\infty - \psi_\varepsilon)\big\|_{\dot{X}^{s_c}(\mathbb{R} \times \mathbb{R}^3)} + \big\|v_n(t - \lambda_n^2 t_n) - \tilde{v}_n(t)\big\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\mathbb{R} \times \mathbb{R}^3)} \label{E11132}\\ &\hspace{3ex} + \big\|(-\Delta _\Omega)^{\frac{s_c}{2}}[\tilde{v}_n(t, x) - \lambda_n^{s_c - \frac{3}{2}} w_\infty(\lambda_n^{-2}t, \lambda_n^{-1}(x - x_n))]\big\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\mathbb{R} \times \mathbb{R}^3)}. \label{E11133} \end{align} Clearly, by \eqref{approximate-2} and (\ref{E1110w1}), we have $(\ref{E11132}) \lesssim \varepsilon$. For (\ref{E11133}), note that by (\ref{perturb}), for sufficiently large $n$, $w_n$ approximates $w_\infty$ and $\chi_n(x) \rightarrow 1$. As $\widetilde{v}_n$ is constructed through $w_n$, $\chi_n$, and $z_n$,, we can use Lemma \ref{zn}, the triangle inequality, the Strichartz estimate, and Theorem \ref{convergence-flow} to show that for sufficiently large $n$, (\ref{E11133}) is also small, which yields (\ref{approximate-1}). \end{proof} Next, we concerns the scenario when the rescaled obstacles $\Omega_n^c$ (where $\Omega_n = \lambda_n^{- 1} \left( \Omega - \left\{ x_n \right\} \right)$) are retreating to infinity, which corresponds to Case 3 of Theorem \ref{linear-profile}. \begin{theorem}[Embedding of nonlinear profiles for retreating obstacles]\label{Tembedding2} Let $\{t_n\}\subset\R$ be such that $t_n\equiv0$ or $|t_n|\to+\infty$. Let $\{x_n\}\subset\Omega$ and $\{\lambda_n\}\subset2^{\Bbb Z}$ satisfy that $\frac{d(x_n)}{\lambda_n}\to\infty$. Suppose that $\phi\in\dot{H}^{s_c}(\R^3)$ and \begin{align*} \phi_n(x)=\lambda_n^{s_c-\frac{3}{2}}e^{i\lambda_n^2t_n\DeltaO}\left[(\chi_n\phi)\left(\frac{x-x_n}{\lambda_n}\right)\right] \end{align*} with $\cn(x)=1-\Theta(\lambda_n|x|/d(x_n))$. 
Then for sufficiently large $n$, there exists a global solution $v_n$ to \eqref{NLS} with initial data $v_n(0)=\pn$, which satisfies \begin{equation} \|v_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\R\times\Omega)}\lesssim_{\|\phi\|_{\Hsc}}1.\label{E11145} \end{equation} Furthermore, for every $\varepsilon>0$, there exist $N_\varepsilon>0$ and $\psie\in C_0^\infty(\R\times\R^3)$ such that for $n\geq N_\varepsilon$, we get \begin{align}\label{Embed-2} \norm (-\Delta _\Omega)^{\frac{s_c}{2}}[v_n(t-\lamn^2t_n,x+x_n)-\lamn^{s_c-\frac{3}{2}}\psie(\lamn^{-2}t,\lamn^{-1}x)]\norm_{ L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\R\times\R^3)}<\varepsilon. \end{align} \end{theorem} \begin{proof} As in the proof of Theorem \ref{Tembbedding1}, we divide the proof of Theorem \ref{Tembedding2} into five steps. For simplicity, we write $-\Delta$ for $-\Delta_{\R^3}$. \textbf{Step 1}. Constructing the global solution to NLS$_{\mathbb{R}^3}$. Let $\theta = \frac{1}{100(\alpha + 1)}$. Following the proof of Theorem \ref{Tembbedding1}, if $t_n \equiv 0$, we define $w_n$ and $w_\infty$ as solutions to NLS$_{\mathbb{R}^3}$ with initial data $w_n(0) = P_{\leq d(x_n)^{\theta} \lambda_n^{-\theta}} \phi$ and $w_\infty(0) = \phi$. If $t_n \to \pm \infty$, we let $w_n$ and $w_\infty$ be solutions to NLS$_{\mathbb{R}^3}$ such that \begin{equation} \begin{cases} \|w_n(t) - e^{it\Delta} P_{\leq d(x_n)^{\theta} \lambda_n^{-\theta}} \phi\|_{\dot{H}^{s_c}(\mathbb{R}^3)} \to 0,\\ \|w_\infty(t) - e^{it\Delta} \phi\|_{\dot{H}^{s_c}(\mathbb{R}^3)} \to 0. \end{cases}\notag \end{equation} By the assumptions in Theorem \ref{T1}, we deduce that $w_n$ and $w_\infty$ are global solutions with uniformly bounded Strichartz norms. Moreover, using arguments similar to those in the proof of Theorem \ref{Tembbedding1} and invoking Theorem \ref{TStability}, we establish that $w_n$ and $w_\infty$ satisfy the following properties: \begin{equation} \begin{cases} \|w_n\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}+\|w_\infty\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}\lesssim1,\\ \||\nabla |^{s_c}(w_n-w_\infty)\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\R\times\R^3)}\to0\qtq{as}n\to\infty,\\ \norm|\nabla|^{s}w_n\norm_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\R^3)}\lesssim\(\frac{d(x_n)}{\lamn}\)^{\theta s},\qtq{for all }s\geq0. \end{cases}\label{E11141} \end{equation} \textbf{Step 2.} Constructing the approximate solution to \eqref{NLS}. Fix $T>0$ to be chosen later. We define \begin{align*} \tilde{v}_n(t,x)\stackrel{\triangle}{=}\begin{cases} \lamn^{s_c-\frac{3}{2}}\big(\cn w_n\big)(\lamn^{-2}t,\lamn^{-1}(x-x_n)), & |t|\leq\lamn^2T,\\ e^{i(t-\lamn^2T)\DeltaO}\tilde{v}_n(\lamn^2T,x), &t>\lamn^2T,\\ e^{i(t+\lamn^2T)\DeltaO}\tilde{v}_n(-\lamn^2T,x), &t<-\lamn^2T. \end{cases} \end{align*} Similar to (\ref{step-2}), using (\ref{E11141}), it is easy to see that $\tilde{v}_n$ has a finite scattering norm. \textbf{Step 3.} Agreement of the initial data: \begin{align}\label{step-3-embed2} \lim_{T\to\infty}\limsup_{n\to\infty}\norm e^{it\DeltaO}\big(\tilde{v}_n(\lambda_n^2 t_n)-\pn\big)\norm_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\R\times\Omega)}=0. \end{align} By the same argument as in Step 3 of the proof of Theorem \ref{Tembbedding1}, we can prove (\ref{step-3-embed2}) in both cases $t_n \equiv 0$ and $|t_n| \rightarrow \infty$, by applying a change of variables, the Strichartz estimate, and (\ref{E11141}).
\textbf{Step 4.} Proving that $\tilde{v}_n$ is an approximate solution to \eqref{NLS} in the sense that \begin{align}\label{step4-embed2} \lim_{T\to\infty}\limsup_{n\to\infty}\norm (i\partial_t+\DeltaO)\tilde{v}_n-|\tilde{v}_n|^\alpha\tilde{v}_n\norm_{\dot N^{s_c}(\R\times\Omega)}=0. \end{align} Similarly to \eqref{convergence-6.1}, it suffices to prove \begin{align}\label{convergence-6.2} \lim_{T\to\infty}\limsup_{n\to\infty}\norm e^{i(t-\lamn^2T)\DeltaO}\vn(\lamn^2 T)\norm_{L_{t,x}^{\frac{5\alpha}{2}}(\{t>\lamn^2T\}\times\Omega)}=0. \end{align} Let $w_+$ be the asymptotic state of $w_\infty$. Then by Strichartz estimates and a change of variables, we get \begin{align*} &\hspace{3ex}\norm e^{i(t-\lamn^2T)\DeltaO}\vn(\lamn^2T)\norm_{L_{t,x}^\frac{5\alpha}{2}(\{t>\lamn^2T\}\times\Omega)} =\norm e^{it\DeltaOn}(\cn w_n(T))\norm_{L_{t,x}^{\frac{5\alpha}{2}}((0,\infty)\times\Omega_n)}\\ &\lesssim\norm e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\norm_{L_{t,x}^{\frac{5\alpha}{2}}((0,\infty)\times\Omega_n)}+\norm\cn[w_\infty(T)-e^{iT\Delta}w_+]\norm_{\dot H^{s_c}(\R^3)} +\norm \cn[w_\infty (T)-w_n(T)]\norm_{\Hsc(\R^3)}\\ &\lesssim\norm\big(e^{it\Delta_{\Omega_n}}-e^{it\Delta}\big)[\cn e^{iT\Delta}w_+]\norm_{L_{t,x}^{\frac{5\alpha}{2}}((0,\infty)\times\R^3)}+\norm(1-\cn)e^{iT\Delta}w_+\norm_{\Hsc(\R^3)}\\ &\quad +\norm e^{it\Delta}w_+\norm_{L_{t,x}^{\frac{5\alpha}{2}}((T,\infty)\times\R^3)}+\|w_\infty(T) -e^{iT\Delta}w_+\|_{\Hsc(\R^3)}+\|w_\infty(T)-w_n(T)\|_{\Hsc(\R^3)}, \end{align*} which converges to zero by first letting $n\to\infty$ and then $T\to\infty $ in view of Theorem \ref{convergence-flow}, \eqref{E11141} and the monotone convergence theorem. Finally, we consider the intermediate time scale $|t|\leq \lamn^2T$. We compute \begin{align*} [(i\partial_t+\Delta_\Omega )\tilde v_n- |\tilde v_n|^\alpha\tilde v_n](t,x) &=\lambda_n^{s_c-\frac72}[\Delta\chi_n w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\\ &\quad+2\lambda_n^{s_c-\frac72}(\nabla\chi_n\cdot\nabla w_n)(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n))\\ &\quad+\lambda_n^{s_c-\frac72}[(\chi_n-\chi_n^{\alpha+1})|w_n|^\alpha w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n)). \end{align*} Note that the derivatives of the cut-off function $\chi_n$ are supported in the region $|x|\sim\frac{d(x_n)}{\lamn}$ and that $\frac{d(x_n)}{\lamn}\to\infty$ as $n\to\infty$. Therefore, we can modify the proof of Step 4 of Theorem \ref{Tembbedding1} with minor changes to obtain (\ref{step4-embed2}). \textbf{Step 5.} Constructing $v_n$ and approximation by $C_c^{\infty }$ functions. By \eqref{step-3-embed2}, \eqref{step4-embed2} and invoking the stability Theorem \ref{TStability}, for sufficiently large $n$ we obtain a global solution $v_n$ to \eqref{NLS} with initial data $v_n(0)=\pn$. Moreover, it satisfies \begin{equation} \|v_n\|_{L_{t,x}^\frac{5\alpha}{2}(\R\times\Omega)}\lesssim1,\quad\text{and}\quad \lim_{T\to\infty}\limsup_{n\to\infty}\norm v_n(t-\lamn^2t_n)-\vn(t)\norm_{\dot H_D^{s_c}(\Omega)}=0.\notag \end{equation} Finally, by the same argument as that used to derive (\ref{approximate-1}), we obtain the convergence \eqref{Embed-2}; we omit the details. This completes the proof of Theorem \ref{Tembedding2}. \end{proof} Finally, we treat the case in which the obstacle expands to fill a half-space, i.e., Case 4 in Theorem \ref{linear-profile}. \begin{theorem}[Embedding the nonlinear profiles: the half-space case]\label{Embed3} Let $\{t_n\}\subset\R$ be such that either $t_n\equiv0$ or $|t_n|\to\infty$.
Let $\{\lamn\}\subset2^{\Bbb Z}$ and $\{x_n\}\subset\Omega$ be such that \begin{align*} \lamn\to0,\qtq{and}\frac{d(x_n)}{\lamn}\to d_\infty>0. \end{align*} Let $x_n^*\in \partial \Omega$ be such that $|x_n-x_n^*|=d(x_n)$ and $R_n\in \operatorname{SO}(3)$ be such that $R_ne_3=\frac{x_n-x_n^*}{|x_n-x_n^*|}$. Finally, let $\phi\in\dot{H}_D^{s_c}(\mathbb{H})$ and define \begin{align*} \pn(x)=\lamn^{s_c-\frac{3}{2}}e^{i\lamn^2t_n\DeltaO}\phi\(\frac{R_n^{-1}(x-x_n^*)}{\lamn}\). \end{align*} Then for $n$ sufficiently large, there exists a global solution $v_n$ to \eqref{NLS} with initial data $v_n(0)=\pn$, which also satisfies \begin{align*} \|v_n\|_{L_{t,x}^\frac{5\alpha}{2}(\RO)}\lesssim1. \end{align*} Furthermore, for every $\varepsilon>0$, there exist $N_\varepsilon\in\N$ and $\psie\in C_0^\infty(\R\times\mathbb{H})$ so that for every $n\geq N_\varepsilon$, we have \begin{align}\label{approximate-embed3} \norm (-\Delta _\Omega)^{\frac{s_c}{2}}[v_n(t-\lamn^2t_n,R_nx+x_n^*)-\lamn^{s_c-\frac{3}{2}}\psie(\lamn^{-2}t,\lamn^{-1}x)]\norm_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}(\RRT)}<\varepsilon. \end{align} \end{theorem} \begin{proof} Again, we divide the proof of this theorem into five main steps. \textbf{Step 1}. Construction of the global solution to NLS$_{\mathbb{H}}$. Let $\theta \ll 1$. When $t_n \equiv 0$, define $U_n$ and $U_\infty$ as solutions to NLS$_{\mathbb{H}}$ with initial data $U_n(0) = \phi_{\leq \lambda_n^{-\theta}}$ and $U_\infty(0) = \phi$. If $|t_n| \to +\infty$, we set $U_n$ and $U_\infty$ to be solutions to NLS$_{\mathbb{H}}$ satisfying \begin{equation} \|U_n(t) - e^{it\Delta_{\mathbb{H}}} \phi_{\leq \lambda_n^{-\theta}}\|_{\dot{H}_D^{s_c}(\mathbb{H})} \to 0 \quad \text{and} \quad \|U_\infty(t) - e^{it\Delta_{\mathbb{H}}} \phi\|_{\dot{H}_D^{s_c}(\mathbb{H})} \to 0, \quad \text{as} \quad t \to \pm\infty. \label{m12} \end{equation} In all cases, the assumption in Theorem \ref{T1} ensures that \begin{align*} \|U_n\|_{L_t^{\frac{5\alpha}{2}}L_x^{\frac{5\alpha}{2}}(\mathbb{R} \times \mathbb{H})} + \|U_\infty\|_{L_t^{\frac{5\alpha}{2}}L_x^{\frac{5\alpha}{2}}(\mathbb{R} \times \mathbb{H})} \lesssim 1. \end{align*} Moreover, the solution to NLS$_{\mathbb{H}}$ can be extended to a solution of NLS$_{\mathbb{R}^3}$ by reflecting across the boundary $\partial\mathbb{H}$. Using similar arguments as in the proofs of the previous embedding theorems, along with the stability theorem and persistence of regularity, we obtain \begin{equation} \begin{cases} \lim_{n\to\infty}\|U_n-U_\infty\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{5\alpha }{2}}(\R\times\mathbb{H})}=0,\\ \norm(-\Delta_{\mathbb{H}})^\frac{s}{2}U_n\norm_{L_t^\infty L_x^2(\R\times\mathbb{H})}\lesssim\lamn^{\theta(s-1)}. \end{cases}\label{difference-half} \end{equation} \textbf{Step 2}. Construction of the approximate solution to \eqref{NLS}. Let $\Omega_n := \lambda_n^{-1} R_n^{-1} (\Omega - \{x_n^*\})$, and let $T > 0$ be chosen later. On the intermediate time scale $|t| < \lambda_n^2 T$, we transplant $U_n$ from $\mathbb{H}$ to $\Omega_n$ by employing a boundary-straightening diffeomorphism $\Psi_n$ of size $L_n := \lambda_n^{-2\theta}$ in a neighborhood of zero in $\Omega_n$. To achieve this, we define a smooth function $\psi_n$ on the set $|x^\perp| \leq L_n$ such that $x^\perp \mapsto (x^\perp, -\psi_n(x^\perp))$ parametrizes $\partial\Omega_n$. Here, we write $x \in \mathbb{R}^3$ as $x = (x^\perp, x_3)$. By our choice of $R_n$, the unit normal to $\partial\Omega_n$ at zero is $e_3$.
Moreover, the curvatures of $\partial\Omega_n$ are $O(\lambda_n)$. Thus, $\psi_n$ satisfies the following properties: \begin{align}\label{psin} \begin{cases} \psi_n(0) = 0, \quad \nabla\psi_n(0) = 0, \quad |\nabla\psi_n(x^\perp)| \lesssim \lambda_n^{1-2\theta}, \\ |\partial^{\alpha}\psi_n(x^\perp)| \lesssim \lambda_n^{|\alpha| - 1} \quad \text{for all } |\alpha| \geq 2. \end{cases} \end{align} We then define the map $\Psi_n: \Omega_n \cap \{|x^\perp| \leq L_n\} \to \mathbb{H}$ and a cutoff $\chi_n: \mathbb{R}^3 \to [0,1]$ as follows: \begin{align*} \Psi_n(x) := (x^\perp, x_3 + \psi_n(x^\perp)) \quad \text{and} \quad \chi_n(x) := 1 - \Theta\bigl(\tfrac{x}{L_n}\bigr). \end{align*} On the domain of $\Psi_n$, which contains $\operatorname{supp} \chi_n$, we have: \begin{align}\label{detpsin} |\det(\partial \Psi_n)| \sim 1 \quad \text{and} \quad |\partial\Psi_n| \lesssim 1. \end{align} Now, we are in position to define the approximate solution. Let $\tilde U_n:=\chi_nU_n$ and define \begin{align*} \tilde v_n(t,x):=\begin{cases} \lamn^{s_c-\frac32}[\tilde U_n(\lamn^{-2}t)\circ\Psi_n](\lambda_n^{-1}R_n^{-1}(x-x_n^*)), &|t|\le \lamn^2 T, \\ e^{i(t-\lamn^2 T)\Delta_\Omega}\vn(\lambda_n^2 T,x), &t>\lamn^2 T,\\ e^{i(t+\lamn^2 T)\Delta_\Omega}\vn(-\lambda_n^2T,x), &t<-\lamn^2 T . \end{cases} \end{align*} We first prove that $\tilde v_n$ has finite scattering size. Indeed, by the Strichartz inequality, a change of variables, and \eqref{detpsin}, \begin{align}\label{tildevn4} \|\tilde v_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\R\times\Omega)} &\lesssim \|\widetilde{U}_n\circ\Psi_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\R\times\On)}+\|\tilde U_n(\pm T)\circ\Psi_n\|_{\dot H_D^{s_c}(\On)}\notag\\ &\lesssim \|\tilde U_n\|_{L_{t,x}^{\frac{5\alpha}{2}}(\R\times\mathbb{H})} + \|\tilde U_n(\pm T)\|_{\dot H^{s_c}_D(\mathbb{H})}\lesssim 1. \end{align} \textbf{Step 3}. Asymptotic agreement with the initial data: \begin{align}\label{step3-embed3} \lim_{T\to\infty}\limsup_{n\to \infty}\|(-\Delta_\Omega)^{\frac{s_c}2}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{\isca(\R\times\Omega)}=0. \end{align} First, we consider the case that $t_n\equiv0$. By Strichartz and a change of variables, \begin{align*} &\hspace{3ex}\norm (-\DeltaO)^{\frac {s_c}2} e^{it\Delta_\Omega}(\vn(0)-\phi_n)\norm_{\isca(\R\times\Omega)} \lesssim \norm(\chi_n\phi_{\le \lambda_n^{-\theta}})\circ\Psi_n-\phi\|_{\dot H^{s_c}_D(\On)}\\ &\lesssim \norm(-\Delta)^\frac{s_c}{2}\big((\chi_n\phi_{>\lambda_n^{-\theta}})\circ\Psi_n\big)\|_{L^2_x}+\|(-\Delta)^\frac{s_c}{2}[(\chi_n\phi)\circ\Psi_n-\chi_n\phi]\norm_{L^2_x}+\norm(-\Delta)^\frac{s_c}{2}\big((1-\chi_n)\phi\big)\norm_{L^2_x}. \end{align*} As $\lambda_n \to 0$, we have $\| \phi_{>\lambda_n^{-\theta}} \|_{\dot{H}^{s_c}} \to 0$ as $n \to \infty$. Thus, using \eqref{detpsin}, the first term converges to $0$. For the second term, since $\Psi_n(x) \to x$ in $C^1$, approximating $\phi$ by functions in $C_0^\infty(\mathbb{H})$, we conclude that the second term also converges to $0$. Finally, the last term approaches $0$ by the dominated convergence theorem and the fact that $L_n = \lambda_n^{-2\theta} \to \infty$. It remains to prove \eqref{step3-embed3} when $t_n \to +\infty$. The case $t_n \to -\infty$ can be handled similarly. Since $T > 0$ is fixed, for sufficiently large $n$, we have $t_n > T$, so that \begin{align*} \tilde{v}_n(\lambda_n^2 t_n, x) &= e^{i(t_n - T)\lambda_n^2\Delta_\Omega}[\lambda_n^{s_c - \frac{3}{2}}(\tilde{U}_n(T) \circ \Psi_n)(\lambda_n^{-1}R_n^{-1}(x - x_n^*))]. 
\end{align*} A change of variables then yields that \begin{align} &\hspace{3ex}\norm(-\Delta_\Omega)^{\frac{s_c}2} e^{it\DeltaO}(\vn(\lamn^2 t_n)-\phi_n)\norm_{\isca(\R\times\Omega)}\notag\\ &\lesssim \norm(-\Delta_{\On})^{\frac {s_c}2}(\tilde U_n(T)\circ\Psi_n-U_\infty(T))\norm_{L^2_x}\label{nn13}\\ &\quad+\norm(-\Delta_{\Omega_n})^{\frac {s_c}2}\big(e^{i(t-T)\Delta_{\Omega_n}}U_\infty(T)-e^{it\Delta_{\Omega_n}}\phi\big)\|_{\isca(\R\times\Omega_n)}.\label{nn12} \end{align} By the triangle inequality, \begin{align} \eqref{nn13} &\lesssim\norm(-\Delta_{\Omega_n})^{\frac{s_c}2}\big((\chi_nU_\infty(T))\circ\Psi_n-U_\infty(T)\big)\|_{L^2_x} +\norm(-\Delta_{\Omega_n})^{\frac {s_c}2}\big((\chi_n(U_n(T)-U_\infty(T)))\circ\Psi_n\big)\|_{L^2_x},\notag \end{align} which converges to zero as $n\to \infty$ by the fact that $\Psi_n(x)\to x$ in $C^1$ and (\ref{difference-half}). For the second term, by the Strichartz estimate, Proposition \ref{P1}, Theorem~\ref{convergence-flow}, and \eqref{m12}, we see that \begin{align*} \eqref{nn12} &\lesssim \norm e^{i(t-T)\Delta_{\Omega_n}}(-\Delta_{\mathbb{H}})^{\frac{s_c}2}U_\infty(T)-e^{it\Delta_{\Omega_n}}(-\Delta_{\mathbb{H}})^{\frac{s_c}2}\phi\norm_{\isca(\R\times\Omega_n)}\\ &\quad +\norm\big((-\Delta_{\Omega_n})^{\frac {s_c}2}-(-\Delta_{\mathbb{H}})^{\frac{s_c}2}\big)U_\infty(T)\|_{L^2_x}+\norm\big((-\Delta_{\Omega_n})^{\frac {s_c}2}-(-\Delta_{\mathbb{H}})^{\frac {s_c}2}\big)\phi\|_{L^2_x}\\ &\lesssim\norm\big(e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta_{\mathbb{H}}}\big)(-\Delta_{\mathbb{H}})^{\frac {s_c}2}U_\infty(T)\|_{\isca(\R\times\Omega_n)}\\ &\quad+\norm\big(e^{it\Delta_{\Omega_n}}-e^{it\Delta_{\mathbb{H}}}\big)(-\Delta_{\mathbb{H}})^ {\frac{s_c}2}\phi\|_{\isca(\R\times\Omega_n)}\\ &\quad+\norm e^{-iT\Delta_{\mathbb{H}}}U_\infty(T)-\phi\|_{\dot H^{s_c}_D(\mathbb{H})}+o(1), \end{align*} and that this converges to zero by first taking $n\to \infty$ and then $T\to \infty$. \textbf{Step 4}. Proving that $\vn$ is approximate solution to \eqref{NLS} in the following sense \begin{align} \label{nn14} \lim_{T\to\infty}\limsup_{n\to\infty}\norm(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^\alpha\tilde v_n\norm_{\dot N^{s_c}(\R\times\Omega)}=0. \end{align} We first control the contribution of $|t|\ge \lambda_n^2T$. By the same argument as that used in step 4 of Theorem \ref{Tembbedding1}, this reduces to proving \begin{align}\label{nn15} \lim_{T\to\infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2 T)\|_{\scaa(\{t>\lamn^2T\}\times\Omega)}=0. \end{align} Let $U_+$ denote the scattering state of $U_\infty$ in the forward-time direction. By the Strichartz estimate, Theorem \ref{convergence-flow}, and the monotone convergence theorem, we obtain \begin{align*} & \norm e^{i(t-\lambda_n^2 T)\Delta_{\Omega}}\tilde{v}_n(\lambda_n^2T)\norm_{\scaa((\lambda_n^2 T, \infty) \times \Omega)} = \norm e^{i(t-T)\Delta_{\Omega_n}}(\tilde{U}_n(T) \circ \Psi_n)\|_{\scaa((T, \infty) \times \Omega_n)} \\ &\lesssim \norm\big(e^{i(t-T)\Delta_{\Omega_n}} - e^{i(t-T)\Delta_{\mathbb{H}}}\big)(e^{iT\Delta_{\mathbb{H}}}U_+)\|_{\scaa((0, \infty) \times \Omega_n)} + \|e^{it\Delta_{\mathbb{H}}}U_+\|_{L_{t,x}^{\frac{5\alpha}{2}}((T, \infty) \times \mathbb{H})} + o(1), \end{align*} and this converges to zero by Theorem \ref{convergence-flow} and the monotone convergence theorem, by first taking $n \to \infty$ and then $T \to \infty$. Next, we consider the middle time interval $\{|t| \leq \lambda_n^2T\}$. 
By direct computation, we have \begin{align*} \Delta(\widetilde{U}_n \circ \Psi_n) &= (\partial_k\widetilde{U}_n \circ \Psi_n)\Delta\Psi_n^k + (\partial_{kl}\widetilde{U}_n \circ \Psi_n)\partial_j\Psi_n^l \partial_j\Psi_n^k, \end{align*} where $\Psi_n^k$ denotes the $k$th component of $\Psi_n$, and repeated indices are summed. Recall that $\Psi_n(x) = x + (0, \psi_n(\xp))$, hence we have \begin{align*} &\Delta\Psi_n^k=O(\partial^2\psi_n), \quad \partial_j\Psi_n^l=\delta_{jl}+O(\partial\psi_n), \\ &\partial_j\Psi_n^l\partial_j\Psi_n^k=\delta_{jl}\delta_{jk}+O(\partial\psi_n)+O((\partial\psi_n)^2), \end{align*} where we use $O$ to denote a collection of similar terms. Therefore, \begin{align*} (\partial_k\widetilde{U}_n\circ\Psi_n)\Delta\Psi_n^k&=O\bigl((\partial\widetilde{U}_n\circ\Psi_n)(\partial^2\psi_n)\bigr),\\ (\partial_{kl}\widetilde{U}_n\circ\Psi_n)\partial_j\Psi_n^l\partial_j\Psi_n^k &=\Delta\widetilde{U}_n\circ\Psi_n+O\bigl(\bigl(\partial^2\widetilde{U}_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr) \end{align*} and so \begin{align*} (i\partial_t+\Delta_{\Omega_n})(\widetilde{U}_n\circ \Psi_n)-(|\widetilde{U}_n|^\alpha\widetilde{U}_n)\circ\Psi_n &=[(i\partial_t+\Delta_{\mathbb{H}})\widetilde{U}_n-|\widetilde{U}_n|^\alpha\widetilde{U}_n]\circ \Psi_n \\ &\quad+O\bigl((\partial\widetilde{U}_n\circ\Psi_n)(\partial^2\psi_n)\bigr)+O\bigl(\bigl(\partial^2\widetilde{U}_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr). \end{align*} By a change of variables and \eqref{detpsin}, we get \begin{align} &\hspace{3ex}\norm(-\Delta_\Omega)^{\frac {s_c}2}\big((i\partial_t+\Delta_\Omega)\vn-|\tilde v_n|^\alpha\vn\big)\norm_{L_t^1L_x^2(\{|t|\le \lambda_n^2T\}\times\Omega)}\notag\\ &=\norm(-\Delta_{\Omega_n})^{\frac{s_c}2}\big((i\partial_t+\Delta_{\Omega_n})(\tilde U_n\circ\Psi_n)-(|\widetilde{U}_n|^\alpha\tilde U_n)\circ \Psi_n\big)\norm_{L_t^1L_x^2(\{|t|\le \lambda_n^2T\}\times\Omega_n)}\notag\\ &\lesssim \norm(-\Delta_{\Omega_n})^{\frac{s_c}2}\big(((i\partial_t+\Delta_{\mathbb{H}})\tilde U_n-|\tilde U_n|^\alpha\tilde U_n)\circ\Psi_n\big)\norm_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\quad+\norm(-\Delta_{\Omega_n})^{\frac {s_c}2}\big((\partial\tilde U_n\circ \Psi_n)\partial^2\psi_n)\norm_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\quad+\big\|(-\Delta_{\Omega_n})^{\frac {s_c}2}\big((\partial^2\tilde U_n\circ\Psi_n)\big(\partial\psi_n+(\partial\psi_n)^2\big)\big)\big\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\lesssim \|(-\Delta)^\frac{s_c}{2}\big((i\partial_t+\Delta_{\mathbb{H}})\tilde U_n -|\tilde U_n|^\alpha\tilde U_n\big)\|_{L_t^1L_x^2([-T,T]\times\mathbb{H})}\label{nn18}\\ &\quad+\norm(-\Delta)^\frac{s_c}{2}\big((\partial \tilde U_n\circ\Psi_n)\partial^2\psi_n\big)\norm_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{nn16}\\ &\quad+\big\|(-\Delta)^\frac{s_c}{2}\big((\partial^2 \tilde U_n\circ \Psi_n)\big(\partial\psi_n+(\partial\psi_n)^2\big)\big)\big\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{nn17}. \end{align} By direct computation, \begin{align} (i\partial_t+\Delta_{\mathbb{H}})\tilde U_n-|\tilde U_n|^\alpha\tilde U_n=(\chi_n-\chi_n^{\alpha+1})|U_n|^\alpha U_n+2\nabla\chi_n\cdot\nabla U_n+\Delta\chi_n U_n.\label{E11143} \end{align} For fixed $T>0$, using the fractional product rule, \eqref{difference-half}, \eqref{psin}, \eqref{detpsin} and $\lambda_n\rightarrow0$, it is easy to see that (\ref{nn16}), (\ref{nn17}), and the $\dot N^{s_c}(\mathbb{R} \times \mathbb{H})$ norms of the last two terms in (\ref{E11143}) converge to $0$ as $n\rightarrow\infty$.
Therefore, the proof of (\ref{nn14}) reduces to showing that the $\dot N^{s_c}(\mathbb{R} \times \mathbb{H} )$ norm of the first term in (\ref{E11143}) converges to $ 0 $ as $n\rightarrow\infty $. To this end, we estimate \begin{align*} & \|(\chi_n-\chi_n^{\alpha +1})|U_n|^{\alpha }U_n\|_{\dot N^{s_c}([-T,T]\times \mathbb{H} )} \notag\\ &\lesssim \|(\chi_n-\chi_n^{\alpha +1})|U_n|^{\alpha }|\nabla |^{s_c}U_n\|_{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}([-T,T]\times \mathbb{H} )} + \||U_n|^{\alpha +1}|\nabla |^{s_c}\chi_n\|_{L_t^{\frac{5\alpha }{2(\alpha +1)}}L_x^{\frac{30\alpha }{27\alpha -8}}([-T,T]\times \mathbb{H} )} \notag \\ &\lesssim \|U_n1_{|x|\sim L_n}\|_{L_{t,x}^{\frac{5\alpha }{2}}}^\alpha \||\nabla |^{s_c}U_n\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}}+ \|U_n1_{|x|\sim L_n}\|_{L_{t,x}^{\frac{5\alpha }{2}}}^\alpha \||\nabla |^{s_c}U_n\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} \||\nabla |^{s_c}\chi_n\|_{L_x^{\frac{3}{s_c}}} \\ &\lesssim\|1_{|x|\sim L_n}U_\infty\|_{\scaa}^\alpha+\|U_\infty-U_n\|^\alpha _{L_{t,x}^\frac{5\alpha}{2}}\to0\quad\text{as}\quad n\rightarrow\infty . \end{align*} This completes the proof of (\ref{nn14}). \textbf{Step 5}. Constructing $v_n$ and approximating by compactly supported functions. Similar to Theorem \ref{Tembbedding1} and Theorem \ref{Tembedding2}, using (\ref{tildevn4}), (\ref{step3-embed3}), (\ref{nn14}) and the stability Theorem \ref{TStability}, for $ n $ large enough we obtain a global solution $v_n$ to (\ref{NLS}) with initial data $v_n(0)=\phi_n$, which satisfies (\ref{E11145}). Moreover, a similar argument to that used in Theorems \ref{Tembbedding1} and \ref{Tembedding2} also gives (\ref{approximate-embed3}); we omit the details. \end{proof} \section{Reduction to Almost Periodic Solutions}\label{S5} The goal of this section is to establish Theorem \ref{TReduction}. The proof relies on demonstrating a Palais-Smale condition (Proposition \ref{Pps}) for minimizing sequences of blowup solutions to \eqref{NLS}, which leads to the conclusion that the failure of Theorem \ref{T1} would imply the existence of minimal counterexamples possessing the properties outlined in Theorem \ref{TReduction}. We adopt the framework described in \cite[Section 3]{KillipVisan2010AJM}. This general methodology has become standard in related contexts; see, for instance, \cite{KenigMerle2006,KenigMerle2010,KillipVisan2013,TaoVisanZhang2008FM} for analogous results in different settings. Consequently, we will highlight the main steps, providing detailed discussions only when specific challenges arise in our scenario. Throughout this section, we use the notation \begin{equation} S_I(u) := \int_I \int_{\Omega} |u(t, x)|^{\frac{5\alpha}{2}} \, dx \, dt. \end{equation} Assume Theorem \ref{T1} fails for some $s_c \in [\frac{1}{2}, \frac{3}{2})$. We define the function $L: [0, \infty) \to [0, \infty)$ as \[ L(E) := \sup\{S_I(u) : u : I \times \Omega \to \mathbb{C} \text{ solving } \eqref{NLS} \text{ with } \sup_{t \in I} \|u(t)\|^2_{\dot{H}^{s_c}_D(\Omega)} \leq E\}. \] It is noteworthy that $L$ is non-decreasing, and Theorem \ref{TLWP} provides the bound \begin{equation} L(E) \lesssim E^{\frac{5\alpha}{4}} \quad \text{for sufficiently small } E.\label{E10252} \end{equation} This implies the existence of a unique critical value $E_c \in (0, \infty]$ such that $L(E) < \infty$ for $E < E_c$ and $L(E) = \infty$ for $E > E_c$. The failure of Theorem \ref{T1} implies $0 < E_c < \infty$.
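In other words, a sketch of the definition implicit here is
\begin{equation*}
E_c:=\sup\big\{E\geq0:\ L(E)<\infty\big\};
\end{equation*}
the monotonicity of $L$ then yields $L(E)<\infty$ for $E<E_c$ and $L(E)=\infty$ for $E>E_c$, the small-data bound \eqref{E10252} gives $E_c>0$, and the assumed failure of Theorem \ref{T1} provides some finite $E$ with $L(E)=\infty$, whence $E_c<\infty$.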
A pivotal component of the proof of Theorem \ref{TReduction} is verifying a Palais-Smale condition. Once the following proposition is established, the derivation of Theorem \ref{TReduction} proceeds along standard lines (see \cite{KillipVisan2010AJM}). \begin{proposition}[Palais--Smale condition modulo symmetries]\label{Pps} Let $u_n : I_n \times \Omega \to \mathbb{C}$ be a sequence of solutions to (\ref{NLS}) such that \[ \limsup_{n \to \infty} \sup_{t \in I_n} \|u_n(t)\|_{\dot{H}_D^{s_c}(\Omega)}^2 = E_c, \] and suppose $t_n \in I_n$ are such that \begin{equation} \lim_{n \to \infty} S_{[t_n, \sup I_n]}(u_n) = \lim_{n \to \infty} S_{[\inf I_n, t_n]}(u_n) = \infty. \label{4.2} \end{equation} Then the sequence $u_n(t_n)$ has a subsequence that converges strongly in $\dot{H}_D^{s_c}(\Omega)$. \end{proposition} We now outline the proof of this proposition, following the argument presented in \cite{KillipVisan2010AJM}. As in that framework, the key components are the linear profile decomposition (Theorem \ref{linear-profile} in our setting) and the stability result (Theorem \ref{TStability}). To begin, we translate the sequence so that each $t_n = 0$, and apply the linear profile decomposition (Theorem \ref{linear-profile}) to express \begin{equation} u_n(0) = \sum_{j=1}^J \phi_n^j + w_n^J, \label{E10251} \end{equation} with the properties specified in Theorem \ref{linear-profile}. Next, we proceed to construct the nonlinear profiles. For $j$ conforming to Case 1, we invoke Theorem \ref{TLWP} and define $v^j : I^j \times \mathbb{R}^d \to \mathbb{C}$ as the maximal-lifespan solution to \eqref{NLS} satisfying \[ \begin{cases} v^j(0) := \phi^j & \text{if } t_n^j \equiv 0, \\ v^j \text{ scatters to } \phi^j \text{ as } t \to \pm \infty & \text{if } t_n^j \to \pm \infty. \end{cases} \] We then define the nonlinear profiles $v_n^j(t,x) := v^j(t + t_n^j (\lambda_n^j)^2, x)$. By construction, $v_n^j$ is also a solution to \eqref{NLS} on the time interval $I_n^j := I^j - \{t_n^j (\lambda_n^j)^2\}$. For sufficiently large $n$, we have $0 \in I_n^j$ and \begin{equation} \lim_{n \to \infty} \|v_n^j(0) - \phi_n^j\|_{\dot{H}^{s_c}_D(\Omega)} = 0. \notag \end{equation} For $j$ conforming to Cases 2, 3, or 4, we utilize the nonlinear embedding theorems from the previous section to construct the nonlinear profiles. Specifically, let $v_n^j$ be the global solutions to \eqref{NLS} constructed in Theorems \ref{Tembbedding1}, \ref{Tembedding2}, or \ref{Embed3}, as applicable. The $\dot{H}^{s_c}_D(\Omega)$ decoupling of the profiles $\phi^j$ in \eqref{profile-2}, along with the definition of $E_c$, ensures that for sufficiently large $j$, the profiles $v_n^j$ are global and scatter. Specifically, for $j \ge J_0$, the profiles fall within the small-data regime. To complete the argument, we aim to show that there exists some $1 \leq j_0 < J_0$ such that \begin{equation} \limsup_{n \to \infty} S_{[0, \sup I^{j_0}_n)}(v_n^{j_0}) = \infty. \label{E10261} \end{equation} When a 'bad' nonlinear profile similar to (\ref{E10261}) emerges, it can be shown that such a profile is unique. This conclusion follows by adapting the approach in \cite[Lemma 3.3]{KillipVisan2010AJM}, demonstrating that $\dot{H}^{s_c}_D(\Omega)$ decoupling holds over time. Utilizing the 'critical' nature of $E_c$, we can exclude the existence of multiple profiles. 
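A minimal sketch of this exclusion, under the assumption (justified by the adaptation of \cite[Lemma 3.3]{KillipVisan2010AJM} mentioned above) that the $\dot{H}^{s_c}_D(\Omega)$ decoupling persists along the nonlinear evolution: if two distinct profiles $j_1\neq j_2$ were both nontrivial, then for each of them
\begin{equation*}
\limsup_{n\to\infty}\,\sup_{t}\ \|v_n^{j_i}(t)\|_{\dot{H}^{s_c}_D(\Omega)}^2\ \le\ E_c-\liminf_{n\to\infty}\|\phi_n^{j_{i'}}\|_{\dot{H}^{s_c}_D(\Omega)}^2\ <\ E_c,\qquad \{i,i'\}=\{1,2\},
\end{equation*}
so the definition of $E_c$ would give uniform space-time bounds for both $v_n^{j_1}$ and $v_n^{j_2}$, which is incompatible with \eqref{E10261}.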
Consequently, the decomposition (\ref{E10251}) has a single profile (i.e., $J^* = 1$), allowing us to express \begin{equation} u_n(0) = \phi_n + w_n \quad \text{with} \quad \lim_{n \to \infty} \|w_n\|_{\dot{H}^{s_c}_D(\Omega)} = 0. \label{7.7} \end{equation} If $\phi_n$ belongs to Cases 2, 3, or 4, then by Theorems \ref{Tembbedding1}, \ref{Tembedding2}, or \ref{Embed3}, there exist global solutions $v_n$ to (\ref{NLS}) with initial data $v_n(0) = \phi_n$ that satisfy a uniform space-time bound. Using Theorem \ref{TStability}, this bound extends to $u_n$ for sufficiently large $n$, leading to a contradiction with (\ref{4.2}). Thus, $\phi_n$ must align with Case 1, and (\ref{7.7}) simplifies to \begin{equation} u_n(0) = e^{it_n \lambda_n^2 \Delta_\Omega} \phi + w_n \quad \text{with} \quad \lim_{n \to \infty} \|w_n\|_{\dot{H}^{s_c}_D(\Omega)} = 0\notag \end{equation} where $t_n \equiv 0$ or $t_n \to \pm \infty$. If $t_n \equiv 0$, the desired compactness follows. Therefore, it remains to rule out the case where $t_n \to \pm \infty$. Assume $t_n \to \infty$ (the case $t_n \to -\infty$ is analogous). Here, the Strichartz inequality combined with the monotone convergence theorem gives \[ S_{\geq 0}\left(e^{it\Delta_\Omega} u_n(0)\right) = S_{\geq 0}\left(e^{i(t + t_n \lambda_n^2) \Delta_\Omega} \phi + e^{it \Delta_\Omega} w_n\right) \longrightarrow 0 \quad \text{as} \quad n \to \infty. \] By small data theory, this result implies $S_{\geq 0}(u_n) \to 0$, contradicting (\ref{4.2}). To establish the existence of at least one bad profile, suppose, for contradiction, that no such profiles exist. In this case, the following bound holds: \begin{equation} \sum_{j \geq 1} S_{[0,\infty)}(v_n^j) \lesssim_ {E_c} 1. \label{E10253} \end{equation} Indeed, for $j$ sufficiently large the profiles lie within the small-data regime; applying small-data local well-posedness, we obtain $S_{[0,\infty)}(v_n^j) \lesssim \|v_n^j(0)\|_{\dot{H}^{s_c}_D(\Omega)}^{\frac{5\alpha}{2}}$, and the decoupling property (\ref{profile-2}) ensures that the tail of the sum is bounded in terms of $E_c$. Next, we use \eqref{E10253} and the stability result (Theorem \ref{TStability}) to constrain the scattering size of $u_n$, contradicting \eqref{4.2}. To proceed, we define the approximations \begin{equation} u_n^J(t) = \sum_{j=1}^{J} v_n^j(t) + e^{it\Delta} w_n^J. \end{equation} By the construction of $v_n^j$, it is easy to verify that \begin{equation} \limsup_{n \to \infty} \| u_n(0) - u_n^J(0) \|_{\dot{H}^{s_c}_D(\Omega)} = 0. \label{4.6} \end{equation} Furthermore, we claim: \begin{equation} \lim_{J \to \infty} \limsup_{n \to \infty} S_{[0,\infty)}(u_n^J) \lesssim_ {E_c} 1. \label{E10254} \end{equation} To justify \eqref{E10254}, observe that by \eqref{profile-1} and \eqref{E10253}, it suffices to prove \begin{equation} \lim_{J \to \infty} \limsup_{n \to \infty} \left| S_{[0,\infty)} \left( \sum_{j=1}^{J} v_n^j \right) - \sum_{j=1}^{J} S_{[0,\infty)}(v_n^j) \right| = 0. \label{4.8} \end{equation} Note that \[ \left|\left| \sum_{j=1}^{J} v_n^j \right|^{\frac{5\alpha }{2}} - \sum_{j=1}^{J} \left| v_n^j \right|^{\frac{5\alpha }{2}} \right|\lesssim_J \sum_{j \neq k} \left| v_n^j \right|^{\frac{5\alpha }{2}-1} \left| v_n^k \right|. \] It follows from H\"older's inequality that \begin{equation} \text{LHS} \eqref{4.8} \lesssim_J \sum_{j \neq k} \left\| v_n^j \right\|^{\frac{5\alpha }{2}-2}_{L_t^{\frac{5\alpha }{2}} L_x^{\frac{5\alpha }{2}} ([0,\infty) \times \Omega)} \left\| v_n^j v_n^k \right\|_{L_t^{\frac{5\alpha }{4}} L_x^{\frac{5\alpha }{4}} ([0,\infty) \times \Omega)}.
\label{E1026s1} \end{equation} Following Keraani's argument \cite[Lemma 2.7]{Keraani2001}, with $j \neq k$, we can first use (\ref{approximate-1}), (\ref{Embed-2}) and (\ref{approximate-embed3}) to approximate $v^j$ and $v^k$ by compactly supported functions in $\mathbb{R} \times \mathbb{R}^3$, and then use the asymptotic orthogonality \eqref{profile-4} to demonstrate \begin{equation} \limsup_{n \to \infty} \left(\|v_n^j v_n^k\|_{L_t^{\frac{5\alpha }{4}} L_x^{\frac{5\alpha }{4}} ([0,\infty) \times \Omega)}+ \|v_n^j(-\Delta _\Omega)^{\frac{s_c}{2}}v_n^k\|_{L_t^{\frac{5\alpha }{4}}L_x^{\frac{15\alpha }{15\alpha -8}}([0,\infty )\times \Omega)} \right) = 0.\label{E11161} \end{equation} Combining this with \eqref{E1026s1}, we see that \eqref{4.8} (and hence \eqref{E10254}) is valid. With \eqref{4.6} and \eqref{E10254} in place, proving that $u_n^J$ asymptotically solves (\ref{NLS}) reduces to showing: \begin{equation} \lim_{J \to \infty} \limsup_{n \to \infty} \| (i \partial_t + \Delta) u_n^J - |u_n^J|^\alpha u_n^J\|_{\dot N^{s_c}([0,\infty)\times \Omega)} = 0.\label{E11221} \end{equation} Once this is established, we can apply the stability Theorem \ref{TStability} to bound the scattering size of $u_n$, contradicting (\ref{4.2}); this completes the proof of Proposition \ref{Pps}. It thus remains to prove (\ref{E11221}), which relies on the following lemma. \begin{lemma}[Decoupling of nonlinear profiles]\label{LDecoupling of nonlinear profiles}Let $F(u)=|u|^{\alpha }u$. Then \begin{equation} \lim_{J \to \infty} \limsup_{n \to \infty} \| F ( \sum_{j=1}^{J} v_n^j ) - \sum_{j=1}^{J} F(v_n^j) \|_{\dot N^{s_c}([0,\infty)\times \Omega)} = 0,\label{E11151} \end{equation} \begin{equation} \lim_{J \to \infty} \limsup_{n \to \infty} \| F(u_n^J - e^{it \Delta} w_n^J) - F(u_n^J) \|_{\dot N^{s_c}([0,\infty)\times \Omega)} = 0.\label{E11152} \end{equation} \end{lemma} In the energy-critical setting, i.e., $s_c = 1$, one can instead use the pointwise estimate \begin{equation} \left| \nabla \left( F\left( \sum_{j=1}^J v_n^j \right) - \sum_{j=1}^J F(v_n^j) \right) \right| \lesssim_J \sum_{j \neq k} |\nabla v_n^j| |v_n^k|^\alpha \label{E11153} \end{equation} and (\ref{E11161}) to prove (\ref{E11151}) and (\ref{E11152}); the key is to exhibit terms that all contain some $v_n^j$ paired against some $v_n^k$ for $j \neq k$. In the case $s_c = 0$, there are also pointwise estimates similar to (\ref{E11153}). However, when $s_c \neq 0, 1$, a new difficulty arises as the nonlocal operator $|\nabla|^{s_c}$ does not respect pointwise estimates in the spirit of (\ref{E11153}). To address this issue, in the subcritical case ($s_c < 1$), Murphy \cite{Murphy2014} employs the Littlewood-Paley square function estimates, which hold for all $s > 0$ and $1 < r < \infty$: \begin{equation} \|(\sum N^{2s}|f_N(x)|^{2})^{1/2}\|_{L_x^r(\mathbb{R}^d)} \sim \|(\sum N^{2s}|f_{>N}(x)|^{2})^{1/2}\|_{L_x^r(\mathbb{R}^d)} \sim \||\nabla|^{s}f\|_{L_x^r(\mathbb{R}^d)}, \label{Eequvilat} \end{equation} to work at the level of individual frequencies. By utilizing maximal function and vector maximal function estimates, he adapts the standard arguments to this context.
In the supercritical case ($s_c > 1$), Killip and Visan \cite{KillipVisan2010} employed the following equivalence (see, e.g., \cite{Strichartz1967JMM}): \begin{equation} \||\nabla|^{s}f\|_{L_x^q} \sim \|\mathcal{D}_s(f)\|_{L_x^q}, \end{equation} where the operator $\mathcal{D}_s$ is defined as \[ \mathcal{D}_s(f)(x) := \left( \int_0^\infty \left| \int_{|y| < 1} \frac{|f(x + ry) - f(x)|}{r^{1 + 2s}} \, dy \right|^2 dr \right)^{1/2}, \] which behaves like $|\nabla|^s$ under symmetries. They then used the following pointwise inequality: \[ \mathcal{D}_s\big(w \cdot [F'(u + v) - F'(u)]\big) \lesssim \mathcal{D}_s(w)|v|^\alpha + M(|w|)M(|v|) \big[\mathcal{D}_s (u + v) + \mathcal{D}_s(u)\big], \] where $M$ denotes the Hardy-Littlewood maximal function. By combining this inequality with various permutations of the techniques discussed above, they adapted the standard arguments to this context. In this paper, we follow the arguments in \cite{Murphy2014,KillipVisan2010} and sketch the proof of Lemma \ref{LDecoupling of nonlinear profiles}. \begin{proof}[\textbf{Proof of (\ref{E11151})}] By induction, it suffices to treat the case of two summands. To simplify notation, we write $f = v_n^j$ and $g = v_n^k$ for some $j \neq k$, and are left to show \begin{equation} \| |f+g|^\alpha (f+g) - |f|^\alpha f - |g|^\alpha g \|_{\dot N^{s_c}([0, \infty) \times \Omega)} \to 0 \quad \text{as } n \to \infty. \notag \end{equation} We first rewrite \[ |f+g|^\alpha(f+g) - |f|^\alpha f - |g|^\alpha g = \big( |f+g|^\alpha- |f|^\alpha \big)f + \big( |f+g|^\alpha - |g|^\alpha \big)g. \] By symmetry, it suffices to treat \begin{equation} \| \big( |f+g|^\alpha - |f|^\alpha \big)f \|_{\dot N^{s_c}([0, \infty) \times \Omega)}. \label{E11173} \end{equation} We then utilize Theorem \ref{TEquivalence} and the Littlewood-Paley square function estimates (\ref{Eequvilat}) to reduce (\ref{E11173}) to handling \begin{equation} \left\| \left( \sum_N \big||\nabla|^{s_c} P_N \big( \big(|f+g|^\alpha - |f|^\alpha \big)f \big)\big|^2 \right)^{\frac{1}{2}} \right\|_{L_t^{\frac{5\alpha}{2(\alpha +1)}} L_x^{\frac{30\alpha}{27\alpha - 8}}}. \label{E11177} \end{equation} Then the key step is to perform a decomposition such that all resulting terms to estimate have $f$ paired against $g$ inside a single integrand. For such terms, the asymptotic orthogonality (\ref{E11161}) can be used. Following the arguments in \cite{Murphy2014}, we decompose (\ref{E11177}) into terms such that each term contains pairings of $f$ and $g$. For instance, one of the terms is \begin{equation} \|(\sum_N |N^{s_c}f_{>N}M(g|f|^{\alpha-1})|^2)^{1/2}\|_{L_t^{\frac{5\alpha}{2(\alpha +1)}} L_x^{\frac{30\alpha}{27\alpha - 8}}}. \label{E11178} \end{equation} Using H\"older's inequality and maximal function estimates, this term can be controlled as \begin{equation} \|(\sum_N |N^{s_c}f_{>N}|^2)^{1/2}\|_{L_t^{\frac{5\alpha }{2}}L_x^{\frac{30\alpha }{15\alpha -8}}} \||g||f|^{\alpha -1}\|_{L_{t,x}^{\frac{d+2}{2}}}. \notag \end{equation} By (\ref{Eequvilat}), the first term is bounded by $\||\nabla|^{s_c}v_n^j\|_{L_{t,x}^{\frac{2(d+2)}{d}}}$, which is further bounded by the construction of $v_n^j$. The second term vanishes as $n \to \infty$ due to the asymptotic orthogonality of parameters (\ref{E11161}). The other terms similar to (\ref{E11178}) can be handled similarly, thereby completing the proof of (\ref{E11151}). \end{proof} \begin{proof}[\textbf{Proof of (\ref{E11152})}] For this term, we will rely on (\ref{profile-1}) rather than (\ref{E11161}). 
The reasoning closely resembles the proof of (\ref{E11151}). Using the same approach as in the proof of (\ref{E11161}), we derive terms that involve either $e^{it\Delta}w_n^J$ or $|\nabla|^{s_c}e^{it\Delta}w_n^J$. The terms where $e^{it\Delta}w_n^J$ appears without derivatives are relatively simple to address, as we can directly apply (\ref{profile-1}). On the other hand, the terms containing $|\nabla|^{s_c} e^{it\Delta} w_n^J$ demand a more detailed analysis. Specifically, we first use the local smoothing estimate from Corollary \ref{CLocalsmoothing}, followed by an application of (\ref{profile-1}) to demonstrate that these terms vanish as $n \to \infty$. \end{proof} We now apply the Palais-Smale condition in Proposition \ref{Pps} to prove Theorem \ref{TReduction}. \begin{proof}[\textbf{Proof of Theorem \ref{TReduction}.}] Assume Theorem \ref{T1} is false. Using a standard argument (see, e.g., \cite[Theorem 5.2]{KillipVisan2013}), we can apply the Palais-Smale condition to construct a minimal counterexample $u:I \times \Omega \to \mathbb{C}$ satisfying \begin{equation} S_{\ge0}(u) = S_{\le 0}(u) = \infty, \label{E11171} \end{equation} with its orbit $\{u(t): t \in I\}$ being precompact in $\dot{H}^{s_c}_D(\Omega)$. Additionally, since the modulation parameter $N(t) \equiv 1$ is compact, it follows that the maximal lifespan interval is $I = \mathbb{R}$ (see, e.g., \cite[Corollary 5.19]{KillipVisan2013}). Next, we establish the lower bound in (\ref{E}) by contradiction. Suppose there exist sequences $R_n \to \infty$ and $\{t_n\} \subset \mathbb{R}$ such that \[ \int_{\Omega \cap \{|x| \leq R_n\}} |u(t_n, x)|^{\frac{3}{2}\alpha} \, dx \to 0. \] Passing to a subsequence, we obtain $u(t_n) \to \phi$ in $\dot{H}^{s_c}_D(\Omega)$ for some non-zero $\phi \in \dot{H}^{s_c}_D(\Omega)$. If $\phi$ were zero, the solution $u$ would have a $\dot{H}^{s_c}_D(\Omega)$ norm below the small data threshold, contradicting (\ref{E11171}). By Sobolev embedding, $u(t_n) \to \phi$ in $L^{\frac{3}{2}\alpha}$, and since $R_n \to \infty$, \begin{equation} \int_\Omega |\phi(x)|^{\frac{3}{2}\alpha} \, dx = \lim_{n \to \infty} \int_{\Omega \cap \{|x| \leq R_n\}} |\phi(x)|^{\frac{3}{2}\alpha} \, dx = \lim_{n \to \infty} \int_{\Omega \cap \{|x| \leq R_n\}} |u(t_n, x)|^{\frac{3}{2}\alpha} \, dx = 0.\notag \end{equation} This contradicts the fact that $\phi \neq 0$, thus completing the proof of Theorem \ref{TReduction}. \end{proof} \section{The cases $1<s_c<\frac{3}{2}$ and $s_c=\frac{1}{2}$.}\label{S6} In this section, we rule out the existence of almost periodic solutions as in Theorem \ref{TReduction} in the cases $1<s_c<3/2$ and $s_c=\frac{1}{2}$. The proof is based on a space-localized Morawetz estimate as in the work of Bourgain \cite{Bourgain1999} on the radial energy-critical NLS. See also \cite{Grillakis2000,Tao2005}. \begin{lemma}[Morawetz inequality]\label{L1091} Let $1<s_c<\frac{3}{2}$ and let $u$ be a solution to (\ref{NLS}) on the time interval $I$. Then for any $A \geq 1$ with $A |I|^{1/2} \geq \text{diam}(\Omega^c)$ we have \begin{equation} \int_I \int_{|x| \leq A |I|^{1/2}, x \in \Omega} \frac{|u(t,x)|^{\alpha +2}}{|x|}\, dx \, dt \lesssim (A|I|^{\frac{1}{2}})^{2s_c-1}\{ \|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} ^2+\|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} ^{\alpha +2}\}.\label{E1092} \end{equation} \end{lemma} \begin{proof} Let $\phi(x)$ be a smooth, radial bump function such that $\phi(x) = 1$ for $|x| \leq 1$ and $\phi(x) = 0$ for $|x| > 2$. 
We set $R \geq \text{diam}(\Omega^c)$ and denote $a(x) := |x| \phi\left(\frac{x}{R}\right)$. Then, for $|x| \leq R$ we have \begin{equation} \partial_j \partial_k a(x) \text{ is positive semidefinite}, \quad \nabla a(x) = \frac{x}{|x|}, \quad \text{and} \quad \Delta \Delta a(x) \leq 0. \label{E1094} \end{equation} For $|x| > R$, we have the following rough bounds: \begin{equation} |\partial_k a(x)| \lesssim 1, \quad |\partial_j \partial_k a(x)| \lesssim \frac{1}{R}, \quad \text{and} \quad |\Delta \Delta a(x)| \lesssim \frac{1}{R^3}.\label{E1095} \end{equation} By a direct calculation, we have the following identity \begin{equation} 2\partial_t \text{Im}(\bar{u} \partial_j u) = - 4 \partial_k \text{Re}(\partial_k u \partial_j \bar{u}) + \partial_j \Delta (|u|^2) - \frac{2\alpha }{\alpha +2} \partial_j (|u|^{\alpha +2}).\label{E1096} \end{equation} Multiplying both sides by $\partial_j a$ and integrating over $\Omega$, we obtain \begin{align} &2\partial_t \text{Im} \int_{\Omega} \bar{u} \partial_j u \partial_j a \, dx \notag\\ &= -4 \text{Re} \int_{\Omega} \partial_k (\partial_k u \partial_j \bar{u}) \partial_j a \, dx+ \int_{\Omega} \partial_j \Delta (|u|^2) \partial_j a \, dx- \frac{2\alpha }{\alpha +2} \int_{\Omega} \partial_j (|u|^{\alpha +2}) \partial_j a \, dx.\label{E1091} \end{align} We now give the upper bound of the LHS of \eqref{E1091}, which follows immediately from H\"older and the Sobolev embedding: \begin{equation} 2\left| \text{Im} \int_{\Omega} \bar{u} \partial_j u \partial_j a \, dx \right|\lesssim \|u\|_{L_x^{\frac{6}{3-2s_c}}(\Omega)} \|\nabla u\|_{L_x^{\frac{6}{5-2s_c}}(\Omega)} \|\nabla a\|_{L_x^{\frac{3}{2s_c-1}}(\Omega)}\lesssim R^{2s_c-1} \|u\|^2_{\dot H_D^{s_c}(\Omega)} .\label{E1093} \end{equation} Next, we find a lower bound on the RHS of (\ref{E1091}). By using the Gauss theorem, we get \begin{align*} -4 \text{Re} \int_{\Omega} \partial_k (\partial_k u \partial_j \bar{u}) \partial_j a \, dx &=4 \text{Re} \int_{\partial \Omega} \partial_k u \partial_{j}a\partial_j \bar{u} \vec{n}_k \, d\sigma(x) +4 \text{Re} \int_{\Omega} \partial_k u \partial_j \bar{u} \partial_k\partial_j a \, dx, \end{align*} where $\vec{n}$ denotes the outer normal vector to $\Omega^c$. We write $\partial_j \bar{u}\vec{n}_j = \nabla \bar{u} \cdot \vec{n} = \bar{u}_n$ and $\partial_j a\,\vec{n}_j=\nabla a\cdot \vec{n}=a_n$. Moreover, from the Dirichlet boundary condition, the tangential derivative of $u$ vanishes on the boundary: \[ \nabla u = (\nabla u \cdot \vec{n}) \vec{n} = u_n \vec{n}, \quad \text{and} \quad \partial_j \bar{u}\,\partial_j a = \bar{u}_n a_n.
\] Combining the analysis above and (\ref{E1094}), we obtain \begin{align} -4 \text{Re} \int_{\Omega} \partial_k (\partial_k u \partial_j \bar{u}) \partial_j a \, dx &\geq 4 \int_{\partial \Omega} a_n |u_n|^2 \, d\sigma(x) + 4 \int_{|x| \geq R} \partial_k u \partial_j \bar{u} \partial_k\partial_j a \, dx \\ &\ge 4 \int_{\partial \Omega} a_n |u_n|^2 \, d\sigma(x) - \|\Delta a\|_{L_x^{\frac{3}{2(s_c-1)}}( \{x:|x|>R\})} \|\nabla u\|^2_{L_x^{\frac{6}{5-2s_c}}(\Omega)}\\ &\geq 4 \int_{\partial \Omega} a_n |u_n|^2 \, d\sigma(x) - CR^{2s_c-3} \|u\|^2_{\dot H_D^{s_c}(\Omega)}.\label{E10111} \end{align} The second term on RHS of (\ref{E1091}) can be estimated by a similar argument: \begin{align} \int_{\Omega} \partial_j \Delta (|u|^2) \partial_j a \, dx &= \int_{\Omega} \partial_j ( \Delta (|u|^2) \partial_j a) dx - \int_{\Omega} \Delta (|u|^2) \Delta a \, dx\notag \\ &= - \int_{\partial \Omega} \Delta (|u|^2) \partial_j a \vec{n}_j\, d\sigma(x) - \int_{\Omega} |u|^2 \Delta \Delta a \, dx \notag\\ &= -2\int_{\partial \Omega} |\nabla u|^2 a_n \, d\sigma(x) - \int_{ |x|\le R} |u|^{2}\Delta ^2a\, dx -\int _{|x|\ge R}|u|^{2}\Delta ^2a\, dx\notag\\ &\geq -2 \int_{\partial \Omega} |u_n|^2 a_n \, d\sigma(x) - \|u\|_{L_x^{\frac{6}{3-2s_c}}( \Omega)}^2 \|\Delta ^2a\|_{L_x^{\frac{3}{2s_c}}( \{x:|x|>R\})}\notag\\ &\ge -2 \int_{\partial \Omega} |u_n|^2 a_n \, d\sigma(x) -CR^{2s_c-3} \|u\|_{\dot H_D^{s_c}(\Omega)}^2.\label{E10112} \end{align} Finally, it remains to estimate the third term on RHS of (\ref{E1091}). By using (\ref{E1094}) and (\ref{E1095}), \begin{align} -&\frac{2\alpha }{\alpha +2} \int_{\Omega} \partial_j (|u|^{\alpha +2}) \partial_j a \, dx = \frac{2\alpha }{\alpha +2} \int_{\Omega} |u|^{\alpha +2} \Delta a \, dx \notag\\ &= \frac{4\alpha }{\alpha +2} \int_{|x| \leq R} \frac{|u|^{\alpha +2}}{|x|} \, dx - \frac{4\alpha }{\alpha +2} \int _{\Omega \cap \{x:|x|>R\}}\Delta a |u|^{\alpha +2}dx\notag\\ &\ge \frac{4\alpha }{\alpha +2} \int_{|x| \leq R} \frac{|u|^{\alpha +2}}{|x|} \, dx - \|\Delta a\|_{L_x^{\frac{3}{2(s_c-1)}}( \{x:|x|>R\})} \| u\|_{L_x^{\frac{6}{3-2s_c}}(\Omega)}^{\alpha +2}\notag\\ &\ge \frac{4\alpha }{\alpha +2} \int_{|x| \leq R} \frac{|u|^{\alpha +2}}{|x|} \, dx -CR^{2s_c-3} \|u\|_{\dot H_D^{s_c}(\Omega)}^{\alpha +2}.\notag \end{align} Putting these together and using the fact that $a_n \geq 0$ on $\partial \Omega$, we have \begin{equation} \quad \text{LHS(\ref{E1091})} \gtrsim \int_{|x| \leq R} \frac{|u|^{\alpha +2}}{|x|} \, dx - R^{2s_c-3} ( \|u\|_{\dot H_D^{s_c}(\Omega)}^2+ \|u\|_{\dot H_D^{s_c}(\Omega)}^{\alpha +2} ).\label{E1097} \end{equation} Integrating (\ref{E1091}) over $I$ and using the upper bound for the LHD of (\ref{E1091}) and the lower bound for the RHS of (\ref{E1091}), we finally deduce \[ \int_I \int_{|x| \leq R, x \in \Omega} \frac{|u|^{\alpha +2}}{|x|} \, dx \, dt \lesssim R^{2s_c-1} \|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} ^2+ R^{2s_c-3}|I|\left\{\|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} ^2+\|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} ^{\alpha +2} \right\}. \] Taking $R = A |I|^{1/2}$ yields (\ref{E1092}). This completes the proof of the lemma. \end{proof} In the proof of Lemma \ref{L1091}, by taking $R \rightarrow +\infty$ and using the same argument as in \cite[Lemma 2.3]{CKSTT} to control the upper bound of the Morawetz action, we can obtain the following non-spatially localized Lin-Strauss Morawetz inequality. 
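Before stating it, we record for the reader's convenience the standard pointwise identities for the unscaled weight $a(x)=|x|$ on $\R^3$ (the formal $R\to\infty$ limit of the weight used above), which underlie both the computation above and the lemma below:
\begin{equation*}
\nabla a=\frac{x}{|x|},\qquad \partial_j\partial_k a=\frac{1}{|x|}\Big(\delta_{jk}-\frac{x_jx_k}{|x|^2}\Big)\ \text{(positive semidefinite)},\qquad \Delta a=\frac{2}{|x|},\qquad \Delta\Delta a=-8\pi\delta_0,
\end{equation*}
where the last identity is understood in the distributional sense.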
\begin{lemma}[Morawetz inequality]\label{L10911} Let $s_c=\frac{1}{2}$ and let $u$ be a solution to (\ref{NLS}) on the time interval $I$. Then we have \begin{equation} \int_I \int_{ \Omega} \frac{|u(t,x)|^{\alpha +2}}{|x|}\, dx \, dt \lesssim \|u\|_{L^\infty _t\dot H^{\frac{1}{2}}_D(\Omega)}^2 .\label{E109} \end{equation} \end{lemma} We now use Lemma \ref{L1091} and Lemma \ref{L10911} to prove the following. \begin{theorem}\label{T1091} There are no almost periodic solutions $u$ to (\ref{NLS}) as in Theorem \ref{TReduction} with $1<s_c<3/2$ or $s_c=\frac{1}{2}$. \end{theorem} \begin{proof} By contradiction, we suppose that there exists a minimal blow-up solution $u$ whose orbit is precompact in $\dot{H}_D^{s_c}(\Omega)$ and which satisfies (\ref{E}). We first claim that \begin{equation} \int _{\Omega\cap \{x:|x|\le R\}}|u(t,x)|^{\alpha +2}dx\gtrsim _R1, \qquad \text{uniformly for } t\in \mathbb{R}. \label{E12221} \end{equation} Indeed, by (\ref{E}) and H\"older's inequality, it suffices to show that \begin{equation} \int _{\Omega\cap \{x:|x|\le R\}}|u(t,x)|^{2}dx\gtrsim 1, \qquad \text{uniformly for } t\in \mathbb{R}. \label{E12222} \end{equation} Applying H\"older's inequality, Sobolev embedding, Bernstein and (\ref{E10101}), we obtain \begin{align} &\left|\int _{\Omega\cap \{x:|x|\le R\}}|u(t,x)|^2-|P^{\Omega}_{<C(\eta)}u(t,x)|^2dx\right|\notag\\ &\lesssim R^{s_c} \|P^{\Omega}_{>C(\eta)}u(t,x)\|_{L_x^{2}(\Omega)} \|u(t,x)\| _{L_x^{\frac{6}{3-2s_c}}(\Omega)} \lesssim \eta R^{s_c} \|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times\Omega)}. \label{E1222x1} \end{align} On the other hand, using H\"older's inequality and Sobolev embedding again, we have \begin{align} &\left|\int _{\Omega\cap \{x:|x|\le R\}} |u(t,x)|^{\frac{3\alpha }{2}}-|P^{\Omega}_{<C(\eta)}u(t,x)|^{\frac{3\alpha }{2}}dx\right|\notag\\ &\lesssim \|P^{\Omega}_{>C(\eta)}u\|_{L_t^{\infty }L_x^{\frac{6}{3-2s_c}}(I\times \Omega)} \|u\|_{L_t^{\infty }L_x^{\frac{6}{3-2s_c}}(I\times \Omega)}^{\frac{3\alpha }{2}-1} \lesssim \eta \|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)}^{\frac{3\alpha }{2}-1} . \notag \end{align} Combining the above inequality with (\ref{E}), and further applying H\"older's inequality and Bernstein's estimate, we deduce \begin{align} 1\lesssim& \int _{\Omega\cap \{x:|x|\le R\}}|P^{\Omega}_{<C(\eta)}u|^{\frac{3\alpha }{2}}dx\notag\\ & \lesssim \|P^{\Omega}_{<C(\eta)}u\| _{L_x^{\infty }(\Omega)}^{\frac{3\alpha }{2}-2}\int _{\Omega\cap \{x:|x|\le R\}}|P^{\Omega}_{<C(\eta)}u|^{2}dx \lesssim \int _{\Omega\cap \{x:|x|\le R\}}|P^{\Omega}_{<C(\eta)}u|^{2}dx. \notag \end{align} By combining the above inequality with (\ref{E1222x1}) and choosing $\eta > 0$ sufficiently small, we establish (\ref{E12222}), thereby completing the proof of (\ref{E12221}). We now proceed to prove Theorem \ref{T1091}. Integrating (\ref{E12221}) over $I$ with length $|I| \geq 1$, we obtain \[ |I| \lesssim _R \int_I \int_{\Omega \cap \{|x| \leq R\}} \frac{|u(t, x)|^{\alpha +2}}{|x|} \, dx \, dt \lesssim_ R \int_I \int_{\Omega \cap \{|x| \leq R |I|^{1/2} \}} \frac{|u(t, x)|^{\alpha +2}}{|x|} \, dx \, dt. \] On the other hand, for $R |I|^{1/2} \geq 1$, the Morawetz inequality (Lemma \ref{L1091} and Lemma \ref{L10911}) yields that \[ |I|\lesssim _R\int_I \int_{\Omega \cap \{|x| \leq R |I|^{1/2} \}} \frac{|u(t, x)|^{\alpha +2}}{|x|} \, dx \, dt \lesssim |I| ^{s_c-\frac{1}{2}}, \] with the implicit constant depending only on $ R $ and $ \|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} $.
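Indeed, a brief arithmetic check of the final step: combining the two displays above gives
\begin{equation*}
|I|\lesssim_{R,u}|I|^{s_c-\frac{1}{2}},\qquad\text{with }\ s_c-\tfrac{1}{2}\in\big(\tfrac{1}{2},1\big)\ \text{ for }1<s_c<\tfrac{3}{2}\quad\text{and}\quad s_c-\tfrac{1}{2}=0\ \text{ for }s_c=\tfrac{1}{2},
\end{equation*}
which cannot hold once $|I|$ is sufficiently large.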
Choosing $I$ sufficiently large depending on $R$ and $\|u\|_{L_t^{\infty }\dot H^{s_c}_D(I\times \Omega)} $, we get a contradiction, which completes the proof of Theorem \ref{T1091}. \end{proof} \section{The case $\frac{1}{2}<s_c<1$.}\label{S1/2-1} In this section, we rule out the almost periodic solutions in the case $1/2<s_c<1$. The key tool employed is a long-time Strichartz estimate, which will be established in subsection \ref{s1/2-1,1}. In subsection \ref{s1/2-1,2}, we derive a frequency-localized Lin-Strauss Morawetz inequality, and in subsection \ref{s1/2-1,3}, we use it to exclude almost periodic solutions. \subsection{Long-time Strichartz estimates}\label{s1/2-1,1} In this subsection, we prove a long-time Strichartz estimate tailored to the Lin-Strauss Morawetz inequality. This type of estimate, initially developed by Dodson \cite{Dodson2012}, has demonstrated its effectiveness in excluding minimal counterexamples. For references, see \cite{KillipVisan2012,Visan2012} for the energy-critical case, \cite{Murphy2014,Murphy2015,Yu2021} for the inter-critical regime, and \cite{Dodson2017,LuZheng2017,MiaoMurphyZheng2014,Murphy2015} for the super-critical setting. In this work, we introduce for the first time a long-time Strichartz estimate applicable to the exterior domain Schr\"odinger equation. This estimate plays a pivotal role in subsection \ref{s1/2-1,2}, where it is used to handle the error terms arising from frequency projection in the Lin-Strauss Morawetz inequality. Throughout this subsection \ref{S1/2-1}, we use the following notation: \begin{equation} A_I(N):= \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega _{\le N}u\|_{L^2_tL_x^6(I\times \Omega)}.\notag \end{equation} The main result of this subsection is the following. \begin{proposition}[Long time Strichartz estimate]\label{PLT2} Let $u :\mathbb{R} \times \Omega\to \mathbb{C}$ be an almost periodic solution to (\ref{NLS}) with $1/2 < s_c < 1$. Then for any $N > 0$, we have \begin{equation} A_I(N) \lesssim_u 1 + N^{s_c-1/2} |I|^{1/2}. \label{E10106} \end{equation} Moreover, for any $\varepsilon > 0$, there exists $N_0 = N_0(\varepsilon) > 0$ so that for any $N \leq N_0$, \begin{equation} A_I(N) \lesssim_u \varepsilon (1 + N^{s_c - 1/2} |I|^{1/2}). \label{E10107} \end{equation} \end{proposition} We prove Proposition \ref{PLT2} by induction. The inductive step relies on the following. \begin{lemma}\label{LLT2} Let $\eta>0$, $u$ and $I$ be as above. For any $N>0$, we have \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega_{\le N}F(u)\|_{L^2_tL^{6/5}_x(I\times \Omega)}\lesssim _u C_\eta \|u_{\le N/\eta}\|_{L^\infty _t\dot H^{s_c}_{D}(\Omega)}N^{s_c-1/2}|I|^{1/2}+\sum _{M>N/\eta}\left(\frac{N}{M}\right)^{s_c}A_I(M).\notag \end{equation} \end{lemma} \begin{proof} We fix $0 < \eta < 1$ and decompose the nonlinearity as follows: \[ F(u) = F(u_{\leq N/\eta}) + \left[F(u) - F(u_{\leq N/\eta}) \right]. 
\] Using the fractional chain rule (\ref{E12133}), H\"older and Sobolev embedding, we estimate \begin{align} & \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega _{\le N}F(u_{\leq N/\eta})\|_{L^2_t L^{6/5}_x ( I\times \Omega)}\notag \\ &\lesssim \| (-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N/\eta}\|^{\alpha }_{L^\infty_t L^2_x ( I\times \Omega)} \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta} \|_{L^2_t L^6_x (I\times \Omega)} \notag\\ & \lesssim \| (-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega _{\le c(\eta)}u_{\le N/\eta}\|^{\alpha }_{L^\infty_t L^2_x ( I\times \Omega)} \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta} \|_{L^2_t L^6_x (I\times \Omega)} \label{E1011x3}\\ & \quad + \|(-\Delta _\Omega)^{\frac{s_c}{2}} P^\Omega _{>c(\eta)} u_{\leq N/\eta}\|^{\alpha }_{L^\infty_t L^2_x (I\times \Omega)} \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta} \|_{L^2_t L^6_x (I\times \Omega)}.\label{E1011x4} \end{align} For the first term, we use (\ref{E10101}) to estimate \begin{align} (\ref{E1011x3})& \lesssim \eta ^\alpha \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta} \|_{L^2_t L^6_x (I\times \Omega)}\lesssim \eta ^{s_c}A_I(N/\eta). \label{E10102} \end{align} For the next term, we note that we only need to consider the case $c(\eta) < N/\eta$, in which case we have $1 \lesssim_\eta N^{s_c - 1/2}$. Then by Bernstein and Lemma \ref{Lspace-time bound}, we have \begin{align} (\ref{E1011x4}) & \lesssim_u C_\eta N^{s_c - 1/2} \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta} \|_{L^\infty _t L^2_x (I\times \Omega)} \lesssim_u C_\eta N^{s_c - 1/2} \|u_{\leq N/\eta}\|_{L^\infty_t \dot{H}^{s_c}_D (I\times \Omega)}.\label{E10103} \end{align} Combining (\ref{E10102}) and (\ref{E10103}), we obtain \begin{align} \|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega _{\le N} F(u_{\leq N/\eta}) \|_{L^2_t L^{6/5}_x(I\times \Omega)} & \lesssim_u \eta^{s_c} A_I(N/\eta) + C_\eta \|u_{\leq N/\eta}\|_{L^\infty_t \dot{H}^{s_c}_D (I\times \Omega)} N^{s_c-1/2}|I|^{\frac{1}{2}}.\label{E10161} \end{align} Next, we use Bernstein, H\"older and Sobolev embedding to estimate \begin{align*} &\|(-\Delta _\Omega)^{\frac{s_c}{2}}P^\Omega _{\le N}\left( F(u) - F(u_{\leq N/\eta}) \right) \|_{L^2_t L^{6/5}_x(I\times \Omega)} \notag\\ &\lesssim N^{s_c} \|u\|^{\alpha }_{L^\infty_t L_x^{3\alpha /2} (I\times \Omega)} \sum_{M > N/\eta} \|u_M\|_{L^2_t L^6_x(I\times \Omega)} \\ & \lesssim N^{s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L^\infty _tL^2_x(I\times \Omega)} ^{\alpha } \sum _{M>N/\eta}M^{-s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_M\|_{L^2_tL^6_x(I\times \Omega)}\notag\\ & \lesssim_u \sum_{M > N/\eta} \left( \frac{N}{M} \right)^{s_c} A_I(M), \end{align*} which together with (\ref{E10161}) yields the desired estimate in Lemma \ref{LLT2}. \end{proof} We now turn to the proof of Proposition \ref{PLT2}. \begin{proof}[\textbf{Proof of Proposition \ref{PLT2}}] We use induction to establish the result. For the base case, consider $N > 1$. By Lemma \ref{Lspace-time bound}, we have \begin{equation} A_I(N) \lesssim_u 1 + |I|^{1/2} \leq C_u \left[ 1 + N^{s_c-1/2} |I|^{1/2} \right]. \label{E10104} \end{equation} This inequality remains valid if $C_u$ is replaced with any larger constant. Next, assume that (\ref{E10104}) holds for frequencies $\geq 2N$. We will apply Lemma \ref{LLT2} to verify that it holds at frequency $N$. 
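Before carrying out the inductive step, we also record the elementary dyadic-sum bounds that will be used when invoking the inductive hypothesis below (a direct check, with all sums taken over dyadic $M$):
\begin{equation*}
\sum_{M\geq N/\eta}\Big(\frac{N}{M}\Big)^{s_c}\lesssim\eta^{s_c}
\qquad\text{and}\qquad
\sum_{M\geq N/\eta}\Big(\frac{N}{M}\Big)^{s_c}M^{s_c-\frac{1}{2}}
=N^{s_c-\frac{1}{2}}\sum_{M\geq N/\eta}\Big(\frac{N}{M}\Big)^{\frac{1}{2}}\lesssim\eta^{\frac{1}{2}}N^{s_c-\frac{1}{2}}.
\end{equation*}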
Using Strichartz estimates and Lemma \ref{LLT2}, we get \begin{align} A_I(N) &\leq \tilde{C_u} \left[ \inf_{t \in I} \| u_{\leq N}(t) \|_{\dot H^{s_c}_D(\Omega)} + C_\eta \| u_{\leq N/\eta} \|_{L_t^\infty \dot{H}_D^{s_c}(I \times \Omega)} N^{s_c-1/2} |I|^{1/2} \right. \notag\\ &\qquad + \left. \sum_{M \geq N/\eta} \left( \frac{N}{M} \right)^{s_c} A_I(M) \right] \notag\\ &\leq \tilde{C_u} \left[ 1 + C_\eta N^{s_c-1/2} |I|^{1/2} + \sum_{M \geq N/\eta} \left( \frac{N}{M} \right)^{s_c} A_I(M) \right]. \label{E10105} \end{align} By the inductive hypothesis, we have \begin{align} A_I(N) &\leq \tilde{C_u} \left[ 1 + C_\eta N^{s_c-1/2} |I|^{1/2} + \sum_{M \geq N/\eta} \left( \frac{N}{M} \right)^{s_c} \big(C_u + C_u M^{s_c-1/2} |I|^{1/2}\big) \right] \notag\\ &\leq \tilde{C_u} \left[ 1 + C_\eta N^{s_c-1/2} |I|^{1/2} \right] + C_u \tilde{C_u} \left[ \eta^{s_c} + \eta^{1/2} N^{s_c-1/2} |I|^{1/2} \right]. \notag \end{align} Choosing $\eta$ sufficiently small depending on $\tilde{C_u}$, we deduce \[ A_I(N) \leq \tilde{C_u} (1 + C_\eta N^{s_c-1/2} |I|^{1/2}) + \frac{1}{2} C_u (1 + N^{s_c-1/2} |I|^{1/2}). \] Finally, by taking $C_u$ large enough to ensure $C_u \geq 2(1 + C_\eta)\tilde{C_u}$, we conclude \[ A_I(N) \leq C_u (1 + N^{s_c-1/2} |I|^{1/2}). \] This completes the proof of (\ref{E10106}) via induction. With (\ref{E10106}) established, we proceed to prove (\ref{E10107}) by building on (\ref{E10105}). In fact, for any $\eta > 0$ sufficiently small, the almost periodicity condition implies \[ \lim_{N \to 0} \|u_{\leq N/\eta}\|_{L^\infty_t \dot H^{s_c}_D(I \times \Omega)} = 0. \] This completes the proof of Proposition \ref{PLT2}. \end{proof} \subsection{A Frequency-Localized Lin-Strauss Morawetz Inequality}\label{s1/2-1,2} \begin{proposition}[Frequency-localized Morawetz]\label{PMorawetz} Let $u : \mathbb{R} \times \Omega \to \mathbb{C}$ be an almost periodic solution to (\ref{NLS}) with $1/2 < s_c < 1$. For any $\eta > 0$, there exists $N_0 = N_0(\eta) \in (0,1)$ such that for $N < N_0$ and $I \subset \mathbb{R}$, the following estimate holds: \[ \int_{I } \int_\Omega \frac{|u_{>N}(t,x)|^{\alpha + 2}}{|x|} \, dx \, dt \lesssim_u \eta \left( N^{1 - 2s_c} + |I| \right). \] \end{proposition} To establish Proposition \ref{PMorawetz}, we begin by truncating the low frequencies of the solution and focusing on $u_{>N}$ for a given $N > 0$. Since $u_{>N}$ is not an exact solution to (\ref{NLS}), additional error terms introduced by this frequency projection must be estimated. To handle these terms, we need the following lemma. \begin{lemma}[High and low frequency control]\label{LHLC} Let $u$ and $I$ be as above. With all spacetime norms over $I \times \Omega$, the following hold: \begin{itemize} \item[(i)] Let $(q,r)$ be an admissible pair. For any $N > 0$ and $0 \le s < 1/2$, we have \begin{equation} \|(-\Delta_\Omega)^{\frac{s}{2}}u_{>N}\|_{L_t^q L_x^r} \lesssim_u N^{s - s_c}(1 + N^{2s_c - 1} |I|)^{1/q}. 
\label{E1010x1} \end{equation} \item[(ii)] For any $\eta > 0$ and $0 < s < s_c$, there exists $N_1 = N_1(\eta)$ such that for $N < N_1$, we have \begin{equation} \|(-\Delta_\Omega)^{\frac{s}{2}}u_{>N}\|_{L_t^\infty L_x^2} \lesssim_u \eta N^{s - s_c}.\label{E1010x2} \end{equation} \item[(iii)] For any $\eta > 0$, there exists $N_2 = N_2(\eta)$ such that for $N < N_2$, we have \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}} u_{\leq N}\|_{L_t^2 L_x^6} \lesssim_u \eta(1 + N^{2s_c - 1} |I|)^{1/2}.\label{E1010x3} \end{equation} \end{itemize} \end{lemma} \begin{proof} For (\ref{E1010x1}), we first apply interpolation, (\ref{Ebound}), and (\ref{E10106}) to get \begin{equation} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u_{<N}\|_{L_t^q L_x^r} \lesssim \|u\|_{L_t^\infty \dot H^{s_c}_D(\Omega)}^{1-\frac{2}{q}} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u_{<N}\|_{L_t^2 L_x^6}^{\frac{2}{q}} \lesssim (1 + N^{2s_c - 1}|I|)^{1/q}.\notag \end{equation} Using Bernstein's inequality, we deduce \begin{align} \|(-\Delta_\Omega)^{\frac{s}{2}}u_{>N}\|_{L_t^q L_x^r} &\lesssim \sum_{M > N} M^{s-s_c} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u_M\|_{L_t^q L_x^r} \notag\\ &\lesssim_u \sum_{M > N} M^{s-s_c}(1 + M^{2s_c - 1}|I|)^{1/q} \lesssim_u N^{s-s_c}(1 + N^{2s_c - 1}|I|)^{1/q}.\notag \end{align} For (\ref{E1010x2}), we utilize the almost periodicity property (\ref{E10101}) and Bernstein's inequality to obtain \begin{align} \|(-\Delta_\Omega)^{\frac{s}{2}}u_{>N}\|_{L_t^\infty L_x^2} &\lesssim c(\eta)^{s-s_c} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u_{>c(\eta)}\|_{L_t^\infty L_x^2} + N^{s-s_c} \|(-\Delta_\Omega)^{\frac{s_c}{2}}u_{N \leq \cdot \leq c(\eta)}\|_{L_t^\infty L_x^2} \notag\\ &\lesssim_u c(\eta)^{s-s_c} + \eta N^{s-s_c}.\notag \end{align} Choosing $N_1(\eta) = \eta^{1/(s_c - s)}c(\eta)$ yields the desired result (\ref{E1010x2}). The final inequality (\ref{E1010x3}) directly corresponds to (\ref{E10107}). \end{proof} We now proceed to prove Proposition \ref{PMorawetz}. \begin{proof}[\textbf{Proof of Proposition \ref{PMorawetz}}] Throughout the proof, all space-time norms are taken over $I \times \Omega$, and we omit this from the notation. Let $0 < \eta \ll 1$ and take \[ N < \min\{N_1(\eta), \eta^2 N_2(\eta^{2s_c})\}, \] where $N_1$ and $N_2$ are as in Lemma \ref{LHLC}. As a consequence, (\ref{E1010x1}) implies that \begin{equation} \|u_{>N/\eta^{2}}\|_{L_t^2 L_x^6} \lesssim_u \eta N^{-s_c}(1 + N^{2s_c - 1} |I|)^{1/2}. \label{E1010x4} \end{equation} Moreover, using the fact that $N/\eta^2 < N_2(\eta^{2s_c})$, we can apply (\ref{E1010x3}) to show \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}} u_{\leq N/\eta^2}\|_{L_t^2 L_x^6} \lesssim_u \eta(1 + N^{2s_c - 1} |I|)^{1/2}.\label{E1010x5} \end{equation} We then define the Morawetz action by \[ \text{Mor}(t) = 2 \text{Im} \int_{\Omega} \frac{x}{|x|} \cdot \nabla u_{>N}(t,x) \overline{u_{>N}(t,x)} dx. \] Since $(i \partial_t + \Delta_\Omega) u_{>N} = P^{\Omega}_{>N}(F(u))$, a direct calculation gives \begin{align} \partial_{t}\text{Mor}(t) &=-4\text{Re} \int _\Omega \partial_{k}(\partial_{k}u_{>N}\partial_{j}\overline{u}_{>N})\frac{x_j}{|x|}dx+\int _\Omega\partial_{j}\Delta (|u_{>N}|^{2})\frac{x_j}{|x|}dx + 2\int _{\Omega}\frac{x}{|x|} \cdot \{P^{\Omega}_{>N}(F(u)), u_{>N}\}_p dx,\notag \end{align} where the momentum bracket $\{ \cdot , \cdot \}_p$ is defined by $\{ f, g \}_p := \text{Re}( f \nabla \overline{g} - g \nabla \overline{f} )$.
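Concretely, the momentum bracket of the power nonlinearity can be computed explicitly; we record a short sketch of the direct computation invoked below: for $F(u)=|u|^{\alpha}u$,
\begin{equation*}
\{F(u),u\}_p=\text{Re}\big(|u|^{\alpha}u\nabla\bar u-u\nabla(|u|^{\alpha}\bar u)\big)
=-|u|^{2}\nabla\big(|u|^{\alpha}\big)
=-\frac{\alpha}{\alpha+2}\nabla\big(|u|^{\alpha+2}\big),
\end{equation*}
where the last step uses $|u|^{2}\nabla(|u|^{\alpha})=\frac{\alpha}{2}|u|^{\alpha}\nabla(|u|^{2})=\frac{\alpha}{\alpha+2}\nabla(|u|^{\alpha+2})$.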
Using the same argument as that used to derive (\ref{E10111}), (\ref{E10112}), and the fact that $\partial_{jk}|x|$ is positive semi-definite, we have \begin{equation} -4\text{Re} \int _\Omega \partial_{k}(\partial_{k}u_{>N}\partial_{j}\overline{u}_{>N})\frac{x_j}{|x|}dx+\int _\Omega\partial_{j}\Delta (|u|^{2})\frac{x_j}{|x|}dx\ge 2\int _{\partial \Omega}|\nabla u_{>N}\cdot \vec{n}|^2\frac{x}{|x|}\cdot \vec{n}d\sigma (x)\geq 0,\notag \end{equation} where $\vec{n}$ denotes the outer normal to $\Omega^c$. Hence \[ \partial_t \text{Mor}(t) \ge2 \int_{\Omega} \frac{x}{|x|} \cdot \{P^{\Omega}_{>N}(F(u)), u_{>N}\}_p dx. \] The fundamental theorem of calculus yields that \begin{equation} \int_{I \times \Omega} \frac{x}{|x|} \cdot \{P^{\Omega}_{>N}(F(u)), u_{>N}\}_p \, dx \, dt \lesssim \|\text{Mor}(t)\|_{L_t^\infty(I)}. \label{E1010x6} \end{equation} Recalling the momentum bracket identity $$ \{F(u), u\}_p = - \frac{\alpha }{\alpha +2} \nabla (|u|^{\alpha +2}),$$ which also holds with $u$ replaced by $u_{\leq N}$, we can decompose the truncated momentum bracket as follows: \begin{align} &\{P^{\Omega}_{>N}(F(u)), u_{>N}\}_p \notag\\ &= \{F(u), u\}_p - \{F(u_{\leq N}), u_{\leq N}\}_p - \{F(u) - F(u_{\leq N}), u_{\leq N}\}_p - \{P^{\Omega}_{\leq N}(F(u)), u_{>N}\}_p\notag\\ &= - \frac{\alpha }{\alpha +2} \nabla (|u|^{\alpha +2} - |u_{\leq N}|^{\alpha +2}) - \{F(u) - F(u_{\leq N}), u_{\leq N}\}_p - \{P^{\Omega}_{\leq N}(F(u)), u_{>N}\}_p\notag\\ &:= I + II + III.\notag \end{align} Integrating by parts, the term $I$ contributes to the left-hand side of \eqref{E1010x6} a multiple of \begin{equation} \int_{I }\int _\Omega \frac{|u_{>N}(t,x)|^{\alpha +2}}{|x|} dx dt\label{E1010x7} \end{equation} and to the right-hand side of (\ref{E1010x6}) a multiple of \begin{align} \left\| \frac{1}{|x|} (u_{\leq N})^{\alpha +1} u_{>N} \right\|_{L_{t,x}^1} +\left\| \frac{1}{|x|} u_{\leq N} (u_{>N})^{\alpha +1} \right\|_{L_{t,x}^1 }:=I_1+I_2. \label{E1010w2} \end{align} For the second term $II$, we use the divergence theorem to deduce that \begin{align} II\lesssim \left\| \frac{1}{|x|} u_{\leq N} [F(u) - F(u_{\leq N})] \right\|_{L_{t,x}^1} + \left\| \nabla u_{\leq N} [F(u) - F(u_{\leq N})] \right\|_{L_{t,x}^1}:=II_1+II_2.\label{E1010x10} \end{align} Finally, for the last term $III$, we integrate by parts when the derivative acts on $u_{>N}$ to get \begin{align} III\lesssim \left\| \frac{1}{|x|} u_{>N} P^{\Omega}_{\leq N} (F(u)) \right\|_{L_{t,x}^1} + \left\| u_{>N} \nabla P^{\Omega}_{\leq N} (F(u)) \right\|_{L_{t,x}^1}.\label{E1010x12} \end{align} Thus, building upon (\ref{E1010x6}), we conclude that it remains to show \begin{equation} \| \text{Mor} \|_{L_t^\infty (I)} \lesssim_u \eta N^{1 - 2s_c},\label{E1010x13} \end{equation} and that the error terms (\ref{E1010w2}), (\ref{E1010x10}) and (\ref{E1010x12}) are controlled by $\eta (N^{1 - 2s_c} +|I|)$. We first prove (\ref{E1010x13}). Making use of the Bernstein estimate, the Hardy inequality, and (\ref{E1010x2}), we obtain \begin{align} & \| \text{Mor} \|_{L_t^\infty(I)} \lesssim \| |\nabla |^{-1/2} \nabla u_{>N} \|_{L_t^\infty L_x^2} \left\| |\nabla |^{1/2} \left( \frac{x}{|x|} u_{>N} \right) \right\|_{L_t^\infty L_x^2}\notag\\ &\lesssim_u \| |\nabla |^{1/2} u_{>N} \|_{L_t^\infty L_x^2}^2 \lesssim \|(-\Delta _\Omega)^{\frac{1}{4}}u_{>N}\|_{L^\infty _tL^2_x}^{2} \lesssim_u \eta N^{1 - 2s_c}.\notag \end{align} We next estimate the error terms (\ref{E1010w2})--(\ref{E1010x12}).
To estimate $I_1$, we first note that by interpolation, (\ref{Ebound}) and (\ref{E1010x3}), \begin{equation} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N}\|^{\alpha +1}_{L_t^{2(\alpha +1)}L_x^{\frac{6(\alpha +1)}{3\alpha +1}}}\lesssim \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^\infty L_x^2}^{\alpha } \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N}\|_{L_t^2L_x^6}\lesssim \eta (1+N^{2s_c-1}|I|)^{\frac{1}{2}}.\notag \end{equation} It then follows from H\"older, Hardy's inequality, Sobolev embedding, Bernstein and (\ref{E1010x1}) that \begin{align} &I_1 \lesssim \left\| |x|^{-\frac{1}{\alpha +1}}u_{\leq N} \right\|_{L_t^{2(\alpha +1)} L_x^{\frac{6(\alpha +1)}{5}}} ^{\alpha +1}\| u_{>N} \|_{L_t^2 L_x^6} \lesssim \| (-\Delta _\Omega)^{\frac{1}{2(\alpha +1)}} u_{\leq N} \|_{L_t^{2(\alpha +1)} L_x^{\frac{6(\alpha +1)}{5}}}^{\alpha +1} \| u_{>N} \|_{L_t^2 L_x^6}\notag\\ &\lesssim \|(-\Delta _\Omega)^{\frac{3\alpha -2}{4(\alpha +1)}}u_{\le N}\|_{L_t^{2(\alpha +1)}L_x^{\frac{6(\alpha +1)}{3\alpha +1}}}^{\alpha +1} \|u_{>N}\|_{L^2_tL^6_x} \lesssim N^{1-s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N}\|^{\alpha +1}_{L_t^{2(\alpha +1)}L_x^{\frac{6(\alpha +1)}{3\alpha +1}}} \| u_{>N} \|_{L_t^2 L_x^6}\notag\\ & \lesssim_u \eta N^{1 - 2s_c} (1 + N^{2s_c - 1} |I|).\notag \end{align} For $I_2$, we divide it into two cases. If $|u_{>N}| \lesssim |u_{\leq N}|$, then $I_2$ is bounded by the term $I_1$, which we have already treated. Thus, it suffices to consider the case $|u_{\leq N}| \ll |u_{>N}|$, which can be absorbed into the left-hand side of (\ref{E1010x6}), provided we show \begin{equation} \left\| \frac{1}{|x|} |u_{>N}|^{\alpha +2} \right\|_{L_{t,x}^1} < \infty. \label{E1010w3} \end{equation} To prove (\ref{E1010w3}), we apply Hardy's inequality and Sobolev embedding to obtain \begin{align} \left\| \frac{1}{|x|} |u_{>N}|^{\alpha +2} \right\|_{L_{t,x}^1} &\lesssim \left\| |x|^{-\frac{1}{\alpha +2}} u_{>N} \right\|_{L_{t,x}^{\alpha +2}}^{\alpha +2} \lesssim \left\| (-\Delta _\Omega)^{\frac{1}{2(\alpha +1)}}u_{>N} \right\|_{L_{t,x}^{\alpha +2}}^{\alpha +2} \lesssim \left\| (-\Delta _\Omega)^{\frac{3\alpha -2}{4(\alpha +2)}}u_{>N} \right\|_{L_t^{\alpha +2} L_x^{\frac{6(\alpha +2)}{3\alpha +2}}}^{\alpha +2} .\notag \end{align} Moreover, by the Bernstein inequality and Lemma \ref{Lspace-time bound}, we have \begin{equation} \left\| (-\Delta _\Omega)^{\frac{3\alpha -2}{4(\alpha +2)}}u_{>N} \right\|_{L_t^{\alpha +2} L_x^{\frac{6(\alpha +2)}{3\alpha +2}}}^{\alpha +2} \lesssim_u N^{1-2s_c} \| (-\Delta _\Omega)^{\frac{s_c}{2}} u \|_{L_t^{\alpha +2} L_x^{\frac{6(\alpha +2)}{3\alpha +2}}}^{\alpha +2} \lesssim_u N^{1-2s_c} \left( 1 +|I| \right) < \infty.\notag \end{equation} Next, we turn to $II_1$. Since $|F(u) - F(u_{\leq N})| \lesssim |u_{>N}|\big(|u_{\leq N}|^{\alpha} + |u_{>N}|^{\alpha}\big)$ pointwise, we have \[ II_1 \lesssim \left\| \frac{1}{|x|} (u_{\leq N})^{\alpha +1} u_{>N} \right\|_{L_{t,x}^1} + \left\| \frac{1}{|x|} u_{\leq N} (u_{>N})^{\alpha +1} \right\|_{L_{t,x}^1}, \] which is exactly (\ref{E1010w2}). Thus $II_1$ has already been handled. For $II_2$, we estimate \begin{equation} II_2\lesssim \|\nabla u_{\le N}|u_{>N}|^{\alpha }u_{>N}\|_{L_{t,x}^1}+ \|\nabla u_{\le N}|u_{<N}|^{\alpha }u_{>N}\|_{L_{t,x}^1}.\notag \end{equation} For the first term, we choose $\varepsilon >0$ sufficiently small such that $s:=\frac{1}{2}-\frac{2-\varepsilon \alpha }{2\alpha }>0$.
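Such a choice of $\varepsilon$ is possible; for completeness, note that $s_c=\frac{3}{2}-\frac{2}{\alpha}>\frac{1}{2}$ forces $\alpha>2$, so that \[ s=\frac{1}{2}-\frac{2-\varepsilon \alpha }{2\alpha }=\frac{\alpha -2+\varepsilon \alpha }{2\alpha }>0 \quad\text{for every } \varepsilon >0, \qquad\text{while } s<\frac{1}{2} \text{ whenever } \varepsilon <\frac{2}{\alpha }, \] which guarantees that (\ref{E1010x1}) can be applied to $(-\Delta_\Omega)^{\frac{s}{2}}u_{>N}$ in the estimate below.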
It then follows from H\"older's inequality, Sobolev embedding, Bernstein's estimate, Theorem \ref{TEquivalence}, (\ref{E10101}) and (\ref{E1010x1}) that \begin{align} &\|\nabla u_{\le N}|u_{>N}|^{\alpha }u_{>N}\|_{L_{t,x}^1} \lesssim \|\nabla u_{\le N}\|_{L^\infty _t L_x^{\frac{3}{1+\varepsilon }}} \||u_{>N}|^{\alpha }u_{>N}\|_{L^1_tL_x^{\frac{3}{2-\varepsilon }}}\notag\\ &\lesssim N^{1-s_c+3(\frac{1}{2}-\frac{1+\varepsilon }{3})} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N}\|_{L^\infty _tL^2_x} \|u_{>N}\|_{L^\infty _tL_x^{\frac{3\alpha }{2}}} ^{\alpha -1}\|u_{>N}\|_{L^2_tL_x^{\frac{6\alpha }{2-\varepsilon \alpha }}} ^2\notag\\ &\lesssim \eta N^{1-s_c+3(\frac{1}{2}-\frac{1+\varepsilon }{3})} \|(-\Delta _\Omega)^{\frac{s}{2}}u_{>N}\|_{L^2_tL^6_x}^2 \lesssim \eta N^{1-s_c+3(\frac{1}{2}-\frac{1+\varepsilon }{3})}N^{2(s-s_c)}(1+N^{2s_c-1}|I|)\notag\\ &\lesssim \eta N^{1-2s_c}(1+N^{2s_c-1}|I|).\notag \end{align} For the second term, we use H\"older, Sobolev embedding, Bernstein, (\ref{E1010x1}) and (\ref{E1010x3}) to estimate \begin{align} &\|\nabla u_{\le N}|u_{<N}|^{\alpha }u_{>N}\|_{L_{t,x}^1}\notag\\ &\lesssim \|\nabla u_{\le N}\|_{L^\infty _tL^2_x} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{<N}\|_{L^4_tL^3_x}^\alpha \|u_{>N}\|_{L_t^{\frac{4}{4-\alpha }}L_x^{\frac{6}{\alpha -1}}}\notag\\ &\lesssim N^{1-s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\le N}\|_{L^\infty _tL^2_x}^{\frac{\alpha }{2}+1} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{<N}\|_{L^2_tL^6_x}^{\frac{\alpha }{2}} \|u_{>N}\|_{L^2_tL^6_x}^{\frac{4-\alpha }{2}} \|u_{>N}\|_{L^\infty _tL^2_x}^{\frac{\alpha }{2}-1}\notag\\ &\lesssim N^{1-s_c}\eta ^{\alpha } (1+N^{2s_c-1}|I|)^{\frac{\alpha }{4}}N^{-s_c}(1+N^{2s_c-1}|I|)^{\frac{4-\alpha }{4}}N^{-s_c(\frac{\alpha }{2}-1)}\notag\\ &\lesssim \eta N^{1-2s_c}(1+N^{2s_c-1}|I|),\notag \end{align} where noting that $\alpha <4$ when $s_c=\frac{3}{2}-\frac{2}{\alpha }$. Thus $II_2$ is acceptable. Finally, we turn to $III$. By Hardy's inequality and Theorem \ref{TEquivalence}, \begin{align} (\ref{E1010x12}) &\lesssim \|u_{>N}\|_{L_t^2 L_x^6} \left\| \frac{1}{|x|} P^{\Omega}_{\leq N}(F(u)) \right\|_{L_t^2 L_x^{6/5}} + \|u_{>N}\|_{L_t^2 L_x^6} \|\nabla P^{\Omega}_{\leq N}(F(u))\|_{L_t^2 L_x^{6/5}}\notag\\ &\lesssim \|u_{>N}\|_{L_t^2 L_x^6} \|(-\Delta _\Omega)^{\frac{1}{2}} P^{\Omega}_{\leq N}(F(u))\|_{L_t^2 L_x^{6/5}}.\notag \end{align} Thus, by (\ref{E1010x1}) it remains to prove \[ \| (-\Delta _\Omega)^{\frac{1}{2}}P^{\Omega}_{\leq N}(F(u))\|_{L_t^2 L_x^{6/5}} \lesssim_u \eta N^{1- s_c}(1 + N^{2s_c - 1}|I|)^{1/2}. \] To this end, we use H\"older's inequality, Bernstein's estimate, the fractional chain rule (\ref{E12133}), (\ref{Ebound}), (\ref{E1010x4}), and (\ref{E1010x5}) to estimate \begin{align} & \|(-\Delta _\Omega)^{\frac{1}{2}}P^{\Omega}_{\leq N}(F(u))\|_{L_t^2 L_x^{6/5}}\notag\\ &\lesssim N\|F(u) - F(u_{\leq N/\eta^2})\|_{L_t^2 L_x^{6/5}} + N^{1 - s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}F(u_{\leq N/\eta^2})\|_{L_t^2 L_x^{6/5}}\notag\\ &\lesssim N\|u\|_{L_t^\infty L_x^{3\alpha /2}}^{\alpha } \|u_{>N/\eta^2}\|_{L_t^2 L_x^6} + N^{1 - s_c} \|(-\Delta _\Omega)^{\frac{s_c}{2}}u\|_{L_t^\infty L_x^{2}}^\alpha \|(-\Delta _\Omega)^{\frac{s_c}{2}}u_{\leq N/\eta^2}\|_{L_t^2 L_x^6}\notag\\ & \lesssim_u \eta N^{1 - s_c}(1 + N^{2s_c - 1} |I|)^{1/2}. \end{align} This completes the proof of Proposition \ref{PMorawetz}. \end{proof} \subsection{Exclusion of the Minimal Counterexample}\label{s1/2-1,3} In this subsection, we rule out the almost periodic solutions to (\ref{NLS}) for the case $1/2 < s_c < 1$. 
The proof relies on the frequency-localized Lin-Strauss Morawetz inequality derived in Subsection \ref{s1/2-1,2}. \begin{theorem}\label{Ts1/2-1} There are no almost periodic solutions to (\ref{NLS}) as described in Theorem \ref{TReduction} when $1/2 < s_c < 1$. \end{theorem} \begin{proof} Assume, for contradiction, that $u$ is such a solution. Let $\eta > 0$ and let $I \subset [0, \infty)$ be a compact time interval. By Proposition \ref{PMorawetz}, for sufficiently small $N$, we have \begin{equation} \int_{I }\int_\Omega \frac{|u_{>N}(t,x)|^{\alpha + 2}}{|x|} \, dx \, dt \lesssim_u \eta(N^{1-2s_c} + |I|).\label{E1011x1} \end{equation} Next, we establish a lower bound for the left-hand side of (\ref{E1011x1}). Since the orbit $\{u(t): t \in \mathbb{R}\}$ is precompact in $\dot H^{s_c}_D(\Omega)$ and $\dot H^{s_c}_D(\Omega) \hookrightarrow L^{\frac{3\alpha}{2}}(\Omega)$, for any $\varepsilon > 0$, there exists $c(\varepsilon) > 0$ such that \begin{equation} \int_\Omega |u_{<c(\varepsilon)}(t,x)|^{\frac{3\alpha}{2}} dx < \varepsilon \quad \text{uniformly for } t \in \mathbb{R}. \notag \end{equation} Combining this with (\ref{E}), we deduce that for sufficiently small $N$, \begin{equation} \int_{\Omega \cap \{|x| \le R\}} |u_{>N}(t,x)|^{\frac{3\alpha}{2}} dx \gtrsim 1 \quad \text{uniformly for } t \in \mathbb{R}. \notag \end{equation} Using H\"older's inequality, we further obtain that for sufficiently small $N$, \begin{equation} 1 \lesssim \int_{\Omega \cap \{|x| \le R\}} |u_{>N}(t,x)|^{\frac{3\alpha}{2}} dx \lesssim \left(\int_{\Omega \cap \{|x| \le R\}} |u_{>N}(t,x)|^{\alpha + 2} dx\right)^{\frac{3\alpha}{2(\alpha + 2)}} R^{\frac{3(4-\alpha)}{2(\alpha + 2)}}. \notag \end{equation} Thus, for sufficiently small $N$, it follows that \begin{equation} \int_{I }\int_\Omega \frac{|u_{>N}(t,x)|^{\alpha + 2}}{|x|} \, dx \, dt \ge \frac{1}{R} \int_I \int_{\Omega \cap \{x: |x| < R\}} |u_{>N}(t,x)|^{\alpha + 2} dx \, dt \gtrsim R^{-\frac{4}{\alpha}} |I|.\label{E1011x2} \end{equation} Combining (\ref{E1011x1}) and (\ref{E1011x2}), and choosing $\eta = \eta(R)$ sufficiently small, we deduce that $|I| \lesssim_u N^{1-2s_c}$ uniformly in $I$. This leads to a contradiction when $|I|$ is taken to be sufficiently large. The proof of Theorem \ref{Ts1/2-1} is complete. \end{proof} \begin{remark}\label{R128} Finally, we discuss the possibility of extending Theorem \ref{T1} to the case $0 < s_c < \frac{1}{2}$. In this case, we also want to employ a frequency-localized Lin-Strauss Morawetz inequality to preclude almost periodic solutions. To achieve this, we truncate the high frequencies of the solution and work with $u_{<N}$ for some $N > 0$. Similarly to Lemma \ref{LHLC}, we first establish a long-time Strichartz estimate and then prove (see e.g. \cite[Lemma 5.7]{Murphy2015}): \begin{equation} \|(-\Delta _\Omega)^{\frac{s}{2}} u_{\le N}\|_{L_t^2 L_x^6(I \times \Omega)} \lesssim N^{s-s_c}(1 + N^{2s_c-1} |I|)^{\frac{1}{2}}, \quad \text{for } \quad s > \frac{1}{2}. \label{E128} \end{equation} Note that the condition $s > 1/2$ is essential for deriving the estimate in (\ref{E128}) using the long-time Strichartz estimate. This condition ensures the convergence of the frequency summation. However, when using the Morawetz inequality to exclude almost periodic solutions, what we actually require is a bound on $\||\nabla|^{s} u_{\le N}\|_{L_t^2 L_x^6(I \times \Omega)}$ for some $s > \frac{1}{2}$.
Notably, since $s > \frac{1}{2},p=6$ does not satisfy the index relationships in Theorem \ref{TEquivalence}, we cannot derive the required estimate from (\ref{E128}). Therefore, in the case $0 < s_c < \frac{1}{2}$, we still lack the necessary estimates to exclude almost periodic solutions. \end{remark} \begin{thebibliography}{99} \bibitem{Anton2008} R. Anton, \emph{Global existence for defocusing cubic NLS and Gross-Pitaevskii equations in three-dimensional exterior domains,} J. Math. Pures Appl. (9) \textbf{89} (4) (2008), 335-354. \bibitem{BlairSmithSogge2012} M. D. Blair, H. F. Smith, C. D. Sogge, \emph{Strichartz estimates and the nonlinear Schr\"odinger equation on manifolds with boundary,} Math. Ann. \textbf{354} (4) (2012), 1397-1430. \bibitem{Bourgain1999} J. Bourgain, \emph{Global wellposedness of defocusing critical nonlinear Schr\"odinger equation in the radial case,} J. Amer. Math. Soc. \textbf{12} (1999), 145-171. \bibitem{BrezisLieb1983} H. Br\'ezis, E. Lieb, \emph{A relation between pointwise convergence of functions and convergence of functionals,} Proc. Amer. Math. Soc. \textbf{88} (1983), no. 3, 486--490. \bibitem{BurqGerardTzvetkov2004} N. Burq, P. G\'erard, N. Tzvetkov, \emph{On nonlinear Schr\"odinger equations in exterior domains,} Ann. Inst. Henri Poincar\'e, Anal. Non Lin\'eaire \textbf{21} (3) (2004), 295-318. \bibitem{CKSTT} J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, \emph{Global existence and scattering for rough solutions of a nonlinear Schr\"odinger equation on $\mathbb{R} ^3$,} Comm. Pure Appl. Math. \textbf{57} (2004), 987--1014. \bibitem{Colliander2008} J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, \emph{Global well-posedness and scattering for the energy-critical nonlinear Schr\"odinger equation in $\mathbb{R}^3$,} Ann. of Math. \textbf{167} (2008), 767--865. \bibitem{Dodson2012} B. Dodson, \emph{Global well-posedness and scattering for the defocusing, $L^2$-critical nonlinear Schr\"odinger equation when $d\ge3$,} J. Amer. Math. Soc. \textbf{25} (2012), 429--463. \bibitem{Dodson2015} B. Dodson, \emph{Global well-posedness and scattering for the mass critical nonlinear Schr\"odinger equation with mass below the mass of the ground state,} Adv. Math. \textbf{285} (2015), 1589--1618. \bibitem{Dodson2016a} B. Dodson, \emph{Global well-posedness and scattering for the defocusing $L^2$-critical nonlinear Schr\"odinger equation when $d=2$,} Duke Math. J. \textbf{165} (2016), 3435--3516. \bibitem{Dodson2016b} B. Dodson, \emph{Global well-posedness and scattering for the defocusing $L^2$-critical nonlinear Schr\"odinger equation when $d=1$,} Am. J. Math. \textbf{138} (2016), 531--569. \bibitem{Dodson2019ASENS} B.~Dodson, Global well-posedness and scattering for the focusing, cubic Schr\"odinger equation in dimension $d=4$, \emph{Ann. Sci. Ec. Norm. Sup\'er. (4)}, \textbf{52} (2019), no. 1, 139--180. \bibitem{Dodson2017} B. Dodson, C. X. Miao, J. Murphy, J. Zheng, \emph{The defocusing quintic NLS in four space dimensions,} Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{34} (2017), 759--787. \bibitem{Dodson-Murphy} B. Dodson, J. Murphy, A new proof of scattering below the ground state for the non-radial focusing NLS. Math. Res. Lett.,{\bf 25} (2018), 1805-1825. \bibitem{DuyckaertsLandoulsiRoudenko2022JFA} T. Duyckaerts, O. Landoulsi, S. Roudenko, \emph{Threshold solutions in the focusing 3D cubic NLS equation outside a strictly convex obstacle}, Journal of Functional Analysis, \textbf{282} (2022), 109326. \bibitem{GaoMiaoYang2019} C. 
Gao, C. Miao, J. Yang, \emph{The Interctitical Defocusing Nonlinear Schr\"odinger Equations with Radial Initial Data in Dimensions Four and Higher,} Anal. Theory Appl. \textbf{35} (2019), 205--234. \bibitem{GaoZhao2019} C. Gao, Z. Zhao, \emph{On scattering for the defocusing high dimensional inter-critical NLS,} J. Differential Equations \textbf{267} (2019), 6198--6215. \bibitem{Grillakis2000} M. Grillakis, \emph{On nonlinear Schr\"odinger equations,} Comm. Part. Diff. Eqs. \textbf{25} (2000), 1827-1844. \bibitem{HassellSikora2009} A. Hassell, A. Sikora, \emph{Riesz transforms in one dimension,} Indiana Univ. Math. J. \textbf{58} (2009), no. 2, 823--852. \bibitem{Ivanovici2007} O. Ivanovici, \emph{Precised smoothing effect in the exterior of balls,} Asymptot. Anal. \textbf{53} (4) (2007), 189-208. \bibitem{Ivanovici2010a} O. Ivanovici, \emph{On the Schr\"odinger equation outside strictly convex obstacles,} Anal. PDE \textbf{3} (3) (2010), 261--293. \bibitem{IvanoviciLebeau2017} O. Ivanovici, G. Lebeau, \emph{Dispersion for the wave and the Schr\"odinger equations outside strictly convex obstacles and counterexamples,} Comptes Rendus Math. \textbf{355} (2017), 774--779. \bibitem{IvanoviciPlanchon2010} O. Ivanovici, F. Planchon, \emph{On the energy critical Schr\"odinger equation in 3D non-trapping domains,} Ann. Inst. Henri Poincar\'e, Anal. Non Lin\'eaire \textbf{27} (5) (2010), 1153--1177. \bibitem{KenigMerle2006} C. E. Kenig, F. Merle, \emph{Global well-posedness, scattering and blow-up for the energy-critical, focusing, non-linear Schr\"odinger equation in the radial case,} Invent. Math. \textbf{166} (2006), no. 3, 645-675. \bibitem{KenigMerle2010} C. E. Kenig, F. Merle, \emph{Scattering for $\dot{H}^{1/2}$ bounded solutions to the cubic, defocusing NLS in 3 dimensions,} Trans. Am. Math. Soc. \textbf{362} (2010), 1937--1962. \bibitem{Keraani2001} S. Keraani, \emph{On the defect of compactness for the Strichartz estimates of the Schr\"odinger equations,} J. Differential Equations \textbf{175} (2001), no. 2, 353--392. \bibitem{Keraani2006JFA} S. Keraani, \emph{On the blow up phenomenon of the critical nonlinear Schr \"odinger equation}, J. Funct. Anal., \textbf{235} (2006), 171-192. \bibitem{KillipTaoVisan2009} R. Killip, T. Tao, M. Visan, \emph{The cubic nonlinear Schr\"odinger equation in two dimensions with radial data,} J. Eur. Math. Soc. \textbf{11} (2009), 1203--1258. \bibitem{KillipVisan2010} R. Killip, M. Visan, \emph{Energy-supercritical NLS: Critical $\dot{H}^s$-bounds imply scattering,} Comm. Partial Differential Equations \textbf{35} (2010), 945--987. \bibitem{KillipVisan2010AJM} R. Killip, M. Visan, \emph{The focusing energy-critical nonlinear Schr\"odinger equation in dimensions five and higher,} Amer. J. Math. \textbf{132} (2010), 361--424. \bibitem{KillipVisan2011PAMS} R. Killip, M. Visan, \emph{The radial defocusing energy-supercritical nonlinear wave equation in all space dimensions,} Proc. Amer. Math. Soc. \textbf{139} (2011), 1805--1817. \bibitem{KillipVisan2012} R. Killip, M. Visan, \emph{Global well-posedness and scattering for the defocusing quintic NLS in three dimensions,} Anal. PDE \textbf{5} (2012), 855--885. \bibitem{KillipVisan2013} R. Killip, M. Visan, \emph{Nonlinear Schr\"odinger equations at critical regularity,} in: Evolution Equations, in: Clay Math. Proc., vol. 17, Amer. Math. Soc., Providence, RI, 2013, pp. 325--437. \bibitem{KillipVisanZhang2008} R. Killip, M. Visan, X. 
Zhang, \emph{The mass-critical nonlinear Schr\"odinger equation with radial data in dimensions three and higher,} Anal. PDE \textbf{1} (2008), 229--266. \bibitem{KillipVisanZhang2016a} R. Killip, M. Visan, X. Zhang, \emph{Quintic NLS in the exterior of a strictly convex obstacle,} Amer. J. Math. \textbf{138} (2016), no. 5, 1193--1346. \bibitem{KillipVisanZhang2016b} R. Killip, M. Visan, X. Zhang, \emph{The focusing cubic NLS on exterior domains in three dimensions,} Appl. Math. Res. Express. AMRX \textbf{2016}, no. 1, 146--180. \bibitem{KillipVisanZhang2016c} R. Killip, M. Visan, X. Zhang, \emph{Riesz transforms outside a convex obstacle,} Int. Math. Res. Not. IMRN \textbf{2016}, no. 19, 5875--5921. \bibitem{KeelTao1998AJM} M. Keel, T. Tao, \emph{Endpoint Strichartz estimates,} Amer. J. Math. \textbf{120} (1998), no. 5, 955--980. \bibitem{Landoulsi2021} O. Landoulsi, \emph{Construction of a solitary wave solution of the nonlinear focusing Schr\"odinger equation outside a strictly convex obstacle in the $L^2$-supercritical case,} Discrete Contin. Dyn. Syst., Ser. A \textbf{41} (2) (2021), 701--746. \bibitem{LiSmithZhang2012} D. Li, H. Smith, X. Zhang, \emph{Global well-posedness and scattering for defocusing energy-critical NLS in the exterior of balls with radial data,} Math. Res. Lett. \textbf{19} (2012), no. 1, 213--232. \bibitem{LiXuZhang2014} D. Li, G. Xu, X. Zhang, \emph{On the dispersive estimate for the Dirichlet Schr\"odinger propagator and applications to energy critical NLS,} Canad. J. Math. \textbf{66} (2014), no. 5, 1110--1142. \bibitem{LiLi2022SIAM} J. Li, K. Li, \emph{The Defocusing Energy-supercritical Nonlinear Schr\"odinger Equation in High Dimensions}, SIAM J. Math. Anal., \textbf{54} (2022), no.3, 3253-3274. \bibitem{LuZheng2017} C. Lu, J. Zheng, \emph{The radial defocusing energy-supercritical NLS in dimension four,} J. Differ. Equ. \textbf{262} (2017), 4390--4414. \bibitem{MiaoMurphyZheng2014} C. Miao, J. Murphy, J. Zheng, \emph{The defocusing energy-supercritical NLS in four space dimensions,} J. Funct. Anal. \textbf{267} (2014), 1662--1724. \bibitem{Murphy2014} J. Murphy, \emph{Inter-critical NLS: Critical $\dot{H}^s$-bounds imply scattering,} SIAM J. Math. Anal. \textbf{46} (2014), 939--997. \bibitem{Murphy2014b} J. Murphy, \emph{The defocusing $\dot{H}^{1/2}$-critical NLS in high dimensions,} Discrete Contin. Dyn. Syst. Ser. A \textbf{34} (2014), 733--748. \bibitem{Murphy2015} J. Murphy, \emph{The radial defocusing nonlinear Schr\"odinger equation in three space dimensions,} Comm. Partial Differential Equations \textbf{40} (2015), 265--308. \bibitem{PlanchonVega2009} F. Planchon, L. Vega, \emph{Bilinear virial identities and applications,} Ann. Sci. \'Ec. Norm. Sup\'er. (4) \textbf{42} (2009), 261--290. \bibitem{RyckmanVisan2007} E. Ryckman, M. Visan, \emph{Global well-posedness and scattering for the defocusing energy critical nonlinear Schr\"odinger equation in $\mathbb{R}^{1+4}$,} Am. J. Math. \textbf{129} (2007), 1--60. \bibitem{Shao2009EJDE} S. Shao, \emph{Maximizers for the Strichartz and the Sobolev-Strichartz inequalities for the Schr\"odinger equation,} Electron. J. Differential Equations \textbf{2009}, No. 3, 13 pp. \bibitem{Strichartz1967JMM} R.~S. Strichartz, \emph{Multipliers on fractional Sobolev spaces, } J. Math. Mech., \textbf{16}, (1967), 1031--1060. \bibitem{Tao2005} T. Tao, \emph{Global well-posedness and scattering for the higher-dimensional energy-critical non-linear Schr\"odinger equation for radial data,} New York J. Math. 
\textbf{11} (2005), 57--80. \bibitem{TaoVisan2005} T. Tao, M. Visan, \emph{Stability of energy-critical nonlinear Schr\"odinger equations in high dimensions,} Electron. J. Differ. Equ. \textbf{118} (2005), 1--28. \bibitem{TaoVisanZhang2008FM} T. Tao, M. Visan, X. Zhang, \emph{Minimal-mass blowup solutions of the mass-critical NLS,} Forum Math. \textbf{20} (2008), 881--919. \bibitem{TaoVisanZhang2007} T. Tao, M. Visan, X. Zhang, \emph{Global well-posedness and scattering for the mass-critical nonlinear Schr\"odinger equation for radial data in high dimensions,} Duke Math. J. \textbf{140} (2007), 165--202. \bibitem{Visan2007} M. Visan, \emph{The defocusing energy-critical nonlinear Schr\"odinger equation in higher dimensions,} Duke Math. J. \textbf{138} (2007), 281--374. \bibitem{Visan2012} M. Visan, \emph{Global well-posedness and scattering for the defocusing cubic nonlinear Schr\"odinger equation in four dimensions,} Int. Math. Res. Not. IMRN \textbf{2012} (2012), 1037--1067. \bibitem{XieFang2013} J. Xie, D. Fang, \emph{Global well-posedness and scattering for the defocusing $\dot{H}^s$-critical NLS,} Chin. Ann. Math. \textbf{34B} (2013), 801--842. \bibitem{XuZhaoZheng} C. Xu, T. Zhao, J. Zheng, \emph{Scattering for 3D cubic focusing NLS on the domain outside a convex obstacle revisited}. Acta. Math. Sin.-English Ser. {\bf38} (2022), 1054-1068. \bibitem{KYang} K. Yang, \emph{The focusing NLS on exterior domains in three dimensions.} Commun. Pure Appl. Anal. {\bf6} (2017), 2269--2297. \bibitem{Yu2021} X. Yu, \emph{Global well-posedness and scattering for the defocusing $\dot{H}^{1/2}$-critical nonlinear Schr\"odinger equation in $\mathbb{R}^2$,} Anal. PDE \textbf{14} (2021), 1037--1067. \bibitem{Zhang2003} Q. S. Zhang, \emph{The global behavior of heat kernels in exterior domains,} J. Funct. Anal. \textbf{200} (2003), no. 1, 160--176. \bibitem{Zhao2017AMS} T. F. Zhao, \emph{The Defocusing Energy-supercritical NLS in Higher Dimensions}, Acta Mathematica Sinica, English Series, \textbf{33} (2017), 911--925. \end{thebibliography} \end{document}
\documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{amsfonts} \DeclareMathSymbol{\shortminus}{\mathbin}{AMSa}{"39} \usepackage{ amssymb } \usepackage{mathpazo} \usepackage{ stmaryrd } \usepackage{amsthm} \usepackage{mathtools} \usepackage{amsmath} \newtheorem{thm}{Theorem} \newtheorem{cor}{Corollary} \newtheorem{lem}{Lemma} \newtheorem{mydef}{Definition} \newtheorem{ex}{Example} \newtheorem{ques}{Question} \newtheorem{conj}{Conjecture} \newtheorem{remark}{Remark} \newtheorem{prop}{Proposition} \title{A solution to the $p$-adic Schr\"{o}dinger equation} \author{Haonan Gu} \date{December 2022} \begin{document} \maketitle \section{Introduction} The field of $p$-adic numbers $\mathbb{Q}_p$ (with $p$ a prime number) was first introduced by the mathematician Kurt Hensel at the end of the 19th century. It provides an alternative number system to the field of real numbers $\mathbb{R}$ that singles out a prime $p$. Because of its unique properties as well as topological and algebraic structure, the field of $p$-adic numbers has wide applications in number theory, with Andrew Wiles' proof of Fermat's Last Theorem being among the most famous ones. This thesis aims to provide an introduction to the basic properties of the field of $p$-adic numbers and explore a $p$-adic analogue to the famous time-independent Schr\"{o}dinger equation. In the second section of this thesis, we start by defining the $p$-adic norm $|\cdot|_p$ on $\mathbb{Q}_p$, an alternative way of measuring distance compared to the real absolute value. Under this view of distance, two rational numbers are closer to one another if their difference is divisible by a higher power of $p$. After establishing some topological definitions and proofs following \cite{Alexa} \cite{Jack} \cite{Zuniga}, we define the $p$-adic field $\mathbb{Q}_p$ as the completion of $\mathbb{Q}$ with respect to the $p$-adic norm $|\cdot|_p$. We prove the existence and uniqueness of the $p$-adic expansion for each $p$-adic number, which is analogous to the decimal expansion of a real number in the sense that every $p$-adic number can be expressed as a power series in $p$. Finally, basic topological properties of the $p$-adic field are proved and the ring of $p$-adic integers $\mathbb{Z}_p$ is defined in Section 2.3. Using the local compactness of $\mathbb{Q}_p$, we define a notion of $p$-adic integration in Section 3 by considering a suitable Haar measure. We prove some important propositions on $p$-adic integration, as well as compute specific examples of $p$-adic integrals. In Section 4, we introduce a $p$-adic derivative operator $D^{\alpha}f(x)$ in \cite{Vladimirov}. Similar to the derivative in $\mathbb{R}$, it satisfies \begin{equation*} D^\alpha [D^\beta f(x)]=D^{\alpha+\beta} f(x) \end{equation*} for any $\alpha,\beta\in\mathbb{N}$ and complex-valued functions $f$ of a $p$-adic variable. Another important property of operator $D^{\alpha}$ is \begin{equation*} D^\alpha {\left| x\right|_p}^n=\frac{\Gamma_p(n+1)}{\Gamma_p(n-\alpha+1)} \left| x\right|_p^{n-\alpha} \end{equation*} for n$\geq$1. Using the notion of $p$-adic integration introduced in Section 3 and the derivative operator in Section 4, we finally explore our main application in Section 5. 
In 5.1, we introduce the time-independent Schr\"{o}dinger equation \begin{equation*} -\frac{h^2}{2m}\frac{d^2\Psi}{dx^2}+\frac{1}{2}m\omega^2x^2\Psi=E\Psi, \end{equation*} its ground state solution \begin{equation*} \Psi_0(x)=A_0e^{-\frac{m\omega}{2h}x^2}, \end{equation*} where $A_0$ is some constant, and the raising and lowering operators $a_+$ and $a_-$, which produce solutions to the equation with raising or lowering energy $E$. In light of the traditional Schr\"{o}dinger equation, we define a $p$-adic analogue \begin{equation*} D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x) \end{equation*} using the derivative operator $D^{2}$ introduced in Section 4. Here B and E are some complex constants. We make two attempts to find a solution to this equation with the first one more naive, and the second one successful. Inspired by \begin{equation*} \Psi_0(x)=A_0e^{-\frac{m\omega}{2h}x^2}, \end{equation*} the ground solution of the traditional Schr\"{o}dinger equation, we make a naive guess that a solution of the $p$-adic Schr\"{o}dinger equation is in the form $$\Psi_0(x)=\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n}$$ and solve for the coefficients $b_{2n}$. However, a computation reveals that this series won't converge everywhere in $\mathbb{Q}_p$. We then make a second, successful guess by writing a series that has a better chance of convergence \begin{align*} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise}. \end{cases} \end{align*} After determining the recursive relationships of the series' coefficients, we use asymptotic methods to approximate the eigenvalue E of the solution and the explicit expression of $c_n$ and $k_n$, in the limit of $p$ large. We then successfully prove our series converges everywhere in $\mathbb{Q}_p$ in the case of $B=1$. Using similar methods, we find a solution for the case $B=-1$. To summarize, basic definitions and topological properties of the field of $p$-adic numbers are introduced and solutions to the $p$-adic analog of the Schr\"{o}dinger equation are explored in light of the $p$-adic derivative operator and $p$-adic calculus. Whether there exists a $p$-adic version of the raising and lowering operators remains an open question and could be a direction for future research. The power series expression of the solution as this thesis finds could be a good inspiration for finding more solutions. \section{The field of $p$-adic numbers} \subsection{The $p$-adic norm} We start by introducing the definition of a norm on a general field $K$. We then introduce the $p$-adic norm $|\cdot|_p$ on the field of rational numbers $\mathbb{Q}$. \vspace{0.5cm} \begin{mydef} We say $|\cdot|$ is a norm on a field K if for all $x,y\in K$ it satisfies the following properties: 1. $\left| x\right| \geq0$ 2. $\left| x\right|=0$ if and only if $x=0$ 3. $\left| xy\right|=\left| x\right|\left| y\right|$ 4. $\left| x+y\right|\leq\left| x\right|+\left|y\right|$ \end{mydef} \vspace{0.5cm} \begin{thm} Let $p$ be a prime number, and $x\in\mathbb{Q}\setminus \{0\}$. Then there exists a unique $\gamma\in\mathbb{Z}$ such that x=p$^\gamma$$\frac{m}{n}$, where m,n$\in\mathbb{Z}$ are not divisible by $p$. \end{thm} \begin{proof} Fix $x\in\mathbb{Q}\setminus \{0 \}$ and a prime number $p$. Then, $x=\frac{r}{q}$ for some co-prime $r,q\in\mathbb{Z}$. If $x=1$, then $\gamma=0$ and $m=n=1$ obviously satisfy the condition. 
We need to verify that $\gamma=0$ is the unique choice. We argue by contradiction, assuming $\gamma\neq0$. Suppose $\gamma>0$. Then $n=p^\gamma m$, so $n$ is divisible by $p$. This contradicts the assumption that $n$ is not divisible by $p$. Suppose instead $\gamma<0$. Then $m=p^{\shortminus\gamma}n$, so $m$ is divisible by $p$. This contradicts the assumption that $m$ is not divisible by $p$. Thus, for $x=1$, $\gamma=0$ exists and is unique. \vspace{0.3cm} If $q=1$ and $r\neq1$, we must have $n=1$ and $x=p^\gamma m$. By the Fundamental Theorem of Arithmetic, $\left| x\right|=p_1^{a_1} \dotsb p_k^{a_k}$ for unique prime numbers $p_1,\dotsb, p_k$ and unique positive integers $a_1, \dotsb, a_k$. If $x$ is co-prime to $p$, then $\gamma$ can only equal $0$. If $x$ is not co-prime to $p$, or equivalently, $p=p_j$ for some $j \in \{1,\dotsb,k\}$, then $\gamma$ can only equal $a_j$. Indeed, $\gamma>a_j$ contradicts the Fundamental Theorem of Arithmetic, and $\gamma<a_j$ forces $m$ to be divisible by $p$, contradicting our assumption. In this case, we find that $\gamma$ exists and is unique. \vspace{0.3cm} If $q\neq1$ and $r=1$, we apply the Fundamental Theorem of Arithmetic to $q$ and obtain $$\left| x\right|=\frac{1}{\left| q\right|}=p_1^{a_1}\dotsb p_k^{a_k}$$ for unique prime numbers $p_1, \dotsb, p_k$ and negative integers $a_1, \dotsb, a_k$. If $q$ is co-prime to $p$, then $\gamma=0$. If $p=p_j$ for some $j$ in $\{1,\dotsb,k\}$, then $\gamma=a_j$. We find that $\gamma$ exists and is unique. \vspace{0.3cm} If $q\neq1$ and $r\neq1$, we have \begin{equation*} \left| r\right|=p_1^{a_1}\dotsb p_k^{a_k} \text{ and } \left| q\right|=p_{k+1}^{a_{k+1}}\dotsb p_{k+l}^{a_{k+l}} \end{equation*} for unique primes $p_1, \dotsb, p_{k+l}$ (which are pairwise distinct, since $r$ and $q$ are co-prime) and unique positive integers $a_1, \dotsb, a_{k+l}$. Thus, \begin{equation*} \left| x\right|=p_1^{a_1}\dotsb p_k^{a_k}p_{k+1}^{-a_{k+1}}\dotsb p_{k+l}^{-a_{k+l}}. \end{equation*} If $p\neq p_j$ for every $j$ in $\{1,\dotsb,k+l\}$, then $\gamma=0$. If $p=p_j$ for some $j$ in $\{1,\dotsb,k\}$, then $\gamma=a_j$; if $p=p_j$ for some $j$ in $\{k+1,\dotsb,k+l\}$, then $\gamma={-a_j}$. Thus, $\gamma$ exists. The uniqueness can be proved by the same reasoning as before, i.e., any other choice of $\gamma$ forces $m$ or $n$ to be divisible by $p$. \end{proof} \vspace{0.3cm} Theorem 1 guarantees that we can define a map on the field $\mathbb{Q}$ as follows. \begin{mydef} Define the map $|\cdot|_p: \mathbb{Q}\rightarrow\mathbb{Q}$ as \begin{equation*} |x|_p = \begin{cases} 0 &\text{ if } x=0 \\ p^{\shortminus\gamma} & \text{ otherwise} \end{cases}, \end{equation*} where $\gamma$ is the integer determined by Theorem 1. \end{mydef} \begin{ex} \begin{itemize} \item[(i)] $\left| 114514\right|_2=\left| 2\times57257\right|_2=2^{\shortminus1}$ \item[(ii)] $\left| \frac{1919}{810}\right|_5=\left| \frac{1919}{5\times162}\right|_5=5^1$. \end{itemize} \end{ex} We want to verify that $\left| \cdot \right|_p$ is a norm on $\mathbb{Q}$. Since $\left| \cdot \right|_p$ obviously satisfies conditions 1 and 2 in Definition 1, we check the third and fourth conditions of Definition 1 below. \begin{thm} Let $p$ be a prime. The map $|\cdot|_p$ is a norm on $\mathbb{Q}$ (called the $p$-adic norm). Moreover, we have that \begin{equation*} |x+y|_p \leq \max(|x|_p, |y|_p), \end{equation*} for all $x,y \in \mathbb{Q}$, and if $|x|_p \neq |y|_p$ we have \begin{equation*} |x+y|_p = \max(|x|_p, |y|_p).
\end{equation*} \end{thm} \begin{proof} When either $x=0$ or $y=0$, it is easy to observe that $$\left| xy\right|_p=\left| x\right|_p\left| y\right|_p=0.$$ Thus, fix $x, y\in\mathbb{Q}\setminus \left\{ 0 \right\}$. Since $x=p^{\gamma_1}\frac{m}{n}$ and $y=p^{\gamma_2}\frac{a}{b}$, for some $m,n,a,b\in\mathbb{Z}$ not divisible by $p$, and unique $\gamma_1$, $\gamma_2\in\mathbb{Z}$ we have $$\left| x\right|_p\left| y\right|_p=p^{\shortminus(\gamma_1+\gamma_2)}.$$ Since $xy=p^{(\gamma_1+\gamma_2)}\frac{ma}{nb}$, and $ma, nb$ are not divisible by $p$, we have $$\left| xy\right|_p=p^{\shortminus(\gamma_1+\gamma_2)}.$$ Thus, we have shown $\left| xy\right|_p=\left| x\right|_p\left| y\right|_p$. \vspace{0.5cm} We now check the fourth condition. When either $x=0$ or $y=0$, it is easy to see $\left| x+y\right|_p=\left| x\right|_p+\left| y\right|_p$. \vspace{0.3cm} Fix $x,y\in\mathbb{Q}\setminus\left\{ 0 \right\}$. We write $x=p^{\gamma_1}\frac{m}{n}$ and $y=p^{\gamma_2}\frac{a}{b}$ for some integers $m,n,a,b$ not divisible by $p$, and unique $\gamma_1$, $\gamma_2\in\mathbb{Z}$. When $\gamma_1\neq\gamma_2$, we assume $\gamma_1<\gamma_2$ without loss of generality. Then, \begin{equation} x+y=p^{\gamma_1}\frac{m}{n}+p^{\gamma_2}\frac{a}{b}=p^{\gamma_1}(\frac{m}{n}+p^{\gamma_2-\gamma_1}\frac{a}{b})=p^{\gamma_1}\frac{mb+p^{\gamma_2-\gamma_1}an}{nb}. \end{equation} Certainly, $mb+anp^{\gamma_2-\gamma_1}$ and $nb$ are not divisible by $p$. Thus, $$\left| x+y\right|_p=p^{\shortminus\gamma_1}.$$ Since $\left| x\right|_p+\left| y\right|_p=p^{\shortminus\gamma_1}+p^{\shortminus\gamma_2}$ and $p^{\shortminus\gamma_1}$, $p^{\shortminus\gamma_2}>0$, we have $$\left| x+y\right|_p<\left| x\right|_p+\left| y\right|_p.$$ In fact, we see $\left| x+y\right|_p=max(\left| x\right|_p,\left| y\right|_p)$ whenever $\left| x\right|_p\neq\left| y\right|_p$. When $\gamma_1=\gamma_2$, \begin{equation} x+y=p^{\gamma_1}\frac{m}{n}+p^{\gamma_1}\frac{a}{b}=p^{\gamma_1}\frac{mb+an}{nb}. \end{equation} Since $nb$ is not divisible by $p$, and $mb+na$ is possibly divisible by $p$, then $\gamma$ for $(x+y)$ is greater than or equal to $\gamma_1$. Thus, \begin{equation} \left| x+y\right|_p=p^{\shortminus\gamma}\leq{p^{\shortminus\gamma_1}}=\left| x\right|_p<{\left| x\right|_p+\left| y\right|_p}. \end{equation} This verifies that the function $\left| \cdot \right|_p$ on $\mathbb{Q}$ is a norm. \end{proof} \begin{mydef} Two norms $|\cdot|_a$, $|\cdot|_b$ on a field $\mathbb{F}$ are equivalent if there exists $\epsilon>0$ such that $|\cdot|_a^\epsilon=|\cdot|_b$. \end{mydef} \begin{thm} (Ostrowski's Theorem) Every nonzero absolute value on $\mathbb{Q}$ is equivalent to either the standard absolute value or one of the $p$-adic norms. \end{thm} \begin{remark} The proof of Theorem 3 can be found on page 3 of \cite{Vladimirov}. This theorem tells us the significance of the $p$-adic norm in understanding the topology of the field of rational numbers. \end{remark} \subsection{The field of $p$-adic numbers} In this section, we introduce a notion of convergence with respect to the $p$-adic norm, Cauchy sequences, and the completion of $\mathbb{Q}$ with respect to $|\cdot|_p$. We then study properties of the field of $p$-adic numbers $\mathbb{Q}_p$, for example, the $p$-adic expansion of an element in $\mathbb{Q}_p$. \begin{mydef} Fix a normed field $(K, \left| \cdot \right|)$. 
A sequence $(x_n)$ in $K$ is Cauchy with respect to the given norm if for every $\epsilon>0$, there exists some $N\in\mathbb{N}$ such that for all $m,n\geq N$, $$\left| x_n-x_m\right|<\epsilon.$$ \end{mydef} \begin{mydef} Fix a normed field $(K, \left| \cdot \right|)$. A sequence $(x_n)$ in $K$ is convergent with respect to the given norm if there exists $b\in K$ such that for every $\epsilon>0$, there exists some $N\in\mathbb{N}$ with $$|x_n-b|<\epsilon$$ for all $n\geq N$. We call $b$ the limit of the sequence. \end{mydef} \begin{mydef} Fix a normed field $(K, \left| \cdot \right|)$. We say $K$ is complete if every Cauchy sequence has a limit in $K$. \end{mydef} \begin{thm} Fix a normed field $(K, \left| \cdot \right|)$. Then there exists a unique complete normed field $(K', \left| \cdot \right|')$ up to isomorphism that extends $K$. Here $\left| \cdot \right|$' restricts to $\left| \cdot\right|$. Moreover, $K$ is dense in $K'$. \end{thm} \begin{proof} See Theorem 3.4 on p5 of \cite{Alexa} \end{proof} By the uniqueness of the completion of a normed field in Theorem 4, we have the following definition. \begin{mydef} We denote by $\mathbb{Q}_p$ the completion of $\mathbb{Q}$ with respect to the $p$-adic norm $\left| \cdot \right|_p$. We call $\mathbb{Q}_p$ the field of $p$-adic numbers. \end{mydef} \begin{thm}[Convergence implies Cauchy] Any convergent sequence in $\mathbb{Q}_p$ is Cauchy. \end{thm} \begin{proof} Fix $(a_k)_{k\in\mathbb{N}}$ a convergent sequence in $\mathbb{Q}_p$. Then, there exists $a\in\mathbb{Q}_p$ such that for every $\epsilon>0$, there exists $N\in\mathbb{N}$ such that for any $k\geq N$, $$\left| a_k-a\right|_p<\epsilon.$$ Fix $\epsilon>0$ so that the corresponding $N$ is fixed. Fix $n,m \geq N$. Then, $\left| a_n-a\right|_p<\epsilon$ and $\left| a_m-a\right|_p<\epsilon$. We find $$\left| a_n-a_m\right|_p=\left| a_n-a+a-a_m\right|_p\leq max(\left| a_n-a\right|_p, \left| a_m-a\right|_p)<\epsilon.$$ \end{proof} \begin{thm} Let $(a_k)_{k\in\mathbb{N}}$ be a sequence in $\mathbb{Q}_p$. Then $(a_k)$ is Cauchy if and only if for every $\epsilon>0$, there exists some $N\in\mathbb{N}$ such that for all $k\geq N$, $$\left| a_{k+1}-a_{k}\right|_p<\epsilon.$$ \end{thm} \begin{proof} The proof can be found on page 6 of \cite{Alexa}. \end{proof} \begin{prop} For any sequence $(a_k)_{k\in\mathbb{N}\cup \{0\}}$ in $\mathbb{Q}_p$, $\sum_{k=0}^\infty a_k$ converges if and only if $\lim_{k\shortrightarrow\infty}a_k=0$. \begin{proof} Fix $(a_k)_{k\in\mathbb{N}}$ a sequence in $\mathbb{Q}_p$, and $(s_k)$ the sequence of partial sums, i.e. $s_k=a_0 + \dotsb + a_{k-1}$. ($\Rightarrow$) Assume $\sum_{k=0}^\infty a_k$ converges. Then by Theorem 5, the sequence of the partial sums $(s_k)$ is Cauchy. Fix $\epsilon>0$. By Theorem 6, there exists some $N\in\mathbb{N}$ such that for all $k\geq N$, $|s_{k+2}-s_{k+1}|_p=\left| a_{k+1} \right|_p<\epsilon$. Thus, for all $k\geq N+1$, we have $\left| a_k \right|_p<\epsilon$, which is equivalent to $lim_{k\shortrightarrow\infty}a_k=0$. ($\Leftarrow$) Assume $\lim_{k\shortrightarrow\infty}a_k=0$. Fix $\epsilon>0$. Then there exists some $N\in\mathbb{N}$ such that for all $k\geq N$, $\left| a_k \right|_p<\epsilon$. Thus, $\left| s_{k+1}-s_k \right|_p<\epsilon$ for all $k\geq N$. By Theorem 6, $(s_k)$ is Cauchy and therefore converges by the completeness of $\mathbb{Q}_p$. Thus, $\sum_{k=0}^\infty a_k$ converges. \end{proof} \end{prop} \begin{prop} Fix $p$ as a prime number. 
Any series of the form $$\sum_{k=0}^\infty x_k p^{k+\gamma},$$ with $\gamma\in\mathbb{Z}$, $x_k \in \mathbb{Z}$ such that $x_0>0$ and $0\leq{x_k}\leq{p-1}$, and $k\in\mathbb{N}\cup \{0\}$, converges in $\mathbb{Q}_p$. \end{prop} \begin{proof} We claim that $\lim_{k\rightarrow\infty}x_kp^k=0$ with respect to the $p$-adic norm whenever $0\leq{x_k}\leq{p-1}$ for all $k\in\mathbb{N}\cup \{0\}$. Let $\epsilon>0$. There exists $N\in\mathbb{N}$ such that $\frac{1}{N}<\epsilon$. Fix $k\geq N$. We have $$\frac{1}{p^k}\leq\frac{1}{p^N}<\frac{1}{N}<\epsilon.$$ Thus, $\left| x_kp^k\right|_p\leq p^{\shortminus k}<\epsilon$. Consequently, $\lim_{k\rightarrow\infty} x_kp^k=0$ with respect to the $p$-adic norm. Fix a series of the form $\sum_{k=0}^\infty x_kp^{k+\gamma}$ with $\gamma\in\mathbb{Z}$, $x_k$ integers such that $0\leq{x_k}\leq{p-1}$, $x_0>0$, and $k\in\mathbb{N}\cup\{0\}$. By Proposition 1 and the fact that $\lim_{k\rightarrow\infty}x_kp^{k+\gamma}=0$, we find that $\sum_{k=0}^\infty x_kp^{k+\gamma}$ converges in $\mathbb{Q}_p$. \end{proof} \begin{remark} Using Proposition 1, we proved in Proposition 2 above that every series of this form converges in $\mathbb{Q}_p$. Hence every such expansion defines a $p$-adic number; however, it is not yet clear that, conversely, every $p$-adic number arises from such an expansion. This is why we need Proposition 3 before defining the $p$-adic expansion of an arbitrary $x\in\mathbb{Q}_p$. \end{remark} \begin{prop} Every $p$-adic number has a unique $p$-adic expansion. \end{prop} \begin{proof} Fix $x \in\mathbb{Q}_p\setminus \{0\}$. We multiply $x$ by $p^m$ for some $m\in\mathbb{Z}$ such that $$|x_1|_p=|xp^m|_p\leq1.$$ We claim that for every $N\in\mathbb{N}\cup\{0\}$ there exist integers $0\leq a_k\leq p-1$, $k=0, \dotsb, N$, such that $$\Big|x_1-\sum_{k=0}^N a_kp^k\Big|_p< p^{-N}.$$ Since $p^{-N}$ converges to $0$, proving this claim is sufficient to prove the existence of the $p$-adic expansion, as the partial sums $\sum_{k=0}^N a_kp^k$ then converge to $x_1$. We first prove the base case: there exists an integer $0\leq a_0 \leq p-1$ such that $|x_1-a_0|_p<1$. Suppose $|x_1|_p<1$. Then $a_0=0$ satisfies the condition, and it is the only choice: if $a_0\neq0$, then $|a_0|_p=1\neq|x_1|_p$, so $|x_1-a_0|_p=\max(|x_1|_p, |a_0|_p)=1$, which is not less than $1$. Now suppose $|x_1|_p=1$. Since $\mathbb{Q}$ is dense in $\mathbb{Q}_p$, we may pick a rational number $q$ that satisfies $|x_1-q|_p<1$, and write $q=\frac{d}{e}$, where $d,e$ are co-prime integers. Since $|x_1-q|_p<1$ and $|x_1|_p=1$, we have $|q|_p=1$. Thus, both $d$ and $e$ are co-prime to $p$. By B\'ezout's identity, there exist integers $s,t$ such that $se+tp=1$. Since $d=qe$ and $se=1-tp$, we have $sd=q-tpq$, and hence $$|x_1-sd|_p=|x_1-q+tpq|_p\leq \max(|x_1-q|_p, |tpq|_p)<1.$$ Let $l$ be the unique integer such that $sd=lp+r$, where $r\in \{0,\dotsb, p-1\}$. Then $$|x_1-r|_p=|x_1-sd+lp|_p\leq \max(|x_1-sd|_p, |lp|_p)<1.$$ Thus, $r$ is an $a_0$ that satisfies the base case. For the inductive step, we assume $\big|x_1-\sum_{k=0}^N a_kp^k\big|_p<p^{-N}$, and need to prove that there exists an integer $0\leq a_{N+1}\leq p-1$ such that $$\Big|x_1-\sum_{k=0}^{N+1} a_kp^k\Big|_p<p^{-(N+1)}.$$ Since \begin{equation*} \Big|x_1-\sum_{k=0}^{N+1}a_kp^k\Big|_p=\Big|x_1-\sum_{k=0}^N a_kp^k-a_{N+1}p^{N+1}\Big|_p=p^{-(N+1)}\Big|\Big(x_1-\sum_{k=0}^N a_kp^k\Big)p^{-N-1}-a_{N+1}\Big|_p, \end{equation*} the desired inequality is equivalent to \begin{equation*} \Big|\Big(x_1-\sum_{k=0}^N a_kp^k\Big)p^{-N-1}-a_{N+1}\Big|_p<1. \end{equation*} The existence of such an $a_{N+1}$ follows by applying the base case argument to $\big(x_1-\sum_{k=0}^N a_kp^k\big)p^{-N-1}$, since \begin{equation*} \Big|\Big(x_1-\sum_{k=0}^N a_kp^k\Big)p^{-N-1}\Big|_p\leq1 \end{equation*} by the inductive hypothesis (the $p$-adic norm only takes values that are integer powers of $p$ or $0$, so a norm smaller than $p^{-N}$ is at most $p^{-N-1}$). Thus, by induction, we conclude that every $p$-adic number has a $p$-adic expansion. \vspace{0.3cm} We now claim that such a $p$-adic expansion is unique.
Fix a $p$-adic number $q$, and suppose there exist two $p$-adic expansions of $q$, i.e. \begin{equation*} q=\sum_{k=0}^{\infty}x_kp^{k+\gamma_1}=\sum_{k=0}^{\infty}y_kp^{k+\gamma_2} \end{equation*} where $x_k,y_k\in\mathbb{Z}$, $0\leq x_k, y_k\leq p-1$ for all $k\in\mathbb{N}\cup\{0\}$, $x_0,y_0>0$, and $\gamma_1, \gamma_2\in\mathbb{Z}$. Without loss of generality, let $\gamma_1\geq\gamma_2$ and set $\delta:=\gamma_1-\gamma_2\geq0$. Subtracting the two expansions and re-indexing the first series, we obtain \begin{equation*} 0=\sum_{k=0}^{\infty} x_kp^{k+\gamma_1}-\sum_{k=0}^{\infty}y_kp^{k+\gamma_2}=-\sum_{k=0}^{\delta-1}y_kp^{k+\gamma_2}+\sum_{k=\delta}^{\infty} (x_{k-\delta}-y_k)p^{k+\gamma_2}, \end{equation*} where the first sum on the right-hand side is empty when $\delta=0$. Let $\epsilon>0$. Since the series on the right-hand side converges to $0$, there exists $N\in\mathbb{N}$ such that whenever $n\geq N$, we have \begin{equation*} \Big|-\sum_{k=0}^{\delta-1}y_kp^{k+\gamma_2}+\sum_{k=\delta}^{n} (x_{k-\delta}-y_k)p^{k+\gamma_2}\Big|_p<\epsilon. \end{equation*} Suppose, for contradiction, that not all of the coefficients vanish, and let $k''$ be the smallest index at which a nonzero coefficient occurs (either $y_{k''}$ with $k''\leq\delta-1$, or $x_{k''-\delta}-y_{k''}$ with $k''\geq\delta$). For any $n\geq k''$, \begin{equation*} \Big|-\sum_{k=0}^{\delta-1}y_kp^{k+\gamma_2}+\sum_{k=\delta}^{n} (x_{k-\delta}-y_k)p^{k+\gamma_2}\Big|_p=p^{-(k''+\gamma_2)}, \end{equation*} since every nonzero coefficient is co-prime to $p$, the nonzero terms therefore have pairwise different $p$-adic norms, and the largest of these norms is $p^{-(k''+\gamma_2)}$. Choosing $\epsilon=\frac{1}{2}p^{-(k''+\gamma_2)}$ and $n\geq\max(N,k'')$ then gives $p^{-(k''+\gamma_2)}<\epsilon$, contradicting the convergence of the partial sums to $0$ in the $p$-adic sense. Thus, all coefficients are $0$: $y_k=0$ for $0\leq k\leq\delta-1$ and $x_{k-\delta}=y_k$ for all $k\geq\delta$. Since $y_0>0$, the first condition forces $\delta=0$, that is, $\gamma_1=\gamma_2$, and then $x_k=y_k$ for every $k$. Hence the $p$-adic expansion of any $p$-adic number exists and is unique. \end{proof} \begin{mydef} Fix $p$ as a prime number. The $p$-adic expansion of a number $x$ in $\mathbb{Q}_p\setminus\{0\}$ is a series of the form $$x=\sum_{k=0}^\infty x_kp^{k+\gamma}$$ with $\gamma\in\mathbb{Z}$, $x_k$ integers such that $0\leq{x_k}\leq{p-1}$, $x_0>0$, and $k\in\mathbb{N}\cup\{0\}$. \end{mydef} \begin{prop} Let $x$ be a $p$-adic number with $p$-adic expansion \begin{equation*} x=\sum_{k=0}^\infty x_kp^{k+\gamma} \end{equation*} with $\gamma\in\mathbb{Z}$, $x_k$ integers such that $0\leq{x_k}\leq{p-1}$, $x_0>0$, $k\in\mathbb{N}\cup\{0\}$. Then, $|x|_p=p^{-\gamma}$. \end{prop} \begin{proof} Fix $N\in\mathbb{N}$. Since $x_0>0$ and the nonzero terms have pairwise different $p$-adic norms, \begin{equation*} \Big|\sum_{k=0}^{N}x_kp^{k+\gamma}\Big|_p=\max(|x_0p^\gamma|_p,\dotsb,|x_Np^{N+\gamma}|_p)=p^{-\gamma}. \end{equation*} Moreover, $\big|x-\sum_{k=0}^{N}x_kp^{k+\gamma}\big|_p\leq p^{-(N+1+\gamma)}<p^{-\gamma}$, so $|x|_p=p^{-\gamma}$. \end{proof} \begin{ex} We try to find the first three terms of the $5$-adic expansion of $\frac{4}{3}$ as an example of how the $p$-adic expansion works. Note that $|\frac{4}{3}|_5=1$, thus $\gamma=0$. We use the proof of the existence of the $p$-adic expansion in Proposition 3 to find the terms of the expansion. We pick a rational number $q=\frac{-1}{3}$ so that $|\frac{4}{3}-\frac{-1}{3}|_5<1$. Then, since the denominator $3$ is co-prime to $5$, there exist integers $s, t$ such that $3s+5t=1$. In fact, $s=2$, $t=-1$ is one solution. Then, $sd=2\cdot(-1)=-2$. The remainder of $-2$ divided by $5$ is $3$, which is the coefficient of the first term of the expansion. We calculate $|\frac{4}{3}-3|_5=|\frac{-5}{3}|_5=5^{-1}$ to find the second term.
We multiply $\frac{-5}{3}$ by $5^{-1}$ and get $\frac{-1}{3}$ to reduce the $5$-adic norm to $1$. Again, we pick a rational number $\frac{-2}{1}$ so that $|-\frac{1}{3}-(-\frac{2}{1})|_5=5^{-1}<1$. Then, there exist integers $s',t'$ such that $s'+5t'=1$. In fact, $s'=6$, $t'=-1$ is a solution. Thus, $s'(-2)=-12$. We find that $3$, the remainder of $-12$ divided by $5$, is the coefficient of the second term of the expansion. Applying the same method to $\frac{4}{3}-3-5\cdot3$, we obtain $1$ as the coefficient of the third term of the expansion. Thus, $$\frac{4}{3}=5^0(3+3\cdot 5+1 \cdot 5^2+\dotsb)$$ in the $5$-adic sense. \end{ex} \subsection{Topology on $\mathbb{Q}_p$} This subsection will cover basic concepts of topology, the ring of $p$-adic integers $\mathbb{Z}_p$, and some important topological properties of $\mathbb{Q}_p$, such as the fact that $\mathbb{Q}_p$ is locally compact. \begin{mydef} Fix a normed field $(\mathbb{F}, |\cdot|)$. \begin{itemize} \item[(i)] An open ball of radius $r>0$ centered at a point $x\in\mathbb{F}$ is the set $B_r(x)=\left\{ y\in\mathbb{F}: \left| y-x\right|<r \right\}$. \item[(ii)] A closed ball of radius $r>0$ centered at a point $x\in\mathbb{F}$ is the set $B_r(x)=\left\{ y\in\mathbb{F}: \left| y-x\right|\leq r \right\}$. \item[(iii)] Let $x\in\mathbb{F}$ and $A\subseteq\mathbb{F}$. Then $x$ is a limit point of $A$ if, for every $\epsilon>0$, $B_\epsilon(x)\cap A$ contains a point other than $x$. \item[(iv)] A set $C\subseteq\mathbb{F}$ is called closed if it contains all of its limit points. \item[(v)] A set $O\subseteq \mathbb{F}$ is called open if for every $x\in O$, there exists $\epsilon>0$ such that $B_\epsilon(x)\subseteq O$. \item[(vi)] A set $A\subseteq\mathbb{F}$ is called bounded if there exists $M>0$ such that $|x|<M$ for any $x\in A$. \end{itemize} \end{mydef} \begin{prop} Fix a normed field $(\mathbb{F}, |\cdot|)$. Every closed ball in $\mathbb{F}$ is closed. \end{prop} \begin{proof} Fix a closed ball $B_r(x)=\{ y\in\mathbb{F}: \left| y-x\right|\leq r \}$ and let $z$ be a limit point of $B_r(x)$. Fix $\epsilon>0$. Then $B_\epsilon(z)\cap B_r(x)\neq\varnothing$, so we may pick $y\in B_\epsilon (z)\cap B_r(x)$. Then, \begin{equation*} |z-x|=|z-y+y-x|\leq|z-y|+|y-x|\leq \epsilon+r. \end{equation*} Letting $\epsilon\rightarrow 0$, we obtain $|z-x|\leq r$. It follows that $z\in B_r(x)$ by definition. Therefore, $B_r(x)$ is closed. \end{proof} \begin{prop} Fix a normed field $(\mathbb{F}, |\cdot|)$. Every open ball in $\mathbb{F}$ is open. \end{prop} \begin{proof} Fix \begin{equation*} B_r(x)=\{ y\in\mathbb{F}: \left| y-x\right|< r \} \end{equation*} an open ball in $\mathbb{F}$. Pick $y\in B_r(x)$ and set $r':=|y-x|<r$. Let $\epsilon=\frac{r-r'}{2}$ and pick $y'\in B_\epsilon(y)$. Then \begin{equation*} |y'-x|=|y'-y+y-x|\leq|y'-y|+|y-x| <\frac{r-r'}{2}+r'=\frac{r+r'}{2}<r. \end{equation*} Thus, $y'\in B_r(x)$. We conclude that $B_\epsilon(y)\subseteq B_r(x)$. By definition, it follows that $B_r(x)$ is an open set. \end{proof} \begin{thm} Any open ball in $\mathbb{Q}_p$ is a closed ball and vice versa. \end{thm} \begin{proof} Fix $B_r(x)$ as an open ball in $\mathbb{Q}_p$, where $r>0$. \begin{equation*} B_r(x)= \left\{ y\in\mathbb{Q}_p:\left| y-x\right|<r\right\}. \end{equation*} There exists a largest integer $k$ such that $r\leq p^{-k}$; equivalently, $p^{-k-1}<r\leq p^{-k}$. Then, $$B_r(x)=\{ y\in\mathbb{Q}_p: \left| y-x\right|<p^{-k} \}=\{ y\in\mathbb{Q}_p: \left| y-x\right|\leq p^{-k-1} \}.$$ Thus, $B_r(x)$ is a closed ball by definition.
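To illustrate the identification just used with a concrete instance (the general case is entirely analogous), take $x=0$ and $r=1$: since the $p$-adic norm only takes the values $p^{k}$, $k\in\mathbb{Z}$, together with $0$, we have \begin{equation*} B_1(0)=\{ y\in\mathbb{Q}_p: \left| y\right|_p<1 \}=\{ y\in\mathbb{Q}_p: \left| y\right|_p\leq p^{-1} \}, \end{equation*} so the open unit ball is also a closed ball.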
A similar argument proves that a closed ball in $\mathbb{Q}_p$ is an open ball. \end{proof} \begin{mydef} A $p$-adic number $x$ which satisfies $\left| x\right|_p\leq1$ is called a $p$-adic integer. It can be easily verified that the set of $p$-adic integers forms a ring. We denote the ring of $p$-adic integers by $\mathbb{Z}_p$. \end{mydef} \begin{prop} All integers are $p$-adic integers. \end{prop} \begin{proof} Fix $p$ as a prime number. Let $x\in\mathbb{Z}$. If $x=0$, then $|x|_p=0\leq1$. If $x$ is co-prime to $p$, we have $| x|_p = p^0 =1$. If $x$ is not co-prime to $p$, the exponent $\gamma$ in Theorem 1 satisfies $\gamma>0$, and $| x|_p=p^{-\gamma}<1$. In all cases $|x|_p\leq1$, so $x\in\mathbb{Z}_p$. \end{proof} \begin{mydef} We define $p\mathbb{Z}_p$ to be the set of $p$-adic numbers that satisfy $$\left| x\right|_p\leq p^{\shortminus1},$$ and $p^2\mathbb{Z}_p$ to be the set of $p$-adic numbers that satisfy $$\left| x\right|_p\leq p^{\shortminus2}.$$ More generally, we define $p^n\mathbb{Z}_p$ to be the set of elements $x\in\mathbb{Q}_p$ such that $$\left| x\right|_p\leq p^{\shortminus{n}}$$ for any $n\in\mathbb{N}$. \end{mydef} \begin{remark} It is easy to check that $$\mathbb{Z}_p\supset p\mathbb{Z}_p \supset p^2\mathbb{Z}_p\supset \dotsb \supset p^n\mathbb{Z}_p\supset \dotsb .$$ Note that $\left| x\right|_p=1$ if $x\in\mathbb{Z}_p\setminus{p\mathbb{Z}_p}$, and $\left| x\right|_p=p^{\shortminus1}$ if $x\in p\mathbb{Z}_p\setminus{p^2\mathbb{Z}_p}$. More generally, $\left| x\right|_p=p^{\shortminus n}$ if $x\in p^n\mathbb{Z}_p\setminus{p^{n+1}\mathbb{Z}_p}$ for some $n \in\mathbb{N}$. \end{remark} \begin{mydef} Fix a normed field $(\mathbb{F}, |\cdot|)$. A subset $X\subseteq\mathbb{F}$ is called compact if every open cover of $X$ has a finite subcover. That is, if $X\subseteq\bigcup_{U\in K} U$, with $K$ a collection of open sets, then there exists a finite subcollection $N\subseteq K$ such that $X\subseteq\bigcup_{U\in N} U$. \end{mydef} \begin{mydef} Fix a normed field $(\mathbb{F}, |\cdot|)$. A subset $X\subseteq\mathbb{F}$ is called locally compact if each $x\in X$ has a compact neighborhood. That is, there exist an open set $Y$ and a compact set $Z$ such that $x\in Y\subseteq Z$. \end{mydef} \begin{thm} ($p$-adic Heine-Borel Theorem) A subset $X\subseteq\mathbb{Q}_p$ is compact if it is closed and bounded. \end{thm} \begin{proof} See Corollary 4 on page 8 of \cite{Jack}. \end{proof} \begin{cor} The field $\mathbb{Q}_p$ is locally compact. \end{cor} \begin{proof} Let $x\in\mathbb{Q}_p$. Consider $Z=\{ x+y: y\in\mathbb{Z}_p\}$. We claim $Z$ is a closed ball centered at $x$. We have \begin{equation*} \begin{split} &Z=\{ x+y\in\mathbb{Q}_p: |y|_p\leq1\}=\{ x+y\in\mathbb{Q}_p: |x+y-x|_p\leq1\} \\ &=\{ z\in\mathbb{Q}_p: |z-x|_p\leq1\}. \end{split} \end{equation*} Thus, $Z$ is a closed ball by definition. It is an open ball by Theorem 7 and an open set by Proposition 6. By Proposition 5, $Z$ is closed. Fix $k\in Z$. Then $k=x+y$ for some $y\in\mathbb{Z}_p$, so $$|k|_p\leq \max(|x|_p, |y|_p).$$ Since $|y|_p\leq1$ and $|x|_p$ is fixed, pick $M=2 \cdot \max (|x|_p,1)$. We have $|k|_p<M$ for any $k\in Z$. Thus, $Z$ is bounded. Since $Z$ is closed and bounded, it is compact by Theorem 8. Thus, $Z$ is a compact neighborhood of $x$, which means $\mathbb{Q}_p$ is locally compact. \end{proof} \begin{thm} The ring $\mathbb{Z}_p$ is compact. \end{thm} \begin{proof} We have that $\mathbb{Z}_p=\{ x\in\mathbb{Q}_p: |x|_p\leq1 \}$ is a closed ball by definition, and therefore closed by Proposition 5. Note also that $\mathbb{Z}_p$ is bounded by definition. Thus, $\mathbb{Z}_p$ is compact by Theorem 8.
\end{proof} \section{Integration on $\mathbb{Q}_p$} In this section, we introduce the notion of a Haar measure for locally compact topological abelian groups (such as $\mathbb{Q}_p$). This will allow us to set up the concept of an integral of a complex-valued function of a $p$-adic variable. We then present basic rules for $p$-adic calculus using the properties of the Haar measure, $p$-adic numbers and knowledge of abstract algebra. \begin{mydef} (Left, right Haar Measure) Let $(G,\cdot)$ be a topological group. A left Haar measure on $G$ is a nonzero regular Borel measure $\mu$ on $G$ such that $\mu(x\cdot E)=\mu(E)$ for all $x\in G$ and all measurable subsets $E$ of $G$. A right Haar measure on $G$ is a nonzero regular Borel measure $\mu$ on $G$ such that $\mu(E\cdot x)=\mu(E)$ for all $x\in G$ and all measurable subsets $A$ of $G$. \end{mydef} \begin{thm} Let $(G,\cdot)$ be a locally compact topological group. Then, there exists a unique left (right) Haar measure up to scaling by a positive constant. \end{thm} \begin{proof} See the proof of Theorem 4.3 on page 6, and Theorem 4.6 on page 11 of \cite{Gleason}. \end{proof} \begin{thm} Let $(G,\cdot)$ be a locally compact topological abelian group. Then, the left Haar measure is also the right Haar measure. \end{thm} \begin{proof} Fix $(G,\cdot)$ a locally compact abelian topological group. By Theorem 10 and the definition of left and right Haar measures, there exists a left and a right Haar measure $\mu$, $\mu'$ respectively. Fix $E$ a measurable subset of $G$, and $x\in G$. Then, $\mu(E\cdot x)=\mu(E)$, $\mu'(E)=\mu'(x\cdot E)$. Since $G$ is abelian, $\mu'(E)=\mu'(E\cdot x)$. By definition, this right Haar measure is the left Haar measure. \end{proof} By the previous two theorems, we can define the unique Haar measure on any locally compact topological abelian group, since the left Haar measure exists and is unique, and the left Haar measure is also the right Haar measure. \begin{mydef} (Haar Measure) Let $(G,\cdot)$ be a locally compact topological abelian group. The nonzero regular Borel measure $\mu$ on $G$ such that $\mu(x\cdot E)=\mu(E)$ for all $x\in G$ and all measurable subsets $E$ of $G$ is called the Haar Measure of $G$. \end{mydef} \begin{lem} $(\mathbb{Q}_p, +)$ is a locally compact topological abelian group. In addition, there exists a unique Haar measure $dx$ such that $\int_{\mathbb{Z}_p}dx=1$, and $d(x+a)=dx$ for any $x, a \in\mathbb{Q}_p$. \end{lem} \begin{proof} By Corollary 1 in Section 2.3, $\mathbb{Q}_p$ is locally compact. Fix $x,y\in\mathbb{Q}_p$. We have $|x+y|_p=|y+x|_p$. Thus, $(\mathbb{Q}_p, +)$ is abelian. The proof that $(\mathbb{Q}_p, +)$ is a topological group can be seen under Exercise 3.5 on page 9 of \cite{Vladimirov}. By Theorem 10 and Theorem 11, $(\mathbb{Q}_p, +)$ has a unique Haar measure $dx$ up to scaling by a positive constant such that $d(x+a)=dx$ for any $x \in\mathbb{Q}_p$ and $a>0, a\in\mathbb{Q}_p$. We normalize the measure by setting $\int_{\mathbb{Z}_p}dx=1$. Then $dx$ is unique. \end{proof} \begin{thm} Let $dx$ be the Haar measure for $(\mathbb{Q}_p, +)$. Then, $$d(ax)=\left| a\right|_p dx$$ for all a$\in\mathbb{Q}_p^*$, i.e. $\mathbb{Q}_p\setminus \{0\}$. \end{thm} \begin{proof} This proof is inspired by (3.7) on page 13 of \cite{Zuniga}. \vspace{0.3cm} We want to equivalently show that $\int_{aU}dx=|a|_p\int_U dx$ for any Borel set $U\subseteq\mathbb{Q}_p$. Fix a$\in\mathbb{Q}_p^*$. Then U$\mapsto\int_{aU}dx$ is a Haar measure for ($\mathbb{Q}_p$,+). 
Hence, there exists a unique positive constant C such that $\int_{aU}dx=C\int_Udx$ for any Borel set $U\subseteq\mathbb{Q}_p$. We fix $U=\mathbb{Z}_p$ to calculate C. Assume $a\in\mathbb{Z}_p$. Then $a=p^m u$, where $m\in\mathbb{N}$, and $u \in\mathbb{Z}_p^*$. We have \begin{align*} \int_{\mathbb{Z}_p} 1 \text{d}x &=\sum_{b\in\mathbb{Z}_p /p^m \mathbb{Z}_p} \int_{b+p^m \mathbb{Z}_p} \text{d}x \\ &= \sum_{b\in\mathbb{Z}_p/p^m \mathbb{Z}_p} \int_{p^m\mathbb{Z}_p} \text{d}x \\ &= p^m \int_{p^m\mathbb{Z}_p} \text{d}x =1 \end{align*} Note that the first equation derives from the fact that $$\mathbb{Z}_p=\bigcup_{b\in\mathbb{Z}_p/p^m\mathbb{Z}_p}(b+p^m\mathbb{Z}_p).$$ The second equation is based on Lemma 1. The term $p^m$ in the third equality represents the number of distinct cosets of $p^m\mathbb{Z}_p$ in $\mathbb{Z}_p$, since the $p$-adic expansion of the form $b_0+b_1p+b_2p^2+...+b_{m-1}p^{m-1}+p^m\mathbb{Z}_p$ has $p$ choices for each $b_k$, with $k$ ranging from $0$ to $m-1$. Thus, \begin{equation*} |a|_p=p^{-m}=\int_{a\mathbb{Z}_p}\text{d}x. \end{equation*} If $x\in\mathbb{Q}_p\setminus\mathbb{Z}_p$, we can get the same result by setting $a=p^{-m}u$, and $b\in{p^{-m} \mathbb{Z}_p}/\mathbb{Z}_p$. Thus, $C=|a|_p$, and $d(ax)=|a|_p dx$. \end{proof} \begin{thm} For any m$\in\mathbb{Z}$ $$\mu(p^m \mathbb{Z}_p) = p^{-m}.$$ \end{thm} \begin{proof} Fix m$\in\mathbb{Z}$. Then \begin{align*} \mu(p^m \mathbb{Z}_p)&=\int_{x\in p^m \mathbb{Z}_p}dx=\int_{y\in\mathbb{Z}_p}d(p^m y) \\ &=\int_{y\in\mathbb{Z}_p} |p^m|_p dy \\ &=p^{-m}\int_{y\in\mathbb{Z}_p}dy \\ &=p^{-m}. \end{align*} \end{proof} \begin{cor} For any $\gamma \in \mathbb{Z}$, we have $$\int_{{\left| x\right|_p}=p^\gamma}dx=p^\gamma (1-\frac{1}{p}).$$ \end{cor} \begin{proof} \begin{equation*} \int_{\left| x\right|_p}dx=\int_{\left| x\right|_p\leq p^\gamma}dx-\int_{\left| x\right|_p\leq p^{\gamma-1}}dx=p^\gamma-p^{\gamma-1} \end{equation*} by Theorem 13. \end{proof} \begin{cor} For any $s > -1,\ s\in\mathbb{Z}$, we have $$\int_{\mathbb{Z}_p} |x|_p^s =\frac{p-1}{p-p^{\shortminus s}}.$$ \end{cor} \begin{proof} For $s>-1$ we have \begin{align*} &\int_{\mathbb{Z}_p}\left| x\right|_p^sdx =\sum^{\infty}_{\gamma=0}\int_{\left| x\right|_p=p^{\shortminus{\gamma}}} p^{\shortminus{\gamma}s}dx \\&\quad =\sum^{\infty}_{\gamma=0} p^{\shortminus{\gamma}s}(p^{-\gamma}-p^{-{\gamma-1}}) =\frac{p-1}{p-p^{\shortminus s}} \end{align*} using the geometric series formula. \end{proof} \begin{cor} When $s<-1$ we have $$\int_{\mathbb{Q}_p \setminus \mathbb{Z}_p} |x|_p^s= -\frac{p-1}{p-p^{\shortminus s}}.$$ \end{cor} \begin{proof} Fix $s<-1$. Then \begin{align*} \int_{\mathbb{Q}_p\setminus{\mathbb{Z}_p}}\left| x\right|_p^s dx=\sum^{\infty}_{\shortminus\gamma=1}\int_{\left| x\right|_p=p^{\shortminus{\gamma}}} p^{\shortminus{\gamma}s}dx=\sum^{\infty}_{\shortminus\gamma=1}p^{\shortminus{\gamma s}} (p^{-\gamma}-p^{-{\gamma-1}}). \end{align*} This sum equals $-\frac{p-1}{p-p^{\shortminus s}}$ by the geometric series formula. \end{proof} Theorem 12, Theorem 13, and Corollaries 2, 3, and 4 give a good foundation for the rules of $p$-adic calculus, and are the basic tools for solving $p$-adic integration problems in the following sections. \section{A $p$-adic derivative operator} This is a brief section introducing the $p$-adic derivative operator and its two key properties. \begin{mydef} Let $\alpha>0$. 
The $p$-adic derivative operator $D^{\alpha}$ is defined as \begin{equation} D^\alpha f(x)=\frac{p^\alpha -1}{1-p^{-\alpha-1}} \int_{\mathbb{Q}_p} \frac{f(x)-f(y)}{\left| x-y\right|_p^{\alpha+1}}dy \end{equation} for any complex-valued function $f(x)$ with variables in the $p$-adic field $\mathbb{Q}_p$. We define the $p$-adic Gamma function to be the function \begin{equation} \Gamma_p(x)=\frac{1-p^{x-1}}{1-p^{\shortminus x}} \end{equation} for $x\neq0$. Then, \begin{equation} D^\alpha f(x)=\frac{1}{\Gamma_p ({\shortminus \alpha})} \int_{\mathbb{Q}_p} \frac{f(y)-f(x)}{\left| x-y\right|_p^{\alpha+1}}dy. \end{equation} \end{mydef} There are two important properties of the differential operator which are stated in the following proposition. \begin{prop} \begin{itemize} \item[(1)] For $\alpha, \beta>0$, we have $$D^\alpha [D^\beta f(x)]=D^{\alpha+\beta} f(x)$$ \item[(2)] For $n \in \mathbb{N}$, $\alpha>0$, we have \begin{equation*} D^\alpha {\left| x\right|_p}^n=\frac{\Gamma_p(n+1)}{\Gamma_p(n-\alpha+1)} \left| x\right|_p^{n-\alpha}. \end{equation*} \end{itemize} \end{prop} \begin{proof} We prove the second identity because its proof gives readers an idea about how $p$-adic integration looks like and helps with the understanding of a similar calculation in Section 5 of the thesis. \vspace{0.4cm} We have \begin{align*} &D^\alpha {\left| x\right|_p}^n=\frac{1}{\Gamma_p ({\shortminus \alpha})} \int_{\mathbb{Q}_p} \frac{{\left| y\right|_p}^n -\left| x\right|_p^n}{\left| x-y\right|_p^{\alpha+1}}dy \\&\quad =\frac{1}{\Gamma_p(-\alpha)} \left(\int_{|x|_p<|y|_p} \frac{{\left| y\right|_p}^n -\left| x\right|_p^n}{\left| x-y\right|_p^{\alpha+1}}dy+0+\int_{|x|_p>|y|_p} \frac{{\left| y\right|_p}^n -\left| x\right|_p^n}{\left| x-y\right|_p^{\alpha+1}}dy\right). \end{align*} Set $|x|_p=p^t$, where $t$ is some integer. Then, \begin{align*} &D^\alpha {\left| x\right|_p}^n =\frac{1}{\Gamma_p(-\alpha)} \left(\int_{|x|<|y|} \frac{|y|_p^n-p^{tn}}{|y|_p^{\alpha+1}}dy+\int_{|x|>|y|} \frac{|y|_p^n-p^{tn}}{p^{t(\alpha+1)}}dy\right) \\&\quad =\frac{1}{\Gamma_p(-\alpha)} \left(\sum_{k=t+1}^\infty \frac{p^{kn}-p^{tn}}{p^{k(\alpha+1)}}p^k \left(1-\frac{1}{p}\right)+\sum_{-\infty}^{k=t-1}\frac{p^{kn}-p^{tn}}{p^{t(\alpha+1)}}p^k\left(1-\frac{1}{p}\right)\right). \end{align*} The left series converges only if $n<\alpha$, the right series converges only if $n+1>0$. However, we can appropriately analytically continue the two series. Then after calculation, \begin{equation*} D^\alpha {\left| x\right|_p}^n=\frac{\Gamma_p(n+1)}{\Gamma_p(n-\alpha+1)} \left| x\right|_p^{n-\alpha}. \end{equation*} \end{proof} \section{The $p$-adic Sch\"{o}dinger equation} \subsection{Classical Schr\"{o}dinger equation} We begin by introducing the classical time-independent Schr\"{o}dinger equation and its solutions following results in Chapter 2 of \cite{Griffiths}. \begin{mydef} A time-independent Schr\"{o}dinger equation is an equation of the form \begin{equation} -\frac{h^2}{2m}\frac{d^2\Psi}{dx^2}+\frac{1}{2}m\omega^2x^2\Psi=E\Psi, \end{equation} where $h,m,\omega, E$ are some complex constants. Here $E$ is called the energy and satisfies $E\geq0$, and $x$ is the variable of the function $\Psi(x)$. \end{mydef} Fix a time-independent Schr\"{o}dinger equation. It can be rewritten as: \begin{equation} \frac{1}{2m}\left[\left(\frac{h}{i}\frac{d}{dx}\right)^2+(m\omega x)^2 \right]\Psi=E\Psi \end{equation} The left side of the above equation inspires us to think about the factorization of complex numbers, $x^2+y^2=(x-iy)(x+iy)$. 
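Before carrying out this factorization at the operator level, one can check symbolically that the analogy works up to a constant term. The following is a minimal sketch in Python (helper names \texttt{a\_plus}, \texttt{a\_minus} and \texttt{H} are ours, and the \texttt{sympy} library is assumed to be available); it verifies, on an arbitrary test function, the operator identities that are derived by hand in the next few equations.
\begin{verbatim}
# Sketch: symbolic check of the ladder-operator factorization of the
# harmonic-oscillator Hamiltonian (assumes sympy is installed).
import sympy as sp

x = sp.symbols('x', real=True)
h, m, w = sp.symbols('h m omega', positive=True)
f = sp.Function('f')(x)                     # arbitrary test function

def a_plus(g):                              # (1/sqrt(2m)) ((h/i) d/dx + i m w x)
    return (h/sp.I*sp.diff(g, x) + sp.I*m*w*x*g) / sp.sqrt(2*m)

def a_minus(g):                             # (1/sqrt(2m)) ((h/i) d/dx - i m w x)
    return (h/sp.I*sp.diff(g, x) - sp.I*m*w*x*g) / sp.sqrt(2*m)

def H(g):                                   # (1/2m) [ (h/i d/dx)^2 + (m w x)^2 ]
    return (-h**2*sp.diff(g, x, 2) + (m*w*x)**2*g) / (2*m)

# a_- a_+ = H + (1/2) h w   and   a_+ a_- = H - (1/2) h w
print(sp.simplify(a_minus(a_plus(f)) - (H(f) + h*w/2*f)))   # expected: 0
print(sp.simplify(a_plus(a_minus(f)) - (H(f) - h*w/2*f)))   # expected: 0
\end{verbatim}
Both differences simplify to zero, which is precisely the content of the two operator identities derived below.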
We consider \begin{equation} a_+=\frac{1}{\sqrt{2m}} \left(\frac{h}{i}\frac{d}{dx}+im\omega x\right) \end{equation} \begin{equation} a_-=\frac{1}{\sqrt{2m}}\left(\frac{h}{i}\frac{d}{dx}-im\omega x\right). \end{equation} We wish to explore the operators $a_-a_+$ and $a_+a_-$ using an arbitrary test function $f(x)$. After a calculation we have \begin{equation} a_-a_+=\frac{1}{2m}\left[\left(\frac{h}{i}\frac{d}{dx}\right)^2+\left(m\omega x\right)^2 \right]+\frac{1}{2}h\omega \end{equation} \begin{equation} a_+a_-=\frac{1}{2m}\left[\left(\frac{h}{i}\frac{d}{dx}\right)^2+\left(m\omega x\right)^2\right]-\frac{1}{2}h\omega \end{equation} By equations (8), (11), (12), we have: \begin{equation} (a_-a_+-\frac{1}{2}h\omega)\Psi=E\Psi \end{equation} \begin{equation} (a_+a_-+\frac{1}{2}h\omega)\Psi=E\Psi. \end{equation} \begin{remark} Equations (13) and (14) are just another form of the Schr\"{o}dinger equation in (7). \end{remark} \begin{thm} Fix a time-independent Schr\"{o}dinger equation \begin{equation*} -\frac{h^2}{2m}\frac{d^2\Psi}{dx^2}+\frac{1}{2}m\omega^2x^2\Psi=E\Psi, \end{equation*} where $h,m,\omega, E$ are some complex constants, $E$ denotes the energy, and $x$ is the variable of the function $\Psi(x)$. Let $a_+, a_-$ be the operators in (9) and (10). If $\Psi_0$ satisfies the Schr\"{o}dinger equation, then: \begin{itemize} \item[(a)] $a_+\Psi_0$ is a solution to the Schr\"{o}dinger equation with the new energy $(E+h\omega)$, \item[(b)] $a_-\Psi_0$ is a solution to the Schr\"{o}dinger equation with the new energy $(E-h\omega)$. \end{itemize} \end{thm} \begin{proof} It is sufficient to prove (a), since (b) can be proved similarly. Write $\Psi=\Psi_0$. We claim $(a_+a_-+\frac{1}{2}h\omega)(a_+\Psi)=(E+h\omega)(a_+\Psi)$, which is equivalent to (a). Indeed, \begin{align*} &(a_+a_-+\frac{1}{2}h\omega)(a_+\Psi)=(a_+a_-a_++\frac{1}{2}h\omega a_+)\Psi \\&\quad =a_+(a_-a_++\frac{1}{2}h\omega)\Psi=a_+[(a_-a_+-\frac{1}{2}h\omega)\Psi+h\omega\Psi] \\&\quad =a_+(E\Psi+h\omega\Psi)=(E+h\omega)(a_+\Psi). \end{align*} \end{proof} By Theorem 14, the operators $a_+$ and $a_-$ produce, from a given solution, new solutions of the Schr\"{o}dinger equation with raised or lowered energy. Therefore, we make the following definition. \begin{mydef} For a time-independent Schr\"{o}dinger equation \begin{equation*} -\frac{h^2}{2m}\frac{d^2\Psi}{dx^2}+\frac{1}{2}m\omega^2x^2\Psi=E\Psi, \end{equation*} $a_+$ defined in (9) is called the raising operator of the equation, and $a_-$ defined in (10) is called the lowering operator. \end{mydef} \begin{remark} Notice that if the lowering operator is iterated, the energy becomes lower and lower, eventually reaching a level $E<0$, which contradicts Definition 17. Therefore, a minimum energy level $E_0$ and a corresponding state $\Psi_0$ are required. Their existence and form are proved and stated in the following theorem. \end{remark} \begin{thm} For a Schr\"{o}dinger equation with lowering operator $a_-$, there exists $\Psi_0$ such that $a_-\Psi_0=0$. In this case, \begin{equation} \Psi_0(x)=A_0e^{-\frac{m\omega}{2h}x^2}, \end{equation} where $A_0$ is some constant. The corresponding energy is \begin{equation} E_0=\frac{1}{2}h\omega. \end{equation} \end{thm} \begin{proof} The existence part is shown in Chapter 2, page 34 of \cite{Griffiths}. We will find the explicit expression of $\Psi_0(x)$. Given $a_-\Psi_0=0$, we have \begin{equation*} \frac{1}{\sqrt{2m}} \left(\frac{h}{i}\frac{d\Psi_0}{dx}-im\omega x\Psi_0 \right)=0. \end{equation*} After simplification, we have the following differential equation: \begin{equation*} \frac{d\Psi_0}{dx}+\frac{m\omega}{h}x\Psi_0=0.
\end{equation*} Using the formula for solving first-order differential equations, let $$I(x)=e^{\int \frac{m\omega}{h}x},\ Q(x)=0.$$ Then \begin{align*} &\Psi_0(x)=\frac{1}{I(x)}\left[\int I(x)Q(x)dx+A_0\right]= \\&\quad \frac{1}{e^{\frac{m\omega x^2}{2h}}}[0+A_0]=A_0e^{-\frac{m\omega}{2h}x^2}. \end{align*} We replace the expression of $\Psi_0(x)$ into equation (16), and have \begin{align*} a_+(a_-\Psi_0(x))+\frac{1}{2}h\omega\Psi_0(x)=E_0\Psi_0(x). \end{align*} Given $a_-\Psi_0=0$, we have $a_+(a_-\Psi_0(x))=0$, and therefore $E_0=\frac{1}{2}h\omega$. \end{proof} It is easy to find explicit expressions of solutions to the Schr\"{o}dinger equation with higher energy using the raising operator $a_+$ based on $\Psi_0$ and $E_0$, and thus we have the following definition. \begin{mydef} Given $\Psi_0$ and $E_0$ for a time-independent Schr\"{o}dinger equation as in Theorem 18, we define $\Psi_n(x)=A_n(a_+)^ne^{-\frac{m\omega}{2h}x^2}$ as solutions to the equation with energy $E_n=(n+\frac{1}{2})h\omega$, which represents the $n^{th}$ excited state. \end{mydef} In the next section, we will explore a $p$-adic analog of the Schr\"{o}dinger equation inspired by the explicit expression of solutions, and raising and lowering operators we find in this section. \subsection{A $p$-adic Schr\"{o}dinger equation} We will prove Theorem 16 and 17 to prepare for exploring the $p$-adic analogs of the Schr\"{o}dinger equation since they are useful tools to simplify the calculation. \begin{thm} Fix $n\in \mathbb{N}$. Let \begin{equation} f_n(x) = \begin{cases} |x|_p^n &\text{ if } x \in \mathbb{Z}_p \\ 0 & \text{ otherwise} \end{cases}. \end{equation} Then, \begin{equation} D^{\alpha} f_n (x) = \begin{cases} \frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-\alpha}+\frac{1}{\Gamma_p(-\alpha)}\frac{p-1}{p-p^{-n+\alpha+1}} &\text{ if } x \in \mathbb{Z}_p \\ \frac{1}{\Gamma_p(-\alpha)}\left| x\right|_p^{\shortminus(\alpha+1)}\frac{p-1}{p-p^{\shortminus n}} & \text{ otherwise} \end{cases}. \end{equation} \end{thm} \begin{proof} Let $x\in\mathbb{Z}_p$, and let $|x|_p=p^t$ for some integer $t\leq0$. We have \begin{align*} & D^\alpha f_n(x)=\frac{1}{\Gamma_p(-\alpha)}\int_{\mathbb{Q}_p}\frac{f_n(y)-f_n(x)}{\left| y-x\right|_p^{\alpha+1}}dy \\&\quad =\frac{1}{\Gamma_p(-\alpha)} \left(\int_{|y|_p>|x|_p}\frac{|y|_p-|x|_p}{|y|_p^{\alpha+1}}+0+\int_{|y|_p<|x|_p}\frac{|y|_p-|x|_p}{|x|_p^{\alpha+1}} \right) \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\left(\sum_{k=t+1}^0 \frac{p^{kn}-p^{tn}}{p^{k(\alpha+1)}}(p^k-p^{k-1})+\sum_{k=-\infty}^{t-1} \frac{p^{kn}-p^{tn}}{p^{t(\alpha+1)}}(p^k-p^{k-1})\right) \\&\quad =\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-\alpha}+\frac{1}{\Gamma_p(-\alpha)}\frac{p-1}{p-p^{-n+\alpha+1}}, \end{align*} after some calculation. \vspace{0.4cm} Let $x\in\mathbb{Q}_p\setminus\mathbb{Z}_p$. Then \begin{align*} &D^\alpha f_n(x)=\frac{1}{\Gamma_p(-\alpha)}\int_{\mathbb{Q}_p}\frac{f_n(y)-0}{\left| y-x\right|_p^{\alpha+1}}dy =\frac{1}{\Gamma_p(-\alpha)} \left(\int_{\mathbb{Z}_p}\frac{\left| y\right|_p^n}{\left| x\right|_p^{\alpha+1}}+\int_{\mathbb{Q}_p\setminus\mathbb{Z}_p}\frac{0-0}{\left| y-x\right|_p^{\alpha+1}}\right) \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\left| x\right|_p^{\shortminus(\alpha+1)}\frac{p-1}{p-p^{\shortminus n}}. \end{align*} \end{proof} \begin{thm} Fix $n\in\mathbb{N}$. Let \begin{equation} g_n(x) = \begin{cases} 0 &\text{ if } x \in \mathbb{Z}_p \\ |x|_p^n & \text{ otherwise} \end{cases}. 
\end{equation} Then \begin{equation} D^{\alpha} g_n (x) = \begin{cases} \frac{1}{\Gamma_p(-\alpha)}(-\frac{p-1}{p-p^{\shortminus(n-\alpha-1)}}) &\text{ if } x \in \mathbb{Z}_p \\ \frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-\alpha}-\frac{1}{\Gamma_p(-\alpha)}\frac{p-1}{p-p^{-n}}|x|_p^{-\alpha-1} & \text{ otherwise} \end{cases}. \end{equation} \end{thm} \begin{proof} Let $x\in\mathbb{Z}_p$. We have \begin{align*} D^\alpha g_n(x) &=\frac{1}{\Gamma_p(-\alpha)}\int_{\mathbb{Q}_p}\frac{g_n(y)-g_n(x)}{\left| y-x\right|_p^{\alpha+1}}dy \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\int_{\mathbb{Q}_p}\frac{g_n(y)}{\left| y-x\right|_p^{\alpha+1}}dy \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\left(\int_{\mathbb{Z}_p} 0dy+\int_{\mathbb{Q}_p\setminus\mathbb{Z}_p}\frac{\left| y\right|_p^n}{\left| y\right|_p^{\alpha+1}}dy\right) \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\left(-\frac{p-1}{p-p^{\shortminus(n-\alpha-1)}}\right). \end{align*} Let $x\in\mathbb{Q}_p\setminus\mathbb{Z}_p$. We write $|x|_p=p^t$ for some integer $t>0$. Then \begin{align*} D^\alpha g_n(x)&=\frac{1}{\Gamma_p(-\alpha)}\int_{\mathbb{Q}_p}\frac{g_n(y)-\left| x\right|_p^n}{\left| y-x\right|_p^{\alpha+1}}dy \\&\quad =\frac{1}{\Gamma_p(-\alpha)} \Big(\int_{|y|_p>|x|_p}\frac{|y|_p^n-|x|_p^n}{|y|_p^{\alpha+1}} dy+0+\int_{|y|_p<|x|_p,\ y\in\mathbb{Z}_p}\frac{-|x|_p^n}{|x|_p^{\alpha+1}} dy \\&\quad +\int_{|y|_p<|x|_p,\ y\in\mathbb{Q}_p\setminus\mathbb{Z}_p} \frac{|y|_p^n-|x|_p^n}{|x|_p^{\alpha+1}} dy \Big) \\&\quad =\frac{1}{\Gamma_p(-\alpha)}\Big(\sum_{k=t+1}^{\infty}\frac{p^{kn}-p^{tn}}{p^{(\alpha+1)k}}(p^k-p^{k-1})+ \\&\quad \sum_{-\infty}^{k=0}\frac{-p^{tn}}{p^{(\alpha+1)t}}(p^k-p^{k-1}) +\sum_{k=1}^{k=t-1}\frac{p^{kn}-p^{tn}}{p^{(\alpha+1)t}}(p^k-p^{k-1})\Big) \\&\quad =\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-\alpha}-\frac{1}{\Gamma_p(-\alpha)}\frac{p-1}{p-p^{-n}}|x|_p^{-\alpha-1}. \end{align*} \end{proof} In the next few examples, we will explore solutions to the $p$-adic analog of the time-independent Schr\"{o}dinger equation. \begin{ex} For the $p$-adic differential equation \begin{equation} D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x), \end{equation} we seek solutions of the form $$\Psi_0(x)=\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n},$$ with $x\in \mathbb{Q}_p$. Here $D^2$ is the $p$-adic differential operator introduced in Section 4. We want to determine coefficients $b_{2n},$ $B$ and $E$, which are assumed to be real numbers, that satisfy the equation and make the series $\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n}$ converge. Knowing $D^2\Psi_0(x)=b_{2n}\frac{\Gamma_p(2n+1)}{\Gamma_p(2n-1)}\left| x\right|_p^{2n-2}$, where $\Gamma_p(x)=\frac{1-p^{x-1}}{1-p^{\shortminus x}}$, substituting in the equation, we can write \begin{equation} \sum_{n=1}^\infty b_{2n}\frac{\Gamma_p(2n+1)}{\Gamma_p(2n-1)}\left| x\right|_p^{2n-2}+B\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n+2}=E\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n}. \end{equation} For the coefficient of $\left| x\right|_p^0$ we have \begin{equation*} b_2\frac{\Gamma_p(3)}{\Gamma_p(1)}=Eb_0. \end{equation*} Since $\Gamma_p(1)=0$, we let $E=b_2=0$ to avoid the meaningless case that the denominator is 0. We assume $b_0=1$. For the coefficient of $\left| x\right|_p^2$ we have \begin{equation*} b_4\frac{\Gamma_p(5)}{\Gamma_p(3)}+Bb_0=Eb_1=0. \end{equation*} Then, $b_4=-B\frac{\Gamma_p(3)}{\Gamma_p(5)}$. For the coefficient of $\left| x\right|_p^4$ we have \begin{equation*} b_6\frac{\Gamma_p(7)}{\Gamma_p(5)}+Bb_2=0. \end{equation*} Then, $b_6=-\frac{Bb_2\Gamma_p(5)}{\Gamma_p(7)}=0$. 
For the same reason, $b_8=\frac{B^2\Gamma_p(3)\Gamma_p(7)}{\Gamma_p(5)\Gamma_p(9)}.$ More generally, if we assume $b_0=1$, \begin{align} &b_{4n}=\frac{(-B)^n\Gamma_p(4n-1)...\Gamma_p(3)}{\Gamma_p(4n+1)...\Gamma_p(5)} \\&\quad b_{4n+2}=0 \end{align} for $n\in\mathbb{N}$. Next, we check whether $\sum_{n=0}^\infty$ b$_{2n}$$\left| x\right|_p^{2n}$ converges in the real sense. Notice that \begin{equation*} \sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n}=\sum_{n=1}^\infty (-B)^n\frac{\Gamma_p(4n-1)...\Gamma_p(3)}{\Gamma_p(4n+1)...\Gamma_p(5)}\left| x\right|_p^{4n}. \end{equation*} We apply the Ratio Test and find \begin{align*} & lim_{n\longrightarrow\infty}\left| \frac{b_{4n+4}\left| x\right|_p^{4n+4}}{b_{4n}\left| x\right|_p^{4n}}\right|=lim_{n\longrightarrow\infty}\left| \frac{B\Gamma_p(4n+3)\left| x\right|_p^4}{\Gamma_p(4n+5)}\right| \\&\quad =B\left| x\right|_p^{4n}lim_{n\longrightarrow\infty}\left| \frac{\Gamma_p(4n+3)}{\Gamma_p(4n+5)}\right|. \end{align*} We compute $lim_{x\rightarrow\infty}\left| \frac{\Gamma_p(x)}{\Gamma_p(x+1)}\right|$, then $lim_{n\rightarrow\infty} \left| \frac{\Gamma_p(n)}{\Gamma_p(n+2)}\right|$ to find $lim_{n\rightarrow\infty}\left| \frac{\Gamma_p(4n+3)}{\Gamma_p(4n+5)}\right|$. We have \begin{align*} & lim_{x\longrightarrow\infty}\left| \frac{\Gamma_p(x)}{\Gamma_p(x+1)}\right|=lim_{x\longrightarrow\infty}\left| \frac{(1-p^{x-1})(1-p^{-(x+1)})}{(1-p^{-x})(1-p^x)}\right|= \\&\quad lim_{x\longrightarrow\infty}\left| \frac{(\frac{1}{p}-p^{x-1}+1-\frac{1}{p})(\frac{1}{p}-p^{-(x+1)}+1-\frac{1}{p})}{(1-p^x)(1-p^{-x})}\right|= \\&\quad lim_{x\longrightarrow\infty}\left(\frac{1}{p}+\frac{1-\frac{1}{p}}{1-p^x}\right)\left(\frac{1}{p}+\frac{1-\frac{1}{p}}{1-p^{-x}}\right) =\left(\frac{1}{p}+0 \right)\left(\frac{1}{p}+1-\frac{1}{p}\right)=\frac{1}{p}. \end{align*} Thus \begin{equation*} lim_{n\rightarrow\infty}\left| \frac{\Gamma_p(n)}{\Gamma_p(n+2)}\right|=lim_{n\longrightarrow\infty}\left| \frac{\Gamma_p(n)}{\Gamma_p(n+1)}\right|lim_{n\longrightarrow\infty}\left| \frac{\Gamma_p(n+1)}{\Gamma_p(n+2)}\right|=\frac{1}{p^2}. \end{equation*} Consequently, $(\left| \frac{\Gamma_p(4n+3)}{\Gamma_p(4n+5)}\right|)$ as a subsequence of $\left| \frac{\Gamma_p(n)}{\Gamma_p(n+2)}\right|$ also converges to $\frac{1}{p^2}$. In conclusion, whenever \begin{equation*} lim_{n\longrightarrow\infty}\left| \frac{b_{4n+4}\left| x\right|_p^{4n+4}}{b_{4n}\left| x\right|_p^{4n}}\right|=B\left| x \right|_p^4\frac{1}{p^2}<1, \end{equation*} $\sum_{n=0}^\infty b_{2n}\left| x\right|_p^{2n}$ passes the ratio test and converges. However, it does not converges everywhere, since $\left| x \right|_p$ can be infinitely large. \end{ex} In the next example, we construct a solution $\Psi_0$ whose series expansion is likely to converge for any $x\in\mathbb{Q}_p$. \begin{ex} We guess \begin{align} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise}. \end{cases} \end{align} is a solution to the equation $D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x)$, where $c_n,k_n$ are some real sequences. Let $x\in\mathbb{Z}_p$. Then, \begin{equation} \begin{split} &D^2\Psi_0(x)=\sum_{n=0}^{\infty}c_n \left(\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-2}+\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}}\right) \\ &+\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}}. 
\end{split} \end{equation} We replace expression (27) into the equation and have \begin{equation} \begin{split} &\sum_{n=0}^{\infty}c_n \left(\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-2}+\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}}\right) \\ &+\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}}+B\sum_{n=0}^{\infty}c_n|x|_p^{n+2} =E\sum_{n=0}^{\infty}c_n|x|_p^n. \end{split} \end{equation} For the coefficient of $|x|_p^{-2}$ we have \begin{equation*} c_0\frac{\Gamma_p(1)}{\Gamma_p(-1)}=0. \end{equation*} Since $\Gamma_p(1)=0$, we do not have information about $c_0$. For the coefficient of $|x|_p^{-1}$ we have \begin{equation*} c_1\frac{\Gamma_p(2)}{\Gamma_p(0)}=0. \end{equation*} We let $c_1=0$ since $\Gamma_p(0)$ is undefined. For the coefficient of $|x|_p^0$ we have \begin{equation*} c_2\frac{\Gamma_p(3)}{\Gamma_p(1)}+\sum_{n=0}^{ \infty}c_n\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}}+\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}}=Ec_0. \end{equation*} Since $\Gamma_p(1)=0$, we set $c_2=0$. Then, \begin{equation} \sum_{n=0}^{\infty}c_n\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}}+\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}}=Ec_0. \end{equation} For the coefficient of $|x|_p^1$ we have \begin{equation*} c_3\frac{\Gamma_p(4)}{\Gamma_p(2)}=Ec_1. \end{equation*} Thus, $c_3=0$. For the coefficient of $|x|_p^2$ we have \begin{align*} & c_4\frac{\Gamma_p(5)}{\Gamma_p(3)}+Bc_0=Ec_2 \\&\quad c_4=\frac{-Bc_0\Gamma_p(3)}{\Gamma_p(5)}. \end{align*} For the coefficient of $|x|_p^3$ we have \begin{align*} & c_5\frac{\Gamma_p(6)}{\Gamma_p(4)}=Ec_3 \\&\quad c_5=0. \end{align*} Generally, \begin{align} & c_{2n+1}=0, \\&\quad c_{2n+4}\frac{\Gamma_p(2n+5)}{\Gamma_p(2n+3)}+Bc_{2n}=Ec_{2n+2} \end{align} for any $n\in\mathbb{N}$. We now explore the case when $x\in\mathbb{Q}_p\setminus\mathbb{Z}_p$. We expect to find equations like (29), (30), (31). With these six equations together, we can prove the convergence of $\Psi_0$ later. Let $x\in\mathbb{Q}_p\setminus{\mathbb{Z}_p}$. Then, \begin{align*} D^2\Psi_0(x)=\sum_{k=1}^{\infty} k_n \left(\frac{\Gamma_p(-n+1)}{\Gamma_p(-n-1)}|x|_p^{-n-2}-\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^n}|x|_p^{-3}\right). \end{align*} We replace this expression into the equation and get \begin{align*} \sum_{n=1}^{\infty} & k_n \left(\frac{\Gamma_p(-n+1)}{\Gamma_p(-n-1)}|x|_p^{-n-2}-\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^n}|x|_p^{-3}\right) \\&\quad +\sum_{n=0}^{\infty}c_n\frac{1}{\Gamma_p(-2)}|x|_p^{-3}\frac{p-1}{p-p^{-n}}+B\sum_{n=1}^{\infty}k_n|x|_p^{-n+2}=E\sum_{n=1}^{\infty}k_n|x|_p^{-n}. \end{align*} For the coefficient of $|x|_p^{-3}$ we have \begin{equation} -\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^n}+\sum_{n=0}^{\infty}c_n\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n}}=-Bk_5. \end{equation} Equations (29) and (32) are two equations that can help us determine convergence of $\Psi_0$ and the value of $E$ later. In addition, we need to find the general expression for $k_n$ like in equation (30) and (31). For the coefficient of $|x|_p^1$ we have \begin{align*} &Bk_1=0 \\&\quad k_1=0. \end{align*} For the coefficient of $|x|_p^0$ we have \begin{align*} &Bk_2=0 \\&\quad k_2=0. \end{align*} For the coefficient of $|x|_p^{-1}$ we have \begin{align*} &Bk_3=Ek_1 \\&\quad k_3=0. \end{align*} For the coefficient of $|x|_p^{-2}$ we have \begin{align*} &Bk_4=Ek_2 \\&\quad k_4=0. \end{align*} For the coefficient of $|x|_p^{-4}$ we have \begin{align*} &k_2\frac{\Gamma_p(-1)}{\Gamma_p(-3)}+Bk_6=Ek_4 \\&\quad k_6=0. 
\end{align*} For the coefficient of $|x|_p^{-5}$ we have \begin{align*} &k_3\frac{\Gamma_p(-2)}{\Gamma_p(-4)}+Bk_7=Ek_5 \\&\quad k_7=\frac{Ek_5}{B}. \end{align*} More generally, \begin{align} & k_{2n}=0, \\ &k_{2n+1}\frac{\Gamma_p(-2n)}{\Gamma_p(-2n-2)}+Bk_{2n+5}=Ek_{2n+3} \end{align} for any positive integer $n$. Given (30), (31), (33), (34), we know all coefficients $c_n$, $k_n$ in terms of $c_0$, $k_5$. Thus, we make the following definition. \begin{mydef} Define $\tau$ and $s$ as the sequences satisfying \begin{align} &c_{2n}=c_0\tau_{2n}, \\&\quad k_{2n+1}=k_5s_{2n+1} \end{align} for $n\in\mathbb{N}\cup\{0\}$, where $c_n,k_n$ are the ones defined by (30),(31),(33),(34). \end{mydef} The following lemma, approximating the explicit expression of $\tau_{2n}$ and $s_{2n+1}$ for large primes $p$ and $B=1$, helps prove the convergence of the power series $\Psi_0$ (so that it becomes a solution to the $p$-adic Schrodinger Equation), the main goal of Example 4. \begin{lem} Given $lim_{n\rightarrow\infty}\frac{\Gamma_p(2n+3)}{\Gamma_p(2n+5)}=\frac{1}{p^2}$, assuming $p$ is large enough, $B=1$, then \begin{equation} \tau_{4n}=(-1)^n\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right) \end{equation} for $n\geq1$, \begin{equation} \tau_{4n+2}=(-1)^n\frac{nE}{p^{2n+2}}+O \left(\frac{1}{p^{2n+4}}\right) \end{equation} for $n\geq2$, \begin{equation} s_{4n+1}=(-1)^{n+1}p^{2n-2}+O(p^{2n-4}) \end{equation} \begin{equation} s_{4n+3}=(-1)^{n+1}nEp^{2n-2}+O(^{2n-4}) \end{equation} for $n\geq2$, where O represent the error terms. \end{lem} \begin{proof} We claim that $\frac{\Gamma_p(2n+3)}{\Gamma_p(2n+5)}\approx\frac{1}{p^2}$ for any $n\in\{0\}\cup\mathbb{N}$, $\frac{\Gamma_p(-2n)}{\Gamma_p(-2n-2)}\approx p^2$ for any $n\in\mathbb{N}$, given sufficiently large prime number $p$. It is easy to see this claim is true, since all terms are negligible except the ones with the largest powers in the denominator and numerator as $p$ increases. Therefore, for a fixed $n$ we have \begin{equation*} \frac{\Gamma_p(2n+3)}{\Gamma_p(2n+5)}=\frac{1-p^{-2n-5}-p^{2n+2}+p^{-3}}{1-p^{2n+4}-p^{-2n-3}+p}\approx \frac{-p^{2n+2}}{-p^{2n+4}}=\frac{1}{p^2}, \end{equation*} \begin{equation*} \frac{\Gamma_p(-2n)}{\Gamma_p(-2n-2)}=\frac{1-p^{2n+1}-p^{-2n}+p}{1-p^{-2n-2}-p^{2n-1}+p^{-3}}\approx\frac{-p^{2n+1}}{-p^{2n-1}}=p^2. \end{equation*} Given this claim, for $B=1$, $c_{2n}=\tau_{2n}c_0$ and $k_{2n+1}=k_5s_{2n+1}$, we choose a sufficiently large prime number $p$ and rewrite equations (31) and (34) as the following: \begin{equation} \tau_{2n+4}p^2+\tau_{2n}=E\tau_{2n+2} \end{equation} \begin{equation} s_{2n+1}p^2+s_{2n+5}=Es_{2n+3}. \end{equation} By come calculation using Lemma 2 above, \begin{align*} & \tau_4=\frac{1}{p^2}(-1), \\&\quad \tau_6=-E\frac{1}{p^4}, \\&\quad \tau_8=\frac{1}{p^4}-E^2\frac{1}{p^6}, \\&\quad \tau_{10}=2E\frac{1}{p^6}-E^3\frac{1}{p^8}, \\&\quad \tau_{12}=\frac{E}{p^6}+O \left(\frac{1}{p^8}\right) \\&\quad \tau_{14}=-3E\frac{1}{p^8}+O \left(\frac{1}{p^{10}}\right) \end{align*} We find a pattern that matches $\tau_{4n}=(-1)^n\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right)$ for $n\geq1$, $\tau_{4n+2}=(-1)^n\frac{nE}{p^{2n+2}}+O(\frac{1}{p^{2n+4}})$ for $n\geq2$. We prove by induction that the pattern holds true for all $n\geq2, n\in\mathbb{N}$. Assume \begin{equation*} \tau_{4n}=(-1)^n\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right), \tau_{4n+2}=(-1)^n\frac{nE}{p^{2n+2}}+O(\frac{1}{p^{2n+4}}). 
\end{equation*} We claim \begin{align*} &\tau_{4n+4}=(-1)^{n+1}\frac{1}{p^{2n+2}}+O \left( \frac{1}{p^{2n+4}} \right), \\&\quad \tau_{4n+6}=(-1)^{n+1}\frac{(n+1)E}{p^{2n+4}}+O \left(\frac{1}{p^{2n+6}}\right). \end{align*} Using equation (41), \begin{align*} &\tau_{4n+4}p^2+\tau_{4n}=E\tau_{4n+2} \\&\quad \tau_{4n+4}=\frac{E((-1)^n\frac{nE}{p^{2n+2}}+O(\frac{1}{p^{2n+4}}))-((-1)^n\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right))}{p^2} \\&\quad =\frac{-(-1)^n\frac{1}{p^{2n}}+O(\frac{1}{p^{2n+2}})}{p^2}=(-1)^{n+1}\frac{1}{p^{2n+2}}+O \left( \frac{1}{p^{2n+4}} \right) \end{align*} Similarly, \begin{align*} &\tau_{4n+6}p^2+\tau_{4n+2}=E\tau_{4n+4} \\&\quad \tau_{4n+6}=\frac{E((-1)^{n+1}\frac{1}{p^{2n+2}}+O \left( \frac{1}{p^{2n+4}} \right))-(-1)^n\frac{nE}{p^{2n+2}}-O(\frac{1}{p^{2n+4}})}{p^2} \\&\quad =(-1)^{n+1}\frac{(n+1)E}{p^{2n+4}}+O \left(\frac{1}{p^{2n+6}}\right) \end{align*} The proof for $s_{4n+1}, s_{4n+3}$ uses the exactly same inductive method, and is thus omitted. \end{proof} Recall that our final goal is to prove $\begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise} \end{cases}$ is convergent so that $\Psi_0(x)$ is a solution to the function D$^2$$\Psi(x)$+B$\left| x\right|_p^2$$\Psi(x)$=E$\Psi(x)$. As we find out in the following theorem, the statement is true. \begin{thm} \begin{align*} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise}. \end{cases} \end{align*} is a solution to the function $D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x)$, where $c_n,k_n$ are defined by (30), (31), (33), (34), $B=1$, E is some real constant determined by B. \end{thm} \begin{proof} Fix $x\in\mathbb{Z}_p$ so that $|x|_p\leq1$. Then, \begin{align*} &\sum_{n=0}^{\infty} c_n |x|_p^n=c_0|x|_p^0+\sum_{n=1}^\infty \left((-1)^n\frac{1}{p^{2n}}+O\left(\frac{1}{p^{2n+2}}\right)\right)|x|_p^{4n} \\&\quad +c_6|x|_p^6+\sum_{n=2}^\infty \left((-1)^n\frac{nE}{p^{2n+2}}+O\left(\frac{1}{p^{2n+4}}\right)\right)|x|_p^{4n+2}. \end{align*} Since the error term O has smaller absolute value than the approximation, we have \begin{equation*} \Big|\left((-1)^n\frac{1}{p^{2n}}+O\left(\frac{1}{p^{2n+2}}\right)\right)|x|_p^{4n}\Big|<\frac{2}{p^{2n}}|x|_p^{4n}. \end{equation*} Since the series made by the latter term is convergent by the Ratio Test, \begin{equation*} \sum_{n=1}^{\infty} \left((-1)^n\frac{1}{p^{2n}}+O\left(\frac{1}{p^{2n+2}}\right)\right)|x|_p^{4n} \end{equation*} is absolutely convergent by the Comparison Test, and therefore convergent. Similarly, \begin{equation*} \sum_{n=2}^\infty \left((-1)^n\frac{nE}{p^{2n+2}}+O\left(\frac{1}{p^{2n+4}}\right)\right)|x|_p^{4n+2} \end{equation*} is convergent by comparing it with $\sum_{n=2}^\infty \frac{2nE}{p^{2n+2}}|x|_p^{4n+2}.$ We conclude that \begin{equation*} \sum_{n=0}^{\infty} c_n |x|_p^n \end{equation*} is convergent for all $x\in \mathbb{Z}_p$. Now, we claim \begin{equation*} \sum_{n=1}^{\infty} k_n |x|_p^{-n} \end{equation*} is convergent for any $x\in\mathbb{Q}_p\setminus{\mathbb{Z}_p}$. Fix $x\in\mathbb{Q}_p\setminus{\mathbb{Z}_p}$ so that $|x|_p\geq p$. Then, \begin{align*} &\sum_{n=1}^{\infty} k_n |x|_p^{-n}=\sum_{n=1}^{6} k_n |x|_p^{-n}+\sum_{n=2}^\infty ((-1)^{n+1}p^{2n+2}+O(p^{2n-4}))|x|_p^{-4n-1} \\&\quad +\sum_{n=2}^\infty ((-1)^{n+1}nEp^{2n+2}+O(p^{2n-4}))|x|_p^{-4n-3}. 
\end{align*} Again, using the condition that the error term O has smaller absolute value than the main term and $|x|_p\geq p$, we have \begin{equation*} ((-1)^{n+1}p^{2n+2}+O(p^{2n-4}))|x|_p^{-4n-1}<\frac{2p^{2n+2}}{p^{4n+1}}=2p^{-2n+1}. \end{equation*} Obviously, $\sum_{n=2}^\infty 2p^{-2n+1}$ converges by the Geometric Series Test. Thus, \begin{equation*} \sum_{n=2}^\infty ((-1)^{n+1}p^{2n+2}+O(p^{2n-4}))|x|_p^{-4n-1}\end{equation*} is convergent by the Comparison Test. Similarly, \begin{equation*} \sum_{n=2}^\infty ((-1)^{n+1}nEp^{2n+2}+O(p^{2n-4}))|x|_p^{-4n-3} \end{equation*} is convergent by comparing it with \begin{equation*} \sum_{n=2}^\infty 2nEp^{2n+2}p^{-4n-3}=\sum_{n=2}^\infty 2nEp^{-2n-1}, \end{equation*} of which the ratio $|\frac{n+1}{n}p^{-2}|$ converges to $p^{-2}<1$ as $n\rightarrow\infty$. Thus, \begin{equation*} \sum_{n=1}^\infty k_n|x|_p^{-n} \end{equation*} converges. \end{proof} In the case $B=1$, the above is enough to prove $\Psi_0(x)$ is a solution to our equation, we still wish to approximate the value of $E$ asymptotically for large prime numbers $p$. Before that, we derive an important equation that would help us estimate $E$ based on $B$. We rewrite the condition on $E$ as two equations using $\tau_{n}$ and $s_{n}$: \begin{align} & c_0(\sum_{n=0}^{\infty}\frac{\tau_{2n}}{p-p^{-2n+3}}-\frac{E\Gamma_p(-2)}{p-1})=k_5\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+4}} \\&\quad c_0\sum_{n=0}^\infty\frac{\tau_{2n}}{p-p^{-2n}}=k_5(\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+1}}-B\frac{\Gamma_p(-2)}{p-1}). \end{align} Denote \begin{align*} A &=\sum_{n=0}^\infty\frac{\tau_{2n}}{p-p^{-2n}} \\ F &=\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+1}}-B\frac{\Gamma_p(-2)}{p-1} \\ C &=\sum_{n=0}^\infty\frac{\tau_{2n}}{p-p^{-2n+3}}-\frac{E\Gamma_p(-2)}{p-1} \\ D &=\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+4}}. \end{align*} Then, we have two linear equations in $c_0$ and $k_5$, \begin{align*} & c_0A-k_5F=0 \\&\quad c_0C-k_5D=0 \end{align*} We do not wish to have trivial solutions to the system of equations, since it is meaningless for exploring the $p$-adic analog of the solution. Thus, in order to let the two equations have a non-zero solution, we need $det \begin{pmatrix} A&-F\\C&-D \end{pmatrix}=0$. Thus, $$AD=FC.$$ Consequently, we find \begin{equation} \begin{split} &\left(\sum_{n=0}^\infty \frac{\tau_{2n}} {p-p^{-2n}} \right) \left(\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+4}} \right)= \\ &( \sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+1}}-B\frac{\Gamma_p(-2)}{p-1})(\sum_{n=0}^\infty\frac{\tau_{2n}}{p-p^{-2n+3}}-\frac{E\Gamma_p(-2)}{p-1}) \end{split} \end{equation} \begin{cor} Given $B=1$ and the conditions of Theorem 18, $$E=2-\frac{2}{p}+O \left(\frac{1}{p^2}\right)$$ gives one solution $\Psi_0 (x)$ as above, but this solution is not necessarily unique. \end{cor} \begin{proof} Assume $E=C+a(\frac{1}{p})+O \left(\frac{1}{p^2}\right)$ for some constant $C, a$. Here $O$ represents the size of the error term. By Equation (37) and (38), \begin{equation} \begin{split} &\sum_{n=0}^\infty \frac{\tau_{2n}}{p-p^{-2n}}=\frac{\tau_0}{p-p^0}+\frac{\tau_{2}}{p-p^{-2}}+\frac{\tau_4}{p-p^{-4}}+\frac{\tau_6}{p-p^{-6}}+... \\ &=\frac{1}{p-1}+0+\frac{(-1)p^2}{p-p^{-4}}+\frac{(-1)E/p^4}{p-p^{-6}}+...\approx\frac{1}{p}-\frac{1}{p^3}+O \left(\frac{1}{p^5}\right). \end{split} \end{equation} By Equation (39) and (40), \begin{equation} \begin{split} &\sum_{n=2}^\infty\frac{s_{2n+1}}{p-p^{2n+4}}=\frac{s_5}{p-p^8}+\frac{s_7}{p-p^{10}}+\frac{s_9}{p-p^{12}}+... 
\\&\quad \approx\frac{1}{p-p^8}+\frac{E}{p-p^{10}}+\frac{(-1)p^2}{p-p^{12}}+... \approx\frac{-1}{p^8}+\frac{-E+1}{p^{10}}+O\left(\frac{1}{p^{12}}\right). \end{split} \end{equation} Replacing (46), (47) into the left hand side of Equation (45), \begin{equation} \begin{split} &LHS=\left(\frac{1}{p}-\frac{1}{p^3}+O\left(\frac{1}{p^5}\right)\right)\left(\frac{-1}{p^8}+\frac{-E+1}{p^{10}}+O\left(\frac{1}{p^{12}}\right)\right) \\ &\approx \frac{-1}{p^9}+\frac{-E+2}{p^{11}}+O\left(\frac{1}{p^{13}}\right). \end{split} \end{equation} We have \begin{equation} \begin{split} &\sum_{n=2}^\infty \frac{s_{2n+1}}{p-p^{2n+1}}-\frac{\Gamma_p(-2)}{p-1}=\frac{s_5}{p-p^5}+\frac{s_7}{p-p^7}+\frac{s_9}{p-p^9}-\frac{1-p^{-3}}{(1-p^2)(p-1)} \\& \approx \frac{1}{p-p^5}+\frac{E}{p-p^7}+\frac{(-1)p^2}{p-p^9}+\frac{1}{p^3}+\frac{1}{p^4}+\frac{2}{p^5}+... \\& \approx\frac{1}{p^3}+\frac{1}{p^4}+\frac{1}{p^5}+O\left(\frac{1}{p^6}\right). \end{split} \end{equation} Similarily, \begin{equation} \begin{split} &\sum_{n=0}^\infty \frac{\tau_{2n}}{p-p^{-2n+3}}-\frac{E\Gamma_p(-2)}{p-1} \\&\quad =\frac{1}{p-p^3}+\frac{\frac{-1}{p^2}}{p-p^{-1}}+\frac{-E/p^4}{p-p^{-3}}+\frac{E}{p^3}+\frac{E}{p^4}+... \\&\quad \approx \frac{-2+E}{p^3}+\frac{E}{p^4}+O\left(\frac{1}{p^5}\right). \end{split} \end{equation} Replacing (49) and (50) into the right hand side of Equation (45), \begin{equation} \begin{split} &RHS=(\frac{1}{p^3}+\frac{1}{p^4}+\frac{1}{p^5}+O(\frac{1}{p^6}))(\frac{-2+E}{p^3}+\frac{E}{p^4}+O(\frac{1}{p^5})) \\ &\approx \frac{E-2}{p^6}+\frac{2E-2}{p^7}+\frac{3E-2}{p^8}+\frac{4E-2}{p^9}... \end{split} \end{equation} Since $LHS=RHS$,we replace $E=c+a(\frac{1}{p})+O(\frac{1}{p^2})$ into the equation and have \begin{align*} &\frac{-1}{p^9}+O(\frac{1}{p^{10}})=\frac{E-2}{p^6}+\frac{2E-2}{p^7}+\frac{3E-2}{p^8}+\frac{4E-2}{p^9}+... \\&\quad =\frac{c-2}{p^6}+\frac{a+2c-2}{p^7}+... \end{align*} Since $c-2=0$, $a+2c-2=0$, we have $c=2$, $a=-2$. Thus, $$E=2-\frac{2}{p}+O(\frac{1}{p^2}).$$ \end{proof} \end{ex} \begin{ex} Inspired by Example 4, we can find another easy solution to \begin{equation*} D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x) \end{equation*} with $B=-1$, \begin{align*} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise}. \end{cases} \end{align*}. \begin{lem} Given $lim_{n\shortrightarrow\infty}\frac{\Gamma_p(2n+3)}{\Gamma_p(2n+5)}=\frac{1}{p^2}$, the equation as well as $\psi_0$ in Example 5, assuming $p$ is large enough, $B=-1$, $\tau_n,s_n$ be the same as Definition 20, then \begin{equation} \tau_{4n}=\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right) \end{equation} for $n\geq1$, \begin{equation} \tau_{4n+2}=\frac{nE}{p^{2n+2}}+O \left(\frac{1}{p^{2n+4}}\right) \end{equation} for $n\geq2$, \begin{equation} s_{4n+1}=p^{2n-2}+O(p^{2n-4}) \end{equation} \begin{equation} s_{4n+3}=nEp^{2n-2}+O(p^{2n-4}) \end{equation} for $n\geq2$, where O are error terms. \end{lem} \begin{proof} The proof method is exactly the same as Lemma 2, using the inductive method on the recursive relationship, i.e. Equation (30),(31),(33),(34) of $c_n$ and $k_n$. It is left to readers to find out Lemma 3 is true. \end{proof} \begin{thm} \begin{align*} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{otherwise}. 
\end{cases} \end{align*} is a solution to the function $D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x)$, where $B=-1$, $E$ is some constant determined by $B$. \end{thm} \begin{proof} Again, the proof method is exactly the same as Theorem 18, using the Comparison Test, the Geometric Series Test and the Ratio Test. For each $n\in\mathbb{N}$, \begin{equation*} \tau_{4n}=\frac{1}{p^{2n}}+O \left( \frac{1}{p^{2n+2}} \right)<\frac{2}{p^{2n}}. \end{equation*} \begin{equation*} \tau_{4n+2}=\frac{nE}{p^{2n+2}}+O(\frac{1}{p^{2n+4}})<\frac{2nE}{p^{2n+2}}. \end{equation*} Since $\sum\frac{2}{p^{2n}}$ converges by the Geometric Series Test, $\sum c_{4n}|x|_p^{4n}$ converges. Since $\sum \frac{2nE}{p^{2n+2}}$ converges by the Ratio Test, $\sum c_{4n+2}|x|_p^{4n+2}$ converges. Thus, $\sum c_n|x|_p^n$ converges. Similar methods can prove $k_n|x|_p^n$ converges and the proof is thus omitted. \end{proof} Like Example 4, let us find a constant E that satisfies the $p$-adic Schr\"{o}dinger equation under the condition $B=-1$. \begin{cor} Given the condition in Theorem 19, $E=-\frac{2}{3}\frac{1}{p^2}+\frac{7}{3}\frac{1}{p^3}+O(\frac{1}{p^4})$ is one solution to $B=-1$, but not necessarily unique. \end{cor} \begin{proof} We omit some procedures of calculation since they are very similar to those under Corollary 2. Assume $E=C+\frac{a}{p}+b(\frac{1}{p^2})+d(\frac{1}{p^3})+O(\frac{1}{p^4})$ for some constant $C, a, b, d$. \begin{equation} \sum_{n=0}^\infty \frac{\tau_{2n}}{p-p^{-2n}}\approx \frac{1}{p}+\frac{1}{p^3}+O\left(\frac{1}{p^5}\right) \end{equation} \begin{equation} \sum_{n=2}^\infty \frac{s_{2n+1}}{p-p^{2n+4}}\approx \frac{-1}{p^8}+\frac{-E-1}{p^{10}}+O\left(\frac{1}{p^{12}}\right) \end{equation} Replacing (56) and (57) into the left hand side of (45), \begin{equation} LHS\approx \frac{-1}{p^9}+\frac{-E-2}{p^{11}}+O\left(\frac{1}{p^{13}}\right) \end{equation} \begin{equation} \sum_{n=2}^\infty \frac{s_{2n+1}}{p-p^{2n+1}}+\frac{\Gamma_p(-2)}{p-1}\approx \frac{-1}{p^3}+\frac{-1}{p^4}+\frac{-3}{p^5}+O\left(\frac{1}{p^6}\right) \end{equation} \begin{equation} \sum_{n=0}^\infty \frac{\tau_{2n}}{p-p^{-2n+3}}-\frac{E\Gamma_p(-2)}{p-1}\approx \frac{E}{p^3}+\frac{E}{p^4}+\frac{3E}{p^5}+O\left(\frac{1}{p^6}\right). \end{equation} Replacing (59) and (60) into the right hand side of (45), \begin{equation} RHS\approx\frac{-E}{p^6}+\frac{-2E}{p^7}+\frac{-7E}{p^8}+O\left(\frac{1}{p^9}\right). \end{equation} Since $LHS=RHS$, we replace $E=C+\frac{a}{p}+b(\frac{1}{p^2})+d(\frac{1}{p^3})+O(\frac{1}{p^4})$ into the equation, and get $$E=-\frac{2}{3}\frac{1}{p^2}+\frac{7}{3}\frac{1}{p^3}+O \left(\frac{1}{p^4}\right)$$ after some calculation. \end{proof} \end{ex} \begin{remark} After exploring the above two examples assuming $\Psi_0$ is a convergent power series, it is intuitive to think if constant multiples of $\Psi_0$ are solutions to the $p$-adic Schrodinger Equation. Thus, we have the following proposition. \end{remark} \begin{prop} If \begin{align} \psi_0(x) &= \sum_{n=0}^{\infty} c_n f_n (x) + \sum_{n=1}^{\infty} k_n g_{-n}(x) \\ &= \begin{cases} \sum_{n=0}^{\infty} c_n |x|_p^n & \text{ if } x \in \mathbb{Z}_p \\ \sum_{n=1}^{\infty} k_n |x|_p^{-n} & \text{ otherwise}. \end{cases} \end{align} is a solution to \begin{equation*} D^2\Psi(x)+B\left| x\right|_p^2\Psi(x)=E\Psi(x), \end{equation*} with $B,E$ fixed, then $\Psi_c(x)=c\Psi_0(x)$ is also a solution to the equation for any $c\in\mathbb{Q}_p$. 
\end{prop} \begin{proof} Assume $x\in\mathbb{Z}_p$, then we replace $\Psi_0(x)=\sum_{n=0}^\infty c_n|x|_p^n$ into the equation, and get \begin{equation} \begin{split} &\sum_{n=0}^{\infty}c_n(\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-2}+\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}})+\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}} \\ &+B|x|_p^2\sum_{n=0}^\infty c_n|x|_p^n=E\sum_{n=0}^\infty c_n|x|_p^n. \end{split} \end{equation} We multiply both sides of Equation (64) above by a constant $c\in\mathbb{Q}_p$, and have \begin{equation} \begin{split} &c\sum_{n=0}^{\infty}c_n(\frac{\Gamma_p(n+1)}{\Gamma_p(n-1)}|x|_p^{n-2}+\frac{1}{\Gamma_p(-2)}\frac{p-1}{p-p^{-n+3}})+c\sum_{n=1}^{\infty}k_n\frac{1}{\Gamma_p(-2)}\frac{1-p}{p-p^{n+3}} \\ &+cB|x|_p^2\sum_{n=0}^\infty c_n|x|_p^n=cE\sum_{n=0}^\infty c_n|x|_p^n. \end{split} \end{equation} It is easy to check that the left hand side of $Equation(65)=D^2(\Psi_c(x))+B|x|_p^2\Psi_c(x)$, and the right hand side of $ Equation(65)=E\Psi_c(x)$. Thus, \begin{equation} D^2(\Psi_c(x))+B|x|_p^2\Psi_c(x)=E\Psi_c(x) \end{equation} The case when $x\in\mathbb{Q}_p\setminus\mathbb{Z}_p$ is similar. We can also get Equation (66) in this case. Thus, $\Psi_c$ is indeed a solution to the equation. \end{proof} \pagebreak \begin{center} REFERENCES \end{center} \begin{thebibliography}{6} \bibitem{Gleason} Jonathan Gleason, \emph{Existence and uniqueness of Haar measure}, University of Chicago. August, 2010. \bibitem{Griffiths} D.J. Griffiths, D.F. Schroeter, \emph{Introduction to quantum mechanics}, Cambridge University Press. 2018. \bibitem{Alexa} Alexa Pomerantz, \emph{An introduction to the $p$-adic numbers}, University of Chicago, 2020. \bibitem{Jack} Jack A. Thorne, \emph{$p$-adic analysis, $p$-adic arithmetic}, Harvard, 2010. \bibitem{Vladimirov} V.S.Vladimirov, I.V.Volovich and E.I.Zelenov, \emph{p-adic analysis and mathematical physics}, World Scientific Publishing Co., 1994. \bibitem{Zuniga} W.A. Zuniga-Galindo, \emph{$p$-Adic analysis: a quick introduction}, L. Santalo Research Summer School, 2019. \end{thebibliography} \vspace{1cm} \end{document}
2412.10708v1
http://arxiv.org/abs/2412.10708v1
Bertrand lightcone framed curves in the Lorentz-Minkowski 3-space
\documentclass[a4paper,12pt]{article} \usepackage{latexsym} \usepackage{amssymb} \usepackage{theorem} \usepackage{amsmath} \usepackage{amscd} \usepackage{graphicx} \usepackage{url} \usepackage{color} \pagestyle{plain} \setlength{\oddsidemargin}{-.5cm} \setlength{\evensidemargin}{-.5cm} \setlength{\textwidth}{17cm} \setlength{\topmargin}{-1.3cm} \setlength{\textheight}{24cm} \setlength{\headheight}{.1in} \setlength{\headsep}{.3in} \setlength{\parskip}{.5mm} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newcommand{\demo}{\par\noindent{\it Proof. \/}\ } \newcommand{\enD}{\hfill $\Box$\vspace{3truemm} \par} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\lon}{\longrightarrow} \newcommand{\e}{\varepsilon} \newcommand{\codim}{\operatorname{codim}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\corank}{\operatorname{corank}} \newcommand{\sign}{\operatorname{sign}} \newcommand{\bn}{\mbox{\boldmath $n$}} \newcommand{\bt}{\mbox{\boldmath $t$}} \newcommand{\ba}{\mbox{\boldmath $a$}} \newcommand{\bb}{\mbox{\boldmath $b$}} \newcommand{\be}{\mbox{\boldmath $e$}} \newcommand{\bmu}{\mbox{\boldmath $\mu$}} \newcommand{\bv}{\mbox{\boldmath $v$}} \newcommand{\bw}{\mbox{\boldmath $w$}} \newcommand{\bx}{\mbox{\boldmath $x$}} \newcommand{\by}{\mbox{\boldmath $y$}} \newcommand{\bz}{\mbox{\boldmath $z$}} \newcommand{\A}{\mathcal{A}} \begin{document} \title{Bertrand lightcone framed curves in the Lorentz-Minkowski 3-space} \author{Nozomi Nakatsuyama and Masatomo Takahashi} \date{\today} \maketitle \begin{abstract} We consider mixed types of not only regular curves but also curves with singular points in the Lorentz-Minkowski 3-space. In order to consider mixed type of curves with singular points, we consider the lightcone frame and lightcone framed curves. By using lightcone frame, we can consider Bertrand types for lightcone framed curves, so-called Bertrand lightcone framed curves. We clarify that the existence conditions of Bertrand lightcone framed curves in all cases. As a consequence, we find pseudo-circular involutes and evolutes of mixed type curves which appear as Bertrand lightcone framed curves. \end{abstract} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnote[0]{2020 Mathematics Subject classification: 53A35, 53C50, 58K05} \footnote[0]{Key Words and Phrases. Bertrand lightcone framed curve, lightcone framed curve, mixed type, singularity} \section*{Introduction} Bertrand and Mannheim curves are classical objects in differential geometry (\cite{Aminov, Balgetir-Bektacs-Jun-ichi, Berger-Gostiaux, Bertrand, Chunxiao-Pei, HCIP, Papaioannou-Kiritsis, Struik}). A Bertrand (respectively, Mannheim) curve in the Euclidean $3$-space is a space curve whose principal normal line is the same as the principal normal (respectively, bi-normal) line of another curve. By definition, another curve is a parallel curve with respect to the direction of the principal normal vector. In \cite{Honda-Takahashi-2020}, they investigated the condition of the Bertrand and Mannheim curves of non-degenerate curves and framed curves. Moreover, we investigated the other cases, that is, a space curve whose tangent (or, principal normal, bi-normal) line is the same as the tangent (or, principal normal, bi-normal) line of another curve, respectively. 
We say that a curve is a Bertrand type curve if such another curve exists. We investigated the existence conditions of Bertrand type curves in all cases in \cite{Nakatsuyama - Takahashi}. Moreover, we also investigated curves with singular points. In order to treat smooth curves with singular points, it is useful to use framed curves in the Euclidean space (cf. \cite{Honda-Takahashi-2016}). We investigated the existence conditions of Bertrand framed curves (Bertrand types of framed curves) in all cases in \cite{Nakatsuyama - Takahashi}. \par For a non-lightlike non-degenerate curve, we have the arc-length parameter and a Frenet-Serret type formula by using a moving frame, as for a non-degenerate space curve in the Euclidean space. It follows that we have the curvature of a non-lightlike non-degenerate curve (cf. \cite{López, O’Neil}). \par In \cite{Ucum-Ilarslan}, they investigated the necessary and sufficient conditions for timelike Bertrand curves in the Lorentz-Minkowski 3-space. In \cite{Hatice-Kazim}, they investigated the necessary and sufficient conditions for spacelike Bertrand curves with non-null normal vectors in the Lorentz-Minkowski 3-space. See also \cite{Wu-Zhou-Yao-Pei}. In \cite{Balgetir-Bektacs-Jun-ichi}, they investigated the necessary and sufficient conditions for null (lightlike) Bertrand curves in the Lorentz-Minkowski 3-space. \par In this paper, we consider mixed types of not only regular curves but also curves with singular points in the Lorentz-Minkowski 3-space. In order to consider mixed types of curves with singular points, we consider the lightcone frame and lightcone framed curves (cf. \cite{Chen-Takahashi}). By using the lightcone frame, we can consider Bertrand lightcone framed curves. We investigate a space curve whose lightlike (or, spacelike, timelike) vector is the same as the lightlike (or, spacelike, timelike) vector of another curve, respectively. We say that a lightcone framed curve is a Bertrand lightcone framed curve if such another curve exists. In \S 3, we clarify the existence conditions of Bertrand lightcone framed curves in all cases. As a consequence, we find pseudo-circular evolutes and involutes of mixed type curves which appear as Bertrand lightcone framed curves (Theorems \ref{lightlike-mate-1}, \ref{lightlike-mate-2}, \ref{lightlike-mate-5}, \ref{lightlike-mate-7} and \ref{spacelike-mate4}). Therefore, Bertrand lightcone framed curves are useful for finding new lightcone framed curves. \par We shall assume throughout the whole paper that all maps and manifolds are $C^{\infty}$ unless the contrary is explicitly stated. \section{Preliminaries} We review the theory of lightcone framed curves in the Lorentz-Minkowski 3-space. For details, see \cite{Chen-Takahashi}. \subsection{Lorentz-Minkowski $3$-space} The {\it Lorentz-Minkowski space} $\R^3_1$ is the space $\R^3$ endowed with the metric induced by the pseudo-scalar product $\langle \bx, \by \rangle=-x_1y_1+x_2y_2+x_3y_3$, where $\bx=(x_1,x_2,x_3)$ and $\by=(y_1,y_2,y_3)$. We say that a non-zero vector $\bx \in \R^3_1$ is {\it spacelike} if $\langle \bx, \bx \rangle >0$, {\it lightlike} if $\langle \bx, \bx \rangle=0$, and {\it timelike} if $\langle \bx, \bx \rangle <0$, respectively. The {\it norm} of a vector $\bx \in \R^3_1$ is defined by $||\bx||=\sqrt{|\langle \bx,\bx \rangle |}$.
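For instance, the causal character of a vector is determined by the sign of $\langle \bx,\bx\rangle$. The following short Python sketch (an illustration of the definitions above, with helper names of our choosing) classifies a few vectors of $\R^3_1$ and computes their norms.
\begin{verbatim}
# Sketch: causal character and norm in R^3_1 with the pseudo-scalar product
# <x, y> = -x1*y1 + x2*y2 + x3*y3 (signature (-, +, +)).
import math

def pairing(x, y):
    return -x[0]*y[0] + x[1]*y[1] + x[2]*y[2]

def causal_character(x):
    s = pairing(x, x)
    if s > 0:
        return "spacelike"
    if s < 0:
        return "timelike"
    return "lightlike"              # for non-zero vectors with <x, x> = 0

def norm(x):
    return math.sqrt(abs(pairing(x, x)))

print(causal_character((1, 2, 0)))  # spacelike: -1 + 4 = 3 > 0
print(causal_character((2, 1, 1)))  # timelike:  -4 + 1 + 1 = -2 < 0
print(causal_character((1, 1, 0)))  # lightlike: -1 + 1 = 0
print(norm((2, 1, 1)))              # sqrt(2)
\end{verbatim}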
For any $\bx=(x_1, x_2, x_3), \by=(y_1, y_2, y_3)\in \R^3_1$, we define a vector $\bx\wedge \by$ by $$ \bx\wedge \by= {\rm det} \left(\begin{array}{ccc} -\be_1&\be_2&\be_3\\ x_1&x_2&x_3\\ y_1&y_2&y_3 \end{array}\right), $$ where $\{\be_1, \be_2, \be_3\}$ is the canonical basis of $\R^3_1$. For any $\bw \in \R_1^3$, we can easily check that $$ \langle \bw, \bx\wedge \by \rangle= \textrm{det}(\bw, \bx, \by), $$ so that $\bx \wedge \by$ is pseudo-orthogonal to both $\bx$ and $\by$. Moreover, if $\bx$ is a unit timelike vector, $\by$ is a unit spacelike vector, $\langle \bx, \by\rangle=0$ and $\bx\wedge \by=\bz$, then by a straightforward calculation we have $\bz\wedge \bx=\by,\ \by\wedge \bz=-\bx.$ We define the {\it hyperbolic $2$-space}, the {\it de Sitter $2$-space} and the {\it lightcone} by \begin{align*} H^2(-1) &=\{\bx\in \R_1^3\ |\ \langle\bx,\bx\rangle =-1\}, \\ S^2_1 &=\{\bx\in \R_1^3\ |\ \langle\bx,\bx\rangle =1\}, \\ LC^{*} &=\{\bx \in \R_1^3 \setminus \{0\}\ |\ \langle \bx, \bx\rangle=0\}. \end{align*} If $\bx=(x_0,x_1,x_2)$ is a non-zero lightlike vector, then $x_0 \not=0$. Therefore, we have $$ \widetilde{\bx}=\left(1,\frac{x_1}{x_0},\frac{x_2}{x_0}\right) \in S^1_+=\{\bx=(x_0,x_1,x_2) \in LC^*| \ x_0=1\}. $$ We call $S^1_+$ the {\it lightcone circle}. In this paper, we consider two double Legendrian fibrations (cf. \cite{Izumiya09}): \begin{itemize} \item[(1)] $\Delta_1 =\{(\bv,\bw) \in H^2(-1) \times S^2_1 \ | \ \langle \bv,\bw \rangle=0 \},$ \par $\pi_{11}:\Delta_1 \to H^2(-1), \pi_{11}(\bv,\bw)=\bv, \ \pi_{12}:\Delta_1 \to S^2_1, \pi_{12}(\bv,\bw)=\bw$, \par $\theta_{11}=\langle d\bv, \bw \rangle|_{\Delta_1}, \theta_{12}=\langle \bv,d\bw \rangle|_{\Delta_1}$. \par \item[(2)] $\Delta_4 =\{(\bv,\bw) \in LC^* \times LC^* \ | \ \langle \bv,\bw \rangle=-2 \},$ \par $\pi_{41}:\Delta_4 \to LC^*, \pi_{41}(\bv,\bw)=\bv, \ \pi_{42}:\Delta_4 \to LC^*, \pi_{42}(\bv,\bw)=\bw$, \par $\theta_{41}=\langle d\bv, \bw \rangle|_{\Delta_4}, \theta_{42}=\langle \bv,d\bw \rangle|_{\Delta_4}$. \end{itemize} Note that $\Phi:\Delta_4 \to \Delta_1, \Phi(\bv,\bw)=((\bv+\bw)/2, (\bv-\bw)/2)$ is a contact diffeomorphism. \par Let $O(1,2)$ be the Lorentz group which consists of square matrices $A$ of order $3$ such that $ ^tAZA=Z$, where $$ Z:={\rm diag}(-1,1,1)=\left(\begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right). $$ We set $SO(1,2):=\{A \in O(1,2) \mid {\rm det} A=1\}.$ For vectors $\ba, \bb \in \R^3_1$ and $A \in SO(1,2)$, we have $$ \langle \ba, \bb \rangle=\langle A(\ba), A(\bb) \rangle, \ A(\ba \wedge \bb)=A(\ba) \wedge A(\bb). $$ \subsection{Lightcone framed curves} In order to investigate mixed types of curves in the Lorentz-Minkowski space, we introduce the lightcone frame. \begin{definition}{\rm Let $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ be a smooth mapping. We say that $(\gamma,\ell^+,\ell^-)$ is a {\it lightcone framed curve} if there exist smooth functions $\alpha, \beta:I \to \R$ such that $\dot{\gamma}(t)=\alpha(t) \ell^+(t)+\beta(t) \ell^-(t)$ for all $t \in I$. We also say that $\gamma$ is a {\it lightcone framed base curve} if there exists a smooth mapping $(\ell^+,\ell^-):I \to \Delta_4$ such that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. } \end{definition} Since $(\ell^+,\ell^-):I \to \Delta_4$, we have $$ \langle \ell^+(t),\ell^+(t)\rangle=0, \ \langle \ell^-(t),\ell^-(t)\rangle=0, \ \langle \ell^+(t),\ell^-(t)\rangle=-2 $$ for all $t \in I$.
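As a small consistency check, the following Python sketch (an illustration with the concrete lightlike pair $\ell^{\pm}=(1,\pm\cos\theta,\pm\sin\theta)$, which reappears later in this section as the lightcone circle frame; the helper names are ours and \texttt{sympy} is assumed) verifies these three conditions and that $\Phi$ sends such a pair in $\Delta_4$ to a pair in $\Delta_1$.
\begin{verbatim}
# Sketch: a pair (v, w) in Delta_4 and its image under
# Phi(v, w) = ((v + w)/2, (v - w)/2) in Delta_1.
import sympy as sp

th = sp.symbols('theta', real=True)

def pairing(x, y):
    return -x[0]*y[0] + x[1]*y[1] + x[2]*y[2]

v = (1,  sp.cos(th),  sp.sin(th))   # lightlike
w = (1, -sp.cos(th), -sp.sin(th))   # lightlike, <v, w> = -2

a = tuple(sp.Rational(1, 2)*(vi + wi) for vi, wi in zip(v, w))
b = tuple(sp.Rational(1, 2)*(vi - wi) for vi, wi in zip(v, w))

print(sp.simplify(pairing(v, v)), sp.simplify(pairing(w, w)))  # 0 0
print(sp.simplify(pairing(v, w)))                              # -2
print(sp.simplify(pairing(a, a)))                              # -1, i.e. a in H^2(-1)
print(sp.simplify(pairing(b, b)))                              #  1, i.e. b in S^2_1
print(sp.simplify(pairing(a, b)))                              #  0
\end{verbatim}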
By $\dot{\gamma}(t)=\alpha(t) \ell^+(t)+\beta(t) \ell^-(t)$, $\langle \dot{\gamma}(t),\dot{\gamma}(t) \rangle=-4\alpha(t)\beta(t)$. Therefore, $\gamma$ is spacelike, lightlike, or timelike at $t$ if $\alpha(t)\beta(t)<0$, $\alpha(t)\beta(t)=0$ with $\alpha(t) \not=0$ or $\beta(t) \not=0$, or $\alpha(t)\beta(t)>0$, respectively. Moreover, $t$ is a singular point of $\gamma$ if and only if $\alpha(t)=\beta(t)=0$. \par We denote $(\bn^T,\bn^S):I \to \Delta_1$, $$ \bn^T(t)=\frac{\ell^+(t)+\ell^-(t)}{2}, \ \bn^S(t)=\frac{\ell^+(t)-\ell^-(t)}{2}. $$ We define $\bn:I \to S^2_1, \bn(t)=\bn^T(t) \wedge \bn^S(t)=-(1/2)\ell^+(t) \wedge \ell^-(t)$. Then $\{\bn^T(t),\bn^S(t),\bn(t)\}$ is a pseudo-orthonormal frame of $\gamma(t)$. We say that\\ $\{\ell^+(t),\ell^-(t),\bn(t)\}$ is a {\it lightcone frame} of $\gamma(t)$. Note that the lightcone frame is not a pseudo-orthonormal frame. By a direct calculation, we have \begin{eqnarray*}\left( \begin{array}{c} \dot{\ell^{+}}(t)\\ \dot{\ell^{-}}(t)\\ \dot{\bn}(t) \end{array} \right) &=& \left( \begin{array}{ccc} \kappa_1(t)&0&2\kappa_3(t) \\ 0&-\kappa_1(t)&2\kappa_2(t)\\ \kappa_2(t)&\kappa_3(t)&0 \end{array} \right) \left( \begin{array}{c} \ell^+(t)\\ \ell^-(t)\\ \bn(t) \end{array} \right), \\ \dot{\gamma}(t)&=&\alpha(t)\ell^+(t)+\beta(t)\ell^-(t). \end{eqnarray*} We call $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$ a {\it (lightcone) curvature} of the lightcone framed curve $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$. \par On the other hand, we have \begin{eqnarray*}\label{Frenet-type3} \left( \begin{array}{c} \dot{\bn}^{T}(t)\\ \dot{\bn}^{S}(t)\\ \dot{\bn}(t) \end{array} \right) &=& \left( \begin{array}{ccc} 0&\kappa_1(t)&\kappa^T(t) \\ \kappa_1(t)&0&-\kappa^S(t)\\ \kappa^T(t)&\kappa^S(t)&0 \end{array} \right) \left( \begin{array}{c} \bn^T(t)\\ \bn^S(t)\\ \bn(t) \end{array} \right), \\ \dot{\gamma}(t)&=&a(t)\bn^T(t)+b(t) \bn^S(t), \label{curve3} \end{eqnarray*} where $\kappa^T=\kappa_2+\kappa_3, \ \kappa^S=\kappa_2-\kappa_3, a=\alpha+\beta$ and $b=\alpha-\beta$.\par In \cite{Chen-Takahashi}, we proved existence and uniqueness theorem for a special frame. We can also prove the general case by the similar arguments, we omit it. \begin{theorem}[Existence theorem for lightcone framed curves]\label{existence} Let $(\kappa_1,\kappa_2,\\\kappa_3,\alpha,\beta):$ $I \to \R^5$ be a smooth mapping. There exists a lightcone framed curve $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ such that the curvature is $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$. \end{theorem} \begin{definition}\label{congruence}{\rm We say that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ are {\it congruent as lightcone framed curves} if there exist $A \in SO(1,2)$ and $\ba \in \R^3_1$ such that $\overline{\gamma}(t)=A(\gamma(t))+\ba, \ \overline{\ell}^+(t)=A(\ell^+(t)), \ \overline{\ell}^-(t)=A(\ell^-(t))$ for all $t \in I$. } \end{definition} \begin{theorem}[Uniqueness theorem for lightcone framed curves]\label{uniqueness} Let $(\gamma,\ell^+,\\\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ be lightcone framed curves with curvatures $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$ and $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$, respectively. 
$(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are congruent as lightcone framed curves if and only if $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)=(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$. \end{theorem} \begin{proposition}\label{reflection} Let $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ be a lightcone framed curve with curvature $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$. Then $(\gamma,\ell^-,\ell^+)$ is also a lightcone framed curve with the curvature $(-\kappa_1,-\kappa_3,-\kappa_2,\beta,\alpha)$. \end{proposition} We consider a special lightcone frame. We denote $\overline{\ell}^+(t)=(1/c(t))\ell^+(t)$ and $\overline{\ell}^-(t)=c(t)\ell^-(t)$, where $c:I \to \R$ is a non-zero function. By a direct calculation, we have $$ \overline{\ell}^+(t) \wedge \overline{\ell}^-(t)=\ell^+(t) \wedge \ell^-(t)=-2\bn(t). $$ Then $\{\overline{\ell}^+(t),\overline{\ell}^-(t),\bn(t) \}$ is also a lightcone frame of $\gamma(t)$. By a direct calculation, we have \begin{eqnarray*} \left( \begin{array}{c} \dot{\overline{\ell}^{+}}(t)\\ \dot{\overline{\ell}^{-}}(t)\\ \dot{\bn}(t) \end{array} \right) &=& \left( \begin{array}{ccc} \overline{\kappa}_1(t)&0&2\overline{\kappa}_3(t) \\ 0&-\overline{\kappa}_1(t)&2\overline{\kappa}_2(t)\\ \overline{\kappa}_2(t)&\overline{\kappa}_3(t)&0 \end{array} \right) \left( \begin{array}{c} \overline{\ell}^+(t)\\ \overline{\ell}^-(t)\\ \bn(t) \end{array} \right), \\ \dot{\gamma}(t)&=&\overline{\alpha}(t) \overline{\ell}^+(t)+\overline{\beta}(t) \overline{\ell}^-(t), \end{eqnarray*} where \begin{gather*} \overline{\kappa}_1(t) =\frac{-\dot{c}(t)+c(t)\kappa_1(t)}{c^2(t)}, \ \overline{\kappa}_2(t)=c(t)\kappa_2(t), \ \overline{\kappa}_3(t)=\frac{\kappa_3(t)}{c(t)}, \\ \overline{\alpha}(t)=c(t)\alpha(t), \ \overline{\beta}(t)=\frac{\beta(t)}{c(t)}. \end{gather*} If we take $\dot{c}(t)=c(t)\kappa_1(t)$, that is, $c(t)=Ae^{\int \kappa_1(t) dt}$, where $A$ is a constant, then $\overline{\kappa}_1(t)=0$. Hence, we can always take $\overline{\kappa}_1(t)=0$. We say that the lightcone frame $\{\overline{\ell}^+(t),\overline{\ell}^-(t),\bn(t) \}$ with $\overline{\kappa}_1(t)=0$ is an {\it adapted frame}. Moreover, we consider a special lightcone frame (cf. \cite{Liu-Pei}). Let $(\gamma,\ell^+,{\ell}^-):I \to \R^3_1 \times \Delta_4$ be a lightcone framed curve, where ${\ell}^+$ and $ {\ell}^-:I \to S^1_+$. Then there exists a smooth function $\theta:I \to \R$ such that $$ {\ell}^+(t)=(1,\cos \theta(t),\sin \theta(t)), \ {\ell}^-(t)=(1,-\cos \theta(t),-\sin \theta(t)). $$ Therefore, $$ \bn(t)=-\frac{1}{2} {\ell}^+(t) \wedge {\ell}^-(t)=(0,-\sin \theta(t),\cos \theta(t)). $$ We call the above lightcone frame $\{\ell^+(t),\ell^-(t),\bn(t)\}$ a {\it lightcone circle frame} of $\gamma(t)$. By a direct calculation, we have \begin{eqnarray*} \left( \begin{array}{c} \dot{\ell^{+}}(t)\\ \dot{\ell^{-}}(t)\\ \dot{\bn}(t) \end{array} \right) &=& \left( \begin{array}{ccc} 0&0&\dot{\theta}(t) \\ 0&0&-\dot{\theta}(t)\\ -\dot{\theta}(t)/2&\dot{\theta}(t)/2&0 \end{array} \right) \left( \begin{array}{c} \ell^+(t)\\ \ell^-(t)\\ \bn(t) \end{array} \right), \\ \dot{\gamma}(t)&=&\alpha(t)\ell^+(t)+\beta(t)\ell^-(t). \end{eqnarray*} In this case, the curvature is given by $(0,-\dot{\theta}(t)/2,\dot{\theta}(t)/2,\alpha(t),\beta(t))$. In \cite{Liu-Pei}, $(\alpha,\beta,\theta)$ is called a {\it lightcone semi-polar coordinate}. 
and evolutes of a mixed type curve are investigated by using lightcone circle frame of $\gamma$. \section{Bertrand types of lightcone framed curves} Let $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ be lightcone framed curves with curvatures $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$ and $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$. \begin{definition}{\rm We say that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are {\it $(\bv,\overline{\bw})$-mates} if there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bv(t)$ and $\bv(t)=\overline{\bw}(t)$ for all $t \in I$. We also say that $(\gamma,\ell^+,\ell^-)$ is a {\it $(\bv,\overline{\bw})$-Bertrand lightcone framed curve} (or, {\it $(\bv,\overline{\bw})$-Bertrand-Mannheim lightcone framed curve}) if there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bv,\overline{\bw})$-mates. Here $\bv(t)$ and $\overline{\bw}(t)$ are non-zero vectors. } \end{definition} Note that $\lambda \not\equiv 0$ means that $\{t \in I | \lambda(t) \not=0\}$ is a dense subset of $I$. Then $\lambda$ is not identically zero for any non-trivial subintervals of $I$. It follows that $\gamma$ and $\overline{\gamma}$ are different space curves for any non-trivial subintervals of $I$. By definition, $\bv$ and $\overline{\bw}$ are the same causal type. If the both vectors are lightlike, timelike or spacelike, then we say that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are {\it lightlike mate, timelike mate or spacelike mate}, respectively. Since $\ell^+$ and $\ell^-$ are lightlike, $\bn^T$ is a timelike, $\bn$ and $\bn^S$ are spacelike vectors, lightlike mates are four cases, timelike mate is one case, and spacelike mates are four cases. Moreover, we can consider naturally other lightlike vectors $\widetilde{\ell}^+=\bn^T+\bn$ and $\widetilde{\ell}^-=\bn^T-\bn$ by the moving frame $\{\bn^T,\bn^S,\bn\}$ of $\gamma$. We also add four cases in lightlike mates. By the construction, we have $(\widetilde{\ell}^+,\widetilde{\ell}^-):I \to \Delta_4$, however, $(\gamma,\widetilde{\ell}^+,\widetilde{\ell}^-)$ is not always a lightcone framed curve. \par We clarify that the existence conditions of Bertrand lightcone framed curves in all cases. \subsection{Lightlike mates} Firstly, we consider lightlike mates. There are eight cases. Let $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ be a lightcone framed curve with curvature $(\kappa_1,\kappa_2,\kappa_3,\alpha,\beta)$. \begin{theorem}\label{lightlike-mate-1} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^+,\overline{\ell}^+)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^+,\overline{\ell}^+)$-Bertrand lightcone framed curve. 
Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^+,\overline{\ell}^+)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^+(t)$ and $\ell^+(t)=\overline{\ell}^+(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^+(t)$, we have $$ \overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t)=(\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t))\ell^+(t)+\beta(t)\ell^-(t)+2\lambda(t)\kappa_3(t)\bn(t). $$ Since $\ell^+(t)=\overline{\ell}^+(t)$, we have $\overline{\beta}(t)=\beta(t)$. Moreover, there exist smooth functions $a_i,b_i,c_i:I \to \R, i=1,2$ such that \begin{align*} \overline{\ell}^-(t) &=a_1(t)\ell^+(t)+b_1(t)\ell^{-}(t)+c_1(t)\bn(t),\\ \overline{\bn}(t) &=a_2(t)\ell^+(t)+b_2(t)\ell^{-}(t)+c_2(t)\bn(t), \end{align*} where $\overline{\bn}(t)=-(1/2) \overline{\ell}^+(t) \wedge \overline{\ell}^-(t)$. By the condition $(\overline{\ell}^+(t),\overline{\ell}^-(t)) \in \Delta_4$, we have $$ a_1(t)=a_2^2(t), \ b_1(t)=1, \ c_1(t)=2a_2(t), \ b_2(t)=0, \ c_2(t)=1. $$ It follows that $$ \overline{\ell}^-(t) =a_2^2(t)\ell^+(t)+\ell^{-}(t)+2a_2(t)\bn(t), \ \overline{\bn}(t) =a_2(t)\ell^+(t)+\bn(t). $$ Moreover, $$ \overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t)= (\overline{\alpha}(t)+\beta(t)a_2^2(t))\ell^+(t)+\beta(t)\ell^-(t)+2a_2(t)\bn(t). $$ Therefore, we have $\overline{\alpha}(t)+\beta(t)a_2^2(t)=\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t), \lambda(t)\kappa_3(t)=a_2(t)\beta(t)$. If we rewrite $a_2$ as $k$, then $k(t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^+(t), \overline{\ell}^{+}(t)=\ell^+(t),\overline{\ell}^-(t)=k^2(t)\ell^+(t)+\ell^-(t)+2k(t)\bn(t)$, \begin{align*} \dot{\overline{\gamma}}(t) &=(\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t))\ell^+(t)+\beta(t)\ell^-(t)+2\lambda(t)\kappa_3(t)\bn(t)\\ &=(\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t)-\beta(t)k^2(t))\overline{\ell}^+(t)+\beta(t)\overline{\ell}^-(t)\\ &=\overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t). \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^+,\overline{\ell}^+)$-mates. \enD \begin{proposition}\label{curvature-lightlike-mate-1} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. 
Then the curvature $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$ of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^+(t), \overline{\ell}^{+}(t)=\ell^+(t),\overline{\ell}^-(t)=k^2(t)\ell^+(t)+\ell^-(t)+2k(t)\bn(t)$ is given by \begin{gather*} \overline{\kappa}_1(t)=\kappa_1(t)-2\kappa_3(t)k(t), \ \overline{\kappa}_2(t)=\dot{k}(t)+\kappa_1(t)k(t)+\kappa_2(t)-\kappa_3(t)k^2(t), \\ \overline{\kappa}_3(t)=\kappa_3(t), \ \overline{\alpha}(t)=\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t)-\beta(t)k^2(t), \ \overline{\beta}(t)=\beta(t). \end{gather*} \end{proposition} \demo By a direct calculation, we have $\overline{\bn}(t)=k(t)\ell^+(t)+\bn(t)$ and \begin{align*} \dot{\overline{\ell}^+}(t)&=\kappa_1(t)\ell^+(t)+2\kappa_3(t)\bn(t)\\ &=(\kappa_1(t)-2\kappa_3(t)k(t))\overline{\ell}^{+}+2\kappa_3(t)\overline{\bn}(t),\\ \dot{\overline{\ell}^-}(t)&=2k(t)\dot{k}(t)\ell^+(t)+k^2(t)(\kappa_1(t)\ell^+(t)+2\kappa_3(t)\bn(t))-\kappa_1(t)\ell^-(t)\\ &\quad +2\kappa_2(t)\bn(t)+2\dot{k}(t)\bn(t)+2k(t)(\kappa_2(t)\ell^+(t)+\kappa_3(t)\ell^{-}(t))\\ &=-(\kappa_1(t)-2\kappa_3(t)k(t))\overline{\ell}^-(t)\\ &\quad +2(\dot{k}(t)+\kappa_1(t)k(t)+\kappa_2(t)-\kappa_3(t)k^2(t))\overline{\bn}(t). \end{align*} Then we have $$ \overline{\kappa}_1(t)=\kappa_1(t)-2\kappa_3(t)k(t), \ \overline{\kappa}_2(t)=\dot{k}(t)+\kappa_1(t)k(t)+\kappa_2(t)-\kappa_3(t)k^2(t), \ \overline{\kappa}_3=\kappa_3(t). $$ By the proof of Theorem \ref{lightlike-mate-1}, we also have $$ \overline{\alpha}(t)=\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t)-\beta(t)k^2(t), \ \overline{\beta}(t)=\beta(t). $$ \enD \begin{remark}\label{lightlike-mate-1-re}{\rm $(1)$ Suppose that $\kappa_3(t) \not\equiv0$. If we take $k(t)=1$ and $\lambda(t)=\beta(t)/\kappa_3(t)$, then $(\gamma,\ell^+,\ell^-)$ is a $(\ell^+,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-1}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+(\beta(t)/\kappa_3(t))\ell^+(t),\\ \overline{\ell}^+(t)=\ell^+(t), \overline{\ell}^-(t)=\ell^+(t)+\ell^-(t)+2\bn(t)=2\widetilde{\ell}^+(t)$. It seems to be a kind of an evolute of mixed type curves with the direction $\ell^+$.\par $(2)$ Suppose that $\beta(t) \not\equiv0$. If we take $k(t)=\kappa_2(t)$ and $\lambda(t)=\beta(t)$, then $(\gamma,\ell^+,\ell^-)$ is a $(\ell^+,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-1}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+\beta(t)\ell^+(t), \overline{\ell}^+(t)=\ell^+(t), \overline{\ell}^-(t)=\kappa_2^2(t)\ell^+(t)+\ell^-(t)+2\beta(t)\bn(t)$. } \end{remark} \begin{theorem}\label{lightlike-mate-2} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^+,\overline{\ell}^-)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. That is, $(\gamma,\ell^+,\ell^-)$ is a $(\ell^+,\overline{\ell}^+)$-Bertrand lightcone framed curve if and only if $(\gamma,\ell^+,\ell^-)$ is a $(\ell^+,\overline{\ell}^-)$-Bertrand lightcone framed curve. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^+,\overline{\ell}^-)$-Bertrand lightcone framed curve. 
Then there exists a lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^+,\overline{\ell}^-)$-mates. By Proposition \ref{reflection}, $(\overline{\gamma},\overline{\ell}^-,\overline{\ell}^+)$ is also a lightcone framed curve. By Theorem \ref{lightlike-mate-1}, there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. The convese also holds by Theorem \ref{lightlike-mate-1} and Proposition \ref{reflection}. \enD By Propositions \ref{reflection} and \ref{curvature-lightlike-mate-1}, we have the following. \begin{proposition}\label{curvature-lightlike-mate-2} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\beta(t)=\lambda(t)\kappa_3(t)$ for all $t \in I$. Then the curvature $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$ of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^+(t), \overline{\ell}^{+}(t)=k^2(t)\ell^+(t)+\ell^-(t)+2k(t)\bn(t),\overline{\ell}^-(t)=\ell^+(t)$ is given by \begin{align*} &\overline{\kappa}_1(t)=-(\kappa_1(t)-2\kappa_3(t)k(t)), \ \overline{\kappa}_2(t)=-\kappa_3(t), \\ &\overline{\kappa}_3=-(\dot{k}(t)+\kappa_1(t)k(t)+\kappa_2(t)-\kappa_3(t)k^2(t)), \\ &\overline{\alpha}(t)=\beta(t), \ \overline{\beta}(t)=\alpha(t)+\dot{\lambda}(t)+\lambda(t)\kappa_1(t)-\beta(t)k^2(t). \end{align*} \end{proposition} \begin{remark}{\rm We can also prove Theorem \ref{lightlike-mate-2} and Proposition \ref{curvature-lightlike-mate-2} by the similar calculations of the proof of Theorem \ref{lightlike-mate-1} and Proposition \ref{curvature-lightlike-mate-1}. } \end{remark} \begin{theorem}\label{lightlike-mate-3} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^-,\overline{\ell}^+)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^-,\overline{\ell}^+)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^-,\overline{\ell}^+)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^-(t)$ and $\ell^-(t)=\overline{\ell}^+(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^-(t)$, we have $$ \overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t)=\alpha(t)\ell^+(t)+(\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t))\ell^-(t)+2\lambda(t)\kappa_2(t)\bn(t). $$ Since $\ell^-(t)=\overline{\ell}^+(t)$, there exist smooth functions $a_i,b_i,c_i:I \to \R, i=1,2$ such that \begin{align*} \overline{\ell}^-(t) &=a_1(t)\ell^+(t)+b_1(t)\ell^{-}(t)+c_1(t)\bn(t),\\ \overline{\bn}(t) &=a_2(t)\ell^+(t)+b_2(t)\ell^{-}(t)+c_2(t)\bn(t), \end{align*} where $\overline{\bn}(t)=-(1/2) \overline{\ell}^+(t) \wedge \overline{\ell}^-(t)$. 
By the condition $(\overline{\ell}^+(t),\overline{\ell}^-(t)) \in \Delta_4$, we have $$ a_1(t)=1, \ b_1(t)=b_2^2(t), \ c_1(t)=-2b_2(t), \ a_2(t)=0, \ c_2(t)=-1. $$ It follows that $$ \overline{\ell}^-(t) =\ell^+(t)+b_2^2(t)\ell^{-}(t)-2b_2(t)\bn(t), \ \overline{\bn}(t) =b_2(t)\ell^-(t)-\bn(t). $$ Moreover, $$ \overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t)= \overline{\beta}(t)\ell^+(t)+(\overline{\alpha}(t)+\overline{\beta}(t)b_2^2(t))\ell^-(t)-2b_2(t)\overline{\beta}(t)\bn(t). $$ Therefore, we have $\overline{\beta}(t)=\alpha(t), \overline{\alpha}(t)+\overline{\beta}(t)b_2^2(t)=\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t),\\-b_2(t)\alpha(t)=\lambda(t)\kappa_2(t)$. If we rewrite $b_2$ as $-k$, then $k(t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k(t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^-(t), \overline{\ell}^{+}(t)=\ell^-(t),\overline{\ell}^-(t)=\ell^+(t)+k^2(t)\ell^-(t)+2k(t)\bn(t)$, \begin{align*} \dot{\overline{\gamma}}(t) &=\alpha(t)\ell^+(t)+(\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t))\ell^-(t)+2\lambda(t)\kappa_2(t)\bn(t)\\ &=(\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t)-k^2(t)\alpha(t))\overline{\ell}^+(t)+\alpha(t)\overline{\ell}^-(t)\\ &=\overline{\alpha}(t)\overline{\ell}^+(t)+\overline{\beta}(t)\overline{\ell}^-(t). \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^-,\overline{\ell}^+)$-mates. \enD \begin{proposition}\label{curvature-lightlike-mate-3} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. Then the curvature $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$ of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^-(t), \overline{\ell}^{+}(t)=\ell^-(t),\overline{\ell}^-(t)=\ell^+(t)+k^2(t)\ell^-(t)+2k(t)\bn(t)$ is given by \begin{align*} & \overline{\kappa}_1(t)=-\kappa_1(t)-2k(t)\kappa_2(t), \ \overline{\kappa}_2(t)=-\dot k(t)+k(t)\kappa_1(t)+k^2(t)\kappa_2(t)-\kappa_3(t), \\ & \overline{\kappa}_3(t)=-\kappa_2(t), \ \overline{\alpha}(t)=\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t)-k^2(t)\alpha(t), \ \overline{\beta}(t)=\alpha(t). \end{align*} \end{proposition} \demo By a direct calculation, we have $\overline{\bn}(t)=-k(t)\ell^-(t)-\bn(t)$ and \begin{align*} \dot{\overline{\ell}^+}(t)&=-\kappa_1(t)\ell^-(t)+2\kappa_2(t)\bn(t)\\ &=(-2k(t)\kappa_2(t)-\kappa_1(t))\overline{\ell}^{+}-2\kappa_2(t)\overline{\bn}(t),\\ \dot{\overline{\bn}}(t)&=-\kappa_2(t)\ell^+(t)+(-\dot k(t)+k(t)\kappa_1(t)-\kappa_3(t))\ell^-(t)-2k(t)\kappa_2(t)\bn\\ &=(-\dot k(t)+k(t)\kappa_1(t)+k^2(t)\kappa_2(t)-\kappa_3(t))\overline{\ell}^+(t)-\kappa_2(t)\overline{\ell}^-(t). \end{align*} Then we have \begin{align*} &\overline{\kappa}_1(t)=-\kappa_1(t)-2k(t)\kappa_2(t), \\ &\overline{\kappa}_2(t)=-\dot k(t)+k(t)\kappa_1(t)+k^2(t)\kappa_2(t)-\kappa_3(t), \\ &\overline{\kappa}_3(t)=-\kappa_2(t). 
\end{align*} By the proof of Theorem \ref{lightlike-mate-3}, we also have $$ \overline{\alpha}(t)=\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t)-k^2(t)\alpha(t), \ \overline{\beta}(t)=\alpha(t). $$ \enD \begin{remark}\label{lightlike-mate-3-re}{\rm $(1)$ Suppose that $\kappa_2(t) \not\equiv0$. If we take $k(t)=1$ and $\lambda(t)=\alpha(t)/\kappa_2(t)$, then $(\gamma,\ell^+,\ell^-)$ is a $(\ell^-,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-3}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+(\alpha(t)/\kappa_2(t))\ell^-(t), \overline{\ell}^+(t)\\=\ell^-(t), \overline{\ell}^-(t)=\ell^+(t)+\ell^-(t)+2\bn(t)=2\widetilde{\ell}^+(t)$. It seems to be a kind of an evolute of mixed type curves with the direction $\ell^-$.\par $(2)$ Suppose that $\alpha(t) \not\equiv0$. If we take $k(t)=\kappa_2(t)$ and $\lambda(t)=\alpha(t)$, then $(\gamma,\ell^+,\ell^-)$ is a $(\ell^-,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-3}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+\alpha(t)\ell^-(t), \overline{\ell}^+(t)=\ell^-(t), \overline{\ell}^-(t)=\ell^+(t)+\kappa_2^2(t)\ell^-(t)+2\kappa_2(t)\bn(t)$. } \end{remark} \begin{theorem}\label{lightlike-mate-4} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^-,\overline{\ell}^-)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. That is, $(\gamma,\ell^+,\ell^-)$ is a $(\ell^-,\overline{\ell}^+)$-Bertrand lightcone framed curve if and only if $(\gamma,\ell^+,\ell^-)$ is a $(\ell^-,\overline{\ell}^-)$-Bertrand lightcone framed curve. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\ell^-,\overline{\ell}^-)$-Bertrand lightcone framed curve. Then there exists a lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\ell^-,\overline{\ell}^+)$-mates. By Proposition \ref{reflection}, $(\overline{\gamma},\overline{\ell}^-,\overline{\ell}^+)$ is also a lightcone framed curve. By Theorem \ref{lightlike-mate-3}, there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. The convese also holds by Theorem \ref{lightlike-mate-3} and Proposition \ref{reflection}. \enD By Propositions \ref{reflection} and \ref{curvature-lightlike-mate-3}, we have the following. \begin{proposition}\label{curvature-lightlike-mate-4} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k (t)\alpha(t)=\lambda(t)\kappa_2(t)$ for all $t \in I$. 
Then the curvature $(\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})$ of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\ell^-(t), \overline{\ell}^{+}(t)=\ell^+(t)+k^2(t)\ell^-(t)+2k(t)\bn(t),\overline{\ell}^-(t)=\ell^-(t)$ is given by \begin{align*} & \overline{\kappa}_1(t)=\kappa_1(t)+2k(t)\kappa_2(t), \ \overline{\kappa}_2(t)=\kappa_2(t), \\ & \overline{\kappa}_3(t)=\dot k(t)-k(t)\kappa_1(t)-k^2(t)\kappa_2(t)+\kappa_3(t),\\ & \overline{\alpha}(t)=\alpha(t), \ \overline{\beta}(t)=\beta(t)+\dot\lambda(t)-\lambda(t)\kappa_1(t)-k^2(t)\alpha(t). \end{align*} \end{proposition} \begin{theorem}\label{lightlike-mate-5} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^+, \overline{\ell}^+)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda, k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0$$ for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^+, \overline{\ell}^+)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^+, \overline{\ell}^+)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^+(t)$ and $\widetilde{\ell}^+(t)=\overline{\ell}^+(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^+(t)$, we have \begin{align*} \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)&=(\dot\lambda(t)+\lambda(t)\kappa^T(t))\bn(t)+(a(t)+\dot{\lambda}(t)+\lambda(t)\kappa^T(t))\bn^T(t)\\ &\quad+(b(t)+\lambda(t)(\kappa_1(t)+\kappa^S(t)))\bn^S(t). \end{align*} Since $\widetilde{\ell}^+(t)=\overline{\ell}^+(t)$, there exist smooth functions $a_i,b_i,c_i:I \to \R, i=1,2$ such that \begin{align*} \overline{\ell}^-(t) &=a_1(t)\widetilde{\ell}^+(t)+b_1(t)\widetilde{\ell}^-(t)+c_1(t)\bn^S(t)\\ &=(a_1(t)+b_1(t))\bn^T(t)+c_1(t)\bn^S(t)+(a_1(t)-b_1(t))\bn(t),\\ \overline{\bn}(t) &=a_2(t)\widetilde{\ell}^+(t)+b_2(t)\widetilde{\ell}^-(t)(t)+c_2(t)\bn^S(t)\\ &=(a_2(t)+b_2(t))\bn^T(t)+c_2(t)\bn^S(t)+(a_2(t)-b_2(t))\bn(t). \end{align*} where $\overline{\bn}(t)=-(1/2) \overline{\ell}^+(t) \wedge \overline{\ell}^-(t)$. By the condition $(\overline{\ell}^+(t),\overline{\ell}^-(t)) \in \Delta_4$, we have $$ a_1(t)=a_2^2(t), \ b_1(t)=1, \ c_1(t)=-2a_2(t), \ b_2(t)=0, \ c_2(t)=-1. $$ It follows that \begin{align*} \overline{\ell}^-(t) &=(a_2^2(t)+1)\bn^T(t)-2a_2(t)\bn^S(t)+(a_2^2(t)-1)\bn(t), \\ &=a_2^2(t)\widetilde{\ell}^+(t)+\widetilde{\ell}^-(t)-2a_2(t)\bn^S(t),\\ \overline{\bn}(t) &=a_2(t)\bn^T(t)-\bn^S(t)+a_2(t)\bn(t) \\ &=a_2(t)\widetilde{\ell}^+(t)-\bn^S(t). \end{align*} Moreover, \begin{align*} \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)&=(\overline{\alpha}(t)+(a_2^2(t)-1)\overline{\beta}(t))\bn(t)\\ &\quad+(\overline{\alpha}(t)+(a_2^2(t)+1)\overline{\beta}(t))\bn^T(t) -2a_2(t)\overline{\beta}(t)\bn^S(t). 
\end{align*} Therefore, we have $\overline{\alpha}(t)+(a_2^2(t)+1)\overline{\beta}(t)=\alpha(t)+\beta(t)+\dot{\lambda}(t)+\lambda(t)\kappa^T(t)$. If we rewrite $a_2$ as $k$, then $k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^+(t), \overline{\ell}^{+}(t)=\widetilde{\ell}^+(t)=\bn^T(t)+\bn(t),\overline{\ell}^-(t)=(k^2(t)+1)\bn^T(t)-2k(t)\bn^S(t)+(k^2(t)-1)\bn(t)$, \begin{align*} \dot{\overline{\gamma}}(t) &=(\dot\lambda(t)+\lambda(t)\kappa^T(t))\bn(t)+(\alpha(t)+\beta(t)+\dot{\lambda}(t)+\lambda(t)\kappa^T(t))\bn^T(t)\\ &\quad+(\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa^S(t)))\bn^S(t)\\ &=(a(t)+\dot{\lambda}(t)+\lambda(t)\kappa^T(t)-a(t)k^2(t)/2)\overline{\bn}^T(t)\\ &\quad+(\dot{\lambda}(t)+\lambda(t)\kappa^T(t)-a(t)k^2(t)/2)\overline{\bn}^S(t)\\ &=\overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=\frac{1}{2}(\overline{a}(t)+\overline{b}(t))\overline{\ell}^+(t)+\frac{1}{2}(\overline{a}(t)-\overline{b}(t))\overline{\ell}^-(t), \end{align*} where \begin{align*} \overline{a}(t)=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \ \overline{b}(t)=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^+,\overline{\ell}^+)$-mates. \enD \begin{proposition}\label{curvature-lightlike-mate-5} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that \begin{align*} k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0, \end{align*} for all $t \in I$. Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^+(t), \overline{\ell}^{+}(t)=\widetilde{\ell}^+(t)=\bn^T(t)+\bn(t),\overline{\ell}^-(t)=(k^2(t)+1)\bn^T(t)-2k(t)\bn^S(t)+(k^2(t)-1)\bn(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=k(t)\kappa_1(t)+\kappa^T(t)+k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=-\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)-k(t)\kappa^T(t)-\frac{k^2(t)}{2}\kappa^S(t)-\dot k(t), \\ \overline{\kappa}^S(t)&=\frac{k^2(t)}{2}\kappa_1(t)+k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t),\\ \overline{a}(t)&=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \\ \overline{b}(t)&=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. 
\end{align*} \end{proposition} \demo By a direct calculation, \begin{align*} \overline{\bn}^T(t)&=\left(\frac{k^2(t)}{2}+1\right)\bn^T(t)-k(t)\bn^S(t)+\frac{k^2(t)}{2}\bn(t), \\ \overline{\bn}^S(t)&=-\frac{k^2(t)}{2}\bn^T(t)+k(t)\bn^S(t)+\left(-\frac{k^2(t)}{2}+1\right)\bn(t),\\ \overline{\bn}(t)&=k(t)\bn^T(t)-\bn^S(t)+k(t)\bn(t),\\ \dot{\overline{\bn}^T}(t)&=\left(\dot k(t)k(t)-k(t)\kappa_1(t)+\frac{k^2(t)}{2}\kappa^T(t)\right)\bn^T(t)\\ &\quad +\left(\left(\frac{k^2(t)}{2}+1\right)\kappa_1(t)-\dot k(t)+\frac{k^2(t)}{2}\kappa^S(t)\right)\bn^S(t)\\ &\quad +\left(\left(\frac{k^2(t)}{2}+1\right)\kappa^T(t)+k(t)\kappa^S(t)+\dot k(t)k(t)\right)\bn(t),\\ \dot{\overline{\bn}}(t)&=(\dot k(t)-\kappa_1(t)+k(t)\kappa^T(t))\bn^T(t)+k(t)(\kappa_1(t)+\kappa^S(t))\bn^S(t)\\ &\quad +(k(t)\kappa^T(t)+\kappa^S(t)+\dot k(t))\bn(t). \end{align*} Then we have \begin{align*} \overline{\kappa}_1(t)&=k(t)\kappa_1(t)+\kappa^T(t)+k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=-\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)-k(t)\kappa^T(t)-\frac{k^2(t)}{2}\kappa^S(t)-\dot k(t), \\ \overline{\kappa}^S(t)&=\frac{k^2(t)}{2}\kappa_1(t)+k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t). \end{align*} By the proof of Theorem \ref{lightlike-mate-5}, we also have $$ \overline{a}(t)=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \ \overline{b}(t)=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. $$ \enD \begin{remark}{\rm $(1)$ If $\kappa_1(t)+\kappa^S(t) \not=0, k(t)=0$ and $\lambda(t)=-b(t)/(\kappa_1(t)+\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-5}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)-(b(t)/(\kappa_1(t)+\kappa^S(t)))\widetilde{\ell}^+(t), \overline{\ell}^+(t)=\widetilde{\ell}^+(t), \overline{\ell}^-(t)=\bn^T(t)-\bn(t)=\widetilde{\ell}^-(t)$. \par $(2)$ If $\kappa_1(t)+\kappa^S(t) \not=0, k(t)=1$ and $\lambda(t)=-2\alpha(t)/(\kappa_1(t)+\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-5}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)-(2\alpha(t)/(\kappa_1(t)+\kappa^S(t)))\widetilde{\ell}^+(t), \overline{\ell}^+(t)=\widetilde{\ell}^+(t), \overline{\ell}^-(t)=2\bn^T(t)-2\bn^S(t)=2{\ell}^-(t)$. \par $(3)$ If $\kappa_1(t)+\kappa^S(t) \not=0, k(t)=-1$ and $\lambda(t)=2\beta(t)/(\kappa_1(t)+\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-5}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+(2\beta(t)/(\kappa_1(t)+\kappa^S(t)))\widetilde{\ell}^+(t), \overline{\ell}^+(t)=\widetilde{\ell}^+(t), \overline{\ell}^-(t)=2\bn^T(t)+2\bn^S(t)=2{\ell}^+(t)$. \par It seems to be a kind of psedo-circular evolutes of mixed type curves with the direction $\widetilde{\ell}^+$. 
} \end{remark} \begin{theorem}\label{lightlike-mate-6} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^+, \overline{\ell}^-)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $$ k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0 $$ for all $t \in I$. That is, $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+, \overline{\ell}^+)$-Bertrand lightcone framed curve if and only if $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+, \overline{\ell}^-)$-Bertrand lightcone framed curve. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^+, \overline{\ell}^-)$-Bertrand lightcone framed curve. Then there exists a lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^+, \overline{\ell}^+)$-mates. By Proposition \ref{reflection}, $(\overline{\gamma},\overline{\ell}^-,\overline{\ell}^+)$ is also a lightcone framed curve. By Theorem \ref{lightlike-mate-5}, there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0$ for all $t \in I$. The convese also holds by Theorem \ref{lightlike-mate-5} and Proposition \ref{reflection}. \enD By Propositions \ref{reflection} and \ref{curvature-lightlike-mate-5}, we have the following. \begin{proposition}\label{curvature-lightlike-mate-6} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))+\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)+\kappa_2(t)-\kappa_3(t))=0$$ for all $t \in I$. Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t), \overline{\ell}^{+}(t)=(k^2(t)+1)\bn^T(t)+2k(t)\bn^S(t)+(k^2(t)-1)\bn(t), \overline{\ell}^-(t)=\widetilde{\ell}^+(t)=\bn^T(t)+\bn(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=-k(t)\kappa_1(t)-\kappa^T(t)-k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)+k(t)\kappa^T(t)+\frac{k^2(t)}{2}\kappa^S(t)+\dot k(t), \\ \overline{\kappa}^S(t)&=\frac{k^2(t)}{2}\kappa_1(t)+k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t),\\ \overline{a}(t)&=\dot\lambda(t)+\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \\ \overline{b}(t)&=-\dot\lambda(t)-\lambda(t)\kappa^T(t)+\frac{k^2(t)a(t)}{2}. \end{align*} \end{proposition} \begin{theorem}\label{lightlike-mate-7} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^-, \overline{\ell}^+)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda, k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$$ for all $t \in I$. 
\end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^-, \overline{\ell}^+)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^-, \overline{\ell}^+)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t)$ and $\widetilde{\ell}^-(t)=\overline{\ell}^+(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t)$, we have \begin{align*} &\overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=(-\dot\lambda(t)+\lambda(t)\kappa^T(t))\bn(t)+(a(t)+\dot{\lambda}(t)-\lambda(t)\kappa^T(t))\bn^T(t)\\ &\quad+(b(t)+\lambda(t)(\kappa_1(t)-\kappa^S(t)))\bn^S(t). \end{align*} Since $\widetilde{\ell}^+(t)=\overline{\ell}^+(t)$, there exist smooth functions $a_i,b_i,c_i:I \to \R, i=1,2$ such that \begin{align*} \overline{\ell}^-(t) &=a_1(t)\widetilde{\ell}^+(t)+b_1(t)\widetilde{\ell}^-(t)+c_1(t)\bn^S(t)\\ &=(a_1(t)+b_1(t))\bn^T(t)+c_1(t)\bn^S(t)+(a_1(t)-b_1(t))\bn(t),\\ \overline{\bn}(t) &=a_2(t)\widetilde{\ell}^+(t)+b_2(t)\widetilde{\ell}^-(t)(t)+c_2(t)\bn^S(t)\\ &=(a_2(t)+b_2(t))\bn^T(t)+c_2(t)\bn^S(t)+(a_2(t)-b_2(t))\bn(t). \end{align*} where $\overline{\bn}(t)=-(1/2) \overline{\ell}^+(t) \wedge \overline{\ell}^-(t)$. By the condition $(\overline{\ell}^+(t),\overline{\ell}^-(t)) \in \Delta_4$, we have $$ a_1(t)=1, \ a_2(t)=0, \ b_1(t)=b_2^2(t), \ c_1(t)=2b_2(t), \ c_2(t)=1. $$ It follows that \begin{align*} \overline{\ell}^-(t) &=(1+b_2^2(t))\bn^T(t)+2b_2(t)\bn^S(t)+(1-b_2^2(t))\bn(t), \\ &=\widetilde{\ell}^+(t)+b_2^2(t)\widetilde{\ell}^-(t)+2b_2(t)\bn^S(t),\\ \overline{\bn}(t) &=b_2(t)\bn^T(t)+\bn^S(t)-b_2(t)\bn(t) \\ &=b_2(t)\widetilde{\ell}^-(t)-\bn^S(t). \end{align*} Moreover, \begin{align*} \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)&=(\overline{\alpha}(t)+(b_2^2(t)+1)\overline{\beta}(t))\bn(t)\\ &\quad-(\overline{\alpha}(t)+(b_2^2(t)+1)\overline{\beta}(t))\bn^T(t)+2b_2(t)\overline{\beta}(t)\bn^S(t). \end{align*} Therefore, we have $\overline{\alpha}(t)+(b_2^2(t)+1)\overline{\beta}(t)=\alpha(t)+\beta(t)+\dot{\lambda}(t)-\lambda(t)\kappa^T(t)$. If we rewrite $b_2$ as $k$, then $k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$ for all $t \in I$. 
If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t), \overline{\ell}^{+}(t)=\widetilde{\ell}^-(t)=\bn^T(t)-\bn(t),\overline{\ell}^-(t)=(1+k^2(t))\bn^T(t)+2k(t)\bn^S(t)+(1-k^2(t))\bn(t)$, \begin{align*} \dot{\overline{\gamma}}(t) &=(-\dot\lambda(t)+\lambda(t)\kappa^T(t))\bn(t)+(\alpha(t)+\beta(t)+\dot{\lambda}(t)-\lambda(t)\kappa^T(t))\bn^T(t)\\ &\quad+(\alpha(t)-\beta(t)+\lambda(t)(\kappa_1(t)-\kappa^S(t)))\bn^S(t)\\ &=(a(t)+\dot{\lambda}(t)-\lambda(t)\kappa^T(t)-a(t)k^2(t)/2)\overline{\bn}^T(t)\\ &\quad+(\dot{\lambda}(t)-\lambda(t)\kappa^T(t)-a(t)k^2(t)/2)\overline{\bn}^S(t)\\ &=\overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=\frac{1}{2}(\overline{a}(t)+\overline{b}(t))\overline{\ell}^+(t)+\frac{1}{2}(\overline{a}(t)-\overline{b}(t))\overline{\ell}^-(t), \end{align*} where \begin{align*} \overline{a}(t)=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \ \overline{b}(t)=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^-,\overline{\ell}^+)$-mates. \enD \begin{proposition}\label{curvature-lightlike-mate-7} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$$ for all $t \in I$. Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t), \overline{\ell}^{+}(t)=\widetilde{\ell}^-(t)=\bn^T(t)-\bn(t),\overline{\ell}^-(t)=(k^2(t)+1)\bn^T(t)+2k(t)\bn^S(t)+(-k^2(t)+1)\bn(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=-k(t)\kappa_1(t)-\kappa^T(t)+k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=-\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)-k(t)\kappa^T(t)+\frac{k^2(t)}{2}\kappa^S(t)+\dot k(t), \\ \overline{\kappa}^S(t)&=-\frac{k^2(t)}{2}\kappa_1(t)-k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t),\\ \overline{a}(t)&=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \\ \overline{b}(t)&=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. \end{align*} \end{proposition} \demo By a direct calculation, \begin{align*} \overline{\bn}^T(t)&=\left(\frac{k^2(t)}{2}+1\right)\bn^T(t)+k(t)\bn^S(t)-\frac{k^2(t)}{2}\bn(t), \\ \overline{\bn}^S(t)&=-\frac{k^2(t)}{2}\bn^T(t)-k(t)\bn^S(t)+\left(\frac{k^2(t)}{2}-1\right)\bn(t),\\ \overline{\bn}(t)&=k(t)\bn^T(t)+\bn^S(t)-k(t)\bn(t),\\ \dot{\overline{\bn}^T}(t)&=\left(\dot k(t)k(t)+k(t)\kappa_1(t)-\frac{k^2(t)}{2}\kappa^T(t)\right)\bn^T(t)\\ &\quad +\left(\left(\frac{k^2(t)}{2}+1\right)\kappa_1(t)+\dot k(t)-\frac{k^2(t)}{2}\kappa^S(t)\right)\bn^S(t)\\ &\quad +\left(\left(\frac{k^2(t)}{2}+1\right)\kappa^T(t)-k(t)\kappa^S(t)-\dot k(t)k(t)\right)\bn(t),\\ \dot{\overline{\bn}}(t)&=(\dot k(t)+\kappa_1(t)-k(t)\kappa^T(t))\bn^T(t)+k(t)(\kappa_1(t)-\kappa^S(t))\bn^S(t)\\ &\quad +(k(t)\kappa^T(t)-\kappa^S(t)-\dot k(t))\bn(t). 
\end{align*} Then we have \begin{align*} \overline{\kappa}_1(t)&=-k(t)\kappa_1(t)-\kappa^T(t)+k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=-\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)-k(t)\kappa^T(t)+\frac{k^2(t)}{2}\kappa^S(t)+\dot k(t), \\ \overline{\kappa}^S(t)&=-\frac{k^2(t)}{2}\kappa_1(t)-k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t). \end{align*} By the proof of Theorem \ref{lightlike-mate-7}, we also have $$ \overline{a}(t)=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \ \overline{b}(t)=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}. $$ \enD \begin{remark}{\rm $(1)$ If $\kappa_1(t)-\kappa^S(t) \not=0, k(t)=0$ and $\lambda(t)=-b(t)/(\kappa_1(t)-\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^-,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-7}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)-(b(t)/(\kappa_1(t)-\kappa^S(t)))\widetilde{\ell}^-(t), \overline{\ell}^+(t)=\widetilde{\ell}^+(t), \overline{\ell}^-(t)=\bn^T(t)+\bn(t)=\widetilde{\ell}^+(t)$. \par $(2)$ If $\kappa_1(t)-\kappa^S(t) \not=0, k(t)=1$ and $\lambda(t)=2\beta(t)/(\kappa_1(t)-\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^-,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-7}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)+(2\beta(t)/(\kappa_1(t)-\kappa^S(t)))\widetilde{\ell}^-(t), \overline{\ell}^+(t)=\widetilde{\ell}^-(t), \overline{\ell}^-(t)=2\bn^T(t)+2\bn^S(t)=2{\ell}^+(t)$. \par $(3)$ If $\kappa_1(t)-\kappa^S(t) \not=0, k(t)=-1$ and $\lambda(t)=-2\alpha(t)/(\kappa_1(t)-\kappa^S(t))$, then $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^-,\overline{\ell}^{+})$-Bertrand lightcone framed curve by Theorem \ref{lightlike-mate-7}. In this case, $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ is given by $\overline{\gamma}(t)=\gamma(t)-(2\alpha(t)/(\kappa_1(t)-\kappa^S(t)))\widetilde{\ell}^-(t), \overline{\ell}^+(t)=\widetilde{\ell}^-(t), \overline{\ell}^-(t)=2\bn^T(t)-2\bn^S(t)=2{\ell}^-(t)$. \par It seems to be a kind of psedo-circular evolutes of mixed type curves with the direction $\widetilde{\ell}^-$. } \end{remark} \begin{theorem}\label{lightlike-mate-8} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^-, \overline{\ell}^-)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$$ for all $t \in I$. That is, $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^-, \overline{\ell}^+)$-Bertrand lightcone framed curve if and only if $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^-, \overline{\ell}^-)$-Bertrand lightcone framed curve. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\widetilde{\ell}^-, \overline{\ell}^-)$-Bertrand lightcone framed curve. Then there exists a lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\widetilde{\ell}^-, \overline{\ell}^-)$-mates. By Proposition \ref{reflection}, $(\overline{\gamma},\overline{\ell}^-,\overline{\ell}^+)$ is also a lightcone framed curve. 
By Theorem \ref{lightlike-mate-7}, there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$ for all $t \in I$. The convese also holds by Theorem \ref{lightlike-mate-7} and Proposition \ref{reflection}. \enD By Propositions \ref{reflection} and \ref{curvature-lightlike-mate-7}, we have the following. \begin{proposition}\label{curvature-lightlike-mate-8} Suppose that there exist smooth functions $\lambda,k:I \to \R$ with $\lambda \not\equiv 0$ such that $$k(t)(\alpha(t)+\beta(t))-\alpha(t)+\beta(t)-\lambda(t)(\kappa_1(t)-\kappa_2(t)+\kappa_3(t))=0$$ for all $t \in I$. Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\widetilde{\ell}^-(t), \overline{\ell}^{+}(t)=(k^2(t)+1)\bn^T(t)-2k(t)\bn^S(t)-(k^2(t)-1)\bn(t), \overline{\ell}^-(t)=\widetilde{\ell}^-(t)=\bn^T(t)-\bn(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=k(t)\kappa_1(t)+\kappa^T(t)-k(t)\kappa^S(t), \\ \overline{\kappa}^T(t)&=\left(\frac{k^2(t)}{2}-1\right)\kappa_1(t)+k(t)\kappa^T(t)-\frac{k^2(t)}{2}\kappa^S(t)-\dot k(t), \\ \overline{\kappa}^S(t)&=-\frac{k^2(t)}{2}\kappa_1(t)-k(t)\kappa^T(t)+\left(\frac{k^2(t)}{2}+1\right)\kappa^S(t)+\dot k(t),\\ \overline{a}(t)&=\dot\lambda(t)-\lambda(t)\kappa^T(t)-\frac{k^2(t)a(t)}{2}+a(t), \\ \overline{b}(t)&=-\dot\lambda(t)+\lambda(t)\kappa^T(t)+\frac{k^2(t)a(t)}{2}. \end{align*} \end{proposition} \subsection{Timelike mates} Second, we consider only one timelike mate. \begin{theorem}\label{timelike-mate} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn^T,\overline{\bn}^T)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $$ \lambda(t) (\kappa_2(t)+\kappa_3(t))\cos \theta(t)-(\alpha(t)-\beta(t)+\lambda(t)\kappa_1(t))\sin \theta(t)=0 $$ for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn^T,\overline{\bn}^T)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn^T,\overline{\bn}^T)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^T(t)$ and $\bn^T(t)=\overline{\bn}^T(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t) \bn^T(t)$, we have $$ \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)=(a(t)+\dot{\lambda}(t))\bn^T(t)+(b(t)+\lambda(t)\kappa_1(t))\bn^S(t)+\lambda(t)\kappa^T(t)\bn(t). $$ Since $\bn^T(t)=\overline{\bn}^T(t)$, we have $\overline{a}(t)=a(t)+\dot{\lambda}(t)$. 
Moreover, there exists a smooth function $\theta:I \to \R$ such that \begin{align*} \begin{pmatrix} \overline{\bn}(t)\\ \overline{\bn}^S(t) \end{pmatrix} = \begin{pmatrix} \cos \theta(t) & -\sin \theta(t)\\ \sin \theta(t) & \cos \theta(t) \end{pmatrix} \begin{pmatrix} {\bn}(t)\\ {\bn}^S(t) \end{pmatrix}. \end{align*} Then we have $\overline{b}(t)\sin \theta(t)=\lambda(t)\kappa^T(t)$ and $\overline{b}(t)\cos \theta(t)=b(t)+\lambda(t)\kappa_1(t)$. It follows that $\overline{b}(t)=\lambda(t)\kappa^T(t) \sin \theta(t)+(b(t)+\lambda(t)\kappa_1(t))\cos \theta(t)$ and $\lambda(t)\kappa^T(t) \cos \theta(t)$\\ $-(b(t)+\lambda(t)\kappa_1(t))\sin \theta(t)=0$. Therefore, we have $$ \lambda(t) (\kappa_2(t)+\kappa_3(t))\cos \theta(t)-(\alpha(t)-\beta(t)+\lambda(t)\kappa_1(t))\sin \theta(t)=0 $$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $\lambda(t)\kappa^T(t) \cos \theta(t)-(b(t)+\lambda(t)\kappa_1(t))\sin \theta(t)=0$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^T(t), \overline{\bn}(t)=\cos \theta(t)\bn(t)-\sin \theta(t)\bn^S(t), \overline{\bn}^S(t)=\sin \theta(t)\bn(t)+\cos \theta(t)\bn^S(t)$, then $\overline{\bn}^T(t)=\bn^T(t)$ and \begin{align*} \dot{\overline{\gamma}}(t) &=(a(t)+\dot{\lambda}(t))\bn^T(t)+(b(t)+\lambda(t)\kappa_1(t))\bn^S(t)+\lambda(t)\kappa^T(t)\bn(t)\\ &=(a(t)+\dot{\lambda}(t))\overline{\bn}^T(t)+(b(t)+\lambda(t)\kappa_1(t))(-\sin \theta(t) \overline{\bn}(t)+\cos \theta(t)\overline{\bn}^S(t))\\ &\quad +\lambda(t)\kappa^T(t)(\cos \theta(t) \overline{\bn}(t)+\sin \theta(t)\overline{\bn}^S(t)) \\ &=(a(t)+\dot{\lambda}(t))\overline{\bn}^T(t)+(\lambda(t)\kappa^T(t) \sin \theta(t)+(b(t)+\lambda(t)\kappa_1(t))\cos \theta(t))\overline{\bn}^S(t) \\ &= \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=\frac{1}{2}(\overline{a}(t)+\overline{b}(t))\overline{\ell}^+(t)+\frac{1}{2}(\overline{a}(t)-\overline{b}(t))\overline{\ell}^-(t), \end{align*} where \begin{align*} & \overline{a}(t)=a(t)+\dot{\lambda}(t), \ \overline{b}(t)=\lambda(t)\kappa^T(t) \sin \theta(t)+(b(t)+\lambda(t)\kappa_1(t))\cos \theta(t),\\ & \overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \ \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t). \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn^T,\overline{\bn}^T)$-mates. \enD \begin{proposition}\label{curvature-timelike-mate} Suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $$ \lambda(t) (\kappa_2(t)+\kappa_3(t))\cos \theta(t)-(\alpha(t)-\beta(t)+\lambda(t)\kappa_1(t))\sin \theta(t)=0 $$ for all $t \in I$. 
Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^T(t),\overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t), \overline{\bn}^{T}(t)=\bn^T(t), \overline{\bn}^S(t)=\sin \theta(t)\bn(t)+\cos\theta(t) \bn^S(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=\kappa_1(t) \cos \theta(t)-\kappa^T(t)\sin \theta(t), \\ \overline{\kappa}^T(t)&=-\kappa_1(t)\sin \theta(t)+\kappa^T(t)\cos \theta(t), \\ \overline{\kappa}^S(t)&=\kappa^S(t)-\dot{\theta}(t),\\ \overline{a}(t)&=a(t)+\dot{\lambda}(t), \\ \overline{b}(t)&=\lambda(t)\kappa^T(t) \sin \theta(t)+(b(t)+\lambda(t)\kappa_1(t))\cos \theta(t). \end{align*} \end{proposition} \demo By a direct calculation, \begin{align*} \dot{\overline{\bn}^T}(t)&=\kappa_1(t)\bn^S(t)+\kappa^T(t)\bn(t)\\ &=(\kappa_1(t) \cos \theta(t)+\kappa^T (t)\sin \theta(t))\overline{\bn}^{T}-(\kappa_1(t)\sin \theta(t)-\kappa^T(t)\cos \theta(t))\overline{\bn}(t), \\ \dot{\overline{\bn}^S}(t)&=\dot{\theta}(t)\cos \theta(t) \bn(t)+\sin \theta(t)(\kappa^T(t)\bn^T(t)+\kappa^S(t)\bn^S(t))\\ &\quad -\dot{\theta}(t)\sin \theta(t)\bn^S(t)+\cos \theta(t)(\kappa_1(t)\bn^T(t)-\kappa^S(t)\bn(t)) \\ &=(\kappa_1(t) \cos \theta(t)+\kappa^T(t) \sin \theta(t))\overline{\bn}^T(t)-(\kappa^S(t)-\dot{\theta}(t))\overline{\bn}(t). \end{align*} Then we have \begin{align*} &\overline{\kappa}_1(t)=\kappa_1(t) \cos \theta(t)-\kappa^T(t)\sin \theta(t), \ \overline{\kappa}^T(t)=-\kappa_1(t)\sin \theta(t)+\kappa^T(t)\cos \theta(t), \\ &\overline{\kappa}^S(t)=\kappa^S(t)-\dot{\theta}(t). \end{align*} By the proof of Theorem \ref{timelike-mate}, we also have $$ \overline{a}(t)=a(t)+\dot{\lambda}(t), \ \overline{b}(t)=\lambda(t)\kappa^T(t) \sin \theta(t)+(b(t)+\lambda(t)\kappa_1(t))\cos \theta(t). $$ \enD \begin{remark}{\rm If $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve with lightcone circle frame, then $\kappa_1=0$ and $\kappa_2+\kappa_3=0$, see \S 2. By Theorem \ref{timelike-mate}, it is automatically holded, since we can take $\theta=0$. Hence $(\gamma,\ell^+,\ell^-)$ is a $(\bn^T,\overline{\bn}^T)$-Bertrand lightcone framed curve. } \end{remark} \subsection{Spacelike mates} Third, we consider spacelike mates. There are four cases. \begin{proposition}\label{lambda} Suppose that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ are $(\bn,\overline{\bn})$-mates with $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn(t)$. Then $\lambda$ is a non-zero constant. \end{proposition} \demo By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn(t)$, we have \begin{align*} \dot{\overline{\gamma}}(t) &=\overline{\alpha}(t) \overline{\ell}^+(t)+\overline{\beta}(t) \overline{\ell}^-(t)\\ &=(\alpha(t)+\lambda(t)\kappa_2(t))\ell^+(t)+(\beta(t)+\lambda(t)\kappa_3(t))\ell^-(t)+\dot{\lambda}(t)\bn(t). \end{align*} Since $\bn(t)=\overline{\bn}(t)$, we have $\dot{\lambda}(t)=0$ for all $t \in I$. Hence $\lambda$ is a constant. If $\lambda$ is zero, then $\gamma$ and $\overline{\gamma}$ are the same. 
It follows that $\lambda$ is a non-zero constant. \enD \begin{theorem}\label{spacelike-mate-1} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is always a $(\bn,\overline{\bn})$-Bertrand lightcone framed curve. \end{theorem} \demo If we consider $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ by $$ \overline{\gamma}(t)=\gamma(t)+\lambda \bn(t), \ \overline{\ell}^+(t)=\ell^+(t), \ \overline{\ell}^-(t)=\ell^-(t), $$ where $\lambda$ is a non-zero constant. By Proposition \ref{lambda}, $$ \dot{\overline{\gamma}}(t)=(\alpha(t)+\lambda(t)\kappa_2(t))\ell^+(t)+(\beta(t)+\lambda(t)\kappa_3(t))\ell^-(t). $$ It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve with the curvature $(\kappa_1,\kappa_2,\kappa_3,\alpha+\lambda\kappa_2,\beta+\lambda\kappa_3)$. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn,\overline{\bn})$-mates. \enD \begin{remark} {\rm If $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn,\overline{\bn})$-Bertrand lightcone framed curve, then $\overline{\gamma}=\gamma+\lambda \bn$, where $\lambda$ is non-zero constant. Hence $\overline{\gamma}$ is a parallel curve of $\gamma$ with respect to $\bn$. We can also take $(\overline{\ell}^+,\overline{\ell}^-)$ as $(c \ell^+, (1/c)\ell^-)$ in Theorem \ref{spacelike-mate-1}, where $c:I \to \R$ is a non-zero function. See the special frame in \S 2. } \end{remark} \begin{theorem}\label{spacelike-mate2} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn^S,\overline{\bn}^S)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $$ \lambda(t) (\kappa_2(t)-\kappa_3(t))\cosh \theta(t)+(\alpha(t)+\beta(t)+\lambda(t)\kappa_1(t))\sinh \theta(t)=0 $$ for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn^S,\overline{\bn}^S)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn^S,\overline{\bn}^S)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^S(t)$ and $\bn^S(t)=\overline{\bn}^S(t)$ for all $t \in I$. By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t) \bn^S(t)$, we have $$ \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)=(a(t)+\lambda(t)\kappa_1(t))\bn^T(t)+(b(t)+\dot{\lambda}(t))\bn^S(t)-\lambda(t)\kappa^S(t)\bn(t). $$ Since $\bn^S(t)=\overline{\bn}^S(t)$, we have $\overline{b}(t)=b(t)+\dot{\lambda}(t)$. If we denote $$ \overline{\bn}(t)=a_1(t)\bn(t)+b_1(t)\bn^T(t), \ \overline{\bn}^T(t)=a_2(t)\bn(t)+b_2(t)\bn^T(t), $$ where $a_1,a_2,b_1b_2:I \to \R$ are smooth functions, then $a_1^2(t)-b_1^2(t)=1, a_2^2(t)-b_2^2(t)=-1$ and $a_1(t)a_2(t)-b_1(t)b_2(t)=0$. Since $\overline{\bn}^S(t)=\overline{\bn}(t) \wedge \overline{\bn}^T(t)=(a_1(t)b_2(t)-a_2(t)b_1(t))\bn^S(t)$, $a_1(t)b_2(t)-a_2(t)b_1(t)=1$. It follows that $a_2(t)=b_1(t), b_2(t)=a_1(t)$. 
By $a_1^2(t)-b_1^2(t)=1$, there exists a smooth function $\theta:I \to \R$ such that \begin{align*} \begin{pmatrix} \overline{\bn}(t)\\ \overline{\bn}^T(t) \end{pmatrix} = \pm \begin{pmatrix} \cosh \theta(t) & \sinh \theta(t)\\ \sinh \theta(t) & \cosh \theta(t) \end{pmatrix} \begin{pmatrix} {\bn}(t)\\ {\bn}^T(t) \end{pmatrix}. \end{align*} Then we have $\pm \overline{a}(t)\sinh \theta(t)=-\lambda(t)\kappa^S(t)$ and $\pm \overline{a}(t)\cosh \theta(t)=a(t)+\lambda(t)\kappa_1(t)$. It follows that $\overline{a}(t)=\pm (\lambda(t)\kappa^S(t) \sinh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\cosh \theta(t))$ and $\lambda(t)\kappa^S(t) \cosh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\sinh \theta(t)=0$. Therefore, we have $$ \lambda(t) (\kappa_2(t)-\kappa_3(t))\cosh \theta(t)+(\alpha(t)+\beta(t)+\lambda(t)\kappa_1(t))\sinh \theta(t)=0 $$ for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $\lambda(t)\kappa^S(t) \cosh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\sinh \theta(t)=0$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^S(t), \overline{\bn}(t)=\pm(\cosh \theta(t)\bn(t)+\sinh \theta(t)\bn^T(t)), \overline{\bn}^T(t)=\pm (\sinh \theta(t)\bn(t)+\cosh \theta(t)\bn^T(t))$, then $\overline{\bn}^S(t)=\bn^S(t)$ and \begin{align*} \dot{\overline{\gamma}}(t) &=(a(t)+\lambda(t)\kappa_1(t))\bn^T(t)+(b(t)+\dot{\lambda}(t))\bn^S(t)-\lambda(t)\kappa^S(t)\bn(t)\\ &=\pm (a(t)+\lambda(t)\kappa_1(t))(-\sinh \theta(t) \overline{\bn}(t)+\cosh \theta(t)\overline{\bn}^T(t))+(b(t)+\dot{\lambda}(t))\overline{\bn}^S(t)\\ &\quad \mp \lambda(t)\kappa^S(t)(\cosh \theta(t) \overline{\bn}(t)-\sinh \theta(t) \overline{\bn}^T(t)) \\ &=\pm (\lambda(t)\kappa^S(t) \sinh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\cosh \theta(t))\overline{\bn}^T(t)\\ &\quad+(b(t)+\dot{\lambda}(t))\overline{\bn}^S(t)\\ &= \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=\frac{1}{2}(\overline{a}(t)+\overline{b}(t))\overline{\ell}^+(t)+\frac{1}{2}(\overline{a}(t)-\overline{b}(t))\overline{\ell}^-(t), \end{align*} where \begin{align*} & \overline{a}(t)=\pm (\lambda(t)\kappa^S(t) \sinh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\cosh \theta(t)), \ \overline{b}(t)=b(t)+\dot{\lambda}(t),\\ & \overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \ \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t). \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn^S,\overline{\bn}^S)$-mates. \enD \begin{proposition}\label{curvature-spacelike-mate2} Suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $$ \lambda(t) (\kappa_2(t)-\kappa_3(t))\cosh \theta(t)+(\alpha(t)+\beta(t)+\lambda(t)\kappa_1(t))\sinh \theta(t)=0 $$ for all $t \in I$. 
Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn^S(t),\overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t), \overline{\bn}^{T}(t)=\pm(\sinh \theta(t) \bn(t)+\cosh \theta(t)\bn^T(t)), \overline{\bn}^S(t)=\bn^S(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=\pm(\kappa_1(t)\cosh \theta(t)+\kappa^S(t) \sinh \theta(t)), \\ \overline{\kappa}^T(t)&=\pm(\dot{\theta}(t)+\kappa^T(t)), \\ \overline{\kappa}^S(t)&=\pm(\kappa_1(t)\sinh \theta(t)+\kappa^S(t)\cosh \theta(t)),\\ \overline{a}(t)&=\pm(\lambda(t)\kappa^S(t) \sinh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\cosh \theta(t)), \\ \overline{b}(t)&=b(t)+\dot{\lambda}(t). \end{align*} \end{proposition} \demo By a direct calculation, \begin{align*} \dot{\overline{\bn}^T}(t)&=\pm (\dot{\theta}(t)\cosh \theta(t) \bn(t)+\sinh \theta(t)(\kappa^T(t)\bn^T(t)+\kappa^S(t)\bn^S(t))\\ &\quad +\dot{\theta}(t)\sinh \theta(t)\bn^T(t)+\cosh \theta(t)(\kappa_1(t)\bn^T(t)-\kappa^S(t)\bn(t))) \\ &=\pm (\kappa_1 (t)\cosh \theta(t)+\kappa^S(t) \sinh \theta(t))\overline{\bn}^{S}\pm(\dot{\theta}(t)+\kappa^T(t))\overline{\bn}(t), \\ \dot{\overline{\bn}^S}(t)&=\kappa_1(t)\bn^S(t)-\kappa^S(t)\bn(t)\\ &=\pm(\kappa_1 (t)\cosh \theta(t)+\kappa^S(t)\sinh(t))\overline{\bn}^T(t)\\ &\quad\mp(\kappa_1(t)\sinh \theta(t)+\kappa^S(t)\sinh \theta(t))\overline{\bn}(t). \end{align*} Then we have \begin{align*} &\overline{\kappa}_1(t)=\pm(\kappa_1(t) \cosh \theta(t)+\kappa^S(t)\sinh(t)), \ \overline{\kappa}^T(t)=\pm(\dot{\theta}(t)+\kappa^T(t)), \\ &\overline{\kappa}^S(t)=\pm(\kappa_1(t)\sinh \theta(t)+\kappa^S(t)\sinh \theta(t)). \end{align*} By the proof of Theorem \ref{spacelike-mate2}, we also have $$ \overline{a}(t)=\pm (\lambda(t)\kappa^S(t) \sinh \theta(t)+(a(t)+\lambda(t)\kappa_1(t))\cosh \theta(t)), \ \overline{b}(t)=b(t)+\dot{\lambda}(t). $$ \enD \begin{theorem}\label{spacelike-mate3} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn,\overline{\bn}^S)$-Bertrand lightcone framed curve if and only if there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that \begin{align*} &\left(\alpha(t)+\beta(t)+\lambda(t)(\kappa_2(t)+\kappa_3(t))\right)\sinh\theta(t)\\ &\quad-\left(\alpha(t)-\beta(t)+\lambda(t)(\kappa_2(t)-\kappa_3(t))\right)\cosh\theta(t)=0 \end{align*} for all $t \in I$. \end{theorem} \demo Suppose that $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn,\overline{\bn}^S)$-Bertrand lightcone framed curve. Then there exists another lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ such that $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn,\overline{\bn}^S)$-mates, that is, there exists a smooth function $\lambda:I \to \R$ with $\lambda \not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn(t)$ and $\bn(t)=\overline{\bn}^S(t)$ for all $t \in I$. 
By differentiating $\overline{\gamma}(t)=\gamma(t)+\lambda(t) \bn(t)$, we have $$ \overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)=(a(t)+\lambda(t)\kappa^T(t))\bn^T(t)+(b(t)+\lambda(t)\kappa^S(t) )\bn^S(t)+\dot{\lambda}(t)\bn(t). $$ Since $\bn(t)=\overline{\bn}^S(t)$, we have $\overline{b}(t)=\dot{\lambda}(t)$. If we denote $$ \overline{\bn}(t)=a_1(t)\bn^T(t)+b_1(t)\bn^S(t), \ \overline{\bn}^T(t)=a_2(t)\bn^T(t)+b_2(t)\bn^S(t), $$ where $a_1,a_2,b_1b_2:I \to \R$ are smooth functions, then $-a_1^2(t)+b_1^2(t)=1, -a_2^2(t)+b_2^2(t)=-1$ and $-a_1(t)a_2(t)+b_1(t)b_2(t)=0$. Since $\overline{\bn}^S(t)=\overline{\bn}(t) \wedge \overline{\bn}^T(t)=(a_1(t)b_2(t)-a_2(t)b_1(t))\bn(t)$, $a_1(t)b_2(t)-a_2(t)b_1(t)=1$. It follows that $a_2(t)=b_1(t), b_2(t)=a_1(t)$. By $-a_1^2(t)+b_1^2(t)=1$, there exists a smooth function $\theta:I \to \R$ such that \begin{align*} \begin{pmatrix} \overline{\bn}(t)\\ \overline{\bn}^T(t) \end{pmatrix} = \pm \begin{pmatrix} \sinh \theta(t) & \cosh \theta(t)\\ -\cosh \theta(t) & -\sinh \theta(t) \end{pmatrix} \begin{pmatrix} {\bn}^T(t)\\ {\bn}^S(t) \end{pmatrix}. \end{align*} Then we have $\mp \overline{a}(t)\cosh \theta(t)=a(t)+\lambda(t)\kappa^T(t)$ and $\mp \overline{a}(t)\sinh \theta(t)=b(t)+\lambda(t)\kappa^S(t)$. It follows that $\overline{a}(t)=\mp ((a+\lambda(t)\kappa^T(t)) \cosh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\\\sinh \theta(t))$ and $(a+\lambda(t)\kappa^T(t)) \sinh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\cosh \theta(t)=0$. Therefore, we have \begin{align*} &(\alpha(t)+\beta(t)+\lambda(t)(\kappa_2(t)+\kappa_3(t)))\sinh \theta(t)\\ &\quad-(\alpha(t)-\beta(t)+\lambda(t)(\kappa_2(t)-\kappa_3(t)))\cosh \theta(t)=0 \end{align*} for all $t \in I$. \par Conversely, suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that $(a+\lambda(t)\kappa^T(t)) \sinh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\cosh \theta(t)=0$ for all $t \in I$. If we consider $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn(t), \overline{\bn}^T(t)=\mp(\cosh \theta(t)\bn^T(t)+\sinh \theta(t)\bn^S(t)), \overline{\bn}(t)=\pm (\sinh \theta(t)\bn^T(t)+\cosh \theta(t)\bn^S(t))$, then $\overline{\bn}^S(t)=\bn(t)$ and \begin{align*} \dot{\overline{\gamma}}(t) &=(a(t)+\lambda(t)\kappa^T(t))\bn^T(t)+(b(t)+\lambda(t)\kappa^S(t))\bn^S(t)+\dot{\lambda}(t)\bn(t)\\ &=\mp (a(t)+\lambda(t)\kappa^T(t))(\sinh \theta(t) \overline{\bn}(t)+\cosh \theta(t)\overline{\bn}^T(t))+\dot{\lambda}(t)\overline{\bn}^S(t)\\ &\quad \pm (b(t)+\lambda(t)\kappa^S(t))(\cosh \theta(t) \overline{\bn}(t)+\sinh \theta(t) \overline{\bn}^T(t)) \\ &=\mp ((a+\lambda(t)\kappa^T(t)) \cosh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\sinh \theta(t))\overline{\bn}^T(t)\\ &\quad+\dot{\lambda}(t)\overline{\bn}^S(t) \\ &=\overline{a}(t)\overline{\bn}^T(t)+\overline{b}(t)\overline{\bn}^S(t)\\ &=\frac{1}{2}(\overline{a}(t)+\overline{b}(t))\overline{\ell}^+(t)+\frac{1}{2}(\overline{a}(t)-\overline{b}(t))\overline{\ell}^-(t), \end{align*} where \begin{align*} & \overline{a}(t)=\mp ((a+\lambda(t)\kappa^T(t)) \cosh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\sinh \theta(t)), \ \overline{b}(t)=\dot{\lambda}(t),\\ & \overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \ \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t). \end{align*} It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve. 
Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn,\overline{\bn}^S)$-mates. \enD \begin{proposition}\label{curvature-spacelike-mate3} Suppose that there exist smooth functions $\lambda,\theta:I \to \R$ with $\lambda \not\equiv 0$ such that \begin{align*} &\left(\alpha(t)+\beta(t)+\lambda(t)(\kappa_2(t)+\kappa_3(t))\right)\sinh\theta(t)\\ &\quad-\left(\alpha(t)-\beta(t)+\lambda(t)(\kappa_2(t)-\kappa_3(t))\right)\cosh\theta(t)=0 \end{align*} for all $t \in I$. Then the curvature of the lightcone framed curve $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$, $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\bn(t),\overline{\ell}^+(t)=\overline{\bn}^T(t)+\overline{\bn}^S(t), \overline{\ell}^-(t)=\overline{\bn}^T(t)-\overline{\bn}^S(t), \overline{\bn}^{T}(t)=\mp(\cosh \theta(t) \bn^T(t)+\sinh \theta(t)\bn^S(t)), \overline{\bn}^S(t)=\bn(t)$ is given by $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} \overline{\kappa}_1(t)&=\mp\left(\kappa^T(t)\cosh \theta(t)-\kappa^S(t) \sinh \theta(t)\right), \\ \overline{\kappa}^T(t)&=\mp(\dot\theta(t)+\kappa_1(t)), \\ \overline{\kappa}^S(t)&=\pm\left(\kappa^T(t)\sinh \theta(t)-\kappa^S(t)\cosh \theta(t)\right),\\ \overline{a}(t)&=\mp\left((a(t)+\lambda(t)\kappa^T(t) )\cosh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\sinh \theta(t)\right), \\ \overline{b}(t)&=\dot{\lambda}(t). \end{align*} \end{proposition} \demo By a direct calculation, \begin{align*} \dot{\overline{\bn}^T}(t)&=\mp (\dot{\theta}(t)\sinh \theta(t) \bn^T(t)+\cosh \theta(t)(\kappa_1(t)\bn^S(t)+\kappa^T(t)\bn(t))\\ &\quad +\dot{\theta}(t)\cosh \theta(t)\bn^S(t)+\sinh \theta(t)(\kappa_1(t)\bn^T(t)-\kappa^S(t)\bn(t))) \\ &=\mp (\kappa^T (t)\cosh \theta(t)-\kappa^S(t) \sinh \theta(t))\overline{\bn}^{S}(t)\mp(\dot{\theta}(t)+\kappa_1(t))\overline{\bn}(t), \\ \dot{\overline{\bn}}(t)&=\pm (\dot{\theta}(t)\cosh \theta(t) \bn^T(t)+\sinh \theta(t)(\kappa_1(t)\bn^S(t)+\kappa^T(t)\bn(t))\\ &\quad +\dot{\theta}(t)\sinh \theta(t)\bn^S(t)+\cosh \theta(t)(\kappa_1(t)\bn^T(t)-\kappa^S(t)\bn(t))) \\ &=\pm (\kappa^T (t)\sinh \theta(t)-\kappa^S(t) \cosh \theta(t))\overline{\bn}^{S}(t)\mp(\dot{\theta}(t)+\kappa_1(t))\overline{\bn}^{T}(t). \end{align*} Then we have \begin{align*} &\overline{\kappa}_1(t)=\mp\left(\kappa^T(t)\cosh \theta(t)-\kappa^S(t) \sinh \theta(t)\right), \ \overline{\kappa}^T(t)=\mp(\dot\theta(t)+\kappa_1(t)), \\ &\overline{\kappa}^S(t)=\pm\left(\kappa^T(t)\sinh \theta(t)-\kappa^S(t)\cosh \theta(t)\right). \end{align*} By the proof of Theorem \ref{spacelike-mate3}, we also have $$ \overline{a}(t)=\mp ((a(t)+\lambda(t)\kappa^T(t)) \cosh \theta(t)-(b(t)+\lambda(t)\kappa^S(t))\sinh \theta(t)), \ \overline{b}(t)=\dot{\lambda}(t). $$ \enD \begin{theorem}\label{spacelike-mate4} $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is always a $(\bn^S,\overline{\bn})$-Bertrand lightcone framed curve. \end{theorem} \demo If we consider $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ by \begin{align*} &\overline{\gamma}(t)=\gamma(t)+\lambda(t) \bn^S(t), \ \overline{\ell}^+(t)=\widetilde{\ell}^-(t)=\bn^T(t)+\bn(t), \\ &\overline{\ell}^-(t)=\widetilde{\ell}^+(t)=\bn^T(t)-\bn(t), \end{align*} where $\lambda(t)=-\int b(t) dt$.
By differentiating $\overline{\gamma}(t)=\gamma(t)-(\int b(t) dt) \bn^S(t)$, we have $$ \dot{\overline{\gamma}}(t)=\left(a(t)-\kappa_1(t)\int b(t) dt\right)\overline{\bn}^T(t)+\left(-\kappa^S(t)\int b(t) dt\right)\overline{\bn}^S(t). $$ It follows that $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve with the curvature $$ (\overline{\kappa}_1,\overline{\kappa}_2,\overline{\kappa}_3,\overline{\alpha},\overline{\beta})=(\overline{\kappa}_1,(1/2)(\overline{\kappa}^T+\overline{\kappa}^S),(1/2)(\overline{\kappa}^T-\overline{\kappa}^S),(1/2)(\overline{a}+\overline{b}),(1/2)(\overline{a}-\overline{b})), $$ where \begin{align*} &\overline{\kappa}_1(t)=-\kappa^T(t), \ \overline{\kappa}^T(t)=\kappa^S(t), \ \overline{\kappa}^S(t)=\kappa_1(t), \\ &\overline{a}(t)=a(t)-\kappa_1(t)\int b(t) dt, \ \overline{b}(t)=-\kappa^S(t)\int b(t) dt. \end{align*} Hence, $(\gamma,\ell^+,\ell^-)$ and $(\overline{\gamma},\overline{\ell}^+,\overline{\ell}^-)$ are $(\bn^S,\overline{\bn})$-mates. \enD \begin{remark} {\rm If $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a $(\bn^S,\overline{\bn})$-Bertrand lightcone framed curve, then $\overline{\gamma}(t)=\gamma(t)-(\int b(t) dt) \bn^S(t)$. Hence, $\overline{\gamma}$ is a pseudo-circular involute of $\gamma$ (cf. \cite{Pei-Takahashi-Zhang, Zhang-Li-Pei}). } \end{remark} \section{Examples} We give concrete examples of Bertrand lightcone framed curves. \begin{example} {\rm Let $p, q\in \R$ and $n, m\in \N$. Suppose that $n\neq m$ (respectively, $n= m$). Then we define $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ by \begin{gather*} \gamma(t)=\Bigg(-\frac{p}{m}\cos mt, -\frac{q}{2}\left(\frac{\cos (m-n)t}{m-n}+\frac{\cos(m+n)t}{m+n}\right),\\ -\frac{q}{2}\left(\frac{\sin (m-n)t}{m-n}-\frac{\sin(m+n)t}{m+n}\right)\Bigg) \\ \left({\rm respectively}, \gamma(t)=\left(-\frac{p}{m}\cos mt, -\frac{q}{4m}\cos 2mt, \frac{q}{2}t-\frac{q}{4m}\sin 2mt\right)\right),\\ \ell^+(t)=(1, \cos nt, \sin nt),\\ \ell^-(t)=(1, -\cos nt, -\sin nt). \end{gather*} Then $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve with the curvature \begin{align*} {\kappa}_1(t)=0, \ {\kappa}_2(t)=-\frac{n}{2}, \ {\kappa}_3(t)=\frac{n}{2}, \ {\alpha}(t)=\frac{p+q}{2}\sin mt, \ {\beta}(t)=\frac{p-q}{2}\sin mt. \end{align*} By definition, we have $$ {\kappa}^T(t)=0, \ {\kappa}^S(t)=-n, \ {a}(t)=p\sin mt, \ {b}(t)=q\sin mt. $$ If we take $\lambda(t)=(k(t)/n)(p-q)\sin mt$ (respectively, $\lambda(t)=-(k(t)/n)(p+q)\sin mt$) with $\lambda \not\equiv 0$, then the condition of Theorem \ref{lightlike-mate-1} (respectively, of Theorem \ref{lightlike-mate-3}) is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\ell^+, \overline{\ell}^+)$ and $(\ell^+, \overline{\ell}^-)$ (respectively, $(\ell^-, \overline{\ell}^+)$ and $(\ell^-, \overline{\ell}^-)$)-Bertrand lightcone framed curve. Similarly, if we take $\lambda(t)=((k(t)p+q)/n)\sin mt$ (respectively, $\lambda(t)=-((k(t)p-q)/n)\sin mt$) with $\lambda \not\equiv 0$, then the condition of Theorem \ref{lightlike-mate-5} (respectively, of Theorem \ref{lightlike-mate-7}) is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+,\overline{\ell}^+)$ and $(\widetilde{\ell}^+,\overline{\ell}^-)$ (respectively, $(\widetilde{\ell}^-, \overline{\ell}^+)$ and $(\widetilde{\ell}^-, \overline{\ell}^-)$)-Bertrand lightcone framed curve. \par If we take $\theta(t)=n\pi$, then the condition of Theorem \ref{timelike-mate} is satisfied.
Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn^T,\overline{\bn}^T)$-Bertrand lightcone framed curve. \par If we take $\lambda(t)=(p\tanh \theta(t)/n)\sin mt$ (respectively, $\lambda(t)=((p\tanh \theta(t)-q)/n)\sin mt$) with $\lambda \not\equiv 0$, then the condition of Theorem \ref{spacelike-mate2} (respectively, of Theorem \ref{spacelike-mate3}) is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn^S,\overline{\bn}^S)$ (respectively, $(\bn,\overline{\bn}^S)$)-Bertrand lightcone framed curve. Moreover, if we take $\lambda(t)=(q/m)\cos mt \not\equiv 0$, then the condition of Theorem \ref{spacelike-mate4} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn^S,\overline{\bn})$-Bertrand lightcone framed curve. Similarly, if $\lambda$ is a non-zero constant, then the condition of Theorem \ref{spacelike-mate-1} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn,\overline{\bn})$-Bertrand lightcone framed curve. } \end{example} \begin{example} {\rm Let $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ be \begin{align*} &\gamma(t)=\Biggl(\int \alpha(t)\left(\int \kappa_3(t)dt\right)^2 dt+\int (\alpha(t)+\beta(t)) dt, \\ &\quad-\int \alpha(t)\left(\int \kappa_3(t)dt\right)^2 dt+\int (\alpha(t)-\beta(t)) dt, 2\int\alpha(t)\left(\int\kappa_3(t)dt\right)dt\Biggr),\\ &\ell^+(t)=\Biggl(\left(\int \kappa_3(t)dt\right)^2+1, \ -\left(\int \kappa_3(t)dt\right)^2+1, \ 2\int \kappa_3(t)dt\Biggr),\\ &\ell^-(t)=(1, \ -1, \ 0). \end{align*} Then $(\gamma,\ell^+,\ell^-):I \to \R^3_1 \times \Delta_4$ is a lightcone framed curve with the curvature $(0, 0, {\kappa}_3, {\alpha}, {\beta}).$ By definition, we have ${\kappa}^T(t)={\kappa}_3(t), {\kappa}^S(t)=-{\kappa}_3(t), {a}(t)=\alpha(t)+\beta(t), \ {b}(t)=\alpha(t)-\beta(t)$. Since $\kappa_2(t)=0$, the condition of Theorem \ref{lightlike-mate-3} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is always a $(\ell^-,\overline{\ell}^+)$-Bertrand lightcone framed curve. \par If we take ${\kappa}_3(t)=\sin t$, $\alpha(t)=\sin t\cos t$ and $\beta(t)=\cos t$, then $(\gamma,\ell^+,\ell^-)$ is given by \begin{align*} \gamma(t)&=\left(-\frac{1}{4}\cos^4 t-\frac{1}{2}\cos^2 t+\sin t, \ \frac{1}{4}\cos^4 t-\frac{1}{2}\cos^2 t-\sin t, \ \frac{2}{3}\cos^3 t\right),\\ \ell^+(t)&=\left(\cos^2 t+1, \ -\cos^2 t+1, \ -2\cos t \right),\\ \ell^-(t)&=(1, \ -1, \ 0), \end{align*} up to a constant. Since $\alpha(t)\beta(t)=\sin t\cos^2 t$, $\gamma$ is a mixed type curve with singular points at $t=\pi/2$ and $3\pi/2$. We also have $\kappa^T(t)=\sin t$, $\kappa^S(t)=-\sin t$, $a(t)=\cos t(\sin t+1)$ and $b(t)=\cos t(\sin t-1)$. Since $\beta(t)=\cos t\not\equiv0$, $(\gamma,\ell^+,\ell^-)$ is always a $(\ell^+,\overline{\ell}^+)$-Bertrand lightcone framed curve by Remark \ref{lightlike-mate-1-re} $(2)$. If we take $k(t)=1$ and $\lambda(t)=-2\cos t$ (respectively, $k(t)=-1$ and $\lambda(t)=-2\cos t$), then the condition of Theorem \ref{lightlike-mate-5} (respectively, of Theorem \ref{lightlike-mate-7}) is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\widetilde{\ell}^+,\overline{\ell}^+)$ and $(\widetilde{\ell}^+,\overline{\ell}^-)$ (respectively, $(\widetilde{\ell}^-,\overline{\ell}^+)$ and $(\widetilde{\ell}^-,\overline{\ell}^-)$)-Bertrand lightcone framed curve. Similarly, if we take $\theta(t)=t$ and $\lambda(t)=\sin t-1$, then the condition of Theorem \ref{timelike-mate} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn^T,\overline{\bn}^T)$-Bertrand lightcone framed curve.
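Indeed, substituting $\kappa_1(t)=\kappa_2(t)=0$, $\kappa_3(t)=\sin t$, $\alpha(t)-\beta(t)=\sin t\cos t-\cos t$, $\theta(t)=t$ and $\lambda(t)=\sin t-1$ into the condition of Theorem \ref{timelike-mate} gives \begin{align*} \lambda(t)(\kappa_2(t)+\kappa_3(t))\cos \theta(t)-(\alpha(t)-\beta(t)+\lambda(t)\kappa_1(t))\sin \theta(t) &=(\sin t-1)\sin t\cos t-(\sin t\cos t-\cos t)\sin t\\ &=0 \end{align*} for all $t \in I$, which confirms this choice directly.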
\par If we take ${\kappa}_3(t)=\sinh t$, $\alpha(t)=\sinh t\cosh t$ and $\beta(t)=-\sinh t$, then $(\gamma,\ell^+,\ell^-)$ is given by \begin{align*} &\gamma(t)=\left(\frac{\cosh^4 t}{4}+\frac{\cosh^2 t}{2}-\cosh t, \ -\frac{\cosh^4 t}{4}+\frac{\cosh^2 t}{2}+\cosh t, \ \frac{2}{3}\cosh^3 t\right),\\ &\ell^+(t)=\left(\cosh^2 t+1, \ -\cosh^2 t+1, \ 2\cosh t \right),\\ &\ell^-(t)=(1, \ -1, \ 0), \end{align*} up to constant. Since $\alpha(t)\beta(t)=-\sinh^2 t\cosh t$, $\gamma$ is a spcelike curve with a singular point at $t=0$. We also have $\kappa^T(t)=\sinh t$, $\kappa^S(t)=-\sinh t$, $a(t)=\sinh t(\cosh t-1)$ and $b(t)=\sinh t(\cosh t+1)$. If we take a smooth function $\theta : I\to \R$ such that $\lambda(t)=e^{-2\theta(t)}\cosh t+1$, then the condition of Theorem \ref{spacelike-mate3} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn,\overline{\bn}^S)$-Bertrand lightcone framed curve. Similarly, if we take a smooth function $\theta : I\to \R$ such that $\lambda(t)=-\tanh \theta(t)(\cosh t-1)$ and $\theta\not\equiv0$, then the condition of Theorem \ref{spacelike-mate2} is satisfied. Hence, $(\gamma,\ell^+,\ell^-)$ is a $(\bn^S,\overline{\bn}^S)$-Bertrand lightcone framed curve. } \end{example} \begin{thebibliography}{99} {\small \bibitem{Aminov} Y. Aminov, \newblock{\it Differential geometry and the topology of curves. Translated from the Russian by V. Gorkavy.} \newblock{Gordon and Breach Science Publishers}, Amsterdam, 2000. \bibitem{Balgetir-Bektacs-Jun-ichi} H. Balgetir, M. Bekta\c s, J. Inoguchi, \newblock{\it Null Bertrand curves in Minkowski 3-space and their characterizations.} \newblock{Note Mat.} {\bf 23}, 2004, 7--13. \bibitem{Berger-Gostiaux} M. Berger, B. Gostiaux, \newblock{\it Differential geometry: manifolds, curves, and surfaces. Translated from the French by Silvio Levy.} \newblock{Graduate Texts in Mathematics, 115. Springer-Verlag,} New York, 1988. \bibitem{Bertrand} J. Bertrand, \newblock{M\'emoire sur la th\'eorie des courbes \`a double courbure.} \newblock{\it J. de meth\'ematiques pures et appliqu\'ees.} {\bf 15} 1850, 332--350. \bibitem{Chen-Takahashi} L. Chen, M. Takahashi, \newblock{Lightcone framed curves in the Lorentz-Minkowski 3-space.} \newblock{\it Turkish J. Math.} {\bf 48}, 2024, 307--326. doi: 10.55730/1300-0098.3508 \bibitem{Chunxiao-Pei} Z. Chunxiao, D. Pei, \newblock{Generalized Bertrand Curves in Minkowski 3-Space.} \newblock{\it Mathematics.} {\bf 8}, 2020, 2199. \bibitem{Hatice-Kazim} H. A. Erdem, K. I\'larslan, \newblock{Spacelike Bertrand curves in Minkowski 3-space revisited.} \newblock{\it An. Ştiinţ. Univ. ``Ovidius'' Constanţa Ser. Mat.} {\bf 31} (3), 2023, 87-109. \bibitem{Honda-Takahashi-2016} S. Honda, M. Takahashi, \newblock{Framed curves in the Euclidean space.} \newblock{\it Adv. Geom.} {\bf 16}, 2017, 265--276. doi: 10.1515/advgeom-2015-0035 \bibitem{Honda-Takahashi-2020} S. Honda, M. Takahashi, \newblock{Bertrand and Mannheim curves of framed curves in the 3-dimensional Euclidean space.} \newblock{\it Turkish J. Math.} {\bf 44}, 2020, 883--899. \bibitem{HCIP} J. Huang, L. Chen, S. Izumiya, D. Pei, \newblock{Geometry of special curves and surfaces in 3-space form.} \newblock{\it J. Geom. Phys.} {\bf 136}, 2019, 31--38. \bibitem{Izumiya09} S, Izumiya. \newblock{Legendrian dualities and spacelike hypersurfaces in the lightcone.} \newblock{\it Mosc. Math. J.} {\bf 9}, 2009, 325-357. https://doi.org/10.17323/1609-4514-2009-9-2-325-357 \bibitem{Liu-Pei} T. Liu and D. 
Pei, \newblock{Mixed-type curves and the lightcone frame in Minkowski $3$-space.} \newblock{\it Int. J. Geom. Methods Mod. Phys.} {\bf 17}, 2020, 2050088 (14 pages). \bibitem{López} R. L\'opez, \newblock{Differential geometry of curves and surfaces in Lorentz-Minkowski space.} \newblock{\it Int. Electron. J. Geom.} {\bf 7}, 2014, 44-107. \bibitem{Nakatsuyama - Takahashi} N. Nakatsuyama, M. Takahashi, \newblock{Bertrand types of regular curves and Bertrand framed curves in the Euclidean 3-space.} \newblock{\it To appear in Hokkaido Math. J}. 2024, arXiv : \url{https://arxiv.org/abs/2403.19138}. \bibitem{O’Neil} B. O'Neill, \newblock{Semi-Riemannian Geometry.} \newblock{\it Academic Press, New York, USA,} 1983. \bibitem{Papaioannou-Kiritsis} S. G. Papaioannou, D. Kiritsis, \newblock{An application of Bertrand curves and surfaces to CADCAM.} \newblock{\it Computer-Aided Design}. {\bf 17}, 1985, 348--352. \bibitem{Pei-Takahashi-Zhang} D. Pei, M. Takahashi, W. Zhang, \newblock{Pseudo-circular evolutes and involutes of lightcone framed curves in the Lorentz-Minkowski 3-space.} \newblock{\it Int. J. Geom. Methods Mod. Phys.} 2550012, 36 pages, 2024, \url{https://doi.org/10.1142/S0219887825500124}. \bibitem{Struik} D. J. Struik, \newblock{\it Lectures on classical differential geometry. Reprint of the second edition.} \newblock{Dover Publications, Inc.,} New York, 1988. \bibitem{Ucum-Ilarslan} A. Ucum, K. Ilarslan, \newblock{On timelike Bertrand curves in Minkowski 3-space.} \newblock{\it Honam Math. J.} {\bf 38}, 2016, 467--477. \bibitem{Wu-Zhou-Yao-Pei} L. Wu, A. Zhou, K. Yao, D. Pei, \newblock{Generalized Bertrand Curves of Non-Light-like Framed Curves in Lorentz–Minkowski 3-Space.} \newblock{\it Mathematics.} {\bf 12} 2024, 2593. \bibitem{Zhang-Li-Pei} W. Zhang, P. Li, D. Pei, \newblock{Circular evolutes and involutes of spacelike framed curves and their duality relations in Minkowski 3-space.} \newblock{\it AIMS Math.} {\bf 9}, 2024, 5688--5707. } \end{thebibliography} Nozomi Nakatsuyama, \\ Muroran Institute of Technology, Muroran 050-8585, Japan, \\ E-mail address: [email protected] \\ \\ Masatomo Takahashi, \\ Muroran Institute of Technology, Muroran 050-8585, Japan, \\ E-mail address: [email protected] \end{document}
2412.10837v1
http://arxiv.org/abs/2412.10837v1
A Diagrammatic Approach to Improve Computational Efficiency in Group Equivariant Neural Networks
\documentclass[twoside,11pt]{article} \usepackage{blindtext} \usepackage[abbrvbib, nohyperref, preprint]{jmlr2e} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathtools} \usepackage{dsfont} \usepackage{microtype} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Ind}{Ind} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Bell}{B} \DeclareMathOperator{\pn}{pn} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Rep}{Rep} \DeclareMathOperator{\Qut}{Qut} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\FundRep}{FundRep} \DeclareMathOperator{\Mat}{Mat} \usepackage{mathdots} \usepackage{bm} \usepackage{nicematrix} \usepackage{xcolor} \usepackage{tabularray} \usepackage{comment} \usepackage{xurl} \usepackage{tikzit} \input{mystyle.tikzstyles} \usepackage[most]{tcolorbox} \definecolor{melon}{RGB}{227, 168, 105} \definecolor{terracotta}{RGB}{204, 78, 63} \newcommand{\dataset}{{\cal D}} \newcommand{\fracpartial}[2]{\frac{\partial #1}{\partial #2}} \usepackage{lastpage} \jmlrheading{23}{2024}{1-\pageref{LastPage}}{1/21; Revised 5/22}{9/22}{21-0000}{ Edward Pearce-Crump and William J. Knottenbelt} \ShortHeadings{ A Diagrammatic Approach for Efficient Group Equivariant Neural Networks }{Pearce-Crump and Knottenbelt} rstpageno{1} \begin{document} \title{ A Diagrammatic Approach to Improve Computational Efficiency in Group Equivariant Neural Networks } \author{\name Edward Pearce-Crump \email [email protected] \\ \addr Department of Computing\\ Imperial College London\\ London, SW7 2RH, United Kingdom \AND \name William J. Knottenbelt \email [email protected] \\ \addr Department of Computing\\ Imperial College London\\ London, SW7 2RH, United Kingdom } \editor{My editor} \maketitle \begin{abstract} Group equivariant neural networks are growing in importance owing to their ability to generalise well in applications where the data has known underlying symmetries. Recent characterisations of a class of these networks that use high-order tensor power spaces as their layers suggest that they have significant potential; however, their implementation remains challenging owing to the prohibitively expensive nature of the computations that are involved. In this work, we present a fast matrix multiplication algorithm for any equivariant weight matrix that maps between tensor power layer spaces in these networks for four groups: the symmetric, orthogonal, special orthogonal, and symplectic groups. We obtain this algorithm by developing a diagrammatic framework based on category theory that enables us to not only express each weight matrix as a linear combination of diagrams but also makes it possible for us to use these diagrams to factor the original computation into a series of steps that are optimal. We show that this algorithm improves the Big-$O$ time complexity exponentially in comparison to a na\"{i}ve matrix multiplication. \end{abstract} \begin{keywords} deep learning theory, equivariant neural networks, weight matrices \end{keywords} \section{Introduction} Incorporating symmetries as an inductive bias in neural network design has emerged as an important method for creating more structured and efficient models. 
These models, commonly referred to as \textit{group equivariant neural networks} \citep{ cohenc16, cohen2017steerable, qi2017pointnet, ravanbakhsh17a, deepsets, cohen2018, esteves2018, kondor18a, thomas2018, maron2018, cohen2019, weiler2019, villar2021scalars, finzi, pearcecrumpB, pearcecrumpJ, pearcecrump, godfrey, pearcecrumpG}, have proven to be useful across a wide range of applications, including, but not limited to, molecule generation \citep{satorras21a}; designing proteins \citep{Jumper2021}; natural language processing \citep{gordon2020, petrache2024}; computer vision \citep{chatzipantazis2023}; dynamics prediction \citep{guttenberg2016}; and even auction design \citep{rahme}. Notably, several recent works have characterised the weight matrices of a class of group equivariant neural networks that use high-order tensor power spaces as their layers \citep{ravanbakhsh17a, maron2018,pearcecrumpB, pearcecrumpJ, pearcecrump, godfrey, pearcecrumpG}. However, because these layers are tensor power spaces, the computations that are involved --- specifically, applying an equivariant weight matrix to an input vector to transform it to an output vector --- can be very expensive. Indeed, if the weight matrix is a function $(\mathbb{R}^{n})^{\otimes k} \rightarrow (\mathbb{R}^{n})^{\otimes l}$, then a na\"{i}ve matrix multiplication on an input vector $v \in (\mathbb{R}^{n})^{\otimes k}$ has a time complexity of $O(n^{l+k})$. Hence there is a need to develop methods that make these computations feasible in practice so that the networks that use these weight matrices become more efficient. We address this problem in the following way, for four groups whose weight matrices have been characterised previously: the symmetric \citep{ravanbakhsh17a, maron2018, pearcecrump, godfrey}, orthogonal, special orthogonal and symplectic groups \citep{pearcecrumpB}. We first develop a diagrammatic framework based on category theory, enabling us to express each group equivariant weight matrix as the image under a monoidal functor of a linear combination of morphisms in a diagram category. In this way, we can effectively interpret the morphisms in each diagram category as being equivalent to matrices. We then use the diagrammatic framework to develop an algorithm that improves upon the na\"{i}ve weight matrix multiplication \textit{exponentially} in terms of its Big-$O$ time complexity. We achieve this by using the diagrams that are equivalent to the original weight matrix to factor it into a sequence of operations that carry out the original calculation in the most efficient way possible. We suggest that our result might enable more widespread adoption of group equivariant neural networks that use high-order tensor power spaces in practical applications. The rest of the paper is organised as follows. After discussing the related work in Section \ref{relatedwork}, we review in Section \ref{background} the important concepts behind the characterisation of the equivariant weight matrices between tensor power spaces of $\mathbb{R}^{n}$ that have appeared elsewhere in the literature. In Section \ref{weightmatcat}, we introduce our diagrammatic framework that is based on category theory, to show that each linear map that we review in Section \ref{background} is actually the result of a monoidal functor between a diagram category and a representation category. 
In particular, we conclude that section by discussing some fundamental results that form the basis of our main contribution, which is introduced in Section \ref{algorithmsect}: the fast multiplication algorithm for passing a vector through an equivariant weight matrix for the four groups in question. In this section, we present our algorithm, discuss its implementation for each of the four groups, and analyse its Big-$O$ time complexity in each case. Finally, we conclude in Section \ref{conclusion}. \section{Related Work} \label{relatedwork} There is a growing body of literature demonstrating the application of category theory in machine learning. A number of works have used a category-theoretic approach either to constuct, categorise, or analyse neural network architectures \citep{gavranovic24a, abbott2024, khatri2024, tull2024}. Some researchers have explored expanding the concept of equivariance through category theory, resulting in new insights on the structure of equivariant models \citep{deHaan2020, gavranovic24a} while others have focused on understanding gradient-based learning using a category-theoretic framework \citep{cruttwell2022}. Additionally, recent works have examined stochastic equivariant models \citep{cho2019, bloemreddy2020, fritz2020, cornish2024} and causal reasoning \citep{lorenz2023} within the context of Markov categories. Building on these works, we adopt a category-theoretic approach to gain insights into the construction of specific neural networks; namely, the weight matrices of certain tensor power based equivariant neural networks. The primary distinction and contribution of our work lies in using this approach not only to understand the construction of the neural networks themselves but also to significantly improve the time complexity of the forward pass that is involved in their execution. Finally, in a loosely-related body of work, there is a whole subdomain in quantum computing where formal category-theoretic languages, such as the ZX-calculus, are used to rewrite quantum circuits for compilation onto quantum hardware \citep{coecke2017picturing, duncan2020graph, kissinger2012pictures, heunen2020categories, vandewetering2020}. In this context, the original circuit needs to be rewritten in terms of a predefined set of quantum gates that is tailored to the specific quantum hardware. In our work, the algorithm that we present in Section \ref{algorithmsect} rewrites the equivariant weight matrix as a composition of operations; however, this algorithm is designed for classical computing, not quantum computing, and we focus on the time complexity of the operations involved, rather than the underlying hardware on which they will be executed. \section{Background} \label{background} In this section, we revisit the concepts that led to the characterisation of the equivariant weight matrices between high-order tensor power spaces of $\mathbb{R}^{n}$, focusing on the four groups in question: $S_n$, $O(n)$, $SO(n)$ and $Sp(n)$. This section is organised into three parts: first we recall the layer spaces, considered as representations of each group, and the definition of an equivariant map between them; second, we review the diagrams that were used to derive the weight matrix characterisations; and finally we summarise the key results, providing references to where their proofs can be found in the literature. In the following, we write $G(n)$ to refer to any of $S_n$, $O(n)$, $SO(n)$ and $Sp(n)$. 
We consider $G(n)$ to be a subgroup of matrices in $GL(n)$ throughout, having chosen the standard basis of $\mathbb{R}^{n}$. We also write $[n]$ for the set $\{1, \dots, n\}$, and $[n]^p$ for the $p$-fold Cartesian product of the set $\{1, \dots, n\}$. \subsection{Layer Spaces as Group Representations} \label{sectLayerSpaces} For each neural network that is equivariant to $G(n)$, the layer spaces are not just vector spaces, they are representations of $G(n)$. This makes it possible to incorporate the symmetries as an inductive bias into the network not only through the action of the group on each layer space but also through the equivariant maps between them. We define each of these concepts in turn. Each layer space is a high-order tensor power of $\mathbb{R}^{n}$, denoted $(\mathbb{R}^{n})^{\otimes k}$, that comes with a representation of $G(n)$. This representation, written $\rho_k: G(n) \rightarrow GL((\mathbb{R}^n)^{\otimes k})$, is defined on the standard basis of $(\mathbb{R}^{n})^{\otimes k}$ \begin{equation} \label{tensorelementfirst} e_I \coloneqq e_{i_1} \otimes e_{i_2} \otimes \dots \otimes e_{i_k} \end{equation} for all $I \coloneqq (i_1, i_2, \dots, i_k) \in [n]^k$ by \begin{equation} g \cdot e_I \coloneqq ge_{i_1} \otimes ge_{i_2} \otimes \dots \otimes ge_{i_k} \end{equation} and is extended linearly on the basis elements of $(\mathbb{R}^{n})^{\otimes k}$. We note that if $G(n) = Sp(n)$, then $n = 2m$, and we label and order the indices by $1, 1', \dots, m, m'$ instead. In this case, we sometimes call the standard basis of $\mathbb{R}^{n}$ the symplectic basis. There is a special class of maps between representations of a group that are known as equivariant maps. Recall that a map $\phi : (\mathbb{R}^{n})^{\otimes k} \rightarrow (\mathbb{R}^{n})^{\otimes l}$ between two representations of $G(n)$ is said to be $G(n)$-equivariant if \begin{equation} \label{Gequivmapdefn} \phi(\rho_{k}(g)[v]) = \rho_{l}(g)[\phi(v)] \end{equation} for all $g \in G(n)$ and $v \in (\mathbb{R}^{n})^{\otimes k}$. We denote the vector space of all \textit{linear} $G(n)$-equivariant maps between $(\mathbb{R}^{n})^{\otimes k}$ and $(\mathbb{R}^{n})^{\otimes l}$ by $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})$. We are interested in the equivariant weight matrices that map between any two adjacent layer spaces. It is clear that the equivariant weight matrices mapping $(\mathbb{R}^{n})^{\otimes k}$ to $(\mathbb{R}^{n})^{\otimes l}$ can be found by obtaining either a basis or a spanning set of matrices for $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})$. The diagrams that can be used to obtain such a basis or spanning set form the topic of the next section. \subsection{Set Partition Diagrams} \label{sectSetPart} There are potentially a number of different ways to obtain such a basis or spanning set of matrices for each group. For example, for the symmetric group $S_n$, \citet{maron2018} considered so-called fixed-point equations to obtain a basis of matrices known as the orbit basis, and \citet{godfrey} constructed an $\mathbb{R}$-algebra from the $S_n$-invariant subspace for each tensor power space of $\mathbb{R}^{n}$ to obtain a different basis of matrices known as the diagram basis. We focus, however, on using set partition diagrams, which originally appeared in \citet{pearcecrumpB, pearcecrump}, as they serve as the foundation for our contributions that appear in the upcoming sections. 
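To make the equivariance condition (\ref{Gequivmapdefn}) concrete before introducing the diagrams, the following minimal sketch (ours, not taken from the characterisations discussed in this paper; it assumes NumPy and the toy parameters $n=3$, $k=2$, $l=1$) checks the matrix form of $S_n$-equivariance, $W\rho_k(g)=\rho_l(g)W$ for every permutation matrix, for the linear map $e_i \otimes e_j \mapsto \delta_{ij}e_i$, and shows that a generic matrix of the same shape fails the same test.
\begin{verbatim}
import numpy as np
from itertools import permutations

n, k, l = 3, 2, 1

def rho(P, power):
    # Kronecker power of P: the matrix of the action on (R^n)^{(x) power}.
    out = np.eye(1)
    for _ in range(power):
        out = np.kron(out, P)
    return out

# Candidate weight matrix W : (R^n)^{(x)2} -> R^n,  e_i (x) e_j |-> delta_{ij} e_i,
# written in the standard (row-major) tensor basis.
W = np.zeros((n ** l, n ** k))
for i in range(n):
    W[i, i * n + i] = 1.0

for sigma in permutations(range(n)):
    P = np.eye(n)[list(sigma)]  # a permutation matrix built from sigma
    assert np.allclose(W @ rho(P, k), rho(P, l) @ W)  # equivariance condition holds

R = np.random.default_rng(0).normal(size=(n ** l, n ** k))
P = np.eye(n)[[1, 0, 2]]
print(np.allclose(R @ rho(P, k), rho(P, l) @ R))  # False: a generic map is not equivariant
\end{verbatim}
The same check applies to any candidate weight matrix between tensor power layers, although materialising $\rho_k(g)$ explicitly already requires $O(n^{2k})$ entries, which is exactly the kind of cost that motivates the algorithm developed later in the paper.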
To introduce these diagrams, we need to begin with the definition of a set partition. \begin{definition} For any non-negative integers $l$ and $k$, consider the set $[l+k] \coloneqq \{1, \dots, l+k\}$. A \textbf{set partition} $\pi$ of $[l+k]$ is a partition of $[l+k]$ into a disjoint union of subsets, each of which we call a \textbf{block}. \end{definition} \begin{example} \label{setpartex1} If $l = 4$ and $k = 6$, then \begin{equation} \{1, 2, 5, 7 \mid 3, 4, 10 \mid 6, 8 \mid 9\} \end{equation} is a set partition of $[4+6]$ having $4$ blocks. \end{example} \begin{definition} Let $\pi$ be a set partition of $[l+k]$. We can associate to $\pi$ a diagram $d_\pi$, called a $\bm{(k,l)}$\textbf{--partition diagram}, that consists of two rows of vertices and edges between vertices such that there are \begin{itemize} \item $l$ vertices on the top row, labelled left to right by $1, \dots, l$ \item $k$ vertices on the bottom row, labelled left to right by $l+1, \dots, l+k$, and \item the edges between the vertices correspond to the connected components of $\pi$. \end{itemize} As a result, $d_\pi$ represents the equivalence class of all diagrams with connected components equal to the blocks of $\pi$. \end{definition} \begin{example} The $(6,4)$--partition diagram corresponding to the set partition given in Example~\ref{setpartex1} is \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{background/composition1i}} \end{aligned} \end{equation} \end{example} \begin{definition} We are also interested in the following types of $(k,l)$--partition diagrams. \begin{itemize} \item A $\bm{(k,l)}$\textbf{--Brauer diagram} $d_\beta$ is a $(k,l)$--partition diagram where the size of every block in $\beta$ is exactly two. \item Given $k$ and $l$, an $\bm{(l+k)\backslash n}$\textbf{--diagram} $d_\alpha$ is a $(k,l)$--partition diagram where exactly $n$ blocks in $\alpha$ have size one, with the rest having exactly size two. The vertices corresponding to the blocks of size one are called \textbf{free} vertices. \end{itemize} \end{definition} \begin{example} \label{exBrauerGrooddiag} The first diagram below is a $(7,5)$--Brauer diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{background/algoplanar7}} \end{aligned} \end{equation} whereas the second is a $(5+6) \backslash 3$--diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{background/algoplanar6}} \end{aligned} \end{equation} \end{example} In order to state the characterisations more succinctly in the next section, we introduce a number of vector spaces as the formal $\mathbb{R}$--linear span of certain subsets of $(k,l)$--partition diagrams, as follows. \begin{definition} \label{partvecspace} \hphantom{x} \begin{itemize} \item The \textbf{partition vector space} $P_k^l(n)$ is defined to be the $\mathbb{R}$--linear span of the set of all $(k,l)$--partition diagrams. \item The \textbf{Brauer vector space} $B_k^l(n)$ is defined to be the $\mathbb{R}$--linear span of the set of all $(k,l)$--Brauer diagrams. \item The \textbf{Brauer--Grood vector space} $D_k^l(n)$ is defined to be the $\mathbb{R}$--linear span of the set of all $(k,l)$--Brauer diagrams together with the set of all $(l+k)\backslash n$--diagrams. \end{itemize} \end{definition} \subsection{Characterisation of Equivariant Weight Matrices} \label{sectCharacterisation} We now use the vector spaces given in Definition \ref{partvecspace} to summarise the equivariant weight matrix characterisation results for each of the four groups. 
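Before stating the characterisations, the following minimal sketch (ours; plain Python with the small parameters $l=k=n=2$) enumerates the set partitions of $[l+k]$ that underlie the $(k,l)$--partition diagrams, and checks two counts that appear in the results below: the number of diagrams having at most $n$ blocks, and the number $(l+k-1)!!$ of $(k,l)$--Brauer diagrams when $l+k$ is even.
\begin{verbatim}
from math import prod

def set_partitions(elements):
    # Recursively generate all set partitions of a list of distinct elements.
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):  # put `first` into an existing block ...
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition      # ... or into a new block of its own

l, k, n = 2, 2, 2
diagrams = list(set_partitions(list(range(1, l + k + 1))))

print(len(diagrams))                            # 15, the Bell number of l+k = 4
print(sum(1 for p in diagrams if len(p) <= n))  # 8, the diagrams with at most n blocks
brauer = [p for p in diagrams if all(len(block) == 2 for block in p)]
print(len(brauer) == prod(range(l + k - 1, 0, -2)))  # True: (l+k-1)!! Brauer diagrams
\end{verbatim}
Each set partition produced here is exactly the data of a $(k,l)$--partition diagram, with $1, \dots, l$ read as the top row of vertices and $l+1, \dots, l+k$ as the bottom row.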
We state each basis/spanning set result as a theorem, and give its associated equivariant weight matrix characterisation as a corollary. In order to state the results, recall that, for all non-negative integers $l, k$, the vector space of matrices $\Hom((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ has a standard basis of matrix units \begin{equation} \label{standardbasisunits} \{E_{I,J}\}_{I \in [n]^l, J \in [n]^k} \end{equation} where $I$ is a tuple $(i_1, i_2, \dots, i_l) \in [n]^l$, $J$ is a tuple $(j_1, j_2, \dots, j_k) \in [n]^k$ and $E_{I,J}$ has a $1$ in the $(I,J)$ position and is $0$ elsewhere. \begin{theorem}[Diagram Basis when $G(n) = S_n$] \cite[Theorem 5.4]{godfrey} \label{diagbasisSn} For all non-negative integers $l, k$ and any positive integer $n$, there is a surjection of vector spaces \begin{equation} \label{surjectionSn} \Theta_{k,n}^l : P_k^l(n) \rightarrow \Hom_{S_n}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} that is given by \begin{equation} d_\pi \mapsto D_\pi \end{equation} for all $(k,l)$--partition diagrams $d_\pi$, where $D_\pi$ is defined as follows. If $S_\pi((I,J))$ is defined to be the set \begin{equation} \label{Snindexingset} \left\{(I,J) \in [n]^{l+k} \mid \text{if } x,y \text{ are in the same block of } \pi, \text{then } i_x = i_y \right\} \end{equation} (where we have momentarily replaced the elements of $J$ by $i_{l+m} \coloneqq j_m$ for all $m \in [k]$), then we have that \begin{equation} \label{mappeddiagbasisSn} D_\pi \coloneqq \sum_{I \in [n]^l, J \in [n]^k} \delta_{\pi, (I,J)} E_{I,J} \end{equation} where \begin{equation} \delta_{\pi, (I,J)} \coloneqq \begin{cases} 1 & \text{if } (I,J) \in S_\pi((I,J)) \\ 0 & \text{otherwise} \end{cases} \end{equation} In particular, the set \begin{equation} \label{klSnSpanningSet} \{D_\pi \mid d_\pi \text{ is a } (k,l) \text{--partition diagram having at most } n \text{ blocks} \} \end{equation} is a basis for $\Hom_{S_n}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ in the standard basis of $\mathbb{R}^{n}$, of size $\Bell(l+k,n) \coloneqq \sum_{t=1}^{n} \begin{Bsmallmatrix} l+k\\ t \end{Bsmallmatrix} $, where $ \begin{Bsmallmatrix} l+k\\ t \end{Bsmallmatrix} $ is the Stirling number of the second kind. \end{theorem} \begin{corollary}[Permutation Equivariant Weight Matrices] \label{diagweightmatclass} For all non-negative integers $l, k$ and positive integers $n$, the weight matrix $W$ that appears in an $S_n$-equivariant linear layer function from $(\mathbb{R}^{n})^{\otimes k}$ to $(\mathbb{R}^{n})^{\otimes l}$ must be of the form \begin{equation} W = \sum_{d_\pi \in P_k^l(n)} \lambda_\pi{D_\pi} \end{equation} \end{corollary} \begin{theorem} [Spanning set when $G(n) = O(n)$] \cite[Theorem 6.5]{pearcecrumpB} \label{spanningsetO(n)} For all non-negative integers $l, k$ and any positive integer $n$, there is a surjection of vector spaces \begin{equation} \label{surjectionO(n)} \Phi_{k,n}^l : B_k^l(n) \rightarrow \Hom_{O(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} that is given by \begin{equation} d_\beta \mapsto E_\beta \end{equation} for all $(k,l)$--Brauer diagrams $d_\beta$, where $E_\beta \coloneqq D_\beta$ is given in (\ref{mappeddiagbasisSn}). 
\noindent In particular, the set \begin{equation} \label{klOnSpanningSet} \{E_\beta \mid d_\beta \text{ is a } (k,l) \text{--Brauer diagram} \} \end{equation} is a spanning set for $\Hom_{O(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ in the standard basis of $\mathbb{R}^{n}$, of size $0$ when $l+k$ is odd, and of size $(l+k-1)!!$ when $l+k$ is even. \end{theorem} \begin{corollary}[Orthogonal Group Equivariant Weight Matrices] \label{Onweightmatclass} For all non-negative integers $l, k$ and positive integers $n$, the weight matrix $W$ that appears in an $O(n)$-equivariant linear layer function from $(\mathbb{R}^{n})^{\otimes k}$ to $(\mathbb{R}^{n})^{\otimes l}$ must be of the form \begin{equation} W = \sum_{d_\beta \in B_k^l(n)} \lambda_\beta{E_\beta} \end{equation} \end{corollary} \begin{theorem} [Spanning set when $G(n) = Sp(n), n = 2m$] \cite[Theorem 6.6]{pearcecrumpB} \label{spanningsetSp(n)} For all non-negative integers $l, k$ and any positive integer $n$ such that $n = 2m$, there is a surjection of vector spaces \begin{equation} \label{surjectionSp(n)} X_{k,n}^l : B_k^l(n) \rightarrow \Hom_{Sp(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} that is given by \begin{equation} d_\beta \mapsto F_\beta \end{equation} for all $(k,l)$--Brauer diagrams $d_\beta$, where $F_\beta$ is defined as follows. Associate the indices $i_1, i_2, \dots, i_l$ with the vertices in the top row of $d_\beta$, and $j_1, j_2, \dots, j_k$ with the vertices in the bottom row of $d_\beta$. Then, we have that \begin{equation} \label{matrixSp(n)} F_\beta \coloneqq \sum_{I, J} \gamma_{r_1, u_1} \gamma_{r_2, u_2} \dots \gamma_{r_{\frac{l+k}{2}}, u_{\frac{l+k}{2}}} E_{I,J} \end{equation} where the indices $i_p, j_p$ range over $1, 1', \dots, m, m'$, where $r_1, u_1, \dots, r_{\frac{l+k}{2}}, u_{\frac{l+k}{2}}$ is any permutation of the indices $i_1, i_2, \dots, i_l, j_1, j_2, \dots, j_k$ such that the vertices corresponding to $r_p, u_p$ are in the same block of $\beta$, and \begin{equation} \label{gammarpup} \gamma_{r_p, u_p} \coloneqq \begin{cases} \delta_{r_p, u_p} & \text{if the vertices corresponding to } r_p, u_p \text{ are in different rows of } d_\beta \\ \epsilon_{r_p, u_p} & \text{if the vertices corresponding to } r_p, u_p \text{ are in the same row of } d_\beta \end{cases} \end{equation} Here, $\epsilon_{r_p, u_p}$ is given by \begin{equation} \label{epsilondef1} \epsilon_{\alpha, \beta} = \epsilon_{{\alpha'}, {\beta'}} = 0 \end{equation} \begin{equation} \label{epsilondef2} \epsilon_{\alpha, {\beta'}} = - \epsilon_{{\alpha'}, {\beta}} = \delta_{\alpha, \beta} \end{equation} \noindent In particular, the set \begin{equation} \label{klSpnSpanningSet} \{F_\beta \mid d_\beta \text{ is a } (k,l) \text{--Brauer diagram} \} \end{equation} is a spanning set for $\Hom_{Sp(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$, for $n = 2m$, in the symplectic basis of $\mathbb{R}^{n}$, of size $0$ when $l+k$ is odd, and of size $(l+k-1)!!$ when $l+k$ is even. 
\end{theorem} \begin{corollary}[Symplectic Group Equivariant Weight Matrices] \label{Spnweightmatclass} For all non-negative integers $l, k$ and positive even integers $n$, the weight matrix $W$ that appears in an $Sp(n)$-equivariant linear layer function from $(\mathbb{R}^{n})^{\otimes k}$ to $(\mathbb{R}^{n})^{\otimes l}$ must be of the form \begin{equation} W = \sum_{d_\beta \in B_k^l(n)} \lambda_\beta{F_\beta} \end{equation} \end{corollary} \begin{theorem} [Spanning set when $G(n) = SO(n)$] \cite[Theorem 6.7]{pearcecrumpB} \label{spanningsetSO(n)} For all non-negative integers $l, k$ and positive even integers $n$, there is a surjection of vector spaces \begin{equation} \label{surjectionSO(n)} \Psi_{k,n}^l : D_k^l(n) \rightarrow \Hom_{SO(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} that is given by \begin{equation} d_\beta \mapsto E_\beta \end{equation} if $d_\beta$ is a $(k,l)$--Brauer diagram, where $E_\beta$ was defined in Theorem \ref{spanningsetO(n)}, and by \begin{equation} \label{surjdalpha} d_\alpha \mapsto H_\alpha \end{equation} if $d_\alpha$ is a $(k+l)\backslash n$--diagram, where $H_\alpha$ is defined as follows. Associate the indices $i_1, i_2, \dots, i_l$ with the vertices in the top row of $d_\alpha$, and $j_1, j_2, \dots, j_k$ with the vertices in the bottom row of $d_\alpha$. Suppose that there are $s$ free vertices in the top row. Then there are $n-s$ free vertices in the bottom row. Relabel the $s$ free indices in the top row from left-to-right by $t_1, \dots, t_s$, and the $n-s$ free indices in the bottom row from left-to-right by $b_1, \dots, b_{n-s}$. Then, we have that \begin{equation} \label{SO(n)Halpha} H_\alpha \coloneqq \sum_{I \in [n]^l, J \in [n]^k} \det(e_{T,B}) \delta(R,U) E_{I,J} \end{equation} where $\det(e_{T,B})$ is the determinant of the $n \times n$ matrix \begin{equation} \label{detmapdefn} \lvert e_{t_1} \; e_{t_2} \; \cdots \; e_{t_s} \; e_{b_1} \; \cdots \; e_{b_{n-s}}\rvert \end{equation} and \begin{equation} \delta(R,U) \coloneqq \delta_{r_1, u_1} \delta_{r_2, u_2} \dots \delta_{r_{\frac{l+k-n}{2}}, u_{\frac{l+k-n}{2}}} \end{equation} Here, $r_1, u_1, \dots, r_{\frac{l+k-n}{2}}, u_{\frac{l+k-n}{2}}$ is any permutation of the indices \begin{equation} \{i_1, \dots, i_l, j_1, \dots, j_k\} \backslash \{t_1, \dots, t_s, b_1, \dots, b_{n-s}\} \end{equation} such that the vertices corresponding to $r_p, u_p$ are in the same block of $\alpha$. In particular, the set \begin{equation} \label{SOn2SpanningSet} \{E_\beta\}_{\beta} \cup \{H_\alpha\}_{\alpha} \end{equation} where $d_\beta$ is a $(k,l)$--Brauer diagram, and $d_\alpha$ is a $(l+k) \backslash n$--diagram, is a spanning set for $\Hom_{SO(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ in the standard basis of $\mathbb{R}^{n}$. 
\end{theorem} \begin{corollary}[Special Orthogonal Group Equivariant Weight Matrices] \label{SOnweightmatclass} For all \allowbreak{} non-negative integers $l, k$ and positive integers $n$, the weight matrix $W$ that appears in an $SO(n)$-equivariant linear layer function from $(\mathbb{R}^{n})^{\otimes k}$ to $(\mathbb{R}^{n})^{\otimes l}$ must be of the form \begin{equation} W = \sum_{d_\beta \in D_k^l(n)} \lambda_\beta{E_\beta} + \sum_{d_\alpha \in D_k^l(n)} \lambda_\alpha{H_\alpha} \end{equation} \end{corollary} \section{Equivariant Weight Matrices from Categories} \label{weightmatcat} \input{category} \section{Algorithm for Computing with Equivariant Weight Matrices} \label{algorithmsect} \input{algorithm} \section{Conclusion} \label{conclusion} Group equivariant neural networks that use high-order tensor power spaces of $\mathbb{R}^{n}$ as their layers often face prohibitively high computational costs when an equivariant weight matrix is applied to an input vector. In this work, we have developed an algorithm that improves upon a na\"{i}ve weight matrix multiplication exponentially in terms of its Big-$O$ time complexity for four groups: the symmetric, orthogonal, special orthogonal and symplectic groups. We showed that a category-theoretic framework -- namely, expressing each weight matrix as the image of a linear combination of diagrams under a monoidal functor -- was very effective in helping to reorganise the overall computation so that it can be executed in an optimal manner. We hope that our algorithm will encourage more widespread adoption of group equivariant neural networks that use high-order tensor power spaces in practical applications. \newpage \acks{This work was funded by the Doctoral Scholarship for Applied Research which was awarded to the first author under Imperial College London's Department of Computing Applied Research scheme. This work will form part of the first author's PhD thesis at Imperial College London. } \nocite{*} \bibliography{sample} \end{document} In the previous section, we recalled the linear maps that can be used to obtain the equivariant weight matrices between tensor power spaces of $\mathbb{R}^{n}$ for each of the four groups in question. To motivate the content of this section, we note that, for a given group $G(n)$, each of the weight matrices is in some sense similar, since they are determined in part by the layer spaces, which are tensor power representations that differ only in their order. Likewise, the vector spaces containing certain types of set partition diagrams are also, in some sense, similar, in that, for a fixed value of $n$, the vector spaces mostly differ by the values chosen for $l$ and $k$, which represent the number of vertices in each row of a set partition diagram. We would like to formalise this intuition of similarity across different values of $l$ and $k$ for both the vector spaces of equivariant linear maps and the vector spaces containing certain types of set partition diagrams. For this, the concepts that appear in category theory are ideal, in that they are useful for abstracting away the specifics of structures in order to study their general properties. From the vector spaces that have similar properties, we create so-called monoidal categories, which are categories that have an additional operation for composing objects and morphisms, known as the tensor product, and then use them to create so-called monoidal functors, which are functors that preserve the tensor product between monoidal categories. 
We are particularly interested in the tensor product because the layer spaces are tensor power spaces of $\mathbb{R}^{n}$, and they differ only in the number of tensor products that are involved. We wish to emphasise that we are not simply rewriting existing results in a different language, but that, as an outcome of this process, we obtain new insights into the equivariant neural networks for each group. In particular, we show that set partition diagrams that appear in monoidal categories have a string-like quality to them. By pulling on the strings or dragging their ends to different locations, we can use the monoidal functors to obtain new results that are relevant for the weight matrices that appear in these group equivariant neural networks. We summarise these new results at the end of this section. \subsection{Strict $\mathbb{R}$--Linear Monoidal Categories and String Diagrams} We begin by introducing categories and functors that are \textbf{monoidal}. The monoidal property gives additional structure to the way in which objects and morphisms can be related. In particular, monoidal categories have an additional operation, known as a \textbf{bifunctor}, or a \textbf{tensor product}, that enables objects and morphisms to be composed in a second way, relative to the usual definition of composition that exists in every category. We often call this additional composition \textbf{horizontal}, in contrast to the \textbf{vertical} composition that comes with any category. Crucially, monoidal functors preserve the tensor product across monoidal categories. In this section, we assume knowledge of the definition of a category and the definition of a functor between categories. These definitions can be found in an introductory text on category theory, such as \citet{leinster2014, maclane} or \citet{riehl2017}. We assume throughout that all categories are \textbf{locally small}, which means that the collection of morphisms between any two objects is a set. In fact, all of the categories that we consider going forward have morphism sets that are vector spaces. Hence, the morphisms between objects are actually linear maps. For the definitions in this section, we follow the presentation that is given in \citet{Hu2019} and \citet{Savage2021}. \begin{definition} \label{categorystrictmonoidal} A category $\mathcal{C}$ is said to be \textbf{strict monoidal} if it comes with a functor $\otimes: \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$, known as a \textbf{bifunctor} or a \textbf{tensor product}, and a unit object $\mathds{1}$, such that, for all objects $X, Y, Z$ in $\mathcal{C}$, we have that \begin{equation} (X \otimes Y) \otimes Z = X \otimes (Y \otimes Z) \end{equation} \begin{equation} (\mathds{1} \otimes X) = X = (X \otimes \mathds{1}) \end{equation} and, for all morphisms $f, g, h$ in $\mathcal{C}$, we have that \begin{equation} \label{assocbifunctor} (f \otimes g) \otimes h = f \otimes (g \otimes h) \end{equation} \begin{equation} (1_\mathds{1} \otimes f) = f = (f \otimes 1_\mathds{1}) \end{equation} where $1_\mathds{1}$ is the identity morphism $\mathds{1} \rightarrow \mathds{1}$. We often use the tuple $(\mathcal{C}, \otimes_\mathcal{C}, \mathds{1}_\mathcal{C})$ to refer to the strict monoidal category $\mathcal{C}$. \end{definition} We have started with the definition of a \textit{strict} monoidal category as opposed to that of a monoidal category since we can assume that all monoidal categories are strict, owing to a technical result known as Mac Lane's Coherence Theorem. 
See \citet{maclane} for more details. \begin{definition} \label{categorylinear} A category $\mathcal{C}$ is said to be $\mathbb{R}$\textbf{--linear} if, for any two objects $X, Y$ in $\mathcal{C}$, the morphism space $\Hom_{\mathcal{C}}(X,Y)$ is a vector space over $\mathbb{R}$, and the composition of morphisms is $\mathbb{R}$--bilinear. \end{definition} Combining Definitions \ref{categorystrictmonoidal} and \ref{categorylinear}, we get \begin{definition} A category $\mathcal{C}$ is said to be \textbf{strict} $\mathbb{R}$\textbf{--linear monoidal} if it is a category that is both strict monoidal and $\mathbb{R}$--linear, such that the bifunctor $\otimes$ is $\mathbb{R}$--bilinear. \end{definition} We now consider functors between strict $\mathbb{R}$--linear monoidal categories that preserve the tensor product. \begin{definition} \label{monoidalfunctordefn} Suppose that $(\mathcal{C}, \otimes_\mathcal{C}, \mathds{1}_\mathcal{C})$ and $(\mathcal{D}, \otimes_\mathcal{D}, \mathds{1}_\mathcal{D})$ are two strict $\mathbb{R}$--linear monoidal categories. A \textbf{strict} $\mathbb{R}$\textbf{--linear monoidal functor} from $\mathcal{C}$ to $\mathcal{D}$ is a functor $\mathcal{F}: \mathcal{C} \rightarrow \mathcal{D}$ such that \begin{enumerate} \item for all objects $X, Y$ in $\mathcal{C}$, $\mathcal{F}(X \otimes_\mathcal{C} Y) = \mathcal{F}(X) \otimes_\mathcal{D} \mathcal{F}(Y)$ \item for all morphisms $f, g$ in $\mathcal{C}$, $\mathcal{F}(f \otimes_\mathcal{C} g) = \mathcal{F}(f) \otimes_\mathcal{D} \mathcal{F}(g)$ \item $\mathcal{F}(\mathds{1}_\mathcal{C}) = \mathds{1}_\mathcal{D}$, and \item for all objects $X, Y$ in $\mathcal{C}$, the map \begin{equation} \label{maphomsets} \Hom_{\mathcal{C}}(X,Y) \rightarrow \Hom_{\mathcal{D}}(\mathcal{F}(X),\mathcal{F}(Y)) \end{equation} given by $f \mapsto \mathcal{F}(f)$ is $\mathbb{R}$--linear. \end{enumerate} \end{definition} Strict monoidal categories are particularly interesting because they can be represented by a very useful diagrammatic language known as \textbf{string diagrams}. We will see that, as this language is, in some sense, geometric in nature, it is much easier to work with these diagrams compared with their equivalent algebraic form. \begin{definition} [String Diagrams] Suppose that $\mathcal{C}$ is a strict monoidal category. Let $W, X, Y$ and $Z$ be objects in $\mathcal{C}$, and let $f: X \rightarrow Y$, $g: Y \rightarrow Z$, and $h: W \rightarrow Z$ be morphisms in $\mathcal{C}$. Then we can represent the morphisms $1_{X}: X \rightarrow X$, $f: X \rightarrow Y$, $g \circ f: X \rightarrow Z$ and $f \otimes h: X \otimes W \rightarrow Y \otimes Z$ as string diagrams in the following way: \begin{equation} \begin{aligned} \scalebox{0.75}{\tikzfig{background/stringdiagrams}} \end{aligned} \end{equation} In particular, the vertical composition of morphisms $g \circ f$ is obtained by placing $g$ above $f$, and the horizontal composition of morphisms $f \otimes h$ is obtained by horizontally placing $f$ to the left of $h$. We will often omit the labelling of the objects when they are clear or when they are not important. \end{definition} As an example of how useful string diagrams are when working with strict monoidal categories, the associativity of the bifunctor given in (\ref{assocbifunctor}) becomes immediately apparent. Another, more involved, example is given by the interchange law that exists for any strict monoidal category. 
It can be expressed algebraically as \begin{equation} \label{interchange} (\mathds{1} \otimes g) \circ (f \otimes \mathds{1}) = f \otimes g = (f \otimes \mathds{1}) \circ (\mathds{1} \otimes g) \end{equation} Without string diagrams, it is somewhat tedious to prove this result. Indeed, for the left hand equality of (\ref{interchange}), we have that \begin{equation} (\mathds{1} \otimes g) \circ (f \otimes \mathds{1}) = (\otimes (\mathds{1}, g)) \circ (\otimes (f, \mathds{1})) = \otimes((\mathds{1}, g) \circ (f, \mathds{1})) = \otimes((f, g)) = f \otimes g \end{equation} where we have used the definition of composition in $\mathcal{C} \times \mathcal{C}$ and the fact that $\otimes : \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$ is a functor. A similar calculation also holds for the right hand equality of (\ref{interchange}). However, with string diagram, the result is intuitively obvious, if we allow ourselves to deform the diagrams by pulling on the strings: \begin{equation} \begin{aligned} \scalebox{0.75}{\tikzfig{background/interchangelaw}} \end{aligned} \end{equation} \subsection{Categorification} Previously, we defined a vector space for each non-negative integer $l$ and $k$ that is the $\mathbb{R}$--linear span of a certain subset of $(k,l)$--partition diagrams. However, it is clear that, for all values of $l$ and $k$, these vector spaces are all similar in nature, in that the set partition diagrams only differ by the number of vertices that appear in each row and by the connections that are made between vertices. Moreover, set partition diagrams look like string diagrams. Given that string diagrams represent strict monoidal categories, and that we have a collection of vector spaces for certain subsets of set partition diagrams, this implies that we should have a number of strict $\mathbb{R}$--linear monoidal categories. We formalise this intuition below. \subsubsection{Partition Categories} We assume throughout that $l$ and $k$ are non-negative integers and that $n$ is a positive integer. In Section \ref{sectSetPart}, we defined the partition vector space $P_k^l(n)$, which has a basis consisting of all possible $(k,l)$--partition diagrams. In order to obtain a category from these vector spaces, we need to define the following two $\mathbb{R}$--bilinear operations on $(k,l)$--partition diagrams: \begin{align} \text{composition: } & \bullet: P_l^m(n) \times P_k^l(n) \rightarrow P_k^m(n) \label{compositionstatement} \\ \text{tensor product: } & \otimes: P_k^l(n) \times P_q^m(n) \rightarrow P_{k+q}^{l+m}(n) \label{tensorprodstatement} \end{align} They are given as follows: \begin{definition}[Composition] \label{composition} Let $d_{\pi_1} \in P_k^l(n)$ and $d_{\pi_2} \in P_l^m(n)$. First, we concatenate the diagrams, written $d_{\pi_2} \circ d_{\pi_1}$, by putting $d_{\pi_1}$ below $d_{\pi_2}$, concatenating the edges in the middle row of vertices, and then removing all connected components that lie entirely in the middle row of the concatenated diagrams. Let $c(d_{\pi_2}, d_{\pi_1})$ be the number of connected components that are removed from the middle row in $d_{\pi_2} \circ d_{\pi_1}$. 
Then the composition is defined, using infix notation, as \begin{equation} d_{\pi_2} \bullet d_{\pi_1} \coloneqq n^{c(d_{\pi_2}, d_{\pi_1})} (d_{\pi_2} \circ d_{\pi_1}) \end{equation} \end{definition} \begin{example} \label{partcomp} If $d_{\pi_2}$ is the $(6,4)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{category/composition1i}} \end{aligned} \end{equation} and $d_{\pi_1}$ is the $(3,6)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{category/composition1ii}} \end{aligned} \end{equation} then we have that $d_{\pi_2} \circ d_{\pi_1}$ is the $(3,4)$--partition diagram \begin{equation} \label{partcompdiagram} \begin{aligned} \scalebox{0.6}{\tikzfig{category/composition}} \end{aligned} \end{equation} and so $d_{\pi_2} \bullet d_{\pi_1}$ is the diagram (\ref{partcompdiagram}) multiplied by $n^2$, since two connected components were removed from the middle row of $d_{\pi_2} \circ d_{\pi_1}$. \end{example} \begin{definition}[Tensor Product] \label{tensorprod} Let $d_{\pi_1} \in P_k^l(n)$ and $d_{\pi_2} \in P_q^m(n)$. Then $d_{\pi_1} \otimes d_{\pi_2}$ is defined to be the $(k+q, l+m)$--partition diagram that is obtained by horizontally placing $d_{\pi_1}$ to the left of $d_{\pi_2}$ without any overlapping of vertices. \end{definition} \begin{example} The tensor product $d_{\pi_1} \otimes d_{\pi_2}$, for $d_{\pi_1}$ and $d_{\pi_2}$ given in Example \ref{partcomp}, is the $(9,10)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{category/tensorprod}} \end{aligned} \end{equation} \end{example} It is clear that both the composition and the tensor product operations are associative. Consequently, we have the following category: \begin{definition} \label{partitioncategory} We define the \textbf{partition category} $\mathcal{P}(n)$ to be the category whose objects are the non-negative integers, and, for any pair of objects $k$ and $l$, the morphism space $\Hom_{\mathcal{P}(n)}(k,l)$ is $P_k^l(n)$. The vertical composition of morphisms is the composition of partition diagrams given in Definition \ref{composition}; the horizontal composition of morphisms is the tensor product of partition diagrams given in Definition \ref{tensorprod}; and the unit object is $0$. \end{definition} For $B_k^l(n)$, we can inherit the composition and tensor product operations from the composition and tensor product operations for $P_k^l(n)$, giving the following category: \begin{definition} We define the \textbf{Brauer category} $\mathcal{B}(n)$ to be the category whose objects are the same as those of $\mathcal{P}(n)$ and, for any pair of objects $k$ and $l$, the morphism space $\Hom_{\mathcal{B}(n)}(k,l)$ is $B_k^l(n)$. The vertical and horizontal composition of morphisms and the unit object are the same as those defined for $\mathcal{P}(n)$. \end{definition} We claim the following result. \begin{proposition} The partition category $\mathcal{P}(n)$ and the Brauer category $\mathcal{B}(n)$ are strict $\mathbb{R}$--linear monoidal categories. \end{proposition} \begin{proof} We prove this result for the partition category since the proof for the Brauer category is effectively the same. $\mathcal{P}(n)$ is a strict monoidal category because the bifunctor on objects reduces to the addition of natural numbers, which is associative, and the bifunctor on morphisms is the tensor product of linear combinations of partition diagrams that is given in Definition \ref{tensorprod}, which is also associative. 
$\mathcal{P}(n)$ is $\mathbb{R}$--linear because the morphism space between any two objects is by definition a vector space, and the composition of morphisms is $\mathbb{R}$--bilinear by definition. For the same reason, the bifunctor is also $\mathbb{R}$--bilinear. \end{proof} We can also obtain a strict $\mathbb{R}$--linear monoidal category $\mathcal{BG}(n)$ from the Brauer--Grood vector spaces $D_k^l(n)$. We will call $\mathcal{BG}(n)$ the \textbf{Brauer--Grood category}, although it sometimes appears in the literature under different names, such as the Jellyfish Brauer category in \cite{comes}. Showing that $\mathcal{BG}(n)$ is strict $\mathbb{R}$--linear monoidal is a long and arduous computation that has previously appeared in \citet{LehrerZhang, LehrerZhang2} -- namely, it is the result of combining the proof of \citet[Theorem 2.6]{LehrerZhang} together with the proof of \citet[Theorem 6.1]{LehrerZhang2}. Hence, for brevity, we omit it here. Instead, we show only part of the definition of the horizontal composition, since it will be used in Section \ref{algorithmsect}. To define the horizontal composition in full, we would need to define it on four possible cases of ordered pairs of diagrams. One of those ordered pairs is $(d_\beta, d_\alpha)$, where $d_\beta$ is a $(k,l)$--Brauer diagram and $d_\alpha$ is an $(m+q) \backslash n$--diagram. In this case, the horizontal composition gives the $(l+m+k+q) \backslash n$--diagram that is obtained by horizontally placing $d_\beta$ to the left of $d_\alpha$ without any overlapping of vertices. \begin{example} Recalling the $(7,5)$--Brauer diagram $d_\beta$ and the $(5+6) \backslash 3$--diagram $d_\alpha$ that was given in Example \ref{exBrauerGrooddiag}, we see that $d_\beta \otimes d_\alpha$ is the the $(10+13) \backslash 3$--diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{background/algotensprod}} \end{aligned} \end{equation} \end{example} For the sake of completeness, we provide the framework of the definition of the Brauer--Grood category below. \begin{definition} The \textbf{Brauer--Grood category} $\mathcal{BG}(n)$ is the category whose objects are the same as those of $\mathcal{P}(n)$ and, for any pair of objects $k$ and $l$, the morphism space $\Hom_{\mathcal{BG}(n)}(k,l)$ is $D_k^l(n)$. [We have omitted the full definition of the vertical composition of morphisms and the horizontal composition of morphisms.] The unit object is $0$. \end{definition} We will refer to these categories in general as \textbf{partition categories}. When we wish to specifically reference the partition category $\mathcal{P}(n)$, we will explicitly write $\mathcal{P}(n)$ to clarify that we mean this particular category. \subsubsection{Group Representation Categories} It is not just the partition vector spaces that are similar for different values of $l$ and $k$. The tensor power spaces of $\mathbb{R}^{n}$ that make up the layer spaces and, in particular, the equivariant linear maps between them, are also similar for different values of $l$ and $k$. To form a category from these representations and the linear maps between them, we first need to define a category for all representations of a group. \begin{definition} Let $G$ be a group. 
Then $\Rep(G)$ is the category whose objects are pairs $(V, \rho_V)$, where $\rho_V: G \rightarrow GL(V)$ is a representation of $G$, and, for any pair of objects $(V, \rho_V)$ and $(W, \rho_W)$, the morphism space, $\Hom_{\Rep(G)}((V, \rho_V), (W, \rho_W))$, is precisely the vector space of $G$--equivariant linear maps $V \rightarrow W$, $\Hom_{G}(V, W)$. The vertical composition of morphisms is given by the composition of linear maps, the horizontal composition of morphisms is given by the tensor product of linear maps, both of which are associative operations, and the unit object is given by $(\mathbb{R}, 1_\mathbb{R})$, where $1_\mathbb{R}$ is the one-dimensional trivial representation of $G$. \end{definition} If $G = G(n)$ is a subgroup of $GL(n)$ (and, in particular, one of $S_n, O(n), SO(n)$ or $Sp(n)$), then we have the following category. \begin{definition} \label{CGcategory} If $G(n)$ is a subgroup of $GL(n)$, then the \textbf{tensor power representation category} $\mathcal{C}(G(n))$ is the category whose objects are pairs $\{((\mathbb{R}^n)^{\otimes k},\rho_k)\}_{k \in \mathbb{Z}_{\geq 0}}$, where $\rho_k: G(n) \rightarrow GL((\mathbb{R}^n)^{\otimes k})$ is the representation of $G(n)$ given in Section \ref{sectLayerSpaces}, and, for any pair of objects $((\mathbb{R}^n)^{\otimes k},\rho_k)$ and $((\mathbb{R}^n)^{\otimes l},\rho_l)$, the morphism space is precisely $\Hom_{G(n)}((\mathbb{R}^n)^{\otimes k}, (\mathbb{R}^n)^{\otimes l})$. The vertical and horizontal composition of morphisms together with the unit object are inherited from $\Rep(G(n))$. \end{definition} \begin{proposition} For any subgroup $G(n)$ of $GL(n)$, $\mathcal{C}(G(n))$ is a full subcategory of the category of representations of $G(n)$, $\Rep(G(n))$; that is, for every pair of objects in $\mathcal{C}(G(n))$, every morphism between them in $\Rep(G(n))$ is a morphism in $\mathcal{C}(G(n))$. In particular, $\mathcal{C}(G(n))$ is a strict $\mathbb{R}$--linear monoidal category. \end{proposition} \begin{proof} It is clear from the definitions that $\mathcal{C}(G(n))$ is a subcategory of $\Rep(G(n))$. $\mathcal{C}(G(n))$ is a full subcategory of $\Rep(G(n))$, as, for any pair of objects $((\mathbb{R}^n)^{\otimes k},\rho_k)$ and $((\mathbb{R}^n)^{\otimes l},\rho_l)$ in $C(G(n))$, the morphism space in $\mathcal{C}(G(n))$ is the same as the one in $\Rep(G(n))$. $\mathcal{C}(G(n))$ can immediately be seen to be a strict monoidal category because the bifunctor on objects is the tensor product of vector spaces, which is associative, and the bifunctor on morphisms is the tensor product on linear maps between vector spaces, which is also associative. $\mathcal{C}(G(n))$ is $\mathbb{R}$--linear because the morphism space for any two objects is a vector space over $\mathbb{R}$ and the composition of morphisms is $\mathbb{R}$--bilinear because composition is $\mathbb{R}$--bilinear for linear maps on vector spaces. It is also clear that the bifunctor $\otimes$ is $\mathbb{R}$--bilinear since it is the standard tensor product for vector spaces. \end{proof} \subsection {Monoidal Functors from Partition Categories to Group Representation Categories} \label{fullstrictmonfunct} In Section \ref{sectCharacterisation}, we reviewed a number of linear maps between certain partition vector spaces and the vector space of equivariant linear maps between tensor power representations for four subgroups $G(n)$ of $GL(n)$. 
We now show that these linear maps are the result of something stronger, namely that they are the linear maps that come from the existence of a strict $\mathbb{R}$--linear monoidal functor between a partition category and its associated tensor power representation category. In each of the following four theorems, we state that the functors are \textbf{full}; that is, the underlying map between morphism spaces, having chosen a pair of objects in the domain category, is surjective. \begin{theorem} \label{partfunctor} There exists a full, strict $\mathbb{R}$--linear monoidal functor \begin{equation} \Theta : \mathcal{P}(n) \rightarrow \mathcal{C}(S_n) \end{equation} that is defined on the objects of $\mathcal{P}(n)$ by $\Theta(k) \coloneqq ((\mathbb{R}^{n})^{\otimes k}, \rho_k)$ and, for any objects $k,l$ of $\mathcal{P}(n)$, the map \begin{equation} \label{partmorphism} \Hom_{\mathcal{P}(n)}(k,l) \rightarrow \Hom_{\mathcal{C}(S_n)}(\Theta(k),\Theta(l)) \end{equation} is precisely the map \begin{equation} \label{partmorphismmap} \Theta_{k,n}^l : P_k^l(n) \rightarrow \Hom_{S_n}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} given in Theorem \ref{diagbasisSn}. \end{theorem} \begin{theorem} \label{brauerO(n)functor} There exists a full, strict $\mathbb{R}$--linear monoidal functor \begin{equation} \Phi : \mathcal{B}(n) \rightarrow \mathcal{C}(O(n)) \end{equation} that is defined on the objects of $\mathcal{B}(n)$ by $\Phi(k) \coloneqq ((\mathbb{R}^{n})^{\otimes k}, \rho_k)$ and, for any objects $k,l$ of $\mathcal{B}(n)$, the map \begin{equation} \Hom_{\mathcal{B}(n)}(k,l) \rightarrow \Hom_{\mathcal{C}(O(n))}(\Phi(k),\Phi(l)) \end{equation} is the map \begin{equation} \Phi_{k,n}^l : B_k^l(n) \rightarrow \Hom_{O(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} given in Theorem \ref{spanningsetO(n)}. \end{theorem} \begin{theorem} \label{brauerSp(n)functor} There exists a full, strict $\mathbb{R}$--linear monoidal functor \begin{equation} X : \mathcal{B}(n) \rightarrow \mathcal{C}(Sp(n)) \end{equation} that is defined on the objects of $\mathcal{B}(n)$ by $X(k) \coloneqq ((\mathbb{R}^{n})^{\otimes k}, \rho_k)$ and, for any objects $k,l$ of $\mathcal{B}(n)$, the map \begin{equation} \Hom_{\mathcal{B}(n)}(k,l) \rightarrow \Hom_{\mathcal{C}(Sp(n))}(X(k),X(l)) \end{equation} is the map \begin{equation} X_{k,n}^l : B_k^l(n) \rightarrow \Hom_{Sp(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} given in Theorem \ref{spanningsetSp(n)}. \end{theorem} \begin{theorem} \label{brauerSO(n)functor} There exists a full, strict $\mathbb{R}$--linear monoidal functor \begin{equation} \Psi : \mathcal{BG}(n) \rightarrow \mathcal{C}(SO(n)) \end{equation} that is defined on the objects of $\mathcal{BG}(n)$ by $\Psi(k) \coloneqq ((\mathbb{R}^{n})^{\otimes k}, \rho_k)$ and, for any objects $k,l$ of $\mathcal{BG}(n)$, the map \begin{equation} \Hom_{\mathcal{BG}(n)}(k,l) \rightarrow \Hom_{\mathcal{C}(SO(n))}(\Psi(k),\Psi(l)) \end{equation} is the map \begin{equation} \Psi_{k,n}^l : D_k^l(n) \rightarrow \Hom_{SO(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l}) \end{equation} given in Theorem \ref{spanningsetSO(n)}. \end{theorem} We only show the proof of Theorem \ref{partfunctor} in full since the proofs of Theorems \ref{brauerO(n)functor} and \ref{brauerSp(n)functor} are almost identical. The proof of Theorem \ref{brauerSO(n)functor} can be found in \citet[Theorem 6.1]{LehrerZhang2}. 
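Before giving the proof, it may help to see the object that these theorems describe in concrete form. The following minimal NumPy sketch (not part of the formal development; the function names and block encoding are ours) builds the matrix $D_\pi = \Theta(d_\pi)$ for a small $(k,l)$--partition diagram and checks its $S_n$--equivariance numerically, assuming the diagram basis convention of Theorem \ref{diagbasisSn}: the $(I,J)$ entry of $D_\pi$ is $1$ exactly when the indices attached to the $l+k$ vertices are constant on every block of $\pi$.
\begin{verbatim}
import itertools
import numpy as np

def diagram_matrix(blocks, k, l, n):
    # Dense D_pi for a (k, l)-partition diagram: vertices 1..l label the top
    # row (output indices I), vertices l+1..l+k label the bottom row (input
    # indices J); the (I, J) entry is 1 iff the indices are constant on each
    # block of pi (diagram basis convention, assumed here).
    D = np.zeros((n ** l, n ** k))
    for I in itertools.product(range(n), repeat=l):
        for J in itertools.product(range(n), repeat=k):
            idx = I + J
            if all(len({idx[v - 1] for v in block}) == 1 for block in blocks):
                D[np.ravel_multi_index(I, (n,) * l),
                  np.ravel_multi_index(J, (n,) * k)] = 1.0
    return D

def perm_rep(sigma, m, n):
    # Matrix of rho_m(sigma) on (R^n)^{(x) m}: it relabels the value of every
    # tensor index by sigma, sending e_{i_1} (x) ... (x) e_{i_m} to
    # e_{sigma(i_1)} (x) ... (x) e_{sigma(i_m)}.
    P = np.zeros((n ** m, n ** m))
    for I in itertools.product(range(n), repeat=m):
        P[np.ravel_multi_index(tuple(int(sigma[i]) for i in I), (n,) * m),
          np.ravel_multi_index(I, (n,) * m)] = 1.0
    return P

n, k, l = 3, 3, 2
blocks = [{1, 3}, {2, 4, 5}]   # the (3, 2)-partition diagram {1, 3 | 2, 4, 5}
D = diagram_matrix(blocks, k, l, n)
sigma = np.random.default_rng(0).permutation(n)
assert np.allclose(perm_rep(sigma, l, n) @ D, D @ perm_rep(sigma, k, n))
\end{verbatim}
Multiplying a vector by $D_\pi$ in this dense form costs $O(n^{l+k})$ operations; avoiding this cost is precisely the subject of Section \ref{algorithmsect}.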
\\ \begin{proof}[Proof of Theorem \ref{partfunctor}] We prove this theorem in a series of steps. \\ \noindent \textbf{Step 1}: We need to show that $\Theta : \mathcal{P}(n) \rightarrow \mathcal{C}(S_n)$ is a functor. We prove this step in three parts. \begin{itemize} \item $\Theta$ maps the objects of $\mathcal{P}(n)$ to the objects of $\mathcal{C}(S_n)$ by definition. \item It is enough to show that $\Theta(g)\Theta(f) = \Theta(g \bullet f)$ on arbitrary basis elements of arbitrary morphism spaces where the codomain of $f$ is the domain of $g$ because the morphism spaces are vector spaces. Let $f = d_{\pi_1}$ be a $(k,l)$--partition diagram, and let $g = d_{\pi_2}$ be an $(l,m)$--partition diagram. Then, by Definition \ref{composition}, we have that \begin{equation} d_{\pi_2} \bullet d_{\pi_1} = n^{c(d_{\pi_2}, d_{\pi_1})} d_{\pi_3} \end{equation} where $d_{\pi_3}$ is the composition $d_{\pi_2} \circ d_{\pi_1}$ expressed as a $(k,m)$--partition diagram. Then \begin{equation} \label{catpartbulletmatmult} \Theta(g \bullet f) = \Theta(d_{\pi_2} \bullet d_{\pi_1}) = n^{c(d_{\pi_2}, d_{\pi_1})} D_{\pi_3} \end{equation} We also have that \begin{align} \Theta(g)\Theta(f) = D_{\pi_2}D_{\pi_1} & = \left( \sum_{I \in [n]^m, K \in [n]^l} \delta_{\pi_2, (I,K)} E_{I,K} \right) \left( \sum_{L \in [n]^l, J \in [n]^k} \delta_{\pi_1, (L,J)} E_{L,J} \right) \\ & = \sum_{I \in [n]^m, K \in [n]^l, J \in [n]^k} \delta_{\pi_2, (I,K)} \delta_{\pi_1, (K,J)} E_{I,K}E_{K,J} \label{catpartcompmultmat} \end{align} For fixed $I, J$, consider \begin{equation} \label{catpartcompdeltas} \sum_{K \in [n]^l} \delta_{\pi_2, (I,K)} \delta_{\pi_1, (K,J)} \end{equation} This is equal to \begin{equation} n^{c(d_{\pi_2}, d_{\pi_1})} \delta_{\pi_3, (I,J)} \end{equation} Indeed, for fixed $I,J$, $\delta_{\pi_3, (I,J)}$ is $1$ if and only if both $\delta_{\pi_2, (I,K)}$ and $\delta_{\pi_1, (K,J)}$ are $1$ for some $K \in [n]^l$ since $d_{\pi_3}$ is the composition $d_{\pi_2} \circ d_{\pi_1}$. The number of such $K$ is determined only by the vertices that appear in connected components that are removed from the middle row of $d_{\pi_2} \circ d_{\pi_1}$, since, for fixed $I,J$, only these vertices can be freely chosen if we want both $\delta_{\pi_2, (I,K)}$ and $\delta_{\pi_1, (K,J)}$ to be $1$. However, since the entries in each connected component must be the same for both $\delta_{\pi_2, (I,K)}$ and $\delta_{\pi_1, (K,J)}$ to be $1$, this implies that the number of $K \in [n]^l$ such that both $\delta_{\pi_2, (I,K)}$ and $\delta_{\pi_1, (K,J)}$ are $1$ is $n^{c(d_{\pi_2}, d_{\pi_1})}$. Hence, using that $E_{I,K}E_{K,J} = E_{I,J}$, (\ref{catpartcompmultmat}) becomes \begin{equation} \sum_{I \in [n]^m, J \in [n]^k} n^{c(d_{\pi_2}, d_{\pi_1})} \delta_{\pi_3, (I,J)} E_{I,J} = n^{c(d_{\pi_2}, d_{\pi_1})} D_{\pi_3} \end{equation} and so, by (\ref{catpartbulletmatmult}), we have that $\Theta(g)\Theta(f) = \Theta(g \bullet f)$, as required. \item For each object $k$ in $\mathcal{P}(n)$, the identity morphism $1_{k}$ is the $(k,k)$--partition diagram $d_\pi$ where $\pi$ is the set partition \begin{equation} \{1, k+1 \mid 2, k+2 \mid \dots \mid k, 2k\} \end{equation} of $[2k]$, and its image under $d_\pi \mapsto D_\pi$ is the $n^k \times n^k$ identity matrix, as required. \end{itemize} \noindent \textbf{Step 2}: We need to show that $\Theta : \mathcal{P}(n) \rightarrow \mathcal{C}(S_n)$ is full. 
\\ The functor $\Theta$ is full because, for all objects $k, l$ in $\mathcal{P}(n)$, the map (\ref{partmorphism}) is (\ref{partmorphismmap}), by definition, and this map is surjective by Theorem \ref{diagbasisSn}. \\ \noindent \textbf{Step 3}: We need to show that $\Theta : \mathcal{P}(n) \rightarrow \mathcal{C}(S_n)$ is strict $\mathbb{R}$--linear monoidal. \\ To show that $\Theta$ is strict $\mathbb{R}$--linear monoidal, we need to show that $\Theta$ satisfies the conditions given in Definition \ref{monoidalfunctordefn}. The picture to have in mind for point 2 below is that the tensor product on set partition diagrams places the diagrams side-by-side, without any overlapping of vertices. \begin{enumerate} \item Let $k, l$ be any two objects in $\mathcal{P}(n)$. Then \begin{align} \Theta(k \otimes l) & = \Theta(k + l) \\ & = ((\mathbb{R}^{n})^{\otimes k+l}, \rho_{k+l}) \\ & = ((\mathbb{R}^{n})^{\otimes k} \otimes (\mathbb{R}^{n})^{\otimes l}, \rho_k \otimes \rho_l) \\ & = \Theta(k) \otimes \Theta(l) \end{align} \item It is enough to prove this condition on arbitrary basis elements of arbitrary morphism spaces as the morphism spaces are vector spaces. Suppose that $f: k \rightarrow l$ and $g: q \rightarrow m$ are two basis elements in $\Hom_{\mathcal{P}(n)}(k,l)$ and $\Hom_{\mathcal{P}(n)}(q,m)$ respectively. Then $f = d_\pi$ for some set partition $\pi$ of $[l+k]$, and $g = d_\tau$ for some set partition $\tau$ of $[m+q]$. As $\Hom_{\mathcal{P}(n)}(k,l) = P_k^l(n)$ and $\Hom_{\mathcal{P}(n)}(q,m) = P_q^m(n)$, we have, by Definition \ref{tensorprod}, that $f \otimes g$ is an element of $P_{k+q}^{l+m}(n) = \Hom_{\mathcal{P}(n)}(k+q,l+m)$. In particular, $f \otimes g = d_{\omega}$ for the set partition $\omega \coloneqq \pi \cup \tau$ of $[l+m+k+q]$. By Theorem \ref{diagbasisSn}, we have that $\Theta(f) = D_\pi$, $\Theta(g) = D_\tau$, and $\Theta(f \otimes g) = D_\omega$. We now show that $\Theta(f) \otimes \Theta(g) = \Theta(f \otimes g)$. Indeed, we have that \begin{align} \Theta(f) \otimes \Theta(g) & = D_\pi \otimes D_\tau \\ & = \left( \sum_{I \in [n]^l, J \in [n]^k} \delta_{\pi, (I,J)} E_{I,J} \right) \otimes \left( \sum_{X \in [n]^m, Y \in [n]^q} \delta_{\tau, (X,Y)} E_{X,Y} \right) \\ & = \sum_{(I,X) \in [n]^{l+m}, (J,Y) \in [n]^{k+q}} \delta_{\omega, (I,X),(J,Y))} E_{(I,X),(J,Y)} \label{deltaprod} \\ & = D_\omega \\ & = \Theta(f \otimes g) \end{align} where (\ref{deltaprod}) holds because $S_\pi((I,J)) \cup S_\tau((X,Y)) = S_\omega((I,X),(J,Y))$. \item It is clear from the statement of the theorem that $\Theta$ sends the unit object $0$ in $\mathcal{P}(n)$ to $(\mathbb{R}, 1_\mathbb{R})$, which is the unit object in $\mathcal{C}(S_n)$. \item This is immediate because the map (\ref{partmorphismmap}) is $\mathbb{R}$--linear by Theorem \ref{diagbasisSn}. \myqedhere \end{enumerate} \end{proof} \subsection{Implications for Group Equivariant Neural Networks} \label{implicationsgroupequiv} The theorems in Section \ref{fullstrictmonfunct} imply several key statements that are crucial for the group equivariant neural networks under consideration. These statements can be formulated as follows. \begin{enumerate} \item To understand and work with any matrix in $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$, it is enough to work with the subset of $(k,l)$--partition diagrams that correspond to $G(n)$. 
This is because we can express any matrix in terms of the set of spanning set elements for $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$, and these correspond bijectively with the subset of $(k,l)$--partition diagrams that corresponds to $G(n)$. We can recover the matrix itself by applying the appropriate monoidal functor to the set partition diagrams because the monoidal functors are full. \item We can manipulate the connected components and vertices of $(k,l)$--partition diagrams to obtain new set partition diagrams. Indeed, as the partition categories are strict monoidal, this means that the $(k,l)$--partition diagrams in a partition category are string diagrams. Statement 1 immediately implies that we will obtain new matrices between tensor power spaces of $\mathbb{R}^{n}$ that are equivariant to $G(n)$ from the resulting set partition diagrams. \item If a $(k,l)$--partition diagram can be decomposed as a tensor product of smaller set partition diagrams, then the corresponding matrix can also be decomposed as a tensor product of smaller sized matrices, each of which is equivariant to $G(n)$. This is because the functors are monoidal. \end{enumerate} In the following section, we present our main contribution. We use the monoidal functors that we have established for each of the four groups, together with the implications for the group equivariant neural networks given above, to construct a fast multiplication algorithm for passing a vector through a linear layer function that appears in such a network. Suppose that we would like to pass a vector $v \in (\mathbb{R}^{n})^{\otimes k}$ through a linear layer function that lives in $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$. A naive implementation of the matrix multiplication between the weight matrix and the vector has a time complexity equal to $O(n^{l+k})$. We show that we can construct a fast multiplication algorithm that reduces the time complexity exponentially for each of the groups $G(n)$ in question. It is important to highlight that, since we have at least a spanning set of the vector space $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ for each group $G(n)$ that appeared in the previous chapter, it is enough to describe an algorithm for how to multiply $v$ by a spanning set element since we can extend the result by linearity. This linearity provides an extra advantage in that it enables the fast matrix multiplication of an input vector $v \in (\mathbb{R}^{n})^{\otimes k}$ with a weight matrix in $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ to be executed in parallel, namely by performing the fast matrix multiplication on each of the spanning set elements that make up the weight matrix. We begin by introducing a new class of set partition diagrams that we have termed \textbf{algorithmically planar}. \subsection{Algorithmically Planar Set Partition Diagrams} Recall that each spanning set element in $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ corresponds to a set partition diagram under a monoidal functor for $G(n)$, and that a set partition diagram is a string diagram since it is a morphism in a monoidal category. This leads to the following question that we use as motivation: can we use the string-like property of each set partition diagram to gain control over how the matrix multiplication operation is performed between the spanning set element and the vector? 
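As a first indication that the answer is yes, the following small, generic NumPy sketch (independent of set partition diagrams; the variable names are ours and purely illustrative) records the computational phenomenon that the algorithm will exploit: a matrix that is a Kronecker product of small factors can be applied to a vector one factor at a time, without ever forming the dense matrix.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))      # acts on the first tensor factor
B = rng.standard_normal((n, n))      # acts on the second tensor factor
v = rng.standard_normal(n * n)       # a vector in (R^n) (x) (R^n)

dense = np.kron(A, B) @ v                             # O(n^4) arithmetic
factored = (A @ v.reshape(n, n) @ B.T).reshape(-1)    # O(n^3) arithmetic

assert np.allclose(dense, factored)
\end{verbatim}
The remainder of this subsection is about arranging, via the string-like property of the diagrams, for the spanning set matrices to be applied in exactly this factored fashion.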
There are a few guiding principles that we use to construct the algorithm. Ideally, we would like to break the matrix multiplication operation down into a series of computations, each of which is indecomposable. Given the monoidal functor relationships between set partition diagrams and matrix operations, we know that the smallest possible computations that are available to us correspond precisely with the smallest indecomposable set partition diagrams. In general, set partition diagrams that consist only of one block are the smallest indecomposable diagrams. The exception is for $(l+k) \backslash n$--diagrams, where the smallest indecomposable diagrams are either those consisting of a single block of two vertices or a diagram consisting of all of the free vertices taken together. Moreover, a single block can live either entirely in the top row, entirely in the bottom row, or in both rows. Hence, we see that there are only a few classes of indecomposable diagrams that are smallest. As a result, there are only a few classes of indecomposable matrix operations that are smallest. We can say more. Since the matrices that we obtain must come from set partition diagrams, the most efficient matrices that we can construct are a Kronecker product of indecomposable matrices. But also, since indecomposable matrices come from indecomposable set partition diagrams, this implies that the most efficient matrices that we can construct come from set partition diagrams that decompose into a tensor product of the smallest indecomposable set partition diagrams. Moreover, we would like to choose the order in which the indecomposable matrices appear in the Kronecker product because each indecomposable matrix performs a certain operation and we would like them to be executed in an order that is as efficient as possible. For example, it is best to zero out as many terms as possible in the input vector first, if such an operation can be performed, since this reduces the number of elements that need to be operated on going forward. After this, it is better to perform tensor contraction operations before performing copying operations, because this reduces the number of elements that need to be copied. It is clear that choosing any other order for these operations would make the overall algorithm less efficient. We will see that each of these operations can be understood easily in their equivalent diagram form. Consequently, from the set partition diagram that corresponds to the spanning set element, we would like to create another set partition diagram that decomposes into a tensor product of the smallest possible indecomposable set partition diagrams and are ordered such that the indecomposable matrix operations that appear in the corresponding Kronecker product will be performed most efficiently. We call these special set partition diagrams algorithmically planar, and by construction they are the most efficient diagrams for performing matrix multiplication. They are defined as follows. \begin{definition} \label{algplanardefnSn} A $(k,l)$--partition diagram is said to be \textbf{algorithmically planar} if it satisfies the following conditions: \begin{itemize} \item The connected components that are solely in the bottom row are lined up sequentially, starting from the far right hand side, such that the vertices in each connected component are in consecutive order. 
We also require that these connected components are ordered by their size, starting on the far right hand side with the largest and decreasing in size as we move towards the left, for reasons that will become clear when we conduct a time complexity analysis of the multiplication algorithm for the symmetric group. \item The connected components that are solely in the top row are lined up sequentially, starting from the far left hand side, such that the vertices in each connected component are in consecutive order. (Note that these connected components can be in any order.) \item The connected components between different rows are such that no two subsets cross. \end{itemize} \end{definition} \begin{example} The $(6,5)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar1}} \end{aligned} \end{equation} is algorithmically planar. However, the $(6,5)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar2}} \end{aligned} \end{equation} is not algorithmically planar because the connected component consisting solely of vertex $5$ is not adjacent to a connected component that lives solely in the top row, and the $(6,5)$--partition diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar3}} \end{aligned} \end{equation} is also not algorithmically planar because the vertices in the connected component $\{2, 4\}$ are not in consecutive order. \end{example} \begin{definition} A $(k,l)$--Brauer diagram is said to be \textbf{algorithmically planar} if it is an algorithmically planar $(k,l)$--partition diagram that is also a Brauer diagram. \end{definition} \begin{example} The $(7,5)$--Brauer diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar4}} \end{aligned} \end{equation} is algorithmically planar, whereas the $(7,5)$--Brauer diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar7}} \end{aligned} \end{equation} is not algorithmically planar. \end{example} \begin{definition} An $(l+k) \backslash n$--diagram is said to be \textbf{algorithmically planar} if it satisfies the following conditions: \begin{itemize} \item The free vertices in the bottom row start from the far right hand side and are lined up sequentially. \item The free vertices in the top row also start from the far right hand side and are lined up sequentially. \item The connected components that are solely in the bottom row are lined up sequentially, starting from the right hand side to the left of the free vertices, such that the vertices in each connected component are in consecutive order. \item The connected components that are solely in the top row are lined up sequentially, starting from the far left hand side, such that the vertices in each connected component are in consecutive order. \item The connected components between different rows are such that no two subsets cross. \end{itemize} \end{definition} \begin{example} The $(5+6) \backslash 3$--diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar5}} \end{aligned} \end{equation} is algorithmically planar. However, the $(5+6) \backslash 3$--diagram \begin{equation} \begin{aligned} \scalebox{0.6}{\tikzfig{algorithm/algoplanar6}} \end{aligned} \end{equation} is not algorithmically planar because the free vertex labelled $1$ is not at the far right hand side of the top row. 
\end{example} \begin{remark} It is clear that an algorithmically planar set partition diagram is \textbf{planar}, that is, none of the connected components in the diagram cross. \end{remark} \subsection{Multiplication Algorithm} In Algorithm 1, we outline a procedure called \textbf{MatrixMult} that performs the matrix multiplication of $v$ by a spanning set element in $\Hom_{G(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ using the guiding principles that were given in the previous section. We assume that we have the set partition diagram that is associated with the spanning set element. Note that in the description of the algorithm, we have used $d_\pi$ to represent a generic $(k,l)$--partition diagram; however, the type of set partition diagrams that we can use as input depends entirely upon the group $G(n)$. For example, if $G(n) = O(n)$, then only $(k,l)$--Brauer diagrams are valid as inputs to the procedure. \begin{figure*}[tb] \begin{tcolorbox}[colback=melon!10, colframe=melon!40, coltitle=black, title={{\bfseries Algorithm 1:} \textbf{MatrixMult}($G(n), d_\pi, v$)}] \textbf{Inputs:} \begin{itemize} \item $G(n)$ is a group. \item $d_\pi$ is an appropriate $(k,l)$--partition diagram for $G(n)$. \item $v \in (\mathbb{R}^{n})^{\otimes k}$. \end{itemize} \textbf{Perform the following steps:} \begin{enumerate} \item $\sigma_k, d_\pi, \sigma_l \leftarrow \textbf{Factor}(G(n), d_\pi)$ \item $v \leftarrow \textbf{Permute}(v, \sigma_k)$ \item $w \leftarrow \textbf{PlanarMult}(G(n), d_\pi, v)$ \item $w \leftarrow \textbf{Permute}(w, \sigma_l)$ \end{enumerate} \textbf{Output:} $w \in (\mathbb{R}^{n})^{\otimes l}$. \end{tcolorbox} \label{alg1} \end{figure*} We now describe each of the subprocedures that appear in Algorithm 1 in more detail. \textbf{Factor} takes as input a $(k,l)$--partition diagram that is appropriate for the group $G(n)$. The procedure uses the string-like property of these diagrams to output three diagrams whose composition is equivalent to the original input diagram. The first is a $(k,k)$--partition diagram that corresponds to a permutation $\sigma_k \in S_k$. Specifically, if $i$ is a vertex in the top row, then the vertex in the bottom row that is connected to it is $k + \sigma_k(i)$. The second is a $(k,l)$--partition diagram that is algorithmically planar. The third is an $(l,l)$--partition diagram which can be interpreted as a permutation $\sigma_l \in S_l$ in the same way as the first diagram, replacing $k$ with $l$. \textbf{Permute} takes as input a vector $w \in (\mathbb{R}^{n})^{\otimes m}$, for some $n, m$, that is expressed in the standard basis of $\mathbb{R}^{n}$, and a permutation $\sigma$ in $S_m$, and outputs another vector in $(\mathbb{R}^{n})^{\otimes m}$ where only the indices of the basis vectors~-- and not the indices of the coefficients of $w$~-- have been permuted according to $\sigma$. Said differently, \textbf{Permute} performs the following operation, which is extended linearly: \begin{equation} \label{permuteop} \sigma \cdot w_Ie_I \coloneqq w_I(e_{i_{\sigma(1)}} \otimes e_{i_{\sigma(2)}} \otimes \dots \otimes e_{i_{\sigma(m)}}) \end{equation} Note that in Algorithm 1, $m$ will be equal to either $k$ or $l$. \textbf{PlanarMult} takes as input an algorithmically planar set partition diagram of the form returned by \textbf{Factor} together with a vector, and performs a fast matrix multiplication on this vector. 
Since the set partition diagram is algorithmically planar, we first use the monoidal property of the category in which it is a morphism to decompose it as a tensor product of smaller set partition diagrams. Next, we apply the monoidal functor that is appropriate for $G(n)$ to express this tensor product of diagrams as a Kronecker product of smaller matrices. Finally, we perform matrix multiplication by applying these smaller matrices to the input vector from ``right-to-left, diagram-by-diagram'' -- to be described in more detail for each group below -- returning another vector as output. \begin{remark} In effect, the \textbf{Factor} procedure takes as input a $(k,l)$--partition diagram that is appropriate for the group $G(n)$ and swaps as many pairs of vertices as necessary in each row in order to obtain an algorithmically planar $(k,l)$--partition diagram. The composition of the swaps in each row is represented by a permutation diagram. \end{remark} \begin{remark} We note that the implementations of the \textbf{Factor} and \textbf{PlanarMult} procedures vary according to the group $G(n)$ and the type of set partition diagrams that correspond to $G(n)$, although they share many commonalities. We describe the implementation of these procedures for each of the groups below. \end{remark} \begin{remark} For each group $G(n)$, we also analyse the time complexity of the implementation of \textbf{MatrixMult} and compare it with the time complexity of the naive implementation of matrix multiplication. Note that in our analysis, we are viewing memory operations, such as permuting basis vectors and making copies of coefficients, as having no cost. Hence, we view the procedures \textbf{Factor} and \textbf{Permute} as having no cost. As a result, it is enough to consider the computational cost of \textbf{PlanarMult} only. \end{remark} \subsubsection{Symmetric Group $S_n$} The implementation of Algorithm 1 that we give here for the symmetric group effectively recovers the algorithm that was presented in \citet[Appendix C]{godfrey}; however, we take an entirely different approach that uses monoidal categories instead. The major difference in implementation between the two versions relates to how, in our terminology, the connected components between vertices in different rows of $d_\pi$ are pulled into the middle diagram in \textbf{Factor}. We choose to make sure that the connected components do not cross, making the resulting diagram algorithmically planar, whereas in \citet{godfrey} they choose to connect the vertices in different rows such that the left-most vertices in the top row of the new diagram connect to the right-most vertices in the bottom row, that is, they make the connected components cross in ``opposites''. While this does not make any significant difference in terms of performing the matrix multiplication for the symmetric group -- indeed, there is only a difference in how the tensor indices are ordered~-- our decision to make the middle diagram in the composition algorithmically planar leads to significant performance improvements when we extend our approach to the other groups below, since for these groups, we will show that these operations reduce to the identity transformation. 
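Throughout the group-specific implementations below, it is convenient to think of a vector in $(\mathbb{R}^{n})^{\otimes m}$ as its coefficient tensor of shape $(n, \dots, n)$. Under this (assumed) storage convention, the \textbf{Permute} subprocedure of (\ref{permuteop}) is nothing more than an axis transposition of the coefficient tensor, as the following minimal NumPy sketch shows; the function name is ours, and the brute-force loop is included only to check the axis convention.
\begin{verbatim}
import itertools
import numpy as np

def permute(W, sigma):
    # Permute(w, sigma): attach the coefficient w_I to the basis vector
    # e_{i_{sigma(1)}} (x) ... (x) e_{i_{sigma(m)}}.  On the coefficient
    # tensor W this is an axis transposition; sigma is given in zero-based
    # one-line notation (entry p is the image of p).
    return np.transpose(W, axes=sigma)

rng = np.random.default_rng(0)
n, m = 3, 5
W = rng.standard_normal((n,) * m)
sigma = [2, 3, 0, 1, 4]              # the permutation (13)(24) in S_5
out = np.zeros_like(W)
for I in itertools.product(range(n), repeat=m):
    out[tuple(I[sigma[p]] for p in range(m))] = W[I]
assert np.allclose(out, permute(W, sigma))
\end{verbatim}
In particular, as noted in the remark above, this step involves no arithmetic, which is why it is assigned no cost in the time complexity analysis.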
\paragraph{Implementation} \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.35}{\tikzfig{algorithm/symmfactoringSn}} \end{center} \end{tcolorbox} \caption{ We use the string-like property of $(k,l)$--partition diagrams to \textbf{Factor} them as a composition of a permutation in $S_k$, an algorithmically planar $(k,l)$--partition diagram, and a permutation in $S_l$. Here, $k=5$ and $l=4$. } \label{symmfactoringSn} \end{figure*} We provide specific implementations of \textbf{Factor} and \textbf{PlanarMult} for the symmetric group $S_n$. \textbf{Factor}: The input is a $(k,l)$--partition diagram $d_\pi$. We drag and bend the strings representing the connected components of $d_\pi$ to obtain three diagrams whose composition is equivalent to $d_\pi$: a $(k,k)$--partition diagram that represents a permutation $\sigma_k$ in the symmetric group $S_k$; an algorithmically planar $(k,l)$--partition diagram; and a $(l,l)$--partition diagram that represents a permutation $\sigma_l$ in the symmetric group $S_l$. To obtain the algorithmically planar $(k,l)$--partition diagram, we drag and bend the strings in any way such that \begin{itemize} \item the connected components that are solely in the bottom row of $d_\pi$ are pulled up to be next to each other in the far right hand side of the bottom row of the algorithmically planar $(k,l)$--partition diagram, such that the connected components are ordered by their size, in decreasing order from right to left, \item the connected components that are solely in the top row of $d_\pi$ are pulled down to be next to each other in the far left hand side of the top row of the algorithmically planar $(k,l)$--partition diagram, and \item the connected components between vertices in different rows of $d_\pi$ are bent to be in between the other vertices of the algorithmically planar $(k,l)$--partition diagram such that no two subsets of connected components cross. \end{itemize} We give an example of this procedure in Figure~\ref{symmfactoringSn}. \textbf{PlanarMult}: We take as input the algorithmically planar $(k,l)$--partition diagram $d_\pi$ having at most $n$ blocks that is the output of \textbf{Factor}, and a vector $v \in (\mathbb{R}^{n})^{\otimes k}$ that is the output of \textbf{Permute}, as per Algorithm 1. Let $t$ be the number of blocks that are solely in the top row of $d_\pi$, let $d$ be the number of blocks that connect vertices in different rows of $d_\pi$, and let $b$ be the number of blocks that are solely in the bottom row of $d_\pi$. Given how \textbf{Factor} constructs the algorithmically planar $(k,l)$--partition diagram $d_\pi$, the set partition $\pi$ corresponding to $d_\pi$ will be of the form \begin{equation} \label{symmpartfactor} \left(\bigcup_{i = 1}^{t} T_i \right) \bigcup \left(\bigcup_{i = 1}^{d} D_i \right) \bigcup \left(\bigcup_{i = 1}^{b} B_i \right) \end{equation} such that \begin{equation} |B_1| \leq |B_2| \leq \dots \leq |B_b| \end{equation} where we have used $T_i$ to refer to a top row block, $D_i$ to refer to a different row block, and $B_i$ to refer to a bottom row block. 
If we let \begin{equation} D_i \coloneqq D_i^{U} \cup D_i^{L} \end{equation} where $D_i^{U}$ is the subset of $D_i$ whose vertices are in the top row of $d_\pi$, and $D_i^{L}$ is the subset of $D_i$ whose vertices are in the bottom row of $d_\pi$, then we have that the vertices in each subset are given by \begin{itemize} \item $T_i = \left\{ \sum_{j=1}^{i-1} |T_j| + 1, \dots, \sum_{j=1}^{i} |T_j| \right\}$ for all $i = 1 \rightarrow t$ \item $D_i^{U} = \left\{ \sum_{j=1}^{t} |T_j| + \sum_{j=1}^{i-1} |D_j^{U}| + 1, \dots, \sum_{j=1}^{t} |T_j| + \sum_{j=1}^{i} |D_j^{U}| \right\}$ for all $i = 1 \rightarrow d$, \item $D_i^{L} = \left\{ l + \sum_{j=1}^{i-1} |D_j^{L}| + 1, \dots, l + \sum_{j=1}^{i} |D_j^{L}| \right\}$ for all $i = 1 \rightarrow d$, and \item $B_i = \left\{ l + \sum_{j=1}^{d} |D_j^{L}| + \sum_{j=1}^{i-1} |B_j| + 1, \dots, l + \sum_{j=1}^{d} |D_j^{L}| + \sum_{j=1}^{i} |B_j| \right\}$ for all $i = 1 \rightarrow b$. \end{itemize} Note, in particular, that \begin{equation} \label{symmupper} l = \sum_{j = 1}^{t} |T_j| + \sum_{j = 1}^{d} |D_j^{U}| \end{equation} and \begin{equation} \label{symmlower} k = \sum_{j = 1}^{d} |D_j^{L}| + \sum_{j = 1}^{b} |B_j| \end{equation} Next, we take $d_\pi$ and express it as a tensor product of three types of set partition diagrams. The right-most type is itself a tensor product of diagrams corresponding to the $B_i$; the middle type is a single diagram corresponding to $\bigcup_{i = 1}^{d} D_i$; and the left-most type is itself a tensor product of diagrams corresponding to the $T_i$. We apply the monoidal functor $\Theta$ to this tensor product decomposition of diagrams, which returns a Kronecker product of matrices. We perform the matrix multiplication by applying the matrices ``right-to-left, diagram-by-diagram", as follows. \textbf{Step 1: Apply each matrix corresponding to a bottom row block diagram, one-by-one, starting from the one that corresponds to $B_b$ and ending with the one that corresponds to $B_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$. The input will be a vector $w \in (\mathbb{R}^{n})^{\otimes k - \sum_{j = i+1}^{b} |B_j|}$. We can express $w$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} w = \sum_{L \in [n]^{k - \sum_{j = i+1}^{b} |B_j|}} w_Le_L \end{equation} This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes k - \sum_{j = i}^{b} |B_j|}$, where $r$ is of the form \begin{equation} \label{rMSncoeff} r = \sum_{M \in [n]^{k - \sum_{j = i}^{b} |B_j|}} r_Me_M \end{equation} and \begin{equation} \label{indicesj} r_M = \sum_{j \in [n]} w_{M,j, \dots, j} \end{equation} where the number of indices $j$ in (\ref{indicesj}) is $|B_i|$. At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes k - \sum_{j = 1}^{b} |B_j|}$. Note that the matrices corresponding to bottom row connected components are merely performing indexing and summation operations, that is, ultimately, tensor contractions. \textbf{Step 2: Now apply the matrix corresponding to the middle diagram, that is, to the set $\bigcup_{i = 1}^{d} D_i$.} The input will be a vector $w \in (\mathbb{R}^{n})^{\otimes k - \sum_{j = 1}^{b} |B_j|}$. 
Using (\ref{symmlower}), we can express $w$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} w = \sum_{j_1, \dots, j_d \in [n]} w_{j_1, \dots, j_1, j_2, \dots, j_2, \dots, j_d, \dots, j_d} \bigotimes_{k = 1}^{d} \left( \bigotimes_{p = 1}^{|D_k^{L}|} e_{j_k} \right) \end{equation} where each index $j_k$ in the coefficient is repeated $|D_k^{L}|$ times. This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes \sum_{j = 1}^{d} |D_j^{U}|}$, where $r$ is of the form \begin{equation} r = \sum_{j_1, \dots, j_d \in [n]} r_{j_1, \dots, j_1, j_2, \dots, j_2, \dots, j_d, \dots, j_d} \bigotimes_{k = 1}^{d} \left( \bigotimes_{p = 1}^{|D_k^{U}|} e_{j_k} \right) \end{equation} where each index $j_k$ in the coefficient is repeated $|D_k^{U}|$ times, and \begin{equation} \label{transfercoeffSn} r_{j_1, \dots, j_1, j_2, \dots, j_2, \dots, j_d, \dots, j_d} = w_{j_1, \dots, j_1, j_2, \dots, j_2, \dots, j_d, \dots, j_d} \end{equation} These operations are called transfer operations, which is a term that first appeared in \cite{pan22}. \textbf{Step 3: Finally, apply each matrix corresponding to a top row block diagram, one-by-one, starting from the one that corresponds to $T_t$ and ending with the one that corresponds to $T_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $T_i$, for some $i = 1 \rightarrow t$. Then we begin with a vector $x \in (\mathbb{R}^{n})^{\otimes l - \sum_{j=1}^{i} |T_j|}$ that is of the form \begin{equation} x = \sum_{l_{i+1}, \dots, l_{t} \in [n]} \sum_{j_1, \dots, j_d \in [n]} x_{j_1, j_2, \dots, j_d} \bigotimes_{q = i+1}^{t} \left( \bigotimes_{m = 1}^{|T_q|} e_{l_q} \right) \bigotimes_{k = 1}^{d} \left( \bigotimes_{p = 1}^{|D_k^{U}|} e_{j_k} \right) \end{equation} where $x_{j_1, j_2, \dots, j_d}$ is the coefficient $r_{j_1, \dots, j_1, j_2, \dots, j_2, \dots, j_d, \dots, j_d}$ appearing in (\ref{transfercoeffSn}). This will be mapped to the vector $y \in (\mathbb{R}^{n})^{\otimes l - \sum_{j=1}^{i-1} |T_j|}$, where $y$ is of the form \begin{equation} y = \sum_{l_i, l_{i+1}, \dots, l_{t} \in [n]} \sum_{j_1, \dots, j_d \in [n]} x_{j_1, j_2, \dots, j_d} \bigotimes_{q = i}^{t} \left( \bigotimes_{m = 1}^{|T_q|} e_{l_q} \right) \bigotimes_{k = 1}^{d} \left( \bigotimes_{p = 1}^{|D_k^{U}|} e_{j_k} \right) \end{equation} At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes l}$, by (\ref{symmupper}), which is of the form \begin{equation} \sum_{l_1, \dots, l_{t} \in [n]} \sum_{j_1, \dots, j_d \in [n]} x_{j_1, j_2, \dots, j_d} \bigotimes_{q = 1}^{t} \left( \bigotimes_{m = 1}^{|T_q|} e_{l_q} \right) \bigotimes_{k = 1}^{d} \left( \bigotimes_{p = 1}^{|D_k^{U}|} e_{j_k} \right) \end{equation} This is the vector that is returned by \textbf{PlanarMult} for the symmetric group $S_n$. To summarise, the implementation of \textbf{PlanarMult} for the symmetric group $S_n$ takes as input the algorithmically planar $(k,l)$--partition diagram that comes from \textbf{Factor} and expresses it as a tensor product of three types of set partition diagrams. The right-most type is itself a tensor product of set partition diagrams having only vertices in the bottom row, where each diagram represents a single block. These diagrams correspond to tensor contraction operations under the functor $\Theta$. The middle type is a set partition diagram that consists of all of the connected components in the algorithmically planar $(k,l)$--partition diagram between vertices in different rows. 
These diagrams correspond to transfer operations under the functor $\Theta$. The left-most type is itself a tensor product of set partition diagrams having only vertices in the top row, where each diagram represents a single block. These diagrams correspond to indexing operations that perform copies under the functor $\Theta$. In Figure \ref{tensorproddecompSn}, we present the tensor product decomposition of the algorithmically planar $(5,4)$--partition diagram that is shown in Figure \ref{symmfactoringSn}. \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.6}{\tikzfig{algorithm/tensorproddecompSn}} \end{center} \end{tcolorbox} \caption{The decomposition of the algorithmically planar $(5,4)$--partition diagram that appears in Figure \ref{symmfactoringSn} into a tensor product of smaller partition diagrams. These diagrams correspond, from right-to-left, to tensor contraction, transfer, and copying operations under the functor $\Theta$. This tensor product decomposition is used in \textbf{PlanarMult} for the symmetric group $S_n$.} \label{tensorproddecompSn} \end{figure*} To show diagrammatically how the matrix multiplication is performed, we see that the tensor product of the three types of set partition diagrams corresponds to a Kronecker product of smaller matrices under the functor $\Theta$, defined in Theorem \ref{partfunctor}, by the monoidal property of $\Theta$. We would like to apply each smaller matrix to the input vector from right-to-left, diagram-by-diagram. To do this, we first deform the entire tensor product decomposition of set partition diagrams by pulling each individual diagram up one level higher than the previous one, going from right-to-left, and then apply the functor $\Theta$ at each level. The newly inserted strings correspond to an identity matrix, hence only the matrices corresponding to the original tensor product decomposition act on the input vector at each stage. Figure \ref{planarmultSn} gives an example of how the computation takes place at each stage for the tensor product decomposition given in Figure \ref{tensorproddecompSn}, using its equivalent diagram form. \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.4}{\tikzfig{algorithm/planarmultSn}} \end{center} \end{tcolorbox} \caption{We show how matrix multiplication is implemented in \textbf{PlanarMult} for $S_n$ using the tensor product decomposition of the algorithmically planar $(5,4)$--partition diagram given in Figure \ref{tensorproddecompSn} as an example. We perform the matrix multiplication as follows: first, we deform the entire tensor product decomposition diagram by pulling each individual diagram up one level higher than the previous one, going from right-to-left, and then we apply the functor $\Theta$ at each level. Finally, we perform matrix multiplication at each level to obtain the final output vector.} \label{planarmultSn} \end{figure*} \begin{example} Suppose that we wish to perform the multiplication of $D_\pi$ by $v \in (\mathbb{R}^{n})^{\otimes 5}$, where $D_\pi$ corresponds to the $(5,4)$--partition diagram $d_\pi$ given in Figure \ref{symmfactoringSn} under $\Theta$, and $v$ is given by \begin{equation} \sum_{L \in [n]^5} v_Le_L \end{equation} We know that $D_\pi$ is a matrix in $\Hom_{S_n}((\mathbb{R}^{n})^{\otimes 5}, (\mathbb{R}^{n})^{\otimes 4})$. First, we apply the procedure \textbf{Factor}, which returns the three diagrams given in Figure~\ref{symmfactoringSn}. 
The first diagram corresponds to the permutation $(13)(24)$ in $S_5$; hence, the result of \textbf{Permute}$(v, (13)(24))$ is the vector \begin{equation} \sum_{L \in [n]^5} v_{l_1, l_2, l_3, l_4, l_5} \left( e_{l_3} \otimes e_{l_4} \otimes e_{l_1} \otimes e_{l_2} \otimes e_{l_5} \right) \end{equation} Next we apply \textbf{PlanarMult} with the decomposition given in Figure \ref{tensorproddecompSn}. Step 1: Apply the matrices that correspond to the bottom row blocks. We obtain the vector \begin{equation} w = \sum_{l_3, l_4 \in [n]} w_{l_3, l_4} \left( e_{l_3} \otimes e_{l_4} \right) \end{equation} where \begin{equation} w_{l_3, l_4} = \sum_{j \in [n]} v_{j, j, l_3, l_4, j} \end{equation} Step 2: Apply the matrices that correspond to the middle diagram. We obtain the vector \begin{equation} r = \sum_{l_3 \in [n]} \sum_{l_4 \in [n]} r_{l_3, l_3, l_4} \left( e_{l_3} \otimes e_{l_3} \otimes e_{l_4} \right) \end{equation} where \begin{equation} r_{l_3, l_3, l_4} = w_{l_3, l_4} \end{equation} Step 3: Apply the matrices that correspond to the top row blocks. We obtain the vector \begin{equation} z = \sum_{m \in [n]} \sum_{l_3 \in [n]} \sum_{l_4 \in [n]} r_{l_3, l_3, l_4} \left( e_{m} \otimes e_{l_3} \otimes e_{l_3} \otimes e_{l_4} \right) \end{equation} Substituting in, we get that \begin{equation} z = \sum_{m \in [n]} \sum_{l_3 \in [n]} \sum_{l_4 \in [n]} w_{l_3, l_4} \left( e_{m} \otimes e_{l_3} \otimes e_{l_3} \otimes e_{l_4} \right) \end{equation} and hence \begin{equation} z = \sum_{m \in [n]} \sum_{l_3 \in [n]} \sum_{l_4 \in [n]} \sum_{j \in [n]} v_{j, j, l_3, l_4, j} \left( e_{m} \otimes e_{l_3} \otimes e_{l_3} \otimes e_{l_4} \right) \end{equation} Finally, as the third diagram returned from \textbf{Factor} corresponds to the permutation $(14)$ in $S_4$, we perform \textbf{Permute}$(z, (14))$, which returns the vector \begin{equation} z = \sum_{m \in [n]} \sum_{l_3 \in [n]} \sum_{l_4 \in [n]} \sum_{j \in [n]} v_{j, j, l_3, l_4, j} \left( e_{l_4} \otimes e_{l_3} \otimes e_{l_3} \otimes e_{m} \right) \end{equation} This is the vector that is returned by \textbf{MatrixMult}. \end{example} \paragraph{Time Complexity} We look at each step of the implementation that was given above. \textbf{Step 1:} For the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$, we map a vector in $(\mathbb{R}^{n})^{\otimes k - \sum_{j = i+1}^{b} |B_j|}$ to a vector in $(\mathbb{R}^{n})^{\otimes k - \sum_{j = i}^{b} |B_j|}$. Since the matrix corresponds to a bottom row block, for each tuple $M$ of indices in the output coefficient $r_M$, as in (\ref{indicesj}), there are only $n$ terms to multiply (an improvement over $n^{|B_i|}$), and consequently only $n-1$ additions. Hence, in total, there are \begin{equation} \label{Snmultalgo} \sum_{i = 1}^{b} n^{k - \sum_{j = b+1-i}^{b} |B_j|} \cdot n \end{equation} multiplications and \begin{equation} \label{Snaddalgo} \sum_{i = 1}^{b} n^{k - \sum_{j = b+1-i}^{b} |B_j|} \cdot (n-1) \end{equation} additions. Note that, in calculating the overall time complexity for this step, the highest order term is given by the size of $B_b$. Hence, the worst case occurs when $|B_b| = 1$, giving an overall time complexity of $O(n^{k})$. However, assuming that Step 1 must take place, the best case occurs when $|B_b| = k$, that is, there is only one bottom row block of size $k$, giving an overall time complexity of $O(n)$. \textbf{Step 2:} As these are transfer operations, there is no cost since we are simply copying elements from an array. 
\textbf{Step 3:} Here we are copying arrays, hence there is no cost. Consequently, in the worst case, we have reduced the overall time complexity from $O(n^{l+k})$ to $O(n^{k})$. If Step 1 must occur, then in the best case, we have reduced the overall time complexity from $O(n^{l+k})$ to $O(n)$. However, the true best case occurs when Step 1 does not take place at all, that is, when the number of bottom row blocks $b$ is zero, because, in this case, the computation is effectively free! This analysis also shows why, in Definition \ref{algplanardefnSn} of an algorithmically planar $(k,l)$--partition diagram, we have chosen to order the connected components that are solely in the bottom row by their size, in decreasing order from right to left. We want to obtain the best overall time complexity for the multiplication algorithm, which, for the symmetric group, is determined by the number of additions and multiplications that occur in Step 1. It is clear from (\ref{Snmultalgo}) and (\ref{Snaddalgo}) that the time complexity is best when the blocks are ordered in decreasing size order from right to left. \subsubsection{Orthogonal Group $O(n)$} The implementation of Algorithm 1 for the orthogonal group $O(n)$ is similar to the implementation for the symmetric group $S_n$ as a result of the similarity between the monoidal functors that are associated with each group. In fact, the implementation for the orthogonal group will be less involved because the only valid set partition diagrams that can be input into the \textbf{MatrixMult} procedure for this group are Brauer diagrams. \paragraph{Implementation} We provide specific implementations of \textbf{Factor} and \textbf{PlanarMult} for the orthogonal group $O(n)$. \textbf{Factor}: The input is a $(k,l)$--Brauer diagram $d_\beta$. We drag and bend the strings representing the connected components of $d_\beta$ to obtain a factoring of $d_\beta$ into three diagrams whose composition is equivalent to $d_\beta$: a $(k,k)$--Brauer diagram that represents a permutation $\sigma_k$ in the symmetric group $S_k$; another $(k,l)$--Brauer diagram that is algorithmically planar; and a $(l,l)$--Brauer diagram that represents a permutation $\sigma_l$ in the symmetric group $S_l$. To obtain the algorithmically planar $(k,l)$--Brauer diagram, we drag and bend the strings in any way such that \begin{itemize} \item the pairs that are solely in the bottom row of $d_\beta$ are pulled up to be next to each other in the far right hand side of the bottom row of the algorithmically planar $(k,l)$--Brauer diagram, \item the pairs that are solely in the top row of $d_\beta$ are pulled down to be next to each other in the far left hand side of the top row of the algorithmically planar $(k,l)$--Brauer diagram, and \item the pairs between vertices in different rows of $d_\beta$ are bent to be in between the other vertices of the algorithmically planar $(k,l)$--Brauer diagram such that no two pairings in the algorithmically planar diagram intersect each other. \end{itemize} \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.35}{\tikzfig{algorithm/symmfactoringOn}} \end{center} \end{tcolorbox} \caption{ We use the string-like aspect of $(k,l)$--Brauer diagrams to \textbf{Factor} them as a composition of a permutation in $S_k$, an algorithmically planar $(k,l)$--Brauer diagram, and a permutation in $S_l$. Here $k = l = 5$. } \label{symmfactoringOn} \end{figure*} We give an example of this procedure in Figure \ref{symmfactoringOn}. 
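The two permutation diagrams produced by \textbf{Factor} are handled by \textbf{Permute}, which on the coefficient array of a vector is nothing more than a reordering of tensor axes. The sketch below (again NumPy, for illustration only) uses the convention, inferred from the worked examples in this section, that slot $i$ of the output carries the tensor factor that sat in slot $\sigma(i)$ of the input.
\begin{verbatim}
import numpy as np

def permute_factors(v: np.ndarray, sigma) -> np.ndarray:
    """Permute(v, sigma) on the coefficient array: slot i of the output
    takes the factor from slot sigma[i] of the input (sigma 0-indexed)."""
    return np.transpose(v, axes=list(sigma))

# e.g. the permutation (13)(24) in S_5, i.e. [2, 3, 0, 1, 4] when 0-indexed:
n = 3
v = np.random.rand(n, n, n, n, n)
v_perm = permute_factors(v, [2, 3, 0, 1, 4])
\end{verbatim}
In particular, applying the permutations costs no arithmetic at all; only \textbf{PlanarMult} performs additions and multiplications.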
\textbf{PlanarMult}: We take as input the planar $(k,l)$--Brauer diagram $d_\beta$ that is the output of \textbf{Factor}, and a vector $v \in (\mathbb{R}^{n})^{\otimes k}$ that is the output of \textbf{Permute}, as per Algorithm 1. Let $t$ be the number of pairs that are solely in the top row of $d_\beta$, let $d$ be the number of pairs that are in different rows of $d_\beta$, and let $b$ be the number of pairs that are solely in the bottom row of $d_\beta$. Then it is clear that \begin{equation} 2t + d = l \quad \text{and} \quad 2b + d = k \end{equation} and so \begin{equation} 2t + 2d + 2b = l+k \end{equation} Given how \textbf{Factor} constructs the planar $(k,l)$--Brauer diagram $d_\beta$, the Brauer partition $\beta$ corresponding to $d_\beta$ will be of the form \begin{equation} \label{brauerpartfactor} \left(\bigcup_{i = 1}^{t} T_i \right) \bigcup \left(\bigcup_{i = 1}^{d} D_i \right) \bigcup \left(\bigcup_{i = 1}^{b} B_i \right) \end{equation} where we have used $T_i$ to refer to a top row pair, $D_i$ to refer to a different row pair, and $B_i$ to refer to a bottom row pair. In particular, we have that \begin{itemize} \item $T_i = \left\{2i-1, 2i\right\}$ for all $i = 1 \rightarrow t$ \item $D_i = \left\{l+i-d, l+i\right\}$ for all $i = 1 \rightarrow d$, and \item $B_i = \left\{l+k-2b+2i-1, l+k-2b+2i\right\}$ for all $i = 1 \rightarrow b$. \end{itemize} Next, we take $d_\beta$ and express it as a tensor product of three types of Brauer diagrams. The right-most type is itself a tensor product of diagrams corresponding to the $B_i$; the middle type is a single diagram corresponding to $\bigcup_{i = 1}^{d} D_i$; and the left-most type is itself a tensor product of diagrams corresponding to the $T_i$. We apply the monoidal functor $\Phi$ to this tensor product decomposition of diagrams, which returns a Kronecker product of matrices. We perform the matrix multiplication by applying the matrices ``right-to-left, diagram-by-diagram", as follows. \textbf{Step 1: Apply each matrix corresponding to a bottom row pair diagram, one-by-one, starting from the one that corresponds to $B_b$ and ending with the one that corresponds to $B_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$. The input will be a vector $w \in (\mathbb{R}^{n})^{\otimes k - 2(b-i)}$. We can express $w$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} w = \sum_{L \in [n]^{k-2(b-i)}} w_Le_L \end{equation} This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes k - 2(b-i) - 2}$, where $r$ is of the form \begin{equation} r = \sum_{M \in [n]^{k-2(b-i)-2}} r_Me_M \end{equation} and \begin{equation} \label{rMcoeff} r_M = \sum_{j \in [n]} w_{M,j,j} \end{equation} At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes k - 2b}$. As before, the matrices corresponding to bottom row pairs are merely performing tensor contractions. \textbf{Step 2: Now apply the matrix corresponding to the middle diagram, that is, to the set $\bigcup_{i = 1}^{d} D_i$.} The input will be a vector $w \in (\mathbb{R}^{n})^{\otimes k - 2b}$. Expressing $w$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} w = \sum_{L \in [n]^{k-2b}} w_Le_L \end{equation} the multiplication of the matrix merely returns $w$ itself! Hence the transfer operations for the orthogonal group are simply the identity map. 
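In array terms, the Step 1 map in (\ref{rMcoeff}) is a trace over the last two tensor axes of the coefficient array, and Step 2 requires no work at all. A minimal sketch (NumPy, for illustration only):
\begin{verbatim}
import numpy as np

def contract_bottom_pair(w: np.ndarray) -> np.ndarray:
    """r[M] = sum_j w[M, j, j]: contract the last two indices against
    each other, as for a single bottom row pair."""
    return np.trace(w, axis1=-2, axis2=-1)
    # equivalently: np.einsum('...jj->...', w)
\end{verbatim}
Applying this $b$ times removes all of the bottom row pairs, after which the transfer step for the orthogonal group is the identity.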
\textbf{Step 3: Finally, apply each matrix corresponding to a top row pair diagram, one-by-one, starting from the one that corresponds to $T_t$ and ending with the one that corresponds to $T_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $T_i$, for some $i = 1 \rightarrow t$. Then we begin with a vector $w \in (\mathbb{R}^{n})^{\otimes k - 2b + 2(t - i)}$ that is of the form \begin{equation} w = \sum_{J \in [n]^{t-i}}\sum_{L \in [n]^{k-2b}} v_L \left(e_{j_1} \otimes e_{j_1} \otimes \dots \otimes e_{j_{t-i}} \otimes e_{j_{t-i}} \otimes e_L\right) \end{equation} where $v_L$ is the coefficient of $e_L$ appearing in the vector at the end of Step 2. This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes k - 2b+2(t-i) + 2}$, where $r$ is of the form \begin{equation} r = \sum_{m \in [n]}\sum_{J \in [n]^{t-i}}\sum_{L \in [n]^{k-2b}} v_L \left(e_m \otimes e_m \otimes e_{j_1} \otimes e_{j_1} \otimes \dots \otimes e_{j_{t-i}} \otimes e_{j_{t-i}} \otimes e_L\right) \end{equation} At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes l}$, since $k - 2b + 2t = l$, which is of the form \begin{equation} \sum_{J \in [n]^t}\sum_{L \in [n]^{k-2b}} v_L \left(e_{j_1} \otimes e_{j_1} \otimes \dots \otimes e_{j_t} \otimes e_{j_t} \otimes e_L\right) \end{equation} This is the vector that is returned by \textbf{PlanarMult} for the orthogonal group $O(n)$. To summarise, the implementation of \textbf{PlanarMult} for the orthogonal group $O(n)$ takes as input the algorithmically planar $(k,l)$--Brauer diagram that comes from \textbf{Factor} and expresses it as a tensor product of three types of Brauer diagrams. The right-most type is itself a tensor product of Brauer diagrams, where each diagram has only two connected vertices in the bottom row. These diagrams correspond to tensor contraction operations under the functor $\Phi$ given in Theorem \ref{brauerO(n)functor}. The middle type is a Brauer diagram that consists of all of the pairs in the planar $(k,l)$--Brauer diagram between vertices in different rows. This diagram corresponds to the identity under the functor $\Phi$. The left-most type is a tensor product of Brauer diagrams having only two connected vertices in the top row. These diagrams correspond to indexing operations that perform copies under the functor $\Phi$. \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.6}{\tikzfig{algorithm/tensorproddecompOn}} \end{center} \end{tcolorbox} \caption{ The tensor product decomposition of the planar $(5,5)$--Brauer diagram that appears in Figure \ref{symmfactoringOn}. } \label{tensorproddecompOn} \end{figure*} In Figure \ref{tensorproddecompOn}, we present the tensor product decomposition of the algorithmically planar $(5,5)$--Brauer diagram that is shown in Figure \ref{symmfactoringOn}. The diagrammatic representation of the matrix multiplication step is very similar to the one for the symmetric group, in that to obtain the matrices we perform the same deformation of the tensor product decomposition of diagrams before applying the functor $\Phi$ at each level. We give an example in Figure \ref{planarmultOn} of how the computation takes place at each stage for the tensor product decomposition given in Figure \ref{tensorproddecompOn}, using its equivalent diagram form.
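Putting the three steps together, and reading them alongside the example that follows, \textbf{PlanarMult} for $O(n)$ can be sketched as below (NumPy, for illustration only): contract the $b$ bottom row pairs, do nothing for the different row pairs, then prepend the $t$ top row pairs by tensoring with $\sum_{m \in [n]} e_m \otimes e_m$, whose coefficient array is the identity matrix.
\begin{verbatim}
import numpy as np

def planar_mult_On(v: np.ndarray, b: int, t: int) -> np.ndarray:
    """Sketch of PlanarMult for O(n) on the coefficient array v of a
    vector in (R^n)^{tensor k}."""
    n = v.shape[0]
    w = v
    for _ in range(b):                 # Step 1: b tensor contractions
        w = np.trace(w, axis1=-2, axis2=-1)
    eye = np.eye(n)                    # sum_m e_m tensor e_m
    for _ in range(t):                 # Step 3: t copy operations
        w = np.multiply.outer(eye, w)  # w[m, m', ...] = delta_{m m'} * w[...]
    return w                           # shape (n,) * (2t + k - 2b) = (n,) * l
\end{verbatim}
No intermediate larger than the input or output vectors is created, in contrast with forming the dense matrix $E_\beta$ and multiplying by it.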
\begin{example} \label{orthogAlgoEx} Suppose that we wish to perform the multiplication of $E_\beta$ by $v \in (\mathbb{R}^{n})^{\otimes 5}$, where $E_\beta$ corresponds to the $(5,5)$--Brauer diagram $d_\beta$ given in Figure \ref{symmfactoringOn} under $\Phi$, and $v$ is given by \begin{equation} \sum_{L \in [n]^5} v_Le_L \end{equation} We know that $E_\beta$ is a matrix in $\Hom_{O(n)}((\mathbb{R}^{n})^{\otimes 5}, (\mathbb{R}^{n})^{\otimes 5})$. First, we apply the procedure \textbf{Factor}, which returns the three diagrams given in Figure~\ref{symmfactoringOn}. The first diagram corresponds to the permutation $(1524)$ in $S_5$, hence, the result of \textbf{Permute}$(v, (1524))$ is the vector \begin{equation} \sum_{L \in [n]^5} v_{l_1, l_2, l_3, l_4, l_5} \left( e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \otimes e_{l_1} \otimes e_{l_2} \right) \end{equation} Now we apply \textbf{PlanarMult} with the decomposition given in Figure \ref{tensorproddecompOn}. Step 1: Apply the matrices that correspond to the bottom row pairs. We obtain the vector \begin{equation} w = \sum_{l_5, l_4, l_3 \in [n]} w_{l_5, l_4, l_3} \left( e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} where \begin{equation} w_{l_5, l_4, l_3} = \sum_{j \in [n]} v_{j, j, l_3, l_4, l_5} \end{equation} Step 2: Apply the matrices that correspond to the middle diagram. As the transfer operations are the identity mapping, we get $w$. Step 3: Apply the matrices that correspond to the top row. We obtain the vector \begin{equation} z = \sum_{m \in [n]} \sum_{l_5, l_4, l_3 \in [n]} w_{l_5, l_4, l_3} \left( e_{m} \otimes e_{m} \otimes e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} Substituting in, we get that \begin{equation} z = \sum_{m \in [n]} \sum_{l_5, l_4, l_3 \in [n]} \sum_{j \in [n]} v_{j, j, l_3, l_4, l_5} \left( e_{m} \otimes e_{m} \otimes e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} Finally, as the third diagram returned from \textbf{Factor} corresponds to the permutation $(1342)$ in $S_5$, we perform \textbf{Permute}$(z, (1342))$, which returns the vector \begin{equation} \sum_{m \in [n]} \sum_{l_5, l_4, l_3 \in [n]} \sum_{j \in [n]} v_{j, j, l_3, l_4, l_5} \left( e_{l_5} \otimes e_{m} \otimes e_{l_4} \otimes e_{m} \otimes e_{l_3} \right) \end{equation} This is the vector that is returned by \textbf{MatrixMult}. \end{example} \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.4}{\tikzfig{algorithm/planarmultOn}} \end{center} \end{tcolorbox} \caption{We show how matrix multiplication is implemented in \textbf{PlanarMult} for $O(n), Sp(n)$ and $SO(n)$ using the tensor product decomposition of the planar $(5,5)$--Brauer diagram given in Figure \ref{tensorproddecompOn} as an example. Effectively, we perform the matrix multiplication by applying the matrices ``right-to-left, diagram-by-diagram". In reality, we perform the matrix multiplication as follows: first, we deform the entire tensor product decomposition diagram by pulling each individual diagram up one level higher than the previous one, going from right--to-left, and then we apply the functor that corresponds to the group at each level. Finally, we perform matrix multiplication at each level to obtain the final output vector.} \label{planarmultOn} \end{figure*} \paragraph{Time Complexity} We look at each step of the implementation that was given above. 
\textbf{Step 1:} For the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$, we map a vector in $(\mathbb{R}^{n})^{\otimes k - 2(b-i)}$ to a vector in $(\mathbb{R}^{n})^{\otimes k - 2(b-i) - 2}$. Since the matrix corresponds to a bottom row pair that is connected, for each tuple $M$ of indices in the output coefficient $r_M$, as in (\ref{rMcoeff}), there are only $n$ terms to multiply (an improvement over $n^2$), and consequently only $n-1$ additions. Hence, in total, there are \begin{equation} \sum_{i = 1}^{b} n^{k - 2(b-i) - 2} \cdot n \end{equation} multiplications and \begin{equation} \sum_{i = 1}^{b} n^{k - 2(b-i) -2} \cdot (n-1) \end{equation} additions, for an overall time complexity of $O(n^{k-1})$. \textbf{Step 2:} This corresponds to the identity transformation, hence there is no cost, as we do not need to perform this operation. \textbf{Step 3:} Here we are copying arrays, hence there is no cost. Consequently, we have reduced the overall time complexity from $O(n^{l+k})$ to $O(n^{k-1})$. \subsubsection{Symplectic Group $Sp(n)$} The implementation of Algorithm 1 for the symplectic group $Sp(n)$ is related to the implementation for the orthogonal group $O(n)$ because the only valid set partition diagrams that can be input into the procedure in each case are Brauer diagrams. The difference in the implementations comes from the difference in the monoidal functor that is associated with each group. \paragraph{Implementation} The implementation of the \textbf{Factor} procedure is the same as for the orthogonal group. \textbf{PlanarMult}: Again, we take as input the planar $(k,l)$--Brauer diagram $d_\beta$ that is the output of \textbf{Factor}, and a vector $v \in (\mathbb{R}^{n})^{\otimes k}$ that is the output of \textbf{Permute}, as per Algorithm~1. The tensor product decomposition of $d_\beta$ is the same as for the orthogonal group $O(n)$; in particular, the Brauer partition $\beta$ corresponding to $d_\beta$ is of the form (\ref{brauerpartfactor}). The important difference here is that we apply the monoidal functor $X$, instead of $\Phi$, to this tensor product decomposition of diagrams, which returns a different Kronecker product of matrices. Again, we perform the same steps for the matrix multiplication, but this time the vectors returned at each stage are different. \textbf{Step 1: Apply each matrix corresponding to a bottom row pair diagram, one-by-one, starting from the one that corresponds to $B_b$ and ending with the one that corresponds to $B_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$. The input will be a vector $w \in (\mathbb{R}^{n})^{\otimes k - 2(b-i)}$. We can express $w$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} w = \sum_{L \in [n]^{k-2(b-i)}} w_Le_L \end{equation} This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes k - 2(b-i) - 2}$ where $r$ is of the form \begin{equation} r = \sum_{M \in [n]^{k-2(b-i)-2}} r_Me_M \end{equation} where \begin{equation} r_M = \sum_{j_{k-2(b-i)-1}, j_{k-2(b-i)} \in [n]} \epsilon_{j_{k-2(b-i)-1},j_{k-2(b-i)}} w_{M,j_{k-2(b-i)-1},j_{k-2(b-i)}} \end{equation} Recall that $\epsilon_{j_{k-2(b-i)-1},j_{k-2(b-i)}}$ was defined in (\ref{epsilondef1}) and (\ref{epsilondef2}). At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes k - 2b}$. As before, these matrices corresponding to bottom row pairs are performing tensor contractions. 
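The only change from the orthogonal case in Step 1 is that the last two indices are now contracted against the skew-symmetric form $\epsilon$ rather than against each other. A sketch (NumPy, for illustration; the matrix below is one standard choice of skew form, whereas the $\epsilon$ used in the text is the one fixed in (\ref{epsilondef1}) and (\ref{epsilondef2})):
\begin{verbatim}
import numpy as np

def symplectic_form(n: int) -> np.ndarray:
    """One common convention for a skew-symmetric form on R^n, n even."""
    m = n // 2
    eps = np.zeros((n, n))
    eps[:m, m:] = np.eye(m)
    eps[m:, :m] = -np.eye(m)
    return eps

def contract_bottom_pair_Sp(w: np.ndarray, eps: np.ndarray) -> np.ndarray:
    """r[M] = sum_{a,b} eps[a, b] * w[M, a, b] for one bottom row pair."""
    return np.einsum('...ab,ab->...', w, eps)
\end{verbatim}
Since $\epsilon$ has only $n$ nonzero entries, the operation count matches the orthogonal case, consistent with the time complexity discussion at the end of this subsection.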
\textbf{Step 2: Now apply the matrix corresponding to the middle diagram, that is, to the set $\bigcup_{i = 1}^{d} D_i$.} This will be exactly the same as for the orthogonal group, by the definition of $\gamma_{r_p, u_p}$ given in (\ref{gammarpup}). \textbf{Step 3: Finally, apply each matrix corresponding to a top row pair diagram, one-by-one, starting from the one that corresponds to $T_t$ and ending with the one that corresponds to $T_1$.} Suppose that we are performing the part of the matrix multiplication that corresponds to $T_i$, for some $i = 1 \rightarrow t$. Then we begin with a vector $w \in (\mathbb{R}^{n})^{\otimes k - 2b + 2(t - i)}$ that is of the form \begin{equation} w = \sum_{J \in [n]^{2(t-i)}}\sum_{L \in [n]^{k-2b}} \epsilon_J v_L \left( e_J \otimes e_L\right) \end{equation} where \begin{equation} \epsilon_J \coloneqq \epsilon_{j_1, j_2} \dots \epsilon_{j_{2(t-i)-1}, j_{2(t-i)}} \end{equation} and where $v_L$ is the coefficient of $e_L$ appearing in the vector at the end of Step 2. This will be mapped to the vector $r \in (\mathbb{R}^{n})^{\otimes k - 2b+2(t-i) + 2}$, where $r$ is of the form \begin{equation} r = \sum_{M \in [n]^2} \sum_{J \in [n]^{2(t-i)}}\sum_{L \in [n]^{k-2b}} \epsilon_{M,J} v_L \left( e_M \otimes e_J \otimes e_L\right) \end{equation} where \begin{equation} \epsilon_{M,J} \coloneqq \epsilon_{m_1, m_2} \epsilon_{j_1, j_2} \dots \epsilon_{j_{2(t-i)-1}, j_{2(t-i)}} \end{equation} At the end of this process, we obtain a vector in $(\mathbb{R}^{n})^{\otimes l}$, since $k - 2b + 2t = l$, which is of the form \begin{equation} \sum_{J \in [n]^{2t}}\sum_{L \in [n]^{k-2b}} \epsilon_{J} v_L \left( e_J \otimes e_L\right) \end{equation} where $\epsilon_{J}$ is redefined to be \begin{equation} \epsilon_{j_1, j_2} \dots \epsilon_{j_{2t-1}, j_{2t}} \end{equation} This is the vector that is returned by \textbf{PlanarMult} for the symplectic group $Sp(n)$. \begin{example} Suppose that we wish to perform the multiplication of $F_\beta$ by $v \in (\mathbb{R}^{n})^{\otimes 5}$, for the same $(5,5)$--Brauer diagram $d_\beta$ given in Figure \ref{symmfactoringOn}, and $v$ is given by \begin{equation} \sum_{L \in [n]^5} v_Le_L \end{equation} We know that $F_\beta$ is a matrix in $\Hom_{Sp(n)}((\mathbb{R}^{n})^{\otimes 5}, (\mathbb{R}^{n})^{\otimes 5})$. The \textbf{Factor} and \textbf{Permute} steps are the same as for Example \ref{orthogAlgoEx}, hence, prior to the \textbf{PlanarMult} step, we have the vector \begin{equation} \sum_{L \in [n]^5} v_{l_1, l_2, l_3, l_4, l_5} \left( e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \otimes e_{l_1} \otimes e_{l_2} \right) \end{equation} We now apply \textbf{PlanarMult} with the decomposition given in Figure \ref{tensorproddecompOn}. Step 1: Apply the matrices that correspond to the bottom row pairs. We obtain the vector \begin{equation} w = \sum_{l_5, l_4, l_3 \in [n]} w_{l_5, l_4, l_3} \left( e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} where \begin{equation} w_{l_5, l_4, l_3} = \sum_{j_1, j_2 \in [n]} \epsilon_{j_1, j_2} v_{j_1, j_2, l_3, l_4, l_5} \end{equation} Step 2: Apply the matrices that correspond to the middle diagram. As the transfer operations are the identity mapping, we get $w$. Step 3: Apply the matrices that correspond to the top row. 
We obtain the vector \begin{equation} z = \sum_{m_1, m_2 \in [n]} \sum_{l_5, l_4, l_3 \in [n]} \epsilon_{m_1, m_2} w_{l_5, l_4, l_3} \left( e_{m_1} \otimes e_{m_2} \otimes e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} Substituting in, we get that \begin{equation} z = \sum_{m_1, m_2 \in [n]} \sum_{l_5, l_4, l_3 \in [n]} \sum_{j_1, j_2 \in [n]} \epsilon_{m_1, m_2} \epsilon_{j_1, j_2} v_{j_1, j_2, l_3, l_4, l_5} \left( e_{m_1} \otimes e_{m_2} \otimes e_{l_5} \otimes e_{l_4} \otimes e_{l_3} \right) \end{equation} Finally, as the third diagram returned from \textbf{Factor} corresponds to the permutation $(1342)$ in $S_5$, we perform \textbf{Permute}$(z, (1342))$, which returns the vector \begin{equation} \sum_{m_1, m_2 \in [n]} \sum_{l_5, l_4, l_3 \in [n]} \sum_{j_1, j_2 \in [n]} \epsilon_{m_1, m_2} \epsilon_{j_1, j_2} v_{j_1, j_2, l_3, l_4, l_5} \left( e_{l_5} \otimes e_{m_1} \otimes e_{l_4} \otimes e_{m_2} \otimes e_{l_3} \right) \end{equation} This is the vector that is returned by \textbf{MatrixMult}. \end{example} \paragraph{Time Complexity} The time complexity is exactly the same as for the orthogonal group. \subsubsection{Special Orthogonal Group $SO(n)$} We can perform matrix multiplication between either $E_\beta$ or $H_\alpha \in \Hom_{SO(n)}((\mathbb{R}^{n})^{\otimes k}, (\mathbb{R}^{n})^{\otimes l})$ and $v \in (\mathbb{R}^{n})^{\otimes k}$, where $d_\beta$ is a $(k,l)$--Brauer diagram, and $d_\alpha$ is an $(l+k) \backslash n$--diagram. Given that the implementation for the $E_\beta$ case is the same as for the orthogonal group, by Theorem~\ref{spanningsetSO(n)}, we only consider the $H_\alpha$ case below. \paragraph{Implementation} \hphantom{x} \textbf{Factor}: The input is an $(l+k) \backslash n$--diagram $d_\alpha$. As before, we drag and bend the strings representing the connected components of $d_\alpha$ to obtain a factoring of $d_\alpha$ into the three diagrams, except this time the middle diagram will be an algorithmically planar $(l+k) \backslash n$--diagram. To obtain the algorithmically planar $(l+k) \backslash n$--diagram we want to drag and bend the strings in any way such that \begin{itemize} \item the free vertices in the top row of $d_\alpha$ are pulled down to the far right of the top row of the algorithmically planar $(l+k) \backslash n$--diagram, maintaining their order, \item the free vertices in the bottom row of $d_\alpha$ are pulled up to the far right of the bottom row of the algorithmically planar $(l+k) \backslash n$--diagram, maintaining their order, \item the pairs in the bottom row of $d_\alpha$ are pulled up to be next to each other in the right hand side of the bottom row of the algorithmically planar $(l+k) \backslash n$--diagram, but next to and to the left of the free vertices in the bottom row of the algorithmically planar $(l+k) \backslash n$--diagram, \item the pairs in the top row of $d_\alpha$ are pulled down to be next to each other in the far left hand side of the top row of the algorithmically planar $(l+k) \backslash n$--diagram, \item the pairs connecting vertices in different rows of $d_\alpha$ are ordered in the algorithmically planar $(l+k) \backslash n$--diagram in between the other vertices such that no two pairings in the algorithmically planar diagram intersect each other. \end{itemize} We give an example of this procedure in Figure \ref{symmfactoringSOn}. 
\begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.35}{\tikzfig{algorithm/symmfactoringSOn}} \end{center} \end{tcolorbox} \caption{We use the string-like aspect of $(l+k) \backslash n$--diagrams to \textbf{Factor} them as a composition of a permutation in $S_k$, an algorithmically planar $(l+k) \backslash n$--diagram, and a permutation in $S_l$. Here, $k = 5$ and $l = 4$. } \label{symmfactoringSOn} \end{figure*} \textbf{PlanarMult}: We take as input the algorithmically planar $(l+k) \backslash n$--diagram $d_\alpha$ that is the output of \textbf{Factor}, and a vector $v \in (\mathbb{R}^{n})^{\otimes k}$ that is the output of \textbf{Permute}, as per Algorithm 1. We need new subsets and notation to consider the impact of the free vertices in the $(l+k) \backslash n$--diagram $d_\alpha$ on the implementation for \textbf{PlanarMult}. As before, let $t$ be the number of pairs that are solely in the top row of $d_\alpha$, let $d$ be the number of pairs that are in different rows of $d_\alpha$, and let $b$ be the number of pairs that are solely in the bottom row of $d_\alpha$. Now, let $s$ be the number of free vertices in the top row of $d_\alpha$. Hence there are $n-s$ free vertices in the bottom row of $d_\alpha$. Then it is clear that \begin{equation} \label{SO(n)restrictions} 2t + d + s = l \quad \text{and} \quad 2b + d + n - s = k \end{equation} and so \begin{equation} 2t + 2d + 2b + n = l+k \end{equation} Given how \textbf{Factor} constructs the algorithmically planar $(l+k) \backslash n$--diagram $d_\alpha$, the set partition $\alpha$ corresponding to $d_\alpha$ will be of the form \begin{equation} \left(\bigcup_{i = 1}^{t} T_i \right) \bigcup \left(\bigcup_{i = 1}^{d} D_i \right) \bigcup \left(\bigcup_{i = 1}^{s} TF_i \right) \bigcup \left(\bigcup_{i = 1}^{b} B_i \right) \bigcup \left(\bigcup_{i = 1}^{n-s} BF_i \right) \end{equation} where we have used $T_i$ to refer to a top row pair, $D_i$ to refer to a different row pair, $TF_i$ to refer to a top row free vertex, $B_i$ to refer to a bottom row pair, and $BF_i$ to refer to a bottom row free vertex. In particular, we have that \begin{itemize} \item $T_i = \left\{2i-1, 2i\right\}$ for all $i = 1 \rightarrow t$, \item $D_i = \left\{l+i-d-s, l+i\right\}$ for all $i = 1 \rightarrow d$, \item $TF_i = \left\{l-s+i\right\}$ for all $i = 1 \rightarrow s$, \item $B_i = \left\{l+d+2i-1, l+d+2i\right\}$ for all $i = 1 \rightarrow b$, and \item $BF_i = \left\{l+d+2b+i\right\}$ for all $i = 1 \rightarrow n-s$. \end{itemize} We take $d_\alpha$ and express it as a tensor product of four types of set partition diagrams. The right-most type is a diagram consisting of all the free vertices. Consequently, it corresponds to $ \left(\bigcup_{i = 1}^{s} TF_i \right) \bigcup \left(\bigcup_{i = 1}^{n-s} BF_i \right) $. The type to its left is itself a tensor product of diagrams corresponding to the $B_i$; the next type is a single diagram corresponding to $\left(\bigcup_{i = 1}^{d} D_i\right)$; and, finally, the left-most type is itself a tensor product of diagrams corresponding to the $T_i$. We now apply the monoidal functor $\Psi$ to this tensor product decomposition of diagrams, which returns a Kronecker product of matrices. We perform the matrix multiplication by applying the matrices ``right-to-left, diagram-by-diagram", as follows. 
\textbf{Step 1: Apply the matrix that corresponds to the free vertices, that is, to $ \left(\bigcup_{i = 1}^{s} TF_i \right) \bigcup \left(\bigcup_{i = 1}^{n-s} BF_i \right) $.} The input will be a vector $v \in (\mathbb{R}^{n})^{\otimes k}$. As $k = 2b + d + (n-s)$, we can express $v$ in the standard basis of $\mathbb{R}^{n}$ as \begin{equation} \label{SOnStep1,1} v = \sum_{J \in [n]^{2b + d}} \sum_{B \in [n]^{n-s}} v_{J,B}e_{J,B} \end{equation} This will be mapped to the vector $w \in (\mathbb{R}^{n})^{\otimes 2b+d+s}$, where $w$ is of the form \begin{equation} \label{SOnStep1,2} w = \sum_{J \in [n]^{2b + d}} \sum_{T \in [n]^{s}} w_{J,T}e_{J,T} \end{equation} where \begin{equation} \label{SOnStep1,3} w_{J,T} = \sum_{B \in [n]^{n-s}} v_{J,B} \det(e_{T,B}) \end{equation} \textbf{Step 2: Apply each matrix corresponding to a bottom row pair diagram, one-by-one, starting from the one that corresponds to $B_b$ and ending with the one that corresponds to $B_1$.} This is exactly the same as Step 1 for the orthogonal group. We obtain a vector $y \in (\mathbb{R}^{n})^{\otimes d+s}$ \textbf{Step 3: Now apply the matrix corresponding to the middle diagram, that is, to the set $\left(\bigcup_{i = 1}^{d} D_i\right)$.} This is exactly the same as Step 2 for the orthogonal group. We obtain a vector $r \coloneqq y \in (\mathbb{R}^{n})^{\otimes d+s}$ \textbf{Step 4: Finally, apply each matrix corresponding to a top row pair diagram, one-by-one, starting from the one that corresponds to $T_t$ and ending with the one that corresponds to $T_1$.} This is exactly the same as Step 3 for the orthogonal group. We obtain a vector $z \in (\mathbb{R}^{n})^{\otimes 2t+d+s}$. As $2t+d+s = l$ by (\ref{SO(n)restrictions}), we have that $z \in (\mathbb{R}^{n})^{\otimes l}$, as required. To summarise, the implementation of \textbf{PlanarMult} for $SO(n)$ differs slightly from the implementation of \textbf{PlanarMult} for the groups that have come before. In the case where the spanning set element corresponds to a Brauer diagram, the implementation is the same as for the orthogonal group $O(n)$. Otherwise, the implementation takes as input the algorithmically planar $(l+k) \backslash n$--diagram that comes from \textbf{Factor}, but expresses it as a tensor product of \textit{four} types of set partition diagrams. The right-most is a diagram consisting of all of the free vertices. This corresponds to the determinant map that is given in (\ref{detmapdefn}). The three other types of diagrams that appear in the decomposition and their corresponding operations are exactly the same as for the orthogonal group $O(n)$. In Figure \ref{tensorproddecompSOn}, we present an example of the tensor product decomposition for the algorithmically planar $(4+5) \backslash 3$--diagram given in Figure \ref{symmfactoringSOn}. The diagrammatic representation of the matrix multiplication step is also very similar to the orthogonal group, in that to obtain the matrices we perform the same deformation of the tensor product decomposition of diagrams before applying the functor $\Psi$, defined in Theorem \ref{brauerSO(n)functor}, at each level. Note, in particular, that we need to attach identity strings to the free vertices that appear in the top row. We give an example in Figure \ref{planarmultSOn} of how the computation takes place at each stage for the tensor product decomposition given in Figure \ref{tensorproddecompSOn}, using its equivalent diagram form. 
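Concretely, $\det(e_{T,B})$ in (\ref{SOnStep1,3}) is the Levi-Civita symbol evaluated at the indices $(T,B)$, so Step 1 is a contraction of the free-vertex indices against the Levi-Civita tensor. The sketch below (NumPy and itertools, for illustration only) is written for the $(4+5) \backslash 3$ setting of the example that follows, with $n = 3$ and $s = 1$.
\begin{verbatim}
import itertools
import numpy as np

def levi_civita(n: int) -> np.ndarray:
    """lc[i_1, ..., i_n] = det of the matrix with columns e_{i_1}, ..., e_{i_n}."""
    lc = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        inversions = sum(1 for a in range(n) for c in range(a + 1, n)
                         if perm[a] > perm[c])
        lc[perm] = -1.0 if inversions % 2 else 1.0
    return lc

# Step 1 for n = 3, s = 1: the two free bottom row indices of v are traded
# for one free top row index t, via
#   w[j1, j2, j3, t] = sum_{b1, b2} v[j1, j2, j3, b1, b2] * lc[t, b1, b2].
n = 3
v = np.random.rand(n, n, n, n, n)
w = np.einsum('jklbc,tbc->jklt', v, levi_civita(n))
\end{verbatim}
Only the tuples in which $t, b_1, b_2$ are pairwise distinct contribute, which is the observation behind the count of valid tuples in the time complexity analysis below.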
\begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.6}{\tikzfig{algorithm/tensorproddecompSOn}} \end{center} \end{tcolorbox} \caption{ The tensor product decomposition of the algorithmically planar $(4+5) \backslash 3$--diagram that appears in Figure \ref{symmfactoringSOn}.} \label{tensorproddecompSOn} \end{figure*} \begin{example} Suppose that we wish to perform the multiplication of $H_\alpha$ by $v \in (\mathbb{R}^{3})^{\otimes 5}$, where $H_\alpha$ corresponds to the $(4+5) \backslash 3$--diagram $d_\alpha$ given in Figure \ref{symmfactoringSOn} under $\Psi$, and $v$ is given by \begin{equation} \sum_{L \in [3]^5} v_Le_L \end{equation} when expressed in the standard basis of $\mathbb{R}^{3}$. We know that $H_\alpha$ is a matrix in $\Hom_{SO(3)}((\mathbb{R}^{3})^{\otimes 5}, (\mathbb{R}^{3})^{\otimes 4})$. First, we apply the procedure \textbf{Factor}, which returns the three diagrams given in Figure~\ref{symmfactoringSOn}. The first diagram corresponds to the permutation $(13524)$ in $S_5$, hence, the result of \textbf{Permute}$(v, (13524))$ is the vector \begin{equation} \sum_{L \in [3]^5} v_{l_1, l_2, l_3, l_4, l_5} \left( e_{l_3} \otimes e_{l_4} \otimes e_{l_5} \otimes e_{l_1} \otimes e_{l_2} \right) \end{equation} We now apply \textbf{PlanarMult} with the decomposition given in Figure \ref{tensorproddecompSOn}. Step 1: Apply the matrices that correspond to the free vertices. We obtain the vector \begin{equation} w = \sum_{l_3, l_4, l_5, t_1 \in [3]} w_{l_3, l_4, l_5, t_1} \left( e_{l_3} \otimes e_{l_4} \otimes e_{l_5} \otimes e_{t_1} \right) \end{equation} where \begin{equation} w_{l_3, l_4, l_5, t_1} = \sum_{l_1, l_2 \in [3]} v_{l_1, l_2, l_3, l_4, l_5} \det(e_{t_1} \otimes e_{l_1} \otimes e_{l_2}) \end{equation} Step 2: Apply the matrices that correspond to the bottom row pairs. We obtain the vector \begin{equation} y = \sum_{l_3, t_1 \in [3]} y_{l_3, t_1} \left( e_{l_3} \otimes e_{t_1} \right) \end{equation} where \begin{equation} y_{l_3, t_1} = \sum_{j \in [3]} w_{l_3, j, j, t_1} \end{equation} Step 3: Apply the matrices that correspond to the different row pairs. Here, we get that $r = y$, as the transfer operations correspond to the identity. Step 4: Apply the matrices that correspond to the top row pairs. We obtain the vector \begin{equation} z = \sum_{m \in [3]} \sum_{l_3, t_1 \in [3]} y_{l_3, t_1} \left( e_{m} \otimes e_{m} \otimes e_{l_3} \otimes e_{t_1} \right) \end{equation} Substituting in, we get that \begin{equation} z = \sum_{m \in [3]} \sum_{l_3, t_1 \in [3]} \sum_{j \in [3]} w_{l_3, j, j, t_1} \left( e_{m} \otimes e_{m} \otimes e_{l_3} \otimes e_{t_1} \right) \end{equation} and hence \begin{equation} z = \sum_{m \in [3]} \sum_{l_3, t_1 \in [3]} \sum_{j \in [3]} \sum_{l_1, l_2 \in [3]} v_{l_1, l_2, l_3, j, j} \det(e_{t_1} \otimes e_{l_1} \otimes e_{l_2}) \left( e_{m} \otimes e_{m} \otimes e_{l_3} \otimes e_{t_1} \right) \end{equation} Finally, as the third diagram returned from \textbf{Factor} corresponds to the permutation $(1432)$ in $S_4$, we perform \textbf{Permute}$(z, (1432))$ which returns the vector \begin{equation} \sum_{m \in [3]} \sum_{l_3, t_1 \in [3]} \sum_{j \in [3]} \sum_{l_1, l_2 \in [3]} v_{l_1, l_2, l_3, j, j} \det(e_{t_1} \otimes e_{l_1} \otimes e_{l_2}) \left( e_{t_1} \otimes e_{m} \otimes e_{m} \otimes e_{l_3} \right) \end{equation} This is the vector that is returned by \textbf{MatrixMult}. 
\end{example} \begin{figure*}[tb] \begin{tcolorbox}[colback=white!02, colframe=black] \begin{center} \scalebox{0.4}{\tikzfig{algorithm/planarmultSOn}} \end{center} \end{tcolorbox} \caption{We show how matrix multiplication is implemented in \textbf{PlanarMult} for $SO(n)$ (here $n=3$) using the tensor product decomposition of the algorithmically planar $(4+5) \backslash 3$--diagram given in Figure \ref{tensorproddecompSOn} as an example. We perform the matrix multiplication as follows: first, we deform the entire tensor product decomposition diagram by pulling each individual diagram up one level higher than the previous one, going from right--to-left, and then we apply the functor $\Psi$ at each level. Note, in particular, that we need to attach identity strings to the free vertices appearing in the top row. Finally, we perform matrix multiplication at each level to obtain the final output vector.} \label{planarmultSOn} \end{figure*} \paragraph{Time Complexity} For a matrix corresponding to a $(k,l)$--Brauer diagram, the analysis is the same as for the orthogonal group. For a matrix corresponding to an $(l+k) \backslash n$--diagram, we look at each step of the implementation that was given above. \textbf{Step 1:} Recall that we map a vector in $(\mathbb{R}^{n})^{\otimes k}$ to a vector in $(\mathbb{R}^{n})^{\otimes 2b+d+s}$ in this step. To obtain the best time complexity, we need to look at the tuples $J \in [n]^{2b+d}, T \in [n]^{s}$ and $B \in [n]^{n-s}$ that appear in (\ref{SOnStep1,1}), (\ref{SOnStep1,2}), and (\ref{SOnStep1,3}). Indeed, for each tuple $J$, we first need to consider how many tuples $T$ come with pairwise different entries in $[n]$. The number of such tuples is $\frac{n!}{(n-s)!}$. Consequently, we only need to perform multiplications and additions for entries with such indices $(J,T)$. Let us call any such pair of indices $(J,T)$ \textbf{valid}. Hence, for each valid tuple $(J,T)$, there are $(n-s)!$ multiplications and $(n-s)! - 1$ additions. Since the number of valid tuples of the form $(J,T)$ is \begin{equation} n^{2b+d}\frac{n!}{(n-s)!} \end{equation} we have that the overall time complexity is $O(n^{k - (n-s)}n!)$ for this step, since $2b+d = k - (n-s)$ by (\ref{SO(n)restrictions}). \textbf{Step 2:} By the operations performed in this step, we can immediately apply the analysis of Step 1 for the orthogonal group. Since the part of the matrix multiplication that corresponds to $B_i$, for some $i = 1 \rightarrow b$, maps a vector in $(\mathbb{R}^{n})^{\otimes d+s+2i}$ to a vector in $(\mathbb{R}^{n})^{\otimes d+s+2i-2}$, we have that the overall time complexity is $O(n^{k+s-(n-s)-1})$ for this step, since $2b + d + s = k + s - (n-s)$ by (\ref{SO(n)restrictions}). \textbf{Steps 3, 4:} These steps correspond to Steps 2, 3 for the orthogonal group, hence they have no cost. Hence, we have reduced the overall time complexity from $O(n^{l+k})$ to \begin{equation} O(n^{k - (n-s)}(n! + n^{s-1})) \end{equation}
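To close this subsection, the following sketch (NumPy, for illustration only) chains the steps together on the coefficient array of $v$ and reproduces the computation of the $SO(3)$ example above, with the axis orders taken from that example.
\begin{verbatim}
import numpy as np

n = 3
v = np.random.rand(n, n, n, n, n)       # coefficient array of v in (R^3)^{tensor 5}

lc = np.zeros((n, n, n))                # Levi-Civita tensor for n = 3
lc[0, 1, 2] = lc[1, 2, 0] = lc[2, 0, 1] = 1.0
lc[0, 2, 1] = lc[2, 1, 0] = lc[1, 0, 2] = -1.0

# Permute(v, (13524)): slot i of the output takes the factor from slot sigma(i).
v1 = np.transpose(v, (2, 3, 4, 0, 1))   # v1[l3,l4,l5,l1,l2] = v[l1,l2,l3,l4,l5]

# Step 1: contract the free bottom row indices against the Levi-Civita tensor.
w = np.einsum('abcde,tde->abct', v1, lc)        # w[l3,l4,l5,t1]

# Step 2: contract the bottom row pair.
y = np.einsum('ajjt->at', w)                    # y[l3,t1] = sum_j w[l3,j,j,t1]

# Step 3: the transfer operations are the identity.

# Step 4: prepend the top row pair, i.e. tensor with sum_m e_m tensor e_m.
z = np.multiply.outer(np.eye(n), y)             # z[m,m',l3,t1] = delta_{m m'} y[l3,t1]

# Permute(z, (1432)) gives the vector returned by MatrixMult.
out = np.transpose(z, (3, 0, 1, 2))             # out[t1,m,m',l3]
\end{verbatim}
Each intermediate array corresponds to one of the vectors displayed in the example, and for small $n$ the final array can be checked directly against multiplication by the dense matrix $H_\alpha$.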
\documentclass{article} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{authblk} \usepackage[nottoc]{tocbibind} \usepackage[margin=3cm]{geometry} \DeclareFontFamily{OT1}{pzc}{} \DeclareFontShape{OT1}{pzc}{m}{it}{<-> s * [1.10] pzcmi7t}{} \DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it} \usepackage{booktabs} \usepackage[pagebackref, pdftex]{hyperref} \renewcommand{\backreftwosep}{\backrefsep} \renewcommand{\backreflastsep}{\backrefsep} \renewcommand*{\backref}[1]{} \renewcommand*{\backrefalt}[4]{ \ifcase #1 [No citations.] \or [#2] \else [#2] } \usepackage{graphicx} \usepackage{tikz} \usetikzlibrary{calc, arrows, decorations.markings, decorations.pathmorphing, positioning, decorations.pathreplacing} \usepackage{capt-of} \setcounter{tocdepth}{2} \AtBeginDocument{ \def\MR#1{} } \newcommand{\To}{\longrightarrow} \newcommand{\0}{{\bf 0}} \newcommand{\1}{{\bf 1}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\C}{\mathbb{C}} \newcommand{\Cat}{\mathcal{C}} \newcommand{\CP}{\mathbb{CP}} \newcommand{\D}{\mathcal{D}} \newcommand{\Disc}{\mathbb{D}} \newcommand{\e}{\mathbf{e}} \newcommand{\E}{\mathcal{E}} \newcommand{\f}{\mathbf{f}} \newcommand{\F}{\mathbf{F}} \newcommand{\g}{\mathbf{g}} \newcommand{\G}{\mathbf{G}} \newcommand{\h}{\mathbf{h}} \renewcommand{\H}{\mathbf{H}} \newcommand{\horo}{\mathpzc{h}} \newcommand{\horos}{\mathfrak{H}} \newcommand{\HH}{\mathcal{H}} \newcommand{\hyp}{\mathbb{H}} \renewcommand{\i}{\mathbf{i}} \newcommand{\I}{\mathbf{I}} \renewcommand{\j}{\mathbf{j}} \newcommand{\J}{\mathbf{J}} \renewcommand{\k}{\mathbf{k}} \newcommand{\K}{\mathbf{K}} \renewcommand{\L}{\mathbb{L}} \newcommand{\Lag}{\mathcal L} \newcommand{\M}{\mathcal{M}} \newcommand{\Mbar}{\overline{\mathcal{M}}} \newcommand{\N}{\mathbb{N}} \newcommand{\p}{\mathbf{p}} \renewcommand{\P}{\mathcal{P}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\QQ}{\mathcal{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\Ring}{\mathcal{R}} \newcommand{\RP}{\mathbb{RP}} \newcommand{\s}{\mathfrak{s}} \renewcommand{\S}{\mathcal{S}} \newcommand{\T}{\mathbb{T}} \newcommand{\TT}{\mathcal{T}} \newcommand{\U}{\mathbb{U}} \newcommand{\V}{\mathcal{V}} \newcommand{\x}{{\bf x}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\ZZ}{\mathcal{Z}} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Byp}{Byp} \DeclareMathOperator{\Conv}{Conv} \DeclareMathOperator{\Down}{Down} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\For}{For} \DeclareMathOperator{\Fr}{Fr} \DeclareMathOperator{\gr}{gr} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Hopf}{Hopf} \DeclareMathOperator{\Id}{Id} \let\Im\relax \DeclareMathOperator{\Im}{Im} \let\Re\relax \DeclareMathOperator{\Re}{Re} \DeclareMathOperator{\Int}{Int} \DeclareMathOperator{\inv}{inv} \DeclareMathOperator{\Inv}{Inv} \DeclareMathOperator{\Isom}{Isom} \DeclareMathOperator{\Mat}{Mat} \DeclareMathOperator{\Mor}{Mor} \DeclareMathOperator{\Ob}{Ob} \DeclareMathOperator{\Quad}{Quad} \DeclareMathOperator{\Rep}{Rep} \DeclareMathOperator*{\Res}{Res} \DeclareMathOperator{\Sgn}{Sgn} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\Spin}{Spin} \DeclareMathOperator{\Stereo}{Stereo} \DeclareMathOperator{\Sut}{Sut} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\Top}{Top} \DeclareMathOperator{\Trace}{Trace} \DeclareMathOperator{\Up}{Up} \numberwithin{equation}{section} \newtheorem{theorem}[equation]{Theorem} \newtheorem{thm}{Theorem} \newtheorem{them}{Theorem} 
\newtheorem{conj}[equation]{Conjecture} \newtheorem{corollary}[equation]{Corollary} \newtheorem{cor}[equation]{Corollary} \newtheorem{lemma}[equation]{Lemma} \newtheorem{lem}[equation]{Lemma} \newtheorem{conjecture}[equation]{Conjecture} \newtheorem{prob}[equation]{Problem} \newtheorem{proposition}[equation]{Proposition} \newtheorem{prop}[equation]{Proposition} \newtheorem{qn}[equation]{Question} \newtheorem{axiom}[equation]{Axiom} \newtheorem{claim}[equation]{Claim} \newtheorem{defn}[equation]{Definition} \theoremstyle{definition} \newtheorem{eg}[equation]{Example} \newcommand{\refsec}[1]{Section~\ref{Sec:#1}} \newcommand{\refdef}[1]{Definition~\ref{Def:#1}} \newcommand{\refeg}[1]{Example~\ref{Eg:#1}} \newcommand{\reffig}[1]{Figure~\ref{Fig:#1}} \newcommand{\reftable}[1]{Table~\ref{Table:#1}} \newcommand{\refeqn}[1]{\eqref{Eqn:#1}} \newcommand{\reflem}[1]{Lemma~\ref{Lem:#1}} \newcommand{\refprop}[1]{Proposition~\ref{Prop:#1}} \newcommand{\refthm}[1]{Theorem~\ref{Thm:#1}} \newcommand{\refcor}[1]{Corollary~\ref{Cor:#1}} \renewcommand{\theenumi}{(\roman{enumi})} \renewcommand{\labelenumi}{\theenumi} \begin{document} \title{From Spinors to Horospheres: A Geometric Tour} \author{Daniel V. Mathews} \affil{School of Mathematics, Monash University \\ School of Physical and Mathematical Sciences, Nanyang Technological University \\ \texttt{[email protected]}} \author{Varsha} \affil{Department of Mathematics, University College London \\ \texttt{[email protected]}} \maketitle \begin{abstract} This article is an exposition and elaboration of recent work of the first author on spinors and horospheres. It presents the main results in detail, and includes numerous subsidiary observations and calculations. It is intended to be accessible to graduate and advanced undergraduate students with some background in hyperbolic geometry. The main result is the spinor--horosphere correspondence, which is a smooth, $SL(2,\C)$-equivariant bijection between two-component complex spin vectors and spin-decorated horospheres in three-dimensional hyperbolic space. The correspondence includes constructions of Penrose--Rindler and Penner, which respectively associate null flags in Minkowski spacetime to spinors, and associate horospheres to points on the future light cone. The construction is presented step by step, proceeding from spin vectors, through spaces of Hermitian matrices and Minkowski space, to various models of 3-dimensional hyperbolic geometry. Under this correspondence, we show that the natural inner product on spinors corresponds to a 3-dimensional, complex version of lambda lengths, describing a distance between horospheres and their decorations. We also discuss various applications of these results. An ideal hyperbolic tetrahedron with spin-decorations at its vertices obeys a Ptolemy equation, generalising the Ptolemy equation obeyed by 2-dimensional ideal quadrilaterals. More generally we discuss how real spinors describe 2-dimensional hyperbolic geometry. We also discuss the relationships between spinors, horospheres, and various sets of matrices. \end{abstract} \tableofcontents \section{Introduction} \subsection{Overview} At least since Descartes, mathematics has sought ways to describe geometry using algebra --- usually, though perhaps not always, in the hope that complicated geometric problems can be reduced to simpler algebraic calculations. 
In this paper we discuss a way to describe certain objects in 3-dimensional \emph{hyperbolic} geometry, called \emph{horospheres}, using pairs of complex numbers. Our use of pairs of complex numbers builds on that of Roger Penrose and Wolfgang Rindler in their book \cite{Penrose_Rindler84}, where they were considered as \emph{spinors}. Our results build on their work, so we follow their terminology. Spinors arise in various contexts in physics. At least since Einstein, physics has sought ways to describe physical objects geometrically. From this perspective, this paper discusses how to describe spinors in terms of the geometry of horospheres. Horospheres are standard objects in hyperbolic geometry. Though we define them below, we do assume some background in hyperbolic geometry. However, this paper is designed to be broadly accessible, and we hope that, for readers with a little knowledge of hyperbolic geometry, reading this paper may strengthen that knowledge, and inspire them to learn more. The goal of this paper is to explain in detail the following theorem of the first author in \cite{Mathews_Spinors_horospheres}, and some of its ramifications. The theorem says that pairs of complex numbers correspond to horospheres with some decorations on them, which we will define in due course. \begin{thm} \label{Thm:spinors_to_horospheres} There exists an explicit, smooth, bijective, $SL(2,\C)$-equivariant correspondence between nonzero spinors, and horospheres in hyperbolic 3-space $\hyp^3$ with spin decorations. \end{thm} So, given a pair of complex numbers $(\xi, \eta)$, what is the corresponding horosphere, and what is the decoration? We give an explicit answer in \refthm{explicit_spinor_horosphere_decoration}. Having a bijective correspondence between two mathematical objects is good, but it is even better when that correspondence preserves various structures on each side. A particularly nice aspect of the correspondence in \refthm{spinors_to_horospheres} is that it can tell us the \emph{distance} between horospheres, and more, from some elementary operations on complex numbers. \refthm{main_thm} tells us how to do this. A bijective correspondence between two mathematical objects is also nice when structures on one side can illuminate structures on the other. We will see various instances of this throughout the paper. One example is that, when we have four pairs of complex numbers, they obey certain equations called \emph{Pl\"{u}cker relations}. These correspond to equations relating distances between horospheres which we call \emph{Ptolemy equations}, as they have the same form as Ptolemy's theorem from classical Euclidean geometry \cite{Ptolemy_Almagest}. The full proof of \refthm{spinors_to_horospheres} takes us on a tour through various interesting mathematical constructions. Along the way we will see, for instance, Pauli matrices from quantum mechanics, Minkowski space from relativity theory, the Hopf fibration, stereographic projection, and the hyperboloid, conformal disc, and upper half space models of hyperbolic space. It is quite a journey, and in this paper we take the time to explain each step along the way, making various observations as we proceed. In this sense, this paper is a fuller exposition of \cite{Mathews_Spinors_horospheres}, with some further details, pictures, and calculations.
The proof brings together several existing constructions in relativity theory and hyperbolic geometry, including the null flag construction of Penrose--Rindler in \cite{Penrose_Rindler84} and the relation of the light cone to horocycles given by Penner in \cite{Penner87}. It is perhaps worth noting that part of the motivation for Penrose--Rindler's work \cite{Penrose_Rindler84} was that, using their constructions, complex numbers describe structures from both quantum mechanics, and relativity theory. Such phenomena arise here where, as we will see, for instance, the Pauli matrices of quantum mechanics arise in a relativistic context, and the group $SL(2,\C)$ plays several roles, simultaneously describing linear transformations of spinors, conformal transformations of the celestial sphere (regarded as $\CP^1$), and isometries of Minkowski space (i.e. Lorentz transformations). The potential for these mathematical ideas to describe physics has been taken up in the program of \emph{twistor theory} (see e.g. \cite{Huggett_Tod94, Penrose21}). In that context, the results of this paper give a further, very concrete and explicit, geometric interpretation of spinors, that may be of relevance elsewhere. However, the constructions we consider here are prior to the notion of twistors; they only concern spinors. As far as relativity theory is concerned, it is the special theory, not the general theory. Whatever the case, the spinor--horosphere correspondence of \refthm{spinors_to_horospheres} has already found several applications within geometry and topology, from generalising Descartes' circle theorem \cite{me_Zymaris}, to finding hyperbolic structures \cite{Mathews_Purcell_Ptolemy}, and inter-cusp distances in knot complements \cite{Howie_Mathews_et_al}. \subsection{Horospheres and their decorations} \label{Sec:intro_horospheres_decorations} So, what is a horosphere? \begin{defn} \ \label{Def:intro_horosphere} \begin{enumerate} \item A \emph{horoball} is the limit of increasing hyperbolic balls tangent to a given plane in $\hyp^3$ at a given point on a given side, as their radius tends to infinity. \item A \emph{horosphere} is the boundary of a horoball. \end{enumerate} \end{defn} See \reffig{horospheres_defn} for a picture of this construction. It may not be particularly informative at first instance, but horospheres appear distinctively in the various standard models of hyperbolic 3-space $\hyp^3$. In this paper we consider the hyperboloid model, which we denote $\hyp$; the conformal ball model, which we denote $\Disc$; and the upper half space model, which we denote $\U$. These are discussed in texts on hyperbolic geometry such as \cite{Anderson05, CFKP97, Iversen92, Ramsay_Richtmyer95, Ratcliffe19, Thurston97}. 
\begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=0.8] \draw[green] (0,0) ellipse (2cm and 0.4cm); \fill[white] (-2,0)--(2,0)--(2,0.5)--(-2,0.5); \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \draw[green] (0,0) circle (2cm); \draw[dashed,green] (0,0) ellipse (2cm and 0.4cm); \shade[ball color = red!40, opacity = 0.1] (0,1) circle (1cm); \draw (0,1) circle (1cm); \fill (0,0) circle (0.055cm); \shade[ball color = red!40, opacity = 0.1] (0,0.75) circle (0.75cm); \draw (0,0.75) circle (0.75cm); \shade[ball color = red!40, opacity = 0.1] (0,0.5) circle (0.5cm); \draw (0,0.5) circle (0.5cm); \shade[ball color = red!40, opacity = 0.1] (0,0.25) circle (0.25cm); \draw (0,0.25) circle (0.25cm); \fill (0,2) circle (0.055cm); \node[black] at (0,-1.5) {$\Disc$}; \node at (-0.75,1.4){$\horo$}; \end{tikzpicture} & \begin{tikzpicture}[scale=0.8] \draw[green] (-2,-0.5)--(2,-0.5)--(3,0.5)--(-1,0.5)--(-2,-0.5); \draw (-1,-0.5)--(0,0.5)--(0,3.5)--(-1,2.5)--(-1,-0.5); \fill[white] (0.5,1) circle (1cm); \shade[ball color = red!40, opacity = 0.1] (0.5,1) circle (1cm); \draw (0.5,1) circle (1cm); \shade[ball color = red!40, opacity = 0.1] (0.25,1) circle (0.75cm); \draw (0.25,1) circle (0.75cm); \shade[ball color = red!40, opacity = 0.1] (0,1) circle (0.5cm); \draw (0,1) circle (0.5cm); \shade[ball color = red!40, opacity = 0.1] (-0.25,1) circle (0.25cm); \draw (-0.25,1) circle (0.25cm); \fill[black] (0.5,0) circle (0.07cm); \fill[black] (-0.5,1) circle (0.07cm); \node[black] at (3,1.5) {$\U$}; \node[black] at (1.8,-0.2) {$\C$}; \node at (0.4,2){$\horo$}; \end{tikzpicture}\\ (a) & (b) \end{tabular} \captionof{figure}{Horosphere definition in the (a) disc model and (b) upper half space model.} \label{Fig:horospheres_defn} \end{center} In the hyperboloid model $\hyp$, a horosphere $\horo$ appears as the intersection of the hyperboloid with an affine 3-plane whose normal lies in the light cone. Roughly speaking, such planes are ``on a 45 degree angle''; in the context of conic sections, they are the planes which intersect the cone in parabolic sections. In the conformal ball model $\Disc$, a horosphere appears as a sphere tangent to the sphere at infinity. The point of tangency is called the \emph{centre} of the horosphere. In the upper half space model $\U$, with the boundary at infinity regarded as $\C \cup \{\infty\}$ in the usual way, a horosphere appears either as a horizontal plane, if its centre is $\infty$, or as a sphere tangent to $\C$ at its centre. See \reffig{horospheres}. \begin{center} \begin{tabular}{ccc} \begin{tikzpicture}[scale=0.8] \draw (-0.2,3.7) .. controls (-1,0.25) .. (1.8,4.27); \fill[white] (-4,3.7)--(0,0)--(4,3.7)--(-4,3.7); \fill[white] (4,4)--(0,0)--(-0.75,0.75)--(1.9,4.3)--(4,4.3); \draw[blue] (-4,4)--(0,0)--(4,4); \draw[dashed, thick] plot[variable=\t,samples=1000,domain=-75.5:75.5] ({tan(\t)},{sec(\t)}); \fill[white] (2,3)--(2.2,2.3)--(1.33,2); \draw[blue] (0,4) ellipse (4cm and 0.4cm); \draw[dotted, thick] (-0.2,3.7) .. controls (-1,0.25) ..
(1.8,4.27); \draw (0,4) ellipse (3.85cm and 0.3cm); \node[blue] at (-3.5,3){$L^+$}; \draw[dashed] (0,4) ellipse (4cm and 0.4cm); \draw[dashed] (0,4) ellipse (3.85cm and 0.3cm); \draw[dashed] (-4,4)--(0,0)--(4,4); \node at (-0.75,2.5){$\mathpzc{h}$}; \node at (-2.25,3){$\hyp$}; \end{tikzpicture} & \begin{tikzpicture}[scale=0.8] \draw[green] (0,0) ellipse (2cm and 0.4cm); \fill[white] (-2,0)--(2,0)--(2,0.5)--(-2,0.5); \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \draw[green] (0,0) circle (2cm); \draw[dashed,green] (0,0) ellipse (2cm and 0.4cm); \shade[ball color = red!40, opacity = 0.1] (-0.8,0.1) circle (1cm); \draw (-0.8,0.1) circle (1cm); \fill (-1.7,0.1) circle (0.055cm); \shade[ball color = red!40, opacity = 0.1] (1.1,-0.2) circle (0.8cm); \draw (1.1,-0.2) circle (0.8cm); \fill (1.5,-0.2) circle (0.055cm); \node[black] at (0,-1.5) {$\Disc$}; \node at (-0.75,1.4){$\horo_1$}; \node[black] at (1.1, 0.9) {$\horo_2$}; \end{tikzpicture} & \begin{tikzpicture}[scale=0.8] \draw[green] (-2,-0.5)--(2,-0.5)--(3,0.5)--(-1,0.5)--(-2,-0.5); \fill[white] (-0.1,0.5) circle (0.5cm); \shade[ball color = red!40, opacity = 0.1] (-0.1,0.5) circle (0.5cm); \draw (-0.1,0.5) circle (0.5cm); \draw (-2,1.5)--(2,1.5)--(3,2.5)--(-1,2.5)--(-2,1.5); \node[black] at (3,1.5) {$\U$}; \node[black] at (1.8,-0.2) {$\C$}; \node at (0.4,2){$\horo_1$}; \node[black] at (0.7, 0.8) {$\horo_2$}; \end{tikzpicture}\\ (a) & (b) & (c) \end{tabular} \captionof{figure}{Horospheres $\horo, \horo_1, \horo_2$ in the (a) hyperboloid model (drawn schematically, one dimension down), (b) conformal ball model and (c) upper half space model.} \label{Fig:horospheres} \end{center} As it turns out, a horosphere is isometric to the Euclidean plane. Even though hyperbolic 3-space $\hyp^3$ is negatively curved, horospheres are flat surfaces living inside $\hyp^3$. Perhaps this is most easily seen for those horospheres which appear as horizontal planes in the upper half space model $\U$. Using the standard description of $\U$ as \begin{equation} \label{Eqn:upper_half_space} \U = \left\{ (x,y,z) \in \R^3 \, \mid \, z > 0 \right\} \quad \text{with Riemannian metric} \quad ds^2 = \frac{dx^2 + dy^2 + dz^2}{z^2}, \end{equation} and fixing $z$ to be a constant $z_0$, we see that the hyperbolic metric induced on the horosphere $z=z_0$ is $\frac{dx^2 + dy^2}{z_0^2}$, a constant multiple of the Euclidean metric on the $xy$-plane. The \emph{decorations} we consider on horospheres take advantage of their Euclidean geometry. If we place a tangent vector at a point on a horosphere $\horo$, we may transport it around $\horo$ by parallel translation, to obtain a \emph{parallel tangent vector field} on $\horo$. Note this cannot be done on surfaces with nonzero curvature: parallel transport of a vector around a loop will in general not result in the same vector. By the Gauss--Bonnet theorem, the vector will be rotated by an angle equal to the total curvature enclosed by the loop. In a horosphere decoration, we are only interested in the direction of the vector, not its length. So a decoration is a \emph{parallel oriented line field}. (Alternatively, we could consider it as a parallel unit vector field.) Some decorated horospheres in the disc model and upper half space models are shown in \reffig{decorated_horospheres}.
\begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=0.8] \draw[green] (0,0) ellipse (2cm and 0.4cm); \fill[white] (-2,0)--(2,0)--(2,0.5)--(-2,0.5); \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \draw[green] (0,0) circle (2cm); \draw[dashed,green] (0,0) ellipse (2cm and 0.4cm); \shade[ball color = red!40, opacity = 0.1] (-0.8,0.1) circle (1cm); \draw (-0.8,0.1) circle (1cm); \fill (-1.7,0.1) circle (0.055cm); \draw[->, red] (-1.7,0.1) to[out=90,in=180] (-0.7,1); \draw[->, red] (-1.7,0.1) to[out=60,in=180] (-0.2,0.7); \draw[->, red] (-1.7,0.1) to[out=30,in=150] (-0.1,0.2); \draw[->, red] (-1.7,0.1) to[out=0,in=135] (-0.1,-0.2); \draw[->, red] (-1.7,0.1) to[out=-15,in=110] (-0.4,-0.6); \draw[->, red] (-1.7,0.1) to[out=-30,in=90] (-0.8,-0.8); \draw[->, red] (-1.7,0.1) to[out=-45,in=90] (-1.3,-0.7); \end{tikzpicture} & \begin{tikzpicture}[scale=0.8] \draw[green] (-2,-0.5)--(2,-0.5)--(3,0.5)--(-1,0.5)--(-2,-0.5); \fill[white] (-0.1,0.5) circle (0.5cm); \shade[ball color = red!40, opacity = 0.1] (-0.1,0.5) circle (0.5cm); \draw (-0.1,0.5) circle (0.5cm); \fill[red] (-0.1,0) circle (0.07cm); \draw[->, red] (-0.1,0) to[out=135,in=0] (-0.4,0.2); \draw[->, red] (-0.1,0) to[out=120,in=0] (-0.5,0.4); \draw[->, red] (-0.1,0) to[out=90,in=-45] (-0.4,0.7); \draw[->, red] (-0.1,0) to[out=60,in=-60] (-0.2,0.9); \draw[->, red] (-0.1,0) to[out=45,in=-45] (0.1,0.8); \draw[->, red] (-0.1,0) to[out=30,in=-90] (0.3,0.4); \draw (-2,1.5)--(2,1.5)--(3,2.5)--(-1,2.5)--(-2,1.5); \begin{scope}[xshift=0.5cm] \draw[red,->] (-1.1,1.7)--(-1.4,2); \draw[red,->] (-0.4,1.7)--(-1,2.4); \draw[red,->] (0.2,1.7)--(-0.4,2.4); \draw[red,->] (0.8,1.7)--(0.2,2.4); \draw[red,->] (1.2,2)--(0.8,2.4); \end{scope} \node[black] at (3,1.5) {$\U$}; \node[black] at (1.8,-0.2) {$\C$}; \end{tikzpicture}\\ (a) & (b) \end{tabular} \captionof{figure}{Decorated horospheres in the (a) conformal ball and (b) upper half space models.} \label{Fig:decorated_horospheres} \end{center} A decoration on a horosphere can be rotated through any angle. If we rotate it through an angle of $2\pi$, it returns to the same decoration. It turns out that it is possible to define a \emph{spin decoration}, which \emph{does not} return to the same decoration after rotating through $2\pi$, but \emph{does} return to the same decoration after rotation through $4\pi$. A rigorous definition is given in \refdef{spin_decoration}. It requires some technical details relating to the geometry of \emph{spin}, the same geometry that allows an electron to return to its initial state after rotating through $4\pi$, but not $2\pi$. If we do not worry about spin, then \refthm{spinors_to_horospheres} also gives a smooth, bijective, $SL(2,\C)$-equivariant correspondence between nonzero spinors \emph{up to sign}, and decorated horospheres. The $SL(2,\C)$ action then factors through $PSL(2,\C)$. We prove this in \refprop{main_thm_up_to_sign}. It is most convenient to describe a decorated horosphere explicitly in the upper half space model $\U$. It is common to think of the horizontal $xy$-plane in $\U$ as the complex plane, and introduce a complex coordinate $z = x+yi$. The boundary at infinity of hyperbolic space can then be regarded as $\partial \U = \C \cup \{\infty\}$. Thus, $\U$ can alternatively be described as \[ \U = \{ (z,h) \in \C \times \R \, \mid \, h > 0 \} = \C \times \R^+. \] A horosphere $\horo$ in $\U$ thus has its centre in $\C \cup \{\infty\}$.
If $\horo$ has centre $\infty$ then it appears as a horizontal plane in $\U$ at some height, and because it is parallel to $\C$, directions along $\horo$ may be specified by complex numbers. If $\horo$ has centre at $z \neq \infty$, then it appears as a Euclidean sphere in $\U$, with some diameter; and at its highest point, or \emph{north pole}, its tangent space is again parallel to $\C$, so directions along $\horo$ may be specified by complex numbers. (Two complex numbers which are positive multiples of each other specify the same direction.) Because a decoration is a \emph{parallel} oriented line field on $\horo$, it suffices to describe a decoration on $\horo$ at a single point, and the north pole is a convenient choice. Further details are given in \refsec{U_horospheres_decorations}. \begin{thm} \label{Thm:explicit_spinor_horosphere_decoration} Under the correspondence of \refthm{spinors_to_horospheres}, a nonzero spinor $(\xi, \eta) \in \C^2$ corresponds to a horosphere $\horo$ in $\U$, centred at $\xi/\eta$, with a spin-decoration. \begin{enumerate} \item If $\eta \neq 0$, then $\horo$ appears in $\U$ as a sphere with Euclidean diameter $|\eta|^{-2}$, and its decoration is specified at the north pole by $i \eta^{-2}$. \item If $\eta = 0$ then $\horo$ appears in $\U$ as a plane at height $|\xi|^2$, and its decoration is specified by $i \xi^2$. \end{enumerate} \end{thm} This theorem makes \refthm{spinors_to_horospheres} explicit, and in particular locates precisely the horosphere corresponding to a spinor. See \reffig{upper_half_space_decorated_horosphere}. However, it only describes decorations, rather than spin decorations. Indeed, in \refthm{explicit_spinor_horosphere_decoration}, the spinors $\pm (\xi, \eta)$ both yield the same decorated horosphere. When spin is fully taken into account, the two spinors $(\xi,\eta)$ and $-(\xi,\eta)$ correspond to spin-decorations on the same horosphere which differ by a $2\pi$ rotation.
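The recipe of \refthm{explicit_spinor_horosphere_decoration} is entirely explicit and easy to experiment with. The following short Python sketch is ours and purely illustrative (the function name and output format are not notation used elsewhere in this paper); it reads off the decorated horosphere in $\U$ from a nonzero spinor, though it does not keep track of spin, so $\pm(\xi,\eta)$ give the same output.
\begin{verbatim}
# Illustrative sketch of Theorem 2: from a nonzero spinor (xi, eta),
# read off the corresponding decorated horosphere in the upper half
# space model U.
def decorated_horosphere(xi: complex, eta: complex) -> dict:
    assert (xi, eta) != (0, 0), "spinor must be nonzero"
    if eta != 0:
        return {"shape": "sphere tangent to C",
                "centre": xi / eta,
                "euclidean_diameter": 1 / abs(eta) ** 2,
                "decoration_at_north_pole": 1j / eta ** 2}
    else:
        return {"shape": "horizontal plane",
                "centre": "infinity",
                "height": abs(xi) ** 2,
                "decoration": 1j * xi ** 2}

print(decorated_horosphere(1, 0))  # plane at height 1, decoration i
print(decorated_horosphere(0, 1))  # sphere at 0, diameter 1, decoration i
\end{verbatim}
The two spinors printed here are exactly those of the Example in the next subsection.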
\begin{center} \begin{tikzpicture}[scale=1.2] \draw[green] (-2,-0.5)--(2,-0.5)--(3,0.5)--(-1,0.5)--(-2,-0.5); \fill[white] (-0.1,0.5) circle (0.5cm); \shade[ball color = red!40, opacity = 0.1] (-0.1,0.5) circle (0.5cm); \draw (-0.1,0.5) circle (0.5cm); \fill[red] (-0.1,0) circle (0.07cm); \draw[->, red] (-0.1,0) to[out=135,in=0] (-0.4,0.2); \draw[->, red] (-0.1,0) to[out=120,in=0] (-0.5,0.4); \draw[->, red] (-0.1,0) to[out=90,in=-45] (-0.4,0.7); \draw[->, red] (-0.1,0) to[out=60,in=-60] (-0.2,0.9); \draw[->, red] (-0.1,0) to[out=45,in=-45] (0.1,0.8); \draw[->, red] (-0.1,0) to[out=30,in=-90] (0.3,0.4); \draw[red, ->] (-0.1,1)--(-0.3,1.2); \node[red] at (0.3,1.2) {$i \eta^{-2}$}; \node[red] at (-0.1,-0.3) {$\xi/\eta$}; \draw[<->] (0.8,0)--(0.8,1); \fill[white] (0.6,0.3)--(1.4,0.3)--(1.4,0.7)--(0.6,0.7)--cycle; \node[black] at (1,0.5) {$|\eta|^{-2}$}; \draw (-2,1.5)--(2,1.5)--(3,2.5)--(-1,2.5)--(-2,1.5); \begin{scope}[xshift=0.5cm] \draw[red,->] (-1.1,1.7)--(-1.4,2); \draw[red,->] (-0.4,1.7)--(-1,2.4); \draw[red,->] (0.2,1.7)--(-0.4,2.4); \draw[red,->] (0.8,1.7)--(0.2,2.4); \draw[red,->] (1.2,2)--(0.8,2.4); \node[red] at (-0.45,2.1) {$i \xi^2$}; \end{scope} \draw[<->] (2.2,0)--(2.2,2); \fill[white] (1.8,0.7)--(2.6,0.7)--(2.6,1.3)--(1.8,1.3)--cycle; \node[black] at (2.2,1) {$|\xi|^2$}; \node[black] at (3.5,1.5) {$\U$}; \node[black] at (2,-0.2) {$\C$}; \end{tikzpicture} \captionof{figure}{Decorated horospheres in the upper half space model corresponding to spinors $\kappa = (\xi, \eta)$.} \label{Fig:upper_half_space_decorated_horosphere} \end{center} \subsection{Spinor inner product and distances between horospheres} How can we describe the distance between two horospheres --- or even better, between two spin-decorated horospheres? Consider two horospheres $\horo_1, \horo_2$, with centres $p_1, p_2$. Then the geodesic $\gamma$ from $p_1$ to $p_2$ intersects both horospheres orthogonally. Let the intersection points of $\gamma$ with $\horo_1, \horo_2$ be $q_1, q_2$ respectively. Assuming $\horo_1, \horo_2$ are disjoint, the shortest path from $\horo_1$ to $\horo_2$ is given by $\gamma$ from $q_1$ to $q_2$. Denote this shortest distance between the horospheres by $\rho$. If $\horo_1, \horo_2$ have decorations, then we can say more --- there is also an \emph{angle} between them. Precisely, the decoration on $\horo_1$ describes a direction at $q_1$, and if we parallel translate this direction along $\gamma$ to $q_2$, then there is some angle $\theta$ such that rotating the direction at $q_2$ by $\theta$ around $\gamma$ aligns the two decorations. The angle $\theta$ between the two decorations is well defined modulo $2\pi$. If we consider \emph{spin} decorations, then the angle is well defined modulo $4\pi$. Rigorous definitions are given in \refsec{complex_lambda_lengths}. See \reffig{3}. \begin{figure}[h] \def\svgwidth{0.5\columnwidth} \begin{center} \input{complex_lambda_lengths_v5.pdf_tex} \caption{Complex translation distance between decorated horospheres.} \label{Fig:3} \end{center} \end{figure} In this way, we can define a \emph{complex distance} $d$ between spin-decorated horospheres, given by \[ d = \rho + i \theta. \] Our next theorem shows us that we can find the complex distance between two spin-decorated horospheres, from an elementary operation on the corresponding spinors.
\begin{thm} \label{Thm:main_thm_2} \label{Thm:main_thm} Given two spinors $\kappa_1, \kappa_2$, with corresponding spin-decorated horospheres $\mathpzc{h}_1, \mathpzc{h}_2$, \[ \{\kappa_1, \kappa_2\} = \exp\left(\frac{d}{2}\right), \] where $\{ \cdot, \cdot \}$ is the inner product of spinors, and $d$ is the complex distance between $\mathpzc{h}_1$ and $\mathpzc{h}_2$. \end{thm} Thus, the complex distance --- including both the distance between horospheres, and angle between decorations --- can be calculated simply from the inner product of spinors. But what is this inner product? As it turns out, it just amounts to arranging the two complex numbers of $\kappa_1$, and the two complex numbers of $\kappa_2$, as the columns of a matrix, and taking the determinant. \begin{defn} \label{Def:bilinear_form_defn} The \emph{spinor inner product} $\{ \cdot, \cdot \} \colon \C^2 \times \C^2 \To \C$ is defined for $\kappa_1 = (\xi_1,\eta_1)$ and $\kappa_2 = (\xi_2, \eta_2)$ by \[ \left\{ \kappa_1 , \kappa_2 \right\} = \det (\kappa_1, \kappa_2) = \det \begin{pmatrix} \xi_1 & \xi_2 \\ \eta_1 & \eta_2 \end{pmatrix} = \xi_1 \eta_2 - \xi_2 \eta_1. \] \end{defn} Equivalently, $\{ \cdot, \cdot \}$ can be regarded as the standard complex symplectic form on $\C^2$. If $\C^2$ has coordinates $(z_1, z_2)$, then the inner product above is (up to conventions about constants) just $dz_1 \wedge dz_2$. We call the quantity $\exp(d/2)$ the \emph{complex lambda length} between spin-decorated horospheres, denoted $\lambda$. \[ \lambda = \exp \left( \frac{d}{2} \right). \] It generalises the notion of \emph{lambda length}, defined by Penner in \cite{Penner87} as a real quantity in the 2-dimensional context. In two dimensions, one can define a distance between horocycles, but there is no angle involved. Our $\lambda$ here is a generalised, 3-dimensional, complex version of the lambda lengths from \cite{Penner87}. It is worth pointing out that the case when our spinors have \emph{real} coordinates essentially reduces to 2-dimensional geometry, though with some technicalities; and when the spinors are \emph{integers}, we can recover Ford circles: we discuss this in \refsec{real_spinors_H2}. Note that as $\theta$ is well defined modulo $4\pi$, $d$ is well defined modulo $4\pi i$, so $d/2$ is well defined modulo $2\pi i$, and hence $\lambda = \exp (d/2)$ is well defined. However, if we drop spin and only consider decorations, then $\theta$ is only well defined modulo $2\pi$, so $d$ is only well defined modulo $2\pi i$, and $\lambda$ is then only well defined up to sign. The spinors $\kappa_1, \kappa_2$ are then also only well defined up to sign, so \refthm{main_thm_2} still holds, but with a sign ambiguity. Although we have assumed the two horospheres $\horo_1, \horo_2$ are disjoint, in fact \refthm{main_thm} applies to any two spin-decorated horospheres. When horospheres overlap, the distance $\rho$ is well defined and negative; when they have the same centre, $\rho \rightarrow -\infty$ and $\lambda = 0$. We discuss this in \refsec{complex_lambda_lengths}. Taken together, \refthm{explicit_spinor_horosphere_decoration} and \refthm{main_thm} provide a powerful method for computations involving horospheres. Given a spinor, we can say precisely where the corresponding horosphere is, and what its decoration looks like. Conversely, given decorated horospheres, it is not difficult to find corresponding spinors. 
And given two spin-decorated horospheres, we can find the complex distance, or lambda length, between them, simply by taking a determinant. {\flushleft \textbf{Example.} } Consider the spinor $\kappa_1 = (1,0)$. By \refthm{explicit_spinor_horosphere_decoration} it corresponds to the horosphere $\horo_1$ in $\U$, centred at $\infty$ --- hence a horizontal plane --- at height $1$, with decoration specified by $i$. Similarly, $\kappa_2 = (0,1)$ corresponds to the horosphere $\horo_2$ in $\U$, centred at $0$, with Euclidean diameter $1$, and decoration specified at the north pole by $i$. These two horospheres are tangent at $(0,0,1) \in \U$, and their decorations agree there. It turns out that their spin decorations agree too, so their complex distance is given by $d = \rho + i \theta$ where $\rho = 0$ and $\theta = 0$, i.e. $d=0$. Hence their lambda length is $\lambda = \exp(d/2) = 1$. We verify \refthm{main_thm} by checking that $\{\kappa_1, \kappa_2\} = 1$ also, given by taking the determinant of the identity matrix. Multiplying $\kappa_1$ by $re^{i \theta}$ with $r>0$ and $\theta$ real moves the plane $\horo_1$ to height $r^2$ in $\U$, i.e. upwards by $2 \log r$, and rotates its decoration by $2\theta$. The complex distance between $\horo_1, \horo_2$ becomes $d = 2 \log r + 2 \theta i$, and we then find $\lambda = \exp(d/2) = r e^{i \theta}$, which again agrees with $\{\kappa_1, \kappa_2\}$. The situation is as in \reffig{3}. \subsection{Equivariance} \label{Sec:intro_equivariance} \refthm{spinors_to_horospheres} includes a statement that the spinor--horosphere correspondence is $SL(2,\C)$-equivariant. This means that there are actions of $SL(2,\C)$ on the space $\C^2$ of spinors, and on the space of spin-decorated horospheres, and that the correspondence respects those actions. The action of $SL(2,\C)$ on $\C^2$ is not complicated: it is just matrix-vector multiplication! It is easily computable. The action of $SL(2,\C)$ on spin-decorated horospheres, on the other hand, is a little more subtle. The orientation-preserving isometry group of $\hyp^3$ is well known to be $PSL(2,\C)$, and this isomorphism can be made quite explicit in the upper half space model, where elements of $PSL(2,\C)$ describe M\"{o}bius transformations. Thus, $PSL(2,\C)$ acts on $\hyp^3$ by isometries, and hence also on horospheres and decorated horospheres. However, spin decorations on horospheres live in a more complicated space. The group $SL(2,\C)$ is the double and universal cover of $PSL(2,\C)$, and can be regarded as the group of orientation-preserving isometries of $\hyp^3$ which also preserve spin structures. It is then possible to define an action of $SL(2,\C)$ on spin-decorated horospheres, and we do this precisely in \refsec{lifts_of_maps_spaces}. The equivariance of \refthm{spinors_to_horospheres} thus means that applying an $SL(2,\C)$ linear transformation to a spinor corresponds to applying the corresponding isometry to a spin-decorated horosphere. This can be useful. \subsection{Ptolemy equation and matrices} \label{Sec:Ptolemy_matrices} First appearing in Ptolemy's 2nd century \emph{Almagest} \cite{Ptolemy_Almagest} is \emph{Ptolemy's theorem}, that in a cyclic quadrilateral $ABCD$ in the Euclidean plane one has \[ AC \cdot BD = AB \cdot CD + AD \cdot BC.
\] \begin{center} \begin{tikzpicture} \draw (0,0) circle (2cm); \draw (1.414,1.414)--(-1.532,1.285)--(-1.414,-1.414)--(1.879,-0.684)--(1.414,1.414)--(-1.414,-1.414); \draw (-1.532,1.285)--(1.879,-0.684); \node at (-1.6,1.6){A}; \node at (1.6,1.6){B}; \node at (2.0,-0.8){C}; \node at (-1.6,-1.6){D}; \end{tikzpicture}\\ \captionof{figure}{Ptolemy's theorem.} \label{Fig:Ptolemys_thm} \end{center} See \reffig{Ptolemys_thm}. Similar \emph{Ptolemy equations} arise in various mathematical contexts, such as representations of 3-manifold groups, e.g. \cite{GGZ15, Zickert16}, and more generally in \emph{cluster algebras}, see e.g. \cite{Fomin_Shapiro_Thurston08, Fomin_Thurston18, Williams14}. As part of their spinor algebra, Penrose--Rindler in \cite{Penrose_Rindler84} discuss an antisymmetric quantity $\varepsilon_{AB}$ describing the inner product $\{ \cdot , \cdot \}$. In particular, it obeys a Ptolemy-like equation (e.g. \cite[eq. 2.5.21]{Penrose_Rindler84}) \[ \varepsilon_{AC} \varepsilon_{BD} = \varepsilon_{AB} \varepsilon_{CD} + \varepsilon_{AD} \varepsilon_{BC}. \] In our context, we obtain a Ptolemy equation as follows. \begin{thm} \label{Thm:main_thm_Ptolemy} For any ideal tetrahedron in $\hyp^3$, with spin-decorated horospheres $\mathpzc{h}_i$ ($i=0,1,2,3$) about its vertices, and $\lambda_{ij}$ the lambda length between $\mathpzc{h}_i$ and $\mathpzc{h}_j$, \begin{equation} \label{Eqn:ptolemy} \lambda_{02} \lambda_{13} = \lambda_{01} \lambda_{23} + \lambda_{12} \lambda_{03}. \end{equation} \end{thm} See \reffig{4}. Penner in \cite{Penner87} gave a similar equation for real lambda lengths in an ideal quadrilateral in the hyperbolic plane. \refthm{main_thm_Ptolemy} extends this result into 3 dimensions, using complex lambda lengths. \begin{center} \begin{tikzpicture}[scale=2,>=stealth',pos=.8,photon/.style={decorate,decoration={snake,post length=1mm}}] \draw (-1,0)--(1.5,0.5); \fill[white] (0.75,0.35) circle (0.1 cm); \draw (0,1.5)--(-1,0)--(1,0)--(0,1.5)--(1.5,0.5)--(1,0); \draw[blue] (-0.83,0.1) circle (0.2); \draw[blue] (0.85,0.12) circle (0.2); \draw[blue] (0,1.3) circle (0.2); \draw[blue] (1.3,0.5) circle (0.2); \shade[ball color = blue!40, opacity = 0.1] (-0.83,0.1) circle (0.2cm); \shade[ball color = blue!40, opacity = 0.1] (0.85,0.12) circle (0.2cm); \shade[ball color = blue!40, opacity = 0.1] (0,1.3) circle (0.2cm); \shade[ball color = blue!40, opacity = 0.1] (1.3,0.5) circle (0.2cm); \draw[red,->] (-1,0) to[out=90,in=225] (-0.9,0.25); \draw[red,->] (-1,0) to[out=60,in=180] (-0.75,0.2); \draw[red,->] (-1,0) to[out=45,in=150] (-0.7,0.08); \draw[red,->] (-1,0) to[out=30,in=135] (-0.75,-0.05); \draw[red,->] (1,0) to[out=90,in=-45] (0.9,0.25); \draw[red,->] (1,0) to[out=130,in=0] (0.75,0.2); \draw[red,->] (1,0) to[out=135,in=60] (0.7,0.08); \draw[red,->] (1,0) to[out=150,in=45] (0.75,-0.05); \draw[red,->] (1.5,0.5) to[out=120,in=0] (1.2,0.6); \draw[red,->] (1.5,0.5) to[out=150,in=15] (1.15,0.5); \draw[red,->] (1.5,0.5) to[out=180,in=60] (1.2,0.35); \draw[red,->] (1.5,0.5) to[out=200,in=60] (1.3,0.34); \draw[red,->] (0,1.5) to[out=210,in=90] (-0.15,1.3); \draw[red,->] (0,1.5) to[out=225,in=90] (-0.1,1.2); \draw[red,->] (0,1.5) to[out=260,in=120] (0,1.15); \draw[red,->] (0,1.5) to[out=290,in=120] (0.1,1.2); \node at (-1,-0.25){1}; \node at (1,-0.25){2}; \node at (1.7,0.5){3}; \node at (0,1.7){0}; \draw [black!50!green, ultra thick, ->] (-0.5,-0.1) to [out=0, in=180] (0.5,0.1); \draw [black!50!green] (0,-0.2) node {$\lambda_{12}$}; \draw [black!50!green, ultra thick, ->] (-0.4,1.1) to
[out=240, in=60] (-0.6,0.4); \draw [black!50!green] (-0.7,0.75) node {$\lambda_{01}$}; \draw [black!50!green, ultra thick, ->] (0.22,1) to [out=-60, in=120] (0.78,0.5); \draw [black!50!green] (0.4,0.65) node {$\lambda_{02}$}; \draw [black!50!green, ultra thick, ->] (1.15,0.05) to [out=45, in=250] (1.18,0.27); \draw [black!50!green] (1.365,0.16) node {$\lambda_{23}$}; \draw [black!50!green, ultra thick, ->] (0.35,1.17) to [out=-33, in=147] (1.15,0.85); \draw [black!50!green] (0.85,1.11) node {$\lambda_{03}$}; \end{tikzpicture} \captionof{figure}{Decorated horospheres and complex lambda lengths along the edges of an ideal tetrahedron.} \label{Fig:4} \end{center} It is perhaps more standard in 3-dimensional geometry and topology to describe hyperbolic ideal tetrahedra using \emph{shape parameters}, which are also \emph{cross-ratios} of the four ideal vertices. Shape parameters were used famously by Thurston to develop gluing and completeness equations for hyperbolic 3-manifolds \cite{Thurston_notes}. As we discuss in \refsec{shape_parameters}, from the lambda lengths of an ideal tetrahedron, one can recover the shape parameters. The spinor--horosphere correspondence allows us to consider horospheres and their decorations via spinors, which are vectors in $\C^2$. So if we have \emph{several} spin-decorated horospheres, we then have \emph{several} vectors in $\C^2$, which can be arranged as the columns of a \emph{matrix}. We can then approach problems involving multiple horospheres, or ideal \emph{polygons} or \emph{polyhedra}, by using the algebra of matrices. In a sense, \refthm{main_thm_Ptolemy} is the first result in this regard. An ideal polyhedron in $\hyp^3$ has some number $d$ of ideal vertices. Decorating each ideal vertex with a spin-decorated horosphere, we obtain a bijective correspondence between suitably decorated ideal polyhedra, and $2 \times d$ complex matrices satisfying certain conditions. Moreover, if we want to consider such polyhedra up to \emph{isometry}, we can take a quotient by the $SL(2,\C)$ action. Taking a quotient of a space of $2 \times d$ matrices by a left action of $2 \times 2$ matrices is well known to produce \emph{Grassmannians}. So the spinor--horosphere correspondence allows us to relate spaces of polyhedra to Grassmannian-like objects built from matrices. We explore these ideas in \refsec{polygons_polyhedra_matrices}; they are also developed in \cite{Mathews_Spinors_horospheres}. Similarly, we can relate \emph{ideal polygons} in $\hyp^2$ with $d$ ideal vertices to $2 \times d$ \emph{real} matrices. Lambda lengths are then real, and their signs can be related to cyclic ordering around the circle at infinity; we discuss this in \refsec{spin_coherent_positivity}. \subsection{The journey ahead: overview of proofs and constructions} As we have mentioned, proving our main theorems involves a journey through several areas of mathematics. Let us now give an overview of where this journey will take us. Essentially, the proof of \refthm{spinors_to_horospheres} consists of carefully tracking spinors through various constructions. In \cite{Mathews_Spinors_horospheres} several steps are elided, and various spaces are implicitly identified. Here we treat them separately. The journey proceeds in two stages, in \refsec{spin_vectors_to_decorated_horospheres} and \refsec{spin}. The first stage, in \refsec{spin_vectors_to_decorated_horospheres}, goes from spinors to decorated horospheres, but does not incorporate spin.
The second stage, in \refsec{spin}, upgrades the spaces and maps of the first stage, to incorporate spin. Once these two stages are complete, in \refsec{applications} we consider some applications. \subsubsection{Pre-spin stage} The first, or ``pre-spin'' stage, in \refsec{spin_vectors_to_decorated_horospheres}, has five steps. (In \cite{Mathews_Spinors_horospheres} they are elided to two.) The first step goes from \emph{spinors} to \emph{Hermitian matrices}, and it is implicit when Penrose--Rindler form the expression \[ \kappa^A \; \overline{\kappa}^{A'}. \] This corresponds to taking a spinor $\kappa = (\xi, \eta)$, regarding it as a column vector, and multiplying it by its conjugate transpose $\kappa^*$. The result is a $2 \times 2$ Hermitian matrix. \[ \kappa \kappa^* = \begin{pmatrix} \xi \\ \eta \end{pmatrix} \begin{pmatrix} \overline{\xi} & \overline{\eta} \end{pmatrix}. \] The second step goes from \emph{Hermitian matrices} to \emph{Minkowski space} $\R^{1,3}$, which has coordinates $(T,X,Y,Z)$ and metric $g = dT^2 - dX^2 - dY^2 - dZ^2$. The key fact is that $2 \times 2$ Hermitian matrices are precisely those which can be written in the form \begin{equation} \label{Eqn:spinvec_to_Hermitian} \frac{1}{2} \begin{pmatrix} T+Z & X+iY \\ X-iY & T-Z \end{pmatrix} = \frac{1}{2} \left( T \sigma_T + X \sigma_X + Y \sigma_Y + Z \sigma_Z \right) \end{equation} and hence such matrices can be \emph{identified} with points in $\R^{1,3}$. Here we observe the appearance of the \emph{Pauli matrices} of quantum mechanics, \[ \sigma_T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \sigma_X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_Y = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}, \quad \sigma_Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \] Putting these two steps together, from a nonzero spinor we obtain a $2 \times 2$ Hermitian matrix, and then a point of $\R^{1,3}$. This construction arguably goes back much further than Penrose--Rindler, to the first uses of spinors in quantum theory. In any case, it turns out that the resulting point in Minkowski space always lies on the \emph{positive} or \emph{future light cone} $L^+$, which is given by \[ T^2 - X^2 - Y^2 - Z^2 = 0 \quad \text{and} \quad T>0. \] Thus, to a spinor, our first two steps associate a point in $L^+$. This association, however, is not bijective, indeed far from it. After all, $\C^2$ is 4-dimensional, but $L^+$ is 3-dimensional. Thus Penrose--Rindler consider not just points on the light cone, but \emph{flags}. Roughly speaking, a flag consists of a \emph{point} on $L^+$ (0-dimensional), the \emph{ray} through that point (1-dimensional), and a \emph{2-plane} containing the ray, tangent to the light cone (2-dimensional). The possible 2-planes provide an extra dimension of flexibility, and eventually provide the direction of a spin-decoration. So, as it turns out, we must associate to a spinor not just a point on the light cone, but a flag. See \reffig{flag}. We think of the ray as the flagpole, and the 2-plane as a flag unfurled from it!
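These first two steps are also easy to compute with directly. The following Python sketch is ours and purely illustrative: it takes a spinor $\kappa$, forms $\kappa \kappa^*$, reads off $(T,X,Y,Z)$ using the identification \refeqn{spinvec_to_Hermitian}, and checks that the result lies on the future light cone $L^+$.
\begin{verbatim}
# Illustrative sketch of the first two steps: a spinor kappa gives the
# Hermitian matrix kappa kappa^*, identified with a point (T,X,Y,Z) of
# Minkowski space via the Pauli matrices; the point lies on L^+.
import numpy as np

sigma_T = np.eye(2, dtype=complex)
sigma_X = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_Y = np.array([[0, 1j], [-1j, 0]], dtype=complex)  # sign convention as above
sigma_Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spinor_to_minkowski(xi: complex, eta: complex):
    kappa = np.array([[xi], [eta]])
    S = kappa @ kappa.conj().T            # 2x2 Hermitian matrix kappa kappa^*
    T = np.real(S[0, 0] + S[1, 1])        # S = (1/2)(T+Z, X+iY; X-iY, T-Z)
    Z = np.real(S[0, 0] - S[1, 1])
    X = np.real(S[0, 1] + S[1, 0])
    Y = np.real(1j * (S[1, 0] - S[0, 1]))
    # sanity check: the Pauli decomposition reproduces S
    assert np.allclose(S, 0.5 * (T*sigma_T + X*sigma_X + Y*sigma_Y + Z*sigma_Z))
    return T, X, Y, Z

T, X, Y, Z = spinor_to_minkowski(2 + 1j, 1 - 1j)
print((T, X, Y, Z))
print(np.isclose(T**2 - X**2 - Y**2 - Z**2, 0), T > 0)   # on L^+
\end{verbatim}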
\begin{center} \begin{tikzpicture} \draw[blue] (3.75,1.5) ellipse (2cm and 0.3cm); \draw[green!50!black] (3.75,0.5) ellipse (1cm and 0.2cm); \fill[white] (2.75,0.5)--(4.75,0.5)--(4.75,0.72)--(2.75,0.72); \draw[dashed, green!50!black] (3.75,0.5) ellipse (1cm and 0.2cm); \draw[green!50!black] (1,0)--(5.5,0)--(6.5,1)--(5.25,1); \draw[green!50!black] (2.25,1)--(2,1)--(1,0); \draw[dashed,green!50!black] (5.25,1)--(2.25,1); \draw[dashed,blue] (2.75,0.5)--(3.25,0); \draw[blue] (2.75,0.5)--(1.75,1.5); \draw[dashed, blue] (4.25,0)--(4.75,0.5); \draw[blue] (4.75,0.5)--(5.75,1.5); \draw[blue] (3.25,0)--(3.75,-0.5)--(4.25,0.0); \draw[red] (3.75,-0.5)--(4,0); \draw[dashed,red] (4,0)--(4.1875,0.375); \fill[white] (4.475,0.95)--(4.675,0.75)--(4.275,0.55); \draw[red] (4.1375,0.275)--(4.475,0.95)--(4.675,0.75)--(4.275,0.55); \node[blue] at (1.5,1.5){$L^+$}; \fill[red] (4.475,0.95) circle (0.055cm); \node[red] at (7.5,1.25){$\kappa=(\xi,\eta)$}; \draw[->,red](6.2,1.25)--(4.6,0.95); \node[green!50!black] at (1.8,0.2){$T=1$}; \node[green!50!black] at (2.9,0.85){\footnotesize$\mathbb{CP}^1$}; \end{tikzpicture} \captionof{figure}{A flag in Minkowski space (drawn a dimension down).} \label{Fig:flag} \end{center} However, if we are to proceed carefully and step by step, then flags in Minkowski space must come from spinors via an intermediate step in Hermitian matrices. As it turns out, we must consider flags in the space of Hermitian matrices. So the first two steps of our construction produce maps \[ \{ \text{Spinors} \} \stackrel{\f}{\To} \{ \text{Hermitian matrices} \} \stackrel{\g}{\To} \{ \text{Future light cone in $\R^{1,3}$} \} \] which are then upgraded to maps \[ \{ \text{Spinors} \} \stackrel{\F}{\To} \{ \text{Flags in Hermitian matrices} \} \stackrel{\G}{\To} \{ \text{Flags in $\R^{1,3}$} \}. \] These steps are carried out in \refsec{spin_vectors_to_Hermitian} to \refsec{flags}, making various observations along the way. (The composition $\g \circ \f$ is essentially the Hopf fibration under stereographic projection!) Roughly, \refsec{spin_vectors_to_Hermitian} considers the map $\f$, \refsec{hermitian_to_minkowski} considers the map $\g$, and \refsec{flags} considers flags and upgrades the maps to $\F$ and $\G$. As it turns out, each step has a ``lower case'' version, which considers simpler structures, and an ``upper case'' version, which includes some sort of tangent structure such as a flag or decoration. (In \cite{Mathews_Spinors_horospheres}, these two steps are elided into one, with $\f$ and $\g$ becoming $\phi_1$, and $\F, \G$ becoming $\Phi_1$.) These ideas are all in \cite{Penrose_Rindler84}; we give them a slightly different, detailed and explicit treatment. The third step, covered in \refsec{Minkowski_to_hyperboloid}, goes from the \emph{light cone} to \emph{horospheres in the hyperboloid model $\hyp$} of hyperbolic space, and from \emph{flags} to \emph{decorated horospheres in $\hyp$}. This step builds on a construction of Penner \cite{Penner87}, one dimension down. Given a point $p \in L^+$, we consider the 3-plane in $\R^{1,3}$ consisting of $x$ satisfying the linear equation \begin{equation} \label{Eqn:horosphere_eqn} \langle p,x \rangle = 1 \end{equation} in the Minkowski inner product. This is exactly the type of plane that intersects the hyperboloid $\hyp$ in a horosphere, and indeed it yields a map \[ \{ \text{Future light cone in $\R^{1,3}$} \} \stackrel{\h}{\To} \{ \text{Horospheres in $\hyp$} \}. \] See \reffig{flag_horosphere}.
It turns out that, if we also have a \emph{flag} based at the point $p$, then that flag intersects the horosphere in a way that precisely gives a decoration, and so this map can be upgraded to a map \[ \{ \text{Flags in $\R^{1,3}$} \} \stackrel{\H}{\To} \{ \text{Decorated horospheres in $\hyp$} \}. \] \begin{center} \begin{tikzpicture}[scale=0.8] \draw (-0.2,3.7) .. controls (-1,0.25) .. (1.8,4.27); \fill[white] (-4,3.7)--(0,0)--(4,3.7)--(-4,3.7); \fill[white] (4,4)--(0,0)--(-0.75,0.75)--(1.9,4.3)--(4,4.3); \draw[blue] (-4,4)--(0,0)--(4,4); \draw[dashed, thick] plot[variable=\t,samples=1000,domain=-75.5:75.5] ({tan(\t)},{sec(\t)}); \fill[white] (2,3)--(2.2,2.3)--(1.33,2); \draw[blue] (0,4) ellipse (4cm and 0.4cm); \draw[dotted, thick] (-0.2,3.7) .. controls (-1,0.25) .. (1.8,4.27); \draw (0,4) ellipse (3.85cm and 0.3cm); \draw[red] (0,0)--(2,3); \fill[red] (2,3) circle (0.055cm); \node[blue] at (-3.5,3){$L^+$}; \node[red] at (2.25,3){$p$}; \draw[red] (2,3)--(2.2,2.3)--(1.33,2)--(2,3); \draw[dashed] (0,4) ellipse (4cm and 0.4cm); \draw[dashed] (0,4) ellipse (3.85cm and 0.3cm); \draw[dashed] (-4,4)--(0,0)--(4,4); \node at (-0.75,2.5){$\mathpzc{h}$}; \node at (-2.25,3){$\hyp$}; \draw[gray, ->] (-0.2,3)--(0.8,3); \draw[gray, ->] (-0.4,2)--(0.1,2); \end{tikzpicture} \captionof{figure}{Decorated horosphere in $\hyp$ arising from a flag (drawn a dimension down).} \label{Fig:flag_horosphere} \end{center} The fourth and fifth steps, covered in \refsec{hyperboloid_to_disc} and \refsec{Disc_to_U} respectively, are standard isometries between models of $\hyp^3$. As it turns out, for us the most straightforward route from the hyperboloid model $\hyp$ to the upper half space model $\U$ is via the conformal disc model $\Disc$. Our maps transfer various structures between models, \[ \{ \text{Horospheres in $\hyp$} \} \stackrel{\i}{\To} \{ \text{Horospheres in $\Disc$} \} \stackrel{\j}{\To} \{ \text{Horospheres in $\U$} \}, \] the latter involving stereographic projection. The upper-case versions handle decorations, \[ \{ \text{Decorated horospheres in $\hyp$} \} \stackrel{\I}{\To} \{ \text{Decorated horospheres in $\Disc$} \} \stackrel{\J}{\To} \{ \text{Decorated horospheres in $\U$} \}. \] (In \cite{Mathews_Spinors_horospheres}, all models of $\hyp^3$ are identified, so $\h, \i, \j$ are elided into $\phi_2$ and $\H, \I, \J$ into $\Phi_2$.) Having completed these five steps, in \refsec{putting_maps_together} we put them together. We have a sequence of maps which start from a spinor, proceed to obtain a flag at a point on $L^+$, and then eventually finish up at a horosphere with a decoration. In \refprop{JIHGF_general_spin_vector} we prove \refthm{explicit_spinor_horosphere_decoration} for decorated horospheres. Much of this story already appears in \cite{Penrose_Rindler84}, if we forget horospheres. The point $p$ on $L^+$ obtained from the spinor $\kappa = (\xi, \eta)$ yields a point on the celestial sphere $\S^+$, which is also the boundary at infinity of hyperbolic space $\partial \hyp^3$. Regarding this sphere as $\CP^1$ via stereographic projection, the point $p$ is at $\xi/\eta$; it is the centre of the corresponding horosphere. The flag and/or decoration yields a tangent direction to $\CP^1$ at $\xi/\eta$, as discussed in \cite[ch. 1]{Penrose_Rindler84}. See \reffig{1}.
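As a concrete check of the last point, the following sketch (ours, purely illustrative, and using one standard choice of stereographic projection, from the north pole of the unit sphere to its equatorial plane; precise conventions are fixed later in the paper) recovers $\xi/\eta$ from the Minkowski point determined by $\kappa$.
\begin{verbatim}
# Illustrative check (our conventions): the point of L^+ obtained from
# kappa = (xi, eta), rescaled to the celestial sphere T = 1 and then
# stereographically projected from the north pole, lands at xi/eta.
def celestial_image(xi: complex, eta: complex) -> complex:
    # (T,X,Y,Z) of kappa kappa^*, with W = X + iY; requires eta != 0
    T = abs(xi) ** 2 + abs(eta) ** 2
    Z = abs(xi) ** 2 - abs(eta) ** 2
    W = 2 * xi * eta.conjugate()
    return W / (T - Z)      # stereographic projection of (X,Y,Z)/T

xi, eta = 2 + 1j, 1 - 1j
print(celestial_image(xi, eta), xi / eta)   # both equal 0.5+1.5j
\end{verbatim}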
\begin{center} \begin{tabular}{cc} \begin{tikzpicture} \draw[blue] (3.75,1.5) ellipse (2cm and 0.3cm); \draw[green] (3.75,0.5) ellipse (1cm and 0.2cm); \fill[white] (2.75,0.5)--(4.75,0.5)--(4.75,0.72)--(2.75,0.72); \draw[dashed, green!50!black] (3.75,0.5) ellipse (1cm and 0.2cm); \draw[green!50!black] (1,0)--(5.5,0)--(6.5,1)--(5.25,1); \draw[green!50!black] (2.25,1)--(2,1)--(1,0); \draw[dashed,green!50!black] (5.25,1)--(2.25,1); \draw[dashed,blue] (2.75,0.5)--(3.25,0); \draw[blue] (2.75,0.5)--(1.75,1.5); \draw[dashed, blue] (4.25,0)--(4.75,0.5); \draw[blue] (4.75,0.5)--(5.75,1.5); \draw[blue] (3.25,0)--(3.75,-0.5)--(4.25,0.0); \draw[red] (3.75,-0.5)--(4,0); \draw[dashed,red] (4,0)--(4.1875,0.375); \fill[white] (4.475,0.95)--(4.675,0.75)--(4.275,0.55); \draw[red] (4.1375,0.275)--(4.475,0.95)--(4.675,0.75)--(4.275,0.55); \node[blue] at (1.5,1.5){$L^+$}; \fill[red] (4.475,0.95) circle (0.055cm); \node[red] at (7.5,1.25){$\kappa=(\xi,\eta)$}; \draw[->,red](6.2,1.25)--(4.6,0.95); \node[green!50!black] at (1.8,0.2){$T=1$}; \node[green!50!black] at (2.9,0.85){\footnotesize$\mathbb{CP}^1$}; \end{tikzpicture} & \begin{tikzpicture} \draw[green!50!black] (0,-0.25) ellipse (1.45cm and 0.25cm); \fill[white] (-1.45,-0.25)--(1.45,-0.25)--(1.45,0.05)--(-1.45,0.05); \draw[dashed,green!50!black] (0,-0.25) ellipse (1.45cm and 0.25cm); \shade[ball color = green!40, opacity = 0.1] (0,0) circle (1.5cm); \draw[green] (0,0) circle (1.5cm); \draw[dashed,green] (0,1.5)--(1,0.375); \draw[green!50!black] (1,0.375)--(2,-0.75); \fill (1,0.375) circle (0.055cm); \draw[->,red] (1,0.375)--(1.3,0.6); \draw[->,red] (2,-0.75)--(2.4,-0.7); \draw (-3,-0.9)--(3,-0.9)--(4,0.1)--(1.48,0.1); \draw[dashed] (1.48,0.1) -- (-1.48,0.1); \draw (-1.48,0.1)--(-2,0.1)--(-3,-0.9); \node[green!50!black] at (-1.4,1.2){$\mathbb{CP}^1$}; \fill (2,-0.75) circle (0.055cm); \draw[<-,red] (0.9,0.375)--(-3,0.3); \node[red] at (2,-1.2){$\frac{\xi}{\eta}$}; \node[red] at (2.4,-0.4){$\frac{i}{\eta^2}$}; \end{tikzpicture}\\ (a) & (b) \end{tabular} \captionof{figure}{Spinor $\kappa$ with (a) corresponding null flag, and (b) projection to $\CP^1$.} \label{Fig:1} \end{center} \subsubsection{Spin cycle} In the second stage of our constructions, having completed the five steps of maps $\f,\g,\h,\i,\j$ and their upgrades to flags and decorations $\F,\G,\H,\I,\J$, we do not need to go through the five steps in detail again: in \refsec{spin} we just upcycle them to include spin! First there are the technicalities: we must define spin-decorated horospheres and various related notions. We do this in \refsec{spin-decorated_horospheres}. Once this is done, in \refsec{topology_of_spaces_and_maps} we consider the topology of the maps $\F,\G,\H,\I,\J$ and spaces involved. Upcycling our maps to spin versions is essentially just lifting to universal covers, and we obtain \begin{align*} \{ \text{Spinors} \} &\stackrel{\widetilde{\F}}{\To} \{ \text{Spin flags in Hermitian matrices} \} \stackrel{\widetilde{\G}}{\To} \{ \text{Spin flags in $\R^{1,3}$} \} \\ & \stackrel{\widetilde{\H}}{\To} \{ \text{Spin-decorated horospheres in $\hyp$} \} \stackrel{\widetilde{\I}}{\To} \{ \text{Spin-decorated horospheres in $\Disc$} \} \\ &\stackrel{\widetilde{\J}}{\To} \{ \text{Spin-decorated horospheres in $\U$} \}. \end{align*} We can then prove \refthm{spinors_to_horospheres} and \refthm{explicit_spinor_horosphere_decoration}. It remains to prove \refthm{main_thm}. In \refsec{complex_lambda_lengths} we properly define lambda lengths, and in \refsec{proof_main_thm} we prove the theorem.
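Although the proofs are still to come, \refthm{main_thm} and the Ptolemy equation of \refthm{main_thm_Ptolemy} are easy to explore numerically, since on the spinor side everything reduces to $2 \times 2$ determinants; the Ptolemy equation is then the classical three-term Pl\"{u}cker relation. The following Python sketch (ours, purely illustrative, and ignoring the sign and ordering conventions fixed carefully later) checks this on the Example above and on random spinors.
\begin{verbatim}
# Illustrative sketch: the spinor inner product { , } is a 2x2
# determinant, and the Ptolemy equation for lambda lengths is the
# three-term Pluecker relation for these determinants.
import random

def bracket(k1, k2):
    # {k1, k2} = det of the matrix with columns k1, k2
    return k1[0] * k2[1] - k2[0] * k1[1]

# The Example above: kappa_1 = (1,0), kappa_2 = (0,1) have lambda
# length exp(d/2) = 1, the determinant of the identity matrix.
print(bracket((1, 0), (0, 1)))            # 1

# Ptolemy / Pluecker relation for four random spinors kappa_0,...,kappa_3.
def random_spinor():
    return (complex(random.gauss(0, 1), random.gauss(0, 1)),
            complex(random.gauss(0, 1), random.gauss(0, 1)))

k0, k1, k2, k3 = (random_spinor() for _ in range(4))
lhs = bracket(k0, k2) * bracket(k1, k3)
rhs = bracket(k0, k1) * bracket(k2, k3) + bracket(k1, k2) * bracket(k0, k3)
print(abs(lhs - rhs) < 1e-9)              # True, up to rounding
\end{verbatim}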
\subsubsection{Post-spin cycle} Having completed the spin cycle, we then examine a few applications in \refsec{applications}. \refsec{3d_hyp_geom} considers three-dimensional hyperbolic geometry, including the Ptolemy equation of \refthm{main_thm_Ptolemy}. \refsec{real_spinors_H2} considers what happens when spinors are real; we obtain some 2-dimensional hyperbolic geometry, and relations to positivity, triangulated polygons, and Ford circles and Farey fractions. \refsec{polygons_polyhedra_matrices} considers generalising to ideal hyperbolic polygons and polyhedra, and matrices built out of spinors. \subsection{Notation} \label{Sec:notation} In the careful calculations and step-by-step approach of this paper, there is unavoidably much notation. We have tried to be consistent throughout and avoid duplication of notation. We have followed some notation of Penrose--Rindler \cite{Penrose_Rindler84}, some that is standard in Minkowski geometry, and some that is standard in hyperbolic geometry; some however is probably not standard. Throughout, complex numbers are denoted by lower case Greek letters, matrices are denoted by upper case Latin letters, and real numbers usually by lower case Latin letters. (These letters however can also denote other things.) The set of $m\times n$ matrices with entries from a set $\mathbb{F}$ is denoted $\mathcal{M}_{m\times n}(\mathbb{F})$. A ring, field or vector space $\mathbb{F}$ without its zero element is denoted $\mathbb{F}_\times$. In particular, the space of nonzero spinors $\C^2 \setminus \{(0,0)\}$ is abbreviated to $\C^2_\times$. Hyperbolic 3-space (independent of model) is denoted $\hyp^3$ and we use $\hyp, \Disc, \U$ to refer to various models. An overline $\overline{x}$ is commonly used to denote both complex conjugates and elements of quotient spaces. We use both in close proximity, so to avoid potential confusion, we denote the latter by underlines. That is, $\overline{\alpha}$ is the complex conjugate of $\alpha$, and $\underline{S}$ is an element of a quotient space. In Appendix \ref{Sec:Notation} there is a table of notation for the reader's convenience. Unfortunately for our notation, the letter H is ubiquitous in this subject. Already in this introduction we have seen hyperbolic, hyperboloid, horospheres, Hermitian, height, $\hyp$, $\horo$, $h$, $\h$, $\H$ and $\widetilde{\H}$. There will also be $\HH$, $\mathfrak{H}$, and $\h_\partial$. We can only apologise. \subsection{Acknowledgments} The first author is supported by Australian Research Council grant DP210103136. \section{From spinors to null flags to decorated horospheres} \label{Sec:spin_vectors_to_decorated_horospheres} In this section we establish the necessary constructions for the main theorems (without spin). We start with a definition, following the terminology of \cite{Penrose_Rindler84} as far as we need it. \begin{defn} A \emph{spin vector}, or \emph{two-component spinor}, or just \emph{spinor}, is a pair of complex numbers. \end{defn} \subsection{From spin vectors to Hermitian matrices} \label{Sec:spin_vectors_to_Hermitian} The first step in our journey goes from spin vectors to Hermitian matrices via the map $\f$. In \refsec{Hermitian_matrices_and_properties} we introduce various families of Hermitian matrices; they may seem obscure but we will see in \refsec{hermitian_to_minkowski} that they correspond to standard objects in Minkowski space. In \refsec{map_f} we define and discuss the map $\f$. In \refsec{SL2C_and_f} we discuss $SL(2,\C)$ actions and show $\f$ is $SL(2,\C)$-equivariant.
Finally in \refsec{derivatives_of_f} we consider some derivatives of $\f$, motivating the need for flags. \subsubsection{Hermitian matrices and their properties} \label{Sec:Hermitian_matrices_and_properties} \begin{defn} \ \begin{enumerate} \item The set of Hermitian matrices in $\mathcal{M}_{2\times2}(\C)$ is denoted $\HH$. \item $\HH_0=\{S\in\HH \, \mid \, \det S=0\}$ is the set of elements of $\HH$ with determinant zero. \item $\HH_0^{0+}=\{S\in\HH_0 \, \mid \, \Trace S \geq 0 \}$ is the set of elements of $\HH_0$ with non-negative trace. \item $\HH_0^+=\{S\in\HH_0 \, \mid \, \Trace(S)> 0 \}$ is the set of elements of $\HH_0$ with positive trace. \end{enumerate} \end{defn} Observe that $\HH$ is a 4-dimensional real vector space with respect to, for instance, the Pauli basis \[ \sigma_T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \sigma_X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_Y = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}, \quad \sigma_Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \] Note however that none of $\HH_0$, $\HH_0^{0+}$ or $\HH_0^+$ is closed under addition, hence none is a vector space. However, $\R$ acts on $\HH_0$ by multiplication: a real multiple of an element of $\HH_0$ again lies in $\HH_0$. Similarly, the non-negative reals $\R^{0+}$ act on $\HH_0^{0+}$ by multiplication, and the positive reals $\R^+$ act on $\HH_0^+$ by multiplication. We observe some basic facts about Hermitian matrices of determinant zero. \begin{lem} \label{Lem:H0_trace_diagonal} For $S \in \HH_0$: \begin{enumerate} \item The diagonal elements are both $\geq 0$, or both $\leq 0$. \item $S\in\HH_0^{0+}$ iff both diagonal entries are non-negative. \item $S\in\HH_0^{+}$ iff at least one diagonal entry is positive. \item $\HH_0^+ \subset \HH_0^{0+}$, with $\HH_0^{0+} \setminus \HH_0^+=\{0\}$. \end{enumerate} \end{lem} \begin{proof} Letting $S = \begin{pmatrix} a & b+ci \\ b-ci & d\end{pmatrix}$ where $a,b,c,d\in\R$, we observe that $\det S = ad - b^2 - c^2=0$. \begin{enumerate} \item Since $ad = b^2 + c^2 \geq 0$, either $a,d \geq 0$ or $a,d \leq 0$. \item From (i), $\Trace S = a+d \geq0$ iff $a,d\geq 0$. \item From (i) $\Trace S = a+d >0$ iff at least one of $a,d$ is positive. \item It is immediate from the definition that $\HH_0^+ \subseteq \HH_0^{0+}$. If $S \in \HH_0^{0+} \setminus \HH_0^+$ then $\det S=0=\Trace S$, so from (ii) $a=d=0$, thus $b^2+c^2 = 0$, so $b=c=0$, i.e., $S=0$. \end{enumerate} \end{proof} Thus $\HH_0^{0+}$ can be defined as all $S\in\HH_0$ with both diagonal entries non-negative. Similarly $\HH_0^+$ can be defined as all $S\in\HH_0$ with at least one diagonal entry positive. \subsubsection{The map from spin vectors to Hermitian matrices} \label{Sec:map_f} \begin{defn} \label{Def:f} The map $\f$ from spin vectors to Hermitian matrices is given by \[ \f \colon \C^2 \To \HH, \quad \f (\kappa) = \kappa \, \kappa^*. \] \end{defn} Here we view $\kappa$ as a column vector, regarding $\C^2$ as $\M_{2 \times 1}(\C)$. \begin{lem} \label{Lem:f_surjectivity} The map $\f$ is smooth and has the following properties: \begin{enumerate} \item $\f(\C^2)=\HH_0^{0+}$. \item $\f(\kappa)=0$ iff $\kappa = 0$. \item The map $\f$ restricts surjectively to a map $\C^2_\times \To \HH_0^+$ (which we also denote $\f$). \end{enumerate} \end{lem} \begin{proof} For general $\kappa = (\xi, \eta)$ we describe $\f$ explicitly; it is manifestly smooth.
\begin{equation} \label{Eqn:f_formula} \f(\xi, \eta) = \begin{pmatrix} \xi \\ \eta \end{pmatrix} \begin{pmatrix} \overline{\xi} & \overline{\eta} \end{pmatrix} = \begin{pmatrix} \xi \overline{\xi} & \xi \overline{\eta} \\ \eta \overline{\xi} & \eta \overline{\eta} \end{pmatrix} = \begin{pmatrix} |\xi|^2 & \xi \overline{\eta} \\ \eta \overline{\xi} & |\eta|^2 \end{pmatrix} \end{equation} \begin{enumerate} \item Observe $\f(\kappa)$ has determinant zero and trace $|\xi|^2 + |\eta|^2 \geq 0$. Thus the image of $\f$ lies in $\HH_0^{0+}$. To see that the image is $\HH_0^{0+}$, take $S = \begin{pmatrix} a & re^{i\theta} \\ re^{-i\theta} & b \end{pmatrix} \in \HH_0^{0+}$, where $r \geq 0$ and $a,b,\theta\in\R$. Then $ab=r^2$, and by \reflem{H0_trace_diagonal}(ii) we have $a,b \geq 0$. Letting $\sqrt{\cdot}$ denote the non-negative square root of a non-negative real number, we may take, for example, $(\xi, \eta) = \left( \sqrt{a} e^{i\theta}, \sqrt{b} \right)$ or $\left( \sqrt{a}, \sqrt{b} e^{-i\theta} \right)$, and then $\f(\xi, \eta) = S$. \item Clearly $\f(0) = 0$. If $\f(\kappa) = 0$ then the diagonal elements of $\f(\kappa)$ are $|\xi|^2 = |\eta|^2 = 0$, so $\kappa=0$. \item If $\kappa \neq 0$ then at least one of the diagonal entries of $\f(\kappa)$ is positive, so by \reflem{H0_trace_diagonal}(iii), $\f(\kappa) \in \HH_0^+$. For surjectivity, take $S \in \HH_0^+$, which by \reflem{H0_trace_diagonal}(iv) is equivalent to $S \in \HH_0^{0+}$ and $S \neq 0$. By (i) there exists $\kappa \in \C^2$ such that $\f(\kappa) = S$. By (ii), $\kappa \neq 0$, i.e. $\kappa \in \C^2_\times$. \end{enumerate} \end{proof} The map $\f$ is not injective; the next lemma describes precisely the failure of injectivity. \begin{lem} \label{Lem:when_f_equal} $\f(\kappa) = \f(\kappa')$ iff $\kappa = e^{i\theta} \kappa'$ for some $\theta\in\R$. \end{lem} \begin{proof} If $\kappa = e^{i \theta} \kappa'$ then we have $\f(\kappa) = \kappa \kappa^* = \left( \kappa' e^{i\theta} \right) \left( e^{-i\theta} \kappa'^* \right) = \kappa' \kappa'^* = \f(\kappa')$. For the converse, suppose $\f(\kappa) = \f(\kappa')$. If $\f(\kappa) = \f(\kappa')=0$ then by \reflem{f_surjectivity}(ii) we have $\kappa = \kappa' = 0$ so the result holds trivially. Thus we assume $\f(\kappa) = \f(\kappa')\neq0$, and hence, again using \reflem{f_surjectivity}(ii), $\kappa, \kappa' \neq (0,0)$. Let $\kappa = (\xi, \eta)$ and $\kappa' = (\xi', \eta')$. Considering \refeqn{f_formula} and equating diagonal entries gives $|\xi| = |\xi'|$ and $|\eta| = |\eta'|$. We then have $\xi = e^{i \theta} \xi'$ and $\eta = e^{i \phi} \eta'$ for some $\theta,\phi\in\R$. Thus \[ \f(\kappa) = \begin{pmatrix} \xi \overline{\xi} & \xi \overline{\eta} \\ \eta \overline{\xi} & \eta \overline{\eta} \end{pmatrix} = \begin{pmatrix} \xi' \overline{\xi'} & e^{i(\theta - \phi)} \xi' \overline{\eta'} \\ e^{i(\phi - \theta)} \eta' \overline{\xi'} & \eta' \overline{\eta'} \end{pmatrix} \quad \text{while} \quad \f(\kappa') = \begin{pmatrix} \xi' \overline{\xi'} & \xi' \overline{\eta'} \\ \eta' \overline{\xi'} & \eta' \overline{\eta'} \end{pmatrix}. \] If $\xi' \overline{\eta'} \neq 0$, comparing off-diagonal entries gives $\theta = \phi$ (mod $2\pi$), and we have $(\xi,\eta) = e^{i\theta}(\xi',\eta')$ as desired. If instead $\xi' \overline{\eta'} = 0$, then one of $\xi', \eta'$ is zero, and (since $|\xi| = |\xi'|$ and $|\eta| = |\eta'|$) so is the corresponding coordinate of $\kappa$; then $\kappa = e^{i\theta} \kappa'$ or $\kappa = e^{i\phi} \kappa'$ respectively, and the result again holds. \end{proof} {\flushleft \textbf{Remark: $\f$ is the cone on the Hopf fibration.} } The \emph{Hopf fibration} is a fibration of $S^3$ as an $S^1$ bundle over $S^2$. We will discuss it in more detail in \refsec{f_compose_g} and \refsec{Hopf}, but we can see it already.
The restriction of $\f$ to $S^3 = \{(\xi,\eta) \in \C^2 \, \mid \, |\xi|^2 + |\eta|^2 =1\}$, since it is smooth and identifies precisely those pairs $(\xi, \eta), (\xi', \eta')$ such that $(\xi, \eta) = e^{i\theta}(\xi', \eta')$, must topologically be the Hopf fibration $S^3 \To S^2$. Similarly, the restriction of $\f$ to $\C_\times^2 \cong S^3 \times \R$ is topologically the product of the Hopf fibration with the identity map on $\R$, $S^3 \times \R \To S^2 \times \R$. Extending to the full domain $\C^2$ then cones off both these spaces with the addition of a single extra point, extending $S^3 \times \R$ to $\C^2$ (the cone on $S^3$) and extending $S^2 \times \R$ to the cone on $S^2$. In other words, $\f$ is the cone on the Hopf fibration. The topology of $\HH$ and various subspaces will become clearer in \refsec{hermitian_to_minkowski} when we consider Minkowski space; see \reflem{Hermitian_topology} and surrounding discussion. \subsubsection{$SL(2,\C)$ actions and equivariance} \label{Sec:SL2C_and_f} We now define $SL(2,\C)$ actions on $\C^2$ and $\HH$. We denote a general element of $SL(2,\C)$ by $A$ and a general element of $\HH$ by $S$. We denote both actions by a dot where necessary. We already mentioned the action on $\C^2$ in the introductory \refsec{intro_equivariance}. \begin{defn} \label{Def:SL2C_action_on_C2} $SL(2,\C)$ acts from the left on $\C^2$ by usual matrix-vector multiplication, $A\cdot\kappa = A \kappa$. \end{defn} \begin{lem} \label{Lem:SL2C_by_symplectomorphisms} For any $\kappa_1, \kappa_2 \in \C^2$ and $A \in SL(2,\C)$, we have \[ \{A \cdot \kappa_1, A \cdot \kappa_2 \} = \{ \kappa_1, \kappa_2 \}. \] \end{lem} In other words, the action of $SL(2,\C)$ on $\C^2$ is by symplectomorphisms, preserving the complex symplectic form $\{ \cdot, \cdot \}$. \begin{proof} Let $M\in\mathcal{M}_{2\times2}(\C)$ have columns $\kappa_1, \kappa_2$. Then by definition $\{ \kappa_1, \kappa_2 \} = \det M$. Further, $AM\in\mathcal{M}_{2 \times 2}(\C)$ has columns $A \kappa_1$ and $A \kappa_2$, so that $\{ A \kappa_1, A \kappa_2 \} = \det (AM)$. Since $A \in SL(2,\C)$ we have $\det A = 1$ so $\det(AM) = \det M$. \end{proof} \begin{defn} \label{Def:SL2C_actions_on_C2_H} \label{Def:standard_SL2C_actions} $SL(2,\C)$ acts from the left on $\HH$ by $A\cdot S = ASA^*$. \end{defn} To see that we indeed have an action on $\HH$ note that $(ASA^*)^* = ASA^*$ and, for $A,A' \in SL(2,\C)$, we have \begin{equation} \label{Eqn:group_action_on_Hermitian} (AA')\cdot S = AA'S(AA')^* = AA'SA'^*A^* = A(A'SA'^*)A^* = A \cdot (A' \cdot S). \end{equation} Note also that, for $S,S' \in \HH$ and $a, a' \in \R$ we have \begin{equation} \label{Eqn:linear_action_on_Hermitian} A \cdot \left( a S + a' S' \right) = A \left( a S + a' S' \right) A^* = a ASA^* + a' AS'A^* = a A \cdot S + a' A \cdot S' \end{equation} so $SL(2,\C)$ acts by real linear maps on $\HH$. Observe that \begin{equation} \label{Eqn:basic_equivariance} \f (A\cdot\kappa) = (A\cdot\kappa)(A\cdot\kappa)^* = A \, \kappa \, \kappa^* \, A^* = A \f(\kappa) A^* = A\cdot \f(\kappa). \end{equation} \begin{lem} \label{Lem:SL2C_preerves_Hs} The action of $SL(2,\C)$ on $\HH$ restricts to actions on $\HH_0$, $\HH_0^{0+}$ and $\HH_0^+$. \end{lem} \begin{proof} If $\det S = 0$ then $\det(A\cdot S) = \det(ASA^*) = \det(A) \det(S) \det(A^*) = 0$, so $\HH_0$ is preserved.
If $S \in \HH_0^{0+}$ then by \reflem{f_surjectivity}(i), $S = \f(\kappa)$ for some $\kappa$; by \refeqn{basic_equivariance} then $A \cdot S = A\cdot \f(\kappa) = \f(A\cdot\kappa)$, which by \reflem{f_surjectivity}(i) again lies in $\HH_0^{0+}$. Thus $\HH_0^{0+}$ is preserved. If $S \in \HH_0^+$ then the same argument applies, using \reflem{f_surjectivity}(iii) instead of (i): $S = \f(\kappa)$ for some $\kappa \neq 0$, and since $A \in SL(2,\C)$, $\kappa \neq 0$ implies $A\cdot\kappa \neq 0$. Thus $A \cdot S = A \cdot \f(\kappa) = \f(A\cdot\kappa) \in \HH_0^+$ as desired. \end{proof} \begin{lem} \ \label{Lem:restricted_actions_on_H} \begin{enumerate} \item The actions of $SL(2,\C)$ on $\C^2$ and $\HH_0^{0+}$ are equivariant with respect to $\f$. \item The actions of $SL(2,\C)$ on $\C^2_\times$ and $\HH_0^+$ are equivariant with respect to $\f$. \end{enumerate} \end{lem} \begin{proof} The equivariance is precisely expressed by \refeqn{basic_equivariance}. \end{proof} \begin{lem} \label{Lem:SL2C_on_C2_transitive} The action of $SL(2,\C)$ on $\C^2_\times$ is transitive. That is, for any $\kappa, \kappa' \in \C^2_\times$ there exists $A \in SL(2,\C)$ such that $A \cdot \kappa = \kappa'$. \end{lem} (Note the $A$ here is not unique.) \begin{proof} For an example of a matrix in $SL(2,\C)$ taking $(1,0)$ to $\kappa = (\xi, \eta) \in \C^2_\times$, consider \[ A_\kappa = \begin{pmatrix} \xi & 0 \\ \eta & \xi^{-1} \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} \xi & - \eta^{-1} \\ \eta & 0 \end{pmatrix}. \] As $\kappa \in \C^2_\times$, at least one of $\xi, \eta$ is nonzero, hence at least one of these matrices is well defined. Then the matrix $A_{\kappa'} A_\kappa^{-1}$ takes $\kappa$ to $\kappa'$. \end{proof} \subsubsection{Derivatives of $\f$} \label{Sec:derivatives_of_f} So far, we have associated to a spinor $\kappa\in\C^2$ a Hermitian matrix $\f(\kappa)$. We now proceed to associate to it some tangent information. Consider the derivative of $\f$, as a \emph{real} smooth function, by regarding both $\C^2$ and $\HH$ as $\R^4$. The derivative of $\f$ at a point $\kappa = (\xi, \eta) = (a+bi,c+di) \in \C^2$ (corresponding to $(a,b,c,d) \in \R^4$) in the direction $\nu \in T_\kappa \C^2 \cong \C^2$ is given by \[ D_\kappa \f (\nu) = \left. \frac{d}{ds} \f(\kappa+\nu s) \right|_{s=0} \] where $s$ is a real variable. Regarding $\kappa,\nu\in\mathcal{M}_{2\times 1}(\C)$, we have \[ \f(\kappa+ \nu s) = (\kappa + \nu s)(\kappa+\nu s)^* = \kappa \kappa^* + \left( \kappa \nu^* + \nu \kappa^* \right) s + \nu \nu^* s^2 \] so that \begin{equation} \label{Eqn:derivative_formula} D_\kappa \f(\nu) = \kappa \nu^* + \nu\kappa^*. \end{equation} Since $\f$ has image in $\HH_0^{0+}\subset\HH$, and since the tangent space to a real vector space is the space itself, this derivative lies in $\HH$, which is readily seen via the expression $\kappa \nu^* + \nu \kappa^*$. However, while tangent vectors to $\HH_0^{0+}$ can be regarded as Hermitian matrices, these matrices do not generally lie in $\HH_0^{0+}$, and similar remarks apply to $\HH_0$ and $\HH_0^+$. Indeed, it is straightforward to check that in general $\kappa \nu^* + \nu \kappa^*$ does not lie in $\HH_0$. Derivatives of $\f$ will be useful in the sequel and we note derivatives in some directions here. \begin{lem} \label{Lem:derivatives_of_f_in_easy_directions} For any $\kappa \in \C^2_\times$ we have \[ D_\kappa \f(\kappa) = 2 \f(\kappa) \quad \text{and} \quad D_\kappa \f (i \kappa) = 0.
\] \end{lem}
The first of these says that as $\kappa$ increases along a (real) ray from the origin, $\f(\kappa)$ also increases along a (real) ray from the origin. The second is equivalent to the fact from \reflem{when_f_equal} that $\f$ is constant along the circle fibres $e^{i\theta} \kappa$ over $\theta \in \R$, and $i\kappa$ is the fibre direction.
\begin{proof} Using equation \refeqn{derivative_formula} we obtain \begin{align*} D_\kappa \f (\kappa) &= 2 \kappa \kappa^* = 2 \f(\kappa) \\ D_\kappa \f (i \kappa) &= \kappa (i \kappa)^* + i \kappa \kappa^* = \kappa \kappa^* (-i) + i \kappa \kappa^* = 0. \end{align*} \end{proof}
We observe that the action of $SL(2,\C)$ on $\C^2$ extends to tangent vectors $\nu$ in a standard way. If $\nu$ is tangent to $\C^2$ ($\cong \R^4$) at a point $\kappa$, and $A$ lies in $SL(2,\C)$ (or indeed in $GL(4,\R)$), then $A\nu$ is a tangent vector to $\C^2$ at $A \kappa$. This is just the standard fact that the derivative of a linear map on a vector space is itself. Precisely, differentiating \refeqn{basic_equivariance}, we obtain \begin{equation} \label{Eqn:equivariance_of_derivative_of_f} D_{A \kappa} \f ( A \nu) = A\cdot D_\kappa \f(\nu), \end{equation} so that the resulting action of $SL(2,\C)$ on tangent vectors is also equivariant. (Equation \refeqn{equivariance_of_derivative_of_f} also follows immediately from \refeqn{derivative_formula} and \refdef{SL2C_actions_on_C2_H}.)
Thus, to a spinor $\kappa$ and a ``tangent spinor'' $\nu$ we associate a Hermitian matrix $\f(\kappa)$ and a tangent $D_\kappa \f(\nu)$. However, we want to obtain information from $\kappa$ only, and we do not want to lose any information in passing from $\kappa$ to $\f(\kappa)$ together with tangent data. We are thus interested in $\nu$ being a \emph{function} of $\kappa$. Letting \[ \nu = \ZZ(\kappa) \quad \text{for some real smooth function} \quad \ZZ \colon \R^4 \To \R^4, \] we might then try to associate to a spinor $\kappa$ the Hermitian matrix $\f(\kappa)$ and its tangent $D_\kappa \f ( \ZZ(\kappa)) = \kappa \ZZ(\kappa)^* + \ZZ(\kappa) \kappa^*$. However, $\kappa$ is a four (real) dimensional object, and $\f$ has image in the three-dimensional space $\HH_0^{0+}$, so we can only reasonably expect one extra coordinate's worth of information from tangent data. Moreover, it will be difficult to obtain equivariance under $SL(2,\C)$. On the one hand, applying $A \in SL(2,\C)$ to $D_\kappa \f( \ZZ(\kappa) )$, we would associate to $A\kappa$ the tangent direction \[ A \cdot D_\kappa \f(\ZZ(\kappa)) = A \left( \kappa \ZZ(\kappa)^* + \ZZ(\kappa) \kappa^* \right) A^* \] at $\f(A\kappa)$; but on the other hand, we would associate to $A \kappa$ the tangent direction \[ D_{A \kappa} \f( \ZZ(A\kappa) ) = A \kappa \ZZ(A\kappa)^* + \ZZ(A\kappa) (A \kappa)^*. \]
Penrose and Rindler describe a neat solution, providing the extra coordinate's worth of information equivariantly via a certain \emph{flag} based on $\f(\kappa)$. Such flags, however, are more easily seen in Minkowski space, and so we first introduce the map to Minkowski space.
\subsection{From Hermitian matrices to the positive light cone in Minkowski space} \label{Sec:hermitian_to_minkowski}
Our second step is from Hermitian matrices to Minkowski space via the map $\g$ which, as mentioned in the introduction, may be described by Pauli matrices. The isomorphism $\g$ allows us to regard Hermitian matrices and Minkowski space as the same thing: for us, Hermitian matrices essentially \emph{are} points in Minkowski space.
In \refsec{Minkowski_space_and_g} we discuss various notions in Minkowski space and the map $\g$. In \refsec{f_compose_g} we consider the composition $\g \circ \f$. In \refsec{Hopf} we discuss how $\g \circ \f$ is related to stereographic projection and the Hopf fibration. Finally, in \refsec{inner_products_spinors-Minkowski} we discuss a relationship between the inner products on spinors and Minkowski space. \subsubsection{Minkowski space and the map $\g$} \label{Sec:Minkowski_space_and_g} We start with definitions. Write points in Minkowski space as $p = (T,X,Y,Z)$, $p' = (T',X',Y',Z')$. \begin{defn} \ \label{Def:light_cones} \begin{enumerate} \item Minkowski space $\R^{1,3}$ is the 4-dimensional vector space $\R^4$, with inner product \[ \langle p,p' \rangle = TT' - XX' - YY' - ZZ', \] and the $(3+1)$-dimensional Lorentzian manifold structure on $\R^4$ with metric $ds^2 = dT^2 - dX^2 - dY^2 - dZ^2$. \item The \emph{light cone} $L \subset \R^{1,3}$ is $L=\{(T,X,Y,Z) \in \R^{1,3} \, \mid \, T^2 - X^2 - Y^2 - Z^2 = 0\}$. \item The \emph{non-negative light cone} $L^{0+} \subset \R^{1,3}$ is $L^{0+}=\{(T,X,Y,Z) \in L \, \mid \, T \geq 0\}$. \item The \emph{positive light cone} $L^+ \subset \R^{1,3}$ is $L^+=\{(T,X,Y,Z) \in L \, \mid \, T>0\}$. \end{enumerate} \end{defn} Clearly $L^+ \subset L^{0+} \subset L \subset \R^{1,3}$. As usual, we refer to vectors/points $p$ as \emph{timelike}, \emph{lightlike/null}, or \emph{spacelike} accordingly as $T^2 - X^2 - Y^2 - Z^2$ is positive, zero, or negative. \begin{defn} \label{Def:celestial_sphere} The \emph{(future) celestial sphere} $\S^+$ is either \begin{enumerate} \item the projectivisation of $L^+$, or \item the intersection of the future light cone $L^+$ with the plane $T=1$ in $\R^{1,3}$. \end{enumerate} \end{defn} In other words, the celestial sphere is the set of rays of $L^+$; projectivising identifies points along rays from the origin. Alternatively, we may take a subset of $L^+$ containing a single point from each ray; a standard subset given by intersecting with the 3-plane $T=1$. The two versions of $\S^+$ are related by the diffeomorphism sending each ray of $L^+$ to its point at $T=1$. We will need both versions; whenever we mention $\S^+$ we will specify which version we mean. Since the equations $T=1$ and $T^2 - X^2 - Y^2 - Z^2 = 0$ imply $X^2 + Y^2 + Z^2 = 1$, we see $\S^+$ is diffeomorphic to $S^2$. The isomorphism between $\HH$ and $\R^{1,3}$ is already given by \refeqn{spinvec_to_Hermitian}. Any Hermitian matrix can be uniquely written as \[ \begin{pmatrix} a & b+ci \\ b-ci & d \end{pmatrix} \quad \text{or} \quad \frac{1}{2} \begin{pmatrix} T+Z & X+Yi \\ X-Yi & T-Z \end{pmatrix} \] where $a,b,c,d$ or $T,X,Y,Z$ are real, and we map to Minkowski space accordingly. \begin{defn} \label{Def:g_H_to_R31} The map $\g$ from Hermitian matrices to Minkowski space is given by \[ \g \colon \HH \To \R^{1,3}, \quad \g \begin{pmatrix} a & b+ci \\ b-ci & d \end{pmatrix} = \left( a+d, 2b, 2c, a-d \right). \] \end{defn} Since \[ \g^{-1} (T,X,Y,Z) = \frac{1}{2} \begin{pmatrix} T+Z & X+iY \\ X-iY & T-Z \end{pmatrix}, \] it is clear that $\g$ is a linear isomorphism of vector spaces, and diffeomorphism of smooth manifolds. Under $\g$, determinant and trace become familiar expressions in Minkowski space. Our conventions perhaps produce some slightly unorthodox constants. \begin{lem} \label{Lem:det_trace_formulas} Suppose $S \in \HH$ and $\g(S) = (T,X,Y,Z)$. \begin{enumerate} \item $4 \det S = T^2 - X^2 - Y^2 - Z^2$. \item $\Trace S = T$. 
\end{enumerate} \end{lem}
\begin{proof} Immediate calculation. \end{proof}
\begin{lem} \label{Lem:det0_lightcone_correspondence} The isomorphism $\g \colon \HH \To \R^{1,3}$ restricts to bijections \[ \text{(i) } \HH_0 \To L, \quad \text{(ii) } \HH_0^{0+} \To L^{0+}, \quad \text{(iii) } \HH_0^+ \To L^+. \] \end{lem}
\begin{proof} For (i), \reflem{det_trace_formulas}(i) shows that $\det S = 0$ iff $T^2 - X^2 - Y^2 - Z^2 = 0$. So $S \in \HH_0$ iff $\g(S) \in L$. Suppose now that $S \in \HH_0$ and $\g(S) \in L$. By \reflem{det_trace_formulas}(ii), $\Trace S \geq 0$ iff $T \geq 0$, proving (ii). Similarly, $\Trace S > 0$ iff $T > 0$, proving (iii). \end{proof}
The positive light cone $L^+$ is diffeomorphic to $S^2 \times \R$; the slice at constant $T$ is an $S^2$ with equation $X^2 + Y^2 + Z^2 = T^2$. The non-negative light cone is obtained by adding a singular point at the origin, and is the topological cone on $S^2$. The light cone $L$ is a double cone formed by joining two copies of the non-negative cone at the singular point; or alternatively by taking $S^2 \times \R$ and collapsing $S^2 \times \{0\}$ to a point. So we immediately have the following.
\begin{lem} \label{Lem:Hermitian_topology} $\HH_0^+ \cong L^+$ is diffeomorphic to $S^2 \times \R$, $\HH_0^{0+} \cong L^{0+}$ is a cone on $S^2$, and $\HH_0 \cong L$ is a double cone on $S^2$. \qed \end{lem}
The action of $SL(2,\C)$ on $\HH$ naturally gives an action on $\R^{1,3}$, defining it to be equivariant under the linear diffeomorphism $\g$. This is a standard action.
\begin{defn} \label{Def:SL2C_on_R31} $SL(2,\C)$ acts on $\R^{1,3}$ by \[ A\cdot p = \g \left( A\cdot (\g^{-1} (p)) \right) \quad \text{for $A \in SL(2,\C)$ and $p \in \R^{1,3}$.} \] \end{defn}
Thus by definition $A\cdot \g(S) = \g (A\cdot S)$ for all $S \in \HH$, and explicitly, for $p = (T,X,Y,Z)$, \begin{equation} \label{Eqn:SL2C_action_on_R31} A\cdot (T,X,Y,Z) = \g \left( A\cdot \frac{1}{2} \begin{pmatrix} T+Z & X+iY \\ X-iY & T-Z \end{pmatrix} \right) = \frac{1}{2} \, \g \left( A \begin{pmatrix} T+Z & X+iY \\ X-iY & T-Z \end{pmatrix} A^* \right) \end{equation}
\begin{lem} \label{Lem:SL2C_action_on_light_cones} For any $A \in SL(2,\C)$, the action of $A$ on $\R^{1,3}$ is a linear map $T_A \colon \R^{1,3} \To \R^{1,3}$ which preserves $L$, $L^{0+}$ and $L^+$. \end{lem}
\begin{proof} We have already seen in \refeqn{linear_action_on_Hermitian} that, for a given $A \in SL(2,\C)$, the action of $A$ on $\HH$ is a linear map $\HH \To \HH$; since $\g$ and $\g^{-1}$ are linear, $T_A$ is also a linear map $\R^{1,3} \To \R^{1,3}$. By \reflem{SL2C_preerves_Hs}, the action of $A$ on $\HH$ preserves $\HH_0$, $\HH_0^{0+}$ and $\HH_0^+$; thus, applying the linear diffeomorphism $\g$ and \reflem{det0_lightcone_correspondence}, the action of $A$ on $\R^{1,3}$ preserves $L, L^{0+}$ and $L^+$. \end{proof}
The linear maps on $\R^{1,3}$ which preserve the Lorentzian inner product and preserve $L^+$ are precisely those in $O(1,3)^+$, i.e. those which are orthochronous (preserve the direction of time). The linear maps $T_A$ in fact lie in $SO(1,3)^+$, i.e. are also orientation-preserving.
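This can also be checked numerically. The following Python sketch is an illustration only; the helper names \texttt{g\_inv}, \texttt{g} and \texttt{T\_A} are ad hoc implementations of \refdef{g_H_to_R31} and \refdef{SL2C_on_R31}. It builds the matrix of $T_A$ for a pseudorandom $A \in SL(2,\C)$ and verifies that it preserves the Lorentzian inner product, has determinant $1$, and is orthochronous.
\begin{verbatim}
import numpy as np

# Minkowski inner product has matrix eta = diag(1, -1, -1, -1).
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def g_inv(p):
    """g^{-1}(T,X,Y,Z) = (1/2) [[T+Z, X+iY], [X-iY, T-Z]]."""
    T, X, Y, Z = p
    return 0.5 * np.array([[T + Z, X + 1j * Y],
                           [X - 1j * Y, T - Z]])

def g(S):
    """g([[a, b+ci], [b-ci, d]]) = (a+d, 2b, 2c, a-d)."""
    a, d = S[0, 0].real, S[1, 1].real
    b, c = S[0, 1].real, S[0, 1].imag
    return np.array([a + d, 2 * b, 2 * c, a - d])

def T_A(A):
    """Matrix of the linear map p -> g(A g^{-1}(p) A^*) on R^{1,3}."""
    return np.column_stack([g(A @ g_inv(e) @ A.conj().T) for e in np.eye(4)])

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = M / np.sqrt(complex(np.linalg.det(M)))   # rescale so that det A = 1
T = T_A(A)

assert np.allclose(T.T @ ETA @ T, ETA)       # preserves the Lorentzian form
assert np.isclose(np.linalg.det(T), 1.0)     # orientation-preserving
assert T[0, 0] > 0                           # orthochronous
\end{verbatim}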
We can observe this directly by noting that the generators of $SL(2,\C)$ \[ \begin{pmatrix} re^{i\theta} & 0 \\ 0 & \frac{1}{r} e^{-i\theta} \end{pmatrix}, \quad \begin{pmatrix} 1 & a+bi \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ a+bi & 1 \end{pmatrix} \] (where $a,b,r,\theta\in\R$) map to $T_A$ given respectively by \[ \begin{pmatrix} \frac{r^2+r^{-2}}{2} & 0 & 0 & \frac{r^2-r^{-2}}{2} \\ 0 & \cos 2\theta & -\sin 2\theta & 0 \\ 0 & \sin 2\theta & \cos 2\theta & 0 \\ \frac{r^2-r^{-2}}{2} & 0 & 0 & \frac{r^2+r^{-2}}{2} \end{pmatrix}, \quad \begin{pmatrix} 1+\frac{a^2+b^2}{2} & a & b & -\frac{a^2+b^2}{2} \\ a & 1 & 0 & -a \\ b & 0 & 1 & -b \\ \frac{a^2+b^2}{2} & a & b & 1-\frac{a^2+b^2}{2} \end{pmatrix}, \quad \begin{pmatrix} 1+\frac{a^2+b^2}{2} & a & -b & \frac{a^2+b^2}{2} \\ a & 1 & 0 & a \\ -b & 0 & 1 & -b \\ -\frac{a^2+b^2}{2} & -a & b & 1-\frac{a^2+b^2}{2} \end{pmatrix} \] which all have determinant $1$.
\subsubsection{Putting $\f$ and $\g$ together} \label{Sec:f_compose_g}
We now compose $\f$ and $\g$, \[ \C^2 \stackrel{\f}{\To} \HH \stackrel{\g}{\To} \R^{1,3}. \] This composition sends a spinor $\kappa$ to the point $(T,X,Y,Z) \in \R^{1,3}$ such that \begin{equation} \label{Eqn:Pauli_Hermitian} \kappa \, \kappa^* = \frac{1}{2} \left( T \sigma_T + X \sigma_X + Y \sigma_Y + Z \sigma_Z \right). \end{equation} We consider some properties of this composition, and perform some calculations.
\begin{lem} \label{Lem:gof_properties} The map $\g \circ \f \colon \C^2 \To \R^{1,3}$ is smooth and has the following properties. \begin{enumerate} \item $\g \circ \f (\kappa) = 0$ precisely when $\kappa = 0$. \item The image of $\g \circ \f$ is $L^{0+}$. \item $\g \circ \f$ restricts to a surjective map $\C_\times^2 \To L^+$. \item $\g \circ \f(\kappa) = \g \circ \f(\kappa')$ iff $\kappa = e^{i\theta} \kappa'$ for some real $\theta$. \item The actions of $SL(2,\C)$ on $\C^2$ and $\R^{1,3}$ are equivariant with respect to $\g \circ \f$. These actions restrict to actions on $\C_\times^2$ and $L, L^+, L^{0+}$ which are also appropriately equivariant. \end{enumerate} \end{lem}
\begin{proof} Immediate from \reflem{f_surjectivity}, \reflem{when_f_equal}, \reflem{restricted_actions_on_H} and \reflem{det0_lightcone_correspondence}. \end{proof}
We can calculate $\g \circ \f$ explicitly, and prove some of its properties. For the rest of this subsection, let $\kappa = (\xi, \eta) = (a+bi,c+di) \in \C^2$, where $a,b,c,d \in \R$.
\begin{lem} \label{Lem:spin_vector_to_TXYZ} Let $\g \circ \f(\kappa) = (T,X,Y,Z)$. Then \begin{align*} T &= |\xi|^2 + |\eta|^2 = a^2 + b^2 + c^2 + d^2 \\ X &= 2 \Re \left( \xi \overline{\eta} \right) = 2 \, |\eta|^2 \, \Re (\xi/\eta) = 2(ac+bd) \\ Y &= 2 \Im \left( \xi \overline{\eta} \right) = 2 \, |\eta|^2 \, \Im (\xi/\eta) = 2(bc-ad) \\ Z &= |\xi|^2 - |\eta|^2 = a^2+b^2-c^2-d^2. \end{align*} \end{lem}
\begin{proof} From \refeqn{f_formula} we have \begin{equation} \label{Eqn:f_kappa_in_real_coords} \f(\kappa) = \begin{pmatrix} \xi \overline{\xi} & \xi \overline{\eta} \\ \eta \overline{\xi} & \eta \overline{\eta} \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & (ac+bd)+(bc-ad)i \\ (ac+bd)-(bc-ad)i & c^2 + d^2 \end{pmatrix} \end{equation} Applying the definition of $\g$ from \refdef{g_H_to_R31} and the fact $\overline{\eta} = \eta^{-1} \, |\eta|^2$ then gives the claim. \end{proof}
We already noted in \refsec{map_f} that $\f$ is the cone on the Hopf fibration. In Minkowski space, the picture is perhaps a little more intuitive, and we can add some explicit details.
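Before turning to this picture, here is a quick numerical cross-check of the above formulas (an illustration only; the helpers \texttt{f}, \texttt{g} and \texttt{gof\_formula} below are ad hoc implementations of $\f$, $\g$ and the formulas of \reflem{spin_vector_to_TXYZ}): the matrix computation of $\g \circ \f$ agrees with these formulas, and its values lie on $L^{0+}$, with $T$ equal to the squared Euclidean norm of the spinor.
\begin{verbatim}
import numpy as np

def f(kappa):
    """f(kappa) = kappa kappa^*, a 2x2 Hermitian matrix."""
    kappa = kappa.reshape(2, 1)
    return kappa @ kappa.conj().T

def g(S):
    """g([[a, b+ci], [b-ci, d]]) = (a+d, 2b, 2c, a-d)."""
    a, d = S[0, 0].real, S[1, 1].real
    b, c = S[0, 1].real, S[0, 1].imag
    return np.array([a + d, 2 * b, 2 * c, a - d])

def gof_formula(xi, eta):
    """Closed-form coordinates of g o f, as in Lemma spin_vector_to_TXYZ."""
    return np.array([abs(xi)**2 + abs(eta)**2,
                     2 * (xi * eta.conjugate()).real,
                     2 * (xi * eta.conjugate()).imag,
                     abs(xi)**2 - abs(eta)**2])

rng = np.random.default_rng(1)
for _ in range(1000):
    kappa = rng.normal(size=2) + 1j * rng.normal(size=2)
    T, X, Y, Z = p = g(f(kappa))
    assert np.allclose(p, gof_formula(*kappa))         # matches the formulas
    assert np.isclose(T**2 - X**2 - Y**2 - Z**2, 0.0)  # lies on the light cone
    assert np.isclose(T, np.linalg.norm(kappa)**2)     # on the slice T = r^2
\end{verbatim}
The next lemma makes the geometric picture precise.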
\begin{lem} \label{Lem:C2_to_R31_Hopf_fibrations} Let $S^3_r = \{ \kappa = (\xi,\eta) \in \C^2 \, \mid \, |\xi|^2 + |\eta|^2 = r^2 \}$ be the 3-sphere of radius $r>0$ in $\C^2 \cong \R^4$, and let $S^3 = S^3_1$. \begin{enumerate} \item The restriction of $\g \circ \f$ to each $S^3_r$ yields a surjective map from $S^3_r$ onto the 2-sphere $L^+ \cap \{ T=r^2 \} = r^2 \S^+ \cong S^2$ which is the Hopf fibration. In particular, the restriction to $S^3$ yields a Hopf fibration onto the celestial sphere $S^3 \To \S^+ \cong S^2$. \item The map $\g \circ \f \colon \C^2 \To L^{0+}$ is the cone on the Hopf fibration. \end{enumerate} \end{lem}
In (i) we regard $\S^+$ as $L^+ \cap \{T=1\}$, i.e. \refdef{celestial_sphere}(ii).
\begin{proof} In \refsec{map_f} we saw that, since $\f(\kappa) = \f(\kappa')$ iff $\kappa = e^{i \theta} \kappa'$, $\f$ is a smooth map on each $S^3_r$ collapsing each fibre of the Hopf fibration to a point, so is the Hopf fibration. As $\g$ is a diffeomorphism, the same is true for $\g \circ \f$. By \reflem{spin_vector_to_TXYZ}, $\g \circ \f (\xi, \eta)$ has $T$-coordinate $|\xi|^2 + |\eta|^2 = r^2$, and by \reflem{gof_properties}(iii), $\g \circ \f (\C^2_\times) = L^{+}$. So the image of $S^3_r$ under $\g \circ \f$ is the intersection of $L^{+}$ with $T=r^2$, as claimed.
Thus, the family of $3$-spheres $S^3_r$ foliating $\C^2_\times$ is mapped under $\g \circ \f$ by Hopf fibrations to the family of $2$-spheres $L^+ \cap \{T=r^2\}$ foliating $L^+$. See \reffig{cone_on_Hopf}. Hence we can regard the restriction of $\g \circ \f$ to $\C_\times^2$ as the product of the Hopf fibration with the identity map, $\C^2_\times \cong S^3 \times \R \To S^2 \times \R \cong L^+$.
\begin{center} \begin{tikzpicture} \draw[green] (0,0) ellipse (2cm and 0.4cm); \fill[white] (-2,0)--(2,0)--(2,0.5)--(-2,0.5); \draw[red] (0,0) ellipse (1cm and 0.2cm); \fill[white] (-1,0)--(1,0)--(1,0.5)--(-1,0.5); \draw[blue] (0,0) ellipse (0.5cm and 0.1cm); \fill[white] (-0.5,0)--(0.5,0)--(0.5,0.5)--(-0.5,0.5); \draw[cyan] (0,0) ellipse (0.25cm and 0.05cm); \fill[white] (-0.25,0)--(0.25,0)--(0.25,0.5)--(-0.25,0.5); \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \draw[green] (0,0) circle (2cm); \draw[dashed,green] (0,0) ellipse (2cm and 0.4cm); \shade[ball color = red!80, opacity = 0.1] (0,0) circle (1cm); \draw[red] (0,0) circle (1cm); \draw[dashed,red] (0,0) ellipse (1cm and 0.2cm); \shade[ball color = blue!160, opacity = 0.1] (0,0) circle (0.5cm); \draw[blue] (0,0) circle (0.5cm); \draw[dashed,blue] (0,0) ellipse (0.5cm and 0.1cm); \shade[ball color = cyan!320, opacity = 0.1] (0,0) circle (0.25cm); \draw[dashed,cyan] (0,0) ellipse (0.25cm and 0.05cm); \draw[cyan] (0,0) circle (0.25cm); \node[black] at (2,1.5) {$S_r^3$}; \draw[green] (6,1) ellipse (2cm and 0.3cm); \draw[red] (6,0) ellipse (1cm and 0.15cm); \draw[blue] (6,-0.5) ellipse (0.5cm and 0.075cm); \draw[cyan] (6,-0.75) ellipse (0.25cm and 0.0325cm); \draw (4,1)--(6,-1)--(8,1); \node at (3.5,0){$\stackrel{\g\circ\f}{\To}$}; \node at (8.5,1.5){$L^+\cap \{T=r^2\}$}; \end{tikzpicture} \captionof{figure}{The map $\g \circ \f$ as the cone on the Hopf fibration (drawn one dimension down).} \label{Fig:cone_on_Hopf} \end{center}
Adding the point $0$ to $\C^2_\times$ and $L^+$, since $\g \circ \f (0)= 0$, $\g \circ \f$ is the cone on the Hopf fibration. \end{proof}
The following computation will be useful when we consider lines and planes containing $\g \circ \f (\kappa)$.
\begin{lem} \label{Lem:gof_celestial_sphere} For any $\kappa \in \C_\times^2$, the line $\R (\g \circ \f (\kappa))$ intersects $\S^+$ in the unique point \[ \left( 1, \frac{2(ac+bd)}{a^2+b^2+c^2+d^2}, \frac{2(bc-ad)}{a^2+b^2+c^2+d^2}, \frac{a^2+b^2-c^2-d^2}{a^2+b^2+c^2+d^2} \right). \] \end{lem}
Here we regard $\S^+$ as $L^+ \cap \{T=1\}$, i.e. \refdef{celestial_sphere}(ii).
\begin{proof} This follows immediately from \reflem{spin_vector_to_TXYZ}, scaling $\g \circ \f(\kappa)$ to have $T$-coordinate $1$. \end{proof}
\subsubsection{The Hopf fibration and stereographic projection} \label{Sec:Hopf}
We have seen the Hopf fibration in $\g \circ \f$; we can also describe this directly and explicitly. Perhaps the most standard definition of the Hopf fibration is as follows.
\begin{defn} The \emph{Hopf fibration} is the map \[ \text{Hopf} \colon S^3 \To S^2 \cong \CP^1, \quad (\xi, \eta) \mapsto \frac{\xi}{\eta}. \] \end{defn}
Here we regard $S^3$ as $\{(\xi, \eta) \; \mid \; |\xi|^2 + |\eta|^2 = 1 \} \subset \C^2$, and $\CP^1 = \C \cup \{\infty\} $ as $S^2$. We can translate from the Riemann sphere to the unit 2-sphere in $\R^3$ by stereographic projection; again, perhaps the most standard definition is as follows. It is the map obtained from projecting the $xy$-plane in $\R^3$, viewed as $\C$, to the unit sphere, as in \reffig{1}. It extends to a map from $\CP^1 = \C \cup \{\infty\}$.
\begin{defn} \label{Def:stereographic_projection} \emph{Stereographic projection} is the map \[ \text{Stereo} \colon \CP^1 \To S^2, \quad a+bi \mapsto \left( \frac{2a}{1+a^2+b^2}, \frac{2b}{1+a^2+b^2}, \frac{-1+a^2+b^2}{1+a^2+b^2} \right), \quad \infty \mapsto (0,0,1). \] \end{defn}
If we compute the Hopf fibration from the standard $S^3 \subset \C^2$ to the standard Euclidean $S^2 \subset \R^3$ using stereographic projection, we obtain expressions we have seen before!
\begin{lem} \label{Lem:gof_Hopf} Let $\pi_{XYZ} \colon \R^{1,3} \To \R^3$ be the projection onto the $XYZ$ 3-plane in Minkowski space. Then the composition $\Stereo \circ \Hopf \colon S^3 \To S^2$ is given by \[ \Stereo \circ \Hopf = \pi_{XYZ} \circ \g \circ \f|_{S^3}. \] \end{lem}
Here the projection $\pi_{XYZ}$ simply maps $(T,X,Y,Z) \mapsto (X,Y,Z)$. In other words, the $X,Y,Z$ coordinates of $\g \circ \f$ are precisely the Hopf fibration computed with stereographic projection.
\begin{proof} Let $(\xi, \eta) = (a+bi, c+di) \in S^3$ where $a,b,c,d \in \R$. We compute \[ \Hopf (\xi,\eta) = \frac{a+bi}{c+di} = \frac{ac+bd}{c^2+d^2} + i \frac{bc-ad}{c^2+d^2} \] and then applying $\Stereo$ yields \[ \left( \frac{ 2 \left( \frac{ac+bd}{c^2+d^2} \right) }{1 + \left( \frac{ac+bd}{c^2+d^2} \right)^2 + \left( \frac{bc-ad}{c^2+d^2} \right)^2 }, \; \frac{ 2 \left( \frac{bc-ad}{c^2+d^2} \right) }{1 + \left( \frac{ac+bd}{c^2+d^2} \right)^2 + \left( \frac{bc-ad}{c^2+d^2} \right)^2 }, \; \frac{ -1 + \left( \frac{ac+bd}{c^2+d^2} \right)^2 + \left( \frac{bc-ad}{c^2+d^2} \right)^2 }{ 1 + \left( \frac{ac+bd}{c^2+d^2} \right)^2 + \left( \frac{bc-ad}{c^2+d^2} \right)^2 } \right) \] which, fortunately enough, simplifies to \[ \frac{1}{a^2+b^2+c^2+d^2} \left( 2(ac+bd), \; 2 (bc-ad), \; a^2+b^2 - c^2 - d^2 \right). \] Since $a^2+b^2+c^2+d^2 = |\xi|^2 + |\eta|^2 = 1$, comparison with \reflem{spin_vector_to_TXYZ} gives the desired result.
\end{proof} \subsubsection{Inner products on spinors and Minkowski space} \label{Sec:inner_products_spinors-Minkowski} Two spinors $\kappa, \kappa' \in \C^2$ have an inner product $\{\kappa, \kappa'\}$; we also now have the two points in the light cone $\g \circ \f (\kappa), \, \g \circ \f (\kappa')$, on which we can consider the Lorentzian inner product $\langle \g \circ \f(\kappa), \, \g \circ \f(\kappa') \rangle$. If one of $\kappa,\kappa'$ is a real multiple of the other, then $\{\kappa, \kappa'\} = 0$, and equally, $\g \circ \f(\kappa)$ and $\g \circ \f(\kappa')$ are proportional lightlike vectors, so $\langle \g \circ \f(\kappa), \g \circ \f (\kappa') \rangle = 0$. In fact, we have the following. Compare \cite[lem. 4.5]{Penner12}. \begin{prop} \label{Prop:complex_Minkowski_inner_products} For $\kappa, \kappa' \in \C^2_\times$, \[ 2 \left| \left\{ \kappa, \kappa' \right\} \right|^2 = \langle \g \circ \f (\kappa), \, \g \circ \f(\kappa') \rangle. \] \end{prop} Let $\kappa = (\xi, \eta)$, $\kappa' = (\xi', \eta')$, and $\xi = a+bi,\ \eta = c+di,\ \xi' = a'+b'i,\ \eta' = c'+d'i$ where $a,b,c,d,a',b',c',d'$ are all real. It is convenient for the proof to think of $\kappa, \kappa'$ as real vectors $(a,b,c,d)$, $(a',b',c',d')$, and consider the $2 \times 4$ matrix \[ M = \begin{pmatrix} a & b & c & d \\ a' & b' & c' & d' \end{pmatrix} \] with those vectors as its rows. We denote by $M_{ij}$ the submatrix of $M$ formed from its $i$ and $j$ columns. Thus, for instance, \[ M_{34} = \begin{pmatrix} c & d \\ c' & d' \end{pmatrix}, \quad \det M_{13} = ac' - ca', \quad \text{etc.} \] It is then true that \begin{equation} \label{Eqn:Plucker_24} \det M_{13} \det M_{24} = \det M_{12} \det M_{34} + \det M_{14} \det M_{23}. \end{equation} This can be checked directly; it is a Pl\"{u}cker relation, which arises in the theory of Grassmannians (see e.g. \cite[ch. 1.5]{Griffiths_Harris94}). We will use it later in \refsec{3d_hyp_geom} to prove our Ptolemy equation. The strategy of the proof of \refprop{complex_Minkowski_inner_products} is to write all quantities in terms of the $M_{ij}$. \begin{lem} \label{Lem:complex_inner_product_subdeterminants} With $\kappa,\kappa'$ as above, \[ \left\{\kappa,\kappa'\right\} = \left( \det M_{13} - \det M_{24} \right) + \left( \det M_{14} + \det M_{23} \right) i. \] \end{lem} This lemma is really a general fact about $2 \times 2$ complex matrices $N$: if we make its entries into $1 \times 2$ real matrices, and obtain a $2 \times 4$ real matrix $M$, then $\det N$ is given by the right hand side above. \begin{proof} \begin{align*} \det \begin{pmatrix} a+bi & a'+b'i \\ c+di & c'+d'i \end{pmatrix} &= (a+bi)(c'+d' i)-(a'+b'i)(c+di) \\ &= \left( ac' - ca' + db'-bd' \right) + \left( ad'-da' + bc'-cb' \right)i, \end{align*} which is the desired combination of determinants. \end{proof} \begin{lem} \label{Lem:Minkowski_inner_product_subdeterminants} With $\kappa,\kappa'$ as above, \[ \frac{1}{2} \langle \g \circ \f (\kappa), \, \g \circ \f (\kappa') \rangle = \det M_{13}^2 + \det M_{14}^2 + \det M_{23}^2 + \det M_{24}^2 - 2 \det M_{12} \det M_{34}. 
\] \end{lem} \begin{proof} Using \reflem{spin_vector_to_TXYZ} we have \begin{align*} \g \circ \f(\kappa) &= \left( a^2 + b^2 + c^2 + d^2, \, 2(ac+bd), \, 2(bc-ad), \, a^2 + b^2 - c^2 - d^2 \right) \\ \g \circ \f(\kappa') &= \left( a'^2 + b'^2 + c'^2 + d'^2, \, 2(a'c'+b'd'), \, 2(b'c'-a'd'), \, a'^2 + b'^2 - c'^2 - d'^2 \right) \end{align*} so applying $\langle \cdot, \cdot \rangle$ yields $\langle \g \circ \f (\kappa), \, \g \circ \f (\kappa') \rangle$ as \begin{align*} \left( a^2 + b^2 + c^2 + d^2 \right) \left( a'^2 + b'^2 + c'^2 + d'^2 \right) & - 4 (ac+bd)(a'c'+b'd') - 4 (bc-ad)(b'c'-a'd') \\ &- \left(a^2 + b^2 - c^2 - d^2 \right) \left( a'^2 + b'^2 - c'^2 - d'^2 \right) \end{align*} This simplifies to \[ 2(ac'-ca')^2 + 2(ad'-da')^2 + 2(bc'-cb')^2 + 2(bd'-db')^2 - 4(ab'-ba')(cd'-dc') \] giving the desired equality. \end{proof} \begin{proof}[Proof of \refprop{complex_Minkowski_inner_products}] By \reflem{complex_inner_product_subdeterminants} and \reflem{Minkowski_inner_product_subdeterminants}, it remains to show that the following equation holds: \[ \left( \det M_{13} - \det M_{24} \right)^2 + \left( \det M_{14} + \det M_{23} \right)^2 = \det M_{13}^2 + \det M_{14}^2 + \det M_{23}^2 + \det M_{24}^2 - 2 \det M_{12} \det M_{34}. \] Upon expanding and simplifying, this reduces to the Pl\"{u}cker equation \refeqn{Plucker_24}. \end{proof} \subsection{Flags} \label{Sec:flags} We now pick up the idea, left off in \refsec{derivatives_of_f}, of defining a flag using the map $\f$ and its derivative in a certain direction $\ZZ(\kappa)$ at each point $\kappa \in \C^2_\times$. \begin{defn} A \emph{flag} in a vector space $V$ is an ascending sequence of subspaces \[ V_1 \subset \cdots \subset V_k. \] Letting $d_i = \dim V_i$, the $k$-tuple $(d_1, \ldots, d_k)$ is called the \emph{signature} of the flag. \end{defn} We will use the map $\f$ to span a 1-dimensional subspace of $\HH$, and then use its derivative as described by $\ZZ$ to span a 2-plane. Thus, the flag involved will be \[ \R \f(\kappa) \subset \R \f(\kappa) \oplus \R D_\kappa \f(\ZZ(\kappa)), \] and this assignment of flags to spin vectors turns out to be equivariant under the action of $SL(2,\C)$. Such flags are flags in $\HH$, but as seen in \refsec{hermitian_to_minkowski}, there is a linear isomorphism $\g$ between $\HH$ and $\R^{1,3}$ preserving all relevant structure, so these flags can also be considered in $\R^{1,3}$, after applying $\g$ appropriately. The flags we consider all have signature $(1,2)$, but not every such flag arises by this construction. There are certain geometric constraints on the subspaces, relating to the \emph{light cone} $L$ of \emph{null vectors} in $\R^{1,3}$, or the space of singular Hermitian matrices $\HH_0$. Moreover, in order to obtain our desired bijections, we need further structure in our flags of a distinguished point, and orientations. Hence we call the flag structures we need \emph{pointed oriented null flags}. To most readers, we suspect geometric constraints are more easily understood in terms of the light cone in Minkowski space, than in terms of singular Hermitian matrices. On the other hand, the map $\f$ maps directly into Hermitian matrices, while the map $\g$ then applies a further linear transformation, so the algebra of flags is simpler in terms of Hermitian matrices. Thus, we discuss flags both in $\HH$ and $\R^{1,3}$, but prefer $\HH$ for simpler algebra, and $\R^{1,3}$ for geometric intuition. We will define flags in $\HH$ and $\R^{1,3}$ simultaneously. 
In \refsec{Z} we introduce the map $\ZZ$, needed for defining the flag direction. In \refsec{PNF} we introduce \emph{pointed null flags}, with ``null'' having its usual meaning in $\R^{1,3}$, and then in \refsec{PONF} we introduce \emph{pointed oriented null flags}, the precise type of flag structure we need, which also have some orientation in their structure. In \refsec{describing_flags} we develop notation for describing flags. Then in \refsec{map_F} we can define the map $\F$ from spin vectors to flags. In \refsec{SL2c_action_on_flags_HH} we discuss the $SL(2,\C)$ action on flags, and in \refsec{equivariance_of_F} prove equivariance of the action. This discussion of the $SL(2,\C)$ action is in terms of Hermitian matrices $\HH$, so in \refsec{flags_Minkowski_space} we translate these results into Minkowski space. In \refsec{calculating_flags_Minkowski} we explicitly calculate details of flags in Minkowski space corresponding to spin vectors, and in \refsec{rotating_flags} we consider rotating them. This allows us to show in \refsec{F_surjectivity} that the maps $\F$ and $\G \circ \F$ are surjective, more precisely 2--1 maps.
\subsubsection{The map $\ZZ$} \label{Sec:Z}
\begin{defn} \label{Def:Z_C2_to_C2_and_J} Define $\ZZ \colon \C^2 \To \C^2$ by \[ \ZZ \begin{pmatrix}\alpha\\ \beta\end{pmatrix} = \begin{pmatrix} \overline{\beta} \, i\\ \, -\overline{\alpha} \, i \end{pmatrix} \quad \text{i.e.} \quad \ZZ (\kappa) = J \, \overline{\kappa} \quad \text{where} \quad J = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}. \] \end{defn}
With this definition of $\ZZ$, using \refeqn{derivative_formula}, we obtain \begin{equation} \label{Eqn:derivative_flag_dirn} D_\kappa \f(\ZZ(\kappa)) = \kappa \ZZ(\kappa)^* + \ZZ(\kappa) \kappa^* = \kappa \kappa^T J + J \overline{\kappa} \kappa^*. \end{equation} The following observations are significant in the sequel and help to motivate the definition of $\ZZ$.
\begin{lem} \label{Lem:bilinear_Z_negative_imaginary} \label{Lem:Z_forms_basis} For any $\kappa \in \C^2_\times$, \begin{enumerate} \item $\{\kappa, \ZZ(\kappa)\}$ is negative imaginary; \item $\kappa$ and $\ZZ(\kappa)$ form a basis for $\C^2$ as a complex vector space. \end{enumerate} \end{lem}
\begin{proof} Let $\kappa=(\xi,\eta) \in \C^2_\times$. Then from \refdef{bilinear_form_defn}, \[ \{\kappa,\ZZ(\kappa)\}= \det \begin{pmatrix} \xi & \overline{\eta} \, i \\ \eta & - \overline{\xi} \, i \end{pmatrix} = \xi(-\overline{\xi}i)-\eta(\overline{\eta}i) =- \left( |\xi|^2+|\eta|^2 \right) i, \] which is negative imaginary. Since this determinant is nonzero, the matrix columns are linearly independent over $\C$. \end{proof}
For another, possibly motivating, perspective on $\ZZ$, identify $(\xi,\eta)=(a+bi,c+di)$ with the quaternion $q=a+b\pmb{i}+c\pmb{j}+d\pmb{k}$, where $1, \pmb{i}, \pmb{j}, \pmb{k}$ are the elementary quaternions. Then, as a map on quaternions, $\ZZ$ is given by \[ \ZZ(q)=-\pmb{k} q=-\pmb{k}(a+b\pmb{i}+c\pmb{j}+d\pmb{k})=(d+c\pmb{i}-b\pmb{j}-a\pmb{k})\leftrightarrow(d+ci,-b-ai). \] Thus, in the Euclidean metric on $\C^2 \cong \R^4$, $\ZZ (q)$ is orthogonal to $q$. On the unit $S^3$ centred at the origin in the quaternions, the tangent space to $S^3$ at $\kappa$ has basis $\pmb{i} \kappa, \pmb{j} \kappa, \pmb{k} \kappa$. The $\pmb{i}\kappa$ direction is the direction of the fibre of the Hopf fibration, and $\f$ is constant in that direction. This perhaps motivates why we take the $\pmb{k} \kappa$ direction. (The choice of $-$ rather than $+$, and $\pmb{k}$ rather than $\pmb{j}$, is somewhat arbitrary.)
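As a simple example, take $\kappa = (1,0)$. Then $\ZZ(\kappa) = (0,-i)$, so
\[
\{\kappa, \ZZ(\kappa)\} = \det \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix} = -i,
\]
which is negative imaginary, in accordance with \reflem{bilinear_Z_negative_imaginary}, and by \refeqn{derivative_flag_dirn}
\[
D_\kappa \f(\ZZ(\kappa)) = \kappa \kappa^T J + J \overline{\kappa} \kappa^*
= \begin{pmatrix} 0 & i \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ -i & 0 \end{pmatrix}
= J,
\]
which is Hermitian but not a real multiple of $\f(\kappa) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$.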
\subsubsection{Pointed null flags} \label{Sec:PNF}
All the flags we consider will be of signature $(1,2)$ in $\HH \cong \R^{1,3}$. By \reflem{det0_lightcone_correspondence}, the subset $\HH_0^+ \subset \HH$ corresponds under $\g$ to the positive light cone $L^+ \subset \R^{1,3}$. Vectors on $L^+$ are null, hence the name.
\begin{defn} \label{Def:null_flag_in_Minkowski} A \emph{null flag} in $\R^{1,3}$ (resp. $\HH$) is a flag of signature $(1,2)$ in $\R^{1,3}$ (resp. $\HH$) \[ V_1 \subset V_2 \] where \begin{enumerate} \item $V_1$ is spanned by some $p \in L^+$ (resp. $S \in \HH_0^+$). \item $V_2$ is spanned by the same $p$ (resp. $S$), together with some $v \in T_p L^+$ (resp. $U \in T_S \HH_0^+$). \end{enumerate} \end{defn}
Thus in a null flag $V_1 \subset V_2$ in $\R^{1,3}$, the first space $V_1$ is a line in the light cone, and the second space $V_2$ is a 2-plane tangent to the light cone. Although $p$ in the above definition is null (indeed, has future-pointing lightlike position vector), the tangent vector $v$ to $L^+$ at $p$ is not null. See \reffig{flag}.
The definitions of null flags in $\HH$ and $\R^{1,3}$ correspond under the isomorphism $\g$: $V_1 \subset V_2$ is a null flag in $\HH$ iff $\g(V_1) \subset \g(V_2)$ is a null flag in $\R^{1,3}$. Thus $\g$ provides a bijection between null flags in $\HH$ and null flags in $\R^{1,3}$.
From a spinor $\kappa$, we already have a point $\f(\kappa) \in \HH_0^+$ or $\g \circ \f(\kappa) \in L^+$, so our flags come with a distinguished basepoint, as in the following definition.
\begin{defn} \label{Def:pointed_null_flag} A \emph{pointed null flag} in $\R^{1,3}$ (resp. $\HH$) is a point $p \in L^+$ (resp. $S \in \HH_0^+$) together with a null flag $\R p \subset V$ (resp. $\R S \subset V$). We denote the set of pointed null flags in $\R^{1,3}$ (resp. $\HH$) by $\mathcal{F_P}(\R^{1,3})$ (resp. $\mathcal{F_P}(\HH)$ ). \end{defn}
When the distinction between $\HH$ and $\R^{1,3}$ is unimportant we simply write $\mathcal{F_P}$. We denote a pointed null flag as above in \begin{itemize} \item $\R^{1,3}$ by $(p,V)$ or $[[p,v]]$, where $v \in T_p L^+$ and $V$ is spanned by $p$ and $v$; \item $\HH$ by $(S, V)$ or $[[S,U]]$, where $U \in T_S \HH_0^+$ and $V$ is spanned by $S$ and $U$. \end{itemize}
All the notions in $\HH$ and $\R^{1,3}$ in the definition of pointed null flags correspond under the isomorphism $\g$: $(S,V)\in\mathcal{F_P}(\HH)$ iff $(\g(S), \g(V))\in\mathcal{F_P}(\R^{1,3})$. So $\g$ yields a bijection $\mathcal{F_P}(\HH) \To \mathcal{F_P}(\R^{1,3})$, given by $(S,V) \mapsto (\g(S),\g(V))$ or $[[S,U]] \mapsto [[\g(S), \g(U)]]$.
The notation $(p,V)$ is unique: if $(p,V) = (p',V')$ then $p=p'$ and $V=V'$. However, the same is not true for the notation $[[p,v]]$: a given pointed null flag may be described by different pairs $p,v$. The following lemma clarifies when two descriptions are equal.
\begin{lem} \label{Lem:characterise_equal_PNFs} Suppose $p,p' \in L^+$ and $v,v' \in \R^{1,3}$. The following are equivalent: \begin{enumerate} \item $[[p,v]]$ and $[[p',v']]$ describe the same pointed null flag. \item $p=p'$, and $v,v'$ both lie in $T_p L^+$, and the real spans of $(p,v)$ and $(p',v')$ are 2-dimensional and equal. \item $p=p'$, and $v,v'$ both lie in $T_p L^+$, and $v,v'$ are not real multiples of $p$, and there exist real numbers $a,b,c$, not all zero, such that $ap+bv+cv'=0$.
\end{enumerate} \end{lem}
A similar statement applies for pointed null flags in $\HH$, if we replace $p,p' \in L^+$ with $S,S' \in \HH_0^+$, $v,v' \in \R^{1,3}$ with $U,U' \in \HH$, and $T_p L^+$ with $T_S \HH_0^+$.
\begin{proof} That (i) is equivalent to (ii) is immediate from the definition: the points $p,p'$ must be equal, and the planes spanned by $(p,v)$ and $(p',v')$ must be tangent to $L^+$ (resp. $\HH_0^+$) and equal. That (ii) is equivalent to (iii) is elementary linear algebra: $(p,v)$ and $(p,v')$ span equal 2-dimensional planes iff $(p,v)$ and $(p,v')$ are linearly independent but $(p,v,v')$ is linearly dependent. \end{proof}
\subsubsection{Pointed oriented null flags} \label{Sec:PONF}
In general, an \emph{oriented flag} is a flag \[ \{0\} = V_0 \subset V_1 \subset \cdots \subset V_k \] where each quotient $V_i/V_{i-1}$, for $i=1, \ldots, k$, is endowed with an orientation. Equivalently, these orientations amount to orienting $V_1$, and then orienting each quotient $V_2/V_1, V_3/V_2, \ldots, V_k/V_{k-1}$.
We regard an \emph{orientation} of a vector space $V$, in standard fashion, as an equivalence class of ordered bases of $V$, where two ordered bases are equivalent when they are related by a linear map with positive determinant.
A pointed null flag $(p,V)\in\mathcal{F_P}$ already naturally contains some orientation data: the 1-dimensional space $\R p$ can be oriented in the direction of $p$. Thus it remains to orient the quotient $V/\R p$, as per the following definition.
\begin{defn} \label{Def:pointed_oriented_null_flag} A \emph{pointed oriented null flag} in $\R^{1,3}$ is the data $(p, V, o)$ where: \begin{enumerate} \item $(p,V)\in\mathcal{F_P}(\R^{1,3})$, with $\R p$ oriented in the direction of $p$; \item $o$ is an orientation of $V/\R p$. \end{enumerate} The set of pointed oriented null flags in $\R^{1,3}$ is denoted $\mathcal{F_P^O}(\R^{1,3})$. \end{defn}
Similarly, a pointed oriented null flag in $\HH$ consists of $(S, V, o)$, where $(S,V) \in \mathcal{F_P}(\HH)$, $\R S$ is oriented in the direction of $S$, and $o$ is an orientation of $V/\R S$. Since $(S,V)$ is a pointed null flag, $S \in \HH_0^+$, and $V$ is a 2-dimensional subspace containing $S$ and tangent to $\HH_0^+$. The set of pointed oriented null flags in $\HH$ is denoted $\mathcal{F_P^O}(\HH)$. When the distinction between $\HH$ and $\R^{1,3}$ is unimportant we simply write $\mathcal{F_P^O}$.
Pointed oriented null flags are the structure we need to describe spinors. Henceforth we will simply refer to them as \emph{flags}.
The space $\mathcal{F_P^O}(\R^{1,3})$ of pointed oriented null flags is 4-dimensional. To see this, note that $p$ lies in the 3-dimensional positive light cone $L^+$. The tangent space $T_p L^+$ is 3-dimensional and contains $\R p$ as a subspace. The set of relatively oriented 2-planes $V$ in the 3-dimensional vector space $T_p L^+$ containing $\R p$ is 1-dimensional; there is an $S^1$ worth of such 2-planes, rotating around $\R p$. In fact, we will see later in \refsec{topology_of_spaces} that $\mathcal{F_P^O}$ naturally has the topology of $\textnormal{UT}S^2 \times \R$, the product of the unit tangent bundle of $S^2$ with $\R$.
Just as for pointed null flags, there is a bijection $\mathcal{F_P^O}(\HH) \To \mathcal{F_P^O}(\R^{1,3})$, as we now show. Let $(S,V,o) \in \mathcal{F_P^O}(\HH)$, consisting of subspaces $\R S \subset V$. Just as for pointed null flags, we can directly apply $\g$ to $S \in \HH_0^+$ and $V \subset \HH$ to obtain $\g(S)$ and $\g(V)$.
We can also apply $\g$ to the orientation $o$ as follows. The orientation $o$ is represented by an equivalence class of ordered bases of $V/\R S$. (As $V/\R S$ is 1-dimensional, such an ordered basis consists of just one element.) The isomorphism $\g \colon \HH \To \R^{1,3}$ restricts to isomorphisms $V \To \g(V)$ and $\R S \To \R \g(S)$, and hence provides an isomorphism of quotient spaces $\underline{\g} \colon V / \R S \To \g(V) / \R \g(S)$. Taking $\underline{B}$ to be an ordered basis of $V/\R S$ representing $o$, we define $\g(o)$ to be the orientation represented by $\g(\underline{B})$.
\begin{defn} \label{Def:G} The map $\G$ from (pointed oriented null) flags in $\HH$, to (pointed oriented null) flags in $\R^{1,3}$, is given by \[ \G \colon \mathcal{F_P^O}(\HH) \To \mathcal{F_P^O}(\R^{1,3}), \quad \G(S,V,o) = (\g(S),\g(V),\g(o)). \] \end{defn}
\begin{lem} \label{Lem:G_bijection} $\G$ is well defined and a bijection. \end{lem}
In other words, $(S,V,o)\in\mathcal{F_P^O}(\HH)$ iff $(\g(S),\g(V),\g(o))\in\mathcal{F_P^O}(\R^{1,3})$.
\begin{proof} The isomorphism $\g$ maps $S \in \HH_0^+$ to a point $\g(S) \in L^+$ (\reflem{det0_lightcone_correspondence}). The 2-plane $V$ is spanned by $S$ and an element of $T_S \HH_0^+$, so $\g(V)$ is a 2-plane spanned by $\g(S)$ and an element of $T_{\g(S)} L^+$. Thus $\R \g(S) \subset \g(V)$ is a null flag in $\R^{1,3}$ and in fact $(\g(S), \g(V)) \in \mathcal{F_P} (\R^{1,3})$.
Considering orientations, since $\g(S) \in L^+$, the 1-dimensional space $\R \g(S)$ is oriented towards the future, in the direction of $\g(S)$. To see that $\g(o)$ is well defined, let $\underline{B}, \underline{B'}$ be two ordered bases of $V/\R S$ representing $o$ (in fact each basis consists of one vector); we show that $\g(\underline{B}), \g(\underline{B'})$ represent the same orientation of $\g(V)/\R \g(S)$. Since $\underline{B}, \underline{B'}$ represent $o$ and consist of single vectors, we have $\underline{B'} = m \underline{B}$ where $m$ is a positive real number, so $\g(\underline{B'}) = m \, \g (\underline{B})$. As $m > 0$, $\g(\underline{B'})$ and $\g(\underline{B})$ represent the same orientation of $\g(V)/\R \g(S)$. So $\g(o)$ is well defined, and indeed $\G$ is well defined.
The same arguments applied to the isomorphism $\g^{-1}$ show that $\G^{-1}$ is a well defined inverse to $\G$, so $\G$ is a bijection. \end{proof}
\subsubsection{Describing flags} \label{Sec:describing_flags}
Above we introduced notation $[[p,v]]$ for pointed null flags. We now extend this notation to (pointed oriented null) flags.
\begin{defn} \label{Def:pv_notation_PONF} Let $p \in L^+$ and $v \in T_p L^+$, such that $p,v$ are linearly independent. Then $[[p,v]]$ denotes $(p,V,o)\in\mathcal{F_P^O}(\R^{1,3})$, where $V$ is the span of $p$ and $v$, and $o$ is the orientation on $V/\R p$ represented by $v + \R p$. \end{defn}
The definition works similarly in $\mathcal{F_P^O}(\HH)$: for $S \in \HH_0^+$ and $U \in T_S \HH_0^+$, such that $S,U$ are linearly independent, $[[S,U]]$ denotes $(S,V,o)\in\mathcal{F_P^O}(\HH)$ where $V$ is the span of $S$ and $U$, and $o$ is the orientation on $V/\R S$ given by $U + \R S$.
Intuitively, the orientations can be understood as follows. The 2-plane $V$ is spanned by $p$ and $v$; $p$ gives an orientation on the line $\R p$, which is towards the future in $\R^{1,3}$ since $p \in L^+$. Choosing an orientation on $V/\R p$ amounts to choosing one of the two sides of the line $\R p$ on the plane $V$; we choose the side to which $v$ points.
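For example, take $p = (1,0,0,1) \in L^+$ and $v = (0,1,0,0)$. The curve $t \mapsto (1, \sin t, 0, \cos t)$ lies in $L^+$, passes through $p$ at $t=0$, and has velocity $v$ there, so $v \in T_p L^+$, and $p, v$ are linearly independent; thus $[[p,v]]$ is a flag. Replacing $v$ by $2v$ or by $v + 3p$ describes the same flag, since neither the span $V$ of $p,v$ nor the orientation of $V/\R p$ changes, whereas $[[p,-v]]$ is a different flag, with the same underlying pointed null flag but the opposite orientation.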
We have seen that flags in $\HH$ and $\R^{1,3}$ are related by the bijection $\G$, which has a simple description in this notation. \begin{lem} \label{Lem:G_in_pv_notation} For $[[S,U]] \in \mathcal{F_P^O}(\HH)$, we have $\G [[S,U]] = [[\g(S), \g(U)]]$. \end{lem} \begin{proof} Let $V$ be the 2-plane spanned by $S,U$ and $o$ the orientation on $V/\R S$ given by $U$, so $[[S,U]] = (S,V,o)$. Applying $\G$ to this flag, by \refdef{G}, yields $(\g(S),\g(V),\g(o))$. Now $\g(V)$ is the span of $\g(S)$ and $\g(U)$, and $\g(o)$ is the orientation on $\g(V)/\R \g(S)$ induced by $\g(U)$, so $(\g(S),\g(V),\g(o)) = [[\g(S),\g(U)]]$. \end{proof} Just as for pointed null flags, a given $(p,V,o)\in\mathcal{F_P^O}(\R^{1,3})$ can be described by many different $[[p,v]]$, and the following lemma, refining \reflem{characterise_equal_PNFs}, describes when they are equal. \begin{lem} \label{Lem:characterise_equal_PONFs} Suppose $p,p' \in L^+$ and $v,v' \in \R^{1,3}$. The following are equivalent. \begin{enumerate} \item $[[p,v]]$ and $[[p',v']]$ describe the same (pointed oriented null) flag. \item $p=p'$, and $v,v'$ both lie in $T_p L^+$, and the sets \[ \R p + \R^+ v = \left\{ ap+bv \mid a,b \in \R, b > 0 \right\}, \quad \R p' + \R^+ v' = \left\{ ap'+b v' \mid a,b \in \R, b > 0 \right\} \] are equal 2-dimensional half-planes. \item $p=p'$, and $v,v'$ both lie in $T_p L^+$, and $v,v'$ are not real multiples of $p$, and there exist real numbers $a,b,c$ such that $ap+bv+cv'=0$, where $b,c$ are nonzero and have opposite sign. \end{enumerate} \end{lem} As usual, a similar statement applies to flags in $\HH$, replacing $\R^{1,3}$ with $\HH$, $p,p' \in L^+$ with $S,S' \in \HH_0^+$, $v,v' \in \R^{1,3}$ with $U,U' \in \HH$, and $T_p L^+$ with $T_S \HH_0^+$. Note that when $v,v'$ are not real multiples of $p$, then an equation $ap+bv+cv'=0$ with $a,b,c$ not all zero must have $b$ and $c$ nonzero, and so can be rewritten as $v' = dv+ep$ or $v = d'v'+e'p$, expressing $v'$ in terms of the basis $\{v,p\}$, or $v$ in terms of the basis $\{v',p\}$ respectively. Having $b$ and $c$ of opposite sign is then equivalent to $d$ and $d'$ being positive, since $d = -b/c$ and $d'=-c/b$. In other words, $v$ is a positive multiple of $v'$, modulo multiples of $p$; and equivalently, $v'$ is a positive multiple of $v$ modulo multiples of $p$. \begin{proof} First we show the equivalence of (i) and (ii). By \reflem{characterise_equal_PNFs}, $[[p,v]]$ and $[[p',v']]$ describe the same pointed null flag if and only if $p=p'$, $v,v'$ both lie in $T_p L^+$, and the real spans of $(p,v)$ and $(p',v')$ are 2-dimensional and equal; let this span be $V$. It remains to show that the orientations on $V/\R p$ given by $v+\R p$ and $v'+\R p$ are equal if and only if $\R p + \R^+ v = \R p + \R^+ v'$. Now $V$ is divided into two half planes by the line $\R p$. They are respectively given by \[ \R p + \R^+ v = \left\{ ap+bv \mid a,b \in \R, b > 0 \right\} \quad \text{and} \quad \R p - \R^+ v = \left\{ ap-bv \mid a,b \in \R, b > 0 \right\}. \] These two half-planes map down to the 1-dimensional quotient space $V/\R p$ to give the two components of the complement of the origin: the first half-plane yields the positive real span of $v+\R p$; the second yields the negative real span of $v+\R p$. The first defines the co-orientation given by $v+\R p$. 
For $(p,v')$ we have a similar description of two half-planes $\R p + \R^+ v'$ and $\R p - \R^+ v'$, and we see that the half-plane $\R p + \R^+ v'$ yields the positive real span of $v'+ \R p$ in $V/\R p$, corresponding to the orientation given by $v' + \R p$. Thus, the two orientations are equal if and only if the two claimed sets are equal.
Now we show that (ii) is equivalent to (iii). We note that if the two sets in (ii) are equal, then $v' = ap+bv$ for some real $a,b$ with $b$ positive. Then $ap+bv-v'=0$ provides the equation required for (iii). Conversely, if $ap+bv+cv'=0$ with $b,c$ of opposite sign, then we may write $v'=dv+ep$ where $d$ is positive. Thus $v' \in \R p + \R^+ v$, so the half-plane $\R p + \R^+ v$ must coincide with the half-plane $\R p + \R^+ v'$. \end{proof}
\subsubsection{The map from spin vectors to flags} \label{Sec:map_F}
We now upgrade the map $\f$ to $\F$. Whereas $\f$ associates to a spinor $\kappa$ a matrix in $\HH_0^{0+}$, the map $\F$ associates to $\kappa$ a flag in $\HH$. The point in the pointed flag is just $\f(\kappa)$. As discussed at the beginning of \refsec{flags}, the 2-plane incorporates tangent data, using the derivative of $\f$ in a direction specified by the map $\ZZ$. We will see that the resulting construction is equivariant.
\begin{defn} \label{Def:spinors_to_PNF} The map $\F$ from nonzero spin vectors to (pointed oriented null) flags is given by \[ \F \colon \C_\times^2 \To \mathcal{F_P^O}(\HH), \quad \F(\kappa) = [[ \f(\kappa), \; D_\kappa \f(\ZZ(\kappa)) ]]. \] \end{defn}
Using \refeqn{derivative_flag_dirn} we thus have, for $\kappa \in \C^2_\times$, \begin{equation} \label{Eqn:F_explicitly} \F(\kappa) = [[ \f(\kappa), \; \kappa \kappa^T J + J \, \overline{\kappa} \kappa^* ]]. \end{equation}
Although $\F$ as stated could equally well map to less elaborate structures, for instance dropping the ``pointed'' or ``oriented'' details, we need the full data of a pointed oriented null flag for our construction.
The domain of $\F$ is $\C_\times^2$ rather than $\C^2$, since $\f(0)=0$, which does not span a 1-dimensional subspace in $\HH$; moreover there is no well defined tangent space to $\HH_0^+$ or $\HH_0^{0+}$ there. For $\kappa \neq 0$ we have $0 \neq \f(\kappa) \in \HH_0^+$, so we obtain a well defined 1-dimensional subspace for our null flag. Although it is clear $D_\kappa \f(\ZZ(\kappa)) \in T_{\f(\kappa)} \HH_0^+$, it is perhaps not so clear that, with $\f(\kappa)$, it spans a 2-dimensional vector space. We verify this, and in fact prove something stronger, in \reflem{flag_well_defined} below.
We saw in \reflem{G_bijection} that the linear isomorphism $\g \colon \HH \To \R^{1,3}$ induces a bijection $\G$ on flags; this immediately allows us to transport the flags on $\HH$, constructed by $\F$, over to Minkowski space.
Before proving \reflem{flag_well_defined} to verify that $\F$ is well defined, we first prove a general observation in linear algebra about factorisation of spin vectors. Statements equivalent to this first lemma appear in Penrose and Rindler \cite{Penrose_Rindler84}, and probably elsewhere. Recall (\refsec{notation}) that $\M_{m \times n}(\mathbb{F})$ denotes $m \times n$ matrices with entries in $\mathbb{F}$, and $\M_{m \times n}(\mathbb{F})_\times$ denotes such matrices which are nonzero.
\begin{lem} \label{Lem:spinor_factorisation} Suppose $M,M'\in\mathcal{M}_{2\times 1}(\C)_\times$, and $N,N'\in\mathcal{M}_{1\times 2}(\C)_\times$.
If $MN = M'N'$ then there exists $\mu\in\C_\times$ such that $M = \mu M'$ and $N = \mu^{-1} N'$. \end{lem} \begin{proof} Let \[ M = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \quad M' = \begin{pmatrix} \alpha' \\ \beta' \end{pmatrix}, \quad N= \begin{pmatrix} \gamma & \delta \end{pmatrix}, \quad N' = \begin{pmatrix} \gamma' & \delta' \end{pmatrix}. \quad \text{Also let} \quad v = \begin{pmatrix} -\delta \\ \gamma \end{pmatrix} \] so that $Nv=0$. Then $M'N'v = MNv=0$, which can be written out as \[ M'N' v = M' \begin{pmatrix} \gamma' & \delta' \end{pmatrix} \begin{pmatrix} -\delta \\ \gamma \end{pmatrix} = M' (-\gamma' \delta + \delta' \gamma) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \] Since $M'$ is nonzero, we have $-\gamma' \delta + \delta' \gamma = 0$, so that $N$ and $N'$ are (complex) proportional. A similar argument shows that $M$ and $M'$ are (complex) proportional. Since $MN=M'N'$, these proportions are inverses. Thus $M = \mu M'$ and $N = \mu^{-1} N'$ for some complex $\mu$. \end{proof} \begin{lem} \label{Lem:flag_well_defined} For any $\kappa \neq 0$, the three Hermitian matrices \[ \f(\kappa), \quad D_\kappa \f(\ZZ(\kappa)), \quad D_\kappa \f (i \ZZ(\kappa)) \] are linearly independent over $\R$. \end{lem} It follows that $D_\kappa \f(\ZZ(\kappa))$ is not a real multiple of $\f(\kappa)$, and hence $\F$ is well defined. \begin{proof} Applying \refeqn{derivative_flag_dirn}, we must show that for all $\kappa \neq 0$, the Hermitian matrices \[ \kappa \kappa^*, \quad \kappa \kappa^T J + J \overline{\kappa} \kappa^*, \quad -i \left( \kappa \kappa^T J - J \overline{\kappa} \kappa^* \right) \] are linearly independent over $\R$. Suppose to the contrary that they are not: then we have \[ a \kappa \kappa^* + b \left( \kappa \kappa^T J + J \overline{\kappa} \kappa^* \right) - ci \left(\kappa \kappa^T J - J \overline{\kappa} \kappa^* \right) = 0, \] for some real $a,b,c$, not all zero. We may rewrite this as \[ \kappa \left( a \kappa^* + b \kappa^T J - c i \kappa^T J \right) = \left( b J \overline{\kappa} + c i J \overline{\kappa} \right) \left( - \kappa^* \right). \] Let $\beta = b + ci$. Note $\beta = 0$ implies $a \kappa \kappa^* = 0$, a contradiction since $\kappa \in \C^2_\times$ and $a,b,c$ are not all zero; so $\beta \neq 0$. The equation can be written as \[ \kappa \left( a \kappa^* + \overline{\beta} \kappa^T J \right) = \left( J \overline{\kappa} \right) \left( - \beta \kappa^* \right), \] where both sides are a product of a $2 \times 1$ and $1 \times 2$ complex matrix. On the right hand side, both factors are nonzero, hence the same must be true on the left hand side. Applying \reflem{spinor_factorisation} we have $\kappa = \mu J \overline{\kappa}$ for some $\mu\neq0\in\C$. Letting $\kappa = (\xi, \eta)$ we thus have \[ \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \mu \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} \begin{pmatrix} \overline{\xi} \\ \overline{\eta} \end{pmatrix} = \mu \begin{pmatrix} \overline{\eta} \, i \\ - \overline{\xi} \, i \end{pmatrix}, \] so that $\xi = \mu \overline{\eta} i$ and $\eta = -\mu \overline{\xi} i$, hence $\overline{\eta} = \overline{\mu} \xi i$. But putting these together yields \[ \xi = \mu \overline{\eta} i = \mu (\overline{\mu} \xi i) i = -|\mu|^2 \xi. \] Thus $\xi = 0$, which implies $\eta = 0$, contradicting $\kappa \neq 0$. \end{proof} After \reflem{flag_well_defined}, we can give quite a precise description of the derivative of $\f$. 
At a point $\kappa$, the derivative $D_\kappa \f$ is a real linear map between tangent spaces $T_\kappa \C^2 \To T_{\f(\kappa)} \HH$. As both $\C^2$ and $\HH$ are real vector spaces, we may identify these tangent spaces with $\C^2$ and $\HH$ respectively.
\begin{lem} \label{Lem:structure_of_derivative_of_f} For any $\kappa \in \C^2_\times$, the derivative $D_\kappa \f$, considered as a real linear map $\C^2 \To \HH$, has the following properties. \begin{enumerate} \item The kernel of $D_\kappa \f$ is 1-dimensional, spanned by $i \kappa$. \item $\kappa, \ZZ(\kappa), i \ZZ(\kappa) \in \C^2$ are linearly independent over $\R$, and their 3-dimensional span maps isomorphically onto the image of $D_\kappa \f$. \end{enumerate} \end{lem}
We will see later in \reflem{orthonormal_basis_from_spinor} some nice properties of the three vectors in (ii) and their images.
\begin{proof} By \reflem{Z_forms_basis}, $\{ \kappa, \ZZ(\kappa)\}$ is a complex basis for $\C^2$, hence $\{ \kappa, i \kappa, \ZZ(\kappa), i \ZZ(\kappa) \}$ is a real basis for $\C^2$. We consider the effect of $D_\kappa \f$ on this basis. We saw in \reflem{derivatives_of_f_in_easy_directions} that $i \kappa \in \ker D_\kappa \f$, so the kernel of $D_\kappa \f$ has dimension $\geq 1$ and the image of $D_\kappa \f$ has dimension $\leq 3$. Since $D_\kappa \f (\kappa) = 2 \f(\kappa)$ (\reflem{derivatives_of_f_in_easy_directions}), \reflem{flag_well_defined} tells us that the images of $\kappa, \ZZ(\kappa), i \ZZ(\kappa)$ under $D_\kappa \f$ are linearly independent. So the image of $D_\kappa \f$ has dimension exactly $3$, spanned by the image of these 3 vectors, and the kernel has dimension exactly $1$, spanned by $i \kappa$. \end{proof}
Combining \refdef{spinors_to_PNF}, equation \refeqn{F_explicitly} and \reflem{G_in_pv_notation}, we immediately obtain the following description of $\G \circ \F \colon \C_\times^2 \To \mathcal{F_P^O}(\R^{1,3})$. This shows how to associate a flag in Minkowski space to a spin vector.
\begin{lem} \label{Lem:GoF_in_pv_form} \[ \G \circ \F (\kappa) = [[ \g \circ \f (\kappa), \g \left( D_\kappa \f (\ZZ(\kappa)) \right) ]] = [[ \g \left( \kappa \kappa^* \right) , \g \left( \kappa \kappa^T J + J \overline{\kappa} \kappa^* \right) ]]. \] \qed \end{lem}
\subsubsection{$SL(2,\C)$ action on flags in $\HH$} \label{Sec:SL2c_action_on_flags_HH}
We now explain how $SL(2,\C)$ acts on flags in $\HH$. In \refsec{equivariance_of_F} we consider equivariance of $\F$ with respect to this action.
We have considered flags both in $\HH$ and $\R^{1,3}$, but the isomorphism $\G$ shows that it is equivalent to consider either space of flags. Although $\R^{1,3}$ is perhaps easier to understand geometrically, it is more straightforward algebraically to consider the action on flags in $\HH$, and so we will consider $\HH$ first. From \refsec{flags_Minkowski_space} onwards we will consider $\R^{1,3}$.
To define the action of $SL(2,\C)$ on the space of flags $\mathcal{F_P^O}(\HH)$, we need to consider its actions on subspaces of $\HH$, their quotient spaces, and their orientations. We start with subspaces, extending the action on $\HH$ from \refdef{standard_SL2C_actions}.
\begin{defn} \label{Def:matrix_on_Hermitian_subspace} Let $V$ be a real vector subspace of $\HH$, and $A \in SL(2,\C)$. Then the action of $A$ on $V$ is given by \[ A\cdot V = \left\{ A\cdot S \mid S \in V \right\} = \left\{ ASA^* \mid S \in V \right\} = AVA^*.
\] \end{defn}
The same calculation as for $\HH$ in \refeqn{group_action_on_Hermitian} shows that, for $A,A' \in SL(2,\C)$, we have $(AA') \cdot V = A \cdot (A' \cdot V)$, so we indeed have an action of $SL(2,\C)$ on the set of subspaces of $\HH$. In fact, as we now see, this action is by linear isomorphisms.
\begin{lem} Let $V$ be a real $k$-dimensional subspace of $\HH$ and $A \in SL(2,\C)$. \label{Lem:SL2C_action_preserves_dimension} \begin{enumerate} \item The map $V \To A \cdot V$ defined by $S \mapsto A \cdot S$ for $S \in V$ is a linear isomorphism. In particular, $A\cdot V$ is also a $k$-dimensional subspace of $\HH$. \item \refdef{matrix_on_Hermitian_subspace} defines an action of $SL(2,\C)$ on the set of real $k$-dimensional subspaces of $\HH$. \end{enumerate} \end{lem}
The set of $k$-dimensional subspaces of $\HH$ forms the \emph{Grassmannian} $\Gr(k,\HH)$, so the above lemma says that $SL(2,\C)$ acts on $\Gr(k,\HH)$ by linear isomorphisms.
\begin{proof} The map $V \To A \cdot V$ is given by the action of $A$ on individual elements $S$ of $\HH$, i.e. $S \mapsto A \cdot S = A S A^*$. This is a real linear map, as shown explicitly in \refeqn{linear_action_on_Hermitian}. It is also invertible, with inverse given by the action of $A^{-1}$. Thus $V$ and $A \cdot V$ must have the same dimension. \end{proof}
Next we consider the action of $SL(2,\C)$ on quotients of subspaces of $\HH$, and their bases. For the rest of this subsection, $V \subset W$ are real subspaces of $\HH$, and $A \in SL(2,\C)$.
\begin{lem} \ \label{Lem:SL2C_action_subspaces_facts} \begin{enumerate} \item $A \cdot V \subset A \cdot W$, so the quotient $(A \cdot W) / (A \cdot V)$ is well defined. \item Let $\underline{S} = S + V \in W/V$, i.e. $S \in W$ represents $\underline{S}$. Then $A \underline{S} A^*$ is a well-defined element of $(A\cdot W)/(A\cdot V)$, represented by $A\cdot S = A S A^* \in A\cdot W$. \item The map $W/V \To (A \cdot W) / (A \cdot V)$ defined by $\underline{S} \mapsto A \underline{S} A^*$ is a linear isomorphism. \item \label{Lem:action_on_ordered_bases} If $\underline{S}_1, \ldots, \underline{S}_k$ is a basis of $W/V$, then $A \underline{S}_1 A^*, \ldots, A \underline{S}_k A^*$ is a basis of $(A\cdot W)/(A\cdot V)$. \end{enumerate} \end{lem}
In (ii) above, we think of $A \underline{S} A^*$ as the action of $A$ on $\underline{S} \in W/V$, and define $A \cdot \underline{S} = A \underline{S} A^* \in (A \cdot W)/(A \cdot V)$. If $A,A' \in SL(2,\C)$ then for $\underline{S}$ an element of $W/V$, we have a calculation similar to \refeqn{group_action_on_Hermitian} \begin{equation} \label{Eqn:group_action_on_quotient} (AA') \cdot \underline{S} = (AA') \underline{S} (AA')^* = A A' \underline{S} A'^* A^* = A \cdot (A' \underline{S} A'^*) = A \cdot (A' \cdot \underline{S}), \end{equation} showing that we have a group action of $SL(2,\C)$ on quotients of subspaces of $\HH$.
\begin{proof} \ \begin{enumerate} \item An element of $A \cdot V$ can be written as $A \cdot S$ for some $S \in V$; as $V \subset W$, $S \in W$, so $A \cdot S \in A \cdot W$. Thus $A \cdot V \subset A \cdot W$. \item If $S'$ is another representative of $\underline{S}$, then $S-S' \in V$, so $A\cdot S - A\cdot S' = A\cdot (S - S') \in A\cdot V$. \item The same calculation as in \refeqn{linear_action_on_Hermitian} shows that $\underline{S} \mapsto A \underline{S} A^*$ is linear in $\underline{S}$. And as in \reflem{SL2C_action_preserves_dimension}, this linear map is invertible, with inverse given by the action of $A^{-1}$.
\item Immediate from the previous part, since a linear isomorphism sends a basis to a basis. \end{enumerate} \end{proof} In (iv) above, we think of the basis $A \underline{S}_i A^*$ as the action of $A$ on the basis $\underline{S}_i$. Writing $\underline{B} = (\underline{S}_1, \ldots, \underline{S}_k)$ for the ordered basis, we define $A \cdot \underline{B} = (A \cdot \underline{S}_1, \ldots, A \cdot \underline{S}_k)$. For $A,A' \in SL(2,\C)$ and $\underline{B}$ an ordered basis, we then have $(AA') \cdot \underline{B} = A \cdot (A' \cdot \underline{B})$, by a similar calculation as \refeqn{group_action_on_quotient}. Thus, we have a group action of $SL(2,\C)$ on ordered bases of quotients of subspaces of $\HH$. Next, consider \emph{two} ordered bases $\underline{B} = (\underline{S}_1, \ldots, \underline{S}_k)$ and $\underline{B}' = (\underline{S}'_1, \ldots, \underline{S}'_k)$, and their orientations. By \reflem{SL2C_action_subspaces_facts}(iv) then $A \cdot \underline{B}$ and $A \cdot \underline{B}'$ are ordered bases of $(A \cdot W)/(A \cdot V)$. \begin{lem} \label{Lem:change_of_basis_matrix_after_action} \label{Lem:action_on_coorientation} Let $\underline{B}, \underline{B}'$ be two ordered bases of $W/V$ as above. \begin{enumerate} \item Let $M$ be the linear map of $W/V$ taking the ordered basis $\underline{B}$ to $\underline{B}'$, and $N$ the linear map of $(A \cdot W)/(A \cdot V)$ taking the ordered basis $A \cdot \underline{B}$ to $A \cdot \underline{B}'$. Then $\det M= \det N$. \item If $\underline{B}$ and $\underline{B}'$ are ordered bases of $W/V$ representing the same orientation, then $A\cdot \underline{B}$ and $A\cdot \underline{B}'$ represent the same orientation of $(A\cdot W)/(A\cdot V)$. \end{enumerate} \end{lem} \begin{proof} By \reflem{SL2C_action_subspaces_facts}(iii), the map $T_A \colon W/V \To (A \cdot W)/(A \cdot V)$ given by $\underline{S} \mapsto A \cdot \underline{S}$ is a linear isomorphism, and by definition it sends the ordered basis $\underline{B}$ to $A \cdot \underline{B}$ and $\underline{B}'$ to $A \cdot \underline{B}'$. Thus $T_A M = N T_A$, and the matrix of $M$ with respect to $\underline{B}$ (or $\underline{B}'$) is equal to the matrix of $N$ with respect to $A \cdot \underline{B}$ (or $A \cdot \underline{B}'$). Thus $\det M = \det N$. If $\underline{B}, \underline{B}'$ represent the same orientation, then $\det M > 0$, so $\det N = \det M > 0$. Thus $A \cdot \underline{B}$ and $A \cdot \underline{B}'$ represent the same orientation. \end{proof} Recall from \refdef{pointed_oriented_null_flag} that the orientations in flags are orientations on quotients of subspaces. For an orientation $o$ on $W/V$ then we can define $A \cdot o$ to be the orientation on $(A \cdot W)/(A \cdot V)$ represented by $A \cdot \underline{B}$, where $\underline{B}$ is any ordered basis of $W/V$ representing $o$. By the above lemma, $A \cdot o$ is well defined. For $A,A' \in SL(2,\C)$, we observe that $(AA')\cdot o = A\cdot (A' \cdot o)$. Indeed, taking a basis $\underline{B}$ representing $o$, we saw that $(AA') \cdot \underline{B} = A \cdot (A' \cdot \underline{B})$, which are bases representing the orientations $(AA') \cdot o$ and $A \cdot (A' \cdot o)$ respectively. Thus we have a group action of $SL(2,\C)$ on orientations of quotients of subspaces of $\HH$. We can now define an action of $SL(2,\C)$ on flags in $\HH$. \begin{defn} \label{Def:matrix_on_PONF} Consider $(S,V,o)\in\mathcal{F_P^O}(\HH)$ and let $A \in SL(2,\C)$. 
Define $A$ to act on $(S,V,o)$ by
\[
A\cdot (S,V,o) = (A\cdot S, A\cdot V, A\cdot o).
\]
\end{defn}
\begin{lem}
\label{Lem:SL2C_act_on_PONF_H}
\refdef{matrix_on_PONF} defines an action of $SL(2,\C)$ on $\mathcal{F_P^O}(\HH)$.
\end{lem}
\begin{proof}
First we check that $(A\cdot S, A\cdot V, A \cdot o)$ is indeed a pointed oriented null flag. We know that $SL(2,\C)$ acts on $\HH_0^+$ (\reflem{SL2C_preerves_Hs}), so $A \cdot S \in \HH_0^+$. As the $SL(2,\C)$ action preserves 2-dimensional subspaces (\reflem{SL2C_action_preserves_dimension}), $A \cdot V$ is 2-dimensional. We also observe that $\R S \subset V$ implies $\R(A\cdot S) = \R(ASA^*) = A(\R S)A^* \subset AVA^* = A \cdot V$. As $(S,V) \in \mathcal{F_P}(\HH)$, by definition there exists $v \in T_S \HH_0^+$ such that $S$ and $v$ span $V$. Since the action of $A$ on subspaces is by linear isomorphisms (\reflem{SL2C_action_preserves_dimension}), $A\cdot S$ and $A\cdot v$ span $A\cdot V$; moreover, since $\HH_0^+$ lies in the vector space $\HH$, on which the action of $A$ is linear, we have $A\cdot v \in T_{A\cdot S} \HH_0^+$. Thus $\R(A\cdot S) \subset A\cdot V$ is a null flag and $(A\cdot S,A\cdot V) \in \mathcal{F_P}(\HH)$. By \reflem{action_on_coorientation} and subsequent remarks, $A\cdot o$ is an orientation on $(A \cdot V) / (A\cdot \R S)$. Thus $(A \cdot S, A \cdot V, A \cdot o)$ is a pointed oriented null flag.
The actions of $SL(2,\C)$ on $\HH$, subspaces of $\HH$, and orientations are all group actions, by \refdef{SL2C_actions_on_C2_H}, \refdef{matrix_on_Hermitian_subspace}, and \reflem{action_on_coorientation} (and subsequent comments) respectively. So for $A,A' \in SL(2,\C)$ we have $(AA')\cdot (S,V,o) = A\cdot (A' \cdot (S, V, o))$, yielding the desired group action.
\end{proof}
The action of $SL(2,\C)$ on $\mathcal{F_P^O}(\HH)$ is described naturally in the notation $[[S,U]]$ of \refdef{pv_notation_PONF}.
\begin{lem}
\label{Lem:action_on_pv_notation}
\label{Lem:action_on_pv_notation_PONF}
Let $[[S,U]] \in \mathcal{F_P^O}(\HH)$ and $A \in SL(2,\C)$. Then
\[
A\cdot [[S,U]] = [[A\cdot S, A\cdot U]] = [[ASA^*, AUA^*]].
\]
\end{lem}
\begin{proof}
Letting $V$ be the real span of $S$ and $U$, and $o$ the orientation induced by $U$ on $V/\R S$, we have $[[S,U]] = (S, V, o)$. In particular, $\underline{U} = U + \R S \in V / \R S$ is an (ordered!) basis of the 1-dimensional quotient space $V / \R S$, and $o$ is the orientation given by $\underline{U}$.
By \refdef{matrix_on_PONF}, $A \cdot (S,V,o) = (A \cdot S, A \cdot V, A \cdot o)$. As $S,U$ is a basis of $V$, and $A$ acts by linear isomorphisms (\reflem{SL2C_action_preserves_dimension}), $A \cdot S, A \cdot U$ is a basis of $A \cdot V$. Moreover, the action of $A$ induces an isomorphism of quotient spaces $V / \R S \To (A \cdot V) / (A \cdot \R S)$ sending $\underline{U}$ to $A \cdot \underline{U}$ (\reflem{SL2C_action_subspaces_facts}), and $A \cdot o$ is the orientation given by $A \cdot \underline{U}$. In other words, $A \cdot o$ is the orientation induced by $A \cdot U$ on $(A \cdot V)/(A \cdot \R S)$. Thus $(A \cdot S, A \cdot V, A \cdot o) = [[A \cdot S, A \cdot U]]$.
\end{proof}
\subsubsection{Equivariance of actions on spin vectors and flags in $\HH$}
\label{Sec:equivariance_of_F}
In this section we prove equivariance of $\F$, as follows.
\begin{prop}
\label{Prop:SL2C_spinors_PNF_H_equivariant}
The actions of $SL(2,\C)$ on $\C_\times^2$ and $\mathcal{F_P^O}(\HH)$ are equivariant with respect to $\F$.
In other words, for $\kappa \in \C_\times^2$ and $A \in SL(2,\C)$,
\[
A\cdot \F(\kappa) = \F(A\cdot\kappa).
\]
\end{prop}
The proof of \refprop{SL2C_spinors_PNF_H_equivariant} is essentially the first time we actually use $A \in SL(2,\C)$: the actions of $SL(2,\C)$ in \refdef{standard_SL2C_actions}, \reflem{restricted_actions_on_H}, and \refdef{matrix_on_Hermitian_subspace}--\reflem{action_on_pv_notation} all work for $A \in GL(2,\C)$.
We will give two proofs of \refprop{SL2C_spinors_PNF_H_equivariant}, one conceptual, and one explicit. The first, conceptual proof is based on the following lemma.
\begin{lem}
\label{Lem:conceptual}
For two spinors $\kappa,\nu\in\C^2_\times$, the following are equivalent:
\begin{enumerate}
\item $\{\kappa,\nu\}$ is negative imaginary,
\item $\nu=\alpha\kappa+b\ZZ(\kappa)$, where $\alpha\in\C$, $b\in\R^+$,
\item $[[\f(\kappa),D_\kappa \f(\nu)]]=\F(\kappa)$.
\end{enumerate}
\end{lem}
To motivate this lemma, note that all three equivalent conditions say, in various senses, that ``$\nu$ is like $\ZZ(\kappa)$''. \reflem{bilinear_Z_negative_imaginary} tells us that $\{ \kappa, \ZZ(\kappa) \}$ is negative imaginary, so (i) says that $\{\kappa, \nu\}$ is like $\{\kappa, \ZZ(\kappa)\}$. Condition (ii) says that $\nu$ is, up to multiples of $\kappa$, a positive multiple of $\ZZ(\kappa)$. And \refeqn{F_explicitly} tells us that $\F(\kappa) = [[\f(\kappa),D_\kappa \f(\ZZ(\kappa))]]$, so (iii) says that using the directional derivative of $\f$ in the direction $\nu$ yields the same flag as $\F$, which uses the direction $\ZZ(\kappa)$.
\begin{proof}
We first show (i) and (ii) are equivalent. Since $\{\cdot, \cdot\}$ is complex bilinear, if (ii) holds then
\[
\{\kappa, \nu\} = \alpha \{ \kappa, \kappa \} + b \{ \kappa, \ZZ(\kappa) \} = b \{ \kappa, \ZZ(\kappa) \}
\]
which is negative imaginary by \reflem{bilinear_Z_negative_imaginary}, so (i) holds. For the converse, if $\{\kappa, \nu\}$ is negative imaginary then $\{\kappa, b\ZZ(\kappa)\} = \{\kappa, \nu\}$ for some positive $b$. As $\{\cdot,\cdot\}$ is a complex symplectic form on a complex 2-dimensional vector space, any two vectors yielding the same value for $\{\kappa,\cdot\}$ differ by a complex multiple of $\kappa$, so (ii) holds.
Next we show (ii) and (iii) are equivalent. For convenience, let $S = \f(\kappa)$, $U = D_\kappa \f(\nu)$ and $U' = D_\kappa \f(\ZZ(\kappa))$. Suppose (ii) holds, so that $\nu = \alpha \kappa + b \ZZ(\kappa)$, and we show that
\[
[[\f(\kappa),D_\kappa \f(\nu)]]=[[\f(\kappa), D_\kappa \f(\ZZ(\kappa))]], \quad \text{i.e.} \quad [[S,U]] = [[S,U']].
\]
Let $\alpha = c + di$, where $c,d \in \R$. Then by the (real) linearity of the derivative of $\f$, and using the calculations of the derivatives in the $\kappa$ direction (proportional to $\f(\kappa)$) and the $i \kappa$ direction (the fibre direction) from \reflem{derivatives_of_f_in_easy_directions}, we have
\begin{align*}
U &= D_\kappa \f(\nu) = D_\kappa \f ( c \kappa + d i \kappa + b \ZZ(\kappa) ) \\
&= c D_\kappa \f(\kappa) + d D_\kappa \f (i \kappa) + b D_\kappa \f (\ZZ(\kappa)) \\
&= 2 c \f(\kappa) + b D_\kappa \f(\ZZ(\kappa)) = 2 c S + b U'.
\end{align*}
We now apply \reflem{characterise_equal_PONFs}. Since $\F(\kappa) = [[S,U']]$ is a bona fide flag, $U'$ is not a real multiple of $S$. Since $U = 2cS + bU'$, we see that $U$ is not a real multiple of $S$ either. Rearranging, $-2c S + U - bU' = 0$ is a linear dependency between $S,U,U'$ with coefficients of opposite sign on $U$ and $U'$. Thus the flags are equal.
Alternatively, one can observe that $\R S + \R^+ U = \R S + \R^+ U'$.
For the converse, suppose $[[S,U]] = [[S,U']]$. By \reflem{characterise_equal_PONFs}, we have a linear dependency, and rearranging it, we have $U = a S + b U'$ where $a,b$ are real and $b>0$. Thus
\[
D_\kappa \f(\nu) = a \f(\kappa) + b D_\kappa \f(\ZZ(\kappa)).
\]
Since $D_\kappa \f(\kappa) = 2 \f(\kappa)$ (\reflem{derivatives_of_f_in_easy_directions}), using the real linearity of $D_\kappa \f$, we have
\[
D_\kappa \f \left( \nu - \frac{a}{2} \kappa - b \ZZ(\kappa) \right) = 0.
\]
By \reflem{structure_of_derivative_of_f}, $D_\kappa \f$ has kernel spanned by $i \kappa$. Thus we have $\nu - \frac{a}{2} \kappa - b \ZZ(\kappa) = c i \kappa$ for some real $c$. Letting $\alpha = a/2 + ci$, we have $\nu = \alpha \kappa + b \ZZ(\kappa)$, as required for (ii).
\end{proof}
\begin{proof}[Proof 1 of \refprop{SL2C_spinors_PNF_H_equivariant}]
We have $\F(\kappa)=[[\f(\kappa), D_\kappa \f(\ZZ(\kappa))]]$ so
\[
A\cdot \F(\kappa) = [[A \cdot \f(\kappa), A\cdot D_\kappa \f(\ZZ(\kappa))]] = [[\f(A\kappa), D_{A\kappa} \f(A(\ZZ(\kappa)))]],
\]
applying \reflem{action_on_pv_notation}, equivariance of $\f$ (\reflem{restricted_actions_on_H}) and its derivative \refeqn{equivariance_of_derivative_of_f}. Now as $A \in SL(2,\C)$, by \reflem{SL2C_by_symplectomorphisms} it acts on $\C^2$ by symplectomorphisms, so $\{A\kappa,A(\ZZ(\kappa))\} = \{\kappa,\ZZ(\kappa)\}$. But $\{\kappa, \ZZ(\kappa)\}$ is negative imaginary (\reflem{bilinear_Z_negative_imaginary}), so by \reflem{conceptual} we have $[[ \f(A\kappa), D_{A\kappa} \f(A(\ZZ(\kappa)))]] = \F(A\kappa)$.
\end{proof}
The second, explicit proof of \refprop{SL2C_spinors_PNF_H_equivariant} is based on the following, perhaps surprising, identity.
\begin{prop}
\label{Prop:crazy_identity}
For any spin vector $\kappa \in \C^2$ and $A \in SL(2,\C)$,
\begin{align*}
\left[ A \kappa \kappa^T J A^* + A J \overline{\kappa} \kappa^* A^* \right] \left( \kappa^* A^* A \kappa \right)
&= \left[ A \kappa \kappa^T A^T J + J \overline{A} \, \overline{\kappa} \kappa^* A^* \right] \left( \kappa^* \kappa \right) \\
&\quad + \left[ A \kappa \kappa^* A^* \right] \left( \kappa^T J A^* A \kappa + \kappa^* A^* A J \overline{\kappa} \right).
\end{align*}
\end{prop}
\begin{proof}
Let $A = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ and $\kappa = \begin{pmatrix} \xi \\ \eta \end{pmatrix}$, and expand and simplify, using $\alpha \delta - \beta \gamma = 1$.
\end{proof}
\begin{proof}[Proof 2 of \refprop{SL2C_spinors_PNF_H_equivariant}]
From \refdef{spinors_to_PNF} we have $\F(\kappa) = [[ \f(\kappa), D_\kappa \f(\ZZ(\kappa)) ]]$, and by \reflem{action_on_pv_notation_PONF} we have
\[
A\cdot \F(\kappa) = [[A\cdot \f(\kappa), A\cdot D_\kappa \f(\ZZ(\kappa)) ]].
\]
On the other hand, $A$ acts on $\kappa$ simply by matrix-vector multiplication, and we have
\begin{align*}
\F(A\cdot\kappa) &= \F(A\kappa) = [[ \f(A\kappa), D_{A\kappa} \f(\ZZ(A \kappa)) ]].
\end{align*}
We now use \reflem{characterise_equal_PONFs} to show the two claimed pointed flags are equal, verifying (iii) there, which has three conditions.
The first condition is $A\cdot \f(\kappa) = \f(A \kappa)$; call this point $p$. This follows from equivariance of $\f$ (\reflem{restricted_actions_on_H}).
The second condition is that $A\cdot D_\kappa \f(\ZZ(\kappa))$ and $D_{A \kappa} \f(\ZZ(A \kappa))$ both lie in the tangent space to $\HH_0^+$ at $p$, and are not real multiples of $p$.
Since $\f$ has image in $\HH_0^+$, the image of the derivative $D_\kappa \f$ lies in $T_{\f(\kappa)} \HH_0^+$, and hence $D_\kappa \f (\ZZ(\kappa)) \in T_{\f(\kappa)} \HH_0^+$. Moreover, by \reflem{flag_well_defined}, $D_\kappa \f(\ZZ(\kappa))$ is not a real multiple of $\f(\kappa)$. As $A$ acts linearly on $\HH$ preserving $\HH_0^+$, then $A\cdot D_\kappa \f(\ZZ(\kappa)) \in T_{p} \HH_0^+$. Similarly, the image of the derivative of $\f$ at $A \kappa$ lies in $T_{\f(A\kappa)} \HH_0^+$, so $D_{A \kappa} \f(\ZZ(A \kappa)) \in T_p \HH_0^+$. Applying $A$, which acts linearly on $\HH$, sends $\f(\kappa)$ to $A\cdot \f(\kappa) = p$ and $D_\kappa \f(\ZZ(\kappa))$ to $A\cdot D_\kappa \f(\ZZ(\kappa))$. If these two did not span a plane, then the action of $A$ would send a 2-plane to a smaller dimensional subspace, contradicting \reflem{SL2C_action_preserves_dimension}. Thus $A\cdot D_\kappa \f(\ZZ(\kappa))$ is not a real multiple of $p$. Applying \reflem{flag_well_defined} to $A \kappa$ gives that $D_{A \kappa} \f(\ZZ(A \kappa))$ is not a real multiple of $\f(A \kappa) = p$ either. The third condition is that there exist real numbers $a,b,c$ such that \begin{equation} \label{Eqn:want_these_abc} a \left( p \right) + b \left( A\cdot D_\kappa \f(\ZZ(\kappa)) \right) + c \left( D_{A \kappa} \f(\ZZ(A \kappa)) \right) = 0, \end{equation} where $b$ and $c$ have opposite signs. We calculate $p = A\cdot \f(\kappa) = A \kappa \kappa^* A^*$, and from \refeqn{F_explicitly} we have $D_\kappa \f(\ZZ(\kappa)) = \kappa \kappa^T J + J \overline{\kappa} \kappa^*$ so \[ A\cdot D_\kappa \f(\ZZ(\kappa)) = A\cdot \left( \kappa \kappa^T J + J \overline{\kappa} \kappa^* \right) = A \left( \kappa \kappa^T J + J \overline{\kappa} \kappa^* \right) A^*. \] and \[ D_{A\kappa} \f(\ZZ(A \kappa)) = (A\kappa) (A\kappa)^T J + J \overline{(A \kappa)} (A\kappa)^* = A \kappa \kappa^T A^T J + J \overline{A} \, \overline{\kappa} \kappa^* A^*. \] We can then rewrite \refprop{crazy_identity} as \[ \left[ A\cdot D_\kappa \f(\ZZ(\kappa)) \right] \left( \kappa^* A^* A \kappa \right) - \left[ D_{A\kappa} \f(\ZZ(A \kappa)) \right] \left( \kappa^* \kappa \right) - \left[ p \right] \left( \kappa^T J A^* A \kappa + \kappa^* A^* A J \overline{\kappa} \right) = 0, \] where the expressions in parentheses are real numbers. For any $\tau \in \C^2_\times$ written as a column vector, $\tau^* \tau$ is positive real; taking $\tau$ to be $A \kappa$ and $\kappa$ respectively, we see that $\kappa^* A^* A \kappa > 0$ and $-\kappa^* \kappa < 0$. Thus we have the required $a,b,c$ for \refeqn{want_these_abc}. \end{proof} \subsubsection{$SL(2,\C)$ action on flags in Minkowski space} \label{Sec:flags_Minkowski_space} We now translate all the above results on flags in $\HH$ into Minkowski space, using the maps $\g \colon \HH \To \R^{1,3}$ (\refdef{g_H_to_R31}) and $\G \colon \mathcal{F_P^O}(\HH) \To \mathcal{F_P^O}(\R^{1,3})$ (\refdef{G}). Essentially, $\g$ and $\G$ preserve all the structure required, so statements about flags in $\HH$ translate immediately to Minkowski space. We have already defined a null flag (\refdef{null_flag_in_Minkowski}), pointed null flag (\refdef{pointed_null_flag}), pointed oriented null flag (\refdef{pointed_oriented_null_flag}), and $[[p,v]]$ notation for flags (\refdef{pv_notation_PONF}) in both $\HH$ and $\R^{1,3}$, and observed that $\g$ sends each object in $\HH$ to the corresponding object in $\R^{1,3}$, giving rise to the bijection $\G$. 
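As an aside, we note that the identity of \refprop{crazy_identity}, on which Proof 2 above relies, is easy to spot-check by machine. The following minimal sketch (assuming Python with the \texttt{numpy} library; the variable names are ours, and this plays no role in any proof) tests the identity at a pseudo-random $A$ with $\det A = 1$ and a pseudo-random $\kappa$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
al, be, ga = rng.normal(size=3) + 1j * rng.normal(size=3)
de = (1 + be * ga) / al                  # enforce det A = al*de - be*ga = 1
A = np.array([[al, be], [ga, de]])
k = (rng.normal(size=2) + 1j * rng.normal(size=2)).reshape(2, 1)
J = np.array([[0, 1j], [-1j, 0]])

Ah, kh, kb, kT = A.conj().T, k.conj().T, k.conj(), k.T   # A*, kappa*, conj, transpose

lhs = (A @ k @ kT @ J @ Ah + A @ J @ kb @ kh @ Ah) * (kh @ Ah @ A @ k)[0, 0]
rhs = (A @ k @ kT @ A.T @ J + J @ A.conj() @ kb @ kh @ Ah) * (kh @ k)[0, 0] \
    + (A @ k @ kh @ Ah) * ((kT @ J @ Ah @ A @ k)[0, 0] + (kh @ Ah @ A @ J @ kb)[0, 0])

print(np.allclose(lhs, rhs))             # expect True
\end{verbatim}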
We now define the $SL(2,\C)$ action on $\mathcal{F_P^O}(\R^{1,3})$ and show $\G$ is equivariant. We extend the action of $SL(2,\C)$ on $\R^{1,3}$ (\refdef{SL2C_on_R31}) to subspaces of $\R^{1,3}$, quotient spaces, and orientations. As in \refdef{SL2C_on_R31}, these actions are imported directly from the corresponding actions in $\HH$. Throughout this section, $V \subset W$ are subspaces of $\R^{1,3}$, and $A \in SL(2,\C)$. \begin{defn} \label{Def:SL2C_on_R31_subspace} \label{Def:SL2C_on_R31_orientations} \label{Def:SL2C_on_PONF_R31} The action of $A$ on: \begin{enumerate} \item a vector subspace $V$ of $\R^{1,3}$ is given by \[ A\cdot V = \{A\cdot v \mid v \in V \} = \left\{ \g \left( A\cdot \left( \g^{-1} v \right) \right) \mid v \in V \right\} = \g \left( A\cdot \left( \g^{-1} (V) \right) \right) = \g \left( A \left( \g^{-1} V \right) A^* \right); \] \item a quotient space $W/V$ is given by $A \cdot (W/V) = A \cdot W/A \cdot V$; \item an orientation $o$ on $W/V$ is given by $A \cdot o = \g \left( A\cdot \g^{-1} (o) \right)$; \item a flag $(p,V,o)\in\mathcal{F_P^O}(\R^{1,3})$, is given by $A\cdot (p,V,o) = (A\cdot p, A\cdot V, A\cdot o)$. \end{enumerate} \end{defn} Note that as $V \subset W$, then $A \cdot V \subset A \cdot W$, so (ii) above makes sense. All these actions essentially derive from the action of $SL(2,\C)$ on $\R^{1,3}$. If $A \in SL(2,\C)$ acts on $\R^{1,3}$ via a linear map $M \in SO(1,3)^+$, then all of the actions above essentially just apply $M$. In particular, for a flag $(p,V,o)$, we have $A\cdot (p,V,o)=(Mp,MV,Mo)$. It follows immediately from the fact that $\g$ is a linear isomorphism, and the results of \refsec{SL2c_action_on_flags_HH}, that these definitions give actions of $SL(2,\C)$ on the following sets. \begin{enumerate} \item The set of subspaces of $\R^{1,3}$, acting by linear isomorphisms, using \reflem{SL2C_action_preserves_dimension}; also on each Grassmannian $\Gr(k,\R^{1,3})$. \item The set of quotients of subspaces of $\R^{1,3}$, acting by linear isomorphisms, using \reflem{SL2C_action_subspaces_facts} and subsequent comment. \item The set of orientations of quotients of subspaces of $\R^{1,3}$, using \reflem{action_on_coorientation} and subsequent comment. \item the set of flags $\mathcal{F_P}(\R^{1,3})$, using \reflem{SL2C_act_on_PONF_H} and subsequent comment. \end{enumerate} Similarly we obtain the following immediate translation of \reflem{action_on_pv_notation} \begin{lem} \label{Lem:SL2c_action_on_PONF_R31_works} For $[[p,v]] \in \mathcal{F_P^O}(\R^{1,3})$, we have \[ A\cdot [[p,v]] = [[A\cdot p,A\cdot v]] \] \qed \end{lem} All the actions of $SL(2,\C)$ on objects in $\R^{1,3}$ are defined by applying $\g^{-1}$, then apply the action in $\HH$, then applying $\g$. Hence they are all equivariant. In particular, We obtain the following statement. \begin{prop} \label{Prop:FG_equivariant} The actions of $SL(2,\C)$ on $\mathcal{F_P^O}(\HH)$ and $\mathcal{F_P^O}(\R^{1,3})$ are equivariant with respect to $\G$. In other words, for any $A \in SL(2,\C)$ and any $(S,V,o) \in \mathcal{F_P^O}(\HH)$, \[ \G( A \cdot (S,V,o)) = A \cdot \G(S,V,o), \quad \text{i.e.} \quad \begin{array}{ccc} \mathcal{F_P^O}(\HH) & \stackrel{\G}{\To} & \mathcal{F_P^O}(\R^{1,3}) \\ \downarrow A && \downarrow A \\ \mathcal{F_P^O}(\HH) & \stackrel{\G}{\To} & \mathcal{F_P^O}(\R^{1,3}) \end{array} \quad \text{commutes}. 
\]
\qed
\end{prop}
\subsubsection{Flag intersection with the celestial sphere}
\label{Sec:calculating_flags_Minkowski}
Let us calculate some details of the flag of a spin vector. In particular, it will be useful to describe its intersections with the celestial sphere $\S^+ = L^+ \cap \{T=1\}$ (\refdef{celestial_sphere}(ii)).
Given a flag $(p,V,o) \in \mathcal{F_P^O}(\R^{1,3})$, the line $\R p$ intersects $\S^+$ in a point $q$. The 2-plane $V$ contains $\R p$, so is transverse to the 3-plane $T = 1$, and intersects this 3-plane in a 1-dimensional line. Because $V$ is tangent to the light cone, the line $V \cap \{T=1\}$ is tangent to $\S^+$ at $q$. The orientation $o$ on $V/\R p$ yields an orientation on this line $V \cap \{T=1\}$.
Now, given a spin vector $\kappa = (\xi, \eta) = (a+bi, c+di)$ with $a,b,c,d \in \R$, by \reflem{GoF_in_pv_form} the associated flag $\G \circ \F(\kappa)$ in $\R^{1,3}$ is $[[p,v]]$, where $p = \g \circ \f (\kappa)$, and $v = \g (D_\kappa \f(\ZZ(\kappa)))$. The 2-plane $V$ is the span of $p$ and $v$, with orientation on $V/\R p$ given by $v$. In \refsec{f_compose_g} we gave explicit descriptions of $p$ (\reflem{spin_vector_to_TXYZ}), and the intersection point $q$ of the line $\R p$ with $\S^+$ (\reflem{gof_celestial_sphere}):
\begin{align*}
p &= \g \circ \f (\kappa) = \left( a^2 + b^2 + c^2 + d^2, 2(ac+bd), 2(bc-ad), a^2 + b^2 - c^2 - d^2 \right) \\
q &= \left( 1, \frac{2(ac+bd)}{a^2+b^2+c^2+d^2}, \frac{2(bc-ad)}{a^2+b^2+c^2+d^2}, \frac{a^2+b^2-c^2-d^2}{a^2+b^2+c^2+d^2} \right).
\end{align*}
As we now see, $v$ has no $T$-component, and so gives a tangent vector to $\S^+$ at $q$, which is the oriented direction of the line $V \cap \{T=1\}$. See \reffig{flag_intersect_celestial_sphere}.
\begin{center}
\begin{tikzpicture}
\draw[blue] (3.75,1.5) ellipse (2cm and 0.3cm);
\draw[green!50!black] (3.75,0.5) ellipse (1cm and 0.2cm);
\fill[white] (2.75,0.5)--(4.75,0.5)--(4.75,0.72)--(2.75,0.72);
\draw[dashed, green!50!black] (3.75,0.5) ellipse (1cm and 0.2cm);
\draw[green!50!black] (1,0)--(5.5,0)--(6.5,1)--(5.25,1);
\draw[green!50!black] (2.25,1)--(2,1)--(1,0);
\draw[dashed,green!50!black] (5.25,1)--(2.25,1);
\draw[dashed,blue] (2.75,0.5)--(3.25,0);
\draw[blue] (2.75,0.5)--(1.75,1.5);
\draw[dashed, blue] (4.25,0)--(4.75,0.5);
\draw[blue] (4.75,0.5)--(5.75,1.5);
\draw[blue] (3.25,0)--(3.75,-0.5)--(4.25,0.0);
\draw[red] (3.75,-0.5)--(4,0);
\draw[dashed,red] (4,0)--(4.1875,0.375);
\fill[white] (4.475,0.95)--(4.675,0.75)--(4.275,0.55);
\draw[red] (4.1375,0.275)--(4.475,0.95)--(4.675,0.75)--(4.275,0.55);
\node[blue] at (1.5,1.5){$L^+$};
\fill[red] (4.475,0.95) circle (0.055cm);
\fill[red] (4.15,0.3) circle (0.055cm);
\node[red] at (4.75,1){\footnotesize$p$};
\node[red] at (4.8,0.75){\footnotesize$V$};
\node[red] at (4.1,0.45){\footnotesize$q$};
\node[red] at (4.6,0.4){\footnotesize$v$};
\draw[->,red](4.15,0.3)--(4.5,0.37);
\node[green!50!black] at (1.8,0.2){$T=1$};
\node[green!50!black] at (2.9,0.85){\footnotesize$\mathcal{S}^+$};
\end{tikzpicture}
\captionof{figure}{The intersection of a flag with the celestial sphere.}
\label{Fig:flag_intersect_celestial_sphere}
\end{center}
For the rest of this section, we let $\kappa = (\xi, \eta) = (a+bi, c+di) \in \C^2_\times$ where $a,b,c,d \in \R$.
\begin{lem}
\label{Lem:null_flag_tricky_vector}
\label{Lem:null_flag_tricky_vector_PONF}
The 2-plane of the flag $\G \circ \F (\kappa)$ intersects any 3-plane of constant $T$ in a 1-dimensional line, and the orientation on the flag yields an orientation on this line.
The oriented line's direction is \[ v = \g (D_\kappa \f(\ZZ(\kappa))) = 2 \left( 0, 2(cd-ab), a^2 - b^2 + c^2 - d^2, 2(ad+bc) \right). \] \end{lem} To see why $v$ has $T$-component zero, observe that $\kappa$ lies in a $3$-sphere $S^3_r$ of radius $r = |\xi|^2 + |\eta|^2 > 0$, and by \reflem{C2_to_R31_Hopf_fibrations}, each such 3-sphere maps under $\g \circ \f$ to a constant-$T$ slice of $L^+$, namely $L^+ \cap \{T=r^2\}$. Now the tangent vector $\ZZ(\kappa)$ at $\kappa$ in $\C^2$ is in fact tangent to $S^3_r$. Indeed, as discussed in \refsec{Z}, regarding $\kappa$ as a quaternion, $\ZZ(\kappa) = - \pmb{k} \kappa$, so that $\ZZ(\kappa)$ is orthogonal to the position vector of $\kappa$. Thus, under $D_\kappa (\g \circ \f) = \g \circ D_\kappa \f$, the vector $\ZZ(\kappa)$ tangent to $S^3_r$ is mapped to a tangent vector to $L^+ \cap \{ T = r^2 \}$, hence has $T$-component zero. The expressions for $p$ and $v$ look quite similar. Indeed, their $X,Y,Z$ coordinates can be obtained from each other by permuting variables, coordinates, and signs. As we see in the next section, this is not a coincidence. In any case, we now calculate this vector. \begin{proof} Using \refdef{Z_C2_to_C2_and_J} and \refeqn{derivative_flag_dirn}, we calculate \begin{align*} D_\kappa \f (\ZZ(\kappa)) &= \kappa \kappa^T J + J \overline{\kappa} \kappa^* = \begin{pmatrix} \xi \\ \eta \end{pmatrix} \begin{pmatrix} \xi & \eta \end{pmatrix} \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} + \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} \begin{pmatrix} \overline{\xi} \\ \overline{\eta} \end{pmatrix} \begin{pmatrix} \overline{\xi} & \overline{\eta} \end{pmatrix} \\ &= \begin{pmatrix} -i \xi \eta & i \xi^2 \\ -i \eta^2 & i \xi \eta \end{pmatrix} + \begin{pmatrix} i \overline{\xi \eta} & i \overline{\eta}^2 \\ -i \overline{\xi^2} & -i \overline{\xi \eta} \end{pmatrix} = \begin{pmatrix} i \left( \overline{\xi \eta} - \xi \eta \right) & i \left( \xi^2 + \overline{\eta}^2 \right) \\ -i \left( \overline{\xi}^2 + \eta^2 \right) & i \left( \xi \eta - \overline{\xi \eta} \right) \end{pmatrix} \end{align*} Thus, applying \refdef{g_H_to_R31}, \begin{align} v = \g \left( D_\kappa \f(\ZZ(\kappa)) \right) &= \left( 0, 2 \Re \left( i \left( \xi^2 + \overline{\eta}^2 \right) \right), 2 \Im \left( i \left( \xi^2 + \overline{\eta}^2 \right) \right), 2i \left( \overline{\xi \eta} - \xi \eta \right) \right) \nonumber \\ \label{Eqn:flag_direction_in_terms_of_alpha_beta} &= \left( 0, -2 \Im \left( \xi^2 + \overline{\eta}^2 \right), 2 \Re \left( \xi^2 + \overline{\eta}^2 \right), 4 \Im \left( \xi \eta \right) \right), \end{align} using the identities $i(\overline{z}-z) = 2 \Im z$, $\Re(iz) = -\Im(z)$ and $\Im(iz) = \Re(z)$. We then directly calculate \begin{align*} \xi^2 + \overline{\eta}^2 &= (a+bi)^2 + (c-di)^2 = a^2 - b^2 +c^2 - d^2 + 2(ab-cd)i, \\ \xi \eta &= (a+bi)(c+di) = ac-bd + (ad+bc)i \end{align*} and substituting real and imaginary parts give the desired expression for $v$. Since $v$ has $T$-coordinate $0$, when we intersect $V$ with a 3-plane $T = $ constant, $V$ yields a line in the direction of $v$. The orientation on $V/\R p$ given by $v$ yields the orientation on this line given by $v$. \end{proof} \begin{eg} \label{Eg:flag_of_simple_spinors} Let us compute the flag of the spinor $\kappa_0 = (1,0)$. By direct calculation, or using \reflem{spin_vector_to_TXYZ}, we have $\g \circ \f (\kappa_0) = (1, 0, 0, 1)$; let this point be $p_0$. From \reflem{null_flag_tricky_vector} we have \[ \G \circ \F (\kappa_0) = [[p_0, (0,0,1,0)]] \] i.e. 
the flag points in the $Y$-direction. The quotient $V/\R p_0$ is spanned and oriented by $(0,0,1,0)$. More generally, if we take $\kappa = (e^{i\theta}, 0)$, we obtain $\g \circ \f (\kappa_0) = (1,0,0,1) = p_0$ again, but now (again using \reflem{null_flag_tricky_vector} with $a=\cos \theta$, $b = \sin \theta$), we have \[ \G \circ \F(\kappa) = [[p_0, (0, -\sin 2\theta, \cos 2\theta, 0)]]. \] Now $V/\R p_0$ is spanned and oriented by the vector $(0,-\sin2\theta, \cos 2\theta, 0)$. Thus as $\kappa$ rotates from $(1,0)$ by an angle of $\theta$, multiplying $\kappa$ by $e^{i\theta}$, $p$ remains constant, but the flag rotates by an angle of $2\theta$. Indeed, as the direction is $(0,\sin(-2\theta),\cos(-2\theta),0)$, it may be better to say that the flag rotates by an angle of $-2\theta$. \end{eg} We will next see that this principle applies to spinors generally: multiplying a spinor by $e^{i\theta}$ rotates a flag by $-2\theta$, in an appropriate sense. \subsubsection{Rotating flags} \label{Sec:rotating_flags} Given $p\in L^+$, we now consider the set of flags $(p,V,o)$ based at $p$. We first consider which 2-planes $V$ may arise, and for this we need a description of the tangent space to the light cone. \begin{lem} \label{Lem:light_cone_orthogonal_complement} At any $p \in L^+$, the tangent space to $L^+$ is the orthogonal complement $p^\perp$ with respect to the Minkowski inner product: \[ T_p L^+ = \{ v \in \R^{1,3} \mid \langle p,v \rangle = 0 \} = p^\perp. \] \end{lem} \begin{proof} A smooth curve $p(s)$ on $L^+$ passing through $p(0) = p$ satisfies $\langle p(s),p(s) \rangle = 0$ for all $s$. Differentiating and setting $s=0$ yields $\langle p, p'(0) \rangle = 0$ Thus $T_p L^+ \subseteq p^\perp$. As both are 3-dimensional linear subspaces they are equal. \end{proof} Thus, the 2-planes $V$ which may arise in a flag based at $p \in L^+$ are precisely those satisfying $\R p \subset V \subset p^\perp = T_p L^+$. Since $p \in L^+$, $p$ has positive $T$-coordinate, so the ray $\R p$ is transverse to any 3-plane $T =$ constant; moreover, $V$ and $p^\perp$ are also transverse to $T=$ constant. Thus such a $V$ intersects a 3-plane $T=$ constant in a line, which also lies in $p^\perp$. Conversely, a line in a 3-plane $T=$ constant, which also lies in $p^\perp$ spans, together with $p$, a 2-plane $V$ such that $\R p\subset V \subset p^\perp$. So the 2-planes $V$ arising in pointed null flags starting from $p$ can be characterised via their 1-dimensional intersections with 3-planes of constant $T$. The intersections of such 2-planes $V$ with the 3-plane $T=0$ are precisely the 1-dimensional subspaces of the 2-plane $\{T=0\} \cap p^\perp$. A flag also includes an orientation $o$ on $V/\R p$. As $p$ has positive $T$-coordinate, each vector in $V/\R p$ has a unique representative with $T$-coordinate zero, giving an isomorphism $V/\R p \cong V \cap \{T=0\}$. The orientation $o$ on $V/\R p$ is thus equivalent to an orientation on the 1-dimensional subspace $V \cap \{T=0\}$. Thus, the flags based at $p$ can be characterised by their oriented intersections with $\{T=0\}$, and correspond precisely to the oriented 1-dimensional subspaces of the 2-plane $\{T=0\} \cap p^\perp$. There is an $S^1$ family of oriented lines through the origin in a 2-plane, and so there is an $S^1$ family of flags based at $p$. To investigate how flags rotate, we set up a useful basis. Let $\kappa = (\xi, \eta) = (a+bi, c+di) \in \C^2_\times$ where $a,b,c,d \in \R$, and let $|\xi|^2+|\eta|^2=r^2$, where $r>0$. 
Also let $S^3_r = \{ \kappa \in \C^2 \, \mid \, |\xi|^2 + |\eta|^2 = r^2 \}$ be the 3-sphere of radius $r>0$ in $\C^2$. The corresponding flag $\G \circ \F(\kappa)$ is $[[p,v]]$ where $p = \g \circ \f (\kappa) \in L^+$ and $v = \g \circ D_\kappa \f (\ZZ(\kappa)) \in T_p L^+$ (\reflem{GoF_in_pv_form}). We calculated $p$ and $v$ explicitly in \reflem{spin_vector_to_TXYZ} and \reflem{null_flag_tricky_vector}. In \refsec{calculating_flags_Minkowski} we observed the algebraic similarity between the expressions for $p$ and $v$. We now extend them to provide a useful basis of the $XYZ$ 3-plane.
The $T$-coordinate of $p$ is $r^2$, so $p \in L^+ \cap \{T=r^2\}$, which is a 2-sphere of Euclidean radius $r^2$ in the 3-plane $T=r^2$ in Minkowski space. Indeed $L^+ \cap \{T=r^2\} = r^2 \S^+$, where the celestial sphere $\S^+ = L^+ \cap \{T=1\}$ is the unit sphere in the plane $T=1$ (\refdef{celestial_sphere}(ii)). Moreover, as observed in \reflem{C2_to_R31_Hopf_fibrations}, $\g \circ \f$ restricts to a Hopf fibration $S^3_r \To r^2 \S^+$. Thus the projection of $p$ to the $XYZ$ 3-plane has Euclidean length $r^2$. Similarly (because of the algebraic similarity of $p$ and $v$), one can check that the $XYZ$-projection of $v$ has length $2r^2$. Since $v \in T_p L^+ = p^\perp$ we have $\langle p, v \rangle = 0$, and since the $T$-coordinate of $v$ is $0$ (\reflem{null_flag_tricky_vector} and discussed in \refsec{calculating_flags_Minkowski}), we deduce that the $XYZ$-projections of $p$ and $v$ are orthogonal in $\R^3$. Thus, taking the $XYZ$-projection of $p$ and half the $XYZ$-projection of $v$, they extend naturally to an orthogonal basis in which all vectors have length $r^2$. When $r=1$, i.e. $\kappa \in S^3$, we saw in \reflem{gof_Hopf} that the $XYZ$-projection of $\g \circ \f$ is the Hopf fibration composed with stereographic projection. And in this case we obtain an orthonormal basis.
\begin{lem}
\label{Lem:orthonormal_basis_from_spinor}
For any $\kappa \in \C^2_\times$, the vectors $e_1(\kappa), e_2(\kappa), e_3(\kappa)$ below all have length $r^2$ and form a right-handed orthogonal basis of $\R^3$. Moreover, identifying $\R^3$ with the $T=0$ plane in $\R^{1,3}$, $e_1(\kappa)$ and $e_2 (\kappa)$ form an orthogonal basis for the 2-plane $\{T=0\} \cap p^\perp$.
\[
\begin{array}{rll}
e_1 (\kappa) &= \left( a^2 - b^2 - c^2 + d^2, \; 2(ab+cd), 2(bd-ac) \right) &= \frac{1}{2} \pi_{XYZ} \circ \g \circ D_\kappa \f \left( i \ZZ(\kappa) \right) \\
e_2 (\kappa) &= \left( 2(cd-ab), \; a^2 - b^2 + c^2 - d^2, \; 2(ad+bc) \right) &= \frac{1}{2} \pi_{XYZ} (v) = \frac{1}{2} \pi_{XYZ} \circ \g \circ D_\kappa \f \left( \ZZ(\kappa) \right)\\
e_3(\kappa) &= \left( 2(ac+bd), \; 2(bc-ad), \; a^2 + b^2 - c^2 - d^2 \right) &= \pi_{XYZ} (p) = \frac{1}{2} \pi_{XYZ} \circ \g \circ D_\kappa \f (\kappa) \\
\end{array}
\]
\end{lem}
In \reflem{structure_of_derivative_of_f} we identified 3 vectors $\kappa, \ZZ(\kappa), i \ZZ(\kappa) \in \C^2$, which are orthogonal and have equal length $r$; at $\kappa$ they consist of a radial vector and two tangent vectors to $S^3_r$. We showed that their images under the derivative of $\f$ spanned the image of $D_\kappa \f$. Here we calculate that their images under the derivative of $\g \circ \f$, projected to the $XYZ$ 3-plane, are also orthogonal and have equal length; indeed, up to a factor of $2$, they are the $e_i(\kappa)$, of length $r^2$.
\begin{proof}
These are direct calculations. In addition to the preceding lemmas mentioned above giving $e_2(\kappa)$ and $e_3 (\kappa)$, we can also use \reflem{derivatives_of_f_in_easy_directions} that $D_\kappa \f (\kappa) = 2 \f(\kappa)$. A similar method to that in the proof of \reflem{null_flag_tricky_vector}, using \refeqn{derivative_formula}, gives $e_1 (\kappa)$. One can check that the cross product of the first and second vectors yields $a^2 + b^2 + c^2 + d^2 = r^2$ times the third, so we have the correct orientation.
Now $p = (r^2, e_3(\kappa))$, using \reflem{spin_vector_to_TXYZ}. When regarded in $\R^{1,3}$, the $e_i$ have $T$-coordinate zero, so $\langle p, e_i \rangle = - e_3 \cdot e_i$, which is zero for $i=1,2$. Thus $e_1, e_2 \in \{T=0\} \cap p^\perp$. Since $e_1, e_2$ are orthogonal, and since as argued above $\{T=0\} \cap p^\perp$ is 2-dimensional, we have an orthogonal basis.
\end{proof}
We now have an explicit picture of the intersection of the flag of $\kappa$ with the 3-plane $T=r^2$ of Minkowski space. In this 3-plane, the light cone appears as a 2-sphere of radius $r^2$, $p$ appears at $e_3 (\kappa)$, and the tangent space to the light cone $T_p L^+ = p^\perp$ appears as the tangent 2-plane to the 2-sphere at $p$. The flag 2-plane appears as an oriented line through $p$ in the direction of $e_2 \sim v$; the possible flag 2-planes based at $p$ appear as oriented lines through $p$ tangent to the 2-sphere. See \reffig{flag_intersect_T_r_squared}.
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw[blue] (0,0) ellipse (1.5cm and 0.25cm);
\fill[white] (-1.5,-0.25)--(1.5,-0.25)--(1.5,0.05)--(-1.5,0.05);
\draw[dashed,blue] (0,0) ellipse (1.5cm and 0.25cm);
\shade[ball color = blue!40, opacity = 0.1] (0,0) circle (1.5cm);
\draw[blue] (0,0) circle (1.5cm);
\shade[ball color=green!40,opacity=0.1] (-0.25,1)--(0.75,0)--(1.75,0.5)--(0.75,1.5)--(-0.25,1);
\draw[green!50!black] (-0.25,1)--(0.75,0)--(1.75,0.5)--(0.75,1.5)--(-0.25,1);
\fill (0.75,0.75) circle (0.04cm);
\draw[blue, ->] (0,0)--(0.75,0.75);
\draw[green!50!black,->](0.75,0.75)--(1.5,0.45);
\draw[green!50!black,->] (0.75,0.75)--(0.75,1.4);
\node at (-2,1){$T=r^2$};
\node at (-2.5,0.25){$Z$};
\node at (-1.5,-0.75){$X$};
\node at (-1.85,-0.1){$Y$};
\draw[<->](-2.5,0)--(-2.5,-0.75)--(-1.75,-0.75);
\draw[->](-2.5,-0.75)--(-2,-0.25);
\node at (0.95,0.95){$p$};
\node at (0.5,0.3){\small$e_3$};
\node at (0.25,1.25){\small$e_2=v$};
\node at (1.25,0.4){\small$e_1$};
\node at (1.5,-1){\footnotesize$L^+$};
\draw[dashed] (0.6,0.6)--(0.8,0.5)--(0.95,0.65);
\draw[dashed] (0.6,0.6)--(0.6,0.8)--(0.75,0.95);
\draw[dashed] (0.95,0.65)--(0.9,0.9)--(0.75,0.95);
\end{tikzpicture}
\captionof{figure}{The intersection of the light cone, tangent space, and flag with the plane $T = r^2$.}
\label{Fig:flag_intersect_T_r_squared}
\end{center}
As an aside, we note that
\[
\kappa = (\xi, \eta) \in S^3 \quad \text{corresponds to a matrix} \quad \begin{pmatrix} \xi & - \overline{\eta} \\ \eta & \overline{\xi} \end{pmatrix} \in SU(2),
\]
which in turn corresponds to a rotation of $\R^3$, under the standard double covering map $SU(2) \To SO(3)$ (a restriction of the double cover $SL(2,\C) \To SO(1,3)^+$ considered at length here). The images of the standard basis vectors in $\R^3$ under this rotation are precisely the $e_i (\kappa)$ here.
When $\kappa = (1,0)$, from \refeg{flag_of_simple_spinors}, $e_1, e_2, e_3$ are just unit vectors in the $X,Y,Z$ directions respectively, and we calculated that multiplying $\kappa$ by $e^{i\theta}$ preserved $e_3$ (the $XYZ$-projection of $\g \circ \f(\kappa)$) but rotated the flag direction $e_2$ by $-2\theta$ about $e_3$. We now show this holds in general.
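Before doing so, it may be a useful check of \reflem{orthonormal_basis_from_spinor} (purely illustrative, and not needed in the sequel) to take $\kappa = (1,1)$, so $a = c = 1$, $b = d = 0$ and $r^2 = 2$. The formulas give
\[
e_1(\kappa) = (0,0,-2), \quad e_2(\kappa) = (0,2,0), \quad e_3(\kappa) = (2,0,0),
\]
which are mutually orthogonal, each of length $2 = r^2$, and satisfy $e_1(\kappa) \times e_2(\kappa) = (4,0,0) = r^2 \, e_3(\kappa)$; moreover $p = \g \circ \f(\kappa) = (2,2,0,0)$, whose $XYZ$-projection is indeed $e_3(\kappa)$.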
In general, a rotation of $\R^3$ about $e_3$ by angle $\theta$ fixes $e_3$, sends $e_1 \mapsto e_1 \cos \theta + e_2 \sin \theta$, and $e_2 \mapsto -e_1 \sin \theta + e_2 \cos \theta$. \begin{lem} \label{Lem:flag_basis_rotation} Each $e_i (e^{i\theta} \kappa)$ is obtained from $e_i (\kappa)$ by a rotation of angle $-2\theta$ about $e_3 (\kappa)$. \end{lem} \begin{proof} We first observe that $\f(\kappa) = \f(e^{i\theta} \kappa)$ (\reflem{when_f_equal}) implies $e_3 (\kappa) = e_3 (e^{i \theta} \kappa)$. We now calculate $e_2 (e^{i\theta} \kappa)$ directly. In \refeqn{flag_direction_in_terms_of_alpha_beta} we calculated an expression for $\g \circ D_\kappa \f (\ZZ(\kappa))$ in terms of $(\xi, \eta)$; replacing them with $e^{i\theta} (\xi, \eta)$ we obtain \[ \g \circ D_\kappa \f (\ZZ (e^{i \theta} \kappa)) = \left( 0, -2 \Im \left( e^{2 i \theta} \xi^2 + e^{-2i\theta} \overline{\eta}^2 \right), 2 \Re \left( e^{2 i \theta} \xi^2 + e^{-2i\theta} \overline{\eta}^2 \right), 4 \Im \left( e^{2 i \theta} \xi \eta \right) \right). \] Now direct computations yield \begin{align*} e^{2 i \theta} \xi^2 + e^{-2i\theta} \overline{\eta}^2 &= \left( (a^2-b^2+c^2-d^2) \cos 2\theta - 2(ab+cd) \sin 2\theta \right) \\ & \quad \quad + i \left( 2(ab-cd) \cos 2\theta + (a^2 - b^2 - c^2 + d^2) \sin 2\theta \right) \\ e^{2i\theta} \xi \eta &= \left( (ac-bd) \cos 2\theta - (ad+bc) \sin 2\theta \right) + i \left( (ad+bc) \cos 2\theta + (ac-bd) \sin 2\theta \right) \end{align*} so that $\pi_{XYZ} \circ \g \circ D_\kappa \f (\ZZ (e^{i \theta} \kappa))$ is given by \begin{align*} 2 \Big( 2(cd-ab) \cos 2\theta &+ (-a^2 + b^2 + c^2 - d^2) \sin 2\theta, \; (a^2 - b^2 + c^2 - d^2) \cos 2\theta - 2(ab+cd) \sin 2\theta, \\ & \quad \quad \quad 2(ad+bc) \cos 2\theta + 2(ac-bd) \sin 2\theta \Big) \end{align*} hence $e_2 (e^{i \theta} \kappa) = \frac{1}{2} \pi_{XYZ} \circ \g \circ D_\kappa \f (\ZZ (e^{i \theta} \kappa))$ is given by \begin{align*} \cos 2\theta & \left( 2(cd-ab), a^2 - b^2 + c^2 - d^2, 2(ad+bc) \right) + \sin 2\theta \left( -a^2 + b^2 + c^2 - d^2, -2(ab+cd), 2(ac-bd) \right) \\ &= e_2 (\kappa) \cos (-2\theta) + e_1 (\kappa) \sin (-2\theta) \end{align*} Thus both $e_2$ and $e_3$ behave as claimed. Since $e_1 (e^{i\theta} \kappa)$ forms a right-handed orthonormal basis with $e_2 (e^{i\theta} \kappa)$ and $e_3 (e^{i\theta} \kappa)$, the same must be true of $e_1$. \end{proof} \subsubsection{Surjectivity of maps to flags} \label{Sec:F_surjectivity} We now show that all flags arise via the maps $\F$ and $\G$. \begin{prop} \label{Prop:F_G_surjective} The maps $\F$ and $\G \circ \F$ are surjective. \end{prop} \begin{proof} Since $\G$ is a bijection, it suffices to prove $\G \circ \F$ is a surjection $\C_\times^2 \To \mathcal{F_P^O}(\R^{1,3})$. As explained in \refsec{rotating_flags} above, there is an $S^1$ family of flags at a given basepoint $p \in L^+$, which can be characterised by their oriented 1-dimensional intersections with $\{T=0\}$, and these intersections are precisely the oriented 1-dimensional subspaces of the 2-plane $\{T=0\} \cap p^\perp$. \refsec{rotating_flags} essentially shows that multiplying a spinor by $e^{i\theta}$ fixes the basepoint of a flag, but rotates through this $S^1$ family of flags based at $p$ by an angle of $-2\theta$. To see this explicitly, take $\kappa \in \C^2_\times$, which yields the flag $\G \circ \F (\kappa) = [[p , \g \circ D_\kappa \f (\ZZ(\kappa))]]$ based at $p$, where $p = \g \circ \f (\kappa)$ (\reflem{GoF_in_pv_form}). 
Since $\g \circ D_\kappa \f (\ZZ(\kappa))$ has $T$-coordinate zero (\reflem{null_flag_tricky_vector}), the 2-plane of the flag intersects $\{T=0\}$ along $\g \circ D_\kappa \f (\ZZ(\kappa))$. So the flag $\G \circ \F (\kappa)$ corresponds to the oriented 1-dimensional subspace of $\{T=0\} \cap p^\perp$ given by $\g \circ D_\kappa \f (\ZZ(\kappa))$ or, if we regard $\R^3$ as the $T=0$ subset of Minkowski space, by $e_2 (\kappa)$. By \reflem{orthonormal_basis_from_spinor}, $e_1 (\kappa)$ and $e_2(\kappa) $ span the 2-plane $\{T=0\} \cap p^\perp$. By \reflem{flag_basis_rotation}, multiplying $\kappa$ by $e^{i\theta}$ rotates this plane in $\R^3$ by an angle of $-2\theta$, about the orthogonal vector $e_3 (\kappa)$. Thus as $\theta$ ranges through $[0,2\pi]$ (or even just $[0,\pi)$), all flags based at $p$ are obtained. Thus, if $\G \circ \F$ contains in its image a flag based at a point $p \in L^+$, then it contains all flags based at $p$. It thus remains to show that all points of $L^+$ arise in the image of $\g \circ \f$. But we showed this in \reflem{gof_properties}. \end{proof} \begin{lem} \label{Lem:F_G_2-1} The maps $\F$ and $\G \circ \F$ are 2--1. More precisely, $\F(\kappa) = \F(\kappa')$ iff $\G \circ \F (\kappa) = \G \circ \F (\kappa')$ iff $\kappa = \pm \kappa'$. \end{lem} \begin{proof} Again as $\G$ is a bijection it suffices to show that $\G \circ \F$ is 2--1. Suppose two spinors $\kappa, \kappa'$ yield the same flag. Then in particular these flags have the same basepoint $p$, i.e. $\g \circ \f (\kappa) = \g \circ \f (\kappa') = p$. Hence $\kappa' = e^{i \theta} \kappa$ (\reflem{gof_properties}). We have seen (\reflem{flag_basis_rotation}) that the flag of $e^{i \theta} \kappa$ is is obtained from that of $\kappa$ by rotation by an angle of $-2\theta$ through the $S^1$ family of flags based at $p$. This $S^1$ family is characterised by the family of oriented lines in a 2-dimensional Euclidean plane, namely $\{T=0\} \cap p^\perp$. Thus, rotating a flag, we obtain the same flag when the rotation angle is an integer multiple of $2\pi$. Thus $\kappa = \pm \kappa'$. The converse follows equally from these observations: $-\kappa = e^{i\pi} \kappa$ has flag obtained from that of $\kappa$ by a rotation of $-2\pi$, hence yields the same flag. \end{proof} (If we ignore orientations, and consider only pointed null flags as per \refdef{pointed_null_flag}, then flags coincide when they are rotated by $\pi$ rather than $2\pi$, yielding 4--1 rather than 2--1 maps.) We point out that there should be an extension of \refprop{complex_Minkowski_inner_products} using rotations between flags. There we found that for two spinors $\kappa, \kappa'$, the magnitude of $\{\kappa, \kappa'\}$ gave the Minkowski inner product of $p = \g \circ \f (\kappa)$ and $p' = \g \circ \f (\kappa')$. The argument of $\{\kappa, \kappa'\}$ should be related to the angles between the geodesic connecting $p$ to $p'$, and the flag directions of $\G \circ \F(\kappa), \G \circ \F (\kappa')$ at $p,p'$ respectively (or indeed, the directions $e_2(\kappa), e_2 (\kappa')$. \subsection{From Minkowski space to the hyperboloid model} \label{Sec:Minkowski_to_hyperboloid} The third step in our journey is from Minkowski space to the hyperboloid model; we now finally enter hyperbolic space. We define the map $\h$ from the light cone to horospheres, and the map $\H$ from flags to decorated horospheres. We proceed as follows. 
We first introduce and discuss the hyperboloid model (\refsec{hyperboloid_model}) and horospheres (\refsec{horospheres}). In \refsec{light_cone_to_horosphere} we define and discuss the map $\h$; in \refsec{SL2C_on_hyperboloid} we prove it is $SL(2,\C)$-equivariant. We briefly digress in \refsec{distances_between_horospheres} to discuss distances between horospheres, and how they can be found from spinors. In \refsec{flags_and_horospheres} we introduce the map $\H$, which produces an oriented line field on a horosphere; however at this stage we do not know that the line field is parallel. In \refsec{examples_from_10} we compute in detail flags and horospheres and decorations from the single spinor $(1,0)$; this work then pays off in \refsec{parallel_line_fields} when we show that oriented line fields obtained from $\H$ are parallel. In \refsec{decorated_horospheres} we define decorated horospheres and show $\H$ is a bijection. Finally, in \refsec{SL2c_on_decorated_horospheres} we show $\H$ is $SL(2,\C)$-equivariant. \subsubsection{The hyperboloid model} \label{Sec:hyperboloid_model} \begin{defn} The \emph{hyperboloid model} $\hyp$ is the Riemannian submanifold of $\R^{1,3}$ consisting of $x = (T,X,Y,Z) \in \R^{1,3}$ such that \[ T>0 \quad \text{and} \quad \langle x,x \rangle = T^2 - X^2 - Y^2 - Z^2 = 1, \] with metric $ds^2 = dX^2 + dY^2 + dZ^2 - dT^2$. \end{defn} To see that $\hyp$ is a Riemannian (not Lorentzian or semi-Riemannian) manifold, observe that, by essentially the same proof as \reflem{light_cone_orthogonal_complement} for the light cone (which, like the hyperboloid, is part of a level set of the Minkowski norm function), we have, for any $q \in \hyp$, \begin{equation} \label{Eqn:hyperboloid_tangent_space} T_q \hyp = q^\perp. \end{equation} As $q$ by definition has timelike position vector, all nonzero vectors in $q^\perp$ are spacelike. Thus all nonzero tangent vectors to $\hyp$ are spacelike. Reversing the sign of the metric on $\R^{1,3}$, we have a positive definite Riemannian metric on $\hyp$. The cross section of $\hyp$ with a 3-plane of constant $T \geq 1$ is a Euclidean 2-sphere (of radius $\sqrt{T^2-1}$). The cross section of $L^+$ with such a 3-plane is also a Euclidean 2-sphere (of radius $T$). When $T$ becomes large, these 2-spheres become arbitrarily close and represent the possible directions of geodesics from a point in $\hyp$. Thus we may regard the \emph{sphere at infinity} of $\hyp$, which we write as $\partial \hyp$, as the celestial sphere $\S^+$ (the projectivisation of $L^+$, \refdef{celestial_sphere}(i)). We denote the isometry group of $\hyp$ by $\Isom \hyp$, and its subgroup of orientation-preserving isometries by $\Isom^+ \hyp$. It is well known that $\Isom \hyp \cong O(1,3)^+$ and $\Isom^+ \hyp \cong SO(1,3)^+$, acting by linear transformations on $\R^{1,3}$. We saw a few examples in \refsec{Minkowski_space_and_g} of how the action of $SL(2,\C)$ gives rise to linear transformations of $\R^{1,3}$ in $SO(1,3)^+$. It is well known that this map $SL(2,\C) \To SO(1,3)^+$ is a surjective homomorphism which is 2--1, with kernel $\pm I$. \subsubsection{Horospheres} \label{Sec:horospheres} Horospheres in $\hyp$ are given by intersection with certain 3-planes $\Pi$ in $\R^{1,3}$; we now say precisely which. As mentioned in \refsec{intro_horospheres_decorations}, they are analogous to 2-planes which cut out parabolic conic sections. \begin{lem} Let $\Pi$ be an affine 3-plane in $\R^{1,3}$. The following are equivalent. 
\begin{enumerate}
\item $\Pi$ has a lightlike tangent vector, and no timelike tangent vector.
\item There exist a lightlike vector $n$ and $c \in \R$ so that $\Pi=\{x \in \R^{1,3}|\langle x, n \rangle = c \}$.
\item $\Pi$ is parallel to $n^\perp$ where $n$ is lightlike.
\end{enumerate}
We call such a plane a \emph{lightlike 3-plane}.
\end{lem}
\begin{proof}
Let $n$ be a Minkowski normal vector to $\Pi$, so that $\Pi=\{x\in\R^{1,3}|\langle x, n \rangle = c\}$ for some $c\in\R$. Such $n$ is unique up to a nonzero real scalar; we take it to be future pointing, i.e. have non-negative $T$-coordinate. The tangent space to $\Pi$ is then the orthogonal complement $n^\perp$, and $\Pi$ is parallel to $n^\perp$.
If $n$ is lightlike, after changing basis by a rotation in the $XYZ$ 3-plane (which is an isometry in $SO(1,3)^+$), we may arrange that $n = (T,X,0,0)$ where $T,X>0$ (in fact $T=X$). Similarly, if $n$ is spacelike (resp. timelike) then by a change of basis by boost in the $XT$ 2-plane, we may assume $n = (0,X,0,0)$ and $X>0$ (resp. $(T,0,0,0)$ and $T>0$).
If $n$ is spacelike, $n=(0,X,0,0)$ then $n^\perp$ contains $(1,0,0,0)$, which is timelike. Thus none of (i)--(iii) hold. Similarly, if $n$ is timelike, $n=(T,0,0,0)$, then $n^\perp=\{p=(T,X,Y,Z)|\ T=0\}$, so every nonzero vector in $n^\perp$ is spacelike, and again none of (i)--(iii) hold. If $n$ is lightlike, $n=(T,X,0,0)$ with $T,X>0$, then $n^\perp=\{x = (T,X,Y,Z)|\ T=X\}$. Any such $x$ satisfies $\langle x,x \rangle = -Y^2-Z^2 \leq 0$ so is lightlike or spacelike. Thus all of (i)--(iii) hold.
\end{proof}
Not all lightlike 3-planes intersect $\hyp$; some pass below (in the past of) the positive light cone.
\begin{lem}
\label{Lem:plane_intersect_hyperboloid}
A lightlike 3-plane $\Pi$ satisfies $\Pi\cap\hyp\neq\emptyset$ iff $\Pi=\{x\in\R^{1,3}|\langle x, n \rangle = c,\ n \in L^+,\ c>0\}$ for some $n$ and $c$.
\end{lem}
Any lightlike 3-plane has an equation $\langle x,n \rangle = c$ where $n \in L^+$; the point here is that only those with $c>0$ intersect $\hyp$.
\begin{proof}
Let $\Pi$ have equation $\langle x,n \rangle = c$ with $n \in L^+$. By a change of basis in $SO(1,3)^+$, we may assume $n = (1,1,0,0)$. Such a change of basis preserves $\langle \cdot, \cdot \rangle$ and $L^+$, hence $\Pi$ is given by an equation of the desired form iff its equation satisfies the desired form after this change of basis.
The 3-plane $\Pi$ then has equation $T-X=c$. The plane intersects $\hyp$ iff there exist $(T,X,Y,Z)$ such that $T-X=c$, $T>0$ and $T^2 - X^2 - Y^2 - Z^2 = 1$. Substituting the former into the latter yields $T^2 - (T-c)^2 - Y^2 - Z^2 = 1$, i.e. $2cT - c^2 - Y^2 - Z^2 = 1$. If $c \leq 0$ then, as $T>0$, every term on the left is non-positive and we have a contradiction. If $c>0$ then there certainly are solutions, for instance $(T,X,Y,Z) = \left( \frac{1+c^2}{2c}, \frac{1-c^2}{2c}, 0, 0 \right)$.
\end{proof}
\begin{defn}
\label{Def:set_of_horospheres}
A \emph{horosphere} in $\hyp$ is a non-empty intersection of $\hyp$ with a lightlike 3-plane. The set of all horospheres in $\hyp$ is denoted $\mathfrak{H}(\hyp)$.
\end{defn}
It is perhaps not obvious that this definition agrees with \refdef{intro_horosphere}; it is better seen via other models. In any case, a lightlike 3-plane $\Pi$ intersecting $\hyp$ determines a horosphere $\mathpzc{h}$; and conversely, $\mathpzc{h}$ determines the plane $\Pi$ as the unique affine 3-plane containing $\mathpzc{h}$.
So there is a bijection \[ \{ \text{Lightlike 3-planes $\Pi$ such that $\Pi \cap \hyp \neq \emptyset$} \} \To \mathfrak{H}(\hyp), \] given by intersection with $\hyp$. A horosphere determines a distinguished point at infinity, i.e. ray on the light cone, as follows. \begin{lem} \label{Lem:horosphere_centre_exists} Let $\mathpzc{h} \in \mathfrak{H}(\hyp)$ be the intersection of $\hyp$ with the lightlike 3-plane $\Pi$ with equation $\langle x,n \rangle = c$, where $n \in L^+$ and $c>0$. Then $\Pi$ intersects every ray of $L^+$ except the ray containing $n$. \end{lem} \begin{proof} The 3-plane $\Pi$ is parallel to, and disjoint from, the 3-plane $n^\perp$, which contains the ray of $L^+$ through $n$. Thus $\Pi$ does not intersect the ray containing $n$. To see that $\Pi$ intersects every other ray, let $p \in L^+$ be a point not on the ray through $n$. By a change of basis as in \reflem{plane_intersect_hyperboloid}, we may assume $n=(1,1,0,0)$, so $\Pi$ has equation $T-X=c$. Let $p = (T_0, X_0, Y_0, Z_0)$. Note that $T_0 > X_0$, for if $T_0 \leq X_0$ then $T_0^2 \leq X_0^2$ so $0 = \langle p,p \rangle = T_0^2 - X_0^2 - Y_0^2 - Z_0^2 \leq -Y_0^2 - Z_0^2$, so $Y_0 = Z_0 = 0$, so $p$ is on the ray through $n$. We then observe that the point $cp/(T_0 - X_0)$ lies on both the ray through $p$ (since it is a positive multiple of $p$), and $\Pi$ (since the $T$-coordinate $cT_0/(T_0 - X_0)$ and $X$-coordinate $cX_0/(T_0-X_0)$ differ by $c$). \end{proof} \begin{defn} Let $\mathpzc{h} \in \mathfrak{H}(\hyp)$, corresponding to the lightlike 3-plane $\Pi$. The \emph{centre} of $\mathpzc{h}$ is the unique point of $\partial \hyp \cong \S^+$ such that $\Pi$ does not intersect the corresponding ray of $L^+$. \end{defn} Here we regard $\S^+$ as the projectivisation of $L^+$, \refdef{celestial_sphere}(i). By \reflem{horosphere_centre_exists}, if $\Pi$ has equation $\langle x, n \rangle = c$ where $n \in L^+$ and $c>0$, then the centre of $\mathpzc{h}$ is the point of $\S^+$ corresponding to the ray through the normal vector $n$. \begin{defn} Let $\mathpzc{h}$ be a horosphere, corresponding to the 3-plane $\Pi$. The \emph{horoball} bounded by $\mathpzc{h}$ is the subset of $\hyp$ bounded by $\h$, on the same side of $\Pi$ as its centre. The \emph{centre} of a horoball is the centre of its bounding horosphere. \end{defn} We may regard a horoball as a neighbourhood in $\hyp$ of its centre, a point at infinity in $\partial \hyp$. {\flushleft \textbf{Remark.} } A horosphere appears in the hyperboloid model as a 2-dimensional paraboloid. To see this, again as in \reflem{plane_intersect_hyperboloid} we may change basis in $SO(1,3)^+$ and assume the lightlike 3-plane has equation $T-X=c$ where $c>0$ (we could in fact obtain equation $T-X=1$). Eliminating $T$ from $T-X=c$ and $T^2-X^2-Y^2-Z^2=1$ yields $(X+c)^2-X^2-Y^2-Z^2=1$, so $2cX-Y^2-Z^2=1-c^2$, hence $X=\frac{1}{2c} \left( Y^2 +Z^2 + 1-c^2 \right)$, which is the equation of a 2-dimensional paraboloid in $\R^3$. Thus the horosphere is the image of the paraboloid $X=\frac{1}{2c} \left( Y^2 +Z^2 + 1-c^2 \right)$ in $\R^3$ under the injective linear map $\R^3 \To \R^{1,3}$ given by $(X,Y,Z) \mapsto (X+c,X,Y,Z)$. This remark makes clear that a horosphere has the topology of a 2-plane. In fact, a horosphere is isometric to the Euclidean plane; this is easier to see in other models of hyperbolic space. 
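For a concrete instance of the above remark (purely illustrative): taking $c=1$, the lightlike 3-plane $T - X = 1$ meets $\hyp$ in the horosphere
\[
\left\{ \left( \tfrac{Y^2+Z^2}{2} + 1, \; \tfrac{Y^2+Z^2}{2}, \; Y, \; Z \right) \;\middle|\; Y, Z \in \R \right\},
\]
as one checks directly: each such point has $T>0$, $T - X = 1$, and $T^2 - X^2 - Y^2 - Z^2 = (T-X)(T+X) - Y^2 - Z^2 = (Y^2 + Z^2 + 1) - Y^2 - Z^2 = 1$.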
\subsubsection{The map from the light cone to horospheres} \label{Sec:light_cone_to_horosphere} The following idea, assigning horospheres to points of $L^+$, goes back at least to Penner \cite{Penner87}, at least in 2-dimensional hyperbolic space. \begin{defn} \label{Def:h} There is a bijection \[ \h \colon L^+ \To \horos(\hyp) \] which sends $p \in L^+$ to the horosphere $\mathpzc{h}$ given by the intersection of $\hyp$ with the lightlike 3-plane with equation $\langle x, p \rangle = 1$. \end{defn} \begin{proof} If $p \in L^+$ then by \reflem{plane_intersect_hyperboloid} the 3-plane $\langle x, p \rangle = 1$ is lightlike and intersects $\hyp$ nontrivially, yielding a horosphere, so the map is well defined. To show $\h$ is bijective, we construct its inverse. So let $\mathpzc{h}$ be a horosphere, with corresponding lightlike 3-plane $\Pi$. By \reflem{plane_intersect_hyperboloid}, $\Pi$ has an equation of the form $\langle x, n \rangle = c$ where $n \in L^+$ and $c>0$. Dividing through by $c$, $\Pi$ has equivalent equation $\langle x, n/c \rangle = 1$. Now $n/c \in L^+$, and with the constant normalised to $1$, $\Pi$ has a unique equation of this form. Thus $n/c$ is the unique point in $L^+$ such that $\h(n/c) = \horo$. \end{proof} By \reflem{horosphere_centre_exists}, the horosphere $\h(p)$ has centre given by the ray through $p$. Let us consider the geometry of the map $\h$. As $p$ is scaled up or down by multiples of $c>0$, the 3-plane $\langle x, p \rangle = 1$ is translated through a family of lightlike 3-planes with common normal, namely the ray through $p$. This is because $\langle x, cp \rangle = 1$ is equivalent to $\langle x, p \rangle = \frac{1}{c}$. The family of lightlike 3-planes are disjoint, and their intersections with $\hyp$ yield a family of horospheres with common centre foliating $\hyp$. As $p$ goes to infinity, the 3-planes approach tangency with the light cone, and the corresponding horospheres also ``go to infinity", bounding decreasing horoballs, and eventually becoming arbitrarily far from any given point in $\hyp$. The set $\horos(\hyp)$ naturally has the topology of $S^2 \times \R$. For instance, a horosphere is uniquely specified by its centre, a point of $\partial \hyp \cong \S^+ \cong S^2$, and a real parameter specifying the position of $\horo$ in the foliation of $\hyp$ by horospheres about $p$. With this topology, $\h$ is a diffeomorphism. Forgetting everything about the horosphere except its centre, we obtain the following, which is useful in the sequel. \begin{defn} \label{Def:h_partial_light_cone_to_hyp} The map from the positive light cone to the boundary at infinity of $\hyp$ \[ \h_\partial \colon L^+ \To \partial \hyp = \S^+ \] sends $p$ to the centre of $\h(p)$. \end{defn} Since the centre of $\h(p)$ is the ray through $p$, $\h_\partial$ is just the projectivisation map collapsing each ray of $L^+ \cong S^2 \times \R$ to a point, producing $\S^+ = \partial \hyp$. The map $\h$ also provides a nice description of the tangent spaces of a horosphere. We demonstrate this after giving a straightforward lemma that will be useful in the sequel. \begin{lem} \label{Lem:lightlike_intersection} Let $q \in \hyp$ and $1 \leq k \leq 4$ be an integer. The intersection of the 3-plane $T_q \hyp = q^\perp$ with a $k$-plane $V \subset \R^{1,3}$ containing a lightlike or timelike vector is transverse, and hence $T_q \hyp \cap V$ has dimension $k-1$. 
\end{lem} \begin{proof} As $T_q \hyp$ is spacelike, but $V$ contains a lightlike or timelike vector, $T_q \hyp + V$ has dimension more than $3$, hence $4$. Thus the intersection is transverse, and the intersection is as claimed. \end{proof} \begin{lem} \label{Lem:tangent_space_of_horosphere} Let $p \in L^+$ and let $q$ be a point on the horosphere $\h(p)$. Then the tangent space $T_q \h(p)$ is the 2-plane given by the following transverse intersection of 3-planes: \[ T_q \h(p) = p^\perp \cap q^\perp. \] \end{lem} \begin{proof} Observe that $p^\perp$ is the tangent space to the 3-plane $\langle x,p \rangle = 1$ cutting out $\h(p)$, and $q^\perp$ is the tangent 3-plane to $\hyp$ at $q$, by \refeqn{hyperboloid_tangent_space}. So $T_q \h(p)$ is given as claimed. We explicitly calculated that horospheres are paraboloids, hence 2-dimensional manifolds, so the intersection must be transverse to obtain a 2-dimensional result. This can also be seen directly from \reflem{lightlike_intersection}, since $p^\perp$ contains the lightlike vector $p$. \end{proof} \subsubsection{$SL(2,\C)$ action on hyperboloid model} \label{Sec:SL2C_on_hyperboloid} We have seen that $SL(2,\C)$ acts on $\R^{1,3}$ in \refdef{SL2C_on_R31}, by linear maps in $SO(1,3)^+$. Linear maps in $SO(1,3)^+$ preserve the Minkowski metric, the positive light cone $L^+$, the hyperboloid $\hyp$, and lightlike 3-planes. They also send rays of $L^+$ to rays of $L^+$, send horospheres to horospheres, and act as orientation-preserving isometries on $\hyp$. Thus we can make the following definitions. \begin{defn} \ \label{Def:SL2C_action_on_hyperboloid_model} \begin{enumerate} \item $SL(2,\C)$ acts on $\hyp$ by restriction of its action on $\R^{1,3}$. \item $SL(2,\C)$ acts on $\partial \hyp$ by restriction of its action to $L^+$ and projectivisation to $\S^+ = \partial \hyp$. \item $SL(2,\C)$ acts on $\horos(\hyp)$ via its action on $\hyp$. \end{enumerate} \end{defn} \begin{lem} \ \label{Lem:h_equivariance} \begin{enumerate} \item The actions of $SL(2,\C)$ on $L^+$ and $\horos(\hyp)$ are equivariant with respect to $\h$. \item The actions of $SL(2,\C)$ on $L^+$ and $\partial \hyp$ are equivariant with respect to $\h_\partial$. \end{enumerate} That is, for $A \in SL(2,\C)$ and $p \in L^+$, \[ \h(A\cdot p) = A\cdot (\h(p)) \quad \text{and} \quad \h_\partial (A\cdot p) = A\cdot \h_\partial(p). \] \end{lem} \begin{proof} The horosphere $\h(p)$ is cut out of $\hyp$ by the 3-plane $\langle x,p \rangle = 1$. Upon applying $A$, we see that $A\cdot \h(p)$ is cut out of $\hyp$ by the equation $\langle A^{-1}\cdot x, p \rangle = 1$, which is equivalent to $\langle x, A\cdot p \rangle = 1$, and this equation cuts out $\h(A\cdot p)$. Thus $A\cdot \h(p) = \h(A\cdot p)$ as desired for (i). Forgetting everything but points at infinity, we obtain (ii). \end{proof} We will need the following in the sequel. To those familiar with hyperbolic geometry it will be known or a simple exercise, but we can give an argument using spinors, which may be of interest. \begin{lem} The action of $SL(2,\C)$ on $\mathfrak{H}(\hyp)$ is transitive. \end{lem} In other words, if $\mathpzc{h}, \mathpzc{h}'$ are horospheres then there exists $A \in SL(2,\C)$ such that $A \cdot \mathpzc{h} = \mathpzc{h}'$. This $A$ is not unique. 
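To illustrate the non-uniqueness, using only facts established below: if $A \cdot \mathpzc{h} = \mathpzc{h}'$, then $(AB) \cdot \mathpzc{h} = \mathpzc{h}'$ as well, for any $B \in SL(2,\C)$ whose action on $\hyp$ preserves $\mathpzc{h}$; the parabolic matrices $P_\alpha$ of \refeqn{P} below preserve the horosphere $\mathpzc{h}_0$ considered there in exactly this way. Moreover, $-A$ induces the same isometry of $\hyp$ as $A$.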
\begin{proof} As $\h$ is bijective (\refdef{h}) and $\g \circ \f\colon \C^2_\times \To L^+$ is surjective (\reflem{gof_properties}), there exist $\kappa, \kappa' \in \C^2_\times$ such that $\h \circ \g \circ \f (\kappa) = \mathpzc{h}$ and $\h \circ \g \circ \f (\kappa') = \mathpzc{h'}$. Now by \reflem{SL2C_on_C2_transitive} the action of $SL(2,\C)$ on $\C^2_\times$ is transitive, so there exists $A \in SL(2,\C)$ such that $A \cdot \kappa = \kappa'$. Then by equivariance of $\h$ (\reflem{h_equivariance}) and $\g \circ \f$ (\reflem{gof_properties}) we have \[ A \cdot \mathpzc{h} = A \cdot \left( \h \circ \g \circ \f (\kappa) \right) = \h \circ \g \circ \f \left( A \cdot \kappa \right) = \h \circ \g \circ \f (\kappa') = \mathpzc{h'} \] as desired. \end{proof} \subsubsection{Distances between horospheres} \label{Sec:distances_between_horospheres} We now consider distances between horospheres and points in $\hyp^3$. Later, in \refsec{complex_lambda_lengths}, we will define \emph{complex} and \emph{directed} distances between horospheres with decorations, but for now we only need a simpler, undirected notion of distance. The arguments of this subsection are based on \cite{Penner87}. Let $\mathpzc{h}, \mathpzc{h}'$ be two horospheres, with centres $p \neq p'$ respectively. Let $\gamma$ be the geodesic with endpoints $p,p'$, and let $q = \gamma \cap \mathpzc{h}$ and $q' = \gamma \cap \mathpzc{h}'$. If $\mathpzc{h}$ and $\mathpzc{h}'$ are disjoint, then the shortest arc from $\mathpzc{h}$ to $\mathpzc{h'}$ is the segment $\gamma_{q,q'}$ of the geodesic $\gamma$ between $q$ and $q'$. When $\mathpzc{h}, \mathpzc{h'}$ overlap, one might think their distance should be zero, but it turns out to be useful to use the same segment $\gamma_{q,q'}$ and count the distance negatively. When $\horo, \horo'$ have the same centre, there is no distinguished geodesic $\gamma$, and we define the distance to be $-\infty$ (see \refsec{complex_lambda_lengths} for justification). \begin{defn} \label{Def:signed_undirected_distance} The \emph{signed (undirected) distance} $\rho$ between $\mathpzc{h}$ and $\mathpzc{h'}$ is defined as follows. \begin{enumerate} \item If $p = p'$ then $\rho = - \infty$. \item If $p \neq p'$ and \begin{enumerate} \item $\mathpzc{h}, \mathpzc{h}'$ are disjoint, then $\rho$ is the length of $\gamma_{q,q'}$; \item $\mathpzc{h}, \mathpzc{h}'$ are tangent, then $\rho=0$; \item $\mathpzc{h}, \mathpzc{h}'$ overlap, then $\rho$ is the negative length of $\gamma_{q,q'}$. \end{enumerate} \end{enumerate} \end{defn} We can apply a similar idea to the distance between a horosphere $\horo$ and a point $q$. Let $p$ be the centre of $\horo$, let $\gamma$ be the geodesic with an endpoint at $p$ passing through $q$, and let $q' = \horo \cap \gamma$. Let $\gamma_{q,q'}$ be the segment of $\gamma$ between $q$ and $q'$. This segment provides the shortest path between $\horo$ and $q$. \begin{defn} The \emph{signed distance} $\rho$ between $\horo$ and $q$ is defined as follows. \begin{enumerate} \item If $q$ lies outside the horoball bounded by $\horo$, then $\rho$ is the length of $\gamma_{q,q'}$. \item If $q$ lies on $\horo$, then $\rho = 0$. \item If $q$ lies inside the horoball bounded by $\horo$, then $\rho$ is the negative length of $\gamma_{q,q'}$. \end{enumerate} \end{defn} \begin{lem} \label{Lem:geodesic} Let $q_0 = (1,0,0,0) \in \hyp$ and $p = (T,X,Y,Z) \in L^+$. Then the signed distance $\rho$ between $\h(p) \in\mathfrak{H}(\hyp)$ and $q_0$ is $\log T$. 
\end{lem} Here $q_0$ can be regarded as ``the centre of $\hyp$", the unique point with $X,Y,Z$-coordinates all zero. \begin{proof} The strategy is as follows: consider the affine line in $\R^{1,3}$ from $p$ to $q_0$; calculate where this line intersects the cone on the horosphere $\h(p)$; this intersection point will be on the ray through the point of $\h(p)$ closest to $q_0$; then we find the desired distance. As the horosphere $\h(p)$ consists of the points $x \in \hyp$ (which satisfy $\langle x,x \rangle = 1$) with $\langle x,p \rangle = 1$, the \emph{cone} on $\h(p)$ consists of constant multiples $cx$ ($c \in \R$) of such points, which satisfy $\langle cx, p \rangle = c$ and $\langle cx,cx \rangle = c^2$, hence $\langle cx, p \rangle^2 = \langle cx, cx \rangle$. Recall that the centre of $\h(p)$ is the point of $\partial \hyp$ represented by $p$, i.e. the ray through $p$. Note $\langle p,p \rangle = 0$. For points $x$ on this ray we have $\langle x,x \rangle = 0 = \langle x, p \rangle^2$. From the previous two paragraphs, we observe that points $x$ in the cone on $\h(p)$, and points on the line $\R p$, satisfy $\langle x, p \rangle^2 = \langle x,x \rangle$. Conversely, if a point $x$ satisfies $\langle x,p \rangle^2 = \langle x,x \rangle$ then we claim it is either on this cone or on the line $\R p$. To see this, note the equation implies $\langle x,x \rangle \geq 0$. If $\langle x,x \rangle = 0$, we have $\langle x, p \rangle = 0$, so that $x$ lies on the line $\R p$. If $\langle x,x \rangle > 0$ then there is a real multiple $x'$ of $x$ on $\hyp$, and then we have $\langle x', x' \rangle = 1$ and $\langle p, x' \rangle^2 = 1$. But as $p \in L^+$ and $x' \in \hyp$ we cannot have $\langle p, x' \rangle < 0$; thus $\langle p, x' \rangle = 1$, so $x' \in \h(p)$ and $x$ lies on the cone on $\h(p)$. Therefore, the equation \begin{equation} \label{Eqn:cone_on_horosphere} \langle x,p \rangle^2 = \langle x,x \rangle \end{equation} characterises points in the cone on $\h(p)$ together with the line $\R p$. We now parametrise the affine line from $p$ to $q_0$ by $x(s) = sp+(1-s)q_0$ and find where $x(s)$ satisfies \refeqn{cone_on_horosphere}. We calculate \begin{align*} \langle x,p \rangle = \langle sp+(1-s)q_0 ,p \rangle = s \langle p,p \rangle + (1-s) \langle q_0 , p \rangle = (1-s)T, \end{align*} using $p= (T,X,Y,Z)$, $q_0 = (1,0,0,0)$, and since $p \in L^+$ so that $\langle p,p \rangle = 0$. Similarly, \begin{align*} \langle x,x \rangle &= s^2 \langle p,p \rangle + 2s(1-s) \langle p, q_0 \rangle + (1-s)^2 \langle q_0, q_0 \rangle \\ &= 2s(1-s)T + (1-s)^2 = (1-s) \left( 2sT + 1-s \right). \end{align*} The equation $\langle x,p \rangle^2 = \langle x,x \rangle$ then yields \[ (1-s)^2 T^2 = (1-s) \left( 2sT + 1-s \right). \] The solution $s=1$ corresponds to $x=p$; the other solution is $s = \frac{T^2-1}{T^2+2T-1}$. For this $s$, the point $x(s)$ lies on the cone on $\h(p)$, on the ray through the point of $\h(p)$ closest to $q_0$; normalising its length gives this closest point as \[ q' = \left( \frac{T^2 + 1}{2T^2}T, \frac{T^2-1}{2T^2} X, \frac{T^2-1}{2T^2} Y, \frac{T^2-1}{2T^2} Z \right). \] When $T>1$, the $X,Y,Z$ coordinates of $q'$ are positive multiples of $X,Y,Z$, so $q'$ lies on the geodesic from $q_0$ to the point at infinity represented by $p$, on the same side of $q_0$ as $p$. The horoball bounded by $\h(p)$ is thus disjoint from $q_0$, so $\rho>0$. Conversely, when $T<1$, $\rho<0$. 
The distance $d$ from $q'$ to $q_0$ can now be found from the formula $\cosh d = \langle x,y \rangle$, where $d$ is the hyperbolic distance between points $x,y \in \hyp$. (Note $d = \pm \rho$.) Thus \[ \cosh d = \langle q', q_0 \rangle = \frac{T^2+1}{2T} = \frac{1}{2} \left( T + \frac{1}{T} \right). \] Since $\cosh d = \frac{1}{2} \left( e^d + e^{-d} \right)$, we have $e^d = T$ or $e^d = \frac{1}{T}$, i.e. $d = \pm \log T$. We just saw that when $T>1$, $\rho>0$ and when $T<1$, $\rho<0$. Thus $\rho = \log T$. \end{proof} \begin{prop} \label{Prop:point_horosphere_distance_hyp} Let $q \in \hyp$ and $p \in L^+$. Then the signed distance between $q$ and the horosphere $\h(p)$ is $\log \langle q,p \rangle$. \end{prop} \begin{proof} We reduce to the previous lemma. Let $M \in SO(1,3)^+$ be an isometry which sends $q$ to $q_0$, and let $M(p) = (T,X,Y,Z) \in L^+$. By \reflem{geodesic}, the signed distance $\rho$ between $q_0$ and $\h(M(p))$ is given by $\rho = \log T = \log \langle q_0, (T,X,Y,Z) \rangle$. Now as $M$ is an isometry, we have $\langle q_0, (T,X,Y,Z) \rangle = \langle M(q), M(p) \rangle = \langle q,p \rangle$. Thus $\rho = \log \langle q,p \rangle$. \end{proof} \begin{lem} \label{Lem:geodesic2} Let $p_0 = (1,0,0,1)$ and $p = (T,X,Y,Z)$ be points on $L^+$. Then the signed distance between the two horospheres $\h(p)$ and $\mathpzc{h}_0 = \h(p_0)$ is $\log \frac{T-Z}{2}$. \end{lem} Note that for any point $(T,X,Y,Z) \in L^+$, $T \geq Z$, with equality iff the point is a multiple of $p_0$. The case $T=Z$ arises when $p_0$ and $p$ lie on the same ray of $L^+$, and we regard $\log 0 $ as $-\infty$. \begin{proof} We follow a similar strategy to the previous lemma. The two horospheres have centres on $\partial \hyp$ given by rays through $p_0$ and $p$. We consider the affine line between $p$ and $p_0$, parametrised as $x(s) = sp+(1-s)p_0$, and find which points on this line lie on the cones of $\h(p)$ and $\mathpzc{h}_0$. The cone on $\h(p)$ is defined again by $\langle x,p \rangle^2 = \langle x,x \rangle$, and the cone on $\mathpzc{h}_0$ is defined by $\langle x, p_0 \rangle^2 = \langle x,x \rangle$. We find that the closest points on $\h(p)$ and $\mathpzc{h}_0$ to each other are \[ q = \left( \frac{T}{2} + \frac{1}{T-Z}, \frac{X}{2}, \frac{Y}{2}, \frac{Z}{2} + \frac{1}{T-Z} \right) \quad \text{and} \quad q_0 = \frac{1}{2(T-Z)} \left( 3T-Z, 2X, 2Y, T+Z \right). \] respectively. Now $\mathpzc{h}_0$ is cut out of $\hyp$ by the equation $T-Z=1$, and $T-Z=0$ contains its centre $p_0$. So the horoball bounded by $\mathpzc{h}_0$ consists of points in $\hyp$ satisfying $T-Z<1$. Thus the two horoballs are disjoint iff $q$ lies outside the horoball of $\mathpzc{h}_0$, which occurs iff $q$ satisfies $T-Z>1$. This happens precisely when \[ \left( \frac{T}{2} + \frac{1}{T-Z} \right) - \left( \frac{Z}{2} + \frac{1}{T-Z} \right) = \frac{T-Z}{2} > 1. \] Thus the horoballs are disjoint precisely when $T-Z>2$. We then find the distance $d$ between the closest points using $\cosh d = \langle q, q_0 \rangle$, which reduces to \[ \frac{1}{2} \left( e^d + e^{-d} \right) = \frac{1}{2} \left( \frac{T-Z}{2} + \frac{2}{T-Z} \right). \] Thus $e^d = \frac{T-Z}{2}$ or $\frac{2}{T-Z}$, i.e. $d = \pm \log \frac{T-Z}{2}$. As we have seen, when $T-Z>2$ the horoballs are disjoint, so that $d>0$. Hence $\rho = \log \frac{T-Z}{2}$ as desired. \end{proof} \begin{prop}[Cf. \cite{Penner87} lemma 2.1] \label{Prop:horosphere_distance_hyp} Let $p, p' \in L^+$. 
Then the signed distance $\rho$ between the horospheres $\h(p), \h(p')$ satisfies \begin{equation} \label{Eqn:horosphere_distance_from_Minkowski_inner_product} \langle p, p' \rangle = 2 e^{\rho}. \end{equation} Further, suppose $\kappa, \kappa' \in \C^2_\times$ satisfy $\g \circ \f(\kappa) = p$ and $\g \circ \f(\kappa') = p'$. Then \begin{equation} \label{Eqn:horosphere_distance_from_spinor_inner_product} \left| \{ \kappa, \kappa' \} \right|^2 = e^\rho. \end{equation} \end{prop} Equation \refeqn{horosphere_distance_from_spinor_inner_product} is obtained from the equation in \refthm{main_thm} by taking the modulus of both sides. It is perhaps interesting that we can obtain this result without yet having considered spin at all. This proposition is closely related to \refprop{complex_Minkowski_inner_products}. \begin{proof} We begin with equation \refeqn{horosphere_distance_from_spinor_inner_product}, reducing it to the previous lemma. By \reflem{SL2C_on_C2_transitive}, there exists $A \in SL(2,\C)$ such that $A(\kappa) = (1,0)$. Let $A(\kappa') = \kappa''$. Then by \reflem{SL2C_by_symplectomorphisms}, \begin{equation} \label{Eqn:reduction_to_10} \{\kappa, \kappa'\} = \{A \kappa, A \kappa'\} = \{ (1,0), \kappa''\}. \end{equation} As $A$ acts by an isometry of hyperbolic space, the signed distance between the horospheres $A \cdot \h \circ \g \circ \f (\kappa)$ and $A \cdot \h \circ \g \circ \f (\kappa')$ is also $\rho$. By equivariance of $\f,\g,\h$ these horospheres can also be written as $\h \circ \g \circ \f (1,0)$ and $\h \circ \g \circ \f (\kappa'')$. Now $\g \circ \f (1,0) = p_0 = (1,0,0,1)$. Let $\g \circ \f (\kappa'') = (T,X,Y,Z)$. By \reflem{geodesic2}, $\rho = \log \frac{T-Z}{2}$. Rearranging this and noting that $\langle p_0, (T,X,Y,Z) \rangle = T-Z$, we have \[ e^\rho = \frac{1}{2} \left\langle p_0, (T,X,Y,Z) \right\rangle = \frac{1}{2} \langle \g \circ \f (1,0), \g \circ \f (\kappa'') \rangle. \] Applying \refprop{complex_Minkowski_inner_products} we then obtain \[ e^\rho = \left| \{ (1,0), \kappa'' \} \right|^2, \] which by \refeqn{reduction_to_10} is equal to $| \{ \kappa, \kappa' \} |^2$ as desired. To obtain equation \refeqn{horosphere_distance_from_Minkowski_inner_product}, note that as $\g \circ \f$ is surjective, there exist $\kappa, \kappa'$ such that $\g \circ \f (\kappa) = p$ and $\g \circ \f (\kappa') = p'$. Then the first equation follows directly from the second, using \refprop{complex_Minkowski_inner_products}. \end{proof} \subsubsection{The map from flags to horospheres} \label{Sec:flags_and_horospheres} We consider how flags behave under $\h$ and how to obtain corresponding tangent data on a horosphere. So, let $(p,V, o)\in\mathcal{F_P^O}(\R^{1,3})$ and consider the effect of $\h$. The situation is schematically depicted in \reffig{flag_horosphere}. First, consider the point $p$. Under $\h$, $p$ corresponds to a horosphere $\h(p)\in\mathfrak{H}$. At a point $q$ of $\h(p)$, by \reflem{tangent_space_of_horosphere} we have $T_q \h(p) = p^\perp \cap q^\perp$. Second, consider the 2-plane $V$; recall $\R p \subset V \subset p^\perp$ (\reflem{light_cone_orthogonal_complement}). Consider how $V$ intersects the tangent space to $\h(p)$ at $q$. We have \[ T_q \h(p) \cap V = ( q^\perp \cap p^\perp) \cap V = q^\perp \cap V, \] where the latter equality used $V \subset p^\perp$. Now as $\R p \subset V$, $V$ contains the lightlike vector $p$, so by \reflem{lightlike_intersection} the latter intersection is transverse and the result is 1-dimensional. 
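As a concrete illustration, anticipating the worked examples of \refsec{examples_from_10} below: for $p = p_0 = (1,0,0,1)$, $V = \Span \{ p_0, \partial_Y \}$ where $\partial_Y = (0,0,1,0)$, and $q = (1,0,0,0) \in \h(p_0)$, we have $T_q \h(p_0) = p_0^\perp \cap q^\perp = \{ T = Z = 0 \}$, the $XY$ 2-plane, and so $T_q \h(p_0) \cap V = \Span \{ \partial_Y \}$, which is indeed 1-dimensional.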
Third, consider the orientation $o$; recall $o$ is an orientation on the 1-dimensional space $V / \R p$. We will try to use $o$ to provide an orientation on the 1-dimensional space $T_q \h(p) \cap V$. We can regard $o$ as singling out as positive one of the two sides of the origin in the line $V/\R p$ (the other side being negative). Then, any vector $w \in V$ which does not lie in $\R p$ obtains a sign, depending on the side of $\R p$ on which it lies; these two sides of $\R p$ project to the two sides of the origin in $V/\R p$. \begin{lem} If $p \in L^+$, $q \in \h(p)$ and $\R p \subset V \subset p^\perp$ (as above), then $T_q \h(p) \cap V \neq \R p$. \end{lem} \begin{proof} As $T_q \h(p) \cap V \subset T_q \hyp$, it is spacelike, so cannot contain the lightlike vector $p$. \end{proof} Thus the 1-dimensional subspace $T_q \h(p) \cap V$ is a line in the 2-plane $V$ transverse to $\R p$. So $o$ singles out one side of the origin in this line; or equivalently, induces an orientation on this line. To summarise: given a flag $(p,V,o)$, the point $p \in L^+$ singles out a horosphere $\h(p)$; at a point $q$ on this horosphere, $V$ singles out a distinguished 1-dimensional subspace $T_q \h(p) \cap V$ of the tangent space $T_q \h(p)$ to the horosphere; and $o$ induces an orientation on the 1-dimensional space $V \cap T_q \h(p)$. Considering the above construction over all $q \in \h(p)$, the 1-dimensional spaces $T_q \h(p) \cap V$ form a \emph{tangent line field} on the horosphere $\h(p)$, and with the orientation from $o$ we in fact have an \emph{oriented tangent line field} on the horosphere $\h(p)$, i.e. a smoothly varying choice of oriented 1-dimensional subspace of each tangent space $T_q \h(p)$. We denote this oriented tangent line field by $V \cap T\h(p)$, as it is given by intersections with the various fibres in the tangent bundle to $\h(p)$. We can then make the following definitions. \begin{defn} \label{Def:overly_decorated_horosphere} An \emph{overly decorated horosphere} is a pair $(\mathpzc{h},L^O)$ consisting of $\mathpzc{h}\in\horos(\hyp)$ together with an oriented tangent line field $L^O$ on $\mathpzc{h}$. The set of overly decorated horospheres is denoted $\mathfrak{H_D^O}(\hyp)$. \end{defn} \begin{defn} \label{Def:H_PONF_to_decorated_horospheres} The map $\H$ sends (pointed oriented null) flags in $\R^{1,3}$ to overly decorated horospheres \[ \H \colon \mathcal{F_P^O}(\R^{1,3}) \To \mathfrak{H_D^O}(\hyp), \quad \H(p,V,o) = \left( \h(p), V \cap T \h(p) \right), \] where $V \cap T \h(p)$ is endowed with the orientation induced from $o$. \end{defn} We say the horospheres are ``overly" decorated, because it turns out that the oriented line fields $V \cap T\h(p)$ are of a very specific type: they are \emph{parallel}. A parallel oriented line field is determined by the oriented line at a single point; keeping track of an entire oriented line field is overkill. \subsubsection{Illustrative examples from the spinor $(1,0)$} \label{Sec:examples_from_10} Let us return to the spinor $\kappa_0 = (1,0)$. In \refeg{flag_of_simple_spinors} we calculated that, in Minkowski space, the flag $\G \circ \F (\kappa_0)$ is based at $\g \circ \f (\kappa_0) = (1,0,0,1)$; let this point be $p_0$. We also calculated that the flag has 2-plane $V$ spanned by $p_0$ and the vector $(0,0,1,0)$ in the $Y$-direction, which we denote $\partial_Y$. For this flag, $V/\R p_0$ is oriented in the direction of $\partial_Y$. 
In other words, the flag is $[[p_0, \partial_Y]]$ \begin{eg}[The horosphere of $(1,0)$ and oriented line field at a point] \label{Eg:horosphere_of_10_at_point} Let us now find the corresponding horosphere, which we denote $\horo_0$, i.e. $\horo_0 = \h(p_0) = \h \circ \g \circ \f (\kappa_0)$. It is cut out of $\hyp$ by the 3-plane $\Pi$ with equation $\langle x, p_0 \rangle = 1$, i.e. $T-Z=1$. Thus, $\mathpzc{h}_0$ is the paraboloid defined by equations $T^2-X^2-Y^2-Z^2=1$ and $T-Z=1$. By the comment after \refdef{h}, the centre of $\mathpzc{h}_0$ is the ray of $L^+$ through $p_0$. A useful perspective on this horosphere $\mathpzc{h}_0$ may be obtained by noting that $\Pi$, with equation $T-Z=1$, is foliated by lines in the direction $(1,0,0,1)$ (i.e. the direction of the position vector of $p_0$). Each such line contains exactly one point with $T=0$, i.e. in the $XYZ$ 3-plane. Since $T-Z=1$, when $T=0$ we have $Z=-1$. This $\Pi$ intersects the $XYZ$ 3-plane in the 2-plane consisting of points of the form $(0,X,Y,-1)$. Denote this 2-plane $\Pi_{XY}$. It is a Euclidean 2-plane. Each of the lines parallel to $p_0$ foliating $\Pi$ intersects the horosphere $\mathpzc{h}_0$ exactly once. To see this, note that such a line has parametrisation $(0,X,Y,-1) + s(1,0,0,1) = (s,X,Y,s-1)$, and intersects $\horo_0$ when it intersects $\hyp$, i.e. when $s^2 - X^2 - Y^2 - (s-1)^2 = 1$. This equation is linear in the parameter $s$ and has a unique solution, giving the unique intersection point with $\mathpzc{h}_0$. Thus the projection $\Pi \To \Pi_{XY}$, projecting along the lines in the direction of $p_0$, restricts to a bijection $\mathpzc{h}_0 \To \Pi_{XY}$. In fact, as $p_0$ is a lightlike direction and the tangent planes to $\Pi$ are precisely the orthogonal complement $p_0^\perp$, this bijection is an isometry. This shows the horosphere $\mathpzc{h}_0$ is isometric to a Euclidean 2-plane. It also shows that a point of $\mathpzc{h}_0$ is determined by its $X$ and $Y$ coordinates, and that all $(X,Y) \in \R^2$ arise as $X,Y$ coordinates of points on $\mathpzc{h}_0$. See \reffig{plane_Pi_projection}. \begin{center} \begin{tikzpicture} \draw(0,0)--(3,3)--(1,4)--(-2,1)--(0,0); \draw(0.5,0.5)--(-1.5,1.5); \draw (1.2,3.875) .. controls (-0.5,1) .. (2.8,3.125); \draw[red, dashed, thick, ->](0.5,0.5)--(-1.5,1.5); \draw[red, dashed, thick, <-](1.2,3.875) .. controls (-0.5,1) .. (2.8,3.125); \draw[->](0.7,3.25)--(-1,1.5); \draw[->](2.2,2.5)--(0.4,0.8); \draw[->](0,1.55)--(-0.35,1.2); \node at (0.75,0.1){$\Pi_{XY}$}; \node at (3,2.5){$\Pi$}; \node at (0.45,1.9){$q_0$}; \node at (1.2,3.5){$\mathpzc{h}_0$}; \node at (-1.5,2){$p_0$}; \draw[->](-1.25,2)--(-0.25,3); \end{tikzpicture} \captionof{figure}{Projection of the plane $\Pi$ to $\Pi_{XY}$ (schematically drawn a dimension down).} \label{Fig:plane_Pi_projection} \end{center} Let us examine the horosphere $\horo_0$ at a particular point. One can verify that $(1,0,0,0) \in \mathpzc{h}_0$; let this point be $q_0$. The tangent space of $\hyp$ at $q_0$ is $q_0^\perp$ by \refeqn{hyperboloid_tangent_space}, which has equation $T=0$. So $T_{q_0} \hyp$ is the $XYZ$ 3-plane. The tangent space of $\mathpzc{h}_0$ at $q_0$ is $p_0^\perp \cap q_0^\perp$ by \reflem{tangent_space_of_horosphere}, thus is defined by equations $T-Z=0$ and $T=0$. So $T_{q_0} \mathpzc{h}_0$ is the $XY$ 2-plane. The decoration, or oriented line, obtained on the horosphere in $\G \circ \F (\kappa_0)$, at $q_0$, by \refdef{H_PONF_to_decorated_horospheres} is given by $V \cap T_{q_0} \mathpzc{h}_0$. 
We have calculated that $V$ is spanned by $p_0$ and $\partial_Y$, while $T_{q_0} \mathpzc{h}_0$ is the $XY$-plane, so the intersection is the line in the $Y$ direction. Since the flag $V / \R p_0$ is oriented in the direction of $\partial_Y$, this line is oriented in the $\partial_Y$ direction. Note that the quotient by $\R p_0$, when restricted to the 3-plane $\Pi$, is essentially the same as the projection along the lines in the $p_0$ direction discussed above. At each point of $\Pi$ (given by $T-Z=1$), the tangent space is given by $p_0^\perp = \{T-Z=0\}$, and $V$ is a 2-dimensional subspace of this tangent space. When we project $\Pi \To \Pi_{XY}$, the 2-plane $V$ of the flag projects to a 1-dimensional subspace of $\Pi_{XY}$, which we may regard as $V/\R p_0$. Since $V$ is spanned by $p_0$ and $\partial_Y$, the projection along $p_0$ is spanned by $\partial_Y$. \end{eg} \begin{eg}[Action of parabolic matrices on flag and horosphere of $(1,0)$] \label{Eg:parabolic_action_on_h0} Consider the following matrices in $SL(2,\C)$: \begin{equation} \label{Eqn:P} P_\alpha = \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} \text{ for $\alpha \in \C$}, \quad P = \left\{ P_\alpha \; \mid \; \alpha \in \C \right\} . \end{equation} It is not difficult to see that $P$ is a subgroup of $SL(2,\C)$. Indeed, for $\alpha,\alpha' \in \C$ we have $P_\alpha P_{\alpha'} = P_{\alpha'} P_\alpha = P_{\alpha+\alpha'}$, and the correspondence $\alpha \mapsto P_\alpha$ gives an isomorphism from $\C$, as an additive group, to $P$. Thus $P \cong \C \cong \R^2$. The matrices $P_\alpha$ are all \emph{parabolic} in the sense that they have trace $2$. They are also \emph{parabolic} in the sense that, at least when $\alpha \neq 0$, as complex linear maps on $\C^2$, they have a single eigenvalue, with a 1-dimensional eigenspace (i.e. their Jordan block decomposition consists of a single 2-dimensional block). The word parabolic can have other meanings too, which do not concern us here. As a subgroup of $SL(2,\C)$, $P$ acts on all the spaces that $SL(2,\C)$ does. It will be useful to consider its action on various objects deriving from the spinor $\kappa_0 = (1,0)$ of the previous example. Each $P_\alpha$ acts on $\C^2$ by complex linear maps preserving $\kappa_0$. In fact, for the action of $SL(2,\C)$ on $\C^2$ of \refdef{SL2C_action_on_C2}, $P$ is precisely the stabiliser of $\kappa_0$. Under the map $\g \circ \f$ from $\C^2$ to $\R^{1,3}$, $\kappa_0$ maps to $p_0$. As $P$ preserves $\kappa_0$, by equivariance of $\g \circ \f$ (\reflem{gof_properties}), the action of $P$ on $\R^{1,3}$ preserves $p_0$. Precisely, for any $P_\alpha \in P$ we have \begin{equation} \label{Eqn:parabolics_fix_p0} P_\alpha \cdot p_0 = P_\alpha \cdot \left( (\g \circ \f) (\kappa_0) \right) = (\g \circ \f ) \left( P_\alpha \cdot (\kappa_0) \right) = (\g \circ \f) (\kappa_0) = p_0. \end{equation} Thus, each $P_\alpha$ acts on $\R^{1,3}$ by a real linear map in $SO(1,3)^+$ (\reflem{SL2C_action_on_light_cones} and subsequent comments) which preserves $p_0$, and hence also $p_0^\perp$. So, it can't be ``too bad"; we compute it explicitly. 
On the Hermitian matrix $S$ corresponding to the point $2(T,X,Y,Z) \in \R^{1,3}$ (see \refdef{g_H_to_R31}), $P_\alpha$ acts by \begin{align*} P_\alpha \cdot S &= P_\alpha S P_\alpha^* = \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} \begin{pmatrix} T+Z & X+iY \\ X-iY & T-Z \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \overline{\alpha} & 1 \end{pmatrix} \\ &= \begin{pmatrix} T+Z + \alpha(X-iY) + \overline{\alpha}(X+iY) + |\alpha|^2 (T-Z) & X+iY+\alpha(T-Z) \\ X-iY+\overline{\alpha}(T-Z) & T-Z \end{pmatrix}. \end{align*} This is equal to the Hermitian matrix corresponding to a point $2(T',X',Y',Z') \in \R^{1,3}$ \[ \begin{pmatrix} T'+Z' & X'+iY' \\ X'-iY' & T'-Z' \end{pmatrix} \] where, letting $\alpha = a+bi$ with $a,b \in \R$, \begin{equation} \begin{array}{cc} \label{Eqn:transform_TXYZ_under_simple_parabolic_first} T' = T + a X + b Y + \frac{|\alpha|^2}{2} (T-Z), & X' = X + a (T-Z), \\ Y' = Y + b (T-Z), & Z' = Z + a X + b Y + \frac{|\alpha|^2}{2} (T-Z) \end{array} \end{equation} Indeed, one can verify that $(T,X,Y,Z) = p_0$ implies $(T',X',Y',Z') = p_0$. This describes the action of $P$ on $\R^{1,3}$. Now consider the action of $P$ on the flag $\G \circ \F(\kappa_0) = [[p_0, \partial_Y]] \in \mathcal{F_P^O}(\R^{1,3})$ from \refeg{flag_of_simple_spinors} and the previous \refeg{horosphere_of_10_at_point}. Using equivariance again (of $\G \circ \F$ this time, \refprop{SL2C_spinors_PNF_H_equivariant} and \refprop{FG_equivariant}), as $P$ stabilises $\kappa_0$, it also stabilises $[[p_0, \partial_Y]]$. Precisely, for $P_\alpha \in P$ we have \[ P_\alpha \cdot [[p_0, \partial_Y]] = P_\alpha \cdot \left( \G \circ \F \right) (\kappa_0) = \left( \G \circ \F \right) \left( P_\alpha \cdot (\kappa_0) \right) = \left( \G \circ \F \right) (\kappa_0) = [[p_0, \partial_Y]] \] Thus each $P_\alpha$ must fix the flag 2-plane $V$ spanned by $p_0$ and $\partial_Y$; we saw in \refeqn{parabolics_fix_p0} that $P_\alpha$ fixes $p_0$; we compute $P_\alpha \cdot \partial_Y$ explicitly to see how $P$ acts on $V$. Using \refeqn{transform_TXYZ_under_simple_parabolic_first} gives \[ P_\alpha \cdot \partial_Y = P_\alpha \cdot (0,0,1,0) = (b, 0, 1, b) = \partial_Y + b p_0. \] Thus indeed each $P_\alpha$ preserves the plane $V$ spanned by $p_0$ and $\partial_Y$. In fact, it acts as the identity on $V/\R p_0$, so definitely preserves the orientation in the flag. Each $P_\alpha$ fixes $p_0^\perp$, the 3-dimensional orthogonal complement of $p_0$, which has a basis given by $p_0, \partial_Y$ and $\partial_X = (0,1,0,0)$. We have already computed $P_\alpha$ on the first two of these; the third is no more difficult, and we find that $P_\alpha$ acts on $p_0^\perp$ by \begin{equation} \label{Eqn:parabolic_on_p0_perp} P_\alpha \cdot p_0 = p_0, \quad P_\alpha \cdot \partial_X = \partial_X + a p_0, \quad P_\alpha \cdot \partial_Y = \partial_Y + b p_0, \end{equation} adding multiples of $p_0$ to $\partial_X$ and $\partial_Y$ according to the real and imaginary parts of $\alpha$. Having considered both $p_0$ and $p_0^\perp$, we observe that $\R p_0 \subset p_0^\perp$ and so we can consider their quotient $p_0^\perp / \R p_0$. This is a 2-dimensional vector space, and has a basis represented by $\partial_X$ and $\partial_Y$. From \refeqn{parabolic_on_p0_perp} we observe that each $P_\alpha$ acts on $p_0^\perp / \R p_0$ as the identity. Next we turn to horospheres. 
\refeg{horosphere_of_10_at_point} above calculated $\h(p_0) = \h \circ \g \circ \f (\kappa_0)$ to be the horosphere $\mathpzc{h}_0$ cut out of $\hyp$ by the plane $\Pi$ with equation $T-Z=1$. We found that the point $q_0 = (1,0,0,0)$ was on this horosphere. At this point we have $T_{q_0} \hyp$ equal to the $XYZ$ 3-plane, $T_{q_0} \h(p_0)$ equal to the $XY$ 2-plane, and the oriented decoration $V \cap T_{q_0} \h(p_0)$ given by $\partial_Y$. Again by equivariance (\reflem{gof_properties}, \reflem{h_equivariance}), $P$ must fix $\mathpzc{h}_0$: for any $P_\alpha \in P$ we have \[ P_\alpha \cdot \mathpzc{h}_0 = P_\alpha \cdot \left( \h \circ \g \circ \f \right) (\kappa_0) = \left( \h \circ \g \circ \f \right) \left( P_\alpha \cdot (\kappa_0) \right) = \h \circ \g \circ \f (\kappa_0) = \mathpzc{h}_0. \] Let us see explicitly how $P_\alpha$ acts on the horosphere $\mathpzc{h}_0$, starting from the point $q_0$. Using \refeqn{transform_TXYZ_under_simple_parabolic_first}, and recalling that every point of $\mathpzc{h}_0$ satisfies $T-Z=1$, we obtain \begin{equation} \label{Eqn:general_point_on_h0} P_\alpha \cdot q_0 = \left( 1 + \frac{|\alpha|^2}{2}, a, b, \frac{|\alpha|^2}{2} \right) = \left( 1 + \frac{a^2 + b^2}{2}, a, b, \frac{a^2+b^2}{2} \right). \end{equation} The $X$ and $Y$ coordinates of $P_\alpha \cdot q_0$ are the real and imaginary parts of $\alpha$, and as mentioned in \refeg{horosphere_of_10_at_point}, $X$ and $Y$ coordinates determine points of $\horo_0$. Thus for any point $q \in \mathpzc{h}_0$ there is precisely one $\alpha \in \C$ such that $P_\alpha \cdot q_0 = q$, namely $\alpha=X+Yi$. In other words, the action of $P$ on $\mathpzc{h}_0$ is simply transitive. The expression in \refeqn{general_point_on_h0} is a parametrisation of $\mathpzc{h}_0$ by $(a,b) \in \R^2$ or $\alpha\in \C$. If we project $\mathpzc{h}_0$ to $\Pi_{XY}$ as in \refeg{horosphere_of_10_at_point}, then $P_\alpha$ acts by translation by $(0,a,b,0)$. \end{eg} \begin{eg}[Oriented line field on the horosphere of $(1,0)$] \label{Eg:horosphere_of_10_generally} We again consider the horosphere $\mathpzc{h}_0 = \h(p_0) = \h \circ \g \circ \f (\kappa_0)$. In \refeg{horosphere_of_10_at_point} we found the tangent space to $\mathpzc{h}_0$ at a specific point $q_0$, and its intersection with the flag $\G \circ \F(\kappa_0)$. In \refeg{parabolic_action_on_h0} we found that the group $P$ acts simply transitively on $\mathpzc{h}_0$, so each point $q \in \mathpzc{h}_0$ can be written as $P_\alpha \cdot q_0$ for a unique $\alpha = a+bi$. We now find the tangent space to $\mathpzc{h}_0$ at $q$ explicitly, and its decoration, given by intersection with the flag $\G \circ \F (\kappa_0)$. Having calculated $q$ explicitly in \refeqn{general_point_on_h0}, using \refeqn{hyperboloid_tangent_space} we have \begin{equation} \label{Eqn:tangent_space_general_point_on_h0} T_q \hyp = q^\perp = \left\{ (T,X,Y,Z) \mid \left( 1 + \frac{|\alpha|^2}{2} \right) T - a X - b Y - \frac{|\alpha|^2}{2} Z = 0 \right\}. \end{equation} The tangent space to the horosphere $\mathpzc{h}_0$ at $q$ is given by the intersection of $T_q \hyp$ with $p_0^\perp$ (\reflem{tangent_space_of_horosphere}). As in \refeg{horosphere_of_10_at_point}, the 3-plane $p_0^\perp$ has equation $T-Z=0$. 
Substituting $T=Z$ into \refeqn{tangent_space_general_point_on_h0} simplifies the equation to \[ Z = a X + b Y, \] and so we can obtain various descriptions of the tangent space to $\mathpzc{h}_0$ at $q$, \begin{align*} T_q \mathpzc{h}_0 &= q^\perp \cap p_0^\perp = \left\{ (T,X,Y,Z) \; \mid \; T=Z, \; Z = a X + b Y \right\} \\ &= \left\{ \left( aX+bY, X, Y, aX+bY \right) \; \mid \; X,Y \in \R \right\} \\ &= \Span \left\{ (a,1,0,a), (b,0,1,b) \right\} = \Span \left\{ \partial_X + a p_0, \partial_Y + b p_0 \right\}. \end{align*} As in \refeg{flag_of_simple_spinors} and \refeg{horosphere_of_10_at_point}, the flag 2-plane $V$ of $\G \circ \F (\kappa_0)$ is spanned by $p_0$ and $\partial_Y$, with $V/\R p_0$ oriented by $\partial_Y$. One of the generators of $T_q \mathpzc{h}_0$ identified above already lies in this subspace, so the line field on $\mathpzc{h}_0$ at $q$ is given by \[ V \cap T_{q} \mathpzc{h}_0 = \Span \left\{ (b,0,1,b) \right\} = \Span \left\{ \partial_Y + b p_0 \right\}. \] The orientation on $V/\R p_0$ given by $\partial_Y + \R p_0$ induces the orientation on the 1-dimensional space $V \cap T_q \mathpzc{h}_0$ given by $\partial_Y + b p_0$. In other words, the oriented line field of $\H \circ \G \circ \F (\kappa_0)$ at $q = P_\alpha \cdot q_0$ is spanned and oriented by $\partial_Y + b p_0$. Denote this oriented line field by $L^O$, so that its value at $q$ is given by \[ L^O_q = \Span \left\{ \partial_Y + b p_0 \right\}. \] In the parametrisation of \refeqn{general_point_on_h0} by $(a,b) \in \R^2$, $L_q^O$ points in the direction of constant $a$ and increasing $b$, i.e. the partial derivative with respect to $b$. Since the action of $P$ on $\R^{1,3}$ is linear and preserves $\hyp$, $V$, and $\mathpzc{h}_0$, it also preserves tangent spaces of $\horo_0$: for any $\alpha \in \C$, we have $P_\alpha \cdot T_q \mathpzc{h}_0 = T_{P_\alpha \cdot q} \mathpzc{h}_0$. Hence the action of $P$ must preserve the intersections $V \cap T_q \mathpzc{h}_0$ which form the decoration on $\mathpzc{h}_0$: \[ P_\alpha \cdot \left( V \cap T_q \mathpzc{h}_0 \right) = V \cap T_{P_\alpha \cdot q} \mathpzc{h}_0. \] Indeed, we can check this explicitly at any $q \in \mathpzc{h}_0$. Letting $q = P_\alpha \cdot q_0$, we just saw that the oriented line field at $q$ is spanned and oriented by $\partial_Y + b p_0$. Applying $P_{\alpha'}$, where $\alpha' = a'+b' i$ with $a',b' \in \R$, from \refeqn{transform_TXYZ_under_simple_parabolic_first} we obtain \[ P_{\alpha'} \cdot \left( \partial_Y + b p_0 \right) = P_{\alpha'} \cdot (b,0,1,b) = (b+b', 0, 1, b+b') = \partial_Y + (b+b') p_0, \] the same vector spanning and orienting $L^O_{q'}$ where $q' = P_{\alpha'} \cdot q = P_{\alpha+\alpha'} \cdot q_0$. So, for any $q \in \mathpzc{h}_0$ and any $A \in P$, \[ A \cdot L^O_q = L^O_{A \cdot q}. \] Thus, the oriented line field $L^O$ on $\mathpzc{h}_0$ given by $\H \circ \G \circ \F (\kappa_0)$ is a quite special type of oriented line field: it is parallel. Its value at any one point determines all the others, by applying the isometries given by $P$. The group $P$ of isometries of $\hyp$ is precisely the set of translations of $\mathpzc{h}_0$, which acts simply transitively on $\mathpzc{h}_0$ and carries with it the oriented line field $L^O$. It is worth noting what happens if we project $\mathpzc{h}_0$ to the plane $\Pi_{XY}$ from \refeg{horosphere_of_10_at_point}. As discussed there, this projection is an isometry, and is effectively a quotient by $\R p_0$, expressing $\mathpzc{h}_0$ as a Euclidean 2-plane. 
Under this projection, $V$ becomes an oriented line field in the direction $\partial_Y$. We saw in \refeg{parabolic_action_on_h0} that after applying this projection, $P_\alpha$ acts by translation by $(0,a,b,0)$. Thus in particular it preserves the oriented line field in the direction $\partial_Y$, which is the oriented line field of $\H \circ \G \circ \F(\kappa_0)$. \end{eg} \subsubsection{Parallel line fields} \label{Sec:parallel_line_fields} The type of oriented line field found as $\H \circ \G \circ \F(1,0)$ is known as \emph{parallel}, which we now define. \begin{defn} An element $A \in SL(2,\C)$, or the corresponding element $M \in SO(1,3)^+$, is called \begin{enumerate} \item \emph{parabolic} if $\Trace A = \pm 2$; \item \emph{elliptic} if $\Trace A \in (-2,2)$; \item \emph{loxodromic} if $\Trace A \in \C \setminus [-2,2]$. \end{enumerate} \end{defn} (There are other characterisations of these types of elements, but this is all we need.) Since the trace is invariant under conjugation, the types of $A$ and of any conjugate $MAM^{-1}$ are the same. All the matrices $P_\alpha$ of the previous section are parabolic. (Their negatives $-P_\alpha$ are also parabolic, but a matrix $A \in SL(2,\C)$ and its negative $-A$ produce the same element of $SO(1,3)^+$, so these do not produce any new isometries of $\hyp$). The oriented line field calculated on $\mathpzc{h}_0$ in the previous section thus satisfies the following definition. \begin{defn} Let $\mathpzc{h}\in\mathfrak{H}(\hyp)$. An oriented line field on $\mathpzc{h}$ is \emph{parallel} if it is invariant under the parabolic isometries of $\hyp$ fixing $\mathpzc{h}$. \end{defn} Thus, to describe a parallel oriented line field on a horosphere $\horo$, it suffices to describe it at one point: the oriented lines at other points can be found by applying parabolic isometries. Indeed, a horosphere is isometric to the Euclidean plane, and the parabolic isometries preserving $\mathpzc{h}$ act by Euclidean translations. A parallel oriented line field is therefore parallel in the sense of ``invariant under parallel translation". By the Gauss--Bonnet theorem no such line field exists on a surface of nonzero curvature. As we now see, all oriented line fields produced by $\H$ (\refdef{H_PONF_to_decorated_horospheres}) are parallel. \begin{lem} \label{Lem:image_of_H_parallel} Let $(p,V,o) \in \mathcal{F_P^O}(\R^{1,3})$ be a flag, and let $\H(p,V,o) = (\h(p), L^O) \in \mathfrak{H_D^O}(\hyp)$ be the corresponding overly decorated horosphere. Then the oriented line field $L^O$ on $\h(p)$ is parallel. \end{lem} \begin{proof} The proof proceeds by reducing to the examples of \refsec{examples_from_10} above. As $\G \circ \F$ is surjective (\refprop{F_G_surjective}), there exists $\kappa \in \C_\times^2$ such that $(p,V,o) = \G \circ \F(\kappa)$. As the action of $SL(2,\C)$ on $\C^2_\times$ is transitive (\reflem{SL2C_on_C2_transitive}), there exists a matrix $A \in SL(2,\C)$ such that $A \cdot \kappa = (1,0)$. Then by equivariance of $\f,\g,\h$ (\reflem{gof_properties}, \reflem{h_equivariance}) $A$ sends the given horosphere $\h(p)$ to $\horo_0 = \h(p_0) = \h \circ \g \circ \f (1,0)$ from \refsec{examples_from_10}: \[ A \cdot \h(p) = A \cdot \left( \h \circ \g \circ \f (\kappa) \right) = \h \circ \g \circ \f \left( A \cdot \kappa \right) = \h \circ \g \circ \f (1,0) = \mathpzc{h}_0. 
\] Similarly, by equivariance of $\F$ and $\G$, $A$ sends the flag $(p,V,o)$ to the standard one $\G \circ \F(1,0)$ from \refsec{examples_from_10}, which we denote $(p_0, V_0, o_0)$: \[ A \cdot (p,V,o) = A \cdot \left( \G \circ \F (\kappa) \right) = \G \circ \F \left(A \cdot \kappa \right) = \G \circ \F (1,0) = (p_0, V_0, o_0). \] Consider now the action of $A$ on oriented line fields. Recall that $SL(2,\C)$ acts on $\R^{1,3}$ via linear maps in $SO(1,3)^+$. If there is an oriented line field $L^O$ on $\h(p)$, then $A$ (via its derivative; but $A$ acts on $\R^{1,3}$ by a linear map) takes $L^O$ to an oriented line field on $\h(p_0)$, and $A^{-1}$ does the opposite. Thus $A$ and $A^{-1}$ provide a bijection \begin{equation} \label{Eqn:oriented_line_field_bijection} \left\{ \text{Oriented line fields on $\h(p)$} \right\} \cong \left\{ \text{Oriented line fields on $\mathpzc{h}_0$} \right\}. \end{equation} Now, if $P$ is a parabolic isometry fixing $\h(p)$ then $A P A^{-1}$ is a parabolic isometry fixing $\mathpzc{h}_0 = A \cdot \h(p)$. This conjugation operation $P \mapsto A P A^{-1}$ has inverse $P \mapsto A^{-1} P A$, and provides a bijection between parabolic isometries fixing $\h(p)$ and parabolic isometries fixing $\mathpzc{h}_0 = A \cdot \h(p)$. Thus, if we have a parallel oriented line field $L^O$ on $\h(p)$, then it is preserved under all parabolics $P$ fixing $\h(p)$, i.e. $P \cdot L^O = L^O$. Then the corresponding line field $A L^O$ on $\mathpzc{h}_0 = A \cdot \h(p)$ is preserved by all parabolics $A P A^{-1}$ fixing $\mathpzc{h}_0$, so $A \cdot L^O$ is parallel. In other words, the bijection \refeqn{oriented_line_field_bijection} above restricts to a bijection \begin{equation} \label{Eqn:parallel_oriented_line_field_bijection} \left\{ \text{Parallel oriented line fields on $\h(p)$} \right\} \cong \left\{ \text{Parallel oriented line fields on $\mathpzc{h}_0$} \right\}. \end{equation} Now taking the given oriented line field $L^O$ from $\H(p,V,o)$ and applying $A$ gives an oriented line field on $\mathpzc{h}_0$. We compute \[ A L^O = A \left( V \cap T \h(p) \right) = A \cdot V \cap T \left( A \cdot \h(p) \right) = V_0 \cap T \mathpzc{h}_0, \] which is precisely the oriented line field from $\H \circ \G \circ \F (1,0)$ in \refsec{examples_from_10}, which we calculated to be parallel. As $A$ sends $L^O$ to a parallel oriented line field, by \refeqn{parallel_oriented_line_field_bijection} $L^O$ is also parallel. \end{proof} The proof above essentially shows that any horosphere $\mathpzc{h}$, and the group of parabolics preserving it, behave like any other. The group of parabolics preserving a horosphere is isomorphic to the additive group $\C$ and acts by Euclidean translations on the horosphere. By an argument similar to the one above, one can show that if $A$ is parabolic and fixes $p \in L^+$, then $A$ fixes the horosphere $\h(p)$, the line $\R p$, the orthogonal complement $p^\perp$, and the quotient $p^\perp / \R p$, where it acts by translations. \subsubsection{Decorated horospheres} \label{Sec:decorated_horospheres} Parallel oriented line fields are precisely the type of decoration we want on horospheres (at least, until we introduce spin in \refsec{spin}). As we see now, they make $\H$ into a bijection. \begin{defn} \label{Def:decorated_horosphere} A \emph{decorated horosphere} is a pair $(\mathpzc{h}, L^O_P)$ consisting of $\mathpzc{h}\in\mathfrak{H}$ together with an oriented parallel line field $L^O_P$ on $\mathpzc{h}$. The set of all decorated horospheres is denoted $\mathfrak{H_D}$. 
\end{defn} We often refer to the oriented parallel line field on a horosphere as its \emph{decoration}. By definition, $\mathfrak{H_D} \subset \mathfrak{H_D^O}$. Note that \refdef{decorated_horosphere} does not refer to any particular model of hyperbolic space. When we refer to decorated horospheres in a particular model we add it in brackets, e.g. $\mathfrak{H_D}(\hyp)$. Although $\H$ was originally defined (\refdef{H_PONF_to_decorated_horospheres}) as a map $\mathcal{F_P^O}(\R^{1,3}) \To \mathfrak{H_D^O}(\hyp)$, by \reflem{image_of_H_parallel} $\H$ in fact has image $\mathfrak{H_D}(\hyp)$. Thus, we henceforth regard $\H$ as a map to the set of decorated horospheres, i.e. \[ \H \colon \mathcal{F_P^O} (\R^{1,3}) \To \mathfrak{H_D}(\hyp). \] We will no longer need to refer to arbitrary line fields or overly decorated horospheres. \begin{lem} \label{Lem:H_bijection} $\H \colon \mathcal{F_P^O}(\R^{1,3}) \To \mathfrak{H_D}(\hyp)$ is a bijection. \end{lem} \begin{proof} From \refdef{h}, $\h \colon L^+ \To \mathfrak{H}(\hyp)$ is a bijection. Since the horosphere of $\H(p,V,o)$ is just $\h(p)$, every horosphere is obtained in the image of $\H$. As explained in \refsec{rotating_flags}, there is an $S^1$ family of flags at any given basepoint $p \in L^+$. The 2-planes $V$ in this family all contain the line $\R p$, and rotate in the $3$-dimensional subspace $T_p L^+$ of $\R^{1,3}$. In defining the map $\H$, the horosphere $\h(p)$ is cut out of $\hyp$ by the 3-plane $\Pi$ with equation $\langle x, p \rangle = 1$. This 3-plane is parallel to the 3-plane $\langle x,p \rangle = 0$, which is $p^\perp = T_p L^+$. So in fact the tangent space to $\Pi$ at any point is just $T_p L^+$. We saw in \refsec{flags_and_horospheres} that $V$ always intersects the tangent space to $\h(p)$ in a 1-dimensional set, i.e. transversely in $\Pi$, and we saw in \reflem{image_of_H_parallel} that the resulting oriented line field is always parallel, hence determined by its value at one point. Moreover, the horosphere (being a spacelike surface) is transverse to the lightlike direction $\R p$. So as the flags based at $p$ rotate about $\R p$, they can also be considered to rotate in $T_p L^+ \cong T \Pi$, and transversely and bijectively cut out the $S^1$ family of oriented parallel directions on the 2-dimensional horosphere $\h(p)$ at each point. \end{proof} \subsubsection{$SL(2,\C)$ action on decorated horospheres} \label{Sec:SL2c_on_decorated_horospheres} \begin{defn} \ \label{Def:SL2C_action_UODHOR_hyp} $SL(2,\C)$ acts on $\mathfrak{H_D}(\hyp)$ via its action on $\mathfrak{H}(\hyp)$ and its derivative. \end{defn} This action of $A \in SL(2,\C)$ derives from its action on $\R^{1,3}$ (\refdef{SL2C_on_R31}) via linear maps in $SO(1,3)^+$, the orientation-preserving isometries of $\hyp$. A horosphere $\mathpzc{h}$ is sent to $A \cdot \mathpzc{h}$ as in \refdef{SL2C_action_on_hyperboloid_model}. The derivative of this linear map (which is the same linear map, on the tangent space to the horosphere) applies to the decoration. Thus if $(\mathpzc{h}, L_P^O)$ is a decorated horosphere then $A \cdot (\mathpzc{h}, L_P^O) = (A \cdot \mathpzc{h}, A \cdot L_P^O)$ where both $A \cdot \mathpzc{h}$ and $A \cdot L_P^O$ mean to apply $A$ as a linear map in $SO(1,3)^+$. \begin{lem} \label{Lem:H_equivariant} The actions of $SL(2,\C)$ on $\mathcal{F_P^O}(\R^{1,3})$ (\refdef{SL2C_on_PONF_R31}), and $\mathfrak{H_D}(\hyp)$ are equivariant with respect to $\H$. 
\end{lem} \begin{proof} The equivariance basically follows from the fact that $A$ acts via a linear map in $SO(1,3)^+$ on both spaces. Explicitly, let $A \in SL(2,\C)$, and let $M \in SO(1,3)^+$ be the induced map on $\R^{1,3}$. For a flag $(p,V,o) \in \mathcal{F_P^O}(\R^{1,3})$, the action of $A$ on $p, V$ and $o$ is via the linear map $M$ on $\R^{1,3}$, and we have $A\cdot (p,V,o)=(Mp,MV,Mo)$ where $M$ acts linearly in the usual way. Now $\H(p,V,o) = (\h(p), V \cap T\h(p))$ where the horosphere $\h(p)\in\mathfrak{H}(\hyp)$ is cut out of $\hyp$ by the plane with equation $\langle x,p \rangle = 1$, and $V \cap T \h(p)$ is a line which obtains an orientation from $o$. Thus, $A\cdot \H(p,V,o) = (M\h(p), M(V \cap T\h(p)))$ is simply obtained by applying the linear map $M$ to the situation. On the other hand, $\H(Mp,MV,Mo) = (\h(Mp), MV \cap T\h(Mp))$. By equivariance of $\h$ (\reflem{h_equivariance}), $\h(Mp)=M \h(p)$. And $M(V \cap T\h(p)) = MV \cap M(T\h(p)) = MV \cap TM\h(p)$: the image under $M$ of the intersection of the 2-plane $V$ with the tangent space of $\h(p)$ is the intersection of $MV$ with the tangent space of $M\h(p) = \h(Mp)$. \end{proof} \subsection{From the hyperboloid model to the disc model} \label{Sec:hyperboloid_to_disc} The fourth step of our journey is from the hyperboloid model $\hyp$ to the disc model $\Disc$, via the maps $\i$ (and $\I$) from horospheres (with decorations) in $\hyp$ to horospheres (with decorations) in $\Disc$. The map from $\hyp$ to $\Disc$ is a standard isometry and we discuss it briefly. All constructions in $\hyp$ translate directly to $\Disc$, but we consider this model only in passing. In \refsec{disc_model} we introduce the model and the maps $\i$ and $\I$; in \refsec{SL2C_disc_model} we discuss $SL(2,\C)$ actions and equivariance; in \refsec{examples_computations_disc_model} we discuss some examples and computations. \subsubsection{The disc model} \label{Sec:disc_model} For a point $(X,Y,Z) \in \R^3$ let $r$ be its Euclidean length, i.e. $r \geq 0$ is such that $r^2 = X^2 + Y^2 + Z^2$. \begin{defn} The \emph{disc model} $\Disc$ of $\hyp^3$ is the set \[ \{(X,Y,Z) \in \R^3 \, \mid \, r < 1 \} \quad \text{with Riemannian metric} \quad ds^2 = \frac{4 \left( dX^2 + dY^2 + dZ^2 \right)}{\left( 1-r^2 \right)^2}. \] The boundary at infinity $\partial \Disc$ of $\Disc$ is $\{(X,Y,Z) \in \R^3 \, \mid r = 1 \}$. 
\end{defn} \begin{center} \begin{tikzpicture} \draw[blue] (0,1) ellipse (1cm and 0.2cm); \fill[white] (-1,1)--(1,1)--(1,1.5)--(-1,1.5); \draw[blue,dotted] (0,1) ellipse (1cm and 0.2cm); \draw (0,0) ellipse (1cm and 0.2cm); \draw[blue] (-4,4)--(0,0)--(4,4); \draw[dashed, thick] plot[variable=\t,samples=1000,domain=-75.5:75.5] ({tan(\t)},{sec(\t)}); \draw[blue] (0,4) ellipse (4cm and 0.4cm); \draw (0,4) ellipse (3.85cm and 0.3cm); \fill[red] (1.5,3) circle (0.055cm); \node at (1.5,3.25){$x$}; \fill[red] (0.38,0) circle (0.055cm); \node at (0.75,0){\tiny$\i(x)$}; \fill[red] (0,-1) circle (0.055cm); \node at (-1,-0.8){$(-1,0,0,0)$}; \draw[dotted, thin] plot[variable=\t,samples=1000,domain=-75.5:75.5] ({tan(\t)},{sec(\t)}); \draw[dashed] (0,4) ellipse (4cm and 0.4cm); \draw[dashed] (0,4) ellipse (3.85cm and 0.3cm); \draw[dashed] (-4,4)--(0,0)--(4,4); \node at (-2.25,3){$\hyp$}; \draw[red] (1.5,3)--(0,-1); \node at (1.25,0){$\Disc$}; \end{tikzpicture} \captionof{figure}{From the hyperboloid $\hyp$ to the disc $\Disc$ (drawn a dimension down).} \label{Fig:hyperboloid_to_disc} \end{center} The standard isometry from the hyperboloid model $\hyp$ to the disc model $\Disc$ regards $\Disc$ as the unit 3-disc in the 3-plane $T=0$, i.e. \[ \Disc = \{ (0,X,Y,Z) \mid X^2 + Y^2 + Z^2 < 1 \}, \] and is given by straight-line projection from $(-1,0,0,0)$. See \reffig{hyperboloid_to_disc}. This gives the following map. \begin{defn} \label{Def:isometry_hyp_disc} The isometry $\i$ from the hyperboloid model $\hyp$ to the disc model $\Disc$ is given by \[ \i \colon \hyp \To \Disc, \quad \i (T,X,Y,Z) = \frac{1}{1+T} (X,Y,Z). \] The map $\i$ extends to a map on spheres at infinity, which is essentially the identity on $\S^+$, but the domain can be taken to be $L^+$, \[ \i \colon \partial \hyp = \S^+ \To \partial \Disc \text{ or } L^+ \To \partial \Disc, \quad \i (T,X,Y,Z) = \left( \frac{X}{T}, \frac{Y}{T}, \frac{Z}{T} \right). \] The map $\i$ yields a map on horospheres, which we also denote $\i$, \[ \i \colon \mathfrak{H}(\hyp) \To \mathfrak{H}(\Disc). \] \end{defn} Horospheres in $\Disc$ appear as Euclidean spheres tangent to the boundary sphere $\partial \Disc$. The point of tangency with $\partial \Disc$ is the centre of the horosphere. The horoball bounded by the horosphere is the interior of the Euclidean sphere. If a horosphere in $\hyp$ has an oriented tangent line field, we can transport it to $\Disc$ using the derivative of $\i$. One of these oriented tangent line fields is parallel if and only if the other is. So we obtain the following. \begin{defn} \label{Def:I} The map \[ \I \colon \mathfrak{H_D}(\hyp) \To \mathfrak{H_D}(\Disc) \] is given by $\i$ and its derivative. \end{defn} It is clear that $\i$ and $\I$ are both bijections. \subsubsection{$SL(2,\C)$ action on disc model} \label{Sec:SL2C_disc_model} The action of $SL(2,\C)$ extends to $\Disc$, $\partial \Disc$, $\mathfrak{H}(\Disc)$ and $\mathfrak{H_D}(\Disc)$, as follows. \begin{defn} The action of $A \in SL(2,\C)$ on \label{Def:SL2C_action_disc_model} \label{Def:SL2C_action_UODHOR_Disc} \begin{enumerate} \item $\Disc$ sends each $x \in \Disc$ to $A\cdot x = \i \left( A\cdot \left( \i^{-1} x \right) \right)$. \item $\partial \Disc$ sends each $x \in \partial \Disc$ to $ A\cdot x = \i \left( A\cdot \left( \i^{-1} x \right) \right)$. \item $\mathfrak{H}(\Disc)$ is induced by the action on $\Disc$, which sends horospheres to horospheres. \item $\mathfrak{H_D}(\Disc)$ is induced by its action on $\mathfrak{H}(\Disc)$ and its derivative. 
\end{enumerate} \end{defn} Note that in (i), $\i^{-1} x \in \hyp$, so $A \cdot \i^{-1}(x)$ uses the action on $\hyp$, and in (ii), $\i^{-1} (x) \in \partial \hyp$, so $A \cdot \i^{-1}(x)$ uses the action on $\partial \hyp$ (\refdef{SL2C_action_on_hyperboloid_model}). The actions on $\Disc$ and $\partial \Disc$ are equivariant by definition: if we take a point $p \in \hyp$ or $\partial \hyp$, then $\i(p) \in \Disc$ or $\partial \Disc$, and by definition \[ A \cdot \i (p) = \i \left( A \cdot p \right). \] The action on $\horos(\Disc)$ is induced by the pointwise action on $\Disc$, immediately giving the following. \begin{lem} The actions of $SL(2,\C)$ on \label{Lem:SL2C_actions_on_Hyp_Disc_equivariant} \[ \text{(i) } \hyp \text{ and } \Disc, \quad \text{(ii) } \partial \hyp \text{ and } \partial \Disc, \quad \text{(iii) } \mathfrak{H}(\hyp) \text{ and } \mathfrak{H}(\Disc) \] are equivariant with respect to $\i$. \qed \end{lem} \begin{lem} \label{Lem:I_equivariant} The actions of $SL(2,\C)$ on $\mathfrak{H_D}(\hyp)$ and $\mathfrak{H_D}(\Disc)$ are equivariant with respect to $\I$. \end{lem} \begin{proof} We just saw the action of $A \in SL(2,\C)$ on $\mathfrak{H}(\hyp)$ and $\mathfrak{H}(\Disc)$ are equivariant with respect to $\i$. Both $A$ and $\I$ transport tangent line fields using the derivative, so they commute. \end{proof} \subsubsection{Examples and computations} \label{Sec:examples_computations_disc_model} We give some facts about the isometry $\i$. \begin{lem} \label{Lem:i_facts} Under the map $\i \colon \hyp \To \Disc$, \begin{enumerate} \item $q_0 = (1,0,0,0) \in \hyp$ maps to the origin $(0,0,0) \in \Disc$. \item The point in $\partial \hyp$ represented by the ray in $L^+$ through $(1,X,Y,Z)$, maps to $(X,Y,Z) \in \partial \Disc$. \item In particular, the point of $\partial \hyp$ represented by the ray of $L^+$ through $p_0 = (1,0,0,1)$, maps to the north pole $(0,0,1) \in \partial \Disc$. \end{enumerate} \end{lem} \begin{proof} These are immediate from \refdef{isometry_hyp_disc}. \end{proof} \begin{eg}[Decorated horosphere in $\Disc$ of spinor $(1,0)$] \label{Eg:decorated_horosphere_of_10_Disc} Let $\kappa_0 = (1,0)$. The horosphere $\mathpzc{h}_0 =\h(p_0) = \h \circ \g \circ \f (\kappa_0)$ in $\hyp$, considered at length in the examples of \refsec{examples_from_10}, corresponds to a horosphere $\mathpzc{h}'_0 = \i(\mathpzc{h}_0)$ in $\Disc$. Since $\mathpzc{h}_0$ has centre the ray through $p_0 = (1,0,0,1)$ and passes through $q_0 = (1,0,0,0)$, using \reflem{i_facts}, $\mathpzc{h}'_0$ has centre $(0,0,1)$ and passes through the origin. Thus it is a Euclidean sphere of diameter $1$. In \refeqn{general_point_on_h0} we found a parametrisation of $\mathpzc{h}_0$ by $\alpha = a+bi \in \C$ or $(a,b) \in \R^2$. Applying $\i$ yields a parametrisation of $\mathpzc{h}'_0$, \begin{equation} \label{Eqn:parametrisation_of_10_horosphere_in_disc} \i \left( 1+ \frac{|\alpha|^2}{2},a, b, \frac{|\alpha|^2}{2} \right) = \frac{2}{4+a^2 + b^2} \left( a, b, \frac{a^2 + b^2}{2} \right). \end{equation} One can verify explicitly that this parametrises a Euclidean sphere in $\Disc$, tangent to $\partial \Disc$ at $(0,0,1)$ and passing through the origin (except for the point of tangency). In \refeg{horosphere_of_10_generally} we found the oriented tangent line field $L^O$ on $\mathpzc{h}_0$ given by $\H \circ \G \circ \F(\kappa_0)$ explicitly: at the point $q$ parametrised by $(a,b)$, $L^O_q$ is spanned and oriented by $(b, 0, 1, b)$, which is the direction of constant $a$ and increasing $b$. 
Applying $\I$ we obtain a decoration on $\mathpzc{h}'_0$. This amounts to applying the derivative of $\i$ in the appropriate direction, which is just the partial derivative of $\i$ with respect to $b$. We find that the corresponding oriented line field on $\mathpzc{h}'_0$ is spanned and oriented by
\begin{equation}
\label{Eqn:decoration_on_10_horosphere_disc}
\frac{2}{(4+a^2+b^2)^2} \left( -2ab, 4+a^2-b^2,4b \right).
\end{equation}
This gives an explicit description of $\I \circ \H \circ \G \circ \F(\kappa_0)$. In particular, at the origin $(a,b)=(0,0)$, the decoration points in the direction $(0,1,0)$.
\end{eg}
For a general spin vector $\kappa$, we can explicitly compute the centre of the corresponding horosphere in $\Disc$.
\begin{lem}
For $\kappa = (a+bi, c+di) \in \C^2_\times$ with $a,b,c,d \in \R$, we have
\[ \i \circ \h_\partial \circ \g \circ \f (\kappa) = \frac{1}{a^2+b^2+c^2+d^2} \left( 2(ac+bd), 2(bc-ad), a^2 + b^2 - c^2 - d^2 \right). \]
\end{lem}
\begin{proof}
In \refsec{light_cone_to_horosphere} we observed that $\h_\partial$ is just the projectivisation map $L^+ \To \S^+$. So $\h_\partial \circ \g \circ \f (\kappa)$ is the point on $\partial \hyp$ given by the ray through $\g \circ \f (\kappa)$, calculated in \reflem{spin_vector_to_TXYZ}. Applying $\i$ to a point on that ray, such as the point calculated in \reflem{gof_celestial_sphere}, we obtain the result.
\end{proof}
A few further remarks:
\begin{itemize}
\item In \refsec{calculating_flags_Minkowski} we considered $\g \circ D_\kappa \f (\ZZ(\kappa))$, which is involved in defining the flag $\G \circ \F (\kappa)$. Explicit calculation (\reflem{null_flag_tricky_vector}) showed $\g \circ D_\kappa \f (\ZZ(\kappa))$ has no $T$-component. It thus defines a tangent vector to the $S^2$ given by intersecting $L^+$ with any slice of constant positive $T$. The map from this $S^2$ to $\partial \Disc$ is just a dilation from the origin, and so we immediately obtain these flag directions on $\partial \Disc$. From \reflem{null_flag_tricky_vector} we find that when $\kappa = (a+bi, c+di)$ with $a,b,c,d \in \R$, the direction is
\begin{equation}
\label{Eqn:flag_direction_disc}
\left( 2(cd-ab), a^2-b^2+c^2-d^2,2(ad+bc) \right).
\end{equation}
\item More generally, in \refsec{rotating_flags} we found an orthogonal basis $e_1 (\kappa), e_2(\kappa), e_3 (\kappa)$ for $\R^3$, obtained by projecting to the $XYZ$ 3-plane the point $p = \g \circ \f (\kappa)$, and derivatives of $\g \circ \f$ in the directions $\ZZ(\kappa)$ and $i \ZZ(\kappa)$. As discussed there, this basis yields an explicit picture of the flag of $\kappa$ in the 3-plane $T=r^2$, on which the light cone appears as a 2-sphere of radius $r^2$. Projection to the $XYZ$ 3-plane, and rescaling to the unit sphere, then gives a description of the flag on $\partial \Disc$. So \reffig{flag_intersect_T_r_squared} can be regarded also as a picture of a flag in $\Disc$.
\item With this in mind, return to the decorated horosphere $\horo'_0$ of \refeg{decorated_horosphere_of_10_Disc}: described by $\kappa_0 = (1,0)$, it has centre $(0,0,1)$, Euclidean diameter 1, parametrisation \refeqn{parametrisation_of_10_horosphere_in_disc}, and decoration \refeqn{decoration_on_10_horosphere_disc}. From \refeqn{flag_direction_disc}, the flag direction at $(0,0,1)$ (setting $\kappa = \kappa_0$) is $(0,1,0)$. Now consider what happens as a point $q$ in the horosphere approaches $(0,0,1) \in \partial \Disc$ along the line field.
This corresponds to holding $a$ constant and letting $b \rightarrow \pm \infty$. One can check that the oriented line field on $\mathpzc{h}'_0$ approaches $(0,-1,0)$. This is the negative of the flag direction at $(0,0,1)$ calculated above, and we appear to have a ``mismatch" of decorations at infinity. See \reffig{5}. This is worth noting, to avoid future confusion, but not particularly surprising: in Minkowski space, the flag direction along $L^+$ and the oriented line field on a horosphere come from intersections with different, parallel 3-planes. Also note that, approaching the centre of the horosphere from other directions on the horosphere, the oriented line field can approach any arbitrary direction.
\end{itemize}
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw (0,0) ellipse (1.5cm and 0.25cm);
\fill[white] (-1.45,-0)--(1.45,-0)--(1.45,0.3)--(-1.45,0.3);
\draw[dashed] (0,0) ellipse (1.5cm and 0.25cm);
\fill[white] (0,0.75) circle (0.75cm);
\draw[gray, dashed] (0,0.75) ellipse (0.75cm and 0.125cm);
\fill[white] (-0.7,0.75)--(0.7,0.75)--(0.7,0.9)--(-0.7,0.9);
\draw[gray, dotted] (0,0.75) ellipse (0.75cm and 0.125cm);
\shade[ball color = gray!40, opacity = 0.1] (0,0) circle (1.5cm);
\draw (0,0) circle (1.5cm);
\shade[ball color = gray!40, opacity = 0.1] (0,0.75) circle (0.75cm);
\draw (0,0.75) circle (0.75cm);
\draw[dotted] (0,0) ellipse (1.5cm and 0.25cm);
\draw[<->] (3,1)--(3,0)--(4,0);
\draw[->] (3,0)--(2.5,-0.5);
\node at (3,1.25){$z$};
\node at (2.3,-0.7){$x$};
\node at (4.25,0){$y$};
\node at (0,1.75){$(0,0,1)$};
\draw (0,0.85) circle (0.65cm);
\draw (0,1) circle (0.5cm);
\draw (0,1.2) circle (0.3cm);
\draw (0,1.4) circle (0.1cm);
\draw[<-] (0.02,1.3)--(0.04,1.3);
\draw[<-] (0.02,0.9)--(0.04,0.9);
\draw[<-] (0.02,0.5)--(0.04,0.5);
\draw[<-] (0.02,0.2)--(0.04,0.2);
\draw[line width=0.5mm, ->] (-0.04,1.5)--(-0.06,1.5);
\end{tikzpicture}
\captionof{figure}{Decoration ``mismatch" at $\infty$.}
\label{Fig:5}
\end{center}
\subsection{From the disc model to the upper half space model}
\label{Sec:Disc_to_U}
Finally, in our fifth step, we pass to the upper half space model $\U$, via the maps $\j$ (and $\J$) sending horospheres (with decorations) from $\Disc$ to $\U$. We have already discussed $\U$ to some extent in the introduction. The map $\Disc \To \U$ is another standard isometry and we discuss it briefly. We introduce $\U$, $\j$ and $\J$ in \refsec{U_horospheres_decorations} and prove their $SL(2,\C)$ equivariance in \refsec{SL2C_on_U}.
\subsubsection{The upper half space model, horospheres, and decorations}
\label{Sec:U_horospheres_decorations}
As discussed in introductory \refsec{intro_horospheres_decorations}, we may denote points in $\U$ by Cartesian coordinates $(x,y,z)$ with $z>0$, or combine $x$ and $y$ into a complex number $x+yi$, writing points of $\U$ as $(x+yi,h) \in \C \times \R^+$. Regarding $\C$ as $\C \times \{0\}$, the boundary at infinity is $\partial \U = \C \cup \{\infty\} = \CP^1$. Stereographic projection $S^2 \To \CP^1$ (the inverse of the map in \refdef{stereographic_projection}) yields the map $\partial \Disc \To \partial \U$.
\begin{defn}
\label{Def:isometry_D_U}
The isometry $\j$ from the disc model $\Disc$ to the upper half space model $\U$ is induced by its map on spheres at infinity,
\[ \j = \Stereo^{-1} \colon \partial \Disc = S^2 \To \partial \U = \C \cup \{\infty\}, \quad \j(x,y,z) = \frac{x+iy}{1-z}.
\]
This map extends uniquely to an isometry $\j \colon \Disc \To \U$ and then restricts to a map on horospheres, which we also denote $\j$,
\[ \j \colon \mathfrak{H}(\Disc) \To \mathfrak{H}(\U). \]
\end{defn}
As with $\i$ and $\I$, the derivative of the isometry $\j$ can be used to transport a decoration on a horosphere from $\Disc$ to $\U$.
\begin{defn}
\label{Def:J}
The map
\[ \J \colon \mathfrak{H_D}(\Disc) \To \mathfrak{H_D}(\U) \]
is given by $\j \colon \Disc \To \U$ and its derivative.
\end{defn}
Clearly $\j$ (in all its forms) and $\J$ are bijections.
We have discussed horospheres and decorations in $\U$ in introductory \refsec{intro_horospheres_decorations}; we now elaborate. A horosphere $\horo \in \horos(\U)$ centred at $\infty$ appears in $\U$ as a horizontal Euclidean plane. The group of parabolic isometries fixing $\mathpzc{h}$ appears in $\U$ as horizontal translations. An oriented tangent line field on $\horo$ is then parallel if and only if it appears \emph{constant}. So to describe a decoration on $\mathpzc{h}$, we only need to specify a direction at one point; the decoration points in the same direction at all other points. Since $\horo$ appears in $\U$ as a plane parallel to the complex plane, we can describe a decoration by a complex number. Since it is an oriented line field, that complex number is only well defined up to multiplication by positive reals. See \reffig{decorated_horospheres}(b).
On the other hand, if a horosphere $\mathpzc{h} \in \horos(\U)$ is not centred at $\infty$, then it appears in $\U$ as a Euclidean sphere tangent to $\C$. As discussed in \refsec{parallel_line_fields}, to specify a decoration, it suffices to specify an oriented tangent line at any point of $\horo$; the oriented line field then propagates over the rest of $\horo$ by parallel translation. The point at which it is most convenient to specify a decoration is the point which appears highest in $\U$, which we call the \emph{north pole} of $\horo$. The tangent space to $\horo$ at its north pole is parallel to $\C$, and so a decoration there can be specified by a complex number (again, up to multiplication by positive reals). Precisely, at the north pole, a tangent vector $(a,b,0)$ in Cartesian coordinates corresponds to the complex number $a+bi$. See \reffig{upper_half_space_decorated_horosphere}.
\begin{defn}
\label{Def:decoration_specification}
Let $(\horo, L_P^O) \in \mathfrak{H_D}(\U)$, where $\horo$ is a horosphere and $L_P^O$ a parallel oriented line field.
\begin{enumerate}
\item If the centre of $\horo$ is $\infty$, then a \emph{specification} of $L_P^O$ is a complex number directing $L_P^O$ at any point of $\horo$, identifying each tangent space of $\horo$ with $\C$.
\item If the centre of $\horo$ is not $\infty$, then a \emph{north-pole specification}, or just \emph{specification}, of $L_P^O$ is a complex number directing $L_P^O$ at the north pole $n$ of $\horo$, identifying $T_n \horo$ with $\C$.
\end{enumerate}
\end{defn}
Thus any decorated horosphere in $\U$ has a specification, but it is not unique: if $\alpha \in \C$ is a specification for $\horo$, then so is $c \alpha$ for any $c > 0$.
\subsubsection{$SL(2,\C)$ action on the upper half space model}
\label{Sec:SL2C_on_U}
The $SL(2,\C)$ actions on various aspects of $\U$ are similar to those on the previous models of $\hyp^3$, using actions defined previously.
\begin{defn}
\label{Def:SL2C_action_upper_half_space_model}
\label{Def:SL2C_action_UODHOR_U}
The action of $A \in SL(2,\C)$ on
\begin{enumerate}
\item $\U$ sends each $x \in \U$ to $A\cdot x = \j \left( A\cdot \left( \j^{-1} x \right) \right)$.
\item $\partial \U$ sends each $x \in \partial \U$ to $A\cdot x = \j \left( A\cdot \left( \j^{-1} x \right) \right)$.
\item $\mathfrak{H}(\U)$ is induced by the action on $\U$, which sends $\horos(\U)$ to $\horos(\U)$.
\item $\mathfrak{H_D}(\U)$ is induced by its action on $\horos(\U)$ and its derivative.
\end{enumerate}
\end{defn}
As with the disc model, the actions on $\U$ and $\partial \U$ are defined to be equivariant, and as the action on $\horos(\U)$ is induced pointwise by the action on $\U$, we immediately have the following.
\begin{lem}
\label{Lem:D_U_actions_equivariant}
The actions of $SL(2,\C)$ on
\[ \text{(i) } \Disc \text{ and } \U, \quad \text{(ii) } \partial \Disc \text{ and } \partial \U, \quad \text{(iii) } \mathfrak{H}(\Disc) \text{ and } \mathfrak{H}(\U) \]
are equivariant with respect to $\j$. \qed
\end{lem}
Similarly, both $\J$ and $A \in SL(2,\C)$ transport line fields using the derivative, giving the following.
\begin{lem} \
\label{Lem:J_equivariant}
The actions of $SL(2,\C)$ on $\mathfrak{H_D}(\Disc)$ and $\mathfrak{H_D}(\U)$ are equivariant with respect to $\J$. \qed
\end{lem}
\subsection{Putting the maps together}
\label{Sec:putting_maps_together}
We now have two sequences of maps, $\f,\g,\h,\i,\j$ and $\F,\G,\H,\I,\J$, as discussed in the introduction. We consider their compositions. In \refsec{boundary_points_isometries} we consider the effect of these maps on points at infinity, and show that the action of $SL(2,\C)$ on $\partial \U$ yields the standard description of isometries via M\"{o}bius transformations. In \refsec{fghij_2}, we calculate the compositions of $\f, \g, \h, \i, \j$ and $\F,\G,\H,\I,\J$.
\subsubsection{Boundary points and isometries}
\label{Sec:boundary_points_isometries}
Before considering the composition of $\f,\g,\h,\i,\j$, we consider the composition
\[ \C_\times^2 \stackrel{\f}{\To} \HH_0^+ \stackrel{\g}{\To} L^+ \stackrel{\h_\partial}{\To} \partial \hyp \stackrel{\i}{\To} \partial \Disc \stackrel{\j}{\To} \partial \U. \]
These compositions map to the points of $\partial\hyp, \partial\Disc, \partial\U$ which are the centres of the horospheres produced by $\h, \i, \j$. For convenience, we abbreviate the composition to
\[ \k_\partial = \j \circ \i \circ \h_\partial \circ \g \circ \f. \]
There are $SL(2,\C)$ actions on all these spaces. A matrix $A \in SL(2,\C)$ acts on $\C_\times^2$ via matrix-vector multiplication (\refdef{SL2C_action_on_C2}); on $S \in \HH_0^+$, $A$ acts as $A\cdot S = ASA^*$ (\reflem{restricted_actions_on_H}); on $L^+ \subset \R^{1,3}$, $A$ essentially has the same action, which via $\g$ becomes a linear map in $SO(1,3)^+$ (\refdef{SL2C_on_R31}); for $x \in \partial \hyp$, $A \in SL(2,\C)$ acts similarly (\refdef{SL2C_action_on_hyperboloid_model}); the action is then transferred to the other models using the isometries $\i$ and $\j$ (\refdef{SL2C_action_disc_model}, \refdef{SL2C_action_upper_half_space_model}). We have seen that these actions are all equivariant with respect to these maps: $\f$ (\reflem{restricted_actions_on_H}), $\g$ (remark after \refdef{SL2C_on_R31}), $\h_\partial$ (\reflem{h_equivariance}), $\i$ (\reflem{SL2C_actions_on_Hyp_Disc_equivariant}), and $\j$ (\reflem{D_U_actions_equivariant}). Thus, $\k_\partial$ is also $SL(2,\C)$-equivariant.
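Explicitly, spelled out in symbols (this merely restates what equivariance means, in the form in which it is used below): for all $A \in SL(2,\C)$ and all $\kappa \in \C_\times^2$,
\[ \k_\partial ( A \cdot \kappa ) = A \cdot \k_\partial ( \kappa ), \]
where on the left $A$ acts on $\C_\times^2$ by matrix-vector multiplication, and on the right $A$ acts on $\partial \U$ as in \refdef{SL2C_action_upper_half_space_model}.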
Let us now compute the composition $\k_\partial$!
\begin{prop}
\label{Prop:explicit_fghij}
The composition $\k_\partial = \j \circ \i \circ \h_\partial \circ \g \circ \f \colon \C_\times^2 \To \partial \U = \C \cup \{\infty\}$ is given by
\[ \k_\partial (\xi, \eta) = \frac{\xi}{\eta}. \]
\end{prop}
We give two proofs of this result. The first is more conceptual, using our previous observations about the Hopf fibration and stereographic projection. The second is explicitly computational.
\begin{lem}
\label{Lem:Stereo_Hopf_p}
Let $\p \colon \C^2_\times \To S^3$ be the map that collapses each real ray from the origin to its intersection with the unit 3-sphere. Then
\[ \Stereo \circ \Hopf \circ \, \p = \i \circ \h_\partial \circ \g \circ \f. \]
In other words, the following diagram commutes.
\begin{center}
\begin{tikzpicture}
\node (a) at (0,0){$\C^2_\times$};
\node (b) at (2,1){$S^3$};
\node (c) at (4,1){$\CP^1$};
\node (d) at (6,0){$S^2=\partial\Disc$};
\node (e) at (1,-1){$\HH_0^+$};
\node (f) at (3,-1){$L^+$};
\node (g) at (5,-1){$\partial\hyp$};
\draw[->] (a) -- (b) node [pos=0.5,above] {$\p$};
\draw[->] (b) -- (c) node [pos=0.5,above] {$\Hopf$};
\draw[->] (c) -- (d);
\node at (5.5,0.8) {$\Stereo$};
\draw[->] (a) -- (e) node [pos=0.75,above] {$\f$};
\draw[->] (e) -- (f) node [pos=0.5,above] {$\g$};
\draw[->] (f) -- (g) node [pos=0.5,above] {$\h_\partial$};
\draw[->] (g) -- (d) node [pos=0.25,above] {$\i$};
\end{tikzpicture}
\end{center}
\end{lem}
\begin{proof}
We already saw in \reflem{gof_Hopf} that, for $\kappa = (\xi, \eta) \in S^3$, the $XYZ$ coordinates of $\g \circ \f (\kappa)$ are precisely $\Stereo \circ \Hopf (\kappa)$. In this case (\reflem{spin_vector_to_TXYZ}), the $T$ coordinate of $\g \circ \f (\kappa)$ is $1$. Now the map $\h_\partial$ (\refdef{h_partial_light_cone_to_hyp}) projectivises the light cone, and then $\i$ (\refdef{isometry_hyp_disc}) maps it to the unit Euclidean sphere in such a way that the ray through $(1,X,Y,Z)$ maps to $(X,Y,Z)$. Hence we have
\begin{equation}
\label{Eqn:hgf=stereohopf_in_S3}
\i \circ \h_\partial \circ \g \circ \f (\kappa) = \Stereo \circ \Hopf (\kappa) \quad \text{for $\kappa \in S^3$}.
\end{equation}
Now for general $\kappa \in \C^2_\times$, let $\kappa = r\kappa'$ where $r>0$ and $\kappa' \in S^3$. Then $\p(\kappa) = \kappa'$ and $\i \circ \h_\partial \circ \g \circ \f (\kappa') = \Stereo \circ \Hopf (\kappa')$. Applying $\f$ we have $\f(\kappa) = \f(r \kappa') = (r \kappa')(r \kappa')^* = r^2 \kappa' \kappa'^*= r^2 \f(\kappa')$. Applying the linear map $\g$ we then have $\g \circ \f (\kappa) = r^2 \g \circ \f (\kappa')$; and $\h_\partial$ then collapses each ray to a point, so $\h_\partial \circ \g \circ \f (\kappa) = \h_\partial \circ \g \circ \f (\kappa')$. Putting this together we obtain the result:
\[ \i \circ \h_\partial \circ \g \circ \f (\kappa) = \i \circ \h_\partial \circ \g \circ \f (\kappa') = \Stereo \circ \Hopf (\kappa') = \Stereo \circ \Hopf \circ \, \p (\kappa). \]
\end{proof}
\begin{proof}[Proof 1 of \refprop{explicit_fghij}]
From the preceding lemma, we may replace $\i \circ \h_\partial \circ \g \circ \f$ with $\Stereo \circ \Hopf \circ \p$. The final map $\j$ (\refdef{isometry_D_U}) is the inverse of $\Stereo$ (\refdef{stereographic_projection}). Thus
\[ \k_\partial(\xi, \eta) = \j \circ \i \circ \h_\partial \circ \g \circ \f (\xi,\eta) = \Stereo^{-1} \circ \Stereo \circ \Hopf \circ \, \p (\xi, \eta) = \Hopf \circ \, \p (\xi, \eta).
\] Writing $(\xi, \eta) = r(\xi',\eta')$ where $r>0$ and $(\xi', \eta') \in S^3$, we have $\p (\xi, \eta) = (\xi', \eta')$ and \[ \Hopf \circ \, \p (\xi, \eta) = \Hopf (\xi', \eta') = \frac{\xi'}{\eta'} = \frac{\xi}{\eta}. \] \end{proof} \begin{proof}[Proof 2 of \refprop{explicit_fghij}] Let $\xi = a+bi$ and $\eta = c+di$ where $a,b,c,d \in \R$. In \reflem{spin_vector_to_TXYZ} we computed \[ \g \circ \f (\xi, \eta) = \left( a^2+b^2+c^2+d^2, 2(ac+bd), 2(bc-ad), a^2+b^2-c^2-d^2 \right) \in L^+. \] The map $\h_\partial$ then projectivises, and $\i$ (\refdef{isometry_hyp_disc}) then maps $(T,X,Y,Z) \mapsto (X/T,Y/T,Z/T)$, so we have \[ \i \circ \h_\partial \circ \g \circ \f (\xi, \eta) = \left( \frac{2(ac+bd)}{a^2+b^2+c^2+d^2}, \frac{2(bc-ad)}{a^2+b^2+c^2+d^2}, \frac{a^2+b^2-c^2-d^2}{a^2+b^2+c^2+d^2} \right). \] (This may also be obtained from \reflem{gof_celestial_sphere}). Finally, applying $\j$ (\refdef{isometry_D_U}) we have \begin{align*} \k_\partial (\xi, \eta) = \j \circ \i \circ \h_\partial \circ \g \circ \f (\xi, \eta) &= \frac{ \frac{2(ac+bd)}{a^2+b^2+c^2+d^2} + i \frac{2(bc-ad)}{a^2+b^2+c^2+d^2} }{1 - \frac{a^2+b^2-c^2-d^2}{a^2+b^2+c^2+d^2} } = \frac{ (ac+bd) + i(bc-ad) }{ c^2+d^2 } \\ &= \frac{(a+bi)(c-di)}{(c+di)(c-di)} = \frac{a+bi}{c+di} = \frac{\xi}{\eta}. \end{align*} \end{proof} \begin{lem} An $A \in SL(2,\C)$ acts on $\partial \U = \C \cup \{\infty\} = \CP^1$ by M\"{o}bius transformations: \[ \text{if} \quad A = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \quad \text{and} \quad z \in \C \cup \{\infty\} \quad \text{then} \quad A\cdot z = \frac{\alpha z + \beta}{\gamma z + \delta}. \] \end{lem} Note that when $A$ is the negative identity matrix, the corresponding M\"{o}bius transformation is just the identity. Thus the above action of $SL(2,\C)$ descends to an action of $PSL(2,\C)$. It is a standard fact that a M\"{o}bius transformation on $\partial \U$ extends to an orientation-preserving isometry of $\U$. In fact, the orientation preserving isometry group of $\U$ is $PSL(2,\C)$, acting in this way. \begin{proof} We use the equivariance of $\k_\partial \colon \C_\times^2 \To \partial \U = \C \cup \{\infty\}$. Starting from $\kappa = (\xi, \eta) \in \C_\times^2$ we have \[ A\cdot\kappa = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \begin{pmatrix} \alpha \xi + \beta \eta \\ \gamma \xi + \delta \eta \end{pmatrix}. \] On the other hand we just computed $\k_\partial (\kappa) = \xi/\eta$. Thus the action of $A$ on this point of $\C \cup \{\infty\}$ is given by \[ A\cdot \k_\partial (\kappa) = \k_\partial (A\cdot\kappa) = \k_\partial \begin{pmatrix} \alpha \xi + \beta \eta \\ \gamma \xi + \delta \eta \end{pmatrix} = \frac{\alpha \xi + \beta \eta}{\gamma \xi + \delta \eta} \] which is precisely the action of the claimed M\"{o}bius transformation on $\xi/\eta$. Every point of $\C \cup \{\infty\}$ can be written as $\xi/\eta$ for some such $(\xi, \eta)$, and hence the action on $\C \cup \{\infty\}$ is as claimed. Even better, we can regard $\CP^1$ and its points as $[\xi:\eta]$, and then $A$ simply acts linearly. \end{proof} \subsubsection{Maps to horospheres and decorations} \label{Sec:fghij_2} \label{Sec:FGHIJ} Consider now the following compositions, which map to horospheres and decorated horospheres. 
\begin{gather*}
\C_\times^2 \stackrel{\f}{\To} \HH_0^+ \stackrel{\g}{\To} L^+ \stackrel{\h}{\To} \mathfrak{H}(\hyp) \stackrel{\i}{\To} \mathfrak{H}(\Disc) \stackrel{\j}{\To} \mathfrak{H}(\U), \\
\C_\times^2 \stackrel{\F}{\To} \mathcal{F_P^O}(\HH) \stackrel{\G}{\To} \mathcal{F_P^O} (\R^{1,3}) \stackrel{\H}{\To} \mathfrak{H_D}(\hyp) \stackrel{\I}{\To} \mathfrak{H_D}(\Disc) \stackrel{\J}{\To} \mathfrak{H_D}(\U).
\end{gather*}
We abbreviate the compositions to
\[ \k = \j \circ \i \circ \h \circ \g \circ \f \quad \text{and} \quad \K = \J \circ \I \circ \H \circ \G \circ \F. \]
Again, $SL(2,\C)$ acts on all these spaces; in addition to the actions seen in \refsec{boundary_points_isometries}, $A \in SL(2,\C)$ acts on horospheres $\horos(\hyp)$ via its action on $\R^{1,3}$ (\refdef{SL2C_action_on_hyperboloid_model}), and on horospheres in other models by using the isometries between the models (\refdef{SL2C_action_disc_model}, \refdef{SL2C_action_upper_half_space_model}). We have seen these actions are all equivariant with respect to $\h$ (\reflem{h_equivariance}), $\i$ (\reflem{SL2C_actions_on_Hyp_Disc_equivariant}), and $\j$ (\reflem{D_U_actions_equivariant}). Further, $A \in SL(2,\C)$ acts on a flag $(p,V,o) \in \mathcal{F_P^O}(\HH)$ via its action on $\HH$ (\refdef{matrix_on_PONF}); on a flag in $\R^{1,3}$ via the isomorphism $\g$ (\refdef{SL2C_on_PONF_R31}); on a decorated horosphere in $\hyp$ via its action on $\hyp$ (and its derivative) (\refdef{SL2C_action_UODHOR_hyp}); and on decorated horospheres in other models by using the isometries between the models (\refdef{SL2C_action_UODHOR_Disc}, \refdef{SL2C_action_UODHOR_U}). Moreover, all the maps are equivariant: $\F$ (\refprop{SL2C_spinors_PNF_H_equivariant}), $\G$ (\refprop{FG_equivariant}), $\H$ (\reflem{H_equivariant}), $\I$ (\reflem{I_equivariant}), and $\J$ (\reflem{J_equivariant}). Thus, the compositions $\k$ and $\K$ are $SL(2,\C)$-equivariant.
It is worth pointing out that this composition $\K$ is \emph{almost} a bijection. Only $\F$ is not a bijection, but we have seen that it is surjective and 2--1, with $\F(\kappa) =\F(\kappa')$ iff $\kappa = \pm \kappa'$ (\reflem{F_G_2-1}). We have seen that $\G,\H,\I,\J$ are bijections (\reflem{G_bijection}, \reflem{H_bijection}, remark after \refdef{I}, remark after \refdef{J}). Indeed, it is not hard to see that $\G,\H,\I,\J$ are all smooth and have smooth inverses, so we in fact have diffeomorphisms between these spaces. We will see how to produce a complete bijection in \refsec{lifts_of_maps_spaces}.
We now compute the compositions. The following proposition includes a precise statement of \refthm{explicit_spinor_horosphere_decoration}, for (non-spin-)decorated horospheres.
\begin{prop}
\label{Prop:JIHGF_general_spin_vector}
\label{Prop:U_horosphere_general}
For $(\xi, \eta) \in \C_\times^2$ the decorated horosphere $\K(\xi, \eta) \in \mathfrak{H_D}(\U)$ is centred at $\xi/\eta$ and
\begin{enumerate}
\item is a sphere with Euclidean diameter $|\eta|^{-2}$ and decoration north-pole specified by $i \eta^{-2}$, if $\eta \neq 0$;
\item is a horizontal plane at Euclidean height $|\xi|^2$ and decoration specified by $i \xi^2$, if $\eta = 0$.
\end{enumerate}
The horosphere $\k(\xi, \eta) \in \horos(\U)$ is the horosphere of $\K(\xi, \eta)$, without the decoration.
\end{prop}
Specifications here are in the sense of \refdef{decoration_specification}.
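As a sample of the proposition, here is a small worked instance of the formulas (a check of the statement only, not used in any proof). Take $(\xi, \eta) = (1+i, 1-i)$. Then
\[ \frac{\xi}{\eta} = \frac{1+i}{1-i} = i, \qquad |\eta|^{-2} = \frac{1}{2}, \qquad i \eta^{-2} = \frac{i}{(1-i)^2} = \frac{i}{-2i} = -\frac{1}{2}, \]
so $\K(1+i, 1-i)$ is centred at $i$, is a sphere of Euclidean diameter $1/2$, and its decoration is north-pole specified by $-1/2$, i.e. points in the negative real direction at the north pole (recall that specifications are only defined up to multiplication by positive reals).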
The strategy is to first prove the proposition for $\kappa = (1,0)$, then use equivariance to prove it for $(0,1)$, then general $\kappa$. We have studied the horosphere of $(1,0)$ extensively; we now just need to map it to $\U$ via $\j$. \begin{lem} \label{Lem:j_facts} The map $\j$ has the following properties, illustrated in \reffig{D_to_U}. \begin{enumerate} \item It maps the following points $\partial \Disc \To \partial \U \cong \C \cup \{\infty\}$: \[ \begin{array}{ccc} \j(-1,0,0) = -1, & \j(0,-1,0) = -i, & \j(0,0,-1) = 0, \\ \j(1,0,0) = 1, & \j(0,1,0) = i, & \j(0,0,1)= \infty. \end{array} \] \item Denoting by $[p \rightarrow q]$ the oriented geodesic from a point at infinity $p \in \partial \Disc$ or $\partial \U$ to $q$, we have \[ \j\left[ (-1,0,0) \rightarrow (1,0,0) \right] = \left[ -1 \rightarrow 1 \right] \quad \text{and} \quad \j\left[ (0,-1,0) \rightarrow (0,1,0) \right] = \left[ -i \rightarrow i \right]. \] \item $\j$ maps $(0,0,0) \in \Disc$ to $(0,0,1) \in \U$, and at this point the derivative maps $(0,1,0)$ to $(0,1,0)$. \end{enumerate} \end{lem} \begin{figure} \begin{center} \begin{tikzpicture} \tikzset{ partial ellipse/.style args={#1:#2:#3}{ insert path={+ (#1:#3) arc (#1:#2:#3)} } } \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \shade[ball color = green!40, opacity = 0.2] (0,0) circle (2cm); \draw[green] (0,0) circle (2cm); \draw[green] (0,0) ellipse (2cm and 0.4cm); \draw[red] (0,1) circle (1cm); \shade[ball color = red!80, opacity = 0.1] (0,1) circle (1cm); \draw[red] (0,1) ellipse (1cm and 0.2cm); \draw[>=latex, thick, ->>>] (0,-2) -- (0,2); \draw[>=latex, thick, ->>] (-2,0) -- (2,0); \draw[>=latex, thick, ->] (-0.3,-0.3)--(0.3,0.3); \node[black] at (-2.8,0) {$(-1,0,0)$}; \node[black] at (2.8,0) {$(1,0,0)$}; \node[black] at (0,-2.5) {$(0,0,-1)$}; \node[black] at (0,2.5) {$(0,0,1)$}; \node[black] at (-0.7,-0.6) {$(0,-1,0)$}; \node[black] at (0.6,0.6) {$(0,1,0)$}; \node[black] at (1.8,-1.8) {$\partial \Disc$}; \node[black] at (-0.4,1.4) {$\horo$}; \node at (4.5,0){$\stackrel{\j}{\To}$}; \begin{scope}[xshift = 1cm] \draw[green] (5,-2)--(9,-2)--(10,-1)--(6,-1)--(5,-2); \shade[color = green, opacity=0.2] (5,-2)--(9,-2)--(10,-1)--(6,-1)--(5,-2); \draw[>=latex, thick, ->>>] (7.5,-1.5) -- (7.5,2); \draw[>=latex, thick, ->>] (5.5,-1.5) arc[start angle=180, end angle=0,radius=2cm]; \draw[>=latex, thick, ->] (7.5,-1.5) [partial ellipse=190:10:0.5cm and 2cm]; \draw[red] (5,0)--(9,0)--(10,1)--(6,1)--(5,0); \shade[color = red, opacity=0.2] (5,0)--(9,0)--(10,1)--(6,1)--(5,0); \node[black] at (5,-1.5) {$-1$}; \node[black] at (10,-1.5) {$1$}; \node[black] at (7,-2.3) {$-i$}; \node[black] at (8.3,-0.7) {$i$}; \node[black] at (9,0.5) {$\horo$}; \node[black] at (9,-1.5) {$\C$}; \node[black] at (10,0) {$\U$}; \end{scope} \end{tikzpicture} \caption{The map $\j$, showing various boundary points, geodesics, and horospheres.} \label{Fig:D_to_U} \end{center} \end{figure} \begin{proof} Applying \refdef{isometry_D_U} immediately gives (i). Since $\j$ is an isometry $\Disc \To \U$, it must preserve geodesics and their endpoints at infinity, so (ii) follows. Finally, the origin in $\Disc$ is the intersection point of the two geodesics in $\Disc$ specified in (ii), so maps to the intersection of the two corresponding geodesics in $\U$. The intersection point in $\U$ of the geodesics $\left[ -1 \rightarrow 1 \right]$ and $\left[ -i \rightarrow i \right]$ is $(0,0,1)$. 
The specified tangent direction at the origin in $\Disc$ is the direction of the latter geodesic, thus it maps to the claimed tangent direction at $(0,0,1) \in \U$.
\end{proof}
\begin{lem}
\label{Lem:U_horosphere_10}
\label{Lem:JIHGF10}
$\k (1,0)\in\mathfrak{H}(\U)$ is centred at $\infty$ at (Euclidean) height $1$. $\K (1,0) \in \mathfrak{H_D}(\U)$ is the same horosphere, with decoration specified by $i$.
\end{lem}
\begin{proof}
In \refeg{decorated_horosphere_of_10_Disc} we described explicitly the decorated horosphere in $\Disc$ given by $(1,0)$, i.e. $\I\circ \H \circ \G \circ \F (1,0)$. It is the horosphere in $\Disc$ centred at $(0,0,1)$, passing through the origin $(0,0,0)$. At the origin, the decoration points in the direction of $(0,1,0)$. Forgetting the decoration yields $\i \circ \h \circ \g \circ \f (1,0)$.
Applying $\j$, \reflem{j_facts} shows that the horosphere centre $(0,0,1)$ maps to $\infty$, the origin of $\Disc$ maps to $(0,0,1) \in \U$, and the direction $(0,1,0)$ at the origin maps to the direction $(0,1,0)$ at $(0,0,1) \in \U$. Thus $\k(1,0)$ is centred at $\infty$ and passes through $(0,0,1)$, hence lies at Euclidean height 1. The decoration $(0,1,0)$ there is the $i$ direction, so the decoration on $\K(1,0)$ is specified by $i$. See \reffig{D_to_U}.
\end{proof}
\begin{lem}
\label{Lem:U_horosphere_01}
\label{Lem:JIHG010}
$\k(0,1)\in\mathfrak{H}(\U)$ is centred at $0$ and has Euclidean diameter $1$. $\K (0,1)\in\mathfrak{H_D}(\U)$ is the same horosphere, with decoration north-pole specified by $i$.
\end{lem}
\begin{proof}
We use the previous lemma and equivariance. Note
\[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} = A \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{where} \quad A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \in SL(2,\C), \]
so
\[ \K \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \K \left( A \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right) = A \cdot \left( \K \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right), \]
and similarly for $\k$. Thus $\K (0,1)$ is obtained from $\K(1,0)$ of \reflem{U_horosphere_10} by applying $A$, and similarly for $\k$.
On $\U$, $A$ acts by the M\"{o}bius transformation $z \mapsto -1/z$, which is an involution sending $\infty \leftrightarrow 0$. It yields an isometry of $\U$ which is a half turn about the geodesic between $-i$ and $i$. As the point $(0,0,1)$ lies on this geodesic, it is fixed by the action of $A$. The vector $(0,1,0)$ at $(0,0,1)$ is tangent to the geodesic, so is also preserved by the half turn.
Since $\k(1,0)$ has centre $\infty$ and passes through $(0,0,1)$, $A \cdot \k(1,0)$ has centre $0$ and also passes through $(0,0,1)$. Hence $\k(0,1)$ has centre $0$ and Euclidean diameter $1$. The decoration of $\K(1,0)$ is directed by $(0,1,0)$ at $(0,0,1)$, and this vector is preserved by $A$. Hence this vector also directs the oriented parallel line field of $\K (0,1)$, which is thus north-pole specified by $(0,1,0)$, corresponding to the complex number $i$. See \reffig{K10_to_K01}.
\end{proof} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.2] \tikzset{ partial ellipse/.style args={#1:#2:#3}{ insert path={+ (#1:#3) arc (#1:#2:#3)} } } \draw[green!50!black] (4,-2)--(10,-2)--(11,-1)--(5,-1)--(4,-2); \shade[ball color = red, opacity = 0.2] (7.5,-0.5) circle (1cm); \draw[thick] (7.5,-1.5) [partial ellipse=190:170:0.5cm and 2cm]; \draw[>=latex, thick, ->] (7.5,-1.5) [partial ellipse=167:10:0.5cm and 2cm]; \draw[red] (4,0)--(10,0)--(11,1)--(5,1)--(4,0); \shade[color = red, opacity=0.2] (4,0)--(10,0)--(11,1)--(5,1)--(4,0); \draw[red, fill=red] (7.5,0.5) circle (0.05cm); \draw[red, thick, -latex] (7.5,0.5)--(8,1); \node[red] at (7.9,1.3) {$i$}; \draw[black, fill=black] (7,-1.8) circle (0.05cm); \draw[black, fill=black] (8,-1.2) circle (0.05cm); \node[black] at (7,-2.3) {$-i$}; \node[black] at (8.3,-0.7) {$i$}; \node[black] at (10,0.7) {$\K(1,0)$}; \node[black] at (5.9,-0.3) {$\K(0,1)$}; \node[black] at (9,-1.5) {$\C$}; \node[black] at (10,-0.5) {$\U$}; \draw[thick, ->] (6.875,-1.5) arc (225:-45: 0.25cm); \draw[black, fill=black] (7.5,-1.5) circle (0.05cm); \node[black] at (7.7,-1.7) {$0$}; \node[black] at (5.9,-1.4) {$z \mapsto -1/z$}; \end{tikzpicture} \caption{The decorated horospheres $\K(1,0)$ and $\K(0,1)$ are related by the M\"{o}bius transformation $z \mapsto -1/z$.} \label{Fig:K10_to_K01} \end{center} \end{figure} \begin{proof}[Proof of \refprop{U_horosphere_general}] We use the previous two lemmas and $SL(2,\C)$-equivariance. Observe that \[ \begin{pmatrix} \xi \\ 0 \end{pmatrix} = \begin{pmatrix} \xi & 0 \\ 0 & \xi^{-1} \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \begin{pmatrix} \eta^{-1} & \xi \\ 0 & \eta \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \] If $\eta = 0$, then we have \[ \K \begin{pmatrix} \xi \\ 0 \end{pmatrix} = \K \left( \begin{pmatrix} \xi & 0 \\ 0 & \xi^{-1} \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right) = \begin{pmatrix} \xi & 0 \\ 0 & \xi^{-1} \end{pmatrix} \cdot \left( \K \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right), \] and similarly for $\k$. The matrix $A \in SL(2,\C)$ involved corresponds to the isometry of $\U$ described by the M\"{o}bius transformation $z \mapsto \xi^2 z$. Thus $\K(\xi,0)$ is the image of $\K(1,0)$ under this isometry. By \reflem{JIHGF10}, $\K(1,0)$ is the horosphere centred at $\infty$ at Euclidean height $1$ with decoration specified by $i$. In $\U$, the isometry appears as a Euclidean dilation from the origin by factor $|\xi|^2$, and a rotation about the $z$-axis by $2 \arg \xi$. The resulting horosphere is again centred at $\infty$, i.e. a plane, but now has height $|\xi|^2$, and parallel oriented line field directed by $i \xi^2$. Thus $\K(\xi,0)$ is as claimed, and forgetting the decoration, $\k(\xi,0)$ is as claimed. If $\eta \neq 0$ then \[ \K \begin{pmatrix} \xi \\ \eta \end{pmatrix} = \K \left( \begin{pmatrix} \eta^{-1} & \xi \\ 0 & \eta \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) = \begin{pmatrix} \eta^{-1} & \xi \\ 0 & \eta \end{pmatrix} \cdot \left( \K \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right). \] The matrix $A \in SL(2,\C)$ involved corresponds to the M\"{o}bius transformation $z \mapsto z \eta^{-2} + \xi \eta^{-1}$. The desired decorated horosphere $\K(\xi, \eta)$ is the image under $A$ of $\K(0,1)$, i.e. (by \reflem{U_horosphere_01}) the decorated horosphere centred at $0$ of Euclidean diameter $1$ and north-pole specification $i$. 
In $\U$, the corresponding isometry appears as a dilation from the origin by factor $|\eta|^{-2}$, a rotation about the $z$-axis by $-2 \arg \eta$, and then a translation in the horizontal ($\C$) plane by $\xi/\eta$. The resulting decorated horosphere $\K(\xi, \eta)$ has Euclidean diameter $|\eta|^{-2}$, centre $\xi/\eta$, and north-pole specification $i \eta^{-2}$, as claimed. Forgetting the decoration, $\k(\xi, \eta)$ is as claimed.
\end{proof}
{\flushleft \textbf{Remark.} } It is perhaps not so surprising that a pair of complex numbers $(\xi, \eta)$ should correspond to an object centred at $\xi/\eta \in \partial \U$, with a tangent decoration in the direction of $i/\eta^2$. These are precisely the type of things preserved by M\"{o}bius transformations. Indeed, a M\"{o}bius transformation
\[ m \colon \CP^1 \To \CP^1, \quad m(z) = \frac{\alpha z+ \beta}{\gamma z+\delta}, \quad \text{corresponding to } \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \in SL(2,\C), \]
sends
\[ \frac{\xi}{\eta} \mapsto \frac{ \alpha \frac{\xi}{\eta} + \beta }{ \gamma \frac{\xi}{\eta} + \delta} = \frac{\alpha \xi + \beta \eta}{\gamma \xi + \delta \eta} = \frac{\xi'}{\eta'} \]
where
\[ \xi' = \alpha \xi + \beta \eta \quad \text{and} \quad \eta' = \gamma \xi + \delta \eta, \quad \text{i.e.} \begin{pmatrix} \xi' \\ \eta' \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}. \]
Its derivative is then
\[ m'(z) = \frac{1}{(\gamma z+\delta)^2}, \quad \text{so that} \quad m' \left( \frac{\xi}{\eta} \right) = \frac{1}{ \left( \gamma \frac{\xi}{\eta} + \delta \right)^2 } = \frac{\eta^2}{ \left( \gamma \xi + \delta \eta \right)^2 } = \frac{\eta^2}{\eta'^2}. \]
When applied to a tangent vector $i/\eta^2$ at $\xi/\eta$, one obtains
\[ m' \left( \frac{\xi}{\eta} \right) \frac{i}{\eta^2} = \frac{\eta^2}{\eta'^2} \frac{i}{\eta^2} = \frac{i}{\eta'^2} \quad \text{at} \quad m \left( \frac{\xi}{\eta} \right) = \frac{\xi'}{\eta'}. \]
In other words, a tangent decoration $i/\eta^2$ at $\xi/\eta$ maps to a tangent decoration $i/\eta'^2$ at $\xi'/\eta'$. In this way, the $SL(2,\C)$ equivariance arises naturally and geometrically.
\section{Spin decorations and complex lambda lengths}
\label{Sec:spin}
Finally, we incorporate spin into our considerations.
\subsection{Spin-decorated horospheres}
\label{Sec:spin-decorated_horospheres}
We now define the requisite notions for spin decorations on horospheres. In \refsec{frame_fields} we discuss how decorations on horospheres give rise to certain frame fields; then we can define spin frames and spin isometries (\refsec{spin_frames_isometries}), and then spin decorations (\refsec{spin_decorations}).
Throughout this section we consider hyperbolic 3-space $\hyp^3$ independent of model. We will use the cross product $\times$ of vectors in the elementary sense that if $v,w$ are tangent vectors to $\hyp^3$ at the same point $p \in \hyp^3$ making an angle of $\theta$, then $v \times w$ has length $|v| \, |w| \sin \theta$ and points in the direction perpendicular to $v$ and $w$ as determined by the right hand rule.
We will make much use of frames. By \emph{frame} we mean a right-handed orthonormal frame in $\hyp^3$. In other words, a frame is a triple $(f_1, f_2, f_3)$ where all $f_i$ are unit tangent vectors to $\hyp^3$ at the same point and $f_1 \times f_2 = f_3$.
\subsubsection{Frame fields of decorated horospheres}
\label{Sec:frame_fields}
Throughout this section, let $\horo$ be a horosphere in $\hyp^3$.
As with any smooth surface in a 3-manifold, at any point of $\mathpzc{h}$ there are two normal directions.
\begin{defn} \
\label{Def:horosphere_normals}
\begin{enumerate}
\item The \emph{outward} normal direction to $\mathpzc{h}$ is the normal direction towards its centre. The outward unit normal vector field to $\mathpzc{h}$ is denoted $N^{out}$.
\item The \emph{inward} normal direction to $\mathpzc{h}$ is the normal direction away from its centre. The inward unit normal vector field to $\mathpzc{h}$ is denoted $N^{in}$.
\end{enumerate}
\end{defn}
Intuitively, ``inwards" means in towards the bulk of $\hyp^3$, and ``outwards" means out towards the boundary at infinity. (This means that the ``outwards" direction from a horosphere points into the horoball it bounds.)
We now associate \emph{frames} to horospheres equipped with certain vector fields.
\begin{defn}
\label{Def:inward_outward_frame_fields}
Let $\V$ be a unit parallel vector field on $\mathpzc{h}$.
\begin{enumerate}
\item The \emph{outward frame field of $\V$} is the frame field on $\mathpzc{h}$ given by
\[ f^{out}(\V) = \left( N^{out}, \V, N^{out} \times \V \right). \]
\item The \emph{inward frame field of $\V$} is the frame field on $\mathpzc{h}$ given by
\[ f^{in}(\V) = \left( N^{in}, \V, N^{in} \times \V \right). \]
\end{enumerate}
A frame field on $\horo$ is an \emph{outward} (resp. \emph{inward}) frame field if it is the outward (resp. inward) frame field of some unit parallel vector field on $\horo$.
\end{defn}
\begin{defn}
If $(\mathpzc{h}, L^O_P) \in\mathfrak{H_D}$ with oriented parallel line field $L^O_P$, the \emph{associated outward (resp. inward) frame field} on $\mathpzc{h}$ is the outward (resp. inward) frame field of $\V$, where $\V$ is the unit tangent vector field on $\mathpzc{h}$ directing $L^O_P$.
\end{defn}
A decoration on $\horo$ thus determines an outward and an inward frame field on $\mathpzc{h}$. See \reffig{frames_from_decoration}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[green!50!black] (5,-1.5)--(4,-2.5)--(10,-2.5)--(11,-1.5);
\draw[red] (4,0)--(10,0)--(11,1)--(5,1)--(4,0);
\shade[color = red, opacity=0.2] (4,0)--(10,0)--(11,1)--(5,1)--(4,0);
\draw[red, thick, -latex] (5.5,0.25)--(6,0.75);
\draw[red, thick, -latex] (7.5,0.25)--(8,0.75);
\draw[red, thick, -latex] (9.5,0.25)--(10,0.75);
\node[red] at (8.75,0.5) {$L_P^O$};
\node[black] at (6.75,0.5) {$\horo$};
\draw[black, -latex] (7.5,1.5)--(7.5,2.25);
\node[black] at (7.5,2.5) {$N^{out}$};
\draw[black, -latex] (7.5,1.5)--(8,2);
\node[black] at (8.25,2.25) {$\V$};
\draw[black, -latex] (7.5,1.5)--(6.8,1.5);
\node[black] at (6,1.5) {$N^{out} \times \V$};
\node[black] at (9,2) {$f^{out}$};
\draw[black, -latex] (7.5,-1)--(7.5,-1.75);
\node[black] at (7.5,-2) {$N^{in}$};
\draw[black, -latex] (7.5,-1)--(8,-0.5);
\node[black] at (8.25,-0.25) {$\V$};
\draw[black, -latex] (7.5,-1)--(8.2,-1);
\node[black] at (9,-1) {$N^{in} \times \V$};
\node[black] at (6.5,-1) {$f^{in}$};
\end{tikzpicture}
\caption{A decoration $L_P^O$ on a horosphere $\horo$ determines inward and outward frame fields.}
\label{Fig:frames_from_decoration}
\end{center}
\end{figure}
\subsubsection{Spin frames and spin isometries}
\label{Sec:spin_frames_isometries}
The bundle of (right-handed orthonormal) frames over $\hyp^3$ is a principal $SO(3)$ bundle. As $\pi_1(SO(3)) \cong \Z/2\Z$, the double cover of $SO(3)$ is also its universal cover, and this is the spin group $\Spin(3)$.
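For orientation, we recall one standard concrete model of this double cover, phrased to match the Hermitian matrix setup used earlier (nothing below depends on this description). The subgroup $SU(2) \subset SL(2,\C)$ acts on the 3-dimensional real vector space of traceless Hermitian $2 \times 2$ matrices by
\[ X \mapsto A X A^*, \quad A \in SU(2), \]
preserving the positive definite form $\frac{1}{2}\operatorname{tr}(X^2)$; this gives a surjective homomorphism $SU(2) \To SO(3)$ with kernel $\{\pm 1\}$, so $\Spin(3) \cong SU(2)$.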
\begin{defn}
\label{Def:Fr}
Denote by $\Fr \To \hyp^3$ the principal $SO(3)$ bundle of (right-handed orthonormal) frames over $\hyp^3$, and $\Spin \To \hyp^3$ its double cover, a principal $\Spin(3)$ bundle.
\end{defn}
A point of (the total space of) $\Fr$ consists of a point of $\hyp^3$ together with a frame there; similarly, a point of $\Spin$ consists of a point of $\hyp^3$ together with one of the two lifts of a frame there.
\begin{defn}
A point of the total space of $\Spin$ is called a \emph{spin frame}.
\end{defn}
The orientation preserving isometry group $\Isom^+ \hyp^3$ of $\hyp^3$ acts simply transitively on $\Fr$: there is a unique orientation-preserving isometry sending any frame at any point of $\hyp^3$ to any other frame at any other point. Using the isomorphism $\Isom^+(\hyp^3) \cong PSL(2,\C)$ yields a diffeomorphism
\begin{equation}
\label{Eqn:PSL2C_Fr}
PSL(2,\C) \cong \Fr.
\end{equation}
We can make this diffeomorphism explicit by choosing a specific frame, a ``base frame" $f_0$. The identity $1 \in PSL(2,\C)$ corresponds to the frame $f_0$, and then a general element $A \in PSL(2,\C) \cong \Isom^+ \hyp^3$ corresponds to the frame obtained by applying the isometry $A$ (and its derivative) to $f_0$. In other words, the correspondence is given by $A \leftrightarrow A\cdot f_0$. The actions of $PSL(2,\C)$ on itself by multiplication, and on $\Fr$ by orientation-preserving isometries, are equivariant with respect to this correspondence; so we have an identification of $PSL(2,\C)$-spaces.
This identification then lifts to universal covers: a path in $PSL(2,\C)$ from $1$ to an element $A$ corresponds to a path in $\Fr$ from $f_0$ to $A \cdot f_0$. Recalling the definition of a universal cover, this gives an identification between points of the universal cover of $PSL(2,\C)$ and points of the universal cover of $\Fr$. These universal covers are $SL(2,\C)$, and the space of spin frames $\Spin$, respectively. So we obtain a homeomorphism which identifies $SL(2,\C)$ with spin frames.
\begin{equation}
\label{Eqn:SL2C_Spin}
SL(2,\C) \cong \Spin
\end{equation}
Under this identification, the two matrices $A,-A \in SL(2,\C)$ lifting $\pm A \in PSL(2,\C)$ correspond to the two spin frames above the frame $(\pm A)\cdot f_0$. The two spin frames lifting a common frame are related by a $2\pi$ rotation about any axis at their common point. Indeed, $SL(2,\C)$ acts freely and transitively on $\Spin$, whose elements are spin frames in $\hyp^3$.
\begin{defn}
A \emph{spin isometry} is an element of the universal cover of $\Isom^+ \hyp^3$.
\end{defn}
Thus, a spin isometry is just an element of $SL(2,\C)$, regarded as the double/universal cover of $PSL(2,\C) \cong \Isom^+ \hyp^3$. Each orientation-preserving isometry of $\hyp^3$ lifts to two spin isometries, which differ by a $2\pi$ rotation. Just as an orientation-preserving isometry sends frames to frames, a spin isometry sends spin frames to spin frames.
\subsubsection{Spin decorations}
\label{Sec:spin_decorations}
Let $\horo$ be a horosphere in $\hyp^3$. A frame field on $\mathpzc{h}$ is a continuous section of $\Fr$ along $\mathpzc{h}$, and such a frame field has two continuous lifts to $\Spin$.
\begin{defn}
An \emph{outward (resp. inward) spin decoration} on $\mathpzc{h}$ is a continuous lift of an outward (resp. inward) frame field on $\mathpzc{h}$ from $\Fr$ to $\Spin$.
\end{defn}
In other words, an outward (resp. inward) spin decoration on $\mathpzc{h}$ is a choice of lift to $\Spin$ of a frame field of the form $f^{out}(\V)$ (resp.
$f^{in}(\V)$), for some unit parallel vector field $\V$ on $\mathpzc{h}$. Given an inward frame field $f^{in}(\V) = (N^{in}, \V, N^{in} \times \V)$ on $\mathpzc{h}$ corresponding to a unit parallel vector field $\V$, we can obtain $f^{out}(\V) = (N^{out}, \V, N^{out} \times \V)$ by rotating the frame at each point by an angle of $\pi$ about $\V$. This rotation preserves $\V$ and sends $N^{in}$ to $N^{out}$, hence sends one frame to the other, and a similar rotation sends $f^{out}(\V)$ back to $f^{in}(\V)$. Each rotation of angle $\pi$ can be done in either direction around $\V$. However, once we take spin lifts, rotations of angle $\pi$ clockwise or anticlockwise about $\V$ yield distinct results, since the results are related by a $2\pi$ rotation. Thus we make the following definition, where rotations about vectors are made in the usual right-handed way. \begin{defn} \ \label{Def:associated_inward_outward_spindec} \begin{enumerate} \item If $W^{out}$ is an outward spin decoration on $\mathpzc{h}$ lifting an outward frame field $(N^{out}, \V, N^{out} \times \V)$ for some unit parallel vector field $\V$, the \emph{associated inward spin decoration} is the inward spin decoration obtained by rotating $W^{out}$ by angle $\pi$ about $\V$ at each point of $\mathpzc{h}$. \item If $W^{in}$ is an inward spin decoration on $\mathpzc{h}$ lifting an inward frame field $(N^{in}, \V, N^{in} \times \V)$ for some unit parallel vector field $\V$, the \emph{associated outward spin decoration} is the outward spin decoration obtained by rotating $W^{in}$ by angle $-\pi$ about $\V$ at each point of $\mathpzc{h}$. \end{enumerate} \end{defn} The choice of $\pi$ and $-\pi$ is somewhat arbitrary but is required for our main theorem to hold. By construction, if $W^{out}$ (resp. $W^{in}$) is a lift of $f^{out}(\V)$ (resp. $f^{in}(\V)$), then the associated inward (resp. outward) spin decoration is a spin decoration lifting $f^{in}(\V)$ (resp. $f^{out}(\V)$). Moreover, these associations are inverses so we obtain pairs $(W^{in}, W^{out})$ where each is associated to the other. Given $\V$, the frame fields $f^{in}(\V)$ and $f^{out}(\V)$ are determined, and then there are two choices of lift for $W^{in}$ and two choices of lift for $W^{out}$. Each choice of $W^{in}$ has an associated $W^{out}$. Thus, the choice of $W^{in}$ determines the associated $W^{out}$ and vice versa. Later, in \refsec{complex_lambda_lengths}, inward and outward fields feature equally in the definition of a complex lambda length. So we prefer to use both of them, as a pair, in the following definition. \begin{defn} \label{Def:spin_decoration} A \emph{spin decoration} on $\mathpzc{h}$ is a pair $W = (W^{in}, W^{out})$ where $W^{in}$ is an inward spin decoration on $\mathpzc{h}$, $W^{out}$ is an outward spin decoration on $\mathpzc{h}$, and each is associated to the other. The pair $(\horo, W)$ is called a \emph{spin-decorated horosphere}. \end{defn} {\flushleft \textbf{Remark.} } Under the identification $PSL(2,\C) \cong \Fr$, decorated horospheres correspond to certain cosets of $PSL(2,\C)$. Let us make the homeomorphism \refeqn{PSL2C_Fr} explicit by choosing the base frame $f_0$ to be the frame $(e_z, e_y, -e_x) \in \Fr$ at the point $p_0 = (0,0,1)$ in the upper half space model, where $e_x, e_y, e_z$ denote unit vectors in the $x,y,z$ directions. Then $1\in PSL(2,\C)$ corresponds to the base frame $f_0$ at $p_0$. 
This $f_0$ forms part of an outward frame field $f^{out}_0$ on the horosphere $\mathpzc{h}_0$ centred at $\infty$ passing through $p_0$. This outward frame field $f^{out}_0$ arises from the decoration on $\horo_0$ in the $y$-direction. The frames of $f^{out}_0$ are obtained from $f_0$ by parabolic isometries which appear as horizontal translations in $\U$. These isometries form the subgroup of $PSL(2,\C)$ given by \[ \underline{P} = \left\{ \pm \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} \mid \alpha \in \C \right\}. \] The cosets $g \underline{P}$, over $g \in PSL(2,\C)$, then yield the outward frame fields associated to oriented parallel line fields on horospheres, and we obtain a bijection \begin{equation} \label{Eqn:decorated_horospheres_cosets} PSL(2,\C)/ \underline{P} \cong \mathfrak{H_D}. \end{equation} \begin{defn} \label{Def:spin-decorated_horospheres} The set of all spin-decorated horospheres is denoted $\mathfrak{H_D^S}$. \end{defn} There is a 2-1 projection map $\mathfrak{H_D^S} \To \mathfrak{H_D}$ given as follows. A spin decorated horosphere $(\horo, W)$ contains a pair $W = (W^{in}, W^{out})$ of associated inward and outward spin decorations on a horosphere $\mathpzc{h}$, which project down to inward and outward frame fields on $\mathpzc{h}$. The inward frame is of the form $f^{in}(\V)$ for some unit parallel vector field $\V$ on $\mathpzc{h}$, and the outward frame is of the form $f^{out}(\V)$, for the same $\V$. This $\V$ directs an oriented parallel line field $L_P^O$ on $\horo$, i.e. a decoration on $\horo$. The spin decoration $W$ projects to the decoration $L_P^O$. There are two spin decorations on $\horo$ which project to this $L_P^O$, namely $W$, and the spin decoration $W' = (W'^{in}, W'^{out})$ obtained from rotating $W^{in}$ and $W^{out}$ through $2\pi$ at each point. {\flushleft \textbf{Remark.} }Just as decorated horospheres correspond to certain cosets of $PSL(2,\C)$ \refeqn{decorated_horospheres_cosets}, spin-decorated horospheres correspond to certain cosets of $SL(2,\C)$. Starting from the identification $SL(2,\C) \cong \Spin$ \refeqn{SL2C_Spin}, we can make it explicit by choosing a base spin frame $\widetilde{f_0}$, a lift of the base frame $f_0$. An $A\in SL(2,\C)$, being a point of the universal cover of $PSL(2,\C) \cong \Isom^+(\hyp^3)$, can be regarded as a (homotopy class of a) path in $PSL(2,\C)$ from the identity to the element $\pm A$ of $PSL(2,\C)$. This can be regarded as a path of isometries starting at the identity, and its action on frames yields a path from $\widetilde{f_0}$ to the spin frame corresponding to $A$. On $\mathpzc{h}_0\in\mathfrak{H}$ centred at $\infty$ passing through $p_0$, the frame $f_0$ forms part of a unique outward frame field $f_0^{out}$. This outward frame field lifts to two distinct outward spin decorations on $\mathpzc{h}_0$. One of these contains $\widetilde{f_0}$, corresponding to the identity in $SL(2,\C)$, and the spin frames of this outward spin decoration correspond to the elements of $SL(2,\C)$ forming the parabolic subgroup \[ P = \left\{ \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} \mid \alpha \in \C \right\}. \] The other lift of $f_0^{out}$ is the outward spin decoration on $\mathpzc{h}_0$ whose spin frames are obtained from those of the previous spin decoration by a $2\pi$ rotation; these correspond to the negative matrices in $SL(2,\C)$, and correspond to the coset \[ -P = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} P. 
\] In general, cosets $gP$, over $g \in SL(2,\C)$, yield the outward spin decorations corresponding to spin decorations on horospheres, and we obtain a bijection \begin{equation} \label{Eqn:SL2C_mod_P} SL(2,\C)/P \cong \mathfrak{H_D^S}. \end{equation} \subsection{Topology of spaces and maps} \label{Sec:topology_of_spaces_and_maps} We now consider the various spaces and maps in the composition $\K$: \[ \C_\times^2 \stackrel{\F}{\To} \mathcal{F_P^O}(\HH) \stackrel{\G}{\To} \mathcal{F_P^O} (\R^{1,3}) \stackrel{\H}{\To} \mathfrak{H_D}(\hyp) \stackrel{\I}{\To} \mathfrak{H_D}(\Disc) \stackrel{\J}{\To} \mathfrak{H_D}(\U). \] In turn, we consider the topology of spaces (\refsec{topology_of_spaces}), the topology of the maps (\refsec{topology_of_maps}), then lift them to incorporate spin (\refsec{lifts_of_maps_spaces}). \subsubsection{Topology of spaces} \label{Sec:topology_of_spaces} Topologically, $\C_\times^2 \cong \R^4 \setminus \{0\} \cong S^3 \times \R$, which is simply connected: $\pi_1 (\C^2_\times) \cong \pi_1 (S^3) \times \pi_1 (\R)$ is trivial. The space of flags $\mathcal{F_P^O}(\R^{1,3})$ naturally has the topology of $UTS^2 \times \R$, where $UTS^2$ is the unit tangent bundle of $S^2$. A point of $UTS^2$ describes a point on the celestial sphere $\S^+ \cong S^2$, or equivalently a lightlike ray, together with a tangent direction to $\S^+$ at that point, which precisely provides a flag 2-plane containing that ray. There is also an $\R$ family of points on each lightlike ray. This provides an identification $\mathcal{F_P^O}(\R^{1,3}) \cong UTS^2 \times \R$ and we use it to provide a topology and smooth structure on $\mathcal{F_P^O}(\R^{1,3})$. Since $\g$ is a linear isomorphism between $\HH$ and $\R^{1,3}$, we can similarly identify $\mathcal{F_P^O}(\HH) \cong UTS^2 \times \R$ so that $\G$ is a diffeomorphism. The space $UTS^2$ is not simply connected; it is diffeomorphic to $SO(3)$. One way to see this standard fact is to note that a point of $S^2$ yields a unit vector $v_1$ in $\R^3$; a unit tangent vector to $S^2$ at $v_1$ yields an orthonormal unit vector $v_2$; and then $v_1, v_2$ uniquely determines a right-handed orthonormal frame for $\R^3$. This gives a diffeomorphism between $UTS^2$ and the space of frames in $\R^3$, i.e. $UTS^2 \cong SO(3)$. Thus $\pi_1 (UTS^2) \cong \pi_1 (SO(3)) \cong \Z/2\Z$, and each space of flags has fundamental group $\pi_1 (UTS^2 \times \R) \cong \pi_1 (UTS^2) \times \pi_1 (\R) \cong \Z/2\Z$. The spaces of decorated horospheres $\mathfrak{H_D}$ naturally have the topology of $UTS^2 \times \R$, with fundamental group $\Z/2\Z$. This is true for any model of $\hyp^3$. A point of $UTS^2$ describes the point at infinity in $\partial \hyp^3 \cong S^2$ of a horosphere, together with a parallel tangent field direction, and at each point at infinity there is an $\R$ family of horospheres. This provides an identification $\mathfrak{H_D} \cong UTS^2 \times \R$ and we use it to provide a topology and smooth structure on $\mathfrak{H_D}$. Since $\i,\j$ are isometries between different models of $\hyp^3$, $\I$ and $\J$ provide diffeomorphisms between $\mathfrak{H_D}(\hyp)$, $\mathfrak{H_D}(\Disc)$ and $\mathfrak{H_D}(\U)$. \subsubsection{Topology of maps} \label{Sec:topology_of_maps} We saw above that $\G, \I, \J$ are diffeomorphisms, so it remains to consider the maps $\F$ and $\H$, which topologically are maps $S^3 \times \R \To UTS^2 \times \R$ and $UTS^2 \times \R \To UTS^2 \times \R$ respectively. First, consider the map $\F$. 
Since $\G$ is a diffeomorphism, we may equivalently consider the map $\G \circ \F \colon S^3 \times \R \To UTS^2 \times \R$. Both $S^3 \times \R$ and $UTS^2 \times \R$ are naturally $S^1$ bundles over $S^2 \times \R$, the former via the Hopf fibration, the latter as a unit tangent bundle.
We saw in \reflem{C2_to_R31_Hopf_fibrations} that $\g \circ \f \colon S^3 \times \R \To L^+$ sends each 3-sphere $S^3_r$ of constant radius $r$ to the 2-sphere $L^+ \cap \{ T = r^2\}$, via a Hopf fibration. Since $L^+ \cong S^2 \times \R$, topologically $\g \circ \f \colon S^3 \times \R \To S^2 \times \R$ is the product of the Hopf fibration with the identity. The map $\G \circ \F$ is then a map $S^3 \times \R \To UTS^2 \times \R$ which adds the data of a flag to the point on $L^+$ described by $\g \circ \f$. It thus projects to $\g \circ \f$ under the projection map $UTS^2 \times \R \To S^2 \times \R$. That is, the following diagram commutes.
\begin{center}
\begin{tikzpicture}
\node (a) at (0,0){$S^3\times\R$};
\node (b) at (3,0){$UTS^2\times\R$};
\node (c) at (3,-1){$S^2\times\R$};
\draw[->] (a) -- (b) node [pos=0.5,above] {$\G\circ\F$};
\draw[->] (a) -- (c) node [pos=0.35,below] {$\g\circ\f$};
\draw[->] (b) -- (c);
\end{tikzpicture}
\end{center}
Another way of viewing this diagram is that $\G \circ \F$ is a map of $S^1$ bundles over $S^2 \times \R$. Let us consider the fibres over a point $p \in S^2 \times \R \cong L^+$, which can equivalently be described by a pair consisting of a point $\underline{p} \in \S^+ \cong \CP^1$ and a length $r>0$ (or $T$-coordinate $T=r^2$). In $S^3 \times \R$, the fibre over $p \in S^2 \times \R$ is the set of $(\xi, \eta)$ such that $|\xi|^2 + |\eta|^2 = r^2$ and $\xi/\eta = \underline{p}$. Given one point in the fibre $(\xi_0, \eta_0)$ over $p$, the other points in the fibre are of the form $e^{i\theta}(\xi_0, \eta_0)$, by \reflem{gof_properties}, and form an $S^1$. Under $\G \circ \F$, this fibre maps to the fibre of unit tangent directions to $S^2$ at $\underline{p}$, or equivalently, the fibre of flag directions over $\R p$.
Proceeding around an $S^1$ fibre in $\C_\times^2 \cong S^3 \times \R$ corresponds to a path $e^{i\theta}(\xi_0, \eta_0)$ for $\theta$ from $0$ to $2\pi$. Proceeding around an $S^1$ fibre in $\mathcal{F_P^O}(\R^{1,3})$ corresponds to rotating the 2-plane of a null flag through $2\pi$ about a fixed ray. As we saw in \refsec{rotating_flags}, and explicitly in \reflem{flag_basis_rotation}, as we move through the $S^1$ fibre above $p$ in $S^3 \times \R$, the point $e^{i\theta}(\xi_0, \eta_0)$ under $\G \circ \F$ produces a flag rotation of angle $-2\theta$. So $\G \circ \F$ is a smooth 2--1 map on each fibre. We discussed this explicitly in the proof of \refprop{F_G_surjective}.
The map $\G$ is also a bundle isomorphism: $\g$ is a linear isomorphism between $\HH$ and $\R^{1,3}$, and the diffeomorphism provided by $\G$ between $\mathcal{F_P^O}(\HH)$ and $\mathcal{F_P^O}(\R^{1,3})$, both diffeomorphic to $UTS^2 \times \R$, respects their structure as $S^1$ bundles over $S^2 \times \R$.
Thus, both $\F$ and $\G \circ \F$ are bundle maps $S^3 \times \R \To UTS^2 \times \R$ of $S^1$-bundles over $S^2 \times \R$, which are 2--1 on each fibre. They are also covering maps, since $UTS^2 \cong \RP^3$, so both $\F$ and $\G \circ \F$ are maps $S^3 \times \R \To \RP^3 \times \R$ which are topologically the product of the 2-fold covering map $S^3 \To \RP^3$ with the identity.
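On a single fibre, this can be phrased as follows (a restatement of the angle count above). The two points
\[ e^{i\theta}(\xi_0, \eta_0) \quad \text{and} \quad e^{i(\theta+\pi)}(\xi_0, \eta_0) = -e^{i\theta}(\xi_0, \eta_0) \]
have flag rotations $-2\theta$ and $-2\theta - 2\pi$, which agree modulo $2\pi$, so they map to the same flag under $\G \circ \F$; this is the fibrewise form of the statement $\F(\kappa) = \F(-\kappa)$ of \reflem{F_G_2-1}.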
We now turn to the map $\H \colon \mathcal{F_P^O}(\R^{1,3}) \To \mathfrak{H_D}(\hyp)$, which is topologically a map $UTS^2 \times \R \To UTS^2 \times \R$. Again, both spaces are $S^1$-bundles over $S^2 \times \R$. As discussed in \refsec{light_cone_to_horosphere}, the map $\h \colon L^+ \To \horos(\hyp)$ is a diffeomorphism, both spaces being diffeomorphic to $S^2 \times \R$. We have seen that $\mathcal{F_P^O}(\R^{1,3})$ is an $S^1$-bundle over $L^+ \cong S^2 \times \R$, with an $S^1$ worth of flag directions at each point of $L^+$. And $\mathfrak{H_D}(\hyp)$ is an $S^1$-bundle over $\horos(\hyp)$, with an $S^1$ of decorations over each horosphere. Thus we have a commutative diagram
\[ \begin{array}{ccc}
UTS^2 \times \R \cong \mathcal{F_P^O}(\R^{1,3}) & \stackrel{\H}{\To}& \mathfrak{H_D}(\hyp) \cong UTS^2 \times \R \\
\downarrow & & \downarrow \\
S^2 \times \R \cong L^+ & \stackrel{\h}{\To} & \horos(\hyp) \cong S^2 \times \R
\end{array} \]
As argued in \reflem{H_bijection}, $\H$ maps the $S^1$ fibre of flags above a point $p \in L^+$ to the $S^1$ fibre of decorations on the horosphere $\h(p) \in \horos(\hyp)$, in bijective fashion. This map is in fact smooth: as the 2-plane of the flag rotates, the same 2-plane rotates to provide different decorations on a horosphere, always intersecting the horosphere transversely. So $\H$ is a diffeomorphism and a bundle isomorphism. Combining the above with \reflem{F_G_2-1}, we have now proved the following. This is the non-spin version of the main \refthm{spinors_to_horospheres}, using spinors up to sign.
\begin{prop}
\label{Prop:main_thm_up_to_sign}
The map $\K \colon \C^2_\times \To \mathfrak{H_D}(\U)$ is smooth, surjective, 2--1, and $SL(2,\C)$-equivariant. It yields a smooth, bijective, $SL(2,\C)$-equivariant map
\[ \frac{\C^2_\times}{ \{ \pm 1 \} } \To \mathfrak{H_D}(\U) \]
between nonzero spin vectors up to sign, and decorated horospheres. The action of $SL(2,\C)$ on both $\C^2_\times/\{\pm 1\}$ and $\mathfrak{H_D}(\U)$ factors through $PSL(2,\C)$.
\qed
\end{prop}
\subsubsection{Spin lifts of maps and spaces}
\label{Sec:lifts_of_maps_spaces}
Let us now consider spin lifts, or universal covers, of the above spaces. We observe that the 2--1 projection $\mathfrak{H_D^S} \To \mathfrak{H_D}$ is a double cover. This can be seen directly, or via the identifications with $SL(2,\C)/P$ and $PSL(2,\C)/\underline{P}$ of \refeqn{SL2C_mod_P} and \refeqn{decorated_horospheres_cosets}. Since $\mathfrak{H_D^S}$ is a double cover of $\mathfrak{H_D} \cong UTS^2 \times \R \cong SO(3) \times \R \cong \RP^3 \times \R$, we have $\mathfrak{H_D^S} \cong S^3 \times \R$, and $\mathfrak{H_D^S}$ is in fact the universal cover of $\mathfrak{H_D}$. We also have a commutative diagram
\[ \begin{array}{ccccc}
SL(2,\C) & \To & SL(2,\C)/P & \cong & \mathfrak{H_D^S} \\
\downarrow && \downarrow && \downarrow \\
PSL(2,\C) & \To & PSL(2,\C)/(\underline{P}) & \cong & \mathfrak{H_D}
\end{array} \]
where the vertical maps are double covers and universal covers. Similarly, the spaces $\mathcal{F_P^O}$ are diffeomorphic to $\RP^3 \times \R$, so have double and universal covers diffeomorphic to $S^3 \times \R$, and these covers arise from bundle maps which are 2--1 on each fibre. In $\mathcal{F_P^O}$, a fibre is the $S^1$ family of flags with a given base point and flagpole. In the double cover, rotating a flag about its flagpole through $2\pi$ (keeping the base point fixed) returns to the same null flag, but not to the same point of the double cover; a rotation through $4\pi$ does return to the starting point.
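As a concrete instance of this $2\pi$ versus $4\pi$ behaviour (a standard observation, included here only as an illustration, and anticipating the explicit path used in the proof of \reflem{main_thm_for_10_and_01} below), consider the path
\[ M_t = \pm \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \in PSL(2,\C), \quad t \in [0, 2\pi], \]
which rotates by angle $2t$ about a fixed geodesic. Its lift to $SL(2,\C)$ starting at the identity equals $-I$ at $t = \pi$, so a rotation through $2\pi$ does not close up in the double cover, while at $t = 2\pi$ the lift returns to $I$, so a rotation through $4\pi$ does.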
\begin{defn}
\label{Def:covers_of_flags}
We denote by $\mathcal{SF_P^O}(\HH)$ and $\mathcal{SF_P^O}(\R^{1,3})$ the double (universal) covers of $\mathcal{F_P^O}(\HH)$ and $\mathcal{F_P^O}(\R^{1,3})$ respectively. We call an element of $\mathcal{SF_P^O}(\HH)$ or $\mathcal{SF_P^O}(\R^{1,3})$ a \emph{spin flag}.
\end{defn}
A spin flag in \cite{Penrose_Rindler84} is called a \emph{null flag}. The maps $\G,\H,\I,\J$ are all diffeomorphisms, and they lift to diffeomorphisms between the corresponding double covers, the spaces $\mathcal{SF_P^O}$ and $\mathfrak{H_D^S}$. We denote these diffeomorphisms $\widetilde{\G}, \widetilde{\H}, \widetilde{\I}, \widetilde{\J}$. Since $\C_\times^2$ is simply connected, we also obtain a lift $\widetilde{\F}$ of $\F$ from $\C^2_\times$ to $\mathcal{SF_P^O}(\HH)$. The result is a sequence of diffeomorphisms lifting $\F, \G, \H, \I, \J$, between spaces all diffeomorphic to $S^3 \times \R$; they are also isomorphisms of $S^1$ bundles over $S^2 \times \R$.
\begin{equation}
\label{Eqn:fghij_lifts}
\C_\times^2 \stackrel{\widetilde{\F}}{\To} \mathcal{SF_P^O}(\HH) \stackrel{\widetilde{\G}}{\To} \mathcal{SF_P^O} (\R^{1,3}) \stackrel{\widetilde{\H}}{\To} \mathfrak{H_D^S}(\hyp) \stackrel{\widetilde{\I}}{\To} \mathfrak{H_D^S}(\Disc) \stackrel{\widetilde{\J}}{\To} \mathfrak{H_D^S}(\U).
\end{equation}
We have already seen that $\F,\G,\H,\I,\J$ are all $SL(2,\C)$ equivariant; we now argue that their lifts are too. First, note that the actions of $SL(2,\C)$ on $\mathcal{F_P^O}(\HH)$, $\mathcal{F_P^O}(\R^{1,3})$ and $\mathfrak{H_D}$ all factor through $PSL(2,\C)$. The action on $\mathcal{F_P^O}(\HH)$ derives from the action of $A \in SL(2,\C)$ on $S \in \HH$ as $S \mapsto ASA^*$, which when $A=-1$ is trivial. The same is true for the action on $\mathcal{F_P^O}(\R^{1,3})$, which is equivalent via the diffeomorphism $\G$. Similarly, the action of $SL(2,\C)$ on $\horos_D$ factors through $PSL(2,\C)$, since $PSL(2,\C) \cong \Isom^+ \hyp^3$. As $SL(2,\C)$ is the universal cover of $PSL(2,\C)$, we may regard an element of $SL(2,\C)$ as a homotopy class of paths in $PSL(2,\C)$ starting from the identity, and the elements along such a path act on $\C^2_\times$, $\mathcal{F_P^O}(\HH)$, $\mathcal{F_P^O}(\R^{1,3})$, and $\mathfrak{H_D}$ in any model of hyperbolic space, compatibly with the equivariant maps $\F,\G,\H,\I,\J$. The resulting paths in $\mathcal{F_P^O}$ or $\mathfrak{H_D}$ lift to paths in the universal covers $\mathcal{SF_P^O}$ or $\mathfrak{H_D^S}$, and so we obtain equivariant actions of $SL(2,\C)$ on the universal covers, proving the following proposition.
\begin{prop}
\label{Prop:spin_decoration_equivariance}
The maps $\widetilde{\F},\widetilde{\G},\widetilde{\H},\widetilde{\I},\widetilde{\J}$ are all diffeomorphisms, equivariant with respect to the actions of $SL(2,\C)$ on $\C_\times^2$, $\mathcal{SF_P^O}(\HH)$, $\mathcal{SF_P^O}(\R^{1,3})$, $\mathfrak{H_D^S}(\hyp)$, $\mathfrak{H_D^S}(\Disc)$ and $\mathfrak{H_D^S}(\U)$.
\qed
\end{prop}
Abbreviating the composition to
\[ \widetilde{\K} = \widetilde{\J} \circ \widetilde{\I} \circ \widetilde{\H} \circ \widetilde{\G} \circ \widetilde{\F}, \]
and observing that $\widetilde{\K}$ projects to $\K$ upon forgetting spin, mapping spin-decorated horospheres to decorated horospheres, we now have the following precise version of the main \refthm{spinors_to_horospheres} and \refthm{explicit_spinor_horosphere_decoration}.
\begin{theorem}
\label{Thm:main_thm_precise}
The map $\widetilde{\K} \colon \C^2_\times \To \mathfrak{H_D^S}(\U)$ is an $SL(2,\C)$-equivariant diffeomorphism.
Under $\widetilde{\K}$, a nonzero spinor corresponds to a spin-decorated horosphere which projects to the decorated horosphere described in \refprop{JIHGF_general_spin_vector}.
\end{theorem}
\subsection{Complex lambda lengths}
\label{Sec:complex_lambda_lengths}
We now define the requisite notions for lambda lengths. In this section we consider $\hyp^3$ independently of any particular model.
\begin{defn}
Let $q$ be a point on an oriented geodesic $\gamma$ in $\hyp^3$.
\begin{enumerate}
\item Let $f = (f_1, f_2, f_3)$ be a (right-handed orthonormal) frame at $q$. We say $f$ is \emph{adapted to $\gamma$} if $f_1$ is positively tangent to $\gamma$.
\item Let $\widetilde{f}$ be a spin frame at $q$. We say $\widetilde{f}$ is \emph{adapted to $\gamma$} if it is the lift of a frame adapted to $\gamma$.
\end{enumerate}
\end{defn}
Suppose now that $\gamma$ is an oriented geodesic in $\hyp^3$, and $q_1, q_2$ are two points on this geodesic (not necessarily distinct). Suppose we have a frame $f^i$ at $q_i$ adapted to $\gamma$, for $i=1,2$; let $f^i = (f^i_1, f^i_2, f^i_3)$. We can then consider parallel translation along $\gamma$ from $q_1$ to $q_2$; this translation is by some distance $\rho$, which we regard as positive or negative by reference to the orientation on $\gamma$. This parallel translation takes $f^1$ to a frame ${f^1}'$ at $q_2$. Since $f^1$ is adapted to $\gamma$, its first vector points positively along $\gamma$, and since ${f^1}'$ is related to $f^1$ by parallel translation along $\gamma$, ${f^1}'$ is also adapted to $\gamma$. Thus ${f^1}'$ and $f^2$ lie at the same point $q_2$ and have the same first vector. A further rotation by some angle $\theta$ about $\gamma$ (signed using the orientation of $\gamma$, with the standard right-handed convention) then takes ${f^1}'$ to $f^2$. We regard $\rho + i\theta$ as a complex length from $f^1$ to $f^2$, which we also denote by $d$. Note that $\theta$ is only well defined modulo $2\pi$. If the frames $f^1, f^2$ are lifted to spin frames, the same applies, except that $\theta$ is then well defined modulo $4\pi$. We summarise this in the following definition.
\begin{defn}
\label{Def:complex_distance}
Let $f^1, f^2$ be frames, or spin frames, at points $q_1, q_2$ on an oriented geodesic $\gamma$, adapted to $\gamma$. The \emph{complex translation distance}, or just \emph{complex distance} from $f^1$ to $f^2$ is $d = \rho+i\theta$, where a translation along $\gamma$ of signed distance $\rho$, followed by a rotation about $\gamma$ of angle $\theta$, takes $f^1$ to $f^2$.
\end{defn}
Two arbitrarily chosen frames, or spin frames, will usually not be adapted to any single oriented geodesic. If they are both adapted to a single oriented geodesic, then that geodesic is unique. So we may simply speak of the complex distance from $f^1$ to $f^2$, when it exists, without reference to any geodesic. The complex distance between two frames adapted to a common geodesic is well defined modulo $2\pi i$. The complex distance between two spin frames adapted to a common geodesic is well defined modulo $4\pi i$. Suppose now that we have two horospheres. We first consider decorations on them, then lift to spin decorations. So, let $(\mathpzc{h}_i, L^O_i)\in\mathfrak{H_D}$, for $i=1,2$, with $\mathpzc{h}_i\in\mathfrak{H}$ and $L^O_i$ an oriented parallel line field on $\horo_i$. Let $p_i \in \partial \hyp^3$ be the centre of $\mathpzc{h}_i$, and assume $p_1 \neq p_2$. Let $\gamma_{12}$ be the oriented geodesic from $p_1$ to $p_2$. Let $q_i = \gamma_{12} \cap \mathpzc{h}_i$.
So if $\horo_1, \horo_2$ are disjoint then $q_1$ is the closest point on $\mathpzc{h}_1$ to $\mathpzc{h}_2$, $q_2$ is the closest point on $\mathpzc{h}_2$ to $\mathpzc{h}_1$, and $\gamma_{12}$ is the unique common perpendicular geodesic to $\mathpzc{h}_1$ and $\mathpzc{h}_2$, oriented from $p_1$ to $p_2$. However, these constructions apply even if $\horo_1, \horo_2$ are tangent or overlap. The oriented parallel line field $L^O_i$ on $\mathpzc{h}_i$ determines an associated outward frame field $f_i^{out}$, and inward frame field $f_i^{in}$, on $\mathpzc{h}_i$. Note that $f_1^{in}(q_1)$ and $f_2^{out}(q_2)$ are both adapted to $\gamma_{12}$, while $f_1^{out}(q_1)$ and $f_2^{in}(q_2)$ are not; rather $f_1^{out}(q_1)$ and $f_2^{in}(q_2)$ are both adapted to the oriented geodesic $\gamma_{21}$ from $p_2$ to $p_1$. If we instead have spin decorations $(\mathpzc{h}_i, W_i)\in\mathfrak{H_D^S}$, then each $\mathpzc{h}_i\in\mathfrak{H}$ has a spin decoration $W_i$, from which we obtain an outward spin decoration $W_i^{out}$ and an inward spin decoration $W_i^{in}$ on each $\mathpzc{h}_i$. Note that $W_i^{out}$ and $W_i^{in}$ here project to $f_i^{out}$ and $f_i^{in}$ as in the previous paragraph. So $W_1^{in}(q_1)$ and $W_2^{out}(q_2)$ are adapted to $\gamma_{12}$, and $W_1^{out}(q_1)$ and $W_2^{in}(q_2)$ are adapted to $\gamma_{21}$.
\begin{center}
\begin{tikzpicture}
\draw[thick] (0,2) to[in=135,out=30](4,1);
\draw[red!50, ->, line width=0.5mm](0,2) to [out=30,in=210] (0.8,2.4);
\draw[green!50!black, ->, line width=0.5mm](0,2)--(0,2.8);
\draw[blue, ->, line width=0.5mm](0,2)--(0.8,1.6);
\draw[thick] (0,2) to[in=135,out=30](4,1);
\draw[red, ->, line width=0.5mm](4,1) to [out=315,in=135] (4.6,0.4);
\draw[green!50!black, ->, line width=0.5mm](4,1)--(4.7,1.6);
\draw[blue, ->, line width=0.5mm](4,1)--(3.7,0.4);
\node at (0,1.5){$f_1^{in}(q_1)$};
\node at (4,0){$f_2^{out}(q_2)$};
\node at (2,2){$\gamma_{12}$};
\end{tikzpicture}
\captionof{figure}{Complex translation distance between $f_1^{in}(q_1)$ and $f_2^{out}(q_2)$.}
\label{Fig:6}
\end{center}
\begin{defn} \
\label{Def:complex_lambda_length}
\begin{enumerate}
\item If $(\mathpzc{h}_1, L^O_1),(\mathpzc{h}_2, L^O_2)\in\mathfrak{H_D}$ have distinct centres, the \emph{complex lambda length} from $(\mathpzc{h}_1, L^O_1)$ to $(\mathpzc{h}_2, L^O_2)$ is
\[ \lambda_{12} = \exp \left( \frac{d}{2} \right), \]
where $d$ is the complex distance from $f_1^{in}(q_1)$ to $f_2^{out}(q_2)$.
\item If $(\mathpzc{h}_1, W_1),(\mathpzc{h}_2, W_2)\in\mathfrak{H_D^S}$ have distinct centres, the \emph{complex lambda length} from $(\mathpzc{h}_1, W_1)$ to $(\mathpzc{h}_2, W_2)$ is
\[ \lambda_{12} = \exp \left( \frac{d}{2} \right), \]
where $d$ is the complex distance from $W_1^{in}(q_1)$ to $W_2^{out}(q_2)$.
\end{enumerate}
If $\horo_1, \horo_2$ have common centre then in both cases $\lambda_{12} = 0$.
\end{defn}
See \reffig{6}. We abbreviate complex lambda length to \emph{lambda length}. In the decorated case, $d$ is well defined modulo $2\pi i$, so $\lambda_{12}$ is a well defined complex number up to sign. In the spin-decorated case, $\lambda_{12}$ is a well defined complex number. In either case $|\lambda_{12}|$ is well defined. Assume $\horo_1, \horo_2$ have distinct centres, so the geodesic $\gamma_{12}$ and the points $q_1, q_2$ exist.
Writing the complex distance $d$ from $f_1^{in}(q_1)$ to $f_2^{out}(q_2)$, or from $W_1^{in}(q_1)$ to $W_2^{out}(q_2)$, as $d = \rho + i \theta$ with $\rho, \theta \in \R$, the real part $\rho$ is the signed distance from $q_1$ to $q_2$ along the oriented geodesic $\gamma_{12}$. When $\horo_1, \horo_2$ are disjoint, then $\rho$ is positive, and gives the shortest distance between $\horo_1$ and $\horo_2$. When $\horo_1, \horo_2$ are tangent, $\rho=0$. When $\horo_1, \horo_2$ overlap, $\rho$ is negative. Setting $\lambda_{12} = 0$ when $\horo_1$ and $\horo_2$ have the same centre extends $\lambda$ to a continuous function $\mathfrak{H_D^S} \times \mathfrak{H_D^S} \To \C$, since when the centres of two horospheres (of fixed size, say, as they appear in the disc model) approach each other, their common perpendicular geodesic moves out to infinity and the length of the segment of it lying in the intersection of the two horoballs becomes arbitrarily large, so that $\rho \rightarrow -\infty$ and hence $\lambda \rightarrow 0$. These observations show that $\rho$ agrees with the signed undirected distance of \refdef{signed_undirected_distance}. Although $d$ is defined in a ``directed'' way from $\horo_1$ to $\horo_2$, its real part $\rho$ does not depend on the direction. Its imaginary part, the angle $\theta$, is also undirected in the decorated case, but in the spin-decorated case $\theta$ does depend on the direction, as we see below in \reflem{lambda_antisymmetric}. Taking moduli of both sides of the equations in \refdef{complex_lambda_length}, we obtain
\[ \left| \lambda_{12} \right| = \exp \left( \frac{\rho}{2} \right), \]
which by \refeqn{horosphere_distance_from_Minkowski_inner_product} and \refeqn{horosphere_distance_from_spinor_inner_product} implies
\[ \left| \lambda_{12} \right|^2 = \frac{1}{2} \left\langle \h^{-1}(\horo_1), \h^{-1}(\horo_2) \right\rangle = \left| \left\{ \kappa_1, \kappa_2 \right\} \right|^2 \]
where $\h^{-1}(\horo_i) \in L^+$ is the point on the light cone corresponding to the horosphere $\horo_i$ under $\h$, and $\kappa_i$ is a spinor corresponding to the horosphere $\horo_i$, i.e. such that $\h \circ \g \circ \f (\kappa_i) = \horo_i$. These equations give the modulus of the equation in \refthm{main_thm}. We now show that lambda length is antisymmetric, in the sense that if we measure it between spin-decorated horospheres in reverse order, it changes by a sign. This is necessary for \refthm{main_thm}, since the spinor inner product $\{ \cdot, \cdot \}$ of \refdef{bilinear_form_defn} is also antisymmetric.
\begin{lem}
\label{Lem:lambda_antisymmetric}
Let $(\mathpzc{h}_i, W_i)\in\mathfrak{H_D^S}$, for $i=1,2$. Let $d_{ij}$ be the complex distance from $W_i^{in}(q_i)$ to $W_j^{out}(q_j)$, so that $\lambda_{ij} = \exp \left( d_{ij}/2 \right)$ is the lambda length from $(\mathpzc{h}_i, W_i)$ to $(\mathpzc{h}_j, W_j)$. Then
\[ d_{ij} = d_{ji} + 2 \pi i \quad \text{mod} \quad 4\pi i \quad \text{and} \quad \lambda_{ij} = -\lambda_{ji}. \]
\end{lem}
\begin{proof}
First, if the horospheres have common centre then $\lambda_{ij} = \lambda_{ji} = 0$, by definition. So we may assume they have distinct centres. Then $\lambda_{ij} = \exp(d_{ij}/2)$, where $d_{ij}$ is the complex distance from $W_i^{in}$ to $W_j^{out}$ along $\gamma_{ij}$, the oriented geodesic from the centre of $\horo_i$ to the centre of $\horo_j$. Let $W_i^{in}, W_j^{out}$ project to the frames $f_i^{in}(\V_i), f_j^{out}(\V_j)$ of unit parallel vector fields $\V_i, \V_j$ on $\mathpzc{h}_i, \horo_j$.
Recall that $W_2^{in}$ is obtained from $W_2^{out}$ by a rotation of $\pi$ about $\V_2$, and $W_1^{out}$ is obtained from $W_1^{in}$ by a rotation of $-\pi$ about $\V_1$ (\refdef{associated_inward_outward_spindec}). Let $Y_1^{out}$ be obtained from $W_1^{in}$ by a rotation of $\pi$ about $\V_1$, so $Y_1^{out}$ and $W_1^{out}$ both project to $f_1^{out}$, but differ by a $2\pi$ rotation. Now the spin isometry which takes $W_1^{in}(q_1)$ to $W_2^{out}(q_2)$ also takes $Y_1^{out}(q_1)$ to $W_2^{in}(q_2)$, since the latter pair are obtained from the former pair by rotations of $\pi$ about $\V_1, \V_2$ respectively. So the complex distance from $W_1^{in}(q_1)$ to $W_2^{out}(q_2)$ along $\gamma_{12}$ is equal to the complex distance from $W_2^{in}(q_2)$ to $Y_1^{out}(q_1)$ along $\gamma_{21}$. But this latter complex distance is equal to $d_{21} + 2\pi i$ (mod $4\pi i$), since $Y_1^{out}(q_1)$ and $W_1^{out}(q_1)$ differ by a $2\pi$ rotation. Thus we obtain $d_{12} = d_{21} + 2 \pi i$ mod $4\pi i$, hence $\lambda_{12} = - \lambda_{21}$ as desired.
\end{proof}
\subsection{Proof of \refthm{main_thm_2}}
\label{Sec:proof_main_thm}
The strategy of the proof of \refthm{main_thm_2} is to first prove it in simple cases, and then extend to the general case by equivariance. Before doing so, however, we first establish how lambda lengths are invariant under $SL(2,\C)$.
\begin{lem}
\label{Lem:lambda_length_invariant_under_isometry}
Let $(\mathpzc{h}_i, W_i)\in\mathfrak{H_D^S}$ for $i=1,2$ and let $A \in SL(2,\C)$. Let $\lambda_{12}$ be the complex lambda length from $(\mathpzc{h}_1, W_1)$ to $(\mathpzc{h}_2, W_2)$, and let $\lambda_{A1,A2}$ be the complex lambda length from $A\cdot (\mathpzc{h}_1, W_1)$ to $A\cdot (\mathpzc{h}_2, W_2)$. Then $\lambda_{12} = \lambda_{A1,A2}$.
\end{lem}
\begin{proof}
As $A \in SL(2,\C)$, the universal cover of $\Isom^+ \hyp^3 \cong PSL(2,\C)$, $A$ is represented by a path of isometries $M_t \in PSL(2,\C)$, where $M_0$ is the identity and $M_1 = \pm A$. As in the definition of complex lambda length, let $\gamma_{12}$ be the oriented geodesic from the centre of $\horo_1$ to the centre of $\horo_2$, and let $q_i = \gamma_{12} \cap \horo_i$. Then the spin frames $W_1^{in} (q_1)$ and $W_2^{out} (q_2)$ are adapted to $\gamma_{12}$ and their complex distance $d$ satisfies $\lambda_{12} = \exp(d/2)$. As each $M_t$ is an isometry, applying $M_t$ to the horospheres and spin frames involved yields a 1-parameter family of horospheres $M_t \cdot \horo_1, M_t \cdot \horo_2$ for $t \in [0,1]$, with common perpendicular geodesic $M_t \cdot \gamma_{12}$, intersecting the horospheres at points $q_1^t = M_t \cdot q_1$ and $q_2^t = M_t \cdot q_2$, at which there are spin frames $M_t \cdot W_1^{in} (q_1^t), M_t \cdot W_2^{out} (q_2^t)$ adapted to $M_t \cdot \gamma_{12}$. As $M_t$ is an isometry, the complex distance $d$ between the spin frames $M_t \cdot W_1^{in} (q_1^t)$ and $M_t \cdot W_2^{out} (q_2^t)$ remains constant. Hence the lambda length $\lambda_{12} = \exp(d/2)$ also remains constant. At time $t=1$, we arrive at the spin-decorated horospheres $A \cdot (\horo_1, W_1)$ and $A \cdot (\horo_2, W_2)$. Their complex distance remains $d$, and their lambda length $\lambda_{A1,A2}$ remains equal to $\lambda_{12} = e^{d/2}$.
\end{proof}
\begin{lem}
\label{Lem:main_thm_for_10_and_01}
Let $\kappa_1 = (1,0)$ and $\kappa_2 = (0,1)$, and let $(\horo_1, W_1), (\horo_2, W_2) \in \mathfrak{H_D^S}(\U)$ be the corresponding spin-decorated horospheres under $\widetilde{\K}$.
Then the lambda length from $(\horo_1, W_1)$ to $(\horo_2, W_2)$ is $1$.
\end{lem}
\begin{proof}
By \refprop{JIHGF_general_spin_vector}, $\mathpzc{h}_1$ is centred at $\infty$, at Euclidean height $1$, with spin decoration $W_1$ projecting to the decoration specified by $i$. Similarly, $\mathpzc{h}_2$ is centred at $0$, with Euclidean diameter $1$, and spin decoration $W_2$ projecting to the decoration north-pole specified by $i$. These two horospheres are tangent at $q = (0,0,1)$, and the spin decorations $W_1^{in}$ and $W_2^{out}$ both project to the same frame at $q$, namely $(-e_z,e_y,e_x)$. So the complex distance from $W_1^{in}(q)$ to $W_2^{out}(q)$ is $d = i\theta$, where the rotation angle $\theta$ is $0$ or $2\pi$ mod $4\pi$; we claim it is in fact $0$ mod $4\pi$. To see this, consider the following path in $PSL(2,\C) \cong \Isom^+ \U$:
\[ M_t = \pm \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \in PSL(2,\C), \quad \text{from} \quad t=0 \quad \text{to} \quad t=\frac{\pi}{2}. \]
As an isometry of $\U$, each $M_t$ is a rotation by angle $2t$ about the oriented geodesic $\delta$ from $-i$ to $i$. Hence $M_t$ preserves each point on $\delta$, including $q$. Thus the family $M_t$ rotates $\horo_1$ about $\delta$ to the horosphere $M_{\pi/2} \horo_1$, which is centred at $M_{\pi/2} (\infty) = 0$ and passes through $q$, hence is $\horo_2$. Throughout this family of rotations, the point $q$ is preserved, as is the tangent vector at $q$ in the $y$-direction, which is positively tangent to $\delta$. In particular, over $t \in [0, \pi/2]$, the family of rotations $M_t$ rotates the frame of $W_1^{in}$ to the frame of $W_2^{in}$. In fact, as we now argue, the path $M_t$ rotates the \emph{spin} frame of $W_1^{in}$ to the spin frame of $W_2^{in}$. The path $M_t$ is a path in $PSL(2,\C)$ starting at the identity, and lifts to a unique path in $SL(2,\C)$ starting at the identity
\[ \widetilde{M_t} = \begin{pmatrix} \cos t & - \sin t \\ \sin t & \cos t \end{pmatrix} \quad \text{from} \quad \widetilde{M_0} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \text{to} \quad A = \widetilde{M_{\frac{\pi}{2}}} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \]
Regarding $SL(2,\C)$ as the universal cover of $PSL(2,\C)$, $M_t$ is a path representing the spin isometry $A$. Note that $A \cdot (1,0) = (0,1)$, i.e. $A \cdot \kappa_1 = \kappa_2$. So by $SL(2,\C)$-equivariance (\refthm{main_thm_precise}), we have $A \cdot (\mathpzc{h}_1, W_1) = (\mathpzc{h}_2, W_2)$, and hence, on the one hand, $A \cdot W_1^{in} = W_2^{in}$. But on the other hand, $A$ is represented by the path $M_t$, which rotates about the geodesic $\delta$ by an angle of $2t$, for $t \in [0, \pi/2]$. Therefore $W_2^{in}(q)$ is obtained from $W_1^{in}(q)$ by a rotation of angle $\pi$ about $e_y$, the vector pointing along $\delta$. Then, by \refdef{associated_inward_outward_spindec}, $W_2^{out}(q)$ is obtained from $W_2^{in}(q)$ by a rotation of angle $-\pi$ about $e_y$, i.e. by $-\pi$ about the oriented geodesic $\delta$. Thus, from $W_1^{in}(q)$, we obtain $W_2^{in}(q)$ by a rotation of $\pi$ about $\delta$; and then obtain $W_2^{out}(q)$ by a rotation of $-\pi$ about $\delta$. So $W_1^{in}(q) = W_2^{out}(q)$, and the rotation angle $\theta$ is $0$ mod $4\pi$ as claimed. Then $d=0$ and $\lambda = \exp(d/2) = 1$.
\end{proof}
\begin{lem}
\label{Lem:main_thm_for_10_and_0D}
Let $0 \neq D \in \C$, and let $\kappa_1 = (1,0)$ and $\kappa_2 = (0,D)$.
Let $(\horo_1, W_1), (\horo_2, W_2) \in \mathfrak{H_D^S}(\U)$ be the corresponding spin-decorated horospheres under $\widetilde{\K}$. Then the lambda length from $(\horo_1, W_1)$ to $(\horo_2, W_2)$ is $D$.
\end{lem}
\begin{proof}
The previous \reflem{main_thm_for_10_and_01} verified this statement when $D=1$. As there, $\horo_1$ is centred at $\infty$, of height $1$, with spin decoration $W_1$ projecting to the decoration specified by $i$. By \refprop{JIHGF_general_spin_vector}, $\horo_2$ is centred at $0$, with Euclidean diameter $|D|^{-2}$, and spin decoration $W_2$ projecting to the decoration north-pole specified by $i D^{-2}$. The common perpendicular geodesic $\gamma_{12}$ is the vertical line in $\U$ from $\infty$ to $0$, which intersects $\mathpzc{h}_1$ at $q_1 = (0,0,1)$ and $\mathpzc{h}_2$ at $q_2 = (0,0,|D|^{-2})$. Thus the signed distance from $q_1$ to $q_2$ along $\gamma_{12}$ is $\rho = 2 \log |D|$. The rotation angle $\theta$ between decorations, measured with respect to $\gamma_{12}$, is $2 \arg D$ modulo $2\pi$. We will show that $\theta$ is in fact $2 \arg D$ modulo $4\pi$. From \reflem{main_thm_for_10_and_01}, we know that when $D=1$, the points $q_1, q_2$ coincide, and the spin frames $W_1^{in}$ and $W_{2,{D=1}}^{out}$ coincide at this point. Denote the spin-decorated horosphere $\widetilde{\K} (0,1)$ by $(\horo_{2,{D=1}}, W_{2,{D=1}})$. We consider a spin isometry taking the $D=1$ case to the general $D$ case. Consider the following path $M_t$ in $PSL(2,\C)$ for $t \in [0,1]$, representing the spin isometry $A$:
\[ A = \begin{pmatrix} D^{-1} & 0 \\ 0 & D \end{pmatrix} , \quad M_t = \pm \begin{pmatrix} e^{-t \left( \log |D| + i \arg D \right)} & 0 \\ 0 & e^{t \left( \log |D| + i \arg D \right)} \end{pmatrix} \]
Note that $M_t$ effectively has diagonal entries $D^{-t}$ and $D^t$; we have just made them precise using the logarithm and argument, taking, for instance, $\arg D \in [0, 2\pi)$. The path $M_t$ lifts to a path in $SL(2,\C)$ beginning at the identity and ending at $A$, so indeed $M_t$ represents $A$. On the one hand, $A \cdot (0,1) = (0,D)$, so by equivariance (\refthm{main_thm_precise}), applied to the corresponding spin-decorated horospheres, $A \cdot (\horo_{2,{D=1}}, W_{2,{D=1}}) = (\horo_2, W_2)$. On the other hand, each $M_t$ is an isometry of $\U$ fixing $0$ and $\infty$, which translates along $\gamma_{12}$ by signed distance $2t \log |D|$, and rotates around the oriented geodesic $\gamma_{12}$ by angle $2t \arg D$, for $t \in [0,1]$. So $A \cdot (\horo_{2,{D=1}}, W_{2,{D=1}}) = (\horo_2, W_2)$ is obtained from $(\horo_{2,{D=1}}, W_{2,{D=1}})$ by a translation along $\gamma_{12}$ of distance $2 \log |D|$, and rotation around $\gamma_{12}$ of angle $2 \arg D$. Now from \reflem{main_thm_for_10_and_01}, the spin frames $W_1^{in} (q_1)$ and $W_{2,{D=1}}^{out} (q_1)$ coincide. From above, $W_2^{out} (q_2)$ is obtained from $W_{2,{D=1}}^{out} (q_1)$ by a translation and rotation along $\gamma_{12}$ of complex distance $d = 2 \log |D| + 2 i \arg D$. Thus the lambda length from $(\horo_1, W_1)$ to $(\horo_2, W_2)$ is
\[ \lambda_{12} = e^{d/2} = \exp \left( \log |D| + i \arg(D) \right) = D. \]
\end{proof}
We now state and prove a precise version of \refthm{main_thm_2}.
\begin{theorem}
\label{Thm:main_thm_2_precise}
Let $\kappa_1, \kappa_2 \in \C_\times^2$, and let $\widetilde{\K}(\kappa_1)= (\mathpzc{h}_1, W_1)$ and $\widetilde{\K}(\kappa_2)=(\mathpzc{h}_2, W_2)$ be the corresponding spin-decorated horospheres. Then the lambda length $\lambda_{12}$ from $(\mathpzc{h}_1, W_1)$ to $(\mathpzc{h}_2, W_2)$ is given by
\[ \lambda_{12} = \{\kappa_1, \kappa_2 \}.
\]
\end{theorem}
\begin{proof}
If $\kappa_1, \kappa_2$ are linearly dependent then one is a complex multiple of the other, and the two horospheres $\mathpzc{h}_1, \mathpzc{h}_2$ have the same centre. Then $\{\kappa_1, \kappa_2\} = \lambda_{12} = 0$. We can thus assume $\kappa_1, \kappa_2$ are linearly independent. By \refthm{main_thm_precise}, $\widetilde{\K}$ is $SL(2,\C)$-equivariant. By \reflem{SL2C_by_symplectomorphisms}, the bilinear form $\{\cdot, \cdot \}$ is invariant under applying $A \in SL(2,\C)$ to spin vectors. By \reflem{lambda_length_invariant_under_isometry}, complex lambda length is invariant under applying $A \in SL(2,\C)$ to spin-decorated horospheres. So it suffices to show the desired equality after applying an element $A$ of $SL(2,\C)$ to both $\kappa_1, \kappa_2$ and $(\mathpzc{h}_1, W_1), (\mathpzc{h}_2, W_2)$. Since $\kappa_1, \kappa_2$ are linearly independent, we take $A$ to be the unique matrix in $SL(2,\C)$ such that $A\cdot\kappa_1 = (1,0)$ and $A\cdot\kappa_2 = (0,D)$ for some $D$. In fact then $D = \{ \kappa_1, \kappa_2\}$. To see this, note that $A$ is the inverse of the matrix with columns $\kappa_1$ and $\kappa_2/D$, with $D$ chosen so that $\det A = 1$. By definition of the bilinear form $\{ \cdot, \cdot \}$, we have $1 = \det A = \det A^{-1} = \{ \kappa_1, \kappa_2/D \} = \frac{1}{D} \{\kappa_1, \kappa_2 \}$. Thus $D = \{ \kappa_1, \kappa_2\}$. Thus, it suffices to prove the result when $\kappa_1 = (1,0)$ and $\kappa_2 = (0,D)$, i.e. that in this case the lambda length is $\{\kappa_1, \kappa_2\} = D$. This is precisely the result of \reflem{main_thm_for_10_and_0D}.
\end{proof}
\section{Applications}
\label{Sec:applications}
\subsection{Three-dimensional hyperbolic geometry}
\label{Sec:3d_hyp_geom}
\subsubsection{Ptolemy equation for spin-decorated ideal tetrahedra}
We now prove \refthm{main_thm_Ptolemy}. In fact, we prove the following slightly stronger theorem.
\begin{theorem}
Let $(\mathpzc{h}_i, W_i)\in\mathfrak{H_D^S}$ for $i=0,1,2,3$ be four spin-decorated horospheres in $\hyp^3$, and let $\lambda_{ij}$ be the lambda length from $(\mathpzc{h}_i, W_i)$ to $(\mathpzc{h}_j, W_j)$. Then
\[ \lambda_{01} \lambda_{23} + \lambda_{03} \lambda_{12} = \lambda_{02} \lambda_{13}. \]
\end{theorem}
\begin{proof}
By \refthm{main_thm_precise}, each $(\mathpzc{h}_i, W_i)$ corresponds via $\widetilde{\K}$ to a unique $\kappa_i = (\xi_i, \eta_i) \in \C_\times^2$. Let $M\in\mathcal{M}_{2 \times 4}(\C)$ be the matrix whose $j^{\text{th}}$ column is $\kappa_j$. For $i,j \in \{0,1,2,3\}$, let $M_{ij}\in\mathcal{M}_{2 \times 2}(\C)$ be the submatrix whose columns are $\kappa_i$ and $\kappa_j$ in order. By definition $\det M_{ij} = \{ \kappa_i, \kappa_j \}$ and by \refthm{main_thm_2_precise} this is also equal to $\lambda_{ij}$. Thus the claimed equation can be rewritten as
\[ \det M_{01} \det M_{23} + \det M_{03} \det M_{12} = \det M_{02} \det M_{13} \]
which is a well-known Pl\"{u}cker relation, as seen previously in \refeqn{Plucker_24}.
\end{proof}
As mentioned in the introductory \refsec{Ptolemy_matrices}, this theorem generalises Penner's Ptolemy equation for decorated ideal triangles in $\hyp^2$. When the four horosphere centres lie in a plane, Penner defines each $\lambda_{ij}$ to be our $\exp \left( \rho_{ij}/2 \right)$, by reference to distances only, without angles or decorations. As we will see in \refsec{spin_coherent_positivity} below, decorations can be chosen to obtain the same $\lambda_{ij}$.
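As a quick numerical sanity check of the Ptolemy equation above (equivalently, the Pl\"{u}cker relation in the proof), purely illustrative and independent of the geometric argument, one can evaluate the bracket $\{\kappa_i, \kappa_j\} = \xi_i \eta_j - \xi_j \eta_i$ as a $2 \times 2$ determinant and verify the identity for randomly chosen spinors; the following Python sketch, with function names of our own choosing, does this.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def bracket(k1, k2):
    # {k1, k2} = xi1*eta2 - xi2*eta1, the determinant of the 2x2 matrix
    # with columns k1 and k2.
    return k1[0] * k2[1] - k2[0] * k1[1]

# four random spinors kappa_0, ..., kappa_3 in C^2 (nonzero with probability 1)
kappas = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
lam = {(i, j): bracket(kappas[i], kappas[j]) for i in range(4) for j in range(4)}

# Ptolemy / Pluecker relation: lam_01 lam_23 + lam_03 lam_12 = lam_02 lam_13
lhs = lam[0, 1] * lam[2, 3] + lam[0, 3] * lam[1, 2]
rhs = lam[0, 2] * lam[1, 3]
assert np.isclose(lhs, rhs)
\end{verbatim}
Substituting the explicit spinors used in the proof of the shape parameter theorem below reproduces the lambda lengths listed there.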
Note that multiplying any $\kappa_i$, corresponding to $(\mathpzc{h}_i,W_i)$, by a scalar $\alpha\in\C$ scales each term of \refeqn{ptolemy} containing the index $i$ by $\alpha$. For instance, multiplying $\kappa_1$ by $\alpha$ scales $\lambda_{01},\lambda_{12}$ and $\lambda_{13}$. In this sense, the choice of decorated horosphere at each vertex is like a choice of ``gauge''.
\subsubsection{Shape parameters and Ptolemy equations}
\label{Sec:shape_parameters}
The following definition is standard in hyperbolic geometry; shape parameters, for instance, form the variables in Thurston's gluing equations \cite{Thurston_notes}.
\begin{defn}
\label{Def:shape_parameter}
Let $e$ be an edge of an ideal hyperbolic tetrahedron $\Delta$. Suppose the endpoints of $e$ are placed at $0$ and $\infty$ in $\U$, and the remaining ideal vertices are placed at $1$ and $z_e \in \C$ such that $\Im z_e > 0$. Then $z_e$ is the \emph{shape parameter} of $\Delta$ along $e$.
\end{defn}
As is well known, the shape parameters at opposite edges of an ideal tetrahedron are equal, and if one shape parameter $z$ is known, then the others can be denoted $z, z', z''$ such that
\[ z' = \frac{1}{1-z} \quad \text{and} \quad z'' =\frac{z-1}{z}. \]
In particular, we have the equation
\begin{equation}
\label{Eqn:shapeparamter}
z + z'^{-1} = 1,
\end{equation}
which also holds under cyclic permutations $(z, z', z'')\mapsto (z', z'', z)$.
\begin{theorem}
Numbering the ideal vertices of $\Delta$ by $0, 1, 2, 3$ as in \reffig{7}, let the shape parameter of edge $ij$ be $z_{ij}$. Choose $(\mathpzc{h}_i,W_i)\in\mathfrak{H_D^S}$ at ideal vertex $i$ and let $\lambda_{ij}$ be the complex lambda length from $(\mathpzc{h}_i,W_i)$ to $(\mathpzc{h}_j,W_j)$. Then
\begin{equation}z_{01}=z_{23}=\frac{\lambda_{02}\lambda_{13}}{\lambda_{03}\lambda_{12}},\ \ z_{02}=z_{13}=-\frac{\lambda_{03}\lambda_{12}}{\lambda_{01}\lambda_{23}},\ \ z_{03}=z_{12}=\frac{\lambda_{01}\lambda_{23}}{\lambda_{02}\lambda_{13}}.\label{Eqn:sp}\end{equation}
\end{theorem}
\begin{center}
\begin{tikzpicture}[scale=2]
\draw (1,0)--(0,1);
\fill[white] (0.5,0.5) circle (0.1 cm);
\draw (0,0)--(1,0)--(1,1)--(0,1)--(0,0)--(1,1);
\node at (-0.1,-0.1){2};
\node at (-0.1,1.1){1};
\node at (1.1,-0.1){0};
\node at (1.1,1.1){3};
\end{tikzpicture}
\captionof{figure}{Tetrahedron with vertices labelled $0,1,2,3$.}
\label{Fig:7}
\end{center}
\begin{proof}
Transforming a spin-decorated ideal tetrahedron by a spin isometry leaves all shape parameters and complex lambda lengths invariant. Noting the orientation of \reffig{7}, we may thus place the ideal vertices $0, 1, 2, 3$ at $0, \infty, z, 1\in\partial\hyp^3$ respectively, so that $z = z_{01} = z_{23}$, $z' = z_{02} = z_{13}$ and $z'' = z_{03} = z_{12}$. Moreover, multiplying a spinor $\kappa_i$ corresponding to $(\mathpzc{h}_i,W_i)$ by a complex scalar $\alpha$ leaves the expressions in lambda lengths in \refeqn{sp} invariant, since each is homogeneous of degree zero in each $\kappa_i$. Thus it suffices to prove the claim for any choice of spin decoration, or spinor, at each vertex. By \refthm{main_thm_precise}, each of the following spinors $\kappa_j$ corresponds under $\widetilde{\K}$ to a spin-decorated horosphere at ideal vertex $j$:
\[ \kappa_0 = (0, 1), \quad \kappa_1 = (1, 0), \quad \kappa_2 = (z, 1), \quad \kappa_3 = (1, 1). \]
By \refthm{main_thm} we calculate the complex lambda lengths to be
\[ \lambda_{01} =-1, \quad \lambda_{02} =-z, \quad \lambda_{03} =-1, \quad \lambda_{12} =1, \quad \lambda_{13} =1, \quad \lambda_{23} =z-1. \]
This immediately gives the first equation.
Permuting labels $(0,1,2,3)\mapsto (0,2,3,1)$ on $\Delta$ and similarly the indices on each $\lambda$, and using the antisymmetry of $\lambda$, gives the subsequent equations. \end{proof} Substituting $z = z_{01} = z_{23}$ and $z' = z_{02} = z_{13}$ with the corresponding expression in the $\lambda_{ij}$ from \refeqn{sp}, the equation \refeqn{shapeparamter} relating shape parameters becomes precisely the Ptolemy equation \refeqn{ptolemy}. From this perspective, arguably the shape parameter equation is a Ptolemy equation in disguise! \subsection{Real spinors and 2-dimensional hyperbolic geometry} \label{Sec:real_spinors_H2} When a spinor $(\xi, \eta)$ has real coordinates, the corresponding horosphere has centre $\xi/\eta$ in $\R \cup \{\infty\}$, which forms the boundary of a copy of the upper half-plane model of 2-dimensional hyperbolic space, inside the upper half-space model $\U$ of 3-dimensional hyperbolic space. With this observation, we can use real spinors to describe 2-dimensional hyperbolic geometry. Accordingly, we regard $\hyp^2$ as the upper half plane embedded in $\U = \{(x,y,z) \mid z>0 \}$ at $y=0$. In this section we explore some of this lower-dimensional geometry. In \refsec{horocycles_decorations} we adapt spin decorations to 2 dimensions, and show they correspond to real spinors. In \refsec{spin_coherent_positivity} we discuss the relationship between ordering points around the boundary circle of $\hyp^2$, and positive lambda lengths. In \refsec{triangulations} we briefly mention triangulations of polygons, and in \refsec{Ford} we discuss how Ford circles arise from integer spinors. \subsubsection{Horocycles and their decorations} \label{Sec:horocycles_decorations} Any horocycle $\mathpzc{h}$ in $\hyp^2$ extends to a unique horosphere $\widetilde{\mathpzc{h}}$ in $\hyp^3$, with centre in $\R \cup \{\infty\}$, and $\widetilde{\horo}$ can be given a spin decoration as usual. \begin{defn} \label{Def:planar_spin_decoration} Let $\mathpzc{h}$ be a horocycle in $\hyp^2$. A \emph{planar spin decoration} on $\mathpzc{h}$ is a spin decoration $(W^{in}, W^{out})$ on the corresponding $\widetilde{\mathpzc{h}} \in \mathfrak{H}(\hyp^3)$ whose inward and outward spin frames $W^{in}, W^{out}$ project to frames specified by $i$. \end{defn} The requirement that frames be specified by $i$ determines a unique decoration on $\widetilde{\horo}$, but not a unique spin decoration. There are precisely two planar spin decorations on $\horo$, corresponding to two spinors $\pm \kappa$. \begin{lem} \label{Lem:real_spinor_planar_decoration} Let $\kappa = (\xi,\eta) \in \C_\times^2$ be a spin vector corresponding to $(\mathpzc{h}, W)\in\mathfrak{H_D^S}$. Then $(\mathpzc{h},W)$ arises from a planar spin decoration if and only if $(\xi,\eta) \in \R_\times^2$. \end{lem} \begin{proof} A spin decoration on a horosphere is planar if and only if its centre is real and its decoration is specified by $i$. Since the centre is $\xi/\eta$ and its frames are (north-pole) specified by $i/\eta^2$ (if $\eta \neq 0$) or $i \xi^2$ (if $\eta = 0$), the decoration is planar iff $\xi/\eta$ is real and $\eta^2$ is positive, or $\xi/\eta = \infty$ (i.e. $\eta = 0)$ and $\xi^2$ is positive; i.e. iff $\xi/\eta$ is real and $\eta$ is real, or $\eta = 0$ and $\xi$ is nonzero real. This is precisely the case when $\xi$ and $\eta$ are both real and not both zero. \end{proof} Suppose we have two horocycles $\mathpzc{h}_1, \mathpzc{h}_2$ in $\hyp^2$ with planar spin decorations. 
Then the complex distance $d_{12}$ from $W_1^{in}$ to $W_2^{out}$ will be of the form $\rho_{12} + i \theta_{12}$ where $\theta_{12}$ is $0$ or $2\pi$ mod $4\pi$. The lambda length $\lambda_{12}$ from $(\mathpzc{h}_1, W_1)$ to $(\mathpzc{h}_2, W_2)$ will be positive or negative accordingly. Since lambda lengths are antisymmetric (\reflem{lambda_antisymmetric}), $\lambda_{21} = - \lambda_{12}$, the two complex distances $d_{12}, d_{21}$ satisfy $d_{21} = d_{12} + 2\pi i$ mod $4\pi i$. So in particular, $\theta_{12}$ is $0$ mod $4\pi$ precisely when $\theta_{21}$ is $2\pi$ mod $4\pi$, and vice versa.
\subsubsection{Ordering and positivity in the hyperbolic plane}
\label{Sec:spin_coherent_positivity}
We will use a specific orientation of $\hyp^2$. Since we regard $\hyp^2$ as a surface embedded in $\U$, each point of $\hyp^2$ has two normal directions $\pm e_y$. By \refdef{planar_spin_decoration}, it is the positive $y$-direction which corresponds to real spinors, and so we make the following definition.
\begin{defn}
We orient $\hyp^2$ by the normal direction $e_y$. We orient $\partial \hyp^2$ as the boundary of $\hyp^2$.
\end{defn}
Note that this is the opposite of the usual orientation on the upper half plane, and in particular $\partial \hyp^2 = \R \cup \{\infty\}$ is oriented along $\R$ in the negative direction.
\begin{defn}
\label{Def:in_order}
A collection of distinct points $z_1, \ldots, z_d \in \partial \hyp^2$ is \emph{in order} around $\partial \hyp^2$ if the points appear in this cyclic order on the oriented circle $\partial \hyp^2$.
\end{defn}
Since $\partial \hyp^2$ is oriented in the negative direction on $\R$, ``in order'' here roughly means ``in decreasing order'', but as $\partial \hyp^2$ is a circle, we allow for cyclic permutation. Thus $z_1, \ldots, z_d \in \R$ are in order if $z_1 > z_2 > \cdots > z_d$. But they are also in order if, for instance, $\infty = z_k > z_{k+1} > \cdots > z_d > z_1 > z_2 > \cdots > z_{k-1}$ for some $k$. We will define, as usual, an ideal $d$-gon in the hyperbolic plane to have vertices at infinity, but we prefer the vertices to be in order around the circle at infinity. We also allow various relaxations of the usual assumptions, or degenerations of those vertices, as follows.
\begin{defn} \
\label{Def:d-gons}
\begin{enumerate}
\item An \emph{ideal $d$-gon} is a collection of distinct points $z_1, \ldots, z_d$ in $\partial \hyp^2$, labelled in order around $\partial \hyp^2$.
\item A \emph{generalised ideal $d$-gon} is a collection of points $z_1, \ldots, z_d$ in $\partial \hyp^2$.
\item A generalised ideal $d$-gon is \emph{degenerate} if all its vertices coincide. Otherwise it is \emph{non-degenerate}.
\end{enumerate}
\end{defn}
Thus, a generalised ideal $d$-gon may not have its vertices labelled in order around $\partial \hyp^2$, and some or even all vertices may coincide. When all of them coincide, it is degenerate. For the rest of this section, let $\horo_1, \ldots, \horo_d$ be horocycles in $\hyp^2$, with centres $p_1, \ldots, p_d \in \partial \hyp^2 = \R \cup \{\infty\}$, and let $\gamma_{ij}$ be the oriented geodesic from $p_i$ to $p_j$. Let $p_{i,j}$ be the intersection of $\gamma_{ij}$ with $\horo_i$. It turns out that when the $p_i$ are distinct and in order as per \refdef{in_order}, one can choose planar spin decorations on the $\horo_i$ in a ``coherent'' way.
\begin{prop}
\label{Prop:distinct_vertices_coherent_decorations}
Suppose $p_1, \ldots, p_d \in \partial \hyp^2$ are distinct and in order around $\partial \hyp^2$.
Then there exist planar spin decorations $W_1, \ldots, W_d$ on $\horo_1, \ldots, \horo_d$ such that the lambda lengths $\lambda_{ij}$ from $(\horo_i, W_i)$ to $(\horo_j, W_j)$ satisfy \[ \lambda_{ij} > 0 \quad \text{when} \quad i<j \quad \text{and} \quad \lambda_{ij} < 0 \quad \text{when} \quad i>j. \] \end{prop} \begin{proof} Recall there are two choices for the spin decoration on each horocycle. Choose the spin decoration on $\horo_1$ arbitrarily, and then choose the spin decoration on each other $\horo_j$ so that $\lambda_{1j} >0$. This means that each complex distance $d_{1j} = \rho_{1j} + i \theta_{1j}$ is chosen to be real, with each $\theta_{1j} = 0$ mod $4\pi$, and the spin frame $W_j$ on each $\widetilde{\horo_j}$ is chosen so that $W_1^{in} (p_{1,j})$ and $W_j^{out} (p_{j,1})$ are related by parallel translation along $\gamma_{1j}$; there is no rotation, as $\theta_{1j} = 0$ mod $4\pi$. It suffices then to prove that for any $i<j$ we have $\lambda_{ij} > 0$, or equivalently, $\theta_{ij} = 0$ mod $4\pi$. We give two proofs of this claim, geometric then algebraic. For the first proof, consider the oriented geodesics $\gamma_{1i}, \gamma_{1j}, \gamma_{ij}$. By \refdef{associated_inward_outward_spindec}, $W_i^{out}$ is obtained from $W_i^{in}$ at any point on $\horo_i$ by a rotation of $-\pi$ about the decoration on $W_i$, which points in $y$-direction. Moreover, as decorations arise from parallel line fields, $W_i^{in}$ is obtained at any point of $\horo_i$ from any other by parallel translation on $\horo_i$; similarly $W_j^{in}$ is obtained at any point of $\horo_j$ from any other by parallel translation on $\horo_j$. Thus, we may proceed from $W_i^{in}(p_{i,j})$ to $W_j^{out}(p_{j,i})$ in several steps, as follows: \begin{enumerate} \item Parallel translation along $\horo_i$ from $p_{i,j}$ to $p_{i,1}$ takes $W_i^{in}(p_{i,j})$ to $W_i^{in}(p_{i,1})$; \item Rotation by $-\pi$ about the $y$-direction takes $W_i^{in}(p_{i,1})$ to $W_i^{out}(p_{i,1})$; \item Parallel translation along $\gamma_{1i}$ from $p_{i,1}$ to $p_{1,i}$ takes $W_i^{out}(p_{i,1})$ to $W_1^{in}(p_{1,i})$; \item Parallel translation along $\horo_1$ from $p_{1,i}$ to $p_{1,j}$ takes $W_1^{in}(p_{1,i})$ to $W_1^{in}(p_{1,j})$; \item Parallel translation along $\gamma_{1j}$ from $p_{1,j}$ to $p_{j,1}$ takes $W_1^{in}(p_{1,j})$ to $W_j^{out}(p_{j,1})$; \item Parallel translation along $\horo_j$ from $p_{j,1}$ to $p_{j,i}$ takes $W_j^{out}(p_{j,1})$ to $W_j^{out}(p_{j,i})$. \end{enumerate} Now, we note that the result of these translations must be the same, or differ by $2\pi$, from the parallel translation along $\gamma_{ij}$, which takes $W_i^{in}(p_{i,j})$ to $W_j^{out}(p_{j,i})$. By tracking the rotation of the frame (which may be done by hand, or by measuring angles), we see that the results are the same. See \reffig{spin_coherence}. Thus $\theta_{ij} = 0$ mod $4\pi$, and $\lambda_{ij} > 0$. 
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.8]
\draw[black] (0,0) circle (2cm);
\draw[black] (0,1.5) circle (0.5cm);
\draw[black] (-1.732, -1) arc (210:-150:0.5cm);
\draw[black] (1.732,-1) arc (-30:330:0.7 cm);
\begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow{latex}}}]
\draw[black, postaction={decorate}] (0,2) arc (0:-60:3.464 cm);
\draw[black, postaction={decorate}] (0,2) arc (180:240:3.464 cm);
\draw[black, postaction={decorate}] (1.732,-1) arc (60:120:3.464cm);
\end{scope}
\node[black] at (0,2.2) {$z_1$};
\node[black] at (-1.9,-1.1) {$z_j$};
\node[black] at (1.9,-1.1) {$z_i$};
\fill[black] (-0.14,1.03) circle (0.05 cm);
\node[black] at (-0.4,0.9) {$p_{1,j}$};
\fill[black] (0.14,1.03) circle (0.05 cm);
\node[black] at (0.4,0.9) {$p_{1,i}$};
\fill[black] (-0.96,-0.38) circle (0.05 cm);
\node[black] at (-1,-0.2) {$p_{j,1}$};
\fill[black] (-0.81,-0.63) circle (0.05 cm);
\node[black] at (-0.6,-0.8) {$p_{j,i}$};
\fill[black] (0.7, -0.1) circle (0.05 cm);
\node[black] at (1.1,-0.2) {$p_{i,1}$};
\fill[black] (0.43,-0.56) circle (0.05 cm);
\node[black] at (0.6,-0.7) {$p_{i,j}$};
\draw[-latex, ultra thick, green!50!black] (0.35,-0.45) arc (165:135:0.8 cm);
\draw[-latex, ultra thick, green!50!black] (0.7, -0.3) arc (-90:90:0.2 cm);
\draw[-latex, ultra thick, green!50!black] (0.55, 0) arc (215:200:3.6 cm);
\draw[-latex, ultra thick, green!50!black] (0.15, 0.9) arc (-72:-108:0.5 cm);
\draw[-latex, ultra thick, green!50!black] (-0.14,0.8) arc (-20:-40:3.6 cm);
\draw[-latex, ultra thick, green!50!black] (-0.85, -0.35) arc (40:15:0.5 cm);
\node[black] at (0,-0.7) {$\gamma_{i,j}$};
\node[black] at (0.7,0.4) {$\gamma_{1,i}$};
\node[black] at (-0.7,0.4) {$\gamma_{1,j}$};
\node[black] at (-0.7,1.6) {$\horo_1$};
\node[black] at (1.6, 0.1) {$\horo_i$};
\node[black] at (-1.1, -1.4) {$\horo_j$};
\end{tikzpicture}
\caption{Constructing spin-coherent planar spin decorations.}
\label{Fig:spin_coherence}
\end{center}
\end{figure}
We give the second proof in the case where none of the $p_i$ is $\infty$, leaving the rest to the reader. Let $\kappa_j = (\xi_j, \eta_j)$ be spinors corresponding to the $(\horo_j, W_j)$. Then we have $p_j = \xi_j/\eta_j$ so
\begin{equation}
\label{Eqn:pi-pj}
p_i - p_j = \frac{\xi_i}{\eta_i} - \frac{\xi_j}{\eta_j} = \frac{\xi_i \eta_j - \xi_j \eta_i}{\eta_i \eta_j} = \frac{ \{\kappa_i, \kappa_j \} }{ \eta_i \eta_j} = \frac{\lambda_{ij}}{\eta_i \eta_j}
\end{equation}
and similarly
\begin{equation}
\label{Eqn:p1-pi}
p_1 - p_i = \frac{\lambda_{1i}}{\eta_1 \eta_i}, \quad p_1 - p_j = \frac{\lambda_{1j}}{\eta_1 \eta_j}.
\end{equation}
Recall that $i<j$ and that the $p_i$ are in order. We consider two cases, $p_i < p_j$ and $p_j < p_i$. If $p_i < p_j$ then since $p_1, p_i, p_j$ are in order we have $p_i < p_1 < p_j$. So $p_1 - p_i$ and $p_1 - p_j$ have opposite signs. Hence by equation \refeqn{p1-pi}, $\eta_i$ and $\eta_j$ have opposite signs. Hence in equation \refeqn{pi-pj}, the left hand side is negative and the right hand side denominator is negative. Thus the numerator is positive, $\lambda_{ij} > 0$. If $p_i > p_j$ then the ordering of $p_1, p_i, p_j$ tells us that $p_1 < p_j < p_i$ or $p_j < p_i < p_1$. Either way, $p_1 - p_i$ and $p_1 - p_j$ have the same sign, so by \refeqn{p1-pi}, $\eta_i$ and $\eta_j$ have the same sign. Hence in \refeqn{pi-pj}, the left hand side is positive and the right hand side denominator is positive. Thus $\lambda_{ij} > 0$ again.
\end{proof}
We formalise this ``coherence'' of spin decorations, or positivity of lambda lengths, in the following definitions.
\begin{defn}
\label{Def:spin-coherent}
Let $\horo_1, \ldots, \horo_d$ be horocycles with distinct centres $p_1, \ldots, p_d$. A set of planar spin decorations on the $\horo_i$ is \emph{spin-coherent} if
\[ \left\{ \begin{array}{rcl} \theta_{ij} &=& 0 \mod 4\pi \text{ when } i<j \\ \theta_{ij} &=& 2\pi \mod 4\pi \text{ when } i>j, \end{array} \right. \quad \text{or equivalently,} \quad \left\{ \begin{array}{rcl} \lambda_{ij} &>& 0 \text{ when } i<j \\ \lambda_{ij} &<& 0 \text{ when } i>j. \end{array} \right. \]
\end{defn}
\begin{defn}
A collection of spinors $\kappa_1, \ldots, \kappa_d$ is \emph{totally positive} if they are all real, and satisfy
\begin{equation}
\label{Eqn:total_positivity}
\{ \kappa_i, \kappa_j \} > 0 \quad \text{when} \quad i<j \quad \text{and} \quad \{ \kappa_i, \kappa_j \} < 0 \quad \text{when} \quad i>j.
\end{equation}
\end{defn}
This notion of \emph{total positivity} arises in other contexts in mathematics and physics. See, for example, \cite{ABCGPT16, Lusztig94, Postnikov06, Williams21}.
\begin{lem}
\label{Lem:positive_spin-coherent}
A collection of spinors $\kappa_1, \ldots, \kappa_d$ is totally positive if and only if the corresponding spin-decorated horospheres have planar spin decorations and are spin-coherent.
\end{lem}
\begin{proof}
By \reflem{real_spinor_planar_decoration}, the $\kappa_j$ are real if and only if the $\horo_j$ have planar spin decorations. By \refthm{main_thm}, the $\{\kappa_i, \kappa_j\}$ satisfy the total positivity condition \refeqn{total_positivity} if and only if the $\lambda_{ij}$ satisfy the spin-coherence condition of \refdef{spin-coherent}.
\end{proof}
Note that in the definition of spin coherence (\refdef{spin-coherent}), the centres $p_i$ are distinct, but there is no requirement that they lie in order around $\partial \hyp^2$. However, it turns out that the spin coherence condition \emph{implies} that the $p_i$ are in order around $\partial \hyp^2$. The following converse to \refprop{distinct_vertices_coherent_decorations} is proved in \cite{Mathews_Spinors_horospheres}.
\begin{prop}
\label{Prop:spin-coherent_in_order}
If $\horo_1, \ldots, \horo_d$ have spin-coherent planar spin decorations, then their centres $p_1, \ldots, p_d$ are in order around $\partial \hyp^2$. In other words, $p_1, \ldots, p_d$ form an ideal $d$-gon as in \refdef{d-gons}.
\end{prop}
\subsubsection{Triangulations of polygons}
\label{Sec:triangulations}
Let $d \geq 3$ be an integer and consider an ideal $d$-gon with a triangulation $T$. The triangulation $T$ consists of $d-3$ diagonals, so the polygon and the triangulation together contain $2d-3$ edges. If at each ideal vertex of the polygon we have a horocycle with a planar spin decoration, we then have a lambda length associated to each edge of the polygon and of the triangulation. Labelling the vertices of the $d$-gon from $1$ to $d$, we write $\lambda_{ij}$ for the lambda length from vertex $i$ to vertex $j$. A natural operation on a triangulation is a diagonal flip, which replaces two adjacent triangles with two different triangles. These two adjacent triangles share an edge and together they form an ideal $4$-gon. If the four vertices involved in a diagonal flip have labels $i,j,k,l$ in order, then the flip replaces the lambda length $\lambda_{ik}$ with $\lambda_{jl}$, or vice versa. The variables are related by the Ptolemy equation
\[ \lambda_{ij} \lambda_{kl} + \lambda_{il} \lambda_{jk} = \lambda_{ik} \lambda_{jl}.
\] Considering such variables and equations leads to the notion of a cluster algebra: see e.g. \cite{Fomin_Shapiro_Thurston08, Fomin_Thurston18, Williams14} for details. If we allow the ideal vertices to vary along $\partial \hyp^2$ and the horospheres to also vary, the space of such choices gives a description of what Penner calls the \emph{decorated Teichm\"{u}ller space} $\widetilde{T}$. Then each $\lambda_{ij}$ can be regarded as a function $\widetilde{T} \To \R^{+}$ and indeed the collection of $\lambda_{ij}$ from the edges of the polygon and the diagonals of a triangulation yields a system of coordinates on $\widetilde{T}$ and a homeomorphism $\widetilde{T} \To \R^{2d-3}$. See e.g. \cite{Penner87, Penner12} for further details. \subsubsection{Ford circles} \label{Sec:Ford} Suppose we have a spinor $\kappa = (\xi, \eta)$ whose coordinates $\xi, \eta$ are \emph{integers}. As a real spinor, by \reflem{real_spinor_planar_decoration}, $\kappa$ corresponds to a horocycle with a planar spin decoration. Its centre is at the \emph{rational} number $\xi/\eta$, and its Euclidean diameter is $1/\eta^2$. If we require that $(\xi, \eta) \in \Z^2_\times$ be \emph{relatively prime}, then we obtain a family of horocycles with precisely one horocycle centred at each point of $\Q \cup \{\infty\}$. They are known as \emph{Ford circles} \cite{Ford38}. See \reffig{Ford_farey}. A standard fact about Ford circles is that the circles at $a/b$ and $c/d$, written as fractions in simplest form, are tangent if and only if $|ad-bc| = 1$. We see this immediately from the spinor-horosphere correspondence, since the corresponding real spinors $\kappa_1 = (a,b)$ and $\kappa_2 = (c,d)$ satisfy $\{\kappa_1, \kappa_2\} = ad-bc = \pm 1$ so the corresponding spin-decorated horospheres have lambda length $\lambda = \pm 1$. Thus the horocycles are tangent. 
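The tangency criterion, and the mediant property discussed below, are easy to check numerically. The following Python sketch (purely illustrative; the helper functions are of our own naming) builds each Ford circle from its centre $p/q$ and Euclidean diameter $1/q^2$, tests Euclidean tangency, and compares the result with the spinor criterion $|ad - bc| = 1$.
\begin{verbatim}
from fractions import Fraction
from math import isclose

def ford_circle(fr):
    # Ford circle at p/q in lowest terms: centre (p/q, 1/(2q^2)), radius 1/(2q^2)
    p, q = fr.numerator, fr.denominator
    r = 1 / (2 * q * q)
    return (p / q, r), r

def tangent(f1, f2):
    # Two Ford circles are tangent iff the distance between their centres
    # equals the sum of their radii.
    (x1, y1), r1 = ford_circle(f1)
    (x2, y2), r2 = ford_circle(f2)
    d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
    return isclose(d2, (r1 + r2) ** 2)

def bracket(f1, f2):
    # spinor bracket {(a,b),(c,d)} = ad - bc for the real spinors (a,b), (c,d)
    return f1.numerator * f2.denominator - f2.numerator * f1.denominator

# all reduced fractions in [0,1] with denominator at most 7
fracs = [Fraction(a, b) for b in range(1, 8) for a in range(0, b + 1)
         if Fraction(a, b).denominator == b]
for f1 in fracs:
    for f2 in fracs:
        if f1 != f2:
            assert tangent(f1, f2) == (abs(bracket(f1, f2)) == 1)

# the mediant of two tangent Ford circles is tangent to both
f1, f2 = Fraction(1, 3), Fraction(1, 2)   # |1*2 - 1*3| = 1, so tangent
m = Fraction(f1.numerator + f2.numerator, f1.denominator + f2.denominator)
assert tangent(m, f1) and tangent(m, f2)
\end{verbatim}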
\begin{figure} \begin{center} \begin{tikzpicture}[scale=10] \draw[black, thick] (0,0) -- (1,0); \draw[black] (0,0) arc (-90:0:0.5 cm); \draw[black] (0,-0.02) -- (0,0); \node at (0,-0.05) {$\frac{0}{1}$}; \draw[black] (1,0) arc (-90:-180:0.5 cm); \draw[black] (1,-0.02) -- (1,0); \node at (1,-0.05) {$\frac{1}{1}$}; \draw[black] (0.5,0) arc (-90:270:0.125 cm); \draw[black] (0.5,-0.02) -- (0.5,0); \node at (0.5,-0.05) {$\frac{1}{2}$}; \draw[black] ( {1/3} ,0) arc (-90:270: { 1/18 }); \draw[black] ( {1/3} ,-0.02) -- ( {1/3} ,0); \node at ( {1/3} ,-0.05) {$\frac{1}{3}$}; \draw[black] ( {2/3}, 0) arc (-90:270: {1/18} ); \draw[black] ( {2/3} ,-0.02) -- ( {2/3} ,0); \node at ( {2/3} ,-0.05) {$\frac{2}{3}$}; \draw[black] ( {1/4}, 0) arc (-90:270: {1/32} ); \draw[black] ( {1/4} ,-0.02) -- ( {1/4} ,0); \node at ( {1/4} ,-0.05) {$\frac{1}{4}$}; \draw[black] ( {3/4}, 0) arc (-90:270: {1/32} ); \draw[black] ( {3/4} ,-0.02) -- ( {3/4} ,0); \node at ( {3/4} ,-0.05) {$\frac{3}{4}$}; \draw[black] ( {1/5}, 0) arc (-90:270: {1/50} ); \draw[black] ( {1/5} ,-0.02) -- ( {1/5} ,0); \node at ( {1/5} ,-0.05) {$\frac{1}{5}$}; \draw[black] ( {2/5}, 0) arc (-90:270: {1/50} ); \draw[black] ( {2/5} ,-0.02) -- ( {2/5} ,0); \node at ( {2/5} ,-0.05) {$\frac{2}{5}$}; \draw[black] ( {3/5}, 0) arc (-90:270: {1/50} ); \draw[black] ( {3/5} ,-0.02) -- ( {3/5} ,0); \node at ( {3/5} ,-0.05) {$\frac{3}{5}$}; \draw[black] ( {4/5}, 0) arc (-90:270: {1/50} ); \draw[black] ( {4/5} ,-0.02) -- ( {4/5} ,0); \node at ( {4/5} ,-0.05) {$\frac{4}{5}$}; \draw[black] ( {1/6}, 0) arc (-90:270: {1/72} ); \draw[black] ( {1/6} ,-0.02) -- ( {1/6} ,0); \node at ( {1/6} ,-0.05) {$\frac{1}{6}$}; \draw[black] ( {5/6}, 0) arc (-90:270: {1/72} ); \draw[black] ( {5/6} ,-0.02) -- ( {5/6} ,0); \node at ( {5/6} ,-0.05) {$\frac{5}{6}$}; \end{tikzpicture} \caption{Ford circles and Farey fractions.} \label{Fig:Ford_farey} \end{center} \end{figure} More generally, the hyperbolic distance $\rho$ between Ford circles based at $a/b$ and $c/d$, again given as fractions in simplest form, is given from $e^\rho = |\lambda|^2$ as \[ \rho = 2 \log |\lambda| = 2 \log \left| \{ \kappa_1, \kappa_2 \} \right| = 2 \log \left| ad - bc \right|. \] This was observed by McShane and Sergiescu in \cite{McShane_Eisenstein, McShane_Sergiescu}. If we have two tangent Ford circles, based at fractions in simplest form $a/b$ and $c/d$, then there is a unique Ford circle between them, tangent to both. This Ford circle is based at the \emph{mediant} or \emph{Farey sum} of $a/b$ and $c/d$, namely \[ \frac{a+c}{b+d}, \] which has corresponding spinor $\kappa_1 + \kappa_2$. Its tangency with the circles corresponding to $\kappa_1, \kappa_2$ then follows from its lambda length, or spinor inner product, with $\kappa_1$ and $\kappa_2$ being $\pm 1$: \[ \{\kappa_1 + \kappa_2, \kappa_1 \} = \{ \kappa_2, \kappa_1 \} = \pm 1, \quad \{\kappa_1 + \kappa_2, \kappa_2 \} = \{ \kappa_1, \kappa_2 \} = \pm 1. \] Here we used the antisymmetry of the spinor inner product, and the tangency of horospheres corresponding to $\kappa_1, \kappa_2$. \subsection{Polygons, polyhedra and matrices} \label{Sec:polygons_polyhedra_matrices} \subsubsection{Matrices and ideal hyperbolic polygons} If we have an ideal $d$-gon in $\hyp^2$ (\refdef{d-gons}), with horospheres and planar spin decorations on each ideal vertex (\refdef{planar_spin_decoration}), then we have a real spinor associated to each vertex (\reflem{real_spinor_planar_decoration}). We can arrange these spinors as the columns of a matrix. 
Various properties of this matrix correspond to properties of corresponding ideal polygon. \begin{lem} \label{Lem:ideal_polygons_matrices} The map which takes planar spin-decorated horocycles $(\horo_1, W_1), \ldots, (\horo_d, W_d)$ in $\hyp^2$ to matrices whose columns are the corresponding collection of spin vectors $\kappa_1, \ldots, \kappa_d \in \R^2_\times$, induces the following correspondences: \begin{gather*} \left\{ \begin{array}{c} \text{Generalised ideal $d$-gons in $\hyp^2$} \\ \text{with planar spin decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Nondegenerate generalised ideal $d$-gons in $\hyp^2$} \\ \text{with planar spin decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ real matrices of rank $2$} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Spin-coherent ideal $d$-gons in $\hyp^2$} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{where all $2 \times 2$ submatrices} \\ \text{have positive determinant} \end{array} \right\} \end{gather*} \end{lem} \begin{proof} In a generalised ideal $d$-gon (\refdef{d-gons}(ii)) there is no restriction on where the vertices lie, so any collection of $\kappa_i \in \R^2_\times$ provide a matrix. Generalised ideal $d$-gons with planar spin decorations are thus bijective with $2 \times d$ real matrices, none of whose columns consist entirely of zeroes. Adding the requirement of nondegeneracy (\refdef{d-gons}(iii)) means that two of the $\xi_i/\eta_i$ are distinct, so that there are two linearly independent columns in the matrix, so it has rank 2. Adding the requirement that all vertices are distinct means that all $\xi_i/\eta_i$ are distinct, so all $2 \times 2$ submatrices of $M$ have rank 2. The requirement of spin-coherence corresponds to all $2 \times 2$ matrices have positive determinant (\reflem{positive_spin-coherent}), and in this case the ideal vertices are in order around $\partial \hyp^2$, so we have an ideal $d$-gon (\refprop{spin-coherent_in_order}). \end{proof} \subsubsection{Matrices and ideal hyperbolic polyhedra} We can find similar correspondences between complex matrices and polyhedra in $3$ dimensions. This notion of ideal polyhedron is somewhat unsatisfactory, forgetting all its structure except for listing out its vertices. Nonetheless it is analogous to \refdef{d-gons} and allows us to produce corresponding matrices. However, since $\partial \hyp^3 \cong S^2$, there is no natural way to order vertices. \begin{defn} \ \begin{enumerate} \item An \emph{ideal $d$-hedron} is a collection of distinct points $p_1, \ldots, p_d$ in $\partial \hyp^3$. \item A \emph{generalised ideal $d$-hedron} is a collection of points $p_1, \ldots, p_d$ in $\partial \hyp^3$. \item A generalised ideal $d$-hedron is \emph{degenerate} if all of its vertices coincide. Otherwise it is \emph{non-degenerate}. 
\end{enumerate} \end{defn} \begin{lem} The map which takes spin-decorated horospheres $(\horo_1, W_1), \ldots, (\horo_d, W_d)$ in $\hyp^3$ to matrices whose columns are the corresponding collection of spin vectors $\kappa_1, \ldots, \kappa_d \in \C^2_\times$, induces the following correspondences: \begin{gather*} \left\{ \begin{array}{c} \text{Generalised ideal $d$-hedra in $\hyp^3$} \\ \text{with spin decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ complex matrices} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Nondegenerate generalised ideal $d$-hedra in $\hyp^3$} \\ \text{with spin decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ complex matrices} \\ \text{of rank $2$} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Ideal $d$-hedra in $\hyp^3$} \\ \text{with spin decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{$2 \times d$ complex matrices} \\ \text{where all $2 \times 2$ submatrices} \\ \text{have nonzero determinant} \end{array} \right\} \end{gather*} \end{lem} \begin{proof} A generalised ideal $d$-hedron yields a matrix with all columns in $\C_\times^2$, without further restriction. Just as in two dimensions, adding the requirement of nondegeneracy corresponds to the matrix having rank 2. All vertices being distinct corresponds to all $2 \times 2$ submatrices having nonzero determinant. \end{proof} \subsubsection{Ideal hyperbolic polygons up to isometry} After characterising various spaces of hyperbolic ideal polygons as spaces of matrices in \reflem{ideal_polygons_matrices}, we now consider such objects \emph{up to isometry}. Just as $SL(2,\C)$ is the double and universal cover of $PSL(2,\C)$, $SL(2,\R)$ is the double and universal cover of $PSL(2,\R)$. The latter is the orientation-preserving isometry group of $\hyp^2$. Since the polygons we have considered all have various types of spin decorations, we consider them under the action of spin isometries preserving $\hyp^2$, i.e. of $SL(2,\R)$. \begin{defn} A \emph{spin isometry class} of a set $X$ of generalised ideal $d$-gons with planar spin decorations is an orbit of the $SL(2,\R)$ action on $X$. \end{defn} In this definition, $X$ could for example be any of the sets of $d$-gons considered in \reflem{ideal_polygons_matrices}. The correspondence between spin vectors and spin-decorated horospheres is $SL(2,\C)$-equivariant, and $SL(2,\R)$ is the subgroup of $SL(2,\C)$ which preserves real spin vectors, and also preserves $\hyp^2$. Given a collection of real spin vectors $\kappa_1, \ldots, \kappa_d$ forming a $2 \times d$ matrix $M$, an $A \in SL(2,\C)$ acts simultaneously on all the $\kappa_j$ in the matrix multiplication $AM$. The correspondences of \reflem{ideal_polygons_matrices} then yield correspondences between spin isometry classes of various types of decorated $d$-gons, and the orbit spaces of the $SL(2,\R)$ action on various sets of matrices. For instance, we have the following. 
\begin{gather} \label{Eqn:corresondences_orbit_spaces_first} \left\{ \begin{array}{c} \text{Spin isometry classes of} \\ \text{generalised ideal $d$-gons in $\hyp^2$} \\ \text{with planar spin decorations} \end{array} \right\} \cong SL(2,\R) \backslash \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Spin isometry classes of} \\ \text{nondegenerate generalised ideal $d$-gons in $\hyp^2$} \\ \text{with planar spin decorations} \end{array} \right\} \cong SL(2,\R) \backslash \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{of rank $2$} \\ \text{with no zero columns} \end{array} \right\} \\ \left\{ \begin{array}{c} \text{Spin isometry classes of} \\ \text{spin-coherent ideal $d$-gons in $\hyp^2$} \end{array} \right\} \cong SL(2,\R) \backslash \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{where all $2 \times 2$ submatrices} \\ \text{have positive determinant} \end{array} \right\} \label{Eqn:corresondences_orbit_spaces_last} \end{gather} In the case of spin-coherent ideal $d$-gons, the spin isometry classes reduce to isometry classes in the standard sense. \begin{prop} \label{Prop:spin_isometry_spin-coherent_ideal_d-gons} Let $d \geq 3$. There is a bijection between spin isometry classes of spin-coherent ideal $d$-gons in $\hyp^2$ and isometry classes of ideal $d$-gons in $\hyp^2$, decorated with a horocycle at each vertex. \end{prop} That is, \begin{gather} \label{Eqn:spin_isometry_isometry} \left\{ \begin{array}{c} \text{Spin isometry classes of} \\ \text{$d$-gons in $\hyp^2$} \\ \text{with spin-coherent decorations} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{Isometry classes of} \\ \text{ideal $d$-gons in $\hyp^2$} \\ \text{decorated with horocycles} \end{array} \right\} \end{gather} The latter is Penner's decorated Teichm\"{u}ller space of a disc with $d$ boundary punctures. \begin{proof} As discussed in \refsec{horocycles_decorations}, an ideal $d$-gon in $\hyp^2$, decorated with horocycles, has two possible planar spin decorations at each ideal vertex, for a total of $2^d$ choices. However, as discussed in the proof of \refprop{distinct_vertices_coherent_decorations}), if the decorations are spin-coherent, then once we choose one planar spin decoration at one ideal vertex, the rest are determined. Thus there are precisely $2$ spin-coherent decorations. Under the action of $SL(2,\R)$ the negative identity will identify planar spin decorations in pairs. So there will be $2^{d-1}$ spin isometry classes of planar spin decorations, but only one spin isometry class of spin-coherent decorations. If two spin-coherent ideal $d$-gons are related by a spin isometry, then the underlying isometry of $\hyp^2$ maps the underlying ideal $d$-gons and horocycles to each other. So we have a map from the left to right set of \refeqn{spin_isometry_isometry}. By \refprop{distinct_vertices_coherent_decorations}, we can always find spin-coherent decorations on an ideal $d$-gon, so this map is surjective. As argued above, there is only one spin isometry class of spin-coherent decorations, so the map is injective. \end{proof} Orbit spaces of $SL(2,\R)$ action on sets of matrices are quite similar to \emph{Grassmannians}. 
The real Grassmannian $\Gr_\R (2,d)$ is the space of all real $2$-planes in $\R^d$, and it can be described as \begin{equation} \label{Eqn:Grassmannian_description} \Gr_\R (2,d) \cong GL(2,\R) \backslash \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{of rank $2$} \end{array} \right\}. \end{equation} To see this, we associate to a $2 \times d$ matrix its row space, a subspace of $\R^d$. If $M$ has rank 2 then this row space is a 2-plane in $\R^d$, hence an element of $\Gr_\R (2,d)$. This gives a map from matrices to $\Gr_\R (2,d)$. But multiplying a $2 \times d$ matrix on the left by a matrix in $GL(2,\R)$ preserves its row space, and indeed the $GL(2,\R)$-orbits of matrices correspond to points in $\Gr_\R (2,d)$. The correspondences in \refeqn{corresondences_orbit_spaces_first}--\refeqn{corresondences_orbit_spaces_last} describe various classes of spin isometry classes of ideal $d$-gons in similar terms to the Grassmannian in \refeqn{Grassmannian_description}. Various statements along these lines are made in \cite{Mathews_Spinors_horospheres}, including with complex matrices and polyhedra. It is also worth pointing out that determinants of $2 \times 2$ submatrices of the $2 \times d$ matrix, which in our context are lambda lengths, correspond to \emph{Pl\"{u}cker coordinates} on the Grassmannian. Combining \refeqn{corresondences_orbit_spaces_last} and \refeqn{spin_isometry_isometry} gives an algebraic description of the decorated Teichm\"{u}ller space of a punctured disc \[ SL(2,\R) \backslash \left\{ \begin{array}{c} \text{$2 \times d$ real matrices} \\ \text{where all $2 \times 2$ submatrices} \\ \text{have positive determinant} \end{array} \right\} \cong \left\{ \begin{array}{c} \text{Isometry classes of} \\ \text{ideal $d$-gons in $\hyp^2$} \\ \text{decorated with horocycles} \end{array} \right\}, \] and in fact, as shown in \cite{Mathews_Spinors_horospheres}, this is the \emph{affine cone} on the \emph{positive Grassmannian} $\Gr^+(2,d)$. \newpage \appendix \section{Notation} \label{Sec:Notation} \begin{tabular}{ll} \toprule \multicolumn{2}{l}{\textbf{General}} \\ $D_p f (v)$ & Derivative of function $f$ at point $p$ in direction $v$ \\ $T_p M$ & Tangent space to manifold $M$ at point $p$ \\ $\f,\g,\h,\i,\j$ & Maps from spinors, to Hermitian matrices, to light cone, to horospheres \\ $\h_\partial$ & Simplification of $\h$ mapping to $\partial \hyp$ \\ $\F,\G,\H,\I,\J$ & Maps from spinors, to flags, to decorated horospheres \\ $\widetilde{\mathbf{F}}, \widetilde{\mathbf{G}}, \widetilde{\mathbf{H}}, \widetilde{\mathbf{I}}, \widetilde{\mathbf{J}}$ & Spin lifts of $\mathbf{F},\mathbf{G},\mathbf{H},\mathbf{I},\mathbf{J}$, maps from spinors, to spin flags, to spin-decorated horospheres\\ $\k_\partial, \k, \K, \widetilde{\K}$ & Compositions of $\f,\g,\h_\partial,\i,\j$, and $\f,\g,\h,\i,\j$, and $\F,\G,\H,\I,\J$, and $\widetilde{\mathbf{F}}, \widetilde{\mathbf{G}}, \widetilde{\mathbf{H}}, \widetilde{\mathbf{I}}, \widetilde{\mathbf{J}}$. 
\\ \midrule \multicolumn{2}{l}{\textbf{Algebra, matrices}} \\ $\R, \R^{0+}, \R^+$ & Reals, non-negative reals, positive reals \\ $i$ & Square root of $-1$ \\ $\alpha, \beta$ & General complex numbers \\ $a,b,c,d$ & General real numbers \\ $A, A'$ & Matrices in $SL(2,\C)$ \\ $M,N$ & General matrices \\ $\mathcal{M}_{m\times n}(\mathbb{F})$ & $m \times n$ matrices with entries in field $\mathbb{F}$ \\ $\mathbb{F}_\times$ & Nonzero elements of field / ring / vector space $\mathbb{F}$ \\ $\HH$ & $2 \times 2$ Hermitian matrices \\ $\HH_0$ & $2 \times 2$ Hermitian matrices with determinant $0$ \\ $\HH_0^{0+}$ & $2 \times 2$ Hermitian matrices with determinant $0$ and trace $\geq 0$ \\ $\HH_0^+$ & $2 \times 2$ Hermitian matrices with determinant $0$ and trace $>0$ \\ $S$ & Hermitian matrix \\ $\underline{S}$ & Image of Hermitian matrix in quotient space \\ $\underline{B}$ & Basis of quotient space of Hermitian matrices \\ \midrule \multicolumn{2}{l}{\textbf{Spin vectors}} \\ $\kappa$ & Spin vector \\ $\xi, \eta$ & Coordinates of spin vector (elements of $\C$) \\ $\nu$ & Tangent vector to $\C^2$ \\ $\{ \cdot, \cdot \}$ & Inner product of spin vectors \\ $\ZZ$ & Map $\C^2 \To \C^2$ in definition of $\F$ giving direction of flag \\ $J$ & Complex linear map in definition of $\ZZ$ \\ $\p$ & Projection $\C^2_\times \To S^3$ (\reflem{Stereo_Hopf_p}) \\ \midrule \multicolumn{2}{l}{\textbf{Minkowski geometry}} \\ $L$ & Light cone (\refdef{light_cones}) \\ $L^{0+}$ & Non-negative light cone (\refdef{light_cones}) \\ $L^+$ & Positive light cone (\refdef{light_cones}) \\ $p = (T,X,Y,Z)$ & Point in Minkowski space $\R^{1,3}$ \\ $\langle \cdot, \cdot \rangle$ & Inner product (signature $+---$) on Minkowski space $\R^{1,3}$ \\ $\pi_{XYZ}$ & Projection to $XYZ$ 3-plane \\ $V,W$ & Subspaces of $\R^{1,3}$ \\ $\S^+$ & Future celestial sphere (\refdef{celestial_sphere}) \\ $\Pi$ & Lightlike $3$-plane in $\R^{1,3}$ \\ $e_1, e_2, e_3$ & Orthonormal basis determined by spinor (\reflem{orthonormal_basis_from_spinor}) \\ \bottomrule \end{tabular} \begin{tabular}{ll} \toprule \multicolumn{2}{l}{\textbf{Hyperbolic geometry}} \\ $\hyp^n$ & Hyperbolic $n$-space (model-independent) \\ $\partial \hyp^n$ & Boundary at infinity of hyperbolic $n$-space \\ $\Disc$, $\partial \Disc$ & Conformal ball model of $\hyp^3$, boundary at infinity \\ $\U, \partial \U$ & Upper half space model of $\hyp^3$, boundary at infinity \\ $x,y,z$ & Coordinates in upper half space model \\ $\hyp$ & Hyperboloid model of $\hyp^3$ \\ $\mathpzc{h}$ & Horosphere \\ $d = \rho + i \theta$ & Complex distance between horospheres \\ $\rho$ & Signed distance between horospheres \\ $\theta$ & Angular distance between horosphere decorations \\ $\lambda$, $\lambda_{ij}$ & Lambda-length, between horospheres indexed by $i,j$ \\ $\mathfrak{H}, \mathfrak{H}(\hyp), \mathfrak{H}(\Disc)$ & Set of all horospheres in $\hyp^3, \hyp, \Disc$ (\refdef{set_of_horospheres}) \\ $\Delta$ & Ideal tetrahedron in $\hyp^3$ \\ $\zeta_i$ & Ideal points, ideal vertices \\ $E$ & Edge of tetrahedron \\ $z_e$ & Shape parameter of tetrahedron along edge $e$ (\refdef{shape_parameter}) \\ $P_\alpha$ & Parabolic matrix in $SL(2,\C)$ \refeqn{P} \\ $P$ & parabolic subgroup of $SL(2,\C)$ or $PSL(2,\C)$ \\ \midrule \multicolumn{2}{l}{\textbf{Flags}} \\ $\mathcal{F_P}$; $\mathcal{F_P}(\HH), \mathcal{F_P}(\R^{1,3})$ & Set of pointed null flags; in $\HH$, in $\R^{1,3}$ \\ $(p,V), [[p,v]]$ & Pointed null flag in $\HH$ or $\R^{1,3}$ (\refdef{pointed_null_flag}, \refdef{null_flag_in_Minkowski}) \\ 
$\mathcal{F_P^O}$; $\mathcal{F_P^O}(\HH), \mathcal{F_P^O}(\R^{1,3})$ & Set of pointed oriented null flags; in $\HH$, in $\R^{1,3}$ \\ $(p,V,o), [[p,v]]$ & Pointed oriented null flag (\refdef{pointed_oriented_null_flag}, \refdef{pv_notation_PONF}) \\ $\mathcal{SF_P^O}(\HH), \mathcal{SF_P^O}(\R^{1,3})$ & Spin flags (\refdef{covers_of_flags}) \\ \midrule \multicolumn{2}{l}{\textbf{Frames, decorations}} \\ $\Fr$ & Frame bundle over $\hyp^3$ (\refdef{Fr}) \\ $\Spin$ & Spin frame bundle over $\hyp^3$ (\refdef{Fr}) \\ $f = (f_1, f_2, f_3)$ & Frame \\ $\widetilde{f}$ & Spin frame \\ $L^O$ & Oriented line field on horosphere \\ $\mathfrak{H_D^O}$ & Set of overly decorated horospheres (\refdef{overly_decorated_horosphere}) \\ $L^O_P$ & Oriented parallel line field on horosphere \\ $\mathfrak{H_D}$ & Set of decorated horospheres (\refdef{decorated_horosphere}) \\ $N^{in}, N^{out}$ & Inward, outward normal vector fields to horosphere (\refdef{horosphere_normals}) \\ $\V$ & Unit parallel vector field on horosphere \\ $f^{in}(\V), f^{out}(V)$ & Inward, outward frame fields of vector field $V$ (\refdef{inward_outward_frame_fields}) \\ $W^{in}, W^{out}$ & Inward, outward spin decoration (\refdef{associated_inward_outward_spindec}) \\ $\mathfrak{H_D^S}$ & Set of spin-decorated horospheres (\refdef{spin-decorated_horospheres}) \\ \bottomrule \end{tabular} \small \bibliography{spinref} \bibliographystyle{amsplain} \end{document}
2412.10880v1
http://arxiv.org/abs/2412.10880v1
Huygens and $π$
\documentclass[12pt]{article} \usepackage{amsmath,amssymb,amsthm} \usepackage{url} \title{Huygens and $\pi$ } \author{Mark B. Villarino\\ Escuela de Matem\'atica, Universidad de Costa Rica,\\ 11501 San Jos\'e, Costa Rica} \date{} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{Claim}{Claim}[section] \theoremstyle{definition} \newtheorem{defn}{Definition}[section] \numberwithin{equation}{section} \makeatletter \def\section{\@startsection{section}{1}{\z@}{-3.5ex plus -1ex minus -.2ex}{2.3ex plus .2ex}{\large\bf}} \def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}} \makeatother \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \newcommand{\xx}{\mathbf{x}} \newcommand{\half}{{\mathchoice{\thalf}{\thalf}{\shalf}{\shalf}}} \newcommand{\shalf}{{\scriptstyle\frac{1}{2}}} \newcommand{\thalf}{\tfrac{1}{2}} \newcommand{\word}[1]{\quad\mbox{#1}\quad} \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{\textup{(\theenumi)}} \newcommand{\hideqed}{\renewcommand{\qed}{}} \hyphenation{Le-gen-dre} \begin{document} \maketitle \tableofcontents \section{Introduction} \label{sec:intro} Some two thousand years ago Archimedes of Syracuse (275-212 A.C.), the greatest mathematician of all antiquity (and just possibly ever) authored his monograph entitled ``\emph{The Measurement of a Circle}" \cite{Arch1} in which he proved the following celebrated inequality: \begin{equation} \label{Arch1} \boxed{3\frac{10}{71}<\pi< 3\frac{1}{7}}. \end{equation} (Both bounds are accurate to two decimal places.) We translate Archimedes' own statement of \eqref{Arch1} since it might be surprising to the modern mathematician: \begin{thm} The circumference of any circle is greater than three times the diameter and exceeds it by a quantity less than a seventh part of the diameter but greater than ten seventy-first parts. \end{thm} Observe that there is \emph{no mention of the constant} $\pi$. Indeed, although Euclid proves the existence of $\pi$ in X.1 of the \emph{Elements}, Greek mathematicians never had a special name nor notation for it. We recall that Archimedes proved \eqref{Arch1} in three steps. \emph{First} he ``compressed" a circle between an inscribed and circumscribed regular\emph{ hexagon}. \emph{Second}, he proved (in geometric guise) the identity \begin{equation} \label{trig} \boxed{\cot\left(\frac{\theta}{2}\right)=\cot\theta+\sqrt{\cot^2\theta +1}} \end{equation} \emph{Third}, he used \eqref{trig} to \emph{recursively} compute the perimeters of inscribed and circumscribed regular $12$-gons, $24$-gons, $48$-gons, and finally $96$-gons. For the circumscribed $96$-gon Archimedes obtained \begin{equation} \label{upper} \frac{\text{perimeter of $96$-gon}}{\text{diameter}}<\frac{14688}{4673\frac{1}{2}}=3+\frac{667\frac{1}{2}}{4673\frac{1}{2}}<3+\frac{667\frac{1}{2}}{4672\frac{1}{2}}=3\frac{1}{7}. \end{equation} For the inscribed $96$-gon Archimedes obtained \begin{equation} \label{lower} \frac{\text{perimeter of $96$-gon}}{\text{diameter}}>\frac{6336}{2017\frac{1}{4}}=3+\frac{10}{71}+\frac{37}{572899}>3\frac{10}{71}. 
\end{equation} Subsequent generations of mathematicians sought closer bounds on $\pi$ by increasing the number of sides of the inscribed and circumscribed polygons, but until the advent of symbolic algebra and the calculus, Archimedes' procedure remained the paradigm for \emph{two thousand years!} Some 1900 years after Archimedes, in 1654, the 25-year-old Dutch scientist and mathematician Christian Huygens applied an almost bewildering tour-de-force of elementary geometry to the perimeter of an inscribed regular $60$-gon to obtain the following spectacular improvement of \eqref{Arch1}: \begin{equation} \label{HuygensI} \boxed{3.1415926533 <\pi < 3.1415926538} \end{equation} (Both bounds are accurate to NINE (!) decimal places.) He did so in \S 20 of his brilliant treatise ``\emph{De circuli magnitudine inventa},'' \cite{Huy} which, for brevity, we will simply call \emph{``inventa''}. In the preface to \emph{inventa} Huygens declares that (up to his time) the only theorem in circle quadrature with a proper proof states that the perimeter of a circle is bounded above and below by the perimeters of a circumscribed and inscribed polygon, respectively, and that his treatise will do more. His assessment is rather modest since \emph{inventa} is not only \emph{epoch-making} for circle-quadrature (as shown, for example, by \eqref{HuygensI}), but is also, indisputably, \emph{one of the most beautiful and important elementary geometric works ever written}, and like the \emph{Measurement} of Archimedes, it will retain its value even if the results in it can be obtained much more quickly today using modern analysis. \emph{Inventa} not only contains \S 20 mentioned above, but also \emph{the very first proofs in the history of mathematics} of the famous inequalities (see below) \eqref{Snell3}. \begin{equation} \label{Snell3} \boxed{\frac{3 \sin x}{2+\cos x} \leq x\leq \frac{2}{3}\sin x+\frac{1}{3}\tan x} \end{equation} where $0\leq x\leq \frac{\pi}{2}.$ Huygens proves them (geometrically(!)) as theorems 12 and 13 of his treatise. The lower bound is apparently due to Nikolaus von Cusa \cite{NVC} while the upper bound is due to Snell \cite{snell}, but neither presented a rigorous proof. That was left for Huygens who achieved it brilliantly. Moreover, Huygens shows his wonderful originality by transforming Archimedes' exact determination of the \emph{barycenter} of a parabolic segment into \emph{barycenter inequalities}, and the latter become even \emph{more exact arc-length inequalities}, namely: \begin{equation} \label{XX2} \boxed{x<\sin x +\frac{10(4\sin^2\frac{x}{2}-\sin^2 x)}{12\sin\frac{x}{2}+9\sin x}. } \end{equation} and \begin{equation} \label{XX3} \boxed{x>\sin x +\frac{10(4\sin^2\frac{x}{2}-\sin^2 x)}{12\sin\frac{x}{2}+9\sin x+8\dfrac{(2\sin\frac{x}{2}-\sin x)^2}{12\sin\frac{x}{2}+9\sin x}}. } \end{equation} These two barycenter inequalities are the tools that Huygens uses to prove his results in \S 20, cited above. This investigation is by no means straightforward, but Huygens carries out the proof of the first inequality \eqref{XX2} brilliantly. Huygens never published a proof of the second inequality \eqref{XX3}. Joseph Pinelis published the first proof of it in \emph{math overflow} \cite{JP} and we offer a new one in this paper. We finally point out that numerical analysts describe the three formulas cited above as the first examples in the history of mathematics of \emph{Richardson extrapolation}, in which suitable algebraic combinations of simple functions produce highly accurate approximations.
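Before proceeding, the reader may wish to check these inequalities numerically. The following short Python sketch (our own illustration, not part of Huygens' treatise; all function names are ours) verifies \eqref{Snell3}, \eqref{XX2} and \eqref{XX3} on a grid of points in $(0,\pi/2)$ and evaluates the two barycentric bounds at $x=\pi/60$, the arc corresponding to a regular $60$-gon:
\begin{verbatim}
# Numerical sanity check (illustration only) of the Cusa--Snell bounds
# and of Huygens' two barycentric bounds.
import math

def cusa_lower(x):   return 3*math.sin(x) / (2 + math.cos(x))
def snell_upper(x):  return (2*math.sin(x) + math.tan(x)) / 3

def huygens_upper(x):                 # right-hand side of the first barycentric bound
    s, h = math.sin(x), math.sin(x/2)
    return s + 10*(4*h*h - s*s) / (12*h + 9*s)

def huygens_lower(x):                 # right-hand side of the second barycentric bound
    s, h = math.sin(x), math.sin(x/2)
    den = 12*h + 9*s
    return s + 10*(4*h*h - s*s) / (den + 8*(2*h - s)**2 / den)

for k in range(1, 200):
    x = k * math.pi / 400             # sample points in (0, pi/2)
    assert cusa_lower(x) < x < snell_upper(x)
    assert huygens_lower(x) < x < huygens_upper(x)

x = math.pi / 60                      # the arc of a regular 60-gon
print(60 * huygens_lower(x), "< pi <", 60 * huygens_upper(x))
\end{verbatim}
The narrowness of the printed interval illustrates how sharp the two barycentric bounds already are for a $60$-gon.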
These examples appeared some \emph{three centuries} before the papers of L.~Richardson, who originated the modern method \cite{Rich}. The first proof of any famous theorem, just by being the \emph{first}, automatically acquires an historical and methodological importance which makes it of interest to mathematicians, all the more so if the theorem has been around without proof for some time. As we shall see, the lower bound in \eqref{Snell3} had been around for \emph{two centuries (!)} and the upper bound close to a century. Yet, the literature does not offer the contemporary mathematician access to an \emph{ab initio} detailed, motivated presentation of Huygens' beautiful geometrical proofs and one must consult the \emph{inventa} itself. Unfortunately even the \emph{inventa} presents difficulties to today's mathematician. For, Huygens chose to present \emph{inventa} in the strict Euclidean--Archimedean format, which means: \begin{itemize} \item The proofs are entirely synthetic, without any contextual analysis or motivation. \item There is no algebraic notation. Thus proportions and inequalities are written out as long sentences in words. \item The main steps are not set out separately. So intricate proofs appear as a single paragraph of running printed text over several pages without a break. \end{itemize} These format criticisms also apply to the French \cite{Huy1}, German \cite{Rud}, and (somewhat less so) English \cite{HP} translations of \emph{inventa}, since they are strictly literal. Our paper, which presents Huygens' proofs of \eqref{Snell3} and \S 20, and makes some of the rich content of \emph{inventa} available in modern form, thus fills a need in the literature. \section{The Fundamental Idea in Huygens' Tract.} Huygens' fundamental idea is simple and brilliant. He approximates the \emph{area} of a \emph{circular} segment (and thus, sector) by the area of a suitable \emph{parabolic} segment. Why? Because the great Archimedes exhaustively investigated the metrical properties of a parabolic segment and thus Huygens is able to transform Archimedes' \emph{exact equations} for the area of a parabolic segment into \emph{inequalities} for the area of a circular segment. But the area of a circular segment is a simple function of the radius and the \emph{arc length}. Therefore the area inequalities become the \emph{arc-length inequalities} referred to above. The same idea undergirds Huygens' barycenter inequalities: the exact position of the barycenter of a parabolic segment, as determined by Archimedes, becomes the approximate location of the barycenter of a \emph{circular} segment, and since Huygens determined the exact location of the latter, the relation between the two barycenters becomes an arc-length inequality. \section{The First Heron-Huygens Lemma.} Huygens bases the first part of his tract \emph{Inventa} (up to Proposition 16 inclusive) on two propositions which give upper and lower bounds for the area of a \emph{circular segment} (and therefore, sector), which Archimedes' corresponding proposition on \emph{parabolic} segments clearly inspired. In his tract \emph{The quadrature of the parabola} the great Archimedes famously proved \begin{thm} Every segment bounded by a parabola and a chord is equal to \textbf{four-thirds} of the triangle which has the same base as the segment and the same height. \end{thm} Archimedes' proof exhibits the first explicit sum of an infinite (geometric) series in the history of mathematics (of ratio $\frac{1}{4}$).
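A quick computation illustrates the fundamental idea of the previous section (and anticipates the Heron--Huygens bounds proved below). For a \emph{parabolic} segment the ratio of the segment to its maximal inscribed triangle is exactly $4/3$; for a \emph{circular} segment the following Python sketch (our own illustration; the function name is ours) shows the same ratio always exceeding $4/3$ and approaching it only for small arcs:
\begin{verbatim}
# Illustration only: ratio (circular segment)/(maximal inscribed triangle)
# for the segment cut off by a chord subtending the angle 2*theta at the
# centre of a unit circle.  The Archimedean value 4/3 is a strict lower bound.
import math

def segment_over_triangle(theta):
    segment  = theta - math.sin(theta) * math.cos(theta)  # theta - sin(2 theta)/2
    triangle = math.sin(theta) * (1 - math.cos(theta))    # base 2 sin(theta), height 1 - cos(theta)
    return segment / triangle

for theta in (0.1, 0.5, 1.0, math.pi/2 - 0.01):
    print(f"theta = {theta:5.3f}   ratio = {segment_over_triangle(theta):.6f}")
# every ratio exceeds 4/3 = 1.3333..., and tends to 4/3 as theta -> 0
\end{verbatim}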
It is an interesting historical fact that Heron of Alexandria \emph{anticipated Huygens by some 1600 years in section 32 of his treatise \emph{Metrika}} \cite{Heron}. While for almost two thousand years \emph{Metrika} was thought to be irretrievably lost, the manuscript for Heron's treatise was discovered in 1896, some 250 years \emph{after} Huygens wrote \emph{Inventa}, so that the latter was totally unaware of Heron's work and did his own investigation quite independently. Yet, even the statement of Huygens' \emph{Proposition I} coincides, almost word for word, with Heron's. For this reason we call the two bounds the \emph{Heron-Huygens Lemmas.} \begin{lemma} If in a segment of a circle less than a semicircle a maximum triangle be inscribed, and in the subtended segments triangles be similarly inscribed, the triangle first drawn will be less (in area) than \textbf{four times} the sum of the two which were drawn in the subtended segments. \end{lemma} \begin{proof} Given the segment $ABC$ of a circle, and $BD$ the diameter of the segment; let there be inscribed a maximum triangle $\triangle ABC$, i.e., one which has base and altitude the same as those of the segment. Likewise, in the two subtended segments let there be inscribed maximum triangles $\triangle AEB$ and $\triangle BFC$. To prove that the \emph{area} $(\triangle ABC)$ is less than \textbf{four times} the sum of $(\triangle AEB)$ and $(\triangle BFC)$. Let $EF$ be joined, cutting the diameter of the segment in the point $G$. Then there are \emph{three congruent triangles}, namely $\triangle AEB$, $\triangle BFC$, and the new triangle $\triangle EBF$. We will prove that \begin{equation} \label{eight} (\triangle ABC)<8(\triangle EBF) \end{equation} Since \begin{equation} 8(\triangle EBF)=4(\triangle AEB)+4(\triangle BFC) \end{equation} this will complete the proof of the lemma. \begin{Claim} \label{Claim1} $$ BD<4BG $$ \end{Claim} \begin{proof} The arc $\overset\frown{AB}$ is bisected by the point $E$. Therefore \begin{equation} \begin{split} EA\ (\text{or }EB)&>\frac{1}{2}AB\\ &\Rightarrow \overline{AB}^2<4\overline{EB}^2 \quad\text{or}\quad 4\overline{EA}^2\\ &\Rightarrow \frac{ \overline{AB}^2}{\overline{EB}^2}=\frac{DB}{BG}<4 \end{split} \end{equation} where the final equality is due to similar triangles. This proves Claim 1. \end{proof} \begin{Claim} $$ AC<2EF $$ \end{Claim} \begin{proof}\ \begin{equation} \begin{split} EF&=AB=BC\\ &\Rightarrow 2EF=AB+BC>AC \end{split} \end{equation} This proves Claim 2. \end{proof} This proves \eqref{eight} and therefore the lemma. \end{proof} \begin{thm} (Heron-Huygens I) (Theorem III of Inventa) The area of a circular segment less than a semicircle has a \textbf{greater ratio} to the area of its maximum inscribed triangle than \textbf{four to three}. \end{thm} \begin{proof} In the circular segments with bases the chords $AE, EB, BF, FC$ we again draw the maximum isosceles triangles, and then apply the process again, etc., and so fill out the segment. Applying the lemma, we obtain \begin{eqnarray} (\text{segment}\,ABC) &= & (\triangle ABC)+2 (\triangle AEB) +4(\triangle ...)+\cdots\\ &>&(\triangle ABC)\left(1+\frac{1}{4}+\frac{1}{4^2}+\cdots\right) \\ &=&\frac{4}{3}(\triangle ABC) \end{eqnarray} i.e., \begin{equation} \label{HHI} \boxed{(\text{segment}\, ABC)>\frac{4}{3}(\triangle ABC)} \end{equation} \end{proof} One can see that this proof is almost word-for-word the proof Archimedes gives for the area of a \emph{parabolic segment}.
The only difference is that equality is replaced by \emph{inequality} since the parabolic segment is \emph{smaller} than the corresponding circular segment. Moreover, Heron proves this theorem in almost exactly the same way. \section{Huygens' First Arc Length Inequality} Huygens' first inequality is a \emph{lower} bound for the circumference of a circle. Here we see how Huygens proves inequalities about circular and polygonal \emph{areas}, and then translates them into inequalities about \emph{perimeters}. \begin{thm}(First Arc Length Inequality) (Theorem VII of Inventa) If $C$ denotes the circumference of a circle, then \begin{equation} \label{SNELLI} \boxed{C>C_{2n}+\frac{1}{3}(C_{2n}-C_n)} \end{equation} where $C_n$ denotes the perimeter of the inscribed polygon of $n$ sides. \end{thm} \begin{proof} We now want to obtain the inequality for a sector and its arc. We let $ABC$ be a half-segment of the circle with center $M$ and radius $r$. We set $a:=AB,b:=BC$ and let $D$ be the midpoint of the circular arc $s:=\overset\frown{AB}$. Finally let $\delta:=(\triangle ADB)$. Now we add the area $(\triangle AMB) =\frac{br}{2}$ to that of the circular segment $ADB$ to form the circular sector of area $(AMB):=S:=\frac{sr}{2}$. Thus \begin{eqnarray*} \delta & = & 2(\triangle ADM)-(\triangle AMB)=\frac{(a-b)r}{2} \\ \text{Heron-Huygens I} &\Rightarrow&\frac{4}{3}(a-b)\cdot\frac{r}{2}+\frac{br}{2} < \frac{sr}{2} \\ & \Rightarrow& b+\frac{4}{3}(a-b)=\frac{4a-b}{3}=a+\frac{1}{3}(a-b)<s \end{eqnarray*} where we used the \emph{first Heron-Huygens lemma} for the first implication. But $a$ is a side of $C_{2n}$ and $b$ that of $C_n$ and so our final inequality transforms into the statement of Snell's inequality. \end{proof} We observe that if $p_n$ denotes the perimeter of an inscribed regular $n$-gon, and if we take the radius to be unity, we obtain the following Taylor expansion for \emph{Huygens' first inequality for} $2\pi$. In fact, $$ \boxed{p_{2n}+\frac{1}{3}(p_{2n}-p_n)=2\pi-\frac{\pi^5}{240n^4}+\frac{\pi^7}{8064n^6}-\cdots} $$ which shows not only that the approximation is in \emph{defect} but also that the \emph{error} does not exceed $\dfrac{\pi^5}{240n^4}\equiv\dfrac{1.275\cdots}{n^4}$, which clearly is quite small for large $n$. \section{Nikolaus von Cusa's Lower Bound} \begin{thm} (Nikolaus von Cusa) (Theorem XIII of Inventa) If to the diameter of a circle a radius is added in the same direction, and a line is drawn from the end of the extended line cutting the circle and meeting the tangent to the circle at the opposite extremity of the diameter, this will intersect a part of the tangent \textbf{less} than the adjacent intercepted \textbf{arc}. \end{thm} \begin{proof} Given a circle with diameter $AB$, let $AB$ be produced so that $AC$ is equal to the radius. Let $CL$ be drawn cutting the circumference the second time in $E$ and meeting in $L$ the tangent to the circle at the extremity $B$ of the diameter. To prove \begin{equation} \label{SNELLII} \boxed{BL<\overset\frown{EB}} \end{equation} Let $AE$ and $EB$ be drawn, and set $$ AH=AE. $$ Let $HE$ be produced to meet the tangent in $K$. Finally, let $EG$ be drawn perpendicular to the diameter $AB$ and $ED$ perpendicular to the tangent $BL$.
Then, \begin{eqnarray*} \text{since $\triangle HAE$ is isosceles} & \Rightarrow & \angle H=\angle HEA \\ \text{and since}\,\angle AEB & = & \frac{\pi}{2}\\ \Rightarrow\angle HEA+\angle KEB & = & \frac{\pi}{2}\\ \text{But}\,\angle H+\angle HKB &= & \frac{\pi}{2}\quad\text{(since in $\triangle HKB$, $\angle B =\frac{\pi}{2}$)} \end{eqnarray*} Therefore, subtracting equals from equals, $\angle H$ on one side and $\angle HEA$ on the other, we conclude $$ \angle KEB=\angle HKB\Rightarrow \text{$\triangle KEB$ is isosceles}\Rightarrow (EB=BK). $$ \begin{eqnarray*} \text{Moreover}\,BD & = & EG \\ \Rightarrow DK & = & BE-EG, \quad\text{(and since)}\\ \frac{AG}{AE} & = & \frac{AE}{AB}\\ \Rightarrow AG+AB & > & 2AE\quad\text{(arithmetic mean greater than geometric mean)}\\ \Rightarrow AE\,\text{(or $AH$)} & < & \frac{1}{2}(AG+AB)\\ & = & CA+\frac{1}{2}AG\quad\text{(since $AB=2CA$; now subtract $CA$ from both sides)}\\ &\Rightarrow CH < & \frac{1}{2}AG\\ \text{But}\, CA& > & \frac{1}{2}AG\quad\text{(adding $AG$ to both sides and using $CH<\frac{1}{2}AG$)}\\ \Rightarrow CG & > & 3CH\\ \text{But, since}\,\frac{HG}{GE} & = & \frac{GD}{DK}\\ \text{and}\,\frac{GE}{GC} & = & \frac{LD}{DE}\quad\text{(by multiplying the two proportions)}\\ \Rightarrow\frac{HG}{GC} & = & \frac{LD}{DK}\quad\text{(and converting the ratio and dividing)}\\ \Rightarrow\frac{GC}{CH} & = & \frac{DK}{KL}\\ \Rightarrow DK & > & 3KL\\ \Rightarrow KL & < & \frac{1}{3}DK=\frac{1}{3}(EB-EG)\\ \text{But}\, KB & = & EB\\ \Rightarrow KB+KL\equiv LB & < & \overset\frown{BE} \quad\text{by \eqref{SNELLI}}\quad(\emph{VII of Inventa}) \end{eqnarray*} This last inequality is precisely \eqref{SNELLII}. \end{proof} If the center of the circle is $O$ and we take the radius $OA=1$, and we take $$ x:=\overset\frown{BE} $$ then by similar triangles, $$ \frac{LB}{3}=\frac{EG}{2+OG}\Leftrightarrow \frac{LB}{3}=\frac{\sin x}{2+\cos x}. $$ Now, we just proved $$ x>LB\Leftrightarrow \boxed{x>\frac{3\sin x}{2+\cos x}} $$ and the final inequality is \emph{the modern statement of Cusa's inequality}. The accuracy is given by $$ \frac{3\sin x}{2+\cos x}=x-\frac{x^5}{180}-\frac{x^7}{1512}-\cdots $$ which shows Cusa's approximation $$ \boxed{x\approx\frac{3\sin x}{2+\cos x}} $$ to be in \emph{defect}, and if we put $x:=\dfrac{\pi}{2n}$, we conclude $$ 2\pi=4n\cdot\text{LB}+\frac{\pi^5}{1440n^4}+\cdots $$ with an error $\dfrac{\pi^5}{1440n^4}=\dfrac{0.2125\cdots}{n^4}$ which is quite small for large $n$. \section{Second Heron-Huygens Lemma} We now seek to prove an \emph{upper} bound for the circumference. \begin{lemma} If a triangle is drawn having the same base as a segment of a circle less than a semicircle and having its sides \textbf{tangent} to the segment, and if a line is drawn tangent to the segment at its vertex, this cuts off from the given triangle a triangle greater than \textbf{one half} of the maximum triangle described within the segment. \end{lemma} \begin{proof} Given the circular segment $ABC$ less than a semicircle with its vertex at $B$. Let the lines $AE$ and $CE$, tangents to the segment at the extremities of its base, meet in $E$. For, they \emph{will} meet since the segment is less than a semicircle. Moreover, let $FG$ be drawn tangent to the segment at its vertex $B$; and let $AB$ and $BC$ be joined. To prove \begin{equation} \boxed{(\triangle FEG)>\frac{1}{2}(\triangle ABC).} \end{equation} Clearly $\triangle AEC,\triangle FEG,\triangle AFB$ and $\triangle BGC$ are all isosceles, and $B$ bisects $FG$.
Therefore, \begin{eqnarray*} FE+EG& > & FG\\ \Rightarrow EF& > & FB\quad(\text{or}\,FA)\\ \Rightarrow AE &< & 2FE \end{eqnarray*} \begin{Claim} $$ (\triangle FEG)>\frac{1}{4}(\triangle AEC) $$ \end{Claim} \begin{proof} Draw the straight line $EBX$ cutting $AC$ in $X$. Then \begin{eqnarray*} FE & > & \frac{1}{2}AE \\ \Rightarrow EB & > & \frac{1}{2}EX\quad\text{(since they are in the same proportion)}\\ \Rightarrow FE\cdot EB & > & \frac{1}{4}AE\cdot EX\\ \Rightarrow \frac{1}{2}FE\cdot EB & > & \frac{1}{4}\cdot \frac{1}{2}AE\cdot EX\\ \Rightarrow(\triangle FEG) & > & \frac{1}{4}(\triangle AEC) \end{eqnarray*} This proves the claim. \end{proof} Moreover $$ \frac{FA}{AE}=\frac{\text{altitude of $\triangle ABC$}}{\text{altitude of $\triangle AEC$}} $$ and the two triangles have the same base $AC$. But $$ FA<\frac{1}{2}AE\Rightarrow (\triangle ABC)<\frac{1}{2}(\triangle AEC). $$ But, by our claim $$ (\triangle FEG)>\frac{1}{4}(\triangle AEC)\Rightarrow (\triangle FEG)>\frac{1}{2}(\triangle ABC). $$ \end{proof} \begin{thm} (Heron-Huygens II) (Theorem IV of Inventa) The area of a circular segment less than a semicircle is \textbf{less} than \textbf{two thirds} of the area of a triangle having its base in common with this segment and its sides \textbf{tangent} to it. \end{thm} \begin{proof} Using the same figure we obtain \begin{eqnarray*} (\triangle FEG)& > & \frac{1}{2}(\triangle ABC) \\ (\triangle HEI)& > & \frac{1}{2}(\triangle AMB) \\ \cdots & > & \cdots \end{eqnarray*} Adding all of these inequalities we obtain $$ (\triangle AEC)-(\text{Segment}\,ABC)>\frac{1}{2}(\text{Segment}\,ABC) $$ i.e. \begin{equation} \label{sectorII} \boxed{(\text{Segment}\,ABC)<\frac{2}{3}(\triangle AEC)} \end{equation} \end{proof} \section{The Second Snell Theorem} This first theorem is preparatory to the following one. \begin{thm} (Theorem VIII of Inventa) Given a circle, if at the extremity of a diameter a tangent is drawn, and if from the opposite extremity of the diameter a line is drawn which cuts the circumference and meets the tangent produced, then \textbf{two thirds} of the intercepted tangent plus \textbf{one third} of the line dropped from the point of intersection perpendicular to the diameter are \textbf{greater} than the adjacent \textbf{subtended arc.} \end{thm} \begin{proof} Given a circle with center $A$ and diameter $BC$; and let there be drawn from $C$ a line $CD$ tangent to the circle. And let a line $BD$, drawn from the other extremity of the diameter, meet this and intersect the circumference in $E$; let $EF$ be perpendicular to the diameter $BC$. To prove: \begin{equation} \label{lemma} \boxed{\overset\frown{EC}<\frac{2}{3}CD+\frac{1}{3}EF} \end{equation} Let $AE$ and $EC$ be joined. At the point $E$ draw a tangent to the circle which meets the tangent $CD$ in $G$. Then $$ EG=GC=DG. $$ For, if we draw a circle with center $G$ which passes through the points $C$ and $E$ it will also pass through the point $D$ because $\angle CED$ is a right angle.
By \eqref{sectorII} \textbf{(Heron-Huygens II)}: \begin{eqnarray*} (\text{Sector}\,AEC) & <& (\triangle AEC)+\frac{2}{3}(\triangle EGC) \\ & = & \frac{2}{3}(AEGC)+\frac{1}{3}(\triangle EAC)\quad\text{(since $(AEGC)=(\triangle AEC)+(\triangle EGC)$)} \end{eqnarray*} Now \begin{eqnarray*} (AEGC)& = & (\triangle \text{with base $2CG=CD$ and altitude $CA$}) \\ (\triangle AEC) & = & (\triangle \text{with base $EF$ and the same altitude $CA$})\\ \Rightarrow \frac{2}{3}(AEGC)+\frac{1}{3}(\triangle EAC)& = & (\triangle\text{with base $\frac{2}{3}CD+\frac{1}{3}EF$ and altitude $AC$})\\ &=& \frac{1}{2}\left(\frac{2}{3}CD+\frac{1}{3}EF\right)\cdot AC\\ & > & (\text{Sector}\,AEC)=\frac{1}{2}\overset\frown{EC}\cdot AC\\ \Rightarrow \overset\frown{EC} & < & \frac{2}{3}CD+\frac{1}{3}EF. \end{eqnarray*} \end{proof} \begin{thm} (Second Snell Theorem) (Theorem IX of Inventa) The circumference of a circle is less than \textbf{two thirds} of the perimeter of an equilateral polygon \textbf{inscribed} in it plus \textbf{one third} of a similar \textbf{circumscribed} polygon. \end{thm} \begin{proof} In symbols, we have to prove \begin{equation} \label{Snell2} \boxed{C<\frac{2}{3}C_n+\frac{1}{3}C_n'} \end{equation} where $C_n$ denotes the perimeter of the inscribed polygon, $C_n'$ is the perimeter of the circumscribed polygon of $n$ sides, and $C$ is the circumference of the circle. Given a circle with center $A$, let there be inscribed in it an equilateral polygon, one of whose sides is $CD$. Let there be circumscribed another polygon with sides parallel to the former, one of those sides being $EF$. \emph{We have to prove that the circumference of the circle is less than two-thirds of the perimeter of polygon $CD$ plus one third of the perimeter of polygon $EF$}. Let $BG$, a diameter of the circle, be drawn bisecting the side $CD$ of the inscribed polygon at $H$, and the side $EF$ of the circumscribed polygon at $G$ (and it is obvious that $G$ will be the point of tangency of the side $EF$). Let $$ HL=HG $$ and let $AC$ and $BC$ be drawn and produced, $BC$ to meet the side $EF$ in $K$, $AC$ to fall upon the vertex $E$ of the circumscribed polygon. Then \begin{eqnarray*} HL=HG &\Rightarrow & BL=2AH \\ \Rightarrow \frac{GA}{AH} & = & \frac{GB}{BL}\\ \text{But}\,\frac{HB}{BL} & > & \frac{GB}{BH}\quad\text{(since $GB,HB,LB$ each exceed the next by the same amount)}\\ \Rightarrow \frac{GB}{BL}\,(\text{or $\frac{GA}{AH}$}) & > & \frac{GB^2}{BH^2}\\ \text{Moreover}\,\frac{GA}{AH} & = & \frac{EG}{CH}\\ \text{and}\,\frac{GB}{BH} & = & \frac{KG}{CH}\\ \Rightarrow \frac{EG}{CH} & > & \frac{KG^2}{CH^2}\\ \Rightarrow \frac{EG}{KG} & > & \frac{KG}{CH}\\ \Rightarrow EG+CH & > & 2KG \quad\text{(arithmetic mean greater than geometric mean)}\\ \Rightarrow \frac{1}{3}(EG+CH) & > & \frac{2}{3}KG\\ \Rightarrow \frac{1}{3}(EG+CH)+\frac{1}{3}CH & > & \frac{2}{3}KG+\frac{1}{3}CH\\ \Rightarrow \frac{1}{3}EG+\frac{2}{3}CH & >& \frac{2}{3}KG+\frac{1}{3}CH>\overset\frown{CG} \end{eqnarray*} where the last inequality follows from \eqref{lemma}, the previous theorem \emph{(Inventa VIII)}. And now one extends the inequality for the arc to the entire circle. \end{proof} If $p_n'$ is the perimeter of a circumscribed regular $n$-gon, then for the expansion for a circle of unit radius, we obtain $$ \frac{2}{3}p_n+\frac{1}{3}p_n'=2\pi+\frac{\pi^5}{10n^4}+\frac{\pi^7}{28n^6}+\cdots $$ which shows the approximation to be in \emph{excess} and the \emph{error} is $\dfrac{\pi^5}{10n^4}\equiv\dfrac{30.6\cdots}{n^4}$ which is small for large $n$.
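The two error constants are easy to confirm numerically; the following Python sketch (our own illustration; the helper names are ours) compares the actual defect and excess with $\pi^5/(240n^4)$ and $\pi^5/(10n^4)$ respectively:
\begin{verbatim}
# Illustration only: the inscribed bound p_{2n} + (p_{2n} - p_n)/3 falls short
# of 2*pi by about pi^5/(240 n^4), while (2/3) p_n + (1/3) p_n' exceeds 2*pi
# by about pi^5/(10 n^4).
import math

def p_in(n):  return 2 * n * math.sin(math.pi / n)   # inscribed regular n-gon
def p_out(n): return 2 * n * math.tan(math.pi / n)   # circumscribed regular n-gon

for n in (10, 100, 1000):
    defect = 2*math.pi - (p_in(2*n) + (p_in(2*n) - p_in(n))/3)
    excess = (2*p_in(n) + p_out(n))/3 - 2*math.pi
    print(n, defect * 240 * n**4 / math.pi**5, excess * 10 * n**4 / math.pi**5)
# both printed ratios tend to 1 as n grows
\end{verbatim}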
One also sees that it is a little less accurate than the lower bound although it is of the same order. \section{Snell's Upper Bound} As we pointed out earlier, Huygens proves two famous inequalities. One of them is the Cusa inequality of a previous section. The second one we discuss now. It is fascinating that the figure involved is the \emph{Archimedes trisection figure}. \begin{thm} (Snell's Upper Bound) (Theorem XII of Inventa) If between the diameter produced of a circle and the circumference a line is inserted equal to the radius, and when produced cuts the circle and meets a tangent to the circle at the other extremity of the diameter, this line will intercept a part of the tangent \textbf{greater} than the adjacent intercepted \textbf{arc}. \end{thm} \begin{proof} Let there be described a circle with center $C$ and diameter $AB$. Let this be produced in the direction of $A$, and let there be inserted between it and the circumference the line $ED$ equal to the radius $AC$. Then this, when produced, cuts the circumference in $F$ and meets the tangent in $G$, the tangent being drawn at the extremity $B$ of the diameter. We will prove \begin{equation} \label{SnellOwn} \boxed{BG>\overset\frown{BF}} \end{equation} Let $HL$ be drawn through the center of the circle parallel to $EG$ and meeting the circumference in $H$ and $M$ and the tangent $BG$ in $L$. Let $DH$ be drawn cutting the diameter in $K$. Then \begin{eqnarray*} \triangle EDK & \sim & \triangle CHK\quad\text{(since the angles at $K$ are equal and $\angle E=\angle C$)} \\ \text{But}\,ED & = & HC \quad\text{(and these sides are subtended by equal angles)} \\ \Rightarrow DK & = &KH\\ \Rightarrow CA\, & \text{bisects} & \,DH\,\text{and}\,\overset\frown{DAH}\\ \Rightarrow \overset\frown{DH} &=& \overset\frown{FM}=2\,\overset\frown{AH}\\ \text{But}\,\overset\frown{AH} & = & \overset\frown{MB}\\ \Rightarrow \overset\frown{FB} & = & 3\,\overset\frown{AH} \end{eqnarray*} Moreover \begin{equation*} \left.\begin{aligned} HK=\text{sine of}\,\overset\frown{HA}\\ LB=\text{its tangent}\\ \text{and}\,\eqref{Snell2}\,(\text{Second Snell Theorem}) \end{aligned} \right\} \quad\Rightarrow \frac{2}{3}HK+\frac{1}{3}LB>\overset\frown{AH} \end{equation*} and multiplying by $3$ we obtain: \begin{eqnarray*} 2HK+LB & > & 3\overset\frown{HA} \\ \Leftrightarrow HD +LB & > & 3\overset\frown{HA} \\ \Leftrightarrow GL +LB & > & 3\overset\frown{HA} \\ \Leftrightarrow GL +LB & > & \overset\frown{FB} \\ \Leftrightarrow BG& > & \overset\frown{FB} \end{eqnarray*} This completes the proof. \end{proof} If we put $$ x:=\overset\frown{FB} $$ then \begin{equation*} \left.\begin{aligned} GL=HD=2\sin\left(\frac{x}{3}\right)\\ LB=\tan\left(\frac{x}{3}\right)\\ \end{aligned} \right\} \quad\Rightarrow GB=2\sin\left(\frac{x}{3}\right)+\tan\left(\frac{x}{3}\right) \end{equation*} If we now replace $x$ by $3x$ we obtain Snell's Own Approximation $$ \boxed{x\approx\frac{2\sin x+\tan x}{3}} $$ and the expansion $$ \frac{2\sin x+\tan x}{3}=x+\frac{x^5}{20}+\frac{x^7}{56}+\cdots $$ shows that the approximation is in \emph{excess} and the error one commits in the approximation is about $\dfrac{x^5}{20}$. If we apply the \emph{first} (Cusa) inequality for $n=6$ and this \emph{second} (Snell) inequality for $n=12$ we obtain $$ 3.1411\cdots<\pi<3.1424\cdots $$ which is comparable to what Archimedes obtained using the $96$-gon.
If we apply the \emph{first} (Cusa) inequality for $n=30$ and this \emph{second} (Snell) inequality for $n=60$ we obtain $$ 3.1415917\cdots<\pi<3.141594\cdots $$ which suggests that the decimal expansion of $\pi$ begins with $\pi=3.14159\cdots$. \section{The barycenter} After proving the Cusa-Snell inequalities, Huygens presents his own very elegant approximation which is based on an observation about the \emph{barycenter} of a circular segment. What is novel and original is the way Huygens transforms \emph{the location of a barycenter into an inequality about perimeters.} \begin{thm} (Theorem XIV of Inventa) The barycenter of a circular segment divides the diameter of the segment so that the part near the vertex is \textbf{greater} than the rest, but \textbf{less} than \textbf{three halves} of it. \end{thm} \begin{proof} Given a segment $ABC$ of a circle, (and let it be put less than a semicircle because others do not satisfy the proposition) and let $BD$ be the diameter of the segment, bisected in $E$. \begin{Claim} The barycenter of the segment $AB$ is at a distance \emph{below} the point $E$ from the vertex $B$ on the diameter. \end{Claim} \begin{proof} That the barycenter is on the diameter is evident from considerations of symmetry (and we have shown this elsewhere.) Let a line be drawn through $E$ parallel to the base, meeting the circummference on either side in points $F$ and $G$. Draw the lines $KI$ and $HL$ through these perpendicular to the base $AC$, and let these, together with the line tangent to the segment at its vertex form the rectangle $KL$. Since, by assumption, the segment is less than a semicircle, the rectangle $FL$, which is one half of the given rectangle, \begin{itemize} \item is contained \emph{within} the segment $AFGC$; and \item the regions $AFJ$ and $LGC$ are left over \end{itemize} But $KG$, the other half of the rectangle $KL$ \begin{itemize} \item \emph{includes} segment $FBG$; and \item the regions $FBK$ and $GBH$. \end{itemize} Since these regions lie wholly \emph{above} the line $FG$, their common barycenter will be located above it. Now, the point $E$, on this same line $FG$, is the barycenter of the whole rectangle $KL$. Therefore the barycenter of the remaining region $BFJLGB$ will lie \emph{below} the line $FG$. But the common barycenter of the regions $AFJ$ and $LGC$ is also \emph{below} $FG$. Therefore, the barycenter of the magnitude composed of these regions and the region $BFJLGB$, ie. of the \emph{segment} $ABC$ must be found \emph{below} the line $FG$, and hence \emph{below the point} $E$. This proves the claim. \end{proof} Now let the same diameter $BD$ be cut in $S$ so that $BS$ is \textbf{three halves} of the remainder $SD$. \begin{Claim} The barycenter of the segment $ABC$ is closer to the vertex B than the point $S$. \begin{proof} Let $BDF$ be the diameter of the whole circle, and through $S$ draw a line parallel to the base, meeting the circumference in $F$ and $G$. Draw a \emph{parabola} with vertex $B$, axis $BD$, and latus rectum equal to $SP.$ Let it meet the base of the circular segment in $H$ and $K$. Then, since $$ FS^2=BS\cdot SP $$ (the parabola) will cross the circle at the point $F$ and likewise at the point $G$. Now, the arcs $\overset\frown{BF}$ and $\overset\frown{BG}$ of the parabola will fall \emph{within} the circumference, but the remaining arcs $\overset\frown{FH}$ and $\overset\frown{GK}$ will be \emph{outside}. 
We prove this as follows: draw a line $NL$ between $B$ and $S$, parallel to the base, meeting the circumference in $N$ and the parabola in $M$. But \begin{equation*} \left.\begin{aligned} NL^2=BL\cdot BP\\ ML^2=BL\cdot SP\\ BL\cdot BP>BL\cdot SP \end{aligned} \right\} \quad \Rightarrow NL^2>ML^2\Rightarrow NL>ML \end{equation*} The same holds for any such line drawn between $B$ and $F$ and therefore the arc $\overset\frown{BF}$ of the circumference must lie entirely \emph{outside} of the parabola. The same must hold for $\overset\frown{BG}$. Again, since \begin{equation*} \left.\begin{aligned} DA^2=BD\cdot DP\\ DH^2=BD\cdot SP \end{aligned} \right\} \quad \Rightarrow HD>AD \end{equation*} and the same will hold for any line drawn between $S$ and $D$. Therefore the arcs $\overset\frown{FA}$ and $\overset\frown{GC}$ will fall \emph{within} the parabola. Now we examine the regions $FNBM$, and $BQG$, as well as $HFA$ and $GCK.$ Since the latter two lie entirely below the line $FG$, their common barycenter will also lie below it. But the barycenter of the parabolic segment $HBK$ is on $FG$ at the point $S$.\begin{footnote} {Archimedes, Proposition 8, Book II, \emph{Equilibrium of Planes}. This theorem states that if $AO$ is the diameter of a parabolic segment, and $G$ its barycenter, then $AG$ is equal to three-halves of $GO$, if $A$ is the vertex of the parabola. }\end{footnote}. Therefore, the barycenter of the remaining region $AFMBQGC$ will be \emph{above} the line $FG$. But the barycenters of the regions $FMBN$ and $BGQ$ also are above the line since they themselves are. Therefore the region composed of these two and $AFMBQGC$, i.e. of the segment $ABC$ of the circle will have its barycenter \emph{above} $FG$. And since it is on the diameter $BD$, it will be at a \emph{smaller} distance from the vertex $B$ than the point $S$. This proves the claim. \end{proof} \end{Claim} This proves the theorem \end{proof} Now we transform this result on the barycenter of a circular segment into an inequality on its \emph{area}. \begin{thm} (Theorem XV of the Inventa) The area of a circular segment less than a semicircle has a \textbf{greater} ratio to its maximum inscribed triangle than $\mathbf{\dfrac{4}{3}}$, but \emph{less} than the ratio which $\mathbf{\dfrac{10}{3}}$ of the diameter of the remaining segment has to the diameter of the circle plus \textbf{three} times the line which reaches from the center of the circle to the base of the segment. \end{thm} \begin{proof} Given a circular segment less than a semicircle in which is inscribed the maximum triangle $\triangle ABC$. Let the diameter of the segment be $BD$ and the diameter of the circle from which the circle was cut off $BF$ and its center $E$. \begin{Claim} \begin{equation} \label{ } \boxed{\frac{(\text{segment}\,ABC)}{(\triangle ABC)} >\frac{4}{3}} \end{equation} \end{Claim} \begin{proof} Let $G$ be the barycenter of the $\text{\emph{segment}}\, ABC$, and let $DF$ be cut in $H$ such that $$ HD=2HF. 
$$ Then, since \begin{equation*} \left.\begin{aligned} FB=2EB\\ DB<2BG \end{aligned} \right\} \quad \Rightarrow \frac{FB}{BD}>\frac{EB}{BG}\Rightarrow\frac{BF}{FD}<\frac{BE}{EG}\Rightarrow\frac{FD}{EG}>\frac{BF}{BE}=\frac{2}{1}\Rightarrow FD>2EG \end{equation*} Moreover, \begin{eqnarray*} HD & = & \frac{2}{3}FD \\ \Rightarrow HD& > &\frac{4}{3}EG \\ \text{Moreover}\,\frac{HD}{EG} & = &\frac{(\text{segment}\,ABC)}{(\triangle ABC)}\quad(\text{by \eqref{SNELLI}, VII of Inventa})\\ \text{i.e.}\, \frac{(\text{segment}\,ABC)}{(\triangle ABC)} & > & \frac{4}{3} \end{eqnarray*} which proves the claim. \end{proof} \begin{Claim} \begin{equation} \label{XV2} \boxed{\frac{(\text{segment}\,ABC)}{(\triangle ABC)}< 3\frac{1}{3}\cdot\frac{DF}{BF+3ED}} \end{equation} \end{Claim} \begin{proof} Choose $R$ on the diameter $BD$ of the segment so that $$ BR=\frac{3}{2}RD. $$ Then, by the previous theorem \emph{(XIV of Inventa)}, $R$ falls between $G$ and $D$ since $G$ is the barycenter of segment $ABC$. This means $$ EG>ER. $$ Then, since \begin{equation*} \left.\begin{aligned} \frac{HD}{EG} = \frac{(\text{segment}\,ABC)}{(\triangle ABC)}\\ EG>ER\Rightarrow \frac{HD}{EG}<\frac{HD}{ER} \end{aligned} \right\} \Rightarrow\frac{(\text{segment}\,ABC)}{(\triangle ABC)}<\frac{HD}{ER}=\frac{5HD}{5ER} \end{equation*} and since $$ HD = \frac{2}{3}FD\Rightarrow 5HD=\frac{10}{3}DF=3\frac{1}{3}\cdot DF, $$ and $$ ER=ED+\frac{2}{5}DB\Rightarrow 5ER=2BD+5ED=2EB+3ED=BF+3ED $$ and dividing we obtain $$ \frac{(\text{segment}\,ABC)}{(\triangle ABC)}<3\frac{1}{3}\cdot\frac{DF}{BF+3ED} $$ which proves the claim. \end{proof} This completes the proof. \end{proof} Now we are ready to apply the previous inequality on \emph{areas} to obtain an inequality on a \emph{circular arc}. \begin{thm} (Theorem XVI of Inventa) Any \textbf{arc} less than a semicircle is \textbf{greater} than its chord plus one-third of the difference between it and its sine. But it is \textbf{less} than the chord plus the line which has the same ratio to the above-mentioned one third that four times the chord plus the sine has to twice the chord plus three times the sine. \end{thm} \begin{proof} Given a circle with center $D$ and diameter $FB$. Also given an arc $\overset\frown{BA}$, less than a semicircle, and its chord $BA$ and its sine $AM$, which of course is perpendicular to the diameter $FB$. And let the lines $$ GH=AM, GI =AB\Rightarrow GI-GH=HI $$ and let $$ IK=\frac{1}{3}HI, \quad GK=GI+IK. $$ We have already shown in \eqref{SNELLI} (\emph{VII of Inventa}) that: \begin{equation} \label{XVI-1} \boxed{\overset\frown{AB}>GK=AB+\frac{1}{3}(AB-AM).} \end{equation} Now we define $IO$ by $$ \frac{IO}{IK}:=\frac{4AB+AM}{2AB+3AM} $$ and form the sum $$ GO:=GI+IO. $$ We must now prove \begin{Claim} \begin{equation} \label{XVI-2} \boxed{\overset\frown{AB}<GO=AB+\frac{1}{3}(AB-AM)\cdot\frac{4AB+AM}{2AB+3AM}} \end{equation} \end{Claim} \begin{proof} Construct the triangles with common vertex $L$, common altitude the radius $DE$ of the circle, and bases $GH$, $HI$, and $IO$. Join $DA$ and let the diameter $CE$ of the circle bisect $AB$ in $N$ and the arc $\overset\frown{AB}$ in $E$. Join $AE$ and $EB$. Then, since $$ \frac{IO}{IK}=\frac{4GI+GH}{2GI+3GH}\Rightarrow\frac{IO}{3IK}=\frac{IO}{IH}=\frac{4GI+GH}{6GI+9GH} $$ and therefore (by composition) $$ \frac{OH}{IH}=\frac{10GI+10GH}{6GI+9GH}=\frac{10}{3}\,\frac{GI+GH}{2GI+3GH}, $$ where the last equality comes from dividing numerator and denominator by $3$. Moreover $$ \triangle BAM\sim\triangle BDN\Rightarrow\frac{GI}{GH}\equiv\frac{BA}{AM}=\frac{BD}{DN}.
$$ Therefore, $$ \frac{OH}{HI}=\frac{10}{3}\frac{BD+BN}{2BD+3BN}=\frac{10}{3}\frac{NC}{\text{diameter}\,EC+3DN}. $$ But, by \eqref{XV2} \emph{(XV Inventa)} $$ \frac{\text{segment}\,AEB)}{(\triangle AEB)}<\frac{OH}{HI}\equiv\frac{(\triangle OHL)}{(\triangle IHL)}. $$ \begin{Claim} We now assert that: $$ (\triangle IHL)=(\triangle AEB) $$ \end{Claim} For, since the base of $\triangle GHL$ equals the altitude of $\triangle DAB$ and vice versa. we conclude $$ (\triangle GHL)=(\triangle DAB). $$ Moreover, $$ GI=AB\Rightarrow (\triangle GIL)=(\triangle DAE)+(\triangle DBE)=(DAEB). $$ This proves our claim. Therefore, $$ \frac{\text{segment}\,AEB)}{(\triangle AEB)}<\frac{(\triangle OHL)}{(\triangle AEB)}\Rightarrow(\triangle OHL)>(\text{Seg}\,AEB) $$ and $$ \Rightarrow(\triangle OGL)<(\text{sector}\,DAEB). $$ But the altitude of $\triangle OGL$ equals the radius $DB$. Therefore the \emph{base $GO$} will be greater than the \emph{arc} $\overset\frown{AB}$. This proves the Claim. \end{proof} This proves the theorem. \end{proof} \begin{cor} \begin{equation} \label{XVI-3} \boxed{C_{2n}+\frac{C_{2n}-C_n}{3}<C<C_{2n}+\frac{C_{2n}-C_n}{3}\cdot\frac{4C_{2n}+C_n}{2C_{2n}+3C_n}} \end{equation} where $C_n$ is the perimeter of an inscribed regular polygon of $n$ sides and $C$ is the circumference of the circle. \end{cor} \section{Huygens' Barycentric Equation} Three years before he published \emph{Inventa} Huygens used Archimedes' proofs in the latter's theory of the barycenter of a parabolic segment to prove an equation relating the barycenter of a circular segment to the areas of that segment and a certain associated triangle. This equation became the fundamental lemma for Huygens subsequent barycentric theories. We follow J.E. Hofmann's treatment \cite{Hof}. Suppose that the equation of the circle of diameter $r$ is \begin{equation} \label{circle} y^2=rx-x^2. \end{equation} We cut off the circular segment $\Sigma$ (we use the notation $\Sigma$ both for the segment and for its area) with the line $x=a<\dfrac{r}{2}$, and thus it has vertices $$ C=(a,b)\quad\text{and}\quad\overline{C}=(a,-b) \quad\text{where}\quad b^2=a(r-a). $$ Therefore, by elementary calculus, \begin{equation} \label{ } \Sigma=2\int_{0}^{a}y(x)~dx. \end{equation} We let $(\xi,0)$ (where $\xi<a$) be the barycenter of $\Sigma$ which, by symmetry, clearly lies on the $x$-axis. We define it analytically by the well-known condition\begin{equation} \label{ } 2\xi\int_{0}^{a}y(x)~dx=2\int_{0}^{a} xy(x)~dx=\xi\cdot\Sigma. \end{equation} Taking differentials of \eqref{circle} we obtain $$ 2y~dy=r~dx-2x~dx=\left(\frac{r}{2}-x\right)2~dx. $$ Multiplying by $y$ and integrating we obtain the fundamental equation \begin{equation} \label{fund} \boxed{2\int_{0}^{b}y^2~dy=\int_{0}^{a}\left(\frac{r}{2}-x\right)2y~dx.} \end{equation} Now, we interpret $$ 2\int_{0}^{b}y^2~dy=\int_{0}^{b}x\cdot2y~dx $$ as the \emph{rotational moment }of the isoceles right triangle $\triangle OA\overline{A}$ where $$ O:=(0,0), A:=(b,b), \overline{A}:=(b,-b). $$ around the $y$-axis. We obtain \begin{equation} \label{right1} 2\int_{0}^{b}y^2~dy=\frac{2}{3}b^3=\frac{2}{3}bb^2=\frac{2}{3}b\cdot a(r-a)=\frac{2}{3}(r-a)\delta \end{equation} where $$ \delta:=ab=(\triangle OA\overline{A}). 
$$ The right hand side of the fundamental equation \eqref{fund} gives the \emph{rotational moment} \begin{equation} \label{right2} \int_{0}^{a}\left(\frac{r}{2}-x\right)2y~dx=\left(\frac{r}{2}-\xi\right)\cdot\Sigma \end{equation} where the rotated surface in question is $\Sigma$ and the axis of rotation is the diameter $x=\dfrac{r}{2}$ of the circle \eqref{circle}. By \eqref{fund} we equate the right hand sides of \eqref{right1} and \eqref{right2} we obtain the Huygens barycentric equation. We state it as a theorem. \begin{thm} (Huygens' barycentric equation) \begin{equation} \label{barycenter} \boxed{\frac{\Sigma}{\delta}=\frac{2}{3}\cdot\dfrac{r-a}{\dfrac{r}{2}-\xi}} \end{equation} \end{thm} This is a very powerful result. For example it leads quite rapidly to a \emph{new proof} of \eqref{HHI} (\emph{Theorem III of Inventa}). First we give a new formulation of his barycentric equation. Let $ABC$ be a \emph{half} circular segment, G the midpoint of $AC:=c$ and $GH$ the corresponding half-chord. We expand to the rectangle $ACEF$. Its barycenter lies on $GH$. Now we consider the half-segment from which we \emph{remove} the region $AFH$ on the left, and to which we \emph{add} the region $HBE$ on the right. Thus, the barycenter of the half-segment, and of course the whole segment, necessarily lies to the \emph{right} of $GH$. Therefore, if we put $\xi:=AS$, where $S$ is the barycenter of the entire segment, then necessarily \begin{equation} \label{bary} \frac{c}{2}<\xi. \end{equation} If we substitute \eqref{bary} in the Huygens barycentric equation and use the notations of \eqref{HHI} we obtain \begin{thm} \begin{equation} \label{bary2} \boxed{\frac{\Sigma}{\delta}<\frac{2}{3}\cdot\dfrac{2r-c}{r-\xi}} \end{equation} \end{thm} Now, $$ \frac{c}{2}<\xi\Rightarrow\frac{\Sigma}{\delta}<\frac{2}{3}\cdot\dfrac{2r-2\xi}{r-\xi}=\frac{4}{3}, $$ which is \eqref{HHI}. \subsection{ Hofmann's proof of XVI of Inventa} \begin{proof} We return to our consideration of the half-segment $ABC$. In it, let $P$ divide the diameter $AC=:c$ in the ratio $AP:PC::3:2$. Suppose the perpendicular to $AC$ through $P$ cuts the circular arc $\overset\frown{AB}$ in $Q$. We construct the \emph{parabola} with axis of symmetry $AC$ and which passes through $A$ and $Q$. Finally, suppose the parabola cuts the prolongation of $BC$ in $E$. Then, we see that the \emph{parabolic} arc $\overset\frown{AQ}$ lies \emph{inside} of the half-segment, while the \emph{parabolic} arc $\overset\frown{QE}$ lies \emph{outside} of the half-segment. By, Theorem 8, of Book II of Archimedes' work \emph{On the equilibrium of planes}, the line segment $PQ$ contains the barycenter of the \emph{parabolic} half-segment $AQEC$. The region $APQ$ to the \emph{left} of this line arises by \emph{adding} the region $AQ$ bounded by the circular and parabolic arcs to the half-segment of the circle, while the region $PCBQ$ lying to the \emph{right} of that line arises by \emph{removing} the region $BEQ$ from the half-segment. Therefore, the barycenter of the half-segment (and thus the whole segment) lies to the \emph{left} of the segment $SP$. Since, however, $$ AP=\frac{3c}{5} \Rightarrow \xi<\frac{3c}{5}\Rightarrow \boxed{\frac{\Sigma}{\delta}<\frac{4}{3}\frac{2r-c}{2r-\dfrac{6c}{5}}}. $$ Recalling the notation of \eqref{HHI} we put $$ c:=DN\Rightarrow (\triangle ABD)=\frac{ac}{2} $$ on the one hand, while on the other hand \begin{equation} \label{hbe1} \triangle ACB\sim\triangle ANM\Rightarrow \frac{r}{r-c}=\frac{a}{b}. 
\end{equation} Therefore, Huygens' barycentric equation \eqref{hbe1} gives us \begin{eqnarray*} \Sigma& <& \frac{4\delta}{3}\cdot \frac{2r-c}{2r-\dfrac{6c}{5}}\\ & = & \frac{4}{3}\cdot\frac{ac}{2}\cdot\frac{a+b}{\dfrac{4a+6b}{5}}\\ & = & \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}\cdot\frac{r}{2}\\ \Rightarrow (\triangle ABM)+\Sigma &<& \frac{br}{2}+ \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}\cdot\frac{r}{2}\\ \Rightarrow(\text{Sector}\,ADBM)& < &\frac{br}{2}+ \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}\cdot\frac{r}{2}\\ \Rightarrow\frac{sr}{2}& < &\frac{br}{2}+ \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}\cdot\frac{r}{2}\\ \Rightarrow s& < &b+ \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}\\ \Leftrightarrow s & < & a+\frac{a-b}{3}\cdot \frac{4a+b}{2a+3b} \end{eqnarray*} and this last inequality is Huygens' \eqref{XVI-2}. \end{proof} If we apply the corollary \eqref{XVI-3} of this inequality to a circle of diameter unity we obtain \begin{equation} \label{ } \left\{C_{2n}+\frac{C_{2n}-C_n}{3}\cdot\frac{4C_{2n}+C_n}{2C_{2n}+3C_n}\right\}-\pi<\frac{\pi^7}{22400}\cdot\frac{1}{n^6}+O\left(\frac{1}{n^8}\right) \end{equation} which shows the high accuracy of Huygens' approximation. \section{Huygens' unproved barycentric theorem of \S{XX} of Inventa} Huygens wanted a \emph{lower} bound with the same accuracy $O\left(\frac{1}{n^6}\right)$ as his upper bound \eqref{XVI-2}. All he says is \begin{quote} ``...we will obtain another lower limit more exact than the first one, using the following precept, which results from a more precise examination of the center of gravity." \end{quote} To achieve it he announced, \emph{without proof (!)}, the following approximation: \begin{thm} Using the earlier notation, the following lower bound is valid: \begin{equation} \label{XX1} \boxed{s>b+\frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b+\dfrac{\dfrac{8}{3}(a-b)^2}{6a+9b}}} \end{equation} \end{thm} If we combine the upper and (unproved) lower bounds we obtain \begin{thm} \begin{equation} \boxed{b+\frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b+\dfrac{\dfrac{8}{3}(a-b)^2}{6a+9b}}<s<b+ \frac{10}{3}\cdot\frac{a^2-b^2}{2a+3b}} \end{equation} \end{thm} This last inequality shows the structural similarity between the two bounds. \begin{cor} \begin{equation} \label{XX} \boxed{C_n+\frac{10}{3}\cdot\frac{C_{2n}^2-C_n^2}{2C_{2n}+3C_n+\dfrac{\dfrac{8}{3}(C_{2n}-C_n)^2}{6C_{2n}+9C_n}}<C<C_n+ \frac{10}{3}\cdot\frac{C_{2n}^2-C_n^2}{2C_{2n}+3C_n}} \end{equation} \end{cor} If $n=30$, then Huygens used the following approximations in \eqref{XX}: \begin{eqnarray*} 3.13585389802979 & <C_{30}< & 3.13585389802980\\ 3.14015737457639 &< C_{60}< &3.14015737457640 \end{eqnarray*} which gives $$ 3.14159265339060<\pi<3.14159265377520 $$ which rounds to the now famous result \begin{equation} \label{Huygens} \boxed{3.1415926533 <\pi < 3.1415926538} \end{equation} Apparently, Huygens never wrote up a geometric proof of his new lower bound \eqref{XX1}. Indeed, to this day, \emph{such a geometric proof remains unknown}. So, we will present a proof based on the location of the barycenter of a circular segment below. The well-known formula for the distance from the center of a circle defining a circular segment to the segment's barycenter is given by $$ \text{distance of barycenter from the center}=\frac{4}{3}\cdot r \frac{\sin^3\frac{\theta}{2}}{\theta-\sin\theta} $$ where $r$ is the radius of the circle and $\theta$ is the central angle of the segment.
Thus, if the radius of the circle is unity, the distance of the barycenter from the \emph{vertex} of the segment is $$ =1-\frac{4}{3}\cdot \frac{\sin^3\frac{\theta}{2}}{\theta-\sin\theta} $$ and the barycenter is located at a fraction of the \emph{ height}, $1-\cos\frac{\theta}{2}$, of the segment, where that fraction is given by $$ \frac{\text{distance from the vertex}}{\text{height}}=\frac{1-\dfrac{4}{3}\cdot \dfrac{\sin^3\frac{\theta}{2}}{\theta-\sin\theta}}{1-\cos\frac{\theta}{2}}=\frac{3}{5}-\frac{3}{1400}\theta^2-O(\theta^4). $$ where the right hand side is the asymptotic expansion of the fraction (And, perhaps, Huygens' followed a similar path, but geometrically.) \\ \hrule \begin{proof} (of Huygens' final inequality ) We thank \textbf{Iosif Pinelis} from MathOverflow for his help developing this proof. In 1914, F. Schuh \cite{Schuh} replaced a circular segment by a cleverly chosen \emph{parabolic} segment and used Archimedes' determination of the barycenter of the latter to prove the following inequality: \begin{equation} \label{Schuh1} \xi>\frac{3}{5}c-\frac{3c^2}{25\left(3-\frac{3}{5}c\right)} \end{equation} where $c$ is the diameter (or \emph{height}) of the segment, $r$ is the radius of the circle, and $\xi$ is the distance along the diameter from the vertex of the segment to the barycenter of the segment. Then Schuh used Huygens' barycentric equation \eqref{barycenter} to transform \eqref{Schuh1} into the version of Huygens' final inequality with the constant $27$ instead of the constant $8$ of Huygens' original inequality. It occured to me to wonder if one could alter \eqref{Schuh1} so that Schuh's transformations would produce Huygens' original inequality. It turns out that if \emph{we replace the constant $3$ in the numerator of \eqref{Schuh1} by the constant $\dfrac{8}{9}$,} then we obtain Huygens' original inequality. Therefore we want to prove the inequality \begin{equation} \label{Schuh2} \boxed{\xi>\frac{3}{5}c-\frac{\frac{8}{9}c^2}{25\left(3-\frac{3}{5}c\right)}} \end{equation} Now we use the inequality which \textbf{Iosif Pinelis} from MathOverflow proved \cite{JP1}: \begin{equation} \label{Schuh3} \xi>\frac{3}{5}c-\frac{c}{1400}\theta^2 \end{equation} where $0<\theta<\frac{\pi}{2}$, and $\theta$ is the central angle of the segment. In our case, if we take $r=1$, then $$ \sin\frac{\theta}{2}=\sqrt{c(1-c)} $$ and so we want to prove $$ \frac{3}{5}c-\frac{4c}{1400}\left(\arcsin\sqrt{c(1-c})\right)^2>\frac{3}{5}c-\frac{\frac{8}{9}c^2}{25\left(3-\frac{3}{5}c\right)} $$ where $0<c<1$. This becomes $$ \frac{\frac{8}{9}c}{25\left(3-\frac{3}{5}c\right)}>\frac{4}{1400}\left(\arcsin\sqrt{c(1-c})\right)^2. $$ But the difference $$ F(c):=\frac{\frac{8}{9}c}{25\left(3-\frac{3}{5}c\right)}-\frac{4}{1400}\left(\arcsin\sqrt{c(1-c})\right)^2. $$ is zero for $c=0$ and has a positive derivative for $0<c<1$. This completes the proof of \eqref{Schuh2} and therefore of Huygens' final inequality. \end{proof} Our proof uses the inequality \eqref{Schuh3} whose proof requires a clever application of differential calculus, something unavailable to Huygens. Still ours, although we also use calculus, is based on \emph{the location of the barycenter}, something which apparently inspired Huygens' own investigation, and thus is in the spirit of his still unknown proof. \section{An historical conjecture} Both the Cusa inequality and Snell's own inequality may be termed ``\emph{convergence-improving}" inequalities since they produce at least twice the accuracy as the original procedure of Archimedes. 
We make the following historic conjecture: \emph{Archimedes was fully cognizant of the Snell-Cusa convergence-improving inequalities and probably used them to obtain closer bounds on $\pi$.} We base our suggestion on the following considerations. \begin{enumerate} \item It is well-known that the extant text of \emph{Measurement} is a damaged and corrupt extract from a far more comprehensive study by Archimedes of the metric properties of circular figures. But, unfortunately, his full tract was soon lost due to the willy-nilly of history. \item Some three centuries later, Heron \cite{Heron} quotes closer bounds on $\pi$ computed by Archimedes, but they are corrupt (the quoted lower bound is actually a quite good upper bound). Moreover he gives no indication as to how they were calculated. \item Heron also cites a proposition on circular segments from \emph{Measurement} which is \emph{not} in the extant text of today; indeed Eutokios (6th century A.C.) had a version of \emph{Measurement} which virtually concides with today's variant . So the original full treatise had already been lost by then. \item Now we submit the major piece of evidence for our conjecture. The treatise \emph{The Book of Lemmas} \cite{Arch2}, transmitted by Arab mathematicians, contains, as Proposition 8, Archimedes' famous \emph{trisection of an angle}. The trisection construction creates a figure \emph{which coincides exactly with the Snell-Cusa figures in Huygen's treatise.} We submit that this is not just a curious coincidence. Rather it seems to us that Archimedes worked assiduously on finding approximations to $\pi$ and as an \emph{incidental by-product} encountered the trisection construction. But the fact that he was familiar with the trisection figure and was deeply investigating such approximations suggests to us that he had to be \emph{completely aware of the applications of that figure to convergence-improving approximations}. To suggest otherwise is to say that the most brilliant mathematician of antiquity, working in the area of geometric approximations to $\pi$ was unaware of that figure's applications to the subject of his researches. We offer that this last alternative is utterly inconceivable. \item Assuming his familiarity with the Snell-Cusa improvements, it seems to us quite likely that he used them to compute the closer bounds cited by Heron. Of course, our current knowledge does not allow rigorous conclusions but the argument outlined above seems quite plausible. \end{enumerate} \begin{thebibliography}{15} \bibitem{Arch1} Archimedes, \emph{The Measurement of a Circle}, in Heath, T.L., \emph{The Works of Archimedes with the Method of Archimedes}, Dover Publications Inc. NY, 1953. \bibitem{Huy} Huygens, Christian, \emph{De Circuli Magnitudine Inventa}, Apud Johannem and Danielem Elzevier, 1654. \bibitem{NVC} Niklaus von Cusa, \emph{De mathematicae perfectione}, Opera (Paris 1514, Basel 1565). \bibitem{snell} Willebord Snellius, \emph{Ciclometricus}, Ludg. Bat. 1621. \bibitem{JP} Joseph Pinelis, \emph{Huygens' final unproved inequality}, mathoverflow, july 23, 2024. \bibitem{JP1} Joseph Pinelis, \emph{On a trigonometric inequality by Huygens}, mathoverflow, july 22, 2024. \bibitem{Rich} Richardson, L.F., \emph{The deferred approach to the limit, Part I--single Lattice},Philos. Trans. R. Soc. Lond. Ser A $\mathbf{ 226}$ 299-349 (1927) \bibitem{Rud} Rudio, F: \emph{Archimedes, Huygens, Lambert, Legendre. 
Vier Abhandlungen uber die Kreismessung} Teubner (1892). \bibitem{Huy1} Huygens, Christian \emph{Oeuvres Completes de Christiaan Huyghens, Vol 12} published by La Societe Hollandaise des Sciences, 1910. \bibitem{HP} Pearson, H \emph{The area of a circle by the method of Archimedes, including a translation of ``De circuli magnitudine inventa" by Christiaan Huyghens}, Master's Thesis, Boston University 1922. \bibitem{Heron} Heron of Alexandria \emph{Metrika}, Teubner, Leipzig, 1903. \bibitem{Hof} Hofmann, J.E. \emph{Uber die Kreismessung von Chr. Huygens, ihre Vorgeschichte, ihren Inhalt, ihre Bedeutung und ihr Nachwirken} Archive for the History of the Exact Sciences, 1966. \bibitem{Schuh} Schuh, F. \emph{Sur quelques formules approximatives de la circumference du cercle et sur la Cyclometrie de Huygens} Archives Neerlandaises (IIIA), vol 3, 1-178 and 229-323 (1913/1914). \bibitem{Arch2} Archimedes \emph{Book of Lemmas} in Heath, T.L., \emph{The Works of Archimedes with the Method of Archimedes}, Dover Publications Inc. NY, 1953. \end{thebibliography} At this juncture it seems appropriate to quote the well-known British analyst E.W. Hobson: \begin{quote} ``The extreme limit of what can be obtained on the geometrical lines laid down by Archimedes was reached in the work of Christian Huyghens (1629-1695). In his work `\emph{De circuli magnitudine inventa}' which is a model of geometrical reasoning he undertakes by improved methods to make a careful examination of the \emph{area} of a circle. He established sixteen theorems by geometrical processes and shews that by means of his theorems \emph{three times} as many places of decimals can be obtained as by the older method. The determination made by Archimedes he can get from the \emph{triangle} alone. The (60-gon) gives him the limits 3.1415926533 and 3.1415926538." \end{quote} (Our emphasis) \\ \\ Today's literature is replete with titles such as \emph{Huygens' inequality}, \emph{the Huygens-Snell inequality}, \emph{the Huygens-Cusa inequality}, and so on. All of these inequalities appear in Huygens' marvelously elegant treatise. These modern works prove the inequalities using partial sums of the Maclaurin expansions of trigonometric functions, and the processes of integration and differentiation. Huygens' own geometric proofs are relegated (if at all) to the sidelines with comments such as ``quite difficult.'' Amazingly, today's literature does not offer the interested mathematician an ab initio detailed and complete exposition of Huygens' original (and quite beautiful) proofs of these important theorems. One must go to his collected works to read them. Our paper fills the gap. If we put $x:=\frac{\pi}{n}$, then we must prove \begin{equation} \label{XX2} \pi\geq\frac{\pi}{x}\sin x \frac{20+51\cos x-2\cos^2x+6\cos^3x}{21+18\cos x+36\cos^2x} \end{equation} Pinelis' clever idea is to use a standard technique of integration where one \emph{transforms a trigonometric quotient into a polynomial one}. Using the substitution $$ t:=\tan \frac{x}{2},\quad x=2\arctan t,\quad \cos x=\frac{1-t^2}{1+t^2},\quad \sin x=\frac{2t}{1+t^2} $$ the desired inequality \eqref{XX2} becomes $$ d(t):=\frac{2t(39t^6-29t^4-95t^2-75)}{3(t^2+1)^2(13t^4-10t^2+25)}+2\arctan t\geq 0 $$ for $0\leq t\leq 1$. But $d(0)=0$ and $$ d'(t)=\frac{128t^6(13t^4+40t^2+75)}{(t^2+1)^3(13t^4-10t^2+25)^2}\geq 0 $$ for all real $t$. This completes the proof.
\\ \hrule \end{proof} Admittedly, the given proof is a verification, and gives no idea as to how Huygens thought of his approximation. But it does fill a gap in the literature. \section{A fundamental identity} At the beginning of the Huygens Collected Works edition of `\emph{De circuli magnitudine inventa}', which we will simply call `\emph{Inventa}', the editors deduce an identity, of interest in its own right, which allows one both to see the origin of some of Huygens' approximations and to compare their relative accuracies. We base our presentation on that of the editors. \begin{thm} Let $p_n$ be the perimeter of a regular polygon of $n$ sides inscribed in a circle of unit radius.
Then the following identity is always valid: \begin{equation} \label{identity} \begin{split} 2\pi&\equiv p_{2n}+\frac{1}{3}(p_{2n}-p_n) +\frac{2}{15}\frac{(p_{2n}-p_n)^2}{p_{2n}} +\frac{2}{35}\frac{(p_{2n}-p_n)^3}{p_{2n}^2}\\ &+\frac{8}{315}\frac{(p_{2n}-p_n)^4}{p_{2n}^3} +\frac{8}{693}\frac{(p_{2n}-p_n)^5}{p_{2n}^4}+\cdots \end{split} \end{equation} \end{thm} \begin{proof} The identity \eqref{identity} is just another way of writing the well-known power series expansion for $\arcsin x$, namely \begin{equation} \label{arc} \arcsin x = x+\frac{1}{2\cdot 3}x^3+\frac{1\cdot 3}{2\cdot 4\cdot 5}x^5+\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6\cdot 7}x^7+\cdots \end{equation} Let $\alpha$ be half the central angle subtended by a side of a regular polygon of $2n$ sides. Let $a_{2n}$ be the length of the side. Then \begin{equation} \label{alpha} \alpha=\frac{\pi}{2n}=\arccos\frac{\frac{1}{2}a_n}{a_{2n}}=\arccos\frac{p_n}{p_{2n}}=\arcsin\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}} \end{equation} and, by \eqref{arc} \begin{equation} \frac{\pi}{2n}= \frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}+\frac{1}{2\cdot 3}\left(\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}\right)^3+\frac{1\cdot 3}{2\cdot 4\cdot 5}\left(\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}\right)^5+\cdots \end{equation} But \begin{equation} \label{ } p_{2n}=4n\sin\alpha=4n\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}\Rightarrow n=\frac{p_{2n}^2}{4\sqrt{p_{2n}^2-p_n^2}} \end{equation} Therefore \begin{eqnarray*} \label{ } 2\pi&=&\frac{p_{2n}^2}{\sqrt{p_{2n}^2-p_n^2}}\left\{\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}+\frac{1}{2\cdot 3}\left(\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}\right)^3+\frac{1\cdot 3}{2\cdot 4\cdot 5}\left(\frac{\sqrt{p_{2n}^2-p_n^2}}{p_{2n}}\right)^5+\cdots\right\}\\ &=&p_{2n}+\frac{1}{2\cdot 3}\frac{p_{2n}^2-p_n^2}{p_{2n}}+\frac{1\cdot 3}{2\cdot 4\cdot 5}\frac{(p_{2n}^2-p_n^2)^2}{p_{2n}^3}+\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6\cdot 7}\frac{(p_{2n}^2-p_n^2)^3}{p_{2n}^5}+\cdots \end{eqnarray*} Now we use the identity $$ (p_{2n}^2-p_n^2)^k\equiv(p_{2n}+p_n)^k(p_{2n}-p_n)^k\equiv\left[2p_{2n}-(p_{2n}-p_n)\right]^k(p_{2n}-p_n)^k $$ and the binomial theorem for $k=1,2,3,\cdots$, and write the denominator $p_{2n}^{2k-1}\equiv p_{2n}^{k}\cdot p_{2n}^{k-1}$ to obtain the given expansion. For example, the second, third, and fourth terms give us: \begin{eqnarray*} \frac{p_{2n}^2-p_n^2}{p_{2n}} & = & 2(p_{2n}-p_n)- \frac{(p_{2n}-p_n)^2}{p_{2n}}\\ \frac{(p_{2n}^2-p_n^2)^2}{p_{2n}^3}& = & 4 \frac{(p_{2n}-p_n)^2}{p_{2n}}-4\frac{(p_{2n}-p_n)^3}{p_{2n}^2}+\frac{(p_{2n}-p_n)^4}{p_{2n}^3}\\ \frac{(p_{2n}^2-p_n^2)^3}{p_{2n}^5}& = & 8\frac{(p_{2n}-p_n)^3}{p_{2n}^2}-12\frac{(p_{2n}-p_n)^4}{p_{2n}^3}+6\frac{(p_{2n}-p_n)^5}{p_{2n}^4}-\frac{(p_{2n}-p_n)^6}{p_{2n}^5} \end{eqnarray*} Therefore, \begin{itemize} \item the coefficient of $p_{2n}-p_n$ is $2\cdot\frac{1}{6}=\frac{1}{3}.$ \item the coefficient of $\frac{(p_{2n}-p_n)^2}{p_{2n}}$ is $\frac{3}{10}-\frac{1}{6}=\frac{2}{15}.$ \item the coefficient of $\frac{(p_{2n}-p_n)^3}{p_{2n}^2}$ is $-\frac{3}{10}+\frac{5}{14}=\frac{2}{35}$ \end{itemize} and so on... \end{proof} There does not seem to be a simple formula for the general term.
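As a quick numerical sanity check of \eqref{identity} (a short Python sketch, not part of the editors' derivation; for a circle of unit radius the perimeters are simply $p_n=2n\sin(\pi/n)$), one can compare the partial sums of the right hand side with $2\pi$:
\begin{verbatim}
# Partial sums of the identity for n = 6 (unit-radius circle):
#   2*pi = p_2n + (1/3)u + (2/15)u^2/p_2n + (2/35)u^3/p_2n^2
#          + (8/315)u^4/p_2n^3 + (8/693)u^5/p_2n^4 + ...,  u = p_2n - p_n.
import math

n = 6
p_n  = 2 * n * math.sin(math.pi / n)        # perimeter of the inscribed n-gon
p_2n = 4 * n * math.sin(math.pi / (2 * n))  # perimeter of the inscribed 2n-gon
u = p_2n - p_n

coeffs = [1 / 3, 2 / 15, 2 / 35, 8 / 315, 8 / 693]
total = p_2n
for k, c in enumerate(coeffs, start=1):
    total += c * u ** k / p_2n ** (k - 1)
    print(k, total, 2 * math.pi - total)    # the error shrinks with each term
\end{verbatim}
Each successive term reduces the remaining error by roughly a factor of $u/p_{2n}$, which is what makes truncations of \eqref{identity} convenient for comparing the accuracy of the various approximations.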
\\ Now, if we apply the identity \eqref{identity} to the Snell inequalities \eqref{Snell3} we obtain \begin{eqnarray*} &p_{2n}+\frac{1}{3}(p_{2n}-p_n)+\frac{1}{9}\frac{(p_{2n}-p_n)^2}{p_{2n}}+\frac{(p_{2n}-p_n)^3}{9p_n(2p_2n{+p_n})}&\\ &<2\pi& \\ &<p_{2n}+\frac{1}{3}(p_{2n}-p_n)+\frac{1}{3}\frac{(p_{2n}-p_n)^2}{p_{2n}}+\frac{(p_{2n}-p_n)^3}{3p_np_{2n}}& \end{eqnarray*} \\ Comparing this with \eqref{identity} we see that the difference between the exact value of $2\pi$ in the identity and the value in the lower bound of the Snell approximation is very close to $\frac{1}{45}\dfrac{(p_{2n}-p_n)^2}{p_{2n}}$ and the error in the upper bound is about nine times as large. Indeed, as the well-known historian of mathematics Wilbur Knorr commented: \begin{quote} ``The rounding-off technique which ARCHIMEDES employs is extremely subtle. One indication of his adroitness is that twice in the calculation of the lower bound ARCHIMEDES adjusts the terms of the triangles by factores of $\frac{4}{13} $ and $\frac{11}{40}$, respectively, in order to remove fractional remainders and thus facilitate further computation. The \emph{choice} of the fractional remainders $\frac{3}{4}$ and $\frac{9}{11}$ was thus made after considerable forethought...(and) the values associated with the $96$-gons are finally expanded to produce the bounds $3\frac{1}{7}$ and $3\frac{10}{71}$." \end{quote} \begin{eqnarray*} \text{since $\triangle HAE$ is isoceles} & \Rightarrow & \angle H=\angle HEA \\ \text{and since}\,\angle AEB & = & \frac{\pi}{2}\\ \Rightarrow \angle HEA+\angle KEB & = & \frac{\pi}{2}\\ \text{But}\,\angle H+\angle HKB &= & \frac{\pi}{2}\quad\text{since in $\triangle HKB$, $\angle B =\frac{\pi}{2}\$} \end{eqnarray*} Let$HE$ be produced to meet the tangent in $K$. Finally, let $EG$ be drawn perpendicular to the diameter $AB$ and $ED$ perpendicular to the tangent $BL$. Then, \begin{eqnarray*} \text{since $\triangle HAE$ is isoceles} & \Rightarrow & \angle H=\angle HEA \\ \text{and since}\,\angle AEB & = & \frac{\pi}{2}\\ \Rightarrow\angle HEA+\angle KEB & = & \frac{\pi}{2}\\ \text{But}\,\angle H+\angle HKB &= & \frac{\pi}{2}\quad\text{since in $\triangle HKB$, $\angle B =\frac{\pi}{2}\$} \end{eqnarray*} In the previous figure, extend $AE$ till it intersects the tangent (produced) in $K$, and construct a point $L$ such that $$ LF=FC\Rightarrow BL=2AF $$ and since the geometric mean is less than the arithmetic mean \begin{eqnarray*} \frac{BC}{BF} & < & \frac{BF}{BL} \\ \Rightarrow \frac{BC}{BL} & = & \frac{AC}{AF}=\frac{BC}{BV}\cdot\frac{BF}{BL}>\frac{BC^2}{BF^2} \\ \Rightarrow \frac{KC}{EF} & > & \frac{DC^2}{EF^2}\\ \Rightarrow \frac{KC}{DC} & > & \frac{DC}{EF}\\ \Rightarrow KC+EF & > & 2DC\\ \Rightarrow \frac{1}{3}KC+\frac{2}{3}EF & > & \frac{2}{3}DC+\frac{1}{3}EF\\ \Rightarrow \overset\frown{EC} & < & \frac{1}{3}KC+\frac{2}{3}EF\quad\text{by \eqref{lemma}} \end{eqnarray*} where the theorem follows by extending the final inequality to the complete circle. \end{proof} \begin{thm} If $S$ is the area of circular segment, then \begin{equation} \label{ } \boxed{S>A_{2n}+\frac{1}{3}(A_{2n}-A_n)} \end{equation} where $A_n$ is the area of the segmental polygon of $n$ sides. \end{thm} \begin{proof} We label the vertices of the segmental $2n$-gon $0,1,2,\cdots, 2n$ and those of the segmental $n$-gon $0,2,\cdots,2n$. We cal their areas are, respectively $A_{2n}$ and $A_n$. 
Then \begin{eqnarray*} \text{seg}\,(02)+\text{seg}\,(24)+\cdots+\text{seg}\,(2n-2,2n) & = & \text{seg}\,(0,2n)-A_n \\ & > & \frac{4}{3}\left[(\triangle 012)+(\triangle 234)+\cdots\right] \\ & = & \frac{4}{3}(A_{2n}-A_n) \end{eqnarray*} where we used the \emph{first Heron-Huygens Lemma} in the inequality, and therefore \begin{equation*} \text{Segment}\,(0,2n)>\frac{4}{3}A_{2n}-\frac{1}{3}A_n \end{equation*} \end{proof} \begin{lemma} Given a circle of radius $r$, we let $A_n$ be the area of an inscribed regular polygon of $n$ sides, $p_n$ be the perimeter of an inscribed regular polygon of $n$ sides, $A_n'$ be the area of a circumscribed regular polygon of $n$ sides, and $p_n'$ be the perimeter of a circumscribed regular polygon of $n$ sides. Then, \begin{enumerate} \item $A_{2n}=\frac{1}{2}p_nr$. \item $A_{n}'=\frac{1}{2}p_n'r$. \item $p_{2n}'=\frac{p_{2n}^2}{p_n}$. \end{enumerate} \end{lemma} \end{document}
2412.10905v3
http://arxiv.org/abs/2412.10905v3
On the total surface area of potato packings
\documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm} \usepackage{comment} \usepackage{hyperref} \usepackage{pgf,tikz} \usepackage{esint} \usepackage{bbm} \usetikzlibrary{arrows} \usetikzlibrary{arrows.meta} \newcommand{\RR}{\mathbb R} \newcommand{\CC}{\mathbb C} \newcommand{\NN}{\mathbb N} \newcommand{\ZZ}{\mathbb Z} \newcommand{\1}{{\mathbbm 1}} \renewcommand{\H}{\mathcal H} \renewcommand{\L}{\mathcal L} \newcommand{\eps}{\varepsilon} \newcommand{\m}{\mathfrak m} \renewcommand{\d}{\mathrm{d}} \newcommand{\loc}{\mathrm{loc}} \newcommand{\defeq}{:=} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\abs}[1]{\left\vert #1 \right\vert} \newcommand{\Abs}[1]{\left\Vert #1 \right\Vert} \newcommand{\enclose}[1]{\left(#1\right)} \newcommand{\Enclose}[1]{\left[#1\right]} \newcommand{\ENCLOSE}[1]{\left\{#1\right\}} \newcommand{\weakstarto}{{\stackrel{*}{\rightharpoonup}}} \newcommand{\Partial}{\tilde \partial} \newcommand{\St}{\mathrm{St}} \newcommand{\M}{\mathcal{M}} \renewcommand{\H}{\mathcal{H}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\diam}{\mathrm{diam}} \newcommand{\co}{\mathrm{co}} \renewcommand{\S}{\mathcal{S}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{problem}[theorem]{Problem} \newtheorem{openpb}[theorem]{Open Problem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \author[Novaga]{Matteo Novaga} \address[Matteo Novaga, Emanuele Paolini, Eugene Stepanov]{Dipartimento di Matematica, Universit\`a di Pisa, Largo Bruno Pontecorvo 5 \\ I-56127, Pisa} \address[Eugene Stepanov] { St.Petersburg Branch of the Steklov Mathematical Institute of the Russian Academy of Sciences, St.Petersburg, Russia \and Department of Mathematical Physics, Faculty of Mathematics and Mechanics, St. Petersburg State University, Universitetskij pr.~28, Old Peterhof, 198504 St.Petersburg, Russia \and ITMO University \and Faculty of Mathematics, Higher School of Economics, Moscow } \email[Matteo Novaga]{[email protected]} \author[Paolini]{Emanuele Paolini} \email[Emanuele Paolini]{[email protected]} \author[Stepanov]{Eugene Stepanov} \email[Eugene Stepanov]{[email protected]} \thanks{ This research was partially supported by MUR Excellence Department Project awarded to the Department of Mathematics of the University of Pisa. The work of M.N.\ was partially supported by Next Generation EU, Mission 4, Component 2, PRIN 2022E9CF89. The work of E.P.\ was partially supported by Next Generation EU, Mission 4, Component 2, PRIN 2022PJ9EFL, and by project PRA 2022 14 GeoDom (Università di Pisa). The work of E.S.\ is partially within the framework of HSE University Basic Research Program. The authors are indebted to Francesco Nobili for suggesting a reference that improved substantially the result of Lemma~\ref{lemma:PI} } \date{\today} \title{On the total surface area of potato packings} \begin{document} \begin{abstract} We prove that if we fill without gaps a bag with infinitely many potatoes, in such a way that they touch each other in few points, then the total surface area of the potatoes must be infinite. In this context potatoes are measurable subsets of the Euclidean space, the bag is any open set of the same space. 
As we show, this result also holds in the general context of doubling (even locally) metric measure spaces satisfying Poincar\'e inequality, in particular in smooth Riemannian manifolds and even in some sub-Riemannian spaces. \end{abstract} \maketitle \tableofcontents \section*{Introduction} \begin{figure} \begin{center} \includegraphics[height=0.5\textwidth]{bag.png} \hfill \includegraphics[height=0.5\textwidth]{ApollonianGasket-15_32_32_33.png} \end{center} \caption{On the left-hand side: a potato bag. On the right-hand side: an Apollonian gasket. } \label{fig:bag} \end{figure} In this note we prove the following curious fact: if we fill without gaps a bag (modeled, say, by an arbitrary open connected subset of $\RR^d$) with potatoes (modeled, in this case, by open sets of finite perimeter), in such a way that the potatoes touch each other in a single point (more generally in a set of zero surface measure), see Figure~\ref{fig:bag}, then the total surface area of the potatoes is infinite. We show that this very simple result is in fact true for a fairly large class of metric measure spaces, where the perimeter of a set can be naturally defined. This result implies in particular that the residual set (the complement of the union of the potatoes) has Hausdorff dimension at least $d-1$. Theorem~1.4(ii) from \cite{MaiNta22} shows that this statement is sharp: in fact it proves the existence of a packing of a planar convex set by planar strictly convex sets for which the dimension of the residual set is exactly $1$. In this way we generalise the results of \cite{MaiNta22}, dedicated to packing of convex sets, which in their turn generalise those from the series of papers \cite{Lar66,Lar66b,Lar66c,Lar68} and \cite{MikMik02} dedicated to packings of spheres. \section{Notation and preliminary results} We will consider here metric measure spaces $(X,\d,\m)$ where $X$ is a nonempty set equipped with a distance $\d$ and a $\sigma$-finite Borel measure $\m$ with $\m(X)\neq 0$. We say that a function $F$ defined on the borel sets of a metric space $X$ with values in $[0,+\infty]$ is a \emph{perimeter-like evaluation} if it satisfies the following properties: \begin{enumerate} \item[(0)] $F(\emptyset)=0$; \item[(T)] $F(A) \ge \limsup_n F(A_n)$ whenever $A_n\subset A$ and $F(A\setminus A_n)\to 0$; \item[(C)] $F(X\setminus A) = F(A)$; \item[(L)] $F(A)\le \liminf_n F(A_n)$ if $\lim_n \m(A_n\triangle A)=0$ (one says that $F$ is lower semicontinuous with respect to $L^1(X,\m)$ convergence of sets); \item[(Z)] if $\m(A \triangle B)=0$ then $F(A)=F(B)$. \end{enumerate} \begin{lemma}\label{lemma:T} The property (T) is valid if, for some $c>0$, one has \begin{enumerate} \item[(T')] $F(A) \ge F(A\setminus B) - c F(B)$ whenever $B\subset A$. \end{enumerate} \end{lemma} \begin{proof} Just notice that \begin{align*} F(A) &= F(A_n \cup (A\setminus A_n))\\ &\ge F(A_n) - cF(A\setminus A_n) && \text{by $(T')$} \end{align*} and if $F(A\setminus A_n)\to 0$, then we have (T). \end{proof} For a fairly general class of metric measure spaces, namely those with a doubling measure and satisfying the Poincar\'e inequality, a perimeter functional has been defined in \cite{Mir03,AmbMirPal04,BonPasRaj20}. We refer to the monograph \cite{HeiKosShaTys15} and references therein for a detailed discussion of such spaces. Here we just recall the basic definitions. 
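Before recalling them, we illustrate the abstract properties above on a finite toy example (ours, not taken from the references cited): on a cycle graph with the counting measure, the edge-cut functional behaves like a perimeter, and the short Python sketch below checks (0), (C) and $(T')$ for it by brute force.
\begin{verbatim}
# Finite toy analogue of a perimeter-like evaluation (illustration only):
# on the cycle graph C_7 with the counting measure, let F(A) be the number
# of edges joining A to its complement.  We check (0), (C) and (T') with
# c = 1; (T) then follows by the lemma above, while (L) and (Z) are trivial
# here, since L^1-convergence of subsets of a finite set means eventual
# equality of the sets.
import itertools

n = 7
X = frozenset(range(n))
edges = [(i, (i + 1) % n) for i in range(n)]

def F(A):
    return sum((u in A) != (v in A) for u, v in edges)

subsets = [frozenset(s) for r in range(n + 1)
           for s in itertools.combinations(range(n), r)]

assert F(frozenset()) == 0                                    # (0)
assert all(F(X - A) == F(A) for A in subsets)                 # (C)
assert all(F(A) >= F(A - B) - F(B)                            # (T')
           for A in subsets for B in subsets if B <= A)
print("edge-cut functional satisfies (0), (C), (T') on C_7")
\end{verbatim}
Of course, no finite example can capture the role played by (L) in the limiting arguments below; the sketch is only meant to make the axioms concrete.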
A measure ${\mathfrak m}$ is said to be \emph{doubling}, provided there is a $C_D\ge 1$ such that \[ {\mathfrak m}(B_{2r}(x))\le C_D \m(B_r(x)),\qquad \text{for all }x\in X,r \in (0,R). \] We will shortly say that $(X,\d,{\mathfrak m})$ is a doubling space. We say that a metric measure space $(X,\d,{\mathfrak m})$ supports a \emph{weak $(1,1)$ Poincar\'e} inequality, provided there is $\tau_P\ge 1$, and a constant $C_P>0$ such that, for any locally Lipschitz function $f\colon X\to \RR$, one has \[ \fint_{B_r(x)} \abs{f-f_{B_r(x)}}\,d\m \le C_P\, r \fint_{B_{\tau_P r}(x)} {\rm Lip}(f)\,d\m, \qquad \text{for all }x \in X, r>0, \] where $f_{B_r(x)}\defeq \fint_{B_r(x)} f\,d{\mathfrak m}$ and ${\rm Lip}(f)$ stands for the Lipschitz constant of $f$ over $B_{\tau_P r}(x)$. Actually, this is usually formulated with \emph{upper gradients} (see \cite{HeiKos98}), rather than with local Lipschitz constant. However, if $(X,\d,\m)$ is doubling, the two formulations are equivalent \cite{Kei03}. \begin{definition} A PI-space is a metric measure space $(X,\d,{\mathfrak m})$ that is doubling and supports a weak $(1,1)$-local Poincar\'e inequality. \end{definition} The perimeter functional in a PI-space has been defined in \cite{Mir03,AmbMirPal04} (see also \cite{BonPasRaj20}) as a natural generalization of the classical Euclidean perimeter in $\RR^d$ (see \cite{Giu84}). Note that, like in the classical Euclidean case $\RR^d$ where the perimeter is the $(d-1)$-dimensional Hausdorff measure of the essential boundary, also in PI-spaces the perimeter of a set $E$ is a measure of the essential boundary $\partial^e E$ of $E$, defined in \cite{AmbMirPal04}, and is absolutely continuous with respect to the so-called codimension one Hausdorff measure $\H^{-1}$, with a Borel density $\theta_E\colon X\to [\alpha,\beta]$ where $\alpha$ and $\beta$ are two positive constants depending only on the constants $C_D$, $C_P$, $\tau_P$ of the space \cite{BonPasRaj20}. In particular, one has the integral representation \[ P(E) = \int_{\partial^e E} \theta_E(x)\, d\H^{-1}(x). \] We refer to \cite{BonPasRaj20,BreNobPas23} where this notion has been further studied. It is important to note that this includes many classical examples, like for instance \begin{enumerate} \item the Euclidean space $X=\RR^d$ (or $X$ open subset of $\RR^d$) with the Lebesgue measure (in this case $\H^{-1}$ is the classical $(d-1)$-dimensional Hausdorff measure and $\partial^e E$ is equivalent to the reduced boundary of $E$ as defined by De Giorgi), \item $X$ a finite dimensional space equipped with any anisotropic norm and the Lebesgue measure, or even some of the so-called $RCD$ metric measure spaces, \item $X$ a Heisenberg group of any dimension, or even some of the more general Carnot groups. \end{enumerate} \begin{lemma}\label{lemma:PI} Let $(X,\d,\m)$ be a $PI$ space. Then the perimeter functional $P$ is a perimeter-like evaluation, i.e.\ it satisfies all the properties (0), (T), (C), (L), and (Z). \end{lemma} \begin{proof} Property $(0)$ follows from $\partial^e \emptyset = \emptyset$. Property $(T')$, with $c=1$, is shown in \cite[eq.~(4.1)]{Amb02}. Proposition~1.7 (i), (ii), and (vi) in \cite{BonPasRaj20} shows (Z), (L), and (C) respectively. \end{proof} \section{Main result} We start with the following auxiliary statement. \begin{proposition} If $F$ satisfies properties (0), (C), and the family of sets $E_k$, $k\in \NN$ is such that $\bigcup E_i = X$, and \[ F\enclose{\bigcup_j E_j} \ge \sum_j F(E_j) \] then $F(E_i) = 0$ for all $i\in\NN$.
If, additionally, $F$ satisfies (Z) then the same holds true if $\m(X\setminus\bigcup E_i) = 0$ rather than $\bigcup E_i = X$. \end{proposition} \begin{proof} One has \begin{align*} 0 &= F(\emptyset) && \text{by (0)}\\ &= F(X) && \text{by (C)}\\ &= F\enclose{\bigcup E_i} && \text{by assumption}\\ &\ge \sum_i F(E_i) && \text{by assumption} \end{align*} hence $F(E_i)=0$ for all $i\in \NN$. If, additionally, $F$ satisfies (Z), and $\m(X\setminus\bigcup E_i) = 0$ one has \begin{align*} 0 &= F(\emptyset)=F(X)\\ &= F\enclose{X \setminus \bigcup E_i} && \text{by (Z)}\\ &= F\enclose{\bigcup E_j} && \text{by (C)}\\ &\ge \sum F(E_j) && \text{by assumption} \end{align*} so that $F(E_j)=0$ for all $j$. \end{proof} The Theorem below is the main technical result which will be further translated into corollaries adapted to applications. \begin{theorem}\label{th:main} If $F$ satisfies properties (0), (C), (T), (L) and the family of sets $E_k$, $k\in \NN$ is such that $F(E_i\cup E_j) = F(E_i) + F(E_j)$ for $i\neq j$, $\bigcup_{i=0}^{+\infty} E_i = X$, and $\m\enclose{\bigcup_{i=1}^{+\infty} E_i} < +\infty$, then either $\sum_{i=0}^{+\infty} F(E_i) = +\infty$ or $F(E_i) = 0$ for all $i\in\NN$. If, additionally, $F$ satisfies (Z) then the same holds true if $\m(X\setminus\bigcup_{i=0}^{+\infty} E_i) = 0$ in place of $\bigcup E_i = X$. \end{theorem} \begin{proof} Let \[ T_n\defeq \bigcup_{i=n+1}^{+\infty} E_i, \qquad T_n^m\defeq \bigcup_{i=n+1}^{m} E_i. \] Clearly $T_n^m \nearrow T_n$ as $m\to +\infty$, hence $\lim_m \m(T_n^m) = \m(T_n)$. Since $\m(T_n)\le \m\enclose{\bigcup_{i=1}^{+\infty} E_i}<+\infty$, we have that $\m(T_n^m \triangle T_n)\to 0$ as $m\to+\infty$. Therefore \begin{equation}\label{eq:48972345} \begin{aligned} F(T_n) &\le \liminf_m F(T_n^m) && \text{by (L)}\\ &= \liminf_m \sum_{i=n+1}^m F(E_i) &&\text{by assumption}\\ &= \sum_{i=n+1}^{+\infty} F(E_i). \end{aligned} \end{equation} If $\sum_{i=1}^{+\infty} F(E_i)=+\infty$ the proof is concluded. Otherwise the right-hand side of \eqref{eq:48972345} is the remainder of a convergent number series and hence $F(T_n)\to 0$ as $n\to +\infty$. We have then \begin{align*} F\enclose{\bigcup_{i=0}^{+\infty} E_i} &\ge \limsup_n F\enclose{\bigcup_{i=0}^n E_i} && \text{by (T)}\\ &= \lim_n \sum_{i=0}^n F(E_i) && \text{by assumption}\\ &= \sum_{i=0}^{+\infty} F(E_i). \end{align*} The proof is then concluded using the previous proposition. \end{proof} \section{Some applications} Combining Theorem~\ref{th:main} with Lemma~\ref{lemma:PI} we obtain the following result. \begin{theorem}\label{th:1} Let $(X,\d,\m)$ be a $PI$ space. Let $E_k$, $k\in \NN$, be a sequence of Borel subsets of $X$ such that \begin{enumerate} \item[(i)] $\m(E_k \cap E_j)=0$ for $k\neq j$; \item[(ii)] $\m\enclose{X \setminus \bigcup E_k}=0$; \item[(iii)] $\H^{-1}(\partial^e E_k \cap \partial^e E_j )=0$ for all $k\neq j$; \item[(iv)] $\m(E_0)>0$ and $\m (E_1)>0$. \end{enumerate} Then \[ \sum_k P(E_k) = +\infty. \] \end{theorem} \begin{proof} Conditions (i) and (iii) imply \[ P(E_k\cup E_j) = P(E_k) + P(E_j) \] in view of Lemma~2.3(ii) in \cite{BonPasRaj20}. The assumptions of Theorem~\ref{th:main} are therefore satisfied by $P$ in view of Lemma~\ref{lemma:PI}. Hence by Theorem~\ref{th:main} we conclude that either $\sum P(E_k)=+\infty$ or $P(E_k)=0$ for all $k$. But the latter option is excluded by (iv) in view of the relative isoperimetric inequality for $PI$ spaces, provided in \cite{Mir03}.
\end{proof} \begin{corollary}\label{co:2} Let $\m$ be the Lebesgue measure on $\RR^d$. Let $\mathcal F$ be a family of at least two measurable nonempty subsets of $\RR^d$, each with positive volume $\m$ and kissing each other on sets of zero $\H^{d-1}$ measure, i.e. \begin{equation}\label{eq:0967} \H^{d-1}(\bar A\cap \bar B)=0, \qquad \text{for every}\ A,B\in \mathcal F, A\neq B. \end{equation} In particular this assumption is satisfied if the sets in $\mathcal F$ are strictly convex and their interiors are not intersecting. Let $B\subset \RR^d$ be an open ball. If $\sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap B)<+\infty$, then either there exists a set $E\in \mathcal F$ such that \[ \m (B\setminus E)=0 \] or \[ \m \enclose{B \setminus \bigcup_{E\in \mathcal F} E}>0, \] i.e.\ there is a subset of $B$ with positive measure which is not covered by the union of $\mathcal F$. In particular if $\Omega\subset \RR^d$ is an open set and all $E \in \mathcal F$ are open nonempty subsets of $\Omega$, regular in the sense that they coincide with the interior of their closure, satisfying~\eqref{eq:0967} and \begin{equation}\label{eq:m} \m\left(\Omega\setminus \bigcup_{E\in \mathcal F} E\right)=0, \end{equation} then given any $x\in \Omega \cap \bigcup_{E\in \mathcal F} \partial E$ and $B$ an open ball centered at $x$, one has \begin{equation}\label{eq:68} \sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap B)=+\infty. \end{equation} If, moreover, $\Omega$ is connected, this implies in particular \begin{equation}\label{eq:685} \sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap \Omega)=+\infty. \end{equation} \end{corollary} \begin{proof} We apply Theorem~\ref{th:1} with $X\defeq B$, equipped with the Euclidean distance, and $P$ is the usual Euclidean (Caccioppoli) perimeter relative to $B$. Clearly, the doubling condition holds for $B$ and, since $B$ is connected, the Poincar\'e inequality is satisfied and $X=B$ is a $PI$ space. Then, the De Giorgi reduced boundary $\partial^*E$ of a Borel set $E\subset \RR^d$ satisfies $\partial^* E \subset \partial^e E$ and $\H^{d-1}(\partial^e E) = \H^{d-1}(\partial^* E)$. Notice that $\H^{d-1}$ coincides with the spherical Hausdorff measure $\mathcal S^{d-1}$ on rectifiable sets, and (see the Example ``weighted spaces'' in Section~7 of \cite{AmbMirPal04}) $\H^{d-1}$ coincides, up to a multiplicative constant, with $\mathcal S^{d-1}$. If there exists $E\in \mathcal F$ such that $\m(B\setminus E)=0$ or, if $\m(B\setminus \bigcup_{E \in \mathcal F} E)>0$ there is nothing to prove. Otherwise there should be at least two different sets $E_0$, $E_1$ in $\mathcal F$ such that $\m(B\cap E_0)>0$ and $\m(B\cap E_1)>0$. Enumerate now all the sets of $\mathcal F$ as $E_k$, $k\in \NN$ (clearly $\mathcal F$ is at most countable since each set is assumed to have positive measure, and in the case that $\mathcal F$ is finite we can complete the sequence with empty sets). Notice that $\H^{d-1}(\partial^e E_k\cap \partial^e E_j)=0$ for all $k\neq j$ and $\m(E_k\cap E_j)=0$ since $\H^{d-1}(\bar E_k \cap \bar E_j)=0$. Therefore, we can apply Theorem~\ref{th:1} to the sequence $E_k\cap B$ to get $\sum_k P(E_k, B)=+\infty$ hence \[ \sum_k \H^{d-1}(\partial E_k\cap B) \ge \sum_k \H^{d-1}(\partial^* E_k\cap B) = \sum_k P(E_k,B) = +\infty, \] showing the first claim. In the particular case when $\m(\Omega\setminus \bigcup_{E\in \mathcal F} E)=0$ for some open $\Omega\subset \RR^d$, taking $x$ and $B$ as in the statement, we consider a ball $B'\subset B$ centered at $x$ such that $B'\subset \Omega$. 
Then, for every $E\in \mathcal F$ either $E\cap B'=\emptyset$ or $E\cap B'\neq \emptyset$. In the latter case one has $\partial E\cap B'\neq \emptyset$ since otherwise we would have $B'\subset E$ which contradicts the choice of the center $x$ and the fact that all the sets in $\mathcal F$ are disjoint. The regularity assumption implies that the interior of $B'\setminus E$ is nonempty, hence $\m(B'\setminus E)>0$. Therefore we have shown that $\m(B'\setminus E)>0$ for every $E\in \mathcal F$ and hence by the previous claim with $B'$ in place of $B$, one has \[ \sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap B)\ge \sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap B')=+\infty \] as claimed. If $\Omega$ is also connected, then since $\mathcal F$ has at least two elements, we obtain that $\bigcup_{E\in \mathcal F} \partial E\cap \Omega$ is not empty. Finally, from~\eqref{eq:68} one has \[ \sum_{E\in \mathcal F} \H^{d-1}(\partial E\cap\Omega) = +\infty, \] proving the last claim. \end{proof} \begin{remark} It is clear from the proof that the statement of Corollary~\ref{co:2} holds under the slightly weaker assumption that $\H^{d-1}(\partial^* A\cap \partial^* B)=0$ for all $A,B\in \mathcal F$ instead of $\H^{d-1}(\bar A\cap \bar B)=0$, where $\partial^*$ stands for the reduced boundary in the sense of De Giorgi. \end{remark} \begin{remark} Corollary~\ref{co:2} is valid in any metric measure space $(X,\d,\m)$ satisfying the Poincar\'e inequality where for every $x\in X$ there is an open ball $U\subset X$ containing $x$ such that $(U,\d,\m)$ is doubling. In particular, it is valid in any $C^2$ smooth Riemannian manifold. Indeed it is enough to rewrite word-to-word the proof of Corollary~\ref{co:2} with balls $B\subset U$. In the case when $X$ a $C^2$ smooth Riemannian manifold it is enough to note that over every ball $U$ the curvature of $X$ is bounded hence $(U,\d,\m)$ is doubling (in fact it is enough for this purpose that the curvature be bounded from below). \end{remark} \begin{remark}\label{remrem} An alternative proof of Corollary~\ref{co:2} for a family $\mathcal F$ of strictly convex sets could proceed as follows: \begin{itemize} \item For the planar case $d=2$ one uses the coarea inequality to state that if the total perimeter of the sets in the family is finite then almost every line (in fact every except a countable number of lines), say horizontal, intersects the boundary of the sets in a set of finite $\H^{0}$ measure (i.e. in a finite set). On the other hand if the sets of $\mathcal F$ are assumed to be strictly convex then the lines intersecting only a finite number of sets in the family should be at most countable: in fact, the intersection of a convex set with a line is a line segment, and hence if the line intersects only a finite number of sets, then each point of its intersection with the boundary of some set of $\mathcal F$ is a point of intersection between two different sets of $\mathcal F$, which are countably many in total. This contradiction shows that the total perimeter of the sets in the family is infinite for planar packing of strictly convex sets (even in the case when the ambient set is not convex). \item For the general space dimension $d\geq 2$, again assuming that the total perimeter of the packing is finite, by coarea inequality we have that the intersection of almost every hyperplane, say, horizontal, with the boundary of the sets in the family, has finite $\H^{d-2}$ measure. 
If the ambient set is convex, then inside every such hyperplane we have a packing of a convex set by strictly convex sets. Proceeding by backward induction on the dimension we arrive at a contradiction. Convexity of the ambient set is clearly essential for the argument to work in the case $d > 2$. \end{itemize} \end{remark} \begin{remark} Under the assumptions of Corollary~\ref{co:2}, if $\Omega$ is connected and every set in $\mathcal F$ is regular in the sense of this Corollary, reasoning as in Remark~\ref{remrem} one can prove that \begin{equation}\label{eq:69} \sum_{E\in \mathcal F} (\diam E)^{d-1} = +\infty, \end{equation} where $\diam E$ denotes the diameter of $E$. Indeed, since $\Omega$ is connected, we can choose a ball $B\subset \Omega$ such that $V:=\bigcup_{E\in \mathcal F} \partial E\cap B$ is nonempty. Notice that, if $x\in \partial E$ for some $E\in\mathcal F$, then every neighborhood of $x$ intersects at least two sets in $\mathcal F$. Therefore, up to a suitable rotation, we can assume that there exists a set $L_1$ of positive measure of lines $\ell$ parallel to the first coordinate direction $e_1$, which intersect in $B$ at least two sets of $\mathcal F$ (hence they also intersect the set $V$). Here by measure of the set of lines $L_1$ we mean the $\L^{d-1}$ measure of its orthogonal projections on the hyperplane $e_1^\perp$ perpendicular to $e_1$. In particular $\L^{d-1}\left(p_1\left(V\right)\right) >0$, where $p_1$ denotes the orthogonal projection on $e_1^\perp$. Since \[ \L^{d-1}(p_1(\bar E)) \le \omega_{d-1}\diam (p_1(\bar E))^{d-1} \le \omega_{d-1}(\diam \bar E)^{d-1} = \omega_{d-1}(\diam E)^{d-1}, \] if \eqref{eq:69} is not satisfied, then, for every $\eps>0$, there is a cofinite subfamily $\mathcal F_\eps\subset\mathcal F$ such that \[ \sum_{E\in \mathcal F_\eps}\L^{d-1}(p_1(\bar E)) <\eps. \] This implies that there exists a set $L_2\subset L_1$ of positive measure of lines $\ell$ which do not intersect the closures of the sets in $\mathcal F_\eps$, but intersect at least two sets in $\mathcal F\setminus\mathcal F_\eps$. Notice that $\ell\cap B\subset \bigcup_{E\in \mathcal F\setminus\mathcal F_\eps} \bar E$ for almost every line $\ell$ parallel to $e_1$, since otherwise, by Fubini Theorem, the volume of $B\setminus \bigcup_{E\in \mathcal F} \bar E$ would be strictly positive, contradicting~\eqref{eq:m}. As a consequence, for almost every line $\ell\in L_2$, since $\ell\cap B$ is connected, there exist two sets $U, W$ in $\mathcal F\setminus\mathcal F_\eps$, possibly depending on $\ell$, such that $\ell \cap B \cap \partial U\cap \partial W$ is nonempty (otherwise $\ell\cap B$ would be a union of a finite disjoint family of relatively closed sets). Finally, since the family $\mathcal F\setminus\mathcal F_\eps$ is finite, we can find two sets $E_1, E_2$ in $\mathcal F\setminus\mathcal F_\eps$ and a subset $L_3\subset L_2$ of positive measure such that $\ell \cap B \cap \partial E_1\cap \partial E_2$ is nonempty for every $\ell\in L_3$, contradicting~\eqref{eq:0967}. \end{remark} \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{Amb01} L. Ambrosio, \emph{Some fine properties of sets of finite perimeter in {Ahlfors} regular metric measure spaces}, Adv. Math. \textbf{159} (2001), no.~1, 51--67. 
\bibitem{Amb02} \bysame, \emph{Fine properties of sets of finite perimeter in doubling metric measure spaces}, Set-Valued Anal. \textbf{10} (2002), no.~2-3, 111--128. \bibitem{AmbMirPal04} L.~Ambrosio, M.~Miranda, Jr., and D.~Pallara, \emph{Special functions of bounded variation in doubling metric measure spaces}, Calculus of variations: topics from the mathematical heritage of {E}.\ {D}e {G}iorgi, Quad. Mat., vol.~14, Dept. Math., Seconda Univ. Napoli, Caserta, 2004, pp.~1--45. \bibitem{BonPasRaj20} P.~Bonicatto, E.~Pasqualetto, and T.~Rajala, \emph{Indecomposable sets of finite perimeter in doubling metric measure spaces}, Calc. Var. Partial Differential Equations \textbf{59} (2020), no.~2, Paper No. 63, 39. \MR{4073209} \bibitem{BreNobPas23} C. Brena, F. Nobili, and E. Pasqualetto, \emph{Maps of bounded variation from {PI} spaces to metric spaces}, Preprint, {ArXiv}:2306.00768 (2023), 2023. \bibitem{Giu84} E. Giusti, \emph{Minimal surfaces and functions of bounded variation}, Monographs in Mathematics, vol.~80, Birkh{\"a}user, 1984. \bibitem{HeiKos98} J. Heinonen and P. Koskela, \emph{Quasiconformal maps in metric spaces with controlled geometry}, Acta Math. \textbf{181} (1998), no.~1, 1--61. \bibitem{HeiKosShaTys15} J. Heinonen, P. Koskela, N. Shanmugalingam, and J.~T. Tyson, \emph{Sobolev spaces on metric measure spaces. {An} approach based on upper gradients}, New Math. Monogr., vol.~27, Cambridge: Cambridge University Press, 2015. \bibitem{Kei03} S. Keith, \emph{Modulus and the {Poincar{\'e}} inequality on metric measure spaces}, Math. Z. \textbf{245} (2003), no.~2, 255--292. \bibitem{Lar66b} D.~G. Larman, \emph{An asymptotic bound for the residual area of a packing of discs}, Proc. Cambridge Philos. Soc. \textbf{62} (1966), 699--704. \bibitem{Lar66c} \bysame, \emph{A note on the {B}esicovitch dimension of the closest packing of spheres in {$R_{n}$}}, Proc. Cambridge Philos. Soc. \textbf{62} (1966), 193--195. \bibitem{Lar66} \bysame, \emph{On the exponent of convergence of a packing of spheres}, Mathematika \textbf{13} (1966), no.~1, 57--59. \bibitem{Lar68} \bysame, \emph{On packings of unequal spheres in {$R_{n}$}}, Canadian J. Math. \textbf{20} (1968), 967--969. \bibitem{MaiNta22} S.~Maio and D.~Ntalampekos, \emph{On the {H}ausdorff dimension of the residual set of a packing by smooth curves}, Journal of the London Mathematical Society \textbf{105} (2022), no.~3, 1752--1786. \bibitem{MikMik02} A.~S. Mikhailov and V.~S. Mikhailov, \emph{Remark on covering theorem}, Journal of Mathematical Sciences \textbf{112} (2002), 4024--4028. \bibitem{Mir03} M. Miranda, \emph{Functions of bounded variation on ``good'' metric spaces}, J. Math. Pures Appl. (9) \textbf{82} (2003), no.~8, 975--1004. \end{thebibliography} \end{document}
2412.11163v1
http://arxiv.org/abs/2412.11163v1
The Fine interior of dilations of a rational polytope
\documentclass[12pt,a4paper]{amsart} \usepackage{amsmath,amsfonts,amssymb,amsthm,amscd} \usepackage{dsfont,mathrsfs,enumerate} \usepackage{curves,epic,eepic} \setlength{\topmargin}{0cm} \setlength{\textwidth}{15cm} \setlength{\oddsidemargin}{0.5cm} \setlength{\evensidemargin}{0.5cm} \setlength{\unitlength}{1mm} \usepackage[colorlinks=true, linkcolor=red, citecolor=blue, pagebackref]{hyperref} \renewcommand\backrefxxx[3]{ \hyperlink{page.#1}{$\uparrow$#1}} \usepackage{url} \usepackage{cancel} \usepackage[pdftex]{graphicx} \usepackage{psfrag} \usepackage{epic} \usepackage{eepic} \usepackage[matrix,arrow,curve]{xy} \input{epsf} \usepackage{tikz} \usetikzlibrary{calc} \usetikzlibrary{intersections} \usetikzlibrary{through} \usetikzlibrary{backgrounds} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{coro}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{ex}[thm]{Example} \newtheorem{con}[thm]{Construction} \def\Z{\mathds Z} \def\Q{\mathds Q} \def\R{\mathds R} \def\C{\mathds C} \def\phi{\varphi} \def\<{{\langle}} \def\>{{\rangle}} \newcommand{\area}[1]{\text{area}(#1)} \newcommand{\vol}[1]{\textit{vol}(#1)} \newcommand{\lw}[1]{\text{lw}(#1)} \newcommand{\lwd}[2]{\text{lwd}_{#1}(#2)} \newcommand{\interior}[1]{\text{int}(#1)} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\plvsl}[1]{\text{plvsl}(#1)} \newcommand{\Min}[2]{\mathrm{Min}_{#1}(#2)} \newcommand{\conv}[1]{\mathrm{conv}\left(#1\right)} \newcommand{\normalfan}[1]{\Sigma_{#1}} \renewcommand{\vec}[1]{\overrightarrow{#1}} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \definecolor{ududff}{rgb}{0.3,0.3,1} \definecolor{aabbcc}{rgb}{1,0,0} \begin{document} \title[The Fine interior of dilations of a rational polytope]{The Fine interior of dilations of a rational polytope} \author[Martin Bohnert]{Martin Bohnert} \address{Mathematisches Institut, Universit\"at T\"ubingen, Auf der Morgenstelle 10, 72076 T\"ubingen, Germany} \email{[email protected]} \begin{abstract} A nondegenerate toric hypersurface of negative Kodaira dimension can be characterized by the empty Fine interior of its Newton polytope according to recent work by Victor Batyrev, where the Fine interior is the rational subpolytope consisting of all points which have an integral distance of at least $1$ to all integral supporting hyperplanes of the Newton polytope. Moreover, we get more information in this situation if we can describe how the Fine interior behaves for dilations of the Newton polytope, e.g. if we can determine the smallest dilation with a non-empty Fine interior. Therefore, in this article we give a purely combinatorial description of the Fine interiors of all dilations of a rational polytope, which allows us in particular to compute this smallest dilation and to classify all lattice $3$-polytopes with empty Fine interior, for which we have only one point as Fine interior of the smallest dilation with non-empty Fine interior. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} The Fine interior $F(P)$ of a lattice $d$-polytope $P\subseteq \R^d$ was introduced by Jonathan Fine in \cite[§4.2.]{Fin83} to study top degree differentials of hypersurfaces using their Newton polytope. 
Originally called \textit{heart} of the Newton polytope, we now use the term \textit{Fine interior} as introduced by Miles Reid when working with plurigenera and canonical models of nondegenerate toric hypersurfaces in \cite[II, 4 Appendix]{Rei87}. If the Fine interior of the Newton polytope of a nondegenerate hypersurface in a torus is not empty, then we can construct both a unique projective model with at worst canonical singularities and minimal models of the hypersurface using the Fine interior and related combinatorial data, as recently shown by Victor Batyrev in \cite{Bat23a}. However, if the Fine interior is empty, the hypersurface has negative Kodaira dimension, and we either get the Zariski closure of the hypersurface in a suitable canonical toric $\Q-$Fano variety as a canonical $\Q$-Fano hypersurface, or we get a $\Q$-Fano fibration for the compactification of the hypersurface by \cite{Bat23b}. We focus here on the second case with an empty Fine interior and call a lattice polytope with an empty Fine interior \textit{$F$-hollow}. To study this case in detail, it is necessary to understand how the Fine interior behaves for dilations of the lattice polytope. The most important affine unimodular invariants of the lattice polytope $P$ in this context are the \textit{minimal multiplier} $\mu=\mu(P)$, which is the smallest real number $\lambda$ with $F(\lambda P)\neq \emptyset$, and the associated Fine interior $F(\mu P)$. We call a $F$-hollow lattice polytope $P$ \textit{weakly sporadic}, if $\dim(F(\mu P))=0$, which corresponds to the first situation with a canonical $\Q$-Fano hypersurface. If $\dim(F(\mu P))>0$, we get the $\Q$-Fano fibration with a combinatorial description from the lattice projection of $P$ along $F(\mu P)$. There are two main goals of this article: first to give a simple purely combinatorial description of the Fine interiors of all dilations of a rational polytope (Theorem \ref{main_theorem}), which allows in particular to compute the minimal multiplier and to see it as a rational number (Corollary \ref{min_mult_rational}), and, as a second goal, to use these tools to classify all weakly sporadic $F$-hollow $3$-polytopes (Theorem \ref{weakly_sporadic classification}, Theorem \ref{sporadic classification}). We begin by fixing the notation and give precise definitions of the Fine interior and related combinatorial objects in the next section. The third section gives the description of the Fine interior of dilations of a lattice polytope as the intersection of a hyperplane with a polyhedron of dimension $d+1$, before we take a closer look at the commutativity of intersections with hyperplanes and the computation of the Fine interior in section 4. In section 5 we introduce the minimal and some other special multipliers that mark combinatorial changes for the Fine interior during the dilation of the polytope. And in the last section we give the classification of the weakly sporadic $F$-hollow polytopes in dimension $3$. \section{The Fine interior and associated combinatorial objects} Let be $d\in \Z_{\geq 1}$, $M\cong \Z^d$ be a lattice of rank $d$, $N=\mathrm{Hom}(M,\Z)$ be the lattice dual to $M$ and $\left<\cdot,\cdot\right>\colon M\times N \to \Z$ be the natural pairing. We have real vector spaces $M_\R = M \otimes \R$ and $N_\R = N \otimes \R$ and extend the natural pairing to them. We will look now at convex geometric objects in $M_\R$ and $N_\R$ and give a brief description of the objects we will need later. For more details see e.g. \cite{Zie07}. 
The convex hull of a finite number of points in $M_\Q\subseteq M_\R$ is called a \textit{rational $d$-polytope} if there are $d+1$ affinely independent points among these points. A \textit{lattice $d$-polytope} is a rational $d$-polytope with all vertices in $M$. For $\lambda \in \R$ and a rational $d$-polytope $P\subseteq M_\R$ we define the real dilation \begin{align*} \lambda P := \{\lambda\cdot x \in M_\R \mid x \in P\} \subseteq M_\R. \end{align*} By a \textit{$d$-polytope} in $M_\R$ we mean in the following a real dilation of some rational $d$-polytope, and these are the most general polytopes we work with. If a dual vector $y \in N_\R\setminus \{0\}$ attains its minimum on a set $P\subseteq M_\R$, we denote this minimum by \begin{align*} \Min{P}{y}:= \min \{\left<x,y\right> \mid x\in P\} \end{align*} and we write $\Min{P}{y}=-\infty$ if for every $n\in \Z_{\geq 0}$ there is a point $p_n\in P$ with $\left<p_n,y\right><-n$. We can use this notation, for example, to describe a convex, closed set $P\subseteq M_\R$ with the help of the supporting hyperplane theorem as an intersection of closed half-spaces by \begin{align*} P =& \{x\in M_\R \mid \left<x,y\right> \geq \Min{P}{y} \ \forall y \in N_\R\setminus \{0\} \}. \end{align*} For a $d$-polytope $P\subseteq M_\R$ we define additional data to simplify this dual description. For a face $F \preceq P$, i.e. the intersection of $P$ with a supporting hyperplane, we have a cone \begin{align*} \sigma_F := \{ y \in N_\R \mid \Min{P}{y}=\Min{F}{y} \} \subseteq N_\R, \end{align*} the \textit{normal cone of the face $F$}. All normal cones together form a fan, which we call the \textit{normal fan $\normalfan{P}$} of the polytope $P$. We write $\Sigma_P[1]\subseteq N$ for the set of primitive ray generators of the normal fan, i.e. the primitive dual lattice vectors that generate the cones of dimension $1$. We can now use this to uniquely describe our $d$-polytope as the intersection of the finitely many closed half-spaces associated to facets, i.e. faces of dimension $d-1$, by \begin{align*} P = \{x\in M_\R \mid \left<x,n\right> \geq \Min{P}{n} \ \forall n \in \Sigma_P[1] \}. \end{align*} More generally, we call the possibly unbounded intersection of finitely many closed half-spaces in $M_\R$ a \textit{polyhedron}. A \textit{rational polyhedron} is a polyhedron for which every facet admits a normal vector from $N$, and if every facet-defining hyperplane contains $0$, we call it a \textit{polyhedral cone}. We can associate to each polytope $P$ in $M_\R$ the \textit{cone over $P$}, i.e. the polyhedral cone defined by \begin{align*} \mathrm{cone}(P):= \R_{\geq 0} (\{1\} \times P)\subseteq \R \times M_\R. \end{align*} Moreover, every polyhedron has a decomposition as a Minkowski sum of a polytope and a polyhedral cone, where the polyhedral cone is uniquely determined and is called the \textit{recession cone of the polyhedron}. We will now introduce the Fine interior of a $d$-polytope as the central combinatorial object of the article. \begin{defi}\label{Def_FineInt} Let $P\subseteq M_\R$ be a $d$-polytope. Then the \textit{Fine interior $F(P)$} of $P$ is defined as \begin{align*} F(P) := \{x\in M_\R \mid \left<x,n\right> \geq \Min{P}{n}+1 \ \forall n \in N\setminus \{0\} \} \subseteq P. \end{align*} \end{defi} In fact, we do not need to use all the dual vectors of $N\setminus \{0\}$ in Definition~\ref{Def_FineInt}. It is enough to choose the finitely many dual vectors that are part of a Hilbert basis of some cone of the normal fan $\Sigma_P$ (\cite[3.10]{Bat23a}).
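For instance, for the square $[0,3]^2\subseteq \R^2$ the four facet normals already yield the inequalities $1\leq x_1\leq 2$ and $1\leq x_2\leq 2$, and a direct check with Definition~\ref{Def_FineInt} shows that no further dual lattice vector cuts out anything more, so $F([0,3]^2)=[1,2]^2$. For the triangle $2\Delta_2:=\conv{(0,0),(2,0),(0,2)}$, on the other hand, the three facet normals already force $x_1\geq 1$, $x_2\geq 1$ and $x_1+x_2\leq 1$, so $F(2\Delta_2)=\emptyset$.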
Note that, since $F(P)$ is bounded as a subset of $P$ and can be described by finitely many of these half-spaces, the Fine interior of a $d$-polytope is itself a polytope, possibly empty or of dimension smaller than $d$. By \cite[5.4]{Bat23a} it is even possible to restrict to primitive ray generators in the canonical refinement of $\Sigma_P$, i.e. elements of \begin{align*} \normalfan{P}^{\mathrm{can}}[1] = \{ n \in N \mid n \text{ is a vertex of } \conv{\sigma \cap (N\setminus \{0\})} \text{ for a maximal cone } \sigma \in \Sigma_P\} \end{align*} and so we get \begin{align*} F(P) = \{x\in M_\R \mid \left<x,n\right> \geq \Min{P}{n}+1 \ \forall n \in \normalfan{P}^{\mathrm{can}}[1] \}. \end{align*} But this is the strongest possible a priori restriction on dual lattice vectors, since by \cite[5.3]{Bat23a} we need all $n \in \normalfan{P}^{\mathrm{can}}[1]$ if we look at $\lambda P$ for $\lambda$ large enough instead of $P$. A posteriori we could use in Definition~\ref{Def_FineInt} only those dual vectors that define half-spaces which really touch $F(P)$ after the shift by $1$. We have the following formal definition for these dual vectors: \begin{defi} Let $P\subseteq M_\R$ be a $d$-polytope with $F(P)\neq \emptyset$. Then the \textit{support of $F(P)$} is the set of dual lattice vectors \begin{align*} S_F(P):=\{n \in N \mid \Min{F(P)}{n} = \Min{P}{n}+1\}\subseteq N. \end{align*} \end{defi} In the construction of canonical models of nondegenerate hypersurfaces \cite{Bat23a} there is another rational polytope, defined by the support of the Fine interior, which turns out to be very helpful and which we introduce now. \begin{defi} Let $P\subseteq M_\R$ be a $d$-polytope with $F(P)\neq \emptyset$. Then the \textit{canonical hull of $P$} is the $d$-polytope \begin{align*} C(P):=\{x\in M_\R \mid \left<x,\nu\right> \geq \Min{P}{\nu} \ \forall \nu \in S_F(P) \}\supseteq P. \end{align*} Moreover, $P$ is called \textit{canonically closed} if $C(P)=P$, i.e. if and only if $\Sigma_{P}[1]\subseteq S_F(P)$. \end{defi} We end this section with two important examples of canonically closed lattice polytopes that we will need later. \begin{ex}\label{Ex_canonically_closed_dim2} All lattice polygons, i.e. lattice polytopes of dimension $d=2$, with non-empty Fine interior are canonically closed by \cite[4.4]{Bat23a}. \end{ex} \begin{ex}\label{Ex_canonically_closed_reflexive} In arbitrary dimension the \textit{reflexive polytopes}, i.e. the lattice polytopes $P\subseteq M_\R$ which have $0\in \interior{P}$ and for which the dual polytope \begin{align*} P^* := \{ y \in N_\R \mid \left<x,y\right> \geq -1 \ \forall x \in P\} \subseteq N_\R \end{align*} is a lattice polytope, are canonically closed. More precisely, they are exactly those canonically closed lattice polytopes which have $\{0\}$ as their Fine interior (\cite[4.9]{Bat23a}). Reflexive polytopes are well known because of their important role in combinatorial mirror symmetry (\cite{Bat94}). Note that we also get interesting objects for combinatorial mirror symmetry in some cases when the lattice polytope has only $\{0\}$ as its Fine interior, but is not canonically closed (\cite{Bat17}). \end{ex} \section{Simultaneous description of the Fine interiors of all dilations} In this section we want to describe the Fine interiors of all real dilations of a $d$-polytope simultaneously. As a first step, we want to understand the behavior of $\Min{P}{y}$, $\normalfan{P}$ and $S_F(P)$ under real dilations. We do this in the following two lemmata. \begin{lemma}\label{Dilations_and_normalfans} Let $P\subseteq M_\R$ be a $d$-polytope and $\lambda \in \R_{> 0}$.
Then we have for all $y\in N_\R$ that $\lambda\Min{P}{y}=\Min{\lambda P}{y}$ and $\Sigma_P= \Sigma_{\lambda P}$. \end{lemma} \begin{proof} Since $\lambda > 0$ the multiplication with $\lambda$ respects the minimum and due to the linearity of $\left<\cdot, y\right>$ we have \begin{align*} \lambda\Min{P}{y}=& \lambda \cdot \min \{\left<x,y\right> \mid x\in P\}= \min \{\left<\lambda x,y\right> \mid x\in P\} = \min \{\left<x,y\right> \mid x\in \lambda P\} \\=& \Min{\lambda P}{y}. \end{align*} If we use this not only for $P$, but also for a face $F\preceq P$, we get for the normal cones \begin{align*} \sigma_F = \{ y \in N_\R \mid \Min{P}{y}=\Min{F}{y} \} = \{ y \in N_\R \mid \Min{\lambda P}{y}=\Min{\lambda F}{y} \}= \sigma_{\lambda F} \end{align*} and so we have $\Sigma_P=\Sigma_{\lambda P}$. \end{proof} \begin{lemma}\label{Dilations_and_support} Let $P\subseteq M_\R$ be a $d$-polytope with $F(P)\neq \emptyset$. If $\nu \in S_F(P)$, then $\nu \in S_F(\lambda P)$ for all $\lambda \in \R_{\geq 1}$. If $F(P)$ is full-dimensional and $\nu \in \Sigma_{F(P)}[1]$, then we also have $\nu \in \Sigma_{F(\lambda P)}[1]$ for all $\lambda \in \R_{\geq 1}$. \end{lemma} \begin{proof} Since $\nu \in S_F(P)$, there is a point $x\in F(P)$ with $\left<x,\nu\right>=\Min{P}{\nu}+1$ and $\left<x,n\right>\geq \Min{P}{n}+1$ for all $n\in N\setminus \{0\}$. For each $n\in N\setminus \{0\}$ we take some $x_n\in P$ with $\left<x_n,n\right>=\Min{P}{n}$ and set $w_n:=x-x_n\in M_\R$. Then we have $x=x_\nu+w_\nu=x_n+w_n$ for all $n\in N\setminus\{0\}$ and \begin{align*} \left<w_\nu,\nu\right>=\left<x-x_\nu,\nu\right>=1, \left<w_n,n\right>=\left<x-x_n,n\right>\geq 1. \end{align*} Thus we get for $\lambda \in \R_{\geq 1}$ and $n\in N\setminus\{0\}$ with \ref{Dilations_and_normalfans} \begin{align*} \left<\lambda x_\nu+w_\nu,n\right>=&\left<\lambda(x_n+w_n-w_\nu)+w_\nu,n\right>\\=& \lambda \left<x_n,n\right>+\lambda\left<w_n,n\right>-(\lambda-1)\left<w_\nu,n\right>\\= &\lambda \Min{P}{n}+\lambda\left<w_n,n\right>-(\lambda-1)\left<x_n+w_n-x_\nu,n\right>\\ = &\Min{\lambda P}{n}+\left<w_n,n\right>-(\lambda-1)\Min{P}{n}+(\lambda-1)\left<x_\nu,n\right>\\ \geq & \Min{\lambda P}{n}+1, \end{align*} because $x_\nu \in P$ and so $\left<x_\nu,n\right>\geq \Min{P}{n} $. So $\lambda x_\nu+w_\nu \in F(\lambda P)$ and we get $\nu \in S_F(\lambda P)$ by \begin{align*} \left<\lambda x_\nu+w_\nu,\nu\right>=\lambda \Min{P}{\nu}+1=\Min{\lambda P}{\nu}+1. \end{align*} If $\nu \in \Sigma_{F(P)}[1]$, then we can do the same calculations not only for $x$ but also for $d$ affine independent points on the corresponding facet to $\nu$ of $F(P)$ and get $d$ affine independent points on a facet of $F(\lambda P)$ defined by $\nu$ and thus $\nu \in \Sigma_{F(\lambda P)}[1]$ for all $\lambda \in \R_{\geq 1}$. \end{proof} We are now ready to describe the Fine interior of all real delations of a $d$-polytope in $M_\R$ simultaneously with the help of a suitable polyhedron in $\R \times M_\R$, which lies in the cone over the polytope. \begin{thm}\label{main_theorem} Let $P\subseteq M_\R$ be a $d$-polytope and $\lambda \in \R_{\geq 0}$. 
Then \begin{align*} \mathcal{F}(P):=&\left\{(x_0,x)\in \R \times M_\R \mid \left<(x_0,x),(-\Min{P}{\nu},\nu)\right> \geq 1 \ \forall \nu \in \normalfan{P}^{\mathrm{can}}[1] \right\} \end{align*} is a polyhedron without compact facets, the number of facets is $|\normalfan{P}^{\mathrm{can}}[1]|$, the recession cone of $\mathcal{F}(P)$ is $\mathrm{cone}(P)=\R_{\geq 0}(\{1\} \times P)$, and for all $\lambda \in \R_{\geq 0}$ we get the Fine interior of $\lambda P$ from $\mathcal{F}(P)$ by the identity \begin{align*} \{\lambda \} \times F(\lambda P) = \mathcal{F}(P)\cap \{(x_0,x)\in \R \times M_\R \mid x_0=\lambda\}. \end{align*} \end{thm} \begin{proof} The Fine interior of $\lambda P$ is given by \begin{align*} F(\lambda P)=\{x\in M_\R \mid \left<x,\nu\right> \geq \Min{\lambda P}{\nu}+1 \ \forall \nu \in \normalfan{\lambda P}^\mathrm{can}[1]\}. \end{align*} With \ref{Dilations_and_normalfans} we get \begin{align*} F(\lambda P)=&\{x\in M_\R \mid \left<x,\nu\right> \geq \lambda \Min{P}{\nu}+1 \ \forall \nu \in \normalfan{P}^\mathrm{can}[1]\}\\ =&\{x\in M_\R \mid \left<(\lambda,x),(-\Min{P}{\nu},\nu)\right> \geq 1 \ \forall \nu \in \normalfan{P}^\mathrm{can}[1] \} \end{align*} and so we have \begin{align*} &\{\lambda\} \times F(\lambda P)\\=&\left\{(x_0,x)\in \R \times M_\R \mid \left<(x_0,x),(-\Min{P}{\nu},\nu)\right> \geq 1 \ \forall \nu \in \normalfan{P}^\mathrm{can}[1] \right\}\cap \{x_0=\lambda\}. \end{align*} Since $\normalfan{P}[1]\subseteq \normalfan{P}^{\mathrm{can}}[1]$ we get the recession cone of $\mathcal{F}(P)$ by \cite[Prop. 1.12.]{Zie07} as \begin{align*} &\left\{(x_0,x)\in \R \times M_\R \mid \left<(x_0,x),(-\Min{P}{\nu},\nu)\right> \geq 0 \ \forall \nu \in \normalfan{P}^{\mathrm{can}}[1] \right\}\\ =& \left\{(x_0,x)\in \R \times M_\R \mid \left<x,\nu\right> \geq x_0\cdot \Min{P}{\nu} \ \forall \nu \in \normalfan{P}^{\mathrm{can}}[1] \right\}\\ =& \left\{(x_0,x)\in \R \times M_\R \mid x\in x_0\cdot P \right\}\\ =& \mathrm{cone}(P). \end{align*} The facets of $\mathcal{F}(P)$ are unbounded because from $\nu \in \normalfan{F(\lambda_0P)}[1]$ we get $\nu \in \normalfan{F(\lambda P)}[1]$ for all $\lambda\geq \lambda_0$ by \ref{Dilations_and_support} and the number of facets is $|\normalfan{P}^{\mathrm{can}}[1]|$ since we need all vertices of the canonical refinement to describe $F(\lambda P)$ for $\lambda$ large enough. \end{proof} We now focus on the situation where $P$ is at least a rational polytope, so that $\mathcal{F}(P)$ also becomes a rational polyhedron. For lattice polytopes we want to give an alternative description of $\mathcal{F}(P)$ using the Fine interior of $\mathrm{cone}(P)$. Although we have not yet given a formal definition of the Fine interior of cones, it seems reasonable to have this definition completely analogous. Since $n \in N \setminus \{0\}$ attains a finite minimum on $\sigma$ if and only if $n$ is an element of the dual cone \begin{align*} \check{\sigma}=\{y\in N_\R \mid \left<x,y\right> \geq 0 \ \forall x \in \sigma \} \subseteq N_\R \end{align*} and then the minimum is $0$, we have \begin{align*} F(\sigma) :=& \{x\in M_\R \mid \left<x,n\right> \geq \Min{\sigma}{n}+1 \ \forall n \in N\setminus \{0\} \}\\ =& \{x\in M_\R \mid \left<x,n\right> \geq 1 \ \forall n \in \check{\sigma} \cap N\setminus \{0\} \}. \end{align*} This allows us in some cases to get $\mathcal{F}(P)$ directly as the Fine interior of $\mathrm{cone}(P)$, as the following corollary shows. 
\begin{coro}\label{Fine_interior_cone} If $P$ is a lattice polytope, then we have for all $\lambda\in \R_{\geq 1}$ that \begin{align*} \mathcal{F}(P)\cap \{x_0=\lambda\} =& F(\lambda P) \times \{\lambda\} = F(\mathrm{cone}(P)\cap \{x_0=\lambda\}) \times \{\lambda\}\\ =&F(\mathrm{cone}(P)) \cap \{x_0=\lambda\}. \end{align*} In particular, we have $\mathcal{F}(P)=F(\mathrm{cone}(P))$ if and only if $F(\lambda P)=\emptyset$ for all $\lambda<1$. \end{coro} \begin{proof} By \ref{main_theorem} and the definition of $\mathrm{cone}(P)$ we get \begin{align*} \mathcal{F}(P)\cap \{x_0=\lambda\}=F(\lambda P) \times \{\lambda\} =F(\mathrm{cone}(P)\cap \{x_0=\lambda\}) \times \{\lambda\}. \end{align*} Since we have by definition \begin{align*} &F(\mathrm{cone}(P))=\\ &\{(x_0,x)\in \R \times M_\R \mid \left<(x_0,x),(n_0,n)\right> \geq 1 \ \forall (n_0,n) \in \check{\mathrm{cone}(P)} \cap (\Z \times N) \setminus \{(0,0)\} \}, \end{align*} it is by \ref{main_theorem} and for $\lambda\geq 1$ enough to see that the vertices of \begin{align*} \conv{\check{\mathrm{cone}(P)} \cap (\Z \times N\setminus \{0\} )} \end{align*} are exactly given by $(-\Min{P}{\nu},\nu)$ with $\nu \in \normalfan{P}^\mathrm{can}$. First notice that $-\Min{P}{\nu}\in \Z$ since $P$ is a lattice polytope. Also, for all $x\in P$ we have \begin{align*} \left<(-\Min{P}{\nu},\nu),(\lambda,\lambda x)\right>=-\lambda \Min{P}{\nu}+\lambda\left<\nu,x\right>)\geq 0 \end{align*} and therefore $(-\Min{P}{\nu},\nu)\in \check{\mathrm{cone}(P)}$. With the same calculation we see that the facet $F_p\preceq \check{\mathrm{cone}(P)}$ corresponding to a vertex $p\preceq P$ is \begin{align*} F_p=\{(y_0,y)\in (\Z \times N)_\R \mid y \in \sigma_p, y_0=-\Min{P}{y} \}. \end{align*} Thus every lattice point $(n_0,n)\in \check{\mathrm{cone}(P)}\cap (\Z \times N\setminus \{0\})$ shares its $N$-projection with a lattice point in a facet, namely $(-\Min{P}{n},n)$. Thus the convex hull of $\check{\mathrm{cone}(P)}\cap (\Z \times N\setminus \{0\})$ is the convex hull of the lattice points different from $0$ in the facets, and the vertices of this convex hull are the points $(-\Min{P}{\nu}, \nu)$ with $\nu \in \normalfan{P}^\mathrm{can}$ by definition of the canonical refinement. \end{proof} We end this section with a characterization of the cases where $\mathcal{F}(P)$ is as simple as possible, i.e. it is $\mathcal{F}(P)$ is a rational cone shifted by a lattice vector. For this, recall the following notions for Fano polytopes (for more background on Fano polytopes see e.g. the survey \cite{KN12}). \begin{defi} Let $P\subseteq M_\R$ be a lattice $d$-polytope. If $\mathrm{int}(P)\cap M=\{0\}$, then we call $P$ a \textit{canonical Fano polytope}. If there is some $k\in \Z_{\geq 1}$ such that $kP$ is affine unimodular equivalent to a reflexive polytope, then we call $P$ a \textit{Gorenstein polytope of index $k$}. \end{defi} We now get the following characterizations for the cases where $\mathcal{F}(P)$ is a rational cone shifted by a lattice vector. \begin{coro}\label{Fine_lattice_cone} Let $P$ be a rational polytope. Then $\mathcal{F}(P)$ is a rational cone shifted by $(1,0)$ if and only if $P^*$ is a canonical Fano polytope. \end{coro} \begin{proof} Since $\mathrm{cone}(P)$ is the recession cone of $\mathcal{F}(P)$ by \ref{main_theorem}, $\mathcal{F}(P)$ is a rational cone shifted by $(1,0)$ if and only if $\mathcal{F}(P)=(1,0)+\mathrm{cone}(P)$. 
But by \ref{main_theorem} this is equivalent to $\Sigma_P[1]=\Sigma_P^{\mathrm{can}}[1]$, $F(P)=\{0\}$, and $\Min{P}{\nu}=-1$ for all $\nu \in \Sigma_P[1]$, which we have if and only if $P^\ast$ is a lattice polytope having only $0$ as an interior lattice point. \end{proof} \begin{coro} Let $P$ be a lattice polytope. Then $\mathcal{F}(P)$ is a lattice cone with vertex $(k,x), k\in \mathbb{Z}_{\geq 1}, x\in M$ if and only if $kP-x$ is a reflexive polytope. In particular, $P$ is then a Gorenstein polytope of index $k$. \end{coro} \begin{proof} $\mathcal{F}(P)$ is a lattice cone with vertex $(k,x)\in \Z \times M$ if and only if $\mathcal{F}(kP-x)$ is a lattice cone with vertex $(0,1)$. This is equivalent by \ref{Fine_lattice_cone} to the situation where $(kP-x)^*$ is a canonical Fano polytope, and since $kP-x$ is a lattice polytope, this is the case if and only if $kP-x$ is reflexive. \end{proof} \section{The Fine interior and intersections with hyperplanes} The result in \ref{Fine_interior_cone} motivates us to look at cases where we can do Fine interior computations in codimension $1$ using intersections with hyperplanes. We will see in this section that we have such results for dilations of lattice polytopes with small lattice width. Recall that the \textit{lattice width} $\lw{P}$ of a lattice polytope $P\subseteq M_\R$ is defined by \begin{align*} \lw{P} := \min \{ -\Min{P}{-n}-\Min{P}{n} \mid n \in N\setminus \{0\} \} \end{align*} and a dual vector $n_{lw}\in N$ with \begin{align*} \lw{P} = -\Min{P}{-n_{lw}}-\Min{P}{n_{lw}} \end{align*} is called a \textit{lattice width direction}. We start with the lattice pyramid $\mathrm{Pyr}(P)$ of a lattice $d$-polytope $P\subseteq M_\R$, which is defined by \begin{align*} \mathrm{Pyr}(P):=\conv{\{1\}\times P, (0,0)}. \end{align*} and clearly has $\lw{\mathrm{Pyr}(P)}=1$. Since we can also consider $\mathrm{Pyr}(P)$ as the intersection of $\mathrm{cone}(P)$ with the half-space $x_0\leq 1$, we get the following corollary from \ref{Fine_interior_cone}. \begin{coro}\label{Fine_interior_pyramid} Let $P\subseteq M_\R$ be a lattice $d$-polytope, $\mu,\lambda\in \Q_{\geq 0}$. Then we have for dilations of the lattice pyramid $\mathrm{Pyr}(P)\subseteq \R \times M_\R$ \begin{align*} F(\mu\mathrm{Pyr}(P))\cap \{x_0=\lambda\}\cong \begin{cases} F(\lambda P) &\text{ if } 1\leq \lambda \leq \mu-1\\ \emptyset &\text{ else}. \end{cases} \end{align*} and if $F(\mu\mathrm{Pyr}(P))\neq \emptyset$, then $\mu\geq 2$ and \begin{align*} &S_F(\mu\mathrm{Pyr}(P))\\=&\begin{cases}\{(-\Min{P}{\nu},\nu)\in \Z \times N \mid \nu \in S_F(\mu-1)P\}\cup \{(0,\pm 1)\} &\text{ if } F(P)\neq \emptyset\\ \{(-\Min{P}{\nu},\nu)\in \Z \times N \mid \nu \in S_F(\mu-1)P\}\cup \{(0,-1)\} &\text{ if } F(P)=\emptyset. \end{cases} \end{align*} In particular, $F(2\mathrm{Pyr}(P))=\{1\} \times F(P)$ and for $F(P)\neq \emptyset$ we have that $2\mathrm{Pyr}(P)$ is canonically closed if and only if $P$ is canonically closed. \end{coro} \begin{proof} For $(x_0,x)\in F(\mu \mathrm{Pyr}(P))\subseteq \R \times M_\R$ we have \begin{align*} \left<(x_0,x),(1,0)\right>=x_0\geq \Min{\mu\mathrm{Pyr}(P)}{(1,0)}+1=1 \end{align*} and \begin{align*} \left<(x_0,x),(-1,0)\right>=-x_0\geq \Min{\mu\mathrm{Pyr}(P)}{(-1,0)}+1=-\mu+1. \end{align*} So we get $F(\mu\mathrm{Pyr}(P))\cap \{x_0=\lambda\}=\emptyset$ for $\lambda<1$ and $\lambda \geq \mu-1$ and support vectors $(0,\pm 1)$ in the appropriate cases. 
If we can show that all the other support vectors of $F(\mu\mathrm{Pyr}(P))$ attain their minimum on $\mu \mathrm{Pyr}(P)$ on a face of dimension $1$ or greater, we get the result as a corollary of \ref{Fine_interior_cone}. If $\mu \in \Z$, this is clear, because $\mu\mathrm{Pyr}(P)\cap \{x_0=1\}\cong P$ and $\mu\mathrm{Pyr}(P)\cap \{x_0=\mu-1\}\cong(\mu-1)P$ are lattice polytopes in this case, and so there is no additional support vector which attains its minimum only on a vertex of $\mu\mathrm{Pyr}(P)$. For $\mu \in \Q$ we get this, since the situation near vertices after a translation with a suitable vector from $\Q^{d+1}$ is locally the same as for $\floor{\mu}\mathrm{Pyr}(P)$ and the calculation of the Fine interior commutes with translations. \end{proof} We cannot expect that we can always commute Fine interior calculations with intersections with hyperplanes. Nevertheless, we have at least the partial result of the following lemma. \begin{lemma}\label{Intersection_of_Fine_interior} Let $P\subseteq \R \times M_\R$ be a rational $d$-polytope with $P\cap (\{1\} \times M_\R ) \neq \emptyset$ and $P\cap (\{-1\} \times M_\R ) \neq \emptyset$, and $P_0 \subseteq M_\R$ defined by $\{ 0 \} \times P_0 = P \cap (\{0\} \times M_\R )$. Then $\{ 0 \} \times F(P_0) \subseteq F(P) \cap (\{0\} \times M_\R )$. \end{lemma} \begin{proof} If $(0,x)\notin F(P) \cap (\{0\} \times M_\R )$, then we have some $(\nu_0,\nu)\in (\Z \times N)\setminus \{(0,0)\}$ with \begin{align*} \left< (0,x),(\nu_0,\nu)\right> < \Min{P}{(\nu_0,\nu)}+1. \end{align*} Note that $\nu \neq 0$, because otherwise we would have \begin{align*} \Min{P}{(\nu_0,0)}+1\leq-|\nu_0|+1\leq 0=\left<(0,x),(\nu_0,0)\right> \end{align*} for all $\nu_0\neq 0$, since $P\cap (\{1\} \times M_\R ) \neq \emptyset\neq P\cap (\{-1\} \times M_\R )$. So we get $\nu\neq 0$ with \begin{align*} \left<x,\nu\right>=\left< (0,x),(\nu_0,\nu)\right> < \Min{P}{(\nu_0,\nu)}+1 \leq& \Min{\{0\} \times P_0}{(\nu_0,\nu)}+1=\Min{P_0}{\nu}+1 \end{align*} and so $x\notin F(P_0)$. \end{proof} As the main result of this section we now show that the two operations do commute for a lattice polytope of lattice width $2$ whose middle polytope is also a lattice polytope. \begin{prop}\label{FineInterior_Width2} Let $P\subseteq \R \times M_\R$ be a lattice polytope of lattice width $2$ with $P\subseteq [-1,1] \times M_\R$, such that $\{0\} \times P_0 :=P\cap (\{0\} \times M_\R)$ is also a lattice polytope.\\ Then \begin{align*} F(P)=\{0\} \times F(P_0) \end{align*} and if $F(P)\neq \emptyset$, then \begin{align*} S_F(P_0)=\{\nu \in N\setminus\{0\} \mid \exists \nu_0\in \Z, (\nu_0,\nu)\in S_F(P)\} \end{align*} and $S_F(P)$ is given as the union of $\{(-1,0),(1,0)\}$ with \begin{align*} \{(\nu_0,\nu) \in \Z \times N \mid \nu\in S_F(P_0), \Min{P}{(\nu_0,\nu)}=\Min{\{0\} \times P_0}{(\nu_0,\nu)}\}. \end{align*} \end{prop} \begin{proof} Since $P\subseteq [-1,1] \times M_\R$ we have $F(P)=F(P)\cap (\{0\} \times M_\R)$ and so we have $F(P)\supseteq \{ 0 \} \times F(P_0)$ by \ref{Intersection_of_Fine_interior}. Now we show $F(P)\subseteq \{0\} \times F(P_0)$. Since $P\subseteq [-1,1] \times M_\R$ we have $F(P)\subseteq \{ 0 \} \times P_0$ and it is enough to show that $(0,x)\notin F(P)$ for all $x\notin F(P_0)$. For $x\in M_\R \setminus F(P_0)$ we have some $0\neq \nu \in N$ with \begin{align*} \left<x,\nu\right> < \Min{P_0}{\nu}+1. \end{align*} Since $P_0$ is a lattice polytope, we have a vertex $e\in M$ of $P_0$ with $\Min{P_0}{\nu}=\left<e,\nu\right>$.
Let $f=(1,f')\in \Z \times M$ be a vertex of $P$ such that $\nu_0:=\left<e-f',\nu\right>\in \Z$ is maximal. Then we have \begin{align*} \left<(1,f'),(\nu_0,\nu)\right>=\nu_0+\left<f',\nu\right>=\left<e,\nu\right>=\Min{P_0}{\nu} \end{align*} and since $\left<(0,e),(\nu_0,\nu)\right>=\left<e,\nu\right>=\Min{P_0}{\nu}$ and $\nu_0$ was chosen maximal, we get that $(\nu_0,\nu)$ defines a hyperplane supporting $P$ on a face containing $(0,e)$ and $f$ and we have \begin{align*} \Min{P}{(\nu_0,\nu)}=\Min{P_0}{\nu} \end{align*} and \begin{align*} \left<(0,x),(\nu_0,\nu)\right>=\left<x,\nu\right><\Min{P_0}{\nu}+1=\Min{P}{(\nu_0,\nu)}+1, \end{align*} i.e. $(0,x)\notin F(P)$. By the calculations above we also get that for every $\nu \in S_F(P_0)$ there exists a $\nu_0\in \Z$, for example with $\nu_0$ defined as above, with $(\nu_0,\nu)\in S_F(P)$. Since $P_0$ is a lattice polytope and $F(P)\subsetneq P_0$, we have $\Min{P}{(\nu_0,\nu)}=\Min{ \{ 0 \} \times P_0}{(\nu_0,\nu)}$ for all $(\nu_0,\nu)\in S_F(P)\setminus \{(\pm 1,0)\}$, and from $F(P)=\{ 0 \} \times F(P_0)$ we get $\nu \in S_F(P_0)$. \end{proof} As a corollary we can use two-dimensional results from \cite{BS24} to explain some experimental results on the Fine interior of lattice $3$-polytopes in \cite{BKS22}. \begin{coro} Let $P \subseteq \R^3$ be a lattice $3$-polytope with $|\interior{P}\cap \Z^3| \leq 1$. Then $\dim F(P)\neq 2$. \end{coro} \begin{proof} Suppose $\dim F(P) = 2$. Then $P$ must have lattice width $2$ and we can assume $P\subseteq [-1,1] \times \R^2$. We have that the half-integral polygon $P_0\subseteq \R^2$ defined by $\{0\} \times P_0 := P \cap (\{0\} \times \R^2)$ has at least $1$ interior integral point. By the classification results of maximal half-integral polygons in \cite[section 5 and 6]{BS24} we know that we can then assume $P_0 \subseteq \R \times [-1,1]$ or $P_0 \subseteq \conv{(0,0), (3,0), (0,3)}$. In particular, we can assume that $P_0$ is part of a lattice polygon $\bar{P_0}$ which has Fine interior of dimension smaller than $1$. Thus by \ref{FineInterior_Width2} we get $\dim F(\conv{P,\bar{P_0}})<2$ for the lattice polytope $\conv{P,\bar{P_0}}$ and from $F(P) \subseteq F(\conv{P,\bar{P_0}})$ we get the contradiction $\dim F(P)<2$. \end{proof} The following example shows that we cannot omit central premises in \ref{FineInterior_Width2}. \begin{ex} For $P:=\conv{(-1,-1,-1),(1,0,-1),(0,1,-1),(0,0,1)}$, we have \begin{align*} F(P\cap \{x_3=0\})=\emptyset \neq \{(0,0,0)\}=F(P)\cap \{x_3=0\}=F(P). \end{align*} Note that we have $\lw{P}=2$ but \begin{align*} P\cap \{x_3=0\}=\conv{(-1/2,-1/2,0),(1/2,0,0),(0,1/2,0)} \end{align*} is not a lattice polytope. Even if the intersection is a lattice polytope, we cannot in general commute the calculation of the Fine interior with the intersection. See for example \begin{align*} &F(2P\cap \{x_3=0\})=F(\conv{(-1,-1,0),(1,0,0),(0,1,0)})=\{(0,0,0)\}\\\neq & \conv{(-1/2,-1/2,0),(1/2,0,0),(0,1/2,0)}=F(2P)\cap \{x_3=0\}. \end{align*} \end{ex} We will later use the following corollary which helps us to understand the Fine interior for dilations of lattice polytopes of lattice width $1$. \begin{coro}\label{FineInterior_Width1} Let $P$ be a lattice polytope of lattice width $1$, $P\subseteq [0,1] \times M_\R$ and $\{\frac{1}{2}\} \times P_{1/2}:=P \cap (\{\frac{1}{2}\} \times M_\R)$. Then $2P_{1/2}$ is a lattice polytope with \begin{align*} F(2P)=\{1\} \times F(2P_{1/2}).
\end{align*} Moreover, if $F(2P)\neq \emptyset$, then we get that $2P$ is canonically closed if and only if $2P_{1/2}$ is canonically closed. \end{coro} \begin{proof} From \ref{FineInterior_Width2} we get $F(2P)=\{1\} \times F(2P_{1/2})$ and since $2P$ has no vertices with first coordinate $1$, we have for all $\nu \in S_F(2P_{1/2})$ exactly one $\nu_0 \in \Z$ with $(\nu_0,\nu)\in S_F(2P)$ and $(\nu_0,\nu)$ attains its minimum on $P$ on an edge. So $\Sigma_{P}[1]\in S_F(2P)$ if and only if $\Sigma_{P_{1/2}}[1] \in S_F(2P_{1/2})$. \end{proof} We can even go further for $d=2$ since the Fine interior of a lattice polygon is the convex hull of the interior lattice points by \cite[2.9]{Bat17} and every lattice polygon with non-empty Fine interior is canonically closed by \ref{Ex_canonically_closed_dim2}. \begin{coro}\label{FineInterior_Width1_dim3} Let $P\subseteq \R \times M_\R$ be a lattice $3$-polytope of lattice width $1$.\\ Then $F(2P)=\conv{\mathrm{int}(2P)\cap M}$ and if $F(2P)\neq \emptyset$, then $2P$ is canonically closed. \end{coro} We finish this section with an important example for the situation in the corollary. \begin{ex} Let $\Delta\subseteq \R^3$ be an empty lattice $3$-simplex of normalized volume $q$, i.e. $|\Delta \cap \Z^3|=4$. Then we have by \cite{Whi64} some $p\in \Z$ with $gcd(p,q)=1$ and \begin{align*} \Delta \cong \Delta(p,q) := \conv{(0,0,0), (1,0,0),(0,0,1),(p,q,1)}, \end{align*} in particular, the lattice width of $\Delta$ is $1$. By \ref{FineInterior_Width1_dim3} we get \begin{align*} F(2\Delta(p,q))=&\conv{\mathrm{int}(2\Delta(p,q))\cap\Z^3}\\ =&\conv{\mathrm{int}(\conv{(0,0),(1,0),(p,q),(p+1,q)})\cap \Z^2}\times \{1\}\\ =&\conv{\left\{\left( \ceil{ \frac{jp}{q} },j,1\right) \mid 0< j< q\right\}}. \end{align*} Thus $F(2\Delta(p,q))=\emptyset$ if and only if $q=1$. If the interior lattice points are collinear, we get after a shearing $p=\pm 1$. So we get for $q>1$ \begin{align*} \dim(F(2\Delta(p,q)))= \begin{cases} 0 & \text{ if } q=2\\ 1 & \text{ if } q\geq 3 \text{ and } p\equiv \pm 1 \mod q\\ 2 & \text{ else. } \end{cases} \end{align*} \end{ex} \section{Special multipliers of a lattice polytope} In this section we focus on special dilations of a rational $d$-polytope $P$ corresponding to vertices of $\mathcal{F}(P)$. \begin{defi} Let $P\subseteq M_\R$ be a rational $d$-polytope, $\{(\mu_1,p_1),\dotsc,(\mu_n,p_n)\}$ the set of vertices of $\mathcal{F}(P)\subseteq \R \times M_\R$.\\ We call every $\mu_i$ a \textit{special multipier of $P$}, $\mu(P):= \min_i \mu_i$ the \textit{minimal multiplier}, and $\mu_{max}(P):= \max_i \mu_i$ the \textit{maximal multiplier of $P$}. \end{defi} Since $P$ is rational, we now directly get the rationality of all special multipliers. \begin{coro}\label{rationality_multipliers} Let $P\subseteq M_\R$ be a rational $d$-polytope and $(-\Min{P}{\nu_k}, \nu_k)\in \Q^{d+1}$ for $1\leq k\leq d+1$ linear independent vectors defining a vertex $(\mu_i, p_i)$ of the polyhedron $\mathcal{F}(P)\subseteq \R \times M_\R$.\\ Then we have \begin{align*} \mu_i=\frac{\sum_{j=1}^{d+1}(-1)^{j+d+1}\begin{vmatrix}\nu_1 & \dots & \hat{\nu_j} & \dots & \nu_{d+1}\end{vmatrix}}{\sum_{j=1}^{d+1}(-1)^{j+d}\Min{P}{\nu_j}\begin{vmatrix}\nu_1 & \dots & \hat{\nu_j} & \dots & \nu_{d+1}\end{vmatrix}}\in \mathbb{Q}, \end{align*} were the vectors $\hat{\nu_j}$ are canceled. \end{coro} \begin{proof} This follows directly from Cramer's rule and the Laplace expansion of the occurring determinants. 
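Indeed, the vertex $(\mu_i,p_i)$ is the unique solution of the linear system \begin{align*} -\Min{P}{\nu_k}\, x_0+\left<x,\nu_k\right> = 1, \qquad k=1,\dotsc,d+1, \end{align*} so Cramer's rule expresses $\mu_i$ as a quotient of two $(d+1)\times(d+1)$ determinants, namely the coefficient determinant with the column belonging to $x_0$ replaced by the all-ones vector, divided by the coefficient determinant itself. Expanding the first determinant along the all-ones column and the second one along the column of the entries $-\Min{P}{\nu_k}$ gives, up to a common sign, exactly the two sums in the claimed formula.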
\end{proof} In particular, we get now an easy combinatorial proof for the characterization of the minimal multiplier in \cite[3.8.]{Bat23a} and for its rationality proven in \cite[3.2.]{Bat23b} by methods from toric geometry. \begin{coro}\label{min_mult_rational} Let $P\subseteq M_\R$ be a full dimensional rational polytope.\\ Then $\mu(P)\in \mathbb{Q}$ and $0\leq \dim F(\lambda P)\leq d-1$ if and only if $\lambda=\mu(P)$. \end{coro} \begin{proof} We have $\mu(P)\in \Q$ by \ref{rationality_multipliers}. Moreover, the intersection of $\mathcal{F}(P)\subseteq M_\R\times \R$ with $M_\R \times \lambda$ has dimension between $0$ and $d-1$ if and only if $\lambda=\mu(P)$ because $\mathcal{F}(P)$ has no facet parallel to $M_\R \times \{0\}$ since $\nu \neq 0$ for all $\nu \in \normalfan{P}^\mathrm{can}[1]$. \end{proof} For the lattice pyramid $\mathrm{Pyr}(P)$ we can compute the minimal multiplier from the minimal multiplier of $P$ using the results from the last section. \begin{coro}\label{multipliers_pyramid} Let $P \subseteq M_\R$ be a lattice polytope with $\mu:=\mu(P)\geq 1$. Then we get $\mu(\mathrm{Pyr}(P))=\mu+1$ for the minimal multiplier of the lattice pyramid and \begin{align*} F((\mu+1)\mathrm{Pyr}(P))\cong F(\mu P). \end{align*} \end{coro} \begin{proof} Since $\mu\geq 1$ we have by \ref{Fine_interior_pyramid} \begin{align*} F((\mu+1)\mathrm{Pyr}(P))\cap (\{\lambda\} \times M_\R)=\begin{cases} F(\mu P) & \text{ if } \lambda=\mu\\ \emptyset & \text{ else.} \end{cases} \end{align*} Thus we get \begin{align*} F((\mu+1)\mathrm{Pyr}(P))=F((\mu+1)\mathrm{Pyr}(P))\cap (\{\mu\} \times M_\R ) \cong F(\mu P). \end{align*} \end{proof} We now introduce an invariants of Ehrhart theory related to the minimal multiplier. Recall that for a lattice $d$-polytope $P\subseteq M_\R$, by results of Eugène Ehrhart and Richard P. Stanley, we have a unique polynomial $h^*_P \in \Z[x]$ with non-negative coefficients and $\deg h^*_P\leq d$ such that \begin{align*} \sum_{k \in \Z_{\geq 0}} |kP \cap M|x ^k=\frac{h^*_P(x)}{(1-x)^{d+1}}. \end{align*} We write $\deg(P) := \deg(h^*_P)$ and have for the \textit{codegree of $P$}, i.e. $\mathrm{codeg}(P):=d+1-\deg(P)$ that $\mathrm{codegree}(P)= \min \{k \in \Z_{>0} \mid |\interior{kP} \cap M|>0\}$ and the highest non-zero coefficient of the $h*$ polynomial gives the number of interior lattice points of $\mathrm{codeg}(P)\cdot P$. For this and more results on Ehrhart theory see for example \cite{BS15}. We get the following connection with the minimal multiplier. \begin{coro} Let $P\subseteq M_\R$ be a lattice $d$-polytope. Then we have the as a upper bound for the minimal multiplier $\mu(P)\leq \mathrm{codegree}(P)\leq d+1$. \end{coro} Since a rational $d$-polytope with lattice width smaller than $2$ has empty Fine interior, we get also a lower bound for $\mu(P)$. \begin{coro} Let $P$ be a rational $d$-polytope. Then $\frac{2}{\lw{P}}\leq \mu(P)$ and if we have $\dim(F(\mu P))=d-1$ then $\frac{2}{\lw{P}}= \mu(P)$. \end{coro} We look now at the special situation with $\mu(P)>1$ and $\dim(F(\mu(P)P))=0$ since we want to understand this situation for $d=3$ in details later. First recall some definitions from \cite{Bat23b}. \begin{defi} Let $P\subseteq M_\R$ be a lattice $d$-polytope. We call $P$ \textit{$F$-hollow} if $\mu(P)>1$. If $\mu(P)>1$ and $\dim(F(\mu(P)P))=0$, then we call $P$ \textit{weakly sporadic $F$-hollow}. 
A weakly sporadic $F$-hollow lattice $d$-polytope, which is not affine unimodular equivalent to a subset of $P' \times \R^{d-k}$ for some lattice $k$-polytope $P'$ with $k<d$, is called a \textit{sporadic $F$-hollow polytope.} \end{defi} We will now give two classes of examples to illustrate these definitions. \begin{ex}[\cite{AWW09}] For any unit fraction partition of $1$ of length $d$, i.e. a $d$-tuple $(k_1,\dotsc,k_d)\in \Z_{\geq 2}^d$ with $\sum_{i=1}^{d}\frac{1}{k_i}=1$, the simplex \begin{align*} \Delta_{(k_1,\dotsc,k_d)}:=\conv{0,k_1e_1,\dotsc, k_de_d}\subseteq \R^d, \end{align*} where $e_i$ is the standard basis in $\R^d$, is a sporadic $F$-hollow simplex with \begin{align*} \mu(\Delta_{(k_1,\dotsc,k_d)})=\frac{k+1}{k}, \end{align*} where $k:=\mathrm{lcm}(k_1,\dotsc,k_d)$. Moreover, it is a maximal hollow lattice polytope, i.e. it has no interior lattice points and it is not strictly contained in any other lattice polytope without interior lattice points. How do we see this? We can describe the points $x=(x_1,\dotsc,x_d)\in \Delta_{(k_1,\dotsc,k_d)}$ as the points in the intersection of the half-spaces $x_i\geq 0$ and $\sum_{i=1}^{d} \frac{k}{k_i}x_i \leq k$. We have no interior lattice points, since every interior point has $x_i>0$ and $\frac{k}{k_i}x_i < k$. Moreover, every facet has at least one lattice point in its relative interior; for this we can pick the points $\sum_{i\in \{1,\dotsc,d\}, i\neq j} e_i$ and $\sum_{i\in \{1,\dotsc,d\}} e_i$. So we see that $\Delta_{(k_1,\dotsc,k_d)}$ is a maximal hollow lattice simplex. So it is enough to show that \begin{align*} F\left(\frac{k+1}{k} \Delta_{(k_1,\dotsc,k_d)}\right)=\{(1,1,\dotsc,1)\}. \end{align*} We get $F\left(\frac{k+1}{k} \Delta_{(k_1,\dotsc,k_d)}\right)\subseteq \{(1,1,\dotsc,1)\}$ since every facet has lattice distance $1$ from $(1,1,\dotsc,1)$. Every other supporting integral hyperplane also has at least lattice distance $1$ to $(1,1,\dotsc,1)$, since there is a vertex of $\Delta_{(k_1,\dotsc,k_d)}$ which has a smaller lattice distance to the hyperplane than $(1,1,\dotsc,1)$ and the vertex is a lattice point. It is worth noting that unit fraction partitions of $1$ and generalized forms of them are also used in other classifications of lattice simplices, e.g. in recent classification results of Fano simplices for various Gorenstein indices in \cite{Bae25}. \end{ex} \begin{ex} We can construct some special weakly sporadic lattice polytopes from canonical Fano polytopes. For some canonical Fano polytopes $P$ there is a $\lambda < 1$ and some $x \in \Q^d$ such that $Q:=\lambda P^*-x$ is an $F$-hollow lattice polytope. Then $Q$ is weakly sporadic with $\mu(Q)=\mu_{max}(Q)=\frac{1}{\lambda}$ by \ref{Fine_lattice_cone}. If $P$ is not only canonical Fano but also reflexive, we get all Gorenstein polytopes and all their integral dilations without interior lattice points this way. In particular, since all canonical Fano polygons are reflexive, we get in dimension $2$ only the three Gorenstein polygons of index greater than $1$ and the twofold standard lattice triangle, and we will see later that these are already all weakly sporadic lattice polygons. In dimension $3$ we get $53$ weakly sporadic lattice polytopes this way, among them the $34$ Gorenstein polytopes with index greater than $1$ and the $3$ maximal sporadic simplices $\Delta_{(3,3,3)}, \Delta_{(2,4,4)}, \Delta_{(2,3,6)}$.
If we allow $Q$ to have a non-empty Fine interior but still want it to be hollow, then we also get all maximal hollow lattice polytopes with Fine interior of dimension $0$ and (perhaps surprisingly) also the only maximal hollow lattice polytope with Fine interior of dimension $3$, which are described in \cite{BKS22}. \end{ex} We end this section with some words on the other special multipliers. \begin{lemma}\label{special_multiplier_support} Let $P$ be a rational $d$-polytope, $n \in S_F(\lambda P)$ for some $\lambda \in \R_{\geq 0}$. Then there is a special multiplier $\mu_i$ of $P$, with $n \in S_F(\lambda P)$ if and only if $\lambda \geq \mu_i$. \end{lemma} \begin{proof} Since $n \in S_F(\lambda P)$ for some $\lambda \in \R_{\geq 0}$, there is a supporting hyperplane of $\mathcal{F}(P)$ with normal vector $(-\Min{P}{n},n)$. The intersection of this hyperplane with $\mathcal{F}(P)$ is an unbounded face of $\mathcal{F}(P)$ and the smallest first coordinate of the vertices of this face is a special multiplier $\mu_i$ with $n \in S_F(\lambda P)$ if and only if $\lambda \geq \mu_i$. \end{proof} \begin{defi} Let $P$ be a rational $d$-polytope, $n \in S_F(\lambda P)$ for some $\lambda \in \R_{\geq 0}$. Then we define $\mu_n$ as the special multiplier from \ref{special_multiplier_support} and call it the \textit{special multiplier of the support vector $n$}. \end{defi} There is a special multiplier connected to the property of being canonically closed, introduced in \cite[4.17.]{Bat23a}. We also get the characterization and rationality of this multiplier now as a corollary. \begin{coro} Let $P$ be a rational polytope.\\ There is a multiplier $\mu_{cc}(P)\in \Q$ of $P$, such that $\lambda P$ is canonically closed if and only if $\lambda\geq \mu_{cc}$. \end{coro} \begin{proof} By \cite[4.3]{Bat23a} $\lambda P$ is canonically closed if and only if $\Sigma_P[1]\subseteq S_F(\lambda P)$. So by \ref{special_multiplier_support} the multiplier $\mu_{cc}$ can be defined by $\mu_{cc}:=\max \{\mu_n \mid n \in \Sigma_P[1]\}$. \end{proof} There are examples with $\mu_{cc}(P)\neq \mu_{max}(P)$, e.g. one can see that \begin{align*} \conv{(-4,-7,-9,-5), (0,1,0,0), (1,0,0,0), (2, 5, 9, 5), (0, 1, 0, 3)} \end{align*} is a maximal hollow lattice $4$-simplex with minimal multiplier $1$, in particular it is canonically closed. But one can calculate that $\mu_{max}=\frac{4}{3}$. Thus, contrary to \cite[5.8]{Bat23a}, it is possible that for a canonically closed lattice polytope $P$ the combinatorial type of some $F(\lambda P)$ with $\lambda>1$ is different from the combinatorial type of $F(P)$. Nevertheless, for $\lambda\geq\mu_{max}(P)$ we still have a Minkowski sum decomposition of $F(\lambda P)$ as in \cite[5.5]{Bat23a}. But here this result is now just a corollary of \ref{main_theorem} and the usual decomposition of a polyhedron into the Minkowski sum of a polytope and the recession cone. \begin{coro}\label{FinePolyhedron} Let $P$ be a rational polytope and $\mu_{max}$ the maximal multiplier of $P$. Then we have \begin{align*} F(\lambda \mu_{max}P)=F(\mu_{max}P)+(\lambda-1)\mu_{max}P \end{align*} for all $\lambda\geq 1$ and $\mu_{max}$ is the smallest rational number with this property. \end{coro} \begin{proof} We look at the part of $\mathcal{F}(\mu_{max}P)$ with $x_0\geq 1$.
If we decompose this polyhedron as a Minkowski sum of a polytope and the recession cone, we get, since all vertices of this polyhedron have first coordinate $1$, \begin{align*} \mathcal{F}(\mu_{max}P) \cap \{x_0\geq 1\}=\{1\} \times F(\mu_{max}P)+\R_{\geq 0}(\{1\} \times \mu_{max}P). \end{align*} So we have for all $\lambda\geq 1$ \begin{align*} \{\lambda\} \times F(\lambda \mu_{max}P)=&\mathcal{F}(\mu_{max}P) \cap \{x_0= \lambda\}\\ =&\{1\} \times F(\mu_{max}P)+(\lambda-1)(\{1\} \times \mu_{max}P)\\ =&\{\lambda\} \times (F(\mu_{max}P)+(\lambda-1)\mu_{max}P). \end{align*} For smaller rational numbers $\lambda<\mu_{max}$ the polyhedron $\mathcal{F}(\lambda P) \cap \{x_0\geq 1\}$ has vertices with first coordinate greater than $1$ and so $\lambda$ cannot fulfil the property. \end{proof} \section{Weakly sporadic $F$-hollow lattice 3-polytopes} Our aim in this section is to classify all weakly sporadic $F$-hollow lattice 3-polytopes. In a first step we look in arbitrary dimension $d\geq 2$ at the weakly sporadic $F$-hollow lattice $d$-polytopes with degree at most $1$, where the degree is defined as the degree of the $h^*$-polynomial as before. Since lattice $d$-polytopes with degree at most $1$ are classified in \cite{BN07}, we can use this classification to describe all weakly sporadic $F$-hollow lattice $d$-polytopes among them. \begin{prop}\label{weakly_sporadic_small_degree} Let $P$ be a lattice $d$-polytope with $\deg(P)\leq 1$. Then $P$ is a weakly sporadic $F$-hollow lattice polytope if and only if $P$ is either the standard simplex $\Delta_d$, a Gorenstein polytope of index $d$, or affine unimodular equivalent to the $(d-2)$-fold pyramid over $2\Delta_2$. \end{prop} \begin{proof} If $\deg(P)=0$, then $P$ is by \cite[Prop. 1.4.]{BN07} affine unimodular equivalent to the standard simplex $\Delta_d$, which is weakly sporadic by \ref{multipliers_pyramid}. So only the case $\deg(P)=1$ remains. By \cite[Theorem 2.5]{BN07} every lattice $d$-polytope of degree $1$ is affine unimodular equivalent to the $(d-2)$-fold pyramid over $2\Delta_2$, which is weakly sporadic $F$-hollow by \ref{multipliers_pyramid}, or to a Lawrence prism $P_{h_1,\dotsc,h_d}$. Since every Lawrence prism $P_{h_1,\dotsc,h_d}$ projects to the standard simplex $\Delta_{d-1}$, we get $\mu(P_{h_1,\dotsc,h_d})\geq \mu(\Delta_{d-1})=d$. So $\mu(P_{h_1,\dotsc,h_d})=d$, because $dP_{h_1,\dotsc,h_d}$ has interior lattice points since $\mathrm{codeg}(P_{h_1,\dotsc,h_d})=d+1-\deg(P_{h_1,\dotsc,h_d})=d$. To be weakly sporadic, we must therefore have exactly one interior lattice point in the lattice polytope $d P_{h_1,\dotsc,h_d}$, so we get $h^\ast_{P_{h_1,\dotsc,h_d}}(t)=t+1$ and so $P_{h_1,\dotsc,h_d}$ is a Gorenstein polytope of index $d$. \end{proof} \begin{rem}\label{Gorenstein_d-polytopes} We can explicitly describe the Gorenstein $d$-polytopes of index $d$ for $d\geq 2$. They all have the $h^\ast$-polynomial $t+1$ and so they are affine unimodular equivalent to a Lawrence prism $P_{h_1,\dotsc,h_d}$ with \begin{align*} h^*_{P_{h_1,\dotsc,h_d}}(t)=(h_1+\dotsc+h_d-1)t+1=t+1 \end{align*} by \cite[2.4. f.]{BN07}. Because of the automorphisms of $\Delta_d$ we get without restriction $(h_1,h_2,\dotsc,h_d)\in \{(1,1,0,\dotsc,0),(2,0,0,\dotsc,0)\}$ and so there are exactly two Gorenstein polytopes of index $d$, namely $P_{1,1,0,\dotsc,0}$ and $P_{2,0,\dotsc,0}$.
\end{rem} In dimension $2$ every $F$-hollow lattice polygon has degree at most $1$ and so we get already the complete classification of weakly sporadic lattice polygons from this general result, i.e. the weakly sporadic lattice polygons are $\Delta_2, P_{1,1}, P_{2,0}$ and $2 \Delta_2$. We will see now, that in dimension $3$ the only addional examples of lattice width $1$ are Gorenstein polytopes of index $2$. \begin{prop} Let $P$ be a lattice $3$-polytope of lattice width 1 with $\deg(P)>1$. Then $P$ is a weakly sporadic $F$-hollow lattice polytope if and only if $P$ is a Gorenstein polytope of index $2$. \end{prop} \begin{proof} Since $P$ has lattice width $1$, we have $\mu(P)\geq 2$. We can not have $\mu(P)>2$, since then $\deg P=d+1-\mathrm{codeg}(P)\leq 4-3=1$. It remains to look at the case $\mu(P)=2$, and so we have $\dim F(2P)=0$. From \ref{FineInterior_Width1} we also get $\dim F(2P')=0$ for the middle lattice polygon $2P'$, and so $F(2P')$ is a lattice point and $2P'$ is canonically closed. So $F(2P)$ is a lattice point and $2P$ is canonically closed by \ref{FineInterior_Width1}, so $2P$ is a reflexive polytope and therefore $P$ is a Gorenstein polytope of index $2$. \end{proof} \begin{rem} The Gorenstein $d$-polytopes of index $d-1$, or equivalently degree $2$, are completely classified in \cite{BJ10}. In dimension $3$ there are exactly $31$ Gorenstein polytopes of degree $2$, among them the $16$ lattice pyramids over the $16$ reflexive polygons. The polytope $2\Delta_3$ is the only one of these polytopes, which has lattice width greater than $1$. \end{rem} We describe now the weakly sporadic $F$-hollow lattice 3-polytopes projecting to $2\Delta_2$ but not to $\Delta_1$ as lattice polytopes in $2\Delta_2\times \R$. \begin{lemma}\label{prism_weakly_sporadic} Let $P$ be a weakly sporadic $F$-hollow lattice 3-tope.\\ If the lattice width of $P$ is greater than $1$ and $P$ is not a sporadic $F$-hollow polytope, then $P$ is affine unimodular equivalent to a lattice polytope in $2\Delta_2 \times [0,4]$. \end{lemma} \begin{proof} Since $P$ is not sporadic and has a lattice width greater than 1, it must have a lattice projection on $2\Delta_2$ and is therefore affine unimodular equivalent to a lattice polytope $Q$ in $2\Delta_2\times \mathbb{R}$. Let $c=(\frac{2}{3},\frac{2}{3})$ be the barycenter of $2\Delta_2$ and $c_l, c_u$ the lower and upper intersection point of $\{c\}\times \R$ and the boundary of $Q$. The intersection points are points of a lower and an upper facet $F_l$ and $F_u$ of $Q$. The lower facet $F_l$ projects along $(0,0,1)$ onto a lattice subpolygon of $2 \Delta_2\times \{0\}$. There are up to automorphisms of $2\Delta_2\times \{0\}$ two lattice subpolygons without three consecutive vertices which form an affine lattice basis of $\Z^2 \times \{0\}$, namely \begin{align*} \Delta_2 \text{ and } \conv{(0,0,0), (2,0,0), (0,1,0)}. \end{align*} So, after a appropriate affine unimodular transformation we can assume that either \begin{align*} F_l\subseteq& \Delta_2\times \{0\},\\ F_l=&\conv{(0,0,0),(2,0,0),(0,2,1)} \text{ or }\\ F_l=&\conv{(0,0,1),(2,0,0),(0,1,0)} \end{align*} and in particular, we have $Q\subseteq 2\Delta_2 \times [0,\infty)$. Next, we will show that the third coordinate of $c_u$ is at most $\frac{4}{3}$. If $F_l \subseteq \Delta_2\times \{0\}$ or $F_l=\conv{(0,0,1),(2,0,0),(0,1,0)}$, then we have $\frac{3}{2}c_l=(1,1,0)$, and if $F_l=\conv{(0,0,0),(2,0,0),(0,2,1)}$, then we have $\frac{3}{2}c_l=(1,1,\frac{1}{2})$. 
But since $c_l$ is in the second case an interior point of a facet with integral points of $Q$, we get in both cases that every hyperplane supporting $\frac{3}{2}Q$ from below has at least lattice distance $1$ to $(1,1,1)$. Similarly every hyperplane supporting $\frac{3}{2}Q$ from above has at least lattice distance $1$ to $\frac{3}{2}c_u-(0,0,1)$. If we assume that the third coordinate of $c_u$ is larger than $\frac{4}{3}$, we get $F(\frac{3}{2}Q)\supseteq \conv{(1,1,1),\frac{3}{2}c_u-(0,0,1)}$ and $\dim(F(\frac{3}{2}Q))=1$, which contradicts the assumption that $Q$ is weakly sporadic. Let $h$ be a vertex of $Q$ with a maximum third coordinate. It remains to show, that this third coordinate is at most $4$. The line through $h$ and $c_u$ has two intersection points $h$ and $s$ with the boundary of $\Delta_2 \times \R$ and the third coordinate of $s$ is at least $0$, because between $c_u$ and $s$ the intersection of the line with the interior of $Q$ is empty and the segment between $c_u$ and $s$ lies above the polytope $Q$. Since the length of the segment between $h$ and $c_u$ is at most twice the length of the segment between $c_u$ and $s$, we get that the third coordinate of $h$ is at most $3\cdot \frac{4}{3}=4$. Therefore, $P$ is affine unimodular equivalent to a subpolytope $Q$ of $2\Delta_2 \times [0,4]$. \end{proof} \begin{rem} The polytope $2P_{2,0,0}=\conv{(0,0,0),(2,0,0),(0,2,0),(0,0,4)}$ is weakly sporadic $F$-hollow because $P_{2,0,0}$ is a Gorenstein polytope of index $3$ by \ref{Gorenstein_d-polytopes} and has only the lattice width directions $\pm (1,0,0),\pm (0,1,0),\pm(1,1,0)$. This shows that the prism $2\Delta_2\times [0,4]$ in Lemma~\ref{prism_weakly_sporadic} is the best possible choice. \end{rem} We can explicitly classify the lattice subpolytopes of $2\Delta_2 \times [0,4]$ by recursively deleting vertices from the lattice polytope and looking at the convex hull of the remaining lattice points. To do this efficiently, we should skip all the lattice polytopes we have already seen interms of affine unimodular equivalence. This can be done, for example, by using the PALP normal form of a lattice polytope (\cite{KS04}, \cite{GK13}). We get the following result. \begin{prop} There are exactly $80$ weakly sporadic non sporadic $F$-hollow lattice 3-polytopes of lattice width $2$. All of them are lattice subpolytopes of $2P_{2,0,0}$ or $2P_{1,1,0}$. $2\Delta_3$ is the only one of them with minimal multiplier $\mu=2$, the others have $\mu=\frac{3}{2}$. \end{prop} \begin{thm}\label{weakly_sporadic classification} There are exactly $114$ weakly sporadic but non sporadic $F$-hollow lattice $3$-polytopes. Among them are exactly $31$ Gorenstein polytopes of index $2$, $2$ Gorenstein polytopes of index $3$, $1$ Gorenstein polytope of index $4$, $1$ exceptional simplex with $\mu=\frac{5}{2}$ and $79$ polytopes of width $2$ with $\mu=\frac{3}{2}$. Coordinates for the vertices of all these polytopes are available on \cite{Boh24}. \end{thm} By \cite[Theorem 1.11.]{Bat23b} up to affine unimodular equivalence there are only finitely many sporadic $F-$hollow lattice $d$-polytopes, i.e. $F$-hollow lattice polytopes without $F$-hollow projection. A similar result holds for hollow lattice polytopes \cite{NZ11}. In dimension $3$ all sporadic $F$-hollow polytopes are subpolytopes of the $12$ maximal hollow lattice polytopes classified in \cite{AWW11}, \cite{AKW17}. 
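On the computational side, the multipliers appearing in these classifications can be obtained directly from Theorem~\ref{main_theorem}: by the description of $\mathcal{F}(P)$ as an intersection of half-spaces, the minimal multiplier $\mu(P)$ is the smallest value of the coordinate $x_0$ on $\mathcal{F}(P)$, i.e. the optimal value of a linear program. The following small Python sketch is only meant to illustrate this point of view; it is not the code behind the data in \cite{Boh24}, it assumes that \texttt{numpy} and \texttt{scipy} are available, and it assumes that the supplied list of dual lattice vectors contains $\normalfan{P}^{\mathrm{can}}[1]$, since otherwise it only returns a lower bound for $\mu(P)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def minimal_multiplier(nus, mins):
    """Smallest x_0 attained on the polyhedron describing the Fine
    interiors of all dilations of P, computed as a linear program.

    nus  : list of dual lattice vectors nu (assumed to contain the
           primitive ray generators of the canonical refinement),
    mins : list of the corresponding values Min_P(nu).
    """
    nus = np.asarray(nus, dtype=float)
    mins = np.asarray(mins, dtype=float)
    d = nus.shape[1]
    # The constraints -Min_P(nu)*x_0 + <x,nu> >= 1 of the polyhedron,
    # rewritten as A_ub @ (x_0, x) <= b_ub for linprog.
    A_ub = -np.hstack([-mins.reshape(-1, 1), nus])
    b_ub = -np.ones(len(mins))
    c = np.zeros(d + 1)
    c[0] = 1.0  # minimise the first coordinate x_0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.fun if res.success else None

# 2*Delta_2 = conv{(0,0),(2,0),(0,2)} is given by <x,(1,0)> >= 0,
# <x,(0,1)> >= 0 and <x,(-1,-1)> >= -2; the call below returns 1.5.
print(minimal_multiplier([(1, 0), (0, 1), (-1, -1)], [0, 0, -2]))
\end{verbatim}
The returned value $1.5$ matches $\mu(2\Delta_2)=\frac{3}{2}$, in accordance with the example $\Delta_{(2,2)}=2\Delta_2$ of the previous section.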
Deciding whether a lattice polytope projects to $[0,1]$ is easy, since we have such a projection if and only if the polytope has lattice width $1$. Since the only $F$-hollow polygon with lattice width greater than $1$ is $2\Delta_2=\conv{(0,0), (2,0), (0,2)}$, we need the following lemma to understand projections on this polygon. \begin{lemma} Let $P$ be a lattice $d$-polytope, $d\geq 3$, with lattice width greater than $1$.\\ Then $P$ projects to $2\cdot \Delta_2$ if and only if $P$ has lattice width $2$ and there are linearly dependent lattice width directions $w_1, w_2, w_3$ of $P$ such that $0$ is an interior point of $Q:=\conv{w_1,w_2,w_3}$, the normalized area of $Q$ is $3$ and the supporting hyperplanes of $P$ corresponding to the lattice width directions $\pm w_1, \pm w_2, \pm w_3$ define a polyhedron with exactly $3$ facets. \end{lemma} \begin{proof} If $P$ projects to $2\Delta_2$, then $P$ is affine unimodular equivalent to a subpolytope of $2\Delta_2 \times \R^{d-2}$. Since $2\Delta_2$ has lattice width $2$ and width directions $(1,0), (0,1), (-1,-1)$, we have lattice width directions $w_1=(1,0,0,\dotsc,0), w_2=(0,1,0,\dotsc,0), w_3=(-1,-1,0,\dotsc,0)$ of $P$ and $w_1,w_2,w_3$ satisfy the required conditions. Conversely, if we have lattice width directions $w_1,w_2,w_3$ that satisfy the conditions, then $\conv{0, w_1, w_2}$ must be an empty triangle, since $0$ is an interior point of the polygon $\conv{w_1,w_2,w_3}$ with normalized area $3$. Thus $0,w_1,w_2$ is an affine lattice basis of a $2$-dimensional sublattice, which we can extend to a lattice basis of the whole lattice $\Z^d$. Mapping this lattice basis to the standard basis we get that $P$ is affine unimodular equivalent to a subpolytope of $[0,2]^2\times \R^{d-2}$. Under this map, the width direction $w_3=-w_1-w_2$ maps to $(-1,-1,0,\dotsc, 0)$ and so $P$ is even affine unimodular equivalent to a subpolytope of $2\Delta_2\times \R^{d-2}$ or $\conv{(1,0),(0,1),(-1,1),(-1,0),(0,-1),(1,-1)}\times \R^{d-2}$. But the latter is not possible, since the supporting hyperplanes of $P$, corresponding to the lattice width directions, define a polyhedron with exactly $3$ facets, and so $P$ is affine unimodular equivalent to a subpolytope of $2\Delta_2\times \R^{d-2}$ and thus projects to $2\Delta_2$. \end{proof} \begin{rem} Working with the lattice width has been helpful for various classifications of lattice polytopes, e.g. for the classification of empty $4$-simplices in \cite{IVS21}. There are also classifications based on a multi-width, which generalizes our situation of lattice width $2$ with three different width directions; see \cite{Ham24} for this approach. \end{rem} With the help of the lemma, we can now determine all sporadic $F$-hollow lattice $3$-polytopes as lattice subpolytopes of the maximal hollow ones. We get the following result. \begin{thm}\label{sporadic classification} There are exactly $1368$ sporadic $F$-hollow lattice $3$-polytopes up to affine unimodular equivalence. All of them are subpolytopes of $\Delta_{(3,3,3)}$, $\Delta_{(2,4,4)}$ or $\Delta_{(2,3,6)}$. Of these, $300$ have $\mu=\frac{4}{3}$, $632$ have $\mu=\frac{5}{4}$ and $436$ have $\mu=\frac{7}{6}$. Coordinates for the vertices of all these polytopes are available at \cite{Boh24}. \end{thm} Let us end with two remarks on other classifications.
\begin{rem} Since by \cite[Appendix B]{BKS22} there are $9$ hollow lattice $3$-polytopes, which are not $F$-hollow, we have all in all $1377$ sporadic hollow lattice $3$-polytopes, not projecting to a hollow lattice polytope of smaller dimension. \end{rem} \begin{rem} Among the sporadic $F$-hollow polytopes are $52$ bipyramids, which were also classified in \cite[Lemma 4.2.]{IVS21} to determine all empty $4$-simplices, which have a hollow projection to a hollow 3-polytope. $29$ primitive bipyramids out of the $52$ correspond to the $29$ \textit{stable quintuples} in the classification of $4$-dimensional terminal quotient singularities in \cite{MMM88}. \end{rem} \begin{thebibliography}{AKW17} \begin{footnotesize} \bibitem[AWW09]{AWW09} Kent Andersen, Christian Wagner, and Robert Weismantel, \emph{Maximal integral simplices with no interior integer points.} (2009) \newblock \href{https://arxiv.org/pdf/0904.2108} {arXiv:0904.2108}. \bibitem[AKW17]{AKW17} Gennadiy Averkov, Jan Krümpelmann and Stefan Weltge, \emph{Notions of maximality for integral lattice-free polyhedra: the case of dimension three.} Math. Oper. Res. 42, No. 4, 1035-1062 (2017). \bibitem[AWW11]{AWW11} Gennadiy Averkov, Christian Wagner and Robert Weismantel, \emph{Maximal lattice-free polyhedra: finiteness and an explicit description in dimension three.} Math. Oper. Res. 36, No. 4, 721-742 (2011). \bibitem[Bae25]{Bae25} Andreas Bäuerle, \emph{Sharp volume and multiplicity bounds for Fano simplices.} Journal of Algebraic Combinatorics, Volume 61, article number 9, (2025). \bibitem[Bat94]{Bat94} Victor V. Batyrev, \emph{Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties.} J. Algebr. Geom. 3, No. 3, 493-535 (1994). \bibitem[Bat17]{Bat17} Victor Batyrev, \emph{The stringy Euler number of Calabi-Yau hypersurfaces in toric varieties and the Mavlyutov duality.} Pure Appl. Math. Q. 13, No. 1, 1-47 (2017). \bibitem[Bat23a]{Bat23a} Victor V. Batyrev, \emph{Canonical models of toric hypersurfaces.} Algebr. Geom. 10, No. 4, 394-431 (2023). \bibitem[Bat23b]{Bat23b} Victor V. Batyrev, \emph{Projecting lattice polytopes according to the Minimal Model Program.} (2023) \newblock \href{https://arxiv.org/abs/2307.16306} {arXiv:2307.16306}. \bibitem[BJ10]{BJ10} Victor Batyrev and Dorothee Juny, \emph{Classification of Gorenstein toric del Pezzo varieties in arbitrary dimension.} Mosc. Math. J. 10, No. 2, 285-316 (2010). \bibitem[BKS22]{BKS22} Victor Batyrev, Alexander Kasprzyk and Karin Schaller, \emph{On the Fine interior of three-dimensional canonical Fano polytopes.} Interactions with lattice polytopes. Selected papers based on the presentations at the workshop, Magdeburg, Germany, September 14–16, 2017. Cham: Springer. Springer Proc. Math. Stat. 386, 11-47 (2022). \bibitem[BN07]{BN07} Victor Batyrev and Benjamin Nill, \emph{Multiples of lattice polytopes without interior lattice points.} Mosc. Math. J. 7, No. 2, 195-207 (2007). \bibitem[BS15]{BS15} Matthias Beck and Sinai Robins, \emph{Computing the continuous discretely. Integer-point enumeration in polyhedra.} 2nd edition. Undergraduate Texts in Mathematics. New York, NY: Springer (2015). \bibitem[Boh24]{Boh24} Martin Bohnert, The Fine interior of dilations of a rational polytope - data files. \href{https://github.com/mbohnert/weakly_sporadic_F_hollow}{https://github.com/mbohnert/weakly\textunderscore sporadic\textunderscore F\textunderscore hollow}. 
\bibitem[BS24]{BS24} Martin Bohnert and Justus Springer, \emph{Classifying rational polygons with small denominator and few interior lattice points.} (2024) \newblock \href{https://arxiv.org/abs/2410.17244}{arXiv:2410.17244}. \bibitem[Fin83]{Fin83} J. Fine, {\em Resolution and Completion of Algebraic Varieties}, Ph.D. University of Warwick 1983. \bibitem[GK13]{GK13} Roland Grinis and Alexander Kasprzyk. \emph{Normal forms of convex lattice polytopes.} (2013) \newblock \href{https://arxiv.org/abs/1301.6641}{arXiv:1301.6641}. \bibitem[Ham24]{Ham24} Girtrude Hamm, \emph{Classification of Width 1 Lattice Tetrahedra by Their Multi-Width.} Discrete Comput Geom (2024). \bibitem[IVS21]{IVS21} {\'O}scar Iglesias-Vali{\~n}o and Francisco Santos, \emph{The complete classification of empty lattice 4-simplices.} Rev. Mat. Iberoam. 37, No. 6, 2399-2432 (2021). \bibitem[KN12]{KN12} Alexander M. Kasprzyk and Benjamin Nill, \emph{Fano polytopes.} In: A. Rebhan, L. Katzarkov, J. Knapp, R. Rashkov, E. Scheidegger (Eds.), Strings, gauge fields, and the geometry behind: the legacy of Maximilian Kreuzer, World Scientific, 349-364, (2012). \bibitem[KS04]{KS04} Maximilian Kreuzer and Harald Skarke, \emph{{PALP}: a package for analysing lattice polytopes with applications to toric geometry.} Comput. Phys. Commun. 157, No. 1, 87-106 (2004). \bibitem[MMM88]{MMM88} Shigefumi Mori, David R. Morrison and Ian Morrison, \emph{On four-dimensional terminal quotient singularities.} Math. Comput. 51, No. 184, 769-786 (1988). \bibitem[NZ11]{NZ11} Benjamin Nill and Günter M. Ziegler, \emph{Projecting lattice polytopes without interior lattice points.} Math. Oper. Res. 36, No. 3, 462-467 (2011). \bibitem[Rei87]{Rei87} Miles Reid, \emph{Young person’s guide to canonical singularities.} Algebraic geometry, Proc. Summer Res. Inst., Brunswick/Maine 1985, part 1, Proc. Symp. Pure Math. 46, 345-414 (1987). \bibitem[Whi64]{Whi64} G. K. White, \emph{Lattice tetrahedra.} Can. J. Math. 16, 389-396 (1964). \bibitem[Zie07]{Zie07} Günter M. Ziegler, \emph{Lectures on polytopes.} Seventh Printing, Graduate Texts in Mathematics. 152. Berlin: Springer-Verlag. (2007). \end{footnotesize} \end{thebibliography} \end{document}
2412.11227v2
http://arxiv.org/abs/2412.11227v2
The Brascamp-Lieb inequality in Convex Geometry and in the Theory of Algorithms
\documentclass{amsart} \usepackage{amsfonts} \usepackage{mathrsfs} \usepackage{cite} \usepackage{graphicx} \newcommand{\R}{{\mathbb R}} \newcommand{\PP}{{\mathbb P}} \newcommand{\N}{{\mathbb N}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\C}{{\mathbb C}} \newcommand{\E}{{\mathbb E}} \newcommand{\e}{\epsilon} \renewcommand{\d}{\partial} \newcommand{\half}{\frac{1}{2}} \newtheorem{theo}{Theorem}[section] \newtheorem{lemma}[theo]{Lemma} \newtheorem{prop}[theo]{Proposition} \newtheorem{coro}[theo]{Corollary} \newtheorem{conj}[theo]{Conjecture} \newtheorem{claim}[theo]{Claim} \newtheorem{remark}[theo]{Remark} \newtheorem{defi}[theo]{Definition} \newtheorem{example}[theo]{Example} \newcommand{\GL}[1]{\text{GL }#1} \newcommand{\SL}[1]{\text{SL }#1} \newcommand{\relint}[1]{\text{relint }#1} \newcommand{\Conv}[1]{\text{Conv }#1} \newcommand{\Int}[1]{\text{\rm Int }#1} \newcommand{\Proj}[1]{\text{Proj }#1} \newcommand{\inte}{{\operatorname{int}}} \newcommand{\supp}{{\operatorname{supp}}} \newcommand{\lin}{{\operatorname{lin}}} \newcommand{\sfe}{S^{n-1}} \title[Some applications of the Brascamp-Lieb inequality]{The Brascamp-Lieb inequality in Convex Geometry and in the Theory of Algorithms} \author{K\'aroly J. B\"or\"oczky (R\'enyi Institute, Budapest)} \begin{document} \maketitle \begin{abstract} The Brascamp-Lieb inequality in harmonic analysis was proved by Brascamp and Lieb in the rank one case in 1976, and by Lieb in 1990. It says that in a certain inequality, the optimal constant can be determined by checking the inequality for centered Gaussian distributions. It was Keith M Ball's pioneering work around 1990 that led to various applications of the inequality in Convex Geometry, and even in Discrete Geometry, like Brazitikos' quantitative fractional version of the Helly Theorem. On the other hand, determining the optimal constant and possible Gaussian extremizers for the Brascamp-Lieb inequality can be formulated as a problem in terms of positive definite matrices, and this problem has intimate links to the Theory of Algorithms. \end{abstract} \section{The Brascamp-Lieb-Barthe inequalities} \label{secIntro} For a proper linear subspace $E$ of $\R^n$ ($E\neq \R^n$ and $E\neq\{0\}$), let $P_E$ denote the orthogonal projection into $E$. We say that the subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ form a Geometric Brascamp-Lieb datum if they satisfy \begin{equation} \label{highdimcond0} \sum_{i=1}^kp_iP_{E_i}=I_n. \end{equation} The name ``Geometric Brascamp-Lieb datum" coined by Bennett, Carbery, Christ, Tao \cite{BCCT08} comes from the following theorem, originating in the work of Brascamp, Lieb \cite{BrL76} and Ball \cite{Bal89,Bal91} in the rank one case (${\rm dim}\,E_i=1$ for $i=1,\ldots,k$), and Lieb \cite{Lie90} and Barthe \cite{Bar98} in the general case. In the rank one case, the Geometric Brascamp-Lieb datum is known by various names, like "John decomposition of the identity operator" (cf. Theorem~\ref{BrascampLiebRankOne} and Theorem~\ref{Johnmaxvol}), or tight frame, or Parseval frame in coding theory and computer science (see for example Casazza, Tran, Tremain \cite{CTT20}). 
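As a tiny numerical illustration of a rank one Geometric Brascamp-Lieb datum (that is, of a John decomposition of the identity, or a tight frame), the following lines, which are only a sanity check and are not taken from the literature, verify \eqref{highdimcond0} for the three unit vectors at mutual angle $\frac{2\pi}{3}$ in $\R^2$, each with weight $p_i=\frac{2}{3}$ (NumPy is assumed to be available).
\begin{verbatim}
import numpy as np

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
U = [np.array([np.cos(a), np.sin(a)]) for a in angles]   # the unit vectors u_i
p = [2.0 / 3.0] * 3                                      # the weights p_i

# sum_i p_i u_i u_i^T = I_2, i.e. the u_i with weights p_i form a rank one
# Geometric Brascamp-Lieb datum (a John decomposition of the identity)
M = sum(pi * np.outer(u, u) for pi, u in zip(p, U))
assert np.allclose(M, np.eye(2))
assert np.isclose(sum(p), 2.0)   # comparing traces: sum_i p_i dim E_i = n = 2
\end{verbatim}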
\begin{theo}[Brascamp-Lieb, Ball, Barthe] \label{BLtheo} For the linear subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ satisfying \eqref{highdimcond0}, and for non-negative $f_i\in L_1(E_i)$, we have \begin{equation} \label{BL} \int_{\R^n}\prod_{i=1}^kf_i(P_{E_i}x)^{p_i}\,dx \leq \prod_{i=1}^k\left(\int_{E_i}f_i\right)^{p_i} \end{equation} \end{theo} {\bf Remark} This is H\"older's inequality if $E_1=\ldots=E_k=\R^n$ and $P_{E_i}=I_n$, and hence $\sum_{i=1}^kp_i=1$.\\ We note that equality holds in Theorem~\ref{BLtheo} if $f_i(x)=e^{-\pi\|x\|^2}$ for $i=1,\ldots,k$, i.e., if each $f_i$ is a Gaussian density. Actually, Theorem~\ref{BLtheo} is an important special case of the general Brascamp-Lieb inequality (cf. Theorem~\ref{BLgeneral}), discovered by Ball \cite{Bal91,Bal03} in the rank one case and by Barthe \cite{Bar98} in the general case. After partial results by Barthe \cite{Bar98}, Carlen, Lieb, Loss \cite{CLL04} and Bennett, Carbery, Christ, Tao \cite{BCCT08}, it was Valdimarsson \cite{Val08} who characterized equality in the Geometric Brascamp-Lieb inequality. In order to state his result, we need some notation. Let the proper linear subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ satisfy \eqref{highdimcond0}. As Bennett, Carbery, Christ, Tao \cite{BCCT08} observe, \eqref{highdimcond0} yields that for any non-zero linear subspace $V$, the map $\sum_{i=1}^k p_iP_V\circ P_{E_i}$ is the identity map on $V$, and hence considering traces shows that \begin{equation} \label{sumEcapV} \sum_{i=1}^k p_i\dim(E_i\cap V)\leq \dim V. \end{equation} In order to understand extremizers in \eqref{BL}, following Carlen, Lieb, Loss \cite{CLL04} and Bennett, Carbery, Christ, Tao \cite{BCCT08}, we say that a non-zero linear subspace $V$ is a critical subspace if $$ \sum_{i=1}^k p_i\dim(E_i\cap V)=\dim V, $$ which is in turn equivalent to saying that $$ \mbox{$E_i=(E_i\cap V)+ (E_i\cap V^\bot)$ for $i=1,\ldots,k$} $$ by the argument leading to \eqref{sumEcapV} (cf. \cite{BCCT08}). We say that a critical subspace $V$ is indecomposable if $V$ has no proper critical linear subspace. Valdimarsson \cite{Val08} introduced the notions of independent subspaces and the dependent subspace. We write $J$ to denote the set of $2^k$ functions $\{1,\ldots,k\}\to\{0,1\}$. If $\varepsilon\in J$, then let $F_{(\varepsilon)}=\cap_{i=1}^kE_i^{(\varepsilon(i))}$ where $E_i^{(0)}=E_i$ and $E_i^{(1)}=E_i^\bot$ for $i=1,\ldots,k$. We write $J_0$ to denote the subset of $\varepsilon\in J$ such that ${\rm dim}\,F_{(\varepsilon)}\geq 1$, and such an $F_{(\varepsilon)}$ is called independent following Valdimarsson \cite{Val08}. Readily $F_{(\varepsilon)}$ and $F_{(\tilde{\varepsilon})}$ are orthogonal if $\varepsilon\neq\tilde{\varepsilon}$ for $\varepsilon,\tilde{\varepsilon}\in J_0$. In addition, we write $F_{\rm dep}$ to denote the orthogonal complement of $\oplus_{\varepsilon \in J_0}F_{(\varepsilon)}$. In particular, $\R^n$ can be written as a direct sum of pairwise orthogonal linear subspaces in the form \begin{equation} \label{independent-dependent0} \R^n=\left(\oplus_{\varepsilon \in J_0}F_{(\varepsilon)}\right)\oplus F_{\rm dep}. \end{equation} Here it is possible that $J_0=\emptyset$, in which case $\R^n=F_{\rm dep}$, or that $F_{\rm dep}=\{0\}$, in which case $\R^n=\oplus_{\varepsilon \in J_0}F_{(\varepsilon)}$.
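The notions just introduced can be illustrated on a small example (a numerical sketch, not taken from the cited works): for the geometric datum $E_i=e_i^\bot$ in $\R^3$ with $p_i=\frac12$, $i=1,2,3$, which satisfies \eqref{highdimcond0}, the script below computes $\dim F_{(\varepsilon)}$ for all $\varepsilon\in J$ and finds that the independent subspaces are exactly the three coordinate axes, so that $F_{\rm dep}=\{0\}$ in \eqref{independent-dependent0} (NumPy is assumed to be available).
\begin{verbatim}
import itertools
import numpy as np

n = 3
I = np.eye(n)
P = [I - np.outer(I[i], I[i]) for i in range(n)]   # P_{E_i} with E_i = e_i^perp

# the Geometric Brascamp-Lieb condition  sum_i p_i P_{E_i} = I_n  with p_i = 1/2
assert np.allclose(sum(0.5 * Q for Q in P), I)

def dim_intersection(projectors):
    # dimension of the intersection of the subspaces given by the projectors:
    # x lies in all of them iff (I - Q) x = 0 for every projector Q
    stacked = np.vstack([I - Q for Q in projectors])
    return n - np.linalg.matrix_rank(stacked)

independent_dim = 0
for eps in itertools.product((0, 1), repeat=n):
    # eps(i) = 0 picks E_i, eps(i) = 1 picks its orthogonal complement E_i^perp
    Q = [P[i] if e == 0 else I - P[i] for i, e in enumerate(eps)]
    d = dim_intersection(Q)
    if d > 0:                      # F_(eps) is an independent subspace
        print(eps, d)              # the three coordinate axes appear, each of dim 1
        independent_dim += d

print("dim F_dep =", n - independent_dim)   # 0 for this datum
\end{verbatim}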
For a non-zero linear subspace $L\subset \R^n$, we say that a linear transformation $A:\,L\to L$ is positive definite if $\langle Ax,y\rangle=\langle x, Ay\rangle$ and $\langle x, Ax\rangle>0$ for any $x,y\in L\backslash\{0\}$. \begin{theo}[Valdimarsson] \label{BLtheoequa} For the proper linear subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ satisfying \eqref{highdimcond0}, let us assume that equality holds in the Brascamp-Lieb inequality \eqref{BL} for non-negative $f_i\in L_1(E_i)$, $i=1,\ldots,k$. If $F_{\rm dep}\neq\R^n$, then let $F_1,\ldots,F_\ell$ be the independent subspaces, and if $F_{\rm dep}=\R^n$, then let $\ell=1$ and $F_1=\{0\}$. There exist $b\in F_{\rm dep}$ and $\theta_i>0$ for $i=1,\ldots,k$, integrable non-negative $h_{j}:\,F_j\to[0,\infty)$ for $j=1,\ldots,\ell$, and a positive definite matrix $A:F_{\rm dep}\to F_{\rm dep}$ such that the eigenspaces of $A$ are critical subspaces and \begin{equation} \label{BLtheoequaform} f_i(x)=\theta_i e^{-\langle AP_{F_{\rm dep}}x,P_{F_{\rm dep}}x-b\rangle}\prod_{F_j\subset E_i}h_{j}(P_{F_j}(x)) \mbox{ \ \ \ for Lebesgue a.e. $x\in E_i$}. \end{equation} On the other hand, if for any $i=1,\ldots,k$, $f_i$ is of the form as in \eqref{BLtheoequaform}, then equality holds in \eqref{BL} for $f_1,\ldots,f_k$. \end{theo} Theorem~\ref{BLtheoequa} explains the term "independent subspaces" because the functions $h_{j}$ on $F_j$ are chosen freely and independently from each other. A reverse form of the Geometric Brascamp-Lieb inequality was proved by Barthe \cite{Bar98}. We write $\int^*_{\R^n}\varphi $ to denote the outer integral for a possibly non-integrable function $\varphi:\,\R^n\to[0,\infty)$; namely, the infimum (actually minimum) of $\int_{\R^n} \psi$ where $\psi\geq \varphi$ is Lebesgue measurable. \begin{theo}[Barthe] \label{RBLtheo} For the non-trivial linear subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ satisfying \eqref{highdimcond0}, and for non-negative $f_i\in L_1(E_i)$, we have \begin{equation} \label{RBL} \int_{\R^n}^*\sup_{x=\sum_{i=1}^kp_ix_i,\, x_i\in E_i}\;\prod_{i=1}^kf_i(x_i)^{p_i}\,dx \geq \prod_{i=1}^k\left(\int_{E_i}f_i\right)^{p_i}. \end{equation} \end{theo} \noindent{\bf Remark.} This is the Pr\'ekopa-Leindler inequality (cf. Theorem~\ref{PL}) if $E_1=\ldots=E_k=\R^n$ and $P_{E_i}=I_n$, and hence $\sum_{i=1}^kp_i=1$. \\ We say that a function $h:\,\R^n\to[0,\infty)$ is log-concave if $h((1-\lambda)x+\lambda\,y)\geq h(x)^{1-\lambda}h(y)^\lambda$ for any $x,y\in\R^n$ and $\lambda\in(0,1)$; or in other words, $h=e^{-W}$ for a convex function $W:\,\R^n\to(-\infty,\infty]$. B\"or\"oczky, Kalantzopoulos, Xi \cite{BKX23} prove the following characterization of equality in the Geometric Barthe's inequality \eqref{RBL}. \begin{theo}[B\"or\"oczky, Kalantzopoulos, Xi] \label{RBLtheoequa} For linear subspaces $E_1,\ldots,E_k$ of $\R^n$ and $p_1,\ldots,p_k>0$ satisfying \eqref{highdimcond0}, if $F_{\rm dep}\neq\R^n$, then let $F_1,\ldots,F_\ell$ be the independent subspaces, and if $F_{\rm dep}=\R^n$, then let $\ell=1$ and $F_1=\{0\}$. If equality holds in the Geometric Barthe's inequality \eqref{RBL} for non-negative $f_i\in L_1(E_i)$ with $\int_{E_i}f_i>0$, $i=1,\ldots,k$, then \begin{equation} \label{RBLtheoequaform} f_i(x)=\theta_i e^{-\langle AP_{F_{\rm dep}}x,P_{F_{\rm dep}}x-b_i\rangle}\prod_{F_j\subset E_i}h_{j}(P_{F_j}(x-w_i)) \mbox{ \ \ \ for Lebesgue a.e. 
$x\in E_i$} \end{equation} where \begin{itemize} \item $\theta_i>0$, $b_i\in E_i\cap F_{\rm dep}$ and $w_i\in E_i$ for $i=1,\ldots,k$, \item $h_{j}\in L_1(F_j)$ is non-negative for $j=1,\ldots,\ell$, and in addition, $h_j$ is log-concave if there exist $\alpha\neq \beta$ with $F_j\subset E_\alpha\cap E_\beta$, \item $A:F_{\rm dep}\to F_{\rm dep}$ is a positive definite matrix such that the eigenspaces of $A$ are critical subspaces. \end{itemize} On the other hand, if for any $i=1,\ldots,k$, $f_i$ is of the form as in \eqref{RBLtheoequaform} and equality holds for all $x\in E_i$ in \eqref{RBLtheoequaform}, then equality holds in \eqref{RBL} for $f_1,\ldots,f_k$. \end{theo} In particular, if for any $\alpha=1,\ldots,k$, the subspaces $\{E_i\}_{i\neq \alpha}$ span $\R^n$ in Theorem~\ref{RBLtheoequa}, then any extremizer of the Geometric Barthe's inequality is log-concave. We note that Barthe's inequality \eqref{RBL} extends the celebrated Pr\'ekopa-Leindler inequality Theorem~\ref{PL} (proved in various forms by Pr\'ekopa \cite{Pre71,Pre73}, Leindler \cite{Lei72} and Borell \cite{Bor75}) whose equality case was clarified by Dubuc \cite{Dub77} (see the survey Gardner \cite{gardner}). \begin{theo}[Pr\'ekopa, Leindler, Dubuc] \label{PL} For $m\geq 2$, $\lambda_1,\ldots,\lambda_m\in(0,1)$ with $\lambda_1+\ldots+\lambda_m=1$ and integrable $\varphi_1,\ldots,\varphi_m:\,\R^n\to[0,\infty)$, we have \begin{equation} \label{PLineq} \int_{\R^n}^* \sup_{x=\sum_{i=1}^m\lambda_ix_i,\, x_i\in \R^n}\;\prod_{i=1}^m\varphi_i(x_i)^{\lambda_i}\,dx \geq \prod_{i=1}^m\left(\int_{\R^n}\varphi_i\right)^{\lambda_i}, \end{equation} and if equality holds and the left hand side is positive and finite, then there exist a log-concave function $\varphi$ and $a_i>0$ and $b_i\in\R^n$ for $i=1,\ldots,m$ such that $$ \varphi_i(x)=a_i\, \varphi(x-b_i) $$ for Lebesgue a.e. $x\in\R^n$, $i=1,\ldots,m$. \end{theo} The explanation for the phenomenon concerning the log-concavity of $h_j$ in Theorem~\ref{RBLtheoequa} is as follows. Let $\ell\geq 1$ and $j\in\{1,\ldots,\ell\}$, and hence $\sum_{E_i\supset F_j}p_i=1$. If $f_1,\ldots,f_k$ are of the form \eqref{RBLtheoequaform}, then equality in Barthe's inequality \eqref{RBL} yields $$ \int^*_{F_j}\sup_{x=\sum_{E_i\supset F_j}p_i x_i\atop x_i\in F_j}h_{j}\Big(x_i-P_{F_j}w_i\Big)^{p_i}\,dx= \prod_{E_i\supset F_j}\left(\int_{F_j}h_{j}\Big(x-P_{F_j}w_i\Big)\,dx\right)^{p_i} \left(= \int_{F_j} h_j(x)\,dx\right). $$ Therefore, if there exist $\alpha\neq \beta$ with $F_j\subset E_\alpha\cap E_\beta$, then the equality conditions in the Pr\'ekopa-Leindler inequality \eqref{PLineq} imply that $h_j$ is log-concave. On the other hand, if there exists $\alpha\in \{1,\ldots,k\}$ such that $F_j\subset E_\beta^\bot$ for any $\beta\neq\alpha$, then we do not have any condition on $h_j$, and $p_\alpha=1$.\\ For completeness, let us state and discuss the general Brascamp-Lieb inequality and its reverse form due to Barthe. The following was proved by Brascamp, Lieb \cite{BrL76} in the rank one case and Lieb \cite{Lie90} in general. \begin{theo}[Brascamp-Lieb Inequality] \label{BLgeneral} Let $B_i:\R^n\to H_i$ be surjective linear maps where $H_i$ is $n_i$-dimensional Euclidean space, $n_i\geq 1$, for $i=1,\ldots,k$ such that $$ \cap_{i=1}^k {\rm ker}\,B_i=\{0\}, $$ and let $p_1,\ldots,p_k>0$ satisfy $\sum_{i=1}^kp_in_i=n$. 
Then for non-negative $f_i\in L_1(H_i)$, we have \begin{equation} \label{BLgeneraleq} \int_{\R^n}\prod_{i=1}^kf_i(B_ix)^{p_i}\,dx \leq {\rm BL}(\mathbf{B},\mathbf{p})\cdot\prod_{i=1}^k\left(\int_{H_i}f_i\right)^{p_i} \end{equation} where the optimal factor ${\rm BL}(\mathbf{B},\mathbf{p})\in(0,\infty]$ depends on $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$ (which we call a Brascamp-Lieb datum), and ${\rm BL}(\mathbf{B},\mathbf{p})$ is determined by choosing centered Gaussians $f_i(x)=e^{-\langle A_ix,x\rangle}$ for some symmetric positive definite $n_i\times n_i$ matrix $A_i$, $i=1,\ldots,k$ and $x\in H_i$. \end{theo} \noindent{\bf Remark} The Geometric Brascamp-Lieb Inequality is readily a special case of \eqref{BLgeneraleq} where ${\rm BL}(\mathbf{B},\mathbf{p})=1$. We note that \eqref{BLgeneraleq} is H\"older's inequality if $H_1=\ldots=H_k=\R^n$ and each $B_i=I_n$, and hence ${\rm BL}(\mathbf{B},\mathbf{p})=1$ and $\sum_{i=1}^kp_i=1$ in that case. The condition $\sum_{i=1}^kp_in_i=n$ makes sure that for any $\lambda>0$, the inequality \eqref{BLgeneraleq} is invariant under replacing $f_1(x_1),\ldots,f_k(x_k)$ by $f_1(\lambda x_1),\ldots,f_k(\lambda x_k)$, $x_i\in H_i$.\\ We say that two Brascamp-Lieb data $\{(B_i,p_i)\}_{i=1,\ldots,k}$ and $\{(B'_i,p'_i)\}_{i=1,\ldots,k'}$ as in Theorem~\ref{BLgeneral} are equivalent if $k'=k$, $p'_i=p_i$, and there exist linear isomorphisms $\Psi:\R^n\to\R^n$ and $\Phi_i:H_i\to H'_i$, $i=1,\ldots,k$, such that $B'_i=\Phi_i\circ B_i\circ \Psi$. It was proved by Carlen, Lieb, Loss \cite{CLL04} in the rank one case, and by Bennett, Carbery, Christ, Tao \cite{BCCT08} in general that there exists a set of extremizers $f_1,\ldots,f_k$ for \eqref{BLgeneraleq} if and only if the Brascamp-Lieb datum $\{(B_i,p_i)\}_{i=1,\ldots,k}$ is equivalent to some Geometric Brascamp-Lieb datum. Therefore, Valdimarsson's Theorem~\ref{BLtheoequa} provides a full characterization of the equality case in Theorem~\ref{BLgeneral}, as well. The following reverse version of the Brascamp-Lieb inequality was proved by Barthe in \cite{Bar97} in the rank one case, and in \cite{Bar98} in general. \begin{theo}[Barthe's Inequality] \label{RBLgeneral} Let $B_i:\R^n\to H_i$ be surjective linear maps where $H_i$ is $n_i$-dimensional Euclidean space, $n_i\geq 1$, for $i=1,\ldots,k$ such that $$ \cap_{i=1}^k {\rm ker}\,B_i=\{0\}, $$ and let $p_1,\ldots,p_k>0$ satisfy $\sum_{i=1}^kp_in_i=n$. Then for non-negative $f_i\in L_1(H_i)$, we have \begin{equation} \label{RBLgeneraleq} \int_{\R^n}^* \sup_{x=\sum_{i=1}^kp_i B_i^*x_i,\, x_i\in H_i}\; \prod_{i=1}^kf_i(x_i)^{p_i}\,dx \geq {\rm RBL}(\mathbf{B},\mathbf{p})\cdot \prod_{i=1}^k\left(\int_{H_i}f_i\right)^{p_i} \end{equation} where the optimal factor ${\rm RBL}(\mathbf{B},\mathbf{p})\in[0,\infty)$ depends on the Brascamp-Lieb datum $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$, and ${\rm RBL}(\mathbf{B},\mathbf{p})$ is determined by choosing centered Gaussians $f_i(x)=e^{-\langle A_ix,x\rangle}$ for some symmetric positive definite $n_i\times n_i$ matrix $A_i$, $i=1,\ldots,k$ and $x\in H_i$. \end{theo} \noindent{\bf Remark} The Geometric Barthe's Inequality is readily a special case of \eqref{RBLgeneraleq} where ${\rm RBL}(\mathbf{B},\mathbf{p})=1$. We note that \eqref{RBLgeneraleq} is the Pr\'ekopa-Leindler inequality \eqref{PLineq} if $H_1=\ldots=H_k=\R^n$ and each $B_i=I_n$, and hence ${\rm RBL}(\mathbf{B},\mathbf{p})=1$ and $\sum_{i=1}^kp_i=1$ in that case.
The condition $\sum_{i=1}^kp_in_i=n$ makes sure that for any $\lambda>0$, the inequality \eqref{RBLgeneraleq} is invariant under replacing $f_1(x_1),\ldots,f_k(x_k)$ by $f_1(\lambda x_1),\ldots,f_k(\lambda x_k)$, $x_i\in H_i$. \\ \begin{remark}[The relation between ${\rm BL}(\mathbf{B},\mathbf{p})$ and ${\rm RBL}(\mathbf{B},\mathbf{p})$] For a Brascamp-Lieb datum $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$ as in Theorem~\ref{BLgeneral} and Theorem~\ref{RBLgeneral}, possibly ${\rm BL}(\mathbf{B},\mathbf{p})=\infty$ and ${\rm RBL}(\mathbf{B},\mathbf{p})=0$ (see Section~\ref{secFiniteness} for the characterization of when ${\rm BL}(\mathbf{B},\mathbf{p})$ and ${\rm RBL}(\mathbf{B},\mathbf{p})$ are positive and finite). According to Barthe \cite{Bar98}, ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$ if and only if ${\rm RBL}(\mathbf{B},\mathbf{p})>0$, and in this case, we have \begin{equation} \label{BLRBL} {\rm BL}(\mathbf{B},\mathbf{p})\cdot {\rm RBL}(\mathbf{B},\mathbf{p})=1. \end{equation} \end{remark} Concerning extremals in Theorem~\ref{RBLgeneral}, Lehec \cite{Leh14} proved that if there exist Gaussian extremizers for Barthe's Inequality \eqref{RBLgeneraleq}, then the corresponding Brascamp-Lieb datum $\{(B_i,p_i)\}_{i=1,\ldots,k}$ is equivalent to some Geometric Brascamp-Lieb datum; therefore, the equality case of \eqref{RBLgeneraleq} can be understood via Theorem~\ref{RBLtheoequa} in that case. However, it is still not known whether having any extremizers in Barthe's Inequality \eqref{RBLgeneraleq} yields the existence of Gaussian extremizers. One possible approach is to use iterated convolutions and renormalizations as in Bennett, Carbery, Christ, Tao \cite{BCCT08} in the case of the Brascamp-Lieb inequality. The importance of the Brascamp-Lieb inequality is shown by the fact that besides harmonic analysis and convex geometry, it has also been applied, for example, \begin{itemize} \item in discrete geometry, as in the quantitative fractional Helly theorem by Brazitikos \cite{Bra14}, \item in combinatorics, as in the work on exceptional sets by Gan \cite{Gan24}, \item in number theory, as in the paper by Guo, Zhang \cite{GuZ19}, \item to get central limit theorems in probability, as in the paper by Avram, Taqqu \cite{AvT06}. \end{itemize} We note that the paper by Brazitikos \cite{Bra14} is especially interesting because it does not simply use the rank one Geometric Brascamp-Lieb inequality (cf. Theorem~\ref{BrascampLiebRankOne}), which is typically used for many inequalities in convex geometry, but an approximate version of it. There are three main methods of proof that work for both the Brascamp-Lieb Inequality and its reverse form due to Barthe. The paper by Barthe \cite{Bar98} used optimal transportation to prove Barthe's Inequality (``the Reverse Brascamp-Lieb inequality'') and reprove the Brascamp-Lieb Inequality simultaneously. A heat equation argument was provided in the rank one case by Carlen, Lieb, Loss \cite{CLL04} for the Brascamp-Lieb Inequality and by Barthe, Cordero-Erausquin \cite{BaC04} for Barthe's inequality. The general versions of both inequalities are proved via the heat equation approach by Barthe, Huet \cite{BaH09}. Finally, simultaneous probabilistic arguments for the two inequalities are due to Lehec \cite{Leh14}. We note that Chen, Dafnis, Paouris \cite{CDP15} and Courtade, Liu \cite{CoL21} also deal systematically with finiteness conditions in the Brascamp-Lieb and Barthe inequalities.
Various versions of the Brascamp-Lieb inequality and its reverse form have been obtained by Balogh, Kristaly \cite{BaK18}, Barthe \cite{Bar04}, Barthe, Cordero-Erausquin \cite{BaC04}, Barthe, Cordero-Erausquin, Ledoux, Maurey \cite{BCLM11}, Barthe, Wolff \cite{BaW14,BaW22}, Bennett, Bez, Flock, Lee \cite{BBFL18}, Bennett, Bez, Buschenhenke, Cowling, Flock \cite{BBBCF20}, Bennett, Tao \cite{BeT24}, Bobkov, Colesanti, Fragal\`a \cite{BCF14}, Bueno, Pivarov \cite{BuP21}, Chen, Dafnis, Paouris \cite{CDP15}, Courtade, Liu \cite{CoL21}, Duncan \cite{Dun21}, Ghilli, Salani \cite{GhS17}, Kolesnikov, Milman \cite{KoM22}, Livshyts \cite{Liv21}, Lutwak, Yang, Zhang \cite{LYZ04,LYZ07}, Maldague \cite{Mal}, Marsiglietti \cite{Mar17}, Nakamura, Tsuji \cite{NaT}, Rossi, Salani \cite{RoS17,RoS19}. \section{The Reverse Isoperimetric Inequality and the rank one Geometric Brascamp-Lieb inequality} For a compact convex set $K\subset\R^n$ with ${\rm dim}\,{\rm aff}\,K=m$, we write $|K|$ to denote the $m$-dimensional Lebesgue measure of $K$, and $S(K)$ to denote the surface area of $K$ in terms of the $(n-1)$-dimensional Hausdorff measure. In addition, let $B^n=\{x\in\R^n:\,\|x\|\leq 1\}$ be the Euclidean unit ball.\\ \noindent{\bf Remark.} For the box $X_\varepsilon=[-\varepsilon^{-(n-1)},\varepsilon^{-(n-1)}]\times [-\varepsilon,\varepsilon]^{n-1}$, we have $|X_\varepsilon|=2^n$ but $S(X_\varepsilon)>1/\varepsilon$ (the area of a "long" facet); therefore, the isoperimetric quotient $S(X_\varepsilon)^n/|X_\varepsilon|^{n-1}$ can be arbitrarily large in general. The "Reverse isoperimetric inequality" says that each convex body has a linear image whose isoperimetric quotient is at most as bad as that of a regular simplex, and hence "simplices have the worst isoperimetric quotient" up to linear transforms (cf. Theorem~\ref{inverse-iso-simplex}). For origin symmetric convex bodies, "cubes have the worst isoperimetric quotient" up to linear transforms (cf. Theorem~\ref{inverse-iso-cube}). Let $\Delta^n$ denote the regular simplex circumscribed around $B^n$, and hence each facet touches $B^n$. \begin{theo}[Reverse Isoperimetric Inequality, Keith Ball \cite{Bal91}] \label{inverse-iso-simplex} For any convex body $K$ in $\R^n$, there exists $\Phi\in {\rm GL}(n)$ such that $$ \frac{S(\Phi K)^n}{|\Phi K|^{n-1}}\leq \frac{S(\Delta^n)^n}{|\Delta^n|^{n-1}} =\frac{n^{3n/2}(n+1)^{(n+1)/2}}{n!}, $$ where strict inequality can be attained if and only if $K$ is not a simplex. \end{theo} We note that a {\it parallelepiped}\index{parallelepiped} is a linear image of a cube, and consider the centered cube $W^n=[-1,1]^n$ of edge length $2$. \begin{theo}[Reverse Isoperimetric Inequality in the $o$-symmetric case, Keith Ball \cite{Bal89}] \label{inverse-iso-cube} For any $o$-symmetric convex body $K$ in $\R^n$, there exists $\Phi\in {\rm GL}(n)$ such that $$ \frac{S(\Phi K)^n}{|\Phi K|^{n-1}}\leq \frac{S(W^n)^n}{|W^n|^{n-1}}=2^nn^n, $$ where strict inequality can be attained if and only if $K$ is not a parallelepiped. \end{theo} We note that B\"or\"oczky, Hug \cite{BoH17b} and B\"or\"oczky, Fodor, Hug \cite{BFH19} prove stability versions of Theorem~\ref{inverse-iso-simplex} and of Theorem~\ref{inverse-iso-cube}, respectively. To sketch the proofs of the Reverse Isoperimetric Inequalities Theorem~\ref{inverse-iso-simplex} and Theorem~\ref{inverse-iso-cube}, and to show how they are connected to the Brascamp-Lieb inequality, we note that a polytope $P$ is circumscribed around $B^n$ if each facet of $P$ touches $B^n$.
\begin{lemma} \label{ballinbody} If $rB^n\subset K$ for a convex body $K$ in $\R^n$ and $r>0$, then $S(K)\leq \frac{n}r\,|K|$, and equality holds if $K$ is a polytope circumscribed around $rB^n$. \end{lemma} \begin{proof} The inequality $S(K)\leq \frac{n}r\,|K|$ follows from $$ S(K)=\lim_{\varrho\to 0^+}\frac{|K+\varrho\,B^n|-|K|}{\varrho}\leq \lim_{\varrho\to 0^+}\frac{|K+\frac{\varrho}r\,K|-|K|}{\varrho}= \frac{n}r\,|K|. $$ If $K$ is a polytope circumscribed around $rB^n$, then considering the bounded "cones" with apex $o$ and of height $r$ over the facets shows that $|K|=\frac{r}n\,S(K)$ in this case. \end{proof} The proof of the Reverse Isoperimetric inequality both in the $o$-symmetric and non-symmetric cases is based on the rank one Geometric Brascamp-Lieb inequality Theorem~\ref{BrascampLiebRankOne}. \begin{theo}[Brascamp-Lieb, Keith Ball] \label{BrascampLiebRankOne} If $u_1,\ldots,u_k\in S^{n-1}$ and $p_1,\ldots,p_k>0$ satisfy \begin{equation} \label{BLJohn0} \sum_{i=1}^kp_i u_i\otimes u_i={\rm I}_n, \end{equation} and $f_1,\ldots,f_k\in L^1(\R)$ are non-negative, then \begin{equation} \label{BL0} \int_{\R^n}\prod_{i=1}^kf_i(\langle x,u_i\rangle)^{p_i}\,dx\leq \prod_{i=1}^k\left(\int_{\R}f_i\right)^{p_i}. \end{equation} \end{theo} \noindent{\bf Remarks.} \begin{description} \item[(i)] If $n=1$, then the Brascamp-Lieb inequality (\ref{BL0}) is the H\"older inequality. \item[(ii)] Inequality (\ref{BL0}) is optimal, and we provide two types of examples for equality: \begin{itemize} \item If $u_1,\ldots,u_k\in S^{n-1}$ and $p_1,\ldots,p_k>0$ satisfy (\ref{BLJohn0}), and $f_i(t)=e^{-\pi t^2}$ for $i=1,\ldots,k$, then each $\int_{\R}f_i=1$, and $$ \int_{\R^n}\prod_{i=1}^kf_i(\langle x,u_i\rangle)^{p_i}\,dx= \int_{\R^n}e^{-\pi\sum_{i=1}^kp_i\langle x,u_i\rangle^2}\,dx= \int_{\R^n}e^{-\pi\langle x,x\rangle}\,dx=1. $$ \item If $u_1,\ldots,u_n$ is an orthonormal basis, $k=n$ and $p_1=\ldots=p_n=1$, and hence (\ref{BLJohn0}) holds, and $f_1,\ldots,f_n\in L^1(\R)$ are any non-negative functions, then the Fubini Theorem yields $$ \int_{\R^n}\prod_{i=1}^nf_i(\langle x,u_i\rangle)^{p_i}\,dx= \prod_{i=1}^n\left(\int_{\R}f_i\right)^{p_i}. $$ \end{itemize} \end{description} More precisely, Theorem~\ref{BrascampLiebRankOne} is the so-called Geometric form of the rank one Brascamp-Lieb inequality discovered by Keith Ball, which matches nicely the form of John's theorem as in Theorem~\ref{Johnmaxvol} (see Keith Ball \cite{Bal92} or Gruber, Schuster \cite{GrS05} for the if and only if statement). \begin{theo}[John] \label{Johnmaxvol} For any convex body $K\subset\R^n$, there exists a unique ellipsoid of maximal volume - the so-called John ellipsoid - contained in $K$. Assuming that $B^n\subset K$, $B^n$ is the John ellipsoid of $K$ if and only if there exist $u_1,\ldots,u_k\in S^{n-1}\cap \partial K$ and $p_1,\ldots,p_k>0$, $k\leq n(n+1)$, such that \begin{align} \label{John1} \sum_{i=1}^kp_i u_i\otimes u_i&={\rm I}_n,\\ \label{John2} \sum_{i=1}^kp_i u_i&=o \end{align} where ${\rm I}_n$ denotes the $n\times n$ identity matrix. If $K$ is origin symmetric ($K=-K$), then we may assume that $k=2\ell$ for an integer $\ell\geq n$, and $p_{i+\ell}=p_i$ and $u_{i+\ell}=-u_i$ for $i\in\{1,\ldots,\ell\}$, and hence \eqref{John2} can be dropped. \end{theo} \noindent{\bf Remarks.} Assume that $B^n\subset K$ is the John ellipsoid of $K$ in Theorem~\ref{Johnmaxvol}.
\begin{itemize} \item (\ref{John1}) yields that $\langle x,y\rangle =\sum_{i=1}^kp_i\langle x,u_i\rangle\langle y,u_i\rangle$ for $x,y\in\R^n$, and hence the discrete measure $\mu$ on $S^{n-1}$ concentrated on $\{u_1,\ldots,u_k\}$ with $\mu(u_i)=p_i$ is called isotropic. \item $\sum_{i=1}^k p_i=n$ follows by comparing traces in (\ref{John1}). \item $\langle x,u_i\rangle\leq 1$ for $x\in K$ and $i=1,\ldots,k$ as $K$ and $B^n$ share the same supporting hyperplanes at $u_1,\ldots,u_k$. \end{itemize} Equality in Theorem~\ref{BrascampLiebRankOne} has been characterized by Barthe \cite{Bar98}. It is more involved; therefore, we only quote the special case that we need. \begin{theo}[Barthe] \label{BLequa0} Let $\int_{\R}f_i>0$ for $i=1,\ldots,k$, such that none of the $f_i$s is Gaussian in Theorem~\ref{BrascampLiebRankOne}, and equality holds in (\ref{BL0}). Then there exists an orthonormal basis $e_1,\ldots,e_n$ of $\R^n$ such that $\{u_1,\ldots,u_k\}\subset\{\pm e_1,\ldots,\pm e_n\}$ and $\sum_{u_i\in\R e_p}p_i=1$ for each $e_p$, and if $u_i=-u_j$, then $f_i(t)=\lambda_{ij}f_j(-t)$ for $\lambda_{ij}>0$. \end{theo} It is a natural question how well an inscribed ellipsoid can approximate a convex body in terms of volume. This question was answered by Keith Ball \cite{Bal89,Bal91}, see Theorem~\ref{volume-ration-cube} for the origin symmetric case, and Theorem~\ref{volume-ratio-simplex} in general. \begin{theo}[Volume Ratio in the origin symmetric case, Keith Ball \cite{Bal89}] \label{volume-ration-cube} For any $o$-symmetric convex body $K$ in $\R^n$, the \index{volume ratio}maximal volume John ellipsoid $E\subset K$ satisfies $$ \frac{|K|}{|E|}\leq \frac{|W^n|}{|B^n|} =\frac{2^n}{\omega_n}, $$ where strict inequality is attained unless $K$ is a parallelepiped. \end{theo} \begin{proof} We may assume after a linear transformation that $E=B^n$. According to John's Theorem~\ref{Johnmaxvol}, there exists a symmetric set $u_1,\ldots,u_{2\ell}\in S^{n-1}\cap \partial K$ and $p_1,\ldots,p_{2\ell}>0$ with $u_{i+\ell}=-u_i$ and $p_{i+\ell}=p_i$, $i=1,\ldots,\ell$, such that $$ \sum_{i=1}^{2\ell}p_i u_i\otimes u_i={\rm I}_n. $$ For $i=1,\ldots,2\ell$, let $f_i=\mathbf{1}_{[-1,1]}$. Now $K\subset P$ for the polytope $P=\{x\in\R^n:\,\langle x,u_i\rangle\leq 1$, $i=1,\ldots,2\ell\}$ according to the Remarks after John's Theorem~\ref{Johnmaxvol} where $\mathbf{1}_P(x)=\prod_{i=1}^{2\ell}f_i(\langle x,u_i\rangle)=\prod_{i=1}^{2\ell}f_i(\langle x,u_i\rangle)^{p_i}$. It follows from the Brascamp-Lieb inequality (\ref{BL0}) and $\sum_{i=1}^{2\ell}p_i=n$ that $$ |K|\leq |P|=\int_{\R^n}\prod_{i=1}^{2\ell}f_i(\langle x,u_i\rangle)^{p_i}\,dx\leq \prod_{i=1}^{2\ell}\left(\int_{\R}f_i\right)^{p_i}=2^{\sum_{i=1}^{2\ell}p_i}=2^n=|W^n|. $$ If $|K|=|W^n|$, then $|K|=|P|$, and Theorem~\ref{BLequa0} yields that $\ell=n$ and $u_1,\ldots,u_n$ is an orthonormal basis of $\R^n$; therefore, $K$ is a cube. \end{proof} Concerning the volume ratio of general convex bodies, we only sketch the argument because it involves a somewhat technical calculation. \begin{theo}[Volume Ratio, Keith Ball \cite{Bal91}] \label{volume-ratio-simplex} For any convex body $K$ in $\R^n$, \index{volume ratio}the maximal volume John ellipsoid $E\subset K$ satisfies $$ \frac{|K|}{|E|}\leq \frac{|\Delta^n|}{|B^n|} =\frac{n^{n/2}(n+1)^{(n+1)/2}}{n!\omega_n}, $$ where strict inequality is attained unless $K$ is a simplex. 
\end{theo} \begin{proof}[Sketch of the proof of Theorem~\ref{volume-ratio-simplex}] We may assume that $B^n$ is the John ellipsoid of $K$, and let $p_1,\ldots,p_k>0$ be the coefficients and $u_1,\ldots,u_k\in S^{n-1}\cap \partial K$ be the contact points satisfying \eqref{John1} and \eqref{John2} in John's Theorem~\ref{Johnmaxvol}; namely, \begin{equation} \label{John12VolumeRatio} \sum_{i=1}^kp_i u_i\otimes u_i={\rm I}_n \mbox{ \ and \ } \sum_{i=1}^kp_i u_i=o. \end{equation} Again, $K\subset P$ for the polytope $P=\{x\in\R^n:\,\langle x,u_i\rangle\leq 1$, $i=1,\ldots,k\}$ according to the Remarks after John's Theorem~\ref{Johnmaxvol}. The main idea is to lift $u_1,\ldots,u_k$ to $\R^{n+1}$, and employ the Brascamp-Lieb inequality in $\R^{n+1}$. In particular, $\R^n$ is identified with $w^\bot$ for a fixed $w\in S^n\subset\R^{n+1}$, and let $\tilde{u}_i=-\sqrt{\frac{n}{n+1}}\cdot u_i+\sqrt{\frac{1}{n+1}}\cdot w$ and $\tilde{c}_i=\frac{n+1}{n}\cdot p_i$ for $i=1,\ldots,k$. Therefore, $\sum_{i=1}^k\tilde{c}_i \tilde{u}_i\otimes \tilde{u}_i={\rm I}_{n+1}$ follows from \eqref{John12VolumeRatio}. For $i=1,\ldots,k$, we consider the probability density $$ f_i(t)=\left\{ \begin{array}{rl} e^{-t}&\mbox{if $t\geq 0$};\\ 0&\mbox{if $t< 0$} \end{array} \right. $$ on $\R$ where some not too complicated calculations show that $$ \int_{\R^{n+1}}\prod_{i=1}^kf_i(\langle x,\tilde{u}_i\rangle)^{\tilde{c}_i}=\frac{|P|}{|\Delta^n|}. $$ We conclude from the Brascamp-Lieb inequality \eqref{BL0} that $|K|\leq|P|\leq |\Delta^n|$. If $|K|=|\Delta^n|$, then $K=P$ and equality holds in the Brascamp-Lieb inequality. Therefore, Theorem~\ref{BLequa0} provides an orthonormal basis $e_1,\ldots,e_{n+1}$ of $\R^{n+1}$ such that $\{\tilde{u}_1,\ldots,\tilde{u}_k\}\subset\{\pm e_1,\ldots,\pm e_{n+1}\}$. Since $\langle w,\tilde{u}_i\rangle=\sqrt{\frac{1}{n+1}}$ for $i=1,\ldots,k$, we conclude that $k=n+1$ and $\tilde{u}_1,\ldots,\tilde{u}_{n+1}$ is an orthonormal basis of $\R^{n+1}$, and hence $P$ is congruent to $\Delta^n$. \end{proof} \begin{proof}[Proof of the Reverse Isoperimetric Inequality Theorem~\ref{inverse-iso-simplex} and Theorem~\ref{inverse-iso-cube}:] After applying an affine transformation, we may assume that the John ellipsoid of $K$ is $B^n$ both in Theorem~\ref{inverse-iso-simplex} and Theorem~\ref{inverse-iso-cube}. For Theorem~\ref{inverse-iso-simplex}, Theorem~\ref{volume-ratio-simplex} yields that $|K|\leq|\Delta^n|$, thus we deduce from Lemma~\ref{ballinbody} that $$ \frac{S(K)^n}{|K|^{n-1}}\leq \frac{n^n|K|^n}{|K|^{n-1}}=n^n|K|\leq n^n|\Delta^n| =\frac{S(\Delta^n)^n}{|\Delta^n|^{n-1}}. $$ If equality holds in Theorem~\ref{inverse-iso-simplex}, then the equality case of Theorem~\ref{volume-ratio-simplex} yields that $K$ is congruent to $\Delta^n$. For Theorem~\ref{inverse-iso-cube}, we use the same argument, only with Theorem~\ref{volume-ration-cube} in place of Theorem~\ref{volume-ratio-simplex}. \end{proof} \section{The Loomis-Whitney inequality, the Bollobas-Thomason inequality and their dual forms} \label{secBT} In this section, we list some geometric inequalities that are direct consequences of the Geometric Brascamp-Lieb inequality \eqref{BL} and Barthe's Geometric Reverse Brascamp-Lieb inequality \eqref{RBL}. We write $e_1,\ldots,e_n$ to denote an orthonormal basis of $\R^n$.
The starting point is the classical Loomis-Whitney inequality \cite{LoW49} from 1949, which follows from the Geometric Brascamp-Lieb inequality \eqref{BL} provided that $k=n$, $p_1=\ldots=p_n=\frac1{n-1}$ and $f_i$ is the characteristic function of $P_{e_i^\bot}K$. \begin{theo}[Loomis, Whitney] \label{Loomis-Whitney} If $K\subset \R^n$ is compact and affinely spans $\R^n$, then \begin{equation} \label{Loomis-Whitney-ineq} |K|^{n-1}\leq \prod_{i=1}^n|P_{e_i^\bot}K|, \end{equation} with equality if and only if $K=\oplus_{i=1}^nK_i$ where ${\rm aff}\,K_i$ is a line parallel to $e_i$. \end{theo} Meyer \cite{Mey88} provided a dual form of the Loomis-Whitney inequality where equality holds for affine crosspolytopes; it follows from Barthe's Geometric Reverse Brascamp-Lieb inequality \eqref{RBL} provided that $k=n$, $p_1=\ldots=p_n=\frac1{n-1}$ and $f_i$ is the characteristic function of $e_i^\bot\cap K$. \begin{theo}[Meyer] \label{dual-Loomis-Whitney} If $K\subset \R^n$ is compact convex with $o\in{\rm int}K$, then \begin{equation} \label{dual-Loomis-Whitney-ineq} |K|^{n-1}\geq \frac{n!}{n^n}\prod_{i=1}^n|K\cap e_i^\bot|, \end{equation} with equality if and only if $K={\rm conv}\{\pm\lambda_ie_i\}_{i=1}^n$ for $\lambda_i>0$, $i=1,\ldots,n$. \end{theo} We note that various Reverse and dual Loomis-Whitney type inequalities are proved by Campi, Gardner, Gronchi \cite{CGG16}, Brazitikos {\it et al.} \cite{BDG17,BGL18}, Alonso-Guti\'errez {\it et al.} \cite{ABBC,AB}. To consider a generalization of the Loomis-Whitney inequality and its dual form, we set $[n]:=\{1,\ldots,n\}$, and for a non-empty proper subset $\sigma\subset[n]$, we define $E_\sigma={\rm lin}\{e_i\}_{i\in\sigma}$. For $s\geq 1$, we say that the not necessarily distinct proper non-empty subsets $\sigma_1,\ldots,\sigma_k\subset[n]$ form an $s$-uniform cover of $[n]$ if each $j\in[n]$ is contained in exactly $s$ of $\sigma_1,\ldots,\sigma_k$. The Bollobas-Thomason inequality \cite{BoT95} follows from the Geometric Brascamp-Lieb inequality \eqref{BL} provided that $p_1=\ldots=p_k=\frac1{s}$ and $f_i$ is the characteristic function of $P_{E_{\sigma_i}}K$. \begin{theo}[Bollobas, Thomason] \label{Bollobas-Thomason} If $K\subset \R^n$ is compact and affinely spans $\R^n$, and $\sigma_1,\ldots,\sigma_k\subset[n]$ form an $s$-uniform cover of $[n]$ for $s\geq 1$, then \begin{equation} \label{Bollobas-Thomasson-ineq} |K|^s\leq \prod_{i=1}^k|P_{E_{\sigma_i}}K|. \end{equation} \end{theo} We note that the case when $k=n$, $s=n-1$, and hence when we may assume that $\sigma_i=[n]\backslash \{i\}$, is the Loomis-Whitney inequality Theorem~\ref{Loomis-Whitney}. Liakopoulos \cite{Lia19} managed to prove a dual form of the Bollobas-Thomason inequality, which follows from Barthe's Geometric Reverse Brascamp-Lieb inequality \eqref{RBL} provided that $p_1=\ldots=p_k=\frac1{s}$ and $f_i$ is the characteristic function of $E_{\sigma_i}\cap K$. For a finite set $\sigma$, we write $|\sigma|$ to denote its cardinality. \begin{theo}[Liakopoulos] \label{Liakopoulos} If $K\subset \R^n$ is compact convex with $o\in{\rm int}K$, and $\sigma_1,\ldots,\sigma_k\subset[n]$ form an $s$-uniform cover of $[n]$ for $s\geq 1$, then \begin{equation} \label{Liakopoulos-ineq} |K|^s\geq \frac{\prod_{i=1}^k|\sigma_i|!}{(n!)^s}\cdot \prod_{i=1}^k|K\cap E_{\sigma_i}|. \end{equation} \end{theo} The equality case of the Bollobas-Thomason inequality Theorem~\ref{Bollobas-Thomason}, based on Valdimarsson \cite{Val08}, has been known to the experts.
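As a quick numerical sanity check (an illustration only, not part of the cited works; NumPy and SciPy are assumed to be available), the following lines verify the Loomis-Whitney inequality \eqref{Loomis-Whitney-ineq} in $\R^3$, with equality for an axis-parallel box, in accordance with the equality case of Theorem~\ref{Loomis-Whitney}.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull      # assumption: SciPy is available

def loomis_whitney_sides(points):
    # returns (|K|^{n-1}, prod_i |P_{e_i^perp} K|) for K = conv(points) in R^3
    P = np.asarray(points, dtype=float)
    vol = ConvexHull(P).volume
    proj = [ConvexHull(np.unique(np.delete(P, i, axis=1), axis=0)).volume
            for i in range(3)]            # area of the projection onto e_i^perp
    return vol ** 2, float(np.prod(proj))

# equality case: an axis-parallel box, i.e. a direct sum of segments
box = [(x, y, z) for x in (0.0, 2.0) for y in (0.0, 1.0) for z in (0.0, 3.0)]
lhs, rhs = loomis_whitney_sides(box)
assert np.isclose(lhs, rhs)               # 36 = 36

# strict inequality for a generic convex body (the hull of random points)
rng = np.random.default_rng(0)
lhs, rhs = loomis_whitney_sides(rng.random((20, 3)))
assert lhs < rhs
\end{verbatim}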
Let $s\geq 1$, and let $\sigma_1,\ldots,\sigma_k\subset[n]$ be an $s$-uniform cover of $[n]$. We say that $\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l\subset[n]$ form a $1$-uniform cover of $[n]$ induced by the $s$-uniform cover $\sigma_1,\ldots,\sigma_k$ if $\{\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l\}$ consists of all non-empty distinct subsets of $[n]$ of the form $\cap_{i=1}^k\sigma^{\varepsilon(i)}_i$ where $\varepsilon(i)\in\{0,1\}$ and $\sigma_i^0=\sigma_i$ and $\sigma_i^1=[n]\setminus\sigma_i$. We observe that $\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l\subset[n]$ actually form a $1$-uniform cover of $[n]$; namely, $\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l$ is a partition of $[n]$. \begin{theo}[Folklore] \label{Bollobas-Thomason-eq} Let $K\subset \R^n$ be compact and affinely span $\R^n$, and let $\sigma_1,\ldots,\sigma_k\subset[n]$ form an $s$-uniform cover of $[n]$ for $s\geq 1$. Then equality holds in \eqref{Bollobas-Thomasson-ineq} if and only if $K=\oplus_{i=1}^l P_{E_{\tilde{\sigma}_i}}K$ where $\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l$ is the $1$-uniform cover of $[n]$ induced by $\sigma_1,\ldots,\sigma_k$. \end{theo} On the other hand, Theorem~\ref{RBLtheoequa} yields the characterization of the equality case of the dual Bollobas-Thomason inequality Theorem~\ref{Liakopoulos} (cf. B\"or\"oczky, Kalantzopoulos, Xi \cite{BKX23}). \begin{theo} \label{Liakopoulos-eq} Let $K\subset \R^n$ be compact convex with $o\in{\rm int}K$, and let $\sigma_1,\ldots,\sigma_k\subset[n]$ form an $s$-uniform cover of $[n]$ for $s\geq 1$. Then equality holds in \eqref{Liakopoulos-ineq} if and only if $K={\rm conv}\{K\cap E_{\tilde{\sigma}_i}\}_{i=1}^l$ where $\tilde{\sigma}_1,\ldots,\tilde{\sigma}_l$ is the $1$-uniform cover of $[n]$ induced by $\sigma_1,\ldots,\sigma_k$. \end{theo} \section{Finiteness of ${\rm BL}(\mathbf{B},\mathbf{p})$ and ${\rm RBL}(\mathbf{B},\mathbf{p})$} \label{secFiniteness} Let $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$ be a Brascamp-Lieb datum as in Theorem~\ref{BLgeneral} and Theorem~\ref{RBLgeneral}; namely, $B_i:\R^n\to H_i$ are surjective linear maps where $H_i$ is $n_i$-dimensional Euclidean space, $n_i\geq 1$, for $i=1,\ldots,k$ such that $$ \cap_{i=1}^k {\rm ker}\,B_i=\{0\}, $$ and $p_1,\ldots,p_k>0$ satisfy $\sum_{i=1}^kp_in_i=n$. The finiteness of the factor ${\rm BL}(\mathbf{B},\mathbf{p})$ in the Brascamp-Lieb inequality Theorem~\ref{BLgeneral} was characterized by Bennett, Carbery, Christ, Tao \cite{BCCT08}. \begin{theo}[Bennett, Carbery, Christ, Tao \cite{BCCT08}, Barthe \cite{Bar98}] For a Brascamp-Lieb datum $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$ as above, ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$ if and only if ${\rm RBL}(\mathbf{B},\mathbf{p})>0$, which is in turn equivalent to the property that \begin{equation} \label{BLfiniteV} {\rm dim}\,V\leq \sum_{i=1}^k p_i\cdot {\rm dim}\,(B_iV) \end{equation} for any linear subspace $V\subset\R^n$. In this case, we have \begin{equation} \label{BLRBL0} {\rm BL}(\mathbf{B},\mathbf{p})\cdot {\rm RBL}(\mathbf{B},\mathbf{p})=1. \end{equation} \end{theo} Now fixing the surjective linear maps $B_i:\R^n\to H_i$, the question is to give a nice description of the set of all $\mathbf{p}=(p_1,\ldots,p_k)$ such that ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$.
In addition, we say that $f_1,\ldots,f_k$ with positive integral are {\it extremizers} for the Brascamp-Lieb inequality \eqref{BLgeneraleq} (Barthe's inequality \eqref{RBLgeneraleq}) if equality holds in \eqref{BLgeneraleq} (in \eqref{RBLgeneraleq}) for them. Moreover, $f_1,\ldots,f_k$ are called Gaussian extremizers if there exist some symmetric positive definite $n_i\times n_i$ matrices $A_i$ for $i=1,\ldots,k$ such that $f_i(x)=e^{-\langle A_ix,x\rangle}$ for $x\in H_i$. According to Barthe \cite{Bar98}, $f_i(x)=e^{-\langle A_ix,x\rangle}$, $i=1,\ldots,k$, form a Gaussian extremizer for the Brascamp-Lieb inequality \eqref{BLgeneraleq} if and only if $f_i(x)=e^{-\langle A_i^{-1}x,x\rangle}$, $i=1,\ldots,k$, form a Gaussian extremizer for Barthe's Reverse Brascamp-Lieb inequality \eqref{RBLgeneraleq}. \begin{theo}[Bennett, Carbery, Christ, Tao \cite{BCCT08}] \label{Brascamp-Lieb-polytope} Fixing the surjective linear maps $B_i:\R^n\to H_i$ where $H_i$ is $n_i$-dimensional Euclidean space, $n_i\geq 1$, for $i=1,\ldots,k$ such that $$ \cap_{i=1}^k {\rm ker}\,B_i=\{0\}, $$ the set of all $\mathbf{p}=(p_1,\ldots,p_k)\in\R^k$ such that ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$; namely, $p_1,\ldots,p_k\geq 0$, $\sum_{i=1}^kp_in_i=n$ and \eqref{BLfiniteV} holds for any linear subspace $V\subset\R^n$, is a $(k-1)$-dimensional bounded (closed) convex polytope $P_{\mathbf{B}}$. In addition, if $\mathbf{p}$ lies in the relative interior of this so-called Brascamp-Lieb polytope $P_{\mathbf{B}}$ (strict inequality holds in \eqref{BLfiniteV} for any linear subspace $V\subset\R^n$), then there exists a Gaussian extremizer both for the Brascamp-Lieb inequality \eqref{BLgeneraleq} and Barthe's inequality \eqref{RBLgeneraleq}. \end{theo} \noindent{\bf Remark.} For a Brascamp-Lieb datum $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$, Bennett, Carbery, Christ, Tao \cite{BCCT08} proved that if there exists an extremizer for the Brascamp-Lieb inequality \eqref{BLgeneraleq}, then there exists a Gaussian extremizer, as well. However, the analogous statement is not known about Barthe's Reverse Brascamp-Lieb inequality \eqref{RBLgeneraleq}.\\ The fact that \eqref{BLfiniteV} for any linear subspace $V\subset\R^n$ means only finitely many inequalities follows from the observation that there are only finitely many possible values of ${\rm dim}\,(B_iV)$ and ${\rm dim}\,V$. The vertices of $P_{\mathbf{B}}$ have been described by Barthe \cite{Bar98} in the rank one case (each $n_i=1$), and by Valdimarsson \cite{Val10} in general. According to Theorem~\ref{Brascamp-Lieb-polytope}, if $\mathbf{p}=(p_1,\ldots,p_k)$ lies in the relative interior of $P_{\mathbf{B}}$, then there exist Gaussian extremizers providing equality in the Brascamp-Lieb inequality \eqref{BLgeneraleq}. However, if $\mathbf{p}$ lies in the relative boundary of $P_{\mathbf{B}}$, then it may happen that equality is never attained in the Brascamp-Lieb inequality \eqref{BLgeneraleq}. On the other hand, we may have Gaussian extremizers for some other $\mathbf{p}$ lying in the relative boundary of $P_{\mathbf{B}}$. We exhibit this phenomenon in the example of Young's classical convolution inequality. We recall that if $f,g:\R\to[0,\infty)$ are measurable and $p\geq 1$, then \begin{align*} \|f\|_p&=\left(\int_{\R}f^p\right)^{\frac1p};\\ f* g(x)&=\int_{\R}f(y)g(x-y)\,dy.
\end{align*} \begin{example}[Young's convolution inequality] The original inequality by Young \cite{You12} from 1912 is of the following form: If $p,q,s\geq 1$ with $\frac1p+\frac1q=\frac1s+1$, then there exists a minimal $c_{pq}>0$ such that for any measurable $f,g:\R\to[0,\infty)$, we have \begin{equation} \label{Young1} \|f*g\|_s\leq c_{pq}\cdot \|f\|_p\cdot \|g\|_q. \end{equation} Using the H\"older inequality and its equality case, we see that the version \eqref{Young1} of the Young inequality is equivalent to the following statement: Let $r\geq 1$ satisfy $\frac1s+\frac1r=1$, and hence $\frac1p+\frac1q+\frac1r=2$. If $f,g,h:\R\to[0,\infty)$ are measurable, then \begin{equation} \label{Young2} \int_{\R^2}f(y)g(x-y)h(x)\,dy\,dx\leq c_{pq}\cdot \|f\|_p\cdot \|g\|_q \cdot \|h\|_r. \end{equation} Using the substitution $f_1=|f|^p$, $f_2=|g|^q$, $f_3=|h|^r$, $p_1=\frac1p$, $p_2=\frac1q$ and $p_3=\frac1r$, and hence $p_1+p_2+p_3=2$, \eqref{Young2} reads as \begin{equation} \label{Young3} \int_{\R^2}f_1(y)^{p_1}f_2(x-y)^{p_2}f_3(x)^{p_3}\,dy\,dx\leq c_{pq}\cdot \prod_{i=1}^3\left(\int_{\R}f_i\right)^{p_i}. \end{equation} Now \eqref{Young3} is a proper Brascamp-Lieb inequality as in Theorem~\ref{BLgeneral} taking $H_i=\R$, $i=1,2,3$, $B_1(x,y)=y$, $B_2(x,y)=x-y$ and $B_3(x,y)=x$. Let us see when ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$ for the Brascamp-Lieb datum $\mathbf{B}=(B_1,B_2,B_3)$ and $\mathbf{p}=(p_1,p_2,p_3)$. Applying the condition \eqref{BLfiniteV} in the cases when the linear subspace $V$ has equation either $x=0$, or $x=y$, or $y=0$ yields the conditions $p_1+p_2\geq 1$, $p_1+p_3\geq 1$ and $p_2+p_3\geq 1$. Since $p_1+p_2+p_3=2$, we deduce that $P_{\mathbf{B}}\subset\R^3$ is a triangle with vertices $(1,1,0)$, $(1,0,1)$ and $(0,1,1)$. In turn, we also deduce that $c_{pq}$ is finite in \eqref{Young1} if $p,q,s\geq 1$ satisfy $\frac1p+\frac1q=\frac1s+1$. Actually, a simple argument based on the H\"older inequality yields that $c_{pq}\leq 1$. Brascamp, Lieb \cite{BrL76} proved that extremizers exist in \eqref{Young3} if and only if $\mathbf{p}=(p_1,p_2,p_3)$ lies either in the relative interior of $P_{\mathbf{B}}$, or $\mathbf{p}$ is a vertex of $P_{\mathbf{B}}$. In particular, if $\mathbf{p}$ lies in the relative interior of a side of $P_{\mathbf{B}}$, then no extremizers exist even if $c_{pq}$ is finite. \end{example} \section{Algorithmic and optimization aspects of the Brascamp-Lieb inequality} Since for algorithms, we want to work with matrices and not with linear maps, we set $H_i=\R^{n_i}$ in the Brascamp-Lieb datum; therefore, for the whole section, $B_i:\R^n\to \R^{n_i}$ is a surjective linear map for $i=1,\ldots,k$, such that \begin{equation} \label{BiNonTrivial} \cap_{i=1}^k {\rm ker}\,B_i=\{0\}, \end{equation} $\mathbf{B}=(B_1,\ldots,B_k)$ and $\mathbf{p}=(p_1,\ldots,p_k)$ where $p_1,\ldots,p_k>0$ and $\sum_{i=1}^kp_in_i=n$. Following Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}, the main question we discuss in this section is how to determine effectively whether ${\rm BL}(\mathbf{B},\mathbf{p})$ is finite for a Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$, and if finite, then how to approximate effectively its value. We write $\mathcal{M}(m)$ to denote the set of symmetric positive definite $m\times m$ matrices for $m\geq 1$. We note that if $A\in \mathcal{M}(m)$, then $$ \int_{\R^m}e^{-\pi\langle Ax,x\rangle}\,dx=\frac{1}{\sqrt{\det A}}.
$$ It follows that the Brascamp-Lieb inequality Theorem~\ref{BLgeneral} proved by Lieb \cite{Lie90} is equivalent to the following statement. \begin{theo}[Lieb \cite{Lie90}] For a Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ as above, we have \begin{equation} \label{BLBpmatrices} {\rm BL}(\mathbf{B},\mathbf{p})=\sup\left\{\sqrt{\frac{\prod_{i=1}^k(\det A_i)^{p_i}}{\det \sum_{i=1}^kp_iB_i^*A_iB_i}}:\,A_i\in\mathcal{M}(n_i),\; i=1,\ldots,k \right\}. \end{equation} \end{theo} It follows from the condition \eqref{BLfiniteV} on the subspaces $V$ that if we fix $\mathbf{p}$ in the Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$, then \begin{itemize} \item the set of all $\mathbf{B}$ such that ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$ is open (in the space of all possible $\mathbf{B}$). \end{itemize} Bennett, Bez, Cowling, Flock \cite{BBCF17} prove the continuity of the Brascamp-Lieb constant in terms of $\mathbf{B}$. \begin{theo}[Bennett, Bez, Cowling, Flock \cite{BBCF17}] \label{BLBpBcont} If we fix $\mathbf{p}$ in the Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$, then $\mathbf{B}\mapsto {\rm BL}(\mathbf{B},\mathbf{p})$ is a continuous function of $\mathbf{B}$, including the values when ${\rm BL}(\mathbf{B},\mathbf{p})=\infty$. \end{theo} We say that the Brascamp-Lieb data $\mathbf{B}=(B_1,\ldots,B_k)$, $\mathbf{p}=(p_1,\ldots,p_k)$ and $\mathbf{B}'=(B'_1,\ldots,B'_m)$, $\mathbf{p}'=(p'_1,\ldots,p'_m)$ are equivalent if $k=m$, $p'_i=p_i$ for $i=1,\ldots,k$, and there exist $\Phi\in{\rm GL}(n)$ and $\Psi_i\in{\rm GL}(n_i)$, $i=1,\ldots,k$, such that $B'_i=\Psi_i^{-1}B_i\Phi$, $i=1,\ldots,k$. \begin{theo}[Bennett, Carbery, Christ, Tao \cite{BCCT08}] For the equivalent Brascamp-Lieb data $(\mathbf{B},\mathbf{p})$ and $(\mathbf{B}',\mathbf{p}')$ as above, we have \begin{equation} \label{BLBpequivalence} {\rm BL}(\mathbf{B}',\mathbf{p}')=\frac{\prod_{i=1}^k(\det \Psi_i)^{p_i}}{\det \Phi} \cdot {\rm BL}(\mathbf{B},\mathbf{p}). \end{equation} \end{theo} Now the paper \cite{BCCT08} also showed that the existence of Gaussian maximizers is equivalent to saying that the Brascamp-Lieb datum is equivalent to a geometric one. We write $B^*$ to denote the transpose of a matrix $B$. \begin{defi}[Geometric Brascamp-Lieb datum] \label{GeometricProperties} We say that a Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ as at the beginning of the section is {\it geometric} if \begin{description} \item[Projection] $B_iB_i^*=I_{n_i}$ for $i=1,\ldots,k$; \item[Isotropy] $\sum_{i=1}^kp_iB_i^*B_i=I_n$. \end{description} \end{defi} \noindent{\bf Remarks.} \begin{itemize} \item In this case, we can take $E_i=B_i^*\R^{n_i}$ and $B_i=P_{E_i}$ in order to obtain \eqref{highdimcond0}; namely, the equivalent "Geometric Brascamp-Lieb datum" of Section~\ref{secIntro}. \item In the geometric case, we have \begin{equation} \label{geometricBLBp} {\rm BL}(\mathbf{B},\mathbf{p})={\rm RBL}(\mathbf{B},\mathbf{p})=1 \end{equation} according to Keith Ball \cite{Bal89} in the rank one case, and Barthe \cite{Bar98} in general. One set of extremizers are $f_i(x)=e^{-\pi\langle x,x\rangle}$ for $x\in\R^{n_i}$ and $i=1,\ldots,k$ (both for the Brascamp-Lieb inequality and Barthe's Reverse Brascamp-Lieb inequality). \item If both properties in Definition~\ref{GeometricProperties} hold, then the relations of non-triviality (cf. \eqref{BiNonTrivial}) and $\sum_{i=1}^kp_in_i=n$ automatically hold. \end{itemize} If both the properties "Projection" and "Isotropy" hold, then the Brascamp-Lieb constant is $1$ according to \eqref{geometricBLBp}.
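For the reader who wants to experiment, the following short script (an illustration only, not taken from \cite{GGOW18}) evaluates the Gaussian ratio appearing in Lieb's formula \eqref{BLBpmatrices} for the rank one geometric datum given by three unit vectors of $\R^2$ at mutual angle $\frac{2\pi}{3}$ with weights $p_i=\frac{2}{3}$; in accordance with \eqref{geometricBLBp}, the ratio never exceeds $1$, and it equals $1$ for $A_1=A_2=A_3=1$ (NumPy is assumed to be available).
\begin{verbatim}
import numpy as np

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
B = [np.array([[np.cos(a), np.sin(a)]]) for a in angles]   # the 1 x 2 matrices B_i
p = [2.0 / 3.0] * 3                                        # the weights p_i

def gaussian_ratio(A):
    # the ratio inside the supremum of Lieb's formula, for 1 x 1 Gaussians A_i > 0
    M = sum(pi * Bi.T @ (Ai * Bi) for pi, Bi, Ai in zip(p, B, A))
    numerator = np.prod([Ai ** pi for Ai, pi in zip(A, p)])
    return np.sqrt(numerator / np.linalg.det(M))

assert np.isclose(gaussian_ratio([1.0, 1.0, 1.0]), 1.0)    # the Gaussian extremizer
rng = np.random.default_rng(1)
for _ in range(1000):                # BL(B,p) = 1 here, so the ratio is at most 1
    assert gaussian_ratio(rng.uniform(0.1, 10.0, 3)) <= 1.0 + 1e-9
\end{verbatim}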
However, already one of the two conditions in Definition~\ref{GeometricProperties} ensures that $1$ is a lower bound. \begin{prop}[Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}] \label{ProjIsoBLBp} If a Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ satisfies either the property "Projection" or "Isotropy" in Definition~\ref{GeometricProperties}, then \begin{equation} \label{ProjIsoBLBp-eq} {\rm BL}(\mathbf{B},\mathbf{p})\geq 1. \end{equation} \end{prop} Let us reformulate the results of the previous section by Bennett, Carbery, Christ, Tao \cite{BCCT08} about the finiteness of ${\rm BL}(\mathbf{B},\mathbf{p})$ in a form that can be used as a test in the algorithm by Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}. \begin{theo}[Bennett, Carbery, Christ, Tao \cite{BCCT08}] Let $(\mathbf{B},\mathbf{p})$ be a Brascamp-Lieb datum as at the beginning of the section. \begin{description} \item[${\rm BL}(\mathbf{B},\mathbf{p})$ finite] If $(\mathbf{B},\mathbf{p})$ is equivalent to a geometric Brascamp-Lieb datum, then ${\rm BL}(\mathbf{B},\mathbf{p})$ is finite. \item[${\rm BL}(\mathbf{B},\mathbf{p})$ infinite] If there exists a linear subspace $V\subset\R^n$ such that $$ {\rm dim}\,V> \sum_{i=1}^k p_i\cdot {\rm dim}\,(B_iV), $$ then ${\rm BL}(\mathbf{B},\mathbf{p})$ is infinite. \end{description} \end{theo} \noindent{\bf Remark.} Naturally, if $(\mathbf{B},\mathbf{p})$ is geometric, then there even exists a maximizer $A_1,\ldots,A_k$ in \eqref{BLBpmatrices} (and equivalently, a Gaussian maximizer in the Brascamp-Lieb inequality Theorem~\ref{BLgeneral}).\\ Theorem~\ref{BLBpBcont} about the continuity of the Brascamp-Lieb constant, and the above statements, raise the hope that, fixing $\mathbf{p}$ in the Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ and varying $\mathbf{B}$, one might be able to find an efficient algorithm calculating ${\rm BL}(\mathbf{B},\mathbf{p})$. This was achieved by Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}. For any Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ as at the beginning of the section, it can easily be achieved that at least one of the properties in Definition~\ref{GeometricProperties} holds.\\ \noindent{\bf "Projection-normalization":} For $C_i=B_iB_i^*$, $i=1,\ldots,k$ (an invertible $n_i\times n_i$ matrix by the non-triviality condition \eqref{BiNonTrivial}), replace $B_i$ by $B'_i=C_i^{-1/2}B_i$, $i=1,\ldots,k$; the resulting Brascamp-Lieb datum $(\mathbf{B}',\mathbf{p})$ satisfies the "Projection" condition.\\ \noindent{\bf "Isotropy-normalization":} For $C=\sum_{i=1}^kp_iB_i^*B_i$ (an invertible $n\times n$ matrix by the non-triviality condition \eqref{BiNonTrivial}), replace $B_i$ by $B'_i=B_iC^{-1/2}$, $i=1,\ldots,k$; the resulting Brascamp-Lieb datum $(\mathbf{B}',\mathbf{p})$ satisfies the "Isotropy" condition.\\
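The alternating use of these two normalizations can be illustrated by the following minimal floating-point sketch in Python; it is not the algorithm of \cite{GGOW18} (which works with exact arithmetic and comes with the guarantees quoted below), but it shows the mechanism: every normalization step is an equivalence in the sense of \eqref{BLBpequivalence}, so one accumulates the corresponding determinant factors, and once the datum is numerically close to geometric, where the constant is $1$, one reads off an approximation of ${\rm BL}(\mathbf{B},\mathbf{p})$. The helper names and the fixed number of iterations are ad hoc choices.
\begin{verbatim}
import numpy as np

def inv_sqrt(C):
    """C^{-1/2} for a symmetric positive definite matrix C."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def approx_BL(B, p, steps=200):
    """Crude approximation of BL(B, p) by alternating normalizations."""
    B = [Bi.astype(float) for Bi in B]
    log_factor = 0.0      # log of the factor from the equivalence formula
    for step in range(steps):
        if step % 2 == 0: # Projection-normalization: B_i <- (B_i B_i^*)^{-1/2} B_i
            for i, Bi in enumerate(B):
                Ci = Bi @ Bi.T
                log_factor += 0.5 * p[i] * np.linalg.slogdet(Ci)[1]
                B[i] = inv_sqrt(Ci) @ Bi
        else:             # Isotropy-normalization: B_i <- B_i C^{-1/2}
            C = sum(pi * Bi.T @ Bi for pi, Bi in zip(p, B))
            log_factor += 0.5 * np.linalg.slogdet(C)[1]
            B = [Bi @ inv_sqrt(C) for Bi in B]
    # BL(normalized datum) = exp(log_factor) * BL(original datum), and the
    # normalized datum is close to geometric, where the constant is 1.
    return np.exp(-log_factor)

# The Young datum B_1 = (0, 1), B_2 = (1, -1), B_3 = (1, 0) with p_i = 2/3.
B = [np.array([[0.0, 1.0]]), np.array([[1.0, -1.0]]), np.array([[1.0, 0.0]])]
p = [2.0/3.0] * 3
print(approx_BL(B, p))
\end{verbatim}
On the Young datum above the printed value should be close to $\sqrt3/2\approx 0.866$, the Brascamp-Lieb constant for these exponents (which one can also verify by hand from \eqref{BLBpmatrices} using the AM-GM inequality).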
The key statement ensuring the effectiveness of the algorithm by Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18} is the following (we repeat Proposition~\ref{ProjIsoBLBp} in order to ensure the clarity of the statement). \begin{theo}[Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}] Let $(\mathbf{B},\mathbf{p})$ be a Brascamp-Lieb datum with ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$ where $\mathbf{B}$ has binary length $b$ and $\mathbf{p}$ has common denominator $d$, and let $(\mathbf{B}',\mathbf{p})$ be the Brascamp-Lieb datum obtained from $(\mathbf{B},\mathbf{p})$ by either Projection-normalization or Isotropy-normalization. \begin{description} \item[Upper bound] ${\rm BL}(\mathbf{B},\mathbf{p}) \leq \exp({\rm poly}(b, \log d))$. \item[Lower bound] ${\rm BL}(\mathbf{B}',\mathbf{p})\geq 1$. \item[Progress per step] If ${\rm BL}(\mathbf{B},\mathbf{p})>1+\varepsilon$ for $\varepsilon>0$, then $$ {\rm BL}(\mathbf{B}',\mathbf{p})\leq \left(1-{\rm poly}\left(\frac{\varepsilon}{nd}\right)\right){\rm BL}(\mathbf{B},\mathbf{p}). $$ \end{description} \end{theo} The basic idea of the algorithm by Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18} is that at the $m$th step, Projection-normalization is executed if $m$ is odd, and Isotropy-normalization is executed if $m$ is even. \begin{theo}[Garg, Gurvits, Oliveira, Wigderson \cite{GGOW18}] There exists an algorithm such that on a Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ where $\mathbf{B}$ has binary length $b$ and $\mathbf{p}$ has common denominator $d$, and given an accuracy parameter $\varepsilon\in(0,1)$, the algorithm runs in time ${\rm poly}(b, d,1/\varepsilon)$, and \begin{itemize} \item either computes a factor $(1+\varepsilon)$ approximation of ${\rm BL}(\mathbf{B},\mathbf{p})$ (in the case ${\rm BL}(\mathbf{B},\mathbf{p})<\infty$), \item or produces a linear subspace $V\subset\R^n$ satisfying the condition\\ ${\rm dim}\,V> \sum_{i=1}^k p_i\cdot {\rm dim}\,(B_iV)$ (in the case ${\rm BL}(\mathbf{B},\mathbf{p})=\infty$). \end{itemize} \end{theo} Further properties of the Brascamp-Lieb datum $(\mathbf{B},\mathbf{p})$ when we fix $\mathbf{p}$ and vary $\mathbf{B}$ have been investigated by Bez, Gauvan, Tsuji \cite{BGT}. \noindent{\bf Acknowledgement.} I am grateful to the two referees for all their helpful remarks improving the survey, and to J\'anos Pach for encouragement. \begin{thebibliography}{99} \bibitem{ABBC} D. Alonso-Guti\'errez, J. Bernu\'es, S. Brazitikos, A. Carbery: On affine invariant and local Loomis-Whitney type inequalities. arXiv:2002.05794 \bibitem{AB} D. Alonso-Guti\'errez, S. Brazitikos: Reverse Loomis-Whitney inequalities via isotropicity. Proc. Amer. Math. Soc., 149 (2021), 817-828. \bibitem{AvT06} F. Avram, M.S. Taqqu: On a Szeg\H{o} type limit theorem and the asymptotic theory of random sums, integrals and quadratic forms. Dependence in probability and statistics, Lect. Notes Stat., 187, Springer, New York, 2006, 259-286. \bibitem{Bal89} K.M.~Ball: Volumes of sections of cubes and related problems. In: J. Lindenstrauss and V.D. Milman (eds), Israel seminar on Geometric Aspects of Functional Analysis 1376, Lecture Notes in Mathematics. Springer-Verlag, 1989. \bibitem{Bal91} K.M.~Ball: Volume ratios and a reverse isoperimetric inequality. J. London Math. Soc. 44 (1991), 351-359. \bibitem{Bal92} K.M. Ball: Ellipsoids of maximal volume in convex bodies. Geom. Dedicata, 41 (1992), 241-250. \bibitem{Bal03} K.M.~Ball: Convex geometry and functional analysis. In: W.~B.~Johnson, L. Lindenstrauss (eds), Handbook of the geometry of Banach spaces, 1, (2003), 161-194. \bibitem{BaK18} Z. Balogh, A. Kristaly: Equality in Borell-Brascamp-Lieb inequalities on curved spaces. Adv. Math. 339 (2018), 453-494. \bibitem{Bar97} F.~Barthe: In\'egalit\'es de Brascamp-Lieb et convexit\'e. C. R. Acad. Sci. Paris 324 (1997), 885-888. \bibitem{Bar98} F.~Barthe: On a reverse form of the Brascamp-Lieb inequality. Invent. Math. 134 (1998), 335-361. \bibitem{Bar04} F.~Barthe: A continuous version of the Brascamp-Lieb inequalities. Geometric Aspects of Functional Analysis, Lecture Notes in Mathematics Volume 1850, 2004, 53-63. \bibitem{BaC04} F.~Barthe, D.~Cordero-Erausquin: Inverse Brascamp-Lieb inequalities along the heat equation.
Geometric aspects of functional analysis, 65-71, Lecture Notes in Math., 1850, Springer, Berlin, 2004. \bibitem{BCLM11} F.~Barthe, D.~Cordero-Erausquin, M.~Ledoux, B.~Maurey: Correlation and Brascamp-Lieb inequalities for Markov semigroups. Int. Math. Res. Not. 10 (2011), 2177-2216. \bibitem{BaH09} F.~Barthe, N. Huet: On Gaussian Brunn-Minkowski inequalities. Studia Math. 191 (2009), 283-304. \bibitem{BaW14} F. Barthe, P. Wolff: Positivity improvement and Gaussian kernels. C. R. Math. Acad. Sci. Paris, 352 (2014), 1017-1021. \bibitem{BaW22} F. Barthe, P. Wolff: Positive Gaussian Kernels also Have Gaussian Minimizers. Mem. Amer. Math. Soc., 276 (2022), no. 1359, iii+90 pp. \bibitem{BBCF17} J. Bennett, N. Bez, M.G. Cowling, T.C. Flock: Behaviour of the Brascamp-Lieb constant. Bull. Lond. Math. Soc., 49 (2017), 512-518. \bibitem{BBFL18} J. Bennett, N. Bez, T.C. Flock, S. Lee: Stability of the Brascamp–Lieb constant and applications. Am. J. Math. 140(2) (2018), 543-569. \bibitem{BBBCF20} J. Bennett, N. Bez, S. Buschenhenke, M.G. Cowling, T.C. Flock: On the nonlinear Brascamp-Lieb inequality. Duke Math. J. 169(17) (2020), 3291-3338 \bibitem{BCCT08} J.~Bennett, T.~Carbery, M.~Christ, T.~Tao: The Brascamp--Lieb Inequalities: Finiteness, Structure and Extremals. Geom. Funct. Anal. 17 (2008), 1343--1415. \bibitem{BeT24} J.~Bennett, T. Tao: Adjoint Brascamp-Lieb inequalities. Proc. Lond. Math. Soc. (3) 129 (2024), no. 4, Paper No. e12633, 51 pp. \bibitem{BGT} N. Bez, A. Gauvan, H. Tsuji: A note on ubiquity of geometric Brascamp-Lieb data. Bull. Lond. Math. Soc., https://doi.org/10.1112/blms.13198 \bibitem{BoT95} B. Bollobas, A. Thomason: Projections of bodies and hereditary properties of hypergraphs. Bull. Lond. Math. Soc. 27, (1995), 417--424. \bibitem{BCF14} S.G. Bobkov, A. Colesanti, I. Fragal\`a: Quermassintegrals of quasi-concave functions and generalized Prékopa-Leindler inequalities. Manuscripta Math., 143 (2014), 131-169. \bibitem{Bor75} C. Borell: Convex set functions in $d$-space. Period. Math. Hungar. 6 (1975), 111-136. \bibitem{BFH19} K.J. B\"or\"oczky, F. Fodor, D. Hug: Strengthened volume inequalities for $L_p$ zonoids of even isotropic measures. Trans. Amer. Math. Soc. 371 (2019), no. 1, 505–548. \bibitem{BoH17b} K.J. B\"or\"oczky, D. Hug: Isotropic measures and stronger forms of the reverse isoperimetric inequality. Trans. Amer. Math. Soc., 369 (2017), 6987-7019. \bibitem{BKX23} K.J. B\"or\"oczky, P. Kalantzopoulos, D. Xi: The case of equality in geometric instances of Barthe's reverse Brascamp-Lieb inequality. Geometric aspects of functional analysis, Lecture Notes in Math., 2327, Springer, Cham, 2023, 129-165. \bibitem{BrL76} H.J.~Brascamp, E.H.~Lieb: Best constants in Young's inequality, its converse, and its generalization to more than three functions. Adv. Math. 20 (1976), 151-173. \bibitem{Bra14} S. Brazitikos: Brascamp-Lieb inequality and quantitative versions of Helly's theorem. Mathematika 63 (2017), 272-291. \bibitem{BGVV14} S.~Brazitikos, A.~Giannopoulos, P.~ Valettas, B.-H.~Vritsiou: Geometry of isotropic convex bodies. Mathematical Surveys and Monographs 196, American Mathematical Society, Providence, RI, 2014. \bibitem{BDG17} S. Brazitikos, S. Dann, A. Giannopoulos, A. Koldobsky: On the average volume of sections of convex bodies. Israel J. Math. 222 (2017), 921--947. \bibitem{BGL18} S. Brazitikos, A. Giannopoulos, D-M. Liakopoulos: Uniform cover inequalities for the volume of coordinate sections and projections of convex bodies. Adv. Geom. 18 (2018), 345--354. 
\bibitem{BuP21} J.R. Bueno, P. Pivarov: A stochastic Prékopa-Leindler inequality for log-concave functions. Commun. Contemp. Math., 23 (2021), no. 2, Paper No. 2050019, 17 pp. \bibitem{CLL04} E.~Carlen, E.H. Lieb, M. Loss: A sharp analog of Young's inequality on $S^N$ and related entropy inequalities. J. Geom. Anal., 14 (2004), 487-520. \bibitem{CCE09} E.~Carlen, D.~Cordero-Erausquin: Subadditivity of the entropy and its relation to Brascamp-Lieb type inequalities. Geom. Funct. Anal., 19 (2009), 373-405. \bibitem{CTT20} P.G. Casazza, T.T. Tran, J.C. Tremain: Regular two-distance sets. J. Fourier Anal. Appl., 26 (2020), no. 3, Paper No. 49, 32 pp. \bibitem{CDP15} W-K. Chen, N. Dafnis, G. Paouris: Improved H\"older and reverse H\"older inequalities for Gaussian random vectors. Adv. Math. 280 (2015), 643--689. \bibitem{CoL21} T.A. Courtade, J. Liu: Euclidean forward-reverse Brascamp-Lieb inequalities: finiteness, structure, and extremals. J. Geom. Anal., 31 (2021), 3300--3350. \bibitem{Dub77} S. Dubuc: Crit\`eres de convexit\'e et in\'egalit\'es int\'egrales. Ann. Inst. Fourier Grenoble, 27 (1) (1977), 135--165. \bibitem{Dum10} L. D\"umbgen: Bounding standard Gaussian tail probabilities. arxiv:1012.2063v3 \bibitem{Dun21} J. Duncan: An algebraic Brascamp-Lieb inequality. J. Geom. Anal. 31 (2021), 10136-10163. \bibitem{Gan24} S. Gan: Exceptional set estimate through Brascamp-Lieb inequality. Int. Math. Res. Not., IMRN 2024, 7944-7971. \bibitem{gardner} R. Gardner: The Brunn-Minkowski inequality. Bull. Amer. Math. Soc. 39 (2002), 355-405. \bibitem{GGOW18} A. Garg, L. Gurvits, R. Oliveira, A. Wigderson: Algorithmic and optimization aspects of Brascamp-Lieb inequalities, via operator scaling. Geom. Funct. Anal., 28 (2018), no. 1, 100-145. \bibitem{GhS17} D. Ghilli, P. Salani: Quantitative Borell-Brascamp-Lieb inequalities for power concave functions. J. Convex Anal. 24 (2017), 857-888. \bibitem{GianMil2000} A.~Giannopoulos, V.~Milman: Extremal problems and isotropic positions of convex bodies. Israel J. Math. 117 (2000), 29--60. \bibitem{GianMil2001} A.~Giannopoulos, V.~Milman: Euclidean structure in finite dimensional normed spaces. Handbook of the geometry of Banach spaces, Vol. I, 707-779, North-Holland, Amsterdam, 2001. \bibitem{GianPapa1999} A.~Giannopoulos, M.~Papadimitrakis: Isotropic surface area measures. Mathematika 46 (1999), 1-13 \bibitem{Groemer1990} H.~Groemer: Stability properties of geometric inequalities. Amer. Math. Monthly 97 (1990), no. 5, 382--394. \bibitem{Groemer1993} H.~Groemer: Stability of geometric inequalities. Handbook of convex geometry, Vol. A, B, 125--150, North-Holland, Amsterdam, 1993. \bibitem{GroemerSchneider1991} H.~Groemer, R.~Schneider: Stability estimates for some geometric inequalities. Bull. London Math. Soc. 23 (1991), no. 1, 67--74. \bibitem{GrS05} P.M.~Gruber, F.E.~Schuster: An arithmetic proof of John's ellipsoid theorem. Arch. Math. 85 (2005), 82--88. \bibitem{GuM11} O.~Guedon, E.~Milman: Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures. Geom. Funct. Anal. 21 (2011), 1043--1068. \bibitem{GuZ19} S. Guo, R. Zhang: On integer solutions of Parsell-Vinogradov systems. Invent. Math. 218 (2019), 1-81. \bibitem{Gustin1953} W.~Gustin: An isoperimetric minimax. Pacific J. Math. 3 (1953), 403--405. \bibitem{Joh37} F.~John: Polar correspondence with respect to a convex region. Duke Math. J. 3 (1937), 355--369. \bibitem{Kla09} B.~Klartag: {A Berry-Esseen type inequality for convex bodies with an unconditional basis}. 
Probab. Theory Related Fields \textbf{145} (2009), 1-33. \bibitem{Kla10} B.~Klartag: On nearly radial marginals of high-dimensional probability measures. J. Eur. Math. Soc. {12} (2010), 723-754. \bibitem{KlarMil12} B.~Klartag, E.~Milman: Centroid bodies and the logarithmic Laplace transform--a unified approach. J. Funct. Anal. 262 (2012), 10-34. \bibitem{KoM22} A.V. Kolesnikov, E. Milman: Local $L_p$-Brunn-Minkowski inequalities for $p<1$. Memoirs of the American Mathematical Society, 277 (2022), no. 1360. \bibitem{Leh14} J. Lehec: Short probabilistic proof of the Brascamp-Lieb and Barthe theorems. Canad. Math. Bull., 57 (2014), 585-597. \bibitem{Lei72} L. Leindler: On a certain converse of H\"older's inequality. II. Acta Sci. Math. (Szeged) 33 (1972), 217--223. \bibitem{Lia19} Liakopoulos, D.-M.: Reverse Brascamp-Lieb inequality and the dual Bollobás-Thomason inequality. Arch. Math. (Basel) 112 (2019), 293--304. \bibitem{Liv21} G.V. Livshyts: Some remarks about the maximal perimeter of convex sets with respect to probability measures. Commun. Contemp. Math. 23 (2021), no. 5, Paper No. 2050037, 19 pp. \bibitem{LoW49} L.H. Loomis, H. Whitney: An inequality related to the isoperimetric inequality, Bull. Amer. Math. Soc, 55 (1949), 961--962. \bibitem{Lie90} E.H.~Lieb: Gaussian kernels have only Gaussian maximizers. Invent. Math. 102 (1990), 179--208. \bibitem{CGG16} S. Campi, R. Gardner, P. Gronchi: Reverse and dual Loomis-Whitney-type inequalities. Trans. Amer. Math. Soc., 368 (2016), 5093--5124. \bibitem{LYZ04} E. Lutwak, D. Yang, G. Zhang: Volume inequalities for subspaces of $L_p$. J. Diff. Geom., 68 (2004), 159-184. \bibitem{LYZ07} E. Lutwak, D. Yang, G. Zhang: Volume inequalities for isotropic measures. Amer. J. Math., 129 (2007), 1711-1723. \bibitem{Mal} D. Maldague: Regularized Brascamp–lieb Inequalities And An Application. The Quarterly Journal of Mathematics. https://doi.org/10.1093/qmath/haab032 \bibitem{Mar17} A. Marsiglietti: Borell's generalized Prékopa-Leindler inequality: a simple proof. J. Convex Anal. 24 (2017), 807-817. \bibitem{McC95} R.J. McCann: Existence and uniqueness of monotone measure-preserving maps. Duke Math. J., 80 (1995), 309-323. \bibitem{Mey88} M. Meyer: A volume inequality concerning sections of convex sets. Bull. Lond. Math. Soc., 20 (1988),15-155. \bibitem{NaT} S. Nakamura, H. Tsuji: A generalized Legendre duality relation and Gaussian saturation. arXiv:2409.13611 \bibitem{Petty1961} C.M.~Petty: Surface area of a convex body under affine transformations. Proc. Amer. Math. Soc. 12 (1961), 824--828, \bibitem{Pre71} A. Pr\'ekopa: Logarithmic concave measures with application to stochastic programming. Acta Sci. Math. (Szeged) 32 (1971), 301--316. \bibitem{Pre73} A. Pr\'ekopa: On logarithmic concave measures and functions. Acta Sci. Math. (Szeged) 34 (1973), 335--343. \bibitem{RoS17} A. Rossi, P. Salani: Stability for Borell-Brascamp-Lieb inequalities. Geometric aspects of functional analysis, Lecture Notes in Math., 2169, Springer, Cham, (2017), 339-363. \bibitem{RoS19} A. Rossi, P. Salani: Stability for a strengthened Borell-Brascamp-Lieb inequality. Appl. Anal., 98 (2019), 1773-1784. \bibitem{Val11} S.I. Valdimarsson: Geometric Brascamp-Lieb has the optimal best constant. J. Geom. Anal. 21 (2011), 1036-1043. \bibitem{Val08} S.I. Valdimarsson: Optimisers for the Brascamp-Lieb inequality. Israel J. Math. 168 (2008), 253-274. \bibitem{Val10} S.I. Valdimarsson: The Brascamp-Lieb polyhedron. Canad. J. Math., 62 (2010), 870-888. \bibitem{You12} W.H. 
Young: On the multiplication of successions of Fourier constants. Proceedings of the Royal Society A, 87 (596) (1912), 331-339. \end{thebibliography} \end{document}
2412.11225v1
http://arxiv.org/abs/2412.11225v1
Cohomology of the diffeomorphism group of the connected sum of two generic lens spaces
\pdfoutput=1 \documentclass[a4paper]{article} \usepackage{amsfonts} \usepackage{mathtools} \usepackage{amsthm, amssymb, amsfonts, enumerate} \usepackage{tikz-cd} \usepackage{spectralsequences} \usepackage{geometry} \usetikzlibrary{matrix,positioning,arrows.meta} \usetikzlibrary{arrows} \newcommand{\rrightarrow}{\mathrel{\mathrlap{\rightarrow}\mkern1mu\rightarrow}} \DeclareMathOperator*{\colim}{colim} \DeclareMathOperator{\Map}{Map} \DeclareMathOperator{\Diff}{Diff} \DeclareMathOperator{\Emb}{Emb} \DeclareMathOperator{\Isom}{Isom} \DeclareMathOperator{\Sub}{Sub} \DeclareMathOperator{\Fr}{Fr} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\SO}{SO} \newcommand{\interior}[1]{\smash{\mathring{#1}}} \DeclareMathOperator{\Norm}{Norm} \DeclareMathOperator{\norm}{norm} \DeclareMathOperator{\Cent}{Cent} \DeclareMathOperator{\cent}{cent} \DeclareMathOperator{\Dih}{Dih} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\image}{im} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\Grp}{Grp} \DeclareMathOperator{\Top}{Top} \newcommand{\hq}{/\!\!/} \newcommand{\Ostar}{\Or(2)^*} \newcommand{\Is}{\operatorname{{\mathcal I}}} \newcommand{\Or}{\operatorname{O}} \newtheorem{theorem}{Theorem}[section] \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{observation}[theorem]{Observation} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \SseqNewClassPattern{myclasspattern}{ (0,0); (-0.3,0)(0.3,0); (-0.4,0.3)(-0.3,-0.3)(0.4,0.3); } \newcommand{\fakeenv}{} \newenvironment{restate}[2] { \renewcommand{\fakeenv}{#2} \theoremstyle{plain} \newtheorem*{\fakeenv}{#1~\ref{#2}} \begin{\fakeenv} } { \end{\fakeenv} } \usepackage{hyperref} \begin{document} \title{Cohomology of the diffeomorphism group of the connected sum of two generic lens spaces} \author{Zoltán Lelkes} \date{} \maketitle \begin{abstract} We consider the connected sum of two three-dimensional lens spaces $L_1\#L_2$, where $L_1$ and $L_2$ are non-diffeomorphic and are of a certain "generic" type. Our main result is the calculation of the cohomology ring $H^\ast(B\Diff(L_1\#L_2);\mathbb{Q})$, where $\Diff(L_1\#L_2)$ is the diffeomorphism group of $L_1\#L_2$ equipped with the $C^\infty$-topology. The homotopy type of the diffeomorphism group of a generic lens space is known; this, combined with a theorem of Hatcher, forms the basis of our argument. \end{abstract} \section{Introduction} For a smooth 3-manifold $M$, let $\Diff(M)$ be its diffeomorphism group endowed with the $C^\infty$-topology. The space $B\Diff(M)$ classifies smooth $M$-bundles, in the sense that concordance classes of smooth $M$-bundles over a space $X$ are in bijection with homotopy classes of maps $X\to B\Diff(M)$, where this bijection is given by pulling back the universal smooth $M$-bundle over $B\Diff(M)$, see \cite{galat19}. Therefore, the cohomology of $B\Diff(M)$ gives characteristic classes of smooth $M$-bundles. The 3-dimensional lens space $L(m, q)$ is the quotient of $S^3\subseteq \mathbb{C}^2$ by the action of $C_m$, the cyclic group of order $m$, induced by multiplication with $\xi_m$ in the first coordinate and with $\xi_m^q$ in the second coordinate, where $\xi_m$ is a primitive $m$th root of unity.
These inherit the structure of a (Riemannian) 3-manifold and in fact they are prime 3-manifolds. We call a 3-dimensional lens space a generic lens space if $m>2$, $1<q<\frac{m}{2}$, and $q^2\not\equiv \pm 1 \mod m$. Generic lens spaces do not admit any orientation reversing diffeomorphisms, see \cite{mccul00}. In this text, we will always take cohomology with rational coefficients and, in order to make notation more convenient, we omit them. We prove the following main result. \begin{restate}{Theorem}{main result} Let $L_1$ and $L_2$ be two non-diffeomorphic generic lens spaces. Then \[H^\ast(B\Diff(L_1\#L_2))\cong \mathbb{Q}[\mu^2, \eta^2, \nu^2, \vartheta^2] / (\mu^2\eta^2, \nu^2\vartheta^2, \mu^2+\eta^2-\nu^2-\vartheta^2).\] \end{restate} We compute the mapping class group of $L_1\#L_2$ as well; this computation plays a crucial role in proving the main result. \begin{restate}{Theorem}{thm: mapping class group} Let $L_1$ and $L_2$ be two non-diffeomorphic generic lens spaces. Then \[\pi_0 (\Diff(L_1\#L_2)) \cong C_2\times C_2.\] \end{restate} To expand on Theorem \ref{main result}, let us give a rundown of where the generators $\mu$, $\eta$, $\nu$, $\vartheta$ ultimately arise from. By \cite{Hong11}, for a generic lens space $L$ the inclusion $\Isom(L)\hookrightarrow \Diff(L)$ is a weak equivalence, where $\Isom(L)$ is the isometry group of $L$. The isometry group of a generic lens space is calculated in \cite{mccul00}. It is shown there that $\Isom(L)_0$ is covered $m$-fold by an $\SO(2)\times \SO(2)$ subgroup of $\SO(4)$, where $G_0\triangleleft G$ denotes the path component of the identity in the topological group $G$. Let us denote by $\mathbb{Q}[e\otimes 1, 1\otimes e]$ the cohomology ring of $B\SO(2)\times B\SO(2)$, where the two generators are the Euler classes pulled back along the projections. In the cohomology ring of $B\Diff(L_1)_0$, we denote by $\mu$ the preimage of $e\otimes 1$ and by $\eta$ the preimage of $1\otimes e$. Similarly for $B\Diff(L_2)_0$, $\nu$ denotes the preimage of $e\otimes 1$ and $\vartheta$ denotes the preimage of $1\otimes e$. The theorem of Hatcher referenced in the abstract is remarked in \cite{Hatch81} and states that if $M$ is the connected sum of two prime 3-manifolds, then $\Diff(M)$ deformation retracts onto $\Diff(M, S^2)$, where $S^2\subseteq M$ is a copy of the non-trivial 2-sphere in $M$. We calculate $H^\ast(B\Diff(L_1\#L_2, S^2)_0)$ by considering the restrictions to $B\Diff(L_1\setminus \interior{D^3})_0$ and $B\Diff(L_2\setminus \interior{D^3})_0$. We show that $B\Diff_\text{pt}(L)_0 \simeq B\Diff(L\setminus\interior{D^3})_0$, where $\Diff_\text{pt}(L)_0$ is the subgroup of $\Diff(L)_0$ consisting of those diffeomorphisms that leave a given point $\text{pt}\in L$ fixed. In the cohomology of $B\Diff_\text{pt}(L)_0$ we pull back the generators of $H^\ast(B\Diff(L)_0)$ via the inclusion. Finally, note that $H^\ast(B\Diff(L_1\#L_2))$ is the subring $H^\ast(B\Diff(L_1\#L_2)_0)^{\pi_0\Diff(L_1\#L_2)}$. For more details on this and for an overview of the proof, see Section \ref{strategy section}. \subsection*{Comparison with previous work} In dimension two, the Madsen-Weiss theorem \cite{MadsenWeiss07} proves the Mumford conjecture and describes the cohomology of $B\Diff(F)$ in a stable range for $F$ a smooth, compact, connected and oriented surface. In high dimensions, Randal-Williams and Galatius \cite{OscarSoren17} show an analogue of the Madsen-Weiss theorem for any simply-connected manifold of dimension $2n\geq 6$.
In dimension 3 most of the work focuses on prime manifolds. Hatcher proved the Smale conjecture $\Diff(S^3)\simeq O(4)$ in \cite{Hatch83} and $\Diff(S^1\times S^2)\simeq O(2)\times O(3)\times \Omega O(3)$ in \cite{Hatch81}. For Haken 3-manifolds, by the work of Waldhausen \cite{Waldh68}, Hatcher \cite{Hatch76}, and Ivanov \cite{Ivanov79}, the calculations of the homotopy types of $\Diff(M)$ largely reduce to those of the mapping class group. A notable exception is \cite{bamler19}, where the generalized Smale conjecture is shown for all 3-dimensional spherical space forms, as well as $\Diff(\mathbb{R}P^3\#\mathbb{R}P^3)\simeq \Or(1)\times \Or(2)$. In \cite{jan24} Boyd, Bregman, and Steinebrunner show that for a compact, orientable 3-manifold $M$, $B\Diff(M)$ is of finite type. Their paper is where the outline of the arguments in this work originates. In an upcoming paper they aim to calculate the rational cohomology ring of $B\Diff((S^1 \times S^2)^{\#2})$. In most cases when we know the homotopy type of $\Diff(M)$, if $\pi_0\Diff(M)$ is finite, it turns out to be that of a compact Lie group. However, this is not the case for $L_1\#L_2$ where $L_1$ and $L_2$ are non-diffeomorphic generic lens spaces. \begin{corollary} Let $L_1$ and $L_2$ be non-diffeomorphic generic lens spaces. Then $B\Diff(L_1\#L_2)$ is not weakly equivalent to the classifying space of a compact Lie group. \end{corollary} This is a consequence of Theorem \ref{main result} and Hopf's theorem (see e.g. \cite[Theorem 1.81]{Felix08}). The latter states that for any compact Lie group $G$, $H^\ast(BG_0)$ is a free polynomial ring on generators of even degree. Furthermore, $H^\ast(BG) \cong H^\ast(BG_0)^{G/G_0}$ (see e.g. \cite[Proposition 3G.1]{Hatch22}). This means in particular that $H^\ast(BG)$ is an integral domain, while $H^\ast(B\Diff(L_1\#L_2))$ is not, by Theorem \ref{main result}. \subsection*{Acknowledgements} This project has grown out of my master's thesis, which I wrote under the supervision of Jan Steinebrunner. I cannot thank him enough for his insights and ideas. At every turn while I was writing both the thesis and this paper, he has been there to provide guidance; it has truly been a great experience working with him. \section{Background}\label{the setting} \subsection{Lens spaces and their isometries} We concern ourselves with 3-dimensional lens spaces; these are the manifolds $L(m, q)$ for coprime $m, q\in \mathbb{N}$ such that $L(m, q)$ is the quotient of $S^3\subseteq \mathbb{C}^2$ by the action generated by multiplication in the first coordinate by $e^\frac{2\pi i}{m}$ and in the second by $e^\frac{2\pi i q}{m}$. Two lens spaces $L(m_1, q_1)$ and $L(m_2, q_2)$ are diffeomorphic if and only if $m_1 = m_2$ and $q_1 \equiv \pm q_2^{\pm 1} \mod m_1$. This is shown for example in \cite[Theorem 2.5]{Hatch23}. An irreducible 3-manifold is a 3-dimensional manifold in which every embedded 2-sphere bounds a 3-disc. A consequence of the Poincaré conjecture is that a connected, compact, orientable 3-manifold $M$ is irreducible if and only if $\pi_2(M)$ is trivial. Since any 3-dimensional lens space is covered by the 3-sphere, its second homotopy group is zero and thus all 3-dimensional lens spaces are irreducible. By explicitly considering the cellular structure of $L(m, q)$, its rational cohomology can be shown to be $\mathbb{Q}$ in degrees $0$ and $3$ and trivial in all other degrees. The quotient map $S^3\to L(m, q)$ induces an isomorphism on rational cohomology, since it is injective in top degree as it is a covering.
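To make the classification and the genericity condition concrete, the following Python sketch lists, for a fixed $m$, the diffeomorphism classes of the lens spaces $L(m, q)$ and marks the generic ones; the helper names and the choice $m=11$ are ours and purely illustrative.
\begin{verbatim}
from math import gcd

def diffeomorphic(m, q1, q2):
    """L(m, q1) and L(m, q2) are diffeomorphic iff q1 = +-q2^{+-1} mod m."""
    inv_q2 = pow(q2, -1, m)
    return any(d % m == 0 for d in (q1 - q2, q1 + q2, q1 - inv_q2, q1 + inv_q2))

def is_generic(m, q):
    return m > 2 and 1 < q < m / 2 and pow(q, 2, m) not in (1, m - 1)

m = 11
classes = []
for q in (q for q in range(1, m) if gcd(q, m) == 1):
    for cls in classes:
        if diffeomorphic(m, q, cls[0]):
            cls.append(q)
            break
    else:
        classes.append([q])
for cls in classes:
    tag = "generic" if any(is_generic(m, q) for q in cls) else "not generic"
    print("L(%d, q) for q in %s: %s" % (m, cls, tag))
\end{verbatim}
For $m=11$ this prints one non-generic class (containing $q=1$) and two generic classes, in line with the observation below that most lens spaces are generic.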
We take the unique metric on $L(m, q)$ that makes the covering $S^3 \to L(m, q)$ a Riemannian covering when considering the standard metric on $S^3$; such a metric exists as the action of $C_m$, a discrete subgroup of the isometry group of $S^3$, is free. Recall the Smale conjecture proven by Hatcher in \cite{Hatch83}. \begin{theorem}\label{thm: Smale conjecture} The inclusion $\Or(4)\cong\Isom(S^3)\hookrightarrow\Diff(S^3)$ is a weak equivalence, where $\Isom(S^3)$ denotes the group of isometries of $S^3$ when endowed with the standard Riemannian metric. \end{theorem} The diffeomorphism groups of these lens spaces are also well understood, since the generalized Smale conjecture holds for this class of 3-manifolds. This is shown by Hong, Kalliongis, McCullough, and Rubinstein in \cite{Hong11}. \begin{theorem}\label{thm: generalized smale conj} For any 3-dimensional lens space $L(m, q)$ with $m>2$, the inclusion of the isometry group into the diffeomorphism group of $L(m, q)$, $\Isom(L(m, q)) \hookrightarrow \Diff(L(m, q))$ is a homotopy equivalence. \end{theorem} McCullough in \cite{mccul00} presents a calculation of $\Isom(L(m, q))$. He uses the unit quaternion group structure on $S^3$, letting $S^3=\{z_0 + z_1j \,|\, z_0, z_1\in\mathbb{C} \text{ s.t. } |z_0|^2 + |z_1|^2 = 1 \}$ with the convention $zj = j\overline{z}$. The isometries are described using the following double covering of $\SO(4)$ by $S^3\times S^3$: \[\begin{tikzcd}[row sep=tiny] {F\colon S^3\times S^3} & {\SO(4)} \\ {(q_1, q_2)} & {(q\mapsto q_1 q q_2^{-1}).} \arrow[from=1-1, to=1-2] \arrow[maps to, from=2-1, to=2-2] \end{tikzcd}\] \begin{enumerate} \item Denote $S^1 = \{z_0 \in \mathbb{C}\,|\, |z_0| = 1\} < S^3$ (i.e. the elements with no $j$ term), $\xi_k = e^\frac{2\pi i}{k} \in S^1$, and $C_k = \langle\xi_k\rangle$. \item Denote by $\Dih(S^1\tilde{\times}S^1) = \langle F(S^1\times S^1), F(j, j)\rangle$ the corresponding subgroup of $\SO(4)$. It may be described as the semidirect product $(S^1\tilde{\times}S^1)\rtimes C_2$, where $C_2$ acts by conjugation on each coordinate and $S^1\tilde{\times}S^1 = (S^1\times S^1)/\langle (-1, -1)\rangle$. \end{enumerate} The key to his approach lies in the following lemma, the proof of which we leave to the reader. \begin{lemma}\label{lem: the descenting isometries} Let $G<\SO(4)$ be a finite subgroup whose induced action on $S^3$ is free. If $M = S^3/G$, then $\Isom^{+}(M) \cong \Norm(G)/G$, where $\Norm(G)$ is the normalizer of $G$ in $\SO(4)$ and $\Isom^{+}(M)$ is the group of orientation preserving isometries of $M$. \end{lemma} In our case the $C_m$ action which we quotient $S^3$ by to obtain $L(m, q)$ is described as the subgroup of $\SO(4)$ generated by $F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})$. \begin{definition} A \textit{generic lens space} is a 3-dimensional lens space $L(m, q)$ such that $m>2$, $1<q<\frac{m}{2}$, and $q^2\not\equiv \pm 1 \mod m$. \end{definition} It is an important fact for us that generic lens spaces do not admit orientation reversing homeomorphisms; this comes from \cite[Proposition 1.1]{mccul00}. Based on $m$ and $q$, the isometry group $\Isom(L(m, q))$ may be one of $8$ groups, and all generic lens spaces have isometry groups isomorphic to $\Dih(S^1\tilde{\times}S^1)/\langle F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})\rangle$.
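The quaternionic description above is easy to check by direct computation; in the following Python sketch a quaternion $z_0+z_1j$ is stored as the pair $(z_0,z_1)$, and the code verifies numerically that $F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})$ acts on $(z_0, z_1)$ exactly as the generator of the $C_m$-action defining $L(m, q)$, and that $F(j,j)$ is coordinatewise complex conjugation. The helper names and the values $m=7$, $q=2$ are only illustrative.
\begin{verbatim}
import cmath

# A quaternion z0 + z1*j is stored as the pair (z0, z1); recall z*j = j*conj(z).
def qmul(a, b):
    (a0, a1), (b0, b1) = a, b
    return (a0*b0 - a1*b1.conjugate(), a0*b1 + a1*b0.conjugate())

def qconj(a):                  # the inverse of a unit quaternion
    return (a[0].conjugate(), -a[1])

def F(q1, q2):                 # F(q1, q2): x -> q1 * x * q2^{-1}
    return lambda x: qmul(qmul(q1, x), qconj(q2))

m, q = 7, 2
xi = lambda k, e: cmath.exp(2j * cmath.pi * e / k)

gen = F((xi(2*m, q + 1), 0), (xi(2*m, q - 1), 0))
z = (0.6 + 0.3j, 0.5 - 0.2j)                  # an arbitrary test point
w = gen(z)
assert abs(w[0] - xi(m, 1) * z[0]) < 1e-12    # first coordinate times xi_m
assert abs(w[1] - xi(m, q) * z[1]) < 1e-12    # second coordinate times xi_m^q

flip = F((0, 1), (0, 1))                      # F(j, j)
w = flip(z)
assert abs(w[0] - z[0].conjugate()) < 1e-12
assert abs(w[1] - z[1].conjugate()) < 1e-12
print("F(xi^{q+1}, xi^{q-1}) generates the C_m-action; F(j, j) conjugates coordinates")
\end{verbatim}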
Generic lens spaces are generic in the sense that given $m$, the ratio of possible choices of $1\leq q\leq m$ yielding \[\Isom(L(m, q)) \cong \Dih(S^1\tilde{\times}S^1)/\langle F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})\rangle\] to $m$ tends to $1$ as $m$ tends to infinity. \subsection{Fiber sequences of diffeomorphism groups} Let us fix some notation for different subgroups of the diffeomorphism group of a manifold. We always allow manifolds to have boundary. \begin{definition}\label{def: diffeo groups notation} Let $M$ be a 3-manifolds, $V$ a manifold, and $U\subseteq M$ a submanifold. \begin{enumerate} \item $\Emb(V, M)\subseteq C^\infty(V, M)$ is the subset consisting of the embeddings of $V$ into $M$. \item $\Diff_\partial (M) = \{\varphi \in \Diff(M) \,|\, \forall x \in \partial M,\, \varphi(x) = x\}$. \item $\Diff_U(M) = \{\varphi \in \Diff(M) \,|\, \forall x \in U,\, \varphi(x) = x\}$. \item $\Diff(M, U) = \{\varphi \in \Diff(M) \,|\, \varphi(U) = U\}$. \item We often assume a Riemannian metric on $M$ and denote the group of isometries of $M$ by $\Isom(M)$. \end{enumerate} For all the groups $G$ above, we use the notation $G^+$ to denote the subset consisting of only orientation preserving maps, in case $M$ and $V$ are orientable, and if $V$ is codimension one we use the notation $\Emb^+(V, M)$ for orientation preserving embeddings. Furthermore, for all topological groups $G$ we will denote by $G_0$ the path component of the identity in $G$. \end{definition} To derive our fiber sequences we will rely on the notion of local retractileness defined as in \cite{Canter17}. \begin{definition} Let $G$ be a topological group. A \textit{$G$-locally retractile} space $X$ is a topological space with a continuous $G$-action, such that for all $x\in X$ there exists an open neighborhood $U\subseteq X$ of $x$ and a map $\xi\colon U \to G$, such that for all $y\in U$, $y = \xi(y).x$. In this situation $\xi$ is a \textit{$G$-local retraction around $x$}. \end{definition} In this case locally $X$ is a retract of $G$, but a $G$-local retraction around $x$ is in fact a local section of the map $G\to X$ sending $g$ to $g.x$. \begin{example}\label{eg: S^3 is SO(4) locally retractile} $S^3$ is an $\SO(4)$-locally retractile space. Given some base-point $q_0\in S^3$ we can write down an $\SO(4)$-local retraction around $q_0$ via $\xi\colon S^3\to \SO(4)$ with $\xi(q) = F(q, q_0)$. \end{example} From now on, we will always assume that actions of topological groups are continuous. The following is a combination of lemmas from \cite[Lemma 2.4, 2.5, 2.6]{Canter17} except for point (4) which follows by choosing some path between points and then covering it by a finite number of opens and applying local retractileness. \begin{lemma} \label{local retractileness} Let $G$ be a topological group and $E$ and $X$ spaces with a $G$-action, and let $f\colon E \to X$ be a $G$-equivariant map. \begin{enumerate}[(1)] \item If $X$ is $G$-locally retractile, then $f$ is a locally trivial fibration. \item If $f$ has local sections and $E$ is $G$-locally retractile, then $X$ is also $G$-locally retractile. \item Let $X$ be locally path connected and $G$-locally retractile. If $H<G$ is a subgroup containing the path component of the identity, then $X$ is also $H$-locally retractile. \item If $X$ is path connected and $G$-locally retractile, then the action of $G$ is transitive. 
\end{enumerate} \end{lemma} The following theorem proved by Lima in \cite{Lim64}, originally due to Palais and Cerf, implies that $\Emb(V, M)$ is $\Diff(M)$-locally retractile in case $V$ is compact, where the action on $\Emb(V, \interior{M})$ is given by post-composition. \begin{theorem}\label{Emb is locally retractile} Let $M$ be a $C^\infty$-manifold, and $V\subseteq \interior{M}$ a compact submanifold. The space $\Emb(V, \interior{M})$ is $\Diff(M)$-locally retractile. \end{theorem} This provides us with the Palais fiber sequence. Let $M$ be a $C^\infty$-manifold, $V\subseteq \interior{M}$ a compact submanifold. There is a fiber sequence of the form \begin{equation}\label{eq: Palais fib seq} \Diff_V(M) \hookrightarrow \Diff(M) \to \Emb(V, \interior{M}). \end{equation} Pulling back the Palais fiber sequence gives the following lemma: \begin{lemma}\label{submnfld fib seq} Given a compact submanifold $V\subseteq \interior{M}$ there is a fiber sequence \[\Diff_V(M)\to \Diff(M, V) \to \Diff(V).\] Furthermore, for $\Diff^\prime(V)$ the space of those diffeomorphisms of $V$ that can be extended to a diffeomorphism of $M$ we have that the map $\Diff(M, V)\to \Diff^\prime(V)$ is a $\Diff_V(M)$-principal bundle. \end{lemma} The last point about the map $\Diff(M, V)\to \Diff^\prime(V)$ being a $\Diff_V(M)$-principal bundle is especially useful when considering in tandem with the following lemma from \cite[Corollary 2.11 (2)]{bonat20}. \begin{lemma}\label{ses delooped} For $i = 1, 2, 3$ let $G_i$ be a topological group and and $S_i$ a space with a $G_i$-action. Let $1\to G_1\to G_2 \overset{\phi}{\to}G_3\to 1$ be a short exact sequence of groups such that $\phi$ is a $G_1$-principal bundle. If $S_1\to S_2\to S_3$ is a fiber sequence of equivariant maps, then the induced maps on quotients form a homotopy fiber sequence \[S_1\hq G_1 \to S_2\hq G_2 \to S_3\hq G_3.\] \end{lemma} We will use two special cases of this lemma, both of them are well-known results, one is the case where $S_1=S_2=S_3=\text{pt}$, which allows us to deloop the short exact sequence of groups into a homotopy fiber sequence $BG_1\to BG_2\to BG_3$, the second is where $S_1 = S_2 = X$, $S_3= \text{pt}$ and $G_1 = 1$, $G_2=G_3 = G$, which gives for all $G$-spaces $X$ a homotopy fiber sequence $X\to X\hq G \to BG$. \begin{remark} Let $1\to G_1\to G_2 \overset{p}{\to}G_3\to 1$ be a short exact sequence of topological groups. $G_3$ is a $G_2$-locally retractile space with respect to the induced action from $p$, if and only if $p$ is a $G_1$-principal bundle. In this case we call the short exact sequence a principal short exact sequence. \end{remark} Cerf in \cite{Cerf61} showed the contractibility of collars, the following formulation of it comes from \cite[Theorem 2.6]{jan24}. \begin{theorem}\label{contractable collars} The space of collars \[\Emb_{\partial M}(\partial M \times I, M) = \{\iota \in \Emb(\partial M \times I, M) \,|\, \left.\iota\right|_{\partial M} = \text{id}_{\partial M}\}\] is weakly contractible, where $\partial M \times I$ is a tubular neighborhood of $\partial M$. As a consequence we have that the subgroup inclusion \[\Diff_U(M)\hookrightarrow\Diff_{\partial U}(M\setminus \interior{U})\] is a weak equivalence for a codimension 0 submanifold $U\subseteq \interior{M}$. \end{theorem} The next lemma, a consequence of the \textit{homotopical orbit stabilizer lemma}, \cite[Lemma 2.10]{jan24} . 
\begin{lemma}\label{lem: id path component homotopical orbit stabilizer} Let $X$ be a path connected $G$-locally retractile space such that the $G$ action on $X$ is transitive, and let $x\in X$. Consider the inclusion $\{x\}\hookrightarrow X$; this is equivariant with respect to $\Stab_G(x)_0\hookrightarrow G_0$, where $G_0 \triangleleft G$ is the path component of the identity in $G$ and $\Stab_G(x) < G$ is the stabilizer group of $x$ in $G$. If the inclusion of $\Stab_G(x)$ into $G$ induces a bijection on path components, then the equivariant inclusion of $x$ into $X$ induces a weak equivalence, in fact a homeomorphism for the right models of the classifying spaces, \[B\Stab_G(x)_0 \overset{\simeq}{\to}X\hq G_0.\] Moreover, there is a homotopy fiber sequence \[X\to B \Stab_G(x)_0 \to BG_0.\] \end{lemma} \begin{proof} By \cite[Lemma 2.10]{jan24}, the map \[\begin{tikzcd}[cramped, row sep=small] {\Stab_G(x)} & G \\ \{x\} \arrow[loop above, out=120, in=70, distance=15] & X \arrow[loop above, out=120, in=70, distance=15] \arrow[hook, from=1-1, to=1-2] \arrow[hook, from=2-1, to=2-2] \end{tikzcd}\] induces a weak equivalence $B\Stab_G(x) \overset{\simeq}{\to}X\hq G$, which is in fact a homeomorphism for the right models of the classifying spaces. We have to see that \[\Stab_{G}(x)_0\hookrightarrow\Stab_{G_0}(x) = G_0\cap\Stab_{G}(x)\] is a surjection. The assumption that $\Stab_G(x)\hookrightarrow G$ induces a bijection on path components means that any $g\in \Stab_{G}(x)$ is in $\Stab_{G}(x)_0$ if and only if it is connected to the identity in $G$, i.e. is in $G_0$. \end{proof} \begin{theorem} \label{embeddings of discs are framings} If $M$ is an $m$-dimensional manifold, then the differential at $0$ gives a weak equivalence $\Emb(D^m, M)\overset{\simeq}{\to}\Fr(TM)$. \end{theorem} \begin{lemma}\label{lem: cut out disc} Let $M$ be a closed 3-manifold and $D\subseteq M$ an embedded 3-disc. Denote \[\Diff^{\Or}(M, D) = \{\varphi\in \Diff(M, D)\,|\, \left.\varphi\right|_{D}\in \Or(3)\subseteq \Diff(D)\}.\] The maps \[\Diff(M\setminus \interior{D})\leftarrow \Diff^{\Or}(M, D) \to \Diff_{x}(M)\] are weak equivalences, where $x\in D$ is its center point. \end{lemma} \begin{proof} The map $\Diff^{\Or}(M, D)\to \Diff(M\setminus \interior{D})$ is the pullback of the map $\Or(3)\to \Diff(\partial(M\setminus \interior{D}))$ along the restriction $\Diff(M\setminus \interior{D})\to \Diff(\partial(M\setminus \interior{D}))$. By the Smale theorem, the map $\Or(3) \to \Diff(S^2)\cong \Diff(\partial(M\setminus \interior{D}))$ is a weak equivalence. The map $\Diff^{\Or}(M, D)\to \Diff_{x}(M)$ is a weak equivalence as it is a pullback of the map $\Or(3)\to\Emb_{\{x\}}(D^3, M)$ given by letting an element of $\Or(3)$, viewed as a diffeomorphism of $D^3$, act on the embedding of $D$ by precomposition. Here $\Emb_{\{x\}}(D^3, M) = \{i \in \Emb(D^3, M)\, |\, i(0) = x\}$. Taking the derivative at the center $0$ gives a weak equivalence $\Emb_{\{x\}}(D^3, M)\to \GL_3(\mathbb{R})$, and this means that, as $\GL_3(\mathbb{R})$ retracts onto $\Or(3)$, the composition with $\Or(3)\to\Emb_{\{x\}}(D^3, M)$ is a weak equivalence, and we conclude using the 2 out of 3 property. \end{proof} \section{Setup} \subsection{The main homotopy fiber sequence} There is a theorem of Hatcher, remarked in \cite{Hatch81} and also proven in \cite[Theorem 3.21]{jan24}, stating: \begin{theorem}\label{theorem of Hatcher} Let $M$ be a connected sum of two irreducible manifolds that are not diffeomorphic to $S^3$.
If $S\subseteq M$ is the 2-sphere these irreducible pieces are joined along, then the inclusion $\Diff(M, S) \hookrightarrow \Diff(M)$ is an equivalence. \end{theorem} From now on we set $M\cong L_1\#L_2$ for two generic lens spaces, so that $L_1\not \cong L_2$. Fix a 2-sphere $S$ in $M\cong L_1\#L_2$ is such that $M\setminus N(S) \cong L_1\setminus\interior{D^3} \sqcup L_2\setminus\interior{D^3}$ where $N(S)$ is an open tubular neighborhood of $S$. As $L_1\not\cong L_2$, $\Diff(M)\simeq \Diff(M, S)\cong \Diff(M, L_2\setminus\interior{D^3})$. Consider the following exact sequence of topological groups, \begin{equation}\label{main fib seq w.o. delooping} \Diff_{L_2\setminus\interior{D^3}}(M)\to \Diff(M, L_2\setminus\interior{D^3}) \overset{p}{\to} \Diff(L_2\setminus\interior{D^3}). \end{equation} By Lemma \ref{submnfld fib seq}, to see that this is a principal short exact sequence, we need the second map to be surjective. However as a consequence of contractability of collars, we have the following lemma: \begin{lemma}\label{lem: extendability based on boundary} Let $V\subseteq M$ be a codimension zero submanifold of M and $\varphi\in\Diff(V)$. There is some $f\in \Diff(M, V)$ such that $\left.f\right|_V = \varphi$ if and only if there is some $\psi\in \Diff(M, V)$ such that \[[\left.\psi\right|_{\partial V}] = [\left.\varphi\right|_{\partial V}]\in\pi_0\Diff(\partial V).\] This says that the extendability of $\varphi$ only depends on $[\left.\varphi\right|_{\partial V}]\in \pi_0\Diff(\partial V)$. \end{lemma} On one hand $\pi_0 \Diff(\partial L_2\setminus\interior{D^3}) \cong \pi_0 \Diff(S^2) \cong \pi_0 \Or (3)\cong C_2$, where under the last isomorphism orientation preserving diffeomorphisms are mapped to $+1$ and orientation reversing diffeomorphisms are mapped to $-1$. On the other hand, generic lens spaces do not admit orientation reversing homeomorphisms, \cite[Proposition 1.1]{mccul00}, and therefore for all $\varphi \in \Diff(\partial L_2\setminus\interior{D^3})$, $[\left.\varphi\right|_{\partial L_2\setminus\interior{D^3}}] = [\text{id}]\in \pi_0 \Diff(\partial L_2\setminus\interior{D^3})$. This means Lemma \ref{lem: extendability based on boundary} implies that the short exact sequence (\ref{main fib seq w.o. delooping}) is a principal short exact sequence. This in particular means that by Lemma \ref{ses delooped} we can deloop this to a homotopy fiber sequence as follows: \begin{equation}\label{main fib seq} B\Diff_{L_2\setminus\interior{D^3}}(M)\to B\Diff(M, L_2\setminus\interior{D^3}) \to B\Diff(L_2\setminus\interior{D^3}). \end{equation} Let us inspect the outer terms of (\ref{main fib seq}). Contractability of collars implies that $\Diff_{L_2\setminus\interior{D^3}}(M)\simeq \Diff_\partial(L_1\setminus\interior{D^3})$. Applying it again yields $\Diff_\partial(L_1\setminus\interior{D^3})\simeq \Diff_{D^3}(L_1)$. Furthermore applying Lemma \ref{lem: cut out disc} we get $\Diff(L_2\setminus\interior{D^3}) \simeq \Diff_{\text{pt}}(L_2)$. This means that to get the terms in the Leray-Serre spectral sequence induced by (\ref{main fib seq}), we just have to calculate the cohomology of $B\Diff_{D^3}(L_1)$ and $B \Diff_{\text{pt}}(L_2)$. \subsection{Strategy}\label{strategy section} Let us go over our strategy for the proof before we get to the details. By Theorem \ref{theorem of Hatcher} $\Diff(M, S)\simeq \Diff(M)$ and we want to compute the cohomology of the classifying space of $G = \Diff(M, S)$. 
Our strategy to calculate the cohomology of $BG$ is to use the homotopy fiber sequence \[BG_0\to BG \to B\pi_0G\] where $G_0$ is the path component of the identity in $G$. Since the $E_2$-page is twisted, one has to determine the action of $\pi_1 BG\cong \pi_0 G$ on the cohomology of $BG_0$ in order to figure out the cohomology of $BG$. If we can do this, and assuming that $\pi_0 G$ is a finite group, we obtain that \[H^\ast(BG) \cong H^\ast(BG_0)^{\pi_0 G}.\] This means we need to calculate $\pi_0 \Diff(M, S)$, $H^\ast(B\Diff(M, S)_0)$, and the action. We calculate the cohomology groups $H^k(B\Diff(M, S)_0)$ using the cohomological Leray-Serre spectral sequence associated to the homotopy fiber sequence (\ref{main fib seq}); this will turn out to collapse on the second page. However, this does not tell us the ring structure. In order to calculate that, we use the map induced by the product of the restrictions \[H^\ast(B\Diff(L_2\setminus\interior{D^3})_0 \times B\Diff(L_1\setminus\interior{D^3})_0)\to H^\ast(B\Diff(M, S)_0).\] We show that the kernel of this map contains a specific ideal, and then, as we know the dimensions of $H^k(B\Diff(M, S)_0)$ as a $\mathbb{Q}$-vector space for each $k$, we can conclude that the kernel is in fact equal to that ideal. In the calculation of both $B\Diff_{D^3}(L)_0$ and $B \Diff_{\text{pt}}(L)_0$ we will exploit the covering of $\Isom(L)_0$ by $\SO(2)\times \SO(2)$ as discussed in Lemma \ref{lem: the descenting isometries}. \subsection{The mapping class groups} Our goal in this section is to calculate $\pi_0\Diff(M)$, the mapping class group of $M$. \begin{lemma}\label{lem: descending differentials fixing points} Consider the inclusions \[\iota_{1j} \colon \SO(2)\hookrightarrow \Isom^+_{\{1j\}}(S^3)\] given by $e^{2ti} \mapsto F(e^{ti}, e^{-ti})$ and \[\iota_{1}\colon \SO(2) \hookrightarrow \Isom^+_{\{1\}}(S^3)\] given by $e^{2ti} \mapsto F(e^{ti}, e^{ti})$ for all $t\in [0, \pi)$. Let $x$ denote either $1j$ or $1$ and $p^\ast\colon \Norm(C_m)_0\to \Diff_{p(x)}(L)_0$ the map induced by the projection $p\colon S^3\to L$, where $\Norm(C_m)$ is the normalizer of the $C_m < \Isom^+(S^3)$ that we are quotienting $S^3$ by to obtain $p$. Given an identification of the tangent space of $L$ at $p(x)$ with $\mathbb{R}^3$, we get that the composition \[\SO(2)\overset{\iota_{x}}{\to} \Norm(C_m)_0 \overset{p^\ast}{\to}\Diff_{\{p(x)\}}(L)_0\overset{T_{x}}{\to}\GL^+_3(\mathbb{R})\] is the inclusion. \end{lemma} \begin{proof} Both of $\iota_1$ and $\iota_{1j}$ land in the $\SO(2)\times\SO(2) = F(S^1, S^1)$ subgroup of $\Isom^+(S^3)$ that is always in the normalizer of the subgroup we quotient by to get a generic lens space. The action of $C_m$ on $S^3$ is a free action of a finite discrete group, and therefore, for $\varepsilon$ chosen small enough, $p$ maps $B_x(\varepsilon)$ homeomorphically onto its image, where $B_{q_0 + q_1j}(\varepsilon) = \{z_0+z_1j\in S^3 \,|\, |z_0-q_0|^2+|z_1-q_1|^2 < \varepsilon\}$. Furthermore, the image of $\iota_{x}$ leaves $x$ fixed and in fact preserves $B_x(\varepsilon)$, as for $\zeta \in S^1$ and $z \in \mathbb{C}$ we have $|\zeta ^2 z| = |z|$, and $F(\zeta, \zeta)$ is multiplication of the second coordinate by $\zeta^2$ while $F(\zeta, \zeta^{-1})$ is multiplication of the first coordinate by $\zeta^2$.
By all this we really mean that we get a diagram as follows: \[\begin{tikzcd} {B_x(\varepsilon)} && {B_x(\varepsilon)} \\ {p(B_x(\varepsilon))} && {p(B_x(\varepsilon)).} \arrow["{\left.\iota_x(\zeta)\right|_{B_x(\varepsilon)}}", from=1-1, to=1-3] \arrow["\cong"', from=1-1, to=2-1] \arrow["\cong"', from=1-3, to=2-3] \arrow["{\left.p\circ\iota_x(\zeta)\right|_{p(B_x(\varepsilon))}}", from=2-1, to=2-3] \end{tikzcd}\] Therefore choosing the charts on $L$ to be gained locally from charts on $S^3$ through $p$ we see that the differential of $p\circ\iota_x(\zeta)$ at $p(x)$ agrees with the differential of $\iota_x(\zeta)$ at $x$. The composition $T_{x}\circ \iota_{x}\colon \SO(2) \to \GL_3(\mathbb{R})$ becomes the inclusion, given by block summing with the one-by-one identity matrix (we restrict the differential of $\iota_x(A)$ which is block summing the matrix of $A$ with a two-by-two identity matrix to the space spanned by the other three standard basis vectors besides $x$). \end{proof} \begin{theorem}\label{thm: lens space diffs pi_0's} For a generic lens space $L$, the inclusions $\Diff_{\text{pt}}(L)\hookrightarrow \Diff(L)$ and $\Diff_{D^3}(L)\hookrightarrow \Diff_{\text{pt}}(L)$ induce isomorphisms on path components, and we have \[\pi_0(\Diff_{D^3}(L))\cong\pi_0(\Diff_{\text{pt}}(L))\cong \pi_0(\Diff(L))\cong C_2.\] \end{theorem} \begin{proof} The statement $\pi_0(\Diff(L))\cong C_2$ follows from the generalized Smale conjecture (Theorem \ref{thm: generalized smale conj}) and from $\Isom(L)\cong \Dih(S^1\tilde{\times}S^1)$ (quotienting $\Dih(S^1\tilde{\times}S^1)$ by $\langle F(\xi_{2m}^{q+1}), \xi_{2m}^{q-1})\rangle$ just results in an $m$-fold covering of $\Dih(S^1\tilde{\times}S^1)$ by itself). Let $1 = p(1)\in L$ for the quotient map $p\colon S^3\to L$. For $\pi_0(\Diff_{\text{pt}}(L))\cong \pi_0(\Diff(L))$ consider the fiber sequence \[\Diff_{\{1\}}(L)\to \Diff(L)\to L \cong \Emb(\text{pt}, L)\] this yields an exact sequence \[\pi_1(\Isom(L), \text{id}) \overset{f}{\to} \pi_1(L, 1)\to \pi_0(\Diff_{\{1\}}(L) )\overset{g}{\to} \pi_0(\Diff(L))\to \pi_0(L)\cong\text{pt}.\] To see that $g$ is an isomorphism we just need that $f$ is surjective. $\pi_1(L)$ is cyclic so all we have to show is that $f$ hits its generator. $p\circ \gamma$ generates $\pi_1(L)$ for $\gamma(t) = e^{\frac{2\pi i t}{m}}$ by covering theory, as $\xi_m = F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})(1)$, and $F(\xi_{2m}^{q+1}, \xi_{2m}^{q-1})$ is the generator of the $C_m$-action on $S^3$ we quotient by. Now we just have to see that $\gamma$ can be given by a path $\lambda$ in $\Norm(C_m) = \Dih(S^1\tilde{\times}S^1) = \langle F(S^1\times S^1), F(j, j) \rangle$ so that $\lambda(t)(1) = \gamma(t)$ and $\lambda$ becomes a loop in $\Isom(L)$. Such a path may be constructed as $\lambda(t) = f(\xi_{2m}^{t(q+1)}, \xi_{2m}^{t(q-1)})$, where $f(q_1, q_2)$ denotes the isometry of $L$ induced by $F(q_1, q_2)$ for any $q_1$ and $q_2$ this makes sense for. For $\pi_0(\Diff_{D^3}(L))\cong\pi_0(\Diff_{\text{pt}}(L))$ consider the homotopy fiber sequence \[\Diff_{D^3}(L) \to \Diff_{\{1\}}(L) \overset{T_1}{\to} \GL_3^{+}(\mathbb{R})\simeq SO(3).\] This gives rise to the exact sequence \[\pi_1(\Diff_{\{1\}}(L), \text{id}) \overset{f}{\to} \pi_{1}(\SO(3), \text{id})\to \pi_0(\Diff_{D^3}(L) )\overset{g}{\to} \pi_0(\Diff_{\{1\}}(L))\to \pi_0(\SO(3))\simeq \text{pt}.\] Again we have to see that $f$ is surjective. 
We have $\GL_3^{+}(\mathbb{R})\simeq \SO(3) \cong D^3/\sim$ where on $D^3$ we identify the antipodal points of $\partial D^3$, we take $D^3= \{x\in \mathbb{R}^3 \,|\, |x|\leq \pi\}$ and then each point $x\in D^3$ of it corresponds to the rotation around the span of $\{x\}$ in $\mathbb{R}^3$ by the angle $|x|$ and clockwise or counter clockwise depending on the sign of $x$, the origin corresponds to the identity. $\pi_1(\SO(3), \text{id}) = C_2$ generated by the loops given by $\gamma\colon [0, 1]\to D^3/\sim$, with $\gamma(t)= tx - (1-t)x$ for some $x\in \partial D^3$. This means that we want a loop $\lambda$ in $\Diff_{\{1\}}(L)$ with $T_1\lambda(t)$ being rotation by $(2t-1)\pi$ around some axis (as rotation by $\theta$ around an axis spanned by $x$ is rotation by $-\theta$ around the axis given by $-x$). Consider $\lambda(t)$ given by $F(\zeta_t, \zeta_t)$ for $\zeta_t = e^{\pi i t}$, since $\zeta_t\in S^1$, $F(\zeta_t, \zeta_t)(z_0+z_1j) = z_0+\zeta_t^2 z_1 j$. This is essentially the loop in $\Isom^+_1(S^3)$ given by $\iota_1(S^1)$ and therefore by Lemma \ref{lem: descending differentials fixing points} we conclude. \end{proof} Finally, we compute the path components of $\Diff(M, S)\simeq \Diff(M)$. Before this calculation let us present a handy commutative diagram that will come up in another context later as well. \begin{remark}\label{rem: handy commutative diagram} The following is a commutative diagram: \[\begin{tikzcd}[cramped,row sep=large] {\Diff_{L_1\setminus \interior{D^3}}(M)} & {\Diff_\partial(L_2\setminus\interior{D^3})} & {\Diff_{D^3}(L_2)} \\ {\Diff(L_2\setminus \interior{D^3})} & {\Diff_{\text{pt}}(L_2, D^3)} & {\Diff_{\text{pt}}(L_2).} \arrow["\simeq", from=1-1, to=1-2] \arrow["{(\text{res}^M_{L_2\setminus \interior{D^3}})_\ast}", from=1-1, to=2-1] \arrow[dashed, hook', from=1-2, to=2-1] \arrow["\simeq"', from=1-3, to=1-2] \arrow[dashed, hook', from=1-3, to=2-2] \arrow[from=1-3, to=2-3] \arrow["\simeq"', from=2-2, to=2-1] \arrow["\simeq", from=2-2, to=2-3] \end{tikzcd}\] \end{remark} \begin{theorem}\label{thm: mapping class group} The mapping class group of $M\cong L_1\#L_2$ where $L_1$ and $L_2$ are non-diffeomorphic generic lens spaces is \[\pi_0 (\Diff(M)) \cong C_2\times C_2.\] \end{theorem} \begin{proof} We consider the commutative diagram, where both rows are fiber sequences: \[\begin{tikzcd} {\Diff_{L_1\setminus\interior{D^3}}(M)} & {\Diff(M, L_1\setminus\interior{D^3})} & {\Diff(L_1\setminus\interior{D^3})} \\ {\Diff(L_2\setminus\interior{D^3})} & {\Diff(L_2\setminus\interior{D^3}) \times \Diff(L_1\setminus\interior{D^3})} & {\Diff(L_1\setminus\interior{D^3}).} \arrow[from=1-1, to=1-2] \arrow[from=1-1, to=2-1] \arrow[from=1-2, to=1-3] \arrow[from=1-2, to=2-2] \arrow[from=1-3, to=2-3] \arrow[from=2-1, to=2-2] \arrow[from=2-2, to=2-3] \end{tikzcd}\] This induces a comparison of long exact sequences. 
\[\begin{tikzcd}[cramped,column sep=tiny] {\pi_1\Diff(L_1\setminus\interior{D^3})} & {\pi_0\Diff_{L_1\setminus\interior{D^3}}(M)} & {\pi_0\Diff(M, L_1\setminus\interior{D^3})} & {\pi_0\Diff(L_1\setminus\interior{D^3})} \\ {\pi_1\Diff(L_1\setminus\interior{D^3})} & {\pi_0\Diff(L_2\setminus\interior{D^3})} & {\pi_0\Diff(L_2\setminus\interior{D^3}) \times \pi_0\Diff(L_1\setminus\interior{D^3})} & {\pi_0\Diff(L_1\setminus\interior{D^3}).} \arrow["{\partial^\prime}", from=1-1, to=1-2] \arrow[equal, from=1-1, to=2-1] \arrow["{\iota_\ast}", from=1-2, to=1-3] \arrow["{\left(\text{res}^M_{L_2\setminus\interior{D^3}}\right)_\ast}", from=1-2, to=2-2] \arrow["{\left(\text{res}^M_{L_1\setminus\interior{D^3}}\right)_\ast}", from=1-3, to=1-4] \arrow[from=1-3, to=2-3] \arrow[equal, from=1-4, to=2-4] \arrow["\partial", from=2-1, to=2-2] \arrow[from=2-2, to=2-3] \arrow[from=2-3, to=2-4] \end{tikzcd}\] We have that \[\pi_0\Diff_{L_1\setminus\interior{D^3}}(M)\cong \pi_0\Diff_{D^3}(L_2)\cong C_2\] and \[\pi_0\Diff(L_1\setminus\interior{D^3})\cong \pi_0\Diff_{\text{pt}}(L_1)\cong C_2.\] In the above diagram $\partial$ is $0$ by exactness, and $\left(\text{res}^M_{L_2\setminus\interior{D^3}}\right)_\ast$ is an isomorphism after considering the commutative diagram from Remark \ref{rem: handy commutative diagram} and Theorem \ref{thm: lens space diffs pi_0's}. This means that $\partial^\prime$ is $0$ by commutativity. Thus $\iota_\ast$ is injective. We furthermore have that $\left(\text{res}^M_{L_1\setminus\interior{D^3}}\right)_\ast$ is surjective by Lemma \ref{lem: extendability based on boundary}. Now we apply the 5-lemma to \[\begin{tikzcd}[column sep=large] 0 & {C_2} & {\pi_0\Diff(M, L_1\setminus\interior{D^3})} & {C_2} & 0 \\ 0 & {C_2} & {C_2 \times C_2} & {C_2} & 0 \arrow["{\partial^\prime}", from=1-1, to=1-2] \arrow[equal, from=1-1, to=2-1] \arrow["{\iota_\ast}", from=1-2, to=1-3] \arrow["\cong", from=1-2, to=2-2] \arrow["{\left(\text{res}^M_{L_1\setminus\interior{D^3}}\right)_\ast}", from=1-3, to=1-4] \arrow[from=1-3, to=2-3] \arrow[from=1-4, to=1-5] \arrow["\cong", from=1-4, to=2-4] \arrow[equal, from=1-5, to=2-5] \arrow["\partial", from=2-1, to=2-2] \arrow[from=2-2, to=2-3] \arrow[from=2-3, to=2-4] \arrow[from=2-4, to=2-5] \end{tikzcd}\] and conclude that $\pi_0 \Diff(M)\cong \pi_0\Diff(M, L_1\setminus\interior{D^3})\cong C_2\times C_2$. \end{proof} \section{Computations on the identity path components}\label{the computation} In this section $L$ will always denote a generic lens space. We start with establishing some background and notation for the calculation. \cite[Theorem 15.9]{miln74} implies that the rational cohomology ring $H^\ast(B\SO(n))$ is a polynomial ring over $\mathbb{Q}$ generated by \begin{enumerate} \item in case $n$ is odd, the Pontryagin classes $p_1, \dots, p_{(n-1)/2}$ \item in case $n$ is even, the Pontryagin classes $p_1, \dots, p_{n/2}$ and the Euler class $e$, where $e^2 = p_{n/2}$. \end{enumerate} Here the degrees are as follows: $|p_k| = 4k$ and $|e| = n$. The inclusion $\SO(n)\times\SO(m)\to \SO(n+m)$ given by block summing induces the Whitney sum on vector bundles, let us give two corollaries of this. In $H^2(B\SO(2)\times B\SO(2))$ we will denote following the Künneth isomorphism $pr_1^\ast(e)$ as $e\otimes 1$ and $pr_2^\ast(e)$ as $1\otimes e$. The map \[H^\ast(B\SO(4))\to H^\ast(B\SO(2)\times B\SO(2))\] induced by the inclusion of $\SO(2)\times \SO(2) \hookrightarrow \SO(4)$ sends $p_1$ to $(e\otimes 1)^2 + (1\otimes e)^2$ and $e$ to $(e\otimes 1)(1\otimes e)$. 
Similarly the map \[H^\ast(B\SO(4))\to H^\ast(B\SO(3))\] induced by block sum with the identity, sends $p_1$ to $p_1$ and $e$ to $0$. \begin{lemma}\label{lem: preliminary s.seq. comparison} In the rational cohomological Leray-Serre spectral sequence of \[S^3\to S^3\hq(\SO(2)\times\SO(2))\to B\SO(2)\times B\SO(2)\] the differential $d^4\colon E_4^{0, 3}\to E_4^{4, 0}$ sends the fundamental class of $S^3$ to a non-zero multiple of $(e\otimes 1)(1\otimes e)$. \end{lemma} \begin{proof} Applying Lemma \ref{lem: id path component homotopical orbit stabilizer} in light of Example \ref{eg: S^3 is SO(4) locally retractile} we have in particular $B\SO(3)\cong S^3\hq \SO(4)$ and under this homeomorphism $S^3\hq\SO(4)\to B\SO(4)$ becomes the map $B\SO(3)\hookrightarrow B\SO(4)$ induced by the inclusion $\SO(3)\hookrightarrow\SO(4)$ as $\SO(3)$ is the stabilizer subgroup of $1 + 0j\in S^3$. We inspect the cohomological Leray-Serre spectral sequence of \[S^3\to S^3\hq\SO(4)\to B\SO(4).\] Note that the only non-zero differentials are on the $E_4$-page as $E_2^{p, q} \cong H^p(B\SO(4))\otimes H^q(S^3)$. Since \[H^4(B\SO(4))\cong E_2^{4, 0}\rrightarrow E_\infty^{4, 0}\cong H^4(S^3\hq\SO(4))\] is induced by the map $S^3\hq\SO(4)\to B\SO(4)$ and we conclude that $\image(d^4\colon E_4^{0, 3}\to E_4^{4, 0}) = \langle e\rangle$. Now the comparison \[\begin{tikzcd}[cramped] {S^3} & {S^3\hq\SO(4)} & {B\SO(4)} \\ {S^3} & {S^3\hq(\SO(2)\times\SO(2))} & {B(\SO(2)\times\SO(2))} \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-3] \arrow[shift left, no head, from=2-1, to=1-1] \arrow[no head, from=2-1, to=1-1] \arrow[from=2-1, to=2-2] \arrow[from=2-2, to=1-2] \arrow[from=2-2, to=2-3] \arrow["i"', from=2-3, to=1-3] \end{tikzcd}\] induces a comparison of spectral sequences. We know that $i^\ast(e) = (e\otimes 1)(1\otimes e)$ and from this we conclude. \end{proof} \subsection{The diffeomorphisms fixing a point} We want to compare $\Diff_{\text{pt}}(L)$ to $\Diff_{\text{pt}}^+(S^3)$, but not all of the diffeomorphisms of $S^3$ factor through the quotient, in fact similarly to Lemma \ref{lem: the descenting isometries} exactly those do which are in the normalizer of the $C_m$ subgroup of $\SO(4) = \Isom^+(S^3) < \Diff^+(S^3)$ that we mod out by. This description gives us the following diagram: \[\begin{tikzcd} {\Diff^{+}(S^3)} & {\Norm_{\Diff^+(S^3)}(C_m)_0} & {\Diff(L)_0} \\ {\SO(4)} & {\SO(2)\times\SO(2)} & {\Isom(L)_0} \\ {S^3}\arrow[loop above, out=120, in=70, distance=15] & {S^3}\arrow[loop above, out=120, in=70, distance=15] & L.\arrow[loop above, out=120, in=70, distance=15] \arrow[from=1-2, to=1-1] \arrow[from=1-2, to=1-3] \arrow["\simeq"', hook, from=2-1, to=1-1] \arrow[hook, from=2-2, to=1-2] \arrow[from=2-2, to=2-1] \arrow["{\sim_\mathbb{Q}}", from=2-2, to=2-3] \arrow["\simeq", hook, from=2-3, to=1-3] \arrow[equal, from=3-2, to=3-1] \arrow["{\sim_\mathbb{Q}}", from=3-2, to=3-3] \end{tikzcd}\] \begin{notation} By $\sim_\mathbb{Q}$ we denote that the given map induces isomorphism on rational cohomology. \end{notation} In this case the maps indicated to induce isomorphisms on rational cohomology do so by virtue of the fact that the maps $F(S^1, S^1) = \SO(2)\times\SO(2)\to\Norm(C_m)_0 = \Dih(S^1\tilde{\times}S^1)_0$ and $S^3\to L$ in the diagram are m-fold coverings. 
By naturality we get a zig-zag of homotopy fiber sequences \begin{equation}\label{eq: emb of a point comparison} \begin{tikzcd} {S^3} & {S^3\hq \SO(4)} & {B\SO(4)} \\ {S^3} & {S^3\hq (\SO(2)\times \SO(2))} & {B(\SO(2)\times\SO(2))} \\ L & {L\hq \Isom(L)_0} & {B\Isom(L)_0.} \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-3] \arrow[equal, from=2-1, to=1-1] \arrow[from=2-1, to=2-2] \arrow["{\sim_\mathbb{Q}}", from=2-1, to=3-1] \arrow[from=2-2, to=1-2] \arrow[from=2-2, to=2-3] \arrow[from=2-2, to=3-2] \arrow[from=2-3, to=1-3] \arrow["{\sim_\mathbb{Q}}", from=2-3, to=3-3] \arrow[from=3-1, to=3-2] \arrow[from=3-2, to=3-3] \end{tikzcd} \end{equation} Here the middle map of the bottom comparison is also a rational cohomology isomorphism by the naturality properties of the Leray-Serre spectral sequences, see \cite[Proposition 5.13]{HatchSSeq}. \begin{theorem}\label{thm: rat cohom of diff(generic lens space) fixed a point} For a generic lens space $L$, \[H^\ast(B\Diff_{\text{pt}}(L)_0)\cong \mathbb{Q}[\mu, \eta]/( \mu\eta)\] where $|\mu|=|\eta| = 2$. Furthermore there is a surjection of graded algebras \[H^\ast(B\SO(2)\times B\SO(2)) \rrightarrow H^\ast(B\Diff_{\text{pt}}(L)_0)\] induced by the zig-zag $B\SO(2)\times B\SO(2) \overset{\sim_\mathbb{Q}}{\to} B\Isom(L)_0 \leftarrow L\hq\Isom(L)_0 \simeq B\Diff_{\text{pt}}(L)_0$, sending the pullbacks $1\otimes e$ and $e\otimes 1$ of the Euler class $e\in H^\ast(B\SO(2))$ along the two projections to $\mu$ and $\eta$. \end{theorem} \begin{proof} By Theorem \ref{Emb is locally retractile}, $\Emb(\text{pt}, L)\cong L$ is $\Diff(L)$-locally retractile. Lemma \ref{local retractileness} (3) and (4) implies that it is also $\Diff(L)_0$-locally retractile and that the $\Diff(L)_0$ action on $L$ is transitive. Lemma \ref{lem: id path component homotopical orbit stabilizer} and Theorem \ref{thm: lens space diffs pi_0's} implies that $\Diff_\text{pt}(L)_0\simeq \Emb(\text{pt}, L)\hq \Diff(L)_0$. Finally, by Theorem \ref{thm: generalized smale conj} we have \[L\hq \Isom(L)_0 \simeq B\Diff_{\text{pt}}(L)_0.\] By the comparison (\ref{eq: emb of a point comparison}) we reduce to computing $H^\ast(S^3\hq(\SO(2)\times\SO(2)))$. Using Lemma \ref{lem: preliminary s.seq. comparison} and the fact that the only non-zero differentials in the cohomological Leray Serre spectral sequence of \[S^3\to S^3\hq(\SO(2)\times \SO(2))\to B\SO(2)\times B\SO(2)\] are on the $E_4$-page, we conclude that the spectral sequence collapses on the $E_5$-page, and examining the cup product structure that the $d_4$ differentials hit everything in the ideal $((e\otimes 1)(1\otimes e))$ and leave only the zeroth row to be non-zero in $E_\infty$. 
\end{proof} \subsection{The diffeomorphisms fixing a disc} Similarly to before we use the diagram \[\begin{tikzcd} {\SO(4)} & {\SO(2)\times\SO(2)} & {\Isom(L)_0} \\ {\Emb^{+}(D^3, S^3)}\arrow[loop above, out=120, in=70, distance=15] & {\Emb^{+}(D^3, S^3)}\arrow[loop above, out=120, in=70, distance=15] & \Emb^{+}(D^3, L).\arrow[loop above, out=120, in=70, distance=15] \arrow[from=1-2, to=1-1] \arrow["{\sim_\mathbb{Q}}", from=1-2, to=1-3] \arrow[equal, from=2-2, to=2-1] \arrow["{\sim_\mathbb{Q}}", from=2-2, to=2-3] \end{tikzcd}\] This diagram implies by naturality that we have a zig-zag of fiber sequences as follows: \begin{equation}\label{eq: second fib seq comparison} \begin{tikzcd}[cramped,column sep=small] {\Emb^{+}(D^3, S^3)} & {\Emb^{+}(D^3, S^3)\hq \SO(4)} & {B\SO(4)} \\ {\Emb^{+}(D^3, S^3)} & {\Emb^{+}(D^3, S^3)\hq (\SO(2)\times \SO(2))} & {B(\SO(2)\times\SO(2))} \\ \Emb^{+}(D^3, L) & {\Emb^{+}(D^3, L)\hq \Isom(L)_0} & {B\Isom(L)_0.} \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-3] \arrow[equal, from=2-1, to=1-1] \arrow[from=2-1, to=2-2] \arrow["{\sim_\mathbb{Q}}", from=2-1, to=3-1] \arrow[from=2-2, to=1-2] \arrow[from=2-2, to=2-3] \arrow[from=2-2, to=3-2] \arrow[from=2-3, to=1-3] \arrow["{\sim_\mathbb{Q}}", from=2-3, to=3-3] \arrow[from=3-1, to=3-2] \arrow[from=3-2, to=3-3] \end{tikzcd} \end{equation} \begin{theorem}\label{thm: rat cohom of diff(generic lens space) fixed a disc} For a generic lens space $L$, \[H^\ast(B\Diff_{D^3}(L)_0)\cong \mathbb{Q}[\mu, \eta]/( \mu^2+\eta^2, \mu\eta)\] where $|\mu|=|\eta| = 2$. Furthermore there is a surjection of graded algebras \[H^\ast(B\SO(2)\times B\SO(2)) \rrightarrow H^\ast(B\Diff_{D^3}(L)_0)\] induced by the zig-zag $B(\SO(2)\times \SO(2))\overset{\sim_\mathbb{Q}}{\to}B\Isom(L)_0\leftarrow \Emb^+(D^3, L)\hq \Isom(L)_0$ sending the pullbacks $1\otimes e$ and $e\otimes 1$ of the Euler class $e\in H^\ast(B\SO(2))$ along the two projections to $\mu$ and $\eta$. \end{theorem} \begin{proof} $L$ is parallelizable, meaning $\Fr^+(L)\cong L\times \GL_3^+(\mathbb{R})\simeq L\times \SO(3)$, because it is a closed orientable 3-manifold (see \cite{bened18}). Thus Theorem \ref{embeddings of discs are framings} implies $\Emb^+(D^3, L)\simeq L\times \SO(3)$. This means it is path connected, which is instrumental in using the homotopy orbit stabilizer lemma. By Theorem \ref{Emb is locally retractile}, $\Emb(D^3, L)\cong L$ is $\Diff(L)$-locally retractile. Lemma \ref{local retractileness} (3) and (4) implies that it is also $\Diff(L)_0$-locally retractile and that the $\Diff(L)_0$ action on $L$ is transitive. Lemma \ref{lem: id path component homotopical orbit stabilizer} and Theorem \ref{thm: lens space diffs pi_0's} implies that $\Diff_{D^3}(L)_0\simeq \Emb(D^3, L)\hq \Diff(L)_0$. Finally, by Theorem \ref{thm: generalized smale conj} we have \[\Emb^+(D^3, L)\hq \Isom(L)_0\simeq B\Diff_{D^3}(L)_0.\] Similar argument shows \[\Emb^+(D^3, S^3)\hq\SO(4)\simeq B\Diff_{D^3}(S^3)\simeq \text{pt}.\] By Theorem \ref{embeddings of discs are framings} we also have that $\Emb^+(D^3, S^3)\simeq S^3\times \SO(3)$. 
Inspecting (\ref{eq: second fib seq comparison}) we can see that again we may reduce to computing \[H^\ast(\Emb^+(D^3, S^3)\hq(\SO(2)\times\SO(2))).\] Let us denote $E_\bullet^{\bullet, \bullet}$ the cohomological Leray Serre spectral sequence associated to \[\Emb^+(D^3, S^3)\to \Emb^+(D^3, S^3)\hq\SO(4)\to B\SO(4).\] Let us denote $D_\bullet^{\bullet, \bullet}$ the cohomological Leray Serre spectral sequence associated to \[\Emb^+(D^3, S^3)\to \Emb^+(D^3, S^3)\hq(\SO(2)\times\SO(2))\to B\SO(2)\times B\SO(2).\] Note that $E_2^{p, q}\cong E_2^{p, 0}\otimes E_2^{0, q}$ and also $D_2^{p, q}\cong D_2^{p, 0}\otimes D_2^{0, q}$. Let us use the notation \[H^\ast(\Emb^{+}(D^3, S^3))\cong H^\ast(S^3)\otimes_\mathbb{Q} H^\ast(\SO(3), \mathbb{Q})\cong \mathbb{Q}[\alpha, \beta]/\langle \alpha^2, \beta^2\rangle\] and $\mu = e\otimes 1$, $\eta = 1\otimes e\in H^2(B\SO(2)\times B\SO(2))$. With these notations the comparison of the fiber sequences $E_\bullet^{\bullet, \bullet}$ and $D_\bullet^{\bullet, \bullet}$ is laid out in Figure \ref{fig:sseqs2}, where the dots denote non-zero vector spaces that have too many generators to list. \begin{figure}[ht] \advance\leftskip-1cm \caption{Comparing spectral sequences} \begin{sseqpage}[title = $E_4^{\bullet, \bullet}$, cohomological Serre grading, class pattern = myclasspattern, classes = { draw = none }, class labels = { font = \small }, xscale = 0.7, yscale = 0.7] \class["1"](0,0) \class["e\;\;p_1"](4, 0) \class["p_1^2"](8, 0) \class["e^2"](8, 0) \class["e p_1"](8, 0) \class["\alpha\;\;\beta"](0, 3) \class["\alpha \beta"](0, 6) \class[{ black, fill }](4, 3) \class[{ black, fill }](4, 6) \class[{ black, fill }](8, 3) \class[{ black, fill }](8, 6) \d4(0,3) \d4(0, 6) \d4(4, 3) \d4(4, 6) \end{sseqpage} \quad \begin{sseqpage}[title = $D_4^{\bullet, \bullet}$, cohomological Serre grading, class pattern = myclasspattern, classes = { draw = none }, class labels = { font = \small }, xscale = 0.7, yscale = 0.7] \class["1"](0,0) \class["\eta"](2, 0) \class["\mu"](2, 0) \class["\eta^2"](4, 0) \class["\mu^2"](4, 0) \class["\eta \mu"](4, 0) \class[{ black, fill }](6, 0) \class[{ black, fill }](8, 0) \class["\alpha\;\;\beta"](0, 3) \class["\alpha \beta"](0, 6) \class[{ black, fill }](2, 3) \class[ { black, fill }](2, 6) \class[ { black, fill }](4, 6) \class[ { black, fill }](4, 3) \class[ { black, fill }](6, 6) \class[ { black, fill }](6, 3) \class[ { black, fill }](8, 6) \class[ { black, fill }](8, 3) \d4(0,3) \d4(0, 6) \d4(2,3) \d4(2, 6) \d4(4, 3) \d4(4, 6) \end{sseqpage} \begin{tikzpicture}[overlay, remember picture] \draw[-latex] (-14.3, 3.8) to[out=15,in=165] (-6.3, 3.7) node [above left = 0.7 and 3.3] {$\text{id}$}; \draw[-latex] (-11.3, 1.8) to[out=15,in=165] (-4, 1.7) node [above left = 0.7 and 3.3] {$i^\ast$}; \end{tikzpicture} \label{fig:sseqs2} \end{figure} Firstly, we want that $\langle\prescript{E}{}d_4^{0, 3}(\alpha)\rangle=\langle e \rangle$. To see this we use a comparison of spectral sequences through the following diagram: \[\begin{tikzcd} {S^3} & {S^3\hq \SO(4)} & {B\SO(4)}\\ {S^3\times \SO(3)} & {(S^3\times \SO(3))\hq \SO(4)} & {B\SO(4).} \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-3] \arrow[from=2-1, to=1-1] \arrow[from=2-1, to=2-2] \arrow[from=2-2, to=1-2] \arrow[from=2-2, to=2-3] \arrow["\simeq", from=2-3, to=1-3] \end{tikzcd}\] Where the map $S^3\times \SO(3)\to S^3$ is the projection onto the first coordinate. 
This is because the weak equivalence $\Emb^+(D^3, S^3)\simeq S^3\times \SO(3)$ records the point the origin is sent to, and the differential at the origin orthogonalized via Gram-Schmidt. Therefore under this weak equivalence, the map $\Emb^+(D^3, S^3)\to \Emb(\text{pt}, S^3)$ induced by the inclusion of the origin into $D^3$, becomes the projection. This means that $\alpha\in H^\ast(S^3)$ is sent to $\alpha = pr_1^\ast(\alpha)\in H^\ast(S^3\times \SO(3))$ by the map induced on cohomology by the comparison of the fibers, and thus by Lemma \ref{lem: preliminary s.seq. comparison} we see that indeed $\langle\prescript{E}{}d_4^{0, 3}(\alpha)\rangle=\langle e \rangle$. Since $E_\infty^{\ast, \ast} \cong 0$ and the only non-trivial differentials in $E_\bullet^{\bullet, \bullet}$ are on the $E_4$-page, we have to have that $\langle\prescript{E}{}d_4^{0, 3}(\beta)\rangle=\langle p_1 \rangle$. We can see that the comparison yields \[\langle\prescript{D}{}d_4^{0, 3}(\alpha)\rangle=\langle \mu\eta \rangle\] and \[\langle\prescript{D}{}d_4^{0, 3}(\beta)\rangle=\langle \mu^2+\eta^2 \rangle.\] We have \[\dim(E_2^{2k, 6}) + \dim(E_2^{2k+8, 0}) = \dim(E_2^{2k+4, 3})\] and $\dim(E_2^{6, 0}) = \dim(E_2^{2, 3})$. Furthermore inspecting the multiplicative structure we find that $\prescript{D}{}d_4^{2k, 6}\colon D_4^{2k, 6}\to D_4^{2k+4, 3}$ sends the generators of $D_4^{2k, 6}$ to an independent set in $D_4^{2k+4, 3}$ and that all the generators of $D_4^{2k+6, 0}$ are hit by $\prescript{D}{}d_4^{2k+2, 3}$ for all $k\geq 0$. This means that in fact in the $E_\infty$-page, the only non-trivial entries that remain are $D_\infty^{0, 0}$, $D_\infty^{2, 0}$, and $D_\infty^{4, 0}$. From this we conclude. \end{proof} \subsection{The whole identity path component} To calculate $H^k(B\Diff(M)_0)$, we just have to run the cohomological Leray-Serre spectral sequence of \[B\Diff_{L_2\setminus\interior{D^3}}(M)_0\to B\Diff(M, L_2\setminus\interior{D^3})_0\to B\Diff(L_2\setminus\interior{D^3})_0.\] Here the base is weakly equivalent to $B\Diff_{\text{pt}}(L_2)_0$ and the fiber is weakly equivalent to $B\Diff_{D^3}(L_1)$. Let us recall our results from Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a point} and Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a disc}. \[H^k(B\Diff_{D^3}(L)_0)\cong \begin{cases} \mathbb{Q}\text{ if } k= 0\\ \mathbb{Q}\langle \mu, \eta\rangle \text{ if } k = 2\\ \mathbb{Q}\langle \mu^2\rangle \text{ if } k = 4\\ 0\text{ otherwise} \end{cases}\] Where the cup product structure is so that $\mu^2 = -\eta^2$ and $\mu\eta = 0$. 
\[H^k(B\Diff_{\text{pt}}(L)_0)\cong \begin{cases} \mathbb{Q}\text{ if } k= 0\\ \mathbb{Q}\langle \mu^{k/2}, \eta^{k/2} \rangle\text{ if } k>0 \text{ is even}\\ 0\text{ otherwise} \end{cases}\] These imply that the $E_2$-page we are interested in looks as follows: \[\begin{sseqpage}[title = The main spectral sequence, cohomological Serre grading, class pattern = myclasspattern, classes = { draw = none }, class labels = { font = \small }, xscale = 0.7, yscale = 0.7] \class["\mathbb{Q}"](0,0) \class["\mathbb{Q}^2"](0, 2) \class["\mathbb{Q}"](0, 4) \class["\mathbb{Q}^2"](2, 0) \class["\mathbb{Q}^2"](4, 0) \class["\mathbb{Q}^2"](6, 0) \class["\mathbb{Q}^4"](2, 2) \class["\mathbb{Q}^2"](2, 4) \class["\mathbb{Q}^2"](4, 4) \class["\mathbb{Q}^2"](6, 4) \class["\mathbb{Q}^4"](4, 2) \class["\mathbb{Q}^4"](6, 2) \class["\dots"](8, 2) \class["\dots"](8, 0) \class["\dots"](8, 4) \end{sseqpage}\] but since all non-zero entries sit in even total degree, every differential vanishes, the spectral sequence collapses on the $E_2$-page, and therefore we get that \[H^n(B\Diff(M)_0)\cong \bigoplus_{k+l = n}H^k(B\Diff_{L_2\setminus\interior{D^3}}(M)_0)\otimes_{\mathbb{Q}} H^l(B\Diff(L_2\setminus\interior{D^3})_0).\] \begin{theorem}\label{thm: main result} Let $L_1$ and $L_2$ be generic 3-dimensional lens spaces that are not diffeomorphic to each other, and let $M \cong L_1\#L_2$. Then \[H^k(B\Diff(M)_0)\cong \begin{cases} \mathbb{Q} \;\,\text{ if } k = 0\\ \mathbb{Q}^4 \text{ if } k = 2\\ \mathbb{Q}^7 \text{ if } k = 4\\ \mathbb{Q}^8 \text{ if $k$ is even and }\geq 6 \\ 0\text{ otherwise} \end{cases}\] \end{theorem} Now we will give more information about the cup product structure: Figure \ref{fig:main ho fib seq comp} shows a comparison that we also used in the proof of Theorem \ref{thm: mapping class group}. \begin{figure}[ht] \caption{Comparing the homotopy fiber sequences} \[\begin{tikzcd} {B\Diff_{L_1\setminus\interior{D^3}}(M)_0} & {B\Diff(M, L_1\setminus\interior{D^3})_0} & {B\Diff(L_1\setminus\interior{D^3})_0} \\ {B\Diff(L_2\setminus\interior{D^3})_0} & {B\Diff(L_2\setminus\interior{D^3})_0 \times B\Diff(L_1\setminus\interior{D^3})_0} & {B\Diff(L_1\setminus\interior{D^3})_0} \arrow[from=1-1, to=1-2] \arrow[from=1-1, to=2-1] \arrow[from=1-2, to=1-3] \arrow[from=1-2, to=2-2] \arrow[equal, from=1-3, to=2-3] \arrow[from=2-1, to=2-2] \arrow[from=2-2, to=2-3] \end{tikzcd}\] \label{fig:main ho fib seq comp} \end{figure} From it we get a comparison of the induced cohomological Leray-Serre spectral sequences. The map $B\Diff_{L_1\setminus\interior{D^3}}(M)_0 \to B\Diff(L_2\setminus\interior{D^3})_0$ corresponds to $B\Diff_{D^3}(L_2)_0\to B\Diff_{\text{pt}}(L_2)_0$ under the commutative diagram from Remark \ref{rem: handy commutative diagram}. As a consequence of Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a point} and Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a disc} we have the following: \begin{corollary}\label{lem: surj on cohom of fiber} The map $B\Diff_{D^3}(L_2)_0\to B\Diff_{\text{pt}}(L_2)_0$ induced by the inclusion is surjective on rational cohomology. 
\end{corollary} \begin{proof} There is a commutative diagram as follows: \[\begin{tikzcd}[cramped,column sep=tiny] &&& {B\SO(2)\times B\SO(2)} \\ {B\Diff_{D^3}(L_2)_0} & {\Emb^+(D^3, L_2)\hq\Diff(L_2)_0} & {\Emb^+(D^3, L_2)\hq\Isom(L_2)_0} & {B\Isom(L_2)_0} \\ {B\Diff_{\text{pt}}(L_2)_0} & {\Emb(pt, L_2)\hq\Diff(L_2)_0} & {\Emb(\text{pt}, L_2)\hq\Isom(L_2)_0} & {B\Isom(L_2)_0.} \arrow["{\sim_\mathbb{Q}}", from=1-4, to=2-4] \arrow[from=2-1, to=3-1] \arrow["\simeq", from=2-1, to=2-2] \arrow["\simeq"', from=2-3, to=2-2] \arrow[from=2-2, to=3-2] \arrow[from=2-3, to=2-4] \arrow[from=2-3, to=3-3] \arrow[equal, from=2-4, to=3-4] \arrow["\simeq", from=3-1, to=3-2] \arrow["\simeq"', from=3-3, to=3-2] \arrow[from=3-3, to=3-4] \end{tikzcd}\] Applying rational cohomology to this we obtain by Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a point} and Theorem \ref{thm: rat cohom of diff(generic lens space) fixed a disc} the commutativity of the following triangle: \[\begin{tikzcd} {H^\ast(B\Diff_{D^3}(L)_0)} & {H^\ast(B\SO(2)\times B\SO(2))} \\ {H^\ast(B\Diff_{\text{pt}}(L)_0).} \arrow[two heads, from=1-2, to=1-1] \arrow[two heads, from=1-2, to=2-1] \arrow[from=2-1, to=1-1] \end{tikzcd}\] This then shows that \[ H^\ast(B\Diff_{\text{pt}}(L_2)_0)\to H^\ast(B\Diff_{D^3}(L_2)_0)\] is surjective. \end{proof} The following lemma will be a core part of our argument for Theorem \ref{thm: the rational cohommology of the main gorups identitiy component}. \begin{lemma}\label{lem: differential map on group cohomology} Let $L$ be a generic lens space and consider the map given by taking the differential at a point $\text{pt}$: \[T_{\text{pt}}\colon \Diff_{\text{pt}}(L) \to \GL^+_3(\mathbb{R}).\] On rational cohomology this map induces the map \[H^\ast(B GL^+_3(\mathbb{R}))\cong \mathbb{Q}[p_1] \to H^\ast(B\Diff_{\text{pt}}(L))\cong\mathbb{Q}[\mu, \eta]/(\mu\eta)\] that sends $p_1$ to $\mu^2+\eta^2$ with the usual notation for the cohomology of $\Diff_{\text{pt}}(L)$, and where we use $GL^+_3(\mathbb{R})\simeq \SO(3)$ and $p_1$ denotes the Pontryagin class. \end{lemma} \begin{proof} Let us use the same notations $\iota_{1}$ and $\iota_{1j}$ as in Lemma \ref{lem: descending differentials fixing points}. When thinking of $S^3$ as the unit sphere in $\mathbb{C}^2$, the image of $\iota_1$ consists of all the rotations of the first coordinate leaving the second coordinate fixed, the image of $\iota_{1j}$ consists of all the rotations of the second coordinate leaving the first coordinate fixed. This means that these maps factor through the quotient $\pi\colon S^3\to L$, meaning that we can get dashed maps, where $\pi^\ast\colon \Norm_{\Isom^+_{x}(S^3)}(C_m)_0\to\Diff_{\pi(x)}(L)_0$ denotes the map given by postcomposition with $\pi$. \begin{equation}\label{eq: iota pi business} \begin{tikzcd}[cramped] {\{x\}\hq \SO(2)} && {\{\pi(x)\}\hq\Diff_{\{\pi(x)\}}(L)_0} \\ {S^3\hq(\SO(2)\times\SO(2))} & {L\hq\Isom(L)_0} & {L\hq\Diff(L)_0} \arrow["B(\pi^\ast\circ\iota_{x})", dashed, from=1-1, to=1-3] \arrow[from=1-1, to=2-1] \arrow["\simeq"', from=1-3, to=2-3] \arrow["\sim_{\mathbb{Q}}", from=2-1, to=2-2] \arrow["\simeq", from=2-2, to=2-3] \end{tikzcd} \end{equation} Where $\pi\colon S^3 \to L$ is the quotient map, $x$ is either $1j$ or $1$, and in the diagram the left vertical map is induced by the inclusion \[\begin{tikzcd}[cramped, row sep=small, column sep=large] {\SO(2)} & \SO(2)\times\SO(2) \\ \{x\} \arrow[loop above, out=120, in=70, distance=15] & S^3. 
\arrow[loop above, out=120, in=70, distance=15] \arrow[hook, from=1-1, to=1-2, "\iota_{x}"] \arrow[hook, from=2-1, to=2-2] \end{tikzcd}\] Let us investigate what this diagram looks like after taking rational cohomology. First, we must consider what the left vertical map induces in rational cohomology, and for that we can use the commutative triangle \[\begin{tikzcd}[cramped] {B\SO(2)} & {S^3\hq(\SO(2)\times \SO(2))} \\ & {B(\SO(2)\times\SO(2)).} \arrow[from=1-1, to=1-2] \arrow["{B\iota_x}"', from=1-1, to=2-2] \arrow[from=1-2, to=2-2] \end{tikzcd}\] By Lemma \ref{lem: preliminary s.seq. comparison}, the vertical map in this triangle induces on cohomology the quotient map $\mathbb{Q}[e\otimes 1, 1\otimes e] \to \mathbb{Q}[e\otimes 1, 1\otimes e]/((e\otimes 1)(1\otimes e))$. Furthermore, since $\iota_{1}$ is a section of $\text{pr}_2$ but $\text{pr}_1\circ\iota_1$ is constant, and $\iota_{1j}$ is a section of $\text{pr}_1$ but $\text{pr}_2\circ\iota_{1j}$ is constant, we have that $B\iota_1(e\otimes 1) = 0$ and $B\iota_1(1\otimes e) = e$, while $B\iota_{1j}(e\otimes 1) = e$ and $B\iota_{1j}(1\otimes e) = 0$. Now let us apply cohomology to (\ref{eq: iota pi business}): recall that $\mu$ and $\eta$ are defined to be the preimages of $e\otimes 1$ and $1\otimes e$ respectively under the map induced on cohomology by $S^3\hq (\SO(2)\times \SO(2))\to L\hq \Isom(L)_0$. We have now described all the maps except the dashed map in (\ref{eq: iota pi business}), but commutativity allows us to conclude that $B(\pi^\ast\circ\iota_1)$ sends $\mu$ to $0$ and $\eta$ to $e$, and $B(\pi^\ast\circ\iota_{1j})$ sends $\mu$ to $e$ and $\eta$ to $0$. Furthermore, as we have seen in Lemma \ref{lem: descending differentials fixing points}, the following composition is still the same inclusion of $\SO(2)$: \[\SO(2)\overset{\iota_{x}}{\to} \Norm(C_m)_0\overset{\pi^\ast}{\to}\Diff_{\{\pi(x)\}}(L)_0\overset{T_{x}}{\to}\GL^+_3(\mathbb{R}).\] This means that \[B(T_x\circ\pi^\ast\circ\iota_x)\colon B\SO(2)\to B\GL^+_3(\mathbb{R})\] on cohomology induces the map that sends $p_1$ to $e^2$ by the theory of characteristic classes ($e^2 = p_1$ in $H^\ast(B\SO(2))\cong \mathbb{Q}[e]$). Now we are almost ready to conclude, but first we have to relate the two maps $T_{1}$ and $T_{1j}$. The subgroups $\Diff_{\{1\}}(L)_0, \Diff_{\{1j\}}(L)_0 < \Diff(L)_0$ are conjugate to each other and we wish to exploit this fact. Let us fix a diffeomorphism $\psi\in \Diff(L)_0$ such that $\psi(1) = 1j$; the existence of such a $\psi$ follows from Lemma \ref{local retractileness} (3) and (4) and the fact that $L\cong\Emb(pt, L)$ is $\Diff(L)$-locally retractile. Conjugating some $\varphi\in \Diff_{\{1\}}(L)_0$ with $\psi$ we get $\psi\circ\varphi\circ\psi^{-1}\in \Diff_{\{1j\}}(L)_0$; let us denote by $c_\psi$ the map sending $\varphi$ to $\psi\circ\varphi\circ\psi^{-1}$. When we think of $T_{x}$ as taking values in $\GL_3^+(\mathbb{R})$, we are identifying $T_{x}L$ with $\mathbb{R}^3$ via a chart; let us denote this chart by $\sigma_{x}$. We may assume that on a small neighborhood of $0$ the diffeomorphism $\sigma_{1j}\circ\psi\circ\sigma_{1}^{-1}$ is the identity, which means that $T_1 = T_{1j}\circ c_{\psi}$. It is a general fact that an inner automorphism of $G$ induces on $BG$ a map homotopic to the identity (however in general not based homotopic), see for example \cite[Chapter II Theorem 1.9]{adem13}, but it also follows in our case directly from $\Diff(L)_0$ being path connected. 
The inclusion $\Diff_{x}(L)_0\hookrightarrow\Diff(L)_0$ furthermore induces a surjection on rational cohomology, $\mathbb{Q}[\mu, \eta] \to \mathbb{Q}[\mu, \eta]/(\mu\eta)$, and this means that $(Bc_{\psi})^\ast\colon H^\ast(B\Diff_{1j}(L)_0)\to H^\ast(B\Diff_{1}(L)_0)$ is the identity. We will identify these cohomology groups via this comparison. To conclude, consider that \[B(T_1\circ\pi^\ast\circ\iota_1) = B(T_{1j}\circ\pi^\ast\circ\iota_{1j}).\] These send $p_1$ to $e^2$. Furthermore $B(\pi^\ast\circ\iota_1)$ sends $\mu$ to $0$ and $\eta$ to $e$, and $B(\pi^\ast\circ\iota_{1j})$ sends $\mu$ to $e$ and $\eta$ to $0$. This means that necessarily $T_1(p_1) = \eta^2 + a\mu^2$ and $T_{1j}(p_1) = b\eta^2 + \mu^2$, where $a, b\in\mathbb{Q}$ (we don't care about $\eta\mu$ because that is zero in $H^\ast(B\Diff_{x}(L)_0)$). By our identification of $H^\ast(B\Diff_{\{1\}}(L)_0)$ with $H^\ast(B\Diff_{\{1j\}}(L)_0)$ we have $\eta^2 + a\mu^2 = b\eta^2 + \mu^2$ and we conclude that $a = b = 1$. \end{proof} \begin{theorem}\label{thm: the rational cohommology of the main gorups identitiy component} Let $M\cong L_1\#L_2$ for two non-diffeomorphic generic lens spaces $L_1$ and $L_2$, fix a $3$-disc in each of $L_1$ and $L_2$ to denote the discs that are cut out when forming the connected sum, and let $S^2$ in $M$ denote the sphere along which we join $L_1\setminus\interior{D^3}$ and $L_2\setminus\interior{D^3}$. Denote the rational cohomology groups \[H^\ast(B\Diff(L_1\setminus\interior{D^3})_0) \cong \mathbb{Q}[\mu, \eta]/(\mu\eta) \;\text{ and }\; H^\ast(B\Diff(L_2\setminus\interior{D^3})_0) \cong \mathbb{Q}[\nu, \vartheta]/(\nu\vartheta).\] The map induced by the product of the restrictions \[H^\ast(B\Diff(L_2\setminus\interior{D^3})_0 \times B\Diff(L_1\setminus\interior{D^3})_0)\to H^\ast(B\Diff(M, S^2)_0)\] is surjective, and through it we obtain \[H^\ast(B\Diff(M, S^2)_0)\cong\mathbb{Q}[\mu, \eta,\nu, \vartheta]/(\mu\eta, \nu\vartheta, \mu^2+\eta^2 - \nu^2-\vartheta^2).\] \end{theorem} \begin{proof} Corollary \ref{lem: surj on cohom of fiber} implies that the comparison of the fibers in Figure \ref{fig:main ho fib seq comp} induces a surjection on rational cohomology. As the Leray-Serre spectral sequences induced by both of the rows in Figure \ref{fig:main ho fib seq comp} collapse on the $E_2$-page, the induced map on the total spaces \[f\colon H^\ast(B\Diff(L_2\setminus\interior{D^3})_0 \times B\Diff(L_1\setminus\interior{D^3})_0)\to H^\ast(B\Diff(M, S^2)_0)\] is surjective by naturality properties of the spectral sequences. This means that in order to figure out the cup product structure on $H^\ast(B\Diff(M, S^2)_0)$, we need to describe the kernel of this map. 
To compute this kernel we consider the square \[\begin{tikzcd} {\Diff(M, S^2)_0} & {\Diff(L_1\setminus\interior{D^3})_0\times\Diff(L_2\setminus\interior{D^3})_0} \\ {\Diff(S^2)_0} & {\Diff(S^2)_0\times\Diff(S^2)_0.} \arrow[from=1-1, to=1-2] \arrow["{\text{res}^M_{S^2}}", from=1-1, to=2-1] \arrow[from=1-2, to=2-2] \arrow["\Delta", from=2-1, to=2-2] \end{tikzcd}\] This induces the following maps on cohomology, where we will be interested in computing $g_1$ and $g_2$: \[\begin{tikzcd} {H^\ast(B\Diff(M, S^2)_0)} & {H^\ast(B\Diff(L_1\setminus\interior{D^3})_0)\otimes_\mathbb{Q}H^\ast(B\Diff(L_2\setminus\interior{D^3})_0)} \\ {H^\ast(B\Diff(S^2)_0)} & {H^\ast(B\Diff(S^2)_0)\otimes_\mathbb{Q}H^\ast(B\Diff(S^2)_0).} \arrow["f"', two heads, from=1-2, to=1-1] \arrow[from=2-1, to=1-1] \arrow["{g_1\otimes g_2}", from=2-2, to=1-2] \arrow["\smile"', from=2-2, to=2-1] \end{tikzcd}\] Note that this diagram shows $f\circ (g_1\otimes g_2) = (\text{res}^M_{S^2})^\ast\circ\smile$. In particular, \[f(\text{pr}_1^\ast(g_1(p_1))\smile\text{pr}_2^\ast(g_2(1))) = (\text{res}^M_{S^2})^\ast(p_1)= f(\text{pr}_1^\ast(g_1(1))\smile\text{pr}_2^\ast(g_2(p_1))),\] and therefore $(g_1\otimes g_2)(p_1\otimes 1) - (g_1\otimes g_2)(1\otimes p_1)\in \ker(f)$. Since $g_1$ and $g_2$ are symmetric we will continue with the notation $g\colon H^\ast(B\Diff(S^2)_0)\to H^\ast(B\Diff(L\setminus \interior{D^3})_0)$. To understand this map we use the diffeomorphism group \[\Diff^{\text{SO}}(L, D^3) = \{\varphi\in \Diff(L, D^3)\,|\, \left.\varphi\right|_{D^3}\in \SO(3)\subseteq \Diff(D^3)\}\] consisting of those diffeomorphisms that preserve the 3-disc set-wise and act on it by a rotation. $\Diff^{\SO}(L, D^3)\simeq \Diff(L, D^3)$, as $\SO(3)\simeq \Diff(D^3)$. This fits into a diagram of the following form: \[\begin{tikzcd} {\Diff(L\setminus \interior{D^3})_0} & {\Diff^{\text{SO}}(L, D^3)_0} & {\Diff_{\text{pt}}(L)_0} \\ {\Diff(S^2)_0} & {\SO(3)} & {\GL^+_3(\mathbb{R}).} \arrow[from=1-1, to=2-1] \arrow["\simeq"', from=1-2, to=1-1] \arrow["\simeq", from=1-2, to=1-3] \arrow[from=1-2, to=2-2] \arrow["{T_{\text{pt}}}"', from=1-3, to=2-3] \arrow["\simeq"', from=2-2, to=2-1] \arrow["\simeq", from=2-2, to=2-3] \end{tikzcd}\] Here the top left horizontal map is an equivalence, because it is a composite of $\Diff^{\text{SO}}(L, D^3)_0\simeq \Diff(L, D^3)_0\simeq\Diff(L\setminus \interior{D^3})_0$. In Lemma \ref{lem: differential map on group cohomology} we computed the map induced on group cohomology by the map taking differentials at $\text{pt}$, the mid-point of the $D^3$ we cut out to get $L\setminus \interior{D^3}$. It sends $p_1$ to $\mu^2+\eta^2$ (for $L= L_1$). So getting back to computing the kernel of $f$, we can now see that $(\mu^2+\eta^2-\nu^2-\vartheta^2)\in \ker(f)$. We know the dimensions of $H^k(B\Diff(M, S^2)_0)$ for all $k$, and comparing dimensions we can see that this must be the whole kernel. Let us give a short argument for why the dimensions should agree. The dimensions in Theorem \ref{thm: main result} come from a spectral sequence that collapses on the $E_2$-page, with fiber $B\Diff_{L_2\setminus\interior{D^3}}(M)_0$ and base $B\Diff(L_2\setminus\interior{D^3})_0$. This means that the dimension of $H^k(B\Diff(M, L_2\setminus\interior{D^3})_0)$ is the same as the dimension of $H^k(B\Diff_{L_2\setminus\interior{D^3}}(M)_0\times B\Diff(L_2\setminus\interior{D^3})_0)$ as a $\mathbb{Q}$ vector space. 
So we wish to see that the dimension in each degree is the same for the graded $\mathbb{Q}$ vector spaces $\mathbb{Q}[\mu, \eta, \nu, \vartheta]/(\mu\eta, \nu\vartheta, \mu^2+\eta^2)$ and $\mathbb{Q}[\mu, \eta, \nu, \vartheta]/(\mu\eta, \nu\vartheta, \mu^2+\eta^2-\nu^2-\vartheta^2)$. Let us fix the lexicographic monomial order with $\mu> \eta> \nu> \vartheta$ and compute Gröbner bases with respect to this monomial order; this shows that the leading term ideals of the two ideals $I_1 = (\mu\eta, \nu\vartheta, \mu^2+\eta^2)$ and $I_2 = (\mu\eta, \nu\vartheta, \mu^2+\eta^2-\nu^2-\vartheta^2)$ agree, namely $\text{LT}(I_1) = \text{LT}(I_2) = (\mu\eta, \nu\vartheta, \eta^3, \mu^2)$. It is a fact of algebra, see e.g. \cite[Proposition 4, Chapter 5, Section 3]{CLO1}, that as $\mathbb{Q}$ vector spaces $\mathbb{Q}[\mu, \eta, \nu, \vartheta]/I \cong \text{Span}(x^\alpha \,|\, x^\alpha\not \in \text{LT}(I))$ for any ideal $I\subseteq \mathbb{Q}[\mu, \eta, \nu, \vartheta]$. \end{proof} \section{The whole diffeomorphism group} In our section on strategy we stated that $H^\ast(G)\cong H^\ast(G_0)^{\pi_0 G}$ but we are yet to describe the action in detail. It is obtained as follows: take some element $[g]\in \pi_0 G$; conjugation by this element induces a self map $c_g$ of $H^\ast(BG_0)$. Note that this construction is natural in the sense that given a group homomorphism $\varphi$, $H^\ast(B\varphi)\circ c_{\varphi(g)} = c_g\circ H^\ast(B\varphi)$ for all $g$ in the domain of $\varphi$. In the following statement we use the notation from Theorem \ref{thm: the rational cohommology of the main gorups identitiy component}. \begin{proposition} The action of $\pi_0\Diff(M)\cong C_2\times C_2$ on \[H^\ast(B\Diff(M)_0)\cong \mathbb{Q}[\mu, \eta,\nu, \vartheta]/(\mu\eta, \nu\vartheta, \mu^2+\eta^2 - \nu^2-\vartheta^2)\] is generated by $c_{(-1, 1)}\colon \mu\mapsto -\mu$, $\eta\mapsto -\eta$ (leaving the other generators fixed), and $c_{(1, -1)}\colon\nu\mapsto -\nu$, $\vartheta \mapsto -\vartheta$. \end{proposition} \begin{proof} From Theorem \ref{thm: mapping class group} we have the description $\pi_0 \Diff(M, S^2) \cong \pi_0 \Diff(L_1\setminus\interior{D^3}) \times \pi_0 \Diff(L_2\setminus\interior{D^3})$. Since the product of the restrictions $\Diff(M, S^2)_0 \to \Diff(L_1\setminus\interior{D^3})_0\times \Diff(L_2\setminus\interior{D^3})_0$ induces a surjection on cohomology, using symmetry, we can figure out this action by investigating the action of $\pi_0\Diff(L\setminus \interior{D^3})$ on $H^\ast(\Diff(L\setminus \interior{D^3})_0)$. But this, through weak equivalences, reduces to the action of $\pi_0\Diff_{\text{pt}}(L)$ on $H^\ast(\Diff_{\text{pt}}(L)_0)$. Now from the calculation of the cohomology of $B\Diff_{\text{pt}}(L)_0$ we see that in fact through another surjection it is enough to figure out the action of $\pi_0\Isom(L)$ on $H^\ast(B\Isom(L)_0)$. Without loss of generality we fix $\text{pt}\in L$ to be represented by $(1+0j)\in S^3$. From the calculation of the isometries, the element $F(j, j)$ represents the non-trivial element of $\pi_0\Isom(L)\cong C_2$. Computation shows $F(j, j)\circ F(w_1, w_2) \circ F(j, j)^{-1} = F(\overline{w_1}, \overline{w_2})$. This means that the action is generated by $\Isom(L)_0\cong S^1\times S^1 \to S^1\times S^1$, $(w_1, w_2)\mapsto (\overline{w_1}, \overline{w_2})$. The conjugation map $S^1\to S^1$ sends the fundamental class of $S^1$ to its negative on cohomology; by naturality of spectral sequences it therefore also sends $e$ to $-e$ in the cohomology of $B S^1\cong \mathbb{CP}^\infty$.
\end{proof}
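\begin{remark}
The leading term computation in the proof of Theorem \ref{thm: the rational cohommology of the main gorups identitiy component} is routine to verify with a computer algebra system. For instance, the following short Python sketch (a sanity check of ours, assuming a reasonably recent version of sympy) recomputes the reduced Gröbner bases of the ideals $I_1$ and $I_2$ considered there in the lexicographic order with $\mu>\eta>\nu>\vartheta$, and confirms that both leading term ideals are generated by $\mu^2$, $\mu\eta$, $\eta^3$ and $\nu\vartheta$.
\begin{verbatim}
from sympy import symbols, groebner, LM

mu, eta, nu, theta = symbols('mu eta nu theta')
gens = (mu, eta, nu, theta)

I1 = [mu*eta, nu*theta, mu**2 + eta**2]
I2 = [mu*eta, nu*theta, mu**2 + eta**2 - nu**2 - theta**2]

# reduced Groebner bases with respect to lex(mu > eta > nu > theta)
G1 = groebner(I1, *gens, order='lex')
G2 = groebner(I2, *gens, order='lex')

# the leading monomials of a reduced basis generate the leading term ideal
lt1 = {LM(g, *gens) for g in G1.exprs}
lt2 = {LM(g, *gens) for g in G2.exprs}

assert lt1 == lt2 == {mu**2, mu*eta, eta**3, nu*theta}
\end{verbatim}
\end{remark}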
\begin{lemma}\label{lem: fixed points of a quotient} Let $G$ be a group acting on a ring $R$ such that the order of $G$ is invertible in $R$. If $I\subseteq R$ is an ideal such that $G.I\subseteq I$, i.e.\ $I$ is preserved by the action of $G$, then \[(R/I)^G\cong R^G/I^G\] where $I^G=R^G\cap I$ is an ideal of the subring $R^G\subseteq R$. \end{lemma} \begin{proof} Since $G.I\subseteq I$, the $G$ action on $R$ induces a $G$ action on $R/I$. Let us denote $I^G =I\cap R^G \subseteq R^G$; this is an ideal, and we will see that $(R/I)^G \cong R^G/I^G$. The map $R^G/I^G\to R/I$, $[r]_{I^G}\mapsto [r]_{I}$ is well-defined, and injective since for $r, r^\prime \in R^G$, $[r]_{I^G} = [r^\prime]_{I^G}$ if and only if $r-r^\prime\in I^G = R^G\cap I$, but $r-r^\prime\in R^G$ comes for free so this is the case if and only if $[r]_I = [r^\prime]_I$. The map $[r]_{I^G}\mapsto [r]_{I}$ lands in $(R/I)^G\subseteq R/I$; we will show that it hits all the elements of $(R/I)^G$. Consider some $[r]\in R/I$; we wish to see that if $[g.r] = [r]$ for all $g\in G$, then there is some $r^\prime \in R^G$ such that $[r] = [r^\prime]$. Set \[r^\prime = \frac{1}{|G|}\sum_{g\in G}g.r,\] so that $[r^\prime] = \frac{1}{|G|}\cdot|G|\cdot [r] = [r]$. Acting by any $h\in G$ merely permutes the summands, as $g\mapsto hg$ is a bijection of $G$, thus we also have $r^\prime\in R^G$. \end{proof} From the theorem of Hatcher, Theorem \ref{theorem of Hatcher}, $H^\ast(B\Diff(M, S))\cong H^\ast(B\Diff(M))$ in the case where $M$ is the connected sum of two generic lens spaces. In this light we state our main theorem with notations for $H^\ast(B\Diff(L_1\setminus\interior{D^3})_0)$ and $H^\ast(B\Diff(L_2\setminus\interior{D^3})_0)$ consistent with those in Theorem \ref{thm: the rational cohommology of the main gorups identitiy component}. \begin{theorem}\label{main result} Let $M\cong L_1\#L_2$ for two non-diffeomorphic generic lens spaces $L_1$ and $L_2$, fix $D^3$ in each of $L_1$ and $L_2$ to denote the discs that are cut out when forming the connected sum, and let $S^2$ in $M$ denote the sphere along which we join $L_1\setminus\interior{D^3}$ and $L_2\setminus\interior{D^3}$. The map induced by the product of the restrictions \[H^\ast(B\Diff(L_2\setminus\interior{D^3})_0 \times B\Diff(L_1\setminus\interior{D^3})_0)\to H^\ast(B\Diff(M, S^2)_0)\] is surjective, and through it we obtain \[H^\ast(B\Diff(M, S^2)_0)\cong\mathbb{Q}[\mu, \eta,\nu, \vartheta]/(\mu\eta, \nu\vartheta, \mu^2+\eta^2 - \nu^2-\vartheta^2).\] Furthermore the composition $B\Diff(M, S)_0\to B\Diff(M, S)\to B\Diff(M)$ induces an inclusion of the cohomology of $B\Diff(M)$ as the subring \[H^\ast(B\Diff(M))\cong \mathbb{Q}[\mu^2, \eta^2, \nu^2, \vartheta^2] / (\mu^2\eta^2, \nu^2\vartheta^2, \mu^2+\eta^2-\nu^2-\vartheta^2).\] \end{theorem} \begin{proof} Let us denote $R = \mathbb{Q}[\mu, \eta, \nu, \vartheta]$, $G = C_2\times C_2$, and $I = (\mu\eta, \nu\vartheta, \mu^2+\eta^2 - \nu^2-\vartheta^2)$; these satisfy the assumptions of Lemma \ref{lem: fixed points of a quotient}. Calculation allows us to see that \[R^G \cong \mathbb{Q}[\mu^2, \mu\eta, \eta^2, \nu^2, \nu\vartheta, \vartheta^2].\] This is because the fixed points of the action are polynomials whose terms only contain monomials $\mu^a\eta^b\nu^c\vartheta^d$ such that both $a+b$ and $c+d$ are even. All the generators of $I$ lie in $R^G$, and therefore, averaging a representation of an element of $I^G$ over $G$, we see that $I^G$ is generated by these same elements as an ideal of $R^G$. \end{proof} \bibliographystyle{amsalpha} \bibliography{main} \end{document}
2412.11283v1
http://arxiv.org/abs/2412.11283v1
Cyclic polytopes through the lens of iterated integrals
\documentclass{ourlematema} \usepackage{graphicx} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{hyperref} \usepackage{tikz-cd} \usepackage{cleveref} \usepackage{enumerate} \usepackage{xcolor} \usepackage{shuffle} \newtheorem{theorem}{Theorem}[section] \newtheorem*{theorem*}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{lmm}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{definitionthm}[theorem]{Theorem and Definition} \newtheorem{question}[theorem]{Question} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newtheorem{myproblem}[theorem]{Problem} \Crefname{thm}{Theorem}{Theorems} \newcommand{\word}[1]{\texttt{#1}} \definecolor{darkgreen}{RGB}{0,150,20} \newcommand{\annotationF}[1]{\textcolor{darkgreen}{#1}} \newcommand{\annotationR}[1]{\textcolor{purple}{#1}} \newcommand{\mb}{\mathbb} \newcommand{\mc}{\mathcal} \newcommand{\R}{{\mb R}} \newcommand{\convperms}{C} \newcommand{\spann}{\operatorname{span}} \newcommand{\ourfeatures}{\mathsf{Inv}} \newcommand{\pwlinear}{\mathsf{PL}} \newcommand{\signedvolume}{\mathsf{vol}} \newcommand{\antipode}{\mathcal{A}} \newcommand{\timerevinv}{\mathsf{TimeRevInv}} \newcommand{\loopclosureinv}{\mathsf{LoopClosureInv}} \newcommand{\convhull}{\operatorname{conv}} \newcommand{\homomor}{H} \newcommand{\letteri}{\texttt{i}} \newcommand{\emptyword}{\texttt{e}} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\sgn}{sgn} \title{Cyclic polytopes through the lens of iterated integrals} \author{Felix Lotter} \address{Max Planck Institute for Mathematics in the Sciences, Leipzig\\ e-mail: \texttt{[email protected]}} \author{Rosa Preiss} \address{Technical University Berlin\\ e-mail: \texttt{[email protected]}} \date{10/15/24} \begin{document} \maketitle \begin{abstract} The volume of a cyclic polytope can be obtained by forming an iterated integral, known as the path signature, along a suitable piecewise linear path running through its edges. Different choices of such a path are related by the action of a subgroup of the combinatorial automorphisms of the polytope. Motivated by this observation, we look for other polynomials in the vertices of a cyclic polytope that arise as path signatures and are invariant under the subgroup action. We prove that there are infinitely many such invariants which are algebraically independent in the shuffle algebra. \end{abstract} \section{Introduction} \paragraph{Iterated integrals and piecewise linear paths} A \emph{path}, for the purpose of this paper, is a continuous map $X:[0,1]\to \mb R^d$ such that the coordinate functions $X_i$ are piecewise continuously differentiable. Given such a path $X$, its \emph{(iterated integral) signature} is the linear form \begin{align}\label{def:sig} \begin{split} S(X): \mb R\langle \texttt{1},\dots, \texttt{d}\rangle &\to \mb R \\ \letteri_1\cdots \letteri_k &\mapsto \int_{\Delta_k} dX_{i_1}(t_1)\dots dX_{i_k}(t_k) \end{split} \end{align} where $k$ varies over all positive integers and $\Delta_k$ denotes the simplex $0\leq t_1 \leq \dots \leq t_k \leq 1$. Here, $\mb R\langle \texttt{1},\dots, \texttt{d}\rangle$ is the free associative algebra over the letters (that is, formal symbols) $\mathtt{1},\dots,\mathtt{d}$. The words $\letteri_1\dots\letteri_k$ form a basis of this space, such that \eqref{def:sig} does indeed define a linear form. 
For example, $S(X)(\texttt{1} \texttt{1} + \texttt{1} \texttt{2}) = \int_0^1 \int_0^{t_2} X_1'(t_1) X_1'(t_2) + X_1'(t_1) X_2'(t_2) dt_1 dt_2$. The signature of a path determines the path up to translation, reparametrization and tree-like equivalence \cite{BGLY16,Chen1958}.\par In this paper, we are interested in \emph{piecewise linear paths}. Such a path is uniquely determined by its control points $x_1,\dots,x_n \in \mb R^d$, that is, the (ordered) set of start and end points of all of its linear segments. The signature of such a piecewise linear path can be described explicitly in terms of the increments $a_i := x_i - x_{i-1}$. In fact, it defines a map \begin{equation}\label{eq:sign map} \homomor^d_n: \mb R\langle \texttt 1,\dots,\texttt d \rangle {\to} \mb R[x_1,\dots,x_n], \end{equation} where $x_i=(x_{i1},\dots,x_{id})$, see \Cref{def:Hnd_via_sig} and \Cref{eq:Hnd_recursive1} for a recursive formula. If the left hand side is viewed as a commutative algebra $\mb R\langle \texttt 1,\dots,\texttt d \rangle_\shuffle$ via the \textit{shuffle product} $\shuffle$ (see \eqref{def:shuffle}), this map becomes a homomorphism of graded algebras. Its image is a subalgebra of $\mb R[x_1,\dots,x_n]$ which we will call \emph{the ring of signature polynomials in $d \times n$ variables} in the following, denoted by $\mc S^d[x_1,\dots,x_n]$.\par The polynomials in $\mc S^d[x_1,\dots,x_n]$ inherit some nice properties from their integral representation. For example, they are translation invariant, \begin{equation*} p(x_1,\dots,x_n)=p(x_1+y,\dots,x_n+y) \end{equation*} for $y \in \mb R^d$. Moreover, if $p \in \mc S^d[x_1,\dots,x_n]$, then for $1 < i < n$ the polynomial \begin{equation}\label{eq:face map} p(x_1,\dots,x_{i-1},\lambda x_{i-1} + (1-\lambda) x_{i+1},x_{i+1},\dots,x_n) \end{equation} is independent of $\lambda \in [0,1]$, due to reparame\-trization invariance of iterated integrals. In particular, \eqref{eq:face map} is a polynomial in variables $x_j, j\not=i$. \par It follows that $(\mc S^d[x_1,\dots,x_n])_{n \in \mb N}$ naturally forms a semi-simplicial set where the $i$-th face map $d^n_i: \mc S^d[x_1,\dots,x_n] \to \mc S^d[x_1,\dots,x_{n-1}]$ first replaces $x_i$ by $x_{i+1}$ or $x_{i-1}$ (which defines a unique polynomial by the above) and then replaces $x_j$ by $x_{j-1}$ for $j>i$.\par Now, for fixed $n$ and a given subgroup $G$ of $S_n$ one can ask the following question: Which signature polynomials in $d\times n$ variables are invariant under the action of $G$ on $x_1,\dots,x_n$ by permutation? More precisely, we would like to determine the pullback $\mathsf{Inv}^d_n(G) \subseteq \mb R\langle \texttt 1,\dots,\texttt d \rangle$ in \[\begin{tikzcd} \mathsf{Inv}^d_n(G) \rar \dar & \mb R[x_1,\dots,x_n]^G \dar \\ \mb R\langle \texttt 1,\dots,\texttt d \rangle \rar["H^d_n"] & \mb R[x_1,\dots,x_n] \end{tikzcd}\] where $\mb R[x_1,\dots,x_n]^G$ is the subring of $G$-invariants in $\mb R[x_1,\dots,x_n]$. \paragraph{Towards positivity} In this paper, we address this question for a specific choice of $G$. Namely, given $d$ and $n\geq d+1$, $S_n$ acts naturally by permutations of columns on the set of $(d+1) \times n$-matrices \begin{equation}\label{eq:matr} \begin{pmatrix} 1 & \dots & 1\\ x_1 & \dots & x_n \end{pmatrix} \end{equation} and we want to choose $G$ as the stabiliser $\convperms^d_n$ of the subset of matrices whose maximal minors are positive. 
Following the terminology of \cite[Section 2]{arkani2014amplituhedron}, we call these matrices \textit{positive}.\par Our motivation is that for each such positive matrix, the volume of the polytope $\convhull(x_1,\dots,x_n)$ can be obtained from the signature of the piecewise linear path $X$ with control points $x_1 \to \dots \to x_n$ as the \textit{signed volume} $$\langle S(X), \mathsf{vol}_d \rangle = \int_{\Delta_d} \det \begin{pmatrix} X'(t_1) & \dots & X'(t_d) \end{pmatrix} dt_1 \ldots dt_d$$ where \begin{equation}\label{eq:signed vol} \signedvolume_d := \sum_{\sigma \in S_d} \sgn(\sigma) \sigma(\texttt 1)\dots \sigma(\texttt d) \in \mb R\langle \texttt{1},\dots, \texttt{d}\rangle, \end{equation} see \cite[Theorem 3.4]{amendolaleemeroni23} and \cite[Section~3.3]{diehl2019invariants}. As the set of $x_1, \dots, x_n$ with \eqref{eq:matr} positive is Zariski-dense in the set of all $x_1, \dots, x_n$, it follows that $\signedvolume_d \in \mathsf{Inv}^d_n(\convperms^d_n)$ for all $n\geq d+1$ (see \Cref{prop:eq rel on pwl}). Thus, we expect $\mathsf{Inv}^d_n(\convperms^d_n)$ to describe geometric features of polytopes $\convhull(x_1,\dots,x_n)$ for positive matrices \eqref{eq:matr}. Such polytopes are known as cyclic $d$-polytopes admitting a (not necessarily unique) \textit{canonical labeling} $x_1,\dots,x_n$. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{cycD5v2.pdf} \caption{A cyclic $2$-polytope with $5$ vertices, spanned by $5$ different piecewise linear paths, related by cyclic permutations of their control points. The volume of the polygon agrees with the signed volume of each of the paths.}\label{fig:2polytope} \end{figure} \par Of particular interest is the intersection $$\mathsf{Inv}^d := \bigcap_{n\geq d+1} \mathsf{Inv}^d_n(\convperms^d_n)$$ which we call the \textit{ring of volume invariants}. This terminology will be motivated in \Cref{prop:eq rel on pwl} where we show that if the signed volume of a piecewise linear path in general position is invariant under a permutation of the control points then its signature at any $w \in \ourfeatures^d$ will be as well. \par Note that $\ourfeatures^d$ forms a subalgebra of $\mb R\langle \texttt{1},\dots, \texttt{d}\rangle_\shuffle$. One might view $H_n^d(\ourfeatures^d)$ as functions on the set of canonically labeled cyclic $d$-polytopes with $n$ vertices. Note that by definition of $\ourfeatures^d$, $\homomor^d_{\bullet}(\ourfeatures^d)$ inherits the structure of a semi-simplicial set from $\mc S^d[x_1,\dots,x_n]$. The face maps can then be interpreted as restrictions to subpolytopes.\par As noted above, we certainly have $\signedvolume_d \in \mathsf{Inv}^d$. The main result of this paper is the following theorem in \Cref{sec:ring_of_invariants}, showing that there is an abundance of volume invariants: \begin{theorem*}[{\ref{thm:abundant invariants}}] For any $d$, $\ourfeatures^d$ contains infinitely many algebraically independent elements (with respect to the shuffle product) and is thus in particular infinitely generated as a (shuffle) subalgebra of $\R\langle\word{1},\dots,\word{d}\rangle$. \end{theorem*} \paragraph{Outline} In Section \ref{sec:Hnd}, we precisely define and thoroughly discuss the ring homomorphism $H_n^d$. In particular, we describe how to obtain a useful recursive formula in \eqref{eq:Hnd_recursive1}. Section \ref{sec:Cnd} is devoted to introducing the subgroup $C_n^d\subset S_n$ as the stabilizer of positive matrices, i.e.\ matrices with positive maximal minors, under column permutation. 
We then show that $C_n^d$ is exactly the subgroup of $S_n$ under which the signed volume is invariant for piecewise linear paths in general position. Finally, in Section \ref{sec:ring_of_invariants}, we prove our main results. In Propositions \ref{prop:inv geq3 odd} and \ref{prop:inv_loopclosure_timerev}, we fully characterize the invariant rings $\ourfeatures_{\geq d+3}^d$ for $d+3$ and more points in $\mb R^d$. In Theorem \ref{thm:abundant invariants}, we show that the rings of volume invariants $\ourfeatures^d$ are `very large', in the sense that they are infinitely generated, and even contain infinitely many algebraically independent elements. \section{Piecewise linear paths and signatures}\label{sec:Hnd} Our goal in this section is to define and explain the homomorphism $H_n^d$.\par \begin{definition} A piecewise linear path with control points $x_1,\dots,x_n \in \mb R^d$ is a continuous map $X: [0,1] \to \mb R^d$ such that there are $0=t_1\leq t_2 \leq \dots \leq t_n=1$ with the property that $X(t_i)=x_i$ for all $i$ and $X$ is an affine map on all intervals $[t_i,t_{i+1}]$. We write $\{x_1\to \dots \to x_n\}$ to denote such a path independent of the precise time parametrization, and we write $\pwlinear^d_n$ for the set of all piecewise linear paths through $\mb R^d$ with $n$ control points. \end{definition} In particular, a piecewise linear path with two control points is a linear path. Up to reparametrization, any piecewise linear path can be viewed as a \textit{concatenation} of linear paths. \begin{definition} Given paths $X:[0,1]\to \mb R^d$ and $Y:[0,1] \to \mb R^d$, their concatenation is the path $X \sqcup Y:[0,1] \to \mb R^d$ which is defined as $X(2t)$ for $t\in[0,\frac 1 2]$ and as $Y(2t - 1)$ for $t\in [\frac 1 2, 1]$. \end{definition} An important property of the signature is its compatibility with concatenation, in the following sense: \begin{proposition}[Chen's identity, Theorem~3.1 of \cite{bib:Che1954}]\label{prop:chen} Let $X$ and $Y$ be paths in $\mb R^d$. Then $$S(X \sqcup Y) = S(X) \bullet S(Y).$$ \end{proposition} \noindent Here $S(X) \bullet S(Y)$ is the composition \[\begin{tikzcd}[nodes={inner sep=10pt}] \mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle \rar{\Delta} & \mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle \otimes \mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle \rar{S(X) \otimes S(Y)} & \mb R \end{tikzcd}\] where $\Delta$ denotes the coproduct of the Hopf algebra $\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle$. More explicitly, $S(X)\bullet S(Y)$ maps a word $\letteri_1\cdots \letteri_k$ to $$\sum_{j=0}^k \langle S(X), \letteri_1\cdots \letteri_j \rangle \langle S(Y), \letteri_{j+1} \cdots \letteri_k \rangle.$$ We are now ready to define $H_n^d$. \begin{definitionthm}\label{def:Hnd_via_sig} For any $w\in\R\langle \texttt{1},\dots,\texttt{d}\rangle$ the function \begin{align*} f_n(w): \R^{d \times n} &\to\R \\ (x_1,\ldots,x_n) &\mapsto \langle S(\{x_1\to\dots\to x_n\}),w\rangle \end{align*} is given by a polynomial in $x_1, \ldots, x_n$. We define \begin{align*} H_n^d: \mb R\langle\texttt{1},\dots,\texttt{d}\rangle &\to \mb R[x_1,\dots,x_n] \\ w &\mapsto f_n(w) \end{align*} \end{definitionthm} \begin{proof} First note that $f_n(w)$ is well-defined by reparametrization invariance of the signature $S$. We will now proceed by induction, starting with $n=2$.\par The signature of a linear path $X$ with control points $x_1,x_2$ is easily calculated. 
Indeed, note that the integrals \eqref{def:sig} only depend on the vector $a:=x_2-x_1$. The integrand of $\langle S(X), \letteri_1\cdots \letteri_k\rangle$ is just the product $a_{i_1}\ldots a_{i_k}$ and thus \begin{equation*}\label{eq:H2} f_2({\letteri_1\cdots \letteri_k}) = \frac{1}{k!} a_{i_1}\ldots a_{i_k} \end{equation*} as the simplex $\Delta_k$ has volume $\frac{1}{k!}$, proving the base case. Now by \Cref{prop:chen} we have \begin{align*} f_n(\letteri_1\cdots \letteri_k)(x_1,\dots,x_n) = \\\sum_{j=0}^k f_{l}(\letteri_1\cdots \letteri_j)(x_1,\dots,x_{l}) \cdot f_{n-l+1}(\letteri_{j+1} \cdots \letteri_k)(x_{l},\dots,x_{n}) \end{align*} for any $n$ and all $l$. Choosing $l=2$, we conclude by induction. \end{proof} Note that the proof yields the recursive formula \begin{align}\label{eq:Hnd_recursive1} \begin{split} H_n^d(\letteri_1\cdots \letteri_k)(x_1,\dots,x_n)=\\ \sum_{j=0}^k\frac{1}{j!}H_{n-1}^d(\letteri_{j+1}\cdots \letteri_k)(x_2,\dots,x_n)\prod_{m=1}^j (x_{2,i_m}-x_{1,i_m}). \end{split} \end{align} for $H_n^d(\letteri_1\cdots \letteri_k)(x_1,\dots,x_n)$. \begin{example} We have \begin{align*} &H_3^3(\texttt{123})(x_1,x_2,x_3) = \\ &H_2^3(\texttt{123})(x_2,x_3)H_2^3(\emptyword)(x_1,x_2) +H_2^3(\texttt{23})(x_2,x_3)H_2^3(\texttt{1})(x_1,x_2)\\ &\hphantom{=}+H_2^3(\texttt{3})(x_2,x_3)H_2^3(\texttt{12})(x_1,x_2) +H_2^3(\emptyword)(x_2,x_3)H_2^3(\texttt{123})(x_1,x_2)\\ &=\frac{1}{3!}a_{2,1}a_{2,2}a_{2,3} +\frac{1}{2!}a_{2,2}a_{2,3}\cdot a_{1,1} +a_{2,3}\cdot\frac{1}{2!}a_{1,1}a_{1,2} +\frac{1}{3!}a_{1,1}a_{1,2}a_{1,3} \end{align*} where $\emptyword$ is the unit of $\R\langle\texttt{1},\dots,\texttt{d}\rangle$, the so-called empty word, which is mapped by all $H_n^d$ to the unit constant polynomial. \end{example} \annotationR{} \begin{remark}In general, $H^d_n$ can be shown to factor as \[\begin{tikzcd} \mb R\langle\texttt{1},\dots,\texttt{d}\rangle \dar \drar["H_n^d"] & \\ \mathrm{Qsym}(\mb R[a_{1},\dots,a_{n-1}]) \rar & \mb R[x_{1}, \dots,x_{n}] \end{tikzcd}\] where $\mathrm{Qsym}(\mb R[a_{1},\dots,a_{n-1}])$ denotes the ring of \textit{quasi-symmetric functions of level $d$} in the vectors $a_{1},\dots,a_{n-1}$, cf. \cite{diehleftapia2020acta}, and the bottom map sends $a_{i}$ to $x_{i+1} - x_{i}$. We refer to \cite{AFS18} for further details about the signature of a piecewise linear path. \end{remark} Let us now explain how to turn $\homomor_n^d$ into an algebra homomorphism. We do this by equipping the $\mb R$-vector space $\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle$ with the commutative \textit{shuffle product} $\shuffle$. This can be defined on words in the following way: \begin{equation}\label{def:shuffle} \letteri_1\cdots \letteri_l \shuffle \letteri_{l+1}\cdots \letteri_k := \sum_{\sigma \in G} \letteri_{\sigma^{-1}(1)} \cdots \letteri_{\sigma^{-1}(k)} \end{equation} where $G$ is the set of $\sigma \in S_k$ with $\sigma^{-1}(1) < \dots < \sigma^{-1}(l)$ and $\sigma^{-1}(l+1) < \dots < \sigma^{-1}(k)$. In other words, $\letteri_1\cdots \letteri_l \shuffle \letteri_{l+1}\cdots \letteri_k$ is the sum of all ways of interleaving the two words $\letteri_1\cdots \letteri_l$ and $\letteri_{l+1}\cdots \letteri_k$. \par The commutative algebra $(\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle, \shuffle)$ is well-understood. As an algebra, it is (infinitely) freely generated by the \textit{Lyndon words}. For details we refer to \cite{reutenauer1993free}. 
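\begin{remark}
The recursion \eqref{eq:Hnd_recursive1} is easy to turn into a small program. The following minimal Python sketch (our own illustration; letters are encoded as $0$-based coordinate indices and all function names are ad hoc) evaluates $\langle S(\{x_1\to\dots\to x_n\}), w\rangle$ for numeric control points and checks it against the explicit expression for $H_3^3(\texttt{123})$ computed above, against invariance under inserting an extra control point on a segment, and against multiplicativity with respect to the shuffle product, which is recalled below.
\begin{verbatim}
from math import factorial

def sig_word(points, word):
    # <S({x_1 -> ... -> x_n}), i_1...i_k> via the recursion for H_n^d:
    # split the word after position j, pick up the product of the first
    # increment's coordinates divided by j!, and recurse on x_2, ..., x_n.
    if not word:
        return 1.0
    a = [points[1][i] - points[0][i] for i in range(len(points[0]))]
    if len(points) == 2:
        prod = 1.0
        for i in word:
            prod *= a[i]
        return prod / factorial(len(word))
    total, prefix = 0.0, 1.0
    for j in range(len(word) + 1):
        total += prefix / factorial(j) * sig_word(points[1:], word[j:])
        if j < len(word):
            prefix *= a[word[j]]
    return total

def shuffle(u, v):
    # all interleavings of the words u and v (with multiplicity)
    if not u:
        return [v]
    if not v:
        return [u]
    return ([(u[0],) + w for w in shuffle(u[1:], v)]
            + [(v[0],) + w for w in shuffle(u, v[1:])])

pts = [(0.0, 0.0, 0.0), (1.0, 2.0, -1.0), (3.0, 1.0, 4.0)]
a1 = [pts[1][i] - pts[0][i] for i in range(3)]
a2 = [pts[2][i] - pts[1][i] for i in range(3)]
# the explicit formula for H_3^3(123) from the example above
expected = (a2[0]*a2[1]*a2[2]/6 + a1[0]*a2[1]*a2[2]/2
            + a1[0]*a1[1]*a2[2]/2 + a1[0]*a1[1]*a1[2]/6)
assert abs(sig_word(pts, (0, 1, 2)) - expected) < 1e-12
# inserting a control point on a segment does not change the signature
refined = [pts[0], (0.5, 1.0, -0.5), pts[1], pts[2]]
assert abs(sig_word(refined, (0, 1, 2)) - expected) < 1e-12
# the signature takes shuffles of words to products of values
lhs = sum(sig_word(pts, w) for w in shuffle((0,), (1, 2)))
assert abs(lhs - sig_word(pts, (0,)) * sig_word(pts, (1, 2))) < 1e-12
\end{verbatim}
\end{remark}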
The connection to iterated integrals is the following: \begin{proposition}[Ree's shuffle identity \cite{Ree58}] Let $X$ be a path in $\mb R^d$ and $p,q \in \mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle$. Then $$\langle S(X), p \shuffle q \rangle = \langle S(X), p \rangle \langle S(X), q \rangle$$ \end{proposition} \begin{corollary} The maps $\homomor^d_n$ are algebra homomorphisms $$\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle_\shuffle := (\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle, \shuffle) \to (\mb R[x_1,\ldots,x_n], \cdot).$$ \end{corollary} In particular, the kernel of $\homomor^d_n$, which we denote by $\mathcal{I}(\pwlinear_n^d)$, is an ideal in $\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle_\shuffle$. Geometrically, it can be viewed as the vanishing ideal of $S(\pwlinear_n^d) \subseteq \mathrm{Spec} \ \mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle$, which is the image of $\pwlinear^d_n$ under the signature (in \cite{preiss2024algebraic}, this is just called the vanishing ideal of $\pwlinear_n^d$). It follows that the ring of signature polynomials $\mc S^d[x_1,\dots,x_n]$ is isomorphic to $\mb R\langle \mathtt{1}, \dots, \mathtt{d} \rangle_\shuffle/\mathcal{I}(\pwlinear_n^d)$. \section{The stabiliser of positive matrices}\label{sec:Cnd} In this section our goal is to determine the group $\convperms^d_n$ from the introduction. Recall that $\convperms^d_n$ is defined as the subgroup of $S_n$ stabilising the set of positive $d\times n$-matrices under the action on columns. \begin{definition} An $d \times n$ matrix is called positive if all its maximal minors are positive. \end{definition} The group $S_n$ acts on the columns of an $(d+1) \times n$-matrix by permutation. $\convperms^d_n$ is defined as the stabiliser of positive matrices under this action. In other words, $\convperms^d_n$ is the subgroup of permutations of the columns of a $(d+1) \times n$ matrix with positive maximal minors such that the resulting matrix has again positive maximal minors.\par It turns out that the parity of $d$ has a large impact on the structure of $\convperms^d_n$. We give a full description of this structure in \Cref{prop:convex autos odd} and \Cref{prop:convex autos even} below. In fact, the group $\convperms^d_n$ is a subgroup of the automorphisms of a cyclic $d$-polytope with $n$ vertices. To see this, we recall some elementary theory of (cyclic) polytopes. \begin{lmm}\label{lmm:face condition} Let $P$ be a $d$-dimensional polytope with vertex set $V := \{x_1, \dots, x_n\}$. Then $F = \{x_{i_1}, \dots, x_{i_d}\}$ is the vertex set of a facet if and only if \begin{equation}\label{eq:face cond det} \det \begin{pmatrix} 1 & \dots & 1 & 1\\ x_{i_1} & \dots & x_{i_d} & y \end{pmatrix} \end{equation} has a fixed sign for all $y \in V-F$. \end{lmm} \begin{proof} Note that $$\det \begin{pmatrix} 1 & \dots & 1 & 1\\ x_{i_1} & \dots & x_{i_d} & y \end{pmatrix} = \det \begin{pmatrix} x_{i_2} - x_{i_1} & \dots & x_{i_d} - x_{i_1} & y - x_{i_1} \end{pmatrix}$$ As a function in $y$, this determinant vanishes exactly on the hyperplane spanned by $x_{i_1},\dots,x_{i_d}$ and has constant sign on the two associated open half-spaces. \end{proof} \begin{corollary}[Gale's evenness criterion, \cite{Gale1963NeighborlyAC}]\label{cor:gale} Let $P$ be a polytope with vertices $x_1, \dots, x_n$ such that all maximal minors of \eqref{eq:matr} have the same sign. 
Then the facets of $P$ are exactly the sets $F= x_I := \{x_{i_1}, \dots, x_{i_d}\}$ such that $\#\{i \in I | \ i > j\}$ has the same parity for all $j \in [n] - I$. In other words, $P$ is a cyclic polytope. \end{corollary} \begin{proof} This follows immediately from \Cref{lmm:face condition} as the sign of the determinant \eqref{eq:face cond det} is exactly $(-1)^{\#\{i \in I | \ i > j\}}$ times the sign of a maximal minor of \eqref{eq:matr} for $y=x_j\notin F$. \end{proof} \begin{corollary}\label{cor:conv pres is aut} Let $x_1,\dots,x_n$ be such that the matrix \begin{equation*} \begin{pmatrix} 1 & \dots & 1\\ x_1 & \dots & x_n \end{pmatrix} \end{equation*} has positive maximal minors. Then for every $\pi \in S_n$ such that the matrix \begin{equation*} \begin{pmatrix} 1 & \dots & 1\\ x_{\pi(1)} & \dots & x_{\pi(n)} \end{pmatrix} \end{equation*} has positive maximal minors, $x_i \mapsto x_{\pi(i)}$ is a combinatorial automorphism of the cyclic polytope $P = \convhull(x_1, \dots, x_n)$. That is, it defines an automorphism of its face lattice. \end{corollary} \begin{proof} This is clear from \Cref{cor:gale} since the face condition there is only a condition on indices: it does not depend on the $x_i$ themselves. \end{proof} We will see in \Cref{cor:ev dim conv pres} that for even $d$ the converse is true up to sign, that is, combinatorial automorphisms preserve the property that all minors have the same sign. This is not true in odd dimensions: \begin{example}\label{ex:counter convex} Let $x_1,\dots, x_6\in\mathbb R^3$ be such that \begin{equation*} \begin{pmatrix} 1 & \dots & 1\\ x_1 & \dots & x_6 \end{pmatrix} \end{equation*} has positive maximal minors. Then $(x_1,x_2,x_3,x_4,x_5,x_6)\mapsto(x_6,x_2,x_3,x_4,x_5,x_1)$ is an automorphism of cyclic polytopes, but \begin{equation*} \det \begin{pmatrix} 1 & 1 & 1 & 1\\ x_6 & x_2 & x_3 & x_4 \end{pmatrix} < 0 \end{equation*} while \begin{equation*} \det \begin{pmatrix} 1 & 1 & 1 & 1\\ x_2 & x_3 & x_4 & x_5 \end{pmatrix}>0 \end{equation*} \end{example} \vspace{2em} Let us now give a full description of $\convperms^d_n$. We will use the following characterization of the combinatorial automorphisms of a cyclic polytope: \begin{theorem}[{\cite[Theorem 8.3]{KaibelAutomorphismGO}}]\label{thm:cyclaut} The combinatorial automorphism group of a cyclic $d$-polytope with $n$ vertices is isomorphic to\par \vspace{1em} { \setlength\tabcolsep{2em} \renewcommand{\arraystretch}{2} \centering \begin{tabular}[t]{cccc} & $n = d + 1$ & $n = d + 2$ & $n \geq d+3$ \\ \hline $d$ even & $\mb S_n$ & $\mb S_{\frac n 2} \text{ wr } \mb Z_2$ & $\mb D_n$ \\ \hline $d$ odd & $\mb S_n$ & $\mb S_{\lceil{\frac n 2}\rceil} \times \mb S_{\lfloor{\frac n 2}\rfloor}$ & $\mb Z_2 \times \mb Z_2$ \end{tabular}\par} \vspace{1em} \end{theorem} We start with the case of odd dimension. \begin{proposition}\label{prop:convex autos odd} Assume $d$ is odd. Then the group $\convperms_n^d$ is \begin{enumerate}[i)] \item $A_n$ if $n=d+1$, \item $A_n\cap (S_{\frac{n-1} 2}\times S_{\frac{n+1} 2})$ if $n=d+2$, \item $\mathbb Z/2$ if $n \geq d+3$ and $\frac{d+1}{2}$ is even and \item $1$ if $n \geq d+3$ and $\frac{d+1}{2}$ is odd. \end{enumerate} \end{proposition} For the proof we need the following small lemma: \begin{lmm}\label{lmm:sign switch} Let \begin{equation*} X = \begin{pmatrix} x_1 & \dots & x_{n+1} \end{pmatrix} \end{equation*} be a $n \times (n+1)$-matrix such that all minors have the same sign. Then switching two even or two odd columns switches the sign of all minors of $X$. 
\end{lmm} \begin{proof} Let $i,j$ be the indices of the columns that are switched and write $X'$ for the matrix obtained from $X$ by switching columns $i$ and $j$. For $I = \{i_1, \dots, i_n\}$ with $i_1 < \dots < i_n$ we write $X_I$ for the $n\times n$ matrix $\begin{pmatrix} x_{i_1} & \dots & x_{i_n} \end{pmatrix}$.\\ If $I$ contains both $i$ and $j$, then $X'_I$ is obtained from $X_I$ by switching two columns and thus inverts the sign of the determinant. Now assume that $I$ does not contain $j$, so it contains $i$. $X'_I$ is obtained from $X_I$ by replacing $x_i$ with $x_j$. In particular, if $J := I \cup \{j\}- \{i\}$, then $X_I'$ has the same set of columns as $X_J$. Now note that every integer between $i$ and $j$ is contained in $I$ and since $i-j$ is even the number of such integers is odd. Thus, an odd number of transpositions is required to obtain $X_J$ from $X_I'$, and thus the determinant of $X_I'$ and the determinant of $X_J$ have inverse signs. \end{proof} \begin{proof}[Proof of \Cref{prop:convex autos odd}] Let $x_1,\dots,x_n$ be such that \eqref{eq:matr} is positive. Set $P = \convhull(x_1, \dots, x_n)$. We need to determine the subgroup of $\Aut(P)$ consisting of automorphisms that preserve positivity of all minors. Given $\pi \in \Aut(P)$ we write $\pi(X)$ for the matrix whose $i$-th column is the $\pi(i)$-th column of $X$. \begin{itemize} \item $i)$ is clear from \Cref{thm:cyclaut} as the determinant is an alternating map (here $P$ is a simplex). \item For $ii)$ we use that by \Cref{thm:cyclaut} we have $\Aut(P)= S_{\frac{n-1} 2}\times S_{\frac{n+1} 2}$ if $n=d+2$, where the first factor acts on the even and the second on the odd vertices of $P$. Thus, the statement follows from \Cref{lmm:sign switch}. \item For $iii)$ and $iv)$ we use that, again by \Cref{thm:cyclaut}, $\Aut(P)=\mb Z /2 \times \mb Z/2$ where the first factor acts by switching the first and the last vertex and the second factor acts by inverting the order on the inner vertices. Let $\pi$ be the generator of the first factor and $s$ the generator of the second factor. If $\frac{d+1}{2}$ is even, then $\tau = \pi \circ s$ preserves the sign of minors, otherwise it flips the sign. This is because any minor of $\pi \circ s(X)$ is turned into a submatrix of $X$ by $\frac{d+1}{2}$ transpositions. Next, aiming for a contradiction, assume $s$ preserves positive minors. Then $\pi \circ s \circ s = \pi$ either preserves or flips the sign of all minors. But this is not the case: Consider the matrix of the first $d+1$ columns of $\pi(X)$. This matrix is turned into a submatrix of $X$ by $d$ transpositions of columns, so it has negative determinant. But the matrix of columns $2,\dots,d+2$ of $\pi(X)$ is still a submatrix of $X$ and has, in particular, positive determinant (see\ \Cref{ex:counter convex} for an example in the case $d=3$). \end{itemize} \end{proof} \begin{proposition}\label{prop:convex autos even} Assume $d$ is even. Then the group $\convperms_n^d$ is \begin{enumerate}[i)] \item $A_n$ if $n=d+1$, \item $(A_n\cap (S_{ \frac n 2}\times S_{\frac n 2})) \rtimes \mb Z/2$ if $n=d+2$ and $\frac{d}{2}$ is even, \item $\ker \phi$ for the map $\phi: S_{ \frac n 2}\times S_{\frac n 2} \rtimes \mb Z/2 \to \{-1,1\}$ that maps $(\omega,\pi,\tau)$ to $\sgn(\omega)\sgn(\pi)\gamma(\tau)$ (where $\gamma$ maps $\tau$ to $-1$), if $n=d+2$ and $\frac{d}{2}$ is odd, \item $\mb D_n$ if $n \geq d+3$ and $\frac{d}{2}$ is even and \item $\mb Z/n$ if $n \geq d+3$ and $\frac{d}{2}$ is odd. 
\end{enumerate} \end{proposition} \begin{proof} Let again $x_1,\dots,x_n$ be such that \eqref{eq:matr} is positive. We set $P = \convhull(x_1, \dots, x_n)$. \begin{itemize} \item $i)$ is still clear from \Cref{thm:cyclaut} as the determinant is an alternating map (again, $P$ is a simplex). \item For $ii)$ we need to adapt the proof of \Cref{thm:cyclaut}. Let $s \in S_n$ denote the order-reversing permutation. Since $\frac{d}{2}$ transpositions of columns turn any $d+1\times d+1$ submatrix of $s(X)$ into a submatrix of $X$, $s$ preserves the sign of minors if $\frac{d}{2}$ is even and flips it if $\frac{d}{2}$ is odd. In the first case, going through the proof of \Cref{thm:cyclaut} and using \Cref{lmm:sign switch}, we obtain the semi-direct product $(A_n \cap S_{\frac{n} 2}\times S_{\frac{n} 2}) \rtimes \mb Z/2$. In the second case it is more difficult to describe the subgroup; we can define it as the kernel of the map $\phi: (S_{\frac{n} 2}\times S_{\frac{n} 2}) \rtimes \mb Z/2 \to \{-1,1\}$ that maps $(\omega,\pi,\tau)$ to $\sgn(\omega)\sgn(\pi)\gamma(\tau)$ where $\gamma$ maps $\tau$ to $-1$. \item For $iv)$ we use that, again by \Cref{thm:cyclaut}, $\Aut(P)=\mb D_n$. Let $r$ be rotation by $1$ and $s$ the order-reversing permutation. They generate $\mb D_n$ and since $\frac{d}{2}$ is even, both of them preserve positive minors. \item Similarly, if $\frac{d}{2}$ is odd, then $s$ will flip the signs of all minors. Thus, since $srs = r^{-1}$, we just obtain the subgroup generated by $r$ in this case. \end{itemize} \end{proof} For case v), compare \cite{postnikov2006}. \begin{corollary}\label{cor:ev dim conv pres} For even $d$, every automorphism of a cyclic $d$-polytope with $n$ vertices preserves the property that the maximal minors of \eqref{eq:matr} have constant sign. \end{corollary} \begin{proof} Going through the proof of \Cref{prop:convex autos even} again, we see that any combinatorial automorphism either preserves or flips the sign of all maximal minors. \end{proof} The following result adds further motivation to our interest in $C^d_n$ and explains why we call $\ourfeatures^d$ the ring of volume invariants. \begin{proposition}\label{prop:eq rel on pwl} For all $n \geq d+1$ there is a non-empty Zariski open subset $O$ of $\R^{d\times n}$ such that for all $X\in O$, $\sigma \in S_n$: $$\langle S(X),\signedvolume_d\rangle=\langle S(\sigma.X), \signedvolume_d\rangle \textrm{ if and only if } \sigma \in \convperms^d_n.$$ \end{proposition} Here, we identify a piecewise linear path $X$ with its ordered set of control points, i.e., a $d\times n$ matrix $X$. \begin{proof} The implication $\Leftarrow$ holds for all $X$ as it holds for the Zariski dense subset of $X$ with \eqref{eq:matr} positive, as discussed in the introduction. Thus it suffices to show the converse.\par Let us denote the vanishing locus of the polynomial $$\langle S(X), \signedvolume_d \rangle-\langle S(\sigma.X), \signedvolume_d\rangle = H^d_n(\signedvolume_d) - \sigma.H^d_n(\signedvolume_d)$$ in $\mb R^{d \times n}$ by $Z_\sigma$ for $\sigma \in S_n$ and set $Z := \bigcup_{\sigma \in S_n \backslash \convperms^d_n} Z_\sigma$. This is the locus of $X$ violating the implication $\Rightarrow$. As it is closed, we only need to show that $Z$ is not the whole space. Then we can choose $O:=\mb R^{d \times n} - Z$.\par Since $\mb R^{d \times n}$ is irreducible, it suffices to show that $Z_\sigma \not= \mb R^{d\times n}$ for all $\sigma \notin \convperms^d_n$. Thus, let $\sigma \in S_n \backslash \convperms^d_n$. 
Then we can choose $X$ such that \eqref{eq:matr} is positive for $X$, but not for $\sigma.X$. We will now argue by contraposition that the implication $\Rightarrow$ is true for $X$.\par Let $x_1,\dots,x_n$ denote the columns of $X$. Set $P= \convhull(x_1,\dots,x_n)$. Then the matrix \begin{equation*} \begin{pmatrix} 1 & \dots & 1\\ x_{\sigma^{-1}(1)} & \dots & x_{\sigma^{-1}(n)} \end{pmatrix} \end{equation*} has a negative maximal minor. There is some triangulation $\mc S$ of $P$ containing the simplex corresponding to the index set of this minor. To $\Delta \in \mc S$ we associate the indices of its vertices $\{ i^\Delta_1,\dots,i^\Delta_{d+1}\}, \ i^\Delta_1 < \dots < i^\Delta_{d+1}$. Then we consider $$\signedvolume_d'(X):= \sum_{\Delta \in \mc S} \det\begin{pmatrix}1&\dots&1\\x_{i^\Delta_1}&\dots&x_{i^\Delta_{d+1}}\end{pmatrix}.$$ Since $\mc S$ is a triangulation and since all determinants appearing in the sum are positive if \eqref{eq:matr} is positive, $\signedvolume_d'$ agrees with $\langle S(X), \signedvolume_d\rangle$ on the Zariski dense subset of $X$ with \eqref{eq:matr} positive and thus $\signedvolume_d'$ and $\langle S(X), \signedvolume_d\rangle$ are identical. But by construction $\signedvolume'_d(\sigma.X)$ is strictly bounded by the volume of $P$ (since at least one minor appearing in the sum will be negative) and so \begin{equation*} \langle S(X), \signedvolume_d\rangle - \langle S(\sigma.X),\signedvolume_d \rangle > 0.\qedhere \end{equation*} \end{proof} In particular, away from some exceptional closed set, we have the implication $$\langle S(X), \signedvolume_d\rangle = \langle S(\sigma.X), \signedvolume_d\rangle \Rightarrow \forall \ w \in \ourfeatures^d: \ \langle S(X),w \rangle = \langle S(\sigma.X), w \rangle $$ for all $X$. \section{Investigating the ring of volume invariants}\label{sec:ring_of_invariants} In the following we write $\ourfeatures^d_n:=\ourfeatures^d_n(\convperms^d_n)$ for simplicity. Given the case distinction in Proposition \ref{prop:convex autos even}, we will treat the cases $n\geq d+3$ simultaneously and put $$\ourfeatures_{\geq d+3}^d:=\bigcap_{n\geq d+3} \ourfeatures_n^d$$ so that we have $$\ourfeatures^d=\ourfeatures_{d+1}^d\cap \ourfeatures_{d+2}^d \cap \ourfeatures_{\geq d+3}^d.$$ Note that the kernel $\mc I(\pwlinear^d_{d+2})$ of $\homomor^d_{d+2}$ is contained in $\ourfeatures^d_{d+1} \cap \ourfeatures^d_{d+2}$. We can write \begin{equation*} \ourfeatures^d=\frac{\ourfeatures^d}{\ourfeatures^d \cap \mathcal{I}(\pwlinear^d_{d+2})}\oplus\big(\ourfeatures^d_{\geq d+3}\cap\mathcal{I}(\pwlinear^d_{d+2})\big). \end{equation*} While we can give a description of $\ourfeatures^d_{\geq d+3}$, the problem for $\ourfeatures^d_{d+1}$ and $\ourfeatures^d_{d+2}$ is more difficult. We give examples and a conjecture instead, based on computations for low values of $d$: \begin{conjecture}\label{conj:only vol for d+2} $$\ourfeatures^d_{d+2}/\mathcal{I}(\pwlinear^d_{d+2}) \cong \mb R[H^d_{d+2}(\signedvolume_d)]\subseteq \mc S^d[x_1,\dots,x_{d+2}]$$ i.e.\ the subalgebra of $\mc S^d[x_1,\dots,x_{d+2}]$ generated by $H^d_{d+2}(\signedvolume_d)$.
Equivalently, $$\ourfeatures^d_{d+2}=\spann\{(\signedvolume_d)^{\shuffle k},k\geq 0\}\oplus \mathcal{I}(\pwlinear_{d+2}^d).$$ \end{conjecture} \begin{example} A computation using \textsc{Macaulay 2} shows that the vector space of invariants of degree $\leq 6$ in $\ourfeatures^3_{4}$ is spanned by $18$ elements, one in degree $3$ (the signed volume), $6$ in degree $5$ and $11$ in degree $6$ (including the shuffle square of the signed volume). Here are two examples: \begin{itemize} \item $w_1:=\texttt{12333} + \texttt{13233} -\frac{2}{3} \cdot \texttt{13323} - \frac{4}{3} \cdot \texttt{13332} - \texttt{21333} - \texttt{23133} + \frac{2}{3} \cdot \texttt{23313} + \frac{4}{3} \cdot \texttt{23331} + \frac{5}{3} \cdot \texttt{31323} - \frac{2}{3} \cdot \texttt{31332} - \frac{5}{3} \cdot \texttt{32313} + \frac{2}{3} \cdot \texttt{32331} + \texttt{33132} - \texttt{33231} + \texttt{33312} - \texttt{33321}$ \item $w_2 := - \texttt{123333} -3 \cdot \texttt{132333} + 4 \cdot \texttt{133233} + \texttt{213333} + 3 \cdot \texttt{231333} +\\ -4 \cdot \texttt{233133} + \texttt{312333} -2 \cdot \texttt{313233} - \texttt{321333} +\\ 2 \cdot \texttt{323133} + 2 \cdot \texttt{331323} -4 \cdot \texttt{331332} -2 \cdot \texttt{332313} +\\ 4 \cdot \texttt{332331} -\texttt{333123} + 3 \cdot \texttt{333132} + \texttt{333213} +\\ -3 \cdot \texttt{333231} + \texttt{333312} - \texttt{333321}$ \end{itemize} Writing $x_i=(x_{i1}, \dots, x_{id})$ and $a_i:= x_{i+1} - x_i$, their images under \eqref{eq:sign map} in the polynomial ring $\mb R[x_1,\dots,x_d]$ are given by \begin{small} \begin{align*} &2\cdot(-3a_{13}^3a_{22}a_{31} + 3a_{12}a_{13}^2a_{23}a_{31} - 4a_{13}^2a_{22}a_{23}a_{31} + 4a_{12}a_{13}a_{23}^2a_{31} \\ &- 4a_{13}a_{22}a_{23}^2a_{31} + 4a_{12}a_{23}^3a_{31} + 3a_{13}^3a_{21}a_{32} - 3a_{11}a_{13}^2a_{23}a_{32} \\ &+ 4a_{13}^2a_{21}a_{23}a_{32} - 4a_{11}a_{13}a_{23}^2a_{32} + 4a_{13}a_{21}a_{23}^2a_{32} - 4a_{11}a_{23}^3a_{32} \\ &- 3a_{12}a_{13}^2a_{21}a_{33} + 3a_{11}a_{13}^2a_{22}a_{33} - 4a_{12}a_{13}a_{21}a_{23}a_{33} + 4a_{11}a_{13}a_{22}a_{23}a_{33} \\ &- 4a_{12}a_{21}a_{23}^2a_{33} + 4a_{11}a_{22}a_{23}^2a_{33} - 2a_{13}^2a_{22}a_{31}a_{33} + 2a_{12}a_{13}a_{23}a_{31}a_{33} \\ &- 4a_{13}a_{22}a_{23}a_{31}a_{33} + 4a_{12}a_{23}^2a_{31}a_{33} + 2a_{13}^2a_{21}a_{32}a_{33} - 2a_{11}a_{13}a_{23}a_{32}a_{33} \\ &+ 4a_{13}a_{21}a_{23}a_{32}a_{33} - 4a_{11}a_{23}^2a_{32}a_{33} - 2a_{12}a_{13}a_{21}a_{33}^2 + 2a_{11}a_{13}a_{22}a_{33}^2 \\ &- 4a_{12}a_{21}a_{23}a_{33}^2 + 4a_{11}a_{22}a_{23}a_{33}^2 - 3a_{13}a_{22}a_{31}a_{33}^2 + 3a_{12}a_{23}a_{31}a_{33}^2 \\ &+ 3a_{13}a_{21}a_{32}a_{33}^2 - 3a_{11}a_{23}a_{32}a_{33}^2 - 3a_{12}a_{21}a_{33}^3 + 3a_{11}a_{22}a_{33}^3) \end{align*} \end{small} and \begin{small} \begin{align*} &24\cdot (-a_{13}^4a_{22}a_{31} + a_{12}a_{13}^3a_{23}a_{31} - 2a_{13}^3a_{22}a_{23}a_{31} + 2a_{12}a_{13}^2a_{23}^2a_{31} \\ &+ a_{13}^4a_{21}a_{32} - a_{11}a_{13}^3a_{23}a_{32} + 2a_{13}^3a_{21}a_{23}a_{32} - 2a_{11}a_{13}^2a_{23}^2a_{32} \\ &- a_{12}a_{13}^3a_{21}a_{33} + a_{11}a_{13}^3a_{22}a_{33} - 2a_{12}a_{13}^2a_{21}a_{23}a_{33} + 2a_{11}a_{13}^2a_{22}a_{23}a_{33} \\ &- a_{13}^3a_{22}a_{31}a_{33} + a_{12}a_{13}^2a_{23}a_{31}a_{33} + a_{13}^3a_{21}a_{32}a_{33} - a_{11}a_{13}^2a_{23}a_{32}a_{33} \\ &- a_{12}a_{13}^2a_{21}a_{33}^2 + a_{11}a_{13}^2a_{22}a_{33}^2 + a_{13}^2a_{22}a_{31}a_{33}^2 - a_{12}a_{13}a_{23}a_{31}a_{33}^2 \\ &+ 2a_{13}a_{22}a_{23}a_{31}a_{33}^2 - 2a_{12}a_{23}^2a_{31}a_{33}^2 - a_{13}^2a_{21}a_{32}a_{33}^2 + a_{11}a_{13}a_{23}a_{32}a_{33}^2 \\ &- 
2a_{13}a_{21}a_{23}a_{32}a_{33}^2 + 2a_{11}a_{23}^2a_{32}a_{33}^2 + a_{12}a_{13}a_{21}a_{33}^3 - a_{11}a_{13}a_{22}a_{33}^3 \\ &+ 2a_{12}a_{21}a_{23}a_{33}^3 - 2a_{11}a_{22}a_{23}a_{33}^3 + a_{13}a_{22}a_{31}a_{33}^3 - a_{12}a_{23}a_{31}a_{33}^3 \\ &- a_{13}a_{21}a_{32}a_{33}^3 + a_{11}a_{23}a_{32}a_{33}^3 + a_{12}a_{21}a_{33}^4 - a_{11}a_{22}a_{33}^4) \end{align*} \end{small} respectively. \end{example} Let us now give a description of $\ourfeatures^d_{\geq d+3}$. Let $\mc A$ denote the antipode of the Hopf algebra $\mb R\langle \texttt 1, \dots, \texttt d\rangle$, that is, the map sending a word $w$ to $(-1)^{d+1}w'$ where $w'$ is obtained from $w$ by reversing the order of its letters. \begin{definition} We define $\timerevinv^d:=\{w\in \mb R\langle \texttt{1},\dots, \texttt{d}\rangle \ | \ w=\antipode w\}$. This is the subring of $w \in \mb R\langle \texttt{1},\dots, \texttt{d}\rangle$ such that $\langle S(X), w\rangle = \langle S(X^{-1}), w \rangle$ for all paths $X:[0,1] \to \mb R^d$, where $X^{-1}: [0,1]\to \mb R^d,\ t \mapsto X(1-t)$. Indeed, taking the antipode is the adjoint operation to time reversal (see e.g.\ \cite{preiss2024algebraic}). \end{definition} In the following, we will make crucial use of the Chen-Chow theorem in the following version: \begin{theorem}[{Chen-Chow}]\label{thm:chenchow} Let $w, v \in \mb R\langle \mathtt 1, \dots, \mathtt d \rangle$. If $\langle S(X), w\rangle = \langle S(X), v \rangle$ for all piecewise linear paths $X$, then $w=v$. \end{theorem} \begin{proof} By (the proof of) \cite[Lemma 8]{diehl2019invariants}, the image of all piecewise linear paths under $S$ spans the dual $(\mb R\langle \texttt 1, \dots, d \rangle_{\leq k})^*$ of the vector space of words $\leq k$. \end{proof} \begin{proposition}\label{prop:inv geq3 odd} Assume $d$ is odd. Then $$\ourfeatures_{\geq d+3}^d=\timerevinv^d$$ if $\frac{d+1}2$ is even, and $\ourfeatures_{\geq d+3}^d=\mb R\langle \texttt{1},\dots, \texttt{d}\rangle$ if $\frac{d+1}{2}$ is odd. \end{proposition} \begin{proof} Let $X$ be a piecewise linear path in $\mb R^d$ with $n\geq d+3$ control points. If $\frac{d+1}{2}$ is odd then $\convperms^d_n$ is trivial by \Cref{prop:convex autos odd}. If $\frac{d+1}{2}$ is even, then the proof of \Cref{prop:convex autos odd} shows that $\convperms^d_n$ is generated by the reflection $\tau$ which inverts the order of the vertices. On piecewise linear paths, this corresponds to $X \mapsto X^{-1}$. Thus, if $w \in \ourfeatures_{\geq d+3}$, then $\langle S(X), \mc A w \rangle = \langle S(X^{-1}), w\rangle = \langle S(X), w\rangle$, concluding the proof by \Cref{thm:chenchow}. \end{proof} Note that $\timerevinv$ can be identified as the image of $$\mb R\langle \texttt{1},\dots, \texttt{d}\rangle \to \mb R\langle \texttt{1},\dots, \texttt{d}\rangle, \ w \mapsto w + \mc Aw.$$ \begin{example} In $d=3$, consider the concatenation square of signed 3-volume \begin{equation*} \signedvolume_3^{\bullet 2}=(\texttt{123}+\texttt{231}+\texttt{312}-\texttt{213}-\texttt{132}-\texttt{321})^{\bullet 2}, \end{equation*} where the concatenation (of words) $\bullet$ is the bilinear non-commutative product on $\mb R\langle\mathtt{1},\dots,\mathtt{d}\rangle$ given by \begin{equation*} \mathtt{i}_1\dots\mathtt{i}_k\bullet\mathtt{i}_{k+1}\dots\mathtt{i}_m=\mathtt{i}_1\dots\mathtt{i}_m \end{equation*} We have $\antipode\signedvolume_3^{\bullet 2}=\signedvolume_3^{\bullet 2}$, so $\signedvolume_3^{\bullet 2}\in\timerevinv^3=\ourfeatures_{\geq 6}^3$. 
Furthermore, $\langle S(X),\signedvolume_3^{\bullet 2}\rangle$ is the integral \begin{align*} \int_{0\leq t_1\leq\dots\leq t_6\leq 1}\det(X'(t_1),X'(t_2),X'(t_3))\det(X'(t_4),X'(t_5),X'(t_6))dt_1\dots dt_6& \end{align*} so if $X$ has only 4 segments, there is no choice of $0\leq t_1\leq\dots\leq t_6\leq 1$ such that both determinants are non-zero. Thus, $\signedvolume_3^{\bullet 2}\in \ourfeatures_{\geq 6}^3\cap\mathcal{I}(\pwlinear_5^3)\subset \ourfeatures^3$. \end{example} \begin{definition} We define $\loopclosureinv^d$ as the subring of $u \in \mb R\langle\mathtt{1},\dots,\mathtt{d}\rangle$ such that \begin{equation*} \langle S(X),u\rangle=\langle S(X\sqcup \{X(1)\to X(0)\}),u\rangle=\langle S(\{X(1)\to X(0)\}\sqcup X),u\rangle, \end{equation*} where $\sqcup$ is concatenation of paths and $\{X(1)\to X(0)\}$ is the linear segment from $X(1)$ to $X(0)$. That is, the elements of $\loopclosureinv^d$ correspond to signature values that are stable both under closing a path to a loop by concatenating a linear segment to the right (the \textit{right loop closure}), as well as under closing a path to a loop by concatenating a linear segment to the left (the \textit{left loop closure}). We refer to \cite{loopcl24} for details. \end{definition} For example, the signatures of the five paths from \Cref{fig:2polytope} agree on all loop closure invariants (since the right closure of the first path is the left closure of the second path depicted, and so on). \begin{proposition}\label{prop:inv_loopclosure_timerev} Assume $d$ is even. Then $$\ourfeatures_{\geq d+3}^d=\loopclosureinv^d\cap\timerevinv^d$$ if $\frac{d}{2}$ is even, and $\ourfeatures_{\geq d+3}^d=\loopclosureinv^d$ if $\frac{d}{2}$ is odd. \end{proposition} \begin{proof} If $\frac{d}{2}$ is even then $\convperms^d_n$ is the group $\mb D_n$ by \Cref{prop:convex autos even}. It is spanned by the order-reversing permutation $s_n$ and the rotation $r_n$. As observed in (the proof of) \Cref{prop:inv geq3 odd}, invariants under $s_n$ for all $n$ simultaneously are precisely the elements of $\timerevinv$. On the other hand, invariants under all $r_n$ for all $n$ simultaneously are the elements of $\loopclosureinv$, see \cite{loopcl24}. Indeed, for $w \in \ourfeatures^d_{\geq d+3}$ and the right closure $\bar X^R$ of the path $X$ (which is a piecewise linear path with $n+1$ control points) we must have $\langle S(r_{n+1}(\bar X^R)), w \rangle = \langle S( \bar X), w \rangle$ for the rotation $r_{n+1} \in \mb Z/(n+1)$. But $\langle S(r_{n+1}(\bar X^R)), w \rangle = \langle S(X), w \rangle $ by reparametrisation invariance of iterated integrals. Similarly, for the left closure $\bar X^L$ of the path $X$ we must have $\langle S( r^{-1}_{n+1}(\bar X^L)), w \rangle = \langle S(\bar X), w \rangle$. But again, $\langle S(r^{-1}_{n+1}(\bar X^L)), w \rangle = \langle S(X), w \rangle$. Using \Cref{thm:chenchow} and that both left- and right-closure admit an adjoint \cite[Lemma 4.7]{loopcl24} we see that $w \in \loopclosureinv$. The reverse inclusion is immediate from \cite[Proposition 4.3]{loopcl24}. \par In the case that $\frac{d}{2}$ is odd we have that $\convperms^d_n = \mb Z/n$ is just generated by $r_n$ as shown in \Cref{prop:convex autos even}. Thus, both statements follow. 
\end{proof} \begin{theorem}\label{thm:abundant invariants} For any $d$, $\ourfeatures_{\geq d+3}^d\cap\mathcal{I}(\pwlinear^d_{d+2})$ (and in particular $\ourfeatures^d$) contains infinitely many algebraically independent elements (with respect to the shuffle product) and is thus in particular infinitely generated as a (shuffle) subalgebra of $\R\langle\word{1},\dots,\word{d}\rangle$. \end{theorem} \begin{proof} If $d$ is odd then we have $\ourfeatures^d_{\geq d+3} = \timerevinv^d$ or $\ourfeatures^d = \mb R\langle \texttt 1, \dots, d\rangle$. If $d$ is even then $\ourfeatures^d_{\geq d+3} = \loopclosureinv^d \cap \timerevinv^d$ or $\ourfeatures^d_{\geq d+3} = \loopclosureinv^d$. We claim that all of these algebras contain infinitely many algebraically independent elements. This is true for $\mb R\langle \texttt 1, \dots, d\rangle$ as it is freely generated by the Lyndon words, see \cite[Theorem 6.1]{reutenauer1993free}. Then it is also true for $\timerevinv^d$: It is the kernel of the map (of vector spaces) $\psi: \mb R\langle \texttt 1, \dots, \texttt d\rangle \to \mb R\langle \texttt 1, \dots, \texttt d\rangle ,w\mapsto w - \mc A w$ and if it does not contain infinitely many algebraically independent elements, then there is a finite set $S$ of Lyndon words such that each element is already algebraic over $\mb R[S]$, thus contained in it. It follows that the image of $\psi$ would have to contain infinitely many algebraically independent $a_1,a_2,\dots$ but then the elements $a_1^{\shuffle 2}, a_2^{\shuffle 2}, \dots \in \timerevinv$ are still algebraically independent, yielding a contradiction. In \cite{loopcl24} it is shown that $\loopclosureinv^d$ contains an infinite algebraically independent subset and by a similar argument as above it follows that the intersection $\loopclosureinv^d \cap \timerevinv^d$ also does, using that $w-\mc Aw$ is an element of $\loopclosureinv^d$ for every $w \in \loopclosureinv^d$ since $\mc Aw$ is a loop closure invariant if $w$ is (which follows from the definition). So we have shown that $\ourfeatures^d_{\geq d+3}$ contains infinitely many algebraically independent elements for any $d$. In particular, we can consider a composition \[ \begin{tikzcd}[nodes={inner sep=1pt}] \mb R[s_1,s_2,\dots] \rar[hookrightarrow] & \ourfeatures^d_{\geq d+3} \rar & \R\langle\texttt 1, \dots,\texttt d\rangle \rar & \R\langle\texttt 1, \dots,\texttt d\rangle / \mc I(\pwlinear^d_{d+2}) \end{tikzcd}\] where the first map is injective. The kernel of this composition must contain infinitely many algebraically independent elements as the quotient on the right is isomorphic to a subring of $\mb R[x_1,\dots,x_{d+2}]$ via \eqref{eq:sign map}. Indeed, otherwise there is again some $N$ such that any element of the kernel is already algebraic over $\mb R[s_1,\dots,s_N]$ and we get an injection $\mb R[s_{N+1},s_{N+2},\dots] \to \mb R[x_1,\dots,x_{d+2}]$ which is absurd. The infinitely many algebraically independent elements are still algebraically independent in the larger ring $\ourfeatures^d_{\geq d+3}$ and by construction contained in $\mc I(\pwlinear^d_{d+2})$, proving the claim. \end{proof} \section{Outlook} \paragraph{Computing volume invariants for even $d$} In dimension $2$, we simply have $\ourfeatures^2=\loopclosureinv^2$. 
An lowest degree example of a loop closure invariant for two dimensional paths that is independent of signed area is the following: \begin{align*} & \hphantom{\hspace{1.1em}} 3\cdot\texttt{111222}+5\cdot\texttt{112122}+3\cdot\texttt{112212}-3\cdot\texttt{112221}\\ &+3\cdot\texttt{121122}+\texttt{121212}-5\cdot\texttt{121221}+\texttt{122112}\\ &-5\cdot\texttt{122121}-3\cdot\texttt{122211}-3\cdot\texttt{211122}-5\cdot\texttt{211212}\\ &+\texttt{211221} -5\cdot\texttt{212112} +\texttt{212121}+3\cdot\texttt{212211}\\ &-3\cdot\texttt{221112}+3\cdot\texttt{221121} +5\cdot\texttt{221211}+3\cdot\texttt{222111} \end{align*} However, starting from four dimensions the even case gets vastly more involved. The lowest degree generators of $\mathcal{I}(\pwlinear_6^4)$ are the following $8$ on level $7$, \begin{align*} &\signedvolume_4\bullet\signedvolume_3(\texttt{1},\texttt{2},\texttt{3}),\quad\signedvolume_4\bullet\signedvolume_3(\texttt{1},\texttt{2},\texttt{4}),\quad\signedvolume_4\bullet\signedvolume_3(\texttt{1},\texttt{3},\texttt{4}),\\ &\signedvolume_4\bullet\signedvolume_3(\texttt{2},\texttt{3},\texttt{4}), \quad \signedvolume_3(\texttt{1},\texttt{2},\texttt{3})\bullet\signedvolume_4,\quad\signedvolume_3(\texttt{1},\texttt{2},\texttt{4})\bullet\signedvolume_4,\\ &\signedvolume_3(\texttt{1},\texttt{3},\texttt{4})\bullet\signedvolume_4,\quad \signedvolume_3(\texttt{2},\texttt{3},\texttt{4})\bullet\signedvolume_4 \end{align*} where \begin{align*} \signedvolume_4 &:=\signedvolume_4(\texttt{1},\texttt{2},\texttt{3},\texttt{4})\\ &:=\texttt{1234}-\texttt{1243}-\texttt{1324}+\texttt{1342}+\texttt{1423}-\texttt{1432}-\texttt{2134}+\texttt{2143}\\ &\hphantom{:=}+\texttt{2314}-\texttt{2341}-\texttt{2413}+\texttt{2431}+\texttt{3124}-\texttt{3142}-\texttt{3214}+\texttt{3241}\\ &\hphantom{:=}+\texttt{3412}-\texttt{3421}-\texttt{4123}+\texttt{4132}+\texttt{4213}-\texttt{4231}-\texttt{4312}+\texttt{4321} \end{align*} is four dimensional signed volume and $$\signedvolume_3(\texttt{i},\texttt{j},\texttt{k})=\texttt{ijk}+\texttt{jki}+\texttt{kij}-\texttt{jik}-\texttt{ikj}-\texttt{kji}.$$ \noindent No linear combination of these eight is a loop-closure invariant.\par Even though we know that $\ourfeatures^4_{\geq d+3}\cap\mathcal{I}(\pwlinear^4_6)$ is an infinitely generated subring, we need to compute very far to find the first generator. \paragraph{The induced equivalence relation} Recall that we can view $\ourfeatures^d$ as features on cyclic polytopes with $n\geq d+1$ vertices. If we consider the equivalence relation $$(x_1, \dots, x_n) \sim (y_1, \dots, y_m) :\Leftrightarrow \forall \ f \in \ourfeatures^d: f(x_1, \dots, x_n) = f(y_1, \dots, y_m)$$ then we have $(x_1,...,x_n) \sim (x_{\sigma(1)},\dots,x_{\sigma(n)})$ for any $\sigma \in C^n_d$. The properties of iterated integrals imply that $(x_1,\dots,x_n) \sim (x_1 + c, \dots, x_n + c)$ for any $c \in \mb R^d$ and that $(x_1,\dots,x_n) \sim (x_1,\dots,\hat{x_i},\dots,x_n)$ ($x_i$ is omitted in the second tuple) whenever $x_i$ is a convex combination of $x_{i-1}$ and $x_{i+1}$.\par However, we do not expect these three types of relations to generate $\sim$: For example, if \Cref{conj:only vol for d+2} holds true, then any two $d$-polytopes with $d+2$ vertices and the same volume are equivalent under $\sim$. \begin{myproblem} How can the equivalence relation $\sim$ be described geometrically or combinatorially? 
\end{myproblem} \paragraph{Specializing to $O$ and $SL$-invariants} For $O_d$, the orthogonal group, we may reduce to invariants on demand (cf.\ \cite{diehllyonsnipreiss}), for example through a projection $$R:\,p\mapsto \int_{O_d}p(A x_1,\dots,A x_n) d\mu(A),$$ where $\mu$ is the Haar measure of $O_d$ with $\mu(O_d)=1$. $R$ preserves signature polynomials, and is an example of a so-called Reynold's operator. Now for $SL_d$, there is no finite Haar measure as it is a non-compact group. However, we may still compute the intersection of the subrings of $\ourfeatures^d$ and the $SL_d$ invariants. This yields functions on the positive Grassmannian, as the latter can be represented by positive matrices modulo $SL_d$ action from the left. See for example \cite{derksenkemper} for the notions of Reynold's operator and Haar measure. \paragraph{Invariants for other groups} Instead of considering $G_n=C_n^d$, there are of course other interesting possibilities.\par If we were to consider the maximal choice $G_n=S_n$, then $\ourfeatures^d(G)$ would be the zero subring. Indeed, take any piecewise linear path $P$, with $n$ vertices. Then one can double each vertex except the last to obtain a path with $2n-1$ vertices but the same signature. Permuting the order of the vertices allows then to obtain a tree-like path (i.e.\ a path with vanishing signature). Thus, any (simultaneous) invariant for the $S_n$-action must evaluate to $0$ under the signature. By Chen-Chow, this implies that the invariant itself must be $0$. For $G_n=\mathbb{Z}/n$, we exactly get $\ourfeatures^d(G)=\loopclosureinv^d$, and for $G_n=\mathbb{D}_n$, we have $\ourfeatures^d(G)=\loopclosureinv^d\cap\timerevinv^d$. \paragraph{Acknowledgements} The authors thank Carlos Améndola, Joscha Diehl, Jeremy Reizenstein, Leonard Schmitz, Bernd Sturmfels and Nikolas Tapia for helpful discussions and suggestions, and Shelby Cox and Gabriele Dian for reviewing a preliminary version of this paper. The authors acknowledge support from DFG CRC/TRR 388 ``Rough Analysis, Stochastic Dynamics and Related Fields'', Project A04. \begin{small} \def\cprime{$'$} \begin{thebibliography}{10} \bibitem{AFS18} Carlos Am{\'e}ndola, Peter Friz, and Bernd Sturmfels. \newblock {Varieties of Signature Tensors}. \newblock {\em Forum of Mathematics, Sigma}, 7:e10, 2019. \bibitem{amendolaleemeroni23} Carlos Améndola, Darrick Lee, and Chiara Meroni. \newblock Convex hulls of curves: Volumes and signatures. \newblock January 2023. \newblock \href{https://arxiv.org/abs/2301.09405}{\tt arXiv:2301.09405 [math.MG]}. \bibitem{arkani2014amplituhedron} Nima Arkani-Hamed and Jaroslav Trnka. \newblock The amplituhedron. \newblock {\em Journal of High Energy Physics}, 2014(10):1--33, 2014. \bibitem{BGLY16} Horatio Boedihardjo, Xi~Geng, Terry Lyons, and Danyu Yang. \newblock {The signature of a rough path: Uniqueness}. \newblock {\em Advances in Mathematics}, 720--737, 2016. \bibitem{bib:Che1954} Kuo-Tsai Chen. \newblock {Iterated Integrals and Exponential Homomorphisms}. \newblock {\em Proceedings of the London Mathematical Society}, s3-4(1):502--512, 1954. \bibitem{Chen1958} Kuo-Tsai Chen. \newblock {Integration of Paths -- A Faithful Representation of Paths by Noncommutative Formal Power Series}. \newblock {\em Transactions of the American Mathematical Society}, 89(2):395--407, 1958. \bibitem{colmenarejopreiss20} Laura Colmenarejo and Rosa Prei{\ss}. \newblock {Signatures of paths transformed by polynomial maps}. 
\newblock {\em Beitr{\"a}ge zur Algebra und Geometrie/Contributions to Algebra and Geometry}, 61(4):695--717, 2020. \bibitem{derksenkemper} Harm Derksen and Gregor Kemper. \newblock{\em Computational Invariant theory.} \newblock Encyclopedia of Mathematical Sciences 130. \newblock Springer, 2nd edition, 2015. \bibitem{diehleftapia2020acta} Joscha Diehl, Kurusch {Ebrahimi-Fard}, and Nikolas Tapia. \newblock {Time-Warping Invariants of Multidimensional Time Series}. \newblock {\em Acta Applicandae Mathematicae}, 170(1):265--290, 2020. \bibitem{diehllyonsnipreiss} Joscha Diehl, Terry Lyons, Hao Ni, and Rosa Preiß. \newblock Signature invariants characterize orbits of paths under compact matrix group action. \newblock Work in progress. \bibitem{diehl2019invariants} Joscha Diehl and Jeremy Reizenstein. \newblock Invariants of multidimensional time series based on their iterated-integral signature. \newblock {\em Acta Applicandae Mathematicae}, 164(1):83--122, 2019. \bibitem{fedorchukpak} Maksym Fedorchuk and Igor Pak. \newblock Rigidity and polynomial invariants of convex polytopes. \newblock {\em Duke Mathematical Journal}, 129, 2005. \bibitem{Gale1963NeighborlyAC} David Gale. \newblock Neighborly and cyclic polytopes. \newblock Proceedings of symposia in pure mathematics, vol. VII, p. 225-232. \newblock 1963. \bibitem{KaibelAutomorphismGO} Volker Kaibel and Arnold Wassmer. \newblock Automorphism groups of cyclic polytopes, 2003. \newblock \href{https://cloud.ovgu.de/s/yAoQJRR35QiWF6M}{https://cloud.ovgu.de/s/yAoQJRR35QiWF6M}, 2024. \bibitem{postnikov2006} Alexander Postnikov. \newblock Total positivity, Grassmannians, and networks. \newblock \href{https://arxiv.org/abs/math/0609764}{\tt arXiv:0609764}, 2006. \bibitem{preiss2024algebraic} Rosa Prei{\ss}. \newblock An algebraic geometry of paths via the iterated-integrals signature. \newblock \href{https://arxiv.org/abs/2311.17886}{\em arXiv:2311.17886}, 2024. \bibitem{loopcl24} Rosa Prei{\ss}, Jeremy Reizenstein, and Joscha Diehl. \newblock Conjugation, loop and closure invariants of the iterated-integrals signature. \newblock {\em to be announced}, 2024. \bibitem{Ree58} Rimhak Ree. \newblock {Lie Elements and an Algebra Associated With Shuffles}. \newblock {\em Annals of Mathematics Second Series}, 68(2):210--220, 1958. \bibitem{reutenauer1993free} C.~Reutenauer. \newblock {\em Free Lie Algebras}. \newblock LMS monographs. Clarendon Press, 1993. \end{thebibliography} \end{small} \end{document}
2412.11358v1
http://arxiv.org/abs/2412.11358v1
Enumerating Diagonalizable Matrices over $\mathbb{Z}_{p^k}$
\documentclass{article} \usepackage{amsmath,amssymb,amsthm} \usepackage{mathtools} \usepackage[all]{xy} \usepackage{amsfonts,mathrsfs,graphicx,multirow,latexsym} \usepackage[mathscr]{euscript} \usepackage{float} \usepackage{cellspace} \usepackage[export]{adjustbox} \usepackage{makecell} \setlength{\oddsidemargin}{.5in} \setlength{\evensidemargin}{.5in} \setlength{\textwidth}{6.in} \setlength{\topmargin}{0in} \setlength{\headsep}{.20in} \setlength{\textheight}{8.5in} \pdfpagewidth 8.5in \pdfpageheight 11in \newtheoremstyle{custom}{}{}{}{}{}{.}{ }{\thmname{}\thmnumber{}\thmnote{\bfseries #3}} \newtheoremstyle{Theorem}{}{}{\itshape}{}{}{.}{ }{\thmname{\bfseries #1}\thmnumber{\;\bfseries #2}\thmnote{\;(\bfseries #3)}} \theoremstyle{Theorem} \newtheorem{theorem}{Theorem}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem*{nonumthm}{Theorem} \newtheorem*{nonumprop}{Proposition} \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem*{answer}{Answer} \newtheorem*{nonumdfn}{Definition} \newtheorem*{nonumex}{Example} \newtheorem{ex}{Example}[section] \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \newtheorem*{note}{Note} \newtheorem*{notation}{Notation} \theoremstyle{custom} \newtheorem*{cust}{Definition} \usepackage[colorinlistoftodos]{todonotes} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \title{Enumerating Diagonalizable Matrices over $\mathbb{Z}_{p^k}$} \author{Catherine Falvey, Heewon Hah, William Sheppard, Brian Sittinger,\\ Rico Vicente} \date{\vspace{-5ex}} \begin{document} \maketitle \begin{abstract} Although a good portion of elementary linear algebra concerns itself with matrices over a field such as $\mathbb{R}$ or $\mathbb{C}$, many combinatorial problems naturally surface when we instead work with matrices over a finite field. As some recent work has been done in these areas, we turn our attention to the problem of enumerating the square matrices with entries in $\mathbb{Z}_{p^k}$ that are diagonalizable over $\mathbb{Z}_{p^k}$. This turns out to be significantly more nontrivial than its finite field counterpart due to the presence of zero divisors in $\mathbb{Z}_{p^k}$. \end{abstract} \section{Introduction} A classic problem in linear algebra concerns whether a matrix $A \in M_n(K)$ (where $K$ is a field) is diagonalizable: There exists an invertible matrix $P \in GL_n(K)$ and a diagonal matrix $D \in M_n(K)$ such that $A = PDP^{-1}$. It is known that if $A$ is diagonalizable, then $D$ is unique up to the order of its diagonal elements. Besides being useful for computing functions of matrices (and therefore often giving a solution to a system of linear differential equations), this problem has applications in the representation of quadratic forms. \vspace{.1 in} If we consider $M_n(K)$ when $K$ is a finite field, one natural problem is to enumerate $\text{Eig}_n(K)$, the set of $n \times n$ matrices over $K$ whose $n$ eigenvalues, counting multiplicity, are in $K$. Olsavsky \cite{Olsavsky} initiated this line of inquiry, and determined that for any prime $p$, $$|\text{Eig}_2(\mathbb{F}_p)| = \frac{1}{2} \Big(p^4 + 2p^3 - p^2\Big).$$ \noindent More recently, Kaylor and Offner \cite{Kaylor} gave a procedure to enumerate $\text{Eig}_n(\mathbb{F}_q)$, thereby extending Olsavsky's work for any $n$ and any finite field $\mathbb{F}_q$. 
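\vspace{.1 in} For small primes, Olsavsky's count can be confirmed directly by exhaustive search. The short Python sketch below is our own sanity check (it is not code from \cite{Olsavsky} or \cite{Kaylor}, and the function name \texttt{eig2\_count} is ours); it uses the observation that a $2 \times 2$ matrix $A$ over $\mathbb{F}_p$ has both of its eigenvalues in $\mathbb{F}_p$ exactly when its characteristic polynomial $t^2 - (\text{tr}\,A)t + \det A$ has a root $t \in \mathbb{F}_p$, since the second root $\text{tr}\,A - t$ then lies in $\mathbb{F}_p$ as well.
\begin{verbatim}
from itertools import product

def eig2_count(p):
    # brute-force count of 2 x 2 matrices over F_p whose two eigenvalues
    # (counted with multiplicity) lie in F_p
    count = 0
    for a, b, c, d in product(range(p), repeat=4):
        tr, det = (a + d) % p, (a * d - b * c) % p
        if any((t * t - tr * t + det) % p == 0 for t in range(p)):
            count += 1
    return count

for p in (2, 3, 5, 7):
    formula = (p**4 + 2 * p**3 - p**2) // 2
    print(p, eig2_count(p), formula)   # the two counts agree
\end{verbatim}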
\vspace{.1 in} Inspired by these works, we turn our attention to $n \times n$ matrices over $\mathbb{Z}_{p^k}$, where $p$ is a prime and $k$ is a positive integer. More specifically, we investigate the problem about enumerating $\text{Diag}_n(\mathbb{Z}_{p^k})$, the set of $n \times n$ diagonalizable matrices over $\mathbb{Z}_{p^k}$. This is significantly more involved when $k \geq 2$, and many of the difficulties arise from having to carefully consider the zero divisors of $\mathbb{Z}_{p^k}$, namely any integral multiple of $p$. \vspace{.1 in} In Section 2, we review the pertinent definitions and notations for working with matrices over commutative rings. Most notably, we give a crucial theorem that essentially states that a diagonalizable matrix over $\mathbb{Z}_{p^k}$ is unique up to the ordering of its diagonal entries. In Section 3, we give the basic procedure for enumerating $\text{Diag}_n(\mathbb{Z}_{p^k})$ and apply it to the case where $n=2$ in Section 4. In order to deal with the cases where $n \geq 3$ in a systematic manner, we introduce to any diagonal matrix an associated weighted graph in Section 5 that allows us to find $|\text{Diag}_3(\mathbb{Z}_{p^k})|$ and $|\text{Diag}_4(\mathbb{Z}_{p^k})|$ in Sections 6 and 7, respectively. In the final sections, we use our work to find the proportion of matrices that are diagonalizable over $\mathbb{Z}_{p^k}$ and conclude by giving ideas for future research based on the ideas in this article. As far as we understand, all results and definitions from Proposition 3.1 in Section 3 onward are original. \section{Background} In this section, we give some definitions from matrix theory over rings that allow us to extend some notions of matrices from elementary linear algebra to those having entries in $\mathbb{Z}_{p^k}$. For the following definitions, we let $R$ denote a commutative ring with unity. For further details, we refer the interested reader to \cite{Brown}. To fix some notation, let $M_n(R)$ denote the set of $n \times n$ matrices with entries in $R$. The classic definitions of matrix addition and multiplication as well as determinants generalize in $M_n(R)$ in the expected manner. In general, $M_n(R)$ forms a non-commutative ring with unity $I_n$, the matrix with 1s on its main diagonal and 0s elsewhere. Next, we let $GL_n(R)$ denote the set of invertible matrices in $M_n(R)$; that is, $$GL_n(R) = \{A \in M_n(R) \, : \, AB = BA = I_n \text{ for some } B \in M_n(R)\}.$$ \noindent Note that $GL_n(R)$ forms a group under matrix multiplication and has alternate characterization $$GL_n(R) = \{A \in M_n(R) \, : \, \det A \in R^*\},$$ \noindent where $R^*$ denotes the group of units in $R$. Observe that when $R$ is a field $K$, we have $K^* = K \backslash \{0\}$; thus we retrieve the classic fact for invertible matrices over $K$. For this article, we are specifically interested in the case when $R = \mathbb{Z}_{p^k}$ where $p$ is prime and $k \in \mathbb{N}$. Then, $$GL_n(\mathbb{Z}_{p^k}) = \{A \in M_n(\mathbb{Z}_{p^k}) \, | \, \det A \not\equiv 0 \bmod p\};$$ \noindent in other words, we can think of an invertible matrix with entries in $\mathbb{Z}_{p^k}$ as having a determinant not divisible by $p$. \begin{definition} We say that $A \in M_n(R)$ is \textbf{diagonalizable over $R$} if $A$ is similar to a diagonal matrix $D \in M_n(R)$; that is, $A=PDP^{-1}$ for some $P \in GL_n(R)$. 
\end{definition} Recall that any diagonalizable matrix over a field is similar to a diagonal matrix that is unique up to the ordering of its diagonal entries. Since $\mathbb{Z}_{p^k}$ is \emph{not} a field whenever $k \geq 2$, we now give a generalization of this key result to matrices over $\mathbb{Z}_{p^k}$. This provides a foundational result that allows us to use the methods from \cite{Kaylor} to enumerate diagonalizable matrices over $\mathbb{Z}_{p^k}$. Although we originally came up with a proof of this result, the following elegant proof was suggested to the authors by an anonymous MathOverflow user; see \cite{User}. \begin{theorem} \label{thm:DDT} Any diagonalizable matrix over $\mathbb{Z}_{p^k}$ is similar to exactly one diagonal matrix that is unique up to ordering of its diagonal entries. \end{theorem} \begin{proof} Suppose that $D, D' \in M_n(\mathbb{Z}_{p^k})$ are diagonal matrices such that $D' = PDP^{-1}$ for some $P \in GL_n(\mathbb{Z}_{p^k})$. Writing $D = \text{diag}(d_1, \dots , d_n)$, $D' = \text{diag}(d'_1, \dots , d'_n)$, and $P = (p_{ij})$, we see that $D' = PDP^{-1}$ rewritten as $PD = D' P$ yields $p_{ij} d_j = d'_i p_{ij}$ for all $i, j$. \vspace{.1 in} Since $P \in GL_n(\mathbb{Z}_{p^k})$, we know that $\det{P} \in \mathbb{Z}_{p^k}^*$, and thus $\det{P} \not\equiv 0 \bmod p$. However, since $\det{P} = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i} p_{i, \sigma(i)}$, and the set of non-units in $\mathbb{Z}_{p^k}$ (which is precisely the subset of elements congruent to 0 mod $p$) is additively closed, there exists $\sigma \in S_n$ such that $\prod_{i} p_{i, \sigma(i)} \in \mathbb{Z}_{p^k}^*$ and thus $p_{i,\sigma(i)} \in \mathbb{Z}_{p^k}^*$ for all $i$. \vspace{.1 in} Then for this choice of $\sigma$, it follows that $p_{i,\sigma(i)} d_{\sigma(i)} = d'_i \, p_{i,\sigma(i)}$ for each $i$, and since $p_{i,\sigma(i)} \in \mathbb{Z}_{p^k}^*$, we deduce that $d'_i = d_{\sigma(i)}$ for each $i$. In other words, the diagonal entries of $D'$ are a permutation of those of $D$, giving us the desired result. \end{proof} \vspace{.1 in} \noindent \textbf{Remark:} Theorem \ref{thm:DDT} does not extend to $\mathbb{Z}_m$ for a modulus $m$ with more than one prime factor. As an example from \cite{Brown}, the matrix $\begin{pmatrix} 2 & 3 \\ 4 & 3 \end{pmatrix} \in M_2(\mathbb{Z}_6)$ has two distinct diagonalizations $$\begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 3 \\ 5 & 2 \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 3 \\ 5 & 2 \end{pmatrix}^{-1}.$$ The resulting diagonal matrices are thus similar over $\mathbb{Z}_6$ although their diagonal entries are not rearrangements of one another. \section{How to determine \texorpdfstring{$|\text{Diag}_n(\mathbb{Z}_{p^k})|$}{TEXT}} In this section, we give a procedure that allows us to determine $|\text{Diag}_n(\mathbb{Z}_{p^k})|$, the number of matrices in $M_n(\mathbb{Z}_{p^k})$ that are diagonalizable over $\mathbb{Z}_{p^k}$. The main idea is to use a generalization of a lemma of Kaylor and Offner (Lemma 3.1 in \cite{Kaylor}). Before stating it, we first fix some notation in the following definition. \begin{definition} Let $R$ be a commutative ring with 1, and fix $A \in M_n(R)$.
\begin{itemize} \item The \textbf{similarity (conjugacy) class} of $A$, denoted by $S(A)$, is the set of matrices similar to $A$: $$S(A) = \{B\in M_n(R) \, : \, B=PAP^{-1} \text{ for some } P \in GL_n(R)\}.$$ \item The \textbf{centralizer} of $A$, denoted by $C(A)$, is the set of invertible matrices that commute with $A$: $$C(A) = \lbrace P \in GL_n(R) \, : \, PA=AP \rbrace.$$ \end{itemize} \end{definition} \noindent Note that $P \in C(A)$ if and only if $A=PAP^{-1}$, and moreover $C(A)$ is a subgroup of $GL_n(R)$. \begin{lemma} \label{lemma:counting} Let $R$ be a finite commutative ring. For any $A \in M_n(R)$, we have $\displaystyle \vert S(A)\vert = \frac{\vert GL_n(R)\vert }{\vert C(A)\vert}.$ \end{lemma} \begin{proof} This is proved verbatim as Lemma 3.1 in \cite{Kaylor} upon replacing a finite field with a finite commutative ring. Alternatively, this is a direct consequence of the Orbit-Stabilizer Theorem where $GL_n(R)$ is acting on $M_n(R)$ via conjugation. \end{proof} To see how this helps us in $M_n(\mathbb{Z}_{p^k})$, recall by Theorem \ref{thm:DDT} that the similarity class of a given diagonalizable matrix can be represented by a unique diagonal matrix (up to ordering of diagonal entries). Therefore, we can enumerate $\text{Diag}_n(\mathbb{Z}_{p^k})$ by first enumerating the diagonal matrices in $M_n(\mathbb{Z}_{p^k})$ and then counting how many matrices in $M_n(\mathbb{Z}_{p^k})$ are similar to a given diagonal matrix. Then, Lemma \ref{lemma:counting} yields \begin{equation}\label{eq:1} |\text{Diag}_n(\mathbb{Z}_{p^k})| = \sum_{D \in M_n(\mathbb{Z}_{p^k})} |S(D)| = \sum_{D \in M_n(\mathbb{Z}_{p^k})} \frac{\vert GL_n(\mathbb{Z}_{p^k})\vert }{\vert C(D)\vert}, \end{equation} where it is understood that each diagonal matrix $D$ represents a distinct similarity class of diagonal matrices. Observe that diagonal matrices having the same diagonal entries up to order belong to the same similarity class and are counted as different matrices when computing the size of their similarity class. First, we give a formula for $\vert GL_n(\mathbb{Z}_{p^k}) \vert$. As this seems to be surprisingly not well-known, we state and give a self-contained proof of this result inspired by \cite{Bollman} (for a generalization, see \cite{Han}). \begin{lemma} $\vert GL_n(\mathbb{Z}_{p^k})\vert = p^{n^2(k-1)} \displaystyle \prod_{l=1}^{n} (p^n - p^{l-1}).$ \end{lemma} \begin{proof} First, we compute $|GL_n(\mathbb{Z}_p)|$ by enumerating the possible columns of its matrices. For $A \in GL_n(\mathbb{Z}_p)$, there are $p^n - 1$ choices for the first column of $A$, as the zero column vector is never linearly independent. Next, we fix $l \in \{2, 3, \dots, n\}$. After having chosen the first $(l-1)$ columns, there are $(p^n - 1) - (p^{l-1} - 1) = p^n - p^{l-1}$ choices for the $l$-th column, because we want these $l$ columns to be linearly independent over $\mathbb{Z}_p$ (and there are $p$ multiples for each of the first $(l-1)$ columns). Therefore, we conclude that $$\vert GL_n(\mathbb{Z}_{p})\vert = \displaystyle \prod_{l=1}^{n} (p^n - p^{l-1}).$$ Hereafter, we assume that $k \geq 2$. Consider the mapping $\psi : M_n(\mathbb{Z}_{p^k}) \rightarrow M_n(\mathbb{Z}_{p})$ defined by $\psi(A) = A\bmod p $; note that $\psi$ is a well-defined (due to $p \mid p^k$) surjective ring homomorphism. Moreover, since ker$\;\psi = \{A \in M_n(\mathbb{Z}_{p^k}) \, : \, \psi(A) = 0\bmod p\}$ (so that every entry in such a matrix is divisible by $p$), we deduce that $|\text{ker}\;\psi| = (p^k / p)^{n^2} = p^{(k-1)n^2}$. 
\vspace{.1 in} Then, restricting $\psi$ to the respective groups of invertible matrices, the First Isomorphism Theorem yields $${GL_n(\mathbb{Z}_{p^k})} / {\ker\;\psi} \cong\; GL_n(\mathbb{Z}_p).$$ \noindent Therefore, we conclude that $$\vert GL_n(\mathbb{Z}_{p^k})\vert = |\ker\psi| \cdot |GL_n(\mathbb{Z}_{p})| = p^{n^2(k-1)} \displaystyle \prod_{l=1}^{n} (p^n - p^{l-1}).$$ \end{proof} We next turn our attention to the problem of enumerating the centralizer of a diagonal matrix in $\mathbb{Z}_{p^k}$. \begin{prop}\label{thm:centralizer} Let $D \in M_n(\mathbb{Z}_{p^k})$ be a diagonal matrix whose distinct diagonal entries $\lambda_1, \dots, \lambda_g$ have multiplicities $m_1, \dots, m_g$, respectively. Then, $$|C(D)| = \Big(\prod_{i = 1}^g |GL_{m_i}(\mathbb{Z}_{p^k})|\Big) \cdot \Big( \prod_{j = 2}^g \prod_{i = 1}^{j-1} p^{2m_im_jl_{ij}}\Big),$$ where $l_{ij}$ is the non-negative integer satisfying $p^{l_{ij}} \mid\mid (\lambda_i - \lambda_j)$ for each $i$ and $j$; that is, $$\lambda_i - \lambda_j = rp^{l_{ij}} \text{ for some } r \in \mathbb{Z}_{p^{k-l_{ij}}}^*.$$ \end{prop} \begin{proof} Assume without loss of generality that all matching diagonal entries of $D$ are grouped together; that is, we can think of each $\lambda_i$ with multiplicity $m_i$ as having its own $m_i \times m_i$ diagonal block of the form $\lambda_i I_{m_i}$ within $D$. \vspace{.1 in} To find the centralizer of $D$, we need to account for all $A \in GL_n(\mathbb{Z}_{p^k})$ such that $AD = DA$. Writing $A = (A_{ij})$, where $A_{ij}$ is an $m_i \times m_j$ block, computing the necessary products and equating like entries yields $$\lambda_i A_{ij} = \lambda_j A_{ij}.$$ \noindent If $i \neq j$, then $(\lambda_i - \lambda_j) A_{ij} \equiv 0 \bmod p^k$. Therefore, $A_{ij} \equiv 0 \bmod p^{k - l_{ij}}$, and thus $A_{ij} \equiv 0 \bmod p$. Observe that this gives $p^{l_{ij}}$ possible values for each entry in $A_{ij}$ (and similarly for those in $A_{ji}$). \vspace{.1 in} Therefore, $A$ is congruent to a block diagonal matrix modulo $p$ with blocks $A_{ii}$ having dimensions $m_i \times m_i$ for each $i \in \{1, \dots, g\}$. Finally since $A \in GL_n(\mathbb{Z}_{p^k})$, this means that each $A_{ii} \in GL_{m_i}(\mathbb{Z}_{p^k})$. With this last observation, the formula for $|C(D)|$ now follows immediately. \end{proof} Proposition \ref{thm:centralizer} motivates the following classification of diagonal matrices in $\mathbb{Z}_{p^k}$. \begin{definition} Let $D \in M_n(\mathbb{Z}_{p^k})$ be a diagonal matrix whose distinct diagonal entries $\lambda_1, \dots, \lambda_g$ have multiplicities $m_1, \dots, m_g$, respectively. The \textbf{type} of $D$ is given by the following two quantities: \begin{itemize} \item The partition $n = m_1 + \dots + m_g$ \item The set $\{l_{ij}\}$ indexed over all $1 \leq i < j \leq g$, where $p^{l_{ij}} \mid\mid (\lambda_j - \lambda_i)$. \end{itemize} \noindent Then we say that two diagonal matrices $D, D' \in M_n(\mathbb{Z}_{p^k})$ have the \textbf{same type} if and only if $D$ and $D'$ share the same partition of $n$, and there exists a permutation $\sigma \in S_n$ such that $l_{ij} = l'_{\sigma(i)\sigma(j)}$ for all $1 \leq i < j \leq g$. We denote the set of all distinct types of diagonal $n \times n$ matrices by $\mathcal{T}(n)$. 
\end{definition} \noindent \textbf{Example:} Consider the following four diagonal matrices from $M_3(\mathbb{Z}_8)$: $$D_1 = \begin{pmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\0 & 0 & 3\end{pmatrix},\, D_2 = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\0 & 0 & 5\end{pmatrix}, \, D_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0\\0 & 0 & 3 \end{pmatrix},\, D_4 = \begin{pmatrix} 7 & 0 & 0 \\ 0 & 5 & 0\\0 & 0 & 7 \end{pmatrix}.$$ \noindent Since $D_1$ has partition $1 + 1 + 1$, while $D_2$, $D_3$, and $D_4$ have the partition $2 + 1$, $D_1$ does not have the same type as any of $D_2$, $D_3$, and $D_4$. Moreover, $D_2$ and $D_3$ do not have the same type, because $2^2 \mid\mid(5 - 1)$, while $2^1 \mid\mid(3 - 1)$. However, $D_3$ and $D_4$ have the same type, because they share the same partition $2+1$ and $2^1$ exactly divides both $3-1$ and $7-5$. \vspace{.1 in} It is easy to verify that if $D$ and $D'$ are two $n \times n$ diagonal matrices of the same type, then $|C(D)| = |C(D')|$ and thus $|S(D)| = |S(D')|$. Consequently, for any type $T$, define $c(T)$ and $s(T)$ by $c(T) = |C(D)|$ and $s(T) = |S(D)|$ where $D$ is any matrix of type $T$. Then, letting $t(T)$ denote the number of diagonal matrices (up to permutations of the diagonal entries) having type $T$, we can rewrite (\ref{eq:1}) as \begin{equation} \label{eq:2} |\text{Diag}_n(\mathbb{Z}_{p^k})| = \sum_{T \in \mathcal{T}(n)} t(T) \, \frac{\vert GL_n(\mathbb{Z}_{p^k})\vert }{c(T)}. \end{equation} \section{Enumerating the \texorpdfstring{$2 \times 2$}{TEXT} Diagonalizable Matrices} We now illustrate our procedure for determining the value of $\vert \text{Diag}_2(\mathbb{Z}_{p^k}) \vert$. \begin{theorem} The number of $2 \times 2$ matrices with entries in $\mathbb{Z}_{p^k}$ that are diagonalizable over $\mathbb{Z}_{p^k}$ is $$\vert \emph{Diag}_2(\mathbb{Z}_{p^k}) \vert = p^k + \dfrac{p^{k+1}(p^2-1)(p^{3k}-1)}{2(p^3-1)}.$$ \end{theorem} \begin{proof} In order to find $\vert \text{Diag}_2(\mathbb{Z}_{p^k}) \vert$, we need to enumerate all of the $2 \times 2$ diagonal matrix types. First of all, there are two possible partitions of $2$, namely $2$ and $1+1$. The trivial partition yields one distinct type of diagonal matrices $$T_1 = \Big\{\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} \; : \; \lambda \in \mathbb{Z}_{p^k} \Big\},$$ \noindent which consists of the $2 \times 2$ scalar matrices. Since there are $p^k$ choices for $\lambda$, we have $t(T_1) = p^k$. Moreover $c(T_1) = |GL_2(\mathbb{Z}_{p^k})|$, because any invertible matrix commutes with a scalar matrix. \vspace{.1 in} The nontrivial partition $2 = 1 + 1$ yields the remaining $k$ distinct types of matrices that we index by $i \in \{0, 1, \dots , k-1\}$: $$T_2^{(i)} = \Big\{\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda _2 \end{pmatrix} \; : \; p^i \; || \; (\lambda_1-\lambda_2) \Big\}.$$ \noindent Fix $i \in \{0, 1, \dots , k-1\}$; we now enumerate $t(T_2^{(i)})$ and $c(T_2^{(i)})$. For $t(T_2^{(i)})$, we first observe that there are $p^k$ choices for $\lambda_1$. To find the number of choices for $\lambda_2$, observe that $\lambda_1-\lambda_2 \equiv rp^i \bmod p^k$ for some unique $r \in (\mathbb{Z}_{p^{k-i}})^*$. Hence, there are $\phi(p^{k-i})$ choices for $r$ and thus for $\lambda_2$. (As a reminder, $\phi$ denotes the Euler phi function, and $\phi(p^l) = p^{l-1}(p-1)$.)
Since swapping $\lambda_1$ and $\lambda_2$ does not change the similarity class of the diagonal matrix, we conclude that $$t(T_2^{(i)})=\dfrac{p^k \phi (p^{k-i})}{2!}.$$ \noindent Next, applying Proposition \ref{thm:centralizer} yields $c(T_2^{(i)}) = p^{2i} \phi(p^k)^2.$ \vspace{.1 in} Finally, we use (\ref{eq:2}) to enumerate the $2 \times 2$ diagonal matrices and conclude that \begin{align*} \vert\text{Diag}_2(\mathbb{Z}_{p^k})\vert &= t(T_1) \frac{\vert GL_2(\mathbb{Z}_{p^k})\vert }{c(T_1)} + \sum_{i=0}^{k-1} t(T_2^{(i)}) \frac{\vert GL_2(\mathbb{Z}_{p^k})\vert }{c(T_2^{(i)})}\\ & = p^k + \dfrac{p^k}{2} \cdot \dfrac{p^{4(k-1)}(p^2-1)(p^2-p)}{\phi(p^k)^2} \sum_{i=0}^{k-1} \dfrac{\phi(p^{k-i})}{p^{2i}} \\ & = p^k + \dfrac{p^k}{2} \cdot \dfrac{p^{4(k-1)}(p^2-1)(p^2-p)}{(p^{k-1} (p-1))^2} \sum_{i=0}^{k-1} \dfrac{p^{k-i-1} (p-1)}{p^{2i}} \\ & = p^k + \dfrac{p^{4k-2}(p^2-1)}{2} \sum_{i=0}^{k-1} \dfrac{1}{p^{3i}} \\ & = p^k + \dfrac{p^{4k-2}(p^2-1)}{2} \cdot \frac{1 - p^{-3k}}{1 - p^{-3}}, \text{ using the geometric series}\\ & = p^k + \dfrac{p^{k+1}(p^2-1)(p^{3k}-1)}{2(p^3-1)}. \end{align*} \end{proof} \noindent \textbf{Remarks}: Observe that in the case where $k = 1$, the formula reduces to $\frac{1}{2}(p^4 - p^2 + 2p)$, which can be found at the end of Section 3 in Kaylor \cite{Kaylor} after removing the contributions from the $2 \times 2$ Jordan block case. Moreover, for the diagonal matrix types corresponding to the nontrivial partition and $i \geq 1$, we are dealing with differences of diagonal entries yielding zero divisors in $\mathbb{Z}_{p^k}$; these scenarios never occur when $k = 1$ because $\mathbb{Z}_p$ is a field. \section{Enumerating \texorpdfstring{$n \times n$}{TEXT} Diagonal Matrices of a Given Type} \subsection{Representing a Diagonal Matrix with a Valuation Graph} As we increase the value of $n$, the enumeration of $n \times n$ diagonalizable matrices over $\mathbb{Z}_{p^k}$ becomes more involved, because the number of distinct types becomes increasingly difficult to catalog. The difficulties come both from the powers of $p$ dividing the differences of the diagonal entries of the matrix and from the increasing number of partitions of $n$. In order to aid us in classifying diagonal matrices into distinct types, we introduce an associated graph to help visualize these scenarios. \vspace{.1 in} Let $D \in M_n(\mathbb{Z}_{p^k})$ be diagonal with distinct diagonal entries $\lambda_1, \dots, \lambda_g \in \mathbb{Z}_{p^k}$. Ordering the elements in $\mathbb{Z}_{p^k}$ by $0 < 1 < 2 < \dots < p^k - 1$, we can assume without loss of generality that $\lambda_1 < \lambda_2 < \dots < \lambda_g$ (since $D$ is similar to such a matrix by using a suitable permutation matrix as the change of basis matrix). Associated to $D$, we define its associated weighted complete graph $G_D$ (abbreviated as $G$ when no ambiguity can arise) as follows: We label its $g$ vertices with the diagonal entries $\lambda_1, \lambda_2, \dots , \lambda_g$, and given the edge between the vertices $\lambda_i$ and $\lambda_j$, we define its weight $l_{ij}$ as the unique non-negative integer satisfying $p^{l_{ij}} \mid\mid (\lambda_i - \lambda_j)$. \begin{definition} Let $D \in M_n(\mathbb{Z}_{p^k})$ be diagonal. We call the weighted complete graph $G$ associated to $D$ as constructed above the \textbf{valuation graph} of $D$. \end{definition} \bigskip \noindent The following fundamental property of such graphs justifies why we call these valuation graphs. 
\begin{prop} \textbf{(Triangle Inequality)} \label{thm:triangleinequality} Let $G$ be a valuation graph. Given vertices $\lambda_a$, $\lambda_b$, and $\lambda_c$ in $G$ and edges $E_{ab}$, $E_{ac}$, and $E_{bc}$, the weights satisfy $l_{bc} \geq \min \{l_{ab}, l_{ac}\}$. In particular, $l_{bc} = \min \{l_{ab}, l_{ac}\}$ if $l_{ab} \neq l_{ac}$. \end{prop} \begin{proof} By hypothesis, we know that $l_{ab}$ and $l_{ac}$ are the largest non-negative integers satisfying $$\lambda_a - \lambda_b = rp^{l_{ab}} \text{ and } \lambda_a - \lambda_c = sp^{l_{ac}} \text{ for some } r, s \in \mathbb{Z}_{p^k}^*.$$ \noindent Without loss of generality, assume that $l_{ab} \geq l_{ac}$. Then, we obtain $$\lambda_b - \lambda_c = (\lambda_a - \lambda_c) - (\lambda_a - \lambda_b) = p^{l_{ac}} (s - r p^{l_{ab} - l_{ac}}).$$ \noindent If $l_{ab} > l_{ac}$, then $(s - r p^{l_{ab} - l_{ac}}) \in \mathbb{Z}_{p^k}^*$, and if $l_{ab} = l_{ac}$ then $s-r$ may or may not be a zero divisor in $\mathbb{Z}_{p^k}$. The claim now immediately follows. \end{proof} Observe that since the valuation graph arises from a diagonal matrix in $M_n(\mathbb{Z}_{p^k})$, it is clear that its weights can only attain integral values between 0 and $k-1$ inclusive. In fact, we can give another restriction on the possible values of its weights. \begin{lemma}\label{thm:number_of_weights} A valuation graph $G$ on $g$ vertices has no more than $g-1$ weights. \end{lemma} \begin{proof} We prove this by induction on the number of vertices $g$. This claim is true for $g = 2$, because such a graph has exactly one weight. Next, we assume that the claim is true for any valuation graph on $g$ vertices, and consider a valuation graph $G$ with vertices $\lambda_1, \dots, \lambda_{g+1}$. By the inductive hypothesis, the valuation subgraph $H$ of $G$ with vertices $\lambda_1, \dots, \lambda_g$ has no more than $g-1$ weights. It remains to consider the weights of the edges from these vertices to the remaining vertex $\lambda_{g+1}$. If every one of these edges has a weight already appearing in $H$, then we are done. Otherwise, suppose that one of these edges (call it $E$) has a new weight. Then for any edge $E'$ other than $E$ that has $\lambda_{g+1}$ as a vertex, the Triangle Inequality (Prop. \ref{thm:triangleinequality}) implies that $E'$ either has a weight already appearing in $H$ or has the same weight as $E$; indeed, if $E'$ had a new weight different from that of $E$, then the edge of $H$ joining the other endpoints of $E$ and $E'$ would carry the smaller of these two new weights, which is impossible. Hence, $G$ has no more than $(g-1)+1 = g$ weights as required, and this completes the inductive step. \end{proof} We know that for any diagonal matrix $D \in M_n(\mathbb{Z}_{p^k})$, its valuation graph $G$ satisfies the Triangle Inequality. Moreover, any complete graph on $n$ vertices satisfying the Triangle Inequality necessarily corresponds to a collection of diagonal matrices with distinct diagonal entries in $M_n(\mathbb{Z}_{p^k})$ as long as there are at most $n-1$ weights and the maximal weight is at most $k-1$. Furthermore, if each vertex is assigned a multiplicity, then such a graph also corresponds to a collection of diagonal matrices with repeated diagonal entries in $M_N(\mathbb{Z}_{p^k})$, where $N$ is the sum of these multiplicities. \subsection{Enumerating Diagonalizable Matrices with a Given Valuation Graph} Throughout this section, we assume that the diagonal matrix in $M_n(\mathbb{Z}_{p^k})$ has distinct diagonal entries. Given its valuation graph $G$, we construct a specific kind of spanning tree that will aid us in enumerating the diagonal matrices in $M_n(\mathbb{Z}_{p^k})$ having valuation graph $G$. In a sense, such a spanning tree concisely shows the dependencies among the diagonal entries of a given diagonal matrix. 
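\vspace{.1 in} \noindent For readers who wish to experiment with these notions, the following short Python sketch (an illustration added here, not part of the formal development) computes the weights $l_{ij}$ of a valuation graph directly from a list of distinct diagonal entries and verifies the Triangle Inequality of Proposition~\ref{thm:triangleinequality} on every triangle; the entries chosen below are arbitrary.
\begin{verbatim}
from itertools import combinations

def weight(a, b, p, k):
    # the integer l with p^l exactly dividing (a - b) in Z_{p^k};
    # assumes a and b are distinct modulo p^k
    d = (a - b) % p**k
    l = 0
    while d % p == 0:
        d //= p
        l += 1
    return l

def valuation_graph(entries, p, k):
    # edge weights {(i, j): l_ij}, indexed by pairs of positions in `entries`
    return {(i, j): weight(entries[i], entries[j], p, k)
            for i, j in combinations(range(len(entries)), 2)}

def check_triangle_inequality(graph, n):
    def w(i, j):
        return graph[(i, j)] if i < j else graph[(j, i)]
    for x, y, z in combinations(range(n), 3):
        for a, b, c in ((x, y, z), (y, x, z), (z, x, y)):
            lab, lac, lbc = w(a, b), w(a, c), w(b, c)
            assert lbc >= min(lab, lac)
            if lab != lac:
                assert lbc == min(lab, lac)

if __name__ == "__main__":
    p, k = 3, 3                      # work over Z_27
    entries = [1, 4, 5, 13]          # arbitrary distinct diagonal entries
    graph = valuation_graph(entries, p, k)
    check_triangle_inequality(graph, len(entries))
    print(graph)
\end{verbatim}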
\begin{prop} Given a diagonal matrix $D \in M_n(\mathbb{Z}_{p^k})$ with distinct diagonal entries having valuation graph $G$, there exists a spanning tree $T \subset G$ from which we can uniquely reconstruct $G$. We call $T$ a \textbf{permissible spanning tree} of $G$. \end{prop} \begin{proof} Suppose that $G$ is a valuation graph on $n$ vertices with $r$ distinct weights $a_1, a_2, \ldots , a_r$ listed in increasing order. In order to construct a permissible spanning tree for $G$, we consider the following construction. \vspace{.1 in} For each weight $a_i$ with $1 \leq i \leq r$, define $G_{a_i}$ to be the subgraph of $G$ consisting of the edges with weight \emph{at least} $a_i$ along with their respective vertices. From the definition of a weight, we immediately see that $G_{a_1} \supseteq G_{a_2} \supseteq \dots \supseteq G_{a_r}$. Moreover, Prop. \ref{thm:triangleinequality} implies that each connected component of $G_{a_i}$ is a complete subgraph of $G$. \vspace{.1 in} To use these subgraphs to construct a permissible spanning tree for $G$, we start with the edges in $G_{a_r}$. For each connected component of $G_{a_r}$, we select a spanning tree and include all of its edges in the edge set of $T$. Next, we consider the edges in $G_{a_{r-1}}$. For each connected component of $G_{a_{r-1}}$, we select a spanning tree that contains the edges already chosen in the previous step. We inductively repeat this process until we have added any pertinent edges from $G_{a_1}$. (Note that since $G_{a_1}$ contains only one connected component, $T$ must also be connected.) The result is the desired permissible spanning tree $T$ for our valuation graph $G$. \vspace{.1 in} Next, we show how to uniquely reconstruct the valuation graph $G$ from $T$. To aid in this procedure, we say that the \textit{completing edge} of two edges $e_1,e_2$ in $G$ that share a vertex is the edge $e_3$ which forms a complete graph $K_3$ with $e_1$ and $e_2$. \vspace{.1 in} Start by looking at the edges having the largest weight $a_r$ in $T$. If two edges with weight $a_r$ share a vertex, then their completing edge in $G$ must also have weight $a_r$ by the maximality of $a_r$. Upon completing this procedure, there can be no other edges in $G$ of weight $a_r$, as this would violate the construction of $T$. \vspace{.1 in} Next consider the edges having weight $a_{r-1}$ (if they exist). For any two edges of weight $a_{r-1}$ that share a vertex, their completing edge must have weight $a_{r-1}$ or $a_r$ by the Triangle Inequality. If the completing edge had weight $a_r$, then we have already accounted for this edge in the previous step. Otherwise, we conclude that the completing edge must have weight $a_{r-1}$. \vspace{.1 in} Continuing this process down to the lowest weight $a_1$, we reconstruct $G$ as desired. \end{proof} We now return to the problem of enumerating diagonal $n \times n$ matrices over $\mathbb{Z}_{p^k}$ of a given type. We begin with the case that $A \in M_n(\mathbb{Z}_{p^k})$ is a diagonal matrix over $\mathbb{Z}_{p^k}$ with distinct diagonal entries. Let $G$ be its associated valuation graph with $r$ distinct weights $a_1, a_2, \dots, a_r$. \begin{definition} Let $T$ be a permissible spanning tree of a valuation graph $G$. We say that a subset of edges in $T$, all with weight $a_t$, is \textbf{linked} if there exists a subtree $S$ of $T$ containing these edges such that each edge in $S$ has weight at least $a_t$. 
\end{definition} We use the notion of linked edges to partition the set of edges from our permissible tree $T$ beyond their weights as follows. Let $L^{t}$ denote the set of edges in $T$ with weight $a_t$. Then, $L^{t}$ decomposes into pairwise disjoint sets $L_1^{t}, \dots, L_{\ell(t)}^{t}$ for some positive integer $\ell(t)$, where each $L_j^{t}$ is a maximal subset of linked edges from $L^{t}$. \begin{definition} Let $T$ be a permissible spanning tree for a given valuation graph $G$. For a given weight $a_t$, we say that $L_1^{t}, \dots, L_{\ell(t)}^{t}$ are the \textbf{linked cells} of the weight $a_t$. \end{definition} \begin{theorem}\label{thm:linked} Let $G$ be a valuation graph having $r$ distinct weights $a_1,a_2,\dots,a_r$ listed in increasing order, and let $T$ be a permissible spanning tree of $G$ with linked cells $L_j^{t}$. Then, the total number of diagonal matrix classes having distinct diagonal entries in $M_n(\mathbb{Z}_{p^k})$ with an associated valuation graph isomorphic to $G$ equals $$\frac{p^k}{|\emph{Aut}(G)|} \cdot \prod_{t=1}^r \prod_{j=1}^{\ell(t)} \prod_{i=1}^{|L_j^{t}|} \phi_{i}(p^{k-a_t}),$$ \noindent where $\phi_{i}(p^j) = p^j - ip^{j-1}$, and $\text{Aut}(G)$ denotes the group of weighted graph automorphisms of $G$. \end{theorem} \begin{proof} Fix a valuation graph $G$. The key idea is to consider the edges of its permissible spanning tree via linked cells, one weight at a time in descending order. Throughout the proof, we use the following convention: If an edge $E$ has vertices $\lambda_1,\lambda_2$ with $\lambda_2 > \lambda_1$, we refer to the value $\lambda_2 - \lambda_1$ as the \textit{edge difference} associated with $E$. \vspace{.1 in} First consider the edges in the linked cell of the maximal weight $a_r$. Without loss of generality, we start with the edges in $L_1^{r}$. Since $a_r$ is maximal, we know that $L_1^{r}$ is itself a tree. For brevity, we let $m = |L_1^{r}|$. Then, $L_1^{r}$ has $m$ edges connecting its $m+1$ vertices. We claim that there are $\prod_{i=1}^m \phi_i(p^{k-a_r})$ ways to label the values of the edge differences. \vspace{.1 in} To show this, we start by picking an edge in $L_1^{r}$, and let $\lambda_1$ and $\lambda_2$ denote its vertices. Since $\lambda_2 - \lambda_1 = s_1 p^{a_r}$ for some $s_1 \in \mathbb{Z}_{p^{k-a_r}}^*$, we see that $\lambda_2 - \lambda_1$ can attain $\phi(p^{k-a_r}) = \phi_1(p^{k-a_r})$ distinct values. Next, we pick a second edge in $L_1^{r}$ that connects to either $\lambda_1$ or $\lambda_2$; without loss of generality (relabeling vertices as needed), suppose it is $\lambda_2$. Letting $\lambda_3$ denote the other vertex of this edge, then $\lambda_3 - \lambda_2 = s_2 p^{a_r}$ for some $s_2 \in \mathbb{Z}_{p^{k-a_r}}^*$. However, because $a_r$ is the maximal weight in $G$, the edge connecting $\lambda_1$ and $\lambda_3$ also has weight $a_r$. On the other hand, we have $$\lambda_3 - \lambda_1 = (\lambda_3 - \lambda_2) + (\lambda_2 - \lambda_1) = (s_2 + s_1)p^{a_r} \text{ where } s_2 + s_1 \in \mathbb{Z}^*_{p^{k-a_r}}.$$ \noindent Hence, $s_2 \not\equiv -s_1 \bmod p$, and therefore there are $\phi_1(p^{k-a_r}) - p^{k-a_r-1} = \phi_2(p^{k-a_r})$ possible values for $s_2$. Repeating this procedure, we can assign $\phi_i(p^{k-a_r})$ values to the difference of the vertices from the $i$th edge in $L_1^{r}$. Now the claim immediately follows. \vspace{.1 in} The preceding discussion applies to any of the linked cells of weight $a_r$, because edges in distinct linked cells never share a common vertex. 
Hence, we conclude that the number of possible values of edge differences in $L^{r}$ equals $$\prod_{j=1}^{\ell(r)} \prod_{i=1}^{|L_j^{r}|} \phi_{i}(p^{k-a_r}).$$ Next, suppose that we have enumerated all edge differences from all linked cells having weights $a_{t+1}, \dots, a_r$ for some fixed $t$. We now consider linked cells for the weight $a_t$. The procedure proceeds just as before, with the only difference being that two edges of any weight lower than $a_r$ may be linked via some subtree of $T$ containing edges of higher weights. However, this presents no new difficulties. \vspace{.1 in} Fix a linked cell with weight $a_t$ and choose a first edge with vertices $\lambda_{c_1}$ and $\lambda_{c_2}$. As above, this edge corresponds to one of $\phi_1(p^{k-a_t})$ possible differences between values $\lambda_{c_1}$ and $\lambda_{c_2}$. Given another edge linked to the aforementioned edge in this linked cell, it either shares or does not share a vertex with the first edge. We consider these cases separately. \vspace{.1 in} First, suppose the two edges share a common vertex $\lambda_{c_2}$. Then as in the previous case, the connecting edge between $\lambda_{c_1}$ and $\lambda_{c_3}$ must have weight exactly $a_t$ (by the Triangle Inequality its weight is at least $a_t$, and if it were greater than $a_t$, then these vertices would already have been considered at a higher weight), and thus we can choose the value for $\lambda_{c_3} - \lambda_{c_2}$ in $\phi_2(p^{k-a_t})$ ways. \vspace{.1 in} Alternately, suppose that the two edges are connected through already established edges of higher weights on the vertices $\lambda_{d_1}, \lambda_{d_2}, \dots, \lambda_{d_s}$. Without loss of generality, assume that $\lambda_{c_1}$ and $\lambda_{c_4}$ are the initial and terminal vertices, respectively, of the resulting path in $T$. We know that $\lambda_{c_2} - \lambda_{c_1} = rp^{a_t}$ and $\lambda_{c_4} - \lambda_{c_3} = r'p^{a_t}$ for some $r,r' \in \mathbb{Z}^*_{p^{k-a_t}}$. Also since the edges connecting $\lambda_{c_2}$ to $\lambda_{d_1}$, $\lambda_{d_s}$ to $\lambda_{c_3}$, and $\lambda_{d_i}$ to $\lambda_{d_j}$ for all $1 \leq i < j \leq s$ have weights higher than $a_t$, it follows that $0 \equiv \lambda_{d_1}-\lambda_{c_2} \equiv \lambda_{c_3}-\lambda_{d_s} \equiv \lambda_{d_j}-\lambda_{d_i} \bmod{p^{a_t+1}}$ and these observations give us \begin{align*} \lambda_{c_4} - \lambda_{c_1} &\equiv (\lambda_{c_2} - \lambda_{c_1}) + (\lambda_{d_1} - \lambda_{c_2}) + (\lambda_{d_2} - \lambda_{d_1}) + \dots + (\lambda_{c_3} - \lambda_{d_s}) + (\lambda_{c_4} - \lambda_{c_3}) \\ &\equiv (r + r') p^{a_t} \bmod{p^{a_t+1}}. \end{align*} \noindent However, by an inductive use of the Triangle Inequality, we see that the edge directly connecting $\lambda_{c_1}$ and $\lambda_{c_4}$ must have weight $a_t$. Thus, $r + r' \not\equiv 0 \bmod p$, and the number of permissible choices for $r'$ is therefore $p^{k-a_t}-2p^{k-a_t-1} = \phi_2(p^{k-a_t})$. \vspace{.1 in} Continuing this process, we can see that when we add the $i$-th edge in this linked cell (if it exists), we can find a path between it and the previous $(i-1)$ edges in $T$ sharing the same linked cell, giving $\phi_i(p^{k-a_t})$ choices for the corresponding edge differences. \vspace{.1 in} At this point we have considered every edge in $T$. 
The number of possible edge differences among all of the edges in $T$ equals $$\prod_{t=1}^r \prod_{j=1}^{\ell(t)} \prod_{i=1}^{|L_j^{t}|} \phi_{i}(p^{k-a_t}).$$ In summary, we have specified the number of values that the differences of the endpoints of each of the edges in our permissible tree can attain. Consequently, as soon as we specify the value of one vertex, for which there are $p^k$ possible choices, we have uniquely determined (by our work above) the values of the remaining vertices through their differences. Therefore, the number of possible diagonal matrices with the given valuation graph equals $$p^k \cdot \prod_{t=1}^r \prod_{j=1}^{\ell(t)} \prod_{i=1}^{|L_j^{t}|} \phi_{i}(p^{k-a_t}).$$ \vspace{.1 in} Finally, we note that permuting the order of the diagonal entries of any diagonal matrix associated with $G$ yields a valuation graph isomorphic to $G$. Since these correspond to the weighted graph automorphisms of $G$, dividing our last formula by $|\text{Aut}(G)|$ yields the desired enumeration formula. \end{proof} \noindent \textbf{Remark:} Note that the group of weighted automorphisms of $G$ is a subgroup of the group of all automorphisms (under composition) of the underlying unweighted graph of $G$. Since $G$ is a complete graph with $n$ vertices, we know that there are $|S_n| = n!$ unweighted graph automorphisms of $G$ (which can be represented by $n \times n$ permutation matrices). Then, Lagrange's Theorem for groups implies that $|\text{Aut}(G)| = \frac{n!}{\sigma(G)}$, where $\sigma(G) = [S_n : \text{Aut}(G)]$ denotes the number of distinct weighted graphs obtained from $G$ by permuting its vertices. In this manner, one can alternatively determine the value of $|\text{Aut}(G)|$ by directly computing $\sigma(G)$. \vspace{.1 in} So far, Theorem \ref{thm:linked} allows us to enumerate the diagonal matrices with distinct diagonal entries having a given valuation graph. The following proposition addresses how to extend this count to diagonal matrices whose diagonal entries are not distinct. \begin{prop} \label{thm:multiple} Let $D \in M_n(\mathbb{Z}_{p^k})$ be a diagonal matrix whose distinct diagonal entries are $\lambda_1, \dots , \lambda_g$, and let $D' \in M_g(\mathbb{Z}_{p^k})$ be the corresponding diagonal matrix with (distinct) diagonal entries $\lambda_1, \dots , \lambda_g$. If $D$ has exactly $n_m$ distinct $m \times m$ diagonal blocks for each $m \in \{1, 2, \dots, n\}$, then $$t(T) = \frac{g!}{n_1! \, n_2! \cdots n_n!} \cdot t(T'),$$ where $T$ and $T'$ are the types of $D$ and $D'$, respectively. \end{prop} \begin{proof} Since we know by hypothesis that $D$ and $D'$ share the same number of distinct diagonal entries, it suffices to count the number of ways to arrange the diagonal blocks (each of which is distinguished by a different scalar on its diagonal) in $D$. Since the number of ways of arranging these diagonal blocks in $D$ equals $\frac{g!}{n_1! \, n_2! \cdots n_n!}$, the conclusion of the proposition now follows immediately. \end{proof} Now that we have Theorem \ref{thm:linked} and Proposition \ref{thm:multiple} at our disposal, we are ready to enumerate the diagonalizable $n \times n$ matrices in the cases where $n = 3$ and $n = 4$; we address this in the next two sections. Before doing so, we illustrate the theory of valuation graphs developed above with an example. 
\vspace{.1 in} \noindent \textbf{Example:} Consider the diagonal matrix $D \in M_6(\mathbb{Z}_{3^3})$ whose diagonal entries are 0, 1, 2, 4, 5, and 11. Then, its corresponding valuation graph $G$ is depicted in Figure 1 below. \begin{figure}[H] \centering \includegraphics[width = 2.3 in]{counting-k6-example.pdf} \caption{The valuation graph $G$ corresponding to $D$.} \end{figure} \noindent Observe that the number of distinct weights in $G$ is $3$, consistent with Lemma \ref{thm:number_of_weights}, and that the highest edge weight is $2$. \vspace{.1 in} Next, we give examples of permissible spanning trees for $G$ and partition their edges into linked cells. Figure 2 shows three permissible spanning trees $T_1,T_2,T_3$ for $G$ and their linked cells $L_1^1, L_1^2, L_2^2$, and $L_1^3$. \begin{figure}[H] \centering \includegraphics[width = 3 in]{k6-several-trees.pdf} \caption{Three permissible spanning trees for $G$ and their linked cells.} \end{figure} Although these spanning trees have different degree sequences, they all have the same edge decomposition into linked cells. Thus, we can use any of these permissible spanning trees to enumerate the number of similarity classes of diagonal matrices sharing $G$ as their valuation graph. To this end, it remains to compute $|\text{Aut}(G)|$. Since we can permute the vertices $2$ and $11$, as well as the vertices $1$ and $4$, without altering $G$, this implies that $|\text{Aut}(G)| = 2!\cdot2!$. Therefore by Theorem \ref{thm:linked}, the number of similarity classes of diagonal matrices with valuation graph $G$ equals \begin{align*} \frac{3^3}{2! \cdot 2!} \cdot \prod_{t=0}^2 \prod_{j=1}^{\ell(t)} \prod_{i=1}^{|L_j^{t}|} \phi_{i}(3^{3-t}) &= \frac{27}{4} \cdot\phi_1(3^3) \cdot \phi_2(3^3) \cdot \phi_1(3^2) \cdot \phi_1(3^2) \cdot \phi_1(3^1)\\ &= 78732. \end{align*} \section{Enumerating the \texorpdfstring{$3 \times 3$}{TEXT} Diagonalizable Matrices} \begin{theorem} The number of $3 \times 3$ matrices with entries in $\mathbb{Z}_{p^k}$ that are diagonalizable over $\mathbb{Z}_{p^k}$ is \begin{align*} |\emph{Diag}_3(\mathbb{Z}_{p^k})| &= p^k + \frac{p^{k+2}(p^3-1)(p^{5k}-1)}{p^5 - 1} + \frac{p^{k+3}(p^3-1)(p-2)(p+1)(p^{8k}-1)}{6(p^8 - 1)}\\ &+ \frac{p^{k+3}(p^2-1)}{2}\Bigg( \frac{p^{8k}-p^8}{p^8-1} - \frac{p^{5k}-p^5}{p^5-1}\Bigg). \end{align*} \end{theorem} \begin{proof} We first enumerate all of the $3 \times 3$ diagonal matrix types. There are three partitions of $3$, namely $3$, $2+1$, and $1+1+1$. The trivial partition yields the type of scalar matrices $$T_1 = \left \{ \begin{pmatrix} \lambda &&\\ & \lambda&\\ && \lambda\\ \end{pmatrix} \; : \; \lambda \in \mathbb{Z}_{p^k} \right\}.$$ \noindent As with the type of $2 \times 2$ scalar diagonal matrices, we have $t(T_1) = p^k$ and $c(T_1) = |GL_3(\mathbb{Z}_{p^k})|$. \vspace{.1 in} The partition $3 = 2+1$ comprises $k$ distinct types, indexed by $i \in \{0, 1, \dots , k-1\}$: $$T_2^{(i)} = \left\{\begin{pmatrix} \lambda_1 &&\\ & \lambda_1&\\ && \lambda_2\\ \end{pmatrix} \; : \; p^i \; || \; (\lambda_1-\lambda_2) \right\}.$$ \noindent Proposition \ref{thm:multiple} relates these types to the non-scalar types of $2 \times 2$ diagonal matrices, and thus $$t(T_2^{(i)}) = \frac{2!}{1!1!} \cdot \frac{p^k \phi(p^{k-i})}{2!} = p^k \phi(p^{k-i}).$$ \noindent Next, Proposition \ref{thm:centralizer} gives us $c(T_2^{(i)}) = \phi(p^k) \cdot \vert GL_2(\mathbb{Z}_{p^k}) \vert \cdot p^{4i}$. 
\vspace{.1 in} Finally, the partition $3=1+1+1$ comprises two distinct classes of diagonal matrix types that we concisely give by their respective valuation graphs in the figure below: \begin{figure}[H] \centering \includegraphics[width = 3 in]{k3.pdf} \caption{Two valuation graph classes in the $3 \times 3$ case.} \end{figure} For the first valuation graph, let $i \in \{0, 1, \dots, k-1\}$ denote the common weight of the three edges on the first valuation graph given above. Letting $T_{3a}^{(i)}$ denote this type, Theorem \ref{thm:linked} yields $t(T_{3a}^{(i)})= \displaystyle \frac{p^k \phi (p^{k-i}) \phi_2(p^{k-i})}{3!}$, and Proposition \ref{thm:centralizer} gives us $c(T_{3a}^{(i)}) = \phi (p^k)^3 p^{6i}$. \vspace{.1 in} For the second valuation graph, let $i$ and $j$ denote the weights in the second valuation graph given above; note that $i \in \{0, \dots, k-2\}$ and $j \in \{i+1, \dots, k-1\}$. Letting $T_{3b}^{(i,j)}$ denote this type, Theorem \ref{thm:linked}, gives us $t(T_{3b}^{(i,j)}) = \displaystyle \frac{p^k \phi (p^{k-i})\phi (p^{k-j})}{2!}$, and Proposition \ref{thm:centralizer} yields $c(T_{3b}^{(i, j)}) = \phi (p^k)^3 p^{4i + 2j}$. \vspace{.1 in} Finally, we use (\ref{eq:2}) to enumerate the $3 \times 3$ diagonal matrices and conclude that \begin{align*} \vert\text{Diag}_3(\mathbb{Z}_{p^k})\vert &= p^k + \frac{p^{k+2}(p^3-1)(p^{5k}-1)}{p^5 - 1} + \frac{p^{k+3}(p^3-1)(p-2)(p+1)(p^{8k}-1)}{6(p^8 - 1)} \\ &+ \; \frac{p^{k+3}(p^2-1)}{2}\Bigg( \frac{p^{8k}-p^8}{p^8-1} - \frac{p^{5k}-p^5}{p^5-1}\Bigg). \end{align*} \end{proof} \section{Enumerating the \texorpdfstring{$4 \times 4$}{TEXT} Diagonalizable Matrices} We first address the $4 \times 4$ diagonal matrices with repeated diagonal entries. By using Propositions \ref{thm:centralizer} and \ref{thm:multiple}, we obtain the results in the following tables. Table 1 deals with the cases where there are at most two distinct diagonal entries. \begin{table}[ht] \centering \begin{tabular}{|c|Sc|c|c|} \hline Type $T$ & Valuation Graph & $t(T)$ & $c(T)$\\ \hline $\begin{pmatrix} \lambda &&&\\ & \lambda&&\\ && \lambda&\\ &&& \lambda\\ \end{pmatrix}$ & \includegraphics[width = 0.7 in, valign=c] {k1.pdf} & $p^k$ & $|GL_4(\mathbb{Z}_{p^k})|$ \\ \hline $\begin{pmatrix} \lambda_1 &&&\\ & \lambda_1 &&\\ && \lambda_1 &\\ &&& \lambda_2\\ \end{pmatrix}$ & \includegraphics[width = 1.0 in, valign=c] {k2.pdf} & $p^k \phi(p^{k-i})$ & $ p^{6i} \phi(p^k) \, |GL_3(\mathbb{Z}_{p^k})|$ \\ \hline $\begin{pmatrix} \lambda_1 &&&\\ & \lambda_1 &&\\ && \lambda_2 &\\ &&& \lambda_2\\ \end{pmatrix}$ & \includegraphics[width = 1.0 in, valign=c] {k2.pdf} & $\displaystyle\frac{p^k \phi(p^{k-i})}{2}$ & $p^{8i} \, |GL_2(\mathbb{Z}_{p^k})|^2 $ \\ \hline \end{tabular} \caption{$4 \times 4$ diagonal matrix types with at most two distinct diagonal entries.} \end{table} In Table 2, we consider the more involved case where a given diagonal matrix has three distinct diagonal entries. 
\begin{table}[ht] \centering \begin{tabular}{|c|Sc|c|c|} \hline Type $T$ & Valuation Graph & $t(T)$ & $c(T)$\\ \hline $\begin{pmatrix} \lambda_1 &&&\\ & \lambda_1 &&\\ && \lambda_2 &\\ &&& \lambda_3\\ \end{pmatrix}$ & \includegraphics[width = 1.0in, valign=c] {g1.pdf} & $\displaystyle\frac{p^k \phi(p^{k-i}) \phi_2(p^{k-i})}{2}$ & $p^{10i} \phi(p^k)^2 \, |GL_2(\mathbb{Z}_{p^k})|$ \\ \hline $\begin{pmatrix} \lambda_1 &&&\\ & \lambda_1 &&\\ && \lambda_2 &\\ &&& \lambda_3\\ \end{pmatrix}$ & \includegraphics[width = 1.0in, valign=c] {g2.pdf} &$\displaystyle\frac{3p^k \phi(p^{k-i}) \phi(p^{k-j})}{2}$ & $p^{6i+4j} \phi(p^k)^2 \, |GL_2(\mathbb{Z}_{p^k})|$ \\ \hline $\begin{pmatrix} \lambda_1 &&&\\ & \lambda_1 &&\\ && \lambda_2 &\\ &&& \lambda_3\\ \end{pmatrix}$ & \includegraphics[width = 1.0in, valign=c] {g3.pdf} &$\displaystyle\frac{3p^k \phi(p^{k-i}) \phi(p^{k-j})}{2}$ & $p^{8i+2j} \phi(p^k)^2 \, |GL_2(\mathbb{Z}_{p^k})|$ \\ \hline \end{tabular} \caption{$4 \times 4$ diagonal matrix types with three distinct diagonal entries.} \end{table} It remains to enumerate the diagonal matrix types where the diagonal entries are distinct. By inspection, we find that there are 6 distinct classes of valuation graphs if we disregard the actual weights of their edges. We summarize the pertinent information for each of these six valuation graphs in Table 3. \begin{table}[H] \centering \begin{tabular}{|Sc|c|c|} \hline Valuation Graph & $t(T)$ & $c(T)$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-1.pdf} & $\displaystyle\frac{p^k \phi(p^{k-i}) \phi_2(p^{k-i}) \phi_3(p^{k-i})}{4!}$ & $p^{12i} \phi(p^k)^4$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-2.pdf} &$\displaystyle\binom{4}{3}\frac{p^k \phi(p^{k-i}) \phi(p^{k-j}) \phi_2(p^{k-j})}{4!}$ & $p^{6i+6j} \phi(p^k)^4$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-3.pdf} &$\displaystyle\frac{1}{2}\binom{4}{2} \frac{p^k \phi(p^{k-i}) \phi(p^{k-j})^2}{4!}$ & $p^{8i+4j} \phi(p^k)^4$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-4.pdf} &$\displaystyle\binom{4}{2} \frac{p^k \phi(p^{k-i}) \phi_2(p^{k-i}) \phi(p^{k-j})}{4!}$ & $p^{10i+2j} \phi(p^k)^4$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-5.pdf} &$\displaystyle\binom{4}{3} \binom{3}{1} \frac{p^k \phi(p^{k-i}) \phi(p^{k-j}) \phi(p^{k-m})}{4!}$ & $p^{6i+4j+2m} \phi(p^k)^4$\\ \hline \includegraphics[width = 1.2in, valign=c] {k4-6.pdf} &$\displaystyle\binom{4}{4} \binom{4}{2} \frac{p^k \phi(p^{k-i}) \phi(p^{k-j}) \phi(p^{k-m})}{4!}$ & $p^{8i+2j+2m} \phi(p^k)^4$\\ \hline \end{tabular} \caption{$4 \times 4$ diagonal matrix types with distinct diagonal entries.} \end{table} By using (\ref{eq:2}), one can now find the number of $4 \times 4$ diagonalizable matrices over $\mathbb{Z}_{p^k}$. In light of the many cases from the three tables above, the final formula will be quite long and messy to explicitly write out, and we therefore have chosen not to include it here (although the curious reader should have no problem constructing it if necessary). 
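\vspace{.1 in} \noindent Although we do not write out the $4 \times 4$ formula, the approach is easy to sanity-check computationally for very small parameters. The following Python sketch (purely illustrative, and not part of the derivation above) brute-forces $|\text{Diag}_2(\mathbb{Z}_{p^k})|$ by forming all products $PDP^{-1}$ with $D$ diagonal and $P \in GL_2(\mathbb{Z}_{p^k})$, and compares the resulting count with the closed $2 \times 2$ formula obtained earlier.
\begin{verbatim}
from itertools import product

def diag2_bruteforce(p, k):
    m = p**k
    def mul(A, B):
        (a, b, c, d), (e, f, g, h) = A, B
        return ((a*e + b*g) % m, (a*f + b*h) % m,
                (c*e + d*g) % m, (c*f + d*h) % m)
    def inverse(A):
        a, b, c, d = A
        t = pow((a*d - b*c) % m, -1, m)   # the determinant is a unit mod p^k
        return ((d*t) % m, (-b*t) % m, ((-c)*t) % m, (a*t) % m)
    gl2 = [A for A in product(range(m), repeat=4)
           if (A[0]*A[3] - A[1]*A[2]) % p != 0]
    diagonalizable = set()
    for P in gl2:
        Pinv = inverse(P)
        for lam1, lam2 in product(range(m), repeat=2):
            diagonalizable.add(mul(mul(P, (lam1, 0, 0, lam2)), Pinv))
    return len(diagonalizable)

def diag2_formula(p, k):
    # p^k + p^(k+1)(p^2-1)(p^(3k)-1) / (2(p^3-1)); the division is exact
    return p**k + p**(k + 1) * (p**2 - 1) * (p**(3*k) - 1) // (2 * (p**3 - 1))

if __name__ == "__main__":
    for p, k in [(2, 2), (3, 2), (2, 3)]:
        assert diag2_bruteforce(p, k) == diag2_formula(p, k)
        print((p, k), diag2_formula(p, k))
\end{verbatim}
\noindent The same brute-force idea applies in principle to $n = 3$ and $n = 4$, although the search space grows quickly with $n$, $p$, and $k$.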
\section{The Proportion of Diagonalizable Matrices Over \texorpdfstring{$\mathbb{Z}_{p^k}$}{TEXT}} Kaylor \cite{Kaylor} noted that as the size of the field $\mathbb{F}_q$ increases, the proportion of matrices in $M_n(\mathbb{F}_q)$ with all eigenvalues in $\mathbb{F}_q$ approaches $\frac{1}{n!}$; that is, $$\lim_{q \to \infty} \frac{|\text{Eig}_n(\mathbb{F}_q)|}{|M_n(\mathbb{F}_q)|} = \frac{1}{n!}.$$ \noindent In particular, the work in \cite{Kaylor} also implies that as the size of $\mathbb{F}_q$ increases, the proportion of matrices in $M_n(\mathbb{F}_q)$ that are diagonalizable over $\mathbb{F}_q$ approaches $\frac{1}{n!}$ as well. We generalize the latter result in the case $q = p$ by replacing $\mathbb{F}_p$ with $\mathbb{Z}_{p^k}$. \begin{theorem} Fix positive integers $n$ and $k$, and let $p$ be a prime number. Then, $$\lim_{p \to \infty} \frac{|\emph{Diag}_n(\mathbb{Z}_{p^k})|}{|M_n(\mathbb{Z}_{p^k})|} = \frac{1}{n!}.$$ \end{theorem} \begin{proof} Letting $i$ index the distinct types of diagonal matrices, we let $T_{n,i}$ denote the $i$-th distinct type of a diagonal matrix in $M_n(\mathbb{Z}_{p^k})$. Note that we can view $|\text{Diag}_n(\mathbb{Z}_{p^k})|$ as a polynomial in powers of $p$. Since we are taking a limit as $p \to \infty$, it suffices to determine which diagonal matrix types contribute to the leading term of $|\text{Diag}_n(\mathbb{Z}_{p^k})|$. We accomplish this by first computing its degree. \begin{align*} \deg{|\text{Diag}_n(\mathbb{Z}_{p^k})|} &= \deg \Big(\sum_{i = 1}^{|\mathcal{T}(n)|} t(T_{n,i}) s(T_{n,i})\Big)\\ &= \max_{1 \leq i \leq |\mathcal{T}(n)|} \deg\Big(t(T_{n,i}) s(T_{n,i})\Big)\\ &= \max_{1 \leq i \leq |\mathcal{T}(n)|} \deg \Big( t(T_{n,i}) \frac{|GL_n(\mathbb{Z}_{p^k})|}{c(T_{n,i})}\Big)\\ &= \max_{1 \leq i \leq |\mathcal{T}(n)|} (\deg{|GL_n(\mathbb{Z}_{p^k})|} + \deg{t(T_{n,i})} - \deg{c(T_{n,i})})\\ &= \max_{1 \leq i \leq |\mathcal{T}(n)|} (kn^2 + \deg{t(T_{n,i})} - \deg{c(T_{n,i})}). \end{align*} \noindent By Proposition \ref{thm:centralizer}, we find that \begin{align*} \deg c(T_{n,i}) &= \sum_{i = 1}^r \deg |GL_{m_i}(\mathbb{Z}_{p^k})| + \sum_{1 \leq i < j \leq r} \deg{p^{2m_im_jl_{ij}}}\\ &= k \sum_{i=1}^r m_i^2 + \sum_{1 \leq i < j \leq r} 2 m_i m_j l_{ij}\\ &\geq k \sum_{i=1}^r m_i^2 \text{, since each } l_{ij} \geq 0\\ &\geq kn \text{, since each } m_i \geq 1. \end{align*} \noindent Moreover, Theorem \ref{thm:linked} yields \begin{align*} \deg t(T_{n,i}) &= k + \sum_{t=1}^r \sum_{j=1}^{\ell(t)} \sum_{i=1}^{|L_j^{t}|} (k - a_t)\\ &\leq k + \sum_{t=1}^r \sum_{j=1}^{\ell(t)} k |L_j^{t}|\\ &= k + k(n-1)\\ &= kn. \end{align*} \noindent Therefore $\deg{t(T_{n,i})} \leq \deg{c(T_{n,i})}$, with equality occurring precisely for the diagonal matrix type whose diagonal entries are distinct and whose pairwise differences are units in $\mathbb{Z}_{p^k}$. Hence, $\deg{\vert \text{Diag}_n (\mathbb{Z}_{p^k})\vert} = kn^2$, and using the aforementioned diagonal matrix type, the leading coefficient of $|\text{Diag}_n(\mathbb{Z}_{p^k})|$ equals $\frac{1}{n!}$ by Theorem \ref{thm:linked}. Thus, we have $$\frac{|\text{Diag}_n(\mathbb{Z}_{p^k})|}{|M_n(\mathbb{Z}_{p^k})|} = \frac{\frac{1}{n!} \, p^{kn^2} + O(p^{kn^2 - 1})}{p^{kn^2}}.$$ The desired limit immediately follows by letting $p \to \infty$. \end{proof} \section{Future Research} As we have seen, given a ring of the form $\mathbb{Z}_{p^k}$ and a positive integer $n$, we now have a procedure to compute $|\text{Diag}_n(\mathbb{Z}_{p^k})|$. 
The main difficulty that remains is enumerating the possible valuation graph classes (up to automorphism and disregarding the actual values of the weights) corresponding to $n \times n$ diagonal matrices. As demonstrated in the previous sections, it suffices to enumerate such classes corresponding to $n \times n$ diagonal matrices with distinct diagonal entries; let $a_n$ denote this quantity. We have seen that $a_3 = 2$ and $a_4 = 6$, and it turns out that $a_5 = 20$ (see Figure 4 below for these classes). It would be of interest to find at least a recursive formula that determines $a_n$ for a given value of $n$. \begin{figure}[ht] \begin{center} \includegraphics[width = 5.5in]{new-k5-all-graphs.pdf} \caption{The twenty $5 \times 5$ valuation graph classes.} \end{center} \end{figure} In addition to this, it would be of interest to extend our work to include matrices with Jordan Canonical Forms (JCFs) over $\mathbb{Z}_{p^k}$; that is, matrices similar to a block diagonal matrix composed of Jordan matrices of the form $$\begin{pmatrix} \lambda & 1 & 0 & \dots & 0 & 0\\ 0 & \lambda & 1 & \dots & 0 & 0\\ 0 & 0 & \lambda & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & \lambda & 1\\ 0 & 0 & 0 & \dots & 0 & \lambda \end{pmatrix}$$ \noindent for some $\lambda \in \mathbb{Z}_{p^k}$. One would have to be careful in performing such an enumeration, because it is possible for a given matrix to have more than one distinct JCF over $\mathbb{Z}_{p^k}$. For instance in $\mathbb{Z}_4$, we have that $$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}^{-1}.$$ \noindent Although finding an enumeration formula for the centralizer of a Jordan matrix should be straightforward, this is not expected to be the case for an arbitrarily chosen JCF. \vspace{.1 in} As a final remark, besides the potential non-uniqueness of a JCF, there is a reason why we have not enumerated $|\text{Eig}_n(\mathbb{Z}_{p^k})|$. Unlike in the finite field case where any matrix in $\text{Eig}_n(\mathbb{F}_q)$ has a JCF (see \cite{Kaylor} for more details), this is not even necessarily the case in $\text{Eig}_n(\mathbb{Z}_{p^k})$. For example in $\mathbb{Z}_{p^2}$, any matrix of the form $\begin{pmatrix} \lambda & p \\ 0 & \lambda \end{pmatrix}$ has double eigenvalue $\lambda$, but lacks a Jordan Canonical Form over $\mathbb{Z}_{p^2}$. Determining all similarity classes of such matrices is in general still an open question. \begin{thebibliography}{99} \bibitem{Avni} N. Avni, U. Onn, A. Prasad, and L. Vaserstein, Similarity Classes of $3 \times 3$ Matrices Over a Local Principal Ideal Ring, \emph{Comm. Algebra} \textbf{37 (8)} (2009), 2601-2615. \bibitem{Bollman} D. Bollman and H. Ramirez, On the Enumeration of Matrices Over Finite Commutative Rings, \emph{Amer. Math. Monthly} \textbf{76 (9)} (1969), 1019-1023. \bibitem{Brown} W. Brown, \emph{Matrices over Commutative Rings}, CRC Press, 1992. \bibitem{Han} J. Han, The general linear group over a ring, \emph{Bull. Korean Math. Soc.} \textbf{43} (2006), 619-626. \bibitem{Kaylor} L. Kaylor and D. Offner, Counting Matrices Over a Finite Field With All Eigenvalues in the Field, \emph{Involve} \textbf{7} (2014), 627-645. \bibitem{Olsavsky} G. Olsavsky, The Number of $2 \times 2$ Matrices Over $\mathbb{Z}/p\mathbb{Z}$ with Eigenvalues in the Same Field, \emph{Math. Mag.} \textbf{76} (2003), 314-317. 
\bibitem{User} User44191 (https://mathoverflow.net/users/44191/user44191), Uniqueness of diagonalizing a matrix over $\mathbb{Z}_{p^k}$, URL (version: 2019-02-05): http://mathoverflow.net/q/303634 \end{thebibliography} \end{document}
2412.11415v4
http://arxiv.org/abs/2412.11415v4
Delone sets associated with badly approximable triangles
\documentclass[reqno]{amsart} \usepackage{amsfonts} \usepackage{amsmath,amssymb,amsthm,bm,bbm} \usepackage{amscd} \usepackage{color} \usepackage{caption} \usepackage{float} \usepackage{subcaption} \usepackage{graphicx} \usepackage{geometry} \usepackage{mathrsfs} \usepackage{enumitem} \usepackage{makecell} \usepackage{hyperref} \usepackage{etoolbox} \patchcmd{\section}{\scshape}{\bfseries}{}{} \makeatletter \renewcommand{\@secnumfont}{\bfseries} \makeatother \newcommand{\B}{{\mathcal B}} \newcommand{\M}{{\mathcal M}} \newcommand{\R}{{\mathbb R}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\C}{{\mathbb C}} \newcommand{\cW}{{\mathcal {W}}} \newcommand{\cF}{{\mathcal {F}}} \newcommand{\cT}{{\mathcal {T}}} \newcommand{\cP}{{\mathcal {P}}} \newcommand{\N}{{\mathbb N}} \newcommand{\A}{{\mathcal A}} \newcommand{\QQ}{{\mathbb{Q}}} \newcommand{\RR}{{\mathbb{R}}} \renewcommand{\Re}{{\mathrm{Re}}} \renewcommand{\Im}{{\mathrm{Im}}} \newcommand{\card}{\text{card}} \newcommand{\diam}{\text{diam}} \newcommand{\Area}{\text{Area}} \newcommand{\dist}{\text{dist}} \newcommand{\eps}{\varepsilon} \newcommand\blue[1]{\textcolor{blue}{#1}} \numberwithin{equation}{section} \renewcommand{\baselinestretch}{1.2} \captionsetup[table]{skip=2ex,font=footnotesize} \geometry{a4paper,left=2.5cm,right=2.5cm,top=1.5cm,bottom=1.5cm} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{fact}[thm]{Fact} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{quest}[thm]{Question} \newtheorem{defn}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{remark}[thm]{Remark} \newtheorem{notation}[thm]{Notation} \begin{document} \title{Delone sets associated with badly approximable triangles} \author{Shigeki Akiyama} \address{ Institute of Mathematics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8571 Japan } \email{[email protected]} \author{Emily R. Korfanty} \address{Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB, T6G 2G1, Canada}\email{[email protected]} \author{Yan-li Xu$^*$} \address{Department of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, China} \email{xu\[email protected]} \date{\today} \thanks{\indent\bf Key words and phrases:\ Badly approximable numbers, Hall's ray, Iterated Function System, Delone sets, Chabauty--Fell topology.} \thanks{* Corresponding author.} \begin{abstract} We construct new Delone sets associated with badly approximable numbers which are expected to have rotationally invariant diffraction. We optimize the discrepancy of corresponding tile orientations by investigating the linear equation $x+y+z=1$ where $\pi x$, $\pi y$, $\pi z$ are three angles of a triangle used in the construction and $x$, $y$, $z$ are badly approximable. In particular, we show that there are exactly two solutions that have the smallest partial quotients by lexicographical ordering. \end{abstract} \maketitle \section{Introduction} The study of non-periodic structures and their diffraction has been a topic of great interest since the discovery of quasicrystals in 1984 by Dan Shechtman \cite{Shechtman-et-al:84}. The diffraction from these materials exhibit sharp patterns of bright spots, known as Bragg peaks, despite having a non-periodic atomic structure. 
This raised a compelling question: \emph{Which non-periodic structures exhibit sharp diffraction patterns?} Today, much is known about non-periodic structures when the local patterns are finite up to translations; this property is known as finite local complexity. We refer the readers to \cite{Baake-Gahler:16, Baake-Grimm:13} for a broad range of examples and their corresponding theory of pure point diffraction. However, diffraction is less understood for structures that do not have finite local complexity, especially for substitution tilings with statistical circular symmetry. Here, statistical circular symmetry refers to the orientations of the tiles being uniformly distributed on the unit circle when ordered according to the self-similar structure (see~\cite{Frettloh:08} for a definition). The paradigm of such structures is the pinwheel tiling \cite{Radin:94}. Of the known tilings with statistical circular symmetry (see \cite{Frettloh:08,Frettloh-Harriss-Gahler,Sadun:98} for examples), the pinwheel tiling has been most thoroughly studied \cite{Baake-Frettloh-Grimm:07, Baake-Frettloh-Grimm:07b, Grimm-Deng:2011, MPS:06, Postnikoff:2004}. Despite this, little is known about the pinwheel diffraction, except that it is rotationally invariant with a Bragg peak of unit intensity at the origin. The pinwheel tiling is a non-periodic tiling of $\RR^2$ by a right triangle with side lengths 1, 2, and $\sqrt{5}$. It is an inflation tiling constructed via the subdivision rule shown in Figure~\ref{fig:pinwheel-sub}. More specifically, starting from an initial triangle, one iteratively applies an inflation by $\sqrt{5}$ and subdivides each tile into $5$ smaller, congruent triangles according to the subdivision rule. For the pinwheel tiling, there is a canonical choice of a distinguished point within each tile, and together these points form the usual Delone set associated with the pinwheel tiling. A patch of the pinwheel tiling and its Delone set is shown in Figure~\ref{fig:pinwheel-patch}. \begin{figure}[ht] \begin{center} \includegraphics{pinwheel.pdf} \end{center} \caption{The pinwheel subdivision rule.} \label{fig:pinwheel-sub} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{pinwheelPlus_n5_BW_clipCP.pdf} \end{center} \caption{The pinwheel tiling and its associated Delone set.} \label{fig:pinwheel-patch} \end{figure} The statistical circular symmetry of the pinwheel tiling is due to the key angle~$\arctan(\frac{1}{2})$, which is incommensurate with $\pi$. More generally, for primitive substitution tilings in $\RR^2$, statistical circular symmetry is equivalent to existence of a level-$n$ ($n\geq 1$) supertile containing two copies of the same prototile differing in orientation by an angle $\alpha \notin \pi \QQ$ (see \cite[Proposition~3.4 and Theorem~6.1]{Frettloh:08}). The essential reason for this fact is that the map $x\to x+ \alpha$ specifies an irrational rotation on the torus $S^1$, and by a theorem of Weyl \cite{Weyl:16}, the orbit of an irrational rotation is uniformly distributed on $S^1$. In this paper, we are interested in the rate of convergence of the distribution of angles to the uniform distribution, i.e., the discrepancy. It is well-known that $x\to x+ \alpha \pmod{1}$ attains the smallest possible discrepancy up to constant factors when $\alpha$ is badly-approximable, i.e., when its partial quotients are bounded. Moreover, if this bound is small, then the above constant also becomes small (see ~\cite[Chapter~2,~Theorem~3.4]{Kuipers-Niederreiter:74}). 
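As a purely numerical illustration of the preceding discussion (and not part of the results of this paper), the following Python sketch computes the exact star discrepancy of the first $N$ points of an irrational rotation, comparing the golden mean, whose partial quotients are all equal to $1$, with the fractional part of $\pi$, whose continued fraction contains the large early partial quotient $292$.
\begin{verbatim}
import math

def star_discrepancy(points):
    # exact star discrepancy of a finite point set in [0, 1)
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def kronecker(alpha, n):
    # first n points of the irrational rotation x -> x + alpha (mod 1)
    return [(j * alpha) % 1.0 for j in range(1, n + 1)]

if __name__ == "__main__":
    golden = (math.sqrt(5) - 1) / 2    # partial quotients all equal to 1
    generic = math.pi % 1.0            # continued fraction contains 292
    for n in (100, 1000, 10000):
        d1 = star_discrepancy(kronecker(golden, n))
        d2 = star_discrepancy(kronecker(generic, n))
        print(n, round(n * d1, 2), round(n * d2, 2))
\end{verbatim}
The quantity $N D_N^*$ grows slowly for the golden mean, in line with the quantitative statements of \cite{Kuipers-Niederreiter:74}.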
Badly approximable angles often appear in phyllotaxis. One such example is the golden angle $\pi \omega$ where $$ \omega=\frac{\sqrt{5}-1}{2}= \cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}} =[1,1,\dots] \,. $$ The partial quotients of $\omega$ are minimal, and therefore, the irrational rotation by $\pi\omega$ leads to the fastest convergence to the uniform distribution. In this regard, the pinwheel tiling is not ideal. There are currently no known bounds for the partial quotients of $$ \frac{\arctan(1/2)}{\pi}=[6, 1, 3, 2, 5, 1, 6, 5,\dots]. $$ Due to the Gelfond--Schneider Theorem, it is known that $\arctan(1/2)/\pi$ is transcendental. In particular, this implies that its expansion is not eventually periodic. Though these first several terms are fairly small, one can find large partial quotients $583, 1990, 116880, 213246\dots$ in its expansion at positions $53, 1171, 4806, 109153, \dots$ (a short computational sketch for reproducing the beginning of this expansion is given below). Since the set of badly approximable numbers has measure zero (see, for example, \cite[Chapter 11, Theorem 196]{HW} or \cite[Chapter 2, Theorem 29]{Khinchin:97}), it is natural to guess that $\arctan(1/2)/\pi$ is \emph{not} badly approximable. Further, by ergodicity of the continued fraction map, almost all numbers are normal with respect to the Gauss measure \cite{Khinchin:97,KN:00}, and consequently are not badly approximable. Note also that the right angle $\pi/2$ that appears in the pinwheel tiling is the antipode of the badly approximable angles. Similar to the pinwheel tiling, the key angles for the other aforementioned tilings with statistical circular symmetry are also not likely to be badly approximable. Motivated by this, we construct new tilings and associated Delone sets by triangles where every angle is the product of $\pi$ and a badly approximable number. We start from the subdivision rule shown in Figure~\ref{fig:subdivision-rule-new}. \begin{figure}[ht] \centering \includegraphics[width=9 cm]{subdivision_rule} \caption{Subdivision rule for triangles with angles $\alpha$, $\beta$, $\gamma$. The triangle on the left is scalene, and the triangle on the right is isosceles. This rule is valid for any solutions of~$\alpha+\beta+\gamma=\pi$.} \label{fig:subdivision-rule-new}\end{figure} This subdivision rule has the special property that the angles~$\alpha,\beta,\gamma$ can be chosen to be \emph{any} angles satisfying $\alpha + \beta + \gamma = \pi$. In particular, if one can choose $\alpha,\beta,\gamma$ so that~$\alpha/\pi, \beta/\pi$ and $\gamma/\pi$ are badly approximable numbers, then the remaining angle $\pi - 2\gamma$ is also a badly approximable multiple of $\pi$. This leads us to our target equation $$ x+y+z=1 \,, $$ where $x, y, z$ are badly approximable numbers and $\alpha = \pi x, \beta = \pi y, \gamma = \pi z$ are the angles of the corresponding triangle. We are especially interested in solutions such that the partial quotients of $x, y, z$ are small by lexicographical ordering. In this case, we refer to the triangle with angles $\pi x, \pi y, \pi z$ as an \emph{optimal badly approximable triangle}. It is easy to see that if each term in the continued fraction expansion of $x,y,z$ does not exceed two, the equation $x+y+z=1$ has no solution. Therefore, we seek a solution $x,y,z$ such that, for each of these numbers, the first partial quotient does not exceed three, and the remaining quotients are no greater than two. 
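The expansion of $\arctan(1/2)/\pi$ quoted above can be reproduced with exact rational arithmetic; the following Python sketch (illustrative only, using Machin's formula for $\pi$ and the Taylor series for $\arctan$) computes its first partial quotients without any external libraries.
\begin{verbatim}
from fractions import Fraction

def arctan_inv(m, terms):
    # Taylor partial sum for arctan(1/m), as an exact Fraction
    return sum(Fraction((-1)**n, (2*n + 1) * m**(2*n + 1))
               for n in range(terms))

def continued_fraction(x, count):
    # first `count` partial quotients of x in (0,1), x = [a_1, a_2, ...]
    quotients = []
    for _ in range(count):
        x = 1 / x
        a = x.numerator // x.denominator
        quotients.append(a)
        x -= a
        if x == 0:
            break
    return quotients

if __name__ == "__main__":
    # Machin's formula; the truncation errors are far smaller than what is
    # needed for the first few dozen partial quotients.
    pi_approx = 16 * arctan_inv(5, 300) - 4 * arctan_inv(239, 150)
    ratio = arctan_inv(2, 700) / pi_approx
    cf = continued_fraction(ratio, 60)
    print(cf[:8])        # compare with the expansion quoted above
    print(max(cf), cf.index(max(cf)) + 1)   # the text reports 583 at position 53
\end{verbatim}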
To our surprise, we can show that the equation $x+y+z=1\ (x\le y\le z)$ has exactly two solutions under this restriction: $$ x=2-\sqrt{3}=[3,1,2,1,2,1,2,\ldots],\ y=z=\frac{\sqrt{3}-1}2=[2,1,2,1,2,1,\ldots]\,, $$ and $$ x=y=\frac{2-\sqrt{2}}2=[3,2,2,2,2,2,\ldots],\ z=\sqrt{2}-1=[2,2,2,2,2,\ldots]\, ; $$ see Theorem~\ref{Main}. The proof of this fact requires a careful case analysis involving infinitely many sub-cases. Based on this main result, we can then easily conclude that the equation $x+y=z\ (x\le y)$ has exactly four solutions under the same conditions; see Theorem~\ref{Main2}. Furthermore, our method gives uncountably many explicit solutions when the partial quotients of $x,y,z$ do not exceed three; see Theorem~\ref{Main3}. Combining these results on badly approximable numbers with the subdivision rule of Figure~\ref{fig:subdivision-rule-new}, we obtain Delone sets associated with tilings that have optimal statistical circular symmetry. More specifically, the Delone sets are produced from optimal badly approximable triangles, so that the discrepancy is minimized. To construct our Delone sets, we largely follow the threshold method for multiscale substitution schemes considered in \cite{Smi-Solo:21}, but we use contractions described by a graph directed iterated function system to give a concise presentation. The main idea is to subdivide the triangles until the areas reach a given threshold, and then renormalize them to obtain larger and larger patches. By choosing a suitable point within each triangle (e.g., the centroid), we get a sequence of finite point sets. We prove the existence of a Delone limit set for this sequence in the \emph{Chabauty--Fell topology} \cite{Chabauty:50,Fell:62} (see Theorem~\ref{thm:convergence}). A patch of a Delone set obtained from the subdivision rule in Figure~\ref{fig:subdivision-rule-new} using optimal badly approximable triangles is shown in Figure~\ref{fig:optimal1-patch}. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{optimal1_clip1_004.pdf} \end{center} \caption{A new tiling by optimal badly approximable triangles and its associated Delone set, constructed via the subdivision rule shown in Figure~\ref{fig:subdivision-rule-new} with $\alpha = (2-\sqrt{3})\pi$ and~${\beta=\gamma=\frac{(\sqrt{3}-1)\pi }{2}}$. } \label{fig:optimal1-patch} \end{figure} The paper is organized as follows. In Section~\ref{sec:main-results-1}, we provide the required background and definitions, and state our main results on badly approximable numbers. In Section~\ref{sec:main-results-2}, we describe our construction of Delone sets using graph directed iterated function systems. In Section~\ref{sec:specific}, we return to the original motivation and discuss the Delone sets obtained from the subdivision rule shown in Figure~\ref{fig:subdivision-rule-new} for the optimal badly approximable triangles associated with Theorem~\ref{Main}. Then, in Section~\ref{sec:proof_main123}, we prove Theorem~\ref{Main}, Theorem~\ref{Main2}, and Theorem~\ref{Main3}. Finally, in Section~\ref{sec:open}, we give several open problems. \section{Solving \texorpdfstring{$x+y+z=1$}{x+y+z=1} in badly approximable numbers}\label{sec:main-results-1} In this section, we will state our main results on badly approximable numbers. Their proofs are found in Section \ref{sec:proof_main123}. Let us start with some definitions. 
\begin{defn}An irrational number $x \in (0,1)$ is called \emph{badly approximable} if the partial quotients in the continued fraction expansion $$ x=[a_1(x),a_2(x),\dots]=\cfrac 1{a_1(x)+\cfrac 1{ a_2(x)+ \cfrac 1{\ddots}}}\,, \quad a_j(x) \in \mathbb{Z}_+\,, \ j=1,2,\ldots \,, $$ are bounded, i.e.\ if $\sup_{k \geq 1}a_k(x)<\infty$. \end{defn} Equivalently, a number $x\in (0,1)$ is badly approximable if and only if there exists some $\varepsilon>0$ with the property that \begin{equation*} \left|x-\frac{p}{q}\right|\geq \frac{\varepsilon}{q^2} \,, \end{equation*} for all rational numbers $\frac{p}{q}$; see \cite[Chapter 11]{HW} or \cite[Theorem 23]{Khinchin:97}. For $x=[a_1(x),a_2(x),\dots]\in (0,1)$, by using the Gauss map $$ T(x)=\frac 1x -\left\lfloor \frac 1x \right\rfloor\,, $$ we have $$ T^{k-1}(x)=[a_{k}(x),a_{k+1}(x),a_{k+2}(x),\dots] \,, $$ and $a_k(x)=\lfloor 1/T^{k-1}(x) \rfloor$ for all $k\geq 1$. \begin{defn}A continued fraction $x = [a_1,a_2,\dots]\,$ is \textit{eventually periodic} if there are integers $N\geq 0$ and $k\geq 1$ with $a_{n+k}=a_n$ for all $n \geq N$. Such a continued fraction will be written \[ x = [a_1,\dots,a_{N-1},\overline{a_N,\dots,a_{N+k-1}}] \,. \] \end{defn} We use the notation $(a_N,\dots,a_{N+k-1})^\ell$ to denote $\ell\geq 0$ consecutive repetitions of the numbers $a_N,\dots,a_{N+k-1}$ in the continued fraction. We write $(a_j)^\ell$ for the repetition of a single number $a_j$. For convenience, in the case where $x\in(0,1)\cap\QQ$ we use the notation \[ x = [a_1,a_2,\dots,a_n,\infty] =\frac{1}{a_1+\frac{1}{a_2+\frac{1}{\ddots + \frac{1}{a_n}}}}\,. \] \begin{defn} Define the \textit{cylinder set} of $b_1,\dots,b_n\in\mathbb{N}$ by \[ I(b_1,\dots,b_n)= \{x\in(0,1) \,:\, x=[x_1,x_2,\dots]\,, x_i=b_i \text{ for } 1 \leq i\leq n\}\,. \] \end{defn} The set $I(b_1,\dots , b_n)$ is an interval with endpoints \[ \frac{P_n+P_{n-1}}{Q_n+Q_{n-1}}\quad \text{and}\quad \frac{P_n}{Q_n} \,, \] for $n\geq 1$, where $$ P_n=b_nP_{n-1}+P_{n-2}\,,\quad Q_n=b_nQ_{n-1}+Q_{n-2} \,, $$ with \[ \begin{pmatrix} P_{-1} & P_0\\ Q_{-1} & Q_0 \end{pmatrix}= \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\,. \] Let us define our linear problem for badly approximable numbers more precisely. An irrational number $x\in (0,1)$ is $B$-bad if $a_k(x)\le B$ holds for all $k \geq 1$. Let $\B_B$ be the set of all $B$-bad numbers in $(0,1)\backslash \QQ$. For $j\ge 0$, we define the set $$ \B_{B,j}= \B_{B+1} \cap T^{-j}(\B_B) \,, $$ i.e., $\B_{B,j}$ is the set of irrational numbers which satisfy \begin{equation*} \begin{cases} a_k\le B+1 & k \leq j\\ a_k\le B & k > j \,. \end{cases} \end{equation*} Clearly, we have $$\B_B=\B_{B,0}\subset \B_{B,1} \subset \B_{B,2} \subset \cdots\,.$$ Further, we define $\B^*_B=\bigcup_{j=0}^{\infty} \B_{B,j}$ to be the set of eventually $B$-bad numbers in $\B_{B+1}$. In this paper, we are interested in the additive structure of $\B_{B,j}$ and $\B^*_B$. We begin with a simple lemma. \begin{lem} \label{Triv} \emph{ For $x=[a_1,a_2,a_3,\dots]\in (0,1)$, we have $$ 1-x=\begin{cases} [1,a_1-1,a_2,a_3,\dots] & a_1\ge 2\\ [1+a_2,a_3,\dots] & a_1=1\,.\end{cases} $$ } \end{lem} \begin{proof} Putting $x=1/(a_1+y)$ with $y\in (0,1)$, we see that $$ 1-x=\cfrac {1}{1+\frac 1{a_1-1+y}} \,, $$ from which the result easily follows. \end{proof} \begin{cor}\label{cor:Trivial} \emph{ An irrational number $x$ is in $\B_{2,1}$ if and only if $1-x$ is also in $\B_{2,1}$. 
} \end{cor} \begin{remark} The property of $\B_{2,1}$ described in Corollary~\ref{cor:Trivial} does not hold in $\B_2$ or in $\B_{2,j}$ for any~$j\geq 2$. \end{remark} \begin{remark}\label{rem:no-B2-solution} Lemma~\ref{Triv} shows that the equation $ x+y=1\ (x,y\in \B_{2},\ x\le y) $ is trivially solved and has the set of solutions \[ \{ (x,1-x) \ |\ x\in \B_{2}\cap [0,1/2) \} \,. \] In particular, the equation has uncountably many different solutions. However, our equation of interest $x+y+z=1$ has no solutions in $\B_2$. Indeed, if $x,y,z\in \B_2$, then we also have $x,y,z \in I(1) \cup I(2) = [\frac{1}{3},1)$. However, if we also have $x+y+z=1$, then the only possible solution is $x=y=z=\frac{1}{3}\in\mathbb{Q}$, which contradicts irrationality of $x,y,z\in\B_2$. \end{remark} Our main results are as follows: \begin{thm}\label{Main} \emph{ The equality $ x+y+z=1\ (x,y,z\in \B_{2,1},\ x\le y\le z) $ has exactly two solutions $$ x=2-\sqrt{3}=[3,\overline{1,2}],\ y=z=\frac{\sqrt{3}-1}2=[\overline{2,1}]\,, $$ and $$ x=y=\frac{2-\sqrt{2}}2=[3,\overline{2}],\ z=\sqrt{2}-1=[\overline{2}]\,. $$ } \end{thm} By using Lemma \ref{Triv}, we may rephrase Theorem \ref{Main} as follows: \begin{thm} \label{Main2} \emph{ The equality $ x+y=z\ (x,y,z\in \B_{2,1}\,, x \leq y) $ has exactly four solutions $$ x=2-\sqrt{3}=[3,\overline{1,2}],\ y=\frac{\sqrt{3}-1}2=[\overline{2,1}],\ z=\frac{3-\sqrt{3}}{2}=[1,1,1,\overline{2,1}]\,, $$ $$ x=y=\frac{\sqrt{3}-1}2=[\overline{2,1}],\ z=\sqrt{3}-1=[\overline{1,2}]\,, $$ $$ x=y=\frac{2-\sqrt{2}}2=[3,\overline{2}],\ z=2-\sqrt{2}=[1,1,\overline{2}]\,, $$ and $$ x=\frac{2-\sqrt{2}}2=[3,\overline{2}],\ y=\sqrt{2}-1=[\overline{2}],\ z=\frac{\sqrt{2}}{2}=[1,\overline{2}]\,. $$ } \end{thm} In 1947, Hall \cite{Hall:47} proved that $\B_4+\B_4$ contains an interval. Subsequently, Freiman~\cite{Freiman:73} and Schecker~\cite{Schecker:77} showed that $\B_3+\B_3$ contains an interval. This implies that the equalities $x+y+z=1\ (x,y,z\in \B_{3},\ x\le y\le z)$ and $x+y=z\ (x,y,z\in \B_{3},\ x\le y)$ both have uncountably many solutions. These results were proven using a criterion for Cantor sets which, when satisfied, implies that the arithmetic sum of two Cantor sets contains an interval. By this method, it may be challenging to make the solutions explicit, i.e.\ we know that one can choose any $z\in \B_3$ in the interval, but it is not clear which numbers $x,y\in \B_3$ satisfy the equation. In contrast, our method gives explicit solutions in $\B^*_{2}$ and~$\B_3$: \begin{thm} \label{Main3} \emph{ The equality $ x+y+z=1\ (x,y,z\in \B^*_{2},\ x\le y\le z) $ has infinitely many solutions. Furthermore, uncountably many solutions of the equality $ x+y+z=1\ (x,y,z\in \B_{3},\ x\le y\le z) $ can be constructed explicitly. } \end{thm} \section{Construction of Delone Sets}\label{sec:main-results-2} In this section, we construct Delone sets starting from an arbitrary subdivision rule described by a graph directed iterated function system satisfying certain basic requirements, all of which are satisfied by our subdivision rule shown in Figure~\ref{fig:subdivision-rule-new}. Let us begin with some definitions. \subsection{Delone sets and the Chabauty--Fell topology}We restrict our attention to $\RR^2$. Throughout, we denote the Euclidean norm on $\RR^2$ by $\|\cdot\|$, and we write $B(x,r)$ to denote the open ball of radius $r$ centered at $x$, i.e.\ $B(x,r)=\{y\in\RR^2\,:\, \|y-x\|< r\}$. 
Similarly, we write $\overline{B(x,r)}$ for the closed ball, i.e.\ $\overline{B(x,r)}=\{y\in\RR^2\,:\, \|y-x\| \leq r\}$. We begin by recalling the definition of a Delone set. \begin{defn} Let $\Lambda$ be a closed subset of $X\subseteq \RR^2$. \begin{enumerate}[label=(\roman*)] \item If there exists an $r>0$ such that for any $x \in X$, $\card (B(x,r) \cap \Lambda) \leq 1$, then $\Lambda$ is \emph{$r$-uniformly discrete} in $X$. \item If there exists an $R>0$ such that for any $x \in X$, $\overline{B(x,R)} \cap \Lambda\neq\emptyset$, then $\Lambda$ is \emph{$R$-relatively dense} in $X$. \item We say that $\Lambda$ is a \emph{$(r,R)$-Delone set} in $X$ if $\Lambda$ is both $r$-uniformly discrete in $X$ and $R$-relatively dense in $X$. \end{enumerate} \end{defn} \begin{defn} Let $X\subseteq \RR^2$ be closed. We define the following spaces of point sets: \begin{enumerate}[label=(\roman*)] \item Denote by $2^X$ the set of all closed subsets of $X$; \item Denote by $\cW_{r,R}(X)$ the set of closed subsets $\Lambda \subseteq \RR^2$ such that $\Lambda$ is an $(r,R)$-Delone set in $X$. \end{enumerate} \end{defn} \begin{remark} If $X$ and $Y$ are closed subsets of $\RR^2$ and $Y \subseteq X$, then any set which is $r$-uniformly discrete and $R$-relatively dense in $X$ must also be $r$-uniformly discrete and $R$-relatively dense in $Y$. In particular, we have $\cW_{r,R}(X)\subseteq \cW_{r,R}(Y)$. \end{remark} Next, we will construct a sequence of finite point sets and show the existence of a subsequence converging to a Delone set in the \textit{Chabauty--Fell topology} \cite{Chabauty:50,Fell:62}. This topology is also commonly referred to as the \textit{Chabauty topology} or the \textit{Fell topology}, which are defined more generally in any topological space. In the case of locally compact groups, it is also called the \textit{local rubber topology} \cite{Baake-Lenz:04}. Since we work in $\RR^2$, the Chabauty--Fell topology is metrizable and induced by the following metric (see {\cite[Appendix~A]{Smi-Solo:22}}): \begin{defn}[Chabauty--Fell Topology] For each $\Lambda_1,\Lambda_2\in2^{X}$, define \[ d(\Lambda_1,\Lambda_2) = \inf\ \{\{1\}\cup\{\varepsilon>0\,:\,\Lambda_1\cap B(0,\textstyle{\frac{1}{\varepsilon}}) \subseteq \Lambda_2 + B(0,\varepsilon)\ \text{and}\ \Lambda_2\cap B(0,\textstyle{\frac{1}{\varepsilon}}) \subseteq \Lambda_1 + B(0,\varepsilon)\}\}\,. \] The map $d:2^X\times2^X \rightarrow [0,\infty)$ is a metric on $2^X$ inducing the Chabauty--Fell topology. \end{defn} \begin{remark} It has long been known that the space $2^X$ is compact in the Fell topology \cite{Fell:62}. Furthermore, it is easy to show that the space $\cW_{r,R}(X)$ is also compact in the Fell topology. As we will use this fact, we give a proof for $X = \RR^2$ in Appendix~\ref{sec:compactness}. \end{remark} We will deduce the existence of a subsequence converging to a Delone set using the compactness of $\mathcal{W}_{r,R}(\RR^2)$; however, our sequence will not be contained in $\mathcal{W}_{r,R}(\RR^2)$ because no finite set is relatively dense in $\RR^2$. Thus, we will use the following easy observation about the Chabauty--Fell topology: \begin{prop}\label{prop:Fell-compact-convergence} \emph{ Let $\{\Lambda_n\}_{n \geq 1}$ be a sequence in $2^X$ and $\Lambda \in 2^X$. Let $\{R_n\}_{n \geq 1}$ be a sequence of positive real numbers with $\lim_{n\rightarrow\infty} R_n = \infty$. Then $\Lambda_n$ converges to $\Lambda$ if and only if $\Lambda_n \cap B(0,R_n)$ converges to $\Lambda$ in the Chabauty--Fell topology. 
} \end{prop} \begin{proof} First, assume that $\Lambda_n$ converges to $\Lambda$ in the Chabauty--Fell topology. Then, for any $\varepsilon>0$, there exists a constant $N_1$ such that for any $n \geq N_1$, we have $d(\Lambda_n,\Lambda)<\varepsilon$. That is, \begin{equation}\label{def_1} \Lambda_n \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda+B(0,\varepsilon) \,, \end{equation} and \begin{equation}\label{def_2} \Lambda \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda_n+B(0,\varepsilon) \,. \end{equation} For the above $\varepsilon$, there must exist a constant $N_2$ such that for any $n \geq N_2$, $R_n>\tfrac{1}{\varepsilon} +\varepsilon$. Let $M=\max\{N_1,N_2\}$. Then by \eqref{def_1}, we have \begin{equation}\label{def_3} \Lambda_n \cap B(0,R_n) \cap B(0,\tfrac{1}{\varepsilon}) = \Lambda_n \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda+B(0,\varepsilon) \quad \forall n \geq M\,. \end{equation} Furthermore, we claim that \begin{equation}\label{def_4} \Lambda \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda_n \cap B(0,R_n)+B(0,\varepsilon) \quad \forall n \geq M\,. \end{equation} Indeed, let $x\in \Lambda \cap B(0, \tfrac{1}{\varepsilon})$ and $n \geq M$. By \eqref{def_2}, we have $x \in \Lambda_n + B(0,\varepsilon)$. Thus, there exists some $y\in \Lambda_n$ and some $z\in B(0,\varepsilon)$ such that $x=y+z$. Since $R_n > \tfrac{1}{\varepsilon} + \varepsilon$ and $x\in B(0, \tfrac{1}{\varepsilon})$, we have \[ \|y\| \leq \|x\| + \|x-y\| = \|x\| + \|z\| < \frac{1}{\varepsilon} + \varepsilon < R_n\,, \] so $y\in \Lambda_n \cap B(0,R_n)$. In particular, we have $x = y + z \in \Lambda_n \cap B(0,R_n) + B(0,\varepsilon)$, which proves \eqref{def_4}. By \eqref{def_3} and \eqref{def_4}, we have shown that $d(\Lambda_n \cap B(0,R_n),\Lambda)\leq\varepsilon$ for all $n \geq M$, so $\Lambda_n \cap B(0,R_n)$ converges to $\Lambda$ in the Chabauty--Fell topology. Conversely, suppose that $\Lambda_n \cap B(0,R_n)$ converges to $\Lambda$ in the Chabauty--Fell topology. For any $\varepsilon>0$, there exists a constant $N_1$ such that for any $n \geq N_1$, we have $d(\Lambda_n\cap B(0,R_n),\Lambda)<\varepsilon$. For the above $\varepsilon$, there must exist a constant $N_2$ such that $R_n>\tfrac{1}{\varepsilon}$ for any $n \geq N_2$. Let $M=\max\{N_1,N_2\}$. Then, for all $n \geq M$, we get \[ \Lambda_n\cap B(0,\tfrac{1}{\varepsilon})=\Lambda_n\cap B(0,R_n) \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda+B(0,\varepsilon) \,, \] and \[ \Lambda \cap B(0,\tfrac{1}{\varepsilon}) \subseteq \Lambda_n\cap B(0,R_n) + B(0,\varepsilon) \subseteq \Lambda_n+B(0,\varepsilon) \,, \] so $d(\Lambda_n, \Lambda)\leq \varepsilon$. Therefore, $\Lambda_n$ converges to $\Lambda$ in the Chabauty--Fell topology. \end{proof} In other words, the convergence of a sequence in the Chabauty--Fell topology is equivalent to the convergence of its restriction to larger and larger patches. Provided that we can extend our finite point sets to elements of $\mathcal{W}_{r,R}(\RR^2)$, we can use this fact and the compactness of $\mathcal{W}_{r,R}(\RR^2)$ to prove the existence of a Delone limit set. \subsection{Threshold method}\label{sec:threshold} To any subdivision rule on a finite set of tiles in $\RR^2$, there is an associated graph-directed iterated function system (GIFS) consisting of similitudes. In this section, starting from a GIFS of this type, we construct a corresponding Delone set. In particular, starting from some initial tile, we apply the GIFS until the area of every tile is below some threshold $\varepsilon > 0$.
Once there are no tiles of area greater than $\varepsilon$, we inflate the finite patch by $\frac{1}{\sqrt{\varepsilon}}$. We call this an \textit{$\varepsilon$-rule}, which is defined more precisely below. \subsubsection{\textbf{GIFS}} Let $(V,\Gamma)$ be a directed graph with vertex set $V$ and directed-edge set $\Gamma$ where both $V$ and $\Gamma$ are finite. Denote the set of edges from $j$ to $i$ by $\Gamma_{j,i}$ and assume that for any $j\in V$, there is at least one edge starting from vertex $j$. Furthermore, assume that for each edge $e\in\Gamma$, there is a corresponding contractive similitude $g_e:\mathbb{R}^n\rightarrow\mathbb{R}^n$. We call $(g_e)_{e\in \Gamma}$ a \emph{graph-directed IFS (GIFS)} (see \cite{MW88}). The invariant sets of this GIFS, also called \emph{graph-directed sets}, are the unique non-empty compact sets $(T_j)_{j\in V}$ satisfying \begin{equation}\label{GIFS} T_j=\bigcup\limits_{i\in V}\bigcup\limits_{e\in\Gamma_{j,i}}g_e(T_i),\quad j\in V \,. \end{equation} Let $\mathcal{F} = \{T_1, T_2, \dots, T_m\}$ denote the invariant sets of a GIFS $(g_e)_{e\in \Gamma}$ with vertex set $V=\{1,2,\dots,m\}$. We make the following additional assumptions on the GIFS: \begin{enumerate}[label=\arabic*.] \item Each tile in $\cF$ has a nonempty interior. \item Each tile in $\cF$ satisfies the open set condition, i.e.\ each union in \eqref{GIFS} has no interior-overlap. \item The directed graph $(V,\Gamma)$ is strongly connected, i.e.\ for all $i,j \in V$, there is a similar copy of $T_i$ in the subdivision of $T_j$. \end{enumerate} \subsubsection{\textbf{$\boldsymbol{\varepsilon}$-rule}.} For convenience, let $\Sigma_{N}$ denote the set of edge sequences of length $N$ on $(V,\Gamma)$. In other words, $\Sigma_{N}$ is the collection of sequences $e_1,e_2,\dots, e_N \in \Gamma$ such that the composition $g_{e_1} \circ g_{e_2} \circ \dots \circ g_{e_N}$ is permitted by the GIFS. We iterate the GIFS starting from an initial tile $T_j\in \cF$ with the following \textit{$\varepsilon$-rule}: \begin{enumerate}[label=(\roman*)] \item Fix $0<\varepsilon< 1$. Given a finite sequence $(e_1, e_2, \dots, e_{m(\varepsilon)}) \in \Sigma_{m(\varepsilon)}$, we continue applying the GIFS to the tile $$ T = g_{e_1}\circ g_{e_2} \circ \dots \circ g_{e_{m(\varepsilon)}}(T_j)\,, $$ while $\text{Area}(T) > \varepsilon$, and stop applying the GIFS to $T$ when $\text{Area}(T) \leq \varepsilon$. \item Once the process in (i) terminates, we inflate the resulting collection of tiles by $\frac{1}{\sqrt{\varepsilon}}$. We denote the resulting finite patch by $\mathcal{P}_{\varepsilon}(T_j)$. \end{enumerate} \begin{remark}\label{range_of_area} Let $|g_e'|$ denote the contraction factor of the similitude $g_e$ in the GIFS. After applying the $\varepsilon$-rule, we are guaranteed to have $\Area(T)\in[a,1]$ for every tile $T\in\cP_{\varepsilon}(T_j)$, where $$ a=\min\{|g_e'|^2\, :\, e \in \Gamma\} \,. $$ \end{remark} \subsubsection{\textbf{Associated point sets.}} Our next step is to define a point set $\Lambda_j(\varepsilon)$ associated with the patch $\cP_{\varepsilon}(T_j)$ by placing one point inside each tile. Since $\cF$ is a finite set, we can choose fixed constants $r_0\,,R_0>0$ which do not depend on $T_j$ and distinguished points $\lambda_j^{(0)}\in T_j$ for each $j \in V$ such that \begin{equation}\label{r_0R_0} B(\lambda_j^{(0)},r_0) \subset T_j \subset B(\lambda_j^{(0)},R_0)\,. \end{equation} Let $0<\varepsilon<1$ and $T_j \in \cF$.
Starting from $\lambda_j^{(0)}$, we construct the elements of $\Lambda_j(\varepsilon)$ recursively as follows. For each application of the GIFS $$ T_j=\bigcup\limits_{i\in V}\bigcup\limits_{e\in\Gamma_{j,i}}g_e(T_i)\,,\quad j\in V \,, $$ in the $\varepsilon$-rule, we produce new points \[ \lambda_j^{(n)}=\bigcup\limits_{i\in V}\bigcup\limits_{e\in\Gamma_{j,i}}g_e(\lambda_i^{(n-1)})\,,\quad j\in V \,, \] that replace the previous points $\lambda_i^{(n-1)}$. This process terminates when there are no further applications of the GIFS. Finally, we inflate the resulting collection of points by $\frac{1}{\sqrt{\varepsilon}}$ to obtain \emph{the associated point sets} $\Lambda_j(\varepsilon)$. In other words, since any tile $T$ in $\cP_{\varepsilon}(T_j)$ is similar to a unique $T_i\in \cF$, we place exactly one point in $T$ via similarity at the same location as $\lambda_i^{(0)}\in T_i$. Though one can choose any points $\{\lambda_j^{(0)}\}_{j \in V}$ such that \eqref{r_0R_0} holds, for simplicity we choose the centroid of each tile. In this case, $\Lambda_j(\varepsilon)$ is simply the set of centroids of the tiles in $\cP_\varepsilon(T_j)$. Furthermore, we introduce the additional assumption that \begin{align}\label{range_of_area_2} \min\{\Area(T_j) \,:\, T_j\in \cF\}=1,\quad \max\{\Area(T_j) \,:\, T_j\in \cF\}=S\geq 1 \,, \end{align} as this will simplify subsequent proofs. The value $S$ is related to the GIFS $(g_e)_{e \in \Gamma}$; however, one can easily modify a given GIFS to satisfy \eqref{range_of_area_2} by simultaneously enlarging or shrinking all tiles $T_j$ by a fixed similitude. \subsubsection{\textbf{Existence of a Delone limit set}} Next, we consider a decreasing sequence $\{\varepsilon_n\}_{n \geq 1}$ in $(0,1)$ and show that the corresponding sequence of point sets $\{\Lambda_j(\varepsilon_n)\}_{n \geq 1}$ has a subsequence converging to a Delone set in the Chabauty--Fell topology. \begin{lem}\label{Limit_Delone} \emph{ For any decreasing sequence $\{\varepsilon_n\}_{n \geq 1}$ in $(0,1)$ with $\lim_{n\rightarrow \infty} \varepsilon_n = 0$, there exists a sequence $\{K_n\}_{n\geq1}$ of compact sets such that \begin{enumerate}[label=(\roman*)] \item $K_n \subseteq K_{n+1}$ for all $n\geq1$, \item $\cup_{n\geq 1} K_n = \RR^2$, and \item $\Lambda_j(\varepsilon_n)\in \cW_{r,R}(K_n)$ for all $n\geq1$, where $r=\sqrt{\tfrac{a}{S}}r_0$ and $R=R_0$. \end{enumerate} In other words, $\Lambda_j(\varepsilon_n)$ is an $(r,R)$-Delone set in $K_n$ for each $n\geq1$. } \end{lem} \begin{proof} Recall from \eqref{r_0R_0} that we have \[ B(\lambda_j^{(0)},r_0) \subset T_j \subset B(\lambda_j^{(0)},R_0)\,, \quad \forall\, T_j\in\cF\,. \] Given $0<\varepsilon_n<1$, consider an arbitrary tile $T\in \cP_{\varepsilon_n}(T_j)$. There must exist a unique finite sequence $(e_1,e_2,\ldots,e_{m(\varepsilon_n)})\in\Sigma_{m(\varepsilon_n)}$ and one tile $T_{i_0}\in\cF$ such that \[ T=\tfrac{1}{\sqrt{\varepsilon_n}}g_{e_1}\circ g_{e_2} \circ \dots \circ g_{e_{m(\varepsilon_n)}}(T_{i_0})\,. \] Define $x_T$ to be the point of $\Lambda_j(\varepsilon_n)$ that lies in $T$, i.e. $$ x_T= \tfrac{1}{\sqrt{\varepsilon_n}}g_{e_1}\circ g_{e_2} \circ \dots \circ g_{e_{m(\varepsilon_n)}}(\lambda_{i_0}^{(0)})\,. $$ Let $c=\Area(T)$ and $A=\Area(T_{i_0})$. Hence, the linear scaling factor from $T_{i_0}$ to $T$ is $\sqrt{\frac{c}{A}}$. Thus, scaling $B(\lambda_{i_0}^{(0)},r_0)$ by the linear factor $\sqrt{\frac{c}{A}}$ will result in a ball of radius $\sqrt{\frac{c}{A}}r_0$ that fits inside $T$ when centered at $x_T$. 
Hence, we have $B(x_T,\sqrt{\frac{c}{A}}r_0)\subseteq T$. Similarly, we have $T\subseteq B(x_T,\sqrt{\frac{c}{A}}R_0)$. Moreover, by Remark~\ref{range_of_area}, we have $a \leq c\leq 1$. By assumption~\eqref{range_of_area_2}, we get $A\in[1,S]$. From this, we obtain \begin{equation}\label{eq:inclusions} B(x_T,\textstyle{\sqrt{\tfrac{a}{S}}}r_0) \subseteq B(x_T,\textstyle{\sqrt{\tfrac{c}{A}}}r_0) \subseteq T \subseteq B(x_T,\textstyle{\sqrt{\tfrac{c}{A}}}R_0) \subseteq B(x_T,R_0) \quad \forall \, T \in \cP_{\varepsilon_n}(T_j)\,. \end{equation} Assume without loss of generality that $\lambda_j^{(0)}=0$. Then, for each $\varepsilon_n$, we have that \begin{equation*} \Lambda_j(\varepsilon_n)\subseteq K_n = \bigcup_{T\in\cP_{\varepsilon_n}(T_j)}T \,. \end{equation*} For this choice of $K_n$, it is easy to see that conditions (i) and (ii) are satisfied. To prove condition (iii), we must show that for every $x\in K_n$, $\card(B(x,r)\cap\Lambda_j(\varepsilon_n))\leq 1$ and $\overline{B(x,R)}\cap \Lambda_j(\varepsilon_n)\neq \emptyset$, where $r=\sqrt{\tfrac{a}{S}}r_0$ and $R=R_0$. To this end, let $x\in K_n$ be arbitrary. By our choice of $K_n$, there exists $T\in\cP_{\varepsilon_n}(T_j)$ such that $x\in T$. First, notice that \eqref{eq:inclusions} gives us $x\in B(x_T,R)$, which implies that $x_T \in B(x,R)$, so $\overline{B(x,R)}\cap \Lambda_j(\varepsilon_n) \neq \emptyset$ holds. It remains to prove that $\card(B(x,r)\cap \Lambda_j(\varepsilon_n))\leq 1$. We consider two cases: \textit{Case 1.} Assume that $x$ lies in $B(x_T,r)$. Then $x_T\in B(x,r)$. We claim that $x_{T'}\notin B(x,r)$ for any $T' \in \cP_{\varepsilon_n}(T_j) \backslash \{T\}$. Indeed, we have \begin{equation}\label{triangle_inequality} 2r \leq \|x_T-x_{T'}\| \leq \|x_T-x\| + \|x-x_{T'}\| \leq r + \|x-x_{T'}\|\,, \end{equation} where the first inequality holds since $T'$ and $T$ are non-overlapping. By \eqref{triangle_inequality}, we have $\|x-x_{T'}\| \geq r$, so the claim holds. \textit{Case 2.} Assume that $x$ lies outside of $B(x_T,r)$, i.e.\ $x_T \notin B(x,r)$. We need to check that there can be at most one tile $T'\in\cP_{\varepsilon_n}(T_j) \backslash \{T\}$ such that $x_{T'}\in B(x,r)$. On the contrary, suppose that there exist two distinct tiles $T',T''\in\cP_{\varepsilon_n}(T_j) \backslash \{T\}$ with $x_{T'},x_{T''}\in B(x,r)$. Since $T'$ and $T''$ have no overlap, we must have $\|x_{T'}-x_{T''}\|\geq 2r$. On the other hand, since $x_{T'},x_{T''}\in B(x,r)$, the triangle inequality gives us \[ \|x_{T'}-x_{T''}\| \leq \|x_{T'}-x\| + \|x-x_{T''}\| < r+r=2r\,, \] a contradiction. Thus, we have shown that $\card(B(x,r)\cap\Lambda_j(\varepsilon_n))\leq 1$ and $\overline{B(x,R)}\cap \Lambda_j(\varepsilon_n) \neq \emptyset$. As $x$ was arbitrary, this completes the proof. \end{proof} \begin{thm}\label{thm:convergence} \emph{ There exists a subsequence of $\Lambda_j(\varepsilon_n)$ that converges to an $(r,R)$-Delone set in $\RR^2$ in the Chabauty--Fell topology. } \end{thm} \begin{proof} By Lemma~\ref{Limit_Delone}, we have $\Lambda_j(\varepsilon_n) \in \mathcal{W}_{r,R}(K_n)$, so there must exist a point set $\Lambda_n \in \mathcal{W}_{r,R}(\RR^2)$ such that $\Lambda_n \cap K_n = \Lambda_j(\varepsilon_n)$. Next, since $\cW_{r,R}(\RR^2)$ is compact, there exists a subsequence $\{\Lambda_{n_i}\}_{i \geq 1}$ of $\{\Lambda_n\}_{n \geq 1}$ converging to some $\Lambda \in \cW_{r,R}(\RR^2)$ in the Chabauty--Fell topology. Moreover, we have $K_n \subseteq K_{n+1}$ for all $n$ and $\bigcup_{n\geq 1}K_n = \RR^2$.
By compactness of $K_{n_i}$ in $\RR^2$ (in the Euclidean topology), we can pick a sequence $R_{n_i}$ of positive real numbers such that $K_{n_i} \subseteq B(0,R_{n_i})$ for all $i$. Therefore, by Proposition~\ref{prop:Fell-compact-convergence}, we also have that $\Lambda_{n_i} \cap B(0,R_{n_i})$ converges to $\Lambda$ in the Chabauty--Fell topology. This proves the theorem. \end{proof} \section{Tilings By Badly Approximable Triangles}\label{sec:specific} In this section, we consider the subdivision rule given in Figure~\ref{fig:subdivision-rule-new} for arbitrary angles $\alpha,\beta,\gamma$. Below, we describe the GIFS associated with this subdivision rule. Then, as illustrated in Section~\ref{sec:optimal-tilings}, we can use this GIFS and the methods in Section~\ref{sec:threshold} to produce a sequence of finite patches leading to a Delone set. \subsection{GIFS for arbitrary angles} \label{GIFS2} Let $T_1$ and $T_2$ be the scalene and isosceles triangle of unit area, respectively. We compute the GIFS for $\cF=\{T_1,T_2\}$. We use $e^{i\theta}$ to denote rotation about the origin by the angle $\theta$. Suppose that the bottom left corners of the tiles are at the origin. The contractive similitudes are as follows: \begin{equation}\label{eq:GIFS-specific} \begin{aligned} & f_1(x,y)=\tfrac{1}{1+t^2}(x,y)\,, \\ & f_2(x,y)=\tfrac{t}{s(1+t^2)}(-x,y) \cdot e^{i(\pi-\beta)}+(a,0)+at(\cos\gamma,\sin\gamma)\,,\\ & f_3(x,y)=\tfrac{t}{1+t^2}(x,y)\cdot e^{i\gamma}+(a,0)\,,\\ &f_4(x,y)=\tfrac{\sqrt{C}t}{1+t^2}(-x,y) \cdot e^{i\alpha}+(bu,0)\,, \\ & f_5(z)=\tfrac{\sqrt{C}t}{1+t^2}(-x,y) \cdot e^{i(\pi-\alpha)}+(bu,0)+bt(\cos\alpha,\sin\alpha) \,, \\ & g_1(x,y)=\tfrac{u}{\sqrt{C}(st+u)}(x,y) \cdot e^{i(\pi-\beta)}+(a,0)+at(\cos(\gamma),\sin(\gamma))\,,\\ & g_2(x,y)=\tfrac{u}{st+u}(x,y)\,, \\ & g_3(x,y)=\tfrac{st}{st+u}(x,y)+bst^2(\cos\gamma,\sin\gamma)\,, \end{aligned} \end{equation} where \begin{equation*} s = \tfrac{\sin\beta}{\sin\gamma} \,,\quad t = \tfrac{\sin\alpha}{\sin\beta} \,,\quad u = 2st^2\cos\gamma=\tfrac{2\sin^2\gamma}{\sin\beta\tan\gamma} \,, \quad C=\tfrac{2(1+t^2)^2\sin\alpha\cot \gamma}{t(st+u)(1+2t\cos \gamma)}\,, \end{equation*} and \begin{equation*} a = \tfrac{1}{1+t^2}\sqrt{\tfrac{2}{s\sin(\alpha)}}\,, \quad b = 2\sqrt{\tfrac{\cot(\gamma)}{st(st+u)(2t\cos(\gamma) + 1)}} \,. \end{equation*} Then we get \begin{equation*} T_1=g_1(T_2) \cup f_1(T_1) \cup f_2(T_1) \cup f_3(T_1) \,, \end{equation*} and \begin{equation*} T_2=g_2(T_2) \cup g_3(T_2) \cup f_4(T_1) \cup f_5(T_1)\,. \end{equation*} \subsection{Tilings with optimal badly approximable angles}\label{sec:optimal-tilings} Our objective is to obtain Delone sets associated with the unique solutions to the equality $x+y+z=1\ (x,y,z\in\B_{2,1},\ x \leq y \leq z)$ found in Theorem~\ref{Main}: \begin{equation*} x=2-\sqrt{3}\,,\quad y=z=\frac{\sqrt{3}-1}{2}\,, \end{equation*} and \begin{equation*} x=y=\frac{2-\sqrt{2}}{2}\,,\quad z=\sqrt{2}-1\,. \end{equation*} For the first solution, by choosing \begin{equation}\label{eq:optimal1-angles} \alpha = (2-\sqrt{3})\pi\,, \quad \beta=\gamma=\frac{(\sqrt{3}-1)\pi}{2} \,, \end{equation} the subdivision rule in Figure~\ref{fig:subdivision-rule-new} reduces to a subdivision rule involving only the isosceles triangle with angles $\alpha$ and $\beta=\gamma$ as shown in Figure~\ref{fig:subdivision-rule-iso}. 
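The quantities in \eqref{eq:GIFS-specific} are straightforward to evaluate numerically. The following short Python sketch (an illustration only, not part of the construction; it uses only the standard library, and the variable names are ours) computes $s$, $t$, $u$, $C$, $a$, $b$ and the contraction factors of the similitudes at the angles \eqref{eq:optimal1-angles}, and checks that every similitude is indeed a contraction; replacing the angle assignments with those in \eqref{eq:optimal2-angles} gives the second case.
\begin{verbatim}
import math

# Angles from the first solution of Theorem 1, see (eq:optimal1-angles);
# they sum to pi, so they are the angles of a triangle.
alpha = (2 - math.sqrt(3)) * math.pi
beta = gamma = (math.sqrt(3) - 1) / 2 * math.pi
assert abs(alpha + beta + gamma - math.pi) < 1e-12

# Quantities s, t, u, C, a, b from (eq:GIFS-specific)
s = math.sin(beta) / math.sin(gamma)
t = math.sin(alpha) / math.sin(beta)
u = 2 * s * t**2 * math.cos(gamma)
C = (2 * (1 + t**2)**2 * math.sin(alpha)
     / (math.tan(gamma) * t * (s * t + u) * (1 + 2 * t * math.cos(gamma))))
a = math.sqrt(2 / (s * math.sin(alpha))) / (1 + t**2)
b = 2 * math.sqrt(1 / (math.tan(gamma) * s * t * (s * t + u)
                       * (2 * t * math.cos(gamma) + 1)))

# Contraction factors of f_1, ..., f_5 and g_1, g_2, g_3
factors = {
    "f1": 1 / (1 + t**2),
    "f2": t / (s * (1 + t**2)),
    "f3": t / (1 + t**2),      # its square is the epsilon_0 used below
    "f4, f5": math.sqrt(C) * t / (1 + t**2),
    "g1": u / (math.sqrt(C) * (s * t + u)),
    "g2": u / (s * t + u),
    "g3": s * t / (s * t + u),
}
assert all(0 < f < 1 for f in factors.values())
print("s, t, u, C, a, b =", [round(v, 6) for v in (s, t, u, C, a, b)])
print({name: round(f, 6) for name, f in factors.items()})
\end{verbatim}
For both choices of angles the printed values give $s=1$ and $C\approx 1$ (up to rounding), consistent with the reduction to a single isosceles tile observed above.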
\begin{figure}[ht] \centering \includegraphics{subdivision_rule_optimal1} \caption{Subdivision rule for the isosceles triangle with optimal badly approximable angles $\alpha$ and $\beta=\gamma$ as in \eqref{eq:optimal1-angles}. } \label{fig:subdivision-rule-iso} \end{figure} Similarly, for the second solution, by choosing \begin{equation}\label{eq:optimal2-angles} \alpha = (\sqrt{2}-1)\pi\,, \quad \beta=\gamma=\frac{(2-\sqrt{2})\pi}{2} \,, \end{equation} we again get a subdivision rule involving only an isosceles triangle, exactly as in Figure~\ref{fig:subdivision-rule-iso} except with different values for $\alpha$ and $\beta$. In this section, we provide some illustrations of our construction in Section~\ref{GIFS2} in the specific case when $\alpha$ and $\beta=\gamma$ are as in $\eqref{eq:optimal1-angles}$ and \eqref{eq:optimal2-angles}. First, in Figure~\ref{fig:optimal_iterations}, we illustrate the $\varepsilon$-rule process for $\varepsilon=0.2$ as an example. Then, in Figure~\ref{fig:optimal1_patches} and Figure~\ref{fig:optimal2_patches}, we show the final patches for a few different values of $\varepsilon$. \begin{figure}[ht] \begin{subfigure}[c]{\textwidth} \centering \includegraphics{optimal1_iterations1_2} \caption{$\alpha=(2-\sqrt{3})\pi$, $\beta=\gamma=\frac{(\sqrt{3}-1)\pi}{2}$} \end{subfigure} \par\bigskip \begin{subfigure}[c]{\textwidth} \centering \includegraphics{optimal2_iterations1_2} \caption{$\alpha=(\sqrt{2}-1)\pi$, $\beta=\gamma=\frac{(2-\sqrt{2})\pi}{2}$} \end{subfigure} \caption{Illustration of the $\varepsilon$-rule for $\varepsilon=0.2$ for optimal badly approximable angles.} \label{fig:optimal_iterations} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal1_patch1_08} \caption{$\varepsilon=0.08$} \end{subfigure} \hskip -9ex \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal1_patch1_04} \caption{$\varepsilon=0.04$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal1_patch1_02} \caption{$\varepsilon=0.02$} \end{subfigure} \caption{Final patches resulting from different $\varepsilon$-rules for optimal badly approximable angles $\alpha=(2-\sqrt{3})\pi$, $\beta=\gamma=\frac{(\sqrt{3}-1)\pi}{2}$.} \label{fig:optimal1_patches} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal2_patch1_08} \caption{$\varepsilon=0.08$} \end{subfigure} \hskip -9ex \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal2_patch1_04} \caption{$\varepsilon=0.04$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics{optimal2_patch1_02} \caption{$\varepsilon=0.02$} \end{subfigure} \caption{Final patches resulting from different $\varepsilon$-rules for optimal badly approximable angles $\alpha=(\sqrt{2}-1)\pi$, $\beta=\gamma=\frac{(2-\sqrt{2})\pi}{2}$.} \label{fig:optimal2_patches} \end{figure} As described in \cite[Remark~4.11]{Smi-Solo:21}, one can produce a stationary tiling by applying an additional isometry after each $\varepsilon$-rule. For our subdivision rule shown in Figure~\ref{fig:subdivision-rule-new}, we can do this by introducing a rotation by the badly approximable angle $-\gamma$. To make this more precise, choose the sequence $\varepsilon_n = \varepsilon_0^n$ where $\varepsilon_0 = \frac{t^2}{(1+t^2)^2}$ and $t = \frac{\sin(\alpha)}{\sin(\beta)}$. Then $\varepsilon_0$ is the square of the contraction factor of the similitude $f_3$ in \eqref{eq:GIFS-specific}. 
This function maps the large scalene triangle $T_1$ shown on the left in Figure~\ref{fig:subdivision-rule-new} to the smaller scalene triangle in its center. For this value of $\varepsilon_0$, the finite patch $\cP_{\varepsilon_0}(T_1)$ contains a copy of $e^{i\gamma}T_1$. With this choice of $\varepsilon_n$, the sequence $e^{-in\gamma}(\cP_{\varepsilon_n}(T_1))$ will produce a stationary tiling. Figure~\ref{fig:optimal_stationary} shows the resulting nested sequence of finite patches when $\alpha$ and $\beta=\gamma$ are as in \eqref{eq:optimal1-angles} and \eqref{eq:optimal2-angles}. \begin{figure}[ht] \begin{subfigure}[c]{\textwidth} \centering \includegraphics{optimal1_stationary1_01384} \caption{$\alpha=(2-\sqrt{3})\pi$, $\beta=\gamma=\frac{(\sqrt{3}-1)\pi}{2}$} \end{subfigure} \par\bigskip \begin{subfigure}[c]{\textwidth} \centering \includegraphics{optimal2_stationary1_014} \caption{$\alpha=(\sqrt{2}-1)\pi$, $\beta=\gamma=\frac{(2-\sqrt{2})\pi}{2}$} \end{subfigure} \caption{Illustration of the sequences producing stationary tilings for badly approximable angles. The stationary tiling is produced by choosing $\varepsilon_n = \varepsilon_0^n$, where $\varepsilon_0$ is chosen so that the area of the middle triangle is preserved, and applying an appropriate isometry after each application of the $\varepsilon_n$-rule. Each stationary subpatch is highlighted in blue. In (B), an additional rotation by $\frac{\pi}{4}$ is applied for ease of illustration. } \label{fig:optimal_stationary} \end{figure} Each finite patch in these nested sequences contains the previous finite patch, along with several new copies of the triangle rotated by badly approximable angles $\alpha$ and $\beta=\gamma$. Due to these irrational rotations, we expect the diffraction from the resulting stationary tilings to be rotationally invariant. Moreover, the discrepancy of each irrational rotation is small and optimal for our two choices of angles. We conclude this section with a remark on convergence. \begin{remark} The tilings produced by our method do not have finite local complexity; because of this, we utilized the Chabauty--Fell topology to show the existence of a limit Delone set via a non-constructive process of subsequence selection. However, by the method described above, we can always produce a nested sequence of finite patches converging to a stationary tiling. In this case, we also have convergence in the \emph{local topology}, i.e.\ the finite patches overlap exactly out to larger and larger radii up to small translations (see \cite[Chapter~5]{Baake-Grimm:13} for a formal definition). This stronger notion of convergence is primarily used in the study of tilings with finite local complexity, but is still applicable to stationary constructions. \end{remark} \section{Proof of Theorems~\ref{Main}, \ref{Main2} and \ref{Main3}}\label{sec:proof_main123} To prove Theorems~\ref{Main} and \ref{Main2}, we will need the following lemmas: \begin{lem}[Forbidden patterns]\label{lem:x3-y3-z-1} \emph{ Let $z\in(0,1)\backslash\QQ$ be such that $z \in [r_1, r_2]$ for some $r_1,r_2\in(0,1)\cap \QQ$.
If the simple continued fractions of $r_1$ and $r_2$ are of the form \begin{equation}\label{eq:forbidden1} r_1 = [(2)^{2k-1},(2,1)^\ell,s,a_1,\dots,a_{n_1},\infty]\,,\quad r_2 = [(2)^{2k-1},b_1,\dots,b_{n_2},\infty]\,, \end{equation} or \begin{equation}\label{eq:forbidden2} r_1 = [(2)^{2k},a_1,\dots,a_{n_1},\infty]\,,\quad r_2 = [(2)^{2k},(2,1)^\ell,s,b_1,\dots,b_{n_2},\infty]\,, \end{equation} where $k\geq 1$, $\ell \geq 0$, and $s\geq 3$, then $z \notin \B_2$. } \end{lem} \begin{proof} \medskip \noindent \textit{Case 1.} Assume that $r_1$ and $r_2$ are of the form \eqref{eq:forbidden1} where $k\geq 1$, $\ell \geq 0$, and $s\geq 3$. \medskip Let $z = [c_1,c_2,\dots]$. Suppose by contradiction that $z\in\B_2$. First, for simplicity, observe that \eqref{eq:forbidden1} implies \begin{equation}\label{eq:forbidden1-simple} [(2)^{2k-1},(2,1)^\ell,s,\infty] \leq z \leq [(2)^{2k-1},\infty]\,. \end{equation} Since \eqref{eq:forbidden1} requires $c_1,\dots,c_{2k-1} \geq 2$ and $z\in\B_2$ requires $c_1,\dots,c_{2k-1} \leq 2$, we must have \begin{equation}\label{eq:equals2} c_1=\dots=c_{2k-1}=2\,, \end{equation} i.e.\ $z\in I((2)^{2k-1})$. Next, consider $u=[c_{2k},c_{2k+1},\dots]$. By \eqref{eq:forbidden1-simple} and \eqref{eq:equals2}, we have \begin{equation}\label{eq:leq21} u \leq [(2,1)^\ell,s,\infty]\,. \end{equation} From this, we can deduce that $u\in I((2,1)^\ell)$. To see this, observe that \eqref{eq:leq21} gives us $c_{2k} \geq 2$, so we must have $c_{2k}=2$ because $u\in \B_2$. We also get that $c_{2k+1} \leq 1$, so we must have $c_{2k+1} = 1$. Proceeding inductively, we find that $c_{2q}=2$ and $c_{2q+1}=1$ for $k\leq q\leq k+\ell-1$. Now consider $v = [c_{2(k+\ell)},c_{2(k+\ell)+1},\dots]$. From \eqref{eq:leq21} we get \[ v \leq [s,\infty] = \frac{1}{s}\,, \] so $c_{2(k+\ell)} = \lfloor 1/v \rfloor \geq s \geq 3$, contradicting $v\in\B_2$. Therefore, $z$ cannot have the form \eqref{eq:forbidden1}. \medskip \noindent \textit{Case 2.} The proof when $r_1$ and $r_2$ are of the form \eqref{eq:forbidden2} is similar to Case 1. \end{proof} We call any interval of the form \eqref{eq:forbidden1} or \eqref{eq:forbidden2} a \textit{forbidden pattern}. Moreover, if $z \in [r_1, r_2]$ for some forbidden pattern $[r_1, r_2]$, we say that $z$ is \emph{contained in a forbidden pattern.} \begin{lem}\label{xyz} \emph{ The equality $ x+y=z\ (x,y,z\in \B_{2}) $ has exactly one solution \begin{equation}\label{eq:B2-unique} x=y=\frac{\sqrt{3}-1}2=[\overline{2,1}],\ z=\sqrt{3}-1=[\overline{1,2}]\,. \end{equation} } \end{lem} \begin{proof} Suppose $x=[a_1,a_2,\cdots]$ where $a_i\in \{1,2\}$ with $i\geq 1$ and $y=[b_1,b_2,\cdots]$ where $b_j\in \{1,2\}$ with $j\geq 1$. It is very easy to get that $x<y$ if and only if $(-1)^na_n<(-1)^nb_n$ for the first $n$ such that $a_n\neq b_n$. As a consequence, we have \[ \frac{\sqrt{3}-1}{2}=[\overline{2,1}] \, = \min(\B_2)\,, \] and \[ \sqrt{3}-1=[\overline{1,2}] \, = \max(\B_2)\,. \] With this preparation, we show that \eqref{eq:B2-unique} is the unique solution of $ x+y=z\ (x,y,z\in \B_{2}) $ by considering the sum of $x$ and $y$. If $x+y<\sqrt{3}-1$, we get that one of them is less than $\frac{\sqrt{3}-1}{2}$, which contradicts $\frac{\sqrt{3}-1}{2}$ being the minimum of $\B_2$. Thus, $x+y\geq \sqrt{3}-1$. Similarly, if $x+y>\sqrt{3}-1$, then $z=x+y>\sqrt{3}-1$, which contradicts $\sqrt{3}-1$ being the maximum of $\B_2$. Therefore, $z=x+y=\sqrt{3}-1$ and $x=y=\frac{\sqrt{3}-1}{2}$. \end{proof} We can now provide a concise proof of Theorem~\ref{Main}. 
Indeed, we will see that the main part of the proof can be summarized by the careful analysis of cases shown in Table~\ref{tab:my_label_1} and Table~\ref{tab:my_label_2}, which we present below. However, the reader may also wish to refer to \eqref{eq_1_X_Y_Zb} and Remark~\ref{rem:intuition} for an intuitive explanation of why this case analysis is stable as the parameter $n$ increases. \subsection{Proof of Theorem~\ref{Main}} Assume that $x \leq y \leq z$. We divide the proof into 4 cases: \medskip \noindent \textit{Case 1.} $x,y,z \in \B_2$. \medskip \noindent This case is impossible by Remark~\ref{rem:no-B2-solution}. \medskip \noindent \textit{Case 2.} $x\in \B_{2,1}\setminus\B_2$, and $y,z\in\B_2$. \medskip \noindent Setting $x=\frac{1}{3+X}$ with $X\in\B_2$, we have $$ y+z=1-\frac{1}{3+X}=\frac{1}{1+\frac{1}{2+X}}\in \B_2\,. $$ By Lemma~\ref{xyz}, we obtain $$ y=z=\frac{\sqrt{3}-1}{2}\,, \quad 1-\frac{1}{3+X}=\sqrt{3}-1\,, $$ i.e. \begin{align}\label{x+y+z=1_1} x=\frac{1}{3+X}=2-\sqrt{3}=[3,\overline{1,2}]\,, \quad y=z=\frac{\sqrt{3}-1}{2}=[\overline{2,1}]\,, \end{align} is the solution of $x+y+z=1$. \medskip \noindent \textit{Case 3.} $x,y \in \B_{2,1}\setminus\B_2$, and $z\in\B_2$. \medskip \noindent In this case, the continued fraction expansions of $x$ and $y$ both satisfy $a_1 = 3$, $a_j \leq 2$ for all $j \geq 2$. Under these assumptions, we aim to show that \begin{equation*} x=y=\frac{2-\sqrt{2}}{2} = [3,\overline{2}]\,, \quad z = \sqrt{2}-1 = [\overline{2}]\,, \end{equation*} is the only possible solution. First, for a given nonnegative integer $n$, let $X_n$ and $Y_n$ denote arbitrary numbers in the cylinder set $I(3,(2)^{n})$ (for $n=0$ this is exactly the constraint $a_1=3$ satisfied by $x$ and $y$), and set $Z_n=1-X_n-Y_n$. We will prove the following statement: \medskip For each $n\geq 0$, if $X_n \notin I(3,(2)^{n+1})$ or $Y_n \notin I(3,(2)^{n+1})$, then $Z_n=1-X_n-Y_n$ is contained in a forbidden pattern. \medskip \noindent We divide the proof into 2 cases according to the parity of $n$. \medskip \noindent \textit{Case 3.1.} $n$ is even. \medskip \noindent The computations when $n$ is even are summarized in Table~\ref{tab:my_label_1}. There are 17 cases that must be considered for the cylinder sets of $X_n$ and $Y_n$.
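The rows of Table~\ref{tab:my_label_1} are verified for general even $n$ by algebraic computations of the kind detailed below for Case~2.1; for any fixed $n$, individual rows can also be checked directly with exact rational arithmetic. The following minimal Python sketch (a sanity check only, not a substitute for the general argument; the helper functions are ad hoc and use only the standard library) computes the endpoints of the relevant cylinder sets via the convergent recursion recalled earlier and verifies Cases~1 and~2.1 of Table~\ref{tab:my_label_1} for a few small even values of $n$.
\begin{verbatim}
from fractions import Fraction

def cylinder_endpoints(b):
    """Endpoints of the cylinder set I(b_1,...,b_k), via the recursion for P_k, Q_k."""
    p_prev, p = Fraction(1), Fraction(0)   # P_{-1}, P_0
    q_prev, q = Fraction(0), Fraction(1)   # Q_{-1}, Q_0
    for partial_quotient in b:
        p_prev, p = p, partial_quotient * p + p_prev
        q_prev, q = q, partial_quotient * q + q_prev
    return sorted([(p + p_prev) / (q + q_prev), p / q])

def cf_value(b):
    """Value of the terminating continued fraction [b_1,...,b_k,infinity]."""
    value = Fraction(0)
    for partial_quotient in reversed(b):
        value = 1 / (partial_quotient + value)
    return value

def check(cyl_x, cyl_y, left, right):
    """Z = 1 - X - Y lies in [left,right] for all X in I(cyl_x) and Y in I(cyl_y)."""
    x_lo, x_hi = cylinder_endpoints(cyl_x)
    y_lo, y_hi = cylinder_endpoints(cyl_y)
    z_lo, z_hi = 1 - x_hi - y_hi, 1 - x_lo - y_lo
    assert cf_value(left) <= z_lo and z_hi <= cf_value(right)

for n in (0, 2, 4):                      # small even n, as in Case 3.1
    twos = [2] * n
    # Case 1 of Table 1
    check([3] + twos + [1], [3] + twos + [1],
          [2] * (n + 1) + [3], [2] * (n + 1))
    # Case 2.1 of Table 1
    check([3] + twos + [1, 2], [3] + twos + [2, 2],
          [2] * (n + 1) + [3], [2] * (n + 1) + [3, 1])
print("Cases 1 and 2.1 of Table 1 verified for n = 0, 2, 4")
\end{verbatim}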
\begin{table}[ht] \centering \footnotesize \begin{tabular}{c|c|c|c|c} \thead{Case} & \thead{Cylinder set \\ for $X_n$} & \thead{Cylinder set \\ for $Y_n$} & \thead{Left endpoint of \\ the forbidden pattern} & \thead{Right endpoint of \\ the forbidden pattern}\\ \hline 1 & $I(3,(2)^n,1)$ & $I(3,(2)^n,1)$ & $[(2)^{n+1},3,\infty]$ & $[(2)^{n+1},\infty]$ \\ 2.1 & $I(3,(2)^n,1,2)$ & $I(3,(2)^n,2,2)$ & $[(2)^{n+1},3,\infty]$ & $[(2)^{n+1},3,1,\infty]$ \\ 2.2 & $I(3,(2)^n,1,2)$ & $I(3,(2)^n,2,1)$ & $[(2)^{n+1},(2,1),12,\infty]$ & $[(2)^{n+1},3,1,\infty]$ \\ 2.3.1 & $I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,2,1)$ & $[(2)^{n+1},3,\infty]$ & $[(2)^{n+1},3,2,\infty]$ \\ 2.3.2 & $I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,2,2)$ & $[(2)^{n+1},3,\infty]$ & $[(2)^{n+1},3,3,\infty]$ \\ 2.3.3 & $I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,2,1)$ & $[(2)^{n+1},(2,1),14,\infty]$ & $[(2)^{n+1},3,10,\infty]$ \\ 2.3.4 & $I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,2,2)$ & $[(2)^{n+1},(2,1),10,\infty]$ & $[(2)^{n+1},3,20,\infty]$ \\ 2.4.1 & $I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,1,1)$ & $[(2)^{n+1},(2,1),6,\infty]$ & $[(2)^{n+1},3,4,\infty]$ \\ 2.4.2 & $I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,1,2)$ & $[(2)^{n+1},(2,1),4,\infty]$ & $[(2)^{n+1},3,8,\infty]$ \\ 2.4.3 & $I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,1,1)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.1.1 & $I(3,(2)^n,1,1,2,1,1)$ & $I(3,(2)^n,2,1,2,1,1)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.1.2 & $I(3,(2)^n,1,1,2,1,1)$ & $I(3,(2)^n,2,1,2,1,2)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.1.3 & $I(3,(2)^n,1,1,2,1,2)$ & $I(3,(2)^n,2,1,2,1,2)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.1.4 & $I(3,(2)^n,1,1,2,1,2)$ & $I(3,(2)^n,2,1,2,1,1)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.2 & $I(3,(2)^n,1,1,2,1)$ & $I(3,(2)^n,2,1,2,2)$ & $[(2)^{n+1},(2,1)^2,20,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.3 & $I(3,(2)^n,1,1,2,2)$ & $I(3,(2)^n,2,1,2,1)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ 2.4.4.4 & $I(3,(2)^n,1,1,2,2)$ & $I(3,(2)^n,2,1,2,2)$ & $[(2)^{n+1},(2,1),3,\infty]$ & $[(2)^{n+1},2,1,\infty]$ \\ \end{tabular} \caption{Case analysis showing the forbidden patterns containing $Z_n=1-X_n-Y_n$ when $n$ is even.} \label{tab:my_label_1} \end{table} The computations are lengthy, and we only detail Case~2.1 of Table~\ref{tab:my_label_1} in this paper. To prove the result for this case, assume that \begin{equation*} X_n \in I(3,(2)^n,1,2)\,, \quad Y_n \in I(3,(2)^n,2,2) \,. \end{equation*} Setting \[ A_n=1-\tfrac{\sqrt{2}}{2}+\tfrac{12-19\sqrt{2}}{-19+6\sqrt{2}+17(1+\sqrt{2})^{2n+4}} \,, \quad B_n=1-\tfrac{\sqrt{2}}{2}+\tfrac{10+\sqrt{2}}{1+5\sqrt{2}-7(1+\sqrt{2})^{2n+5}} \,, \] and \[ C_n=1-\tfrac{\sqrt{2}}{2}+\tfrac{\sqrt{2}}{-(1+\sqrt{2})^{2n+8}+1} \,, \quad D_n=1-\tfrac{\sqrt{2}}{2}+\tfrac{\sqrt{2}}{(1+\sqrt{2})^{2n+8}+1} \,, \] we obtain \[ I(3,(2)^n,1,2) =(A_n,B_n]=1-\tfrac{\sqrt{2}}{2}+\left(\tfrac{12-19\sqrt{2}}{-19+6\sqrt{2}+17(1+\sqrt{2})^{2n+4}}, \tfrac{10+\sqrt{2}}{1+5\sqrt{2}-7(1+\sqrt{2})^{2n+5}}\right]\,, \] \[ I(3,(2)^n,2,2)=(C_n,D_n]=1-\tfrac{\sqrt{2}}{2}+\left(\tfrac{\sqrt{2}}{-(1+\sqrt{2})^{2n+8}+1}, \tfrac{\sqrt{2}}{(1+\sqrt{2})^{2n+8}+1}\right] \,. \] We claim that \begin{align*} Z_n=1-X_n-Y_n\in \left[[(2)^{n+1},3,\infty]\,,[(2)^{n+1},3,1,\infty]\right] \,. \end{align*} Since $n+1$ is odd, the interval $\left[[(2)^{n+1},3,\infty],[(2)^{n+1},3,1,\infty]\right]$ is a forbidden pattern by Lemma~\ref{lem:x3-y3-z-1}, so the claim implies that $Z_n$ is contained in a forbidden pattern.
Indeed, \begin{equation*} Z_n= 1-X_n-Y_n\in[1-B_n-D_n, 1-A_n-C_n) \,, \end{equation*} by direct computation, we get \[ 1-B_n-D_n=\sqrt2-1-\tfrac{10+\sqrt{2}}{1+5\sqrt{2}-7(1+\sqrt{2})^{2n+5}}-\tfrac{\sqrt{2}}{(1+\sqrt{2})^{2n+8}+1}\,, \] \[ 1-A_n-C_n=\sqrt2-1-\tfrac{12-19\sqrt{2}}{-19+6\sqrt{2}+17(1+\sqrt{2})^{2n+4}}-\tfrac{\sqrt{2}}{-(1+\sqrt{2})^{2n+8}+1}\,. \] Since $n$ is even, \begin{align*} &\ \ \ \ (1-B_n-D_n)-[(2)^{n+1},3,\infty]\\ &=\sqrt2-1-\tfrac{10+\sqrt{2}}{1+5\sqrt{2}-7(1+\sqrt{2})^{2n+5}}-\tfrac{\sqrt{2}}{(1+\sqrt{2})^{2n+8}+1}- \tfrac{(-3+2\sqrt{2})^{n+2}+1}{(1-\sqrt{2})(-3+2\sqrt{2})^{n+2}+1+\sqrt{2} }\\ &=\tfrac{(1+\sqrt{2})^{n}(24+16\sqrt{2})-(1-\sqrt{2})^{n}(-24+16\sqrt{2})} {\left(-(1-\sqrt{2})^{n+3}-(1+\sqrt{2})^{n+3}\right)\left(10+(2\sqrt{2}-1)(1-\sqrt{2})^{2n+6}-(2\sqrt{2}+1)(1+\sqrt{2})^{2n+6}\right)} \,, \end{align*} is positive. We show this by checking that the numerator and the denominator are both positive. Indeed, since $\left|1+\sqrt{2}\right|>\left|1-\sqrt{2}\right|$ and $\left|24+16\sqrt{2}\right|>\left|-24+16\sqrt{2}\right|$, the numerator is positive. Similarly, the first term in the denominator $-(1-\sqrt{2})^{n+3}-(1+\sqrt{2})^{n+3}$ is negative since $\left|1-\sqrt{2}\right|<\left|1+\sqrt{2}\right|$. Now, since $\left|2\sqrt{2}-1\right|<\left|2\sqrt{2}+1\right|$ and $\left|1-\sqrt{2}\right|<\left|1+\sqrt{2}\right|$, the second term in the denominator $$ 10+(2\sqrt{2}-1)(1-\sqrt{2})^{2n+6}-(2\sqrt{2}+1)(1+\sqrt{2})^{2n+6} \,, $$ is negative, so the denominator is positive. Therefore, the quotient is positive, as desired. Next, we verify that \begin{align*} &\ \ \ \ \ [(2)^{n+1},3,1,\infty]-(1-A_n-C_n)\\ &=\tfrac{\frac{1}{7}(9+4\sqrt{2})(-3+2\sqrt{2})^{n+2}+1}{\frac{1}{7}(1- 5\sqrt{2})(-3+2\sqrt{2})^{n+2}+1+\sqrt{2}}-\left(\sqrt2-1-\tfrac{12-19\sqrt{2}}{-19+6\sqrt{2}+17(1+\sqrt{2})^{2n+4}}-\tfrac{\sqrt{2}}{-(1+\sqrt{2})^{2n+8}+1}\right)\\ &=\tfrac{(1+\sqrt{2})^{n}(72+60\sqrt{2})-(1-\sqrt{2})^{n}(-72+60\sqrt{2})} {\left((1-\sqrt{2})^{n+3}(4+\sqrt{2})+(1+\sqrt{2})^{n+3}(4-\sqrt{2})\right)\left(-28+(1-\sqrt{2})^{2n+6}(6-\sqrt{2})+(1+\sqrt{2})^{2n+6}(6+\sqrt{2})\right)} \,, \end{align*} is positive. As before, we can see this by checking that the numerator and the denominator are both positive. This time, we have $\left|1+\sqrt{2}\right|>\left|1-\sqrt{2}\right|$ and $\left|72+60\sqrt{2}\right|>\left|72-60\sqrt{2}\right|$, so the numerator is positive. Similarly, it is easy to check that the first term and the second term in the denominator are positive, so the denominator is positive. Therefore, the quotient is positive, as desired. Thus, we obtain that $$ Z_n=1-X_n-Y_n \in \left[[(2)^{n+1},3,\infty],[(2)^{n+1},3,1,\infty]\right] \,, $$ which implies that the bounds on $Z_n$ are contained in a forbidden pattern. The other cases in Table~\ref{tab:my_label_1} can be proven in the same way. \medskip \noindent \textit{Case 3.2.} $n$ is odd. \medskip \noindent The computations of forbidden patterns containing $Z_n=1-X_n-Y_n$ when $n$ is odd are summarized in Table~\ref{tab:my_label_2}, where $X_n \notin I(3,(2)^{n+1})$ or\ $Y_n \notin I(3,(2)^{n+1})$. Again, there are 17 cases that must be considered. The proofs are similar to those for the even case. 
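The closed-form endpoints $A_n$, $B_n$, $C_n$, $D_n$ and the two positivity claims verified above for Case~2.1 can also be checked numerically for any fixed even $n$. The following short Python sketch (again only an illustration, using the standard library; the helper function is ours) evaluates both differences for $n=0,2,\dots,10$ and confirms that they are positive.
\begin{verbatim}
import math

def cf_value(b):
    """Value of the terminating continued fraction [b_1,...,b_k,infinity]."""
    value = 0.0
    for partial_quotient in reversed(b):
        value = 1.0 / (partial_quotient + value)
    return value

r2 = math.sqrt(2.0)

for n in range(0, 12, 2):  # even n, as in Case 3.1
    # Closed-form endpoints of I(3,(2)^n,1,2) = (A_n, B_n] and I(3,(2)^n,2,2) = (C_n, D_n]
    A = 1 - r2 / 2 + (12 - 19 * r2) / (-19 + 6 * r2 + 17 * (1 + r2) ** (2 * n + 4))
    B = 1 - r2 / 2 + (10 + r2) / (1 + 5 * r2 - 7 * (1 + r2) ** (2 * n + 5))
    C = 1 - r2 / 2 + r2 / (-(1 + r2) ** (2 * n + 8) + 1)
    D = 1 - r2 / 2 + r2 / ((1 + r2) ** (2 * n + 8) + 1)
    # Endpoints of the forbidden pattern claimed in Case 2.1 of Table 1
    left = cf_value([2] * (n + 1) + [3])      # [(2)^{n+1}, 3, infinity]
    right = cf_value([2] * (n + 1) + [3, 1])  # [(2)^{n+1}, 3, 1, infinity]
    # The two differences shown above must both be positive
    assert (1 - B - D) - left > 0
    assert right - (1 - A - C) > 0
print("Positivity confirmed numerically for n = 0, 2, ..., 10")
\end{verbatim}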
\begin{table}[ht] \centering \footnotesize \begin{tabular}{c|c|c|c|c} \thead{Case} & \thead{Cylinder set \\ for $X_n$} & \thead{Cylinder set \\ for $Y_n$} & \thead{Left endpoint of \\ the forbidden pattern} & \thead{Right endpoint of \\ the forbidden pattern}\\ \hline 1 & $I(3,(2)^n,1)$ & $I(3,(2)^n,1)$ & $[(2)^{n+1},\infty]$ & $[(2)^{n+1},3,\infty]$ \\ 2.1 & $I(3,(2)^n,1,2)$ & $I(3,(2)^n,2,2)$ & $[(2)^{n+1},3,1,\infty]$ & $[(2)^{n+1},3,\infty]$ \\ 2.2&$I(3,(2)^n,1,2)$ & $I(3,(2)^n,2,1)$ & $[(2)^{n+1},3,1,\infty]$ & $[(2)^{n+1},(2,1),12,\infty]$ \\ 2.3.1&$I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,2,1)$ & $[(2)^{n+1},3,2,\infty]$ & $[(2)^{n+1},3,\infty]$ \\ 2.3.2&$I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,2,2)$ & $[(2)^{n+1},3,3,\infty]$ & $[(2)^{n+1},3,\infty]$ \\ 2.3.3&$I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,2,1)$ & $[(2)^{n+1},3,10,\infty]$ & $[(2)^{n+1},(2,1),14,\infty]$ \\ 2.3.4&$I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,2,2)$ & $[(2)^{n+1},3,20,\infty]$ & $[(2)^{n+1},(2,1),10,\infty]$ \\ 2.4.1&$I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,1,1)$ & $[(2)^{n+1},3,4,\infty]$ & $[(2)^{n+1},(2,1),6,\infty]$ \\ 2.4.2&$I(3,(2)^n,1,1,1)$ & $I(3,(2)^n,2,1,2)$ & $[(2)^{n+1},3,8,\infty]$ & $[(2)^{n+1},(2,1),4,\infty]$ \\ 2.4.3&$I(3,(2)^n,1,1,2)$ & $I(3,(2)^n,2,1,1)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.1.1&$I(3,(2)^n,1,1,2,1,1)$ & $I(3,(2)^n,2,1,2,1,1)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.1.2&$I(3,(2)^n,1,1,2,1,1)$ & $I(3,(2)^n,2,1,2,1,2)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.1.3&$I(3,(2)^n,1,1,2,1,2)$ & $I(3,(2)^n,2,1,2,1,2)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.1.4&$I(3,(2)^n,1,1,2,1,2)$ & $I(3,(2)^n,2,1,2,1,1)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.2&$I(3,(2)^n,1,1,2,1)$ & $I(3,(2)^n,2,1,2,2)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1)^2,20,\infty]$ \\ 2.4.4.3&$I(3,(2)^n,1,1,2,2)$ & $I(3,(2)^n,2,1,2,1)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ 2.4.4.4&$I(3,(2)^n,1,1,2,2)$ & $I(3,(2)^n,2,1,2,2)$ & $[(2)^{n+1},2,1,\infty]$ & $[(2)^{n+1},(2,1),3,\infty]$ \\ \end{tabular} \caption{Case analysis showing the forbidden patterns containing $Z_n=1-X_n-Y_n$ when $n$ is odd.} \label{tab:my_label_2} \end{table} Immediately, from Table~\ref{tab:my_label_1} and Table~\ref{tab:my_label_2}, we see that \begin{align*} X_n\in I(3,(2)^n,1)\,, \quad Y_n\in I(3,(2)^n,1) \,, \end{align*} is impossible by Case~1. Together, Cases~2.3.1 to 2.3.4 show that \begin{align}\label{1122} X_n\in I(3,(2)^n,1,1)\,, \quad Y_n\in I(3,(2)^n,2,2) \,, \end{align} is impossible. Next, Cases~2.4.4.1.1 to 2.4.4.1.4 show that it is impossible to have $X_n\in I(3,(2)^n,1,1,2,1)$ and $Y_n\in I(3,(2)^n,2,1,2,1)$. By this and Cases~2.4.4.2 to 2.4.4.4, we get that it is impossible to have $X_n\in I(3,(2)^n,1,1,2)$ and $Y_n\in I(3,(2)^n,2,1,2)$. This, together with Cases~2.4.1 to 2.4.3, shows that \begin{align}\label{1121} X_n\in I(3,(2)^n,1,1)\,, \quad Y_n\in I(3,(2)^n,2,1) \,, \end{align} is impossible. Next, from Case~2.1, Case~2.2, \eqref{1122}, and \eqref{1121}, we obtain that \begin{align*} X_n\in I(3,(2)^n,1)\,, \quad Y_n\in I(3,(2)^n,2) \,, \end{align*} is impossible. This analysis of cases shows that any solution of $X_n+Y_n+Z_n=1$ must satisfy $$ X_n\,, Y_n \in I(3,(2)^{n+1}) \,. $$ Therefore, \begin{align}\label{x+y+z=1_2} x=y=\frac{2-\sqrt{2}}{2} = [3,\overline{2}]\,, \quad z = \sqrt{2}-1 = [\overline{2}]\,, \end{align} is the only possible solution of $x+y+z=1$ for this case. 
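Before turning to the last case, we note that the continued fraction expansions appearing in \eqref{x+y+z=1_1} and \eqref{x+y+z=1_2} are easy to reproduce with the Gauss map. The following minimal Python sketch (an illustration only, using the standard decimal module for extra precision; the helper function is ours) prints the first partial quotients of the four numbers involved.
\begin{verbatim}
from decimal import Decimal, getcontext

getcontext().prec = 60   # plenty of precision for the first dozen partial quotients

def partial_quotients(x, how_many):
    """First partial quotients of x in (0,1), computed with the Gauss map."""
    quotients = []
    for _ in range(how_many):
        y = 1 / x
        a = int(y)        # a_k = floor(1 / T^{k-1}(x))
        quotients.append(a)
        x = y - a         # Gauss map T(x) = 1/x - floor(1/x)
    return quotients

sqrt2, sqrt3 = Decimal(2).sqrt(), Decimal(3).sqrt()

# First solution: x = 2 - sqrt(3) = [3,1,2,1,2,...], y = z = (sqrt(3)-1)/2 = [2,1,2,1,...]
print(partial_quotients(2 - sqrt3, 12))
print(partial_quotients((sqrt3 - 1) / 2, 12))

# Second solution: x = y = (2 - sqrt(2))/2 = [3,2,2,...], z = sqrt(2) - 1 = [2,2,2,...]
print(partial_quotients((2 - sqrt2) / 2, 12))
print(partial_quotients(sqrt2 - 1, 12))
\end{verbatim}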
\medskip \noindent \textit{Case 4.} $x,y,z \in \B_{2,1}\setminus\B_2$. \medskip \noindent We claim that this case is impossible. In fact, $I(3)=[\frac{1}{4},\frac{1}{3})$, so $x+y+z<1$, which is a contradiction. Therefore, by \eqref{x+y+z=1_1} and \eqref{x+y+z=1_2}, the theorem is proven. \subsection{Proof of Theorem~\ref{Main2}} Observe that $x,y,z \in \B_{2,1}$ satisfy the equality $x+y+z=1$ if and only if the three equalities \begin{equation}\label{eq:3eq} x+y=1-z\,, \quad x + z = 1-y \,, \quad y + z = 1-x \,, \end{equation} are also satisfied. Moreover, by Corollary~\ref{cor:Trivial}, the numbers $1-z$, $1-y$, and $1-x$ must also be in $\B_{2,1}$. Next, recall from Theorem~\ref{Main} that \begin{equation}\label{eq:sol1} x=2-\sqrt{3}=[3,\overline{1,2}]\,,\quad y=z=\frac{\sqrt{3}-1}2=[\overline{2,1}]\,, \end{equation} and \begin{equation}\label{eq:sol2} x=y=\frac{2-\sqrt{2}}2=[3,\overline{2}]\,, \quad z=\sqrt{2}-1=[\overline{2}]\,, \end{equation} are the only solutions of the equality $x+y+z=1\,,\ (x,y,z\in\B_{2,1}\,,\ x\leq y \leq z)$. In \eqref{eq:sol1}, the solution of $x+y+z=1$ happens to have $y=z$, so \eqref{eq:sol1} provides two solutions of our target equality due to \eqref{eq:3eq}. Specifically, we get that \[ x=2-\sqrt{3}=[3,\overline{1,2}]\,, \quad y=\frac{\sqrt{3}-1}2=[\overline{2,1}]\,, \quad z=\frac{3-\sqrt{3}}{2}=[1,1,1,\overline{2,1}]\,, \] and \[x=y=\frac{\sqrt{3}-1}2=[\overline{2,1}]\,, \quad z=\sqrt{3}-1=[\overline{1,2}] \,,\] are both solutions of the equality $x+y=z\,, \ (x,y,z\in \B_{2,1}\,, x\leq y)$. In \eqref{eq:sol2}, the solution of $x+y+z=1$ happens to have $x=y$, so \eqref{eq:sol2} provides two more solutions of our target equality due to \eqref{eq:3eq}. Specifically, we get that \[ x=y=\frac{2-\sqrt{2}}2=[3,\overline{2}]\,, \quad z=2-\sqrt{2}=[1,1,\overline{2}]\,, \] and \[x = \frac{2-\sqrt{2}}2=[3,\overline{2}]\,, \quad y = \sqrt{2}-1=[\overline{2}]\,, \quad z = \frac{\sqrt{2}}{2}=[1,\overline{2}] \,, \] are also solutions of the equality $x+y=z\,, \ (x,y,z\in \B_{2,1}\,, x\leq y)$. Finally, we know that these four solutions are the only possibilities because any additional solution to the equality $x+y=z\,, \ (x,y,z\in \B_{2,1}\,, x\leq y)$ would produce an additional solution to $x+y+z=1\,,\ (x\leq y \leq z)$, which is impossible by Theorem~\ref{Main}. \begin{remark} Observe that Table~\ref{tab:my_label_2} can be obtained from Table~\ref{tab:my_label_1} by exchanging the left and the right endpoints of the forbidden patterns in the second-last and last columns. \end{remark} \subsection{Proof of Theorem \ref{Main3}}By Theorem~\ref{Main}, we know that there are exactly two solutions of the equality $x+y+z=1$ under the restrictions $x,y,z\in\B_{2,1}$ and $x \leq y\leq z$. Starting from the second solution $$ x=y=\frac{2-\sqrt{2}}2=[3,\overline{2}],\ z=\sqrt{2}-1=[\overline{2}] \,, $$ we can construct further explicit solutions of the equation $x+y+z=1$ under the weaker restrictions $x,y,z \in \B_2^*$ and $x,y,z\in \B_3$. We do this by making certain insertions into the continued fraction expansions of $x$, $y$, and $z$. Consider the result of inserting a 2 after the number 3 in the continued fraction expansions of $x$ and $y$, and inserting a 2 foremost in the continued fraction expansion of $z$. This produces new numbers $X$, $Y$, $Z$ that satisfy $$ X=\cfrac 1{3+\cfrac 1{\cfrac 1{x}-1}}\,, \quad Y=\cfrac 1{3+\cfrac 1{\cfrac 1{y}-1}}\,, \quad Z=\cfrac 1{2+z} \,. 
$$ From these expressions, we obtain the equality \begin{equation}\label{eq_1_X_Y_Zb} 1-X-Y-Z=\frac{(x-y)^2}{(3-2x)(3-2y)(3-x-y)}\,, \end{equation} which implies that $X+Y+Z=1$ because $x=y$. Similarly, consider a different type of insertion where we insert $3, 1, 3$ after the number $3$ in the continued fraction expansions of $x$ and $y$, and $1, 1, 2, 1, 1$ after the number $2$ in the continued fraction expansion of $z$. In this case, we get new numbers $X$, $Y$, and $Z$ that satisfy $$ X=\cfrac 1{3+\cfrac 1{3+\cfrac 1{1+x}}}\,, \quad Y=\cfrac 1{3+\cfrac 1{3+\cfrac 1{1+y}}}\,, \quad Z=\cfrac 1{2+\cfrac 1{1+\cfrac 1{1+\cfrac 1{2+\cfrac 1{1+\cfrac 1{\cfrac1{z}-1}}}}}} \,. $$ This gives us the equality \begin{equation}\label{eq_1_X_Y_Za} 1-X-Y-Z=\frac{-5(x-y)^2}{(10x+13)(10y+13)(5x+5y+13)}\,, \end{equation} which again implies that $X+Y+Z=1$ because $x=y$. Therefore, there are at least two different types of insertions that result in further solutions of the equality. Moreover, $S=\{2,11211\}$ is a code, i.e., any word generated by $S$ can be uniquely decomposed into a word over $S$ (see \cite[Chapter~1]{Lothaire:02} for a definition). From this observation, we immediately get that there are infinitely many solutions of the equality for $x,y,z \in \B_2^*$, since $\B_2^*=\bigcup_{j=1}^\infty\B_{2,j}$. Furthermore, we obtain uncountably many explicit solutions of the equality for $x,y,z \in \B_3$ by this method. \begin{remark} We can also prove Theorem~\ref{Main3} by starting from the first solution of Theorem~\ref{Main} and using the same method with a small modification to the insertions. \end{remark} \begin{remark}\label{rem:intuition} By the formula \eqref{eq_1_X_Y_Zb}, we can reprove Theorem~\ref{Main} by induction. The right side of \eqref{eq_1_X_Y_Zb} gives the error term after the number $2$ is inserted into the continued fraction expansions of $X_n$, $Y_n$, and $Z_n$ in Tables~\ref{tab:my_label_1} and~\ref{tab:my_label_2}. In particular, this error term is small enough that the number of cases to consider is always 17. \end{remark} \section{Open Problems}\label{sec:open} In Theorem~\ref{Main}, we prove that there are exactly two solutions of the equality $x+y+z=1$ in $\B_{2,1}$ and that these solutions are in $\Q(\sqrt{2})$ and $\Q(\sqrt{3})$. In $\B_{2,2}$, we obtain at least three additional solutions of the equation $x+y+z=1(x \leq y \leq z)$: \begin{gather*} x=y=[3,3,\overline{1,2}]\,, \quad z=[2,1,\overline{1,2}]\,;\\ x=y=[3,1,\overline{1,2}]\,, \quad z=[2,3,\overline{1,2}]\,;\\ x=[3,1,\overline{1,2}]\,, \quad y=[3,3,\overline{1,2}]\,, \quad z=[2,2,2,\overline{2,1}]\,. \end{gather*} However, we do not know whether or not the number of solutions is finite in $\B_{2,j}(j \ge 2)$ and whether or not they are in a real quadratic field. In Theorem~\ref{Main3}, we prove that there are infinitely many solutions of the equality $x+y+z=1$ in $\B_2^*$, but we do not know how large the set of solutions is, or its Hausdorff dimension. We know that the number of solutions changes from finite to infinite as we go from $\B_{2,1}$ to $\B_2^*$, but what happens in between? The proof of Theorem \ref{Main3} relied on the two key identities \eqref{eq_1_X_Y_Zb} and \eqref{eq_1_X_Y_Za}. In fact, many similar identities can be found. 
For example, \begin{align*} 1-[2,1,3,\tfrac{1}{x}-1]-[2,1,3,\tfrac{1}{y}-1]-[3,1,1,\tfrac{1}{1-x-y}]&=\frac{4(x-y)^2}{(8x-11)(8y-11)(11-4x-4y)}\,,\\ 1-[3,1,1,\tfrac{1}{x}]-[3,1,1,\tfrac{1}{y}]-[2,3,1,\tfrac{1}{1-x-y}-1]&=-\frac{2(x-y)^2}{(4x+7)(4y+7)(2x+2y+7)}\,,\\ 1-[3,3,1,\tfrac{1}{x}-2]-[3,3,1,\tfrac{1}{y}-2]-[2,1,1,1,1-x-y]&=\frac{8(x-y)^2}{(16x-13)(16y-13)( 13-8x-8y)}\,. \end{align*} From such identities whose right side is divisible by $x-y$, we can construct further explicit solutions of $x+y+z=1$ in $\B_2^*$ or $\B_3$. It may be an interesting problem to characterize the set of such identities. Such transformations on $x$ form a semi-group of integral M\"obius transformations, but can we describe its generators? Are they finite? Moreover, our method produces many isosceles triangles, but we know very little about scalene triangles. Below, we point out a sporadic infinite family of solutions to $x+y+z=1$ of this type: $$ x=[3,(2)^{\ell},1,\overline{1,2}]\,, \quad y=[3,(2)^{\ell},3,\overline{1,2}]\,, \quad z=[(2)^{4+2\ell},\overline{1,2}] \qquad \ell=0,1,\dots \,, $$ which is shown by induction using the two lucky equalities: \begin{align*} 1-[3,\tfrac{1}{x}-1]-[3,\tfrac{1}{y}-1]-[2,2,\tfrac{1}{1-x-y}]&= \frac{2 (x+y-3) (2 x y-2 x-2 y+1)}{(2 x-3) (2 y-3) (7-2 x-2 y)}\,,\\ 2[3,\tfrac{1}{x}-1][3,\tfrac{1}{y}-1] -2[3,\tfrac{1}{x}-1]-2[3,\tfrac{1}{y}-1]+1 &=- \frac{2 xy -2 x-2 y+1}{(2 x-3) (2 y-3)}\,. \end{align*} Lastly, regarding the diffraction from the Delone sets obtained in Section~\ref{sec:specific}, an interesting open problem is to determine how the continued fraction expansions of the badly approximable numbers $x$, $y$, $z$ are related to the autocorrelation measures of the associated Delone sets; see \cite[Chapter~9]{Baake-Grimm:13} for an overview of the theory of diffraction from point sets. A different construction of Delone sets via badly approximable angles and Fermat Spirals is discussed in \cite{Adiceam-Tsokanos:22, Akiyama:20, Marklof:20, Yamagishi-Sushida-Sadoc:21}; the diffraction from these examples may also be an interesting avenue of further study. \appendix \section{Proof of Compactness of \texorpdfstring{$W_{r,R}(\RR^2)$}{Wr,R(R2)}}\label{sec:compactness} For completeness, here we give a proof that $W_{r,R}(\RR^2)$ is compact in the Chabauty--Fell topology. Note that to obtain the desired compactness of $W_{r,R}(\RR^2)$, it is necessary to use open balls for uniform discreteness and closed balls for relative denseness in the definition of an $(r,R)$-Delone set. \begin{lem} \emph{ The set \[ \cW_{r,R}(\RR^2) = \{Y \in 2^{\RR^2} \,:\, Y \ \text{is an} \ (r,R)\text{-Delone set in} \ \RR^2\} \,, \] is compact in the Chabauty--Fell topology. } \end{lem} \begin{proof} It follows from the early work of Fell \cite{Fell:62} that the space $2^X$ is compact in the Chabauty--Fell topology for every topological space $X$. Let $X = \RR^2$. Since $\cW_{r,R}(X) \subseteq 2^X$, it suffices to show that $\cW_{r,R}(X)$ is closed. To this end, let $\{\Lambda_n\}_{n\geq 1}$ be a sequence in $\cW_{r,R}(X)$ converging to some $\Lambda \in 2^X$. Our goal is to prove that $\Lambda$ is both $r$-uniformly discrete and $R$-relatively dense. \emph{\textbf{Proof of $r$-uniform discreteness.}} Let $x\in X$ be arbitrary. Suppose by contradiction that there are two points $y,z\in \Lambda\cap B(x,r)$ with $y\neq z$. Consider any $$ 0 < \eps < \min(\textstyle{\frac{1}{r+\|x\|}},r-\|x-y\|,r-\|x-z\|,\textstyle{\frac{1}{2}}\|y-z\|) \,. 
$$ By convergence in the Chabauty--Fell topology, there exists an $N\geq 1$ such that $$ \Lambda \cap B(0,\textstyle{\frac{1}{\eps}}) \subseteq \Lambda_n + B(0,\eps) \quad \forall n\geq N\,. $$ Now, since $\eps < \frac{1}{r + \|x\|}$, we have that $B(x,r)\subseteq B(0,\textstyle{\frac{1}{\eps}})$, so $$ y, z \in \Lambda \cap B(x,r) \subseteq \Lambda \cap B(0,\textstyle{\frac{1}{\eps}}) \subseteq \Lambda_n + B(0,\eps) \quad \forall n \geq N\,. $$ In particular, there exist points $y_N$ and $z_N$ in $\Lambda_N$ such that $\|y-y_N\|<\eps$ and $\|z-z_N\| < \eps$. By our choice of $\eps$, we have $\|y-z\| > 2\eps$. From this, we see that $$ \|y_N - z_N\| \geq \|y - z\| - \|y-y_N\| - \|z - z_N\| > 2\eps - \eps - \eps = 0 \,, $$ so $y_N \neq z_N$. Furthermore, we have $r-\|x-y\| > \eps$ and $r-\|x-z\| > \eps$, so $y_N$ and $z_N$ are both in $B(x,r)$. Indeed, we have $$ \|x - y_N\| \leq \|x-y\| + \|y-y_N\| < r-\eps + \eps = r\,, $$ and a similar inequality shows that $\|x-z_N\| < r$. This contradicts $r$-uniform discreteness of $\Lambda_N$. Therefore, $\Lambda\cap B(x,r)$ has at most one element for every $x\in \RR^2$, i.e.\ $\Lambda$ is $r$-uniformly discrete. \emph{\textbf{Proof of $R$-relative denseness.}} Let $x\in X$ be arbitrary. We aim to show that $\Lambda \cap \overline{B(x,R)} \neq \emptyset$. By assumption, we have that $\Lambda_n$ is $R$-relatively dense for every $n$, so there exists a sequence of points $y_n$ in $X$ such that $y_n \in \Lambda_n \cap \overline{B(x,R)}$ for all $n$. Since $\overline{B(x,R)}$ is compact in $X$ (in the Euclidean topology), there exists a subsequence $y_{n_i}$ of $y_n$ and some $y\in\overline{B(x,R)}$ such that $\|y_{n_i} - y\| \rightarrow 0$. Next, observe that for all $n\in\N$ with $n > R+\|x\|+1$, we have $\overline{B(x,R)} \subseteq B(0,n)$. From this and convergence in the Chabauty--Fell topology, there exists an $N>R+\|x\|+1$ such that $$ y_{n} \in \Lambda_n \cap \overline{B(x,R)} \subseteq \Lambda_n \cap B(0,n) \subseteq \Lambda + B(0,\textstyle{\frac{1}{n}}) \quad \forall n\geq N\,. $$ Thus, there is a sequence $z_n$ in $\Lambda$ such that $\|y_n - z_n\| < \frac{1}{n}$ for all $n\geq N$. In particular, we have $z_{n_i}\rightarrow y$. Now, since $\Lambda$ is a closed set in the Euclidean topology, we must have $y\in \Lambda$. Moreover, $y$ is also in $\overline{B(x,R)}$, so we have shown that there is some $y\in \Lambda \cap \overline{B(x,R)} \neq \emptyset$, as desired. This completes the proof. \end{proof} \section*{Acknowledgements} S.A. is supported by JSPS grants 20K03528, 24K06662 and RIMS in Kyoto University. E.R.K. is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via grant 2024-0485 and the Canadian Graduate Scholarship - Doctoral. Y-L.X. is supported by the China Scholarship Council, China (No.\,202306770085). We would also like to thank Nicolae Strungaru for his insights on the construction of Delone sets, and Noel Murasko for his assistance in producing the graphics. We are grateful to the anonymous referee for giving us comments to increase the readability of the original manuscript. The introduction is greatly improved by this suggestion. \begin{thebibliography}{1} \bibitem{Adiceam-Tsokanos:22} F.~Adiceam, and I.~Tsokanos, \emph{ Higher dimensional spiral Delone sets}, Funct. Approx. Comment. Math. \textbf{67} (2022), no.~1, 21--46. \bibitem{Akiyama:20} S.~Akiyama, \emph{Spiral Delone sets and three distance theorem}, Nonlinearity \textbf{33} (2020), no.~5, 2533--2540. 
\bibitem{Baake-Frettloh-Grimm:07} M. Baake, D. Frettl\"oh and U.~G. Grimm, \emph{A radial analogue of Poisson's summation formula with applications to powder diffraction and pinwheel patterns}, J. Geom. Phys. \textbf{57} (2007), no.~5, 1331--1343. \bibitem{Baake-Frettloh-Grimm:07b} M. Baake, D. Frettl\"oh and U.~G. Grimm, \emph{Pinwheel patterns and powder diffraction}, Philos. Mag. \textbf{87} (2007), no.~18-21, 2831--2838. \bibitem{Baake-Gahler:16} M. ~Baake and F.~G\"ahler, \emph{Pair correlations of aperiodic inflation rules via renormalisation: some interesting examples}, Topology Appl. \textbf{205} (2016), 4--27. \bibitem{Baake-Grimm:13} M.~Baake and U.~Grimm, \emph{Aperiodic Order. Vol.~1: A Mathematical Invitation}, Cambridge University Press, Cambridge (2013). \bibitem{Baake-Lenz:04} M.~Baake and D.~Lenz, \emph{Dynamical systems on translation bounded measures: {P}ure point dynamical and diffraction spectra}, Ergod. Theory Dyn. Syst. \textbf{24} (2004), no.~6, 1867--1893. \bibitem{Chabauty:50} C.~Chabauty, \emph{Limite d'ensembles et g\'{e}om\'{e}trie des nombres}, Bull. Soc. Math. France \textbf{78} (1950), 143--151. \bibitem{Fell:62} J.\,M.~Fell, \emph{A Hausdorff topology for the closed subsets of a locally compact non-Hausdorff space}, Proc. Amer. Math. Soc. \textbf{13} (1962), no.~3, 472--476. \bibitem{Freiman:73} G.~A.~Freiman, \emph{On the beginning of Hall's ray}, Chapter V in Teorija cisel (Number Theory), Kalininskii Gosudarstvennyi Universitet, Moscow (1973), 87--113. \bibitem{Frettloh:08} D.~Frettl{\"o}h, \emph{Substitution tilings with statistical circular symmetry}, European J. Combin. \textbf{29} (2008), no.~8, 1881--1893. \bibitem{Frettloh-Harriss-Gahler} D.~Frettl{\"o}h, E.~Harriss, and F.~G{\"a}hler, \emph{Tilings Encyclopedia}, \url{https://tilings.math.uni-bielefeld.de/}. \bibitem{Grimm-Deng:2011} U.~Grimm and X.~Deng, \emph{Some comments on pinwheel tilings and their diffraction}, J. Phys. Conf. Ser. \textbf{284} (2011), no.~1, 012032. \bibitem{Hall:47} M.~Hall,~Jr., \emph{On the sum and product of continued fractions}, Ann. Math. \textbf{48} (1947), no.~4, 966-993. \bibitem{HW} G.~H.~Hardy, E.~M.~ Wright, An introduction to the theory of numbers, Sixth edition, Oxford University Press, Oxford, 2008. xxii+621 pp. \bibitem{Khinchin:97} A.~ Y.~ Khinchin, \emph{Continued fractions}, New York: Dover Publications, (1997). \bibitem{KN:00} C.~Kraaikamp, H.~Nakada, \emph{On normal numbers for continued fractions}, Ergodic Theory Dynam. Systems 20 (2000), no. 5, 1405–1421. \bibitem{Kuipers-Niederreiter:74} L.~Kuipers and H.~Niederreiter, \emph{Uniform distribution of sequences}, J. Wiley and Sons, New York (1974). \bibitem{Lothaire:02} M.~Lothaire, \emph{Algebraic Combinatorics on Words}, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge (2002). \bibitem{Marklof:20} J.~Marklof, \emph{Delone sets generated by square roots}, Amer. Math. Monthly, \textbf{127} (2020), no.~9, 836--840. \bibitem{MW88} R.\,D.~Mauldin and S.\,C.~Williams, \emph{Hausdorff dimension in graph directed constructions}, Trans. Amer. Math. Soc., \textbf{309} (1988), 811-829. \bibitem{MPS:06} R.V. Moody, D. Postnikoff, N. Strungaru, \emph{Circular symmetry of pinwheel diffraction}, Ann. Henri Poincaré, \textbf{7} (2006), 711--730. \bibitem{Postnikoff:2004} D.~Postnikoff, \emph{The geometry and diffraction of the pinwheel tiling}, Master's Thesis, Canada: University of Alberta (2004). \bibitem{Radin:94} C.~Radin, \emph{The pinwheel tilings of the plane}, Ann. Math. 
\textbf{139} (1994), no.~3, 661--702. \bibitem{Sadun:98} L.~Sadun, \emph{Some generalizations of the pinwheel tiling}, Discrete Comput. Geom. \textbf{20} (1998), 79--110. \bibitem{Schecker:77} H.~Schecker, \emph{{\"U}ber die Menge der Zahlen, die als Minima quadratischer Formen auftreten}, J. Number Theory \textbf{9} (1977), 121--141. \bibitem{Shechtman-et-al:84} D.~Shechtman, I.~Blech, D.~Gratias, and J.\,W.~Cahn, \emph{Metallic phase with long-range orientational order and no translational symmetry}, Phys.~Rev.~Lett. \textbf{53} (1984), no.~20, 1951--1953. \bibitem{Smi-Solo:22} Y.~Smilansky and Y.~Solomon, \emph{A dichotomy for bounded displacement equivalence of Delone sets}, Ergod. Theory Dyn. Syst. \textbf{42} (2022), no.~8, 2693--2710. \bibitem{Smi-Solo:21} Y.~Smilansky and Y.~Solomon, \emph{Multiscale substitution tilings}, Proc. Lond. Math. Soc. \textbf{123} (2021), no.~6, 517--564. \bibitem{Weyl:16} H.~Weyl, \emph{\"Uber die Gleichverteilung von Zahlen mod Eins}, Math. Ann. \textbf{77} (1916), no.~3, 313--352. \bibitem{Yamagishi-Sushida-Sadoc:21} Y.~Yamagishi, T.~Sushida, and J.\,F.~Sadoc, \emph{Area convergence of Voronoi cells on spiral lattices}, Nonlinearity \textbf{34} (2021), no.~5, 3163--3183. \end{thebibliography} \end{document}
2412.11406v1
http://arxiv.org/abs/2412.11406v1
Normal surface singularities of small degrees
\documentclass[12pt,reqno]{amsart} \usepackage{geometry} \geometry{top=1in, bottom=1in, left=1.25in, right=1.25in} \usepackage{verbatim} \usepackage{curves} \usepackage{longtable} \usepackage{color} \usepackage{epic} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amscd} \usepackage{amsmath} \usepackage{graphics} \allowdisplaybreaks \begin{document} \newtheorem{main}{Theorem} \newtheorem{Theorem}{Theorem} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{thmletter}{Theorem} \def\thethmletter{\Alph{thmletter}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem*{remark*}{Remark} \newtheorem{notation}{Notation}[section] \newcommand{\noqed}{\def\qedsymbol{}} \def\mtline#1{\hbox to#1{\hrulefill}} \def\bbk{\bigbreak} \let\bbk\relax \def\Gam{\Gamma} \def\Sig{\Sigma} \def\lds{\dots} \def\blt{\mbox{$\begin{picture}(6,6)\put(3,3){\circle*{6}}\end{picture}$}} \def\noi{\noindent} \let\noi\relax \def\wtit{\widetilde} \def\ub{\underline} \def\Raw{\Rightarrow} \def\lraw{\longrightarrow} \def\ul{\underline} \renewcommand{\qedsymbol}{Q.E.D.} \newcommand{\zz}{\phantom{00}} \newcommand{\TP}{\!{}+{}\!} \newcommand{\TM}{\!{}-{}\!} \newcommand{\TE}{\!{}={}\!} \title[Normal Surface singularities of small degrees]{Normal Surface singularities of small degrees} \author[S. Yau]{Stephen S.-T. Yau} \address{ Beijing Institute of Mathematical Sciences and Applications (BIMSA), Beijing 101400\\ P. R. of China; Department of Mathematical Sciences, Tsinghua University, Beijing 100084\\ P. R. of China} \curraddr{} \email{[email protected]} \author[H. Zuo]{Hao Zuo} \address{Department of Mathematical Sciences\\ Tsinghua University\\ Beijing 100084\\ P. R. of China} \curraddr{} \email{[email protected]} \author[H. Zuo]{Huaiqing Zuo} \address{Department of Mathematical Sciences\\ Tsinghua University\\ Beijing 100084\\ P. R. of China} \curraddr{} \email{[email protected]} \thanks{Zuo is supported by NSFC Grant 12271280. Yau is supported by Tsinghua University Education Foundation fund (042202008).} \begin{abstract}{The notion of the Yau sequence was introduced by Tomaru, as an attempt to extend Yau's elliptic sequence for (weakly) elliptic singularities to normal surface singularities of higher fundamental genera. In this paper, we obtain the canonical cycle using the Yau cycle for certain surface singularities of degree two. Furthermore, we obtain a formula of arithmetic genera and a{\color{black} n} upper bound of geometric genera for these singularities. We also give some properties about the classification of weighted dual graphs of certain surface singularities of degree two.} Keywords. normal singularities, Yau cycle, canonical cycle. MSC(2020). 14B05, 32S25. \end{abstract} \maketitle \section{Introduction}\label{sec1} {\color{black}Let $(V, o)$ be an isolated singular point on a surface $V$, and $\pi\colon X \to V$ be a resolution of $V$. Due to the negative definiteness of the intersection matrix of $\pi^{-1}(o)$, there exists a non-zero effective divisor with support $\pi^{-1}(o)$ that has a non-positive intersection number with every exceptional curve. We define $Z$ as the unique minimal such divisor and refer to it as the \textit{fundamental cycle}. Notably, $-Z^2$ represents one of the most fundamental invariants of $(V,o)$ that remains independent of any choice in resolutions. 
We denote $-Z^2$ as the \textit{degree} of $(V, o)$. In this paper, we investigate surface singularities by examining decompositions of various cycles. A primary focus is the \textit{Yau sequence}, introduced by Tomaru \cite{Tom}, which extends the first author's elliptic sequence \cite{Ya2} to singularities with larger fundamental genera. Additionally, we consider the \textit{Yau cycle} (see \cite{Ko1}), defined as the sum of all curves present in the Yau sequence (see Definition \ref{def2.14}). Notably, the Yau cycles exhibit a strong connection with canonical cycles in certain types of singularities. Konno explores applications of the Yau cycle in degree one singularities in \cite{Ko1}. In this paper, we extend Konno's results to degree two singularities and establish additional properties within a more restrictive context.}
Firstly, we discuss the relation between the canonical cycle and the Yau cycle of surface singularities of degree two. Compared to the degree one situation, the degree two situation is more complicated. In {\color{black}the} degree one situation, we know that {\color{black}the canonical cycle is obtained as a multiple of the Yau cycle} when $Z$ is essentially irreducible. But this property does not hold in {\color{black} the} degree two situation, so we consider a more restrictive condition $D_m=Z_{min}$ in Theorem \ref{thm3.5} to generalize the property, where $m$ denote{\color{black}s} the length of the Yau sequence for $Z$ and $D_m$ is the smallest cycle in the Yau sequence. We also point out that the condition $D_m=Z_{min}$ holds when $(V,o)$ is a singularity of degree one.
\begin{thmletter}\label{mt1} Let $(V, o)$ be a normal surface singularity of degree two with $p_f(V,o)>0$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that the fundamental cycle $Z$ is essentially irreducible and $D_m=Z_{min}$. Then $(V,o)$ is numerically Gorenstein with canonical cycle $p_f(V,o)Y$. \end{thmletter}
For surface singularities of degree one, Konno {\color{black} obtains} a lower bound on the arithmetic genus by using the Yau cycle $Y$ in \cite{Ko1}. For those with essentially irreducible $Z$ (see Definition \ref{def3.1}), we {\color{black} have} found that the arithmetic genus is equal to {\color{black} this} lower bound.
\begin{thmletter}\label{mt2} Let $(V,o)$ be a normal surface singularity of degree one with $p_f(V,o)>0$, and let $Z$ be the fundamental cycle on the minimal resolution. Assume that $Z$ is essentially irreducible. Then $p_a(V,o)=\frac{p(p-1)m}{2}+1$, where $p=p_f(V,o)$ and $m$ denotes the length of the Yau sequence for $Z$. \end{thmletter}
We would like to obtain a formula for the arithmetic genus of a singular point of degree two with essentially irreducible $Z$, analogous to Theorem \ref{mt2}. In the degree two case, unlike the degree one case, the condition that $Z$ is essentially irreducible still does not imply that $D_m=Z_{min}$. However, the condition $D_m=Z_{min}$ is necessary for the arithmetic genus to be equal to the lower bound. In order to study singularities of degree two under the condition $D_m=Z_{min}$, we classify in Theorem \ref{thm3.8} the weighted dual graphs of surface singularities of degree two when $m>1$. By {\color{black} classifying these graphs}, we compute $D_m$ and $Z_{min}$ for each case in Theorem \ref{thm3.8} and obtain all cases which satisfy the conditions $m>1$ and $D_m=Z_{min}$.
From the classification of weighted dual graphs for these singularities, we obtain a formula in Theorem \ref{thm3.13} (i.e., Theorem \ref{mt3}) for their arithmetic genera. \begin{thmletter}\label{mt3} Let $(V,o)$ be a normal surface singularity of degree two with $p_f(V,o)>0$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assum{\color{black} ing} that {\color{black} the} fundamental cycle $Z$ is essentially irreducible and $D_m=Z_{min}$, then $p_a(V,o)=[\frac{p^2}{4}]m+1$ and $p_g(V,o) \le [\frac{(p+1)^2}{4}]m$. \end{thmletter} \section{Preliminaries}\label{sec2} \subsection{Riemann-Roch and fundamental cycle} Let $(V, o)$ be an isolated singular point on a surface $V$, and $\pi\colon X \to V$ be a resolution of $V$. Let $\pi^{-1}(o)=\cup E_i$, $1\le i\le n$, be the decomposition of the exceptional set $\pi^{-1}(o)$ into irreducible components. The cycles are divisors of the form $D=\sum d_iE_i$ with $d_i \in \mathbb Z$, where $E_i$ are irreducible exceptional curves. There is a natural partial ordering of the cycles: $D_1=\sum_{i} m_{i} E_{i} \le D_2=\sum_{i} n_{i} E_{i}$ if and only if $m_{i} \le n_{i}$ for all $i$. If $D_{1} \leq D_{2}$ but $D_{1} \neq D_{2}$ then we write $D_{1}<D_{2}$. We let $\textup{supp}\ D=\cup E_i$, $d_i\ne 0$, denote the support of $D$. For a cycle $D=\sum d_iE_i$ on $\pi^{-1}(o)$, $\chi(D)$ is defined by $$\chi(D)=\dim H^0 ({\color{black}X},\mathcal{O}_D)-\dim H^1({\color{black}X},\mathcal{O}_D),$$ where $\mathcal{O}_D = \mathcal{O/\mathcal{O}}(-D)$. Then by Riemann-Roch theorem \cite[Proposition~IV.4, p.~75]{Se}, we have $$\chi (D)=-\frac12 (D^2+D\cdot K),$$ where $K$ is the canonical divisor on $X$ and $D\cdot K$ is the intersection number of $D$ and $K$. For any irreducible curve $E_i$, the adjunction formula \cite[Proposition~IV, 5, p.~75]{Se} says $$E_i \cdot K=-E^2_i + 2g_i +2\delta_i -2 ,$$ where $g_i$ is the genus of $E_i$ and $\delta_i$ is the degree of the conductor of $E_i$. The arithmetic genus of $D \ge 0$ is defined by $p_a(D)=1-\chi(D)$. It follows immediately from Riemann-Roch theorem that if $A$ and $B$ are cycles, then $$p_a(A+B)=p_a(A)+p_a(B)+A \cdot B-1.$$ \begin{definition}\label{def2.1} Let $\pi\colon X \to V$ be a resolution, then the intersection form is negative definite on the exceptional set $\pi^{-1}(o)$. Hence, there exists a cycle $D > 0$ with support $\pi^{-1}(o)$ such that $E_i \cdot D \le 0$ for all $E_i$. We denote by $Z$ the smallest one among such cycle{\color{black}s} and call it the fundamental cycle \cite[~131-132]{Ar}. \end{definition} Because the fundamental cycle $Z$ is {\color{black}the} smallest, for any proper subcycle D of $Z$, there exists {\color{black}an} $E_i$ such that $E_i \cdot D >0$ and such that $E_i <Z-D$. In fact, the fundamental cycle $Z$ can be computed from the intersection as follows via a computation sequence for $Z$ in the sense of Laufer \cite[Proposition~4.1]{La2}. \begin{align*} Z_0=0, Z_1 &= E_{i_1}, Z_2=Z_1+E_{i_2},\dots, Z_j=Z_{j-1}+E_{i_j},\dots,\\ Z_\ell &= Z_{\ell-1}+E_{i_\ell} =Z, \end{align*} where $E_{i_1}$ is {\color{black}an} irreducible component and $E_{i_j}\cdot Z_{j-1}>0$, $1< j\le \ell$. Consider the computation sequence for $Z$, we have $$p_a(Z_j)=p_a(Z_{j-1})+p_a(E_{i_j})+Z_{j-1} \cdot E_{i_j}-1 \ge p_a(Z_{j-1})$$ for $1 \le j \le \ell$. Then $p_a(Z) \ge p_a(D)$ for any subcycle $0<D<Z$. \begin{definition}\label{def2.2} We {\color{black}define} the arithmetic genus of $Z$ the \textit{fundamental genus} of $(V,o)$ and denote it {\color{black}by} $p_f(V,o)$. 
The \textit{arithmetic genus} and \textit{geometric genus} of $(V,o)$ are respectively defined {\color{black}as} $p_a(V,o)=\max \{p_a(D)|0<D\}$ and $p_g(V,o)=\dim_C H^1(X,\mathcal{O}_X)$. \end{definition} \subsection{Chain-connected}we recall the notion of chain-connected {\color{black}cycles} introduced by Konno \cite{Ko2} and state {\color{black}the} fundamental properties of the chain-connected cycles. \begin{definition}\label{def2.2} A line bundle on a cycle is nef if it is of non-negative degree on {\color{black}all} irreducible components. \end{definition} \begin{definition}\label{def2.3} A cycle $D$ is chain-connected if $\mathcal{O}_{D-C}(-C)$ is not nef for any proper subcycle $0<C<D$. \end{definition} According to the minimality of the fundamental cycle $Z$, the fundamental cycle is chain-connected cycle. In fact, it is the {\color{black}largest} chain-connected cycle with support $\pi^{-1}(o)$. \begin{proposition}\label{prop2.4} Let D be a chain-connected cycle, then $p_a(D) \ge p_a(C)$ for any subcycle $0<C<D$. \end{proposition} \begin{proof} Since D is a chain-connected cycle, we have $\mathcal{O}_{D-A}(-A)$ is not nef for any subcycle $0<A<D$. So we can find an irreducible component $E_i<D-A$ such that $E_i \cdot A > 0$. For any subcycle $0<C<D$, like Laufer's computation sequence, we can get an increasing sequence of cycle{\color{black}s}. Let \begin{align*} D_0=C, D_1 &= D_0 + E_{i_1}, D_2=D_1+E_{i_2},\dots, D_j=D_{j-1}+E_{i_j},\dots,\\ D_\ell &= D_{\ell-1}+E_{i_\ell} =D, \end{align*} where $E_{i_j}$ is {\color{black}an} irreducible component and $E_{i_j}\cdot D_{j-1}>0$, $1 \le j\le \ell$. We have $p_a(D_j)=p_a(D_{j-1})+p_a(E_{i_j}) + E_{i_j}\cdot D_{j-1} -1 \ge p_a(D_{j-1})$ for $1 \le j \le \ell$, since $p_a(E_{i_j}) \ge 0$ and $E_{i_j}\cdot D_{j-1}>0$. We conclude that $p_a(D_\ell) \ge p_a(D_0)$, i.e., $p_a(D) \ge p_a(C)$. \end{proof} We remark that, when $p_a(D)=p_a(C)$, we have $p_a(D_j) = p_a(D_{j-1})$ for $1 \le j \le \ell$. It means that $p_a(E_{i_j}) = 0$ and $E_{i_j}\cdot D_{j-1} = 1 $ for $1 \le j \le \ell$. \begin{definition}\label{def2.5} Let $D$ be a reducible curve. An irreducible component $E$ of $D$ is said to be a $(-m)_D-curve$ if $p_a(E)=0$ and $E \cdot (D-E) = m$. \end{definition} \begin{proposition}[\cite{Ko2}]\label{prop2.6} Given a $(-1)_D-curve$ $E$ of $D$. If D is chain-connected, then the subcycle $D'=D-E$ is {\color{black}also} chain-connected. \end{proposition} \begin{corollary}[\cite{Ko2}]\label{coro2.7} Let D be a chain-connected cycle. If $p_a(C)=p_a(D)$ for a subcycle $C<D$, then the subcycle $C$ is {\color{black}also} chain-connected. \end{corollary} \begin{proposition}[\cite{Ko2}]\label{prop2.8} Let D be a chain-connected cycle. If $\mathcal{O}_{D}(-C)$ is nef for a cycle $C$, then either $D \le C$ or $\textup{supp}\ C \cap \textup{supp}\ D= \emptyset$. \end{proposition} \begin{definition}\label{def2.9} If $D$ is chain-connected and $p_a(D) > 0$, then there uniquely exists a minimal subcycle $D_{min}$ of $D$ such that $p_a(D_{min})=p_a(D)$. We call $D_{min}$ the minimal model of $D$. \end{definition} \begin{theorem}[Chain-connected component decomposition, \cite{Ko2}]\label{thm2.10} Let $D$ be a cycle. Then there {\color{black}exists} a sequence $D_1$,$D_2$,\dots, $D_r$ of chain-connected subcycles of $D$ and a sequence $m_1$, \dots, $m_r$ of positive integers which satisfy \begin{itemize} \item[(1)] $D=m_1 D_1+\dots+m_r D_r$. \item[(2)] For $i<j$, the cycle $-D_i$ is nef on $D_j$. \item[(3)] If $m_i \ge 2$, then $-D_i$ is nef on $D_i$. 
\item[(4)] For $i<j$, either $D_i > D_j$ or $\textup{supp}\ D_i \cap \textup{supp}\ D_j= \emptyset$. \end{itemize} \end{theorem} \subsection{The canonical cycle} \begin{definition}\label{def2.11} The rational cycle $Z_{K}$ is called the canonical cycle if $Z_{K} \cdot {\color{black}E}_{i}=-K {\color{black}E}_{i}$ for all $i$, i.e. $$ Z_{K} \cdot {\color{black}E}_{i}={\color{black}E}_{i}^{2}-2 \delta_{i}-2 g_{i}+2 \text { for all } i, $$ where $\delta_i$ is the ``number'' of nodes and cusps on ${\color{black}E}_i$. \end{definition} \begin{definition}\label{def2.12} If the coefficients of $Z_{K}$ are integers, then the singularity is called \textit{numerical Gorenstein} {\color{black}singularity}. \end{definition} \subsection{Yau sequence and Yau cycle} Assume that $p_f(V,o)=p_a(Z)>0$. By {\color{black}D}efinition {\color{black}\ref{def2.9}}, there uniquely exists a minimal subcycle $Z_{min}$ of $Z$ such that $p_a(Z_{min})=p_a(Z)$. \begin{lemma}\label{Lemma}\label{lem2.13} Assume that $-Z$ is numerically trivial on $Z_{min}$. Then there uniquely exists a maximal subcycle $D<Z$ such that $\mathcal{O}_{D}(-Z)$ is numerically trivial and $p_a(D)=p_f(V,o)$. Moreover, $D$ is the fundamental cycle on its support. \end{lemma} \begin{proof} Let $S=\{ 0 < D < Z | \mathcal{O}_{D}(-Z)\ is \ numerically \ trivial \ and \ p_a(D)=p_a(Z)\}$. By the assumption, $Z_{min} \in S$. Since the coefficients of subcycle D are integers, the nonempty set $S$ contains a maximal element. We show the maximal element of $S$ is the fundamental cycle on its support. Let $D$ be a maximal element of $S$ and $Z_1$ {\color{black}be} the fundamental cycle on $\textup{supp}\ D$. Assume that there exists an irreducible component $C \le D$ satisfying $C \cdot D > 0$. Since $\mathcal{O}_{D}(-Z)$ is numerically trivial and $C \le D$, we have $C (Z-D) < 0$ and, hence, $C \le Z-D$. Then $C+D$ is a subcycle of $Z$ and $\mathcal{O}_{C+D}(-Z)$ is numerically trivial. Furthermore, we have $p_a(Z) \ge p_a(C+D)=p_a(C)+p_a(D)+C \cdot D - 1 \ge p_a(D)=p_a(Z)$, hence $C+D \in S$. This contradicts the assumption that $D$ is maximal. Then we have $\mathcal{O}_{Z_1}(-D)$ is nef and $Z_1 \le D$. Since $D<Z$ and $p_a(D)=p_a(Z)$, the cycle D is chain-connected according to {\color{black}C}orollary {\color{black}\ref{coro2.7}}. Furthermore, we have $D \le Z_1$ according to {\color{black}P}roposition {\color{black}\ref{prop2.8}}, since $\mathcal{O}_{D}(-Z_1)$ {\color{black}is nef} and $\textup{supp}\ D \cap \textup{supp}\ Z_1 \neq \emptyset$. Hence we get $D=Z_1$. Assume $D_1$ and $D_2$ {\color{black}are} different maximal elements in $S$. If $\mathcal{O}_{D_2}(-D_1)$ is not nef, then there exists an irreducible component $C<D_2$ such that $C \cdot D_1 > 0$. Since $D_1$ is the fundamental cycle on its support, this shows $C \nleq D_1$ and it follows $C+D_1 \le Z$. Then we have $p_a(Z) \ge p_a(C+D_1)= p_a(C)+p_a(D_1)+C \cdot D_1 -1 \ge p_a(D_1)=p_a(Z)$ and $\mathcal{O}_{C+D_1}(-Z)$ is numerically trivial, hence $C+D \in S$. This contradicts the assumption that $D_1$ is maximal, hence we get $\mathcal{O}_{D_2}(-D_1)$ is nef. Since $D_2$ is chain-connected, either $D_2 \le D_1$ or $\textup{supp}\ D_1 \cap \textup{supp}\ D_2= \emptyset$. By the uniqueness of the minimal model $Z_{min}$, we get $Z_{min}\le D_1$ and $Z_{min} \le D_2$. It means that $D_2 \le D_1$, contradicting that $D_1$ and $D_2$ are different maximal elements in $S$. Hence there uniquely exists a maximal element in $S$. 
\end{proof} \begin{definition}[\color{black}\cite{Ko1}]\label{def2.14} We call $D$ as in the {\color{black}L}emma {\color{black}\ref{lem2.13}} the \textit{Tyurina component} of $Z$. Since $D$ is the fundamental cycle on its support and $Z_{min}$ is also the minimal model of $D$, we can get the Tyurina component of {\color{black}$D$} when $-D$ is numerically trivial on $Z_{min}$. By the induction, we get the sequence of cycles $$0<D_m<D_{m-1}<\dots<D_2<D_1=Z$$ such that $D_{i+1}$ is the Tyurina component of $D_i$ for $1 \le i \le m-1$ and $D_m \cdot Z_{min}<0$. We call it the \textit{Yau sequence} for $Z$ and call $Y=\sum_{i=1}^{m}D_i$ the \textit{Yau cycle}. The case $Z \cdot Z_{min}<0$ is corresponds to $m=1$. \end{definition} \begin{proposition}[Theorem 3.7, \cite{Ya2}]\label{prop2.15} If $(V,o)$ is a numerically Gorenstein elliptic singular point and $\pi \colon X \to V$ is the minimal resolution, then the Yau cycle is the canonical cycle. \end{proposition} The length $m$ of the Yau sequence is a numerical invariant of $(V,o)$, and gives us the arithmetic genus for singular points of fundamental genus two in \cite{Ko1}. When $p_f(V,o)>2$, length $m$ also gives us a lower bound of $p_a(V,o)$ since $p_a(Y)=\sum_{i=1}^{m}(p_a(D_i)-1)+\sum_{1 \le i<j \le m}D_i D_j +1 = m(p_f(V,o)-1)+1$. \subsection{Classfication of weighted dual graphs} {\color{black}Let $(V, o)$ be an isolated singular point on a surface $V$, and $\pi\colon X \to V$ be a minimal resolution of $V$. There are two beautiful results given by Artin in \cite{Ar}. \begin{definition}\label{def2.16} The singularity $(V, o)$ is said to be rational if $\chi (Z)=1$. \end{definition} If $(V, o)$ is a rational singularity, then $\pi$ is also a minimal good resolution, i.e., exceptional set with nonsingular $E_i$ and normal crossings. Moreover, each $E_i$ is a rational curve and $E_i^2=-2$. \begin{theorem}[\cite{Ar}] \label{thm2.17} If $(V,o)$ is a hypersurface rational singularity, then $(V,o)$ is a rational double point. 
Moreover the set of weighted dual graphs of hypersurface rational singularities consists of the following graphs: \begin{flushleft} {\Small\begin{longtable}{@{\hspace*{-25pt}}l@{\hspace*{6pt}}l@{\hspace*{12pt}}l@{\hspace*{26pt}}l@{}} {\normalfont\upshape(1)}&$A_n, n\ge 1$&\scalebox{.85}{\begin{picture}(128,21) \put(14,0){\line(1,0){12}} \put(30,0){\dashbox{2}(93,0)} \put(3,9){\makebox{\footnotesize$-2$}} \put(21,9){\makebox{\footnotesize$-2$}} \put(115,9){\makebox{\footnotesize$-2$}} \put(11,0){\circle*{6}} \put(29,0){\circle*{6}} \put(123,0){\circle*{6}} \end{picture}} &$Z=1\ 1\dots 1$.\\[9pt] {\upshape(2)}&$D_n, n\ge 4$&\scalebox{.85}{\begin{picture}(128,21) \put(14,0){\line(1,0){12}} \put(32,0){\line(1,0){12}} \put(45,0){\dashbox{2}(75,0)} \put(3,-11){\makebox{\footnotesize$-2$}} \put(21,-11){\makebox{\footnotesize$-2$}} \put(39,-11){\makebox{\footnotesize$-2$}} \put(115,-11){\makebox{\footnotesize$-2$}} \put(11,0){\circle*{6}} \put(29,0){\circle*{6}} \put(29,18){\circle*{6}} \put(31,18){\makebox{\footnotesize$-2$}} \put(29,0){\line(0,1){18}} \put(47,0){\circle*{6}} \put(29,0){\circle*{6}} \put(123,0){\circle*{6}} \end{picture}} &$Z=1\ \stackrel{\textstyle 1\strut}{2\strut}\ 2\dots \ 2 \ 1.$\\[9pt] {\upshape(3)}&$E_6$&\scalebox{.85}{\begin{picture}(128,21) \put(14,0){\line(1,0){12}} \put(32,0){\line(1,0){12}} \put(50,0){\line(1,0){12}} \put(68,0){\line(1,0){12}} \put(3,-11){\makebox{\footnotesize$-2$}} \put(21,-11){\makebox{\footnotesize$-2$}} \put(39,-11){\makebox{\footnotesize$-2$}} \put(57,-11){\makebox{\footnotesize$-2$}} \put(75,-11){\makebox{\footnotesize$-2$}} \put(11,0){\circle*{6}} \put(47,18){\circle*{6}} \put(49,18){\makebox{\footnotesize$-2$}} \put(47,0){\line(0,1){18}} \put(29,0){\circle*{6}} \put(47,0){\circle*{6}} \put(65,0){\circle*{6}} \put(83,0){\circle*{6}} \end{picture}} &$Z=1\ 2\ \stackrel{\textstyle 2\strut}{3\strut}\ 2\ 1.$\\[9pt] {\upshape(4)}&$E_7$& \scalebox{.85}{ \begin{picture}(128,21) \put(14,0){\line(1,0){12}} \put(32,0){\line(1,0){12}} \put(50,0){\line(1,0){12}} \put(68,0){\line(1,0){12}} \put(86,0){\line(1,0){12}} \put(3,-11){\makebox{\footnotesize$-2$}} \put(21,-11){\makebox{\footnotesize$-2$}} \put(39,-11){\makebox{\footnotesize$-2$}} \put(57,-11){\makebox{\footnotesize$-2$}} \put(75,-11){\makebox{\footnotesize$-2$}} \put(93,-11){\makebox{\footnotesize$-2$}} \put(11,0){\circle*{6}} \put(47,18){\circle*{6}} \put(49,18){\makebox{\footnotesize$-2$}} \put(47,0){\line(0,1){18}} \put(29,0){\circle*{6}} \put(47,0){\circle*{6}} \put(65,0){\circle*{6}} \put(83,0){\circle*{6}} \put(101,0){\circle*{6}} \end{picture}} &$Z=2\ 3\ \stackrel{\textstyle 2\strut}{4\strut}\ 3\ 2\ 1.$\\[9pt] {\upshape(5)}&$E_8$& \scalebox{.85}{ \begin{picture}(128,21) \put(14,0){\line(1,0){12}} \put(32,0){\line(1,0){12}} \put(50,0){\line(1,0){12}} \put(68,0){\line(1,0){12}} \put(3,-11){\makebox{\footnotesize$-2$}} \put(21,-11){\makebox{\footnotesize$-2$}} \put(39,-11){\makebox{\footnotesize$-2$}} \put(57,-11){\makebox{\footnotesize$-2$}} \put(75,-11){\makebox{\footnotesize$-2$}} \put(11,0){\circle*{6}} \put(47,18){\circle*{6}} \put(49,18){\makebox{\footnotesize$-2$}} \put(47,0){\line(0,1){18}} \put(29,0){\circle*{6}} \put(47,0){\circle*{6}} \put(65,0){\circle*{6}} \put(83,0){\circle*{6}} \put(101,0){\circle*{6}} \put(119,0){\circle*{6}} \put(86,0){\line(1,0){12}} \put(104,0){\line(1,0){12}} \put(93,-11){\makebox{\footnotesize$-2$}} \put(111,-11){\makebox{\footnotesize$-2$}} \end{picture}} &$Z=2\ 4\ \stackrel{\textstyle 3\strut}{6\strut}\ 5\ 4\ 3\ 2.$ \end{longtable}} \end{flushleft} 
\end{theorem} To each such weighted dual graph is associated an intersection matrix whose $(i,j)$th entry is $E_{i} \cdot E_{j}$. These graphs $(1)-(5)$ in Theorem \ref{thm2.17} are called ADE graphs in the literature. This theorem completely classifies the weighted dual graphs with all $E_i^2=-2$. In general, according to \cite{Ar} and \cite{Gr}, to classify the weighted dual graphs we need to classify the corresponding negative-definite matrices: \begin{proposition}[\cite{Ar}]\label{prop2.18} Let $\left\{E_{i}\right\}_{i=1, \cdots, n}$ be a connected bunch of complete curves on a regular two-dimensional scheme: \begin{itemize} \item[(i)] Suppose that $E_{i} \cdot E_{j}$ is negative-definite, then there exists positive cycles $Z=\sum r_{i} E_{i}$ such that $Z \cdot E_{i} \leq 0$ for all $i .$ \item[(ii)] Conversely, if there exists a positive cycle $Z=\sum r_{i} E_{i}$ such that $Z \cdot E_{i} \leq 0$ for all $i$, then $E_{i} \cdot E_{j}$ is negative semi-definite. If in addition $Z^{2}<0$, then $E_{i} \cdot E_{j}$ is negative-definite. \end{itemize} \end{proposition} } \section{Singularities of degree two} \subsection{The relation between the canonical cycle and the Yau cycle} Let $(V, o)$ be an isolated singular point on a surface $V$, and $\pi\colon X \to V$ be a minimal resolution of $V$. We know the Yau cycle is the canonical cycle if $(V, o)$ is a numerically Gorenstein elliptic singular point. This property doesn't hold when $p_f(V,o) > 0$. But {\color{black}i}f the degree of $(V,o)$ is small, we can get {\color{black}a} similar property {\color{black}under} restrictive {\color{black}conditions}. If the fundamental cycle $Z$ satisfying $Z^2=-1$, the relation between the canonical cycle and the Yau cycle {\color{black}is} given by Konno in \cite{Ko1}. \begin{definition}\label{def3.1} Since $p_f(V,o)>0$, there exists an irreducible component $A \le Z$ such that $A$ is not {\color{black}a} $(-2)$-curve. Let $k$ be the coefficient of cycle $A$ of $Z$. We say that $Z$ is essentially irreducible if either $Z={\color{black}k}A$ or $Z-{\color{black}k}A$ consists of $(-2)$-curves. \end{definition} \begin{proposition}[Lemma 3.4, \cite{Ko1}]\label{prop3.2} Let $(V, o)$ be a normal surface singularity with $p_f(V,o)>0$ and $Z^2=-1$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that $Z$ is essentially irreducible. Then $(V,o)$ is numerically Gorenstein with canonical cycle $(2p_f(V,o)-1)Y$. \end{proposition} For the case with $Z^2=-2$, we consider a similar property: $(V,o)$ is numerically Gorenstein with canonical cycle $p_f(V,o)Y$. However this property doesn't hold when only $Z$ is essentially irreducible. So we need a more restrictive situation. \begin{lemma}\label{Lemma}\label{lem3.3} Let $(V, o)$ be a normal surface singularity with $p_f(V,o)>0$ and $Z^2=-2$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that $Z$ is essentially irreducible, then either $m=1$ or $Z-D_m$ consists of $(-2)$-curves, where $m$ denotes the length of the Yau sequence for $Z$ and $D_m$ is the smallest term in the Yau sequence for $Z$. \end{lemma} \begin{proof} If $m>1$, we can get an increasing sequence of cycle{\color{black}s} like the proof of proposition 2.4: \begin{align*} Z_0=D_m, Z_1 &= Z_0 + E_{i_1}, Z_2=Z_1+E_{i_2},\dots, Z_j=Z_{j-1}+E_{i_j},\dots,\\ Z_\ell &= Z_{\ell-1}+E_{i_\ell} =Z, \end{align*} where $E_{i_j}$ is {\color{black}an} irreducible component and $E_{i_j}\cdot Z_{j-1}>0$, $1 \le j\le \ell$. By the definition of Yau sequence, we have $p_a(D_m)=p_a(Z)$. 
Since $p_a(Z_j)=p_a(Z_{j-1})+p_a(E_{i_j}) + E_{i_j}\cdot Z_{j-1} -1 \ge p_a(Z_{j-1})$ for $1 \le j \le \ell$, we can get $p_a(E_{i_j}) = 0$ and $E_{i_j}\cdot Z_{j-1} = 1$ for $1 \le j \le \ell$. Since $Z_{j-1}^2 = Z_{j}^2 - 2 E_{i_j}\cdot Z_{j-1} -E_{i_j}^2=Z_{j}^2-2-E_{i_j}^2$ and $\pi$ is minimal, we have that $-2=Z_\ell^2 \le Z_{\ell-1}^2 \le \dots \le Z_1^2 \le Z_0^2 \le -1$. Assume that $Z-D_m$ {\color{black}does} not consist of $(-2)$-curves; then exactly one $E_{i_j}$ is not a $(-2)$-curve, and this $E_{i_j}$ is a $(-3)$-curve. Since in this case $D_m^2=-1$, we have the unique non-multiple component $A$ with $A\cdot D_m=-1$ and $\mathcal{O}_{D_m-A}(-D_m)$ is numerically trivial. Since $D_m \cdot Z_{min}<0$, $A$ is not a $(-2)$-curve. Thus the exceptional set $\pi^{-1}(o)=\cup E_i$ is such that all $E_i$ are $(-2)$-curves except {\color{black}for} one $(-3)$-curve $A$, and the coefficient of cycle $A$ of $Z$ is 2. Thus $p_a(Z)=(2A \cdot K-2)/2 + 1=1$ by {\color{black}the} Riemann-Roch theorem. These weighted dual graphs {\color{black}consisting of ($-2$)-curves and exactly one ($-3$)-curve} are classified in \cite{YZZ2}, and no graph in that classification satisfies the above conditions. This contradiction shows that $Z-D_m$ consists of $(-2)$-curve{\color{black}s}. \end{proof} { \color{black} \begin{remark} If we don't use the classification results of weighted dual graphs in \cite{YZZ2}, we can still prove Lemma \ref{lem3.3} by discussing the irreducible components with a negative intersection number with $Z$. This is the main approach to proving Theorem \ref{thm3.8} later on; however, the process may be more cumbersome. We briefly sketch the idea here. Continuing from the process of Lemma \ref{lem3.3}, we now need to prove that there are no singularities where the coefficient of $A$ in $Z$ is $2$ and the coefficient in $D_m$ is $1$. Since $D_m \neq Z$ and $Z$ is essentially irreducible, we know that any irreducible component with a negative intersection number with $Z$ is a ($-2$)-curve. We assert that such an irreducible component is unique. Otherwise, there exists an irreducible component $B$ such that $B \cdot Z=-1$, and the coefficient of cycle $B$ of $Z$ is $1$. It means that there exists a unique irreducible component $B_1$ connected with $B$, and the coefficient of cycle $B_1$ of $Z$ is $1$. If $B_1$ is a ($-2$)-curve, then we can find the unique irreducible component $B_2$ connected with $B_1$, excluding $B$. We can continue repeating this step until $B_n$ is no longer a ($-2$)-curve. Since $Z$ is essentially irreducible, we have $B_n=A$, but the coefficient of $A$ of $Z$ is $2$, a contradiction. Then we have the unique irreducible component $C$ such that $C \cdot Z=-1$, and the coefficient of cycle $C$ of $Z$ is $2$. After removing $A$, we observe that $Z$ can be divided into several connected branches consisting of ($-2$)-curves. We will only consider the branch where $C$ is located. Its weighted dual graph is an ADE graph. The number of irreducible components connected with $C$ is either $1$ or $2$. By using a similar method as above, we can determine that the weighted dual graph of the branch where $C$ is located is $A_n$ for some $n$ when this number is $1$, and $D_n$ for some $n$ when this number is $2$. In these cases, we can conclude that the coefficient of $A$ of $D_m$ is $2$. For the specific process, we refer to case (2) and case (5) of Theorem \ref{thm3.8}. \end{remark} } \begin{remark}\label{rm3.4} It is easy to see that when $Z-D_m$ consists of $(-2)$-curves, then $D_i^2=Z^2$ for $1 \le i \le m$.
\end{remark} \begin{theorem} [i.e. Theorem \ref{mt1}] \label{thm3.5} Let $(V, o)$ be a normal surface singularity with $p_f(V,o)>0$ and $Z^2=-2$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that $Z$ is essentially irreducible and $D_m=Z_{min}$. Then $(V,o)$ is numerically Gorenstein with canonical cycle $p_f(V,o)Y$. \end{theorem} \begin{proof} Assume $A$ is {\color{black}an} irreducible component $A\le Z$ such that A is not {\color{black}a} $(-2)$-curve, and $k$ is the coefficient of cycle $A$ of $Z$. Since $-2=Z_{min} \cdot (kA)=k Y \cdot A$, we have $K_X \cdot A=\frac{1}{k} K_X \cdot Z= \frac{2p_f(V,o)}{k}=- p_f(V,o) Y \cdot A$. We claim that $B \cdot Y=0$ for any component $B<Z-kA$. Let $j$ be the {\color{black}largest} index such that $B \le D_j$. If $j=m$, then $\mathcal{O}_{B}(-D_i)$ is numerically trivial for $1 \le i \le m-1$ by {\color{black}the} definition of {\color{black}the} Yau sequence and $B \cdot D_m=0$ by $D_m=Z_{min}$, hence $B \cdot Y=0$. If $j<m$, then $D_{j+1}$ is the Tyurina component of $D_j$. We have $\mathcal{O}_{B}(-D_i)$ is numerically trivial for $1 \le i \le j-1$ and $\mathcal{O}_{D_k}(-(D_j-D_{j+1}))$ is numerically trivial for $j+2 \le k \le m$. Since $D_k$ is chain-connected, we have either $D_k \le D_j-D_{j+1}$ or $\textup{supp}\ D_k \cap \textup{supp}\ (D_j-D_{j+1})= \emptyset$. However, $D_k \le D_j-D_{j+1}$ is impossible, since $p_a(D_k+D_{j+1})=p_a(D_k)+p_a(D_{j+1})-1 \ge p_a(D_{j+1})$ and $\mathcal{O}_{D_k+D_{j+1}}(-D_j)$ is numerically trivial. Thus $\textup{supp}\ D_k \cap \textup{supp}\ (D_j-D_{j+1})= \emptyset$ and $C \cdot D_k=0$ for any irreducible component $C \le D_j-D_{j+1}$. In particular, $B \cdot D_k =0$ for $j+2 \le k \le m$. In sum, we get $B \cdot Y=B \cdot D_j+B \cdot D_{j+1}$. By Lemma \ref{lem3.3} and Remark \ref{rm3.4}, we have that $D_j^2=-2$ and $D_j-D_{j+1}$ consists of $(-2)$-curves. Since $D_j$ is the fundamental cycle on its support, there either exist two component{\color{black}s} $C_1$ and $C_2$ of $D_j$ such that $C_1 \cdot D_j=C_2\cdot D_j=-1$ and the coefficient of cycle $C_i$ of $D_j$ is 1 for $i=1,2$, or exists {\color{black}a} component $C_3$ of $D_j$ such that $C_3 \cdot D_j=-1$ and the coefficient of cycle $C_3$ of $D_j$ is 2. If the first alternative happens, then $\mathcal{O}_{D_j-C_1-C_2}(-D_j)$ is numerically trivial and $C_1 \cdot C_2=0$. This implies that $D_j-C_1-C_2$ is the Tyurina component of $Z$, and $B \in \{ C_1,C_2\}$. We have $B \cdot D_j = -1$ and $B \cdot D_{j+1}=B \cdot (D_j-C_1-C_2)=B\cdot D_j - B^2=1$, hence $B\cdot Y=B \cdot D_j+B \cdot D_{j+1}=0$. If the last alternative happens, we can get an increasing sequence of cycle{\color{black}s} like the proof of {\color{black}P}roposition \ref{prop2.4}: \begin{align*} Z_0=D_{j+1}, Z_1 &= Z_0 + E_{i_1}, Z_2=Z_1+E_{i_2},\dots, Z_j=Z_{j-1}+E_{i_j},\dots,\\ Z_\ell &= Z_{\ell-1}+E_{i_\ell} =D_j, \end{align*} where $E_{i_j}$ is {\color{black}an} irreducible component and $E_{i_j}\cdot Z_{j-1}=1$ for $1 \le j\le \ell$ by $p_a(D_j)=p_a(D_{j+1})$. Then we have $E_{i_1}=C_3$ and $C_3 \cdot Y=C_3 \cdot D_j + C_3 \cdot D_{j+1}=0$. If $B \neq C_3$, it means that $B \cdot D_j=0$. Since $B \nleq D_{j+1}$ and $\mathcal{O}_{D_{j+1}+B}(-D_j)$ is numerically trivial, we have $B \cdot D_{j+1} \ge 0$ and $B \cdot D_{j+1} \le 0$, furthermore, $B \cdot Y=B \cdot D_j+B \cdot D_{j+1}=0$. In sum, we have shown that $(V,o)$ is numerically Gorenstein with canonical cycle $p_f(V,o)Y$. 
\end{proof} We remark that, when $Z^2=-1$, the condition that $Z$ is essentially irreducible implies that $D_m=Z_{min}$. {\color{black}However,} when $Z^2=-2$, there exists {\color{black}a} singularity $(V, o)$ satisfying that $Z$ is essentially irreducible and $D_m \neq Z_{min}$. \begin{proposition}\label{rm3.6} Let $(V, o)$ be a normal surface singularity with $p_f(V,o)>0$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that $Z$ is essentially irreducible with $K_X \cdot A + Z^2 \ge 0$ and $D_m=Z_{min}${\color{black}, where $A$ is the unique cycle in $Z$ that is not a ($-2$)-curve.} Then we have $Z_K =(\frac{2-2p_f(V,o)}{Z^2}+1) Y$. \end{proposition} {\color{black}\begin{proof} Since this property is not relevant to the main conclusions of this paper, only a brief outline of the proof is provided here. First, let us retain the notations used in Lemma \ref{lem3.3}. We have that $Z^2=Z_\ell^2 \le Z_{\ell-1}^2 \le \dots \le Z_1^2 \le Z_0^2 = D_m^2 \le -1$. Since $Z$ is essentially irreducible, we can let $k$ be the coefficient of cycle $A$ of $Z$ and $k'$ be the coefficient of cycle $A$ of $D_m$. Then we have that $p_a(Z)=1+\frac12(Z^2+k A\cdot K)$ and $p_a(D_m)=1+\frac12(D_m^2+k' A\cdot K)$ by the Riemann-Roch theorem. Since $Z^2 + A \cdot K\ge 0$, we have $k'=k$. It means that $Z-D_m$ consists of ($-2$)-curves and $D_i^2=Z^2$ for $1\le i \le m$. To prove $Z_K =(\frac{2-2p_f(V,o)}{Z^2}+1) Y$, we need to demonstrate that $(\frac{2-2p_f(V,o)}{Z^2}+1) Y \cdot A=-K_X \cdot A$ and $Y \cdot B=0$ for any ($-2$)-curve $B$. The former can be derived from $Z^2=Z_{min} \cdot (kA)=k Y \cdot A$ and $p_a(Z)=1+\frac12(Z^2+k A\cdot K)$. As in the proof of Theorem \ref{thm3.5}, the latter can be reduced to $B \cdot (D_j+D_{j+1})=0$, where $j$ is the largest index such that $B \le D_j$. We can assume $B \cdot D_j =-1$, since the case when $B \cdot D_j=0$ is similar to the last part of the proof of Theorem \ref{thm3.5}. After removing $A$, we observe that $D_j$ can be divided into several connected branches consisting of ($-2$)-curves; we will only consider the branch where $B$ is located, and claim that $B$ is the only irreducible component in this branch satisfying $B \cdot D_j < 0$. Otherwise, we can find a path connecting two irreducible components with a negative intersection number with $D_j$; subtracting $1$ from the coefficients of the curves in this path of $D_j$ would result in a new cycle with an arithmetic genus larger than that of $D_j$, which contradicts the chain-connectedness of $D_j$. Additionally, as shown in the proof of Proposition \ref{prop2.4}, we can transform $D_j$ into $D_{j+1}$ by progressively removing irreducible components. We further claim that during this process, $B$ is the last irreducible component to be removed in this branch. This implies that $B \cdot D_{j+1}=1$. Thus, we have proven the proposition. \end{proof} } \subsection{Classification of weighted dual graphs of the singularities of degree two with the fundamental cycle $Z$ being essentially irreducible.} Let $(V, o)$ be a normal surface singularity with $p_f(V,o)>0$ and $Z^2=-2$, and $\pi\colon X \to V$ be a minimal resolution of $V$. Assume that $Z$ is essentially irreducible and {\color{black}the} irreducible component $A \le Z$ is not {\color{black}a} $(-2)$-curve. Let $k$ be the coefficient of cycle $A$ of $Z$ and $m$ {\color{black}be} the length of the Yau sequence for $Z$. If $m=1$, the condition $D_m=Z_{min}$ means that $\mathcal{O}_{Z-kA}(-Z)$ is numerically {\color{black}trivial}.
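For concreteness, we include the following elementary example, which is added here only to illustrate the case $m=1$ and is not used in the sequel.
\begin{example}
Suppose that the exceptional set of the minimal resolution is a single smooth elliptic curve $A$ with $A^2=-2$ (so $A$ is not a $(-2)$-curve, since its genus is one); that is, $(V,o)$ is a simple elliptic singularity of degree two. Then $Z=A$, $k=1$, $p_f(V,o)=p_a(A)=1$ and $Z_{min}=Z$. Since $Z \cdot Z_{min}=Z^2=-2<0$, we have $m=1$, $D_m=Z=Z_{min}$ and $Y=D_1=Z$. Moreover, $Z-kA=0$, so the condition that $\mathcal{O}_{Z-kA}(-Z)$ is numerically trivial holds vacuously. By the adjunction formula, $K_X \cdot A=-A^2+2\cdot 1+2\cdot 0-2=2$, hence $Z_K \cdot A=-2$ and $Z_K=A=p_f(V,o)Y$, in accordance with Theorem \ref{thm3.5}.
\end{example}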
In order to study the relation between $Z$ and $D_m$, we {\color{black}aim} to classif{\color{black}y} {\color{black}the} weighted dual graphs of the singularities of degree two with the fundamental cycle $Z$ being essentially irreducible. We use $\Gamma$ to denote the weighted dual tree graph of the exceptional set $\pi^{-1}(o)$. After removing the point corresponding to $A$, the remaining connected subgraphs are denoted by $\Gam_1,\cdots,\Gam_n$. \begin{lemma} \label{lem3.7} With the notations as above, $\Gam_i$ must be ADE for any $1\le i\le n$. Let $Z|_{\Gam_i}$ be the fundamental cycle $Z$ restricted to each $\Gam_i$. If $m>1$, we have $\{ i |(Z|_{\Gam_i} \cdot Z) <0 \} =\{ i | \textup{supp} \ \Gam_i \cap \textup{supp} \ (Z-D_m) \neq \emptyset \}$, where $m$ denotes the length of the Yau sequence for $Z$ and $D_m$ denotes the smallest term in the Yau sequence. \end{lemma} \begin{proof} Firstly, we have $Z|_{\Gam_i}\cdot E_j \leq 0$, $\forall E_j \in \textup{supp} \ \Gam_i$. Take a cycle $E_{j_0} \in \textup{supp} \ \Gam_i$ connected with $A$; then $Z|_{\Gam_i}\cdot E_{j_0}<0$. By Proposition \ref{prop2.18}, we conclude that $\Gam_i$ is negative-definite. Hence $\Gam_i$ must be $ADE$. According to Lemma \ref{lem3.3}, we have an increasing sequence of cycle{\color{black}s} when $m>1$: \begin{align*} Z_0=D_m, Z_1 &= Z_0 + E_{i_1}, Z_2=Z_1+E_{i_2},\dots, Z_j=Z_{j-1}+E_{i_j},\dots,\\ Z_\ell &= Z_{\ell-1}+E_{i_\ell} =Z, \end{align*} where $E_{i_j}$ is {\color{black}a} $(-2)$-curve and $E_{i_j}\cdot Z_{j-1}=1$, $1 \le j\le \ell$. For any $1 \le i \le n$, if $Z|_{\Gam_i}\cdot Z=0$, we prove that $\mathcal{O}_{Z|_{\Gam_i}}(-Z_j)$ is numerically {\color{black}trivial} for each $j$ by descending induction on $j$. Firstly, since $Z|_{\Gam_i}\cdot Z=0$, we have $\mathcal{O}_{Z|_{\Gam_i}}(-Z_\ell)$ is numerically {\color{black}trivial}. Assume that $\mathcal{O}_{Z|_{\Gam_i}}(-Z_j)$ is numerically {\color{black}trivial} for some $1 \le j \le \ell$. Noticing that $E_{i_j} \cdot Z_j=E_{i_j} \cdot Z_{j-1} + E_{i_j}^2=-1$, we have $E_{i_j} \notin \textup{supp} \ \Gam_i$ and $E_{i_j} \neq A$. It means that $E_{i_j}$ is disjoint from $Z|_{\Gam_i}$, so that $-Z_{j-1}=-(Z_j-E_{i_j})$ is numerically {\color{black}trivial} on $Z|_{\Gam_i}$. By induction, we have $\mathcal{O}_{Z|_{\Gam_i}}(-Z_j)$ is numerically {\color{black}trivial} for each $j$, hence $\textup{supp} \ \Gam_i \cap \textup{supp} \ (Z-D_m) = \emptyset$. Since $\mathcal{O}_{D_m}(-Z)$ is numerically {\color{black}trivial} when $m>1$, it is obvious that $\textup{supp} \ \Gam_i \cap \textup{supp} \ (Z-D_m) \neq \emptyset$ when $Z|_{\Gam_i} \cdot Z <0$. Hence, we have $\{ i |(Z|_{\Gam_i} \cdot Z) <0 \} =\{ i | \textup{supp} \ \Gam_i \cap \textup{supp} \ (Z-D_m) \neq \emptyset \}$. \end{proof} Let $S=\{ i |(Z|_{\Gam_i} \cdot Z) <0 \}$. By the proof of Lemma \ref{lem3.7}, in order to study the relation between $Z$ and $D_m$, we only need to consider the subgraph $\Gam'$ of $\Gam$ obtained by removing all the $\Gam_i$ with $i \notin S$. In a dual graph, the symbol $*$ represents the point corresponding to the cycle $A$; we still call it the cycle $A$ later. The other points correspond to the {\color{black}(}$-2${\color{black})-}curves and are denoted by $\begin{picture}(12,3) \put(6,3){\circle*{6}} \end{picture}$; we call them {\color{black}(}$-2${\color{black})-}{\color{black}curves} later.
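Before stating the classification, we record one more small example, included only for illustration, showing how the subgraph $\Gam'$, the restriction $Z|_{\Gam'}$ and the set $S$ look in a case with $m>1$.
\begin{example}
Suppose that the exceptional set of the minimal resolution consists of a smooth elliptic curve $A$ with $A^2=-2$ (hence not a $(-2)$-curve) and two $(-2)$-curves $E_1$, $E_2$ with $E_1 \cdot A=E_2 \cdot A=1$ and $E_1 \cdot E_2=0$. Then $Z=E_1+A+E_2$, $Z^2=-2$, $p_f(V,o)=1$, and $Z$ is essentially irreducible with $k=1$. Here $Z \cdot A=0$ and $Z_{min}=A$; the Yau sequence is $D_1=Z$, $D_2=A$ (note that $D_2 \cdot Z_{min}=A^2<0$), so $m=2$, $D_m=Z_{min}$ and $Y=D_1+D_2=E_1+2A+E_2$. Removing the point corresponding to $A$ gives $\Gam_1$ and $\Gam_2$, consisting of the single points corresponding to $E_1$ and $E_2$, respectively. Since $Z|_{\Gam_1} \cdot Z=Z|_{\Gam_2} \cdot Z=-1$, we get $S=\{1,2\}$ and $\Gam'=\Gam$; compare case (1) of Theorem \ref{thm3.8} below. One checks directly that $Z_K=E_1+2A+E_2=p_f(V,o)Y$, as predicted by Theorem \ref{thm3.5}.
\end{example}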
\begin{theorem} \label{thm3.8} With the notations as above, when $m>1$ case, $\Gam'$ and $Z|_{\Gam'}$ must be one of the following(the underlined number represents the $A$): \begin{itemize} \item[(1)] $A_{m'}+A+A_{n'}$.\\ \begin{picture}(145,18) \put(20,-18){\makebox{\footnotesize$m'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(98,-18){\makebox{\footnotesize$n'$ points}} \put(94,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(89,3){\dashbox{2}(48,0)} \put(71,0){$*$} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(74,3){\line(1,0){12}} \put(89,3){\circle*{6}} \put(137,3){\circle*{6}} \end{picture} $Z|_{\Gam'} =1 \ ... \ \underline{1} \ ... \ 1$. \\ \\ \item[(2)] $A+A_{n'}$: $n'\ge 3$.\\ \begin{picture}(83,18) \put(20,-18){\makebox{\footnotesize$n'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(71,0){$*$} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \end{picture} $Z|_{\Gam'} =1 \ 2 \ 2 \ ...\ 2 \ \underline{2}$,or \begin{picture}(83,18) \put(20,-18){\makebox{\footnotesize$n'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(71,0){$*$} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,5){\line(1,0){12}} \put(61,1){\line(1,0){12}} \end{picture} $Z|_{\Gam'} =1 \ 2 \ 2 \ ...\ 2 \ \underline{1}$. \\ \\ \item[(3)] $A+(1-A_{n'})$: $n'\ge 3$.\\ \begin{picture}(83,18) \put(11,3){\dashbox{2}(48,0)} \put(71,0){$*$} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(59,19){\circle*{6}} \end{picture} $Z|_{\Gam'} = 1 \ 2 \ ...\stackrel {\textstyle {1}\strut}{2\strut} \ \underline{1}$. \\ \item[(4)] $A+(k'-D_{n'})$: k' is an even number and $0\le k' \le n-3$.\\ \begin{picture}(131,18) \put(20,-18){\makebox{\footnotesize$k'+1$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(59,3){\dashbox{2}(48,0)} \put(107,3){\circle*{6}} \put(123,3){\circle*{6}} \put(107,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(109,3){\line(1,0){12}} \put(107,5){\line(0,1){12}} \put(59,5){\line(0,1){12}} \put(56,16){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \ ...\stackrel {\textstyle \underline{1}\strut}{k+2\strut} \ ... \stackrel {\textstyle {\frac{k'+2}{2}}\strut}{k'+2\strut} \frac{k'+2}{2}$. \\ \\ \\ In particular, when $n'$ is an odd number and $k'=n'-3$, we have:\\ \begin{picture}(83,18) \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,-13){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,1){\line(0,-1){12}} \put(59,5){\line(0,1){12}} \put(56,16){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \ ...\underset{\textstyle \frac{n'-1}{2}} {\stackrel {\textstyle \underline{1}\strut}{n'-1\strut}} \frac{n'-1}{2}$.\\ In additional, when $k'=0$, we have:\\ \begin{picture}(99,18) \put(27,3){\dashbox{2}(48,0)} \put(91,3){\circle*{6}} \put(75,19){\circle*{6}} \put(27,3){\circle*{6}} \put(75,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(77,3){\line(1,0){12}} \put(75,5){\line(0,1){12}} \put(7,0){$*$} \end{picture} $Z|_{\Gam'} =\underline{1} \ 2 \ 2 \ ... \stackrel {\textstyle 1 \strut}{2\strut} 1$. 
\\ \item[(5)] $A+(D_{n'}')$: $n'$ is an odd number.\\ \begin{picture}(99,18) \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,3){\line(1,0){12}} \put(87,0){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \ ...\stackrel{\textstyle \frac{n'-1}{2} \strut}{n'-1\strut} \ \frac{n'+1}{2} \ \underline{2}$,\\ or \begin{picture}(99,18) \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,5){\line(1,0){12}} \put(77,1){\line(1,0){12}} \put(87,0){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \ ...\stackrel{\textstyle \frac{n'-1}{2} \strut}{n'-1\strut} \ \frac{n'+1}{2} \ \underline{1}$.\\ \item[(6)] $A+(D_{n'}'')$: $n'$ is an even number.\\ \begin{picture}(83,18) \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(75,5){\line(0,1){12}} \put(61,19){\line(1,0){12}} \put(72,16){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \ ...\stackrel{\textstyle \frac{n'}{2} \strut}{n'-1\strut}\ \stackrel {\textstyle \underline{1} \strut}{\frac{n'}{2}\strut}$.\\ \item[(7)] $A+E_6$.\\ \begin{picture}(98,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(27,3){\circle*{6}} \put(27,3){\line(1,0){12}} \put(43,3){\circle*{6}} \put(43,3){\line(1,0){12}} \put(59,3){\circle*{6}} \put(59,3){\line(1,0){12}} \put(43,5){\line(0,1){12}} \put(43,19){\circle*{6}} \put(75,3){\circle*{6}} \put(75,3){\line(1,0){12}} \put(86,0){$*$} \end{picture} $Z|_{\Gam'} = 2 \ 3 \stackrel{\textstyle 2 \strut}{4\strut} 3 \ 2 \ \underline{1}$.\\ \item[(8)] $A+D_5'''$: \\ \begin{picture}(82,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(27,3){\circle*{6}} \put(27,3){\line(1,0){12}} \put(43,3){\circle*{6}} \put(43,3){\line(1,0){12}} \put(59,3){\circle*{6}} \put(59,3){\line(1,0){12}} \put(43,5){\line(0,1){12}} \put(43,19){\circle*{6}} \put(70,0){$*$} \end{picture} $Z|_{\Gam'} = 1 \ 2 \stackrel{\textstyle 2 \strut}{3\strut} 2 \ \underline{1}$. \end{itemize} \end{theorem} {\color{black}The} weighted dual graph of case (8) is the same as case (5), but $Z|_{\Gam'}$ {\color{black}differs}. \begin{proof} Since $m>1$, $Z \cdot A=0$ and $-2=Z^2=\overset{n}{\underset{i=1}\sum} (Z \cdot Z|_{\Gam_i}) $, hence, the number of elements in $S$ is {\color{black}either} $1$ or $2$. If $S$ has two elements, we can assume $S=\{ 1,2 \}$. Then we know that $Z \cdot Z|_{\Gam_1}=-1$ and, hence, there exists {\color{black}an} irreducible componet $C_1 \le Z|_{\Gam_1}$ such that $C_1 \cdot Z=-1$ and the coefficient of $C_1$ of $Z$ is $1$. Since $C_1$ is {\color{black}a} $(-2)$-curve, there exists {\color{black}a} unique cycle in $\Gam$ connected with $C_1$. If the cycle isn't $A$, we call it $C_2$. Then we have $C_2 \cdot Z=-1$ and the coefficient of $C_1$ of $Z$ is $1$. Except for $C_1$, there exists {\color{black}a} unique cycle in $\Gam$ connected with $C_2$. Now{\color{black}, by} the obvious induction{\color{black}, we can show} that the weighted dual graph of $\Gam_1$ is $A_{m'}$ for any $m'$ and similarly the weighted dual graph of $\Gam_2$ is $A_{n'}$ for any $n'$, and the weighted dual graph of $\Gam'$ and $Z|_{\Gam'}$ is the same as {\color{black}in} case (1). Then we consider that $S$ has only one element, we can assume $S=\{ 1\}$. 
Then we know that $Z \cdot Z|_{\Gam_1}=-2$ and there exists {\color{black}a} unique cycle $C$ in $\Gam_1$ such that $C \cdot Z<0$. That is because if there {\color{black}exist} two cycles $C$ and $D$, we can find a connection way of $C$ and $D$ in $\Gam_1$. We denote the points of the connection way {\color{black}by} $C_1=C$,$C_2$,$\dots$,$C_{\color{black}k'}=D$, then $\mathcal{O}_{Z}(-(Z-\overset{k'}{\underset{i=1}\sum} C_i))$ is nef, contradicting the minimality of $Z$. Since $C {\color{black}\cdot} Z=p_a(Z)-p_a(Z-C)-1 \ge -1$, the coefficient of $C$ of $Z$ is $2$. Refer{\color{black}ring} to \cite{Oku}, for an ADE graph, with abuse of notations, $E_i$ denotes exceptional curve, and $E= \sum E_i$ is the exceptional cycle. Let $\delta_{i}=\left(E-E_{i}\right) \cdot E_{i}$ be the number of irreducible components of $E$ connected with $E_{{\color{black}i}}$. A cycle $E_{i}$ is called an end (resp. a node), if $\delta_{i}=1$ (resp. $\delta_{i} \geq 3$). If $C$ isn't an end in $\Gam_1$, we claim that $\Gam_1$ is $A_{n'}$ for any $n'$. If not, then $\Gam_1$ has a node $D$, we can find a connection way of $C$ and $D$. We denote the points of the connection way {\color{black}by} $C_1=C$,$C_2$,$\dots$,$C_k'=D$(If $C=D$, denote $k'=1$). If $C \neq D$, we denote the cycles connected with $D$ in $\Gam_1$ {\color{black}by} $C_{k'-1}$,$B_1$,$B_2$, and the another cycle connected with $C$ in $\Gam_1$ {\color{black}as $B_3$}. If $C=D$, we denote the cycles connected with $D$ in $\Gam_1$ as $B_1$,$B_2$,$B_3$. Then we have $\mathcal{O}_{Z}(-(Z-2\overset{k'}{\underset{i=1}\sum} C_i-\overset{3}{\underset{i=1}\sum} B_i))$ is nef, contradicting the minimality of $Z$. Since the coefficient of $C$ of $Z$ is $2$ and $C$ isn't {\color{black}an} end in $\Gam_1$, we have there exists a cycle $D$ connected with $C$ in $\Gam_1$ such that the coefficient of $D$ of $Z$ is $1$, hence, $D$ is an end. Then either $C$ {\color{black}is} connected with $A$ or the coefficient of the another cycle $C_2$ that connected with $C$ in $\Gam_1$ of $Z$ is $2$. If the first alternative happens, {\color{black}there are two situations. One is that} $C$ {\color{black}is} connected with another cycle $C_2$ and $C_2$ is an end when $kA \cdot C=1$, {\color{black}the other is that} $C$ is an end in $\Gam_1$ when $kA \cdot C=2$({\color{black}which} contradicts the assumption that $C$ isn't an end). If the last alternative happens, for $C_2$, either $C_2$ {\color{black}is} connected with $A$ {\color{black},} or the coefficient of the another cycle $C_3$ that {\color{black}is} connected with $C_2$ in $\Gam_1$ of $Z$ is $2$. By induction, we can get case (2) and case (3). If $C$ is an end, we have $\Gam_1$ isn't $A_{n'}$ for any $n'$ by $m>1$. Then $\Gam_1$ has a node {\color{black}called} $D$, we can find a connection way of $C$ and $D$, and denote the points of the connection way {\color{black}by} $C_1=C$,$C_2$,$\dots$,$C_k'=D$, and {\color{black}denote} $B_1$,$B_2$ as the other cycles connected with $D$. Assume $c_i$(resp. $b_i$) is the coefficient of $C_i$(resp. $B_i$) of $Z$. By induction, we have $c_{j}=j+1-\overset{j-1}{\underset{i=1}\sum}((j-i)kA\cdot C_i)$ for $2\le j \le k'$. Since $b_1,b_2\ge \frac{c_k'}{2}$, we have $0 \le c_{k'}-c_{k'-1}-kA\cdot C_{k'}=1-\overset{k'}{\underset{i=1}\sum}(kA\cdot C_i)$. If $A \cdot \overset{k'}{\underset{i=1}\sum}C_i>0$, we have $B_1$,$B_2$ are {\color{black}the} ends and $k=1$, then we get case (4). If $A \cdot \overset{k'}{\underset{i=1}\sum}C_i=0$, we have $c_j=j+1$ for $1\le j \le k'$. 
If $k'$ is an odd number, then we can assume $b_1=\frac{k'+1}{2}$ and $b_2=\frac{k'+3}{2}$, hence, $B_1$ is an end and $kA \cdot B_2 \le 2$. We can get case (5) when $kA \cdot B_2=2$, and get case (7) when $kA \cdot B_2=0$. If $kA \cdot B_2=1$, there exists a cycle $B_3$ connected with $B_2$, and the coefficient of $B_3$ of $Z$ is 1. But $B_3 \cdot Z \ge -2+\frac{k'+3}{2} >0$, it's impossible. If $k'$ is an even number, then we can assume $b_1=b_2=\frac{k'+2}{2}$. We can get case (6) when $kA \cdot B_1>0$ and $kA \cdot B_2>0$, and get case (8) when $kA \cdot B_1=0$ or $kA \cdot B_2=0$. If $kA \cdot B_1=0$ and $kA \cdot B_2=0$, we can prove that $A \cdot Z|_{\Gam_1}=0$, it's impossible. \end{proof} \begin{proposition} \label{prop3.9} With the notations as above, when $m>1$ and $D_m=Z_{min}$ case, $\Gam'$ must be one of the following: \begin{itemize} \item[(1)] $A_{n'}+A+A_{n'}$. \item[(2)] $A+A_{n'}$: $n'$ is an odd number and $n'\ge 3$. \item[(3)] $A+(1-A_{n'})$: $n'$ is an odd number and $n'\ge 3$. \item[(4)] $A+(k'-D_{n'})$: k' is an even number and $0\le k' \le n-3$. \item[(5)] $A+(D_{n'}')$: $n'$ is an odd number. \item[(6)] $A+(D_{n'}'')$: $n'$ is an even number. \item[(7)] $A+E_6$. \end{itemize} remark: The weighted dual graphs $\Gam'$ of case{\color{black}s} (1)-(7) in the Proposition \ref{prop3.9} {\color{black}are} the same as {\color{black}those of }case{\color{black}s} (1)-(7) {\color{black}mentioned} in the Theorem \ref{thm3.8}. \end{proposition} \begin{proof} We only need to compare $D_m|_{\Gam'}$ with $Z_{min}|_{\Gam'}$, since $\textup{supp} \ (Z-D_m) \subset \textup{supp} \ (Z-Z_{min}) \subset \textup{supp} \ \Gam'$. We can compute the $D_m|_{\Gam'}$ and $Z_{min}|_{\Gam'}$ for each case in the Theorem \ref{thm3.8}. Computations {\color{black}for} case{\color{black}s} (1)-(8) are simple, {\color{black}so} let us take the computation of the case (1) as an example here. In case (1), there {\color{black}exist} two different irreducible components $B_1$ and $C_1$ in $D_1=Z$ such that $B_1 \cdot D_1=-1${\color{black},} $C_1 \cdot D_1=-1$, and $B_1 \cdot C_1 = 0$. In fact, the points corresponding to the $B_1$ and $C_1$ are the ends of $A_{m'}$ and $A_{n'}$ that {\color{black}are} not connect{\color{black}ed} with the point corresponding to the $A$. Then we have $D_2=D_1-B_1-C_1$. We can assume $m' \ge n'${\color{black}. B}y induction, we know that $m=n'+1$ and the weighted dual {\color{black}graph} of $D_{m}|_{\Gam'}$ {\color{black}consists} of $A_{m'-n'}$ and the point corresponding to the $A$. If $m'-n' \neq 0$, since the cycle $C$ corresponding to the end of $A_{m'-n'}$ that {\color{black}is} not connect{\color{black}ed} with the point corresponding to the $A${\color{black}, and it} satisfies $C \cdot D_m=-1$, we have $p_a(D_m)=p_a(D_m-C)$. By induction, we have $Z_{min}|_{\Gam'}=A$. Now we give the $D_m|_{\Gam'}$ and $Z_{min}|_{\Gam'}$ for each case in the Theorem \ref{thm3.8}. \begin{itemize} \item[(1)] $A_{m'}+A+A_{n'}$: assume $m' \ge n'$.\\ $D_m|_{\Gam'}$:\begin{picture}(83,18) \put(20,-18){\makebox{\footnotesize$m'-n'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(71,0){$*$} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \end{picture} $1 \ ... 
\ \underline{1}$ .\\ \\ $Z_{min}|_{\Gam'}$:\begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}.\\ \item[(2)] $A+A_{n'}$: $n'\ge 3$.\\ $D_m|_{\Gam'}$(when n' is an even number): \begin{picture}(50,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(27,3){\circle*{6}} \put(27,3){\line(1,0){12}} \put(38,0){$*$} \end{picture} $1 \ 2 \ \underline{2}$ or \begin{picture}(50,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(27,3){\circle*{6}} \put(27,5){\line(1,0){12}} \put(27,1){\line(1,0){12}} \put(38,0){$*$} \end{picture}$1 \ 2 \ \underline{1}$.\\ $D_m|_{\Gam'}$(when n' is an odd number): \begin{picture}(34,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(22,0){$*$} \end{picture} $1 \ \underline{2}$ or \begin{picture}(34,18) \put(11,3){\circle*{6}} \put(11,5){\line(1,0){12}} \put(11,1){\line(1,0){12}} \put(22,0){$*$} \end{picture}$1 \ \underline{1}$.\\ $Z_{min}|_{\Gam'}$: \begin{picture}(34,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(22,0){$*$} \end{picture} $1 \ \underline{2}$ or \begin{picture}(34,18) \put(11,3){\circle*{6}} \put(11,5){\line(1,0){12}} \put(11,1){\line(1,0){12}} \put(22,0){$*$} \end{picture}$1 \ \underline{1}$.\\ \item[(3)] $A+(1-A_{n'})$: $n'\ge 3$.\\ $D_m|_{\Gam'}$(when n' is an even number): \begin{picture}(34,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(11,5){\line(0,1){12}} \put(11,19){\circle*{6}} \put(22,0){$*$} \end{picture} $ \stackrel{\textstyle 1 \strut}{1\strut} \underline{1}$.\\ $D_m|_{\Gam'}$(when n' is an odd number): \begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}.\\ $Z_{min}|_{\Gam'}$: \begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}.\\ \item[(4)] $A+(k'-D_{n'})$: k' is an even number and $0\le k' \le n-3$.\\ $D_m|_{\Gam'}$:\begin{picture}(131,18) \put(20,-18){\makebox{\footnotesize$k'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(59,3){\dashbox{2}(48,0)} \put(107,3){\circle*{6}} \put(123,3){\circle*{6}} \put(107,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(109,3){\line(1,0){12}} \put(107,5){\line(0,1){12}} \put(59,5){\line(0,1){12}} \put(56,16){$*$} \end{picture} $1 \ 2 \ ...\stackrel {\textstyle \underline{1}\strut}{k\strut} \ ... \stackrel {\textstyle {\frac{k'}{2}}\strut}{k'\strut} \frac{k'}{2}$. \\ \\ \\ $Z_{min}|_{\Gam'}$:\begin{picture}(131,18) \put(20,-18){\makebox{\footnotesize$k'$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(59,3){\dashbox{2}(48,0)} \put(107,3){\circle*{6}} \put(123,3){\circle*{6}} \put(107,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(109,3){\line(1,0){12}} \put(107,5){\line(0,1){12}} \put(59,5){\line(0,1){12}} \put(56,16){$*$} \end{picture} $1 \ 2 \ ...\stackrel {\textstyle \underline{1}\strut}{k\strut} \ ... \stackrel {\textstyle {\frac{k'}{2}}\strut}{k'\strut} \frac{k'}{2}$. 
\\ \\ \\ \item[(5)] $A+(D_{n'}')$: $n'$ is an odd number.\\ $D_m|_{\Gam'}$:\begin{picture}(99,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,3){\line(1,0){12}} \put(87,0){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-3}{2} \strut}{n'-3\strut} \ \frac{n'-1}{2} \ \underline{2}$\\ \\ \\ or \begin{picture}(99,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,5){\line(1,0){12}} \put(77,1){\line(1,0){12}} \put(87,0){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-3}{2} \strut}{n'-3\strut} \ \frac{n'-1}{2} \ \underline{1}$.\\ \\ \\ $Z_{min}|_{\Gam'}$:\begin{picture}(99,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,3){\line(1,0){12}} \put(87,0){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-3}{2} \strut}{n'-3\strut} \ \frac{n'-1}{2} \ \underline{2}$\\ \\ \\ or \begin{picture}(99,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(77,5){\line(1,0){12}} \put(77,1){\line(1,0){12}} \put(87,0){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-3}{2} \strut}{n'-3\strut} \ \frac{n'-1}{2} \ \underline{1}$.\\ \\ \item[(6)] $A+(D_{n'}'')$: $n'$ is an even number.\\ $D_m|_{\Gam'}$:\begin{picture}(83,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(75,5){\line(0,1){12}} \put(61,19){\line(1,0){12}} \put(72,16){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-2}{2} \strut}{n'-3\strut}\ \stackrel {\textstyle \underline{1} \strut}{\frac{n'-2}{2}\strut}$. \\ \\ \\ $Z_{min}|_{\Gam'}$:\begin{picture}(83,18) \put(20,-18){\makebox{\footnotesize$n'-3$ points}} \put(16,-4){\makebox{\footnotesize$\underbrace{\hspace{40pt}}$}} \put(11,3){\dashbox{2}(48,0)} \put(75,3){\circle*{6}} \put(59,19){\circle*{6}} \put(11,3){\circle*{6}} \put(59,3){\circle*{6}} \put(61,3){\line(1,0){12}} \put(59,5){\line(0,1){12}} \put(75,5){\line(0,1){12}} \put(61,19){\line(1,0){12}} \put(72,16){$*$} \end{picture} $1 \ ...\stackrel{\textstyle \frac{n'-2}{2} \strut}{n'-3\strut}\ \stackrel {\textstyle \underline{1} \strut}{\frac{n'-2}{2}\strut}$. 
\\ \\ \item[(7)] $A+E_6$.\\ $D_m|_{\Gam'}$: \begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}.\\ $Z_{min}|_{\Gam'}$: \begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}.\\ \item[(8)] $A+D_5'''$: \\ $D_m|_{\Gam'}$:\begin{picture}(82,18) \put(11,3){\circle*{6}} \put(11,3){\line(1,0){12}} \put(27,3){\circle*{6}} \put(27,3){\line(1,0){12}} \put(43,3){\circle*{6}} \put(43,3){\line(1,0){12}} \put(59,3){\circle*{6}} \put(59,3){\line(1,0){12}} \put(70,0){$*$} \end{picture} $1 \ 1 \ 1 \ 1 \ \underline{1}$.\\ $Z_{min}|_{\Gam'}$:\begin{picture}(23,18) \put(11,0){$*$} \end{picture} \underline{1}. \end{itemize} Comparing the $D_m|_{\Gam'}$ with $Z_{min}|_{\Gam'}$, we can get this proposition. \end{proof} \subsection{Arithmetic genus of a singularity in the essentially irreducible case} Let $i$ be a non-negative integer and $p_f(V,o)=p>0$. We have $p_a(iY)-1=i(p_a(Y)-1)+\frac{i(i-1)}{2}Y^2=m(i(p-1) +\frac{i(i-1)}{2}Z^2)$, where $m$ denotes the length of the Yau sequence for $Z$. In the degree one case, we can get a {\color{black}lower} bound {\color{black}for $p_a(V,o)$} by $\underset{i}{\max}\{ p_a(iY)\}=p_a(pY)=\frac{p(p-1)m}{2}+1$. \begin{lemma}[Lemma 3.2, \cite{Ko1}] \label{lem3.10} $p_a(V,o) \ge \frac{p(p-1)m}{2}+1$ holds for a normal surface singularity $(V,o)$ of degree one, where $p=p_f(V,o)$ and $m$ denotes the length of the Yau sequence for $Z$. \end{lemma} In fact, we have the equality sign holds when $Z$ is essentially irreducible. \begin{theorem} [i.e. Theorem \ref{mt2}] \label{thm3.11} Let $(V,o)$ be a normal surface singularity of degree one with $p_f(V,o)>0$, $Z$ {\color{black}be} the fundamental cycle on the minimal resolution. Assume that $Z$ is essentially irreducible, then $p_a(V,o)=\frac{p(p-1)m}{2}+1$, where $p=p_f(V,o)$ and $m$ denotes the length of the Yau sequence for $Z$. \end{theorem} \begin{proof} By Lemma 3.1 in \cite{Ko1}, we have {\color{black}that} $A_i=D_i-D_{i+1}$ is a $(-2)$-curve with $A_i \cdot D_i=-1$ for $1\le i <m$ and $D_m=Z_{min}$. Let $C$ be a cycle whose support is in $\pi^{-1}(o)$ such that $p_a(C)=p_a(V,o)$. Let $C=C_1+ \dots + C_n$ be a chain-connected component decomposition, where $C_i$ is a chain-connected cycle and $\mathcal{O}_{C_j}(-C_i)$ is nef for $i<j$. Let $A \le Z$ be an irreducible component such that $A$ is not {\color{black}a} $(-2)$-curve. Since $Z$ is essentially irreducible, we have $p_a(C-C_i)=p_a(C)-p_a(C_i)-C_i \cdot (C_1+ \dots + C_{i-1}+C_{i+1}+\dots +C_n)+1 \ge p_a(C)-p_a(C_i) +1 \ge p_a(C)+1$ when $A \nleq C_i$. So, we can get that $A \le C_i$ for any $i$. We prove $p_a(C) \le \frac{p(p-1)m}{2}+1$ by induction on the length $m$ of the Yau sequence. When {\color{black}$m=1$}, we have $Z=Z_{min}$ and $Z \cdot A=-1$. Since the coefficient of $A$ in $Z$ is $1$ and $A \le C_i \le Z$, we have $A \cdot C_i \le A \cdot Z =-1$. Furthermore, since $\mathcal{O}_{C_j}(-C_i)$ is nef, we have $C_i \cdot C_j=A \cdot C_j +(C_i-A)\cdot C_j \le A \cdot C_j \le -1$ for $i<j$. Then $$p_a(C)-1=\sum_{i=1}^{n}(p_a(C_i)-1)+\sum_{i<j}C_i \cdot C_j \le n(p-1)-\frac{n(n-1)}{2} \le \frac{p(p-1)}{2}.$$ When $m>1$, assume the inequality holds for $m-1$, let us proceed to show {\color{black}that} it is true for $m$. We consider all the chain-connected component{\color{black}s} $C_i$ such that $A_1 \le C_i$ and assume that $n_0$ is the number of these cycles. Since the coefficient of $A_1$ in $Z$ is $1$ and $C_i \le Z$, we have $A_1 \cdot C_i \le A_1 \cdot Z=-1$ {\color{black}when $A_1 \le C_i$}. 
Furthermore, since $\mathcal{O}_{C_j}(-C_i)$ is nef, we have $C_i \cdot C_j = A_1 \cdot C_j+(C_i-A_1)\cdot C_j \le -1$ {\color{black}when $A_1 \le C_i$ and $A_1 \le C_j$($i<j$)}. Then $$ \begin{aligned} p_a(C)-1 &=(p_a(C-\sum_{A_1 \le C_i}C_i)-1)+(p_a(\sum_{A_1 \le C_i}C_i)-1)+(C-\sum_{A_1 \le C_i}C_i) \cdot (\sum_{A_1 \le C_i}C_i) \\ &\le (p_a(C-\sum_{A_1 \le C_i}C_i)-1)+\sum_{A_1 \le C_i}(p_a(C_i)-1)+\sum_{A_1 \le C_i,A_1 \le C_j,i<j}C_i \cdot C_j \\ &\le (p_a(C-\sum_{A_1 \le C_i}C_i)-1)+n_0(p-1)-\frac{n_0(n_0-1)}{2} \\ &\le (p_a(C-\sum_{A_1 \le C_i}C_i)-1)+\frac{p(p-1)}{2} .\end{aligned} $$ Notice that $A_1 \nleq C-\sum_{A_1 \le C_i}C_i$ and $D_2=Z-A_1$, we know that $\textup{supp}\ (C-\sum_{A_1 \le C_i}C_i) \subseteq \textup{supp}\ (D_2)$. Since $D_2$ is the fundamental cycle on its support and the length of the Yau sequence for $D_2$ is $m-1$, by the induction hypothesis, we have $p_a(C-\sum_{A_1 \le C_i}C_i) \le \frac{p(p-1)(m-1)}{2}+1$. It means that $p_a(C) \le p_a(C-\sum_{A_1 \le C_i}C_i)+\frac{p(p-1)}{2} \le \frac{p(p-1)m}{2}+1$. It follows from Lemma \ref{lem3.10} that we have $p_a(V,o)=\frac{p(p-1)m}{2}+1$. \end{proof} We want to get a similar formula for the higher degree case: $$p_a(V,o)=p_a( ([\frac{p-1}{d}]+1)Y )=\frac{dm}{2}(\frac{2p-2}{d}-[\frac{p-1}{d}])([\frac{p-1}{d}]+1)+1,$$ where $d=-Z^2$ and $[ a ] :=\max \{ n \in \mathbb Z | n \le a \}$ for real number $a$(Gauss symbol). However this formula doesn't hold in {\color{black}the} general case. For example, if $m=1$, $Z \neq Z_{min}$ and $p \ge d+1$, then $$p_a( ([\frac{p-1}{d}]+1)Y )=p_a( ([\frac{p-1}{d}]+1)Z) < p_a( [\frac{p-1}{d}]Z+Z_{min})\le p_a(V,o).$$ Notice that, {\color{black}the condition $Z^2=-1$ implies that $D_m=Z_{min}$,} so we need the restrictive condition $D_m=Z_{min}$ when $d>1$. \begin{lemma} \label{lem3.12} Let $(V,o)$ be a normal surface singularity of degree two or degree three, $Z$ is the fundamental cycle on the minimal resolution. Assume that $Z$ is essentially irreducible and $Z=Z_{min}$, then $$p_a(V,o)=p_a( ([\frac{p-1}{d}]+1)Z )=\frac{d}{2}(\frac{2p-2}{d}-[\frac{p-1}{d}])([\frac{p-1}{d}]+1)+1.$$ \end{lemma} \begin{proof} There exists a cycle $D$ such that $p_a(D)=p_a(V,o)$ and $\mathcal{O}_{Z}(-D)$ is nef since the negative definiteness of the intersection matrix. Assume $A$ is the irreducible component $A \le Z$ such that $A$ is not {\color{black}a} $(-2)$-curve and $k$ is the coefficient of cycle $A$ of $Z$. Assume $ak+b$ is the coefficient of cycle $A$ of $D$, where $a,b \in \mathbb Z$ and $0\le b <k$. Then we claim that $aZ \le D' <(a+1)Z$. If $D \nleq (a+1)Z$, let $B=\max (D-(a+1)Z,0)>0$, where $\max (D_1,D_2):= \sum \max (n_i,m_i)E_i$ for $D_1=\sum n_i E_i$ and $D_2=\sum m_i E_i$, then we have $D-B<(a+1)Z$ by the definition of $B$. For any irreducible components $C \le B$, we have the coefficient of $C$ of $D-B$ is equal to the coefficient of $C$ of $(a+1)Z$. Therefore, $C \cdot (D-B) \le C \cdot (a+1)Z$. Notice that $Z$ is essentially irreducible and $A \nleq B$ and $Z=Z_{min}$, we have {\color{black}that} $C$ is {\color{black}a} $(-2)$-curve and $C\cdot Z=0$. It means that $B$ consist{\color{black}s} of $(-2)$-curve and $B \cdot (D-B) \le 0$. Then we have $p_a(D-B)=p_a(D)-p_a(B)-B \cdot (D-B) +1 > p_a(D)$, contradicting the maximality of $p_a(D)$. If $aZ \nleq D$, let $B=\max (aZ-D,0)>0$, then we have $aZ \le D+B$ by the definition of $B$. There exists an irreducible component $C \le B$ such that $C \cdot B <0$ by the negative definiteness of the intersection matrix. 
Since $C \le B$, we know that the coefficient of $C$ of $D+B$ is equal to the coefficient of $C$ of $aZ$. Therefore, $C \cdot (D+B) \ge C\cdot aZ =0$. But $C \cdot B<0$, contradicting that $\mathcal{O}_{Z}(-D)$ is nef. So we get $aZ \le D' <(a+1)Z$. If $D\neq aZ$, assume $D=aZ+B$, where $0 < B <Z$. We claim that $p_a(B)-1 \le \frac{b}{k}(p_a(Z)-1)$. By {\color{black}the} Riemann-Roch theorem, $p_a(B)-1=\frac{1}{2}(B^2+B \cdot K)=\frac{1}{2}(B^2+bA \cdot K)$ and $p_a(Z)-1=\frac{1}{2}(Z^2+kA \cdot K)$. Hence the claim is equivalent to $B^2 \le \frac{b}{k}Z^2$. When degree two or degree three case, we know that $1 \le k \le 3$ by $Z^2=kA\cdot Z$. If $k=1$, the claim is obvious by $0\le b<k$. If $k=2$, it means that $d=2$ and $\frac{b}{k}Z^2 \ge -1$, the claim is obvious too. If $k=3$, it means that $d=3$ and $0 \le b \le 2$. The claim is obvious when $b \le 1$, so we only need to observe $b=2$ case. Since $B^2+2A \cdot K=2p_a(B)-2$, we know that $B^2$ is even number. Then $B^2 \le -2 =\frac{b}{k}Z^2$. By $p_a(D)=p_a(aZ)+p_a(B)+aZ \cdot B -1 \ge p_a(aZ)$, we have $p_a(V,o)=p_a(D)=p_a(aZ)+p_a(B)-1 +ab Z \cdot A \le p_a(aZ)+ \frac{b}{k}(p_a(Z)-1+ak Z \cdot A) \le p_a(aZ)+ (p_a(Z)-1+ak Z\cdot A)=p_a((a+1)Z)$. So we know that $\underset{i}{\max}\{ p_a(iZ)\} \le p_a(V,o)=p_a(D) \le p_a (a+1)Z$, hence $$p_a(V,o)=\underset{i}{\max}\{ p_a(iZ)\}=p_a( ([\frac{p-1}{d}]+1)Z )=\frac{d}{2}(\frac{2p-2}{d}-[\frac{p-1}{d}])([\frac{p-1}{d}]+1)+1.$$ \end{proof} When the singular point is of degree two, according to Proposition \ref{prop3.9}, we can generalized Lemma \ref{lem3.12} to $m>1$ case. \begin{theorem}[i.e. Theorem \ref{mt3}] \label{thm3.13} Let $(V,o)$ be a normal surface singularity of degree two with $p_f(V,o)>0$, $Z$ is the fundamental cycle on the minimal resolution. Assume that $Z$ is essentially irreducible and $D_m=Z_{min}$, then $$p_a(V,o)=p_a( ([\frac{p-1}{2}]+1)Y )=m(p-1-[\frac{p-1}{2}])[\frac{p+1}{2}]+1=[\frac{p^2}{4}]m+1.$$ \end{theorem} \begin{proof} According to Proposition \ref{prop3.9}, we only {\color{black}need} to prove that the theorem holds in each case. There exists a cycle $C$ such that $p_a(C)=p_a(V,o)$ and $\mathcal{O}_{Z}(-D)$ is nef since the negative definiteness of the intersection matrix. Assume $A$ is the irreducible component $A \le Z$ such that $A$ is not {\color{black}a} $(-2)$-curve and $k$ is the coefficient of cycle $A$ of $Z$. In case (1), let $C=C_1+ \dots + C_n$ be a chain-connected component decomposition, where $C_i$ is a chain-connected cycle and $\mathcal{O}_{C_j}(-C_i)$ is nef for $i<j$. As in the proof of Theorem \ref{thm3.11}, we have $A \le C_i$ for $1 \le i \le n$. Since $C_i$ is chain-connected, we have {\color{black}that} $\textup{supp}\ C_i$ is connected. With the notations as {\color{black}in} Theorem \ref{thm3.8}, we know the coefficient of cycle $B$ of $Z$ is $1$ for each cycle $B$ in $\Gam'$. We denote the number of the cycle $B$ that $B \le C_i$ and $B$ in the first $A_{n'}$(resp. the last $A_{n'}$) {\color{black}by} $a_i$(resp. $b_i$) for $1 \le i \le n$. Let $c_j$(resp. $d_j$) be the number of $i$ that $a_i=j$(resp. $b_i=j$) for $0 \le j \le n'$. 
Then we have $\overset{n'}{\underset{j=0}\sum}c_j=\overset{n'}{\underset{j=0}\sum}c_j=n$ and $p_a(C)-1=\sum_{i=1}^{n}(p_a(C_i)-1)+\sum_{i<j}C_i \cdot C_j \le n(p-1)-\overset{n'}{\underset{j=0}\sum}\frac{c_j^2-c_j+d_j^2-d_j}{2} \le np-{\underset{j=0}\sum}\frac{c_j^2+d_j^2}{2}.$ It is easy to see that $p_a(C)$ reaches its maximum value when $c_0=c_1=c_2=\dots=c_{n'}=d_0=d_1=\dots=d_{n'}=[\frac{p-1}{d}]$ and $m=n'+1$, so the theorem holds in case (1). In case (2) or case (3), there exists {\color{black}a} unique cycle $B \le Z$ such that $B \cdot Z=-1$. We consider the coefficient of cycle $B$ of $C$, denote {\color{black}by} $2a+b$($a,b \in \mathbb Z, 0\le b \le 1$). As in the proof of Lemma \ref{lem3.12}, $aZ \le C$. If $b=0$, we have $\mathcal{O}_{C-aZ}(-Z)$ is numerically {\color{black}trivial}, hence, $p_a(C)-1=(p_a(C-aZ)-1)+a(p-1)-\frac{a^2-a}{2} \le (p_a(C-aZ)-1)+[\frac{p^2}{4}]$ and $\textup{supp}\ (C-aZ) \subset \textup{supp}\ (D_2)$. If $b=1$, there exists an end $D$ in $\Gam'$ that connected with $B$. Denote the coefficient of cycle $D$ of $C$ {\color{black}by} {\color{black}$c$}, we have $c=a$ and $D\cdot C=-1$ since $-1\le D\cdot C=-2c+(2a+b)\le 0$, hence, $\mathcal{O}_{C-aZ-B-D}(-Z)$ is numerically {\color{black}trivial} and $p_a(C)-1 \le p_a(C-B-D)-1=(p_a(C-aZ-B-D)-1)+a(p-1)-\frac{a^2-a}{2} \le (p_a(C-aZ-B-D)-1)+[\frac{p^2}{4}]$. By induction, the theorem holds in case (2), case (3). In case (4), case (5), case (6), we have $m=2$. There exists {\color{black}a} unique cycle $B \le Z$ such that $B \cdot Z=-1$, denote the coefficient of cycle $B$ of $C$ {\color{black}by} $2a+b$($a,b \in \mathbb Z, 0\le b \le 1$). As in the proof of Lemma \ref{lem3.12}, $aZ \le C$. The theorem is trivial when $b=0$, so we only need to prove the theorem when $b=1$. We have $p_a(C)-1=(p_a(aZ)-1)+(p_a(C-aZ-B)-1)+(B\cdot(C-B)-1)$. Notice that $B \cdot (C-B)-1 \le 1$, $p_a(aZ)-1\le [\frac{p^2}{4}]$, and $p_a(C-aZ-B)-1\le [\frac{p^2}{4}]$, we have $p_a(C)-1 \le 2[\frac{p^2}{4}]+1$. Whatmore, $p_a(aZ)-1 = [\frac{p^2}{4}]$ means $a \ge [\frac{p}{2}]$, and $p_a(C-aZ-B)-1 = [\frac{p^2}{4}]$ means $[\frac{p}{2}]D_2 \le C-aZ-B \le [\frac{p+3}{2}]D_2$, then $B \cdot (C-B)-1=B\cdot aZ+B\cdot (C-B-aZ)-1<-[\frac{p}{2}]+[\frac{p+3}{2}]-1 \le 1$. So the theorem holds in case (4), case (5), case (6). In case (7), we have $m=3$. There exists {\color{black}a} unique cycle $B \le Z$ such that $B \cdot Z=-1$, denote the coefficient of cycle $B$ of $C$ {\color{black}by} $2a+b$($a,b \in \mathbb Z, 0\le b \le 1$). Similarly, we have $p_a(C)=p_a(aZ)+p_a(C-aZ-bB)-1\le p_a(C-aZ-bB)+ [\frac{p^2}{4}]$ when $b=0$, and $p_a(C)=p_a(aZ)+p_a(C-aZ-bB)-1+(B\cdot(C-B)-1)$ when $b=1$. Since $D_2$ is the same as case (4) with $k'=0$ and $n'=5$, we have $p_a(C-aZ-bB) \le 2[\frac{p^2}{4}]+1$, and $p_a(C-aZ-bB) = 2[\frac{p^2}{4}]+1$ means $C-aZ-bB \le [\frac{p+3}{2}](D_2+D_3)$. Hence, similarly as case (4)-(6), we have the theorem holds in case (7). \end{proof} \begin{thebibliography}{Gr-Ri} \bibitem{Ar} M.~Artin, \emph{On isolated rational singularities of surfaces}, Amer.\ J.\ Math. \textbf{88} (1966), 129--136. \bibitem{Gr} H.~Grauert, \emph{{\"U}ber {M}odifikationen und exzeptionelle analyticshe {M}engen}, Math.\ Ann. \textbf{146} (1962), 331--368. \bibitem{Ko2} K.~Konno, \emph{Chain-connected component decomposition of curves on surfaces}, J. Math. Soc. Japan, \textbf{62} (2010), 467–486. \bibitem{Ko1} K.~Konno, \emph{On the Yau cycle of a normal surface singularities}, Asian. J. Math. \textbf{16} (2) (2012), 279--298. 
\bibitem{La2} H.~Laufer, \emph{On rational singularities}, Amer.\ J.\ Math. \textbf{94} (1972), 597--608. \bibitem{Oku} T. Okuma, \emph{The geometric genus of splice quotient singularities}, Trans. Amer. Math. Soc. \textbf{360} (12) (2008), 6643--6659. \bibitem{Se} J.~P. Serre, \emph{Groupes alg{\'e}briques et corps de classes}, Actualit{\'e}s Scientifiques et Industrielles, no. 1264, Hermann, Paris, 1959. \bibitem{Tom} T. Tomaru, \emph{On Gorenstein surface singularities with fundamental genus $p_f\geq 2$ which satisfy some minimality conditions}, Pacific J. Math. \textbf{170} (1995), 271--295. \bibitem{Ya2} S. S.-T. Yau, \emph{On maximally elliptic singularities}, Trans. Amer. Math. Soc. \textbf{257} (1980), 269--329. \bibitem{YZZ2} S. S.-T. Yau, Q. W. Zhu, and H. Q. Zuo, \emph{Classification of weighted dual graphs consisting of $-2$ curves and exactly one $-3$ curve}, Izvestiya: Mathematics, \textbf{87} (5) (2023), 1078--1116. \end{thebibliography} \end{document}
2412.11508v1
http://arxiv.org/abs/2412.11508v1
Legendre theorems for certain overpartitions and overpartition pairs
\documentclass[reqno]{amsart} \title[Legendre theorems for overpartitions]{Legendre theorems for certain overpartitions and overpartition pairs} \usepackage{amssymb,amsmath,amsthm,epsfig,graphics,latexsym,hyperref} \theoremstyle{definition} \newtheorem{definition}{Definition} \theoremstyle{plain} \newtheorem{lemma} {Lemma} \newtheorem{proposition}{Proposition} \newtheorem{theorem} {Theorem} \newtheorem*{maintheorem} {Main Theorem} \newtheorem{corollary} {Corollary} \newtheorem{conjecture} {Conjecture} \theoremstyle{remark} \newtheorem{example}{{\bf Example}} \newtheorem{remark}{{\bf Remark}} \numberwithin{equation}{section} \newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)} \newcommand{\te}{\theta} \newcommand{\fr}{\frac} \newcommand{\lt}{\left(} \newcommand{\rt}{\right)} \DeclareMathOperator{\spt}{spt} \DeclareMathOperator{\podd}{pod} \newmuskip\pFqskip \pFqskip=6mu \mathchardef\pFcomma=\mathcode`, \newcommand*\pFq[5]{ \begingroup \begingroup\lccode`~=`, \lowercase{\endgroup\def~}{\pFcomma\mkern\pFqskip} \mathcode`,=\string"8000 {}_{#1}\phi_{#2}\biggl[\genfrac..{0pt}{}{#3}{#4};#5\biggr] \endgroup } \begin{document} \author[ G. E. Andrews and M. El Bachraoui]{George E. Andrews and Mohamed El Bachraoui} \address{The Pennsylvania State University, University Park, Pennsylvania 16802} \email{[email protected]} \address{Dept. Math. Sci, United Arab Emirates University, PO Box 15551, Al-Ain, UAE} \email{[email protected]} \keywords{integer partitions, overpartitions, overpartition pairs, $q$-series, Bailey pair.} \subjclass[2000]{11P81; 05A17; 11D09} \begin{abstract} Motivated by two Legendre-type formulas for overpartitions, we derive a variety of their companions as Legendre theorems for overpartition pairs. This leads to equalities of subclasses of overpartitions and overpartition pairs. \end{abstract} \date{\textit{\today}} \thanks{First author partially supported by Simons Foundation Grant 633284} \maketitle \section{Introduction}\label{sec introduction} Throughout $q$ denotes a complex number satisfying $|q|<1$, $m$ and $n$ denote nonnegative integers. We will use the following standard notation for $q$-series~\cite{Andrews, Gasper-Rahman} \[ (a;q)_0 = 1,\ (a;q)_n = \prod_{j=0}^{n-1} (1-aq^j),\quad (a;q)_{\infty} = \prod_{j=0}^{\infty} (1-aq^j), \] \[ (a_1,\ldots,a_k;q)_n = \prod_{j=1}^k (a_j;q)_n,\ \text{and\ } (a_1,\ldots,a_k;q)_{\infty} = \prod_{j=1}^k (a_j;q)_{\infty}. \] We will frequently use without reference the following basic facts of $q$-series~\cite{Andrews, Gasper-Rahman} \begin{equation}\label{basic-facts} (a;q)_{n+m} = (a;q)_{m} (aq^{m};q)_n,\ (a;q)_{\infty} = (a;q)_n (aq^n;q)_{\infty},\ (a;q)_{\infty} = (a;q^2)_{\infty}(aq;q^2)_{\infty}. \end{equation} Letting $p_e(\mathcal{D},n)$ (resp. $p_o(\mathcal{D},n)$) denote the number of partitions of $n$ into an even (resp. odd) number of distinct parts, it is easy to see that their difference has the following generating function~\cite{Andrews} \begin{equation}\label{Legendre-0}\sum_{n\geq 0} \big( p_e(\mathcal{D},n) - p_o(\mathcal{D},n) \big) q^n =(q;q)_\infty. 
\end{equation} Then combining~\eqref{Legendre-0} with Euler's pentagonal theorem~\cite{Andrews} \begin{equation}\label{Euler-1} (q;q)_\infty = \sum_{n=-\infty}^\infty (-1)^n q^{\fr{3n^2-n}{2}} =1+\sum_{n\geq 1}(-1)^n q^{\fr{3n^2+n}{2}} +\sum_{n\geq 1}(-1)^n q^{\fr{3n^2-n}{2}}, \end{equation} we get \begin{equation}\label{Legendre-1} \sum_{n\geq 0} \big( p_e(\mathcal{D},n) - p_o(\mathcal{D},n) \big) q^n =\sum_{n=-\infty}^\infty (-1)^n q^{\fr{3n^2-n}{2}} \end{equation} which is equivalent to Legendre's~\cite{Legendre} celebrated result \begin{equation}\label{Legendre} p_e(\mathcal{D},n) - p_o(\mathcal{D},n) =\begin{cases} (-1)^k, & \text{if\ }n=k(3k\pm 1)/2\ \text{for some\ } k\in\mathbb{Z}, \\ 0, & \text{otherwise.} \end{cases} \end{equation} Formulas of type~\eqref{Legendre-1} or~\eqref{Legendre} are referred to as Legendre theorems. An immediate consequence of~\eqref{Legendre} is the vanishing of the difference $p_e(\mathcal{D},n) - p_o(\mathcal{D},n)$, and hence the equality of $p_e(\mathcal{D},n)$ and $p_o(\mathcal{D},n)$, at any positive integer $n$ which is not a pentagonal number. An overpartition~\cite{Corteel-Lovejoy} of $n$ is a partition of $n$ where the first occurrence of each part may be overlined. The number of overpartitions of $n$, written $\overline{p}(n)$, has the following generating function \[ \sum_{n=0}^\infty \overline{p}(n) q^n = \fr{(-q;q)_\infty}{(q;q)_\infty}. \] Note that overlined parts in overpartitions are distinct by definition. We say that an overpartition has distinct parts if its non-overlined parts are distinct too. Letting $\overline{p}_d(n)$ denote the number of overpartitions of $n$ into distinct parts, it is easy to see that \[ \sum_{n=0}^\infty \overline{p}_d(n) q^n = (-q;q)_\infty^2. \] Recently, the first author and A. J. Yee~\cite{Andrews-Yee} made an extensive study of Legendre theorems for overpartitions, revealing that this is a topic filled with elegant possibilities. For instance, letting $TH(n)$ denote the number of overpartitions of $n$ in which there is both an overlined and a non-overlined largest part and letting $THE(n)$ denote the number of $TH$-overpartitions with an even number of parts minus the number with an odd number of parts, the authors proved that \begin{equation}\label{AndYee id} THE(n) =\begin{cases} (-1)^n(2k-1),\ \text{if\ } k^2<n< (k+1)^2\ \text{for some\ } k\in\mathbb{N}, \\ (-1)^n(2k-2),\ \text{if\ } n=k^2\ \text{for some\ } k\in\mathbb{N}. \end{cases} \end{equation} The second author in~\cite{Bachraoui 2023} studied some overpartitions into distinct parts and obtained Legendre theorems analogous to~\eqref{AndYee id} with the conditions involving squares $k^2$ replaced by conditions involving triangular numbers $\binom{k}{2}$. An overpartition pair~\cite{Lovejoy 2006} of $n$ is a pair of overpartitions $\pi=(\lambda_1, \lambda_2)$ where the sum of all of the parts is $n$. The number of all overpartition pairs of $n$, written $\overline{pp}(n)$, has the following generating function \[ \sum_{n=0}^\infty \overline{pp}(n) q^n = \fr{(-q;q)_\infty^2}{(q;q)_\infty^2}. \] We say that an overpartition pair $(\lambda_1, \lambda_2)$ has distinct parts if both $\lambda_1$ and $\lambda_2$ have distinct parts. 
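The generating functions above are easy to experiment with numerically. As a small illustrative sketch (not needed in what follows; the helper names and the truncation bound below are arbitrary choices made only for this illustration), the following Python lines expand $(-q;q)_\infty/(q;q)_\infty$ and $(-q;q)_\infty^2/(q;q)_\infty^2$ as truncated power series and print the first values of $\overline{p}(n)$ and $\overline{pp}(n)$; for instance $\overline{p}(2)=4$ and $\overline{pp}(1)=4$.
\begin{verbatim}
# Truncated q-series check of the overpartition generating functions.
N = 12  # work modulo q^N

def mul(f, g):
    # product of two truncated series given as coefficient lists
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] += a * b
    return h

def poch(sign, start):
    # truncated infinite product prod_{j>=0} (1 + sign*q^(start+j))
    f = [1] + [0] * (N - 1)
    k = start
    while k < N:
        fac = [1] + [0] * (N - 1)
        fac[k] = sign
        f = mul(f, fac)
        k += 1
    return f

def inv(f):
    # reciprocal of a truncated series with constant term 1
    g = [1] + [0] * (N - 1)
    for n in range(1, N):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

op = mul(poch(+1, 1), inv(poch(-1, 1)))    # (-q;q)_inf / (q;q)_inf
print("overpartitions:     ", op)          # 1, 2, 4, 8, 14, ...
print("overpartition pairs:", mul(op, op)) # 1, 4, 12, ...
\end{verbatim}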
While there is a rich literature dealing with results of Legendre type along with non-negativity results and inequalities for both partitions and overpartitions (see for instance~\cite{Andrews 2013, Bachraoui 2023-b, Berkovich-Grizzell 2014, Berkovich-Uncu 2019, Kim-Kim-Lovejoy 2020, Kim-Kim-Lovejoy 2021, Lovejoy 2005}), there seems not much to have been done in this direction for overpartition pairs. Besides, partition pairs have an established and beautiful theory. W. H. Burge in a seminal paper~\cite{Burge} studied partition pairs at length and revealed truly surprising theorems and rich possibilities for further research. In this paper, we hope to follow the natural direction suggested by joint consideration of~\cite{Burge},~\cite{Andrews-Yee}, and~\cite{Bachraoui 2023} as we shall examine Legendre type theorems related to overpartitions and pairs of overpartitions. Namely, we will prove the following main results which are particularly lovely examples of Legendre type theorems arising in this area. \begin{theorem}\label{thm FG'} There holds \[ \begin{split} (a)\ \sum_{n=1}^\infty F'(n) q^n &:= \sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty (q^{n};q)_{n} = \sum_{n\geq 1}(-1)^{n+1} q^{\fr{3n^2-n}{2}}, \\ (b)\ \sum_{n=1}^\infty G'(n) q^n &:= \sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty (-q^{n};q)_{n} = \sum_{n\geq 1}(-1)^{n+1} q^{\fr{3n^2-n}{2}} + 2\sum_{n\geq 0} q^{6n^2+7n+2}. \end{split} \] \end{theorem} \begin{theorem}\label{thm A'} We have \[ \sum_{n=1}^\infty A'(n) q^n :=\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty^3 (q^{n};q)_{n} =\sum_{n=1}^\infty (-1)^{n+1} n q^{\fr{n(n+1)} {2}}. \] \end{theorem} \begin{theorem}\label{thm A''} We have \[ \sum_{n=1}^\infty A''(n) q^n :=\sum_{n=1}^\infty q^{n}(-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^{n};q)_{n} =\sum_{n=1}^\infty n q^{\fr{n(n+1)} {2}}. \] \end{theorem} \begin{theorem}\label{thm B'} We have \[ \sum_{n=1}^\infty B'(n) q^n :=\sum_{n=1}^\infty q^{n}(-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^{n+1};q)_{n} = \Big(\sum_{n=0}^\infty q^{\fr{n(n+1)}{2}} \Big)^2 - \sum_{n=0}^\infty q^{\fr{n(n+1)}{2}}. \] \end{theorem} Our results include the following two companions of the above identities. \begin{theorem}\label{thm C'} Let \[ \sum_{n=1}^\infty C'(n) q^n :=\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty^2 (q^{n};q)_\infty (q^{n};q)_{n}. \] Then we have \[ \sum_{n=1}^\infty C'(n) q^{8n+2} = 2 \sum_{r,n=0}^\infty (-1)^{n+1} q^{(2r+3)^2 + (2r+3+2n)^2} + \sum_{n=0}^\infty (-1)^{n+1} q^{1+(2n+1)^2} \] \[ + \sum_{n=1}^\infty q^{2(2n+1)^2} \] \end{theorem} \begin{theorem}\label{thm D'} Let \[ \sum_{n=1}^\infty D'(n) q^n :=\sum_{n=1}^\infty q^{2n}(q^{n+1};q)_\infty^3 (q^{n};q)_{n}. \] Then we have \[ \sum_{n=1}^\infty D'(n) q^{8n+2} = 2 \sum_{r,n=0}^\infty (-1)^{n} q^{(2r+1)^2 + (2r+1+2n)^2} - \sum_{n=0}^\infty (-1)^{n}(n+1) q^{1+(2n+1)^2} \] \[ - \sum_{n=0}^\infty q^{2(2n+1)^2} \] \end{theorem} The rest of the paper is organized as follows. In Section~\ref{sec combinatorial} we give combinatorial interpretations in terms of overpartitions and pairs of overpartitions for our sequences. Sections~\ref{sec proof FG'}-\ref{sec proof D'} are devoted to the proofs of the main theorems. Finally in Section~\ref{sec conclusion} we close by some remarks and questions suggested by this work. \section{Combinatorial interpretations}\label{sec combinatorial} Throughout all of the overpartitions and overpartition pairs have a smallest part. We will write $s(\pi)$ to denote the smallest part of the partition $\pi$. 
The two sequences $F'(n)$ and $G'(n)$ in Theorem~\ref{thm FG'} have the following natural interpretations as overpartition differences. \begin{definition}\label{def FG'} For any positive integer $n$ let $F(n)$ denote the number of overpartitions $\pi$ of $n$ into distinct parts where $s(\pi)$ occurs overlined and the non-overlined parts are in the half-open interval $[s(\pi), 2 s(\pi))$. Let $F_0(n)$ (resp. $F_1(n)$) denote the number of overpartition counted by $F(n)$ in which the number of parts is even (resp. odd). By letting the term $q^n (q^{n+1};q)_{\infty}$ generate the overlined parts of $\pi$ and $(q^{n};q)_{n} $ generate its non-overlined parts, it is easy to check that \begin{equation}\label{gen F'} \sum_{n=1}^\infty \big(F_1(n)-F_0(n)\big) q^n =\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty (q^{n};q)_{n}, \end{equation} which by the definition of $F'(n)$ in Theorem~\ref{thm FG'} yields \[ F'(n) = F_1(n)-F_0(n). \] Similarly, let $G_0(n)$ (resp. $G_1(n)$) denote the number of overpartitions counted by $F(n)$ in which the number of overlined parts is even (resp. odd). By letting the term $q^n (q^{n+1};q)_{\infty}$ generate the overlined parts of $\pi$ and $(-q^{n};q)_{n} $ generate its non-overlined parts, it is easy to check that \begin{equation}\label{gen G'} \sum_{n=1}^\infty \big(G_1(n)-G_0(n)\big) q^n =\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty (-q^{n};q)_{n}. \end{equation} That is, \[ G'(n) = G_1(n)-G_0(n). \] \end{definition} For example, $F(4)=4$ counting \[ \bar{4}, \bar{3}+\bar{1}, \bar{2}+2, \bar{2}+\bar{1}+1. \] We have $F_0(4)=2$ counting $\bar{3}+\bar{1}$ and $\bar{2}+2$ and we have $F_1(4)=2$ counting $\bar{4}$ and $\bar{2}+\bar{1}+1$ and thus $F'(4)=0$. Furthermore, it is easily checked that $G'(4)=0$. This agrees with Theorem~\ref{thm FG'}. We now focus on the sequences $A'(n)$, $A''(n)$, $B'(n)$, $C'(n)$, and $D'(n)$ whose natural interpretations are differences of overpartition pairs. \begin{definition}\label{def A} For any positive integer $n$ let $A(n)$ denote the number of overpartition pairs $\pi=(\lambda_1, \lambda_2)$ of $n$ into distinct parts where $s(\pi)=s(\lambda_1)$ and $s(\lambda_1)$ occurs overlined, the non-overlined parts of $\lambda_1$ are $>s(\pi)$ and the non-overlined parts of $\lambda_2$ are in the half-open interval $[s(\pi), 2 s(\pi))$. Let $A_0(n)$ (resp. $A_1(n)$) denote the number of overpartition pairs counted by $A(n)$ in which the number of parts is even (resp. odd) and let \[ A'(n) = A_1(n)-A_0(n). \] By letting the term $q^n (q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_1$ and $(q^{n+1};q)_{\infty} $ generate its non-overlined parts and letting the term $(q^{n+1};q)_{\infty} $ generate the overlined parts of $\lambda_2$ and $(q^{n};q)_{n} $ generate its non-overlined parts, it is easy to check that \begin{equation}\label{gen A'} \sum_{n=1}^\infty A'(n) q^n =\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty^3 (q^{n};q)_{n}. \end{equation} Furthermore, let $A_2(n)$ (resp. $A_3(n)$) denote the number of overpartition pairs counted by $A(n)$ in which the number of non-overlined parts is even (resp. odd) and let \[ A''(n) = A_2(n)-A_3(n). \] We have \begin{equation}\label{gen A''} \sum_{n=1}^\infty A''(n) q^n =\sum_{n=1}^\infty q^{n}(-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^{n};q)_{n}. \end{equation} \end{definition} For example, $A(3)=4$ with relevant partition pairs \[ \big( (\bar{3}),\emptyset \big), \big( (\bar{2}, \bar{1}),\emptyset \big), \big( (2, \bar{1}),\emptyset \big), \ \text{and\ } \big( (\bar{1}),(\bar{2}) \big). 
\] We have $A_1(3)=1$ with the only overpartition pair $\big( (\bar{3}),\emptyset \big)$ and $A_0(3)=3$ counting \[ \big( (\bar{2}, \bar{1}),\emptyset \big), \big( (2, \bar{1}),\emptyset \big), \ \text{and\ } \big( (\bar{1}),(\bar{2}) \big) \] and thus $A'(3)= 1-3=-2$. On the other hand, we have $A_2(3)=3$ counting \[ \big( (\bar{3}),\emptyset \big), \big( (\bar{2}, \bar{1}),\emptyset \big), \ \text{and\ } \big( (\bar{1}),(\bar{2}) \big) \] and $A_3(3)=1$ with the only relevant pair $\big( (2, \bar{1}),\emptyset \big)$, and thus $A''(3)= 3-1=2$. \begin{definition}\label{def B} For any positive integer $n$ let $B(n)$ denote the number of overpartition pairs $\pi=(\lambda_1, \lambda_2)$ of $n$ into distinct parts where $s(\pi)=s(\lambda_1)$ is overlined and occurs exactly once in $\pi$ and the non-overlined parts in $\lambda_2$ are $\leq 2 s(\pi)$. Let $B_0(n)$ (resp. $B_1(n)$) denote the number of overpartition pairs counted by $B(n)$ in which the number of non-overlined parts is even (resp. odd) and let \[ B'(n) = B_0(n)-B_1(n). \] By letting the term $q^n (-q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_1$ and $(q^{n+1};q)_{\infty}$ generate its non-overlined parts and letting the term $(-q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_2$ and $(q^{n+1};q)_{n}$ generate its non-overlined parts, it is directly verified that \begin{equation}\label{gen B'} \sum_{n=1}^\infty B'(n) q^n =\sum_{n=1}^\infty q^{n}(-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^{n+1};q)_{n}. \end{equation} \end{definition} For example, $B(3)=5$ counting \[ \big( (\bar{3}),\emptyset \big), \big( (\bar{2},\bar{1}),\emptyset \big), \big( (2,\bar{1}),\emptyset \big), \big( (\bar{1}),(\bar{2}) \big), \big( (\bar{1}),(2) \big). \] We have $B_0(3)=3$ counting \[ \big( (\bar{3}),\emptyset \big), \big( (\bar{2},\bar{1}),\emptyset \big), \big( (\bar{1}),(\bar{2}) \big), \] and $B_1(3)=2$ counting \[ \big( (2,\bar{1}),\emptyset \big)\ \text{and\ } \big( (\bar{1}),(2) \big) \] and thus $B'(3)= 3-2=1$. \begin{definition}\label{def C} For any positive integer $n$ let $C(n)$ denote the number of overpartition pairs $\pi=(\lambda_1, \lambda_2)$ of $n$ into distinct parts where $s(\pi)=s(\lambda_1)$ occurs overlined, the overlined parts of $\lambda_2$ are $>s(\pi)$ and its non-overlined parts are $< 2 s(\pi)$. Let $C_0(n)$ (resp. $C_1(n)$) denote the number of overpartition pairs counted by $C(n)$ in which the number of parts is even (resp. odd) and let \[ C'(n) = C_1(n)-C_0(n). \] By letting the term $q^n (q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_1$ and $(q^{n};q)_{\infty}$ generate its non-overlined parts and letting the term $(q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_2$ and $(q^{n};q)_{n}$ generate its non-overlined parts, it is easily seen that \begin{equation}\label{gen C'} \sum_{n=1}^\infty C'(n) q^n =\sum_{n=1}^\infty q^{n}(q^{n+1};q)_\infty^2 (q^{n};q)_\infty (q^{n};q)_{n}. \end{equation} \end{definition} For example, $C(3)=5$ with relevant partition pairs \[ \big( (\bar{3}),\emptyset \big), \big( (\bar{2},\bar{1}),\emptyset \big), \big( (2,\bar{1}),\emptyset \big), \big( (\bar{1}),(\bar{2})\big), \ \text{and\ } \big( (\bar{1},1),(1) \big). \] We have $C_0(3)=3$ counting \[ \big( (\bar{2},\bar{1}),\emptyset \big), \big( (2,\bar{1}),\emptyset \big), \ \text{and\ } \big( (\bar{1}),(\bar{2})\big) \] and $C_1(3)=2$ counting \[ \big( (\bar{3}),\emptyset \big)\ \text{and\ } \big( (\bar{1},1),(1) \big), \] and thus $C'(3)= 2-3=-1$. 
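The values computed in the examples above can also be read off directly from the series~\eqref{gen B'} and~\eqref{gen C'}. A minimal Python sketch along the following lines (illustrative only; the truncation bound and helper names are arbitrary) expands both series and recovers, in particular, $B'(3)=1$ and $C'(3)=-1$.
\begin{verbatim}
# Expand the right-hand sides of (gen B') and (gen C') modulo q^N.
N = 12

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] += a * b
    return h

def poch(sign, start, nfac=None):
    # (1 + sign*q^start)(1 + sign*q^(start+1))...; nfac factors, or all if nfac is None
    f = [1] + [0] * (N - 1)
    j = 0
    while (nfac is None or j < nfac) and start + j < N:
        fac = [1] + [0] * (N - 1)
        fac[start + j] = sign
        f = mul(f, fac)
        j += 1
    return f

B = [0] * N
C = [0] * N
for n in range(1, N):
    qn = [0] * N
    qn[n] = 1
    # q^n (-q^{n+1};q)_inf^2 (q^{n+1};q)_inf (q^{n+1};q)_n
    tB = mul(mul(qn, mul(poch(+1, n + 1), poch(+1, n + 1))),
             mul(poch(-1, n + 1), poch(-1, n + 1, n)))
    # q^n (q^{n+1};q)_inf^2 (q^n;q)_inf (q^n;q)_n
    tC = mul(mul(qn, mul(poch(-1, n + 1), poch(-1, n + 1))),
             mul(poch(-1, n), poch(-1, n, n)))
    B = [x + y for x, y in zip(B, tB)]
    C = [x + y for x, y in zip(C, tC)]
print("B':", B)  # coefficient of q^3 is 1
print("C':", C)  # coefficient of q^3 is -1
\end{verbatim}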
Our last example of differences of overpartition pairs is a slight modification of Definition~\ref{def A}. In particular, the smallest part must appear in both components of the overpartition pair $\pi=(\lambda_1,\lambda_2)$, i.e., $s(\pi)=s(\lambda_1)=s(\lambda_2)$. \begin{definition}\label{def D} For any positive integer $n$ let $D(n)$ denote the number of overpartition pairs $\pi=(\lambda_1, \lambda_2)$ of $n$ into distinct parts where $s(\pi)=s(\lambda_1)=s(\lambda_2)$ occurs overlined, the non-overlined parts of $\lambda_1$ are $>s(\pi)$ and the non-overlined parts of $\lambda_2$ are in the interval $[s(\pi), 2 s(\pi))$. Let $D_0(n)$ (resp. $D_1(n)$) denote the number of overpartition pairs counted by $D(n)$ in which the number of parts is even (resp. odd) and let \[ D'(n) = D_0(n)-D_1(n). \] By letting the term $q^n (q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_1$ and $(q^{n+1};q)_{\infty}$ generate its non-overlined parts and letting the term $q^n(q^{n+1};q)_{\infty}$ generate the overlined parts of $\lambda_2$ and $(q^{n};q)_{n}$ generate its non-overlined parts, it is easy to check that \begin{equation}\label{gen D'} \sum_{n=1}^\infty D'(n) q^n =\sum_{n=1}^\infty q^{2n}(q^{n+1};q)_\infty^3 (q^{n};q)_{n}. \end{equation} \end{definition} For example, $D(4)=4$ with relevant partition pairs \[ \big( (\bar{2}),(\bar{2}) \big), \big( (\bar{2}, \bar{1}),(\bar{1}) \big), \big( (2, \bar{1}),(\bar{1}) \big), \ \text{and\ } \big( (\bar{1}),(\bar{2},\bar{1})\big). \] We have $D_0(4)=1$ counting $\big( (\bar{2}),(\bar{2}) \big)$ and $D_1(4)=3$ counting \[ \big( (\bar{2}, \bar{1}),(\bar{1}) \big), \big( (2, \bar{1}),(\bar{1}) \big), \ \text{and\ } \big( (\bar{1}),(\bar{2},\bar{1})\big), \] and thus $D'(4)= 1-3=-2$. Similarly, we have $D(5)=6$, enumerating \[ \big( (\bar{3},\bar{1}),(\bar{1}) \big), \big( (3,\bar{1}),(\bar{1}) \big), \big( (\bar{1}),(\bar{3},\bar{1}) \big), \big( (\bar{2},\bar{1}),(\bar{1},1) \big), \big( (2,\bar{1}),(\bar{1},1) \big), \ \text{and\ } \big( (\bar{1}),(\bar{2},\bar{1},1) \big) \] and we can easily see that $D_0(5) = D_1(5)=3$ and thus $D'(5)=0$. \section{Proof of Theorem~\ref{thm FG'}}\label{sec proof FG'} We will require Euler's famous formula~\cite{Andrews} \begin{equation} (q;q)_\infty = 1+\sum_{n\geq 1}(-1)^n q^{\fr{3n^2+n}{2}} +\sum_{n\geq 1}(-1)^n q^{\fr{3n^2-n}{2}} \end{equation} and the following formula which is found in Fine~\cite[(25.94)]{Fine} \begin{equation}\label{Fine} \sum_{n\geq 0}\fr{(aq^{n+1};q)_n t^n}{(q;q)_n} = (t;q)_\infty^{-1} \sum_{n\geq 0}\fr{(t;q)_n}{(q;q)_n} (-at)^n q^{\fr{3n^2+n}{2}}. \end{equation} As for part~(a), we have \[ \sum_{n\geq 1} q^n (q^{n+1};q)_\infty (q^n;q)_n =(q;q)_\infty \sum_{n\geq 1} \fr{q^n (q^n;q)_n}{(q;q)_n} \] \[ =(q;q)_\infty \sum_{n\geq 0} \fr{q^n (q^n;q)_n}{(q;q)_n} - (q;q)_\infty \] \[ =(q;q)_\infty (q;q)_\infty^{-1}\sum_{n\geq 0}(-1)^n q^{\fr{3n^2+n}{2}} - (q;q)_\infty \] \[ =\sum_{n\geq 0}(-1)^n q^{\fr{3n^2+n}{2}} - 1-\sum_{n\geq 1}(-1)^n q^{\fr{3n^2+n}{2}} -\sum_{n\geq 1}(-1)^n q^{\fr{3n^2-n}{2}} \] \[ =\sum_{n\geq 1}(-1)^{n+1} q^{\fr{3n^2-n}{2}}, \] where in the fourth identity we applied~\eqref{Fine} with $(a,t)=(q^{-1},q)$ and in the fifth identity we applied~\eqref{Euler-1}. 
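Before turning to part (b), we note that part (a) is easy to spot-check numerically: expanding the series on the left-hand side modulo a power of $q$ and comparing with $\sum_{n\geq 1}(-1)^{n+1}q^{\fr{3n^2-n}{2}}$ gives agreement term by term. A minimal sketch (illustrative only; the truncation bound and helper names are arbitrary) is the following.
\begin{verbatim}
# Spot-check of Theorem FG'(a): sum_{n>=1} q^n (q^{n+1};q)_inf (q^n;q)_n
# versus sum_{n>=1} (-1)^{n+1} q^{(3n^2-n)/2}, both modulo q^N.
N = 30

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] += a * b
    return h

def poch(sign, start, nfac=None):
    f = [1] + [0] * (N - 1)
    j = 0
    while (nfac is None or j < nfac) and start + j < N:
        fac = [1] + [0] * (N - 1)
        fac[start + j] = sign
        f = mul(f, fac)
        j += 1
    return f

lhs = [0] * N
for n in range(1, N):
    qn = [0] * N
    qn[n] = 1
    t = mul(qn, mul(poch(-1, n + 1), poch(-1, n, n)))
    lhs = [x + y for x, y in zip(lhs, t)]

rhs = [0] * N
n = 1
while (3 * n * n - n) // 2 < N:
    rhs[(3 * n * n - n) // 2] = (-1) ** (n + 1)
    n += 1

print(lhs == rhs)  # expected: True
\end{verbatim}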
Regarding part~(b), we have \[ \sum_{n\geq 1} q^n (q^{n+1};q)_\infty (-q^n;q)_n =(q;q)_\infty \sum_{n\geq 1} \fr{q^n (-q^n;q)_n}{(q;q)_n} \] \[ =(q;q)_\infty \sum_{n\geq 0} \fr{q^n (-q^n;q)_n}{(q;q)_n} - (q;q)_\infty \] \[ =(q;q)_\infty (q;q)_\infty^{-1}\sum_{n\geq 0} q^{\fr{3n^2+n}{2}} - (q;q)_\infty \] \[ =\sum_{n\geq 0} q^{\fr{3n^2+n}{2}} - 1-\sum_{n\geq 1}(-1)^n q^{\fr{3n^2+n}{2}} -\sum_{n\geq 1}(-1)^n q^{\fr{3n^2-n}{2}} \] \[ =2 \sum_{n\geq 0}q^{\fr{3(2n+1)^2 + 2n+1}{2}} - \sum_{n\geq 1} (-1)^n q^{\fr{3n^2-n}{2}} \] \[ =2\sum_{n\geq 0}q^{6n^2+7n+2} + \sum_{n\geq 1}(-1)^{n+1} q^{\fr{3n^2-n}{2}}, \] where in the fourth identity we applied~\eqref{Fine} with $(a,t)=(-q^{-1},q)$ and in the fifth identity we applied~\eqref{Euler-1}. \section{Proof of Theorem~\ref{thm A'}}\label{sec proof A'} We first recall the following definition~\cite{Gasper-Rahman} \[ \pFq{3}{2}{a,b,c}{d,e}{q, z} =\sum_{n=0}^\infty \fr{(a,b,c;q)_n}{(q,d,e;q)_n}z^n. \] We need Jacobi's identity~\cite{Andrews} \begin{equation}\label{Jacobi} \sum_{n=0}^\infty (-1)^n (2n+1) q^{\fr{n(n+1)}{2}} = (q;q)_\infty^3, \end{equation} the following identity of Gasper and Rahman~\cite[(III.10)]{Gasper-Rahman} \begin{equation}\label{GR-1} \pFq{3}{2}{a,b,c}{d,e}{q, \fr{de}{abc}} =\fr{(b,de/ab,de/bc;q)_\infty}{(d,e,de/abc;q)_\infty} \pFq{3}{2}{d/b,e/b,de/abc}{de/ab,de/bc}{q, b}, \end{equation} and the following identity of Andrews and Warnaar~\cite[p. 181]{Andrews-Warnaar 2007} \begin{equation}\label{AW} \sum_{n=0}^\infty \fr{(-zq;q^2)_n (-z^{-1}q;q^2)_n q^n}{(-q;q)_{2n+1}} =\sum_{n=0}^\infty \fr{1-z^{2n+1}}{1-z} z^{-n} q^{n(n+1)}. \end{equation} We have \[ \begin{split} \mathcal{A}_1(q) &:= \sum_{n=1}^\infty A'(n) q^n \\ &=\sum_{n=1}^\infty q^n (q^{n+1};q)_\infty^3 (q^n;q)_n \\ &= (q;q)_\infty^3 \sum_{n=1}^\infty \fr{q^n (q;q)_{2n-1}}{(q;q)_n^3 (q;q)_{n-1}}. \end{split} \] Hence \[ \begin{split} \mathcal{A}_1(q^2) &=(q^2;q^2)_\infty^3 \sum_{n=1}^\infty \fr{q^{2n} (q^2;q^2)_{2n-1}}{(q^2;q^2)_n^3(q^2;q^2)_{n-1}} \\ &=(q^2;q^2)_\infty^3 \sum_{n=1}^\infty \fr{q^{2n} (q^2;q^4)_{n}(q^4;q^4)_{n-1}}{(q^2;q^2)_n^3(q^2;q^2)_{n-1}} \\ &= (q^2;q^2)_\infty^3 \sum_{n=1}^\infty \fr{q^{2n} (-q^2;q^2)_{n-1}(q;q^2)_n (-q;q^2)_n}{(q^2;q^2)_n^3} \\ &= (q^2;q^2)_\infty^3 \Big( \fr{1}{2}\pFq{3}{2}{-1,q,-q}{q^2,q^2}{q^2, q^2} -\fr{1}{2} \Big). \end{split} \] Now apply~\eqref{GR-1} with $q\to q^2$, $a=-1$, $b=q$, $c=-q$, $d=e=q^2$ to deduce that \[ \begin{split} \pFq{3}{2}{-1,q,-q}{q^2,q^2}{q^2, q^2} &= \fr{(q,-q^3,-q^2;q^2)_\infty}{(q^2,q^2,q^2;q^2)_\infty}\pFq{3}{2}{q,q,q^2}{-q^3,-q^2}{q^2, q} \\ &=\fr{1}{(1+q) (q^2;q^2)_\infty^3}\pFq{3}{2}{q,q,q^2}{-q^3,-q^2}{q^2, q} \\ &= \fr{1}{(q^2;q^2)_\infty^3} \sum_{n=0}^\infty \fr{(q;q^2)_n^2 q^n}{(-q;q)_{2n+1}} \\ &= \fr{1}{(q^2;q^2)_\infty^3} \sum_{n=0}^\infty (-1)^n q^{n^2+n}, \end{split} \] where the last assertion follows by setting $z=-1$ in~\eqref{AW}. Thus we have proved that \[ (q^2;q^2)_\infty^3 + 2 \mathcal{A}_1(q^2) = \sum_{n=0}^\infty (-1)^n q^{n^2+n}. \] Then by Jacobi's identity~\eqref{Jacobi} \[ \begin{split} \mathcal{A}_1(q^2) &= \fr{1}{2}\Big(\sum_{n=0}^\infty (-1)^n q^{n^2+n} - \sum_{n=0}^\infty (-1)^n (2n+1)q^{n^2+n} \Big) \\ &= - \sum_{n=0}^\infty (-1)^n n q^{n^2+n}, \end{split} \] which is the desired formula with $q$ replaced by $q^2$. 
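Theorem~\ref{thm A'} admits the same kind of elementary numerical spot-check. The sketch below (illustrative only; the truncation bound and helper names are arbitrary) expands $\sum_{n\geq 1}q^{n}(q^{n+1};q)_\infty^3(q^{n};q)_{n}$ and compares it with $\sum_{n\geq 1}(-1)^{n+1}n\,q^{n(n+1)/2}$; in particular it recovers $A'(3)=-2$ from the example in Section~\ref{sec combinatorial}.
\begin{verbatim}
# Spot-check of Theorem A': sum_{n>=1} q^n (q^{n+1};q)_inf^3 (q^n;q)_n
# versus sum_{n>=1} (-1)^{n+1} n q^{n(n+1)/2}, both modulo q^N.
N = 25

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] += a * b
    return h

def poch(sign, start, nfac=None):
    f = [1] + [0] * (N - 1)
    j = 0
    while (nfac is None or j < nfac) and start + j < N:
        fac = [1] + [0] * (N - 1)
        fac[start + j] = sign
        f = mul(f, fac)
        j += 1
    return f

lhs = [0] * N
for n in range(1, N):
    qn = [0] * N
    qn[n] = 1
    p = poch(-1, n + 1)
    t = mul(qn, mul(mul(p, mul(p, p)), poch(-1, n, n)))
    lhs = [x + y for x, y in zip(lhs, t)]

rhs = [0] * N
n = 1
while n * (n + 1) // 2 < N:
    rhs[n * (n + 1) // 2] = (-1) ** (n + 1) * n
    n += 1

print(lhs == rhs)         # expected: True
print("A'(3) =", lhs[3])  # expected: -2
\end{verbatim}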
\section{Proof of Theorem~\ref{thm A''}}\label{sec proof A''} We will make an appeal to Gauss' identity \begin{equation}\label{Gauss} \psi(q)=(-q;q)_\infty^2 (q;q)_\infty = \fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} =\sum_{n=0}^\infty q^{\fr{n(n+1)}{2}}, \end{equation} and the following identity of Gasper and Rahman~\cite[(III.9)]{Gasper-Rahman} \begin{equation}\label{GR-2} \pFq{3}{2}{a,b,c}{d,e}{q, \fr{de}{abc}} =\fr{(e/a,de/bc;q)_\infty}{(e,de/abc;q)_\infty} \pFq{3}{2}{a,d/b,b/c}{d,de/bc}{q, \fr{e}{a}}. \end{equation} We have \[ \begin{split} \mathcal{A}_2(q) &:= \sum_{n=1}^\infty A''(n) q^n \\ &=\sum_{n=1}^\infty q^n (-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^n;q)_n \\ &= (-q;q)_\infty^2(q;q)_\infty \sum_{n=1}^\infty \fr{q^n (q;q)_{2n-1}}{(-q;q)_n^2 (q;q)_{n} (q;q)_{n-1}} \\ &=\psi(q) \sum_{n=1}^\infty \fr{q^n (q;q^2)_n (-q;q)_{n-1}}{(q^2;q^2)_n (-q;q)_n} \\ &=\psi(q) \Big( \fr{1}{2}\pFq{3}{2}{q^{\fr{1}{2}},-q^{\fr{1}{2}},-1}{-q,-q}{q, q} -\fr{1}{2} \Big) \end{split} \] Hence, replacing $q$ with $q^2$, \[ \psi(q^2) + 2\mathcal{A}_2(q^2) =\psi(q^2) \pFq{3}{2}{-q,-1,q}{-q^2,-q^2}{q^2, q^2}. \] Now apply~\eqref{GR-2} with $q\to q^2$, $a=-q$, $b=-1$, $c=q$, $d=e=-q^2$ to obtain \[ \begin{split} \pFq{3}{2}{-q,-1,q}{-q^2,-q^2}{q^2, q^2} &= \fr{(q,-q^3;q^2)_\infty}{(-q^2,q^2;q^2)_\infty} \pFq{3}{2}{-q,q^2,-q}{-q^2,-q^3}{q^2, q} \\ &= \fr{(q^2;q^4)_\infty}{(1+q) (-q^2,q^2;q^2)_\infty} \pFq{3}{2}{-q,q^2,-q}{-q^2,-q^3}{q^2, q} \\ &= \fr{1}{(1+q) \psi(q^2)}\ \pFq{3}{2}{q^2,-q,-q}{-q^2,-q^3}{q^2, q} \\ &=\fr{1}{\psi(q^2)} \sum_{n=0}^\infty \fr{(-q;q^2)_n^2 q^n}{(-q^2;q^2)_n (-q;q^2)_{n+1}} \\ &=\fr{1}{\psi(q^2)} \sum_{n=0}^\infty \fr{(-q;q^2)_n^2 q^n}{(-q;q)_{2n+1}} \\ &=\fr{1}{\psi(q^2)} \sum_{n=0}^\infty (2n+1) q^{n(n+1)}, \end{split} \] where the last formula follows from~\eqref{AW} with $z=1$. Thus we have proved that \[ \psi(q^2)+ 2\mathcal{A}_2(q^2)= \sum_{n=0}^\infty (2n+1) q^{n(n+1)} \] and therefore by~\eqref{Gauss}, \[ \mathcal{A}_2(q^2) = \fr{1}{2}\sum_{n=0}^\infty (2n+1) q^{n(n+1)}- \fr{1}{2} \sum_{n=0}^\infty q^{n(n+1)} =\sum_{n=0}^\infty n q^{n(n+1)}, \] which is the desired identity with $q^2$ replacing $q$. \section{Proof of Theorem~\ref{thm B'}}\label{sec proof B'} We will require Euler's identity~\cite{Andrews} \begin{equation}\label{Euler} (-q;q)_\infty = \fr{1}{(q;q^2)_\infty} \end{equation} and the $q$-binomial theorem \begin{equation}\label{q-binomial} \sum_{n=0}^\infty \fr{(a;q)_n}{(q;q)_n} z^n = \fr{(az;q)_\infty}{(z;q)_\infty}. 
\end{equation} Now from~\eqref{gen B'},~\eqref{basic-facts}, and~\eqref{Euler}, we find \[ \begin{split} \sum_{n=1}^\infty B'(n) q^n &= \sum_{n=1}^\infty q^{n}(-q^{n+1};q)_\infty^2 (q^{n+1};q)_\infty (q^{n+1};q)_{n} \\ &=(-q;q)_\infty \sum_{n=1}^\infty q^{n}\fr{(q^{2n+2};q^2)_\infty (q^{n+1};q)_{n}}{(-q;q)_n} \\ &=(-q;q)_\infty \sum_{n=1}^\infty q^{n}\fr{(q^{2n+2};q^2)_\infty (q;q)_{2n}}{(q^2;q^2)_{n}} \\ &= \fr{1}{(q;q^2)_\infty} \sum_{n=1}^\infty q^n (q^{2n+2};q^2)_\infty(q;q^2)_n \\ &= \sum_{n=1}^\infty q^n \fr{(q^{2n+2};q^2)_\infty}{(q^{2n+1};q^2)_\infty} \\ &= \sum_{n=0}^\infty q^n \fr{(q^{2n+2};q^2)_\infty}{(q^{2n+1};q^2)_\infty} - \fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} \\ &= \fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} \sum_{n=0}^\infty \fr{(q;q^2)_n}{(q^2;q^2)_n} q^n - \fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} \\ &= \Big(\fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} \Big)^2 - \fr{(q^2;q^2)_\infty}{(q;q^2)_\infty} \\ &= \Big(\sum_{n=0}^\infty q^{\fr{n(n+1)}{2}} \Big)^2 - \sum_{n=0}^\infty q^{\fr{n(n+1)}{2}}, \end{split} \] where the penultimate identity follows by~\eqref{q-binomial} and the last identity follows from~\eqref{Gauss}. \section{Proof of Theorem~\ref{thm C'}}\label{sec proof C'} Recall that a pair of sequences $(\alpha_n,\beta_n)_{n\geq 0}$ is called a Bailey pair relative to $a$ if~\cite{Andrews 1986, Warnaar 2009} \[ \beta_n = \sum_{r=0}^n \fr{\alpha_r}{(q)_{n-r}(aq)_{n+r}}. \] We shall require the following lemma which is an equivalent variant of Lovejoy~\cite[(1.12)]{Lovejoy 2012}. \begin{lemma}\label{thm ConjBailey} If $(\alpha_n,\beta_n)$ is a Bailey pair relative to $a^2$, then \[ (q;q)_{\infty} (-aq;q)_{\infty}^2 \sum_{n= 0}^\infty \fr{q^n (a;q)_n (a^2 q;q^2)_n}{(-a q;q)_n}\beta_n \] \begin{equation}\label{main id} = (1-a) \sum_{r,n=0}^\infty \fr{1+aq^{r+2n+1}}{1-aq^r} a^{2n} q^{2n^2+2nr+n+r} \alpha_r. \end{equation} \end{lemma} We want to apply Lemma~\ref{thm ConjBailey} to the following Bailey pair relative to $q^2$ which can be found in Lovejoy~\cite{Lovejoy 2012} \begin{equation}\label{BP-D} \alpha_n = q^{n^2+n}\fr{1-q^{2n+2}}{1-q^2},\quad \beta_n = \fr{1}{(q;q)_n (q^2; q)_n}. \end{equation} Then by Lemma~\ref{thm ConjBailey} with $a=-q$, we find \[ (1-q)(q;q)_\infty (q^2;q)_\infty^2 \sum_{n=0}^\infty\fr{q^n (-q;q)_n (q^3;q^2)_n}{(q^2;q)_n (q;q)_n (q^2;q)_n} \] \begin{equation}\label{BP-D 1} = \sum_{r,n=0}^\infty q^{2n^2+2nr+r^2+3n+2r} (1-q^{r+1})(1-q^{2n+r+2}). \end{equation} Then by~\eqref{basic-facts} and~\eqref{Euler} the left hand-side of~\eqref{BP-D 1} equals \[ \sum_{n=0}^\infty q^n (-q;q)_n (q;q^2)_{n+1} (q^{n+1};q)_\infty (q^{n+2};q)_\infty^2 \] \[ =(-q;q)_\infty \sum_{n=0}^\infty q^n\fr{(q;q^2)_{n+1} (q^{n+1};q)_\infty (q^{n+2};q)_\infty^2}{(-q^{n+1};q)_\infty} \] \[ =\fr{1}{(q;q^2)_\infty} \sum_{n=0}^\infty q^n\fr{(q;q^2)_{n+1} (q^{n+1};q)_\infty (q^{n+2};q)_\infty^2}{(-q^{n+1};q)_\infty} =\sum_{n=0}^\infty q^n\fr{(q^{n+1};q)_\infty (q^{n+2};q)_\infty^2}{(-q^{n+1};q)_\infty (q^{2n+3};q)_\infty} \] \[ =\sum_{n=0}^\infty q^n\fr{(q^{n+1};q)_\infty^2 (q^{n+2};q)_\infty^2}{(q^{2n+2};q^2)_\infty (q^{2n+3};q^2)_\infty} =\sum_{n=0}^\infty q^n\fr{(q^{n+1};q)_\infty^2 (q^{n+2};q)_\infty^2}{(q^{2n+2};q)_\infty} \] \[ =\sum_{n=0}^\infty q^n (q^{n+1};q)_\infty (q^{n+1};q)_{n+1}(q^{n+2};q)_\infty^2 =\fr{1}{q} \sum_{n=1}^\infty q^n (q^{n};q)_\infty (q^{n};q)_{n}(q^{n+1};q)_\infty^2 \] \[ =\fr{1}{q} \sum_{n=1}^\infty C'(n) q^n, \] where the last identity follows from~\eqref{gen C'}. 
Then~\eqref{BP-D 1} means that \[ \sum_{n=1}^\infty C'(n) q^n = \sum_{r,n=0}^\infty q^{2n^2+2nr+r^2+3n+2r+1} (1-q^{r+1})(1-q^{2n+r+2}) \] \[ = \sum_{r,n=0}^\infty (q^{2n^2+2nr+r^2+3n+2r+1} + q^{2n^2+2nr+r^2+5n+4r+4}) \] \[ - \sum_{r,n=0}^\infty (q^{2n^2+2nr+r^2+5n+3r+3} + q^{2n^2+2nr+r^2+3n+3r+2}) \] \[ =\sum_{n=0}^\infty q^{2n^2+3n+1} + 2 \sum_{r,n=0}^\infty q^{2n^2+2nr+r^2+5n+4r+4} \] \[ - \sum_{r,n=0}^\infty (q^{2n^2+2nr+r^2+5n+3r+3} + q^{2n^2+2nr+r^2+3n+3r+2}) \] \[ =\sum_{n=0}^\infty q^{\fr{1}{8} \big( (4n+3)^2-1\big)} +2 \sum_{r,n=0}^\infty q^{\fr{1}{8} \big( (2r+3)^2 +(2r +4n+5)^2-2\big)} \] \[ - \sum_{r,n=0}^\infty q^{\fr{1}{8} \big( (2r+3)^2 +(2r +4n+3)^2-2\big)} - \sum_{r,n=0}^\infty q^{\fr{1}{8} \big( (2r+1)^2 +(2r +4n+5)^2 -2\big)}. \] Hence \[ \sum_{n=1}^\infty C'(n) q^{8n+2} = \sum_{n=0}^\infty q^{(4n+3)^2+1} +2 \sum_{r,n=0}^\infty q^{(2r+3)^2 +(2r +4n+5)^2} \] \[ -\sum_{r,n=0}^\infty q^{(2r+3)^2 +(2r +4n+3)^2} -\sum_{r,n=0}^\infty q^{(2r+1)^2 +(2r +4n+5)^2} \] \[ =\sum_{n=0}^\infty q^{(4n+3)^2+1} + 2\sum_{r,n=0}^\infty (-1)^{n+1} q^{(2r+3)^2 + (2r+3+2n)^2} \] \[ +\sum_{r,n=0}^\infty q^{(2r+3)^2 + (2r+3+4n)^2} - \sum_{r,n=0}^\infty q^{(2r+1)^2 + (2r+1+4n+4)^2} \] \[ =2\sum_{r,n=0}^\infty (-1)^{n+1} q^{(2r+3)^2 + (2r+3+2n)^2} +\sum_{n=0}^\infty q^{(4n+3)^2+1} - \sum_{n=0}^\infty q^{(4n+1)^2+1} + \sum_{r=1}^\infty q^{2(2r+1)^2} \] \[ =2\sum_{r,n=0}^\infty (-1)^{n+1} q^{(2r+3)^2 + (2r+3+2n)^2} +\sum_{n=0}^\infty (-1)^{n+1}q^{(2n+1)^2+1} + \sum_{r=1}^\infty q^{2(2r+1)^2}, \] which is the desired formula. \section{Proof of Theorem~\ref{thm D'}}\label{sec proof D'} We have the following Bailey pair relative to $1$ which is due to Slater~\cite[H(1) p. 468]{Slater 1951} \begin{equation}\label{BP-B} \alpha_n = \begin{cases} 1, & \text{if $n=0$}, \\ q^{n^2} (q^n - q^{-n}), & \text{if $n\geq 1$} \end{cases}, \qquad \beta_n = \fr{q^n}{(q;q)_n^2}. 
\end{equation} Then by Lemma~\ref{thm ConjBailey} applied to~\eqref{BP-B} with $a=-1$, we get \[ (q;q)_\infty^3\sum_{n=0}^\infty \fr{q^{n} (-1;q)_n (q;q^2)_n}{(q;q)_n}\fr{q^n}{(q;q)_n^2} \] \[ =2\sum_{n=0}^\infty \fr{1-q^{2n+1}}{2} q^{2n^2+n} + 2 \sum_{\substack{n=0\\ r=1}}^\infty\fr{1-q^{r+2n+1}}{1+q^r}q^{2n^2+2nr+n+r} q^{r^2}(q^r-q^{-r}) \] \[ =\sum_{n=0}^\infty (1-q^{2n+1})q^{2n^2+n} +2\sum_{\substack{n=0\\ r=1}}^\infty \fr{1-q^{r+2n+1}}{1+q^r} q^{2n^2+2nr+r^2+n}(q^{2r}-1), \] or equivalently, \begin{equation}\label{help B-1} \fr{(q;q)_\infty^3}{2} + (q;q)_\infty^3 \sum_{n=1}^\infty \fr{q^{2n} (-q;q)_{n-1} (q;q^2)_n}{(q;q)_n^3} \end{equation} \[ = \fr{1}{2}\sum_{n=0}^\infty (1-q^{2n+1})q^{2n^2+n} - \sum_{\substack{n=0\\ r=1}}^\infty (1-q^r)(1-q^{r+2n+1}) q^{2n^2+2nr+r^2+n} \] \[ = \fr{1}{2}\sum_{n=0}^\infty (1-q^{2n+1})q^{2n^2+n} - \sum_{\substack{n=0\\ r=1}}^\infty \big( q^{2n^2+2nr+r^2+n} + q^{2n^2+2nr+r^2+3n+2r+1}\big) \] \[ +\sum_{\substack{n=0\\ r=1}}^\infty \big( q^{2n^2+2nr+r^2+n+r} + q^{2n^2+2nr+r^2+3n+r+1}\big) \] \[ = \fr{1}{2}\sum_{n=0}^\infty (1-q^{2n+1})q^{2n^2+n} -\sum_{\substack{n=0\\ r=0}}^\infty \big( q^{2n^2+2nr+r^2+n} + q^{2n^2+2nr+r^2+3n+2r+1}\big) \] \[ + \sum_{\substack{n=0\\ r=0}}^\infty \big( q^{2n^2+2nr+r^2+n+r} + q^{2n^2+2nr+r^2+3n+r+1}\big) \] \[ = \fr{1}{2}\sum_{n=0}^\infty (1-q^{2n+1})q^{2n^2+n} - \sum_{n=0}^\infty q^{2n^2 +n} - 2 \sum_{n,r=0}^\infty q^{2n^2+2nr+r^2+3n+2r+1} \] \[ + \sum_{n,r=0}^\infty \big( q^{2n^2+2nr+r^2+n+r} + q^{2n^2+2nr+r^2+3n+r+1}\big) \] \[ = -\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+1)^2-1 )} -\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+3)^2-1 )} -2 \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+4n+3)^2-2 )} \] \[ + \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+4n+1)^2-2 )} + \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r-1)^2 + (2r+4n+3)^2-2 )} \] \[ = -\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+1)^2-1 )} +\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+3)^2-1 )} - \sum_{n=0}^\infty q^{\fr{1}{8}( 2(2n+1)^2-2 )} \] \[ -2 \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+4n+3)^2-2 )} +2 \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+1+4n)^2-2 )} \] which by virtue of~\eqref{Jacobi} means \begin{equation}\label{help B-2} (q;q)_\infty^3 \sum_{n=1}^\infty \fr{q^{2n} (-q;q)_{n-1} (q;q^2)_n}{(q;q)_n^3} \end{equation} \[ =2 \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+1+4n)^2-2 )} -2 \sum_{n,r=0}^\infty q^{\fr{1}{8}( (2r+1)^2 + (2r+4n+3)^2-2 )} \] \[ -\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+1)^2-1 )} +\fr{1}{2}\sum_{n=0}^\infty q^{\fr{1}{8}( (4n+3)^2-1 )} - \sum_{n=0}^\infty q^{\fr{1}{8}( 2(2n+1)^2-2 )} \] \[ -\fr{1}{2}\sum_{n=0}^\infty (-1)^n (2n+1) q^{\fr{n(n+1)}{2}} \] \[ =2 \sum_{n,r=0}^\infty (-1)^nq^{\fr{1}{8}( (2r+1)^2 + (2r+1+2n)^2-2 )} -\fr{1}{2}\sum_{n=0}^\infty (-1)^n q^{\fr{1}{8}( (2n+1)^2-1 )} \] \[ -\sum_{n=0}^\infty q^{\fr{1}{8}( 2(2n+1)^2-2 )} -\fr{1}{2}\sum_{n=0}^\infty (-1)^n (2n+1) q^{\fr{n(n+1)}{2}}. \] Furthermore, by~\eqref{basic-facts} and~\eqref{gen D'}, the left hand-side of~\eqref{help B-2} equals \[ \sum_{n=1}^\infty q^{2n}\fr{(-q;q)_{n-1} (q;q)_{2n} (q^{n+1};q)_\infty^3}{(q^2;q^2)_n} = \sum_{n=1}^\infty q^{2n}\fr{(q^2;q^2)_{n-1} (q^n;q)_{n+1} (q^{n+1};q)_\infty^3}{(q^2;q^2)_n} \] \begin{equation}\label{help B-3} = \sum_{n=1}^\infty q^{2n} (q^n;q)_n (q^{n+1};q)_\infty^3 = \sum_{n=1}^\infty D'(n) q^n. 
\end{equation} Now combining~\eqref{help B-2} and~\eqref{help B-3}, we obtain \[ \sum_{n=1}^\infty D'(n) q^{8n+2} = 2 \sum_{n,r=0}^\infty (-1)^n q^{(2r+1)^2 + (2r+1+2n)^2} -\fr{1}{2}\sum_{n=0}^\infty (-1)^n q^{(2n+1)^2+1} \] \[ -\sum_{n=0}^\infty q^{2(2n+1)^2} - \fr{1}{2} \sum_{n=0}^\infty (-1)^n (2n+1) q^{(2n+1)^2 +1} \] \[ = 2 \sum_{n,r=0}^\infty (-1)^n q^{(2r+1)^2 + (2r+1+2n)^2} -\sum_{n=0}^\infty (-1)^n(n+1) q^{(2n+1)^2 +1} - \sum_{n=0}^\infty q^{2(2n+1)^2}, \] which is the desired formula. \section{Concluding remarks}\label{sec conclusion} {\bf 1.\ }Our proofs for the identities in Theorems~\ref{thm FG'}--\ref{thm D'} rely on the theory of $q$-series. It would be very interesting to find combinatorial proofs for these identities. {\bf 2.\ } We start with some corollaries of our main results. From Theorems~\ref{thm A'} and~\ref{thm A''} we get the following. \begin{corollary}\label{cor A'-A''} For every positive integer $n$ which is not a triangular number, we have $A'(n)=A''(n)=0$. \end{corollary} The following result is a direct consequence of Theorem~\ref{thm B'}. \begin{corollary}\label{cor B'} For every positive integer $n$ which is not the sum of two triangular numbers, we have $B'(n)=0$. \end{corollary} Furthermore, noting the elementary fact that an integer $n$ is the sum of two triangular numbers if and only if $8n+2$ is the sum of two squares, we get the following consequence of Theorem~\ref{thm C'}. \begin{corollary}\label{cor C'} For every positive integer $n$ which is not the sum of two triangular numbers, we have $C'(n)=0$. \end{corollary} By the note just before Corollary~\ref{cor C'}, we get the following consequence of Theorem~\ref{thm D'}. \begin{corollary}\label{cor D'} For every positive integer $n$ which is not the sum of two triangular numbers, we have $D'(n)=0$. \end{corollary} The vanishing of the differences in Corollary~\ref{cor A'-A''} obviously means that $A_0(n)=A_1(n)$ and $A_2(n)=A_3(n)$ for all non-triangular numbers $n$, and the vanishing of the differences in Corollaries~\ref{cor B'},~\ref{cor C'}, and~\ref{cor D'} respectively means that $B_0(n)=B_1(n)$, $C_0(n)=C_1(n)$, and $D_0(n)=D_1(n)$ for any positive integer which is not the sum of two triangular numbers. It is natural to ask for bijective proofs for these equalities. \bigskip \noindent{\bf Acknowledgment.} The authors are grateful to the referee for valuable comments and interesting suggestions which have improved the presentation and quality of the paper. \noindent{\bf Data Availability Statement.\ } Not applicable. \begin{thebibliography}{99} \bibitem{Andrews 1986} G. E. Andrews, \emph{$q$-Series: Their Development and Application in Analysis, Number Theory, Combinatorics, Physics and Computer Algebra}, C.B.M.S. Regional Conference Series in Math, No. 66, American Math. Soc., Providence (1986). \bibitem{Andrews} G.E. Andrews, \emph{The Theory of Partitions}, Cambridge University Press, Cambridge, 1998. \bibitem{Andrews 2013} G. E. Andrews, \emph{Difference of partition functions: the anti-telescoping method}, In From Fourier analysis and number theory to Radon transforms and geometry, Volume 28 of Dev. Math., pages 1--20. Springer, New York, 2013. \bibitem{Andrews-Warnaar 2007} G. E. Andrews and S. O. Warnaar, \emph{The Bailey transform and false theta functions}, Ramanujan J. 14 (2007), 173--188. \bibitem{Andrews-Yee} G. E. Andrews and A. J. Yee, \emph{Legendre theorems for subclasses of overpartitions}, J. Combin. Theory Ser. A 144 (2016), 16--36. \bibitem{Burge} W. H. Burge, \emph{Restricted partition pairs}, J. Combin. 
Theory Ser. A 63 (1993), 210--222. \bibitem{Corteel-Lovejoy} S. Corteel and J. Lovejoy, \emph{Overpartitions}, Trans. Amer. Math. Soc. 356 (2004), 1623--1635. \bibitem{Bachraoui 2023} M. El Bachraoui, \emph{Theorems of Legendre type for overpartitions}, Bull. Aust. Math. Soc. 109 (2024), 265--275. \bibitem{Bachraoui 2023-b} M. El Bachraoui, \emph{Positive differences of overpartitions with separated parts}, Ramanujan J. (2023) (Accepted). \bibitem{Berkovich-Grizzell 2014} A. Berkovich and K. Grizzell, \emph{A partition inequality involving products of two $q$-Pochhammer symbols}, Ramanujan 125, 25--39, Contemp. Math. 627, Amer. Math. Soc., Providence, RI, 2014. \bibitem{Berkovich-Uncu 2019} A. Berkovich and A. Uncu, \emph{Some elementary partition inequalities and their implications}, Ann. Comb. 23 (2019), 263--284. \bibitem{Fine} N. J. Fine, \emph{Basic hypergeometric series and applications}, Mathematical Surveys and Monographs, vol. 27, American Mathematical Society, Providence, RI, 1988. \bibitem{Gasper-Rahman} G. Gasper and M. Rahman, \emph{Basic Hypergeometric Series}, Cambridge University Press, 2004. \bibitem{Kim-Kim-Lovejoy 2020} B. Kim, E. Kim, and J. Lovejoy, \emph{Parity bias in partitions}, Eur. J. Combin. 89 (2020), 103159. \bibitem{Kim-Kim-Lovejoy 2021} B. Kim, E. Kim, and J. Lovejoy, \emph{On weighted overpartitions related to some $q$-series in Ramanujan's lost notebook}, Int. J. Number Theory 17 (2021), 603--619. \bibitem{Legendre} A. M. Legendre, \emph{Th\'eorie des nombres}, Firmin Didot fr\`eres, Paris, 1830. \bibitem{Lovejoy 2005} J. Lovejoy, \emph{Rank and conjugation for the Frobenius representation of an overpartition}, Ann. Comb. 9 (2005), 321--334. \bibitem{Lovejoy 2006} J. Lovejoy, \emph{Overpartition pairs}, Ann. Institut Fourier 56 (2006), 781--794. \bibitem{Lovejoy 2012} J. Lovejoy, \emph{Ramanujan-type partial theta identities and conjugate Bailey pairs}, Ramanujan J. 29 (2012), 51--67. \bibitem{Slater 1951} L. J. Slater, \emph{A new proof of Rogers's transformations of infinite series}, Proc. London Math. Soc. 53 (2) (1951), 460--475. \bibitem{Warnaar 2003} S. O. Warnaar, \emph{Partial theta functions. \emph{I}. Beyond the lost notebook}, Proc. London Math. Soc. 87 (2003), 363--395. \bibitem{Warnaar 2009} S. O. Warnaar, \emph{50 years of Bailey’s lemma}, In Algebraic Combinatorics and Applications, Springer, Berlin, Germany, (2009), 333--347. \end{thebibliography} \end{document}
2412.11485v2
http://arxiv.org/abs/2412.11485v2
Inexact Proximal Point Algorithms for Zeroth-Order Global Optimization
\documentclass[final,onefignum,onetabnum]{siamart220329} \usepackage{braket,amsfonts} \usepackage{graphicx,epstopdf} \usepackage{array} \usepackage[caption=false]{subfig} \usepackage{float} \usepackage{pgfplots} \newsiamthm{claim}{Claim} \newsiamremark{remark}{Remark} \newsiamremark{hypothesis}{Hypothesis} \crefname{hypothesis}{Hypothesis}{Hypotheses} \usepackage{algorithmic} \Crefname{ALC@unique}{Line}{Lines} \usepackage{amsopn} \DeclareMathOperator{\Range}{Range} \usepackage{xspace} \usepackage{bold-extra} \usepackage[most]{tcolorbox} \newcommand{\BibTeX}{{\scshape Bib}\TeX\xspace} \newcounter{example} \colorlet{texcscolor}{blue!50!black} \colorlet{texemcolor}{red!70!black} \colorlet{texpreamble}{red!70!black} \colorlet{codebackground}{black!25!white!25} \newcommand\bs{\symbol{'134}} \newcommand{\preamble}[2][\small]{\textcolor{texpreamble}{#1\texttt{#2 \emph{\% <- Preamble}}}} \lstdefinestyle{siamlatex}{ style=tcblatex, texcsstyle=*\color{texcscolor}, texcsstyle=[2]\color{texemcolor}, keywordstyle=[2]\color{texemcolor}, moretexcs={cref,Cref,maketitle,mathcal,text,headers,email,url}, } \tcbset{ colframe=black!75!white!75, coltitle=white, colback=codebackground, colbacklower=white, fonttitle=\bfseries, arc=0pt,outer arc=0pt, top=1pt,bottom=1pt,left=1mm,right=1mm,middle=1mm,boxsep=1mm, leftrule=0.3mm,rightrule=0.3mm,toprule=0.3mm,bottomrule=0.3mm, listing options={style=siamlatex} } \newtcblisting[use counter=example]{example}[2][]{ title={Example~\thetcbcounter: #2},#1} \newtcbinputlisting[use counter=example]{\examplefile}[3][]{ title={Example~\thetcbcounter: #2},listing file={#3},#1} \DeclareTotalTCBox{\code}{ v O{} } { fontupper=\ttfamily\color{black}, nobeforeafter, tcbox raise base, colback=codebackground,colframe=white, top=0pt,bottom=0pt,left=0mm,right=0mm, leftrule=0pt,rightrule=0pt,toprule=0mm,bottomrule=0mm, boxsep=0.5mm, #2}{#1} \patchcmd\newpage{\vfil}{}{}{} \flushbottom \usepackage{graphicx} \usepackage{amsmath,amssymb,makecell,multicol} \usepackage{booktabs} \usepackage[letterpaper,top=1in,bottom=1in,left=1in,right=1in,marginparwidth=1.75cm]{geometry} \usepackage{algorithm} \usepackage{xcolor} \usepackage{makecell} \usepackage{array} \usepackage{booktabs} \usepackage{longtable} \usepackage{lscape} \usepackage{pdflscape} \usepackage{geometry} \usepackage{hyperref} \usepackage{cleveref} \newcommand{\Hess}{\nabla^2} \renewcommand{\Re}{\mathbb{R}} \newcommand{\tu}{\tilde{u}} \newcommand{\tf}{\tilde{f}} \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\norm}[1]{\left\|#1\right\|} \renewcommand{\Set}[1]{\left\{#1\right\}} \DeclareMathOperator{\diag}{diag} \newcommand{\sgn}{\text{sgn}} \newcommand{\prox}{\operatorname{prox}} \newcommand{\argmin}{\operatorname*{argmin}} \newcommand{\bigO}{\mathcal O} \newcommand{\TT}{{\scriptscriptstyle TT}} \newcommand{\MC}{{\scriptscriptstyle MC}} \newcommand{\X}{\mathcal X} \newcommand{\Z}{\mathcal Z} \newcommand{\K}{\mathcal K} \newcommand{\Zd}{{\scriptscriptstyle \mathcal Z}} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\pr}[1]{\mathbb P\left( #1 \right)} \newcommand{\expect}[1]{\mathbb E\left[#1\right]} \newcommand{\Var}[1]{\textrm{Var} \left[#1\right]} \newcommand{\Cov}[1]{\textrm{Cov} \left[#1\right]} \newcommand{\xstar}{x^\ast} \newcommand{\Grad}{\nabla\!} \newcommand{\mx}[1]{\textcolor{magenta}{#1 --mx}} \newcommand{\fh}[1]{\textcolor{blue}{#1 --fh}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} 
\newtheorem{defn}[thm]{Definition} \newtheorem{prop}[thm]{Proposition} \newtheorem{assum}[thm]{Assumption} \title{Inexact Proximal Point Algorithms for\\ Zeroth-Order Global Optimization\thanks{Submitted to the editors December 15, 2024. \funding{M. Zhang and H. Schaeffer were supported in part by NSF 2331033 and NSF 2427558. F. Han and S. Osher were partially supported by AFOSR MURI FA9550-18-502 and ONR N00014-20-1-2787. Y. Chow was supported in part by NSF DMS-2409903 and ONR N000142412661.}}} \author{Minxin Zhang\thanks{Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90024, USA \\(\email{[email protected]}, \email{[email protected]}, \email{[email protected]}, \email{[email protected]}).} \and Fuqun Han\footnotemark[2] \and Yat Tin Chow \thanks{Department of Mathematics, University of California, Riverside, Riverside, CA 92521, USA (\email{[email protected]})} \and Stanley Osher\footnotemark[2] \and Hayden Schaeffer\footnotemark[2]} \headers{Inexact Proximal Point Algorithms for Zeroth-Order Global Optimization}{Zhang, Han, Chow, Osher, and Schaeffer} \ifpdf \hypersetup{ pdftitle={IPPGO} } \begin{document} \maketitle \begin{abstract} This work concerns the zeroth-order global minimization of continuous nonconvex functions with a unique global minimizer and possibly multiple local minimizers. We formulate a theoretical framework for inexact proximal point (IPP) methods for global optimization, establishing convergence guarantees under mild assumptions when either deterministic or stochastic estimates of proximal operators are used. The quadratic regularization in the proximal operator and the scaling effect of a parameter $\delta>0$ create a concentrated landscape of an associated Gibbs measure that is practically effective for sampling. The convergence of the expectation under the Gibbs measure as $\delta\to 0^+$ is established, and the convergence rate of $\bigO(\delta)$ is derived under additional assumptions. These results provide a theoretical foundation for evaluating proximal operators inexactly using sampling-based methods such as Monte Carlo (MC) integration. In addition, we propose a new approach based on tensor train (TT) approximation. This approach employs a randomized TT cross algorithm to efficiently construct a low-rank TT approximation of a discretized function using a small number of function evaluations, and we provide an error analysis for the TT-based estimation. We then propose two practical IPP algorithms, TT-IPP and MC-IPP. The TT-IPP algorithm leverages TT estimates of the proximal operators, while the MC-IPP algorithm employs MC integration to estimate the proximal operators. Both algorithms are designed to adaptively balance efficiency and accuracy in inexact evaluations of proximal operators. The effectiveness of the two algorithms is demonstrated through experiments on diverse benchmark functions and various applications. \end{abstract} \begin{keywords} global optimization, nonconvex optimization, zeroth-order optimization, derivative-free optimization, proximal operator, inexact proximal point algorithm, tensor train, cross approximation, Monte Carlo integration, Gibbs measure. 
\end{keywords} \begin{MSCcodes} 49M15, 65K05, 90C26, 90C56 \end{MSCcodes} \section{Introduction} Global optimization of nonconvex functions plays a crucial role in various scientific and engineering applications, including machine learning \cite{lecun2015deep}, signal processing \cite{saab2010sparse}, computational biology \cite{reali2017optimization} and computational physics \cite{gottvald1992global}. These problems are inherently challenging due to the presence of multiple local minimizers and the lack of gradient information in certain scenarios. Standard gradient-based optimization methods, such as gradient descent, often guarantee convergence only to a local minimizer. To address this, various global optimization techniques have been proposed, most of which are heuristic or have computational complexity that increases exponentially with the problem dimensionality \cite{locatelli2013global}. Zeroth-order optimization methods \cite{larson2019derivative}, also known as derivative-free optimization, solve problems solely through function evaluations, making them ideal for scenarios where gradient information is unavailable or expensive to compute. In this work, we propose new \emph{inexact proximal point algorithms} for zeroth-order global minimization of continuous nonconvex functions $f:\Re^d\to\Re$ with a unique global minimizer and possibly multiple local minimizers. Theoretical convergence guarantees are established under mild assumptions. Proximal point methods \cite{parikh2014proximal} are a class of optimization methods that iterate by evaluating the set-valued \emph{proximal operator}, defined as \begin{equation}\label{eq:prox} \prox_{tf}(x) := \argmin_{z\in\Re^d} \phi(z)\,, ~\textrm{ with } \, \phi(z) = f(z) + \frac{1}{2t} \norm{z-x}^2, \end{equation} for some $t>0.$ These methods are generally applied to functions for which proximal operators are either easy to compute or admit closed-form solutions. Convergence properties of proximal point methods have been studied extensively in the context of convex optimization \cite{moreau1965proximite, rockafellar1976monotone, solodov2001unified, bertsekas2011incremental, asi2019stochastic}. For nonconvex functions, variants of proximal point methods have been considered \cite{rockafellar2021advances, davis2022proximal, fukushima1981generalized, khanh2023inexact}, typically guaranteeing convergence to a critical point or a local minimizer. A proximal point method for global optimization is proposed in \cite{heaton2024global}, with a convergence guarantee to the global minimizer under the condition that the proximal operator is evaluated exactly at each iteration. For a general nonconvex function $f$, evaluating the exact proximal operator is computationally impractical. As a generalization of \cite{heaton2024global}, in Section~\ref{sec:IPP} we formulate a theoretical framework for inexact proximal point (IPP) methods that guarantees convergence to the unique global minimizer when either deterministic or stochastic estimates of proximal operators are used. Given the uniqueness of the global minimizer of $f$, the proximal operator \eqref{eq:prox} is single-valued under a wide range of conditions (see Proposition~\ref{prop:single_prox}), encompassing a broad class of nonconvex and nonsmooth functions. We consider zeroth-order methods for evaluating the single-valued proximal operator inexactly.
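As a point of reference for the definition in \eqref{eq:prox}, the following minimal Python sketch (the toy objective, search interval, and grid resolution are illustrative assumptions and not part of the algorithms developed below) evaluates a one-dimensional proximal point by brute-force grid minimization and illustrates how the parameter $t$ trades off proximity to $x$ against global descent.
\begin{verbatim}
import numpy as np

def prox_brute_force(f, x, t, lo=-5.0, hi=5.0, n=20001):
    """Brute-force 1D proximal point: minimize phi(z) = f(z) + (z - x)^2/(2t)
    over a uniform grid on [lo, hi] (illustration only)."""
    z = np.linspace(lo, hi, n)
    phi = f(z) + (z - x) ** 2 / (2.0 * t)
    return z[np.argmin(phi)]

# Toy nonconvex objective with a unique global minimizer at z = 0 (assumed example).
f = lambda z: z ** 2 / 10.0 + 1.0 - np.cos(3.0 * z)

x0 = 2.5
print(prox_brute_force(f, x0, t=0.1))   # small t: stays in the local basin near x0
print(prox_brute_force(f, x0, t=10.0))  # large t: moves close to the global minimizer
\end{verbatim}
For small $t$ the quadratic term dominates and the proximal point remains in the local basin around $x$, whereas for large $t$ it moves toward the global minimizer; this is the behavior that the adaptive choice of $t_k$ in the IPP methods below exploits.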
For a small $\delta>0$, it is well known that the \emph{Gibbs measure} associated with $\phi$ in \eqref{eq:prox}, defined by \begin{equation}\label{eq:gibbs} \rho_\delta(A):=\frac{\int_{A}\exp{(-\phi(z)/\delta)dz}}{\int_{\Re^d}\exp{(-\phi(z)/\delta)dz}} \quad\textrm{ for } A\in\mathcal B(\Re^d)\,, \end{equation} approximates the Dirac measure centered at $z^*:=\prox_{tf}(x)$. The convergence in distribution of Gibbs measures and the corresponding convergence rates were derived in \cite{Gibbs_asym, bras2022convergence}. Let $Z_\delta$ be a random variable with the probability distribution $\rho_\delta$. Then the expectation of $Z_\delta$ satisfies \begin{equation}\label{eq:expect} \expect{Z_\delta}= \frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}\approx z^*. \end{equation} In Section~\ref{sec:analysis}, we show that the expectation converges to $z^*$ as $\delta\to 0^+$ if $\phi$ is continuous at around $z^*$ and derive the convergence rate of $\bigO(\delta)$ for the case where $z^*$ is \emph{nondegenerate} and $\phi$ is twice continuously differentiable at around $z^*$. To obtain an estimate of $z^*$, it is impractical to directly compute \eqref{eq:expect} via numerical integration due to the exponential growth in the number of quadrature nodes with respect to the dimension $d$. Fortunately, the quadratic regularization in $\phi$ and the scaling effect of $\delta$ lead to a concentrated landscape of the Gibbs measure that is practically effective for sampling. As an illustration, Figure~\ref{fig:regula} compares the original landscape of a nonconvex function, the Schaffer function \cite{test_problems_2005}, with the landscapes of its several transformations. The minimizer of the original function $f$ in Figure~\ref{fig:1a} is turned into a maximizer in Figure~\ref{fig:1b}; the quadratic regularization reduces oscillations and increases density near the solution in Figure~\ref{fig:1c}; and the scaling by a small $\delta>0$ concentrates the density near the solution. We consider two sampling-based methods to efficiently estimate the proximal operator. A classical approach, as proposed in \cite{osher2023hamilton,heaton2024global,tibshirani2024laplace}, is to use the Monte Carlo (MC) integration to approximate \eqref{eq:expect} by computing a weighted average of Gaussian samples centered at $x$. The MC integration is easy to implement, but may suffer from high variance in practice. To alleviate this, variance reduction techniques, such as the exponentially weighted moving average (EWMA) \cite{ross2009probability}, can be incorporated. Additionally, in Section~\ref{sec:approx}, we propose a new approach based on tensor train (TT) approximation \cite{oseledets2011tensor,Thm_TT_accuracy}, which exploits the Sobolev smoothness of the integrands to improve the estimation accuracy. The TT approximation of tensors is a generalization of the truncated singular value decomposition (SVD) of matrices. To obtain a TT estimate of \eqref{eq:expect}, we employ the randomized TT cross algorithm \cite{oseledets2010tt, savostyanov2014quasioptimality} to construct a low-rank TT approximation of the discretized function $\exp{\left(-f/\delta\right)}$ over a mesh grid. The TT cross algorithm accurately computes the TT approximation using a small number of function evaluations, with computational cost depending linearly on the dimension $d$, and without storing the full tensor. 
This makes it particularly well-suited for functions with approximate low-rank structure, such as the one illustrated in Figure~\ref{fig:1d}. We provide an error analysis for the TT estimate of a proximal operator in Section~\ref{sec:error}. \begin{figure}[H] \centering \subfloat[]{\includegraphics[width=0.24\textwidth]{pix/fx0.eps} \label{fig:1a}} \hfill \subfloat[]{\includegraphics[width=0.24\textwidth]{pix/e-f0.eps} \label{fig:1b}} \hfill \subfloat[]{\includegraphics[width=0.24\textwidth]{pix/e-f2.eps} \label{fig:1c}} \hfill \subfloat[]{\includegraphics[width=0.24\textwidth]{pix/e-f4.eps} \label{fig:1d}} \caption{(a) The 2D Schaffer function $f(x)$; (b) $\exp(-f(x))$; (c) \( \exp\left(-\left(f(x) + \|x-z\|_2^2/(2t)\right)\right) \), with $t=6$ and $z=(1,1)$ an initial guess of the global minimizer; (d) \( \exp\left(-\left(f(x) + \|x-z\|_2^2/(2t)\right)/\delta\right) \), with $\delta=0.25.$} \label{fig:regula} \end{figure} We then propose two practical IPP algorithms: the TT-IPP algorithm, which leverages TT estimates of the proximal operators; and the MC-IPP algorithm, which employs MC integration to estimate the proximal operators. Both algorithms adaptively decrease the parameter $\delta$ based on whether a sufficient decrease in the function value is achieved relative to several previous iterates. To balance efficiency and accuracy in estimating the proximal operators, TT-IPP is designed to adaptively refine the mesh grid and update the associated TT approximation; and MC-IPP is designed to adaptively increase the sample size and update the EWMA parameter. Additionally, a warm start can be conveniently incorporated into the two IPP algorithms, with minimal computational cost equivalent to a single iteration of either TT-IPP or MC-IPP. The effectiveness of both algorithms is shown through experiments on diverse benchmark functions and applications. \subsection{Prior work}\label{sec:prior} In this work, we focus on zeroth-order global optimization for nonconvex functions. Numerous optimization methods in the literature fall into this category. A comprehensive survey of historical perspectives and recent advancements in global optimization is presented in \cite{locatelli2021global}. For example, pure random search (PRS) \cite{locatelli2013global} samples random points uniformly over the feasible region and selects the one with the smallest function value as the solution. Genetic Algorithms (GAs) \cite{holland1992genetic} start with a population of candidate solutions and evolve them by mimicking natural evolutionary processes. Differential evolution (DE) \cite{DE_original} iteratively refines candidate solutions by combining differences between randomly selected agents to explore the search space. Particle swarm optimization (PSO) \cite{PSO_first} is inspired by the social behavior of swarms, where particles explore the search space by updating their positions based on personal and collective best solutions. Simulated Annealing (SA) \cite{sa_original} is a probabilistic algorithm inspired by the annealing process in metallurgy, which explores the search space by accepting both improving and, with decreasing probability, worsening solutions to escape local optima. Most of the aformentioned methods are metaheuristic. Additionally, variants of random zeroth-order methods for convex optimization were proposed in \cite{nesterov2017random}, where an expectation similar to \eqref{eq:expect} was used to approximate gradients. 
More zeroth-order methods for convex optimization are described in \cite{nemirovskij1983problem}. \cite{jongeneel2024small} proposed a novel randomized gradient estimator for zeroth-order optimization of real analytic functions. Random zeroth-order methods for constrained minimization of subdifferentially polynomially bounded functions were proposed in \cite{lei2024subdifferentially}. Consensus-based optimization (CBO) \cite{CBO_first_2017, CBO_Analysis} is an emerging class of zeroth-order methods for global optimization that offers convergence guarantees to near-optimal solutions. In CBO, a swarm of agents collectively moves toward a consensus point while using stochastic perturbations to explore the search space. In \cite{gomes2023derivativefree}, a derivative-free global optimization algorithm was introduced for one-dimensional functions, providing certain convergence guarantees. The algorithm approximates gradient flow using MC integration and rejection sampling. A proximal point algorithm called HJ-MAD was proposed in \cite{heaton2024global} for global optimization of nonconvex functions. The method iteratively computes the proximal operator via MC integration; however, convergence is guaranteed only if the proximal operator is evaluated exactly at each iteration. In \cite{engquist2024adaptive}, a stochastic derivative-free algorithm was proposed, whose continuous limit, modeled by a stochastic differential equation, converges to the global minimizer as time approaches infinity. In recent years, tensor-train-based methods have gained attention for multidimensional optimization. TT-Opt \cite{sozykin2022ttopt} employs TT approximations to perform optimization on a predefined grid, which limits its ability to achieve high accuracy or explore large domains. PROTES \cite{batsheva2024protes} is another method for optimization on a predefined grid, utilizing probabilistic sampling from a low-parametric distribution represented in a TT format. In \cite{chertkov2022optimization}, a probabilistic algorithm was proposed for optimizing discretized functions in the TT format. TTGO \cite{shetty2023tensor} leverages the separable structure of TT approximations to perform conditional sampling for initializing local optimization solvers in robotics applications. In \cite{soley2021iterative}, an iterative power algorithm was proposed for global optimization, which performs power iterations on a special form of TT approximations, the quantics tensor train, to concentrate the density function near the global minimizer. \subsection{Contributions and organization} The contributions of this work are summarized below. \begin{enumerate} \item A theoretical framework for IPP methods is formulated for the global optimization of nonconvex functions, with convergence guarantees established under mild assumptions when either deterministic or stochastic estimates of proximal operators are used. \item Convergence of the expectation \eqref{eq:expect} under Gibbs measure as $\delta\to 0^+$ is established, and the convergence rate of $\bigO(\delta)$ is derived under additional assumptions. These results provide theoretical foundations for evaluating proximal operators inexactly using sampling-based methods. \item A TT-based approach is proposed for the estimation of proximal operators, accompanied by an error analysis. This approach leverages the Sobolev smoothness of functions to circumvent the \emph{curse-of-dimensionality}, a challenge faced by most existing global optimization methods. 
\item Building on the theoretical framework, two practical IPP algorithms, TT-IPP and MC-IPP, are developed. These algorithms are designed to adaptively balance efficiency and accuracy in evaluating inexact proximal operators. Their effectiveness is demonstrated through experiments on a diverse set of benchmark functions and various applications. \end{enumerate} The rest of the paper is organized as follows. The theoretical framework of IPP methods for global optimization is formulated and analyzed in Section~\ref{sec:IPP}. Convergence results of the expectation under the parameterized Gibbs measure are established in Section~\ref{sec:analysis}. Section~\ref{sec:approx} introduces the new TT-based approach for inexact evaluation of proximal operators, along with an error analysis. The two practical algorithms, TT-IPP and MC-IPP, are proposed in Section~\ref{sec:IPP_alg}. Experimental results on benchmark functions and practical applications are showcased in Section~\ref{sec:experiments}. Finally, Section~\ref{sec:conclud} concludes the paper, discussing limitations of this work and potential directions for future research. \section{Inexact proximal point methods for global optimization}\label{sec:IPP} In this section, we provide a theoretical framework for inexact proximal point (IPP) methods for the global optimization of nonconvex functions. An IPP method evaluates the proximal operator inexactly \[ y^k\approx \hat x^k\in\prox_{t_k f}(x^k) \] at each iterate $x^k$ for some $t_k>0,$ and the next iterate is given by \[ x^{k+1}=\alpha_k y^k+(1-\alpha_k) x^k\,, \] where $\alpha_k\in (0,1]$ is called a \emph{damping} parameter. Details of the IPP method under consideration are summarized in Algorithm~\ref{alg:IPP}. In particular, Lines~\ref{line:qk}--\ref{line:tk} update $t_k$ in the same manner as the exact proximal point method described in \cite{heaton2024global} to prevent the iterates from converging to a local minimizer that is not globally optimal. Specifically, when $x^{k+1}$ is close to the previous iterate $x^k,$ $t_k$ is increased to encourage global exploration; when $x^{k+1}$ is farther from $x^k$, $t_k$ is decreased to promote local exploration near the current iterate $x^{k+1}$; otherwise, $t_k$ remains unchanged. \begin{algorithm}[htp!] \caption{Inexact Proximal Point Method (IPP)}\label{alg:IPP} \begin{algorithmic}[1] \STATE \textbf{Input:} \small{$x^0\in\Re^d$, $0<\eta_-<1<\eta_+$, $0<\theta_1\le\theta_2<1$, $\bar\epsilon>0$, $0< \tau\le t_0\le T$, $\Set{\alpha_k}\subset [\alpha_{\min},\alpha_{\max}]\subset (0,1]$. } \FOR{$k=0,1,2,\cdots$} \STATE $y^k\approx \hat x^k\in\prox_{t_k f}(x^k)$ \STATE $x^{k+1}=\alpha_k y^k+(1-\alpha_k) x^k$ \STATE $q_{k} = \norm{x^{k+1}-x^k}/t_k$ \label{line:qk} \IF{$k\ge 1$ and $q_k\le\theta_1 q_{k-1}+\bar\epsilon$} \label{line:qk1} \STATE $t_{k+1} = \min\{\eta_+t_k, T\}$ \ELSIF{$k\ge 1$ and $q_k>\theta_2 q_{k-1}+\bar\epsilon$} \STATE $t_{k+1} = \max\{\eta_-t_k, \tau\}$ \ELSE \STATE $t_{k+1} = t_k$ \ENDIF \label{line:tk} \ENDFOR \STATE \textbf{Output:} last iterate $x^{k}$. \end{algorithmic} \end{algorithm} We include the definition of the subdifferential below. \begin{defn}\cite[Section 2]{davis2018subgradient}\label{def:subd} The \emph{subdifferential} of $f:\Re^d\to\Re$ at $x$, denoted by $\partial f(x),$ is the set of all $v\in\Re^d$ satisfying \[ f(y)\ge f(x)+\langle v,y-x\rangle+o(\norm{y-x}) ~\textrm{ as }~ y\to x. \] \end{defn} To establish the theoretical convergence of IPP methods, we make the following assumptions on $f$.
\begin{assum}\label{assum1} The function $f:\Re^d\to\Re$ is continuous and has a unique global minimizer $x^*\in\Re^d.$ \end{assum} \begin{assum}\label{assum2} The function $f:\Re^d\to\Re$ is $p$-coercive for some $p>0,$ i.e. \[ f(x)/\norm{x}^p\to\infty \textrm{ as } \norm{x}\to\infty. \] \end{assum} \begin{assum}\label{assum3} There exists $\mu>0$ such that $0\in\partial f(x)$ and $f(x)<f_{\min}+\mu$ imply $x=x^*.$ \end{assum} We remark that Assumptions~\ref{assum1} and \ref{assum2} are standard conditions ensuring the continuity of $f$ and the existence of the unique global minimizer, and Assumption~\ref{assum3} is a mild condition that excludes cases where $f$ exhibits extreme oscillations around $x^*$. \begin{lem} Under the Assumptions~\ref{assum1}--\ref{assum3}, for arbitrary $t>0$, $\prox_{tf}(x)$ is nonempty for all $x.$ \end{lem} \begin{proof} A similar result is stated in \cite[Lemma A1]{heaton2024global} under stronger assumptions, and the same proof can be applied here. \end{proof} The following theorem shows the convergence of Algorithm~\ref{alg:IPP} to the global minimizer when the proximal operators are evaluated inexactly with asymptotic accuracy. The condition \eqref{eq:prox_error} was also used in \cite{rockafellar1976monotone} to establish the convergence of a classical IPP method for convex optimization and in \cite{rockafellar2021advances} to show its local convergence to a stationary point for nonconvex functions using monotone operator theory. \begin{thm}\label{thm:IPP} Suppose Assumptions~\ref{assum1} -- \ref{assum3} hold, the parameter $\alpha_{\min}>1-\eta_-$, and the choice of $T>0$ is sufficiently large (see \eqref{eq:tbound}) in Algorithm~\ref{alg:IPP}. Let $\Set{x^k}_{k\ge 0}$ be the sequence of iterates generated by Algorithm~\ref{alg:IPP}. If the error in estimating the proximal point operator satisfies \begin{equation}\label{eq:prox_error} \sum_{k=0}^{\infty} \norm{y^k - \hat x^k}^2<\infty\,, \end{equation} where $\hat x^k\in \prox_{t_k f}(x^k)$, then $\Set{x^k}_{k\ge 0}$ converges to $x^*$ as $k\to\infty.$ \end{thm} \begin{proof} First, we show that $t_k = T$ for all $k$ sufficiently large. Indeed, since $\hat x^{k+1}\in \prox_{t_{k+1} f}(x^{k+1})$, \[ f(\hat x^{k+1})+\frac{1}{2t_{k+1}}\norm{\hat x^{k+1}-x^{k+1}}^2 \le f(\hat x^{k})+\frac{1}{2t_{k+1}}\norm{\hat x^{k}-x^{k+1}}^2, \] which implies \begin{align}\label{eq:xdiff} f(\hat x^{k+1})-f(\hat x^{k})\nonumber & \le \frac{1}{2t_{k+1}}\left(\norm{\hat x^{k}-x^{k+1}}^2-\norm{\hat x^{k+1}-x^{k+1}}^2\right)\nonumber \\ = &\frac{1}{2t_{k+1}}\left(\norm{\alpha_k(\hat x^{k}- y^k)+(1-\alpha_k)(\hat x^k- x^k)}^2-\norm{\hat x^{k+1}-x^{k+1}}^2\right)\nonumber\\ \le & \frac{1}{2t_{k+1}}\left(\alpha_k \norm{\hat x^{k}- y^k}^2+(1-\alpha_k)\norm{\hat x^k- x^k}^2-\norm{\hat x^{k+1}-x^{k+1}}^2\right). \end{align} Thus, \begin{align}\label{eq:boundfk} -\infty<f(x^*)\le & \limsup_{k\to\infty} f(\hat x^{k+1}) \nonumber\\ \le & f(\hat x^{0})+\sum_{k=0}^\infty \frac{\alpha_k \norm{\hat x^{k}- y^k}^2+(1-\alpha_k)\norm{\hat x^k- x^k}^2-\norm{\hat x^{k+1}-x^{k+1}}^2}{2t_{k+1}}\nonumber\\ \le & f(\hat x^{0})+\frac{1}{2\tau}\sum_{k=0}^\infty\norm{\hat x^{k}- y^k}^2+ \frac{1-\alpha_{\min}}{\eta_-}\sum_{k=0}^\infty\frac{\norm{\hat x^k- x^k}^2}{2t_{k}}-\sum_{k=1}^\infty\frac{\norm{\hat x^k- x^k}^2}{2t_{k}}\nonumber\\ \le & f(\hat x^{0})+\frac{1}{2\tau}\sum_{k=0}^\infty\norm{\hat x^{k}- y^k}^2+ \frac{1-\alpha_{\min}-\eta_-}{2T\eta_-}\sum_{k=0}^\infty\norm{\hat x^k- x^k}^2. 
\end{align} By the assumptions that $\alpha_{\min}>1-\eta_-$ and \eqref{eq:prox_error}, it follows that \begin{equation}\label{eq:xk_approx} \sum_{k=0}^\infty\norm{\hat x^{k}-x^{k}}^2<\infty\,, \end{equation} which implies $\lim\limits_{k\to\infty} \norm{\hat x^{k}-x^{k}} = 0.$ By the triangle inequality, \[ 0\le \lim_{k\to\infty}\norm{x^{k+1}-x^{k}}\le\lim_{k\to\infty}\norm{\hat x^{k}-x^{k}}+\lim_{k\to\infty}\norm{\hat x^{k}-x^{k+1}}=0\,. \] Therefore, $\lim\limits_{k\to\infty}\norm{x^{k+1}-x^{k}}=0,$ which implies \[ \lim_{k\to\infty} \frac{q_k}{\theta_1 q_{k-1}+\bar\epsilon}=\lim_{k\to\infty} \frac{q_k}{\bar\epsilon} = 0\,, \] where $q_k = \norm{x^{k+1}-x^k}/t_k$ as defined in Line~\ref{line:qk} of Algorithm~\ref{alg:IPP}. Hence, the condition in Line~\ref{line:qk1} of Algorithm~\ref{alg:IPP} is satisfied for all sufficiently large \( k \), and \( t_k \) reaches the upper bound \( T \) within a finite number of iterations. Next, we show that $\Set{\hat x^k}_{k\ge 0}$ and $\Set{x^k}_{k\ge 0}$ are bounded. Indeed, \eqref{eq:boundfk} implies that $\Set{f(\hat x^{k})}$ is bounded. If $\Set{\hat x^k}$ is unbounded, then there exists a subsequence $\Set{\hat x^{k_j}}$ such that $\lim_{j\to\infty}\norm{\hat x^{k_j}}=\infty,$ which contradicts Assumption~\ref{assum2}. Therefore, $\Set{\hat x^k}$ is bounded. Combining this with \eqref{eq:xk_approx}, it follows that $\Set{x^k}_{k\ge 0}$ is bounded as well. Thus, there exists a constant $M>0$ such that, for all $k$, \begin{equation}\label{eq:boundM} \max\left\{\norm{\hat x^k}^2, \norm{x^k}^2\right\}\le M. \end{equation} Since $\Set{\hat x^k}$ is bounded, by the Bolzano-Weierstrass theorem, there exists a convergent subsequence, $\Set{\hat x^{k_j}}$, with limit $x^{\infty}.$ By \eqref{eq:xk_approx}, \[ 0\le \lim_{j\to\infty} \norm{x^{k_j}-x^{\infty}}\le \lim_{j\to\infty} \left(\norm{x^{k_j}-\hat x^{k_j}} +\norm{\hat x^{k_j}-x^{\infty}}\right)=0\,, \] i.e. $\lim\limits_{j\to\infty}x^{k_j} = x^{\infty}.$ Next, we show that $0\in\partial f(x^\infty).$ By the definition of the Moreau envelope, we have \[ f(x^\infty)\ge u(x^\infty,T):= \min_{z} f(z) + \frac{1}{2T} \norm{z-x^\infty}^2. \] On the other hand, \begin{align*} f(x^\infty) = &\lim_{j\to\infty} f(\hat x^{k_j})\le\lim_{j\to\infty} u(x^{k_j},T)\\ \le& \lim_{j\to\infty} f(\hat x^\infty) + \frac{1}{2T}\norm{\hat x^\infty -x^{k_j}}^2\\ \le & \lim_{j\to\infty} u(x^\infty,T)+\frac{1}{2T}\left(\norm{\hat x^\infty -x^{k_j}}^2-\norm{\hat x^\infty -x^{\infty}}^2\right) \le u(x^\infty,T), \end{align*} where $\hat x^\infty\in \prox_{T f}(x^\infty).$ Hence, $f(x^\infty)= u(x^\infty,T),$ which, by Definition~\ref{def:subd}, implies $0\in\partial f(x^\infty)$. If $T>0$ is sufficiently large such that \begin{equation}\label{eq:tbound} T>M/\mu \end{equation} for $M$ given in \eqref{eq:boundM} and $\mu$ given in Assumption~\ref{assum3}, then \[ f(x^\infty) = u(x^\infty,T)\le f(x^*)+\frac{1}{2T}\norm{x^*-x^\infty}^2< f(x^*)+\mu\,. \] It follows from Assumption~\ref{assum3} that $x^\infty=x^*.$ It remains to show that the whole sequence $\Set{x^k}_{k\ge 0}$ converges to $x^*.$ Since $\sum\limits_{k=0}^{\infty} \norm{x^{k+1} - \hat x^k}^2<\infty$ and $\sum\limits_{k=0}^\infty\norm{\hat x^{k}-x^{k}}^2<\infty$, for arbitrary $\kappa>0,$ there exists a constant $N_\kappa$ such that for all integers $k>l>N_\kappa,$ \[ \sum_{j=l}^{k-1} \norm{x^{j+1} - \hat x^j}^2<\kappa ~\textrm{ and } \sum_{j=l}^{k-1} \norm{x^{j+1} - \hat x^{j+1}}^2<\kappa \,.
\] By \eqref{eq:xdiff}, for $k>l>N_\kappa,$ \[ f(\hat x^{k})-f(\hat x^{l})\le \frac{1}{2\tau}\sum_{j=l}^{k-1}\norm{\hat x^{j}-x^{j+1}}^2-\frac{1}{2T}\sum_{j=l}^{k-1}\norm{\hat x^{j+1}-x^{j+1}}^2 <\frac{\kappa}{2\tau}. \] Since $\kappa>0$ is arbitrary, $\lim\limits_{k\to\infty} f(\hat x^k) = \lim\limits_{j\to\infty} f(\hat x^{k_j}) = f(x^*)\,.$ By the continuity of $f$ and the uniqueness of $x^*,$ $\lim\limits_{k\to\infty}\norm{\hat x^k-x^*}=0.$ Hence, by \eqref{eq:xk_approx}, \[ 0\le \lim_{k\to\infty} \norm{x^{k}-x^{*}}\le \lim_{k\to\infty} \left(\norm{x^{k}-\hat x^{k}} +\norm{\hat x^{k}-x^{*}}\right)=0, \] i.e. $\lim\limits_{k\to\infty} x^k=x^*$. The proof is thus completed. \end{proof} The following theorem establishes the convergence of IPP to the global minimizer when the iterates $\Set{x^k}_{k\ge 0}$ are based on stochastic estimates of the proximal operators. \begin{thm}\label{thm:sIPP} Suppose Assumptions~\ref{assum1} -- \ref{assum3} hold, $\alpha_{\min}>1-\eta_-$, and the choice of $\,T>0$ is sufficiently large (see \eqref{eq:ptbound}) in Algorithm~\ref{alg:IPP}. Let $\Set{x^k}_{k\ge 0}$ be a stochastic sequence of iterates generated by Algorithm~\ref{alg:IPP}. If there exist constants $\Set{\epsilon_k}$ with $\sum\limits_{k=0}^\infty\epsilon_k<\infty$ and probabilities $\Set{p_k}$ with $\sum\limits_{k=0}^\infty p_k<\infty$ such that \begin{equation}\label{eq:pprox_error} \pr{\norm{y^k-\hat x^k}^2>\epsilon_k}\le p_k, \end{equation} where $\hat x^k\in \prox_{t_k f}(x^k)$, then $\Set{x^k}_{k\ge 0}$ converges to $x^*$ almost surely as $k\to\infty.$ \end{thm} \begin{proof} First, we show that $\lim\limits_{k\to\infty}t_k = T$ almost surely. By \eqref{eq:pprox_error}, \[ \sum_{k=0}^\infty\pr{\norm{y^k-\hat x^k}^2>\epsilon_k}<\infty. \] The Borel–Cantelli lemma implies that $\pr{\norm{y^k-\hat x^k}^2>\epsilon_k ~\textrm{ infinitely often}}=0$, i.e. \[ \pr{\norm{y^k-\hat x^k}^2\le \epsilon_k ~\textrm{ for all sufficiently large } k}=1\,. \] Therefore, $\pr{\sum\limits_{k=1}^\infty \norm{y^k-\hat x^k}^2<\infty}=1.$ Then by \eqref{eq:boundfk}, \begin{equation}\label{eq:xk} \pr{\sum_{k=1}^\infty \norm{x^{k}-\hat x^k}^2<\infty}=1\,. \end{equation} By the triangle inequality, $\pr{\lim\limits_{k\to\infty} \norm{x^{k+1}-x^k}^2=0}=1$, which, by Lines~\ref{line:qk}--\ref{line:tk} of Algorithm~\ref{alg:IPP}, implies that $\lim\limits_{k\to\infty}t_k = T$ almost surely. Next, we show that $\Set{\norm{\hat x^k}^2, \norm{x^k}^2}_{k\ge 0}$ is uniformly bounded by some constant $M$ with probability $1$. Indeed, \eqref{eq:boundfk} implies that $\Set{f(\hat x^{k})}$ is uniformly bounded by some constant $M'$ with probability $1$. Hence, by Assumption~\ref{assum2}, $\Set{\hat x^k}$ is uniformly bounded with probability $1$. Combining this with \eqref{eq:xk}, it follows that $\Set{x^k}_{k\ge 0}$ is uniformly bounded with probability $1$ as well. By the Bolzano-Weierstrass theorem, with probability $1$, there exists a subsequence $\Set{\hat x^{k_j}}$ that converges to some limit $x^{\infty}.$ Following the same arguments as in the proof of Theorem~\ref{thm:IPP}, if \begin{equation}\label{eq:ptbound} T>M/\mu\,, \end{equation} then $x^{\infty}=x^*$ with probability $1$. Since $\Set{\hat x^{k}}$ is bounded with probability $1$ and every convergent subsequence has limit $x^*$ almost surely, $\Set{\hat x^{k}}$ itself converges to $x^*$ almost surely when \eqref{eq:ptbound} holds. Therefore, by \eqref{eq:xk}, we conclude that $\Set{x^{k}}$ converges to $x^*$ almost surely as well.
\end{proof} \section{Approximation of the proximal operator}\label{sec:analysis} As discussed in the previous section, the convergence of an IPP method relies on estimating the proximal operators with sufficient asymptotic accuracy. In this section, we consider the approximation of $\operatorname{prox}_{t f}(x)$ via the Gibbs measure associated with the function \begin{equation}\label{eqn:phi} \phi(z) := f(z)+\frac{1}{2t}\norm{z-x}^2. \end{equation} We focus on the case where the proximal operator is single-valued, i.e. the function $\phi$ has a unique global minimizer $z^*=\prox_{tf}(x).$ In this case, the Gibbs measure defined by \eqref{eq:gibbs} approximates the Dirac measure centered at $z^*$ for some small $\delta>0$. Hence, an approximation of the proximal operator is given by \begin{equation}\label{eqn:aprox} \prox_{tf}^\delta(x) := \frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}\approx \prox_{tf}(x)\,. \end{equation} The proximal operator is single-valued under a wide range of conditions. For relatively small $t>0,$ a standard result is that it is single-valued if $f$ is \emph{prox-regular}, a property that holds for all convex functions and a broad class of nonconvex functions \cite{poliquin2010calculus}. For sufficiently large $t$, it is single-valued if $x^*$ is a \emph{nondegenerate} minimizer when $f$ is $C^2$ around $x^*$, or if $f$ is \emph{sharp} \cite{davis2018subgradient,dinh2017sharp} around $x^*$ in the nonsmooth case. These conditions and required definitions are summarized in the following definitions and Proposition~\ref{prop:single_prox} below. \begin{defn} A stationary point $x^*$ of $f$ is called \emph{nondegenerate} if $f$ is $C^2$ around $x^*$ and the Hessian $\Hess f(x^*)$ is nonsingular. \end{defn} \begin{defn} We say $f$ is \emph{sharp} on a neighborhood $U$ of $x^\ast$ if there exists $\eta>0$ such that \begin{equation}\label{eq:sharp} f(x)-f_{\min}\ge \eta\norm{x-x^\ast},\quad \forall x\in U\,. \end{equation} \end{defn} The sharpness condition holds for a wide class of functions, including strongly convex functions and nonconvex functions that satisfy the Kurdyka--Łojasiewicz inequality~\cite{attouch2013convergence, bolte2014proximal} or the Polyak--Łojasiewicz condition~\cite{karimi2016linear}. \begin{prop}\label{prop:single_prox} The proximal operator $\operatorname{prox}_{t f}(x)$ is single-valued under any of the following conditions: \begin{enumerate} \item \textbf{Prox-Regularity of $f$:} If $f$ is \emph{prox-regular} at $x$, meaning there exists a constant $r > 0$ such that for all $x' \neq x''$ near $x$, \[ f(x'') > f(x') + \langle v, x'' - x' \rangle - \frac{r}{2} \| x'' - x' \|^2, \] where $v \in \partial f(x')$, then $\operatorname{prox}_{t f}(x)$ is single-valued for $t < 1/r$; or \item \textbf{Nondegeneracy of $x^\ast$:} In addition to Assumptions~\ref{assum1} -- \ref{assum3}, if $f$ is $C^2$ on a neighborhood $U$ of $x^\ast$ and $x^\ast$ is nondegenerate, then $\operatorname{prox}_{t f}(x)$ is single-valued for all sufficiently large $t$; or \item \textbf{Sharpness at $x^\ast$:} In addition to Assumptions~\ref{assum1} -- \ref{assum3}, if $f$ is sharp on a neighborhood $U$ of $x^\ast,$ then $\operatorname{prox}_{t f}(x)$ is single-valued for all sufficiently large $t$. \end{enumerate} \end{prop} \begin{proof} For part (1), if $f$ is prox-regular, then by \cite[Thm. 
1.3]{poliquin2010calculus}, $\operatorname{prox}_{tf}(x)$ is single-valued for all $t<1/r.$ For part (2), if $f$ is twice continuously differentiable on a neighborhood $U$ of $x^\ast$ and $x^\ast$ is nondegenerate, then there exists a subset $\tilde U\subset U$ such that $\xstar\in\tilde U$ and \begin{equation}\label{eq:epsilonH} \norm{\Grad f(z_1)-\Grad f(z_2)} \ge \min_{z\in\tilde U}\lambda_{\min} \left (\Hess f(z)\right)\norm{z_1-z_2}:= \epsilon_H\norm{z_1-z_2}, \qquad \forall ~z_1, z_2\in \tilde U\,, \end{equation} for some constant $\epsilon_H>0$. Now assume that $z_1, z_2\in \prox_{tf}(x)$. Then \[ f(z_i)+\frac{1}{2t}\norm{z_i-x}^2\le f(\xstar)+\frac{1}{2t}\norm{\xstar-x}^2 \] for $i=1,2,$ which implies that \[ 0\le f(z_i)-f(\xstar)\le \frac{1}{2t}(\norm{\xstar-x}^2-\norm{z_i-x}^2) \] for $i=1,2.$ Since $\xstar$ is the unique minimizer and $f$ is $p$-coercive, $z_1,z_2\in\tilde U$ if $t$ is sufficiently large. Moreover, by the definition of $\prox_{tf}(x),$ \[ \Grad f(z_i) = \frac{x-z_i}{t}, \quad i=1,2\,. \] It follows that \[ \norm{\Grad f(z_1)-\Grad f(z_2)}=\frac{1}{t}\norm{z_1-z_2}<\epsilon_H\norm{z_1-z_2} \] when $t>1/\epsilon_H$. Hence, combining with \eqref{eq:epsilonH}, we have $z_1 = z_2,$ i.e. $\operatorname{prox}_{t f}(x)$ is single-valued. Finally, for part (3), assume that $f$ is sharp on a neighborhood $U$ of $x^\ast.$ Let $\tilde U\subset U$ be a compact neighborhood of $\xstar.$ Then for sufficiently large $t$, \[ f(\xstar)+\frac{1}{2t}\norm{z-\xstar}^2<f(z), \qquad \forall ~z\in \Re^d\setminus \tilde U\,, \] which implies $\prox_{tf}(x)\subset \tilde U.$ Since $\tilde U$ is compact, there exists a constant $M>0$ such that, for arbitrary $z\in\tilde U,$ \[ \abs{\norm{\xstar-x}^2-\norm{z-x}^2} = \abs{\langle \xstar-z, \xstar+z-2x\rangle}\le M\norm{\xstar-z}\,. \] On the other hand, by \eqref{eq:sharp}, for arbitrary $z\in\tilde U$ and $t> M/\eta,$ \[ f(z)-f(\xstar) \ge \eta \norm{z-\xstar}> \frac{M}{t}\norm{z-\xstar}\ge \frac{1}{t}\left(\norm{\xstar-x}^2-\norm{z-x}^2\right)\,, \] which implies \[ f(z)+\frac{1}{2t}\norm{z-x}^2 > f(\xstar)+\frac{1}{2t}\norm{\xstar-x}^2 , \qquad \forall ~z\in \tilde U\,. \] Hence, for $t$ sufficiently large, $\prox_{tf}(x)$ is single-valued and $\prox_{tf}(x)=\xstar.$ \end{proof} The following theorem establishes the convergence of the approximate proximal operator given in \eqref{eqn:aprox} as $\delta\to 0^+$ under the condition that $\phi$ is continuous around $z^*.$ A similar result was proved in \cite{tibshirani2024laplace} under stronger assumptions. \begin{thm} \label{thm:conv} Assume that $\phi:\mathbb R^d\to \mathbb R$ is $p$-coercive for some $p>0$, and that $\phi$ has a unique global minimizer $z^*$. If there exists a neighborhood $U$ of $z^*$ such that $\phi$ restricted on $U$ is continuous, then \begin{equation} \label{eqn:mean_gibbs} \lim_{\delta\to 0^+}\frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}} = z^*\,. \end{equation} \end{thm} \begin{proof} For $\delta>0,$ write \begin{equation*} z_\delta := \frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}\,. \end{equation*} It follows that \[ z_\delta-z^* = \frac{\int (z-z^*)\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}\,. \] Define $\tilde \phi(z):=\phi(z)-\phi(z^*).$ Then $\tilde \phi\ge 0$ and \[ z_\delta-z^* = \frac{\int (z-z^*)\exp{(-\tilde \phi(z)/\delta)}dz}{\int\exp{(-\tilde \phi(z)/\delta)dz}}\,.
\] As $\tilde \phi$ is also $p$-coercive, there exists some constant $\eta>0$ such that $\norm{z-z^*}\ge\eta$ implies that $\tilde \phi(z)\ge\norm{z-z^*}^p.$ It follows that, if $\norm{z-z^*}\ge\eta$, then $\tilde \phi(z)\ge c\eta^p+(1-c)\norm{z-z^*}^p$ for any $c\in (0,1).$ Hence, as $\delta\to 0^+,$ \[ \norm{\int_{\Set{z:\norm{z-z^*}\ge\eta}} (z-z^*)\exp{(-\tilde \phi(z)/\delta)}dz} = o(\exp(-\eta^p/2\delta))\,, \] and \[ \int_{\Set{z:\norm{z-z^*}\ge\eta}} \exp{(-\tilde \phi(z)/\delta)}dz = o(\exp(-\eta^p/2\delta))\,. \] As there exists a neighborhood $U$ of $z^*$ such that $\phi$ restricted on $U$ is continuous, for $\epsilon>0$ sufficiently small, $U_{\epsilon} := \Set{z:\norm{z-z^*}\le\epsilon}\subset U$ and $\min\limits_{\epsilon \le \norm{z-z^*}\le\eta}\tilde \phi(z) = \tilde\epsilon$ for some $\tilde\epsilon>0.$ Therefore, \[ \norm{\int_{\Set{z:\epsilon \le\norm{z-z^*}\le\eta}} (z-z^*)\exp{(-\tilde \phi(z)/\delta)}dz} = o(\exp(-\tilde\epsilon/2\delta))\,, \] and \[ \int_{\Set{z:\epsilon \le\norm{z-z^*}\le\eta}} \exp{(-\tilde\phi(z)/\delta)}dz = o(\exp(-\tilde\epsilon/2\delta))\,. \] Moreover, as $\tilde \phi(z^*)=0$ and $\tilde \phi$ is continuous on $U_\epsilon\subset U$, \[ \int_{\Set{z:\norm{z-z^*}<\epsilon}} \exp{(-\tilde \phi(z)/\delta)}dz=\Omega (\exp(-\min\{\eta^p,\tilde\epsilon\}/2\delta))\,. \] Therefore, \begin{align*} \lim_{\delta\to 0^+}\norm{z_\delta-z^*} =& \lim_{\delta\to 0^+} \norm{\frac{\int_{\Re^d} (z-z^*)\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}}\\ =& \lim_{\delta\to 0^+} \norm{\frac{\int_{\Set{z:\norm{z-z^*}<\epsilon}} (z-z^*)\exp\left(-\phi(z)/\delta\right)dz}{\int_{\Set{z:\norm{z-z^*}<\epsilon}}\exp{(-\phi(z)/\delta)dz}}} \le \epsilon\,. \end{align*} Since $\epsilon >0$ can be arbitrarily small, $\lim\limits_{\delta\to 0^+}\norm{z_\delta-z^*} = 0$, which completes the proof. \end{proof} \begin{lem}[Morse lemma \cite{gamkrelidze2012analysis}]\label{lem:morse} Let $z^*$ be a nondegenerate stationary point of $\phi$. There exists an open neighborhood $U$ of $z^*$ and a homeomorphism $T:U\to V \subset\Re^d$ such that $T(z^*) = 0$, $\det(T'(z^*))=1$, and, with $y=T(z),$ $\phi(z) = \sum_{i=1}^d \lambda_i y_i^2$, where $\lambda_1,\lambda_2,\cdots,\lambda_n$ are eigenvalues of $\Hess \phi(z^*)$. \end{lem} In particular, if $\phi$ is $C^2$ around the stationary point, then a twice continuously differentiable homeomorphism $T$ can be obtained by introducing hyperspherical coordinates (see e.g., \cite[Section 3.7]{miller2006applied}). \begin{lem}[{\cite[Chapter 2]{miller2006applied}}]\label{lem:integral} Let $\alpha$ and $\beta$ be positive numbers. For any function $\psi\in C^2(\Re),$ as $\lambda\to+\infty$, \[ \int_{-\alpha}^\beta \psi(y)\exp{(-\lambda y^2)}dy = \sqrt{\frac{\pi}{\lambda}}\left(\psi(0)+\frac{\psi''(0)}{2}\lambda^{-1}+\mathcal O(\lambda^{-2})\right). \] \end{lem} When $z^*$ is nondegenerate, an error bound for the approximate proximal operator in \eqref{eqn:aprox} can be obtained according to the following theorem. \begin{thm}\label{thm_C2} Assume that $\phi:\mathbb R^d\to \mathbb R$ is $p$-coercive for some $p>0$, and that $\phi$ has a unique nondegenerate global minimizer $z^*$. Also assume that there exists a neighborhood $U$ of $z^*$ such that $\phi$ restricted on $U$ is twice continuously differentiable. Then as $\delta\to 0^+,$ \[ \norm{\frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}-z^*} = \bigO(\delta). 
\] \end{thm} \begin{proof} For $\delta>0,$ write \[ z_\delta := \frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}. \] It follows that \[ z_\delta-z^* = \frac{\int(z-z^*)\exp\left(-\frac{\phi(z)}{\delta}\right)dz}{\int\exp{(-\phi(z)/\delta)dz}}\,. \] Define $\tilde \phi(z):=\phi(z)-\phi(z^*).$ Then $\tilde \phi\ge 0$ and \[ z_\delta-z^* = \frac{\int (z-z^*)\exp{(-\tilde \phi(z)/\delta)}dz}{\int \exp{(-\tilde \phi(z)/\delta)dz}}. \] As $\tilde \phi$ is also $p$-coercive, there exists some constant $\eta>0$ such that $\norm{z-z^*}\ge\eta$ implies that $\tilde \phi(z)\ge\norm{z-z^*}^p.$ Hence, as $\delta\to 0^+,$ \[ \norm{\int_{\Set{z:\norm{z-z^*}\ge\eta}} (z-z_\delta)\exp{(-\tilde \phi(z)/\delta)}dz} = o(\exp(-\eta^p/2\delta))\,, \] and \[ \int_{\Set{z:\norm{z-z^*}\ge\eta}} \exp{(-\tilde \phi(z)/\delta)}dz = o(\exp(-\eta^p/2\delta))\,. \] By Lemma~\ref{lem:morse}, there exists an open neighborhood $V$ of $z^*$ and a homeomorphism $T$ such that $T(z^*)=0,$ $\det(T'(z^*))=1$, and \[ \tilde \phi(z) = \frac{1}{2}\sum_{i=1}^d\lambda_iy_i^2\,, \] where $y=T(z)$ and $\lambda_1,\cdots,\lambda_n$ are eigenvalues of $\Hess \phi(z^*)$. Consider $\tau>0$ sufficiently small such that \[ C_\tau :=\Set{y:\norm{y}_\infty\le\tau}\subset T(U\cap V)\,. \] It follows that \begin{align}\label{eqn:integral} &\int_{T^{-1}(C_\tau)} z\exp{(-\tilde \phi(z)/\delta)}dz \nonumber\\ =&\int_{C_\tau}T^{-1}(y)\exp(-\sum_{i=1}^d \lambda_i y_i^2/2\delta)\det((T^{-1})'(y))dy \nonumber\\ =&\int_{C_\tau}(z^*+\mathcal O(y))\exp(-\sum_{i=1}^d \lambda_i y_i^2/2\delta)dy \nonumber\\ =&z^*\prod_{i=1}^d\int_{-\tau}^{\tau}\exp(-\lambda_i y_i^2/2\delta)dy_i+\int_{C_\tau}\mathcal O(y)\exp(-\sum_{i=1}^d \lambda_i y_i^2/2\delta)dy \nonumber\\ =&\frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde f(z^*))}}z^*+\int_{C_\tau}\mathcal O(y)\exp(-\sum_{i=1}^d \lambda_i y_i^2/2\delta)dy\,. \end{align} Hence, by Lemma~\ref{lem:integral}, as $\delta\to 0^+,$ \begin{align*} \norm{\int_{T^{-1}(C_\tau)} z\exp{(-\tilde \phi(z)/\delta)}dz-\frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}z^*}=\bigO(\delta^{d/2+1})\,. \end{align*} Similarly, as $\delta\to 0^+,$ \[ \int_{T^{-1}(C_\tau)} \exp{(-\tilde \phi(z)/\delta)}dz = \frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}+\bigO (\delta^{d/2+1})\,. \] For $z\in\Set{z:\norm{z-z^*}\le\eta}\setminus T^{-1}(C_\tau)$, since $z^*$ is the unique minimizer and $T^{-1}(C_\tau)$ is an open neighborhood of $z^*,$ there must exist $\epsilon>0$ such that $\tilde \phi(z)\ge\epsilon$. Hence, as $\delta\to 0^+,$ \[ \norm{\int_{\Set{z:\norm{z-z^*}\le\eta}\setminus T^{-1}(C_\tau)}(z-z^*)\exp{(-\tilde \phi(z)/\delta)}dz}=o(e^{-\epsilon/2\delta})\,, \] and \[ \int_{\Set{z:\norm{z-z^*}\le\eta}\setminus T^{-1}(C_\tau)}\exp{(-\tilde \phi(z)/\delta)}dz=o(e^{-\epsilon/2\delta})\,. \] Therefore,\[ \lim_{\delta\to 0^+} z_\delta = \lim_{\delta\to 0^+}\frac{\frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}z^*+\bigO(\delta^{d/2+1})}{\frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}+\bigO(\delta^{d/2+1})} = z^*\,. \] Moreover, \[ \lim_{\delta\to 0^+} \norm{z_\delta-z^*}= \frac{\bigO(\delta^{d/2+1})} {\frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}+\bigO(\delta^{d/2+1})} = \bigO(\delta)\,. \] \end{proof} According to Theorems~\ref{thm:conv} and \ref{thm_C2}, evaluating the parameterized operator, $\prox_{tf}^\delta(x)$, as given in \eqref{eqn:aprox} for a small $\delta>0$ indeed provides a good approximation of the proximal operator. 
To illustrate this numerically, Figure~\ref{fig:prox_acc} displays the approximation errors for the two-dimensional Ackley function \cite{benchmark_Andrea}. Figure~\ref{fig:2a} indicates that sufficiently accurate approximations may be obtained by choosing $\delta\le 0.5.$ Figure~\ref{fig:2b} indicates that for varied $\delta$ and $x,$ $\prox_{tf}^\delta(x)$ is closer to the global minimizer $x^*$ than $x,$ making it effective in an IPP method. Additionally, abundant numerical evidence demonstrating the effectiveness of approximating the proximal operator using MC estimates of $\prox^\delta_{tf}(x)$ can be found in \cite{osher2023hamilton, tibshirani2024laplace}. \begin{figure}[H] \centering \subfloat[]{\includegraphics[width=0.48\textwidth]{pix/delta_prox.eps} \label{fig:2a}} \hfill \subfloat[]{\includegraphics[width=0.48\textwidth]{pix/delta_z.eps} \label{fig:2b}} \vspace{-0.5cm} \caption{Approximations of \( \prox_{tf}(x) \) by \( \prox^\delta_{t f}(x) \) for the Ackley function \cite{benchmark_Andrea}. (a) Approximation errors for varied $t$, $\delta$ and $x$; (b) Distances between \( \prox_{t f}^\delta(x) \) and $x^*$ for varied $\delta$ and $x$, with $t = 2$.} \label{fig:prox_acc} \end{figure} Of theoretical interest, when the proximal operator is multi-valued, the following corollary implies that $\prox_{tf}^\delta(x)$ given in \eqref{eqn:aprox} approximates a point in the convex hull of $\prox_{tf}(x)$. \begin{cor} Assume that $\phi:\mathbb R^d\to \mathbb R$ is $p$-coercive for some $p>0$, and that $\phi$ has multiple nondegenerate global minimizers $z_1^*,\cdots,z_m^*$. Also assume that, for each $j\in\Set{1,\cdots,m}$, there exists a neighborhood $U_j$ of $z_j^*$ such that $\phi$ restricted on $U_j$ is twice continuously differentiable. Then, as $\delta\to 0^+,$ \[ \norm{\frac{\int z\exp\left(-\phi(z)/\delta\right)dz}{\int \exp{(-\phi(z)/\delta)dz}}-\bar z^*} = \bigO(\delta), \] for some $\bar z^*\in \Set{\sum_{j=1}^m a_j z^*_j: a_j\ge 0 \textrm{ and } \sum_{j=1}^m a_j=1}.$ \end{cor} \begin{proof} Let \[ \bar z^*= \left(\sum\limits_{j=1}^m \frac{z_j^*}{\sqrt{\det(\Hess \phi(z_j^*))}} \right)/ \left(\sum\limits_{j=1}^m \frac{1}{\sqrt{\det(\Hess\phi(z_j^*))}} \right). \] Following similar arguments as in the proof of Theorem~\ref{thm_C2}, as $\delta\to 0^+,$ \[ \norm{z_\delta-\bar z^*}= \frac{\bigO(\delta^{d/2+1})} {\sum\limits_{j=1}^m \frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\phi(z_j^*))}}+\bigO(\delta^{d/2+1})} = \bigO(\delta). \] \end{proof} \section{Tensor train for estimating the proximal operator}\label{sec:approx} Based on results in Section~\ref{sec:analysis}, an inexact evaluation of the proximal operator can be obtained by estimating $\prox^\delta_{tf}(x)$ defined in \eqref{eqn:aprox} for some small $\delta>0.$ Deterministic methods, such as the trapezoidal rule, require a function evaluation at each quadrature node, and the number of nodes grows exponentially as the dimension $d$ increases. Therefore, directly using these methods can be prohibitively expensive. Alternatively, randomized approaches, such as Monte Carlo (MC) integration, may be used to compute the integrals in (\ref{eqn:aprox}). The MC-based method, introduced in \cite{osher2023hamilton,heaton2024global,tibshirani2024laplace}, may suffer from high variance and underflow errors in practice \cite{heaton2024global, beck2018underflow}.
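For concreteness, the MC estimator of $\prox^\delta_{tf}(x)$ admits a very short implementation: sampling $z\sim\mathcal N(x, t\delta I)$ absorbs the quadratic factor of $\exp(-\phi(z)/\delta)$ into the proposal, so the estimate of \eqref{eqn:aprox} is a weighted average of the samples with weights proportional to $\exp(-f(z)/\delta)$. The following Python sketch is illustrative only (the sample size and the toy objective are assumptions), and the log-sum-exp shift is a standard stabilization against the underflow issue mentioned above rather than a feature of the cited implementations.
\begin{verbatim}
import numpy as np

def prox_mc(f, x, t, delta, n_samples=5000, rng=None):
    """MC sketch of prox_{tf}^delta(x): weighted average of Gaussian samples
    z ~ N(x, t*delta*I) with weights proportional to exp(-f(z)/delta)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x, dtype=float))
    z = x + np.sqrt(t * delta) * rng.standard_normal((n_samples, x.size))
    logw = -np.array([f(zi) for zi in z]) / delta
    logw -= logw.max()              # log-sum-exp shift to avoid underflow
    w = np.exp(logw)
    return (w[:, None] * z).sum(axis=0) / w.sum()

# Toy 2D objective with global minimizer at the origin (assumed example).
f = lambda z: np.sum(z ** 2) / 10.0 + 1.0 - np.cos(3.0 * np.linalg.norm(z))
print(prox_mc(f, x=[1.0, 1.0], t=2.0, delta=0.25))
\end{verbatim}
Variance-reduction devices such as the EWMA mentioned in the introduction, and the adaptive sample sizes used by MC-IPP, are omitted from this sketch.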
In this section, we consider a new approach that is based on the usage of a \emph{tensor train} (TT) approximation algorithm \cite{oseledets2010tt, cross_error}, and analyze the associated error in estimating the proximal operator. By exploiting the Sobolev smoothness of the integrands, the TT-based method circumvents the curse of dimensionality, thereby improving the estimation accuracy. Specifically, we first compute a low-rank TT approximation of the function \begin{equation}\label{eqn:psi} \psi := \exp\left(-f/\delta\right), \end{equation} and then use a quadrature rule to estimate (\ref{eqn:aprox}). For computational purposes involving the TT approximation, we restrict the definition of $f$ on a bounded domain $\Omega\subset\Re^d$. \subsection{Tensor train algorithms}\label{sec:tt} In this subsection, we provide a brief review of TT algorithms and the associated error bound. As the goal is to estimate the integrals in (\ref{eqn:aprox}) using TT approximation, we consider a $d$-dimensional mesh grid \[ \mathcal Z := \Set{z_1^{(i_1)}}_{i_1=1}^{n_1}\times \cdots\times\Set{z_d^{(i_d)}}_{i_d=1}^{n_d}\,, \] with each node given by $\left(z_1^{(i_1)},\cdots,z_d^{(i_d)}\right)\in\Omega$. The discretization of a function $\psi$ on $\Z$, denoted by $\psi_{\Zd}$, can be viewed as a tensor of dimension $n_1\times\cdots \times n_d$, with entries given by the values of $\psi$ at node points. A TT approximation of $\psi_{\Zd}$, denoted by $\psi_\TT$, is given by \begin{equation}\label{eqn:tt} \psi_\TT(i_1,\cdots,i_d) := \sum_{\alpha_1=1}^{r_1} \cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} G_1(r_0,i_1, \alpha_1)G_2(\alpha_1, i_2, \alpha_2)\cdots G_d(\alpha_{d-1}, i_d\,, r_d)\,, \end{equation} where $r_0=r_d=1$ and $G_1,\cdots,G_d$ are called the \emph{cores}, with each $G_j$ of dimension $r_{j-1}\times n_j \times r_j$. When the approximation is exact, the TT decomposition of a tensor is a generalization of the singular value decomposition (SVD) of a matrix and the cores are analogues to singular vectors. The separable structure of $\psi_{\TT}$ reduces the computational cost of numerical integration from $\bigO(n^d)$ to $\bigO(dnr^2)$, where $n=\max\{n_1,\cdots,n_d\}$ and $r=\max\{r_1,\cdots,r_d\}$. We include a standard result on the TT approximation error below. \begin{thm} \label{Thm_TT_accuracy} \cite[Thm. 2.2]{oseledets2010tt} For a tensor $\psi_{\Zd}$, define the \emph{unfolding matrices} \begin{equation}\label{eqn:unfolding} A_j:= \psi_{\Zd}(i_1,\cdots,i_j; i_{j+1},\cdots,i_d),\quad j=1,\cdots,d-1, \end{equation} where the first $j$ indices enumerate the rows of $A_j$ and the last $d-j$ indices enumerate the columns. There exists a tensor train approximation $\psi_\TT$ with ranks $\Set{r_j}_{j=1}^{d-1}$ such that $\|\psi_{\Zd}-\psi_{\TT}\|_F \leq \sqrt{\sum_{j=1}^{d-1}\epsilon_j^2}$, where $\epsilon_j = \min_{\textrm{rank} (B) \leq r_j}\|A_j-B\|_F$ for each $j$. \end{thm} The theorem above implies that the error of the TT approximation can be made sufficiently small by choosing an appropriate rank. In particular, functions possessing certain Sobolev smoothness and underlying low-rank structures (e.g., Figure~\ref{fig:1d}) can be approximated with a relatively low TT rank \cite{Thm_TT_accuracy}; see also Table \ref{tab:tt_comparison}. To efficiently construct a TT approximation $\psi_{\TT}$, we consider the randomized TT cross algorithm \cite{oseledets2010tt, savostyanov2014quasioptimality}, which is based on the cross approximation of matrices. 
Given a matrix $A$, a rank-$r$ \emph{cross approximation} of $A$ is given by $$A \approx A(:,J) A(I,J)^{-1} A(I,:)\,,$$ where $J$ is a subset of column indices of $A$ and $I$ is a subset of row indices, with $|I|=|J|=r$. The index sets $I$ and $J$ are selected based on the \emph{maximum volume principle}, i.e., the submatrix $A(I,J)$ is selected so that the absolute value of its determinant is as large as possible. The TT cross algorithm iteratively samples random row or column multi-indices and updates the tensor cores by performing cross approximation on sampled submatrices of unfolding matrices. Details of the algorithm are summarized in Algorithm~\ref{alg:TTcross}, including two subroutines \texttt{TT-Cross-Right-To-Left-Sweep} and \texttt{TT-Cross-Left-To-Right-Sweep}. The subroutine \texttt{TT-Cross-Right-To-Left-Sweep} performs cross approximation on submatrices of the unfolding matrices $A_j$ given in \eqref{eqn:unfolding} from $j=d-1$ to $j=1$, and \texttt{TT-Cross-Left-To-Right-Sweep} performs cross approximation on submatrices of $A_j$ from $j=1$ to $j=d-1$. The per-iteration cost of the TT cross algorithm is roughly $\bigO(dr^3)$ flops and $\bigO(dr^2)$ function evaluations. The algorithm obtains a TT approximation by evaluating only a small number of entries from the original tensor, without ever storing the full tensor, thus substantially reducing computational and storage costs. \begin{algorithm}[H] \caption{TT Cross Algorithm \cite{oseledets2010tt}} \label{alg:TTcross} \begin{algorithmic}[1] \STATE \textbf{Input:} Black-box tensor $\psi_{\Zd}\in \mathbb{R}^{n_1\times \cdots \times n_d}$, tolerance $\tau_{stop}>0$ \STATE \textbf{Output:} $\psi_{\TT}$ with cores $G_1,\cdots,G_d$ \STATE Randomly choose sets of column multi-indices $J_1,\cdots,J_{d-1}$ \STATE $G_1,\cdots,G_d,I_1,\cdots,I_{d-1} \gets \text{TT-Cross-Left-To-Right-Sweep}(\psi_{\Zd}, J_1,\cdots,J_{d-1})$ \WHILE{$\|H_1\cdots H_d - G_1\cdots G_d\|_F \ge \tau_{stop} \|G_1\cdots G_d\|_F$} \STATE $\widehat{I}_1,\cdots,\widehat{I}_{d-1} \gets I_1,\cdots,I_{d-1}$ extended with random row multi-indices \STATE $H_1,\cdots,H_d,J_1,\cdots,J_{d-1} \gets \text{TT-Cross-Right-To-Left-Sweep}(\psi_{\Zd}, \widehat{I}_1,\cdots,\widehat{I}_{d-1})$ \STATE $\widehat{J}_1,\cdots,\widehat{J}_{d-1} \gets J_1,\cdots,J_{d-1}$ extended with random column multi-indices \STATE $G_1,\cdots,G_d,I_1,\cdots,I_{d-1} \gets \text{TT-Cross-Left-To-Right-Sweep}(\psi_{\Zd}, \widehat{J}_1,\cdots,\widehat{J}_{d-1})$ \ENDWHILE \end{algorithmic} \end{algorithm} If two tensors are both represented in the TT format \eqref{eqn:tt}, their Hadamard product can be computed directly in that representation, which is useful in designing an efficient TT-based IPP algorithm in Section~\ref{sec:ttIPP} below. When reducing the parameter $\delta$ to $\delta/2$ in \eqref{eqn:aprox}, we simply need to approximate $\psi^2$, whose TT approximation can be obtained as the Hadamard product of $\psi_{\TT}$ and itself. As derived in \cite{oseledets2011tensor}, the Hadamard product of the two tensors can be written explicitly as \[ \psi_{\TT}\circ \psi_{\TT} = (G_1\otimes G_1)\cdot(G_2\otimes G_2)\cdots (G_d\otimes G_d)\,, \] where $\otimes$ denotes the Kronecker product. Note that the resulting tensor can be written out explicitly in this manner and does not require any extra function evaluations.
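Continuing the illustrative Python sketch above, the slice-wise Kronecker structure of the Hadamard product can be implemented in a few lines; squaring the TT representation of $\exp(-f/\delta)$ in this way yields a TT representation of $\exp(-f/(\delta/2))$ without any new evaluations of $f$.
\begin{verbatim}
def tt_hadamard(cores_a, cores_b):
    # Entrywise product of two TT tensors with identical mode sizes:
    # each new core is the slice-wise Kronecker product of the inputs,
    # so the TT ranks multiply (motivating the TT-rounding step below).
    out = []
    for A, B in zip(cores_a, cores_b):
        ra1, n, ra2 = A.shape
        rb1, _, rb2 = B.shape
        C = np.einsum('inj,knl->iknjl', A, B)
        out.append(C.reshape(ra1 * rb1, n, ra2 * rb2))
    return out

# Halving delta: exp(-f/(delta/2)) = (exp(-f/delta))^2 entrywise.
sq = tt_hadamard(cores, cores)
assert np.isclose(tt_eval(sq, (3, 5, 7)), tt_eval(cores, (3, 5, 7)) ** 2)
\end{verbatim}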
Moreover, as performing the Hadamard product leads to an increase in the ranks, a rounding procedure, \emph{TT-rounding}, introduced in \cite{oseledets2011tensor}, can be employed to reduce the ranks while preserving the accuracy of the TT approximation within a specified tolerance. The TT-rounding procedure involves orthogonalization and performing truncated SVDs on unfolded tensor cores, with a computational complexity of $\bigO(dnr^3).$ The estimation of the proximal operator in \eqref{eqn:aprox} involves the integrand $\tilde \psi(z):= \psi(z)\eta(z)$, where $\eta(z):= \exp{\left(-\norm{z-x}^2/(2t\delta)\right)}$ for a fixed $x$. The discretization of the function $\eta(z)$ can be easily represented in a rank-$1$ TT format due to its separable structure, with cores given by \[ u_j := \left[\exp\left(-\frac{1}{2t\delta}\|z_{j,1}-x_j\|^2 \right),\cdots,\exp\left(-\frac{1}{2t\delta}\|z_{j,n_j}-x_j\|^2 \right) \right]^T \] for $j=1,\cdots,d$, where $\{z_{j,k}\}_{k=1}^{n_j}$ are nodal points of $\Z$ along the $j$-th dimension. The integrals in \eqref{eqn:aprox} can then be estimated using a quadrature rule. Let $\Set{w_{j,k}}_{k=1}^{n_j}$ be the quadrature weights associated with nodal points along the $j$-th dimension. The denominator in \eqref{eqn:aprox} is approximated by a discrete sum \begin{equation}\label{eqn:tt_integral} \int_\Omega\exp\left(-\phi(z)/\delta\right)dz = \int_{\Omega}\exp(-f(z)/\delta)\exp\left(-\frac{\|z-x\|_2^2}{2t\delta}\right)dz \approx \prod_{j=1}^{d}\left(\sum_{i_j=1}^{n_j}G_j(i_j)u_j(i_j)w_{j,i_j}\right), \end{equation} where $G_j(i_j)=G_j(:,i_j,:)$ is of dimension $r_{j-1}\times r_j$ for each $j$. Similarly, the $j$-th component of the numerator in \eqref{eqn:aprox} is approximated by \small{\begin{equation}\label{eqn:tt_zintegral} \left[\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right]_j \approx \left(\sum_{i_1=1}^{n_1}G_1(i_1)u_1(i_1)w_{1,i_1}\right) \cdots \left( \sum_{i_j=1}^{n_j} z_{j,i_j}G_j(i_j)u_j(i_j)w_{j,i_j} \right)\cdots \left(\sum_{i_d=1}^{n_d}G_d(i_d)u_d(i_d)w_{d,i_d}\right). \end{equation}} \normalsize \subsection{Error analysis}\label{sec:error} In this subsection, we derive error bounds for estimating the proximal operator based on (\ref{eqn:aprox}) using the TT approximation. For simplicity, we consider the case where $\Omega=[0,1]^d$, and assume that $n_1=\cdots =n_d=n$ with the mesh size $h=1/n$. The results can be easily extended to a general bounded domain $\Omega\subset\Re^d.$ The error consists of two parts: the error in the TT approximation and the error in the numerical integration. As discussed in Section~\ref{sec:tt}, the error in the TT approximation can be made arbitrarily small by allowing sufficiently large ranks. In particular, for functions possessing certain Sobolev smoothness and underlying low-rank structures (as illustrated in Figure~\ref{fig:1d}), a relatively low TT rank is sufficient. To analyze the error in each component of the numerical computation, we assume that \begin{equation}\label{eqn:epsilonTT} \|\psi_{\TT}-\psi_{\Zd}\|_F\leq {\epsilon_{\TT}}\,, \end{equation} where $\psi_\Zd$ is the discretization of $\psi$ on the mesh grid $\mathcal{Z}$. The value of $\epsilon_{\TT}$ depends on the choice of the termination tolerance $\tau_{stop}$ in Algorithm~\ref{alg:TTcross}. Detailed error analysis on TT cross approximation can be found in \cite{cross_error, oseledets2010tt,Thm_TT_accuracy}.
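To fix ideas before the analysis, the sketch below (continuing the illustrative Python snippets above; the uniform-grid trapezoidal weights and the $\bigO(d^2r^2)$ coordinate loop are simplifications, not our actual implementation) assembles the quadrature sums in \eqref{eqn:tt_integral} and \eqref{eqn:tt_zintegral} from the cores $G_j$, the rank-one Gaussian factors $u_j$, and the weights $w_{j,i_j}$, and returns the corresponding TT estimate of the proximal operator.
\begin{verbatim}
def tt_prox_estimate(cores, grids, x, t, delta):
    # cores: TT cores of exp(-f/delta) on the tensor grid `grids`
    # (one uniformly spaced 1-D array of nodes per dimension).
    d = len(cores)
    M, Mz = [], []
    for j, (G, z) in enumerate(zip(cores, grids)):
        h = z[1] - z[0]
        w = np.full(z.shape, h)
        w[0] = w[-1] = h / 2.0                                # trapezoidal weights
        u = np.exp(-(z - x[j]) ** 2 / (2.0 * t * delta))      # rank-1 Gaussian factor u_j
        M.append(np.einsum('rns,n->rs', G, u * w))            # factor of the denominator
        Mz.append(np.einsum('rns,n->rs', G, z * u * w))       # factor carrying z_{j,i_j}
    den = np.ones((1, 1))
    for Mj in M:
        den = den @ Mj
    num = np.zeros(d)
    for j in range(d):                                        # numerator, coordinate by coordinate
        v = np.ones((1, 1))
        for k in range(d):
            v = v @ (Mz[k] if k == j else M[k])
        num[j] = v.item()
    return num / den.item()
\end{verbatim}
In an efficient implementation, the $d$ coordinate products can share left and right partial products of the matrices above, reducing the cost of the numerator loop from $\bigO(d^2r^2)$ to $\bigO(dr^2)$.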
Now we consider the error in applying a quadrature rule to the TT approximations of the integrands in (\ref{eqn:aprox}). Let $\left(\int\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}$ be the TT estimate of the integral $\int\exp\left(-\phi(z)/\delta\right)dz$ given in \eqref{eqn:tt_integral}, and let $\left(\int z\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}$ be the TT estimate of the integral $\int z\exp\left(-\phi(z)/\delta\right)dz$ given in \eqref{eqn:tt_zintegral}. Then the TT estimate of the proximal operator is given by \begin{equation}\label{eq:x_TT} \prox_{tf}(x)\approx \frac{\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}} {\left(\int_\Omega\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}} :=(\prox^\delta_{tf}(x))_{\TT}\, . \end{equation} First, we consider the case where $f\in C^2(\Omega)$. When the trapezoidal rule is used, we have the following standard result: \begin{equation}\label{eq:int_error1} \norm{\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz-\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\Zd}} \le \frac{d h^2}{12}\max_{z,k,l} \norm{\frac{\partial^2 \left(z\exp\left(-\phi(z)/\delta\right)\right)}{\partial z_k \partial z_{l}}}\,, \end{equation} where $(\cdot)_{\Zd}$ denotes the numerical quadrature over the mesh $\Z$ for estimating the integral. Similarly, \begin{equation}\label{eq:int_error2} \norm{\int_\Omega \exp\left(-\phi(z)/\delta\right)dz-\left(\int_\Omega \exp\left(-\phi(z)/\delta\right)dz\right)_{\Zd}} \le \frac{d h^2}{12}\max_{z,k,l} \norm{\frac{\partial^2 \left(\exp\left(-\phi(z)/\delta\right)\right)}{\partial z_k \partial z_{l}}}\,. \end{equation} It follows that, for $\delta\in (0,1)$, the errors in \eqref{eq:int_error1} and \eqref{eq:int_error2} are bounded by $\frac{c_1 dh^2}{12\delta^2}\exp\left(\frac{-\phi_{\min}}{\delta}\right)$, where $c_1$ is a constant that depends on the magnitude of the first and second-order partial derivatives of $\phi$ on $\Omega$, and $$\phi_{\min}=\min_{z\in\Omega}\phi(z)\ge f(x^*)\,.$$ Thus, by the Cauchy--Schwarz inequality, \small{\begin{align*} &\norm{\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz-\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}}\\ \le & \norm{\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz-\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\Zd}} +\norm{\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\Zd}-\left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}}\\ \le & \frac{c_1 dh^2}{12\delta^2}\exp\left(\frac{-\phi_{\min}}{\delta}\right)+ \norm{\psi_{\Zd}-\psi_{\TT}}_F\left(\sum_{j=1}^{n^d}\left(\norm{z^{(j)}}w^{(j)}\right)^2\right)^{1/2}\\ \le & \frac{c_1 dh^2}{12\delta^2}\exp\left(\frac{-\phi_{\min}}{\delta}\right)+ c_2\epsilon_{\TT} h^{d/2}\,, \end{align*}} where $\Set{z^{(j)}}$ and $\Set{w^{(j)}}$ denote the quadrature nodes and weights, and $c_2$ is a constant approximating the integral $\int_\Omega \norm{z}^2dz\le 1$. Similarly, \[ \norm{\int_\Omega \exp\left(-\phi(z)/\delta\right)dz-\left(\int_\Omega \exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}} \le \frac{c_1 dh^2}{12\delta^2}\exp\left(\frac{-\phi_{\min}}{\delta}\right)+ \tilde c_2\epsilon_{\TT} h^{d/2}\,, \] where $\tilde c_2\approx |\Omega|=1.$ The TT estimation error of the proximal operator satisfies \begin{equation} \label{eqn:discrete_1} \norm{\prox_{tf}(x)-\left(\prox_{tf}^\delta(x) \right)_{\TT}} \leq \norm{\prox_{tf}(x)-\prox_{tf}^\delta(x)}+\norm{\prox_{tf}^\delta(x)-\left(\prox_{tf}^\delta(x) \right)_{\TT}}\,.
\end{equation} By Theorem~\ref{thm_C2}, for $0<\delta\ll 1$, the first term above is $\bigO(\delta)$, i.e., \[ \norm{\prox_{tf}(x)-\prox_{tf}^\delta(x)}\le C_1\delta \] for some constant $C_1$ that depends on the homeomorphism $T$ in Lemma~\ref{lem:morse}. To estimate the second error term, let \[ \beta_1 = \int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\,, \quad \beta_2 = \int_\Omega \exp\left(-\phi(z)/\delta\right)dz\,, \] and let $\tilde\beta_1$ and $\tilde\beta_2$ be their TT estimates, respectively: \[ \tilde\beta_1 = \left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\TT},\quad \tilde\beta_2 = \left(\int_\Omega \exp\left(-\phi(z)/\delta\right)dz\right)_{\TT}. \] The magnitude of the second term in \eqref{eqn:discrete_1} is then bounded by \begin{align*} \norm{\frac{\beta_1}{\beta_2} - \frac{\tilde{\beta}_1}{\tilde{\beta}_2}} &\leq \norm{\frac{\tilde{\beta}_1}{\tilde{\beta}_2} - \frac{\tilde{\beta}_1}{\beta_2}} + \norm{ \frac{\tilde{\beta}_1}{\beta_2} - \frac{\beta_1}{\beta_2}} \leq \norm{\frac{\tilde{\beta}_1}{\tilde{\beta}_2}} \left|\frac{\beta_2-\tilde{\beta}_2}{\beta_2}\right| + \norm{\frac{\beta_1-\tilde{\beta}_1}{\beta_2}} \\ &\le\left(\norm{\frac{\tilde{\beta}_1}{\tilde{\beta}_2}}+1\right) \left(\frac{c_1 dh^2}{12\delta^2\int_\Omega \exp\left(-\tilde \phi(z)/\delta\right)dz} +\frac{\max\{\tilde c_2,c_2\}\epsilon_{\TT} h^{d/2}\exp\left(\frac{\phi_{\min}}{\delta}\right)}{\int_\Omega \exp\left(-\tilde \phi(z)/\delta\right)dz}\right) \end{align*} where $\tilde \phi(z) = \phi(z)-\phi_{\min}.$ As derived in the proof of Theorem~\ref{thm_C2}, \[ \int_{\Omega} \exp{(-\tilde \phi(z)/\delta)}dz = \frac{(2\pi\delta)^{d/2}}{\sqrt{\det(\Hess\tilde \phi(z^*))}}+\bigO(\delta^{d/2+1}) \] for small $\delta>0.$ Also notice that \( \norm{\tilde{\beta}_1/\tilde{\beta}_2} \le \max_{j} \norm{z^{(j)}}\le 1\,. \) It follows that, for $0<\delta\ll 1,$ \[ \norm{\frac{\beta_1}{\beta_2} - \frac{\tilde{\beta}_1}{\tilde{\beta}_2}} \le \frac{C_2 dh^2}{\delta^{2+d/2}}+ \frac{C_3\epsilon_{\TT} h^{d/2}}{\delta^{d/2}}\exp\left(\frac{\phi_{\min}}{\delta}\right)\,, \] where $C_2$ and $C_3$ are constants that depend on the magnitude of the first and second-order derivatives of $f$ on $\Omega$. The results above are summarized in the following proposition. \begin{prop} \label{prop:C2} Assume that $f\in C^2(\Omega)$ and that $z^*:=\prox_{tf}(x)$ is the unique nondegenerate global minimizer of $\phi$. For $0<\delta\ll 1$, the error in estimating $\prox_{tf}(x)$ using the TT approximation and the trapezoidal rule is bounded by \begin{equation}\label{eq:int_tt_error} \norm{\prox_{tf}(x)-\left(\prox_{tf}^\delta(x) \right)_{\TT}}\le C_1\delta+ \frac{C_2 dh^2}{\delta^{2+d/2}}+ \frac{C_3\epsilon_{\TT} h^{d/2}}{\delta^{d/2}}\exp\left(\frac{\phi_{\min}}{\delta}\right), \end{equation} where $C_1, C_2, C_3$ are constants that are independent of the choices of $\Set{\delta,\epsilon_{\TT},h}$. In particular, as $\delta\to 0^+$, if \begin{equation}\label{eq:hepsilon} h = \bigO(\delta^{\max\{d,2\}/4+3/2}) \quad \textrm{ and } \quad \epsilon_{\TT}\le \exp\left(\frac{-\phi_{\min}}{\delta}\right) \end{equation} then $\norm{\prox_{tf}(x)-\left(\prox_{tf}^\delta(x) \right)_{\TT}} = \bigO(\delta).$ \end{prop} For high-dimensional problems, the bound on the mesh size $h$ given in (\ref{eq:hepsilon}) is unrealistic. This restriction is due to the theoretical error bounds on the numerical integration given in \eqref{eq:int_error1}-\eqref{eq:int_error2}.
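For instance, for $d=10$ and $\delta = 10^{-1}$, condition \eqref{eq:hepsilon} already requires $h = \bigO(\delta^{d/4+3/2}) = \bigO(10^{-4})$, i.e., on the order of $10^{4}$ nodes per dimension.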
Alternatively, we consider the case where $f$ lies in a \emph{Sobolev space} given by \[ H^s(\Omega):=\Set{g\in L^2(\Omega):\norm{g}_s^2:=\left(\sum_{\tau=0}^s\norm{g^{(\tau)}}_{L^2(\Omega)}^2\right)<\infty}\,. \] For $s \ge 2$, \cite[Theorem 4.5]{Sobolev_integral} implies \begin{equation} \label{eqn:error_Hs} \left\|\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz- \left(\int_\Omega z\exp\left(-\phi(z)/\delta\right)dz\right)_{\Z}\right\|\le C\frac{d(\log n)^{s/2+1/4}}{n^s} \end{equation} where $C$ is a constant dependent on the Sobolev norm of the integrand. Following similar arguments as for the previous case, the following result can be derived. \begin{cor}\label{cor:TT_error} Assume that $f\in H^{s}(\Omega)$ for $s\ge 2$ and that $z^*:=\prox_{tf}(x)$ is the unique nondegenerate global minimizer of $\phi$. If \begin{equation}\label{eqn:hepsilon2} h = \mathcal{O}\left(\delta^{\frac{d+2}{2s}+1}\right)~\textrm{ and }~ \epsilon_{\TT}\le \exp\left(\frac{-\phi_{\min}}{\delta}\right), \end{equation} then the error in estimating $\prox_{tf}(x)$ using the TT approximation and the trapezoidal rule satisfies \begin{equation*} \norm{\prox_{tf}(x)-\left(\prox_{tf}^\delta(x) \right)_{\TT}} = \bigO(\delta). \end{equation*} \end{cor} By \eqref{eq:hepsilon} and \eqref{eqn:hepsilon2}, if $\phi_{\min}\le 0$, it is sufficient to require the TT approximation error $\epsilon_{\TT}\le 1$ to guarantee the desired accuracy in estimating the proximal operator. In practice, if we have an estimate of $\phi_{\min}$, say $c\approx \phi_{\min}$ for some constant $c$, we may shift the original $\phi$ and estimate $\prox_{tf}(x)$ using the equivalent formula \[ \prox_{tf}(x) = \frac{\int_{\Omega} z\exp\left(-\tilde \phi(z)/\delta\right)dz}{\int_{\Omega}\exp{(-\tilde \phi(z)/\delta)dz}}\,, \] where $\tilde \phi(z) = \phi(z)-c,$ with $\tilde \phi_{\min} = \phi_{\min}-c\approx 0.$ This shift is not mandatory but may enhance numerical stability. \section{Two practical algorithms}\label{sec:IPP_alg} \subsection{The TT-IPP algorithm}\label{sec:ttIPP} In this section, we propose a practical IPP algorithm, TT-IPP, which utilizes the TT approximation to estimate the proximal operator \[ \prox_{tf}(x)\approx \left(\prox_{tf}^\delta(x)\right)_{\TT}, \] and allows the parameter $\delta$ to decrease adaptively. The TT estimate $\left(\prox_{tf}^\delta(x)\right)_{\TT}$ is computed as described in Section~\ref{sec:approx}. Details of TT-IPP are summarized in Algorithm~\ref{alg:ttIPP}. An initial TT approximation of the function $\psi:=\exp{\left(-f/\delta_0\right)}$ is computed first, and the parameter $\delta_k$ is decreased adaptively based on whether the current iterate achieves a sufficient function decrease, as written in Line~\ref{line:if_decrease} of Algorithm~\ref{alg:ttIPP}. For iterations where $\delta_k$ is not reduced, the previous TT approximation is reused. When $\delta_k$ is reduced by half, we compute the Hadamard product to efficiently update the TT approximation without the need for any additional function evaluations. According to the error analysis in Section~\ref{sec:error}, TT estimates of the proximal operators are accurate if the mesh size is small relative to the value of $\delta.$ When the mesh size $h_k$ is too large relative to the current parameter $\delta_k$, the mesh size is reduced and the TT approximation is refined over the new mesh grid. When using a uniform mesh, previous function evaluations may be reused in refining the TT approximation in Line~\ref{line:refineTT} if function evaluations are expensive.
Additionally, according to Theorem \ref{thm:conv}, a reasonable initial guess of the global minimizer can be obtained by approximating the integrals in \eqref{eqn:mean_gibbs} with $\phi$ replaced by the original objective function $f$, i.e., \begin{equation}\label{eq:initial} x^0\approx \frac{\int_\Omega x\exp\left(-f(x)/\delta_0\right)dx}{\int_\Omega \exp{(-f(x)/\delta_0)dx}}\,. \end{equation} The TT estimate of \eqref{eq:initial}, based on the initial TT approximation of $\exp{\left(-f/\delta_0\right)}$, can serve as a warm start for TT-IPP at a cost equivalent to a single iteration of the algorithm. \begin{algorithm}\caption{Tensor-Train Inexact Proximal Point Method (TT-IPP)}\label{alg:ttIPP} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0\in\Re^d$, $0<\delta_0< 1$, $0<\eta_-<1<\eta_+$, $0<\theta_1\le\theta_2<1$, $\bar\epsilon>0$, $0<\eta<1$, $T>0$, $0<\tau\le t_0\le T$, $h_0>0$, $\gamma>1$, $C>0$, $m\ge 1$, $k_{\max}>0$, and $\epsilon_{stop}>0$ \STATE $k \gets 0$ \STATE Compute $\left(\exp\left(\frac{-f}{\delta_{0}}\right)\right)_{\TT}$, a TT approximation of $\exp\left(\frac{-f}{\delta_0}\right)$ on a mesh grid of size $h_0$ by Algorithm~\ref{alg:TTcross} \WHILE{$k < k_{\max}$} \STATE $x^{k+1} \gets \left(\prox_{t_k f}^{\delta_k}(x^k)\right)_{\TT}$ \IF{$k \ge m-1$ \textbf{and} $f(x^{k+1}) > \max\{f(x^k), f(x^{k-1}),\dots,f(x^{k-m+1})\} - \eta/k$} \label{line:if_decrease} \STATE $\delta_{k+1} \gets \delta_k / 2$ \STATE $\left(\exp\left(\frac{-f}{\delta_{k+1}}\right)\right)_{\TT} \gets \left(\exp\left(\frac{-f}{\delta_{k}}\right)\right)_{\TT} \circ \left(\exp\left(\frac{-f}{\delta_{k}}\right)\right)_{\TT}$ via the Hadamard product \IF{$h_k > C \delta_k^\gamma$} \STATE $h_{k+1} \gets h_k / 2^{\lfloor \gamma \rfloor}$ \STATE Update $\left(\exp\left(\frac{-f}{\delta_{k+1}}\right)\right)_{\TT}$ on the refined mesh by Algorithm~\ref{alg:TTcross} \label{line:refineTT} \ENDIF \ELSE \STATE $\delta_{k+1} \gets \delta_k$, $h_{k+1} \gets h_k$ \STATE $\left(\exp\left(\frac{-f}{\delta_{k+1}}\right)\right)_{\TT} \gets \left(\exp\left(\frac{-f}{\delta_{k}}\right)\right)_{\TT}$ \ENDIF \STATE Determine $t_{k+1}$ using Lines~\ref{line:qk}--\ref{line:tk} of Algorithm~\ref{alg:IPP} \STATE $k \gets k + 1$ \IF{$\norm{x^{k+1} - x^k} < \epsilon_{stop}$} \STATE \textbf{Break} \ENDIF \ENDWHILE \STATE \textbf{Output:} last iterate $x^k$ \end{algorithmic} \end{algorithm} The convergence of TT-IPP is a corollary of the convergence of IPP methods proved in Theorem~\ref{thm:IPP}. \begin{cor}\label{cor:ttIPP} For $f:\Omega\subset\Re^d\to\Re$, where $\Omega$ is a bounded domain, suppose Assumptions~\ref{assum1} -- \ref{assum3} hold, and one of the conditions listed in Proposition~\ref{prop:single_prox} holds. Let $\Set{x^k}_{k\ge 0}$ be the sequence of iterates generated by Algorithm~\ref{alg:ttIPP}. If $T>0$ and $\gamma>1$ chosen for Algorithm~\ref{alg:ttIPP} are sufficiently large, and the TT approximation error $\epsilon_{\TT}$ in \eqref{eqn:epsilonTT} is sufficiently small, then $\Set{x^k}_{k\ge 0}$ converges to $x^*$ as $k\to\infty.$ In particular, for $f\in H^s{(\Omega)}\cap C^2(\Omega)$, it is sufficient if $\gamma > (d+2)/(2s)+1$, and $\epsilon_{\TT}$ satisfies \eqref{eqn:hepsilon2} at each iteration. \end{cor} \begin{proof} Assume, for the sake of contradiction, that $\lim\limits_{k\to\infty}\delta_k > 0$. Then the condition in Line~\ref{line:if_decrease} holds for only finitely many iterations. It follows that there exists a constant $K>0$ such that, for all $k>K$, \[ f(x^{k+1}) \le \max\{f(x^k), f(x^{k-1}),\cdots,f(x^{k-m+1})\}-\eta/k.
\] It can be shown by induction that, for all $k\ge m-1,$ \begin{equation}\label{eq:fdecrease} f(x^{k+1})\le\max{\left\{f(x^0)-\sum_{j\in\K_0\cap\K}\frac{\eta}{j},f(x^1)-\sum_{j\in \K_1\cap\K} \frac{\eta}{j}, \cdots,f(x^{m-1})-\sum_{j\in \K_{m-1}\cap\K} \frac{\eta}{j}\right\}}-c_k, \end{equation} where $\K=\Set{1,\cdots,k}$ and \[ \K_0=\Set{lm-1}_{l=1}^{\infty}, \K_1=\Set{lm}_{l=1}^{\infty}, \cdots, \K_{m-1}=\Set{(l+1)m-2}_{l=1}^{\infty}. \] The right-hand side of \eqref{eq:fdecrease} decreases to $-\infty$ as $k\to\infty$, which contradicts the assumption that $f$ is bounded below by $f(x^*).$ Hence, $\lim_{k\to\infty}\delta_k = 0$. As shown in the proof of Theorem~\ref{thm:IPP}, $t_k = T$ for all $k$ sufficiently large. If any of the conditions listed in Proposition~\ref{prop:single_prox} holds for $f$ and $T>0$ is sufficiently large, then the proximal operator $\prox_{t_k f}(x^k)$ is single-valued for all $k$ sufficiently large. According to Theorem~\ref{thm:conv} and the error analysis in Section~\ref{sec:error}, if $\gamma>1$ in Algorithm~\ref{alg:ttIPP} is sufficiently large and the TT approximation error $\epsilon_{\TT}$ is sufficiently small, then \eqref{eq:prox_error} holds. In particular, for $f\in H^s{(\Omega)}\cap C^2(\Omega)$, if $\gamma > (d+2)/(2s)+1$, and $\epsilon_{\TT}$ satisfies \eqref{eqn:hepsilon2} at each iteration, then by Corollary~\ref{cor:TT_error}, \eqref{eq:prox_error} holds. It follows from Theorem~\ref{thm:IPP} that, by choosing $T>0$ sufficiently large, $\Set{x^k}_{k\ge 0}$ is guaranteed to converge to $x^*$ as $k\to\infty.$ \end{proof} While a fine mesh ensures the theoretical convergence of TT-IPP according to the above corollary, using a relatively coarse mesh improves computational efficiency. The error bounds in Section~\ref{sec:error} are derived for worst-case scenarios \cite{quarteroni2010numerical} and are often larger than actual errors observed in practice. Therefore, in numerical experiments presented in Section~\ref{sec:experiments}, we heuristically select the control parameters for TT-IPP, guided by the theoretical error bounds. \subsection{The MC-IPP algorithm} In this section, we propose a practical IPP algorithm, MC-IPP, which is based on the Monte Carlo (MC) estimates of proximal operators. The MC estimate of a proximal operator is given by \[ \left(\prox_{t f}^{\delta}(x)\right)_{\MC} := \frac{\sum_{i=1}^N z^i\exp({-f(z^i)/\delta})}{\sum_{i=1}^N \exp({-f(z^i)/\delta})}\approx \prox_{tf}(x) \] for a sample size $N\ge 1$ and sample points $z^i\overset{\text{iid}}\sim \mathcal{N}(x, \delta t I)$ from the Gaussian distribution centered at \( x \). To motivate the design of this algorithm, we provide an analysis of the MC sample complexity under the assumptions in Theorem~\ref{thm_C2} on the function $\phi$ given in \eqref{eqn:phi}. 
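As an illustration of this estimator, the following minimal Python sketch (not our implementation; the log-sum-exp shift is an implementation convenience to guard against the underflow issues mentioned earlier, and the test objective is arbitrary) computes $\left(\prox_{t f}^{\delta}(x)\right)_{\MC}$ from Gaussian samples.
\begin{verbatim}
import numpy as np

def prox_mc(f, x, t, delta, N, rng=None):
    # MC estimate (prox_{t f}^{delta}(x))_MC with samples z^i ~ N(x, delta*t*I).
    rng = np.random.default_rng(0) if rng is None else rng
    Z = x + np.sqrt(delta * t) * rng.standard_normal((N, x.size))
    logw = -np.array([f(z) for z in Z]) / delta
    w = np.exp(logw - logw.max())          # the shift cancels in the ratio below
    return (w[:, None] * Z).sum(axis=0) / w.sum()

# Example: a nonconvex 2-D objective (illustrative only).
f = lambda z: np.sum(z**2) + 0.5 * np.sum(np.cos(5.0 * z))
print(prox_mc(f, np.array([1.0, -2.0]), t=1.0, delta=0.1, N=5000))
\end{verbatim}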
Without loss of generality, we assume $\phi_{\min}=0.$ Define \[ \beta_1:= \int z\exp({-\phi(z)/\delta}) dz \quad\textrm{ and }\quad \hat\beta_1 :=\frac{1}{N}\sum_{i=1}^N z^i\exp({-f(z^i)/\delta})\approx \beta_1. \] Then, by standard results of MC integration \cite{lemieux2009monte} and following similar arguments as in the proof of Theorem~\ref{thm_C2}, the expectation of $\hat \beta_1$ is $\expect{\hat \beta_1} = \beta_1$, with $\norm{\beta_1}=\bigO\left(\delta^{d/2}\right),$ and the covariance of $\hat\beta_1$ is diagonal, with \begin{align*} \norm{\Cov{\hat \beta_1}}_2 \le &\frac{1}{N} \int \norm{\sqrt{2\pi t\delta}z\exp\left(\frac{-f(z)}{\delta}\right)-\beta_1 e}^2 \frac{\exp{\left(-\norm{z-x}^2/(2t\delta)\right)}}{\sqrt{2\pi t\delta}}dz \\ = & \frac{1}{N} \int \sqrt{2\pi t\delta} \norm{z}^2 \exp{\left(\frac{-2f(z)}{\delta}-\frac{\norm{z-x}^2}{2t\delta}\right)}dz-\frac{\beta_1^2}{N}\\ =& \bigO\left(\frac{\delta^{(d+1)/2}}{N}\right). \end{align*} Similarly, define \[ \beta_2:= \int \exp({-\phi(z)/\delta}) dz \quad\textrm{ and }\quad \hat\beta_2 :=\frac{1}{N}\sum_{i=1}^N \exp({-f(z^i)/\delta}), \] then $\expect{\hat \beta_2} = \beta_2= \bigO\left(\delta^{d/2}\right)$ and $\Var{\hat \beta_2} = \bigO\left(\frac{\delta^{(d+1)/2}}{N}\right)$. Notice that, for arbitrary $\gamma\in (0,1)$, if $\norm{\beta_1-\hat\beta_1} =\bigO(\delta^{d/2+\gamma})$ and $\left|\beta_2-\hat\beta_2\right|=\bigO(\delta^{d/2+\gamma})$, then \[ \norm{\prox_{tf}(x)-\left(\prox_{t f}^{\delta}(x)\right)_{\MC}}\le \bigO(\delta)+ \norm{\frac{\beta_1}{\beta_2}-\frac{\hat \beta_1}{\hat\beta_2}}=\bigO(\delta^\gamma)\,. \] By Chebyshev's inequality, \[ \pr{\norm{\beta_1-\hat\beta_1}>\delta^{d/2+\gamma}}\le \norm{\Cov{\hat \beta_1}}_2/(\delta^{d+2\gamma}) \quad\textrm{ and }\quad \pr{\left|\hat\beta_2-\beta_2\right|>\delta^{d/2+\gamma}}\le \Var{\hat \beta_2}/(\delta^{d+2\gamma}), \] which implies \[ \pr{\norm{\prox_{tf}(x)-\left(\prox_{t f}^{\delta}(x)\right)_{\MC}}>\delta^\gamma} \le \bigO\left(\frac{1}{N\delta^{(d-1)/2+2\gamma}}\right). \] Therefore, for $\gamma\in (0,1/4)$, with a sample size $N = \bigO(\delta^{-d/2})$, \[ \pr{\norm{\prox_{tf}(x)-\left(\prox_{t f}^{\delta}(x)\right)_{\MC}}>\delta^\gamma} \le \bigO\left(\delta^{1/2-2\gamma}\right). \] This sample size requirement is impractical for high-dimensional problems. Fortunately, various variance reduction techniques \cite{lepage1980vegas,reddi2016stochastic, lemieux2009monte} may be applied to reduce the sample complexity. Here we consider a simple variance reduction technique called the exponentially weighted moving average (EWMA) \cite{ross2009probability, kingma2014adam}, which computes \[ x^{k+1} = \alpha \left(\prox_{t_k f}^{\delta_k}(x^k)\right)_{\MC}+(1-\alpha)x^{k} \] for a damping parameter $\alpha\in(0,1).$ EWMA reduces the variances of $\hat\beta_1$ and $\hat\beta_2$ to approximately $\bigO\left(\frac{\delta^{(d+1)/2}\alpha}{N (2-\alpha) }\right)$, thereby reducing the required sample size to \begin{equation}\label{eq:sampleComp} N = \bigO\left(\frac{\delta^{-d/2}\alpha}{2-\alpha}\right). \end{equation} In practice, we observe in our numerical experiments in Section~\ref{sec:experiments} that a sample size smaller than the scale of \eqref{eq:sampleComp} suffices to achieve a desirable empirical accuracy. Details of the MC-IPP algorithm are summarized in Algorithm~\ref{alg:mcIPP}.
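To give a sense of scale, with $d=10$, $\delta=0.1$, and $\alpha=0.2$, the bound \eqref{eq:sampleComp} corresponds to roughly $\delta^{-5}\cdot\alpha/(2-\alpha)\approx 10^{4}$ samples per iteration.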
By \eqref{eq:sampleComp}, when $\alpha$ is close to $0$, the iterates move slowly and the required sample size is small; whereas when $\alpha$ is close to $1,$ the iterates move fast and the required sample size is large. In MC-IPP, we adaptively update the damping parameter $\alpha_k$ at each iteration $k$ based on whether a sufficient decrease in the function value is achieved (see Line~\ref{line:alpha_min} and Line~\ref{line:alpha_max}). The algorithm also adaptively increases the sample size $N_k$ and decreases the parameter $\delta_k$. When an increase in the function value is observed compared to several previous iterates, we reject the MC estimate with a positive probability $p$ (see Line~\ref{line:reject}) and resample. Additionally, MC-IPP can be warm-started by computing an MC estimate of \eqref{eq:initial} with $N_0$ uniformly distributed sample points. \begin{algorithm}[htp!] \caption{Monte-Carlo Inexact Proximal Point Method (MC-IPP)}\label{alg:mcIPP} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0\in\Re^d$, $0<\delta_0< 1$, $0<\eta_-<1<\eta_+$, $0<\theta_1\le\theta_2<1$, $\bar\epsilon>0$, $0<\eta<1$, $T>0$, $0<\tau\le t_0\le T$, $0<\alpha_{\min}\le\alpha_0\le\alpha_{\max}\le 1$, $0<p<1$, $N_0>0$, $C>1$, $0<c<1$, $m\ge 1$, $k_{\max}>0$, and $\epsilon_{stop}>0$ \STATE $k \gets 0$ \WHILE{$k < k_{\max}$} \STATE $y^{k} \gets \alpha_k \left(\prox_{t_k f}^{\delta_k}(x^k)\right)_{\MC} + (1-\alpha_k)x^{k}$ \label{line:yk} \IF{$k \ge m-1$ \textbf{and} $f(y^{k}) > \max\{f(x^k), f(x^{k-1}), \dots, f(x^{k-m+1})\} - \eta/k$} \IF{$f(y^{k}) \ge \max\{f(x^k), f(x^{k-1}), \dots, f(x^{k-m+1})\}$} \STATE Reject $y^k$ and return to Line~\ref{line:yk} with probability $p$ \label{line:reject} \ENDIF \STATE $x^{k+1} \gets y^k$ \STATE $\delta_{k+1} \gets c \delta_k$ \STATE $\alpha_{k+1} \gets \max\{\alpha_{\min}, c\alpha_k\}$ \label{line:alpha_min} \STATE $N_{k+1} \gets C N_k$ \ELSE \STATE $x^{k+1} \gets y^k$ \STATE $\delta_{k+1} \gets \delta_k$ \STATE $\alpha_{k+1} \gets \min\{\alpha_k / c, \alpha_{\max}\}$ \label{line:alpha_max} \STATE $N_{k+1} \gets N_k$ \ENDIF \STATE Determine $t_{k+1}$ using Lines~\ref{line:qk}--\ref{line:tk} of Algorithm~\ref{alg:IPP} \STATE $k \gets k + 1$ \IF{$\norm{x^{k+1} - x^k} < \epsilon_{stop}$} \STATE \textbf{Break} \ENDIF \ENDWHILE \STATE \textbf{Output:} last iterate $x^k$ \end{algorithmic} \end{algorithm} The almost sure convergence of MC-IPP is a corollary of the convergence of IPP methods proved in Theorem~\ref{thm:sIPP}. \begin{cor} For $f:\Re^d\to\Re$, suppose Assumptions~\ref{assum1} -- \ref{assum3} hold, and one of the conditions listed in Proposition~\ref{prop:single_prox} holds. Let $\Set{x^k}_{k\ge 0}$ be the sequence of iterates generated by Algorithm~\ref{alg:mcIPP}. If $T>0$ and $C>1$ chosen for Algorithm~\ref{alg:mcIPP} are sufficiently large, and $\alpha_{\min}>1-\eta_-$, then $\Set{x^k}_{k\ge 0}$ converges to $x^*$ almost surely as $k\to\infty.$ \end{cor} \begin{proof} Assume, for the sake of contradiction, that $\lim\limits_{k\to\infty}\delta_k > 0$ or $\lim\limits_{k\to\infty} N_k<\infty$. Following similar arguments as in the proof of Corollary~\ref{cor:ttIPP}, the right-hand side of \eqref{eq:fdecrease} would decrease to $-\infty$ as $k\to\infty$, which contradicts the assumption that $f$ is bounded below. Therefore, $\lim\limits_{k\to\infty}\delta_k = 0$ and $\lim\limits_{k\to\infty} N_k=\infty.$ As shown in the proof of Theorem~\ref{thm:sIPP}, $t_k = T$ for all $k$ sufficiently large almost surely.
If any of the conditions listed in Proposition~\ref{prop:single_prox} holds for $f$ and $T>0$ is sufficiently large, then the proximal operator $\prox_{t_k f}(x^k)$ is single-valued for all $k$ sufficiently large almost surely. By Theorem~\ref{thm:conv} and standard results of MC integration, if $C>1$ in Algorithm~\ref{alg:mcIPP} is chosen sufficiently large, the condition \eqref{eq:pprox_error} holds. Therefore, if $T>0$ and $C>1$ chosen for Algorithm~\ref{alg:mcIPP} are sufficiently large, and $\alpha_{\min}>1-\eta_-$, by Theorem~\ref{thm:sIPP}, $\Set{x^k}_{k\ge 0}$ converges to $x^*$ almost surely as $k\to\infty.$ \end{proof} While a sufficiently large $C$ ensures the theoretical convergence of Algorithm~\ref{alg:mcIPP} according to the above corollary, a smaller $C$ reduces the required number of function evaluations. Therefore, in the numerical experiments presented in Section~\ref{sec:experiments}, the control parameters for MC-IPP are chosen heuristically. \section{Experiments}\label{sec:experiments} This section presents experimental results of TT-IPP and MC-IPP on a diverse set of benchmark functions and two practical applications.\footnote{The source code is available at \url{https://github.com/fq-han/ipp-global-opt}.} More applications are presented in Appendix~\ref{append:applications}, including the use of TT-IPP for solving the Hamilton-Jacobi equation \cite{evans2022partial} and the application of TT estimates of proximal operators for sampling from a nonconvex distribution \cite{liang2022proximal}. \subsection{Experiments on benchmark functions} \label{sec_Ex} In this section, we test the proposed IPP algorithms on benchmark functions from the established function library \cite{benchmark_Andrea}, unless stated otherwise. The performance of each algorithm in all numerical experiments is assessed using the accuracy metric \(\|x^k - x^*\|_{\infty}\), where \(x^*\) denotes the global minimizer, and \(x^k\) is the final iterate upon termination. For ease of comparison, test functions were shifted from their original definitions so that their global minimizers lie in \([-1,1]^d\) and the minimum value is $0$. We compare our algorithms with several existing global optimization algorithms discussed in Section~\ref{sec:prior}. For HJ-MAD and TT-Opt, we utilized the original implementations provided by the authors in \cite{heaton2024global, sozykin2022ttopt}. The implementation of CBO was based on the code from \cite{CBO_Analysis}. For PSO \cite{PSO_first}, PRS \cite{locatelli2013global}, and SA \cite{sa_original}, MATLAB's built-in functions were used, while the implementation of DE was from \cite{BuehrenDE_Code}. For particle-based methods including CBO, PSO, and DE, we used $40d$ particles in each iteration and reported the location of the best particle in the whole population. For the proposed TT-IPP in Algorithm \ref{alg:ttIPP} and MC-IPP in Algorithm \ref{alg:mcIPP}, choices of the control parameters are summarized in Table~\ref{table:parms}. These parameters were selected heuristically, guided by theoretical error bounds or theoretical sample complexity derived in previous sections. In general, we observed that our IPP algorithms exhibit robustness across different parameter choices. For TT-IPP, the domain of the test functions is restricted to $\Omega=[-5,5]^d$ due to the requirement of constructing TT approximations, and a uniform mesh was used.
Our implementation of TT-IPP is based on the implementation of the TT-cross algorithm \cite{savostyanov2014quasioptimality} in the TT toolbox \cite{TT-Toolbox}. The initial guess $x^0$ was chosen to be a TT estimate of \eqref{eq:initial} obtained on a coarse initial mesh. For MC-IPP, the initial sample size for estimating the proximal operator was set to be $N_0=40d$, where $d$ represents the dimension of each problem, and $x^0$ was chosen to be an MC estimate of \eqref{eq:initial} obtained using $40d$ initial sample points from the uniform distribution on $[-3,3]^d$. For other iterative solvers, $x^0$ was chosen randomly from the uniform distribution on $[-3,3]^d$. \begin{table}\centering \footnotesize \begin{tabular}{l|p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}p{0.39cm}} \toprule & $\delta_0$ & $h_0$ & $\eta_-$ & $\eta_+$ & $\theta_1$ & $\theta_2$ & $\bar{\epsilon}$ & $\eta$ & $T$ & $\tau$ & $t_0$ & $\gamma$ & $c$ & $C$ & $m$ & $\alpha_{\min}$ & $\alpha_{\max}$ & $p$ \\ \midrule \textbf{TT-IPP} & $0.1$ & $0.1$ & $0.5$ & $2$ & $0.25$ & $0.75$ & $0.2$ & $10^{-3}$ & $20$ & $0.5$ & $1$ & $1.1$ & -- & $10^3$ & $4$ & -- & -- & -- \\ \textbf{MC-IPP} & $0.1$ & -- & $0.9$ & $2$ & $0.25$ & $0.75$ & $0.2$ & $10^{-3}$ & $20$ & $0.5$ & $1$ & -- & $0.9$ & $1.1$ & $4$ & $0.2$ & $0.3$ &$0.8$ \\ \bottomrule \end{tabular} \caption{Control parameters for TT-IPP and MC-IPP algorithms} \label{table:parms} \end{table} Table \ref{table:TT_benchmark} compares TT-IPP with other solvers on benchmark functions under two scenarios: \begin{enumerate} \item The number of function evaluations required to achieve the desired accuracy. \item The final error after a fixed number of function evaluations. \end{enumerate} As shown in Table \ref{table:TT_benchmark}, TT-IPP significantly outperforms other methods, particularly in cases where $d\geq 20$. While the function evaluations for other methods increase almost exponentially with dimension, TT-IPP (and TT-Opt, to a lesser extent) demonstrates a nearly linear growth owing to the use of TT approximations. \begin{table}\[\def\arraystretch{1.3} \footnotesize \begin{array}{@{} l| *{5}{c} l |*{4}{c} @{}} \toprule \text{\makecell{Test\\ Problems}} & \multicolumn{5}{c@{}}{\text{Func. Eval. till $\|x_k-x^*\|_{\infty}\leq 10^{-2}$}} & &\multicolumn{4}{c@{}}{\text{Avg. Error after 500 K Func. 
Eval.}} \\ \cmidrule(l){2-6} \cmidrule(l){8-11} &\text{TT-IPP} &\text{DE} &\text{PSO} &\text{SA} &\text{TT-Opt} & &\text{TT-IPP} &\text{DE} &\text{PSO} &\text{SA}\\ \cline{1-11} \text{Griewank } \mathbb{R}^4 & \textbf{5379} & 17K & -& - & 22K & & \mathbf{3.31\times 10^{-4}} & 5.06 & 1.25 & 1.62 \\ \text{Griewank } \mathbb{R}^{10}& \textbf{14K} & - & - &- &54K & & \mathbf{5.15\times 10^{-5}} & 9.80 & 2.65 & 1.31 \\ \text{Griewank } \mathbb{R}^{20} & \textbf{38K} &- & - &- &117K & & \mathbf{6.22\times 10^{-4}} &6.37 & 2.13 & 2.13 \\ \text{Griewank } \mathbb{R}^{50} & \textbf{69K} & - &- &- &309K & & \mathbf{2.93\times 10^{-4}} & 3.78 & 1.39 & 1.61 \\ \text{Griewank } \mathbb{R}^{100} & \textbf{140K} & - & 476K& -&627K && \mathbf{2.97\times 10^{-4}} & 3.03& 1.90& 2.16\\ \cline{1-11} \text{Levy 3 } \mathbb{R}^{5} & \textbf{3899} & 10K & 11K &- &22K& & \mathbf{3.37\times 10^{-5}} & 9.00\times 10^{-5} & 7.67\times 10^{-4} & 2.25 \\ \text{Levy 3 } \mathbb{R}^{20} & \textbf{17K} & 384K & 87K &- &117K && {4.16\times 10^{-4}} & \mathbf{1.25\times 10^{-4}} & 7.94\times 10^{-4} & 2.81 \\ \text{Rastrigin } \mathbb{R}^{5} & \textbf{3899} & 12K & 8200 &- &22K & & \mathbf{6.89\times 10^{-4}} & 3.61\times 10^{-3} & 0.1002 & 2.68 \\ \text{Rastrigin } \mathbb{R}^{20} & \textbf{17K} & - & - & -&117K & & \mathbf{8.76\times 10^{-4}} &2.74 & 1.98 & 2.88 \\ \text{Ackley } \mathbb{R}^{5} & {11K} & {12K} & \textbf{10K} & -&43K & & \mathbf{1.05\times 10^{-5}} & 3.61\times 10^{-3} & 7.23\times 10^{-4} & 2.35 \\ \text{Ackley } \mathbb{R}^{20} & \textbf{112K} & 303K & 203K &- &235K & & \mathbf{4.44\times 10^{-5}} & 2.52\times 10^{-3}& 8.61\times 10^{-4} &2.85\\ \text{Ackley } \mathbb{R}^{50} & \textbf{336K} & - & - &- &618K & & \mathbf{4.02\times 10^{-4}} & 3.22 & 9.48\times 10^{-4} & 2.95 \\ \cline{1-11} \text{Brown } \mathbb{R}^{10} & {7396} & 55K & 27K &\textbf{7152}&457K & &<10^{-16} & <10^{-16} & 8.94\times 10^{-4} &7.34\times 10^{-4} \\ \text{Exponential } \mathbb{R}^{10} & \textbf{7494} & - & - & - &-& &\mathbf{<10^{-16}} &2.68 & 1.46 & 3.12\\ \text{Trid } \mathbb{R}^{10} & \textbf{7434} & 47K & 34K &7591 &107K & & {1.18\times 10^{-8}} & \mathbf{ <10^{-16}} & 7.41\times 10^{-4} & 9.64\times 10^{-4} \\ \text{Schaffer 1 } \mathbb{R}^{5} \cite{test_problems_2005} & \textbf{27K} & 292K & -&- &43K && \mathbf{6.46\times 10^{-8}} &2.81 & 2.96 & 2.88 \\ \text{Corrugated } \mathbb{R}^{5} & \textbf{114K} & 322K & - & -&163K & &\mathbf{3.15\times 10^{-7}} & 0.431 & 0.515 & 2.445 \\ \text{Cos. Mix. } \mathbb{R}^{20} & \textbf{15K} & - &- &17K &235K& & \mathbf{6.43\times 10^{-6}}& 8.75 & 2.62 & 6.07\times 10^{-4} \\ \text{Alpine 1 } \mathbb{R}^{5} & \textbf{26K} & 30K &- & -&41K & & \mathbf{4.72\times 10^{-4}} & 3.75\times 10^{-3} & 2.21 &3.14 \\ \bottomrule \end{array} \] \caption{Comparison of TT-IPP with other algorithms. The values on the left represent the number of function evaluations required to achieve the desired accuracy $\|x-x^*\|_{\infty} \leq 10^{-2}$, with ``-" indicating that the method fails to converge after $1000K$ function evaluations. The values on the right represent final errors achieved either when $\|x-x^*\|_{\infty} \leq 10^{-3}$ or after $500K$ function evaluations. For the last three methods, the results are the average errors obtained from 50 random initial guesses.} \label{table:TT_benchmark} \end{table} In Table~\ref{table:MC_benchmark}, we compare MC-IPP and other solvers when only a limited number of function evaluations are allowed.
TT-IPP and TT-Opt are excluded from the comparison because they both rely on the computation of TT approximations on a mesh grid, which reduces randomness in their results but also limits their ability to explore larger domains. Results in Table~\ref{table:MC_benchmark} show that MC-IPP outperforms other methods in most cases, consistently demonstrating its advantage for functions defined on $\mathbb{R}^{10}$ and $\mathbb{R}^{20}$, making it a strong candidate for global optimization under resource constraints. Figure~\ref{fig:path} illustrates the trajectories of different optimization algorithms. From Figure~\ref{fig:path}, it can be observed that TT-IPP provides a robust and direct convergence path to the global minimizer, leveraging the information across the entire domain efficiently. For MC-IPP, although the trajectory exhibits slight oscillations and requires more iterations, it maintains a direct and reliable path to the global minimizer without being trapped at local minimizers. In contrast, algorithms like DE and PSO tend to wander or converge to local minimizers within the domain. Additionally, Table~\ref{tab:tt_comparison} compares the performance of TT-IPP in iteratively minimizing the Schaffer 02 function on \(\mathbb{R}^{10}\) with the direct evaluation of \eqref{eq:initial} for a small fixed \(\delta\). TT-IPP terminates when \(\|x^k - x^*\|_{\infty} \leq 10^{-5}\), while for the direct evaluation, the mesh size for the TT approximation is set as \(h = \delta / 2\). The first two rows of the table demonstrate that starting with a larger initial \(\delta\) enables TT-IPP to obtain a good initial guess on a coarser mesh, thereby reducing the number of function evaluations required for convergence. On the other hand, using a smaller \(\delta\) leads to a TT approximation with a lower rank, as shown in Figure~\ref{fig:regula}, which reduces the associated storage requirements and computational costs. A comparison of the first two rows (TT-IPP results) with the last three rows (results from direct integral evaluation) demonstrates that TT-IPP achieves significantly higher accuracy with fewer function evaluations. This underscores the advantages of TT-IPP in delivering accurate solutions while ensuring computational efficiency. \begin{table}\[\def\arraystretch{1.3} \footnotesize \begin{array}{@{} l| *{7}{c} l} \toprule \text{\makecell{\\Optimization\\ Algorithm}} & \multicolumn{7}{c@{}}{\text{Error after $10^4$ Func. Eval. in $\mathbb{R}^{10}$}} & \\ \cmidrule(l){2-8} & \text{MC-IPP} & \text{HJ-MAD} & \text{CBO} & \text{PRS} & \text{DE} & \text{PSO} & \text{SA} & \\ \cline{1-8} \text{Griewank} & \mathbf{6.65 \times 10^{-2}} & 1.75 & >5 & 4.28 & >5 & >5 & >5 \\ \text{Levy 3 } & \mathbf{6.49 \times 10^{-1}} & 1.80 & 4.52 & 2.78 & 1.29 & 1.68 & 4.72 \\ \text{Zakharov} & \mathbf{1.71 \times 10^{-1}} & 1.81 & 3.14 & 2.56 & 4.06 & 3.12 \times 10^{-1} & 1.28 \\ \text{Ackley} & \mathbf{7.81 \times 10^{-2}} & 1.70 & 2.77 & 2.04 & 7.14 \times 10^{-1} & 8.93 \times 10^{-2} & 4.46 \\ \text{Rosenbrock}& \mathbf{2.70 \times 10^{-1}} & 2.06 & 3.97 & 2.28 & 1.20 & 1.00 & 1.57 \\ \text{Brown}& 1.08 \times 10^{-1} & 1.59 & >5 & 2.39 & 1.37 & \mathbf{1.02 \times 10^{-1}} & 1.00 \\ \text{Exponential}& \mathbf{7.01 \times 10^{-2}} & 1.89 & >5 & 4.35 & 4.50 & >5 & 4.63 \\ \text{Cos. 
Mix.} &\mathbf{6.75 \times 10^{-2}} & 1.82 & >5 & 2.48 & >5 & >5 & >5 \\ \text{Alpine 1}& \mathbf{1.81 \times 10^{-1}} & 2.54 & 4.79 & 3.32 & >5 & 4.17 & 3.85 \\ \text{Dropwave}& \mathbf{3.21 \times 10^{-1}} & 1.71 & 3.15 & 2.19 & 1.96 & 4.18 \times 10^{-1} & 4.31 \\ \toprule & \multicolumn{7}{c@{}}{\text{Error after $4\times 10^4$ Func. Eval. in $\mathbb{R}^{20}$}} & \\ \cmidrule(l){1-8} \text{Griewank} & \mathbf{8.11 \times 10^{-2}} & 2.49 & >5 & 4.43 & >5 & 4.47 & 4.67 \\ \text{Rosenbrock} & \mathbf{5.97 \times 10^{-1}} & 2.16 & 4.49 & 3.23 & >5 & 1.40 & 1.08 \\ \text{Exponential} & \mathbf{1.15 \times 10^{-1}} & 2.88 & >5 & 4.69 & 4.77 & >5 & >5 \\ \text{Cos. Mix.} & \mathbf{5.85 \times 10^{-2}} & 2.38 & >5 & 4.22 & >5 & >5 & >5 \\ \text{Alpine 1} & \mathbf{1.33 \times 10^{-1}} & 2.92 & >5 & 3.64 & >5 & >5 & 4.31 \\ \text{Dropwave} & 5.07 \times 10^{-1} & {2.38} & 4.62 & 3.70 & 4.12 & \mathbf{4.69 \times 10^{-1}} & 4.85 \\ \toprule \end{array} \] \caption{Comparison of MC-IPP with other algorithms. Average errors over $10$ runs are reported for benchmark functions on $\mathbb{R}^{10}$ and $\mathbb{R}^{20}$, with a maximum number of function evaluations.} \label{table:MC_benchmark} \end{table} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{pix/convergence_path_Ras.eps} \includegraphics[width=0.45\linewidth]{pix/convergence_path_Schaffer.eps} \caption{Trajectories of different algorithms for minimizing Rastrigin function (left) and Schaffer 1 function (right).} \label{fig:path} \end{figure} \begin{table}\centering \footnotesize \begin{tabular}{c|c|c|c|c|c} \hline & $\|x_k - x^*\|_{\infty}$ & \makecell{$\delta_K$} & \makecell{Initial \\ TT rank}& \makecell{Final \\ TT rank}& Func. Eval. \\ \hline \makecell{TT-IPP \\with $\delta_0 = 0.5$} & $2.89 \times 10^{-8}$ & $3.91\times 10^{-3}$ & 7 & 1 &59K \\\hline \makecell{TT-IPP \\with $\delta_0 = 0.1$} & $2.21\times 10^{-7}$ &$1.6\times 10^{-3}$ & 4 & 1 & 101K \\\hline \makecell{Evaluating \eqref{eq:initial} \\with $\delta = 0.05$} & $5.82 \times 10^{-2}$ &- & 4& -& 218K \\\hline \makecell{Evaluating \eqref{eq:initial} \\with $\delta = 0.01$} & $1.87 \times 10^{-3}$ & - & 2 &-& 514K \\ \hline \end{tabular} \caption{Comparison of TT-IPP using different initial $\delta_0$ and directly evaluating \eqref{eq:initial} for Schaffer 02 function with $d = 10$, where $\delta_K$ denotes the value of $\delta$ at termination.}\label{tab:tt_comparison} \end{table} \subsection{Practical applications} We test our algorithms using two practical optimization problems from engineering applications. \begin{enumerate} \item The first example is from \cite{soley2021iterative}, which involves a black-box optimization problem for identifying the global minimum energy configuration in a model of a DNA chain consisting of \( d = 50 \) hydrogen-bonded adenine-thymine (A-T) base pairs. The model calculates the total energy (in electronvolts) as a sum over all base pairs, where the energy of each base pair depends on the proton's position \( x_i \). The global minimum at \( x_i^* = -1 \) corresponds to the stable A-T configuration, while \( x_i = 1 \) represents the less stable tautomeric A*-T* configuration. This energy landscape results in a potential energy surface with \( 2^{50} \) local minima, reflecting the vast number of possible protonation states.
Determining the minimum energy configuration is biologically significant, as abnormal hydrogen bonding can disrupt correct base pairing during DNA replication, a process linked to genetic mutations and cancer formation. \item The second example is from a financial application described in \cite{roncalli2013introduction}, where the goal is to optimize a portfolio such that each equity contributes equally to the overall risk. The objective function is defined as \[ f(w) = \sum_{i=1}^d\left(w_i(\Sigma w)_i - \frac{\sqrt{w^T\Sigma w}}{d}\right)^2, \] where $d = 10$, \(w_i\) represents the weight of each portfolio component, and \(\Sigma\) is the covariance matrix of returns. This optimization problem is nonconvex due to the square root term involving the variance. To address the constraint \(\sum w_i = 1\), a penalty term is added to the objective function. Additionally, since portfolio weights must be non-negative and bounded by 1, the search domain is restricted to \([0,1]^d\). For the covariance matrix \(\Sigma\), we pick $\Sigma_{ij} = \exp(-|i-j|^2/4)$. \end{enumerate} \begin{table}\centering \[ \footnotesize \begin{array}{c|c|c|c|c|c|c|c|c|c} \hline \text{Problems} & \text{TT-IPP} & \text{MC-IPP} & \text{HJ-MAD} & \text{CBO} & \text{PRS} & \text{DE} & \text{PSO} & \text{SA} & \text{TT-Opt} \\\hline \text{1} & 2.11 \times 10^{-15} & 1.56 & 2.34 & 1.95 & 1.27 & 4.36 & 2.34 & 2.00 & 5 \\ \text{2} & 7.92 \times 10^{-3} & 4.87 \times 10^{-2} & 0.550 & 0.981 & 0.236 & 0.784 & 0.599 & 0.397 & 9.21 \times 10^{-2} \\ \hline \end{array} \] \caption{Error of methods for solving two practical problems using up to $100K$ function evaluations.} \label{tab:practical} \end{table} Table~\ref{tab:practical} reports the error $\norm{x^k-x^*}_\infty$ at termination for different methods used to solve the two practical problems with a limit of $100K$ function evaluations, highlighting the effectiveness of TT-IPP and MC-IPP. \section{Conclusions}\label{sec:conclud} In this work, we formulate a theoretical framework for inexact proximal point (IPP) methods for the global optimization of continuous nonconvex functions, establishing convergence guarantees under mild assumptions when either deterministic or stochastic estimates of proximal operators are used. The convergence of the expectation under the associated Gibbs measure as $\delta\to 0^+$ is established, and the convergence rate of $\bigO(\delta)$ is derived under additional assumptions. These results serve as a theoretical foundation for evaluating proximal operators inexactly using sampling-based methods such as MC integration. Additionally, we introduce a new TT-based approach, accompanied by an analysis of the estimation error. Furthermore, we propose two practical IPP algorithms. TT-IPP leverages TT estimates of the proximal operators, while MC-IPP employs MC integration to estimate the proximal operators. Both algorithms are designed to adaptively balance efficiency and accuracy in inexact evaluations of proximal operators. The effectiveness of the two algorithms is demonstrated through experiments on a diverse set of benchmark functions and various applications. The two IPP algorithms each have their advantages and limitations. 
While traditional global optimization methods typically incur computational costs that increase exponentially with the problem dimensionality, TT-IPP employs the randomized TT cross algorithm and leverages the Sobolev smoothness of functions to circumvent the curse of dimensionality, making it suitable for higher-dimensional problems. However, constructing a TT approximation over a mesh grid involves higher initial costs and restricts the search space of TT-IPP to a bounded domain, limiting its applicability to functions defined on larger or unbounded domains. On the other hand, MC-IPP benefits from easy implementation and is not restricted to bounded domains. Despite employing the exponentially weighted moving average technique to reduce variance, the sample size required to achieve reliable MC estimates may still become impractically large in high-dimensional settings, consequently constraining its applicability in such scenarios. Future work includes exploring other variance reduction techniques and rejection sampling \cite{gomes2023derivativefree} to enhance the performance of MC-IPP, developing strategies to integrate the strengths of TT and MC techniques for improved efficiency, training machine learning models to approximate proximal operators \cite{cassioli2012machine}, and extending the algorithms to optimization problems with constraints and noise. \bibliographystyle{siamplain} \bibliography{ref} \appendix \section{Other applications}\label{append:applications} In this section, we explore the use of TT-IPP for solving the Hamilton-Jacobi equation and the application of TT estimates of proximal operators for sampling from a nonconvex distribution. \subsection{Solving Hamilton-Jacobi Equation} We aim to solve the following Hamilton-Jacobi (HJ) equation: \[ \begin{cases} \frac{\partial u}{\partial t} + H(\nabla u) = 0, & t > 0\,, \\ u(x, 0) = f(x), & t = 0\,. \end{cases} \] According to the Hopf-Lax formula, when \( H \) is convex, the solution is given by \[ u(x, t) = \min_y \left\{ f(y) + t H^*\left( \frac{x - y}{t} \right) \right\}, \quad t > 0\,. \] We apply our proposed TT-IPP algorithm to solve this optimization problem, obtaining an approximation \(\tilde{y}\) to the global minimizer \(y\) and constructing an approximate solution \(\tilde{u}\) as \begin{equation} \label{appx_inf_conv_HJ} \tilde{u}(x, t) = f\left(\tilde{y}(x, t)\right) + t H^*\left(\frac{\tilde{y}(x, t) - x}{t}\right)\,. \end{equation} To measure the accuracy of the solution, we introduce the residual function \[ r(x, t) := \left| \frac{\partial \tilde{u}(x, t)}{\partial t} + H(\nabla \tilde{u}(x, t)) \right|\,. \] We investigate the accuracy of the approximation \eqref{appx_inf_conv_HJ} for different convex Hamiltonians given by \[ H(x) = \frac{\|x\|_p^p}{p}, \quad H^*(x) = \frac{\|x\|_q^q}{q}, \quad \frac{1}{p} + \frac{1}{q} = 1\,. \] The \( L^2 \)-norm of the residual function is computed for various values of \( \delta \) and dimensions. We fix \( t = 1 \) and evaluate the residual function at 100 randomly sampled points in \( x \in [-2, 2]^d \), with two different nonconvex initial conditions $f_1(x) = \|x\|_{1/2}^{1/2}$ and $f_2(x) = \sum_{i=1}^d(\sin(\pi x_i)+1)$. The results are summarized in Table~\ref{table_HJ}, demonstrating that our mesh-free approximation can provide a reasonably accurate approximation to the solution of the original HJ equation.
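Note that for $p=q=2$ the Hopf--Lax solution reduces to the Moreau envelope of $f$, \[ u(x,t)=\min_y\left\{f(y)+\frac{1}{2t}\|x-y\|_2^2\right\}, \] so the minimizer $y$ coincides with $\prox_{tf}(x)$ and \eqref{appx_inf_conv_HJ} is obtained directly from the TT estimate of the proximal operator.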
Figure~\ref{fig:hj_solution} presents contour plots of the 2D slice of the approximate solution to the HJ equation, with initial data \( \|x\|_{1/2}^{1/2} \) and Hamiltonian \( H(u) = \|u\|_2^2 / 2 \), evaluated at \( t = 0 \), \( t = 0.2 \), and \( t = 2 \) in a 10-dimensional space. \begin{table}\centering \begin{tabular}{c|c|c|c|c|c|c} \hline \( \mathbf{d} \) & \(\makecell{p=2\\f= f_1}\) & \(\makecell{p=4\\f= f_1}\) & \(\makecell{p=6\\f= f_1}\) & \(\makecell{p=2\\f= f_2}\) & \(\makecell{p=4\\f= f_2}\) & \(\makecell{p=6\\f= f_2}\) \\ \hline 32 & \(1.18\times 10^{-3} \) & \( 9.22 \times 10^{-4} \) & \(1.07\times 10^{-3} \) & \( 2.16\times 10^{-3} \) & \(1.57 \times 10^{-3} \) & \( 1.24 \times 10^{-3} \) \\ 64 & \(2.48 \times 10^{-3} \) & \( 1.31 \times 10^{-3} \) & \( 2.97 \times 10^{-3} \) & \( 4.03 \times 10^{-3} \) & \(2.15 \times 10^{-3} \) & \( 1.75 \times 10^{-3} \) \\ 128 & \(3.51 \times 10^{-3} \) & \( 2.75 \times 10^{-3} \) & \( 3.43 \times 10^{-3} \) & \( 6.67\times 10^{-3} \) & \( 3.54 \times 10^{-3} \) & \(2.81 \times 10^{-3} \) \\ \hline \end{tabular} \label{table_HJ} \caption{$L^2$-norm of residual function for different \( d \), \(p \), and initial conditions $f_1(x) = \|x\|_{1/2}^{1/2}$, $f_2(x) = \sum_{i=1}^d(\sin(\pi x_i)+1)$.} \end{table} \begin{figure} \centering \includegraphics[width=0.32\linewidth]{pix/hj_t0.eps} \includegraphics[width=0.32\linewidth]{pix/hj_t1.eps} \includegraphics[width=0.32\linewidth]{pix/hj_t2.eps} \caption{Contour plots of the Hamilton-Jacobi equation solution with initial condition \( \|x\|_{1/2} \) and Hamiltonian \( H(u) = \|u\|_2^2 \) at \( t = 0 \), \( t = 0.2 \), and \( t = 2 \) in a 10-dimensional space.} \label{fig:hj_solution} \end{figure} \subsection{Sampling} We aim to sample from a target distribution \[ \rho^*(x) := \frac{1}{Z} \exp(-f(x)), \] where \( f(x) \) is a known potential function, and \( Z \) is the normalization constant. In sampling methods based on the restricted Gaussian Oracle (RGO) \cite{liang2022proximal, RGO_Lee_2021}, the sampling process requires solving a proximal problem at each iteration, where our proposed TT estimates of proximal operators can be applied. This approach is particularly appealing because it is inherently unbiased. The proximal sampling method using the RGO proceeds iteratively as follows: \begin{enumerate} \item Sample \( y_{k+1} \sim \exp\left(- \frac{1}{2\eta}\|x_k - y\|_2^2\right) \). \item Sample \( x_{k+1} \sim \exp\left(-f(x) - \frac{1}{2\eta}\|x - y_{k+1}\|_2^2\right) \). \end{enumerate} The first sub-step involves straightforward sampling, while the second sub-step, which requires sampling from the RGO, is more challenging due to the possible non-convexity of \( f(x) \). To address this difficulty, \cite{liang2022proximal} proposes approximating the second distribution by sampling from \[ \exp(-h_w^{y_k}(x)),\quad \text{ where }\quad h_w^{y_k}(x) = f(w) + \nabla f(w) \cdot (x - w) - \frac{M}{2}\|x - w\|_2^2 + \frac{1}{2\eta}\|x - y_k\|_2^2\,, \] where \( w \) is defined as the solution to the proximal subproblem \begin{equation} \label{w_prox_sample} w = \arg\min_x \left\{ f(x) + \frac{1}{2\eta}\|x - y\|_2^2 \right\}, \end{equation} and \( M \) is a sufficiently large constant chosen to ensure that \( h_w^{y_k}(x) \leq f^{y_k}(x) := f(x) + \frac{1}{2\eta}\|y_k - x\|_2^2 \). The RGO sampling is then implemented using a rejection sampling framework: \begin{enumerate} \item Solve the proximal problem to compute \( w \). 
\item Sample from \(\exp(- h_w^{y_k} )\) which is a Gaussian with respect to $x$. \item Perform rejection sampling by utilizing the relation \( h_w^{y_k}(x) \leq f^{y_k}(x) \), to obtain \( x_k \). \end{enumerate} Our method can be applied to efficiently solve the proximal problem in the first step, which is a critical component of each iteration in the sampling algorithm. Existing approaches, such as those in \cite{liang2022proximal}, typically employ gradient-based methods that rely on iterative gradient evaluations of \( f(x) \). However, these methods lack rigorous convergence guarantees when \( f(x) \) is non-convex, limiting their applicability in such cases. As an example, consider sampling from the following non-log-concave distribution: \[ \rho^*(x) = \frac{1}{Z}\left[\exp(-c_1\|x + c_2\|_1) + \exp(-c_1\|x - c_2\|_2^2)\right], \] where \( Z \) is a normalization constant, \( c_1 = 1 \), and \( c_2 = 2 \). We set \( \delta = 0.01 \) and use \eqref{eq:x_TT} to approximate the optimizer in \eqref{w_prox_sample}, comparing it with the Nesterov’s accelerated gradient method as used in \cite{liang2022proximal}. Since the target distribution in Figure~\ref{fig:sampling} is non-log-concave and multimodal, the gradient-based optimization method shown in the first row often gets trapped in local minimizers, making it slow to recover all modes accurately. In contrast, the sampling results obtained by directly evaluating the integral in \eqref{eqn:mean_gibbs}, which has a global convergence guarantee and is depicted in the second row, effectively identify all modes. \begin{figure}[H] \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/initial.eps} \label{fig:3a}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/xk2_1.eps} \label{fig:3b}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/xk2_2.eps} \label{fig:3c}} \\ \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/xk1_1.eps} \label{fig:3d}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/xk1_2.eps} \label{fig:3e}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{pix/xk4_1.eps} \label{fig:3f}} \caption{Distribution of points in the first dimension. First row from left to right: initial points, points after $20$ and $40$ iterations with accelerated gradient method to solve proximal map. Second row from left to right: points after $20$ and $40$ iterations with TT integration, and points after $20$ iterations with MC integration to approximate proximal map with $d = 4$.} \label{fig:sampling} \end{figure} \end{document}
2412.11473v1
http://arxiv.org/abs/2412.11473v1
Optimal interpolation in Hardy and Bergman spaces: a reproducing kernel Banach space approach
\documentclass[reqno]{amsart} \usepackage{amsmath,amssymb,amsthm,mathrsfs,graphicx} \usepackage{mathtools} \usepackage{tikz} \usetikzlibrary{positioning} \usepackage{color} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \pagestyle{plain} \theoremstyle{plain} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \newcommand{\BA}{{\mathbb A}}\newcommand{\BB}{{\mathbb B}} \newcommand{\BC}{{\mathbb C}}\newcommand{\BD}{{\mathbb D}} \newcommand{\BE}{{\mathbb E}}\newcommand{\BF}{{\mathbb F}} \newcommand{\BG}{{\mathbb G}}\newcommand{\BH}{{\mathbb H}} \newcommand{\BI}{{\mathbb I}}\newcommand{\BJ}{{\mathbb J}} \newcommand{\BK}{{\mathbb K}}\newcommand{\BL}{{\mathbb L}} \newcommand{\BM}{{\mathbb M}}\newcommand{\BN}{{\mathbb N}} \newcommand{\BO}{{\mathbb O}}\newcommand{\BP}{{\mathbb P}} \newcommand{\BQ}{{\mathbb Q}}\newcommand{\BR}{{\mathbb R}} \newcommand{\BS}{{\mathbb S}}\newcommand{\BT}{{\mathbb T}} \newcommand{\BU}{{\mathbb U}}\newcommand{\BV}{{\mathbb V}} \newcommand{\BW}{{\mathbb W}}\newcommand{\BX}{{\mathbb X}} \newcommand{\BY}{{\mathbb Y}}\newcommand{\BZ}{{\mathbb Z}} \newcommand{\cA}{{\mathcal A}}\newcommand{\cB}{{\mathcal B}} \newcommand{\cC}{{\mathcal C}}\newcommand{\cD}{{\mathcal D}} \newcommand{\cE}{{\mathcal E}}\newcommand{\cF}{{\mathcal F}} \newcommand{\cG}{{\mathcal G}}\newcommand{\cH}{{\mathcal H}} \newcommand{\cI}{{\mathcal I}}\newcommand{\cJ}{{\mathcal J}} \newcommand{\cK}{{\mathcal K}}\newcommand{\cL}{{\mathcal L}} \newcommand{\cM}{{\mathcal M}}\newcommand{\cN}{{\mathcal N}} \newcommand{\cO}{{\mathcal O}}\newcommand{\cP}{{\mathcal P}} \newcommand{\cQ}{{\mathcal Q}}\newcommand{\cR}{{\mathcal R}} \newcommand{\cS}{{\mathcal S}}\newcommand{\cT}{{\mathcal T}} \newcommand{\cU}{{\mathcal U}}\newcommand{\cV}{{\mathcal V}} \newcommand{\cW}{{\mathcal W}}\newcommand{\cX}{{\mathcal X}} \newcommand{\cY}{{\mathcal Y}}\newcommand{\cZ}{{\mathcal Z}} \newcommand{\bA}{{\mathbf A}}\newcommand{\bB}{{\mathbf B}} \newcommand{\bC}{{\mathbf C}}\newcommand{\bD}{{\mathbf D}} \newcommand{\bE}{{\mathbf E}}\newcommand{\bF}{{\mathbf F}} \newcommand{\bG}{{\mathbf G}}\newcommand{\bH}{{\mathbf H}} \newcommand{\bI}{{\mathbf I}}\newcommand{\bJ}{{\mathbf J}} \newcommand{\bK}{{\mathbf K}}\newcommand{\bL}{{\mathbf L}} \newcommand{\bM}{{\mathbf M}}\newcommand{\bN}{{\mathbf N}} \newcommand{\bO}{{\mathbf O}}\newcommand{\bP}{{\mathbf P}} \newcommand{\bQ}{{\mathbf Q}}\newcommand{\bR}{{\mathbf R}} \newcommand{\bS}{{\mathbf S}}\newcommand{\bT}{{\mathbf T}} \newcommand{\bU}{{\mathbf U}}\newcommand{\bV}{{\mathbf V}} \newcommand{\bW}{{\mathbf W}}\newcommand{\bX}{{\mathbf X}} \newcommand{\bY}{{\mathbf Y}}\newcommand{\bZ}{{\mathbf Z}} \newcommand{\sA}{{\mathscr A}}\newcommand{\sB}{{\mathscr B}} \newcommand{\sC}{{\mathscr C}}\newcommand{\sD}{{\mathscr D}} \newcommand{\sE}{{\mathscr E}}\newcommand{\sF}{{\mathscr F}} \newcommand{\sG}{{\mathscr G}}\newcommand{\sH}{{\mathscr H}} \newcommand{\sI}{{\mathscr I}}\newcommand{\sJ}{{\mathscr J}} \newcommand{\sK}{{\mathscr K}}\newcommand{\sL}{{\mathscr L}} \newcommand{\sM}{{\mathscr M}}\newcommand{\sN}{{\mathscr N}} 
\newcommand{\sO}{{\mathscr O}}\newcommand{\sP}{{\mathscr P}} \newcommand{\sQ}{{\mathscr Q}}\newcommand{\sR}{{\mathscr R}} \newcommand{\sS}{{\mathscr S}}\newcommand{\sT}{{\mathscr T}} \newcommand{\sU}{{\mathscr U}}\newcommand{\sV}{{\mathscr V}} \newcommand{\sW}{{\mathscr W}}\newcommand{\sX}{{\mathscr X}} \newcommand{\sY}{{\mathscr Y}}\newcommand{\sZ}{{\mathscr Z}} \newcommand{\fA}{{\mathfrak A}}\newcommand{\fB}{{\mathfrak B}} \newcommand{\fC}{{\mathfrak C}}\newcommand{\fD}{{\mathfrak D}} \newcommand{\fE}{{\mathfrak E}}\newcommand{\fF}{{\mathfrak F}} \newcommand{\fG}{{\mathfrak G}}\newcommand{\fH}{{\mathfrak H}} \newcommand{\fI}{{\mathfrak I}}\newcommand{\fJ}{{\mathfrak J}} \newcommand{\fK}{{\mathfrak K}}\newcommand{\fL}{{\mathfrak L}} \newcommand{\fM}{{\mathfrak M}}\newcommand{\fN}{{\mathfrak N}} \newcommand{\fO}{{\mathfrak O}}\newcommand{\fP}{{\mathfrak P}} \newcommand{\fQ}{{\mathfrak Q}}\newcommand{\fR}{{\mathfrak R}} \newcommand{\fS}{{\mathfrak S}}\newcommand{\fT}{{\mathfrak T}} \newcommand{\fU}{{\mathfrak U}}\newcommand{\fV}{{\mathfrak V}} \newcommand{\fW}{{\mathfrak W}}\newcommand{\fX}{{\mathfrak X}} \newcommand{\fY}{{\mathfrak Y}}\newcommand{\fZ}{{\mathfrak Z}} \newcommand{\tilA}{\tilde{A}}\newcommand{\tilB}{\tilde{B}} \newcommand{\tilC}{\tilde{C}}\newcommand{\tilD}{\tilde{D}} \newcommand{\tilE}{\tilde{E}}\newcommand{\tilF}{\tilde{F}} \newcommand{\tilG}{\tilde{G}}\newcommand{\tilH}{\tilde{H}} \newcommand{\tilI}{\tilde{I}}\newcommand{\tilJ}{\tilde{J}} \newcommand{\tilK}{\tilde{K}}\newcommand{\tilL}{\tilde{L}} \newcommand{\tilM}{\tilde{M}}\newcommand{\tilN}{\tilde{N}} \newcommand{\tilO}{\tilde{O}}\newcommand{\tilP}{\tilde{P}} \newcommand{\tilQ}{\tilde{Q}}\newcommand{\tilR}{\tilde{R}} \newcommand{\tilS}{\tilde{S}}\newcommand{\tilT}{\tilde{T}} \newcommand{\tilU}{\tilde{U}}\newcommand{\tilV}{\tilde{V}} \newcommand{\tilW}{\tilde{W}}\newcommand{\tilX}{\tilde{X}} \newcommand{\tilY}{\tilde{Y}}\newcommand{\tilZ}{\tilde{Z}} \newcommand{\wtilA}{\widetilde{A}}\newcommand{\wtilB}{\widetilde{B}} \newcommand{\wtilC}{\widetilde{C}}\newcommand{\wtilD}{\widetilde{D}} \newcommand{\wtilE}{\widetilde{E}}\newcommand{\wtilF}{\widetilde{F}} \newcommand{\wtilG}{\widetilde{G}}\newcommand{\wtilH}{\widetilde{H}} \newcommand{\wtilI}{\widetilde{I}}\newcommand{\wtilJ}{\widetilde{J}} \newcommand{\wtilK}{\widetilde{K}}\newcommand{\wtilL}{\widetilde{L}} \newcommand{\wtilM}{\widetilde{M}}\newcommand{\wtilN}{\widetilde{N}} \newcommand{\wtilO}{\widetilde{O}}\newcommand{\wtilP}{\widetilde{P}} \newcommand{\wtilQ}{\widetilde{Q}}\newcommand{\wtilR}{\widetilde{R}} \newcommand{\wtilS}{\widetilde{S}}\newcommand{\wtilT}{\widetilde{T}} \newcommand{\wtilU}{\widetilde{U}}\newcommand{\wtilV}{\widetilde{V}} \newcommand{\wtilW}{\widetilde{W}}\newcommand{\wtilX}{\widetilde{X}} \newcommand{\wtilY}{\widetilde{Y}}\newcommand{\wtilZ}{\widetilde{Z}} \newcommand{\whatA}{\widehat{A}}\newcommand{\whatB}{\widehat{B}} \newcommand{\whatC}{\widehat{C}}\newcommand{\whatD}{\widehat{D}} \newcommand{\whatE}{\widehat{E}}\newcommand{\whatF}{\widehat{F}} \newcommand{\whatG}{\widehat{G}}\newcommand{\whatH}{\widehat{H}} \newcommand{\whatI}{\widehat{I}}\newcommand{\whatJ}{\widehat{J}} \newcommand{\whatK}{\widehat{K}}\newcommand{\whatL}{\widehat{L}} \newcommand{\whatM}{\widehat{M}}\newcommand{\whatN}{\widehat{N}} \newcommand{\whatO}{\widehat{O}}\newcommand{\whatP}{\widehat{P}} \newcommand{\whatQ}{\widehat{Q}}\newcommand{\whatR}{\widehat{R}} \newcommand{\whatS}{\widehat{S}}\newcommand{\whatT}{\widehat{T}} 
\newcommand{\whatU}{\widehat{U}}\newcommand{\whatV}{\widehat{V}} \newcommand{\whatW}{\widehat{W}}\newcommand{\whatX}{\widehat{X}} \newcommand{\whatY}{\widehat{Y}}\newcommand{\whatZ}{\widehat{Z}} \newcommand{\al}{\alpha} \newcommand{\be}{\beta} \newcommand{\ga}{\gamma}\newcommand{\Ga}{\Gamma} \newcommand{\de}{\delta}\newcommand{\De}{\Delta} \newcommand{\ep}{\epsilon}\newcommand{\vep}{\varepsilon} \newcommand{\ze}{\zeta} \newcommand{\te}{\theta}\newcommand{\vth}{\vartheta} \newcommand{\Th}{\Theta} \newcommand{\io}{\iota} \newcommand{\ka}{\kappa} \newcommand{\la}{\lambda}\newcommand{\La}{\Lambda} \newcommand{\vpi}{\varpi} \newcommand{\vro}{\varrho} \newcommand{\si}{\sigma}\newcommand{\vsi}{\varsigma}\newcommand{\Si}{\Sigma} \newcommand{\up}{\upsilon}\newcommand{\Up}{\Upsilon} \newcommand{\vph}{\varphi} \newcommand{\om}{\omega}\newcommand{\Om}{\Omega} \newcommand{\bal}{{\boldsymbol\alpha}} \newcommand{\bbe}{{\boldsymbol\beta}} \newcommand{\bga}{{\boldsymbol\gamma}} \newcommand{\bGa}{{\boldsymbol\Gamma}} \newcommand{\bde}{{\boldsymbol\delta}} \newcommand{\bDe}{{\boldsymbol\Delta}} \newcommand{\bep}{{\boldsymbol\epsilon}} \newcommand{\bvep}{{\boldsymbol\varepsilon}} \newcommand{\bze}{{\boldsymbol\zeta}} \newcommand{\bte}{{\boldsymbol\theta}} \newcommand{\bvth}{{\boldsymbol\vartheta}} \newcommand{\bTh}{{\boldsymbol\Theta}} \newcommand{\bio}{{\boldsymbol\iota}} \newcommand{\BFa}{{\boldsymbol\kappa}} \newcommand{\bla}{{\boldsymbol\lambda}} \newcommand{\bLa}{{\boldsymbol\Lambda}} \newcommand{\bvpi}{{\boldsymbol\varpi}} \newcommand{\bvro}{{\boldsymbol\varrho}} \newcommand{\bsi}{{\boldsymbol\sigma}} \newcommand{\bvsi}{{\boldsymbol\varsigma}} \newcommand{\bSi}{{\boldsymbol\Sigma}} \newcommand{\bups}{{\boldsymbol\upsilon}} \newcommand{\bUp}{{\boldsymbol\Upsilon}} \newcommand{\bvph}{{\boldsymbol\varphi}} \newcommand{\bom}{{\boldsymbol\omega}} \newcommand{\bOm}{{\boldsymbol\Omega}} \newcommand{\bet}{{\boldsymbol\eta}} \newcommand{\bmu}{{\boldsymbol\mu}} \newcommand{\bnu}{{\boldsymbol\nu}} \newcommand{\bxi}{{\boldsymbol\xi}} \newcommand{\bXi}{{\boldsymbol\Xi}} \newcommand{\bpi}{{\boldsymbol\pi}} \newcommand{\bPi}{{\boldsymbol\Pi}} \newcommand{\brh}{{\boldsymbol\rho}} \newcommand{\btau}{{\boldsymbol\tau}} \newcommand{\bphi}{{\boldsymbol\phi}} \newcommand{\bPhi}{{\boldsymbol\Phi}} \newcommand{\bchi}{{\boldsymbol\chi}} \newcommand{\bpsi}{{\boldsymbol\psi}} \newcommand{\bPsi}{{\boldsymbol\Psi}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\eig}{\operatorname{eig}} \newcommand{\im}{\operatorname{Im}} \newcommand{\re}{\operatorname{Re}} \newcommand{\kr}{\operatorname{Ker}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\spec}{r_\textup{spec}} \newcommand{\codim}{\operatorname{codim}} \newcommand{\imag}{\textup{i}\,} \newcommand{\degr}{\operatorname{deg}} \newcommand{\Dom}{\operatorname{Dom}} \newcommand{\Index}{\operatorname{Index}} \newcommand{\Row}{\operatorname{Row}} \newcommand{\Col}{\operatorname{Col}} \newcommand{\mat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\pmat}[1]{\ensuremath{\begin{pmatrix} #1 \end{pmatrix}}} \newcommand{\sm}[1]{\begin{smallmatrix}#1\end{smallmatrix}} \newcommand{\sbm}[1]{\left[\begin{smallmatrix} #1\end{smallmatrix}\right]} \newcommand{\sbpm}[1]{\left(\begin{smallmatrix} #1\end{smallmatrix}\right)} \newcommand{\ov}[1]{{\overline{#1}}} \newcommand{\un}[1]{{\underline{#1}}} \newcommand{\inn}[2]{\ensuremath{\langle #1,#2 \rangle}} \newcommand{\tu}[1]{\textup{#1}} \newcommand{\wtil}[1]{{\widetilde{#1}}} 
\newcommand{\what}[1]{{\widehat{#1}}} \newcommand{\sfrac}[2]{\mbox{\ensuremath{\frac{#1}{#2}}}} \newcommand{\bvz}{\bigvee_{n=0}^\infty } \newcommand{\bvo}{\bigvee_{n=1}^\infty } \newcommand{\half}{\frac{1}{2}} \newcommand{\ands}{\quad\mbox{and}\quad} \newcommand{\ors}{\quad\mbox{or}\quad} \newcommand{\ons}{\mbox{ on }} \definecolor{purple}{rgb}{.6,0,.7} \newcommand{\tcp}{\textcolor{purple}} \newcommand{\tcb}{\textcolor{blue}} \definecolor{green}{rgb}{.0,.6,.0} \newcommand{\tcg}{\textcolor{green}} \newcommand{\tcr}{\textcolor{red}} \begin{document} \title{Optimal interpolation in Hardy and Bergman spaces:\ a reproducing kernel Banach space approach} \author[G.J. Groenewald]{Gilbert J. Groenewald} \address{G.J. Groenewald, School of Mathematical and Statistical Sciences, North-West University, Research Focus: Pure and Applied Analytics, Private Bag X6001, Potchefstroom 2520, South Africa} \email{[email protected]} \author[S. ter Horst]{Sanne ter Horst} \address{S. ter Horst, School of Mathematical and Statistical Sciences, North-West University, Research Focus: Pure and Applied Analytics, Private~Bag X6001, Potchefstroom 2520, South Africa and DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), Johannesburg, South Africa} \email{[email protected]} \author[H.J. Woerdeman]{Hugo J. Woerdeman} \address{H.J. Woerdeman, Department of Mathematics, Drexel University, Philadelphia, PA 19104, USA} \email{[email protected]} \thanks{This work is based on the research supported in part by the National Research Foundation of South Africa (Grant Numbers 118513 and 127364). In addition, HW is partially supported by NSF grant DMS-2000037.} \subjclass[2020]{46E15, 42B30, 30H10, 46B10, 46B25} \keywords{Reproducing kernel Banach space, optimal interpolation, Hardy spaces} \begin{abstract} After a review of the reproducing kernel Banach space framework and semi-inner products, we apply the techniques to the setting of Hardy spaces $H^p$ and Bergman spaces $A^p$, $1<p<\infty$, on the unit ball in $\BC^n$, as well as the Hardy space on the polydisk and half-space. In particular, we show how the framework leads to a procedure to find a minimal norm element $f$ satisfying interpolation conditions $f(z_j)=w_j$, $j=1,\ldots , n$. We also explain the techniques in the setting of $\ell^p$ spaces where the norm is defined via a change of variables and provide numerical examples. \end{abstract} \maketitle \section{Introduction}\label{S:Intro} The reproducing kernel framework has led to very useful techniques in dealing with Hilbert spaces of functions, especially in interpolation and approximation problems. Based on ideas developed in the early twentieth century (see \cite{aron} for details), the general theory of reproducing kernel Hilbert spaces goes back to the work of Aronszajn \cite{aron0, aron}, followed up by, for instance, \cite{S-NK} and later with a more applied focus in \cite{Aizerman}. A recent comprehensive account can be found in the monograph \cite{PaulsenR}. More recently, starting with \cite{Boser}, reproducing kernels have been used extensively in the setting of machine learning; see, e.g., \cite{Fine, Fuku, Gretton, Slavakis, SFL11}. It were actually the applications to machine learning that inspired generalizing the framework to Banach spaces, leading to the notion of reproducing kernel Banach space, as introduced in \cite{ZXZ09}, combined with semi-inner product techniques from \cite{G67}. 
The advantage of moving out of the Hilbert space setting is that this allows for a larger variety of norms, enabling one to deal with more intricate problems, while still maintaining a large part of the theory, subject to the Banach space geometry conditions one is willing to accept. One of the seminal results underlying many of the applications is the so-called Representer Theorem, going back to Wahba \cite{W90}, extended to the Hilbert space framework in \cite{SHS01} and to the Banach space setting in \cite{ZXZ09}. It has since been used extensively to address machine learning topics in a Banach space setting, such as Shannon sampling \cite{ZZ11}, multi-task learning problems \cite{ZZ13}, support vector machines \cite{FHY15,XY19} and converse sampling \cite{CM22}, to mention just a few; cf., \cite{LZZ22} for an up-to-date account and a unified framework incorporating many recent developments. In recent years, much attention has been devoted to extending the framework to non-reflexive Banach spaces, usually built with an $\ell^1$-based construction, cf., \cite{WX,CX21,U21,LWXY23,WXY}, since norms constructed in this way often lead to sparse solutions for the problem under consideration; for an insightful recent review paper and many further references see \cite{X}. A drawback of the reproducing kernel Banach space approach is the additional computational complexity compared to the Hilbert space setting. In particular, when using semi-inner products, the duality operator connecting the semi-inner product on a Banach space $\cX$ and the duality pairing between $\cX$ and its dual space becomes a nonlinear map which in many concrete Banach spaces is not known explicitly, and it is this duality operator that plays an essential role in the solution to optimal interpolation and regularization problems provided by the Representer Theorem. Motivated by this observation, we consider optimal interpolation problems in Hardy $H^p$ spaces and Bergman $A^p$ spaces on various single- and multi-variable domains, and analyze what the reproducing kernel Banach space approach provides in these concrete cases. Some of these problems have been considered as extremal problems in the complex analysis literature since the 1950s, cf., \cite{MR50,RS}, and we draw from techniques developed in these papers to obtain more concrete results and illustrate these with some numerical examples. While the details are worked out for Hardy and Bergman spaces, we hope that our results will also give some insight into how one can approach interpolation problems in other Banach spaces of functions that can be placed in the reproducing kernel Banach space setting; in the last subsection we have already developed the results in the setting of $\ell^p$, where we use a norm depending on a change of basis. In the present paper, we only consider Hardy $H^p$ spaces and Bergman $A^p$ spaces for $1<p<\infty$. Given the recent interest in non-reflexive Banach spaces, we plan to return to this topic in a follow-up paper in which the cases $p=1$ and $p=\infty$ are also addressed. The paper is organized as follows. In Section \ref{S:SIP-RKBS} we review the general theory of semi-inner product reproducing kernel Banach spaces, including the first representer theorem, which yields optimality conditions for a minimal norm interpolant. In Section \ref{S:SIPsubspace} we consider how the main ingredients of the theory get translated when restricting to a subspace of a semi-inner product space.
In Section \ref{S:Hp interpolation} we provide the details how the Hardy spaces and Bergman spaces appear as semi-inner product reproducing kernel Banach spaces. In Section \ref{S:num} we provide the numerical details how one constructs minimal norm interpolants in $H^p$ on the unit disk and we also illustrate the results in the setting of $\ell^p$ with a change of basis. \section{Semi-inner product reproducing kernel Banach spaces: general theory}\label{S:SIP-RKBS} In this section we give a brief review of some of the theory concerning semi-inner product reproducing kernel Banach spaces. Such spaces play an important role in machine learning theory. The most common examples of reproducing kernel Banach spaces are those generated by a semi-inner product, although there are also examples that do not fall in the semi-inner product framework; we refer to \cite{LZZ22} for a recent overview. We will only consider the semi-inner product framework here, since the special cases we consider later on fit well in this framework, as we will see in Section \ref{S:Hp interpolation}. Also, we shall only discuss spaces over the complex numbers here. \subsection{Semi-inner product spaces}\label{SubS:SIP} The notion of a semi-inner product space goes back to Lumer \cite{L61}. Most of the material in this subsection originates from the seminal paper of Giles \cite{G67}. A {\em semi-inner product space} is a vector space $\cX$ (over $\BC$) together with a map $[\cdot ,\cdot ]:\cX\times\cX \to \BC$ such that for all $x,y,z\in\cX$ and $\al\in\BC$: \begin{itemize} \item[(SIP1)] $[x+\al y, z]=[x,z]+\al[y,z]$; \item[(SIP2)] $[x,\al y]=\ov{\al}[x,y]$; \item[(SIP3)] $[x,x]\geq 0$ and $[x,x]=0$ implies $x=0$; \item[(SIP4)] $|[x,y]|^2\leq [x,x] [y,y]$. \end{itemize} The difference with an inner product is that a semi-inner product need not be additive in the second component. Indeed, as was shown for instance in \cite[Proposition 6] {ZXZ09}, additivity in the second component, is equivalent to $[x,y]=\overline{[y,x]}$ for all $x,y\in \cX$. Nonetheless, it is easy to see that a semi-inner product also defines a norm $\|\cdot \|$ on $\cX$ in the usual way: \begin{equation}\label{SIPnorm} \|x\|=[x,x]^\half,\quad x\in\cX. \end{equation} Hence each semi-inner product space is also a normed space. Conversely, by \cite[Theorem 1]{G67}, each normed space $\cX$ admits a semi-inner product $[\cdot,\cdot]$ such that its norm is given by \eqref{SIPnorm}. In that case we say that $[\cdot,\cdot]$ is a semi-inner product on $\cX$ that generates the norm of $\cX$. The existence proof in \cite{G67} relies on the Hahn-Banach theorem, and as a result, in general, there is no construction of the semi-inner product and it need not be unique. We return to this topic in Subsection \ref{SubS:SIPunique}. Let $(\cX,[\cdot,\cdot])$ be a semi-inner product space with norm $\|\cdot\|$. We then define the dual space $\cX'$ of the normed space $\cX$ in the usual way, and denote the duality pairing between $\cX$ and $\cX'$ by $\inn{\cdot}{\cdot}_{\cX,\cX'}$, i.e., \begin{equation}\label{dualpairing} \inn{x}{\varphi}_{\cX,\cX'}=\varphi(x),\quad x\in\cX,\varphi\in\cX'. \end{equation} Using the semi-inner product, each element $x\in\cX$ defines an element $x^{\star_\cX}$ in $\cX'$ via \begin{equation}\label{Star} x^{\star_\cX}(y)=[y,x],\quad y\in\cX. 
\end{equation} Note that in case $\cZ$ is a closed complemented subspace of $\cX$, then restricting the semi-inner product of $\cX$ to $\cZ$ provides a semi-inner product on $\cZ$ which generates the norm on $\cZ$ obtained by restricting the norm of $\cX$. However, the dual space of $\cZ$ may not be so straightforward to determine, and as a result the $\star_\cZ$-operation applied to $x\in\cZ$ may be different compared to when $x$ is viewed as an element of $\cX$. The map $x\mapsto x^{\star_\cX}$ connects the semi-inner product and the duality pairing via the following formula: \begin{equation}\label{SIPvsDP} [y,x]=\inn{y}{x^{\star_\cX}},\quad x,y\in\cX. \end{equation} It is easy to see that $(\la x)^{\star_\cX}=\ov{\la} x^{\star_\cX}$ for any $\la\in\BC$ and $x\in\cX$. However, the $\star_\cX$-operation is additive if and only if the semi-inner product is an inner product. Indeed, since the duality pairing is additive in the second component, it follows directly from \eqref{SIPvsDP} that additivity of the $\star_\cX$-operation corresponds to additivity of the semi-inner product in the second component. To illustrate the above, we consider the semi-inner product of $L^p$; cf., \cite[Section 3]{G67} for more details and proofs. \begin{example}\label{Ex:Lp} Let $(\Omega , {\mathcal F}, \mu)$ be a measure space. For $1<p<\infty$, let $L^p=L^p(\Omega , {\mathcal F}, \mu)$ denote the (equivalence classes of) measurable functions $f$ on $\Omega$ for which the norm \[ \|f\|_{L^p}:=\left( \int_\Omega|f|^p \tu{d}\mu \right)^{1/p} \] is finite. The dual space is isometrically isomorphic to $L^q=L^q(\Omega , {\mathcal F}, \mu)$, with $1<q<\infty$ such that $\frac{1}{p}+\frac{1}{q}=1$, with the duality pairing \[ \inn{f}{g}= \int_\Omega f g \ \tu{d}\mu,\quad f\in L^p, g\in L^q. \] When $\Omega$ is a topological space and ${\mathcal F}$ is the Borel $\sigma$-algebra we may write $L^p(\Omega , \mu)$ instead of $L^p(\Omega , {\mathcal F}, \mu)$. As it turns out, by results reviewed in the next subsection, the semi-inner product of $L^p$ is unique, and the $\star_{L^p}$-operation associating the semi-inner product and the duality pairing is given by \[ f^{\star_{L^p}}= \frac{1}{\|f\|_{L^p}^{p-2}} \overline{f}|f|^{p-2}. \] Here $\overline{f}(x)=\overline{f(x)}$ and $|f|(x)= |f(x)|$. (Note that in the above formula one needs to interpret $0 |0|^{p-2}$ as $0$ and also understand that ${\bf 0}^{\star_{L^p}}={\bf 0}$.) In fact, the map $f\mapsto f^{\star_{L^p}}$ provides a non-additive (if $p\neq 2$) isometric bijection from $L^p$ to $L^q$, with inverse map the $\star_{L^q}$-operation on $L^q$; it is an interesting exercise to verify directly that $(f^{\star_{L^p}})^{\star_{L^q}}=f$ for each $f\in L^p$. It then follows that the semi-inner product on $L^p$ is given by \[ [f,h]=\inn{f}{h^{\star_{L^p}}}=\frac{1}{ \|h\|_{L^p}^{p-2}}\int_\Omega f\overline{h} |h|^{p-2} \tu{d}\mu. \] \end{example} We note that the above definition of $\star_{L^p}$ also works for $p=1$. However, in this case the semi-inner product is not unique and we will not pursue the $p=1$ case further in this paper. \subsection{Uniqueness of the generating semi-inner product}\label{SubS:SIPunique} The theory of semi-inner products discussed in the previous section does not require any conditions on the normed space. If one requires more from the semi-inner product, assumptions have to be made about the normed space. 
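To illustrate why such assumptions are needed, consider $\BC^2$ equipped with the (non-smooth) norm $\|(a,b)\|=|a|+|b|$. For a fixed parameter $t\in[-1,1]$ and $x=(x_1,x_2)$, $y=(y_1,y_2)$ in $\BC^2$ one may, for instance, set
\[
[x,y]_t=\|y\|\left(x_1\frac{\overline{y_1}}{|y_1|}+x_2\frac{\overline{y_2}}{|y_2|}\right) \mbox{ if } y_1y_2\neq 0, \qquad
[x,y]_t=\|y\|\frac{\overline{y_1}}{|y_1|}\left(x_1+t\,x_2\right) \mbox{ if } y_1\neq 0=y_2,
\]
with the analogous formula when $y_1=0\neq y_2$, and $[x,0]_t=0$. A direct verification shows that each $[\cdot,\cdot]_t$ satisfies (SIP1)--(SIP4) and generates the norm $\|\cdot\|$, while $[(0,1),(1,0)]_t=t$, so that different values of $t$ yield genuinely different generating semi-inner products; compare the comment on the case $p=1$ made after Example \ref{Ex:Lp}.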
In the remainder of this section we shall assume that $\cX$ is a Banach space and discuss how various conditions on the Banach space geometry improve the behaviour of the semi-inner product. Let $(\cX,\|\cdot\|)$ be a Banach space, with generating semi-inner product $[\cdot,\cdot]$. Define the {\em unit sphere} $S_\cX$ and {\em closed unit ball} $B_\cX^{\rm cl}$ of $\cX$ by \[ S_\cX=\{x\in\cX \colon \|x\|=1\}\ands B_\cX^{\rm cl}=\{x\in\cX \colon \|x\|\leq 1\}. \] Now recall that $\cX$ is called {\em smooth} if each $x_0\in S_\cX$ is a {\em point of smoothness} for $B_\cX^{\rm cl}$, that is, if there exists a unique $\varphi\in\cX'$ with $\|\varphi\|=1=\varphi(x_0)$. Moreover, the semi-inner product on $\cX$ is called {\em continuous} if for any $x,y\in S_\cX$: \[ \re([y,x+ty])\to\re([y,x]) \mbox{ as } \BR\ni t \to 0, \] and {\em uniformly continuous} whenever this convergence is uniform in $(x,y)\in S_\cX\times S_\cX$. We then have the following characterizations of uniqueness of the semi-inner product generating the norm of $\cX$; cf., \cite[Theorems 1 and 3]{G67} and their proofs as well as \cite[Theorem 5.4.17]{M98}. \begin{theorem}\label{T:SIPunique1} For a Banach space $(\cX,\|\cdot\|)$ and a semi-inner product $[\cdot,\cdot]$ that generates the norm of $\cX$, the following are equivalent: \begin{itemize} \item[(1)] $[\cdot,\cdot]$ is the unique semi-inner product that generates the norm of $\cX$; \item[(2)] $\cX$ is a smooth Banach space; \item[(3)] $[\cdot,\cdot]$ is a continuous semi-inner product. \end{itemize} \end{theorem} The above theorem does not provide a way to determine the semi-inner product on $\cX$. For this one needs to differentiate the norm of $\cX$. For a given function $h:\cX\to \BC$, the {\em G\^{a}teaux derivative} (or {\em G-derivative} for short) of $h$ at a point $x\in\cX$ in direction $y\in\cX$ is given by \[ D h(x,y)=\lim_{\BR\ni t\to 0}\frac{h(x+ty)- h(x)}{t} \] in case the limit exists. We say that $h$ is {\em G-differentiable at $x$ in direction $y$} whenever $Dh(x,y)$ exists, $h$ is called {\em G-differentiable at $x$} if it is G-differentiable at $x$ in each direction $y\in\cX$, and $h$ is called {\em G-differentiable} if it is G-differentiable at each $x\in\cX$. Note that the limit in the definition of $Dh(x,y)$ is only over real values of $t$, despite $\cX$ being a complex Banach space. Whenever the limit in the definition of $Dh(x,y)$ is uniform in $(x,y)\in \cX\times \cX$, we call $h$ {\em uniformly Fr\'{e}chet differentiable}; note that by \cite[Proposition 5.3.4]{AH05}, uniform Fr\'{e}chet differentiability implies Fr\'{e}chet differentiability. Of specific interest in Banach space geometry are Banach spaces for which the norm is G-differentiable (resp.\ uniformly Fr\'{e}chet differentiable). If that is the case, the Banach space is called {\em G-differentiable} (resp.\ {\em uniformly Fr\'{e}chet differentiable}). We can now provide another criterion for uniqueness of the semi-inner product, which does provide a formula for the (unique) semi-inner product. \begin{theorem}\label{Dnorm} A Banach space $(\cX,\|\cdot\|)$ has a unique semi-inner product that generates its norm if and only if it is G-differentiable. In that case, the unique semi-inner product on $\cX$ that generates its norm is given by \[ [x,y]=\|y\|\left(D\|\cdot\|(y,x)+i D\|\cdot\|(iy,x)\right), \quad x,y\in\cX. \] \end{theorem} The first claim is \cite[Theorem 3]{G67}; the formula for the unique semi-inner product follows from the proof in \cite{G67}.
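As a sanity check of this formula, suppose that the norm of $\cX$ is induced by an inner product $\langle\cdot,\cdot\rangle$. For $y\neq 0$ one computes $D\|\cdot\|(y,x)=\frac{\re\langle x,y\rangle}{\|y\|}$ and, since $\|iy\|=\|y\|$, also $D\|\cdot\|(iy,x)=\frac{\re\langle x,iy\rangle}{\|y\|}=\frac{\im\langle x,y\rangle}{\|y\|}$, so that the formula in Theorem \ref{Dnorm} gives
\[
[x,y]=\re\langle x,y\rangle+i\,\im\langle x,y\rangle=\langle x,y\rangle,
\]
that is, for a Hilbert space the unique semi-inner product generating the norm is the inner product itself, as expected.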
In particular, the previous two theorems show that the (unique) semi-inner product of a Banach space is continuous if and only if the norm is G-differentiable. It is also the case that the (unique) semi-inner product of a Banach space is uniformly continuous if and only if the norm is uniformly Fr\'{e}chet differentiable \cite[Theorem 3]{G67}. \subsection{Further improvements of the semi-inner product} In the case of a smooth Banach space $\cX$, or, equivalently, when there exists a continuous semi-inner product generating the norm, variations on orthogonality defined via the norm and semi-inner product also coincide. Recall from \cite{G67} that for $x,y\in\cX$, we say that $x$ is {\em normal} to $y$ and $y$ is {\em transversal} to $x$ if $[y,x]=0$. Note that normality and transversality are not the same, since $[y,x]$ need not be equal to the conjugate of $[x,y]$. Birkhoff \cite{Birkhoff} and James \cite{James} considered a notion of orthogonality in normed spaces defined as: \[ \mbox{$x$ is {\em orthogonal} to $y$ if } \|x+\la y\|\geq \|x\| \mbox{ for each $\la\in\BC$}. \] The following result was proved in \cite[Theorem 2]{G67}. \begin{theorem}\label{normal} Let $(\cX,\|\cdot\|)$ be a smooth Banach space. Then for all $x,y\in\cX$, $x$ is normal to $y$ if and only if $x$ is orthogonal to $y$. \end{theorem} A Banach space $(\cX,\|\cdot\|)$ is called {\em uniformly convex} if for every $\vep>0$ there exists a $\de>0$ such that for all $x,y\in S_\cX$ we have $\|\frac{x+y}{2}\|\leq 1-\de$ whenever $\|x-y\|>\vep$. Also, $\cX$ is called {\em strictly convex} whenever $\|x\|+\|y\|=\|x+y\|$ for $x,y\in\cX\backslash\{0\}$ implies that $y=\la x$ for some $\la>0$. Any uniformly convex Banach space is also strictly convex. Recall that a Banach space that is either uniformly convex or uniformly Fr\'{e}chet differentiable must be reflexive \cite[Theorem 9.11]{FHHMZ11}. In case the Banach space is uniformly convex as well as smooth, we also have an analogue of the Riesz representation theorem for the semi-inner product \cite[Theorem 6]{G67}. \begin{theorem}\label{T:Riesz} Let $(\cX,\|\cdot\|)$ be a uniformly convex and smooth Banach space. Then for each $\varphi\in\cX'$ there exists a unique $y\in\cX$ such that $\varphi(x)=[x,y]$ for all $x\in\cX$. Moreover, we have $\|\varphi\|_{\cX'}=\|y\|_\cX$. \end{theorem} From the uniqueness of the semi-inner product, due to the uniform convexity assumption, it is clear that the vector $y$ must be such that $y^{\star_\cX}=\varphi$. In other words, when the Banach space $\cX$ is uniformly convex and smooth, then the theorem states that the map $x\mapsto x^{\star_\cX}$ is an isometric bijection mapping $\cX$ onto $\cX'$. Finally, in case the smoothness condition in Theorem \ref{T:Riesz} is replaced by the stronger condition that the (unique) semi-inner product is uniformly continuous, then an even more complete duality between $\cX$ and $\cX'$ as semi-inner product spaces is obtained \cite[Theorem 7]{G67}. \begin{theorem}\label{T:Duality} Let $(\cX,\|\cdot\|)$ be a uniformly convex Banach space that admits a uniformly continuous semi-inner product. Then $\cX'$ is also uniformly convex with a uniformly continuous semi-inner product which is defined by $[y^{\star_\cX},x^{\star_\cX}]_{\cX'}=[x,y]_{\cX}$. 
\end{theorem} In terms of the $\star$-operations, under the condition in the theorem, $\cX$ is reflexive and subject to the identification $\cX\simeq \cX''$, the theorem implies that the inverse map of $x\mapsto x^{\star_\cX}$ is given by the $\star$-operation on $\cX'$, i.e., $(x^{\star_\cX})^{\star_{\cX'}}=x$ and $(\varphi^{\star_{\cX'}})^{\star_{\cX}}=\varphi $ for all $x\in\cX$ and $\varphi\in\cX'$. Indeed, for the first identity note that for all $x,y\in\cX$ we have \begin{align*} [x,y]_\cX &=[y^{\star_\cX}, x^{\star_\cX}]_{\cX'} =\inn{y^{\star_\cX}}{(x^{\star_\cX})^{\star_{\cX'}}}_{\cX',\cX''}\\ &= \inn{(x^{\star_\cX})^{\star_{\cX'}}}{y^{\star_\cX}}_{\cX,\cX'} =[(x^{\star_\cX})^{\star_{\cX'}},y]_\cX, \end{align*} so that for all $x,y\in\cX$ we have \[ \inn{x-(x^{\star_\cX})^{\star_{\cX'}}}{y^{\star_\cX}}=[x-(x^{\star_\cX})^{\star_{\cX'}},y]_\cX=0 \] and we can conclude that $x=(x^{\star_\cX})^{\star_{\cX'}}$. The second identity follows by reflexivity. In conclusion, we have the following result. \begin{corollary} Let $(\cX,\|\cdot\|)$ be a uniformly convex Banach space that admits a uniformly continuous semi-inner product, so that $\cX$ is reflexive. Then the map $x\mapsto x^{\star_\cX}$ is an isometric bijection mapping $\cX$ onto $\cX'$ whose inverse map is given by $\varphi\mapsto \varphi^{\star_{\cX'}}$. \end{corollary} Next we briefly return to the examples of the earlier subsection. \begin{example}\label{Ex:Lp2} For $1<p<\infty$, the Banach space $L^p=L^p(\Om,\cF,\mu)$ of Example \ref{Ex:Lp} has a very well behaved Banach space geometry. Indeed, $L^p$ is uniformly convex by \cite[Theorem 9.3]{FHHMZ11} and uniformly Fr\'{e}chet differentiable by \cite[Fact 9.7 and Theorem 9.10]{FHHMZ11} in case $\mu$ is a $\si$-finite measure. \end{example} \subsection{Semi-inner product reproducing kernel Banach spaces} With the above preliminaries on semi-inner products out of the way, we turn to reproducing kernel Banach spaces. A Banach space $\cX$ is called a {\em Banach space of functions} on a set $\Om$ if each $f\in\cX$ is a function on $\Om$ and $\|f\|=0$ if and only if $f(z)=0$ for each $z\in\Om$. Hence, point evaluation separates the elements of the Banach space. Following \cite[Definition 1]{ZXZ09}, for a reproducing kernel Banach space one also requires point evaluation to be continuous, and the dual space should be isometrically isomorphic to a Banach space of functions on $\Om$ with the same property. \begin{definition}\label{D:RKBS} A {\em reproducing kernel Banach space} (RKBS for short) is a Banach space of functions $\cX$ on a set $\Om$ such that the dual space $\cX'$ is isometrically isomorphic to a Banach space $\wtil{\cX}$ of functions on $\Om$ and point evaluation is continuous both in $\cX$ and in $\wtil{\cX}$. \end{definition} It will be useful to consider an example. \begin{example}\label{HpD} Let $1<p<\infty$, $\BD=\{ z \in \BC: |z|<1\}$ and $\BT=\{ z \in \BC: |z|=1\}$. The Hardy space $H^p(\BD)$ is the space of analytic functions $f$ on $\BD$ so that $$ \| f \|_{H^p} = \sup_{0<r<1} \left( \frac{1}{2\pi} \int_0^{2\pi} | f(re^{it})|^p dt\right)^{\frac1p} < \infty. $$ By taking nontangential boundary limits, one may view $H^p(\BD)$ as a closed subspace of $L^p(\BT)$; for details, see, e.g., \cite{Hoffman}. For $f \in L^p(\BT)$ we let $f(z) =\sum_{j=-\infty}^\infty \hat{f}(j) z^j$ be its Fourier series and $ \hat{f}(j)$, $j\in\BZ$, its Fourier coefficients. 
Using Fourier coefficients, the Hardy space $H^p(\BD)$ corresponds to the closed subspace of $L^p(\BT)$ consisting of functions $f$ so that $ \hat{f}(j)=0$, $j<0$. We will also be using the subspaces $$ \overline{H^p(\BD)} = \{ f \in L^p(\BT) : \hat{f}(j)=0, j > 0 \} , zH^p(\BD) = \{ f \in L^p(\BT) : \hat{f}(j)=0, j \le 0 \}, $$ and view the functions $f \in \overline{H^p(\BD)}$ as co-analytic on the unit disk $\BD$ (i.e., analytic in $\overline{z}$). Thus both $H^p(\BD)$ and $ \overline{H^p(\BD)}$ are Banach spaces of functions on $\BD$. In order to establish $H^p(\BD) $ as a RKBS we need to view its dual as a Banach space of functions. It is well known (see, e.g., \cite[Chapter 8]{D70}, \cite[Chapter VII]{Koosis}, \cite{Hensgen}) that the dual of $H^p(\BD) $ can be identified as the quotient space $H^p(\BD)'\simeq L^q(\BT)/zH^q(\BD)$, where as usual $\frac1p+\frac1q=1$. Indeed, if we have a linear functional $\varphi \in H^p(\BD)'$, it can be extended (by the Hahn-Banach theorem) to all of $L^p(\BT)$, and therefore it is of the form $$ \varphi(f) =\varphi_g(f) = \langle f , g \rangle_{L^p(\BT),L^q(\BT)}= \frac{1}{2\pi} \int_0^{2\pi} f(e^{it}) g(e^{it})dt ,\quad f \in H^p(\BD),$$ for some $g \in L^q(\BT)$. The choice of $g$ is not unique as \begin{equation}\label{orth} \langle f , h \rangle_{L^p(\BT),L^q(\BT)}=0,\quad f\in H^p(\BD), h \in zH^q(\BD),\end{equation} thus leading to $\varphi_g=\varphi_k$ whenever $k\in g+zH^q(\BD)$. In order to establish $H^p(\BD) $ as a RKBS we will isometrically identify $H^p(\BD)'$ with $\widetilde{H^p(\BD)}$, defined by $ \widetilde{H^p(\BD)}:= \{ g \in L^q(\BT) : \hat{g}(j)=0, j > 0 \}$ with $\| g \|_{\widetilde{H^p}(\BD)}:= \| \varphi_g \|_{H^p(\BD)'}$. In other words, $ \widetilde{H^p(\BD)}$ consists exactly of the functions $g \in \overline{H^q(\BD)}$, but its norm is given by the norm of the linear functional $\varphi_g$ (which is, in general, not equal to the $\overline{H^q(\BD)}$ norm of $g$). We will return to this example (in the multivariable setting) in Subsection \ref{HpB}. \hfill $\Box$ \end{example} Following similar arguments as in the case of a reproducing kernel Hilbert space \cite{aron}, the above definition is strong enough to prove the existence of a reproducing kernel. \begin{theorem}\label{repkernel} \cite[Theorem 2]{ZXZ09} Let $\cX$ be a reproducing kernel Banach space on a set $\Om$. Then there exists a unique function $K:\Om \times \Om \to \BC$ such that: \begin{itemize} \item[(1)] For every $z\in\Om$, $K(\cdot,z)\in\wtil{\cX}$ and $f(z)=K(\cdot,z)(f)$ for all $f\in \cX$; \item[(2)] For every $z\in\Om$, $K(z,\cdot)\in\cX$ and $\wtil{f}(z)=f'(K(z,\cdot))$ for any $\wtil{f}\in\wtil{\cX}$ with $f'\in\cX'$ the dual element associated with $\wtil{f}$ via the isometry between $\cX'$ and $\wtil{\cX}$. \item[(3)] The span of $\{K(z,\cdot)\colon z\in\Om\}$ is dense in $\cX$ and the span of $\{K(\cdot,z)\colon z\in\Om\}$ is dense in $\wtil{\cX}$. \item[(4)] For all $z,w\in\Om$ we have \[ K(z,w)=K(\cdot,w)(K(z,\cdot))=\inn{K(z,\cdot)}{K(\cdot,w)}_{\cX,\cX'}. \] \end{itemize} \end{theorem} The function $K$ in the above theorem is called the {\em reproducing kernel} of the RKBS $\cX$. Since the elements of $\cX$ are uniquely determined by their point evaluations, it follows that the reproducing kernel $K$ is uniquely determined. In the case when $\cX=H^p(\BD)$, the reproducing kernel is the Szeg\"o kernel; see Subsection \ref{HpB}. 
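Concretely, in the case $\cX=H^p(\BD)$ of Example \ref{HpD} this kernel is given by $K(w,z)=\frac{1}{1-z\overline{w}}$: for each $z\in\BD$ the boundary function $K(e^{it},z)=\sum_{j\geq 0}z^je^{-ijt}$ belongs to $\widetilde{H^p(\BD)}$, and property (1) of Theorem \ref{repkernel} becomes the Cauchy integral formula
\[
f(z)=\frac{1}{2\pi}\int_0^{2\pi}\frac{f(e^{it})}{1-ze^{-it}}\,\tu{d}t,\quad f\in H^p(\BD),\ z\in\BD.
\]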
A more general notion of reproducing kernel Banach spaces is considered in \cite{LZZ22} where the dual space is allowed to be isometrically isomorphic to a Banach space of functions on a different set $\wtil{\Om}$, in which case the reproducing kernel will act on $\Omega \times \wtil{\Omega}$, but we will not pursue that in the present paper. When $\cX$ is a Banach space of functions on a set $\Om$ with continuous point evaluation, then the challenge is to find another Banach space of functions on $\Om$, with continuous point evaluation, that is isometrically isomorphic to $\cX'$. This is where the semi-inner product plays a role, since it links $\cX'$ to $\cX$ via the $\star_\cX$-operation, provided the Banach space is sufficiently well-behaved. \begin{definition}\label{D:SIP-RKBS} A {\em semi-inner product reproducing kernel Banach space} (SIP-RKBS for short) is a RKBS $\cX$ on a set $\Om$ which is uniformly convex and uniformly Fr\'{e}chet differentiable. \end{definition} By Theorem \ref{T:Duality}, it follows that the dual of a SIP-RKBS is also a SIP-RKBS. Either by direct proof, as in \cite[Theorem 9]{ZXZ09}, or through the connection between the duality pairing and the semi-inner product, one can prove the following variation on Theorem \ref{repkernel}. \begin{theorem}\label{T:SIP-RKBS} Let $\cX$ be a SIP-RKBS on $\Om$. Then there exists a unique function $H:\Om\times \Om\to \BC$ such that $\{H(z,\cdot)\colon z\in\Om \}\subset \cX$ and \[ f(z)=[f,H(z,\cdot)] \mbox{ for all } f\in\cX,z\in\Om. \] Moreover, the reproducing kernel $K$ of the previous section is related through $H$ via $K(\cdot,z)=(H(z,\cdot))^{\star_\cX}$, $z\in\Om$. Furthermore, $\cX'$ (via the identification with $\cX^{\star_\cX}$) is a SIP-RKBS with the unique function $H'$, with above properties of $H$, given by $H'(z,\cdot)=K(z,\cdot)^{\star_\cX}$, $z\in\Om$. Finally, we have \[ H(z,w)=[H(z,\cdot),H(w,\cdot)],\quad z,w\in\Omega. \] \end{theorem} Despite creating some possible confusion, we will call the function $H$ in Theorem \ref{T:SIP-RKBS} the {\em SIP-reproducing kernel} of the SIP-RKBS $\cX$. \subsection{The first representer theorem for SIP-RKBS} One of the seminal results on reproducing kernel Banach spaces, which is of great importance in the machine learning literature, is the representer theorem; cf., \cite[Theorems 19 and 23]{ZXZ09}. In this paper we only consider the first representer theorem (\cite[Theorem 19]{ZXZ09}) and will specify this result to the case of a SIP-RKBS. For the reader's convenience we give a brief sketch of the proof. The setting for the first representer theorem is the following {\em minimal norm interpolation problem}. \begin{problem}\label{P:Interpolation} Let $\cX$ be a Banach space of functions on a set $\Om$, let \[ {\bf z}=\{z_1,\ldots,z_n\}\subset\Om \ands {\bf s}=\{s_1,\ldots,s_n\}\subset\BC, \] with $z_i\neq z_j$ if $i\neq j$. Find an $f\in\cX$ of minimal norm such that $f(z_j)=s_j$ for $j=1,\ldots,n$, if it exists. If so, what is its norm, and when is there a unique solution. \end{problem} In the remainder of this subsection we shall assume $\cX$ to be a SIP-RKBS on a set $\Om$ with SIP-reproducing kernel $H$. There are other variations and more general set-ups; for a recent overview, please see \cite{LZZ22}. In order to analyze the above interpolation problem, define the sets \[ \BI_{\bf z,s}:=\{f\in\cX \colon f(z_j)=s_j,\, j=1,\ldots,n\},\quad H({\bf z},\cdot)^{\star_\cX}:=\{H(z_j,\cdot)^{\star_\cX} \colon j=1,\ldots,n\}. 
\] Hence, the interpolation problem is to determine whether $\BI_{\bf z,s}$ has a minimal norm element, to find this element and determine whether it is unique. We write $\BI_{\bf z,0}$ in case ${\bf s}=\{0,\ldots,0\}$. Note that for any $x\in \BI_{\bf z,s}$ we have \begin{equation}\label{Bz0Bzs} x + \BI_{\bf z,0} = \BI_{\bf z,s}. \end{equation} The following lemma provides a criterion on a given set of points ${\bf z}$ under which functions in $\cX$ that satisfy the interpolation conditions exist for each set of values ${\bf s}$. \begin{lemma}\label{exists} \cite[Lemma 14]{ZXZ09} Let ${\bf z}=\{z_1,\ldots,z_n\}\subset\Om$. The set $\BI_{\bf z,s}$ is non-empty for each ${\bf s}\in\BC^n$ if and only if the set $H({\bf z},\cdot)^{\star_\cX}$ is linearly independent in $\cX'$. \end{lemma} Assuming the condition of the previous lemma, the first representer theorem is the following result; see \cite[Theorem 19]{ZXZ09}. \begin{theorem}[First SIP-RKBS Representer Theorem]\label{repr} Let $\cX$ be a SIP-RKBS on a set $\Omega$ and let $H$ be its SIP-reproducing kernel. Let ${\bf z}$ and ${\bf s}$ be as in Problem \ref{P:Interpolation} and assume that the set $H({\bf z},\cdot)^{\star_\cX}$ is linearly independent in $\cX'$. Then there exists a unique $f_\tu{min}\in \BI_{\bf z,s}$ such that \[ \|f_\tu{min}\|=\min \{\|f\| \colon f\in \BI_{\bf z,s}\}. \] Moreover, for this vector $f_\tu{min}$ we have $f_\tu{min}^{\star_\cX}\in \tu{span} H({\bf z},\cdot)^{\star_\cX}$. \end{theorem} \begin{proof}[Sketch of proof] From Lemma \ref{exists} we know that $\BI_{\bf z,s}$ is non-empty. The uniform convexity implies that a unique minimal-norm element in $\BI_{\bf z,s}$ exists; see, e.g., \cite[page 53]{istr}. Indicate this vector by $f_\tu{min}$. To show that $f_\tu{min}^{\star_\cX}\in \tu{span} H({\bf z},\cdot)^{\star_\cX}$ requires some `orthogonality' arguments. For subsets $A \subset \cX$ and $B\subset \cX'$, we define \[ A^\perp = \{ g \in \cX' : \langle a , g \rangle =0 \ \hbox{\rm for all} \ a \in A \}, \ ^\perp B = \{ f \in \cX : \langle f , b \rangle =0 \ \hbox{\rm for all} \ b \in B \}. \] Mimicking the Hilbert space proof, since $\cX$ is a SIP-RKBS and hence reflexive, it is easy to see that for any subset $B\subset \cX'$ we have $(^\perp B )^\perp = \overline{\rm span} B$, where $\overline{\rm span}$ indicates the closure of the span of the set it is working on. Now observe that by \eqref{Bz0Bzs}, for each $h \in \BI_{\bf z,0}$ and $\lambda\in\BC$ we have $f_\tu{min}+\lambda h\in \BI_{\bf z,s}$ and hence $\| f_\tu{min} \| \le \| f_\tu{min} + \lambda h \|$. In other words, $f_\tu{min}$ is orthogonal to each $h\in \BI_{\bf z,0}$. By Theorem \ref{normal} we have \[ 0=[h,f_\tu{min}]=\inn{h}{f_\tu{min}^{\star_\cX}} \quad \mbox{for all } h\in \BI_{\bf z,0}. \] Thus $f_\tu{min}^{\star_\cX}\in \BI_{\bf z,0}^\perp$. The reproducing property of $H$ directly yields \[ \BI_{\bf z,0}=\prescript{^\perp\!\!}{}{(H({\bf z},\cdot)^{\star_\cX})}. \] Hence \[ f_\tu{min}^{\star_\cX}\in \BI_{\bf z,0}^\perp= (\prescript{^\perp\!\!}{}{(H({\bf z},\cdot)^{\star_\cX})})^\perp=\overline{\textup{span}}\, H({\bf z},\cdot)^{\star_\cX}=\textup{span}\, H({\bf z},\cdot)^{\star_\cX}, \] where the second identity follows from the general observation about $(^\perp B )^\perp$ made earlier in the proof. \end{proof} Solving the optimal interpolation problem then works as follows. Note that $f^{\star_\cX}\in \BI_{\bf z,0}^\perp$ does not guarantee that also $f\in \BI_{\bf z,s}$.
However, the uniform convexity and linear independence of $H({\bf z},\cdot)^{\star_\cX}$ does guarantee that the intersection of $(\BI_{\bf z,0}^\perp)^{\star_{\cX'}}$ and $\BI_{\bf z,s}$ consists of precisely one element, and this is the unique solution. Hence consider $f \in \cX$ of the form \[ f^{\star_\cX}=\sum_{j=1}^n c_j H( z_j,\cdot)^{\star_{\cX}} \] for $c_1,\ldots,c_n\in\BC$. Then \[ f=(f^{\star_\cX})^{\star_{\cX'}}=(\sum_{j=1}^n c_j H( z_j,\cdot)^{\star_{\cX}})^{\star_{\cX'}} \] depends only on the parameters $c_1,\ldots,c_n\in\BC$, though not in a linear way since the map $f\mapsto f^{\star_{\cX'}}$ is not additive. It then remains to determine the parameters $c_1,\ldots,c_n\in\BC$ from the interpolation conditions $f(z_j)=s_j$, $j=1,\ldots,n$, which will thus typically be nonlinear equations in $c_1,\ldots,c_n$. In case there is only one interpolation condition the formulas simplify. We state this next. \begin{corollary}\label{single} Let $\cX$ be a SIP-RKBS on a set $\Omega$ and let $H$ be as in Theorem \ref{T:SIP-RKBS}. Let $z_1\in\Om$ and $s_1\in\BC$. Then $$ f_\tu{min}(z):= s_1 \frac{H(z_1,z)}{H(z_1,z_1)}$$ is the unique $f_\tu{min}\in\cX$ such that \[ f_\tu{min}(z_1)=s_1,\ \|f_\tu{min}\|=\min \{\|f\| \colon f\in\cX,\ f(z_1)=s_1\}. \] \end{corollary} \begin{proof} By Theorem \ref{repr} there exists a $c_1$ so that $f_\tu{min}^{\star_\cX}=c_1 H(z_1,\cdot)^{\star_\cX}.$ Applying $\star_{\cX'}$ on both sides we obtain that $f_\tu{min}=\overline{c_1} H(z_1,\cdot).$ Plugging in the condition that $f_\tu{min}(z_1)=s_1$, gives that $\overline{c_1}= \frac{s_1}{H(z_1,z_1)}$. \end{proof} There is also a second representer theorem, which concerns a regularized version of minimizing a loss function among interpolants; details can, for instance, be found in \cite[Section 5.2]{ZXZ09}. \section{Subspaces of semi-inner product spaces}\label{S:SIPsubspace} Let $(\cX,\|\cdot\|)$ be a normed space with semi-inner product $[\cdot,\cdot]$ and let $\cZ\subset\cX$ be a closed subspace. Then $\cZ$ becomes a normed space with semi-inner product, simply by restricting the norm and the semi-inner product. Also, in case $\cX$ is G-differentiable, then so is $\cZ$, so that uniqueness of the semi-inner product carries over from $\cX$ to $\cZ$ (this does not require $\cX$ to be a Banach space; see \cite{G67}). What is less straightforward is what the $\star$-operations on $\cZ$ and $\cZ'$ will be and how they relate to the $\star$-operations on $\cX$ and $\cX'$. In this section, under certain conditions on $\cX$, we provide formulas for the $\star_\cZ$- and $\star_{\cZ'}$-operations. Let $y\in\cZ$. Then \[ y^{\star_\cZ}(x)=[x,y]_\cZ=[x,y]_\cX=y^{\star_\cX}(x),\quad x\in\cZ. \] Hence $y^{\star_\cZ}=y^{\star_\cX}|_{\cZ}$. However, there can be many $\varphi\in\cX'$ with $\varphi|_{\cZ}=y^{\star_\cX}|_{\cZ}$ and we require one with the property that $\|y\|_\cZ=\|y^{\star_\cZ}\|_{\cZ'}= \|\varphi\|_{\cX'}$. Such $\varphi\in\cX'$ exists by the Hahn-Banach theorem but it need not be unique. We now recall some results from \cite[Section 4]{RS} that provide conditions under which a unique $\varphi\in\cX'$ with $\varphi|_{\cZ}=y^{\star_\cX}|_{\cZ}$ and $\|y\|_\cZ=\|\varphi\|_{\cX'}$ exists. Given $\varphi\in\cX'$ we say that $\wtil{\varphi}\in\cX'$ is {\em equivalent to $\varphi$ with respect to $\cZ$} (notation $\wtil{\varphi} \sim_\cZ \varphi$) if \[ \wtil{\varphi}(z) = \varphi(z) ,\quad z \in \cZ. \] We may just say `equivalent' if the subspace $\cZ$ is clear from the context. 
It is not hard to see that the set $S_{\varphi, \cZ}:=\{ \wtil{\varphi} : \wtil{\varphi} \sim_\cZ \varphi \}$ is closed and convex. Clearly, whenever $\what{\varphi} \sim_\cZ \varphi$ we have that \[ \| \varphi|_{\cZ} \|_{\cZ'} = \| \what{\varphi}|_{\cZ} \|_{\cZ'} \le \inf_{\wtil{\varphi} \in S_{\varphi, \cZ}} \| \wtil{\varphi} \|_{\cX'}. \] We say that $ \wtil{\varphi}_* \in \cX'$ is {\em an extremal functional with respect to $\cZ$} associated with $\varphi$ if $\wtil{\varphi}_* \sim_\cZ \varphi$ and \begin{equation}\label{extremal} \| \wtil{\varphi}_*\|_{\cX'} = \inf_{\wtil{\varphi} \in S_{\varphi, \cZ}} \| \wtil{\varphi} \|_{\cX'}. \end{equation} Again, we will leave out ``with respect to $\cZ$'' if no confusion regarding $\cZ$ can occur. \begin{proposition}\label{P:UniExtremeFunct} \cite[Theorems II, III and IV]{RS} The extremal functionals associated with $\varphi\in\cX'$ form a nonempty, closed and convex subset of $\cX'$ and $$\inf_{\wtil{\varphi} \in S_{\varphi, \cZ}} \| \wtil{\varphi} \|_{\cX'}=\|\varphi|_{\cZ}\|_{\cZ'}.$$ Moreover, if $\cX'$ is strictly convex, then there is exactly one extremal functional associated with $\varphi$. \end{proposition} The existence of an extremal functional $\wtil{\varphi}_*$ with $\|\wtil{\varphi}_*\|_{\cX'}=\|\varphi|_{\cZ}\|_{\cZ'}$ is due to the Hahn-Banach theorem. This also means that we may then replace 'inf' by 'min' in the right hand side of \eqref{extremal}. Let $\varphi\in \cX'$ be nontrivial. We say that $z \in \cZ$ is an {\em extremal point} for $\varphi$ on $\cZ$ if $\| z \|_\cZ = 1$ and $\varphi(z)=\| \varphi|_{\cZ} \|_{\cZ'}$. Recall that the {\em weak topology} on $\cX$ is the weakest topology that makes the linear functionals on $\cX$ continuous (in other words, the smallest topology containing the sets $f^{-1}(U)$, where $f\in \cX'$ and $U\subset \BC$ is open). The Banach space $\cX$ is {\em weakly compact} if the unit ball $B_\cX^{\rm cl}$ in $\cX$ is compact in the weak topology. By Kakutani's Theorem (see \cite[Theorem V.4.2]{C90}) this is equivalent to $\cX$ being reflexive. We have the following. \begin{proposition}\label{P:UniExtremePoint} \cite[Parts of Theorems V, VI and VII]{RS} Let $\varphi\in \cX'$ be nontrivial and let $\cZ$ be a closed linear subspace of $\cX$. In addition, let $\cX$ be weakly compact. Then the extremal points for $\varphi$ on $\cZ$ form a nonempty, closed and convex subset of $\cZ$. Moreover, if $\cX$ is also strictly convex, then there is exactly one extremal point for $\varphi$ on $\cZ$. \end{proposition} Returning to our original problem, given $y\in\cZ$, for $y^{*_{\cZ'}}$ we seek the extremal functional of $y^{*_{\cX'}}$ with respect to $\cZ$. In case $\cX$ is strictly convex, this extremal functional exists and is unique. For the remainder of the section we assume that $\cX$ is strictly convex and weakly compact, and that $\cX'$ is also strictly convex. Assume that $\cZ^\perp$ is complemented in $\cX'$, i.e., there exists a bounded projection $P$ of $\cX'$ onto $\cZ^\perp$. Let $\wtil{\cZ}$ be the range of $I-P$, that is, $\wtil{\cZ}$ is the complement of $\cZ^\perp$ in $\cX'$ so that $P$ is the projection on $\cZ^\perp$ along $\wtil{\cZ}$. Note that for the dual space of $\cZ$ we have \[ \cZ'\simeq \cX'/\cZ^\perp \simeq \wtil{\cZ}. \] Example \ref{HpD} provides an example of the above equivalences. We can now describe the $\star$-operations of the semi-inner products on $\cZ$ and $\cZ'$. 
\begin{proposition}\label{P:substar} Let $(\cX,\|\cdot\|)$ be a uniformly convex Banach space that is weakly compact and has a uniformly continuous semi-inner product $[\cdot,\cdot]$. Let $\cZ$ be a closed subspace of $\cX$ such that $\cZ^\perp$ is complemented in $\cX'$ with complement $\wtil{\cZ}$ and $P$ the projection onto $\cZ^\perp$ along $\wtil{\cZ}$. Then for $y\in\cZ$ we have \[ y^{\star_\cZ}=(I-P)[y^{\star_\cX}]\in \wtil{\cZ}. \] Moreover, for $\varphi\in \wtil{\cZ}$, \[ \varphi^{\star_{\wtil{\cZ}}}=\|\varphi\|_{\wtil{\cZ}} z_*, \] where $z_*\in\cZ$ is the unique extremal point of $\varphi$ on $\cZ$. \end{proposition} \begin{proof} Let $y\in\cZ$. Then $P y^{\star_\cX}\in\cZ^\perp$, so that \[ \inn{z}{P y^{\star_\cX}}=(P y^{\star_\cX})(z)=0,\quad z\in\cZ. \] For each $z\in\cZ$ we have \begin{align*} [z,y]_{\cZ} &=[z,y]_{\cX}=\inn{z}{y^{\star_\cX}}_{\cX,\cX'}=\inn{z}{(I-P)y^{\star_\cX}}_{\cX,\cX'} =\inn{z}{(I-P)y^{\star_\cX}}_{\cZ,\wtil{\cZ}}. \end{align*} Since the above identity holds for all $z\in\cZ$, it follows that $y^{\star_\cZ}=(I-P)y^{\star_\cX}$. Next we turn to the formula for $\varphi^{\star_{\wtil{\cZ}}}$. Note that \begin{align*} \|\varphi\|_{\wtil{\cZ}}^2=[\varphi,\varphi]_{\wtil{\cZ}}=\inn{\varphi^{\star_{\wtil{\cZ}}}}{\varphi}_{\cZ,\wtil{\cZ}} =\varphi(\varphi^{\star_{\wtil{\cZ}}}). \end{align*} Thus $\what{z}:=\|\varphi\|_{\wtil{\cZ}}^{-1}\varphi^{\star_{\wtil{\cZ}}}$ is a vector in $\cZ$ such that $\|\what{z}\|=1$, because $\|\varphi^{\star_{\wtil{\cZ}}}\|_\cZ=\|\varphi\|_{\wtil{\cZ}}$, and $\varphi(\what{z})=\|\varphi\|_{\wtil{\cZ}}$. Hence $\what{z}=z_*$ is an extremal point of $\varphi$ on $\cZ$, which is unique according to Proposition \ref{P:UniExtremePoint}, and it follows that $\varphi^{\star_{\wtil{\cZ}}}=\|\varphi\|_{\wtil{\cZ}}\what{z}$ as claimed. \end{proof} If $\cZ'\neq \{0\}$, so that $\cZ^\perp\neq\cX'$, then $\cZ^\perp$ has infinitely many complements in $\cX'$ (assuming $\cZ^\perp$ is complemented in $\cX'$). Hence the projection $P$ is not unique, and neither is $y^{\star_\cZ}$ in Proposition \ref{P:substar}. However, this nonuniqueness is only caused by the fact that $y^{\star_\cZ}$ is in $\wtil{\cZ}$ rather than in $\cZ'$. \section{Hardy and Bergman spaces as SIP-RKBS}\label{S:Hp interpolation} In the following subsections, we will show how the theory we have developed so far applies to different Hardy and Bergman space settings. \subsection{Hardy space on the unit ball in $\BC^n$}\label{HpB} Let $\BC^n$ be endowed with the usual Euclidean inner product $\langle \cdot , \cdot \rangle_{\rm Eucl}$, and let $B_n$ be the open unit ball in $\BC^n$; thus, $\langle z , w \rangle_{\rm Eucl} = \sum_{i=1}^n z_i \overline{w_i}$ and $B_n=\{ x\in \BC^n : \| x \|_{\rm Eucl} < 1\}$. For $0<r<1$ and a function $f$ with domain $B_n$ we let $f_r$ be the dilated function $f_r(z):=f(rz)$, which is defined on $\frac{1}{r}B_n = \{ x\in \BC^n : \| x \|_{\rm Eucl} < \frac1r \}$. We let $\sigma$ denote the rotation-invariant probability Borel measure on the sphere $S_n=\{ x \in \BC^n : \| x \|_{\rm Eucl} = 1 \}.$ The Hardy space on $B_n$ is the space of holomorphic functions on $B_n$ such that $$ \| f \|_{H^p(B_n)} := \sup_{0<r<1} \left( \int_{S_n} |f_r|^p \tu{d} \sigma \right)^\frac1p <\infty.$$ It is well-known (for details, see \cite[Section 5.6]{Rudin}) that one may view $H^p(B_n)$ as a closed subspace of $L^p(S_n,\sigma)$.
This requires taking nontangential boundary limits, thus associating with $f\in H^p(B_n)$ a function $f_{\rm boundary}$, which is almost everywhere defined on $S_n$. It will be convenient (justified by the theory) to treat $f$ and $f_{\rm boundary}$ as the same function, as we do for instance in equation \eqref{repH}. References on Hardy spaces include \cite{Hoffman, D70, Koosis} for the single variable case and \cite{Rudin} for the several variable case. The Cauchy kernel for $B_n$ is defined by $$ K_{\rm C} (w,z) = \frac{1}{(1-\langle z , w \rangle_{\rm Eucl} )^n} , \ z,w\in B_n,$$ and is the reproducing kernel for $H^p(B_n)$; see \cite[Corollary of Theorem 6.3.1]{Rudin}. When $n=1$ it is the Szeg\"o kernel $1/(1-z\overline{w})$. Thus, when $f\in H^p(B_n)$ we have that \begin{equation}\label{repH} f(z)= \int_{S_n} K_{\rm C} (w,z) f(w) \tu{d} \sigma(w), z\in B_n. \end{equation} In this context we can state the following new result. We will provide the proof later in this section. \begin{theorem}\label{single2} Let $1<p<\infty$, $z_0=(z_0^{(1)}, \ldots , z_0^{(n)} )\in B_n$, and $w_0\in \BC$. The unique minimal norm interpolant $f_{\rm min}$ in $H^p(B_n)$ with the single interpolation condition $f(z_0)=w_0$ is given by \begin{equation}\label{singleH} f_{\rm min}(z)= w_0 \left( \frac{1-\|z_0\|_{\rm Eucl}^2}{1-\langle z, z_0 \rangle_{\rm Eucl}} \right)^{\frac{2n}{p}} = w_0 \left( \frac{1-\sum_{j=1}^n|z_0^{(j)}|^2}{ 1-\sum_{j=1}^n z_j \overline{z_0^{(j)}} } \right)^{\frac{2n}{p}} , \end{equation} where $z=(z_1,\ldots,z_n) \in B_n$. In addition, $\| f_{\rm min} \|_{H^p(B_n)} = |w_0|(1-\|z_0\|_{\rm Eucl}^2)^\frac{n}{p} $. \end{theorem} For $n=1$ the result above was proven before as it follows easily from \cite[Theorem 4]{CMS94}; it can also be found directly in \cite[Theorem 2]{Harmsen}. \subsection{Bergman space on the unit ball in $\BC^n$} Let $\nu$ be the Lebesgue measure on $\BC^n$ normalized so that $\nu(B_n)=1$. The Bergman space $A^p_n$ is defined as $$ A^p_n = L^p(B_n,\nu) \cap \{ f: B_n \to \BC : f \ \hbox{\rm is holomorphic on} \ B_n \}.$$ References on the Bergman space include \cite{Axler, DurenBergman, Hedenmalm} for the single variable case and \cite{Rudin} for the several variable case. Extremal functionals in the Bergman setting were considered in \cite{Ferguson2}. The Bergman kernel for $B_n$ is defined by $$ K_{\rm B} (w,z) = \frac{1}{(1-\langle z , w \rangle_{\rm Eucl} )^{n+1}} , \ z,w\in B_n, $$ which is the reproducing kernel for $A_n^p$ \cite[Theorem 7.4.1]{Rudin}. Thus, when $f\in A^p_n$ we have that \begin{equation}\label{repB} f(z)= \int_{B_n} K_{\rm B} (w,z) f(w) \tu{d} \nu(w), z\in B_n. \end{equation} Similar to the Hardy space on $B_n$, we are able to provide the following solution to the single point minimal norm interpolation problem in the Bergman space. \begin{theorem}\label{single3} Let $1<p<\infty$, $z_0=(z_0^{(1)}, \ldots , z_0^{(n)} )\in B_n$, and $w_0\in \BC$. The unique minimal norm interpolant $f_{\rm min}$ in $A_n^p$ with the single interpolation condition $f(z_0)=w_0$ is given by \begin{equation}\label{singleB} f_{\rm min}(z)= w_0 \left( \frac{1-\|z_0\|_{\rm Eucl}^2}{1-\langle z, z_0 \rangle_{\rm Eucl}} \right)^{\frac{2(n+1)}{p}} = w_0 \left( \frac{1-\sum_{j=1}^n|z_0^{(j)}|^2}{ 1-\sum_{j=1}^n z_j \overline{z_0^{(j)}} } \right)^{\frac{2(n+1)}{p}} , \end{equation} where $z=(z_1,\ldots,z_n) \in B_n$. In addition, $\| f_{\rm min} \|_{A^p_n} = |w_0|(1-\|z_0\|_{\rm Eucl}^2)^\frac{n+1}{p} $. 
\end{theorem} \subsection{Hardy space on the Polydisk} Let $\BD = \{ z\in \BC : |z|<1 \}$ be the open unit disk in $\BC$. For $0<r<1$ and a function $f$ with domain $\BD^n$ we let $f_r$ be the dilated function $f_r(z):=f(rz)$, which is defined on $\frac{1}{r}\BD^n$. The Hardy space on $\BD^n$ is the space of holomorphic functions on $\BD^n$ so that $$ \| f \|_{H^p(\BD^n)} := \sup_{0<r<1} \left( \frac{1}{(2\pi)^n}\int_0^{2\pi} \cdots \int_0^{2\pi} |f_r(e^{it_1}, \ldots , e^{it_n})|^p \tu{d} t_1 \cdots \tu{d} t_n \right)^\frac1p <\infty.$$ It is well-known (for details, see \cite{Rudin2}) that one may view $H^p(\BD^n)$ as a closed subspace of $L^p(\BT^n)$. The Hardy space $H^p(\BD^n)$ is complemented in $L^p(\BT^n)$ (see, e.g., \cite{Ebenstein}), and the reproducing kernel on $H^p(\BD^n)$ is given by $$ K(w,z)=\prod_{j=1}^n \frac{1}{1-z_j\overline{w_j}} ;$$ see, e.g., \cite[Section 1.2]{Rudin}. For the single point minimal norm interpolation, we have the following result. \begin{theorem}\label{single4} Let $1<p<\infty$, $z_0=(z_0^{(1)}, \ldots , z_0^{(n)} )\in \BD^n$, and $w_0\in \BC$. The unique minimal norm interpolant $f_{\rm min}$ in $H^p(\BD^n)$ with the single interpolation condition $f(z_0)=w_0$ is given by \begin{equation}\label{singleP} f_{\rm min}(z)= w_0 \left( \prod_{j=1}^n \frac{1-|z_0^{(j)}|^2}{1-z_j\overline{z_0^{(j)}}} \right)^{\frac{2}{p}} , \end{equation} where $z=(z_1,\ldots,z_n) \in \BD^n$. In addition, $\| f_{\rm min} \|_{H^p(\BD^n)} = |w_0| \prod_{j=1}^n (1-|z_0^{(j)}|^2)^\frac1p $. \end{theorem} \subsection{Hardy space on the upper half-plane $\BC_+$} We recall from \cite[Chapter 11]{D70} the following. Let $\BC_+=\{x+iy : x\in \BR, y >0\}$ be the open upper half-plane of the complex plane. The Hardy space $H^p(\BC_+)$ consists of analytic functions on $\BC_+$ so that $|f(x+iy)|^p$ is integrable for all $y>0$ and $$ \| f \|_{H^p(\BC_+)} = \sup_{0<y<\infty} \left( \int_{-\infty}^\infty |f(x+iy)|^p dx \right)^\frac1p <\infty . $$ If $f \in H^p(\BC_+)$ then the boundary function $$f(x) = \lim_{y\to 0+} f(x+iy)$$ exists almost everywhere and belongs to $L^p(\BR, \tu{d}x)$; see \cite[Corollary to Theorem 1.11]{D70}. By identifying $f \in H^p(\BC_+)$ with its boundary function, we may view $H^p(\BC_+)$ as a subspace of $L^p(\BR, \tu{d}x)$. This is a closed complemented subspace; see \cite[Section 3.6]{Helson} for details. The reproducing kernel for $H^p(\BC_+)$ is given by $$K(w,z)= \frac{1}{2\pi i} \frac{1}{\overline{w}-z}, \ z,w \in \BC_+; $$ see \cite[Theorem 11.8]{D70}. Analogously, one has the multivariable version on the poly-half-plane $\BC_+^n$. The reproducing kernel for $H^p(\BC_+^n)$ is given by $$K(w,z)= \frac{1}{(2\pi i)^n}\prod_{j=1}^n \frac{1}{\overline{w_j}-z_j}, \ z,w \in \BC_+^n. $$ See \cite[Section III.5]{SW} for further details. We now obtain \begin{theorem}\label{single5n} Let $1<p<\infty$, $z_0 =(z_0^{(1)}, \ldots , z_0^{(n)}) \in \BC_+^n$, and $w_0\in \BC$. The unique minimal norm interpolant $f_{\rm min}$ in $H^p(\BC_+^n)$ with the single interpolation condition $f(z_0)=w_0$ is given by \begin{equation}\label{singleH2} f_{\rm min}(z)= w_0 \prod_{j=1}^n \left( \frac{2i{\rm Im} z_0^{(j)}}{z_j-\overline{z_0^{(j)}} } \right)^{\frac{2}{p}} , \ z=(z_1, \ldots, z_n) \in \BC_+^n. \end{equation} In addition, $\| f_{\rm min} \|_{H^p(\BC_+^n)} = |w_0|\prod_{j=1}^n ({4\pi{\rm Im} z_0^{(j)}})^\frac{1}{p} $. 
\end{theorem} \subsection{Common threads and proofs} The settings in the above subsections all correspond to taking for $\cX$ an appropriate space $L^p(\widehat\Omega, \tu{d} \mu)$, with $1<p<\infty$, and then considering a subspace of Hardy space type $H^p(\Omega)$ or Bergman space type $A^p(\Omega)$, where we have some Banach space reproducing kernel. In the case of the Hardy space, $\widehat \Omega$ is the distinguished boundary of $\Omega$, while in the Bergman setting we have $\widehat\Omega = \Omega$. We will denote the full space as $L^p$ and the subspace as $\cZ^p$, write $K:\Om \times \Om \to \BC$ for the reproducing kernel of $\cZ^p$ and ${\widetilde{\cZ^p}}$ for the Banach space of functions on $\Om$ isomorphic to $\cZ'$, as in Section \ref{S:SIPsubspace}. Following the terminology of \cite{RS} for $H^p$ on the unit disc ($n=1$), an {\em extremal function} in $\cZ^p$ corresponds to what is called an extremal point in Section \ref{S:SIPsubspace}, while an {\em extremal kernel} in ${\widetilde{\cZ^p}}$ corresponds to an extremal functional in Section \ref{S:SIPsubspace}. The following applies to all the above settings. For $g \in {\widetilde{\cZ^p}}$ we define the corresponding linear map $\varphi_g$ on $\cZ^p$ via $$ \varphi_g(f):= \int_{\widehat\Omega} f(z) g(z) \tu{d} \mu(z) ,\quad f \in \cZ^p.$$ \begin{proposition}\label{Hpstars} Let $1<p<\infty$ and $\frac1p+\frac1q=1$. Every $\varphi_g, g \in {\widetilde{\cZ^p}}$, has a unique extremal function and a unique extremal kernel. If $f\in \cZ^p$ is the extremal function for $\varphi_g$, then $k: = \| g \|_{{\widetilde{\cZ^p}}} \overline{f} |f|^{p-2}$ is the extremal kernel for $\varphi_g$. In addition, $k^{\star_{L^q}} = \| g \|_{{\widetilde{\cZ^p}}} f$. \end{proposition} \begin{proof} The uniqueness follows from Propositions \ref{P:UniExtremeFunct} and \ref{P:UniExtremePoint}, where we use that $L^p$ and $L^q$ are strictly convex. Next, let $f$ be the extremal function for $\varphi_g$. Then $\| f \|_{\cZ^p} =1$ and $\varphi_g(f)=\| \varphi_g \|_{(\cZ^p)'}$. We now claim that $k\in L^q$ is the extremal kernel for $\varphi_g$ if and only if \begin{equation}\label{kernel} \varphi_k(f)=\| \varphi_g \|_{(\cZ^p)'}= \| k \|_{L^q}. \end{equation} Indeed, if \eqref{kernel} holds, then $$ \int_{\widehat\Omega} f(z) k(z) \tu{d}\mu(z)= \int_{\widehat\Omega} |f(z) k(z)| \tu{d}\mu(z)= \|f \|_{L^p} \| k \|_{L^q} = \| k \|_{L^q}. $$ Using the first two equalities, we obtain that $$ f(x) k (x) \ge 0, \quad |k(x)|^\frac1p = \| \varphi_g \|_{(\cZ^p)'}^\frac1p |f(x)|^\frac1q , \quad x \in \widehat\Omega \ \hbox{\rm a.e.} $$ These are exactly the properties that the extremal function and extremal kernel exhibit (see \cite[Theorem 11]{RS} for a particular instance). Note that $k=\| g \|_{{\widetilde{\cZ^p}}} \overline{f} |f|^{p-2}$ has these properties, and therefore it is the unique extremal kernel for $\varphi_g$. Finally, one computes that $k^{\star_{L^q}} = \| g \|_{{\widetilde{\cZ^p}}} ( \overline{f} |f|^{p-2})^{\star_{L^q}} = \| g \|_{{\widetilde{\cZ^p}}}f$, where we used that $\| f \|_{\cZ^p}=1$. \end{proof} Notice that the above implies that \begin{equation}\label{fp} fk = \| g \|_{{\widetilde{\cZ^p}}} |f|^p .\end{equation} \begin{proposition}\label{Kstar} Let $1<p<\infty$. If the reproducing kernel satisfies that $(K( z_1, \cdot))^\frac2p \in \cZ^p$ and $K(z,w)=\overline{K(w,z)}$, then $$ K(\cdot, z_1)^{\star_{\widetilde{\cZ}^p}}= (K( z_1, \cdot))^\frac2p, $$ or equivalently, \begin{equation}\label{Kstareq}\left( (K( z_1, \cdot))^\frac2p \right)^{\star_{\cZ^p}}= K(\cdot, z_1).
\end{equation} \end{proposition} \begin{proof} We prove the second equality. Let us start by noting the following: $$ \| (K( z_1, \cdot))^\frac2p \|_{\cZ^p}^p= \| (K( z_1, \cdot))^\frac2p \|_{L^p}^p = \int_{\widehat\Omega} |K(z_1,w)|^2\tu{d}\mu(w) = $$ $$\langle K(z_1,\cdot ) , K(\cdot , z_1) \rangle = K(z_1,z_1).$$ Next, we perform the following computation: $$ \left( (K( z_1, \cdot))^\frac2p \right)^{\star_{L^p}}= \frac{1}{\| (K( z_1, \cdot))^\frac2p\|^{p-2}_{L^p}}\overline{(K( z_1, \cdot))^\frac2p} | (K( z_1, \cdot))^\frac2p|^{p-2} = $$ $$ \frac{1}{ K( z_1, z_1)^{\frac{p-2}{p}}} \overline{(K( z_1, \cdot))^{\frac2p+\frac{p-2}{p}}} (K( z_1, \cdot))^{\frac{p-2}{p}}= $$ $$ \frac{1}{ K( z_1, z_1)^{1-\frac2p}} {K( \cdot, z_1)} (K( z_1, \cdot))^{1-\frac{2}{p}}=: g,$$ where we used that $\overline{K( z_1, \cdot)} = K(\cdot , z_1)$. To prove \eqref{Kstareq} we need to show that $$ g \sim_{\cZ^p} K( \cdot , z_1 ). $$ As $\cZ^p = \overline{\rm span} \{K( \zeta , \cdot): \zeta \in \Omega \}, $ it suffices to prove that \begin{equation}\label{gK} \langle K( \zeta , \cdot) , g\rangle = \langle K( \zeta , \cdot) , K( \cdot, z_1) \rangle \end{equation} for all $\zeta \in \Omega$. The right hand side of \eqref{gK} equals $K(\zeta, z_1)$. For the left hand side of \eqref{gK} we compute $$ \langle K( \zeta , \cdot), g \rangle = \frac{1}{ K( z_1, z_1 )^{1-\frac2p}}{\int_{\widehat\Omega} {K(w, z_1)} K(\zeta, w)(K( z_1, w))^{1-\frac{2}{p}}}\tu{d}\mu(w)= $$ $$ \frac{1}{ K( z_1, z_1 )^{1-\frac2p}} \langle K(\zeta, \cdot)(K( z_1, \cdot))^{1-\frac{2}{p}}, K(\cdot, z_1) \rangle = $$ $$ \frac{1}{ K( z_1, z_1 )^{1-\frac2p}} K(\zeta, z_1)(K( z_1, z_1))^{1-\frac{2}{p}} = K(\zeta,z_1),$$ where in the last step we used $K(\zeta, \cdot)(K( z_1, \cdot))^{1-\frac{2}{p}} \in \cZ^p$. Thus we have proven $ g \sim_{\cZ^p} K( \cdot , z_1 ) $, finishing the proof of \eqref{Kstareq}. \end{proof} {\em Proof of Theorems \ref{single2}, \ref{single3}, \ref{single4}, and \ref{single5n}.} Combine Corollary \ref{single} (including the observation $H(z_0,\cdot )^{\star_\cX} = K(\cdot, z_0) $) with Proposition \ref{Kstar}. Note that in all cases $K(\cdot, z_0)$ is bounded away from 0 on $\Omega$, and thus $(K( z_1, \cdot))^\frac2p \in \cZ^p$ holds in all cases. To compute the norm, we use the reproducing kernel property. Let us illustrate this for Theorem \ref{single3}. We have $$ \| f_\tu{min} \|^p_{A_n^p} = |w_0|^p \int_{B_n} \left| \frac{1-\|z_0\|_{\rm Eucl}^2}{1-\langle z, z_0 \rangle_{\rm Eucl}} \right|^{2(n+1)}\tu{d}\nu(w) = $$ $$|w_0|^p (1-\|z_0\|_{\rm Eucl}^2)^{2(n+1)} \langle K_{\rm B} (z_0, \cdot ), K_{\rm B} (\cdot , z_0) \rangle = $$ $$|w_0|^p (1-\|z_0\|_{\rm Eucl}^2)^{2(n+1)} K_{\rm B} (z_0, z_0 )= |w_0|^p (1-\|z_0\|_{\rm Eucl}^2)^{n+1}. $$ Now take the $p$th root. \hfill $\Box$ \medskip Let us also rephrase the first representer theorem (Theorem \ref{repr}) in the current context. \begin{theorem}\label{repr2} Consider $\cZ^p$ as before and let $K$ be its reproducing kernel. Let ${\bf z}=\{ z_1, \ldots , z_k \}$, with $z_j\neq z_l$ when $j\neq l$, and ${\bf s}=\{ s_1, \ldots , s_k \}$ be as in Problem \ref{P:Interpolation}. Then there exists a unique $f_\tu{min}\in \cZ^p$ with $f(z_i)=s_i$, $i=1,\ldots , k$, such that \[ \|f_\tu{min}\|=\min \{\|f\| \colon f\in \cZ^p\ \hbox{\rm with} \ f(z_i)=s_i, i=1,\ldots , k\}. \] Moreover, we have that $f_\tu{min}^{\star_{\cZ^p}}\in \tu{span} \{ K(\cdot, z_i), i=1,\ldots , k \}$. 
\end{theorem} \subsection{Other examples} There are many other settings where the theory developed in this paper can be applied. We provide some here. \subsubsection{Weighted Bergman space on the disk} Let $\alpha > -1$. By the weighted Bergman space $A^p_\alpha$ on the disk we mean the space of all functions $f$ that are analytic on $\BD$ such that $$ \|f\|_{A^p_\alpha} := \left( (\alpha+1)\int_\mathbb{D} |f(z)|^p \, (1-|z|^2)^\alpha \tu{d}\nu (z) \right)^{1/p} < \infty. $$ Here the kernel is given by $$K_{{\rm B}, \alpha} (w,z) = \frac{\alpha+1}{(1-\overline{w}z)^{\alpha+2}} \; \; \; \; \; z, w \in \mathbb{D}.$$ The multivariable case is given similarly. For more details, see for instance \cite{DurenBergman}. \subsubsection{Weighted Bergman space on the right half-plane} Let $\alpha > -1$ and $\BC_{{\rm realpos}}$ be the open right half-plane. The weighted Bergman space $A^p_\alpha (\BC_{{\rm realpos}})$ on the right half-plane is the space of all functions $f$ that are analytic on $\BC_{{\rm realpos}}$ such that $$ \|f\|_{A^p_\alpha(\BC_{{\rm realpos}})} := \left( \frac{1}{\pi}\int_{\BC_{{\rm realpos}}} |f(x+iy)|^p x^\alpha \, dx \, dy \right)^{1/p}. $$ The reproducing kernel (see \cite{Elliott}) is $$K(w,z) = \frac{2^\alpha(\alpha+1)}{(\overline{w}+z)^{\alpha+2}} \; \; \; \; \; z,w \in \mathbb{C}_{\rm realpos}.$$ When $\alpha=0$ we get the traditional Bergman space on the half-plane. \subsubsection{Harmonic Bergman functions on half-spaces} We follow the presentation in \cite{Ramey}. The upper half-space $H_n$ is the open subset of $\BR^n $ given by $$H_n = \{ (x, y) \in \BR^n : x \in \BR^{n-1}, y > 0 \}. $$ Let $p \in (1, \infty ). $ We have let $\tu{d}V$ denote the Lebesgue volume measure on $H_n$. The harmonic Bergman space $b^p(H_n)$ on $H_n$ consists of harmonic functions $u$ on $H_n$ such that $$ \| u \|_{b^p(H_n)} =\left( \int_{H_n} |u|^p \tu{d}V \right)^\frac1p <\infty .$$ Due to \cite[Proposition 8.3]{ABR} this is a Banach space (of functions) over $\BR$. Here the reproducing kernel is given by $$ R(z,w)=\frac{4}{nV(B_{{\rm real},n})} \frac{n(z_n+w_n)^2-|z-\hat{w}|^2}{|z-\hat{w}|^{n+2}},$$ where $\hat{w} = (w_1,\ldots , w_{n-1}, -w_n)$ and $B_{{\rm real},n}$ is the unit ball in $\BR^n$; see \cite[Theorem 8.22]{ABR}. For $f \in L^p(H_n, \tu{d}V)$ we let $$ \Pi f (z) = \int_{H_n} f(w) R(z,w)\tu{d} w. $$ We now have that $\Pi : L^p(H_n, \tu{d}V) \to b^p(H_n)$ is a bounded projection \cite[Theorem 3.2]{Ramey}. In addition, due to \cite[Theorem 3.3]{Ramey} we have that the dual space of $b^p(H_n)$ can be identified with $b^q(H_n)$, where as usual $\frac1p+\frac1q = 1$. These results give us the tools to apply the theory of the previous sections (but now over the reals). \subsubsection{$\ell^p$, $1<p<\infty$, with a change of basis}\label{lpS} This example has a different flavor from the previous ones, but still provides an example where the general theory can be applied. Let $p\in(1,\infty)$ and consider the Banach space $$ \ell^p=\left\{x=(x_j)_{j=1}^\infty : x_j\in \BC, \| x \|_{\ell^p}:= \bigg( \sum_{j=1}^\infty |x_j|^p \bigg)^\frac1p <\infty \right\}.$$ Let $S: \ell^p \to \ell^p$, $1<p<\infty$, be an invertible map. On $\ell^p$ we define the norm $$ \| x \|_{\ell^p_S} := \| Sx \|_{\ell^p}. $$ The resulting Banach space we denote by $\ell^p_S$. 
The dual space is isometrically isomorphic to $\ell^q_{(S^T)^{-1}}$, with $1<q<\infty$ such that $\frac{1}{p}+\frac{1}{q}=1$, endowed with the norm $$ \| y \|_{\ell^q_{(S^T)^{-1}}}:= \| (S^T)^{-1} y \|_{\ell^q}.$$ Here $S^T: \ell^q \to \ell^q$ is the transpose of $S$, i.e., $\inn{e_i}{S^T(e_j)}= \inn{Se_i}{e_j}$, where $e_i$ is the sequence with a 1 in spot $i$ and zeroes elsewhere. We will use the shorthand notation $S^{-T} := (S^T)^{-1}$. The duality pairing is \[ \inn{x}{y}=\sum_{i=1}^\infty x_i y_i ,\quad x\in \ell^p_S, y\in \ell^q_{S^{-T}}. \] The semi-inner product of $\ell^p$ is unique, and the $\star$-operation associating the semi-inner product and the duality pairing is given by \[ x^{\star_{\ell^p_S}}= S^T [(Sx)^{\star_{\ell^p}}]. \] It then follows that the semi-inner product on $\ell^p_S$ is given by \[ [x,y]=\inn{x}{y^{\star_{\ell^p_S}}}. \] Let us next illustrate Theorem \ref{repr}. In this case $\Omega=\BN$ and $(H(j,\cdot ))^{\star_\cX}= K(\cdot , j) = e_j$, which is the $j$th standard basis vector. Observe that this yields $$ H(i,\cdot )= S^{-1} [(S^{-T} e_i)^{\star_{\ell^q}}].$$ When we take finite sequences $x=(x_i)_{i=1}^n$ and $S$ an invertible $n\times n$ matrix, we denote the space by $\ell^p_{n,S}$. The above considerations, with obvious minor modifications, all go through for this setting as well. \section{Optimal interpolation examples and numerical results}\label{S:num} In this section we illustrate how the theory developed in the previous sections can be used to compute minimal norm interpolants numerically. \subsection{Optimal interpolation in $H^p(\BD )$, $1<p<\infty$.}In this subsection we show how we can apply the representer theorem for SIP-RKBS to algorithmically solve the optimal interpolation problems in the Hardy space $H^p (\BD )$ on the disk. In general, it is hard to determine $ g^{\star_{{\widetilde{H^p(\BD)}}}}$. However, in the setting of the representer theorem with a rational reproducing kernel we may use that $g$ is a rational function. Thus we may apply the results of \cite{MR50}; see also \cite[Section 8.4]{D70} and \cite{RS, K, BK12}. This yields the following procedure. {\em Procedure to find $ g^{\star_{{\widetilde{H^p(\BD)}}}}$.} We now have the following procedure to find $ g^{\star_{{\widetilde{H^p(\BD)}}}}$ in case $ g \in {\widetilde{H^p(\BD)}}$ has a finite number of poles $\beta_i$, $i=1,\ldots, n$, in ${\mathbb D}$. Let $f= g^{\star_{{\widetilde{H^p(\BD)}}}}$ and $k$ the corresponding extremal kernel for $\varphi_g$, which will have the same poles as $g$. In this case, by \cite[Lemma in Section 8.4]{RS}, we have that $$ f(e^{it})k(e^{it}) = | \sum_{i=1}^n \frac{c_i}{1-\overline{\beta_i} e^{it}} |^2 \ (= \| g \|_{{\widetilde{H^p(\BD)}}} |f(e^{it})|^p) $$ for some $c_i\in{\mathbb C}$. As an aside, we mention that similar results for the Bergman space have so far been more elusive, as for instance Conjecture 1 in \cite{KS97} shows. Then for $z$ in a neighborhood of ${\mathbb T}$ we have that $$ f(z) k(z) = \left( \sum_{i=1}^n \frac{c_i}{1-\overline{\beta_i} z} \right) \left( \sum_{i=1}^n \frac{\overline{c_i}z}{z-{\beta_i} } \right). $$ This yields that $$ f(z) = B(z) \left( \frac{\sum_{i=1}^n \frac{c_i}{1-\overline{\beta_i} z}}{B(z)} \right)^\frac2p, $$ where $B$ is the Blaschke product with the same roots as $\gamma(z):= \sum_{i=1}^n \frac{c_i}{1-\overline{\beta_i} z}$. 
Indeed, the above is derived by observing that the equality $|\gamma|^2=\| g \|_{{\widetilde{H^p}}} |f|^p$ determines the outer part of $f\in H^p(\BD)$; see also \cite[Formulas (1.3.5-6)]{MR50}. The unknowns $c_i$ should now be chosen so that $g-f^{\star_{L^p(\BT)}} \in zH^q(\BD)$. We thus find that $$ g - \frac{1}{\|f \|_{L^p(\BT)}^{p-2} B} (\overline{\gamma} B )^\frac2p (|\gamma|^\frac2p)^{p-2} = $$ $$ g- \frac{1}{\|f \|_{L^p(\BT)}^{p-2} B} (\overline{\gamma} B )^\frac2p \left( \overline{\gamma} B\right)^{\frac{p-2}{p}} \left( \frac{\gamma}{B} \right)^{\frac{p-2}{p}} = g-\frac{1}{\|f \|^{p-2}_{L^p}}\overline{\gamma} \left( \frac{\gamma}{B} \right)^{1-\frac2p} \in zH^q(\BD).$$ Using that $\overline{\gamma}(z)= \sum_{i=1}^n \frac{\overline{c_i}z}{z-{\beta_i} }$ and absorbing the factor $\frac{1}{\|f \|_{L^p}}$ in the unknown constants, we need to choose $d_i\in {\mathbb C}$ so that $$ g(z) - \left( \sum_{i=1}^n \frac{\overline{d_i}z}{z-{\beta_i} } \right) \left( \frac{1}{B(z)} \sum_{i=1}^n \frac{d_i}{1-\overline{\beta_i} z} \right)^{1-\frac2p} \in zH^q(\BD) , $$ where $B$ is the Blaschke product with the same roots as $\sum_{i=1}^n \frac{d_i}{1-\overline{\beta_i} z}$. In other words, the residues of $\frac{g(z)}{z}$ and $$\frac{k(z)}{z}:= \left( \sum_{i=1}^n \frac{\overline{d_i}}{z-{\beta_i} } \right) \left( \frac{1}{B(z)} \sum_{i=1}^n \frac{d_i}{1-\overline{\beta_i} z} \right)^{1-\frac2p}$$ at the poles $\beta_i$ should coincide. We then have that $g^{\star_{{\widetilde{H^p(\BD)}}}} = f=k^{\star_{L^q(\BT)}}$ and $$\| g \|_{\widetilde{H^p(\BD)}} =\| \varphi_g\|_{H^p(\BD)'} = \| \varphi_k \|_{H^p(\BD)'} = \| k \|_{L^q}= \left( \frac{1}{2\pi} \int_0^{2\pi} |\sum_{i=1}^n \frac{d_i}{1-\overline{\beta_i} e^{it}}|^2 dt \right)^{\frac1q} . $$ For the case when there are several interpolation conditions we can recover the procedure from \cite{CMS94} by applying the above results as follows. We seek $f \in {H^p(\BD)}$ of minimal norm so that $f(z_i)=w_i$, $i=1,\ldots , n$. Call the optimal function $f_{\rm min}$. By Theorem \ref{repr} we know that $f_{\rm min}^{\star_{H^p(\BD)}} \in {\rm span} \{ \frac{1}{1-\overline{z_i}z} : i=1,\ldots, n \}$. Using now the procedure from the previous section, we arrive at the following. {\em Procedure for finding the minimal norm interpolant in $H^p(\BD)$:} \noindent For $d_i\in\BC, i=1, \ldots ,n$, consider $$f(z)=B(z) \left( \frac{\sum_{i=1}^n \frac{d_i}{1-\overline{z_i}z}}{B(z)} \right)^\frac2p,$$ where $B$ is the Blaschke product with the same roots as $\sum_{i=1}^n \frac{d_i}{1-\overline{z_i}z}.$ We now need to determine the values of $d_i \in \BC$, $i=1,\ldots, n$, so that $f(z_i)=w_i$, $i=1,\ldots , n$. The resulting function $f$ equals $f_{\rm min}=f_{\rm min}^{[p]}$, where we added the superscript $[p]$ to emphasize that $f_{\rm min}$ depends on $p$. When we apply the above procedure with $z_1=\frac12, z_2=-\frac13, z_3=\frac{i}{4}$, $w_1=1,w_2=0.9, w_3=0.8$ and letting $p\in[1.7,2.6]$ we obtain the graph in Figure \ref{fig:enter-label0} for $\| f_{\rm min}^{[p]} \|_{H^p(\BD)}$ as a function of $p$: \begin{figure}[h] \centering \includegraphics[width=5in, height=3in]{RKBSgraph4.png} \caption{Value of $\| f_{\rm min}^{[p]} \|_{H^p(\BD)}$ as a function of $p$.} \label{fig:enter-label0} \end{figure} \begin{remark} If $\mu$ is a finite measure, $1<p\le r<\infty$ and $f\in L^p(\BT)=L^p(\BT) \cap L^r(\BT) $, we have that \begin{equation}\label{prineq} \| f \|_{L^p(\BT) } \le \mu(\Omega)^{\frac1p -\frac1r} \| f \|_{L^r(\BT) }. 
\end{equation} In the above we have that $\mu$ is a probability measure. Now, as both $f_\tu{min}^{[p]}$ and $f_\tu{min}^{[r]}$ are interpolants, we obtain for $p\le r$ that $$ \| f_\tu{min}^{[p]} \|_{H^p(\BD)} \le \| f_\tu{min}^{[r]} \|_{H^p(\BD)} \le \| f_\tu{min}^{[r]} \|_{H^r(\BD)}, $$ where in the first inequality we use that $f_\tu{min}^{[p]}$ is the optimal solution in the $H^p(\BD)$ norm and in the second inequality we use \eqref{prineq}. This explains why the graph in Figure \ref{fig:enter-label0} is increasing. \end{remark} Finally, let us mention that we only let $p$ vary in the interval $[1.7,2.6]$ as the numerical results were less reliable beyond this region. We suspect that this is due to some of the roots (needed for $B(z)$) approaching the boundary of the disk when $p$ moves outside this region. To analyze this in more depth will be a separate project. \subsection{Optimal interpolation in $\ell^p_{n,S}$.} We use the finite dimensional version of Subsection \ref{lpS}. We have that $\Omega = \{ 1, \ldots , n \}$ and we choose $z_1, \ldots, z_k \in \Omega$. For notational convenience, let us assume that $z_1=1, \ldots , z_k=k$ are the interpolation points. Thus we are looking for a minimal norm solution to $x\in \ell^p_{n,S}$ with $x(j)=s_j$, $j=1, \ldots , k$. Theorem \ref{repr} now tells us that the optimal solution satisfies $$x_\tu{min}^{\star_\cX}=c_1 e_1+ \cdots + c_k e_k,$$ for some $c_1,\ldots , c_k \in {\mathbb C}$. In this case, we have \begin{equation}\label{xmin} x_\tu{min} = S^{-1}[(S^{-T}(c_1 e_1+ \cdots + c_k e_k))^{\star_q}].\end{equation} Note that when $q$ is an even integer, the equations the interpolation conditions give are polynomial of total degree $q-1$ in $c_j$ and $\overline{c_j}$, $j=1,\ldots, n$. To illustrate the above numerically, we implemented the above procedure in MATLAB as follows. This involved writing a function that takes as input $c_1,\ldots ,c_k$, and then it produces $ (x_\tu{min})_1-s_1, \ldots , (x_\tu{min})_k-s_k$, where $x_\tu{min}$ is as in \eqref{xmin}. Subsequently, we use the MATLAB command 'fsolve' to find the $c_1,\ldots ,c_k$ that produces all zeros as the outcome of the aforementioned function. For the finite dimensional case with $n=4$, $$ S = \frac14 \begin{bmatrix} 1 & 1 & 1 & 1 \cr 1 & -1 & 1 & -1 \cr 1 & 1 & -1 & -1 \cr 1 & -1 & -1 & 1 \end{bmatrix} $$ and interpolation conditions given by $k=3$ and $s_1=1, s_2=2, s_3=3$, we find the results below. The first graph (in Figure \ref{fig:enter-label}) has on the $x$-axis the value of $p$ and on the $y$-axis the value of $(x_{\rm min}^{[p]})_4$. The lowest value of $p$ is $1.02$. \begin{figure}[h] \centering \includegraphics[width=5in, height=3in]{RKBSgraph.png} \caption{Value of $(x_{\rm min}^{[p]})_4$ as a function of $p$.} \label{fig:enter-label} \end{figure} The second graph (in Figure \ref{fig:enter-label2}) shows the minimal norm of the minimal interpolant as a function of $p$. \begin{figure}[h] \centering \includegraphics[width=5in, height=3in]{RKBSgraph2.png} \caption{Value of $\| x_{\rm min}^{[p]} \|_{\ell^p_{4,S}}$ as a function of $p$; here $S$ is $4 \times 4$.} \label{fig:enter-label2} \end{figure} We note that for $p=1$, any value in $[0,4]$ is a possible $(x_{\rm min}^{[p]})_4$ (not unique!), and the optimal norm is 3.5. When $p=\infty$, we have $(x_{\rm min}^{[p]})_4=-1$ and the optimal norm is 1.25. 
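To make the procedure concrete, the following is a minimal numerical sketch of \eqref{xmin} for the $4\times 4$ example just described, written in Python with \texttt{scipy.optimize.fsolve} playing the role of MATLAB's \texttt{fsolve}. The helper names are ours, we assume real data (so that the duality map of $\ell^q$ takes the sign-power form $z\mapsto \mathrm{sign}(z)|z|^{q-1}/\|z\|_q^{q-2}$, as in the $L^p$ computations earlier in this section), and we pick the arbitrary value $p=1.5$ from the plotted range; it is meant as an illustration, not a definitive implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def star(z, r):
    # duality map of ell^r for real vectors: z -> sign(z)|z|^{r-1} / ||z||_r^{r-2}
    nrm = np.linalg.norm(z, ord=r)
    return np.sign(z) * np.abs(z) ** (r - 1) / nrm ** (r - 2)

def x_min(c, S, p, k):
    # candidate minimizer from (xmin): S^{-1}[(S^{-T}(c_1 e_1 + ... + c_k e_k))^{*_q}]
    q = p / (p - 1.0)
    e = np.zeros(S.shape[0]); e[:k] = c
    return np.linalg.solve(S, star(np.linalg.solve(S.T, e), q))

S = 0.25 * np.array([[1,  1,  1,  1],
                     [1, -1,  1, -1],
                     [1,  1, -1, -1],
                     [1, -1, -1,  1]], dtype=float)
s = np.array([1.0, 2.0, 3.0]); k = 3; p = 1.5

# choose c_1, ..., c_k so that the interpolation conditions x(j) = s_j hold
c = fsolve(lambda c: x_min(c, S, p, k)[:k] - s, np.ones(k))
x = x_min(c, S, p, k)
print(x[3], np.linalg.norm(S @ x, ord=p))  # fourth entry and minimal norm (cf. the graphs above)
\end{verbatim}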
Next we took a $16\times 16$ matrix $S$ so that $S^{-1}$ is the tridiagonal Toeplitz with main diagonal entries equal to 2, and the sub/super-diagonal entries equal to 1. The interpolation conditions are $x(1)=1, x(6)=2, x(11)=3$. We let $p$ range from 1.05 to 10. The results are in Figure \ref{fig:enter-label3} \begin{figure}[h] \centering \includegraphics[width=5in, height=3in]{RKBSgraph5.png} \caption{Value of $\| x_{\rm min}^{[p]} \|_{\ell^p_{16,S}}$ as a function of $p$; here $S$ is $16 \times 16$.} \label{fig:enter-label3} \end{figure} \begin{remark} In general we have that $1<p\le r<\infty$ implies $\| x\|_{\ell^p} \ge \| x \|_{\ell^r}. $ Thus, as both $x_\tu{min}^{[p]}$ and $x_\tu{min}^{[r]}$ are interpolants, we obtain for $p\le r$ that $$ \| S x_\tu{min}^{[p]} \|_{\ell^p} \ge \| S x_\tu{min}^{[p]} \|_{\ell^r} \ge \| S x_\tu{min}^{[r]} \|_{\ell^r}. $$ This explains why the graphs in Figures \ref{fig:enter-label2} and \ref{fig:enter-label3} are decreasing. \end{remark} The graphs in Figures \ref{fig:enter-label2} and \ref{fig:enter-label3} suggest that the optimal value is a continuous function of $p$. In fact, $x_\tu{min}^{[p]}$ depends continuously on $p$ (as Figure \ref{fig:enter-label} suggests) by the Maximum Theorem, as we show next. \begin{lemma} $x_\tu{min}^{[p]}$ and $\| x_\tu{min}^{[p]} \|_{\ell^p_{n,S}}$ depend continuously on $p$. \end{lemma} \begin{proof} Let $M>0$ be so that $$ \sup_{1<p<\infty} \| S x_\tu{min}^{[p]} \|_{\ell^1_n} <M.$$ For instance, one may choose $M=\| S (\sum_{j=1}^k s_je_j )\|_{\ell^1_n} + 1$. Let $$C=\{ x=(x_r)_r : x_j=s_j, j=1\dots, k, \| x \|_{\ell^1_n} \le M \}.$$ Next, let $$ f(x,p) = \| x \|_{\ell^p_{n,S}} , f^*(p)= \min_{x\in C} f(x,p) . $$ Applying now the Maximum Theorem (see, e.g., \cite[Theorem 2, page 116]{Berge}) we obtain that $$ C^*(p)=\{ x \in C : f(x,p) = f^*(p) \} $$ is continuous in $p$. In our case, we have $C^*(p) = \{ x_\tu{min}^{[p]} \}$, and thus the lemma follows. As a composition of continuous functions is continuous, the continuity of $\| x_\tu{min}^{[p]} \|_{\ell^p_{n,S}}$ also follows. \end{proof} \subsection{Using $\ell^p_{N,S}$ norm minimization in time delay estimation} This subsection is based on the papers \cite{MN,ZSZ}. We start with a description of the problem as outlined in \cite{ZSZ}. This concerns observed discrete-time signals at two sensors $$ \left\{ \begin{matrix} x_1[n] = s[n] + v_1[n] , \ \ \ \ \ \ \ \ & n=1,2\ldots , N, \cr x_2[n] = \beta s[n-D] + v_2[n] , &n=1,2\ldots, N.\end{matrix} \right.$$ where $s[n]$ is an unknown source signal, $\beta$ and $D$ are the attenuation factor and time delay between the sensors, respectively, and $v_1[n]$ and $v_2[n]$ are uncorrelated additive noises independent of the source signal. In order to estimate the time delay $D$, the following minimization is used: \begin{equation}\label{hopt} h_{\rm opt} = {\rm arg \ min}_h \sum_{n=M+1}^{N-M} \left| x_2[n] - \beta \sum_{n=-M}^M h_i x_1[n-i]\right|^p,\end{equation} where the approximation $$ s[n-D]=\sum_{i=-\infty}^\infty s[n-i]{\rm sinc}(i-D)\approx \sum_{i=-M}^M s[n-i]{\rm sinc}(i-D)$$ is used. The delay $D$ is next approximated by computing $$D_{\rm opt} ={\rm arg \ max}_D \sum_{i=-M}^M h_{{\rm opt},i}{\rm sinc}(D-i).$$ Depending on the nature of the noise some values of the parameter $p$ perform better than others. When the noise is Gaussian, $p=2$ is a good choice, while in non-Gaussian instances a value $1<p<2$ may perform better. We can recast the problem in the setting of this paper as follows. 
Let $T$ be the (tall) Toeplitz matrix $$ T= (x_1[i-j])_{i=2M+1, j=0}^{N \ \ \ \ \ \ \ \ 2M}$$ and $y=(x_2[i])_{i=M+1}^{N-M}$, and perform Gaussian elimination to write $[T\ y]$ as $$[T\ y]=S[E \ \hat{y}],$$ with $S$ invertible and $E$ the row reduced echelon form of $T$. Let $r$ be the rank of $T$. We now have that \eqref{hopt} is equivalent to \begin{equation}\label{hopt2} x_{\rm min} = {\rm arg \ min}_x \| Sx\|_p \ {\rm subject \ to} \ x_{j}=\hat{y}_j, j=r+1,\ldots , N ,\end{equation} and $Eh_{\rm opt}=\hat{y}-x_{\rm min}$. Notice that the last equation is easy to solve, as $E$ is in row reduced echelon form. As an illustration, we took a source signal $s[n]$, $n=1, \ldots, 2001$, with each entry a random number in the interval $[0,1]$ (using 'rand' in Matlab). In addition, we took $\beta=1$, $D=5$ and non-Gaussian noise with values $v_2[n] \in \{ -0.4,0,0.4 \}$ and $v_2[n] \in \{ -10,0,10 \}$. We put $M=10$. Letting $p=1.01$, we find the solutions $$ h_{{\rm opt},p=1.01}^{0.4} = \begin{bmatrix} -0.000232823052903 \cr 0.000040776761574 \cr -0.000049481864553\cr 0.000133215025281\cr -0.000183737404681\cr 1.000000881642775\cr -0.000044649227715\cr -0.000196161795022\cr -0.000095126313718\cr -0.000097099554212\cr 0.000033868802636\cr -0.000146470233426\cr -0.000200182922050\cr -0.000079300151792\cr 0.000079332208475\cr 0.000120242785738\cr 0.000065015502363\cr 0.000281342918683\cr -0.000004940174283\cr 0.000070394035326\cr 0.000007260964693 \end{bmatrix}, h_{{\rm opt},p=1.01}^{10}=\begin{bmatrix} -0.005820061605867 \cr 0.001019548250559 \cr -0.001236528515010 \cr 0.003331180840021 \cr -0.004592569849389 \cr 1.000022762279084 \cr -0.001116623684563 \cr -0.004903685210011 \cr -0.002378077978435 \cr -0.002427966380243 \cr 0.000845998083710 \cr -0.003661436517994 \cr -0.005004189106320 \cr -0.001983061497221 \cr 0.001983807240446 \cr 0.003005241842425 \cr 0.001625326145315 \cr 0.007033634145231 \cr -0.000123313898717 \cr 0.001759697717274 \cr 0.000181776684826 \end{bmatrix}, $$ where the superscript in $h_{\rm opt}$ indicates the level of the noise. From this it is clear that we retrieve $D=5$ for both noise levels. When we use $p=2$ we find $$ h_{{\rm opt},p=2}^{0.4} = \begin{bmatrix} 0.018009040220347 \cr 0.002332350834668 \cr 0.001890133545715 \cr -0.003072754337702 \cr 0.030593903501315 \cr 0.988017340867130 \cr -0.021206323141440 \cr 0.017823418513643 \cr 0.015038119392389 \cr 0.002017146896049 \cr -0.001601106447733 \cr 0.012924523085732 \cr -0.000953916946889 \cr -0.008799966349236 \cr 0.001692369555610 \cr -0.016685275281673 \cr -0.012582988810589 \cr -0.023738009359622 \cr -0.010928158128446 \cr -0.011105474647007 \cr 0.009191378729438 \end{bmatrix}, h_{{\rm opt},p=2}^{10} = \begin{bmatrix} 0.450226005508666 \cr 0.058308770866700 \cr 0.047253338642885 \cr -0.076818858442547 \cr 0.764847587532876 \cr 0.700433521678247 \cr -0.530158078536007 \cr 0.445585462841087 \cr 0.375952984809734 \cr 0.050428672401218 \cr -0.040027661193312 \cr 0.323113077143296 \cr -0.023847923672227 \cr -0.219999158730899 \cr 0.042309238890246 \cr -0.417131882041814 \cr -0.314574720264717 \cr -0.593450233990547 \cr -0.273203953211154 \cr -0.277636866175162 \cr 0.229784468235959 \cr \end{bmatrix} .$$ At the noise level 0.4, we still conclude $D=5$ when we use $p=2$, but for the noise level 10 the optimization with $p=2$ no longer gives a useful result (as the largest entry in $h_{\rm opt}$ moved position and several other entries are not that far from the largest). 
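For illustration, the following Python sketch generates data according to the model above and estimates the delay by minimizing \eqref{hopt} directly with a generic solver (the cost is convex for $p>1$), rather than through the row-reduction recast \eqref{hopt2} used above. The source signal, the three-valued impulsive noise (taken equiprobable here, and omitted on the first sensor), $\beta=1$, $D=5$ and $M=10$ follow the description in the text, while the solver choice (\texttt{scipy.optimize.minimize}) and all names are ours; the sketch is illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N, M, D, beta, p = 2001, 10, 5, 1.0, 1.01

s = rng.random(N)                                # source signal, entries uniform in [0, 1]
x1 = s.copy()                                    # sensor 1 (noise omitted in this sketch)
x2 = np.zeros(N); x2[D:] = beta * s[:N - D]      # sensor 2: delayed copy of the source
x2 += rng.choice([-0.4, 0.0, 0.4], size=N)       # three-valued impulsive noise

T = toeplitz(x1[2 * M:], x1[2 * M::-1])          # tall Toeplitz matrix (x1[i-j])
y = x2[M:N - M]

res  = lambda h: y - beta * (T @ h)              # residual of (hopt)
cost = lambda h: np.sum(np.abs(res(h)) ** p)
grad = lambda h: -beta * p * T.T @ (np.sign(res(h)) * np.abs(res(h)) ** (p - 1))
h = minimize(cost, np.zeros(2 * M + 1), jac=grad, method="L-BFGS-B").x

# recover D by maximizing the sinc interpolation of h over a fine grid
grid = np.linspace(-M, M, 4001)
i = np.arange(-M, M + 1)
D_est = grid[np.argmax([np.sum(h * np.sinc(d - i)) for d in grid])]
print(D_est)                                     # expected to be close to 5
\end{verbatim}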
\subsection{Optimal interpolation in multivariable Hardy and Bergman spaces where $p\in 2\BN$.} Using ideas from \cite[Proof of Theorem 2]{Harmsen} we have the following observation. As before $\cZ^p$ is either a Hardy space $H^p$ or a Bergman space $A^p$. \begin{proposition}\label{Bas2} Let $1<p<\infty$. Consider the interpolation problem in $\cZ^p$ with $f(z_i)=s_i\neq 0$, $i=1,\ldots,k$. Let the unique minimal norm interpolant be denoted by $f_{\tu{min},p}$. Associated, let $g_{\tu{min}, {\bf{s}}^{p/2}}$ denote the unique minimal interpolant in $\cZ^2$ with interpolation conditions $g(z_i)=s_i^{\frac{p}{2}}$, $i=1,\ldots, k$. If $f_{\tu{min},p}^{\frac{p}{2}}$ and $g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p $ are analytic, $f_{\tu{min},p}^{\frac{p}{2}}(z_i)=s_i^{\frac{p}{2}}$ and $g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p (z_i)=s_i$, $i=1,\ldots,k$, then $f_{\tu{min},p}=g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p$. When $p\in 2\BN$, the analyticity of $f_{\tu{min},p}^{\frac{p}{2}}$ and the interpolation conditions $f_{\tu{min},p}^{\frac{p}{2}}(z_i)=s_i^{\frac{p}{2}}$, $i=1,\ldots,k$, are automatic. \end{proposition} Note that confirming that both $f_{\tu{min},p}^{\frac{p}{2}}$ and $g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p$ are indeed interpolants, is necessary as the branch of logarithm should have been chosen consistently. Indeed, it is for instance possible that the initial branch chosen to compute $s_i^{\frac{p}{2}}$, $i=1,\ldots, k$, may not work for $g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p $. \begin{proof} If $f_{\tu{min},p}^{\frac{p}{2}} \in \cZ^2$ then this function satisfies the interpolation conditions $g(z_i)=s_i^{\frac{p}{2}}$, $i=1,\ldots, k$. As $g_{\tu{min}, {\bf{s}}^{p/2}} \in \cZ^2$ is the minimal norm interpolant for these conditions, we obtain that $$ \| f_{\tu{min},p}^{\frac{p}{2}} \|_{\cZ^2} \ge \| g_{\tu{min}, {\bf{s}}^{p/2}} \|_{\cZ^2}. $$ But then $$ \| f_{\tu{min},p} \|_{\cZ^p}^p = \| f_{\tu{min},p}^{\frac{p}{2}} \|_{\cZ^2}^2 \ge \| g_{\tu{min}, {\bf{s}}^{p/2}} \|_{\cZ^2}^2 = \| g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p \|_{\cZ^p}^p .$$ As the minimal norm interpolant is unique, we must have $f_{\tu{min},p}=g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p$. \end{proof} As is well-known (see, e.g., \cite[Chapter 3]{PaulsenR}), in the case of a reproducing kernel Hilbert space (RKHS) applying the first representer theorem (Theorem \ref{repr}) comes down to solving a system of linear equations. Indeed, we obtain that \begin{equation}\label{rkhs} x_\tu{min} = \sum_{j=1}^k c_j K(\cdot , z_j) \ \hbox{\rm where} \ (c_j)_{j=1}^k = [(K(z_j,z_i))_{i,j=1}^k]^{-1} (s_i)_{i=1}^k . \end{equation} If we combine this with Proposition \ref{Bas2}, we are able to easily come up with a candidate for the minimal norm interpolant in $H^p$ or $A^p$ for the case when $p \in 2\BN$. Subsequently checking for analyticity (of $g_\tu{min}^{2/p}$ in Proposition \ref{Bas2}), then seals the deal. Let us illustrate this on an example. \begin{example} We consider the two-variable Bergman space $A_2^4$. Let $z_1=(\frac14, \frac34)$, $z_2=(0,0)$ and $s_1=1$, $s_2=98/100$. Proposition \ref{Bas2} tells us to consider the minimal norm interpolant $g_\tu{min}$ in $A^2_2$ with interpolation conditions $g(z_1)=1^2$ and $g(z_2)=(98/100)^2$. 
Solving for $c_1$ and $c_2$ in $$ \begin{bmatrix} \frac{1}{(1-\langle z_1 , z_1 \rangle_{\rm Eucl})^3} & \frac{1}{(1-\langle z_2 , z_1 \rangle_{\rm Eucl})^3} \cr \frac{1}{(1-\langle z_1 , z_2 \rangle_{\rm Eucl})^3} & \frac{1}{(1-\langle z_2 , z_2 \rangle_{\rm Eucl})^3} \end{bmatrix} \begin{bmatrix} c_1\cr c_2 \end{bmatrix} = \begin{bmatrix} \frac{16^3}{6^3} & 1 \cr 1 & 1 \end{bmatrix} \begin{bmatrix} c_1\cr c_2 \end{bmatrix} = \begin{bmatrix} 1\cr \frac{98^2}{100^2} \end{bmatrix},$$ giving $c_1=2673/1212500, c_2=1161812/1212500$, we find that $$ g_\tu{min}(\zeta)= \frac{1}{1212500}\left( \frac{2673}{(1-\zeta_1/4-3\zeta_2/4)^3}+1161812\right).$$ For $\zeta=(\zeta_1,\zeta_2) \in B_2$ we have that $|\langle \zeta , \begin{bmatrix} 1/4 \cr 3/4 \end{bmatrix} \rangle_{\rm Eucl}| \le \sqrt{10}/4 $ and thus $$|1-\zeta_1/4-3\zeta_2/4|^3 \ge |1-\frac{\sqrt{10}}{4}|^3 \approx 0.00918587. $$ Since $2673/1161812 \approx 0.0023007<0.00918587$, we have that $|g_\tu{min}(\zeta)| \ge \epsilon >0$, $\zeta \in B_2$, for some $\epsilon>0$. Thus $g_\tu{min}^\frac12$ is a well-defined analytic function on $B_2$, and thus by Proposition \ref{Bas2} we find that the optimal interpolant is $f_{\tu{min},4}=g_\tu{min}^\frac12$. \hfill $\Box$ \end{example} Finally, we observe that in the setting of Figure \ref{fig:enter-label0}, the function $f_\tu{min}^{[p]}$, $p\in [1.7,2.6]$, does not have any roots in $\BD$. Thus, by Proposition \ref{Bas2}, $f_\tu{min}^{[p]}=g_{\tu{min}, {\bf{s}}^{p/2}}^\frac2p$, explaining the continuity we see in Figure \ref{fig:enter-label0}. \bigskip \subsection*{Acknowledgements} We thank Professor David Ambrose for pointing us to Henry Helson's book 'Harmonic Analysis'. This work is based on research supported in part by the National Research Foundation of South Africa (NRF, Grant Numbers 118513 and 127364) and the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS). Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF and CoE-MaSS do not accept any liability in this regard. We also gratefully acknowledge funding from the National Graduate Academy for Mathematical and Statistical Sciences (NGA(MaSS)). In addition, HW is partially supported by United States National Science Foundation (NSF) grant DMS-2000037. \begin{thebibliography}{xx} \bibitem{Aizerman} M.A. Aizerman, E.M. Braverman, and L.I. Rozonoer, Theoretical foundations of the potential function method in pattern recognition learning, {\em Automat.\ Remote Control} {\bf 25} (1964), 821--837. \bibitem{aron0} N. Aronszajn, La th\'eorie des noyaux reproduisants et ses applications I, {\em Proc.\ Cambridge Philos.\ Soc.} {\bf 39} (1943), 133--153. \bibitem{aron} N. Aronszajn, Theory of reproducing kernels, {\em Trans.\ Amer.\ Math.\ Soc.} {\bf 68} (1950), 337--404. \bibitem{AH05} K. Atkinson and W. Han, {\em Theoretical numerical analysis}, Texts Appl.\ Math.\ {\bf 39}, Springer, Dordrecht, 2009. \bibitem{Axler} S. Axler, {\em Bergman spaces and their operators}, Pitman Res.\ Notes Math.\ Ser.\ {\bf 171}, Longman Scientific \& Technical, Harlow, 1988, pp.\ 1--50. \bibitem{ABR} S. Axler, P. Bourdon, and W. Ramey, {\em Harmonic function theory}, Springer--Verlag, New York, 1992. \bibitem{BK12} C. B{\'e}n{\'e}teau and D. Khavinson, A survey of linear extremal problems in analytic function spaces, {\em CRM Proc.\ Lecture Notes} {\bf 55}, Amer.\ Math.\ Soc., Providence, RI, 2012, pp.\ 33--46. \bibitem{Berge} C. 
Berge, {\em Topological Spaces}, Oliver and Boyd, 1963. \bibitem{Birkhoff} G. Birkhoff, Orthogonality in linear metric spaces, {\em Duke Math.\ J.} {\bf 1} (1935), 169--172. \bibitem{Boser} B.E. Boser, I.M. Guyon, and V.N. Vapnik, A training algorithm for optimal margin classifiers, In:\ {\em Proceedings of the fifth annual workshop on Computational learning theory}, pp.\ 144--152, 1992. \bibitem{CM22} H. Centeno and J.M. Medina, A converse sampling theorem in reproducing kernel Banach spaces, {\em Sampl.\ Theory Signal Process.\ Data Anal.} {\bf 20} (2022), Paper No.\ 8. \bibitem{CX21} R. Cheng and Y. Xu, Minimum norm interpolation in the $\ell^1(\mathbb{N})$ space, {\em Anal. Appl.\ (Singap.)} {\bf 19} (2021), 21--42. \bibitem{CMS94} J. Cima, T. MacGregor, M. Stessin, Recapturing Functions in $H^p$ Spaces, {\em Indiana Univ.\ Math.\ J.} {\bf 43} (1994), 205--220. \bibitem{C90} J.B. Conway, {\em A Course in Functional Analysis}, Springer-Verlag, New York, 1985. \bibitem{D70} P.L. Duren, {\em Theory of $H_p$ Spaces}, Academic Press, 1970. \bibitem{DurenBergman} P.L. Duren and A. Schuster, {\em Bergman spaces}, Math.\ Surveys Monogr.\ {\bf 100}, Amer.\ Math.\ Soc., Providence, RI, 2004. \bibitem{Ebenstein} S.E. Ebenstein, Some $H^p$ spaces which are uncomplemented in $L^p$, {\em Pacific J. Math.} {\bf 43} (1972), 327--339. \bibitem{Elliott} S.J. Elliott and A. Wynn, Composition operators on the weighted Bergman spaces of the half plane, {\em Proc.\ Edinb.\ Math.\ Soc.\ (2)} {\bf 54} (2011), 374--379. \bibitem{FHHMZ11} M. Fabian, P. Habala, P. H\'{a}jek, V. Montesinos, and V. Zizler, {\em Banach space theory, The basis for linear and nonlinear analysis}, CMS Books Math./Ouvrages Math.\ SMC Springer, New York, 2011. \bibitem{FHY15} G. E. Fasshauer, F. J. Hickernell, and Q. Ye, Solving support vector machines in reproducing kernel Banach spaces with positive definite functions, {\em Appl.\ Comput.\ Harmon.\ Anal.} {\bf 38} (2015), 115--139. \bibitem{Ferguson2} Timothy Ferguson, Solution of extremal problems in Bergman spaces using the Bergman projection, {\em Comput.\ Methods Funct.\ Theory} {\bf 14} (2014), 35--61. \bibitem{Fine} S. Fine and K. Scheinberg, Efficient SVM training using low-rank kernel representations. {\em J. Mach.\ Learn.\ Res.} {\bf 2} (2001), 243--264. \bibitem{Fuku} K. Fukumizu, F.R. Bach, and M.I. Jordan, Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. {\em J. Mach.\ Learn.\ Res.} {\bf 4} (2004), 73--99. \bibitem{G67} J.R. Giles, Classes of semi-inner-product spaces, {\em Trans.\ Amer.\ Math.\ Soc.} {\bf 129} (1967), 436--446. \bibitem{Harmsen} Bas Harmsen, {\em Interpolatieproblemen in $H_p$-ruimten}, Doctoraalscriptie, Radboud Universiteit Nijmegen, The Netherlands, 2005. \bibitem{Gretton} A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Sch\"olkopf, Kernel methods for measuring independence, {\em J. Mach.\ Learn.\ Res.} {\bf 6} (2005), 2075--2129. \bibitem{Hedenmalm} H. Hedenmalm, B. Korenblum, and K. Zhu, {\em Theory of Bergman spaces}, Grad.\ Texts in Math.\ {\bf 199}, Springer--Verlag, New York, 2000. \bibitem{Helson} H. Helson, {\em Harmonic Analysis}, Addison-Wesley, Reading, MA, 1983. \bibitem{Hensgen} W. Hensgen, On the dual space of $H^p(X),1<p<\infty$, {\em J. Funct.\ Anal.} {\bf 92} (1990), 348--371. \bibitem{Hoffman} K. Hoffman, {\em Banach Spaces of Analytic Functions}, Prentice Hall, Englewood Cliffs, 1962. \bibitem{istr} V.I. 
Istr\u{a}\c{t}escu, {\em Strict convexity and complex strict convexity}, Lecture Notes in Pure and Appl.\ Math. {\bf 89}, Marcel Dekker Inc., New York, 1984. \bibitem{James} R.C. James, Orthogonality and linear functionals in normed linear spaces, {\em Trans. Amer. Math. Soc.} {\bf 61} (1947), 265--292. \bibitem{KS97} D. Khavinson and M. Stessin, Certain linear extremal problems in Bergman spaces of analytic functions, {\em Indiana Univ.\ Math.\ J.} {\bf 46} (1997), 933--974. \bibitem{K} S.Ya. Khavinson, Two papers on extremal problems in complex analysis, {\em Amer.\ Math.\ Soc.\ Transl.\ Ser.\ 2}, No.\ {\bf 129}, 1980. \bibitem{Koosis} P. Koosis, {\em Introduction to $H^p$ spaces}, Cambridge University Press, 1998. \bibitem{LZZ22} R.R. Lin, H.Z. Zhang, and J. Zhang, On reproducing kernel Banach spaces: generic definitions and unified framework of constructions, {\em Acta Math.\ Sin.\ (Engl.\ Ser.)} {\bf 38} (2022), 1459--1483. \bibitem{LWXY23} Q. Liu, R. Wang, Y. Xu, and M. Yan, Parameter choices for sparse regularization with the $\ell^1$ norm, {\em Inverse Problems} {\bf 39} (2023), Paper No.\ 025004. \bibitem{L61} G. Lumer, Semi-inner-product spaces, {\em Trans.\ Amer.\ Math.\ Soc.} {\bf 100} (1961), 29--43. \bibitem{MN} X. Ma, C.L. Nikias, Joint estimation of time delay and frequency delay in impulsive noise using fractional lower order statistics, {\em IEEE Trans.\ Signal Process.} {\bf 44} (1996), 2669--2687. \bibitem{MR50} A.J. Macintyre and W.W. Rogosinski, Extremum problems in the theory of analytic functions, {\em Acta Math.} {\bf 82} (1950), 275--325. \bibitem{M98} R.E. Megginson, {\em An introduction to Banach space theory}, Grad.\ Texts in Math.\ {\bf 183}, Springer-Verlag, New York, 1998. \bibitem{PaulsenR} V.I. Paulsen and M. Raghupathi, {\em An introduction to the theory of reproducing kernel Hilbert spaces}, Cambridge Stud.\ Adv.\ Math.\ {\bf 152}, Cambridge University Press, Cambridge, 2016. \bibitem{Ramey} W.C. Ramey and H. Yi, Harmonic Bergman functions on half--spaces, {\em Trans. Amer. Math. Soc.} {\bf 348} (1996), 633--660. \bibitem{RS} W.W. Rogosinski and H.S. Shapiro, On certain extremum problems for analytic functions, {\em Acta Math.} {\bf 90} (1953), 287--318. \bibitem{Rudin2} W. Rudin, {\em Function theory in Polydisks}, W.A. Benjamin Inc., New York, NY, 1969. \bibitem{Rudin} W. Rudin, {\em Function theory in the unit ball of $\BC^n$}, Classics Math.\ Springer-Verlag, Berlin, 2008. \bibitem{SHS01} B. Sch\"{o}lkopf, R. Herbrich, and A.J. Smola, A Generalized Representer Theorem, In:\ {\em Computational Learning Theory}, Lecture Notes in Computer Science. Vol. 2111. Berlin, Heidelberg:\ Springer, 2001, pp.\ 416--426. \bibitem{Slavakis} K. Slavakis, P. Bouboulis, and S. Theodoridis, Chapter 17--Online learning in reproducing kernel Hilbert spaces, in:\ {\em Academic Press Library in Signal Processing: Volume 1, Signal Processing Theory and Machine Learning}, p.\ 883--987. Elsevier, 2014. \bibitem{SFL11} B.K. Sriperumbudur, K. Fukumizu, and G.R.G. Lanckriet, Universality, characteristic kernels and RKHS embedding of measures, {\em J. Mach. Learn. Res.} {\bf 12} (2011), 2389--2410. \bibitem{SW} E.M. Stein and G. Weiss, {\em Introduction to Fourier Analysis on Euclidean Spaces}, Princeton Math.\ Ser.\ {\bf 32}, Princeton University Press, Princeton, NJ, 2016. \bibitem{S-NK} B\'ela Sz.-Nagy and Adam Kor\'anyi, Operatortheoretische Behandlung und Verallgemeinerung eines Problemkreises in der komplexen Funktionentheorie, {\em Acta Math.} {\bf 100} (1958), 171–202. 
\bibitem{U21} M. Unser, A unifying representer theorem for inverse problems and machine learning, {\em Found.\ Comput.\ Math.} {\bf 21} (2021), 941--960. \bibitem{X} Y. Xu, Sparse machine learning in Banach spaces, {\em Appl.\ Numer.\ Math.} {\bf 187} (2023), 138--157. \bibitem{XY19} Y. Xu and Q. Ye, Generalized Mercer kernels and reproducing kernel Banach spaces, {\em Mem.\ Amer.\ Math.\ Soc.} {\bf 258} (2019), no.\ 1243. \bibitem{W90} G. Wahba, {\em Spline Models for Observational Data}, CBMS-NSF Regional Conference Series in Applied Mathematics vol. 59, SIAM, Philadelphia, 1990. \bibitem{WX} R. Wang and Y. Xu, Representer theorems in Banach spaces:\ minimum norm interpolation, regularized learning and semi-discrete inverse problems, {\em J. Mach.\ Learn.\ Res.} {\bf 22} (2021), Paper no.\ 225. \bibitem{WXY} R. Wang, Y. Xu, and M. Yan, Sparse representer theorems for learning in reproducing kernel Banach spaces, {\em J. Mach.\ Learn.\ Res.} {\bf 25} (2024), Paper no.\ 93. \bibitem{ZSZ} W.-J. Zeng, H.C. So, and A.M. Zoubir, An $\ell_p$-norm minimization approach to time delay estimation in impulsive noise, {\em Digit.\ Signal Process.} {\bf 23} (2013), 1247--1254. \bibitem{ZXZ09} H. Zhang, Y Xu, and J. Zhang, Reproducing kernel Banach spaces for machine learning, {\em J. Mach.\ Learn.\ Res.} {\bf 10} (2009), 2741--2775. \bibitem{ZZ13} H. Zhang and J. Zhang, Vector-valued reproducing kernel Banach spaces with applications to multi-task learning, {\em J. Complexity} {\bf 29} (2013), 195--215. \bibitem{ZZ11} H. Zhang and J. Zhang, Frames, Riesz bases, and sampling expansions in Banach spaces via semi-inner products, {\em Appl.\ Comput.\ Harmon.\ Anal.} {\bf 31} (2011), 1–-25. \end{thebibliography} \end{document}
2412.11547v1
http://arxiv.org/abs/2412.11547v1
Weak convergence of complex Monge-Ampère operators on compact Hermitian manifolds
\documentclass[12pt]{amsart} \usepackage[top=30truemm,bottom=30truemm,left=25truemm,right=25truemm]{geometry} \usepackage{mathrsfs} \usepackage{amsmath, amsthm, amssymb} \usepackage{color} \usepackage{bm} \usepackage{amsfonts,amssymb} \usepackage{dsfont} \usepackage{amscd} \usepackage{extarrows} \usepackage{amsmath} \usepackage{mathrsfs} \usepackage{enumerate} \usepackage{amscd} \usepackage[all]{xy} \usepackage[pagebackref,colorlinks]{hyperref} \usepackage{geometry} \geometry{margin=1in} \renewcommand{\baselinestretch}{1.15} \newtheorem{thm}{Theorem}[section] \newtheorem{defn}[thm]{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{rem}[thm]{Remark} \newtheorem{exm}[thm]{Example} \newcommand{\half}{\frac{1}{2}} \newcommand{\ind}[1]{\mathbf{1}_{#1}} \newcommand{\PSH}[2]{\text{PSH}(#1, #2)} \newcommand{\env}[2]{P_{#2}(#1)} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\ben}{\begin{eqnarray*}} \newcommand{\een}{\end{eqnarray*}} \newcommand{\bt}{\begin{split}} \newcommand{\et}{\end{split}} \newcommand{\bet}{\begin{equation}} \newcommand{\mc}{\mathbb{C}} \newcommand{\mr}{\mathbb{R}} \newcommand{\ra}{\rightarrow} \newtheorem*{remark0}{\indent\sc Remark} \renewcommand{\proofname}{\indent\sc Proof.} \newcommand{\ddbar}{\partial \bar{\partial}} \newcommand{\dbar}{\bar{\partial}} \begin{document} \title[Weak convergence of Monge-Amp\`ere operator] {Weak convergence of complex Monge-Amp\`ere operators on compact Hermitian manifolds} \author[K. Pang]{Kai Pang} \address{Kai Pang: School of Mathematical Sciences\\ Beijing Normal University\\ Beijing 100875\\ P. R. China} \email{[email protected]} \author[H. Sun]{Haoyuan Sun} \address{Haoyuan Sun: School of Mathematical Sciences\\ Beijing Normal University\\ Beijing 100875\\ P. R. China} \email{[email protected]} \author[Z. Wang]{Zhiwei Wang} \address{Zhiwei Wang: Laboratory of Mathematics and Complex Systems (Ministry of Education)\\ School of Mathematical Sciences\\ Beijing Normal University\\ Beijing 100875\\ P. R. China} \email{[email protected]} \begin{abstract} Let $(X,\omega)$ be a compact Hermitian manifold and let $\{\beta\}\in H^{1,1}(X,\mathbb R)$ be a real $(1,1)$-class with a smooth representative $\beta$, such that $\int_X\beta^n>0$. Assume that there is a bounded $\beta$-plurisubharmonic function $\rho$ on $X$. First, we provide a criterion for the weak convergence of non-pluripolar complex Monge-Amp\`ere measures associated to a sequence of $\beta$-plurisubharmonic functions. Second, this criterion is utilized to solve a degenerate complex Monge-Amp\`ere equation with an $L^1$-density. Finally, an $L^\infty$-estimate of the solution to the complex Monge-Amp\`ere equation for a finite positive Radon measure is given. \end{abstract} \subjclass[2010]{32W20, 32U05, 32U40, 53C55} \keywords{Non-pluripolar complex Monge-Amp\`ere operator, convergence in capacity, $L^1$-density, degenerate complex Monge-Amp\`ere equation} \maketitle \tableofcontents \section{Introduction} Let $(X, \omega)$ be a compact Hermitian manifold of complex dimension $n$, with a Hermitian metric $\omega$. The nef cone and pseudoeffective cone in $H^{1,1}(X, \mathbb{R})$ are crucial objects of study, closely related to the geometric structure of the manifold $X$. However, the current research methods and tools for these cones are limited. 
Inspired by the significant work done in the K\"ahler setting (see e.g. \cite{DeP04,BDPP13}), it is believed that utilizing the Monge-Amp\`ere equation could be a promising approach. In the study of degenerate complex Monge-Amp\`ere equations, we usually need to consider the convergence of complex Monge-Amp\`ere operators. For any Borel subset $E\subset X$, the capacity of $E$ with respect to $\omega$ is defined as: $$\text{Cap}_{\omega}(E):=\sup\left\{\int_E(\omega+dd^cv)^n:v\in \text{PSH}(X,\omega),0 \leq v \leq 1 \right\}.$$ We say a sequence $\{\varphi_j\}_{j=1}^\infty\subset \mbox{PSH}(X,\beta)$ converges in capacity to $\varphi\in \mbox{PSH}(X,\beta)$, if for a given $\varepsilon>0$, $$\lim_{j\rightarrow+\infty}\mbox{Cap}_{\omega}(|\varphi_j-\varphi|>\varepsilon)=0.$$ It is known that the complex Monge-Amp\`ere operator is not continuous with respect to the $L^1$-topology (see an example due to Cegrell in \cite[Example 3.25]{GZ17}), and it is continuous with respect to the convergence in capacity of uniformly bounded (quasi-)psh functions \cite{X09} on compact K\"ahler manifolds. When the sequence of quasi-psh functions is not uniformly bounded, there are also many important results for non-pluripolar complex Monge-Amp\`ere operators in the compact K\"ahler case; see \cite{P08,DV22,DP12,DDL23} and references therein. Recently, similar results were generalized to the Hermitian setting. Kolodziej-Nguyen \cite{KN22} proved the following interesting result: for a Hermitian metric $\omega$ on $X$, if the sequence $\{u_j\}$ is uniformly bounded $\omega$-psh functions such that $u_j\rightarrow u$ in $L^1(X,\omega^n)$ and $(\omega+dd^cu_j)^n\leq C(\omega+dd^c\varphi_j)$ for some uniformly bounded sequence $\{\varphi_j\} $ such that $\varphi_j\rightarrow \varphi\in PSH(X,\omega)$ in capacity, then $(\omega+dd^cu_j)^n$ converges weakly to $(\omega+dd^cu)^n$. This is a generalization of \cite[Lemma 2.1]{CK06} from the local setting to compact Hermitian manifolds that is pointed out in the proof of \cite[Lemma 2.11]{KN23b} (see also \cite{KN23a}). The degenerate case was studied by Alehyane-Lu-Salouf \cite{ALS24}. Let $\beta$ be a smooth semipositive $(1,1)$-form with $\int_X\beta^n>0$, and assume that there is a Hermitian metric $\omega$ on $X$ such that $dd^c\omega=0$ and $d\omega\wedge d^c\omega=0$. Let $u_j, u\in \mathcal E(X,\beta)$ and $u_j\rightarrow u$ in $L^1(X,\omega^n)$. It is proved in \cite{ALS24} that if $\langle(\beta+dd^cu_j)^n\rangle\leq \mu$ for some positive non-pluripolar measure, then $u_j\rightarrow u$ in capacity and the non-pluripolar complex Monge-Amp\`ere measure $\langle(\beta+dd^cu_j)^n\rangle\rightarrow \langle(\beta+dd^cu)^n\rangle$ in the weak sense. Here $\mathcal E(X,\beta)$ denotes the set of $\beta$-psh functions with full Monge-Amp\`ere mass, and $\langle(\beta+dd^c\cdot)^n\rangle$ represents the non-pluripolar complex Monge-Amp\`ere measure. For detailed definitions, see\cite [\S 2]{ALS24}. Now let $\beta$ be a smooth real closed $(1,1)$ form, which is not necessarily semipositive. A function $u:X\rightarrow [-\infty,+\infty)$ is called quasi-plurisubharmonic if locally $u$ can be written as the sum of a smooth function and a plurisubharmonic function. A $\beta$-plurisubharmonic ($\beta$-psh for short) function $u$ is defined as a quasi-plurisubharmonic function satisfying $\beta+dd^cu\geq 0$ in the sense of currents. The set of all $\beta$-psh functions on $X$ is denoted by $\mbox{PSH}(X,\beta)$. 
Suppose that there exists a function $\rho\in \mbox{PSH}(X,\beta)\cap L^{\infty}(X)$. We define a $(\beta+dd^c\rho)$-psh function to be a function $v$ such that $\rho+v\in \mbox{PSH}(X,\beta)$, and denote by $\mbox{PSH}(X,\beta+dd^c\rho)$ the set of all $(\beta+dd^c\rho)$-psh functions. Following \cite{BT87,GZ07,BEGZ10}, Li-Wang-Zhou introduced in \cite{LWZ24a} the non-pluripolar complex Monge-Amp\`ere operator $\langle (\beta+dd^c\varphi)^n\rangle$ for any $\varphi\in \mbox{PSH}(X,\beta)$. Denote by $\mathcal E(X,\beta)$ the class of all $\varphi \in \text{PSH}(X,\beta)$ with full Monge-Amp\`ere mass, i.e., $\int_X\langle (\beta+dd^c\varphi)^n\rangle=\int_X\beta^n$. In this paper, we are interested in studying criteria for the weak convergence of the non-pluripolar complex Monge-Amp\`ere operator. We establish the following theorem: \begin{thm}\label{thm: main-1} Let $(X,\omega)$ be a compact Hermitian manifold of complex dimension $n$, equipped with a Hermitian metric $\omega$. Let $\beta$ be a smooth closed real $(1,1)$-form with $\int_X\beta^n>0$. Assume that there is a bounded $\beta$-psh function $\rho$ on $X$. Let $u_j\in \mathcal{E}(X, \beta)$ be such that $(\beta +dd^c u_j)^n\leq \mu$ for some positive non-pluripolar Radon measure $\mu$. If $u_j\rightarrow u\in PSH(X, \beta)$ in $L^1(X)$, then $u\in \mathcal{E}(X, \beta)$ and $u_j\rightarrow u$ in capacity. Furthermore, $\langle(\beta+dd^cu_j)^n\rangle$ converges weakly to $\langle(\beta+dd^cu)^n\rangle$. \end{thm} \begin{rem} The proof of this theorem is primarily inspired by \cite{ALS24}, where the envelope plays a crucial role. Unlike in \cite{ALS24}, where the form $\theta$ is assumed to be smooth and semi-positive, we only assume that $\beta$ becomes semi-positive after adding $dd^c$ of a bounded weight function, i.e., $\beta+dd^c\rho\geq 0$ for some bounded $\rho$. This assumption is weaker both in terms of regularity and positivity, which introduces new challenges when analyzing the $(\beta + dd^c\rho)$-envelope. To overcome these difficulties, we introduce a new envelope (see Definition \ref{defn: new envelope}) and, building on the results from \cite{GLZ19} and \cite{ALS24}, we prove that this new envelope also possesses desirable properties. For instance, it preserves monotonicity, and its Monge-Amp\`ere mass is concentrated on the contact set (see, for example, Theorem \ref{thm: key}, Theorem \ref{thm: put_no_mass4}, and Remark \ref{rem: second_version}). \end{rem} As an application of the above theorem, we prove the following: \begin{cor} \label{cor: main} Let $(X,\omega)$ be a compact Hermitian manifold of complex dimension $n$, equipped with a Hermitian metric $\omega$. Let $\beta$ be a smooth closed real $(1,1)$-form with $\int_X\beta^n>0$. Assume that there is a bounded $\beta$-psh function $\rho$ on $X$. Fix $\lambda > 0$ and let $\mu$ be a positive Radon measure vanishing on pluripolar sets which can be written as $\mu=f \omega^n$, where $f\in L^{1}(\omega^n)$. Then there exists a unique $u \in \mathcal{E} (X, \beta)$ such that $$ (\beta+dd^c u)^n = e^{\lambda u} \mu. $$ \end{cor} \begin{rem} If $f\in L^p(X,\omega)$ with $p>1$, the authors in \cite{LWZ24a} established the existence and uniqueness of bounded solutions to the above Monge-Amp\`ere equations. For more general results, we refer the reader to \cite{LWZ24b, LLZ24}. \end{rem} The $L^{\infty}$ estimate has a long history in the study of Monge-Amp\`ere equations. In the solution of the Calabi conjecture \cite{Yau78}, Yau introduced the Moser iteration, which is a key step in obtaining the $L^{\infty}$ estimate.
This method works for equations with the right-hand side in $L^p$, where $p > n$ and $n$ is the dimension of the underlying compact K\"ahler manifold. Kolodziej \cite{Ko98} introduced a new method for the $L^{\infty}$ estimate using pluripotential techniques and integration by parts on compact K\"ahler manifolds. This method applies to $L^p$ densities, where $p > 1$. Kolodziej's approach has been further generalized to handle less positive or collapsing families of cohomology classes on K\"ahler manifolds; see, for example, \cite{EGZ09, EGZ08, DP10, BEGZ10}. There are also generalizations to compact Hermitian manifolds; see, for instance, \cite{LWZ24a} and the references therein. Recently, Guo, Phong, and Tong \cite{GPT23} used a PDE approach to obtain a priori $L^{\infty}$ estimates for Monge-Amp\`ere equations. In \cite{GL21, GL22, GL23}, Guedj and Lu introduced a new method to give $L^{\infty}$ estimates for some degenerate Monge-Amp\`ere equations on compact Hermitian manifolds. Their method relies only on compactness and envelope properties of quasi-plurisubharmonic functions. We follow this approach to obtain an a priori $L^\infty$-estimate for solutions to the degenerate complex Monge-Amp\`ere equation associated with a finite positive Radon measure. \begin{thm} \label{thm: linf est} Let $(X,\omega)$ be a compact complex manifold of complex dimension $n$ equipped with a Hermitian metric $\omega$. Let $\beta$ be a smooth closed real $(1,1)$-form with $\int_X\beta^n>0$. Assume that there is a bounded $\beta$-psh function $\rho$ on $X$. Let $\mu$ be a finite positive Radon measure on $X$ such that $PSH(X,\beta+dd^c\rho)\subset L^m(\mu)$ for some $m>n$ and $\mu(X)=\int_X\beta^n$. Then any solution $\varphi\in PSH(X,\beta+dd^c\rho)\cap L^{\infty}(X)$ to $(\beta+dd^c(\rho+\varphi))^n=\mu$ satisfies $$ \operatorname{Osc}_X(\varphi)\leq T $$ for some uniform constant $T$ which depends only on $X$, $\beta$ and $$ A_m(\mu):=\sup\left\{\left(\int_X(-\psi)^m\,d\mu\right)^{\frac{1}{m}}: \psi\in PSH(X,\beta+dd^c\rho)\ \text{with}\ \sup_{X}\psi=0\right\}. $$ \end{thm} \subsection*{Acknowledgements} This research is supported by National Key R \& D Program of China (No. 2021YFA1002600) and by NSFC grant (No. 12071035). The third author is partially supported by the Fundamental Research Funds for the Central Universities. \section{Preliminaries} Let $X$ be a compact complex manifold of complex dimension $n$ equipped with a Hermitian metric $\omega$. Let $\beta$ be a closed smooth real $(1,1)$-form with $\int_X\beta^n>0$. Suppose there exists a function $\rho\in \mbox{PSH}(X,\beta)\cap L^{\infty}(X)$. \subsection{Non-pluripolar product} In this subsection, we collect several fundamental facts and properties concerning the non-pluripolar product. The non-pluripolar product is a crucial tool in the study of complex Monge-Amp\`ere equations and plays a significant role in understanding the behavior of plurisubharmonic (psh) functions and their associated measures. \begin{lem}\label{pl_prop_} Let $u, v \in PSH(X, \beta+dd^c \rho) \cap L^{\infty} (X)$. Then $$ \ind{\{u>v\}} (\beta+ dd^c\rho + dd^c u)^n = \ind{\{u>v\}} (\beta+ dd^c \rho + dd^c \mathrm{max} (u, v) )^n.$$ \end{lem} \begin{proof} Choose a finite open cover $\{\Omega_i \}_{i=1}^{N}$ of $X$ such that there exists $\eta_i\in PSH(\Omega_i) \cap L^{\infty}(\Omega_i)$ satisfying $\beta+dd^c \rho = dd^c \eta_i$ on $\Omega_i$.
On $\{u>v \}\cap \Omega_i = \{u+\eta_i > v + \eta_i\}\cap \Omega_i$, since $u+\eta_i = \max (u+\eta_i, v+\eta_i)$, by \cite[Corollary 4.2, Corollary 4.3]{BT87}, we have $$ \ind{ \{u>v \}\cap \Omega_i} (dd^c(\eta_i + u))^n = \ind{ \{u>v \}\cap \Omega_i} (dd^c\max(\eta_i + u, \eta_i + v))^n.$$ Thus $$\ind{ \{u>v \}\cap \Omega_i} (\beta+dd^c \rho +dd^c u)^n = \ind{ \{u>v \}\cap \Omega_i} (\beta+dd^c \rho + dd^c\max(u, v))^n,$$ and the lemma follows. \end{proof} \begin{cor}\label{pl_prop} Let $u, v \in PSH(X, \beta) \cap L^{\infty}(X)$. Then $$ \ind{\{u>v\}} (\beta + dd^c u)^n = \ind{\{u>v\}} (\beta + dd^c \mathrm{max} (u, v) )^n.$$ \end{cor} \begin{proof} Since $u-\rho, v-\rho \in PSH(X, \beta+dd^c \rho)$, applying Lemma \ref{pl_prop_}, we get the result. \end{proof} \begin{lem}\label{plurifine locality_1} Let $O$ be a plurifine open set. If $u,v \in PSH(X, \beta+dd^c \rho) \cap L^{\infty} (X)$ and $u=v$ on $O$, then $$\ind{O} (\beta+dd^c \rho +dd^c u)^n = \ind{O} (\beta+dd^c \rho +dd^c v)^n. $$ \end{lem} \begin{proof} Following the proof of Lemma \ref{pl_prop_}, choose a finite open cover $\{\Omega_i \}_{i=1}^{N}$ of $X$ such that there exists $\eta_i\in PSH(\Omega_i)\cap L^{\infty}(\Omega_i)$ satisfying $\beta+dd^c \rho = dd^c \eta_i$ on $\Omega_i$. By \cite[Corollary 4.2, Corollary 4.3]{BT87}, we have $$\ind{O\cap \Omega_i } (dd^c(\eta_i + u))^n =\ind{O\cap \Omega_i } (dd^c(\eta_i + v))^n,$$ and thus the lemma follows. \end{proof} \begin{cor}\label{cor: plurifine locality_2} Let $u, v \in PSH(X, \beta) \cap L^{\infty}(X)$ and let $O$ be a plurifine open set such that $u=v$ on $O$. Then $$ \ind{O} (\beta + dd^c u)^n = \ind{O} (\beta + dd^c v )^n.$$ \end{cor} \begin{proof} The same as the proof of Corollary \ref{pl_prop}. \end{proof} For $u \in PSH(X, \beta)$, by Corollary \ref{pl_prop}, the sequence of measures $\{ \ind{\{u>\rho-k\}}(\beta+dd^c \max (u, \rho-k))^n \}_{k\in \mathbb{N}}$ is non-decreasing. By integration by parts (using that $\beta$ is closed), we also have, for any compact subset $K\subset X$, \begin{align*} &\sup_{k} \int_{K\cap\{ u>\rho-k \}} \left[\beta+dd^c \max(u, \rho-k) \right]^n\\ \leq &\sup_{k} \int_X \left[\beta+dd^c \max(u, \rho-k) \right]^n\\ =&\int_{X} \beta^n < +\infty, \end{align*} thus $\{ \ind{\{u>\rho-k\}}(\beta+dd^c \max (u, \rho-k))^n \}_{k\in \mathbb{N}}$ has a weakly convergent subsequence by the Banach-Alaoglu theorem, and monotonicity implies that the limit point is unique; thus the limit $$ \beta_{u}^n:= \lim\limits_{k\rightarrow +\infty} \ind{\{u>\rho-k\}}(\beta+dd^c \mathrm{max} (u, \rho-k))^n$$ is a well-defined positive Radon measure. \begin{defn}[{\cite{BEGZ10,LWZ24a}}]\label{defn: nn product} For any $\varphi \in \mbox{PSH}(X,\beta)$, the non-pluripolar product $ \left\langle(\beta+dd^c \varphi)^n \right\rangle$ is defined as $$\langle \beta_\varphi^n \rangle :=\left\langle(\beta+dd^c \varphi)^n \right\rangle:=\lim_{k\rightarrow \infty} \ind{\{\varphi > \rho-k\}}(\beta+dd^c\varphi_k)^n,$$ where $\varphi_k:=\max\{\varphi,\rho-k\}$. \end{defn} \begin{rem} By the same argument as in \cite{BEGZ10}, the non-pluripolar product defined above possesses the basic properties in \cite[Proposition 1.4]{BEGZ10}. \end{rem} \begin{thm} \label{thm: plur_prop1} If $u, v \in PSH(X, \beta)$, then $$ \ind{\{u>v\}} \langle(\beta+ dd^c u)^n\rangle = \ind{\{u>v\}} \langle(\beta+dd^c \mathrm{max} \{u, v \} )^n\rangle. $$ \end{thm} \begin{proof} For $j \in \mathbb{N}$, set $ u_j = \max\{u, \rho - j\}, \quad v_j = \max\{v, \rho - j\}. $ From Corollary \ref{pl_prop}, we have $$ \ind{\{u_j > v_j\}} \beta_{u_j}^n = \ind{\{u_j > v_j\}} \beta_{\max\{u_j, v_j\}}^n.
$$ Since $\{u_j > v_j\} = \{u > v\} \cap \{u > \rho - j\}$, we have \begin{equation} \label{eq:11} \ind{\{u > v\} \cap \{u > \rho - j\}} \beta_{u_j}^n = \ind{\{u > v\} \cap \{u > \rho - j\}} \beta_{\max\{u_j, v_j\}}^n. \end{equation} By the plurifine property (Lemma \ref{pl_prop}), for $k \geq j$, \begin{align*} \ind{\{u > \rho - j\}} \ind{\{u > \rho - k\}} \beta_{u_k}^n &= \ind{\{u > \rho - j\}} \beta_{u_k}^n \\ &= \ind{\{u_k > \rho - j\}} \beta_{u_k}^n \\ &= \ind{\{u_k > \rho - j\}} \beta_{\max\{u_k, \rho - j\}}^n \\ &= \ind{\{u > \rho - j\}} \beta_{u_j}^n. \end{align*} Thus, we have $$ \ind{\{u > \rho - j\}} \ind{\{u > \rho - k\}} \beta_{u_k}^n = \ind{\{u > \rho - j\}} \beta_{u_j}^n. $$ Letting $k \to +\infty$, by the definition of the non-pluripolar Monge-Amp\`ere measure, we get \begin{equation} \label{eq:22} \ind{\{u > \rho - j\}} \langle\beta_u^n\rangle = \ind{\{u > \rho - j\}} \beta_{u_j}^n. \end{equation} Similarly, \begin{align*} &\ind{\{u > v\} \cap \{u > \rho - j\}} \ind{\{\max(u, v) > \rho - k\}} (\beta + dd^c \max\{u, v\}_k)^n \\ &= \ind{\{u > v\} \cap \{u > \rho - j\}} \beta_{\max(u, v)_k}^n \\ &= \ind{\{u > v\} \cap \{\max(u, v)_k > \rho - j\}} \beta_{\max(u, v)_k}^n \\ &= \ind{\{u > v\} \cap \{\max(u, v)_k > \rho - j\}} \beta_{\max\{\max(u, v)_k, \rho - j\}}^n \\ &= \ind{\{u > v\} \cap \{u > \rho - j\}} \beta_{\max(u, v)_j}^n. \end{align*} Letting $k \to +\infty$, by the definition of the non-pluripolar Monge-Amp\`ere measure, we get \begin{equation} \label{eq:33} \ind{\{u > v\} \cap \{u > \rho - j\}} \langle\beta_{\max\{u, v\}}^n \rangle = \ind{\{u > v\} \cap \{u > \rho - j\}} \beta_{\max(u, v)_j}^n. \end{equation} Combining equations \eqref{eq:11}, \eqref{eq:22}, and \eqref{eq:33}, we obtain $$ \ind{\{u > v\} \cap \{u > \rho - j\}} \langle\beta_u^n \rangle = \ind{\{u > v\} \cap \{u > \rho - j\}} \langle\beta _{\max\{u, v\}}^n\rangle. $$ Letting $j \to +\infty$, and noting that $\langle\beta_u^n \rangle$ and $\langle\beta_{\max(u, v)}^n \rangle$ do not charge pluripolar sets, we conclude that $$ \ind{\{u > v\}} \langle\beta_u^n \rangle = \ind{\{u > v\}} \langle\beta_{\max\{u, v\}}^n \rangle. $$ \end{proof} \begin{thm} \label{thm: plurifine_locality_3} Let $u, v \in \text{PSH}(X, \beta)$ and let $O$ be a plurifine open set such that $u = v$ on $O$. Then, $$ \ind{O} \langle \beta_u^n \rangle = \ind{O} \langle \beta_v^n \rangle. $$ \end{thm} \begin{proof} Following \cite[Proposition 1.4 (a)]{BEGZ10}, set $E_{k,j} := \{u > \rho - k\} \cap \{v > \rho - j\}$. By Corollary \ref{cor: plurifine locality_2}, $$ \ind{O \cap E_{k,j}} (\beta + dd^c u_k)^n = \ind{O \cap E_{k,j}} (\beta + dd^c v_j)^n. $$ First, let $k \to +\infty$. Since $\{u > \rho - k\} \uparrow \{u > -\infty\}$, we have $$ \ind{O \cap \{v > \rho - j\}} \langle (\beta + dd^c u)^n \rangle = \ind{O \cap \{v > \rho - j\} \cap \{u > -\infty\}} (\beta + dd^c v_j)^n. $$ Next, let $j \to +\infty$. Since $\{v > \rho - j\} \uparrow \{v > -\infty\}$, we obtain $$ \ind{O \cap \{v > -\infty\}} \langle (\beta + dd^c u)^n \rangle = \ind{O \cap \{u > -\infty\}} \langle(\beta + dd^c v)^n \rangle. $$ Since the non-pluripolar Monge-Amp\`ere measures do not charge pluripolar sets, it follows that $$ \ind{O} \langle \beta_u^n \rangle = \ind{O} \langle \beta_v^n \rangle. $$ \end{proof} \begin{cor} \label{cor: comparison} If $u, v \in \text{PSH}(X, \beta)$, then $$ \langle \beta_{\max(u,v)}^n \rangle \geq \ind{\{u > v\}} \langle\beta_u^n \rangle + \ind{\{u \leq v\}} \langle\beta_v^n \rangle. 
$$ In particular, if $u \leq v$ quasi-everywhere, then $$ \ind{\{u = v\}} \langle \beta_v^n \rangle \geq \ind{\{u = v\}} \langle \beta_u^n \rangle. $$ \end{cor} \begin{proof} The proof follows from \cite[Lemma 2.9]{DDL23}. \end{proof} \subsection{Quasi-plurisubharmonic envelopes} If $h: X \to \mathbb{R} \cup \{\pm \infty\}$ is a measurable function, the $\beta$-plurisubharmonic (psh) envelope of $h$ is defined by $$ \env{h}{\beta} = \left( \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} \right)^*, $$ with the convention that $\sup \emptyset = -\infty$. When $h = \min(u, v)$, we use the notation $\env{u, v}{\beta} = \env{\min(u, v)}{\beta}$. \begin{lem} \label{lem: neg} Let $\{u_j\}_{j \in \mathbb{N}}$ be a sequence of $\beta$-psh functions on $X$ that is uniformly bounded above. Then the set $\{ (\sup_{j} u_j)^* > \sup_{j} u_j \}$ is a pluripolar set. \end{lem} \begin{proof} The result is local, so we may work on a coordinate ball $B$. On this ball, we can assume that $\beta = dd^c \psi$, where $\psi$ is smooth on $B$ and continuous on $\bar{B}$. We have: \begin{align*} \{ (\sup_{j} u_j)^* > \sup_{j} u_j \} &= \{ \psi + (\sup_{j} u_j)^* > \psi + \sup_{j} u_j \} \\ &= \{ (\sup_{j} (u_j + \psi))^* > \sup_{j} (u_j + \psi) \}. \end{align*} The last set is negligible in the sense of \cite{BT82}, and by \cite[Theorem 7.1]{BT82}, it follows that the set above is pluripolar. \end{proof} \begin{lem} \label{lem: key1} If $h: X \to \mathbb{R} \cup \{-\infty\}$ is a measurable function that is bounded above, then either $$ \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} $$ is bounded above on $X$, or the set $$ \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} $$ is empty, in which case by convention, $$ \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} = -\infty. $$ \end{lem} \begin{proof} First, if the set $$ S := \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} $$ is empty, the conclusion is immediate by definition. Therefore, we assume that $S$ is non-empty. Let $h: X \to \mathbb{R} \cup \{-\infty\}$ be a measurable function that is bounded above by some constant $C \in \mathbb{R}$. We need to show that every $\varphi \in S$ is bounded above by a uniform constant on $X$. Choose an open cover $\{U_i\}_{i=1}^n$ of $X$ such that on each $U_i$, $\beta = dd^c \phi_i$, where $\phi_i \in \mathcal{C}^\infty(U_i) \cap L^\infty(U_i)$. Let $M := \max_{i=1}^n \|\phi_i\|_\infty$. For any $\varphi \in S$ and any point $z_0 \in X$, there exists a ball $B_0$ centered at $z_0$ such that $B_0 \subseteq U_i$ for some $i$. Since $\phi_i + \varphi \in \text{PSH}(B_0)$, by the sub-mean value property of plurisubharmonic functions, we have: $$ \phi_i(z_0) + \varphi(z_0) \leq \frac{1}{\text{Vol}(B_0)} \int_{B_0} (\phi_i(z) + \varphi(z)) \, dV. $$ Since $\varphi \leq h \leq C$ quasi-everywhere on $X$, and $h$ is bounded above by $C$ outside a pluripolar set $P$, we can write: $$ \phi_i(z_0) + \varphi(z_0) \leq \frac{1}{\text{Vol}(B_0)} \int_{B_0 \setminus P} (\phi_i(z) + \varphi(z)) \, dV. $$ Given that $\|\phi_i\|_\infty \leq M$ and $\varphi \leq C$ on $B_0 \setminus P$, we have: $$ \phi_i(z_0) + \varphi(z_0) \leq \frac{1}{\text{Vol}(B_0)} \int_{B_0 \setminus P} (C + M) \, dV = C + M. $$ Thus, $$ \varphi(z_0) \leq C + M - \phi_i(z_0). 
$$ Since $\|\phi_i\|_\infty \leq M$, it follows that $$ \varphi(z_0) \leq C + M + M = C + 2M. $$ Therefore, $\varphi$ is bounded above by $C + 2M$ on $X$. This shows that the supremum of all such $\varphi$ is also bounded above by $C + 2M$. In conclusion, if $S$ is non-empty, then $$ \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} $$ is bounded above by $C + 2M$ on $X$. \end{proof} We need the following lemma. \begin{lem}[\cite{LLZ24}] \label{lem: pluripolar} If $\beta$ is a closed real $(1,1)$-form on $X$ with a bounded $\beta$-psh potential, then any locally pluripolar set in $X$ is globally pluripolar with respect to $\PSH{X}{\beta}$. \end{lem} \begin{prop} \label{prop: representation} Let $\beta$ be a closed real $(1,1)$-form on $X$. \begin{itemize} \item [1.] If $h: X \to \mathbb{R}$ is a bounded measurable function, we define the $\beta$-psh envelope of $h$ as $$ \tilde{P}_\beta(h)= \left( \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \right\} \right)^*, $$ then we have $$ \tilde{P}_\beta(h)= \left( \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} \right)^*=\env{h}{\beta}. $$ In particular, if $\{h_j\}$ is a sequence of bounded functions with $h_j \downarrow h$, then $$ \env{h_j}{\beta} \downarrow \env{h}{\beta}. $$ \item[2.] If $h: X \to \mathbb{R} \cup \{-\infty\}$ is a measurable function that is bounded from above, and $h_j$ is a sequence of bounded functions such that $h_j \downarrow h$, then $$ \env{h_j}{\beta} \downarrow \env{h}{\beta}. $$ \end{itemize} \end{prop} \begin{proof} \textbf{Proof of 1).} The proof follows \cite[Proposition 2.2]{GLZ19}. Denote $$ G := \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\}. $$ If the set $$ \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} $$ is empty, then the set $$ \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \right\} $$ is also empty, and we have $$ \tilde P_{\beta}(h) = \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} = -\infty. $$ Now assume $G \neq -\infty$. By Lemma \ref{lem: key1}, $G$ is bounded above. Using Choquet's Lemma, there exists a sequence $\{\varphi_j\} \subset \PSH{X}{\beta}$ such that $\varphi_j \leq h$ quasi-everywhere and $$ \env{h}{\beta}= G^* = \left( \sup_j \varphi_j \right)^*. $$ By Lemma \ref{lem: neg}, $G^* \in \PSH{X}{\beta}$ and $G^* \leq h$ quasi-everywhere. Therefore, by definition, $G^* = G$. We now show that $$ \tilde P_{\beta}({\max(h, G)})= G = G^*. $$ On one hand, since $G = G^* \leq \max(h, G)$, it follows that $$ G = G^* \leq \tilde P_{\beta}({\max(h, G)}). $$ On the other hand, $\tilde P_{\beta}({\max(h, G)})\leq \max(h, G)$ quasi-everywhere. Since $\max(h, G) = h$ quasi-everywhere, we have $$ \tilde P_{\beta}({\max(h, G)})\leq h \quad \text{quasi-everywhere}, $$ and thus $$ \tilde P_{\beta}({\max(h, G)}) \leq G \quad \text{by definition}. $$ Therefore, $$ \tilde P_{\beta}({\max(h, G)}) = G = G^*. $$ Next, we prove that $$ \tilde P_{\beta}({\max(h, G)})=\tilde P_{\beta}(h). $$ Define $$ E := \left\{ \tilde P_{\beta}({\max(h, G)}) > \max(h, G) \right\} \cup \left\{ \max(h, G) \neq h \right\}, $$ which is pluripolar. By Lemma \ref{lem: pluripolar}, there exists $\phi \in \PSH{X}{\beta}$ such that $E \subseteq \{\phi = -\infty\}$.
Replacing $\phi$ by $\phi - \sup_X \phi$, we may assume $\phi \leq 0$. Let $C_0>0$ be a bound for $|h|$. For $\lambda \in (0, 1)$, the function $$ \lambda \phi + (1-\lambda) \tilde P_{\beta}({\max(h, G)}) \in \PSH{X}{\beta} $$ satisfies $\lambda \phi + (1-\lambda) \tilde P_{\beta}({\max(h, G)}) \leq h + \lambda C_0$ on $X$: indeed, it equals $-\infty$ on $E$, while outside $E$ we have $\tilde P_{\beta}({\max(h, G)}) \leq \max(h,G) = h$ and $(1-\lambda)h \leq h + \lambda C_0$. In particular, $\lambda \phi + (1-\lambda) \tilde P_{\beta}({\max(h, G)}) - \lambda C_0$ is a candidate in the definition of $\tilde P_{\beta}(h)$, so $\env{h}{\beta}$ is not identically $-\infty$ and $$ \lambda \phi + (1-\lambda) \tilde P_{\beta}({\max(h, G)}) \leq \tilde P_{\beta}(h) + \lambda C_0. $$ Letting $\lambda \to 0$, we get $$ \tilde P_{\beta}({\max(h, G)})\leq \tilde P_{\beta}(h)\quad \text{quasi-everywhere}. $$ Since both sides are $\beta$-psh, this inequality holds everywhere. The converse inequality is immediate. Therefore, $$ \tilde P_{\beta}({\max(h, G)})=\tilde P_{\beta}(h). $$ This completes the proof of the first part. \textbf{Proof of the decreasing property:} On one hand, by definition, $$ \env{h_j}{\beta} \geq \env{h}{\beta}. $$ On the other hand, $\env{h_j}{\beta} \leq h_j$ quasi-everywhere, so $$ \lim_{j \to +\infty} \env{h_j}{\beta} \leq h \quad \text{quasi-everywhere}. $$ If $\lim_{j \to +\infty} \env{h_j}{\beta} \in \PSH{X}{\beta}$, then by the first part of this theorem, $$ \lim_{j \to +\infty} \env{h_j}{\beta} \leq \env{h}{\beta}. $$ If $\lim_{j \to +\infty} \env{h_j}{\beta} = -\infty$, the conclusion is immediate. \textbf{Proof of 2).} The proof of the decreasing property for part 2) is identical to the second part of the proof of part 1). \end{proof} \begin{rem}\label{rem: equ of env} From the above proof, we have already obtained that for any measurable function $h: X \to \mathbb{R}\cup \{-\infty\}$ which is bounded from above, $$P_\beta(h)= \sup \left\{ \varphi \in \PSH{X}{\beta} : \varphi \leq h \quad \text{quasi-everywhere} \right\} .$$ \end{rem} \begin{defn} \label{defn: new envelope} Assume $\rho$ is a bounded $\beta$-psh function. We define $u$ to be a $(\beta + dd^c \rho)$-psh function if $u + \rho$ is a $\beta$-psh function. Denote the set of all $(\beta + dd^c \rho)$-psh functions by $\PSH{X}{\beta + dd^c \rho}$. If $\rho$ is a continuous $\beta$-psh function, then $$ \PSH{X}{\beta + dd^c \rho} \subseteq \text{USC}(X), $$ where $\text{USC}(X)$ denotes the set of all upper semicontinuous (u.s.c.) functions on $X$. For any function $h$, we have $$ \left( \sup \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \right\} \right)^* = \left( \sup \left\{ \psi \in \PSH{X}{\beta} : \psi \leq h + \rho \right\} \right)^* - \rho. $$ If $\rho$ is only assumed to be bounded, $(\beta + dd^c \rho)$-psh functions need not be upper semicontinuous (u.s.c.). Therefore, the upper semicontinuous regularization process is not suitable in this case. Motivated by Remark \ref{rem: equ of env}, we define the $(\beta + dd^c \rho)$-psh envelope of $h$ as $$ \env{h}{\beta + dd^c \rho} := \sup \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \quad \text{quasi-everywhere} \right\}. $$ \end{defn} Fortunately, one can prove that the envelope $P_{\beta + dd^c\rho}(h)$ shares properties similar to those stated in Proposition \ref{prop: representation}. \begin{thm} \label{thm: key} Let $h$ be a measurable function that is bounded from above. Then we have $$ \env{h}{\beta + dd^c \rho} = \sup \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \quad \text{quasi-everywhere} \right\} = \env{h + \rho}{\beta} - \rho. $$ In particular: \begin{enumerate} \item If $h_j \downarrow h$, where each $h_j$ is bounded from above, then $$ \env{h_j}{\beta + dd^c \rho} \downarrow \env{h}{\beta + dd^c \rho}. $$ \item If the set $$ \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \right\} $$ is non-empty, then $\env{h + \rho}{\beta} \in \PSH{X}{\beta}$, and thus $$ \env{h}{\beta + dd^c \rho} \in \PSH{X}{\beta + dd^c \rho}.
$$ \end{enumerate} \end{thm} \begin{proof} By Remark \ref{rem: equ of env}, we have \begin{equation} \label{eq:key-proof} \begin{split} \env{h}{\beta + dd^c \rho} &= \sup \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \quad \text{quasi-everywhere} \right\} \\ &= \sup \left\{ \psi \in \PSH{X}{\beta} : \psi \leq h + \rho \quad \text{quasi-everywhere} \right\} - \rho \\ &= \env{h + \rho}{\beta} - \rho. \end{split} \end{equation} \textbf{Proof of (1):} If $h_j \downarrow h$, where each $h_j$ is bounded from above, then by part 2) of Proposition \ref{prop: representation}, $$ \env{h_j + \rho}{\beta} \downarrow \env{h + \rho}{\beta}. $$ Therefore, $$ \env{h_j}{\beta + dd^c \rho} = \env{h_j + \rho}{\beta} - \rho \downarrow \env{h + \rho}{\beta} - \rho = \env{h}{\beta + dd^c \rho}. $$ \textbf{Proof of (2):} If the set $$ \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} : \varphi \leq h \right\} $$ is non-empty, then the set $$ \left\{ \psi \in \PSH{X}{\beta} : \psi \leq h + \rho \right\} $$ is also non-empty. By Proposition \ref{prop: representation}, we have $\env{h + \rho}{\beta} \in \PSH{X}{\beta}$, and thus $$ \env{h}{\beta + dd^c \rho} = \env{h + \rho}{\beta} - \rho \in \PSH{X}{\beta + dd^c \rho}. $$ The proof is completed. \end{proof} If $h$ is bounded, then by Choquet's lemma and the condition that $\rho$ is bounded, we have $$ \env{h}{\beta} \in \PSH{X}{\beta} \cap L^{\infty}(X). $$ Thus, $(\beta + dd^c \env{h}{\beta})^n$ is well-defined. \begin{lem} \label{lem: put_no_mass1} If $h$ is a bounded Lebesgue measurable function, then $(\beta + dd^c \env{h}{\beta})^n$ puts no mass on $L(h) \cap \{ \env{h}{\beta} < h \}$, where $L(h)$ is the lower semi-continuity set of $h$. \end{lem} \begin{proof} The proof follows from Lemma 2.3 of \cite{GLZ19}. Since $h$ is bounded, from Proposition \ref{prop: representation}, we have $\env{h}{\beta}=\tilde P_\beta(h)$. Denote $\hat{h} := \env{h}{\beta}=\tilde P_\beta(h)$. Fix $x_0 \in L(h) \cap \{ \hat{h} < h \}$. By the lower semi-continuity of $h$ and the upper semi-continuity of $\hat{h}$, there exists a ball $B$ such that $x_0 \in B \subseteq \{ \hat{h} < h \}$ and \begin{align}\label{equ: ineq} \max_{\bar{B}} \hat{h} < \min_{\bar{B}} h - \delta \end{align} for some $\delta > 0$. Since $\beta + dd^c \rho$ is a closed positive current, by the Dolbeault-Grothendieck Lemma, we can choose a Stein open neighborhood $D$ such that $\beta + dd^c \rho = dd^c \hat{\rho}$ on $D$, where $\hat{\rho} \in \text{PSH}({D}) \cap L_{\text{loc}}^1(D)$. Thus $\hat{\rho} - \rho$ is smooth and $\hat{\rho}$ is also bounded (shrink $D$ if necessary). Set $u := \hat{h} + (\hat{\rho} - \rho) \in \text{PSH}({D}) $, assuming $\operatorname{osc}_{\bar{B}} (\hat{\rho} - \rho) < \delta$. By \cite[Proposition 9.1]{BT82}, there exists a psh function $v$ on $D$ such that \begin{align*} \left\{ \begin{array}{ll} v = u & \text{on } D \setminus B, \\ v \geq u & \text{on } D, \\ (dd^c v)^n = 0 & \text{on } B. \end{array} \right. \end{align*} On $\partial B$, we have \begin{align*} v = u = \hat{h} + (\hat{\rho} - \rho) \leq \max_{\bar{B}} \hat{h} + \max_{\bar{B}} (\hat{\rho} - \rho). \end{align*} By the maximum principle, it follows that \begin{align*} v \leq \max_{\bar{B}} \hat{h} + \max_{\bar{B}} (\hat{\rho} - \rho) \quad \text{on } \bar{B}. 
\end{align*} Thus, by inequality \eqref{equ: ineq}, \begin{align*} v - (\hat{\rho} - \rho) &\leq \max_{\bar{B}} \hat{h} + \max_{\bar{B}} (\hat{\rho} - \rho) - (\hat{\rho} - \rho) \\ &\leq \min_{\bar{B}} h - \delta + \operatorname{osc}_{\bar{B}} (\hat{\rho} - \rho) \\ &\leq h \quad \text{on } B. \end{align*} Now define \begin{align*} w := \begin{cases} v - (\hat{\rho} - \rho) & \text{on } B, \\ \hat{h} & \text{on } X \setminus B. \end{cases} \end{align*} On the one hand, $w$ is $\beta$-psh on $X$, and $w \leq h$ on $X$. On the other hand, $w = v - (\hat{\rho} - \rho) \geq u - (\hat{\rho} - \rho) = \hat{h}$ on $B$. It follows that $w = \hat{h}$ on $B$, thus \begin{align*} (\beta + dd^c \hat{h})^n = (dd^c v)^n = 0 \quad \text{on } B. \end{align*} Therefore, $(\beta + dd^c \env{h}{\beta})^n$ puts no mass on $L(h) \cap \{ \env{h}{\beta} < h \}$. \end{proof} \begin{lem} \label{put_no_mass2} If $h$ is a quasi-continuous bounded Lebesgue measurable function, then $(\beta + dd^c \env{h}{\beta})^n$ puts no mass on $\{ \env{h}{\beta} < h \}$. \end{lem} \begin{proof} The proof follows the idea of \cite[Proposition 2.5]{GLZ19}. By definition, there exists a sequence of compact sets $(K_l)$ such that $\text{Cap}(X \setminus K_l) \leq 2^{-l}$ and the restriction $h|_{K_l}$ is a continuous function on $K_l$. By replacing $K_j$ with $\widetilde{K}_j := \cup_{1 \leq l \leq j} K_l$, we can assume that the sequence $(K_j)$ is increasing. Using the Tietze-Urysohn Lemma, there exists a continuous function $H_j$ on $X$ such that $H_j|_{K_j} = h|_{K_j}$, and $H_j$ shares the same bounds as $h$. Set $$ h_j := \sup \{ H_l \mid l \geq j \}. $$ Then $\{ h_j \}_j$ is lower semicontinuous, quasi-continuous, and uniformly bounded. Indeed, for fixed $j$ and for $k \geq j$, $$ h_j|_{K_k} = \max \left\{ \sup \{ H_l \mid l \geq k \}, \max (H_j, \dots, H_{k-1}) \right\} = \max \{ h, \max (H_j, \dots, H_{k-1}) \}, $$ which implies that $h_j|_{K_k}$ is continuous. Since $k$ is arbitrary and $\text{Cap}(X \setminus K_k) \leq 2^{-k}$, $h_j$ is quasi-continuous. Since $h$ is bounded, by definition, $\{ h_j \}$ is uniformly bounded. By \cite[Lemma 2.4]{GLZ19}, $h_j$ converges to $h$ in capacity. Set $\hat{h}_j := \env{h_j}{\beta}$ and $\hat{h} := \env{h}{\beta}$. By Hartogs' lemma, $\hat{h}_j \rightarrow \hat{h}$ in capacity. According to \cite[Theorem 2.6]{DDL23} (see also \cite[Theorem 2.6]{LLZ24}), $$ \int_X h (\beta + dd^c \hat{h})^n \leq \liminf_{j \to +\infty} \int_X h_j (\beta + dd^c \hat{h}_j)^n. $$ Since $\{ \hat{h}_j \}_j$ decreases to $\hat{h}$ by Proposition \ref{prop: representation}, we have $$ \int_X \hat{h} (\beta + dd^c \hat{h})^n = \lim_{j \to +\infty} \int_X \hat{h}_j (\beta + dd^c \hat{h}_j)^n. $$ Thus, by Lemma \ref{lem: neg} and the fact that Monge-Amp\`ere measures put no mass on pluripolar sets, we have $$ \int_X (h - \hat{h}) (\beta + dd^c \hat{h})^n \leq \liminf_{j \to +\infty} \int_X (h_j - \hat{h}_j) (\beta + dd^c \hat{h}_j)^n. $$ The right-hand term is equal to $0$ by Lemma \ref{lem: put_no_mass1}. Therefore, $$ \int_X (h - \hat{h}) (\beta + dd^c \hat{h})^n = 0, $$ which implies that $(\beta + dd^c \env{h}{\beta})^n$ puts no mass on $\{ \env{h}{\beta} < h \}$. \end{proof} \begin{rem} \label{rem: put_no_mass3} If $h$ is a quasi-continuous Lebesgue measurable function with a lower bound, and $\env{h}{\beta} \in \PSH{X}{\beta}$, then Lemma \ref{put_no_mass2} still holds. Indeed, as in the proof of \cite[Theorem 2.3]{GL22}, one can replace $h$ by $\min(h, C)$, where $C > \sup_X \env{h}{\beta}$. Then we have $$ \env{h}{\beta} = \env{\min(h, C)}{\beta}, $$ and the claim follows from Lemma \ref{put_no_mass2}.
\end{rem} Now, we are ready to improve Lemma \ref{put_no_mass2} without the bounded condition, following the idea of \cite[Theorem 2.5]{ALS24}. \begin{thm} \label{thm: put_no_mass4} Assume $h$ is quasi-continuous on $X$, and $\env{h}{\beta} \in \PSH{X}{\beta}$. Then $\env{h}{\beta} \leq h$ outside a pluripolar set, and we have $$ \int_{\{ \env{h}{\beta} < h \}} \langle(\beta + dd^c \env{h}{\beta})^n\rangle = 0. $$ \end{thm} \begin{proof} By Remark \ref{rem: put_no_mass3}, we may assume $h$ is bounded above. Set \begin{align*} h_j := \max(h, -j), \end{align*} which is a bounded function. Since $\rho$ is bounded, $\env{h_j}{\beta} \in \PSH{X}{\beta}$. By Lemma \ref{lem: neg}, $\env{h_j}{\beta} \leq h_j$ outside a pluripolar set. Since $\env{h}{\beta} \in \PSH{X}{\beta}$, also by Lemma \ref{lem: neg}, we conclude that $\env{h}{\beta} \leq h$ quasi-everywhere. Next, we prove the second result. By Lemma \ref{put_no_mass2} and Remark \ref{rem: put_no_mass3}, \begin{align*} \int_{\{ \env{h_j}{\beta} < h_j \}} (\beta + dd^c \env{h_j}{\beta})^n = 0, \quad \forall j \geq 1. \end{align*} Fix $k \in \mathbb{N}$. For every $j \geq k$, we have $\{ \env{h_k}{\beta} < h \} \subseteq \{ \env{h_j}{\beta} < h_j \}$, which implies \begin{align*} \int_{\{ \env{h_k}{\beta} < h \}} (\beta + dd^c \env{h_j}{\beta})^n = 0. \end{align*} Fix $C > 0$. From Theorem \ref{thm: plur_prop1}, \begin{align*} \int_{\{ \env{h_k}{\beta} < h \} \cap \{ \env{h}{\beta} > \rho - C \}} (\beta + dd^c \max(\env{h_j}{\beta}, \rho - C))^n = 0. \end{align*} Set \begin{align*} f_k := \left[ \max(h, \env{h_k}{\beta}) - \env{h_k}{\beta} \right] \times \left[ \max(\env{h}{\beta}, \rho - C) - (\rho - C) \right], \end{align*} which is a positive, bounded, quasi-continuous function. By \cite[Lemma 4.2]{KH09}, we have \begin{align*} \int_X f_k (\beta + dd^c \max(\env{h_j}{\beta}, \rho - C))^n = 0, \quad \forall j \geq k. \end{align*} By the Bedford-Taylor convergence theorem, \begin{align*} (\beta + dd^c \max(\env{h_j}{\beta}, \rho - C))^n \rightarrow (\beta + dd^c \max(\env{h}{\beta}, \rho - C))^n, \end{align*} which implies \begin{align*} \int_X f_k (\beta + dd^c \max(\env{h}{\beta}, \rho - C))^n &\leq \liminf_{j \to +\infty} \int_X f_k (\beta + dd^c \max(\env{h_j}{\beta}, \rho - C))^n \\ &= 0, \end{align*} thanks to \cite[Theorem 2.6]{DDL23} (see also \cite[Theorem 2.6]{LLZ24}). The above is equivalent to \begin{align*} \int_{\{ \env{h_k}{\beta} < h \} \cap \{ \env{h}{\beta} > \rho - C \}} (\beta + dd^c \max(\env{h}{\beta}, \rho - C))^n = 0. \end{align*} By Theorem \ref{thm: plur_prop1}, \begin{align*} \int_{\{ \env{h_k}{\beta} < h \} \cap \{ \env{h}{\beta} > \rho - C \}} \langle(\beta + dd^c \env{h}{\beta})^n\rangle = 0. \end{align*} Letting $k \to +\infty$ and then $C \to +\infty$, we obtain \begin{align*} \int_{\{ \env{h}{\beta} < h \}} \langle(\beta + dd^c \env{h}{\beta})^n \rangle= 0. \end{align*} \end{proof} \begin{cor} \label{cor: contactieq} Let $u, v \in \PSH{X}{\beta}$ be such that $\env{u, v}{\beta} \in \PSH{X}{\beta}$. Then \begin{align*} \langle\beta_{\env{u, v}{\beta}}^n \rangle &\leq \ind{\{\env{u, v}{\beta} = u\}} \langle\beta_u^n \rangle + \ind{\{\env{u, v}{\beta} = v, \env{u, v}{\beta} < u\}} \langle \beta_v^n \rangle. \end{align*} In particular, if $\mu$ is a positive measure such that $\langle\beta_u^n \rangle \leq \mu$ and $\langle \beta_v^n \rangle \leq \mu$, then $\langle \beta_{\env{u, v}{\beta}}^n \rangle \leq \mu$. 
\end{cor} \begin{proof} By Theorem \ref{thm: put_no_mass4}, $\langle\beta_{\env{u, v}{\beta}}^n\rangle$ is supported on the contact set $ K := K_u \cup K_v $, where \begin{align*} K_u &:= \{ \env{u, v}{\beta} = u \}, \\ K_v &:= \{ \env{u, v}{\beta} = v \} \cap \{ \env{u, v}{\beta} < u \}. \end{align*} From Corollary \ref{cor: comparison}, we have \begin{align*} \ind{K_u} \langle \beta_{\env{u, v}{\beta}}^n \rangle &\leq \ind{K_u} \langle \beta_u^n \rangle, \\ \ind{K_v} \langle \beta_{\env{u, v}{\beta}}^n \rangle&\leq \ind{K_v} \langle\beta_v^n \rangle. \end{align*} Adding these two inequalities, we obtain \begin{align*} \langle\beta_{\env{u, v}{\beta}}^n\rangle &= \ind{K_u} \langle\beta_{\env{u, v}{\beta}}^n \rangle + \ind{K_v} \langle \beta_{\env{u, v}{\beta}}^n \rangle\\ &\leq \ind{K_u} \langle\beta_u^n \rangle + \ind{K_v}\langle \beta_v^n \rangle, \end{align*} which completes the proof. For the second part, if $\mu$ is a positive measure such that $\beta_u^n \leq \mu$ and $\beta_v^n \leq \mu$, then by the above inequality, \begin{align*} \langle\beta_{\env{u, v}{\beta}}^n \rangle &\leq \ind{K_u} \langle \beta_u^n \rangle + \ind{K_v} \langle \beta_v^n \rangle \\ &\leq \ind{K_u} \mu + \ind{K_v} \mu \\ &= \mu. \end{align*} \end{proof} \begin{rem} The corollary above is true if we replace $\beta$ by $\beta + dd^c \rho$ and $\PSH{X}{\beta}$ by $\PSH{X}{\beta + dd^c \rho}$ due to the equality in Theorem \ref{thm: key}. \end{rem} \subsection{The full mass class} \begin{defn} The full mass class $\mathcal{E}(X, \beta)$ is defined as follows: \begin{align*} \mathcal{E}(X, \beta) &:= \left\{ u \in \PSH{X}{\beta} \mid \lim_{t \to +\infty} (\beta + dd^c \max(u, \rho - t))^n(\{u \leq \rho - t\}) = 0 \right\} \\ &= \left\{ u \in \PSH{X}{\beta} \mid \int_X \langle(\beta + dd^c u)^n\rangle = \int_X \beta^n \right\}. \end{align*} \end{defn} The second equation holds because: \begin{align*} \int_X \langle(\beta + dd^c u)^n \rangle &= \lim_{t \to +\infty} \int_X \ind{\{u > \rho - t\}} (\beta + dd^c \max(u, \rho - t))^n \\ &= \lim_{t \to +\infty} \left[ \int_X (\beta + dd^c \max(u, \rho - t))^n - \int_X \ind{\{u \leq \rho - t\}} (\beta + dd^c \max(u, \rho - t))^n \right] \\ &= \int_X \beta^n - \lim_{t \to +\infty} (\beta + dd^c \max(u, \rho - t))^n(\{u \leq \rho - t\}). \end{align*} Similarly, we define the full mass class for $\beta + dd^c \rho$: \begin{defn} \label{class2} The full mass class $\mathcal{E}(X, \beta + dd^c \rho)$ is defined as follows: \begin{align*} \mathcal{E}(X, \beta + dd^c \rho) &:= \left\{ u \in \PSH{X}{\beta + dd^c \rho} \mid \lim_{t \to +\infty} (\beta + dd^c \rho + dd^c \max(u, -t))^n(\{u \leq -t\}) = 0 \right\} \\ &= \left\{ u \in \PSH{X}{\beta + dd^c \rho} \mid \lim_{t \to +\infty} (\beta + dd^c \max(\rho + u, \rho - t))^n(\{u + \rho \leq \rho - t\}) = 0 \right\} \\ &= \left\{ u \in \PSH{X}{\beta + dd^c \rho} \mid u + \rho \in \mathcal{E}(X, \beta) \right\} \\ &= \left\{ u \in \PSH{X}{\beta + dd^c \rho} \mid \int_X \langle (\beta + dd^c \rho + dd^c u)^n \rangle= \int_X \beta^n \right\}. \end{align*} \end{defn} \begin{rem} \label{rem: second_version} \begin{enumerate} \item From the above computations, Theorem \ref{thm: plur_prop1} and Corollary \ref{cor: comparison} still hold if we replace $\beta$ by $\beta + dd^c \rho$. 
\item We have the following: for any measurable function $h: X \to \mathbb{R}$ which is bounded from above, \begin{align*} P_{\beta + dd^c \rho}(h) &:= \left( \sup \left\{ \varphi \in \PSH{X}{\beta + dd^c \rho} \mid \varphi \leq h \quad \text{quasi-everywhere} \right\} \right) \\ &= \left( \sup \left\{ \widetilde{\varphi} - \rho \mid \widetilde{\varphi} \in \PSH{X}{\beta}, \widetilde{\varphi} \leq h + \rho \quad \text{quasi-everywhere} \right\} \right) \\ &= \left( \sup \left\{ \widetilde{\varphi} \in \PSH{X}{\beta} \mid \widetilde{\varphi} \leq h + \rho \quad \text{quasi-everywhere} \right\} \right) - \rho \\ &= P_{\beta}(h + \rho) - \rho. \end{align*} Therefore, by Remark \ref{rem: put_no_mass3}, the same result holds in Theorem \ref{thm: put_no_mass4} if we replace $\beta$ by $\beta + dd^c \rho$. \item (See \cite[Lemma 1.2]{GZ07} or \cite[Lemma 2.4]{LWZ24b}.) Assume $\{s_j\}_j$ is a sequence converging to $+\infty$ such that $s_j \leq j$. Then $u \in \mathcal{E}(X, \beta + dd^c \rho)$ if and only if $$ \lim_{j \to +\infty} (\beta + dd^c \rho + dd^c \max(u, -j))^n(\{u \leq -s_j\}) = 0. $$ \item (Comparison Principle, see \cite[Proposition 3.11]{LWZ24a}). If $\varphi, \psi \in \mathcal{E}(X, \beta + dd^c \rho)$, then $$ \int_{\{\varphi < \psi\}} \langle(\beta + dd^c \rho + dd^c \psi)^n \rangle\leq \int_{\{\varphi < \psi\}} \langle(\beta + dd^c \rho + dd^c \varphi)^n\rangle. $$ \end{enumerate} \end{rem} \begin{prop} \label{prop: chac_class} Assume $\rho \in \PSH{X}{\beta} \cap L^{\infty}(X)$. Let $u \in \PSH{X}{\beta + dd^c \rho}$. The following are equivalent: \begin{enumerate} \item $u \in \mathcal{E}(X, \beta + dd^c \rho)$. \item $\env{Au}{\beta + dd^c \rho} \in \PSH{X}{\beta + dd^c \rho}$ for every $A \geq 1$. \end{enumerate} \end{prop} \begin{proof} The proof is mainly based on \cite[Proposition 2.5]{ALS24}. \textbf{(1) $\Rightarrow$ (2).} Assume $u \in \mathcal{E}(X, \beta + dd^c \rho)$. Define the sequence $u_j := \max(u, -j)$ and set $\varphi_j = \env{Au_j}{\beta + dd^c \rho}$. By Remark \ref{rem: second_version} (2), we have \begin{equation}\label{p1} \left\{ \begin{array}{ll} \varphi_j &\leq Au_j \quad \text{quasi-everywhere}, \\ (\beta + dd^c \rho + dd^c \varphi_j)^n &= \ind{\{\varphi_j = Au_j\}} (\beta + dd^c \rho + dd^c \varphi_j)^n. \end{array} \right. \end{equation} We have $\frac{1}{A} \varphi_j \leq u_j \quad \text{quasi-everywhere}$. Both sides of this inequality are $(\beta + dd^c \rho)$-psh, so $\frac{1}{A} \varphi_j \leq u_j \quad \text{everywhere}$. From \eqref{p1} and Corollary \ref{cor: comparison}, we get: \begin{equation} \label{p2} \begin{split} (\beta + dd^c \rho + dd^c \varphi_j)^n &\leq A^n \ind{\{\varphi_j = Au_j\}} (\beta + dd^c \rho + dd^c A^{-1} \varphi_j)^n \\ &\leq A^n \ind{\{\varphi_j = Au_j\}} (\beta + dd^c \rho + dd^c u_j)^n. \end{split} \end{equation} Fix $k \in \mathbb{N}$. Assume by contradiction that $\varphi_j \to -\infty$ as $j \to +\infty$. By a compactness argument, it follows that $\sup_X \varphi_j \to -\infty$. Then there exists $j_0$ such that $X = \{\varphi_j \leq -Ak\}$ for $j \geq j_0$, which implies \begin{equation} \label{p3} \begin{split} \int_X \beta^n &\leq \int_{\{\varphi_j \leq -Ak\}} (\beta + dd^c \rho + dd^c \varphi_j)^n \\ &\leq A^n \int_{\{u_j \leq -k\}} (\beta + dd^c (\rho + u_j))^n \\ &= A^n \left( \int_X (\beta + dd^c (\rho + u_j))^n - \int_{\{u_j > -k\}} (\beta + dd^c (\rho + u_j))^n \right). \end{split} \end{equation} The second inequality follows from \eqref{p2}. For $j \geq k$, by Remark \ref{rem: second_version}
(1), we have \begin{equation*} \begin{split} \int_{\{u_j > -k\}} (\beta + dd^c (\rho + u_j))^n &= \int_{\{u_j > -k\}} (\beta + dd^c \rho + dd^c \max(u_j, -k))^n \\ &= \int_{\{u > -k\}} (\beta + dd^c (\rho + u_k))^n, \end{split} \end{equation*} thus by \eqref{p3}, \begin{equation*} \begin{split} \int_X \beta^n &\leq A^n \left( \int_X (\beta + dd^c (\rho + u_j))^n - \int_{\{u > -k\}} (\beta + dd^c (\rho + u_k))^n \right) \\ &\leq A^n \left( \int_X \beta^n - \int_{\{u > -k\}} (\beta + dd^c (\rho + u_k))^n \right). \end{split} \end{equation*} Letting $k \to +\infty$, by $u \in \mathcal{E}(X, \beta + dd^c \rho)$ and Definition \ref{class2}, we infer $$ \int_X \beta^n \leq 0, $$ which contradicts the assumption $\int_X \beta^n > 0$. Therefore, $(\varphi_j)_j$ does not converge uniformly to $-\infty$. Since $\varphi_j \downarrow \env{Au}{\beta + dd^c \rho}$, we conclude that $\env{Au}{\beta + dd^c \rho} \in \PSH{X}{\beta + dd^c \rho}$. \textbf{(2) $\Rightarrow$ (1).} Since $\int_X (\beta + dd^c \rho + dd^c u)^n \leq \int_X \beta^n$ by definition, to prove the result, we need to show the following claim: $$ \int_X \langle(\beta + dd^c \rho + dd^c u)^n \rangle \geq \int_X \beta^n. $$ Since $\env{Au}{\beta + dd^c \rho} \leq Au \quad \text{quasi-everywhere} \Rightarrow A^{-1} \env{Au}{\beta + dd^c \rho} \leq u \quad \text{everywhere}$, we have $$ \rho + A^{-1} \env{Au}{\beta + dd^c \rho} \leq \rho + u \quad \text{everywhere}. $$ By \cite[Theorem 3.3]{DDL23}, \begin{equation*} \begin{split} \int_X \langle(\beta + dd^c \rho + dd^c u)^n \rangle &\geq \int_X \left \langle (\beta + dd^c \rho + dd^c A^{-1} \env{Au}{\beta + dd^c \rho}\right)^n \rangle \\ &\geq \int_X \left(1 - \frac{1}{A}\right)^n (\beta + dd^c \rho)^n \\ &= \left(1 - \frac{1}{A}\right)^n \int_X \beta^n. \end{split} \end{equation*} Letting $A \to +\infty$, the claim follows. Thus, we have shown both directions, completing the proof. \end{proof} \begin{cor} \label{cor: A} If $u \in \mathcal{E}(X, \beta + dd^c \rho)$, then $\env{Au}{\beta + dd^c \rho} \in \mathcal{E}(X, \beta + dd^c \rho)$ for $A \geq 1$. \end{cor} \begin{proof} The proof is mainly based on \cite[Corollary 2.6]{ALS24}. Fix $A\geq 1$. Set $v=P_{\beta+dd^c \rho}(Au) \in PSH(X, \beta+dd^c \rho)$. By Proposition \ref{prop: chac_class}, it suffices to prove $P_{\beta+dd^c \rho}(tv)\in PSH(X, \beta+dd^c \rho)$ for $t\geq 1$. Since $P_{\beta+dd^c \rho}(Atu) \leq t P_{\beta+dd^c \rho}(Au)$, we get $tv\geq P_{\beta+dd^c \rho}(Atu)\in PSH(X, \beta+dd^c \rho)$. Hence $P_{\beta+dd^c \rho}(tv) \geq P_{\beta+dd^c \rho}(Atu)\in PSH(X, \beta+dd^c \rho)$ and we conclude that $P_{\beta+dd^c \rho}(tv) \in PSH(X, \beta+dd^c \rho)$. \end{proof} \begin{cor} \label{cor: convex} $\mathcal{E}(X, \beta + dd^c \rho)$ is a convex set, and so is $\mathcal{E}(X, \beta)$. \end{cor} \begin{proof} The proof is mainly based on \cite[Proposition 1.6]{GZ07}. \textbf{Step 1:} If $\varphi \in \PSH{X}{\beta + dd^c \rho}$ and $\frac{\varphi}{2} \in \mathcal{E}(X, \beta + dd^c \rho)$, then $\varphi \in \mathcal{E}(X, \beta + dd^c \rho)$. \textit{Proof of Step 1:} Set $u = \frac{\varphi}{2}$, $u_j := \max(u, -j)$, and $\varphi_j := \max(\varphi, -j)$. Then $u_j = \frac{\varphi_{2j}}{2}$ and $$ \beta + dd^c \rho + dd^c u_j = \frac{1}{2}[\beta + dd^c \rho + (\beta + dd^c \rho + dd^c \varphi_{2j})] \geq \frac{1}{2} (\beta + dd^c \rho + dd^c \varphi_{2j}). 
$$ Thus, \begin{equation*} \begin{split} \int_{\{\varphi \leq -2j\}} (\beta + dd^c \rho + dd^c \varphi_{2j})^n &= \int_{\{u \leq -j\}} (\beta + dd^c \rho + dd^c \varphi_{2j})^n \\ &\leq 2^n \int_{\{u \leq -j\}} (\beta + dd^c \rho + dd^c u_j)^n \to 0, \end{split} \end{equation*} which implies $\varphi \in \mathcal{E}(X, \beta + dd^c \rho)$. \textbf{Step 2:} If $\varphi \in \mathcal{E}(X, \beta + dd^c \rho)$ and $\psi \in \PSH{X}{\beta + dd^c \rho}$ such that $\varphi \leq \psi$, then $\psi \in \mathcal{E}(X, \beta + dd^c \rho)$ by \cite[Theorem 3.3]{DDL23}, and moreover, $\frac{\psi}{2} \in \mathcal{E}(X, \beta + dd^c \rho)$. \textbf{Step 3:} If $\varphi, \psi \in \mathcal{E}(X, \beta + dd^c \rho)$, then $\frac{\varphi + \psi}{4} \in \mathcal{E}(X, \beta + dd^c \rho)$. \textit{Proof of Step 3:} Set $w := \frac{\varphi + \psi}{4}$, $w_j := \max(w, -j)$, $\varphi_j := \max(\varphi, -j)$, and $\psi_j := \max(\psi, -j)$. Since $$ \{w \leq -j\} \subseteq \{\varphi \leq -2j\} \cup \{\psi \leq -2j\}, $$ it suffices to prove $$ (\beta + dd^c \rho + dd^c w_j)^n(\{\varphi \leq -2j\}) \to 0 $$ and $$ (\beta + dd^c \rho + dd^c w_j)^n(\{\psi \leq -2j\}) \to 0. $$ Without loss of generality, assume $\varphi, \psi \leq -2$. Then we have $$ \{\varphi \leq -2j\} \subseteq \{\varphi_{2j} < w_j - j + 1\} \subseteq \{\varphi \leq -j\}. $$ Hence, \begin{equation*} \begin{split} (\beta + dd^c \rho + dd^c w_j)^n(\{\varphi \leq -2j\}) &\leq (\beta + dd^c \rho + dd^c w_j)^n(\{\varphi_{2j} < w_j - j + 1\}) \\ &\leq (\beta + dd^c \rho + dd^c \varphi_{2j})^n(\{\varphi \leq -j\}) \\ &= (\beta + dd^c \rho + dd^c \varphi_j)^n(\{\varphi \leq -j\}) \to 0, \end{split} \end{equation*} where the second inequality follows from the comparison principle (Remark \ref{second_version} (4)), and the last equality follows from Lemma \ref{pl_prop}. By Steps 1 and 3, we conclude that $\mathcal{E}(X, \beta + dd^c \rho)$ is a convex set. The same argument applies to $\mathcal{E}(X, \beta)$ by setting $\rho = 0$. \end{proof} \begin{lem} \label{lem: min1} Assume $u, v \in \mathcal{E}(X, \beta + dd^c \rho)$. Then $\env{\min(u, v)}{\beta + dd^c \rho} \in \mathcal{E}(X, \beta + dd^c \rho)$. \end{lem} \begin{proof} The proof is mainly based on \cite[Lemma 2.9]{ALS24}. By adding a constant, we can assume without loss of generality that $u, v \leq 0$. Denote by $\varphi = \env{\min(u, v)}{\beta + dd^c \rho}$ and set $h = \frac{u + v}{2}$. Since $u, v \in \mathcal{E}(X, \beta + dd^c \rho)$, by Corollary \ref{cor: convex}, the set $\mathcal{E}(X, \beta + dd^c \rho)$ is convex. Therefore, $h = \frac{u + v}{2} \in \mathcal{E}(X, \beta + dd^c \rho)$. By Corollary \ref{cor: A}, it follows that $\env{u + v}{\beta + dd^c \rho} \in \mathcal{E}(X, \beta + dd^c \rho)$. Moreover, since $u, v \leq 0$, we have $u + v \leq \min(u, v)$. Consequently, $$ \env{u + v}{\beta + dd^c \rho} \leq \env{\min(u, v)}{\beta + dd^c \rho}. $$ By Step 2 in the proof of Corollary \ref{cor: convex}, if $\psi \in \mathcal{E}(X, \beta + dd^c \rho)$ and $\phi \in \PSH{X}{\beta + dd^c \rho}$ such that $\psi \leq \phi$, then $\phi \in \mathcal{E}(X, \beta + dd^c \rho)$. Applying this observation to $\psi = \env{u + v}{\beta + dd^c \rho}$ and $\phi = \env{\min(u, v)}{\beta + dd^c \rho}$, we conclude that $$ \env{\min(u, v)}{\beta + dd^c \rho} \in \mathcal{E}(X, \beta + dd^c \rho). $$ This completes the proof. 
\end{proof} \begin{lem} \label{lem: ineq_3} Assume $u, v \in \PSH{X}{\beta + dd^c \rho}$ and $\env{u, v}{\beta + dd^c \rho} \in \PSH{X}{\beta + dd^c \rho}$, and let $h \in \PSH{X}{\beta + dd^c \rho}$. Then \begin{equation*} \begin{split} &\int_{X} \left| e^{P_{\beta+dd^c \rho}(u,v)} - e^h \right| \langle(\beta + dd^c \rho + dd^c P_{\beta+dd^c \rho}(u,v))^n \rangle\\ \leq & \int_{X} \left| e^u - e^h \right| \langle (\beta + dd^c \rho + dd^c u)^n \rangle + \int_{X} \left| e^v - e^h \right| \langle(\beta + dd^c \rho + dd^c v)^n\rangle. \end{split} \end{equation*} \end{lem} \begin{proof} The proof is a direct consequence of Corollary \ref{cor: contactieq} (applied with $\beta + dd^c \rho$ in place of $\beta$). \end{proof} \begin{thm} [Domination Principle] \label{thm: domi_principal} Let $c \in [0, 1)$, and assume $\beta + dd^c \rho \geq 0$ with $\rho$ bounded. Let $u, v \in \mathcal{E}(X, \beta + dd^c \rho)$. If $$ \langle(\beta + dd^c \rho + dd^c u)^n \rangle \leq c \langle (\beta + dd^c \rho + dd^c v)^n\rangle $$ on $\{u < v\}$, then $u \geq v$. \end{thm} \begin{proof} The proof is mainly based on \cite{ALS24}. By Remark \ref{rem: second_version} (1), we have $$ c \langle(\beta + dd^c \rho + dd^c \max(u, v))^n \rangle= c \langle (\beta + dd^c \rho + dd^c v)^n \rangle \leq \langle(\beta + dd^c \rho + dd^c v)^n\rangle $$ on $\{u < v\}$. Thus, we can replace $v$ by $\max(u, v)$ and assume without loss of generality that $u \leq v$. Our goal is to prove that $u = v$, which implies the final result. For $b > 1$, define $u_b = \env{bu - (b-1)v}{\beta + dd^c \rho}$. We have $bu - (b-1)v \leq v$ for each $b > 1$ and $u_b \in \PSH{X}{\beta + dd^c \rho}$ or $u_b \equiv -\infty$. Next, we show that $u_b \in \mathcal{E}(X, \beta + dd^c \rho)$ for each $b > 1$. By Corollary \ref{cor: A}, since $u \in \mathcal{E}(X, \beta + dd^c \rho)$, we have \begin{equation} \label{D1} P_{\beta + dd^c \rho}(bu) \in \mathcal{E}(X, \beta + dd^c \rho). \end{equation} We also have $$ P_{\beta + dd^c \rho}(bu) \leq bu \leq bu - (b-1)(v - \sup_X v) $$ quasi-everywhere, which implies \begin{equation} \label{D2} P_{\beta + dd^c \rho}(bu) - (b-1) \sup_X v \leq bu - (b-1)v \quad \text{quasi-everywhere}. \end{equation} From \eqref{D1} and \eqref{D2}, it follows that $$ u_b \geq P_{\beta + dd^c \rho}(bu) - (b-1) \sup_X v \in \mathcal{E}(X, \beta + dd^c \rho). $$ By Step 2 in the proof of Corollary \ref{cor: convex}, we conclude that $u_b \in \mathcal{E}(X, \beta + dd^c \rho)$. Denote $D := \{u_b = bu - (b-1)v\}$. According to Remark \ref{rem: second_version} (2), $(\beta + dd^c \rho + dd^c u_b)^n$ is carried by $D$, and $u_b \leq bu - (b-1)v$ quasi-everywhere (i.e., $b^{-1}u_b + (1-b^{-1})v \leq u$ quasi-everywhere). Since both sides are in $\PSH{X}{\beta + dd^c \rho}$, this inequality holds everywhere. By Remark \ref{rem: second_version} (1), $$ \ind{D} \left \langle \left( \beta + dd^c \rho + dd^c (b^{-1}u_b + (1-b^{-1})v) \right)^n \right \rangle \leq \ind{D} \langle (\beta + dd^c \rho + dd^c u)^n \rangle. $$ Hence, by hypothesis, \begin{align*} &\ind{D \cap \{u < v\}} (b^{-1})^n \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle + \ind{D \cap \{u < v\}} (1-b^{-1})^n \langle (\beta + dd^c \rho + dd^c v)^n \rangle\\ &\leq \ind{D \cap \{u < v\}} \left \langle\left( \beta + dd^c \rho + dd^c (b^{-1}u_b + (1-b^{-1})v) \right)^n \right \rangle \\ &\leq \ind{D \cap \{u < v\}} c \langle (\beta + dd^c \rho + dd^c v)^n \rangle.
\end{align*} Choose $b$ large enough such that $(1-b^{-1})^n > c$, we have \begin{align*} &\ind{D \cap \{u < v\}} (b^{-1})^n \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle \\ &\leq \ind{D \cap \{u < v\}} (c - (1-b^{-1})^n) \langle (\beta + dd^c \rho + dd^c v)^n \rangle. \end{align*} Thus, $\ind{D \cap \{u < v\}} \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle = 0$, which means $ \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle$ is carried by $D \cap \{u = v\}$. On $D \cap \{u = v\}$, we have $u_b = v = u$ and $u_b \leq bu - (b-1)v \leq v$ on $X$. By Remark \ref{second_version} (1), \begin{equation} \label{D3} \ind{D \cap \{u = v\}} \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle \leq \ind{D \cap \{u = v\}} \langle (\beta + dd^c \rho + dd^c v)^n \rangle. \end{equation} Since $\langle (\beta + dd^c \rho + dd^c v)^n \rangle$ does not charge $\{v = -\infty\}$, one can construct an increasing function $h: \mathbb{R}^+ \to \mathbb{R}^+$ such that $h(+\infty) = +\infty$ and $h(|v|) \in L^1(\langle (\beta + dd^c \rho + dd^c v)^n \rangle )$. Thus, by \eqref{D3}, \begin{equation} \label{D4} \begin{split} \int_X h(|u_b|) \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle &= \int_{D \cap \{u = v\}} h(|v|) \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle \\ &\leq \int_X h(|v|) \langle (\beta + dd^c \rho + dd^c v)^n \rangle < +\infty. \end{split} \end{equation} If $(\sup_X u_b)_{b > 1}$ has a subsequence that converges to $-\infty$, then for every positive scalar $\alpha$, one can find $b$ such that $\sup_X u_b \leq -\alpha$. This implies \begin{equation*} \begin{split} \int_X h(|u_b|) \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle &\geq h(\alpha) \int_X \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle\\ &= h(\alpha) \int_X \beta^n, \end{split} \end{equation*} which is impossible because $\sup_{b \geq 1} \int_X h(|u_b|) \langle (\beta + dd^c \rho + dd^c u_b)^n \rangle < +\infty$. Therefore, $(\sup_X u_b)_{b > 1}$ is uniformly bounded. By compactness properties, $(u_b)_b$ has a subsequence $(u_{b_j})_j$ which converges in $L^1(X)$ to a function $w \in \PSH{X}{\beta + dd^c \rho}$. Fix $a > 0$. On $\{u < v - a\}$, we have $u_b \leq v - ab$, hence \begin{equation} \begin{split} \int_{\{u < v - a\}} w \cdot \omega^n &= \lim_{j \to +\infty} \int_{\{u < v - a\}} u_{b_j} \cdot \omega^n \\ &\leq \lim_{j \to +\infty} (-ab_j) \int_{\{u < v - a\}} \omega^n + \int_{\{u < v - a\}} v \cdot \omega^n. \end{split} \end{equation} Since $w \in L^1(X)$, we infer that $$ \int_{\{u < v - a\}} \omega^n = 0, $$ which implies $u \geq v - a$ almost everywhere. Both sides are quasi-psh, so by subharmonicity, $u \geq v - a$ everywhere. Letting $a \to 0$, we conclude that $u = v$. \end{proof} \begin{cor} \label{cor: comparison1} Fix $\lambda > 0$. If $u_1, u_2 \in \mathcal{E}(X, \beta + dd^c \rho)$ are such that \begin{align*} e^{-\lambda u_1} \langle (\beta + dd^c \rho + dd^c u_1)^n \rangle \leq e^{-\lambda u_2} \langle (\beta + dd^c \rho + dd^c u_2)^n \rangle, \end{align*} then $u_1 \geq u_2$. \end{cor} \begin{proof} For $a > 0$, we have $$ \langle (\beta + dd^c \rho + dd^c u_1)^n \rangle \leq e^{\lambda(u_1 - u_2)} \langle (\beta + dd^c \rho + dd^c u_2)^n \rangle \leq e^{-\lambda a} \langle (\beta + dd^c \rho + dd^c u_2)^n \rangle $$ on $\{u_1 < u_2 - a\}$. By Theorem \ref{thm: domi_principal}, this implies $u_1 \geq u_2$. 
\end{proof} \begin{cor} \label{cor: comparison2} If $u, v \in \mathcal{E}(X, \beta + dd^c \rho)$ are such that \begin{align*} \langle (\beta + dd^c \rho + dd^c u)^n \rangle \leq c \langle (\beta + dd^c \rho + dd^c v)^n \rangle \end{align*} for some positive constant $c$, then $c \geq 1$. \end{cor} \begin{proof} Suppose, by contradiction, that $c < 1$. Then for each $a \in \mathbb{R}$, we have $$ \langle (\beta + dd^c \rho + dd^c u)^n \rangle \leq c \langle (\beta + dd^c \rho + dd^c (v + a))^n \rangle $$ on $\{u < v + a\}$. By the Domination Principle (Theorem \ref{thm: domi_principal}), this implies $u \geq v + a$ for every $a \in \mathbb{R}$, which is a contradiction. We infer that $c \geq 1$. \end{proof} \section{Continuity of non-pluripolar Monge-Amp\`ere measures} Let $(X, \omega)$ be a compact Hermitian manifold and let $\beta$ be a closed real $(1,1)$-form with $\int_X\beta^n>0$ for which there exists a bounded $\beta$-psh function $\rho$. In this section, we provide the proof of Theorem \ref{thm: main-1}, which corresponds to Theorem \ref{thm: maintheorem} stated below. Additionally, we prepare several results that will be used in the proof of Corollary \ref{cor: main}. \begin{thm} [=Theorem \ref{thm: main-1}]\label{thm: maintheorem} Let $u_j \in \mathcal{E}(X, \beta + dd^c \rho)$ be such that $\langle (\beta + dd^c \rho + dd^c u_j)^n \rangle \leq \mu$ for some non-pluripolar Radon measure $\mu$. If $u_j \rightarrow u \in PSH(X, \beta + dd^c \rho)$ in $L^1(X)$, then $u \in \mathcal{E}(X, \beta + dd^c \rho)$ and $u_j \rightarrow u$ in capacity. Furthermore, $\langle(\beta+dd^c\rho+dd^cu_j)^n\rangle$ converges weakly to $\langle(\beta+dd^c\rho+dd^cu)^n\rangle$. \end{thm} The proof of Theorem \ref{thm: maintheorem} follows the steps outlined in \cite[Theorem 3.3]{ALS24}. \begin{thm} \label{thm: main3} Assume $u_j, u \in \mathcal{E}(X, \beta + dd^c \rho)$ are such that $u_j \rightarrow u$ in capacity. Assume $h_j$ is a sequence of uniformly bounded quasi-continuous functions converging in capacity to a bounded quasi-continuous function $h$. Then $$ h_j \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle \rightarrow h \langle (\beta + dd^c \rho + dd^c u)^n \rangle. $$ \end{thm} \begin{proof} Let $\Theta$ be a cluster point of $\langle(\beta + dd^c \rho + dd^c u_j)^n\rangle$ for the weak topology. We need to prove that $\Theta = \langle(\beta + dd^c \rho + dd^c u)^n\rangle$. According to \cite[Theorem 2.6]{DDL23} (see also \cite[Theorem 2.6]{LLZ24}), it suffices to show that $$ \int_{X} \langle (\beta + dd^c \rho + dd^c u)^n \rangle \geq \limsup_{j \to +\infty} \int_{X} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle, $$ which is evident since $u_j, u \in \mathcal{E}(X, \beta + dd^c \rho)$, and both sides of the above inequality equal $\int_{X}\beta^n$. \end{proof} \begin{rem} Theorem \ref{thm: main3} states that the non-pluripolar Monge-Amp\`ere measure is continuous with respect to convergence in capacity. \end{rem} \begin{lem} \label{lem: non-vanishing} Let $u_j \in \mathcal{E}(X, \beta + dd^c \rho)$ be such that $\langle (\beta + dd^c \rho + dd^c u_j)^n \rangle \leq \mu$ for some non-pluripolar Radon measure $\mu$. Then for any $v \in PSH(X, \beta + dd^c \rho)$, we have $$ \inf_{j} \int_{X} e^{v} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle > 0. $$ \end{lem} \begin{proof} Assume, for the sake of contradiction, that $$ \inf_{j} \int_{X} e^{v} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle = 0.
$$ By possibly extracting a subsequence, we can assume without loss of generality that $$ \lim_{j \to +\infty} \int_{X} e^v \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle = 0. $$ Fix $c > 0$. We then have \begin{equation*} \begin{split} \int_{X} \beta^n &= \int_{X} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle\\ &= \int_{\{v \leq -c\}} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle + \int_{\{v > -c\}} \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle \\ &\leq \mu(\{v \leq -c\}) + e^c \int_{X} e^v \langle (\beta + dd^c \rho + dd^c u_j)^n \rangle. \end{split} \end{equation*} First, let $j \to +\infty$, and then let $c \to +\infty$. The right-hand side approaches zero, which contradicts the fact that $\int_{X} \beta^n > 0$. \end{proof} \begin{lem} \label{lem: main2} Assume $u_j \in \mathcal{E}(X, \beta + dd^c \rho)$ satisfy the hypothesis of Theorem \ref{thm: maintheorem}, and $u_j \rightarrow u \in PSH(X, \beta + dd^c \rho)$ in $L^1(X)$. Then $u \in \mathcal{E}(X, \beta + dd^c \rho)$. \end{lem} \begin{proof} According to Proposition \ref{prop: chac_class}, it suffices to prove that $P_{\beta+dd^c \rho}(Au) \in PSH(X, \beta + dd^c \rho)$ for every $A \geq 1$. Fix $A \geq 1$ and set $\varphi_j = P_{\beta + dd^c \rho}(Au_j) \in PSH(X, \beta + dd^c \rho)$, since $u_j \in \mathcal{E}(X, \beta + dd^c \rho)$. Theorem \ref{thm: put_no_mass4} implies that $(\beta + dd^c \rho + dd^c \varphi_j)^n$ is carried by $\{\varphi_j = Au_j\}$ and that $\varphi_j \leq Au_j$. Corollary \ref{cor: comparison} implies \begin{equation} \begin{split} \langle(\beta + dd^c \rho + dd^c \varphi_j)^n \rangle &= 1_{\{\varphi_j = Au_j\}} \langle (\beta + dd^c \rho + dd^c \varphi_j)^n \rangle \\ &\leq 1_{\{\varphi_j = Au_j\}} \langle \left[A(\beta + dd^c \rho) + dd^c \varphi_j \right]^n \rangle \\ &= 1_{\{A^{-1}\varphi_j = u_j\}} A^n \left\langle \left[(\beta + dd^c \rho) + dd^c \left(\frac{1}{A}\varphi_j\right) \right]^n \right \rangle\\ &\leq 1_{\{A^{-1}\varphi_j = u_j\}} A^n \langle \left[(\beta + dd^c \rho) + dd^c u_j \right]^n \rangle\\ &\leq A^n \langle \left[(\beta + dd^c \rho) + dd^c u_j \right]^n \rangle \leq A^n \mu. \end{split} \end{equation} Hence, we obtain \begin{equation} \begin{split} \lim_{j \to +\infty} \int_{X} |e^{A^{-1}\varphi_j} - e^{u}| \langle (\beta + dd^c \rho + dd^c \varphi_j)^n \rangle &= \lim_{j \to +\infty} \int_{\{A^{-1}\varphi_j = u_j\}} |e^{u_j} - e^{u}| \langle (\beta + dd^c \rho + dd^c \varphi_j)^n \rangle\\ &\leq \lim_{j \to +\infty} A^n \int_{X} |e^{u_j} - e^{u}| d\mu. \end{split} \end{equation} Since $\{u_j\}_j$ is uniformly bounded from above, we can assume without loss of generality that $u_j$ are non-positive. It follows that $\{e^{u_j}\}_j$ is uniformly bounded and $e^{u_j} \in PSH(X, \beta + dd^c \rho)$. Moreover, $e^{u_j} +\rho \in PSH(X, \beta) \subseteq PSH(X, \omega)$. By \cite[Corollary 2.2]{KN22}, we have \begin{equation*} \lim_{j \to +\infty} \int_{X} \left|(e^{u_j} - \rho) - (e^{u} - \rho)\right| d\mu = 0, \end{equation*} consequently, \begin{equation} \lim_{j \to +\infty} \int_{X} |e^{A^{-1}\varphi_j} - e^{u}| \langle(\beta + dd^c \rho + dd^c \varphi_j)^n \rangle = 0. \end{equation} Assume, for the sake of contradiction, that $\varphi_j$ converges uniformly to $-\infty$. It follows that \begin{equation*} \lim_{j \to +\infty} \int_{X} e^{A^{-1}\varphi_j} \langle(\beta + dd^c \rho + dd^c \varphi_j)^n \rangle \leq A^n \lim_{j \to +\infty} \int_{X} e^{A^{-1} \varphi_j} d\mu = 0. 
\end{equation*} Therefore, \begin{equation*} \lim_{j \to +\infty} \int_{X} e^{u} \langle (\beta + dd^c \rho + dd^c \varphi_j)^n \rangle = 0, \end{equation*} which contradicts Lemma \ref{lem: non-vanishing}. By the compactness property, the sequence $(\varphi_j)_j$ has a subsequence converging to $\varphi \in PSH(X, \beta + dd^c \rho)$. Since $\varphi_j \leq Au_j$ almost everywhere, we have $\varphi \leq Au$ almost everywhere. Both sides being quasi-psh, this inequality holds everywhere. Hence, $P_{\beta + dd^c \rho}(Au) \in PSH(X, \beta + dd^c \rho)$, which implies $u \in \mathcal{E}(X, \beta + dd^c \rho)$. \end{proof} \begin{lem} \label{lem: main4} Consider $u_j, u$ as in Theorem \ref{thm: maintheorem}. Then $(u_j)_j$ has a subsequence converging to $u$ in capacity. \end{lem} \begin{proof} We may assume $u_j \leq 0$ for all $j$. By Lemma \ref{lem: main2}, we have $u \in \mathcal{E}(X, \beta + dd^c \rho)$. Up to extracting a subsequence, we may assume \begin{equation} \int_{X} |e^{u_j} - e^u| d\mu \leq 2^{-j}, \quad \forall j \geq 1. \end{equation} For each $k \geq j \geq 1$, consider $v_{j,k} := P_{\beta + dd^c \rho}\left( \min(u_j, \ldots, u_k) \right)$. It follows from Lemmas \ref{cor: contactieq} and \ref{lem: ineq_3} that \begin{equation} \label{matrix} \left\{ \begin{array}{ll} v_{j,k} \in \mathcal{E}(X, \beta + dd^c \rho), \\ \langle(\beta + dd^c \rho + dd^c v_{j,k})^n \rangle \leq \mu, \\ \int_{X} |e^{v_{j,k}} - e^{u}| \langle (\beta + dd^c \rho + dd^c v_{j,k})^n \rangle \leq 2^{-j+1}. \end{array} \right. \end{equation} Lemma \ref{lem: non-vanishing} implies $\inf_{j,k} \int_{X} e^u \langle(\beta + dd^c \rho + dd^c v_{j,k})^n \rangle > 0$, thus by \eqref{matrix}, we have \begin{equation} \label{63} \int_{X} e^{v_{j,k}} \langle (\beta + dd^c \rho + dd^c v_{j,k})^n \rangle \end{equation} does not converge to zero as $k \to +\infty$. If $\sup_{X} v_{j,k} \to -\infty$, then by \eqref{matrix}, $$ \int_{X} e^{v_{j,k}} \langle (\beta + dd^c \rho + dd^c v_{j,k})^n\rangle \leq e^{\sup_{X} v_{j,k}} \int_{X} \beta^n \to 0 $$ as $k \to +\infty$, which contradicts \eqref{63}. Therefore, \begin{equation} \label{64} \sup_{X} v_{j,k} \end{equation} does not converge to $-\infty$ as $k \to +\infty$. It follows that $v_j := P_{\beta + dd^c \rho}(\inf_{k \geq j} u_k) \in PSH(X, \beta + dd^c \rho)$ since $(v_{j,k})_k$ is a decreasing sequence converging to $v_j$. By Lemma \ref{lem: main2}, $v_j \in \mathcal{E}(X, \beta + dd^c \rho)$. By construction, $(v_j)_j$ is non-decreasing. Set $v := \lim v_j = \sup v_j \in PSH(X, \beta + dd^c \rho)$. Corollary \ref{cor: convex} implies $v \in \mathcal{E}(X, \beta + dd^c \rho)$, and $v_j \to v$ in capacity by Hartogs' Lemma. Now, $|e^{v_j} - e^u|$ is uniformly bounded and quasi-continuous, and $$ |e^{v_j} - e^u| - |e^v - e^u| \leq |e^{v_j} - e^v|. $$ Since $e^{v_j} \to e^v$ in capacity, we have \begin{equation} |e^{v_j} - e^u| \to |e^v - e^u| \end{equation} in capacity. By Theorem \ref{thm: main3}, we have that \begin{equation} \label{68} \int_{X} |e^{v} - e^{u}| \langle(\beta + dd^c \rho + dd^c v)^n \rangle = \lim_{j \to +\infty} \int_{X} |e^{v_j} - e^{u}| \langle(\beta + dd^c \rho + dd^c v_j)^n \rangle. \end{equation} Similarly, by the monotone convergence theorem and \eqref{matrix}, \begin{equation} \label{69} \int_{X} |e^{v_j} - e^{u}| \langle(\beta + dd^c \rho + dd^c v_j)^n\rangle = \lim_{k \to +\infty} \int_{X} |e^{v_{j,k}} - e^{u}| \langle(\beta + dd^c \rho + dd^c v_{j,k})^n \rangle = 0. 
\end{equation} Equations \eqref{68} and \eqref{69} imply that $$ \int_{X} |e^{v} - e^{u}| \langle(\beta + dd^c \rho + dd^c v)^n \rangle = 0, $$ thus Theorem \ref{thm: domi_principal} yields that $v \geq u$. From the definition of $v_j$, we have $v_j \leq u_j$, thus $v \leq u$. We conclude that $u = v$. Now, we have $v_j \leq u_j \leq \max(u_j, u)$, $v_j \to v = u$, and $\max(u_j, u) \to u$ in capacity. We infer that $u_j \to u$ in capacity. \end{proof} \begin{proof}[\textbf{End proof of Theorem \ref{thm: maintheorem}}] The proof follows from Theorem 3.3 in \cite{ALS24} using contradiction and applying Lemma \ref{lem: main4}. \end{proof} By Lemma \ref{lem: key1} and Theorem \ref{thm: key}, we have the following result. \begin{thm}\label{thm: maintheorem2} Assume $\rho$ is a bounded $\beta$-psh function, and $u_j \in \mathcal{E}(X, \beta)$ such that $\langle(\beta + dd^c u_j)^n\rangle \leq \mu$ for some non-pluripolar Radon measure $\mu$. If $u_j \to u \in \operatorname{PSH}(X, \beta)$ in $L^1(X)$ and $\{u_j \}_j$ is uniformly bounded above, then $u \in \mathcal{E}(X, \beta)$ and $u_j \to u$ in capacity. \end{thm} To finish this section, we need to make some preparations to prove Corollary \ref{cor: main}, following the approach in \cite{KN22}. \begin{thm}\label{thm: continuity 2} Let $\{u_j\}$ be a uniformly bounded sequence of $\beta$-psh functions. Assume $(\beta + dd^c u_j)^n \leq C(\beta + dd^c \varphi_j)^n$ for some uniformly bounded sequence $\{\varphi_j\} \subset \operatorname{PSH}(X, \beta)$ such that $\varphi_j \to \varphi \in \operatorname{PSH}(X, \beta)$ in capacity. If $u_j \to u \in \operatorname{PSH}(X, \beta) \cap L^{\infty}(X)$ in $L^1(X)$, then $u_j \to u$ in capacity. \end{thm} \begin{proof} This is a direct consequence of Lemma \ref{lem: cap convergence lemma2} and Theorem \ref{thm: cap convergence lemma} below. \end{proof} Similarly, one can obtain the following theorem. \begin{thm}\label{thm: continuity 3} Let $\{u_j\}$ be a uniformly bounded sequence of $\beta$-psh functions. Assume $(\beta + dd^c u_j)^n \leq C(\omega + dd^c \varphi_j)^n$ for some uniformly bounded sequence $\{\varphi_j\} \subset \operatorname{PSH}(X, \omega)$ such that $\varphi_j \to \varphi \in \operatorname{PSH}(X, \omega)$ in capacity. If $u_j \to u \in \operatorname{PSH}(X, \beta) \cap L^{\infty}(X)$ in $L^1(X)$, then $u_j \to u$ in capacity. \end{thm} \begin{proof} This is a direct consequence of \cite[Lemma 2.3]{KN22} and Theorem \ref{thm: cap convergence lemma} below. \end{proof} \begin{defn}[\cite{LWZ24a}] For any Borel set $D \subset X$, the capacity of $D$ with respect to $\beta$ is defined as $$ \mathrm{Cap}_{\beta}(D) := \sup \left\{ \int_{D} (\beta + dd^c h)^n : h \in PSH(X, \beta), \rho \leq h \leq \rho + 1 \right\}. $$ The capacity of $D$ with respect to $\beta + dd^c \rho$ is defined as $$ \mathrm{Cap}_{\beta + dd^c \rho}(D) := \sup \left\{ \int_{D} (\beta + dd^c \rho + dd^c h)^n : h \in PSH(X, \beta + dd^c \rho), 0 \leq h \leq 1 \right\}, $$ and $\mathrm{Cap}_{\beta}(D) = \mathrm{Cap}_{\beta + dd^c \rho}(D)$ by Remark 2.12 of \cite{LWZ24a}. \end{defn} In the sequel, let \[ P_0 := \left\{v \in \operatorname{PSH}(X, \beta) \cap L^{\infty}(X) \mid \sup_X v = 0 \right\} \] be a relatively compact subset in $\operatorname{PSH}(X, \beta)$. Following the proof in \cite{KN22}, we have the following lemma: \begin{lem}\label{lem: convergence} Let $\mu$ be a non-pluripolar Radon measure on $X$. 
Suppose $\{u_j\} \subset P_0$ converges almost everywhere with respect to the Lebesgue measure to $u \in P_0$. Then there exists a subsequence, still denoted by $\{u_j\}$, such that \[ \lim_{j \to \infty} \int_X |u_j - u| \, d\mu = 0. \] \end{lem} \begin{lem}\label{lem: cap convergence lemma2} Let $\{u_j\}$ be as in Theorem \ref{thm: continuity 2}, and let $\{w_j\} \subset P_0$ be a uniformly bounded sequence that converges in capacity to $w \in P_0$. Then, \[ \lim_{j \to \infty} \int_X |u - u_j| (\beta + dd^c w_j)^n = 0. \] \end{lem} \begin{proof} Observe that \[ |u_j - u| = (\max\{u_j, u\} - u_j) + (\max\{u_j, u\} - u). \] Let $\phi_j := \max\{u_j, u\}$ and $v_j := \left(\sup_{k \geq j} \phi_k\right)^*$. We have $\phi_j \geq u$ and $v_j$ decreases to $u$ pointwise. By Hartogs' lemma, for any $\delta > 0$, \[ \operatorname{Cap}_{\beta}\left(\{|\phi_j - u| > \delta\}\right) = \operatorname{Cap}_{\beta}\left(\{\phi_j > u + \delta\}\right) \leq \operatorname{Cap}_{\beta}\left(\{v_j > u + \delta\}\right) \to 0, \] where the last step follows since monotone convergence implies convergence in capacity. Next, fix $\epsilon > 0$. For large $j$, we have: \begin{align*} \int_X (\phi_j - u)(\beta + dd^c w_j)^n & \leq C \int_{\{|\phi_j - u| > \epsilon\}} (\beta + dd^c w_j)^n + \epsilon \int_X (\beta + dd^c w_j)^n \\ & \leq C \operatorname{Cap}_{\beta}(|\phi_j - u| > \epsilon) + C \epsilon, \end{align*} where the first inequality follows since both $u_j$ and $\phi_j$ are uniformly bounded, and $C$ is a constant controlling them. Therefore, \[ \lim_{j \to \infty} \int_X (\phi_j - u)(\beta + dd^c w_j)^n = 0. \] We next turn to the estimate of the second term. For $j > k$, \begin{align*} & \int_X (\phi_j - u_j)(\beta + dd^c w_j)^n - \int_X (\phi_j - u_j)(\beta + dd^c w_k)^n \\ &= \int_X (\phi_j - u_j) dd^c (w_j - w_k) \wedge T \\ &= \int_X (w_j - w_k) dd^c (\phi_j - u_j) \wedge T \\ &\leq \int_X |w_j - w_k| \left[(\beta + dd^c \phi_j) + (\beta + dd^c u_j)\right] \wedge T \\ &\leq C \operatorname{Cap}_{\beta}(|w_j - w_k| > \epsilon) + C \epsilon, \end{align*} where $T = \sum_{s=1}^{n-1} (\beta + dd^c w_j)^s \wedge (\beta + dd^c w_k)^{n-1-s}$ is a closed positive current. The last inequality holds because the integral $\int_X \left[(\beta + dd^c \phi_j) + (\beta + dd^c u_j)\right] \wedge T$ is dominated by $A^n \operatorname{Cap}_{\beta}(X) =A^n \int_X \beta^n$, where $A$ is the uniform bound of $\phi_j$ and $u_j$. Since $w_j \to w$ in capacity, the right-hand side is less than $C \epsilon$ for some uniform constant $C$, provided that $j > k > k_0$ for some large $k_0$. Finally, we can estimate: \begin{align*} \int_X (\phi_j - u_j)(\beta + dd^c w_j)^n &\leq \int_X (\phi_j - u_j)(\beta + dd^c w_k)^n \\ & \quad + \left|\int_X (\phi_j - u_j)(\beta + dd^c w_j)^n - \int_X (\phi_j - u_j)(\beta + dd^c w_k)^n\right| \\ &\leq \int_X |u - u_j| (\beta + dd^c w_j)^n + C \epsilon. \end{align*} Fixing $k = k_0$ and applying Lemma~\ref{lem: convergence}, we obtain the result by letting $j \to \infty$. \end{proof} \begin{thm}\label{thm: cap convergence lemma} Let \( u_j \in \mathrm{PSH}(X, \beta + dd^c \rho) \) be a uniformly bounded sequence converging in \( L^1(X) \) to a \( (\beta + dd^c \rho) \)-psh function \( u \). If \[ \lim_{j \to +\infty} \int_X |u_j - u| \, (\beta + dd^c \rho + dd^c u_j)^n = 0, \] then \( u_j \to u \) in capacity. \end{thm} \begin{proof} Up to extracting a subsequence, we can assume that \[ \int_X |u_j - u| \, (\beta + dd^c \rho + dd^c u_j)^n \leq 2^{-j}, \quad \forall j. 
\] Fix \( j \in \mathbb{N} \) and consider \[ v_{j,k} := P_{\beta + dd^c \rho} \left(\min(u_j, \ldots, u_k)\right), \quad \text{for every } k \geq j. \] The sequence \( (v_{j,k})_k \) decreases to the function \[ v_j := P_{\beta + dd^c \rho} \left(\inf_{k \geq j} u_k \right). \] By Lemma \ref{lem: ineq_3}, we have \[ \int_X |v_{j,k} - u| \, (\beta + dd^c \rho + dd^c v_{j,k})^n \leq \sum_{l=j}^k \int_X |u_l - u| \, (\beta + dd^c \rho + dd^c u_l)^n. \] Since \( v_{j,k} \in \mathrm{PSH}(X, \beta + dd^c \rho) \), \( \{v_{j,k}\}_k \) is uniformly bounded, and \( v_j \) is bounded, with \( v_{j,k} \downarrow v_j \). Thus, by the Bedford-Taylor convergence theorem, \[ (\beta + dd^c \rho + dd^c v_{j,k})^n \to (\beta + dd^c \rho + dd^c v_j)^n \quad \text{as } k \to +\infty. \] We now apply \cite[Lemma 2.5]{DDL23} to show that \begin{equation}\label{convergence_1} \lim_{k \to +\infty} v_{j,k} \, (\beta + dd^c \rho + dd^c v_{j,k})^n = v_j \, (\beta + dd^c \rho + dd^c v_j)^n. \end{equation} Since \( \{u_j\}_j \) is uniformly bounded, so is \( \{v_{j,k}\}_k \). There exists \( c \geq 1 \) such that \[ -c \leq v_{j,k} \leq c \quad \Rightarrow \quad 0 \leq v_{j,k} + c \leq 2c \quad \Rightarrow \quad 0 \leq \frac{1}{2c} v_{j,k} + \frac{1}{2} \leq 1. \] Therefore, there exist constants \( C, C' \leq 1 \) such that for any Borel set \( E \), \begin{align*} C \cdot (\beta + dd^c \rho + dd^c v_{j,k})^n(E) & \leq \left[\beta + dd^c \rho + dd^c (C v_{j,k} + C')\right]^n(E) \\ & \leq \mathrm{Cap}_{\beta + dd^c \rho}(E) \\ & = \mathrm{Cap}_{\beta}(E) \\ & \leq M^n \, \mathrm{Cap}_{\omega}(E), \end{align*} where the last inequality follows from the boundedness of \( \rho \) (see \cite{LWZ24a}). Using a similar argument for \( v_j \), we satisfy the conditions of \cite[Lemma 2.5]{DDL23}. Since \( v_{j,k} \downarrow v_j \) and by Hartogs' lemma, \( v_{j,k} \to v_j \) in capacity, \cite[Lemma 2.5]{DDL23} implies \[ \lim_{k \to +\infty} |v_{j,k} - u| \, (\beta + dd^c \rho + dd^c v_{j,k})^n = |v_j - u| \, (\beta + dd^c \rho + dd^c v_j)^n. \] Therefore, \begin{equation}\label{star} \int_X |v_j - u| \, (\beta + dd^c \rho + dd^c v_j)^n = \lim_{k \to +\infty} \int_X |v_{j,k} - u| \, (\beta + dd^c \rho + dd^c v_{j,k})^n \leq 2^{-j+1}. \end{equation} The sequence \( (v_j)_j \) is non-decreasing. By Choquet's lemma, there exists \( v \in \mathrm{PSH}(X, \beta + dd^c \rho) \) such that \( v_j \to v \) almost everywhere (and in capacity by Hartogs' lemma). Since \( (v_j)_j \) is uniformly bounded, \( v \) is bounded, thus \( v \in \mathcal{E}(X, \beta + dd^c \rho) \). By the Bedford-Taylor convergence theorem, \cite[Lemma 2.5]{DDL23} and using \eqref{star}, \[ \int_X |v - u| \, (\beta + dd^c \rho + dd^c v)^n = 0. \] Since \( u_j \) converges to \( u \) in \( L^1(X) \) and is uniformly bounded, \( u \) is bounded and belongs to \( \mathcal{E}(X, \beta + dd^c \rho) \). We infer that \( v \geq u \) by Theorem \ref{thm: domi_principal}, and since \( v_j \leq u_j \) for all \( j \), it follows that \( u = v \). Since \( v_j \uparrow v \), \( v_j \leq u_j \), and \( v_j \to v \) in capacity, we conclude that \( u_j \to u \) in capacity. This completes the proof. \end{proof} \section{Solving Monge-Amp\`ere equations} Let $(X, \omega)$ be a Hermitian manifold and let $\beta$ be a closed real $(1, 1)$ form with $\int_X\beta^n>0$ and there exists a bounded $\beta$-psh function $\rho$. In this section, we provide the proof of Corollary \ref{cor: main}. For the reader's convenience, we state it as follows. 
\begin{thm}[=Corollary \ref{cor: main}]\label{thm: solve_equation_1}
Fix \(\lambda > 0\). Let \(\mu\) be a positive Radon measure vanishing on pluripolar sets, and assume \(\mu\) is absolutely continuous with respect to $\omega^n$, i.e., can be written as \(\mu = f \omega^n\), where \(f \in L^{1}(\omega^n)\). Then there exists a unique \(u \in \mathcal{E}(X, \beta)\) such that
\[
\langle(\beta + dd^c u)^n \rangle = e^{\lambda u} \mu.
\]
\end{thm}
\begin{proof}
The proof follows the approach of Theorem 4.2 in \cite{ALS24}, using Theorem 1.4 from \cite{LWZ24a}.
\textbf{Step 1.} Assume first that \(f \in L^{\infty}(X)\). By \cite[Theorem 1.4]{LWZ24a}, there exists \(u \in \operatorname{PSH}(X, \beta) \cap L^{\infty}(X)\) such that
\[
(\beta + dd^c u)^n = e^{\lambda u} f \omega^n.
\]
\textbf{Step 2.} For each \(j \in \mathbb{N}^*\), by Step 1, there exists \(u_j \in \operatorname{PSH}(X, \beta) \cap L^{\infty}(X)\) such that
\[
(\beta + dd^c u_j)^n = e^{\lambda u_j} \min(f, j) \omega^n.
\]
The sequence \((u_j)\) is decreasing by Corollary \ref{cor: comparison1}. Set \(u = \lim u_j\). We claim \(u \in \mathcal{E}(X, \beta)\): if \(\sup_{X} u_j \to -\infty\), then
\[
\int_{X} \beta^n = \int_{X} (\beta + dd^c u_j)^n \leq \int_{X} e^{\lambda \sup_{X} u_j} d\mu \to 0,
\]
which contradicts the fact that \(\int_X \beta^n > 0\). Hence, \(u \in \operatorname{PSH}(X, \beta)\), and \(u_j \to u\) in \(L^1(X)\) by the compactness properties of quasi-psh functions. Since \((\beta + dd^c u_j)^n \leq e^{\lambda u_1} \mu\) for \(j \geq 1\), Theorem \ref{thm: maintheorem2} implies \(u \in \mathcal{E}(X, \beta)\). The convergence \((\beta + dd^c u_j)^n \to \langle(\beta + dd^c u)^n\rangle \) weakly follows from Theorem \ref{thm: main3}. Applying the Dominated Convergence Theorem, we conclude
\[
(\beta + dd^c u_j)^n = e^{\lambda u_j} \min(f, j) \omega^n \to e^{\lambda u} f \omega^n.
\]
Thus, \( \langle(\beta + dd^c u)^n\rangle = e^{\lambda u} \mu\). Uniqueness follows from Corollary \ref{cor: comparison1}.
\end{proof}
\begin{rem}
If we define \(\mathcal{E}^1(X, \beta)\) as
\[
\mathcal{E}^1(X, \beta) := \left\{u \in \mathcal{E}(X, \beta) \mid \int_X u \langle(\beta + dd^c u)^n \rangle > -\infty \right\},
\]
then the solution \(u\) obtained in the above theorem actually lies in \(\mathcal{E}^1(X, \beta)\). This follows from the fact that \(u e^{\lambda u}\) is bounded, so both sides of the equation can be multiplied by \(u\) and integrated.
\end{rem}
\begin{rem}
The Monge-Amp\`ere energy of \(u \in \operatorname{PSH}(X, \beta)\) is defined as
\[
I_{\rho}(u) := \frac{1}{n+1} \sum_{k=0}^n \int_X u \langle(\beta + dd^c u)^k\rangle \wedge (\beta + dd^c \rho)^{n-k}.
\]
The class \(\mathcal{E}^1(X, \beta)\) consists precisely of functions in \(\operatorname{PSH}(X, \beta)\) with finite Monge-Amp\`ere energy. Since we do not use this fact in the paper, we omit the proof.
\end{rem}
\begin{cor}
Let \(\mu\) be a positive Radon measure vanishing on pluripolar sets, and assume \(\mu\) is absolutely continuous with respect to $\omega^n$, i.e., can be written as \(\mu = f \omega^n\), where \(f \in L^{1}(\omega^n)\). Then there exist a unique \(c > 0\) and a function \(u \in \mathcal{E}(X, \beta)\) such that
\[
\langle(\beta + dd^c u)^n \rangle = c \mu.
\]
\end{cor}
\begin{proof}
By Theorem \ref{thm: solve_equation_1}, for each \(j \geq 1\), there exists \(u_j \in \mathcal{E}(X, \beta)\) such that
\[
\langle(\beta + dd^c u_j)^n \rangle = e^{u_j / j} \mu.
\]
Setting \(v_j = u_j - \sup_{X} u_j\), we write
\[
\langle(\beta + dd^c v_j)^n \rangle = c_j e^{v_j / j} \mu
\]
for some positive number \(c_j\). By the compactness properties of quasi-psh functions, we can extract a subsequence \(v_j \to v \in \operatorname{PSH}(X, \beta)\) in \(L^1(X)\) and almost everywhere. Since \(e^{v_j / j} \mu \to \mu\) and \((c_j)\) is uniformly bounded, we obtain
\[
\langle(\beta + dd^c v_j)^n \rangle \to c \mu
\]
for some \(c > 0\). Finally, by Theorem \ref{thm: maintheorem2}, \(v \in \mathcal{E}(X, \beta)\), and Theorem \ref{thm: main3} ensures weak convergence of measures. Uniqueness of \(c\) follows from Corollary \ref{cor: comparison2}.
\end{proof}
\section{An $L^{\infty}$ a priori estimate}
Let $(X, \omega)$ be a Hermitian manifold and let $\beta$ be a closed real $(1, 1)$ form such that $\int_X\beta^n>0$ and there exists a bounded $\beta$-psh function $\rho$. In this section we establish the $L^{\infty}$ a priori estimate (Theorem \ref{thm: linf est}) for Monge-Amp\`ere equations in our setting. We follow the method of \cite{GL21}. The resulting uniform bound does not depend on the bound in Skoda's uniform integrability as obtained in \cite{LWZ24a}, but rather on the $L^m$ supremum over a compact subset of $\text{PSH}(X,\beta)$. We cannot make use of this fact in this paper, as it is still hard to verify the uniform boundedness needed to apply Theorem \ref{thm: continuity 3} when solving equations, but we believe that it will be useful in the future. Following the lines of the proof of \cite[Lemma 1.7]{GL21}, we obtain the following lemma:
\begin{lem}\label{lem: concave weight}
Fix a concave increasing function $\chi:\mathbb{R}^-\rightarrow\mathbb{R}^-$ such that $\chi'(0)\geq 1$. Let $\varphi\leq0$ be a bounded $\beta_{\rho}$-psh function and $\psi=\chi\circ\varphi$. Then
$$
(\beta_{\rho}+dd^cP_{\beta+dd^c\rho}(\psi))^n\leq1_{\{P_{\beta+dd^c\rho}(\psi)=\psi\}}(\chi'\circ\varphi)^n(\beta_{\rho}+dd^c\varphi)^n.
$$
\end{lem}
\begin{proof}
Set $\sigma:=\chi^{-1}:\mathbb{R}^-\rightarrow\mathbb{R}^-$; it is easy to see that $\sigma$ is a convex increasing function such that $\sigma'\leq 1$. Let $\eta:=P_{\beta+dd^c\rho}(\psi)$ and $v:=\sigma\circ\eta$.
\begin{align*}
\beta_{\rho}+dd^cv=&\beta_{\rho}+\sigma''\circ\eta d\eta\wedge d^c\eta+\sigma'\circ\eta dd^c\eta\\
\geq&(1-\sigma'\circ\eta)\beta_{\rho}+\sigma'\circ\eta(\beta_{\rho}+dd^c\eta)\\
\geq&\sigma'\circ\eta(\beta_{\rho}+dd^c\eta)\geq0.
\end{align*}
It follows that $v$ is $(\beta+dd^c\rho)$-psh and $(\beta_{\rho}+dd^c\eta)^n\leq1_{\{\eta=\psi\}}(\sigma'\circ\eta)^{-n}(\beta_{\rho}+dd^cv)^n$. On the contact set $\{\eta=\psi\}$ we have
$$
\sigma'\circ\eta=\sigma'\circ\psi=(\chi'\circ\varphi)^{-1}.
$$
We also have $v\leq\sigma\circ\psi=\varphi$ quasi everywhere, hence everywhere on $X$. By Corollary \ref{cor: comparison} we obtain that $(\beta_{\rho}+dd^cv)^n\leq(\beta_{\rho}+dd^c\varphi)^n$ and conclude the proof.
\end{proof}
\begin{thm}[=Theorem \ref{thm: linf est}]\label{thm: a prior estimate}
Let $\mu$ be a finite positive Radon measure on $X$ such that $\text{PSH}(X,\beta+dd^c\rho)\subset L^m(\mu)$ for some $m>n$ and $\mu(X)=\int_X\beta^n$. Then any solution $\varphi\in \text{PSH}(X,\beta+dd^c\rho)\cap L^{\infty}(X)$ to $(\beta_{\rho}+dd^c\varphi)^n=\mu$ satisfies
$$
\operatorname{Osc}_X(\varphi)\leq T
$$
for some uniform constant $T$ which only depends on $X$, $\beta$, and
$$
A_m(\mu):=\sup\left\{\left(\int_X(-\psi)^m d\mu\right)^{\frac{1}{m}}: \psi\in \text{PSH}(X,\beta_{\rho})\ \text{with}\ \sup_X\psi=0\right\}.
$$
\end{thm}
\begin{rem}
As noted in \cite{GL21}, this theorem applies to measures $\mu=f\omega^n$ for $0\leq f\in L^p(X)$ with $p>1$. It also applies to more general densities, such as Orlicz weights. We refer the reader to \cite{GL21} and \cite{GL22} for more details.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm: a prior estimate}]
The proof follows \cite[Theorem 2.2]{GL22}. We can assume without loss of generality that $\sup_X\varphi=0$ and $\mu(X)=\int_X\beta^n=1$. Set $T_{max}:=\sup\{t>0:\mu(\varphi<-t)>0\}$. Then we have $\mu(\varphi<-T_{max})=0$, that is, $\varphi\geq -T_{max}$ $\mu$-a.e.; since $\mu=(\beta_{\rho}+dd^c\varphi)^n$, the domination principle (Theorem \ref{thm: domi_principal}) implies that $\varphi\geq-T_{max}$ everywhere. We only need to establish the bound for $T_{max}$. Let $\chi:\mathbb{R}^-\rightarrow\mathbb{R}^-$ be a concave increasing weight such that $\chi(0)=0$ and $\chi'(0)=1$. Set $\psi:=\chi\circ\varphi$ and $u:=P_{\beta+dd^c\rho}(\psi)$. By Lemma \ref{lem: concave weight} we have
$$
(\beta_{\rho}+dd^cu)^n\leq1_{\{u=\psi\}}(\chi'\circ\varphi)^n\mu.
$$
\textbf{Step 1.} We first control the energy of $u$. Set $\epsilon:=(m-n)/3>0$, so that $m=n+3\epsilon$. Since $\chi(0)=0$ and $\chi$ is concave, we have $|\chi(t)|\leq |t|\chi'(t)$. We can thus estimate the energy:
\begin{align*}
\int_X(-u)^{\epsilon}(\beta_{\rho}+dd^cu)^n\leq&\int_X(-\chi\circ\varphi)^{\epsilon}(\chi'\circ\varphi)^nd\mu\leq\int_X(-\varphi)^{\epsilon}(\chi'\circ\varphi)^{n+\epsilon}d\mu\\
\leq&\left(\int_X(-\varphi)^{n+2\epsilon}d\mu\right)^{\frac{\epsilon}{n+2\epsilon}}\left(\int_X(\chi'\circ\varphi)^{n+2\epsilon}d\mu\right)^{\frac{n+\epsilon}{n+2\epsilon}}\\
\leq&A_m(\mu)^{\epsilon}\left(\int_X(\chi'\circ\varphi)^{n+2\epsilon}d\mu\right)^{\frac{n+\epsilon}{n+2\epsilon}}.
\end{align*}
Here the first inequality follows since $u=\chi\circ\varphi$ on the support of $(\beta_{\rho}+dd^cu)^n$, the second follows from the fact that $|\chi(t)|\leq |t|\chi'(t)$, the third follows from H\"older's inequality, and the last uses the definition of $A_m(\mu)$.
\textbf{Step 2.} We choose an appropriate weight $\chi$ such that $\int_X(\chi'\circ\varphi)^{n+2\epsilon}d\mu\leq 2$. We first fix $T\in[0,T_{max})$. Recall that if $g:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is an increasing, absolutely continuous function (on $[0,T]$) with $g(0)=1$, then
$$
\int_Xg\circ(-\varphi)d\mu=\mu(X)+\int_0^{T_{max}}g'(t)\mu(\varphi<-t)dt.
$$
Set $f(t):=\begin{cases} \frac{1}{(1+t)^2\mu(\varphi<-t)}, & \text{if } t \in [0,T], \\ \frac{1}{(1+t)^2}, & \text{if } t >T; \end{cases}$ then $f$ is an $L^1$ function. Set $g(x):=\int_0^xf(t)dt+1$. It is easy to check that $g$ is absolutely continuous on $[0,T]$ and $g'(t)=f(t)$ almost everywhere (at least at every Lebesgue point). Now let $-\chi(-x):=\int_0^xg(t)^{\frac{1}{n+2\epsilon}}dt$; then $\chi(0)=0$, $\chi'(0)=1$, and $g(t)=(\chi'(-t))^{n+2\epsilon}$. Note that this equality holds everywhere since $g$ is continuous and $g\geq1$. This choice guarantees that $\chi$ is concave increasing with $\chi'\geq1$, and, using the identity recalled above,
$$
\int_X(\chi'\circ\varphi)^{n+2\epsilon}d\mu\leq \mu(X)+\int_0^{+\infty}\frac{1}{(1+t)^2}dt\leq 2.
$$
\textbf{Step 3.} We next turn to estimate the level set $\mu(\varphi<-t)$.
Using Step 2, we can establish a uniform lower bound for $\sup_X u$ as follows:
\begin{align*}
(-\sup_X u)^{\epsilon}=&(-\sup_X u)^{\epsilon}\int_X(\beta_{\rho}+dd^cu)^n\\
\leq&\int_X(-u)^{\epsilon}(\beta_{\rho}+dd^cu)^n\\
\leq&2A_m(\mu)^{\epsilon}.
\end{align*}
This implies that $0\geq\sup_X u\geq -2^{\frac{1}{\epsilon}}A_m(\mu)$. We infer from this that
$$
||u||_{L^m(\mu)}\leq(1+2^{\frac{1}{\epsilon}})A_m(\mu)=:B.
$$
Indeed, setting $c:=-\sup_X u\leq2^{\frac{1}{\epsilon}}A_m(\mu)$, we have $\sup_X(u+c)=0$, hence by definition $||u+c||_{L^m(\mu)}\leq A_m(\mu)$, and thus $||u||_{L^m(\mu)}\leq c+A_m(\mu)\leq(1+2^{\frac{1}{\epsilon}})A_m(\mu)$ by the Minkowski inequality. From $u\leq\chi\circ\varphi\leq0$ quasi everywhere (notice that by our assumption $\mu$ does not charge pluripolar sets), we deduce that $||\chi\circ\varphi||_{L^m(\mu)}\leq ||u||_{L^m(\mu)}\leq B$. We thus have
\begin{align*}
\mu(\varphi<-t)=&\int_{\{\varphi<-t\}}d\mu\leq\int_X\frac{|\chi\circ\varphi|^m}{|\chi(-t)|^m}d\mu\\
=&\frac{1}{|\chi(-t)|^m}||\chi\circ\varphi||_{L^m(\mu)}^m\leq \frac{B^m}{|\chi(-t)|^m}.
\end{align*}
\textbf{Step 4.} Conclusion. Set $h(t):=-\chi(-t)$. We have $h(0)=0$, and $h'(t)=[g(t)]^{\frac{1}{n+2\epsilon}}$ is positive and increasing, hence $h$ is convex. Observe also that since $g(t)\geq g(0)\geq1$, we have $h'(t)\geq1$ and $h(1)=\int_0^1h'(t)dt\geq 1$. For almost all $t\in[0,T]$,
$$
\frac{1}{(1+t)^2g'(t)}=\mu(\varphi<-t)\leq \frac{B^m}{h^m(t)}.
$$
This implies that
$$
h^m(t)\leq B^m(1+t)^2g'(t)=(n+2\epsilon)B^m(1+t)^2h''(t)(h')^{n+2\epsilon-1}(t)
$$
almost everywhere. Multiplying both sides by $h'$, integrating between $0$ and $t$, and using $(1+s)^2\leq(1+t)^2$ for $s\in[0,t]$, we obtain
\begin{align*}
\frac{h^{m+1}(t)}{m+1}\leq&(n+2\epsilon)B^m(1+t)^2\int_0^th''(s)(h')^{n+2\epsilon}(s)ds\\
\leq&\frac{(n+2\epsilon)B^m(1+t)^2}{n+2\epsilon+1}((h')^{n+2\epsilon+1}(t)-1)\\
\leq&B^m(1+t)^2(h')^{n+2\epsilon+1}(t).
\end{align*}
Note that the first and second inequalities hold because $h$ and $h'$ are absolutely continuous, so we may differentiate and integrate. Recall that $\alpha:=m+1=n+3\epsilon+1>\gamma:=n+2\epsilon+1>2$. The previous inequality then reads $(1+t)^{-\frac{2}{\gamma}}\leq Ch'(t)h(t)^{-\frac{\alpha}{\gamma}}$ for some uniform constant $C$ depending only on $n$, $m$, and $B$. Integrating both sides between $1$ and $T$, we get
$$
\frac{(1+T)^{1-\frac{2}{\gamma}}}{1-\frac{2}{\gamma}}-\frac{2^{1-\frac{2}{\gamma}}}{1-\frac{2}{\gamma}}\leq C\int_1^T\frac{(h^{1-\frac{\alpha}{\gamma}})'}{1-\frac{\alpha}{\gamma}}\,dt\leq C_1h(1)^{1-\frac{\alpha}{\gamma}}\leq C_1
$$
since $1-\frac{\alpha}{\gamma}<0$ and $h(1)\geq 1$. From this we deduce that $T$ can be controlled by a constant $C_2$ which depends only on $A_m(\mu)$. Finally, letting $T\rightarrow T_{max}$, we conclude the proof.
\end{proof}
\begin{thebibliography}{10}
\bibitem[ALS24]{ALS24} O. Alehyane, C. Lu, and M. Salouf, Degenerate complex Monge-Amp\`ere equations on some compact Hermitian manifolds, J. Geom. Anal. {\bf 34}, 320 (2024). https://doi.org/10.1007/s12220-024-01772-w.
\bibitem[BT82]{BT82} E. Bedford and B.~A. Taylor, A new capacity for plurisubharmonic functions, Acta Math. {\bf 149} (1982), no.~1-2, 1--40; MR0674165
\bibitem[BT87]{BT87} E. Bedford and B.~A. Taylor, Fine topology, \v Silov boundary, and $(dd^c)^n$, J. Funct. Anal. {\bf 72} (1987), no.~2, 225--251; MR0886812
\bibitem[BDPP13]{BDPP13} S. Boucksom, J.-P. Demailly, M. P\u aun and T.
Peternell, The pseudo-effective cone of a compact K\"ahler manifold and varieties of negative Kodaira dimension, J. Algebraic Geom. {\bf 22} (2013), no.~2, 201--248; MR3019449 \bibitem[BEGZ10]{BEGZ10} S. Boucksom, P. Eyssidieux, V. Guedj and A. Zeriahi, Monge-Amp\`ere equations in big cohomology classes. Acta Math. \bf205\rm (2010), no. 2, 199--262. \bibitem[DDL23]{DDL23} T. Darvas, E. Di Nezza and C. Lu, Relative pluripotential theory on compact K\" ahler manifolds, arXiv:2303.11584. \bibitem[CK06]{CK06}U. Cegrell and S. Ko\l odziej, The equation of complex Monge-Amp\`ere type and stability of solutions, Math. Ann. {\bf 334} (2006), no.~4, 713--729; MR2209253 \bibitem[DeP04]{DeP04} J.-P. Demailly and M. P\u aun, Numerical characterization of the K\"ahler cone of a compact K\"ahler manifold, Ann. of Math. (2) {\bf 159} (2004), no.~3, 1247--1274; MR2113021 \bibitem[DP10]{DP10} J.-P. Demailly and N. Pali, Degenerate complex Monge-Amp\`ere equations over compact K\"ahler manifolds, Internat. J. Math. {\bf 21} (2010), no.~3, 357--405; MR2647006 \bibitem[DP12]{DP12} S. Dinew and H.~H. Ph\d am, Convergence in capacity on compact K\"ahler manifolds, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) {\bf 11} (2012), no.~4, 903--919; MR3060705 \bibitem[DV22]{DV22} H. Do and D. Vu, Quantitative stability for the complex Monge-Amp\`ere equations, arXiv:2209.00248. \bibitem[EGZ08]{EGZ08} P. Eyssidieux, V. Guedj and A. Z\'eriahi, A priori $L^\infty$-estimates for degenerate complex Monge-Amp\`ere equations, Int. Math. Res. Not. IMRN {\bf 2008}, Art. ID rnn 070, 8 pp.; MR2439574 \bibitem[EGZ09]{EGZ09} P. Eyssidieux, V. Guedj and A. Z\'eriahi, Singular K\"ahler-Einstein metrics, J. Amer. Math. Soc. {\bf 22} (2009), no.~3, 607--639; MR2505296 \bibitem[GL21]{GL21} V. Guedj and C. Lu, Quasi-plurisubharmonic envelopes 1: Uniform estimates on K\"ahler manifolds J. Eur. Math. Soc, 2024. http://DOI 10.4171/JEMS/1460. \bibitem[GL22]{GL22} V. Guedj and C.~H. Lu, Quasi-plurisubharmonic envelopes 2: Bounds on Monge-Amp\`ere volumes, Algebr. Geom. {\bf 9} (2022), no.~6, 688--713; MR4518244 \bibitem[GL23]{GL23} V. Guedj and C.~H. Lu, Quasi-plurisubharmonic envelopes 3: Solving Monge-Amp\`ere equations on hermitian manifolds, J. Reine Angew. Math. {\bf 800} (2023), 259--298; MR4609828 \bibitem[GLZ19]{GLZ19} V. Guedj, C.~H. Lu and A. Z\'eriahi, Plurisubharmonic envelopes and supersolutions, J. Differential Geom. {\bf 113} (2019), no.~2, 273--313; MR4023293 \bibitem[GPT23]{GPT23} B. Guo, D.~H. Phong and F. Tong, On $L^\infty$ estimates for complex Monge-Amp\`ere equations, Ann. of Math. (2) {\bf 198} (2023), no.~1, 393--418; MR4593734 \bibitem[GT23]{GT23} V. Guedj and A. Trusiani, K\"ahler-Einstein metrics with hpositive curvature near an isolated log terminal singularity, arXiv: 2306.07900. \bibitem[GZ07]{GZ07} V. Guedj and A. Z\'eriahi, The weighted Monge-Amp\`ere energy of quasiplurisubharmonic functions, J. Funct. Anal. {\bf 250} (2007), no.~2, 442--482; MR2352488 \bibitem[GZ17-1]{GZ17} V. Guedj and A. Z\'eriahi, Regularizing properties of the twisted K\"ahler-Ricci flow, J. Reine Angew. Math. {\bf 729} (2017), 275--304; MR3680377 \bibitem[GZ17-2]{GZ172} V. Guedj and A. Z\'eriahi, {\it Degenerate complex Monge-Amp\`ere equations}, EMS Tracts in Mathematics, 26, Eur. Math. Soc., Z\"urich, 2017; MR3617346 \bibitem[Ko98]{Ko98} S. Ko\l odziej, The complex Monge-Amp\`ere equation, Acta Math. {\bf 180} (1998), no.~1, 69--117; MR1618325 \bibitem[KH09]{KH09} Nguyen~V\u an~Khu\^e{} and H.~H. 
Ph\d am, A comparison principle for the complex Monge-Amp\`ere operator in Cegrell's classes and applications, Trans. Amer. Math. Soc. {\bf 361} (2009), no.~10, 5539--5554; MR2515822 \bibitem[KN23a]{KN23a} S. Ko\l odziej and N.~C. Nguyen, The Dirichlet problem for the Monge-Amp\`ere equation on Hermitian manifolds with boundary, Calc. Var. Partial Differential Equations {\bf 62} (2023), no.~1, Paper No. 1, 39 pp.; MR4505144 \bibitem[KN23b]{KN23b} S. Ko\l odziej and N.~C. Nguyen, Weak solutions to Monge-Amp\`ere type equations on compact Hermitian manifold with boundary, J. Geom. Anal. {\bf 33} (2023), no.~1, Paper No. 15, 20 pp.; MR4502776 \bibitem[KN22]{KN22} Kolodziej S, Nguyen N C. Weak convergence of Monge-Amp\`ere measures on compact Hermitian manifolds, arXiv:2212.11550, 2022. \bibitem[LLZ24]{LLZ24}Y. Li, G. Lin, and X. Zhou, Monge-Amp\`ere equation on compact Hermitian manifolds, arXiv:2311.14958. \bibitem[LWZ24a]{LWZ24a} Y. Li, Z. Wang and X. Zhou, Degenerate complex Monge-Amp\`ere type equations on compact Hermitian manifolds and applications, Trans. Amer. Math. Soc. {\bf 377} (2024), no.~8, 5947--5992; MR4771241 \bibitem[LWZ25]{LWZ24b} Y. Li, Z. Wang and X. Zhou, On the Range of a class of Complex Monge-Amp\`ere operators on compact Hermitian manifolds, J. Funct. Anal. {\bf 288} (2025), 110787, 1--26. \bibitem[P08]{P08} H.~H. Ph\d am, On the convergence in capacity on compact Kahler manifolds and its applications, Proc. Amer. Math. Soc. {\bf 136} (2008), no.~6, 2007--2018; MR2383507 \bibitem[X09]{X09} Y. Xing, Continuity of the complex Monge-Amp\`ere operator on compact K\"ahler manifolds, Math. Z. {\bf 263} (2009), no.~2, 331--344; MR2534121 \bibitem[Yau78]{Yau78} S.-T. Yau, On the Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation. I, Comm. Pure Appl. Math. {\bf 31} (1978), no.~3, 339--411; MR0480350 \end{thebibliography} \end{document}
2412.11528v1
http://arxiv.org/abs/2412.11528v1
Water Cells in Compositions of 1s and 2s
\documentclass[11pt]{amsart} \usepackage{amssymb,latexsym} \usepackage{graphicx} \usepackage{url} \newcommand{\R}{{\mathbb R}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\C}{{\mathbb C}} \newcommand{\N}{{\mathbb N}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\minus}{\scalebox{0.75}[1.0]{$-$}} \newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)} \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{theorem}[thm]{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{example}[thm]{Example} \newtheorem{definition}[thm]{Definition} \newtheorem{proposition}[thm]{Proposition} \newtheorem{remark}[thm]{Remark} \begin{document} \title{Water Cells in Compositions of 1s and 2s} \author{Brian Hopkins} \address{Saint Peter's University\\ Jersey City, New Jersey\\ 07306, USA} \email{[email protected]} \author{Aram Tangboonduangjit} \address{Mahidol University\\Mahidol University International College\\ Nakhon Pathom\\ 73170, Thailand} \email{[email protected]} \begin{abstract} Mansour and Shattuck introduced the notion of water cells for integer compositions in 2018. We focus on compositions with parts restricted to 1 and 2 and consider the array of counts for such compositions of $n$ with $k$ water cells, establishing generating functions for the columns and diagonal sums, recurrences within the array in the spirit of Pascal's lemma, and connections to other restricted compositions. Most of our proofs are combinatorial, but we also make connections to Riordan arrays. \end{abstract} \maketitle \section{Introduction and background} Given an integer $n \ge 1$, a composition of $n$ is an ordered collection $(c_1, \ldots, c_t)$ of positive integers such that $\sum_i c_i = n$. We write $C(n)$ for the set of compositions of $n$ and $c(n) = |C(n)|$ for their count. For example, \[ C(4) = \{(4), (3,1), (2,2), (2,1,1), (1,3), (1,2,1), (1,1,2), (1,1,1,1)\}\] and $c(4) = 8$. We sometimes use superscripts to denote repetition, e.g., writing $(1^4)$ rather than $(1,1,1,1)$. By convention, $c(0) = 1$ counting the empty composition. MacMahon proved $c(n) = 2^{n-1}$ for $n \ge 1$ by a combinatorial argument \cite{m15} equivalent to the following. Represent a composition of $n$ as a tiling of a $1 \times n$ board where a part $k$ corresponds to a $1 \times k$ rectangle. The composition is determined by $n-1$ binary choices we call cut and join: A cut \texttt{C} at a juncture separates two parts while a join \texttt{J} is internal to a part with $k \ge 2$. Figure \ref{fig0} shows this representation of the composition $(3,1)$ which has cut-join sequence \texttt{JJC}. Further, MacMahon defined the conjugate of a composition obtained by switching the cuts and joins; Figure \ref{fig0} also shows the tiling representation of $(1,1,2)$ with cut-join sequence \texttt{CCJ}, the conjugate of $(3,1)$. 
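As a small computational aside (not part of MacMahon's original argument), the cut-join encoding and conjugation are easy to experiment with. The following Python sketch, with helper names of our own choosing, rebuilds the conjugate of a composition by swapping cuts and joins and reproduces the example of Figure \ref{fig0}.
\begin{verbatim}
def cut_join(comp):
    # 'J' marks a juncture inside a part, 'C' a juncture between parts
    seq = []
    for part in comp:
        seq.extend(['J'] * (part - 1) + ['C'])
    return seq[:-1]          # there is no juncture after the last cell

def from_cut_join(seq):
    comp, run = [], 1
    for s in seq:
        if s == 'J':
            run += 1          # extend the current part
        else:
            comp.append(run)  # a cut closes the current part
            run = 1
    return comp + [run]

def conjugate(comp):
    # MacMahon's conjugate: swap cuts and joins
    return from_cut_join(['C' if s == 'J' else 'J' for s in cut_join(comp)])

assert cut_join([3, 1]) == ['J', 'J', 'C']
assert conjugate([3, 1]) == [1, 1, 2] and conjugate([1, 1, 2]) == [3, 1]
\end{verbatim}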
\begin{figure}[ht] \begin{center} \begin{picture}(200,40) \setlength{\unitlength}{2pt} \multiput(10,10.5)(0,2){5}{\line(0,1){1}} \multiput(20,10.5)(0,2){5}{\line(0,1){1}} \multiput(90,10.5)(0,2){5}{\line(0,1){1}} \thicklines \put(0,10){\line(1,0){40}} \put(0,20){\line(1,0){40}} \put(0,10){\line(0,1){10}} \put(30,10){\line(0,1){10}} \put(40,10){\line(0,1){10}} \put(60,10){\line(1,0){40}} \put(60,20){\line(1,0){40}} \put(60,10){\line(0,1){10}} \put(70,10){\line(0,1){10}} \put(80,10){\line(0,1){10}} \put(100,10){\line(0,1){10}} \put(8.5,1){\texttt{J}} \put(18.5,1){\texttt{J}} \put(28.5,1){\texttt{C}} \put(68.5,1){\texttt{C}} \put(78.5,1){\texttt{C}} \put(88.5,1){\texttt{J}} \end{picture} \caption{The tiling representations of compositions $(3,1)$ with cut-join sequence \texttt{JJC} and its conjugate $(1,1,2)$ with cut-join sequence \texttt{CCJ}.} \label{fig0} \end{center} \end{figure} We will also reference partitions of $n$ which, in contrast to compositions, are unordered collections of parts. For example, \[P(4) = \{(4), (3,1), (2,2), (2,1,1), (1,1,1,1)\}\] where we follow the convention of listing parts in nonincreasing order. A generalized notion of partitions allows for parts to appear in multiple ``colors.'' These colored parts are also unordered. For example, the partitions of 4 where there are two colors of 2 available, denoted by subscripts, are \[\{(4), (3,1), (2_1, 2_1), (2_1, 2_2), (2_2, 2_2), (2_1, 1, 1), (2_2, 1, 1), (1,1,1,1)\};\] notice that $(2_2, 2_1)$ is not listed as it is equivalent to the partition $(2_1, 2_2)$. Let $C_{12}(n)$ denote the compositions of $n$ whose parts are all in the set $\{1,2\}$. From above, we have $c_{12}(4) = |C_{12}(4)| = 5$. It has been known since ancient India \cite{s85} that $c_{12}(n) = F_{n+1}$ where $F_n$ denotes the Fibonacci numbers defined as $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \ge 2$. In 2018, Mansour and Shattuck introduced the notion of water cells of compositions based on a different graphical representation \cite{ms18}. The bargraph representation of a composition $(c_1, \ldots, c_t)$ consists of $t$ columns starting from a horizontal line where column $i$ consists of $c_i$ square cells. A water cell is a square outside of the bargraph representation that would ``hold water'' poured over the shape. Figure \ref{fig1} shows the bargraph for the composition $(1,2,1,4,2,4,1,2,1,3) \in C(21)$ which has 8 water cells. See also Blecher, Brennan, and Knopfmacher \cite{bbk18}, who seem to have come upon the same notion independently. \begin{figure}[ht] \begin{center} \begin{picture}(210,80) \setlength{\unitlength}{2pt} \def\ccc{\put(0,0){\line(1,0){10}}\put(10,0){\line(0,1){10}} \put(10,10){\line(-1,0){10}}\put(0,10){\line(0,-1){10}}} \def\bbb{\multiput(0.75,1.5)(0,2){5}{\line(1,0){8.5}}} \put(0,0){ \multiput(0,0)(0,10){1}{\ccc} \multiput(10,0)(0,10){2}{\ccc} \multiput(20,0)(0,10){1}{\ccc} \multiput(30,0)(0,10){4}{\ccc} \multiput(40,0)(0,10){2}{\ccc} \multiput(50,0)(0,10){4}{\ccc} \multiput(60,0)(0,10){1}{\ccc} \multiput(70,0)(0,10){2}{\ccc} \multiput(80,0)(0,10){1}{\ccc} \multiput(90,0)(0,10){3}{\ccc} \multiput(20,10)(0,10){1}{\bbb} \multiput(40,20)(0,10){2}{\bbb} \multiput(60,10)(0,10){2}{\bbb} \multiput(70,20)(0,10){1}{\bbb} \multiput(80,10)(0,10){2}{\bbb} } \end{picture} \caption{The bargraph of $(1,2,1,4,2,4,1,2,1,3)$ and its water cells.} \label{fig1} \end{center} \end{figure} In this work, we restrict our attention to the compositions $C_{12}(n)$. 
Let $W(n,k)$ be the compositions in $C_{12}(n)$ with exactly $k$ water cells and $w(n,k) = |W(n,k)|$. (Because we are only considering compositions using parts 1 and 2, the more precise notation $W_{12}(n,k)$ is unnecessary here.) Figure \ref{fig2} provides small examples of these compositions: $(2,1,2) \in C_{12}(5)$ is the unique smallest composition with one water cell, while $(2,1,1,2) \in C_{12}(6)$ and $(2,1,2,1,2) \in C_{12}(8)$ are among the compositions with two water cells.
\begin{figure}[ht] \begin{center} \begin{picture}(320,70) \setlength{\unitlength}{2pt} \def\ccc{\put(0,0){\line(1,0){10}}\put(10,0){\line(0,1){10}} \put(10,10){\line(-1,0){10}}\put(0,10){\line(0,-1){10}}} \def\bbb{\multiput(0.75,1.5)(0,2){5}{\line(1,0){8.5}}} \put(0,10){ \multiput(0,0)(0,10){2}{\ccc} \multiput(10,0)(0,10){1}{\ccc} \multiput(20,0)(0,10){2}{\ccc} \multiput(50,0)(0,10){2}{\ccc} \multiput(60,0)(0,10){1}{\ccc} \multiput(70,0)(0,10){1}{\ccc} \multiput(80,0)(0,10){2}{\ccc} \multiput(110,0)(0,10){2}{\ccc} \multiput(120,0)(0,10){1}{\ccc} \multiput(130,0)(0,10){2}{\ccc} \multiput(140,0)(0,10){1}{\ccc} \multiput(150,0)(0,10){2}{\ccc} \multiput(10,10)(0,10){1}{\bbb} \multiput(60,10)(0,10){1}{\bbb} \multiput(70,10)(0,10){1}{\bbb} \multiput(120,10)(0,10){1}{\bbb} \multiput(140,10)(0,10){1}{\bbb} \put(11.5,-8){(a)} \put(66.5,-8){(b)} \put(131.5,-8){(c)} } \end{picture} \caption{The bargraphs of (a) $(2,1,2)$ with one water cell, (b) $(2,1,1,2)$ and (c) $(2,1,2,1,2)$ each with two water cells.} \label{fig2} \end{center} \end{figure}
Narrowing a very general result of Mansour and Shattuck \cite[Theorem 2]{ms18} to our context gives the generating function
\begin{equation}
\sum w(n,k) q^n z^k = \frac{1}{1-q} + \frac{q^2(1-zq)}{(1-q)^2(1-zq-q^2)}. \label{wcgf}
\end{equation}
Note that with $z=1$ this reduces to $1/(1-q-q^2)$, a generating function for $F_{n+1}$. In Section 2, we consider the array of numbers $w(n,k)$ and establish recurrences in the resulting triangle of numbers and also for its diagonal sums. We also give bijections to other classes of restricted compositions as well as certain partitions with colored parts. Although most of our arguments are combinatorial, in Section 3 we explain connections to Riordan arrays and use them to complete one proof. These structures, defined in 1991 by Shapiro and collaborators \cite{sgww91}, succinctly describe certain number triangles and allow for simple determination of row sums, alternating row sums, diagonal sums, etc. Those unfamiliar with the Riordan group (there is a group structure for these arrays) are invited to read a recent survey article \cite{dfs24} by Shapiro et al.
\section{Water cells in $C_{12}(n)$}
Table \ref{wtab} provides the values $w(n,k)$, the number of compositions in $C_{12}(n)$ with exactly $k$ water cells, for small $n$ and $k$. As mentioned for Figure \ref{fig2}, the smallest composition with one water cell is $(2,1,2) \in C_{12}(5)$, thus the $k = 1$ column has its first nonzero value in row $n = 5$. Because we are partitioning the compositions of $C_{12}(n)$ by the number of water cells, the row sums are Fibonacci numbers.
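These values, and the Fibonacci row sums, can be confirmed by brute force. The short Python sketch below is purely illustrative and not part of the combinatorial development; the helper names \texttt{comps\_12} and \texttt{water\_cells} are our own, and the water-cell count is the usual ``trapped water'' computation on the bargraph heights.
\begin{verbatim}
def comps_12(n):
    # all compositions of n with parts in {1, 2}
    if n < 2:
        return [[1] * n]
    return ([[1] + c for c in comps_12(n - 1)] +
            [[2] + c for c in comps_12(n - 2)])

def water_cells(comp):
    # for each interior column, water depth is
    # min(max height to the left, max height to the right) - height
    total = 0
    for i in range(1, len(comp) - 1):
        total += max(0, min(max(comp[:i]), max(comp[i + 1:])) - comp[i])
    return total

def w(n, k):
    return sum(1 for c in comps_12(n) if water_cells(c) == k)

assert water_cells([1, 2, 1, 4, 2, 4, 1, 2, 1, 3]) == 8  # bargraph example above
assert [w(8, k) for k in range(5)] == [17, 8, 6, 2, 1]   # row n = 8 of the table
assert sum(w(10, k) for k in range(10)) == 89            # F_11, as expected
\end{verbatim}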
\begin{table}[ht] \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{r|rrrrrrrrrrr} $n \backslash k$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 0 & 1 \\ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ 4 & 5 \\ 5 & 7 & 1 \\ 6 & 10 & 2 & 1 \\ 7 & 13 & 5 & 2 & 1 \\ 8 & 17 & 8 & 6 & 2 & 1 \\ 9 & 21 & 14 & 10 & 7 & 2 & 1 \\ 10 & 26 & 20 & 20 & 12 & 8 & 2 & 1 \\ 11 & 31 & 30 & 30 & 27 & 14 & 9 & 2 & 1 \\ 12 & 37 & 40 & 50 & 42 & 35 & 16 & 10 & 2 & 1 & \phantom{10} \\ 13 & 43 & 55 & 70 & 77 & 56 & 44 & 18 & 11 & 2 & 1 \\ 14 & 50 & 70 & 105 & 112 & 112 & 72 & 54 & 20 & 12 & 2 & 1 \end{tabular} \end{center} \caption{The $w(n,k)$ values for $0 \le n \le 14$ and $0 \le k \le 10$.} \label{wtab} \end{table} Below, we establish a generating function for each column of the $w(n,k)$ irregular triangular array. (The first six columns are A033638, A006918, A096338, A177747, A299337, and A178440 in \cite{o}, respectively.) Next, we show several additional relations and a direct formula for the first column, then show how other columns have more efficient recurrences when values from other columns are incorporated. In the last result of this section, we show that the diagonal sums of $w(n,k)$ count compositions where only the first and last parts are allowed to be odd. \subsection{Water cell columns} We begin with the first column which, as suggested by the irregular shape of the number array in Table \ref{wtab}, requires individual attention. Note that a composition without a water cell is weakly unimodal, i.e., $c_1 \le \cdots \le c_k \ge \cdots \ge c_t$ for some $1 \le k \le t$. \begin{theorem} \label{wum} The generating function for $w(n,0)$ is \begin{equation} \sum_{n \ge 0} w(n,0) q^n = \frac{1}{1-q} + \frac{q^2}{(1-q)^2(1-q^2)} = \frac{1-q+q^3}{1 - 2 q + 2 q^3 - q^4}. \label{w0gf} \end{equation} Also, for each $n \ge 0$, the following values are equal. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $w(n,0)$, \item $\lfloor n^2/4 \rfloor + 1$, \item $w(n-2,0) + n - 1$ (for $n \ge 2$), \item $2w(n-1,0) - 2w(n-3,0) + w(n-4,0)$ (for $n \ge 4$). \end{enumerate} \end{theorem} \begin{proof} For the generating function, rather than work from \eqref{wcgf}, we give a combinatorial argument. A composition in $C_{12}(n)$ with no water cells is weakly unimodal, so it has the form $(1^a, 2^b, 1^c)$ for nonnegative $a,b,c$. One possibility is the composition $(1^n) \in C_{12}(n)$ accounted for in the generating function by the summand $1/(1-q)$. The second summand, $q^2/((1-q)^2(1-q^2))$, counts partitions of $n$ with parts 1 and 2 where there is at least one part 2 and there are two colors of parts 1. These correspond to compositions in $C_{12}(n)$ with no water cells and at least one part 2: any partition parts $1_1$ correspond to the initial run $1^a$ in the composition, additional parts 2 from the partition contribute to $2^b$, and any partition parts $1_2$ correspond to the terminal run $1^c$. (For example, the partition $(2,2,1_1,1_1,1_2)$ corresponds to the composition $(1,1,2,2,1)$.) The second generating function expression follows from algebraic manipulation. For the direct formula (b), consider the number of parts 2, which must be adjacent. Suppose first that $n = 2m$. There is one composition counted by $w(n,0)$ that has no parts 2, the composition $(1^n)$. There are $2m-1$ compositions with a single part 2, as it can be before or after any of the $2m-2$ parts 1. There are $n-3$ compositions with the block $(2,2)$ before or after any of the $n-4$ parts 1. 
This continues to the single composition $(2^m)$, giving \[w(2m,0) = 1 + (2m-1) + (2m-3) + \cdots + 1 = m^2 + 1.\] The analogous sum for the case $n = 2m+1$ is \[w(2m+1,0) = 1 + 2m + (2m-2) + \cdots + 2 = m^2 + m + 1\] and, in both cases, these match $\lfloor n^2/4 \rfloor + 1$. For the nonhomogeneous recurrence (c), we describe how to construct $W(n,0)$ from most of $W(n-2,0)$ and some other compositions. We actually show \[w(n,0) = (w(n-2,0) - 1) + (n-1) + 1.\] For each of the $w(n-2,0) - 1$ compositions of $W(n-2,0)$ that contain at least one part 2 in a single run, add another part 2 to that run. There are $n-1$ compositions in $W(n,0)$ with a single part 2 in every possible positions before and after the $n-2$ parts 1. Finally, $(1^n) \in W(n,0)$. In the reverse direction, from the compositions in $W(n,0)$ with two or more parts 2, remove one of them to make a composition in $W(n-2,0)$; this leaves $n-1$ compositions in $W(n,0)$ with a single part 2 and also $(1^n)$. The linear recurrence (d) follows from the generating function \eqref{w0gf}, but we give a combinatorial proof, establishing the bijection \[W(n,0) \cup 2W(n-3,0) \cong 2W(n-1,0) \cup W(n-4,0)\] where the coefficient 2 means two copies of a set. The explanation of the bijection is simplified by defining the increasable subset $W^i(n,0)$ of $W(n,0)$ as $(1^n)$ and the compositions whose last two parts are $(2,1)$, i.e., the compositions where the last part can be increased by 1 without creating a water cell. Let $W^j(n,0) = W(n,0) \setminus W^i(n,0)$. We show \begin{gather*} W(n,0) \cong W(n-1,0) \cup W^i(n-1,0), \\ 2W(n-3,0) \cong\ W^j(n-1,0) \cup W(n-4,0) \end{gather*} from which the claim follows. For the first bijection, decrease the last part of each composition in $W(n,0)$ by one. Those with last part 1 are mapped bijectively to $W(n-1,0)$. The compositions in $W(n,0)$ with last part 2 are mapped into another $W(n-1,0)$ set, namely the compositions ending in $(2,1)$ and $(1^{n-1})$, i.e., $W^i(n-1,0)$. For the reverse map of the first bijection, add a part 1 at the end of the compositions in $W(n-1,0)$ to obtain the compositions of $W(n,0)$ with last part 1. Increase the last part of the compositions in $W^i(n-1,0)$ by one (allowed by definition), giving the compositions of $W(n,0)$ with last part 2. For the second bijection, in the first set $W(n-3,0)$, for the compositions that end in 1, removing that last part establishes a bijection to $W(n-4,0)$. For the compositions in the first set $W(n-3,0)$ that end in 2, add another part 2. In the second set $W(n-3,0)$, send $(1^{n-3})$ to $(1^{n-3},2)$ and add the parts $(1,1)$ at the end of the other compositions. Together these comprise the set $W^j(n-1,0)$: those ending with a run of two or more parts 2, the composition ending in a single 2, and those with at least one part 2 ending in $(1,1)$, exactly the compositions where increasing the last part yields a composition outside $C_{12}(n-1)$ or a composition in $C_{12}(n-1)$ with a water cell. For the reverse map of the second bijection, for the compositions of $W^j(n-1,0)$ that end in $(1,1)$, removing those last two parts establishes a bijection to $W(n-3,0)$ except for $(1^{n-3})$ (since $(1^{n-1}) \in W^i(n-1,0)$). Since no composition of $W^j(n-1,0)$ ends in $(2,1)$ (that would be increasable), the remaining compositions have last part 2, which we remove. One of those is $(1^{n-3},2) \in W^j(n-1,0)$ whose image $(1^{n-3})$ completes the first $W(n-3,0)$. 
The other compositions in $W^j(n-1,0)$ that end in 2 have additional parts 2: These must be in a run of parts 2 at the end, else there would be a water cell. Thus the images in the second $W(n-3,0)$ comprise all the compositions in that $W(n-3,0)$ that end in 2. Finally, add a part 1 at the end of each composition in $W(n-4,0)$ to complete the second $W(n-3,0)$ with the compositions that end in 1. \end{proof} See Table \ref{wumd} for an example of the bijections in the last part of the proof. \begin{table}[ht] \centering \begin{tabular}{rcl} $W(6,0)$ & $\longleftrightarrow$ & $W(5,0) \cup W^i(5,0)$ \\ \hline $(2,2,1,1)$ & & $(2,2,1)$ \\ $(2,1,1,1,1)$ & & $(2,1,1,1)$ \\ $(1,2,2,1)$ & & $(1,2,2)$ \\ $(1,2,1,1,1)$ & & $(1,2,1,1)$ \\ $(1,1,2,1,1)$ & & $(1,1,2,1)$ \\ $(1,1,1,2,1)$ & & $(1,1,1,2)$ \\ $(1,1,1,1,1,1)$ & & $(1,1,1,1,1)$ \\ \cline{3-3} $(2,2,2)$ & & $(2,2,1)$ \\ $(1,1,2,2)$ & & $(1,1,2,1)$ \\ $(1,1,1,1,2)$ & & $(1,1,1,1,1)$ \end{tabular} \qquad \begin{tabular}{rcl} $2W(3,0)$ & $\longleftrightarrow$ & $W(2,0) \cup W^j(5,0)$ \\ \hline $(2,1)$ & & $(2)$ \\ $(1,1,1)$ & & $(1,1)$ \\ \cline{3-3} $(1,2)$ & & $(1,2,2)$ \\ \cline{1-1} $(1,1,1)$ & & $(1,1,1,2)$ \\ $(2,1)$ & & $(2,1,1,1)$ \\ $(1,2)$ & & $(1,2,1,1)$ \end{tabular} \caption{The bijections of the proof of Theorem \ref{wum} (d) for $n= 6$.} \label{wumd} \end{table} Next we determine the generating function for each sequence $w(n,k)$ with $k \ge 1$ by extending the connection between the appropriate elements of $C_{12}(n)$ and partitions with parts restricted to 1 and 2 which may appear in multiple colors. \begin{theorem} \label{wcolgf} For a given $k \ge 1$, the generating function for the sequence $w(n,k)$ is \[ \sum w(n,k) q^n = \frac{q^{k+4}}{(1-q)^2(1-q^2)^{k+1}}.\] \end{theorem} The proof builds on the colored partition argument in the verification of \eqref{w0gf}. \begin{proof} Again, we give a combinatorial proof. As suggested in Figure \ref{fig2} (a) and (b), the smallest composition with $k$ water cells is $(2,1^k, 2) \in C_{12}(k+4)$. We describe how to construct, for all $n$, all compositions in $C_{12}(n)$ with exactly $k$ water cells from $(2,1^k, 2)$. The $q^{k+4}$ term in the numerator of the generating function corresponds to $(2,1^k, 2)$. The denominator of the generating function accounts for partitions made of two colors of parts 1 and $k+1$ colors of parts 2. The parts of such a partition of $n-k-4$ are incorporated into the composition $(2,1^k, 2)$ to make a composition in $C_{12}(n)$ as follows. Any partition parts $1_1$ correspond to an initial run of parts 1 in the composition, and any partition parts $1_2$ correspond to a terminal run of parts 1. Note that because these new composition parts 1 are not bounded by parts 2 on both sides, they do not contribute any water cells. Any partition parts $2_1$ correspond to additional parts 2 in the composition placed after the first part 2 in $(2,1^k, 2)$, and any partition parts $2_{k+1}$ correspond to additional parts 2 in the composition placed after the last part 2 in $(2,1^k, 2)$. For each $2 \le i \le k$, any partition parts $2_i$ correspond to parts 2 in the composition placed between the $(i-1)$st part 1 and the $i$th part 1 in $(2,1^k, 2)$. None of these parts 2 impact the number of water cells, so the resulting composition in $C_{12}(n)$ has exactly $k$ water cells. \end{proof} Note that the statement about the smallest composition with $k$ water cells in the proof is true only for our setting of $C_{12}(n)$. 
Among all compositions, the smallest with six water cells is $(3,1,1,1,3) \in C(9)$, not $(2,1^6,2) \in C(10)$. It follows from Theorem \ref{wcolgf} that, for a fixed $k \ge 1$, the sequence $w(n,k)$ satisfies a degree $2k+4$ linear recurrence. We show, however, that using values from other columns allows for much simpler recurrences. In the next result, for example, we establish that $w(n,1)$ depends on just the two terms $w(n-2,1)$ and $w(n-3,0)$ rather than involving $w(n-6,1)$. \begin{theorem} \label{wc1} For $n \ge 5$, \[w(n,1) = w(n-2,1) + w(n-3,0) - 1.\] \end{theorem} \begin{proof} We establish a bijection \[W(n,1) \cong W(n-2,1) \cup (W(n-3,0) \setminus (1^{n-3}))\] from which the claim follows. Given a composition in $W(n,1)$, it has either a single part 2 after its water cell or a run of at least two parts 2 after its water cell. If there is a single part 2 after the water cell, then remove the part 1 beneath the water cell and the next part 2, leaving a composition in $W(n-3,0)$ with at least one part 2 (preceding the removed water cell). If the composition in $W(n,1)$ has a run of two or more parts 2 after the water cell, remove the first of those parts 2, leaving a composition in $W(n-2,1)$. For the reverse map, a composition in $W(n-2,1)$ must contain consecutive parts $(2,1,2)$ to have a water cell; add another part 2 after the water cell to make a composition in $W(n,1)$. Given a composition in $W(n-3,0)$ with at least one part 2, add the parts $(1,2)$ after the last part 2 to make a composition in $W(n,1)$. \end{proof} See Table \ref{wc1ex} for an example of the bijection. \begin{table}[ht] \centering \begin{tabular}{rcl} $W(8,1)$ & $\longleftrightarrow$ & $(W(5,0) \setminus (1^5)) \cup W(6,1)$ \\ \hline $(2,2,1,2,1)$ & & $(2,2,1)$ \\ $(2,1,2,1,1,1)$ & & $(2,1,1,1)$ \\ $(1,2,2,1,2)$ & & $(1,2,2)$ \\ $(1,2,1,2,1,1)$ & & $(1,2,1,1)$ \\ $(1,1,2,1,2,1)$ & & $(1,1,2,1)$ \\ $(1,1,1,2,1,2)$ & & $(1,1,1,2)$ \\ \cline{3-3} $(2,1,2,2,1)$ & & $(2,1,2,1)$ \\ $(1,2,1,2,2)$ & & $(1,2,1,2)$ \end{tabular} \caption{The bijections of the proof of Theorem \ref{wc1} for $n= 8$.} \label{wc1ex} \end{table} The remaining columns of the triangle of $w(n,k)$ values all follow the same recurrence, involving just the two terms $w(n-1,k-1)$ and $w(n-2,k)$, a significant improvement to a linear recurrence of degree $2k+4$. \begin{theorem} \label{wc2} For $k \ge 2$ and $n \ge 6$, \[w(n,k) = w(n-1,k-1) + w(n-2,k).\] \end{theorem} \begin{proof} We establish a bijection \[W(n,k) \cong W(n-1,k-1) \cup W(n-2,k)\] from which the claim follows. Given a composition in $W(n,k)$, it has either a single part 2 after its last water cell or a run of at least two parts 2 after its last water cell. If there is a single part 2 after the last water cell, then remove the part 1 beneath the last water cell to give a composition in $W(n-1,k-1)$. If the composition in $W(n,k)$ has a run of two or more parts 2 after the last water cell, remove the first of those parts 2, leaving a composition in $W(n-2,k)$. For the reverse map, given a composition in $W(n-1,k-1)$, add another part 1 after the last water cell to make a composition in $W(n,k)$. Given a composition in $W(n-2,k)$, add a part 2 after the last water cell. \end{proof} See Table \ref{wc2ex} for an example of the bijection. 
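Both of these recurrences are also easy to confirm by machine for small $n$. The following Python sketch is again ours and purely illustrative; it re-creates the enumeration helpers in compact form so that it runs on its own, and checks Theorem \ref{wc1} for $5 \le n \le 14$ and Theorem \ref{wc2} for $6 \le n \le 14$.
\begin{verbatim}
def comps_12(n):
    return [[1] * n] if n < 2 else (
        [[1] + c for c in comps_12(n - 1)] +
        [[2] + c for c in comps_12(n - 2)])

def water_cells(c):
    return sum(max(0, min(max(c[:i]), max(c[i + 1:])) - c[i])
               for i in range(1, len(c) - 1))

def w(n, k):
    return sum(1 for c in comps_12(n) if water_cells(c) == k)

# w(n,1) = w(n-2,1) + w(n-3,0) - 1 for n >= 5
assert all(w(n, 1) == w(n - 2, 1) + w(n - 3, 0) - 1 for n in range(5, 15))
# w(n,k) = w(n-1,k-1) + w(n-2,k) for k >= 2 and n >= 6
assert all(w(n, k) == w(n - 1, k - 1) + w(n - 2, k)
           for n in range(6, 15) for k in range(2, 9))
\end{verbatim}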
\begin{table}[ht] \centering \begin{tabular}{rcl} $W(9,3)$ & $\longleftrightarrow$ & $W(8,2) \cup W(7,3)$ \\ \hline $(2,2,1,1,1,2)$ & & $(2,2,1,1,2)$ \\ $(2,1,2,1,1,2)$ & & $(2,1,2,1,2)$ \\ $(2,1,1,2,1,2)$ & & $(2,1,1,2,2)$ \\ $(2,1,1,1,2,1,1)$ & & $(2,1,1,2,1,1)$ \\ $(1,2,1,1,1,2,1)$ & & $(1,2,1,1,2,1)$ \\ $(1,1,2,1,1,1,2)$ & & $(1,1,2,1,1,2)$ \\ \cline{3-3} $(2,1,1,1,2,2)$ & & $(2,1,1,1,2)$ \end{tabular} \caption{The bijections of the proof of Theorem \ref{wc2} for $n= 9$ and $k = 3$.} \label{wc2ex} \end{table} \subsection{Water cell diagonals} The sequence of diagonal sums \[d(n) = w(n,0) + w(n-1,1) + w(n-2,2) + \cdots\] begins $1, 1, 2, 3, 5, 7, 11, 15, 23, 31$ which matches A052955 in \cite{o} with an additional term 1 at the beginning. Write $D(n)$ for the corresponding set of compositions. We connect these to compositions where only the first and last parts can be odd, that is, any internal parts are even: Let $C_{ie}(n)$ be the compositions in $C(n)$ with length one or two and the $(c_1, \ldots, c_t)$ with $t \ge 3$ such that parts $c_2, \ldots, c_{t-1}$ are all even. For example, \begin{gather*} C_{ie}(4) = \{(4),(3,1),(2,2),(1,3),(1,2,1)\}, \\ C_{ie}(5) = \{(5),(4,1),(3,2),(2,3),(2,2,1),(1,4),(1,2,2)\} \end{gather*} so that $c_{ie}(4) = 5$ and $c_{ie}(5) = 7$. Note that, for $n$ even, compositions in $C_{ie}(n)$ either have all parts even or both $c_1$ and $c_t$ odd (for compositions with at least two parts). Similarly, for $n$ odd, exactly one of $c_1$ and $c_t$ is odd when $t \ge 2$. \begin{theorem} \label{dp} The water cell array diagonal starting from $w(n,0)$ equals the number of compositions of $n$ with internal parts even, i.e., $d(n) = c_{ie}(n)$. The generating function for $d(n)$ is \begin{equation} \sum_{n \ge 0} d(n) q^n = \frac{1-q^2+q^3}{1 - q - 2 q^2 + 2 q^3}. \label{dgf} \end{equation} Further, for $n \ge 1$, \begin{equation} d(n) = \begin{cases} 2^m - 1 & \text{if $n = 2m-1$}, \\ 3\cdot 2^{m-1} - 1 & \text{if $n = 2m$}. \end{cases} \label{df} \end{equation} \end{theorem} We defer the proof of \eqref{dgf} to the next section. \begin{proof} We establish a bijection \[D(n) = \bigcup_{k \ge 0} W(n-k,k) \cong C_{ie}(n).\] Given a composition in $W(n-k,k)$ for some $k \ge 0$, add a part 1 after each water cell to get a composition $c \in W(n,2k)$ where each internal run of 1s has even length. Then the composition conjugate to $c$ has any internal parts even. For the reverse map, given a composition $C_{ie}(n)$, its conjugate has parts at most 2 (since a part 3, for instance, would require an internal part 1) with each internal run of 1s having even length. Halve each internal run of 1s to produce a composition in $W(n-k,k)$ for some $k \ge 0$. To establish \eqref{df}, we use $c_{ie}(n)$ rather than $d(n)$ and first consider the case $n = 2m-1$ for some $m \ge 1$. We establish the bijection \[C_{ie}(2m-1) \cong C(m) \cup ( C(m) \setminus (m)) \] from which the identity follows since $c(m) = 2^{m-1}$. Given a composition in the first set $C(m)$, double each part and then decrease the first part by one. In the second set $C(m)$, double each part and then decrease the last part by one (but not $(2m)$ where the first and last part are the only part). The resulting compositions have only the first part odd, respectively, the last part odd, so they are in $C_{ie}(2m-1)$. For the reverse map, a composition in $C_{ie}(2m-1)$ has exactly one odd part, the first or the last part. For those with first part odd, increase that part by 1 and halve all parts to make one set $C(m)$. 
For those with last part odd (excluding the single part composition $(2m-1)$), increase that part by 1 and halve all parts to make the other set $C(m)$ except for $(m)$. For the case $n = 2m$, we establish the bijection \[C_{ie}(2m) \cong 2C(m) \cup ( C(m) \setminus (m)) \] from which the identity follows. Given a composition in the first set $C(m)$, double each part. In the second set $C(m)$, double each part, then decrease the first part by one and add a part 1 at the end; this produces a composition with first part odd and last part 1. In the third set $C(m)$, double each part, then decrease the first part by one and increase the last part by one (but exclude $(2m)$ which would be sent to itself); this produces a composition with first part odd and last part odd at least 3. For the reverse map, a composition in $C_{ie}(2m)$ has either no odd parts or two odd parts, the first and the last. For those with all even parts, halve all parts to make one set $C(m)$. For compositions with first part odd and last part 1, increase the first part by one, remove the last part, and halve the resulting parts to make a second set $C(m)$. For compositions with first part odd and last part odd at least 3, increase the first part by one, decrease the last part by one, and halve the resulting parts to make a third set $C(m)$ except for $(m)$. \end{proof} See Tables \ref{dbij} and \ref{dfbij} for examples of the bijections. \begin{table}[ht] \centering \begin{tabular}{rcl} $W(8,0) \cup W(7,1) \cup W(6,2)$ & $\longleftrightarrow$ & $C_{ie}(8)$ \\ \hline $(2,2,2,2)$ & & $(1,2,2,2,1)$ \\ $(1,2,2,1,1,1)$ & & $(2,2,4)$ \\ $(1,1,2,1,1,1,1)$ & & $(3,5)$ \\ $(1,1,1,1,2,2)$ & & $(5,2,1)$ \\ $(1^8)$ & & $(8)$ \\ $\cdots$ & & [compositions with all internal parts 2] \\ \cline{1-1} $(2,2,1,2)$ & & $(1,2,4,1)$ \\ $(2,1,2,2)$ & & $(1,4,2,1)$ \\ $(2,1,2,1,1)$ & & $(1,4,3)$ \\ $(1,2,1,2,1)$ & & $(2,4,2)$ \\ $(1,1,2,1,2)$ & & $(3,4,1)$ \\ \cline{1-1} $(2,1,1,2)$ & & $(1,6,1)$ \end{tabular} \caption{The first bijection of the proof of Theorem \ref{dp} for $n= 8$ (with only some of the 17 compositions in $W(8,0)$ but all of $W(7,1)$ and $W(6,2)$).} \label{dbij} \end{table} \begin{table}[ht] \centering \begin{tabular}{rcl} $2C(3) \setminus (3)$ & $\longleftrightarrow$ & $C_{ie}(5)$ \\ \hline $(3)$ & & $(5)$ \\ $(2,1)$ & & $(3,2)$ \\ $(1,2)$ & & $(1,4)$ \\ $(1,1,1)$ & & $(1,2,2)$ \\ \cline{1-1} $(2,1)$ & & $(4,1)$ \\ $(1,2)$ & & $(2,3)$ \\ $(1,1,1)$ & & $(2,2,1)$ \end{tabular} \qquad \begin{tabular}{rcl} $3C(3) \setminus (3)$ & $\longleftrightarrow$ & $C_{ie}(6)$ \\ \hline $(3)$ & & $(6)$ \\ $(2,1)$ & & $(4,2)$ \\ $(1,2)$ & & $(2,4)$ \\ $(1,1,1)$ & & $(2,2,2)$ \\ \cline{1-1} $(3)$ & & $(5,1)$ \\ $(2,1)$ & & $(3,2,1)$ \\ $(1,2)$ & & $(1,4,1)$ \\ $(1,1,1)$ & & $(1,2,2,1)$ \\ \cline{1-1} $(2,1)$ & & $(3,3)$ \\ $(1,2)$ & & $(1,5)$ \\ $(1,1,1)$ & & $(1,2,3)$ \end{tabular} \caption{The bijections of the proof of \eqref{df} for $n= 5$ and $n = 6$ (both with $m = 3$).} \label{dfbij} \end{table} We establish \eqref{dgf} using Riordan arrays in the next section. One could pursue a combinatorial proof of that generating function, or at least the recurrence \[d(n) = d(n-1) + 2d(n-2) - 2d(n-3)\] for $n \ge 4$ that follows from it, similar to the proof of Theorem \ref{wum} (d). Also, the identity \[d(n) = 2d(n-2)+1\] for $n \ge 3$ follows from \eqref{df}, but there may be a simpler combinatorial proof of this identity not requiring cases based on the parity of $n$. We leave these to the interested reader.
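The closed form \eqref{df} and the two identities just mentioned can be checked numerically in the same way; the following sketch (again only a convenience check) reuses the brute-force $w(n,k)$ from the previous sketch.
\begin{verbatim}
# Numerical check of the closed form for d(n) and of the two identities
# above, reusing w(n, k) from the previous sketch.
def d(n):
    return sum(w(n - k, k) for k in range(n // 2 + 1))

for n in range(1, 16):
    m = (n + 1) // 2
    closed = 2**m - 1 if n % 2 == 1 else 3 * 2**(m - 1) - 1
    assert d(n) == closed
    if n >= 4:
        assert d(n) == d(n - 1) + 2 * d(n - 2) - 2 * d(n - 3)
    if n >= 3:
        assert d(n) == 2 * d(n - 2) + 1
\end{verbatim}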
\section{Connections to Riordan arrays} In this final section, we consider many of our results through Riordan arrays and complete the proof of Theorem \ref{dp}. For our purposes, it is enough to say that certain triangular arrays of integers can be described by an ordered pair of rational functions $(d(t), h(t))$ where $d(t)$ is the generating function of the first column and $h(t)$ encapsulates the relation between columns. From this formulation, row sums, diagonal sums, and antidiagonal sums follow directly from $d(t)$ and $h(t)$ as detailed by Sprugnoli \cite{s94}. The irregular water cell triangle $w(n,k)$ of Table \ref{wtab} is not a Riordan array, but the portion starting with the $k=1$ column from $n=5$ is. Specifically, the Riordan array for compositions with a positive number of water cells is given by \begin{equation} \left(\frac{1}{(1-t)^2(1-t^2)^2}, \frac{1}{1-t^2} \right)\!. \label{w1+} \end{equation} The verification of $d(t)$ is the $k=1$ case of Theorem \ref{wcolgf} and confirming $h(t)$ is equivalent to Theorem \ref{wc2}. With this, the row sums for this subtriangle have generating function \begin{align} \frac{d(t)}{1-t h(t)} & = \frac{1}{(1 - t)^2 (1 - t^2) (1 - t - t^2)} \label{wrs} \\ & = 1 + 3 t + 8 t^2 + 17 t^3 + 34 t^4 + 63 t^5 + 113 t^6 + 196 t^7 + 334 t^8 + \cdots. \notag \end{align} Since we know the row sums for the complete $w(n,k)$ array are Fibonacci numbers, we now have another way to establish \eqref{w0gf} from Theorem \ref{wum}: The generating function for $w(n,0)$ is \[ \frac{1}{1 - t - t^2} - \frac{t^5}{(1 - t)^2 (1 - t^2) (1 - t - t^2)} = \frac{1-t+t^3}{1 - 2 t + 2 t^3 - t^4} \] where the $t^5$ term modifying \eqref{wrs} places the Riordan array correctly in the $w(n,k)$ triangle. The diagonal sums of the Riordan array subtriangle of $w(n,k)$ have generating function \begin{align} \frac{d(t)}{1-t^2 h(t)} & = \frac{1}{(1 - t)^3 (1 + t) (1 - 2 t^2)} \label{wrd} \\ & = 1 + 2 t + 6 t^2 + 10 t^3 + 21 t^4 + 32 t^5 + 58 t^6 + 84 t^7 + 141 t^8 + \cdots. \notag \end{align} With this, we complete the proof of Theorem \ref{dp}. \begin{proof}[Proof of \eqref{dgf}] The generating function for $d(n)$ follows from combining \eqref{w0gf} for $w(n,0)$ and \eqref{wrd} with the appropriate factor for the rest of the diagonal sum. The first $d(n)$ with two positive summands is $d(6) = w(6,0) + w(5,1)$, so we have \[ \sum_{n\ge0} d(n) t^n = \frac{1-t+t^3}{1 - 2 t + 2 t^3 - t^4} + \frac{t^6}{(1 - t)^3 (1 + t) (1 - 2 t^2)} = \frac{1 - t^2 + t^3}{1 - t -2t^2 + 2t^3}. \qedhere\] \end{proof} \section*{Acknowledgments} We appreciate Paul Barry's assistance with Riordan arrays and James Shapiro's helpful online tool \url{https://riordancalculator.com/}. \begin{thebibliography}{9} \bibitem{bq03} A. Benjamin, J. Quinn, \textit{Proofs That Really Count:\ The Art of Combinatorial Proof}, Dolciani Mathematical Expositions 27, Mathematical Association of America, 2003. \bibitem{bbk18} A. Blecher, C. Brennan, A. Knopfmacher, The water capacity of integer compositions, \textit{Online J. Anal. Comb.} \textbf{13} (2018) \#06, 14 pp. \bibitem{dfs24} D. E. Davenport, S. K. Frankson, L. W. Shapiro, L. C. Woodson, An invitation to the Riordan group, \textit{Enumer. Combin. Appl.} \textbf{4} (2024) \#S2S1, 26 pp. \bibitem{m15} P. MacMahon, \textit{Combinatory Analysis}, vol. 1, Cambridge University Press, 1915. \bibitem{ms18} T. Mansour, M. Shattuck, Counting water cells in bargraphs of compositions and set partitions, \textit{Appl. Anal. Discrete Math.} 12 (2018) 413--438.
\bibitem{o} OEIS Foundation Inc., \textit{The On-Line Encyclopedia of Integer Sequences}, 2024, oeis.org. \bibitem{sgww91} L. W. Shapiro, S. Getu, W. Woan, L. C. Woodson, The Riordan Group, \textit{Discrete Appl. Math.} 34 (1991) 229--239. \bibitem{s85} P. Singh, The so-called Fibonacci numbers in ancient and medieval India, \textit{Historia Math.} 12 (1985) 229--244. \bibitem{s94} R. Sprugnoli, Riordan arrays and combinatorial sums, \textit{Discrete Math.} 132 (1994) 267--290. \end{thebibliography} \medskip \noindent MSC2020: 05A17, 11B37 \end{document}
2412.11646v1
http://arxiv.org/abs/2412.11646v1
BA-BFL: Barycentric Aggregation for Bayesian Federated Learning
\documentclass{article} \usepackage{arxiv} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{lipsum} \usepackage{graphicx} \graphicspath{ {./images/} } \usepackage{multirow} \usepackage{amsmath} \allowdisplaybreaks[1] \usepackage{amssymb} \usepackage{listings} \title{BA-BFL: Barycentric Aggregation for Bayesian Federated Learning} \author{ Nour Jamoussi$^{1}$, Giuseppe Serra$^{1}$, Photios A. Stavrou$^{1}$, Marios Kountouris$^{1,2}$ \\[1ex] $^{1}$Communication Systems Department, EURECOM, France \\ $^{2}$Andalusian Institute of Data Science and Computational Intelligence (DaSCI) \\ Department of Computer Science and Artificial Intelligence, University of Granada, Spain \\ \texttt{\{jamoussi, serra, stavrou, kountour\}@eurecom.fr} } \input{mathShorts} \begin{document} \maketitle \begin{abstract} In this work, we study the problem of aggregation in the context of Bayesian Federated Learning (BFL). Using an information geometric perspective, we interpret the BFL aggregation step as finding the barycenter of the trained posteriors for a pre-specified divergence metric. We study the barycenter problem for the parametric family of $\alpha$-divergences and, focusing on the standard case of independent and Gaussian distributed parameters, we recover the closed-form solution of the reverse Kullback–Leibler barycenter and develop the analytical form of the squared Wasserstein-2 barycenter. Considering a non-IID setup, where clients possess heterogeneous data, we analyze the performance of the developed algorithms against state-of-the-art (SOTA) Bayesian aggregation methods in terms of accuracy, uncertainty quantification (UQ), model calibration (MC), and fairness. Finally, we extend our analysis to the framework of Hybrid Bayesian Deep Learning (HBDL), where we study how the number of Bayesian layers in the architecture impacts the considered performance metrics. Our experimental results show that the proposed methodology presents comparable performance with the SOTA while offering a geometric interpretation of the aggregation phase. \end{abstract} \section{Introduction} Federated Learning (FL) is considered a de facto standard in decentralized learning systems where strong privacy guarantees are required. As envisioned in \cite{mcmahan2017communication}, an FL system is comprised of a server, storing a global model, which interfaces with multiple clients, i.e., end-devices, possessing private data. FL schemes mainly operate in two phases: a learning phase, where each client trains its local model on the locally available data, and an aggregation phase, where the local models are sent to the server and ``merged'' according to a pre-determined rule. The two phases alternate iteratively, using the global model resulting from the previous iteration as updated local models in the current one. Aggregation is a key operation for combining individual contributions into a global model while keeping data secure and being communication efficient. Although different aggregation techniques are proposed in the literature, e.g., \cite{qi2023model}, most known aggregation methods rely on variations of weighted averaging. For example, FedAVG \cite{mcmahan2017communication} and FedProx \cite{li2022federated} aggregate the local model through the weighted average of their parameters. 
The weights become an important design parameter, encoding auxiliary attributes such as the importance of the local models for the objective of the algorithm or proportionally to the volume of data held by each client. One key challenge in FL, and generally in distributed learning systems, is the statistical heterogeneity of the involved clients. In most realistic scenarios, the datasets available to each end-user rarely satisfy the ideal IID conditions, showing heterogeneous properties, i.e., shifts, across the clients. Commonly observed shifts are surveyed in \cite{kairouz2021advances,li2022federated}, where five major models of heterogeneity are identified: \textit{label distribution skew}, i.e., variance in the number of samples representative of a specific label across users (under or over-represented classes); \textit{feature distribution skew}, i.e., features associated with a class label vary across users; \textit{concept drift}, i.e., different clients have the same labels for different features; \textit{concept shift}, i.e., different clients may have mismatched labels for the same samples; \textit{quantity skew}, i.e., heterogeneity in the size of the datasets possessed by each client. Although compelling from the practical point of view, approaching all the listed non-idealities may not be feasible. Therefore, the vast majority of the literature considers only subsets of them \cite{deng2020adaptive,t2020personalized,fallah2020personalized}. Clients' heterogeneity also raises other issues in the distributed architecture, such as the problem of algorithmic fairness. Algorithmic fairness refers to treating all individuals or groups impartially in decision-making, without bias or favoritism based on their innate or acquired characteristics \cite{mehrabi2021survey, saxena2019fairness}. In the context of FL, models are trained on data from multiple sources, which may represent diverse populations. Neglecting the fairness aspects could potentially lead to introducing or reinforcing existing biases in the global model, leading to unfair treatment of certain groups. Multiple works focused their attention on fairness in FL algorithms, suggesting solutions such as Personalized FL. A taxonomy of Fairness-Aware FL approaches that encompass key phases in FL such as client selection, optimization, contribution assessment, and incentive distribution can be found in \cite{shi2023towards}. Building on the applicability of FL in real-world scenarios, uncertainty quantification (UQ) and model calibration (MC) are central qualities of reliable models. Nonetheless, these aspects have not been investigated thoroughly in the existing research around deterministic FL. A preliminary study on the topic, with specific application to healthcare, is provided in \cite{zhang2023uncertainty}, through an overview of various UQ methods in the deterministic FL setting, later implemented also in \cite{koutsoubis2024privacy}. We stress, however, that the surveyed techniques are mainly inspired by Bayesian-like solutions, such as Bayesian ensembles and Monte Carlo dropout. Improving the reliability of models is where Bayesian Learning (BL) excels, as Bayesian methods allow for better quantification of the model uncertainty and calibration, rendering it a compelling solution for FL in the non-IID context. 
FedPPD \cite{bhatt2024federated} introduces an FL framework that incorporates UQ, where each client, in each round, estimates the posterior distribution over its parameters and the posterior predictive distribution (PPD). The PPD is then distilled into a single deep neural network and sent to the server. pFedBayes \cite{zhang2022personalized} and Fedpop \cite{kotelevskii2022fedpop} are also Bayesian approaches with a focus on UQ aspects, proposed in the context of personalized BFL. Nonetheless, we emphasize that many of the existing methods \cite{bhatt2024federated,zhang2022personalized,ozer2022combine,fischer2024federated} rely on variations of weighted averaging of the posteriors' parameters. \paragraph{Contributions} This work aims to introduce novel aggregation methodologies in the context of BFL leveraging barycentric aggregation (BA-BFL). Given a divergence metric, we interpret the aggregation process as a geometric problem, where the global model is identified as the barycenter of the local posteriors. In light of this perspective, we study the barycenter problem for the parametric family of $\alpha$-divergences for general distributions. Subsequently, considering the specific case of independent and Gaussian distributed posteriors, we recover the analytical barycenter solution for the reverse-Kullback-Leiber (RKL) divergence and derive a closed-form solution for the squared Wasserstein-2 ($\W2$) barycenter, through which we define the RKLB and WB aggregation methodologies. We compare the performance of the proposed techniques against SOTA Bayesian aggregation methods in terms of accuracy and UQ scores within a heterogeneous setting. To address the gap identified in the BFL literature and following the analysis of Hybrid Bayesian Deep Learning (HBDL) envisioned in \cite{zeng2018relevance}, we extend our study to analyze the performance impact of considering only a limited number of Bayesian layers on the different Bayesian aggregation techniques. Furthermore, this paper takes a step towards analyzing BFL aggregation methods, including the novel approach we propose, from a fairness perspective. To the best of our knowledge, this has not been addressed in the existing FL literature. \section{Background and Related Work} \paragraph{Federated Learning} An FL system \cite{mcmahan2017communication} consists of a central server and $N$ clients, engaging in an iterative learning process involving server-client communication. For each communication round, the $k^{th}$ client trains its local model, parameterized by $\theta_k$, on its private data $\mathcal{D}_k$. Subsequently, the model parameters $\theta_k$ are sent to the server, which aggregates all the clients' models to obtain the global model, later sent to the clients to refine their local models for the next communication round. By aggregating the local models of the clients, FL seeks to achieve a global model $\theta^*$ on $\mathcal{D} = \bigcup_{k = 1}^N \mathcal{D}_k $, the aggregated dataset from all participating clients. In general, an FL system considers objective functions of the form \begin{align} \min _{{\theta}} F({\theta})=\sum_{k=1}^N w_k F_k({\theta}) \label{eq:fl_objective} \end{align} where $F_k({\theta})= \mathbb{E}_{(x,y) \sim \mathcal{D}_k }[\mathcal{L}(\theta ; (x,y)]$ is the local objective function of the $k^{th}$ client and $w_k$ is its associated weight such that $\sum_{k = 1}^N w_k = 1$. Minimizing $F_k(\theta)$ at each communication round gives rise locally to $\theta_k$. 
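For concreteness, the two-phase round structure just described can be sketched in a few lines (a schematic only: the local optimizer is abstracted into a callback, FedAVG-style weighted averaging is used as the aggregation rule, and all names below are ours).
\begin{verbatim}
import numpy as np

def aggregate(local_thetas, weights):
    # Server step: weighted average, theta* = sum_k w_k * theta_k.
    return sum(w * th for w, th in zip(weights, local_thetas))

def federated_round(theta_global, client_data, weights, local_train):
    # One communication round: local training, then weighted aggregation.
    local_thetas = [local_train(theta_global, D_k) for D_k in client_data]
    return aggregate(local_thetas, weights)

# Toy run with a placeholder local optimizer.
rng = np.random.default_rng(0)
client_data = [None, None, None]              # stands in for the private D_k
weights = np.array([0.5, 0.25, 0.25])         # normalized weights
local_train = lambda theta, D_k: theta - 0.1 * rng.normal(size=theta.shape)
theta = np.zeros(4)
for _ in range(5):
    theta = federated_round(theta, client_data, weights, local_train)
\end{verbatim}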
\paragraph{Fairness} The topic of fairness intersects multiple domains, including social sciences, law, machine learning, and statistics, each leading to different perspectives and implicitly different definitions. \cite{mehrabi2021survey} provides an overview of the most significant aspects of fairness in the context of ML and categorizes them into \textit{individual fairness}, where the goal is to achieve comparable predictions for similar individuals, \textit{group fairness}, ensuring that all groups receive equal treatment, and \textit{subgroup fairness}, which aims to capture the most beneficial aspects of both group-based and individual-based approaches to fairness. In FL, we consider not only the fairness of each individual ML model but also the fairness of the algorithm across different clients within the FL setting. To enhance clients' engagement, it is important to ensure that the FL algorithm does not discriminate against certain clients. Thus, we regard fairness from a sociological point of view where the two main definitions are: \begin{itemize} \item \textit{Utilitarianism} \cite{maskin1978theorem} is a notion of social science where as long as the overall performance of the whole society is optimal, we call the society to be fair. In FL, this results in looking at the average performance of all the users. \item \textit{Egalitarianism} \cite{rawls1974some,rawls1999eory} is a fairness rule aiming to maximize the worst-case performance. \end{itemize} In \cite{zhang2022proportional}, the authors explored proportional fairness in FL and propose PropFair as a proportional fair FL algorithm, achieving a good balance between utilitarian and egalitarian fairness. \paragraph{Hybrid Bayesian Deep Learning (HBDL)} Despite the remarkable results in terms of model performance, Deep Learning (DL) does not address crucial problems in realistic scenarios, such as reliability and UQ. In a recent position paper \cite{papamarkou2024position}, the authors propose Bayesian Deep Learning (BDL) as a solution for the ethical, privacy, and safety challenges of modern DL. Acknowledging ongoing issues with BDL, e.g., the added complexity cost of performing BL on large-scale deep models, the authors envision the alternative framework of HBDL to maintain the efficiency and lower complexity of DL while retaining the reliability of BDL. HBDL is also discussed in \cite{jospin2022hands} as \textit{Bayesian inference on the (n-)last layer(s) only}. The idea behind this framework is to substitute part of the layers in a Bayesian deep model with deterministic layers, making them equivalent to the classical DL counterpart. The possibility of having only a partially Bayesian model allows for retaining UQ capabilities while reducing the complexity compared to a fully Bayesian model. \paragraph{Bayesian Federated Learning} FL naturally inherits the issues of DL we discussed so far, as well as some of the highlighted solutions. BFL aims to integrate BDL strengths into FL. \cite{cao2023Bayesian} presents a taxonomy of BFL, categorizing it following the Bayesian perspective into: \begin{itemize} \item \textit{Client-side BFL} \cite{yurochkin2019Bayesian,zhang2022personalized,boroujeni2024personalized, zhu2023confidence,ozer2022combine,liu2023Bayesian,hasan2024calibrated,bhatt2024federated}, where Bayesian neural networks and Bayesian optimization are employed to train the local models. 
\item \textit{Server-side BFL} \cite{guo2023federated, chen2020fedbe,al2020federated,corinzia2019variational}, where Bayesian techniques like Bayesian model ensembling or Bayesian posterior decomposition are used to aggregate the local models in order to obtain the global model. \end{itemize} \paragraph{Divergence metrics and Information Geometry} Dealing with the set of local posteriors generated by the clients, BFL requires a principled way to compare and aggregate the received distributions. The majority of the proposed solutions consider parameterized local statistical models, later aggregated through operations in the shared parameter space. However, following the solutions proposed in FL, these methods often assume an Euclidean structure in the parameter space, proposing aggregation relying on simple averaging of the local parameters, disregarding the implications on the underlying manifold to which the posterior probabilities belong. Given a statistical manifold $\mathcal{M}$, a divergence $D: \mathcal{M} \times \mathcal{M} \to [0, +\infty)$ is a non-negative function expressing the degree of dissimilarity between two distributions on $\mathcal{M}$. Widely used in statistics and ML, some examples include: \begin{itemize} \item the parametric family of $\alpha$-divergences \cite{Cressie:88}, defined for $\alpha \in \mathbb{R} \setminus \{0,1\}$ as \begin{align*} D_\alpha(p\|q) \triangleq \frac{1}{\alpha(1 - \alpha)} \left( 1 - \int p_{\nu}^{\alpha} q_{\nu}^{1-\alpha} d\nu \right), \end{align*} where $p_{\nu}=\frac{dp}{d\nu}$ and $q_{\nu} = \frac{dq}{d\nu}$ are the Radon-Nicodym derivatives w.r.t. a reference measure $\nu$. This parametric family includes several commonly used divergences, obtained for specific values of $\alpha$, e.g., for $\alpha = 1/2$ we obtain the Hellinger distance $H(p||q)$, for $\alpha \to 1$ we have the KL divergence $D_{KL}(p||q)$, for $\alpha \to 0$ we obtain the RKL divergence $ \RKL(p||q)= D_{KL}(q||p)$. \item the squared Wasserstein-2 distance $\W2$, introduced in \cite{gelbrich:1990:W2formula} and defined as \begin{align*} \W2( p, q) \triangleq \min_{\pi \in \Pi} \mathbb{E}_{\pi}\left[||X - Y||^2\right], \end{align*} where $(X,Y) \sim \pi$ and $\Pi$ is the set of all distributions $\pi$ such that its marginal distributions are $p$ and $q$. \end{itemize} Information Geometry \cite{Amari:2016} studies the geometric properties that a divergence function induces on a manifold, allowing translation of operations naturally defined on a manifold, such as projections onto sets \cite{Csiszar:1975, Csizar:2003}, or identifying centers of mass of a set of points, i.e., barycenters \cite{Nielsen:2009, Ortenzio:2022}, to operations on the parameter space. We leverage this connection to derive aggregation methodologies for the local models in the parameter space as interpretable operations on the manifold. \section{Proposed Method} In this section, we present our main theoretical results. We start by formalizing the essential technical aspects of the Client-side BFL framework. \paragraph{Client-Side BFL} The Bayesian view presents a different framework for FL. The goal is to estimate the posterior distribution of the global model's parameters, $ p(\theta^* | \mathcal{D}) $, given the posterior distributions of local models $p(\theta_k | \mathcal{D}_k)$. Nevertheless, exact posterior inference is usually intractable, requiring the use of approximate inference methods instead. 
In this work, we consider Variational Inference \cite{jordan1999introduction,blei2017variational} to approximate the local posteriors given a common prior distribution $p(\theta)$ and the client likelihoods $p(\mathcal{D}_k | \theta_k)$. For a parametric family $\mathcal{Q}$, the optimization problem seeks to identify the distribution $q \in \mathcal{Q}$ that minimizes the KL divergence from the posterior distribution $p(\theta|\mathcal{D})$, i.e., \begin{align} \min_{q(\theta) \in \mathcal{Q}} \KL(q(\theta) \| p(\theta | \mathcal{D})) \label{eq: primal_VI} \end{align} However, the minimization in \eqref{eq: primal_VI} is not directly tractable and is commonly approached through the derivation of the \textit{Negative Evidence Lower BOund} (NELBO) surrogate objective \begin{equation} \label{eq:elbo} \min _{q(\theta) \in \mathcal{Q}}-\mathbb{E}_{q(\theta)}[\log p(\mathcal{D}|\theta)]+\KL(q(\theta) || p(\theta)). \end{equation} The local models are trained by minimizing \eqref{eq:elbo} to achieve their models' posterior distributions $p(\theta_k|\mathcal{D}_k),~ \forall k \in \{1, .., N\}$. The local posteriors are then aggregated in order to get the global model's posterior $p(\theta^*|\mathcal{D})$. Given this setting, we now introduce our main assumptions regarding the common prior $p(\theta)$ and the parametric family $\mathcal{Q}$, which will stay valid throughout the rest of this paper. \begin{assumption} For each client, we assume the prior distribution $p(\theta)$ to be a $d$-dimensional Gaussian with independent marginals, parameterized by a zero mean vector $\mathbf{0}_d$ and an identity covariance matrix $\mathbf{I}_d$. \end{assumption} \begin{assumption} (Mean-field Model) The parametric family $\mathcal{Q}$ is composed of $d$-dimensional Gaussian distributions with independent marginals, i.e., $\theta \sim \mathcal{N}(\mu, \Sigma)$, with mean $\mu \in \mathbb{R}^d$ and diagonal covariance $\Sigma = \diag(\sigma^2_1,\ldots,\sigma^2_d)$. \label{ass:gaussian_ind} \end{assumption} \paragraph{Bayesian Aggregation as Posteriors Barycenter} The main novelty of this work stands in the introduction of Barycentric Aggregation for BFL (BA-BFL), an aggregation method inspired by the geometric properties of the manifold to which the local posteriors $\{ p(\theta_k|\mathcal{D}_k)\}_{k = 1 \ldots N}$ belong. Given a divergence metric $D$, we propose as a global model the barycenter $p^*_D$ of the set of clients' posteriors, i.e., the distribution that minimizes the weighted divergence from a given set. The following problem formalizes this interpretation of the aggregation process. \begin{problem} \label{problem:barycenter} ($D$-barycenter) Given a statistical manifold $\mathcal{M}$, a divergence function $D: \mathcal{M} \times \mathcal{M} \to [0,\infty)$, and a set of distributions $\mathcal{S} = \{p_k\}_{k = 1 \ldots N} \subseteq \mathcal{M}$ with associated normalized weights $\{w_k\}_{k = 1 \ldots N}$, i.e., $\sum_{k = 1}^N w_k = 1$, the barycenter $p^*_D$ of the set $\mathcal{S}$ is defined as: \begin{align} p^*_D = \argmin_{q \in \mathcal{M}} \sum_{k = 1}^N w_k D(p_k|| q). \label{eq:barycenter1} \end{align} \end{problem} We now study Problem \ref{problem:barycenter} under various assumptions on the distribution set $\mathcal{S}$ and divergence metric $D$. First, we consider the generic case where $D = D_{\alpha}$, namely, it belongs to the family of $\alpha$-divergences, without any additional assumption on the set $\mathcal{S}$. 
\begin{theorem}($\alpha$-barycenter) Let $D(p|| q) = D_{\alpha}(p|| q)$ with $\alpha \in \mathbb{R} \setminus \{ 0 \}$. Then, the barycenter $p^*_{D_{\alpha}}$ in \eqref{eq:barycenter1} is the following: \begin{align} p^*_{D_{\alpha}} = \frac{\left(\sum_{k = 1}^N w_k p_k^{\alpha}\right)^{\frac{1}{\alpha}}}{\bigints \left(\sum_{k = 1}^N w_k p_k^{\alpha} \right)^{\frac{1}{\alpha}} d\nu}. \label{eq:barycenter:alpha} \end{align} Additionally, for $\lim_{\alpha \to 0} D_{\alpha}(p||q) = D_{RKL}(p||q)$, the barycenter $p^*_{RKL}$ in \eqref{eq:barycenter1} becomes: \begin{align} p^*_{RKL} = \frac{\prod_{k = 1}^N p_k^{w_k}}{\bigints \prod_{k = 1}^N p_k^{w_k} d\nu}. \label{eq:barycenter:KL} \end{align} \end{theorem} \begin{proof} Let the distribution $\hat{p}$ be defined as: \begin{align*} \hat{p} \triangleq c \cdot \left( \sum_{k = 1}^N w_k p_k^{\alpha}\right)^{\frac{1}{\alpha}} \text{where}~ c = \frac{1}{\bigints \left( \sum_{k = 1}^N w_k p_k^{\alpha}\right)^{\frac{1}{\alpha}} d\nu}. \end{align*} Then, the right-hand side of $\eqref{eq:barycenter1}$ can be expressed as: \begin{align} \sum_{k = 1}^N & w_k D_{\alpha}(p_k || q) \nonumber \\ &= \sum_{k = 1}^N w_k \left[ \frac{1}{\alpha(1 - \alpha)}\left( 1 - \int q^{1-\alpha} p_k^{\alpha} d\nu \right)\right] \nonumber \\ &= \frac{1}{\alpha(1 - \alpha)}\left[1 - \int q^{1-\alpha} \left( \sum_{k = 1}^N w_k p_k^{\alpha}\right) d\nu \right] \nonumber\\ &= \frac{1}{\alpha(1 - \alpha)}\left[1 - \int q^{1-\alpha} \frac{\hat{p}^{\alpha}}{c^{\alpha}} d\nu \right] \nonumber\\ &= \frac{1}{\alpha(1 - \alpha)}\left[1 -\frac{1}{c^{\alpha}} +\frac{1}{c^{\alpha}} - \int q^{1-\alpha} \frac{\hat{p}^{\alpha}}{c^{\alpha}} d\nu \right] \nonumber\\ &= \frac{1 - c^{-\alpha}}{\alpha(1 - \alpha)} + \frac{c^{-\alpha}}{\alpha(1 - \alpha)} \left[ 1 - \int q^{1-\alpha} {\hat{p}^{\alpha}} d\nu \right] \nonumber\\ &= \frac{1 - c^{-\alpha}}{\alpha(1-\alpha)} + c^{-\alpha} D_{\alpha}(\hat{p}|| q). \label{eq:proof:barycenter_alpha:1} \end{align} Since \eqref{eq:proof:barycenter_alpha:1} depends on $q$ only through the term $D_{\alpha}(\hat{p}||q)$, the minimum in $\eqref{eq:barycenter1}$ is attained by $q = \hat{p}$ due to the fundamental properties of divergence measures, i.e., $D(\hat{p} || q) = 0 \iff q = \hat{p}$, proving the first part of the theorem. The second part can be obtained following similar steps, and by considering that $\lim_{\alpha \to 0} D_{\alpha}(p||q) = \RKL(p|| q)$. This concludes the proof. \end{proof} We now focus on the case where all $p_k \in \mathcal{S}$ are $d$-dimensional Gaussian distributions, i.e., $p_k = \Gaussian(\mu_k, \Sigma_k)$, with mean $\mu_k$ and covariance matrix $\Sigma_k$. This setting derives from Assumption \ref{ass:gaussian_ind}, where we assume that the parameters of each Bayesian layer are Gaussian distributed. For the same reasons, we are also interested in the cases where the resulting barycenter is itself Gaussian, to enforce that global and local models belong to the same family of distributions. Alas, this is not the case for the majority of $\alpha$-divergences, as discussed in the following remark. \begin{remark} (On the $\alpha$-barycenter of a set of Gaussians) Given $\mathcal{S} = \{ \Gaussian(\mu_k,\Sigma_k) \}_{k = 1\ldots N}$, the barycenter distribution $p^*_{D_\alpha}$ in \eqref{eq:barycenter:alpha} is not Gaussian.
In fact, $(p^*_{D_\alpha})^{\alpha} \propto \sum_{k = 1}^N w_k p_k^{\alpha}$, showing that the resulting barycenter is related to the Gaussian mixture obtained from the weighted sum of the elements of $\mathcal{S}$. On the other hand, $p^*_{RKL}$ is still Gaussian since the Gaussian family is closed under the product operation and \eqref{eq:barycenter:KL} is the normalized product of unnormalized Gaussians. \end{remark} In light of the above technical remark, among the considered $\alpha$-divergences we focus exclusively on the case of $\alpha \to 0$, i.e., $p_{RKL}^*$, as the barycenter naturally belongs to the Gaussian family, leaving the study of other $\alpha$-divergences as future work. The following corollary specializes \eqref{eq:barycenter:KL} for the Gaussian case. \begin{corollary}\label{corollary:1}(Gaussians $\RKL$-barycenter) Let $\mathcal{S} = \{ \Gaussian(\mu_k,\Sigma_k) \}_{k = 1\ldots N}$ and let $D(p || q) = \RKL(p|| q)$. Then, the barycenter $p_D^*\equiv{p}_{RKL}^*$ in \eqref{eq:barycenter1} is Gaussian, i.e., $p_{RKL}^* = \Gaussian(\mu_{RKL}, \Sigma_{RKL})$, with parameters: \begin{align} \Sigma_{RKL} = \left(\sum_{k = 1}^N w_k \Sigma_k^{-1} \right)^{-1}, ~ \mu_{RKL} = \Sigma_{RKL} \sum_{k = 1}^N w_k \Sigma_k^{-1} \mu_k. \label{eq:barycenter:rKL} \end{align} \label{corr:RKL_bary} \end{corollary} \begin{proof} This follows from evaluating \eqref{eq:barycenter:KL} for the given $\mathcal{S}$. \end{proof} It should be remarked that the same result as in Corollary \ref{corollary:1} was obtained in \cite{Battistelli:14} using different methodologies. Similarly to the $D_{RKL}$ divergence, the barycenter of a set of Gaussians in the Wasserstein-2 distance belongs to the same family. In the general setting, the parameters of the barycenter are obtained through a set of fixed-point equations \cite{Agueh:11}. However, to obtain analytic expressions, we consider only the case where the set of covariance matrices $\{\Sigma_k\}_{k=1 \ldots N}$ is composed of diagonal matrices, i.e., $\Sigma_k = \diag(\sigma^2_{k,1},\ldots,\sigma^2_{k,d})$. The following theorem reports the resulting closed-form expressions. \begin{theorem}(Independent Gaussians for $\W2$-barycenter) Let $\mathcal{S} = \{ \Gaussian(\mu_k,\Sigma_k) \}_{k = 1\ldots N}$ with $\Sigma_k = \diag(\sigma^2_{k,1}, \ldots, \sigma^2_{k,d})$ and let $D(p||q) = \W2(p,q)$. Then, the barycenter $p_D^*\equiv{p}_{\W2}^*$ in \eqref{eq:barycenter1} is Gaussian, i.e., $p_{\W2}^* = \Gaussian(\mu_{\W2}, \Sigma_{\W2})$, with parameters: \begin{align} \Sigma_{\W2} = \left(\sum_{k = 1}^N w_k \Sigma_k^{\frac{1}{2}} \right)^2 ,\qquad \mu_{\W2} = \sum_{k = 1}^N w_k \mu_k. \label{eq:barycenter:W2} \end{align} \label{theo:w2_bary} \end{theorem} \begin{proof} We start by showing that for the set $\mathcal{S}$, the barycenter $p_{\W2}^*$ necessarily has a diagonal covariance matrix $\Sigma_{\W2}$. Characterizing the local posteriors $\theta_k$ as $\theta_k = \mu_k + \hat{\theta}_k$ with $\hat{\theta}_k \sim \Gaussian(0, \Sigma_k)$, (\ref{eq:barycenter1}) can be expressed as \begin{align*} \left(\mu_{\W2},\Sigma_{\W2}\right) = \argmin_{\mu_b, \Sigma_b} \sum_{k = 1}^N w_k \left( ||\mu_b - \mu_k||^2 + \W2(\hat{\theta_b}, \hat{\theta}_k) \right), \end{align*} with $\hat{\theta_b} \sim \Gaussian(0,\Sigma_b)$. Following \cite[Proposition 3]{serra:2024}, we note that the distance $\W2(\hat{\theta_b}, \hat{\theta}_k)$ is minimized if and only if $\Sigma_b$ and $\Sigma_k$ commute, i.e., $\Sigma_b \Sigma_k = \Sigma_k \Sigma_b$.
Since the covariance matrices $\{\Sigma_k\}_{k = 1,\ldots,N}$ are diagonal, and therefore pairwise commutative, the minimizing covariance matrix $\Sigma_{\W2}$ necessarily commutes with $\Sigma_k$, $\forall k = 1, \ldots, N$. Hence, $\Sigma_{\W2}$ is a diagonal matrix. As shown in \cite{Agueh:11}, the fixed-point equations describing the parameters of the barycenter $p_{\W2}^*$ are \begin{align} \Sigma_{\W2} = \sum_{k = 1}^N w_k \left( \Sigma_{\W2}^{\frac{1}{2}} \Sigma_k \Sigma_{\W2}^{\frac{1}{2}} \right)^{\frac{1}{2}},\qquad \mu_{\W2} = \sum_{k = 1}^N w_k \mu_k. \label{eq:fixed-point} \end{align} Since $\Sigma_{\W2}$ and $\Sigma_k$ commute for all $k = 1,\ldots,N$, \eqref{eq:fixed-point} can be simplified to \begin{align*} \Sigma_{\W2} = \sum_{k = 1}^N w_k \left( \Sigma_k \Sigma_{\W2} \right)^{\frac{1}{2}} = \Sigma_{\W2}^{\frac{1}{2}}\sum_{k = 1}^N w_k \Sigma_k^{\frac{1}{2}} \end{align*} \begin{align*} \Rightarrow{} \Sigma_{\W2}^{\frac{1}{2}} = \sum_{k = 1}^N w_k \Sigma_k^{\frac{1}{2}}, \end{align*} which results in \eqref{eq:barycenter:W2}. This concludes the proof. \end{proof} In the sequel, we refer to the aggregation methods resulting from \eqref{eq:barycenter:rKL} and \eqref{eq:barycenter:W2} with the acronyms RKLB and WB, respectively. Lastly, we comment on the applicability of the proposed methods in the context of HBDL, where part of the model architecture is deterministic. In such a setting, the posterior distribution $p(\theta_{k,i}|\mathcal{D})$ for the $i^{th}$ layer of the $k^{th}$ client is constrained to be a point-mass located at $\mu_{k,i}$, i.e., $p(\theta_{k,i}|\mathcal{D}) = \delta_{(\theta_{k,i} = \mu_{k,i})}$ where $\delta_{x}$ is the Dirac distribution. We investigate the behavior of the proposed methods considering the posterior $p(\theta_{k,i}|\mathcal{D}) = \Gaussian(\mu_{k,i}, \epsilon)$ in the limit case of $\epsilon \to 0$. Both \eqref{eq:barycenter:rKL} and \eqref{eq:barycenter:W2} can be shown to be well-defined in the limit, resulting in the barycenter distribution $p^*(\theta_i | \mathcal{D}) = \delta_{\left(\theta_i = \sum_{k = 1}^N w_k \mu_{k,i} \right)}$. \section{Experiments} We devote this section to the experimental investigation of the proposed BA-BFL. To this end, we conduct experimental studies on the FashionMNIST \cite{xiao2017fashion}, CIFAR-10 \cite{Krizhevsky09learningmultiple}, and SVHN \cite{netzer2011reading} datasets, within a heterogeneous client setting. To compare the proposed methodologies, we consider the following baselines: \begin{itemize} \item for deterministic FL, FedAVG \cite{mcmahan2017communication} aggregates the parameters of the clients' models through arithmetic weighted average, i.e., \begin{align*} \theta^* = \sum_{k=1}^N w_k \theta_k \end{align*} \item for BFL, BFLAVG \cite{ozer2022combine,fischer2024federated} aggregates the clients' posterior distributions using different possible statistical aggregation methods detailed below. \begin{itemize} \item Empirical Arithmetic Aggregation (EAA): \[ \mu_{EAA} = \sum_{k=1}^{N} w_k \mu_k, \quad \sigma^2_{EAA} = \sum_{k=1}^{N} w_k \sigma^2_k. \] \item Gaussian Arithmetic Aggregation (GAA): \[ \mu_{GAA} = \sum_{k=1}^{N} w_k \mu_k, \quad \sigma^2_{GAA} = \sum_{k=1}^{N} w_k^2 \sigma^2_k. \] \item Arithmetic Aggregation with Log Variance (AALV): $$ \mu_{AALV}=\sum_{k=1}^N w_k \mu_k, \quad \sigma_{AALV}^2=e^{\sum_{k=1}^N w_k \log \sigma_k^2}.
$$ \end{itemize} \end{itemize} \begin{figure}[h] \centering \includegraphics[scale=0.5]{images/aggregation_gaussian.pdf} \caption{Comparison of statistical and geometric aggregations using univariate Gaussian distributions.} \label{fig:aggregations} \end{figure} Fig. \ref{fig:aggregations}, presents a toy example of aggregation of two univariate Gaussian distributions, $p = \Gaussian(0,1)$ and $q = \Gaussian(2, 0.25))$, with equal weights, $w_p = w_q = \tfrac{1}{2}$. This example illustrates the differences between the three statistical aggregation methods discussed above and the novel geometric aggregations RKLB and WB.\\ \paragraph{Metrics} To evaluate the performance of the considered FL algorithms in terms of accuracy, UQ, MC, and fairness, we employ the following metrics: \begin{itemize} \item \textit{Accuracy (Acc)}, which is employed to evaluate the performance of a classification model. In essence, accuracy provides a straightforward indication of how often the model makes correct predictions. \item \textit{Expected Calibration Error (ECE)}, which is a metric used to evaluate the calibration of probabilistic models, particularly in the context of classification tasks. Calibration refers to the alignment between predicted probabilities and actual outcomes. ECE provides a single scalar value summarizing the calibration of a model, the lower the ECE, the better calibrated the model is. \item \textit{Negative log likelihood (NLL)}, which is commonly used for optimizing neural networks in classification tasks. Additionally, NLL inherently involves probabilistic interpretations. A lower NLL indicates a better model fit, as it suggests that the model assigns higher probabilities to the true outcomes, implying a better uncertainty quantification. \item \textit{Average Accuracy ($\text{Acc}_{\text{avg}}$}), which represents the average performance in a utilitarian view of fairness. It is formalized as: $$ \text{Acc}_{\text{avg}} = \sum_{k=1}^N w_k \text{Acc}_k $$ where $\text{Acc}_k$ is the accuracy of client $k$. \item \textit{Worst 10\% Accuracies}, which serves as a measure to evaluate the accuracy of the 10\% worst performing clients, with an emphasis on egalitarian fairness. \end{itemize} \paragraph{Experimental Setup} In order to induce label shifts among the 10 clients participating in the FL scheme, we partition the samples of each label between the clients using a Dirichlet distribution as suggested in \cite{li2020practical,yurochkin2019Bayesian,wang2020federated,wang2020tackling,lin2020ensemble,ozer2022combine}. Fig. \ref{fig:label_shift} illustrates an example of such a split applied to the FashionMNIST dataset, where darker (resp. lighter) shades indicate higher (resp. lower) availability of data samples of the $i^{th}$ class at client $k$. \begin{figure}[h] \centering \includegraphics[scale=0.7]{images/data_distribution.pdf} \caption{Example of a data split simulating the label shift.} \label{fig:label_shift} \end{figure} We assign the aggregation weights to reflect the importance of each client in proportion to the volume of data locally owned, i.e., $w_k = \tfrac{|\mathcal{D}_k|}{|\mathcal{D}|}$ where $|\cdot|$ indicates the number of samples in the dataset. The architecture of the global and local models consists of two convolutional layers and three fully connected layers. Following an HBDL approach, we implement the last $n = 0, 1,2,3$ layers as Bayesian fully connected layers, whereas the remaining layers are deterministic. 
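Before turning to the comparative study, we note that, under Assumption \ref{ass:gaussian_ind}, all the Bayesian aggregation rules above (EAA, GAA, AALV, RKLB, WB) act coordinate-wise on the posterior means and variances, so each reduces to a few lines of code. The sketch below is purely illustrative (variable names are ours) and recomputes the toy example of Fig. \ref{fig:aggregations}, reading the second parameter of each Gaussian as a variance.
\begin{verbatim}
import numpy as np

mu  = np.array([0.0, 2.0])    # client means:     p = N(0, 1), q = N(2, 0.25)
var = np.array([1.0, 0.25])   # client variances
w   = np.array([0.5, 0.5])    # aggregation weights

def eaa(mu, var, w):   # Empirical Arithmetic Aggregation
    return w @ mu, w @ var

def gaa(mu, var, w):   # Gaussian Arithmetic Aggregation
    return w @ mu, (w**2) @ var

def aalv(mu, var, w):  # Arithmetic Aggregation with Log Variance
    return w @ mu, np.exp(w @ np.log(var))

def rklb(mu, var, w):  # reverse-KL barycenter (precision-weighted)
    v = 1.0 / (w @ (1.0 / var))
    return v * (w @ (mu / var)), v

def wb(mu, var, w):    # squared Wasserstein-2 barycenter
    return w @ mu, (w @ np.sqrt(var))**2

for name, rule in [("EAA", eaa), ("GAA", gaa), ("AALV", aalv),
                   ("RKLB", rklb), ("WB", wb)]:
    m, v = rule(mu, var, w)
    print(f"{name:4s}  mean = {m:.3f}  variance = {v:.3f}")
\end{verbatim}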
In our comparative study, increasing $n$ allows measuring the impact of additional Bayesian layers on the UQ, MC, fairness, and the cost-effectiveness of the FL algorithm in time. \begin{table*} \setlength{\tabcolsep}{1mm} \centering \resizebox{\textwidth}{!}{\fontsize{9}{11}\selectfont \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|} & & \multicolumn{3}{|c|}{\textbf{FashionMNIST}} & \multicolumn{3}{|c|}{\textbf{CIFAR-10}} & \multicolumn{3}{|c|}{\textbf{SVHN}} \\ \cline{3-11} \textbf{Nbl} & \textbf{Alg.} & \textbf{Acc $\uparrow$} & \textbf{ECE $\downarrow$} & \textbf{NLL $\downarrow$} & \textbf{Acc $\uparrow$} & \textbf{ECE $\downarrow$} & \textbf{NLL $\downarrow$} & \textbf{Acc $\uparrow$} & \textbf{ECE $\downarrow$} & \textbf{NLL $\downarrow$} \\ \hline \textbf{0} & \textbf{FedAVG} & 87.88 $\pms$ 0.79 & 9.42 $\pms$ 0.62 & 0.76 $\pms$ 0.04 & 61.63 $\pms$ 3.11 & 12.17 $\pms$ 2.21 & 1.18 $\pms$ 0.11 & 86.06 $\pms$ 0.45 & 10.56 $\pms$ 0.46 & 1.01 $\pms$ 0.06 \\ \hline \multirow{5}{*}{\textbf{1}} & \textbf{AALV} & 88.22 $\pms$ 0.34 & 8.59 $\pms$ 0.27 & 0.67 $\pms$ 0.02 & 63.42 $\pms$ 3.02 & 8.63 $\pms$ 1.37 & 1.09 $\pms$ 0.09 & 86.52 $\pms$ 0.29 & 8.67 $\pms$ 0.45 & 0.81 $\pms$ 0.05 \\ \cline{2-11} & \textbf{EAA} & 88.07 $\pms$ 0.22 & 8.61 $\pms$ 0.06 & 0.66 $\pms$ 0.02 & 63.69 $\pms$ 2.47 & 7.99 $\pms$ 1.10 & 1.07 $\pms$ 0.09 & 86.24 $\pms$ 0.20 & 9.00 $\pms$ 0.45 & 0.84 $\pms$ 0.06 \\ \cline{2-11} & \textbf{GAA} & 88.15 $\pms$ 0.31 & 8.49 $\pms$ 0.25 & 0.66 $\pms$ 0.01 & 63.66 $\pms$ 2.22 & 8.49 $\pms$ 2.06 & 1.08 $\pms$ 0.09 & 86.36 $\pms$ 0.28 & 8.73 $\pms$ 0.44 & 0.82 $\pms$ 0.06 \\ \cline{2-11} & \textbf{RKLB} & 88.07 $\pms$ 0.36 & 8.61 $\pms$ 0.20 & 0.67 $\pms$ 0.01 & 63.37 $\pms$ 2.62 & 8.21 $\pms$ 1.31 & 1.08 $\pms$ 0.09 & 86.26 $\pms$ 0.26 & 8.80 $\pms$ 0.32 & 0.82 $\pms$ 0.04 \\ \cline{2-11} & \textbf{WB} & \textbf{88.34 $\pms$ 0.30} & 8.39 $\pms$ 0.29 & 0.67 $\pms$ 0.02 & 63.91 $\pms$ 2.64 & 8.62 $\pms$ 1.63 & 1.08 $\pms$ 0.09 & \textbf{86.55 $\pms$ 0.37} & 8.66 $\pms$ 0.56 & 0.82 $\pms$ 0.06 \\ \hline \multirow{5}{*}{\textbf{2}} & \textbf{AALV} & 87.62 $\pms$ 0.45 & 7.93 $\pms$ 0.23 & 0.60 $\pms$ 0.01 & 65.03 $\pms$ 2.92 & 7.07 $\pms$ 1.62 & 1.03 $\pms$ 0.09 & 85.46 $\pms$ 0.10 & 7.66 $\pms$ 0.53 & 0.77 $\pms$ 0.04 \\ \cline{2-11} & \textbf{EAA} & 87.53 $\pms$ 0.57 & 8.03 $\pms$ 0.36 & 0.61 $\pms$ 0.03 & 64.02 $\pms$ 1.99 & 7.26 $\pms$ 1.82 & 1.05 $\pms$ 0.07 & 85.64 $\pms$ 0.33 & 7.58 $\pms$ 0.55 & 0.77 $\pms$ 0.05 \\ \cline{2-11} & \textbf{GAA} & 87.82 $\pms$ 0.64 & 7.68 $\pms$ 0.44 & 0.60 $\pms$ 0.03 & 64.59 $\pms$ 3.51 & 7.65 $\pms$ 2.40 & 1.04 $\pms$ 0.12 & 85.54 $\pms$ 0.44 & 7.63 $\pms$ 0.70 & 0.77 $\pms$ 0.04 \\ \cline{2-11} & \textbf{RKLB} & 87.59 $\pms$ 0.57 & 8.01 $\pms$ 0.38 & 0.63 $\pms$ 0.03 & \textbf{65.20 $\pms$ 3.99} & 6.87 $\pms$ 2.33 & 1.01 $\pms$ 0.11 & 85.57 $\pms$ 0.45 & 7.64 $\pms$ 0.79 & 0.77 $\pms$ 0.05 \\ \cline{2-11} & \textbf{WB} & 87.69 $\pms$ 0.74 & 7.96 $\pms$ 0.60 & 0.62 $\pms$ 0.03 & 64.74 $\pms$ 3.29 & 7.39 $\pms$ 2.22 & 1.03 $\pms$ 0.10 & 85.57 $\pms$ 0.51 & 7.65 $\pms$ 0.60 & 0.76 $\pms$ 0.05 \\ \hline \multirow{5}{*}{\textbf{3}} & \textbf{AALV} & 88.07 $\pms$ 0.58 & 5.55 $\pms$ 0.56 & \textbf{0.45 $\pms$ 0.03} & 63.71 $\pms$ 3.63 & 4.89 $\pms$ 1.34 & 1.01 $\pms$ 0.09 & 86.15 $\pms$ 0.80 & \textbf{2.62 $\pms$ 0.62} & 0.53 $\pms$ 0.04 \\ \cline{2-11} & \textbf{EAA} & 87.81 $\pms$ 0.54 & 5.57 $\pms$ 0.26 & 0.46 $\pms$ 0.03 & 64.45 $\pms$ 1.79 & 4.44 $\pms$ 1.09 & \textbf{0.99 $\pms$ 0.05} & 86.04 $\pms$ 0.62 & 2.80 $\pms$ 0.63 & 0.53 $\pms$ 0.03 \\ 
\cline{2-11} & \textbf{GAA} & 88.02 $\pms$ 0.55 & \textbf{5.42 $\pms$ 0.28} & \textbf{0.45 $\pms$ 0.03} & 64.40 $\pms$ 2.30 & 4.13 $\pms$ 0.71 & 0.99 $\pms$ 0.06 & 86.27 $\pms$ 1.02 & 2.69 $\pms$ 0.72 & 0.53 $\pms$ 0.05 \\ \cline{2-11} & \textbf{RKLB} & 87.77 $\pms$ 0.80 & 5.75 $\pms$ 0.40 & 0.46 $\pms$ 0.03 & 64.55 $\pms$ 2.97 & 4.55 $\pms$ 0.83 & 1.00 $\pms$ 0.08 & 86.53 $\pms$ 1.03 & 2.76 $\pms$ 0.84 & \textbf{0.52 $\pms$ 0.05} \\ \cline{2-11} & \textbf{WB} & 87.54 $\pms$ 0.54 & 5.77 $\pms$ 0.23 & 0.46 $\pms$ 0.03 & 64.30 $\pms$ 2.55 & \textbf{4.12 $\pms$ 1.18} & 1.00 $\pms$ 0.07 & 85.99 $\pms$ 0.68 & 2.91 $\pms$ 0.58 & 0.54 $\pms$ 0.03 \\ \hline \end{tabular}} \caption{Accuracy, Model Calibration, and Uncertainty Quantification Results. The table compares the performance of the global models resulting of FedAVG, BFLAVG with different aggregation methods; AALV, EAA, GAA and BA-BFL with RKL barycenter (RKLB) and Wasserstein-2 barycenter (WB). The methods are grouped based on the number of Bayesian layers (Nbl) used in the models' architecture. The evaluation is performed on three datasets (FashionMNIST, CIFAR-10, and SVHN) using the metrics of Accuracy (Acc), Expected Calibration Error (ECE), and Negative Log-Likelihood (NLL).} \label{tab:acc_uq_mc} \end{table*} \paragraph{Overview of the Results} Table \ref{tab:acc_uq_mc} shows that the compared aggregation methods using the same models' architecture, i.e., the same number of Bayesian layers, yield similar scores, suggesting comparable performance across the evaluated metrics. Depending on the score and dataset, it is difficult to conclusively determine the best method. The datasets used (FashionMNIST, SVHN, CIFAR-10) present different levels of difficulty. State-of-the-art results in heterogeneous settings indicate that the CIFAR-10 task is the most challenging, as evidenced by lower accuracy, higher ECE and NLL, and longer training times, compared to the tasks involving FashionMNIST and SVHN. Comparing all the Bayesian algorithms discussed in this paper with FedAVG as the deterministic baseline, we observe that incorporating Bayesian methods significantly enhances UQ and MC, as evidenced by improvements in ECE and NLL. This performance is achieved while maintaining similar global accuracy on FashionMNIST and SVHN and outperforming FedAVG on CIFAR-10. This superior performance on CIFAR-10 can be attributed to the task's increased difficulty in the heterogeneous setting, where Bayesian methods provide a noticeable advantage. By examining the results in Table \ref{tab:acc_uq_mc} and Figs. \ref{fig:uq_vs_nbl}, \ref{fig:time_vs_nbl}, and \ref{fig:fairness_vs_nbl}, we observe some trends depending on the number of Bayesian layers present, allowing us to analyze various aspects, including UQ, MC, fairness, and cost-effectiveness: \begin{itemize} \item \textit{UQ and MC:} Regardless of the Bayesian algorithm used, the trends reported in Fig. \ref{fig:uq_vs_nbl} indicate that incorporating a greater number of Bayesian layers into the local models improves both global MC and UQ while reducing the ECE and NLL scores. \begin{figure}[h] \centering \includegraphics[scale=0.45]{images/uq_vs_nbl.pdf} \caption{Effect of Bayesian Layers on UQ and MC.} \label{fig:uq_vs_nbl} \end{figure} \item \textit{Cost-effectiveness:} Increasing the number of Bayesian layers offers the significant advantage of enhancing UQ and MC. However, this increased Bayesian complexity often comes at the cost of reduced time efficiency. 
As the number of Bayesian layers increases, the computational demand grows, leading to longer processing time per communication round, as demonstrated in Fig. \ref{fig:time_vs_nbl}. This trade-off between improved model reliability and increased complexity must be carefully considered in practical applications. \begin{figure}[h] \centering \includegraphics[scale=0.45]{images/time_per_communication_round.pdf} \caption{Effect of Bayesian Layers on Cost-Effectiveness.} \label{fig:time_vs_nbl} \end{figure} \item \textit{Fairness:} The observed trend in Fig. \ref{fig:fairness_vs_nbl} indicates that incorporating more Bayesian layers tends to decrease the fairness of the algorithm between the different clients on the Fashion-MNIST and SVHN datasets, suggesting a trade-off between UQ and fairness. Conversely, on the CIFAR-10 dataset, which presents a more complex task as explained above, we observe an improvement in fairness with an increasing number of Bayesian layers. This may suggest that the relationship between Bayesian complexity and fairness is influenced by the nature of the dataset and task complexity. Notably, based on the results from our exploration of CIFAR-10 under the heterogeneous setting, we can assert that adopting a Bayesian approach for challenging tasks can enhance not only fairness, but also accuracy, UQ, and MC. \begin{figure}[h] \centering \includegraphics[scale=0.45]{images/fairness_vs_nbl.pdf} \caption{Effect of Bayesian Layers on Fairness.} \label{fig:fairness_vs_nbl} \end{figure} \end{itemize} \section{Conclusions} In this paper, we proposed BA-BFL, a novel geometric interpretation of barycenters as a solution to the BFL aggregation problem. Following this idea, we proposed two aggregation techniques based on the analytical results of Gaussian barycenters derived for two widely-used divergences, squared Wasserstein-2 distance and reverse KL divergence. We experimentally tested the proposed methods in a heterogeneous setting, showing performances similar to existing statistical aggregation methods. In the same setting, we also examined the impact of varying the number of Bayesian layers in an HBDL context on accuracy, uncertainty quantification, model calibration, cost-effectiveness, and fairness. For future work, we envision several extensions to our study, including expanding the family of distributions to incorporate non-parametric distributions and investigating alternative divergences. Additionally, we aim to address the personalization problem within the context of barycentric aggregation in BFL. 
\bibliographystyle{unsrt} \bibliography{references} \end{document} \usepackage{xcolor} \usepackage{bigints} \usepackage{amsthm} \newcommand{\review}[1]{#1} \newtheorem{theorem}{\bfseries Theorem}\newtheorem{lemma}{\bfseries Lemma} \newtheorem{definition}{\bfseries Definition} \newtheorem{corollary}{\bfseries Corollary} \newtheorem{hypothesis}{Hypothesis} \newtheorem{axiom}{Axiom} \newtheorem{remark}{\bfseries Remark} \newtheorem{notation}{Notation} \newtheorem{symbols}{Symbols} \newtheorem{proposition}{\bfseries Proposition} \newtheorem{example}{\bfseries Example} \newtheorem{assumptions}{\bfseries Assumptions} \newtheorem{conjecture}{\bfseries Conjecture} \newtheorem{problem}{\bfseries Problem} \newtheorem{assumption}{Assumption} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\KL}{D_{KL}} \DeclareMathOperator*{\RKL}{D_{RKL}} \DeclareMathOperator*{\W2}{W_2^2} \DeclareMathOperator*{\diag}{diag} \DeclareMathOperator*{\Gaussian}{\mathcal{N}} \DeclareMathOperator*{\pms}{ {\scriptstyle \pm}}
2412.11607v1
http://arxiv.org/abs/2412.11607v1
Nonlocal double phase Neumann and Robin problem with variable $s(\cdot,\cdot)-$order
\documentclass[reqno,a4paper,11pt]{amsart} \usepackage{amsmath, amsthm, amscd, amsfonts, amssymb, graphicx, color} \usepackage{amssymb} \usepackage[english]{babel} \usepackage[dvips, lmargin=4cm, rmargin=4cm, tmargin=4cm, bmargin=4cm]{geometry} \usepackage[colorlinks=true,urlcolor=blue,linkcolor=blue, citecolor=red]{hyperref} \usepackage{hyperref} \usepackage[T1]{fontenc} \usepackage{amssymb} \usepackage{tikz} \usetikzlibrary{backgrounds} \usepackage[english]{babel} \usepackage{enumitem} \usepackage{dsfont} \newtheorem{thm}{Theorem}[section] \newtheorem{defini}{Definition}[section] \newtheorem{rem}{Remark}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{coro}{Corollary}[section] \newtheorem{ex}{Example} [section] \newcommand{\pr}{{\bf Proof: \hspace{0.3cm}}} \numberwithin{equation}{section} \def \K {\mathbb{K} } \def \S {\mathbb{S} } \def \L {\mathbb{L} } \def \N {\mathbb{N} } \def \Z {\mathbb{Z} } \def \Q {\mathbb{Q} } \def \R {\mathbb{R} } \def \C {\mathbb{C} } \def \F {\mathfrak{F}} \def \B {\mathfrak{B}} \def \O {\mathfrak{O}} \def \U {\mathfrak{U}} \begin{document} \title[Nonlocal double phase Neumann and Robin problem ]{Nonlocal double phase Neumann and Robin problem with variable $s(\cdot,\cdot)-$order} \author[ Mohammed SRATI] {Mohammed SRATI} \address{Mohammed SRATI\newline High School of Education and Formation (ESEF), University Mohammed First, Oujda, Morocco.} \email{[email protected]} \subjclass[2010]{46E35, 35R11, 35J20, 47G20.} \keywords{ Fractional Musielak-Sobolev spaces, Nonlocal problems, Neumann boundary condition, Robin boundary condition, Direct variational method.} \maketitle \begin{abstract} In this paper, we develop some properties of the $a_{x,y}(\cdot)$-Neumann derivative for the nonlocal $s(\cdot,\cdot)$-order operator in fractional Musielak-Sobolev spaces with variable $s(\cdot,\cdot)-$order. Therefore we prove the basic proprieties of the correspondent function spaces. In the second part of this paper, by means of Ekeland's variational principal and direct variational approach, we prove the existence of weak solutions to the following double phase Neumann and Robin problem with variable $s(\cdot,\cdot)-$order : $$ \left\{ \begin{array}{clclc} (-\Delta)^{s_1(x,\cdot)}_{a^1_{(x,\cdot)}} u+(-\Delta)^{s_2(x,\cdot)}_{a^2_{(x,\cdot)}} u +\widehat{a}^1_x(|u|)u+\widehat{a}^2_x(|u|)u & = & \lambda f(x,u) \text{ in } \Omega, \\\\ \mathcal{N}^{s_1(x,\cdot)}_{a^1(x,\cdot)}u+\mathcal{N}^{s_2(x,\cdot)}_{a^2(x,\cdot)}u+\beta(x)\left( \widehat{a}^1_x(|u|)u+\widehat{a}^2_x(|u|)u \right) & = & 0 \hspace*{0.2cm} \text{ in } \R^N\setminus \Omega, \end{array} \right. $$ where $(-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}$ and $\mathcal{N}^{s_i(x,\cdot)}_{a^i(x,\cdot)}$ denote the variable $s_i(\cdot,\cdot)$-order fractional Laplace operator and the nonlocal normal $a_i(\cdot,\cdot)$-derivative of $s_i(\cdot,\cdot)$-order, respectively. \end{abstract} \tableofcontents \section{Introduction}\label{S1} The idea of a variable-order fractional derivative is an interesting extension of classical fractional derivatives. In this case, the derivative order \(\alpha\) is not constant but depends on the position (and possibly time). This approach allows modeling phenomena where local diffusion or memory properties change throughout the domain. The variable-order fractional derivative can be defined by adapting the classical definitions of fractional derivatives. 
If \(\alpha = \alpha(x)\) is a function of \(x\), we can define the variable-order fractional derivative in several ways. As a result, we can define fractional spaces with a variable order of derivation. In this sense Biswas et al \cite{s1} defined the fractional Sobolev space with a fractional exponent with a variable order, subsequently Srati in \cite{srati} presented an extension of these results, namely the fractional Musielak Sobolev space with variable $s(\cdot,\cdot)-$order, also he defined the nonlocal $s(\cdot,\cdot)$-order operator of elliptic type defined as follows {\small $$ \begin{aligned} (-\Delta)^{s(x,\cdot)}_{a_{(x,\cdot)}}u(x)=2\lim\limits_{\varepsilon\searrow 0} \int_{\R^N\setminus B_\varepsilon(x)} a_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s(x,y)}} \dfrac{dy}{|x-y|^{N+s(x,y)} } \end{aligned} $$} for all $x\in \R^N$, where:\\ $\bullet$ $s(\cdot,\cdot)~ : \overline{\Omega}\times\overline{\Omega}\rightarrow (0,1)$ is a continuous function such that: \begin{equation} s(x,y)=s(y,x)~~ \forall x,y \in \overline{\Omega}\times\overline{\Omega}, \end{equation} \begin{equation} 0<s^-=\inf\limits_{\overline{\Omega}\times\overline{\Omega}}s(x,y)\leqslant s^+=\sup\limits_{\overline{\Omega}\times\overline{\Omega}}s(x,y)<1. \end{equation} $\bullet$ $a_{(x,y)}(t):=a(x,y,t) : \overline{\Omega}\times\overline{\Omega}\times \R\longrightarrow \R$ is symmetric function : \begin{equation}\label{n4} a(x,y,t)=a(y,x,t) ~~ \forall(x,y,t)\in \overline{\Omega}\times\overline{\Omega}\times \R,\end{equation} In this work, we continue the study of the class of fractional problems in the new space $W^{s(x,y)}L_{\varPhi_{x,y}}(\Omega)$. Our main objective is to investigate, for the first time, a fractional problem involving the nonlocal $s(\cdot,\cdot)$-order elliptic operator with nonlocal Neumann and Robin boundary conditions.\\ The Neumann boundary condition, credited to the German mathematician Neumann, is also known as the boundary condition of the second kind. In this type of boundary condition, the value of the gradient of the dependent variable normal to the boundary, $\frac{\partial \phi}{\partial n}$, is prescribed on the boundary. \\ In the last years, great attention has been devoted to the study of nonlocal problems with fractional Neumann boundary condition, In this contex, Dipierro, Ros-Oton, and Valdinoci, in \cite{N5} introduce an extension for the classical Neumann condition $\frac{\partial \phi}{\partial n} = 0$ on $\partial\Omega$ consists in the nonlocal prescription \begin{equation}\label{n1} \begin{aligned} \mathcal{N}^s_2u(x)= \int_{\Omega} \dfrac{u(x)-u(y)}{|x-y|^{N+2s}}dy ,~~\forall x\in \R^N\setminus \Omega. \end{aligned} \end{equation} Other Neumann problems for the fractional Laplacian (or other nonlocal operators) were introduced in \cite{N1,N2,N3}. All these different Neumann problems for nonlocal operators recover the classical Neumann problem as a limit case, and most of them have clear probabilistic interpretations as well. An advantage of this approach (\ref{n1}) is that the problem has a variational structure. 
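Although the prescription (\ref{n1}) is nonlocal, its numerical evaluation at a fixed exterior point is elementary, since $x \in \R^N\setminus \Omega$ keeps the kernel away from its singularity. The following toy Python sketch, given only as an illustration and with all names ours, approximates $\mathcal{N}^s_2u(x)$ by a midpoint rule in dimension $N = 1$ with $\Omega = (0,1)$.
\begin{verbatim}
# Midpoint-rule approximation of the nonlocal normal derivative
#     N_2^s u(x) = int_Omega (u(x) - u(y)) / |x - y|^{1 + 2s} dy
# in dimension N = 1, for a point x outside Omega = (0, 1).
import numpy as np

def neumann_derivative(u, x, s, a=0.0, b=1.0, m=10000):
    y = a + (np.arange(m) + 0.5) * (b - a) / m   # midpoints of Omega
    return np.sum((u(x) - u(y)) / np.abs(x - y)**(1 + 2 * s)) * (b - a) / m

u = lambda t: np.sin(np.pi * t)                  # a smooth test function
print(neumann_derivative(u, x=1.5, s=0.4))
\end{verbatim}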
In \cite{N4}, Mugnai and Proietti Lippi introduced an extension of (\ref{n1}) for $p\neq 2$.\\ Bahrouni et al. in \cite{N6} introduced the nonlocal Neumann and Robin boundary conditions corresponding to the fractional $p(\cdot,\cdot)$-Laplacian as follows: \begin{equation}\label{n3} \begin{aligned} \mathcal{N}^s_{p(x,\cdot)}u(x)= \int_{\Omega} \dfrac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))}{|x-y|^{N+sp(x,y)} }dy,~~\forall x\in \R^N\setminus \Omega, \end{aligned} \end{equation} where $p: \R^{2N} \longrightarrow (1, +\infty)$ is a symmetric, continuous and bounded function and $p(\cdot) = p(\cdot,\cdot)$. Here $\mathcal{N}^s_{p(x,\cdot)}$ is the nonlocal normal $p(\cdot,\cdot)$-derivative (or $p(\cdot,\cdot)$-Neumann boundary condition) and describes the natural Neumann boundary condition in the presence of the fractional $p(\cdot,\cdot)$-Laplacian; thus (\ref{n3}) extends the notion of the nonlocal normal derivative for the fractional $p$-Laplacian. See also \cite{kamali}.\\ Another extension of the $p$-Neumann boundary condition was proposed by Bahrouni and Salort in \cite{N7}, namely {\small$$ \begin{aligned} \mathcal{N}^s_{a(\cdot)}u(x)= \int_{\Omega} a\left( \dfrac{|u(x)-u(y)|}{|x-y|^s }\right)\dfrac{u(x)-u(y)}{|x-y|^s} \dfrac{dy}{|x-y|^{N+s}},~~\forall x\in \R^N\setminus \Omega, \end{aligned} $$ } where $a = A'$ with $A$ a Young function and $s \in (0, 1)$. For other problems driven by nonlocal operators of elliptic type in fractional Orlicz-Sobolev spaces, we refer the reader to \cite{SRN1,SRN2,SRT}. \\ Very recently, Srati et al. \cite{srati2} introduced the natural Neumann boundary condition in the presence of the fractional $a_{x,y}(\cdot)$-Laplacian in fractional Musielak-Sobolev spaces, namely {\small$$ \begin{aligned} \mathcal{N}^s_{a(x,\cdot)}u(x)= \int_{\Omega} a_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^s }\right)\dfrac{u(x)-u(y)}{|x-y|^s} \dfrac{dy}{|x-y|^{N+s}},~~\forall x\in \R^N\setminus \Omega, \end{aligned} $$ } which denotes the $a_{(x,\cdot)}$-Neumann boundary condition and represents the natural Neumann boundary condition for $(-\Delta)^s_{a_{(x,\cdot)}}$ in the fractional Musielak-Sobolev space. \\ In the variable-order setting, Biswas, Bahrouni and Carvalho \cite{x1} introduced the nonlocal Neumann and Robin boundary conditions corresponding to the fractional $s(\cdot,\cdot)\,\&\,p(\cdot,\cdot)$-Laplacian as follows: \begin{equation}\label{n33} \begin{aligned} \mathcal{N}^{s(x,\cdot)}_{p(x,\cdot)}u(x)= \int_{\Omega} \dfrac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))}{|x-y|^{N+s(x,y)p(x,y)}}dy,~~\forall x\in \R^N\setminus \Omega, \end{aligned} \end{equation} where $p(\cdot,\cdot): \R^{2N} \longrightarrow (1, +\infty)$ and $s(\cdot,\cdot)~ : \overline{\Omega}\times\overline{\Omega}\rightarrow (0,1)$ are continuous and symmetric functions.\\ The use of fractional Neumann conditions with variable order in modular spaces, such as Orlicz spaces or Lebesgue spaces with variable exponent, extends the flexibility and applicability of mathematical models to more complex and realistic scenarios. Indeed, Orlicz spaces make it possible to handle growth conditions that differ from the polynomial growth dictated by $L^p$ spaces; for example, they can handle exponential growth, which is useful in many applications such as nonlinear elasticity, fluid dynamics, and image processing. Variable exponent Lebesgue spaces, in turn, generalize classical Lebesgue spaces by allowing the exponent $p$ to vary with the position $x$.
This flexibility is crucial in problems where the regularity of the solution may change across the domain, such as in materials with spatially varying properties or in image processing where different regions of the image may require different levels of smoothing.\\ For that, in this paper, we introduce the fractional Neumann boundary condition with variable order in the presence of the nonlocal $s(\cdot,\cdot)$-order elliptic operator. Therefore we are concerned with the existence of weak solutions to the following Neumann-Robin problem $$\label{P} (\mathcal{P}_a) \left\{ \begin{array}{clclc} (-\Delta)^{s_1(x,\cdot)}_{a^1_{(x,\cdot)}} u+(-\Delta)^{s_2(x,\cdot)}_{a^2_{(x,\cdot)}} u +\widehat{a}^1_x(|u|)u+\widehat{a}^2_x(|u|)u & = & \lambda f(x,u) \text{ in } \Omega, \\\\ \mathcal{N}^{s_1(x,\cdot)}_{a^1(x,\cdot)}u+\mathcal{N}^{s_2(x,\cdot)}_{a^2(x,\cdot)}u+\beta(x)\left( \widehat{a}^1_x(|u|)u+\widehat{a}^2_x(|u|)u \right) & = & 0 \hspace*{0.2cm} \text{ in } \R^N\setminus \Omega, \end{array} \right. $$ where $\Omega$ is an open bounded subset in $\R^N$, $N\geqslant 1$, with Lipschitz boundary $\partial \Omega$, $f: \Omega\times \R \longrightarrow \R$ is a Carath\'eodory function, $\beta\in L^{\infty}(\R^N\setminus \Omega)$ such that $\beta\geqslant 0$ in $\R^N\setminus \Omega$ and $(-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}$ $(i = 1, 2)$ are the nonlocal integro-differential operators of elliptic type defined as follows {\small $$ \begin{aligned} (-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x)=2\lim\limits_{\varepsilon\searrow 0} \int_{\R^N\setminus B_\varepsilon(x)} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dy}{|x-y|^{N+s_i(x,y)} }, \end{aligned} $$} for all $x\in \R^N$, where:\\ $\bullet$ $s_i(\cdot,\cdot)~ : \overline{\Omega}\times\overline{\Omega}\rightarrow (0,1)$ is a continuous function such that: \begin{equation} s_i(x,y)=s_i(y,x),~~ \forall x,y \in \overline{\Omega}\times\overline{\Omega}, \end{equation} \begin{equation} 0<s_i^-=\inf\limits_{\overline{\Omega}\times\overline{\Omega}}s_i(x,y)\leqslant s_i^+=\sup\limits_{\overline{\Omega}\times\overline{\Omega}}s_i(x,y)<1. \end{equation} $\bullet$ $(x,y,t)\mapsto a^i_{(x,y)}(t):=a^i(x,y,t) : \overline{\Omega}\times\overline{\Omega}\times \R\longrightarrow \R$ $(i = 1, 2)$ are symmetric functions : \begin{equation}\label{n4} a^i(x,y,t)=a^i(y,x,t) ~~ \forall(x,y,t)\in \overline{\Omega}\times\overline{\Omega}\times \R,\end{equation} and the functions : $\varphi^i(\cdot,\cdot,\cdot) : \overline{\Omega}\times\overline{\Omega}\times \R \longrightarrow \R$ $(i = 1, 2)$ defined by $$ \varphi^i_{x,y}(t):=\varphi^i(x,y,t)= \left\{ \begin{array}{clclc} a^i(x,y,|t|)t & \text{ for }& t\neq 0, \\\\ 0 & \text{ for } & t=0, \end{array} \right. $$ are increasing homeomorphisms from $\R$ onto itself. For $i = 1, 2$, let $$\varPhi^i_{x,y}(t):=\varPhi^i(x,y,t)=\int_{0}^{t}\varphi^i_{x,y}(\tau)d\tau~~\text{ for all } (x,y)\in \overline{\Omega}\times\overline{\Omega},~~\text{ and all } t\geqslant 0.$$ Then, $\varPhi^i_{x,y}$ are a Musielak functions (see \cite{mu}). Also, we take $ \widehat{a}^i_x(t):=\widehat{a}^i(x,t)=a^i_{(x,x)}(t) ~~ \forall~ (x,t)\in \overline{\Omega}\times \R$ $(i = 1, 2)$. Then the functions $\widehat{\varphi}^i(\cdot,\cdot) : \overline{\Omega}\times \R \longrightarrow \R$ defined by : $$ \widehat{\varphi}^i_{x}(t):=\widehat{\varphi}^i(x,t)= \left\{ \begin{array}{clclc} \widehat{a}^i(x,|t|)t & \text{ for }& t\neq 0, \\\\ 0 & \text{ for } & t=0, \end{array} \right. 
$$ are increasing homeomorphisms from $\R$ onto itself. If we set \begin{equation}\label{phi} \widehat{\varPhi}^i_{x}(t):=\widehat{\varPhi}^i(x,t)=\int_{0}^{t}\widehat{\varphi}^i_{x}(\tau)d\tau ~~\text{ for all}~~ t\geqslant 0, \end{equation} then $\widehat{\varPhi}^i_{x}$ is also a Musielak function.\\ Furthermore, the operators $\mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}$ $(i = 1, 2)$ are defined by {\small$$ \begin{aligned} \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x)= \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dy}{|x-y|^{N+s_i(x,y)}},~~\forall x\in \R^N\setminus \Omega, \end{aligned} $$ } which denote the $a^i_{(x,\cdot)}$-Neumann boundary conditions and represent the natural Neumann boundary condition for $(-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}$ in the fractional Musielak-Sobolev space. This paper is organized as follows. In Section \ref{S1}, we set the problem \hyperref[P]{$(\mathcal{P}_{a})$} and the related hypotheses, and we introduce the new Neumann boundary condition associated with the nonlocal $s(\cdot,\cdot)$-order elliptic operator. Section \ref{S2} is devoted to recalling some properties of $s(x,y)$-fractional Musielak-Sobolev spaces. In Section \ref{S3}, we introduce the function space in which weak solutions of \hyperref[P]{$(\mathcal{P}_{a})$} are sought, prove some of its properties, and state the corresponding Green formula for problems such as \hyperref[P]{$(\mathcal{P}_{a})$}. In Section \ref{S4}, by means of Ekeland's variational principle and a direct variational approach, we obtain the existence of a nontrivial weak solution of our problem. \section{Preliminary results}\label{S2} To investigate Problem \hyperref[P]{$(\mathcal{P}_{a})$}, we work with the fractional Musielak-Sobolev spaces with variable $s(\cdot,\cdot)$-order. Let us recall the definition and some elementary properties of these spaces. We refer the reader to \cite{benkirane,benkirane2,3,SRH,srati} for further details and for the proofs of some of the results in this section. For the functions $\widehat{\varPhi}^i_x$ $(i=1,2)$ given in (\ref{phi}), we introduce the Musielak spaces $$L_{\widehat{\varPhi}^i_x} (\Omega)=\left\lbrace u : \Omega \longrightarrow \R \text{ measurable }: \int_\Omega\widehat{\varPhi}^i_x(\lambda |u(x)|)dx < \infty \text{ for some } \lambda>0 \right\rbrace. $$ The space $L_{\widehat{\varPhi}^i_x} (\Omega)$ is a Banach space endowed with the Luxemburg norm $$||u||_{\widehat{\varPhi}^i_x}=\inf\left\lbrace \lambda>0 \text{ : }\int_\Omega\widehat{\varPhi}^i_x\left( \dfrac{|u(x)|}{\lambda}\right) dx\leqslant 1\right\rbrace. $$ The conjugate function of $\varPhi^i_{x,y}$ $(i=1,2)$ is defined by $\overline{\varPhi^i}_{x,y}(t)=\int_{0}^{t}\overline{\varphi^i}_{x,y}(\tau)d\tau$ $\text{ for all } (x,y)\in\overline{\Omega}\times\overline{\Omega}$ $\text{ and all } t\geqslant 0$, where $\overline{\varphi^i}_{x,y} : \R\longrightarrow \R$ is given by $\overline{\varphi^i}_{x,y}(t):=\overline{\varphi^i}(x,y,t)=\sup\left\lbrace \alpha \text{ : } \varphi^i(x,y,\alpha)\leqslant t\right\rbrace.$ Furthermore, we have the following H\"older type inequality \begin{equation} \left| \int_{\Omega}uvdx\right| \leqslant 2||u||_{\widehat{\varPhi}^i_x}||v||_{\overline{\widehat{\varPhi}^i}_x}\hspace*{0.5cm} \text{ for all } u \in L_{\widehat{\varPhi}^i_x}(\Omega) \text{ and } v\in L_{\overline{\widehat{\varPhi}^i}_x}(\Omega). \end{equation}
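To illustrate the conjugate construction (this computation is recorded only as a model case and is not used later), consider $\varPhi^i_{x,y}(t)=\dfrac{t^{p(x,y)}}{p(x,y)}$, that is, $\varphi^i_{x,y}(t)=t^{p(x,y)-1}$ for $t\geqslant 0$, with $p$ continuous, symmetric and with values in $(1,+\infty)$. Then $$\overline{\varphi^i}_{x,y}(t)=\sup\left\lbrace \alpha \text{ : } \alpha^{p(x,y)-1}\leqslant t\right\rbrace=t^{\frac{1}{p(x,y)-1}} \quad \text{ and } \quad \overline{\varPhi^i}_{x,y}(t)=\dfrac{t^{p'(x,y)}}{p'(x,y)}, \qquad \dfrac{1}{p(x,y)}+\dfrac{1}{p'(x,y)}=1,$$ so the conjugate of a power-type Musielak function is again of power type, and the above H\"older type inequality essentially reduces to the classical H\"older inequality in variable exponent Lebesgue spaces, with the constant $2$.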
Throughout this paper, for $i=1,2$, we assume that there exist two positive constants ${\varphi_i}^+$ and $\varphi_i^-$ such that \begin{equation}\label{v1}\tag{$\varPhi_1$} 1<\varphi_i^-\leqslant\dfrac{t\varphi^i_{x,y}(t)}{\varPhi^i_{x,y}(t)}\leqslant \varphi_i^+<+\infty\text{ for all } (x,y)\in\overline{\Omega}\times\overline{\Omega}~~\text{ and all } t\geqslant 0. \end{equation} This relation implies that \begin{equation}\label{A2} 1<\varphi_i^-\leqslant \dfrac{t\widehat{\varphi}^i_{x}(t)}{\widehat{\varPhi}^i_{x}(t)}\leqslant\varphi_i^+<+\infty,\text{ for all } x\in\overline{\Omega}~~\text{ and all } t\geqslant 0.\end{equation} It follows that $\varPhi^i_{x,y}$ and $\widehat{\varPhi}^i_{x}$ satisfy the global $\Delta_2$-condition (see \cite{ra}), written $\varPhi^i_{x,y}\in \Delta_2$ and $\widehat{\varPhi}^i_{x}\in \Delta_2$, that is, \begin{equation}\label{r1} \varPhi^i_{x,y}(2t)\leqslant K_1\varPhi^i_{x,y}(t)~~ \text{ for all } (x,y)\in\overline{\Omega}\times\overline{\Omega},~~\text{ and all } t\geqslant 0, \end{equation} and \begin{equation}\label{rr1} \widehat{\varPhi}^i_{x}(2t)\leqslant K_2\widehat{\varPhi}^i_{x}(t) ~~\text{ for any } x\in\overline{\Omega},~~\text{ and all } t\geqslant 0, \end{equation} where $K_1$ and $K_2$ are two positive constants. Furthermore, for $i=1,2$, we assume that $\varPhi^i_{x,y}$ satisfies the following condition \begin{equation}\label{f2.}\tag{$\varPhi_2$} \text{ the function } [0, \infty) \ni t\mapsto \varPhi^i_{x,y}(\sqrt{t}) \text{ is convex. } \end{equation} Now, due to the nonlocality of the operators $(-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}$ $(i = 1, 2)$, we define the $s(x,y)$-fractional Musielak-Sobolev spaces introduced in \cite{srati} as follows \begingroup\makeatletter\def\f@size{9}\check@mathfonts$$ W^{s_i(x,y)}{L_{\varPhi^i_{x,y}}}(\Omega)=\Bigg\{u\in L_{\widehat{\varPhi^i}_x}(\Omega) : \int_{\Omega} \int_{\Omega} \varPhi^i_{x,y}\left( \dfrac{\lambda| u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}< \infty \text{ for some } \lambda >0 \Bigg\}. $$\endgroup These spaces can be equipped with the norm \begin{equation}\label{r2} ||u||_{s_i(x,y),\varPhi^i_{x,y}}=||u||_{\widehat{\varPhi}^i_x}+[u]_{s_i(x,y),\varPhi^i_{x,y}}, \end{equation} where $[\cdot]_{s_i(x,y),\varPhi^i_{x,y}}$ is the Gagliardo seminorm defined by $$[u]_{s_i(x,y),\varPhi^i_{x,y}}=\inf \Bigg\{\lambda >0 : \int_{\Omega} \int_{\Omega} \varPhi^i_{x,y}\left( \dfrac{|u(x)- u(y)|}{\lambda|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}\leqslant 1 \Bigg\}. $$ \begin{thm}$($\cite{srati}$)$. Let $\Omega$ be an open subset of $\R^N$. For $i=1,2$, the space $W^{s_i(x,y)}L_{\varPhi^i_{x,y}}(\Omega)$ is a Banach space with respect to the norm $(\ref{r2})$, and it is separable $($resp. reflexive$)$ if and only if $\varPhi^i_{x,y} \in \Delta_2$ $($resp. $\varPhi^i_{x,y}\in \Delta_2 $ and $\overline{\varPhi^i}_{x,y}\in \Delta_2$$)$. Furthermore, if $\varPhi^i_{x,y} \in \Delta_2$ and $\varPhi^i_{x,y}(\sqrt{t})$ is convex, then the space $W^{s_i(x,y)}L_{\varPhi^i_{x,y}}(\Omega)$ is uniformly convex. \end{thm} \begin{defini}$($\cite{benkirane}$)$. We say that $\varPhi^i_{x,y}$ $(i=1,2)$ satisfies the fractional boundedness condition, written $\varPhi^i_{x,y}\in \mathcal{B}_{f}$, if \begin{equation}\tag{$\varPhi_3$} \label{v3} \sup\limits_{(x,y)\in \overline{\Omega}\times\overline{\Omega}}\varPhi^i_{x,y}(1)<\infty. \end{equation} \end{defini} \begin{thm} $($\cite{benkirane,srati}$)$. \label{TT} Let $\Omega$ be an open subset of $\R^N$.
Assume that $\varPhi^i_{x,y}\in \mathcal{B}_{f}$ $(i=1,2)$. Then, $$C^2_0(\Omega)\subset W^{s_i(x,y)}L_{\varPhi^i_{x,y}}(\Omega).$$ \end{thm} For $i=1,2$ we denote by $\widehat{\varPhi^i}_{x}^{-1}$ the inverse function of $\widehat{\varPhi^i}_{x}$ which satisfies the following conditions: \begin{equation}\label{15} \int_{0}^{1} \dfrac{\widehat{\varPhi^i}_{x}^{-1}(\tau)}{\tau^{\frac{N+s_i^-}{N}}}d\tau<\infty~~ \text{ for all } x\in \overline{\Omega} \text{ and } ~~i=1,2, \end{equation} \begin{equation}\label{16n} \int_{1}^{\infty} \dfrac{\widehat{\varPhi^i}_{x}^{-1}(\tau)}{\tau^{\frac{N+s_i^-}{N}}}d\tau=\infty ~~\text{ for all }x\in \overline{\Omega} \text{ and } ~~i=1,2. \end{equation} If (\ref{16n}) is satisfied, we define the inverse Musielak conjugate function of $\widehat{\varPhi}_x$ as follows \begin{equation}\label{17} (\widehat{\varPhi}^*_{i})^{-1}(t)=\int_{0}^{t}\dfrac{\widehat{\varPhi^i}_{x}^{-1}(\tau)}{\tau^{\frac{N+s_i^-}{N}}}d\tau ~~\text{ for } ~~i=1,2. \end{equation} \begin{thm}\cite{srati}\label{th2.} Let $\Omega$ be a bounded open subset of $\R^N$ with $C^{0,1}$-regularity and bounded boundary. If $(\ref{15})$ and $(\ref{16n})$ hold, then \begin{equation}\label{18} W^{s_i(x,y)}{L_{\varPhi^i_{x,y}}}(\Omega)\hookrightarrow L_ {\widehat{\varPhi}^*_{i}}(\Omega)~~\text{ for } i=1,2. \end{equation} Moreover, the embedding \begin{equation}\label{27} W^{s_i(x,y)}{L_{\varPhi^i_{x,y}}}(\Omega)\hookrightarrow L_{B_x}(\Omega)~~\text{ for } i=1,2, \end{equation} is compact for all $B_x\prec\prec \widehat{\varPhi}^*_{i}$ $~~\text{ for } ~~i=1,2.$ \end{thm} Finally, the proof of our existence result is based on the following Ekeland's variational principle theorem and direct variational approach. \begin{thm}\label{th1}(\cite{ek}) Let V be a complete metric space and $F : V \longrightarrow \R\cup \left\lbrace +\infty\right\rbrace$ be a lower semicontinuous functional on $V$, that is bounded below and not identically equal to $+\infty$. Fix $\varepsilon>0$ and a point $u\in V$ such that $$F(u)\leqslant \varepsilon +\inf\limits_{x\in V}F(x).$$ Then for every $\gamma > 0$, there exists some point $v\in V$ such that : $$F(v)\leqslant F(u),$$ $$d(u,v)\leqslant \gamma$$ and for all $w\neq v$ $$F(w)> F(v)-\dfrac{\varepsilon}{\gamma}d(v,w).$$ \end{thm} \begin{thm}\label{th2} (\cite{110}) Suppose that $Y$ is a reflexive Banach space with norm $||.||$ and let $V\subset Y$ be a weakly closed subset of $Y$. Suppose $E : V \longrightarrow \R \cup \left\lbrace +\infty\right\rbrace $ is coercive and (sequentially) weakly lower semi-continuous on $V$ with respect to $Y$, that is, suppose the following conditions are fulfilled: \begin{itemize} \item[$\bullet$] $E(u)\rightarrow \infty$ as $||u||\rightarrow \infty$, $u\in V$. \item[$\bullet$] For any $u\in V$, any sequence $\left\lbrace u_n\right\rbrace $ in $V$ such that $u_n\rightharpoonup u$ weakly in $X$ there holds: $$E(u)\leqslant \liminf_{n\rightarrow \infty}E(u_n).$$ \end{itemize} Then $E$ is bounded from below on $V$ and attains its infimum in $V$. 
\end{thm} \section{Some qualitative properties of $\mathcal{N}^{s(x,\cdot)}_{a(x,\cdot)}$}\label{S3} The aim of this section is to give the basic properties of the fractional $a^i(x,y)$-Laplacian with the associated $a^i(x,y)$-Neumann boundary condition.\\ Let $u : \R^N \longrightarrow \R$ be a measurable function, for $i=1,2$ we set $$\|u\|_{X_i}=[u]_{s_i(x,y),\varPhi^i_{x,y}}+\|u\|_{\widehat{\varPhi}^i_x}+\|u\|_{\widehat{\varPhi}^i_x,\beta,C\Omega}$$ where {\small$$[u]_{s_i(x,y),\varPhi^i_{x,y}}=\inf \Bigg\{\lambda >0 : \int_{\R^{2N}\setminus (C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{|u(x)- u(y)|}{\lambda|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}\leqslant 1 \Bigg\} $$ } and $$\|u\|_{\widehat{\varPhi}^i_x,\beta,C\Omega}=\inf\left\lbrace \lambda>0 \text{ : }\int_{C\Omega}\beta(x)\widehat{\varPhi}^i_x\left( \dfrac{|u(x)|}{\lambda}\right) dx\leqslant 1\right\rbrace $$ with $C\Omega =\R^N\setminus \Omega$. We define $$X_i=\left\lbrace u : \R^N\longrightarrow \R~~\text{ measurable } : \|u\|_{X_i}<\infty\right\rbrace.$$ \begin{rem} It is easy to see that $\|\cdot\|_{X_i}$ is a norm on $X_i$ $(i=1,2)$. We only show that if $\|u\|_{X_i}=0$, then $u=0$ a.e. in $\R^N$. Indeed, form $\|u\|_{X_i}=0$, we get $\|u\|_{\widehat{\varPhi}^i_x}=0$, which implies that \begin{equation}\label{N1} u=0 ~~\text{a.e. in } \Omega \end{equation} and \begin{equation}\label{N2} \int_{\R^{2N}\setminus (C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{|u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}=0. \end{equation} By $(\ref{N2})$, we deduce that $u(x)=u(y)$ in $\R^{2N}\setminus (C\Omega)^2$, that is $u=c\in \R$ in $\R^N$, and by $(\ref{N1})$ we have $u=0$ a.e. in $\R^N$. \end{rem} \begin{prop}\label{rem} Note that the norm $\|\cdot\|_{X_i}$ is equivalent on $X_i$ to $$\|u\|_i:=\inf\left\lbrace \lambda>0~~:~~\rho_{s_i(x,y)}\left( \dfrac{u}{\lambda}\right) \leqslant 1\right\rbrace $$ where $(i=1,2)$, and the modular function $\rho_{s_i(x,y)}~~ : X\longrightarrow \R$ is defined by \begin{equation}\label{smodN} \begin{aligned} \rho_{s_i(x,y)}(u)=&\int_{\R^{2N}\setminus (C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{|u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}\\ &+\int_{\Omega}\widehat{\varPhi}^i_x\left(|u(x)|\right) dx+\int_{C\Omega}\beta(x)\widehat{\varPhi}^i_x\left( |u(x)|\right) dx. \end{aligned} \end{equation} \end{prop} The proof is similar to \cite[Proposition 2.1]{benkirane}; see also \cite[Proposition 3]{sr5}.\\ An important role in manipulating the $s(\cdot,\cdot)$-fractional Musielak-Sobolev spaces is played by the modular function $(\ref{smodN})$. It is worth noticing that the relation between the norm and the modular shows an equivalence between the topology defined by the norm and that defined by the modular. \begin{prop}\label{Nmod} Assume that (\ref{v1}) is satisfied. Then, for any $u \in X_i$ $(i=1,2)$, the following relations hold true: \begin{equation}\label{Nmod1} ||u||_i>1\Longrightarrow ||u||_i^{{\varphi^i}^-} \leqslant \rho_{s_i(x,y)}(u)\leqslant ||u||_i^{{\varphi^i}^+}, \end{equation} \begin{equation}\label{Nmod2} ||u||_i<1\Longrightarrow ||u||_i^{{\varphi^i}^+} \leqslant \rho_{s_i(x,y)}(u)\leqslant ||u||_i^{{\varphi^i}^-}. \end{equation} \end{prop} The proof is similar to \cite[Proposition 2.2]{benkirane}. \begin{prop} $\left(X_i, \|\cdot\|_{X_i}\right)$ $(i=1,2)$ is a reflexive Banach space. \end{prop} \begin{proof} Now, we prove that $X_i$ is complete. For this, let $\left\lbrace u_n\right\rbrace $ be a Cauchy sequence in $X_i$. 
In particular, $\left\lbrace u_n\right\rbrace $ is a Cauchy sequence in $L_{\widehat{\varPhi}^i_x}(\Omega)$ and so there exists $u\in L_{\widehat{\varPhi}^i_x}(\Omega)$ such that $$u_n\longrightarrow u~~\text{in}~~L_{\widehat{\varPhi}^i_x}(\Omega)~~\text{and a.e. in } \Omega.$$ Then, we can find $Z_1\subset \R^N$ such that \begin{equation}\label{N3} |Z_1|=0~~\text{ and } u_n(x)\longrightarrow u(x)~~\text{ for every } x\in \Omega\setminus Z_1. \end{equation} For any $u : \R^N\longrightarrow \R$ and any $(x,y)\in \R^{2N}$, we set $$E_u(x,y)=\dfrac{(u(x)-u(y))}{|x-y|^{s_i(x,y)}}\mathcal{X}_{\R^{2N}\setminus (C\Omega)^2}(x,y).$$ Since $\left\lbrace u_n\right\rbrace $ is a Cauchy sequence in $X_i$, the sequence $\left\lbrace E_{u_n}\right\rbrace$ is a Cauchy sequence in $L_{\varPhi^i_{x,y}}\left( \R^{2N},d\mu\right)$, where $\mu$ is the measure on $\R^{2N}$ given by $d\mu :=|x-y|^{-N}dxdy$. Hence, up to a subsequence, $\left\lbrace E_{u_n}\right\rbrace$ converges to a limit, which we denote by $E_u$, in $L_{\varPhi^i_{x,y}}\left( \R^{2N},d\mu\right)$ and a.e. in $\R^{2N}$. Then, we can find $Z_2\subset \R^{2N}$ such that \begin{equation}\label{N4} |Z_2|=0~~\text{ and } E_{u_n}(x,y)\longrightarrow E_u(x,y)~~\text{ for every } (x,y)\in \R^{2N}\setminus Z_2. \end{equation} For any $x\in \Omega$, we set $$S_x:=\left\lbrace y\in \R^N~~:~~(x,y)\in \R^{2N}\setminus Z_2\right\rbrace $$ $$W:=\left\lbrace (x,y)\in \R^{2N},~~x\in \Omega~~\text{and}~~y\in \R^N\setminus S_x\right\rbrace $$ $$V:=\left\lbrace x\in \Omega~~:~~|\R^N\setminus S_x|=0\right\rbrace.$$ Let $(x,y)\in W$. Then $y\in \R^N\setminus S_x$, so $(x,y)\notin \R^{2N}\setminus Z_2$, i.e. $(x,y)\in Z_2$. Hence $$W\subset Z_2,$$ and therefore, by $(\ref{N4})$, $$|W|=0.$$ Then, by Fubini's theorem, we have $$0=|W|=\int_{\Omega}| \R^N\setminus S_x|dx,$$ which implies that $| \R^N\setminus S_x|=0$ for a.e. $x\in \Omega$. It follows that $|\Omega\setminus V|=0$. This, together with $(\ref{N3})$, implies that $$|\Omega\setminus (V\setminus Z_1)|=|(\Omega\setminus V)\cup Z_1|\leqslant |\Omega\setminus V|+|Z_1|=0.$$ In particular, $V\setminus Z_1\neq \varnothing$, so we can fix $x_0\in V\setminus Z_1$, and by $(\ref{N3})$ it follows that $$\lim\limits_{n\rightarrow \infty} u_n(x_0)=u(x_0).$$ In addition, since $x_0\in V,$ we obtain $|\R^N\setminus S_{x_0}|=0$. Then, for almost all $y\in \R^N,$ we have $(x_0,y)\in \R^{2N}\setminus Z_2$, and hence, by $(\ref{N4})$, $$\lim\limits_{n\rightarrow \infty} E_{u_n}(x_0,y)=E_u(x_0,y).$$ Since $\Omega\times C\Omega \subset \R^{2N}\setminus (C\Omega)^2$, we have $$E_{u_n}(x_0,y)=\dfrac{(u_n(x_0)-u_n(y))}{|x_0-y|^{s_i(x_0,y)}}\mathcal{X}_{\R^{2N}\setminus (C\Omega)^2}(x_0,y)$$ for almost all $y\in C\Omega.$ This implies $$ \begin{aligned} \lim\limits_{n\rightarrow \infty} u_n(y)&=\lim\limits_{n\rightarrow \infty}\left( u_n(x_0) -|x_0-y|^{s_i(x_0,y)}E_{u_n}(x_0,y)\right)\\ &= u(x_0) -|x_0-y|^{s_i(x_0,y)}E_{u}(x_0,y) \end{aligned} $$ for almost all $y\in C\Omega.$ Combining this with $(\ref{N3})$, we see that $u_n$ converges a.e. in $\R^N$ to a function still denoted by $u$.
Since $\left\lbrace u_n\right\rbrace$ is a Cauchy sequence in $X_i$, for any $\varepsilon>0$ there exists $N_\varepsilon>0$ such that for any $n>N_\varepsilon$, applying Fatou's lemma we have $$ \begin{aligned} \varepsilon\geqslant & \liminf\limits_{k\rightarrow \infty}\|u_n-u_k\|_{X_i}\\ &\geqslant c\liminf\limits_{k\rightarrow \infty}\|u_n-u_k\|_i\\ &\geqslant c \liminf\limits_{k\rightarrow \infty}\left( \rho_{s_i(x,y)}(u_n-u_k)\right) ^{\frac{1}{{\varphi^i}^\pm}}\\ &\geqslant c \left( \rho_{s_i(x,y)}(u_n-u)\right) ^{\frac{1}{{\varphi^i}^\pm}}\\ &\geqslant c \|u_n-u\|_i^{\frac{{\varphi^i}^\pm}{{\varphi^i}^\pm}}\\ &\geqslant c \|u_n-u\|_{X_i}^{\frac{{\varphi^i}^\pm}{{\varphi^i}^\pm}}, \end{aligned} $$ where $c$ is a positive constant given by Proposition $\ref{rem}$. This implies that $u_n$ converges to $u$ in $X_i$, and so $X_i$ is a complete space. Now, we show that $X_i$ is a reflexive space. To this end, we consider the space $$Y_i=L_{\widehat{\varPhi}^i_x}(\Omega)\times L_{\widehat{\varPhi}^i_x}(C\Omega)\times L_{\varPhi^i_{x,y}}\left( \R^{2N}\setminus (C\Omega)^2,d\mu\right) $$ endowed with the norm $$\|u\|_{Y_i}=[u]_{s_i(x,y),\varPhi^i_{x,y},\R^{2N}\setminus (C\Omega)^2}+\|u\|_{\widehat{\varPhi}^i_x}+\|u\|_{\widehat{\varPhi}^i_x,\beta,C\Omega}.$$ We note that $(Y_i, \|\cdot\|_{Y_i})$ is a reflexive Banach space. We consider the map $T : X_i\longrightarrow Y_i$ defined by $$T(u)=\left(u,u,D^{s_i(x,y)}u\right),$$ where $D^{s_i(x,y)}u:=\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}.$ By construction, we have that $$\|T(u)\|_{Y_i}=\|u\|_{X_i}.$$ Hence, $T$ is an isometry from $X_i$ into the reflexive space $Y_i$. This shows that $X_i$ is reflexive. \end{proof} \begin{prop}\label{N10} Let $\Omega$ be a bounded open subset of $\R^N$ with $C^{0,1}$-regularity and bounded boundary. If $(\ref{15})$ and $(\ref{16n})$ hold, then \begin{equation}\label{N18} X_i\hookrightarrow L_{\widehat{\varPhi}^*_{i}}(\Omega)~~\text{ for } i=1,2. \end{equation} In particular, the embedding \begin{equation}\label{N27} X_i\hookrightarrow L_{B_x}(\Omega) ~~\text{ for } i=1,2, \end{equation} is compact for all $B_x\prec\prec \widehat{\varPhi}^*_{i}$ $~~\text{ for } ~~i=1,2.$ \end{prop} \begin{proof} Since $\Omega\times\Omega\subset \R^{2N}\setminus (C\Omega)^2$, we have $$||u||_{s_i(x,y),\varPhi^i_{x,y}}\leqslant \|u\|_{X_i}~~\text{for all }~~u\in X_i \text{ and } i=1,2.$$ Therefore, by Theorem \ref{th2.}, we get the desired result. \end{proof}
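To illustrate the modular framework above in the simplest situation (this model case is recorded only for orientation and is not used in the sequel), assume that $\varPhi^i_{x,y}(t)=\dfrac{t^{p_i}}{p_i}$ with a constant exponent $p_i\in(1,+\infty)$, that is, $a^i_{(x,y)}(t)=t^{p_i-2}$. Then $\dfrac{t\varphi^i_{x,y}(t)}{\varPhi^i_{x,y}(t)}=p_i$, so condition (\ref{v1}) holds with $\varphi_i^-=\varphi_i^+=p_i$, and the modular $(\ref{smodN})$ is homogeneous of degree $p_i$, in the sense that $\rho_{s_i(x,y)}\left(\dfrac{u}{\lambda}\right)=\lambda^{-p_i}\rho_{s_i(x,y)}(u)$ for all $\lambda>0$. Consequently, $$\rho_{s_i(x,y)}(u)=\|u\|_i^{p_i}~~\text{ for all } u\in X_i,$$ and both inequalities in Proposition \ref{Nmod} become equalities in this case. Now, by the integration by parts formula, we have the following result.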
\begin{prop} Let $u\in X_i$ $(i=1,2)$, then $$\int_\Omega (-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x) dx=-\int_{\R^N\setminus \Omega}\mathcal{N}^{s_i(x,\cdot)}_{a^i(x,\cdot)}u(x)dx.$$ \end{prop} \begin{proof} Since the role of $x$ and $y$ are symmetric and $a^i(\cdot,\cdot)$ and $s_i(\cdot,\cdot)$ are a symmetric functions, we obtain $$ \begin{aligned} \int_{\Omega} \int_{\Omega} & a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &= - \int_{\Omega} \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(y)-u(x)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &=- \int_{\Omega} \int_{\Omega} a^i_{(y,x)}\left( \dfrac{|u(y)-u(x)|}{|x-y|^{s_i(y,x)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(y,x)}} \dfrac{dydx}{|x-y|^{N+s_i(y,x)}}\\ &=- \int_{\Omega} \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+{s_i(x,y)}}}.\\ \end{aligned} $$ This implies that $$2\int_{\Omega} \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}=0$$ that is, $$ \int_{\Omega} \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}=0.$$ Hence, we have that $$ \begin{aligned} \int_\Omega (-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x) dx &=\int_{\Omega} \int_{\R^N} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dydx}{|x-y|^{N+s_i(x,y)}}\\ &= \int_{\Omega} \int_{\R^N\setminus \Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dydx}{|x-y|^{N+s_i(x,y)}}\\ & ~~+\int_{\Omega} \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dydx}{|x-y|^{N+s_i(x,y)}}\\ &= \int_{\R^N\setminus \Omega} \left( \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dx}{|x-y|^{N+s_i(x,y)}}\right) dy\\ &= -\int_{\R^N\setminus \Omega}\mathcal{N}^{s_i(x,\cdot)}_{a^i(x,\cdot)}u(y)dy. \end{aligned} $$ \end{proof} \begin{prop} For all $u\in X_i$ $(i=1,2)$, we have $$ \begin{aligned} \dfrac{1}{2} & \int_{\R^{2N}\setminus(C\Omega)^2} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}\dfrac{v(x)-v(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N}}\\ &=\int_\Omega v (-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u dx+\int_{C \Omega} v \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}udx.\\ \end{aligned} $$ \end{prop} \begin{proof} By symmetric, and since $\R^{2N}\setminus(C\Omega)^2=(\Omega\times \R^N)\cup (C \Omega\times\Omega)$. 
Then, we have \begin{equation}\label{N7} \begin{aligned} \dfrac{1}{2} & \int_{\R^{2N}\setminus(C\Omega)^2} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}\dfrac{v(x)-v(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N}}\\ &=\dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} v(x) a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &~~- \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} v(y) a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &=\dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} v(x) a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &~~+ \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} v(y) a^i_{(y,x)}\left( \dfrac{|u(y)-u(x)|}{|y-x|^{s_i(y,x)} }\right)\dfrac{u(y)-u(x)}{|y-x|^{s_i(y,x)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &= \int_{\R^{2N}\setminus(C\Omega)^2} v(x) a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &=\int_{\Omega}v(x)\int_{\R^{N}} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &~~ +\int_{C\Omega} v(x) \int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &= \int_\Omega v (-\Delta)^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u dx+\int_{C \Omega} v \mathcal{N}^{s_i(x,\cdot)}_{a^i(x,\cdot)}udx.\\ \end{aligned} \end{equation} \end{proof} However, the natural solution space to study Problem \hyperref[P]{$(\mathcal{P}_{a})$} is given by $$X=X_1\cap X_2$$ endowed with the norm $$||u||_X=\|u\|_{X_1}+\|u\|_{X_2}$$ Clearly $X$ is still a reflexive and separable Banach space with respect to $\|u\|_{X}$. It is not difficult to see that we can make use of another norm on $X$ equivalent to $\|u\|_{X}$, given as $$ \|u\|:=\|u\|_X=\inf \left\{\eta \geq 0: \rho_{s(x,y)}\left(\frac{u}{\eta}\right) \leq 1\right\} $$ where the combined modular $\rho_{s(x,y)}: X \rightarrow \mathbb{R}$ is defined as $$ \rho_{s(x,y)}(u)=\rho_{s_1(x,y)}(u)+\rho_{s_2(x,y)}(u), $$ such that $\rho_{s_1(x,y)}$ and $\rho_{s_2(x,y)}$ are described as in (\ref{smodN}). Arguing similarly to Proposition \ref{Nmod}, we can get the following comparison result. \begin{prop}\label{Nmod.} Assume that (\ref{v1}) is satisfied. Then, for any $u \in X$, the following relations hold true: \begin{equation}\label{Nmod1.} ||u||>1\Longrightarrow ||u||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace } \leqslant \rho(u)\leqslant ||u||^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace }, \end{equation} \begin{equation}\label{Nmod2.} ||u||<1\Longrightarrow ||u||^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace} \leqslant \rho(u)\leqslant ||u||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}. \end{equation} \end{prop} Based on the integration by part formula, we are now in position to state the natural definition of a weak solution of \hyperref[P]{$(\mathcal{P}_{a})$}. 
First, to simplify the notation, for arbitrary functions $u, v\in X$ we set {\small$$ \begin{aligned} \mathcal{A}_{s}(u,v) = & \sum_{i=1}^{2}\Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}\dfrac{v(x)-v(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N}}\\ &+\int_{\Omega}\widehat{a}^i_{x}(|u|)u vdx+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx \Big) . \end{aligned} $$} We say that $u\in X$ is a weak solution of \hyperref[P]{$(\mathcal{P}_{a})$} if \begin{equation}\label{N6} \mathcal{A}_{s}(u,v) =\lambda\int_\Omega f(x,u)vdx \end{equation} for all $v\in X$. \begin{rem} Note that the definition of a weak solution in $(\ref{N6})$ uses the symmetry of $a_{x,y}$. This is why, as in \cite{benkirane,benkirane2}, the condition (\ref{n4}) must be imposed for the definition of a weak solution to be meaningful. \end{rem} As a consequence of definition $(\ref{N6})$, we have the following result. \begin{prop} Let $u\in X$ be a weak solution of \hyperref[P]{$(\mathcal{P}_{a})$}. Then $$\sum_{i=1}^{2}\left( \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u+\beta(x) \widehat{a}^i_{x}(|u|)u\right) =0~~\text{ a.e. in } \R^N\setminus \Omega.$$ \end{prop} \begin{proof} First, we take $v\in X$ with $v=0$ in $\Omega$ as a test function in $(\ref{N6})$. By a computation similar to $(\ref{N7})$, we have $$ \begin{aligned} 0=&\mathcal{A}_{s}(u,v)\\ &= \sum_{i=1}^{2} \Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}\dfrac{v(x)-v(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N}}\\ &~~+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx\Big)\\ &= \sum_{i=1}^{2} \Big( \int_{\Omega} \int_{\R^N\setminus\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}v(x) \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &~~+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx\Big)\\ &= \sum_{i=1}^{2} \Big( \int_{\R^N\setminus\Omega}v(x)\int_{\Omega} a^i_{(x,y)}\left( \dfrac{|u(x)-u(y)|}{|x-y|^{s_i(x,y)} }\right)\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}} \dfrac{dxdy}{|x-y|^{N+s_i(x,y)}}\\ &~~+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx\Big)\\ &= \sum_{i=1}^{2} \Big( \int_{\R^N\setminus\Omega}v(x)\mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x)dx+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx\Big)\\ &= \sum_{i=1}^{2} \Big( \int_{\R^N\setminus\Omega}\left( \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x)+\beta(x) \widehat{a}^i_{x}(|u|)u\right) v(x) dx\Big).\\ \end{aligned} $$ This implies that $$ \sum_{i=1}^{2} \left( \int_{\R^N\setminus\Omega}\left( \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u(x)+\beta(x) \widehat{a}^i_{x}(|u|)u\right) v(x) dx\right) =0$$ for any $v\in X$ with $v=0$ in $\Omega$. In particular, this holds for every $v\in C^\infty_c(\R^N\setminus \Omega)$, and so $$ \sum_{i=1}^{2} \left( \mathcal{N}^{s_i(x,\cdot)}_{a^i_{(x,\cdot)}}u+\beta(x) \widehat{a}^i_{x}(|u|)u\right) =0~~\text{ a.e. in } \R^N\setminus \Omega.$$ \end{proof}
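Before turning to the existence results, let us record, only for illustration, what the weak formulation $(\ref{N6})$ becomes in the model case $a^i_{(x,y)}(t)=t^{p_i(x,y)-2}$ $(i=1,2)$, with $p_i:\overline{\Omega}\times\overline{\Omega}\rightarrow(1,+\infty)$ continuous and symmetric. In this case $\widehat{a}^i_x(t)=t^{p_i(x,x)-2}$ and, at least formally, {\small$$ \mathcal{A}_{s}(u,v) = \sum_{i=1}^{2}\Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} \dfrac{|u(x)-u(y)|^{p_i(x,y)-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+s_i(x,y)p_i(x,y)}}\,dxdy +\int_{\Omega}|u|^{p_i(x,x)-2}u v\,dx+\int_{C \Omega}\beta(x)|u|^{p_i(x,x)-2}u v\,dx \Big), $$} so that $(\ref{N6})$ corresponds to the weak formulation of a nonlocal double phase problem with variable order and exponents and a Robin-type exterior condition, in the spirit of \cite{x1}. \section{Existence results and proofs}\label{S4} The aim of this section is to prove the existence of a weak solution of \hyperref[P]{$(\mathcal{P}_{a})$}.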
In what follows, we will work with the modular norm $\|.\|$ and we denote by $\left( X^*, ||.||_*\right)$ the dual space of $\left( X, ||.||\right)$.\\ Next, we suppose that $f : \Omega \times \R \rightarrow \R$ is a Carath\'eodory function such that \begin{equation}\label{f1}\tag{$f_1$} |f(x,t)|\leqslant c_1|t|^{q(x)-1}, \end{equation} \begin{equation}\label{f2}\tag{$f_2$} c_2|t|^{q(x)}\leqslant F(x,t):=\int_{0}^{t}f(x,\tau)d\tau, \end{equation} for all $x\in \Omega$ and all $t\in \R$, where $c_1$ and $c_2$ are two positive constants, and $q\in C(\overline{\Omega})$ with $1<q^+\leqslant\min\left\lbrace \varphi_1^- , \varphi_2^- \right\rbrace$, and \begin{equation}\label{8} \lim_{t\rightarrow \infty}\left(\sup\limits_{x\in \overline{\Omega}}\dfrac{|t|^{q^+}}{(\widehat{\varPhi^i}_{x,s_i^-})_*(kt)}\right)=0 ~~~~i=1,2~~~~\forall k>0. \end{equation} \begin{rem}\label{rem1} By $(\ref{8})$, we can apply Proposition $\ref{N10}$ to obtain that $X_{i}$ is compactly embedded in $L^{q^+}(\Omega)$ for $i=1,2$. This fact, combined with the continuous embedding of $L^{q^+}(\Omega)$ into $L^{q(x)}(\Omega)$, ensures that $X_{i}$ is compactly embedded in $L^{q(x)}(\Omega)$ for $i=1,2$. In particular, the embedding $$X \hookrightarrow L^{q(x)}(\Omega)$$ is compact. \end{rem} For simplicity, we set \begin{equation} D^{s_i(x,y)}u:=\dfrac{u(x)-u(y)}{|x-y|^{s_i(x,y)}}. \label{h} \end{equation} Now, we are ready to state our existence result. \begin{thm}\label{2.1.} Assume that $f$ satisfies \hyperref[f1]{$(f_1)$} and \hyperref[f2]{$(f_2)$}. Then there exist $\lambda_*>0$ and $\lambda^*>0$ such that, for any $\lambda\in(0,\lambda_*)\cup [\lambda^*,\infty)$, problem \hyperref[P]{$(\mathcal{P}_{a})$} has a nontrivial weak solution. \end{thm} For each $\lambda>0$, we define the energy functional $J_\lambda : X\longrightarrow \R$ by {\small\begin{equation}\label{14.} \begin{aligned} J_\lambda(u)=&\displaystyle \sum_{i=1}^{2}\Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{ |u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}+\int_{\Omega}\widehat{\varPhi^i}_x\left( |u(x)|\right) dx\\ &+\int_{C \Omega}\beta(x) \widehat{\varPhi^i}_x\left( |u(x)|\right) dx\Big)-\lambda\int_{\Omega}F(x,u)dx. \end{aligned} \end{equation}} \begin{rem} We note that the functional $J_\lambda : X\longrightarrow \R$ in $(\ref{14.})$ is well defined. Indeed, if $u\in X$, then $u \in L^{q(x)}(\Omega)$. Hence, by condition \hyperref[f1]{$(f_1)$}, $$ |F(x,u)|\leqslant\int_{0}^{|u|}|f(x,t)|dt\leqslant c_1|u|^{q(x)},$$ and thus $$\int_{\Omega}|F(x,u)|dx<\infty.$$ \end{rem}
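As a simple model nonlinearity (recorded here only for illustration, and not needed in what follows), one may take $f(x,t)=|t|^{q(x)-2}t$ with $q$ as above. Then $|f(x,t)|=|t|^{q(x)-1}$, so \hyperref[f1]{$(f_1)$} holds with $c_1=1$, and $$F(x,t)=\int_{0}^{t}|\tau|^{q(x)-2}\tau\, d\tau=\dfrac{|t|^{q(x)}}{q(x)}\geqslant \dfrac{1}{q^+}|t|^{q(x)},$$ so \hyperref[f2]{$(f_2)$} holds with $c_2=1/q^+$, provided the compatibility condition $(\ref{8})$ is satisfied by $q$. We first establish some basic properties of $J_\lambda$. \begin{prop}\label{prop1} Assume condition \hyperref[f1]{$(f_1)$} is satisfied.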
Then, for each $\lambda>0$, $J_\lambda\in C^1\left( X, \R \right)$ with the derivative given by $$ \begin{aligned} \left\langle J'_\lambda(u),v\right\rangle =&\sum_{i=1}^{2}\Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} a^i_{x,y}(|D^{s_i(x,y)}u|)D^{s_i(x,y)}u D^{s_i(x,y)}vd\mu+\int_{\Omega}\widehat{a}^i_{x}(|u|)u vdx\\ &+\int_{C \Omega}\beta(x) \widehat{a}^i_{x}(|u|)u vdx\Big)-\lambda\int_{\Omega}f(x,u)vdx \end{aligned} $$ for all $u,v \in X$.\end{prop} The proof of this proposition is similar to that of \cite[Proposition 3.1]{benkirane}.\\ Now, define the functionals $I_1, I_2 : X\longrightarrow \R$ by $$ \begin{aligned} I_1(u)=&\displaystyle \sum_{i=1}^{2}\Big( \dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{ |u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}+\int_{\Omega}\widehat{\varPhi^i}_x\left( |u(x)|\right) dx\\ &+\int_{C \Omega}\beta(x) \widehat{\varPhi^i}_x\left( |u(x)|\right) dx\Big) \end{aligned}$$ and $$I_2(u)= \int_{\Omega}F(x,u)dx.$$ \begin{prop}\label{lem3} The functional $J_\lambda$ is weakly lower semi-continuous. \end{prop} \begin{proof} First, note that $I_1$ is lower semi-continuous in the weak topology of $X$. Indeed, since $\varPhi^i_{x,y}$ is convex, $I_1$ is also convex. Let $\left\lbrace u_n\right\rbrace \subset X$ with $u_n\rightharpoonup u$ weakly in $ X$. Then, by convexity of $I_1$, we have $$I_1(u_n)-I_1(u)\geqslant \left\langle I_1'(u),u_n-u\right\rangle,$$ and since $\left\langle I_1'(u),u_n-u\right\rangle\longrightarrow 0$ by the weak convergence, we obtain $$I_1(u)\leqslant \liminf I_1(u_n),$$ that is, the map $I_1$ is weakly lower semi-continuous. On the other hand, since $X$ is compactly embedded in $L^{q(x)}(\Omega)$ (Remark \ref{rem1}) and \hyperref[f1]{$(f_1)$} holds, we have $$\lim\limits_{n\rightarrow \infty}\int_{\Omega}F(x,u_n)dx=\int_{\Omega}F(x,u)dx.$$ Thus, we find $$J_\lambda(u)\leqslant \liminf J_\lambda(u_n).$$ Therefore, $J_\lambda$ is weakly lower semi-continuous and Proposition $\ref{lem3}$ is proved. \end{proof} \begin{lem}\label{4.4.} Assume that the sequence $\left\lbrace u_n\right\rbrace $ converges weakly to $u$ in $X$ and \begin{equation}\label{35.} \limsup_{n\rightarrow \infty}\left\langle I_1'(u_n),u_n-u\right\rangle \leqslant 0. \end{equation} Then the sequence $\left\lbrace u_n\right\rbrace$ converges strongly to $u$ in $X$. \end{lem} \begin{proof} Since $u_n$ converges weakly to $u$ in $X$, the sequence $\left\lbrace ||u_n||\right\rbrace $ is bounded. Hence, by Proposition $\ref{Nmod}$, we deduce that $\left\lbrace I_1(u_n)\right\rbrace$ is bounded. So, up to a subsequence, $$I_1(u_n)\longrightarrow c.$$ Since $I_1$ is weakly lower semi-continuous, we get $$I_1(u)\leqslant \liminf_{n\rightarrow \infty}I_1(u_n)=c.$$ On the other hand, by the convexity of $I_1$, we have $$I_1(u)\geqslant I_1(u_n)+\left\langle I_1'(u_n),u_n-u\right\rangle .$$ Next, by the hypothesis $(\ref{35.})$, we conclude that $$I_1(u)=c.$$ Since $\left\lbrace \dfrac{u_n+u}{2} \right\rbrace $ converges weakly to $u$ in $X$ and $I_1$ is sequentially weakly lower semi-continuous, we have \begin{equation}\label{32.} c=I_1(u)\leqslant \liminf_{n\rightarrow \infty}I_1\left( \dfrac{u_n+u}{2}\right). \end{equation} Assume by contradiction that $\left\lbrace u_n\right\rbrace$ does not converge to $u$ in $X$.
Hence, there exist a subsequence of $\left\lbrace u_n\right\rbrace $, still denoted by $\left\lbrace u_n\right\rbrace $, and $\varepsilon_0>0$ such that $$\Bigg|\Bigg|\dfrac{u_n-u}{2}\Bigg|\Bigg|\geqslant \dfrac{\varepsilon_0}{2}.$$ By Proposition $\ref{Nmod}$, we have $$I_1\left( \dfrac{u_n-u}{2}\right) \geqslant \varepsilon_0^{\varphi_1^\pm}+\varepsilon_0^{\varphi_2^\pm}.$$ On the other hand, by the conditions (\ref{v1}) and (\ref{f2.}), we can apply \cite[Lemma 2.1]{Lam} in order to obtain \begin{equation}\label{33.} \dfrac{1}{2}I_1(u_n)+\dfrac{1}{2}I_1(u)-I_1\left( \dfrac{u_n+u}{2}\right) \geqslant I_1\left( \dfrac{u_n-u}{2}\right) \geqslant \varepsilon_0^{\varphi_1^\pm}+\varepsilon_0^{\varphi_2^\pm}. \end{equation} It follows from $(\ref{33.})$ that \begin{equation}\label{34.} I_1(u)-(\varepsilon_0^{\varphi_1^\pm}+\varepsilon_0^{\varphi_2^\pm}) \geqslant \limsup_{n\rightarrow \infty}I_1\left( \dfrac{u_n+u}{2}\right). \end{equation} From $(\ref{32.})$ and $(\ref{34.})$ we obtain a contradiction. This shows that $\left\lbrace u_n\right\rbrace$ converges strongly to $u$ in $X$. \end{proof} \begin{lem}\label{lem5} Assume the hypotheses of Theorem $\ref{2.1.}$ are fulfilled. Then there exist $\rho, \alpha>0$ and $\lambda_*>0$ such that for any $\lambda\in (0,\lambda_*),~~J_\lambda(u)\geqslant \alpha>0$ for any $u\in X$ with $||u||=\rho$. \end{lem} \begin{proof} Since $X$ is continuously embedded in $L^{q(x)}(\Omega)$, there exists a positive constant $c>0$ such that \begin{equation}\label{28} ||u||_{q(x)}\leqslant c||u|| ~~\forall u\in X. \end{equation} We fix $\rho\in (0,1)$ such that $\rho<\dfrac{1}{c}$. Then relation $(\ref{28})$ and condition \hyperref[f1]{$(f_1)$} imply that for any $u\in X$ with $||u||=\rho$: $$\begin{aligned} J_\lambda(u)&\geqslant ||u||^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace}-\lambda c_1 c^{q^\pm}||u||^{q^\pm}\\ &=\rho^{q^\pm}\left( \rho^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace-{q^\pm}}-\lambda c^{q^\pm} c_1\right). \end{aligned} $$ In view of the above inequality, if we define \begin{equation}\label{29} \lambda_*=\dfrac{\rho^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace-{q^\pm}}}{2c_1 c^{q^\pm}}, \end{equation} then for any $u\in X$ with $||u||=\rho$ and for $\alpha=\dfrac{\rho^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace}}{2}>0$, we have $$J_\lambda(u)\geqslant \alpha>0,~~\forall \lambda\in (0,\lambda_*).$$ The proof of Lemma $\ref{lem5}$ is complete. \end{proof} \begin{lem}\label{lem6} Assume the hypotheses of Theorem $\ref{2.1.}$ are fulfilled. Then there exists $\theta\in X$ such that $\theta\geqslant 0$, $\theta\not\equiv 0$, and $J_\lambda(t\theta)<0$ for $t>0$ small enough. \end{lem} \begin{proof} Let $\Omega_0\subset \subset \Omega$, and let $x_0\in \Omega_0$ and $0 < R < 1$ be such that $B_{2R}(x_0)\subset \Omega_0$, where $B_{2R}(x_0)$ denotes the ball of radius $2R$ centered at $x_0$ in $\R^N$. Let $\theta\in C_0^\infty(B_{2R}(x_0))$ satisfy $0\leqslant \theta \leqslant 1$ and $\theta \equiv 1$ in $B_{R}(x_0)$.
Theorem $\ref{TT}$ implies that $||\theta||<\infty.$ Then for $0 < t < 1$, by $\hyperref[f2]{(f_2)}$, we have $$ \begin{aligned} J_\lambda(t\theta)=&\sum_{i=1}^{2}\Big( \displaystyle\dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{|t\theta(x)-t\theta(y)|}{|x-y|^{s_i(x,y)} }\right) \dfrac{dxdy}{|x-y|^N} +\int_{\Omega}\widehat{\varPhi}^i_x(|t \theta|)dx\\ &+\int_{C \Omega}\beta(x) \widehat{\varPhi}^i_{x}(|t\theta|)dx \Big) -\lambda\int_{\Omega}F(x,t\theta)dx\\ &\leqslant ||t\theta||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}-\lambda c_2\int_{\Omega_0} |t\theta|^{q(x)}dx\\ &\leqslant t^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}||\theta||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}-\lambda c_2t^{q^\pm}\int_{\Omega_0} |\theta|^{q(x)}dx. \end{aligned} $$ Since $\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace > q^+$ and $\displaystyle\int_{\Omega_0} |\theta|^{q(x)}dx>0$, we have $J_\lambda(t_0\theta)<0$ for $t_0>0$ sufficiently small. \end{proof} \begin{lem}\label{lem7} Assume the hypotheses of Theorem $\ref{2.1.}$ are fulfilled. Then for any $\lambda>0$ the functional $J_\lambda$ is coercive. \end{lem} \begin{proof} For each $u\in X$ with $||u||>1$ and $\lambda>0$, relations $(\ref{Nmod1.})$, $(\ref{28})$ and the condition \hyperref[f1]{$(f_1)$} imply $$ \begin{aligned} J_\lambda(u)=& \sum_{i=1}^{2}\Big(\displaystyle\dfrac{1}{2} \int_{\R^{2N}\setminus(C\Omega)^2} \varPhi^i_{x,y}\left( \dfrac{ |u(x)- u(y)|}{|x-y|^{s_i(x,y)}}\right) \dfrac{dxdy}{|x-y|^N}+\int_{\Omega}\widehat{\varPhi}^i_x\left( |u(x)|\right) dx\\ &+\int_{C \Omega}\beta(x) \widehat{\varPhi}^i_x\left( |u(x)|\right) dx\Big)-\lambda\int_{\Omega}F(x,u)dx\\ &\geqslant ||u||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}-\lambda c_1\int_{\Omega} |u|^{q(x)}dx\\ &\geqslant ||u||^{\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace}-\lambda c_1c||u||^{q^\pm}. \end{aligned} $$ Since $\min\left\lbrace \varphi_1^-, \varphi_2^-\right\rbrace>q^+$, the above inequality implies that $J_\lambda(u)\longrightarrow \infty$ as $||u||\rightarrow \infty$, that is, $J_\lambda$ is coercive. \end{proof} \begin{proof}[\noindent \textbf{Proof of Theorem $\ref{2.1.}$}] Let $\lambda_*>0$ be defined as in $(\ref{29})$ and $\lambda\in (0,\lambda_*)$. By Lemma $\ref{lem5}$, it follows that on the boundary of the ball of radius $\rho$ centered at the origin in $X$, denoted by $B_\rho(0)$, we have $$\inf\limits_{\partial B_\rho(0)}J_\lambda>0.$$ On the other hand, by Lemma $\ref{lem6}$, there exists $\theta \in X$ such that $J_\lambda(t\theta)<0$ for all $t>0$ small enough. Moreover, for any $u\in B_\rho(0)$, we have $$ \begin{aligned} J_\lambda(u)\geqslant ||u||^{\max\left\lbrace \varphi_1^+, \varphi_2^+\right\rbrace}-\lambda c_1c||u||^{q^\pm}. \end{aligned} $$ It follows that $$-\infty<c:=\inf\limits_{\overline{B_\rho(0)}} J_\lambda<0.$$ We now let $0<\varepsilon <\inf\limits_{\partial B_\rho(0)} J_\lambda - \inf\limits_{B_\rho(0)} J_\lambda.$ Applying Theorem $\ref{th1}$ to the functional $J_\lambda : \overline{B_\rho(0)}\longrightarrow \R$, we find $u_\varepsilon \in \overline{B_\rho(0)}$ such that $$ \left\{ \begin{array}{clclc} J_\lambda(u_\varepsilon)&<\inf\limits_{\overline{B_\rho(0)}} J_\lambda+\varepsilon,& \\\\ J_\lambda(u_\varepsilon)&< J_\lambda(u)+\varepsilon ||u-u_\varepsilon||,& \text{ } u\neq u_\varepsilon. \end{array} \right.
$$ Since $J_\lambda(u_\varepsilon)\leqslant \inf\limits_{\overline{B_\rho(0)}} J_\lambda+\varepsilon\leqslant \inf\limits_{B_\rho(0)} J_\lambda+\varepsilon \leqslant \inf\limits_{\partial B_\rho(0)} J_\lambda$, we deduce that $u_\varepsilon \in B_\rho(0)$. Now, we define $\Lambda_\lambda : \overline{B_\rho(0)}\longrightarrow \R$ by $$\Lambda_\lambda(u)=J_\lambda(u)+\varepsilon||u-u_\varepsilon||.$$ It is clear that $u_\varepsilon$ is a minimum point of $\Lambda_\lambda$, and then $$\dfrac{\Lambda_\lambda(u_\varepsilon+t v)-\Lambda_\lambda(u_\varepsilon)}{t}\geqslant 0$$ for small $t>0$ and any $v\in B_\rho(0).$ The above relation yields $$\dfrac{J_\lambda(u_\varepsilon+t v)-J_\lambda(u_\varepsilon)}{t}+\varepsilon||v||\geqslant 0.$$ Letting $t\rightarrow 0$, it follows that $\left\langle J'_{\lambda}(u_\varepsilon),v\right\rangle +\varepsilon ||v||\geqslant 0$, and we infer that $||J'_{\lambda}(u_\varepsilon)||_*\leqslant \varepsilon$. We deduce that there exists a sequence $\left\lbrace v_n\right\rbrace \subset B_\rho(0)$ such that \begin{equation}\label{10} J_\lambda(v_n) \longrightarrow c \text{ and } J'_\lambda(v_n)\longrightarrow 0. \end{equation} It is clear that $\left\lbrace v_n\right\rbrace $ is bounded in $X$. Thus, there exists $v\in X$ such that, up to a subsequence, $\left\lbrace v_n\right\rbrace $ converges weakly to $v$ in $X$. Since $X$ is compactly embedded in $L^{q(x)}(\Omega)$, we also have $v_n\longrightarrow v$ in $L^{q(x)}(\Omega)$. This, combined with condition \hyperref[f1]{$(f_1)$} and H\"older's inequality, implies \begin{equation}\label{11} \begin{aligned} \left| \int_{\Omega} f(x,v_n)(v_n-v)dx\right| &\leqslant c_1\int_{\Omega} \left| v_n\right| ^{q(x)-1}\left| v_n-v\right| dx\\ &\leqslant c_1\left| \left| |v_n|^{q(x)-1}\right| \right|_{\frac{q(x)}{q(x)-1}}\left| \left| v_n-v\right| \right|_{q(x)} \longrightarrow 0. \end{aligned} \end{equation} On the other hand, by $(\ref{10})$ we have \begin{equation}\label{12} \lim\limits_{n\rightarrow \infty}\left\langle J'_\lambda(v_n) , v_n-v\right\rangle =0. \end{equation} Relations $(\ref{11})$ and $(\ref{12})$ imply $$ \lim\limits_{n\rightarrow \infty}\left\langle I'_1(v_n) , v_n-v\right\rangle =0.$$ Thus, by Lemma $\ref{4.4.}$ we find that $\left\lbrace v_n\right\rbrace $ converges strongly to $v$ in $X$, so by $(\ref{10})$ $$ J_\lambda(v)=c<0 \text{ and } J'_\lambda(v)=0.$$ We conclude that $v$ is a nontrivial weak solution of problem \hyperref[P]{$(\mathcal{P}_{a})$} for any $\lambda\in(0,\lambda_*)$. Next, by Lemma $\ref{lem7}$ and Proposition $\ref{lem3}$, we infer that $J_\lambda$ is coercive and weakly lower semi-continuous in $X$ for all $\lambda>0$. Then Theorem $\ref{th2}$ implies that there exists a global minimizer $u_\lambda \in X$ of $J_\lambda$, which is thus a weak solution of problem \hyperref[P]{$(\mathcal{P}_{a})$}. Now, we show that $u_\lambda$ is nontrivial. Indeed, let $t_0>1$ be a fixed real number and set $$ \left\{ \begin{array}{clclc} u_0(x)&=t_0~~\text{ in } \Omega, & \\\\ u_0(x)&=0~~\text{ in } \R^N\setminus \Omega. & \end{array} \right. $$ Then $ u_0\in X$ and $$ \begin{aligned} J_\lambda(u_0)&=I_1(u_0)-\lambda\int_\Omega F(x,u_0)dx\\ &=\sum_{i=1}^{2}\Big(\int_\Omega\widehat{\varPhi}^i_x(t_0)dx\Big)-\lambda\int_\Omega F(x,t_0)dx\\ &\leqslant \sum_{i=1}^{2}\Big(\int_\Omega \widehat{\varPhi}^i_x(t_0)dx\Big)-\lambda c_2\int_\Omega |t_0|^{q(x)} dx\\ &\leqslant L-\lambda c_2|t_0|^{q^-}|\Omega|, \end{aligned} $$ where $L$ is a positive constant. Thus, for $\lambda^*>0$ large enough, $J_\lambda(u_0)<0$ for any $\lambda\in [\lambda^*,\infty)$.
It follows that $J_\lambda(u_\lambda)<0$ for any $\lambda\in [\lambda^*,\infty)$ and thus $u_\lambda$ is a nontrivial weak solution of problem \hyperref[P]{$(\mathcal{P}_{a})$} for any $\lambda\in [\lambda^*,\infty)$. Therefore, problem \hyperref[P]{$(\mathcal{P}_{a})$} has a nontrivial weak solution for all $\lambda\in(0,\lambda_*)\cup [\lambda^*,\infty)$.\end{proof} \begin{thebibliography}{} \bibitem{benkirane} E. Azroul , A. Benkirane, M. Shimi and M. Srati (2020): \textit{On a class of nonlocal problems in new fractional Musielak-Sobolev spaces}, \textit{Applicable Analysis}, doi: 10.1080/00036811.2020.1789601. \bibitem{benkirane2} E. Azroul , A. Benkirane , M. Shimi and M. Srati (2020): \textit{Embedding and extension results in fractional Musielak–Sobolev spaces, Applicable Analysis}, \textit{Applicable Analysis}, doi: 10.1080/00036811.2021.1948019. \bibitem{3} E. Azroul, A. Benkirane, M. Srati, \textit{Existence of solutions for a nonlocal type problem in fractional Orlicz Sobolev spaces}, \textit{Adv. Oper. Theory} (2020) doi: 10.1007/s43036-020-00042-0. \bibitem{sr5} E. Azroul, A. Benkirane, M. Srati, \textit{Nonlocal eigenvalue type problem in fractional Orlicz-Sobolev space}, \textit{Adv. Oper. Theory} (2020) doi: 10.1007/s43036-020-00067-5. \bibitem{SRN1} E. Azroul, A. Benkirane, M. Srati, \textit{Eigenvalue problem associated with nonhomogeneous integro-differential operators} \textit{J. Elliptic Parabol Equ} (2021). https://doi.org/10.1007/s41808-020-00092-8. \bibitem{SRN2} E. Azroul, A. Benkirane and M. Srati, \textit{Mountain pass type solutions for a nonlacal fractional $a(.)$-Kirchhoff type problems}, \textit{Journal of Nonlinear Functional Analysis}, Vol. 2021 (2021), Article ID 3, pp. 1-18. \bibitem{SRT} E. Azroul, A. Benkirane, M. Srati, and C. Torres, \textit{Infinitely many solutions for a nonlocal type problem with sign-changing weight function}. \textit{ Electron. J. Differential Equations}, Vol. \textbf{2021} (2021), No. 16, pp. 1-15. \bibitem{SRH} E. Azroul, A. Benkirane, M. Shimi and M. Srati, \textit{On a class of fractional $p(x)$-Kirchhoff type problems}. \textit{ Applicable Analysis} (2019) doi: 10.1080/00036811.2019.1603372. \bibitem{kamali} E. Azroul, N. Kamali, M. A. Ragusa, M. Shimi, \textit{On a p(x,.)-integrodifferential problem with Neumann boundary conditions}. \textit{Z. Anal. Anwend}. (2024), doi : 10.4171/ZAA/1780? \bibitem{N6} A. Bahrouni, V. Radulesc\u{u}, and P. Winkert, \textit{Robin fractional problems with symmetric variable growth}, J. Math. Phys. 61, 101503 (2020); doi: 10.1063/5.0014915. \bibitem{N7} S. Bahrouni and A. Salort \textit{Neumann and Robin type boundary conditions in Fractional Orlicz-Sobolev spaces} (2021) \textit{ESAIM Control Optimisation and Calculus of Variations} doi : 10.1051/cocv/2020064 \bibitem{N1} G. Barles, E. Chasseigne, C. Georgelin, and E. R. Jakobsen, \textit{On Neumann type problems for nonlocal equations in a half space}. Trans. Amer. Math. Soc. 366 (2014), no. 9, 4873-4917. \bibitem{N2} G. Barles, C. Georgelin, and E. R. Jakobsen, \textit{On Neumann and oblique derivatives boundary conditions for nonlocal elliptic equations}. J. Differential Equations 256 (2014), no. 4, 1368–1394. \bibitem{s1} Biswas R, Tiwari S. \textit{Multiplicity and uniform estimate for a class of variable order fractional $p(x)$-Laplacian problems with concave-convex nonlinearities}. arXiv:1810.12960. \bibitem{x1} R. Biswas, S. Bahrouni, \& M.L. 
Carvalho, Fractional double phase Robin problem involving variable order-exponents without Ambrosetti–Rabinowitz condition. Z. Angew. Math. Phys. 73, 99 (2022). doi : 10.1007/s00033-022-01724-w \bibitem{N3} C. Cortazar, M. Elgueta, J. D. Rossi, and N. Wolanski, \textit{How to approximate the heat equation with Neumann boundary conditions by nonlocal diffusion problems}. Arch. Ration. Mech. Anal. 187 (2008), no. 1, 137-156. \bibitem{N5} S. Dipierro, X. Ros-Oton, and E. Valdinoci, \textit{Nonlocal problems with Neumann boundary conditions}, Rev. Mat. Iberoam. 33(2), 377–416 (2017) \bibitem{ek} I. Ekeland On the variational principle J. Math. Anal. Appl., 47 (1974), pp. 324-353. \bibitem{Lam} J. Lamperti, \textit{On the isometries of certain function-spaces}, Pacific J. Math. \textbf{8} (1958), 459-466. \bibitem{ra} M. Mih\"ailescu, V. R\"adulescu, Neumann problems associated to nonhomogeneous differential operators in Orlicz-Soboliv spaces, Ann. Inst. Fourier 58 (6) (2008) 2087-2111. \bibitem{N4} D. Mugnai and E. Proietti Lippi, \textit{Neumann fractional p-Laplacian: Eigenvalues and existence results}, Nonlinear Anal. 188, 455-474 (2019). \bibitem{mu} J. Musielak; Orlicz Spaces and Modular Spaces, Lecture Notes in Mathematics, Vol. 1034, Springer, Berlin, 1983. \bibitem{110} M. Struwe, \textit{Variational Methods: Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems}, Springer-Verlag, Berlin, Heidelberg, 1990. \bibitem{srati} M. Srati, \textit{Eigenvalue type problem in $s(\cdot, \cdot)$-fractional Musielak–Sobolev spaces.} J Elliptic Parabol Equ 10, 387-413 (2024). doi : 10.1007/s41808-024-00269-5. \bibitem{srati2} M. Srati, E. Azroul, and A. Benkirane, \textit{Nonlocal problems with Neumann and Robin boundary condition in fractional Musielak-Sobolev spaces}, \textit{Rend. Circ. Mat. Palermo, II. Ser} (2024). doi : 10.1007/s12215-024-01117-0. \end{thebibliography} \end{document}
2412.11697v1
http://arxiv.org/abs/2412.11697v1
On the second coefficient in the semi-classical expansion of Toeplitz Operators on CR manifolds
\documentclass[12pt]{amsart} \usepackage[margin=2.5cm]{geometry} \usepackage{amsmath,amssymb,amsfonts,amsthm,amsrefs,mathrsfs} \usepackage{epic} \usepackage{amsopn,amscd,graphicx} \usepackage{color,transparent} \usepackage[usenames, dvipsnames]{xcolor} \usepackage [colorlinks, breaklinks, bookmarks = false, linkcolor = NavyBlue, urlcolor = ForestGreen, citecolor = ForestGreen, hyperfootnotes = false ] {hyperref} \usepackage{scalerel} \usepackage{stackengine,wasysym} \usepackage{palatino, mathpazo} \usepackage{multirow} \usepackage[all,cmtip]{xy} \usepackage{stmaryrd} \usepackage{tikz-cd} \usepackage{yhmath} \usepackage{enumerate} \usepackage[dvipsnames]{xcolor} \font\bbb=msbm10 scaled \magstep 1 \font\german=eufm10 scaled \magstep 1 \font\script=rsfs10 scaled \magstep 1 \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{problem}[theorem]{Problem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{assumption}{Assumption} \newtheorem{example}[theorem]{Example} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{notation}{Notation} \newtheorem*{claim}{Claim} \newtheoremstyle{named}{}{}{\itshape}{}{\bfseries}{.}{.5em}{\thmnote{#3}#1} \theoremstyle{named} \newtheorem*{namedtheorem}{} \numberwithin{equation}{subsection} \newcommand{\ol}{\overline} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\ddbar}{\overline\partial} \newcommand{\pr}{\partial} \newcommand{\Td}{\widetilde} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\To}{\rightarrow} \def\cC{\mathscr{C}} \newcommand\reallywidetilde[1]{\ThisStyle{ \setbox0=\hbox{$\SavedStyle#1$} \stackengine{-.1\LMpt}{$\SavedStyle#1$}{ \stretchto{\scaleto{\SavedStyle\mkern.2mu\AC}{.5150\wd0}}{.6\ht0} }{O}{c}{F}{T}{S}}} \title[On the second coefficient in the semi-classical expansion of Toeplitz Operators]{On the second coefficient in the semi-classical expansion of Toeplitz Operators on CR manifolds} \begin{document} \author[Chin-Chia Chang]{Chin-Chia Chang} \address{Universit{\"a}t zu K{\"o}ln, Mathematisches Institut, Weyertal 86-90, 50931 K{\"o}ln, Germany} \thanks{} \email{[email protected]} \author[Hendrik Herrmann]{Hendrik Herrmann} \address{University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria} \thanks{} \email{[email protected] or [email protected]} \author[Chin-Yu Hsiao]{Chin-Yu Hsiao} \address{Department of Mathematics, National Taiwan University, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan} \thanks{} \email{[email protected] or [email protected]} \date{\today} \maketitle \begin{abstract} Let $X$ be a compact strictly pseudoconvex embeddable CR manifold and let $A$ be the Toeplitz operator on $X$ associated with a Reeb vector field $\mathcal{T}\in\cC^\infty(X,TX)$. Consider the operator $\chi_k(A)$ defined by functional calculus of $A$, where $\chi$ is a smooth function with compact support in the positive real line and $\chi_k(\lambda):=\chi(k^{-1}\lambda)$. It was established recently that $\chi_k(A)(x,y)$ admits a full asymptotic expansion in $k$. The second coefficient of the expansion plays an important role in the further study of CR geometry. In this work, we calculate the second coefficient of the expansion. 
\end{abstract} \bigskip \textbf{Keywords:} CR manifolds, Szeg\H{o} kernels, Toeplitz operators \textbf{Mathematics Subject Classification:} 32Vxx, 32A25, 53D50 \tableofcontents \section{Introduction and Main Results} \label{s-gue241206yyd} {\tiny{{\color{white}{\subsection{ }}}}} Let $X$ be a compact strictly pseudoconvex embeddable CR manifold and let $A$ be the Toeplitz operator on $X$ associated with a Reeb vector field $\mathcal{T}\in\cC^\infty(X,TX)$. Consider the operator $\chi_k(A)$ defined by functional calculus of $A$, where $\chi$ is a smooth function with compact support in the positive real line and $\chi_k(\lambda):=\chi(k^{-1}\lambda)$. Let $\chi_k(A)(x,y)$ denote the distribution kernel of $\chi_k(A)$. It was shown in~\cite[Theorem 1.1]{HHMS23} that $\chi_k(A)(x,y)$ admits a full asymptotic expansion in $k$. In particular, \begin{equation}\label{e-gue241114yyd} \chi_k(A)(x,x)\sim k^{n+1}b_0(x)+k^nb_1(x)+\cdots, \end{equation} where $b_j(x)\in\mathcal{C}^\infty(X)$, $j=0,1,\ldots$. The expansion \eqref{e-gue241114yyd} can be seen as a CR analogue of the Tian-Yau-Catlin-Zelditch Bergman kernel asymptotics in complex geometry (we refer the reader to the book~\cite{Ma_Marinescu_HMI_BKE_2007}). The term $b_0$ in \eqref{e-gue241114yyd} was already calculated in~\cite{HHMS23}. For the Sasakian case the expansion~\eqref{e-gue241114yyd} was obtained in~\cite{Herrmann_Hsiao_Li_T_equivariant_Szego_2020}. It is a very natural (and fundamental) question to calculate $b_1$ in \eqref{e-gue241114yyd}. The calculation of $b_1$ can be seen as some kind of local index theorem in CR geometry. On the other hand, the second coefficient of the Bergman kernel asymptotic expansion in complex geometry plays an important role in K\"ahler geometry (see \cite{Ma_Marinescu_HMI_BKE_2007}). The main goal of this work is to calculate $b_1$. We now formulate our main result. We refer the reader to Section~\ref{s-gue241119yyd} for some terminology and notations used here. Let $(X,T^{1,0}X)$ be a compact orientable strictly pseudoconvex embeddable CR manifold of dimension $2n+1$, \(n\geq 1\). Fix a global one form $\xi\in\cC^\infty(X,T^*X)$ such that for any \(x\in X\) we have $\xi(x)\neq0$, $\ker\xi(x)=\operatorname{Re}T_x^{1,0}X$ and the Levi form $\mathcal{L}_x$ is positive definite on \(T^{1,0}_xX\). Here the Levi form $\mathcal{L}_x=\mathcal{L}^\xi_x$ at \(x\in X\) (with respect to \(\xi\)) is defined by \[\mathcal{L}_x\colon T^{1,0}_xX\times T^{1,0}_xX\to\C,\,\,\,\mathcal{L}_x(U,V)=\frac{1}{2i}d\xi(U,\overline{V}).\] There exists a unique vector field $\mathcal{T}\in\cC^\infty(X,TX)$ such that $\mathcal{T}\lrcorner\xi\equiv1$ and $\mathcal{T}\lrcorner d\xi\equiv 0$, where \(\lrcorner\) denotes the interior product (contraction) between vectors and forms. We call $\mathcal{T}$ the Reeb vector field on $X$ with respect to $\xi$. Let $\langle\,\cdot\,|\,\cdot\,\rangle$ denote the Hermitian metric on $\mathbb CTX$ induced by the Levi form $\mathcal{L}_x$ (see \eqref{e-gue241115yyd}). The Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\mathbb CTX$ induces a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\oplus_{p,q\in\mathbb N_0,0\leq p,q\leq n}T^{*p,q}X$, where $T^{*p,q}X$ denotes the bundle of $(p,q)$ forms of $X$. Let $\Omega^{p,q}(X)$ denote the space of smooth sections with values in $T^{*p,q}X$. We write $\cC^\infty(X):=\Omega^{0,0}(X)$. 
Fix a volume form $dV$ on $X$ such that $\mathcal{L}_{\mathcal{T}}dV\equiv 0$, where $\mathcal{L}_{\mathcal{T}}$ denotes the Lie derivative of $\mathcal{T}$. For $p, q\in\mathbb N_0$, $0\leq p,q\leq n$, let $(\,\cdot\,|\,\cdot\,)$ be the $L^2$ inner product on $\Omega^{p,q}(X)$ induced by $dV$ and $\langle\,\cdot\,|\,\cdot\,\rangle$ and let $L^2_{(p,q)}(X)$ be the completion of $\Omega^{p,q}(X)$ with respect to $(\,\cdot\,|\,\cdot\,)$. We write $L^2(X):=L^2_{(0,0)}(X)$. Let $\ddbar_b: \cC^\infty(X)\To\Omega^{0,1}(X)$ be the tangential Cauchy--Riemann operator. We extend $\ddbar_b$ to the $L^2$ space: \[\ddbar_b: {\rm Dom\,}\ddbar_b\subset L^2(X)\To L^2_{(0,1)}(X),\] where ${\rm Dom\,}\ddbar_b:=\set{u\in L^2(X);\, \ddbar_bu\in L^2_{(0,1)}(X)}$. Let $H^0_b(X):={\rm Ker\,}\ddbar_b$. Let \[\Pi: L^2(X)\to H^0_b(X)\] be the Szeg\H{o} projection, i.e., the orthogonal projection onto $H^0_b(X)$ with respect to $(\,\cdot\,|\,\cdot\,)$. Let $\Pi(x,y)\in\mathcal{D}'(X\times X)$ denote the distribution kernel of $\Pi$. We recall Boutet de Monvel--Sj\"ostrand's fundamental theorem~\cite{Boutet_Sjoestrand_Szego_Bergman_kernel_1975} (see also~\cite{Hsiao_Szego_Bergman_kernel_q_forms_2010},~\cite{Hsiao_Marinescu_Szego_lower_energy_2017}) about the structure of the Szeg\H{o} kernel. For any coordinate patch $(D,x)$ on $X$ and any positive function \(\lambda\in\mathscr{C}^\infty(D,\R_+)\) there is a smooth function $\varphi:D\times D\to\C$ with \begin{equation} \label{Eq:PhaseFuncMainThm} \begin{split} &\operatorname{Im}\varphi(x,y)\geq 0,\\ &\varphi(x,y)=0~\text{if and only if}~y=x,\\ &d_x\varphi(x,x)=-d_y\varphi(x,x)=\lambda(x)\xi(x),\ \, \end{split} \end{equation} such that, on $D\times D$, the Szeg\H{o} projector can be approximated by a Fourier integral operator: \begin{equation}\label{eq:PiFIO} \Pi(x,y)=\int_0^{+\infty} e^{it\varphi(x,y)}s(x,y,t)dt+F(x,y), \end{equation} where $F(x,y)\in\cC^\infty(D\times D)$ and $s(x,y,t) \in S^{n}_{\operatorname{cl}}(D\times D\times{\R}_+)$ is a classical H\"ormander symbol satisfying $s(x,y,t) \sim\sum_{j=0}^{+\infty}s_j(x,y)t^{n-j}$ in $S^{n}_{1,0}(D\times D\times{\R}_+)$, $s_j(x,y)\in\cC^\infty(D\times D)$, $j=0,1,\ldots$, and if $\lambda(x)=1$, \begin{equation} \label{eq:leading term s_0 intro} s_0(x,x)=\frac{1}{2\pi^{n+1}}\frac{dV_\xi}{dV}(x), \end{equation} where \begin{equation}\label{e-gue241115ycd} dV_{\xi}:=\frac{2^{-n}}{n!}\xi\wedge \left(d\xi\right)^n. \end{equation} Let \[A:=\Pi(-i\mathcal{T})\Pi: \cC^\infty(X)\To\cC^\infty(X)\] be the Toeplitz operator. We extend $A$ to the $L^2$ space: \begin{eqnarray}\label{eq:DefExtensionToeplitzOperator} A:=\Pi(-i\mathcal{T})\Pi: {\rm Dom\,}A\subset L^2(X)\To L^2(X), \end{eqnarray} where ${\rm Dom\,}A:=\set{u\in L^2(X);\, Au\in L^2(X)}$. It was shown in~\cite[Theorem 3.3]{HHMS23} that $A$ is a self-adjoint operator. We consider the operator $\chi_k(A)$ defined by functional calculus of $A$, where $\chi$ is a smooth function with compact support in the positive real line and $\chi_k(\lambda):=\chi(k^{-1}\lambda)$. Let $\chi_k(A)(x,y)$ denote the distribution kernel of $\chi_k(A)$. We recall the following consequence of the results obtained in~\cite{HHMS23}. \begin{theorem}\label{thm:ExpansionMain} Let $(X,T^{1,0}X)$ be a compact orientable strictly pseudoconvex embeddable CR manifold of dimension $2n+1$, \(n\geq 1\). 
Fix a global one form $\xi\in\cC^\infty(X,T^*X)$ such that for any \(x\in X\) we have $\xi(x)\neq0$, $\ker\xi(x)=\operatorname{Re}T_x^{1,0}X$ and the respective Levi form $\mathcal{L}_x$ is positive definite. Denote by \(\mathcal{T}\) the Reeb vector field associated to \(\xi\), choose a volume form \(dV\) on \(X\) with $\mathcal{L}_{\mathcal{T}}dV\equiv 0$ and consider the operator \(A\) given by \eqref{eq:DefExtensionToeplitzOperator}. Let $(D,x)$ be any coordinate patch and let $\varphi:D\times D\to\C$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO}. Then, for any \(\chi\in \mathscr{C}^\infty_c(\R_+)\) putting \(\chi_k(t):=\chi(k^{-1}t)\), \(k,t\in\R_+\), one has for the distributional kernel $\chi_k(A)(x,y)$ of $\chi_k(A)$ that \begin{equation} \label{eq:asymptotic expansion of chi_k(T_P)} \chi_k(A)(x,y)=\int_0^{+\infty} e^{ikt\varphi(x,y)}b^\chi(x,y,t,k)dt+O\left(k^{-\infty}\right)~\text{on}~D\times D, \end{equation} where $b^\chi(x,y,t,k)\in S^{n+1}_{\mathrm{loc}} (1;D\times D\times{\R}_+)$, \begin{equation} \label{Eq:LeadingTermMainThm} \begin{split} &b^\chi(x,y,t,k)\sim\sum_{j=0}^{+\infty}b^\chi_{j}(x,y,t)k^{n+1-j}~ \text{in $S^{n+1}_{\mathrm{loc}}(1;D\times D\times{\R}_+)$},\\ &b^\chi_j(x,y,t)\in\mathscr{C}^\infty(D\times D\times{\R}_+),~j=0,1,2,\ldots,\\ &b^\chi_{0}(x,x,t)=\frac{1}{2\pi ^{n+1}} \frac{dV_{\xi}}{dV}(x)\,\chi(t)\,t^n,\ \ \mbox{for every $x\in D$ with \(\lambda(x)=1\) (see \eqref{Eq:PhaseFuncMainThm})}, \end{split} \end{equation} with \(dV_\xi\) given by \eqref{e-gue241115ycd} and for $I^\chi:=\operatorname{supp}\chi$, \begin{equation}\label{e-gue241115ycda} \begin{split} {\rm supp\,}_t b^\chi(x,y,t,k),~{\rm supp\,}_t b^\chi_j(x,y,t)\subset I^\chi,\ \ j=0,1,2,\ldots~. \end{split} \end{equation} (see Definition~\ref{def:suppt} for the meaning of \(\operatorname{supp}_t\)). In particular, we have \begin{equation}\label{e-gue241115ycdI} \chi_k(A)(x,x)\sim\sum^{+\infty}_{j=0}b^\chi_j(x)k^{n+1-j}\ \ \mbox{in $S^{n+1}_{{\rm loc\,}}(1;X)$}, \end{equation} where $b^\chi_j(x)=\int b^\chi_j(x,x,t)dt$, $b^\chi_j(x,y,t)$ is as in \eqref{Eq:LeadingTermMainThm}, $j=0,1,\ldots$. \end{theorem} The expansion \eqref{eq:asymptotic expansion of chi_k(T_P)} can be seen as a semi-classical version of Boutet de Monvel--Sj\"ostrand's result on strictly pseudoconvex CR manifolds (see \eqref{eq:PiFIO}). As mentioned above, it is a very natural (and fundamental) question to calculate $b^\chi_1(x)$ in \eqref{e-gue241115ycdI}. In this work, we successfully calculate $b^\chi_1(x)$. The following is our main result. \begin{theorem}\label{t-gue241115yyd} With the notations and assumptions in Theorem~\ref{thm:ExpansionMain}, for $b^\chi_1(x)$ in \eqref{e-gue241115ycdI}, we have \begin{equation}\label{e-gue241115ycdb} b^\chi_1(x)=\frac{1}{2\pi^{n+1}}e^{-2(n+1)f(x)}\left(\frac{1}{2}R_{scal}(x)+(n+1)\Delta_b f(x)\right)\int\chi(t)t^{n-1}dt, \end{equation} where $f(x):=\frac{1}{2(n+1)}\log \frac{dV(x)}{dV_\xi(x)}\in\cC^\infty(X)$ with \(dV_\xi\) given by \eqref{e-gue241115ycd}, $R_{scal}$ is the Tanaka-Webster scalar curvature with respect to the contact form $\xi$ (see \eqref{e-gue241122yydI}) and $\Delta_b:\cC^{\infty}(X)\rightarrow\cC^{\infty}(X)$ denotes the CR sublaplacian operator (see \eqref{e-gue241122yyd}). \end{theorem} To prove Theorem~\ref{t-gue241115yyd}, we need to understand how the symbol $b^\chi_j(x,y,t)$ in \eqref{Eq:LeadingTermMainThm} depends on \(\chi\) and $t$. We have the following. 
\begin{theorem}\label{t-gue241115ycd} With the notations and assumptions in Theorem~\ref{thm:ExpansionMain}, let $(D,x)$ be any coordinate patch and let $\varphi:D\times D\to\C$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO}. Then there exist smooth functions \begin{eqnarray}\label{Eq:Defa_js} a_{j,s}(x,y)\in\cC^\infty(D\times D),\ \ j,s=0,1,\ldots, \end{eqnarray} such that for any \(\chi\in \mathscr{C}^\infty_c(\R_+)\) we can take $b^\chi_j(x,y,t)$ in \eqref{Eq:LeadingTermMainThm}, $j=0,1,\ldots$, so that \begin{equation}\label{eqn:ThmI1z} b^\chi_j(x,y,t)=\sum^{+\infty}_{s=0}a_{j,s}(x,y)\chi^{(s)}(t)t^{n+s-j},\quad j=0,1,\ldots,\end{equation} where \(\chi^{(s)}:=(\partial/\partial t)^s\chi\) and the infinite sum in \eqref{eqn:ThmI1z} converges uniformly in the $\cC^{\infty}(K\times K\times I)$ topology, for any compact subsets $K\subset D$ and $I\subset\mathbb R$, for all $j=0,1,\ldots$. Assume in addition that $\mathcal{T}$ is a CR vector field, that is, $[\mathcal{T},\cC^\infty(X,T^{1,0}X)]\subset\cC^\infty(X,T^{1,0}X)$. Then, there exists a phase function $\varphi:D\times D\to\C$ satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO} such that $\mathcal{T}_x\varphi(x,y)\equiv1$, $\mathcal{T}_y\varphi(x,y)\equiv-1$, and for any such phase function we can take the $a_{j,s}(x,y)$ in \eqref{Eq:Defa_js}, $j,s=0,1,\ldots$, such that for all \(j\geq 0\) we have \begin{equation}\label{e-gue241116yyd} \begin{split} &a_{j,s}(x,y)=0,\ \ s=1,2,\ldots,\\ &\mathcal{T}_xa_{j,0}(x,y)=\mathcal{T}_ya_{j,0}(x,y)=0. \end{split} \end{equation} \end{theorem} \begin{remark}\label{r-gue241119yyd} (i) In the proof of Theorem~\ref{t-gue241115yyd}, we show that for a fixed point $p\in X$, we can find a phase $\varphi$ satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO} and $a_{j,s}(x,y)$, $j, s=0,1,\ldots$, such that $a_{1,s}(p,p)=0$, for all $s=1,\ldots$, and then we calculate $a_{1,0}(p,p)$ (see Theorem~\ref{thm:s1_indep_of_deri_of_chi} for the details), where $a_{j,s}(x,y)$, $j, s=0,1,\ldots$, are as in \eqref{eqn:ThmI1z}. (ii) To get \eqref{e-gue241115ycdb}, we need to generalize the result in~\cite{Hsiao_Shen_2nd_coefficient_BS_2020} to a more general setting (see Theorem~\ref{t-gue241122yyd} for the details). \end{remark} From Theorem~\ref{t-gue241115yyd}, Theorem~\ref{t-gue241115ycd} and integration by parts in $t$ (since $\chi$ has compact support in $\R_+$, we have $\int\chi^{(s)}(t)t^{n+s-j}dt=(-1)^s(n+s-j)(n+s-j-1)\cdots(n-j+1)\int\chi(t)t^{n-j}dt$ for every $s\geq1$), we can reformulate Theorem~\ref{t-gue241115yyd} as follows. \begin{theorem}\label{t-gue241119yyd} With the notations and assumptions in Theorem~\ref{thm:ExpansionMain}, there exist smooth functions \(\{a_j\}_{j\geq 0}\subset \mathscr{C}^\infty(X)\) such that for any \(\chi\in \mathscr{C}_c^\infty(\R_+)\) we have \begin{equation}\label{e-gue241119yyd} \chi_k(A)(x,x)\sim\sum^{+\infty}_{j=0}k^{n+1-j}a_j(x)\int\chi(t)t^{n-j}dt\ \ \mbox{in $S^{n+1}_{{\rm loc\,}}(1;X)$}. \end{equation} Furthermore, we have \begin{equation} \begin{split} &a_0(x)=\frac{1}{2\pi ^{n+1}} \frac{dV_{\xi}}{dV}(x),\\ &a_1(x)=\frac{1}{2\pi^{n+1}}e^{-2(n+1)f(x)}\left(\frac{1}{2}R_{scal}(x)+(n+1)\Delta_b f(x)\right), \end{split} \end{equation} where $f(x):=\frac{1}{2(n+1)}\log \frac{dV(x)}{dV_\xi(x)}\in\cC^\infty(X)$ with \(dV_\xi\) given by \eqref{e-gue241115ycd}, $R_{scal}$ is the Tanaka-Webster scalar curvature with respect to the contact form $\xi$ (see \eqref{e-gue241122yydI}) and $\Delta_b:\cC^{\infty}(X)\rightarrow\cC^{\infty}(X)$ denotes the CR sublaplacian operator (see \eqref{e-gue241122yyd}). 
\end{theorem} We note that the formula for the coefficient \(a_0\) is a direct consequence from the results in \cite{HHMS23}. Furthermore, it was also shown there that for the circle bundle case the expansion of the form described in~\eqref{e-gue241119yyd} holds (see Example~\ref{Ex:RelationToBKExpansion}). Hence, Theorem~\ref{t-gue241119yyd} extends the results in \cite{HHMS23} by showing that the expansion of the form described in~\eqref{e-gue241119yyd} holds under more general assumptions and provides an explicit formula for \(a_1\). \begin{remark}\label{rmk:ajLocalInvariants} The functions \(a_j\), \(j\geq 0\), in Theorem~\ref{t-gue241119yyd} are uniquely determined by~\eqref{e-gue241119yyd}. Furthermore, for any \(j=0,1,\ldots\) and any point \(x\in X\) the value \(a_j(x)\) of the function \(a_j\) in Theorem~\ref{t-gue241119yyd} is determined by the restriction of \(T^{1,0}X\), \(\xi\) and \(dV\) to any neighborhood of \(x\). More precisely, we have the following: Given a compact CR manifold \((X',T^{1,0}X')\) with one form $\xi'\in\cC^\infty(X',T^*X')$ and volume form \(dV'\) such that \((X',T^{1,0}X')\), \(\xi'\) and \(dV'\) satisfy the assumptions in Theorem~\ref{thm:ExpansionMain} denote by \(\{a'_j\}_{j\geq0}\subset \mathscr{C}^\infty(X')\) the smooth functions given by Theorem~\ref{t-gue241119yyd} associated to the data \((X',T^{1,0}X')\), \(\xi'\) and \(dV'\). Let \(D\subset X\) be an open neighborhood of a point \(x\in X\) and let \(D'\subset X'\) be an open neighborhood of the point \(x'\in X'\). If \(F\colon D'\to D\) is a CR diffeomorphism with \(F(x')=x\), \(F^*\xi=\xi'\) and \(F^*dV=dV'\) on \(D'\) we have \(a_j'(x')=a_j(x)\) for \(j=0,1,\ldots\) (see Remark~\ref{rmk:verifyargumentrmklocalinvariant}). In particular, choosing \(dV=dV_\xi\) we have that the functions \(a_j\), \(j=0,1,\ldots\), become pseudo-Hermitian invariants for the triple \((X,T^{1,0}X,\xi)\). \end{remark} \begin{example}\label{Ex:RelationToBKExpansion} Let \((L,h)\to M\) be a positive holomorphic line bundle with Hermitian metric \(h\) over a compact complex manifold \(M\) of dimension \(\dim_\C M=n\) and volume form \(dV_M\). Denote the Bergman kernel function for \(H^0(M,L^k)\), \(k\in \N\), with respect to the \(L^2\) norm \[\|s\|_k:=\sqrt{\int_{M}|s|^2_{h^k}dV_M},\,\,\,\, s\in H^0(M,L^k)\] by \(P_k\) that is \(P_k(x):=\sup\{|s(x)|^2_{h^k}\mid s\in H^0(M,L^k), \|s\|_k\leq1\}\), \(x\in M\). Here \(L^k\) denotes the \(k\)-th tensor power of \(L\) and \(h^k\) is the Hermitian metric on \(L^k\) induced by \(h\). It is well known (see \cite{Catlin_BKE_1999}, \cite{Ma_Marinescu_HMI_BKE_2007}, \cite{Zelditch_BKE_1998}) that \(P_k\) has an asymptotic expansion \[P_k(x)\sim\sum^{+\infty}_{j=0}k^{n-j}b_j(x)\ \ \mbox{in $S^{n}_{{\rm loc\,}}(1;M)$},\] for smooth functions \(b_j\in\cC^\infty(M,\R)\), \(j\geq 0\). Put \[X:=\{v\in L^*\mid |v|_{h^*}=1\} \subset L,\,\,\,T^{1,0}X:=\C TX\cap T^{1,0}L^*.\] Here \(L^*\) denotes the dual line bundle with metric \(h^*\) induced by \(h\). Then \((X,T^{1,0}X)\) is a compact strongly pseudoconvex CR manifold with a transversal CR \(S^1\)-action given by \(S^1\times X\ni(\lambda,v)\mapsto \lambda v \in X\). Denote by \(\mathcal{T}\) the infinitesimal generator of this action and let \(\xi\in \cC^\infty(X,T^*X)\) be the uniquely defined real one form with \(\mathcal{T}\lrcorner\xi\equiv1\) and \(\xi(T^{1,0}X)=0\). Let \(\pi\colon X\to M\) denote the projection map. 
Putting \(dV_X=\xi\wedge\pi^*dV_M\), it follows that the setup satisfies the assumptions in Theorem~\ref{t-gue241119yyd}. Hence there exist smooth functions \(a_j\in\cC^\infty(X,\R)\), \(j\geq 0\), such that for any \(\chi\in \cC_c^\infty(\R_+)\) we have \begin{eqnarray}\label{eq:ExpansionOnCircleBundleCase} \chi_k(A)(x,x)\sim\sum^{+\infty}_{j=0}k^{n+1-j}a_j(x)\int\chi(t)t^{n-j}dt\ \ \mbox{in $S^{n+1}_{{\rm loc\,}}(1;X)$}. \end{eqnarray} We note that \eqref{eq:ExpansionOnCircleBundleCase} was already obtained in \cite{HHMS23} for the circle bundle case and it was shown that \(a_j=\frac{1}{2\pi}b_j\circ \pi\), \(j\geq 0\). For a more detailed study of the relation between the coefficients \(a_1\) and \(b_1\) we refer to Remark~\ref{rmk:CoefficientsOnComplexManifolds}. \end{example} \section{Preliminaries}\label{s-gue241119yyd} \subsection{Notations} We use the following notations throughout this article: $\mathbb{Z}$ is the set of integers, $\N=\{1,2,3,\ldots\}$ is the set of natural numbers and we put $\N_0=\N\bigcup\{0\}$; $\mathbb R$ is the set of real numbers. Also, $\R_+:=\{x\in\R:x>0\}$ and $\overline{\mathbb R}_+=\R_+\cup\{0\}$. For a multi-index $\alpha=(\alpha_1,\ldots,\alpha_n)\in\N^n_0$ and $x=(x_1,\ldots,x_n)\in\mathbb R^n$, we set \begin{equation} \begin{split} &x^\alpha=x_1^{\alpha_1}\ldots x^{\alpha_n}_n,\\ & \partial_{x_j}=\frac{\partial}{\partial x_j}\,,\quad \partial^\alpha_x=\partial^{\alpha_1}_{x_1}\ldots\partial^{\alpha_n}_{x_n} =\frac{\partial^{|\alpha|}}{\partial x^\alpha}\cdot \end{split} \end{equation} Let $z=(z_1,\ldots,z_n)$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$, be coordinates on $\C^n$. We write \begin{equation} \begin{split} &z^\alpha=z_1^{\alpha_1}\ldots z^{\alpha_n}_n\,, \quad\ol z^\alpha=\ol z_1^{\alpha_1}\ldots\ol z^{\alpha_n}_n\,,\\ &\partial_{z_j}=\frac{\partial}{\partial z_j}= \frac{1}{2}\Big(\frac{\partial}{\partial x_{2j-1}}- i\frac{\partial}{\partial x_{2j}}\Big)\,, \quad\partial_{\ol z_j}=\frac{\partial}{\partial\ol z_j} =\frac{1}{2}\Big(\frac{\partial}{\partial x_{2j-1}}+ i\frac{\partial}{\partial x_{2j}}\Big),\\ &\partial^\alpha_z=\partial^{\alpha_1}_{z_1}\ldots\partial^{\alpha_n}_{z_n} =\frac{\partial^{|\alpha|}}{\partial z^\alpha}\,,\quad \partial^\alpha_{\ol z}=\partial^{\alpha_1}_{\ol z_1} \ldots\partial^{\alpha_n}_{\ol z_n} =\frac{\partial^{|\alpha|}}{\partial\ol z^\alpha}\,. \end{split} \end{equation} For $j, s\in\mathbb Z$, set $\delta_{js}=1$ if $j=s$, $\delta_{js}=0$ if $j\neq s$. Let $M$ be a smooth paracompact manifold. We let $TM$ and $T^*M$ denote respectively the tangent bundle of $M$ and the cotangent bundle of $M$. The complexified tangent bundle $TM \otimes \mathbb{C}$ of $M$ will be denoted by $\mathbb C TM$; similarly, we write $\mathbb C T^*M$ for the complexified cotangent bundle of $M$. We let $\langle\,\cdot\,,\cdot\,\rangle$ denote the pointwise duality between $TM$ and $T^*M$; we extend $\langle\,\cdot\,,\cdot\,\rangle$ bi-linearly to $\mathbb CTM\times\mathbb C T^*M$. Let $Y\subset M$ be an open set and let $B$ be a vector bundle over $Y$. From now on, the spaces of distribution sections of $B$ over $Y$ and smooth sections of $B$ over $Y$ will be denoted by $\mathcal D'(Y, B)$ and $\cC^\infty(Y, B)$, respectively. Let $\mathcal E'(Y, B)$ be the subspace of $\mathcal D'(Y, B)$ whose elements have compact support in $Y$. Let $\cC^\infty_c(Y,B):=\cC^\infty(Y,B)\cap\mathcal{E}'(Y,B)$. 
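To illustrate the complex-derivative conventions above (this elementary computation is included only as an illustration and is not used in the sequel), consider $n=1$ and $u(z)=\abs{z}^2=z\ol z=x_1^2+x_2^2$; then \[ \partial_zu=\ol z,\qquad \partial_{\ol z}u=z,\qquad \partial_z\partial_{\ol z}u=1=\frac{1}{4}\bigl(\partial^2_{x_1}+\partial^2_{x_2}\bigr)u. \] 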
\subsection{Some standard notations in microlocal and semi-classical analysis}\label{s-gue170111w} Let us first introduce some notions of microlocal analysis used here. Let $D\subset\R^{2n+1}$ be an open set. \begin{definition}\label{d-gue241119ycdp} For any $m\in\mathbb R$, $S^m_{1,0}(D\times D\times\mathbb{R}_+)$ is the space of all $s(x,y,t)\in\cC^\infty(D\times D\times\mathbb{R}_+)$ such that for all compact sets $K\Subset D\times D$, all $\alpha, \beta\in\N^{2n+1}_0$ and $\gamma\in\N_0$, there is a constant $C_{K,\alpha,\beta,\gamma}>0$ satisfying the estimate \begin{equation} \left|\partial^\alpha_x\partial^\beta_y\partial^\gamma_t s(x,y,t)\right|\leq C_{K,\alpha,\beta,\gamma}(1+|t|)^{m-|\gamma|},\ \ \mbox{for all $(x,y,t)\in K\times\mathbb R_+$, $|t|\geq1$}. \end{equation} We put $S^{-\infty}(D\times D\times\mathbb{R}_+) :=\bigcap_{m\in\mathbb R}S^m_{1,0}(D\times D\times\mathbb{R}_+)$. \end{definition} Let $s_j\in S^{m_j}_{1,0}(D\times D\times\mathbb{R}_+)$, $j=0,1,2,\ldots$ with $m_j\rightarrow-\infty$ as $j\rightarrow+\infty$. By the Borel construction argument, there always exists $s\in S^{m_0}_{1,0}(D\times D\times\mathbb{R}_+)$ unique modulo $S^{-\infty}$ such that \begin{equation} s-\sum^{\ell-1}_{j=0}s_j\in S^{m_\ell}_{1,0}(D\times D\times\mathbb{R}_+) \end{equation} for all $\ell=1,2,\ldots$. If $s$ and $s_j$ have the properties above, we write \begin{equation} s(x,y,t)\sim\sum^{+\infty}_{j=0}s_j(x,y,t)~\text{in}~ S^{m_0}_{1,0}(D\times D\times\mathbb{R}_+). \end{equation} Also, we use the notation \begin{equation} s(x, y, t)\in S^{m}_{{\rm cl\,}}(D\times D\times\mathbb{R}_+) \end{equation} if $s(x, y, t)\in S^{m}_{1,0}(D\times D\times\mathbb{R}_+)$ and we can find $s_j(x, y)\in\cC^\infty(D\times D)$, $j\in\N_0$, such that \begin{equation} s(x, y, t)\sim\sum^{+\infty}_{j=0}s_j(x, y)t^{m-j}\text{ in }S^{m}_{1, 0} (D\times D\times\mathbb{R}_+). \end{equation} For smooth paracompact manifolds $M_1, M_2$, we define the symbol spaces $$S^m_{{1,0}}(M_1\times M_2\times\mathbb R_+), \quad S^m_{{\rm cl\,}}(M_1\times M_2\times\mathbb R_+)$$ and asymptotic sums thereof in the standard way. Let $W_1$ be an open set in $\mathbb R^{N_1}$ and let $W_2$ be an open set in $\mathbb R^{N_2}$. Let $E$ and $F$ be vector bundles over $W_1$ and $W_2$, respectively. For any continuous operator $P: \cC^\infty_c(W_2,F)\To\mathcal{D}'(W_1,E)$, we write $P(x,y)$ to denote the distribution kernel of $P$. A $k$-dependent continuous operator $A_k: \cC^\infty_c(W_2,F)\To\mathcal{D}'(W_1,E)$ is called $k$-negligible on $W_1\times W_2$ if, for $k$ large enough, $A_k$ is smoothing and, for any $K\Subset W_1\times W_2$, any multi-indices $\alpha$, $\beta$ and any $N\in\mathbb N$, there exists $C_{K,\alpha,\beta,N}>0$ such that \[ \abs{\pr^\alpha_x\pr^\beta_yA_k(x, y)}\leq C_{K,\alpha,\beta,N}k^{-N}\:\: \text{on $K$},\ \ \forall k\gg1. \] In that case we write \[A_k(x,y)=O(k^{-\infty})\:\:\text{on $W_1\times W_2$,} \quad \text{or} \quad A_k=O(k^{-\infty})\:\:\text{on $W_1\times W_2$.}\] If $A_k, B_k: \cC^\infty_c(W_2, F)\To\mathcal{D}'(W_1, E)$ are $k$-dependent continuous operators, we write $A_k= B_k+O(k^{-\infty})$ on $W_1\times W_2$ or $A_k(x,y)=B_k(x,y)+O(k^{-\infty})$ on $W_1\times W_2$ if $A_k-B_k=O(k^{-\infty})$ on $W_1\times W_2$. When $W=W_1=W_2$, we sometimes write ``on $W$''. Let $X$ and $M$ be smooth manifolds and let $E$ and $F$ be vector bundles over $X$ and $M$, respectively. Let $A_k, B_k: \cC^\infty(M,F)\To\cC^\infty(X,E)$ be $k$-dependent smoothing operators. 
We write $A_k=B_k+O(k^{-\infty})$ on $X\times M$ if on every local coordinate patch $D$ of $X$ and local coordinate patch $D_1$ of $M$, $A_k=B_k+O(k^{-\infty})$ on $D\times D_1$. When $X=M$, we sometimes write ``on $X$''. We recall the definition of the semi-classical symbol spaces. \begin{definition} \label{d-gue140826} Let $W$ be an open set in $\mathbb R^N$. Let \[ S(1)=S(1;W):=\Big\{a\in\cC^\infty(W);\, \forall\alpha\in\mathbb N^N_0: \sup_{x\in W}\abs{\pr^\alpha a(x)}<\infty\Big\}\,,\] and let $S^0_{{\rm loc\,}}(1;W)$ be \[\Big\{(a(\cdot,k))_{k\in\mathbb R};\,\forall\alpha\in\mathbb N^N_0, \forall \chi\in \cC^\infty_c(W)\,:\:\sup_{k\in\mathbb R, k\geq1}\sup_{x\in W}\abs{\pr^\alpha(\chi a(x,k))}<\infty\Big\}\,. \] More generally, for $\ell\in\mathbb R$, we write $a(\cdot,k)\in S^\ell_{{\rm loc}}(1;W)$ if for every $\alpha\in\mathbb N^N_0$ and $\chi\in\cC^\infty_c(W)$, there exists $C_\alpha>0$ independent of $k$, such that $\abs{\pr^\alpha (\chi a(\cdot,k))}\leq C_\alpha k^{\ell}$ holds on $W$. Consider a sequence $a_j\in S^{\ell_j}_{{\rm loc\,}}(1)$, $j\in\N_0$, where $\ell_j\searrow-\infty$, and let $a\in S^{\ell_0}_{{\rm loc\,}}(1)$. We say \[ a(\cdot,k)\sim \sum\limits^\infty_{j=0}a_j(\cdot,k)\:\:\text{in $S^{\ell_0}_{{\rm loc\,}}(1)$}, \] if, for every $N\in\N_0$, we have $a-\sum^{N}_{j=0}a_j\in S^{\ell_{N+1}}_{{\rm loc\,}}(1)$. For a given sequence $a_j$ as above, we can always find such an asymptotic sum $a$, which is unique up to an element in $S^{-\infty}_{{\rm loc\,}}(1)=S^{-\infty}_{{\rm loc\,}}(1;W):=\cap _\ell S^\ell_{{\rm loc\,}}(1)$. Let $\ell\in\mathbb R$ and let \[ S^\ell_{{\rm loc},{\rm cl\,}}(1):=S^\ell_{{\rm loc},{\rm cl\,}}(1;W) \] be the set of all $a\in S^\ell_{{\rm loc}}(1;W)$ such that we can find $a_j\in\cC^\infty(W)$ independent of $k$, $j=0,1,\ldots$, such that \[ a(\cdot,k)\sim \sum\limits^\infty_{j=0}k^{\ell-j}a_j(\cdot)\:\:\text{in $S^{\ell}_{{\rm loc\,}}(1)$}. \] Similarly, we can define $S^\ell_{{\rm loc\,}}(1;Y)$ in the standard way, where $Y$ is a smooth manifold. \end{definition} \begin{definition}\label{def:suppt} Let \(U\) be an open set in \(\R^N\) and let \(F\colon U\times \R\to\C\) be a function \((x,t)\mapsto F(x,t)\). Given a subset \(I\subset \R\) we say \(\operatorname{supp}_tF(x,t)\subset I\) if for any \((x,t)\in U\times \R\) with \(t\notin I\) we have \(F(x,t)=0\). \end{definition} \subsection{CR geometry}\label{s-gue241119ycd} We recall some notations concerning CR geometry. Let $X$ be a smooth orientable manifold of real dimension $2n+1,~n\geq 1$. We say $X$ is a (codimension one) Cauchy--Riemann (CR for short) manifold with CR structure \(T^{1,0}X\) if $T^{1,0}X\subset\mathbb{C}TX$ is a subbundle such that \begin{enumerate} \item[(i)] $\dim_{\mathbb{C}}T^{1,0}_{p}X=n$ for any $p\in X$. \item[(ii)] $T^{1,0}_p X\cap T^{0,1}_p X=\{0\}$ for any $p\in X$, where $T^{0,1}_p X:=\overline{T^{1,0}_p X}$. \item[(iii)] For $V_1, V_2\in \mathscr{C}^{\infty}(X,T^{1,0}X)$, we have $[V_1,V_2]\in\mathscr{C}^{\infty}(X,T^{1,0}X)$, where $[\cdot,\cdot]$ stands for the Lie bracket between vector fields. \end{enumerate} From now on, we assume that $(X,T^{1,0}X)$ is a compact CR manifold of dimension $2n+1$, $n\geq1$. Since $X$ is orientable, there is a global one form $\xi\in\cC^\infty(X,T^*X)$ such that $u\lrcorner\xi=0$ for every $u\in T^{1,0}X$ and $\xi(x)\neq0$ for every $x\in X$, where \(\lrcorner\) denotes the interior product between vectors and forms. We call $\xi$ a characteristic form on $X$. 
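For illustration, we recall the standard (non-compact) local model of the above notions; it is included here only as an example, and the notation $\mathbb{H}_n$, $Z_j$, $\xi_0$ is used nowhere else in this paper. \begin{example} Let $\mathbb{H}_n:=\C^n\times\R$ with coordinates $(z,x_{2n+1})$, $z=(z_1,\ldots,z_n)$, and let $T^{1,0}\mathbb{H}_n\subset\C T\mathbb{H}_n$ be the subbundle spanned by $Z_j:=\frac{\partial}{\partial z_j}+i\ol z_j\frac{\partial}{\partial x_{2n+1}}$, $j=1,\ldots,n$. Then conditions (i)--(iii) above hold, and $\xi_0:=dx_{2n+1}+i\sum_{j=1}^n\bigl(z_jd\ol z_j-\ol z_jdz_j\bigr)$ is a characteristic form with $d\xi_0=2i\sum_{j=1}^ndz_j\wedge d\ol z_j$, so that $\mathcal{L}^{\xi_0}(Z_j,Z_k)=\delta_{jk}$ (see Definition~\ref{D:Leviform} below) and $(\mathbb{H}_n,T^{1,0}\mathbb{H}_n)$ is strictly pseudoconvex. Moreover, $\frac{\partial}{\partial x_{2n+1}}\lrcorner\xi_0\equiv1$ and $\frac{\partial}{\partial x_{2n+1}}\lrcorner d\xi_0\equiv0$, so $\frac{\partial}{\partial x_{2n+1}}$ is the Reeb vector field associated to $\xi_0$. \end{example} 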
\begin{definition}\label{D:Leviform} The Levi form $\mathcal{L}_x=\mathcal{L}^{\xi}_x$ of $X$ at $x\in X$ associated to a characteristic form $\xi$ is the Hermitian form on $T^{1,0}_xX$ given by \begin{equation}\label{eq:2.12b} \mathcal{L}_x:T^{1,0}_xX\times T^{1,0}_xX\to\C, \:\: \mathcal{L}_x(U,V)=\frac{1}{2i}d\xi(U, \ol V).\end{equation} \end{definition} \begin{definition}\label{d-gue241119yydq} A CR manifold $(X,T^{1,0}X)$ is said to be strictly pseudoconvex if there exists a characteristic $1$-form $\xi$ such that for every $x\in X$ the Levi form $\mathcal{L}^{\xi}_x$ is positive definite. \end{definition} \begin{remark} Given a strictly pseudoconvex CR manifold $(X,T^{1,0}X)$ we have that \(\operatorname{Re}T^{1,0}X\) defines a contact structure on \(X\), and any characteristic $1$-form $\xi$ is a contact form for this contact structure. This follows from the observation that \(\ker \xi=\operatorname{Re}T^{1,0}X\) with \(\dim_\R \operatorname{Re}T^{1,0}X=2n\) and $\xi\wedge(d\xi)^n\neq 0$, where $\dim_\R X=2n+1$. \end{remark} From now on, we assume that $(X,T^{1,0}X)$ is a compact strictly pseudoconvex CR manifold of dimension $2n+1$, $n\geq1$, and we fix a characteristic one form $\xi$ on $X$ such that the associated Levi form $\mathcal{L}_x$ is positive definite at every point $x\in X$. There exists a unique vector field $\mathcal{T}\in\cC^\infty(X,TX)$ such that $\mathcal{T}\lrcorner\xi\equiv1$ and $\mathcal{T}\lrcorner d\xi\equiv 0$. We call $\mathcal{T}$ the Reeb vector field on $X$ with respect to $\xi$. The Levi form $\mathcal{L}_x$ induces a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\mathbb CTX$ given by \begin{equation}\label{e-gue241115yyd} \begin{split} &\langle\,u\,|\,v\,\rangle:=\mathcal{L}_x(u,\ol v),\ \ u, v\in T^{1,0}_xX,\\ &T^{1,0}X\perp T^{0,1}X,\\ &\mathcal{T}\perp(T^{1,0}X\oplus T^{0,1}X),\ \ \langle\,\mathcal{T}\,|\,\mathcal{T}\,\rangle=1. \end{split} \end{equation} We call $\langle\,\cdot\,|\,\cdot\,\rangle$ the Levi metric. Denote by $T^{*1,0}X$ and $T^{*0,1}X$ the subbundles of $\C T^*X$ annihilating $\C\mathcal{T}\bigoplus T^{0,1}X$ and $\C\mathcal{T}\bigoplus T^{1,0}X$, respectively. Define the bundles of $(p,q)$ forms by $T^{*p,q}X:=(\Lambda^pT^{*1,0}X)\wedge(\Lambda^qT^{*0,1}X)$. The Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\mathbb CTX$ induces a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\oplus^n_{p,q=0}T^{*p,q}X$. For $p,q=0,1,\ldots,n$, let $\Omega^{p,q}(X):=\cC^\infty(X,T^{*p,q}X)$. With respect to the given Hermitian metric $\langle\cdot|\cdot\rangle$, for $p, q\in\mathbb N_0$, $0\leq p,q\leq n$, we consider the orthogonal projection \begin{equation} \pi^{(p,q)}:\Lambda^{p+q}\mathbb{C}T^*X\to T^{*p,q}X. \end{equation} The tangential Cauchy--Riemann operator is defined to be \begin{equation} \label{tangential Cauchy Riemann operator} \overline{\partial}_b:=\pi^{(0,q+1)}\circ d: \Omega^{0,q}(X)\to\Omega^{0,q+1}(X). \end{equation} Fix a volume form $dV$ on $X$ such that $\mathcal{L}_{\mathcal{T}}dV\equiv 0$, where $\mathcal{L}_{\mathcal{T}}$ denotes the Lie derivative of $\mathcal{T}$. Let $(\,\cdot\,|\,\cdot\,)$ be the $L^2$ inner product on $\Omega^{p,q}(X)$ induced by $dV$ and $\langle\,\cdot\,|\,\cdot\,\rangle$ and let $L^2_{(p,q)}(X)$ be the completion of $\Omega^{p,q}(X)$ with respect to $(\,\cdot\,|\,\cdot\,)$, $p, q\in\mathbb N_0$, $0\leq p,q\leq n$. We write $L^2(X):=L^2_{(0,0)}(X)$. 
We extend $\ddbar_b$ to the $L^2$ space: \[\ddbar_b: {\rm Dom\,}\ddbar_b\subset L^2(X)\To L^2_{(0,1)}(X),\] where ${\rm Dom\,}\ddbar_b:=\set{u\in L^2(X);\, \ddbar_bu\in L^2_{(0,1)}(X)}$. In this work, we assume that $X$ is embeddable, that is, we can find a CR embedding \[F: X\To\mathbb C^N,\] for some $N\in\mathbb N$. Moreover, $X$ is embeddable if and only if $\ddbar_b$ has $L^2$ closed range. We also recall some geometric quantities in pseudo-Hermitian geometry. The Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ and the volume form $dV_\xi$ induce an $L^2$ inner product $(\,\cdot\,|\,\cdot\,)_\xi$ on $\Omega^{p,q}(X)$, $p, q\in\mathbb N_0$, $0\leq p,q\leq n$. Let \[\pr_b:=\pi^{(1,0)}\circ d: \cC^\infty(X)\To\Omega^{1,0}(X)\] and let \[d_b:=\pr_b+\ddbar_b: \cC^\infty(X)\To\Omega^{0,1}(X)\oplus\Omega^{1,0}(X).\] The \textit{sublaplacian operator} $\Delta_b:\cC^{\infty}(X)\rightarrow\cC^{\infty}(X)$ is defined by \begin{equation}\label{e-gue241122yyd} (\,\Delta_b f\,|\,u\,)_\xi=\frac{1}{2}(\,d_bf\,|\,d_bu\,)_{\xi},\quad f\in\cC^\infty(X), u\in\cC^{\infty}(X).\end{equation} \begin{definition} Given a contact form $\xi$ on $X$ and a complex structure $J$ on $$HX:=\operatorname{Re}(T^{1,0}X\oplus T^{0,1}X),$$ there exists a unique affine connection $\nabla:=\nabla^{\xi,J}$ with respect to $\xi$ and $J$ such that \begin{enumerate} \item $\nabla_{v}\cC^\infty(X,HX)\subset\cC^\infty(X,HX)$ for any $v\in\cC^\infty(X,TX).$ \item $\nabla\mathcal{T}=\nabla J=\nabla(d\xi)=0.$ \item The torsion $\tau$ of $\nabla$, defined by $\tau(u,v)=\nabla_u v-\nabla_vu-[u,v]$, satisfies $\tau(u,v)=d\xi(u,v)\mathcal{T}$ and $\tau(\mathcal{T},Jv)=-J\tau(\mathcal{T},v)$ for any $u,v\in\cC^\infty(X,TX).$ \end{enumerate} \end{definition} Let $\{L_\alpha\}_{\alpha=1}^n$ be a local frame of $T^{1,0}X$. The dual frame of $\{L_\alpha\}_{\alpha=1}^n$ is denoted by $\{\xi^{\alpha}\}_{\alpha=1}^n.$ The connection one form $\omega_\alpha^\beta$ is defined by $$\nabla L_{\alpha}=\omega_\alpha^\beta\otimes L_{\beta}.$$ Let $L_{\overline{\alpha}}$ and $\xi^{\overline{\alpha}}$ denote $\overline{L_\alpha}$ and $\overline{\xi^{\alpha}}$, respectively; then we also have $\nabla L_{\overline{\alpha}}=\omega^{\overline{\beta}}_{\overline{\alpha}}\otimes L_{\overline{\beta}}.$ The Tanaka-Webster curvature two form $\Theta_{\alpha}^{\beta}$ is defined by \begin{equation}\label{eqn:def_tw2form} \Theta_{\alpha}^{\beta}=d\omega_{\alpha}^{\beta}-\omega_\alpha^{\gamma}\wedge\omega_\gamma^{\beta}. \end{equation} It is straightforward to obtain the following expression for \eqref{eqn:def_tw2form}: \begin{equation}\label{eqn:tw2form compute} \Theta^\beta_\alpha=R^\beta_{\alpha j \overline{k}}\xi^{j}\wedge\xi^{\overline{k}}+ A^{\alpha}_{\beta j k}\xi^{j}\wedge\xi^k+B^\alpha _{\beta j k}\xi^{\overline{j}}\wedge\xi^{\overline{k}}+C\wedge\xi \end{equation} for some one form $C$. The pseudohermitian Ricci curvature $R_{\alpha \overline{k}}$ is defined by $$ R_{\alpha \overline{k}}:=\sum_{\beta=1}^n R^\beta_{\alpha \beta \overline{k}}. $$ Write $d\xi=ih_{\alpha\ol{\beta}}\xi^\alpha\wedge\xi^{\ol\beta}$ and let $(h^{\alpha\ol{\beta}})$ be the inverse matrix of $(h_{\alpha\ol{\beta}})$. The Tanaka-Webster scalar curvature $R_{{\rm scal\,}}$ is defined by \begin{equation}\label{e-gue241122yydI} R_{\rm scal}:=\sum_{\alpha,k=1}^n h^{\alpha \overline{k}}R_{\alpha\overline{k}}. 
\end{equation} \section{Proofs of Theorem~\ref{t-gue241115yyd} and Theorem~\ref{t-gue241115ycd}}\label{s-gue241120yyd} {\tiny{{\color{white}{\subsection{ }}}}} The main goal of this section is to prove Theorem~\ref{t-gue241115yyd}. We first generalize the result in~\cite{Hsiao_Shen_2nd_coefficient_BS_2020} to a more general setting. Let us recall the result in~\cite{Hsiao_Shen_2nd_coefficient_BS_2020}. \begin{theorem}\label{t-gue241120yyd} With the notations and assumptions in Theorem~\ref{thm:ExpansionMain}, assume $dV_\xi=dV$. Recall that we work with the assumption that $X$ is embeddable. Let $(D,x)$ be any coordinate patch and let $\varphi:D\times D\to\C$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} with $\lambda(x)\equiv1$ and \eqref{eq:PiFIO}. Suppose that \begin{equation}\label{e-gue241120yyd} \mbox{$\ddbar_{b,x}\varphi(x,y)$ vanishes to infinite order at $x=y$}. \end{equation} Suppose further that \begin{equation}\label{e-gue241120ycdag} \mathcal{T}_ys(x,y,t)=0, \ \ \mathcal{T}_ys_j(x,y)=0,\ \ j=0,1,\ldots, \end{equation} where $s(x,y,t)$, $s_j(x,y)$, $j=0,1,\ldots$, are as in \eqref{eq:PiFIO}. Then, $s_0(x,y)$ has a unique Taylor expansion at $x=y$, $s_1(x,x)$ is globally defined as a smooth function on $X$, \begin{equation}\label{e-gue241120ycdb} \begin{split} &\mbox{$s_0(x,y)-\frac{1}{2\pi^{n+1}}$ vanishes to infinite order at $x=y$, for every $x\in D$}, \end{split} \end{equation} and \begin{equation}\label{e-gue201221yydb} \mbox{$s_1(x,x)=\frac{1}{4\pi^{n+1}}R_{\mathrm{scal}}(x)$, for every $x\in D$}, \end{equation} where $R_{{\rm scal\,}}$ is the Tanaka--Webster scalar curvature on $X$ (see \eqref{e-gue241122yydI}). \end{theorem} We are going to generalize Theorem~\ref{t-gue241120yyd} to the case $dV_\xi\neq dV$. Fix $p\in X$. It is known (see~\cite[(3.99), Proposition 3.2]{Hsiao_Shen_2nd_coefficient_BS_2020}) that there is an open local coordinate patch $D$ of $X$ with local coordinates $x=(x_1,\ldots,x_{2n+1})$, $x(p)=0$, $\mathcal{T}=\frac{\pr}{\pr x_{2n+1}}$ on $D$ and a phase $\varphi:D\times D\to\C$ satisfying \eqref{Eq:PhaseFuncMainThm} with $\lambda(x)=1+O(\abs{x}^3)$ and \eqref{eq:PiFIO} such that \begin{equation}\label{e-gue201226ycdb} \begin{split} &\mbox{$\ddbar_{b,x}(\varphi(x,y))$ vanishes to infinite order at $x=y$}, \end{split} \end{equation} and \begin{equation}\label{e-gue241220yydp} \varphi(x,y)=x_{2n+1}-y_{2n+1}+\frac{i}{2}\sum_{j=1}^n\left[|z_j-w_j|^2+(\overline{z}_jw_j-z_j\overline{w}_j)\right]+O\left(|(x,y)|^4\right), \end{equation} \begin{equation}\label{e-gue241220yydq} \begin{split} &\sum^n_{\ell,j=1}\frac{\pr^4\varphi}{\pr z_j\pr z_\ell\pr\ol z_j\pr\ol z_\ell}(0,0)= -\frac{i}{2}R_{{\rm scal\,}}(0),\\ &\sum^n_{\ell,j=1}\frac{\pr^4\varphi}{\pr w_j\pr w_\ell\pr\ol w_j\pr\ol w_\ell}(0,0)=-\frac{i}{2}R_{{\rm scal\,}}(0), \end{split} \end{equation} where $\frac{\pr}{\pr w_j}=\frac{1}{2}(\frac{\pr}{\pr y_{2j-1}}-i\frac{\pr}{\pr y_{2j}})$, $\frac{\pr}{\pr\ol w_j}=\frac{1}{2}(\frac{\pr}{\pr y_{2j-1}}+i\frac{\pr}{\pr y_{2j}})$, $j=1,\ldots,n$. Now, we work in the local coordinate patch $(D,x)$ and we assume that $\varphi$ satisfies \eqref{Eq:PhaseFuncMainThm} with $\lambda(x)=1+O(\abs{x}^3)$, \eqref{eq:PiFIO}, \eqref{e-gue201226ycdb}, \eqref{e-gue241220yydp} and \eqref{e-gue241220yydq}. Recall that $dV_\xi$ is the volume form induced by the contact form $\xi$ on $X$. We have the expression $$dV_\xi(x)=V_\xi(x)dx_1\cdots dx_{2n+1}=\frac{1}{n!}\left(\frac{d\xi}{2}\right)^n\wedge\xi,$$ where $V_\xi(x)\in\cC^\infty(D)$. 
Note that, by Cartan's formula, $\mathcal{L}_{\mathcal{T}}\xi=d(\mathcal{T}\lrcorner\xi)+\mathcal{T}\lrcorner d\xi=0$, so $dV_\xi$ is also $\mathcal{T}$-invariant and the quotient $dV/dV_\xi$ is a positive $\mathcal{T}$-invariant smooth function. Hence there exists a function $f\in\cC^\infty(X)$ satisfying $\mathcal{T}f\equiv 0$ on $X$ such that \begin{equation}\label{eqn:relation_two_volumes} dV(x)=e^{2(n+1)f(x)}V_\xi(x) dx_1\cdots dx_{2n+1}=:V(x)dx_1\cdots dx_{2n+1}. \end{equation} We need the following lemma. \begin{lemma} With the notations used above, for $s_0(x,y)$ in \eqref{eq:PiFIO}, we can take $s_0(x,y)$ so that $s_0(x,y)$ is independent of $y_{2n+1}$, \begin{equation}\label{e-gue241128yyd} s_0(x,y)=s_0(x,y'),\ \ \mbox{for all $(x,y)\in D\times D$}, \end{equation} where $y'=(y_1,\ldots,y_{2n})$, \begin{equation}\label{e-gue241120yydg} s_0(0,0)=\frac{1}{2\pi^{n+1}}\frac{dV_\xi}{dV}(0)=\frac{1}{2\pi^{n+1}}(e^{-2(n+1)f})(0), \end{equation} \begin{equation}\label{e-gue241120yydu} \begin{split} &\frac{\pr s_0}{\pr\ol z_j}(0,0)=\frac{\pr s_0}{\pr w_j}(0,0)=0,\ \ j=1,\ldots,n,\\ &\frac{\pr s_0}{\pr z_j}(0,0)=\frac{1}{2\pi^{n+1}}\frac{\pr}{\pr z_j}(e^{-2(n+1)f})(0),\ \ j=1,\ldots,n,\\ &\frac{\pr s_0}{\pr\ol w_j}(0,0)=\frac{1}{2\pi^{n+1}}\frac{\pr}{\pr\ol w_j}(e^{-2(n+1)f})(0),\ \ j=1,\ldots,n,\\ &\frac{\pr^2s_0}{\pr\ol z_j\pr z_\ell}(0,0)=\frac{\pr^2s_0}{\pr\ol w_j\pr w_\ell}(0,0)=0,\ \ j, \ell=1,\ldots,n, \end{split} \end{equation} and \begin{equation}\label{e-gue241125ycd} (\mathcal{T}_xs_0)(0,0)=(\mathcal{T}_ys_0)(0,0)=0, \end{equation} where $\frac{\pr}{\pr w_j}=\frac{1}{2}(\frac{\pr}{\pr y_{2j-1}}-i\frac{\pr}{\pr y_{2j}})$, $\frac{\pr}{\pr\ol w_j}=\frac{1}{2}(\frac{\pr}{\pr y_{2j-1}}+i\frac{\pr}{\pr y_{2j}})$, $j=1,\ldots,n$, $f(x):=\frac{1}{2(n+1)}\log \frac{dV(x)}{dV_\xi(x)}\in\cC^\infty(X)$. \end{lemma} \begin{proof} By using integration by parts in $t$, we can take $s_0$ so that $s_0$ is independent of $y_{2n+1}$. We assume that $s_0$ is independent of $y_{2n+1}$. Since $\varphi$ is independent of $y_{2n+1}$, $\ddbar_{b,x}\varphi(x,y)$ vanishes to infinite order at $x=y$. From this observation and the fact that $s_0$ is independent of $y_{2n+1}$, we conclude that \begin{equation}\label{e-gue241207yydp} \mbox{$\ddbar_{b,x}s_0(x,y')$ vanishes to infinite order at $x=y$}. \end{equation} From \eqref{e-gue241207yydp}, we get \begin{equation}\label{e-gue241207yydq} \frac{\pr s_0}{\pr\ol z_j}(0,0)=0,\ \ j=1,\ldots,n. \end{equation} From $\ddbar_{b,x}\int e^{-it\ol\varphi(y,x)}\ol s(y,x,t)dt\equiv0$, we get \begin{equation}\label{e-gue241207yydr} \mbox{$\ddbar_{b,x}(-\ol\varphi(y,x))+g(x,y)\ol\varphi(y,x)$ vanishes to infinite order at $(0,0)$}, \end{equation} where $s(x,y,t)$ is as in \eqref{eq:PiFIO} and $g$ is a smooth function defined near $(0,0)$. From \eqref{e-gue241220yydp} and \eqref{e-gue241207yydq}, we can check that \begin{equation}\label{e-gue241207yyds} g(x,y)=O(\abs{(x,y)}^2). \end{equation} By using integration by parts in $t$, we get \begin{equation}\label{e-gue241207ycdk} \begin{split} &\ddbar_{b,x}\int e^{-it\ol\varphi(y,x)}\ol s(y,x,t)dt\\ &=\int e^{-it\ol\varphi(y,x)}(-(n+1)g(x,y)\ol s_0(y,x)+\ddbar_{b,x}\ol s_0(y,x))t^n+O(t^{n-1})dt. \end{split} \end{equation} From \eqref{e-gue241207ycdk}, we see that \begin{equation}\label{e-gue241207ycdo} \mbox{$(-(n+1)g(x,y)\ol s_0(y,x)+\ddbar_{b,x}\ol s_0(y,x))-h(x,y)\ol\varphi(y,x)$ vanishes to infinite order at $(0,0)$}, \end{equation} where $h$ is a smooth function defined near $(0,0)$. From \eqref{e-gue241207yyds} and \eqref{e-gue241207ycdo}, it is not difficult to see that \begin{equation}\label{e-gue241207ycdg} \frac{\pr s_0}{\pr w_j}(0,0)=\frac{\pr^2 s_0}{\pr w_j\pr\ol w_j}(0,0)=0,\ \ j=1,\ldots,n. 
\end{equation} From \eqref{e-gue241207yydq} and \eqref{e-gue241207ycdg}, we get the first equation in \eqref{e-gue241120yydu}. From \eqref{e-gue241207yydq}, \eqref{e-gue241207ycdg} and the fact that $s_0(x,x')=\frac{1}{2\pi^{n+1}}(e^{-2(n+1)f})(x)$, we get the second and third equations in \eqref{e-gue241120yydu}. From \eqref{e-gue241207yydp} and \eqref{e-gue241207ycdg}, we get the last equation in \eqref{e-gue241120yydu}. Finally, from $s_0(x,x')=\frac{1}{2\pi^{n+1}}(e^{-2(n+1)f})(x)$ and the $\mathcal{T}$-invariance of $f$, we get \eqref{e-gue241125ycd}. \end{proof} We introduce the one-form $\widehat{\xi}:=e^{2f}\xi$. It is easy to check that $\widehat{\xi}$ is a contact form and the volume form $dV$ is induced by $\widehat{\xi},$ that is, $$dV=\frac{1}{n!}\left(\frac{d\widehat{\xi}}{2}\right)^n\wedge\widehat{\xi}.$$ With this volume form, we have the following theorem: \begin{theorem}\label{t-gue241120yydk} With the same notation above, the second coefficient of the Szeg\H{o} kernel is $$s_1(0,0)=\frac{1}{2\pi^{n+1}}e^{-2(n+1)f(0)}\left(\frac{1}{2}R_{{\rm scal\,}}(0)+(n+1)\Delta_b f(0)\right),$$ where $R_{{\rm scal\,}}$ is the Tanaka-Webster scalar curvature with respect to the contact form $\xi$ (see \eqref{e-gue241122yydI}) and $\Delta_b$ denotes the CR sublaplacian operator (see \eqref{e-gue241122yyd}). \end{theorem} \begin{proof} By replacing the volume form $dV_\xi$ with $dV$ in the proof of \cite[Section 3.3]{Hsiao_Shen_2nd_coefficient_BS_2020}, we have \begin{equation}\label{eqn:Interproof 01 szego conformal} s_1(0,0)=-L^{(1)}_{(x,\sigma)}(s_0(0,x)s_0(x,0)e^{2(n+1)f(x)}\sigma^n)|_{(x,\sigma)=(0,1)} \end{equation} where $L^{(1)}$ is the partial differential operator in the stationary phase formula of H\"ormander. In the present case, \begin{equation}\label{e-gue241207ycdm} \begin{split} & L^{(1)}_{(x,\sigma)}(s_0(0,x)s_0(x,0)e^{2(n+1)f(x)}\sigma^n)\\ &=\sum^{2}_{\mu=0}\frac{i^{-1}}{\mu!(\mu+1)!}\left(\sum^{n}_{j=1}\frac{\partial^2}{\partial z_j\partial\overline{z_j}}+\frac{\partial^2}{\partial x_{2n+1}\partial\sigma}\right)^{\mu+1}(G^{\mu}s_0(0,x)s_0(x,0)e^{2(n+1)f(x)}\sigma^n), \end{split} \end{equation} where $$G(x,\sigma)=F(x,\sigma)-F(0,1)-\frac{1}{2}\Big\langle \operatorname{Hess}(F)(p,1)\begin{pmatrix} x\\ \sigma-1 \end{pmatrix},\begin{pmatrix} x\\ \sigma-1 \end{pmatrix}\Big\rangle,$$ and $$F(x,\sigma)=\sigma\varphi(0,x)+\varphi(x,0).$$ From \eqref{e-gue241120yydg} and \eqref{e-gue241120yydu}, we have \begin{equation}\label{e-gue241121yyd} s_1(0,0)=\frac{-1}{2\pi^{n+1}}e^{-2(n+1)f(0)}\left(2(n+1)\sum_{i=1}^{n}\frac{\partial^2 f}{\partial z_i\partial\overline{z}_i}(0)V_{\xi}(0)+\sum_{i=1}^{n}\frac{\partial^2 V_\xi}{\partial z_i\partial\overline{z}_i}(0)+i\sum_{j,k=1}^n\frac{\partial^4\varphi}{\partial z_j\partial\overline{z}_j\partial z_k\partial\overline{z}_k}(0)\right). \end{equation} From \eqref{e-gue241220yydq}, we see that \begin{equation}\label{e-gue241121yydIz} i\sum_{j,k=1}^n\frac{\partial^4\varphi}{\partial z_j\partial\overline{z}_j\partial z_k\partial\overline{z}_k}(0)=\frac{1}{2}R_{{\rm scal\,}}(0). \end{equation} Moreover, from~\cite[(3.101)]{Hsiao_Shen_2nd_coefficient_BS_2020}, we see that \begin{equation}\label{e-gue241121yydI} \sum_{i=1}^{n}\frac{\partial^2 V_\xi}{\partial z_i\partial\overline{z}_i}(0)=-R_{{\rm scal\,}}(0). \end{equation} By the definition of $\Delta_b$, it is straightforward to check that \begin{equation}\label{e-gue241121yydII} \sum_{i=1}^{n}\frac{\partial^2 f}{\partial z_i\partial\overline{z}_i}(0)=-\frac{1}{2}(\Delta_bf)(0). 
\end{equation} From \eqref{e-gue241121yyd}, \eqref{e-gue241121yydIz}, \eqref{e-gue241121yydI} and \eqref{e-gue241121yydII}, we get \begin{equation}\label{e-gue241121yydIII} s_1(0,0)=\frac{-1}{2\pi^{n+1}}e^{-2(n+1)f(0)}\left(-(n+1)\Delta_b f(0)-\frac{1}{2}R_{scal}(0)\right). \end{equation} The theorem follows. \end{proof} From Theorem~\ref{t-gue241120yydk}, we generalize the result in~\cite{Hsiao_Shen_2nd_coefficient_BS_2020} to any Reeb-invariant volume form. \begin{theorem}\label{t-gue241122yyd} Recall that we work with the assumption that $X$ is embeddable. Let $(D,x)$ be any coordinate patch and let $\varphi:D\times D\to\C$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} with $\lambda(x)\equiv1$ and \eqref{eq:PiFIO}. Suppose that \begin{equation}\label{e-gue241120yydz} \mbox{$\ddbar_{b,x}\varphi(x,y)$ vanishes to infinite order at $x=y$}. \end{equation} Suppose further that \begin{equation}\label{e-gue241120ycdazz} \mathcal{T}_ys(x,y,t)=0, \ \ \mathcal{T}_ys_j(x,y)=0,\ \ j=0,1,\ldots, \end{equation} where $s(x,y,t)$, $s_j(x,y)$, $j=0,1,\ldots$, are as in \eqref{eq:PiFIO}. Then, $s_1(x,x)$ is well-defined as a smooth function on $X$ and we have \begin{equation}\label{e-gue201221yydbz} \mbox{$s_1(x,x)=\frac{-1}{2\pi^{n+1}}e^{-2(n+1)f(x)}\left(-(n+1)\Delta_b f(x)-\frac{1}{2}R_{scal}(x)\right)$, for every $x\in D$}, \end{equation} where $f(x):=\frac{1}{2(n+1)}\log \frac{dV(x)}{dV_\xi(x)}\in\cC^\infty(X)$, $R_{{\rm scal\,}}$ is the Tanaka--Webster scalar curvature on $X$ (see \eqref{e-gue241122yydI}) and $\Delta_b:\cC^{\infty}(X)\rightarrow\cC^{\infty}(X)$ denotes the CR sublaplacian operator (see \eqref{e-gue241122yyd}). \end{theorem} \begin{remark}\label{rmk:CoefficientsOnComplexManifolds} Let \((L,h)\to M\) be a positive holomorphic line bundle with Hermitian metric \(h\) over a compact complex manifold \(M\) of dimension \(\dim_\C M=n\). Let $R^L$ denote the curvature two form of $L$. Fix any Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle_M$ on $\mathbb CTM$ and let $\Theta$ be the real two form on $M$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle_M$. Let $dV_{\Theta}$ be the volume form on $M$ induced by $\Theta$. For every $m\in\N$, let \((L^m,h^m)\to M\) be the $m$-th power of $(L,h)$, where $h^m$ is the Hermitian metric on $L^m$ induced by $h$. Let $(\,\cdot\,|\,\cdot\,)_m$ be the $L^2$ inner product on $L^2(M,L^m)$ induced by $dV_{\Theta}$ and $h^m$. Let $H^0(M,L^m)$ be the space of global holomorphic sections with values in $L^m$. Let $\set{f_1,\ldots,f_{d_m}}$ be an orthonormal basis of $H^0(M,L^m)$ with respect to $(\,\cdot\,|\,\cdot\,)_m$. The Bergman kernel function is given by \begin{equation}\label{e-gue241214yyd} P_m(x):=\sum^{d_m}_{j=1}\abs{f_j(x)}^2_{h^m}\in\cC^\infty(M). \end{equation} It is well-known that (see~\cite{Catlin_BKE_1999},~\cite{HM14},~\cite{Ma_Marinescu_HMI_BKE_2007},~\cite{Zelditch_BKE_1998}) \begin{equation}\label{e-gue241214yydI} \begin{split} &P_m(x)\sim\sum^{+\infty}_{j=0}b_j(x)m^{n-j}\ \ \mbox{in $S^n_{{\rm loc\,}}(1;M)$},\\ &b_j(x)\in\cC^\infty(M),\ \ j=0,1,\ldots. \end{split} \end{equation} Let us consider the associated circle bundle \(\pi\colon X\to M\) as in Example~\ref{Ex:RelationToBKExpansion}. Let $\mathcal{T}\in\cC^\infty(X,TX)$ be the vector field on $X$ induced by the $S^1$ action on the fibers of $L^*$ and let $\xi$ be the global one form on $X$ so that $\mathcal{T}\lrcorner\xi\equiv1$ and $u\lrcorner\xi=0$, for every $u\in T^{1,0}X\oplus T^{0,1}X$. 
Take \begin{equation}\label{e-gue241214yydIIIw} dV:=\frac{1}{n!}\xi\wedge\Theta^n \end{equation} and let $\Pi$ be the Szeg\H{o} projection associated to $dV$. Let $s_1$ be as in Theorem~\ref{t-gue241122yyd}. We have \begin{equation}\label{e-gue241214yydII} s_1(x,x)=\frac{1}{2\pi}b_1(\pi(x)),\ \ \mbox{for all $x\in X$}, \end{equation} where $\pi: X\To M$ is the natural projection. From Theorem~\ref{t-gue241122yyd} and \eqref{e-gue241214yydII}, we get \begin{equation}\label{e-gue241214ycdb} \mbox{$b_1(\pi(x))=\frac{-(2\pi)}{2\pi^{n+1}}e^{-2(n+1)f(x)}\left(-(n+1)\Delta_b f(x)-\frac{1}{2}R_{scal}(x)\right)$, for every $x\in X$}. \end{equation} We now rewrite \eqref{e-gue241214ycdb} in terms of geometric invariants on $M$. We can check that \begin{equation}\label{e-gue241214yydIIIzz} d\xi(x)=\sqrt{-1}R^L(\pi(x)) \end{equation} and \begin{equation}\label{e-gue241214yydIIIz} dV_\xi(x)=\Bigl(\sqrt{-1}R^L(\pi(x))\Bigr)^n\frac{2^{- n}}{n!}. \end{equation} For every $x\in M$, let \[\dot{R}^L(x): T^{1,0}_xM\To T^{1,0}_xM\] be the linear map given by $\langle\,\dot{R}^L(x)U\,|\,V\,\rangle_M=\langle\,R^L(x)\,,\,U\wedge\ol V\,\rangle$, $U, V\in T^{1,0}_xM$. For every $x\in M$, let ${\rm det\,}\dot{R}^L(x)=\lambda_1(x)\cdots\lambda_n(x)$, where $\lambda_j(x)$, $j=1,\ldots,n$, are the eigenvalues of $\dot{R}^L(x)$. We can check that \begin{equation}\label{e-gue241214yydIII} {\rm det\,}\dot{R}^L(\pi(x))=2^n\frac{dV_\xi(x)}{dV(x)},\ \ \mbox{for every $x\in X$}, \end{equation} and \begin{equation}\label{e-gue241214ycd} f(x)=\frac{1}{2(n+1)}\log \frac{dV(x)}{dV_{\xi}(x)}=\frac{1}{2(n+1)}\log\Bigl(\bigl( {\rm det\,}\dot{R}^L(\pi(x))\bigr)^{-1}2^{n}\Bigr),\ \ \mbox{for every $x\in X$}. \end{equation} Let $\omega:=\frac{\sqrt{-1}}{2\pi}R^L$ be the K\"ahler form on \(M\). The K\"ahler form $\omega$ induces a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle_{\omega}$ on $\mathbb CTM$ and also a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle_{\omega}$ on $T^{*0,q}M$, the bundle of $(0,q)$ forms on $M$, for every $q=0,1,\ldots,n$. Let $(\,\cdot\,|\,\cdot\,)_{\omega}$ be the $L^2$ inner product on $\oplus^n_{q=0}\Omega^{0,q}(M)$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle_{\omega}$, where $\Omega^{0,q}(M):=\cC^\infty(M,T^{*0,q}M)$. The complex Laplacian with respect to $\omega$ is the operator $\Delta_\omega:\cC^{\infty}(M)\rightarrow\cC^{\infty}(M)$ given by \begin{equation}\label{e-gue241122yydz} (\,\Delta_\omega f\,|\,u\,)_\omega=(\,df\,|\,du\,)_{\omega},\quad f\in\cC^\infty(M), u\in\cC^{\infty}(M).\end{equation} Let $g\in\cC^\infty(M)$ and let $\widehat g\in\cC^\infty(X)$ be given by $\widehat g(x):=g(\pi(x))$, for all $x\in X$. We can check that \begin{equation}\label{e-gue241214ycdqq} \Delta_b\widehat{g}(x)=\frac{1}{2\pi}(\Delta_\omega g)(\pi(x)),\ \ \mbox{for all $x\in X$}. \end{equation} Let $r$ be the scalar curvature on $M$ with respect to $\omega=\frac{\sqrt{-1}}{2\pi}R^L$ (see~\cite[(1.8)]{Hsiao_BT_coefficient_2012}). It was shown in~\cite[Theorem 3.5]{Herrmann_Hsiao_Li_Q-R-Sasakian_Szego_2018} that \begin{equation}\label{e-gue241214ycdq} r(\pi(x))=4\pi R_{\rm{scal}}(x),\ \ \mbox{for every $x\in X$}. \end{equation} From \eqref{e-gue241214ycdb}, \eqref{e-gue241214ycd}, \eqref{e-gue241214ycdqq} and \eqref{e-gue241214ycdq}, we get \begin{equation}\label{e-gue241214ycdl} b_1(\pi(x))=(2\pi)^{-n}{\rm det\,}\dot{R}^L(\pi(x))\Bigl(\frac{1}{8\pi}r(\pi(x)) +(n+1)(\Delta_bf)(x)\Bigr). 
\end{equation} From \eqref{e-gue241214ycd} and \eqref{e-gue241214ycdqq}, we can check that \begin{equation}\label{e-gue241214ycdj} (n+1)(\Delta_bf)(x)=\frac{1}{4\pi}\Bigl(\hat r(\pi(x))-r(\pi(x))\Bigr), \end{equation} for every $x\in X$, where $\hat r$ is given by~\cite[(1.8)]{Hsiao_BT_coefficient_2012}. From \eqref{e-gue241214ycdl} and \eqref{e-gue241214ycdj}, we get \begin{equation}\label{e-gue241214ycdk} b_1(x)=(2\pi)^{-n}{\rm det\,}\dot{R}^L(x)\Bigl(\frac{1}{4\pi}\hat r(x)-\frac{1}{8\pi}r(x)\Bigr), \end{equation} for every $x\in M$. The formula \eqref{e-gue241214ycdk} coincides with the result in~\cite[Theorem 1.4]{Hsiao_BT_coefficient_2012}. The computation of the coefficients of the Bergman kernel in a more general setting can be found in \cite{Ma_Marinescu_JRAM_2012}. We also refer the reader to~\cite{Herrmann_Hsiao_Li_Q-R-Sasakian_Szego_2018} for the computation of the coefficients \(b_0\), \(b_1\) and \(b_2\) for quasi-regular Sasakian manifolds. \end{remark} Fix $p\in D$ and take local coordinates $x=(x_1,\ldots,x_{2n+1})$ defined on $D$ so that $x(p)=0$ and $\mathcal{T}=\frac{\pr}{\pr x_{2n+1}}$. Until further notice, we work in $D$ with the local coordinates $x=(x_1,\ldots,x_{2n+1})$. We need the following theorem. \begin{theorem}\label{t-gue241123yyd} With the notations and assumptions in Theorem~\ref{thm:ExpansionMain}, let $\varphi:D\times D\to\C$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO}. Then there exist smooth functions \begin{eqnarray}\label{Eq:Defa_jsSec3} a_{j,s}(x,y)\in\cC^\infty(D\times D),\ \ j,s=0,1,\ldots, \end{eqnarray} such that for any \(\chi\in \mathscr{C}^\infty_c(\R_+)\) we can take $b^\chi_j(x,y,t)$ in \eqref{Eq:LeadingTermMainThm}, $j=0,1,\ldots$, so that \begin{equation}\label{eqn:ThmI1} b^\chi_j(x,y,t)=\sum^{+\infty}_{s=0}a_{j,s}(x,y)\chi^{(s)}(t)t^{n+s-j},\quad j=0,1,\ldots,\end{equation} where \(\chi^{(s)}:=(\partial/\partial t)^s\chi\) and the infinite sum in \eqref{eqn:ThmI1} converges uniformly in the $\cC^{\infty}(K\times K\times I)$ topology, for any compact subsets $K\subset D$ and $I\subset\mathbb R$, for all $j=0,1,\ldots$. \end{theorem} \begin{proof} From~\cite[Lemma 4.2, Lemma 4.3, Theorem 4.11]{HHMS23}, we see that there is a phase $\varphi_1(x,y)\in\cC^\infty(D\times D)$ satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO} such that \begin{equation}\label{e-gue241124ycd} \chi_k(A)(x,y)=\int_0^{+\infty} e^{ikt\varphi_1(x,y)}\hat b^\chi(x,y,t,k)dt+O\left(k^{-\infty}\right)~\text{on}~D\times D, \end{equation} where $\hat b^\chi(x,y,t,k)\in S^{n+1}_{\mathrm{loc}} (1;D\times D\times{\R}_+)$, \begin{equation}\label{e-gue241124ycdI} \begin{split} &\hat b^\chi(x,y,t,k)\sim\sum_{j=0}^{+\infty}\hat b^\chi_{j}(x,y,t)k^{n+1-j}~ \text{in $S^{n+1}_{\mathrm{loc}}(1;D\times D\times{\R}_+)$},\\ &\hat b^\chi_j(x,y,t)\in\mathscr{C}^\infty(D\times D\times{\R}_+),~j=0,1,2,\ldots, \end{split} \end{equation} and for some bounded open interval $I^\chi\Subset\R_+$, \begin{equation}\label{e-gue241115ycdaz} \begin{split} {\rm supp\,}_t\hat b^\chi(x,y,t,k),~{\rm supp\,}_t \hat b^\chi_j(x,y,t)\subset I^\chi,\ \ j=0,1,2,\ldots~. \end{split} \end{equation} and \begin{equation}\label{e-gue241124yyd} \begin{split} &\hat b^\chi_j(x,y,t)=\sum^{N_j}_{s=0}\hat a_{j,s}(x,y)\chi^{(s)}(t)t^{n+s-j},\ \ j=0,1,\ldots,\\ &\hat a_{j,s}(x,y)\in\cC^\infty(D\times D),\ \ s=0,1,\ldots,N_j,\ \ j=0,1,\ldots, \end{split} \end{equation} where $N_j\in\mathbb N$, $j=0,1,\ldots$ and \(\hat a_{j,s}(x,y)\), \( s=0,1,\ldots,N_j,\, j=0,1,\ldots\), are independent of \(\chi\). 
Let $\varphi\in\cC^\infty(D\times D)$ be any phase function satisfying \eqref{Eq:PhaseFuncMainThm} and \eqref{eq:PiFIO}. From~\cite[Theorem 5.4]{Hsiao_Marinescu_Szego_lower_energy_2017}, we see that \begin{equation}\label{e-gue241124yydI} \varphi(x,y)=g(x,y)\varphi_1(x,y)+f(x,y), \end{equation} where $g(x,y)$, $f(x,y)\in\cC^\infty(D\times D)$, $f(x,y)$ vanishes to infinite order at $x=y$. From Malgrange preparation theorem, near $(0,0)$, we have \begin{equation}\label{e-gue241124yydII} \begin{split} &\varphi(x,y)=\alpha(x,y)(-y_{2n+1}+\hat\varphi(x,y')),\\ &\varphi_1(x,y)=\beta(x,y)(-y_{2n+1}+\hat\varphi_1(x,y')),\\ \end{split} \end{equation} where $y'=(y_1,\ldots,y_{2n})$, $\alpha, \beta, \hat\varphi, \hat\varphi_1$ are smooth functions defined near $(0,0)$. We may assume that \eqref{e-gue241124yydII} holds on $D\times D$ and hence $\alpha, \beta, \hat\varphi, \hat\varphi_1\in\cC^\infty(D\times D)$. We may take $D$ small enough so that $\alpha(x,y)\neq0$, $\beta(x,y)\neq0$ at every point of $D\times D$. It is not difficult to see that \begin{equation}\label{e-gue241124yyda} \begin{split} &{\rm Im\,}\alpha(x,y)=O(\abs{x-y}),\\ &{\rm Im\,}\beta(x,y)=O(\abs{x-y}). \end{split} \end{equation} From \eqref{e-gue241124yydI} and \eqref{e-gue241124yydII}, it is not difficult to see that \begin{equation}\label{e-gue241124yydr} \hat\varphi(x,y')-\hat\varphi_1(x,y')=O(\abs{(x'-y')}^N),\ \ \mbox{for all $N\in\mathbb N$}. \end{equation} In order to simplify notation we will use \(\hat b(x,y,t,k)\), \(\hat b_j(x,y,t)\) and \(I\) instead of \(\hat b^\chi(x,y,t,k)\), \(\hat b^\chi_j(x,y,t)\) and \(I^\chi\), keeping in mind that \(\hat b(x,y,t,k)\), \(\hat b_j(x,y,t)\) and \(I\) will depend on \(\chi\). Let $\Td{\hat b}(x,y,\Td t,k)\in\cC^\infty(D\times D\times I^{\mathbb C})$ be an almost analytic extension of $\hat b(x,y,t,k)$ in $t$ variable, where $I^{\mathbb C}$ is a bounded open set in $\mathbb C$ so that $I^{\mathbb C}\cap\mathbb R=I$. Recall that $I$ is a bounded open interval as in \eqref{e-gue241115ycdaz}. We take $\Td{\hat b}(x,y,\Td t,k)$ so that \begin{equation}\label{e-gue241115ycdazz} \begin{split} {\rm supp\,}_{\Td t}\Td{\hat b}(x,y,\Td t,k)\subset I^{\mathbb C}, \end{split} \end{equation} and \begin{equation}\label{e-gue241124yydm} \Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)\sim\sum^{+\infty}_{j=0}\Td{\hat b}_j(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t)k^{n+1-j}\ \ \mbox{in $S^{n+1}_{{\rm loc\,}}(1;D\times D\times\mathbb R)$}, \end{equation} where $\Td{\hat b}_j(x,y,\Td t)\in\cC^\infty(D\times D\times I^{\mathbb C})$ denotes an almost analytic extension of $\hat b_j(x,y,t)$ in $t$ variable, $j=0,1,\ldots$, so that ${\rm supp\,}_{\Td t}\Td{\hat b}_j(x,y,\Td t)\subset I^{\mathbb C}$, $j=0,1,2,\ldots$, and \begin{equation}\label{e-gue241124yydn} \Td{\hat b}_j(x,y,\Td t)=\sum^{N_j}_{s=0}\hat a_{j,s}(x,y)(\frac{\pr^s}{\pr\Td t^s}\Td\chi)(\frac{\alpha(x,y)}{\beta(x,y)}t)(\frac{\alpha(x,y)}{\beta(x,y)}t)^{n+s-j}, \end{equation} where $\Td\chi$ is an almost analytic extension of $\chi$ with ${\rm supp\,}\Td\chi\subset I^{\mathbb C}$ and the \(\hat a_{j,s}(x,y)\)'s are independent of \(\chi\). From \eqref{e-gue241124yyda}, we can apply Stokes' theorem to \eqref{e-gue241124ycd} and get \begin{equation}\label{e-gue241124ycdp} \chi_k(A)(x,y)=\int_0^{+\infty} e^{ikt\alpha(x,y)(-y_{2n+1}+\hat\varphi_1(x,y'))}\Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)\frac{\alpha(x,y)}{\beta(x,y)}dt+O\left(k^{-\infty}\right)~\text{on}~D\times D.
\end{equation} From \eqref{e-gue241124yydr} and the fact that ${\rm Im\,}\hat\varphi_1(x,y')\geq C\abs{x'-y'}^2$, where $C>0$ is a constant, we can change $\hat\varphi_1(x,y')$ in \eqref{e-gue241124ycdp} to $\hat\varphi(x,y')$ and get \begin{equation}\label{e-gue241124ycdq} \chi_k(A)(x,y)=\int_0^{+\infty} e^{ikt\varphi(x,y)}\Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)\frac{\alpha(x,y)}{\beta(x,y)}dt+O\left(k^{-\infty}\right)~\text{on}~D\times D. \end{equation} By the Borel construction, we can find $ b^\chi(x,y,t,k)\in S^{n+1}_{\mathrm{loc}} (1;D\times D\times{\R}_+)$, such that $b^\chi(x,y,t,k)-\frac{\alpha(x,y)}{\beta(x,y)}\Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)$ vanishes to infinite order at $x=y$, \begin{equation}\label{e-gue241124ycdIg} \begin{split} &b^\chi(x,y,t,k)\sim\sum_{j=0}^{+\infty}b^\chi_{j}(x,y,t)k^{n+1-j}~ \text{in $S^{n+1}_{\mathrm{loc}}(1;D\times D\times{\R}_+)$},\\ &b^\chi_j(x,y,t)\in\mathscr{C}^\infty(D\times D\times{\R}_+),~j=0,1,2,\ldots, \end{split} \end{equation} and ${\rm supp\,}_t b^\chi(x,y,t,k)\subset I^\chi$, ${\rm supp\,}_t b^\chi_j(x,y,t)\subset I^\chi$, $j=0,1,2,\ldots$, and \begin{equation}\label{e-gue241124ycdg} \begin{split} &b^\chi_j(x,y,t)=\sum^{+\infty}_{s=0}a_{j,s}(x,y)\chi^{(s)}(t)t^{n+s-j},\quad j=0,1,\ldots,\\ &a_{j,s}(x,y)\in\cC^\infty(D\times D),\ \ j,s=0,1,\ldots, \end{split}\end{equation} where the infinite sum in \eqref{e-gue241124ycdg} converges uniformly in $\cC^{\infty}(K\times K\times I)$ topology, for any compact subsets $K\subset D$, \(I\subset \R\) and the \(a_{j,s}(x,y)\)'s are independent of \(\chi\). Since $b^\chi(x,y,t,k)-\frac{\alpha(x,y)}{\beta(x,y)}\Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)$ vanishes to infinite order at $x=y$, we can replace $\frac{\alpha(x,y)}{\beta(x,y)}\Td{\hat b}(x,y,\frac{\alpha(x,y)}{\beta(x,y)}t,k)$ in \eqref{e-gue241124ycdq} by $b^\chi(x,y,t,k)$. The theorem follows. \end{proof} In the rest of this section, we assume that $\varphi$ satisfies \eqref{Eq:PhaseFuncMainThm} with $\lambda(x)=1+O(\abs{x}^3)$, \eqref{eq:PiFIO}, \eqref{e-gue201226ycdb}, \eqref{e-gue241220yydq} and has the following form \begin{equation}\label{eqn:ThmI2} \begin{split} &\varphi(x,y)=-y_{2n+1}+x_{2n+1}+\hat\varphi(x,y'),\quad y'=(y_1,\cdots,y_{2n}),\\ &\hat\varphi(x,y')\in\cC^\infty(D\times D),\\ &\hat\varphi(x,y')=\frac{i}{2}\sum_{j=1}^n\left[|z_j-w_j|^2+(\overline{z}_jw_j-z_j\overline{w}_j)\right]+O\left(|(x,y')|^4\right). \end{split} \end{equation} This is always possible (see~\cite[(3.99), Proposition 3.2]{Hsiao_Shen_2nd_coefficient_BS_2020}). By using integration by parts, we can take $b^\chi_j(x,y,t)$ in \eqref{Eq:LeadingTermMainThm}, $j=0,1,\ldots$, so that \begin{equation} \label{e-gue241128yydz} \begin{split} &b^\chi_j(x,y,t)=b^\chi_j(x,y',t)=\sum^{+\infty}_{s=0}a_{j,s}(x,y')\chi^{(s)}(t)t^{n+s-j},\quad j=0,1,\ldots,\\ &a_{j,s}(x,y')\in\cC^\infty(D\times D),\ \ j,s=0,1,\ldots \end{split} \end{equation} and the \(a_{j,s}(x,y')\)'s are independent of \(\chi\). Assume $D=\widehat{D}\times(-\epsilon,\epsilon)$, where $\widehat{D}$ is an open set of $\R^{2n}$ containing $0\in\R^{2n}$ and $\epsilon>0$ is a constant. Fix $\tau\in\cC^{\infty}_c((-\epsilon,\epsilon))$ with $\tau\equiv 1$ near $0\in\mathbb R$, and fix $m_0\in\mathbb R$. Consider the map \begin{align*} \reallywidetilde{\chi_k(A)}:\cC^{\infty}_{c}(\hat D)&\longrightarrow\cC^{\infty}(D),\\ u&\longmapsto\frac{k}{2\pi}\int\chi_k(A)(x,y)e^{ikm_0y_{2n+1}}\tau(y_{2n+1})u(y')dy'dy_{2n+1}.
\end{align*} This distribution kernel of $\reallywidetilde{\chi_k(A)}$ is of the form: \begin{equation}\label{eqn:ThmI3} \begin{split} \reallywidetilde{\chi_k(A)}(x,y')&=\frac{k}{2\pi}\int e^{ikt\varphi(x,y)+ikm_0y_{2n+1}}b^\chi(x,y',t,k)\tau(y_{2n+1})dy_{2n+1}dt+O(k^{-\infty})\\ &=\frac{k}{2\pi}\int e^{ikt(-y_{2n+1}+\widehat{\varphi}(x,y'))+ikm_0y_{2n+1}}b^\chi(x,y',t,k)\tau(y_{2n+1})dy_{2n+1}dt+O(k^{-\infty})\\ &=\frac{k}{2\pi}\int e^{ikt(-y_{2n+1}+\widehat{\varphi}(x,y'))+ikm_0y_{2n+1}}b^\chi(x,y',t,k)dy_{2n+1}dt+O(k^{-\infty})\\ &=e^{ikm_0\widehat{\varphi}(x,y')}b^\chi(x,y',m_0,k)+O(k^{-\infty}). \end{split} \end{equation} We first prove the following uniqueness of the expansion: \begin{theorem}\label{t-gue241125yyda} With the notations and assumptions above, assume $$\int e^{it\varphi(x,y)}a(x,y',t,k)dt=\int e^{it\varphi(x,y)}\widehat{a}(x,y',t,k)dt+O(k^{-\infty})\quad \text{on }D\times D, $$ where $\varphi$ satisfies \eqref{eqn:ThmI2}, $a(x,y',t,k)$, $\hat a(x,y',t,k)\in S^{n+1}_{{\rm loc\,}}(1;D\times D\times\mathbb R)$, \[\begin{split} &\mbox{$a(x,y',t,k)\sim\sum^{\infty}_{j=0}a_j(x,y',t)k^{n+1-j}$ in $S^{n+1}_{{\rm loc}}(1;D\times D\times\mathbb R)$},\\ &\mbox{$\widehat{a}(x,y',t,k)\sim\sum^{\infty}_{j=0}\widehat{a}_j(x,y',t)k^{n+1-j}$ in $S^{n+1}_{{\rm loc\,}}(1;D\times D\times\mathbb R)$,}\\ &\mbox{$a_j, \hat a_j\in\cC^\infty(D\times D\times\mathbb R)$, $j=0,1,\ldots$},\\ &\mbox{$\underset{t}{\rm supp\,}a\subset I$, $\underset{t}{\rm supp\,}\widehat{a}\subset I$, $\underset{t}{\rm supp\,}a_j\subset I,$ $\underset{t}{\rm supp\,}\widehat{a}_j\subset I, j=0,1,\dots,$} \end{split}\] $I \Subset\mathbb R$ is a bounded interval. Then $$a_j(x,y',t)=\widehat{a}_j(x,y',t)+O(|(x'-y')|^N),\quad \forall N,\forall t\in I, \forall j=0,1,\dots.$$ \end{theorem} \begin{proof} Fix $m_0\in I.$ We can repeat the process in \eqref{eqn:ThmI3} and deduce that $$e^{ikm_0\widehat{\varphi}(x,y')}a(x,y',m_0,k)=e^{ikm_0\widehat{\varphi}(x,y')}\widehat{a}(x,y',m_0,k)+F_k(x,y'),\quad F_k(x,y')=O(k^{-\infty}).$$ Hence, \begin{equation}\label{eqn:ThmI4} a(x,y',m_0,k)-\widehat{a}(x,y',m_0,k)=e^{-ikm_0\widehat{\varphi}(x,y')}F_k(x,y'). \end{equation} From \eqref{eqn:ThmI4}, we have \begin{align*} a_0(x,x',m_0)-\widehat{a}_0(x,x',m_0)&=\lim_{k\to\infty}\frac{1}{k^{n+1}}e^{-ikm_0\widehat{\varphi}(x,x')}F_k(x,x')\\ &=\lim_{k\to\infty}\frac{1}{k^{n+1}}e^{ikm_0x_{2n+1}}F_k(x,x')=0. \end{align*} Similarly, $$\left(\partial^{\alpha}_{x}\partial^{\beta}_{y'}(a_0-\widehat{a}_0)\right)(x,x',m_0)=\lim_{k\to\infty}\frac{1}{k^{n+1}}\partial^{\alpha}_{x}\partial^{\beta}_{y'}\left(e^{-ikm_0\widehat{\varphi}(x,y')}F_k(x,y')|_{(x,x')}\right)=0,$$ for all $\alpha\in\N_0^{2n+1},$ $\beta\in\N_0^{2n}.$ Similarly, we can show that $a_j(x,y',t)=\widehat{a}_j(x,y',t)+O(|x'-y'|^N),$ for all $N\in\N,$ $j=1,2,\cdots.$ \end{proof} From now on, we assume that $b^\chi_j(x,y,t)$, $j=0,1,\ldots,n$, satisfy \eqref{e-gue241128yydz}. \begin{theorem}\label{thm:s1_indep_of_deri_of_chi} With the notations and assumptions used above, recall that $\varphi$ satisfies \eqref{e-gue201226ycdb}, \eqref{e-gue241220yydq}, \eqref{eqn:ThmI2}, and $b^\chi_j(x,y,t)$, $j=0,1,\ldots$, in \eqref{Eq:LeadingTermMainThm}, satisfy \eqref{e-gue241128yydz}. 
Then, for all $x\in D$, $t\in\mathbb R$ and \(\chi\in\mathscr{C}_c^\infty(\R_+)\), \begin{equation}\label{eqn:ThmIII4} \begin{split} &b^\chi_0(0,0,t)=s_0(0,0)\chi(t)t^n,\\ &\frac{\pr b^\chi_0}{\pr x_j}(0,0,t)=\frac{\pr s_0}{\pr x_j}(0,0)\chi(t)t^n,\ \ j=1,\ldots,2n+1,\\ &\frac{\pr b^\chi_0}{\pr y_j}(0,0,t)=\frac{\pr s_0}{\pr y_j}(0,0)\chi(t)t^n,\ \ j=1,\ldots,2n+1,\\ &\frac{\pr^2b^\chi_0}{\pr\ol z_j\pr z_\ell}(0,0,t)=\frac{\pr^2b^\chi_0}{\pr\ol w_j\pr w_\ell}(0,0,t)=0,\ \ j, \ell=1,\ldots,n, \end{split} \end{equation} and \begin{equation}\label{eqn:ThmIII5} a_{1,s}(p,p)=0,\quad s=1,2,\cdots, \end{equation} where $s_0(x,y)$ is as in \eqref{eq:PiFIO} and $s_0(x,y)$ satisfies \eqref{e-gue241128yyd}, \eqref{e-gue241120yydg}, \eqref{e-gue241120yydu}, \eqref{e-gue241125ycd} and $a_{1,s}(x,y')$ is as in \eqref{e-gue241128yydz}, $s=1,2,\cdots.$ \end{theorem} \begin{proof} The proof of \eqref{eqn:ThmIII4} is the same as the proofs of \eqref{e-gue241120yydu} and \eqref{e-gue241125ycd}. From \eqref{eq:PiFIO} and \eqref{eqn:ThmI2}, we can check that \[\begin{split} &\left((-i)\mathcal{T}\Pi\right)(x,y)=\int e^{it\varphi(x,y)}g(x,y',t)dt,\\ &\left(\Pi(-i\mathcal{T})\right)(x,y)=\int e^{it\varphi(x,y)}\hat g(x,y',t)dt,\\ &g(x,y',t)\sim\sum^{\infty}_{j=0}t^{n+1-j}g_j(x,y')\ \ \mbox{in $S^{n+1}_{1,0}(D\times D\times\mathbb R)$},\\ &\hat g(x,y',t)\sim\sum^{\infty}_{j=0}t^{n+1-j}\hat g_j(x,y')\ \ \mbox{in $S^{n+1}_{1,0}(D\times D\times\mathbb R)$},\\ &g_j(x,y'), \hat g_j(x,y')\in\cC^\infty(D\times D),\ \ j=0,1,\ldots, \end{split}\] \begin{equation}\label{eqn:ThmIII7} g_0(x,y')-\hat g_0(x,y')=O(\abs{x-y}^3). \end{equation} Let $\reallywidetilde{\chi_k(A)}(x,y')$ be as in \eqref{eqn:ThmI3}. We have \begin{align} \frac{1}{k}(-i\mathcal{T})\Pi\reallywidetilde{\chi_k(A)}(p,p)=k^{n+1}m_0b^\chi_0(p,p',m_0)+k^nm_0b^\chi_1(p,p',m_0)+O(k^{n-1})\nonumber\\=k^{n+1}m_0^{n+1}\chi(m_0)s_0(p,p')+\sum^{\infty}_{s=0}k^na_{1,s}(p,p')\chi^{(s)}(m_0)m_0^{n+s}+O(k^{n-1}). \label{eqn:ThmIII10} \end{align} Let $\widehat{\chi}(t)=t\chi(t).$ We have $\frac{1}{k}\Pi(-i\mathcal{T})\reallywidetilde{\chi_k(A)}=\reallywidetilde{\widehat{\chi_k}(A)}.$ From this observation, we get \begin{equation}\label{eqn:ThmIII11} \begin{split} &\reallywidetilde{\widehat{\chi_k}(A)}(p,p)\\ &=k^{n+1}m_0^{n+1}\chi(m_0)s_0(p,p')+\sum^{\infty}_{s=0}k^na_{1,s}(p,p')(t\chi(t))^{(s)}|_{t=m_0}m_0^{n-1+s}+O(k^{n-1}). \end{split} \end{equation} From \eqref{eqn:ThmIII7} and the stationary phase formula of H\"ormander, we can check that \begin{equation}\label{eqn:ThmIII9} \left(\frac{1}{k}(-i\mathcal{T})\Pi\reallywidetilde{\chi_k(A)}\right)(p,p)-\left(\frac{1}{k}\Pi(-i\mathcal{T})\reallywidetilde{\chi_k(A)}\right)(p,p)=O(k^{n-1}). \end{equation} From \eqref{eqn:ThmIII10}, \eqref{eqn:ThmIII11} and \eqref{eqn:ThmIII9}, we get \begin{equation}\label{eqn:ThmIII12} \sum^{\infty}_{s=0}a_{1,s}(p,p')\chi^{(s)}(m_0)m_0^{n+s}=\sum^{\infty}_{s=0}a_{1,s}(p,p')(t\chi(t))^{(s)}|_{t=m_0}m_0^{n-1+s}.
\end{equation} Take $\chi$ so that $\chi(m_0)\neq 0$ and $\chi^{(s)}(m_0)=0,$ for all $s\geq 1.$ From \eqref{eqn:ThmIII12}, we get $$a_{1,0}(p,p')\chi(m_0)m_0^n=a_{1,0}(p,p')\chi(m_0)m_0^n+a_{1,1}(p,p')\chi(m_0)m_0^n.$$ Thus, $a_{1,1}(p,p')=0.$ Suppose $a_{1,s}(p,p')=0,$ for all $1\leq s\leq N_0,$ for some $N_0\in\N.$ Take $\chi$ so that $\chi^{(N_0)}(m_0)\neq 0$, $\chi^{(s)}(m_0)=0$, $s>N_0.$ By induction assumption and \eqref{eqn:ThmIII12}, we get $$a_{1,0}(p,p')\chi(m_0)m_0^n=a_{1,0}(p,p')\chi(m_0)m_0^n+(N_0+1)a_{1,N_0+1}(p,p')\chi^{(N_0)}(m_0)m_0^{n+N_0}.$$ Thus, $a_{1,N_0+1}(p,p')=0.$ By induction we get \eqref{eqn:ThmIII5}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t-gue241115yyd}] Let $\chi, \tau\in\cC^\infty_c(\mathbb R_+)$, $\tau\equiv1$ on ${\rm supp\,}\chi$. By Theorem~\ref{t-gue241123yyd} and Theorem~\ref{thm:s1_indep_of_deri_of_chi}, we have the following expansion: \begin{equation}\label{eqn:chiktp} \begin{split} &\chi_k(T_P)(x,y)=\int e^{ikt\varphi(x,y)}\left(k^{n+1}\hat b_0(x,y',t,\chi)+k^n\hat b_1(x,y',t,\chi)+O(k^{n-1})\right)dt,\\ &\tau_k(T_P)(x,y)=\int e^{ikt\varphi(x,y)}\left(k^{n+1}\hat b_0(x,y',t,\tau)+k^n\hat b_1(x,y',t,\tau)+O(k^{n-1})\right)dt, \end{split} \end{equation} where \begin{equation}\label{e-gue241206ycd} \begin{split} &\hat b_0(x,y',t,\chi)=\hat b_0(x,y,t,\chi)=b^\chi_0(x,y,t)=\sum^{+\infty}_{s=0}a_{0,s}(x,y')\chi^{(s)}(t)t^{n+s},\\ &\hat b_1(x,y',t,\chi)=\hat b_1(x,y,t,\chi)=b^\chi_1(x,y,t)=\sum^{+\infty}_{s=0}a_{1,s}(x,y')\chi^{(s)}(t)t^{n+s-1},\\ &a_{j,s}(x,y')\in\cC^\infty(D\times D),\ \ j,s=0,1,\ldots,\\ &a_{1,s}(p,p)=0,\ \ s=1,2,\ldots, \end{split} \end{equation} where $b^\chi_0(x,y,t)$, $b^\chi_1(x,y,t)$ are as in \eqref{Eq:LeadingTermMainThm}. By the relation \begin{equation}\label{eqn:relation tauchi=chi} \tau_k(T_P)\circ\chi_k(T_P)=(\tau\chi)_k(T_P)=\chi_k(T_P), \end{equation} and the complex stationary phase formula, we have \begin{equation} \begin{split} &\tau_k(T_P)\circ\chi_k(T_P)(x,y)\\ &=\int e^{ik(s\varphi(x,u)+t\varphi(u,y))}\Bigg[k^{2n+2}\hat b_0(x,u,s,\tau)\hat b_0(u,y,t,\chi)\\ &\quad+k^{2n+1}\Big(\hat b_1(x,u,s,\tau)\hat b_0(u,y,t,\chi)+\hat b_0(x,u,s,\tau)\hat b_1(u,y,t,\chi)\Big)+O(k^{2n})\Bigg] V(u)dudsdt\\ &=\int e^{ikt(\sigma\varphi(x,u)+\varphi(u,y))}\Bigg[k^{2n+2}\hat b_0(x,u,t\sigma,\tau)\hat b_0(u,y,t,\chi)\\ &\quad+k^{2n+1}\Big(\hat b_1(x,u,t\sigma,\tau)\hat b_0(u,y,t,\chi)\\ &\quad+\hat b_0(x,u,t\sigma,\tau)\hat b_1(u,y,t,\chi)\Big)+O(k^{2n})\Bigg] tV(u)dud\sigma dt\\ &=\int e^{ikt\varphi(x,y)}\Bigg[k^{n+1}\beta^{\chi,\tau}_0(x,y',t)+k^n\beta^{\chi,\tau}_1(x,y',t)+O(k^{n-1})\Bigg]dt, \end{split} \end{equation} where $\beta^{\chi,\tau}_0(x,y',t), \beta^{\chi,\tau}_1(x,y',t)\in\cC^\infty(D\times D\times I)$. From Theorem~\ref{t-gue241125yyda} (the uniqueness of the expansion) and \eqref{e-gue241206ycd}, we see that \begin{equation}\label{e-gue241206ycdr} \begin{split} &\beta^{\chi,\tau}_0(p,p,t)=\hat b_0(p,p,t,\chi)=b^\chi_0(p,p,t)=s_0(p,p)\chi(t)t^n,\\ &\beta^{\chi,\tau}_1(p,p,t)=\hat b_1(p,p,t,\chi)=a_{1,0}(p,p)\chi(t)t^{n-1}. \end{split} \end{equation} From \eqref{e-gue241206ycdr} and the stationary phase formula of H\"ormander, we have \[\begin{split} &k^na_{1,0}(p,p)t^{n-1}\chi(t)\\ &= k^nt^{-n-1}L^{(1)}_{(u,\sigma)}\Big(\hat b_0(p,u,\sigma t,\tau)\hat b_0(u,p,t,\chi)e^{2(n+1)f(u)}\Big)|_{(u,\sigma)=(p,1)}\\ &+(2\pi^{n+1})2k^na_{1,1}(p,p)t^{n-1}\chi(t)s_0(p,p)V(p)\chi(t), \end{split}\] where $L^{(1)}_{(u,\sigma)}$ is as in \eqref{e-gue241207ycdm}.
From \eqref{eqn:ThmIII4} and the fact that $s_0(p,p)=\frac{1}{2\pi^{n+1}}\left(\frac{V_{\xi}}{V}\right)(p)$, we can check that \begin{align} a_{1,0}(p,p)t^{n-1}\chi(t)=&-L^{(1)}_{(u,\sigma)}\Big(s_0(p,u)s_0(u,p)e^{2(n+1)f(u)}\tau(\sigma t)\sigma^n\Big)|_{(u,\sigma)=(p,1)}t^{n-1}\chi(t)\nonumber\\=&- L^{(1)}_{(u,\sigma)}\Big(s_0(p,u)s_0(u,p)e^{2(n+1)f(u)}\sigma^n\Big)|_{(u,\sigma)=(p,1)}t^{n-1}\chi(t). \end{align} Therefore, by \eqref{e-gue241121yyd}, we have \begin{equation} a_{1,0}(p,p)=s_1(p,p). \end{equation} If we assume further $dV=dV_\xi$, then $f\equiv 0$, and thus \begin{equation} a_{1,0}(p,p)=\frac{1}{4\pi^{n+1}}R_{{\rm scal\,}}(p). \end{equation} \end{proof} \begin{remark}\label{rmk:verifyargumentrmklocalinvariant} From the arguments in the proof of Theorem~\ref{t-gue241115yyd} one can see that for any \(j\geq 0\) we have that \(b_j(x,x,t)\) is given by a polynomial in the derivatives of \(b_0,\ldots,b_{j-1}\), \(\xi\) and \(dV\) at \((x,x,t)\). Hence, as a conclusion, we obtain the statement in Remark~\ref{rmk:ajLocalInvariants}. \end{remark} \begin{proof}[Proof of Theorem~\ref{t-gue241115ycd}] In view of Theorem~\ref{t-gue241123yyd}, we only need to prove \eqref{e-gue241116yyd}. Now assume that $\mathcal{T}$ is a CR vector field. Let $x=(x_1,\ldots,x_{2n+1})$ be local coordinates defined on an open set $D$ of $X$ with $\mathcal{T}=\frac{\pr}{\pr x_{2n+1}}$ on $D$. From now on, we work on $D$ with the local coordinates $x=(x_1,\ldots,x_{2n+1})$. Since $\mathcal{T}$ is a CR vector field, we can take $\varphi:D\times D\to\C$ satisfying \eqref{Eq:PhaseFuncMainThm}, \eqref{eq:PiFIO} and \begin{equation}\label{e-gue241210yyd} \varphi(x,y)=x_{2n+1}-y_{2n+1}+\hat\varphi(x',y'),\ \ \hat\varphi(x',y')\in\cC^\infty(D\times D). \end{equation} We take $b^\chi_j(x,y,t)$ in \eqref{Eq:LeadingTermMainThm}, $j=0,1,\ldots$, so that \eqref{e-gue241128yydz} holds. Let $\reallywidetilde{\chi_k(A)}(x,y')$ be as in \eqref{eqn:ThmI3}. We have \begin{equation}\label{e-gue241210ycdy} \frac{1}{k}\reallywidetilde{\chi_k(A)(-i\mathcal{T})}(x,x')= \sum^{N}_{j=0}k^{n+1-j}m_0b^\chi_j(x,x',m_0)+O(k^{n-N}), \end{equation} for every $N\in\mathbb N$, for every $x\in D$. Let $\widehat{\chi}(t)=t\chi(t).$ We have $\frac{1}{k}\reallywidetilde{\chi_k(A)(-i\mathcal{T})}=\reallywidetilde{\widehat{\chi_k}(A)}$. From this observation and \eqref{e-gue241210ycdy}, we can repeat the proof of \eqref{eqn:ThmIII5} and deduce that \begin{equation}\label{e-gue241210ycdk} a_{j,s}(x,x')=0, \ \mbox{for all $s=1,2,\ldots$, for all $j=0,1,\ldots$}, \end{equation} where $a_{j,s}$, $j, s=0,1,\ldots$, are as in \eqref{e-gue241128yydz}. Thus, \begin{equation}\label{e-gue241210ycdl} b^\chi_j(x,x',t)=a_{j,0}(x,x')\chi(t)t^{n-j},\ \ j=0,1,\ldots. \end{equation} Note that $a_{j,0}$ is independent of $\chi$, $j=0,1,\ldots$. We have that $x\mapsto\chi_k(A)(x,x)$ is $\mathcal{T}$-invariant. From this observation and \eqref{e-gue241210ycdl}, it is not difficult to see that $b^\chi_j(x,x',t)$ is independent of $x_{2n+1}$, for all $j=0,1,\ldots$, and hence \begin{equation}\label{e-gue241210ycdm} b^\chi_j(x,x',t)=b^\chi_j(x',x',t)=a_{j,0}(x',x')\chi(t)t^{n-j},\ \ j=0,1,\ldots. \end{equation} Since $\ddbar_{b,x}\varphi(x,y)$ and $\ddbar_{b,x}\ol\varphi(y,x)$ vanish to infinite order at $x=y$, it is straightforward to check that \begin{equation}\label{e-gue241210ycdn} \mbox{$\ddbar_{b,x}b_j(x,y',t)$, $\ddbar_{b,x}\ol b_j(y,x',t)$ vanish to infinite order at $x=y$}.
\end{equation} From \eqref{e-gue241210ycdm} and \eqref{e-gue241210ycdn}, we can repeat the proof of~\cite[Lemma 3.1]{Hsiao_Galasso_JGA_2023} with minor changes and deduce that there are $\alpha_j(x',y')\in\cC^\infty(D\times D)$, $j=0,1,\ldots$, such that \begin{equation}\label{e-gue241210ycdq} \mbox{$b^\chi_j(x,y',t)-\alpha_j(x',y')\chi(t)t^{n-j}$ vanishes to infinite order at $x=y$, $j=0,1,\ldots$}. \end{equation} From \eqref{e-gue241210ycdq}, we can change $b^\chi_j(x,y',t)$ to $\alpha_j(x',y')\chi(t)t^{n-j}$, for all $j=0,1,\ldots$. The theorem follows. \end{proof} \bibliographystyle{plain} \begin{thebibliography}{99} \bibitem{Boutet_Sjoestrand_Szego_Bergman_kernel_1975} L.~Boutet de Monvel and J.~Sj{\"o}strand. \emph{{Sur la singularit\'{e} des noyaux de Bergman et de Szeg\H{o}}.} {Journ{\'e}es {\'E}quations aux D{\'e}riv{\'e}es Partielles} (1975) 123-164. Ast\'{e}risque, No. 34--35. \bibitem{Catlin_BKE_1999} D.~Catlin, \emph{The Bergman kernel and a theorem of Tian}, Analysis and Geometry in Several Complex Variables, Birkh\"auser, Boston, (1999), pages 1--23. \bibitem{Donaldson_cscK_2001} S.~Donaldson, \emph{Scalar curvature and projective embeddings I}, Journal of Differential Geometry, volume 59, no, 3, (2001), pages 479--522. \bibitem{Hsiao_Galasso_JGA_2023} A. Galasso and C.-Y. Hsiao, \emph{Toeplitz operators on CR manifolds and group actions}, J. Geom. Anal. {\bf 33} (2023), no.~1, Paper No. 21, 55 pp.; MR4510165 \bibitem{Herrmann_Hsiao_Li_Q-R-Sasakian_Szego_2018} H.~Herrmann, C.-Y.~Hsiao and X.~Li, \emph{{Szeg\H{o} kernel asymptotic expansion on strongly pseudoconvex CR manifolds with \(S^1\) action. }} Int. J. Math. (2018), 1850061. \bibitem{Herrmann_Hsiao_Li_T_equivariant_Szego_2020} H.~Herrmann, C.-Y.~Hsiao and X.~Li, \emph{{Torus equivariant Szeg\H{o} kernel asymptotics on strongly pseudoconvex CR manifolds. }} Acta Math. Vietnam. 45 (2020), no. 1, 113–135. \bibitem{HHMS23} H.~Herrmann, C.-Y.~Hsiao, G.~Marinescu and W.-C.~Shen. \emph{Semi-classical spectral asymptotics of Toeplitz operators on CR manifolds}, arXiv preprint arXiv:2303.17319 (2023). \bibitem{Hsiao_Szego_Bergman_kernel_q_forms_2010} C.-Y.~Hsiao, \emph{{Projections in several complex variables.}} Mémoires de la Société Mathématique de France, 123 (2010), 131 pages. \bibitem{Hsiao_BT_coefficient_2012} C.-Y. Hsiao, \emph{On the coefficients of the asymptotic expansion of the kernel of Berezin-Toeplitz quantization}, Ann. Global Anal. Geom. {\bf 42} (2012), no.~2, 207--245; MR2947953 \bibitem{HM14} C.-Y.~Hsiao and G.~Marinescu. \emph{Asymptotics of spectral function of lower energy forms and Bergman kernel of semi-positive and big line bundles}, Communications in Analysis and Geometry, {\bf 22} (2014), no. 1, 1-108. \bibitem{Hsiao_Marinescu_Szego_lower_energy_2017} C.-Y.~Hsiao and G.~Marinescu. \emph{On the singularities of the Szeg\H{o} projections on lower energy forms}, J. Differential Geometry 107 (2017) 83-155. \bibitem{Hsiao_Shen_2nd_coefficient_BS_2020} C.-Y.~Hsiao and W.-C.~Shen. \emph{On the second coefficient of the asymptotic expansion of Boutet de Monvel-Sj\"ostrand.} Bulletin of the Institute of Mathematics Academia Sinica NEW SERIES 15(4), 2020. \bibitem{Ma_Marinescu_HMI_BKE_2007} X.~Ma and G.~Marinescu. \emph{Holomorphic {M}orse inequalities and {B}ergman kernels}, volume 254 of Progress in Mathematics, Birkh\"auser Verlag, Basel, 2007. \bibitem{Ma_Marinescu_JRAM_2012} X. Ma and G. Marinescu, \emph{Berezin-Toeplitz quantization on K\"ahler manifolds}, J. Reine Angew. Math. 
{\bf 662} (2012), 1--56; MR2876259 \bibitem{Zelditch_BKE_1998} S.~Zelditch, \emph{Szeg{\H o} kernels and a theorem of {T}ian}, International Mathematics Research Notices, 1998, no. 6, 317--331. \end{thebibliography} \end{document}
2412.11833v1
http://arxiv.org/abs/2412.11833v1
A monotone block coordinate descent method for solving absolute value equations
\documentclass[12pt,a4paper,twoside]{article} \usepackage{graphics} \usepackage{amssymb} \usepackage{amsmath} \usepackage{mathrsfs} \usepackage{makeidx} \usepackage{color} \usepackage{graphicx} \usepackage{subfigure} \usepackage{multicol} \usepackage{multirow} \usepackage{float} \usepackage{booktabs} \usepackage{epsfig} \usepackage{multirow} \usepackage{epstopdf} \usepackage[colorlinks, linkcolor=red, anchorcolor=blue, citecolor=blue ]{hyperref} \usepackage{caption} \usepackage{cases} \usepackage{algorithm,algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \usepackage{latexsym, cite, bm, amsthm, amsmath} \usepackage{enumerate} \usepackage{marginnote} \usepackage{epstopdf} \usepackage{pifont} \newcommand{\double}{\baselineskip 1.68\baselineskip} \def\vextra{\vphantom{\vrule height0.4cm width0.9pt depth0.1cm}} \def \[{\begin{equation}} \def \]{\end{equation}} \def\R{\mathbb{R}} \hoffset=0truemm \voffset=0truemm \topmargin=0truemm \oddsidemargin=0truemm \evensidemargin=0truemm \textheight=226truemm \textwidth=158truemm \newtheorem{thm}{Theorem}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{defn}{Definition}[section] \newtheorem{alg}{Algorithm}\newtheorem{rem}{Remark}[section] \newtheorem{assu}{Assumption}[section] \newtheorem{exam}{Example}[section] \newtheorem{method}{Method} \newtheorem{case}{Case}[section] \numberwithin{equation}{section} \title{\bf A monotone block coordinate descent method for solving absolute value equations} \usepackage[marginal]{footmisc} \usepackage{authblk} \author[a]{Tingting Luo\thanks{Email address: [email protected].}} \author[a]{Jiayu Liu\thanks{Email address: [email protected].}} \author[a]{Cairong Chen\thanks{Supported in part by the Fujian Alliance of Mathematics (2023SXLMQN03). Email address: [email protected].}} \author[b]{Qun Wang\thanks{Corresponding author. Supported in part by the National Natural Science Foundation of China (11801502) and the Zhejiang Provincial Philosophy and Social Sciences Planning Project (24NDJC113YB). Email address: [email protected].}} \affil[a]{School of Mathematics and Statistics \& Key Laboratory of Analytical Mathematics and Applications (Ministry of Education) \& Fujian Key Laboratory of Analytical Mathematics and Applications (FJKLAMA) \& Center for Applied Mathematics of Fujian Province (FJNU), Fujian Normal University, Fuzhou, 350117, P.R. China.} \affil[b]{School of Data Sciences, Zhejiang University of Finance and Economics, Hangzhou, 310018, P.R. China.} \begin{document} \date{\today} \maketitle \begin{quote} {\bf Abstract:} In this paper, we propose a monotone block coordinate descent method for solving the absolute value equation (AVE). Under appropriate conditions, we analyze the global convergence of the algorithm and conduct numerical experiments to demonstrate its feasibility and effectiveness. {\bf Keywords:} Absolute value equation; Block coordinate descent method; Convergence. \end{quote} \section{Introduction}\label{sec:intro} We consider the system of absolute value equations (AVE) of the type \begin{equation}\label{eq:ave} Ax - \vert x \vert = b, \end{equation} where~$A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$ are known, and~$x\in\mathbb{R}^n$ is unknown. Here, $\vert x\vert=[\vert x_1\vert,\vert x_2\vert,\cdots,\vert x_n\vert]^\top$.
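For concreteness, the componentwise convention above can be encoded directly; the following minimal sketch (in Python with NumPy; the function name is ours and purely illustrative) evaluates the residual of \eqref{eq:ave} at a given point.
\begin{verbatim}
import numpy as np

def ave_residual(A, x, b):
    # Residual of the absolute value equation A x - |x| = b,
    # where |x| is taken componentwise.
    return A @ x - np.abs(x) - b
\end{verbatim}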
Obviously, AVE~\eqref{eq:ave} is a special case of the following generalized absolute value equations (GAVE) \begin{equation}\label{eq:gave} Ax + B\vert x \vert = b, \end{equation} where~$B \in \mathbb{R}^{n\times n}$ is given. To the best of our knowledge, GAVE~\eqref{eq:gave} was first introduced in~\cite{rohn2004}. As demonstrated in~\cite{mang2007}, solving the general GAVE~\eqref{eq:gave} is NP-hard. Furthermore, when it is solvable, checking whether GAVE~\eqref{eq:gave} has a unique solution or multiple solutions is NP-complete~\cite{prok2009}. AVE \eqref{eq:ave} and GAVE \eqref{eq:gave} are significant mathematical models with practical applications in engineering and information security. For instance, a transform function based on GAVE \eqref{eq:gave} has been used to effectively improve the security of cancellable biometric systems \cite{dnhl2023}. Meanwhile, AVE \eqref{eq:ave} and GAVE \eqref{eq:gave} are equivalent to the linear complementarity problem \cite{cops1992,huhu2010,mang2007,mame2006,prok2009}, and are closely related to the solution of linear interval equations \cite{rohn1989}. They have garnered increasing attention in the fields of numerical optimization and numerical algebra in recent years. There are numerous results on both the theoretical and numerical aspects of AVE \eqref{eq:ave} and GAVE \eqref{eq:gave}. On the theoretical side, conditions for the existence, nonexistence, and uniqueness of solutions to AVE \eqref{eq:ave} and GAVE \eqref{eq:gave} have been reported; see, e.g., \cite{hlad2018,mang2007,mame2006,mezz2020,rohn2004,rohn2009,rohf2014,wuli2018,wush2021} and references therein. On the numerical side, discrete iterative methods \cite{chyh2024,chyh2023,doss2020,edhs2017,ghlw2017,keyf2020,kema2017,mang2009,nika2011,rohf2014,wacc2019,yuch2022,yuly2021,zhwl2021} and continuous dynamic models \cite{cyyh2021,gawa2014,jyfc2023,lyyh2023,maer2018,sanc2019} for solving AVE \eqref{eq:ave} and GAVE \eqref{eq:gave} have been studied. Notably, Noor, Iqbal, Khattri and Al-Said proposed an iterative method for solving AVE \eqref{eq:ave} based on the minimization technique \cite{nika2011}, which can be viewed as a modification of the Gauss-Seidel approach. Specifically, the solution to AVE \eqref{eq:ave} is reformulated by finding the minimum point of an unconstrained optimization problem. Subsequently, a minimization technique is used to search along each block coordinate direction to solve the corresponding unconstrained optimization problem. However, it is important to note that the objective function of the constructed unconstrained optimization problem is not globally twice continuously differentiable. Consequently, the approach in \cite{nika2011} misused the second-order Taylor expansion of the objective function to determine the step sizes, which may cause the method to lose its monotone descent property; moreover, a rigorous global convergence analysis is lacking in \cite{nika2011}. Therefore, after fully exploring the intrinsic properties of the objective function proposed in \cite{nika2011}, this paper develops a monotone block coordinate descent method with guaranteed convergence to solve the AVE \eqref{eq:ave}. The rest of this paper is organized as follows. In Section \ref{sec:Pre}, we present some lemmas which are useful for our later developments. In Section \ref{sec:analysis}, a monotone block coordinate descent algorithm for solving AVE \eqref{eq:ave} is proposed and its convergence is analyzed.
The numerical experiments and the conclusion of this paper are given in Section \ref{sec:Num} and Section \ref{sec:conclusion}, respectively. \textbf{Notation.} Let $\mathbb{R}^{m\times n}$ be the set of all $m\times n$ real matrices and $\mathbb{R}^m=\mathbb{R}^{m\times 1}$. $I$ denotes the identity matrix with suitable dimensions, $I(:,i)$ represents the $i$th column of the identity matrix $I$. For the vector $x\in\mathbb{R}^n$, $x_i$ represents its $i$th element. For vectors $x,y\in\mathbb{R}^n$, $\langle x,y\rangle=x^\top y$, where $\cdot ^\top$ denotes the transpose operation. $A=(a_{ij})\in\mathbb{R}^{m\times n}$ indicates that the element in the row $i$ and the column $j$ of $A$ is $a_{ij}$. For $x\in\mathbb{R}$, the sign function is defined as \begin{equation*} {\rm sign}(x)= \begin{cases} 1,~~\text{if}~x>0;\\ 0,~~\text{if}~x=0;\\ -1,~~\text{if}~x<0. \end{cases} \end{equation*} For $x\in\mathbb{R}^n$, ${\rm sign}(x)$ denotes the vector obtained by applying the sign function to each component of $x$, and $\mathcal{D}(x)={\rm diag}({\rm sign}(x))$ denotes the diagonal matrix whose diagonal entries are the components of ${\rm sign}(x)$. \section{Preliminaries}\label{sec:Pre} In this section, we provide the preparatory knowledge required for the later development. Let $A\in\mathbb{R}^{n\times n}$ be a symmetric matrix and $b\in\mathbb{R}^n$. Define \begin{equation}\label{eq:ff} f(x)=\langle Ax,x\rangle - \langle |x|,x\rangle - 2\langle b,x\rangle,\quad x\in\mathbb{R}^n. \end{equation} As shown in \cite{nika2011}, the function $f$ defined as in \eqref{eq:ff} is continuously differentiable and \begin{equation}\label{eq:diff} \nabla f(x)=2(Ax-|x|-b). \end{equation} In the following, we will explore properties of the function $f$ defined as in \eqref{eq:ff}. \begin{lem}\label{lem:1} The function $\nabla f$ defined as in \eqref{eq:diff} is Lipschitz continuous on $\mathbb{R}^n$ with the Lipschitz constant $2(\|A\|+1)$. \end{lem} \begin{proof} For any $x,y\in\mathbb{R}^n$, it follows from \eqref{eq:diff} that \begin{align*} \|\nabla f(x)-\nabla f(y)\|&=\|2(Ax-|x|-b)-2(Ay-|y|-b)\|\\ &=\|2A(x-y)-2(|x|-|y|)\|\\ &\leq 2(\|A\|+1)\|x-y\|, \end{align*} where the last inequality uses the triangle inequality and $\||x|-|y|\| \leq \|x-y\|.$ \end{proof} \begin{lem}\label{lem:2} If $A-I$ is a symmetric positive definite matrix, then the function $f$ defined by \eqref{eq:ff} is strongly convex on $\mathbb{R}^n$. \end{lem} \begin{proof} To prove that $f$ is strongly convex on $\mathbb{R}^n$, it suffices to show that $\nabla f$ is strongly monotone on $\mathbb{R}^n$. For any $x,y\in\mathbb{R}^n$ with $x\neq y$, we have \begin{align*} \langle \nabla f(x)-\nabla f(y),x-y\rangle &=\langle 2A(x-y)-2(|x|-|y|),x-y \rangle\\ &=2\langle A(x-y),x-y \rangle-2\langle |x|-|y|,x-y\rangle\\ &\geq 2\langle A(x-y),x-y \rangle-2\langle x-y,x-y\rangle\\ &=2(x-y)^\top (A-I)(x-y)\\ &\ge 2\lambda_{\min}(A-I)\|x-y\|^2, \end{align*} where the first inequality uses $\||x|-|y|\| \leq \|x-y\|$. The proof is completed since the smallest eigenvalue $\lambda_{\min}(A-I) >0$. \end{proof} \begin{thm}\label{thm:1} If $A-I$ is a symmetric positive definite matrix, then for any $b \in \mathbb{R}^n$, the function $f$ defined as in \eqref{eq:ff} has a unique minimum point, which is the unique solution of AVE~\eqref{eq:ave}. \end{thm} \begin{proof} Since $A-I$ is a symmetric positive definite matrix, the interval matrix $[A-I,A+I]$ is regular.
Therefore, for any $b \in \mathbb{R}^n$, AVE \eqref{eq:ave} has a unique solution $x_*$ \cite{wuli2018}. From \eqref{eq:diff}, we know $\nabla f(x_*) = 0$. In addition, it follows from Lemma \ref{lem:2} that the differentiable function $f$ is strongly convex on $\mathbb{R}^n$. Hence, $x_*$ is the unique minimum point of the function $f$. \end{proof} \section{Monotone block coordinate descent method}\label{sec:analysis} Before proposing the monotone block coordinate descent method for solving AVE \eqref{eq:ave}, we first review the iterative method presented in \cite{nika2011}. In the method proposed in \cite{nika2011} (see Algorithm \ref{alg:1} for details), the Taylor expansion of $f(y^{(i)}+\alpha v^{(i)}+\beta v^{(j)})$ at $y^{(i)}$ is minimized to obtain $\alpha^{(i)}$ and $\beta^{(i)}$. In other words, $\alpha^{(i)}$ and $\beta^{(i)}$ are determined by minimizing \begin{equation}\label{eq:taylor} f(y^{(i)}+\alpha v^{(i)}+\beta v^{(j)})=f(y^{(i)})+a\alpha^2+d\beta^2+2p^{(i)}\alpha+2p^{(j)}\beta+2c\alpha\beta \end{equation} in terms of $\alpha$ and $\beta$. However, since the function $f$ is not globally twice continuously differentiable, the Taylor expansion \eqref{eq:taylor} does not hold on $\mathbb{R}^n$, which may make the iterative sequence $\{x^{(k)}\}$ generated by Algorithm~\ref{alg:1} satisfy $f(x^{(k+1)}) > f(x^{(k)})$ for some $k$. The following Example~\ref{exam:1} is used to demonstrate this phenomenon. \begin{algorithm}[t] \caption{The iterative method proposed in \cite{nika2011}.}\label{alg:1} \begin{algorithmic}[1] \REQUIRE a symmetric matrix $A\in\mathbb{R}^{n\times n}$ and a vector $b\in\mathbb{R}^n$. Take the initial vector $x^{(0)}\in\mathbb{R}^n$. Set $k:=0$. \ENSURE an approximate solution of AVE \eqref{eq:ave}. \REPEAT \STATE For $i=1,2,\ldots,n$, \begin{align} &j=i-1,\nonumber\\ &\text{if}~i=1,~\text{then}~j=n,\nonumber\\ &v^{(i)}=I(:,i),~~v^{(j)}=I(:,j),\nonumber\\ &\text{if}~i=1,~\text{then}~y^{(i)}=x^{(k)};\nonumber\\ &C=A-\mathcal{D}(y^{(i)}),\nonumber\\ &a=\langle Cv^{(i)},v^{(i)}\rangle,~~d=\langle Cv^{(j)},v^{(j)}\rangle,\nonumber\\ &c=\langle Cv^{(i)},v^{(j)}\rangle=\langle Cv^{(j)},v^{(i)}\rangle,\nonumber\\ &p^{(i)}=\langle Ay^{(i)}-|y^{(i)}|-b,v^{(i)}\rangle,~p^{(j)}=\langle Ay^{(i)}-|y^{(i)}|-b,v^{(j)}\rangle,\nonumber\\ &\alpha^{(i)}=\frac{cp^{(j)}-dp^{(i)}}{ad-c^2},~\beta^{(i)}=\frac{cp^{(i)}-ap^{(j)}}{ad-c^2},\label{eq:step-nika}\\ &y^{(i+1)}=y^{(i)}+\alpha^{(i)}v^{(i)}+\beta^{(i)}v^{(j)}.\label{eq:yi} \end{align} \STATE Let $x^{(k+1)}=y^{(n+1)}, k = k+1$. \UNTIL{convergence}; \RETURN the last $x^{(k+1)}$ as the approximation to the solution of AVE~\eqref{eq:ave}. \end{algorithmic} \end{algorithm} \begin{exam}\label{exam:1}{\rm Consider AVE \eqref{eq:ave} with \begin{equation*} A=\begin{bmatrix} \frac{3}{2}&\frac{1}{4}\\ \frac{1}{4}&\frac{3}{2} \end{bmatrix}\quad \text{and}\quad b =\begin{bmatrix} \frac{1}{4}\\ 1 \end{bmatrix}. \end{equation*} Then $A-I$ is symmetric positive definite and the unique solution of this AVE is $x_*=[-\frac{2}{19},\frac{39}{19}]^\top$.
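The claims of this example can also be checked numerically; the following minimal sketch (in Python with NumPy) verifies that $x_*$ solves the AVE and evaluates $f$ at the iterates reported in Table~\ref{table1} below.
\begin{verbatim}
import numpy as np

A = np.array([[1.5, 0.25], [0.25, 1.5]])
b = np.array([0.25, 1.0])
f = lambda x: x @ A @ x - x @ np.abs(x) - 2 * b @ x

x_star = np.array([-2/19, 39/19])
print(np.allclose(A @ x_star - np.abs(x_star), b))  # True: x_* solves the AVE

y1 = np.array([0.6, 1.2])    # initial point x^(0) = y^(1)
y2 = np.array([-2/3, 7/3])   # y^(2) produced by Algorithm 1
print(f(y1), f(y2))          # -1.44 and about -1.2778: f increases
\end{verbatim}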
In addition, \begin{align}\label{eq:f'} f(x) =&\frac{3}{2}x_1^2+\frac{3}{2}x_2^2+\frac{1}{2}x_1x_2-x_1|x_1|-x_2|x_2|-\frac{1}{2}x_1-2x_2\nonumber\\ =&\begin{cases} \frac{1}{2}x_1^2+\frac{1}{2}x_2^2+\frac{1}{2}x_1x_2-\frac{1}{2}x_1-2x_2 = f_1(x),~~\text{if}~x_1\geq0~\text{and}~x_2\geq0;\\ \frac{1}{2}x_1^2+\frac{5}{2}x_2^2+\frac{1}{2}x_1x_2-\frac{1}{2}x_1-2x_2=f_2(x),~~\text{if}~x_1\geq0~\text{and}~x_2<0;\\ \frac{5}{2}x_1^2+\frac{1}{2}x_2^2+\frac{1}{2}x_1x_2-\frac{1}{2}x_1-2x_2=f_3(x),~~\text{if}~x_1<0~\text{and}~x_2\geq0;\\ \frac{5}{2}x_1^2+\frac{5}{2}x_2^2+\frac{1}{2}x_1x_2-\frac{1}{2}x_1-2x_2=f_4(x),~~\text{if}~x_1<0~\text{and}~x_2<0. \end{cases} \end{align} According to Algorithm \ref{alg:1}, $v^{(1)}=[1,0]^\top, v^{(2)}=[0,1]^\top$. The numerical results obtained by selecting different initial values of $x^{(0)}$ are shown in Table \ref{table1}. As can be seen from Table \ref{table1}, when $x^{(0)}=[0.6,1.2]^\top$, we have $f(y^{(1)})<f(y^{(2)})$. The reason for this is that for the initial point $x^{(0)}=[0.6,1.2]^\top$, $\alpha^{(1)}=-\frac{19}{15}$ and $\beta^{(1)}=\frac{17}{15}$ are obtained by minimizing $g(\alpha,\beta)=f_1(y^{(1)})+\langle f_1^\prime(y^{(1)}),\alpha v^{(1)}+\beta v^{(2)} \rangle +\frac{1}{2}\langle f_1^{\prime\prime}(y^{(1)})(\alpha v^{(1)}+\beta v^{(2)}),\alpha v^{(1)}+\beta v^{(2)}\rangle$, which is the Taylor expansion of $f_1(y^{(1)}+\alpha v^{(1)}+\beta v^{(2)})$ at $y^{(1)}$. Then we have $f_1(y^{(2)}) < f_1(y^{(1)}) = f(y^{(1)})$. However, since $y^{(2)}_1 = -\frac{2}{3}< 0$, it follows from \eqref{eq:f'} that $f(y^{(2)})\neq f_1(y^{(2)})$. Indeed, we have $f(y^{(2)})=-\frac{23}{18}> f_1(y^{(2)})=-\frac{39}{18}$, and $f(y^{(2)})=-\frac{23}{18}>-\frac{36}{25}= f(y^{(1)})$, even though $f_1(y^{(2)})<f_1(y^{(1)})$. In conclusion, since the minimum point of $f_1$ in $\mathbb{R}^2$ is different from the minimum point of $f$ in $\mathbb{R}^2$, minimizing $f_1$ over $\mathbb{R}^2$ does not guarantee a decrease in the value of $f$. Conversely, since $f_3$ and $f$ have the same minimum point in $\mathbb{R}^2$, minimizing $f_3$ on $\mathbb{R}^2$ is equivalent to minimizing $f$. This is why, in Table \ref{table1}, Algorithm \ref{alg:1} can reach the solution $x_*$ in one step from $x^{(0)}=[-0.6,1.2]^\top$ or from $y^{(2)} = [-\frac{2}{3}, \frac{7}{3}]^\top$. \renewcommand{\arraystretch}{1.5} \begin{table}[H] \centering \caption{Numerical results of Example~\ref{exam:1}.}\label{table1} \begin{tabular}{|c|c|c|c|c|c|}\hline $x^{(0)}=y^{(1)}$ &$f(y^{(1)})$ &$y^{(2)}$ &$f(y^{(2)})$ &$y^{(3)}$ &$f(y^{(3)})$ \\\hline $[0.6,1.2]^\top$ &$-\frac{36}{25}$ &$[-\frac{2}{3},\frac{7}{3}]^\top$ &$-\frac{23}{18}$ &$[-\frac{2}{19},\frac{39}{19}]^\top$ &$-\frac{77}{38}$ \\\hline $[-0.6,1.2]^\top$ &$-\frac{21}{25}$ &$[-\frac{2}{19},\frac{39}{19}]^\top$ &$-\frac{77}{38}$ & & \\\hline \end{tabular} \end{table} } \end{exam} As demonstrated in Example \ref{exam:1}, the value of the objective function may increase during the iterations of Algorithm \ref{alg:1}, which makes Algorithm \ref{alg:1} non-monotone and leaves it without a rigorous convergence analysis. This motivates us to propose a monotone block coordinate descent method which has a convergence guarantee. Our method is described in Algorithm \ref{alg:2}. \begin{algorithm}[t] \caption{Monotone block coordinate descent method for solving AVE \eqref{eq:ave}.}\label{alg:2} \begin{algorithmic}[1] \REQUIRE a symmetric matrix $A\in\mathbb{R}^{n\times n}$ and a vector $b\in\mathbb{R}^n$. Take the initial vector $x^{(0)}\in\mathbb{R}^n$. \ENSURE an approximate solution of AVE \eqref{eq:ave}.
\REPEAT \STATE Iteration $k + 1$ with $k\ge 0$. Given $x^{(k)}=[(x_{n_1}^{(k)})^\top, (x_{n_2}^{(k)})^\top,\cdots, (x_{n_N}^{(k)})^\top]^\top \in \mathbb{R}^{n}$ with $n_1 + n_2 + \cdots + n_N = n$, choose $s=r$ at iterations $r,r+N,r+2N,\ldots,$ for $r=1,\ldots,N$ and compute a new iterate $x^{(k+1)}$ satisfying \begin{align} \label{eq:alpha} x_{n_s}^{(k+1)} &= {\rm arg}\min_{x_{n_s}} f(x_{n_1}^{(k)}, \cdots, x_{n_{s-1}}^{(k)}, x_{n_s}, x_{n_{s+1}}^{(k)},\cdots,x_{n_{N}}^{(k)}),\\\nonumber x_{n_j}^{(k+1)} &= x_{n_j}^{(k)}, \quad \forall j\neq s. \end{align} \UNTIL{convergence}; \RETURN the last $x^{(k+1)}$ as the approximation to the solution of AVE~\eqref{eq:ave}. \end{algorithmic} \end{algorithm} In the following, we will discuss how to solve \eqref{eq:alpha} in the case where every block has size two, that is, $n_1=n_2=\cdots=n_N=2$. For $s = r$ with $r\in \{1,2,\cdots,N\}$, let $v^{(i)} = [1,0]^\top$ and $v^{(j)} =[0,1]^\top$. Then we have $x_{n_s}^{(k+1)} = x_{n_s}^{(k)} + \alpha^{(k)}v^{(i)} + \beta^{(k)}v^{(j)}$, in which \begin{equation}\label{eq:step} [ \alpha^{(k)},\beta^{(k)}] = {\rm arg}\min_{\alpha,\beta} f(x^{(k)} + \alpha I(:,2s - 1)+\beta I(:,2s)). \end{equation} In order to solve \eqref{eq:step}, we collect some properties of $f$. Inspired by Example \ref{exam:1}, for \begin{align}\label{eq:f} f(x) &=\begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12}\\ a_{12} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} -\begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} |x_1| \\ |x_2| \end{bmatrix} -2\begin{bmatrix} b_1 & b_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\nonumber\\ &=a_{11} x_1^2+a_{22} x_2^2+2a_{12}x_1x_2-x_1|x_1|-x_2|x_2|-2b_1x_1-2b_2x_2\nonumber\\ &=\begin{cases} (a_{11}-1)x_1^2+(a_{22}-1)x_2^2+2a_{12}x_1x_2-2b_1x_1-2b_2x_2=f_1(x),~\text{if}~x_1\ge 0~\text{and}~x_2\ge 0;\\ (a_{11}-1)x_1^2+(a_{22}+1)x_2^2+2a_{12}x_1x_2-2b_1x_1-2b_2x_2=f_2(x),~\text{if}~x_1\ge 0~\text{and}~x_2<0;\\ (a_{11}+1)x_1^2+(a_{22}-1)x_2^2+2a_{12}x_1x_2-2b_1x_1-2b_2x_2=f_3(x), ~\text{if}~x_1<0~\text{and}~x_2\ge 0;\\ (a_{11}+1)x_1^2+(a_{22}+1)x_2^2+2a_{12}x_1x_2-2b_1x_1-2b_2x_2=f_4(x), ~\text{if}~x_1<0~\text{and}~x_2<0, \end{cases} \end{align} we present the following lemma. \begin{lem}\label{lem:3} Let $f$ and $f_i(i = 1,2,3,4)$ be defined as in \eqref{eq:f}. If $A-I\in \mathbb{R}^{2\times 2}$ is a symmetric positive definite matrix, then for any $b\in \mathbb{R}^2$ we have \begin{itemize} \item [(1)] If we extend the domain of $f_1$ to $\mathbb{R}^2$, then $f_1$ has the unique minimum point $$p=\left[\frac{b_2a_{12}-b_1(a_{22}-1)}{a_{12}^2-(a_{11}-1)(a_{22}-1)}, \frac{b_1a_{12}-b_2(a_{11}-1)}{a_{12}^2-(a_{11}-1)(a_{22}-1)}\right]^\top;$$ \item [(2)] If we extend the domain of $f_2$ to $\mathbb{R}^2$, then $f_2$ has the unique minimum point $$q =\left[\frac{b_2a_{12}-b_1(a_{22}+1)}{a_{12}^2-(a_{11}-1)(a_{22}+1)}, \frac{b_1a_{12}-b_2(a_{11}-1)}{a_{12}^2-(a_{11}-1)(a_{22}+1)}\right]^\top;$$ \item [(3)] If we extend the domain of $f_3$ to $\mathbb{R}^2$, then $f_3$ has the unique minimum point $$r =\left[\frac{b_2a_{12}-b_1(a_{22}-1)}{a_{12}^2-(a_{11}+1)(a_{22}-1)}, \frac{b_1a_{12}-b_2(a_{11}+1)}{a_{12}^2-(a_{11}+1)(a_{22}-1)}\right]^\top;$$ \item [(4)] If we extend the domain of $f_4$ to $\mathbb{R}^2$, then $f_4$ has the unique minimum point $$s = \left[\frac{b_2a_{12}-b_1(a_{22}+1)}{a_{12}^2-(a_{11}+1)(a_{22}+1)}, \frac{b_1a_{12}-b_2(a_{11}+1)}{a_{12}^2-(a_{11}+1)(a_{22}+1)}\right]^\top;$$ \item[(5)] One of $p$, $q$, $r$ and $s$ is the unique minimum point of $f$ in $\mathbb{R}^2$.
\end{itemize} \end{lem} \begin{proof} Since $A - I\in \mathbb{R}^{2\times 2}$ is a symmetric positive definite matrix, we have \begin{equation}\label{eq:condition} (a_{11}-1)(a_{22}-1)>a_{12}^2\geq0,\quad a_{11}-1>0,\quad a_{22}-1>0. \end{equation} For the function $f_1$ in $\mathbb{R}^2$, we have \begin{align*} \frac{\partial f_1}{\partial x_1}&=2(a_{11}-1)x_1+2a_{12}x_2-2b_1,\\ \frac{\partial f_1}{\partial x_2}&=2(a_{22}-1)x_2+2a_{12}x_1-2b_2,\\ \frac{\partial^2 f_1}{\partial x_1^2}&=2(a_{11}-1),\\ \frac{\partial^2 f_1}{\partial x_2^2}&=2(a_{22}-1),\\ \frac{\partial^2 f_1}{\partial x_1\partial x_2}&=2a_{12}.\\ \end{align*} Then, for any $b\in \mathbb{R}^2$, it follows from \eqref{eq:condition} that $f_1$ has the unique minimum point $p$ in $\mathbb{R}^2$. Similarly, we can prove the results of (2)--(4). In the following, we will prove the result in (5). Since $A-I$ is a symmetric positive definite matrix, for any $b\in \mathbb{R}^2$, Lemma \ref{lem:2} implies that $f$ has a unique minimum point in $\mathbb{R}^2$. In addition, since $f$ is convex, any local minimum point is the global minimum point. To complete the proof, we are required to verify that one of $p$, $q$, $r$ and $s$ is the minimum point of $f$ in $\mathbb{R}^2$. To this end, we first show that there is no $A\in \mathbb{R}^{2\times 2}$ and $b\in \mathbb{R}^2$ with $A-I$ being symmetric positive definite such that the following four conditions hold simultaneously: \begin{align*} &p_1 <0\quad \text{or} \quad p_2 <0,\\ &q_1 <0\quad \text{or} \quad q_2 \ge 0,\\ &r_1 \ge 0\quad \text{or} \quad r_2 < 0,\\ &s_1 \ge 0\quad \text{or} \quad s_2 \ge 0. \end{align*} This implies that at least one of $p$, $q$, $r$ and $s$ is the minimum point of $f$ in $\mathbb{R}^2$. Indeed, observe that $p_1$ and $r_1$, $q_1$ and $s_1$, $p_2$ and $q_2$, and $r_2$ and $s_2$ have the same numerators, while all four denominators are negative by \eqref{eq:condition}, so the two members of each pair have the same sign. If $p_1<0$, then $r_1<0$, so $r_1\ge 0$ is not true, which implies $r_2 <0$. It follows from $r_2 <0$ that $s_2<0$, hence $s_1\ge 0$, and therefore $q_1\ge 0$, which gives $q_2\ge 0$. Then we have \begin{align}\label{eq:1a} b_2a_{12}-b_1(a_{22}-1)>0,\\\label{eq:4a} b_2a_{12}-b_1(a_{22}+1)\leq0,\\\label{eq:2b} b_1a_{12}-b_2(a_{11}-1)\leq0,\\\label{eq:3b} b_1a_{12}-b_2(a_{11}+1)>0. \end{align} It follows from \eqref{eq:1a} and \eqref{eq:4a} that $b_1>0$, while \eqref{eq:2b} and \eqref{eq:3b} imply $b_2<0$. Substituting $b_1>0,b_2<0$ into \eqref{eq:1a} yields that $a_{12}<0$. Then it follows from \eqref{eq:1a} and \eqref{eq:2b} that $\frac{a_{12}}{a_{22}-1}<\frac{b_1}{b_2}$ and $\frac{a_{11}-1}{a_{12}}\geq\frac{b_1}{b_2}$, from which we have $\frac{a_{12}}{a_{22}-1}<\frac{b_1}{b_2}\leq\frac{a_{11}-1}{a_{12}}$. Hence, $a_{12}^2>(a_{11}-1)(a_{22}-1)$, which contradicts the first inequality of \eqref{eq:condition}. If $p_2 < 0$ is satisfied, a similar argument gives $q_1 < 0$, $r_1\ge 0$ and $s_2\ge 0$. Namely, \begin{align}\label{eq:1b} b_1a_{12}-b_2(a_{11}-1)>0,\\\label{eq:4b} b_1a_{12}-b_2(a_{11}+1)\leq0,\\\label{eq:2a} b_2a_{12}-b_1(a_{22}+1)>0,\\\label{eq:3a} b_2a_{12}-b_1(a_{22}-1)\leq0. \end{align} From \eqref{eq:1b} and \eqref{eq:4b}, we have $b_2>0$. In addition, it follows from \eqref{eq:2a} and \eqref{eq:3a} that $b_1<0$. Substituting $b_1<0$ and $b_2>0$ into \eqref{eq:1b} yields that $a_{12}<0$. Then it follows from \eqref{eq:3a} and \eqref{eq:1b} that $\frac{a_{12}}{a_{22}-1}\leq\frac{b_1}{b_2}$ and $\frac{a_{11}-1}{a_{12}}>\frac{b_1}{b_2}$, which imply $\frac{a_{12}}{a_{22}-1}\leq\frac{b_1}{b_2}<\frac{a_{11}-1}{a_{12}}$. Hence, $a_{12}^2>(a_{11}-1)(a_{22}-1)$, which also contradicts the first inequality of \eqref{eq:condition}.
In the following, we will show that only one of the following four cases holds: \begin{equation*} \begin{array}{ll} \text{[(i)]}~p\ge 0; &\text{[(ii)]}~q_1\ge 0~\text{and}~q_2<0;\\ \text{[(iii)]}~r_1<0~\text{and}~r_2\ge 0; & \text{[(iv)]}~ s<0. \end{array} \end{equation*} When (i) holds, (ii) and (iii) cannot hold. Hence, we only need to prove that (iv) is also not satisfied. To this end, we only need to prove that the following inequalities \begin{align}\label{eq:1.1} b_2a_{12}-b_1(a_{22}-1)\leq0,\\\label{eq:1.2} b_1a_{12}-b_2(a_{11}-1)\leq0,\\\label{eq:4.1} b_2a_{12}-b_1(a_{22}+1)>0, \\ \label{eq:4.2} b_1a_{12}-b_2(a_{11}+1)>0, \end{align} cannot hold simultaneously. Indeed, from \eqref{eq:1.1} and \eqref{eq:4.1}, we get $b_1<0$. In the same way, from \eqref{eq:1.2} and \eqref{eq:4.2} we have $b_2<0$. Substituting $b_1<0$ and $b_2<0$ into \eqref{eq:1.1} yields that $a_{12}>0$. Then it follows from \eqref{eq:1.1} and \eqref{eq:1.2} that $$\frac{a_{22}-1}{a_{12}} \leq \frac{b_2}{b_1} \leq \frac{a_{12}}{a_{11}-1},$$ from which we have $a_{12}^2\geq(a_{11}-1)(a_{22}-1)$, which contradicts the first inequality of~\eqref{eq:condition}. If (ii) holds, (i) and (iv) cannot be satisfied. Hence, we only need to prove that (iii) is also not satisfied. Namely, the inequalities \begin{align} \label{eq:2.1} b_2a_{12}-b_1(a_{22}+1)\leq0,\\\label{eq:2.2} b_1a_{12}-b_2(a_{11}-1)>0, \\\label{eq:3.1} b_2a_{12}-b_1(a_{22}-1)>0, \\\label{eq:3.2} b_1a_{12}-b_2(a_{11}+1)\leq0, \end{align} do not have a solution. From \eqref{eq:2.1} and \eqref{eq:3.1}, we get $b_1>0$. In addition, it follows from \eqref{eq:2.2} and \eqref{eq:3.2} that $b_2>0$. Substituting $b_1>0$ and $b_2>0$ into \eqref{eq:2.2} yields that $a_{12}>0$. Then it follows from \eqref{eq:2.2} and \eqref{eq:3.1} that $$\frac{a_{22}-1}{a_{12}} < \frac{b_2}{b_1} < \frac{a_{12}}{a_{11}-1},$$ that is, $a_{12}^2>(a_{11}-1)(a_{22}-1)$, which contradicts the first inequality of \eqref{eq:condition}. If (iii) holds, (i) and (iv) cannot hold. Moreover, we have proved that (ii) and (iii) cannot hold simultaneously. If (iv) is satisfied, (ii) and (iii) cannot hold. In addition, we have proved that (i) and (iv) cannot hold simultaneously. \end{proof} For a given $s\in \{1,2,\cdots,N\}$, for simplicity, we denote $v^{(i)} = I(:,2s -1)$ and $v^{(j)} = I(:,2s)$ in the following. Let \begin{align*} g(\alpha,\beta) &=f(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})\\ &=(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})^\top A (x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})-(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})^\top |x^{(k)}+\alpha v^{(i)}+\beta v^{(j)}|\\ &-2b^\top(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})\\ &=\alpha^2 (v^{(i)})^\top Av^{(i)}+\beta^2 (v^{(j)})^\top Av^{(j)}+2\alpha (x^{(k)})^\top A v^{(i)}+2\beta (x^{(k)})^\top A v^{(j)} + 2\alpha \beta (v^{(i)})^\top Av^{(j)}\\ &\quad -(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})^\top \mathcal{D}( x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})(x^{(k)}+\alpha v^{(i)}+\beta v^{(j)})\\ &\quad +(x^{(k)})^\top A x^{(k)}-2b^\top x^{(k)}-2\alpha b^\top v^{(i)}-2\beta b^\top v^{(j)}. \end{align*} Then we have \begin{equation*} g(\alpha,\beta)=\left\{ \begin{aligned} &g_1(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha \ge 0~\text{and}~x^{(k)}_j+\beta \ge 0;\\ &g_2(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha \ge 0~\text{and}~x^{(k)}_j+\beta < 0;\\ &g_3(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha < 0~\text{and}~x^{(k)}_j+\beta \ge 0;\\ &g_4(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha < 0~\text{and}~x^{(k)}_j+\beta < 0,\\ \end{aligned} \right.
\end{equation*} where \begin{align*} g_1(\alpha,\beta) &=(a_{ii}-1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}-x^{(k)}_i]-(x^{(k)}_i)^2\\ &\quad +(a_{jj}-1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}-x^{(k)}_j]-(x^{(k)}_j)^2\\ &\quad +2\alpha\beta a_{ij}+(x^{(k)})^\top Ax^{(k)}-2b^\top x^{(k)}-\sum\limits_{l\neq i,j}x^{(k)}_l |x^{(k)}_l|,\\ g_2(\alpha,\beta) &=(a_{ii}-1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}-x^{(k)}_i]-(x^{(k)}_i)^2\\ &\quad +(a_{jj}+1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}+x^{(k)}_j]+(x^{(k)}_j)^2\\ &\quad +2\alpha\beta a_{ij}+(x^{(k)})^\top Ax^{(k)}-2b^\top x^{(k)}-\sum\limits_{l\neq i,j}x^{(k)}_l |x^{(k)}_l|,\\ g_3(\alpha,\beta) &=(a_{ii}+1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}+x^{(k)}_i]+(x^{(k)}_i)^2\\ &\quad +(a_{jj}-1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}-x^{(k)}_j]-(x^{(k)}_j)^2\\ &\quad +2\alpha\beta a_{ij}+(x^{(k)})^\top Ax^{(k)}-2b^\top x^{(k)}-\sum\limits_{l\neq i,j}x^{(k)}_l |x^{(k)}_l|,\\ g_4(\alpha,\beta) &=(a_{ii}+1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}+x^{(k)}_i]+(x^{(k)}_i)^2\\ &\quad +(a_{jj}+1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}+x^{(k)}_j]+(x^{(k)}_j)^2\\ &\quad +2\alpha\beta a_{ij}+(x^{(k)})^\top Ax^{(k)}-2b^\top x^{(k)}-\sum\limits_{l\neq i,j}x^{(k)}_l |x^{(k)}_l|. \end{align*} Since adding or subtracting a constant does not change the minimum point of a function, finding the minimum point of $g(\alpha,\beta)$ is equivalent to finding the minimum point of $\tilde{g}(\alpha,\beta)$, where \begin{equation*} \tilde{g}(\alpha,\beta)=\left\{ \begin{aligned} &\tilde{g}_1(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha \ge 0~\text{and}~x^{(k)}_j+\beta \ge 0,\\ &\tilde{g}_2(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha \ge 0~\text{and}~x^{(k)}_j+\beta < 0,\\ &\tilde{g}_3(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha < 0~\text{and}~x^{(k)}_j+\beta \ge 0,\\ &\tilde{g}_4(\alpha,\beta),~~\text{if}~x^{(k)}_i+\alpha < 0~\text{and}~x^{(k)}_j+\beta< 0,\\ \end{aligned} \right. \end{equation*} in which \begin{align*} \tilde{g}_1(\alpha,\beta) &=(a_{ii}-1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}-x^{(k)}_i]-(x^{(k)}_i)^2\\ &\quad +(a_{jj}-1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}-x^{(k)}_j]-(x^{(k)}_j)^2+2\alpha\beta a_{ij},\\ \tilde{g}_2(\alpha,\beta) &=(a_{ii}-1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}-x^{(k)}_i]-(x^{(k)}_i)^2\\ &\quad +(a_{jj}+1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}+x^{(k)}_j]+(x^{(k)}_j)^2+2\alpha\beta a_{ij},\\ \tilde{g}_3(\alpha,\beta) &=(a_{ii}+1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}+x^{(k)}_i]+(x^{(k)}_i)^2\\ &\quad +(a_{jj}-1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}-x^{(k)}_j]-(x^{(k)}_j)^2+2\alpha\beta a_{ij},\\ \tilde{g}_4(\alpha,\beta) &=(a_{ii}+1)\alpha^2+2\alpha[(x^{(k)})^\top Av^{(i)}-b^\top v^{(i)}+x^{(k)}_i]+(x^{(k)}_i)^2\\ &\quad +(a_{jj}+1)\beta^2+2\beta[(x^{(k)})^\top Av^{(j)}-b^\top v^{(j)}+x^{(k)}_j]+(x^{(k)}_j)^2+2\alpha\beta a_{ij}. \end{align*} Let $t_1=x^{(k)}_i+\alpha,~t_2=x^{(k)}_j+\beta,~w_1=(v^{(i)})^\top(Ax^{(k)}-b),~w_2=(v^{(j)})^\top(Ax^{(k)}-b)$, then $\tilde{g}(\alpha,\beta)$ can be converted to $h(t_1,t_2)$ with \begin{equation*} h(t_1,t_2)=\left\{ \begin{aligned} &h_1(t_1,t_2),~~\text{if}~t_1 \ge 0~\text{and}~t_2 \ge 0,\\ &h_2(t_1,t_2),~~\text{if}~t_1 \ge 0~\text{and}~t_2 < 0,\\ &h_3(t_1,t_2),~~\text{if}~t_1 < 0~\text{and}~t_2 \ge 0,\\ &h_4(t_1,t_2),~~\text{if}~t_1 < 0~\text{and}~t_2 < 0,\\ \end{aligned} \right. 
\end{equation*} in which \begin{align*} h_1(t_1,t_2) &=(a_{ii}-1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1+a_{ii}(x^{(k)}_i)^2-2w_1x^{(k)}_i\\ &\quad +(a_{jj}-1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+a_{jj}(x^{(k)}_j)^2-2w_2x^{(k)}_j\\ &\quad +2a_{ij}(t_1t_2+x^{(k)}_i x^{(k)}_j),\\ h_2(t_1,t_2) &=(a_{ii}-1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1+a_{ii}(x^{(k)}_i)^2-2w_1x^{(k)}_i\\ &\quad +(a_{jj}+1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+a_{jj}(x^{(k)}_j)^2-2w_2x^{(k)}_j\\ &\quad +2a_{ij}(t_1t_2+x^{(k)}_i x^{(k)}_j),\\ h_3(t_1,t_2) &=(a_{ii}+1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1+a_{ii}(x^{(k)}_i)^2-2w_1x^{(k)}_i\\ &\quad +(a_{jj}-1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+a_{jj}(x^{(k)}_j)^2-2w_2x^{(k)}_j\\ &\quad +2a_{ij}(t_1t_2+x^{(k)}_i x^{(k)}_j),\\ h_4(t_1,t_2) &=(a_{ii}+1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1+a_{ii}(x^{(k)}_i)^2-2w_1x^{(k)}_i\\ &\quad +(a_{jj}+1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+a_{jj}(x^{(k)}_j)^2-2w_2x^{(k)}_j\\ &\quad +2a_{ij}(t_1t_2+x^{(k)}_i x^{(k)}_j). \end{align*} Since adding or subtracting a constant does not change the minimum point of a function, finding the minimum point of $h(t_1,t_2)$ is equivalent to finding the minimum point of $\tilde{h}(t_1,t_2)$ with \begin{equation*} \tilde{h}(t_1,t_2)=\left\{ \begin{aligned} &\tilde{h}_1(t_1,t_2),~~\text{if}~t_1 \ge 0~\text{and}~t_2 \ge 0,\\ &\tilde{h}_2(t_1,t_2),~~\text{if}~t_1 \ge 0~\text{and}~t_2 < 0,\\ &\tilde{h}_3(t_1,t_2),~~\text{if}~t_1 < 0~\text{and}~t_2 \ge 0,\\ &\tilde{h}_4(t_1,t_2),~~\text{if}~t_1 < 0~\text{and}~t_2 < 0,\\ \end{aligned} \right. \end{equation*} where \begin{align*} \tilde{h}_1(t_1,t_2) &=(a_{ii}-1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1\\ &\quad +(a_{jj}-1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+2a_{ij}t_1t_2,\\ \tilde{h}_2(t_1,t_2) &=(a_{ii}-1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1\\ &\quad +(a_{jj}+1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+2a_{ij}t_1t_2,\\ \tilde{h}_3(t_1,t_2) &=(a_{ii}+1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1\\ &\quad +(a_{jj}-1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+2a_{ij}t_1t_2,\\ \tilde{h}_4(t_1,t_2) &=(a_{ii}+1)t_1^2-2(x^{(k)}_i a_{ii}-w_1+a_{ij}x^{(k)}_j)t_1\\ &\quad +(a_{jj}+1)t_2^2-2(x^{(k)}_j a_{jj}-w_2+a_{ij}x^{(k)}_i)t_2+2a_{ij}t_1t_2. \end{align*} Since $A-I$ is symmetric positive definite, the matrix $$ \begin{bmatrix} a_{ii}& a_{ij}\\ a_{ji}& a_{jj} \end{bmatrix}-I $$ is also symmetric positive definite, being a principal submatrix of $A-I$.
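Before stating the closed-form solutions, we remark that the block subproblem can be solved exactly by computing the unconstrained minimizer of each quadrant quadratic $\tilde{h}_m$ $(m=1,2,3,4)$ from its $2\times2$ linear system and keeping the candidate that lies in its own quadrant. A minimal sketch of one block update of Algorithm~\ref{alg:2} along these lines (in Python with NumPy; the function name and interface are ours) is the following.
\begin{verbatim}
import numpy as np

def block_update(A, b, x, i, j):
    # One exact minimization of f(x) = x'Ax - x'|x| - 2b'x over the
    # coordinates (i, j), all other coordinates being fixed.  Assumes A is
    # symmetric and A - I is symmetric positive definite, so that a feasible
    # quadrant candidate exists (see the preceding lemma on the quadrant
    # minimizers).
    aii, ajj, aij = A[i, i], A[j, j], A[i, j]
    w1 = A[i, :] @ x - b[i]            # w1 = (v^(i))' (A x - b)
    w2 = A[j, :] @ x - b[j]            # w2 = (v^(j))' (A x - b)
    r1 = aii * x[i] - w1 + aij * x[j]  # right-hand sides of the 2x2
    r2 = ajj * x[j] - w2 + aij * x[i]  # normal equations of h~_m
    # Quadrant signs: s = +1 encodes t >= 0 (diagonal a - 1),
    #                 s = -1 encodes t <  0 (diagonal a + 1).
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            H = np.array([[aii - s1, aij], [aij, ajj - s2]])
            t1, t2 = np.linalg.solve(H, np.array([r1, r2]))
            ok1 = (t1 >= 0) if s1 == +1 else (t1 < 0)
            ok2 = (t2 >= 0) if s2 == +1 else (t2 < 0)
            if ok1 and ok2:
                x_new = np.array(x, dtype=float)
                x_new[i], x_new[j] = t1, t2  # alpha = t1-x[i], beta = t2-x[j]
                return x_new
    return np.array(x, dtype=float)  # not reached when A - I is SPD
\end{verbatim}
The closed-form expressions below are the explicit solutions of these four $2\times2$ linear systems.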
Then it follows from Lemma \ref{lem:3} that \begin{itemize} \item [(1)] If we extend the domain of $\tilde{h}_1(t_1,t_2)$ to $\mathbb{R}^2$, then $\tilde{h}_1(t_1,t_2)$ has the unique minimum point $[t_1,t_2]^\top$ with \begin{align}\label{eq:h1t1} t_1&=\quad\frac{-w_2 a_{ij}+w_1 a_{jj}-w_1+x^{(k)}_j a_{ij}+x^{(k)}_i(a_{ij}^2-a_{ii}a_{jj}+a_{ii})}{a_{ij}^2-(a_{ii}-1)(a_{jj}-1)},\\\label{eq:h1t2} t_2&=\frac{-w_1 a_{ij}+w_2 a_{ii}-w_2+x^{(k)}_i a_{ij}+x^{(k)}_j(a_{ij}^2-a_{ii}a_{jj}+a_{jj})}{a_{ij}^2-(a_{ii}-1)(a_{jj}-1)}; \end{align} \item [(2)] If we extend the domain of $\tilde{h}_2(t_1,t_2)$ to $\mathbb{R}^2$, then $\tilde{h}_2(t_1,t_2)$ has the unique minimum point $[t_1,t_2]^\top$ with \begin{align}\label{eq:h2t1} t_1&= \frac{-w_2 a_{ij}+w_1 a_{jj}+w_1-x^{(k)}_j a_{ij}+x^{(k)}_i(a_{ij}^2-a_{ii}a_{jj}-a_{ii})}{a_{ij}^2-(a_{ii}-1)(a_{jj}+1)}, \\\label{eq:h2t2} t_2& =\frac{-w_1 a_{ij}+w_2 a_{ii}-w_2+x^{(k)}_i a_{ij}+x^{(k)}_j(a_{ij}^2-a_{ii}a_{jj}+a_{jj})}{a_{ij}^2-(a_{ii}-1)(a_{jj}+1)}; \end{align} \item [(3)] If we extend the domain of $\tilde{h}_3(t_1,t_2)$ to $\mathbb{R}^2$, then $\tilde{h}_3(t_1,t_2)$ has the unique minimum point $[t_1,t_2]^\top$ with \begin{align}\label{eq:h3t1} t_1&= \frac{-w_2 a_{ij}+w_1 a_{jj}-w_1+x^{(k)}_j a_{ij}+x^{(k)}_i(a_{ij}^2-a_{ii}a_{jj}+a_{ii})}{a_{ij}^2-(a_{ii}+1)(a_{jj}-1)},\\\label{eq:h3t2} t_2& =\frac{-w_1 a_{ij}+w_2 a_{ii}+w_2-x^{(k)}_i a_{ij}+x^{(k)}_j(a_{ij}^2-a_{ii}a_{jj}-a_{jj})}{a_{ij}^2-(a_{ii}+1)(a_{jj}-1)}; \end{align} \item [(4)] If we extend the domain of $\tilde{h}_4(t_1,t_2)$ to $\mathbb{R}^2$, then $\tilde{h}_4(t_1,t_2)$ has the unique minimum point $[t_1,t_2]^\top$ with \begin{align}\label{eq:h4t1} t_1&= \frac{-w_2 a_{ij}+w_1 a_{jj}+w_1-x^{(k)}_j a_{ij}+x^{(k)}_i(a_{ij}^2-a_{ii}a_{jj}-a_{ii})}{a_{ij}^2-(a_{ii}+1)(a_{jj}+1)},\\\label{eq:h4t2} t_2& =\frac{-w_1 a_{ij}+w_2 a_{ii}+w_2-x^{(k)}_i a_{ij}+x^{(k)}_j(a_{ij}^2-a_{ii}a_{jj}-a_{jj})}{a_{ij}^2-(a_{ii}+1)(a_{jj}+1)}; \end{align} \item[(5)] One of the minimum points of $\tilde{h}_i(t_1,t_2)$ $(i=1,2,3,4)$ is the minimum point of $h(t_1,t_2)$ in $\mathbb{R}^2$. \end{itemize} Then the solution $(\alpha^{(k)}, \beta^{(k)})$ can be determined as follows. If $t_1$ and $t_2$ defined as in \eqref{eq:h1t1} and \eqref{eq:h1t2} satisfy $t_1\ge 0$ and $t_2\ge 0$, then the minimum point of $\tilde{h}_1(t_1,t_2)$ is the unique minimum point of $h(t_1,t_2)$, which implies that \begin{align*} \alpha^{(k)} &=t_1-x^{(k)}_i=\frac{-w_2a_{ij}+w_1a_{jj}-w_1+x^{(k)}_ja_{ij}-x^{(k)}_ia_{jj}+x^{(k)}_i}{a_{ij}^2 -(a_{ii}-1)(a_{jj}-1)},\\ \beta^{(k)} &=t_2-x^{(k)}_j=\frac{-w_1a_{ij}+w_2a_{ii}-w_2+x^{(k)}_ia_{ij}-x^{(k)}_ja_{ii}+x^{(k)}_j}{a_{ij}^2- (a_{ii}-1)(a_{jj}-1)}. \end{align*} If $t_1$ and $t_2$ defined as in \eqref{eq:h2t1} and \eqref{eq:h2t2} satisfy $t_1\ge 0$ and $t_2 < 0$, then the minimum point of $\tilde{h}_2(t_1,t_2)$ is the unique minimum point of $h(t_1,t_2)$, which implies that \begin{align*} \alpha^{(k)} &=\frac{-w_2a_{ij}+w_1a_{jj}+w_1-x^{(k)}_ja_{ij}-x^{(k)}_ia_{jj} -x^{(k)}_i}{a_{ij}^2-(a_{ii}-1)(a_{jj}+1)},\\ \beta^{(k)} &=\frac{-w_1a_{ij}+w_2a_{ii}-w_2+x^{(k)}_ia_{ij} +x^{(k)}_ja_{ii}-x^{(k)}_j}{a_{ij}^2-(a_{ii}-1)(a_{jj}+1)}. 
\end{align*} If $t_1$ and $t_2$ defined as in \eqref{eq:h3t1} and \eqref{eq:h3t2} satisfy $t_1< 0$ and $t_2 \ge 0$, then the minimum point of $\tilde{h}_3(t_1,t_2)$ is the unique minimum point of $h(t_1,t_2)$, which implies that \begin{align*} \alpha^{(k)} &=\frac{-w_2a_{ij}+w_1a_{jj}-w_1+x^{(k)}_ja_{ij} +x^{(k)}_ia_{jj}-x^{(k)}_i}{a_{ij}^2-(a_{ii}+1)(a_{jj}-1)},\\ \beta^{(k)} &=\frac{-w_1a_{ij}+w_2a_{ii}+w_2-x^{(k)}_ia_{ij} -x^{(k)}_ja_{ii}-x^{(k)}_j}{a_{ij}^2-(a_{ii}+1)(a_{jj}-1)}. \end{align*} If $t_1$ and $t_2$ defined as in \eqref{eq:h4t1} and \eqref{eq:h4t2} satisfy $t_1< 0$ and $t_2 < 0$, then the minimum point of $\tilde{h}_4(t_1,t_2)$ is the unique minimum point of $h(t_1,t_2)$, which implies that \begin{align*} \alpha^{(k)} &=\frac{-w_2a_{ij}+w_1a_{jj}+w_1-x^{(k)}_ja_{ij}+x^{(k)}_ia_{jj} +x^{(k)}_i}{a_{ij}^2-(a_{ii}+1)(a_{jj}+1)},\\ \beta^{(k)} &=\frac{-w_1a_{ij}+w_2a_{ii}+w_2-x^{(k)}_ia_{ij} +x^{(k)}_ja_{ii}+x^{(k)}_j}{a_{ij}^2-(a_{ii}+1)(a_{jj}+1)}. \end{align*} According to Lemma \ref{lem:3}, the step sizes $\alpha^{(k)}$ and $\beta^{(k)}$ determined above imply $f(x^{(k+1)})<f(x^{(k)})$. Therefore, Algorithm \ref{alg:2} is called the monotone block coordinate descent method. Before ending this section, we analyze the convergence of Algorithm \ref{alg:2}. For the given $x^{(0)}\in\mathbb{R}^n$, define the level set \begin{equation}\label{eq:level} \mathcal{L}(x^{(0)})=\{x\in\mathbb{R}^n:f(x)\leq f(x^{(0)})\}. \end{equation} \begin{lem}\label{lem:5} If $A-I\in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix and the function $f$ is defined by \eqref{eq:f}, then the level set \eqref{eq:level} is a compact set. \end{lem} \begin{proof} Since $f$ is continuous, the level set $\mathcal{L}(x^{(0)})$ is closed; it remains to show that $\mathcal{L}(x^{(0)})$ is bounded. Assume, for contradiction, that $\{z^{(k)}\}\subseteq \mathcal{L}(x^{(0)})$ is an unbounded sequence. Then we have $$ f(z^{(k)})\leq f(x^{(0)}), $$ which implies $$ (z^{(k)})^\top A z^{(k)}-(z^{(k)})^\top |z^{(k)}|-2(z^{(k)})^\top b\leq f(x^{(0)}). $$ Let $z^*$ be an accumulation point of the sequence $\{\frac{z^{(k)}}{\|z^{(k)}\|}\}$; then $$ \frac{(z^{(k)})^\top A z^{(k)}}{\|z^{(k)}\|^2}-\frac{(z^{(k)})^\top |z^{(k)}|}{\|z^{(k)}\|^2}-\frac{2(z^{(k)})^\top b}{\|z^{(k)}\|^2} \leq \frac{f(x^{(0)})}{\|z^{(k)}\|^2}. $$ Taking the limit as $k\rightarrow \infty$ along the corresponding subsequence, we obtain \begin{equation}\label{eq:z} (z^*)^\top (A-\mathcal{D}(z^*))z^*\leq 0. \end{equation} Since $A-I$ is positive definite, $A-\mathcal{D}(z^*)$ is positive definite. Combining this with \eqref{eq:z}, we conclude $z^*=0$, which contradicts $\|z^*\|=1$. To sum up, the level set $\mathcal{L}(x^{(0)})$ is closed and bounded, and hence compact. \end{proof} Then the following convergence result follows from Theorem~\ref{thm:1}, Lemma \ref{lem:5} and \cite[Proposition 6]{grsc2000}. \begin{thm} Assume that $A - I$ is symmetric positive definite. Then the sequence $\{x^{(k)}\}$ generated by Algorithm \ref{alg:2} has a limit point which is a solution of AVE~\eqref{eq:ave}. \end{thm} \section{Numerical results}\label{sec:Num} In this section, three numerical examples are used to illustrate the effectiveness of Algorithm \ref{alg:2}. In each example, the performance of Algorithm \ref{alg:2} (denoted by BCDA) is compared with that of Algorithm \ref{alg:1} (denoted by MGSM). In the numerical results, we will report ``IT'' (the number of iterations.
For Algorithm \ref{alg:1}, IT $=k\times n$.), ``Time'' (the elapsed CPU time in seconds) and ``RES'' (the relative residual), defined by $$ {\rm RES} =\frac{\|b+|x^{(k)}|-Ax^{(k)}\|}{\|b\|}. $$ All tests are started from the zero initial vector (except Example~\ref{exam:4.3}) and terminated once the current iterate satisfies ${\rm RES} \le 10^{-6}$. In our computations, all runs are implemented in MATLAB (version 9.10, R2021a) on a personal computer with an Intel Core(TM) i7 CPU (2.60 GHz) and 16.0 GB of memory. In order to visualize the performance of the two algorithms, a two-dimensional problem is first considered. \begin{exam}\label{exam:4.1} Consider AVE \eqref{eq:ave}, where \begin{equation*} A=\begin{bmatrix} \frac{3}{2}&\frac{1}{4}\\ \frac{1}{4}&\frac{3}{2} \end{bmatrix} ,~b=[\frac{1}{4},1]^\top\in\mathbb{R}^2. \end{equation*} The numerical results are shown in Table~\ref{table2}, and the iterative trajectories of the algorithms are shown in Figure~\ref{fig1}, which clearly shows that BCDA is monotone while MGSM is not. \begin{table}[H] \centering \caption{Numerical results for Example~\ref{exam:4.1}.}\label{table2} \begin{tabular}{lllllllll}\hline &$n=2$ &BCDA &MGSM \\\hline &IT &1 &2 \\ &Time &0.003460 &0.010728 \\ &RES &5.3854e-17 &4.8168e-16 \\\hline \end{tabular} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{fig1.pdf} \caption{Iterative trajectories of the algorithms for Example~\ref{exam:4.1}.}\label{fig1} \end{figure} \end{exam} Next, a high-dimensional example is used to demonstrate the performance of the proposed algorithm. \begin{exam}\label{exam:4.2} Consider AVE \eqref{eq:ave}, where \begin{eqnarray*} A=\mathrm{tridiag}(\frac{3}{4},4,\frac{3}{4})= \left(\begin{array}{cccccc} 4&\frac{3}{4}&0&\cdots&0&0\\ \frac{3}{4}&4&\frac{3}{4}&\cdots&0&0\\ 0&\frac{3}{4}&4&\cdots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&4&\frac{3}{4}\\ 0&0&0&\cdots&\frac{3}{4}&4\\ \end{array}\right)\in\mathbb{R}^{n\times n} \end{eqnarray*} and $b=[\frac{1}{2},1,\frac{1}{2},1,\ldots,\frac{1}{2},1]^\top\in\mathbb{R}^n$. The numerical results are shown in Table~\ref{table3}, which shows that, for this example, BCDA outperforms MGSM in both the number of iterations and the CPU time. \begin{table}[H] \centering \caption{Numerical results for Example~\ref{exam:4.2}.}\label{table3} \begin{tabular}{lllllllll}\hline &Method &$n$ &1000 &1500 &2000 &2500 &3000 \\\hline &BCDA &IT &2000 &2250 &3000 &3750 &4500 \\ &{} &Time &0.0187 &0.0231 &0.0589 &0.1357 &0.2225 \\ &{} &RES &9.1891e-08 &8.8970e-07 &7.7050e-07 &6.8916e-07 &6.2911e-07 \\ &MGSM &IT &6000 &9000 &12000 &15000 &18000 \\ &{} &Time &0.0377 &0.0561 &0.1796 &0.4969 &0.8426 \\ &{} &RES &3.0374e-07 &3.0432e-07 &3.0461e-07 &3.0478e-07 &3.0490e-07 \\\hline \end{tabular} \end{table} \end{exam} \begin{exam}\label{exam:4.3} Consider AVE \eqref{eq:ave}, where $$ A=\begin{bmatrix} 1 & 0.25 \\ 0.25 & 1 \end{bmatrix},~ b=\begin{bmatrix} 1 \\ 0.5 \end{bmatrix}. $$ In this case, $A-I$ is not symmetric positive definite. It can be verified that this AVE has a solution $x^*=[2,4]^\top$. For both MGSM and BCDA, the initial point $x^{(0)}=[0.1,-1]^\top$ is selected. The relative residual curves of the algorithms are shown in Figure~\ref{fig2}. In this case, the algorithm MGSM does not converge.
In fact, the iterates of the algorithm MGSM are \begin{equation*} x^{(1)}=\begin{bmatrix} 0.1 \\ -1 \end{bmatrix},~ x^{(2)}=\begin{bmatrix} -30 \\ 4 \end{bmatrix},~ x^{(3)}=\begin{bmatrix} 2 \\ -12 \end{bmatrix},~ x^{(4)}=\begin{bmatrix} -30 \\ 4 \end{bmatrix},\dots, \end{equation*} that is, starting from $x^{(2)}$, the algorithm MGSM alternates between the points $[-30,4]^\top$ and $[2,-12]^\top$. However, the algorithm BCDA converges to the solution $x^*$. \end{exam} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{fig2.pdf} \caption{Relative residual curves of the algorithms for Example~\ref{exam:4.3}.}\label{fig2} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, a monotone block coordinate descent method is proposed for solving AVE \eqref{eq:ave}; the numerical results show that it can outperform the method presented in \cite{nika2011}. A future research direction is to develop a nonmonotone block coordinate descent method with a convergence guarantee for solving AVE \eqref{eq:ave}. \begin{thebibliography}{99} \bibitem{chyh2024} C.-R. Chen, B. Huang, D.-M. Yu, D.-R. Han. Optimal parameter of the SOR-like iteration method for solving absolute value equations, \emph{Numer. Algorithms}, 2024, 96: 799--826. \bibitem{cyyh2021} C.-R. Chen, Y.-N. Yang, D.-M. Yu, D.-R. Han. An inverse-free dynamical system for solving the absolute value equations, \emph{Appl. Numer. Math.}, 2021, 168: 170--181. \bibitem{chyh2023} C.-R. Chen, D.-M. Yu, D.-R. Han. Exact and inexact Douglas-Rachford splitting methods for solving large-scale sparse absolute value equations, \emph{IMA J. Numer. Anal.}, 2023, 43: 1036--1060. \bibitem{cops1992} R. W. Cottle, J.-S. Pang, R. E. Stone. The Linear Complementarity Problem, Academic Press, New York, 1992. \bibitem{dnhl2023} T. M. Dang, T. D. Nguyen, T. Hoang, H. Iim, A. B. J. Teoh, D. Choi. AVET: A novel transform function to improve cancellable biometrics security, \emph{IEEE Trans. Inf. Foren. Sec.}, 2023, 18: 758--772. \bibitem{doss2020} X. Dong, X.-H. Shao, H.-L. Shen. A new SOR-like method for solving absolute value equations, \emph{Appl. Numer. Math.}, 2020, 156: 410--421. \bibitem{edhs2017} V. Edalatpour, D. Hezari, D. K. Salkuyeh. A generalization of the Gauss-Seidel iteration method for solving absolute value equations, \emph{Appl. Math. Comput.}, 2017, 293: 156--167. \bibitem{gawa2014} X.-B. Gao, J. Wang. Analysis and application of a one-layer neural network for solving horizontal linear complementarity problems, \emph{Int. J. Comput. Intell. Syst.}, 2014, 7: 724--732. \bibitem{ghlw2017} X.-M. Gu, T.-Z. Huang, H.-B. Li, S. Wang, L. Li. Two CSCS-based iteration methods for solving absolute value equations, \emph{Appl. Anal. Comput.}, 2017, 7: 1336--1356. \bibitem{grsc2000} L. Grippo, M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints, \emph{Oper. Res. Lett.}, 2000, 26: 127--136. \bibitem{hlad2018} M. Hlad\'{i}k. Bounds for the solutions of absolute value equations, \emph{Comput. Optim. Appl.}, 2018, 69: 243--266. \bibitem{huhu2010} S.-L. Hu, Z.-H. Huang. A note on absolute value equations, \emph{Optim. Lett.}, 2010, 4: 417--424. \bibitem{jyfc2023} X.-X. Ju, X.-S. Yang, G. Feng, H.-J. Che. Neurodynamic optimization approaches with finite/fixed-time convergence for absolute value equations, \emph{Neural Netw.}, 2023, 165: 971--981. \bibitem{keyf2020} Y.-F. Ke. The new iteration algorithm for absolute value equation, \emph{Appl. Math. Lett.}, 2020, 99: 105990. \bibitem{kema2017} Y.-F. Ke, C.-F. Ma.
SOR-like iteration method for solving absolute value equations, \emph{Appl. Math. Comput.}, 2017, 311: 195--202. \bibitem{lyyh2023} X.-H. Li, D.-M. Yu, Y.-N. Yang, D.-R. Han, C.-R. Chen. A fixed-time inverse-free dynamical system for solving the system of absolute value equations, \emph{Numer. Math. Theory Methods Appl.}, 2023, 16: 622--633. \bibitem{mang2007} O. L. Mangasarian. Absolute value programming, \emph{Comput. Optim. Appl.}, 2007, 36: 43--53. \bibitem{mang2009} O. L. Mangasarian. A generalized Newton method for absolute value equations, \emph{Optim. Lett.}, 2009, 3: 101--108. \bibitem{mame2006} O. L. Mangasarian, R. R. Meyer. Absolute value equations, \emph{Linear Algebra Appl.}, 2006, 419: 359--367. \bibitem{maer2018} A. Mansoori, M. Erfanian. A dynamic model to solve the absolute value equations, \emph{J. Optim. Theory Appl.}, 2018, 333: 28--35. \bibitem{mezz2020} F. Mezzadri. On the solution of general absolute value equations, \emph{Appl. Math. Lett.}, 2020, 107: 106462. \bibitem{nika2011} M. A. Noor, J. Iqbal, S. Khattri, E. Al-Said. A new iterative method for solving absolute value equations, \emph{Int. J. Phys. Sci.}, 2011, 6: 1793--1797. \bibitem{prok2009} O. Prokopyev. On equivalent reformulations for absolute value equations, \emph{Comput. Optim. Appl.}, 2009, 44: 363--372. \bibitem{rohn1989} J. Rohn. Systems of linear interval equations, \emph{Linear Algebra Appl.}, 1989, 126: 39--78. \bibitem{rohn2004} J. Rohn. A theorem of the alternatives for the equation~$Ax+B\vert x \vert= b$, \emph{Linear Multilinear Algebra}, 2004, 52: 421--426. \bibitem{rohn2009} J. Rohn. On unique solvability of the absolute value equation, \emph{Optim. Lett.}, 2009, 3: 603--606. \bibitem{rohf2014} J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat. An iterative method for solving absolute value equations and sufficient conditions for unique solvability, \emph{Optim. Lett.}, 2014, 8: 35--44. \bibitem{sanc2019} B. Saheya, C. T. Nguyen, J.-S. Chen. Neural network based on systematically generated smoothing functions for absolute value equation, \emph{J. Appl. Math. Comput.}, 2019, 61: 533--558. \bibitem{wacc2019} A. Wang, Y. Cao, J.-X. Chen. Modified Newton-type iteration methods for generalized absolute value equations, \emph{J. Optim. Theory Appl.}, 2019, 181: 216--230. \bibitem{wuli2018} S.-L. Wu, C.-X. Li. The unique solution of the absolute value equations, \emph{Appl. Math. Lett.}, 2018, 76: 195--200. \bibitem{wush2021} S.-L. Wu, S.-Q. Shen. On the unique solution of the generalized absolute value equation, \emph{Optim. Lett.}, 2021, 15: 2017--2024. \bibitem{yuch2022} D.-M. Yu, C.-R. Chen, D.-R. Han. A modified fixed point iteration method for solving the system of absolute value equations, \emph{Optimization}, 2022, 71: 449--461. \bibitem{yuly2021} Z.-S. Yu, L. Li, Y. Yuan. A modified multivariate spectral gradient algorithm for solving absolute value equations, \emph{Appl. Math. Lett.}, 2021, 121: 107461. \bibitem{yzch2024} D.-M. Yu, G.-H. Zhang, C.-R. Chen, D.-R. Han. The neural network models with delays for solving absolute value equations, \emph{Neurocomputing}, 2024, 589: 127707. \bibitem{zhwl2021} H.-Y. Zhou, S.-L. Wu, C.-X. Li. Newton-based matrix splitting method for generalized absolute value equation, \emph{J. Comput. Appl. Math.}, 2021, 394: 113578. \end{thebibliography} \end{document}
2412.11841v1
http://arxiv.org/abs/2412.11841v1
A Serrin-type over-determined problem for Hessian equations in the exterior domain
\documentclass{amsart} \usepackage{amsmath} \usepackage{color} \usepackage{mathtools} \usepackage{amsmath,enumerate,amsfonts,amssymb,color,graphicx,amsthm} \pagestyle{myheadings} \renewcommand{\baselinestretch}{1} \def\NN{{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\RR{{\mathbb R}} \def\HH{{\mathbb H}} \def\CC{{\mathbb C}} \def\SSphere{{\mathbb S}} \def\FM{{\mathfrak M}} \def\FS{{\mathfrak S}} \def\Ric{{\rm Ric}} \def\Sym{{\rm Sym}} \def\diag{{\rm diag}} \def\Snn{{\mathcal{S}^{n \times n}}} \newcounter{marnote} \newcommand\marginnote[1]{\stepcounter{marnote}$^{\bullet\,\themarnote}$\marginpar{\tiny$\bullet\,\themarnote$:\,#1}} \numberwithin{equation}{section} \begin{document} \newtheorem{thm}{Theorem}[section] \newtheorem{Def}[thm]{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{rem}[thm]{Remark} \newtheorem{question}[thm]{Question} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{example}[thm]{Example} \title[A Serrin-type over-determined problem]{A Serrin-type over-determined problem for Hessian equations in the exterior domain} \author{Bo Wang} \address{ School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China.} \email{[email protected].} \author{ Zhizhang Wang} \address{ School of Mathematical Sciences, Fudan University, Shanghai, China.} \email{[email protected].} \date{} \thanks{The first author is supported by NSFC Grants No.12271028, BNSF Grants No.1222017 and the Fundamental Research Funds for the Central Universities. The second author is supported by NSFC Grants No.12141105. } \maketitle \begin{abstract} In this paper, we consider the Hessian equations in some exterior domain with prescribed asymptotic behavior at infinity and Dirichlet-Neumann conditions on its interior boundary. We obtain that there exists a unique bounded domain such that the over-determined problem admits a unique strictly convex solution. \end{abstract} \setcounter{section}{0} \section{Introduction} Suppose that $\Omega$ is a smooth, bounded, simply connected, open set in $\mathbb{R}^d$ ($d\geq2$). Throughout this paper, we always use $\nu$ to denote the unit outer normal of $\partial \Omega$, which is the boundary of $\Omega$. A celebrated theorem of Serrin \cite{serrin1971} states that if $u\in C^{2}(\bar{\Omega})$ satisfies the Poisson equation \begin{equation*} \Delta u=d~~~~\mbox{in }\Omega \end{equation*} with Dirichlet-Neumann boundary conditions: $u=0$ and $\partial u/\partial \nu=1$ on $\partial\Omega$, then up to a translation, $\Omega$ is a unit ball and $u(x)=(|x|^{2}-1)/2$. Serrin's proof is based on the well-known moving planes method. After that, Weinberger \cite{w1971} gave another proof by using the integral identities. Brandolini-Nitsch-Salani-Trombetti \cite{bnst2008a} gave an alternative proof of Serrin's theorem. Meanwhile, they also applied their approach to deal with the over-determined problem for the Hessian equations. More precisely, for $k=1$, $\cdots$, $d$, they proved that if $u\in C^{2}(\bar{\Omega})$ satisfies \begin{equation*} \sigma_{k}(\lambda(D^{2}u))=\binom{d}{k}\text{ in } \Omega \end{equation*} with Dirichlet-Neumann boundary conditions: $u=0$ and $\partial u/\partial \nu=1$ on $\partial\Omega$, then up to a translation, $\Omega$ is a unit ball and $u(x)=(|x|^{2}-1)/2$. 
Here $\binom{d}{k}$ denotes the binomial coefficient, $\lambda(D^{2}u)=(\lambda_{1},\cdots,\lambda_{d})$ denotes the vector of eigenvalues of $D^{2}u$, and \begin{equation*} \sigma_{k}(\lambda(D^{2}u)):=\sum\limits_{1\leq i_{1}<\cdots<i_{k}\leq d}\lambda_{i_{1}}\cdots\lambda_{i_{k}} \end{equation*} denotes the $k$-th elementary symmetric function of $\lambda(D^{2}u)$. In particular, for $k=1$ and $k=d$, this is the Laplace operator and the Monge-Amp\`{e}re operator, respectively. Later on, in \cite{bnst2008b,bnst2008c}, they further studied the stability of the corresponding problem for $k=1,d$. The over-determined problem has attracted a lot of attention. See \cite{d2023,dz2023,jinxiong,ps1989,qx2017,reichel1995,reichel1996,hs2012,wangbao2014,wangbao2015,wgs1994} and the references therein for more results. In this paper, we consider a natural question: can we extend the above results to the exterior domain $\mathbb{R}^d\setminus\bar{\Omega}$ ($d\geq 3$)? More precisely, for $1\leq k\leq d$, we consider the following exterior over-determined problem for the $k$-Hessian equations: \begin{equation}\label{baozhang3} \sigma_{k}(\lambda(D^{2}u))=\binom{d}{k}~~~~\text{in } \mathbb{R}^d\setminus \bar{\Omega}, \end{equation} \begin{equation}\label{baozhang2} u=0,~~\partial u/\partial \nu=1~~~~\mbox{on }\partial\Omega, \end{equation} \begin{equation}\label{ab} u(x)-\left(\frac{1}{2}x\cdot Ax+b\cdot x+c\right)\rightarrow0~~~~\text{as } |x|\rightarrow \infty, \end{equation} where $A$ is a $d\times d$ real symmetric positive definite matrix, $b\in\RR^{d}$, $c\in\RR$ and $``\cdot"$ denotes the inner product of $\mathbb{R}^d$. \begin{Def} For $1\leq k\leq d$, let $\mathcal{A}_k$ denote the set of all $d\times d$ real symmetric positive definite matrices $A$ satisfying $\sigma_{k}(\lambda(A))=\binom{d}{k}$. \end{Def} For the Monge-Amp\`{e}re equation, i.e., $k=d$ in (\ref{baozhang3}): \begin{equation}\label{ma} \det(D^{2}u)=1~~~~\text{in } \mathbb{R}^d\setminus \bar{\Omega}, \end{equation} Caffarelli-Li \cite{cl2003} proved that, if $u$ is a convex viscosity solution of (\ref{ma}), then $u$ satisfies the asymptotic behavior (\ref{ab}) at infinity for some $A\in \mathcal{A}_{d}$, $b\in\RR^{d}$ and $c\in\RR$. Moreover, they obtained the decay rate of $u$ approaching the quadratic polynomial at infinity: \begin{equation}\label{fd4} \limsup\limits_{|x|\rightarrow\infty}|x|^{d-2}\left|u(x)-\left(\frac{1}{2}x\cdot Ax+b\cdot x+c\right)\right|<\infty. \end{equation} With this prescribed asymptotic behavior, they also proved that given any $A\in\mathcal{A}_{d}$, $b\in\RR^{d}$ and $\varphi\in C^{2}(\partial\Omega)$, there exists a constant $c^{*}$, depending on $d$, $\Omega$, $A$, $b$ and $\varphi$, such that the exterior Dirichlet problem: (\ref{ma}), (\ref{ab}) with $c>c^{*}$, and $u=\varphi$ on $\partial\Omega$ admits a unique convex viscosity solution $u\in C^{\infty}(\RR^{d}\setminus\bar{\Omega})\cap C^{0}(\RR^{d}\setminus\Omega)$. After that, Li-Lu \cite{lilu2018} proved that there exists a sharp constant $c_{*}$ such that the existence result holds for $c\geq c_{*}$ while the non-existence result holds for $c<c_{*}$. For the Poisson equation, i.e., $k=1$ in (\ref{baozhang3}): \begin{equation}\label{la} \Delta u=d~~~~\text{in } \mathbb{R}^d\setminus \bar{\Omega}, \end{equation} by the classical theorem of B\^{o}cher \cite{bocher}, the asymptotic behavior (\ref{ab}) implies (\ref{fd4}).
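As a quick illustration (not needed for any of the proofs), note that a quadratic polynomial $u(x)=\frac{1}{2}x\cdot Ax+b\cdot x+c$ has constant Hessian $D^{2}u=A$, so it solves \eqref{baozhang3} exactly when $A\in\mathcal{A}_{k}$. The short Python sketch below, whose helper name and use of \texttt{numpy} are our own choices rather than anything taken from the literature, evaluates $\sigma_{k}(\lambda(M))$ for a symmetric matrix $M$ and checks that $A=I$ gives $\binom{d}{k}$.
\begin{verbatim}
import numpy as np
from itertools import combinations
from math import comb

def sigma_k(M, k):
    # k-th elementary symmetric function of the eigenvalues of the
    # symmetric matrix M, i.e. sigma_k(lambda(M)).
    lam = np.linalg.eigvalsh(M)
    return sum(np.prod(vals) for vals in combinations(lam, k))

d, k = 4, 2
A = np.eye(d)                        # A = I belongs to A_k for every k
print(sigma_k(A, k), comb(d, k))     # both equal binom(4, 2) = 6
\end{verbatim}
The same check applies verbatim to any $A\in\mathcal{A}_{k}$, since the Hessian of a quadratic polynomial is constant.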
For the $k$-Hessian equations (\ref{baozhang3}) with $2\leq k\leq d-1$, Bao-Wang \cite{BaoW} proved that the asymptotic behavior (\ref{ab}) implies (\ref{fd4}). In \cite{baolili}, Bao-Li-Li obtained the existence result for the corresponding exterior Dirichlet problem. Furthermore, Li-Li \cite{lili2018} extended the existence result to the Hessian quotient equations. For more results on the exterior problem of nonlinear elliptic equations, we refer to \cite{AB,baocao,baoli,baolili,baolizhang0,baolizhang,baoxiongzhou,cl2003, dbw2024,liliyuan2020,lilu2018,li2019,reichel1997} and the references therein. \subsection{Main results} In this paper, we will investigate the uniqueness and existence of the domain $\Omega$ such that the problem (\ref{baozhang3})-(\ref{ab}) admits a unique strictly convex solution. Note that if $u\in C^{2}(\RR^{d}\setminus\Omega)$ is a strictly convex solution of (\ref{baozhang3}) and (\ref{ab}) for some $A$, $b$, $c$, then $A$ must belong to $\mathcal{A}_k$; see \cite{BaoW}. So we will always assume that $A\in\mathcal{A}_k$ throughout this paper. For the case $A=I\in\mathcal{A}_k$, where $I$ denotes the $d\times d$ identity matrix, we can give a complete characterization of such domains. \begin{thm}\label{cases:80} Let $d\geq 3$ and $1\leq k\leq d$. Given $A=I\in\mathcal{A}_k$ and $b\in\RR^{d}$, let $c_{*}$ be a constant determined by $k$, $d$, $b$. Then there exists a unique smooth, bounded domain $\Omega\subset\mathbb{R}^d$ such that the problem \eqref{baozhang3}-\eqref{ab} with $c<c_{*}$ admits a unique strictly convex solution $u\in C^{\infty}(\RR^{d}\setminus\Omega)$. Moreover, $\Omega$ is a ball centered at $-b$ with radius $r_{0}$ depending on $k$, $d$, $b$, $c$, and $u$ is radially symmetric. Additionally, the problem \eqref{baozhang3}-\eqref{ab} with $c\geq c_{*}$ admits no strictly convex solution for any smooth bounded domain. \end{thm} \begin{rem} The explicit expression of $c_{*}$ is given by \begin{equation*} c_*=\left(\frac{d-k}{d}\right)^{2/k}\left[-\frac{1}{2}+\int^{\infty}_1\tau\left(\left(1+\frac{k}{d-k}\tau^{-d}\right)^{1/k}-1\right)d\tau\right]+\frac{1}{2}|b|^{2} \end{equation*} for $1\leq k\leq d-1$ and $$c_*=\int_{0}^{\infty}\tau((1+\tau^{-d})^{1/d}-1)d\tau+\frac{1}{2}|b|^{2}~~~~~~~~\mbox{ for $k=d$}.$$ \end{rem} Compared with the results obtained by Serrin \cite{serrin1971} and Brandolini-Nitsch-Salani-Trombetti \cite{bnst2008a}, for the case $A\in\mathcal{A}_k$ and $A\neq I$, one may expect that there is no domain such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique strictly convex solution. However, surprisingly, we can construct a non-symmetric domain by performing a Legendre transform and solving the corresponding dual problem. This phenomenon is quite different from the Serrin-type over-determined problem in the bounded domain. This is the main novelty of this paper. Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of $A$, respectively. Let $$\bar{c}=\bar{c}(A,b):= \frac{|b|^{2}\lambda_{\min}(A)-\lambda_{\max}(A)}{2\lambda_{\min}(A)\lambda_{\max}(A)}.$$ Then we have the following theorems. \begin{thm}\label{cases:10} Let $d\geq 3$ and $2\leq k\leq d$. Given $(A,b,c)\in\mathcal{A}_{k}\times \RR^{d}\times(-\infty,\bar{c})$, there exists a unique smooth, bounded domain $\Omega\subset\mathbb{R}^d$ such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique strictly convex solution $u\in C^{\infty}(\RR^{d}\setminus\Omega)$.
\end{thm} For the Poisson equation, i.e. equation \eqref{la}, we need an extra assumption on $A$. \begin{thm}\label{cases:20} Let $d\geq 3$. Given $(A,b,c)\in\mathcal{A}_{1}\times \RR^{d}\times(-\infty,\bar{c})$, if we further assume that $\lambda_{\max}(A)<\frac{1}{2}$, then there exists a unique smooth, bounded domain $\Omega\subset\mathbb{R}^d$ such that the problem \eqref{la}, \eqref{baozhang2}, \eqref{ab} admits a unique strictly convex solution $u\in C^{\infty}(\RR^{d}\setminus\Omega)$. \end{thm} \begin{rem}\label{sm} The decay rate for the solutions $u$, which are obtained by the above two theorems, approaching the quadratic polynomial is \eqref{fd4}. \end{rem} \begin{rem} Concerning the above two theorems, it may be an interesting question to find the sharp constant $c_{*}$ for the case $A\in\mathcal{A}_k$ and $A\neq I$ such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique strictly convex solution which is non-radially symmetric. \end{rem} Note that our overdetermined problem for the Poisson equation is indeed a free boundary problem, which may be related to Berestycki-Caffarelli-Nirenberg conjecture \cite{BCN}. For more results on the free boundary problem, we refer to \cite{cs} and the references therein. \subsection{Questions and comments} A function $u \in C^2 (\RR^{d}\setminus\bar{\Omega})$ is called $k$-convex if \begin{equation*} \lambda (D^2 u) \in \Gamma_k:= \{\lambda \in \mathbb{R}^{d}: \sigma_{j} (\lambda) > 0, j = 1, \cdots, k\}~~~~\mbox{in }\RR^{d}\setminus\bar{\Omega}. \end{equation*} \begin{Def} For $1\leq k\leq d$, let $\tilde{\mathcal{A}}_k$ denote the set of all $d\times d$ real symmetric matrix $A$ satisfying $\lambda (A) \in \Gamma_k$ and $\sigma_{k}(\lambda(A))=\binom{d}{k}$. \end{Def} \begin{lem}\label{general} For $1\leq k\leq d-1$, if $u\in C^{2}(\RR^{d}\setminus\bar{\Omega})$ is a $k$-convex solution of (\ref{baozhang3}) and satisfies (\ref{ab}) for some $d\times d$ real symmetric matrix $A$, $b\in\RR^{d}$ and $c\in\RR$, then $A\in\tilde{\mathcal{A}}_k$. \end{lem} For $1\leq k\leq d-1$, it is natural to discuss the uniqueness and existence of the domain $\Omega$ such that the problem (\ref{baozhang3})-(\ref{ab}) admits a unique solution in the class of $k$-convex functions. For the uniqueness, we propose the following question: \begin{question}\label{ershu} Let $d\geq 3$ and $1\leq k\leq d-1$. Given $(A,b,c)\in\tilde{\mathcal{A}}_{k}\times \RR^{d}\times\RR$, suppose that $(\Omega_{1}$, $u_{1})$ and $(\Omega_{2}$, $u_{2})$ are two pairs of domain and solution such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique solution in the class of $k$-convex functions. Can we conclude that $\Omega_{1}=\Omega_{2}$ and $u_{1}\equiv u_{2}$? \end{question} \begin{rem} As in the proof of our main results, the answer of the above question is Yes in the class of strictly convex functions. \end{rem} For the existence, we propose the following question: \begin{question}\label{bashu} Let $d\geq 3$ and $1\leq k\leq d-1$. Given $(A,b,c)\in\tilde{\mathcal{A}}_{k}\times \RR^{d}\times\RR$, does there exist a bounded domain such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique $k$-convex solution? \end{question} For the case $A=I\in\tilde{\mathcal{A}}_{k}$, we can find a constant $\hat{c}$ such that the answer of Question \ref{bashu} is Yes with $c<\hat{c}$ and the domain is a ball. \begin{rem}\label{pangxie} Let $d\geq 3$ and $1\leq k\leq d-1$. 
Given $A=I\in\tilde{\mathcal{A}}_{k}$ and $b\in\RR^{d}$, suppose that $\Omega$ is a ball centered at $-b$ with some radius $r_{0}>0$. Then there exists a constant $\hat{c}=\hat{c}(k,d,b)$ such that, if $c\geq \hat{c}$, the problem \eqref{baozhang3}-\eqref{ab} admits no $k$-convex solution, and if $c<\hat{c}$, the problem \eqref{baozhang3}-\eqref{ab} admits a unique $k$-convex solution $u\in C^{\infty}(\RR^{d}\setminus\Omega)$ of the form \begin{equation}\label{daxia} u(x)=\int_{r_{0}}^{|x+b|}s(1+Cs^{-d})^{1/k}ds, \end{equation} where $C=C(r_{0})=r_{0}^{d-k}-r_{0}^{d}$. Moreover, the constant $\hat{c}$ is strictly decreasing with respect to $k$. \end{rem} In view of Remark \ref{pangxie}, if $A=I\in\tilde{\mathcal{A}}_{k}$, one may ask Question \ref{ershu} more precisely. This leads to the following rigidity question. \begin{question}\label{laosan} Let $d\geq 3$ and $1\leq k\leq d-1$. Given $A=I\in\tilde{\mathcal{A}}_{k}$, $b\in\RR^{d}$ and $c\in\RR$, suppose that there exists a bounded domain $\Omega$ such that the problem \eqref{baozhang3}-\eqref{ab} admits a unique $k$-convex solution $u\in C^{\infty}(\RR^{d}\setminus\Omega)$. Can we then conclude that $\Omega$ is a ball centered at $-b$ with some radius $r_{0}>0$ and that $u$ is of the form \eqref{daxia}? \end{question} We can give a partial answer to Question \ref{laosan} under an additional assumption: \begin{equation}\label{fd4--} \limsup\limits_{|x|\rightarrow\infty}|x|^{\alpha}\left|u(x)-\left(\frac{1}{2}|x|^{2}+c\right)\right|<\infty, \end{equation} where $\alpha>d$. \begin{thm}\label{ridig''} Let $d\geq 3$ and $2\leq k\leq d-1$. If there exists a bounded star-shaped domain $\Omega\subset\mathbb{R}^d$ such that the problem \eqref{baozhang3}, \eqref{baozhang2}, \eqref{fd4--} with some $c<0$ and $\alpha>d$ admits a $k$-convex solution $u\in C^{2}(\RR^{d}\setminus\Omega)$, then $\Omega$ is a unit ball and $u(x)=\frac{1}{2}|x|^{2}-\frac{1}{2}$. \end{thm} For $k=1$, the star-shapedness and the condition $c< 0$ can be removed. \begin{thm}\label{ridig} Let $d\geq 3$. If there exists a bounded domain $\Omega\subset\mathbb{R}^d$ such that the problem \eqref{la}, \eqref{baozhang2}, \eqref{fd4--} with some $c\in\RR$ and $\alpha>d$ admits a solution $u\in C^{2}(\RR^{d}\setminus\Omega)$, then $\Omega$ is a unit ball and $u(x)=\frac{1}{2}|x|^{2}-\frac{1}{2}$. \end{thm} \subsection{Outline of the proof} We give a brief outline of our proof. By an orthogonal transformation and a translation, we may assume that \begin{equation*} A=\mbox{diag}\{1/a_{1},\cdots,1/a_{d}\}~~~~\mbox{and}~~~~b=0, \end{equation*} where $a_{i}>0$, $i=1$, $\cdots$, $d$, and $\sigma_{k}(1/a_{1},\cdots,1/a_{d})=\binom{d}{k}$. Then the asymptotic behavior (\ref{fd4}) can be written as \begin{equation}\label{fd4'} \limsup\limits_{|x|\rightarrow\infty}|x|^{d-2}\left|u(x)-\left(\frac{1}{2}\sum\limits_{i=1}^{d}\frac{x_{i}^{2}}{a_{i}}+c\right)\right|<\infty. \end{equation} Let $u^{*}=u^{*}(p)$ denote the Legendre transform of $u$.
Then, in Section 2, we will show that $u^{*}$ satisfies the Hessian quotient equation in the exterior domain: \begin{equation}\label{fd5} \sigma_{d}(\lambda(D^{2}u^{*}))/\sigma_{d-k}(\lambda(D^{2}u^{*}))=1/\binom{d}{k}~~~~\mbox{in }\mathbb{R}^d\setminus\bar{B}_{1} \end{equation} with the Robin boundary condition on its interior boundary: \begin{equation}\label{fd6} u^{*}=\partial u^{*}/\partial n~~~~\mbox{on }\partial B_{1}, \end{equation} and the following prescribed asymptotic behavior at infinity: \begin{equation}\label{fd7} \limsup\limits_{|p|\rightarrow\infty}|p|^{d-2}\left|u^{*}(p)-\left(\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}-c\right)\right|<\infty, \end{equation} where $B_{1}\subset\RR^{d}$ denotes the unit ball centered at the origin and $n=n(p)=p$ for any $p\in\partial B_{1}$. The problem (\ref{fd5})-(\ref{fd7}) is a Hessian quotient equation with an oblique boundary condition on its interior boundary and a dual asymptotic behavior at infinity. In order to solve this problem, we will divide our proof into three steps. Step 1. For $0\leq t\leq 1$, in Section 3, we will construct the subsolution $\underline{\omega}_t$ and the supersolution $\bar{\omega}_t$ of the problem: (\ref{fd5}) with the right-hand side $1/\binom{d}{k}$ replaced by $\sigma_{d}(a(t))/\sigma_{d-k}(a(t))$, (\ref{fd6}) and the following asymptotic behavior at infinity: \begin{equation}\label{fd7----------} \limsup\limits_{|p|\rightarrow\infty}|p|^{d_{k}^{*}(a(t))-2}\left|u^{*}(p)-\left(\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}(t)p_{i}^{2}-c\right)\right|<\infty, \end{equation} where \begin{equation}\label{at} a(t):=(a_1(t),\cdots, a_d(t))=ta+(1-t)(1,\cdots,1), \end{equation} and \begin{equation*} d_{k}^{*}(a(t)):=\frac{k}{\max\limits_{1\leq i\leq d}a_{i}(t)\sigma_{k-1;i}(a(t))}. \end{equation*} Step 2. For sufficiently large $R>1$, in Section 4, we will consider the following problem on the annulus: \begin{equation}\label{fd5a} \sigma_{d}(\lambda(D^{2}u^{*}_{R,t}))/\sigma_{d-k}(\lambda(D^{2}u^{*}_{R,t}))=\sigma_{d}(a(t))/\sigma_{d-k}(a(t))~~~~\mbox{in }B_{R}\setminus\bar{B}_{1} \end{equation} with the Robin boundary condition: \begin{equation}\label{fd6a} u^{*}_{R,t}=\partial u^{*}_{R,t}/\partial n+(1-t)\psi_t~~~~\mbox{on }\partial B_{1}, \end{equation} and the Dirichlet boundary condition: \begin{equation}\label{fd7a} u^{*}_{R,t}=\underline{\omega}_t~~~~\mbox{on }\partial B_{R}, \end{equation} where $\underline{\omega}_t$ is constructed as in Step 1 and \begin{equation}\label{psit} \psi_t:=\underline{\omega}_t-\partial \underline{\omega}_t/\partial n. \end{equation} It is easy to see that the above problem is solvable for $t=0$ with $u^{*}_{R,0}\equiv\underline{\omega}_0$. In order to solve the above problem for $t=1$, by the continuity method, we will derive the closedness and openness of the problem (\ref{fd5a})-(\ref{fd7a}) with respect to $t$. Step 3. For sufficiently large $R>1$, let $u_{R,1}^{*}$ be the solution of (\ref{fd5a})-(\ref{fd7a}) for $t=1$. In Section 5, we will derive the local $C^{0}$--$C^{2}$ estimates of $u_{R,1}^{*}$ with respect to $R$. Then, using a Cantor diagonal subsequence argument, we obtain that $u_{R,1}^{*}$ converges to a function $u^{*}$, which is the unique strictly convex solution of the problem (\ref{fd5})-(\ref{fd7}). Since the Pogorelov-type interior estimates do not always hold for the Hessian quotient equations, we cannot obtain the asymptotic behavior directly for the dual problem.
To overcome this difficulty, we use the Pogorelov type interior estimates for the Hessian equation \eqref{baozhang3} to obtain the asymptotic behavior, and then we derive the asymptotic behavior of the dual problem via the Legendre transform. By the above three steps, we can finish the proofs of Theorems \ref{cases:10} and \ref{cases:20}. The rest of this paper is organized as follows. In Section 6, we give the proofs of Theorem \ref{cases:80} and Remark \ref{pangxie}. In Section 7, we give the proofs of Theorems \ref{ridig''} and \ref{ridig}. In Section 8, we give the proof of Lemma \ref{general}. \section{The dual problem} Let $u$ be a locally strictly convex $C^{2}$ function defined in $\mathbb{R}^d\setminus\Omega$. We define $u^{*}$, the Legendre transform of $u$, as \begin{equation} u^{*}(p)=p\cdot x-u(x), \quad \forall~x\in\mathbb{R}^d\setminus\Omega,~p=Du(x)\in\bar{\Omega}^{*},\label{cases:11} \end{equation} where $\Omega^{*}:=Du(\mathbb{R}^d\setminus\bar{\Omega})$. With the aid of the Legendre transform, we have the following lemma. \begin{lem}\label{miyun} If $u\in C^{2}(\mathbb{R}^d\setminus\Omega)$ is a locally strictly convex solution of \eqref{baozhang3}, \eqref{baozhang2} and \eqref{fd4'}, then the Legendre transform $u^{*}\in C^{2}(\mathbb{R}^d\setminus B_{1})$ of $u$ is a locally strictly convex solution of \eqref{fd5}-\eqref{fd7}. \end{lem} \begin{proof} We firstly prove that \begin{equation}\label{dd} \Omega^{*}=\mathbb{R}^d\setminus\bar{B}_{1}. \end{equation} Indeed, on the one hand, for any $p\in\Omega^{*}$, there exists $x=x(p)\in\mathbb{R}^d\setminus\bar{\Omega}$ such that $p=Du(x)$. By (\ref{baozhang2}) and the strict convexity of $\Omega$, there exist $x_{0}\in\partial\Omega$ and $\xi_{0}\in\mathbb{R}^d\setminus\bar{\Omega}$ such that \begin{equation*} |p|=|Du(x)|\geq\frac{\partial u}{\partial \nu}(x)=\frac{\partial u}{\partial \nu}(x_{0})+\frac{\partial^{2} u}{\partial \nu^{2}}(\xi_{0})|x-x_{0}|>1. \end{equation*} Thus, we have that \begin{equation} \Omega^{*}\subset \mathbb{R}^d\setminus\bar{B}_{1}.\label{cases:3} \end{equation} On the other hand, for any $p\in\partial\Omega^{*}$, there exists a sequence $\{p^{(m)}\}_{m=1}^{\infty}\subset\Omega^*$ such that $\lim\limits_{m\rightarrow\infty}p^{(m)}=p$. Let $x^{(m)}\in\RR^{d}\setminus\bar{\Omega}$ be such that $p^{(m)}=Du(x^{(m)})$ for $m\in\NN^{+}$. By Lemma 3.2 in \cite{BaoW}, we have that, for $i=1$, $\cdots$, $d$, \begin{equation}\label{Du} D_{i}u(x)=\frac{x_{i}}{a_{i}}+O\left(|x|^{1-d}\right),~~~~\mbox{as $|x|\rightarrow\infty$}. \end{equation} It follows that $\{x^{(m)}\}_{m=1}^{\infty}$ is a bounded sequence in $\RR^{d}\setminus\bar{\Omega}$. Thus, there exists a subsequence converging to some $x\in\RR^{d}\setminus\Omega$, and $p=Du(x)$. If $x\in\RR^{d}\setminus\bar{\Omega}$, then, since $Du$ maps interior points to interior points, $p$ would be an interior point of $\Omega^{*}$, which is a contradiction. Thus $x\in\partial\Omega$. By \eqref{baozhang2}, we can conclude that \begin{equation} \partial\Omega^{*}\subset Du(\partial\Omega)\subset \partial B_{1}.\label{cases:4} \end{equation} Then (\ref{cases:3}) and (\ref{cases:4}) yield (\ref{dd}). We secondly prove that $u^{*}$ is a $C^{2}$ locally strictly convex solution of (\ref{fd5})-(\ref{fd6}). Indeed, by the definition of $u^{*}$ and (\ref{dd}), $u^{*}\in C^{2}(\mathbb{R}^d\setminus B_{1})$ is a locally strictly convex function. Moreover, we have that $D^{2}u^{*}=(D^{2}u)^{-1}$ in $\RR^{d}\setminus\bar{B}_{1}$.
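Indeed, if $\lambda_{1},\cdots,\lambda_{d}$ denote the eigenvalues of $D^{2}u$, then the eigenvalues of $D^{2}u^{*}=(D^{2}u)^{-1}$ are $1/\lambda_{1},\cdots,1/\lambda_{d}$, and therefore \begin{equation*} \frac{\sigma_{d}(\lambda(D^{2}u^{*}))}{\sigma_{d-k}(\lambda(D^{2}u^{*}))} =\frac{\prod_{i=1}^{d}\lambda_{i}^{-1}}{\sigma_{k}(\lambda(D^{2}u))\prod_{i=1}^{d}\lambda_{i}^{-1}} =\frac{1}{\sigma_{k}(\lambda(D^{2}u))}. \end{equation*}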
Consequently, by (\ref{baozhang3}), we can conclude that $u^{*}$ satisfies (\ref{fd5}). By (\ref{baozhang2}), for any $p\in \partial B_{1}$, there exists $x=x(p)\in\partial\Omega$ such that $p=Du(x)$ and \begin{equation*} u^{*}(p)=p\cdot x-u(x)=p\cdot Du^{*}(p)=\frac{\partial u^{*}}{\partial n}(p), \end{equation*} that is (\ref{fd6}). We thirdly prove that $u^{*}$ satisfies (\ref{fd7}). By \eqref{Du}, we have that, for $i=1$, $\cdots$, $d$, \begin{equation*} x_{i}=a_{i}p_{i}+O\left(|p|^{1-d}\right),~~~~\mbox{as $|p|\rightarrow\infty$}, \end{equation*} where we have used the fact that $O(|x|^{-1})=O(|p|^{-1})$. It follows from (\ref{fd4'}) that \begin{equation*} u^{*}(p)=p\cdot x-u(x)=\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}-c+O\left(|p|^{2-d}\right),~~~~\mbox{as }|p|\rightarrow\infty,\label{cases:5} \end{equation*} that is (\ref{fd7}). \end{proof} \section{The Sub and Super Solutions} In this section, for any vector $a=(a_1,\cdots,a_d)\in\mathbb{R}^d$ with $a_i>0$, $i=1$, $\cdots$, $d$, we will construct the strictly convex sub and super solutions of the equation \begin{equation}\label{fd5-------} \sigma_{d}(\lambda(D^{2}u^{*}))/\sigma_{d-k}(\lambda(D^{2}u^{*}))=\sigma_d(a)/\sigma_{d-k}(a)~~~~\mbox{in }\mathbb{R}^d\texttt{\symbol{'134}}\bar{B}_{1} \end{equation} with the boundary condition (\ref{fd6}) and the following prescribed asymptotic behavior at infinity: \begin{equation}\label{fd7'} \limsup\limits_{|p|\rightarrow\infty}|p|^{d_k^*(a)-2}\left|u^{*}(p)-\left(\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}-c\right)\right|<\infty. \end{equation} We construct the subsolutions of the form \begin{equation*} \underline{\omega}(p)=\Phi(r),~~~~r:=\sqrt{\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}}. \end{equation*} A straightforward calculation yields that for $i$, $j=1$, $\cdots$, $d$, \begin{align*} D_{ij}\underline{\omega}&=a_{i}r^{-1}\Phi'\delta_{ij}+r^{-2}(\Phi''-r^{-1}\Phi')(a_{i}p_{i})(a_{j}p_{j})\\ &=a_{i}h\delta_{ij}+r^{-1}h'(a_{i}p_{i})(a_{j}p_{j}), \end{align*} where $h:=\Phi'/r$. Denote $l=d-k$. By Proposition 1.2 in \cite{baolili}, we have that \begin{align*} \frac{\sigma_{d}(\lambda(D^{2}\underline{\omega}))}{\sigma_{l}(\lambda(D^{2}\underline{\omega}))}&=\frac{\sigma_{d}(a)}{\sigma_{l}(a)}\frac{h^{d}+r^{-1}h'h^{d-1}\sum\limits_{i=1}^{d}\frac{\sigma_{d-1;i}(a)a_{i}}{\sigma_{d}(a)}(a_{i}p_{i}^{2})}{h^{l}+r^{-1}h'h^{l-1}\sum\limits_{i=1}^{d}\frac{\sigma_{l-1;i}(a)a_{i}}{\sigma_{l}(a)}(a_{i}p_{i}^{2})}\\ &=\frac{\sigma_{d}(a)}{\sigma_{l}(a)}\frac{h^{d}+rh'h^{d-1}}{h^{l}+r^{-1}h'h^{l-1}\sum\limits_{i=1}^{d}\frac{\sigma_{l-1;i}(a)a_{i}}{\sigma_{l}(a)}(a_{i}p_{i}^{2})}. \end{align*} Let $\underline{t}_{0}(a):=0$ and \begin{equation*} \underline{t}_{l}(a):=\min\limits_{1\leq i\leq d}\frac{\sigma_{l-1;i}(a)a_{i}}{\sigma_{l}(a)},~~~~l=1,\cdots,d-1. \end{equation*} Note that $0\leq\underline{t}_{l}(a)\leq\frac{l}{d}<1$. We will consider the ODE \begin{equation}\label{chenglu'} \frac{h^{d}+rh'h^{d-1}}{h^{l}+\underline{t}_{l}(a)rh'h^{l-1}}=1. \end{equation} We rewrite the above equation into the following divergence form: \begin{equation}\label{chenglu} \left(r^{\frac{d-l}{1-\underline{t}_{l}(a)}}h^{\frac{d-l}{1-\underline{t}_{l}(a)}}\right)'=\left(r^{\frac{d-l}{1-\underline{t}_{l}(a)}}h^{\frac{d-l}{1-\underline{t}_{l}(a)}\underline{t}_{l}(a)}\right)'. \end{equation} It is easy to see that \begin{equation}\label{doujiali} d^*_k(a)=\frac{d-l}{1-\underline{t}_{l}(a)}. \end{equation} Indeed, let $\lambda_{i}:=1/a_{i}$, $i=1$, $\cdots$, $d$. We use $1/\lambda$ to denote the vector $a$. 
Then we have that \begin{equation*} \underline{t}_{l}(a)=\min\limits_{1\leq i\leq d}\frac{\sigma_{l-1;i}(\frac{1}{\lambda})\frac{1}{\lambda_{i}}}{\sigma_{l}(\frac{1}{\lambda})}=\min\limits_{1\leq i\leq d}\frac{\sigma_{k;i}(\lambda)}{\sigma_{d-1;i}(\lambda)\lambda_{i}}\frac{\sigma_{d}(\lambda)}{\sigma_{k}(\lambda)}=\min\limits_{1\leq i\leq d}\frac{\sigma_{k;i}(\lambda)}{\sigma_{k}(\lambda)}. \end{equation*} It follows that \begin{equation*} 1-\underline{t}_{l}(a)=\max\limits_{1\leq i\leq d}\left(1-\frac{\sigma_{k;i}(\lambda)}{\sigma_{k}(\lambda)}\right)=\max\limits_{1\leq i\leq d}\frac{\sigma_{k-1;i}(\lambda)\lambda_{i}}{\sigma_{k}(\lambda)}=\frac{k}{d_{k}^{*}(a)}, \end{equation*} that is (\ref{doujiali}). By (\ref{doujiali}), the equation (\ref{chenglu}) can be written as \begin{equation*} \left(r^{d^*_k(a)}h^{d^*_k(a)}\right)'=\left(r^{d^*_k(a)}h^{d^*_k(a)\underline{t}_{l}(a)}\right)'. \end{equation*} Integrating the above equation and dividing by $r^{-d_{k}^{*}(a)}$, we have that \begin{equation}\label{xi} \xi(h):=h^{d^*_k(a)}-h^{d^*_k(a)\underline{t}_{l}(a)}=C_{1}r^{-d^*_k(a)}, \end{equation} where $C_{1}$ is an arbitrary constant. It is easy to see that $\xi$ is strictly decreasing on $\left[0,\left(\underline{t}_{l}(a)\right)^{\frac{1}{d-l}}\right]$ and strictly increasing on $\left[\left(\underline{t}_{l}(a)\right)^{\frac{1}{d-l}},\infty\right)$. We take $C_{1}\geq0$. Since $\xi(0)=0$ and $\xi\rightarrow \infty$ as $h\rightarrow\infty$, then we can solve $h$ from (\ref{xi}) as $$h(r)=\xi^{-1}\left(C_{1}r^{-d^*_k(a)}\right),$$ where $\xi^{-1}$ denotes the inverse of $\xi$ on $\left[\left(\underline{t}_{l}(a)\right)^{\frac{1}{d-l}},\infty\right)$. Moreover, $h\geq1$ and $h'\leq0$. Recall that $h=\Phi'/r$. Let \begin{equation}\label{w} \underline{\omega}(p):=\Phi(r,\eta)=\int^{r}_{\eta}s\xi^{-1}\left(C_{1}s^{-d^*_k(a)}\right)ds+\eta^2\xi^{-1}(C_1\eta^{-d^*_k(a)}), \end{equation} where $\eta=\min\limits_{1\leq i\leq d}\sqrt{a}_{i}$. Now we prove that $\underline{\omega}$ is a strictly convex subsolution of (\ref{fd5-------}) satisfying (\ref{fd6}). By \eqref{chenglu'}, $0\leq\underline{t}_{l}(a)<1$ and $h\geq1$, we have that \begin{equation}\label{shengguo} h'=-\frac{h}{r}\frac{h^{d-1}-h^{l-1}}{h^{d-1}-\underline{t}_{l}(a)h^{l-1}}>-\frac{h}{r}. \end{equation} It follows from the above inequality, $0\leq \underline{t}_{l}(a)<1$, $h\geq1$ and $h'\leq0$ that \begin{align*} \sigma_{j}\left(\lambda(D^{2}\underline{\omega})\right)&=\sigma_{j}(a)\left(h^{j}+r^{-1}h'h^{j-1}\sum\limits_{i=1}^{d}\frac{\sigma_{j-1;i}(a)a_{i}}{\sigma_{j}(a)}(a_{i}p_{i}^{2})\right)\\ &\geq \sigma_{j}(a)(h^{j}+\underline{t}_{j}(a)rh'h^{j-1})\\ &\geq\sigma_{j}(a)h^{j-1}(h+rh')>0,~~~~j=1,\cdots,d, \end{align*} and \begin{eqnarray*} \frac{\sigma_{d}(\lambda(D^{2}\underline{\omega}))}{\sigma_{l}(\lambda(D^{2}\underline{\omega}))}&=&\frac{\sigma_{d}(a)}{\sigma_{l}(a)}\frac{h^{d}+r^{-1}h'h^{d-1}\sum\limits_{i=1}^{d}\frac{\sigma_{d-1;i}(a)a_{i}}{\sigma_{d}(a)}(a_{i}p_{i}^{2})}{h^{l}+r^{-1}h'h^{l-1}\sum\limits_{i=1}^{d}\frac{\sigma_{l-1;i}(a)a_{i}}{\sigma_{l}(a)}(a_{i}p_{i}^{2})}\\ &\geq & \frac{\sigma_{d}(a)}{\sigma_{l}(a)}\frac{h^{d}+rh'h^{d-1}}{h^{l}+\underline{t}_{l}(a)rh'h^{l-1}}=\frac{\sigma_{d}(a)}{\sigma_{l}(a)}. \end{eqnarray*} On the ellipse $E_{\eta}=\{p\in\RR^{d}:\sum\limits_{i=1}^{d}a_ip_i^2=\eta^2\}$, we have that \begin{equation*} \underline{\omega}-p\cdot D\underline{\omega}=\Phi-r\Phi'=0. 
\end{equation*} Since $$(\Phi-r\Phi')'=-r\Phi''=-r(h+rh')< 0,~~~~\forall~r\geq \eta,$$ in view of $E_{\eta}\subset B_1$, we have that $$\underline{\omega}-\partial\underline{\omega}/\partial n\leq 0\ \ \ \text{ on } \partial B_1,$$ which implies that $\underline{\omega}$ is a strictly convex subsolution of (\ref{fd5-------}) and (\ref{fd6}). To discuss the asymptotic behavior, we rewrite \eqref{w} as \begin{equation}\label{sub} \underline{\omega}(p)=\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}+\mu(C_{1})-\int_r^{\infty}s\left[\xi^{-1}(C_{1}s^{-d^*_k(a)})-1\right] ds, \end{equation} where \begin{equation*} \mu(C_{1}):=\int_{\eta}^{\infty}s\left[\xi^{-1}(C_{1}s^{-d^*_k(a)})-1\right] ds-\frac{\eta^2}{2}+\eta^2\xi^{-1}(C_1\eta^{-d^*_k(a)}). \end{equation*} Since $\xi^{-1}(0)=1$, we have that as $s\rightarrow \infty$, $$\xi^{-1}(C_{1}s^{-d^*_k(a)})-1=\xi^{-1}(C_{1}s^{-d^*_k(a)})-\xi^{-1}(0)=O(s^{-d^*_k(a)}).$$ Thus, if $d^*_k(a)>2$, we have that as $r\rightarrow \infty$, $$\int_r^{\infty}s\left[\xi^{-1}(C_{1}s^{-d^*_k(a)})-1\right] ds=O(r^{2-d^*_k(a)}).$$ Inserting the above equality into (\ref{sub}), we can obtain the asymptotic behavior of $\underline{\omega}$ at infinity: \begin{equation}\label{asyw} \underline{\omega}(p)=\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}+\mu(C_{1})+O(|p|^{2-d^*_k(a)}),~~~~\mbox{as }p\rightarrow\infty. \end{equation} Note that the condition $d_k^*(a)>2$ is equivalent to $$\underline{t}_{d-k}(a)>-(k-2)/2.$$ For $k\geq 2$, it obviously holds. For $k=1$, the above inequality becomes \begin{equation*} \sigma_{d-1;i}(a)<\frac{1}{2}\sigma_{d-1}(a),~~~~i=1,\cdots, d. \end{equation*} Note that $\mu(C_{1})$ is strictly increasing with respect to $C_{1}$, $\mu(0)=\eta^2/2$ and $\lim\limits_{C_{1}\rightarrow\infty}\mu(C_{1})=\infty$. Then we have that the range of $\mu$ for $C_1\geq 0$ is $\left[\frac{\eta^2}{2},\infty\right)$ . Therefore, we need to require that $-c\geq \eta^2/2=\frac{1}{2}\left(\min\limits_{1\leq i\leq d}a_{i}\right)$. The super solutions can be taken directly as \begin{equation}\label{sup} \bar{\omega}(p)=\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}-c, \end{equation} where $-c\geq\frac{1}{2}\left(\max\limits_{1\leq i\leq d}a_{i}\right)$. Indeed, it is easy to see that $\bar{\omega}$ is a strictly convex solution of (\ref{fd5-------}) satisfying (\ref{fd7'}). Moreover, we have that \begin{equation*} \bar{\omega}-\partial\bar{\omega}/\partial n=-c-\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}\geq -c-\frac{1}{2}\left(\max\limits_{1\leq i\leq d}a_{i}\right)\geq 0~~~~\mbox{on }\partial B_{1}. \end{equation*} \section{The Approximate Problem} For $0\leq t\leq 1$, let $a(t)$ be defined as in (\ref{at}) and let $\underline{\omega}_t$ be constructed as in \eqref{w} with $a$ replaced by $a(t)$. In this section, our goal is to solve (\ref{fd5a})-(\ref{fd7a}) for $t=1$. Since the problem is obviously solvable for $t=0$, by the continuity method, we need to establish the closedness and the openness of (\ref{fd5a})-(\ref{fd7a}). The closedness can be obtained by Lemma \ref{c0}-\ref{c2} below. \begin{lem}\label{c0} Let $R>1$. There exists a constant $C>0$ depending on $R$, $a$ and $c$ such that for any $0\leq t\leq 1$ and any $u_{R,t}^{*}\in C^{2}(\bar{B}_{R}\setminus B_{1})$ satisfying \eqref{fd5a}-\eqref{fd7a}, we have that \begin{equation}\label{c0estimate} |u_{R,t}^{*}|\leq C~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. 
\end{equation} \end{lem} \begin{proof} Let $\underline{\omega}_t$ and $\bar{\omega}_t$ be the sub and super solution constructed as in \eqref{w} and \eqref{sup} with $a$ replaced by $a(t)$. Then, by the maximum principle, we have that \begin{equation*} \underline{\omega}_t\leq u^{*}_{R,t}\leq \bar{\omega}_t~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} (\ref{c0estimate}) follows from the above inequality directly. \end{proof} In the following, we will derive the estimates of $Du_{R,t}^{*}$ and $D^{2}u^{*}_{R,t}$ on $\bar{B}_{R}\setminus B_{1}$, respectively. \begin{lem}\label{c1} Let $R>1$. There exists a constant $C>0$ depending on $R$, $a$ and $c$ such that for any $0\leq t\leq 1$ and any $u_{R,t}^{*}\in C^{2}(\bar{B}_{R}\setminus B_{1})$ satisfying \eqref{fd5a}-\eqref{fd7a}, we have that \begin{equation*} |Du_{R,t}^{*}|\leq C~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} \end{lem} \begin{proof} By the strict convexity of $u^{*}_{R,t}$, we have that \begin{equation*} \max\limits_{\bar{B}_R\setminus B_{1}}|Du_{R,t}^{*}|=\max\limits_{\partial B_R\cup\partial B_{1}}|Du_{R,t}^{*}|. \end{equation*} By (\ref{fd6a}) and Lemma \ref{c0}, we have that \begin{equation*} |\partial u^{*}_{R,t}/\partial n|=|u^{*}_{R,t}-(1-t)\psi_t|\leq \max\limits_{\partial B_{1}}|\underline{\omega}_t|+\max\limits_{\partial B_{1}}|\bar{\omega}_t|+\max\limits_{\partial B_{1}}|\psi_t|~~~~\mbox{on }\partial B_{1}. \end{equation*} By the maximum principle, Lemma \ref{c0} and the above estimate, we have that \begin{equation*} |Du^{*}_{R,t}|\leq |D\underline{\omega}_t|~~~~\mbox{on }\partial B_R. \end{equation*} Now it only remains to estimate the derivatives of $u_{R,t}^{*}$ along any tangential vector field on $\partial B_{1}$. Suppose that $T_{ij}:=p_i\frac{\partial}{\partial p_j}-p_j\frac{\partial}{\partial p_i}$ are the angular derivatives for $i\neq j$. Since any tangential vector on $\partial B_1$ can be expressed as a linear combination of $T_{ij}|_{\partial B_1}$ with bounded coefficients, we only need to estimate $T_{ij}u_{R,t}^*$. Without loss of generality, we only consider $T:=T_{12}$. By Lemma 2.1 in \cite{ITW}, differentiating (\ref{fd5a}) with respect to $T$, we have that \begin{equation*} F^{ij}(D^{2}u_{R,t}^{*})(Tu_{R,t}^{*})_{ij}=0~~~~\mbox{in }B_{R}\setminus\bar{B}_{1}, \end{equation*} where $F$ and $F^{ij}$ are defined as for any $d\times d$ real symmetric matrix $M=(M_{ij})$, \begin{eqnarray}\label{F} F(M):=\left(\frac{\sigma_{d}(\lambda(M)}{\sigma_l(\lambda(M))}\right)^{\frac{1}{d-l}} \text{ and }F^{ij}(M):=\frac{\partial F}{\partial M_{ij}},~~i,j=1,\cdots,d. \end{eqnarray} By the maximum principle, we have that \begin{equation*} |Tu_{R,t}^{*}|\leq \max\limits_{{\partial B_R}\cup\partial B_{1}}|Tu_{R,t}^{*}|~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} By \eqref{fd7a}, we have that \begin{equation}\label{chou1'} Tu_{R,t}^{*}=T\underline{\omega}_t~~~~\mbox{on }\partial B_{R}. \end{equation} A straightforward calculation shows that for any $C^{2}$ function $v$, $$p\cdot DTv=T(p\cdot Dv).$$ Taking $v=u^*_{R,t}$, we have that \begin{eqnarray}\label{comm} \frac{\partial }{\partial n}Tu^*_{R,t}=T\frac{\partial u_{R,t}^*}{\partial n}\ \ ~~~~\mbox{on }\partial B_{1}. \end{eqnarray} If there exists $p_{0}\in\partial B_{1}$ such that $Tu_{R,t}^{*}(p_{0})=\max\limits_{\partial B_{R}\cup\partial B_{1}}Tu_{R,t}^{*}$, then we have that $(Tu_{R,t}^{*})_{n}(p_{0})\leq0$. 
It follows that \begin{eqnarray}\label{chou2} Tu_{R,t}^{*}(p_{0})&=&T((u_{R,t}^{*})_{n}+(1-t)\psi_t)(p_{0})\\ & =& (Tu_{R,t}^{*})_{n}(p_{0})+(1-t)T\psi_t(p_{0})\nonumber\\ &\leq& 2\max\limits_{\partial B_{1}}|D\psi_t|,\nonumber \end{eqnarray} where we have used \eqref{comm} in the second equality. Analogously, if there exists $p_{0}\in\partial B_{1}$ such that $Tu_{R,t}^{*}(p_{0})=\min\limits_{\partial B_{R}\cup\partial B_{1}}Tu_{R,t}^{*}$, then we have that $(Tu_{R,t}^{*})_{n}(p_{0})\geq0$ and \begin{equation}\label{chou3} Tu_{R,t}^{*}(p_{0})\geq-2\max\limits_{\partial B_{1}}|D\psi_t|. \end{equation} Combining (\ref{chou1'}), (\ref{chou2}), (\ref{chou3}) together, we can obtain the desired estimate. \end{proof} \begin{lem}\label{c2} Let $u^{*}_{R,t}$ be the solution of \eqref{fd5a}-\eqref{fd7a}. Then there exists some constant $C>0$ such that \begin{equation*} |D^{2}u_{R,t}^{*}|\leq C~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}, \end{equation*} when $R$ is sufficiently large. \end{lem} \begin{proof} We will divide our proof into the following three steps. Step 1. We derive the double tangental derivatives on $\partial B_1$. In fact, we will prove that \begin{equation}\label{chou12} |T^{2}u_{R,t}^{*}|\leq C~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation} Indeed, by differentiating (\ref{fd5a}) with respect to $T$ twice, we have that \begin{equation*} \sum\limits_{i,j=1}^{d}F^{ij}(D^{2}u_{R,t}^{*})(T^{2}u_{R,t}^{*})_{ij}+\sum\limits_{i,j=1}^{d}\sum\limits_{p,q=1}^{d}F^{ij,pq}(D^{2}u_{R,t}^{*})(Tu_{R,t}^{*})_{ij}(Tu_{R,t}^{*})_{pq}=0, \end{equation*} where for any $d\times d $ real symmetric matrix $M$, $F(M)$, $F^{ij}(M)$ are defined as in \eqref{F} and \begin{equation*} F^{ij,pq}(M):=\frac{\partial^{2}F}{\partial M_{ij}\partial M_{pq}}(M),~~~~~~~~i,j,p,q=1,\cdots,d. \end{equation*} Note that the calculation of the above equality can be found in Lemma 2.1 of \cite{ITW}. By the concavity of $F$, we have that \begin{equation*} \sum\limits_{i,j=1}^{d}F^{ij}(D^{2}u_{R,t}^{*})(T^{2}u_{R,t}^{*})_{ij}\geq0~~~~\mbox{in }B_{R}\setminus\bar{B}_{1}. \end{equation*} It follows from the maximum principle, \begin{equation*} T^{2}u_{R,t}^{*}\leq\max\limits_{\partial B_{R}\cup\partial B_{1}}T^{2}u_{R,t}^{*}~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} By \eqref{fd7a}, we have that \begin{equation}\label{chou10} T^{2}u_{R,t}^{*}=T^2\underline{\omega}_t~~~~\mbox{on }\partial B_{R}. \end{equation} If there exists $p_{0}\in\partial B_{1}$ such that $T^{2}u_{R,t}^{*}(p_{0})=\max\limits_{\partial B_{R}\cup\partial B_{1}}T^{2}u_{R,t}^{*}$, then we have that $(T^{2}u_{R,t}^{*})_{n}(p_{0})\leq0$. It follows that \begin{eqnarray}\label{chou11} T^{2}u_{R,t}^{*}(p_{0})&=&T^{2}((u_{R,t}^{*})_{n}+(1-t)\psi_t)(p_{0})\\ &=&(T^{2}u_{R,t}^{*})_{n}(p_{0})+(1-t)T^{2}\psi_t(p_{0})\nonumber\\ &\leq& 4\max\limits_{\partial B_{1}}(|D\psi_t|+|D^{2}\psi|),\nonumber \end{eqnarray} where we have used \eqref{comm} twice in the second equality. Combining (\ref{chou10}) and (\ref{chou11}) together, we can obtain (\ref{chou12}). Step 2. We derive the double normal derivatives on $\partial B_1$. Let \begin{equation*} \phi_{R,t}^{*}:=p\cdot Du_{R,t}^{*}-2u_{R,t}^{*}. \end{equation*} A straightforward calculation gives that \begin{equation*} (\phi_{R,t}^{*})_{ij}=\sum\limits_{l=1}^{d}p_{l}(u_{R,t}^{*})_{ijl}. 
\end{equation*} It follows that \begin{equation*} \sum\limits_{i,j=1}^{d}F^{ij}(D^{2}u_{R,t}^{*})(\phi_{R,t}^{*})_{ij}=\sum\limits_{l=1}^{d}p_{l}\sum\limits_{i,j=1}^{d}F^{ij}(D^{2}u_{R,t}^{*})(u_{R,t}^{*})_{ijl}=0~~~~\mbox{in }B_{R}\setminus\bar{B}_{1}. \end{equation*} By the maximum principle, we have that \begin{equation*} \min\limits_{\partial B_{R}\cup\partial B_{1}}\phi_{R,t}^{*}\leq \phi_{R,t}^{*}\leq\max\limits_{\partial B_{R}\cup\partial B_{1}}\phi_{R,t}^{*}~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} By \eqref{fd7a}, we have that, on $\partial B_R$, \begin{eqnarray}\label{chou1} \phi_{R,t}^{*}\leq p\cdot D\underline{\omega}_t-2\underline{\omega}_t= 2c+O(|p|^{2-d^*_k(a(t))}), \end{eqnarray} for sufficiently large $R$. On $\partial B_{1}$, we have \begin{equation*} \phi_{R,t}^{*}=\partial u^{*}_{R,t}/\partial n-2u^{*}_{R,t}=-u_{R,t}^{*}-(1-t)\psi_t\geq-u_{R,t}^{*}\geq-\bar{\omega}_t\geq-\max\limits_{\partial B_{1}}\bar{\omega}_t>2c, \end{equation*} by using of the condition $c< -\frac{a_i}{2}$ for $i=1,\cdots, d$. Then for sufficiently large $R$, the maximum value of $\phi_{R,t}^{*}$ should be achieved on $\partial B_{1}$. It follows that $(\phi_{R,t}^{*})_{n}\leq0$ on $\partial B_1$, which implies that $$(u^*_{R,t})_{nn}\leq 2 (u^*_{R,t})_n\leq C\mbox{ on $\partial B_1$}. $$ Step 3. We finish the proof of this lemma. Since we have \begin{equation*} \sum\limits_{i,j=1}^{d}F^{ij}(D^{2}u_{R,t}^{*})(\Delta u_{R,t}^{*})_{ij}\geq0, \end{equation*} the maximum principle and the above two steps yield that \begin{equation*} \max\limits_{\bar{B}_{R}\setminus B_{1}}\Delta u_{R,t}^{*}=\max\limits_{\partial{B}_{R}\cup\partial B_{1}}\Delta u_{R,t}^{*}. \end{equation*} By the convexity of $u^*_{R,t}$, Step 1 and Step 2, we have the $C^2$ estimate on $\partial B_1$. For the $C^2$ boundary estimate on $\partial B_R$, we would like to use the same argument as Guan in \cite{Guan}. However, our equation is a Hessian quotient equation, which does not satisfy the assumption (1.7) in \cite{Guan}. Therefore, in order to obtain the $C^2$ boundary estimates, we need to give a different proof of Lemma 6.2 in \cite{Guan}, i.e., we need to construct a barrier function $\mathfrak{b}$ satisfying \begin{eqnarray}\label{mmm} \sum_{i,j}F^{ij}\mathfrak{b}_{ij}\leq -c(1+\sum_i F^{ii})\,\,\mbox{in $B_R\setminus B_1$}\mbox{ for some constant $c$}, \end{eqnarray} and \[\mathfrak{b}\geq 0\mbox {~in $B_R\setminus B_1$},\ \ \mathfrak{b}=0\,\,\mbox{on $ \partial B_R.$}\] We define a new barrier function: $$\mathfrak{b}=u^*_{R,t}-\bar{\omega}_t+C(R^2-|p|^2).$$ It is easy to see that $\mathfrak{b}\geq 0$ in $B_R\setminus B_1$ and $\mathfrak{b}=0$ on $\partial B_R$. Moreover, since $\sum_{i,j}F^{ij}(u^*_{R,t})_{ij}=F$, taking sufficiently large $C$, we can get \eqref{mmm}. We have obtained the desired estimate. \end{proof} We are in the position to solve our Dirichlet-Neumann problem on the sufficiently large annulus. \begin{lem}\label{app} For sufficiently large $R>1$, the following approximate problem on the annulus: \begin{equation}\label{appprob} \frac{\sigma_{d}(\lambda(D^{2}u^{*}_{R}))}{\sigma_{d-k}(\lambda(D^{2}u^{*}_{R}))}=\frac{1}{\binom{d}{k}}~~~~\mbox{in }B_{R}\texttt{\symbol{'134}}\bar{B}_{1} \end{equation} with the boundary conditions: \begin{equation}\label{appbound} u^{*}_{R}=\partial u^{*}_{R}/\partial n~~~~\mbox{on }\partial B_{1}\text{ and } u^{*}_{R}=\underline{\omega}_1~~~~\mbox{on }\partial B_{R} \end{equation} admits a unique strictly convex solution in $C^{2}(\bar{B}_{R}\setminus B_{1})$. 
\end{lem} \begin{proof} The closedness follows from Lemmas \ref{c0}-\ref{c2}. We only need to obtain the openness of the problem (\ref{fd5a})-(\ref{fd7a}). We let \begin{eqnarray} &&v=\frac{\partial u_{R,t}^{*}}{\partial t}, f=\frac{\partial}{\partial t} \left(\frac{\sigma_{d}(a(t))}{\sigma_{l}(a(t))}\right)^{1/(d-l)}, a_{ij}=F^{ij}(D^{2}u_{R,t}^{*}), \nonumber\\ &&\tilde{\psi}=-\psi_t+(1-t)\frac{\partial \psi_t}{\partial t}, \tilde{\varphi}=\frac{\partial\underline{\omega}_t}{\partial t}.\nonumber \end{eqnarray} Differentiating (\ref{fd5a})-(\ref{fd7a}) with respect to $t$, we have the linear problem on the annulus: \begin{equation}\label{fd5a''} \sum\limits_{i,j=1}^{d}a_{ij}v_{ij}=f~~~~\mbox{in }B_{R}\texttt{\symbol{'134}}\bar{B}_{1}, \end{equation} with the boundary condition \begin{equation}\label{fd6a''} v=\frac{\partial v}{\partial n}+\tilde{\psi}~~~~\mbox{on }\partial B_{1}, \text{ and } v=\tilde{\varphi}~~~~\mbox{on }\partial B_{R}.\end{equation} Since we could not find an appropriate reference for the existence of solutions to the problem (\ref{fd5a''})-(\ref{fd6a''}), we give a sketch of the proof here. We apply the continuity method to solve this problem. For any $0\leq s\leq 1$, we consider a family of uniformly elliptic equations: \begin{equation}\label{liuli} L_{s}v:=s\sum_{i,j=1}^da_{ij}v_{ij}+(1-s)\Delta v=f~~~~\mbox{in }B_{R}\setminus\bar{B}_{1} \end{equation} with the boundary conditions \eqref{fd6a''}. Here $\Delta$ is the Laplace operator. The closedness can be obtained as follows. By Theorem 6.30 in \cite{gt}, for any solution $v\in C^{2,\alpha}(\bar{B}_{R}\setminus B_{1})$ of \eqref{liuli} and \eqref{fd6a''}, we have \begin{equation*} \|v\|_{2,\alpha}\leq C(\|v\|_{0}+\|\tilde{\psi}\|_{1,\alpha}+\|\tilde{\varphi}\|_{2,\alpha}+\|f\|_{0,\alpha}). \end{equation*} Here $\|\cdot\|_{k,\alpha}$ is the $C^{k,\alpha}$ norm. The maximum principle also gives that \begin{equation*} \|v\|_{0}\leq C(\|\tilde{\psi}\|_{0}+\|\tilde{\varphi}\|_{0}+\|f\|_{0}). \end{equation*} Thus, $\|v\|_{2,\alpha}$ is bounded by some constant independent of $s$. For the openness, we let $X_{1}:=C^{2,\alpha}(\bar{B}_{R}\setminus B_{1})$ and $X_{2}=C^{2,\alpha}(\partial B_{R})\times C^{1,\alpha}(\partial B_{1})$ with the norm $\|(\tilde{\varphi},\tilde{\psi})\|:=\|\tilde{\varphi}\|_{2,\alpha}+\|\tilde{\psi}\|_{1,\alpha}$. Then it is easy to see that the map $L_{s}:X_{1}\rightarrow X_{2}$ is one-to-one and onto if and only if the problem (\ref{liuli}), (\ref{fd6a''}) can be uniquely solved. Thus, we only need to solve (\ref{liuli})-(\ref{fd6a''}) for $s=0$, which is equivalent to \begin{equation}\label{liuma} \Delta u=f~~~~\mbox{in }B_{R}\setminus \bar{B}_{1} \end{equation} with the boundary conditions \begin{equation}\label{liuma-} u=\partial u/\partial n+\Phi~~~~\mbox{on }\partial B_{1} \text{ and } u=0~~~~\mbox{on }\partial B_{R}. \end{equation} We now use $L^2$ theory to solve the above problem. We say that $u$ is a weak solution of (\ref{liuma})-(\ref{liuma-}) if $u\in H_{0}^{1}(B_{R})$ satisfies \begin{equation*} B[u,\xi]:=\int_{B_{R}\setminus \bar{B}_{1}}D\xi\cdot Du+\int_{\partial B_{1}}\xi u=\int_{\partial B_{1}}\xi \Phi-\int_{B_{R}\setminus \bar{B}_{1}}\xi f. \end{equation*} Note that \begin{equation*} B[u,u]\geq \frac{\lambda}{2}\int_{B_{R}\setminus \bar{B}_{1}}|Du|^{2}-c\lambda\int_{B_{R}\setminus \bar{B}_{1}}u^{2}~~~~\mbox{for some constants $\lambda$ and $c$}. \end{equation*} Then the Lax-Milgram theorem will give the existence of the weak solution.
Moreover, by the regularity results in \cite{lie}, the weak solution is in fact the classical solution. \end{proof} \section{The Local Estimates} For sufficiently large $R$, let $u^{*}_{R}$ be the solution of \eqref{appprob} and \eqref{appbound} obtained from Lemma \ref{app}. In this section, we will establish the local estimates of $u^{*}_{R}$ with respect to $R$ to obtain the existence of the solution of \eqref{fd5}-\eqref{fd7}. The local $C^0$ estimate can be stated as follows. \begin{lem}\label{c01} Let $R>R_{0}>1$. There exists a constant $C$ depending on $R_{0}$, $a$ and $c$ such that for any $u^{*}_{R}\in C^{2}(\bar{B}_{R}\setminus B_{1})$ satisfying \eqref{appprob} and \eqref{appbound}, we have that \begin{equation}\label{c0estimate---} |u^*_R|\leq C \text{ on } \bar{B}_{R_0}\setminus B_{1}. \end{equation} \end{lem} \begin{proof} Let $\underline{\omega}_1$ and $\bar{\omega}_1$ be the sub and super solution constructed as in \eqref{w} and \eqref{sup}. Then, by the maximum principle, we have that \begin{equation*} \underline{\omega}_1\leq u^{*}_{R}\leq \bar{\omega}_1~~~~\mbox{on }\bar{B}_{R}\setminus B_{1}. \end{equation*} (\ref{c0estimate---}) follows from the above inequality directly. \end{proof} The local $C^1$ estimate can be stated as follows. \begin{lem}\label{c11} Suppose $R_0>0$ is a given sufficiently large constant. For any $R>R_0+10$, let $u^{*}_{R}$ be the solution of \eqref{appprob} and \eqref{appbound}. Then there exists a constant $C>0$ depends only on $R_0$ such that \begin{equation*} |Du_{R}^{*}|\leq C \text{ on } \bar{B}_{R_0}\setminus B_{1}. \end{equation*} \end{lem} \begin{proof} Let $r=|p|$ be the radius of the points. By the convexity, we have $\frac{\partial^2 u^*_{R}}{\partial r^2}>0$. Therefore, the minimum value of $\frac{\partial u^*_{R}}{\partial r}$ can be achieved on $\partial B_1$. Thus $\frac{\partial u^*_{R}}{\partial r}$ has a uniform lower bound in $B_R\setminus B_1$ by Lemma \ref{c01}. Suppose that the set $\left\{ p\in B_{R_0+10}\setminus \bar{B}_1:\frac{\partial u^*_{R}}{\partial r} (p)>0\right\}$ is non empty. We consider the test function $$\phi=((R_0+10)^2-|p|^2) \frac{\partial u^*_{R}}{\partial r} e^{u^*_{R}}. $$ Assume $\phi$ achieves its maximum value at some point $p_0$. If $p_0\in \partial B_1$, by \eqref{appbound} and Lemma \ref{c01}, we get an upper bound of $\phi$. If $p_0\in B_{R_0+10}\setminus\bar{B}_1$, then at $p_0$, we have $\frac{\partial \phi}{\partial r}=0$, which gives $$-2r\frac{\partial u^*_{R}}{\partial r}+((R_0+10)^2-|p|^2) \frac{\partial^2 u^*_{R}}{\partial r^2}+((R_0+10)^2-|p|^2) \left(\frac{\partial u^*_{R}}{\partial r}\right)^2=0.$$ Thus, we get $$((R_0+10)^2-|p|^2)\frac{\partial u^*_{R}}{\partial r}\leq 2r\leq 2R_0+20.$$ It gives an upper bound of $\phi$. Thus, we get the uniform upper and lower bound of $\frac{\partial u^*_{R}}{\partial r}$ on $\bar{B}_{R_0+9}\setminus B_1$. Let us estimate the angular derivatives. We consider the function $$\psi(p,v)=((R_0+5)^2-|p|^2)v\cdot Du^*_{R}(p) e^{\epsilon u^*_{R}(p)}.$$ where $v$ is some vector and $\epsilon>0$ is a small undetermined constant. Define the set $$S=\{(p,v); p\in B_{R_0+5}\setminus \bar{B}_1, v \in T_{p}B_{|p|}, \text{ and } |v|=|p| \}$$ Here $B_{|p|}$ denotes the ball centered at origin with radius $|p|$ and $T_{p}B_{|p|}$ is its tangential space at $p$. Assume $\psi(p,v)$ achieves its maximum value at point $(p_0,v_0)$ on the closure of $S$. We rotate the coordinate to satisfy $p_0=(|p_0|,0,\cdots,0)$ and $p_2$ parallel to $v_0$. 
Thus, denote $T=p_1\frac{\partial}{\partial p_2}-p_2\frac{\partial }{\partial p_1}$, then we have the test function $$\psi_1=((R_0+5)^2-|p|^2)Tu^*_{R} e^{\epsilon u^*_{R}}$$ also achieves its maximum value at $p_0$ and $\psi_1(p_0)=\psi(p_0,v_0)$. If $p_0\in \partial B_1$, we have $\frac{\partial \psi_1}{\partial n}\leq 0$, which gives \begin{eqnarray*} 0&\geq& -2Tu^*_{R}+((R_0+5)^2-1)\frac{\partial Tu^*_{R}}{\partial n}+((R_0+5)^2-1)\epsilon Tu^*_{R}\frac{\partial u^*_{R}}{\partial n}\\ &=&((R_0+5)^2-3) Tu^*_{R} +((R_0+5)^2-1)\epsilon u^*_{R} Tu^*_{R}. \end{eqnarray*} Since by Lemma \ref{c01}, we have the uniform lower bound of $u^*_{R}$. If $\epsilon$ is sufficiently small, the above inequality gives a contradiction. Thus, $p_0\in B_{R_0+5}\setminus \bar{B}_1$. At $p_0$, we have $\frac{\partial \psi_1}{\partial p_2}=0$, which implies $$0=((R_0+5)^2-|p|^2)\left(|p_0|\frac{\partial^2u^*_{R}}{\partial p_2^2}-\frac{\partial u^*_{R}}{\partial p_1}\right)+((R_0+5)^2-|p|^2)\frac{\epsilon}{|p_0|} (Tu^*_{R})^2.$$ Thus, we get $$Tu^*_{R}\leq C\sqrt{\frac{\partial u^*_{R}}{\partial r}}.$$ Therefore, we get an uniform upper bound of $\psi_1$, which implies an uniform upper bound of the angular derivatives of $u^*_{R}$ in $B_{R_0}\setminus B_1$. Thus, we get the uniform bound of $|Du^*_{R}|$ in $B_{R_0}\setminus B_1$. \end{proof} Let us consider the local $C^2$ estimate. We give the asymptotic behavior of $D^2u^*_R$ at first, then a similar argument as Lemma \ref{c2} will give the local $C^2$ estimate. However, the difficulty is lack of the Pogorelov type estimate for Hessian quotient equations, if we would like to generalize the argument of Caffarelli-Li \cite{cl2003} for Monge-Amp\`{e}re equations. To overcome the difficulty, our strategy is to consider the asymptotic behavior of $D^2u_R$, instead of $D^2u^*_R$, where $u_R$ is Legendre transform of $u^*_R$. By Lemma \ref{c01} and the asymptotic behaviors of $\bar{\omega}_1$ and $\underline{\omega}_1$, there exist two sufficiently large positive constants $r_{0}, C_{0}$ depending on $d$ and $a$, such that for any $R>r_{0}$, we have that \begin{equation}\label{fnfn} \left|u_{R}^{*}-\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}+c\right|\leq C_{0}|p|^{2-d^*_k(a)}~~~~\mbox{on }\bar{B}_{R}\setminus B_{r_{0}}. \end{equation} Let $u_R$ be the Legendre transform of $u^*_R$ and $\Omega_R=Du_{R}^{*}(B_R\setminus \bar{B}_1)$. Since $Du^*_R$ is a diffeomorphism, we can denote $\Gamma_1, \Gamma_2$ to be the exterior boundary and interior boundary of $\Omega_R$. Then $u_{R}$ satisfies that \begin{equation*} \sigma_{k}(\lambda(D^{2}u_{R}))=\binom{d}{k}~~~~\mbox{in }\Omega_R. \end{equation*} Then, we have \begin{lem}\label{gradient} There exist two uniform constants $\theta,\tilde{C}$ such that \begin{equation}\label{bound} \Gamma_1\subset \mathbb{R}^d\setminus \bar{B}_{\theta R},\text{ and } \ \ \Gamma_2\subset B_{\tilde{C}}. \end{equation} Moreover, for sufficiently large $R$, on the annulus $\bar{B}_{\theta R}\setminus B_{\tilde{C}}\subset \Omega_R$, we have \begin{equation}\label{fnfn'} \left |u_{R}-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{1}{a_{i}}x_{i}^{2}-c\right|\leq C_{0}|x|^{2-d^*_k(a)}. 
\end{equation} \end{lem} \begin{proof} For sufficiently large $R$, on the boundary $\partial B_R$, by the convexity, it is clear that \begin{eqnarray} \frac{\partial u^*_R}{\partial n}&\geq& \frac{\min\limits_{\partial B_R}u^*_R-\max\limits_{\partial B_1}u^*_{R}}{R-1}\geq\frac{\min\limits_{\partial B_R}\underline{\omega}_1-\max\limits_{\partial B_1}\bar{\omega}_1}{R-1}\\ &\geq&\frac{1}{R-1}\left[\frac{1}{2}\sum_{i=1}^da_ip_i^2-c-C\right]\nonumber\\ &\geq&\frac{1}{R-1}\left[\frac{R^2}{2}\min_{1\leq i\leq d} a_i-c-C\right]\nonumber\\ &\geq& \theta R,\nonumber \end{eqnarray} where $\theta$ is a small positive constant. Therefore, $|Du^*_R|\geq \theta R$ on $\partial B_R$, which implies the first result of \eqref{bound}. The second result of \eqref{bound} is a corollary of Lemma \ref{c11} if we take $R_0=2$ there. It is obvious that $$\bar{\omega}^*_1(x)=x\cdot p-\bar{\omega}_1(p)=\frac{1}{2}\sum_{i=1}^d\frac{x_i^2}{a_i}+c,$$ and it is defined on the whole $\mathbb{R}^n$. Then, let's consider $\underline{\omega}^*_1$, the Legendre transform of $\underline{\omega}_1$. By \eqref{w}, $\underline{\omega}_1$ is defined on $r=\sqrt{\sum_ia_ip_i^2}\geq \eta$. Then we have $$x_i=D_i\underline{\omega}_1=a_ip_i\xi^{-1}\left(C_1r^{-d^*_k(a)}\right)=a_ip_ih(r),$$ which implies \begin{equation}\label{px} \sum_{i=1}^d\frac{x_i^2}{a_i}=r^2h^2(r). \end{equation} Here $h(r)=\xi^{-1}\left(C_1r^{-d^*_k(a)}\right)$. Furthermore, the derivative of the function $r^2h(r)$ is $$2rh(r)+r^2h'(r)\geq rh(r)>0,$$ where we have used \eqref{shengguo}. Thus, the range of the function $r^2h(r)$ is $[\eta^2h(\eta),\infty)$ in view of $h\geq 1$. Therefore, we known that $$D\underline{\omega}_1= \mathbb{R}^d\setminus \left\{x\in\mathbb{R}^n; \sum_i\frac{x_i^2}{a_i}\leq \eta^2h(\eta)\right\}.$$ Thus, $\underline{\omega}_1^*$ can be defined on $\mathbb{R}^d\setminus B_{\tilde{C}}$, for $\tilde{C}>(\min_{i} a_i) \eta^2h(\eta)$. In view of \eqref{px}, we know that, $$r^2\leq \sum_{i=1}^d\frac{x_i^2}{a_i}\leq r^2h^2(\eta).$$ Combining with \eqref{asyw}, we get $$\underline{\omega}_1^*=\frac{1}{2}\sum_{i=1}^d\frac{x_i^2}{a_i}+c+O(|x|^{2-d^*_k(a)}), \text{ as } |x|\rightarrow+\infty.$$ At last, we prove, on $\bar{B}_{\theta R}\setminus B_{\tilde{C}}$, $$\bar{\omega}_1^*\leq u_R\leq \underline{\omega}_1^*.$$ We only prove the first inequality, and the second one is same. For any $x=D\bar{\omega}_1(p_1)=Du^*_R(p_2)\in\bar{B}_{\theta R}\setminus B_{\tilde{C}}$, we have \begin{eqnarray} \bar{\omega}_1^*(x)-u_R(x)&=&x\cdot p_1-\bar{\omega}_1(p_1)-x\cdot p_2+u^*_R(p_2)\nonumber\\ &\leq& x\cdot (p_1-p_2)-u^*_R(p_1)+u^*_R(p_2)\leq 0\nonumber. \end{eqnarray} Here the last inequality comes from the strict convexity of $u^*_R$. Thus, by the asymptotic behavior of $\underline{\omega}_1^*, \bar{\omega}_1^*$, we obtain \eqref{fnfn'}. \end{proof} Now, we can derive the asymptotic behavior of $Du_{R}^{*}$ and $D^2u^*_R$. \begin{lem}\label{cl} There exist two sufficiently large constants $r_{1}>1$ and $C_{1}>0$ such that for any $\theta R>2r_1$ and $i$, $j=1$, $\cdots$, $d$, we have \begin{equation}\label{clc} |D_{i}u_{R}^{*}-a_{i}p_{i}|\leq C_{1}|p|^{1-d_k^*(a)} \text{ and }\ \ |D_{ij}u_{R}^{*}-a_{i}\delta_{ij}|\leq C_{1}|p|^{-d_k^*(a)} \end{equation} for $r_{1}\leq|p|<\theta R/2$. Here $\theta$ is the constant given in Lemma \ref{gradient}. \end{lem} \begin{proof} We follow the proof of Lemma 3.3 in Bao-Wang \cite{BaoW}. 
For $s>0$, define $$D_s=\left\{y\in\mathbb{R}^d: \frac{1}{2}\sum_{i=1}^d\frac{y_i^2}{a_i}<s\right\}.$$ For sufficiently large $R$, by Lemma \ref{gradient}, $u_R$ can be defined on the annulus $\bar{B}_{\theta R}\setminus B_{\tilde{C}}$. For $x\in \mathbb{R}^d$, $|x|\in[2\tilde{C},\frac{2}{3}\theta R]$, we let $r=(\max_i a_i)^{-1/2}|x|$ and \begin{equation*} \eta_{r}(y):=\frac{16}{r^2}u_{R}\left(x+\frac{r}{4}y\right),~~~~y\in D_2. \end{equation*} It is easy to see that $\eta_{r}$ satisfies \begin{equation*} \sigma_{k}(\lambda(D^{2}\eta_{r}))=\binom{d}{k}, ~~D^{2}\eta_{r}>0,~~~~\mbox{in }D_{2}. \end{equation*} Let \begin{equation*} w_{r}(y):=\eta_{r}(y)-\frac{8}{r^2}\sum\limits_{i=1}^{d}\frac{1}{a_i}\left(x_{i}+\frac{r}{4}y_{i}\right)^{2}-\frac{16}{r^2}c,\end{equation*} and $$\bar{\eta}_r(y)=w_r(y)+\frac{1}{2}\sum_{i=1}^d\frac{y^2_i}{a_i}.$$ It is clear that \begin{equation*} \sigma_{k}(\lambda(D^{2}\bar{\eta}_{r}))=\binom{d}{k}. \end{equation*} For $M\leq 2$, let \begin{equation*} \Omega_{M,r}:=\{y\in D_2; \bar{\eta}_{r}(y)<M\}. \end{equation*} By (\ref{fnfn'}), we have, for $y\in D_2$, $$\left|\bar{\eta}_r(y)-\frac{1}{2}\sum_{i=1}^d\frac{y^2_i}{a_i}\right|=|w_r(y)|\leq \frac{C}{r^2}\left|x+\frac{r}{4}y\right|^{2-d^*_k(a)}\leq Cr^{-d^*_k(a)}$$ and $|\eta_r|\leq C$. Thus, for sufficiently large $r_1$, if $r>r_1$, $\Omega_{1.5,r}\subset D_{1.6}$. Applying the interior gradient estimate in \cite{cw2001} to $\bar{\eta}_r$ in $D_2$, we have that \begin{equation*} \|D\bar{\eta}_{r}\|_{L^{\infty}(D_{1.6})}\leq C. \end{equation*} Applying the Pogorelov type second order derivative estimate in \cite{cw2001} to $\bar{\eta}_r$ in $\Omega_{1.5,r}$, we have that \begin{equation*}\label{dn} (\bar{\eta}_r(y)-1.5)^4|D^{2}\bar{\eta}_{r}(y)|\leq C, \end{equation*} which implies $$\|D^2\bar{\eta}_r\|_{L^{\infty}(\Omega_{1.2,r})}\leq C.$$ Then $w_{r}$ satisfies that \begin{equation*} \sum_{i,j=1}^d a^{ij}(w_{r})_{ij}=0,~~~~\mbox{in }D_{2}, \end{equation*} where $$a^{ij}=\int^1_0\sigma_k^{ij}(\lambda(A+sD^2w_r))ds.$$ Since $A+sD^2w_r=(1-s)A+sD^2\bar{\eta}_r$ and both $(\sigma_k^{ij}(\lambda(A)))$ and $(\sigma_k^{ij}(\lambda(D^2\bar{\eta}_r)))$ have uniform lower and upper bounds, the matrix $(a^{ij})$ also has uniform lower and upper bounds, \begin{equation*} \frac{I}{C}<(a^{ij})<CI,~~~~\mbox{in }D_{1}. \end{equation*} By the Schauder theory, we have \begin{equation*} |D^{m}w_{r}(0)|\leq C\|w_{r}\|_{L^{\infty}(D_{1})}\leq Cr^{-d^*_k(a)}, \end{equation*} which implies that \begin{equation}\label{uRasy} \left|D^{m}\left(u_{R}-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{1}{a_{i}}x_{i}^{2}-c\right)\right|\leq C|x|^{2-d^*_k(a)-m}, \end{equation} for $(\max_i a_i)^{1/2}r_1\leq |x|\leq \frac{2}{3} \theta R$. By \eqref{uRasy}, for $m=1$, we have $$p_i=D_iu_R=\frac{x_i}{a_i}+O(|x|^{1-d^*_k(a)}),$$ for sufficiently large $|x|$, which implies the first inequality of \eqref{clc}, if $r_1$ is sufficiently large. For $m=2$, since we know that $(D^2u_R)=(D^2u^*_R)^{-1}$, we have $$|(D^2u^*_R)^{-1}-A|\leq C|p|^{-d^*_k(a)}.$$ This gives the bound of $D^2u^*_R$, which implies the second inequality of \eqref{clc} for sufficiently large $r_1$. \end{proof} We are in the position to give the local $C^2$ estimate. \begin{lem}\label{localC2} Suppose $R_0>0$ is a given sufficiently large constant. For any $\frac{\theta R}{2}>R_0+10$, let $u^{*}_{R}$ be the solution of (\ref{appprob}) and (\ref{appbound}), where $\theta$ is the constant given in Lemma \ref{gradient}.
Then there exists a constant $C$ depending only on $R_0$ such that \begin{equation*} |D^2u_{R}^{*}|\leq C \text{ on } \bar{B}_{R_0}\setminus B_{1}. \end{equation*} \end{lem} \begin{proof} Using Lemma \ref{cl}, we only need the $C^2$ estimate on $\bar{B}_{r_1}\setminus B_1$. We use the same argument as in Lemma \ref{c2}. The proofs of Step 1 and Step 3 are the same as in Lemma \ref{c2}; therefore, we only need to reprove Step 2. Let \begin{equation*} \phi_{R}^{*}:=p\cdot Du_{R}^{*}-2u_{R}^{*}. \end{equation*} The same argument as in Step 2 shows that \begin{equation*} \phi_{R}^{*}\leq\max\limits_{\partial B_{r_1}\cup\partial B_{1}}\phi_{R}^{*}~~~~\mbox{on }\bar{B}_{r_1}\setminus B_{1}. \end{equation*} By \eqref{fnfn'} and \eqref{clc}, we have that, on $\partial B_{r_1}$, \begin{eqnarray} \phi_{R}^{*}= 2c+O(|p|^{2-d^*_k(a)}),\nonumber \end{eqnarray} for sufficiently large $r_1$. On $\partial B_{1}$, we still have \begin{equation*} \phi_{R}^{*}=\partial u^{*}_{R}/\partial n-2u^{*}_{R}=-u_{R}^{*}\geq-\bar{\omega}_1\geq-\max\limits_{\partial B_{1}}\bar{\omega}_1>2c, \end{equation*} by the condition $c< -\frac{a_i}{2}$ for $i=1,\cdots, d$. Then for sufficiently large $r_1$, the maximum of $\phi_{R}^{*}$ must be achieved on $\partial B_{1}$. It follows that $(\phi_{R}^{*})_{n}\leq0$, which implies, on $\partial B_1$, $$(u^*_{R})_{nn}\leq 2 (u^*_{R})_n\leq C. $$ \end{proof} Combining the local $C^0$-$C^2$ estimates with the asymptotic behavior \eqref{fnfn'}, and using a diagonal subsequence argument, we obtain a unique solution $u^*$ to the dual problem \eqref{fd5}-\eqref{fd7}. The uniqueness comes from the maximum principle. Suppose $u$ is the Legendre transform of $u^*$. Then, $u$ should satisfy $\sigma_k(\lambda(D^2u))=\binom{d}{k}$. Let $D=Du^*(\mathbb{R}^d\setminus \bar{B}_1)$, which is an open set. Let $u^*_R=u^*|_{B_R}$ for sufficiently large $R$. Then, using the same argument, we can also prove that Lemma \ref{gradient} holds for $u^*_R$, namely $$Du^*_R(\partial B_R)\subset \mathbb{R}^d\setminus \bar{B}_{\theta R}, \text{ and } Du^*_R(\partial B_1)\subset B_{\tilde{C}}.$$ Here $\theta, \tilde{C}$ are two constants not depending on $R$. Since $R$ can be taken arbitrarily large, we have $\mathbb{R}^d\setminus \bar{B}_{\tilde{C}}\subset D$. Moreover, $Du^*(\partial B_1)$ is the interior boundary of $D$. Let $\Omega$ be the domain enclosed by $Du^*(\partial B_1)$. Then, we have $\partial\Omega=Du^*(\partial B_1)$ and $D=\mathbb{R}^d\setminus \bar{\Omega}$. By \eqref{fd6}, $u=0$ on $\partial\Omega$. Since $u$ is a strictly convex function, we get the strict convexity of $\Omega$. Again using Lemma \ref{gradient}, we further have the asymptotic behavior of $u$ by \eqref{fnfn'}. Moreover, by Lemma 3.3 in \cite{BaoW}, we can conclude that the decay rate for $u$ approaching the quadratic polynomial is \eqref{fd4}. Finally, we prove that $\frac{\partial u}{\partial n}=1$ on $\partial \Omega$. Since $1=|Du|=|u_n|$ on $\partial \Omega$, we have $u_n=\pm 1$. Since $u=0$ on $\partial\Omega$, if $u_n=-1$, one can see that $u<0$ near $\partial\Omega$. By the asymptotic behavior \eqref{fd4'}, $u>0$ at infinity. Thus, $u$ would achieve a negative minimum value at some point $P$ in $D$. Thus, we would have $Du(P)=0$, which contradicts $Du(D)=\mathbb{R}^d\setminus \bar{B}_1$. Therefore, we get $u_n=1$ on $\partial \Omega$. Thus, we have proved that $u$ satisfies \eqref{baozhang3}, \eqref{baozhang2} and \eqref{fd4'}. This completes the proof of Theorems \ref{cases:10} and \ref{cases:20} and Remark \ref{sm}.
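Before turning to the radially symmetric solutions, let us record, for the reader's convenience, the elementary identity behind the passage between the equation for $u^{*}$ and the equation for $u$ used above; this is only a standard side remark and is not needed elsewhere. If $M$ is a positive definite symmetric $d\times d$ matrix with eigenvalues $\lambda_{1},\cdots,\lambda_{d}$, then the eigenvalues of $M^{-1}$ are $1/\lambda_{1},\cdots,1/\lambda_{d}$ and
\begin{equation*}
\sigma_{k}(\lambda(M^{-1}))=\sum_{1\leq i_{1}<\cdots<i_{k}\leq d}\prod_{j=1}^{k}\frac{1}{\lambda_{i_{j}}}=\frac{\sigma_{d-k}(\lambda(M))}{\sigma_{d}(\lambda(M))}.
\end{equation*}
Taking $M=D^{2}u^{*}$ and using $D^{2}u=(D^{2}u^{*})^{-1}$, the equation $\frac{\sigma_{d}(\lambda(D^{2}u^{*}))}{\sigma_{d-k}(\lambda(D^{2}u^{*}))}=\frac{1}{\binom{d}{k}}$ in \eqref{appprob} is equivalent to $\sigma_{k}(\lambda(D^{2}u))=\binom{d}{k}$, which is exactly the relation between $u^{*}_{R}$ and $u_{R}$, and between $u^{*}$ and $u$, used in the argument above.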
\section{Radially symmetric solutions} We firstly give the \begin{proof}[proof of Theorem \ref{cases:80}] Without loss of generality, we may assume that $b=0$. For the existence and non-existence part, we can assume that $\Omega$ is a ball centered at the origin with radius $r_{0}>0$. We will construct the radially symmetric solution $u(x)=u(r)$ with $r=|x|$ of the problem (\ref{baozhang3})-(\ref{ab}). A straightforward calculation shows that $h=u'/r$ satisfies the ODE \begin{equation*}\label{ODE} h^k+\frac{k}{d}rh'h^{k-1}=1,~~~~\forall~r>r_{0}. \end{equation*} Then the solution of the above equation with boundary conditions $u(r_{0})=0$ and $u'(r_{0})=1$ is \begin{equation}\label{radialsol} u(r)=\int_{r_0}^{r}s\left(1+Cs^{-d}\right)^{1/k}ds,~~~~\forall~r\geq r_{0}, \end{equation} where $C=C(r_{0})=r_0^{d-k}-r_0^d$. Moreover, we have that \begin{equation}\label{app2} u(r)=\frac{1}{2}r^{2}+\mu(r_0)+O(r^{2-d}),~~~~\mbox{as $r\rightarrow\infty$}, \end{equation} where \begin{equation}\label{hailizi} \mu(r_0)=-\frac{1}{2}r_{0}^{2}+\int_{r_0}^{\infty}s\left(\left(1+Cs^{-d}\right)^{1/k}-1\right)ds. \end{equation} The strict convexity of $u$ gives $u''>0$, which implies that $$r^d-\frac{d-k}{k}C>0,~~~~\forall~r\geq r_0.$$ Thus, we only need $r^d_{0}-\frac{d-k}{k}C>0$, which gives $r_0>r_{2}:=\left(\frac{d-k}{d}\right)^{1/k}$. Now we discuss the range of $\mu(r_0)$ for $r_0>r_{2}$. It is obvious that $\mu\rightarrow -\infty$ as $r_0\rightarrow \infty$. The derivative of $\mu$ is $$\mu'(r_0)=-1+\frac{d}{k}r_0^{d-1}(r_{2}^{k}r_0^{-k}-1)\int^{\infty}_{r_0}s^{1-d}(1+Cs^{-d})^{1/k-1}ds.$$ For $k=d$, it is clear that $r_2=0$ and $\mu'<0$. Thus, $$\mu(r_0)< \mu(0)=\int_{0}^{\infty}s((1+s^{-d})^{1/d}-1)ds.$$ For $1\leq k\leq d-1$, by the change of variable $s=r_0\tau$, $\mu$ and $\mu'$ can be written as $$\mu(r_0)=r_0^2\left[-\frac{1}{2}+\int^{\infty}_1\tau\left((1+(r_0^{-k}-1)\tau^{-d})^{1/k}-1\right)d\tau\right],$$ and $$\mu'(r_0)=-1+\frac{d}{k}r_0(r_2^k r_0^{-k}-1)\int^{\infty}_{1}\tau^{1-d}(1+(r_0^{-k}-1)\tau^{-d})^{1/k-1}d\tau.$$ Since $r_2<1$, we have that $\mu'(r_{0})<0$ for any $r_{0}>r_{2}$. Thus, $$\mu(r_0)< \mu(r_2)=\left(\frac{d-k}{d}\right)^{2/k}\left[-\frac{1}{2}+\int^{\infty}_1\tau\left(\left(1+\frac{k}{d-k}\tau^{-d}\right)^{1/k}-1\right)d\tau\right].$$ For the uniqueness part, if $(\Omega_{1}, u_{1})$ and $(\Omega_{2}, u_{2})$ are two pairs of domains and strictly convex solutions of the problem (\ref{baozhang3})-(\ref{ab}), then $u^{*}_{1}$ and $u_{2}^{*}$, the Legendre transforms of $u_{1}$ and $u_{2}$, must be the same by the maximum principle. Thus, we obtain that $\Omega_{1}=\Omega_{2}$ and $u_{1}\equiv u_{2}$. \end{proof} We secondly give the \begin{proof}[proof of Remark \ref{pangxie}] Without loss of generality, we may assume that $b=0$. Suppose that $\Omega$ is a ball centered at the origin with radius $r_{0}>0$. We will construct the radially symmetric solution $u(x)=u(r)$ with $r=|x|$ of the problem (\ref{baozhang3})-(\ref{ab}). A straightforward calculation shows that $u$ is of the form (\ref{radialsol}). Moreover, $u$ satisfies the asymptotic behavior (\ref{app2}) with $\mu(r_{0})$ defined as in (\ref{hailizi}). Now we need to derive the range of $\mu(r_{0})$ for any $r_{0}>0$. For $k=1$, we have that \begin{equation*} \mu(r_{0})=-\frac{1}{2(d-2)}(r_{0}-1)^{2}+\frac{1}{2(d-2)}. \end{equation*} Then it is easy to see that \begin{equation*} \mu(r_{0})\leq\hat{c}(1,d):=\max\limits_{[0,\infty)}\mu=\mu(1)=\frac{1}{2(d-2)}.
\end{equation*} For $2\leq k\leq d-1$, since $\lim\limits_{r_{0}\rightarrow\infty}\mu(r_{0})=-\infty$, $\mu(0)=0$ and $\liminf\limits_{r_{0}\rightarrow0^{+}}\mu'(r_{0})>0$, there exists $r_{3}>0$ such that \begin{equation*} \mu(r_{0})\leq \hat{c}(k,d):=\max\limits_{[0,\infty)}\mu=\mu(r_{3}). \end{equation*} Note that the solutions with different $k$ have the same asymptotic behavior. Then the monotonicity of $\hat{c}$ with respect to $k$ follows directly from the monotonicity of $h$ with respect to $k$. \end{proof} \section{Some rigidity results} We firstly give the \begin{proof}[proof of Theorem \ref{ridig''}] We take $R>0$ sufficiently large such that $\bar{\Omega}\subset B_{R}$. The asymptotic behavior (\ref{fd4--}) implies that as $|x|\rightarrow\infty$, \begin{equation}\label{chouba} Du(x)=x+O(|x|^{-(\alpha+1)}),~~~~D^{2}u=I+O(|x|^{-(\alpha+2)}). \end{equation} It follows that as $|x|\rightarrow\infty$, \begin{equation}\label{choudan} \sigma_{k}^{ij}:=\sigma_{k}^{ij}(\lambda(D^{2}u))= \begin{cases} \binom{d-1}{k-1}\delta_{ij}+O(|x|^{-(\alpha+2)}),&i=j,\\ O(|x|^{-(\alpha+2)}),&i\neq j. \end{cases} \end{equation} for $i$, $j=1$, $\cdots$, $d$. Then for sufficiently large $R>0$, we can compute the following boundary terms: \begin{equation*} \int_{\partial B_{R}}\sigma_{k}^{ij}u_{j}x_{i}=k\binom{d}{k}R|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}(x\cdot Du)\sigma_{k}^{ij}u_{j}x_{i}=k\binom{d}{k}R^{3}|B_{R}|+O(R^{d+1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}\sigma_{k}^{ij}u_{i}u_{j}=k\binom{d}{k}R|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}x_{l}u\frac{\partial \sigma_{k}^{ij}}{\partial x_{l}}u_{j}x_{i}=O(R^{d+1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}u=\frac{1}{2}dR|B_{R}|+cdR^{-1}|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}u\sigma_{k}^{ij}u_{j}x_{i}=\frac{1}{2}k\binom{d}{k}R^{3}|B_{R}|+k\binom{d}{k}cR|B_{R}|+O(R^{d+1-\alpha}). \end{equation*} Integrating the equation (\ref{baozhang3}) on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{yaduo} \int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}(\lambda(D^{2}u))=\binom{d}{k}(|B_{R}|-|\Omega|). \end{equation} The left hand side can be computed as \begin{align*} \int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}(\lambda(D^{2}u))&=\frac{1}{k} \int_{B_{R}\setminus\bar{\Omega}}(\sigma_{k}^{ij}u_{j})_{i}=\frac{1}{kR}\int_{\partial B_{R}}\sigma_{k}^{ij}u_{j}x_{i}-\frac{1}{k}\int_{\partial\Omega}\sigma_{k}^{ij}u_{i}u_{j}\\ &=\binom{d}{k}|B_{R}|-\frac{1}{k}\int_{\partial\Omega}\sigma_{k}^{ij}u_{i}u_{j}+O(R^{d-2-\alpha}). \end{align*} Inserting the above equality into (\ref{yaduo}) and letting $R\rightarrow\infty$, we have that \begin{equation*} |\Omega|=\frac{1}{k\binom{d}{k}}\int_{\partial\Omega}\sigma_{k}^{ij}u_{i}u_{j}=\frac{1}{k\binom{d}{k}}\int_{\partial\Omega}H_{k-1}. \end{equation*} Here $H_{k}$ denotes the $k$-th mean curvature of the level sets of $u$. It is well known that \begin{equation*} \sigma_{k}(\lambda(D^{2}u))=H_{k}|Du|^{k}+\frac{\sigma_{k}^{ij}u_{i}u_{lj}u_{l}}{|Du|^{2}},~~~~H_{k-1}=\frac{\sigma_{k}^{ij}u_{i}u_{j}}{|Du|^{k+1}}. \end{equation*} We first prove that $x\cdot Du\geq 0$ and $u\geq 0$. We define a function $$\varphi=x\cdot Du-2u.$$ It is obvious that $\varphi\rightarrow -2c > 0$ as $|x|\rightarrow\infty$ and that $\varphi\geq 0$ on $\partial\Omega$. Moreover, one can prove $$\sigma_k^{ij}\varphi_{ij}=0.$$ Thus, we get $\varphi\geq 0$ in $\mathbb{R}^d\setminus \Omega$.
Therefore, we get $$\frac{1}{r^2}\left(x\cdot Du-2u\right)=x\cdot D(\frac{u}{r^2})\geq 0,$$ which implies $$\frac{u}{r^2}\geq \left.\frac{u}{r^2}\right|_{\partial \Omega}=0.$$ Thus, we get $u\geq 0$ and $x\cdot Du\geq 2u\geq 0$. By the Newton's inequality and (\ref{baozhang3}), we have that \begin{equation}\label{shenxia} \Delta u\geq d\left(\frac{\sigma_{k}(\lambda(D^{2}u))}{\binom{d}{k}}\right)^{1/k}=d. \end{equation} Multiplying inequality (\ref{shenxia}) with $u\geq0$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{weilai'} \int_{B_{R}\setminus\bar{\Omega}}u\Delta u\geq d\int_{B_{R}\setminus\bar{\Omega}}u. \end{equation} The left hand side can be computed by integration by part as \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}u\Delta u=-\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+\frac{d}{2}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha}). \end{equation*} Inserting the above equality into (\ref{weilai'}), we have that \begin{equation}\label{chenke} \int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}\leq -d\int_{B_{R}\setminus\bar{\Omega}}u+\frac{d}{2}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha}). \end{equation} Multiplying inequality (\ref{shenxia}) with $(x\cdot Du)\geq0$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{zhuoyue'} \int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)\Delta u\geq d\int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du). \end{equation} The right hand side can be computed by integration by part as \begin{equation} d\int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)=-d^{2}\int_{B_{R}\setminus\bar{\Omega}}u+\frac{1}{2}d^{2}R^{2}|B_{R}|+cd^{2}|B_{R}|+O(R^{d-\alpha}).\label{youxiu'--} \end{equation} Note that \begin{equation*} (x\cdot Du)\Delta u=(x_{l}u_{l}u_{i})_{i}-\frac{1}{2}(x_{l}|Du|^{2})_{l}+\frac{d-2}{2}|Du|^{2}. \end{equation*} Then the left hand side of (\ref{zhuoyue'}) can be computed as \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)\Delta u=\frac{d-2}{2}\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}-\frac{d}{2}|\Omega|+\frac{1}{2}dR^{2}|B_{R}|+O(R^{d-\alpha}).\nonumber \end{equation*} Inserting the above equality and (\ref{youxiu'--}) into (\ref{zhuoyue'}), we have that \begin{equation}\label{xuhoubao'} \frac{d-2}{2}\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}\geq-d^{2}\int_{B_{R}\setminus\bar{\Omega}}u+\frac{1}{2}d(d-1)R^{2}|B_{R}|+cd^{2}|B_{R}|+\frac{d}{2}|\Omega|+O(R^{d-\alpha}). \end{equation} Combining (\ref{chenke}) and (\ref{xuhoubao'}) together, we have that \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}u\geq \frac{d}{2(d+2)}R^{2}|B_{R}|+c|B_{R}|+\frac{1}{d+2}|\Omega|+O(R^{d-\alpha}). \end{equation*} We also know that \begin{align*} &\quad\int_{B_{R}\setminus\bar{\Omega}}u\sigma_{k+1}(\lambda(D^{2}u))\\ &=-\frac{1}{k+1}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k+1}^{ij}u_{i}u_{j}+\frac{1}{(k+1)R}\int_{\partial B_{R}}u\sigma_{k+1}^{ij}u_{j}x_{i}\\ &=-\frac{1}{k+1}\int_{B_{R}\setminus\bar{\Omega}}H_{k}|Du|^{k+2}+\frac{1}{2}\binom{d}{k+1}R^{2}|B_{R}|+c\binom{d}{k+1}|B_{R}|+O(R^{d-\alpha})\\ &=-\frac{k+2}{2(k+1)}\binom{d}{k}\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+\frac{d}{2(k+1)}\binom{d}{k}R^{2}|B_{R}|+c\binom{d}{k+1}|B_{R}|\\ &\ \ \ \ -\frac{1}{2(k+1)}\int_{\partial\Omega}H_{k-1}+O(R^{d-\alpha})\\ &\geq \frac{d(k+2)}{2(k+1)}\binom{d}{k}\int_{B_{R}\setminus\bar{\Omega}}u-\frac{kd}{4(k+1)}\binom{d}{k}R^{2}|B_{R}|-\frac{k(d+2)}{2(k+1)}\binom{d}{k}c|B_{R}|\\ &\ \ \ \ -\frac{1}{2(k+1)}\int_{\partial\Omega}H_{k-1}+O(R^{d-\alpha}). 
\end{align*} It follows that \begin{align*} 0&\geq \int_{B_{R}\setminus\bar{\Omega}}u\left(\sigma_{k+1}(\lambda(D^{2}u))-\binom{d}{k+1}\right)\\ &\geq \frac{k(d+2)}{2(k+1)}\binom{d}{k}\int_{B_{R}\setminus\bar{\Omega}}u-\frac{kd}{4(k+1)}\binom{d}{k}R^{2}|B_{R}|-\frac{k(d+2)}{2(k+1)}\binom{d}{k}c|B_{R}|\\ &\quad\quad-\frac{1}{2(k+1)}\int_{\partial\Omega}H_{k-1}+O(R^{d-\alpha})\\ &\geq\frac{1}{2(k+1)}\left[k\binom{d}{k}|\Omega|-\int_{\partial\Omega}H_{k-1}\right]+O(R^{d-\alpha})=O(R^{d-\alpha}). \end{align*} Letting $R\rightarrow\infty$, we have that $\sigma_{k+1}(\lambda(D^{2}u))=\binom{d}{k+1}$, which implies that $D^{2}u=I$. By (\ref{baozhang2}), we can conclude that $u(x)=\frac{1}{2}|x|^{2}-\frac{1}{2}$. The proof of Theorem \ref{ridig''} is finished. \end{proof} We secondly give the \begin{proof}[proof of Theorem \ref{ridig}] We take $R>0$ sufficiently large such that $\bar{\Omega}\subset B_{R}$. The asymptotic behavior (\ref{fd4--}) implies that as $|x|\rightarrow\infty$, (\ref{chouba}) and (\ref{choudan}) hold. Integrating the equation (\ref{la}) on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{yaduo---} \int_{B_{R}\setminus\bar{\Omega}}\Delta u=d(|B_{R}|-|\Omega|). \end{equation} By the divergence theorem, (\ref{chouba}) and (\ref{baozhang2}), the left hand side can be computed as \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}\Delta u=\frac{1}{R}\int_{\partial B_{R}}(x\cdot Du)-|\partial\Omega|=d|B_{R}|-|\partial\Omega|+O(R^{d-2-\alpha}). \end{equation*} Inserting the above equality into (\ref{yaduo---}) and letting $R\rightarrow\infty$, we have that \begin{equation*} |\partial\Omega|=d|\Omega|. \end{equation*} Multiplying the equation (\ref{la}) with $x\cdot Du$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{zhuoyue} \int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)\Delta u=d\int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du). \end{equation} By (\ref{baozhang2}), (\ref{fd4--}) and integration by part, the right hand side of (\ref{zhuoyue}) can be computed as \begin{equation}\label{youxiu} d\int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)=-d^{2}\int_{B_{R}\setminus\bar{\Omega}}u+\frac{1}{2}d^{2}R^{2}|B_{R}|+cd^{2}|B_{R}|+O(R^{d-\alpha}). \end{equation} Note that \begin{equation*} (x\cdot Du)\Delta u=(x_{l}u_{l}u_{i})_{i}-\frac{1}{2}(x_{l}|Du|^{2})_{l}+\frac{d-2}{2}|Du|^{2}. \end{equation*} Integrating the above equality on $B_{R}\setminus\bar{\Omega}$, by the divergence theorem, (\ref{baozhang2}) and (\ref{chouba}), we have that \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}(x\cdot Du)\Delta u=\frac{d-2}{2}\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}-\frac{d}{2}|\Omega|+\frac{d}{2}R^{2}|B_{R}|+O(R^{d-\alpha}). \end{equation*} Inserting the above equality and (\ref{youxiu}) into (\ref{zhuoyue}), we have that \begin{equation}\label{xuhoubao} \frac{d-2}{2}\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+d^{2}\int_{B_{R}\setminus\bar{\Omega}}u=\frac{d(d-1)}{2}R^{2}|B_{R}|+cd^{2}|B_{R}|+O(R^{d-\alpha})+\frac{d}{2}|\Omega|. \end{equation} Multiplying the equation (\ref{la}) with $u$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{weilai} \int_{B_{R}\setminus\bar{\Omega}}u\Delta u=d\int_{B_{R}\setminus\bar{\Omega}}u. \end{equation} By integration by part, (\ref{baozhang2}), (\ref{fd4--}) and (\ref{chouba}), we have that \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}u\Delta u=-\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+\frac{d}{2}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha}). 
\end{equation*} Inserting the above equality into (\ref{weilai}), we have that \begin{equation}\label{zhaohao} \int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+d\int_{B_{R}\setminus\bar{\Omega}}u=\frac{d}{2}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha}). \end{equation} Solving from (\ref{xuhoubao}) and (\ref{zhaohao}), we have that \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}=\frac{d}{d+2}R^{2}|B_{R}|-\frac{d}{d+2}|\Omega|+O(R^{d-\alpha}), \end{equation*} and \begin{equation}\label{luogu} \int_{B_{R}\setminus\bar{\Omega}}u=\frac{d}{2(d+2)}R^{2}|B_{R}|+c|B_{R}|+\frac{1}{d+2}|\Omega|+O(R^{d-\alpha}). \end{equation} Since $2\sigma_{2}(\lambda(D^{2}u))=(\sigma_{2}^{ij}u_{i})_{j}$, by integration by part, (\ref{baozhang2}) and (\ref{choudan}), we have that \begin{align*} \int_{B_{R}\setminus\bar{\Omega}}u\sigma_{2}(\lambda(D^{2}u))&=-\frac{1}{2}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{2}^{ij}u_{i}u_{j}+\frac{1}{2R}\int_{\partial B_{R}}u\sigma_{2}^{ij}u_{j}x_{i}\\ &=-\frac{1}{2}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{2}^{ij}u_{i}u_{j}+\frac{d(d-1)}{4}R^{2}|B_{R}|+\frac{d(d-1)}{2}c|B_{R}|+O(R^{d-\alpha}). \end{align*} Then we have that $$H_{1}=\frac{\sigma_{2}^{ij}u_{i}u_{j}}{|Du|^{3}},~~~~\Delta u=H_{1}|Du|+\frac{u_{ij}u_{i}u_{j}}{|Du|^{2}}.$$ Then we have that \begin{align*} &\int_{B_{R}\setminus\bar{\Omega}}u\sigma_{2}(\lambda(D^{2}u))\\ &=-\frac{1}{2}\int_{B_{R}\setminus\bar{\Omega}}H_{1}|Du|^{3}+\frac{d(d-1)}{4}R^{2}|B_{R}|+\frac{d(d-1)}{2}c|B_{R}|+O(R^{d-\alpha})\\ &=-\frac{3}{4}d\int_{B_{R}\setminus\bar{\Omega}}|Du|^{2}+\frac{d^{2}}{4}R^{2}|B_{R}|+\frac{d(d-1)}{2}c|B_{R}|-\frac{1}{4}|\partial\Omega|+O(R^{d-\alpha})\\ &=\frac{3}{4}d^{2}\int_{B_{R}\setminus\bar{\Omega}}u-\frac{d^{2}}{8}R^{2}|B_{R}|-\frac{d(d+2)}{4}c|B_{R}|-\frac{1}{4}|\partial\Omega|+O(R^{d-\alpha}). \end{align*} It follows from the above inequality and (\ref{luogu}) that \begin{align*} 0&\geq \int_{B_{R}\setminus\bar{\Omega}}u\left(\sigma_{2}(\lambda(D^{2}u))-\frac{d(d-1)}{2}\right)\\ &= \frac{d(d+2)}{4}\int_{B_{R}\setminus\bar{\Omega}}u-\frac{d^{2}}{8}R^{2}|B_{R}|-\frac{d(d+2)}{4}c|B_{R}|-\frac{1}{4}|\partial\Omega|+O(R^{d-\alpha})\\ &=\frac{1}{4}(d|\Omega|-|\partial\Omega|)+O(R^{d-\alpha})=O(R^{d-\alpha}). \end{align*} Letting $R\rightarrow\infty$, we have that $\sigma_{2}(\lambda(D^{2}u))=\frac{d(d-1)}{2}$, which implies that $D^{2}u=I$. By (\ref{baozhang2}), we can conclude that $u(x)=\frac{1}{2}|x|^{2}-\frac{1}{2}$. The proof of Theorem \ref{ridig} is finished. \end{proof} \section{Proof of Lemma \ref{general}} In this section, we give the \begin{proof}[proof of Lemma \ref{general}] Without loss of generality, we may assume that $A=\mbox{diag}\{a_{1},\cdots,$ $ a_{d}\}$, $b=0$ and $c=0$, that is \begin{equation*} w(x):=u(x)-\frac{1}{2}x\cdot Ax=o(1), ~~~~\mbox{as }|x|\rightarrow\infty. \end{equation*} We take $R_{0}>0$ such that $\bar{\Omega}\subset B_{R_{0}}$ and \begin{equation}\label{ximenzi} |w(x)|\leq \frac{1}{2},~~~~\forall~|x|>R_{0}. \end{equation} Since $\lambda(A)\in\Gamma_{k}$, we can take some $\varepsilon_{0}>0$ such that \begin{equation*} (a_{1},\cdots,a_{d})-2\varepsilon_{0}(1,\cdots,1)\in\Gamma_{k}. \end{equation*} Let $\varphi(y):=\frac{1}{2}\sum\limits_{i=1}^{d}(a_{i}-2\varepsilon_{0})y_{i}^{2}$. For any $s>0$, define \begin{equation*} D_{s}:=\left\{y\in\RR^{d}:\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}y_{i}^{2}<s+\varphi(y)\right\}=\left\{y\in\RR^{d}:\varepsilon_{0}|y|^{2}<s\right\}. \end{equation*} Fix $x\in\RR^{d}$ with $|x|>2R_{0}$. Let $R=\sqrt{2\varepsilon_{0}}|x|$. 
Let \begin{equation*} u_{R}(y):=\left(\frac{4}{R}\right)^{2}u\left(x+\frac{R}{4}y\right),~~~~\forall~y\in D_{2}, \end{equation*} and \begin{equation*} w_{R}(y):=\left(\frac{4}{R}\right)^{2}w\left(x+\frac{R}{4}y\right),~~~~\forall~y\in D_{2}. \end{equation*} It follows that \begin{equation*} w_{R}(y)=u_{R}(y)-\left(\frac{1}{2}y\cdot Ay+\frac{8}{R^{2}}x\cdot Ax+\frac{4}{R}x\cdot Ay\right),~~~~\forall~y\in D_{2}. \end{equation*} Let \begin{equation*} \bar{u}_{R}(y):=u_{R}(y)-\left(\frac{8}{R^{2}}x\cdot Ax+\frac{4}{R}x\cdot Ay\right),~~~~\forall~y\in D_{2}. \end{equation*} By (\ref{baozhang3}) and (\ref{ximenzi}), we have that \begin{equation*} \sigma_{k}(\lambda(D^{2}\bar{u}_{R}))=\binom{d}{k}~~~~\mbox{in }D_{2}, \end{equation*} and \begin{equation}\label{liangdong} \left|\bar{u}_{R}(y)-\frac{1}{2}y\cdot Ay\right|=|w_{R}(y)|\leq \frac{8}{R^{2}},~~~~\forall~y\in D_{2}.\end{equation} In particular, \begin{equation*} \|\bar{u}_{R}\|_{L^{\infty}(D_{2})}\leq C(A,R_{0},\varepsilon_{0}). \end{equation*} For $M\leq 2$, define \begin{equation*} \Omega_{M,R}:=\{y\in D_{2}:\bar{u}_{R}(y)<M+\varphi(y)\}. \end{equation*} In view of (\ref{liangdong}), we have that \begin{equation*} \bar{\Omega}_{1.5,R}\subset D_{1.6} \end{equation*} for any $R>R_{1}:=\max\{R_{0},10\}$. Applying the interior gradient estimate, Theorem 3.2 in \cite{cw2001}, to $\bar{u}_{R}$, we have that \begin{equation*} \|D\bar{u}_{R}\|\leq C. \end{equation*} Here and in the following, $C\geq1$ denotes some constant depending on $k$, $d$, $A$, $R_{0}$. Applying the interior second derivatives estimate, Theorem 1.5 in \cite{cw2001}, to $\bar{u_{R}}$, we have that \begin{equation*} (\bar{u}_{R}(y)-\varphi(y)-1.5)^{4}|D^{2}\bar{u}_{R}(y)|\leq C,~~~~\forall~y\in\Omega_{1.5,R}. \end{equation*} It follows that \begin{equation*} \|D^{2}\bar{u}_{R}\|_{L^{\infty}(\Omega_{1.2,R})}\leq C, \end{equation*} and \begin{equation*} D^{2}u_{R}=D^{2}\bar{u}_{R}\leq CI~~~~\mbox{in }D_{1.1}. \end{equation*} By the above inequality, the concavity of $\sigma_{k}^{1/k}$ and the interior estimate, we have that \begin{equation*} \|u_{R}\|_{C^{4,\alpha}(\bar{D}_{1})}\leq C,~~~~\forall~\alpha\in(0,1). \end{equation*} It is easy to see that \begin{equation}\label{lomeng} \|w_{R}\|_{C^{4,\alpha}(\bar{D}_{1})}\leq C~~\mbox{and}~~A+D^{2}w_{R}\leq CI~~~~\mbox{in }D_{1}. \end{equation} Clearly, $w_{R}$ satisfies that \begin{equation*} a_{ij}^{R}D_{ij}w_{R}=0~~~~\mbox{in }D_{1}, \end{equation*} where $$a^{ij}:=\int^1_0\sigma_k^{ij}(\lambda(A+sD^2w_R))ds.$$By (\ref{lomeng}), $a_{ij}^{R}$ is uniformly elliptic and \begin{equation*} \|a_{ij}\|_{C^{2,\alpha}(\bar{D}_{1})}\leq C. \end{equation*} By the Schauder estimate, we have that \begin{equation*} |D^{2}w_{R}(0)|\leq C\|w_{R}\|_{L^{\infty}(D_{1})}\leq CR^{-2}. \end{equation*} It follows that for $|x|>R_{1}$, \begin{equation*} |D^{2}w(x)|=|D^{2}w_{R}(0)|\leq C|x|^{-2}, \end{equation*} which implies that $D^{2}u(x)\rightarrow A\in\tilde{A}_{k}$ as $|x|\rightarrow\infty$. \end{proof} \bigskip {\bf Acknowledgement.} The authors wish to thank Professor Jiguang Bao for many helpful discussions. Part of the work was done while the first author was visiting Fudan University. He would like to thank Fudan University for their hospitality. \newcommand{\noopsort}[1]{} \begin{thebibliography}{10} \bibitem{AB} {\sc A.~Aftalion, J.~Busca}, {\em Radial symmetry of overdertermined boundary-value problems in exterior domains}, Arch. Rational Mech. Anal., 143 (1998), pp.~195--206. 
\bibitem{baocao} {\sc J.~Bao, X.~Cao}, {\em Hessian equations on exterior domain}, J. Math. Anal. Appl., 448 (2017), 22--43. \bibitem{baoli} {\sc J.~Bao, H.~Li}, {\em The exterior Dirichlet problem for fully nonlinear elliptic equations related to the eigenvalues of the Hessian}, J. Differential Equations, 256 (2014), 2480--2501. \bibitem{BaoW} {\sc J.~Bao, C.~Wang}, {\em Liouville property and existence of entire solutions of Hessian equations}, Nonlinear Analysis, 223 (2022), 113020. \bibitem{baolili} {\sc J.~Bao, H.~Li, Y.Y.~Li}, {\em On the exterior Dirichlet problem for Hessian equations}, Transactions of the American Mathematical Society, 366 (2014), pp.~6183--6200. \bibitem{baolizhang0} {\sc J.~Bao, H.~Li, L.~Zhang}, {\em Monge-Amp\`ere equation on exterior domains}, Calculus of Variations and PDEs, 52 (2015), 39--63. \bibitem{baolizhang} {\sc J.~Bao, H.~Li, L.~Zhang}, {\em Global solutions and exterior Dirichlet problem for Monge-Amp\`ere equation in $\RR^2$}, Differential and Integral Equations, 29 (2016), 563--582. \bibitem{baoxiongzhou} {\sc J.~Bao, J.~Xiong, Z.~Zhou}, {\em Existence of entire solutions of Monge-Amp\`ere equations with prescribed asymptotic behavior}, Calc. Var. Partial Differential Equations, 58 (2019), no. 6, Paper No. 193, 12 pp. \bibitem{BCN} {\sc H.~Berestycki, L.A.~Caffarelli, L.~Nirenberg}, {\em Monotonicity for elliptic equations in unbounded Lipschitz domains}, Comm. Pure Appl. Math. 50 (11), 1089--1111 (1997). \bibitem{bocher} {\sc M.~B\^{o}cher}, {\em Singular points of functions which satisfy partial differential equations of the elliptic type}, Bull. Amer. Math. Soc., 9 (1903), no. 9, 455--465. \bibitem{bnst2008a} {\sc B.~Brandolini, C.~Nitsch, P.~Salani, C.~Trombetti}, {\em Serrin-type overdetermined problems: An alternative proof}, Arch. Rational Mech. Anal., 190 (2008), no. 2, pp.~267--280. \bibitem{bnst2008b} {\sc B.~Brandolini, C.~Nitsch, P.~Salani, C.~Trombetti}, {\em On the stability of the Serrin problem}, J. Differ. Equ., 245 (2008), no. 6, pp.~1566--1583. \bibitem{bnst2008c} {\sc B.~Brandolini, C.~Nitsch, P.~Salani, C.~Trombetti}, {\em Stability of radial symmetry for a Monge-Amp\`ere overdetermined problem}, Annali di Matematica, 188 (2009), pp.~445--453. \bibitem{cl2003} {\sc L.~Caffarelli, Y.~Y.~Li}, {\em An extension to a theorem of J\"orgens, Calabi, and Pogorelov}, Comm. Pure Appl. Math., 56 (2003), pp.~549--583. \bibitem{cs} {\sc L.~Caffarelli, S.~Salsa}, {\em A geometric approach to free boundary problems}, Graduate Studies in Mathematics, 68. American Mathematical Society, Providence, RI, 2005. \bibitem{cw2001} {\sc K.S.~Chou, X.-J.~Wang}, {\em A variational theory of the Hessian equation}, Comm. Pure Appl. Math., 54 (2001), pp.~1029--1064. \bibitem{d2023} {\sc G.~Dai}, {\em Answer to some open problems and global bifurcation for an overdetermined problem}, Indiana Univ. Math. J., 72 (2023), no. 5, pp.~1749--1787. \bibitem{dz2023} {\sc G.~Dai, Y. Zhang}, {\em Sign-changing solution for an overdetermined elliptic problem on unbounded domain}, J. Reine Angew. Math. 803 (2023), pp.~267--293. \bibitem{dbw2024} {\sc L.~Dai, J.~Bao, B.~Wang}, {\em Solvability of Hessian quotient equations in exterior domains}, Can. J. Math., to appear. \bibitem{gt} {\sc D. Gilbarg, N.S. Trudinger}, Elliptic Partial Differential Equations of Second Order, Springer, New York, NY, 1983. \bibitem{Guan} {\sc B. Guan}, {\em The Dirichlet problem for Hessian equations on Riemannian manifolds}, Calc. Var. Partial Differential Equations 8 (1999), no.
1, 45--69. \bibitem{ITW} {\sc N.M. Ivochkina, N.S. Trudinger, X.-J. Wang}, {\em The Dirichlet problem for degenerate Hessian equations}, Comm. Partial Differential Equations, 29 (2004), 219--235. \bibitem{jinxiong} {\sc T.~Jin, J.~Xiong}, {\em Solutions of some Monge-Amp\`ere equations with isolated and line singularities}, Adv. Math. 289 (2016), 114--141. \bibitem{lili2018} {\sc D.~Li, Z.~Li}, {\em On the exterior Dirichlet problem for Hessian quotient equations}, J. Differential Equations 264 (2018), no. 11, 6633--6662. \bibitem{liliyuan2020} {\sc D.~Li, Z.~Li, Y.~Yuan}, {\em A Bernstein problem for special Lagrangian equations in exterior domains}, Adv. Math. 361 (2020), 106927, 29 pp. \bibitem{lilu2018} {\sc Y.Y.~Li, S.~Lu}, {\em Existence and nonexistence to exterior Dirichlet problem for Monge-Amp\`ere equation}, Calculus of Variations and PDEs, 57 (2018), no. 6, Art. 161, 17 pp. \bibitem{li2019} {\sc Z.~Li}, {\em On the exterior Dirichlet problem for special Lagrangian equations}, Trans. Amer. Math. Soc. 372 (2019), no. 2, 889--924. \bibitem{lie} {\sc G.M.~Lieberman}, Oblique Derivative Problems for Elliptic Equations, World Scientific, 2013. \bibitem{ps1989} {\sc L.E. Payne, P.W. Schaefer}, {\em Duality theorems in some overdetermined boundary value problems}, Math. Methods Appl. Sci. 11 (6) (1989) 805--819. \bibitem{qx2017} {\sc G. Qiu, C. Xia}, {\em Overdetermined boundary value problems in $\mathbb{S}^{n}$}, J. Math. Study 50 (2017), no. 2, pp.~165--173. \bibitem{reichel1995} {\sc W.~Reichel}, {\em Radial symmetry by moving planes for semilinear elliptic BVPs on annuli and other non-convex domains}, In Progress in Partial Differential Equations: Elliptic and Parabolic Problems, Ed. C. Bandle et al., Pitman Res. Notes 325, (1995), no. 6, pp.~164--182. \bibitem{reichel1996} {\sc W.~Reichel}, {\em Radial symmetry for an electrostatic, a capillarity and some fully nonlinear overdetermined problems on exterior domains}, Z. Anal. Anwend. 15 (1996) 619--635. \bibitem{reichel1997} {\sc W.~Reichel}, {\em Radial symmetry for elliptic boundary-value problems on exterior domains}, Arch. Rational Mech. Anal., 137 (1997), no. 4, pp.~381--394. \bibitem{serrin1971} {\sc J.~Serrin}, {\em A symmetry problem in potential theory}, Arch. Rational Mech. Anal., 43 (1971), pp.~304--318. \bibitem{hs2012} {\sc H.~Shahgholian}, {\em Diversifications of Serrin's and related symmetry problems}, Complex Variables and Elliptic Equations, 57 (2012), no. 6, pp.~549--583. \bibitem{wangbao2014} {\sc B. Wang, J. Bao}, {\em Mirror symmetry for a Hessian over-determined problem and its generalization}, Commun. Pure Appl. Anal. 13 (6) (2014), 2305--2316. \bibitem{wangbao2015} {\sc B. Wang, J. Bao}, {\em Over-determined problems for k-Hessian equations in ring-shaped domains}, Nonlinear Analysis, 127 (2015), 143--156. \bibitem{w1971} {\sc H.~F.~Weinberger}, {\em Remark on the preceding paper of Serrin}, Arch. Rational Mech. Anal., 43 (1971), pp.~319--320. \bibitem{wgs1994} {\sc N.B. Willms, G. Gladwell, D. Siegel}, {\em Symmetry theorems for some overdetermined boundary value problems on ring domains}, Z. Angew. Math. Phys. 45 (1994) 556--579. \end{thebibliography} \end{document} For $n=2$, by \cite{cl2003}, there exists $A\in\mathbb{A}$, the set of all $2\times2$ real symmetric positive definite matrices with determinant $1$, $b\in\mathbb{R}^2$, $c$ and $d\in\mathbb{R}$ such that \begin{equation*} u(x)=\frac{1}{2}x^{T}Ax+b^{T}x+d\log\sqrt{x^{T}Ax}+c+O(|x|^{-1}), \end{equation*} as $|x|\rightarrow+\infty$.
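To orient the reader, let us indicate with a side computation (not needed below) why a logarithmic correction is compatible with the equation in dimension two. Writing $d_{0}$ for the coefficient of the logarithmic term, the radial model $v(x)=\frac{1}{2}|x|^{2}+d_{0}\log|x|$ satisfies, with $r=|x|$,
\begin{equation*}
v'(r)=r+\frac{d_{0}}{r},\qquad v''(r)=1-\frac{d_{0}}{r^{2}},\qquad \det D^{2}v=v''(r)\,\frac{v'(r)}{r}=1-\frac{d_{0}^{2}}{r^{4}}=1+O(|x|^{-4})
\end{equation*}
as $|x|\rightarrow\infty$, so the logarithmic term perturbs the equation only by a rapidly decaying error; this is consistent with its appearance in the expansion above and motivates prescribing it in the condition below.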
We need to add a stronger condition of problem (\ref{cases:1}) at infinity, that is, \begin{equation} u(x)=\frac{1}{2}|x|^{2}+c_{0}+d_{0}\log|x|+O(|x|^{-1}), \label{cases:100} \end{equation} as $|x|\rightarrow+\infty$, where $c_{0}$, $d_{0}\in\mathbb{R}$ and $d_{0}<\frac{1}{2}$. Then we have the following theorem. \begin{thm}\label{cases:99} Let $\Underline{\omega}$ be a bounded strictly convex domain in $\mathbb{R}^2$ with smooth boundary. If $u\in C^{2}(\mathbb{R}^2\setminus\Underline{\omega})$ is a strictly convex solution to problem (\ref{cases:1}) and satisfies (\ref{cases:100}), then $\Underline{\omega}$ is a disk in $\mathbb{R}^2$. \end{thm} \section{Proof of Theorem \ref{cases:99}} For $n=2$, letting $u^{*}$ be the Legendre transform of $u$, we have \begin{thm}\label{cases:97} Suppose that $u\in C^{2}(\mathbb{R}^2\setminus\Underline{\omega})$ is a strictly convex solution to (\ref{cases:1}) and (\ref{cases:100}). Then $u^{*}(p)$ satisfies the Robin problem for the Monge-Amp\`ere equation \begin{equation} \begin{cases} \det D^{2}u^{*}=1,&p\in\mathbb{R}^2\texttt{\symbol{'134}}\overline{B_{1}(0)},\\ u^{*}=\frac{\partial u^{*}}{\partial \nu},&p\in\partial B_{1}(0), \label{cases:96} \end{cases} \end{equation} where $\nu(p)=p$ for any $p\in\partial B_{1}(0)$, and as $|p|\rightarrow+\infty$, we have \begin{equation} u^{*}(p)=\frac{1}{2}|p|^{2}-d_{0}\log|p|+O(1).\label{cases:98} \end{equation} \end{thm} \begin{proof} As in the proof of Theorem \ref{cases:14}, we can prove that $u^{*}(p)$ satisfies (\ref{cases:96}). Now we only need to prove that as $|p|\rightarrow+\infty$, $u^{*}(p)$ satisfies (\ref{cases:98}). By (\ref{cases:100}) and Corollary 1.3 in \cite{cl2003}, as $|x|\rightarrow+\infty$, we have \begin{equation*} Du(x)=x+O(|x|^{-1}). \end{equation*} Taking $p=Du(x)$, by the triangle inequality, we have $|p|=|x|+O(|x|^{-1})$. As $|x|\rightarrow+\infty$, we also have \begin{eqnarray*} \frac{O(|x|^{-1})}{|p|^{-1}}&=&\frac{|p|}{|x|}\frac{O(|x|^{-1})}{|x|^{-1}}\\ &=&(1+\frac{O(|x|^{1-n})}{|x|})O(1)\\ &=&O(1). \end{eqnarray*} Thus, $O(|x|^{-1})=O(|p|^{-1})$, as $|x|$ or $|p|\rightarrow+\infty$. It follows that as $|p|\rightarrow+\infty$, \begin{eqnarray*} u^{*}(p)&=&p^{T}x-u(x)\\ &=&p^{T}(p+O(|p|^{-1}))-\frac{1}{2}(|p|+O(|p|^{-1}))^{2}-c_{0}-d_{0}\log|x|+O(|x|^{-1})\\ &=&|p|^{2}+O(1)-\frac{1}{2}(|p|+O(|p|^{-1}))^{2}-c_{0}-d_{0}\log(|p|+O(|p|^{-1}))+O(|p|^{-1})\\ &=&|p|^{2}+O(1)-\frac{1}{2}|p|^{2}+O(1)+O(|p|^{-2})-c_{0}-d_{0}\log(|p|(1+O(|p|^{-2})))+O(|p|^{-1})\\ &=&\frac{1}{2}|p|^{2}-d_{0}\log|p|+O(1). \end{eqnarray*} \end{proof} Applying Theorem \ref{cases:97}, we can obtain the following theorem. \begin{thm}\label{cases:90} If $u^{*}(p)$ satisfies (\ref{cases:96}) and (\ref{cases:98}), we have \begin{equation} u^{*}(p)=\int_{1}^{|p|}(\tau^{2}-2d_{0})^{\frac{1}{2}}d\tau+(1-2d_{0})^{\frac{1}{2}}. \end{equation} \end{thm} \begin{proof} First of all, we compute the radially symmetric solution of (\ref{cases:96}). Let $v(p)=\varphi(|p|)$ be a radially symmetric solution of (\ref{cases:96}); then $\varphi(r)$ satisfies \begin{gather*} \varphi''(r)\cdot\frac{\varphi'(r)}{r}=1, \quad r>1. \end{gather*} It follows that \begin{equation*} \varphi'(r)=(r^{2}+C)^{\frac{1}{2}}, \end{equation*} where $C\in(-1,+\infty)$ is a constant. Integrating the above with respect to $r$, we have \begin{equation*} \varphi(r)=\int_{1}^{r}(\tau^{2}+C)^{\frac{1}{2}}d\tau+C_{3}, \end{equation*} where $C_{3}$ is an arbitrary constant. By the Robin condition in (\ref{cases:2}), \begin{equation*} \varphi'(1)=(1+C)^{\frac{1}{2}}=C_{3}=\varphi(1).
\end{equation*} Therefore, the radially symmetric solution of (\ref{cases:96}) is of the form \begin{equation*} v(p)=\int_{1}^{|p|}(\tau^{2}+C)^{\frac{1}{2}}d\tau+(1+C)^{\frac{1}{2}}, \end{equation*} where $C\in(-1,+\infty)$ is a constant. Next, we take $C=-2d_{0}$ such that (\ref{cases:98}) holds. By direct computation, we have \begin{eqnarray*} & &\int_{1}^{t}(\tau^{2}-2d_{0})^{\frac{1}{2}}d\tau-\frac{1}{2}t^{2}+d_{0}\log t\\ &=&\int_{1}^{t}[(\tau^{2}-2d_{0})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau-\frac{1}{2}\\ &=&\int_{1}^{+\infty}[(\tau^{2}-2d_{0})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau-\int_{t}^{+\infty}[\tau(1-2d_{0}\tau^{-2})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau-\frac{1}{2}\\ &=&\int_{1}^{+\infty}[(\tau^{2}-2d_{0})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau-\int_{t}^{+\infty}[\tau(1-d_{0}\tau^{-2}+O(\tau^{-4}))-\tau+d_{0}\tau^{-1}]d\tau-\frac{1}{2}\\ &=&\int_{1}^{+\infty}[(\tau^{2}-2d_{0})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau-\int_{t}^{+\infty}O(\tau^{-3})d\tau-\frac{1}{2}\\ &=&\int_{1}^{+\infty}[(\tau^{2}-2d_{0})^{\frac{1}{2}}-\tau+d_{0}\tau^{-1}]d\tau+O(t^{-2})-\frac{1}{2}\\ &=&O(1). \end{eqnarray*} It follows that \begin{equation*} v(p)=\frac{1}{2}|p|^{2}-d_{0}\log|p|+O(1). \end{equation*} At last, let $w=u^{*}-v$; then there exists some positive definite matrix $(a^{ij}(x))$ such that $w$ satisfies \begin{equation} \begin{cases} a^{ij}D_{ij}w=0,&p\in\mathbb{R}^2\texttt{\symbol{'134}}\overline{B_{1}(0)},\\ w=\frac{\partial w}{\partial \nu},&p\in\partial B_{1}(0),\\ w~\mbox{is bounded}. \label{cases:91} \end{cases} \end{equation} By the maximum principle, we can conclude that $w\equiv0$, that is, $u^{*}\equiv v$. \end{proof} Finally, we prove Theorem \ref{cases:99}. \begin{proof}[Proof of Theorem \ref{cases:99}] By Theorem \ref{cases:90}, we have \begin{equation} u^{*}(p)=\int_{1}^{|p|}(\tau^{2}-2d_{0})^{\frac{1}{2}}d\tau+(1-2d_{0})^{\frac{1}{2}}. \end{equation} It follows that \begin{equation*} x=D_{p}u^{*}(Du(x))=(|Du(x)|^{2}-2d_{0})^{\frac{1}{2}}\frac{Du(x)}{|Du(x)|}. \end{equation*} Therefore, for any $x\in\partial\Underline{\omega}$, we have $|Du(x)|=1$ and $x=(1-2d_{0})^{\frac{1}{2}}Du(x)$, that is, $|x|=(1-2d_{0})^{\frac{1}{2}}>0$. We can conclude that $\Underline{\omega}$ is a disk in $\mathbb{R}^2$. \end{proof} By the Legendre transform, we will construct subsolutions of \begin{equation*} \sigma_{k}(D^{2}w^{*})=1.
\end{equation*} Multiplying the above equation by $\frac{k}{\bar{t}_{k}(\frac{1}{a})}r^{\frac{k\sigma_{k}(\frac{1}{a})}{\bar{t}_{k}(\frac{1}{a})}-1}$, we have that \begin{equation*} (r^{\frac{k\sigma_{k}(\frac{1}{a})}{\bar{t}_{k}(\frac{1}{a})}}(h^{*})^{k})'=\frac{k}{\bar{t}_{k}(\frac{1}{a})}r^{\frac{k\sigma_{k}(\frac{1}{a})}{\bar{t}_{k}(\frac{1}{a})}-1}. \end{equation*} Integrating once and noting that $\sigma_{k}(1/a)=1$, we have that \begin{equation*} h^{*}(r)=(1+C_{1}r^{-\frac{k}{\bar{t}_{k}(\frac{1}{a})}})^{\frac{1}{k}}, \end{equation*} where we take the arbitrary constant $C_{1}>0$. Then $(h^{*})'<0$. Since $h^{*}:=(\underline{\omega}^{*})'/r$, we have that \begin{equation*} \underline{\omega}^{*}(r)=\int_{\eta}^{r}s(1+C_{1}s^{-\frac{k}{\bar{t}_{k}(\frac{1}{a})}})^{\frac{1}{k}}ds+C_{2}, \end{equation*} where $C_{2}$ is an arbitrary constant. Now we claim that $\underline{\omega}^{*}$ is a convex subsolution of $\sigma_{k}(\lambda(D^{2}\underline{\omega}^{*}))=1$. Indeed, for $j=1$, $\cdots$, $d$, \begin{align*} \sigma_{j}(\lambda(D^{2}\underline{\omega}))&=h^{j}\sigma_{j}(a)+r^{-1}h'h^{j-1}\sum\limits_{i=1}^{d}\sigma_{j-1;i}(a)a_{i}(a_{i}p_{i}^{2})\\ &\geq h^{j}\sigma_{j}(a)+\bar{t}_{j}(a)rh'h^{j-1}\\ &=h^{j-1}\sigma_{j}(a)[h+\frac{\bar{t}_{j}(a)}{\sigma_{j}(a)}rh']\\ &\geq h^{j-1}\sigma_{j}(a)[1-\frac{\bar{t}_{j}(a)}{\sigma_{j}(a)}\frac{\sigma_{d}(a)}{\bar{t}_{d}(a)}]h\geq0, \end{align*} where we have used the facts that $h\geq1$, $h'\leq0$, $\frac{\bar{t}_{j}(a)}{\sigma_{j}(a)}\leq \frac{\bar{t}_{d}(a)}{\sigma_{d}(a)}$, $\frac{\underline{t}_{l}(a)}{\bar{t}_{d}(a)}<1$ and \begin{equation*} h'=-\frac{\sigma_{d}(a)h}{r\bar{t}_{d}(a)}\frac{h^{d-1}-h^{l-1}}{h^{d-1}-\frac{\underline{t}_{l}(a)}{\bar{t}_{d}(a)}h^{l-1}}\geq-\frac{\sigma_{d}(a)}{\bar{t}_{d}(a)}\frac{h}{r}. \end{equation*} Moreover, \begin{align*} \sigma_{k}(\lambda(D^{2}\underline{\omega}^{*}))&=(h^{*})^{k}\sigma_{k}(\frac{1}{a})+r^{-1}(h^{*})'(h^{*})^{k-1}\sum\limits_{i=1}^{d}\sigma_{k-1;i}(\frac{1}{a})\frac{1}{a_{i}}(\frac {x_{i}^{2}}{a_{i}})\\ &\geq(h^{*})^{k}\sigma_{k}(\frac{1}{a})+\bar{t}_{k}(\frac{1}{a})r(h^{*})'(h^{*})^{k-1}=1. \end{align*} If $C_{1}<0$ and $C_{1}r^{-\frac{\sigma_{d}(a)(d-l)}{\bar{t}_{d}(a)-\underline{t}_{l}(a)}}\geq\min\limits_{h\in\mathbb{R}}\xi(h)$, we can solve for two functions $h$ from (\ref{xi}). They are both of the form $h(r)=\xi^{-1}(C_{1}r^{-\frac{\sigma_{d}(a)(d-l)}{\bar{t}_{d}(a)-\underline{t}_{l}(a)}})$ with $h>0$. However, one satisfies $h'<0$ and the other satisfies $h'>0$. In this case $\underline{\omega}$ is also of the form (\ref{w}). Now we seek subsolutions of \begin{equation*} \sigma_{k}(\lambda (D^{2}\underline{\omega}^{*}))=1. \end{equation*} Let \begin{equation*} \underline{\omega}^{*}(x)=\beta+\int^{\frac{1}{2}\sum\limits_{i=1}^{d}\frac{x_{i}^{2}}{a_{i}}}_{\eta}(1+\alpha s^{-\frac{k}{2\bar{t}_{k}(a)}})^{\frac{1}{k}}\,ds. \end{equation*} Then as $|x|\rightarrow\infty$, \begin{equation*} \underline{\omega}^{*}(x)=\frac{1}{2}\sum\limits_{i=1}^{d}\frac{x_{i}^{2}}{a_{i}}+\mu(\alpha)+O(|x|^{2-\frac{k}{\bar{t}_{k}(a)}}), \end{equation*} where \begin{equation*} \mu(\alpha):=\beta-\eta+\int_{\eta}^{\infty}\Big[\big(1+\alpha s^{-\frac{k}{2\bar{t}_{k}(a)}}\big)^{\frac{1}{k}}-1\Big]\,ds. \end{equation*} Then \begin{equation*} \underline{\omega}(p)=p\cdot x-\underline{\omega}^{*}(x)=\frac{1}{2}\sum\limits_{i=1}^{d}a_{i}p_{i}^{2}-\mu(\alpha)+O(|p|^{2-\frac{k}{\bar{t}_{k}(a)}}). \end{equation*} \subsection{The infinitesimal deformation of $\Omega$} A natural question is the following: if the matrix $A$ is perturbed near the identity matrix $I$, what is the corresponding deformation of $\Omega$ near the ball?
We will answer the infinitesimal version of this question. Suppose $A_t$ is a family of symmetric matrices near $I$ satisfying $\sigma_k (A_t)=1$. Taking the derivative with respect to $t$ at $t=0$ and setting $B=\left.\frac{d}{d t}A_t\right|_{t=0}$, we have $$\sigma_1(B)=0.$$ Let $D_{ij}$ denote the diagonal matrix whose entries are all zero except the $i$-th and $j$-th diagonal entries, with the $i$-th diagonal entry equal to $1$ and the $j$-th diagonal entry equal to $-1$. Rotating the coordinates of $\mathbb{R}^d$, we may assume that $B$ is diagonal. Since $\sigma_1(B)=0$, $B$ can be expressed as a linear combination of the $D_{ij}$, that is, \begin{equation} B=\sum_{i<j}a_{ij}D_{ij},\qquad a_{ij}\in\mathbb{R}. \end{equation} Let us first consider $B=D_{12}$. Then we have $$A_t=I+tD_{12}+O(t^2),$$ which implies $A_t^{-1}=I-tD_{12}+O(t^2)$. Concretely, we may take \begin{align*} A_{t}&=\mbox{diag}\{1/(1-t),1-t,1,\cdots, 1\}\\ &=I+t\,\mbox{diag}\{1,-1,0,\cdots, 0\}+t^{2}\,\mbox{diag}\{1,0,0,\cdots, 0\}+t^{3}\,\mbox{diag}\{1,0,0,\cdots, 0\}+\cdots. \end{align*} Suppose also that \begin{equation*} u_{t}=u_{0}+tu_{1}+t^{2}u_{2}+t^{3}u_{3}+\cdots, \end{equation*} and \begin{equation*} u_{t}\rightarrow \frac{1}{2}(|x|^{2}+1)+\frac{t}{2}(x_{1}^{2}-x_{2}^{2})+\frac{t^{2}}{2}x_{1}^{2}+\frac{t^{3}}{2}x_{1}^{2}+\cdots. \end{equation*} Then we have that $u_{0}\rightarrow\frac{1}{2}(|x|^{2}+1)$, $u_{1}\rightarrow\frac{1}{2}(x_{1}^{2}-x_{2}^{2})$, $u_{i}\rightarrow\frac{x_{1}^{2}}{2}$ for $i\geq2$, and \begin{equation*} D^{2}u_{t}=I+tD^{2}u_{1}+t^{2}D^{2}u_{2}+t^{3}D^{2}u_{3}+\cdots. \end{equation*} It follows that \begin{align*} 1=\det(D^{2}u_{t})&=\det(I+tD^{2}u_{1}+t^{2}D^{2}u_{2}+t^{3}D^{2}u_{3}+\cdots)\\ &=1+t\Delta u_{1}+t^{2}(\Delta u_{2}+\sigma_{2}(D^{2}u_{1}))+\cdots. \end{align*} Thus, $\Delta u_{1}=0$ and $\Delta u_{2}+\sigma_{2}(D^{2}u_{1})=0$. Since $\Delta u_{1}=0$, $u_{1}\rightarrow\frac{1}{2}(x_{1}^{2}-x_{2}^{2})$ and $(u_{1})_{r}=u_{1}$ on $\partial B_{1}$, we seek $u_{1}$ of the form \begin{equation*} u_{1}=a(r)\frac{1}{2}(x_{1}^{2}-x_{2}^{2}),~~~~r=\sqrt{x_{1}^{2}+\cdots+x_{d}^{2}}. \end{equation*} Then \begin{align*} \Delta u_{1}&=(\Delta a)\frac{1}{2}(x_{1}^{2}-x_{2}^{2})+2Da\cdot D(\frac{1}{2}(x_{1}^{2}-x_{2}^{2}))\\ &=(\Delta a)\frac{1}{2}(x_{1}^{2}-x_{2}^{2})+2\frac{a'}{r}(x_{1}^{2}-x_{2}^{2})\\ &=(\Delta a+\frac{4}{r}a')\frac{1}{2}(x_{1}^{2}-x_{2}^{2})=0. \end{align*} We require that \begin{equation*} \Delta a+\frac{4}{r}a'=a''+\frac{d-1}{r}a'+\frac{4}{r}a'=0. \end{equation*} Then we have that \begin{equation*} (r^{d+3}a')'=0, \end{equation*} so $r^{d+3}a'$ is constant. Integrating, and normalizing so that $a\rightarrow1$ as $r\rightarrow\infty$, we may write \begin{equation*} a=1+\frac{c}{r^{d+2}} \end{equation*} for some constant $c$, so that $a'=-(d+2)cr^{-(d+3)}$. Thus, $u_{1}=a(r)\frac{1}{2}(x_{1}^{2}-x_{2}^{2})\rightarrow \frac{1}{2}(x_{1}^{2}-x_{2}^{2})$. Furthermore, we also require that $(u_{1})_{r}=u_{1}$ on $\partial B_{1}$, that is, \begin{equation*} a'+a=0\quad\mbox{at }r=1. \end{equation*} This gives $-(d+2)c+1+c=0$, i.e., $c=\frac{1}{d+1}$. Therefore, we can conclude that $u_{1}=(1+\frac{1}{d+1}\frac{1}{r^{d+2}})\frac{1}{2}(x_{1}^{2}-x_{2}^{2})$. We have posed the condition $c\leq -\frac{1}{\lambda_{\min}}$ in Theorem \ref{cases:10} and Theorem \ref{cases:20}, where $\lambda_{\min} $ is the minimum eigenvalue of $A$. A natural question is whether our condition is sharp. We believe it is not. In fact, our condition is related to the construction of the supersolution in Section 3. If we can find new and more explicit sub- and supersolutions, we expect that the condition on $c$ can be improved.
To examine the above question, let us consider the rotationally symmetric case as an example. Suppose $a_{1}=\cdots=a_{d}=\binom{d}{k}^{1/k}$ in the exterior problem (\ref{baozhang3}), (\ref{baozhang2}) and (\ref{fd4'}), and that the domain $\Omega$ is a ball centered at the origin. Accordingly, we assume that $u(x)=u(r)$, $r=|x|$, is the solution of \eqref{baozhang3}. As in Section 3, a straightforward calculation shows that $h=u'/r$ satisfies the ODE \begin{equation}\label{ODE} h^k+\frac{k}{d}rh'h^{k-1}=\frac{1}{\binom{d}{k}}. \end{equation} For any given $r_0>0$, the solution of this equation is \begin{equation*} u(r)=\frac{1}{\binom{d}{k}^{1/k}}\int_{r_0}^{r}s\left(1+Cs^{-d}\right)^{\frac{1}{k}}ds. \end{equation*} The boundary conditions are $u=0$ and $u_r=1$ at $r=r_0$, which give $C=r_0^{d-k}\binom{d}{k}-r_0^d$. Moreover, as $r\rightarrow\infty$, we have that \begin{equation*} u(r)=\frac{r^2}{2\binom{d}{k}^{1/k}}+\mu(r_0)+O(r^{2-d}), \end{equation*} where \begin{equation*} \mu(r_0)=-\frac{r_0^2}{2\binom{d}{k}^{1/k}}+\frac{1}{\binom{d}{k}^{1/k}}\int_{r_0}^{\infty}s\left(\left(1+Cs^{-d}\right)^{1/k}-1\right)ds. \end{equation*} Let us discuss the range of $\mu(r_0)$ for $r_0>0$. The derivative of $\mu$ is $$\mu'(r_0)=-1+\frac{1}{k\binom{d}{k}^{1/k}}\left[(d-k)r_0^{d-k-1}\binom{d}{k}-dr_0^{d-1}\right]\int^{\infty}_{r_0}s^{1-d}(1+Cs^{-d})^{\frac{1}{k}-1}ds.$$ It is clear that for $d=k$ we always have $\mu'<0$, and $\mu(r_0)\rightarrow\bar{c}$ as $r_0\rightarrow0^{+}$. Therefore, in this case the sharp range is \begin{equation}\label{cbar} c< \bar{c}=\int_{0}^{\infty}s\left(\left(1+s^{-d}\right)^{1/d}-1\right)ds . \end{equation} It is obvious that $\bar{c}$ is finite. For $d>k$, let $$r_1=\binom{d}{k}^{1/k}, \quad r_2=\left(\frac{d-k}{d}\binom{d}{k}\right)^{1/k}, \quad \tau=\frac{s}{r_0}.$$ Then, we can rewrite $\mu'$ as $$\mu'(r_0)=-1+\frac{dr_0}{kr_1}(r_0^{-k}r_2^k-1)\int^{\infty}_{1}\tau^{1-d}(1+(r_0^{-k}r_1^k-1)\tau^{-d})^{\frac{1}{k}-1}d\tau.$$ Note that $r_1>r_2$. If $r_0\geq r_2$, we have $\mu'<0$. Therefore, $\mu(r_0)\leq \mu(r_2)$ for $r_0\geq r_2$, and in particular $\mu(r_1)< \mu(r_2)$. It is easy to see that $\mu(r_1)=-\frac{r_1}{2}$, since $C=0$ when $r_0=r_1$. Since the condition in our main Theorems is $c<\mu(r_1)$, in view of $\mu(r_1)<\mu(r_2)$, we see that our condition is not sharp. In fact, $r_2$ is also not sharp. Let $r_3$ denote the maximum point of $\mu$. We now show that $r_3>0$ and that the maximum value is positive. Let $N=\frac{r_1}{r_0}$. We can further rewrite $\mu'$ as \begin{eqnarray} \mu'(r_0)&=&-1+\frac{d}{kN}((r_2/r_1)^{k}N^k-1)\int^{\infty}_{1}\tau^{1-d}(1+(N^k-1)\tau^{-d})^{\frac{1}{k}-1}d\tau\nonumber\\ &=&-1+\frac{d}{k}((r_2/r_1)^{k}-N^{-k})\int^{\infty}_{1}\tau^{1-d}(N^{-k}+(1-N^{-k})\tau^{-d})^{\frac{1}{k}-1}d\tau\nonumber. \end{eqnarray} When $r_0\rightarrow 0$, we have $N^{-1}\rightarrow 0$. Thus, we get, for $d>k$, $d\neq 2k$, $$\mu'(0)=-1+\frac{d}{k}\frac{r^k_2}{r^k_1}\int^{\infty}_{1}\tau^{1-d/k}d\tau=-1+\frac{d}{k}\frac{r_2^k}{r_1^k} \left.\frac{\tau^{2-d/k}}{2-d/k}\right|^{\infty}_1.$$ Then, we have, for $d>2k$, $\mu'(0)=-1+\frac{d-k}{d-2k}=\frac{k}{d-2k}>0$. For $k<d<2k$, $\mu'(0)=\infty>0$. For $d=2k$, we have $$\mu'(0)=-1+\frac{d-k}{k}\int_1^{\infty}\tau^{-1}d\tau=\infty.$$ Since $\mu\rightarrow -\infty$ when $r_0\rightarrow \infty$ and $\mu(0)\geq 0$, $\mu'(0)>0$, we conclude that $\mu$ attains a positive maximum at some point $r_3$ between $0$ and $r_2$. However, since the equation $\mu'=0$ is not easy to solve for $k\geq 2$, we cannot find an explicit expression for $r_3$. For $k=1$, since $$\int^{\infty}_{1}\tau^{1-d}d\tau=\frac{1}{d-2},$$ we get $\mu'(r_0)=\frac{1-dr_0r_1^{-1}}{d-2}$.
Thus, $r_3=1$ is the maximum point of $\mu$, and we can calculate that $$\mu(1)=\frac{1}{2d}+\frac{1}{d(d-2)}.$$ Then, we get the sharp condition $c\leq \mu(1)$. We conclude that, for $k=1$ and $k=d$, we can find the sharp range of the constant $c$. For $1<k<d$, the sharp point may not be easy to obtain. We now consider solutions $u$ of \eqref{baozhang3} with the following appropriate asymptotic behavior at infinity: \begin{equation}\label{fd40} \limsup\limits_{|x|\rightarrow\infty}|x|^{d_{k}^{*}-2}\left|u(x)-\frac{1}{2}x\cdot Ax-b\cdot x-c\right|<\infty, \end{equation} where $d_{k}^{*}=d_{k}^{*}(\lambda(A))$ is defined by \begin{equation}\label{dk} d_{k}^{*}(\lambda(A)):=\frac{k}{\max\limits_{1\leq i\leq d}\lambda_{i}(A)\sigma_{k-1;i}(\lambda(A))},~~\sigma_{k-1;i}(\lambda(A)):=\sigma_{k-1}(\lambda(A))|_{\lambda_{i}(A)=0}. \end{equation} Multiplying the equation (\ref{baozhang3}) by $\langle x,Du\rangle$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{zhuoyue} \int_{B_{R}\setminus\bar{\Omega}}\langle x,Du\rangle\sigma_{k}(\lambda(D^{2}u))=\int_{B_{R}\setminus\bar{\Omega}}\langle x,Du\rangle. \end{equation} The right hand side can be computed as \begin{align} \int_{B_{R}\setminus\bar{\Omega}}\langle x,Du\rangle&=-d\int_{B_{R}\setminus\bar{\Omega}}u+R\int_{\partial B_{R}}u\nonumber\\ &=-d\int_{B_{R}\setminus\bar{\Omega}}u+\frac{1}{2}d\hat{a}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha}).\label{youxiu} \end{align} Note that \begin{equation*} k\langle x,Du\rangle\sigma_{k}(\lambda(D^{2}u))=(x_{l}u_{l}\sigma_{k}^{ij}u_{j})_{i}-\frac{1}{2}(x_{l}\sigma_{k}^{ij}u_{i}u_{j})_{l}+\frac{d-2}{2}\sigma_{k}^{ij}u_{i}u_{j}+\frac{1}{2}(x_{l}\frac{\partial\sigma_{k}^{ij}}{\partial x_{l}}uu_{j})_{i}. \end{equation*} Then the left hand side of (\ref{zhuoyue}) can be computed as \begin{align*} \int_{B_{R}\setminus\bar{\Omega}}\langle x,Du\rangle\sigma_{k}(\lambda(D^{2}u))&=\frac{d-2}{2k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}-\frac{1}{2k}\int_{\partial\Omega}\langle x,Du\rangle\sigma_{k}^{ij}u_{i}u_{j}\\ &\quad\quad+\frac{1}{kR}\int_{\partial B_{R}}\langle x,Du\rangle\sigma_{k}^{ij}u_{j}x_{i}-\frac{R}{2k}\int_{\partial B_{R}}\sigma_{k}^{ij}u_{i}u_{j}+\frac{1}{2kR}\int_{\partial B_{R}}x_{l}u\frac{\partial \sigma_{k}^{ij}}{\partial x_{l}}u_{j}x_{i}\\ &=\frac{d-2}{2k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}-\frac{1}{2k}\int_{\partial\Omega}\langle x,Du\rangle\sigma_{k}^{ij}u_{i}u_{j}\\ &\quad\quad+\frac{d}{2k}\binom{d-1}{k-1}\hat{a}^{k+1}R^{2}|B_{R}|+O(R^{d-\alpha}). \end{align*} Inserting the above equality and (\ref{youxiu}) into (\ref{zhuoyue}), we have that \begin{equation}\label{xuhoubao} \frac{d-2}{2k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}+d\int_{B_{R}\setminus\bar{\Omega}}u=\frac{d-1}{2}\hat{a}R^{2}|B_{R}|+cd|B_{R}|+O(R^{d-\alpha})+\frac{1}{2k}\int_{\partial\Omega}\langle x,Du\rangle\sigma_{k}^{ij}u_{i}u_{j}. \end{equation} Multiplying the equation (\ref{baozhang3}) by $u$ and integrating on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{weilai} \int_{B_{R}\setminus\bar{\Omega}}u\sigma_{k}(\lambda(D^{2}u))=\int_{B_{R}\setminus\bar{\Omega}}u. \end{equation} The left hand side can be computed as \begin{align*} \int_{B_{R}\setminus\bar{\Omega}}u\sigma_{k}(\lambda(D^{2}u))&=-\frac{1}{k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}+\frac{1}{kR}\int_{\partial B_{R}}u\sigma_{k}^{ij}u_{j}x_{i}\\ &=-\frac{1}{k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}+\frac{1}{2}\hat{a}R^{2}|B_{R}|+c|B_{R}|+O(R^{d-\alpha}).
\end{align*} Inserting the above equality into (\ref{weilai}), we have that \begin{equation}\label{zhaohao} \frac{1}{k}\int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}+\int_{B_{R}\setminus\bar{\Omega}}u=\frac{1}{2}\hat{a}R^{2}|B_{R}|+c|B_{R}|+O(R^{d-\alpha}). \end{equation} Solving from (\ref{xuhoubao}) and (\ref{zhaohao}), we have that \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}\sigma_{k}^{ij}u_{i}u_{j}=\frac{k}{d+2}\hat{a}R^{2}|B_{R}|-\frac{1}{d+2}\int_{\partial\Omega}\langle x,Du\rangle\sigma_{k}^{ij}u_{i}u_{j}+O(R^{d-\alpha}), \end{equation*} and \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}u=\frac{d}{2(d+2)}\hat{a}R^{2}|B_{R}|+c|B_{R}|+\frac{1}{k(d+2)}\int_{\partial\Omega}\langle x,Du\rangle\sigma_{k}^{ij}u_{i}u_{j}+O(R^{d-\alpha}). \end{equation*} Since $u$ satisfies the asymptotic behavior (\ref{fd4--}), we have that \begin{equation*} Du(x)=\hat{a}x+O(|x|^{-\alpha-1}),~~~~D^{2}u=\hat{a}I+O(|x|^{-\alpha-2}),~~\mbox{as }|x|\rightarrow\infty. \end{equation*} Then for sufficiently large $R>0$, we can compute the following boundary terms: \begin{equation*} \int_{\partial B_{R}}\langle x,Du\rangle=R|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}\langle x,Du\rangle^{2}=\frac{1}{d}R^{3}|B_{R}|+O(R^{d+1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}|Du|^{2}=\frac{1}{d}R|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}u=\frac{1}{2}R|B_{R}|+cdR^{-1}|B_{R}|+O(R^{d-1-\alpha}), \end{equation*} \begin{equation*} \int_{\partial B_{R}}u\langle x,Du\rangle=\frac{1}{2d}R^{3}|B_{R}|+cR|B_{R}|+O(R^{d+1-\alpha}). \end{equation*} Integrating the equation (\ref{la}) on $B_{R}\setminus\bar{\Omega}$, we have that \begin{equation}\label{yaduo} \int_{B_{R}\setminus\bar{\Omega}}\Delta u=|B_{R}|-|\Omega|. \end{equation} The left hand side can be computed as \begin{equation*} \int_{B_{R}\setminus\bar{\Omega}}\Delta u=\frac{1}{R}\int_{\partial B_{R}}\langle x,Du\rangle-\int_{\partial\Omega}\frac{\partial u}{\partial\nu}=|B_{R}|-|\partial\Omega|+O(R^{d-2-\alpha}). \end{equation*} Inserting the above equality into (\ref{yaduo}) and letting $R\rightarrow\infty$, we have that \begin{equation}\label{luogu} |\Omega|=|\partial\Omega|. \end{equation}
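For illustration, if $\Omega$ is a ball $B_{\rho}(0)$, then, writing $\alpha_{d}$ for the volume of the unit ball in $\mathbb{R}^{d}$ (a notation used only for this example), \begin{equation*} |B_{\rho}|=\alpha_{d}\rho^{d}\quad\mbox{and}\quad |\partial B_{\rho}|=d\,\alpha_{d}\rho^{d-1}, \end{equation*} so the necessary condition (\ref{luogu}) forces $\rho=d$.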
2412.11935v1
http://arxiv.org/abs/2412.11935v1
Riesz Bases in Krein Spaces
\documentclass[]{interact} \usepackage{epstopdf}\usepackage[caption=false]{subfig} \usepackage[numbers,sort&compress]{natbib} \renewcommand\bibfont{\fontsize{10}{12}\selectfont}\makeatletter\def\NAT@def@citea{\def\@citea{\NAT@separator}}\makeatother\usepackage{anyfontsize} \theoremstyle{plain}\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \usepackage{xcolor} \def\cb{\color{blue}} \theoremstyle{remark} \newtheorem{notation}{Notation} \newcommand \+{^{\dagger}} \begin{document} \title{Riesz Bases in Krein Spaces} \author{ \name{Shah Jahan\textsuperscript{a}, P. Sam Johnson\textsuperscript{b}} \affil{\textsuperscript{a} Department of Mathematics, Central University of Haryana, Mahendergarh 123029, Haryana, India. Email address: [email protected]} \textsuperscript{b} Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka (NITK), Surathkal, Mangaluru 575025, India. Email address: [email protected]} \maketitle \begin{abstract} We start by introducing and studying the definition of a Riesz basis in a Krein space $(\mathcal{K},[.,.])$, along with a condition under which a Riesz basis becomes a Bessel sequence. The concept of biorthogonal sequence in Krein spaces is also introduced, providing an equivalent characterization of a Riesz basis. Additionally, we explore the concept of the Gram matrix, defined as the sum of a positive and a negative Gram matrices, and specify conditions under which the Gram matrix becomes bounded in Krein spaces. Further, we characterize the conditions under which the Gram matrices $\{[f_n,f_j]_{n,j \in I_+}\}$ and $\{[f_n,f_j]_{n,j \in I_-}\}$  become bounded invertible operators. Finally, we provide an equivalent characterization of a Riesz basis in terms of Gram matrices. \end{abstract} \begin{keywords} Krein space, Riesz basis, biorthogonal sequence, Gram matrix, frame sequence. \end{keywords} \begin{amscode}42C15, 46C05, 46C20.\end{amscode} \maketitle \section{Introduction}\label{sec1} Hilbert space frames were originally introduced by Duffin and Schaeffer \cite{g} in 1952 to deal with some problems in non-harmonic Fourier analysis. After some decades, Daubechies, Grossmann and Meyer \cite{f} announced formally the definition of frame in the abstract Hilbert spaces in $1986$.  After their work, frame theory began to be widely used, particularly in the more specialized context of wavelet frames and Gabor frames. The linear independence property for a basis, which allows every vector to be uniquely represented as a linear combination, is very restrictive for practical problems. Frames allow each element in the space to be written as a linear combination of the elements in the frame, but linear independence is not required. Frames can be viewed as redundant bases which are generalizations of Riesz bases. This redundancy property is extremely important in applications. Let $H$ be a Hilbert space and $I$ be a countable index set. A collection $\{f_n\}_{n \in I}$ in $H$ is called a frame for $H$ if there exist positive constants $0< A\leqslant B< \infty$ such that \begin{align}\label{h1} A\|f\|^2 \leqslant \sum_{n \in I} |\langle f, f_n \rangle|^2 \leqslant B\|f\|^2 ,\quad \text{for all} \ f \in H. 
\end{align} The bounded linear operator $S:H\rightarrow H$ defined by \begin{align*} Sf=\sum_{n \in I}\langle f,f_n \rangle f_n,~~~~ f \in H \end{align*} is known as the frame operator associated to the frame $\{f_n\}_{n \in I}.$ This operator $S$ is bounded invertible, positive and self adjoint. It allows to reconstruct each vector in terms of the sequence $\{f_n\}_{n \in I}$ as follows: \begin{align}\label{h2} f = \sum_{n \in I}\langle f,S^{-1}f_n \rangle f_n=\sum_{n \in I}\langle f,f_n \rangle S^{-1}f_n.\end{align} The formula (\ref{h2}) is known as reconstruction formula associated to $\{f_n\}_{n \in I}$. If $S=I,$ then the reconstruction formula resembles the Fourier series of $f$ associated with the orthonormal sequence $\{f_n\}_{n \in I}.$\\ Giribet et al. \cite{k} introduced and studied frames for Krein spaces. The authors have given a new notion of frames in Krein spaces in \cite{s}. In this paper, we study the concept of Riesz basis in Krein spaces by decomposing basis in a natural way and obtain some new results on frames and Riesz bases. We organize the paper as follows. In Section 2, definition and basic results of a Krein space are given which are required in this paper. In Section 3, we define the concept of a Riesz basis in Krein spaces and a necessary and sufficient condition is given for a sequence to be a Riesz basis. In Section 4, we define Gram matrix in Krein spaces and study properties of Gram matrix.  Further, we show that if $\{f_n\}_{n \in I}$ is a Bessel sequence, then the Gram matrix defines bounded invertible operators on $\ell_2(I_+)$ and $\ell_2(I_-).$ \section{Preliminaries} We begin this section with the definition of Krein space and some notations which are used in sequel. For more detail about Krein spaces, one may refer to \cite{a,b,d}. \begin{definition}A linear space $\mathcal{K}$ with a $Q$-metric $[x,y],$ which admits a canonical decomposition $\mathcal{K}=\mathcal{K}^{+} \oplus \mathcal{K}^{-}$ in which $\mathcal{K}^{+}$ and $\mathcal{K}^{-}$ are complete relative to the norm $\|x\|=[x,x]^\frac{1}{2}$ and $\|x\|=(-[x,x])^\frac{1}{2}$ respectively is called a Krein space. \end{definition}Through out this paper $(\mathcal{K},[.,.])$ denotes a Krein space, which means that $\mathcal{K}$ can be written as the direct $[.,.]$-orthogonal sum $\mathcal{K}^+\oplus \mathcal{K}^- $ of Hilbert spaces. Two vectors in $ \mathcal{K}$ are said to be orthogonal if $[x,y]=0$ and we denote this by the symbol $x[\bot]y.$ By orthonormal basis, we mean a family $\{e_i\}_{i \in I} \in \mathcal{K}$ such that $[e_i,e_j]=\pm\delta_{ij},$ for all $i,j \in I.$ If $I_+=\{n \in I : [f_n,f_n]\geq 0\}$ and $I_-=\{n \in I : [f_n,f_n]< 0\},$ then the orthogonal decomposition of $\ell_2(I)$ is given by $\ell_2(I)=\ell_2(I_+)\oplus \ell_2(I_-).$ \begin{lemma}\label{rl} Let $( \mathcal{K},[.,.])$ be a Krein space. \begin{enumerate} \item [(a)] If $x,y \in \mathcal{K^+}$ satisfying $[x,z]=[y,z], \text{for all} z \in \mathcal{K^+}$, then $x=y$. \item [(b)] If $x,y \in \mathcal{K^-}$ satisfying $[x,z]=[y,z], \text{for all} z \in \mathcal{K^-}$, then $x=y$. \end{enumerate} \end{lemma} \section{Riesz basis} We begin this section with the definition of Riesz basis in Krein spaces and followed by we prove that a Riesz basis is a Bessel sequence. 
\begin{definition} A Riesz basis for $( \mathcal{K},[.,.])$ is a system of the form $\mathcal{F}= (\{U^+e_n\}_{n \in I_+}\cup\{U^-e_n\}_{n \in I_-})$, where $\{e_n\}_{n \in I_+}$ is an orthonormal basis for $\mathcal{K}^+$, $\{e_n\}_{n \in I_-}$ is an orthonormal basis for $\mathcal{K}^-$, and $U^+: \mathcal{K^+} \rightarrow \mathcal{K^+}$, $U^-: \mathcal{K^-} \rightarrow \mathcal{K^-}$ are bounded bijective operators. \end{definition} \begin{theorem}\label{T1} Let $\{f_n\}_{n \in I}$ be a Riesz basis for $( \mathcal{K},[.,.]).$ Then $\{f_n\}_{n \in I}$ is a Bessel sequence with Bessel bounds $\|U^+\|^2$ and $\|U^-\|^2$ respectively. Moreover, there exist unique sequences $\{g_n\}_{n \in I_+}$ in $\mathcal{K^+}$ and $\{g_n\}_{n \in I_-}$ in $\mathcal{K^-}$ such that \begin{equation} f = \sum_{n \in I_+}[f,g_n]f_n, \ \mbox{for all} \ f \in \mathcal{K^+} \end{equation}and \begin{equation} f = \sum_{n \in I_-}[f,g_n]f_n, \ \mbox{for all} \ f \in \mathcal{K^-}. \end{equation} The sequences $\{g_n\}_{n \in I_+}$ and $\{g_n\}_{n \in I_-}$ are Riesz bases and both series converge unconditionally. \end{theorem} \begin{proof} Since $\{f_n\}_{n \in I}$ is a Riesz basis for the Krein space $( \mathcal{K},[.,.])$, we can write $\{f_n\}_{n \in I}= (\{U^+e_n\}_{n \in I_+}\cup\{U^-e_n\}_{n \in I_-})$, where $\{e_n\}_{n \in I_+}$ is an orthonormal basis for $\mathcal{K}^+$, $\{e_n\}_{n \in I_-}$ is an orthonormal basis for $\mathcal{K}^-$ and $U^+: \mathcal{K^+} \rightarrow \mathcal{K^+}$, $U^-: \mathcal{K^-} \rightarrow \mathcal{K^-}$ are bounded bijective operators. Let $f \in \mathcal{K^+}$. By expanding $(U^+)^{-1}f$ in terms of the orthonormal basis $\{e_n\}_{n \in I_+}$, we have \begin{eqnarray*} (U^+)^{-1}f&=&\sum_{n \in I_+}[(U^+)^{-1}f,e_n]e_n \\&=&\sum_{n \in I_+}[f, ((U^+)^{-1})^*e_n]e_n. \end{eqnarray*} Now with $g_n=((U^+)^{-1})^*e_n,$ we can write \begin{eqnarray*} f&=&U^+(U^+)^{-1}f \\&=&\sum_{n \in I_+}[f, ((U^+)^{-1})^*e_n]U^+e_n \\&=&\sum_{n \in I_+}[f,g_n]f_n. \end{eqnarray*} Similarly, one can easily prove that for all $f \in \mathcal{K^-}$, with $g_n=((U^-)^{-1})^*e_n,$ we have \begin{eqnarray*} f &=&\sum_{n \in I_-}[f, ((U^-)^{-1})^*e_n]U^-e_n \\&=&\sum_{n \in I_-}[f,g_n]f_n. \end{eqnarray*} Since both operators $((U^+)^{-1})^*$ and $((U^-)^{-1})^*$ are bounded and bijective, $\{g_n\}_{n \in I_+}$ and $\{g_n\}_{n \in I_-}$ are Riesz bases for $\mathcal{K^+}$ and $\mathcal{K^-}$ respectively. Now for $f \in \mathcal{K^+}$, we have \begin{eqnarray*}\sum_{n \in I_+}|[f,f_n]|^2 &=& \sum_{n \in I_+}|[f,U^+e_n]|^2 \\&=&\|(U^+)^*f\|^2 \\&\leq& \|(U^+)^*\|^2\|f\|^2 \\&=& \|U^+\|^2\|f\|^2.\end{eqnarray*} So $\{f_n\}_{n \in I_+}$ is a Bessel sequence with Bessel bound $\|U^+\|^2$, and for $f \in \mathcal{K^-}$ one can similarly prove that $\{f_n\}_{n \in I_-}$ is a Bessel sequence with Bessel bound $\|U^-\|^2.$ Hence $\{f_n\}_{n \in I}$ is a Bessel sequence for the Krein space $( \mathcal{K},[.,.]).$ The unconditional convergence of the series follows from Corollary 3.3 in \cite{s}. Now, to complete the proof we only need to show that $\{g_n\}_{n \in I_+}$ and $\{g_n\}_{n \in I_-}$ are the unique sequences in $\mathcal{K^+}$ and $\mathcal{K^-}$ with these properties. For this, suppose that $\{g_n\}_{n \in I_+}$ and $\{h_n\}_{n \in I_+}$ are two sequences in $\mathcal{K^+}$ such that \begin{eqnarray}\label{r1} f= \sum_{n \in I_+}[f,g_n]f_n = \sum_{n \in I_+}[f,h_n]f_n, \ \mbox{for all} \ f \in \mathcal{K^+}.
\end{eqnarray} Since $\{f_n\}_{n \in I_+}$ is a Riesz basis for $\mathcal{K^+}$, the coefficients in such expansions are unique, so (\ref{r1}) gives \begin{equation*} [f,g_n] = [f,h_n], \ \mbox{for all} \ f \in \mathcal{K^+}. \end{equation*} Similarly, if $\{g_n\}_{n \in I_-}$ and $\{h_n\}_{n \in I_-}$ are two such sequences in $\mathcal{K^-}$, then $[f,g_n] = [f,h_n], \mbox{ for all } f \in \mathcal{K^-}.$ From these equalities and by Lemma \ref{rl}, the uniqueness follows. \end{proof} \noindent The following is the definition of a biorthogonal sequence in a Krein space $(\mathcal{K},[.,.]).$ \begin{definition} Two sequences $\{f_n\}_{n \in I}$ and $\{g_n\}_{n \in I}$ are said to be biorthogonal in a Krein space $( \mathcal{K},[.,.])$ if \begin{equation}[f_n, g_m]= \pm \delta_{nm}. \end{equation} \end{definition} \begin{theorem} Let $\{f_n\}_{n \in I}= (\{U^+e_n\}_{n \in I_+}\cup\{U^-e_n\}_{n \in I_-})$ be a Riesz basis for $( \mathcal{K},[.,.])$. Then there exist positive constants $A,B,A^{'} ,B^{'}$ such that \begin{equation} A\|f\|^2 \leq \sum_{n \in I_+}|[f,f_n]|^2\leq B\|f\|^2, \ \mbox{for all} \ f \in \mathcal{K^+} \end{equation} and \begin{equation} A^{'}\|f\|^2 \leq \sum_{n \in I_-}|[f,f_n]|^2\leq B^{'}\|f\|^2, \ \mbox{for all} \ f \in \mathcal{K^-}. \end{equation} Moreover, the largest possible values of $A$, $A^{'}$ are $\frac{1}{\|(U^+)^{-1}\|^2}$, $\frac{1}{\|(U^-)^{-1}\|^2}$ and the smallest possible values of $B$, $B^{'}$ are $\|U^+\|^2$, $\|U^-\|^2$, respectively. \end{theorem} \begin{proof} By Theorem \ref{T1}, if $\{f_n\}_{n \in I}=(\{U^+e_n\}_{n \in I_+}\cup\{U^-e_n\}_{n \in I_-})$ is a Riesz basis for the Krein space $( \mathcal{K},[.,.]),$ then it is a Bessel sequence with bounds $\|U^+\|^2$ and $\|U^-\|^2.$ Next we prove the result for the lower bounds. For this, let $f \in \mathcal{K^+}$. Then \begin{equation} \sum_{n \in I_+}|[f, f_n]|^2 = \sum_{n \in I_+}|[f, U^+e_n]|^2 = \|(U^+)^*f\|^2 \label{O1} \end{equation} and we can write \begin{eqnarray} \notag \|f\|&=&\|((U^+)^*)^{-1}(U^+)^*f\| \\ \notag &\leq& \|((U^+)^*)^{-1}\| \|(U^+)^*f\| \\&=& \|(U^+)^{-1}\| \|(U^+)^*f\|. \label{O2} \end{eqnarray} Therefore from (\ref{O1}) and (\ref{O2}), we conclude that \begin{eqnarray*} \frac{1}{\|(U^+)^{-1}\|^2}\|f\|^2 \leq \sum_{n \in I_+}|[f, f_n]|^2, \ \mbox{for all} \ f \in \mathcal{K^+}, \end{eqnarray*} and similarly for $f \in \mathcal{K^-},$ one can easily prove that \begin{eqnarray*} \frac{1}{\|(U^-)^{-1}\|^2}\|f\|^2 \leq \sum_{n \in I_-}|[f, f_n]|^2, \ \mbox{for all} \ f \in \mathcal{K^-}. \end{eqnarray*} \end{proof} Let $\mathcal{K}_1$ and $\mathcal{K}_2$ be Krein spaces such that $\mathcal{K}_1 = \mathcal{K}^+_1\oplus \mathcal{K}^-_1$ and $\mathcal{K}_2 = \mathcal{K^+}_2\oplus \mathcal{K^-}_2.$ Let $\{h_n\}_{n \in I_+}, \{g_n\}_{n \in I_+}, \{h_n\}_{n \in I_-}$ and $\{g_n\}_{n \in I_-}$ be sequences in $\mathcal{K}^+_1$, $\mathcal{K}^+_2$, $\mathcal{K}^-_1$ and $\mathcal{K}^-_2$ respectively. Assume that $\{g_n\}_{n \in I_+}$ and $\{g_n\}_{n \in I_-}$ are Bessel sequences with bounds $B$ and $B^{'}$ respectively, that $\{h_n\}_{n \in I_+}, \{h_n\}_{n \in I_-}$ are complete in $\mathcal{K}^+_1$ and $\mathcal{K}^-_1$ respectively, and that there exist positive constants $A$ and $A^{'}$ such that \begin{eqnarray*} A\sum_{n \in I_+}|c_n|^2 \leq \Big\|\sum_{n \in I_+}c_nh_n\Big\|^2, \ \text{for all scalar sequences }\{c_n\}_{n \in I_+} \in \ell_2(I_+) \end{eqnarray*} and \begin{eqnarray*} A^{'}\sum_{n \in I_-}|c_n|^2 \leq \Big\|\sum_{n \in I_-}c_nh_n\Big\|^2,\ \text{for all scalar sequences }\{c_n\}_{n \in I_-} \in \ell_2(I_-).
\end{eqnarray*} Since $\{h_n\}_{n \in I_+}$ is complete in $\mathcal{K}^+_1$, every $h \in \text{span}\{h_n ; n \in I_+\}$ has a unique representation $h= \sum_{n \in I_+}c_nh_n$, where $\{c_n\}_{n \in I_+} \in \ell_2(I_+).$ Then \begin{eqnarray*} U^+\Big(\sum_{n \in I_+}c_nh_n\Big)= \sum_{n \in I_+}c_ng_n, \ \text{for all scalar sequences }\{c_n\}_{n \in I_+} \in \ell_2(I_+) \end{eqnarray*} and \begin{eqnarray*} U^-\Big(\sum_{n \in I_-}c_nh_n\Big)= \sum_{n \in I_-}c_ng_n, \ \text{for all scalar sequences }\{c_n\}_{n \in I_-} \in \ell_2(I_-),\end{eqnarray*} define bounded linear operators from $\text{span}\{h_n: n \in I_+\}$ into $\text{span}\{g_n: n \in I_+\}$ and $\text{span}\{h_n: n \in I_-\}$ into $\text{span}\{g_n: n \in I_-\}$ respectively. Indeed, $U^+$ is a well-defined linear operator and for $\{c_n\}_{n \in I_+} \in \ell_2(I_+)$ we have \begin{eqnarray*} \Big\|U^+\Big(\sum_{n \in I_+}c_nh_n\Big)\Big\|^2&=&\Big\|\sum_{n \in I_+}c_ng_n \Big\|^2 \\&\leq&B\sum_{n \in I_+}|c_n|^2 \\&\leq&\frac{B}{A}\Big\|\sum_{n \in I_+}c_nh_n\Big\|^2. \end{eqnarray*}Therefore, \begin{eqnarray*} \Big\|U^+\Big(\sum_{n \in I_+}c_nh_n\Big)\Big\| &\leq&\sqrt{\frac{B}{A}}\Big\|\sum_{n \in I_+}c_nh_n\Big\|.\end{eqnarray*}Similarly, as $\{h_n\}_{n \in I_-}$ is complete in $\mathcal{K}^-_1$, by taking $\{c_n\}_{n \in I_-}\in \ell_2(I_-)$ we have \begin{eqnarray*} \Big\|U^-\Big(\sum_{n \in I_-}c_nh_n\Big)\Big\| &\leq&\sqrt{\frac{B^{'}}{A^{'}}}\Big\|\sum_{n \in I_-}c_nh_n\Big\|.\end{eqnarray*} Hence $U^+$ and $U^-$ are bounded operators and have extensions to bounded operators on $\mathcal{K}^+_1$ and $\mathcal{K}^-_1$ respectively. The following theorem gives a characterization of Riesz bases in Krein spaces. \begin{theorem}\label{rt} Let $\{f_n\}_{n \in I}$ be a sequence in $(\mathcal{K},[.,.]).$ Then the following statements are equivalent: \begin{enumerate} \item [(a)] $\{f_n\}_{n \in I}$ is a Riesz basis for $(\mathcal{K},[.,.]).$ \item [(b)] $\{f_n\}_{n \in I}$ is complete in $(\mathcal{K},[.,.])$ and there exist positive constants $A,B,A^{'}$ and $B^{'}$ such that for all $\{c_n\}_{n \in I_+} \in \ell_2(I_+)$ we have \begin{equation}\label{a1} A\sum_{n \in I_+}|c_n|^2 \leq \Big\|\sum_{n \in I_+}c_nf_n\Big\|^2 \leq B\sum_{n \in I_+}|c_n|^2 \end{equation} and for all $\{c_n\}_{n \in I_-} \in \ell_2(I_-)$ we have \begin{equation}\label{a2} A^{'}\sum_{n \in I_-}|c_n|^2 \leq \Big\|\sum_{n \in I_-}c_nf_n\Big\|^2 \leq B^{'}\sum_{n \in I_-}|c_n|^2. \end{equation} \end{enumerate} \end{theorem} \begin{proof}(a) $\Rightarrow$ (b) : First suppose that $\{f_n\}_{n \in I}$ is a Riesz basis for the Krein space $(\mathcal{K},[.,.])$ and write $\{f_n\}_{n \in I}= \{U^+e_n\}_{n \in I_+} \cup \{U^-e_n\}_{n \in I_-}$ as in the definition. Also, $\{f_n\}_{n \in I_+}$ is complete as a consequence of Theorem \ref{T1}, and for any $\{c_n\}_{n \in I_+} \in \ell_2(I_+)$ we have \begin{eqnarray*} \Big\|\sum_{n \in I_+}c_nf_n\Big\|^2&=& \Big\|U^+\Big(\sum_{n \in I_+}c_ne_n\Big)\Big\|^2 \\&\leq&\|U^+\|^2\Big\|\sum_{n \in I_+}c_ne_n\Big\|^2\\&=& \|U^+\|^2\sum_{n \in I_+}|c_n|^2 \end{eqnarray*} and \begin{eqnarray*} \Big\|\sum_{n \in I_+}c_ne_n\Big\|^2&=&\Big\|{U^+}^{-1}U^+\Big(\sum_{n \in I_+}c_ne_n\Big)\Big\|^2 \\&\leq&\|{U^+}^{-1}\|^2\Big\|\sum_{n \in I_+}c_nf_n\Big\|^2. \end{eqnarray*} Hence \begin{eqnarray*} \frac{1}{\|{U^+}^{-1}\|^2}\sum_{n \in I_+}|c_n|^2 \leq \Big\|\sum_{n \in I_+}c_nf_n\Big\|^2 \leq \|U^+\|^2\sum_{n \in I_+}|c_n|^2.
\end{eqnarray*} Similarly, by Theorem \ref{T1}, $\{f_n\}_{n \in I_-}$ is complete and for $\{c_n\}_{n \in I_+} \in \ell_2(I_-)$ we get \begin{eqnarray*} \frac{1}{ \|{U^-}^{-1}\|^2}\sum_{n \in I_-}|c_n|^2 \leq \Big\|\sum_{n \in I_-}c_nf_n\Big\|^2 \leq \|U^-\|^2\sum_{n \in I_-}|c_n|^2.\end{eqnarray*} (b)$\Rightarrow$ (a) : The inequalities  (\ref{a1}) and (\ref{a2}) imply that $\{f_n\}_{n \in I_+}$ and $\{f_n\}_{n \in I_-}$ are Bessel sequences with Bessel bounds $B$ and $B^{'}$ respectively. Let $\{e_n\}_{n \in I_+}$ and $\{e_n\}_{n \in I_-}$ be orthonormal bases for $\mathcal{K^+}$ and $\mathcal{K^-}$ respectively and we can extend $U^+_1e_n= f_n$ to a bounded operators on   $\mathcal{K}^+_1$ and $U^-_1e_n= -f_n$ to a bounded operator on $\mathcal{K}^-_1.$ In a similar fashion, extend ${U_2}^+f_n=e_n$ to a bounded operator on $\mathcal{K}^+_2$ and ${U_2}^-f_n=-e_n$ to a bounded operator on $\mathcal{K}^-_2.$ Then ${U_2}^+U^+_1=U^+_1{U_2}^+=I$ and ${U_2}^-U^-_1=U^-_2{U_2}^-=I,$ so $U^+_1$ and $U^-_1$ are invertible operators. Hence $\{f_n\}_{n \in I}$ is a Riesz basis for $(\mathcal{K},[.,.]).$ \end{proof} \section{Gram matrix} In this section, we introduce the concept of Gram matrix in $(\mathcal{K},[.,.])$ and prove some of its properties. Let $\{f_n\}_{n \in I}\subset \mathcal{K}$ be a Bessel sequence, we can compose the synthesis operator $T$ and its adjoint $T^*$; where $T=T^++T^-$ and $T^*={T^+}^*+{T^-}^*.$ We obtain the bounded operators ${T^+}^*T^+: \ell_2(I_+)\longrightarrow \ell_2(I_+)$ defined by \begin{eqnarray*} {T^+}^*T^+\{c_n\}_{n \in I_+}=\Big\{[\sum_{n \in I_+}c_nf_n,f_n]\Big\} \end{eqnarray*} and ${T^-}^*T^-: \ell_2(I_-)\longrightarrow \ell_2(I_-)$ defined by \begin{eqnarray*} {T^-}^*T^-\{c_n\}_{n \in I_-}=\Big\{[\sum_{n \in I_-}c_nf_n,f_n]\Big\} \end{eqnarray*} Let $\{e_n\}_{n \in I_+}$ and $\{e_n\}_{n \in I_-}$ be the canonical orthonormal bases for $\ell_2(I_+)$ and $\ell_2(I-)$ respectively. Then the $ij$-th entries in the matrix representations for ${T^+}^*T^+$ and ${T^-}^*T^-$ are \begin{eqnarray*} [ {T^+}^*T^+ e_i,e_j ]=[T^+ e_i,T^+e_j ]\Big\{([f_i,f_j]_{i,j \in I_+}\Big\} \end{eqnarray*} and \begin{eqnarray*} [ {T^-}^*T^- e_i,e_j ]=[T^- e_i,T^-e_j ]=\Big\{[f_i,f_j]_{i,j \in I_-}\Big\}. \end{eqnarray*} The matrix $\Big\{[f_i,f_j]_{i,j \in I_+}\Big\}$ is called the positive Gram matrix and $\Big\{[f_i,f_j]_{i,j \in I_-}\Big\}$ is called the negative Gram matrix associated with $\{f_n\}_{n \in I_+}$ and $\{f_n\}_{n \in I_-}$ respectively. Moreover, if $\{f_n\}_{n \in I}\subset \mathcal{K}$ is a Bessel sequence, then the Gram matrix defines bounded operators on $\ell_2(I_+)$ and $\ell_2(I_-).$\\ The following lemma states that in order to prove that the Gram matrices are bounded we can consider the Bessel condition. \begin{lemma}\label{lg}Let $\{f_n\}_{n \in I}$ be a sequence in $(\mathcal{K},[.,.])$. Then the following conditions are equivalent: \begin{enumerate} \item [(a)] $\{f_n\}_{n \in I}$ is a Bessel sequence with Bessel bound $B$. \item [(b)] The positive Gram matrix and the negative Gram matrix  associated with $\{f_n\}_{n \in I_+}$ and $\{f_n\}_{n \in I_-}$ define bounded operators on $\ell_2(I_+)$ and $\ell_2(I_-),$ with norm at most $B$ and $B^{'}$ respectively. \end{enumerate} \end{lemma} \begin{proof}$(a)\Rightarrow (b)$ : Suppose $\{f_n\}_{n \in I}$ is a Bessel sequence in $(\mathcal{K},[.,.])$. 
Then the positive Gram matrix and the negative Gram matrix define bounded operators on $\ell_2(I_+)$ and $\ell_2(I-)$ and the proof for the norm estimation follows from the Theorem 3.1 in \cite{s}. $(b)\Rightarrow (a)$ : Next, suppose that $(b)$ holds. Let $\{c_n\}_{n \in I_-} \in \ell_2(I_-).$ Then \begin{eqnarray*} \sum_{j \in I_-}\Big|\sum_{n \in I_-}c_n[f_n,f_j]\Big|^2\leq B^2\sum_{n \in I_-}|c_n|^2. \end{eqnarray*} Let $\ell,m \in I_-$ such that $\ell>m.$ Then \begin{eqnarray*} \Big\|\sum_{n=1}^\ell c_nf_n-\sum_{n=1}^mc_nf_n\Big\|^4 &=& \Big\|\sum_{n =m+1}^\ell c_nf_n\Big\|^4 \\&=&\Big|[\sum_{n=m+1}^\ell c_nf_n,\sum_{k=m+1}^\ell c_kf_k]\Big|^2 \\&=&\Big|\sum_{n=m+1}^\ell \overline{c_n} \sum_{k=m+1}^\ell c_k[f_n,f_k]\Big|^2 \\&\leq&\Big(\sum_{n=m+1}^\ell|c_n|^2\Big)\Big(\sum_{n=m+1}^\ell\Big|\sum_{k=m+1}^\ell c_k[f_n,f_k]\Big|^2\Big). \end{eqnarray*} Therefore \begin{eqnarray*} \sum_{n=m+1}^\ell\Big|\sum_{k=m+1}^\ell c_k[f_n,f_k]\Big|^2 \leq B^2\sum_{n=m+1}^\ell|c_n|^2. \end{eqnarray*} Hence we conclude that \begin{eqnarray*} \Big\|\sum_{n=1}^\ell c_nf_n-\sum_{n=1}^mc_nf_n\Big\|^4\leq B^2\Big(\sum_{n=m+1}^\ell |c_n|^2\Big)^2. \end{eqnarray*} Thus $\sum_{n \in I_-}c_nf_n$ is convergent and by repeating the above argument, we get \begin{eqnarray*} \Big\|\sum_{n \in I_-}c_nf_n\Big\| \leq \sqrt{B}\Big(\sum_{n \in I_-} |c_n|^2\Big)^\frac{1}{2}. \end{eqnarray*} Similarly, one can easily prove that \begin{eqnarray*} \Big\|\sum_{n \in I_+}c_nf_n \Big\| \leq \sqrt{B^{'}}\Big(\sum_{n \in I_+}|c_n|^2\Big)^\frac{1}{2}. \end{eqnarray*} Hence by Theorem 3.1 in \cite{s}, $\{f_n\}_{n \in I}\subset \mathcal{K}$ is a Bessel sequence with Bessel bound B. \end{proof} The following lemma gives us a sufficient condition for the Gram matrices defining bounded operators on $\ell_2(I_+)$ and $\ell_2(I_-).$ \begin{lemma} Let $M_1=\{{M_1}_{j,n}\}_{j,n \in I_+}$ and let $M_2=\{{M_2}_{j,n}\}_{j,n \in I_-}$ be two matrices for which ${M_1}_{j,n} =\overline{{M_1}_{n,j}}$ for all $j,n \in I_+$ and ${M_2}_{j,n} =\overline{{M_2}_{n,j}}$ for all $j,n \in I_-$ and there exist constants $B$ and $B^{'}$ such that $\sum_{j,n \in I_+}|{M_1}_{j,n}| \leq B$ and $\sum_{j,n \in I_+}|{M_2}_{j,n}| \leq B^{'}.$ Then $M_1$ and $M_2$ define bounded linear operators on $\ell_2(I_+)$ and $\ell_2(I_-)$ with norm at most $B$ and $B^{'}$ respectively. \end{lemma} \begin{proof} Let $\{c_n\}_{n \in I_+} \in \ell_2(I_+).$ Then ${M_1}\{c_n\}_{n \in I_+}$ is well defined as the sequence indexed by $I_+$ and whose $j$-th coordinate is given by $\sum_{n \in I_+}{M_1}_{j,n}c_n.$ To prove this sequence is in $\ell_2(I_+),$ it is enough to show that the map \begin{eqnarray}\label{lm}\phi:\{d_n\}_{n \in I_+} \longrightarrow [\{d_n\}_{n \in I_+},{M_1}\{c_n\}_{n \in I_+}]_{\ell_2(I_+)} \end{eqnarray} is a continuous linear functional on $\ell_2(I_+).$ Now, for $\{d_n\}_{n \in I_+} \in \ell_2(I_+),$ we have \begin{eqnarray*} \sum_{j \in I_+}\Big|\sum_{n \in I_+}\overline{{M_1}_{n,j}c_n}d_j \Big| &\leq& \sum_{j \in I_+}\sum_{n \in I_+}|{M_1}_{j,n}c_nd_j| \\&=& \sum_{j \in I_+}\sum_{n \in I_+}\Big(|{M_1}_{j,n}|^\frac{1}{2}|c_n|)(|{M_1}_{j,n}|^\frac{1}{2}|d_j|\Big) \\&\leq& \Big(\sum_{j \in I_+}\sum_{n \in I_+}|{M_1}_{j,n}||c_n|^2\Big)^\frac{1}{2}\Big(\sum_{j \in I_+}\sum_{n \in I_+}|{M_1}_{j,n}||d_j|^2\Big)^\frac{1}{2} \\&\leq& B\Big(\sum_{n \in I_+}|c_n|^2\Big)^\frac{1}{2}\Big(\sum_{j \in I_+}|d_j|^2\Big)^\frac{1}{2}. 
\end{eqnarray*}From above, we conclude that $(\ref{lm})$ defines a continuous linear functional on $\ell_2(I_+).$ Also, \begin{eqnarray*} \|{M_1}\{c_n\}_{n \in I_+}\| &=&\sup\limits_{\|\{d_n\}\|=1}\Big|[\{d_n\}_{n \in I_+}, {M_1}\{c_n\}_{n \in I_+}]_{\ell_2(I_+)}\Big| \\&\leq& B\Big(\sum_{n \in I_+}|c_n|^2\Big)^\frac{1}{2}. \end{eqnarray*} This shows that ${M_1}$ is a bounded linear operator on $\ell_2(I_+)$ with norm at most $B.$ Similarly, it is easy to prove that ${M_2}$ defines a bounded linear operator on $\ell_2(I_+)$ with norm at most $B^{'}.$ \end{proof} \begin{proposition}Let $\{f_n\}_{n \in I}$ be a sequence in $(\mathcal{K},[.,.])$. If there exists a positive constant $B$ such that \begin{eqnarray}\label{pm} \sum_{j, n \in I}|[f_j,f_n]| \leq B, \end{eqnarray} then $\{f_n\}_{n \in I}$ is a Bessel sequence with Bessel bound B.   \end{proposition} The condition $(\ref{pm})$ is much better than that of Bessel condition in \cite{s} (Equation 6). Because here we have to check the indefinite inner product between the elements of $\{f_n\}_{n \in I},$ whereas in the case of Bessel condition we have to check the indefinite inner product for all $f$ in the Krein space $\mathcal{K}$. The following theorem gives an equivalent condition of the Riesz basis in terms of Gram matrices. \begin{theorem} Let $\{f_n\}_{n \in I}$ be a sequence in $(\mathcal{K},[.,.])$. Then the following conditions are equivalent: \begin{enumerate} \item [$(a)$] $\{f_n\}_{n \in I}$ is a Riesz basis for $(\mathcal{K},[.,.]).$ \item [$(b)$] $\{f_n\}_{n \in I_+}$ and $\{f_n\}_{n \in I_-}$ are complete and their Gram matrices $\{[f_n,f_j]_{n,j \in I_+}\}$ and $\{[f_n,f_j]_{n,j \in I_-}\}$ define bounded, invertible operators on $\ell_2(I_+)$ and $\ell_2(I_-)$ respectively. \end{enumerate}\end{theorem} \begin{proof} (a) $\Rightarrow$ (b) : Since $\{f_n\}_{n \in I}$ is a Riesz basis, we can write $\{f_n\}_{n \in I}= \{\{U^+e_n\}_{n \in I_+}\cup \{U^-e_n\}_{n \in I_-}\}$ by definition. For any $n,j \in I_+,$ we have \begin{eqnarray*} [f_n,f_j] = [U^+e_n,U^+e_n]= [{U^+}^*U^+e_n,e_j] \end{eqnarray*} and for $n,j \in I_-$ \begin{eqnarray*} [f_n,f_j]=[{U^-}^*U^-e_n,e_j]. \end{eqnarray*} It is the positive Gram matrix representing the bounded invertible operator ${U^+}^*U^+$ in the basis $\{e_n\}_{n \in I_+}$ and the negative Gram matrix is the matrix representing the bounded invertible operator ${U^-}^*U^-$ in the basis $\{e_n\}_{n \in I_-}.$ $(b)\Rightarrow (a)$ : Suppose that (b) holds. Then the upper Riesz conditions in $(\ref{a1})$ and $(\ref{a2})$ are satisfied by using Lemma (\ref{lg}) and Theorem $(\ref{rt}).$ Next, we will prove that the lower conditions in $(\ref{a1})$ and $(\ref{a2})$ are also satisfied. For this, let $G_1$ denote the operator on $\ell_2(I_+)$ given by the Gram matrix $\{[f_n,f_j]_{n,j \in I_+}\}$ and $G_2$ denote the operator on $\ell_2(I_-)$ given by the Gram matrix $\{[f_n,f_j]_{n,j \in I_-}\}.$ First, for a given sequence $\{c_n\}_{n \in I_+} \in \ell_2(I_+)$, the $j^{\text{th}}$ element in the image sequence $G_1\{c_n\}$ is $\sum_{n \in I_+}[f_n,f_j]e_n.$ So, \begin{equation*} [G_1 \{c_n\}_{n \in I_+}, \{c_k\}_{k \in I_+}] = \sum_{j \in I_+}\sum_{k \in I_+}[f_n,f_j]c_n\overline{c_j} = \|\sum_{n \in I_+}c_nf_n\|. \end{equation*} Thus $G_1$ is positive and with a similar argument it is easy to show that $G_1$ is self adjoint. Let $W$ be a positive square root of $G_1$. 
Then by the above calculation, we get \begin{eqnarray*} \Big\|\sum_{n \in I_+}c_nf_n\Big\| &=&\|W\{c_n\}_{n \in I_+}\|^2 \ \ \geq \ \ \frac{1}{\|W^{-1}\|^2}\sum_{n \in I_+}|c_n|^2. \end{eqnarray*} \end{proof} \begin{center} \textbf{Acknowledgements} \end{center} \noindent The first author is highly thankful to the Central University of Haryana for providing basic facilities to carry out this research. The present work of the second author is partially supported by ANRF (SERB), DST, Government of India (TAR/2022/000219) under the scheme “TARE”. \begin{thebibliography}{10} \bibitem{a} T. Ya. Azizov and I. S. Iokhvidov, Linear operators in spaces with an indefinite metric, Wiley-interscience, Chichester, 1989. \bibitem{b} J. Bognar, Indefinite inner product spaces, Springer Verlag, Berlin-Heidelberg, 1974. \bibitem{c} P. Casazza, The art of frame theory, \textit{Taiwan. J. Math.} 4, (2000) 129-201. \bibitem{e} O.Christensen, An introduction to frames and Riesz bases, Applied and Numerical Harmonic Analysis, Birkhauser, Boston, 2003. \bibitem{f} I. Daubechies, A. Grossmann and Y. Meyer, Painless non orthogonal expansions, \textit{J. Math. Phys.}, 27 (1986), 1271-1283. \bibitem{g} R.J. Duffin and A.C. Schaeffer, A class of non-harmonic Fourier series, \textit{Trans. Amer. Math.} Soc. 72 (1952), 341-366.   \bibitem{k} J.I. Giribet, A. Maestripieri, F. Martinez Peria, P.G. Massey, On frames for Krein spaces, \textit{J. Math. Anal. Appl.} 393 (2012) 122-137. \bibitem{d} Shah Jahan, Varinder Kumar and S.K. Kaushik, On the existence of non-linear frames, \textit{Arch. Math.} 53(2), 2017:101-109. \bibitem{s} Shah Jahan and P. Sam Johnson, Frame operators for frames in Krein spaces, \textit{Eur. J. Math. Anal.} 4 (2024) 1 (10 pages). \bibitem{n} I. Peng, S. Waldron, Signed frames and Hadamard products of Gram matrices, \textit{Lin. Alg. Appl.} 347(1-3)(2002)131-157. \end{thebibliography} \end{document}
2412.11933v2
http://arxiv.org/abs/2412.11933v2
Generalised Fermat equation: a survey of solved cases
\documentclass[11pt]{article} \usepackage{pdflscape} \usepackage{latexsym,amsmath,amssymb,mathrsfs,graphicx,hyperref,longtable,mathpazo,soul,color,amsthm,mathtools,xcolor,multirow} \hypersetup{pdftex,colorlinks=true,linkcolor=blue,citecolor=blue,filecolor=blue,urlcolor=blue} \renewcommand{\rmdefault}{ptm} \usepackage{url} \usepackage{caption} \setlength{\textwidth}{17 cm} \setlength{\topmargin}{-2.25 cm} \setlength{\textheight}{24.5 cm} \setlength{\oddsidemargin}{0 cm} \renewcommand{\ge}{\geqslant} \renewcommand{\le}{\leqslant} \def\tr{{\raise0pt\hbox{$\scriptscriptstyle\top$}}} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \def\cA{{\cal A}} \def\cL{{\cal L}} \def\cS{{\cal S}} \def\prob{\mathbb{P}} \def\E{\mathbb{E}} \def\erf{\mathop{{\rm erf}}} \usepackage{longtable} \def\argmin{\mathop{{\rm argmin}}} \def\eop{\hfill{$\Box$}\medskip} \usepackage{rotating} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newtheorem{problem}[theorem]{Problem} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Open Question} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \numberwithin{table}{section} \interfootnotelinepenalty=10000 \title{Generalised Fermat equation: a survey of solved cases} \author{Ashleigh Ratcliffe and Bogdan Grechuk} \begin{document} \maketitle \begin{abstract} Generalised Fermat equation (GFE) is the equation of the form $ax^p+by^q=cz^r$, where $a,b,c,p,q,r$ are positive integers. If $1/p+1/q+1/r<1$, GFE is known to have at most finitely many primitive integer solutions $(x,y,z)$. A large body of the literature is devoted to finding such solutions explicitly for various six-tuples $(a,b,c,p,q,r)$, as well as for infinite families of such six-tuples. This paper surveys the families of parameters for which GFE has been solved. Although the proofs are not discussed here, collecting these references in one place will make it easier for the readers to find the relevant proof techniques in the original papers. Also, this survey will help the readers to avoid duplicate work by solving the already solved cases. \end{abstract} \tableofcontents \section{Introduction} \subsection{Generalized Fermat equation} Around 1637, Pierre de Fermat wrote in the margin of his private copy of Arithmetica that for any integer $n\geq 3$ the equation \begin{equation*} x^n + y^n = z^n \end{equation*} has no solutions in positive integers. This statement became known as Fermat's Last Theorem (FLT), and has been one of the most famous open problems in mathematics for over $350$ years. After cancelling the common factors if needed, it suffices to prove the non-existence of positive integer solutions such that $\gcd(x,y,z)=1$. The FLT has been finally proved by Andrew Wiles\footnote{With part of the proof delegated to a joint paper with Richard Taylor~\cite{MR1333036}.}~\cite{wiles1995modular} in 1995, and this proof is widely recognized as one of the most monumental achievements of modern mathematics. 
Even before FLT had been proven, many researchers started to investigate special cases of the more general equation \begin{equation} \label{eq:genfermat} ax^p+by^q=cz^r, \end{equation} where $x,y,z$ are integer variables, while positive integers $p,q,r$ and integers $a,b,c$ are parameters. If $abc=0$, then \eqref{eq:genfermat} has at most two monomials and is easy to solve, see~\cite[Section 1.6]{mainbook} and~\cite[Section 1.16]{wilcox2024systematic}, so from now on we will assume that $abc\neq 0$. If \eqref{eq:genfermat} is solvable in integers $(x,y,z)\neq (0,0,0)$, then, under a minor condition\footnote{Specifically, under the condition that at least one of the integers $p,q,r$ is coprime with another two.} on $(p,q,r)$, it has infinitely many integer solutions, and all these solutions are easy to describe, see~\cite[Section 3.4.2]{MR4620765} and~\cite[Proposition 4.53]{mainbook}. However, the problem of describing \emph{primitive} integer solutions to \eqref{eq:genfermat}, that is, ones satisfying $\gcd(x,y,z)=1$, turns out to be much more interesting and delicate. The triple $(p,q,r)$ is called the signature of Eq.~\eqref{eq:genfermat}. If $\min\{p,q,r\} = 1$ then \eqref{eq:genfermat} is trivial, see~\cite[Section 1.5.1]{mainbook} and~\cite[Section 1.12]{wilcox2024systematic}, so we may assume that $\min\{p,q,r\}\geq 2$. The theory of primitive solutions to \eqref{eq:genfermat} crucially depends on whether the quantity $1/p + 1/q + 1/r$ is greater than $1$, equal to $1$, or less than $1$. In the first case, we have the following result proved by Beukers~\cite{MR1487980} in 1998. \begin{theorem} \label{eq:Beukers1998} Assume that $1/p+1/q+1/r>1$ and $\min\{p,q,r\}\geq 2$. Then if \eqref{eq:genfermat} has at least one primitive solution with $xyz\neq 0$, then it has infinitely many such solutions. In this case, there exists a finite number of parametrized solutions of \eqref{eq:genfermat} with two parameters such that all primitive solutions can be obtained by specialization of the parameters to integer values. Moreover, there is an algorithm that computes these parametrized solutions. \end{theorem} Theorem \ref{eq:Beukers1998} covers the signatures $(2,2,k)$ for $k\geq 2$, $(2,3,3)$, $(2,3,4)$, $(2,3,5)$, and their permutations. As a nice example of its application, see~\cite{edwards2005platonic} for the complete description of primitive integer solutions to the equation $x^2+ y^3= z^5$. In the case $1/p+1/q+1/r=1$, which consists of permutations of $(2,3,6)$, $(2,4,4)$ and $(3,3,3)$, Eq.~\eqref{eq:genfermat} reduces to the problem of computing rational points on certain elliptic curves. This problem is open in general, but there are algorithms that work well for all specific equations with not-too-large coefficients $a,b,c$. For example, as early as in 1954, Selmer~\cite{MR41871,MR67131} solved \eqref{eq:genfermat} with $(p,q,r)=(3,3,3)$ for all $a,b,c$ satisfying $1\leq | abc| \leq 500$. With modern computers and improved algorithms one may proceed much further if needed. The most interesting and challenging case of Eq.~\eqref{eq:genfermat} is $1/p + 1/q + 1/r < 1$. In this case, Darmon and Granville~\cite{DOI:10.1112/blms/27.6.513} proved the following fundamental result. \begin{theorem} \label{th:genfermat} For any given positive integers $p,q,r,a,b,c$ satisfying $1/p + 1/q + 1/r < 1$, the generalized Fermat Eq.~\eqref{eq:genfermat} has only finitely many primitive integer solutions. 
\end{theorem} \begin{table} \begin{center} \caption{\label{tb:pqrsigs}Triples $(p,q,r)$ for which equation \eqref{eq:fermatpqr} has been solved, up to permutations (triples marked with * do not include permutations).} \begin{tabular}{|c|c|} \hline Triple & Solved \\ \hline \hline $(2,3,n), 7 \leq n \leq 12, $\footnotemark $n=15$ & \cite{2007ToXa,BRUIN_1999,+2003+27+49,BRUIN2005179,dahmen2008,brown2010primitive,MR3095226,MR4036449,SiksekStoll14} \\\hline $(2, 4, n), n \geq 5$ & \cite{bennett2010diophantine,MR2075481,BRUIN_1999,Bennett_Skinner_2004,+2003+27+49} \\\hline $(2,6,n), n \geq 3$ & \cite{bennett2012multi,BRUIN_1999,miscellany} \\\hline $(2,2n,3)^*, 3 \leq n \leq 10^7$ & \cite{chen2008equation,dahmen2011refined,dahmen2008,MR3095226} \\\hline $(2,2n,m)^*, m \in\{6,9,10,15,21\}, n\geq 2$ & \cite{miscellany,MR4418449} \\\hline $(4,2n,3)^*, n \geq 2$ & \cite{miscellany} \\\hline $(3,4,5)$ & \cite{siksek2012partial} \\\hline $(3,3,n), 3 \leq n \leq 10^9$ & \cite{chen2009perfect,kraus1998equation,10.1007/10722028_9,dahmen2008,MR3526934} \\\hline $(3,3,2n), n \geq 2$ & \cite{miscellany} \\\hline $(3,6,n), n \geq 3$ & \cite{miscellany} \\\hline $(5,5,n), n \in \{7,19\}$ & \cite{dahmen2014perfectpowersexpressiblesums} \\\hline $(7,7,5)$ & \cite{dahmen2014perfectpowersexpressiblesums} \\\hline $(n, n, 2), n \geq 5$ & \cite{Darmon1997,poonen1998some} \\\hline $(n, n, 3), n \geq 3$ & \cite{Darmon1997,poonen1998some}\footnotemark \\\hline $(n, n, n), n \geq 3$ & \cite{wiles1995modular,MR1333036} \\\hline $(2j,2k,n)^*, j,k\geq 5$ prime, $n \in \{3,5,7,11,13\}$ & \cite{anni2016modular,10.5565/PUBLMAT6722309} \\\hline $(2j,2k,17)^*, j,k \neq 5$ primes & \cite{MR4418449} \\\hline $(2n, 2n, 5)^*, n \geq 2$ & \cite{JTNB_2006} \\\hline $(2n,2n,17)^*$ & \cite{MR4418449} \\\hline $(3j, 3k, n)^*, j, k \geq 2, n \geq 3$ & \cite{kraus1998equation} \\ \hline \end{tabular} \end{center} \end{table} \footnotetext[3]{The proof for $n=11$ in \cite{MR4036449} is conditional on the generalized Riemann hypothesis. The case $n=12$ follows from the analysis of case $(2,3,6)$, which reduces to a rank $0$ elliptic curve.} \footnotetext[4]{It was noted on Mathoverflow (\url{https://mathoverflow.net/questions/488724/}) that \cite{Darmon1997} does not provide details for the case $(3,4,4)$, this case is proven in \cite[Proposition 14.6.6]{MR2312338}.} The proof of Theorem \ref{th:genfermat} is ineffective and does not provide an algorithm for actually listing the primitive solutions to \eqref{eq:genfermat} for a given $p,q,r,a,b,c$. This problem is the topic of current intensive research. The case $a=b=c=1$, that is, equation \begin{equation} \label{eq:fermatpqr} x^p+y^q=z^r, \end{equation} is known as the Fermat--Catalan equation, and has been particularly well-studied. A number of survey papers exist which are devoted to this equation, see, for example,~\cite{miscellany,Kraus1999}, Section 1.2 in~\cite{MR4122899}, and wikipedia page \url{https://en.wikipedia.org/wiki/Beal\_conjecture}. We provide a summary\footnote{Recently, Bartolom\'e and Mih{\u a}ilescu~\cite{bartolome2021semilocal} claimed to solve \eqref{eq:fermatpqr} for all signatures $(p,p,r)$ such that $p,r$ are primes, $p\geq 5$, and $r>3\sqrt{p\log_2 p}$. To the best of our knowledge, this paper has not been peer reviewed yet.} of solved signatures $(p,q,r)$ in Table \ref{tb:pqrsigs}. 
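As concrete examples of primitive solutions to \eqref{eq:fermatpqr} in the range $1/p+1/q+1/r<1$, one can check directly that \begin{equation*} 2^5+7^2=3^4 \qquad\text{and}\qquad 7^3+13^2=2^9, \end{equation*} corresponding to the signatures $(5,2,4)$ and $(3,2,9)$; both appear in Table \ref{tb:knownsol} below.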
There is also some recent work with partial results for other signatures, see e.g~\cite{MR4205757,MR3217670,miscellany,MR4306226,MR3690598,signature2019multi,10.1007/10722028_9,MR2652585,chen2022modularapproachfermatequations,chen2009perfect,dahmen2008,dahmen2011refined,dahmen2014perfectpowersexpressiblesums,dahmen2024generalized,dieulefait2005modular,MR3493372,MR4036449,kraus1998equation,madriaga2024hypergeometricmotivesgeneralizedfermat,MR4648747,ThesisPutz,MR4580454}. Famous Fermat--Catalan conjecture predicts that \eqref{eq:fermatpqr} has only finitely many solutions in positive integers $(p,q,r,x,y,z)$ such that $1/p+1/q+1/r<1$ and $\text{gcd}(x,y,z)=1$. In fact, the only known such solutions are listed in Table \ref{tb:knownsol}, and failing to find other solutions despite extensive efforts~\cite{norvig2015beal,sikora2024fermat} suggests that this list might be complete. One may note that all exponents $(p,q,r)$ in Table \ref{tb:knownsol} have $\min(p, q, r) = 2$. In 1997, Beal~\cite{MR1488570} offered a million-dollar prize for the proof or disproof that \eqref{eq:fermatpqr} has no solutions in coprime positive integers $(x,y,z)$ when $\min\{p,q,r\}\geq 3$. \begin{table} \begin{center} \caption{\label{tb:knownsol}Known primitive positive integer solutions to~\eqref{eq:fermatpqr} with $1/p+1/q+1/r<1$, up to exchange of $(x,p)$ with $(y,q)$.} \begin{tabular}{|c|c|c|c|} \hline $(p,q,r)$ & $(x,y,z)$ & $(p,q,r)$ & $(x,y,z)$ \\\hline \hline $(p,3,2)$ & $(1,2,3)$ & $(8,2,3)$ & $(33,1549034,15613)$ \\\hline $(5,2,4)$ & $(2,7,3)$ & $(3,2,7)$ & $(1414,2213459,65)$ \\\hline $(3,2,9)$ & $(7,13,2)$ & $(3,2,7)$ & $(9262,15312283, 113)$ \\\hline $(7,3,2)$ & $(2,17,71)$ & $(7,3,2)$ & $(17,76271,21063928)$ \\\hline $(5,4,2)$ & $(3,11,122)$ & $(8,3,2)$ & $(43, 96222,30042907)$ \\\hline \end{tabular} \end{center} \end{table} Comparison of Table \ref{tb:pqrsigs} with Table \ref{tb:triples} below shows that the only cases of Eq.~\eqref{eq:fermatpqr} with $(p,q,r)$ not-all-primes that remain to be considered are $(p,q,r)=(2,5,9)$ and $(p,q,r)=(2,3,25)$. In the case when all $(p,q,r)$ are primes, Table \ref{tb:pqrsigs} covers the triples $(2,3,7)$, $(5,5,7)$, $(5,5,19)$, $(7,7,5)$, $(3,3,p)$ for $p\leq 10^9$, $(p,p,2)$, $(p,p,3)$ and $(p,p,p)$ for all $p$, as well as $(2,3,11)$ conditional on the generalized Riemann hypothesis. Hence, the triple $(p,q,r)$ with the smallest sum $p+q+r$ for which Eq.~\eqref{eq:fermatpqr} is currently open is $(p,q,r)=(2,5,7)$. The smallest triple for which Beal's conjecture remains open is $(p,q,r)=(3,5,7)$. Recently, Sikora~\cite{sikora2024fermat} reported that Eq.~\eqref{eq:fermatpqr} has no solutions in coprime positive integers other than ones listed in Table \ref{tb:knownsol} in the range $z^r<2^{71}$. Restricting the search to only exponents not covered in Table \ref{tb:pqrsigs} allowed us to significantly speed up the search and reach the bound $2^{100}$. \begin{proposition} \label{prop:onlysol} The only solutions to \eqref{eq:fermatpqr} in coprime positive integers such that $z^r\leq 2^{100}$ are those listed in Table \ref{tb:knownsol}. \end{proposition} While there are good surveys of Eq.~\eqref{eq:fermatpqr}, a similar survey for the more general Eq.~\eqref{eq:genfermat} is not available. There are many dozens of papers studying this equation, and each of them cites just a small portion of others. As a result, some equations of the form \eqref{eq:genfermat} have been solved multiple times. 
For example, the equation $x^4+3y^4=z^3$ was solved in 1994 by Terai~\cite{MR1288426}, and then again in 2019 by S\"oderlund~\cite{soderlund2019some}. The aim of this survey is to summarize the solved cases of the generalized Fermat Eq.~\eqref{eq:genfermat} with $|abc| > 1$. This should help readers avoid duplicating work on already solved cases. To keep the survey relatively short, we will not discuss proofs. Despite this, we hope that collecting all these references in one place will make it much easier for readers to locate the proof techniques in the original papers. In the rest of the introduction, we discuss some easy reductions and elementary special cases, and state some conjectures. Sections \ref{sec:special} to \ref{sec:pqr} discuss for which parameters $(a,b,c,p,q,r)$ Eq.~\eqref{eq:genfermat} has been solved. The solved cases are then summarized in the final table (Table \ref{tbl10}) in Section \ref{sec:summary}. \subsection{Easy reductions and elementary special cases} We first observe that we may assume that the integers $a,b$ and $c$ in \eqref{eq:genfermat} are all positive. Indeed, if an equation $Ax^p+By^q+Cz^r=0$ has $A,B,C$ all of the same sign, then it has no solutions other than $(x,y,z)=(0,0,0)$ if all $p,q,r$ are even, and otherwise we can change the sign of one of $A,B,C$ by replacing the corresponding variable with its negative. Hence, we may assume that not all $A,B,C$ are of the same sign, and then the equation can be rearranged to the form \eqref{eq:genfermat} with positive $a,b,c$. We are interested in finding primitive solutions to \eqref{eq:genfermat}, that is, ones satisfying $\gcd(x,y,z)=1$. Because solutions to \eqref{eq:genfermat} with $xyz=0$ are easy to describe, it suffices to find primitive solutions in non-zero integers. We will call such solutions \emph{non-trivial}. Further, if a prime $s$ is a common factor of, say, $x$ and $y$, then $\gcd(x,y,z)=1$ implies that $\gcd(s,z)=1$, but then \eqref{eq:genfermat} implies that $s^{\min\{p,q\}}$ is a divisor of $c$. Continuing this way, we obtain the following observation. \begin{proposition} \label{prop:paircop} Assume that $c$ has no divisors of the form $s^{\min\{p,q\}}$ for prime $s$, $b$ has no divisors of the form $s^{\min\{p,r\}}$ for prime $s$, while $a$ has no divisors of the form $s^{\min\{q,r\}}$ for prime $s$. Then $(x,y,z)$ is a primitive solution to \eqref{eq:genfermat} if and only if $x,y,z$ are pairwise coprime. \end{proposition} From now on, for all equations satisfying the conditions of Proposition \ref{prop:paircop}, we will use the terms ``primitive solution'' and ``solution in pairwise coprime integers'' interchangeably. We next observe that if one of the integers $p,q,r$, say $r$, is not prime, and $1<d<r$ is any divisor of $r$, then the change of variables $Z=z^{r/d}$ reduces \eqref{eq:genfermat} to an equation of the same form with exponents $(p,q,d)$. If $1/p+1/q+1/d<1$, then this equation has finitely many primitive integer solutions by Theorem \ref{th:genfermat}. If we list all such solutions $(x,y,Z)$, we can then check for which of them $z=Z^{d/r}$ is an integer. This implies that it is sufficient to solve \eqref{eq:genfermat} only in the case when either $p,q,r$ are all primes, or $(p,q,r)$ is a \emph{special} triple defined below. 
\begin{definition} A triple $(p,q,r)$ of positive integers is called \emph{special} if (i) not all $p,q,r$ are primes, (ii) $1/p+1/q+1/r<1$, and (iii) if $P,Q,R$ are positive divisors of $p,q,r$, respectively, such that $(P,Q,R)\neq (p,q,r)$, then $1/P+1/Q+1/R \geq 1$. \end{definition} Every special triple is a permutation of one of the triples listed in Table \ref{tb:triples}. \begin{table} \begin{center} \caption{\label{tb:triples}Special triples $(p,q,r)$, up to permutations.} \begin{tabular}{|c|c|c|c|} \hline $(2,3,8)$ & $(2,3,25)$ & $(2,5,6)$ & $(3,3,9)$ \\\hline $(2,3,9)$ &$(2,4,r), r\geq 5 \text{ prime}$ & $(2,5,9)$ & $(3,4,4)$ \\\hline $(2,3,10)$ & $(2,4,6)$ & $(2,6,6)$ & $(3,4,5)$ \\\hline $(2,3,12)$ & $(2,4,8)$ & $(3,3,4)$ & $(4,4,4)$ \\\hline $(2,3,15)$ & $(2,4,9)$ & $(3,3,6)$ & \\\hline \end{tabular} \end{center} \end{table} Some equations of the form \eqref{eq:genfermat} have no non-trivial primitive solutions for elementary reasons. For example, analysis modulo $9$ shows that if $(x,y,z)$ is any integer solution to the equation $x^4 + y^4 = 3 z^3$, then all $x,y,z$ must be divisible by $3$. Hence, this equation has no non-trivial primitive solutions. More generally, we have the following trivial observation. \begin{proposition} \label{prop:local} If there exists any prime $s$ and positive integer $m$ such that all solutions $(x,y,z)$ to \eqref{eq:genfermat} modulo $s^m$ must have all $x,y,z$ divisible by $s$, then \eqref{eq:genfermat} has no non-trivial primitive solutions. \end{proposition} For equations covered by Proposition \ref{prop:local}, we say that they have no non-trivial primitive solutions by local obstructions. All such equations will from now on be excluded from this survey. As mentioned above, we may assume that either $p,q,r$ are all primes, or $(p,q,r)$ is a permutation of one of the special triples listed in Table \ref{tb:triples}. In the second case, at least one of the integers $p,q,r$ is composite. If, for example, $r$ is composite, and $1<d<r$ is any divisor of $r$, then the change of variables $Z=z^{r/d}$ reduces \eqref{eq:genfermat} to the equation \begin{equation} \label{eq:genfermatred} ax^p+by^q=cZ^d. \end{equation} Because $(p,q,r)$ is a special triple, we must have $1/p+1/q+1/d\geq 1$. If $1/p+1/q+1/d>1$, then, by Theorem \ref{eq:Beukers1998}, either \eqref{eq:genfermatred} has no non-trivial primitive solutions, or all such solutions can be covered by a finite number of parametrizations with two parameters. In the first case, \eqref{eq:genfermat} also has no non-trivial primitive solutions. In the second case, we obtain that \begin{equation} \label{eq:genfermateasy} z^{r/d}=Z=P_i(u,v) \text{ for some } i\in\{1,2,\dots,k\}, \end{equation} where $P_1,\dots,P_k$ are some polynomials in two variables $u,v$ with integer coefficients. In some cases, Eqs.~\eqref{eq:genfermateasy} are easy to solve, which leads to the resolution of \eqref{eq:genfermat}. If $1/p+1/q+1/d=1$, then \eqref{eq:genfermatred} reduces to computing rational points on an elliptic curve. If this curve has rank $0$, then there is a finite number of such points, and they can be explicitly listed; see, e.g.,~\cite[Section 3.4.4]{mainbook} and~\cite[Section 3.21]{wilcox2024systematic}. Then \eqref{eq:genfermatred} has a finite number of primitive solutions, and it is easy to check for which of them (if any) $z=Z^{d/r}$ is an integer. 
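To make this last step concrete, checking whether $z=Z^{d/r}$ is an integer amounts to testing whether $Z$ is a perfect $k$-th power for $k=r/d$. The following short Python sketch (illustrative code written for this survey, not taken from the cited works) performs this test exactly by binary search, which avoids floating-point issues for large values of $Z$.
\begin{verbatim}
def integer_kth_root(Z, k):
    """Return t >= 0 with t**k == Z if such an integer exists, else None.

    Assumes Z >= 0; for odd k, a negative Z can be handled by applying
    the function to -Z and negating the result.
    """
    lo, hi = 0, 1
    while hi ** k < Z:          # find an upper bound for the root
        hi *= 2
    while lo <= hi:             # binary search for an exact k-th root
        mid = (lo + hi) // 2
        power = mid ** k
        if power == Z:
            return mid
        if power < Z:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

# Examples: 729 = 9^3 is a perfect cube, while 50 is not a perfect square.
print(integer_kth_root(729, 3))   # prints 9
print(integer_kth_root(50, 2))    # prints None
\end{verbatim}
One keeps only those primitive solutions $(x,y,Z)$ of \eqref{eq:genfermatred} for which the function returns an integer; that returned value is then the corresponding $z$.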
For some of the special triples in Table \ref{tb:triples} more than one of the exponents $p,q,r$ is composite, and some exponents may have more than one non-trivial divisor $d$. Hence, Eq.~\eqref{eq:genfermat} with special $(p,q,r)$ may be reducible to several equations of the form \eqref{eq:genfermatred}. If any of the reduced equations has finitely many primitive solutions, and these solutions can be computed, this immediately solves \eqref{eq:genfermat}. For example, this solves the equation $x^4+2y^4=z^4$, which is the case $(a,b,c,p,q,r)=(1,2,1,4,4,4)$ of Eq.~\eqref{eq:genfermat}, because the corresponding equation $x^4+2y^4=Z^2$ reduces to finding rational points on the rank $0$ elliptic curve $Y^2=X^4+2$, where $Y=\frac{Z}{y^2}$, $X=\frac{x}{y}$. From now on, all Eqs.~\eqref{eq:genfermat} solvable by this method will be excluded from this survey. In summary, below we will discuss the generalized Fermat Eqs.~\eqref{eq:genfermat}, not covered by Proposition \ref{prop:local}, in which $a,b,c$ are positive integers with $abc>1$, and the exponents $(p,q,r)$ are either (i) prime numbers satisfying $1/p+1/q+1/r<1$, or (ii) one of the special triples listed in Table \ref{tb:triples} such that all the corresponding Eqs.~\eqref{eq:genfermatred} have infinitely many primitive solutions. For each such equation, we will consider the following problem. \begin{problem} \label{prob:main} For a given $a,b,c,p,q,r$, list all primitive integer solutions $(x,y,z)$ to \eqref{eq:genfermat}. \end{problem} For some equations, we will also discuss the following easier problem. \begin{problem} \label{prob:existence} For a given $a,b,c,p,q,r$, determine whether Eq.~\eqref{eq:genfermat} has any non-trivial primitive integer solutions. \end{problem} Because the solutions to \eqref{eq:genfermat} with $xyz=0$ are easy to find, a negative answer to Problem \ref{prob:existence} solves Problem \ref{prob:main} for the corresponding equation as well. However, if \eqref{eq:genfermat} has some easy-to-find non-trivial primitive integer solution, this automatically solves Problem \ref{prob:existence} with a ``Yes'' answer, without resolving Problem \ref{prob:main} for this equation. \subsection{Some conjectures and open problems} The famous abc conjecture of Masser and Oesterl\'e~\cite{oesterle1988nouvelles} predicts that for every real number $\epsilon>0$ there exist only finitely many triples $(A,B,C)$ of coprime positive integers such that \begin{equation} \label{eq:abc} A+B=C \quad \text{and} \quad C > \mathrm{rad}(ABC)^{1+\epsilon}, \end{equation} where $\mathrm{rad}(n)$ denotes the product of the distinct prime factors of $n$. Darmon and Granville~\cite[Section 5.2]{DOI:10.1112/blms/27.6.513} observed that the abc conjecture implies the Fermat--Catalan conjecture about the finiteness of primitive integer solutions to \eqref{eq:fermatpqr}, and remarked that the same argument may be extended to show that it also implies the following conjectures. \begin{conjecture} \label{conj:abcfixed} For any integers $a,b,c$ such that $abc\neq 0$, Eq.~\eqref{eq:genfermat} has only finitely many solutions\footnote{Solutions to \eqref{eq:genfermat} that have $\min\{x,y,z\}=1$ and differ only by the exponent(s) of $1$ are counted as the same, e.g.~the family $(x,y,z,p,q,r)=(1,1,1,p,q,r)$ for $(a,b,c)=(1,1,2)$ is considered as one solution.} in integers $(x,y,z,p,q,r)$ such that $\gcd(x,y,z)=1$, $p>0$, $q>0$, $r>0$, $1/p+1/q+1/r<1$. 
\end{conjecture} \begin{conjecture} \label{conj:primesfixed} For any finite set $S$ of primes, Eq.~\eqref{eq:genfermat} has only finitely many solutions in integers $(a,b,c,x,y,z,p,q,r)$ such that $\gcd(ax,by,cz)=1$, $p>0$, $q>0$, $r>0$, $1/p+1/q+1/r<1$, and all the prime factors of $abc$ belong to the set $S$. \end{conjecture} \begin{table} \begin{center} \caption{\label{tab:pqrprimesols}Known positive integer solutions $(x,y,z,p,q,r,a,b,c)$ to \eqref{eq:open}, up to exchange of $(a,p,x)$ and $(b,q,y)$.} \begin{tabular}{|c|c|c|} \hline $(a,b,c)$ & $(p,q,r)$ & $(x,y,z)$ \\ \hline \hline $(1,1,2)$ & $(p,q,r)$ & $(1,1,1)$ \\\hline $(1,2,1)$ & $(2,q,3)$ & $(5,1,3)$ \\\hline $(2,1,1)$ & $(2,q,5)$ & $(11,1,3)$ \\\hline $(1,1,2)$ & $(3,2,8)$ & $(7,13,2)$ \\\hline $(2,1,1)$ & $(4,4,3)$ & $(5,3,11)$ \\\hline $(1,1,2)$ & $(7,3,2)$ & $(3,5,34)$ \\\hline $(1,1,2)$ & $(3,q,2)$ & $(23,1,78)$ \\\hline $(1,2,1)$ & $(5,8,2)$ & $(7,3,173)$ \\\hline $(1,2,1)$ & $(5,4,2)$ & $(7,9,173)$ \\\hline $(2,1,1)$ & $(9,3,2)$ & $(3,19,215)$ \\\hline $(1,1,2)$ & $(2,q,4)$ & $(239,1,13)$ \\\hline $(1,1,2)$ & $(3,4,3)$ & $(237,43,203)$ \\\hline $(1,1,2)$ & $(3,13,2)$ & $(239,3,2761)$ \\\hline $(2,1,1)$ & $(2,8,3)$ & $(21395,3,971)$ \\\hline $(1,1,2)$ & $(3,8,2)$ & $(799,7,16060)$ \\\hline $(1,2,1)$ & $(3,11,2)$ & $(1719,5,71953)$ \\\hline $(1,2,1)$ & $(9,2,3)$ & $(13,75090,2797)$ \\\hline $(2,1,1)$ & $(4,5,2)$ & $(6071,959,59397119)$ \\\hline $(1,1,2)$ & $(2,5,4)$ & $(49800547,953,6357)$ \\\hline $(1,2,1)$ & $(3,7,2)$ & $(1346695,177,1566280459)$ \\\hline \end{tabular} \end{center} \end{table} Conjecture \ref{conj:abcfixed} is a far-reaching generalization of Theorem \ref{th:genfermat} that allows $p,q,r$ to also be variables. There is no triple $(a,b,c)$ for which this conjecture has been proved. Conjecture \ref{conj:primesfixed} is even more general. Eq.~\eqref{eq:fermatpqr} corresponds to the case $a=b=c=1$ of Conjecture \ref{conj:abcfixed}. The ``next'' case one may consider is $|abc|=2$. By rearranging \eqref{eq:genfermat} appropriately, we may assume that $a,b,c,x,y,z$ are all positive integers, and consider the equation \begin{equation} \label{eq:open} ax^p+by^q=cz^r, \quad \text{gcd}(x,y,z)=1, \quad 1/p+1/q+1/r<1, \quad abc=2, \end{equation} for which we are looking for solutions in positive integers $a,b,c,p,q,r,x,y,z$. Our computer search\footnote{This search used the ALICE High Performance Computing facility at the University of Leicester.} in the range $cz^r\leq 2^{80}$ returned only the solutions listed in Table \ref{tab:pqrprimesols}. \begin{question} \label{q:abc2} Find all solutions to \eqref{eq:open} in positive integers $a$, $b$, $c$, $p$, $q$, $r$, $x$, $y$, $z$. In particular, does it have any solutions other than the ones listed in Table \ref{tab:pqrprimesols}? \end{question} While we are not sure whether the list of solutions to \eqref{eq:open} in Table \ref{tab:pqrprimesols} is complete, we conjecture that \eqref{eq:open} has no solutions with $\min(p,q,r)\geq 3$ other than $1^p+1^q=2 \cdot 1^r$, $2\cdot 5^4+3^4=11^3$ and $43^4 + 237^3=2\cdot 203^3$. Another goal, suggested in~\cite[Section 3.4.2]{MR4620765}, is to describe \emph{all} (not necessarily primitive) integer solutions to \eqref{eq:genfermat}, or at least to \eqref{eq:fermatpqr}. This might be easier because, for some exponents $(p,q,r)$, there exist formulas describing all integer solutions to these equations, even though these formulas give no hint about the structure of the primitive solutions. 
As mentioned above, such formulas have been derived in~\cite{MR4620765} for Eq.~\eqref{eq:genfermat} under the condition that at least one of the integers $p,q,r$ is coprime to the other two. This condition fails if, for example, $(p,q,r)=(PQ,QR,RP)$ for some integers $P,Q,R$ greater than $1$. In this case, \eqref{eq:genfermat} reduces to $ax^{PQ}+by^{QR}=cz^{RP}$, or $a+b(y^R/x^P)^Q=c(z^R/x^Q)^P$. This equation can be written as \begin{equation} \label{eq:cupmbvqma} cU^P-bV^Q=a, \end{equation} where $U=z^R/x^Q$ and $V=y^R/x^P$ are rational variables. If $P+Q\geq 7$, this is an equation of genus $g\geq 2$, hence it has finitely many rational solutions by Faltings' theorem~\cite{MR718935}, and, if we could list them all, we would be able to easily describe all possible $x,y,z$. In the case $a=b=c=1$, \eqref{eq:cupmbvqma} reduces to $U^P-V^Q=1$. Mih\u{a}ilescu~\cite{MR2076124}, confirming a famous conjecture of Catalan, proved that the only positive integer solution to this equation with $P,Q\geq 2$ is $(U,V,P,Q)=(3,2,2,3)$, but solving this equation in rational $U,V$ seems to be more difficult; see~\cite[Theorem 12.4]{MR891406} and~\cite[Theorem 5]{mihailescu2007cylcotomic} for partial progress. All conjectures and open questions mentioned in this section look very difficult.\footnote{In 2022, Mochizuki et al.~\cite{mochizuki2022explicit} claimed to prove the effective abc conjecture, stating that \eqref{eq:abc} has only finitely many solutions in coprime positive integers $(A,B,C)$, and, moreover, there is an explicitly computable upper bound for the size of any such solution. If true, this would imply the truth of Conjectures \ref{conj:abcfixed} and \ref{conj:primesfixed}, give the full resolution of \eqref{eq:fermatpqr} for all but finitely many explicit signatures~\cite{zhongpeng2025the}, reduce Open Question \ref{q:abc2} to a finite computation, and solve many other cases of \eqref{eq:genfermat}. However, Scholze and Stix~\cite{scholze2018abc} have found a serious flaw in the argument.} Readers preferring easier-looking open problems are invited to investigate the equations listed in Table \ref{tab:H60Fermat}; see Section \ref{sec:system}. \section{Equations with special signatures} \label{sec:special} In this section we will discuss Eq.~\eqref{eq:genfermat} with special signatures $(p,q,r)$ presented in Table \ref{tb:triples}. We start with the most studied case $(p,q,r)=(4,4,4)$. \subsection{Equations of signature $(4,4,4)$} The case $p=q=r=4$ in \eqref{eq:genfermat} results in the equation \begin{equation} \label{eq:ax4pby4mcz4} ax^4+by^4=cz^4 \end{equation} for positive integers $a,b,c$. Sections 6.2.2 and 6.3.4 of~\cite{mainbook} and Sections 6.4 and 6.5 of~\cite{wilcox2024systematic} study Eqs.~\eqref{eq:ax4pby4mcz4} ordered by $a+b+c$, decide the existence of a solution $(x,y,z)\neq (0,0,0)$ for all equations with $a+b+c\leq 62$, and solve Problem \ref{prob:existence} for the equations with $a+b+c<39$. The first Eq.~\eqref{eq:ax4pby4mcz4} for which Problem \ref{prob:existence} is left open in~\cite{mainbook} is \begin{equation*}\label{eq:7x4p25y4m7z4} 7x^4+25y^4=7z^4. \end{equation*} However, there are Eqs.~\eqref{eq:ax4pby4mcz4} with $a+b+c<39$ that have easy-to-find non-trivial solutions, which solves Problem \ref{prob:existence} but leaves open Problem \ref{prob:main}. For example, this happens if $a+b=c$, in which case $(x,y,z)=(\pm 1, \pm 1, \pm 1)$ are solutions. 
The smallest non-trivial example in this category is the equation \begin{equation*} x^4+2y^4=3z^4, \end{equation*} whose only primitive solutions are $(x,y,z)=(\pm 1, \pm 1, \pm 1)$, as proved by Jeremy Rouse on the MathOverflow website.\footnote{\url{https://mathoverflow.net/questions/480114}} Other non-trivial examples with $a+b=c$ are the equations \begin{equation*} (a) \,\, x^4+3y^4=4z^4, \quad \text{and} \quad (b) \,\, x^4+8y^4=9z^4. \end{equation*} Lucas (see~\cite[pp. 630]{dickson1920history}) proved that these equations have no primitive integer solutions other than $(\pm 1, \pm 1, \pm 1)$. In the case $a=b=1$, Eq.~\eqref{eq:ax4pby4mcz4} reduces to \begin{equation} \label{eq:x4py4ecz4} x^4+y^4=cz^4. \end{equation} Problem \ref{prob:existence} for this equation reduces to the question of whether a positive integer $c$ can be represented as a \emph{sum} of two rational fourth powers. Cohen~\cite[Section 6.6]{MR2312337} solved this problem for all integers $c$ in the range $1\leq c \leq 10,000$, except for $c=4481$, $7537$ and $8882$. These values of $c$ have been addressed in~\cite{MR709852,MR4458887}, hence Problem \ref{prob:existence} for \eqref{eq:x4py4ecz4} is now solved for all $1\leq c \leq 10,000$. In particular, $c=5906=(25/17)^4+(149/17)^4$ is the smallest integer that is the sum of two fourth powers of rational numbers but not the sum of two fourth powers of integers. Grigorov and Rizov~\cite{Grigorov1998Heights} proved that if $c>2$ is a fourth-power-free integer and the rank of the elliptic curve $v^2=u^3-cu$ is $1$, then Eq.~\eqref{eq:x4py4ecz4} has no non-zero integer solutions. For values of $c$ for which \eqref{eq:x4py4ecz4} is solvable, such as $c=17$, the question of finding \emph{all} primitive solutions (that is, Problem \ref{prob:main}) is not addressed in~\cite[Section 6.6]{MR2312337}. The case $c=17$ was solved in 2001 by Flynn and Wetherell~\cite{flynn2001covering}. \begin{theorem} \label{th:flynn2001} The primitive positive integer solutions to Eq.~\eqref{eq:x4py4ecz4} with $c=17$, that is, \begin{equation*} x^4+y^4=17z^4 \end{equation*} are $(x,y,z)=(2,1,1)$ and $(1,2,1)$. \end{theorem} A combination of Theorem \ref{th:flynn2001} with the elementary methods discussed in the introduction solves Eq.~\eqref{eq:x4py4ecz4} for all $1\leq c \leq 81$, and in fact for all $1\leq c\leq 100$ except for $c=82$ and $c=97$. In 2023, using the methods developed to prove Theorem \ref{th:flynn2001} with some extra tweaks, P{\u{a}}durariu~\cite{puadurariu2023rational} was able to resolve the case $c=97$. \begin{theorem} \label{th2:flynn2001} The primitive positive integer solutions to Eq.~\eqref{eq:x4py4ecz4} with $c=97$, that is, \begin{equation*} x^4+y^4=97z^4 \end{equation*} are $(x,y,z)=(2,3,1)$ and $(3,2,1)$. \end{theorem} To the best of our knowledge, the case $c=82$ remains open. Eq.~\eqref{eq:x4py4ecz4} with $z\neq 0$ can be rewritten as $(x/z)^4+(y/z)^4=c$, and reduces to finding a representation of $c$ as a sum of two rational fourth powers. The similar problem of representing an integer $b$ as a \emph{difference} of two rational fourth powers reduces to the equation \begin{equation} \label{eq:x4pby4ez4} x^4+b y^4=z^4 \end{equation} with $y\neq 0$. The problem of \emph{existence} of integer solutions to \eqref{eq:x4pby4ez4} has been solved in~\cite[Section 6.3.4]{mainbook} for $1\leq b \leq 218$, and is left open for $b=219$. However, if a non-trivial solution to \eqref{eq:x4pby4ez4} exists, the problem of finding all solutions is not studied in~\cite{mainbook}. 
In 2020, S\"oderlund~\cite{soderlund2020note} resolved this problem for $b=34$. \begin{theorem} \label{th:soderlund} The only primitive non-zero integer solutions to the equation \begin{equation*} x^4+34 y^4=z^4 \end{equation*} are $(x, y, z) = (\pm 3, \pm 2, \pm 5)$. \end{theorem} Taclay and Bacani~\cite{taclay2023} consider Eq.~\eqref{eq:x4pby4ez4} with $b=2p$ where $p$ is a prime and they prove the following result. \begin{theorem} \label{th:x4p2py4mz4} If $p$ is a prime satisfying any of the following conditions: \begin{itemize} \item[(i)] {$p \not \equiv 1$} (mod $16$), \item[(ii)] {$p\equiv 3,4$} (mod $5$), \item[(iii)] {$p \equiv 7,8,11$} (mod $13$), \item[(iv)] {$p \equiv 4,5,6,9,13,22,28$} (mod $29$). \end{itemize} Then the equation \begin{equation} \label{eq:x4p2py4mz4} x^4+2py^4=z^4 \end{equation} has no non-trivial primitive integer solutions. \end{theorem} The smallest prime $p$ for which Eq.~\eqref{eq:x4p2py4mz4} is not covered by Theorems \ref{th:soderlund} and \ref{th:x4p2py4mz4} is $p=97$. Eq.~\eqref{eq:ax4pby4mcz4} with coefficients $a,b,c$ being perfect cubes was studied in~\cite{MR249355,Mordell_1970}, and we have the following result. \begin{theorem} \label{th:MR249355} Let $s,t,u$ be pairwise coprime positive integers, with $s \equiv t \equiv u \equiv -1$ (mod 8). Let $n$ be an integer divisible by $8$ and let $v,w$ be integers satisfying $v \equiv w \equiv -1$ (mod 8), and ratios \begin{equation*} \frac{tw-uv}{s}, \quad \frac{un-sw}{t}, \quad \frac{sv-tn}{u} \end{equation*} are integers whose positive odd factors are all $1$ modulo $8$. Then, there are no non-trivial integer solutions of \begin{equation*} \left(\frac{tw-uv}{s} \right)^3 x^4 + \left(\frac{un-sw}{t} \right)^3 y^4 = \left(\frac{tn-sv}{u} \right)^3z^4. \end{equation*} \end{theorem} Examples of values $(s,t,u,n,v,w)$ satisfying these conditions are $(7,15,23,\\ 8280,4991,13335)$, $(7,15,23,8280,16583,15855)$ and $(7,15,23,11040,3703, \\14175)$ which correspond to equations \begin{equation*} \begin{aligned} & 12176^3x^4 + 6473^3 y^4=3881^3z^4, \quad 20512^3x^4 + 353^3 y^4=5297^3z^4 , \\ & 18208^3x^4 + 10313^3 y^4=6073^3z^4 , \end{aligned} \end{equation*} respectively. \subsection{Equations of signature $(2,4,r)$} The case $p=2$, $q=4$ and prime $r \geq 5$ in \eqref{eq:genfermat} results in the equation \begin{equation} \label{eq:sig24r} ax^2+by^4=cz^r, \quad r \geq 5 \end{equation} with positive integer parameters $a,b,c$. Let us first consider Eq.~\eqref{eq:sig24r} with $a=2$ and $b=c=1$, that is, \begin{equation} \label{eq:x4p2y2mzr} 2x^2+y^4 = z^r, \quad\quad r\geq 5. \end{equation} In 2008, Dieulefait and Urroz~\cite{dieulefait2008solvingfermattypeequations} proved that if $r >349$ is a prime, then Eq.~\eqref{eq:x4p2y2mzr} does not have non-trivial primitive solutions. In 2010, Bennett, Ellenberg and Ng~\cite{bennett2010diophantine} significantly strengthened this result and completely solved Eq.~\eqref{eq:x4p2y2mzr} for all integers $r \geq 5$. \begin{theorem} \label{th:bennett2010rint} Eq.~\eqref{eq:x4p2y2mzr} has no non-trivial primitive solutions for integer $r\geq 6$, while for $r=5$ its only non-trivial primitive integer solutions are $(x,y,z)=(\pm 11, \pm 1,3)$. \end{theorem} Let us now consider Eq.~\eqref{eq:sig24r} with $a=3$ and $b=c=1$. Dieulefait and Urroz~\cite{dieulefait2008solvingfermattypeequations} proved that if $r >131$ is a prime, then the equation \begin{equation*} 3x^2+y^4= z^r, \end{equation*} does not have any non-trivial primitive solutions. 
Pacetti and Villagra Torcomian~\cite{MR4473105} remarked that this result can be improved to $r > 17$ by Proposition 5.4 of~\cite{koutsianas2019generalizedfermatequationa23b6cn}. In 2022, Pacetti and Villagra Torcomian~\cite{MR4473105} studied Eq.~\eqref{eq:sig24r} with $a \in \{5,6,7\}$, $b=c=1$ and prime $p$, that is, \begin{equation} \label{eq:ax4p5y2pzp} ax^2 + y^4 = z^p, \end{equation} and proved the following result. \begin{theorem} \label{th:ax4p5y2pzp} Let $(a,p_0)\in\{(5,499), (6,563), (7,349)\}$. Assume that $p > p_0$ is prime. Then Eq.~\eqref{eq:ax4p5y2pzp} has no non-trivial primitive solutions. \end{theorem} D\k{a}browski~\cite{MR2737959} studied equations of the form $y^4- z^2 = sx^p$, which, after substitution $x\to -x$, can be rewritten with positive coefficients as \begin{equation} \label{eq:qxppy4z2} sx^p + y^4 = z^2, \end{equation} where $(s,p)$ are certain pairs of odd primes. The author used a result from~\cite{ivorra2006quelques} to deduce the following theorem. \begin{theorem} \label{th:qxppy4z2} Let $s > 3$ be a prime such that either $s \equiv 3$ modulo $8$ and $s \neq 2t^2 + 1$ for any integer $t$, or $s \equiv 5$ modulo $8$ and $s \neq t^2 + 4$ for any integer $t$. In addition, let $p$ be a prime satisfying $p > (8 \sqrt{s+1}+1)^{16(s-1)}$. Then Eq.~\eqref{eq:qxppy4z2} has no non-trivial primitive integer solutions. \end{theorem} Examples of $s$ satisfying the conditions of Theorem \ref{th:qxppy4z2} are $s=11$, $37$, $43$ and so on. Equation \begin{equation} \label{eq:qxppy4z4} sx^p + y^4 = z^4 \end{equation} is a special case of \eqref{eq:qxppy4z2} with $z$ in \eqref{eq:qxppy4z2} being a perfect square. This equation has been studied in~\cite{Dabrowski_2007,MR2737959}, where the following results have been obtained. \begin{theorem} Let $\alpha\geq 0$ be an integer and $p\geq 5$ be a prime. Then equation \begin{equation*} 2^\alpha x^p + y^4 = z^4 \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} \begin{theorem} \label{th:2asbxppy4mz4} Let $s$ be an odd prime such that $s \neq 2^t \pm 1$ for any integer $t$. Let $p$ be a prime satisfying $p > (\sqrt{ 8s + 8} + 1)^{2s-2}$. Let $\alpha\geq 0$ and $\beta>0$ be integers. Then equation \begin{equation} \label{eq:2asbxppy4mz4} 2^\alpha s^\beta x^p + y^4 = z^4 \end{equation} has no non-trivial primitive integer solutions. \end{theorem} Eq.~\eqref{eq:2asbxppy4mz4} has been further studied by Bennett~\cite{MR4205757}, who proved a version of Theorem \ref{th:2asbxppy4mz4} with a smaller set of excluded primes. \begin{theorem} \label{th:bennett21} Let $s$ be a prime not of the form $s=2^{2^k}+1$ for integer $k\geq 1$, and let $\alpha,\beta$ be non-negative integers. Let $p$ be a prime satisfying $p > (\sqrt{8s + 8} + 1)^{2s-2}$. Then Eq.~\eqref{eq:2asbxppy4mz4} has no non-trivial primitive integer solutions. \end{theorem} Primes of the form $s=2^{2^k}+1$ are called Fermat primes. It is widely believed that the only such primes are $s=3,5,17,257$ and $65537$. The case $s=3$ corresponds to $k=0$, while Theorem \ref{th:bennett21} fails to apply only if $k\geq 1$. Bennett~\cite{MR4205757} also solved Eq.~\eqref{eq:2asbxppy4mz4} for the exceptional cases $s=5$ and $17$, at least for $(\alpha,\beta)=(0,1)$. \begin{theorem} For every prime $p>5$, equations \begin{equation*} 5 x^p + y^4 = z^4 \quad\quad \text{and} \quad\quad 17 x^p + y^4 = z^4 \end{equation*} have no non-trivial primitive integer solutions. \end{theorem} Theorem \ref{th:bennett21} is applicable only if $p$ is large in terms of $s$. 
For small values of $s$, Bennett~\cite{MR4205757} was able to remove this condition and extend Theorem \ref{th:bennett21} to all $p>5$. \begin{theorem} If $s$ is a prime with $2\leq s < 50$, $s\neq 5$, $s\neq 17$, $\alpha$ and $\beta$ are non-negative integers, and $p>5$ is a prime, then Eq.~\eqref{eq:2asbxppy4mz4} has no non-trivial primitive integer solutions. \end{theorem} Pacetti and Villagra Torcomian~\cite{MR4609012} studied the equation \begin{equation} \label{eq:xppby2mz4} x^p+by^2=z^4 \end{equation} for certain values of $b$, and they obtained the following results. \begin{theorem} Let $p > 19$ be a prime number such that $p \neq 97$ and $p \equiv 1, 3$ modulo $8$. Then the only non-trivial primitive integer solutions of Eq.~\eqref{eq:xppby2mz4} with $b=6$ are $(x,y,z)=(1, \pm 20, \pm 7)$. \end{theorem} \begin{theorem} Let $p > 19$ be a prime number such that $p \equiv 1, 3$ modulo $8$. If \begin{itemize} \item[(i)] {$b=10$} and $p \neq 139$, or \item[(ii)] {$b=11$} and $p \neq 73$, or \item[(iii)] {$b=19$} and $p \neq 43,113$, or \item[(iv)] {$b=129$} and $p \neq 43$, \end{itemize} then Eq.~\eqref{eq:xppby2mz4} has no non-trivial primitive solutions. \end{theorem} \begin{theorem} Let $p > 900$ be a prime number. Then, there are no non-trivial primitive solutions of Eq.~\eqref{eq:xppby2mz4} with $b=129$. \end{theorem} Cao~\cite{MR1341665} studied Eq.~\eqref{eq:xppby2mz4} with $b=p$, that is, \begin{equation} \label{eq:xpppy2mz4} x^p+py^2=z^4 \end{equation} and proved that if $p \equiv 1$ modulo $4$ is a prime and\footnote{Here, $B_n$ denotes the $n$th Bernoulli number.} $p\nmid B_{(p-1)/2}$, then Eq.~\eqref{eq:xpppy2mz4} has no integer solutions with $\gcd(y,z)=1$, $p\vert y$ and $2\vert z$. Langmann~\cite{MR1604052} considered the equation \begin{equation} \label{eq:ax2mpy2mzn} ax^{2m}+y^2=z^n \end{equation} and obtained the following result. \begin{theorem} \label{th:ax2mpy2mzn} Let $m,n \geq 2$ be integers. Then, for almost all square-free $a \not \equiv -1$ modulo $4$ with \begin{equation*} \frac{n}{\gcd(n,h(-a))} \geq \max\{6,14-2m\}, \end{equation*} where $h(-a)$ denotes the class number\footnote{A list of the class numbers for the first 10,000 square-free $a$ can be found at \url{https://oeis.org/A000924/b000924.txt}.} of $\mathbb{Q}(\sqrt{-a})$, Eq.~\eqref{eq:ax2mpy2mzn} has no non-trivial primitive integer solutions. \end{theorem} The case $m=2$ of Theorem \ref{th:ax2mpy2mzn} solves many equations of the form $ax^4+y^2=z^n$ of signature $(4,2,n)$. \subsection{Equations of signature $(2,6,r)$} Some researchers have considered equations with signatures $(2,6,r)$ for $r\geq 3$. This family contains special signatures $(2,6,4)$, $(2,6,5)$ and $(2,6,6)$. Chen~\cite{Chen_2012} studied Eq.~\eqref{eq:genfermat} with $p$ prime, $q=6$, $b=r=2$ and $a=c=1$, that is, \begin{equation} \label{chen_a1c1} x^p +2y^6= z^2. \end{equation} \begin{theorem} Let $p$ be a prime such that $p \equiv 1, 7$ (mod $24$) and $p \neq 7$. Then Eq.~\eqref{chen_a1c1} does not have any non-trivial primitive integer solutions except those with $x = \pm1$. \end{theorem} Eq.~\eqref{eq:genfermat} with $r$ prime, $q=6$, $p=2$ and $a=c=1$, that is, \begin{equation} \label{eq:x2pby6mzr} x^2+by^6=z^r, \end{equation} has been studied in several papers. The case $b=2$ of \eqref{eq:x2pby6mzr}, that is, \begin{equation} \label{eq:x2p2y6mzr} x^2 + 2y^6 = z^r, \end{equation} has been studied by Pacetti and Villagra Torcomian~\cite{MR4473105}, where it was resolved for all primes $r>257$. 
\begin{theorem} Let $r > 257$ be a prime number. Then Eq.~\eqref{eq:x2p2y6mzr} has no non-trivial primitive integer solutions. \end{theorem} Koutsianas~\cite{koutsianas2019generalizedfermatequationa23b6cn} solved the case $b=3$ of \eqref{eq:x2pby6mzr}, that is, the equation \begin{equation} \label{eq:x2p3y6mzr} x^2 +3y^6 = z^r. \end{equation} \begin{theorem} Let $r \geq 3$ be an integer. If $r\neq 4$, then Eq.~\eqref{eq:x2p3y6mzr} has no non-trivial primitive integer solutions, while for $r=4$ its only non-trivial primitive integer solutions are $(x, y, z) = (\pm 47, \pm 2, \pm 7)$. \end{theorem} Eq.~\eqref{eq:x2pby6mzr} with $b=6$, that is, \begin{equation} \label{eq:x2p6y6mzr} x^2 + 6y^6 = z^r, \end{equation} was solved by Pacetti and Villagra Torcomian~\cite{MR4473105} for $r>563$. \begin{theorem} Let $r > 563$ be a prime number. Then, Eq.~\eqref{eq:x2p6y6mzr} has no non-trivial primitive integer solutions. \end{theorem} Eq.~\eqref{eq:x2pby6mzr} with some other values of $b$ has been studied in~\cite{MR4583916}, where the following theorem has been proven. \begin{theorem} \label{th:x2pby6mzr} Let $(b,r_0)$ be as in Table \ref{tb:x2pby6mzr}. Assume that $r\geq r_0$ is a prime satisfying the congruence conditions in Table \ref{tb:x2pby6mzr}. Then Eq.~\eqref{eq:x2pby6mzr} has no non-trivial primitive integer solutions. \end{theorem} \begin{table} \begin{center} \caption{\label{tb:x2pby6mzr}Details for Theorem \ref{th:x2pby6mzr}. } \begin{tabular}{|c|c|c|} \hline $b$ & $r_0$ & Congruence conditions \\ \hline\hline 5 & 1033 & \\\hline 7 & 337 & $r \equiv 5, 7$ (mod 12) \\\hline 11 & 557 & $r \equiv 3$ (mod 4) \\\hline 13 & 3491 & \\\hline 15 & 743 & $r \equiv 5, 7, 15, 17,$ or $19$ (mod 24) \\\hline 19 & 1031 & \\\hline \end{tabular} \end{center} \end{table} In the same paper~\cite{MR4583916}, the authors remarked that they also studied \eqref{eq:x2pby6mzr} with $b=10$ and $b=17$, but the resulting computation is infeasible. Zelator~\cite{MR1188732} proved that for any positive odd integer $b$ with a prime factor $l \equiv \pm 3$ modulo $8$ and positive integers $m$ and $n$, the equation \begin{equation*} x^2 + b^2y^{2m} = z^{4n} \end{equation*} has no positive integer solutions with $\gcd(x,by)=1$ and $y$ odd. With $m=3$ and $n=1$, this gives some partial progress towards the resolution of the equation $x^2 + b^2y^{6} = z^{4}$ of signature $(2,6,4)$. \subsection{Equations of signature $(4,4,3)$} Let us now discuss Eq.~\eqref{eq:genfermat} with $(p,q,r)=(4,4,3)$ and $a=c=1$, that is, the equation \begin{equation} \label{eq:x2pby4pz3} x^4+b y^4=z^3. \end{equation} The case $b=2$ of \eqref{eq:x2pby4pz3} was resolved in 2017 by S\"oderlund~\cite{soderlund2017primitive}. \begin{theorem} \label{th:x4p2y4pz3} The only non-trivial primitive integer solutions to the equation \begin{equation*} x^4+2y^4=z^3 \end{equation*} are $(x,y,z)=(\pm 3, \pm 5, 11)$. \end{theorem} The case $b=3$ was solved in 1994 by Terai~\cite{MR1288426}. \begin{theorem} The equation \begin{equation*} x^4+3y^4=z^3 \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} The next case $b=4$ is a special case of the following theorem~\cite{soderlund2020diophantine}. \begin{theorem} For any prime number $s$, equation \begin{equation*} x^4+s^2 y^4=z^3 \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} The next case is $b=5$, and we do not know whether this case has been solved. 
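For open cases such as $b=5$, a natural first step is to verify computationally that Proposition \ref{prop:local} does not rule the equation out before attempting more advanced methods. The following minimal Python sketch (illustrative code written for this survey, not taken from the cited works) brute-forces the local condition of Proposition \ref{prop:local} for an equation $ax^p+by^q=cz^r$ modulo a user-chosen prime power $s^m$; since it loops over all residue triples, it is feasible only for small $s^m$.
\begin{verbatim}
def has_local_obstruction(a, b, c, p, q, r, s, m):
    """Return True if every solution of a*x^p + b*y^q = c*z^r modulo s^m
    has x, y and z all divisible by s (the condition of the proposition
    on local obstructions), by brute force over all residues modulo s^m."""
    mod = s ** m
    for x in range(mod):
        for y in range(mod):
            lhs = (a * pow(x, p, mod) + b * pow(y, q, mod)) % mod
            for z in range(mod):
                if x % s == 0 and y % s == 0 and z % s == 0:
                    continue  # such triples are allowed by the obstruction
                if lhs == (c * pow(z, r, mod)) % mod:
                    return False  # found x, y, z not all divisible by s
    return True

# The example from the introduction: x^4 + y^4 = 3z^3 is obstructed modulo 9.
print(has_local_obstruction(1, 1, 3, 4, 4, 3, 3, 2))   # prints True
\end{verbatim}
Of course, a return value of \texttt{False} says nothing about the existence of non-trivial primitive solutions; it only means that the chosen prime power provides no local obstruction.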
In 2019, S\"oderlund~\cite{soderlund2019some} reproved the case $b=3$ and resolved the cases of $b=19,43$ and $67$. \begin{theorem} For $b \in \{3,19,43,67\}$, equation \begin{equation*} x^4+b y^4=z^3 \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} \subsection{A systematic approach} \label{sec:system} \begin{table}\begin{center} \caption{\label{tab:H60Fermat}Uninvestigated equations \eqref{eq:genfermat} with $H=| a| 2^p+| b| 2^q+| c| 2^r \leq 60$.} \begin{tabular}{|c|c|c|c|c|c|} \hline $H$ & Equation & $H$ & Equation & $H$ & Equation \\ \hline \hline 40 & $x^4+2y^3+z^3=0$ & 56 & $x^4+4y^3+z^3=0$ &56 & $x^5+y^4-2z^2=0$ \\ \hline 48 & $x^4+3y^3+z^3=0$ & 56 & $2x^4-y^4+z^3=0$ & 60 & $x^5+y^4+3z^2=0$ \\ \hline 56 & $x^4+3y^3+2z^3=0$ &56 & $x^5+2y^3+z^3=0$ &60 & $x^5+y^4-3z^2=0$ \\\hline \end{tabular} \end{center} \end{table} Monograph~\cite{mainbook} suggests studying Diophantine equations systematically. It defines the size of Eq.~\eqref{eq:genfermat} as $H=| a| 2^p+| b| 2^q+| c| 2^r$, and lists all non-trivial equations of size $H\leq 60$. Equations of small size with $| abc| =1$ are covered in Table \ref{tb:pqrsigs}. Equations $x^4+y^4=2z^3$ and $y^3+z^3=2x^4$ of size $H=48$ are special cases of Theorems \ref{th2:bennett2004} and \ref{th:zhang2014}, respectively. Equation $2x^2+y^4=z^5$ of size $H=56$ is the $r=5$ case of Theorem \ref{th:bennett2010rint}. Equation $x^4+2y^4=z^3$ of the same size has been solved in Theorem \ref{th:x4p2y4pz3}. Equations $x^4+2y^3+2z^3=0$, $x^4-y^4=2z^3$ and $x^4-y^4=3z^3$ of sizes $H=48$, $48$ and $56$, respectively, have been solved in~\cite[Appendix A]{wilcox2024systematic}. All other non-trivial equations of size $H\leq 60$ are listed in Table \ref{tab:H60Fermat}. All these equations except $x^5+2y^3+z^3=0$ have special signatures, and we were not able to find a reference where any of them has been solved. We invite the reader to investigate these equations. \section{Equations of signature $(p,p,p)$} \subsection{Reduction to computing rational points on hyperelliptic curves} \label{sec3.1} As discussed in the introduction, if the exponents $(p,q,r)$ in \eqref{eq:genfermat} are not special, then we may assume that all $p,q,r$ are prime. In this section we discuss the case $p=q=r$, that is, equation \begin{equation} \label{eq:axppbypmczp} ax^p+by^p=cz^p, \end{equation} where $a,b,c$ are positive integers, and $p\geq 5$ is a prime. It is well-known~\cite[Section 6.2.3]{mainbook} and easy to check that if $(x,y,z)$ with $xyz\neq 0$ is an integer solution to \eqref{eq:axppbypmczp}, then \begin{equation*} \begin{aligned} (X,Y)=\,&(xy/z^2,a(x/z)^p-c/2), \quad (xz/y^2,-c(z/y)^p+b/2)\nonumber\\ &\text{and} \quad (yz/x^2,b(y/x)^p+a/2) \end{aligned} \end{equation*} are rational points on the curves \begin{equation} \label{eq:homdia3dhyp3} Y^2 = -abX^p+c^2/4, \quad Y^2 = acX^p+b^2/4 \quad \text{and} \quad Y^2 = bcX^p+a^2/4, \end{equation} respectively. In all cases, $X\neq 0$. These are curves of genus $g=(p-1)/2 \geq 2$, and, by Faltings' theorem~\cite{MR718935}, each of them has a finite number of rational points. If these points can be computed for at least one of these curves, it is straightforward to convert them into solutions to \eqref{eq:axppbypmczp}. However, computing rational points on a high-genus curve is a difficult problem in general, and the algorithms are known only in some special cases. 
One example is given by the following theorem, which is a corollary of the main result of~\cite{em/1069786350}, and is formulated explicitly in~\cite[Theorem 3.76]{mainbook}. \begin{theorem} \label{th:jacrank0} There is an algorithm for computing all rational solutions to any equation defining a curve of genus $g\geq 2$ whose Jacobian has rank $r=0$. \end{theorem} The definition of the rank $r$ of the Jacobian in Theorem \ref{th:jacrank0} is too technical to be presented here, but we mention that, for an equation $y^2=P(x)$, the rank $r$ can be computed (or, more often, estimated) using the following Magma commands \begin{align*} &{\tt > P<x> := PolynomialRing(Rationals());}\\ &{\tt > C := HyperellipticCurve(P(x));}\\ &{\tt > J := Jacobian(C);}\\ &{\tt > RankBounds(J);} \end{align*} which can be run in the Magma calculator\index{Magma calculator} \url{http://magma.maths.usyd.edu.au/calc/}. In general, the output looks like ``$a\,\,b$'', where $a$ and $b$ are integers that are the lower and upper bounds for the rank $r$ of the Jacobian. If for at least one out of the three curves \eqref{eq:homdia3dhyp3} the output is ``$0\,\,0$'', then its rational points can be computed by Theorem \ref{th:jacrank0}, which in turn leads to the resolution of the corresponding Eq.~\eqref{eq:axppbypmczp}. \subsection{Explicit small values of $p$} We next discuss Eq.~\eqref{eq:axppbypmczp} for explicit small $p\geq 5$. In this section, we will not assume that $p$ is prime. With $p=5$, Eq.~\eqref{eq:axppbypmczp} becomes \begin{equation} \label{eq:ax5pby5mcz5} ax^5+by^5=cz^5, \end{equation} where $a,b,c$ are positive integers. For Eq.~\eqref{eq:ax5pby5mcz5}, the corresponding curves \eqref{eq:homdia3dhyp3} have genus $g=2$, and if at least one of these curves has Jacobian of rank $r\leq 1$, then its rational points can be computed using Magma's {\tt Chabauty} command, which leads to the complete resolution of \eqref{eq:ax5pby5mcz5}. As noted in Sections 6.2.2 and 6.3.4 of~\cite{mainbook}, this method resolves Problem \ref{prob:existence} for all Eqs.~\eqref{eq:ax5pby5mcz5} with $a+b+c<19$. If Eqs.~\eqref{eq:ax5pby5mcz5} are ordered by $a+b+c$, then the first Eq.~\eqref{eq:ax5pby5mcz5} for which Problem \ref{prob:existence} is left open in~\cite{mainbook} is \begin{equation} \label{eq:4x5p4y5m11z5} 4x^5+4y^5=11z^5. \end{equation} If an Eq.~\eqref{eq:ax5pby5mcz5} has some obvious non-trivial integer solution, this immediately solves Problem \ref{prob:existence} with a ``Yes'' answer, but leaves open the problem of listing all primitive integer solutions (Problem \ref{prob:main}). In particular, the equation \begin{equation*} 2x^5+3y^5=5z^5 \end{equation*} has the solution $(x,y,z)=(1,1,1)$, but we do not know whether it has any other non-trivial primitive integer solutions. Problem \ref{prob:existence} for Eq.~\eqref{eq:ax5pby5mcz5} with $a=b=1$, that is, \begin{equation} \label{eq:x5py5ecz5} x^5+y^5=cz^5, \end{equation} reduces to investigating which positive integers $c$ can be represented as a sum of two rational fifth powers. As remarked in~\cite[Section 6.3.4]{mainbook}, the discussed argument solves this question for all positive integers $1\leq c\leq 100$ except for $c = 68, 74, 87$ and $88$. Dirichlet (see~\cite[pp. 735]{dickson1920history}) proved the non-existence of non-zero integer solutions to \eqref{eq:x5py5ecz5} for certain values of $c$ with no prime factors congruent to $1$ modulo $5$. Lebesgue conjectured that \eqref{eq:x5py5ecz5} is not solvable for any such $c$, except for the solutions with $x=y=z$ for $c=2$. 
In 2004, Halberstadt and Kraus~\cite{kraus2004} confirmed this conjecture. \begin{theorem} \label{th:halberstadt2004} Let $c$ be an integer with no prime factors congruent to $1$ modulo $5$. If $c \neq 2$, Eq.~\eqref{eq:x5py5ecz5} has no non-trivial integer solutions. If $c =2$, the only non-trivial primitive solutions are $(x,y,z)=\pm(1,1,1)$. \end{theorem} In particular, out of the exceptional values $c = 68, 74, 87$ and $88$ listed above, Theorem \ref{th:halberstadt2004} covers the first three; hence the only $1\leq c \leq 100$ for which Problem \ref{prob:existence} for Eq.~\eqref{eq:x5py5ecz5} remains open is $c=88$. In this case, \eqref{eq:x5py5ecz5} reduces to \eqref{eq:4x5p4y5m11z5}. With $p=6$, Eq.~\eqref{eq:axppbypmczp} becomes \begin{equation} \label{eq:ax6pby6mcz6} ax^6+by^6=cz^6. \end{equation} In 2023, Newton and Rouse~\cite{MR4552507} investigated the solvability of Eq.~\eqref{eq:ax6pby6mcz6} with $a=b=1$. Specifically, they proved that $164634913$ is the smallest positive integer that is the sum of two sixth powers of rational numbers but not the sum of two sixth powers of integers. To prove this result, the authors determined all positive integers $c\leq 164634913$ for which the equation \begin{equation*} x^6+y^6=c z^6 \end{equation*} has a solution with $z\neq 0$. Dirichlet (see~\cite[pp. 736]{dickson1920history}) proved that Eq.~\eqref{eq:axppbypmczp} with $p=14$, $a=c=1$ and $b=2^{\alpha} 7^{\beta}$ for $\alpha \geq 0$ and $\beta \geq 1$, that is, \begin{equation*} x^{14}+2^{\alpha} 7^{\beta} y^{14}=z^{14}, \end{equation*} has no non-trivial primitive integer solutions. \subsection{The case $a=b=1$} We next discuss the works investigating infinite families of Eq.~\eqref{eq:axppbypmczp} in which some of $a,b,c$ are fixed while $p$ varies. We start with the case $a=b=1$. In this instance, Eq.~\eqref{eq:axppbypmczp} reduces to \begin{equation} \label{eq:genfermata1b1ppp} x^p+y^p=cz^p, \end{equation} where $c$ is a positive integer and $p\geq 5$ is a prime. Eq.~\eqref{eq:genfermata1b1ppp} with $c=15$, that is, \begin{equation} \label{eq:xppypm15zp} x^p+y^p=15z^p \end{equation} was considered by Kraus~\cite{kraus1996equations}, who proved the following result. \begin{theorem} \label{th:xppypm15zp} If $p\geq 5$ is a prime\footnote{Kraus~\cite{kraus1996equations} states the result for $p\geq 7$, but direct verification using the method described in Section \ref{sec3.1} shows that it remains correct for $p=5$.} such that either $2p+1$ or $4p+1$ is also prime, then Eq.~\eqref{eq:xppypm15zp} has no non-trivial integer solutions. \end{theorem} Examples of primes satisfying the condition of Theorem \ref{th:xppypm15zp} are $p=5$, $7$, $11$, $13$, $23$ and so on. Kraus~\cite{kraus1996equations} also proved that for \emph{all} primes $p\geq 5$, Eq.~\eqref{eq:xppypm15zp} has no non-trivial integer solutions with $xyz$ divisible by $p$. We next consider Eq.~\eqref{eq:genfermata1b1ppp} with $c=p$, that is, \begin{equation} \label{eq:genfermata1b1cppp} x^p+y^p=pz^p. \end{equation} Maillet (see~\cite[pp. 759]{dickson1920history}) solved Eq.~\eqref{eq:genfermata1b1cppp} with $p$ being a regular prime. Recall that a prime $p$ is said to be regular when it does not divide the numerator of any one of the first $(p-3)/2$ Bernoulli numbers. \begin{theorem} \label{th:regular} For any regular prime $p$, Eq.~\eqref{eq:genfermata1b1cppp} has no non-trivial integer solutions. 
\end{theorem} A note of Westlund~\cite{MR1517149} extended the theory of this equation to any odd prime $p$, resulting in the following partial result. \begin{theorem} For any odd prime $p$, Eq.~\eqref{eq:genfermata1b1cppp} has no integer solutions such that $z$ is coprime to $p$. \end{theorem} Ribet~\cite{ribet1997equation} considered Eq.~\eqref{eq:genfermata1b1ppp} with $c=2^{\alpha}$, where $\alpha$ is a positive integer, that is, \begin{equation} \label{eq:ribetfermat} x^p+y^p =2^{\alpha} z^p. \end{equation} The case when $\alpha=1$ corresponds to the three integers $x^p$, $z^p$ and $y^p$ forming an arithmetic progression. Confirming a conjecture of D\'enes~\cite{MR68560}, Darmon and Merel~\cite{Darmon1997,MR1730439} proved that this is impossible unless $xyz=0$ or $x$, $y$ and $z$ are all equal. \begin{theorem} \label{th:Darmon1997} Eq.~\eqref{eq:genfermata1b1ppp} with $c=2$ has no primitive solutions with $|xyz|>1$ for integers $p \geq 3$. \end{theorem} Theorem \ref{th:Darmon1997} treats the case $\alpha=1$ of \eqref{eq:ribetfermat}. For $2\leq \alpha <p$, Ribet~\cite{ribet1997equation} proved the following result. \begin{theorem} \label{th2:Darmon1997} For prime $p\geq 3$ and any $2\leq \alpha <p$, Eq.~\eqref{eq:ribetfermat} has no solutions with $xyz \neq 0$. \end{theorem} Because the case of general $\alpha\geq 0$ can be reduced to the case $0\leq \alpha <p$, and the case $\alpha=0$ is covered by Fermat's Last Theorem, a combination of Theorems \ref{th:Darmon1997} and \ref{th2:Darmon1997} solves \eqref{eq:ribetfermat} for all $\alpha\geq 0$. We next discuss Eq.~\eqref{eq:genfermata1b1ppp} with $c=s^{\alpha}$ for odd prime $s$, that is, equation \begin{equation} \label{eq:ribetfermats} x^p + y^p = s^{\alpha} z^p. \end{equation} Kraus~\cite{MR1611640} investigated Eq.~\eqref{eq:ribetfermats} and proved the following result. \begin{theorem} \label{th:kraus1997} Let $s$ be an odd prime number which is not of the form $s=2^k\pm 1$, and let $\alpha$ be a non-negative integer. If $p$ is a prime such that $p > \left(1+\sqrt{(s+1)/2}\right)^{(s+11)/6}$, then Eq.~\eqref{eq:ribetfermats} has no solutions in non-zero integers $x,y,z$. \end{theorem} Theorem \ref{th:kraus1997} works only if $p$ is large in terms of $s$. For small $s$, there are general results covering all $p$. As remarked by Ribet~\cite{ribet1997equation}, the following result was proved by Serre~\cite{MR885783} subject to some conjectures that have been established in later works~\cite{MR1611640,MR1047143,wiles1995modular}. \begin{theorem} \label{th:ribet1997} Suppose that $p$ and $s$ are prime numbers, $3 \leq s \leq 60$, $s\neq 31$, $p \geq 11$, $p\neq s$. If $\alpha \geq 1$, then there are no triples of non-zero integers $(x, y, z)$ which satisfy \eqref{eq:ribetfermats}. \end{theorem} Cohen~\cite[Theorem 15.5.3]{MR2312338} extended Theorem \ref{th:ribet1997} to the range $3 \leq s \leq 100$ and all $p\geq 5$. \begin{theorem} \label{th:ribetfermats} Suppose that $3\leq s \leq 100$ is a prime, $s\neq 31$. Then Eq.~\eqref{eq:ribetfermats} with prime $p \geq 5$ and any $\alpha\geq 1$ does not have any solutions with $x,y,z$ non-zero and pairwise coprime. \end{theorem} Theorem \ref{th:ribetfermats} does not work for $s=31$, because the resulting equation \begin{equation} \label{eq:xppypm31alphazp} x^p + y^p = 31^{\alpha} z^p, \end{equation} has the primitive solution $(x, y, z) = (2, -1, 1)$ when $\alpha =1$ and $p=5$. For $7 \leq p \leq 10^6$, Cohen~\cite{MR2312338} proved the following theorem. 
\begin{theorem} \label{th:xppypm31alphazp} Eq.~\eqref{eq:xppypm31alphazp} does not have any non-trivial pairwise coprime solutions $(x,y,z)$ for prime $p$ satisfying $7 \leq p \leq 10^6$. \end{theorem} See~\cite[Theorem 15.6.4]{MR2312338} for the case $11\leq p \leq 10^6$ and~\cite[Corollary 15.7.5]{MR2312338} for the case $p=7$. We next return to Eq.~\eqref{eq:genfermata1b1ppp} with $c$ not necessarily being a prime power. Sitaraman~\cite{sitaraman2000fermat} considered the case $c=sp$ for a regular prime $p$ and an integer $s$, and obtained the following theorem. \begin{theorem} Let $p\geq 5$ be a regular prime, and let $s$ be an integer divisible only by primes of the form $kp-1$ where $\gcd(k, p)=1$. Then \begin{equation*} x^p+ y^p= ps z^p \end{equation*} has no non-trivial integer solutions. \end{theorem} \subsection{The case $a=1$} In this instance, Eq.~\eqref{eq:axppbypmczp} reduces to \begin{equation} \label{eq:xppbyppczp} x^p+by^p=cz^p, \end{equation} where $b,c>1$ are integers. Eq.~\eqref{eq:xppbyppczp} with $b=3$, $c=5$ and $p \geq 7$ a prime, that is, \begin{equation} \label{eq:xpp3ypp5zp} x^p+3y^p=5z^p \end{equation} was considered by Kraus~\cite{kraus1996equations}, who proved the following result. \begin{theorem} \label{th:xpp3ypp5zp} If $p \geq 5$ is a prime\footnote{Kraus~\cite{kraus1996equations} states the result for $p \geq 7$, but direct verification shows that it remains correct for $p=5$.} such that $2p+1$ or $4p+1$ is also prime, then Eq.~\eqref{eq:xpp3ypp5zp} has no non-zero integer solutions. \end{theorem} Examples of primes satisfying the condition of Theorem \ref{th:xpp3ypp5zp} are $p=5$, $7$, $11$, $13$, $23$ and so on. Kraus~\cite{kraus1996equations} also proved that for \emph{all} primes $p\geq 5$, Eq.~\eqref{eq:xpp3ypp5zp} has no non-trivial integer solutions with $xyz$ divisible by $p$. Kraus~\cite{Kraus2002} later verified that for $5 \leq p <10^7$, Eq.~\eqref{eq:xpp3ypp5zp} has at least one local obstruction and therefore has no non-trivial integer solutions. Kraus~\cite{MR1611640} considered equations of the form \eqref{eq:xppbyppczp} with coefficients being powers of primes, and obtained the following results. \begin{theorem} \label{th1:MR1611640} Let $s$ be an odd prime number not of the form $s=2^k\pm 1$. Let $\alpha$ and $\beta$ be non-negative integers. If $p$ is a prime such that $p > \left(1+\sqrt{8(s+1)}\right)^{2(s-1)}$, $b=2^{\beta}$, $c=s^{\alpha}$ and if $\alpha$ is not a multiple of $p$, then there are no integer solutions to Eq.~\eqref{eq:xppbyppczp} with $xyz \neq 0$. \end{theorem} \begin{theorem} \label{th2:MR1611640} Let $s$ be an odd prime number not equal to $17$, and let $p \geq 5$ be a prime with $p \neq s$. If $p > \left(1+\sqrt{(s+1)/6}\right)^{(s+1)/6}$, $b=16$ and $c=s^{\alpha}$ with $\alpha \geq 0$, then there are no integer solutions to Eq.~\eqref{eq:xppbyppczp} with $xyz \neq 0$. \end{theorem} \subsection{General $a,b,c$} Dahmen and Yazdani~\cite{Dahmen_2012} considered Eq.~\eqref{eq:axppbypmczp} with $c=16$ and $p \geq 5$, that is, \begin{equation} \label{eq:axppbypm24zp} ax^p + by^p =16 z^p, \end{equation} and their results imply the following theorem. \begin{theorem} Let \begin{equation*} (a, b) \in \{(5^2,23^4), (5^8, 37), (5^7, 59^7), (7, 47^7),(11, (5\cdot 17)^2)\}. \end{equation*} Then for odd $p \geq 5$, Eq.~\eqref{eq:axppbypm24zp} has no non-zero integer solutions. 
\end{theorem} In 2002, Kraus~\cite[Proposition 2.3]{Kraus2002} considered Eq.~\eqref{eq:axppbypmczp} with $a=3$, $b=4$ and $c=5$, and proved the following theorem. \begin{theorem} Let $p$ be a prime number congruent to $13$ modulo $24$. Then the equation \begin{equation} \label{eq:3xpp4ypp5zp} 3x^p+4y^p=5z^p \end{equation} has no non-trivial integer solutions. \end{theorem} In 2016, Freitas and Kraus~\cite{FREITAS2016751} improved this result and obtained the following. \begin{theorem} Let $p \geq 5$ be a prime satisfying $p \equiv 5$ (mod 8) or $p \equiv 19$ (mod 24). Then Eq.~\eqref{eq:3xpp4ypp5zp} has no non-trivial integer solutions. \end{theorem} In the same paper, Freitas and Kraus~\cite{FREITAS2016751} also studied Eq.~\eqref{eq:axppbypmczp} with $a=3$, $b=8$ and $c=21$ and obtained the following result. \begin{theorem} Let $p \geq 5$ be a prime\footnote{The paper~\cite{FREITAS2016751} states the result for $p >5$, but direct verification shows that it remains correct for $p=5$.} satisfying $p \equiv 5$ (mod 8) or $p \equiv 23$ (mod 24). Then the equation \begin{equation*} 3x^p + 8y^p = 21z^p \end{equation*} has no non-trivial integer solutions. \end{theorem} Dieulefait and Soto~\cite{MR4203704} studied Eq.~\eqref{eq:axppbypmczp} under some restrictions on the prime factors of $abc$, and they proved the following results.\footnote{These results require $p$ to be sufficiently large; specifically, $p > G(a,b,c)$, where $G(a,b,c)$ is an explicit bound depending on $a,b,c$.} \begin{theorem} Assume that all prime factors of $abc$ are $1$ modulo $3$. Then the equation \begin{equation} \label{eq:axppbypm16czp} ax^p + by^p = 16cz^p \end{equation} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. \end{theorem} \begin{theorem} Let $n$ be a positive integer not dividing $14$, $16$ or $18$. Assume that all prime factors of $a,b,c$ are equal to $\pm 1$ modulo $n$. Then Eq.~\eqref{eq:axppbypm16czp} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. \end{theorem} \begin{theorem} Assume that all prime factors of $abc$ are $1$ modulo $12$. Then, for every integer $r\geq 0$, $r\neq 1$, the equation \begin{equation*} ax^p + by^p = 2^rcz^p \end{equation*} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. \end{theorem} \begin{theorem} \label{th:qoddprime} Let $q$ be an odd prime. Assume that $\gcd(a,b,c)=1$, all odd prime factors of $a,b,c$ are equal to $1$ modulo $4q$, and either $abc$ is odd or $4\vert bc$. Then Eq.~\eqref{eq:axppbypmczp} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. \end{theorem} In the same work, the authors~\cite{MR4203704} also proved the following result, which uses the Legendre symbol\footnote{For the definition, see~\cite[Sec. 1.4.2]{MR1228206}.} $\left(\frac{\cdot}{\cdot} \right)$. \begin{theorem} \label{th:jacobi} Let $q \geq 5$ and $l \geq 5$ be primes. Assume one of the following: \begin{itemize} \item[(i)] {$(q,l) \equiv (-5, 5)$} or $(11, -11)$ modulo $24$, \item[(ii)] {$q \equiv 11$} modulo $24$, $l \equiv 5$ modulo $24$ and $\left(\frac{q}{l}\right)=-1$, or \item[(iii)] {$q \equiv \pm 3$} modulo $8$, $l \equiv -1$ modulo $24$, $l \not \equiv -1$ modulo $q$. \end{itemize} Assume that $\gcd(a,b,c)=1$ and $\mathrm{rad}(abc) = ql$. If $n = 0$ or $n \geq 4$, then the equation \begin{equation*} ax^p + by^p + 2^ncz^p = 0 \end{equation*} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. 
If $r \geq 1$, then the equation \begin{equation*} ax^p + 2^rby^p + 2^rcz^p = 0 \end{equation*} has no solutions with $xyz \neq 0$ for all sufficiently large $p$. \end{theorem} Powell~\cite{POWELL198434} studied Eq.~\eqref{eq:axppbypmczp} and obtained the following result. \begin{theorem} Let $m$ be an even integer for which\footnote{Here $\phi$ denotes Euler's totient function.} $3\phi(m) > m$. If $n$ is a sufficiently large positive integer for which $mn + 1 = q$ is a prime, and $a,b,c$ are integers for which $a \pm b \pm c \neq 0$ and $a \neq \pm b$, $b \neq \pm c$, $a \neq \pm c$, then the equation \begin{equation} \label{eq:axnpbynmczn} ax^n + by^n = cz^n \end{equation} has no solutions in integers $x,y,z$ with $xyz \neq 0$. \end{theorem} The work~\cite{MR779375} then provided explicit bounds for what it means for $n$ to be sufficiently large, and obtained the following result. \begin{theorem} \label{th:powell} Let $a,b, c$ be non-zero integers such that $a\pm b \pm c \neq 0$, $a\pm b \neq 0$, $a \pm c \neq 0$, $b \pm c \neq 0$. \begin{itemize} \item[(a)] Let {$l$} be a prime, $l> |a|+|b|+|c|$, let $n \geq 1$ be an integer such that $2ln+1=q$ is a prime bigger than $(|a|+|b|+|c|)^{l-1}$. \item[(b)] Let {$m\geq 2$} be an even integer such that $3\phi(m) > m$, let $n \geq 1$ be an integer such that $mn +1 = q$ is a prime bigger than $(|a|+|b|+|c|)^{\phi(m)}$. \end{itemize} If (a) or (b) holds, then Eq.~\eqref{eq:axnpbynmczn} has no solutions in integers $x,y,z$ with $xyz \neq 0$. \end{theorem} Browning and Dietmann~\cite{MR2537095} proved that, for any fixed $n>1$, the set of integer triples $(a,b,c)$ for which Eq.~\eqref{eq:axnpbynmczn} is solvable in non-zero integers $(x,y,z)$ has density $0$. \section{Equations of signature $(p,p,r)$} In this section we discuss the case $p=q$ in \eqref{eq:genfermat}, that is, the equation \begin{equation} \label{eq:axppbypmczr} ax^p+by^p=cz^r, \end{equation} where $a,b,c$ are positive integers and $p\neq r$ are primes with $p \geq 3$. We discuss the works investigating infinite families of \eqref{eq:axppbypmczr} in which some of $a,b,c$ are fixed while $p$ or $r$ varies. \subsection{The case $a=b=1$} We start with the case $a=b=1$. In this instance Eq.~\eqref{eq:axppbypmczr} reduces to \begin{equation} \label{eq:genfermata1b1peq} x^p+y^p=cz^r. \end{equation} We remark that for Eq.~\eqref{eq:genfermata1b1peq} Proposition \ref{prop:paircop} is applicable if and only if $c$ is not divisible by the $p$th power of any prime. If this condition is satisfied, we may talk about solutions in pairwise coprime integers $(x, y,z)$. By symmetry, we may also assume that $x>y$. In 2023, Freitas and Najman~\cite{MR4683842} considered Eq.~\eqref{eq:genfermata1b1peq} where $p,r > 3$ are primes and $c$ is an odd positive integer that is not an $r$th power and satisfies $r \nmid c$. They proved that, for a set of prime exponents $p$ of positive density, the equation has no non-trivial primitive solutions $(x, y, z)$ such that $2 \mid x + y$ or $r \mid x + y$. In 2024, Kara, Mocanu and \"Ozman~\cite{kara2024} proved some results conditional on the weak Frey--Mazur conjecture~\cite[Conjecture 3.9]{kara2024} and the Eichler--Shimura conjecture~\cite[Conjecture 3.11]{MR1638494,kara2024}. \begin{theorem} Let $p > 5$ be a fixed prime and $c$ be a non-zero integer with $\gcd(c, p) = 1$. Assume the Weak Frey--Mazur Conjecture. 
Then, \begin{itemize} \item[(i)] If {$p \equiv 3$} modulo $4$, then there exists a (non-effective) constant ${\mathcal{B}}_p$ such that for any prime $r$ with $r > {\mathcal{B}}_p$, Eq.~\eqref{eq:genfermata1b1peq} has no non-trivial primitive integer solutions. \item[(ii)] If {$p \equiv 1$} modulo $4$, then there exists a (non-effective) constant ${\mathcal{B}}'_p$ such that for any prime $r$ with $r > {\mathcal{B}}'_p$, Eq.~\eqref{eq:genfermata1b1peq} has no non-trivial primitive integer solutions $(x, y, z)$ where $p \vert z$. \end{itemize} \end{theorem} \begin{theorem} Let $p > 5$ be a fixed prime such that $p \equiv 1$ modulo $4$ and $c$ be a non-zero integer with $\gcd(c, p) = 1$. Assume the Weak Frey-Mazur Conjecture and the Eichler-Shimura Conjecture. Then there exists a (non-effective) constant ${\mathcal{B}}''_p$ such that for any prime $r$ with $r > {\mathcal{B}}''_p$, Eq.~\eqref{eq:genfermata1b1peq} has no non-trivial primitive integer solutions. \end{theorem} \subsubsection{Eq.~\eqref{eq:genfermata1b1peq} with fixed $p$} In this section, we consider Eq.~\eqref{eq:genfermata1b1peq} with $p\geq 3$ fixed while $r$ varies; we will not assume that $p$ is a prime. We start with the case $p=3$. Eq.~\eqref{eq:genfermata1b1peq} with $p=3$, $c=2$ and $r$ even was solved by Zhang and Bai~\cite{zhang2014}. \begin{theorem} \label{th:zhang2014} Let $r \geq 2$ be an integer. Then the equation \begin{equation*} x^3+y^3=2z^{2r} \end{equation*} has no integer solutions with $x,y$ coprime other than $(x,y,z)=(1,1,\pm 1)$, $\pm (1,-1,0)$. \end{theorem} Eq.~\eqref{eq:genfermata1b1peq} with $p=3$, $c=s^\alpha$ with $s$ prime, $\alpha \geq 1$ and $r\geq 4$, that is, \begin{equation} \label{eq:x3y3calphazr} x^3+y^3=s^\alpha z^r \end{equation} has been studied by several authors. In 2006, Mulholland~\cite{mulholland2006elliptic} obtained the following result. \begin{theorem} \label{th:mulholland2006} There is an explicit set $T$ of primes containing a positive proportion of all primes,\footnote{Specifically, $T$ is the set of primes $s$ for which there does not exist an elliptic curve with rational 2-torsion and conductor $2^M 3^{2}s$, $1 \leq M \leq 3$.} such that for every $s \in T$, every integer $\alpha \geq 1$, and every prime $r$ satisfying $r \geq s^{8s}$ and $r \nmid \alpha$, Eq.~\eqref{eq:x3y3calphazr} has no solutions in coprime non-zero integers $x,y,z$. \end{theorem} Theorem \ref{th:mulholland2006} has been significantly strengthened by Bennett, Luca and Mulholland~\cite{bennett2011twisted}. \begin{theorem} \label{th:bennett2011} There exists a set ${\mathcal{W}}_3$ of primes, with $\#\{s\leq x: s\in {\mathcal{W}}_3\} \leq K\sqrt{x}\log^2(x)$ for some $K>0$, such that for every prime $s\geq 5$ with $s \not \in {\mathcal{W}}_3$, any $\alpha\geq 1$, and every prime $r$ satisfying $r\geq s^{2s}$, Eq.~\eqref{eq:x3y3calphazr} has no solutions in coprime non-zero integers $x,y,z$. \end{theorem} Because the number of primes up to $x$ is about $\frac{x}{\log x}$, the set ${\mathcal{W}}_3$ contains a negligible proportion of all primes, hence Theorem \ref{th:bennett2011} solves Eq.~\eqref{eq:x3y3calphazr} for almost all primes $s$. There is an algorithm for checking whether any particular prime belongs to ${\mathcal{W}}_3$. For example, it is known~\cite{bennett2011twisted} that all primes $s$ congruent to $317$ or $1757$ modulo $2040$ do not belong to ${\mathcal{W}}_3$, hence Theorem \ref{th:bennett2011} is applicable to all such primes.
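To illustrate the last remark, the following short Python script (our own illustration, not taken from~\cite{bennett2011twisted}) tests this sufficient congruence condition. A \texttt{True} answer certifies that $s \notin {\mathcal{W}}_3$, so that Theorem \ref{th:bennett2011} applies to Eq.~\eqref{eq:x3y3calphazr} for every prime $r \geq s^{2s}$; a \texttt{False} answer is inconclusive, because the congruence test is sufficient but not necessary.
\begin{verbatim}
# Sufficient test for s lying outside W_3: s prime and
# s = 317 or 1757 (mod 2040).  A False answer is inconclusive.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def outside_W3_by_congruence(s):
    return is_prime(s) and s % 2040 in (317, 1757)

for s in (317, 2357, 3797, 53, 1757):
    print(s, outside_W3_by_congruence(s))
# 317, 2357 and 3797 pass the test; 53 does not (it lies in neither
# congruence class), and 1757 itself is excluded because 1757 = 7 * 251
# is not prime, even though its class modulo 2040 contains primes.
\end{verbatim}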
Theorem \ref{th:bennett2011} is not applicable for primes $s \in {\mathcal{W}}_3$. In 2018, Bennett, Bruni and Freitas~\cite{MR3830208} proved a more general version of this theorem with a smaller exceptional set. Let \begin{equation*} S_1 = \{q \text{ prime} : q = 2^a 3^b \pm 1, a \in \{2, 3\} \text{ or } a \geq 5, b \geq 0\}, \end{equation*} \begin{equation*} S_2 = \{q \text{ prime} : q = | 2^a \pm 3^b| , a \in \{2, 3\} \text{ or } a \geq 5, b \geq 1\}, \end{equation*} \begin{equation*} S_3 = \left\{q \text{ prime} : q = \frac{1}{3} (2^a+ 1),a \geq 5 \text{ odd}\right\}, \end{equation*} \begin{equation*} S_4 = \{q \text{ prime} : q = d^2+ 2^a 3^b, a \in \{2, 4\} \text{ or } a \geq 8 \text{ even}, b \text{ odd}\}, \end{equation*} \begin{equation*} S_5 = \{q \text{ prime} : q = 3d^2+2^a, a \in \{2, 4\} \text{ or } a \geq 8\text{ even}, d \text{ odd}\}, \end{equation*} \begin{equation*} S_6 = \left\{q \text{ prime} : q = \frac{1}{4} (d^2+3^b), d \text{ and } b \text{ odd}\right\}, \end{equation*} \begin{equation*} S_7 = \left\{q \text{ prime} : q = \frac{1}{4} (3d^2+1), d \text{ odd}\right\}, \end{equation*} \begin{equation*} S_8 = \left\{q \text{ prime} : q = \frac{1}{2} (3v^2-1), u^2-3v^2=-2 \right\}, \end{equation*} where $a,b,u,v$ and $d$ are integers, and let \begin{equation} \label{eq:S0def} S_0 = S_1 \cup S_2 \cup S_3 \cup S_4 \cup S_5 \cup S_6 \cup S_7 \cup S_8. \end{equation} \begin{theorem} \label{th:bennett2017} For every prime $s\geq 5$ such that $s \not \in S_0$, any $\alpha\geq 1$, and any prime $r$ satisfying $r\geq s^{2s}$, Eq.~\eqref{eq:x3y3calphazr} has no solutions in coprime non-zero integers $x,y,z$. \end{theorem} As mentioned in~\cite{MR3830208}, $S_0 \subset {\mathcal{W}}_3$, hence Theorem \ref{th:bennett2017} is a generalization of Theorem \ref{th:bennett2011}. The smallest examples of primes in ${\mathcal{W}}_3 \setminus S_0$ are \fontsize{7.5pt}{10.5pt}\selectfont\begin{equation*} s = 53, 83, 149, 167, 173, 199, 223, 227, 233, 263, 281, 293, 311, 347, 353, 359, \dots \end{equation*}\normalsize Examples of infinite families outside $S_0$ are primes $s$ equal to $53$ modulo $96$, $120$ or $144$, or primes $s$ equal to $65$ modulo $81$ or $84$. So, Theorem \ref{th:bennett2017} is applicable to all such primes. Bennett, Bruni and Freitas~\cite{MR3830208} considered the equation \begin{equation} \label{eq:x3py3mszr} x^3+y^3=sz^r \end{equation} where $s$ and $r$ are primes. For small primes $s \in S_0$, they determined~\cite[Theorem 7.2]{MR3830208} the modularity conditions on $r$ such that for $r \geq s^{2s}$ the equation has no primitive integer solutions, see Table \ref{tb:MR3830208}. 
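Since Theorem \ref{th:bennett2017} applies exactly to the primes outside $S_0$, it is worth noting that membership in $S_0$ can be tested mechanically. The Python script below is our own illustration of such a test and is not taken from~\cite{MR3830208}. For $S_1$ and $S_3$--$S_8$ the search is exhaustive, since every quantity involved is bounded by a small multiple of $q$; for the difference case $|2^a-3^b|=q$ of $S_2$ we impose an ad hoc cut-off on the exponent $a$, so for that case alone a negative answer depends on this assumption. Within these caveats, the script reports that $5$, $11$ and $13$ belong to $S_0$ (they lie in $S_1$), while it finds no representation placing $53$, $83$ or $149$ in $S_0$, consistent with the list of primes in ${\mathcal{W}}_3 \setminus S_0$ given above.
\begin{verbatim}
# Illustrative membership test for S_0 = S_1 U ... U S_8 (not the
# authors' code).  The search is exhaustive for S_1 and S_3-S_8;
# for the case |2^a - 3^b| = q of S_2 an ad hoc exponent cut-off
# A_MAX is assumed.
from math import isqrt

A_MAX = 200  # assumed cut-off for the unbounded case of S_2

def is_sq(n):
    return n >= 0 and isqrt(n) ** 2 == n

def ok12(a):  # exponent condition in S_1 and S_2
    return a in (2, 3) or a >= 5

def ok45(a):  # exponent condition in S_4 and S_5
    return a in (2, 4) or (a >= 8 and a % 2 == 0)

def in_S0(q):
    # S_1: q = 2^a 3^b +/- 1 (b >= 0)
    for a in range(2, q.bit_length() + 2):
        if ok12(a):
            m = 2 ** a
            while m <= q + 1:
                if q in (m - 1, m + 1):
                    return True
                m *= 3
    # S_2: q = 2^a + 3^b or q = |2^a - 3^b| (b >= 1)
    for a in range(2, A_MAX):
        if ok12(a):
            t = 3
            while t <= 2 ** a + q:
                if 2 ** a + t == q or abs(2 ** a - t) == q:
                    return True
                t *= 3
    # S_3: 3q = 2^a + 1 with a >= 5 odd
    a = (3 * q - 1).bit_length() - 1
    if a >= 5 and a % 2 == 1 and 2 ** a + 1 == 3 * q:
        return True
    # S_4: q = d^2 + 2^a 3^b (b odd);  S_5: q = 3d^2 + 2^a (d odd)
    for a in range(2, q.bit_length() + 1):
        if not ok45(a):
            continue
        m = 3 * 2 ** a
        while m < q:                      # S_4
            if is_sq(q - m):
                return True
            m *= 9
        r = q - 2 ** a                    # S_5
        if r > 0 and r % 3 == 0 and is_sq(r // 3):
            if isqrt(r // 3) % 2 == 1:
                return True
    # S_6: 4q = d^2 + 3^b with d and b odd
    t = 3
    while t < 4 * q:
        if is_sq(4 * q - t) and isqrt(4 * q - t) % 2 == 1:
            return True
        t *= 9
    # S_7: 4q = 3d^2 + 1 with d odd
    r = 4 * q - 1
    if r % 3 == 0 and is_sq(r // 3) and isqrt(r // 3) % 2 == 1:
        return True
    # S_8: 2q = 3v^2 - 1 with u^2 - 3v^2 = -2
    if (2 * q + 1) % 3 == 0:
        v2 = (2 * q + 1) // 3
        if is_sq(v2) and is_sq(3 * v2 - 2):
            return True
    return False

print([q for q in (5, 11, 13, 53, 83, 149) if in_S0(q)])  # [5, 11, 13]
\end{verbatim}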
\begin{table}\begin{center} \caption{\label{tb:MR3830208}Modularity conditions on $r$ for small primes $s$, such that Eq.~\eqref{eq:x3py3mszr} has no primitive integer solutions if $r \geq s^{2s}$.} \begin{tabular}{|c|c||c|c|} \hline $s$ & Condition on $r$ & $s$ & Condition on $r$ \\\hline\hline 5 & 13,19 or 23 (mod 24) & 47 & $5, 11, 13, 17, 19$ or 23 (mod 24) \\\hline 11 & $13, 17, 19$ or 23 (mod 24) & 59 & $5, 7, 11, 13, 19$ or 23 (mod 24) \\\hline 13 & 11 (mod 12) & 67 & $7, 11, 13, 29, 37, 41, 43,59, $ \\ &&& $ 67, 71, 89, 101$ or 103 (mod 120) \\\hline 17 & 5,17 or 23 (mod 24) & 71 & 5 (mod 6) \\\hline 23 & 19 or 23 (mod 24) & 73 & 41,71 or 89 (mod 120) \\\hline 29 & $7, 11, 13, 17, 19$ or 23 (mod 24) & 79 & $5, 7, 11, 13, 19$ or 23 (mod 24) \\\hline 31 & 5 or 11 (mod 24) & 89 & $13, 17, 19$ or 23 (mod 24) \\\hline 41 & $5, 7, 11, 17, 19$ or 23 (mod 24) & 97 & 11 (mod 12) \\\hline \end{tabular} \end{center} \end{table} In the case $s=5$, the authors were able to remove the condition $r \geq s^{2s}$ and prove the following theorem. \begin{theorem} If $r$ is prime with $r \equiv 13, 19$ or $23$ (mod $24$), then there are no non-trivial primitive integer solutions $(x,y,z)$ to the equation \begin{equation*} x^3+y^3=5z^r. \end{equation*} \end{theorem} Set $S_0$ contains all primes of the form $s=2^a 3^b-1$ for integers $a\geq 5$ and $b\geq 1$, hence these primes are not covered by Theorem \ref{th:bennett2017}. For such primes, we have the following result~\cite{MR3830208}, which uses a Jacobi symbol\footnote{For the definition and its main properties, see~\cite[Section 1.4.2]{MR1228206}.} condition. \begin{theorem} \label{th2:bennett2017} Suppose that $s = 2^a 3^b-1$ is prime, where $a \geq 5$ and $b \geq 1$ are integers and let $\alpha \geq 1$. If $r > s^{2s}$ is prime and integers $(\alpha,r,a,b)$ do \emph{not} satisfy the condition \begin{equation} \label{cd:th2:bennett2017} \left( \frac{\alpha}{r} \right) = \left( \frac{4-a}{r} \right) = \left( \frac{-6b}{r} \right), \end{equation} then Eq.~\eqref{eq:x3y3calphazr} has no non-trivial primitive solutions. \end{theorem} Another result from~\cite{MR3830208} is the following one. \begin{theorem} \label{th3:bennett2017} Let $T = S_7 \cup \{q \text{ prime} : q = 3d^2 + 16, d \in \mathbb{Z}\}$. If $s$ is a prime with $s \not \in T$, then, for a positive proportion of primes $r$, there are no solutions to equation \begin{equation*} x^3+y^3=sz^r \end{equation*} in coprime non-zero integers $x$, $y$ and $z$. \end{theorem} We next discuss Eq.~\eqref{eq:genfermata1b1peq} with $p=4$ and $c,r$ primes, that is \begin{equation} \label{eq:x4y4czr} x^4+y^4 = cz^r. \end{equation} If $c$ is not equal to $1$ modulo $8$, then it is easy to show (see~\cite{MR2139003}) that \eqref{eq:x4y4czr} has no solutions modulo $c$ other than $(0,0,0)$ and therefore has no primitive integer solutions for any $r$. Hence, it suffices to consider the case $c \equiv 1(\text{mod}\, 8)$. We first discuss Eq.~\eqref{eq:x4y4czr} with $c=73,89$ or $113$ and $r >13$. Dieulefait~\cite{MR2139003} proved the following theorem. \begin{theorem} If $r$ is a prime with $r >13$, then Eq.~\eqref{eq:x4y4czr} with $c = 73, 89$ or $113$ does not have any non-trivial primitive integer solutions. \end{theorem} The techniques of~\cite{MR2139003} are applicable only to prove that an equation of the form \eqref{eq:x4y4czr} has no primitive integer solutions. 
If $c=A^4+B^4$ for some integers $A,B$, then the equation has an integer solution with $z=1$, so the techniques of~\cite{MR2139003} are not applicable. For a different reason, the techniques are also not applicable if $c=(2A)^4 + B^2$ for some integers $A,B$. The values $c = 73, 89, 113$ are the first three values of $c$ satisfying $c \equiv 1(\text{mod}\, 8)$ and not belonging to these excluded families. We next discuss Eq.~\eqref{eq:genfermata1b1peq} with $p=5$, that is, \begin{equation} \label{eq:x5py5mczr} x^5+y^5=cz^r. \end{equation} In 2015, Bruni~\cite{bruni2015twisted} proved a version of Theorem \ref{th:bennett2011} with exponent $5$ instead of $3$. \begin{theorem} \label{th:bruni2015} There exists a set ${\mathcal{W}}_5$ of primes, with $\#\{s\leq x: s\in {\mathcal{W}}_5\} \leq K\sqrt{x}\log^2(x)$ for some $K>0$, such that for every prime $s\geq 5$ such that $s \not \in {\mathcal{W}}_5$, any $\alpha\geq 1$, and any prime $r$ satisfying $r\geq s^{13s}$, equation \begin{equation} \label{eq:x5y5calphazr} x^5+y^5=s^\alpha z^r \end{equation} has no solutions in coprime non-zero integers $x,y,z$. \end{theorem} Bruni~\cite{bruni2015twisted} also developed methods for solving Eq.~\eqref{eq:x5y5calphazr} for certain primes $s$ belonging to ${\mathcal{W}}_5$. In fact, he solved Eq.~\eqref{eq:x5y5calphazr} for all primes $s$ outside the specific families listed in Table 7.6 of~\cite{bruni2015twisted}. For Eq.~\eqref{eq:x5py5mczr} with $c=2$, that is, \begin{equation} \label{eq:x5py5mz2} x^5+y^5=2z^r, \end{equation} we have the following partial result. \begin{theorem}[\cite{signature2019multi}] For all primes $r$, Eq.~\eqref{eq:x5py5mz2} has no non-trivial primitive integer solutions $(x,y,z)$ such that $z$ is divisible by either $2$, $5$, or $r$. \end{theorem} Eq.~\eqref{eq:x5py5mczr} with $c=3$, that is, \begin{equation} \label{eq:sig55pc3} x^5+y^5=3z^r \end{equation} has been solved for certain values of $r$ in~\cite{Billerey_2007,billerey2008solvingfermattypeequationsx5y5dzp}, and then for all values of $r$ by Billerey, Chen, Dieulefait and Freitas~\cite{signature2019multi}. \begin{theorem} \label{th:billerey2024} Eq.~\eqref{eq:sig55pc3} has no non-trivial primitive integer solutions for any integer $r \geq 2$. \end{theorem} We next discuss Eq.~\eqref{eq:x5py5mczr} for other values of $c$. In 2013, Dieulefait and Freitas~\cite{44370229} proved the following theorem. \begin{theorem} Let $s$ be an integer divisible only by primes $s_i \not \equiv 1$ (mod 5). \begin{itemize} \item[(a)] For any prime {$r>13$} such that $r \equiv 1$ (mod 4), or $r \equiv \pm 1$ (mod 5), Eq.~\eqref{eq:x5py5mczr} with $c=2 s$ has no non-trivial integer solutions with $\gcd(x,y)=1$. \item[(b)] For any prime {$r>73$} such that $r \equiv 1$ (mod 4), or $r \equiv \pm 1$ (mod 5), Eq.~\eqref{eq:x5py5mczr} with $c=3 s$ has no non-trivial integer solutions with $\gcd(x,y)=1$. \end{itemize} \end{theorem} In 2007, Billerey~\cite{Billerey_2007} proved the following result. \begin{theorem} Let $c=2^{\alpha} \cdot 5^{\gamma}$ with $2\leq \alpha \leq 4$ and $0 \leq \gamma \leq 4$ and let $r$ be a prime satisfying $r\geq 7$. Then, Eq.~\eqref{eq:x5py5mczr} has no non-trivial coprime integer solutions. \end{theorem} Billerey and Dieulefait~\cite{billerey2008solvingfermattypeequationsx5y5dzp} studied the cases of $c=7$ and $c=13$ in \eqref{eq:x5py5mczr} and proved the following results. 
\begin{theorem} If either (i) $c=7$ and $r\geq 13$ or (ii) $c=13$ and $r \geq 19$, then Eq.~\eqref{eq:x5py5mczr} does not have any non-trivial primitive integer solutions. \end{theorem} In the same paper~\cite{billerey2008solvingfermattypeequationsx5y5dzp}, the authors also proved that Eq.~\eqref{eq:x5py5mczr} with $c=2^{\alpha} 3^{\beta} 5^{\gamma}$, that is, \begin{equation*} x^5+y^5= 2^{\alpha} 3^{\beta} 5^{\gamma} z^r \end{equation*} with $\alpha \geq 2$, $\beta,\gamma$ arbitrary and $r \geq 13$, has no integer solutions with $\gcd(x,y)=1$ and $z \neq 0$. Work~\cite{Billerey_2007} studies this equation for primes $r \geq 7$ and provides sufficient conditions for $r$, $\alpha,\beta$ and $\gamma$ for which the equation has no non-trivial coprime integer solutions. None of the theorems above include the case when $c$ is divisible by $11$ in \eqref{eq:x5py5mczr}. Noubissie~\cite{noubissie2020generalized} considered Eq.~\eqref{eq:x5py5mczr} with $c=2^{\alpha}11$, that is, \begin{equation} \label{eq:x5py5m2a11zr} x^5+y^5= 2^{\alpha}11 z^r \end{equation} and proved the following theorem. \begin{theorem} \label{th:Noubissie} Let $r >19$ be prime and $\alpha \geq 2$. Then Eq.~\eqref{eq:x5py5m2a11zr} has no non-trivial primitive integer solutions. \end{theorem} Noubissie~\cite{noubissie2020generalized} also proved some results in the cases $\alpha=0$ and $\alpha=1$ which are not included in Theorem \ref{th:Noubissie}. If $\alpha =0$, the resulting equation $x^5+y^5= 11 z^r$ has no non-trivial primitive solutions with $x\equiv 2(\text{mod}\, 4)$, while if $\alpha=1$ then the resulting equation $x^5+y^5= 22 z^r$ has no non-trivial primitive solutions with $z$ even. Let us now discuss Eq.~\eqref{eq:genfermata1b1peq} with $p>5$. In case $p=6$, Kraus~\cite{kraus2002question} proved the following result. \begin{theorem} Let $r\geq 5$ be a prime number. Then equation \begin{equation*} x^6+y^6=2z^r \end{equation*} has no non-trivial primitive integer solutions with $| xyz| \neq 1$. \end{theorem} For odd $p$, equation \begin{equation} \label{eq:kraus2002} x^p-y^p=cz^r \end{equation} reduces to \eqref{eq:genfermata1b1peq} by the trivial change of variables $y \to -y$. For even $p$, these equations are not equivalent, and \eqref{eq:kraus2002} must be considered separately. Kraus~\cite{kraus2002question} studied Eq.~\eqref{eq:kraus2002} with $p$ being an integer in the range $6\leq p \leq 12$. To state the results, we must first introduce the following notation. Given an integer $N \geq 1$, we denote by ${\mathcal{D}}_N$ the set of integers $A \geq 1$ which satisfy both conditions \begin{itemize} \item[(i)] for any prime number {$l$} we have\footnote{For a prime $l$, $\nu_l(A)$ is the $l$-adic valuation of $A$, that is, the highest power of $l$ that divides $A$.} $\nu_l(A) < N$, and, \item[(ii)] for any prime divisor {$l$} of $A$ we have $l \not \equiv 1$ modulo $N$. \end{itemize} We may now state the theorems proved by Kraus~\cite{kraus2002question}. \begin{theorem} \label{th:kraus2002} \begin{itemize} \item[(1)] Suppose we have \begin{equation*} (p,r) \in \{(6,2),(8,2),(8,3),(9,3),(10,2),(12,2) \}. \end{equation*} Then, for any integer $c$ belonging to ${\mathcal{D}}_p$, Eq.~\eqref{eq:kraus2002} has no non-trivial primitive integer solutions. \item[(2)] Suppose we have {$p=5$} or $p=7$. Let $c$ be an integer divisible by $p$ and belonging to ${\mathcal{D}}_p$. Then, Eq.~\eqref{eq:kraus2002} with $r=2$ has no non-trivial primitive integer solutions. 
\end{itemize} \end{theorem} \begin{theorem} Let $c \geq 1$ be an integer such that for any prime divisor $l$ of $c$, we have $\nu_l(c) < 12$ and $l \not \equiv 1$ modulo $4$. Then, for any integer $r \geq 3$, the equation \begin{equation*} x^{12}-y^{12}=cz^r \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} We now return to Eq.~\eqref{eq:genfermata1b1peq}, and next discuss it with $p=7$. Let $s$ be an integer only divisible by primes $s_i \not \equiv 0,1$ modulo $7$. In 2015, Freitas~\cite{freitas2015recipes} proved that if either (i) $\alpha \geq 2$, $\beta \geq 0$ and $\gamma \geq 0$, or (ii) $\alpha =1$, $\beta \geq 1$ and $\gamma \geq 0$, or (iii) $\alpha= 0$, $\beta \geq 0$ and $\gamma \geq 1$, then Eq.~\eqref{eq:genfermata1b1peq} with $p=7$, $r \geq 17$ and $c=2^{\alpha} 3^{\beta} 5^{\gamma} s$ has no integer solutions where $z$ is divisible by $7$ and $| xyz| > 1$. In the same paper, Freitas~\cite{freitas2015recipes} also proved the following result. \begin{theorem} If $\alpha \geq 2$ or $\beta >0$ or $\gamma >0$, then Eq.~\eqref{eq:genfermata1b1peq} with $p=14$, $r \geq 17$ and $c=2^{\alpha} 3^{\beta} 5^{\gamma} s$ (where $s$ is an integer only divisible by primes $s_i \not \equiv 0,1$ modulo $7$), has no primitive integer solutions with $| xyz| > 1$. \end{theorem} In 2024, the authors of~\cite{billerey2024darmonsprogramgeneralizedfermat} used the Darmon program~\cite{darmon2000rigid} to solve Eq.~\eqref{eq:genfermata1b1peq} for $(p,c)=(7,3)$. \begin{theorem} For any integer $r\geq 2$, equation \begin{equation*} x^7+y^7=3z^r \end{equation*} has no non-trivial primitive integer solutions. \end{theorem} The case $(p,r)=(9,2)$ of \eqref{eq:genfermata1b1peq} has been studied by Cel~\cite{MR995897}, who proved the following result. \begin{theorem} For any $n \geq 0$, the equation \begin{equation*} x^9+y^9=2^nz^2 \end{equation*} has no non-trivial primitive integer solutions with $z \neq \pm 1$. \end{theorem} We next discuss the case $p=13$ in \eqref{eq:genfermata1b1peq}. In 2014, Siksek and Freitas~\cite{freitas2014criteriairreducibilitymodp}, improving an earlier result of Dieulefait and Freitas~\cite{FreitasDieulefait2013}, proved the following theorem. \begin{theorem} Let $s = 3, 5, 7$ or $11$ and $\gamma$ be an integer divisible only by primes $\gamma_i \not \equiv 1$ (mod 13). Let also \begin{equation} \label{def:R} R \coloneq \{2, 3, 5, 7, 11, 13, 19, 23, 29, 71\}. \end{equation} If $r$ is a prime not belonging to $R$, then: \begin{itemize} \item[(i)] Eq.~\eqref{eq:genfermata1b1peq} with {$p=13$} and $c=s \gamma$ has no primitive integer solutions $(x,y,z)$ such that $13 \nmid z$ and $| xyz| > 1$. \item[(ii)] Eq.~\eqref{eq:genfermata1b1peq} with {$p=26$} and $c=10 \gamma$ has no primitive integer solutions with $| xyz| > 1$. \end{itemize} \end{theorem} Billerey, Chen, Dieulefait, and Freitas~\cite{signature2019multi} proved that Eq.~\eqref{eq:genfermata1b1peq} with $c=3$, $p=13$ and $r \geq 2$ where $r$ is not a power of $7$ has no non-trivial primitive integer solutions. In a later paper, Billerey, Chen, Demb\'el\'e, Dieulefait, and Freitas~\cite{10.5565/PUBLMAT6722309} extended this result to all integers $r \geq 2$. \begin{theorem} Equation \begin{equation*} x^{13}+y^{13}=3z^r \end{equation*} has no non-trivial primitive integer solutions for all integers $r \geq 2$. \end{theorem} \subsubsection{Eq.~\eqref{eq:genfermata1b1peq} with fixed $r$} We now consider Eq.~\eqref{eq:genfermata1b1peq} with $r$ fixed.
We start with the case $r=2$, that is, equation \begin{equation} \label{eq:xppypmcz2} x^p+y^p=cz^2. \end{equation} In 1977, Terjanian~\cite{00efa9d9-bb1b-3dd3-b2cb-1761e27ea5c8} proved for an odd prime $p$ and integer $c \geq 2$ with no square factor or a prime divisor of the form $2kp+1$, if $(x,y,z)$ is a non-trivial primitive integer solution to \eqref{eq:xppypmcz2} with even $z$, then $z$ must be divisible by $p$. In 2011, S\"oderlund and Ald\'en~\cite{soderlund2011diophantine} studied Eq.~\eqref{eq:xppypmcz2} with $p=5$ and proved the following result. \begin{theorem} \label{th:soderlund2011} If $\alpha = 0$ or $\alpha \geq 2$, $\beta \geq 0$, $\gamma \geq 0$, $s$ is a prime equal to $3$ or $7$ modulo $10$, then the equation \begin{equation} \label{eq:x5py5mcz2} x^5+y^5=cz^2 \end{equation} with $c = 2^\alpha 5^\beta s^\gamma$ has no non-trivial primitive integer solutions. \end{theorem} The examples of values of $c$ covered by Theorem \ref{th:soderlund2011} are \begin{equation*} c = 1, 3, 5, 7, 8, 13, 15, 17, 23, 24, 35, 37, 40, 43, 47, 53, 56, 65, 67, 73, \dots \end{equation*} Theorem \ref{th:soderlund2011} does not include the case $\alpha=1$. In the follow-up work~\cite{soderlund2013note}, the same authors included this case as well, and thus solved Eq.~\eqref{eq:x5py5mcz2} for $c=10$ and $c=2s$ where $s$ is a prime equal to $3$ or $7$ modulo $10$. Bennett and Skinner~\cite{Bennett_Skinner_2004} considered Eq.~\eqref{eq:xppypmcz2} with $p \geq 4$, and they obtained the following result. \begin{theorem} \label{th:BennettSkinner2004} If $p \geq 4$ and $c \in \{2, 3, 5, 6, 10, 11, 13, 17 \}$ then Eq.~\eqref{eq:xppypmcz2} has no solutions in non-zero pairwise coprime integers $(x, y,z)$ with $x > y$, unless $(p,c) = (4, 17)$ or \begin{equation*} (p,c, x, y,z) \in \{(5, 2, 3, -1, \pm 11),(5, 11, 3, 2, \pm 5),(4, 2, 1, -1, \pm 1)\}. \end{equation*} \end{theorem} Bennett and Skinner~\cite{Bennett_Skinner_2004} state that with further computation, the results can be extended to cover the additional cases $c=14,15$ and $19$. Some cases of Theorem \ref{th:BennettSkinner2004} rely on results proven by Bruin in~\cite{Bruin2006}. Eq.~\eqref{eq:xppypmcz2} has also been studied by Bennett and Mulholland~\cite{bennett2006diophantine}, who proved that if $p$ and $c \geq 5$ are primes with $c \neq 7$ and $p > c^{12c^2}$, then \eqref{eq:xppypmcz2} has no primitive solutions with $| xy| > 1$ and $z$ even. They also considered the case $c=2s$ where $s \geq 5$ is prime, and proved the following theorem. \begin{theorem} If $p$ and $s \geq 5$ are primes such that $p > s^{132s^2}$, then the equation \begin{equation*} x^p + y^p = 2sz^2 \end{equation*} has no primitive integer solutions with $| xy| > 1$. \end{theorem} Bennett and Mulholland~\cite{mulholland2006elliptic} studied Eq.~\eqref{eq:xppypmcz2} with coefficient $c=2^{\alpha}s$ where $s$ is prime and $\alpha \geq 1$, and proved the following theorem. \begin{theorem} Let $s \neq 7$ be prime and $\alpha \geq 1$. If prime $p$ satisfies $p > s^{27s^2}$, then the equation \begin{equation} \label{eq:xppypm2asz2} x^p + y^p = 2^{\alpha}sz^2 \end{equation} has no primitive integer solutions. \end{theorem} D\c abrowski~\cite{MR2737959} proved a more general version of this theorem, which does not restrict $s$ to being a prime. \begin{theorem} Let $s$ be an odd square-free positive integer with $\gcd(s,21)=1$ and $\alpha \geq 1$. If prime $p$ satisfies $p >s^{132s^2}$, then Eq.~\eqref{eq:xppypm2asz2} has no primitive integer solutions. 
\end{theorem} Ivorra~\cite{MR2310336} studied Eq.~\eqref{eq:xppypmcz2} for $p \in \{11,13,17 \}$ and obtained the following theorem. \begin{theorem} Let $p \in \{11,13,17 \}$ and let $c \geq 3$ be an integer without square factors and for any prime divisor $l$ of $c$ we have $l \not \equiv 1$ modulo $p$. Then Eq.~\eqref{eq:xppypmcz2} has no non-trivial primitive integer solutions. \end{theorem} Bennett, Vatsal and Yazdani~\cite{bennett2004ternary} studied Eq.~\eqref{eq:genfermata1b1peq} with $r=3$, and proved the following theorems. \begin{theorem} \label{th2:bennett2004} If $c$ and $p$ are integers with $2 \leq c \leq 5$ and $p \geq 4$, then the equation \begin{equation} \label{eq:xpypcz3} x^p + y^p = cz^3 \end{equation} has no solutions in coprime non-zero integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} \begin{theorem} \label{th:bennett2004} If $s$ and $p$ are primes such that $p > s^{4s^2}$, and $\alpha$ is a non-negative integer, then the equation \begin{equation*} x^p + y^p = s^\alpha z^3 \end{equation*} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} Krawci\'ow~\cite{KarolinaKrawciow2011} studied Eq.~\eqref{eq:xpypcz3} where $c$ is divisible by $3$ and obtained the following result. \begin{theorem} \label{th:Krawciow2011} Let $p$ be a prime number, and let $c$ be a non-zero cube-free integer which is divisible by $3$. Let $C=\mathrm{rad}(c)$, then if $p > C^{10C^2}$, Eq.~\eqref{eq:xpypcz3} has no non-trivial primitive integer solutions. \end{theorem} \subsection{The case $a=c=1$} In this case, Eq.~\eqref{eq:axppbypmczr} is of the form \begin{equation} \label{eq:genfermata1c1ppr} x^p+by^p=z^r. \end{equation} Eq.~\eqref{eq:genfermata1c1ppr} with $r=2$, $b=2^{\alpha}$ for $\alpha \geq 2$, and $p \geq 7$ prime was studied by Cohen~\cite[Section 15.3]{MR2312338} who obtained the following result. \begin{theorem} \label{th:xpp2aypmz2} The only non-zero pairwise coprime solutions to \begin{equation} \label{eq:xpp2aypmz2} x^p+ 2^{\alpha}y^p = z^2 \end{equation} for $\alpha \geq 2$ and $p\geq 7$ prime are for $\alpha=3$, for which $(x,y,z)=(1,1,\pm 3)$ is a solution for all $p$. \end{theorem} Theorem \ref{th:xpp2aypmz2} describes pairwise coprime solutions to \eqref{eq:xpp2aypmz2} under the stated conditions, but this equation can have primitive solutions that are not pairwise coprime, e.g.~$(x, y,z) = (2, 1,3 \cdot 2^{(p-3)/2})$ for $\alpha = p - 3$, see~\cite{ivorra2003equations}. In 2004, Ivorra~\cite[Chapter 1]{ivorra2004equations} proves that there are no other such examples, which completely solves Eq.~\eqref{eq:xpp2aypmz2} for $\alpha \geq 2$ and $p\geq 7$. Siksek also independently proved the same result in~\cite{MR2142239}. Theorem \ref{th:xpp2aypmz2} excludes the case where $p=5$. In this case, Ivorra~\cite{ivorra2003equations} proved the following result. \begin{theorem} If $1 \leq \alpha \leq 4$ then for Eq.~\eqref{eq:xpp2aypmz2} with $p=5$, that is, \begin{equation*} x^5+ 2^{\alpha}y^5 = z^2, \end{equation*} the non-trivial primitive integer solutions $(x,y,z)$ are given by \begin{equation*} (\alpha,x,y,z)=(1,-1,1,\pm 1),(2,2,1,\pm 6),(3,1,1,\pm 3),(4,2,-1,\pm 4). \end{equation*} \end{theorem} Bennett, Vatsal and Yazdani~\cite{bennett2004ternary} considered Eq.~\eqref{eq:genfermata1c1ppr} with $r=3$ and $b$ being a prime power, that is, \begin{equation} \label{eq:bennett2004ternary} x^p + s^{\alpha}y^p = z^3, \end{equation} and they obtained the following results. 
\begin{theorem} \label{th:bennett2004ternary} If $s$ and $p$ are primes such that $s \neq s_1^3 \pm 3^{s_2}$ for any integers $s_1$ and $s_2$ with $s_2 \neq 1$, $p > s^{2s}$ and $\alpha \geq 0$, then Eq.~\eqref{eq:bennett2004ternary} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} \begin{theorem} \label{th6:bennett2004} Let $\alpha \geq 1$ be an integer, \begin{equation*} s \in \{5, 11, 13, 23, 29, 31, 41, 43, 47, 53, 59, 61, 67, 71, 79, 83, 97\} \end{equation*} and let $p\geq 11$ be a prime not dividing $s^2-1$. Assume also that \begin{equation*} \begin{aligned} (s, p) \not\in \{ & (13, 19),(29, 11),(43, 13),(47, 13),(59, 11), \\ & (61, 61),(67, 73),(79, 97),(97, 13),(97, 79)\}. \end{aligned} \end{equation*} Then Eq.~\eqref{eq:bennett2004ternary} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} So far in this section we have considered the cases $r=2$ and $r=3$. For the next case $r=5$, we have the following result~\cite[Theorem 2.8]{soderlund2019some}. \begin{theorem} If $b \in \{3, 11, 19, 43, 67, 163\}$, then the equation \begin{equation*} x^4 + b y^4 = z^5 \end{equation*} has no non-trivial primitive solutions. \end{theorem} Azon~\cite{azon2025} considered Eq.~\eqref{eq:genfermata1c1ppr} with varying $r$. Specifically, for $b=7$, $p=5$ and a prime $r>41$, he proved that the equation has no non-trivial primitive integer solutions such that $10 \mid z$. \subsection{The case $a=1$} In this instance, Eq.~\eqref{eq:axppbypmczr} reduces to \begin{equation} \label{eq:xppbyppczr} x^p+by^p=cz^r, \end{equation} with $\min\{b,c\} > 1$. We first consider the case $r=2$, which reduces Eq.~\eqref{eq:xppbyppczr} to \begin{equation} \label{eq:xppbyppcz2} x^p+by^p=cz^2. \end{equation} Ivorra~\cite{ivorra2003equations} studied Eq.~\eqref{eq:xppbyppcz2} with $b= 2^{\beta}$ for $0 \leq \beta \leq p-1$, $c=2$ and $p \geq 7$ prime, that is, \begin{equation} \label{eq:ivorra2003} x^p+2^{\beta}y^p=2z^2, \end{equation} and obtained the following result. \begin{theorem} For prime $p \geq 7$ and integer $1 \leq \beta \leq p-1$, Eq.~\eqref{eq:ivorra2003} has no non-trivial primitive integer solutions. \end{theorem} Let $p \geq 7$ be a prime and $\alpha \geq 2$ an integer. The task of finding all primitive integer solutions to Eq.~\eqref{eq:xppbyppcz2} with $b=2^{\alpha}$ and $c=3$, that is, \begin{equation*} x^p+2^{\alpha} y^p=3z^2, \end{equation*} is left as an exercise to the reader in~\cite[Section 15.8]{MR2312338}. The authors of~\cite{MR3222262} prove that the equation has no solutions in coprime non-zero integers $x,y,z$ with $| xy| >1$. Bennett and Skinner~\cite{Bennett_Skinner_2004} obtained the following results concerning Eq.~\eqref{eq:xppbyppcz2}. \begin{theorem} Suppose that $p \geq 7$ is prime. Let $b=2^{\alpha}$. If \begin{equation} \label{eq:BennettSkinner2004} (c, \alpha_0) \in \{(3, 2),(5, 6),(7, 4),(11, 2),(13, 2),(15, 6),(17, 6)\}, \end{equation} $\alpha \geq \alpha_0$, $p > c$ and $(c, \alpha, p) \neq (11, 3, 13)$, then Eq.~\eqref{eq:xppbyppcz2} has no solutions in non-zero pairwise coprime integers $(x,y,z)$ with $xy \neq \pm 1$. \end{theorem} \begin{theorem} Suppose that $p \geq 11$ is prime and $\alpha$ is a non-negative integer. If $b \in \{5^{\alpha},11^{\alpha},13^{\alpha}\}$ and $p$ is not a divisor of $b$, then Eq.~\eqref{eq:xppbyppcz2} with $c=2$ has no solutions in non-zero pairwise coprime integers $(x, y,z)$.
\end{theorem} Eq.~\eqref{eq:xppbyppcz2} with $b$ a power of $2$ and $c$ a prime was studied by Zhang~\cite{zhang2012}, who proved the following theorem. \begin{theorem} \label{th:zhang2012} Let $s,p$ be odd primes such that $s \neq (2^k \pm 1)/d^2$ for any integers $k,d$, and let $\alpha \geq 0$ be an integer with $\alpha \neq 1$. If $p >s^{8s^2}$, then the equation \begin{equation*} x^p+2^\alpha y^p=sz^2 \end{equation*} has no solutions in pairwise coprime integers $x,y,z$ with $xyz \neq 0$. \end{theorem} From Theorem \ref{th:zhang2012}, Zhang~\cite{zhang2012} deduces the following. \begin{corollary} Let $s,p$ be odd primes with $s>5$ and $s \equiv \pm 3$ modulo $8$, and let $\alpha \geq 2$ be an integer. If $p >s^{8s^2}$, then the equation \begin{equation*} x^p+2^\alpha y^p=sz^2 \end{equation*} has no solutions in pairwise coprime integers $x,y,z$ with $xyz \neq 0$. \end{corollary} We next consider the case $r=3$ in \eqref{eq:xppbyppczr}, that is, equation \begin{equation} \label{eq:xppbyppcz3} x^p+by^p=cz^3. \end{equation} Bennett et al.~\cite{bennett2004ternary} studied \eqref{eq:xppbyppcz3} with $b=s^{\alpha}$ and $c=3^{\beta}$ for prime $s$ and positive integers $\alpha,\beta$, that is, \begin{equation} \label{eq3:bennett2004} x^p + s^{\alpha}y^p = 3^{\beta}z^3, \end{equation} and they proved the following theorems. \begin{theorem} \label{th3:bennett2004} If $s$ and $p$ are primes such that $s \not\in \{5, 3t^3 \pm1, 9t^3 \pm 1 : t \in \mathbb{N}\}$, $p>s^{28s}$ and $\alpha, \beta$ are positive integers with $3 \nmid \beta$, then Eq.~\eqref{eq3:bennett2004} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} \begin{theorem} \label{th7:bennett2004} If $p \geq 7$ is prime, $s \in \{7, 11, 13\}$ with \begin{equation*} (s,p) \not \in \{ (7,13),(13,7)\} \end{equation*} and $\alpha, \beta$ are positive integers with $3 \nmid \beta$, then Eq.~\eqref{eq3:bennett2004} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$. \end{theorem} Bennett et al.~\cite{bennett2004ternary} also studied equation \begin{equation} \label{eq4:bennett2004} x^p + 3^{\alpha}y^p = s^{\beta}z^3 \end{equation} and proved the following theorem. \begin{theorem} \label{th4:bennett2004} If $s \in \{2, 3, 5, 7, 11, 13, 15, 17, 19\}$, $p$ is a prime satisfying $p > \max\{s, 4\}$, \begin{equation*} (s,p) \not \in \{(7,11), (11,13)\} \quad \text{and} \quad (\alpha,s) \not \in \{(1,t) : t =2, \text{ or } t\geq 11\}, \end{equation*} and $\alpha$ and $\beta$ are non-negative integers, then Eq.~\eqref{eq4:bennett2004} has no solutions in coprime integers $x$, $y$ and $z$ with $| xy| > 1$, unless $(| x| , | y| , \alpha, p, | s^{\beta} z^3| ) =(2, 1, 1, 7, 125)$. \end{theorem} Krawci\'ow~\cite{KarolinaKrawciow2011} studied Eq.~\eqref{eq:xppbyppcz3} with $b=s^{\alpha}$, that is, \begin{equation} \label{eq:xppsayppcz3} x^p+ s^{\alpha} y^p=cz^3, \end{equation} and solved it under some conditions on the parameters. To state the conditions, we must first introduce some notation. Let $s_1,\ldots , s_k$ be odd primes such that $3 \in \{s_1,\ldots , s_k\}$. Then let $P(s_1,\ldots , s_k)$ denote the set of primes $x$ satisfying any of the $2^{k+1}$ equations \begin{equation*} x^\alpha - {s_1}^{\alpha_1} \cdots {s_k}^{\alpha_k} y^3 = \pm 1, \quad (1\leq \alpha_i \leq 2, \,\, i = 1,\ldots , k), \end{equation*} for some integers $\alpha>1$ and $y>1$. Theorem \ref{th:Krawciow2011} implies that the set $P(s_1,\ldots , s_k)$ is always finite. For example, $P(3) = \{2,5\}$, see~\cite{KarolinaKrawciow2011}.
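To illustrate the definition of $P(s_1,\ldots , s_k)$, the following short Python script (again our own illustration, not taken from~\cite{KarolinaKrawciow2011}) searches for members by brute force within user-chosen bounds on $x$, $\alpha$ and $y$. Such a search can only exhibit members; with the default bounds it recovers $5 \in P(3)$ through the identity $5^2 - 3\cdot 2^3 = 1$, but certifying a complete list such as $P(3)=\{2,5\}$ requires the arguments of~\cite{KarolinaKrawciow2011} rather than a finite enumeration.
\begin{verbatim}
# Brute-force search for members of P(s_1, ..., s_k): primes x with
# x^alpha - s_1^{a_1} ... s_k^{a_k} y^3 = +-1 for some alpha > 1,
# y > 1 and 1 <= a_i <= 2.  The bounds are ad hoc, so the search
# exhibits members but cannot certify that none are missing.
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def members(s_list, x_max=100, alpha_max=20, y_max=200):
    found = set()
    for x in range(2, x_max + 1):
        if not is_prime(x):
            continue
        for exps in product((1, 2), repeat=len(s_list)):
            coeff = 1
            for s, e in zip(s_list, exps):
                coeff *= s ** e
            for alpha in range(2, alpha_max + 1):
                for y in range(2, y_max + 1):
                    if abs(x ** alpha - coeff * y ** 3) == 1:
                        found.add(x)
    return sorted(found)

print(members([3]))  # finds 5, via 5^2 - 3*2^3 = 1
\end{verbatim}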
\begin{theorem} \label{th:KarolinaKrawciow2011} Let $c = \prod_{i=1}^{k} s_i^{\gamma_i}$ be a prime factorization of a positive cube-free odd integer $c$ which is divisible by $3$, $\alpha$ be a positive integer, $p$ is a prime, and let $C=\mathrm{rad}(c)$. If $s$ is a prime such that $s \not \in P(s_1, \dots, s_k)$ and $s \neq \prod_{i=1}^{k} s_i^{\alpha_i}t^3 \pm 1$ $(1 \leq \alpha_i \leq 2, i = 1,\dots, k)$ for any integer $t$, and if $p > (sC)^{10sC^2}$, then Eq.~\eqref{eq:xppsayppcz3} has no non-trivial primitive integer solutions. \end{theorem} \subsection{The case $c=1$} In this instance, Eq.~\eqref{eq:axppbypmczr} reduces to \begin{equation} \label{eq:eq:axppbypmzr} ax^p+by^p=z^r \end{equation} for positive integers $a,b$. \subsubsection{Eq.~\eqref{eq:eq:axppbypmzr} with fixed $p$} Eq.~\eqref{eq:eq:axppbypmzr} with $p=3$ was considered by Bennett and Dahmen which led to the following partial result~\cite[Theorem 13.2]{bennett2013klein}. \begin{theorem} \label{th:bennett2013klein} Let $a$ and $b$ be odd integers such that $ax^3 + by^3$ is irreducible in $\mathbb{Q}[x, y]$, and suppose that all solutions to the equation \begin{equation} \label{eq:genfermatc1prod} ax^3 + by^3 = \prod_{s \vert 6ab} s^{\alpha_s} \end{equation} in coprime non-zero integers $x$ and $y$, and non-negative integers $\alpha_s$, satisfy $\alpha_2 \in \{1, 4\}$. Then there exists an effectively computable constant $r_0$ such that the equation \begin{equation} \label{eq:genfermatc1z} ax^3 + by^3 = z^r \end{equation} has no solution in non-zero integers $x$, $y$ and $z$ (with $\gcd(x, y) = 1$) and prime $r \equiv 1$ (mod 3) with $r \geq r_0$. If all solutions to \eqref{eq:genfermatc1prod} with $x$ and $y$ coprime integers have $\alpha_2 \leq 4$, then there exists an effectively computable constant $r_1$ such that Eq.~\eqref{eq:genfermatc1z} has no solutions in odd integers $x$, $y$ and prime $r \geq r_1$. \end{theorem} Examples of pairs $(a,b)$ satisfying the conditions of Theorem \ref{th:bennett2013klein} are $(1,57)$, $(1,83)$ and $(3, 19)$. \subsubsection{Eq.~\eqref{eq:eq:axppbypmzr} with fixed $r$} In 2004, Bennett and Skinner~\cite{Bennett_Skinner_2004} considered Eq.~\eqref{eq:eq:axppbypmzr} with $r=2$, and $p \geq 11$ prime, that is, \begin{equation} \label{eq:bennetAB} ax^p + by^p = z^2, \end{equation} and they obtained the following results. \begin{table}\begin{center} \caption{\label{tb:ABvalsodd}Values for $a,b,p, \alpha$ not covered by Theorem \ref{th:Bennettabc}.} \begin{tabular}{|c|c|c||c|c|c|} \hline $ab$ & $p$ & $\alpha$ &$ab$ & $p$ & $\alpha$ \\\hline\hline $2^{\alpha} 19^{\beta}$ & 11 & 1 & $2^{\alpha} 61^{\beta}$ & 13 & 0,3 \\\hline $2^{\alpha} 43^{\beta}$ & 11 & $0,3,\geq 7$ & $2^{\alpha} 61^{\beta}$ & 31 & $\geq 7$ \\\hline $2^{\alpha} 53^{\beta}$ & 11 & $2,4,5$ & $2^{\alpha} 67^{\beta}$ & 11 & $0,3,6$ \\\hline $2^{\alpha} 53^{\beta}$ & 17 & 0,3 & $2^{\alpha} 67^{\beta}$ & 13 & 0,3 \\\hline $2^{\alpha} 59^{\beta}$ & 11 & $\geq 7$ & $2^{\alpha} 67^{\beta}$ & 17 & $0,3,\geq 7$ \\\hline $2^{\alpha} 59^{\beta}$ & 29 & 6 &&& \\\hline \end{tabular} \end{center} \end{table} \begin{theorem} \label{th:Bennettabc} Suppose that $p \geq 11$ is prime, $a,b$ are coprime integers satisfying $p \nmid ab$, $\alpha \geq 0$ and $\beta \geq 1$ are integers, and the values $ab, p, \alpha, \beta$ are not those given in Table \ref{tb:ABvalsodd}. 
If \begin{equation*} ab \in \{2^{\alpha}11^{\beta}, 2^{\alpha}13^{\beta},2^{\alpha}19^{\beta},2^{\alpha}29^{\beta},2^{\alpha}43^{\beta},2^{\alpha}53^{\beta}, 2^{\alpha}59^{\beta}, 2^{\alpha}61^{\beta},2^{\alpha}67^{\beta} \} \end{equation*} for $\alpha =0$ or $\alpha \geq 3$, or if \begin{equation*} ab \in \{2 \cdot 19^{\beta}, 4 \cdot 11^{\beta}, 4 \cdot 19^{\beta}, 4 \cdot 43^{\beta}, 4 \cdot 59^{\beta}, 4\cdot 61^{\beta}, 4\cdot 67^{\beta}\} \end{equation*} then Eq.~\eqref{eq:bennetAB} has no solutions in non-zero pairwise coprime integers $(x, y,z)$. \end{theorem} \begin{theorem} \label{th:Bennettabceven} Suppose that $p \geq 11$ is prime, $a$ and $b$ are coprime integers satisfying $p \nmid ab$, $\alpha,\beta,\gamma$ are non-negative integers with $\alpha \geq 6$ and $ab = 2^{\alpha}s^{\beta}t^{\gamma}$ where \begin{equation*} \begin{aligned} (s,t) \in \{ & (3, 31) (\beta \geq 1), (5, 11) (\alpha \geq 7), (5, 19),(5, 23) (\beta \geq 1), \\ & (7, 19) (\gamma \geq 1), (11, 13), (11, 23) (\beta \geq 1), (11, 29),(11, 31) (\beta \geq 1), \\ & (13, 31) (\beta \geq 1), (19, 23) (\beta \geq 1), (19, 29),(29, 31) (\beta \geq 1)\}, \end{aligned} \end{equation*} and the values $ab$, $p$ and $\alpha$ are not those given in Table \ref{tb:ABvalseven}. Then Eq.~\eqref{eq:bennetAB} has no solutions in non-zero pairwise coprime integers $(x, y, z)$. \end{theorem} In 2014, Zhang and Luo~\cite{MR3222262} obtained the following result in the case $ab = 2^{\alpha}3^{\beta}$ of Eq.~\eqref{eq:bennetAB}. \begin{table}\begin{center} \caption{\label{tb:ABvalseven}Assume that $\beta$ and $\gamma$ are positive integers. Values for $a,b,p, \alpha$ not covered by Theorem \ref{th:Bennettabceven}.} \begin{tabular}{|c|c|c||c|c|c|} \hline $ab$ & $p$ & $\alpha$ &$ab$ & $p$ & $\alpha$ \\\hline\hline $2^{\alpha} 5^{\beta} 23^{\gamma}$ & 11 & $\geq 7$ & $2^{\alpha} 11^{\beta} 29^{\gamma}$ & 13 & $\geq 7$ \\\hline $2^{\alpha} 7^{\beta} 19^{\gamma}$ & 11 & $\geq 7$ & $2^{\alpha} 19^{\beta} 23^{\gamma}$ & 11 & $\geq 7$ \\\hline $2^{\alpha} 11^{\beta} 23^{\gamma}$ & 11 & $\geq 6$ & $2^{\alpha} 19^{\beta} 29^{\gamma}$ & 11 & $\geq 7$ \\\hline \end{tabular} \end{center} \end{table} \begin{theorem} Suppose that $a$ and $b$ are coprime integers with $ab = 2^{\alpha}3^{\beta}$ where $\alpha \geq 6$ and $\beta \geq 0$, and $p \geq 7$ is prime. Then Eq.~\eqref{eq:bennetAB} has no solutions in non-zero pairwise coprime integers $(x, y, z)$. \end{theorem} \subsection{The general $a,b,c$} In this case, we are considering Eq.~\eqref{eq:axppbypmczr} with general positive coefficients $a,b,c$. In this section, we will sometimes consider equations with $p$ or $r$ composite. If $p$ is even, we may also have $b<0$. We start with the case $r=2$, that is, equation \begin{equation} \label{eq:axppbypmcz2} ax^p+by^p=cz^2. \end{equation} Noubissie and Togb\'e~\cite{armandphdthesis,Noubissie21} studied Eq.~\eqref{eq:axppbypmcz2} with $a=5^{\alpha}$, $\alpha \geq 1$, $b=64$, and $c=3$, and obtained the following result. \begin{theorem} Suppose that $p \geq 7$ is a prime number and $\alpha \geq 1$. Then the equation \begin{equation*} 5^{\alpha}x^p + 64y^p = 3z^2 \end{equation*} has no non-zero primitive integer solutions with $x y \equiv 1$ (mod 2). \end{theorem} In 2006, Ivorra and Kraus~\cite{ivorra2006quelques} considered Eq.~\eqref{eq:axppbypmcz2} with \begin{equation} \label{abc_in_axppbypmcz2} (a,b,c)\in \{(2^n,l^m,1),(2^n l^m,1,1),(1,l^m,2)\}. 
\end{equation} For $m \geq 1$, they provide restrictions on $(l,n)$ such that, for $p$ larger than an explicit bound depending on $l$, $m$ and $n$, Eq.~\eqref{eq:axppbypmcz2} has no non-trivial primitive integer solutions. Ivorra and Kraus~\cite{ivorra2006quelques} also solved Eq.~\eqref{eq:axppbypmcz2} for certain values of $(a,b,c)$ not of the form \eqref{abc_in_axppbypmcz2}. Specifically, they obtained the following results. \begin{theorem} \begin{itemize} \item[(a)] For {$p \geq 7$} and $(a,b,c)=(4,1,3)$, Eq.~\eqref{eq:axppbypmcz2} only has the non-trivial primitive integer solutions $(x,y,z)=(1,-1,\pm 1)$. \item[(b)] For {$p \geq 11$} and $(a,b,c)=(64,1,7)$, Eq.~\eqref{eq:axppbypmcz2} only has the non-trivial primitive integer solutions $(x,y,z)=(1,-1,\pm 3)$. \end{itemize} \end{theorem} In 2022, Cha\l upka et al.~\cite{CHALUPKA2022153} studied Eq.~\eqref{eq:axppbypmcz2} with $(a,b,c,p)=(3^{31},-4,7,34)$ and proved the following theorem. \begin{theorem} The equation \begin{equation*} 3^{31}x^{34}-4y^{34} = 7z^2 \end{equation*} has no solutions in coprime odd integers. \end{theorem} We next discuss Eq.~\eqref{eq:axppbypmczr} with $r=3$, that is, \begin{equation} \label{eq:axppbypmcz3} ax^p+by^p=cz^3. \end{equation} Noubissie and Togb\'e~\cite{armandphdthesis,Noubissie21} considered Eq.~\eqref{eq:axppbypmcz3} with $a=2^{\alpha}$, $\alpha \geq 1$, $b=27$, $c=s^{\beta}$, $\beta \geq 1$, and $s \in \{7,13\}$, and proved the following results. \begin{theorem} \label{th:noubissie1} Suppose that $p \geq 11$ is a prime number and $\alpha \geq 1$, $\beta \geq 1$ and $\beta \equiv 1$ (mod 3). Then the equation \begin{equation*} 2^{\alpha}x^p + 27y^p = 7^{\beta}z^3 \end{equation*} has no non-zero primitive integer solutions. \end{theorem} \begin{theorem} \label{th:noubissie2} Suppose that $p > 13$ is a prime number, $\alpha \geq 1$, $\beta \geq 1$ and $\beta \equiv 1$ (mod 3). Then the equation \begin{equation*} 2^{\alpha}x^p + 27y^p = 13^{\beta} z^3 \end{equation*} has no non-zero primitive integer solutions. \end{theorem} We next discuss Eq.~\eqref{eq:axppbypmczr} with $r=5$, that is, \begin{equation} \label{eq:axppbypmcz5} ax^p+by^p=cz^5. \end{equation} Azon~\cite{azon2025} considered Eq.~\eqref{eq:axppbypmcz5} with $a=7$, $b=1$, $c=3$ and a prime $p >71$, and proved that this equation has no non-trivial primitive integer solutions such that $10 \mid xy$. In the same paper, he also proved the following result. \begin{theorem} \begin{itemize} \item[(a)] Suppose that $p>71$ is a prime. Then, for any $\alpha \in \{1,2,3,4\}$ and $\beta \in \{3, 4\}$, there are no non-trivial primitive integer solutions to the equation $$ 7x^p + 2^{\alpha} 5^{\beta} y^p = 3z^5. $$ \item[(b)] Suppose that $p>41$ is a prime. Then, for any $\alpha \in \{1,2,3,4\}$ and $\beta \in \{2, 3, 4\}$, there are no non-trivial primitive integer solutions to the equation $$ x^5 + 7y^5 = 2^{\alpha} 5^{\beta} z^p. $$ \end{itemize} \end{theorem} \section{Equations of signature $(p,q,r)$} \label{sec:pqr} This section discusses Eq.~\eqref{eq:genfermat} with pairwise distinct exponents $p$, $q$ and $r$. We will sometimes consider equations with some of the exponents being composite. \subsection{The case $a=c=1$} In this case, Eq.~\eqref{eq:genfermat} reduces to \begin{equation} \label{eq:xppbyqmzr} x^p+by^q=z^r. \end{equation} In 2014, Zhang and Luo~\cite{MR3222262} studied Eq.~\eqref{eq:xppbyqmzr} with $p=2$, $r=3$ and the exponent of $y$ even, that is, the equation \begin{equation} \label{eq:x2pby2qmz3} x^2+by^{2q}=z^3 \end{equation} and they obtained the following results.
\begin{theorem} Let $q \geq 7$ be prime and $\alpha \geq 6$ and $\beta \geq 0$ be integers. If $b \in \{ 2^{\alpha},4 \cdot 3^{2\beta}\}$, then \eqref{eq:x2pby2qmz3} has no non-trivial solutions in pairwise coprime integers $x,y,z$, except $(x,y,z,b)=(\pm 11, \pm 1,5,4)$. \end{theorem} \begin{theorem} \label{th2:MR3222262} Let $q \geq 11$ be prime and let $\alpha \geq 6$, $\beta \geq 1$ and $\gamma \geq 0$ be integers that do not satisfy any of the following conditions. \begin{itemize} \item[(i)] {$2 \vert \alpha$}, $2 \vert \beta$, $2 \nmid \gamma$, \item[(ii)] {$2 \vert \alpha$}, $2 \nmid \beta$, $2 \vert \gamma$, \item[(iii)] $2 \nmid \alpha \beta \gamma$. \end{itemize} If $b= 2^\alpha 3^\beta 31^\gamma$ and $q \nmid b$, then Eq.~\eqref{eq:x2pby2qmz3} has no non-trivial solutions in pairwise coprime integers $x,y,z$. Also, if $\gamma =0$, then the same result holds for $q=7$. \end{theorem} Noubissie~\cite{noubissie2020generalized} studied Eq.~\eqref{eq:xppbyqmzr} with $p$ even, $q=2$ and $b=r=3$, that is, \begin{equation} \label{eq:x2np3y2pmz3} x^{2p}+3y^{2}=z^3, \end{equation} and they obtained the following result. \begin{theorem} Eq.~\eqref{eq:x2np3y2pmz3} with integer $p \geq 8$ has no non-trivial primitive integer solutions. \end{theorem} Noubissie~\cite{noubissie2020generalized} also proved that for prime $q$ satisfying $q >1964$, the equation \begin{equation*} x^{2}+3y^{2q}=z^3 \end{equation*} has no non-trivial primitive integer solutions with $q\vert y$, while equation \begin{equation*} x^2+2y^3=z^{3r} \end{equation*} with prime $r \geq 3$ has no non-trivial primitive solutions with $x$ even. Eq.~\eqref{eq:xppbyqmzr} with $b$ a positive odd integer, $p$ even, and $5 \nmid h(-b)$ where $h(-b)$ denotes the class number of $\mathbb{Q}(\sqrt{-b})$ was considered by Zhang and Bai~\cite{zhang2013} who obtained the following partial result. \begin{theorem} Let $p \geq 7$ be a prime, $b$ be a positive odd integer such that $5 \nmid h(-b)$. Then the equation \begin{equation*} x^{2p}+by^2=z^5 \end{equation*} has no solutions in pairwise coprime integers with $xyz \neq 0$ and $x$ even. \end{theorem} In the same paper, Zhang and Bai~\cite{zhang2013} considered Eq.~\eqref{eq:xppbyqmzr} with $p=2$, $q$ even and $r=5$ and proved the following theorem. \begin{theorem} Let $\beta \geq 1$, $\gamma \geq 0$ and $p\geq 11$ prime. Let $b$ satisfy one of the following conditions. \begin{itemize} \item[(i)] {$b=2^\alpha$}, $\alpha \geq 2$, \item[(ii)] {$b=2^\alpha 5^\beta 11^\gamma$}, $\alpha \geq 3$, $p \nmid b$, \item[(iii)] {$b=2^\alpha 5^\beta 19^\gamma$}, $\alpha \geq 2$, $p \nmid b$, \item[(iv)] {$b=2^\alpha 5^\beta 23^\gamma$}, $\alpha \geq 2$, $2 \vert \alpha \beta \gamma$, $p \nmid b$, $\gamma \geq 1$ and $p \geq 13$. \end{itemize} Then the equation \begin{equation*} x^2+by^{2p}=z^5 \end{equation*} has no integer solutions in pairwise coprime integers $x,y,z$ with $xyz\neq 0$. The same conclusion holds in case (i) for $p=7$. \end{theorem} The authors of~\cite{MR3222262} also considered Eq.~\eqref{eq:xppbyqmzr} with $q=2p$ and $r=2$, that is, \begin{equation} \label{eq:xppby2pmz2} x^p+by^{2p}=z^2, \end{equation} and they obtained the following result. \begin{theorem} Let $p \geq 7$ be prime, and let $b=4 \cdot 3^{2\beta +1}$ with $\beta \geq 0$. Then Eq.~\eqref{eq:xppby2pmz2} has no non-trivial solutions in pairwise coprime integers $x,y,z$. 
\end{theorem} Noubissie~\cite{noubissie2020generalized} studied Eq.~\eqref{eq:xppby2pmz2} with $b=3^{2p-6}$, that is, \begin{equation*} x^p+3^{2p-6}y^{2p}=z^2, \end{equation*} and proved that for prime $p \equiv 13$ modulo $24$, the equation has no non-trivial primitive integer solutions with $z \equiv 2(\text{mod}\, 4)$. \subsection{The general $a,b,c$} In this section we discuss cases of Eq.~\eqref{eq:genfermat} with non-equal exponents $(p,q,r)$ such that at least two of the coefficients $a,b,c$ are greater than $1$. In 2022, Cha\l upka et al.~\cite{CHALUPKA2022153} studied equations \begin{equation} \label{eq:ax2py2pm4z3} x^{2p}+ay^2 = 4z^3 \end{equation} \begin{equation} \label{eq:x2pay2pm4z3} x^2 +ay^{2p} = 4z^3 \end{equation} where $p \geq 2$ is an integer. In the case $a=7$ in \eqref{eq:ax2py2pm4z3}, they obtained the following results. \begin{theorem}[Theorem 1 and 12 of~\cite{CHALUPKA2022153}] Let $p$ be a prime satisfying $5 \leq p <10^9$ with $p \neq 7,13$, then the equation \begin{equation} \label{eq:7x2py2pm4z3} x^{2p} +7y^2= 4z^3 \end{equation} has no non-trivial primitive integer solutions. \end{theorem} \begin{theorem} Let $p$ be a prime satisfying \begin{equation*} p \equiv 3 \text{ or } 55 (\text{mod }106) \quad\text{ or } \quad p \equiv 47, 65, 113, 139, 143 \text{ or } 167 (\text{mod }168). \end{equation*} Then Eq.~\eqref{eq:7x2py2pm4z3} has no non-trivial primitive integer solutions. \end{theorem} \begin{theorem} If $a \in \{11, 19, 43, 67, 163\}$ and $p \geq 2$ is an integer, then Eqs.~\eqref{eq:ax2py2pm4z3} and \eqref{eq:x2pay2pm4z3} have no non-trivial primitive integer solutions. \end{theorem} Noubissie~\cite{noubissie2020generalized} also considered Eqs.~\eqref{eq:ax2py2pm4z3} and \eqref{eq:x2pay2pm4z3} with $a=3$, and proved that for prime $p \geq 3$ these equations have no non-trivial primitive integer solutions with $6 \vert y$. Cha\l upka et al.~\cite{CHALUPKA2022153} also studied some Eqs.~\eqref{eq:genfermat} with $p=2q$. Specifically, they proved the following results. \begin{theorem} Let $p$ be a prime with $p \equiv 5$ (mod $6$). Then the equation \begin{equation} \label{eq:x2pm4ypm21z2} x^{2p} + 4y^{p} = 21z^2 \end{equation} has no non-trivial solutions. \end{theorem} They also prove that for all primes $p \geq 11$ with $p \neq 19$, Eq.~\eqref{eq:x2pm4ypm21z2} has no solutions in coprime odd integers $(x,y,z)$. \begin{theorem} Let $11 \leq p < 10^9$ and $p \neq 13, 17$ be a prime. Then the equation \begin{equation} \label{eq:32pm3x2pm4ypm7z2} 3^{2p-3}x^{2p}+4y^p = 7z^2 \end{equation} has no primitive solutions $(x,y,z)$ in odd integers. \end{theorem} \begin{theorem} Let $p$ be a prime satisfying \begin{equation*} p \equiv 3 \text{ or } 55 (\text{mod }106) \quad\text{ or } \quad p \equiv 47, 65, 113, 139, 143 \text{ or } 167 (\text{mod }168). \end{equation*} Then Eq.~\eqref{eq:32pm3x2pm4ypm7z2} has no primitive solutions in odd integers. \end{theorem} \begin{theorem} The equation \begin{equation*} x^{38} + 4y^{19} = 21z^2 \end{equation*} has no solution in coprime odd integers $(x,y,z)$. \end{theorem} In 2024, Cazorla Garc\'\i a~\cite{MR4793291} studied equation \begin{equation} \label{eq:garcia24} a x^2 + s^k y^{2n} = z^n \end{equation} where $s$ is prime, and provided an algorithmically testable set of conditions on $(a,s)$ which, if satisfied, imply the existence of a constant $B=B(a,s)$ such that if $n>B$ then for any $k\geq 0$ Eq.~\eqref{eq:garcia24} has no solutions in coprime integers $(x,y,z)$ such that $\gcd(ax, sy, z) = 1$ and $z$ is even. 
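Returning to Eq.~\eqref{eq:7x2py2pm4z3}, the two theorems above that concern it apply to two different sets of prime exponents: one covers all primes $5 \leq p < 10^9$ with $p \neq 7, 13$, the other all primes in the listed congruence classes modulo $106$ and $168$. The short Python script below (our own illustration, with the bound and the congruence classes copied verbatim from those statements) combines the two criteria and reports whether a given prime exponent $p$ is covered.
\begin{verbatim}
# Is a prime exponent p covered by one of the two quoted criteria
# for x^{2p} + 7y^2 = 4z^3?  (p is assumed to be prime.)

def covered(p):
    in_range = 5 <= p < 10 ** 9 and p not in (7, 13)
    in_classes = (p % 106 in (3, 55)) or \
                 (p % 168 in (47, 65, 113, 139, 143, 167))
    return in_range or in_classes

for p in (5, 11, 7, 13, 10 ** 9 + 7):
    print(p, covered(p))
# 5 and 11 fall in the computational range; 7 and 13 are covered by
# neither criterion; the prime 1000000007 exceeds the bound 10^9 but
# is covered since 1000000007 = 167 (mod 168).
\end{verbatim}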
In a follow-on paper to~\cite{CHALUPKA2022153}, the authors~\cite{CHALUPKA2024} prove that for any prime $p=5$ or $11$ modulo $12$, the equation \begin{equation*} 3^{2p-3} 7 x^{2p}+4y^p=z^2 \end{equation*} has no solutions in coprime odd integers. In the same paper~\cite{CHALUPKA2024}, the authors prove the following result. \begin{theorem} Let $n \geq 2$ be an integer. The equation \begin{equation*} x^2+7y^{2n}=4z^{12} \end{equation*} has no non-trivial primitive integer solutions $x,y,z$. \end{theorem} In Chapter $5$ of their thesis, Putz~\cite{ThesisPutz} solved a number of equations of signature $(5,3,11)$, that is, equation \begin{equation} \label{eq:pqr5311} ax^5+by^3=cz^{11}. \end{equation} Specifically, they proved the following theorem. \begin{theorem} Let \begin{equation} \label{eq:putz} \begin{aligned} \mathcal{C}\coloneq & \{(11^8,1,3^6 \cdot 5^{10}),(3^6 \cdot 11^8,1,5^{10}),(3^9 \cdot 11^8,1,5^{10}), \\ & (1,11^4,3^6 \cdot 5^{10}), (3^4 \cdot 11^8,1,5^{10}),(11^8,3^3,5^{10}), \\ & (3^7 \cdot 11^8,1,5^{10}), (3^8\cdot 11^8,1,5^{10}),(3^{10}\cdot 11^8,1,5^{10})\}. \end{aligned} \end{equation} If $(a,b,c) \in \mathcal{C}$ then Eq.~\eqref{eq:pqr5311} has no non-trivial primitive integer solutions. \end{theorem} \section{Summarizing table} \label{sec:summary} Table \ref{tbl10} below summarizes the equations solved in the literature considered in this survey. The table lists Eqs.~\eqref{eq:genfermat} with $|abc|>1$, see Table \ref{tb:pqrsigs} for the case $|abc|=1$. Table \ref{tbl10} specifies the equation, any restrictions on the parameters, the progress towards the solution of the equation, and the reference to the solution. The progress we refer to can be: \begin{itemize} \item ``Solved'' implies that all primitive integer solutions have been described. This includes where the equation has no non-trivial primitive integer solutions. \item ``Partial'' implies that there is only partial progress in the solution of the equation. This means that only the primitive solutions satisfying some additional conditions have been described. \item ``Non-trivial'' implies that it has been determined whether the equation has any non-trivial integer solution (that is, one with all variables non-zero). \item ``Non-zero'' implies that it has been determined whether the equation has any integer solution with $(x,y,z)\neq (0,0,0)$. \end{itemize} \captionof{table}{\label{tbl10} Generalised Fermat Eqs.~\eqref{eq:genfermat} with $1/p+1/q+1/r<1$ and $|abc|>1$ solved in the literature. Assume $m,n$ are integers and $p,r,s$ are primes. Unless otherwise stated, assume $\alpha, \beta, \gamma \geq 0$.} \begin{longtable}[c]{ |c|c|c|c|c|c|c|c|c| } \hline Equation & Parameters & Progress & References \\ \hline \hline \endfirsthead \multicolumn{4} {|c|}{\textbf{Special Signatures}}\\\hline \hline \multicolumn{4} {|c|}{Signature $(4,4,4)$}\\\hline \hline $x^4+y^4=cz^4$ & $1 \leq c \leq 100$, $c\neq 82$ & Solved & \cite{flynn2001covering,puadurariu2023rational} \\\hline $x^4+y^4=cz^4$ & $c \leq 10000$ & Non-zero & \cite{MR709852,MR4458887} \\ &&& \cite[Sec 6.6]{MR2312337} \\\hline $x^4+y^4=cz^4$ & $c >2$ fourth-power free, & Solved & \cite{Grigorov1998Heights} \\ & rank of $v^2=u^3-cu$ is 1 & & \\\hline $x^4+2y^4=3z^4$ & & Solved & Mathoverflow\footnotemark \\\hline $x^4+3y^4=4z^4$ & &Solved & \cite[pp. 630]{dickson1920history} \\\hline $x^4+8y^4=9z^4$ & &Solved& \cite[pp. 
630]{dickson1920history} \\\hline $ax^4+by^4=cz^4$ & $| a| +| b| +| c| \leq 62$ & Non-zero & \cite[Sec 6.2.2]{mainbook} \\ &&& \cite[Sec 6.4-5]{wilcox2024systematic} \\\hline $ax^4+by^4=bz^4$ & $| a| +2| b| \leq 38$ & Non-trivial & \cite[Sec 6.15]{wilcox2024systematic} \\ \hline $x^4+b y^4=z^4$ & $1\leq b \leq 218$ &Non-trivial & \cite[Sec 6.15]{wilcox2024systematic} \\\hline $x^4+34y^4=z^4$ &&Solved& \cite{soderlund2020note} \\\hline $x^4+2py^4=z^4$ & Any of the conditions & Solved & \cite{taclay2023} \\ & in Theorem \ref{th:x4p2py4mz4} && \\\hline $a^3x^4+b^3y^4=c^3z^4$ & Conditions in & Solved & \cite{MR249355,Mordell_1970} \\ & Theorem \ref{th:MR249355} && \\\hline \hline \multicolumn{4} {|c|}{Signature $(2,4,r)$}\\\hline \hline $2x^2+y^4=z^n$ & $n \geq 5$ & Solved & \cite{bennett2010diophantine} \\\hline $ax^2 +y^4 = z^p$ & $(a=3,p > 17)$ or & Solved & \cite{dieulefait2008solvingfermattypeequations,MR4473105} \\ & $(a=5,p > 499)$ or && \\ & $(a=6,p > 563)$ or && \\ & $(a=7,p > 349)$ && \\\hline $sx^p+y^4=z^2$ & $s >3, s \equiv 3$ (mod 8), &Solved & \cite{MR2737959} \\ & $s \neq 2t^2+1$, && \\ & $p > (8\sqrt{s+1}+1)^{16(s-1)}$ && \\\hline $sx^p+y^4=z^2$ & $s \equiv 5$ (mod 8), $s \neq t^2+4$, &Solved & \cite{MR2737959} \\ & $p > (8\sqrt{s+1}+1)^{16(s-1)}$ && \\\hline $2^{\alpha}x^p+y^4=z^4$ & $\alpha \geq 0$, $p \geq 5$ & Solved& \cite{MR2737959} \\\hline $2^{\alpha}s^{\beta}x^p+y^4=z^4$ & $\alpha \geq 0$, $\beta \geq 0$, $s\geq 3$,& Solved& \cite{MR2737959,MR4205757} \\ & $s \neq 2^{2^k}+ 1$, && \\ & $p > (\sqrt{8s+8}+1)^{2s-2}$ && \\\hline $sx^p+y^4=z^4$ & $s \in \{5,17\}$, $p>5$ & Solved & \cite{MR4205757} \\\hline $2^{\alpha}s^{\beta}x^p+y^4=z^4$ & $2 \leq s <50$, $s \neq 5$, $s \neq 17$, & Solved & \cite{MR4205757} \\ & $p>5$, $\alpha, \beta \geq 0$ && \\\hline $x^p+py^2=z^4$ & $p \equiv 1$ (mod 4), $p \nmid B_{(p-1)/2}$ & Partial & \cite{MR1341665} \\\hline $ax^{2m}+y^2=z^n$ & Conditions in & Solved & \cite{MR1604052} \\ & Theorem \ref{th:ax2mpy2mzn} && \\\hline $x^p+by^2=z^4$ & $p >19$, $p \equiv 1,3$ (mod 8), & Solved & \cite{MR4609012} \\ & $(b=6,p \neq 97)$ or && \\ & $(b=10, p \neq 139)$ or && \\ & $(b=11,p \neq 73)$ or && \\ & $(b=19,p\neq 43,113)$ or && \\ & $(b=129,p \neq 43)$ && \\\hline $x^p+129y^2=z^4$ & $p >900$ & Solved& \cite{MR4609012} \\ \hline \hline \multicolumn{4} {|c|}{Signature $(2,6,r)$}\\\hline \hline $x^p +2y^6= z^2$ & $p \neq 7$, $p \equiv 1,7$ (mod 24) & Solved & \cite{Chen_2012} \\\hline $x^2 +2y^6= z^p$ & $p >257$ & Solved & \cite{MR4473105} \\\hline $x^2 +3y^6= z^n$ & $n \geq 3$ & Solved & \cite{koutsianas2019generalizedfermatequationa23b6cn} \\\hline $x^2 +6y^6= z^p$ & $p >563$ & Solved & \cite{MR4473105} \\\hline $x^2+by^6=z^p$ & $p \geq r_0$, $(b,r_0)$ given by & Solved & \cite{MR4583916} \\ & Table \ref{tb:x2pby6mzr} && \\\hline $x^2+b^2y^{2m}=z^{4n}$ & $b>0$ odd with prime &Partial & \cite{MR1188732} \\ & factor $l = \pm 3$ (mod 8) && \\\hline \hline \multicolumn{4} {|c|}{Signature $(3,3,4)$}\\\hline \hline $2x^3+2y^3=z^4$ & & Solved & \cite[App. A]{wilcox2024systematic} \\\hline \hline \multicolumn{4} {|c|}{Signature $(4,4,3)$}\\\hline \hline $x^4+2y^3=z^4$ & & Solved & \cite[App. A]{wilcox2024systematic} \\\hline $x^4+3y^3=z^4$ & & Solved & \cite[App. 
A]{wilcox2024systematic} \\\hline $x^4+2y^4=z^3$ & & Solved & \cite{soderlund2017primitive} \\\hline $x^4+3y^4=z^3$ & & Solved & \cite{MR1288426} \\\hline $x^4+s^2y^4=z^3$ & & Solved & \cite{soderlund2020diophantine} \\\hline $x^4+by^4=z^3$ & $b \in \{19,43,67\}$ & Solved & \cite{soderlund2019some} \\\hline \hline \multicolumn{4} {|c|}{\textbf{Signature $(p,p,p)$}}\\\hline \hline \multicolumn{4} {|c|}{Explicit small values of $p$}\\\hline \hline $x^5+y^5=cz^5$ & $1 \leq c \leq 100$, $c \neq 88$ & Non-trivial &\cite[Sec 6.3.4]{mainbook} \\\hline $x^5+y^5=cz^5$ & $c$ only divisible by primes & Solved & \cite{kraus2004} \\ & $c_i \not \equiv 1$ (mod 5) && \\\hline $ax^5+by^5=cz^5$ & $a+b+c< 19$ &Non-zero& \cite[Sec 6.2.3]{mainbook} \\ &&& \cite[Sec 6.7]{wilcox2024systematic} \\\hline $x^6+y^6=cz^6$ & $c \leq 164634913$ & Non-trivial & \cite{MR4552507} \\\hline $ax^6+by^6=bz^6$ & $| a| +2| b| \leq 10$ &Non-trivial& \cite[Sec 6.16]{wilcox2024systematic} \\\hline $x^{14}+2^{\alpha} 7^{\beta} y^{14}=z^{14}$ & $\alpha \geq 0,\beta \geq 1$ & Solved & \cite[pp. 736]{dickson1920history} \\\hline \hline \multicolumn{4} {|c|}{ The case $a=b=1$}\\\hline \hline $x^p+y^p=15z^p$ & $p \geq 5$ and $2p+1$ or & Solved & \cite{kraus1996equations} \\ & $4p+1$ is prime && \\\hline $x^p+y^p=15z^p$ & $ p \geq 5$ & Partial & \cite{kraus1996equations} \\\hline $x^p+y^p=pz^p$ & $p$ regular & Solved & \cite[pp. 759]{dickson1920history} \\ & (see Theorem \ref{th:regular}) && \\\hline $x^p+y^p=pz^p$ & $p\geq 3$ & Partial & \cite{MR1517149} \\\hline $x^n+y^n=2^{\alpha}z^n$ & $\alpha \geq 0$, $n \geq 3$ & Solved & \cite{Darmon1997,ribet1997equation} \\\hline $x^p+y^p=31^\alpha z^p$ & $7 \leq p \leq 10^6$, $\alpha \geq 0$ & Solved & \cite[Sec 15.7]{MR2312338} \\ \hline $x^p+y^p=s^\alpha z^p$ & $s\neq 2^n\pm 1$ odd, $\alpha \geq 0$, & Solved & \cite{MR1611640} \\ & $p > \left(1+\sqrt{(s+1)/2}\right)^{(s+11)/6}$ && \\\hline $x^p+y^p=s^{\alpha}z^p$ & $3 \leq s \leq 100$, $p \geq 5$, & Solved & \cite{ribet1997equation,MR1611640}, \\& $\alpha \geq 0$, $s \neq 31$ && \cite[Th.15.5.3]{MR2312338} \\\hline $x^p+y^p=psz^p$ & $p \geq 5$ regular, & Solved & \cite{sitaraman2000fermat} \\ & $s$ only divisible by primes && \\ & $kp-1$ with $\gcd(k,p)=1$ && \\\hline \hline \multicolumn{4} {|c|}{ The case $a=1$}\\\hline \hline $x^p+3y^p=5z^p$ & $p \geq 5$ and $2p+1$ & Solved & \cite{kraus1996equations} \\ & or $4p+1$ is prime && \\\hline $x^p+3y^p=5z^p$ & $p \geq 5$ & Partial & \cite{kraus1996equations} \\\hline $x^p+3y^p=5z^p$ & $5 \leq p < 10^7$ & Solved & \cite{Kraus2002} \\\hline $x^p+2^{\beta} y^p= s^{\alpha}z^p$ & $s > 3$, $s\neq 2^n \pm 1$, & Solved & \cite{MR1611640} \\ & $p >(1+\sqrt{8(s+1)})^{2(s-1)}$, && \\ & $\alpha,\beta \geq 0$, $p \nmid \alpha$ \\\hline $x^p+16y^p=s^{\alpha}z^p$ & $\alpha \geq 0$, $s\geq 3$, $s \neq 17$, & Solved & \cite{MR1611640} \\ & $p \neq s$, $p \geq 5$, && \\ & $p > \left(1+\sqrt{(s+1)/6}\right)^{(s+1)/6}$ && \\ \hline \hline \multicolumn{4} {|c|}{ General $a,b,c$}\\\hline \hline $ax^n+by^n=16z^n$ & $(a,b) \in \{(25,23^4), (5^8,37), $ & Solved & \cite{Dahmen_2012} \\ & $(5^7,59^7),(7,47^7),(11,85^2) \}, $ && \\ & $n \geq 5$ odd && \\\hline $3x^p+4y^p=5z^p$ & $p \equiv 5$ (mod 8) or & Solved & \cite{Kraus2002,FREITAS2016751} \\ & $p \equiv 19$ (mod 24) && \\\hline $3x^p+8y^p=21z^p$ & $p \equiv 5$ (mod 8) or & Solved & \cite{FREITAS2016751} \\ & $p \equiv 23$ (mod 24) && \\\hline $ax^p+by^p=16cz^p$ & Prime factors of $abc$ & Solved & \cite{MR4203704} \\ & are 1 (mod 3), && \\ & $p$ sufficiently large && 
\\\hline $ax^p+by^p=16cz^p$ & $n>0$ not a divisor of & Solved & \cite{MR4203704} \\ & 14,16 or 18, prime factors && \\ & of $abc$ are $\pm 1$ (mod $n$), && \\ & $p$ sufficiently large && \\\hline $ax^p+by^p=2^rcz^p$ & $r \geq 0$, $r \neq 1$, & Solved & \cite{MR4203704} \\ & prime factors of $abc$ && \\ & are 1 (mod 12), && \\ & $p$ sufficiently large && \\\hline $ax^p+by^p=cz^p$ & Conditions in & Solved & \cite{MR4203704} \\ & Theorem \ref{th:qoddprime} && \\\hline $ax^p+by^p+2^ncz^p=0$ & Any of the conditions & Solved & \cite{MR4203704} \\ & in Theorem \ref{th:jacobi} && \\\hline $ax^p+2^rby^p+2^rcz^p=0$ & Any of the conditions & Solved & \cite{MR4203704} \\ & in Theorem \ref{th:jacobi} && \\\hline $ax^n+by^n=cz^n$ &Conditions in & Solved & \cite{POWELL198434,MR779375} \\ & Theorem \ref{th:powell} && \\\hline \hline \multicolumn{4} {|c|}{\textbf{Signature $(p,p,r)$} }\\\hline \hline \multicolumn{4} {|c|}{ The case $a=b=1$ and fixed $p$}\\\hline \hline $x^3+y^3=2z^{2r}$ & $r \geq 2$ & Solved & \cite{zhang2014} \\\hline $x^3+y^3=s^\alpha z^r$ & $\alpha \geq 1$, $r \geq s^{2s}$, $s\geq 5$, & Solved & \cite{MR3830208} \\ & $s\not\in S_0$, see \eqref{eq:S0def} && \\\hline $x^3+y^3=s^{\alpha}z^r$ & $s=2^a 3^b-1$, $a \geq 5$, & Solved & \cite{MR3830208} \\ & $b \geq 1$, $\alpha \geq 1$, $r>s^{2s}$, && \\ & $(\alpha,r,a,b)$ do not && \\ & satisfy \eqref{cd:th2:bennett2017} && \\\hline $x^3+y^3=s z^r$ & $s\not\in T$, see Theorem \ref{th3:bennett2017}, & Solved & \cite{MR3830208} \\ & positive proportion of $r$ && \\\hline $x^3+y^3=sz^r$ & $(s,r)$ satisfy Table \ref{tb:MR3830208}, & Solved & \cite{MR3830208} \\ & $r \geq s^{2s}$ && \\ \hline $x^3+y^3=5z^r$ & $r\equiv 13,19$ or 23 (mod 24) & Solved & \cite{MR3830208} \\ \hline $x^4+y^4=sz^r$ & $s \in \{73,89,113 \}$, $r >13$ & Solved & \cite{MR2139003} \\\hline $x^5+y^5=s^{\alpha}z^r$ & $r \geq s^{13s}$, $\alpha \geq 1$, & Solved & \cite{bruni2015twisted} \\ & $s \not \in {\cal{W}}_5$ prime, && \\ & see Theorem \ref{th:bruni2015} && \\\hline $x^5+y^5=2z^r$ & & Partial & \cite{signature2019multi} \\\hline $x^p+y^p=3z^n$ & $p \in \{5,7, 13 \}$, $n \geq 2$ & Solved & \cite{signature2019multi,10.5565/PUBLMAT6722309,billerey2024darmonsprogramgeneralizedfermat} \\\hline $x^5+y^5=2c z^r$ & $c$ only divisible by primes & Solved & \cite{44370229} \\ & $c_i \not \equiv 1$ (mod 5), $r > 13$, && \\ & $r \equiv 1$ (mod 4) or && \\ & $r \equiv \pm 1$ (mod 5) && \\\hline $x^5+y^5=3c z^r$ & $c$ only divisible by primes & Solved & \cite{44370229} \\ & $c_i \not \equiv 1$ (mod 5), $r > 73$, && \\ & $r \equiv 1$ (mod 4) or && \\ & $r \equiv \pm 1$ (mod 5) && \\\hline $x^5+y^5=7z^r$ & $r \geq 13$ & Solved & \cite{billerey2008solvingfermattypeequationsx5y5dzp} \\\hline $x^5+y^5=13z^r$ & $r \geq 19$ & Solved & \cite{billerey2008solvingfermattypeequationsx5y5dzp} \\\hline $x^5+y^5=2^{\alpha} 3^{\beta} 5^{\gamma} z^r$ & $r \geq 7$, $\alpha \geq 2$, $\beta, \gamma \geq 0$ & Partial & \cite{Billerey_2007,billerey2008solvingfermattypeequationsx5y5dzp} \\\hline $x^5+y^5=2^{\alpha} 3^{\beta} 5^{\gamma} z^r$ & $r \geq 13$, $\alpha \geq 2$, $\beta, \gamma \geq 0$ & Solved & \cite{billerey2008solvingfermattypeequationsx5y5dzp} \\\hline $x^5+y^5=2^{\alpha}11z^r$ & $r > 19$, $\alpha \geq 2$ & Solved &\cite{noubissie2020generalized} \\\hline $x^5+y^5=2^{\alpha}11z^r$ & $r > 19$, $\alpha=0,1$ & Partial &\cite{noubissie2020generalized} \\\hline $x^6+y^6=2z^r$ & $r \geq 5$ & Solved & \cite{kraus2002question} \\\hline $x^p-y^p=cz^r$ & $(p,r) \in \{(6,2),(8,2),(8,3), $ & Solved & 
\cite{kraus2002question} \\ & $(9,3),(10,2),(12,2)\}, $ && \\ & $c \in {\cal{D}}_p$, see Theorem \ref{th:kraus2002} && \\\hline $x^p-y^p=cz^2$ & $p\in\{5,7\}$, $p \vert c$, & Solved & \cite{kraus2002question} \\ & $c \in {\cal{D}}_p$, see Theorem \ref{th:kraus2002} && \\\hline $x^{12}-y^{12}=cz^n$ & $n \geq 3$, $c \geq 1$, & Solved & \cite{kraus2002question} \\ & $l$ prime divisors of $c$, && \\ & $\nu_l(c)<12$ && \\ & and $l \not \equiv 1$ (mod 4) && \\\hline $x^9+y^9=2^nz^2$ & $n \geq 0$ & Solved & \cite{MR995897} \\\hline $x^{13}+y^{13}=s \gamma z^r$ & $s \in \{3,5,7,11 \}$, & Partial & \cite{FreitasDieulefait2013} \\ & $\gamma$ divisible only by primes && \\ & $\gamma_i \not \equiv 1$ (mod 13), && \\ & $r \not \in R$, see \eqref{def:R} && \\\hline $x^{14}+y^{14}=2^{\alpha} 3^{\beta} 5^{\gamma} c z^r$ & $r \geq 17$, $\alpha \geq 2$ or & Solved & \cite{freitas2015recipes} \\ & $\beta> 0$ or $\gamma > 0$ && \\ & $c$ only divisible by primes && \\ & $c_i \not \equiv 0,1$ (mod 7) && \\\hline $x^{26}+y^{26}=10 \gamma z^r$ &$\gamma$ only divisible by primes & Solved & \cite{freitas2014criteriairreducibilitymodp} \\ & $\gamma_i \not \equiv 1$ (mod 13), && \\ & $r \not \in R$, see \eqref{def:R} && \\\hline \hline \multicolumn{4} {|c|}{ The case $a=b=1$ and fixed $r$}\\\hline \hline $x^5+y^5=2^{\alpha}5^{\beta}s^{\gamma}z^2$ & $s=\pm 3$ (mod 10), & Solved & \cite{soderlund2011diophantine,soderlund2013note} \\ & $\alpha \geq 0$, $\beta \geq 0$, $\gamma \geq 0$ && \\\hline $x^n+y^n=cz^2$ & $c \in \{ 2,3,5,6,10, $ & Solved & \cite{Bennett_Skinner_2004} \\ & $ 11,13,17 \}$, $n \geq 5$ && \\ \hline $x^p+y^p=cz^2$ & $c \geq 5$, $c \neq 7$, $p >c^{12c^2}$ & Partial & \cite{bennett2006diophantine} \\\hline $x^p+y^p=2sz^2$ & $s \geq 5$, $p >s^{132s^2}$ & Solved & \cite{bennett2006diophantine} \\\hline $x^p+y^p=2^{\alpha}sz^2$ & $p>s^{27s^2}$, $s \neq 7$, $\alpha \geq 1$ & Solved & \cite{mulholland2006elliptic} \\ \hline $x^p+y^p=2^{\alpha}cz^2$ & $c$ odd square-free, $3 \nmid c$, & Solved & \cite{MR2737959} \\ & $7 \nmid c$, $p>c^{132c^2}$, $\alpha \geq 1$ && \\\hline $x^p+y^p=cz^2$ & $c \geq 3$, $c$ square-free, & Solved & \cite{MR2310336} \\ & $c$ only divisible by primes && \\ & $c_i \not \equiv 1$ (mod $p$), && \\ & $p \in \{11,13,17\}$ && \\\hline $x^n+y^n=cz^3$ & $c \in \{2,3,4,5 \}$, $n\geq 4$ & Solved &\cite{bennett2004ternary} \\\hline $x^p+y^p=s^{\alpha} z^3$ & $\alpha \geq 0$, $p >s^{4s^2}$ & Solved & \cite{bennett2004ternary} \\\hline $x^p+y^p=cz^3$ & $c \neq 0$ cube-free, $3 \vert c$, & Solved & \cite{KarolinaKrawciow2011} \\ & $C=\mathrm{rad}(c)$, $p > C^{10C^2}$ && \\\hline \hline \multicolumn{4} {|c|}{ The case $a=c=1$ and fixed $r$}\\\hline \hline $x^5+2y^5=z^2$ & & Solved & \cite{ivorra2003equations} \\\hline $x^p+2^{\alpha}y^p=z^2$ & $\alpha \geq 2$, $p \geq 5$ & Solved & \cite{ivorra2003equations} \\ &&& \cite[Sec 15.3]{MR2312338} \\\hline $x^p+s^{\alpha}y^p=z^3$ & $\alpha \geq 0$, $p>s^{2s}$, & Solved & \cite{bennett2004ternary} \\ & $s \neq s_1^3\pm 3^{s_2}$ for $s_2 \neq 1$ && \\\hline $x^p+s^{\alpha}y^p=z^3$ & $s \in \{ 5,11,13,23,29,31, $ & Solved & \cite{bennett2004ternary} \\ & $41,43,47,53,59,61,67, $ &&\\ & $71,79,83,97 \}$ && \\ & $\alpha \geq 1$, $p \geq 11$, $p \nmid s^2-1$, && \\ & $(s, p) \not\in \{(13, 19),(29, 11), $ && \\ & $(43, 13),(47, 13),(59, 11), $ && \\ & $(61, 61),(67, 73),(79, 97), $ && \\ & $(97, 13),(97, 79)\}$ && \\\hline $x^4+by^4=z^5$ & $b \in \{3,11,19,43,67,163\}$ & Solved & \cite[Th.2.8]{soderlund2019some} \\\hline \hline \multicolumn{4} {|c|}{ The 
case $a=1$ and fixed $p$}\\\hline \hline $x^5 + 7 y^5 = 2^{\alpha} 5^{\beta} z^r$ & $r >41, \alpha \in \{1,2,3,4\}, \beta \in \{2,3,4\}$ & Solved & \cite{azon2025} \\\hline $x^5+7y^5=z^r$ & $r>41$ & Partial & \cite{azon2025} \\\hline \hline \multicolumn{4} {|c|}{ The case $a=1$ and fixed $r$}\\\hline \hline $x^p+2^{\beta}y^p=2z^2$ & $p \geq 7$, $1 \leq \beta \leq p-1$ & Solved & \cite{ivorra2003equations} \\ &&& \\\hline $x^p+2^{\alpha}y^p=cz^2$ & $c \in \{3,5,7,11,13,15,17 \}$, & Solved & \cite{Bennett_Skinner_2004} \\ & $\alpha \geq \alpha_0$, $p > c$, && \\ & $(c,\alpha,p) \neq (11,3,13)$, && \\ & where $(c,\alpha_0)$ && \\ & is given by \eqref{eq:BennettSkinner2004} && \\\hline $x^p+s^{\alpha}y^p=2z^2$ & $p \geq 11$, $s \in \{5,11,13 \}$, & Solved & \cite{Bennett_Skinner_2004} \\ & $s \neq p$, $\alpha \geq 0$ && \\\hline $x^p+2^\alpha y^p=sz^2$ & $s \neq (2^k \pm 1)/d^2$, $k,d \in \mathbb{Z}$, & Solved & \cite{zhang2012} \\ & $\alpha \geq 0$, $\alpha \neq 1$, $p > s^{8s^2}$ && \\\hline $x^p+2^\alpha y^p=sz^2$ & $\alpha \geq 2$, $s >5$, & Solved & \cite{zhang2012} \\ & $s =\pm 3$ (mod 8), && \\ & $p >s^{8s^2}$ && \\\hline $x^p + s^{\alpha}y^p = 3^{\beta}z^3$ & $s \not\in \{ 5, 3t^3 \pm1, $ & Solved & \cite{bennett2004ternary} \\ & $9t^3 \pm 1 : t \in \mathbb{N}\}$ && \\ & $\alpha, \beta \geq 1$, $3 \nmid \beta$, $p>s^{28s}$ && \\\hline $x^p+s^{\alpha}y^p=3^{\beta}z^3$ & $p \geq 7$, $s \in \{7,11,13\}$, & Solved & \cite{bennett2004ternary} \\ & $(s,p) \neq (7,13),(13,7)$, && \\ & $3 \nmid \beta$, $\alpha,\beta \geq 1$ && \\\hline $x^p+3^{\alpha} y^p=c^{\beta} z^3$ & $c \in \{2,3,5,7,11,13, $ & Solved & \cite{bennett2004ternary} \\ & $15,17,19\}$, && \\ & $(c,p) \not \in \{(7,11),(11,13)\}$, && \\ & $(\alpha,c) \not \in\{(1,t):t=2, \text{ or }$ && \\ & $ t \geq 11\}, $ && \\ & $\alpha, \beta \geq 0$, $p>\max\{c,4\}$ && \\\hline $x^p+s^{\alpha}y^p=Mz^3$ & Conditions of & Solved & \cite{KarolinaKrawciow2011} \\ & Theorem \ref{th:KarolinaKrawciow2011} && \\\hline \hline \multicolumn{4} {|c|}{ The case $c=1$ and fixed $p$ }\\\hline \hline $ax^3 + by^3 = z^n$ & Conditions of & Partial & \cite[Th.13.2]{bennett2013klein} \\ & Theorem \ref{th:bennett2013klein} && \\\hline \hline \multicolumn{4} {|c|}{ The case $c=1$ and fixed $r$ }\\\hline \hline $ax^p+by^p=z^2$& $p \geq 11$, $ab \in \{2^{\alpha}11^{\beta}, 2^{\alpha}13^{\beta}, $ & Solved & \cite{Bennett_Skinner_2004} \\ & $2^{\alpha}19^{\beta},2^{\alpha}29^{\beta},2^{\alpha}43^{\beta}, $ && \\ & $2^{\alpha}53^{\beta},2^{\alpha}59^{\beta},2^{\alpha}61^{\beta},2^{\alpha}67^{\beta} \}$, && \\ & $a,b$ coprime, $p \nmid ab$, && \\ & $\alpha = 0$ or $\alpha \geq 3$, $\beta \geq 1$, && \\ & $ab,p,\alpha,\beta$ not in Table \ref{tb:ABvalsodd} && \\ \hline $ax^p+by^p=z^2$& $p \geq 11$, $ab \in \{ 2 \cdot 19^{\beta}, 4 \cdot 11^{\beta}, $ &Solved & \cite{Bennett_Skinner_2004} \\ & $4 \cdot 19^{\beta},4 \cdot 43^{\beta}, 4 \cdot 59^{\beta}, $ && \\ & $4\cdot 61^{\beta}, 4\cdot 67^{\beta} \}, $ && \\ & $a,b$ coprime, $p \nmid ab$, $\beta \geq 1$, && \\ & $ab,p,\alpha,\beta$ not in Table \ref{tb:ABvalsodd} && \\ \hline $ax^p+by^p=z^2$& $a,b$ coprime, $ab=2^{\alpha}s^{\beta}t^{\gamma}$, & Solved & \cite{Bennett_Skinner_2004} \\ & $\beta,\gamma \geq 0$, $\alpha \geq 6$, && \\ & $(s,t) \in \{ (3, 31) (\beta \geq 1), $ && \\ & $(5, 11) (\alpha \geq 7), (5, 19), $ && \\ & $(5, 23) (\beta \geq 1), (7, 19) (\gamma \geq 1), $ && \\ & $ (11, 13),(11, 23) (\beta \geq 1), $ && \\ & $ (11, 29),(11, 31) (\beta \geq 1), $ && \\ & $ (13, 31) (\beta \geq 1),(19, 23) (\beta 
\geq 1)$, && \\ & $(19, 29),(29, 31) (\beta \geq 1) \}, $ && \\ & $p \geq 11$, $p \nmid ab$, && \\ & $ab,p,\alpha,\beta$ not in Table \ref{tb:ABvalseven} && \\\hline $ax^p+by^p=z^2$ & $ab=2^\alpha 3^\beta$, $\gcd(a,b)=1$, & Solved & \cite{MR3222262} \\ & $\alpha \geq 6$, $\beta \geq 0$, $p \geq 7$, $p \nmid ab$ && \\\hline \hline \multicolumn{4} {|c|}{The general $a,b,c$ and fixed $r$ }\\\hline \hline $5^{\alpha} x^p+64y^p=3z^2$ & $p \geq 7$, $\alpha \geq 1$ & Partial & \cite{armandphdthesis,Noubissie21} \\ \hline $ax^p+by^p=cz^2$ & $(a,b,c) \in\{ (2^n,l^m,1), $ & Solved & \cite{ivorra2006quelques} \\ & $(2^n l^m, 1,1),(1,l^m,2) \}$, && \\ & conditions below \eqref{abc_in_axppbypmcz2} && \\\hline $4x^p+y^p=3z^2$ & $p \geq 7$ & Solved & \cite{ivorra2006quelques} \\\hline $64x^p+y^p=7z^2$ & $p \geq 11$ & Solved & \cite{ivorra2006quelques} \\ \hline $3^{31}x^{34}-4y^{34}=7z^2$ & & Partial & \cite{CHALUPKA2022153} \\ \hline $2^{\alpha}x^p+27y^p=s^{\beta} z^3$ & $s \in \{7,13\}$, $p >s $, $\alpha \geq 1$, & Solved & \cite{armandphdthesis,Noubissie21}\\ & $ \beta \geq 1$, $\beta \equiv 1$ (mod 3) && \\\hline $7x^p+y^p=3z^5$ & $p>71$ & Partial & \cite{azon2025} \\\hline $7x^p + 2^{\alpha} 5^{\beta} y^p = 3z^5$ & $p>71, \alpha \in \{1,2,3,4\}, \beta \in \{3,4\}$ & Solved & \cite{azon2025} \\\hline \hline \multicolumn{4} {|c|}{ \textbf{Signature} $(p,q,r)$ }\\\hline \hline \multicolumn{4} {|c|}{The case $a=c=1$ }\\\hline \hline $x^{2n}+3y^2=z^3$ & $n \geq 8$ & Solved & \cite{noubissie2020generalized} \\\hline $x^{2}+3y^{2p}=z^3$ & $p >1964$ & Partial & \cite{noubissie2020generalized} \\\hline $x^2+2y^3=z^{3p}$ & $p \geq 3$ & Partial & \cite{noubissie2020generalized} \\\hline $x^2+by^{2p}=z^3$ & $p \geq 7$, $\alpha \geq 6$, $\beta \geq 0$, & Solved & \cite{MR3222262} \\ & $b \in \{2^\alpha,4 \cdot 3^{2\beta}\}$ && \\\hline $x^2+2^\alpha 3^\beta 31^\gamma y^{2p}=z^3$ & Conditions of & Solved & \cite{MR3222262} \\ &Theorem \ref{th2:MR3222262} && \\\hline $x^{2p}+by^2=z^5$ & $p \geq 7$, $b$ positive odd, & Partial & \cite{zhang2013} \\ & $5 \nmid h(-b)$ && \\\hline $x^2+2^{\alpha}y^{2p}=z^5$ & $\alpha \geq 2$, $p \geq 7$ & Solved & \cite{zhang2013} \\\hline $x^2+2^{\alpha}5^{\beta}11^{\gamma}y^{2p}=z^5$ & $\alpha \geq 3$, $p >11$, & Solved & \cite{zhang2013} \\ & $\beta \geq 1$, $\gamma \geq 0$ && \\\hline $x^2+2^{\alpha}5^{\beta}19^{\gamma}y^{2p}=z^5$ & $\alpha \geq 2$, $p \geq11$, $p \neq 19$, & Solved & \cite{zhang2013} \\ & $\beta \geq 1$, $\gamma \geq 0$ && \\ \hline $x^2+2^{\alpha}5^{\beta}23^{\gamma}y^{2p}=z^5$ & $\alpha \geq 2$, $\beta \geq 1$, $\gamma \geq 1$, $2\vert \alpha \beta \gamma$ & Solved & \cite{zhang2013} \\ & $p \geq 13$, $p \neq 23$ && \\\hline $x^p+4 \cdot 3^{2\beta+1}y^{2p}=z^2$ & $p \geq 7$, $\beta \geq 0$ & Solved & \cite{MR3222262} \\ \hline $x^p+3^{2p-6}y^{2p}=z^2$ & $p \geq 7$, & Partial & \cite{noubissie2020generalized} \\ & $p \equiv 13$ (mod 24) && \\\hline \hline \multicolumn{4} {|c|}{The general $a,b,c$ }\\\hline \hline $x^2+3y^{2p}=4z^3$ & $p \geq 3$ & Partial & \cite{noubissie2020generalized} \\\hline $x^{2p}+3y^{2}=4z^3$ & $p\geq 3$ & Partial & \cite{noubissie2020generalized} \\\hline $7x^2+y^{2p}=4z^3$ & $5 \leq p <10^9$, $p \neq 7,13$ & Solved & \cite{CHALUPKA2022153} \\\hline $7x^2+y^{2p}=4z^3$ & $p\equiv 3,55$ (mod 106) or & Solved & \cite{CHALUPKA2022153} \\ & $p \equiv 47,65,113, $ && \\ & 139,143 or 167 (mod 168)&& \\\hline $ax^2+y^{2n}=4z^3$ & $a \in \{11,19,43,67,163\}$, & Solved & \cite{CHALUPKA2022153} \\ & $n \geq 2$ && \\\hline $x^2+ay^{2n}=4z^3$ & $a 
\in \{11,19,43,67,163\}$, & Solved & \cite{CHALUPKA2022153} \\ & $n \geq 2$ && \\\hline $x^{2p}+4y^p=21z^2$ & $p \equiv 5$ (mod 6) & Solved & \cite{CHALUPKA2022153} \\\hline $x^{2p}+4y^p=21z^2$ & $p \geq 11$ & Partial & \cite{CHALUPKA2022153} \\\hline $3^{2p-3}x^{2p}+4y^p=7z^2$ & $11 \leq p<10^9$, $p \neq 13,17$ & Partial & \cite{CHALUPKA2022153} \\ \hline $3^{2p-3}x^{2p}+4y^p=7z^2$ & $p\equiv 3,55$ (mod 106) or & Solved & \cite{CHALUPKA2022153} \\ & $p \equiv 47,65,113, $ && \\ & 139,143 or 167 (mod 168)&& \\\hline $x^{38}+4y^{19}=21z^2$ & & Partial& \cite{CHALUPKA2022153} \\ \hline $ax^2+s^ky^{2n}=z^n$ & See conditions below \eqref{eq:garcia24} & Partial & \cite{MR4793291} \\ \hline $3^{2p-3}7x^{2p}+4y^p=z^2$ & & Partial & \cite{CHALUPKA2024} \\ \hline $x^2+7y^{2n}=4z^{12}$ & $n \geq 2$ & Solved & \cite{CHALUPKA2024} \\\hline $ax^5+by^3=cz^{11}$ & $(a,b,c) \in \cal{C}$, see \eqref{eq:putz} & Solved & \cite{ThesisPutz} \\\hline \end{longtable} \footnotetext{{\url{https://mathoverflow.net/questions/480114}}} \section*{Acknowledgements} The authors sincerely thank Frits Beukers, Andrew Bremner, David Zureick-Brown, Tim Browning, Carmen Bruni, Pedro Jos\'e Cazorla Garc\'\i a, Andrzej D\c abrowski, Rainer Dietmann, Jordan Ellenberg, Victor Flynn, Nuno Freitas, Andrew Granville, Alain Kraus, Florian Luca, Lo\"ic Merel, Oana P\u{a}durariu, Kenneth Ribet, Jeremy Rouse, Samir Siksek, Sankar Sitaraman, G\"okhan Soydan, Alain Togb\'e, Lucas Villagra Torcomian, Soroosh Yazdani, Konstantine Zelator and Zhongfeng Zhang for their responses, feedback and corrections on earlier versions of this paper. This research used the ALICE High Performance Computing facility at the University of Leicester. We also thank the referees for their useful comments to improve the quality of this paper. \bibliographystyle{habbrv} \bibliography{genfermat.bib} \enddocument
2412.16193v1
http://arxiv.org/abs/2412.16193v1
Arithmetic properties of $k$-tuple $\ell$-regular partitions
\documentclass[12pt, reqno]{amsart} \usepackage[utf8]{inputenc} \usepackage{amssymb,mathtools,cite,enumerate,color,eqnarray,hyperref,amsfonts,amsmath,amsthm,setspace,tikz,verbatim, times} \addtolength{\textheight}{\topskip} \usepackage[a4paper,top=2cm,bottom=2cm,left=2.2cm,right=2.2cm]{geometry} \usepackage[T1]{fontenc} \usepackage[greek,english]{babel} \numberwithin{equation}{section} \definecolor{ao(english)}{rgb}{0.0, 0.5, 0.0} \hypersetup{colorlinks=true, linkcolor=ao(english),citecolor=ao(english)} \usepackage[normalem]{ulem} \newcommand{\manjil}[1]{\textcolor{blue}{#1}} \newcommand{\abhishek}[1]{\textcolor{red}{#1}} \newcommand{\hirak}[1]{\textcolor{violet}{#1}} \newcommand{\james}[1]{\textcolor{brown}{#1}} \newcommand{\hemjyoti}[1]{\textcolor{green}{#1}} \newcommand\mycom[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \newcommand{\op}{\overline{p}} \newcommand{\opt}{\overline{OPT}} \newcommand{\btt}{\overline{b}} \usepackage{color, xcolor} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{definition}{Definition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{remark}{Remark}[section] \title[Arithmetic Properties of $k$-tuple $\ell$-regular Partitions]{Arithmetic Properties of $k$-tuple $\ell$-regular Partitions} \author[H. Nath]{Hemjyoti Nath} \address[H. Nath]{Lokhra chariali, Guwahati 781040, Assam, India} \email{[email protected]} \author[M. P. Saikia]{Manjil P. Saikia} \address[M. P. Saikia]{Mathematical and Physical Sciences division, School of Arts and Sciences, Ahmedabad University, Ahmedabad 380009, Gujarat, India} \email{[email protected]} \author[A. Sarma]{Abhishek Sarma} \address[A. Sarma]{Department of Mathematical Sciences, Tezpur University, Napaam, Tezpur 784028, Assam, India} \email{[email protected]} \linespread{1.05} \keywords{Integer partitions, Ramanujan-type congruences, modular forms.} \subjclass[2020]{11P81, 11P82, 11P83, 05A17.} \date{} \begin{document} \begin{abstract} In this paper, we study arithmetic properties satisfied by $k$-tuple $\ell$-regular partitions. A $k$-tuple of partitions $(\xi_1, \xi_2, \ldots, \xi_k)$ is said to be $\ell$-regular if all the $\xi_i$'s are $\ell$-regular. We study the cases $(\ell, k)=(2,3)$, $(4,3)$ and $(\ell, p)$, where $p$ is a prime, as well as the general case when both $\ell$ and $k$ are unrestricted. Using elementary means as well as the theory of modular forms, we prove several infinite families of congruences and density results for these families of partitions. \end{abstract} \maketitle \vspace{5mm} \section{Introduction} A partition $\lambda$ of a natural number $n$ is a nonincreasing sequence of natural numbers whose sum is $n$. If $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$ is such that $\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_k$ and $\sum\limits_{i=1}^k \lambda_i=n$, then $\lambda$ is called a partition of $n$, and the $\lambda_i$'s are called the parts of the partition $\lambda$. For instance, the $7$ partitions of $5$ are \[ 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1. \] We denote by $p(n)$ the number of partitions of $n$, and its generating function was given by Euler to be \[ \sum_{n\geq 0}p(n)q^n=\frac{1}{\prod_{i=1}^\infty(1-q^i)}. \] For ease of notation, we write $(a;q)_\infty:=\prod\limits_{i=0}^\infty(1-aq^i)$ and $f_k:=(q^k;q^k)_\infty$. Thus, Euler's generating function becomes \[ \sum_{n\geq 0}p(n)q^n=\frac{1}{(q;q)_\infty}=\frac{1}{f_1}.
\] Partitions have been studied since the time of Euler, and several well-known mathematicians have explored their properties. Prominent among them is Ramanujan, who in 1920 \cite{Ramanujan} proved the following amazing congruences that the partition function satisfies: for all $n\geq 0$, we have \begin{align*} p(5n+4)&\equiv 0\pmod 5,\\ p(7n+5)&\equiv 0\pmod 7,\\ p(11n+6)&\equiv 0\pmod{11}. \end{align*} Since then, one strand of research related to partitions has been to find such Ramanujan-type congruences for partitions as well as for generalized partitions. For a general overview of the area of partitions, we refer the reader to the excellent books by Andrews \cite{gea1} and Johnson \cite{john}. Among the class of generalized partitions, a frequently studied class is that of $\ell$-regular partitions, for $\ell>1$. By an $\ell$-regular partition of $n$ we mean a partition of $n$ in which no part is divisible by $\ell$. Let $b_\ell(n)$ denote the number of $\ell$-regular partitions of $n$; then we have the generating function \[ \sum_{n\geq 0}b_\ell(n)q^n=\frac{f_\ell}{f_1}. \] In this paper, we are interested in a more general class of partitions, which we call $k$-tuple $\ell$-regular. A partition $k$-tuple $(\xi_1, \xi_2, \ldots, \xi_k)$ is called a $k$-tuple $\ell$-regular partition if all of the $\xi_i$'s are themselves $\ell$-regular partitions. Let us denote the number of such partitions of $n$ by $T_{\ell,k}(n)$. It is easy to see that its generating function is given by \begin{equation}\label{eq:gf-lk} \sum_{n\geq 0}T_{\ell,k}(n)q^n=\dfrac{f_\ell^k}{f_1^k}. \end{equation} When $k=3$, we suppress the value of $k$ and just use the notation $T_{\ell,3}(n)=T_\ell(n)$. So, we get \begin{equation}\label{e1.0.0.0} \sum_{n\geq 0} T_{\ell}(n)q^n = \dfrac{f_\ell^3}{f_1^3}. \end{equation} Although $\ell$-regular partitions are very well studied, it seems that $k$-tuple $\ell$-regular partitions have not received the same attention. In this paper, we remedy this situation and study various arithmetic properties that the $T_{\ell, k}(n)$ function satisfies. The case $\ell=k=3$ was first studied by Adiga and Dasappa \cite{AdigaDasappa}, the cases $\ell=3$ and $k=9, 27$ were studied by Baruah and Das \cite{BaruahDas}, the case $\ell=3, k=6$ was studied by Murugan and Fathima \cite{MuruganFathima}, and very recently Nadji and Ahmia \cite{NadjiAhmia} studied the cases \(\ell=2, k=3\) and $\ell=k=3$. Here, we not only study the cases \(\ell=2, k=3\) and $\ell=k=3$, extending some of the results of Nadji and Ahmia \cite{NadjiAhmia}, but also the cases $(\ell, k)=(4,3)$ and $(\ell, p)$ for a prime $p$, as well as the more general case when $\ell$ and $k$ are unrestricted. Our proof techniques come from both elementary means as well as from the theory of modular forms. We begin our results by first proving a general congruence that $T_{\ell,p}(n)$ satisfies, where $p$ is a prime. The proof is short and simple, so we give it here. \begin{theorem} Let $p$ be a prime and let $\ell>1$ be an integer. Then \begin{align} T_{\ell,p}(pn+r)\equiv 0 \pmod p\label{cong:0 mod p} \end{align} for all $n\geq 0$ and $r\in\{1,2,\ldots, p-1\}$. \end{theorem} \begin{proof} Putting $k = p$ in \eqref{eq:gf-lk}, we have \begin{align*} \sum_{n\geq 0}T_{\ell, p}(n)q^n&=\dfrac{f_\ell^p}{f_1^p}\equiv\dfrac{f_{\ell p}}{f_p}\pmod p. \end{align*} Since the right-hand side is a power series in $q^p$, comparing the coefficients of $q^{pn+r}$ for $r\in\{1,2,\ldots, p-1\}$ on both sides, we arrive at \eqref{cong:0 mod p}.
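For instance, for $p=3$ the above gives $T_{\ell,3}(3n+1)\equiv T_{\ell,3}(3n+2)\equiv 0 \pmod{3}$ for every $\ell>1$ and all $n\geq 0$; as a quick check, $T_{2}(1)=3$, $T_{2}(2)=6$, $T_{2}(4)=24$ and $T_{2}(5)=42$.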
\end{proof} \noindent In the above proof, we have used the following easily verifiable identity: for a prime $p$ and positive integers $k$ and $l$, we have \begin{align}\label{e0.1} f_{k}^{p^l} \equiv f_{pk}^{p^{l-1}} \pmod{p^l}. \end{align} We will use this fact without further comment in the sequel. Before proceeding to our other results, we state the following result without proof, which follows very easily from an application of \eqref{e2.0.3.3} and \eqref{e0.2}, stated in the next section. \begin{theorem}\label{t0.1} For all $n\geq0$, we have \begin{equation}\label{e0.2.2} T_{2}(9n+1) \equiv \begin{cases} 3 \pmod{6} & \text{if } n=\frac{j(j+1)}{2} \text{ for some integer } j\geq 0,\\ 0 \pmod{6} & \text{otherwise}. \end{cases} \end{equation} \end{theorem} The next few results give several infinite families of congruences for $T_{\ell}(n)$ when $\ell=2,4$. \begin{theorem}\label{c1.4} For all $n\geq 0$ and $\alpha\geq 0$, we have \begin{align} T_{2}\left(3^{4\alpha+2}n+\sum_{i=0}^{2\alpha}3^{2i}+3^{4\alpha+1}\right)&\equiv 0\pmod{24}, \label{c0.1.4}\\ T_{2}\left(3^{4\alpha+2}n+\sum_{i=0}^{2\alpha}3^{2i}+2\cdot 3^{4\alpha+1}\right)&\equiv 0\pmod{24}, \label{c1.1.4}\\ T_{2}\left(3^{4\alpha+4}n+\sum_{i=0}^{2\alpha+1}3^{2i}+3^{4\alpha+3}\right)&\equiv 0\pmod{24}, \label{c2.1.4}\\ T_{2}\left(3^{4\alpha+4}n+\sum_{i=0}^{2\alpha+1}3^{2i}+2\cdot 3^{4\alpha+3}\right)&\equiv 0\pmod{24}. \label{c3.1.4} \end{align} \end{theorem} \begin{remark} Nadji and Ahmia \cite[Theorem 3]{NadjiAhmia} proved the above congruences modulo $12$. \end{remark} \begin{theorem}\label{c1.4.1} For all $n\geq 0$ and $\alpha \geq 0$, we have \begin{align} T_{4}\left( 3^{2\alpha +2 }n + \dfrac{17 \cdot 3^{2\alpha+1}-3}{8} \right) & \equiv 0 \pmod{3}, \label{e3.0}\\ T_{4}\left( 3^{2\alpha +3 }n + \dfrac{19 \cdot 3^{2\alpha+2}-3}{8} \right) & \equiv 0 \pmod{3}, \label{e3.1}\\ T_{4}\left( 27 \cdot 5^{2\alpha}n + \dfrac{171 \cdot 5^{2\alpha}-3}{8} \right) & \equiv 0 \pmod{3}. \label{e2.9} \end{align} \end{theorem} Theorem \ref{c1.4} is proved in Section \ref{sec:pf-1} and Theorem \ref{c1.4.1} is proved in Section \ref{sec:pf-2}. A corollary of Theorem \ref{c1.4.1} is also given in Section \ref{sec:pf-2}. For the next few results, we will need the Legendre symbol, which for a prime $p\geq3$ is defined as \begin{align*} \left(\dfrac{a}{p}\right)_L:=\begin{cases}\quad1,\quad \text{if $a$ is a quadratic residue modulo $p$ and $p\nmid a$,}\\\quad 0,\quad \text{if $p\mid a$,}\\~-1,\quad \text{if $a$ is a quadratic nonresidue modulo $p$.} \end{cases} \end{align*} \begin{theorem}\label{t0.1.0.0} Let $p \geq 5$ be a prime and let $r$, $1 \leq r \leq p-1$, be such that $\left( \dfrac{8r+1}{p} \right)_L = -1$. Then for all $n \geq 0$, we have $T_2(9(pn+r)+1) \equiv 0 \pmod{6}$. \end{theorem} \begin{theorem}\label{t0.0.1} Let $p \geq 5$ be a prime. If $\left(\dfrac{-2}{p}\right)_L=-1$, then for $n,\alpha \geq 0$ with $p \nmid n$, \begin{equation}\label{e50.0} T_{2}\left( 9p^{2\alpha+1}n + \dfrac{9p^{2\alpha+2}-1}{8} \right) \equiv 0 \pmod{6}. \end{equation} \end{theorem} \begin{theorem}\label{t2} Let $a(n)$ be defined by \begin{equation*} \sum_{n=0}^{\infty}a(n)q^n = \dfrac{f_3^6}{f_1}. \end{equation*} Let $p\geq5$ be a prime. We define \begin{equation*} \omega(p) := a\left( \dfrac{17}{24}(p^2-1) \right)+p\left( \dfrac{2}{p} \right)_L\left( \dfrac{\frac{-17}{24}(p^2-1)}{p} \right)_L.
\end{equation*} \begin{enumerate}[(i)] \item If $\omega(p) \equiv 0 \pmod{2}$, then for $n,k\geq 0$ and $1\leq j \leq p-1$, we have \begin{equation}\label{t3.1} T_{2}\left( 3p^{4k+4}n + 3p^{4k+3}j + \dfrac{17\cdot p^{4k+4}-1}{8} \right) \equiv 0 \pmod{12}. \end{equation} Furthermore, if $17 \nmid (24n+17)$, then for $n,k \geq 0$, we have \begin{equation}\label{t3.3.1} T_{2}(3n+2) \equiv T_{2}\left( 3\cdot 17^{4k+2}n + \dfrac{17^{4k+3}-1}{8} \right) \pmod{12}. \end{equation} \item If $\omega(p) \equiv 1 \pmod{2}$, then for $n,k\geq 0$ and $1\leq j \leq p-1$, we have \begin{equation}\label{t2.2} T_{2}\left( 3p^{6k+6}n + 3p^{6k+5}j + \dfrac{17\cdot p^{6k+6}-1}{8} \right) \equiv 0 \pmod{12}. \end{equation} Furthermore, if $p \nmid (24n+17)$ and $p\neq 17$, then for $n,k\geq 0$ we have \begin{equation}\label{t3.3} T_{2}\left( 3p^{6k+2}n + \dfrac{17\cdot p^{6k+2}-1}{8} \right) \equiv 0 \pmod{12}. \end{equation} \end{enumerate} \end{theorem} \begin{remark} Using \textit{Mathematica}, we find that $a(17) \equiv 1 \pmod{2}$. Using Theorem \ref{t2} with $p=5$, we see that $\omega(5) \equiv 0 \pmod{2}$ and hence by Theorem \ref{t2}\((i)\), we obtain that for any integer $n \geq 0$ and $j=1$, we have $T_2(1875n + 1703) \equiv 0 \pmod{12}$. \end{remark} \begin{theorem}\label{t4} Let $p$ be an odd prime. Then the following statements hold: \begin{enumerate}[(i)] \item Suppose that $s$ is an integer satisfying $1\leq s \leq 8p$, $s \equiv 1 \pmod{8}$ and $\left( \dfrac{s}{p} \right)_L = -1$. Then, \begin{equation}\label{e12.0.1} T_{2}\left( pn+\dfrac{s-1}{8} \right) \equiv 0 \pmod{2}. \end{equation} If $\tau(p) \equiv 0 \pmod{2}$, then, for all $n \geq 0, k\geq 1$, \begin{equation}\label{e12.0.2} T_{2}\left( p^{2k+1}n+\dfrac{sp^{2k}-1}{8} \right) \equiv 0 \pmod{2}. \end{equation} \item Suppose that $r$ is an integer such that $1\leq r \leq 8p$, $rp \equiv 1 \pmod{8}$ and $(r,p)=1$. If $\tau(p) \equiv 0 \pmod{2}$, then, for all $n \geq 0, k\geq 1$, \begin{equation}\label{e12.0.3} T_{2}\left( p^{2k+2}n + \dfrac{rp^{2k+1}-1}{8} \right) \equiv 0 \pmod{2}. \end{equation} \end{enumerate} \end{theorem} We prove Theorem \ref{t0.1.0.0} in Section \ref{sec:thmn2}, Theorem \ref{t0.0.1} in Section \ref{s4}, Theorem \ref{t2} in Section \ref{s5}, and Theorem \ref{t4} in Section \ref{s3}. Using the theory of modular forms, we prove the following result in Section \ref{sec:thmn1}. \begin{theorem}\label{thm1.00} Let $k, n$ be nonnegative integers. For each $i$ with $1\leq i \leq k+1$, if $p_i \geq 5$ is prime such that $p_i \not\equiv 1 \pmod 8$, then for any integer $j \not\equiv 0 \pmod {p_{k+1}}$ \begin{align*} T_2\left(9p_1^2\dots p_{k+1}^2n + \frac{9p_1^2\dots p_{k}^2p_{k+1}(8j+p_{k+1})-1}{8}\right) \equiv 0 \pmod{6}. \end{align*} \end{theorem} Let $p\geq 5$ be a prime such that $p \not\equiv 1\pmod{8}$. By taking all the primes $p_1, p_2, \ldots, p_{k+1}$ to be equal to the same prime $p$ in Theorem \ref{thm1.00}, we obtain the following infinite family of congruences for $T_2(n)$: \begin{align*} T_2\left( 9p^{2(k+1)}n + 9p^{2k+1}j + \frac{9 p^{2(k+1)}-1}{8}\right) \equiv 0 \pmod 6, \end{align*} where $j \not\equiv 0 \pmod p$. In particular, for all $n\geq 0$ and $j\not\equiv 0\pmod{5}$, taking $k=0$ and $p=5$, we have \begin{align*} T_2\left(225n + 45j + 28\right) \equiv 0 \pmod{6}. \end{align*} Along with the study of Ramanujan-type congruences, the study of distribution of the coefficients of a formal power series modulo $M$ is also an interesting problem to explore. 
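Before turning to such distribution questions, we record an illustrative numerical sanity check (it is not used in any of the proofs). The following short Python sketch, in which the truncation bound \texttt{N} and the helper functions are ours, expands the generating function \eqref{eq:gf-lk} for $(\ell,k)=(2,3)$ and tests a few of the congruences stated above.
\begin{verbatim}
# Expand f_2^3 / f_1^3 = sum_{n>=0} T_2(n) q^n up to q^N and spot-check
# some of the stated congruences.
N = 3000  # truncation order; any reasonably large value will do

def eta_factor(k, e, N):
    # Coefficients of prod_{i>=1} (1 - q^{k*i})^e up to q^N (e may be negative).
    c = [0] * (N + 1)
    c[0] = 1
    for i in range(1, N // k + 1):
        step = k * i
        for _ in range(abs(e)):
            if e > 0:  # multiply by (1 - q^step)
                for n in range(N, step - 1, -1):
                    c[n] -= c[n - step]
            else:      # divide by (1 - q^step)
                for n in range(step, N + 1):
                    c[n] += c[n - step]
    return c

def mul(a, b, N):
    # Product of two truncated power series.
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

T2 = mul(eta_factor(2, 3, N), eta_factor(1, -3, N), N)

# T_2(9n+4) and T_2(9n+7) should be divisible by 24 (the case alpha = 0 above).
assert all(T2[9 * n + 4] % 24 == 0 for n in range((N - 4) // 9 + 1))
assert all(T2[9 * n + 7] % 24 == 0 for n in range((N - 7) // 9 + 1))
# Sample congruence T_2(225n + 45j + 28) = 0 (mod 6) for j = 1, 2, 3, 4.
assert all(T2[225 * n + 45 * j + 28] % 6 == 0
           for n in range((N - 208) // 225 + 1) for j in range(1, 5))
print("all spot-checks passed")
\end{verbatim}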
Given an integral power series $A(q) := \displaystyle\sum_{n=0}^{\infty}a(n)q^n $ and $0 \leq r < M$, the arithmetic density $\delta_r(A,M;X)$ is defined as \begin{equation*} \delta_r(A,M;X) = \frac{\#\{ n \leq X : a(n) \equiv r \pmod{M} \}}{X}. \end{equation*} An integral power series $A$ is called \textit{lacunary modulo $M$} if \begin{equation*} \lim_{X \to \infty} \delta_0(A,M;X)=1, \end{equation*} which means that almost all the coefficients of $A$ are divisible by $M$. It turns out that such a result can also be proved for the $T_{\ell,k}(n)$ function. Specifically, we prove the following result in Section \ref{sec:lacunary}. \begin{theorem}\label{thm1} Let $k >0$ and $\ell >1$ be integers, let $G(q)=\sum_{n \geq 0}T_{\ell,k}(n)q^n$, and let $p^a$ be the largest power of a prime $p$ that divides $\ell$. Let $m$ and $s$ be integers with $m > a > s \geq 0$. If $p^{2a} \geq \ell$ and $k \leq p^{m+a}(1-p^{2s-2a})$, then \begin{equation}\label{e199} \lim_{X\to\infty} \delta_{0}(G,p^m;X) = 1. \end{equation} \end{theorem} \noindent The following is now an easy corollary. \begin{corollary} Let $F(q)=\sum_{n \geq 0}T_{p,k}(n)q^n$. Then for every positive integer $m$ and for every prime $p$, we have \begin{equation}\label{e200} \lim_{X\to\infty} \delta_{0}(F,p^m;X) = 1. \end{equation} \end{corollary} Before proceeding to the proofs of our main theorems, we present a brief proof of the following result on the lacunarity of $T_2(9n+1)$. The proof is simple, so we give it here. \begin{theorem}\label{thm0.02365} The series $\sum_{n \geq 0} T_2(9n+1)q^n$ exhibits lacunarity modulo $6$, namely, \begin{equation*} \lim_{X \to \infty} \frac{\#\{ 0 \leq n \leq X : T_2(9n+1) \equiv 0 \pmod{6} \}}{X} = 1. \end{equation*} \end{theorem} To prove this, we recall the following classical result due to Landau \cite{lan}. \begin{lemma}\label{l0.001} Let $r(n)$ and $s(n)$ be quadratic polynomials. Then \begin{equation*} \left( \sum_{n \in \mathbb{Z}} q^{r(n)} \right) \left( \sum_{n \in \mathbb{Z}} q^{s(n)} \right) \end{equation*} is lacunary modulo $2$. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm0.02365}] From \eqref{e50.1}, we have \begin{equation*} \sum_{n \geq 0} T_{2}(9n+1)q^n \equiv 3f_1f_2 \pmod{6}. \end{equation*} Using \eqref{e2.0.3.4}, together with the same identity with $q$ replaced by $q^2$, reduced modulo $2$, we may apply Lemma \ref{l0.001} to complete the proof of the theorem. \end{proof} The rest of the paper is organized as follows: in Section \ref{sec:prelim} we recall some results which we will use in our proofs, Sections \ref{sec:pf-1}--\ref{sec:lacunary} contain the proofs of our results, and we end the paper with some concluding remarks in Section \ref{sec:conc}. \section{Preliminaries}\label{sec:prelim} In this section we collect some results from elementary $q$-series analysis and the theory of modular forms which are useful in proving our results. \subsection{Elementary Results} We need Euler's pentagonal number theorem \cite[Eq. (1.3.18)]{Spirit} \begin{equation}\label{e2.0.3.4} f_1=\sum_{n \in \mathbb{Z}}(-1)^nq^{n(3n-1)/2} \end{equation} and Jacobi's Triple Product identity \cite[Eq. (1.3.24)]{Spirit} \begin{equation}\label{e2.0.3.3} f_1^3=\sum_{n\geq 0}(-1)^n(2n+1)q^{n(n+1)/2}. \end{equation} Some known $3$-dissections are required (see for example \cite[Lemma 2.2]{DAS2025128913}).
\begin{lemma} The following $3$-dissections hold: \begin{align} \dfrac{f_1^2}{f_2} & = \dfrac{f_9^2}{f_{18}} -2q\dfrac{f_3f_{18}^2}{f_6f_9}, \label{e0.7}\\ f_1^3 & = \dfrac{f_6f_9^6}{f_3f_{18}^3} -3q f_9^3 +4q^3 \dfrac{f_3^2f_{18}^6}{f_6^2f_9^3}, \label{e0.8}\\ \dfrac{1}{f_1^3} & = a_3^2\dfrac{f_9^3}{f_{3}^{10}}+3qa_3\dfrac{f_9^6}{f_3^{11}}+9q^2\dfrac{f_9^9}{f_3^{12}},\label{e0.8.0}\\ a_1& =a_3+6q\dfrac{f_9^3}{f_3}, \label{e0.7.0} \end{align} where the Borweins' cubic theta function $a(q)$ is given by \[a_n = a(q^n) := \sum\limits_{j,k=-\infty}^{\infty}q^{n\cdot(j^2+jk+k^2)}.\] \end{lemma} We recall the notion of partitions with even parts distinct, wherein odd parts are unrestricted but even parts must be distinct. Let $ped(n)$ denote the number of such partitions of $n$; then its generating function is given by \begin{equation}\label{eq:gf-ped} \sum_{n\geq 0}ped(n)q^n=\frac{f_4}{f_1}. \end{equation} We need the following results related to this function. \begin{lemma}\cite[Corollary 3.3]{andrews2010arithmetic} We have, for all $n\geq 0$ \begin{align} ped(9n+7)& \equiv 0 \pmod{12}. \label{e2.6} \end{align} \end{lemma} \begin{lemma}\cite[Corollary 3.6]{andrews2010arithmetic} We have, for all $n\geq 0$ and $\alpha\geq 0$ \begin{align} ped\left( 3^{2\alpha +1 }n + \dfrac{17 \cdot 3^{2\alpha}-1}{8} \right) & \equiv 0 \pmod{6}, \label{e2.7}\\ ped\left( 3^{2\alpha +2 }n + \dfrac{19 \cdot 3^{2\alpha+1}-1}{8} \right) & \equiv 0 \pmod{6}. \label{e2.8} \end{align} \end{lemma} \begin{lemma}\cite[Theorem 1.1]{Nath} We have, for all $n\geq 0$ and $\alpha\geq 0$ \begin{equation}\label{e3.2} ped(9n+7) \equiv ped\left( 9 \cdot 5^{2\alpha}n + \dfrac{57 \cdot 5^{2\alpha}-1}{8} \right) \pmod{24}. \end{equation} \end{lemma} \noindent We also need the following result from \cite[Lemma 7]{NadjiAhmia}. \begin{lemma} We have \begin{align} \sum_{n \geq 0}T_{2}(3n+1)q^n & = 3\frac{f_2^4f_3^5}{f_1^8f_6}, \label{e0.2}\\ \sum_{n \geq 0}T_{2}(3n+2)q^n & = 6\frac{f_2^3f_3^2f_6^2}{f_1^7}. \label{e0.2.1} \end{align} \end{lemma} The following result of Newman will play a crucial role in the proof of Theorem \ref{t2}; therefore, we shall quote it as a lemma. Following the notations of Newman's paper, we shall let $p$, $q$ denote distinct primes, and let $r$ and $s$ be nonzero integers with $r \not \equiv s \pmod{2}$. Set \begin{equation}\label{e7} \phi(\tau) = \prod_{n=1}^{\infty}(1-x^n)^r(1-x^{nq})^s = \sum_{n=0}^{\infty}a(n)x^n, \end{equation} $\epsilon = \dfrac{1}{2}(r+s), t=(r+sq)/24, \Delta= t(p^2-1), \theta = (-1)^{\frac{1}{2}-\epsilon}2q^s$, then we have the following result. \begin{lemma}\label{l2}\textup{\cite[Theorem 3]{16}} With the notations defined as above, the coefficients $a(n)$ of $\phi(\tau)$ satisfy \begin{equation}\label{e8} a(np^2+\Delta)-\gamma(n)a(n) +p^{2\epsilon-2}a\left( \dfrac{n-\Delta}{p^2} \right) = 0, \end{equation} where \begin{equation}\label{e9} \gamma(n) = p^{2\epsilon - 2}\alpha-\left( \dfrac{\theta}{p} \right)_L p^{\epsilon-3/2}\left( \dfrac{n-\Delta}{p} \right)_L, \end{equation} and $\alpha$ is a constant. \end{lemma} \subsection{Preliminaries from Modular Forms} We recall some basic facts and results from the theory of modular forms that will be useful in the proof of some of our results.
For a positive integer $N$, we will assume that: \begin{align*} \textup{SL}_2(\mathbb{Z}) & :=\left\{\begin{bmatrix} a & b \\ c & d \end{bmatrix}: a, b, c, d \in \mathbb{Z}, ad-bc=1 \right\},\\ \Gamma_{0}(N) &:=\left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \textup{SL}_2(\mathbb{Z}) : c\equiv 0\pmod N \right\},\\ \Gamma_{1}(N) &:=\left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \Gamma_0(N) : a\equiv d \equiv 1\pmod N \right\},\\ \Gamma(N) &:=\left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \textup{SL}_2(\mathbb{Z}) : a \equiv d \equiv 1 \pmod{N}, \text{and} \hspace{2mm} b\equiv c \equiv 0 \pmod N \right\}.\\ \end{align*} A subgroup $\Gamma$ of $\textup{SL}_2(\mathbb{Z})$ is called a congruence subgroup if $\Gamma(N) \subseteq \Gamma$ for some $N$. The smallest $N$ such that $\Gamma(N) \subseteq \Gamma$ is called the level of $\Gamma$. For example, $\Gamma_{0}(N)$ and $\Gamma_{1}(N)$ are congruence subgroups of level $N$.\\ Let $\mathbb{H}:= \{ z \in \mathbb{C} : \Im(z) > 0 \}$ be the upper half-plane. The group $$\textup{GL}_2^{+}(\mathbb{R}) = \left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} : a,b,c,d \in \mathbb{R} \hspace{2mm} \text{and} \hspace{2mm} ad-bc>0 \right\}$$ acts on $\mathbb{H}$ by $\begin{bmatrix} a & b \\ c & d \end{bmatrix} z = \dfrac{az +b}{cz+d}$. We identify $\infty$ with $\dfrac{1}{0}$ and define $ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \dfrac{r}{s} = \dfrac{ar +bs}{cr+ds}$, where $\dfrac{r}{s} \in \mathbb{Q} \cup \{\infty\}$. This gives an action of $\textup{GL}_2^{+}(\mathbb{R})$ on the extended upper half-plane $\mathbb{H}^{\star} = \mathbb{H} \cup \mathbb{Q} \cup \{\infty\}$. Suppose that $\Gamma$ is a congruence subgroup of $\textup{SL}_2(\mathbb{Z})$. A cusp of $\Gamma$ is an equivalence class in $\mathbb{P}^{1}=\mathbb{Q} \cup \{ \infty\}$ under the action of $\Gamma$.\\ The group $\textup{GL}_2^{+}(\mathbb{R})$ also acts on functions $f : \mathbb{H} \to \mathbb{C}$. In particular, suppose that $\gamma = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \textup{GL}_2^{+}(\mathbb{R}) $. If $f(z)$ is a meromorphic function on $\mathbb{H}$ and $\ell$ is an integer, then we define the slash operator $\mid_{\ell}$ by $(f\mid_{\ell}\gamma )(z) := (\det \gamma)^{\ell/2}(cz+d)^{-\ell}f(\gamma z)$.\\ \begin{definition} Let $\Gamma$ be a congruence subgroup of level $N$. A holomorphic function $f : \mathbb{H} \to \mathbb{C}$ is called a modular form of integer weight $\ell$ on $\Gamma$ if the following hold: \begin{enumerate} \item We have \begin{align*} f\left(\frac{az+b}{cz+d}\right)=(cz+d)^{\ell}f(z) \end{align*} for all $z\in\mathbb{H}$ and all $\begin{bmatrix} a&b\\ c&d \end{bmatrix}\in\Gamma$. \item If $\gamma\in\textup{SL}_2(\mathbb{Z})$, then $(f\mid_{\ell}\gamma )(z)$ has a Fourier expansion of the form \begin{align*} (f\mid_{\ell}\gamma )(z)=\sum_{n=0}^{\infty}a_{\gamma}(n)q^n_{N}, \end{align*} where $q_{N}:=e^{\frac{2\pi iz}{N}}$. \end{enumerate} \end{definition} For a positive integer $\ell$, let $M_\ell(\Gamma_1(N))$ denote the complex vector space of modular forms of weight $\ell$ with respect to $\Gamma_1(N)$. \begin{definition}\cite[Definition 1.15]{ono2004} If $\chi$ is a Dirichlet character modulo $N$, then a modular form $f\in M_\ell(\Gamma_1(N))$ has Nebentypus character $\chi$ if $f\left( \frac{az+b}{cz+d}\right)=\chi(d)(cz+d)^{\ell}f(z)$ for all $z\in \mathbb{H}$ and all $\begin{bmatrix} a & b \\ c & d \end{bmatrix}\in \Gamma_0(N)$. The space of such modular forms is denoted by $M_\ell(\Gamma_0(N),\chi)$.
\end{definition} Finally, we require the following results to prove Theorem \ref{thm1}. \begin{theorem}\cite[Theorem 1.64]{ono2004}\label{thm_ono1} Suppose that $f(z)=\displaystyle\prod_{\delta\mid N}\eta(\delta z)^{r_\delta}$ is an eta-quotient such that $\ell=\displaystyle\dfrac{1}{2}\sum_{\delta\mid N}r_{\delta}\in \mathbb{Z}$, $\sum_{\delta\mid N} \delta r_{\delta}\equiv 0 \pmod{24}$, and $\sum_{\delta\mid N} \dfrac{N}{\delta}r_{\delta}\equiv 0 \pmod{24}$. Then, $ f\left( \dfrac{az+b}{cz+d}\right)=\chi(d)(cz+d)^{\ell}f(z) $ for every $\begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \Gamma_0(N)$. Here \( \chi(d):=\left(\dfrac{(-1)^{\ell} \prod_{\delta\mid N}\delta^{r_{\delta}}}{d}\right)_L.\label{chi}\) \end{theorem} \noindent If the eta-quotient $f(z)$ satisfies the conditions of Theorem \ref{thm_ono1} and is holomorphic at all of the cusps of $\Gamma_0(N)$, then $f\in M_{\ell}(\Gamma_0(N), \chi)$. The following result is used to determine the order of vanishing of an eta-quotient at each cusp. \begin{theorem}\cite[Theorem 1.65]{ono2004}\label{thm_ono1.1} Let $c, d,$ and $N$ be positive integers with $d\mid N$ and $\gcd(c, d)=1$. If $f(z)$ is an eta-quotient satisfying the conditions of Theorem~\ref{thm_ono1} for $N$, then the order of vanishing of $f(z)$ at the cusp $\dfrac{c}{d}$ is $\dfrac{N}{24}\sum_{\delta\mid N}\dfrac{\gcd(d,\delta)^2r_{\delta}}{\gcd(d,\frac{N}{d})d\delta}.$ \end{theorem} We state the following deep result due to Serre, which is useful in proving Theorem \ref{thm1}. \begin{theorem}\cite[Theorem~2.65]{ono2004}\label{serre} Let $k, m$ be positive integers. If $f(z)\in M_{k}(\Gamma_0(N), \chi(\bullet))$ has the Fourier expansion $f(z)=\sum_{n \geq 0}c(n)q^n\in \mathbb{Z}[[q]],$ then there is a constant $\alpha>0$ such that $$ \# \left\{n\leq X: c(n)\not\equiv 0 \pmod{m} \right\}= \mathcal{O}\left(\dfrac{X}{\log^{\alpha}{}X}\right). $$ \end{theorem} The Dedekind $\eta$-function is given by \begin{equation*} \eta(z) := q^{1/24}(q;q)_{\infty}, \end{equation*} where $q=e^{2\pi iz}$ and $z$ lies in the complex upper half-plane $\mathbb{H}$. The well-known $\Delta$-function is defined by \begin{equation*} \Delta(z) := \eta(z)^{24} = \sum_{n=1}^{\infty}\tau(n)q^n. \end{equation*} As usual, we denote by $M_k(SL_2(\mathbb{Z}))$ (resp. $S_k(SL_2(\mathbb{Z}))$) the complex vector space of weight $k$ holomorphic modular (resp. cusp) forms with respect to $SL_2(\mathbb{Z})$. \begin{definition} Let $m$ be a positive integer and $f(z) = \sum_{n=0}^{\infty} a(n)q^n \in M_{\ell}(\Gamma_0(N),\chi)$. Then the action of the Hecke operator $T_m$ on $f(z)$ is defined by \begin{align*} f(z)|T_m := \sum_{n \geq 0} \left(\sum_{d\mid \gcd(n,m)}\chi(d)d^{\ell-1}a\left(\frac{nm}{d^2}\right)\right)q^n. \end{align*} In particular, if $m=p$ is prime, we have \begin{align}\label{hecke1} f(z)|T_p := \sum_{n \geq 0} \left(a(pn)+\chi(p)p^{\ell-1}a\left(\frac{n}{p}\right)\right)q^n. \end{align} We note that $a(n)=0$ unless $n$ is a nonnegative integer. \end{definition} \begin{definition}\label{hecke2} A modular form $f(z)=\sum_{n=0}^{\infty}a(n)q^n \in M_{\ell}(\Gamma_0(N),\chi)$ is called a Hecke eigenform if for every $m\geq2$ there exists a complex number $\lambda(m)$ for which \begin{align}\label{hecke3} f(z)|T_m = \lambda(m)f(z). \end{align} \end{definition} It is known that $\Delta(z) \in S_{12}(SL_2(\mathbb{Z}))$ and is an eigenform for all Hecke operators. In other words, \begin{equation*} \Delta(z) \mid T_{p} = \tau(p)\Delta(z) \end{equation*} for any prime $p$.
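For instance (a routine check of the above definitions, recorded here for convenience), comparing the coefficients of $q^n$ in \eqref{hecke1} applied to $\Delta(z)$, where $N=1$, $\chi$ is trivial and $\ell=12$, gives \[ \tau(pn)+p^{11}\tau\!\left(\frac{n}{p}\right)=\tau(p)\tau(n) \] for every prime $p$ and every $n\geq1$. Taking $p=2$, $n=3$ yields $\tau(6)=\tau(2)\tau(3)=(-24)(252)=-6048$, and $p=2$, $n=2$ yields $\tau(4)=\tau(2)^2-2^{11}=-1472$, in agreement with the expansion $\Delta(z)=q-24q^2+252q^3-1472q^4+\cdots$.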
The coefficients of $\Delta(z) = \sum_{n=1}^{\infty}\tau(n)q^n$ satisfy the following properties: \begin{equation}\label{tau1} \tau(mn) = \tau(m)\tau(n), \quad \text{if} \quad \gcd(m,n)=1, \end{equation} \begin{equation}\label{tau2} \tau(p^{\ell}) = \tau(p)\tau(p^{\ell-1}) - p^{11}\tau(p^{\ell-2}), \quad \text{for a prime $p$ and an integer $\ell\geq 2$.} \end{equation} \section{Proof of Theorem \ref{c1.4}}\label{sec:pf-1} Considering \eqref{e0.2} and applying \eqref{e0.1} with $p=2$, $k=1$ and $l = 3$ (so that $f_1^8\equiv f_2^4 \pmod 8$), we observe that \begin{equation}\label{e0.3} \sum_{n\geq 0}T_{2}(3n+1)q^n \equiv 3 \dfrac{f_3^5}{f_6} \pmod{24}. \end{equation} Collecting the terms of the form $q^{3n+j}$ for $j=0,1,2$ from both sides of the equation \eqref{e0.3}, we get (after replacing $q^3$ by $q$ in the first congruence) \begin{align} \sum_{n\geq 0}T_{2}(9n+1)q^n & \equiv 3 \dfrac{f_1^5}{f_2} \pmod{24}, \label{e0.4}\\ T_{2}(9n+4) &\equiv 0 \pmod{24}, \label{e0.5}\\ T_{2}(9n+7) &\equiv 0 \pmod{24}. \label{e0.6} \end{align} Substituting \eqref{e0.7}, \eqref{e0.8} into \eqref{e0.4}, we obtain \begin{equation}\label{e0.9} \sum_{n\geq 0}T_{2}(9n+1)q^n \equiv 3\dfrac{f_6f_9^8}{f_3f_{18}^4}+9q \dfrac{f_9^5}{f_{18}} + 18q^2 \dfrac{f_3f_9^2f_{18}^2}{f_6} + 12q^3\dfrac{f_3^2f_{18}^5}{f_6^2f_9} \pmod{24}. \end{equation} If we extract the terms involving $q^{3n+1}$ from both sides of the above equation, divide by $q$ and then replace $q^3$ by $q$, we get \begin{equation}\label{e1.0} \sum_{n\geq 0}T_{2}(27n+10)q^n \equiv 9\dfrac{f_3^5}{f_{6}} \pmod{24}. \end{equation} Collecting the terms containing $q^{3n+j}$ for $j=0,1,2$ from both sides of \eqref{e1.0}, we get \begin{align} \sum_{n\geq 0}T_{2}(81n+10)q^n & \equiv 9 \dfrac{f_1^5}{f_2} \pmod{24}, \label{e1.1}\\ T_{2}(81n+37) &\equiv 0 \pmod{24}, \label{e1.2}\\ T_{2}(81n+64) &\equiv 0 \pmod{24}. \label{e1.3} \end{align} Again by substituting \eqref{e0.7}, \eqref{e0.8} into \eqref{e1.1} and extracting the powers of the form $q^{3n+1}$ from both sides of the resulting equation, we find that \begin{equation}\label{e1.4} \sum_{n\geq 0} T_{2}(243n+91)q^n \equiv 3 \dfrac{f_3^5}{f_6} \pmod{24}. \end{equation} From \eqref{e0.3}, \eqref{e1.0} and \eqref{e1.4}, we deduce that \begin{align} 3T_{2}(3n+1) &\equiv T_{2}(27n+10) \pmod{24} \label{e1.5},\\ T_{2}(3n+1) &\equiv T_{2}(243n+91) \pmod{24} \label{e1.6}. \end{align} Utilizing both \eqref{e1.5} and \eqref{e1.6} and by mathematical induction on $\alpha \geq 0$, we arrive at \begin{align} 3T_{2}(3n+1) &\equiv T_{2}\left( 3^{4\alpha+3}n + \sum_{i=0}^{2\alpha+1}3^{2i} \right) \pmod{24} \label{e1.7},\\ T_{2}(3n+1) &\equiv T_{2}\left( 3^{4\alpha+1}n + \sum_{i=0}^{2\alpha}3^{2i} \right) \pmod{24} \label{e1.8}. \end{align} Using \eqref{e1.8}, \eqref{e0.5} and \eqref{e0.6}, we obtain \eqref{c0.1.4} and \eqref{c1.1.4}, respectively. Similarly, using \eqref{e1.7}, \eqref{e1.2} and \eqref{e1.3}, we obtain \eqref{c2.1.4} and \eqref{c3.1.4}, respectively. \section{Proof of Theorem \ref{c1.4.1}}\label{sec:pf-2} \begin{proof}[Proof of Theorem \ref{c1.4.1}] We have \begin{equation}\label{e1.9} \sum_{n\geq 0}T_{4}(n)q^n = \dfrac{f_4^3}{f_1^3}. \end{equation} Using \eqref{e0.1} in \eqref{e1.9} with $p=3$, $l=1$ and $k\in\{1,4\}$, we observe that \begin{equation}\label{e2.0} \sum_{n\geq 0}T_{4}(n)q^n \equiv \dfrac{f_{12}}{f_3} \pmod{3}.
\end{equation} Collecting the terms containing $q^{3n}$ from both sides of \eqref{e2.0}, we get \begin{equation} \sum_{n\geq 0}T_{4}(3n)q^n \equiv \dfrac{f_4}{f_1} \pmod{3}. \label{e2.5} \end{equation} Employing \eqref{eq:gf-ped} and then using \eqref{e2.7}, \eqref{e2.8}, and \eqref{e2.6} together with \eqref{e3.2} in \eqref{e2.5}, we obtain \eqref{e3.0}, \eqref{e3.1} and \eqref{e2.9}, respectively. \end{proof} As a consequence of Theorem \ref{c1.4.1}, we have the following corollary. \begin{corollary} The function $T_{4}(n)$ is divisible by $3$ at least $13/648$ of the time. \end{corollary} \begin{proof} The arithmetic sequences $3^{2\alpha +2 }n + \dfrac{17 \cdot 3^{2\alpha+1}-3}{8}$, $3^{2\alpha +3 }n + \dfrac{19 \cdot 3^{2\alpha+2}-3}{8}$ and $27 \cdot 5^{2\alpha}n + \dfrac{171 \cdot 5^{2\alpha}-3}{8}$ (each taken with $\alpha \geq 1$), on which $T_{4}(\cdot)$ is $0$ modulo $3$, do not intersect. These sequences account for \begin{equation*} \sum_{j=4}^{\infty}\dfrac{1}{3^j} + \dfrac{1}{27}\sum_{j=1}^{\infty}\dfrac{1}{5^{2j}} = \dfrac{13}{648} \end{equation*} of all positive integers. \end{proof} \section{Proof of Theorem \ref{t0.1.0.0}}\label{sec:thmn2} From equations \eqref{e2.0.3.3} and \eqref{e0.2}, we have \begin{equation*} \sum_{n \geq 0} T_2(9n+1)q^n \equiv 3\sum_{j \geq 0}(-1)^j(2j+1)q^{j(j+1)/2} \pmod{6}. \end{equation*} Therefore, if we wish to consider values of the form $T_2(9(pn+r)+1)$, then we need to know whether we can write $pn+r = j(j+1)/2$ for some nonnegative integer $j$. If we can show that no such representations exist, then the theorem is proved. Note that if a representation of the form $pn+r = j(j+1)/2$ exists, then $r \equiv j(j+1)/2 \pmod{p}$. We complete the square to obtain $8r+1 \equiv (2j+1)^2 \pmod{p}$, which is not possible because we have assumed that $\left( \dfrac{8r+1}{p} \right)_L = -1$. Therefore, no such representation is possible, and this completes the proof. \section{Proof of Theorem \ref{t0.0.1}}\label{s4} We have from \eqref{e0.2} that \begin{equation}\label{e50.1} \sum_{n\geq0}T_{2}(9n+1)q^n \equiv 3f_1f_2 \pmod{6}. \end{equation} Combining \eqref{e2.0.3.4} and \eqref{e50.1}, we get \begin{equation}\label{e50.3} \sum_{n\geq 0} T_{2}(9n+1)q^{24n+3} \equiv 3\sum_{m \in \mathbb{Z}}\sum_{k \in \mathbb{Z}} (-1)^{m+k}q^{{(6m-1)}^2+2(6k-1)^2} \pmod{6}. \end{equation} From \eqref{e50.3}, we find that if $24n+3$ is not of the form ${(6m-1)}^2+2(6k-1)^2$, then \begin{equation*} T_2(9n+1) \equiv 0 \pmod{6}. \end{equation*} Let $p \geq 5$ be a prime with $\left(\dfrac{-2}{p}\right)_L=-1$ and let $\nu_p(N)$ denote the exponent of the highest power of $p$ dividing $N$. If $N$ is of the form $x^2+2y^2$, then $\nu_p(N)$ is even. However, it is easy to check that if $p \nmid n$, then \begin{equation*} \nu_p\left( 24 \left( p^{2\alpha+1}n+ \dfrac{p^{2\alpha+2}-1}{8} \right)+3 \right) = 2\alpha+1. \end{equation*} Therefore, $ 24 \left( p^{2\alpha+1}n+ \dfrac{p^{2\alpha+2}-1}{8} \right)+3$ is not of the form $x^2+2y^2$ when $p\nmid n$ and hence \eqref{e50.0} holds. \section{Proof of Theorem \ref{t2}}\label{s5} From \eqref{e0.2.1}, we have that \begin{equation}\label{e10} \sum_{n \geq 0}T_{2}(3n+2)q^n = 6\dfrac{f_2^3f_3^2f_6^2}{f_1^7} \equiv 6 \dfrac{f_3^6}{f_1} \equiv 6 \sum_{n \geq 0}a(n)q^n \pmod{12}, \end{equation} where $\dfrac{f_3^6}{f_1} = \sum_{n \geq 0}a(n)q^n$.
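In the next step we will apply Lemma \ref{l2} with $r=-1$, $q=3$ and $s=6$; for later reference we record the corresponding quantities (a routine computation from the definitions preceding Lemma \ref{l2}): here $\phi(\tau)=\dfrac{f_3^6}{f_1}$, \[ \epsilon=\frac{1}{2}(r+s)=\frac{5}{2}, \qquad t=\frac{r+sq}{24}=\frac{17}{24}, \qquad \Delta=\frac{17}{24}(p^2-1), \qquad \theta=(-1)^{\frac{1}{2}-\epsilon}\,2\cdot 3^{6}=2\cdot 3^{6}, \] so that $p^{2\epsilon-2}=p^{3}$, $p^{\epsilon-3/2}=p$, and $\left(\dfrac{\theta}{p}\right)_L=\left(\dfrac{2}{p}\right)_L$ for every prime $p\geq 5$, since $3^6$ is a perfect square.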
Putting $r=-1, q=3$ and $s=6$ in $\eqref{e7}$, we have by Lemma $\ref{l2}$, for any $n\geq 0$ \begin{equation}\label{e11} a\left( p^2n +\dfrac{17}{24}(p^2-1) \right) - \gamma(n)a(n) - p^3a\left( \dfrac{1}{p^2}\left( n-\dfrac{17}{24}(p^2-1) \right) \right)=0, \end{equation} where \begin{equation}\label{e12} \gamma(n)=p^3\alpha-p\left(\dfrac{2}{p}\right)_L\left(\dfrac{n-\frac{17}{24}(p^2-1)}{p}\right)_L, \end{equation} and $\alpha$ is a constant integer. Setting $n=0$ in $\eqref{e11}$ and using the fact that $a(0) = 1$ and \[a\left(\dfrac{\frac{-17}{24}(p^2-1)}{p^2}\right)=0,\] we obtain \begin{equation}\label{e13} a\left( \dfrac{17}{24}(p^2-1) \right) = \gamma(0). \end{equation} Setting $n=0$ in $\eqref{e12}$ and using $\eqref{e13}$, we obtain \begin{equation}\label{e14} p^3\alpha = a\left( \frac{17}{24}(p^2-1) \right)+p\left( \dfrac{2}{p}\right)_L \left(\dfrac{\frac{-17}{24}(p^2-1)}{p}\right)_L := \omega(p). \end{equation} Now rewriting $\eqref{e11}$, by referring $\eqref{e12}$ and $\eqref{e14}$, we obtain \begin{align} a\left( p^2n +\dfrac{17}{24}(p^2-1) \right) &= \left( \omega(p)-p\left(\dfrac{2}{p}\right)_L\left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \right)a(n) \nonumber\\ &\quad- p^3a\left( \dfrac{1}{p^2}\left( n-\dfrac{17}{24}(p^2-1) \right) \right).\label{e15} \end{align} Now, replacing $n$ by $pn + \dfrac{17}{24}(p^2-1)$ in $\eqref{e15}$, we obtain \begin{equation}\label{e16} a\left( p^3n+\dfrac{17}{24}(p^4-1) \right) = \omega(p)a\left( pn + \dfrac{17}{24}(p^2-1) \right) - p^3 a(n/p). \end{equation} From equation $\eqref{e11}$, we can see that \begin{equation}\label{e28} a\left( p^2n +\dfrac{17}{24}(p^2-1) \right) - \gamma(n)a(n) + a\left( \dfrac{1}{p^2}\left( n-\frac{17}{24}(p^2-1) \right) \right) \equiv 0 \pmod{2}, \end{equation} where \begin{equation}\label{e29} \gamma(n) \equiv \omega(p) + \left(\dfrac{n-\frac{17}{24}(p^2-1)}{p}\right)_L \pmod2. \end{equation} Setting $n=0$ in $\eqref{e28}$ and using the fact that $a(0)=1$ and $a\left( \dfrac{\frac{-17}{24}(p^2-1)}{p^2} \right) = 0$, we arrive at \begin{equation}\label{e30} a\left( \dfrac{17}{24}(p^2-1) \right) \equiv \gamma(0) \pmod{2}. \end{equation} Setting $n=0$ in $\eqref{e29}$ yields \begin{equation}\label{e31} \gamma(0) \equiv \omega(p) + 1 \pmod2, \qquad \text{for} \quad p \neq 17, \end{equation} and \begin{equation}\label{e31.1} \gamma(0) \equiv \omega(p) \pmod2, \qquad \text{for} \quad p = 17. \end{equation} Combining $\eqref{e30}$ and $\eqref{e31}$ yields \begin{equation}\label{e32} a\left( \dfrac{17}{24}(p^2-1) \right) + 1 \equiv \omega(p) \pmod{2}, \quad p\neq 17, \end{equation} and combining $\eqref{e30}$ and $\eqref{e31.1}$ yields \begin{equation}\label{e32:2} a\left( \dfrac{17}{24}(p^2-1) \right) \equiv \omega(p) \pmod{2}, \quad p=17. \end{equation} \underline{\textbf{Case - 1 :} $\omega(p) \equiv 0 \pmod{2}$} From $\eqref{e16}$ we obtain \begin{equation}\label{e17} a\left( p^3n+\dfrac{17}{24}(p^4-1) \right) \equiv p^3 a(n/p) \pmod{2}. \end{equation} Now, replacing $n$ by $pn$ in $\eqref{e17}$, we obtain \begin{equation}\label{e18} a\left(p^4n+\dfrac{17}{24}(p^4-1)\right) \equiv p^3 a(n) \equiv a(n) \pmod{2}. \end{equation} Since $p^{4k}n+\dfrac{17}{24}(p^{4k}-1)=p^4\left(p^{4k-4}n+\dfrac{17}{24}(p^{{4k-4}}-1)\right)+\dfrac{17}{24}(p^4-1)$, using equation $\eqref{e17}$, we obtain that for every integer $k\geq 1$, \begin{equation}\label{e19} a\left(p^{4k}n+\dfrac{17}{24}(p^{4k}-1)\right) \equiv a\left(p^{4k-4}n+\dfrac{17}{24}(p^{4k-4}-1)\right) \equiv a(n) \pmod{2}. 
\end{equation} Now if $p \nmid n$, then $\eqref{e17}$ yields \begin{equation}\label{e20} a\left(p^3n +\dfrac{17}{24}(p^4-1)\right) \equiv 0 \pmod{2}. \end{equation} Replacing $n$ by $p^3n+\dfrac{17}{24}(p^4-1)$ in $\eqref{e19}$ and using $\eqref{e20}$, we obtain \begin{equation}\label{e21} a\left( p^{4k+3}n + \dfrac{17}{24}(p^{4k+4}-1) \right) \equiv 0 \pmod{2}. \end{equation} In particular, for $1 \leq j \leq p-1$, we have from $\eqref{e21}$ that \begin{equation}\label{e40} a\left( p^{4k+4}n + p^{4k+3}j + \dfrac{17}{24}\left( p^{4k+4}-1 \right)\right) \equiv 0 \pmod{2}. \end{equation} Congruence $\eqref{t3.1}$ follows from \eqref{e10} and $\eqref{e40}$. \\ Now assume $p=17$. We have from $\eqref{e29}$ and $\eqref{e32:2}$ that \begin{equation}\label{e51} \gamma(n) \equiv a\left( \dfrac{17}{24}(p^2-1) \right) + \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \pmod{2}. \end{equation} By $\eqref{e28}$ and $\eqref{e51}$, \begin{align} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) &\equiv \left( a\left( \dfrac{17}{24}(p^2-1) \right) + \left( \dfrac{n - \frac{17}{24}(p^2-1)}{p} \right)_L \right) a(n) \nonumber \\ &\quad + a\left( \dfrac{1}{p^2} \left( n - \dfrac{17}{24}(p^2-1) \right) \right) \pmod{2}. \label{e52} \end{align} Replacing $n$ by $pn + \dfrac{17}{24}(p^2-1)$ in $\eqref{e52}$ yields \begin{equation}\label{e53} a\left( p^3n + \dfrac{17}{24}(p^4-1) \right) \equiv a\left( \dfrac{17}{24}(p^2-1) \right)a\left( pn + \dfrac{17}{24}(p^2-1) \right) + a(n/p) \pmod{2}. \end{equation} For $p=17$, we have $\omega(p) \equiv 0 \pmod{2}$, so from $\eqref{e32:2}$ we obtain $a\left( \dfrac{17}{24}(p^2-1) \right) \equiv 0 \pmod{2}$. Therefore, replacing $n$ by $pn$ in $\eqref{e53}$ we find that \begin{equation}\label{e54} a\left( p^4n + \dfrac{17}{24}(p^4-1) \right) \equiv a(n) \pmod{2}. \end{equation} By $\eqref{e54}$ and iteration, we deduce that for $n,k \geq 0$, \begin{equation}\label{e55} a\left( p^{4k}n + \dfrac{17}{24}(p^{4k}-1) \right) \equiv a(n) \pmod{2}. \end{equation} Moreover, we can rewrite $\eqref{e52}$ as \begin{equation}\label{e56} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) \equiv a(n) \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L + a\left( \dfrac{1}{p^2}\left( n-\dfrac{17}{24}(p^2-1) \right) \right) \pmod{2}. \end{equation} If $p\nmid (24n+17)$, then $p \nmid \left( n -\dfrac{17}{24}(p^2-1) \right)$ and $\dfrac{n-\frac{17}{24}(p^2-1)}{p^2}$ is not an integer. Therefore \begin{equation}\label{e57} \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \equiv 1 \pmod{2}, \end{equation} and \begin{equation}\label{e58} a\left( \dfrac{n-\frac{17}{24}(p^2-1)}{p^2} \right) = 0. \end{equation} On account of $\eqref{e56}$ -- $\eqref{e58}$, we deduce that \begin{equation}\label{e59} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) \equiv a(n) \pmod{2}. \end{equation} Replacing $n$ by $p^2n + \dfrac{17}{24}(p^2-1)$ in $\eqref{e55}$ and employing $\eqref{e59}$, we see that for $k\geq 0$, \begin{equation}\label{e60} a\left( p^{4k+2}n + \dfrac{17}{24}(p^{4k+2}-1) \right) \equiv a(n) \pmod{2}. \end{equation} Equations $\eqref{e10}$ and $\eqref{e60}$ readily yield $\eqref{t3.3.1}$.\\ \underline{\textbf{Case - 2 :} $\omega(p) \equiv 1 \pmod{2}$} In order to prove $(ii)$, we replace $n$ by $p^2n+\dfrac{17}{24}p(p^2-1)$ in $\eqref{e16}$.
\begin{align} a\left( p^5n + \dfrac{17}{24}(p^6-1) \right) & = a\left( p^3\left( p^2n+\dfrac{17}{24}p(p^2-1) \right) + \dfrac{17}{24}(p^4-1) \right) \nonumber\\ & \equiv \omega(p)a\left(p^3n+\dfrac{17}{24}(p^4-1)\right)-p^3a\left( pn + \dfrac{17}{24}(p^2-1) \right) \nonumber \\ & \equiv \left[ \omega^2(p) - p^3\right]a\left( pn+\dfrac{17}{24}(p^2-1) \right) -p^3\omega(p)a(n/p). \label{e22} \end{align} Now, as $\omega(p) \equiv 1 \pmod{2}$ and $p\geq5$ is an odd prime, we have $\omega^2(p)-p^3=0 \pmod{2}$, and therefore $\eqref{e22}$ becomes \begin{equation}\label{e23} a\left( p^5n + \dfrac{17}{24}(p^6-1) \right) \equiv a(n/p) \pmod{2}. \end{equation} Replacing $n$ by $pn$ in $\eqref{e23}$, we obtain \begin{equation}\label{e24} a\left( p^6n +\dfrac{17}{24}(p^6-1) \right) \equiv a(n) \pmod{2}. \end{equation} Using equation $\eqref{e24}$ repeatedly, we see that for every integers $k\geq 1$, \begin{equation}\label{e25} a\left( p^{6k+5}n + \dfrac{17}{24}(p^{6k}-1) \right) \equiv a(n) \pmod{2}. \end{equation} Observe that if $p\nmid n$, then $a(n/p)=0$. Thus $\eqref{e23}$ yields \begin{equation}\label{e26} a\left( p^5n + \dfrac{17}{24}(p^6-1) \right) \equiv 0 \pmod{2}. \end{equation} Replacing $n$ by $p^5n+\dfrac{17}{24}(p^6-1)$ in $\eqref{e25}$ and using $\eqref{e26}$, we obtain \begin{equation}\label{e27} a\left( p^{6k+5}n + \dfrac{17}{24}(p^{6k+6}-1) \right) \equiv 0 \pmod{2}. \end{equation} In particular, for $1 \leq j \leq p-1$, we have from $\eqref{e27}$, that \begin{equation}\label{e27.1} a\left( p^{6k+6}n + p^{6k+5}j + \dfrac{17}{24}\left( p^{6k+6}-1 \right)\right) \equiv 0 \pmod{2}. \end{equation} Congruence $\eqref{t2.2}$ readily follows from $\eqref{e10}$ and $\eqref{e27.1}$. \\ Again, from $\eqref{e29}$ and $\eqref{e32}$, we have \begin{equation}\label{e33} \gamma(n) \equiv a\left( \dfrac{17}{24}(p^2-1)\right) + 1 + \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \pmod{2}. \end{equation} By $\eqref{e28}$ and $\eqref{e33}$, \begin{align} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) &\equiv \left( a\left( \dfrac{17}{24}(p^2-1) \right) + 1 + \left( \dfrac{n - \frac{17}{24}(p^2-1)}{p} \right)_L \right) a(n) \nonumber \\ &\quad + a\left( \dfrac{1}{p^2} \left( n - \dfrac{17}{24}(p^2-1) \right) \right) \pmod{2} \label{e34}. \end{align} Replacing $n$ by $pn + \dfrac{17}{24}(p^2-1)$ in $\eqref{e34}$ yields \begin{equation}\label{e35} a\left( p^3n + \dfrac{17}{24}(p^4-1) \right) \equiv \left( a\left( \dfrac{17}{24}(p^2-1) \right) + 1 \right)a\left( pn + \dfrac{17}{24}(p^2-1) \right) + a(n/p). \end{equation} Since $\omega(p) \equiv 1 \pmod{2}$ so from $\eqref{e32}$, we have $a\left( \dfrac{17}{24}(p^2-1) \right) \equiv 0 \pmod{2}$. Therefore, replacing $n$ by $pn$ in $\eqref{e35}$, we find that \begin{equation}\label{e42} a\left( p^{4}n + \dfrac{17}{24}(p^4-1) \right) \equiv a\left( p^2n+\dfrac{17}{24}(p^2-1) \right) + a(n) \pmod{2}. \end{equation} Replacing $n$ by $p^2n + \dfrac{17}{24}(p^2-1)$ in $\eqref{e42}$ yields \begin{equation}\label{e43} a\left( p^6n + \dfrac{17}{24}(p^6-1) \right) \equiv a\left( p^4n + \dfrac{17}{24}(p^4-1) \right) + a\left( p^2n+ \dfrac{17}{24}(p^2-1) \right) \pmod{2}. \end{equation} Combining $\eqref{e42}$ and $\eqref{e43}$ yields \begin{equation}\label{e44} a\left( p^6n + \dfrac{17}{24}(p^6-1) \right) \equiv a(n) \pmod{2}. 
\end{equation} By $\eqref{e44}$ and iteration, we deduce that for $n,k \geq 0$, \begin{equation}\label{e45} a\left( p^{6k}n + \dfrac{17}{24}(p^{6k}-1) \right) \equiv a(n) \pmod{2}.\\ \end{equation} Moreover, we can rewrite $\eqref{e34}$ as \begin{multline} \label{e46} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) \equiv \left(1 + \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \right) a(n)\\ + a\left( \dfrac{1}{p^2}\left( n-\dfrac{17}{24}(p^2-1) \right) \right) \pmod{2}. \end{multline} If $p\nmid (24n+17)$ and $p \neq 19$, then $p \nmid \left( n -\dfrac{17}{24}(p^2-1) \right)$ and $\dfrac{n-\frac{17}{24}(p^2-1)}{p^2}$ is not an integer. Therefore \begin{equation}\label{e47} \left( \dfrac{n-\frac{17}{24}(p^2-1)}{p} \right)_L \equiv 1 \pmod{2}, \end{equation} and \begin{equation}\label{e48} a\left( \dfrac{n-\frac{17}{24}(p^2-1)}{p^2} \right) = 0. \end{equation} On account of $\eqref{e46}$ -- $\eqref{e48}$, we deduce that \begin{equation}\label{e49} a\left( p^2n + \dfrac{17}{24}(p^2-1) \right) \equiv 0 \pmod{2}. \end{equation} Replacing $n$ by $p^2n + \dfrac{17}{24}(p^2-1)$ in $\eqref{e45}$ and employing $\eqref{e49}$, we see that for $k\geq 0$, \begin{equation}\label{e50} a\left( p^{6k+2}n + \dfrac{17}{24}(p^{6k+2}-1) \right) \equiv 0 \pmod{2}. \end{equation} Congruence $\eqref{t3.3}$ follows from $\eqref{e10}$ and $\eqref{e50}$.\\ \section{Proof of Theorem \ref{t4}}\label{s3} In this section, we prove Theorem \ref{t4} using the theory of Hecke eigenforms. \noindent Magnifying $q \to q^8$ and multiplying $q$ on both sides of $\eqref{e1.0.0.0}$, we have \begin{equation}\label{e4} \sum_{n \geq 0}T_{2}(n)q^{8n+1}=q\dfrac{f_{16}^3}{f_8^3} \equiv \Delta(z) = \sum_{n=1}^{\infty}\tau(n)q^n \pmod{2}. \end{equation} Hence, from $\eqref{e2.0.3.3}$, we have \begin{equation*} \Delta(z)=qf_1^{24} \equiv qf_8^3 \equiv \sum_{n=0}^{\infty}q^{(2n+1)^2} \pmod{2}. \end{equation*} Therefore, we have \begin{equation}\label{e5} \sum_{n \geq 0}T_{2}(n)q^{8n+1}\equiv \sum_{n=0}^{\infty}q^{(2n+1)^2} \pmod{2}. \end{equation} If $s\equiv1 \pmod{8}$ and $\left(\dfrac{s}{p}\right) = -1$, then, for any $n\geq 0$, $8np+s$ cannot be an odd square. This implies that the coefficients of $q^{8np+s}$ in the left-hand side of $\eqref{e5}$ must be even. It follows that \begin{equation}\label{e6} T_{2}\left( pn + \dfrac{s-1}{8} \right) \equiv 0 \pmod{2}. \end{equation} This proves $\eqref{e12.0.1}$.\\ Since $\tau(p) \equiv 0 \pmod{2}$ and $\Delta(n)$ is a Hecke eigenform, we have \begin{equation*} \Delta(z)\mid T_{p,12} = \tau(p)\Delta(z) \equiv 0 \pmod{2}. \end{equation*} By $\eqref{hecke1}$ and $\eqref{e4}$, we get \begin{equation*} \sum_{n \geq 0}T_{2}(n) q^{8n+1} \mid T_{p,12} \equiv \sum_{n = n_0}^{\infty}\left( T_{2}\left( \dfrac{pn-1}{8} \right) + T_{2}\left( \dfrac{n/p-1}{8}\right) \right)q^n \equiv 0 \pmod{2}. \end{equation*} If we write $m=\dfrac{n/p-1}{8}\in N $, then $\dfrac{pn-1}{8} = p^2m + \dfrac{p^2-1}{8}$. So, we have \begin{equation*} T_{2}\left( p^2m + \dfrac{p^2-1}{8} \right) + T_{2}(m) \equiv 0 \pmod{2}. \end{equation*} That is, \begin{equation*} T_{2}\left( p^2m + \dfrac{p^2-1}{8} \right) \equiv T_{2}(m) \pmod{2}. \end{equation*} By induction, for $k\geq 1$ we find that \begin{equation*} T_{2}\left( p^{2k}m + \dfrac{p^{2k}-1}{8} \right) \equiv T_{2}(m) \pmod{2}. \end{equation*} Considering $\eqref{e6}$, we get \begin{equation*} T_{2}\left( p^{2k+1}n+\dfrac{sp^{2k}-1}{8} \right) \equiv T_{2}\left( pn + \dfrac{s-1}{8} \right) \equiv 0 \pmod{2}. 
\end{equation*} This proves $\eqref{e12.0.2}$.\\ Since $\tau(p) \equiv 0 \pmod{2}$, $\eqref{tau2}$ gives \begin{equation*} \tau(p^{2k+1}) \equiv 0 \pmod{2}. \end{equation*} Applying $\eqref{tau1}$, we have \begin{equation*} \tau\left( p^{2k+1} \left( 8pn + r \right) \right) = \tau\left( p^{2k+1} \right)\tau\left( 8pn+r \right) \equiv 0 \pmod{2}. \end{equation*} It follows from $\eqref{e4}$ that \begin{equation*} T_{2}\left( p^{2k+2}n + \dfrac{rp^{2k+1}-1}{8} \right) \equiv 0 \pmod{2}. \end{equation*} This proves $\eqref{e12.0.3}$. \section{Proof of Theorem \ref{thm1.00}}\label{sec:thmn1} We use the theory of Hecke eigenforms to prove Theorems \ref{thm1.00}.\\ We have from \eqref{e50.1} that \begin{align*} \sum_{n \geq 0}T_2(9n+1)q^n \equiv 3f_1^3 \pmod{6}. \end{align*} This gives \begin{align*} \sum_{n \geq 0}T_2(9n+1)q^{8n+1} \equiv 3\eta(4z)^6 \pmod{6}. \end{align*} Let $\eta(4z)^6 = \sum_{n \geq 0} a(n)q^n.$ Then $a(n) = 0$ if $n\not\equiv 1\pmod{8}$ and for all $n\geq 0$, \begin{align}\label{1_1} T_2(9n+1) \equiv a(8n+1) \pmod 6. \end{align} By Theorem \ref{thm_ono1}, we have $\eta(4z)^6 \in S_3(\Gamma_0(16))$. Since $\eta(4z)^6$ is a Hecke eigenform (see, for example \cite{martin}), \eqref{hecke1} and \eqref{hecke3} yield \begin{align*} \eta(4z)^6|T_p = \sum_{n \geq 1} \left(a(pn) + \chi(p)p^2 a\left(\frac{n}{p}\right) \right)q^n = \lambda(p) \sum_{n \geq 1} a(n)q^n, \end{align*} which implies \begin{align}\label{1.1} a(pn) + \chi(p)p^2 a\left(\frac{n}{p}\right) = \lambda(p)a(n). \end{align} Putting $n=1$ and noting that $a(1)=1$, we readily obtain $a(p) = \lambda(p)$. Since $a(p)=0$ for all $p \not\equiv 1 \pmod{8}$, we have $\lambda(p) = 0$. From \eqref{1.1}, we obtain \begin{align}\label{new-1.1} a(pn) + \chi(p)p^2 a\left(\frac{n}{p}\right) = 0. \end{align} From \eqref{new-1.1}, we derive that for all $n \geq 0$ and $p\nmid r$, \begin{align}\label{1.2} a(p^2n + pr) = 0 \end{align} and \begin{align}\label{1.3} a(p^2n) = - \chi(p)p^2 a(n)\equiv a(n) \pmod{2}. \end{align} Substituting $n$ by $8n-pr+1$ in \eqref{1.2} and together with \eqref{1_1}, we find that \begin{align}\label{1.4} T_2\left(9p^2n + \frac{9p^2-1}{8}+ 9pr\frac{1-p^2}{8}\right) \equiv 0 \pmod 6. \end{align} Substituting $n$ by $8n+1$ in \eqref{1.3} and using \eqref{1_1}, we obtain \begin{align}\label{1.5} T_2\left(9p^2n + \frac{9p^2-1}{8}\right) \equiv T_2(9n+1) \pmod 6. \end{align} Since $p \geq 5$ is prime, so $8\mid (1-p^2)$ and $\gcd \left(\frac{1-p^2}{8} , p\right) = 1$. Hence when $r$ runs over a residue system excluding the multiple of $p$, so does $\frac{1-p^2}{8}r$. Thus \eqref{1.4} can be rewritten as \begin{align}\label{1.6} T_2\left(9p^2n + \frac{9p^2-1}{8}+ 9pj\right) \equiv 0 \pmod 6, \end{align} where $p \nmid j$. \par Now, $p_i \geq 5$ are primes such that $p_i \not\equiv 1 \pmod 8$. Since \begin{align*} 9p_1^2\dots p_{k}^2n + \frac{9p_1^2\dots p_{k}^2-1}{8}=9p_1^2\left(p_2^2\dots p_{k}^2n + \frac{p_2^2\dots p_{k}^2-1}{8}\right)+\frac{9p_1^2-1}{8}, \end{align*} using \eqref{1.5} repeatedly we obtain that \begin{align*} T_2\left(9p_1^2\dots p_{k}^2n + \frac{9p_1^2\dots p_{k}^2-1}{8}\right) \equiv T_2(9n+1) \pmod{6}, \end{align*} which implies \begin{equation}\label{1.7} T_2\left( p_1^2 \cdots p_k^2 n + \frac{p_1^2\cdots p_k^2 -1}{8} \right) \equiv T_2(n) \pmod{6}. \end{equation} Let $j\not\equiv 0\pmod{p_{k+1}}$. Then \eqref{1.6} and \eqref{1.7} yield \begin{align*} T_2\left(9p_1^2\dots p_{k+1}^2n + \frac{9p_1^2\dots p_{k}^2p_{k+1}(8j+p_{k+1})-1}{8}\right) \equiv 0 \pmod{6}. 
\end{align*} This completes the proof of the theorem. \section{Proof of Theorem \ref{thm1}}\label{sec:lacunary} To prove Theorem \ref{thm1}, we need the following lemmas. \begin{lemma}\label{lem2} For any positive integer $m>2$, we have \begin{align*} \dfrac{\eta(24\ell z)^k \eta(24z)^{p^{a+m}-k}}{\eta(24p^az)^{p^m}} \equiv \sum_{n \geq 0}T_{\ell,k}(n)q^{24n+k(\ell-1)} \pmod {p^{m}}. \end{align*} \end{lemma} \begin{proof} Consider \begin{align*} \mathcal{A}(z) =\dfrac{\eta(24z)^{p^a}}{\eta(24p^a z)}. \end{align*} We have, \begin{align*} \mathcal{A}^{p^{m}}(z) = \dfrac{\eta(24z)^{p^{m+a}}}{\eta(24p^a z)^{p^m}} \equiv 1 \pmod {p^{m+1}}. \end{align*} Define $\mathcal{B}_{\ell, p, m}(z)$ by $$\mathcal{B}_{\ell, p, m}(z)= \dfrac{\eta(24\ell z)^k}{\eta(24z)^k}\mathcal{A}^{p^{m}}(z).$$ \noindent Now, modulo $p^{m+1}$, we have \begin{align}\label{new-110} \mathcal{B}_{\ell, p, m}(z) &=\dfrac{\eta(24\ell z)^k}{\eta(24z)^k}\dfrac{\eta(24z)^{p^{m+a}}}{\eta(24p^az)^{p^m}}\equiv \dfrac{\eta(24\ell z)^k}{\eta(24z)^k}= q^{k(\ell-1)}\dfrac{f_{24\ell}^k}{f_{24}^k} \pmod{p^{m+1}}. \end{align} Combining \eqref{eq:gf-lk} and \eqref{new-110}, we obtain the required result. \end{proof} \begin{lemma}\label{lem1} Let $\ell$ be an integer greater than one, $p$ be a prime divisor of $\ell$, $a$ the largest positive integer such that $p^a$ divides $\ell$, and $t$ be a positive integer such that $t$ divides $\ell$. If $p^{2a} \geq \ell$ and $k \leq p^{m+a}(1-p^{2s-2a})$, then for every positive integer $m$, we have \begin{align*} \mathcal{B}_{\ell, p, m}(z) \in M_{\frac{p^m(p^a-1)}{2}}\left(\Gamma_0(576 \cdot \ell), \chi_1(\bullet)\right), \end{align*} where the Nebentypus character $$\chi_1(\bullet)=\left(\dfrac{(-1)^{\frac{p^m(p^a-1)}{2}} (24\ell)^{k}\cdot (24)^{p^{m+a}-k} \cdot (24p^a)^{-p^m}}{\bullet}\right)_L.$$ \end{lemma} \begin{proof} First we verify the first, second and third hypotheses of Theorem \ref{thm_ono1}. The weight of the eta-quotient is $\dfrac{p^m(p^a-1)}{2}$. Suppose the level of the eta-quotient $\mathcal{B}_{\ell, p, m}(z)$ is $24\ell u$, where $u$ is the smallest positive integer satisfying the following congruence. \begin{equation*} 24 \ell u \left( \dfrac{k}{24\ell}+ \dfrac{p^{m+a}-k}{24} - \dfrac{p^m}{24p^a} \right) \equiv 0 \pmod{24}. \end{equation*} Equivalently, we have \begin{equation}\label{e100} u\left( k(1-\ell)+\ell p^{m-a}(p^{2a}-1) \right) \equiv0 \pmod{24}. \end{equation} Hence, from \eqref{e100}, we conclude that the level of the eta-quotient $\mathcal{B}_{\ell, p, m}$ is $576 \ell$ for $m>a$. By Theorem \ref{thm_ono1.1}, the cusps of $\Gamma_0(576\ell)$ are given by $\dfrac{c}{d}$ where $d \mid 576\ell$ and $\text{gcd}(c,d)=1$. Now $\mathcal{B}_{\ell, p, m}$ is holomorphic at a cusp $\dfrac{c}{d}$ if and only if \begin{equation}\label{e101} \ell(p^{m+a}-k)\dfrac{\text{gcd}(d,24)^2}{\text{gcd}(d,24\ell)^2}-\ell p^{m-a}\dfrac{\text{gcd}(d,24p^a)^2}{\text{gcd}(d,24\ell)^2}+k \geq 0. \end{equation} To check the positivity of \eqref{e101}, we have to find all the possible divisors of $576\ell$. We define $$\mathcal{S}=\{ 2^{r_1}3^{r_2}tp^s : 0 \leq r_1 \leq 6, 0\leq r_2 \leq 2, t \mid \ell \hspace{1.5mm} \text{but} \hspace{1.5mm} p\nmid t, \hspace{1.5mm} \text{and}\hspace{1.5mm} 0 \leq s \leq a \}$$ to be the set of all divisors $d$ of $576\ell$. We observe that the values of $\dfrac{\text{gcd}(d,24)^2}{\text{gcd}(d,24\ell)^2}$ and $\dfrac{\text{gcd}(d,24p^a)^2}{\text{gcd}(d,24\ell)^2}$ are $\dfrac{1}{t^2p^{2s}}$ and $\dfrac{1}{t^2}$, respectively, for divisors $d \in \mathcal{S}$.
Now, substituting these values in the left side of \eqref{e101}, we see that it equals \begin{equation}\label{e102} \dfrac{\ell}{t^2}\left( \dfrac{p^{m+a}-k}{p^{2s}} - p^{m-a} \right) + k. \end{equation} When $s=a$, \eqref{e101} becomes $\left( k-\dfrac{k\ell}{t^2p^{2a}} \right) \geq 0$, which is true as $p^{2a}\geq \ell$. For $0 \leq s < a$, the quantity in \eqref{e102} is greater than equal to zero as $m>a$ and $k \leq p^{m+a}(1-p^{2s-2a})$ (by assumption). Hence the quantity in \eqref{e102} are greater than or equal to $0$ when $p^{2a} \geq \ell$ and $k \leq p^{m+a}(1-p^{2s-2a})$. Therefore, the orders of vanishing of $\mathcal{B}_{\ell, p, m}(z)$ at the cusp $\dfrac{c}{d}$ is nonnegative. So $\mathcal{B}_{\ell, p, m}(z)$ is holomorphic at every cusp $\dfrac{c}{d}$. We have also verified the Nebentypus character by Theorem \ref{thm_ono1}. Hence $\mathcal{B}_{\ell, p, m}(z)$ is a modular form of weight $\dfrac{p^m(p^a-1)}{2}$ on $\Gamma_0(576\cdot \ell)$ with Nebentypus character $\chi_1(\bullet)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1}]Suppose $m>1$ is a positive integer. From Lemma~\ref{lem1}, we have $$ \mathcal{B}_{\ell, p, m}(z)=\dfrac{\eta(24\ell z)^k\eta(24z)^{p^{m+a}-k}}{\eta(24p^a z)^{p^m}} \in M_{\frac{p^m(p^a-1)}{2}}\left(\Gamma_0(576 \cdot \ell), \chi_1(\bullet)\right),$$ Also the Fourier coefficients of the eta-quotient $\mathcal{B}_{\ell, p, m}(z)$ are integers. So, by Theorem \ref{serre} and Lemma \ref{lem2}, we can find a constant $\alpha>0$ such that $$ \# \left\{n\leq X:T_{\ell, k}(n)\not\equiv 0 \pmod{p^m} \right\}= \mathcal{O}\left(\dfrac{X}{\log^{\alpha}{}X}\right). $$ Hence $$\lim\limits_{X\to +\infty}\dfrac{ \# \left\{n\leq X:T_{\ell, k}(n)\equiv 0 \pmod{p^m} \right\}}{X}=1.$$ This allows us to prove the required divisibility by $p^m$ for all $m>a$ which trivially gives divisibility by $p^m$ for all positive integer $m\leq a$. This completes the proof of Theorem \ref{thm1}. \end{proof} \section{Concluding Remarks}\label{sec:conc} \begin{enumerate} \item There are several more isolated congruences that we have found, which we do not mention in this paper. For instance, for all $n\geq 0$ it is easy to prove using the theory of modular forms, that \[ T_4(3^4n+57)\equiv 0 \pmod{12}. \] It would be interesting to make some more systematic studies and prove more general Ramanujan-type that the $T_{\ell, k}(n)$ function satisfies. \item There seems to be an infinite family of congruences for $T_9(n)$, similar to a result of Baruah and Das \cite{BaruahDas}. We leave it as an open problem to find such families. \item In Theorem \ref{thm1}, the bound on $k$ is given by $k \leq p^{m+a} \left( 1 - p^{2s-2a} \right) + \epsilon$, where we believe that there exists a positive integer $\epsilon$, although its exact value is yet to be determined. We leave this as an open problem. \item We close the paper with the following conjecture. \begin{conjecture} Let $p\geq 5$ be a prime with $\left( \dfrac{-2}{p} \right)_{L}=-1$ and, let $t$ be a positive integer with $(t,6) = 1$ and $p\mid t$. Then for all $n\geq 0$ and $1\leq j \leq p-1$, we have \begin{equation*} T_2\left( 9\cdot t^2n + \frac{9\cdot t^2j}{p} + \frac{57\cdot t^2-1}{8} \right) \equiv 0 \pmod{6}. \end{equation*} \end{conjecture} \end{enumerate} \section*{Acknowledgements} The second author was partially supported by a Start-Up Grant from Ahmedabad University (Reference No. URBSASI24A5). 
The third author was partially supported by an institutional fellowship for doctoral research from Tezpur University, Assam, India. \bibliographystyle{alpha} \bibliography{ref} \end{document}
2412.11999v2
http://arxiv.org/abs/2412.11999v2
Pattern-avoiding shallow permutations
\documentclass{article} \usepackage{subfigure} \usepackage{authblk} \usepackage{cellspace}\setlength\cellspacetoplimit{3pt} \setlength\cellspacebottomlimit{3pt} \usepackage{makecell} \setcellgapes{3pt} \usepackage{fullpage,enumerate,comment} \makeatletter \def\keywords{\xdef\@thefnmark{}\@footnotetext} \makeatother \usepackage{tikz} \newcommand\Matching[2]{ \begin{tikzpicture} \draw(-0.5,0) -- ++ (#1+1,0); \foreach \x in {0,...,#1}{ \draw[circle,fill] (\x,0)circle[radius=1mm]node[below]{}; } \foreach \x/\y in {#2} { \draw(\x,0) to[bend left=45] (\y,0); } \end{tikzpicture}} \usetikzlibrary{matrix} \usetikzlibrary{arrows,automata} \usetikzlibrary{positioning} \usepackage{amssymb,mathrsfs,amsmath,amsthm,bm,diagbox,float} \usepackage{multirow} \usetikzlibrary{matrix,arrows} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \usepackage{hyperref} \usepackage{mathtools} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \renewcommand{\S}{\mathcal{S}} \newcommand{\C}{\mathcal{C}} \newcommand{\A}{\mathcal{A}} \newcommand{\T}{\mathcal{T}} \DeclareMathOperator{\Av}{Av} \DeclareMathOperator{\cyc}{cyc} \DeclareMathOperator{\lrmax}{lrmax} \DeclareMathOperator{\red}{red} \DeclareMathOperator{\des}{des} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[theorem]{Question} \newcommand{\todo}[1]{\vspace{2 mm}\par\noindent \marginpar[\flushright\textsc{ToDo}]{\textsc{ToDo}}\framebox{\begin{minipage}[c]{\textwidth} \tt #1 \end{minipage}}\vspace{2 mm}\par} \title{Pattern-avoiding shallow permutations} \author[1]{Kassie Archer} \author[1]{Aaron Geary} \author[1]{Robert P. Laudone} \affil[1]{{\small Department of Mathematics, United States Naval Academy, Annapolis, MD, 21402}} \affil[ ]{{\small Email: \{karcher, geary, laudone\}@usna.edu }} \date{} \begin{document} \maketitle \begin{abstract} Shallow permutations were defined in 1977 to be those that satisfy the lower bound of the Diaconis-Graham inequality. Recently, there has been renewed interest in these permutations. In particular, Berman and Tenner showed they satisfy certain pattern avoidance conditions in their cycle form and Woo showed they are exactly those whose cycle diagrams are unlinked. Shallow permutations that avoid 321 have appeared in many contexts; they are those permutations for which depth equals the reflection length, they have unimodal cycles, and they have been called Boolean permutations. Motivated by this interest in 321-avoiding shallow permutations, we investigate $\sigma$-avoiding shallow permutations for all $\sigma \in \S_3$. To do this, we develop more general structural results about shallow permutations, and apply them to enumerate shallow permutations avoiding any pattern of length 3. \end{abstract} \keywords{{\bf Keywords:} permutations; avoidance; shallow; enumeration.} \section{Introduction and background} Let $\S_n$ denote the set of permutations on $[n]=\{1,2,\ldots,n\}$ and we write these permutations in their one-line notation as $\pi = \pi_1\pi_2\ldots\pi_n$ where $\pi_i:=\pi(i)$. There are multiple measures of disorder or disarray of a permutation $\pi \in \S_n$. 
Three of these, namely the total displacement $D(\pi)$, the length $I(\pi)$, and the reflection length $T(\pi)$, are connected by the Diaconis-Graham inequalities \cite{DG77}: \[ I(\pi) + T(\pi) \leq D(\pi) \leq 2 \, I(\pi). \] Here, the total displacement $D(\pi)$, also called Spearman's measure of disarray, is given by \[ D(\pi) = \sum_{i = 1}^n |\pi_i - i|. \] The length $I(\pi)$ is equal to the minimal number of simple transpositions required to produce $\pi$. It is also called the inversion number and is given by \[ I(\pi) = \sum_{i=1}^n |\{i < j \; | \; \pi_i > \pi_j\}|. \] The reflection length $T(\pi)$ is the minimal number of transpositions required to produce $\pi$ from the identity permutation, which was shown by Cayley in 1849 to be \[ T(\pi) = n - \cyc(\pi), \] where $\cyc(\pi)$ denotes the number of cycles in the disjoint cycle decomposition of $\pi.$ It is well-known that the upper Diaconis-Graham inequality is achieved, i.e., $D(\pi)=2I(\pi)$, when $\pi$ avoids the pattern 321, meaning there is no set of indices $i<j<k$ with $\pi_i>\pi_j>\pi_k$. A permutation is called {\em shallow} when it satisfies the lower inequality, i.e., when $I(\pi) + T(\pi) = D(\pi)$. We note that shallow permutations have recently been investigated from various perspectives: In \cite{BT22}, the authors use pattern functions to characterize the cycle form of these permutations in terms of pattern-avoidance, and in \cite{W22}, the author proves that shallow permutations are exactly those permutations whose cycle diagram is equivalent to the unknot when viewed as a knot diagram. Permutations which satisfy both the upper and lower bound of the Diaconis-Graham inequalities have been well-studied in their own right. These permutations are exactly those that are shallow 321-avoiding permutations; these have been called Boolean permutations \cite{RT1,RT2,T07}, unimodal permutations \cite{E87} (because of their unimodal cycle form), and are characterized as avoiding both 321 and 3412 \cite{PT15}. It was stated, without proof, in \cite{DG77}, that these permutations are enumerated by $F_{2n-1}$, the $(2n-1)$-st Fibonacci number. A proof of this fact does appear in other places, including \cite{PT15}, and we provide an independent proof of this fact in this paper, directly using shallowness. Motivated by this interesting answer regarding 321-avoiding shallow permutations, in this paper we investigate shallow permutations which avoid $\sigma$ for $\sigma \in \S_3$. In Section \ref{sec:shallow}, we describe certain properties of general shallow permutations which we use in follow-on sections. In particular, we show how to build shallow permutations from smaller ones, and we prove that all shallow permutations necessarily avoid certain mesh patterns. In Sections \ref{sec:132}, \ref{sec:231}, \ref{sec:123}, and \ref{sec:321} we enumerate $\sigma$-avoiding shallow permutations for $\sigma \in \S_3$. Additionally, we enumerate $\sigma$-avoiding shallow permutations by number of descents and by three symmetry properties. In particular, we enumerate those shallow $\sigma$-avoiding permutations that are fixed under inverse, reverse-complement, and reverse-complement-inverse. The sections are ordered by the complexity of the proofs involved, with the exception of $\sigma=321$, which we do last since these have been investigated in previous papers. We conclude the paper with open questions and directions for future study. 
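As a quick illustration of these three statistics, the following Python sketch (with our own function names; it is not taken from the cited references) computes $D(\pi)$, $I(\pi)$, and $T(\pi)$ for a permutation given in one-line notation and tests whether the lower Diaconis-Graham bound is attained, so that shallow permutations of small length can be listed by brute force.
\begin{verbatim}
# Illustrative computation of D, I, T and the shallowness test; names are ours.
from itertools import permutations

def D(p):  # total displacement
    return sum(abs(v - i) for i, v in enumerate(p, start=1))

def I(p):  # inversion number (= length)
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def T(p):  # reflection length = n - number of cycles
    seen, cycles = set(), 0
    for i in range(1, len(p) + 1):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j - 1]
    return len(p) - cycles

def is_shallow(p):  # lower Diaconis-Graham bound attained
    return I(p) + T(p) == D(p)

for n in range(1, 7):  # brute-force counts of shallow permutations
    print(n, sum(is_shallow(p) for p in permutations(range(1, n + 1))))
\end{verbatim}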
\renewcommand{\arraystretch}{2.2} \begin{table}[ht] \centering \begin{tabular}{c|c|c} $\sigma$ & number of shallow $\sigma$-avoiding permutations & Theorem \\ \hline\hline $132$ & \multirow{ 2}{*}{$t_n(\sigma)=F_{2n-1}$}& \multirow{2}{*}{Theorem~\ref{thm: 132}}\\ \cline{1-1} $213$ & & \\ \hline $231$ & \multirow{ 2}{*}{g.f. $T_{\sigma}(x) = \dfrac{1-3x+2x^2-x^3-x^4-x^5}{1-4x+4x^2-2x^3-x^4-x^5}$}& \multirow{2}{*}{Theorem~\ref{thm:231}} \\ \cline{1-1} $312$ & & \\ \hline $123$ & g.f. $T_{\sigma}(x) =\dfrac{1-3x+11x^3-13x^4+7x^5+6x^6+3x^7}{(1-x)^4 (1 - 4 x^2 + x^4)}$ & Theorem~\ref{thm: 123}\\ \hline $321$ & $t_n(\sigma)=F_{2n-1}$ & Theorem~\ref{thm: 321} \\ \end{tabular} \caption{In this table, $t_n(\sigma)$ denotes the number of shallow permutations avoiding a given pattern $\sigma$, and $T_{\sigma}(x) = \sum_{n\geq 0} t_n(\sigma)x^n$ is the corresponding generating function.} \label{tab:my_label} \end{table} \section{Structure of shallow permutations}\label{sec:shallow} Let $\mathcal{T}_n$ denote the permutations $\pi \in \S_n$ that are shallow and let $t_n = |\mathcal{T}_n|$. We will often make use of the following recursive formulation of shallow permutations that is due to Hadjicostas and Monico \cite{HM13}. \begin{theorem}\cite[Theorem 4.1]{HM13} \label{thm: shallowRecursive} Suppose $\pi \in \mathcal{S}_n$ and for $n\geq 2$, define \[ \pi^R = \begin{cases} \pi_1\pi_2\ldots\pi_{n-1} & \pi_n=n \\ \pi_1\pi_2\ldots\pi_{j-1}\pi_n\pi_{j+1}\ldots \pi_{n-1} & \pi_j=n \text{ with } j<n \end{cases} \] Then $\pi =1 \in \S_1$ is shallow, and when $n\geq 2,$ \begin{itemize} \item if $\pi_n = n$, then $\pi$ is shallow exactly when $\pi^R$ is, and \item if $\pi_j=n$ with $j<n$, then $\pi$ is shallow exactly when $\pi^R$ is shallow and $\pi^R_j = \pi_n$ is a left-to-right maximum or right-to-left minimum in $\pi^R$. \end{itemize} \end{theorem} Let us see an example. Suppose $\pi= 421635\in \T_6$, a shallow permutation of 6 elements. Notice $\pi_4=6$, and $\pi_6=5$; applying the $\pi^R$ map we see \[ 421\underline{\mathbf{6}}3\underline{\mathbf{5}} \xlongrightarrow{\pi^R} 421\underline{\mathbf{5}}3, \] and 42153 is $\in \T_5$. Notice that we can use the inverse of this map to construct new shallow permutations from old ones. Given any $\tau\in\T_{n-1}$ and any position $i$ for which $\tau_i$ is either a left-to-right maximum or a right-to-left minimum, we can construct a permutation $\pi$ for which $\pi^R=\tau$ by taking $\pi_j=\tau_j$ for $j\neq i$, $\pi_i=n$, and $\pi_n=\tau_i.$ Notice that we can get every shallow permutation on $[n]$ from the shallow permutations on $[n-1]$ in this way since every shallow permutation $\pi\in\T_n$ has an image $\tau=\pi^R$ in $\T_{n-1}.$ We will define a similar operation $\pi^L$ which acts on the left of $\pi$. To this end, let us define certain symmetries of permutations in general. We also give the names of permutations which are fixed under these symmetries, which will be relevant throughout this paper. \begin{itemize} \item We denote by $\pi^{-1}$ the algebraic inverse of $\pi.$ That is, $\pi^{-1}_j=i$ if and only if $\pi_i=j$. This corresponds to a reflection of the diagram of the permutation (given by points $(j,\pi_j)$ for each $j\in[n]$) about the main diagonal. Permutations which are their own inverse are called \emph{involutions}. \item We define $\pi^{rc}$ to be the reverse-complement of $\pi$, so that $\pi^{rc}_{n+1-i} = n+1-j$ if $\pi_i=j$. This corresponds to a $180^\circ$ rotation of the diagram of the permutation. 
Permutations satisfying $\pi=\pi^{rc}$ are called \emph{centrosymmetric}; see for example~\cite{E07,LO10}. \item Finally, let $\pi^{rci}:=(\pi^{rc})^{-1}$ be the reverse-complement-inverse of the permutation, corresponding to the reflection of the diagram of the permutation about the anti-diagonal. We will refer to permutations satisfying $\pi = \pi^{rci}$ as \emph{persymmetric}. \end{itemize} In the following proposition we show that each of these three symmetries preserves shallowness. \begin{proposition} \label{prop: invRc} If $\pi$ is shallow, then so are the permutations $\pi^{-1}$, $\pi^{rc},$ and $\pi^{rci}$. \end{proposition} \begin{proof} To see that $\pi^{-1} \in \mathcal{T}_n$ notice first that $D(\pi^{-1}) = \sum_{i=1}^n |\pi^{-1}_{\pi_i} - \pi_i| = \sum_{i=1}^n |i - \pi_i| = D(\pi)$. Next, $I(\pi^{-1}) = I(\pi)$ since this is also the length of $\pi^{-1}$ and if $\pi = s_{i_1}\cdots s_{i_k}$ then $\pi^{-1} = s_{i_k} \cdots s_{i_1}$. Similarly, $T(\pi^{-1})= T(\pi)$ since the cycle types of $\pi$ and $\pi^{-1}$ are the same; if $\pi = c_1 \cdots c_\ell$ then $\pi^{-1} = c_1^{-1} \cdots c_\ell^{-1}$. So since $I(\pi) + T(\pi) = D(\pi)$, the same is true for $\pi^{-1}$, meaning $\pi^{-1}$ is shallow. We similarly check $\pi^{rc}$. First, \[ D(\pi^{rc}) = \sum_{i=1}^n |\pi^{rc}_i - i| = \sum_{i=1}^n |(n-\pi_{n-i+1}+1)-i| = \sum_{i=1}^n |(n-i+1) - \pi_{n-i+1}| = D(\pi). \] Next, $I(\pi^{rc}) = I(\pi)$ since $\pi$ has an inversion in position $(i,j)$ if and only if $\pi^{rc}$ has one in position $(n-i+1,n-j+1)$. Indeed $\pi_i > \pi_j$ with $i < j$ if and only if $\pi^{rc}_{n-i+1} = n-\pi_i +1 < n-\pi_j + 1 = \pi^{rc}_{n-j+1}$ with $n-i+1 > n-j+1$. Finally $\pi^{rc}$ and $\pi$ have the same cycle type because $\pi^{rc} = \sigma^{-1} \pi \sigma$ where $\sigma = n(n-1)(n-2)\cdots21$. Two permutations have the same cycle type if and only if they are conjugate. Finally, $\pi^{rci}$ preserves shallowness because the reverse-complement and inverse do. \end{proof} We can now use Proposition~\ref{prop: invRc} to define an operation $\pi^L$ analogous to $\pi^R$, acting on the left of $\pi$, which also preserves shallowness and is defined as follows. Here, the reduction operator, $\red$, takes the elements of its input of length $\ell$ and returns a permutation in $\S_\ell$ in the same relative order. For example, $\red(48291)= 34251$ and $\red(9482) = 4231.$ \begin{theorem} \label{thm: shallowRecursiveL} Suppose $\pi \in \mathcal{S}_n$ and for $n\geq 2$, define \[ \pi^L = \begin{cases} \red(\pi_2\ldots\pi_{n}) & \pi_1=1 \\ \red(\pi_2\ldots\pi_{j-1}\pi_1\pi_{j+1}\ldots \pi_{n}) & \pi_j=1 \text{ with } j>1 \end{cases} \] Then $\pi =1 \in \S_1$ is shallow, and when $n\geq 2,$ \begin{itemize} \item if $\pi_1 = 1$, then $\pi$ is shallow exactly when $\pi^L$ is, and \item if $\pi_j=1$ with $j>1$, then $\pi$ is shallow exactly when $\pi^L$ is shallow and $\pi^L_{j-1} = \pi_1-1$ is a left-to-right maximum or right-to-left minimum in $\pi^L$. \end{itemize} \end{theorem} \begin{proof} This follows immediately from Theorem \ref{thm: shallowRecursive} and Proposition \ref{prop: invRc} since $\pi^L = [(\pi^{rc})^R]^{rc}$. \end{proof} Let us see an example. If $\pi= 421635\in \T_6$, we can apply $\pi^L$ to get \[ \underline{\mathbf{4}}2\underline{\mathbf{1}}635 \xlongrightarrow{\pi^L} \text{red}(2\underline{\mathbf{4}}635)=1\underline{\mathbf{3}}524, \] and note that $13524\in \T_5$.
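For concreteness, the two operators and the recursive shallowness test of Theorem~\ref{thm: shallowRecursive} can be sketched in a few lines of Python (the function names below are ours; permutations are $1$-based tuples). The sketch reproduces the worked examples $421635^R = 42153$ and $421635^L = 13524$.
\begin{verbatim}
# Sketch of the right/left operators and the recursive shallowness test.
def pi_R(p):
    """pi^R: move pi_n into the slot of n and drop the last entry."""
    n = len(p)
    j = p.index(n)
    if j == n - 1:
        return tuple(p[:-1])
    q = list(p[:-1])
    q[j] = p[-1]
    return tuple(q)

def pi_L(p):
    """pi^L: move pi_1 into the slot of 1, drop the first entry, reduce."""
    j = p.index(1)
    q = list(p[1:])
    if j > 0:
        q[j - 1] = p[0]
    return tuple(x - 1 for x in q)

def is_shallow_rec(p):
    """Recursive test of Theorem (shallowRecursive)."""
    n = len(p)
    if n <= 1:
        return True
    j = p.index(n)
    if j == n - 1:
        return is_shallow_rec(pi_R(p))
    q = pi_R(p)
    x = q[j]                               # the moved entry pi_n
    lr_max = all(q[i] < x for i in range(j))
    rl_min = all(q[i] > x for i in range(j + 1, n - 1))
    return (lr_max or rl_min) and is_shallow_rec(q)

print(pi_R((4, 2, 1, 6, 3, 5)))            # (4, 2, 1, 5, 3)
print(pi_L((4, 2, 1, 6, 3, 5)))            # (1, 3, 5, 2, 4)
print(is_shallow_rec((4, 2, 1, 6, 3, 5)))  # True
\end{verbatim}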
Similar to our observation about the right operator above, this left operator can also be ``inverted'' to produce all shallow permutations on $[n]$ from those on $[n-1]. $ We denote by $\pi^{R^n}$ and $\pi^{L^n}$ the application the right and left operators from Theorems~\ref{thm: shallowRecursive} and \ref{thm: shallowRecursiveL}, respectively, applied $n$ times. For example, $\pi^{L^3} = ((\pi^L)^L)^L.$ On occasion, after applying the left operator to a permutation, we will work with the entries of the resulting permutation without reducing, for ease of notation. When we do this, we mark the entries. For example, we may write $(421635)^{L}$ as $2'4'6'3'5'$ with $i'=i-1$, instead of writing 13524. More generally if $\pi = \pi_1 \pi_2 \ldots \pi_{j-1} \pi_j \pi_{j+1} \ldots \pi_n$ and $\pi_j = 1$ we may refer to $\pi^L$ as $\pi^L=\pi_1' \pi_2' \ldots \pi_{j-1}' \pi_1' \pi_{j+1}' \ldots \pi_n'$ with $\pi_i'=\pi_i-1$ for each $i\neq j$ instead of writing $\pi^L=(\pi_1-1) (\pi_2-1) \ldots (\pi_{j-1}-1) (\pi_1-1) (\pi_{j+1}-1) \ldots (\pi_n-1)$. Next, let us make some general observations about shallowness. In the following lemma, we will see that shallowness is preserved under direct sums. Here, if $\pi \in \S_n$ and $\sigma\in\S_m$, then $\tau = \pi\oplus\sigma$ denotes the permutation in $\S_{m+n}$ with $\tau_i=\pi_i$ for all $i\in[n]$ and $\tau_{j+n}=\sigma_j$ for all $j\in[m]$. For example, we have that $4312\oplus 53142 = 431297586.$ \begin{lemma}\label{lem: dir sum} If $\pi\in\T_n$ and $\sigma\in\T_m$, then $\pi\oplus\sigma\in\T_{n+m}.$ \end{lemma} \begin{proof} First, notice that $D(\pi \oplus \sigma) = \sum_{i=1}^n |\pi_i - i| + \sum_{i=1}^{m} |(\sigma_{i}+n) - (i+n)|= D(\pi) +D(\sigma)$. Next, $I(\pi \oplus \sigma) = I(\pi) + I(\sigma)$ since there can be no additional inversions between the elements of $\pi$ and $\sigma$. Finally, $T(\pi \oplus \sigma) = T(\pi) + T(\sigma)$ since the number of cycles in $\pi \oplus \sigma$ is the sum of the number of cycles in $\pi$ plus the number of those in $\sigma$. It then follows from the original definition of a shallow permutation that if $\pi$ and $\sigma$ are shallow, so is $\pi \oplus \sigma$. \end{proof} In the next lemma, we see that we can always add $n$ to the beginning and 1 to the end of a shallow permutation of length $n-2$ and the result is a shallow permutation of length $n$, and we can similarly delete those elements from a shallow permutation of length $n$ to get a shallow permutation of length $n-2.$ \begin{lemma}\label{lem: n1} Let $\pi \in \S_{n-2}$ and $\tau = (n)(\pi_1+1)(\pi_2+1)\ldots(\pi_n+1)1$. Then $\pi\in \T_{n-2}$ if and only if $\tau\in\T_n$. \end{lemma} \begin{proof} Let $\pi\in \T_{n-2}$. Then by Lemma \ref{lem: dir sum}, $\sigma=1\oplus\pi \in \T_{n-1}$. By Theorem \ref{thm: shallowRecursive} and with $\sigma_1=1$ a left to right max, we apply the inverse recursion from Theorem \ref{thm: shallowRecursive} which replaces the 1 with $n$ and moves 1 to the end. Thus we arrive at a shallow permutation in the form of $\tau$ as defined in the statement of the Lemma. \end{proof} By repeatedly applying this lemma, we can obtain the following corollary, which we will use frequently in this paper. \begin{corollary}\label{cor:decreasing} The decreasing permutation $\delta_n=n(n-1)\ldots21$ is shallow. \end{corollary} \begin{remark} It is possible to prove a stronger version of the above results. If $\pi \in \mathcal{T}_n$, $\tau \in \mathcal{T}_m$ and $\pi_i = i$, then the inflation of $i$ by $\tau$ remains shallow. 
Indeed, it is straightforward to check that the sum of the inversion number and reflection length of the inflated permutation still equals its total displacement. Lemma \ref{lem: dir sum} is the inflation of $12$ by two shallow permutations and Lemma \ref{lem: n1} is the inflation of $321$ at $2$ by a shallow permutation. We do not use this stronger result, so omit its full proof. \end{remark} We end this section by noting that shallow permutations necessarily avoid certain mesh patterns (see \cite{BC11}). We will denote by $3\boldsymbol{4}\boldsymbol{1}2$ the permutation pattern $3412$ where the ``4'' is equal to $n$ and the ``1'' is equal to 1. For example, $\pi = 642981537$ contains the subsequence $4913$ which is a $3\boldsymbol{4}\boldsymbol{1}2$. It also contains $6815$ which is a 3412 pattern, but not a $3\boldsymbol{4}\boldsymbol{1}2$ pattern since the ``4'' in this pattern is not equal to $n=9.$ We denote by $\underline{3}41\underline{2}$ the permutation pattern $3412$ where the ``3'' occurs in the first position and the ``2'' occurs in the last position. For example, the permutation $\pi = 672198435$ contains the subsequence $6835$ which is a $\underline{3}41\underline{2}$ pattern since 6, which is the ``3'' in this pattern, appears in the first position and 5, which is the ``2'' in this pattern, appears in the last position. \begin{theorem}\label{thm: 3n12} If $\pi\in\T_n$, then $\pi$ avoids the patterns $3\boldsymbol{4}\boldsymbol{1}2$ and $\underline{3}41\underline{2}$. \end{theorem} \begin{proof} Let us proceed by contradiction. Suppose $\pi$ contains a $3\boldsymbol{4}\boldsymbol{1}2$ pattern. We will show that upon repeatedly applying the right operator $R$, we will eventually move an element that will be neither a left-to-right maximum nor a right-to-left minimum in the new permutation, contradicting that $\pi$ is shallow. To this end, suppose $\pi_r\pi_i\pi_j\pi_s$ is a $3\boldsymbol{4}\boldsymbol{1}2$ pattern, so $\pi_i=n$, $\pi_j=1$, and $\pi_r>\pi_s.$ Notice that when we apply the right operator once, we get \[\pi^R = \pi_1\ldots \pi_{i-1}\pi_n\pi_{i+1}\ldots \pi_{j-1}1\pi_{j+1}\ldots\pi_{n-1}.\] If $s=n$, then we have a contradiction since $1<\pi_s<\pi_r$ and so is neither a left-to-right maximum nor a right-to-left minimum in $\pi^R.$ If $s<n$, then we must have that $\pi_n$ is a left-to-right maximum, or else $\pi$ would not be shallow, so the element in position $i$ is still larger than all elements in positions 1 through $i-1$. Now, let us continue to apply the right operator, $R$. Each time, the last element is either deleted (if it is the largest element), moved to a position to the right of 1 (if the largest element is also to the right of 1), or it is moved to the left of 1, in which case it must be a left-to-right maximum. Note that each time an element is moved to the left of 1, it must be in a position greater than or equal to $i$ since each element moved over is itself larger than all elements in positions 1 through $i-1.$ Eventually, $\pi_{s}$ will be moved to the left of 1, and it will be moved to a position greater than or equal to $i$. However, $\pi_s<\pi_r$ with $r<i$. Thus $\pi_s$ cannot be a left-to-right maximum in this permutation. It also cannot be a right-to-left minimum since $1$ is to its right. Thus the original permutation is not shallow. The other avoidance follow from Proposition~\ref{prop: invRc} and the fact that $\pi$ avoids $3\boldsymbol{4}\boldsymbol{1}2$ if and only if $\pi^{-1}$ avoids $\underline{3}41\underline{2}$. 
\end{proof} \section{Shallow permutations that avoid 132 or 213}\label{sec:132} In this section, we enumerate shallow permutations that avoid the pattern 132. We also consider the number of such permutations with a given number of descents, as well as those that exhibit certain symmetry. Let $\mathcal{T}_n(\sigma)$ denote the permutations $\pi \in S_n$ that are shallow and avoid $\sigma$. We set $t_n(\sigma) = |\mathcal{T}_n(\sigma)|$. Note that by Proposition \ref{prop: invRc}, the reverse-complement gives a bijection between $\mathcal{T}_n(132)$ and $\mathcal{T}_n(213)$, so proving Theorem~\ref{thm: 132} for shallow permutations avoiding 132 establishes it for 213 as well. \subsection{Enumeration of 132-avoiding shallow permutations} In this subsection, we will prove the following theorem. \begin{theorem} \label{thm: 132} For $n \geq 1$ and $\sigma \in \{132, 213\}$, $t_n(\sigma) = F_{2n-1}$, the $(2n-1)$st Fibonacci number. \end{theorem} We will first establish a few lemmas. This first lemma guarantees that for any shallow 132-avoiding permutation $\pi$, we must have that if $\pi$ does not start or end with $n$, it must end with $1$ and in the case that $n$ is not in the second-to-last position, $\pi$ must start with $(n-1).$ \begin{lemma} \label{lem: 132-ends-in-1} For $n\geq 3$, suppose $\pi \in \mathcal{T}_n(132)$. \begin{itemize} \item If $\pi_j =n$ with $2 \leq j \leq n-1$ then $\pi_n = 1$, and \item If $\pi_j = n$ with $2\leq j \leq n-2$ then $\pi_1 = n-1$. \end{itemize} \end{lemma} \begin{proof} Let us consider the first bullet point. Suppose $\pi \in \mathcal{T}_n(132)$ with $\pi_j=n$ for $2\leq j\leq n-1$. Note that $\pi_i>\pi_k$ for any $i<j<k$ since $\pi$ avoids 132. By Theorem \ref{thm: shallowRecursive}, $\pi^R_j=\pi_n$ must be either a left-to-right maximum or right-to-left minimum. It cannot be a left-to-right maximum because $\pi^R_{j-1} = \pi_{j-1} > \pi_n = \pi^R_j$. So $\pi^R_j$ must be a right-to-left minimum. However, since $\pi$ is $132$ avoiding we know that $1$ appears to the right of $n$ in $\pi$, so the only way for $\pi^R_j$ to be a right-to-left minimum is if $\pi^R_j = 1$, and thus $\pi_n = 1$. Now let us prove the second bullet point. Since $\pi$ is $132$ avoiding and $j > 1$, $n-1$ must occur to the left of $n$ in $\pi$. This means that $n-1$ occurs to the left of $1$ in $\pi^R$. Suppose $\pi^R_k = n-1$ with $1\leq k < j$; we will show that $k = 1$. Again by Theorem \ref{thm: shallowRecursive}, $\pi^{R^2}_k$ must be a left-to-right maximum or right-to-left minimum. But now it cannot possibly be a right-to-left minimum because $\pi^{R^2}_j = 1$ by the first bullet point and $k < j \leq n-2$. So $\pi^{R^2}_k$ must be a left-to-right maximum. Since $\pi$ was $132$ avoiding, every entry to the left of $1$ in $\pi^R$ will be larger than every entry to the right of $1$. So the only way $\pi^{R^2}_k$ is a left-to-right maximum is if $k = 1$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: 132}.] Let $a_n = |\mathcal{T}_n(132)|$ and $b_n$ be the number of $\pi \in \mathcal{T}_n(132)$ ending with $1$. Notice that $b_n$ is also the number of $\pi \in \mathcal{T}_n(132)$ beginning with $n$ since $\pi$ is a 132-avoiding shallow permutation if and only if $\pi^{-1}$ is. By Lemma \ref{lem: 132-ends-in-1}, we know that each $\pi \in \mathcal{T}_n(132)$ either begins with $n$, ends with $n$, or ends with $1$. There are clearly $a_{n-1}$ such permutations that end in $n$ (by removing that fixed point) and by Lemma~\ref{lem: n1}, there are $a_{n-2}$ such permutations that start with $n$ and end with 1.
Thus it follows that \[ a_n = a_{n-1} + 2b_n - a_{n-2}. \] Next, let us find a recurrence for $b_n$; let $\pi\in\mathcal{T}_n(132)$ with $\pi_n=1$ and consider the position of $n$. If $\pi_{n-1} = n$, then $\pi^R \in \mathcal{T}_n(132)$ ending in $1$ and so there are $b_{n-1}$ such permutations. If $\pi_j = n$ for $2 \leq j \leq n-2$, then $\pi_1 = n-1$ and so $\pi^{RL}$ is the 132-avoiding permutation obtained by deleting 1 from the end and $n-1$ from the front. Since in $\pi^R$, $\pi_n=1$ is clearly a right-to-left minimum and in $\pi^{RL}$, $\pi_1=(n-1)'=n-2$ will clearly be a right-to-left minimum, this permutation is also shallow. Since the resulting permutation is any shallow 132-avoiding permutation that does not end in $n-2$, there are $a_{n-2} - a_{n-3}$ such permutations. Finally, if $\pi_1 = n$, there are clearly $a_{n-2}$ such permutations by Lemma~\ref{lem: n1}. Altogether, we find \[ b_n = b_{n-1} + 2a_{n-2} - a_{n-3}. \] Substituting this back into our recursion for $a_n$, \begin{align*} a_n &= a_{n-1} + 2(b_{n-1} + 2a_{n-2} - a_{n-3}) - a_{n-2}\\ &= a_{n-1} + (a_{n-2} +2b_{n-1} - a_{n-3}) + 2a_{n-2} - a_{n-3}\\ &= 2a_{n-1} + 2a_{n-2} - a_{n-3} \end{align*} which is precisely the recursion satisfied by $F_{2n-1}$. \end{proof} \subsection{132-avoiding shallow permutation by descent number} In this subsection, we will refine the enumeration of 132-avoiding shallow permutations by their descent number. We first present the following lemma. \begin{lemma} \label{lem: 132-descent-inverse} If $\pi \in \S_n(132)$ has $k$ descents, then $\pi^{-1} \in \S_n(132)$ also has $k$ descents. \end{lemma} \begin{proof} We will proceed by strong induction on $n$. The result is clear for $n \leq 3$. So assume $n \geq 4$ and $\pi \in \S_n(132)$. This means $\pi = (\tau \oplus 1) \ominus \sigma$ for some $\tau \in \S_k(132)$ and $\sigma \in S_{n-k-1}(132)$. In this case, $\pi^{-1} = \sigma^{-1} \ominus (\tau^{-1} \oplus 1)$. By induction $\sigma^{-1}$ and $\tau^{-1}$ have the same number of descents as $\sigma$ and $\tau$, we lose one descent in position $j$ of $\pi$, but gain an additional descent in position $n-j$ of $\pi^{-1}$ that does not come from any of the descents in $\sigma^{-1}$ or $\tau^{-1}$. We therefore preserve the number of descents, the result follows by induction. \end{proof} \begin{example} Consider $\pi = 534621 \in \S_6(132)$ with $3$ descents. $\pi = (312 \oplus 1) \ominus (21)$ and $\pi^{-1} = 652314$ which is $(21)^{-1} \ominus ((312)^{-1} \oplus 1)$. The number of descents in $(21)^{-1}$ and $(312)^{-1}$ are the same as in $21$ and $312$, we lose the descent in position $4$ of $\pi$, but gain a descent in position $6-4 = 2$ of $\pi^{-1}$ when we transition from $\sigma^{-1}$ to $\tau^{-1}$. \end{example} We now adapt the proof of Theorem \ref{thm: 132} to keep track of descents to arrive at the following result. \begin{theorem} For $n\geq 2$, the number of shallow, 132-avoiding permutations with $k$ descents is equal to \[\displaystyle\binom{2n-2-k}{k}.\] \end{theorem} \begin{proof} Let $a_{n,k}$ be the number of shallow, 132-avoiding permutations of length $n$ with $k$ descents and let $b_{n,k}$ be the number of such permutations that end with $\pi_n=1$. 
Note that by Lemma~\ref{lem: 132-descent-inverse}, $b_{n,k}$ is also the number of shallow, 132-avoiding permutations with $k$ descents that starts with $\pi_1=n.$ As in the proof of Theorem~\ref{thm: 132}, we know that for any permutation $\pi\in\mathcal{T}_n(132),$ by Lemma~\ref{lem: 132-ends-in-1}, $\pi$ has either $\pi_n = 1, \pi_n = n$ or $\pi_1 = n$. It is clear there are $a_{n-1,k}$ shallow, 132-avoiding permutations with $k$ descents ending in $n$ since adding an $n$ to the end preserves shallowness and does not change the number of descents. For $n\geq 3,$ it also clear that there are $a_{n-2,k-2}$ permutations that both begin with $n$ and end with 1, which is seen by deleting both $n$ and $1$ to obtain a shallow permutation that still avoids 132 and has two fewer descents. This means \[ a_{n,k} = a_{n-1,k} + 2b_{n,k} - a_{n-2,k-2}. \] Now, let us consider those permutations with $\pi_n=1$. As before, if $\pi_{n-1}=n$, then there are still $k$ descents in $\pi^R$, which still ends in 1, and so $b_{n-1,k}$ permutations. If $\pi_j=n$ for $2\leq j \leq n-2$, then $\pi_1=n-1$ by Lemma~\ref{lem: 132-ends-in-1}. If $j=2$, then $\pi^{RL}$ has one fewer descent and begins with its largest element $n-2.$ If $3\leq j\leq n-2$, then $\pi^{RL}$ has two fewer descents, and it must not end or begin with its largest element $n-2.$ Thus in total, there are $b_{n-2,k-1} + a_{n-2,k-2} - b_{n-2,k-2}-a_{n-3,k-2}$ permutations. Finally, if $\pi_1=n$, as stated above there are $a_{n-2,k-2}$ such permutations. In total, we have \[b_{n,k} = 2a_{n-2,k-2} + b_{n-1,k} + b_{n-2,k-1} - b_{n-2,k-2}-a_{n-3,k-2}.\] We claim that $b_{n,k} = a_{n-1,k-1}.$ We will prove this claim by strong induction on $n$. It is straightforward to check this claim for values of $n\leq 3$ so let us assume $n\geq 4$. Note that \begin{align*} b_{n,k} &= 2a_{n-2,k-2} + b_{n-1,k} + b_{n-2,k-1} - b_{n-2,k-2}-a_{n-3,k-2} \\ & = 2b_{n-1,k-1} + a_{n-2,k-1} + b_{n-2,k-1} - a_{n-3,k-3} - b_{n-2,k-1} \\ & = a_{n-2,k-1} + 2b_{n-1,k-1} - a_{n-3,k-3} \\ & = a_{n-1,k-1}, \end{align*} where the last equality follow from the recurrence for $a_{n,k}$ above. Notice that by taking $b_{n,k}=a_{n-1,k-1}$, we now obtain a recurrence for $a_{n,k}$ as follows: \[ a_{n,k}=a_{n-1,k} + 2a_{n-1,k-1} - a_{n-2,k-2}, \] which together with the initial conditions is exactly the recurrence satisfied by $a_{n,k} = \binom{2n-2-k}{k}.$ \end{proof} \subsection{132-avoiding shallow permutations with symmetry} In this subsection, we consider 132-avoiding shallow permutations that are involutions (so that $\pi=\pi^{-1}$), that are centrosymmetric (so that $\pi=\pi^{rc}$), and that are persymmetric (so that $\pi=\pi^{rci}$). \begin{theorem} For $n\geq 1$, the number of shallow, 132-avoiding involutions of length $n$ is $F_{n+1}$, where $F_{n+1}$ is the $(n+1)$-st Fibonacci number. \end{theorem} \begin{proof} Let $i_n$ be the number of shallow, 132-avoiding permutations of length $n$ that are involutions. We will show that $i_n=i_{n-1}+i_{n-2}$ and with initial conditions $i_1=1$ and $i_2=2$, we have the Fibonacci sequence shifted by 1. There are clearly $i_{n-1}$ shallow, 132-avoiding involutions of length $n$ with $\pi_n=n$ since adding the fixed point $n$ to the end of an involution in $\mathcal{T}_{n-1}(132)$ gives us a permutation that is still an involution, still avoids 132, and is still shallow by Theorem \ref{thm: shallowRecursive}. If $\pi\in\mathcal{T}_{n-1}(132)$ does not end in $n$, then by Theorem~\ref{lem: 132-ends-in-1}, $\pi_1=n$ or $\pi_n=1$. 
However, if $\pi$ is an involution, then one of these will imply the other. Note that by Lemma \ref{lem: n1}, we can add an $n$ to the beginning and 1 to the end of an involution in $\mathcal{T}_{n-2}(132)$, and the resulting permutation is still shallow. Additionally the permutation still avoids 132 and is still an involution since we have only added the 2-cycle $(1,n)$. Thus there are $i_{n-2}$ involutions in $\mathcal{T}_n(132)$ beginning with $n$ and ending with 1. The recurrence, and thus the result, follows. \end{proof} \begin{theorem} For $n\geq2$, the number of 132-avoiding shallow centrosymmetric permutations is $\lceil (n+1)/2\rceil$. \end{theorem} \begin{proof} Notice that if $\pi$ avoids 132 and $\pi=\pi^{rc}$, $\pi$ must also avoid $132^{rc}=213.$ By Lemma~\ref{lem: 132-ends-in-1}, we know that either $\pi_n=n$, $\pi_n=1$, or $\pi_1=n$. However, if $\pi=\pi^{rc}$, then $\pi_1=n$ implies that $\pi_n=1.$ Therefore, either $\pi_n=n$ or $\pi_1=n$ and $\pi_n=1$. In the first case, since $\pi_n=n$ and $\pi$ avoids 213, $\pi$ is the increasing permutation. In the second case, by Lemma~\ref{lem: n1}, deleting $n$ and 1 yields a shallow 132-avoiding centrosymmetric permutation of length $n-2.$ Letting $c_n$ be the number of centrosymmetric permutations in $\mathcal{T}_n(132)$, we thus have $c_n = c_{n-2}+1$, which together with the initial conditions that $c_1=1$ and $c_2=2$ implies the result. \end{proof} \begin{theorem} Let $p_n(132)$ be the number of 132-avoiding shallow persymmetric permutations and let $P_{132}(x)$ be the generating function for $p_n(132)$. Then \[P_{132}(x) = \frac{1-x^2+2x^3}{(1-x)(1-2x^2-x^4)}.\] \end{theorem} \begin{proof} Let $n\geq 5$ and let $\pi\in\mathcal{T}_n(132)$ with $\pi=\pi^{rci}$. We use Lemma~\ref{lem: 132-ends-in-1} to determine a few possible cases. First, if $\pi_n=n$, since $\pi=\pi^{rci},$ we must have $\pi_1=1$, which implies that $\pi$ is the increasing permutation. If $\pi_{n-1}=n$, then by Lemma~\ref{lem: 132-ends-in-1}, we must have $\pi_n=1$. Since $\pi=\pi^{rci}$, we have $\pi_1=2,$ which implies that $\pi=2345\ldots n1$ since $\pi$ is 132-avoiding. Note that this permutation is clearly shallow. Next, consider a permutation where $\pi_j=n$ for some $2\leq j\leq n-2$. By Lemma~\ref{lem: 132-ends-in-1}, this permutation must end with 1 and start with $n-1$. But this implies that $\pi_2=n$ and so $\pi = (n-1)n\pi_3\ldots\pi_{n-1} 1$. Note that $\pi^{RL}$ can be obtained by deleting $n$ and $1$ from $\pi.$ This permutation is still shallow, still avoids 132, is still persymmetric, and furthermore begins with its largest element. If we let $q_n(132)$ be the number of persymmetric permutations in $\T_n(132)$ that begin with $n$, we thus have \[ p_n(132) = 2+q_n(132)+q_{n-2}(132).\] Similarly considering those that end with $1$ (or equivalently start with $n$, since $\pi$ is persymmetric if and only if $\pi^{-1}$ is), we clearly have $p_{n-2}(132)$ permutations that start with $n$ and end with $1$ since removing these will leave a persymmetric shallow permutation avoiding 132. Applying the same case analysis to those that end with $1$, we have \[q_n(132) = 1+p_{n-2}(132)+q_{n-2}(132).\] Letting $Q_{132}(x)$ be the generating function for $q_n(132)$, taking into account the initial conditions, we get \[Q_{132}(x) = x+\frac{x^4}{1-x} + x^2P_{132}(x) + x^2Q_{132}(x)\] and \[ P_{132}(x) = 1+x^2+x^3 + \frac{2x^4}{1-x} + (1+x^2)Q_{132}(x).
\] Solving for $Q_{132}(x)$, plugging the result into the equation for $P_{132}(x)$, solving for $P_{132}(x),$ and then simplifying gives the result in the statement of the theorem. \end{proof} \section{Shallow permutations that avoid 231 or 312}\label{sec:231} In this section, we enumerate shallow permutations that avoid the pattern 231. We also consider the number of such permutations with a given number of descents, as well as those that exhibit certain symmetry. Note that by Proposition \ref{prop: invRc} $\mathcal{T}_n(231) = \mathcal{T}_n(312)$, and so Theorem~\ref{thm:231} holds for shallow permutations avoiding 312 as well. \subsection{Enumeration of 231-avoiding shallow permutations} \begin{theorem}\label{thm:231} Let $T_{231}(x) = \sum_{n\geq 0}t_n(231)x^n$ be the generating function for $t_n(231)$. Then, \[T_{231}(x) = \frac{1-3x+2x^2-x^3-x^4-x^5}{1-4x+4x^2-2x^3-x^4-x^5}.\] \end{theorem} We will prove this theorem via a series of lemmas. First, we prove that permutations of a particular form built from decreasing permutations are shallow. \begin{lemma}\label{lem:shallow-n,n-1} If $\pi\in\S_n$ is of one of the permutations below: \begin{itemize} \item $21\ominus(\delta_j\oplus\delta_k)$, \item $\delta_i\ominus(1\oplus\delta_k)$, or \item $\delta_i\ominus(\delta_j\oplus 1)$, \end{itemize} where $i,j,k\geq 0$, then $\pi$ is a shallow permutation. \end{lemma} \begin{proof} For the first bullet point, notice that $\pi^{LL}(21\ominus(\delta_j\oplus\delta_k)) = \delta_{j-2} \oplus (12 \oplus \delta_{k})$ which is a direct sum of shallow permutations and is therefore shallow. Furthermore, $n'=n-1$ is a left-to-right maximum in $\pi^{L}(21\ominus(\delta_j\oplus\delta_k))$ and $(n-1)''=n-3$ is a left-to-right maximum in $\pi^{LL}(21\ominus(\delta_j\oplus\delta_k))$. Therefore, Theorem \ref{thm: shallowRecursiveL} implies the original permutation is shallow. We prove the second and third bullet points by induction on the length of the permutation. Let us first consider the second bullet point, when $\pi = \delta_i\ominus(1\oplus\delta_k)\in\S_n$. If $k=0$, then $\pi = \delta_{i+1}$ which is shallow, and if $i=0$, then $\pi$ is a direct sum of shallow permutations and thus is shallow. Therefore, let us consider the cases when $i,k \geq 1$. It is straightforward to check the base cases when $n\leq 3,$ so let us assume $n\geq 4.$ Notice that $\pi^L= (\delta_{i-1}\oplus 1)\ominus \delta_k$ and $\pi^{LR} = (\delta_{i-1} \ominus (1 \oplus \delta_{k-1}))$. Since $n'=n-1$ is a left-to-right maximum of $\pi^L$, $2'=1$ is a right-to-left minimum of $\pi^{LR}$, and $\pi^{LR}$ is shallow by induction, we conclude by Theorems \ref{thm: shallowRecursive} and \ref{thm: shallowRecursiveL} that $\pi$ is also shallow. The result follows by induction. An identical argument works for the third bullet point since $(\delta_i \ominus(\delta_j \oplus 1))^{LR} = (\delta_{i-1}\ominus(\delta_{j-1} \oplus 1)$. \end{proof} In order to enumerate $\T_n(231)$, we will decompose these permutations into a direct sum of two shallow permutations that avoid 231, one of which begins with $\pi_1=n$. In order to enumerate those permutations in $\T_n(231)$ that begin with $n$ we will decompose them further, enumerating them in terms of those that begin with $\pi_1\pi_2=n(n-1)$. 
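Before carrying out this decomposition, we note that the counts $t_n(231)$ for small $n$ can be generated by brute force and compared with the series expansion of $T_{231}(x)$ from Theorem~\ref{thm:231}. The Python sketch below (with our own helper names; it uses \texttt{sympy} for the series expansion and is illustrative only, playing no role in the proofs) prints both so that they can be compared.
\begin{verbatim}
# Brute-force counts of shallow 231-avoiding permutations vs. the stated g.f.
from itertools import permutations
import sympy as sp

def is_shallow(p):
    n = len(p)
    D = sum(abs(v - i) for i, v in enumerate(p, start=1))
    I = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    seen, cyc = set(), 0
    for i in range(1, n + 1):
        if i not in seen:
            cyc += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j - 1]
    return I + (n - cyc) == D

def avoids_231(p):  # no i<j<k with p[k] < p[i] < p[j]
    n = len(p)
    return not any(p[k] < p[i] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

counts = [sum(is_shallow(p) and avoids_231(p)
              for p in permutations(range(1, n + 1))) for n in range(1, 8)]

x = sp.symbols('x')
T = (1 - 3*x + 2*x**2 - x**3 - x**4 - x**5) / \
    (1 - 4*x + 4*x**2 - 2*x**3 - x**4 - x**5)
print(counts)                      # counts for n = 1, ..., 7
print(sp.series(T, x, 0, 8))       # constant term corresponds to n = 0
\end{verbatim}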
\begin{lemma}\label{lem:bc-231} Suppose $b_n$ is the number of permutations in $\T_n(231)$ with $\pi_1=n$ and let $c_n$ be the number of permutations in $\T_n(231)$ with $\pi_1=n$ and $\pi_2=n-1.$ Then we have \[b_n=\sum_{i=1}^{n-1} b_ic_{n-i+1}.\] \end{lemma} \begin{proof} We will show that if $\alpha\in\T_m(231)$ satisfies $\alpha_1=m$, and $\beta\in\T_\ell(231)$ satisfies $\beta_1=\ell$ and $\beta_2=\ell-1$, then the permutation $\pi$ defined by $\pi_1=m+\ell-1$, $\pi_i=\alpha_i$ for $2\leq i\leq m$, and $\pi_{m+j-1}=\beta_{j}+m-1$ for $2\leq j\leq \ell$ is also a shallow permutation that avoids 231 and begins with its largest element. In other words, taking $n=m+\ell-1$, we have \[\pi=n\alpha_2\alpha_3\ldots \alpha_m(n-1)\beta_3'\beta_4'\ldots \beta_\ell'\] where $\beta'_i=\beta_i+m-1$ for $3\leq i\leq \ell$. Let us first see that this permutation is also shallow. Note that since $\alpha$ and $\beta$ are shallow, we have that $I(\alpha) + m-\cyc(\alpha) = D(\alpha)$ and $I(\beta) + \ell-\cyc(\beta) = D(\beta)$. It will be enough for us to show that $I(\pi) + n-\cyc(\pi) = D(\pi)$. First, notice that $I(\pi) = I(\alpha)+I(\beta).$ Indeed, if $(i,j)$ is an inversion of $\pi$ (so that $i<j$ and $\pi_i>\pi_j$), then we have a few cases to consider. If $1\leq i,j\leq m$, then $(i,j)$ is also an inversion of $\alpha$, and in fact, all inversions of $\alpha$ are counted this way. If $(1,j)$ is an inversion of $\pi$ with $m+1\leq j \leq n$, then $(1,j-m+1)$ is an inversion of $\beta$ (since $\pi_1=n$). If $(i,j)$ is an inversion of $\pi$ with $m+1\leq i,j\leq n$, then $(i-m+1,j-m+1)$ is an inversion of $\beta$. Furthermore, the previous two cases count all inversions of $\beta$. Finally, since $\pi_r<\pi_s$ for all $2\leq r\leq m$ and $m+1\leq s\leq n$, there are no other inversions of $\pi.$ Next, let us show that $\cyc(\pi) =\cyc(\alpha)+\cyc(\beta)-1.$ Notice that any cycles of $\alpha$ that do not contain $1$ are still cycles of $\pi$ since their values and positions are unchanged. Similarly, all cycles of $\beta$ that do not contain $1$ correspond to cycles of $\pi$ with values scaled up by $m-1.$ Let $(1, m, a_3, \ldots,a_r)$ and $(1,\ell, b_3,\ldots,b_s)$ be the cycles in $\alpha$ and $\beta$, respectively, that contain $1.$ Then in $\pi$, we have the corresponding cycle $(1,n, b_3+m-1, \ldots, b_s+m-1, m+1, a_3,\ldots,a_r)$. Finally, let us consider displacement; we will see that $D(\pi) = D(\alpha)+D(\beta).$ Indeed we have \begin{align*} D(\pi) &= \sum_{i=1}^n|\pi_i-i| \\ &=(n-1) + \sum_{i=2}^m|\pi_i-i| + \sum_{i=m+1}^n|\pi_i-i| \\ & =(n-1) + \sum_{i=2}^m|\alpha_i-i| + \sum_{j=2}^\ell|\beta_j+m-1-(m+j-1)| \\ & =(n-1) + D(\alpha) -(m-1) + D(\beta)-(\ell-1) \\ & = D(\alpha)+ D(\beta), \end{align*} where the last equality holds since $n=m+\ell-1.$ Taken altogether, we can see that \begin{align*} I(\pi) +n-\cyc(\pi) &= I(\alpha)+I(\beta) + (m+\ell-1) -(\cyc(\alpha)+\cyc(\beta)-1) \\ &= I(\alpha)+m-\cyc(\alpha)+I(\beta) + \ell -\cyc(\beta) \\ &= D(\alpha)+D(\beta) \\ &=D(\pi). \end{align*} Finally, $\pi$ avoids 231: no 231 pattern can begin with $\pi_1=n$, every entry in positions $2,\ldots,m$ is smaller than every entry in positions $m+1,\ldots,n$, and $\alpha$ and $\beta$ themselves avoid 231. \end{proof} \begin{remark} One could also use Berman and Tenner's characterization of shallow permutations in \cite{BT22} to prove Lemma \ref{lem:bc-231} by considering the cycle form of $\pi$. We opted for a different proof to avoid introducing additional terminology. \end{remark} \begin{lemma}\label{lem:c-231} Let $c_n$ be the number of permutations in $\T_n(231)$ with $\pi_1=n$ and $\pi_2=n-1.$ Then for $n\geq 5$, $c_n=3n-11$. \end{lemma} \begin{proof} Let $n\geq 5$ and $\pi$ be a shallow permutation that avoids 231.
Let us first consider permutations $\pi = n(n-1) \pi_3\ldots \pi_n$ so that $\pi_{k+3}=n-2$ for some $1\leq k \leq n-4$. Thus we have \[ \pi = n(n-1)\pi_3\ldots\pi_{k+2}(n-2)\pi_{k+4}\ldots \pi_n \] where $\{\pi_3,\ldots,\pi_{k+2}\}=\{1,2,\ldots,k\}$ and $\{\pi_{k+4}, \ldots, \pi_n\}=\{k+1,\ldots,n-3\}$ since $\pi$ avoids 231. Furthermore, suppose $\pi_s=1$ for some $3\leq s\leq k+2$. Notice that $\pi^L$ deletes the first element $n$, replaces $\pi_s=1$ with it, and rescales the elements, so that \[\pi^{L}=(n-2)(\pi_3-1)\ldots (\pi_{s-1}-1)(n-1)(\pi_{s+1}-1)\ldots (\pi_{k+2}-1)(n-3)(\pi_{k+4}-1)\ldots (\pi_n-1). \] If the original permutation $\pi$ is shallow, then $\pi^L$ is as well since $n-1$ is necessarily a left-to-right maximum in a permutation in $\S_{n-1}$. Next, we find $\pi^{LR}=(\pi^L)^R$ by replacing the largest element $n-1$ in $\pi^L$ with the last element $(\pi_n-1)$ and deleting $(\pi_n-1)$ from the end. This cannot be a left-to-right maximum in $\pi^{LR}$ since $\pi^{LR}$ necessarily starts with its largest element. Notice it can only be a right-to-left minimum if $\pi_n$ is the smallest element among $\{\pi_{k+4}, \ldots, \pi_n\}$ and if the largest element in $\pi^L$ appeared after all elements smaller than the last element of $\pi^L$. In other words, $\pi_n=k+1$ and $\pi_{k+2}=1$. Since $\pi$ avoids 231, this implies that \[\pi = n(n-1)k(k-1)\ldots 21(n-2)(n-3)\ldots (k+1).\] A similar argument proves that if $\pi$ ends with $n-2$, it must be of the form \[\pi = n(n-1)(n-3)(n-4)\ldots 21(n-2).\] Since by Lemma~\ref{lem:shallow-n,n-1}, these permutations are shallow, this gives us $n-3$ shallow permutations $\pi$ that avoid $231$ and begin with $\pi_1\pi_2=n(n-1)$ with the property that $\pi_3\neq n-2$. Next we need to show that there are $2n-8$ such permutations with $\pi_3=n-2.$ Suppose that $\pi = n(n-1)(n-2)(n-3)\ldots (n-m) \pi_{m+2}\ldots \pi_{s-1}(n-m-1)\pi_{s+1}\ldots \pi_n$ for some $m\geq 3$ and $m+3\leq s\leq n$. We will show that there is one shallow permutation avoiding 231 with $s=m+3$ and one with $s=n.$ First suppose $s=n$, so that $\pi_n=n-m-1$. Then by the same argument above (i.e.~by considering the shallowness of $\pi^{LR}$), we must have that $\pi = n(n-1)\ldots (n-m)(n-m-2)\ldots 21(n-m-1).$ If $s=m+3$, then by the same argument as above, $\pi = n(n-1)\ldots (n-m)1(n-m-1)(n-m-2)\ldots 32.$ Note that by Lemma~\ref{lem:shallow-n,n-1}, these are both shallow permutations. Now for the sake of contradiction, suppose $m+3<s<n$. Then, by the argument above, $\pi = n(n-1)\ldots(n-m)k(k-1)\ldots 21(n-m-1)(n-m-2)\ldots (k+1)$ for some $k\geq 2.$ By considering $\pi^{LL}$, we get a permutation in $\S_{n-2}$ equal to \[\pi^{LL} = (n-2)'\ldots(n-m)'k'(k-1)'\ldots 3'(n-1)'n'(n-m-1)'(n-m-2)'\ldots (k+1)'\] where $j'=j-2$ for each element $3\leq j\leq n$. Now taking $\pi^{LLR} = (\pi^{LL})^R$, we get \[\pi^{LLR} = (n-2)'\ldots(n-m)'k'(k-1)'\ldots 3'(n-1)'(k+1)'(n-m-1)'(n-m-2)'\ldots (k+2)'.\] Finally, we consider $\pi^{LLRR}.$ First suppose $k<n-m-2$. Since $\pi^{LLRR}$ must start with its largest element $(n-2)'=n-4$, the element $(k+2)'=k$ must not be a left-to-right maximum. However, since it is to the left of $(k+1)'=k-1$ it is also not a right-to-left minimum and thus the permutation $\pi$ is not shallow.
If $k=n-m-2$, then $\pi^{LLR}$ ends with $(n-m-1)'$, which is also smaller than $(n-2)'$ and larger than $(k+1)',$ and so will not be a left-to-right maximum or right-to-left minimum in $\pi^{LLRR}.$ Thus there are $2(n-4)$ shallow permutations avoiding 231 starting with $\pi_1\pi_2\pi_3=n(n-1)(n-2).$ Since we have a total of $n-3+2(n-4)$ shallow permutations that begin with $\pi_1\pi_2=n(n-1),$ the proof is complete. \end{proof} We now have the tools necessary to prove the main theorem. \begin{proof}[Proof of Theorem~\ref{thm:231}] As above, suppose $b_n$ is the number of permutations in $\T_n(231)$ with $\pi_1=n$ and let $c_n$ be the number of permutations in $\T_n(231)$ with $\pi_1=n$ and $\pi_2=n-1.$ Let $B(x) = \sum_{n\geq1} b_nx^n$ and $C(x) = \sum_{n\geq 2} c_nx^n.$ Since any $231$-avoiding permutation is the direct sum of a 231-avoiding permutation and a 231-avoiding permutation starting with $n$, we can use Lemma~\ref{lem: dir sum} to write that $t_n(231)=\sum_{i=0}^{n-1}t_i(231)b_{n-i}.$ Therefore, we have $T(x)=T(x)B(x)+1$. By Lemma~\ref{lem:bc-231}, we also have that $B(x) = \frac{1}{x}B(x)C(x)+x$. Finally, by Lemma~\ref{lem:c-231}, we know that for $n\geq 5,$ $c_n=3n-11$. Together with the fact that $c_2=1, c_3=1$, and $c_4=2,$ we have that \[C(x) = x^4 + \frac{x^2}{1-x} + \frac{3x^5}{(1-x)^2}.\] Since $T(x) = \dfrac{1}{1-B(x)}$ and $B(x) =\dfrac{x}{1-\frac{1}{x}C(x)},$ the result follows. \end{proof} \subsection{231-avoiding shallow permutations by descent number} We can refine the generating function in the previous section with respect to descents. Notice that since $312=231^{rc}$ and the reverse-complement preserves the number of descents, this result holds for 312-avoiding shallow permutations as well. For the purposes of this subsection, let $t_{n,k}(231)$ be the number of permutations in $\T_n(231)$ with $k$ descents, let $b_{n,k}$ be the number of such permutations that begin with $\pi_1=n$, and let $c_{n,k}$ be the number of such permutations that begin with $\pi_1\pi_2=n(n-1).$ Furthermore, let $T_{231}(x,t) = \sum t_{n,k}(231)x^nt^k$, $B(x,t) = \sum b_{n,k}x^nt^k$, and $C(x,t) = \sum c_{n,k}x^nt^k$. \begin{theorem} \[C(x,t) = t^2x^4 + \frac{tx^2}{1-xt} + \frac{3t^3x^5}{(1-xt)^2}\] and \[B(x,t) = \frac{x+C(x,t)-\frac{1}{t}C(x,t)}{1-\frac{1}{xt}C(x,t)} \] and finally, \[T(x,t) = \frac{1}{1-B(x,t)}.\] \end{theorem} \begin{proof} We first note that by the proof of Lemma~\ref{lem:c-231}, shallow permutations that avoid 231 and begin with $\pi_1\pi_2=n(n-1)$ must either be the decreasing permutation or have exactly one ascent. It follows that for each $n$, the coefficient of $x^n$ in $C(x,t)$ must be the polynomial $(3n-12)t^{n-2} + t^{n-1}$ for $n\geq 5.$ Therefore, \[C(x,t) = t^2x^4 + \frac{tx^2}{1-xt} + \frac{3t^3x^5}{(1-xt)^2}.\] Next, by the proof of Lemma~\ref{lem:bc-231}, permutations in $\T_n(231)$ that start with $n$ are built from smaller permutations: $\alpha$ that starts with $n$ and $\beta$ that starts with $n(n-1).$ When $\alpha$ has size at least 2, we have that $\des(\pi) = \des(\alpha)+\des(\beta) -1$ since the first descent in $\beta$ is lost in this process. Therefore, we get that \[B(x,t) = x+C(x,t) +\frac{1}{xt}C(x,t)(B(x,t)-x). \] Finally, the number of descents in the direct sum of two permutations is the sum of the number of descents in each summand. Therefore $T(x,t) = T(x,t)B(x,t)+1$.
\end{proof} \subsection{231-avoiding shallow permutations with symmetry} In this subsection, we consider those 231-avoiding shallow permutations that exhibit certain symmetries. In particular, we enumerate 231-avoiding shallow involutions, in which $\pi=\pi^{-1}$, 231-avoiding shallow centrosymmetric permutations, in which $\pi=\pi^{rc},$ and 231-avoiding shallow persymmetric permutations, in which $\pi=\pi^{rci}.$ We show that in fact all 231-avoiding involutions and centrosymmetric permutations are shallow, but this same result does not hold for persymmetric permutations. \begin{theorem} For $n\geq 1$, the number of shallow, 231-avoiding involutions of length $n$ is $2^{n-1}$. \end{theorem} \begin{proof} In \cite{SS85}, Simion and Schmidt show there are $2^{n-1}$ involutions of length $n$ that avoid 231. In their proof, it is shown that each of these permutations is a direct sum of decreasing permutations, i.e., $\pi = \bigoplus_{i=1}^k \delta_{m_i}$ for some composition $(m_i)_{i=1}^k$ of $n$. Since the decreasing permutation is always shallow, as is the direct sum of shallow permutations by Lemma \ref{lem: dir sum}, all 231-avoiding involutions are shallow. \end{proof} \begin{theorem} For $n\geq 1$, the number of shallow, 231-avoiding centrosymmetric permutations of length $n$ is $2^{\lfloor n/2\rfloor}$. \end{theorem} \begin{proof} In \cite{E07}, Egge shows there are $2^{\lfloor n/2\rfloor}$ centrosymmetric permutations of length $n$ that avoid 231. In his proof, it is shown that each of these permutations is a direct sum of decreasing permutations, i.e., $\pi = \bigoplus_{i=1}^k \delta_{m_i}$ for a palindromic composition $(m_i)_{i=1}^k$ of $n$. Since the decreasing permutation is always shallow, as is the direct sum of shallow permutations by Lemma \ref{lem: dir sum}, all 231-avoiding centrosymmetric permutations are shallow. \end{proof} \begin{theorem} For $n\geq 1$, if the number of shallow, 231-avoiding persymmetric permutations of length $n$ is $p_n(231)$ and the corresponding generating function is $P_{231}(x)$, then \[P_{231}(x) = \frac{x^{10} + 2 x^8 + x^7 + x^6 - x^5 - 2 x^4 + x^3 + 2 x^2 - x - 1}{x^{10} + x^8 + 2 x^6 - 4 x^4 + 4 x^2 - 1}.\] \end{theorem} \begin{proof} Let $P^B(x)$ be the generating function for shallow 231-avoiding persymmetric permutations that begin with $n$ and $P^C(x)$ be the generating function for those beginning with $n(n-1)$. The only such permutations that begin with $n(n-1)$ (of the form described in Lemma~\ref{lem:c-231}) are the decreasing permutation $\pi = n(n-1)\ldots 21$, the permutation $\pi = n(n-1)\ldots 4312$, and, when $n$ is even, the permutation $21\ominus(\delta_{n/2-1}\oplus\delta_{n/2-1})$. Therefore, for $n\geq 6,$ there are 2 such permutations when $n$ is odd and 3 such permutations when $n$ is even, giving us \[P^C(x) = x^2+x^3+\frac{2x^4}{1-x} + \frac{x^6}{1-x^2}.\] For those permutations beginning with $n$, if $\pi_i=n-1$ for $i>2$, then we must have that $\pi_i\pi_{i+1}\ldots \pi_n$ is composed of the numbers $\{i-1,i,\ldots,n-1\}$ and is order-isomorphic to the reverse-complement-inverse of $\pi_2\pi_3\ldots \pi_{n-i+2}$ which is composed of the elements in $\{1,2,\ldots, n-i+1\}.$ The remaining permutation is itself a shallow 231-avoiding persymmetric permutation beginning with $n.$ Thus, we have that \[P^B(x) = x+ P^C(x) +\frac{1}{x^2}C(x^2)P^B(x)\] where $C(x)$ is the generating function given in the proof of Theorem~\ref{thm:231}.
Finally, if a given persymmetric permutation in $\T_n(231)$ does not begin with $n$, it is the direct sum $\gamma\oplus\nu\oplus\gamma^{rci}$ where $\nu$ is a shallow 231-avoiding persymmetric permutation and $\gamma$ is any shallow 231-avoiding permutation beginning with $n$. Thus, \[P_{231}(x) = 1+P^B(x) + B(x^2)P_{231}(x)\] where $B(x)$ is the generating function given in the proof of Theorem~\ref{thm:231}. The result follows. \end{proof} \section{Shallow permutations that avoid 123}\label{sec:123} In this section, we consider those shallow permutations that avoid the pattern 123, as well as those that exhibit the three symmetries of inverse, reverse-complement, and reverse-complement-inverse. We omit the enumeration of 123-avoiding shallow permutations with a given number of descents, though this is likely tractable (but tedious) by following the proof of Theorem~\ref{thm: 123} below. \subsection{Enumeration of 123-avoiding shallow permutations} Let us start by stating the main theorem in this section. \begin{theorem}\label{thm: 123} Let $T_{123}(x)$ be the generating function for the number of shallow permutations that avoid 123. Then, \[ T_{123}(x) =\frac{1-3x+11x^3-13x^4+7x^5+6x^6+3x^7}{(1-x)^4 (1 - 4 x^2 + x^4)}. \] \end{theorem} We first establish a few lemmas based on the position of $n$ and $1$ in the permutation. In Lemma~\ref{lem: 123-1}, we consider those permutations that do not start with $n$ or end with 1, and in Lemma~\ref{lem: 123-2}, we consider those that do start with $n$ and have a 1 in any other position. \begin{lemma}\label{lem: 123-1} For $n\geq 3$, the number of 123-avoiding shallow permutations with $\pi_1\neq n$ and $\pi_n\neq 1$ is equal to $\displaystyle2\binom{n-1}{3} + (n-1).$ \end{lemma} \begin{proof} Let us first consider the case when $\pi_i=n$ and $\pi_j=1$ for some $1<i<j<n.$ We will see that there are $j-i$ such 123-avoiding shallow permutations. In particular, these $j-i$ permutations are of the form \[\pi=\underline{(t-1)\ldots(t-i+1)}\, n\, \underline{(t-i)\ldots 2} \,\underline{(n-1) \ldots (t+n-j)}\, 1\, \underline{(t+n-j-1)\ldots t}\] for any $i+1\leq t\leq j$ where the underlined regions are decreasing. We will first show that $\pi$ is shallow. Let us consider the permutation $\pi^{R^{n-t}}$. Since upon each iteration of the right operator, the last element replaces the largest element, all elements that appear before $n-1$, except for $n$, will remain unchanged. Each time, a term will be deleted, leaving us with \[\pi^{R^{n-t}} = \underline{(t-1)\cdots(t-i+1)}t\underline{(t-i)\cdots 21} \in \S_t.\] For example, if $\pi = 493287165$, we have $n = 9$ and $t = 5$, so $\pi^{R^4} = 45321$. In the first step, $t$ is a left-to-right maximum in $\pi^R$, and in all the subsequent steps the element we move is a right-to-left minimum in its new position. Furthermore, $\pi^{R^{n-t}} = (\delta_{i-1} \oplus 1) \ominus \delta_{t-i}$ is shallow by an identical argument to Lemma \ref{lem:shallow-n,n-1}. These two facts in combination with Theorem \ref{thm: shallowRecursive} imply that $\pi$ is shallow. Now let us see that these are indeed the only shallow 123-avoiding permutations with $\pi_i=n$ and $\pi_j=1$ for some $1<i<j<n.$ Indeed, since $\pi$ avoids 123, we must have $\pi_1\ldots \pi_{i-1}$ and $\pi_{j+1}\ldots \pi_n$ are decreasing.
Furthermore, by considering $\pi^R,$ we would have that $\pi_n$ is to the left of 1, so it cannot be a right-to-left minimum and thus must be a left-to-right maximum, implying that $\pi_n>\pi_1,$ which in turn implies that each element of $\{\pi_1,\ldots,\pi_{i-1}\}$ is less than each element of $\{\pi_{j+1},\ldots,\pi_n\}$. This implies that if $\pi_r=2$ then either $r=i-1$ or $i<r<j$. Clearly if $\pi_{i-1}=2$, then the subsequence $\pi_{i+1}\ldots\pi_{j-1}\pi_{j+1}\ldots \pi_n$ is decreasing and thus is of the above form with $t = i+1$. Similarly, if $\pi_s = n-1$, then either $s=j+1$ or $i<s<j$. If $\pi_{j+1}=n-1,$ then $\pi$ must be of the form above with $t=j.$ We can thus assume $i<r,s<j$. If $r<s$, then it is of the form above, so for the sake of contradiction, suppose $r>s$ (so, suppose 2 appears after $n-1$). However, in this case, $\pi^{RL}$ contains the subsequence $\pi_n'(n-1)'2'\pi_1'$ which is a $3\boldsymbol{4}\boldsymbol{1}2$ pattern, contradicting Theorem~\ref{thm: 3n12}. Next, let us consider those permutations with $1$ appearing before $n$ in $\pi.$ Since $\pi$ avoids 123, it must be that $\pi=\pi_1\ldots \pi_{i-1}1n\pi_{i+2}\ldots \pi_n$ for $1\leq i \leq n-1$. Furthermore, we must have $\pi_1 > \pi_2 > \cdots > \pi_{i-1}$ and $\pi_{i+2} > \pi_{i+3} > \cdots > \pi_n$. We claim that if $\pi_i = 1$ and $\pi_{i+1} = n$, then either $\pi_1 < \pi_n$, in which case \[ \pi = i(i-1) \cdots 1n(n-1)(n-2) \cdots (i+1), \] or $\pi_1 > \pi_n$. Since the elements preceding $1$ are decreasing and those after $n$ are decreasing, we must have that $\pi_1 \in [i+1,n-1]$, $\pi_n \in [2,i]$. Furthermore, we can show that $\pi_{n-1} > \pi_2$. For the sake of contradiction, suppose not. Then $\pi_1 > \pi_2 > \pi_{n-1} > \pi_n$. But then $\pi^{RL}$ contains the sequence $\pi_2'\pi_1'\pi_n'\pi_{n-1}'$ which is a $\underline{3}41\underline{2}$ pattern, contradicting Theorem~\ref{thm: 3n12}. Thus once $i,$ $\pi_1$, and $\pi_n$ are selected, the rest of the permutation is determined. So in total for each $1 \leq i \leq n-1$ there are $(n-i-1)(i-1)$ permutations with $\pi_1 > \pi_n$ and one with $\pi_1<\pi_n.$ Summing over all possible values of $i$, we obtain \[ \sum_{i=1}^{n-1} (1+(n-i-1)(i-1)) = \binom{n-1}{3} + (n-1) \] total permutations with $1$ appearing before $n$. Altogether, there are $\sum_{j=3}^{n-1} \sum_{i=2}^{j-1} (j-i) = \binom{n-1}{3}$ permutations with $n$ appearing before $1$ and $\binom{n-1}{3} + (n-1)$ permutations where $1$ appears before $n$. Adding these gives us the result. \end{proof} Let $b_n$ be the number of permutations in $\T_n(123)$ that start with $\pi_1=n$ and let $b_n(j)$ be the number of such permutations that also have $\pi_j=1.$ Note that by considering the reverse-complement, we have that $b_n$ is also the number that end with $\pi_n=1$ and $b_n(j)$ is also the number with $\pi_{n-j+1}= n$ and $\pi_n=1$. \begin{lemma}\label{lem: 123-2} For $n\geq 5,$ we have $b_n(2) = 1$, $b_{n}(n-1) = b_n(n) = t_{n-2}(123)$, and for $3\leq j\leq n-2$ we have \[ b_n(j) = 2 + (n-j-2)(2j-5) + 4\binom{j-2}{2} + b_{n-2}(j-1). \] \end{lemma} \begin{proof} Let us first consider those permutations with $\pi_1=n$, $\pi_j=1$ and $\pi_n=2$ with $j\leq n-2.$ Notice that $\pi^{RL}$ is still shallow of length $n-2$ and has the property that 1 appears in position $j-1$ where $j-1\leq n-3,$ so $\pi^{RL}$ does not end with 1. It avoids 123 since it was essentially obtained by ``deleting'' 1 and $n$.
By considering the position of $n-2$ in $\pi^{RL}\in\T_{n-2}(123)$, the proof of Lemma~\ref{lem: 123-1} shows that there are $1+\binom{j-2}{2} + (j-2)(n-2-j) + b_{n-2}(j-1)$ such permutations. Next, let us consider those with $\pi_i=2$ for some $1<i<j.$ First, let us consider those permutations with $\pi_{j+1}=n-1$. In this case, we must have $i=j-1$, so we have \[\pi=n\pi_2\ldots\pi_{j-2}21(n-1)\pi_{j+2}\ldots \pi_n\] where $\pi_2\ldots\pi_{j-2}$ and $\pi_{j+2}\ldots \pi_n$ are both decreasing segments since $\pi$ is 123-avoiding. We claim that the only such permutations are either \[n(j-1)\ldots 21(n-1)\ldots j\] or those with $\pi_2\in\{j,\ldots,n-2\}$ and $\pi_n\in\{3,\ldots,j-1\}$, with all the remaining elements before 2 being smaller than all the remaining elements after 1. If $\pi$ is not of one of these forms, then we have $\pi_2>\pi_3>\pi_{n-1}>\pi_n,$ in which case $\pi^{LLR}$ would contain a $\underline{3}41\underline{2}$ pattern, contradicting Theorem~\ref{thm: 3n12}. These are clearly shallow since $\pi^{LLR}$ is the direct sum of two shallow permutations, and it is clear there are $(j-3)(n-j-1)+1$ such permutations based on the choices of $\pi_2$ and $\pi_{n}.$ Next, consider those with $\pi_2=n-1,$ so \[\pi=n(n-1)\pi_3\ldots\pi_{i-1}2\pi_{i+1}\ldots\pi_{j-1}1\pi_{j+1}\ldots \pi_n\] where $\pi_{i+1}\ldots\pi_{j-1}\pi_{j+1}\ldots \pi_n$ is decreasing since $\pi$ avoids 123. Notice that by first considering $\pi^{RR},$ we get a permutation \[\pi^{RR}=\pi_n\pi_{n-1}\pi_3\ldots\pi_{i-1}2\pi_{i+1}\ldots\pi_{j-1}1\pi_{j+1}\ldots \pi_{n-2}\] with $\pi_{n-1}>\pi_n$ since $j\leq n-2.$ This is clearly still shallow if the original $\pi$ was. Now, taking $\pi^{RRLL}$, we see that our original permutation is only shallow if $\pi_{n-1}$ is a left-to-right maximum in $\pi^{RRLL}$ since $\pi_n<\pi_{n-1}$ will appear to its right. Thus we must have that the elements of the segment $\pi_3\ldots\pi_{i-1}$ are all less than $\pi_{n-1},$ as is $\pi_{n}.$ Therefore $\pi_{n-1}=i+1$ and there are $i-2$ choices of $\pi,$ all of which are shallow. Summing over all possible choices of $i,$ we see there are $\binom{j-2}{2}$ permutations. Now let us consider the final case, when $\pi_j=1$, $\pi_i=n-1$ with $3\leq i\leq j-1$, and $\pi_n\neq 2.$ We claim that $\pi_n\in\{3,\ldots,j-1\}$ for each possible value of $i$ and that the other terms are determined, for a total of $(j-3)^2$ permutations. Indeed, in this case, we have $\pi=n\pi_2\ldots(n-1)\ldots 1\ldots\pi_{n-1}\pi_n$, and so $\pi^{RL} = \pi'_2\ldots(n-1)'\ldots \pi'_{n}\ldots\pi'_{n-1}$. Note that if we show that both $\pi_2<\pi_{n-1}$ and $2$ appears before $n-2$, the rest of the permutation $\pi$ must be determined since $\pi$ must avoid 123. Notice that in $\pi^{RL}$, if $\pi_2>\pi_{n-1},$ then $\pi'_2(n-1)'\pi_n'\pi_{n-1}'$ is a $\underline{3}41\underline{2}$ pattern, contradicting Theorem~\ref{thm: 3n12}. Note also that $\pi_2\neq n-2$ since otherwise $\pi_2'(n-1)'\pi_n'\pi_{n-1}'$ would be a $\underline{3}41\underline{2}$ pattern in $\pi^{RL}.$ If $n-2$ does occur before 2, then we would have \[\pi = n\pi_2\ldots (n-1)\ldots (n-2)\ldots 2\ldots 1\ldots\pi_{n-1}\pi_n,\] but then $\pi^{RLR}$ contains $\pi_{n-1}'(n-2)'2'\pi_{n}'$ which is a $3\boldsymbol{4}\boldsymbol{1}2$ pattern, contradicting Theorem~\ref{thm: 3n12}. Thus we have $(j-3)(n-j-1)+1 + \binom{j-2}{2} + (j-3)^2$ permutations that do not end in 2. Adding these to the $1+\binom{j-2}{2} + (j-2)(n-2-j) + b_{n-2}(j-1)$ permutations that do end in 2, and noting that $(j-2)(n-2-j)+(j-3)(n-j-1)+(j-3)^2 = (n-j-2)(2j-5)+2\binom{j-2}{2}$, gives the formula in the statement of the lemma.
\end{proof} We are now ready to prove the main theorem of this section. \begin{proof}[Proof of Theorem~\ref{thm: 123}] Let $t_n(123)$ be the number of permutations in $\T_n(123)$ and let $b_n$ be the number of permutations in $\T_{n}(123)$ that start with $n$. Since there are clearly also $b_n$ permutations that end with 1 and $t_{n-2}(123)$ permutations that both start with $n$ and end with $1$, using the results of Lemma~\ref{lem: 123-1}, we have \[t_n(123) = 2b_n-t_{n-2}(123) + 2\binom{n-1}{3}+n-1. \] Using Lemma~\ref{lem: 123-2}, we obtain \begin{align*} b_n &= \sum_{j=2}^n b_n(j) \\ &= 1+2t_{n-2}(123)+\sum_{j=3}^{n-2}\left(2+(n-j-2)(2j-5) + 4\binom{j-2}{2} + b_{n-2}(j-1)\right) \\ & =1+2t_{n-2}(123)+ 2(n-4) + 5\binom{n-3}{3}+\binom{n-4}{3} + b_{n-2}-b_{n-2}(n-2)\\ & =1+2t_{n-2}(123)+ 2(n-4) + 5\binom{n-3}{3}+\binom{n-4}{3} + b_{n-2}-t_{n-4}(123).\\ \end{align*} Thus if $B(x)$ is the generating function for the sequence $\{b_n\}$, we have \[ T_{123}(x) = 2B(x) -x^2T_{123}(x) + \frac{x^2}{(1-x)^2} + \frac{2x^4}{(1-x)^4} + 1-x \] and \[ B(x) = (2x^2-x^4)T_{123}(x) + x^2B(x) + \frac{x}{1-x} + \frac{5x^6+x^7}{(1-x)^4} + \frac{2x^5}{(1-x)^2}-2(x^2+x^3). \] Solving for $T_{123}(x)$, we obtain the result in the statement of the theorem. \end{proof} \subsection{123-avoiding shallow permutations with symmetry} In this subsection, we consider 123-avoiding permutations that exhibit one of the three symmetries. \begin{theorem} For $n\geq 1$, the number of shallow, 123-avoiding involutions of length $n$ is $ \lfloor \frac{n^2}{4} \rfloor +1$. \end{theorem} \begin{proof} Let $a_n$ be the number of shallow, 123-avoiding permutations that are involutions. We will show that $a_n=a_{n-2}+n-1$. This together with the initial conditions $a_1=1$ and $a_2=2$ implies the formula as given in the statement of the theorem. Note that by Lemma \ref{lem: n1}, there are $a_{n-2}$ shallow 123-avoiding involutions that start with $n$ and end with $1$ since these comprise a 2-cycle and thus removing them leaves us with an involution. Also note that all involutions that have $\pi_n=1$ must also have $\pi_1=n$ and thus all involutions starting with $\pi_1=n$ are counted in this way. Next suppose $\pi_i=n$ for $i>1$. Then since $\pi$ is an involution $\pi_n=i.$ We claim that $\pi_1\leq \pi_n.$ For the sake of contradiction, suppose not. If $\pi_1>\pi_n=i$, then since $\pi$ is an involution $\pi_{\pi_1}=1$. Since $\pi_1>i$, this 1 appears after $n$ and before $\pi_n.$ Thus, in $\pi^L$, $\pi_1$ replaces this 1, but cannot be a left-to-right maximum since $n$ is to its left and cannot be a right-to-left minimum since it is larger than $\pi_n.$ Thus $\pi_1\leq \pi_n.$ Finally, since $\pi$ avoids 123 and $\pi_1\leq \pi_n,$ the only permutations that satisfy this are of the form \[\pi=m(m-1)\ldots 1 n (n-1)\ldots (m+1)\] for $m\in[n-1]$. There are clearly $n-1$ such permutations, and so the result follows. \end{proof} \begin{theorem} For $n\geq 1$, the number of shallow, 123-avoiding centrosymmetric permutations of length $n$ is $ \frac{n^2}{4} +1$ when $n$ is even and 1 when $n$ is odd. \end{theorem} \begin{proof} Let $c_n$ be the number of shallow centrosymmetric 123-avoiding permutations. First, let us consider the case when $n$ is odd.
Since $\pi=\pi^{rc},$ we must have that $\pi_{(n+1)/2}=(n+1)/2.$ Since $\pi$ avoids 123, it must be the case that the elements in $\pi_1\pi_2\ldots \pi_{(n+1)/2-1}$ are greater than $(n+1)/2$ and the elements in $\pi_{(n+1)/2+1}\ldots\pi_n$ are less than $(n+1)/2.$ In particular, $n$ occurs in the first half and 1 occurs in the second half. If 1 occurs at the end of $\pi,$ then since $\pi=\pi^{rc}$, $\pi_1=n.$ Thus by Lemma~\ref{lem: n1}, there are $c_{n-2}$ such permutations. If $1$ does not occur at the end, then $n$ necessarily does not occur at the beginning. But then, in $\pi^R,$ $\pi_n$ is neither a left-to-right maximum nor a right-to-left minimum, so $\pi$ is not shallow. Thus, when $n$ is odd, we have $c_n=c_{n-2}.$ Since $c_1=1$, the result for odd $n$ follows. Now, suppose $n$ is even. We will show that $\pi$ either starts with $n$, in which case there are $c_{n-2}$ such permutations for the same reasons as above, or is of the form \[\pi = (n-k)(n/2)(n/2-1)\ldots (k+1)(k-1)\ldots 21 n (n-1)\ldots (n-k+1)(n-k-1)\ldots (n/2+1) k \] for $2\leq k\leq n/2+1$, or of the form \[\pi=(n/2)(n/2-1)\ldots (k+1) n k\ldots 2(n-1)(n-2)\ldots (n-k+1) 1(n-k)\ldots (n/2+1) \] for $1\leq k< n/2$. Let us first consider the case when $n$ appears after 1 in $\pi.$ Since $\pi$ avoids 123 and is centrosymmetric, it must be that $\pi_{n/2}=1$ and $\pi_{n/2+1}=n$. Note that if $\pi_1<\pi_n$, then we must have the first case above with $k=n/2+1$, so let us assume $\pi_1>\pi_n.$ In that case, in $\pi^{RL}$, we will have a $\underline{3}41\underline{2}$ pattern unless $\pi_2<\pi_{n-1}$, contradicting Theorem~\ref{thm: 3n12}. Since $\pi$ is centrosymmetric, the only possibility is the first one listed above. Next consider when $n$ appears before 1 in $\pi.$ In this case, we must have $\pi_1>\pi_n$ or else we will have a $3\boldsymbol{4}\boldsymbol{1}2$ pattern, contradicting Theorem~\ref{thm: 3n12}. Therefore, since $\pi$ avoids 123 and is centrosymmetric, we must have $\pi_1=n/2$ and $\pi_n=n/2+1$. Furthermore, the elements that appear before $n$ are decreasing and consecutive and those after 1 are decreasing and consecutive, since otherwise we would have a 123 pattern. This implies that either $1$ appears immediately after $n,$ in which case we have the second case above with $k=1,$ or the 2 and $n-1$ appear between the $n$ and $1$ in $\pi.$ In fact, we must have 2 appearing before $n-1$, or else $\pi^{RL}$ will have a $3\boldsymbol{4}\boldsymbol{1}2$ pattern, contradicting Theorem~\ref{thm: 3n12}. It is a straightforward exercise to check that the permutations listed above are indeed shallow, and we have now shown they are the only possible shallow 123-avoiding centrosymmetric permutations of length $n.$ Thus when $n$ is even, $c_n=c_{n-2}+n-1,$ which together with the fact that $c_2=2,$ implies that $c_n = \frac{n^2}{4}+1.$ \end{proof} \begin{theorem} For $n\geq 3$, the number of shallow 123-avoiding persymmetric permutations of length $n$ has the associated generating function \[ P_{123}(x) = \frac{x^6 + x^5 + x^3 - 2 x^2 + 1}{(x - 1)^2 (x + 1) (1-2x^2-x^4)}. \] \end{theorem} \begin{proof} Let $p_n$ denote the number of shallow persymmetric 123-avoiding permutations and let $q_n$ denote the number of those that start with $\pi_1=n.$ First note that if $\pi_2=n$, then we must have $\pi_1=n-1$ since $\pi$ is persymmetric. Also, we must have $\pi_n=1$ since if 1 appeared anywhere else in $\pi,$ then in $\pi^L$, the element $n-1$ would not be a left-to-right maximum nor a right-to-left minimum, and so $\pi$ would not be shallow.
Thus, since $\pi_1\pi_2=(n-1)n$ and $\pi_n=1,$ $\pi^{RL}\in\S_{n-2}$ will be a shallow 123-avoiding persymmetric permutation that starts with $n-2.$ Since any such permutation can be obtained this way, there are $q_{n-2}$ persymmetric permutations $\pi\in\T_{n}(123)$ with $\pi_2=n.$ Now, we will show there is exactly one shallow persymmetric 123-avoiding permutation with $\pi_i=n$ for $3\leq i \leq \lfloor \frac{n}{2}\rfloor +1$ and none with $i>\lfloor\frac{n}{2}\rfloor+1.$ First note that if $i>\lfloor\frac{n}{2}\rfloor+1$, then $\pi_1 = n+1-i$. But since $\pi$ avoids 123, the elements before $n$ must be decreasing, which is impossible in this case since $\pi_1$ is too small. Now assume $i\leq \lfloor\frac{n}{2}\rfloor+1.$ Since $\pi$ is persymmetric, this means $\pi_1=n+1-i$ and since $\pi$ avoids 123, we have $\pi_1\ldots\pi_{i-1}$ is decreasing. If the 1 appears before $n$, then we must have that $\pi_{i-1}=1$ and $\pi_n=n+2-i$, and that every element after $n$ is decreasing in order to avoid 123. The only way this is possible is if $i=\lfloor\frac{n}{2}\rfloor+ 1$ and $\pi = (n/2)\ldots 21n(n-1)\ldots (n/2+1)$. In fact, this is the only possibility for $i =\lfloor\frac{n}{2}\rfloor+ 1$, so assume $i\leq \lfloor\frac{n}{2}\rfloor$ and that the 1 appears after $n.$ Note that if $\pi_j=1$ for $i+1\leq j\leq n-1$, then $\pi_n=n+1-j$ which implies $\pi$ contains $3\boldsymbol{4}\boldsymbol{1}2$. In order to avoid this, we must have $\pi_n=1.$ Since $\pi^{RL}$ must avoid $\underline{3}41\underline{2}$, we must have that either $\pi_{n-1}=2$ or $\pi_{n-1}>\pi_2$. In the second case, since $\pi$ avoids 123 and is persymmetric, the only possibility is that $n$ is odd and taking $r=(n+1)/2$ we get \[\pi = (n+1-j)(r-1)(r-2)\ldots (r-i+2)n(r-i+1)\ldots 32(n-1)(n-2)\ldots r1.\] If $\pi_{n-1}=2$, then we must not have $\pi_{n-2}=3$ since $\pi^{RRLL}$ would send $\pi_2$ to where $\pi_{n-1}$ is in $\pi$ and it would not be a left-to-right maximum since $\pi_1>\pi_2$ would appear before it and would not be a right-to-left minimum since $\pi_{n-2}=3$ would appear to its right. Thus for similar reasons to above, we would have to have $\pi_{n-2}>\pi_2$ and there would only be one case: that $n$ is even and taking $r=n/2+1$, we have \[\pi = (n+1-j)(r-1)(r-2)\ldots (r-i+2)n(r-i+1)\ldots 32(n-1)(n-2)\ldots r21.\] Again it is straightforward to check these permutations are indeed shallow. Taken altogether, this implies that \[p_n = q_n + q_{n-2} + \bigg\lfloor \frac{n}{2}\bigg\rfloor-1.\] Next, let us consider those that have $\pi_1=n$. If $\pi_n=1$, then by Lemma~\ref{lem: n1}, there are $p_{n-2}$ such permutations. If $\pi_{n-1}=1$, then since $\pi$ is persymmetric, we must have $\pi_n=2.$ Then $\pi^{RL}$ is a persymmetric permutation that ends with 1, and such permutations are also enumerated by $q_{n-2}$. Finally, by a similar proof to the one above, there is exactly one shallow 123-avoiding persymmetric permutation that starts with $n$ and has $\pi_i=1$ for $\lfloor \frac{n}{2}\rfloor + 1\leq i\leq n-2.$ Now, this implies that \[q_n = p_{n-2} + q_{n-2} + \bigg\lfloor \frac{n-1}{2}\bigg\rfloor-1.\] Taking $P_{123}(x)$ and $Q_{123}(x)$ to be the respective generating functions, these recurrences together with the initial conditions imply that \[P_{123}(x) = (1+x^2)Q_{123}(x) + \frac{x^5+x^4}{(1-x^2)^2} + 1+x^2\] and \[Q_{123}(x) = x^2P_{123}(x) +x^2Q_{123}(x) + \frac{x^6+x^5}{(1-x^2)^2} +x.\] Solving for $P_{123}(x)$ gives us the generating function in the statement of the theorem.
\end{proof} \section{Shallow permutations that avoid 321}\label{sec:321} Diaconis and Graham \cite{DG77} pointed out that permutations which satisfy the upper and lower bound of their inequality are enumerated by the bisection of the Fibonacci numbers, $F_{2n-1}$. These permutations were further discussed and characterized in \cite{HM13}. We start this section by providing an independent proof of this enumeration. We then enumerate these permutations by their descent count as well as those that exhibit certain symmetry. \subsection{Enumeration of 321-avoiding shallow permutations} \begin{theorem} \label{thm: 321} For $n \geq 1$, $t_n(321) = F_{2n-1}$, where $F_{2n-1}$ is the $(2n-1)$-st Fibonacci number. \end{theorem} Before proving this theorem, we will prove the following lemma, which determines what these permutations must look like when $n$ occurs before position $n-1$. \begin{lemma} \label{lem: 321end} Let $n\geq 3$. If $\pi \in \mathcal{T}_n(321)$ has $\pi_j=n$ with $1 \leq j < n-1$, then: \begin{itemize} \item $\pi_n = n-1$, \item $\pi^R \in \mathcal{T}_{n-1}(321)$, and \item $\pi_{k} = k-1$ for $j+2\leq k \leq n.$ \end{itemize} \end{lemma} \begin{proof} Let $\pi\in\T_n(321)$ with $n\geq 3$. Since $\pi$ avoids $321$ we must have $\pi_{j+1} < \pi_{j+2} < \cdots < \pi_n$. By Theorem \ref{thm: shallowRecursive}, since $\pi$ is shallow, $\pi_n$ must be either a left-to-right maximum or right-to-left minimum in $\pi^R = \pi_1 \cdots \pi_{j-1} \pi_n \pi_{j+1} \cdots \pi_{n-1}$. It cannot be a right-to-left minimum because $j<n-1$ and $\pi^R_{j+1} = \pi_{j+1} < \pi_n = \pi^R_j$. So $\pi_n$ must be a left-to-right maximum in $\pi^R$. If $\pi_n \not= n-1$, since it is a left-to-right maximum in $\pi^R$, $n-1$ must occur after position $j$ in $\pi^R,$ and thus in $\pi$. However, this means $\pi$ contains $n(n-1)\pi_n$ as a subsequence, which is a 321 pattern. Thus $\pi_n=n-1.$ This completes the proof of the first bullet point. Note that the previous paragraph also implies that if $\pi \in \mathcal{T}_n(321)$ with $\pi_j = n$ where $1 \leq j < n-1$ then $\pi^R \in \mathcal{T}_{n-1}(321)$. Indeed, by Theorem \ref{thm: shallowRecursive} $\pi^R$ is still shallow and we form $\pi^R$ by replacing $n$ with $n-1$, so $\pi^R$ is still $321$ avoiding since $\pi$ was. This establishes the second bullet point. We can combine the first two bullet points to prove the third. If $\pi \in \mathcal{T}_n(321)$ has $\pi_j = n$ with $1 \leq j < n-1$ then the first and second bullet point imply that $\pi^{R^m} \in \mathcal{T}_{n-m}(321)$ with $\pi^{R^m}_j = n-m$ for $1 \leq m \leq n-j-1$. When $1 \leq m \leq n-j-2$ we have $j \leq n-m-2$; in this case, the first bullet point shows that $\pi_{n-m} = \pi^{R^m}_{n-m} = n-m-1$. This is equivalent to $\pi_k = k-1$ for $j+2 \leq k \leq n-1$, which in combination with the first bullet point proves the third. \end{proof} As an example, if we have a permutation $\pi\in\mathcal{T}_{13}(321)$ with the element 13 in position $8$, then we must have that the permutation $\pi$ ends with $\pi_{10}\pi_{11}\pi_{12}\pi_{13}=9(10)(11)(12)$. Note that $\pi_9$ is not determined by this lemma. We are now able to prove Theorem~\ref{thm: 321}. \begin{proof}[Proof of Theorem \ref{thm: 321}] Let $\pi \in \mathcal{T}_n(321)$. If $\pi_n = n$, by Theorem \ref{thm: shallowRecursive}, $\pi^R$ obtained by removing $n$ will be shallow and still $321$ avoiding. Similarly, we can append $n$ to the end of any $\tau \in \mathcal{T}_{n-1}(321)$ to obtain a permutation in $\mathcal{T}_n(321)$.
Therefore, there are $t_{n-1}(321)$ permutations $\pi \in \mathcal{T}_n(321)$ with $\pi_n = n$. Similarly, if $\pi_{n-1}=n$, then $\pi^R$ is obtained by replacing $n$ with $\pi_n$, which is equivalent to deleting $n$ from $\pi$. One can clearly add $n$ into the $(n-1)$st position of any $\pi\in\T_{n-1}(321)$ and obtain a permutation that is still shallow and 321-avoiding. This shows that there are $t_{n-1}(321)$ permutations $\pi \in \mathcal{T}_n(321)$ with $\pi_{n-1} = n$. Now let us see that there are $t_{j}(321)$ permutations $\pi \in \mathcal{T}_n(321)$ with $\pi_{j} = n$ for $1 \leq j \leq n-2$. Suppose $\pi\in\mathcal{T}_n(321)$ with $\pi_j=n$ and $1\leq j\leq n-2.$ A direct consequence of Lemma \ref{lem: 321end} is that $\pi^{R^{n-j}} \in \mathcal{T}_{j}(321)$. This is actually a bijection. Indeed, given any $\tau \in \mathcal{T}_{j}(321)$, we can form a new permutation $\hat{\tau} \in \S_n$ with $\hat{\tau}_m = \tau_m$ for $1 \leq m < j$, $\hat{\tau}_{j} = n$, $\hat{\tau}_{j+1} = \tau_{j}$, and $\hat{\tau}_{k} = k-1$ for $j+2 \leq k \leq n$. For example, given the permutation $\tau = 41263857 \in \T_8(321),$ we can obtain the permutation $\pi = 4126385(13)79(10)(11)(12)\in \mathcal{T}_{13}(321).$ It is clear that the permutation $\hat\tau$ formed is 321-avoiding, and it is shallow since $\hat\tau^{R^{n-j}}=\tau$ is. As this exhausts all the possible positions of $n$, we conclude that \[ t_n(321) = 2 t_{n-1}(321) + \sum_{i=1}^{n-2} t_{i}(321) \] which, together with the initial conditions, is satisfied by $F_{2n-1}.$ \end{proof} \subsection{321-avoiding shallow permutations by descent number} In this subsection, we consider those shallow 321-avoiding permutations with $k$ descents. \begin{theorem}\label{thm: 321-descents} Let $a_{n,k}$ be the number of permutations in $\T_n(321)$ with $k$ descents and let $A(x,z) = \sum_{n,k} a_{n,k} x^k z^n$. Then, \[ A(x,z) = \frac{z - 2 z^2 + x z^2 + z^3 - x z^3}{1 - 3 z + 3 z^2 - 2 x z^2 - z^3 + x z^3}. \] \end{theorem} \begin{proof} Let $a_{n,k}$ denote the number of permutations $\pi \in \T_n(321)$ with $k$ descents and $b_{n,k}$ the number of such permutations with $\pi_{n-1} = n$. Let $\pi \in \T_n(321)$ have $\pi_{n-1} = n$ and $k$ descents and consider the value of $\pi_{n-2}$. If $\pi_{n-2} = n-1$ then $\pi^R \in \S_{n-1}$ is still a shallow 321-avoiding permutation and has $\pi^R_{n-2} = n-1$. Since $\pi$ has $k$ descents, $\pi^R$ will also have $k$ descents. These are precisely the permutations enumerated by $b_{n-1,k}$. This construction is clearly reversible so there are $b_{n-1,k}$ permutations $\pi \in \T_n(321)$ with $k$ descents, $\pi_{n-1} = n$ and $\pi_{n-2} = n-1$. If $\pi_{n-2} \not= n-1$ this forces $\pi_{n-2} < \pi_{n}$, otherwise we have a $321$ consisting of $(n-1) \pi_{n-2} \pi_n$. This means $\pi^R$ will have one fewer descent, since we are removing the descent in position $n-1$. In other words, $\pi^R$ can be any permutation $\pi' \in \T_{n-1}(321)$ with $k-1$ descents and $\pi'_{n-2} \not= n-1$. These are precisely enumerated by $a_{n-1,k-1} - b_{n-1,k-1}$. Again, this construction is reversible, so there are $a_{n-1,k-1} - b_{n-1,k-1}$ shallow 321-avoiding permutations of size $n$ with $k$ descents, $\pi_{n-1} = n$ and $\pi_{n-2} \not= n-1$. This implies the following recursion for $b_{n,k}$: \[ b_{n,k} = b_{n-1,k} + a_{n-1,k-1} - b_{n-1,k-1}. \] Now, if $\pi \in \T_n(321)$ with $k$ descents and $\pi_n = n$, then $\pi^R \in \T_{n-1}(321)$ with $k$ descents. This is reversible, so there are $a_{n-1,k}$ such permutations.
If $\pi_j = n$ with $1 \leq j \leq n-1$, then since $\pi$ is $321$-avoiding we must have $\pi_{j+1} < \pi_{j+2} < \cdots < \pi_{n}$. In order to have $k$ descents we therefore must have $k+1 \leq j \leq n-1$. We claim there are $b_{j+1,k}$ such permutations with $\pi_j = n$. This is clearly true when $j = n-1$ by construction. Now, if $k+1 \leq j \leq n-2$, by Lemma \ref{lem: 321end} since $\pi \in \mathcal{T}_n(321)$ has $\pi_{j} = n$ with $1 \leq j \leq n-2$, we have $\pi_{i} = i-1$ for $j+2 \leq i \leq n$. As a result, $\pi^{R^{n-j-1}} \in \S_{j+1}$ is a shallow $321$-avoiding permutation with $k$ descents and $\pi^{R^{n-j-1}}_{j} = j+1$; these are precisely enumerated by $b_{j+1,k}$. Even stronger, thanks to Lemma \ref{lem: 321end}, reversing this operation produces all the $\pi \in \T_n(321)$ with $k$ descents and $\pi_j = n$. This proves the claim that such permutations are enumerated by $b_{j+1,k}$. Summing over all the possible positions of $n$ we find that \[ a_{n,k} = a_{n-1,k} + \sum_{j=k+1}^{n-1} b_{j+1,k}. \] Now, let $B(x,z) = \sum_{n,k} b_{n,k}x^kz^n$. These recursions together with the initial conditions imply that \[ B(x,z) = z B(x,z) + xz A(x,z) - xz B(x,z) + z + z^2x \] and \[ A(x,z) = zA(x,z) + (B(x,z) - z - z^2x)(1 + z + z^2 + \cdots + z^{n-k-2}) + z. \] We need to remove $z$ and $z^2x$ because in the recursion we always have $n$ at least two greater than $k$, the number of descents. These are the only two terms in $B(x,z)$ where this does not occur. Furthermore, we can replace $1 + z + z^2 + \cdots + z^{n-k-2}$ by $\frac{1}{1-z}$ because the coefficients of $z^jx^k$ in $(B(x,z) - z - z^2x)$ for $0 \leq j \leq k+1$ are all zero. We can therefore conclude, \[ A(x,z) = zA(x,z) + \frac{xzA(x,z)}{(1-z)(1-z+xz)}+ z. \] This gives us \[ A(x,z) = \frac{z}{1-z-\tfrac{xz}{(1-z)(1-z+xz)}} \] which simplifies to the desired generating function. \end{proof} Recall that Grassmannian permutations are those permutations with at most one descent. These permutations necessarily avoid 321 and thus we can obtain the following corollary. \begin{corollary}\label{thm:shallowgras} For $n\geq 2$, the number of shallow permutations of length $n$ that are Grassmannian is equal to $\binom{n+1}{3}+1.$ \end{corollary} \begin{proof} It follows from the generating function in Theorem~\ref{thm: 321-descents} that the generating function for shallow 321-avoiding permutations with exactly one descent is $\frac{\partial}{\partial x}\big|_{x=0} A(x,z) = \frac{z^2}{(1-z)^4}$, which tells us there are $\binom{n+1}{3}$ permutations in $\T_n(321)$ with 1 descent. Since there is 1 permutation of any size with zero descents and that permutation is shallow, the result follows. \end{proof} \subsection{321-avoiding shallow permutations with symmetry} In this last subsection, let us consider those shallow 321-avoiding permutations that exhibit certain symmetry. \begin{theorem} For $n\geq 1$, the number of involutions in $\mathcal{T}_n(321)$ is $F_{n+1}$, where $F_n$ is the $n$-th Fibonacci number. \end{theorem} \begin{proof} Let $i_n(321)$ be the number of 321-avoiding shallow involutions. First we note that if $\pi\in \mathcal{T}_n(321)$ is an involution with $\pi_j=n$, then $j=n$ or $j=n-1$. To see this, consider $j<n-1$. Then by Lemma \ref{lem: 321end} we have $\pi_n=n-1$. But then since $\pi$ is an involution, we must have $\pi_{n-1}=n$, a contradiction. Therefore $n$ is in position $n$ or $n-1$. It is clear that there are $i_{n-1}(321)$ such permutations that have $\pi_n=n$.
Since any involution with $\pi_{n-1}=n$ must also have $\pi_n=n-1$, there are $i_{n-2}(321)$ permutations in $\mathcal{T}_n(321)$ with $\pi_{n-1}=n$. With the initial conditions $i_1(321)=1$ and $i_2(321)=2$, the result follows. \end{proof} \begin{theorem} The number of centrosymmetric 321-avoiding shallow permutations is $F_{n+1}$ when $n$ is even and $F_{n-2}$ when $n$ is odd, where $F_n$ is the $n$-th Fibonacci number. \end{theorem} \begin{proof} Let $c_n(321)$ be the number of shallow 321-avoiding centrosymmetric permutations and let us consider the position of $n.$ If $\pi_n=n$, then since $\pi$ is centrosymmetric, $\pi_1=1$. By removing both $n$ and $1$, we are left with a centrosymmetric shallow 321-avoiding permutation of size $n-2.$ Since this is reversible for any centrosymmetric permutation in $\T_n(321)$, there are $c_{n-2}(321)$ such permutations. If $\pi_{n-1}=n$, then we must have $\pi_2=1.$ In this case $\pi^{RL}$ is the same as the permutation obtained by deleting both 1 and $n$ (scaling as appropriate). The remaining permutation is a centrosymmetric shallow 321-avoiding permutation of size $n-2.$ Again, this is reversible for any centrosymmetric permutation in $\T_n(321)$, so there are $c_{n-2}(321)$ such permutations. Now consider the case where $\pi_{n-j}=n$ for $n-j\leq n-2$. Then since $\pi$ is centrosymmetric and satisfies Lemma~\ref{lem: 321end}, we must have \[\pi = 23\ldots j\pi_{j} 1 \ldots n\pi_{n-j+1}(n-j+1)(n-j+2)\ldots (n-1).\] Note that $\pi^{(RL)^{j}}$ leaves us with a centrosymmetric 321-avoiding shallow permutation of length $n-2j$. Thus, we get \[c_n = c_{n-2}+c_{n-2}+c_{n-4}+c_{n-6}+\ldots\] which is equivalent to $c_n = 3c_{n-2}-c_{n-4}$ which together with the initial conditions is satisfied by $F_{n+1}$ when $n$ is even and $F_{n-2}$ when $n$ is odd. \end{proof} \begin{theorem} The number of persymmetric 321-avoiding shallow permutations is $F_{n+1}$, where $F_n$ is the $n$-th Fibonacci number. \end{theorem} \begin{proof} Let $p_n(321)$ be the number of shallow 321-avoiding persymmetric permutations and let us consider the position of $n.$ If $\pi_n=n$, then since $\pi$ is persymmetric, $\pi_1=1$. By removing both $n$ and $1$, we are left with a persymmetric shallow 321-avoiding permutation of size $n-2.$ Since this is reversible for any persymmetric permutation in $\T_n(321)$, there are $p_{n-2}(321)$ such permutations. If $\pi_{n-1}=n$, then we must have $\pi_1=2.$ In this case $\pi^{RL}$ is the same as the permutation obtained by deleting both 2 and $n$ (scaling as appropriate). The remaining permutation is a persymmetric shallow 321-avoiding permutation of size $n-2.$ Again, this is reversible for any persymmetric permutation in $\T_n(321)$, so there are $p_{n-2}(321)$ such permutations. Now consider the case where $\pi_{n-j}=n$ for $n-j\leq n-2$. Then since $\pi$ is persymmetric, we must have \[\pi = (j+1)12\ldots (j-1)\ldots n\pi_{n-j+1}(n-j+1)(n-j+2)\ldots (n-1).\] Note that $\pi^{(RL)^{j}}$ leaves us with a persymmetric 321-avoiding shallow permutation of length $n-2j$. Thus, we get \[p_n = p_{n-2}+p_{n-2}+p_{n-4}+p_{n-6}+\ldots\] which is equivalent to $p_n = 3p_{n-2}-p_{n-4}$ which together with the initial conditions is satisfied by the Fibonacci numbers. \end{proof} \section{Future directions and concluding remarks}\label{sec:conclude} Theorems~\ref{thm: 132} and \ref{thm: 321} imply that $t_n(132)=t_n(321)$, since both are equal to $F_{2n-1}.$ In our paper, we prove these separately and directly, but it does raise the following question. 
\begin{question} Is there a bijective proof that $t_n(132)=t_n(321)$? \end{question} Based on the numerical data, we can conjecture something stronger. We conjecture that there is a bijection $f: \T_n(132) \to \T_n(321)$ with the property that $\cyc(\pi) = \cyc(f(\pi))$ and $\des(\pi) + 1=\lrmax(f(\pi)),$ where $\lrmax(\sigma)$ is the number of left-to-right maxima in a permutation $\sigma.$ It seems likely that there are more statistics that could be preserved in bijections between shallow 132-avoiding and shallow 321-avoiding permutations. It may even be the case that this relationship between $\T_n(132)$ and $\T_n(321)$ goes deeper and could imply more interesting things about 132-avoiding shallow permutations. For example, it is known that the 321-avoiding shallow permutations have many nice properties (see \cite{E07,PT15}, among others): they have unimodal cycles, they avoid the patterns 321 and 3412, they satisfy both the upper and lower bound of the Diaconis-Graham inequality, etc. \begin{question} Are there any interesting characterizations of 132-avoiding shallow permutations that are in the same vein as those listed above for 321-avoiding shallow permutations? \end{question} Another possibility for future work is related to Theorem~\ref{thm: 3n12}. In that theorem we show that shallow permutations avoid certain mesh patterns. However, this is not a complete characterization of shallow permutations. This leads us to ask the following question. \begin{question} Can shallow permutations be characterized completely in terms of mesh pattern avoidance? That is, is there a set of mesh patterns $S$ so that $\pi$ is shallow if and only if it avoids all patterns in $S$? \end{question} Finally, there are many other questions about pattern-avoiding shallow permutations that we did not consider in this paper. In many cases, it seems reasonable to count these permutations by various statistics, like number of cycles. One might also consider shallow permutations that avoid longer patterns or sets of patterns. As a more general question, one could attempt to count pattern-avoiding permutations that are not shallow, but perhaps satisfy $I(\pi) + T(\pi) +k=D(\pi)$ for a fixed value $k$, or perhaps pattern-avoiding permutations whose cycle diagrams correspond to different knots/links (see for example, \cite{CM24, W22}). \begin{thebibliography}{99} \bibitem{BC11} P. Br\"{a}nd\'{e}n and A. Claesson. Mesh patterns and the expansion of permutation statistics as sums of permutation patterns, \textit{Elect. J. Comb.} 18(2) (2011), P5.14. \bibitem{BT22} Y. Berman and B. Tenner, Pattern-functions, statistics, and shallow permutations. \textit{Elect. J. Comb.} 29 (2022), P4.43. \bibitem{CM24} C. Cornwell, N. McNew, Links and the Diaconis–Graham Inequality. \textit{Combinatorica} 44 (2024), 1149--1167. \bibitem{DG77} P. Diaconis and R. Graham, Spearman's footrule as a measure of disarray. \textit{Journal of the Royal Statistical Society Series B: Statistical Methodology} 39.2 (1977), 262--268. \bibitem{E87} P. H. Edelman, On inversions and cycles in permutations. \textit{European J. Combin.} 8 (1987), 269--279. \bibitem{E07} E. S. Egge, Restricted Symmetric Permutations. \textit{Ann. Comb.} 11 (2007), 405--434. \bibitem{HM13} P. Hadjicostas and C. Monico, A re-examination of the Diaconis-Graham inequality, \textit{J. Combin. Math. Combin. Comput.}, 87 (2013), 275--295. \bibitem{LO10} D. Lonoff and J. Ostroff, Symmetric permutations avoiding two patterns, \textit{Ann. Comb.} 14 (2010), 143--158.
\bibitem{PT15} T. Petersen and B. Tenner, The depth of a permutation. \textit{Journal of Combinatorics} 6.1-2 (2015): 145--178. \bibitem{RT1} K. Ragnarsson and B. E. Tenner, Homotopy type of the boolean complex of a Coxeter system, \textit{Adv. Math.} 222 (2009), 409--430. \bibitem{RT2} K. Ragnarsson and B. E. Tenner, Homology of the boolean complex, \textit{J. Algebraic Combin.} 34 (2011), 617--639. \bibitem{SS85} R. Simion and F. W. Schmidt, Restricted permutations. \textit{European Journal of Combinatorics} 6(4) (1985), 383--406. \bibitem{T07} B. E. Tenner, Pattern avoidance and the Bruhat order, \textit{J. Combin. Theory Ser. A} 114 (2007), 888--905. \bibitem{W22} A. Woo, The shallow permutations are the unlinked permutations. Pre-print, arXiv:2201.12949. \end{thebibliography} \end{document}
2412.12070v1
http://arxiv.org/abs/2412.12070v1
Simultaneous and multiplicative Diophantine approximation on missing-digit fractals
\documentclass[12pt,reqno]{amsart} \usepackage[english]{babel} \usepackage[applemac]{inputenc} \usepackage[T1]{fontenc} \usepackage{bbm} \usepackage{float, verbatim} \usepackage{amsmath,amssymb,amsthm,amsfonts,mathtools} \usepackage{graphicx} \usepackage{hyperref} \pagestyle{headings} \usepackage{color} \usepackage{tikz} \usepackage{xcolor} \usepackage{pgfplots} \usepackage{caption} \usepackage{subcaption} \usepackage{enumerate} \usepackage{url} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, citecolor= red, } \pgfplotsset{compat=1.18} \newcommand{\ubox}{\overline{\dim}_{\mathrm{B}}} \newcommand{\pbox}{\overline{\dim}_{\mathrm{B}}} \newcommand{\lbox}{\underline{\dim}_{\mathrm{B}}} \newcommand{\boxd}{\dim_{\mathrm{B}}} \newcommand{\bunderline}[1]{\underline{#1\mkern-4mu}\mkern4mu } \newcommand{\Haus}{\dim_{\mathrm{H}}} \newcommand{\red}{\textcolor{red}} \newcommand{\blue}{\textcolor{blue}} \numberwithin{equation}{section} \numberwithin{figure}{section} \newtheorem*{thm*}{Theorem} \newtheorem*{conj*}{Conjecture} \newtheorem*{ques*}{Question} \newtheorem*{rem*}{Remark} \newtheorem*{defn*}{Definition} \newtheorem*{mainques*}{Main questions} \newtheorem{thmx}{Theorem} \renewcommand{\thethmx}{\Alph{thmx}} \newtheorem{conjx}{Conjecture} \renewcommand{\theconjx}{\Alph{conjx}} \newtheorem{remx}{Remark} \renewcommand{\theremx}{\Alph{remx}} \newtheorem{thm}{Theorem}[section] \newtheorem{sta}[thm]{Statement} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{cond}[thm]{Condition} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{ques}[thm]{Question} \newtheorem{property}[thm]{Property} \newtheorem{ex}[thm]{Example} \def\RR{\mathbb{R}} \def\CC{\mathbb{C}} \def\FF{\mathbb{F}} \def\QQ{\mathbb{Q}} \def\ZZ{\mathbb{Z}} \def\NN{\mathbb{N}} \def\Nzero{\mathbb{Z}_{\geq 0}} \def\EE{\mathbb{E}} \def\PP{\mathbb{P}} \def\calC{\mathcal{C}} \def\calD{\mathcal{D}} \def\calE{\mathcal{E}} \def\calF{\mathcal{F}} \def\calG{\mathcal{G}} \def\calH{\mathcal{H}} \def\supp{\mathrm{supp}} \def\diam{\mathrm{diam}} \def \dist {{\mathrm{dist}}} \def\sp{\mathrm{span}} \def\cl{\mathrm{cl}} \def\ker{\mathrm{ker}} \def \balp {{\boldsymbol{\alpha}}} \def \bxi {{\boldsymbol{\xi}}} \def \mmod {{\: \mathrm{mod}\:}} \newcommand{\rbr}[1]{\left( {#1} \right)} \newcommand{\sbr}[1]{\left[ {#1} \right]} \newcommand{\cbr}[1]{\left\{ {#1} \right\}} \newcommand{\abr}[1]{\left\langle {#1} \right\rangle} \newcommand{\abs}[1]{\left| {#1} \right|} \newcommand{\norm}[1]{\left\|#1\right\|} \def\one{\mathbf{1}} \DeclareMathOperator*{\esssup}{ess\,sup} \newcommand*\wc{{}\cdot{}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \def \bN {{\mathbb N}} \def \bQ {{\mathbb Q}} \def \bR {{\mathbb R}} \def \bZ {{\mathbb Z}} \def \cM {{\mathcal{M}}} \def \cR {{\mathcal{R}}} \def \cS {{\mathcal{S}}} \def \cT {{\mathcal{T}}} \def \fN {{\mathfrak{N}}} \def \alp {{\alpha}} \def \bet {{\beta}} \def \gam {{\gamma}} \def \del {{\delta}} \def \eps {{\varepsilon}} \def \epsilon {{\varepsilon}} \def \lam {{\lambda}} \def \sig {{\sigma}} \def \ome {{\omega}} \def \bk {{\mathbf{k}}} \def \bt {{\mathbf{t}}} \def \bzero {{\boldsymbol{0}}} \def \d {{\mathrm{d}}} \def \bbA {\mathbb A} \def \bB {\mathbb B} \def \bC {\mathbb C} \def \bD {\mathbb D} \def \bE {\mathbb E} \def \bbF {\mathbb F} \def \bG {\mathbb G} \def \bH {\mathbb H} \def \bK {\mathbb K} \def \bN {\mathbb N} \def 
\bP {\mathbb P} \def \bQ {\mathbb Q} \def \bR {\mathbb R} \def \bRn {\mathbb R^{n-1}} \def \bZ {\mathbb Z} \def \bZn {\mathbb Z^{n-1}} \def \bT {\mathbb T} \def \bM {\mathbb M} \def \bS {\mathbb S} \def \sA {\mathscr A} \def \sB {\mathscr B} \def \sC {\mathscr C} \def \sD {\mathscr D} \def \sE {\mathscr E} \def \sF {\mathscr F} \def \sG {\mathscr G} \def \sH {\mathscr H} \def \sI {\mathscr I} \def \sJ {\mathscr J} \def \sK {\mathscr K} \def \sL {\mathscr L} \def \sM {\mathscr M} \def \sN {\mathscr N} \def \sO {\mathscr O} \def \sP {\mathscr P} \def \sQ {\mathscr Q} \def \sR {\mathscr R} \def \sS {\mathscr S} \def \sT {\mathscr T} \def \sU {\mathscr U} \def \sV {\mathscr V} \def \sW {\mathscr W} \def \sX {\mathscr X} \def \sY {\mathscr Y} \def \sZ {\mathscr Z} \def \ra {\mathrm a} \def \rb {\mathrm b} \def \rc {\mathrm c} \def \rd {\mathrm d} \def \re {\mathrm e} \def \rf {\mathrm f} \def \rg {\mathrm g} \def \rh {\mathrm h} \def \ri {\mathrm i} \def \rj {\mathrm j} \def \rk {\mathrm k} \def \rl {\mathrm l} \def \rm {\mathrm m} \def \rn {\mathrm n} \def \ro {\mathrm o} \def \rp {\mathrm p} \def \rq {\mathrm q} \def \rr {\mathrm r} \def \rs {\mathrm s} \def \rt {\mathrm t} \def \ru {\mathrm u} \def \rv {\mathrm v} \def \rx {\mathrm x} \def \rw {\mathrm w} \def \ry {\mathrm y} \def \rz {\mathrm z} \def \rA {\mathrm A} \def \rB {\mathrm B} \def \rC {\mathrm C} \def \rD {\mathrm D} \def \rE {\mathrm E} \def \rF {\mathrm F} \def \rG {\mathrm G} \def \rH {\mathrm H} \def \rI {\mathrm I} \def \rJ {\mathrm J} \def \rK {\mathrm K} \def \rL {\mathrm L} \def \rM {\mathrm M} \def \rN {\mathrm N} \def \rO {\mathrm O} \def \rP {\mathrm P} \def \rQ {\mathrm Q} \def \rR {\mathrm R} \def \rS {\mathrm S} \def \rT {\mathrm T} \def \rU {\mathrm U} \def \rV {\mathrm V} \def \rX {\mathrm X} \def \rW {\mathrm W} \def \rY {\mathrm Y} \def \rZ {\mathrm Z} \def \ba {\mathbf a} \def \bb {\mathbf b} \def \bc {\mathbf c} \def \bd {\mathbf d} \def \be {\mathbf e} \def \bf {\mathbf f} \def \bF {\mathbf F} \def \bg {\mathbf g} \def \bh {\mathbf h} \def \bi {\mathbf i} \def \bj {\mathbf j} \def \bk {\mathbf k} \def \bl {\mathbf l} \def \bm {\mathbf m} \def \bn {\mathbf n} \def \bo {\mathbf o} \def \bp {\mathbf p} \def \bq {\mathbf q} \def \br {\mathbf r} \def \bs {\mathbf s} \def \bt {\mathbf t} \def \bu {\mathbf u} \def \bv {\mathbf v} \def \bx {\mathbf x} \def \bw {\mathbf w} \def \by {\mathbf y} \def \bz {\mathbf z} \def \bzero {\mathbf 0} \def \blambda {\boldsymbol{\lambda}} \def \btau {\boldsymbol{\tau}} \def \bmu {{\boldsymbol{\mu}}} \def \bnu {{\boldsymbol{\nu}}} \def \balp {{\boldsymbol{\alp}}} \def \bbet {{\boldsymbol{\beta}}} \def \bbeta {{\boldsymbol{\beta}}} \def \bgam {{\boldsymbol{\gam}}} \def \bdel {{\boldsymbol{\del}}} \def \bxi {{\boldsymbol{\xi}}} \def \bzeta {{ \boldsymbol{\zeta}}} \def \bomega {\boldsymbol{\omega}} \def \bome {\boldsymbol{\omega}} \def \bom {\boldsymbol{\omega}} \def \bgam {\boldsymbol{\gamma}} \def \fa {\mathfrak a} \def \fb {\mathfrak b} \def \fc {\mathfrak c} \def \fd {\mathfrak d} \def \fe {\mathfrak e} \def \ff {\mathfrak f} \def \fg {\mathfrak g} \def \fh {\mathfrak h} because it's reserved \def \fj {\mathfrak j} \def \fk {\mathfrak k} \def \fl {\mathfrak l} \def \fm {\mathfrak m} \def \fn {\mathfrak n} \def \fo {\mathfrak o} \def \fp {\mathfrak p} \def \fq {\mathfrak q} \def \fr {\mathfrak r} \def \fs {\mathfrak s} \def \ft {\mathfrak t} \def \fu {\mathfrak u} \def \fv {\mathfrak v} \def \fw {\mathfrak w} \def \fx {\mathfrak x} \def \fy {\mathfrak y} \def \fz {\mathfrak z} \def 
\fA {\mathfrak A} \def \fB {\mathfrak B} \def \fC {\mathfrak C} \def \fD {\mathfrak D} \def \fE {\mathfrak E} \def \fF {\mathfrak F} \def \fG {\mathfrak G} \def \fH {\mathfrak H} \def \fI {\mathfrak I} \def \fJ {\mathfrak J} \def \fK {\mathfrak K} \def \fL {\mathfrak L} \def \fM {\mathfrak M} \def \fN {\mathfrak N} \def \fO {\mathfrak O} \def \fP {\mathfrak P} \def \fQ {\mathfrak Q} \def \fR {\mathfrak R} \def \fS {\mathfrak S} \def \fT {\mathfrak T} \def \fU {\mathfrak U} \def \fV {\mathfrak V} \def \fW {\mathfrak W} \def \fX {\mathfrak X} \def \fY {\mathfrak Y} \def \fZ {\mathfrak Z} \def \cA {\mathcal A} \def \cB {\mathcal B} \def \cC {\mathcal C} \def \cD {\mathcal D} \def \cE {\mathcal E} \def \cF {\mathcal F} \def \cG {\mathcal G} \def \cH {\mathcal H} \def \cI {\mathcal I} \def \cJ {\mathcal J} \def \cK {\mathcal K} \def \cL {\mathcal L} \def \cM {\mathcal M} \def \cN {\mathcal N} \def \cO {\mathcal O} \def \cP {\mathcal P} \def \cQ {\mathcal Q} \def \cR {\mathcal R} \def \cS {\mathcal S} \def \cT {\mathcal T} \def \cU {\mathcal U} \def \cV {\mathcal V} \def \cW {\mathcal W} \def \cX {\mathcal X} \def \cY {\mathcal Y} \def \cZ {\mathcal Z} \def \le {\leqslant} \def \leq {\leqslant} \def \ge {\geqslant} \def \geq {\geqslant} \def \rank {\mathrm{rank}} \def \Im {\mathrm{Im}} \def \Re {\mathrm{Re}} \def \Nm {\mathrm{Nm}} \def \dim {\mathrm{dim}} \def \dimh {{\mathrm{\dim_H}}} \def \dimf {{\mathrm{\dim_F}}} \def \ord {\mathrm{ord}} \def \det {\mathrm{det}} \def \Tr {\mathrm{Tr}} \def \lcm {\mathrm{lcm}} \def \Ker {\mathrm{Ker}} \def \Hom {\mathrm{Hom}} \def \acts {\curvearrowright} \def \vol {\mathrm{vol}} \def \deg {\mathrm{deg}} \def \sinc {\mathrm{sinc}} \def \meas {\mathrm{meas}} \def \diag {\mathrm{diag}} \def \dag {\dagger} \def \diam {\diamond} \def \Li {{\mathrm{Li}}} \def \supp {{\mathrm{supp}}} \def \Bad {{\mathrm{Bad}}} \def \sgn {{\mathrm{sgn}}} \def \d {{\mathrm{d}}} \def \ds1 {\mathds{1}} \def \ind {\mathds{1}} \def \alp {{\alpha}} \def \bet {{\beta}} \def \gam {{\gamma}} \def \del {{\delta}} \def \eps {{\varepsilon}} \def \kap {{\kappa}} \def \lam {{\lambda}} \def \ome {{\omega}} \usepackage[colorinlistoftodos]{todonotes} \title[Diophantine approximation on fractals]{Simultaneous and multiplicative Diophantine approximation on missing-digit fractals} \author{Sam Chow} \address{Sam Chow, Mathematics Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, UK} \curraddr{} \email{[email protected]} \author{Han Yu} \address{Han Yu, Mathematics Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, UK} \curraddr{} \email{[email protected]} \thanks{} \subjclass[2020]{11J83 (primary); 28A80, 42B05 (secondary)} \keywords{Diophantine approximation, Littlewood's conjecture, fractals, Fourier series} \begin{document} \makeatletter \providecommand\@dotsep{5} \makeatother \begin{abstract} We investigate the metric theory of Diophantine approximation on missing-digit fractals. In particular, we establish analogues of Khinchin's theorem and Gallagher's theorem, as well as inhomogeneous generalisations. \end{abstract} \maketitle \section{Introduction} \subsection{Two foundational results in metric Diophantine approximation} We begin by recalling the foundational results of Khinchin and Gallagher. We write $\lam_k$ for $k$-dimensional Lebesgue measure, and write $\| x \|$ for the distance between a real number $x$ and the set of integers. 
For $\psi: \bN \to [0,1)$, we denote by $W_k^\times(\psi)$ the set of $\bx = (x_1, \ldots, x_k) \in [0,1)^k$ such that \[ \| n x_1 \| \cdots \| n x_k \| < \psi(n) \] has infinitely many solutions $n \in \bN$. Similarly, we denote by $W_k(\psi)$ the set of $\bx \in [0,1)^k$ such that \[ \max\{ \| n x_1 \|, \dots ,\| n x_k \|\} < \psi(n) \] holds for infinitely many $n \in \bN$. \begin{thm}[Khinchin's theorem \cite{Khi1924, Khi1926}] Let $\psi: \bN \to [0,1)$ be non-increasing, and let $k \in \bN$. Then \[ \lam_k(W_k(\psi)) = \begin{cases} 0, &\text{if } \displaystyle \sum_{n=1}^\infty \psi(n)^k < \infty \\ \\ 1, &\text{if } \displaystyle \sum_{n=1}^\infty \psi(n)^k = \infty. \end{cases} \] \end{thm} \begin{thm}[Gallagher's theorem \cite{Gal1962}] Let $\psi: \bN \to [0,1)$ be non-increasing, and let $k \in \bN$. Then \[ \lam_k(W_k^\times(\psi)) = \begin{cases} 0, &\text{if } \displaystyle \sum_{n=1}^\infty \psi(n) (\log n)^{k-1} < \infty \\ \\ 1, &\text{if } \displaystyle \sum_{n=1}^\infty \psi(n) (\log n)^{k-1} = \infty. \end{cases} \] \end{thm} Khinchin's theorem gives the simultaneous approximation rate of a generic vector, whilst Gallagher's theorem gives the generic multiplicative approximation rate. The latter is a strong form of a famous conjecture of Littlewood, except that the approximation rate is only valid for almost all vectors. \begin{conj} [Littlewood's conjecture in $k$ dimensions] Let \mbox{$\bx \in \bR^k$,} where $k \ge 2$. Then \[ \liminf_{n \to \infty} n \| n x_1 \| \cdots \| n x_k \| = 0. \] \end{conj} \subsection{Diophantine approximation on fractals} \label{sec: DAfrac} Now that we have seen the classical results of Khinchin and Gallagher, it is natural to ask what happens if one replaces Lebesgue measure with some other measure. A probability measure $\mu$ is \emph{Khinchin} if Khinchin's theorem holds with $\mu$ in the place of $\lambda_k$, and \emph{Gallagher} if Gallagher's theorem holds with $\mu$ in the place of $\lambda_k.$ The philosophy is that a `natural' probability measure $\mu$ should be Khinchin and Gallagher, unless there is a simple obstruction. A natural class of measures is given by induced Lebesgue measures on manifolds. The following landmark result is the culmination of decades of devoted research; see the major breakthroughs by Kleinbock and Margulis~\cite{KM1998} and by Beresnevich \cite{Ber2012}, as well as \cite{BDV2007, BVVZ2017, BVVZ2021, Ber1977, DRV1991, VV2006}. \begin{thm}[Beresnevich--Yang \cite{BY2023}, Beresnevich--Datta \cite{BD}] \label{DreamTheorem} Let $\mathcal{M}$ be a non-degenerate submanifold of $\mathbb{R}^k$, where $k\geq 2.$ Let $\cU$ be an open subset of $\cM$, and let $\mu$ be a smooth probability measure supported on $\cU \cap [0,1]^k$. Then $\mu$ is Khinchin. \end{thm} For the Gallagher property, much less is known. Badziahin and Levesley solved the convergence theory for planar curves, subject to curvature. For hyperplanes, see \cite{BHV2020, Cho2018, CT2020, CT2024, CY2024, Hua2024}, and finally \cite{CY}, where the problem was largely solved when $k \ge 9$. For our purposes, the most relevant result is as follows. \begin{thm} [Chow--Yu \cite{CY}] Let $\mathcal{M}$ be a hypersurface in $\mathbb{R}^k$, where $k\geq 5.$ Let $\cU$ be an open subset of $\cM$, in which the Gaussian curvature is non-zero, and let $\mu$ be a smooth probability measure supported on $\cU \cap [0,1]^k$. Then $\mu$ is Gallagher. \end{thm} In this article, we investigate \emph{missing-digit measures}. 
These generalise the canonical measure on the middle-third Cantor set, and we will define them in \S \ref{sec: Cantor}. A motivating goal is as follows. \begin{conj} [Dream Theorem for missing-digit fractals] \label{conj: main} Let $k$ be a positive integer, and let $\mu$ be a missing-digit measure on $\mathbb{R}^k$ that is not supported on any affine hyperplane. Then $\mu$ is Khinchin and Gallagher. \end{conj} \begin{rem} The so-called Dream Theorem was introduced in a survey article \cite{BRV2020}, referring to Theorem \ref{DreamTheorem} before it was proved. Conjecture \ref{conj: main} is analogous, but also includes the Gallagher property. We would expect it to extend to self-similar measures \cite{Mat2015}. \end{rem} A measure on $\mathbb{R}^k$ is \emph{split} if it is a Cartesian product of missing-digit measures on $\mathbb{R}$. Our main result is as follows. \begin{thm} [Main theorem] \label{thm: main} Let $\mu$ be a missing-digit measure on $\mathbb{R}^k.$ If \begin{equation} \label{MainAssumption} \dim_{\ell^1}(\mu) > k - \frac{k-1}{k+1}, \end{equation} then $\mu$ is Khinchin. If moreover $ \mu = \mu_1 \times \cdots \times \mu_k $ is split, and \begin{equation} \label{SplitAssumption} \dim_{\ell^1}(\mu_j) > 1 - \frac1{k+1} \qquad (1 \le j \le k), \end{equation} then it is Gallagher. \end{thm} The Fourier $\ell^1$ dimension $\dim_{\ell^1}$ was used in \cite{Yu} to establish a Jarn\'ik--Besicovitch-type theorem for fractal measures; we will define it in \S \ref{sec: dimension}. Computing Fourier $\ell^1$ dimension for missing-digit measures is not a simple task. In fact, it is not even immediately clear why there are proper missing-digit measures that satisfy \eqref{MainAssumption}. For this reason, we will in \S \ref{sec: special} provide concrete examples where Theorem \ref{thm: main} applies. In particular, we will see that if the `digit structure' of a missing-digit measure is simple enough, then its Fourier $\ell^1$ and Hausdorff dimensions differ only slightly. Fourier $\ell^1$ dimension was used in \cite{YuRadial, YuManifold} to deduce various results on the continuity of projections of missing-digit measures in $\mathbb{R}^k$, where $k \ge 2$. Recently, in \cite{CVY}, a precise method to compute $\dim_{\ell^1}(\mu)$ for an arbitrary missing-digit measure $\mu$ on $[0,1]$ was presented. We discuss a higher-dimensional analogue in \S \ref{sec: compute}. \subsection{Related work} Similar conjectures can be found in the works of Levesley--Salp--Velani~\cite{LSV2007} and Bugeaud--Durand~\cite{BD2016} and, less formally, in those of Mahler \cite{Mahler84} and Kleinbock--Lindenstrauss--Weiss~\cite{KLW}. Recent progress towards Conjecture \ref{conj: main} can be found in \cite{CVY, DJ, EFS2011, KL2020, SW2019, Yu}. We highlight the following breakthrough. \begin{thm} [Khalil--Luethi \cite{KL2020}] \label{KLthm} Let $k \in \bN$, and let $\mu$ be a missing-digit measure on $[0,1]^k$. Then there is an effectively-computable constant $\epsilon > 0$ depending only on $k$ such that if $\Haus(\mu) > k -\epsilon$ then $\mu$ is Khinchin. \end{thm} Khalil and Luethi's method applies not only to missing-digit measures but also to more general self-similar measures with rational parameters and the open set condition. The Hausdorff dimension $\Haus(\mu)$ of a Borel measure $\mu$ is defined, for example, in \cite[Chapter 2]{Fal97}, but we do not require the notion in full generality. 
If $\mu$ is the Cantor--Lebesgue measure of a missing-digit fractal $\cK$, see \S \ref{sec: Cantor}, then \[ \dim_{\ell^1}(\mu) \le \Haus(\mu) = \Haus(\cK). \] In this setting, we will see in \S \ref{sec: special} that the difference between $\dim_{\ell^1}(\mu)$ and $\Haus(\mu) = \Haus(\cK)$ is often small, in which case Theorem \ref{thm: main} goes beyond Theorem \ref{KLthm}. In the case $k=1$, there has been even more dramatic recent progress. \begin{thm} [B\'enard--He--Zhang \cite{BHZ}] \label{BHZthm} Let $\mu$ be a self-similar probability measure on $[0,1]$ whose support is not a singleton. Then $\mu$ is Khinchin. \end{thm} Theorems \ref{KLthm} and \ref{BHZthm} both use random walks on homogeneous spaces. Our Fourier-analytic methodology, here and in \cite{CY, Yu}, is very different. \subsection{More refined results} In the course of proving Theorem \ref{thm: main}, we establish some finer results. In order to state them, we introduce some more refined notions. For $\psi: \bN \to [0,1)$ and $\by =(y_1,\dots,y_k)\in\mathbb{R}^k$, we denote by $W_k(\psi,\by)$ the set of $\bx \in [0,1)^k$ such that \[ \max\{\| n x_1-y_1 \|, \dots ,\| n x_k-y_k \|\} < \psi(n) \] has infinitely many solutions $n \in \bN$, and write $W_k^\times(\psi,\by)$ for the set of $\bx \in [0,1)^k$ such that \[ \| n x_1 - y_1 \| \cdots \| n x_k-y_k \| < \psi(n) \] holds for infinitely many $n \in \bN$. \begin{defn} Let $k \in \bN$, and let $\psi:\mathbb{N} \to [0,1)$ be non-increasing. A probability measure $\mu$ on $[0,1]^k$ is \begin{itemize} \item \emph{inhomogeneous convergent Khinchin} if \[ \sum_{n=1}^{\infty} \psi(n)^k < \infty \implies \mu(W_k(\psi,\by)) = 0 \quad (\by \in \bR^k), \] \item \emph{inhomogeneous divergent Khinchin} if \[ \sum_{n=1}^{\infty} \psi(n)^k = \infty \implies \mu(W_k(\psi,\by)) = 1 \quad (\by \in \bR^k), \] \item \emph{inhomogeneous convergent Gallagher} if \[ \sum_{n=1}^{\infty} \psi(n) (\log n)^{k-1} < \infty \implies \mu(W^\times_k(\psi,\by)) = 0 \quad (\by \in \bR^k), \] \item \emph{inhomogeneous divergent Gallagher} if \[ \sum_{n=1}^{\infty} \psi(n) (\log n)^{k-1} = \infty \implies \mu(W^\times_k(\psi,\by)) = 1 \quad (\by \in \bR^k). \] \end{itemize} \end{defn} We establish the following new results. Theorem \ref{thm: main} follows from these by specialising $\by = \bzero$ in the conclusions. \begin{thm} [Convergence theory] \label{thm: convergence} Let $k \in \bN$, and let $\mu$ be a missing-digit measure on $[0,1]^k$ satisfying \begin{equation} \label{WeakAssumption} \dim_{\ell^1}(\mu) > k - \frac{k}{k+1}. \end{equation} Then $\mu$ is inhomogeneous convergent Khinchin. If moreover $ \mu = \mu_1 \times \cdots \times \mu_k $ is split and satisfies \eqref{SplitAssumption}, then it is inhomogeneous convergent Gallagher. \end{thm} \begin{thm} [Divergence theory] \label{thm: divergence} Let $k \in \bN$, and let $\mu$ be a missing-digit measure on $[0,1]^k$ satisfying \eqref{MainAssumption}. Then $\mu$ is inhomogeneous divergent Khinchin and inhomogeneous divergent Gallagher. \end{thm} Observe that, for the divergent Gallagher and convergent/divergent Khinchin properties, we do not require $\mu$ to be split. We only require $\mu$ to be split for the convergent Gallagher result. Note also that if $k = 1$ then the assumption \eqref{MainAssumption} cannot be met. We will see in \S \ref{sec: special} that if $k \ge 2$ then \eqref{MainAssumption} can hold. \subsection{Methods} Observe that sets like $W_k^\times(\psi)$ are limit superior sets. 
To show that they have full measure, we use the divergence Borel--Cantelli lemma \cite{BV2023}. This requires us to estimate sums of measures of arcs around rationals, and of the intersections of these arcs. We achieve this using Fourier analysis, specifically via the framework of `moment transference principles' that we recently developed in \cite{CY}. Unlike in the case of non-degenerate hypersurfaces \cite{CY}, the Fourier coefficients $\hat \mu(\bk)$ do not decay pointwise: \mbox{$\hat \mu(\bk) \not \to 0$} as $\| \bk \|_\infty \to \infty$. We instead exploit their average decay using the Fourier $\ell^1$ dimension --- developed in \cite{Yu} and \cite{CVY} --- as discussed in \S \ref{sec: DAfrac}. Divisibility also plays a key role, since the $n^{-1}$-periodicity of the arcs manifests as $n$-divisibility in Fourier space $\bZ^k$. To establish the variance transference principle, we ultimately count solutions to a system of Diophantine equations.
\subsection{A bound on the Hausdorff dimension} Our expectation transference principle is strong enough to establish a Hausdorff dimension bound for simultaneous approximation on missing-digit fractals. To put this into context, we recall Levesley's inhomogeneous version of the classical Jarn\'ik--Besicovitch theorem. For $t \ge 0$ and $n \in \bN$, let $\psi_t(n) = n^{-t}$.
\begin{thm} [Inhomogeneous Jarn\'ik--Besicovitch \cite{Lev1998}] \label{LevesleyThm} Let $k \in \bN$ and $\by \in \bR^k$. Then \[ \Haus(W_k(\psi_t,\by)) = \min \left\{ \frac{k+1}{t+1}, k \right\}. \] \end{thm}
Our result is as follows.
\begin{thm} \label{HausdorffConvThm} Let $k \in \bN$ and $\by \in \bR^k$. Then there exists $\eps > 0$ such that the following holds whenever \[ t < \frac1k + \eps. \] Let $\cK$ be a missing-digit fractal in $[0,1]^k$ whose Cantor--Lebesgue measure $\mu$ satisfies \eqref{WeakAssumption}. Then \[ \Haus(W_{k}(\psi_t,\by) \cap \cK) \leq \Haus(W_k(\psi_t,\by)) + \Haus(\cK) - k. \] \end{thm}
\begin{rem} This upper bound was first conjectured with equality in \cite{BD2016}, when $\cK$ is the middle-third Cantor set and $\by = \bzero$. Despite the recent progress made in \cite{Yu} and \cite{CVY}, the problem remains open. The conjectured dimension differs for larger values of $t$, transitioning to $\Haus(\cK)/(t + 1)$. \end{rem}
\subsection{Counting rational points on/near missing-digit fractals} For any compact set $\cK \subset \mathbb{R}^k$, and real numbers $Q \ge 1$ and $\del \ge 0$, we define \[ \cN_\cK(Q,\delta) = \{ (\ba,q) \in \bZ^k \times (\bZ \cap [1,Q]): \dist(\ba/q, \cK) \le \delta/Q \}. \] Observe that $\mathcal{N}_\cK(Q,0)$ comprises rational points in $\cK$ of height at most $Q$. A challenging problem is to show that if $\cK$ `lacks linear structure' but has a naturally-associated dimension $\dim(\cK)$ then, for any $\varepsilon>0,$ \[ \#\mathcal{N}_\cK(Q,0) \ll_\eps Q^{\dim (\cK) + \epsilon}. \] For missing-digit fractals, we have $\dim (\cK) = \Haus (\cK).$ For irreducible projective varieties, the problem is known as Serre's dimension growth conjecture \cite{Bro2009, Serre}. It was solved by Salberger in 2009, and that solution was recently published \cite{Sal2023}. For the middle-third Cantor set, the conjecture was made by Broderick, Fishman, and Reich \cite{BFR11}. The statement naturally extends to missing-digit fractals. In the case $k=1$, progress was made recently in \cite{CVY}, but not until now has substantial progress been made for $k \ge 2$.
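\begin{rem}
The following short \texttt{Python} sketch is included purely for illustration and plays no role in our arguments; the base, digit set, and parameters in its final line are arbitrary choices of ours. It approximates $\# \cN_\cK(Q,\delta)$ for a missing-digit fractal $\cK \subseteq [0,1]$ with base $b$ and digit set $\cD$ (notation as in \S \ref{sec: prep}), so $k = 1$, by replacing $\cK$ with the left endpoints of its level-$L$ cylinders, which perturbs each distance by at most $b^{-L}$. Since $\delta < 1$, numerators outside $[0,q]$ never qualify, so only $0 \le a \le q$ is scanned.
\begin{verbatim}
from bisect import bisect_left

def cylinder_endpoints(b, digits, level):
    # Left endpoints of the level-`level` cylinders of K in [0,1];
    # every point of K lies within b**(-level) of one of these.
    pts = [0.0]
    scale = 1.0
    for _ in range(level):
        scale /= b
        pts = [p + d * scale for p in pts for d in digits]
    return sorted(pts)

def dist_to_set(x, pts):
    # Distance from x to the finite sorted list pts.
    i = bisect_left(pts, x)
    return min(abs(x - p) for p in pts[max(i - 1, 0):i + 1])

def count_N(b, digits, Q, delta, level=8):
    # Approximate #N_K(Q, delta): pairs (a, q) with 1 <= q <= Q and
    # dist(a/q, K) <= delta/Q, up to the b**(-level) discretisation error.
    pts = cylinder_endpoints(b, digits, level)
    return sum(1 for q in range(1, Q + 1) for a in range(q + 1)
               if dist_to_set(a / q, pts) <= delta / Q + b ** (-level))

# Example: b = 5, digit set {0,1,2,3}, Q = 50, delta = Q**(-1/2).
print(count_N(5, (0, 1, 2, 3), 50, 50 ** -0.5))
\end{verbatim}
Comparing the output with the heuristic \eqref{HeuristicEstimate} below, for a few values of $Q$, provides a quick sanity check.
\end{rem}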
For $\del \in (0,1)$, heuristic reasoning suggests that \begin{equation} \label{HeuristicEstimate} \#\mathcal{N}_\cK(Q,\delta) \asymp \delta^{k-\dim (\cK)} Q^{\dim(\cK)+1}. \end{equation} Indeed, there are $\asymp Q^{k+1}$ pairs $(\ba,q)$ with $1 \le q \le Q$ and $\ba/q \in [0,1]^k$, and the $(\delta/Q)$-neighbourhood of $\cK$ has Lebesgue measure $\asymp (\delta/Q)^{k - \dim(\cK)}$, so the right-hand side of \eqref{HeuristicEstimate} is the count one would expect if the points $\ba/q$ behaved like independent, uniformly-distributed points. The critical threshold, corresponding to Dirichlet's approximation theorem, is $\del = Q^{-1/k}$. The problem becomes significantly more demanding when $\delta$ is smaller than this, and the estimate can even break down when $\delta$ is extremely small. A related problem was introduced by Mazur~\cite{Mazur}, who asked how close a rational point can be to a smooth planar curve without lying on it. When $\cK$ is a suitably curved manifold, we now have some good counting estimates with $\delta$ below the $Q^{-1/k}$ threshold, especially for hypersurfaces, see for instance \cite{BVVZ2021, Hua2020, SchYam2022, S24}. For certain missing-digit fractals on $[0,1]$, such a result was obtained in \cite{CVY}. Our present methods deliver \eqref{HeuristicEstimate} beyond the critical threshold for missing-digit fractals on $[0,1]^k$.
\begin{thm} \label{thm: Rational Counting} Let $k \in \bN$, and let $\cK$ be a proper missing-digit fractal in $[0,1]^k$ with Cantor--Lebesgue measure $\mu$ satisfying \eqref{WeakAssumption}. Then, for some constant $\eta > 0,$ we have \eqref{HeuristicEstimate} whenever $Q$ is sufficiently large and $\del \gg Q^{-\eta-1/k}$. \end{thm}
We can apply Theorem \ref{thm: Rational Counting} to count rational points on missing-digit fractals.
\begin{cor} \label{CountingRationalPoints} Let $k \in \bN$, and let $\cK$ be a proper missing-digit fractal in $[0,1]^k$ with Cantor--Lebesgue measure $\mu$ satisfying \eqref{WeakAssumption}. Then, for some constant $\eta > 0,$ \[ \# \cN_{\cK}(Q,0) \ll Q^{(1+1/k)\Haus(\cK) - \eta}. \] \end{cor}
\begin{proof} By Theorem \ref{thm: Rational Counting} with $\eta/(k-\Haus(\cK))$ in place of $\eta$, \begin{align*} \# \cN_{\cK}(Q,0) &\le \# \cN_{\cK}(Q,Q^{-\eta-1/k}) \\ &\ll Q^{-\eta} (Q^{-1/k})^{k-\Haus(\cK)} Q^{\Haus(\cK) + 1} = Q^{(1+1/k)\Haus(\cK) -\eta}. \end{align*} \end{proof}
This generalises the $k=1$ case, which was demonstrated in \cite{CVY}. Counting rationals in the middle-third Cantor set is considered to be a difficult problem, see for instance \cite{RSTW2020}. As explained in \cite{CVY}, the condition \eqref{WeakAssumption} fails for the middle-third Cantor set. The condition holds, for instance, when \[ k = 1, \qquad b \ge 5, \qquad \# \cD = b-1. \] See \S \ref{sec: special} for some situations in which the condition holds and $k \in \bN$ is arbitrary. The assumption \eqref{WeakAssumption} is essential for the proof of Theorem \ref{thm: Rational Counting}, but we suspect that the weaker assumption $\Haus(\mu) > k-\frac{k}{k+1}$ should suffice for the result. It is easy to see that such a condition cannot be dropped without imposing other conditions, e.g. that $\mu$ is not supported in any proper affine subspace. We illustrate with the following example, using the notation of \S \ref{sec: prep}.
\begin{ex} \label{ex: sharp counting} Suppose $\cK = \cK_{b,\cD}$ is a missing-digit fractal in $[0,1]^k$ with \[ \cD \supseteq \{ 0, 1, \ldots, b-1 \}^{k-1} \times \{ 0 \}. \] As $\cK$ contains $[0,1]^{k-1} \times \{ 0 \}$, we have \[ \# \cN_{\cK}(Q,0) \gg Q^k. \] By \eqref{HausdorffMissing}, such a set $\cK$ exists with \[ \Haus(\cK) < k - \frac{k}{k+1}. \] If moreover $\del \le Q^{-1/k}$, then \[ \delta^{k - \Haus(\cK)} Q^{\Haus(\cK) + 1} \le Q^{(k+1)\Haus(\cK)/k} = o(Q^k).
\] \end{ex} The constant $\eta$ in Theorem \ref{thm: Rational Counting} can be chosen according to a lower bound for $\dim_{\ell^1}(\mu)$, which can be effectively computed using Theorem~\ref{l1lower}. In special cases, we obtain a bound for $\# \cN_\cK(Q,0)$ with an exponent that is arbitrarily close to best possible. \begin{thm} \label{OtherBound} Fix $\eta > 0.$ Then there exists a missing-digit fractal $\cK \supseteq [0,1]^{k-1} \times \{ 0 \}$ such that \[ \# \cN_{\cK}(Q,0) \ll Q^{k+\eta}, \qquad \Haus(\cK) > k - \eta. \] \end{thm} Note that if $\Haus(\cK) > k -\frac{k}{k+1}$ then the estimate \[ \# \cN_{\cK}(Q,0) \ll Q^{k+\eta} \] does not follow trivially from Corollary \ref{CountingRationalPoints}. Since $\cK \supseteq [0,1]^{k-1} \times \{ 0 \}$, the bound is sharp up to a factor of $O(Q^\eta)$. However, we suspect that better estimates hold for missing-digit fractals that do not concentrate too much on affine subspaces. \subsection{Intrinsic Diophantine approximation} Although the main focus of this paper is extrinsic Diophantine approximation, we also establish new results in the intrinsic setting, wherein we approximate by rational vectors in $\cK$. The terminology was introduced by Mahler in the influential article \cite{Mahler84}. Let $\cK \subsetneq [0,1]^k$ be a missing-digit fractal with Cantor--Lebesgue measure $\mu$, and let $\psi:\mathbb{N} \to [0,1)$. We denote by $W^\cK_k(\psi)$ the set of $\bx = (x_1,\dots,x_k) \in [0,1)^k$ such that \[ \max\left\{\left|x_1-\frac{a_1}{n}\right|,\dots,\left|x_k-\frac{a_k}{n}\right|\right\}<\frac{\psi(n)}{n} \] holds for infinitely many $(a_1,\dots,a_k,n) \in\mathbb{Z}^k \times \mathbb{N}$ for which \[ \left(\frac{a_1}{n},\dots,\frac{a_k}{n}\right) \in \cK. \] For $\tau>0,$ we write $W_k^\cK(\tau) = W_k^K(n \mapsto n^{-\tau}).$ It follows from a result of Weiss \cite{W01} that \begin{equation} \label{WeissCor} \mu(W_1^\cK(\tau))=0 \qquad (\tau > 1). \end{equation} More generally, it follows from a result of Kleinbock--Lindenstrauss--Weiss \cite{KLW} that if $\mu$ satisfies an affine irreducibility condition then \begin{equation} \label{KLWcor} \mu(W_k^\cK(\tau)) = 0 \qquad (\tau > 1/k). \end{equation} \begin{conj} [Broderick--Fishman--Reich \cite{BFR11}] \label{IntrinsicVWAconjecture} Let $k \in \bN$, and let $\cK$ be a proper missing-digit fractal in $[0,1]^k$ that is not contained in any proper affine subspace. Then, for all $\tau>0,$ \[ \mu(W_k^\cK(\tau)) = 0. \] \end{conj} This conjecture was originally posed for the middle-third Cantor set, but we believe that it should hold in general. A recent result in \cite{CVY} asserts that if $\dim_{\ell^1}(\mu) > 1/2$ then there exists $\eta > 0$ such that \begin{equation} \label{CVYintrinsic} \mu(W_1^\cK(\tau))=0 \qquad (\tau > 1 - \eta). \end{equation} This refined \eqref{WeissCor}. Presently, we extend \eqref{CVYintrinsic} to higher dimensions, thereby refining \eqref{KLWcor}. \begin{thm} \label{thm: Intrinsic App} Let $k \in \bN$, and let $\cK$ be a proper missing-digit fractal in $[0,1]^k$ with Cantor--Lebesgue measure $\mu$ satisfying \eqref{WeakAssumption}. Then, for some $\eps > 0$, \[ \mu(W_k^\cK(\tau)) = 0 \qquad \left( \tau > \frac1k - \eps \right). \] \end{thm} We will see that this is a simple consequence of Corollary \ref{CountingRationalPoints}. Note that if \eqref{WeakAssumption} holds then $\mu$ cannot be supported in any proper affine subspace. 
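\begin{rem}
The following standard example is included only as an illustration of the intrinsic setting, and plays no role in our arguments. The middle-third Cantor set contains rationals beyond the endpoints of its construction intervals: for instance, \[ \frac{1}{4} = \sum_{j=1}^\infty \frac{2}{3^{2j}} \] has base-$3$ expansion $0.\overline{02}$, and so lies in the middle-third Cantor set. Thus the rational points available for intrinsic approximation form a richer set than the obvious triadic endpoints.
\end{rem}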
\subsection{Further related work} One can simplify the problems of counting rationals and intrinsic approximation by using the intrinsic height \cite{FS2014} instead of the denominator. For the middle-third Cantor set, see the recent article \cite{TWW2024}. Mahler asked whether there are very well approximable numbers in the middle-third Cantor set that are irrational and non-Liouville. To solve this problem, Levesley--Salp--Velani \cite{LSV2007} resolved the metric theory of approximation by triadic rationals in the middle-third Cantor set. Velani asked about the analogous problem for dyadic approximation. This is related to Furstenberg's `times two, times three' problem \cite{Fur1967}, and was first investigated in \cite{ACY2023}, with subsequent developments in \cite{Bak2024} and most recently \cite{ABCY2023}. Velani's conjecture is still very much open.
\subsection*{Organisation} In \S \ref{sec: prep}, we gather some preliminaries and describe our general setup. In \S \ref{sec: ETP}, we establish the expectation transference principle (ETP). In \S \ref{sec: VTP}, we establish the variance transference principle (VTP). In \S \ref{sec: final proofs}, we complete the proofs of Theorems \ref{thm: convergence}, \ref{thm: divergence}, \ref{HausdorffConvThm}, \ref{thm: Rational Counting}, \ref{OtherBound}, and \ref{thm: Intrinsic App}. In Appendix \ref{sec: morel1}, we further discuss Fourier $\ell^1$ dimension for missing-digit measures.
\subsection*{Notation} For complex-valued functions $f$ and $g$, we write $f \ll g$ or $f = O(g)$ if there exists a constant $C > 0$ such that $|f| \le C|g|$ pointwise, we write $f \sim g$ if $f/g \to 1$ in some specified limit, and we write $f = o(g)$ if $f/g \to 0$ in some specified limit. We will work in $k$-dimensional Euclidean space, and the implied constants will always be allowed to depend on $k$, as well as on any parameters which we describe as fixed or constant. Any further dependence will be indicated with a subscript. For $x \in \bR$, we write $e(x) = e^{2 \pi i x}$. For $r > 0$ and $\by \in \bR^k$, we write $B_r(\by)$ for the ball of radius $r$ centred at $\by$, and put $B_r = B_r(\bzero)$.
\subsection*{Funding} HY was supported by the Leverhulme Trust (ECF-2023-186).
\subsection*{Rights} For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission.
\section{Preparation} \label{sec: prep} In this section, we gather some preliminaries and describe our general setup.
\subsection{Missing-digit fractals and measures} \label{sec: Cantor} Let $k \ge 1$ and $b \ge 2$ be integers, and let $\bP$ be a probability measure on $\{ 0,1,\ldots,b-1 \}^k$. A \emph{missing-digit measure} is the probability measure $\mu_{b,\bP}$ whose distribution is that of the random variable \[ \sum_{j=1}^\infty \bd^{(j)}/b^j, \] where the $\bd^{(j)} \in \{0,1,\ldots,b-1\}^k$ are i.i.d. with distribution $\bP$. A \emph{missing-digit fractal} is the support of a missing-digit measure. Equivalently, a missing-digit fractal is \[ \cK_{b,\cD} = \left \{ \sum_{j=1}^\infty \bd^{(j)}/b^j: \bd^{(j)} \in \cD \right \} \subseteq [0,1]^k, \] where $k,b$ are as above and $\cD \subseteq \{ 0, 1, \ldots, b-1 \}^k$. The \emph{Cantor--Lebesgue measure} of a missing-digit fractal $\cK_{b,\cD}$ is \[ \mu_{b, \cD} = \mu_{b,\bP}, \qquad \text{where} \qquad \bP(\bd) = (\# \cD)^{-1} \quad (\bd \in \cD). \] \begin{ex} The middle-third Cantor set is $\cK_{3, \{0,2\}}$. The Cantor measure is the Cantor--Lebesgue measure of the middle-third Cantor set.
\end{ex} A missing-digit fractal $\cK$ is \emph{proper} if $\# \cK \ge 2$ and $\cK \ne [0,1]^k$. A missing-digit measure is \emph{proper} if its support is proper.
\subsection{Notions of dimension} \label{sec: dimension} We begin our discussion with Hausdorff measure and dimension, which are standard concepts in fractal geometry \cite{Fal2014}. For $\cU \subseteq \bR^k$, we write $|\cU|$ for the diameter of $\cU$. Let $k \in \bN$ and $\cF \subseteq \bR^k$. For $s \ge 0$ and $\del > 0$, define \[ H^s_\del(\cF) = \inf \left \{ \sum_{i=1}^\infty |\cU_i|^s: \cF \subseteq \displaystyle \bigcup_{i=1}^\infty \cU_i, \quad |\cU_i| \le \del \: \forall i \right \}. \] The \emph{Hausdorff $s$-measure} of $\cF$ is \[ H^s(\cF) = \lim_{\del \to 0} H^s_\del(\cF), \] and the \emph{Hausdorff dimension} of $\cF$ is \[ \Haus(\cF) = \inf \{ s \ge 0: H^s(\cF) = 0 \}. \] Hausdorff measure refines Lebesgue measure, in that $H^k(\cF) = c_k \lam_k(\cF)$ for some constant $c_k > 0$. Note that \begin{equation} \label{HausdorffMissing} \Haus(\cK_{b,\cD}) = \frac{\log \# \cD}{\log b}. \end{equation} We equip ourselves with the following basic tool, which is commonly used to establish upper bounds for Hausdorff measure and dimension.
\begin{lem} [Hausdorff--Cantelli lemma] \label{HausdorffCantelli} Let $k \in \bN$, let $\cE_1, \cE_2, \ldots$ be hypercubes in $\bR^k$, and put $\cE_\infty = \displaystyle \limsup_{n \to \infty} \cE_n$. Let $s > 0$, and suppose \[ \sum_{j=1}^\infty |\cE_j|^s < \infty. \] Then $H^s(\cE_\infty) = 0$ and $\Haus(\cE_\infty) \le s$. \end{lem}
\begin{proof} This is \cite[Lemma 3.10]{BD1999}. \end{proof}
\bigskip Next, we recall the Fourier $\ell^t$ dimensions, which were brought into this subject in \cite{Yu}. Let $\mu$ be a Borel probability measure supported on $[0,1]^k$. For $\bxi \in \bZ^k$, the $\bxi^{\mathrm{th}}$ \emph{Fourier coefficient} of $\mu$ is \[ \hat \mu(\bxi) = \int_{[0,1]^k} e(-\bxi \cdot \bx) \d \mu(\bx). \] The \emph{Fourier $\ell^t$ dimension} of $\mu$ is \[ \dim_{\ell^t}(\mu) = \sup \left \{ s \ge 0: \sum_{\| \bxi \|_\infty \le Q} |\hat \mu(\bxi)|^t \ll Q^{k - s} \right \}. \] It follows from the Cauchy--Schwarz inequality that \[ \frac{\dim_{\ell^2}(\mu)}{2} \le \dim_{\ell^1}(\mu) \le \dim_{\ell^2}(\mu). \] For $s > 0$, we say that $\mu$ is \emph{$s$-regular} if \[ \mu(B_r(\by)) \asymp r^s \qquad (\by \in \supp(\mu), \quad 0 < r \le 1). \] Note that missing-digit measures are $s$-regular with $s$ being the Hausdorff dimension of their support. By mimicking the continuous analogue in \cite[\S 3.8]{Mat2015}, it is readily confirmed that if $\mu$ is $s$-regular then \[ \dim_{\ell^2}(\mu) = \Haus(\supp(\mu)) = s. \] Note also that if $\mu$ is a missing-digit measure and $\Haus(\supp(\mu)) = s$ then \begin{equation} \label{MeasureUpper} \mu(B_r(\by)) \ll r^s \qquad (\by \in \bR^k, \quad 0 < r \le 1). \end{equation}
\subsection{Moment transference principles} \label{sec: MTP} For any Borel probability measure $\mu$ on $[0,1]^k$ and any $f \in L^1(\mu)$, we write \[ \mu(f) = \int_{[0,1]^k} f \d \mu. \] Let $\lam$ be Lebesgue measure on $[0,1]^k$. Let $\cE_1, \cE_2, \ldots$ be bounded Borel subsets of $\bR^k$, and put $\cE_\infty = \displaystyle \limsup_{n \to \infty} \cE_n$. For $n \in \bN$, let $f_n: \bR^k \to [0,\infty)$ be compactly supported and measurable. Define \begin{align*} E_N(\mu) = \sum_{n \le N} \mu(f_n), \qquad V_N(\mu) = \int_{[0,1]^k} \bigg( \sum_{n \le N} (f_n(\bx) - \lam(f_n)) \bigg)^2 \d \mu(\bx).
\end{align*} The \emph{expectation transference principle} (ETP) holds if \[ E_N(\mu) = (1 + o(1)) E_N(\lam) + O(1) \qquad (N \to \infty). \] The \emph{variance transference principle} (VTP) holds if \[ V_N(\mu) = V_N(\lam) + o(E_N(\mu)^2) + O(1), \] along a sequence of $N \to \infty$. \begin{lem} [General convergence theory conclusion] \label{GenConv} Let $\psi: \bN \to [0,1)$. Assume that $f_n \gg 1$ on $\cE_n$ for all sufficiently large $n \in \bN$, that \begin{equation} \label{LebesgueConvergence} \sum_{n=1}^\infty \lam(f_n) < \infty, \end{equation} and that we have ETP for this sequence of functions. Then $\mu(\cE_\infty) = 0$. \end{lem} \begin{proof} Replace $A_n^\times$ by $\cE_n$ in the proof of \cite[Lemma 2.6]{CY}. \end{proof} \begin{lem} [General divergence theory conclusion] \label{GenDiv} Let $C \ge 1$ and $\psi: \bN \to [0,1)$. For $n \in \bN$, suppose $f_n$ is supported on $\cE_n$. Suppose we have ETP and VTP for this sequence of functions, as well as \begin{equation} \label{LebesgueDivergence} \sum_{n=1}^\infty \lam(f_n) = \infty \end{equation} and \begin{equation} \label{LebesgueQIA} \sum_{m,n \le N} \lam(f_m f_n) \le C E_N(\lam)^2 + O(1). \end{equation} Then $\mu(\cE_\infty) \ge 1/C$. \end{lem} \begin{proof} The ETP and the divergence of $\sum_n \lam(f_n)$ give \[ \sum_{n=1}^\infty \mu(f_n) = \infty. \] By \cite[Lemma 2.5]{CY}, \[ \sum_{m,n \le N} \mu(f_m f_n) = \sum_{m,n \le N} \lam(f_m f_n) + o(E_N(\mu)^2) + O(1), \] along a sequence of $N \to \infty$. Combining this with \eqref{LebesgueQIA} and the ETP yields \[ \sum_{m,n \le N} \mu(f_m f_n) \le (C + o(1)) E_N(\mu)^2 + O(1) \] along this sequence. Functional divergence Borel--Cantelli \cite[Lemma 2.7]{CY} completes the proof. \end{proof} \subsection{Setup} \label{sec: rectangle} Our analysis involves decomposing the problem into rectangles, scaling bump functions accordingly, and passing to Fourier space. Let $w: \bR \to [0,1]$ be a non-zero bump function supported on $[-2,2]$. This has the property that \begin{equation} \label{decay} \hat w(\xi) \ll_L |\xi|^{-L}, \end{equation} for any $L \ge 1$. The function $w$ will approximate $\{ x: |x| \le 1\}$ or $\{ x: 1/2 \le |x| \le 1 \}$. Next, we adapt $w$ to certain rectangles. Let $\by \in \bR^k$ and $\bd \in (0,1/2]^k$, and define \begin{align*} A_n(\bd) &= \{ \bx \in [0,1]^k: \| n x_j - y_j \| < n d_j \quad (1 \le j \le k) \}, \\ A^\circ_n(\bd) &= \{ \bx \in [0,1]^k: n d_j/2 < \| n x_j - y_j \| < n d_j \quad (1 \le j \le k) \},\\ \cR(\bd) &= [-d_1, d_1] \times \cdots \times [-d_k, d_k], \\ \cR^\vee(\bd) &= [-1/d_1,1/d_1] \times \cdots \times [-1/d_k, 1/d_k], \\ w_{\cR(\bd)} (\bx) &= \prod_{j \le k} w(x_j/d_j). \end{align*} Then \[ \widehat{w_{\cR(\bd)}}(\bxi) = \prod_{j \le k} d_j \hat w(d_j \xi_j). \] Note that $w_{\cR(\bd)}$ is normalised so that \[ \widehat{w_{\cR(\bd)}}(\bzero)\asymp d_1\cdots d_k. \] If $\bd$ is clear from the context, then we will simply write $\cR$ for $\cR(\bd)$ and $\cR^\vee$ for $\cR^\vee(\bd)$. By \eqref{decay}, we thus have \begin{equation} \label{decay2} \frac{\widehat{w_\cR}(\bxi)} {d_1 \cdots d_k} \ll 2^{-mL} \qquad (\bxi \in 2^m \cR^\vee \setminus 2^{m-1} \cR^\vee). \end{equation} We see that $A_n(\bd)$ and $\cR$ have the same shape, and note that $\cR^\vee$ is the dual rectangle of $\cR.$ The bump function $w_{\cR}$ is supported on \[ 2 \cR := \{ 2 \bx: \bx \in \cR \}. 
\] Its Fourier transform $\hat{w}_{\cR}$ is essentially supported on $\cR^\vee.$ For any shift $\bgam\in\mathbb{R}^k,$ the shifted function $w_{\cR}(\cdot-\bgam)$ is supported on $\cR+\bgam$, and its Fourier transform is still essentially supported on $\cR^\vee.$ These considerations can be used to establish ETP and VTP for unions of rectangles. We will work with a smooth, periodically extended version of the indicator function of $A_n(\bd)$, namely \begin{equation} \label{AnStar} A^*_n(\bx;\bd) = \sum_{\ba \in \mathbb{Z}^k} w_{\cR(\bd)} \left( \bx - \frac{\ba+\by}{n} \right), \end{equation} where $w$ approximates $\{ x: |x| \le 1 \}$. By choosing $w$ to instead approximate $\{ x: 1/2 \le |x| \le 1 \}$, we can alternatively use this same expression to approximate $A_n^\circ(\bd)$. For the simultaneous theory, we use $A_n(\bd)$ with $\bd =(d,d,\dots,d)$ for some $d>0.$ Thus, the corresponding $\cR$ and $\cR^\vee$ are all hypercubes. For the multiplicative theory, the situation is more complicated, as we need to consider boxes of different shapes in order to approximate \[ A_n^\times := \{ \bx \in [0,1)^k: \| n x_1 - y_1 \| \cdots \| n x_k - y_k \| < \psi(n) \}. \] We write \[ h_i = 2^{-i} \] for each integer $i$ in some range. The range is given by \[ \frac{\psi(2^{m-1})}{2^{m-1}} \le h_i \le \frac1{2^{m-1}} \] in the convergence setting, for \[ n \in D_m := [2^{m-1}, 2^m). \] Fixing a small constant $\tau > 0$, the range in the divergence setting is \[ \frac1{2^{(1+(1+\tau)/k)m}} \ll h_i \ll \frac1{2^{(1+(1-\tau)/k)m}}. \] Writing \[ h h_1 \cdots h_{k-1} = \frac{\psi(2^m)}{2^{km}}, \qquad H h_1 \cdots h_{k-1} = \frac{\psi(2^{m-1})}{2^{k(m-1)}} \] and \begin{align*} \fB_n &= \bigsqcup_{h_1,\ldots, h_{k-1}} A_n^\circ(h_{i_1}, \ldots, h_{i_{k-1}}, h), \\ \fC_n &= \bigcup_{h_1,\ldots, h_{k-1}} A_n(h_{i_1}, \ldots, h_{i_{k-1}}, 2^{k-1}H), \end{align*} we then have \[ \fB_n \subseteq A_n^\times \subseteq \fC_n. \] \subsection{Admissible set systems and admissible function systems} These will be used for the VTP, which we require for the divergence theory. For each $m \in \bN$, we choose an index set $I_m$ and a threshold \[ \fd_m \gg \frac1{n^{1+(1+\tau)/k}} \qquad (n \in D_m). \] For each $i \in I_m$ and each $j \in \{1,2,\ldots,k \}$, we choose $d_{i,j} = d_{i,j,m}$ such that \[ n^{-\tau} \fd_m \ll d_{i,j} \ll n^{\tau} \fd_m \qquad (n \in D_m), \] and such that $\prod_{j \le k} d_{i,j}$ does not depend on $i$. For each $n \in D_m$, we introduce a disjoint union \[ A_n = \bigsqcup_{i \in I_m} A_n^\circ(d_{i,1},\ldots,d_{i,k}). \] An \emph{admissible set system} is a sequence $(A_n)_{n=1}^\infty$ obtained in this way. For the Khinchin theory, we choose $I_m$ to be a singleton for each $m$ and $d_{i,j}$ all equal. \begin{ex} [Simultaneous approximation] \label{SimOK} We claim that for the divergent Khinchin theory, we may assume that \[ \psi(n) \ge \psi_L(n) := (n (\log n)^2)^{-1/k} \qquad (n \ge 2). \] To see this, put $\tilde \psi = \max \{ \psi, \psi_L \}$ and note that \[ W_k(\tilde \psi; \by) = W_k(\psi;\by) \cup W_k(\psi_L;\by). \] As $W_k(\psi_L;\by)$ has zero measure, by the convergence theory, the set $W_k(\tilde \psi;\by)$ has the same measure as $W_k(\psi;\by)$, and we may therefore replace $\psi$ by $\tilde \psi$. For $n \in D_m$, we can then use $A_n^\circ(d,\ldots,d)$ with \[ d \asymp \frac{\psi(2^m)}{2^m}. \] \end{ex} \bigskip For the Gallagher theory, we form an admissible set system as follows. 
\begin{ex} [Multiplicative approximation] \label{MultOK} We claim that for the divergent Gallagher theory we may assume that \[ \psi(n) \ge \psi_L(n) := \frac1{n (\log n)^{k+1}} \qquad (n \ge 2). \] To see this, put $\tilde \psi = \max \{ \psi, \psi_L \}$ and note that \[ W_k^\times(\tilde \psi; \by) = W_k^\times(\psi; \by) \cup W_k^\times(\psi_L; \by). \] As $W_k^\times(\psi_L; \by)$ has zero measure, by the convergence theory, the set $W_k^\times(\tilde \psi; \by)$ has the same measure as $W_k^\times(\psi; \by)$, and we may therefore replace $\psi$ by $\tilde \psi$. For $n \in D_m$, we can then pack $A_n^\times$ with roughly $m^{k-1}$ many sets $$A_n^\circ(d_{i,1},\ldots,d_{i,k})$$ as above, forming an admissible set system with \[ \prod_{j \le k} d_{i,j} \asymp \frac{\psi(2^m)}{2^{km}}. \] \end{ex} \begin{rem} \label{AssumeUpper} For the divergent Khinchin theory, we may also assume that \[ \psi(2^{m}) \ll 2^{-m/k} \qquad (m \in \bN). \] Indeed, if $\psi(2^{m}) > 2^{-m/k}$ for infinitely many $m$, then we may replace $\psi$ by a non-increasing function $\psi': \bN \to [0,1)$ such that \[ \psi' \le \psi, \qquad \psi'(2^{m}) \ll 2^{-m/k} \quad (m \in \bN), \qquad \sum_{n=1}^\infty \psi'(n)^k = \infty. \] To construct $\psi'$, one can define \[ \psi'(n) = 2^{-m/k} \qquad (n \in D_m, \quad \psi(2^m) > 2^{-m/k}) \] and interpolate between these ranges. Similarly, for the divergent Gallagher theory, we may assume that \[ \psi(2^{m}) \ll 2^{-m} \qquad (m \in \bN). \] These observations simplify matters, but our approach is robust enough to succeed even without them. \end{rem} \bigskip Next, we introduce a functional analogue. Let $(A_n)_{n=1}^\infty$ be an admissible set system. For $n \in D_m$, let \[ f_n(\bx) = \sum_{i \in I_m} A_n^*(\bd^{(i)};\bx), \] where \[ \bd^{(i)} = (d_{i,1}, \ldots, d_{i,k}), \] and $A_n^*(\bd;\bx)$ is given by \eqref{AnStar}. Furthermore, suppose the bump function $w$, from \S \ref{sec: rectangle}, is supported on \[ \{ x \in \bR: 1/2 \le |x| \le 1 \}. \] An \emph{admissible function system} is a sequence $(f_n)_{n=1}^\infty$ constructed in this way. \subsection{The Lebesgue second moment} In the Gallagher setting, this was estimated in our previous article, see \cite[Remark 6.3, Theorem~6.5, and Lemma 6.6]{CY}. \begin{thm} [Chow--Yu \cite{CY}] \label{LebesgueEstimate} Let $\psi: \bN \to [0,1)$ be non-increasing, and let $k \ge 2$ be an integer. To any non-zero bump function $\bR \to [0,1]$ supported on $[-2,2]$, we may associate a sequence of smooth functions $(f_n)_{n=1}^\infty$ in $\bR^k$ as in Example \ref{SimOK} or \ref{MultOK}. There exists such a bump function $w$ supported on $\{ x \in \bR: 1/2 \le |x| \le 1 \}$ such that, for some $C = C(k,w) > 1$ arbitrarily close to 1, we have \eqref{LebesgueQIA}. \end{thm} \section{Expectation transference principle} \label{sec: ETP} In this section, we establish some ETP results. For notational convenience, we write \[ \tilde E_N(\mu) = E_{2N-1}(\mu) - E_{N-1}(\mu) - E_{2N-1}(\lam) + E_{N-1}(\lam). \] \begin{thm} [Dyadic split ETP] \label{thm: ETP} Let $N \in \bN$ and $\bd \in (0,1/2]^k$. For each $n \in [N,2N)$, we associate $A_n^*$ to $\bd$ as in \S \ref{sec: rectangle}. Let $\mu = \mu_1 \times \dots \times \mu_k$ be a product of probability measures on $[0,1]$ with $\dim_{\ell^1}(\mu_j) = \kappa_j$ for each $j.$ Then, for any $\eps > 0$, \[ \tilde E_N(\mu) \ll_\eps N^{k+\eps} \prod_{j=1}^{k} d_j^{\kap_j - \eps}. 
\] \end{thm} \begin{proof} By Parseval, \begin{align*} \tilde E_N(\mu) &= \sum_{N \le n < 2N} \int (A_n^*(\bx)-\lam(A_n^*))\d\mu(\bx) = \sum_{N \le n < 2N} \sum_{\bxi \neq \mathbf{0}} \widehat{A_n^*}(\bxi) \hat{\mu} (-\bxi). \end{align*} To proceed, the key intuition is that $\widehat{A_n^*}$ is essentially supported on the multiples of $n$ in the box \[ \cR^\vee_N := [-1/d_1, 1/d_1] \times \cdots \times [-1/d_k, 1/d_k]. \] Indeed, as $A_n^*$ is $n^{-1}$-periodic, its Fourier coefficients vanish away from multiples of $n$. Moreover, we can see from \eqref{decay2} that if $u \in \bN$ and $\bxi \in 2^u \cR^\vee_N \setminus 2^{u-1} \cR^\vee_N$ then \[ \widehat{A_n^*}(\bxi) \ll_L 2^{-uL} N^k d_1 \cdots d_k, \] for any constant $L \ge 1$. Thus, \begin{align*} \frac{\tilde E_N(\mu)}{N^k d_1\cdots d_k} \ll \sum_{\substack{n \mid \bxi \in \cR^\vee_N \setminus \{ \bzero \} \\ N \le n < 2N}} |\hat{\mu}(\bxi)| + \sum_{u=1}^\infty 2^{-uL} \sum_{\substack{n \mid \bxi \in 2^u \cR^\vee_N \setminus 2^{u-1} \cR^\vee_N \\ N \le n < 2N}} |\hat{\mu}(\bxi)|. \end{align*} Next, we exploit the product structure of $\mu$. Observe that there are $O(N^\epsilon)$ many integers $n$ such that $n \mid \bxi$, whence \begin{align*} \sum_{\substack{n \mid \bxi \in \cR^\vee_N \setminus \{ \bzero \} \\ N \le n < 2N}} |\hat{\mu}(\bxi)| &\ll N^\epsilon\prod_{j \le k} \left(\frac{1}{d_j}\right)^{1-\kap_j+\epsilon}. \end{align*} For $u \in \bN$, \begin{align*} \sum_{\substack{n\mid \bxi \in 2^u \cR^\vee_N\setminus 2^{u-1} \cR^\vee_N \\ N \le n < 2N}} |\hat{\mu}(\bxi)| &\ll (2^u N)^\epsilon \prod_{j \le k} \left(\frac{2^u}{d_j}\right)^{1-\kap_j + \epsilon} \\ &\le 2^{Ku} N^{\epsilon} \prod_{j \le k} \left(\frac{1}{d_j} \right)^{1 - \kap_j + \eps}, \end{align*} where $K = (k+1)(1+\eps)$. Choosing $L > K$ then gives \begin{align*} \label{eqn: last step} \frac{\tilde E_N(\mu)}{N^k d_1\cdots d_k} \ll N^{\epsilon} \prod_{j=1}^{k} \left(\frac{1}{d_j} \right)^{1 - \kap_j + \eps}. \end{align*} \end{proof} We also need an ETP result for general measures, rather than product measures. For this, we require a strong assumption on $\bd.$ \begin{thm} [Dyadic general ETP] \label{thm: ETP for general measure} Let $N \in \bN$ and $0 < d \le 1/2$. For each integer $n \in [N,2N)$, we associate $A_n^*$ to $\bd = (d_1, \ldots, d_k)$ as in \S \ref{sec: rectangle}, where $\tau > 0$ is fixed and \[ N^{- \tau} d \ll d_j \ll N^\tau d. \] Let $\mu$ be a Borel probability measure on $[0,1]^k$ with $\dim_{\ell^1}(\mu) = \kappa$. Then, for any fixed $\epsilon>0,$ \[ \tilde E_N(\mu) \ll N^{k+(2k+\eps)\tau+\eps} d^{\kappa - \eps}. \] \end{thm} \begin{proof} Imitating the proof of Theorem \ref{thm: ETP} gives \begin{align*} &\frac{\tilde E_N(\mu)}{N^k d_1 \cdots d_k} \ll \sum_{\substack{n \mid \bxi \in \cR^\vee_N \setminus \{ \bzero \} \\ N \le n < 2N}} |\hat{\mu}(\bxi)| + \sum_{u=1}^\infty 2^{-uL} \sum_{\substack{n \mid \bxi \in 2^u \cR^\vee_N \setminus 2^{u-1} \cR^\vee_N \\ N \le n < 2N}} |\hat{\mu}(\bxi)|, \end{align*} where $ R_N^\vee = [-1/d_1,1/d_1] \times \cdots \times [-1/d_k, 1/d_k]. $ A similar argument to the remainder of that proof then delivers \[ \frac{\tilde E_N(\mu)}{N^{(1+\tau)k} d^k} \ll N^\eps (N^\tau/d)^{k-\kap+\eps}. \] \end{proof} \section{Variance transference principle} \label{sec: VTP} In this section, we establish the VTP. \begin{thm} \label{thm: VTP} Let $k \in \bN$, let $\mu$ be a probability measure on $[0,1]^k$ satisfying \eqref{MainAssumption}, and let $(f_n)_{n=1}^\infty$ be an admissible function system. 
Then VTP holds for $(f_n)_n$ and $\mu$. \end{thm} \begin{proof} We wish to estimate \[ V_N(\mu) = \int_{[0,1]^k} \left( \sum_{n \le N} (f_n(\bx) - \lam(f_n)) \right)^2 \d \mu(\bx). \] As $f_n$ is a Schwartz function, we may replace it by its Fourier series to obtain \begin{align*} \label{eq: VTP fourier identity} V_N(\mu) &= \int_{[0,1]^k} \left( \sum_{n \le N} \sum_{\bxi \neq \bzero} \hat{f}_n(\bxi) e(\bxi \cdot \bx) \right)^2 \d\mu(\bx) \\ &= \int_{[0,1]^k} \sum_{n,n' \le N} \sum_{\bxi,\bxi' \neq \mathbf{0}} \hat{f}_{n}(\bxi) \hat{f}_{n'}(\bxi') e((\bxi+\bxi') \cdot \bx) \d\mu(\bx)\\ &= \sum_{n,n'} \sum_{\bxi,\bxi' \neq \mathbf{0}, \bxi+\bxi' = \mathbf{0}} \hat{f}_{n}(\bxi) \hat{f}_{n'}(\bxi')\\ & \qquad + \sum_{n,n'} \sum_{\bxi,\bxi',\bxi+\bxi'\neq\mathbf{0}}\hat{f}_{n}(\bxi)\hat{f}_{n'}(\xi') \hat{\mu}(\bxi+\bxi') \\ &= V_N(\lam) + \sum_{n,n'} \sum_{\bxi,\bxi',\bxi+\bxi' \neq \mathbf{0}}\hat{f}_{n}(\bxi) \hat{f}_{n'}(\bxi') \hat{\mu}(\bxi+\bxi'). \end{align*} The final equality is gotten by running the same calculation for $V_N(\lam)$. It is crucial that all of the Fourier series converge absolutely, which is the case as our functions are Schwartz. We are left with the contribution of the non-zero coefficients: \[ V_N(\mu) = V_N(\lam) + E, \] where \[ E = \sum_{n,n' \le N} \sum_{\bxi,\bxi',\bxi+\bxi' \neq \mathbf{0}} \hat{f}_{n}(\bxi) \hat{f}_{n'}(\bxi') \hat{\mu}(\bxi+\bxi'). \] We sum over $n,n'$ dyadically, noting that \[ |E| \le \sum_{m,m' \le \frac{\log N}{\log 2}} \sum_{\substack{n\in D_m \\ n'\in D_{m'}}} \sum_{\bxi, \bxi', \bxi+\bxi'\neq \mathbf{0}} |\hat{f}_{n}(\bxi)\hat{f}_{n'}(\bxi')\hat{\mu}(\bxi+\bxi')|, \] where $D_m=[2^m,2^{m+1}).$ We normalise the Fourier coefficients by dividing by \[ \lam(f_n) = \sum_{i\in I_m} n^k \prod_{j \le k} d_{i,j} \asymp 2^{km} \sum_{i\in I_m} \prod_{j \le k} d_{i,j} =: w_m \] and $ \lam(f_{n'}) \asymp w_{m'}. $ Writing \[ a_n(\bxi) = \frac{\hat{f}_n(\bxi)} {\lam(f_n)}, \qquad a_{n'}(\bxi) = \frac{\hat{f}_{n'}(\bxi)} {\lam(f_{n'})}, \] we now have \begin{align*} E &\ll \sum_{m,m' \le \frac{\log N}{\log 2}} w_m w_{m'} \sum_{\substack{n\in D_m \\ n'\in D_{m'}}} \sum_{\bxi, \bxi', \bxi + \bxi'\neq \bzero} |a_n(\bxi) a_{n'}(\bxi') \hat{\mu}(\bxi+\bxi')| \\ &= \sum_{m,m' \le \frac{\log N}{\log 2}} w_m w_{m'} S(m,m'), \end{align*} where \[ S(m,m') = \sum_{\substack{n \in D_m \\ n'\in D_{m'}}} \sum_{\bxi+\bxi'\neq \bzero} |a_n(\bxi) a_{n'}(\bxi') \hat{\mu}(\bxi+\bxi')|. \] To proceed, we examine the coefficients $a_n,a_{n'},$ which satisfy \[ |a_n|, |a_{n'}| \le 1. \] The support of $f_n$ is essentially a union of rectangles with different side lengths but the same volume, and its Fourier series is the sum of the Fourier series of those rectangles. In what follows, we may assume that $I_m$ is non-empty, since otherwise it will not contribute anything to the sums. We write \[ a_n = \sum_{i\in I_m} a_{n,i}, \] where $ \lam(f_n) a_{n,i} $ is the Fourier transform of $\bx \mapsto A_n^*(\bd^{(i)};\bx)$. Note that \[ \lam(f_n) \asymp w_m = (\# I_m) 2^{km} \prod_{j \le k} d_{i,j} \] for all $i \in I_m$, whence \[ a_{n,i} \ll (\#I_m)^{-1}. \] We now see that \[ S(m,m') \ll \sum_{\substack{n\in D_m \\ n'\in D_{m'}}} \sum_{\substack{i \in I_m\\ i' \in I_{m'}}} \sum_{\bxi, \bxi', \bxi+\bxi'\neq \bzero} |a_{n,i}(\bxi)a_{n',i'}(\bxi') \hat{\mu}(\bxi+\bxi')|. \] The function $a_{n,i}$ is essentially supported on the multiples of $n$ in \[ \cR_{m,i}^\vee := [-1/d_{i,1},1/d_{i,1}] \times \cdots \times [-1/d_{i,k},1/d_{i,k}]. 
\] Indeed, we have the following estimate \cite[Lemma 5.3]{CY}.
\begin{lem} Let \[ n \in D_m, \quad i \in I_m, \quad L,t \ge 1, \quad \bxi \in \bZ^k \setminus t \cR_{m,i}^\vee. \] Then \[ a_{n,i}(\bxi) \ll_L t^{-L} (\# I_m)^{-1}. \] \end{lem}
\noindent Similarly, the function $a_{n',i'}$ is essentially supported on the multiples of $n'$ in $\cR^\vee_{m',i'}$. The upshot is that \[ S(m,m') \ll_L \sum_{u,u' = 0}^\infty S_{u,u'}(m,m'), \] where \[ S_{u,u'}(m,m') = \frac{2^{-L(u+u')}}{\#I_m \# I_{m'}} \sum_{\substack{i \in I_m \\ i' \in I_{m'}}} \sum_{\substack{\bxi, \bxi', \bxi + \bxi' \ne \bzero \\ n \mid \bxi \in 2^u \cR_{m,i}^\vee \\ n' \mid \bxi' \in 2^{u'} \cR_{m',i'}^\vee}} |\hat \mu(\bxi + \bxi')|. \] Here $L \ge 1$ is a constant that we will describe later. Write \[ T = \begin{pmatrix} t_1 & t_1' \\ t_2 & t_2' \\ \vdots & \vdots \\ t_k & t_k' \end{pmatrix}, \qquad \bn = \begin{pmatrix} n \\ n' \end{pmatrix}, \qquad \bx = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \] For $\bzero \ne \bx \in \bZ^k$, let $N(\bx) = N(\bx; m, m', u, u', i, i')$ count solutions $(\bt, \bt', \bn) \in \bZ^{2k+2}$ to $T \bn = \bx$ such that \begin{align*} &n \in D_m, \qquad n' \in D_{m'}, \qquad \bt, \bt' \ne \bzero, \\ &|t_j n| \le 2^u/d_{i,j,m}, \quad |t'_j n'| \le 2^{u'}/d_{i',j,m'} \qquad (1 \le j \le k). \end{align*} Then \begin{align*} S_{u,u'}(m,m') = \frac{2^{-L(u+u')}}{\# I_m \# I_{m'}} \sum_{\substack{i\in I_m \\ i'\in I_{m'}}} \sum_{\bzero \ne \bx \in \mathbb{Z}^k} |\hat{\mu}(\bx)| N(\bx). \end{align*} This is a finite sum, since $N(\bx) = 0$ for large $\bx$. Indeed, if $N(\bx) \ge 1$ then \begin{equation} \label{xbound} |x_j| \le \frac{2^u}{d_{i,j,m}} + \frac{2^{u'}}{d_{i',j,m'}} \ll \frac{2^{u + \tau m}}{\fd_m} + \frac{2^{u' + \tau m'}}{\fd_{m'}}. \end{equation} For $j = 1,2,\ldots, k$, we can bound the number of possibilities for $t_j, t_j'$ by \[ r_j := \frac{2^{u+2-m}}{d_{i,j,m}} + 1, \qquad r_j' := \frac{2^{u'+2-m'}}{d_{i',j,m'}} + 1, \] respectively. Note that \[ r_j \ll \frac{2^{u + (\tau - 1) m}}{\fd_m}, \qquad r_j' \ll \frac{2^{u' + (\tau-1)m'}}{\fd_{m'}}. \] To estimate $N(\bx)$, we decompose $N(\bx) = N_1(\bx) + N_2(\bx)$, where $N_1(\bx)$ counts solutions with $\bt, \bt'$ parallel, and $N_2(\bx)$ counts the remaining solutions. Without loss of generality, we assume that $m \le m'$.
\subsection{The partial-rank case} Since $\bx \ne \bzero$ we may, by symmetry, assume that $x_1 \ne 0$. As $n \bt + n' \bt' = \bx$ for any solution counted by $N_1(\bx)$, the vectors $\bt$ and $\bt'$ must be multiples of $\bx$, with $t_1 t_1' \ne 0$. The count $N_1(\bx)$ is therefore bounded by the number of solutions to \[ t_1 n + t_1' n' = x_1. \] Choosing the value of $t_1 n \notin \{ 0,x_1 \}$, then applying the divisor bound to determine $t_1, n, t_1', n'$, gives \[ N_1(\bx) \ll 2^{\tau m} \left( \frac{2^{u + \tau m}}{\fd_m} + \frac{2^{u' + \tau m'}}{\fd_{m'}} \right). \]
\subsection{The full-rank case} Since $\bt,\bt'$ are not parallel we may, by symmetry, assume that the first two rows of $T$ are linearly independent. As $(t_1, t_2) \ne (0,0)$, we may also assume that $t_1 \ne 0$. We begin by choosing $t_1 \ne 0$, as well as $t'_1$ and $t_2$. Using that \[ t_1 n + t'_1 n' = x_1, \qquad t_2 n + t'_2 n' = x_2, \] we see that \[ (t_2 t'_1 - t_1 t'_2) n' = t_2 x_1 - t_1 x_2. \] The left hand side is non-zero, so \[ n' \mid t_2 x_1 - t_1 x_2 \neq 0.
\] Thus, by the divisor bound, the number of possibilities for $n'$ is at most $O(2^{(m+m')\tau}).$ After choosing $n'$, we can determine at most one possibility for $n$ via \[ t_1 n + t'_1 n' = x_1, \] and at most one possibility for $t'_2$ via \[ t_2 n + t'_2 n' = x_2. \] Choosing $t_3,\dots,t_k$ will then determine $t'_3,\dots,t'_k$ in at most one way. The upshot is that \[ N_2(\bx) \ll r_1 r_1' r_2 r_3 \cdots r_k 2^{(m+m')\tau} \ll \left(\frac{2^{u + (\tau - 1) m}}{\fd_m} \right)^k \frac{2^{u' + (\tau-1)m'}}{\fd_{m'}} 2^{(m+m')\tau}. \] \subsection{VTP conclusion} Combining our bounds on $N_1(\bx)$ and $N_2(\bx)$ with \eqref{xbound} furnishes \begin{align*} &\sum_{\bx \ne \bzero} |\hat \mu(\bx)| N(\bx) \ll 2^{O(\tau)m'} \left( \frac{2^{u}}{\fd_m} + \frac{2^{u'}}{\fd_{m'}} \right)^{k - \dim_{\ell^1}(\mu)} \left(\frac{2^{u - m}}{\fd_m} \right)^k \frac{2^{u'-m'}}{\fd_{m'}}. \end{align*} Now, taking $L \ge 1$ sufficiently large, \[ S(m,m') \ll 2^{O(\tau)m'} \left( \frac{1}{\fd_m} + \frac{1}{\fd_{m'}} \right)^{k - \dim_{\ell^1}(\mu)} \left(\frac{2^{- m}}{\fd_m} \right)^k \frac{2^{-m'}}{\fd_{m'}}. \] Therefore \begin{align*} E &\ll \sum_{m, m' \le \frac{\log N}{\log 2}} w_m w_{m'} \\ &\qquad \cdot \left(2^{O(\tau)m'} \left( \frac{1}{\fd_m} + \frac{1}{\fd_{m'}} \right)^{k - \dim_{\ell^1}(\mu)} \left(\frac{2^{- m}}{\fd_m} \right)^k \frac{2^{-m'}}{\fd_{m'}} \right) \\ &\ll \sum_{m, m' \le \frac{\log N}{\log 2}} w_m w_{m'} 2^{O(\tau)m'} (2^{m'(k+1)/k})^{k - \dim_{\ell^1}(\mu)} 2^m 2^{m'/k}. \end{align*} Since \[ E_N(\lam) \asymp \sum_{m \le \frac{\log N}{\log 2}} 2^m w_m \] whenever $N$ is a power of two, and since $\tau$ is small, it remains to show that \[ (1 + 1/k) (k - \dim_{\ell^1}(\mu)) + 1/k < 1. \] This is nothing more than a rearrangement of \eqref{MainAssumption}. \end{proof} \section{Completing the proofs}\label{sec: final proofs} In this section, we establish Theorems \ref{thm: convergence}, \ref{thm: divergence}, \ref{HausdorffConvThm}, \ref{thm: Rational Counting}, \ref{OtherBound}, and \ref{thm: Intrinsic App}. \subsection{Proof of Theorem \ref{thm: convergence}} \label{ConvergentProofs} For the convergent Khinchin statement, we may assume that \[ \psi(n) \ge (n \log^2 n)^{-1/k} \qquad (n \ge 2). \] Indeed, we may replace $\psi$ by $\tilde \psi$, where $\tilde \psi(n) = \psi(n) + (n \log^2 n)^{-1/k}$ for $n \ge 2$, since $\psi \le \tilde \psi$ and $\sum_n \tilde \psi(n)^k < \infty$. We use $f_n(\bx) = A_n^*(\bx; \bd)$ from \eqref{AnStar} where, for $n \in D_m$, \[ \bd = (d, \ldots, d), \qquad d = \psi(2^{m-1})/2^{m-1}. \] Then $ \lam(f_n) \ll \psi(2^{m-1})^k $ so, by the Cauchy condensation test and the convergence of $\sum_{n=1}^\infty \psi(n)^k$, we have \eqref{LebesgueConvergence}. Moreover, we have $f_n \gg 1$ on \[ \{ \bx \in [0,1)^k: \max \{ \| n x_1 - y_1 \|, \ldots, \| n x_k - y_k \| \} < \psi(n) \}. \] By Lemma \ref{GenConv}, it remains to confirm the ETP. For any $\eps > 0$, Theorem \ref{thm: ETP for general measure} gives \[ \sum_{n \in D_m} (\mu(f_n) -\lam(f_n)) \ll_\eps (2^{m-1})^{k+\eps} (\psi(2^{m-1})/2^{m-1})^{\kap - \eps}. \] For $N = 2^M - 1$, we thus have \begin{align*} E_N(\mu) - E_N(\lam) &\ll \sum_{m \le M-1} (2^{m-1})^{k - \kap + 2 \eps} \psi(2^{m-1})^{\kap - \eps}, \end{align*} where $\kap = \dim_{\ell^1}(\mu)$. On the other hand, \[ E_N(\lam) \gg \sum_{n \le N} \psi(n)^k \ge \sum_{m \le M-1} 2^{m-1} \psi(2^{m-1})^k. 
\] We see from \eqref{WeakAssumption} that if $\eps$ is sufficiently small then \[ n^{k - \kap - 1 + 2 \eps} = o((n \log^2 n)^{(\kap - \eps -k)/k}) \] and hence, since $\psi(n) \ge (n \log^2 n)^{-1/k}$, \[ n^{k - \kap - 1 + 2 \eps} = o(\psi(n)^{k - \kap + \eps}). \] Applying this to $n = 2^{m-1}$, we now have \begin{align*} E_N(\mu) - E_N(\lam) &\ll \sum_{m \le M - 1} o_{m \to \infty}(2^{m-1} \psi(2^{m-1})^k) \\ &= O(1) + o (E_N(\lam)), \end{align*} as $N = 2^M - 1 \to \infty$. This verifies the ETP and completes the proof of the Khinchin assertion. \bigskip
For the convergent Gallagher statement, we may assume that \[ \psi(n) \ge (n \log^{k+1} n)^{-1} \qquad (n \ge 2). \] Indeed, we may replace $\psi$ by $\tilde \psi$, where $\tilde \psi(n) = \psi(n) + (n \log^{k+1} n)^{-1}$ for $n \ge 2$, since $\psi \le \tilde \psi$ and $\sum_n \tilde \psi(n) \log^{k-1} n < \infty$. We deploy the rectangular decomposition $\fC_n$ from \S \ref{sec: rectangle}. Our smooth approximation then satisfies $f_n \gg 1$ on $A_n^\times$ and, since \[ \lam(f_n) \ll \psi(2^{m-1}) \log^{k-1}(1/\psi(2^{m-1})) \ll \psi(2^{m-1}) m^{k-1} \qquad (n \in D_m), \] we have \eqref{LebesgueConvergence} by the Cauchy condensation test and the convergence of $\sum_{n=1}^\infty \psi(n) (\log n)^{k-1}$. By Lemma \ref{GenConv}, it remains to confirm the ETP. Note that $f_n$ is a linear combination of $\asymp \log^{k-1}(1/\psi(n))$ many functions $A_n^* = A_n^*(\bd; \cdot)$, where \[ \frac{\psi(2^{m-1})}{2^{m-1}} \le d_j \le \frac1{2^{m-1}} \quad(1 \le j \le k), \qquad d_1 \cdots d_k \asymp \frac{\psi(2^{m-1})}{2^{k(m-1)}}. \] It therefore suffices to prove that \[ \sum_{n \in D_m} (\mu(A_n^*) - \lam(A_n^*)) = o(2^{m-1} m^{k-1} \psi(2^{m-1})). \] Recall that $\mu = \mu_1 \times \cdots \times \mu_k$, where each $\mu_j$ is a missing-digit measure. By \eqref{SplitAssumption}, there exists $\kap$ in the range \[ \min \{ \dim_{\ell^1}(\mu_j): 1 \le j \le k \} > \kap > 1 - \frac{1}{k+1}. \] For any fixed $\eps > 0$, Theorem \ref{thm: ETP} gives \begin{align*} \sum_{n \in D_m} (\mu(A_n^*) -\lam(A_n^*)) &\ll (2^{m-1})^{k+\eps} \sum_{i \in I_m} (d_1 \cdots d_k)^{\kap} \\ &\ll (2^{m-1})^{k+\eps} \left( \frac{\psi(2^{m-1})}{2^{k(m-1)}} \right)^{\kap}. \end{align*} Writing $n=2^{m-1}$, it remains to prove that if $\eps$ is sufficiently small then \[ n^{k + \eps} (\psi(n)/n^k)^{\kap} = o(n \psi(n)). \] This is equivalent to \[ n^{k(1-\kap) - 1 + \eps} = o(\psi(n)^{1-\kap}), \] which follows from the inequalities \[ \psi(n) \ge \frac1{n^{1+\eps}}, \qquad 1 - \kap < \frac1{k+1} \] and the fact that $\eps$ is small. This completes the proof of Theorem \ref{thm: convergence}.
\subsection{Proof of Theorem \ref{thm: divergence}} For the divergent Khinchin statement, we use an admissible function system $(f_n)_{n=1}^\infty$ built from the admissible set system in Example \ref{SimOK}. Then $\lam(f_n) \gg \psi(2^m)^k$ so, by the Cauchy condensation test and the divergence of $\sum_{n=1}^\infty \psi(n)^k$, we have \eqref{LebesgueDivergence}. For any $C > 1$, we may apply Theorem \ref{LebesgueEstimate} to obtain \eqref{LebesgueQIA}. We have the ETP, in the same way as for the convergent Khinchin statement (see \S \ref{ConvergentProofs}). We also have the VTP, by Theorem \ref{thm: VTP}. Lemma \ref{GenDiv} now completes the proof of the divergent Khinchin assertion. \bigskip
For the divergent Gallagher statement, we use an admissible function system $(f_n)_{n=1}^\infty$ built from the admissible set system in Example \ref{MultOK}.
Then $\lam(f_n) \gg \psi(2^m) m^{k-1}$ so, by the Cauchy condensation test and the divergence of the series $\sum_{n=1}^\infty \psi(n) (\log n)^{k-1}$, we have \eqref{LebesgueDivergence}. For any $C > 1$, we may apply Theorem \ref{LebesgueEstimate} to obtain \eqref{LebesgueQIA}. We also have the VTP, by Theorem \ref{thm: VTP}. By Lemma \ref{GenDiv}, it remains to confirm the ETP. By \eqref{MainAssumption}, there exists $\kap$ in the range \[ \dim_{\ell^1}(\mu) > \kap > k - \frac{k-1}{k+1}. \] For any fixed $\eps > 0$, Theorem \ref{thm: ETP for general measure} gives \begin{align*} \sum_{n \in D_m} (\mu(A_n^*) -\lam(A_n^*)) &\ll (2^{m-1})^{k+O(\tau+\eps)} \left( \frac{\psi(2^{m-1})^{1/k}}{2^{(m-1)}} \right)^{\kap}. \end{align*} Writing $n=2^{m-1}$, it suffices to prove that if $\eps$ is sufficiently small then \[ n^{k + \eps} (\psi(n)^{1/k}/n)^{\kap} = o(n \psi(n)). \] This is equivalent to \[ n^{k - 1 - \kap + \eps} = o(\psi(n)^{(k-\kap)/k}), \] which follows from the inequalities \[ \psi(n) \ge \frac1{n^{1+\eps}}, \qquad \kap > \frac{k^2 + 1}{k+1} > \frac{k^2}{k+1} \] and the fact that $\eps$ is small. This completes the proof of Theorem \ref{thm: divergence}. \subsection{Proof of Theorem \ref{HausdorffConvThm}} The proof is similar to that of the Khinchin part of Theorem \ref{thm: convergence}, but we now exploit the quantitative error term in Theorem \ref{thm: ETP for general measure}. Let $\mu$ be the Cantor--Lebesgue measure of $\cK$. Note that $\mu$ is $\kap_2$-regular, where $\kap_2 = \Haus(\cK)$. Our setup is similar to that of the Khinchin part of Theorem \ref{thm: convergence}, with $\psi = \psi_t$. For $n \in D_m$, the smooth function $f_n$ is supported on a union of balls of radius $2 \psi(2^{m-1})/2^{m-1}$, and $f_n \gg 1$ on a set $\cS_n$ containing \[ A_n := \{ \bx \in [0,1)^k: \max \{ \| n x_1 - y_1 \|, \ldots, \| n x_k - y_k \| \} < \psi(n) \}. \] Moreover, if $\bz \in A_n$ then $B_r(\bz) \subseteq \cS_n$, where \[ r = r_m = \psi(2^{m-1})/2^m. \] As $\mu$ is $\kap_2$-regular, we now see that $\bigcup_{n \in D_m} A_n \cap \cK$ is covered by a collection $\cB_m$ of hypercubes $B$ of diameter $|B| \asymp r$ centred in $\cK$ such that \[ \sum_{B \in \cB_m} \mu(B) \ll \sum_{n \in D_m} \mu(\cS_n) \ll \sum_{n \in D_m} \mu(f_n). \] Thus, by Theorem \ref{thm: ETP for general measure} and the $\kap_2$-regularity of $\mu$, we have \begin{align*} \label{eqn: rational counting} r_m^{\kap_2} \# \cB_m \ll_\eps 2^{m-1} \psi(2^{m-1})^k + (2^{m-1})^{k + \eps} r_m^{\kap - \eps} \end{align*} for any constant $\eps > 0$, where $\kap = \dim_{\ell^1}(\mu)$. Now, for any fixed $s > 0$, \[ \sum_{B \in \cB_m} |B|^s \ll r_m^s \# \cB_m \ll r_m^{s-\kap_2} (2^{m-1} \psi(2^{m-1})^k + (2^{m-1})^{k + \eps} r_m^{\kap - \eps}). \] By Theorem \ref{LevesleyThm} and Lemma \ref{HausdorffCantelli}, it remains to show that \begin{equation} \label{FirstTermConverges} \sum_{m=1}^\infty 2^{m-1} r_m^{s-\kap_2} \psi(2^{m-1})^k < \infty \end{equation} and \begin{equation} \label{FirstTermDominates} (2^{m-1})^{k + \eps} r_m^{\kap - \eps} \ll 2^{m-1} \psi(2^{m-1})^k, \end{equation} whenever $\eps$ is sufficiently small and \[ \frac1k < t < \frac1k + \eps, \qquad s > \frac{k+1}{t+1} + \kap_2 - k. \] By the Cauchy condensation test, we have \eqref{FirstTermConverges} if and only if \[ \sum_{n=1}^\infty (\psi(n)/n)^{s-\kap_2} \psi(n)^k < \infty.
\] The latter indeed holds, since \[ (\psi(n)/n)^{s-\kap_2} \psi(n)^k = n^{(t+1)(\kap_2 - s)-tk} \] and \begin{align*} (t+1)(\kap_2 - s)-tk < (t+1) \left( k - \frac{k+1}{t+1} \right) - tk = -1. \end{align*} With $n = 2^{m-1}$, we can rewrite \eqref{FirstTermDominates} as \[ n^{k+\eps} (\psi(n)/n)^{\kap - \eps} \ll n \psi(n)^k, \] which is equivalent to \[ \psi(n)^{k - \kap + \eps} \gg n^{k-1+2\eps - \kap}. \] By \eqref{WeakAssumption}, the latter holds for any sufficiently small $\eps$, since $\psi(n) = n^{-t}$ and \[ \frac{\kap - k}{k} > k - 1 - \kap. \] This confirms \eqref{FirstTermDominates} and completes the proof of Theorem \ref{HausdorffConvThm}. \subsection{Proof of Theorem \ref{thm: Rational Counting}} \label{RationalCountingProof} Let us write \[ \kap = \dim_{\ell^1}(\mu), \qquad \kap_2 = \Haus(\cK). \] For the upper bound we use the smooth function $f_n(\bx) = A_n^*(\bx; \bd)$ from \eqref{AnStar}, where \[ \bd = (\del/N, \ldots, \del/N), \qquad \by = \bzero, \] for each $n \in [N,2N)$ and each $N \in \bN$ such that $N \le Q$ and $Q/N$ is a power of two. By Theorem \ref{thm: ETP for general measure}, if $\eps > 0$ then \[ \sum_{N \le n < 2N} \mu(f_n) \ll_\eps \del^k N + N^{k+\eps} (\del/N)^{\kap - \eps}. \] To each $(\ba, n) \in \cN_\cK(2N-1, \del) \setminus \cN_\cK(N-1, \del)$ we may associate a hypercube $B$ of diameter $|B| \asymp \del/N$ centred in $\cK$, such that \[ \sum_B \mu(B) \ll \sum_{N \le n < 2N} \mu(f_n). \] Using the $\kap_2$-regularity of $\mu$, we thus obtain \begin{align*} &\# \cN_\cK(2N-1, \del) - \# \cN_\cK(N-1, \del) \\ & \qquad \ll (\del/N)^{-\kap_2} (\del^k N + N^{k+\eps} (\del/N)^{\kap - \eps}) \\ & \qquad = \del^{k-\kap_2} N^{\kap_2 + 1} + \del^{\kap-\kap_2-\eps} N^{\kap_2 + k - \kap + 2\eps}, \end{align*} and summing over $N$ yields \[ \# \cN_\cK(Q, \del) \ll \del^{k-\kap_2} Q^{\kap_2 + 1} + \del^{\kap-\kap_2-\eps} Q^{\kap_2 + k - \kap + 2\eps}. \] Now taking $\eps, \eta$ small, we see from \eqref{WeakAssumption} that if $\del \gg Q^{-\eta-1/k}$ then $\# \cN_\cK(Q, \del) \ll \del^{k-\kap_2} Q^{\kap_2 + 1}$. For the lower bound we use $f_n(\bx) = A_n^*(\bx; \bd)$ from \eqref{AnStar}, with bump function $w$ supported on $\{ 1/2 \le |x| \le 1 \}$, where \[ \bd = (\del/Q, \ldots, \del/Q), \qquad \by = \bzero, \] for each $n \in [N,2N)$. Here $N = \lfloor (Q-1)/2 \rfloor$. By Theorem \ref{thm: ETP for general measure} and \eqref{WeakAssumption}, if $\eps > 0$ is a sufficiently small constant then \[ \sum_{N \le n < 2N} \mu(f_n) \ge \eps \del^k N - \eps^{-1} N^{k+\eps} (\del/N)^{\kap - \eps} \gg \del^k Q. \] Finally, by \eqref{MeasureUpper}, \[ \# \cN_\cK(Q, \del) \gg (\del/Q)^{-\kap_2} \del^k Q = \del^{k - \kap_2} Q^{\kap_2 + 1}. \] This completes the proof of Theorem \ref{thm: Rational Counting}. \subsection{Proof of Theorem \ref{OtherBound}} We take $\cK = \cK_{b,\cD}$ with \[ \cD = \{ 0, 1, \ldots, b-1 \}^{k-1} \times \{ 0, 1, \ldots, a-1 \}, \] for some positive integers $a < b$ to be chosen. We write \[ \kap = \dim_{\ell^1}(\mu), \qquad \kap_2 = \Haus(\cK). \] We may assume that $\eta < 1/100$. By Theorem \ref{thm: l1 bound}(b), if $b$ is sufficiently large then \[ \kap_2 - \kap \le \frac{k \log (2 \log b)}{\log b} < \eta^3 =: \eps. \] By \eqref{HausdorffMissing}, if $b$ is sufficiently large then there exists an integer $a \in [1,b-1]$ such that \[ k - \kap_2 = 1 - \frac{\log a}{\log b} \in (\eta/2,\eta). \] Now that we have provided the construction, we proceed to estimate $\# \cN_{\cK}(Q,0)$. Let $\mu$ be the Cantor--Lebesgue measure of $\cK$. 
With $\del \in (0,1/2]$ chosen in due course, we use the smooth function $f_n(\bx) = A_n^*(\bx; \bd)$ from \eqref{AnStar}, where \[ \bd = (\del/N, \ldots, \del/N), \qquad \by = \bzero, \] for each $n \in [N,2N)$ and each $N \in \bN$ such that $N \le Q$ and $Q/N$ is a power of two. By Theorem \ref{thm: ETP for general measure}, \[ \sum_{N \le n < 2N} \mu(f_n) \ll \del^k N + N^{k+\eps} (\del/N)^{\kap - \eps}. \] Reasoning as in \S \ref{RationalCountingProof}, we obtain \begin{align*} \# \cN_{\cK}(Q, \del) &\ll \del^{k-\kap_2}Q^{\kap_2+1} + \del^{\kap-\kap_2-\eps} Q^{\kap_2 + k - \kap + 2 \eps}. \end{align*} The choice $\delta = Q^{-2/\eta}$ delivers $ \# \cN_{\cK}(Q, \del) \ll Q^{k+\eta}, $ which finishes the proof because $ \cN_\cK(Q,0)\leq \cN_\cK(Q,\delta). $ \subsection{Proof of Theorem \ref{thm: Intrinsic App}} Observe that \[ W_k^\cK(\tau) = \limsup_{m \to \infty} A_m, \] where \[ A_m = \bigcup_{\substack{\ba/n \in \cK \\ n \in D_m}} \prod_{j \le k} \left( \frac{a_j - n^{-\tau}}{n}, \frac{a_j + n^{-\tau}}{n} \right) \qquad (m \in \bN). \] Put $\kap_2 = \Haus(\cK)$. By Corollary \ref{CountingRationalPoints} and the $\kap_2$-regularity of $\mu$, there exists a constant $\eta > 0$ such that \[ \mu(A_m) \ll (2^m)^{(1+1/k) \kap_2 -\eta - (1 + \tau) \kap_2}. \] If $\tau$ is sufficiently close to $1/k$, then $ (1+1/k)\kap_2 -\eta - (1 + \tau)\kap_2 < 0, $ so \[ \sum_{m=1}^\infty \mu(A_m) < \infty. \] By the first Borel--Cantelli lemma, we now have $\mu(W_k^\cK(\tau)) = 0$. This completes the proof of Theorem \ref{thm: Intrinsic App}. \appendix \section{More about the Fourier \texorpdfstring{$\ell^1$}{l1} dimension} \label{sec: morel1} In this appendix, we further discuss Fourier $\ell^1$ dimension for missing-digit measures. \subsection{The Fourier \texorpdfstring{$\ell^1$}{l1} dimension in special cases} \label{sec: special} In a natural, wide class of special cases, we can rigorously estimate the Fourier $\ell^1$ dimension. Building on \cite[Theorem 2.6]{CVY}, the following estimates were demonstrated in \cite[Theorem 2.15]{YuManifold}. \begin{thm} [Yu \cite{YuManifold}] \label{thm: l1 bound} Let $k \in \bN$, and let $\mu_{b,\cD}$ be the Cantor--Lebesgue measure of a missing-digit fractal $\cK_{b,\cD}$ on $[0,1]^k$. \begin{enumerate}[(a)] \item Let $t \in \bN$. Then \[ \liminf_{\substack{b\to\infty \\ \#\cD \geq b^k-t}} \dim_{\ell^1}(\mu_{b,\cD}) = k. \] In particular, if $\eps > 0$ and $b$ is sufficiently large, then \[ \dim_{\ell^1}(\mu_{b,D}) > k-\epsilon \] holds whenever $\#\cD = b^k -1.$ \item Let $b\geq 4$ be an integer, and suppose $\cD$ has the form \[ \cD = [a_1,b_1] \times [a_2,b_2] \times \cdots \times [a_k,b_k] \cap \{0,\dots,b-1\}^k. \] Then \[ \dim_{\ell^1} (\mu_{b,\mathcal{D}}) \geq \Haus(\cK_{b,\cD}) -\frac{k\log (2\log b)}{\log b}. \] \end{enumerate} \end{thm} \subsection{Estimating the Fourier \texorpdfstring{$\ell^1$}{l1} dimension} \label{sec: compute} Here we provide a general algorithm to estimate the Fourier $\ell^1$ dimension of a missing-digit measure $\mu = \mu_{b, \bP}$ on $[0,1]^k.$ The strategy comes from \cite{CVY}. For $\bx \in \bR^k$, define \[ g(\bx) = \left| \sum_{\bd \in \{0,1,\ldots,b-1\}^k} \bP(\bd) e(\bd \cdot \bx) \right|. \] Since the Fourier transform of the distribution of a sum of independent random variables is the product of the Fourier transforms of the individual distributions, we have \begin{equation} \label{ProductFormula} |\hat \mu(\bx)| = \prod_{j=1}^\infty g(b^{-j} \bx). 
\end{equation} For $L \in \bN$ and $\bx \in \bR^k$, define \[ S_L(\bx) = \prod_{j=0}^{L-1} g(b^j \bx). \] \begin{thm} \label{l1lower} If $L \in \bN$ then \[ \dim_{\ell^1}(\mu) \ge \frac{-\log \left( \displaystyle \sup_\bx b^{-kL} \sum_{i_1, \ldots, i_k = 0}^{b^L-1} S_L\left(\bx + \frac{\bi}{b^L} \right) \right)}{\log (b^L)}. \] \end{thm} \begin{proof} We may assume without loss of generality that $L = 1$. Indeed, if $L \ge 2$, then we can replace $b$ by $b^L$ and $\bP$ by the distribution of \[ \sum_{j=0}^{L-1} \bd^{(j)} b^j, \] where the $\bd^{(j)}$ are independent random variables distributed according to $\bP$. This leaves $\mu$ unchanged, and replaces $S_L$ by $S_1 = g$. It now suffices to prove that \[ \sum_{\| \bxi \|_\infty \le Q} |\hat \mu(\bxi)| \ll Q^{k-s} \qquad (Q \in \bN), \] where \[ s = \frac{-\log \left( \displaystyle \sup_\bx b^{-k} \sum_{i_1, \ldots, i_k = 0}^{b-1} g\left(\bx + \frac{\bi}{b} \right) \right)}{\log b}. \] Furthermore, we may assume that $Q = b^N - 1$ for some $N \in \bN$. Thus, it remains to show that \[ \sum_{\| \bxi \|_\infty < b^N} |\hat \mu(\bxi)| \ll \left( \displaystyle \sup_\bx \sum_{i_1, \ldots, i_k = 0}^{b-1} g\left(\bx + \frac{\bi}{b} \right) \right)^N. \] Arguing by induction, we establish the sharper and more general inequality \[ \displaystyle \sup_\by \sum_{\| \bxi \|_\infty < b^N} |\hat \mu(\by + \bxi)| \le \left( \sup_\bx \sum_{i_1, \ldots, i_k = 0}^{b-1} g\left(\bx + \frac{\bi}{b} \right) \right)^N, \] for every integer $N \ge 0$. The case $N = 0$ is simply the fact that $|\hat \mu(\by)| \le 1$ for all $\by$. Let us now take $N \ge 1$, and assume the inequality with $N-1$ in place of $N$. By \eqref{ProductFormula}, \[ |\hat \mu(\bx)| = g(b^{-1}\bx) |\hat \mu(b^{-1}\bx)| \qquad (\bx \in \bR^k). \] Writing $\bxi = \bxi^{(1)} + b \bxi^{(2)}$ now gives \begin{align*} &\sum_{\| \bxi \|_\infty < b^N} |\hat \mu(\by + \bxi)| \\ &= \sum_{\| \bxi^{(1)} \|_\infty < b} g(b^{-1} \by + b^{-1} \bxi^{(1)}) \sum_{\| \bxi^{(2)} \|_\infty < b^{N-1}} |\hat \mu(b^{-1} \by + b^{-1} \bxi^{(1)} + \bxi^{(2)})|. \end{align*} Thus, by the inductive hypothesis, \begin{align*} &\sum_{\| \bxi \|_\infty < b^N} |\hat \mu(\by + \bxi)| \\ &\le \sum_{\| \bxi^{(1)} \|_\infty < b} g(b^{-1} \by + b^{-1} \bxi^{(1)}) \left( \sup_\bx \sum_{i_1, \ldots, i_k = 0}^{b-1} g\left(\bx + \frac{\bi}{b} \right) \right)^{N-1} \\ &\le \left( \sup_\bx \sum_{i_1, \ldots, i_k = 0}^{b-1} g\left(\bx + \frac{\bi}{b} \right) \right)^N. \end{align*} This closes the induction and completes the proof. \end{proof} We expect that replacing the maximum with a minimum would give rise to an upper bound. However, we also expect that a proof of this would be technical and lengthy. Moreover, the lower bound is what is needed for applications. For these reasons, we do not establish the upper bound. Note by periodicity that the lower bound in Theorem \ref{l1lower} is effectively computable. \bibliographystyle{amsplain} \begin{thebibliography}{99} \bibitem{ABCY2023} D. Allen, S. Baker, S. Chow and H. Yu, \emph{A note on dyadic approximation in Cantor's set}, Indag. Math. (N. S.) \textbf{34} (2023), 190--197. \bibitem{ACY2023} D. Allen, S. Chow and H. Yu, \emph{Dyadic Approximation in the Middle-Third Cantor Set}, Selecta Math. (N. S.) \textbf{29} (2023), Paper No. 11, 49 pages. \bibitem{Bak2024} S. Baker, \emph{Approximating elements of the middle third Cantor set with dyadic rationals}, Isr. J. Math. (2024), https://doi.org/10.1007/s11856-024-2686-x. \bibitem{BHZ} T. B\'enard, W. He and H. 
Zhang, \emph{Khintchine dichotomy for self-similar measures}. \bibitem{Ber2012} V. Beresnevich, \emph{Rational points near manifolds and metric Diophantine approximation}, Ann. of Math. (2) \textbf{175} (2012), 187--235. \bibitem{BD} V. Beresnevich and S. Datta, forthcoming. \bibitem{BDV2007} V. Beresnevich, D. Dickinson and S. Velani, \emph{Diophantine approximation on planar curves and the distribution of rational points}, with an appendix by R. C. Vaughan, Ann. of Math. (2) \textbf{166} (2007), 367--426. \bibitem{BHV2020} V. Beresnevich, A. Haynes and S. Velani, \emph{Sums of reciprocals of fractional parts and multiplicative Diophantine approximation}, Mem. Amer. Math. Soc. \textbf{263} (2020). \bibitem{BVVZ2017} V. Beresnevich, R. C. Vaughan, S. Velani and E. Zorin, \emph{Diophantine approximation on manifolds and the distribution of rational Points: contributions to the convergence theory}, Int. Math. Res. Not. \textbf{2017}, 2885--2908. \bibitem{BRV2020} V. Beresnevich, F. Ram\'irez and S. Velani, \emph{Metric Diophantine Approximation: some aspects of recent work}, Dynamics and Analytic Number Theory, London Math. Soc. Lecture Note Ser. (N.S.) \textbf{437,} Cambridge University Press, 2016, pages 1--95. \bibitem{BVVZ2021} V. Beresnevich, R. C. Vaughan, S. Velani and E. Zorin, \emph{Diophantine approximation on curves and the distribution of rational points: Contributions to the divergence theory}, Adv. Math. \textbf{388} (2021), Paper No. 107861. \bibitem{BV2023} V. Beresnevich and S. Velani, \emph{The divergence Borel--Cantelli Lemma revisited}, J. Math. Anal. Appl. \textbf{519} (2023), 126750. \bibitem{BY2023} V. Beresnevich and L. Yang, \emph{Khintchine's theorem and diophantine approximation on manifolds}, Acta Math. \textbf{231} (2023), 1--30. \bibitem{Ber1977} V. I. Bernik, \emph{An analogue of Hin\u{c}in's theorem in the metric theory of Diophantine approximations of dependent variables I}, Vesc\={i} Akad. Navuk BSSR Ser. F\={i}z.-Mat. Navuk (1977), 44--49. \bibitem{BD1999} V. I. Bernik and M. M. Dodson, \emph{Metric Diophantine Approximation on Manifolds}, Cambridge Tracts in Math. \textbf{137,} Cambridge University Press, Cambridge, 1999. \bibitem{BFR11} R. Broderick, L. Fishman and A. Reich, \emph{Intrinsic Approximation on Cantor-like Sets, a Problem of Mahler}, Mosc. J. Comb. Number Theory \textbf{1} (2011), 291--300. \bibitem{Bro2009} T. D. Browning, \emph{Quantitative arithmetic of projective varieties}, Progr. Math. \textbf{277,} Birkh\"{a}user Verlag, Basel, 2009. \bibitem{BD2016} Y. Bugeaud, A. Durand, \emph{Metric Diophantine approximation on the middle-third Cantor set}, J. Eur. Math. Soc. \textbf{18} (2016), 1233--1272. \bibitem{Cho2018} S. Chow, \emph{Bohr sets and multiplicative diophantine approximation}, Duke Math. J. \textbf{167} (2018), 1623--1642. \bibitem{CT2020} S. Chow and N. Technau, Higher-rank Bohr sets and multiplicative diophantine approximation, Compos. Math. \textbf{155} (2019), 2214--2233. \bibitem{CT2024} S. Chow and N. Technau, \emph{Littlewood and Duffin--Schaeffer-type problems in diophantine approximation}, Mem. Amer. Math. Soc. \textbf{296} (2024), 74 pages. \bibitem{CVY} S. Chow, P. Varj\'{u} and H. Yu, \emph{Counting rationals and Diophantine approximation in missing-digit Cantor sets}, arXiv:2402.18395, preprint 2024. \bibitem{CY2024} S. Chow and L. Yang, Effective equidistribution for multiplicative diophantine approximation on lines, Invent. math. \textbf{235} (2024), 973--1007. \bibitem{CY} S. Chow and H. 
Yu, \emph{Moment transference principles and multiplicative diophantine approximation on hypersurfaces}, arXiv:2408.10911, preprint 2024. \bibitem{DJ} S. Datta and S. Jana, \emph{On Fourier Asymptotics and Effective Equidistribution}, arXiv:2407.11961. \bibitem{DRV1991} M. M. Dodson, B. P. Rynne and J. A. G. Vickers, \emph{Khintchine-type theorems on manifolds}, Acta Arith. \textbf{57} (1991), 115--130. \bibitem{EFS2011} M. Einsiedler, L. Fishman and U. Shapira, \emph{Diophantine approximations on fractals}, Geom. Funct. Anal. \textbf{21} (2011), 14--35. \bibitem{Fal2014} K. Falconer, \emph{Fractal Geometry: Mathematical Foundations and Applications}, third edition, John Wiley \& Sons Ltd., Chichester, 2014. \bibitem{Fal97} K. Falconer, \emph{Techniques in Fractal Geometry}, John Wiley \& Sons Ltd., Chichester, 1997. \bibitem{FS2014} L. Fishman and D. Simmons, \emph{Intrinsic approximation for fractals defined by rational iterated function systems: Mahler's research suggestion}, Proc. Lond. Math. Soc. (3) \textbf{109} (2014), 189--212. \bibitem{Fur1967} H. Furstenberg, \emph{Disjointness in Ergodic Theory, minimal sets, and a problem in diophantine approximation} Math. Systems Theory \textbf{1} (1967), 1--49. \bibitem{Gal1962} P. X. Gallagher, \emph{Metric simultaneous diophantine approximation}, J. Lond. Math. Soc. \textbf{37} (1962), 387--390. \bibitem{Hua2020} J.-J. Huang, \emph{The density of rational points near hypersurfaces}, Duke Math. J. \textbf{169} (2020), 2045--2077. \bibitem{Hua2024} J.-J. Huang, \emph{Extremal affine subspaces and Khintchine-Jarn\'ik type theorems}, Geom. Funct. Anal. \textbf{34} (2024), 113--163. \bibitem{KL2020} O. Khalil and M. Luethi, \emph{Random Walks, Spectral Gaps, and Khintchine's Theorem on Fractals}, Invent. math. \textbf{232} (2023), 713--831. \bibitem{Khi1924} A. Khinchin, \emph{Einige S\"{a}tze \"{u}ber Kettenbr\"{u}cke, mit Anwendungen auf die Theorie der Diophantischen Approximationen}, Math. Ann. \textbf{92} (1924), 115--125. \bibitem{Khi1926} A. Khinchin, \emph{Zur metrischen theorie der diophantischen approximationen}, Math. Z. \textbf{24} (1926), 706--714. \bibitem{KLW} D. Kleinbock, E. Lindenstrauss and B. Weiss, \emph{On fractal measures and Diophantine approximation}, Selecta Math. (N.S.) \textbf{10} (2004), 479--523. \bibitem{KM1998} D. Y. Kleinbock and G. A. Margulis, \emph{Flows on homogeneous spaces and Diophantine approximation on manifolds}, Ann. of Math. (2) \textbf{148} (1998), 339--360. \bibitem{Lev1998} J. Levesley, \emph{A General Inhomogeneous Jarnik--Besicovitch Theorem}, J. Number Theory \textbf{71} (1998), 65--80. \bibitem{LSV2007} J. Levesley, C. Salp and S. Velani, \emph{On a problem of K. Mahler: Diophantine approximation and Cantor sets}, Math. Ann. \textbf{338} (2007), 97--118. \bibitem{Mahler84} K. Mahler, \emph{Some suggestions for further research}, Bull. Austral. Math. Soc. \textbf{29} (1984), 101--108. \bibitem{Mat2015} P. Mattila, \emph{Fourier Analysis and Hausdorff Dimension}, Cambridge Studies in Advanced Mathematics \textbf{150,} Cambridge University Press, Cambridge, 2015. \bibitem{Mazur} B. Mazur, \emph{Perturbations, deformations, and variations (and ``near-misses'') in geometry, physics, and number theory}, Bull. Amer. Math. Soc. (N.S.) \textbf{41} (2004), 307--336. \bibitem{RSTW2020} A. Rahm, N. Solomon, T. Trauthwein and B. Weiss, \emph{The distribution of rational numbers on Cantor's middle-thirds set}, Unif. Distrib. Theorem \textbf{15} (2020), 73--92. \bibitem{Sal2023} P. 
Salberger, \emph{Counting rational points on projective varieties}, Proc. Lond. Math. Soc. (3) \textbf{126} (2023), 1092--1133. \bibitem{SchYam2022} D. Schindler and S. Yamagishi, \emph{Density of rational points near/on compact manifolds with certain curvature conditions}, Adv. Math. \textbf{403} (2022), 108358. \bibitem{Serre} J.-P. Serre, \emph{Lectures on the Mordell-Weil Theorem}, third edition, Aspects Math., Friedr. Vieweg \& Sohn, Braunschweig, 1997. \bibitem{SW2019} D. Simmons and B. Weiss, \emph{Random walks on homogeneous spaces and Diophantine approximation on fractals}, Invent. math. \textbf{216} (2019), 337--394. \bibitem{S24} R. Srivastava \emph{Counting Rational Points In Non-Isotropic Neighborhoods of Manifolds}, arXiv:2407.03078, preprint 2024. \bibitem{TWW2024} B. Tan, B., Wang and J. Wu, \emph{Mahler’s question for intrinsic Diophantine approximation on triadic Cantor set: the divergence theory} Math. Z. \textbf{306} (2024), https://doi.org/10.1007/s00209-023-03397-1. \bibitem{VV2006} R. C. Vaughan and S. Velani, \emph{Diophantine approximation on planar curves: the convergence theory}, Invent. math. \textbf{166} (2006), 103--124. \bibitem{W01} B. Weiss, \emph{Almost no points on a Cantor set are very well approximable}, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. \textbf{457} (2001), 949--952. \bibitem{Yu} H. Yu, \emph{Rational points near self-similar sets}, arXiv:2101.05910, preprint 2021. \bibitem{YuRadial} H. Yu, \emph{On the absolute continuity of radial and linear projections of missing digits measures}, arXiv:2309.01298, preprint 2023. \bibitem{YuManifold} H. Yu, \emph{Missing digits points near manifolds}, arXiv:2309.00130, preprint 2023. \end{thebibliography} \end{document}
2412.12372v1
http://arxiv.org/abs/2412.12372v1
On Mahler's conjecture for even s-concave functions in dimensions 1 and 2
\documentclass[11pt,reqno]{amsart} \usepackage[utf8x]{inputenc} \usepackage[english]{babel} \usepackage{amsmath,amsfonts,amssymb,amsthm,array,tikz-cd,dsfont} \usepackage{graphicx} \usepackage[margin=2.5cm]{geometry} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \long\def\forget#1\forgotten{} \newcommand{\inte}{\mathrm{int}} \makeatletter \@namedef{subjclassname@2020}{\textup{2020} Mathematics Subject Classification} \makeatother \newcommand{\1}{\mathds{1}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\f}{\varphi} \newcommand{\e}{\varepsilon} \newcommand{\M}{\mathcal{M}} \newcommand{\F}{\mathcal{F}} \renewcommand{\L}{\mathcal{L}} \newcommand{\Epi}{\mathrm{Epi}} \newcommand{\supp}{\mathrm{supp}} \newcommand{\conv}{\mathrm{Conv}} \newcommand{\dom}{\mathrm{dom}} \begin{document} \title{On Mahler's conjecture for even s-concave functions in dimensions 1 and 2.} \author{Matthieu Fradelizi, Elie Nakhle} \subjclass[2020]{Primary 52A40, 52A20} \keywords{Mahler's conjecture, functional form, volume product, s-polarity, equipartition, s-concave functions} \begin{abstract} In this paper, we establish different sharp forms of Mahler's conjecture for $s$-concave even functions in dimensions $n$, for $n=1$ and $2$, for $s>-1/n$, thus generalizing our previous results in \cite{FN} on log-concave even functions in dimension 2, which corresponds to the case $s=0$. The functional volume product of an even $s$-concave function $g$ is \[ \int_{\R^{n}}g(x)dx\int_{\R^{n}}\L_{s}g(y)dy, \] where $\L_{s}g$ is the $s$-polar function associated to $g$. The analogue of Mahler's conjecture for even $s$-concave functions postulates that this quantity is minimized for the indicatrix of a cube for any $s>-1/n$. In dimension $n=1$, we prove this conjecture for all $s\in(-1,0)$ (the case $s\ge0$ was established by the first author and Mathieu Meyer in \cite[page 17]{FM10}). In dimension $n=2$, we only consider the case $1/s\in\Z$: for $s>0$, we establish Mahler's conjecture for general $s$-concave even functions; for $s<0$, the situation is more involved, we only prove a sharp inequality for $s$-concave functions $g$ such that $g^s$ admits an asymptote in every direction. Notice that this set of functions is quite natural to consider, when $s<0$, since it is the largest subset of $s$-concave functions stable by $s$-duality. 
\end{abstract} \maketitle \section{Introduction.} The classical Blaschke-Santal\'o inequality gives the following sharp relation between the volume of a centrally symmetric convex body $K$ in~$\R^{n}$ and the volume of its polar body $K^{\circ}$: $$P(K)=|K||K^{\circ}|\leq P(B_{2}^{n}),$$ where $B_{2}^{n}$ is the Euclidean ball of radius one, $K^{\circ}=\{y\in\R^{n}, \langle x,y\rangle\leq 1, \forall x\in K\}$ is the polar body of $K$ in $\R^{n}$ and $|A|$ stands for the Lebesgue measure of a Borel subset~$A$ of $\R^{n}$.\\ One of the well-known problems in the field of geometry of convex bodies is an inverse form of the Blaschke-Santal\'o inequality, namely Mahler's conjecture. This conjecture states that the product of the volume of a centrally symmetric convex body $K$ and the volume of its polar body~$K^{\circ}$, should be minimized by the cube, that is, $$ P(K)\geq P([-1,1]^{n})=\frac{4^{n}}{n!}. $$ This conjecture was stated in 1938 by Mahler \cite{Mah2}. The inequality in dimension two was established by Mahler himself \cite{Mah1}, the $3$-dimensional inequality and the equality case were proved by Iriyeh and Shibata \cite{IS}, a shorter proof can be found in \cite{FHMRZ}. The conjecture was also proved for several particular families of convex bodies like unconditional convex bodies by Saint Raymond \cite{SR,M}, zonoids by Reisner \cite{Re,GMR}, hyperplane sections of $B_{p}^{n}=\{x\in\R^{n};\sum |x|_{i}^{p}\leq 1\}$ by Karasev \cite{K}.\\ Functional forms of Mahler's conjecture have also been studied, where the centrally symmetric convex body is replaced with an even $s$-concave function and the polar body is replaced with the suitable $s$-polar of this function. More precisely, let us recall the definition of $s$-concavity. \begin{definition} Let $s\in\R$. A function $g:\R^{n}\to\R_{+}$ is $s$-concave, if for every $\lambda\in[0,1]$, and every $x$, $y\in\R^{n}$ such that $g(x)g(y)>0$, one has $$ g((1-\lambda)x+\lambda y)\geq( (1-\lambda)g(x)^{s}+\lambda g(y)^{s})^{\frac{1}{s}}, \quad \hbox{for}\ s\neq0, $$ and $g((1-\lambda)x+\lambda y)\geq g(x)^{1-\lambda}g(y)^\lambda$, for $s=0$. \end{definition} Let $g:\R^{n}\to\R_{+}$ be an $s$-concave function not identically zero. Notice that for $s>0$, it means that $g^{s}$ is concave on its support, while for $s<0$, it means that $g^{s}:\R^n\to\R_+\cup\{+\infty\}$ is convex. For $s$-concave functions, the $s$-duality takes the following form. We define, for every $y\in\R^{n}$, $$ \L_{s}g(y):=\inf_{x\in\R^{n}}\frac{\left(1-s\langle x,y\rangle\right)_{+}^{\frac{1}{s}}}{g(x)},\quad \hbox{for}\ s\neq0,\ \hbox{and}\quad \L_{0}g(y)=\inf_{x\in\R^{n}}\frac{e^{-\langle x,y\rangle}}{g(x)}, $$ where the infimum are considered on the set $\{x\in\R^{n}; g(x)>0\}$. Note that $\L_{s}g$ is $s$-concave and that $\L_{0}$ can also be described using the Legendre transform: for any function $\f:\R^n\to\R\cup\{+\infty\}$, one has $\L_{0}(e^{-\f})=e^{-\f^*}$, where $\f^*(y)=\sup_x(\langle x,y\rangle-\f(x))$ denotes the Legendre transform of $\f$. The following conjecture is the natural analogue of Mahler's conjecture for even $s$-concave functions. \begin{conjecture}\label{mahler-s} Let $s>-\frac{1}{n}$ and $g:\R^{n}\to\R_{+}$ be an even $s$-concave function such that $0<\int g<+\infty$. Then, $$ \int_{\R^{n}}g(x)dx\int_{\R^{n}}\L_{s}g(y)dy\geq\frac{4^{n}}{(1+s)\cdots(1+ns)}, $$ with equality for $g=\1_{[-1,1]^n}$. \end{conjecture} The conjecture has been proved for $s=0$ and $g$ being unconditional in \cite{FM08a, FM08b}. 
Recall that a function is unconditional if $g(|x_{1}|,...,|x_{n}|)=g(x_{1},...,x_{n})$ for every $(x_{1},...,x_{n})\in\R^{n}$. It has also been established for $s>0$ and $n=1$ in \cite[page 17]{FM10}; for $s>0$ and $g$ unconditional in \cite{BF,FGSZ} and for $s=0$ and $n=2$ in \cite{FN}. In our first main result, we prove the above conjecture in the case where $1/s\in\N$, and in dimension $n=2$. \begin{theorem}\label{thm:spos} Let $s>0$ be such that $1/s\in\N$ and let $g:\R^{2}\to\R_{+}$ be an even $s$-concave function such that $0<\int g<+ \infty$. Then, \[ \int_{\R^{2}}g(x)dx\int_{\R^{2}}\L_{s}g(y)dy\geq\frac{16}{(1+s)(1+2s)}, \] with equality for $g=\1_{[-1,1]^2}$. \end{theorem} Notice that letting $s\to0$, we get a new proof of the log-concave case from \cite{FN}, though without the equality case, see Corollary \ref{cor:FN23}. \\ For $s<0$, the situation is more involved, because the $s$-dual $\L_s g$ of an integrable $s$-concave function $g$ is not only $s$-concave but, as observed in \cite{Ro, FGSZ}, the function $(\L_{s}g)^s$ has also asymptotes in every direction. For example, for $g=\1_{[-1,1]^n}$, one has $(\L_{s}g(y))^s=1-s\|y\|_1$, where $\|y\|_1=\sum_{i=1}^n|y_i|$, for $y=(y_1,\dots,y_n)$. More precisely, in general, one has $(\L_{s}g)^s\in\F$, where the set $\F$ is defined below. \begin{definition}\label{def:F} $\F$ is the set of convex lower semi-continuous functions $f:\R^{n}\to (0,+\infty)$ such that, for any $x\neq 0$, one has $\lim_{t\to+\infty}f(tx)=+\infty$, and $t\mapsto\frac{f(tx)}{t}$ is non-increasing on $(0,+\infty)$. \end{definition} Moreover, it is easy to see that for any $g$ such that $g^s\in\F$, one has $\L_s\L_s g=g$, from which it follows that for any map $g$, one has $\L_s\L_s\L_s g=\L_sg$. As we shall see from the one-dimensional case, it is natural to formulate two more conjectures from which Conjecture \ref{mahler-s} follows in the case $s<0$. The first one postulates a sharp lower bound on the functional volume product of $s$-concave even functions, among the ones such that $g^s\in\F$. \begin{conjecture}\label{mahler-s-F} Let $s\in(-\frac{1}{n},0)$ and let $g:\R^{n}\to\R_{+}$ be even such that $g^s\in\F$ and $0<\int g<+\infty$. Then, $$ \int_{\R^{n}}g(x)dx\int_{\R^{n}}\L_{s}g(y)dy\geq\frac{1}{(1+ns)}\times\frac{4^{n}}{(1+s)\cdots(1+ns)}, $$ with equality for $g(x)=(\max(1,\|x\|_\infty))^{1/s}$. \end{conjecture} Notice that $(\max(1,\|x\|_\infty))^{1/s}=\L_s\L_s(\1_{[-1,1]^n})$. In \cite{FGSZ}, the above conjecture was established if the function $g$ is unconditional. In particular, the conjecture holds for $n=1$. In our second main result, we establish Conjecture \ref{mahler-s-F} when $n=2$ and $-1/s\in\N$. \begin{theorem}\label{th:mahler-s-F} Let $s\in(-\frac{1}{2},0)$ be such that $-1/s\in\N$ and let $g:\R^{2}\to\R_{+}$ be even such that $g^s\in\F$ and $0<\int g<+\infty$. Then, $$ \int_{\R^{2}}g(x)dx\int_{\R^{2}}\L_{s}g(y)dy\geq\frac{16}{(1+s)(1+2s)^2}, $$ with equality for $g(x)=(\max(1,\|x\|_\infty))^{1/s}$. \end{theorem} The second conjecture compares the integral of $g$ with the integral of its double $s$-polar. \begin{conjecture}\label{comp-bipolar} Let $s\in(-\frac{1}{n},0)$ and let $g:\R^{n}\to\R_{+}$ be even $s$-concave such that $0<\int g<+\infty$. Then, \begin{equation}\label{eq:bipolar} \int_{\R^{n}}g(x)dx\ge(1+ns)\int_{\R^{n}}\L_s\L_{s}g(x)dx, \end{equation} with equality for $g=\1_{[-1,1]^n}$. 
\end{conjecture} Notice that, for any function $g:\R^{n}\to\R_{+}$, one has, for any $x\in\R^n$, $g(x)\le \L_s\L_sg(x)$ thus $\int_{\R^{n}}g(x)dx\le\int_{\R^{n}}\L_s\L_{s}g(x)dx$. The conjectured inequality \eqref{eq:bipolar} postulates that one has an opposite bound up to constant for $s$-concave even functions. \forget Hence, the inequality in Conjecture \ref{conj:s-concave} can be simplified as follows: \begin{conjecture}\label{conj:concavedimn} Let $m\in\Z$ and $f:\R^{n}\to\R_{+}$ be an even concave function such that $0<\int_{\R^{n}}f<+\infty$. Then, $$ P_{m}(f)=\int_{\R^{n}}f(x)^{m}dx\int_{\R^{n}}(\L f(y))^{m}dy\geq\frac{4^{n}}{(m+1)...(m+n)}, $$ where $\L f(y)=\inf_{x\in\R^{n}}\frac{\left(1-\langle x,y\rangle \right)_{+}}{f(x)}$. \end{conjecture} \forgotten In our third main result, we establish Conjecture \ref{comp-bipolar} in dimension $n=1$. \begin{theorem}\label{comp-bipolar-one} Let $s\in(-1,0)$ and let $g:\R\to\R_{+}$ be even $s$-concave such that $0<\int g<+\infty$. Then, $$ \int_{\R}g(x)dx\ge(1+s)\int_{\R}\L_s\L_{s}g(y)dy, $$ with equality for $g=\1_{[-1,1]}$. \end{theorem} \begin{remark} Notice that, if Conjecture \ref{comp-bipolar} holds for an $s$-concave even function $g$ and Conjecture~\ref{mahler-s-F} holds for $\L_s g$, then multiplying equation \eqref{eq:bipolar} by $\int\L_s g$, using that $(\L_s g)^s\in\F$ and applying Conjecture~\ref{mahler-s-F} to $\L_sg$, then we deduce that Conjecture~\ref{mahler-s} holds for $g$. \end{remark} From the above remark, the results in \cite{FGSZ} and Theorem \ref{comp-bipolar-one}, one deduces that Conjecture~\ref{mahler-s} holds for $n=1$ and $s\in(-1,0)$.\\ The proof's strategy of our main theorems in dimension $n=2$ relies on constructing an associated symmetric convex body in dimension $m+2$, where $m=1/|s|$, to which we apply the following proposition concerning Mahler's conjecture in higher dimensions proved in \cite{FHMRZ}. \begin{proposition}\label{prop:Mahler} {\bf{[FHMRZ]}} Let $K\subset\R^{n}$ be a centrally symmetric convex body that can be partitioned with hyperplanes $H_{1}$, $H_{2}$,..., $H_{n}$ into $2^{n}$ pieces of the same volume such that each section $K\cap H_{i}$ satisfies Mahler's conjecture and is partitioned into $2^{n-1}$ regions of the same $(n-1)$-dimensional volume by the remaining hyperplanes, then \[ P(K)\geq\frac{4^{n}}{n!}. \] \end{proposition} In our proofs, we shall need the following notion of equipartition of functions. \begin{definition} Let $m\in\Z$. A function $f:\R^{n}\to\R_{+}$ is said to be $m$-equipartitioned if $$ \int_{\R^{n}_{\varepsilon}}f^m=\frac{1}{2^{n}}\int_{\R^{n}}f^{m}\quad\mbox{and}\quad\int_{\R_{\varepsilon}^{n}} f^{m-1}=\frac{1}{2^{n}}\int_{\R^{n}}f^{m-1}, $$ where $\forall\varepsilon\in\{-1,1\}^{n}$, $\R^{n}_{\varepsilon}=\{x\in\R^{n};\varepsilon_{i}x_{i}\geq 0,\forall i\in\{1,...,n\}\}$. \end{definition} This paper is organized in the following way: In section 2, we present some general results and properties of the $s$-dual function $\L_{s}$ and we introduce the construction of convex bodies using $s$-concave functions. In section 3, we establish Conjecture \ref{mahler-s} in dimensions $n=1$ and $n=2$ for $\frac{1}{s}\in\N$. In section 4, we give a lower bound of the functional volume product applied to even $s$-concave functions in dimensions $n=1$ and $n=2$ for $-\frac{1}{s}\in\N$. \section{Construction and properties of associated convex bodies.} Let $s>-\frac{1}{n}$ and $g:\R^{n}\to\R_{+}$ be an $s$-concave function such that $0<\int g<+\infty$. 
We discuss two cases according to the sign of $s$. \subsection{Case $s>0$.} let $m=\frac{1}{s}>0$. Then there exists a function $f:\R^{n}\to\R_{+}$ concave on its support such that $g=f^{\frac{1}{s}}=f^{m}$ and \[ \L_{s}g(y)=\L_{s}f^{\frac{1}{s}}(y)=\inf_{x\in\R^{n}}\frac{\left(1-s\langle x,y\rangle\right)_{+}^{\frac{1}{s}}}{f(x)^{\frac{1}{s}}}=\left(\inf_{x\in\R^{n}}\frac{\left( 1-\langle x,sy\rangle\right)_{+}}{f(x)}\right)^{\frac{1}{s}}=\left(\L f(sy)\right)^{\frac{1}{s}}=\left( \L f\left(\frac{y}{m}\right)\right)^{m}, \] where $\L f(y)=\L_{1}f(y)=\inf_{x\in\R^{n}}\frac{\left(1-\langle x,y\rangle \right)_{+}}{f(x)}$. In addition, by the change of variables $y=mz$, we get $$ \int_{\R^{n}}\L_{s}g(y)dy=m^{n}\int_{\R^{n}}(\L f(z))^{m}dz. $$ Introducing the operator $P_{m}$ defined as follows \[ P_{m}(f)=\int_{\R^{n}}f(x)^{m}dx\int_{\R^{n}}(\L f(y))^{m}dy, \] and changing variables, we get \[ \int_{\R^{n}}g(x)dx\int_{\R^{n}}\L_{s}g(y)dy=m^{n}P_{m}(f). \] Thus, for $s>0$, Conjecture \ref{mahler-s} can equivalently be formulated in the following form: for any even function $f:\R^{n}\to\R_{+}$, concave on its support and integrable and for any $m>0$, one should have \begin{equation}\label{eq:conjPmf} P_m(f)=\int_{\R^{n}}f(x)^{m}dx\int_{\R^{n}}(\L f(y))^{m}dy\ge P_m(\1_{[-1,1]^n})=\frac{4^n}{(m+1)\cdots(m+n)}. \end{equation} The operator $P_{m}$ is invariant under linear transformation. Indeed, let $T:\R^n\to\R^n$ be an invertible linear map. Then, putting $z=Tx$ we get, \[ \int_{\R^{n}}(f\circ T(x))^{m}dx=\frac{1}{|\det T|}\int_{\R^{n}}f(z)^{m}dz. \] We also obtain, for every $y\in\R^n$, \[ \L(f\circ T)(y)=\inf_{x\in\R^{n}}\frac{\left(1-\langle x,y\rangle\right)_{+}}{f(Tx)}=\inf_{z\in\R^{n}}\frac{\left(1-\langle T^{-1}z,y\rangle\right)_{+}}{f(z)}=\L f\left((T^{-1})^{*}y\right). \] Therefore, $\L(f\circ T)=(\L f)\circ (T^{-1})^*$. Hence, changing variables, we get \begin{eqnarray*} P_{m}(f\circ T)&=&\int_{\R^{n}}(f(Tx))^{m}dx\int_{\R^{n}}(\L f((T^{-1})^{*}y))^{m}dy\\ &=&\frac{1}{|\det T|}\int_{\R^{n}}f(x)^{m}dx\times |\det T^{*}|\int_{\R^{n}}(\L f(y))^{m}dy=P_{m}(f). \end{eqnarray*} Let us introduce some tools that will help us in the proof of our main theorem in section 3. \\ Let $L$ be a symmetric convex body in $\R^{m}$. Define the set \[ K_{L,m}(f)=\{(x,t)\in\R^{n}\times\R^{m};\|t\|_{L}\leq f(x)\}, \] where $\|.\|_{L}$ is defined for $y\in\R^{m}$ by $\|y\|_{L}=\inf\{\lambda>0; y\in \lambda L\}$. Variants of these sets and their volume products were considered in \cite{AKM, BF} but we establish their properties below for completeness. Since $f$ is even and concave, then $K_{L,m}(f)$ is a symmetric convex body in $\R^{m+n}$ and for every $m>0$ one has \[ |K_{L,m}(f)|=|\{(x,t)\in\R^{n}\times\R^{m};\|t\|_{L}\leq f(x)\}|=\int_{\R^{n}}|f(x)L|_{m}dx=|L|\int_{\R^{n}}f(x)^mdx. \] Moreover, we have \begin{eqnarray*} (K_{L,m}(f))^{\circ}&=&\{(y,s)\in\R^{n}\times\R^{m};\langle x,y\rangle+\langle s,t\rangle\leq 1,\quad\forall (x,t)\in K_{L,m}(f)\}\\ &=&\{(y,s)\in\R^{n}\times\R^{m}; \sup_{\{t\in\R^{m};\|t\|_{L}\leq f(x)\}}\langle s,t\rangle\leq 1-\langle x,y\rangle,\quad\forall x\in\R^n\}\\ &=&\left\{(y,s)\in\R^{n}\times\R^{m};f(x)\|s\|_{L^{\circ}}\leq (1-\langle x,y\rangle)_+,\quad\forall x\in\R^n\right\}\\ &=&\left\{(y,s)\in\R^{n}\times\R^{m}; \|s\|_{L^{\circ}}\leq\L f(y)\right\}\\ &=&K_{L^{\circ},m}(\L f). \end{eqnarray*} Hence \[ |K_{L,m}(f)^{\circ}|=|K_{L^{\circ},m}(\L f)|=|L^{\circ}|\int_{\R^{n}}(\L f(y))^{m}dy. 
\] Thus, one has \begin{eqnarray*} P(K_{L,m}(f))&=&|K_{L,m}(f)||K_{L^{\circ},m}(\L f)| =|L||L^{\circ}|\int_{\R^{n}}f(x)^mdx\int_{\R^{n}}(\L f(y))^{m}dy =P(L)P_{m}(f). \end{eqnarray*} We need the following lemma describing the relation between the concave function $f$ and its dual. \begin{lemma}\label{supp} Let $f:\R^{n}\to\R_{+}$ be a concave function on its support. Then, \[ \supp(\L f)=(\supp f)^{\circ}. \] \end{lemma} \begin{proof} Let $C=\supp(f)$ and $K(f)=\{(x,t)\in C\times\R; |t|\leq f(x)\}$. We know that $K(f)$ is convex in $\R^{n+1}$ and $K(f)^{\circ}=K(\L f)$. Since the operation of duality transforms sections of a given convex body into projections of its polar and \[ K(f)\cap e_{n+1}^{\perp}=C=P_{e_{n+1}^{\perp}}(K(f)), \] we have \[ K(\L f)\cap e_{n+1}^{\perp}=K(f)^{\circ}\cap e_{n+1}^{\perp}=\left(P_{e_{n+1}^{\perp}}(K(f))\right)^{\circ}. \] Thus, \[ \supp\L f=C^{\circ}=(\supp f)^{\circ}. \] \end{proof} \subsection{Case $-\frac{1}{n}<s<0$.} Let $m=-\frac{1}{s}>n$. Then there exists a convex function $f:\R^{n}\to(0,+\infty]$ such that $f=g^{s}$. Since $0<\int g<+\infty$, one has $\lim_{t\to+\infty}f(tx)=+\infty$, for any $x\neq 0$. Moreover, since $s=-1/m$ and $g=f^{-m}$, \begin{eqnarray*} \L_{s}g(y)=\inf_{x\in\R^{n}}\frac{\left(1-s\langle x,y\rangle\right)^{\frac{1}{s}}_{+}}{g(x)} =\left( \inf_{x\in\R^{n}}\frac{(1+\langle x,\frac{y}{m}\rangle)^{-1}_{+}}{f^{-1}(x)}\right)^{m} =\left(\L_{-1}(f^{-1})\left(\frac{y}{m}\right)\right)^{m}. \end{eqnarray*} Let us define the function $\M f$, for $y\in\R^{n}$, by \[ \M f(y)=(\L_{-1}(f^{-1})(y))^{-1}=\sup_{x\in\R^{n}}\frac{1+\langle x,y\rangle}{f(x)}. \] It was proved in \cite{FGSZ} that, with these hypotheses, the function $\M f$ belongs to $\F$ (see definition \ref{def:F}). Using the change of variables $y=mz$, we get \[ \int_{\R^{n}}\L_{s}g(y)dy=m^{n}\int_{\R^{n}}\left(\L_{-1}(f^{-1})(z)\right)^{m}dz=m^{n}\int_{\R^{n}}\frac{1}{(\M f(z))^{m}}dz. \] Consider the operator $Q_{m}$ defined as follows \[ Q_{m}(f)=\int_{\R^{n}}\frac{1}{f(x)^m}dx\int_{\R^{n}}\frac{1}{(\M f(y))^{m}}dy, \] one has \begin{eqnarray*} \int_{\R^{n}}g(x)dx\int_{\R^{n}}\L_{s}g(y)dy =m^{n}\int_{\R^{m}}\frac{1}{f(x)^m}dx\int_{\R^{n}}\frac{1}{(\M f(z))^{m}}dz=m^{n}Q_{m}(f). \end{eqnarray*} Thus, for $s<0$, Conjecture \ref{mahler-s} can equivalently be formulated in the following form: for any even function $f:\R^{n}\to(0,+\infty]$, convex, such that $\lim_{t\to+\infty}f(tx)=+\infty$, for any $x\neq 0$ and for any $m>n$, one should have \begin{equation}\label{eq:conjQmf} Q_m(f)=\int_{\R^{n}}\frac{1}{f(x)^{m}}dx\int_{\R^{n}}\frac{1}{(\M f(y))^{m}}dy\ge Q_m(1+I_{[-1,1]^n})=\frac{4^n}{(m-1)\cdots(m-n)}, \end{equation} where, for any $K\subset\R^n$, the function $I_K$ is defined by $I_K(x)=0$ for $x\in K$ and $I_K(x)=+\infty$ for $x\notin K$. In the same way, Conjecture \ref{mahler-s-F} can be reformulated in the form: for $f\in\F$ even and $m>n$, \begin{equation}\label{eq:conjQmf-F} Q_m(f)\ge Q_m(\max(1,\|x\|_\infty))=\frac{m}{m-n}\times\frac{4^n}{(m-1)\cdots(m-n)}. \end{equation} And Conjecture \ref{comp-bipolar} read as: for any even function $f:\R^{n}\to(0,+\infty]$, convex such that $\lim_{t\to+\infty}f(tx)=+\infty$, for any $x\neq 0$ and for any $m>n$, \begin{equation}\label{eq:conj-bipolar-f} \int_{\R^n}\frac{1}{f(x)^m}dx\ge \frac{m-n}{m}\int_{\R^n}\frac{1}{(\M\M f(x))^m}dx, \end{equation} with equality for $f=1+I_{[-1,1]^n}$. Similarly as before, one may easily prove that the operator $Q_{m}$ is invariant under linear transformation. 
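For instance, the constants on the right-hand sides of \eqref{eq:conjQmf} and \eqref{eq:conjQmf-F} can be checked by a direct computation at the conjectured extremizers. For $f=1+I_{[-1,1]^n}$, one has $\frac{1}{f^{m}}=\1_{[-1,1]^n}$ and $\M f(y)=\sup_{x\in[-1,1]^{n}}(1+\langle x,y\rangle)=1+\|y\|_{1}$, so that, for $m>n$,
\[
Q_m(1+I_{[-1,1]^n})=2^{n}\int_{\R^{n}}\frac{dy}{(1+\|y\|_{1})^{m}}=2^{n}\times\frac{2^{n}}{(m-1)\cdots(m-n)}=\frac{4^{n}}{(m-1)\cdots(m-n)},
\]
the integral being computed by integrating the coordinates one at a time over $[0,+\infty)^{n}$ and using the symmetry of $\|\cdot\|_{1}$. For $f(x)=\max(1,\|x\|_{\infty})$, one still has $\M f(y)=1+\|y\|_{1}$, while
\[
\int_{\R^{n}}\frac{dx}{\max(1,\|x\|_{\infty})^{m}}=2^{n}+\int_{1}^{+\infty}\frac{n2^{n}\,dt}{t^{m-n+1}}=2^{n}\,\frac{m}{m-n},
\]
which accounts for the extra factor $\frac{m}{m-n}$ in \eqref{eq:conjQmf-F}.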
Let $K\subset\R^{m}$ be a symmetric convex body. For any even convex function $f:\R^{n}\to\R_{+}$ such that $\lim_{x\to\infty}f(tx)=+\infty$, for any $x\neq 0$, we define the function \[ \begin{array}{ccccl} \pi_m(f) & : & \R^{n}\times\R^{m} & \to & \R_{+}\cup\{+\infty\} \\ & & (x,s) & \mapsto & \pi_m(f)(x,s)=\begin{cases} \|s\|_{K} f(\frac{x}{\|s\|_{K}}) & \text{if $s\neq 0$}. \\ \lim_{s\to 0}\|s\|_{K}f(\frac{x}{\|s\|_{K}}) & \text{if $s=0$}. \end{cases} \\ \end{array} \] We denote the domain of $f$ by $\dom(f)=\{x\in\R^{n}; f(x)<+\infty\}$. Then \[ \dom(\pi_m(f))=\{(x,s)\in\R^{n}\times\R^{m}; \pi_m(f)(x,s)<+\infty\}=\{(x,s)\in\R^{n}\times\R^{m}; x\in \|s\|_{K}\dom(f)\}. \] Consider the following convex body \[ C^{K}_{m}(f)=\{(x,s)\in\R^{n}\times\R^{m}; \pi_m(f)(x,s)\leq 1\}=\overline{\left\{(x,s)\in\R^{n}\times\R^{m}\setminus\{0\}; \|s\|_{K}f\left(\frac{x}{\|s\|_{K}}\right)\leq 1\right\}}. \] One has, \begin{eqnarray*} (C^{K}_{m}(f))^{\circ}&=&\{(y,t)\in\R^{n}\times\R^{m}; \langle (x,s),(y,t)\rangle\leq 1, \forall (x,s)\in C^{K}_{m}(f)\}\\ &=&\left\{(y,t)\in\R^{n}\times\R^{m}; \langle x,y\rangle + \langle s,t\rangle\leq 1, \forall (x,s); \|s\|_{K}f\left(\frac{x}{\|s\|_{K}}\right)\leq 1\right\}. \end{eqnarray*} Let $z=\frac{x}{\|s\|_{K}}$, then, \begin{eqnarray*} (C^{K}_{m}(f))^{\circ}&=&\left\{(y,t)\in\R^{n}\times\R^{m}; \|s\|_{K}\langle z,y\rangle+\langle s,t\rangle\leq 1, \forall (z,s), f(z)\leq\frac{1}{\|s\|_{K}}\right\}\\ &=&\left\{(y,t)\in\R^{n}\times\R^{m};\sup_{\left\{s;\|s\|_{K}\leq1/f(z)\right\}} \|s\|_{K}\langle z,y\rangle +\langle s,t\rangle\leq 1, \forall z\right\}\\ &=&\left\{(y,t)\in\R^{n}\times\R^{m};\frac{\langle z,y\rangle+\|t\|_{K^\circ}}{f(z)}\leq 1, \forall z\right\}\\ &=&\left\{(y,t)\in\R^{n}\times\R^{m}; \sup_{z\in\R^{n}}\frac{\langle z,y\rangle+\|t\|_{K^{\circ}}}{f(z)}\leq 1\right\}\\ &=&\overline{\left\{(y,t)\in\R^{n}\times\R^{m}\setminus\{0\}; \|t\|_{K^\circ}\M f\left(\frac{y}{\|t\|_{K^\circ}}\right)\leq 1\right\}}\\ &=& C^{K^\circ}_{m}(\M f). \end{eqnarray*} Let us compute the volume of the convex body $C_{m}^{K}(f)$. \begin{eqnarray*} \left|C_{m}^{K}(f)\right|&=&\left|\left\{(x,s)\in\R^{n}\times\R^{m}\setminus\{0\};\|s\|_{K}f\left(\frac{x}{\|s\|_{K}}\right)\leq 1\right\}\right|\\ &=&\int_{\R^{m}}\left|\left\{x\in\R^{n}; \|s\|_{K}f\left(\frac{x}{\|s\|_{K}}\right)\leq 1\right\}\right|_{n}ds\\ &=&\int_{\R^{m}}\|s\|_{K}^{n}\left|\left\{z\in\R^{n};f(z)\leq\frac{1}{\|s\|_{K}}\right\}\right|_{n}ds=\int_{\R^{m}}\varphi(\|x\|_K)dx, \end{eqnarray*} where $\f(t)=t^{n}\left|\left\{z\in\R^{n};f(z)\leq\frac{1}{t}\right\}\right|_{n}.$ \forget We shall use the following lemma. \begin{lemma} Let $m>0$. Let $K$ be a symmetric convex body in $\R^{m}$ and $\varphi:\R\to\R_{+}$ a function such that $\lim_{t\to+\infty}t^{m}\varphi(t)=0$. Then, \[ \int_{\R^{m}}\varphi(\|s\|_{K})ds=|K|\int_{0}^{+\infty}mt^{m-1}\varphi(t)dt. \] \end{lemma} \forgotten Using Fubini and a change of variables, one has \begin{eqnarray*} \int_{\R^{m}}\f(\|s\|_{K})ds=\int_{0}^{+\infty}(-\f'(t))|\{s\in\R^{m};\|s\|_{K}\leq t\}|dt = |K|\int_{0}^{+\infty} t^{m}(-\f'(t))dt =m|K|\int_{0}^{+\infty}t^{m-1}\f(t)dt. \end{eqnarray*} Hence, we get \begin{eqnarray*} |C_{m}^{K}(f)| &=& m |K|\int_{0}^{+\infty}t^{m-1}t^{n}\left|\left\{z\in\R^{n};f(z)\leq\frac{1}{t}\right\}\right|_{n} dt\\ &=&m |K|\int_{0}^{+\infty}\frac{1}{u^{m+n+1}}\left|\left\{z\in\R^{n}; f(z)\leq u\right\}\right| du\\ &=&m|K|\int_{\R^{n}}\int_{f(z)}^{+\infty}\frac{du}{u^{m+n+1}}dz\\ &=&\frac{m}{m+n}|K|\int_{\R^{n}}\frac{dz}{f(z)^{m+n}}. 
\end{eqnarray*} Using that $C_{m}^{K}(f)^{\circ}=C_{m}^{K^\circ}(\M f)$, we get \[ |C_{m}^{K}(f)^{\circ}|=\frac{m}{m+n}|K^{\circ}|\int_{\R^{n}}\frac{dz}{(\M f(z))^{m+n}}. \] Thus, we obtain for any $m>0$ \begin{equation}\label{eq:Mf} \int_{\R^{n}}\frac{1}{f(x)^{m+n}}dx\int_{\R^{n}}\frac{1}{(\M f(y))^{m+n}}dy=\left(\frac{m+n}{m}\right)^{2}\frac{1}{|K||K^{\circ}|}|C_{m}^{K}(f)||C_{m}^{K}(f)^{\circ}|. \end{equation} \section{Proof of the inequalities in dimension 1 and 2 for $s>0$.} In this section, we start by giving a monotonicity result, which implies Conjecture~\ref{mahler-s} in dimension $1$ where $\frac{1}{s}\in\N$. Recall that Conjecture~\ref{mahler-s} in dimension $1$ was proved in \cite[page 17]{FM10} for any $s>0$. We present here a simpler proof in the case where $\frac{1}{s}\in\N$. Then, we present a proof of Theorem \ref{thm:spos}, which is Mahler's conjecture for $s$-concave functions in the case $n=2$ and $\frac{1}{s}\in\N$. Let $m=\frac{1}{s}$ and $g:\R^{n}\to\R_{+}$ be an even $s$-concave function. Then there exists an even function $f:\R^{n}\to\R$, concave on its support, such that $g=f^{m}$. Thus, in dimension $n=1$ and for $1/s\in\N$, Conjecture~\ref{mahler-s} is a consequence of the following theorem. \begin{theorem}\label{dim1} Let $m\in\N$, $a>0$, $f:\R\to\R_{+}$ be an even concave function on its support $C=[-a,a]$ and set \[ P_{m}(f)=\int_{\R}f(x)^{m}dx\int_{\R}(\L f(y))^{m}dy. \] Then the sequence $((m+1)^2P_m(f)-4m)_m$ is non-decreasing. Hence, one has $P_m(f)\geq\frac{4}{m+1}$. Moreover, if $P_m(f)=\frac{4}{m+1}$ then either $f$ is constant on its support $[-a,a]$ or there is $c>0$ such that $f(x)=c\left(1-\frac{|x|}{a}\right)_+$. \end{theorem} \begin{proof} First, using Lemma \ref{supp}, we know that $\supp(\L f)=C^{\circ}=[-\frac{1}{a},\frac{1}{a}]$. Set \[ I_{m}=(m+1)\int_{0}^{a}f(x)^{m}dx\quad\mbox{and}\quad J_{m}=(m+1)\int_{0}^{\frac{1}{a}}(\L f(y))^{m}dy. \] Both the monotonicity and the lower bound that we want to prove reduce to showing that $I_{m}J_{m}\geq I_{m-1}J_{m-1}+1$ for every $m\in\N^{*}$, starting from the value $I_{0}J_{0}=1$. Let $m\in\N^*$. Being concave on $[-a,a]$, the function $f$ is left differentiable on $(-a,a]$. By concavity, denoting by $f'(t)$ the left derivative of $f$ at $t$, we have for all $t\in(-a,a]$ and $x\in[-a,a]$ \begin{eqnarray}\label{eq:f-f'-dim1} f(x)\leq f(t)+(x-t)f'(t). \end{eqnarray} Multiplying both sides of the inequality by $mf(t)^{m-1}$ and integrating on $[0,a]$ gives \[ f(x)I_{m-1}= f(x)\int_{0}^{a}mf(t)^{m-1}dt\leq \int_{0}^{a}mf(t)^mdt+\int_{0}^{a}m(x-t)f'(t)f(t)^{m-1}dt. \] Integration by parts gives that for every $x\in[-a,a]$ \begin{eqnarray*} \int_{0}^{a}m(x-t)f'(t)f(t)^{m-1}dt =(x-a)f(a)^{m}-xf(0)^{m}+\int_{0}^{a}f(t)^mdt. \end{eqnarray*} Using that $(x-a)f(a)^{m}\le 0$, we get that for all $x\in[-a,a]$, \begin{equation}\label{ineq-Im} f(x)I_{m-1}\leq I_m-xf(0)^{m}. \end{equation} Hence, one has for all $x\in[-a,a]$ \[ \frac{I_{m-1}}{I_{m}}\leq\frac{1}{f(x)}\left(1-\frac{xf(0)^{m}}{I_{m}}\right). \] By taking the infimum over all $x\in[-a,a]$ and denoting $y_0=\frac{f(0)^{m}}{I_{m}}$, we get \[ \frac{I_{m-1}}{I_{m}}\leq\L f\left(y_0\right). \] Since $\L f$ is concave, we can apply the result to $\L f$ and, denoting $x_0=\frac{(\L f(0))^{m}}{J_{m}}$, we get \[ \frac{J_{m-1}}{J_{m}}\leq f\left(x_0\right). \] On the other hand, the definition of $\L f$ implies that for all $x\in\supp f$, $y\in\supp\L f$ \[ f(x)\L f(y)\leq (1-xy)_{+}. \] Using \eqref{ineq-Im} for $x=0$ gives that $I_m\ge f(0)I_{m-1}$; iterating this inequality and using that $I_0=a$, we get $I_m\ge f(0)^ma$.
In the same way, one has $J_m\ge \L f(0)^ma^{-1}$. Multiplying these inequalities and using that, since $f$ is even and concave, $\L f(0)=\max(f)^{-1}=f(0)^{-1}$, we conclude that $I_mJ_m\ge1$, which implies that $x_0y_0\le1$. Hence, one has \begin{eqnarray}\label{eq:Im-Jm-x_0} \frac{I_{m-1}J_{m-1}}{I_{m}J_{m}}\leq f\left(x_0\right)\L f\left(y_0\right)\leq 1-x_0y_0= 1-\frac{1}{I_{m}J_{m}}. \end{eqnarray} Finally we get $I_{m-1}J_{m-1}\leq I_{m}J_{m}-1$. Hence the sequence $(I_mJ_m-m)_m$ is non-decreasing. Since $I_{m}J_{m}=\frac{(m+1)^2}{4}P_m(f)$, we deduce that the sequence $u_m=(m+1)^2P_m(f)-4m$ is non-decreasing. It follows that $u_m\ge u_0=P_0(f)=4$, hence $P_m(f)\ge\frac{4}{m+1}$, for any integer $m$. Assume now that there is equality, for some integer $m$: $P_m(f)=\frac{4}{m+1}$. Then, there is equality in \eqref{eq:Im-Jm-x_0}. It follows that there is equality in \eqref{ineq-Im} for $x=x_0$ and also $(x_0-a)f(a)^{m}=0$. Moreover, there is also equality in the corresponding inequality for $\L f$ instead of $f$ and $y_0$ instead of $x_0$, thus $(y_0-\frac{1}{a})\L f(1/a)^m=0$. In addition, there is equality in \eqref{eq:f-f'-dim1} for $x=x_0$ and for almost any $t\in[-a,a]$: \[ f(t)= f(x_0)+(t-x_0)f'(t). \] On the other hand, one has $f(t)=f(x_0)+\int_{x_0}^tf'(u)du$. Since the left derivative $f'$ is non-increasing, we deduce that $f'$ is constant on $[0,x_0)$ and on $(x_0,a]$, which means that $f$ is affine on $[0,x_0)$ and on $(x_0,a]$. In the same way, $\L f$ is affine on $[0,y_0)$ and on $(y_0,1/a]$. If $x_0=a$, then $f$ is affine on $[0,a]$ and $J_m=\L f(0)^m/a$. But, by concavity, one has for every $y\in[0,1/a]$ \[ \L f(y)\ge\L f(0)(1-ay). \] Raising this inequality to the power $m$ and integrating, one gets that $J_m\ge\L f(0)^m/a$. Since there is equality, we deduce that $\L f(y)=\L f(0)(1-ay)$, for every $y\in[0,1/a]$. Thus $f=f(0)\1_{[-a,a]}$. If $x_0\neq a$, then $f(a)=0$. Thus there exist $0\le b\le1/a$ and $c>0$ such that, for $t\ge0$, \[ f(t)=c(1-bt)\1_{[0,x_0]}(t)+c(1-bx_0)\frac{a-t}{a-x_0}\1_{(x_0,a]}(t). \] This implies that, for any $y\ge0$, \[ \L f(y)=\frac{1}{c}\left(\1_{[0,b]}(y)+\frac{1-x_0y}{1-bx_0}\1_{(b,\frac{1}{a}]}(y) \right). \] Since $x_0\neq a$, one has $\L f(1/a)\neq0$, hence $y_0=1/a$. This gives $I_m=af(0)^m$ and this implies as before that $f(x)=f(0)(1-\frac{x}{a})$, for all $x\in[0,a]$. \forget A simple computation gives \[ \int_0^{+\infty}f^m=\frac{c^m}{(m+1)b}\left(1-\left(1-ab\right)^{m+1} \right)\quad\hbox{and}\quad \int_0^{+\infty}(\L f)^m=\frac{1}{c^m}\left(ab+\frac{1}{m+1}(1-ab)\right). \] Hence, denoting $x=1-ab\in[0,1]$, we obtain that \[ P_m(f)=\frac{4(m+1-mx)}{(m+1)^2}\times\frac{1-x^{m+1}}{1-x}. \] Since we assumed that $f$ satisfies the equality case in the inequality $P_m(f)\ge4/(m+1)$, it means that $x\in[0,1]$ satisfies the equality case in the inequality \[ \frac{1}{1+x+\cdots +x^m}=\frac{1-x}{1-x^{m+1}}\le 1-\frac{m}{m+1}x. \] But from the strict convexity of the left hand side and the fact that there is equality for $x=0$ and $x=1$, the equality cases can only occur for $x=0$ or $x=1$. These cases correspond to $b=1/a$ or $b=0$ and this establishes the claimed equality cases. \forgotten \end{proof} Let us now prove the 2-dimensional case of our inequality when $m$ is an integer. \begin{theorem}\label{dim2} Let $m\in\N$ and $f:\R^{2}\to\R_{+}$ be an even function, concave on its support, such that $0<\int f<+\infty$. Then, $$ P_{m}(f)\geq \frac{16}{(m+1)(m+2)}. $$ Moreover, there is equality for $f=\1_{[-1,1]^2}$.
\end{theorem} As noticed before, this theorem may be reformulated in the following form: for any integer $m>0$ and any even function $g:\R^2\to\R_+$ that is $\frac{1}{m}$-concave, integrable with $\int_{\R^{2}}g>0$ one has \begin{eqnarray}\label{eq:m-concave} \int_{\R^{2}}g\int_{\R^{2}}\L_{\frac{1}{m}}g\geq\frac{16m^2}{(m+1)(m+2)}. \end{eqnarray} The following corollary proves that letting $m$ go to infinity we get a new proof of Mahler conjecture for even log-concave functions, which was our previous main result in \cite{FN}. \begin{corollary}\label{cor:FN23} Let $f:\R^{2}\to\R$ be an even log-concave function, then \begin{eqnarray}\label{eq:FN} \int_{\R^{2}}f(x)dx\int_{\R^{2}}f^{\circ}(y)dy\geq 16, \end{eqnarray} where $f^{\circ}(y)=\inf_{x\in\R^{n}}\frac{e^{-\langle x,y\rangle}}{f(x)} $ is the polar function of $f$. \end{corollary} \begin{proof} Let $f$ be an even log-concave function and consider the function $f_{m}$ defined as follows \[ f_{m}(x)=\left(1+\frac{\log f(x)}{m}\right)^{m}_{+}. \] First, notice that the log-concavity of $f$ implies the $\frac{1}{m}$-concavity of $f_{m}$. Since for all $x$, $e^{-x}\ge 1-x $, we get \[ \left(1-\frac{1}{m}\langle x,y\rangle\right)^{m}\le e^{-\langle x,y\rangle}. \] Thus, $\L_{\frac{1}{m}}f\leq\L_{0}f=f^{\circ}$. In addition, it is easy to see that $f_{m}\leq f$ for any $m>0$, and since a log-concave function is continuous on its support one has $\lim_{m\to\infty}f_{m}=f$ locally uniformly on $\R^{2}$. Hence, we can conclude that $\lim_{m\to\infty}\int_{\R^{2}}f_{m}=\int_{\R^{2}}f$. The fact that $\L_{\frac{1}{m}} f\leq f^{\circ}$ implies that $\L_{\frac{1}{m}}f_{m}\leq f_{m}^{\circ}$. Moreover, one has \[ \frac{16m^2}{(m+1)(m+2)}\leq \int_{\R^{2}}f_{m}\int_{\R^{2}}\L_{\frac{1}{m}}f_{m}\leq\int_{\R^{2}}f_{m}\int_{\R^{2}}f_{m}^{\circ}. \] On the other hand, since the functions $f_{m}$ are log-concave and $\lim_{m\to\infty}f_{m}=f$, then using lemma 3.2 in \cite{AKM}, we know that $\lim_{m\to\infty}f_{m}^{\circ}=f^{\circ}$ locally uniformly on the interior of the support of $f^{\circ}$. Applying the same lemma again on the log-concave function $f_{m}^{\circ}$, we get that $\lim_{m\to\infty}\int_{\R^{2}}f_{m}^{\circ}=\int_{\R^{2}}f^{\circ}$. Thus, we conclude that \[ \int_{\R^{2}}f\int_{\R^{2}}f^{\circ}=\lim_{m\to\infty}\int_{\R^{2}}f_{m}\int_{\R^{2}}f^{\circ}_{m}\geq 16. \] \end{proof} \begin{remark} Notice that we don't get the equality case in this way. Recall that, in \cite{FN}, it was proved that there is equality in \eqref{eq:FN} if and only if there exists two Hanner polytopes $K_1\subset F_1$ and $K_2\subset F_2$, where $F_1$ and $F_2$ are two supplementary subspaces in $\R^{2}$, with $0\le\dim(F_i)\le2$, such that for all $(x_1,x_{2})\in F_1\times F_{2}$ \[ f(x_{1},x_{2})=e^{-\|x_{1}\|_{K_{1}}}\1_{K_{2}}(x_{2}). \] It would be natural to conjecture that a concave function $f$ on its support satisfies the equality case in \eqref{eq:m-concave} if and only if \[ f(x_{1},x_{2})=(1-\|x_{1}\|_{K_{1}})_+\1_{K_{2}}(x_{2}), \] with $K_1, K_2, F_1, F_2$ as before. It is easy to see that such functions satisfy the equality case. Nevertheless, clearly, the function $f(x_1,x_2)=(1-|x_1|)_+\1_{B_1^2}(x_1,x_2)$ satisfies also the equality case but cannot be written in this form. \end{remark} For the proof of theorem \ref{dim2}, let us start by proving that every even concave function $f:\R^{2}\to\R_{+}$ can be $m$-equipartitioned by the canonical basis of $\R^{2}$. 
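Note that, if $f$ is moreover unconditional, that is, $f(|x_{1}|,|x_{2}|)=f(x_{1},x_{2})$ for every $(x_{1},x_{2})\in\R^{2}$, then no change of variables is needed: by symmetry, each quadrant $\R^{2}_{\e}$ carries exactly one quarter of $\int_{\R^{2}}f^{m}$ and of $\int_{\R^{2}}f^{m-1}$, so $f$ is already $m$-equipartitioned. The point of the following lemma is that, for a general even concave $f$, a suitable linear change of variables achieves the same equipartition.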
\begin{lemma}\label{lem:m-equip} Let $m>0$ and $f:\R^{2}\to\R_{+}$ be an even concave function such that $0<\int_{\R^{2}}f<+\infty$. Then there exists a linear invertible map $T:\R^{2}\to\R^{2}$ such that the function $f\circ T$ is $m$-equipartitioned. \end{lemma} \begin{proof} The proof follows the proof of Lemma 4.2 in our previous work \cite{FN} but we reproduce it for completeness. For any $u\in S^1$, let $C(u)\subset S^1$ be the open half-circle delimited by $u$ and $-u$ containing the vectors $v$ which are after $u$ with respect to the counterclockwise orientation of $S^1$. For $v\in C(u)$, let $C(u,v)=\R_+u+\R_+v$ be the cone generated by $u$ and $v$ and define $$ f_u(v)=\int_{C(u,v)}f(x)^mdx. $$ The map $f_u$ is continuous and increasing on $C(u)$, $f_u(u)=0$ and, since $f$ is even, $f_u(v)\longrightarrow\frac{1}{2}\int_{\R^{2}}f^{m}(x)dx$ when $v\longrightarrow -u$. Thus, there exists a unique $v(u)\in C(u)$ such that $$ f_u(v(u))=\int_{C(u,v(u))}f^{m}(x)dx=\frac{1}{4}\int_{\R^2}f^{m}(x)dx. $$ Notice that $v:S^1\to S^1$ is continuous and, since $f$ is even, one has $$ \int_{C(v(u),-u)}f^{m}(x)dx=\frac{1}{4}, $$ thus $v(v(u))=-u$ for any $u\in S^1$. For $u\in S^1$, let $$ g(u)=\int_{C(u,v(u))}f^{m-1}(x)dx-\frac{1}{4}. $$ Then, $g$ is continuous on $S^1$ and, since $f$ is even, $$ g(v(u))=\int_{C(v(u),-u)}f^{m-1}(x)dx-\frac{1}{4}=\frac{1}{2}-\int_{C(u,v(u))}f^{m-1}(x)dx-\frac{1}{4}=\frac{1}{4}-\int_{C(u,v(u))}f^{m-1}(x)dx=-g(u). $$ Hence, by the intermediate value theorem, there exists $u\in S^1$ such that $g(u)=0$, thus \[\int_{C(u,v(u))}f^{m-1}=\frac{1}{4}\int_{\R^{2}}f^{m-1}\quad\hbox{and}\quad \int_{C(u,v(u))}f^{m}=\frac{1}{4}\int_{\R^{2}}f^{m}. \] \end{proof} Let us proceed to the proof of our main theorem. \begin{proof}[\bf{Proof of Theorem \ref{dim2}}] Let $f:\R^{2}\to\R_{+}$ be an even concave function such that $0<\int_{\R^{2}}f<+\infty$. Let $L=B_{\infty}^{m}$. For simplicity, the convex body $K_{B_\infty^m,m}(f)$ introduced in section 2 will be denoted in this case by $K_{\infty,m}(f)$ and its dual $K_{B_1^m,m}(\L f)$ will be denoted by $K_{1,m}(\L f)$. Thus, we know that \[ P(K_{\infty,m}(f))=P(B_{\infty}^{m})P_{m}(f)=\frac{4^{m}}{m!}P_{m}(f). \] The proof of the theorem concludes if $K_{\infty,m}$ verifies Mahler's conjecture in dimension $m+2$, i.e. $P(K_{\infty,m})\geq\frac{4^{m+2}}{(m+2)!}$. For that, it is enough to prove that the convex body $K_{\infty,m}$ verifies the hypotheses of Proposition \ref{prop:Mahler}. We proceed by induction on $m\in\N$. Set, for any positive integer $k$ and for any $\varepsilon\in\{-1,1\}^{k}$, $\R^{k}_{\varepsilon}=\{x\in\R^{k};\varepsilon_{i}x_{i}\geq 0,\forall i\in\{1,...,k\}\}$. \begin{itemize} \item Let us start by proving that $K_{\infty,m}(f)$ can be partitioned into $2^{m+2}$ pieces of the same volume. Since $f$ is $m$-equipartitioned, we get \begin{eqnarray*} \left|K_{\infty,m}(f)\cap\R^{m+2}_{+}\right|&=&\left|\{(x,t)\in\R^{2}_{+}\times\R_{+}^{m};\|t\|_{\infty}\leq f(x)\}\right| =\int_{\R^{2}_{+}}f(x)^{m}dx|B_{\infty}^{m}\cap\R_{+}^{m}|\\ &=&\int_{\R^{2}_{+}}f(x)^{m}dx=\frac{1}{2^{2}|B_{\infty}^{m}|}\int_{\R^{2}}f(x)^{m}|B_{\infty}^{m}|dx =\frac{1}{2^{m+2}}|K_{\infty,m}(f)|. \end{eqnarray*} Using the fact that $\int_{\R^{2}_{+}}f(x)^{m}dx=\int_{\R_{+}\times\R_{-}}f(x)^{m}dx$, we get, \[ |K_{\infty,m}(f)\cap\R_{\e}^{m+2}|=\frac{1}{2^{m+2}}|K_{\infty,m}(f)|\quad\forall\e\in\{-1,1\}^{m+2}. 
\]
\item Our next step is to prove that $K_{\infty,m}(f)\cap e_{i}^{\perp}$ verifies Mahler's conjecture for all $i\in\{1,...,m+2\}$.\\
For $i=1$ or $i=2$, the same argument holds, so let us assume that $i=1$. One has
\[
K_{\infty,m}(f)\cap e_{1}^{\perp}=\{(0,x_{2},t)\in\R^{2}\times\R^{m};\|t\|_{\infty}\leq f(0,x_{2})\}.
\]
Thus,
\[
|K_{\infty,m}(f)\cap e_{1}^{\perp}|=2^{m}\int_{\R}f(0,x_{2})^{m}dx_{2}.
\]
Since the dual of a section is the projection of the dual, one has
\[
\left(K_{\infty,m}(f)\cap e_{1}^{\perp}\right)^{\circ}=\left(K_{\infty,m}\left(f_{|e_{1}^{\perp}}\right)\right)^{\circ}=K_{1,m}\left(\L\left(f_{|e_{1}^{\perp}}\right)\right).
\]
Thus
\[
\left|\left(K_{\infty,m}(f)\cap e_{1}^{\perp}\right)^{\circ}\right|=\frac{2^{m}}{m!}\int_{\R}\left(\L\left(f_{|e_{1}^{\perp}}\right)(y)\right)^{m}dy.
\]
Our Theorem \ref{dim1} in dimension $1$ implies that
\[
\int_{\R}(f(0,x_{2}))^{m}dx_{2}\int_{\R}(\L(f_{|e_{1}^{\perp}})(y))^{m}dy\geq\frac{4}{m+1}.
\]
Hence,
\[
P(K_{\infty,m}(f)\cap e_{1}^{\perp})=\frac{4^{m}}{m!}\int_{\R}(f(0,x_{2}))^{m}dx_{2}\int_{\R}(\L(f_{|e_{1}^{\perp}})(y))^{m}dy\geq\frac{4^{m+1}}{(m+1)!},
\]
which implies that $K_{\infty,m}(f)\cap e_{1}^{\perp}$ verifies Mahler's conjecture in $\R^{m+1}$.\\
Now, for $i\ge3$, again, by symmetry, we may assume that $i=3$. One has
\[
K_{\infty,m}(f)\cap e_{3}^{\perp}=\{(x,t)\in\R^{2}\times(\R^{m}\cap e_{3}^{\perp});\|t\|_{\infty}\leq f(x)\}=K_{\infty,m-1}(f).
\]
Hence,
\begin{eqnarray*}
P(K_{\infty,m}(f)\cap e_{3}^{\perp})&=&P(K_{\infty,m-1}(f))=\frac{4^{m-1}}{(m-1)!}\int_{\R^{2}}f(x)^{m-1}dx\int_{\R^{2}}(\L f(y))^{m-1}dy\\
&\geq&\frac{4^{m-1}}{(m-1)!}\frac{16}{m(m+1)}=\frac{4^{m+1}}{(m+1)!},
\end{eqnarray*}
where the last inequality follows from the induction hypothesis.
\item Finally, let us show that $K_{\infty,m}(f)\cap e_{i}^{\perp}$, for any $i\in\{1,...,m+2\}$, can be partitioned into $2^{m+1}$ pieces of the same $(m+1)$-volume. For $i=1$ (and the same follows for $i=2$), since $f$ is even, one has
\begin{eqnarray*}
\left|K_{\infty,m}(f)\cap e_{1}^{\perp}\cap\R^{m+1}_{+}\right|_{m+1}
&=&\left|\{(0,x_{2},t)\in\R^{2}_{+}\times\R^{m}_{+};\|t\|_{\infty}\leq f(0,x_{2})\}\right|_{m+1}\\
&=&\int_{0}^{+\infty}\left|f(0,x_{2})B_{\infty}^{m}\cap\R^{m}_{+}\right|_{m}dx_{2}\\
&=&\int_{0}^{\infty}f(0,x_{2})^{m}dx_{2}\\
&=&\frac{1}{2^{m+1}}\left|K_{\infty,m}(f)\cap e_{1}^{\perp}\right|_{m+1}.
\end{eqnarray*}
For $i=3$ (and the same follows for $i>3$),
\[K_{\infty,m}(f)\cap e_{3}^{\perp}\cap \R^{m+1}_{+}=\{(x,t)\in\R^{2}_{+}\times(\R^{m}_{+}\cap e_{1}^{\perp});\|t\|_{\infty}\leq f(x)\}=K_{\infty,m-1}(f)\cap\R^{m+1}_{+}.
\]
Thus,
\[
\left|K_{\infty,m}(f)\cap e_{3}^{\perp}\cap\R^{m+1}_{+}\right|_{m+1}=\int_{\R^{2}_{+}}f(x)^{m-1}dx\times|B^{m-1}_{\infty}\cap\R^{m-1}_{+}|_{m-1}=\int_{\R^{2}_{+}}f(x)^{m-1}dx,
\]
and the result follows from the $m$-equipartition condition.
\end{itemize}
\end{proof}
\begin{remark}
Notice that if one replaces in the above argument the cube $B_\infty^m$ by any unconditional convex body $L$ in $\R^m$, the same proof shows that $K_{L,m}(f)$ satisfies Mahler's conjecture in dimension $m+2$.
\end{remark}
\forget
\begin{conjecture}
Let $K$ be a symmetric convex body in $\R^{n}$. Then,
\[
\int_{K}m|t|^{m-1}dt\int_{K^{\circ}}m|t|^{m-1}dt\geq\int_{B_{\infty}^{n}}m|t|^{m-1}dt\int_{B_{1}^{n}}m|t|^{m-1}dt.
\] \end{conjecture} Since $K$ is a symmetric convex body, we know that there exists $a$, $b>0$ and two even concave functions $f_{1}$, $f_{2}:[-a,b]\to\R$ such that \[ K=\{(x,t)\in\R\times\R;\quad-a\leq x\leq b,\quad-f_{2}(x)\leq t\leq f_{1}(x)\}. \] Since $K$ is symmetric then we can consider that $-a\leq x\leq a$ and $-f_{2}(x)=-f_{1}(-x)$. Hence \[ K=\{(x,t)\in[-a,a]\times\R;\quad -f(-x)\leq t\leq f(x)\}. \] \[ I=\int_{K}m|t|^{m-1}dt=\int_{-a}^{a}\int_{-f(-x)}^{f(x)}m|t|^{m-1}dtdx=2\int_{-a}^{a}\int_{f(a)\frac{x}{a}}^{f(x)}m|t|^{m-1}dtdx \] Let $b\in[-a,a]$ such that $f(b)=0$. Thus, \begin{eqnarray*} I&=&2\int_{-a}^{0}\left[t^{m}\right]^{f(x)}_{f(a)\frac{x}{a}}dx+2\int_{0}^{b}\left[t^{m}\right]^{f(x)}_{0}dx+2\int_{0}^{b}\left[-(-t)^{m}\right]^{0}_{f(a)\frac{x}{a}}dx+2\int_{b}^{a}\left[-(-t)^{m}\right]^{f(x)}_{f(a)\frac{x}{a}}dx\\ &=&2\int_{-a}^{0}\left(f(x)^{m}-\left(\frac{f(a)}{a}x\right)^{m}\right)dx+2\int_{0}^{b}f(x)^{m}dx+2\int_{0}^{b}\left(-f(a)\frac{x}{a}\right)^{m}dx\\ &+&2\int_{b}^{a}\left[-(-f(x))^{m}+\left(-f(a)\frac{x}{a}\right)^{m}\right]dx\\ &=&2\int_{-a}^{a}\mbox{sign}(f(x))|f(x)|^{m}dx-\left(\frac{f(a)}{a}\right)^{m}\left[\frac{x^{m+1}}{m+1}\right]^{0}_{-a}+2\left(\frac{-f(a)}{a}\right)^{m}\left[\frac{x^{m+1}}{m+1}\right]^{a}_{0}\\ &=&2\int_{-a}^{a}\mbox{sign}(f(x))|f(x)|^{m}-\frac{|f(a)|^{m}a}{m+1}+\frac{|f(a)|^{m}a}{m+1}\\ &=&2\int_{-a}^{a}\mbox{sign}(f(x))|f(x)|^{m}dx. \end{eqnarray*} \begin{theorem} Let $K$ be a symmetric convex body in $\R^{2}$. Then, \[ \int_{K}m|t|^{m-1}dt\int_{K^{\circ}}m|t|^{m-1}dt\geq\frac{16}{m+1}. \] \end{theorem} \begin{proof} Let $\mu(K)=\int_{K}m|t|^{m-1}dt$.\\ This is true due to Saint-Raymond. Il faut introduire les shadow systems\\ t reste fixe et je deplace le x tout au long de la droite horizontale alors la mesure reste la meme ($\mu(K_{t})$ est cte) et $t\to\mu(K_{t}^{\circ})^{-1}$ est paire et convexe. \end{proof} \begin{theorem} Let $K$ be a symmetric convex body in $\R^{3}$ and symmetric with respect to $e_{3}^{\perp}$. Set $\mu(K)=\int_{K}m|t|^{m-1}dtdx$, then \[ \mu(K)\mu(K^{\circ})\geq\mu(B_{\infty}^{3})\mu(B_{1}^{3}). \] \end{theorem} \newpage \forgotten \section{Proof of the inequalities in dimension 1 and 2 for $-\frac{1}{n}<s<0$.} Let $-\frac{1}{n}<s=-\frac{1}{m}<0$ and $g:\R^{n}\to\R_{+}$ be an even $s$-concave function. Then, the function $f=g^s:\R^{n}\to(0,+\infty]$ is even and convex. In this section, we give a sharp lower bound of the operator $Q_{m}$ applied to even convex functions $f$ in dimensions $1$ and $2$. First, we start by proving the following theorem, which is the analogue version of Theorem~\ref{th:mahler-s-F} for convex functions rather than $s$-concave ones, as seen in equation \eqref{eq:conjQmf-F}. \begin{theorem}\label{th:m-neg-F-d2} Let $f:\R^{2}\to (0,+\infty)$ be an even convex function such that $f\in\F$. Then, for any integer $m>2$ , one has \[ \int_{\R^{2}}\frac{1}{(f(x))^{m}}dx\int_{\R^{2}}\frac{1}{(\M f(y))^{m}}dy\geq\frac{16m}{(m-1)(m-2)^2}. \] Moreover, there is equality for $f(x)=\max(1,\|x\|_\infty)$. 
\end{theorem}
\begin{proof}
Let us recall equation (\ref{eq:Mf}) and apply it in dimension $n$, which gives that, for all $m>n$ and for every symmetric convex body $K\subset\R^{m-n}$, one has
\[
\int_{\R^{n}}\frac{1}{f(x)^{m}}dx\int_{\R^{n}}\frac{1}{(\M f(y))^{m}}dy=\left(\frac{m}{m-n}\right)^{2}\frac{1}{|K||K^{\circ}|}|C_{m-n}^{K}(f)||C_{m-n}^{K}(f)^{\circ}|,
\]
where $C^{K}_{m-n}(f)=\left\{(x,s)\in\R^{n}\times\R^{m-n}; \|s\|_{K}f\left(\frac{x}{\|s\|_{K}}\right)\leq 1\right\}.$ Applying it for $K=B_{\infty}^{m-n}$, we simplify the notation of the associated convex body to
\[
C_{m-n}^{\infty}(f)=\left\{(x,s)\in\R^{n}\times\R^{m-n};\|s\|_{\infty}f\left(\frac{x}{\|s\|_{\infty}}\right)\leq 1\right\}.
\]
Its dual is denoted by $C_{m-n}^1(\M f)$. In addition, if $C_{m-n}^{\infty}(f)$ verifies Mahler's inequality in dimension $m$, we get
\begin{eqnarray*}
\int_{\R^{n}}\frac{1}{f(x)^{m}}dx\int_{\R^{n}}\frac{1}{(\M f(y))^{m}}dy&=& \left(\frac{m}{m-n}\right)^{2}\frac{1}{|B_{\infty}^{m-n}||B_{1}^{m-n}|}|C_{m-n}^{\infty}(f)||C_{m-n}^{\infty}(f)^{\circ}|\\
&\geq&\left(\frac{m}{m-n}\right)^{2}\frac{(m-n)!}{4^{m-n}}\frac{4^{m}}{m!}\\
&=&\frac{m}{m-n}\times\frac{4^{n}}{(m-n)\cdots(m-1)}.
\end{eqnarray*}
Thus, we get the desired inequality \eqref{eq:conjQmf-F} in dimension $n$. Since the $m$-equipartition condition, which is the key of our proof, is only verified in dimension $2$, it remains to prove that $C_{m-2}^{\infty}(f)$ verifies the hypothesis of Proposition \ref{prop:Mahler} in dimension $m$ for any integer $m\ge3$. We prove it by induction on $m$. For $m=3$, $C_{m-2}^{\infty}(f)=C_1^{\infty}(f)$ is a symmetric convex body in $\R^3$ hence it satisfies Mahler's conjecture by \cite{IS}. Let $m\ge4$ and assume that Mahler's conjecture is satisfied for $C_{m-3}^{\infty}(f)$ and let us prove it for $C_{m-2}^{\infty}(f)$. For that, recall that $\R^{2}_{\varepsilon}=\{x\in\R^{2};\varepsilon_{i}x_{i}\geq 0,\forall i\in\{1,2\}\}$, $\forall\varepsilon\in\{-1,1\}^{2}$. We can assume that the even convex function $f:\R^{2}\to\R_{+}$ is $m$-equipartitioned, in the sense that, for all $\varepsilon\in\{-1,1\}^{2}$, one has
\[
\int_{\R_{\varepsilon}^{2}}\frac{dx}{f(x)^{m}}=\frac{1}{4}\int_{\R^{2}}\frac{dx}{f(x)^{m}}\quad\mbox{and}\quad\int_{\R_{\varepsilon}^{2}}\frac{dx}{f(x)^{m-1}}=\frac{1}{4}\int_{\R^{2}}\frac{dx}{f(x)^{m-1}}.
\]
We apply Proposition \ref{prop:Mahler} by proving that the convex body $C_{m-2}^{\infty}(f)$ verifies its conditions, which yields the desired lower bound on the Mahler volume of $C_{m-2}^{\infty}(f)$.
\begin{itemize}
\item \underline{$C_{m-2}^{\infty}(f)$ can be partitioned into $2^{m}$ pieces of the same volume.}
\[
C_{m-2}^{\infty}(f)\cap\R_{+}^{m}=\left\{(x,s)\in\R_{+}^{2}\times\R_{+}^{m-2};\|s\|_{\infty}f\left(\frac{x}{\|s\|_{\infty}}\right)\leq 1\right\}.
\]
Then,
\begin{eqnarray*}
\left|C_{m-2}^{\infty}(f)\cap\R_{+}^{m}\right|=\frac{m-2}{m}\left|B_{\infty}^{m-2}\cap\R_{+}^{m-2}\right|\int_{\R^{2}_{+}}\frac{dx}{f(x)^{m}}=\frac{m-2}{4m}\int_{\R^{2}}\frac{dx}{f(x)^{m}}.
\end{eqnarray*}
Thus,
\[
|C_{m-2}^{\infty}(f)|=\frac{m-2}{m}|B_{\infty}^{m-2}|\int_{\R^{2}}\frac{dx}{f(x)^{m}}=\frac{m-2}{m}2^{m-2}\int_{\R^{2}}\frac{dx}{f(x)^{m}}=2^{m}|C_{m-2}^{\infty}(f)\cap\R_{+}^{m}|.
\]
Similarly, since $f$ is even and $m$-equipartitioned, we can easily prove that for all $\varepsilon\in\{-1,1\}^{m}$
\[
|C_{m-2}^{\infty}(f)\cap\R_{\varepsilon}^{m}|=\frac{1}{2^{m}}|C_{m-2}^{\infty}(f)|.
\]
\item \underline{$C_{m-2}^{\infty}(f)\cap e_{i}^{\perp}$ verifies Mahler's conjecture in dimension $m-1$, for $1\le i\le m$.}\\
For $i=1$ (the same holds for $i=2$),
\[C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}=\left\{(0,x_{2},s)\in\R^{2}\times\R^{m-2};\|s\|_{\infty}f\left(\frac{(0,x_{2})}{\|s\|_{\infty}}\right)\leq 1\right\}=\{0\}\times C_{m-2}^{\infty}\left(f_{|e_{1}^{\perp}}\right).
\]
One has
\[
\left(C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}\right)^{\circ}=\left(\{0\}\times C_{m-2}^{\infty}\left(f_{|e_{1}^{\perp}}\right)\right)^{\circ}=\{0\}\times \left(C_{m-2}^{\infty}\left(f_{|e_{1}^{\perp}}\right)\right)^{\circ}=\{0\}\times C_{m-2}^{1}\left(\M \left(f_{|e_{1}^{\perp}}\right)\right).
\]
Thus, we get the following volumes
\[
\left|C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}\right|=\left|C_{m-2}^{\infty}\left(f_{|e_{1}^{\perp}}\right)\right|=\frac{m-2}{m-1}2^{m-2}\int_{\R}\frac{dx_{2}}{f_{|e_{1}^{\perp}}(x_{2})^{m-1}}
\]
and
\[
\left|\left(C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}\right)^{\circ}\right|=\left|C_{m-2}^{1}\left(\M\left(f_{|e_{1}^{\perp}}\right)\right)\right|=\frac{(m-2)2^{m-2}}{(m-1)(m-2)!}\int_{\R}\frac{dx_{2}}{\left(\M f_{|e_{1}^{\perp}}(x_{2})\right)^{m-1}}.
\]
Applying the result in dimension $1$, proved in \cite{FGSZ}, to $f_{|e_{1}^{\perp}}$ we get
\begin{eqnarray*}
\left|C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}\right|\left|\left(C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}\right)^{\circ}\right|
&=&\frac{(m-2)^{2}4^{m-2}}{(m-1)^{2}(m-2)!}\int_{\R}\frac{dx_{2}}{ f_{|e_{1}^{\perp}}(x_{2})^{m-1}}\int_{\R}\frac{dx_{2}}{\left(\M f_{|e_{1}^{\perp}}(x_{2})\right)^{m-1}}\\
&\geq&\frac{(m-2)^{2}4^{m-2}}{(m-1)^{2}(m-2)!}\frac{m-1}{m-2}\frac{4}{m-2}
=\frac{4^{m-1}}{(m-1)!}.
\end{eqnarray*}
For $i=3$ (the case $i>3$ follows similarly):
\begin{eqnarray*}
C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}=\left\{(x,0,s)\in\R^{2}\times\{0\}\times\R^{m-3};\|(0,s)\|_{\infty}f\left(\frac{x}{\|(0,s)\|_{\infty}}\right)\leq 1\right\}=C_{m-3}^{\infty}(f).
\end{eqnarray*}
Thus,
\[
\left|C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\right|=\left|C_{m-3}^{\infty}(f)\right|=\frac{m-3}{m-1}2^{m-3}\int_{\R^{2}}\frac{dx}{f(x)^{m-1}}
\]
and
\[
\left|\left(C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\right)^{\circ}\right|=\left|\left(C_{m-3}^{\infty}(f)\right)^{\circ}\right|=\left|C_{m-3}^{1}(\M f)\right|=\frac{m-3}{m-1}\frac{2^{m-3}}{(m-3)!}\int_{\R^{2}}\frac{dy}{\left(\M f(y)\right)^{m-1}}.
\]
Then, using the induction hypothesis, we obtain
\begin{eqnarray*}
\left|C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\right|\left|\left(C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\right)^{\circ}\right|&=&\frac{(m-3)^{2}}{(m-1)^{2}}\frac{4^{m-3}}{(m-3)!}\int_{\R^{2}}\frac{dx}{f(x)^{m-1}}\int_{\R^{2}}\frac{dy}{\left(\M f(y)\right)^{m-1}}\\
&\ge&\frac{4^{m-1}}{(m-1)!},
\end{eqnarray*}
which concludes the second part.
\item \underline{$C_{m-2}^{\infty}(f)\cap e_{i}^{\perp}$ can be partitioned into $2^{m-1}$ pieces of the same volume, for $1\le i\le m$.}\\
For $i=1$ and $i=2$, one has
\[
C_{m-2}^{\infty}(f)\cap e_{1}^{\perp}=\{0\}\times C_{m-2}^{\infty}\left(f_{|e_{1}^{\perp}}\right)\quad\hbox{and}\quad C_{m-2}^{\infty}(f)\cap e_{2}^{\perp}= C_{m-2}^{\infty}\left(f_{|e_{2}^{\perp}}\right)\times\{0\}.
\]
These convex bodies are unconditional, thus we get the partition.
For $i=3$, by the $m$-equipartition condition of $f$, we obtain
\begin{eqnarray*}
\left|C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\cap\R_{+}^{m-1}\right|&=&\left|C_{m-3}^{\infty}(f)\cap\R_{+}^{m-1}\right|=\frac{m-3}{m-1}\left|B_{\infty}^{m-3}\cap\R_{+}^{m-3}\right|\int_{\R_{+}^{2}}\frac{dx}{f(x)^{m-1}}\\
&=&\frac{1}{4}\frac{m-3}{m-1}\int_{\R^{2}}\frac{dx}{f(x)^{m-1}}=\frac{1}{2^{m-1}}\left|C_{m-2}^{\infty}(f)\cap e_{3}^{\perp}\right|.
\end{eqnarray*}
Using the same method, one may prove the partition of the body $C_{m-2}^{\infty}(f)\cap e_{i}^{\perp}$ for $i>3$.
\end{itemize}
Therefore $C_{m-2}^\infty(f)$ satisfies Mahler's conjecture, which concludes the proof of the inequality of Theorem~\ref{th:m-neg-F-d2}. Moreover, it is easy to check that there is equality for $f(x)=\max(1,\|x\|_\infty)$, since in this case $C^\infty_{m-2}(f)=B_\infty^m$.
\end{proof}
Our next goal in this section is to prove the following theorem in dimension $1$, which establishes Mahler's conjecture in dimension 1 for $s$-concave functions for any $-1<s<0$.
\begin{theorem}\label{th:mahler-sneg}
Let $f:\R\to (0,+\infty]$ be an even convex function such that $\lim_{t\to+\infty}f(tx)=+\infty$, for any $x\neq 0$. Then, for any $m>1$, one has
\[
Q_m(f)=\int_{\R}\frac{dx}{f(x)^m}\int_{\R}\frac{dy}{\left(\M f(y)\right)^{m}}\geq Q_m\left(1+I_{[-1,1]}\right)=\frac{4}{m-1},
\]
with equality if and only if $f=c+I_{[-\alpha,\alpha]}$, for some $c,\alpha>0$.
\end{theorem}
We first show that Theorem \ref{th:mahler-sneg} follows from the next theorem, which also implies Theorem \ref{comp-bipolar-one} if one applies it to $f=g^s$, for any $s$-concave function $g$, with $-1<s<0$ and $m=-1/s$.
\begin{theorem}\label{th:f-mmf-dim1}
Let $f:\R\to (0,+\infty]$ be an even convex function such that $\lim_{t\to+\infty}f(tx)=+\infty$, for any $x\neq 0$. Then for any $m>1$,
\begin{equation}\label{eq:f-mmf-dim1}
\int_{\R}\frac{1}{f(x)^m}dx\ge \frac{m-1}{m}\int_{\R}\frac{1}{(\M\M f(x))^m}dx,
\end{equation}
with equality if and only if $f=c+I_{[-\alpha,\alpha]}$, for some $c,\alpha>0$.
\end{theorem}
Multiplying equation \eqref{eq:f-mmf-dim1} by $\int(\M f)^{-m}$, we deduce that for any even convex $f$ tending to $+\infty$ at infinity, one has
\begin{equation}\label{eq:Q-m-f-mmf}
Q_m(f)\ge \frac{m-1}{m} Q_m(\M f),
\end{equation}
with equality if and only if $f=c+I_{[-\alpha,\alpha]}$, for some $c,\alpha>0$. Since $\M f\in\F$, using \cite{FGSZ} we get
\begin{equation}\label{fgsz-dim1}
Q_m(\M f)\ge Q_m\left(1+|x|\right)=\frac{4m}{(m-1)^2}.
\end{equation}
Using equations \eqref{eq:Q-m-f-mmf} and \eqref{fgsz-dim1}, we conclude the proof of Theorem~\ref{th:mahler-sneg}.
\begin{proof}[\bf{Proof of Theorem \ref{th:f-mmf-dim1}}]
Let us define a variant of the convex body $C_{m}^{K}$ introduced in section 2, which is more suitable for $m=1/|s|\notin\N$; this variant was considered in \cite{IW} for $s>0$:
\[
C_1(f)=C^{[-1,1]}_{1}(f)=\left\{(x,t)\in\R\times\R; \pi_1(f)(x,t)\leq 1\right\}=\left\{(x,t)\in\R\times\R; |t|f\left(\frac{x}{|t|}\right)\leq 1\right\}.
\]
The same computations as before show easily that $C_1(f)^\circ=C_1(\M f)$. Moreover, for any $m>1$, one has
\[
\int_{\R}\frac{1}{f(x)^m}dx=\int_{\R}\int_{f(x)}^{+\infty}\frac{m}{t^{m+1}}dtdx=\int_0^{+\infty}\frac{m}{t^{m+1}}|\{x\in\R; f(x)\le t\}|dt.
\]
Using the change of variables $t=1/u$, we get
\[
\int_{\R}\frac{1}{f(x)^m}dx=\frac{m}{2}\int_{C_1(f)}|u|^{m-2}dudx=\mu_m(C_1(f)),
\]
where we denote by $\mu_m$ the measure with density $d\mu_m(x,t)=\frac{m}{2}|t|^{m-2}dxdt$ on $\R^{2}$.
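As a quick sanity check of this identity, consider for instance $f=1+I_{[-1,1]}$: then $C_{1}(f)=\{(x,t)\in\R^{2}; |x|\le|t|\le1\}$ and
\[
\mu_{m}(C_{1}(f))=\frac{m}{2}\int_{C_{1}(f)}|t|^{m-2}dtdx=\frac{m}{2}\cdot2\int_{0}^{1}2t\cdot t^{m-2}dt=2m\int_{0}^{1}t^{m-1}dt=2=\int_{\R}\frac{dx}{f(x)^{m}},
\]
as expected.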
\forget Consider the convex body in $\R^{2}$ \[ C_{1}(f)=\{(x,s)\in\R^{2}; |s|f\left(\frac{x}{|s|}\right)\leq 1\}. \] Since $f$ is even, then $C_{1}(f)$ is an unconditional convex body. In addition, one has \begin{eqnarray*} \int_{C_{1}(f)}|s|^{m-2}dsdx&=&2\int_{0}^{+\infty}s^{m-2}\left|\left\{x\in\R; \, (x,s)\in C_{1}(f)\right\}\right|ds \\ &=&2\int_{0}^{+\infty}s^{m-2}\left|\{x\in\R; \, sf\left(\frac{x}{s}\right)\leq 1\}\right|ds\\ &=& 2\int_{0}^{+\infty}s^{m-2}\left|\{sy\in\R; \, sf(y)\leq 1\}\right|ds\\ &=& 2\int_{0}^{+\infty}s^{m-1}\left|\{y\in\R; \, sf(y)\leq 1\}\right|ds\\ &=&2\int_{0}^{+\infty}\frac{1}{u^{m+1}}\left|\{y\in\R; \, f(y)\leq u\}\right|du. \end{eqnarray*} Now using Fubini, we obtain \begin{eqnarray*} \int_{C_{1}(f)}|s|^{m-2}dsdx&=&2\int_{0}^{+\infty}\frac{1}{u^{m+1}}\int_{\{y\in\R; \, f(y)\leq u\}}dy du\\ &=&2\int_{\R}\int_{f(y)}^{+\infty}\frac{1}{u^{m+1}}dudy\\ &=&2\int_{\R}\left[\frac{u^{-m}}{-m}\right]_{f(y)}^{+\infty}dy\\ &=&\frac{2}{m}\int_{\R}\frac{dy}{(f(y))^{m}}. \end{eqnarray*} Thus, \[ \int_{\R}\frac{dy}{(f(y))^{m}}dy=\frac{m}{2} \int_{C_{1}(f)}|s|^{m-2}dsdx. \] \forgotten Applied to $\M\M f$ instead of $f$, we get \[ \int_{\R}\frac{dz}{(\M \M f(z))^{m}}=\frac{m}{2}\int_{C_{1}(\M \M f)}|s|^{m-2}dsdx=\mu_{m}(C_{1}(\M\M f)). \] Since $C_{1}(\M \M f)=C_{1}(\M f)^{\circ}=C_{1}(f)^{\circ\circ}=\conv(C_{1}(f))$, we get \[ \int_{\R}\frac{dz}{(\M \M f(z))^{m}}=\frac{m}{2}\int_{\conv(C_{1}(f))}|s|^{m-2}dsdx=\mu_m(\conv(C_{1}(f))). \] The inequality \eqref{eq:f-mmf-dim1} that we want to prove is thus equivalent to \begin{equation}\label{eq:muC1fconv} \mu(C_{1}(f))\geq\frac{m-1}{m}\mu\left(\conv(C_{1}(f))\right). \end{equation} It was proved in \cite[Theorem 2.16]{FGSZ} that $C_{1}(f)$ can be written as $C_{1}(f)=C_{1}(f)_{+}\cup C_{1}(f)_{-}$, where \[ C_{1}(f)_{+}=C_1(f)\cap\{s\ge0\} =\overline{\left\{(x,s)\in\R\times(0,+\infty); sf\left(\frac{x}{s}\right)\leq 1\right\}} \] is a convex body containing $0$ on its boundary and $C_{1}(f)_{-}$ is its symmetric image with respect to the hyperplane $\{s=0\}$. Thus, our problem can be formulated as follows:\\ for every convex body $K\subset\{(x,s)\in\R^{2}; s\geq 0\}$ such that $0\in\partial K$ and $K$ symmetric with respect to $\R e_{2}$ and if we denote $C=K\cup\sigma(K)$, where $\sigma=\sigma_{{e_{2}}^{\perp}}$ then \begin{equation}\label{eq:muC-muconvC} \mu_{m}(C)\geq\frac{m-1}{m}\mu_{m}(\conv(C)). \end{equation} Let $a\in\R$ such that $P_{e_{2}^{\perp}}(K)=[-a,a]$, then there exists $b\geq 0$ such that $(a,b)\in K$. Set \[ S=K\cap\{s=b\}=[-a,a]\times\{b\},\quad K_{1}=\conv(S,0),\quad \mbox{and}\quad C_1=K_1\cup\sigma(K_1). \] Then, one has $C_1\subset C$ and one easily sees that $\conv(C)\setminus C\subset \conv(C_1)\setminus C_1$, thus \begin{equation}\label{eq:mu-C-C_1} \frac{\mu_m(\conv(C)\setminus C)}{\mu_m(C)}\le \frac{\mu_m(\conv(C_1)\setminus C_1)}{\mu_m(C_1)}. \end{equation} \forget \[ A=K\cap\{s\geq b\},\quad\quad\quad S=K\cap\{s=b\}\quad\mbox{ and }\quad K_{1}=\conv(A,0). \] It is easy to see that $K_{1}\subset K$ and $K_{1}=A\cup\conv(0,S)$. Since $\conv(K,\sigma(K))=A\cup\sigma(A)\cup\conv(S,\sigma(S))$, then $\conv(K_{1},\sigma(K_{1}))=\conv(K,\sigma(K))$.\\ Let $L$ be a symmetric convex body in $\R^{2}$ and $F$ be the function defined as follows: \[ F(L)=\frac{\mu(L\cup\sigma(L))}{\mu(\conv(L,\sigma(L)))}. \] From the previous observation, one has $F(K)\geq F(K_{1})$.\\ Set $K_{2}=\conv(0,S)$. Then, $\conv(K_{2},\sigma(K_{2}))=\conv(S,\sigma(S))$. 
In addition, one has \[ \mu(K_{1}\cup\sigma(K_{1}))=2\mu(K_{1})=2(\mu(A)+\mu(\conv(0,S)))=2(\mu(A)+\mu(K_{2})). \] On the other hand, using again the fact that $\conv(K_{1},\sigma(K_{1}))=A\cup\sigma(A)\cup\conv(S,\sigma(S))$, it implies that \[ \mu_m(\conv(K_{1},\sigma(K_{1})))=2\mu_m(A)+\mu_m(\conv(K_{2},\sigma(K_{2}))). \] Thus, \[ F(K_{1})=\frac{2\mu(A)+2\mu(K_{2})}{2\mu(A)+\mu(\conv(K_{2},\sigma(K_{2})))}\geq\frac{2\mu(K_{2})}{\mu(\conv(K_{2},\sigma(K_{2})))}=F(K_{2}). \] Our final step is to prove that $F(K_{2})\geq\frac{m-1}{m}$. \forgotten The triangle $K_1$ with vertices $0$, $(a,b)$ and $(-a,b)$ can be described as \[ K_{1}=\{(x,s)\in\R^{2};\frac{b}{a}|x|\leq s\leq b\}. \] Hence, we get \[ \mu_m(C_1)=2\mu_{m}(K_{1})=4\int_{0}^{a}\int_{\frac{b}{a}x}^{b}s^{m-2}dsdx=\frac{4}{m}ab^{m-1}. \] And since $\conv(C_1)=[-a,a]\times[-b,b]$, we get \[ \mu_{m}(\conv(C_{1}))=4\int_{0}^{a}\int_{0}^{b}s^{m-2}dsdx=\frac{4}{m-1}ab^{m-1}. \] Hence \[ \frac{\mu_m(\conv(C_1)\setminus C_1)}{\mu_m(C_1)}=\frac{\frac{1}{m-1}-\frac{1}{m}}{\frac{1}{m}}=\frac{1}{m-1}. \] Using \eqref{eq:mu-C-C_1}, we deduce that \eqref{eq:muC-muconvC} holds. Moreover, there is equality in \eqref{eq:mu-C-C_1} if and only if $C=C_1$, i.e. $K=K_1$, which means that $f=\frac{1}{b}+I_{[-\frac{a}{b},\frac{a}{b}]}$. Thus, the equality case follows. \forget Then, $F(K_{2})=\frac{m-1}{m}$ and $F(K)\geq\frac{m-1}{m}$.\\ To deduce our inequality, we will use the observations done in the previous proof. In fact, we have \begin{eqnarray*} \int_{\R}\frac{1}{f(x)^m}dx\int_{\R}\frac{1}{(\M f(y))^{m}}dy&=&\left(\frac{m}{m-1}\right)^{2}\frac{1}{|B_{\infty}^{m-1}||B_{1}^{m-1}|}|(K\cup\sigma(K))||(K\cup\sigma(K))^{\circ}|\\ &=&\left(\frac{m}{m-1}\right)^{2}\frac{(m-1)!}{4^{m-1}}|(K\cup\sigma(K))||(K\cup\sigma(K))^{\circ}|\\ &=&\left(\frac{m}{m-1}\right)^{2}\frac{(m-1)!}{4^{m-1}}\frac{|K\cup\sigma(K)|}{|\conv(K,\sigma(K))|}|\conv(K,\sigma(K))||(K\cup\sigma(K))^{\circ}| \end{eqnarray*} Using Mahler's inequality in dimension $m$, one has \[ |\conv(K,\sigma(K))||(K\cup\sigma(K))^{\circ}|\geq\frac{4^{m}}{m!}. \] Thus, we get \begin{eqnarray*} \int_{\R}\frac{1}{f(x)^m}dx\int_{\R}\frac{1}{(\M f(y))^{m}}dy&\geq&\left(\frac{m}{m-1}\right)^{2}\frac{(m-1)!}{4^{m-1}}\left(\frac{m-1}{m}\right)\frac{4^{m}}{m!} \\ &=&\frac{4}{m-1}, \end{eqnarray*} and the result follows. \forgotten \end{proof} \forget \section{Open problems} For even $s$-concave function $g$ on $\R^n$. Case $n\ge2$ and $1/|s|\notin\N$. Case $n\ge2$, $s<0$ and $f\notin\F$: proof of Theorem 4.3 equality cases. We simplify the notation and denote: \[ C_1(f)=C^{[-1,1]}_{1}(f)=\{(x,s)\in\R\times\R; \pi_1(f)(x,s)\leq 1\}=\{(x,s)\in\R\times\R; |s|f\left(\frac{x}{|s|}\right)\leq 1\}. \] We know that $C_1(f)^\circ=C_1(\M f)$. Moreover, for any $m>n$, one has \[ \int_{\R^n}\frac{1}{f(x)^m}dx=m\int_{\R^n}\int_{f(x)}^{+\infty}\frac{m}{s^{m+1}}dsdx. \] Using the change of variables $s=1/t$ and $x=y/t$, we get \[ \int_{\R^n}\frac{1}{f(x)^m}dx=\frac{m}{2}\int_{C_1(f)}|t|^{m-n-1}dtdx=\frac{m}{2}\mu_m(C_1(f)), \] where we denote by $\mu_m$ the measure on $\R^n\times\R$ with density $|t|^{m-n-1}$, where $t$ denotes the last coordinate.\\ \forgotten \begin{thebibliography}{Zz99} \bibitem[AKM]{AKM} S. Artstein, B. Klartag and V. Milman. The Santal\'o point of a function and a functional form of Santal\'o inequality. \emph{Mathematika} {\bf 51} (2004), 33-48. \bibitem[BF]{BF} F. Barthe and M. Fradelizi. The volume product of convex bodies with many hyperplane symmetries. 
\emph{American Journal of Mathematics} {\bf 135} (2013), no. 2, 311–347. \bibitem[FGSZ]{FGSZ} M. Fradelizi, N. Gozlan, S. Sadovsky and S. Zugmeyer. Transport-entropy forms of direct and converse Blaschke-Santaló inequalities. \emph{Revista Matemática Iberoamericana} {\bf 40} (2024), no. 5, 1917-1952. \bibitem[FHMRZ]{FHMRZ}M. Fradelizi, A. Hubard, M. Meyer, E. Roldan-Pensado and A. Zvavitch. Equipartitions and Mahler volumes of symmetric convex bodies. \emph{American Journal of Mathematics} {\bf 144} (2022), no. 5, 1201-1219. \bibitem[FM08a]{FM08a} M. Fradelizi and M. Meyer. Some functional inverse Santal\'o inequalities. \emph{Advances in Mathematics} {\bf 218} (2008), 1430-1452. \bibitem[FM08b]{FM08b} M. Fradelizi and M. Meyer. Increasing functions and inverse Santal\'o inequality for unconditional functions. \emph{Positivity} {\bf 12} (2008), 407-420. \bibitem[FM10]{FM10} M. Fradelizi and M. Meyer. Functional inequalities related to Mahler's conjecture. \emph{Monatshefte für Mathematik} {\bf 159} (2010), no. 1-2, 13–25. \bibitem[FN]{FN} M. Fradelizi and E. Nakhle. The functional form of Mahler’s conjecture for even log-concave functions in dimension 2. \emph{International Mathematics Research Notices} {\bf 12} (2023), 10067-10097. \bibitem[GMR]{GMR} Y. Gordon, M. Meyer and S. Reisner. Zonoids with minimal volume product - a new proof. \emph{Proceedings of the American Mathematical Society} {\bf 104} (1988), no. 1, 273-276. \bibitem[IS]{IS} H. Iriyeh and M. Shibata. Symmetric Mahler's conjecture for the volume product in the three dimensional case. \emph{Duke Mathematical Journal} {\bf 169} (2020), no. 6, 1077-1134. \bibitem[IW]{IW} G.~M. Ivanov and E.~M. Werner. Geometric representation of classes of concave functions and duality. \emph{Journal of Geometric Analysis} {\bf 34} (2024), no.~8, Paper No. 260, 25 pp. \bibitem[K]{K} R. Karasev. Mahler's conjecture for some hyperplane sections. \emph{Israel Journal of Mathematics} {\bf 241} (2021), 795–815. \bibitem[Mah1]{Mah1} K. Mahler. Ein {\"U}bertragungsprinzip f{\"u}r konvexe K{\"o}rper. \emph{Casopis Pyest. Mat. Fys.} {\bf 68} (1939), 93–102. \bibitem[Mah2]{Mah2} K. Mahler. Ein Minimalproblem f{\"u}r konvexe Polygone. \emph{Mathematica (Zutphen)} {\bf 7} (1938), 118-127. \bibitem[M]{M} M. Meyer. Une caracterisation volumique de certains espaces normés de dimension finie. \emph{Israel Journal of Mathematics} {\bf 55} (1986), no. 3, 317–326. \bibitem[Re]{Re} S. Reisner. Zonoids with minimal volume product. \emph{Mathematische Zeitschrift} {\bf 192} (1986), 339–346. \bibitem[Ro]{Ro}L.~Rotem. A sharp Blaschke-Santal{\'o} inequality for $\alpha$-concave functions. \emph{Geometriae Dedicata} {\bf 172} (2014), no. 1, 217--228. \bibitem[SR]{SR} J. Saint Raymond. Sur le volume des corps convexes symétriques. \emph{In Séminaire d'Initiation à l'Analyse} {\bf 81} (1980). \end{thebibliography} \noindent {\footnotesize\sc Matthieu Fradelizi:} {\footnotesize Univ Gustave Eiffel, Univ Paris Est Creteil, CNRS, LAMA UMR8050 F-77447 Marne-la-Vallée, France. \\ ORCID: 0000-0001-9362-6819}\\[-1.3mm] {\footnotesize e-mail: {\tt [email protected] \tt }}\\ \noindent {\footnotesize\sc Elie Nakhle:} {\footnotesize Univ Paris Est Creteil, Univ Gustave Eiffel, CNRS, LAMA UMR8050, F-94010 Creteil, France. }\\[-1.3mm] {\footnotesize e-mail: {\tt elie\_b\[email protected] \tt }}\\ \end{document}
2412.12608v1
http://arxiv.org/abs/2412.12608v1
SOR-like iteration and FPI are consistent when they are equipped with certain optimal iterative parameters
\documentclass[]{interact} \usepackage{color} \usepackage{epstopdf}\usepackage{caption} \usepackage{cases} \usepackage{subfigure} \usepackage{graphics,graphicx} \usepackage{algorithm,algorithmic} \usepackage{caption} \usepackage[colorlinks, linkcolor=red, anchorcolor=blue, citecolor=blue ]{hyperref} \usepackage{cleveref} \usepackage[numbers,sort&compress]{natbib}\bibpunct[, ]{[}{]}{,}{n}{,}{,}\renewcommand\bibfont{\fontsize{10}{12}\selectfont}\makeatletter\def\NAT@def@citea{\def\@citea{\NAT@separator}}\makeatother \theoremstyle{plain}\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{alg}{Algorithm}\theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}{Remark} \newtheorem{notation}{Notation} \begin{document} \title{SOR-like iteration and FPI are consistent when they are equipped with certain optimal iterative parameters} \author{ \name{Jiayu Liu\textsuperscript{a}\thanks{Email address: [email protected].} and Tingting Luo\textsuperscript{a}\thanks{Email address: [email protected].} and Cairong Chen\textsuperscript{a}\thanks{Corresponding author. Email address: [email protected].} and Deren Han\textsuperscript{b}\thanks{Email address: [email protected].}} \affil{\textsuperscript{a}School of Mathematics and Statistics \& Key Laboratory of Analytical Mathematics and Applications (Ministry of Education) \& Fujian Provincial Key Laboratory of Statistics and Artificial Intelligence, Fujian Normal University, Fuzhou, 350117, P.R. China} \affil{\textsuperscript{b}LMIB of the Ministry of Education, School of Mathematical Sciences, Beihang University, Beijing, 100191, P.R. China} } \maketitle \begin{abstract} Two common methods for solving absolute value equations (AVE) are the SOR-like iteration method and the fixed point iteration (FPI) method. In this paper, a novel convergence analysis of the SOR-like iteration and the FPI is given, which results in wider convergence ranges. Based on the new analysis, a new optimal iterative parameter with an analytical form is obtained for the SOR-like iteration. In addition, an optimal iterative parameter with an analytical form is also obtained for the FPI. Surprisingly, the SOR-like iteration and the FPI are the same whenever they are equipped with our optimal iterative parameters. As a by-product, we give two new constructive proofs of a well-known sufficient condition under which the AVE has a unique solution for any right-hand side. Numerical results demonstrate our claims. \end{abstract} \begin{keywords} Absolute value equations; iterative method; convergence domain; optimal iteration parameter \end{keywords} \section{Introduction}\label{sec:intro} We consider absolute value equations (AVE) of the form \begin{equation}\label{eq:ave} Ax - | x | = b, \end{equation} where $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^n$, and $|x|\in\mathbb{R}^n$ denotes the entrywise absolute value of the unknown vector $x\in\mathbb{R}^n$. AVE \eqref{eq:ave} can be regarded as a special case of the general absolute value equation (GAVE) \begin{equation}\label{eq:gave} Cx - D|x| = e, \end{equation} where $C,D\in\mathbb{R}^{m\times n}$ and $e\in \mathbb{R}^m$.
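Indeed, AVE \eqref{eq:ave} is recovered from GAVE \eqref{eq:gave} by taking $m=n$, $C=A$, $D=I$ and $e=b$.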
It was known that determining the existence of a solution to the general GAVE is NP-hard \cite{mang2007a}, and if it has a solution, determining whether the GAVE has a unique solution or multiple solutions is NP-complete \cite{prok2009}. For further investigation on GAVE, one can see \cite{hlad2018,love2013,mezz2020,rohn2009a,rohf2014,wush2021}. Over the past two decades, AVE \eqref{eq:ave} has garnered significant attention in the community of numerical optimization since it is closely related to many mathematical programming problems, which include linear complementarity problems (LCP) \cite{huhu2010,mang2014,mame2006,prok2009}. In addition, AVE \eqref{eq:ave} also arises from the characterization of certain solutions to the system of linear interval equations \cite{rohn1989,rohn2004}. Recently, a transform function based on the underdetermined GAVE~\eqref{eq:gave} is used to improve the security of the cancellable biometric system \cite{dnhk2023}. Given these diverse applications and theoretical significance, developing efficient numerical methods for solving AVE \eqref{eq:ave} remains as an active research topic. In recent years, there has been numerous algorithms for solving AVE \eqref{eq:ave}. For example, Newton-type iteration methods \cite{mang2009a,lilw2018,bcfp2016,wacc2019}, iterative methods based on matrix splitting \cite{lild2022,kema2017,edhs2017}, concave minimization approaches \cite{mang2007b,zahl2021}, methods based on neurodynamic models \cite{cyyh2021,yzch2024}, and others; see, e.g., \cite{ke2020,alct2023,chyh2023,xiqh2024,soso2023,bcfp2016,maer2018,abhm2018,sayc2018,tazh2019}. The goal of this paper is to revisit the convergence conditions and optimal iterative parameters for two of the above-mentioned algorithms, i.e., the SOR-like iteration method \cite{kema2017} and the fixed point iteration (FPI) method \cite{ke2020}. In the following, we briefly review these two methods. Let $y = |x|$, AVE~\eqref{eq:ave} is equivalent to \begin{equation}\label{eq:ave-eq} \mathcal{A}z := \begin{bmatrix} A &-I\\ -\mathcal{D}(x) & I\end{bmatrix} \begin{bmatrix} x\\ y\end{bmatrix} = \begin{bmatrix} b\\ 0\end{bmatrix} := c, \end{equation} where $\mathcal{D}(x) = {\rm diag}({\rm sign}(x))$. By splitting $$ \omega\mathcal{A} = \begin{bmatrix} A &0\\ -\omega \mathcal{D}(x) & I\end{bmatrix} - \begin{bmatrix} (1-\omega)A &\omega I\\0 & (1-\omega)I\end{bmatrix} $$ with $\omega> 0$ is the iterative parameter, Ke and Ma \cite{kema2017} proposed the following SOR-like iteration for solving AVE~\eqref{eq:ave}: \begin{equation*} \begin{bmatrix} A &0\\ -\omega \mathcal{D}(x^{(k+1)}) & I\end{bmatrix} \begin{bmatrix} x^{(k+1)}\\ y^{(k+1)}\end{bmatrix} = \begin{bmatrix} (1-\omega)A &\omega I\\0 & (1-\omega)I\end{bmatrix}\begin{bmatrix} x^{(k)}\\ y^{(k)}\end{bmatrix} + \begin{bmatrix} \omega b\\ 0\end{bmatrix}. \end{equation*} The SOR-like iteration method is described in \Cref{alg:SOR}. \begin{algorithm}[htp] \caption{SOR-like iteration method for solving AVE \eqref{eq:ave} \cite{kema2017}.}\label{alg:SOR} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and $b\in \mathbb{R}^{n}$. 
Given the initial vectors $x^{(0)}\in \mathbb{R}^{n}$ and $y^{(0)}\in \mathbb{R}^{n}$, for $k=0,1,2,\cdots$ until the iteration sequence $\left\{(x^{(k)},y^{(k)})\right\}_{k=0}^\infty$ is convergent, compute \begin{eqnarray}\label{eq:sor} \begin{cases} x^{(k+1)}=(1-\omega)x^{(k)}+\omega A^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\omega)y^{(k)}+\omega |x^{(k+1)}|, \end{cases} \end{eqnarray} where $\omega > 0$ is the iterative parameter. \end{algorithm} Subsequently, based on \eqref{eq:ave-eq} again, Ke \cite{ke2020} proposed the following FPI method (see \Cref{alg:FPI}) for solving AVE~\eqref{eq:ave}. \begin{algorithm}[htp] \caption{FPI method for solving AVE \eqref{eq:ave} \cite{ke2020}}\label{alg:FPI} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and $b\in \mathbb{R}^{n}$. Given the initial vectors $x^{(0)}\in \mathbb{R}^{n}$ and $y^{(0)}\in \mathbb{R}^{n}$, for $k=0,1,2,\cdots$ until the iteration sequence $\left\{(x^{(k)},y^{(k)})\right\}_{k=0}^\infty$ is convergent, compute \begin{eqnarray}\label{eq:fpi} \begin{cases} x^{(k+1)}=A^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\tau)y^{(k)}+\tau |x^{(k+1)}|, \end{cases} \end{eqnarray} where $\tau>0$ is the iterative parameter. \end{algorithm} Let $(x_*, y_*)$ be the solution pair of the nonlinear equation \eqref{eq:ave-eq} and define $$ e_k^x = x_* - x^{(k)}, \quad e_k^y = y_* - y^{(k)}. $$ Then we can review the following results. For the SOR-like iteration method, Ke and Ma obtained the following theorem. \begin{theorem}[{\cite[Theorem 2.1]{kema2017}}]\label{thm:kema} Assume that $A \in \mathbb{R}^{n\times n}$ is a nonsingular matrix and $b\in \mathbb{R}^{n}$. Denote $$ \nu=\|A^{-1}\|_2, \quad a=|1-\omega|\quad \text{and}\quad d=\omega^2\nu. $$ For the sequence $\{(x^{(k)},y^{(k)})\}$ generated by \eqref{eq:sor}, if \begin{equation}\label{eq:cond1} 0<\omega< 2 \qquad \text{and} \qquad a^4-3a^2 -2ad- 2d^2 +1 >0, \end{equation} the following inequality \begin{equation*}\| |(e_{k+1}^x,e_{k+1}^y)| \|_{\omega} < \| |(e_k^x,e_k^y) |\|_{\omega} \end{equation*} holds for $ k=0,1,2,\cdots $. Here the norm $\| |\cdot|\|_{\omega}$ is defined by $$\| |(e_k^x,e_k^y) |\|_{\omega}:=\sqrt {\|e_k^x \|_2^2+\omega ^{-2}\|e_k^y \|_2^2 }.$$ \end{theorem} Recently, Chen et al. \cite{chyh2024} revisited the convergence condition \eqref{eq:cond1} of the SOR-like iteration method and determined the optimal iteration parameter which minimizes $\|T_{\nu}(\omega)\|_A$ with $$T_\nu(\omega) = \begin{bmatrix} |1-\omega| & \omega\nu \\ \omega |1-\omega| & |1-\omega| +\omega^2\nu \end{bmatrix}$$ and $A = \begin{bmatrix} 1 & 0\\ 0 &\frac{1}{\omega^2}\end{bmatrix}$ such that \begin{equation}\label{eq:errsor} 0\le \| (\|e_{k+1}^x\|_2,\|e_{k+1}^y\|_2) \|_A \le \|T_\nu(\omega) \|_A \cdot \| (\|e_k^x\|_2,\|e_k^y\|_2) \|_A. \end{equation} Here, $\|x\|_A = \sqrt{x^\top Ax}$ and $\|X\|_A = \|A^{\frac{1}{2}}XA^{-\frac{1}{2}}\|_2$. From \eqref{eq:errsor}, for the sequence $\{(\|e_k^x\|_2, \|e_k^y\|_2)\}$, $\|T_{\nu}(\omega)\|_A$ is an upper bound of the linear convergence factor for the SOR-like iteration method in terms of the metric $\|\cdot \|_A$. However, the metric $\|\cdot \|_A$ is $\omega$-dependent and the resulting optimal iterative parameter does not have an analytical form (see \eqref{eq:opt}). This raises the interesting question of finding an optimal iterative parameter with an analytical form. To this end, we reanalyze the convergence of the SOR-like iteration method without using the metric $\|\cdot \|_A$. For the FPI method, Ke proposed the following theorem.
\begin{theorem}[{\cite[Theorem 2.1]{ke2020}}]\label{thm:kefpi} Assume that $A \in \mathbb{R}^{n\times n}$ is a nonsingular matrix and $b\in \mathbb{R}^{n}$. Denote $$\nu=\|A^{-1}\|_2\quad \text{and}\quad E^{(k+1)}=\begin{bmatrix}\begin{array}{c} \|e_{k+1}^x\|_2\\ \|e_{k+1}^y\|_2\end{array}\end{bmatrix}.$$ For the sequence $\{(x^{(k)},y^{(k)})\}$ generated by \eqref{eq:fpi}, if \begin{equation}\label{eq:cfpi} 0<\nu< \frac{\sqrt{2}}{2} \qquad \text{and} \qquad \frac{1- \sqrt{1- \nu^2}}{1- \nu} < \tau < \frac{1+\sqrt{1-\nu^2}}{1+\nu}, \end{equation} $\|E^{(k+1)}\|_2< \|E^{(k)}\|_2$ for all $k=0,1,2,\cdots$. \end{theorem} For AVE~\eqref{eq:ave}, the following \Cref{pro:us} reveals a sufficient condition under which AVE~\eqref{eq:ave} has a unique solution for any $b \in \mathbb{R}^{n}$. However, in \eqref{eq:cfpi}, $\nu\in (0, \frac{\sqrt{2}}{2})$. There exists a gap between $(0, \frac{\sqrt{2}}{2})$ and $(0, 1)$. In order to theoretically fill this gap, Yu et al. \cite{yuch2022} modified the FPI by introducing an auxiliary matrix. However, the optimal iterative parameter of the FPI method is still lacking in the literature. This motivates us to give a new convergence analysis of the FPI method which not only fills the above-mentioned gap without modifying the original FPI but also sheds light on determining the optimal iterative parameter. \begin{proposition}[\cite{mame2006}]\label{pro:us} Assume that $A \in \mathbb{R}^{n\times n}$ is invertible. If $\|A^{-1}\|_2<1$, AVE~\eqref{eq:ave} has a unique solution for any $b \in \mathbb{R}^{n}$. \end{proposition} Generally, the SOR-like iteration \eqref{eq:sor} and the FPI \eqref{eq:fpi} are different from each other. Surprisingly, our analysis below shows that the SOR-like iteration \eqref{eq:sor} and the FPI \eqref{eq:fpi} are the same whenever they are equipped with our optimal iterative parameters. Our work makes the following key contributions: \begin{enumerate} \item For the SOR-like iteration method, a new convergence result and a new optimal iteration parameter are given. The new convergence range is larger than the existing one and the new optimal iteration parameter has an analytical form. \item For the FPI method, a new convergence result is given. Unlike \cite{yuch2022}, we theoretically fill the convergence gap without modifying the original method. Furthermore, we obtain the optimal iterative parameter. \item We discover that the SOR-like iteration and the FPI are the same when they are equipped with our optimal iterative parameters. \end{enumerate} The rest of this paper is organized as follows: In \Cref{sec:Preliminaries}, we present preliminary results and essential lemmas that serve as the foundation for our subsequent analysis. In \Cref{sec:SOR} and \Cref{sec:FPI}, we establish broader convergence domains and derive explicit expressions for the optimal iteration parameters of the SOR-like iteration and the FPI, respectively. Numerical results are given in \Cref{sec:ne}. Finally, some concluding remarks are given in \Cref{sec:conclusions}. \textbf{Notation.} Let $\mathbb{R}^{n\times n}$ be the set of all $n\times n$ real matrices and $\mathbb{R}^n=\mathbb{R}^{n\times 1}$. $|U|\in\mathbb{R}^{m\times n}$ denotes the componentwise absolute value of the matrix $U$. $I$ denotes the identity matrix with suitable dimensions. $\|U\|_2$ denotes the $2$-norm of $U\in\mathbb{R}^{m\times n}$, which is defined by the formula $\|U\|_2=\max\{\|Ux\|_2:x\in\mathbb{R}^n,\|x\|_2=1\}$, where $\|x\|_2$ is the $2$-norm of the vector $x$.
$\rho(U)$ denotes the spectral radius of $U$. For $A \in \mathbb{R}^{n\times n}$, $\det (A)$ denotes its determinant. The sign of a real number $r$ is defined by ${\rm sign}(r)=1$ if $r> 0$, $0$ if $r=0$ and $-1$ if $r<0$. For $x\in \mathbb{R}^n$, ${\rm diag}(x)$ represents a diagonal matrix with $x_i$ as its diagonal entries for every $i = 1,2,\ldots,n$. \section{Preliminaries}\label{sec:Preliminaries} In this section, we collect some basic results that will be used later. \begin{lemma}[{\cite[Lemma 2.1]{youn1971}}]\label{lem:2.1} Let $p$ and $q$ be real coefficients. Then both roots of the quadratic equation $x^2 - px + q = 0$ are less than one in modulus if and only if $|q|<1$ and $|p|<1+q$. \end{lemma} \begin{lemma}[{e.g., \cite[Theorem~1.10]{saad2003}}]\label{lem:2.4} For~$U\in\mathbb{R}^{n\times n}$,~$\lim\limits_{k\rightarrow+\infty} U^k=0$ if and only if~$\rho(U)<1$. \end{lemma} \begin{lemma}[{e.g., \cite[Theorem~1.11]{saad2003}}]\label{lem:2.3} For~$U\in\mathbb{R}^{n\times n}$, the series~$\sum\limits_{k=0}^\infty U^k$ converges if and only if~$\rho(U)<1$ and we have~$\sum\limits_{k=0}^\infty U^k=(I-U)^{-1}$ whenever it converges. \end{lemma} \section{New convergence and new optimal iterative parameter of SOR-like iteration}\label{sec:SOR} In this section, we give a new convergence analysis and derive a new optimal iterative parameter for the SOR-like iteration method. \subsection{New convergence analysis} In this subsection, we derive a new convergence condition for the SOR-like iteration method, which results in a larger range of $\omega$ than that of \cite{chyh2024}. Concretely, we have the following theorem. \begin{theorem}\label{thm:sor} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and denote $\nu=\|A^{-1}\|_2$. If \begin{equation}\label{eq:con-sor} 0<\nu<1 \quad \text{and}\quad 0<\omega<\frac{2 - 2\sqrt{\nu}}{1 - \nu}, \end{equation} AVE \eqref{eq:ave} has a unique solution for any $b\in \mathbb{R}^n$ and the sequence~$\{(x^{(k)},y^{(k)})\}^\infty_{k=0}$ generated by~\eqref{eq:sor} globally linearly converges to~$(x_{*}, y_{*}=|x_*|)$ with $x_{*}$ being the unique solution of AVE~\eqref{eq:ave}. \end{theorem} \begin{proof} It follows from \eqref{eq:sor} that \begin{eqnarray}\label{eq:sor'} \begin{cases} x^{(k)}=(1-\omega)x^{(k-1)}+\omega A^{-1}(y^{(k-1)}+b),\\ y^{(k)}=(1-\omega)y^{(k-1)}+\omega |x^{(k)}|. \end{cases} \end{eqnarray} Subtracting~\eqref{eq:sor'} from~\eqref{eq:sor}, we have \begin{eqnarray*} \begin{cases} x^{(k+1)}-x^{(k)}=(1-\omega)(x^{(k)}-x^{(k-1)})+\omega A^{-1}(y^{(k)}-y^{(k-1)}),\\ y^{(k+1)}-y^{(k)}=(1-\omega)(y^{(k)}-y^{(k-1)})+\omega (|x^{(k+1)}|-|x^{(k)}|), \end{cases} \end{eqnarray*} from which and the fact that $\||x| - |y|\|_2 \le \|x - y\|_2$ we obtain \begin{eqnarray*} \begin{cases} \|x^{(k+1)}-x^{(k)}\|_2 \leq |1-\omega| \|x^{(k)}-x^{(k-1)}\|_2 +\omega \nu \|y^{(k)}-y^{(k-1)}\|_2,\\ \|y^{(k+1)}-y^{(k)}\|_2 \leq |1-\omega| \|y^{(k)}-y^{(k-1)}\|_2 +\omega \|x^{(k+1)}-x^{(k)}\|_2. \end{cases} \end{eqnarray*} That is, \begin{equation}\label{eq:sor*} \begin{bmatrix} 1 & 0 \\ -\omega & 1 \end{bmatrix} \begin{bmatrix} \|x^{(k+1)}-x^{(k)}\|_2 \\ \|y^{(k+1)}-y^{(k)}\|_2 \end{bmatrix} \leq \begin{bmatrix} |1-\omega| & \omega\nu \\ 0 & |1-\omega| \end{bmatrix} \begin{bmatrix} \|x^{(k)}-x^{(k-1)}\|_2 \\ \|y^{(k)}-y^{(k-1)}\|_2 \end{bmatrix}.
\end{equation} Multiplying \eqref{eq:sor*} from left by the nonnegative matrix $ \begin{bmatrix} 1 & 0 \\ \omega & 1 \end{bmatrix} $, we get \begin{equation}\label{eq:W} \begin{bmatrix} \|x^{(k+1)}-x^{(k)}\|_2 \\ \|y^{(k+1)}-y^{(k)}\|_2 \end{bmatrix} \leq W \begin{bmatrix} \|x^{(k)}-x^{(k-1)}\|_2 \\ \|y^{(k)}-y^{(k-1)}\|_2 \end{bmatrix} \end{equation} with \begin{equation}\label{eq:w} W=\begin{bmatrix} |1-\omega| & \omega\nu \\ \omega |1-\omega| & \omega^2 \nu+|1-\omega| \end{bmatrix}\ge 0. \end{equation} For each $m \geq 1$, if $\rho(W)<1$, it follows from~\eqref{eq:W}, \eqref{eq:w}, \Cref{lem:2.4} and \Cref{lem:2.3} that \begin{align*} \left[\begin{array}{c} \|x^{(k+m)}-x^{(k)}\|_2 \\ \|y^{(k+m)}-y^{(k)}\|_2 \end{array}\right]&= \left[\begin{array}{c} \|\sum_{j=0}^{m-1}(x^{(k+j+1)}-x^{(k+j)})\|_2 \\ \|\sum_{j=0}^{m-1}(y^{(k+j+1)}- y^{(k+j)})\|_2 \end{array}\right] \leq \sum_{j=0}^{m-1} \left[\begin{array}{c} \|x^{(k+j+1)}-x^{(k+j)}\|_2 \\ \|y^{(k+j+1)}- y^{(k+j)}\|_2 \end{array}\right]\nonumber\\ &\leq \sum_{j=0}^{\infty}W^{j+1} \left[\begin{array}{c} \|x^{(k)}- x^{(k-1)}\|_2 \\ \|y^{(k)}- y^{(k-1)}\|_2 \end{array}\right] =(I-W)^{-1}W \left[\begin{array}{c} \|x^{(k)}-x^{(k-1)}\|_2 \\ \|y^{(k)}-y^{(k-1)}\|_2 \end{array}\right]\nonumber\\ &\leq (I-W)^{-1}W^k \left[\begin{array}{c} \|x^{(1)}-x^{(0)}\|_2 \\ \|y^{(1)}-y^{(0)}\|_2 \end{array}\right] \rightarrow \left[\begin{array}{c} 0\\ 0 \end{array}\right]~~(\text{as}\quad k\rightarrow \infty). \end{align*} Hence, $\{x^{(k)}\}_{k=0}^{\infty}$ and~$\{y^{(k)}\}_{k=0}^{\infty}$ are Cauchy sequences and they are convergent in $\mathbb{R}^n$. Let $\lim\limits_{k\rightarrow\infty} x^{(k)} =x_{*}$ and $\lim\limits_{k\rightarrow\infty} y^{(k)} =y_{*}$, it follows from~\eqref{eq:sor} that \begin{eqnarray*} \begin{cases} x_*=(1-\omega)x_*+\omega A^{-1}(y_*+b),\\ y_*=(1-\omega)y_*+\omega |x_*|, \end{cases} \end{eqnarray*} from which and $\omega>0$ we have \begin{eqnarray*} \begin{cases} Ax_{*}-y_*-b=0,\\ y_{*} = |x_*|. \end{cases} \end{eqnarray*} Thus, $x_{*}$ is a solution to AVE~\eqref{eq:ave}. Next, we turn to consider the conditions such that $\rho(W)<1$. Suppose that~$\lambda$ is an eigenvalue of~$W$, and then \begin{eqnarray*} \det (\lambda I-W)=\det\left( \begin{bmatrix} \lambda-|1-\omega| & -\omega\nu \\ -\omega|1-\omega| & \lambda-(\omega^2 \nu+|1-\omega|) \end{bmatrix} \right)=0, \end{eqnarray*} from which we have \begin{equation*}\lambda^2-(\nu\omega^2 +2|1-\omega|)\lambda +(1-\omega)^2=0. \end{equation*} It follows from Lemma~\ref{lem:2.1} that $|\lambda|<1$ (i.e., $\rho(W)<1$) if and only if \begin{align} (1-\omega)^2&<1, \label{eq:con1}\\ \nu\omega^2 +2|1-\omega|&<1+(1-\omega)^2. \label{eq:con2} \end{align} Obviously, the inequality \eqref{eq:con1} holds if and only if $0<\omega<2$. Next, we will continue our discussion by dividing the following two cases. \textbf{Case 1:} when $0< \omega \leq 1$, the inequality \eqref{eq:con2} becomes $$ \nu\omega^2 +2(1-\omega)<1+(1-\omega)^2 \Leftrightarrow \omega^2 \nu<\omega^2, $$ which holds if $0< \nu<1$. \textbf{Case 2:} when $1< \omega <2$, the inequality \eqref{eq:con2} becomes $$ \omega^2 \nu +2(\omega-1)<1+(1-\omega)^2 \Leftrightarrow (\nu-1)\omega^2+4\omega-4<0, $$ which holds if $0< \nu< 1$ and $ 1<\omega<\frac{2-2\sqrt{\nu}}{1-\nu}<2. $ According to \textbf{Case 1} and \textbf{Case 2}, we can conclude that $\rho(W) < 1$ if \eqref{eq:con-sor} holds. Finally, if \eqref{eq:con-sor} holds, we can prove the unique solvability of AVE~\eqref{eq:ave}. 
On the contrary, suppose that $\bar{x}_{*}\neq x_*$ is another solution to AVE~\eqref{eq:ave}. Then we have \begin{numcases}{} \|x_*-\bar{x}_*\|_2 \leq |1-\omega| \|x_*-\bar{x}_*\|_2 +\omega \nu \|y_*-\bar{y}_*\|_2 ,\label{eq:xb1}\\ \|y_*-\bar{y}_*\|_2 \leq|1-\omega| \|y_*-\bar{y}_*\|_2 +\omega \|x_*-\bar{x}_*\|_2,\label{eq:yb1} \end{numcases} where $y_{*}=|x_{*}|$ and $\bar{y}_{*}=|\bar{x}_{*}|$. It follows from \eqref{eq:xb1} and \eqref{eq:yb1} that \begin{align*} \|y_*-\bar{y}_*\|_2 &\leq (|1-\omega|+\frac{\omega^2\nu}{1-|1-\omega|})\|y_*-\bar{y}_*\|_2\\ &=\frac{|1-\omega|-(1-\omega)^2+\omega^2\nu}{1-|1-\omega|}\|y_*-\bar{y}_*\|_2. \end{align*} Recalling \eqref{eq:con2}, we get $\frac{|1-\omega|-(1-\omega)^2+\omega^2\nu}{1-|1-\omega|}<1$, and then $$\|y_*-\bar{y}_*\|_2 <\|y_*-\bar{y}_*\|_2,$$ which is a contradiction. \end{proof} \begin{remark} The condition \eqref{eq:con-sor} seems simpler than the condition \eqref{eq:cond1} proposed in \cite{kema2017}. The condition \eqref{eq:cond1} proposed in \cite{kema2017} is further investigated in \cite[Theorem 2.2]{chyh2024}. In addition, for given $\nu \in (0,1)$, the following \Cref{fig:sor} demonstrates that the range of $\omega$ determined by \eqref{eq:con-sor} is larger than that given in \cite[Theorem 2.2]{chyh2024}. \begin{figure}[htp] \centering \includegraphics[width=0.7\linewidth]{fig_SOR} \caption{Comparison of convergence domains for the SOR-like method. The light blue area represents the range of $\omega$ obtained from \eqref{eq:con-sor}, and the red striped area represents the range of $\omega$ obtained from \cite[Theorem 2.2]{chyh2024}.}\label{fig:sor} \end{figure} \end{remark} \begin{remark} The proof of \Cref{thm:sor} can be seen as a new constructive proof of \Cref{pro:us}. \end{remark} \subsection{Optimal iterative parameter of SOR-like iteration} Similar to the derivation of \eqref{eq:W}, we have \begin{equation}\label{eq:err} \begin{bmatrix} \|x^{(k+1)}-x_*\|_2 \\ \|y^{(k+1)}-y_*\|_2 \end{bmatrix} \leq W \begin{bmatrix} \|x^{(k)}-x_*\|_2 \\ \|y^{(k)}-y_*\|_2 \end{bmatrix} \le \ldots \le W^{k+1} \begin{bmatrix} \|x^{(0)}-x_*\|_2 \\ \|y^{(0)}-y_*\|_2 \end{bmatrix}. \end{equation} In addition, the smaller the value of $\rho(W)$ is, the faster $\{W^k\}$ will converge to zero (as $k\rightarrow +\infty$). Hence, it follows from \eqref{eq:err} that the smaller the value of $\rho(W)$ is, the faster $\{x^{(k)}\}_{k=0}^{\infty}$ will converge to $x_*$. In the following, for given $\nu \in (0,1)$, we will determine the optimal iterative parameter $\omega \in \left(0,\frac{2 - 2\sqrt{\nu}}{1 - \nu}\right)$ by minimizing $\rho(W)$. Given $\nu \in (0,1)$, for $\omega \in \left(0,\frac{2 - 2\sqrt{\nu}}{1 - \nu}\right)$ we have \begin{equation*} \triangle=(\omega^2 \nu +2|1-\omega|)^2-4(1-\omega)^2 > 0, \end{equation*} which implies that \begin{align*} \rho(W)&=\frac{2|1-\omega|+\omega^2\nu+\sqrt{(2|1-\omega|+\omega^2\nu)^2-4(1-\omega)^2}}{2}\\ &=\frac{2|1-\omega|+\omega^2\nu+\omega\sqrt{4|1-\omega|\nu+\omega^2\nu^2}}{2}. \end{align*} Let \begin{equation*}g_\nu(\omega)=2|1-\omega|+\omega^2\nu+\omega\sqrt{4|1-\omega|\nu+\omega^2\nu^2}. \end{equation*} Then, for given $\nu \in (0,1)$, the problem of finding the optimal iterative parameter reduces to finding the minimum point of $g_\nu(\omega)$ for $\omega \in \left(0,\frac{2 - 2\sqrt{\nu}}{1 - \nu}\right)$. Then we have the following theorem. \begin{theorem}\label{thm:op-sor} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and let $\nu=\|A^{-1}\|_2$.
Given $\nu \in (0,1)$, the optimal iterative parameter that minimizes $g_\nu(\omega)$ in $\left(0,\frac{2 - 2\sqrt{\nu}}{1 - \nu}\right)$ is $\omega=1$. \end{theorem} \begin{proof} Since \begin{equation*}g_\nu(\omega)= \begin{cases} 2(1-\omega)+\omega^2\nu+\omega\sqrt{4(1-\omega)\nu+\omega^2\nu^2}, & \text{if}~0<\omega\leq1, \\ 2(\omega-1)+\omega^2\nu+\omega\sqrt{4(\omega-1)\nu+\omega^2\nu^2}, & \text{if}~1<\omega<\frac{-2+2\sqrt{\nu}}{\nu-1}, \end{cases} \end{equation*} we have \begin{equation*}g^\prime_\nu(\omega)= \begin{cases} -2+2\omega\nu+\frac{\omega\nu(-2+\omega\nu)}{\sqrt{\nu(4-4\omega+\omega^2\nu)}}+\sqrt{\nu(4-4\omega+\omega^2 \nu)}, & \mbox{if}~0<\omega\leq1, \\ 2+2\omega\nu+\frac{\omega\nu(2+\omega\nu)}{\sqrt{\nu(-4+4\omega+\omega^2\nu)}}+\sqrt{\nu(-4+4\omega+\omega^2\nu)}, & \mbox{if}~1<\omega<\frac{-2+2\sqrt{\nu}}{\nu-1}. \end{cases} \end{equation*} When $0<\omega\leq1$, we have \begin{equation*} g''_\nu(\omega)=2\nu+\frac{-16\nu^2+12\omega\nu^2+12\omega\nu^3-12\omega^2\nu^3+2\omega^3\nu^4} {(4\nu-4\nu\omega+\omega^2\nu^2)^{\frac{3}{2}}} \end{equation*} and \begin{equation*} g'''_\nu(\omega)=-\frac{24(\omega-2)(\nu-1)\sqrt{\nu(\omega^2\nu-4\omega+4)}} {(\omega^2\nu-4\omega+4)^3}<0. \end{equation*} Hence, $g''_\nu$ is monotonically decreasing on the interval $(0, 1]$. Then $g''_\nu(\omega)<0$ with $\omega \in (0, 1]$ since $g''_\nu$ is continuous and $\lim\limits_{\omega\rightarrow 0^{+}} g''_\nu(\omega)=2(\nu-\sqrt{\nu}) < 0$. Thus, $g'_\nu $ is also monotonically decreasing on the interval $(0, 1]$. Similarly, $g'_\nu(\omega)<0$ with $\omega \in (0, 1]$ since $g'_\nu$ is continuous and $\lim\limits_{\omega\rightarrow 0^{+}} g'_\nu(\omega)=2(\sqrt{\nu}-1) < 0$. Hence, $g_\nu $ is monotonically decreasing on the interval $(0, 1]$. When $1<\omega<\frac{-2+2\sqrt{\nu}}{\nu-1}$, we have $g'_\nu(\omega)>0$ and thus $g_\nu $ is monotonically increasing on the interval $\left(1,\frac{-2+2\sqrt{\nu}}{\nu-1}\right)$. It follows from the above discussion and the continuity of $g_\nu $ that the minimum point of $g_\nu $ on the interval $\left(0,\frac{-2+2\sqrt{\nu}}{\nu-1}\right)$ is $\omega=1$. \end{proof} \begin{remark} In \cite{chyh2024}, in a different sense, Chen et al. proposed the optimal iterative parameter of the SOR-like iteration of the form \begin{equation}\label{eq:opt} \omega^*_{opt}=\begin{cases} \omega_{opt}, & \mbox{if }~\frac{1}{4}<\nu<1, \\ 1, & \mbox{if}~0<\nu\leq \frac{1}{4}, \end{cases} \end{equation} where $\omega_{opt}\in (0,1)$ is the root of {\small\begin{align*} g_{\nu}^1(\omega) &= 6(\omega-1)+8\nu^2\omega^3+2\nu(2\omega-3\omega^2)\\ &\qquad +\frac{[3\left( \omega -1 \right) ^{2}+2\,{\nu}^{2}{\omega}^{4}+2\,\nu{\omega }^{2} \left( 1-\omega \right)][6(\omega-1)+8\nu^2\omega^3+2\nu(2\omega-3\omega^2)] -8(\omega-1)^3}{\sqrt{[3\left( \omega -1 \right) ^{2}+2\,{\nu}^{2}{\omega}^{4}+2\,\nu{\omega }^{2} \left( 1-\omega \right)]^2-4(\omega-1)^4}}. \end{align*}} The root of $g_{\nu}^1$ doesn't have a analytical form while it can be approximately calculated by the classical bisection method. Given $\nu\in(0,1)$, our new optimal iterative parameter has a analytical form. \end{remark} \section{New convergence and optimal iterative parameter of FPI method}\label{sec:FPI} In this section, we present new convergence result of FPI for solving AVE \eqref{eq:ave} and determine its optimal iterative parameter. \subsection{New convergence result of FPI} Similar to the proof of \Cref{thm:sor}, we can obtain the following theorem. 
However, we retain a sketch of the proof here in order to determine the optimal iterative parameter of FPI. \begin{theorem}\label{thm:fpi} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and $\nu=\|A^{-1}\|_2$. If \begin{equation}\label{eq:con-fpi} 0< \nu<1 \quad \text{and} \quad 0< \tau <\frac{2}{\nu+1}, \end{equation} then AVE \eqref{eq:ave} has a unique solution for any $b\in \mathbb{R}^n$ and the sequence~$\{(x^{(k)},y^{(k)})\}^\infty_{k=0}$ generated by~\eqref{eq:fpi} globally linearly converges to~$(x_{*}, y_{*}=|x_*|)$, where $x_*$ is the unique solution of AVE~\eqref{eq:ave}. \end{theorem} \begin{proof} Similar to the proof of \Cref{thm:sor}, we have \begin{equation}\label{eq:U} \begin{bmatrix} \|x^{(k+1)}-x^{(k)}\| \\ \|y^{(k+1)}-y^{(k)}\| \end{bmatrix} \leq U \begin{bmatrix} \|x^{(k)}-x^{(k-1)}\| \\ \|y^{(k)}-y^{(k-1)}\| \end{bmatrix} \end{equation} with \begin{equation}\label{eq:u} U=\begin{bmatrix} 0 & \nu \\ 0 & \tau \nu+|1-\tau| \end{bmatrix}\ge 0. \end{equation} Then, the proof is completed if $\rho(U)<1$. By some algebra, $\rho(U) < 1$ if \eqref{eq:con-fpi} holds. \end{proof} \begin{remark} \Cref{fig:fpi} illustrates the comparison of convergence domains for FPI, from which we see that our new result substantially extends the convergence domain of \eqref{eq:cfpi}. Moreover, we fill the gap mentioned in \Cref{sec:intro} without modifying the original FPI. \begin{figure}[htp] \centering \includegraphics[width=0.7\linewidth]{fig_FPI} \caption{Comparison of convergence domains for the FPI method. The light blue area represents the range of $\tau$ obtained from \eqref{eq:con-fpi}, and the red striped area represents the range of $\tau$ obtained from \eqref{eq:cfpi}. }\label{fig:fpi} \end{figure} \end{remark} \begin{remark} The proof of \Cref{thm:fpi} can also be seen as a new constructive proof of \Cref{pro:us}. \end{remark} \subsection{Optimal iterative parameter of FPI method} An optimal iterative parameter for FPI is lacking in the literature. In this subsection, we will give the optimal iterative parameter which minimizes $\rho(U)$. Similar to the derivation of \eqref{eq:U}, we have \begin{equation}\label{eq:errfpi} \begin{bmatrix} \|x^{(k+1)}-x_*\|_2 \\ \|y^{(k+1)}-y_*\|_2 \end{bmatrix} \leq U \begin{bmatrix} \|x^{(k)}-x_*\|_2 \\ \|y^{(k)}-y_*\|_2 \end{bmatrix} \le \ldots \le U^{k+1} \begin{bmatrix} \|x^{(0)}-x_*\|_2 \\ \|y^{(0)}-y_*\|_2 \end{bmatrix}. \end{equation} Hence, it follows from \eqref{eq:errfpi} that the smaller the value of $\rho(U)$ is, the faster $\{x^{(k)}\}_{k=0}^\infty$ converges to $x_*$. In the following, for given $\nu \in (0,1)$, we will determine the optimal iterative parameter $\tau \in \left(0, \frac{2}{\nu+1}\right)$ that minimizes $\rho(U)$. Specifically, we have the following theorem. \begin{theorem}\label{thm:op-fpi} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular matrix and $\nu=\|A^{-1}\|_2$. Given $\nu \in (0,1)$, the optimal iterative parameter that minimizes $\rho(U)$ in $\left(0, \frac{2}{\nu+1}\right)$ is $\tau=1$. \end{theorem} \begin{proof} From \eqref{eq:u}, for given $\nu \in (0,1)$, let \begin{equation}\label{eq:g} g_\nu(\tau) = \tau\nu+ |1-\tau|. \end{equation} When $0< \tau \leq 1$, \eqref{eq:g} becomes \begin{equation*} g_\nu(\tau)=\tau\nu+1-\tau. \end{equation*} Then, $g^\prime_\nu(\tau)=\nu-1< 0$. Hence, $g_\nu$ is a monotonically decreasing function in the interval $(0,1]$. 
When $1<\tau<\frac{2}{\nu+1}$, \eqref{eq:g} becomes \begin{equation*} g_\nu(\tau)=\tau\nu+\tau-1, \end{equation*} and then we get $g^\prime_\nu(\tau)=\nu+1 > 0$. Hence, $g_\nu$ is a monotonically increasing function in the interval $\left(1, \frac{2}{\nu+1}\right)$. In conclusion, in the interval $\left(0, \frac{2}{\nu+1}\right)$, the continuous function $g_\nu$ attains its minimum at $\tau = 1$. \end{proof} \begin{remark} An interesting finding is that the SOR-like iteration \eqref{eq:sor} and the FPI \eqref{eq:fpi} are the same when they are equipped with our optimal iterative parameters $\omega = 1$ and $\tau = 1$, respectively. \end{remark} \section{Numerical experiments}\label{sec:ne} In this section, we present two numerical examples to illustrate the superior performance of the SOR-like iteration method with our optimal iteration parameter ``$\omega_{\rm nopt} = 1$'' (denoted by ``SORLnopt'') for solving AVE \eqref{eq:ave}. We compare the SORLnopt algorithm with the SORLo, SORLopt, SORLaopt, and SORLno algorithms mentioned in \cite{chyh2024}, and with the FPIno algorithm (the FPI method \cite{ke2020} with the numerically optimal iterative parameter ``$\tau_{{\rm no}}$'', which is selected from $\tau = [0.001 : 0.001 : 1.999]$ and is the first one to reach the minimal number of iterations of the method). Note that in all these algorithms, the main task per iteration is solving a system of linear equations. In this paper, the tested methods are implemented in conjunction with the Cholesky factorization since the coefficient matrix is symmetric positive definite. Specifically, we use $dA = \textbf{\text{decomposition}}(A, \text{`chol'})$ to generate the Cholesky decomposition of $A$, where `decomposition' is the MATLAB routine that returns the corresponding decomposition of a matrix $A$, which can be used to solve the linear system $Ax = b$ efficiently. The call $x = d A \backslash b$ returns the same vector as $A\backslash b$, but is typically faster. In the numerical results, we report ``IT" (the number of iterations), ``CPU" (the elapsed CPU time in seconds), and ``RES" (the relative residual error). RES is defined by $$ \operatorname{RES} =\frac{ \left\| A x^{(k)} - \left| x^{(k)} \right| - b \right\|}{\|b\|}. $$ All tests are started from the initial zero vectors and terminated if the current iteration satisfies $\operatorname{RES} \leq 10^{-8}$ or the number of prescribed maximal iteration steps $k_{\max} = 100$ is exceeded (denoted by ``-"). We denote by ``Range1'' the range of $\omega$ for the SOR-like iteration determined by (2.24)-(2.26) in \cite{chyh2024}, by ``Range2'' the range of $\omega$ for the SOR-like iteration determined by \eqref{eq:con-sor}, by ``Range3'' the range of $\tau$ for FPI determined by \eqref{eq:cfpi}, and by ``Range4'' the range of $\tau$ for FPI determined by \eqref{eq:con-fpi}. In order to obtain more accurate CPU times, we run each test problem five times with each method and take the average. All computations are done in MATLAB R2021a on a personal computer with an Intel Core(TM) i7 CPU at 2.60 GHz and 16.0 GB of memory. 
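For concreteness, the following Python sketch mirrors this experimental setup (zero initial vectors, the RES stopping rule with tolerance $10^{-8}$ and $k_{\max}=100$, and a Cholesky factorization computed once and reused). It is only an illustration under the assumption that the SOR-like update \eqref{eq:sor} takes the form $x^{(k+1)}=(1-\omega)x^{(k)}+\omega A^{-1}(y^{(k)}+b)$, $y^{(k+1)}=(1-\omega)y^{(k)}+\omega|x^{(k+1)}|$ commonly used in the AVE literature (so that $\omega=1$ coincides with the FPI with $\tau=1$); it is not the MATLAB code used to produce the reported results, and the Example~\ref{exam:5.1}-style test matrix at the end is included only as a quick check.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sorl_ave(A, b, omega=1.0, tol=1e-8, k_max=100):
    # Sketch of the SOR-like iteration for the AVE  A x - |x| = b.
    # A is assumed symmetric positive definite, so the Cholesky
    # factorization is computed once and reused at every iteration.
    x = np.zeros(A.shape[0])
    y = np.zeros(A.shape[0])
    factor = cho_factor(A)
    nb = np.linalg.norm(b)
    for k in range(1, k_max + 1):
        x = (1 - omega) * x + omega * cho_solve(factor, y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        res = np.linalg.norm(A @ x - np.abs(x) - b) / nb   # RES as defined above
        if res <= tol:
            break
    return x, k, res

# Example 5.1-style data: A = Tridiag(-I, S, -I) with S = tridiag(-1, 8, -1).
m = 8
S = 8 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
I = np.eye(m)
A = np.kron(np.eye(m), S) - np.kron(np.eye(m, k=1), I) - np.kron(np.eye(m, k=-1), I)
x_star = np.tile([-1.0, 1.0], m * m // 2)
b = A @ x_star - np.abs(x_star)
x, it, res = sorl_ave(A, b, omega=1.0)
\end{verbatim}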
\begin{example}\label{exam:5.1} Consider AVE \eqref{eq:ave} with \begin{equation*} A = {\rm Tridiag}(-I_m,S_m,-I_m)= \begin{bmatrix} S_m & -I_m & 0 & \ldots & 0 & 0 \\ -I_m & S_m & -I_m & \ldots & 0 & 0 \\ 0 & -I_m & S_m & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & S_m & -I_m \\ 0 & 0 & 0 & \ldots & -I_m & S_m \end{bmatrix}\in\mathbb{R}^{n\times n}, \end{equation*} \begin{equation*} S_m={\rm tridiag}(-1,8,-1)= \begin{bmatrix} 8 & -1 & 0 & \ldots & 0 & 0 \\ -1 & 8 & -1 & \ldots & 0 & 0 \\ 0 & -1 & 8 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 8 & -1 \\ 0 & 0 & 0 & \ldots & -1 & 8 \end{bmatrix}\in\mathbb{R}^{m\times m} \end{equation*} and $b=Ax^*-|x^*|$, where $x^*=[-1,1,-1,1,\cdots,-1,1]^\top\in\mathbb{R}^n$. Here,we have $n=m^2$. The parameters for this example are displayed in Table \ref{t1}, from which we can find that$~\omega_{\rm o},~\omega_{{\rm no}},~\omega_{{\rm aopt}},~\omega_{{\rm opt}},~\omega_{{\rm nopt}} \in {\rm Range1} \subset {\rm Range2}$ and $\omega_{{\rm nopt}}, \tau_{\rm no} \in {\rm Range3} \subset {\rm Range4}$. In addition, the larger the value of $\nu$ is, the smaller the range of $\omega$ or $\tau$ is. Numerical results for this example are reported in Table \ref{t2}. From Table \ref{t2}, we find SORLopt, SORLno, SORLnopt and FPIno take the same number of iteration, but SORLnopt is better than others in terms of CPU time. SORLaopt performs the worst in terms of IT, SORLopt always performs better than SORLo and SORLaopt in terms of IT and CPU. \begin{table}[htp] \centering \caption{Parameters for Example~\ref{exam:5.1}}\label{t1} \begin{tabular}{lllll} \hline &$m$ & & & \\ \cline{2-5} &8 &16 &32 &64\\ \hline $\nu$ &0.2358 &0.2458 &0.2489 &0.2497\\ $\omega_{\rm o}$ &1.0671 &1.0704 &1.0714 &1.0717\\ $\omega_{{\rm no}}$ &0.9810 &0.9830 &0.9840 &0.9850\\ $\omega_{{\rm aopt}}$ &0.8354 &0.8305 &0.8290 &0.8286\\ $\omega_{{\rm opt}}$ &1 &1 &1 &1\\ $\tau_{{\rm no}}$ &0.9610 &0.9660 &0.9680 &0.9690 \\ {\rm Range1} &(0.3994, 1.3447) &(0.4003, 1.3347) &(0.4005, 1.3316) &(0.4006, 1.3308)\\ {\rm Range2} &(0, 1.3463) &(0, 1.3371) &(0, 1.3343) &(0, 1.3336) \\ {\rm Range3} &(0.0369, 1.5956) &(0.0407, 1.5808) &(0.0419, 1.5762) &(0.0422, 1.5750)\\ {\rm Range4} &(0, 1.6184) &(0, 1.6054) &(0, 1.6014) &(0, 1.6004) \\\hline \end{tabular} \end{table} \begin{table}[htp] \centering \caption{Numerical results for Example~\ref{exam:5.1}}\label{t2} \begin{tabular}{llllll} \hline Method & &$m$ & & & \\ \cline{2-6} & & 8 & 16 & 32 & 64 \\ \hline SORLo &IT & 16 &16 &17 &17 \\ &CPU &0.0381 &0.0413 &0.0495 &0.1502 \\ &RES &4.9538e-09 &8.5231e-09 &3.3922e-09 &3.6490e-09 \\ SORLaopt &IT &18 &19 &19 &19 \\ &CPU &0.0365 &0.0402 &0.0493 &0.1463 \\ &RES &8.6677e-09 &4.9002e-09 &5.6512e-09 &5.9552e-09 \\ SORLopt &IT &11 &11 &11 &11 \\ &CPU &0.0354 &0.0374 &0.0447 &0.1408 \\ &RES &2.6003e-09 &3.2109e-09 &3.5117e-09 &3.6612e-09 \\ SORLno &IT &11 &11 &11 &11 \\ &CPU &1.0504 &1.7228 &4.4550 &18.1472 \\ &RES &9.5801e-09 &9.8268e-09 &9.8327e-09 &9.4146e-09 \\ SORLnopt &IT &11 &11 &11 &11 \\ &CPU &0.0088 &0.0103 &0.0102 &0.0143 \\ &RES &2.6003e-09 &3.2109e-09 &3.5117e-09 &3.6612e-09 \\ FPIno &IT &11 &11 &11 &11 \\ &CPU &0.8285 &1.3509 &3.5003 &14.4657 \\ &RES &9.9594e-09 &9.7292e-09 &9.7047e-09 &9.6682e-09 \\ \hline \end{tabular} \end{table} \end{example} \begin{example}\label{exam:5.2} Consider AVE \eqref{eq:ave} with the matrix $A\in\mathbb{R}^{n\times n}$ arises from six different test problems listed in Table 
\ref{t3}. These matrices are sparse and symmetric positive definite, and $\|A^{-1}\|_2<1$. In addition, let $b=Ax^*-|x^*|$ with $x^*=[-1,1,-1,1,\cdots,-1,1]^\top\in\mathbb{R}^n$. \begin{table}[htp] \centering \caption{Problems for \Cref{exam:5.2}}\label{t3} \begin{tabular}{llll} \hline Problem & $n$ & Problem & $n$ \\\hline mesh1e1 & 48 & Trefethen\_{20b} & 19 \\ mesh1em1 & 48 & Trefethen\_{200b} & 199 \\ mesh2e1 & 306 & Trefethen\_{20000b} & 19999 \\ \hline \end{tabular} \end{table} The parameters for \Cref{exam:5.2} are displayed in Table \ref{t4}. It follows from Table \ref{t4} that the range of $\omega$ or $\tau$ becomes smaller as the value of $\nu$ becomes larger. Furthermore, all values of $\omega_{{\rm no}},~\omega_{\rm aopt},~\omega_{\rm opt},~\omega_{\rm nopt}$ belong to ${\rm Range1}$ and $\omega_{{\rm nopt}}, \tau_{\rm no}$ belong to ${\rm Range4}$, while $\omega_{\rm o}$ does not belong to Range1 or Range2 for the former three test problems and belongs to Range1 for the latter three test problems. Moreover, for the test problem mesh2e1, $\tau_{\rm no}$ does not belong to Range3, because $\nu=0.7615>\frac{\sqrt{2}}{2}$ leads to ${\rm Range3}=\emptyset$, while it belongs to Range4. In addition, we can find that~${\rm Range1} \subset {\rm Range2}$ and ${\rm Range3} \subset {\rm Range4}$. \begin{table}[htp] \setlength{\tabcolsep}{1pt} \small \centering \caption{Parameters for Example~\ref{exam:5.2}}\label{t4} \begin{tabular}{lllllll} \hline &Problem & & & & & \\ \cline{2-7} &mesh1e1 &mesh1em1 &mesh2e1 &Trefethen\_20b &Trefethen\_200b &Trefethen\_20000b\\\hline $\nu$ &0.5747 &0.6397 & 0.7615 &0.4244 &0.4265 &0.4268\\ $\omega_{\rm o}$ &1.2105 &1.2498 &1.3438 &1.1372 &1.1381 &1.1382\\ $\omega_{\rm no}$ &0.9430 &0.9290 &0.9320 &0.9300 &0.9510 &0.9990\\ $\omega_{\rm aopt}$ &0.7102 &0.6929 &0.6641 &0.7569 &0.7561 &0.7561\\ $\omega_{\rm opt}$ &0.8218 &0.7848 &0.7210 &0.9114 &0.9102 &0.9101 \\ $\tau_{\rm no}$ &0.8920 &0.8350 &0.8710 &0.8690 &0.8410 &0.9700\\ {\rm Range1} &(0.4361, 1.0753) &(0.4460, 1.0367) &(0.4692, 0.9413) &(0.4175, 1.1785) &(0.4177, 1.1769) &(0.4177, 1.1767)\\ {\rm Range2} &(0, 1.1376) &(0, 1.1112) &(0, 1.0680) &(0, 1.2110) &(0, 1.2099) &(0, 1.2097)\\ {\rm Range3} &(0.4271, 1.1547) &(0.6422, 1.0786) &$\emptyset $ &(0.1642, 1.3377) &(0.1665, 1.3351) &(0.1669, 1.3347)\\ {\rm Range4} &(0, 1.2701) &(0, 1.2197) &(0, 1.1354) &(0, 1.4041) &(0, 1.4020) &(0, 1.4017)\\\hline \end{tabular} \end{table} Table \ref{t5} reports the numerical results for Example \ref{exam:5.2}. From Table \ref{t5}, we can see that for the test problems mesh1e1, mesh1em1, mesh2e1, Trefethen\_20b and Trefethen\_200b, SORLnopt performs better than the others in terms of CPU, even though it does not have the fewest number of iterations. For the test problem Trefethen\_20000b, SORLnopt performs the best in terms of IT and CPU. In addition, for the problems mesh1em1, Trefethen\_20b and Trefethen\_200b, the number of iterations of SORLopt is less than that of SORLnopt, while the reverse holds for the other problems. SORLopt is better than SORLo and SORLaopt in terms of IT and CPU. FPIno and SORLno have the longest CPU times compared to the other algorithms, and they show the same number of iterations except for the problems mesh1em1 and Trefethen\_200b. In summary, for different problems, the optimal parameters of the same algorithm under different metrics may have different performance. 
\begin{table}[h] \setlength{\tabcolsep}{1.5pt} \centering \caption{Numerical results for Example~\ref{exam:5.2}}\label{t5} \begin{tabular}{llllllll} \hline & &Method & & & & \\ \cline{3-8} Problem & & SORLo & SORLaopt & SORLopt & SORLno & SORLnopt &FPIno \\ \hline mesh1e1 &IT & -- &35 &27 &19 &26 &19 \\ &CPU & -- &0.0387 &0.0367 &1.1828 &0.0100 &1.0529 \\ &RES & -- &7.3969e-09 &5.9299e-09 &9.8415e-09 &8.9665e-09 &9.9665e-09 \\ mesh1em1 &IT & -- &33 &27 &18 &33 &19 \\ &CPU & -- &0.0381 &0.0373 &1.1812 &0.0097 &1.0739 \\ &RES & -- &8.5320e-09 &5.9776e-09 &9.8672e-09 &8.2026e-09 &9.9313e-09 \\ mesh2e1 &IT & -- &35 &31 &18 &24 &18 \\ &CPU & -- &0.0289 &0.0280 &2.2952 &0.0103 &2.0373 \\ &RES & -- &9.0933e-09 &7.3981e-09 &9.6759e-09 &9.3366e-09 &9.9450e-09 \\ Trefethen\_20b &IT &49 &20 &13 &12 &15 &12 \\ &CPU &0.0291 &0.0272 &0.0254 &0.9417 &0.0095 &0.7934 \\ &RES &9.2171e-09 &6.3617e-09 &7.0832e-09 &9.5933e-09 &9.0253e-09 &9.9970e-09 \\ Trefethen\_200b &IT &36 &14 &10 &8 &11 &9 \\ &CPU &0.0478 &0.0435 &0.0423 &3.0857 &0.0112 &2.5563 \\ &RES &8.6534e-09 &9.3612e-09 &3.7711e-09 &9.7946e-09 &7.8160e-09 &9.9102e-09 \\ Trefethen\_20000b &IT &10 &14 &8 &3 &3 &3 \\ &CPU &7.9540 &8.2715 &7.8463 &4255.4442 &5.5824 &5416.5100 \\ &RES &6.5615e-09 &2.6424e-09 &4.2785e-09 &7.9680e-09 &8.0323e-09 &9.7652e-09 \\ \hline \end{tabular} \end{table} \end{example} \section{Conclusions}\label{sec:conclusions} In this paper, we provide a new convergence analysis of the SOR-like iteration and the FPI method, which results in wider convergence ranges. Based on the new analysis, we obtain new optimal iterative parameters for both the SOR-like iteration and the FPI. We also find that, with our optimal iterative parameters $\omega=1$ and $\tau=1$, the SOR-like iteration and the FPI coincide. Two numerical examples are given to demonstrate the superior performance of the SOR-like iteration method (and the FPI method) with our optimal iteration parameters. \section*{Acknowledgments} C. Chen was partially supported by the Fujian Alliance of Mathematics (No. 2023SXLMQN03) and the Natural Science Foundation of Fujian Province (No. 2021J01661). D. Han was partially supported by the Ministry of Science and Technology of China (No. 2021YFA1003600) and the National Natural Science Foundation of China (Nos. 12126603 and 12131004). \bibliographystyle{plain} \bibliography{sorfpi4ave} \end{document}
2412.12607v1
http://arxiv.org/abs/2412.12607v1
Linear Convergence of Resolvent Splitting with Minimal Lifting and its Application to a Primal-Dual Algorithm
\documentclass[10pt]{article} \usepackage{algorithm2e} \usepackage{authblk} \usepackage{blindtext} \usepackage[utf8]{inputenc} \usepackage[margin=2cm]{geometry} \usepackage{enumerate} \usepackage{amsmath,amsthm,amssymb,amsfonts} \usepackage{todonotes} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \captionsetup[figure]{justification=centering} \usepackage[rightcaption]{sidecap} \usepackage{stmaryrd} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{assumption}[theorem]{Assumption} \usepackage{multirow} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, citecolor=blue, urlcolor=blue } \usepackage{todonotes} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\Fix}{Fix} \DeclareMathOperator{\prox}{prox} \DeclareMathOperator{\gra}{gra} \DeclareMathOperator{\zer}{zer} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\epi}{epi} \DeclareMathOperator{\sri}{sri} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\ri}{ri} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\iso}{iso} \newcommand{\setto}{\rightrightarrows} \providecommand{\keywords}[1] { \small \textbf{\textit{Keywords.}} #1 } \title{Linear Convergence of Resolvent Splitting with Minimal Lifting and its Application to a Primal-Dual Algorithm} \author[*]{Farhana A. Simi} \author[*]{Matthew K. Tam} \affil[*]{School of Mathematics and Statistics, University of Melbourne, Parkville VIC 3010, Australia. Email: \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}} \begin{document} \maketitle \begin{abstract} We consider resolvent splitting algorithms for finding a zero of the sum of finitely many maximally monotone operators. The standard approach to solving this type of problem involves reformulating as a two-operator problem in the product-space and applying the Douglas--Rachford algorithm. However, existing results for linear convergence cannot be applied in the product-space formulation due to a lack of appropriate Lipschitz continuity and strong monotonicity. In this work, we investigate a different approach that does not rely on the Douglas--Rachford algorithm or the product-space directly. We establish linear convergence of the ``resolvent splitting with minimal lifting" algorithm due to Malitsky \& Tam for monotone inclusions with finitely many operators. Our results are then used to derive linear convergence of a primal-dual algorithm for convex minimization problems involving infimal convolutions. The theoretical results are demonstrated on numerical experiments in image denoising. \end{abstract} \paragraph*{Keywords.} Resolvent splitting, linear convergence, Lipschitz continuity, strong monotonicity, image denoising \paragraph*{MSC2020.} 47H05, 49M27, 65K10, 90C30 \section{Introduction} Let $\mathcal{H}$ be a real Hilbert space. In this work, we consider the monotone inclusion problem given by \begin{equation} \label{eq:1n} \text{find } x\in\mathcal{H} \text{ such that } 0\in\sum_{i=1}^{n}A_{i}(x)\subseteq\mathcal{H}, \end{equation} where the (set-valued) operator $A_{i}:\mathcal{H} \setto \mathcal{H}$ is maximally monotone for all $i\in \{1,\dots,n\}$. 
The setting of problem~\eqref{eq:1n} is quite general and includes many fundamental problems that arise in mathematical optimization such as nonsmooth minimization~\cite{bagirov2014introduction,{rockafellar1970monotone},{rockafellar1997convex}}, variational inequalities~\cite{marcotte1995convergence,{rockafellar1976monotone},tam2023bregman}, and fixed point problems \cite{eckstein1992douglas,lions1979splitting,setzer2009split}. Of particular interest for this work is the following convex minimization problem involving infimal convolution. \begin{example}\label{example 1.1} Let $\mathcal{H}_{1} \text{ and } \mathcal{H}_{2}$ be real Hilbert spaces. Suppose $C:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}$ is bounded and linear, $f_{i}:\mathcal{H}_{1}\rightarrow\mathbb{R}$ is convex and differentiable with Lipschitz continuous gradient for $i=2,\dots,n-1$, $f_{n}:\mathcal{H}_{1}\rightarrow(-\infty,+\infty]$ is proper, closed and strongly convex, $g_{i}:\mathcal{H}_{2}\rightarrow(-\infty,+\infty]$ is proper, closed and strongly convex for $i=2,\dots,n-1$, and $g_{n}:\mathcal{H}_{2}\rightarrow\mathbb{R}$ is convex and differentiable with Lipschitz continuous gradient. Consider the minimization problem \begin{equation} \label{convex optimization problem intro} \min_{u\in\mathcal{H}_{1}}\quad \sum_{i=2}^{n}f_{i}(u)+(g_{2}\Box\cdot\cdot\cdot\Box g_{n})(Cu), \end{equation} where $(g_{2}\Box\cdot\cdot\cdot\Box g_{n})$ denotes the infimal convolution of $g_{2},\dots,g_{n}$. The first order optimality condition for \eqref{convex optimization problem intro} can be expressed as the monotone inclusion \begin{equation} \label{monotone inclusion n=2*} \text{find }\begin{pmatrix} u\\v \end{pmatrix}\in\mathcal{H}_{1}\times\mathcal{H}_{2}\text{ such that }\begin{pmatrix} 0\\0 \end{pmatrix}\in\begin{pmatrix} 0&C^*\\-C&0 \end{pmatrix}\begin{pmatrix} u\\v \end{pmatrix}+\sum_{i=2}^{n-1}\begin{pmatrix} \nabla f_{i}(u)\\\nabla g_{i}^*(v)\end{pmatrix}+\begin{pmatrix} \partial f_{n}(u)\\\partial g^*_{n}(v) \end{pmatrix}, \end{equation} where $f^*_{i}$ and $g^*_{i}$ denote conjugates of $f_{i}$ and $g_{i}$ respectively for $i=2,\dots,n$. The inclusion problem~\eqref{monotone inclusion n=2*} is in the form of~\eqref{eq:1n} with \begin{equation*} \label{monotone operators} \mathcal{H}=\mathcal{H}_1\times\mathcal{H}_{2},\quad A_{1}=\begin{pmatrix} 0&C^*\\-C&0 \end{pmatrix}, \quad A_{i}=\begin{pmatrix} \nabla f_{i}\\ \nabla g_{i}^*\end{pmatrix},\quad A_{n}=\begin{pmatrix} \partial f_{n}\\ \partial g_{n}^* \end{pmatrix}, \end{equation*} where $i=2,\dots,n-1$. \end{example} \medskip \emph{Resolvent splittings} are a family of algorithms that can be used to solve~\eqref{eq:1n}. These work by invoking each operator in~\eqref{eq:1n} individually, through their resolvents, rather than using the whole sum directly. Recall that the resolvent of a maximally monotone operator $A$ is the operator $J_{A}:\mathcal{H}\rightarrow\mathcal{H}$ defined as $J_{A}=(\Id+A)^{-1}$~\cite[Corollary]{minty1962monotone}. A well known example of a resolvent splitting, which solves the monotone inclusion problem \eqref{eq:1n} when $n=2$, is the \emph{Douglas--Rachford algorithm}~\cite{{lions1979splitting},{svaiter2011weak}}. Let $T_{\rm DR}:\mathcal{H}\rightarrow\mathcal{H}$ and ${z}^{0}\in \mathcal{H}$, this algorithm can be described in terms of the iteration \begin{equation} \label{eq:4n} {z}^{k+1}=T_{\rm DR}({z}^k):={z}^k+J_{A_{2}}(2J_{A_{1}}({z}^k)-{z}^k)-J_{A_{1}}({z}^k) \quad \forall k\in\mathbb{N}. 
\end{equation} The sequence $({z}^k)_{k\in \mathbb{N}}$ given by \eqref{eq:4n} converges weakly to a point ${z}\in \mathcal{H}$ with $z=T_{\rm DR}(z)$, and the \emph{shadow sequence} $\bigl(J_{A_{1}}({z}^k)\bigr)_{k\in \mathbb{N}}$ converges weakly to $J_{A_{1}}(z)$, which is a solution of \eqref{eq:1n}, see \cite[Theorem~1]{svaiter2011weak} and \cite[Theorem~2.3]{svaiter2019simplified}. Further, if one operator is Lipschitz continuous and the other is strongly monotone, then the result can be refined --- both sequences can be shown to converge linearly, see~\cite[Theorem~4.3]{moursi2019douglas} and \cite[Corollary~4.10 \& Remark~4.11]{dao1809adaptive}. Linear convergence of the Douglas--Rachford algorithm has also been established in a number of important, but specialized, settings of~\eqref{eq:1n} including where the operators are assumed to be subdifferentials~\cite{giselsson2016linear,giselsson2017tight} or normal cones~\cite{bauschke2016optimal,bauschke2014rate,bauschke2016douglas,hesse2013nonconvex,hesse2014alternating,phan2016linear}. The standard way to solve \eqref{eq:1n} for $n>2$ operators involves using the Douglas--Rachford algorithm applied to a two-operator reformulation in the product space $\mathcal{H}^n$. Precisely, \begin{equation}\label{product space DR} \text{find }\mathbf{x}=(x,\dots,x)\in \mathcal{H}^n \text{ such that } 0\in (A+N_{\Delta_{n}})(\mathbf{x})\subseteq \mathcal{H}^n, \end{equation} where $A=(A_{1},\dots, A_{n})$ and $N_{\Delta_{n}}$ denotes the normal cone to the \emph{diagonal subspace} $\Delta_{n}:=\{\mathbf{x}=(x_{1},\dots, x_{n})\in \mathcal{H}^n: x_{1}=\dots= x_{n}\}$. Any solution $\mathbf{x}=(x,\dots,x)$ of \eqref{product space DR} is necessarily contained in $\Delta_n$ with $x$ a solution to \eqref{eq:1n}, and vice versa. However, many of the existing results for linear convergence of the Douglas--Rachford algorithm do not apply to \eqref{product space DR} as the normal cone $N_{\Delta_{n}}$ is neither Lipschitz continuous nor strongly monotone. This study aims to establish linear convergence of the ``resolvent splitting algorithm with minimal lifting" due to Malitsky and Tam~\cite{malitsky2023resolvent}. This algorithm does not rely on a product space formulation in solving the inclusion problem~\eqref{eq:1n}. Given $T_{\rm MT}:\mathcal{H}^{n-1}\rightarrow\mathcal{H}^{n-1}$, $\mathbf{z}^0=(z_{1}^0,\dots, z_{n-1}^0)\in \mathcal{H}^{n-1}$, and $\gamma\in(0, 1)$, this algorithm can be described in terms of the iteration \begin{equation}\label{eq:1} \mathbf{z}^{k+1}=T_{\rm MT}(\mathbf{z}^k)=\mathbf{z}^k+\gamma\begin{pmatrix} x_{2}^{k}-x_{1}^{k}\\x_{3}^{k}-x_{2}^{k}\\\vdots \\x_{n}^{k}-x_{n-1}^{k} \end{pmatrix}, \end{equation} where $\mathbf{x}^k=(x_{1}^k,\dots,x_{n}^{k})\in\mathcal{H}^{n}$ depends on $\mathbf{z}^k=(z_{1}^k, \dots, z_{n-1}^k)\in \mathcal{H}^{n-1}$ and is given by\\ \begin{equation} \label{eq:2} \left\{\begin{aligned} x_{1}^k &=J_{A_{1}}(z_{1}^k)\\ x_{i}^k &=J_{A_{i}}(z_{i}^k+x_{i-1}^k-z_{i-1}^k)&\forall i\in \{2,\dots,n-1\} \\ x_{n}^k &=J_{A_{n}}(x_{1}^k+x_{n-1}^k-z_{n-1}^k). \end{aligned}\right. \end{equation} The sequence $(\mathbf{z}^k)_{k\in\mathbb{N}}$ given by~\eqref{eq:1} converges weakly to a point $\mathbf{z}^*\in\mathcal{H}^{n-1}$ with $\mathbf{z}^*=T_{\rm MT}(\mathbf{z}^*)$, and the shadow sequence $(\mathbf{x}^k)_{k\in\mathbb{N}}$ converges weakly to a point $(x,\dots,x)\in\mathcal{H}^n$ with $x=J_{A_{1}}(z^*_{1})$, which is a solution of \eqref{eq:1n}, see \cite[Theorem 4.5]{malitsky2023resolvent}. 
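To make the scheme \eqref{eq:1}--\eqref{eq:2} concrete, the following Python sketch implements one possible realization, with the resolvents $J_{A_{i}}$ supplied as callables. It is only an illustration: the stopping rule based on the size of the update step and the toy choice of operators $A_{i}(x)=x-p_{i}$ (whose resolvents are $J_{A_{i}}(w)=\tfrac{1}{2}(w+p_{i})$ and whose sum has the average of the points $p_{i}$ as its unique zero) are our own assumptions and are not part of \cite{malitsky2023resolvent}.
\begin{verbatim}
import numpy as np

def mt_resolvent_splitting(resolvents, z0, gamma=0.5, max_iter=1000, tol=1e-10):
    # Resolvent splitting with minimal lifting: z lives in H^{n-1}, x in H^n,
    # and the resolvents J_{A_i} are supplied as callables.
    z = [np.array(zi, dtype=float) for zi in z0]
    n = len(resolvents)
    for _ in range(max_iter):
        x = [None] * n
        x[0] = resolvents[0](z[0])                                # x_1 = J_{A_1}(z_1)
        for i in range(1, n - 1):
            x[i] = resolvents[i](z[i] + x[i - 1] - z[i - 1])      # x_i = J_{A_i}(z_i + x_{i-1} - z_{i-1})
        x[n - 1] = resolvents[n - 1](x[0] + x[n - 2] - z[n - 2])  # x_n = J_{A_n}(x_1 + x_{n-1} - z_{n-1})
        steps = [gamma * (x[i + 1] - x[i]) for i in range(n - 1)]
        z = [zi + si for zi, si in zip(z, steps)]
        if max(np.linalg.norm(s) for s in steps) <= tol:          # illustrative stopping rule
            break
    return x, z

# Toy usage: A_i(x) = x - p_i, so the unique zero of the sum is the mean of the p_i.
points = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 3.0])]
resolvents = [lambda w, p=p: 0.5 * (w + p) for p in points]
x, _ = mt_resolvent_splitting(resolvents, z0=[np.zeros(2), np.zeros(2)])
\end{verbatim}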
Although this algorithm is known to converge linearly for affine feasibility problems~\cite{bauschke2023splitting}, linear convergence in the setting of \eqref{eq:1n} has not been previously studied. In this work, we address this by establishing linear convergence of this algorithm when applied to the inclusion problems~\eqref{eq:1n}. The remainder of this paper is structured as follows. In Section~\ref{s: prel}, we recall the preliminaries needed for our analysis. In Section~\ref{s:resolvent splitting}, we present our main result (Theorem~\ref{theorem for linear convergence}) concerning linear convergence of the ``resolvent splitting with minimal lifting" algorithm \cite{malitsky2023resolvent} for problem~\eqref{eq:1n} with $n\geq2$. When specialized to $n=2$ operators, our result generalizes the findings presented in~\cite{moursi2019douglas}. In Section~\ref{s: section 4}, we apply the results of Section~\ref{s:resolvent splitting} to derive linear convergence of a primal-dual algorithm for the convex minimization problem with infimal convolution given in Example~\ref{example 1.1}. In Section~\ref{s: Experiment}, we present experimental results on image denoising which are supported by our findings. Finally, Section~\ref{s: conclusions} concludes by outlining future directions and open question for future research. \section{Preliminaries}\label{s: prel} Throughout this paper, $\mathcal{H}$ denotes a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. A \emph{set-valued} operator, denoted $A:\mathcal{H}\setto \mathcal{H}$, maps each point $x\in \mathcal{H}$ to a set $A(x)\subseteq \mathcal{H}$. When $A$ is \emph{single-valued} (\emph{i.e.,}~$A(x)$ is a singleton for all $x\in\mathcal{H})$, we write $A:\mathcal{H}\rightarrow\mathcal{H}$. The \emph{graph}, the set of \emph{fixed points} and the set of \emph{zeros} of the operator $A\colon\mathcal{H}\setto\mathcal{H}$ are defined by $\gra A:=\{(x,u)\in \mathcal{H}\times\mathcal{H}:u\in A(x)\}, \Fix A:=\{x\in \mathcal{H}:x\in A(x)\}$, and $\zer A:=\{x\in \mathcal{H}:0\in A(x)\}$ respectively. The \emph{identity operator} is denoted by $\Id:\mathcal{H}\rightarrow \mathcal{H}$. An operator $A:\mathcal{H}\setto\mathcal{H}$ is $\mu$-\emph{monotone} if $$\langle x-y,u-v\rangle\geq\mu\|x-y\|^2\quad \forall (x,u),(y,v)\in \gra A,$$ and it is \emph{maximally $\mu$-monotone}, if there exists no $\mu$-monotone operator $B:\mathcal{H}\setto\mathcal{H}$ such that $\gra B$ properly contains $\gra A$. Depending on the sign of $\mu$, we say $A$ is monotone if $\mu=0$ and $A$ is $\mu$-\emph{strongly monotone} if $\mu>0$. A single-valued operator $B:\mathcal{H}\rightarrow\mathcal{H}$ is $\beta$-\emph{Lipschitz}, with $\beta\geq0$, if $$\|B(x)-B(y)\|\leq\beta\|x-y\|\quad \forall (x,y)\in\mathcal{H},$$ and a $\beta$-Lipschitz operator with $\beta\in[0,1)$ is said to be a \emph{$\beta$-contraction}. A $1$-Lipschitz operator is said to be \emph{nonexpansive}. The \emph{resolvent} of an operator $A:\mathcal{H}\setto\mathcal{H}$ is defined as $J_{A}:=(\Id+A)^{-1}$. The following proposition summarises its key properties in the presence of monotonicity. \begin{proposition}\label{nonexpansiveness} Let $A:\mathcal{H}\setto\mathcal{H}$ be maximally monotone operator. Then the resolvent $J_{A}$ is single-valued with full domain and satisfies $$ \|J_{A}(x)-J_{A}(y)\|^2+\|(\Id-J_{A})(x)-(\Id-J_{A})(y)\|^2\leq\|x-y\|^2\quad\forall (x,y)\in\mathcal{H}.$$ In particular, $J_A$ is a nonexpansive. 
\end{proposition} \begin{proof} See \cite[Corollary~23.10]{bauschke2011convex}. \end{proof} The following theorem will be important for establishing linear convergence. Recall that a sequence $({z}^k)_{k\in\mathbb{N}}$ is said to converge \emph{$R$-linearly} to a point $z\in\mathcal{H}$ if there exist $c\in\mathbb{R}_+$ and $r\in[0,1)$ such that $\|{z}^{k}-{z}\|\leq cr^k$ for all $k\in\mathbb{N}$. \begin{theorem}[\emph{Banach fixed-point theorem}]\label{Banach Theorem} Let $T:\mathcal{H}\rightarrow\mathcal{H}$ be a $\beta$-contraction. Given $z^0\in\mathcal{H}$, define a sequence $(z^k)_{k\in\mathbb{N}}$ according to $$z^{k+1}=T(z^k) \quad \forall k\in\mathbb{N}.$$ Then there exists $z\in\mathcal{H}$ such that the following hold: \begin{enumerate}[(i)] \item $z$ is the unique fixed point of $T$. \item $\|z^k-z\|\leq\beta^k\|z^0-z\|$ for all $k\in\mathbb{N}$. \end{enumerate} In particular, the sequence $(z^k)_{k\in\mathbb{N}}$ converges $R$-linearly to $z$. \end{theorem} \begin{proof} See \cite[Theorem 1.48]{bauschke2011convex}. \end{proof} Given a function $f:\mathcal{H}\rightarrow[-\infty,+\infty]$, we say $f$ is \emph{proper}, if $-\infty\notin f(\mathcal{H})$ and $\dom f:=\{x\in\mathcal{H}:f(x)<+\infty\}\neq\emptyset$. We say $f$ is \emph{lower semi-continuous (lsc)} at $\Bar{x}\in\mathcal{H}$ if $$\liminf_{x\rightarrow\bar{x}}f(x)\geq f(\Bar{x}),$$ and say it is \emph{lower semi-continuous (lsc)}, if it is lsc at every point in $\mathcal{H}$. A function $f$ is \emph{convex}, if $$f((1-\lambda)x+\lambda y)\leq(1-\lambda) f(x)+\lambda f(y) \quad \forall (x,y)\in\mathcal{H},\quad \lambda\in(0,1),$$ and $f$ is $\alpha$-\emph{strongly convex}, with $\alpha>0$, if $f-\frac{\alpha}{2}\|\cdot\|^2$ is convex. The \emph{conjugate (Fenchel conjugate)} of $f$ is the function $f^*:\mathcal{H}\rightarrow[-\infty,+\infty]$ defined by $$f^*(u)=\sup_{x\in\mathcal{H}}(\langle x,u\rangle-f(x)).$$ The \emph{infimal convolution} of $f_{1},\dots, f_{n}:\mathcal{H}\rightarrow(-\infty,+\infty]$ is the function $(f_{1}\Box\cdots\Box f_{n}):\mathcal{H}\rightarrow[-\infty,+\infty]$ defined by \begin{equation}\label{infimal convolution} (f_{1}\Box\cdots\Box f_{n})(u)=\inf_{(v_{1},\dots,v_{n})\in\mathcal{H}\times\dots\times\mathcal{H}}\{f_{1}(v_{1})+\cdots+f_{n}(v_{n}):u=v_{1}+\dots+v_{n}\}, \end{equation} and it is said to be \emph{exact} at a point $u\in\mathcal{H}$, if the infimum in \eqref{infimal convolution} is attained. The following two propositions explore properties of the infimal convolution. \begin{proposition}\label{remark infimal convolution} Suppose $f_{1},\dots,f_{n}:\mathcal{H}\rightarrow(-\infty,+\infty]$ are proper convex functions. Then $$(f_{1}\Box\cdots\Box f_{n})^*=f^*_{1}+\dots+f^*_{n}.$$ \end{proposition} \begin{proof} See \cite[Theorem 16.4]{rockafellar1997convex}. \end{proof} \begin{proposition}\label{prop for infimal convolution} Suppose $f_{1},\dots,f_{n-1}:\mathcal{H}\rightarrow(-\infty,+\infty]$ are proper lsc $\alpha$-strongly convex, and $f_{n}:\mathcal{H}\rightarrow(-\infty,+\infty)$ is convex. Then $(f_{1}\Box\cdots\Box f_{n})\colon\mathcal{H}\to(-\infty,+\infty)$ is convex and exact at every $v\in\mathcal{H}.$ \end{proposition} \begin{proof} Convexity of $f_{1}\Box\cdots\Box f_{n}$ follows by applying \cite[Proposition~8.26]{bauschke2011convex} to the function $F_1:\mathcal{H}\times\mathcal{H}^{n-1}\rightarrow(-\infty,+\infty]:(u,(v_1,\dots,v_{n-1}))\mapsto\sum_{i=1}^{n-1}f_{i}(v_{i})+f_{n}\bigl(u-\sum_{i=1}^{n-1}v_{i}\bigr)$. 
To show $f_{1}\Box\cdots\Box f_{n}$ is exact, fix $u\in\mathcal{H}$ and consider the convex function $$F_2(v_1,\dots,v_{n-1}):=\sum_{i=1}^{n-1}f_{i}(v_{i})+f_{n}\bigl(u-\sum_{i=1}^{n-1}v_{i}\bigr),$$ where we note that $\dom F_2\supseteq \dom f_1\times\dots\times\dom f_{n-1}$ as $\dom f_n=\mathcal{H}$. Since $f_1,\dots,f_{n-1}$ are proper and lsc, it follows that $F_2$ is also proper and lsc. Since $f_1,\dots,f_{n-1}$ are $\alpha$-strongly convex on $\mathcal{H}$, it follows that $F_2$ is $\alpha$-strongly convex on $\mathcal{H}^{n-1}$. Applying \cite[Corollary 11.17]{bauschke2011convex} to the proper lsc $\alpha$-convex function $F_2$ implies it has exactly one minimizer. Since $u\in\mathcal{H}$ was chosen arbitrarily, this completes the proof. \end{proof} The \emph{subdifferential} of a function $f:\mathcal{H}\rightarrow(-\infty,+\infty]$ at $x\in\dom f$ is given by $$\partial f(x):=\{u\in\mathcal{H}:\langle y-x,u\rangle+f(x)\leq f(y), \forall y\in\mathcal{H}\},$$ and at $x\notin \dom f$ it is defined as $\partial f(x):=\emptyset$. In order to compute the subdifferential of the sum of two functions, we will make use the following sum-rule which assumes a condition involving the strong relative interior. Recall that a set $D\subseteq\mathcal{H}$ is \emph{cone} if it satisfies $D=\mathbb{R}_{++}D$. The smallest cone in $\mathcal{H}$ containing $D$ is denoted $\cone D$, and the smallest closed linear subspace of $\mathcal{H}$ containing $D$ is denoted $\overline{\text{span} D}$. The \emph{strong relative interior} of $D$ is given by $$\sri D:=\{x\in D: \cone(D-x)=\overline{\text{span}(D-x)}\}.$$ Note that when $\mathcal{H}$ is finite-dimensional, the notion of strong relative interior coincides with the usual notation of \emph{relative interior}~\cite[Fact 6.14(i)]{bauschke2011convex}. \begin{theorem}\label{sum rule of subdifferential for two functions} Let $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ be real Hilbert spaces. Suppose $f:\mathcal{H}_{1}\rightarrow(-\infty,+\infty]$ and $g:\mathcal{H}_{2}\rightarrow(-\infty,+\infty]$ are proper lsc convex functions, and $C:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}$ is bounded and linear. If $0\in\sri(\dom g-C\dom f)$ then $$\partial(f+g\circ C)=\partial f+C^*\circ\partial g\circ C.$$ \end{theorem} \begin{proof} See \cite[Theorem 16.37(i)]{bauschke2011convex}. \end{proof} Now introduce the following proposition which will be useful for simplifying our result. \begin{proposition}\label{lemma for gap} Suppose $f\colon\mathcal{H}\to(-\infty,+\infty]$ is proper lsc convex, and $(u^k)$ converges $R$-linearly to $u$. If there exists a bounded sequence of subgradients $\phi^k\in\partial f(u^k)$ and $\partial f(u)\neq \emptyset$, then $f(u^k)$ converges $R$-linearly to $f(u)$. \end{proposition} \begin{proof} By assumption, there exists $M>0$ such that $\|\phi^k\|\leq M$ for all $k\in\mathbb{N}$. On one hand, since $\phi^k\in\partial f(u^k)$, we have $f(u^k)-f(u)\leq \langle \phi^k,u^k-u\rangle \leq \|\phi^k\|\|u^k-u\|\leq M\|u^k-u\|. $ On the other hand, for any $\phi\in\partial f(u)\neq\emptyset$, we have $ f(u)-f(u^k)\leq \langle \phi,u-u^k\rangle \leq \|\phi\|\|u-u^k\|. $ Since $(u^k)$ converges $R$-linearly to $u$, the result follows by combining these inequalities. 
\end{proof} Given a proper lsc convex function $f:\mathcal{H}\rightarrow(-\infty,+\infty]$, its \emph{proximal operator} \cite[Definition 12.23]{bauschke2011convex}, denoted by $\prox_{f}\colon\mathcal{H}\rightarrow\mathcal{H}$, is given by $$\prox_f:=\argmin_{u\in\mathcal{H}}\left\{f(u)+\frac{1}{2}\|\cdot-u\|^2\right\}.$$ The proximal operator of $f$ can be viewed as the resolvent of $\partial f$. In other words, $J_{\partial f}=\prox_{f}$ (see \cite[Example 23.3]{bauschke2011convex}). Finally, we recall the \emph{Moreau decomposition} which relates the proximal operator of a function to the proximal operator of its conjugate. \begin{theorem}[\emph{Moreau decomposition}]\label{Moreau decomposition} Let $f:\mathcal{H}\rightarrow(-\infty,+\infty]$ be a proper lsc convex function. Then $$x=\prox_f(x)+\prox_{f^*}(x) \quad \forall x\in\mathcal{H}.$$ \end{theorem} \begin{proof} See \cite[Remark 14.4]{bauschke2011convex}. \end{proof} \section{Linear Convergence of Resolvent Splitting with Minimal Lifting}\label{s:resolvent splitting} In this section, we establish linear convergence of the algorithm given by \eqref{eq:1} and \eqref{eq:2} for solving the inclusion \eqref{eq:1n}. This algorithm is a fixed-point algorithm based on the operator $T_{\rm MT}:\mathcal{H}^{n-1}\rightarrow\mathcal{H}^{n-1}$ defined as \begin{equation}\label{eq: fixed point operator} T_{\rm MT}(\mathbf{z})=\mathbf{z}+\gamma\begin{pmatrix} x_{2}-x_{1}\\x_{3}-x_{2}\\\vdots\\x_{n}-x_{n-1} \end{pmatrix}, \end{equation} where $\mathbf{x}=(x_{1},\dots,x_{n})\in\mathcal{H}^{n}$ depends on $\mathbf{z}=(z_{1},\dots, z_{n-1})\in \mathcal{H}^{n-1}$ and is given by\\ \begin{equation} \label{eq: def of x} \left\{\begin{aligned} x_{1} &=J_{A_{1}}(z_{1})\\ x_{i} &=J_{A_{i}}(z_{i}+x_{i-1}-z_{i-1})&\forall i\in \{2,\dots,(n-1)\} \\ x_{n} &=J_{A_{n}}(x_{1}+x_{n-1}-z_{n-1}). \end{aligned}\right. \end{equation} Our analysis identifies conditions under which the operator $T_{\rm MT}$ is a $\beta$-contraction with $\beta\in(0,1)$, as detailed in Lemma~\ref{lemma for contraction factor}, and our main result regarding linear convergence is given in Theorem~\ref{theorem for linear convergence}. We will use the following lemmas to simplify the presentation of our main result. We begin by recalling the following Lemma~\ref{new lemma} concerning the fixed points of $T_{\rm MT}$. \begin{lemma}\label{new lemma} Let $n\geq2$ and $\gamma\in(0,1)$. Suppose $A_{1},\dots,A_{n}:\mathcal{H}\setto\mathcal{H}$ are maximally monotone. Let $\mathbf{z}^*=(z^*_{1},\dots,z^*_{n-1})\in\Fix T_{\rm MT}$ and set $x^*=J_{A_{1}}({z_{1}}^*)$. Then $x^*\in\zer(\sum_{i=1}^n A_{i})$, and \begin{equation} \label{eq: def of x^*} x^* =J_{A_{i}}(z^*_{i}+x^*-z^*_{i-1})=J_{A_{n}}(2x^*-z^*_{n-1})\quad \forall i\in \{2,\dots,(n-1)\}. \end{equation} \end{lemma} \begin{proof} See \cite[Lemma 4.2]{malitsky2023resolvent}. \end{proof} The following lemma refines \cite[Lemma 4.3]{malitsky2023resolvent} and its proof to the setting where some of the operators are potentially strongly monotone. \begin{lemma} \label{lemma 3.1} Let $n\geq 2$ and $\gamma\in(0, 1)$. Suppose $A_{1},\dots,A_{n}: \mathcal{H}\setto \mathcal{H}$ are maximally $\mu_{i}$-monotone with $\mu_{i}\geq0$ for $i\in\{1,\dots,n\}$. 
Then, for all $\mathbf{z}=(z_{1},\dots, z_{n-1})\in \mathcal{H}^{n-1}$ and $\mathbf{\Bar{z}}=(\bar{z}_{1},\dots, \bar{z}_{n-1})\in \mathcal{H}^{n-1}$, we have \begin{multline} \label{eq:3} \| T_{\rm MT}(\mathbf{z})-T_{\rm MT}(\Bar{\mathbf{z}})\|^2 +\gamma(1-\gamma)\sum_{i=1}^{n-1}\|({x}_{i}-{x}_{i+1})-(\Bar{x}_{i}-\Bar{{x}}_{i+1})\|^2+\gamma\|(x_{n}-x_{1})-(\Bar{x}_{n}-\Bar{x}_{1})\|^2\\ \leq \|\mathbf{z}-\bar{\mathbf{z}}\|^2-2\gamma\sum_{i=1}^{n}\mu_{i}\|x_{i}-\bar{x}_{i}\|^2, \end{multline} where $T_{\rm MT}:\mathcal{H}^{n-1}\rightarrow \mathcal{H}^{n-1}$ is defined by \eqref{eq: fixed point operator}, $\mathbf{x}=(x_{1},\dots,x_{n})\in \mathcal{H}^{n}$ is given by \eqref{eq: def of x} and $\Bar{\mathbf{x}}=(\Bar{x}_{1},\dots,\bar{x}_{n})\in \mathcal{H}^{n}$ is given analogously. \end{lemma} \begin{proof} For convenience, denote $\mathbf{z}^+:= T_{\rm MT}(\mathbf{z})$ and $\Bar{\mathbf{z}}^+:=T_{\rm MT}(\Bar{\mathbf{z}})$. Since $z_{1}-x_{1}\in A_{1}(x_{1})$ and $\bar{z}_{1}-\bar{x}_{1}\in A_{1}(\bar{x}_{1})$, the maximal $\mu_{1}$-monotonicity of $A_{1}$ implies \begin{equation} \label{eq:4} \begin{aligned} \mu_{1}\|x_{1}-\bar{x}_{1}\|^2&\leq\left<x_{1}-\bar{x}_{1},(z_{1}-x_{1})-(\bar{z}_{1}-\bar{x}_{1})\right>\\ &=\left<x_{2}-\bar{x}_{1},(z_{1}-x_{1})-(\bar{z}_{1}-\bar{x}_{1})\right>+\left<x_{1}-x_{2},(z_{1}-x_{1})-(\bar{z}_{1}-\bar{x}_{1})\right>. \end{aligned} \end{equation} For $i\in\{2,\dots,n-1\}$, we have $z_{i}-z_{i-1}+x_{i-1}-x_{i}\in A_{i}(x_{i})$ and $\bar{z}_{i}-\bar{z}_{i-1}+\bar{x}_{i-1}-\bar{x}_{i}\in A_{i}(\bar{x}_{i})$. Thus the maximal $\mu_{i}$-monotonicity of $A_{i}$ yields \begin{equation*} \begin{aligned} \mu_{i}\|x_{i}-\bar{x}_{i}\|^2&\leq\langle x_{i}-\bar{x}_{i}, (z_{i}-z_{i-1}+x_{i-1}-x_{i})-(\bar{z}_{i}-\bar{z}_{i-1}+\bar{x}_{i-1}-\bar{x}_{i})\rangle\\&=\langle x_{i}-\bar{x}_{i}, (z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\rangle-\langle x_{i}-\bar{x}_{i}, (z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\rangle\\ &=\langle x_{i+1}-\bar{x}_{i}, (z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\rangle+\langle x_{i}-{x}_{i+1}, (z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\rangle\\ &\qquad -\left<x_{i}-\bar{x}_{i-1}, (z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\right>-\left<\bar{x}_{i-1}-\bar{x}_{i}, (z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\right>. \end{aligned} \end{equation*} Summing this inequality for $i\in\{2,\dots,n-1\}$ and simplifying gives \begin{multline} \label{eq:5} \sum_{i=2}^{n-1}\mu_{i}\|x_{i}-\bar{x}_{i}\|^2\leq\left<x_{n}-\bar{x}_{n-1}, (z_{n-1}-x_{n-1})-(\bar{z}_{n-1}-\bar{x}_{n-1})\right>-\left<x_{2}-\bar{x}_{1}, (z_{1}-x_{1})-(\bar{z}_{1}-\bar{x}_{1})\right>\\ +\sum_{i=2}^{n-1}\left<x_{i}-{x}_{i+1}, (z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\right>-\sum_{i=1}^{n-2}\left<\bar{x}_{i}-\bar{x}_{i+1}, (z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\right>. 
\end{multline} Since $x_{1}+x_{n-1}-x_{n}-z_{n-1}\in A_{n}(x_{n})$ and $\bar{x}_{1}+\bar{x}_{n-1}-\bar{x}_{n}-\bar{z}_{n-1}\in A_{n}(\bar{x}_{n})$, maximally $\mu_{n}$-monotonicity of $A_{n}$ gives \begin{equation} \label{eq:6} \begin{aligned} \mu_{n}\|x_{n}-\Bar{x}_{n}\|^2&\leq\langle x_{n}-\bar{x}_{n}, (x_{1}+x_{n-1}-x_{n}-z_{n-1})-(\bar{x}_{1}+\bar{x}_{n-1}-\bar{x}_{n}-\bar{z}_{n-1})\rangle\\ &=\langle x_{n}-\bar{x}_{n}, (x_{n-1}-z_{n-1})-(\bar{x}_{n-1}-\bar{z}_{n-1})\rangle+\langle x_{n}-\bar{x}_{n}, (x_{1}-\bar{x}_{1})-({x}_{n}-\bar{x}_{n})\rangle\\ &=-\langle x_{n}-\bar{x}_{n-1},(z_{n-1}-x_{n-1})-(\bar{z}_{n-1}-\bar{x}_{n-1})\rangle+\langle\bar{x}_{n}-\bar{x}_{n-1},(z_{n-1}-x_{n-1})-(\bar{z}_{n-1}-\bar{x}_{n-1})\rangle\\ &\qquad +\frac{1}{2}(\|x_{1}-\bar{x}_{1}\|^2-\|x_{n}-\bar{x}_{n}\|^2-\|(x_{1}-x_{n})-(\bar{x}_{1}-\bar{x}_{n})\|^2). \end{aligned} \end{equation} Adding \eqref{eq:4}, \eqref{eq:5}, and \eqref{eq:6} and rearranging gives \begin{multline} \label{eq:7} \sum_{i=1}^n\mu_{i}\|x_{i}-\bar{x}_{i}\|^2\leq\sum_{i=1}^{n-1}\langle(x_{i}-\bar{x}_{i})-(x_{i+1}-\bar{x}_{i+1}), \bar{x}_{i}-x_{i}\rangle+\sum_{i=1}^{n-1}\langle(x_{i}-\bar{x}_{i})-(x_{i+1}-\bar{x}_{i+1}), {z}_{i}-\bar{z}_{i}\rangle\\+\frac{1}{2}(\|x_{1}-\bar{x}_{1}\|^2-\|x_{n}-\bar{x}_{n}\|^2-\|(x_{1}-x_{n})-(\bar{x}_{1}-\bar{x}_{n})\|^2). \end{multline} The first term in \eqref{eq:7} can be expressed as \begin{equation} \label{eq:8} \begin{aligned} &\sum_{i=1}^{n-1}\langle(x_{i}-\bar{x}_{i})-(x_{i+1}-\bar{x}_{i+1}), \bar{x}_{i}-x_{i}\rangle\\ &=\frac{1}{2}\sum_{i=1}^{n-1}(\|x_{i+1}-\bar{x}_{i+1}\|^2-\|x_{i}-\bar{x}_{i}\|^2-\|(x_{i}-x_{i+1})-(\bar{x}_{i}-\bar{x}_{i+1})\|^2)\\ &=\frac{1}{2}(\|x_{n}-\bar{x}_{n}\|^2-\|x_{1}-\bar{x}_{1}\|^2-\sum_{i=1}^{n-1}\|(x_{i}-x_{i+1})-(\bar{x}_{i}-\bar{x}_{i+1})\|^2). \end{aligned} \end{equation} Also the second term in \eqref{eq:7} can be written as \begin{equation} \label{eq:9} \begin{aligned} &\sum_{i=1}^{n-1}\left<(x_{i}-\bar{x}_{i})-(x_{i+1}-\bar{x}_{i+1}), {z}_{i}-\bar{z}_{i}\right>\\ &=\frac{1}{\gamma}\sum_{i=1}^{n-1}\left<(z_{i}-z_{i}^+)-(\bar{z}_{i}-\bar{z}_{i}^+),z_{i}-\bar{z}_{i}\right>\\ &=\frac{1}{\gamma}\left<(\mathbf{z}-\mathbf{z}^+)-(\bar{\mathbf{z}}-\bar{\mathbf{z}}^+), \mathbf{z}-\bar{\mathbf{z}}\right>\\ &=\frac{1}{2\gamma}\left(\|(\mathbf{z}-\mathbf{z}^+)-(\bar{\mathbf{z}}-\bar{\mathbf{z}}^+)\|^2+\|\mathbf{z}-\bar{\mathbf{z}}\|^2-\|\mathbf{z}^+-\bar{\mathbf{z}}^+\|^2\right)\\ &=\frac{1}{2\gamma}\left(\sum_{i=1}^{n-1}\|(z_{i}-z^+_{i})-(\bar{z}_{i}-\bar{z}^+_{i})\|^2+\|\mathbf{z}-\bar{\mathbf{z}}\|^2-\|\mathbf{z}^+-\bar{\mathbf{z}}^+\|^2\right)\\ &=\frac{\gamma}{2}\sum_{i=1}^{n-1}\|(x_{i}-x_{i+1})-(\bar{x}_{i}-\bar{x}_{i+1})\|^2+\frac{1}{2\gamma}\left(\|\mathbf{z}-\bar{\mathbf{z}}\|^2-\|\mathbf{z}^+-\bar{\mathbf{z}}^+\|^2\right). \end{aligned} \end{equation} Thus substituting \eqref{eq:8} and \eqref{eq:9} into \eqref{eq:7}, and simplifying gives \eqref{eq:3}. This completes the proof. \end{proof} In what follows, we will make frequent use of the inequality \begin{equation}\label{inequality} ab\leq \frac{1}{2\epsilon}a^2+\frac{\epsilon}{2}b^2\text{ for }a,b\geq0 \text{ and }\epsilon>0. \end{equation} \begin{lemma}\label{lipschitz operators} Let $n\geq 2$. Suppose that $A_{1},\dots,A_{n-1}: \mathcal{H}\rightarrow \mathcal{H}$ are maximally monotone and $L$-Lipschitz, and $A_{n}:\mathcal{H}\setto\mathcal{H}$ is maximally monotone. 
Then there exists $\eta\in(0,1)$ such that for all $\mathbf{z}=(z_{1},\dots, z_{n-1})\in \mathcal{H}^{n-1}$ and $\mathbf{\Bar{z}}=(\bar{z}_{1},\dots, \bar{z}_{n-1})\in \mathcal{H}^{n-1}$, we have \begin{equation}\label{lipschitz for n*} \sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2\geq \eta\|\mathbf{z}-\bar{\mathbf{z}}\|^2, \end{equation} where $\mathbf{x}=(x_{1},\dots,x_{n})\in \mathcal{H}^{n}$ is given by \eqref{eq: def of x}, and $\Bar{\mathbf{x}}=(\Bar{x}_{1},\dots,\bar{x}_{n})\in \mathcal{H}^{n}$ is given analogously. \end{lemma} \begin{proof} Since $z_{1}-x_{1}\in A_{1}(x_{1})$ and $\bar{z}_{1}-\bar{x}_{1}\in A_{1}(\bar{x}_{1})$, $L$-Lipschitz continuity of $A_{1}$ implies \begin{align} \label{eq34} L^2\|x_{1}-\Bar{x}_{1}\|^2\geq\|A_{1}(x_{1})-A_{1}(\bar{x}_{1})\|^2=\|{(z_{1}-x_{1})-(\Bar{z}_{1}-\Bar{x}_{1})}\|^2. \end{align} For $i\in\{2,\dots,n-1\}, z_{i}-z_{i-1}+x_{i-1}-x_{i}\in A_{i}(x_{i})$ and $\bar{z}_{i}-\bar{z}_{i-1}+\bar{x}_{i-1}-\bar{x}_{i}\in A_{i}(\bar{x}_{i})$. Thus, for any $\epsilon_{i}>0$, $L$-Lipschitz continuity of $A_{i}$ followed by applying \eqref{inequality} yields \begin{equation}\begin{aligned}\label{eq:A_i Lips} L^2\| x_{i}-\bar{x}_{i}\|^2&\geq \| A_{i}(x_{i})-A_{i}(\bar{x}_{i})\|^2\\ &=\|(z_{i}-z_{i-1}+x_{i-1}-x_{i})-(\bar{z}_{i}-\bar{z}_{i-1}+\bar{x}_{i-1}-\bar{x}_{i})\|^2\\ &=\|\{(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\}-\{(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\}\|^2\\ &=\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2+\|(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\|^2\\&\qquad-2\langle(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i}),(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\rangle\\ &\geq\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2+\|(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\|^2\\ &\qquad-\frac{1}{\epsilon_{i}}\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2-\epsilon_{i}\|(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\|^2\\ &=(1-\frac{1}{\epsilon_{i}})\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2+(1-\epsilon_{i})\|(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\|^2. \end{aligned}\end{equation} Summing the inequality~\eqref{eq:A_i Lips} for $i\in\{2,\dots,n-1\}$ and then adding \eqref{eq34} gives \begin{equation}\label{*} \begin{aligned} \sum_{i=1}^{n-1}L^2\| x_{i}-\bar{x}_{i}\|^2&\geq\|{(z_{1}-x_{1})-(\Bar{z}_{1}-\Bar{x}_{1})}\|^2+\sum_{i=2}^{n-1}(1-\frac{1}{\epsilon_{i}})\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2\\&\qquad+\sum_{i=2}^{n-1}(1-\epsilon_{i})\|(z_{i-1}-x_{i-1})-(\bar{z}_{i-1}-\bar{x}_{i-1})\|^2\\ &\geq(2-\epsilon_{2})\|{(z_{1}-x_{1})-(\Bar{z}_{1}-\Bar{x}_{1})}\|^2+\sum_{i=2}^{n-2}\left(2-\frac{1}{\epsilon_{i}}-\epsilon_{i+1}\right)\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2\\ &\qquad+\left(1-\frac{1}{\epsilon_{n-1}}\right)\|(z_{n-1}-x_{n-1})-(\bar{z}_{n-1}-\bar{x}_{n-1})\|^2. \end{aligned} \end{equation} Now fix $\epsilon_{2}\in(1,2)$. We claim that we can choose constants $\epsilon_3,\dots,\epsilon_{n-1}\in(1,2)$ such that \begin{equation}\label{min of epsilon'} \epsilon':=\min_{i\in\{2,\dots,n-2\}}\left\{(2-\epsilon_{2}),\left(2-\frac{1}{\epsilon_{i}}-\epsilon_{i+1}\right),\left(1-\frac{1}{\epsilon_{n-1}}\right)\right\}>0. \end{equation} Indeed, first note that $2-\epsilon_2>0$ by assumption. Next suppose $\epsilon_i\in(1,2)$ for some $i\in\{2,\dots,n-2\}$. Since $1<(2-\frac{1}{\epsilon_i})<2$, we deduce that $$\epsilon_{i+1}:=\sqrt{2-\frac{1}{\epsilon_{i}}}\in(1,2) \implies \epsilon_{i+1} < \epsilon_{i+1}^2 = 2-\frac{1}{\epsilon_{i}} \implies 2-\frac{1}{\epsilon_{i}} - \epsilon_{i+1}>0. 
$$ Finally, by construction $\epsilon_{n-1}\in(1,2)$ and so $1-\frac{1}{\epsilon_{n-1}}>0$. Now, combining \eqref{min of epsilon'} and \eqref{*} followed by applying \eqref{inequality}, we deduce that \begin{equation}\label{simplify for epsilon*} \begin{aligned} L^2\sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2 &\geq \epsilon'\sum_{i=1}^{n-1}\|(z_{i}-x_{i})-(\bar{z}_{i}-\bar{x}_{i})\|^2\\ &= \epsilon'\sum_{i=1}^{n-1}\left(\|z_{i}-\bar{z}_i\|^2+\|x_{i}-\bar{x}_{i}\|^2-2\langle z_i-\bar{z}_i,x_i-\bar{x}_i\rangle \right)\\ &\geq \epsilon'\sum_{i=1}^{n-1}\left(\|z_{i}-\bar{z}_i\|^2+\|x_{i}-\bar{x}_{i}\|^2-\frac{\sqrt{\epsilon'}}{\sqrt{\epsilon'}+L}\|z_i-\bar{z}_i\|^2-\frac{\sqrt{\epsilon'}+L}{\sqrt{\epsilon'}}\|x_i-\bar{x}_i\|^2 \right)\\ &= \frac{\epsilon'L}{\sqrt{\epsilon'}+L}\|\mathbf{z}-\mathbf{\Bar{z}}\|^2-\sqrt{\epsilon'}L\sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2. \end{aligned} \end{equation} Rearranging this expression gives \begin{equation}\label{lipschitz for n operator} \sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2\geq\frac{1}{\left(1+\frac{1}{\sqrt{\epsilon'}}L\right)^2}\|\mathbf{z}-\bar{\mathbf{z}}\|^2, \end{equation} which implies \eqref{lipschitz for n*}. This completes the proof. \end{proof} \begin{lemma}\label{lemma for contraction factor} Let $n\geq 2$ and $\gamma\in(0,1)$. Suppose that one of the following holds: \begin{enumerate}[(a)] \item $A_{1},\dots,A_{n-1}: \mathcal{H}\rightarrow \mathcal{H}$ are maximally monotone and $L$-Lipschitz, and $A_{n}\colon \mathcal{H}\setto \mathcal{H}$ is maximally $\mu$-strongly monotone. \item $A_{1},\dots,A_{n-1}: \mathcal{H}\rightarrow \mathcal{H}$ are maximally $\mu$-strongly monotone and $L$-Lipschitz, and $A_{n}\colon \mathcal{H}\setto \mathcal{H}$ is maximally monotone. \end{enumerate} Then $T_{\rm MT}$ is a contraction. \begin{proof} For convenience, denote $\mathbf{z}^+:= T_{\rm MT}(\mathbf{z})$ and $\bar{\mathbf{z}}^+:= T_{\rm MT}(\bar{\mathbf{z}})$. Let $\textbf{x}=(x_{1},\dots,x_{n})\in \mathcal{H}^n$ be given by \eqref{eq: def of x} and $\Bar{\textbf{x}}=(\Bar{x}_{1},\dots,\bar{x}_{n})\in \mathcal{H}^n$ be given analogously. (a):~Since $A_{1},\dots,A_{n-1}$ are maximally monotone and $A_{n}$ is maximally $\mu$-strongly monotone, Lemma~\ref{lemma 3.1} implies \begin{equation}\label{correct version for n} \| \mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2+\gamma(1-\gamma)\sum_{i=1}^{n-1}\|({x}_{i}-{x}_{i+1})-(\Bar{x}_{i}-\Bar{{x}}_{i+1})\|^2\leq\| \mathbf{z}-\bar{\mathbf{z}}\|^2-2\gamma\mu\|x_{n}-\bar{x}_{n}\|^2. \end{equation} For $i\in\{1,\dots,n-1\}$ and any $\alpha_{i}>0$, applying \eqref{inequality} gives \begin{equation}\label{new 33} \begin{aligned} \|(x_{i}-x_{i+1})-(\Bar{x}_{i}-\Bar{x}_{i+1})\|^2&\geq \|x_{i+1}-\Bar{x}_{i+1}\|^2+\|x_{i}-\Bar{x}_{i}\|^2-2\langle x_{i}-\bar{x}_{i},x_{i+1}-\bar{x}_{i+1}\rangle\\ &\geq (1-\alpha_{i})\|x_{i+1}-\Bar{x}_{i+1}\|^2+(1-\frac{1}{\alpha_{i}})\|x_{i}-\Bar{x}_{i}\|^2. \end{aligned} \end{equation} By combining \eqref{correct version for n} and \eqref{new 33}, we obtain \begin{multline}\label{new eq 33} \| \mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2+\gamma(1-\gamma)\left[\left(1-\frac{1}{\alpha_{1}}\right)\|x_{1}-\bar{x}_{1}\|^2+\sum_{i=2}^{n-1}\left(2-\frac{1}{\alpha_{i}}-\alpha_{i-1}\right)\|x_{i}-\Bar{x}_{i}\|^2\right]\\+[2\gamma\mu+\gamma(1-\gamma)(1-\alpha_{n-1})]\|x_{n}-\bar{x}_{n}\|^2\leq\| \mathbf{z}-\bar{\mathbf{z}}\|^2. 
\end{multline} We claim that we can choose constants $\alpha_{1},\dots,\alpha_{n-1}$ such that \begin{equation}\label{p'} \alpha':=\min_{i\in\{2,\dots,n-1\}}\left\{\left(1-\frac{1}{\alpha_{1}}\right),\left(2-\frac{1}{\alpha_{i}}-\alpha_{i-1}\right)\right\}>0. \end{equation} Set $\alpha_{n-1}:=1+\frac{2\mu}{(1-\gamma)}>1$ and note that $2-\frac{1}{\alpha_{n-1}}>1$. Suppose $\alpha_i>1$ for some $i\in\{n-1,\dots,2\}$. Since $2-\frac{1}{\alpha_i}>1$, we deduce that $$\alpha_{i-1}:=\sqrt{2-\frac{1}{\alpha_{i}}}>1\implies \alpha_{i-1} < \alpha_{i-1}^2 = 2-\frac{1}{\alpha_{i}} \implies 2-\frac{1}{\alpha_{i}} - \alpha_{i-1}>0.$$ Finally, by construction $\alpha_{1}>1$ and so $1-\frac{1}{\alpha_{1}}>0$. Now, using \eqref{p'} in \eqref{new eq 33} implies \begin{equation} \label{eq:33} \|\mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2\leq\| \mathbf{z}-\bar{\mathbf{z}}\|^2-\gamma(1-\gamma)\alpha'\sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2. \end{equation} Since $A_{i}$ is maximally monotone and $L$-Lipschitz for $i\in\{1,\dots,n-1\}$, Lemma~\ref{lipschitz operators} implies there exists $\eta\in(0,1)$ such that \begin{equation}\label{lipschitz for n} \sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2\geq\eta\|\mathbf{z}-\bar{\mathbf{z}}\|^2. \end{equation} Substituting \eqref{lipschitz for n} into \eqref{eq:33} and rearranging the equation we get, \begin{equation} \label{eq:37} \|\mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2\leq\left[(1-\gamma(1-\gamma)\alpha'\eta\right]\|\mathbf{z}-\mathbf{\Bar{z}}\|^2. \end{equation} Therefore, $T_{\rm MT}$ is a $\beta$-contraction with $\beta=(1-\gamma(1-\gamma)\alpha'\eta)\in(0, 1)$. This completes the proof. (b):~Since $A_{1},\dots,A_{n-1}$ are maximally $\mu$-strongly monotone and $A_{n}$ is maximally monotone, Lemma~\ref{lemma 3.1} implies \begin{equation}\label{correct version for n*} \| \mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2\leq\| \mathbf{z}-\bar{\mathbf{z}}\|^2-2\gamma\mu\sum_{i=1}^{n-1}\|x_{i}-\bar{x}_{i}\|^2. \end{equation} Since $A_{1},\dots,A_{n-1}$ are maximally monotone and $L$-Lipschitz, Lemma~\ref{lipschitz operators} implies there exists $\eta\in(0,1)$ such that \begin{equation}\label{lipschitz} \sum_{i=1}^{n-1}\|x_{i}-\Bar{x}_{i}\|^2\geq\eta\|\mathbf{z}-\bar{\mathbf{z}}\|^2. \end{equation} Substituting \eqref{lipschitz} into \eqref{correct version for n*} gives \begin{equation} \label{eq:37*} \|\mathbf{z}^+ - \Bar{\mathbf{z}}^+\|^2\leq\left(1-2\gamma\mu\eta\right)\|\mathbf{z}-\mathbf{\Bar{z}}\|^2. \end{equation} Therefore, $T_{\rm MT}$ is a $\beta$-contraction with $\beta=(1-2\gamma\mu\eta)\in(0,1)$. This completes the proof. \end{proof} \end{lemma} \begin{remark} In the absence of appropriate strong monotonicity or Lipschitz continuity (such as in Lemma~\ref{lemma for contraction factor}), the operator $T_{\rm MT}$ need not be a contraction. In what follows, we provide two such examples of the monotone inclusion problem \eqref{eq:1n} with $n=3$. The first example shows that, without strong monotonicity, $T_{MT}$ need not be a contraction even when all the operators are Lipschitz continuous. The second shows that, without Lipschitz continuity, $T_{MT}$ need not be a contraction even when all the operators are strongly monotone. In both cases, we show that $\Fix T_{\rm MT}$ contains more than one point which implies $T_{\rm MT}$ is not a contraction. \begin{enumerate}[(a)] \item Consider the operators defined on $\mathbb{R}$ given by \begin{equation*} A_{1}=0,\quad A_{2}=0,\quad A_{3}=0. 
\end{equation*} Any $x^*\in\mathbb{R}$ is a solution of the inclusion, and the operators $A_{1}, A_{2}, A_{3}$ are monotone (but not strongly monotone) and $L$-Lipschitz for all $L>0$. The resolvents are given by $$J_{A_{1}}=\Id,\quad J_{A_{2}}=\Id,\quad J_{A_{3}}=\Id.$$ Let $\mathbf{z}=\binom{z_{1}}{z_{2}}\in\mathbb{R}\binom{1}{1}$. Then \eqref{eq: fixed point operator} and \eqref{eq: def of x} become \begin{equation*} \left\{\begin{aligned} x_{1} &=J_{A_{1}}(z_{1}) = z_1\\ x_{2} &=J_{A_{2}}(z_{2}+x_{1}-z_{1}) = J_{A_2}(z_2) = z_{2}\\ x_{3} &= J_{A_{3}}(x_1+x_2-z_2) = J_{A_3}(z_{1}) = z_{1} \end{aligned}\right. \implies \quad T_{\rm MT}(\mathbf{z}) = \mathbf{z}+\gamma\begin{pmatrix} z_{2}-z_{1} \\ z_{1}-z_{2}\\ \end{pmatrix} =\mathbf{z}, \end{equation*} and thus we conclude that $\mathbb{R}\binom{1}{1}\subseteq\Fix T_{\rm MT}$. Since $T_{\rm MT}$ has more than one fixed point, we conclude that it is not a contraction. \item Let $\mu>0$ and consider the operators defined on $\mathbb{R}$ given by $$ A_1 = \mu \Id + N_{\mathbb{R}_+},\quad A_2 = \mu \Id + N_{\mathbb{R}_-},\quad A_3 = \mu \Id + N_{\{0\}}. $$ Note that $x^*=0$ is the unique solution of the inclusion, and the operators $A_1,A_2,A_3$ are $\mu$-strongly monotone (but not Lipschitz continuous). The resolvents \cite[Example 23.4]{bauschke2011convex} of these operators are given by $$ J_{A_1} = P_{\mathbb{R}_+}\circ \frac{1}{1+\mu}\Id,\quad J_{A_2} = P_{\mathbb{R}_-}\circ \frac{1}{1+\mu}\Id,\quad J_{A_3} = P_{\{0\}}\circ \frac{1}{1+\mu}\Id,$$ where $P_{\mathbb{R}_+}, P_{\mathbb{R}_-}, P_{\{0\}}$ denote the projections onto $\mathbb{R}_+, \mathbb{R}_-$ and $\{0\}$ respectively. Let $\mathbf{z}=\binom{z_1}{z_2}\in\mathbb{R}_-\times\{0\}$. Then \eqref{eq: fixed point operator} and \eqref{eq: def of x} become \begin{equation*} \left\{\begin{aligned} x_{1} &=J_{A_{1}}(z_{1}) = P_{\mathbb{R}_+}\left(\frac{1}{1+\mu}z_1\right)=0 \\ x_{2} &=J_{A_{2}}(z_{2}+x_{1}-z_{1}) = P_{\mathbb{R}_-}\left(-\frac{1}{1+\mu}z_1\right) = 0\\ x_{3} &= J_{A_{3}}(x_1+x_2-z_2) = P_{\{0\}}\left(\frac{1}{1+\mu}\cdot 0\right)=0 \end{aligned}\right. \implies T_{\rm MT}(\mathbf{z}) = \mathbf{z} + \gamma\begin{pmatrix} 0\\ 0\\ \end{pmatrix} = \mathbf{z}, \end{equation*} and thus we conclude that $\mathbb{R}_-\times\{0\}\subseteq\Fix T_{\rm MT}$. Since $T_{\rm MT}$ has more than one fixed point, we conclude that it is not a contraction. \end{enumerate} \end{remark} We are now ready to state the main result of this section regarding linear convergence of the algorithm presented in \eqref{eq:1} and \eqref{eq:2}. \begin{theorem}\label{theorem for linear convergence} Let $n\geq2$ and $\gamma\in(0,1)$. Suppose that one of the following holds: \begin{enumerate}[(a)] \item $A_{1},\dots,A_{n-1}:\mathcal{H}\rightarrow\mathcal{H}$ are maximally monotone and $L$-Lipschitz, and $A_{n}:\mathcal{H}\setto\mathcal{H}$ is maximally $\mu$-strongly monotone. \item $A_{1},\dots,A_{n-1}:\mathcal{H}\rightarrow\mathcal{H}$ are maximally $\mu$-strongly monotone and $L$-Lipschitz, and $A_{n}:\mathcal{H}\setto\mathcal{H}$ is maximally monotone. \end{enumerate} Given $\mathbf{z}^0\in \mathcal{H}^{n-1}$, let $(\mathbf{z}^k)_{k\in\mathbb{N}}$ and $(\mathbf{x}^k)_{k\in\mathbb{N}}$ be the sequences given by~\eqref{eq:1} and \eqref{eq:2}. Then the following assertions hold: \begin{enumerate}[(i)] \item $(\mathbf{z}^k)_{k\in\mathbb{N}}$ converges $R$-linearly to the unique fixed point $\mathbf{z}^*\in\Fix T_{\rm MT}$. 
\item $(\mathbf{x}^k)_{k\in\mathbb{N}}$ converges $R$-linearly to a point $(x^*,\dots, x^*)\in \mathcal{H}^n$, where $x^*$ is the unique element of $\zer(\sum_{i=1}^{n}A_{i})$. \end{enumerate} \end{theorem} \begin{proof} (i):~Since the operators $A_{1},\dots,A_{n}$ satisfy either (a) or (b), Lemma~\ref{lemma for contraction factor} implies that the operator $T_{\rm MT}$ is a $\beta$-contraction for some $\beta\in(0,1)$. Thus, according to the Banach fixed-point theorem (Theorem~\ref{Banach Theorem}), $T_{\rm MT}$ has a unique fixed point, say $\{\mathbf{z}^*\}=\Fix T_{\rm MT}$, and \begin{equation}\label{linear convergence} \|\mathbf{z}^k-\mathbf{z}^*\|\leq c\beta^k\quad\forall k\in\mathbb{N}, \end{equation} where $c:=\|\mathbf{z}^{0}-\mathbf{z}^*\|$. In particular, this shows that the sequence $(\mathbf{z}^k)$ converges $R$-linearly to $\mathbf{z}^*\in\Fix T_{\rm MT}$. (ii):~Since $\mathbf{z}^*\in\Fix T_{\rm MT}$, Lemma~\ref{new lemma} implies $x^*=J_{A_{1}}({z}^*_{1})\in \zer(\sum_{i=1}^n A_{i})$ and that \eqref{eq: def of x^*} holds. Furthermore, since $\sum_{i=1}^nA_{i}$ is strongly monotone, $\{x^*\}=\zer(\sum_{i=1}^n A_{i})$. Recall that $J_{A_{i}}$ is nonexpansive for $i\in\{1,\dots,n\}$ (Proposition~\ref{nonexpansiveness}). Thus, together with \eqref{linear convergence}, we deduce that \begin{equation}\label{eq:r-lin x1} \|x^k_{1}-x^*\|=\|J_{A_{1}}(z^k_{1})-J_{A_{1}}(z^*_{1})\|\leq\|z_{1}^k-z^*_{1}\| \leq \|\mathbf{z}^k-\mathbf{z}^*\| \leq c\beta^k. \end{equation} For $i\in\{2,\dots,n-1\}$, with the help of \eqref{linear convergence}, \eqref{eq:r-lin x1} and \eqref{eq:r-lin xi} (inductively), we get \begin{equation}\label{eq:r-lin xi}\begin{aligned} \|x^k_{i}-x^*\| &=\|J_{A_{i}}(z^k_{i}+x^k_{i-1}-z^k_{i-1})-J_{A_{i}}(z^*_{i}+x^*-z^*_{i-1})\| \\ &\leq \|z_{i}^k-z^*_{i}\|+\|z^k_{i-1}-z^*_{i-1}\|+\|x^k_{i-1}-x^*\|\\ &\leq \|\mathbf{z}^k-\mathbf{z}^*\| + \|\mathbf{z}^k-\mathbf{z}^*\| + \bigl(2(i-1)-1\bigr) c\beta^k \leq (2i-1)c\beta^k. \end{aligned} \end{equation} Finally, using \eqref{linear convergence}, \eqref{eq:r-lin x1} and \eqref{eq:r-lin xi}, we deduce \begin{align*} \|x^k_{n}-x^*\| &=\|J_{A_{n}}(x^k_{1}+x^k_{n-1}-z^k_{n-1})-J_{A_{n}}(x^*+x^*-z^*_{n-1})\|\\ &\leq\|z^k_{n-1}-z^*_{n-1}\|+\|x^k_{1}-x^*\|+\|x^k_{n-1}-x^*\|\\ &\leq \|\mathbf{z}^k-\mathbf{z}^*\| + c\beta^k + \bigl(2(n-1)-1\bigr) c\beta^k \leq (2n-1)c\beta^k. \end{align*} This shows that $(\mathbf{x}^k)$ converges $R$-linearly to $(x^*,\dots,x^*)\in\mathcal{H}^n$. \end{proof} \section{Linear Convergence of a Primal-Dual Algorithm} \label{s: section 4} In this section, we apply the results of Section~\ref{s:resolvent splitting} to the setting of Example~\ref{example 1.1} to derive linear convergence of a primal-dual algorithm based on the resolvent splitting algorithm presented in \eqref{eq:1} and \eqref{eq:2}. To this end, let $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ denote real Hilbert spaces and consider the minimization problem with $n\geq 2$ given by \begin{equation} \label{convex optimization} \min_{u\in\mathcal{H}_{1}}\quad \sum_{i=2}^{n}f_{i}(u)+(g_{2}\Box\cdots\Box g_{n})(Cu) \end{equation} under the hypotheses stated below in Assumption~\ref{assumption}. Using Proposition~\ref{remark infimal convolution}, the Fenchel dual of \eqref{convex optimization} can be expressed as \begin{equation} \label{convex optimization dual} \min_{v\in\mathcal{H}_{2}}\quad (f^*_{2}\Box\cdots\Box f^*_{n})(C^*v)+\sum_{i=2}^{n}g^*_{i}(-v).
\end{equation} \begin{assumption}\label{assumption} \begin{enumerate}[(i)] \item $C:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}$ is bounded and linear. \item $f_{i}:\mathcal{H}_{1}\rightarrow\mathbb{R}$ is convex and differentiable with $\alpha$-Lipschitz continuous gradient for $i=2,\dots,n-1$. \item $f_{n}:\mathcal{H}_{1}\rightarrow(-\infty,+\infty]$ is proper, closed and $\sigma$-strongly convex. \item $g_{i}:\mathcal{H}_{2}\rightarrow(-\infty,+\infty]$ is proper, closed and $\tau$-strongly convex for $i=2,\dots,n-1$. \item $g_{n}:\mathcal{H}_{2}\rightarrow\mathbb{R}$ is convex and differentiable with $\beta$-Lipschitz continuous gradient. \end{enumerate} \end{assumption} Denote $f(u):=\sum_{i=2}^{n-1}f_{i}(u)+f_{n}(u)$ and $g(v):=(g_{2}\Box\cdots\Box g_{n})(v)$. By Assumptions~\ref{assumption}\hyperref[assumption]{(ii)}-\hyperref[assumption]{(iii)}, we have $\dom f=\cap_{i=2}^n\dom f_i=\dom f_n\neq\emptyset$ and, by Assumptions~\ref{assumption}\hyperref[assumption]{(iv)}-\hyperref[assumption]{(v)} combined with Proposition~\ref{prop for infimal convolution}, we have that $g$ is proper lsc convex with $\dom g=\mathcal{H}_2$. We therefore have that $$\sri(\dom g-C(\dom f)) =\sri(\mathcal{H}_2-C(\dom f_n))=\sri\mathcal{H}_2= \mathcal{H}_2\ni 0.$$ Thus the first-order optimality condition for the primal problem~\eqref{convex optimization} followed by the subdifferential sum rule (Theorem~\ref{sum rule of subdifferential for two functions}) gives \begin{equation}\label{eq:fop} 0\in\partial(f+g\circ C)(u)=\partial f(u)+C^*\partial g(Cu). \end{equation} The inclusion \eqref{eq:fop} holds precisely when there exists $v\in\partial g(Cu)$ such that $0\in\partial f(u)+C^*v$. Since $(\partial g)^{-1}=\partial g^*$ by \cite[p.~216]{rockafellar1970maximal}, the condition $v\in \partial g(Cu)$ is equivalent to $Cu\in (\partial g)^{-1}(v)=\partial g^*(v)$, that is, to $0\in\partial g^*(v)-Cu$. Combining these two inclusions into a single system gives the monotone inclusion \begin{equation}\label{inclusion for f and g} \begin{pmatrix} 0\\0 \end{pmatrix}\in\begin{pmatrix} 0&C^*\\-C&0 \end{pmatrix}\begin{pmatrix} u\\v \end{pmatrix}+\begin{pmatrix} \partial f(u)\\\partial g^*(v) \end{pmatrix}. \end{equation} Since $f_{2},\dots,f_{n-1}$ are differentiable with full domain, we have $\partial f=\sum_{i=2}^{n-1}\nabla f_i+\partial f_n$ by Theorem~\ref{sum rule of subdifferential for two functions}. Similarly, $g^*_{2},\dots,g^*_{n-1}$ are differentiable by \cite[Theorem 2.1]{bauschke2009baillon}, thus Proposition~\ref{remark infimal convolution} and Theorem~\ref{sum rule of subdifferential for two functions} yield $\partial g^*=\partial\bigl((g_{2}\Box\cdots\Box g_{n})^{*}\bigr)=\sum_{i=2}^{n-1}\nabla g_i^*+\partial g_n^*$. Substituting these two identities into \eqref{inclusion for f and g} gives \begin{equation}\label{n monotone} \begin{pmatrix} 0\\0 \end{pmatrix}\in\begin{pmatrix} 0&C^*\\-C&0 \end{pmatrix}\begin{pmatrix} u\\v \end{pmatrix}+\sum_{i=2}^{n-1}\begin{pmatrix} \nabla f_{i}(u)\\\nabla g_{i}^*(v)\end{pmatrix}+\begin{pmatrix} \partial f_{n}(u)\\\partial g_{n}^*(v) \end{pmatrix}. \end{equation} In other words, \eqref{n monotone} is a monotone inclusion with $n$ operators in the form of \eqref{eq:1n} with \begin{equation} \label{monotone operators*} x=(u,v),\quad \mathcal{H}=\mathcal{H}_{1}\times\mathcal{H}_{2},\quad A_{1}=\begin{pmatrix} 0&C^*\\-C&0 \end{pmatrix}, \quad A_{i}=\begin{pmatrix} \nabla f_{i}\\\nabla g_{i}^*\end{pmatrix}, \quad A_{n}=\begin{pmatrix} \partial f_{n}\\\partial g_{n}^* \end{pmatrix}.
\end{equation} The following lemma summarises properties of the operators defined in \eqref{monotone operators*}. \begin{lemma}\label{lemma primal dual splitting} Let $A_1,\dots,A_n$ be the operators defined in \eqref{monotone operators*}, and suppose Assumption~\ref{assumption} holds. Then the following assertions hold. \begin{enumerate}[(i)] \item $A_{1}$ is maximally monotone and $\|C\|$-Lipschitz. \item $A_{i}$ is maximally monotone and $L$-Lipschitz with $L=\max\{\alpha,\frac{1}{\tau}\} \text{ for }i=2,\dots,n-1$. \item $A_{n}$ is maximally $\mu$-strongly monotone with $\mu=\min\{\sigma, \frac{1}{\beta}\}$. \end{enumerate} \end{lemma} \begin{proof} (i):~Since $A_{1}$ is skew-symmetric, \cite[Example~20.30]{bauschke2011convex} implies that $A_{1}$ is maximally monotone. For all $(u,v),(u',v')\in\mathcal{H}_{1}\times\mathcal{H}_{2}$, we have \begin{align*} \|A_1(u,v)-A_1(u',v')\|^2 &= \|C^*(v-v')\|^2+\|C(u-u')\|^2 \\ &\leq \|C^*\|^2\|v-v'\|^2+\|C\|^2\|u-u'\|^2=\|C\|^2\|(u,v)-(u',v')\|^2, \end{align*} where the last equality uses the identity $\|C^*\|=\|C\|$. Hence, $A_{1}$ is $\|C\|$-Lipschitz. (ii):~By Assumption~\ref{assumption}\hyperref[assumption]{(iv)} and \cite[Theorem~2.1]{bauschke2009baillon}, $g^*_{i}$ is differentiable with $\frac{1}{\tau}$-Lipschitz gradient for $i=2,\dots,n-1$. Therefore, $A_{i}$ is maximally monotone by \cite[Corollary~20.25]{bauschke2011convex} and $L$-Lipschitz continuous with $L=\max\{\alpha,\frac{1}{\tau}\}$. (iii):~By Assumption~\ref{assumption}\hyperref[assumption]{(iii)} and \cite[Theorem~20.40~\&~Example~22.3(iv)]{bauschke2011convex}, $\partial f_{n}$ is maximally $\sigma$-strongly monotone. By Assumption~\ref{assumption}\hyperref[assumption]{(v)} and \cite[Theorem~2.1]{bauschke2009baillon}, $\partial g^*_{n}$ is maximally $\frac{1}{\beta}$-strongly monotone. Therefore, $A_{n}$ is maximally $\mu$-strongly monotone with $\mu=\min\{\sigma, \frac{1}{\beta}\}$. \end{proof} Let $\gamma\in(0,1)$. When applied to \eqref{monotone operators*}, the fixed point operator from Section~\ref{s:resolvent splitting} (given in \eqref{eq: fixed point operator} and \eqref{eq: def of x}) becomes \begin{equation} \label{new algorithm def} \begin{aligned} T_{\rm PD}&\begin{pmatrix}\mathbf{p},\mathbf{q}\end{pmatrix}=\begin{pmatrix}\mathbf{p},\mathbf{q}\end{pmatrix}+\gamma \begin{pmatrix} \begin{pmatrix} u_{2},v_{2} \end{pmatrix}-\begin{pmatrix} u_{1},v_{1} \end{pmatrix}\\\begin{pmatrix} u_{3},v_{3} \end{pmatrix}-\begin{pmatrix} u_{2},v_{2} \end{pmatrix}\\\vdots\\\begin{pmatrix} u_{n},v_{n} \end{pmatrix}-\begin{pmatrix} u_{n-1},v_{n-1} \end{pmatrix} \end{pmatrix}, \end{aligned} \end{equation} where $\mathbf{u}=(u_{1},\dots, u_{n})\in{\mathcal{H}}_{1}^n$ and $\mathbf{v}=(v_{1},\dots, v_{n})\in\mathcal{H}_{2}^n$ are given by \begin{equation} \label{new x def} \left\{\begin{aligned} \begin{pmatrix} u_{1}\\v_{1} \end{pmatrix}&=J_{A_{1}}\begin{pmatrix} p_{1}\\q_{1} \end{pmatrix}\\ \begin{pmatrix} u_{i}\\v_{i} \end{pmatrix}&=J_{A_{i}}\begin{pmatrix} p_{i}+u_{i-1}-p_{i-1}\\q_{i}+v_{i-1}-q_{i-1} \end{pmatrix}&\forall i\in \{2,\dots,n-1\}\\ \begin{pmatrix} u_{n}\\v_{n} \end{pmatrix}&=J_{A_{n}}\begin{pmatrix} u_{1}+u_{n-1}-p_{n-1}\\v_{1}+v_{n-1}-q_{n-1} \end{pmatrix}. \end{aligned}\right. \end{equation} We now look at how to compute the resolvents in \eqref{new x def}.
Using the standard formula for the inverse of a $2\times 2$ block matrix (see, for example, \cite[Section 2.1]{o2014primal}), the resolvent of $A_{1}$ can be expressed as \begin{align*} J_{{A}_{1}}=\begin{pmatrix} \Id&C^*\\-C&\Id \end{pmatrix}^{-1} = \begin{pmatrix} 0&0\\0&\Id \end{pmatrix}+\begin{pmatrix} \Id\\C \end{pmatrix}(\Id+C^*C)^{-1}\begin{pmatrix} \Id\\-C \end{pmatrix}^*. \end{align*} For $i\in\{2,\dots,n\}$, by using the Moreau decomposition for $g_i$ (Theorem~\ref{Moreau decomposition}), we deduce that the resolvent of $A_{i}$ is given by $$ J_{A_i} = \binom{\prox_{f_i}}{\prox_{g^*_i}} = \binom{\prox_{f_i}}{\Id-\prox_{g_i}}. $$ The algorithm obtained by putting this all together is described in Algorithm~\ref{alg:PD}, and its convergence is analyzed in Corollary~\ref{corollary primal dual linear convergence}. \SetKwComment{Comment}{/* }{ */} \RestyleAlgo{ruled} \begin{algorithm}[!htb] \caption{A primal-dual algorithm for solving \eqref{convex optimization} and \eqref{convex optimization dual}.\label{alg:PD}} \KwIn{Choose $\mathbf{p}^0=(p^0_{1},\dots,p^0_{n-1})\in\mathcal{H}^{n-1}_{1}$, $\mathbf{q}^0=(q^0_{1},\dots,q^0_{n-1})\in\mathcal{H}^{n-1}_{2}$ and $\gamma\in(0,1)$.} \For{$k=0,1,\dots$}{Compute $\mathbf{u}^k=(u^k_{1},\dots, u^k_{n})\in{\mathcal{H}}_{1}^n$ and $\mathbf{v}^k=(v^k_{1},\dots, v^k_{n})\in\mathcal{H}_{2}^n$ according to \begin{equation} \label{new x} \left\{\begin{aligned} u^k_{1}&=(\Id+C^*C)^{-1}(p_1^k-C^*q_1^k)\\ v^k_{1}&=q_1^k+Cu_1^k\\ u^k_{i}&=\prox_{f_{i}}(p^k_{i}+u^k_{i-1}-p^k_{i-1})&\forall i\in \{2,\dots,n-1\}\\ v^k_{i}&=(\Id-\prox_{g_{i}})(q^k_{i}+v^k_{i-1}-q^k_{i-1})&\forall i\in \{2,\dots,n-1\}\\ u^k_{n}&= \prox_{{f}_{n}}(u^k_{1}+u^k_{n-1}-p^k_{n-1})\\ v^k_{n}&=(\Id-\prox_{g_{n}})(v^k_{1}+v^k_{n-1}-q^k_{n-1}). \end{aligned}\right. \end{equation} Update $\mathbf{p}^{k+1}=(p^{k+1}_{1},\dots, p^{k+1}_{n-1})\in \mathcal{H}_{1}^{n-1}$ and $\mathbf{q}^{k+1}=(q^{k+1}_{1},\dots, q^{k+1}_{n-1})\in \mathcal{H}_{2}^{n-1}$ according to \begin{equation} \label{new algorithm} \mathbf{p}^{k+1}=\mathbf{p}^k+\gamma \begin{pmatrix} u^{k}_{2} - u^{k}_{1}\\ u^{k}_{3}- u^{k}_{2}\\\vdots\\ u^{k}_{n}- u^{k}_{n-1} \end{pmatrix}, \qquad \mathbf{q}^{k+1}=\mathbf{q}^k+\gamma\begin{pmatrix} v^{k}_{2} - v^{k}_{1}\\ v^{k}_{3}- v^{k}_{2}\\\vdots\\ v^{k}_{n}- v^{k}_{n-1} \end{pmatrix}. \end{equation}} \end{algorithm} Let $u^*$ be the (unique) solution of the primal problem~\eqref{convex optimization}, and $v^*$ be the unique solution of the dual problem~\eqref{convex optimization dual}. Then the \emph{primal-dual gap} is given by \begin{equation}\label{primal-dual gap} G(u,v)=\left( \sum_{i=2}^{n}f_{i}(u)+\langle Cu,v^*\rangle - \sum_{i=2}^{n}g_i^*(v^*)\right)-\left( \sum_{i=2}^{n}f_{i}(u^*)+\langle Cu^*,v\rangle - \sum_{i=2}^{n}g_i^*(v)\right). \end{equation} Now we are ready for the main result of this section. \begin{corollary}\label{corollary primal dual linear convergence} Suppose Assumption~\ref{assumption} holds. Given $(\mathbf{p}^0,\mathbf{q}^0)\in\mathcal{H}_{1}^{n-1}\times\mathcal{H}_{2}^{n-1}$, let $(\mathbf{p}^k,\mathbf{q}^k)\in\mathcal{H}_{1}^{n-1}\times\mathcal{H}_{2}^{n-1}$ and $(\mathbf{u}^k,\mathbf{v}^k)\in\mathcal{H}_{1}^n\times\mathcal{H}^n_{2}$ be the sequences given by \eqref{new x} and \eqref{new algorithm} in Algorithm~\ref{alg:PD}. Then the following assertions hold: \begin{enumerate}[(i)] \item $(\mathbf{p}^k,\mathbf{q}^k)_{k\in\mathbb{N}}$ converges $R$-linearly to the unique fixed point $(\mathbf{p^*},\mathbf{q^*})\in\Fix T_{\rm PD}$.
\item $(\mathbf{u}^k)_{k\in\mathbb{N}}$ converges $R$-linearly to $(u^*,\dots, u^*)\in\mathcal{H}^n_{1}$, where $u^*$ is the unique solution of the primal problem~\eqref{convex optimization}. \item $(\mathbf{v}^k)_{k\in\mathbb{N}}$ converges $R$-linearly to $(v^*,\dots, v^*)\in\mathcal{H}^n_{2}$, where $v^*$ is the unique solution of the dual problem~\eqref{convex optimization dual}. \item $G(u^k_n,v^k_n)$ converges $R$-linearly to zero. \end{enumerate} \end{corollary} \begin{proof} (i)-(iii):~By Lemma~\ref{lemma primal dual splitting}, the operators $A_1,\dots,A_n$ satisfy the assumptions of Theorem~\ref{theorem for linear convergence}\hyperref[theorem for linear convergence]{(a)}. Thus, by applying Theorem~\ref{theorem for linear convergence}, we have that (i) holds, that $(\mathbf{u}^k,\mathbf{v}^k)$ converges $R$-linearly to a point $((u^*,\dots,u^*),(v^*,\dots,v^*))$ such that $(u^*,v^*)\in\zer(\sum_{i=1}^nA_i)$, and that \eqref{eq: def of x^*} holds. The latter together with \cite[Theorem~19.1]{bauschke2011convex} implies that $u^*$ solves the primal problem~\eqref{convex optimization} and $v^*$ solves the dual problem~\eqref{convex optimization dual}. Since both primal and dual problems are strongly convex, their minimizers are unique. (iv):~To establish the result, we analyze each term in $G$ separately. First note that \begin{equation}\label{inequality for inner} \begin{aligned} \left|\langle Cu_n^k,v^*\rangle- \langle Cu^*,v_n^k\rangle\right| &= \left| \langle Cu^k_n,v^*\rangle - \langle Cu^*,v^*\rangle + \langle Cu^*,v^*\rangle -\langle Cu^*,v^k_n\rangle\right| \\ &= \left|\langle u^k_n-u^*,C^*v^*\rangle + \langle Cu^*,v^*-v^k_n\rangle \right| \\ &\leq \left|\langle u^k_n-u^*,C^*v^*\rangle \right|+\left|\langle Cu^*,v^*-v^k_n\rangle\right|\\ &\leq \|u^k_n-u^*\|\|C^*v^*\|+\|Cu^*\|\|v^k_n-v^*\|, \end{aligned} \end{equation} from which it follows that $(\langle Cu_n^k,v^*\rangle- \langle Cu^*,v_n^k\rangle)_{k\in\mathbb{N}}$ converges $R$-linearly to zero. Next, since $f_{i}$ is convex and differentiable with $\alpha$-Lipschitz gradient for $i\in\{2,\dots,{n-1}\}$, we have $$ \|\nabla f_i(u_n^k)\| \leq \|\nabla f_i(u_n^k)-\nabla f_i(u^*)\|+\|\nabla f_i(u^*)\| \leq \alpha \|u_n^k-u^*\|+ \|\nabla f_i(u^*)\|. $$ Since $(u^k_n)$ converges $R$-linearly to $u^*$ by (ii), it follows that $(\nabla f_i(u_n^k))$ is bounded. By applying Lemma~\ref{lemma for gap}, it follows that $f_{i}(u^k_{n})$ converges $R$-linearly to $f_{i}(u^*)$. Since $g^*_{i}$ is convex and differentiable with $\frac{1}{\tau}$-Lipschitz gradient for $i\in\{2,\dots,{n-1}\}$, an analogous argument shows that $g^*_{i}(v^k_{n})$ converges $R$-linearly to $g^*_{i}(v^*)$. Due to \eqref{new x}, we have $u^k_{1}+u^k_{n-1}-u^k_{n}-p^k_{n-1}\in\partial f_{n}(u^k_{n})$, which is a bounded sequence (being a sum of convergent sequences), and, due to \eqref{eq: def of x^*}, $u^*-p^*_{n-1}\in\partial f_{n}(u^*)$. Thus, by applying Lemma~\ref{lemma for gap}, it follows that $f_{n}(u^k_{n})$ converges $R$-linearly to $f_{n}(u^*)$. An analogous argument shows that $g^*_{n}(v^k_{n})$ converges $R$-linearly to $g^*_{n}(v^*)$. Since all terms in $G$ converge $R$-linearly, the result follows. \end{proof} \section{Numerical Experiment: Image denoising}\label{s: Experiment} In this section, we give a numerical illustration of Algorithm~\ref{alg:PD} applied to image denoising. All computations are run in Python 3.12.0 on a MacBook Pro machine equipped with 16GB memory and an Apple M1 Pro chip.
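To make the structure of the iteration concrete, the updates \eqref{new x} and \eqref{new algorithm} can be written in a few lines of code. The following NumPy sketch is our own illustration and is not the implementation used for the experiments: it assumes a dense matrix \texttt{C} together with user-supplied proximal mappings of $f_2,\dots,f_n$ and $g_2,\dots,g_n$, and it performs a single iteration of Algorithm~\ref{alg:PD}.
\begin{verbatim}
import numpy as np

def primal_dual_iteration(p, q, prox_f, prox_g, C, gamma):
    """One pass of the updates in (new x) and (new algorithm) (sketch).

    p, q    : lists of n-1 vectors in H1 and H2, respectively
    prox_f  : list of proximal mappings [prox_f2, ..., prox_fn]
    prox_g  : list of proximal mappings [prox_g2, ..., prox_gn]
    C       : dense NumPy array representing the linear map H1 -> H2
    gamma   : step parameter in (0, 1)
    """
    n = len(prox_f) + 1                    # number of operators A_1, ..., A_n
    u, v = [None] * n, [None] * n

    # resolvent of the skew operator A_1, via the normal equations
    rhs = p[0] - C.T @ q[0]
    u[0] = np.linalg.solve(np.eye(C.shape[1]) + C.T @ C, rhs)
    v[0] = q[0] + C @ u[0]

    # resolvents of A_2, ..., A_{n-1}
    for i in range(1, n - 1):
        u[i] = prox_f[i - 1](p[i] + u[i - 1] - p[i - 1])
        w = q[i] + v[i - 1] - q[i - 1]
        v[i] = w - prox_g[i - 1](w)        # Moreau: prox_{g*} = Id - prox_g

    # resolvent of A_n
    u[n - 1] = prox_f[n - 2](u[0] + u[n - 2] - p[n - 2])
    w = v[0] + v[n - 2] - q[n - 2]
    v[n - 1] = w - prox_g[n - 2](w)

    # governing updates for p and q
    p_new = [p[i] + gamma * (u[i + 1] - u[i]) for i in range(n - 1)]
    q_new = [q[i] + gamma * (v[i + 1] - v[i]) for i in range(n - 1)]
    return p_new, q_new, u, v
\end{verbatim}
For the denoising problem considered below, \texttt{prox\_f} and \texttt{prox\_g} would be instantiated with the formulas collected later in this section and \texttt{C} with the discrete gradient $D$; in practice $C$ is applied as a sparse operator rather than stored as a dense matrix.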
Let $u$ denote an $M\times M$ grayscale image, with pixel values ranging from 0 to 1 (0 representing black and 1 white), represented by a vector in $\mathbb{R}^m$ with $m=M\times M$, which is to be recovered from a noisy observation $b\in\mathbb{R}^m$. In our numerical experiments, the observed image $b$ is generated by adding Gaussian noise to the true image. Let $\lambda_1, \lambda_2, \lambda_3, \lambda_4>0$. To recover $u$ from $b$, we consider the minimization problem~\cite[Section 6.2]{aragon2021strengthened} \begin{equation}\label{delburring problem} \min_{u\in\mathbb{R}^{m}}\frac{1}{2}\|u-b\|^2+\frac{\lambda_{1}}{2}\|u\|^2+\left((\lambda_{2}\|\cdot\|_{\iso}+\frac{\lambda_3}{2}\|\cdot\|^2) \Box (\frac{\lambda_4}{2}\|\cdot\|^2)\right)(Du), \end{equation} where $\|\binom{v^1}{v^2}\|_{\iso}:=\sum_{i=1}^m\sqrt{(v^1_i)^2+(v^2_i)^2}$ and $D\in\mathbb{R}^{2m\times m}$ denotes the \emph{discrete gradient} given by \begin{equation*} D=\begin{pmatrix} \Id\otimes D_{1}\\D_{1}\otimes\Id \end{pmatrix},\qquad D_{1}=\begin{pmatrix} -1&1&0&\cdots&0&0\\ 0&-1&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-1&1\\ 0&0&0&\cdots&0&0\\ \end{pmatrix}\in\mathbb{R}^{M\times M}. \end{equation*} Here $\otimes$ denotes the Kronecker product. Under this notation, $u\mapsto \|Du\|_{\iso}$ is the \emph{discrete isotropic total variation (TV) norm} introduced in \cite{rudin1992nonlinear}. In~\eqref{delburring problem}, the first term $\frac{1}{2}\|u-b\|^2$ ensures data fidelity, and the term $\|Du\|_{\iso}+\frac{1}{2}\|Du\|^2$ ensures the reconstruction has small total variation, which promotes ``sharp edges'' in the image. The terms $\frac{1}{2}\|u\|^2$ and $\frac{1}{2}\|D u\|^2$ are regularizers needed in the setting of our algorithm. The parameters $\lambda_1,\lambda_2,\lambda_3, \lambda_4>0$ are used to control the relative importance of each of these components. Problem~\eqref{delburring problem} can be seen as a special case of problem \eqref{convex optimization} with $n=3$, $\mathcal{H}_{1}=\mathbb{R}^m$, $\mathcal{H}_{2}=\mathbb{R}^{2m}$ and $$f_2(u)=\frac{1}{2}\|u-b\|^2, \quad f_3(u)=\frac{\lambda_{1}}{2}\|u\|^2, \quad g_{2}(v)=\lambda_{2}\|v\|_{\iso}+ \frac{\lambda_{3}}{2}\|v\|^2, \quad g_{3}(v)=\frac{\lambda_4}{2}\|v\|^2,\quad C=D.$$ Here, $f_{2}$ is differentiable with $1$-Lipschitz continuous gradient, $f_{3}$ is $\lambda_{1}$-strongly convex, $g_{2}$ is $\lambda_{3}$-strongly convex, and $g_{3}$ is differentiable with $\lambda_4$-Lipschitz continuous gradient. We can therefore apply Algorithm~\ref{alg:PD}, for which Corollary~\ref{corollary primal dual linear convergence} guarantees linear convergence to the solution of \eqref{delburring problem}. To do so, it is necessary to compute the proximal operators appearing in \eqref{new x}. In what follows, we collect expressions for these operators. \begin{enumerate}[(i)] \item The proximal operators of the functions $f_2$, $f_3$ and $g_3$ are given, respectively, by $$\prox_{f_{2}}(u)=\frac{u+b}{2}, \quad \prox_{f_{3}}(u)=\frac{u}{1+\lambda_{1}}, \quad\prox_{g_3}(v)=\frac{v}{1+\lambda_4}.$$ \item Denote $v=\binom{v^1}{v^2}\in\mathbb{R}^{2m}$.
By combining~\cite[Proposition 23.29(i)]{bauschke2011convex} and \cite[p.~1727]{o2014primal}, we obtain $$\prox_{g_2}(v)=\prox_{\frac{\lambda_2}{\lambda_3+1}\|\cdot\|_{\iso}}\left(\frac{1}{\lambda_3+1}v\right) = \frac{1}{\lambda_3+1}\binom{\alpha\odot v^1}{\alpha\odot v^2},$$ where $\odot$ denotes the Hadamard product and $\alpha\in\mathbb{R}^{m}$ is given by $$ \alpha_i = \begin{cases} 1-\dfrac{\lambda_2}{\sqrt{({v^1_{i}})^2+({v^2_{i}})^2}} & \text{if } \sqrt{({v^1_{i}})^2+({v^2_{i}})^2}>{\lambda_2}, \\ 0 & \text{otherwise.} \end{cases} $$ \end{enumerate} \begin{table}[b] \centering \renewcommand{\arraystretch}{1.2} \caption{The effect of different values of $\gamma$ for Algorithm~\ref{alg:PD} after 100 iterations.} \begin{tabular}{|c|c|c|} \hline $\gamma$ & $\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^{k-1},\mathbf{q}^{k-1})\|}{m}$ & SNR \\ \hline 0.01 & $1.10\times10^{-5}$ & 1.72\\ 0.05 & $1.81\times10^{-5}$ & 6.52 \\ 0.1 & $8.49\times 10^{-6}$ & 10.56\\ 0.5 & $2.14\times 10^{-7}$ & 12.10\\ 0.9 & $4.51\times 10^{-8}$ & 12.10\\ 0.99 & $3.19\times 10^{-8}$ & 12.10\\ \hline \end{tabular} \label{tab:different_values_of_gamma} \end{table} \begin{figure}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=0.95\textwidth]{images/Cam_Original_image144.png} \caption{Original image} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=0.95\textwidth]{images/cam_Noisy_image144.png} \caption{Noisy image} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=0.95\textwidth]{images/cam_Denoised_image_la1_0.01.png} \caption{Restored image} \end{subfigure} \caption{The denoised image for Algorithm~\ref{alg:PD} with $\gamma=0.99, \lambda_{1}=0.01, \lambda_2=0.05, \lambda_3=0.0001$, and $\lambda_{4}=10$.} \label{fig:observed from noisy} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_0_la1.png} \caption{$\lambda_{1}=0.0001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_1_la1.png} \caption{$\lambda_{1}=0.001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_2_la1.png} \caption{$\lambda_{1}=0.01$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_3_la1.png} \caption{$\lambda_{1}=1$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_4_la1.png} \caption{$\lambda_1=5$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_0_la2.png} \caption{$\lambda_{2}=0.001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_1_la2.png} \caption{$ \lambda_{2}=0.01$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_2_la2.png} \caption{$ \lambda_{2}=0.05$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_3_la2.png} \caption{$ \lambda_{2}=0.5$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_4_la2.png} \caption{$\lambda_{2}=1$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=0.95\textwidth]{images/image_0_la3.png} \caption{$\lambda_3=0.000001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=.95\textwidth]{images/image_1_la3.png} \caption{$\lambda_3=0.00001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_2_la3.png} \caption{$\lambda_3=0.0001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_3_la3.png} \caption{$\lambda_3=0.5$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_4_la3.png} \caption{$\lambda_3=1$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_0_la4.png} \caption{$\lambda_4=0.001$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_1_la4.png} \caption{$\lambda_4=1$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_2_la4.png} \caption{$\lambda_4=10$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_3_la4.png} \caption{$\lambda_4=20$} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \includegraphics[width=0.95\textwidth]{images/image_4_la4.png} \caption{$\lambda_4=100$} \end{subfigure} \caption{The effect of varying $\lambda_1, \lambda_2, \lambda_3$ and $\lambda_4$ on the solution of \eqref{delburring problem}.} \label{fig:effect of lambda4} \end{figure} To examine the effect of varying regularization parameters, we applied Algorithm~\ref{alg:PD} to the problem \eqref{delburring problem} with the ``Cameraman" image~\cite{schreiber1978image}. The original image was degraded by a Gaussian noise with zero mean and standard deviation 0.05. Figure~\ref{fig:observed from noisy} shows the original, noisy, and the recovered image. 
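The discrete gradient $D$ and the proximal operators listed in (i)--(ii) above translate directly into code. The following NumPy/SciPy sketch is our own realization (the function names are ours, and this is not the authors' implementation); the small constant \texttt{1e-15} only guards against division by zero and does not change the thresholding rule.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def discrete_gradient(M):
    """Sparse discrete gradient D = [I kron D1; D1 kron I], size 2m x m, m = M*M."""
    main = -np.ones(M)
    main[-1] = 0.0                       # last row of D1 is zero, as in its definition
    D1 = sp.diags([main, np.ones(M - 1)], offsets=[0, 1], format="csr")
    I = sp.identity(M, format="csr")
    return sp.vstack([sp.kron(I, D1), sp.kron(D1, I)], format="csr")

def prox_f2(u, b):                       # prox of (1/2)||. - b||^2
    return 0.5 * (u + b)

def prox_f3(u, lam1):                    # prox of (lam1/2)||.||^2
    return u / (1.0 + lam1)

def prox_g3(v, lam4):                    # prox of (lam4/2)||.||^2
    return v / (1.0 + lam4)

def prox_g2(v, lam2, lam3):
    """Prox of lam2*||.||_iso + (lam3/2)||.||^2 via block soft-thresholding."""
    m = v.size // 2
    v1, v2 = v[:m], v[m:]
    norms = np.sqrt(v1 ** 2 + v2 ** 2)
    safe = np.maximum(norms, 1e-15)      # numerical safeguard only
    alpha = np.where(norms > lam2, 1.0 - lam2 / safe, 0.0)
    return np.concatenate([alpha * v1, alpha * v2]) / (1.0 + lam3)
\end{verbatim}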
Using a trial-and-error process, summarized in Figure~\ref{fig:effect of lambda4}, the values $(\lambda_{1},\lambda_2,\lambda_3,\lambda_4)=(0.01,0.05,0.0001,10)$ were found to give a good reconstruction; these values are used in all experiments. For the initialization, we set $\mathbf{p}^0=(\mathbf{0},\mathbf{0})\in\mathbb{R}^{m}\times \mathbb{R}^m$ and $\mathbf{q}^0=(\mathbf{0},\mathbf{0})\in\mathbb{R}^{2m}\times\mathbb{R}^{2m}$. Table~\ref{tab:different_values_of_gamma} shows the normalized final change in iterates $\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^{k-1},\mathbf{q}^{k-1})\|}{m}$ and the signal-to-noise ratio (SNR) after 100 iterations. Here the normalization by $m$ makes the measure comparable across images of different sizes, and the signal-to-noise ratio~\cite{price1993signals} is defined as $$\text{SNR}=10\cdot\log_{10}\left(\frac{\|\text{Original}\|^2}{\|\text{Original}-\text{Restored}\|^2}\right).$$ Table~\ref{tab:different_values_of_gamma} suggests that $\gamma=0.99$ is a good choice for Algorithm~\ref{alg:PD}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[height=5cm]{images/Plot_error_vs_iteration_cam_man_1000_pp.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[height=5cm]{images/Plot_error_vs_iteration_cam_man_1000_pq.png} \caption{} \end{subfigure} \caption{(a) Normalized distance to the solution vs iterations, and (b) normalized final change in iterates with $\lambda_{1}=0.01,\lambda_{2}=0.05,\lambda_3=0.0001$ and $\lambda_4=10$ for Algorithm~\ref{alg:PD}.} \label{fig:error for different lambda} \end{figure} Figure~\ref{fig:error for different lambda}\hyperref[fig:error for different lambda]{(a)} illustrates that Algorithm~\ref{alg:PD} converges linearly, which supports the results presented in Corollary~\ref{corollary primal dual linear convergence}. In this figure, we plot the normalized distance to the solution $\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^{*},\mathbf{q}^*)\|}{m}$ for the first 100 iterations, where $(\mathbf{p}^k,\mathbf{q}^k)$ is the sequence generated by Algorithm~\ref{alg:PD} at iteration $k$ and $(\mathbf{p}^*,\mathbf{q}^*)$ is an approximation of the solution obtained by running Algorithm~\ref{alg:PD} for 200 iterations; as Figure~\ref{fig:error for different lambda}\hyperref[fig:error for different lambda]{(b)} shows, the change in iterates is negligible beyond this point. We compared Algorithm~\ref{alg:PD} with the Douglas--Rachford algorithm in the product space, discussed in \eqref{product space DR}, applied to \eqref{monotone inclusion n=2*}. For this comparison, we took four test images~\cite{cvg2023} of different sizes, shown in Figure \ref{fig:restored lena shapman}. Table~\ref{tab:comparison table 2} reports the average number of iterations, normalized distance to the solution, and elapsed time needed to satisfy the stopping criterion $$\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^{k-1},\mathbf{q}^{k-1})\|}{m}\leq 10^{-4} $$ over 10 different instances of Gaussian noise. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_0_original.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_0_noisy.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_0_restored_PD.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_0_restored_DR.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_1_original.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_1_noisy.png} \end{subfigure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_1_restored_PD.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_1_restored_DR.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_2_original.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_2_noisy.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_2_restored_PD.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=.9\textwidth]{images/image_2_restored_DR.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_3_original.png} \caption{Original} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/image_3_noisy.png} \caption{Noisy} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/cam_Denoised_image_la1_0.01.png} \caption{Restored (Algorithm~\ref{alg:PD})} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/cam_Denoised_image_DR.png} \caption{Restored (DR)} \end{subfigure} \caption{Original, noisy and denoised images for Algorithm~\ref{alg:PD} and Douglas--Rachford algorithm in product space.} \label{fig:restored lena shapman} \end{figure} \begin{table}[h!] \centering \renewcommand{\arraystretch}{1.2} \caption{Summary of the computational results for ten random instances.} \begin{tabular}{ |c|c|c|c|c|c|c|c|c| } \hline Image& Size ($M$) &\multicolumn{2}{c|}{\textbf{$k$}} & \multicolumn{2}{c|}{\textbf{$\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^*,\mathbf{q}^*)\|}{m}$}}& \multicolumn{2}{c|}{Time (s)} \\ \cline{3-8} & & Alg~\ref{alg:PD} & DR & Alg~\ref{alg:PD} & DR &Alg~\ref{alg:PD} & DR\\ \hline \multirow{4}{0.5em}\\Shepp-Logan phantom& 96 &9&26&$3.83\times 10^{-5}$&$2.71\times 10^{-5}$&5.10&13.98\\ Barbara & 112 &10&27& $2.49\times 10^{-5}$&$2.08\times 10^{-5}$&5.84&14.68\\ Bird & 128 &8&22& $2.82\times 10^{-5}$&$2.05\times 10^{-5}$&4.81&12.09\\ Cameraman& 144 &10&22&$2.11\times 10^{-5}$&$2.13\times 10^{-5}$&5.88&12.54\\ \hline \end{tabular} \label{tab:comparison table 2} \end{table} The normalized distance to the solution $\frac{\|(\mathbf{p}^k,\mathbf{q}^k)-(\mathbf{p}^{*},\mathbf{q}^{*})\|}{m}$ was plotted against $100$ iterations for both algorithms across the four test images showed in Figure \ref{fig:error graph}. 
For Algorithm~\ref{alg:PD}, Corollary~\ref{corollary primal dual linear convergence} guarantees linear convergence, which is observed in Figure~\ref{fig:error graph}. However, the convergence behavior of the DR algorithm does not clearly exhibit linear convergence, as seen in the same figure. Moreover, based on Table~\ref{tab:comparison table 2} and Figure~\ref{fig:error graph}, Algorithm~\ref{alg:PD} outperforms the Douglas--Rachford algorithm on these instances in terms of the number of iterations, the normalized distance to the solution, and the elapsed time. \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=0.9\textwidth]{images/error_plot_p0.png} \caption{Shepp-Logan phantom} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=0.9\textwidth]{images/error_plot_p1.png} \caption{Barbara} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=0.9\textwidth]{images/error_plot_p2.png} \caption{Bird} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=0.9\textwidth]{images/error_plot_p3.png} \caption{Cameraman} \end{subfigure} \caption{Relative error vs iterations of Algorithm~\ref{alg:PD} and the DR algorithm for the (a) ``Shepp-Logan phantom'', (b) ``Barbara'', (c) ``Bird'', and (d) ``Cameraman'' images.} \label{fig:error graph} \end{figure} \section{Conclusions}\label{s: conclusions} In this work, we established linear convergence of the resolvent splitting algorithm due to Malitsky and Tam~\cite{malitsky2023resolvent} for finding a zero in the sum of $n$ maximally monotone operators under two different sets of assumptions. In the first setting, the first $(n-1)$ operators are maximally monotone and Lipschitz, and the last operator is maximally strongly monotone. In the other setting, the first $(n-1)$ operators are maximally strongly monotone and Lipschitz, and the last one is maximally monotone. We then applied our result under the first set of assumptions to derive linear convergence of a primal--dual algorithm (Algorithm~\ref{alg:PD}) for a convex minimization problem involving infimal convolution, and presented experimental results in the context of image denoising. Our experiments demonstrate that Algorithm~\ref{alg:PD} achieves linear convergence. Furthermore, we conducted a comparative analysis with the Douglas--Rachford (DR) algorithm applied in the product space. For this problem, our results suggest that Algorithm~\ref{alg:PD} performs better than the Douglas--Rachford algorithm in terms of both accuracy and time. One possible direction for further research is to investigate tight linear convergence rates of Algorithm~\ref{alg:PD}, similar to those in \cite{giselsson2017tight}. In our proof of Theorem~\ref{theorem for linear convergence}, we rely on the inequality \eqref{inequality}, which is unlikely to be tight. \section*{Acknowledgments} FAS was supported in part by a Research Training Program Scholarship from the Australian Commonwealth Government and the University of Melbourne. MKT was supported in part by Australian Research Council grant DP230101749. \section*{Disclosure of Interest} The authors report there are no competing interests to declare. \bibliographystyle{plain} \bibliography{ref} \end{document}
2412.12633v2
http://arxiv.org/abs/2412.12633v2
Arborescences of Random Covering Graphs
\documentclass[submission]{FPSAC2025} \usepackage{color} \usepackage{tikz,hyperref} \usepackage{parskip} \usepackage{amsmath} \usepackage{amsfonts} \usepackage[backend=bibtex]{biblatex} \addbibresource{main.bib} \RequirePackage{xcolor} \usepackage{tikz} \usepackage{tikz-cd} \usetikzlibrary{decorations.pathreplacing,decorations.markings} \newcommand{\red}[1]{{\color{red}#1}} \tikzset{ on each segment/.style={ decorate, decoration={ show path construction, moveto code={}, lineto code={ \path [#1] (\tikzinputsegmentfirst) -- (\tikzinputsegmentlast); }, curveto code={ \path [#1] (\tikzinputsegmentfirst) .. controls (\tikzinputsegmentsupporta) and (\tikzinputsegmentsupportb) .. (\tikzinputsegmentlast); }, closepath code={ \path [#1] (\tikzinputsegmentfirst) -- (\tikzinputsegmentlast); }, }, }, mid arrow/.style={postaction={decorate,decoration={ markings, mark=at position .5 with {\arrow[#1]{stealth}} }}}, } \newtheorem{thm}{Theorem}[section] \newtheorem{df}[thm]{Definition} \newtheorem{eg}[thm]{Example} \newtheorem{lm}[thm]{Lemma} \newtheorem{coro}[thm]{Corollary} \newtheorem{prp}[thm]{Proposition} \newtheorem{prob}[thm]{Problem} \newtheorem{rmk}[thm]{Remark} \newtheorem{conj}[thm]{Conjecture} \newtheorem{question}[thm]{Question} \newtheorem*{theorem1*}{Theorem \ref{thm:main}} \newtheorem*{theorem2*}{Theorem \ref{thm:specialize}} \newcommand{\acts}{\lefttorightarrow} \newcommand{\C}{\mathbb{C}} \newcommand{\E}{\mathbb{E}} \newcommand{\F}{\mathbb{F}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\p}{\mathbb{P}} \renewcommand{\S}{\mathfrak{S}} \renewcommand{\L}{\mathcal{L}} \newcommand{\GL}{\textup{GL}} \newcommand{\End}{\textup{End}} \renewcommand{\a}{\mathfrak{a}} \renewcommand{\b}{\mathfrak{b}} \renewcommand{\c}{\mathfrak{c}} \newcommand{\cc}{\mathcal{C}} \newcommand{\dd}{\mathcal{D}} \newcommand{\m}{\mathfrak{m}} \renewcommand{\p}{\mathfrak{p}} \renewcommand{\P}{\mathfrak{P}} \newcommand{\q}{\mathfrak{q}} \renewcommand{\bar}{\overline} \renewcommand{\hat}{\widehat} \newcommand{\Hom}{\textup{Hom}} \newcommand{\eps}{\varepsilon} \newcommand{\Sym}{\mathrm{Sym}} \newcommand{\wt}{\mathrm{wt}} \title{Arborescences of Random Covering Graphs} \author{Muchen Ju, Junjie Ni, Kaixin Wang, Yihan Xiao} \author[Ju, Ni, Wang \and Xiao]{Muchen Ju\addressmark{1} , Junjie Ni\addressmark{2} , Kaixin Wang\addressmark{3} \and Yihan Xiao\addressmark{4}} \address{\addressmark{1}Department of Mathematics, Fudan University, 220 Handan Rd, Shanghai, China \\ \addressmark{2}Department of Mathematics, Nankai University, 94 Weijin Rd, Tianjin, China \\ \addressmark{3}Department of Mathematics, Duke University, 113 Edens Dr, Box 99043, Durham, NC \\ \addressmark{4}School of Science, Shanghai University, 99 Shangda Rd, Shanghai, China} \received{\today} \date{July 2024} \abstract{ A rooted arborescence of a directed graph is a spanning tree directed towards a particular vertex. A recent work of Chepuri et al. \cite{Sylvester} showed that the arborescences of a covering graph of a directed graph $G$ are closely related to the arborescences of $G$. In this paper, we study the weighted sum of arborescences of a random covering graph and give a formula for the expected value, resolving a conjecture of Chepuri et al. \cite{Sylvester}. } \begin{document} \maketitle \section{Introduction} For a weighted directed graph $\Gamma$, an \emph{arborescence rooted at a vertex} $v$ is a spanning tree whose edges are directed towards $v$. 
The weight of an arborescence is the product of the edge weights. We define $A_v(\Gamma)$ to be the weighted sum of all arborescences rooted at $v$. Covering graphs are central objects in topological graph theory, motivated by the notion of covering spaces in topology. A weighted digraph $\tilde \Gamma$ is a covering graph of $\Gamma$ if it covers $\Gamma$ in the topological sense (see section \ref{section3} for details). The fundamental work of Gross-Tucker \cite{Tucker} gave a characterization of graph coverings by permutation voltage assignments. In their study of $R$-systems, Galashin and Pylyavskyy \cite{Galashin} observed that the ratio $\frac{A_{\tilde v}(\tilde \Gamma)}{A_v(\Gamma)}$ of arborescences of a covering graph and its base graph is independent of the choice of the root $v$. More recently, Chepuri et al. \cite{Sylvester} further investigated the ratio and proved that it is always an integer-coefficient polynomial in the edge weights of the base graph $\Gamma$, conjecturally with positive coefficients. Following up on these works, we study the expected value of the ratio $\frac{A_{\tilde v}(\tilde \Gamma)}{A_v(\Gamma)}$ when $\tilde \Gamma$ is taken to be a random covering graph. Our main theorem is a formula for its expected value, affirming Conjecture 5.6 of \cite{Sylvester}. \begin{thm} Let $\Gamma = (V,E,\wt)$ be a weighted digraph. We make it a permutation-voltage graph as follows: the permutation voltages $\sigma_e \in S_k$ are chosen independently and uniformly at random. The random voltages make our covering graph $\tilde{\Gamma}$ a random graph. Then $$ \E[R(\tilde{\Gamma})] = \frac 1k \prod_{w\in V} \left( \sum_{\alpha \in E_s(w)} \wt(\alpha)\right)^{k-1},$$ where $E_s(w)$ is the set of edges in $\Gamma$ coming out of $w$. \end{thm} The paper is organized as follows. In section \ref{section2} we introduce the relevant background in graph theory, including the Matrix-Tree Theorem. In section \ref{section3}, we state the definition of covering graphs and review the theory of the arborescence ratio from \cite{Sylvester}. In section \ref{section4} we prove our main result, Theorem 1.1. \section{Background On Graph Theory}\label{section2} In this paper, we consider directed graphs, or digraphs for short. Unless otherwise stated, we consider digraphs weighted on the edges. \begin{df} Let $G = (V,E,\wt)$ be a weighted digraph. The \emph{degree matrix} $D$ of $G$ is the $|V|\times |V|$ diagonal matrix with diagonal entries $$D[v_i, v_i] = \sum_{e\in E_s(v_i)} \wt(e),$$ where $E_s(v_i)$ denotes the set of edges coming out of $v_i$. The \emph{adjacency matrix} $A$ of $G$ is defined as $$A[v_i,v_j] = \sum_{e=(v_i\to v_j)} \wt(e) $$ for all $v_i,v_j \in V$. The \emph{Laplacian matrix} $L$ of $G$ is defined as $D-A$. \end{df} \begin{eg} Let $G$ be a weighted digraph with vertex set $\{1,2,3\}$ and exactly one edge from $i$ to $j$ (not necessarily distinct) of weight $x_{ij}$, for all $i,j \in \{1,2,3\}$. Then $$ D = \begin{bmatrix} x_{11} + x_{12} + x_{13} & 0 & 0 \\ 0 & x_{21} + x_{22} + x_{23} & 0 \\ 0 & 0 & x_{31} + x_{32} + x_{33} \end{bmatrix}$$ $$ A = \begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix}$$ Hence $$ L = D - A = \begin{bmatrix} x_{12} + x_{13} & -x_{12} & -x_{13} \\ -x_{21} & x_{21} + x_{23} & -x_{23} \\ -x_{31} & -x_{32} & x_{31} + x_{32} \end{bmatrix}$$ \end{eg} Note that the Laplacian is always a singular matrix, since the sum of entries in each row is 0. \begin{df} An \emph{arborescence} $T$ of a weighted digraph $\Gamma$ rooted at $v\in V$ is a spanning tree directed towards $v$.
The \emph{weight} of an arborescence is the product of the weights of its edges: $$ \wt(T) = \prod_{e\in T} \wt(e)$$ We define $A_v(\Gamma)$ to be the sum of weights of all arborescences. \end{df} The Matrix-Tree Theorem \cite{Fomin} relates the minors of the Laplacian and the arborescences. More concretely, \begin{thm} [Matrix-Tree Theorem \cite{Fomin}] Let $G$ be a weighted digraph with vertex set $\{1,\cdots,n\}$, with only one edge $i\to j$ in $V(G)$ with weight $x_{ij}$ for each $i,j\in \{1,\cdots,n\}$. Then $$ A_i(G) = \det(L_i^i) $$ for all $i\in \{1,\cdots,n\}$. Here $L_i^i$ is the $L$ with row $i$ and column $i$ removed. \end{thm} \begin{eg} For $n=3$, $$ L_1^1 = \begin{bmatrix} x_{21} + x_{23} & -x_{23}\\ -x_{32} & x_{31} + x_{32} \end{bmatrix}$$ Then $$\det(L_1^1) = (x_{21}+x_{23})(x_{31}+x_{32}) - (-x_{23})(-x_{32}) = x_{21}x_{31} + x_{23}x_{31} + x_{21}x_{32} = A_1(G)$$ The last equality holds since there are 3 arborescences rooted at $1$: namely $2,3\to 1, 2\to 3\to 1$ and $3\to 2\to 1$. \end{eg} \section{Covering Graphs}\label{section3} \begin{df} A \emph{$k$-fold cover} of $\Gamma = (V, E)$ is a graph $\tilde{\Gamma} = (\tilde{V}, \tilde{E})$ that is a $k$-fold covering space of $G$ in the topological sense that preserves edge weight. That is, we require that a lifted edge in the covering graph to have the same weight as its corresponding base edge in the base graph. In order to use this definition, we need to find a way to formally topologize directed graphs in a way that encodes edge orientation. To avoid this, we instead give a more concrete alternative definition of a covering graph. The graph $\tilde{\Gamma} = (\tilde{V}, \tilde{E})$ is a $k$-fold covering graph of $\Gamma = (V, E)$ if there exists a projection map $\pi: \tilde{\Gamma} \rightarrow \Gamma$ such that \begin{enumerate}[nosep] \item $\pi$ maps vertices to vertices and edges to edges; \item $|\pi^{-1}(v)| = |\pi^{-1}(e)| = k$ for all $v \in V, e \in E$; \item For all $\tilde{e} \in \tilde{E}$, we have $\wt(\tilde{e}) = \wt(\pi(\tilde{e}))$; \item $\pi$ is a local homeomorphism. Equivalently, $|E_s(\tilde{v})| = |E_s(\pi(\tilde{v}))|$ and $|E_t(\tilde{v})| = |E_t(\pi(\tilde{v}))|$ for all $\tilde{v} \in \tilde{V}$. \end{enumerate} When we refer to $\tilde{\Gamma}$ as a covering graph of $\Gamma$, we fix a distinguished projection $\pi: \tilde{\Gamma} \rightarrow \Gamma$. \end{df} We do not require a covering graph to be connected. However, disconnected graphs contain no arborescences, so our main quantity of interest $A_{\tilde v}(\tilde{\Gamma})$ is always $0$ in the disconnected case. \begin{df} A \emph{weighted permutation-voltage graph} $\Gamma = (V, E, \wt, \nu)$ is a weighted directed multigraph with each edge $e$ also labeled by a permutation $\nu(e) = \sigma_e\in S_k$, the symmetric group on $k$ letters. This labeling is called the \emph{voltage} of the edge $e$. Note that the voltage of an edge $e$ is independent of the weight of $e$. \end{df} \begin{df} Given a permutation-voltage graph $\Gamma$, we may construct a $k$-fold covering graph $\tilde{\Gamma} = (\tilde{V}, \tilde{E}, \wt)$ of $\Gamma$. $\tilde{\Gamma}$ is a graph with vertex set $\tilde{V} = V \times \{1,2,\ldots,k\}$ and edge set \begin{align*} \tilde{E}:=\left\{\left[v \times x, w \times \sigma_e(x)\right] : x \in \{1,\ldots,k\}, e=(v,w)\in\Gamma\right\}. \end{align*} \end{df} \begin{thm}[\cite{Tucker}] Every covering graph of $\Gamma$ can be constructed from a permutation voltage graph $\Gamma$. 
\end{thm} \begin{eg}\label{ex:perm} Let $\Gamma$ be the permutation-voltage graph shown in Figure~\ref{fig:perm-voltage-graph}, where edges labeled $(w, \sigma)$ have edge weight $w$ and voltage $\sigma \in S_3$. The $3$-fold cover $\tilde{\Gamma}$ constructed from it, with vertices $(v, y)$ written as $v^y$ and edges labeled by their weights, is shown in Figure~\ref{fig:perm-derived-graph}. \input{covering_graph_by_Sylvester/Santa_hat} \input{covering_graph_by_Sylvester/Santa_hat_covering} \end{eg} For any $\tilde v \in V(\tilde \Gamma)$, if $v$ is the vertex in $V(\Gamma)$ that it covers, then $\frac{A_{\tilde v}(\tilde \Gamma)}{A_v(\Gamma)}$ is independent of $\tilde v$. We call this ratio $R(\tilde \Gamma)$. An explicit formula for this ratio was given in \cite{Sylvester}, as follows. \begin{thm}[\cite{Sylvester}]\label{bigboi} Let $\Gamma = (V, E, \wt)$ be an edge-weighted multigraph, and let $\tilde{\Gamma}$ be a $k$-fold cover of $\Gamma$ such that each lifted edge is assigned the same weight as its base edge. Denote by $\mathcal{L}(\Gamma)$ the voltage Laplacian of $\Gamma$. Then for any vertex $v$ of $\Gamma$ and any lift $\tilde{v}$ of $v$ in $\tilde \Gamma$, we have \begin{align*} \frac{A_{\tilde{v}}(\tilde{\Gamma})}{A_v(\Gamma)} = \frac{1}{k}\det [\L(\Gamma)]. \end{align*} \end{thm} Here the voltage Laplacian $\L(\Gamma)$ is defined as follows. \begin{df}\label{lowerright} Let $\{v_1,\cdots,v_n\}$ be the set of vertices of our graph $\Gamma$, and let $\tilde \Gamma$ be a $k$-fold cover of $\Gamma$, where vertex $v_i$ is lifted to $v_i^1,\cdots,v_i^k$. Define $n(k-1)\times n(k-1)$ matrices $D$ and $\mathcal{A}$ with basis vectors $v_1^2,\cdots,v_1^{k}, v_2^2,\cdots,v_2^{k}, \cdots, v_n^2, \cdots, v_n^{k}$, as follows. \[\mathcal{A}[v_i^t,v_j^r]=\sum_{e\in E(v_i^t,v_j^r)}\wt(e)-\sum_{e\in E(v_i^t,v_j^1)}\wt(e)\] \[D[v_i^t,v_i^t]=\sum_{e\in E_s(v_i^t)}\wt(e)\] for $1<t,r\leq k.$ Finally, we define \[ \mathcal{L}(\Gamma):=D-\mathcal{A}.\] \end{df} \begin{eg} We return to Figures \ref{fig:perm-voltage-graph} and \ref{fig:perm-derived-graph} as an example. The voltage Laplacian can be written as a linear transformation on the basis vectors above, with matrix $$ \mathcal{L} = \begin{bmatrix} b & 0 & 0 & -b & 0 & 0 \\ a & 2a+b & b & b & 0 & 0\\ 0 & 0 & c & 0 & -c & 0\\ 0 & 0 & 0 & c & 0 & -c \\ -d & 0 & 0 & -e & d+e & 0\\ 0 & -d & -e & 0 & 0 & d+e\\ \end{bmatrix}$$ \end{eg} \section{Proof of Theorem 1.1} \label{section4} Before proving Theorem 1.1, we revisit a classical proof of the Matrix-Tree Theorem. \subsection{Ideas for Proof of Matrix-Tree Theorem} Let $G$ be a weighted digraph with vertex set $\{1,\cdots,n\}$, with exactly one edge $i\to j$ of weight $x_{ij}$ for each pair $i,j$. Then $$ A_1(G) = \det(L_1^1). $$ \begin{eg} For $n=3$, $$ L_1^1 = \begin{bmatrix} x_{21} + x_{23} & -x_{23}\\ -x_{32} & x_{31} + x_{32} \end{bmatrix}$$ \end{eg} Consider a monomial of the form $m = x_{2j_2} x_{3j_3} \cdots x_{nj_n}$. This corresponds to a digraph $M$ with vertex set $\{1,\cdots,n\}$ and edges $t \to j_t$ for $t \in \{2,\cdots,n\}$. This digraph $M$ either has a cycle, or is an in-tree at $1$.
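This correspondence between monomials and functional digraphs can be checked symbolically for small $n$. The following SymPy sketch is our own code (not part of the original argument); it verifies for $n=3$ that $\det(L_1^1)$ equals the weighted sum of arborescences rooted at $1$, in agreement with the Matrix-Tree Theorem.
\begin{verbatim}
import itertools
import sympy as sp

n = 3
# one symbolic weight x_ij per edge i -> j, i != j (loops cancel in the Laplacian)
x = {(i, j): sp.symbols(f"x{i}{j}")
     for i in range(1, n + 1) for j in range(1, n + 1) if i != j}

# Laplacian L = D - A: outgoing weights on the diagonal, -x_ij off the diagonal
L = sp.zeros(n, n)
for (i, j), w in x.items():
    L[i - 1, i - 1] += w
    L[i - 1, j - 1] -= w

det_minor = L[1:, 1:].det()              # delete the row and column of vertex 1

# brute force: a choice of out-edge t -> j_t for every t >= 2 is an arborescence
# rooted at 1 exactly when following the out-edges from any t reaches 1
def reaches_one(parent):
    for t in parent:
        seen, cur = set(), t
        while cur != 1:
            if cur in seen:
                return False             # a cycle was found
            seen.add(cur)
            cur = parent[cur]
    return True

arb_sum = sp.Integer(0)
for choice in itertools.product(range(1, n + 1), repeat=n - 1):
    parent = dict(zip(range(2, n + 1), choice))
    if all(parent[t] != t for t in parent) and reaches_one(parent):
        arb_sum += sp.Mul(*(x[(t, parent[t])] for t in parent))

assert sp.expand(det_minor - arb_sum) == 0   # Matrix-Tree Theorem for n = 3
\end{verbatim}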
We can show that the coefficient of $m$ in $$ \det(L_1^1) = \sum_{\sigma \in \Sym(\{2,\cdots,n\})} \mathrm{sgn}(\sigma) \prod_{j=2}^n L_{j,\sigma(j)}$$ is $\begin{cases} 1 & \text{if the digraph } M \text{ is an in-tree at vertex } 1\\ 0 & \text{otherwise.} \end{cases}$ \subsection{Proof of Theorem 1.1} Call the vertices of the base graph $v_1,\cdots,v_n$ and the edges $\{e = (v_i \to v_j, \sigma_e)\}$. Recall that $$ R(\tilde{\Gamma}) = \frac 1k \det \mathcal{L} = \frac 1k \det (D-\mathcal{A})$$ where $D,\mathcal{A}$ are square matrices with rows/columns labelled $v_1^2,\cdots,v_1^{k}, v_2^2,\cdots,v_2^{k}, $ \\ $\cdots, v_n^2, \cdots, v_n^{k}$, in this order. Here $D$ is a deterministic diagonal matrix satisfying $$D[v_i^t, v_i^t] = \sum_{e = (v_i\to v_j)} \wt(e)$$ and $\mathcal{A}$ is a random matrix with $$ \mathcal{A}[v_i^t, v_j^r] = \sum_{e = (v_i\to v_j), \sigma_e(t) = r} \wt(e) - \sum_{e = (v_i\to v_j), \sigma_e(t) = 1} \wt(e) $$ The $\sigma_e \in S_k$ are chosen independently and uniformly at random. Thus, the conjecture is equivalent to showing that our random matrix $\mathcal{A}$ satisfies $$\E [\det(D-\mathcal{A})] = \det(D).$$ Define $S := \{v_1^2,\cdots,v_1^{k}, v_2^2,\cdots,v_2^{k}, \cdots, v_n^2, \cdots, v_n^{k}\}$. For each edge $e$, let $X_{e,a,b} = 1\{ \sigma_e(a)=b\}$; then we can write $$\mathcal{A}[v_i^t, v_j^r] = \sum_{e = (v_i\to v_j)} \wt(e) (X_{e,t,r} - X_{e,t,1}) $$ Here we group together $X_{e,t,r}$ and $X_{e,t,1}$, and we naturally define $Y_{e,i,j} := X_{e,i,j} - X_{e,i,1}$. \begin{eg} The $n=1,k=3$ example with two loops with weights $a,b$ looks like this: $$\begin{bmatrix} a+b - a(X_{a,2,2}-X_{a,2,1}) - b(X_{b,2,2}-X_{b,2,1}) & - a(X_{a,2,3}-X_{a,2,1}) - b(X_{b,2,3}-X_{b,2,1}) \\ - a(X_{a,3,2}-X_{a,3,1}) - b(X_{b,3,2}-X_{b,3,1}) & a+b - a(X_{a,3,3}-X_{a,3,1}) - b(X_{b,3,3}-X_{b,3,1}) \end{bmatrix}$$ $$=\begin{bmatrix} a+b - aY_{a,2,2}- bY_{b,2,2} & - aY_{a,2,3} - bY_{b,2,3} \\ - aY_{a,3,2} - bY_{b,3,2} & a+b - aY_{a,3,3} - bY_{b,3,3} \end{bmatrix}$$ \end{eg} We first choose a permutation to expand, and for each edge $e$, the \emph{terms} are either a fixed $\wt(e)$ on the diagonal or a $\wt(e)Y_{e,*,*} $. We define a \emph{monomial} to be a product of terms, and we classify monomials $$\left(\prod_{e' \in F} \wt(e')\right) \prod_{e\in G} (-\wt(e)) \left(\prod_{l=1}^{n_e}Y_{e,i_l,j_l} \right)$$ by $(n_e)_{e\in G}$; in other words, by the number of times we choose $Y_{e,*,*}$ for each edge $e\in G$. Note that, for each edge $e\in G$, both $i_1,\cdots,i_{n_e}$ and $j_1,\cdots,j_{n_e}$ are pairwise distinct and take values in $\{2,\cdots,k\}$. Our focus is on the \emph{contribution} to the expected value of the monomials in a class that contains at least one $Y_{e,*,*}$ term; it suffices to show that this contribution is zero. Note that monomials in the same class all contribute to the same coefficient (in this case, $\prod_{e' \in F} \wt(e') \prod_{e\in G} (-\wt(e))$), and all have the same expected value due to the following fact, applied to each edge separately: \begin{prp} Fix an edge $e$ and an integer $t \in \{1,\cdots,k-1\}$. If $\{j_1,\cdots,j_t\} \subset \{2,\cdots,k\}$ and $\pi \colon \{i_1,\cdots,i_t\} \to \{j_1,\cdots,j_t\}$ is a bijection, then $$ \E\left[\prod_{l=1}^t Y_{e,i_l, \pi(i_l)}\right]$$ depends only on $t$; furthermore when $t=1$ the expected value is 0. \end{prp} \begin{proof} Observe that if $\pi \colon \{i_1,\cdots,i_t\} \to [k]$ is any map, then $$ \E\left[\prod_{l=1}^t X_{e,i_l, \pi(i_l)}\right] = \begin{cases} 0 & \text{ if } \pi \text{ is not injective; } \\ \frac{(k-t)!}{k!} & \text{ if } \pi \text{ is injective. } \end{cases}$$ In our case, a map of this form fails to be injective if and only if it sends $s$ and $u$ to $1$ for some $s\ne u$. Hence, expanding the product and discarding the terms that contain at least two factors of the form $X_{e,i_l,1}$ (these correspond to non-injective maps and vanish in expectation), we obtain \begin{align*} & \E\left[\prod_{l=1}^t Y_{e,i_l, \pi(i_l)}\right] \\ &= \E\left[\prod_{l=1}^t (X_{e,i_l, \pi(i_l)} - X_{e,i_l,1}) \right] \\ &= \E\left[ \prod_{l=1}^t X_{e,i_l, \pi(i_l)}\right] - \sum_{z=1}^t \E\left[\prod_{l=1}^t X_{e,i_l, \pi^{(z)}(i_l)}\right] \\ &= (1-t)\frac{(k-t)!}{k!} \end{align*} Here for $z\in \{1,\cdots,t\}$, $\pi^{(z)} = (1 \pi(i_z)) \pi $ is a bijection $\{i_1,\cdots,i_t\} \to \{j_1,\cdots,j_t,1\} \setminus \{j_z\}$. \end{proof} For such a class, choose an edge $e_0$ that has at least one $Y_{e_0,*,*}$ term.\footnote{If there are multiple choices for $e_0$, label the edges in some pre-determined way so that $e_0$ is the ``first'' edge, to avoid an ill-defined pairing.} Case 1: The chosen monomial has exactly one term of the form $Y_{e_0,*,*} $. In this case, since the $\sigma_e$'s are independent, the expected value of this monomial factors through a term $\E[Y_{e_0,*,*} ]$, which is zero, so the contribution of this monomial to the expected value is 0. Case 2: The chosen monomial has $\ge 2$ terms of the form $Y_{e_0,*,*} $. In this case, $\E\left[\prod_{l=1}^{n_{e_0}}Y_{e_0,i_l,j_l} \right] $ may be nonzero, but it still depends only on $n_{e_0}$ by Proposition 4.3. To choose a permutation $\rho$ of $S = \{v_i^t \mid 1\le i\le n,\ 2\le t\le k\}$ (arising when expanding the determinant) that picks up this term, we must have $\rho(v_x^{i_l}) = v_y^{j_l}$ for all $l\in [n_{e_0}]$, where $x,y$ are such that $e_0$ is an edge $v_x\to v_y$. We now consider general permutations $\rho'$ of $S$ that take $\{ v_x^{i_1},\cdots,v_x^{i_{n_{e_0}}}\}$ to $\{v_y^{j_1}, \cdots, v_y^{j_{n_{e_0}}} \} $ and agree with $\rho$ on $S \setminus \{ v_x^{i_1},\cdots,v_x^{i_{n_{e_0}}}\}$. Since the unsigned part has the same expected value for every $\rho'$ in our sub-class, while the signs $\mathrm{sgn}(\rho')$ sum to zero over the $n_{e_0}!\geq 2$ permutations in the sub-class, each sub-class contributes 0 to the expected value. Furthermore, the sub-class is determined by the class of the monomial, by $e_0$, and by the sets $\{i_1,\cdots,i_{n_{e_0}}\}$ and $\{j_1,\cdots,j_{n_{e_0}}\}$, so belonging to the same sub-class is an equivalence relation. Thus this large class of monomials can be partitioned into smaller sub-classes, each contributing 0, and the conclusion follows. \begin{eg} $$\begin{bmatrix} a+b - aY_{a,2,2} - bY_{b,2,2} & - aY_{a,2,3} - bY_{b,2,3} \\ - aY_{a,3,2} - bY_{b,3,2} & a+b - aY_{a,3,3} - bY_{b,3,3} \end{bmatrix}$$ Two classes of monomials are $(-aY_{a,*,*})(-bY_{b,*,*})$ and $(-aY_{a,*,*})(-aY_{a,*,*})$. Note that $ (-aY_{a,*,*})(-bY_{b,*,*})$ is different from $ (-aY_{a,*,*})b$ since the number of $Y_{b,*,*}$ that appear is different. In the class $(-aY_{a,*,*})(-bY_{b,*,*})$, there are four monomials: $$(-aY_{a,2,2})(-bY_{b,3,3}), (-aY_{a,2,3})(-bY_{b,3,2}), (-aY_{a,3,2})(-bY_{b,2,3}), (-aY_{a,3,3})(-bY_{b,2,2}).$$ The contribution of each $(-aY_{a,i,j})(-bY_{b,i',j'})$ to the expected value is $0$ since $\E[Y_{a,i,j}]=0$. In the class $(-aY_{a,*,*})(-aY_{a,*,*})$, there are two monomials: $(-aY_{a,2,2})(-aY_{a,3,3}), (-aY_{a,2,3})(-aY_{a,3,2})$. Note that $ (- aY_{a,2,2})(- aY_{a,3,3}), (- aY_{a,2,3})(- aY_{a,3,2})$ are not the same monomial, but they belong to the same class.
They also belong to the same sub-class, labelled by $a, \{i_1,i_2\}= \{2,3\}, \{j_1,j_2\} = \{2,3\}$ Although $\E [ Y_{a,2,2}Y_{a,3,3}] \ne 0$, but this is counted once with the permutation $\rho = id_{\{2,3\}}$ and once with $\rho = (23)$, so total contribution of the class to the expected value is $$\E[ (- aY_{a,2,2})(- aY_{a,3,3}) - (- aY_{a,2,3})(- aY_{a,3,2})] = a^2(\E[ Y_{a,2,2} Y_{a,3,3}] - \E[Y_{a,2,3}Y_{a,3,2}])= 0$$ \end{eg} \section*{Acknowledgements} We are grateful to Sylvester W. Zhang for introducing the problem and providing valuable guidance on the research. We thank Tao Gui for helpful conversations. We thank PACE 2024 for providing an unforgettable opportunity for us to conduct mathematical researches. \printbibliography \end{document} \begin{figure}[htp]\label{fig:derived-graph} \centering \begin{tikzpicture} [scale=0.65,line width=1.2] \coordinate (1) at (0, 3); \coordinate (2) at (3/1.71, 0); \coordinate (3) at (-3/1.71, 0); \coordinate (4) at (0, 3 + 0.4); \draw (1) arc(270:360+270:0.4); \path [draw = red, postaction = {on each segment = {mid arrow = red}}] (1)--(2); \path [draw = blue, postaction = {on each segment = {mid arrow = red}}] (3)--(1); \path [draw = green, postaction = {on each segment = {mid arrow = red}}] (2) to [bend left] (3); \path [draw = purple, postaction = {on each segment = {mid arrow = red}}] (3) to [bend left] (2); \draw[fill] (1) circle [radius=0.1]; \node at (0.5, 3) {$1$}; \draw[fill] (2) circle [radius=0.1]; \node at (3/1.71 + 0.5, 0) {2}; \draw[fill] (3) circle [radius=0.1]; \node at (-3/1.71-0.5, 0) {3}; \node at (1 + 0.4, 3+ 0.5) {$(a, 321)$}; \node at (3/1.71/2 + 1.1, 3/2) {$(b, 231)$}; \node at (-3/1.71/2 - 1.2, 3/2) {$(d, 123)$}; \node at (0, 0.85 + 0.1) {$(e, 132)$}; \node at (0, -0.85 - 0.1) {$(c, 123)$}; \end{tikzpicture} \caption{A permutation-voltage graph $\Gamma$.} \label{fig:perm-voltage-graph} \end{figure} \begin{figure}[htp] \centering \begin{tikzpicture}[scale = 0.7, line width=1.2] \coordinate (1) at (0, 3+4); \coordinate (2) at (3/1.71, 4); \coordinate (3) at (-3/1.71, 4); \coordinate (4) at (-4, 3); \coordinate (5) at (3/1.71 - 4, 0); \coordinate (6) at (-3/1.71 - 4, 0); \coordinate (7) at (4, 3); \coordinate (8) at (3/1.71 + 4, 0); \coordinate (9) at (-3/1.71 + 4, 0); \draw[fill] (1) circle [radius=0.1] ; \draw[fill] (2) circle [radius=0.1]; \draw[fill] (3) circle [radius=0.1]; \draw[fill] (4) circle [radius=0.1]; \draw[fill] (5) circle [radius=0.1]; \draw[fill] (6) circle [radius=0.1]; \draw[fill] (7) circle [radius=0.1]; \draw[fill] (8) circle [radius=0.1]; \draw[fill] (9) circle [radius=0.1]; \node at (0, 7.5) {$1^1$}; \node at (3/1.71, 3.6) {$2^1$}; \node at (-3/1.71 -0.4, 4) {$3^1$}; \node at (-4 + 0.5, 3+0.3) {$1^2$}; \node at (3/1.71 - 4, -0.5) {$2^2$}; \node at (-3/1.71 - 4 -0.5, 0) {$3^2$}; \node at (4 + 0.5, 3) {$1^3$}; \node at (3/1.71 + 4 + 0.5, 0) {$2^3$}; \node at (-3/1.71 + 4, -0.5) {$3^3$}; \path[draw = black, postaction = {on each segment = {mid arrow = red}}] (1) to (7) (7) to [bend right] (1); \draw (4) arc(-45:360-45:0.4); \path[draw = green, postaction = {on each segment = {mid arrow = red}}] (5) -- (6) (8) to (9) (2) to [bend left] (3); \path[draw = red, postaction = {on each segment = {mid arrow = red}}] (1) -- (5) (4) -- (8) (7) -- (2); \path[draw = blue, postaction = {on each segment = {mid arrow = red}}] (3) -- (1) (6) -- (4) (9) -- (7); \path[draw = purple, postaction = {on each segment = {mid arrow = red}}] (3) to [bend left] (2) (6) to [bend right] (8) (9) -- (5); \node at (1.5, 5.1) {$a$}; 
\node at (2.8, 5.8) {$a$}; \node at (-4.2, 4) {$a$}; \node at (0.6, 1.9) {$b$}; \node at (-1, 2.9) {$b$}; \node at (2.7, 3.2) {$b$}; \node at (0, 4-0.3) {$c$}; \node at (-4, 0.3) {$c$};\node at (4, 0.3) {$c$}; \node at (-3/1.71/2 - 0.5, 3/2+4) {$d$}; \node at (-3/1.71/2 - 0.5-4, 3/2) {$d$}; \node at (-3/1.71/2 + 0.4+4, 3/2) {$d$}; \node at (0, 4+0.3) {$e$}; \node at (0, -1.3) {$e$}; \node at (0, -0.3) {$e$}; \draw[fill] (1) circle [radius=0.1] ; \draw[fill] (2) circle [radius=0.1]; \draw[fill] (3) circle [radius=0.1]; \draw[fill] (4) circle [radius=0.1]; \draw[fill] (5) circle [radius=0.1]; \draw[fill] (6) circle [radius=0.1]; \draw[fill] (7) circle [radius=0.1]; \draw[fill] (8) circle [radius=0.1]; \draw[fill] (9) circle [radius=0.1]; \end{tikzpicture} \caption{The derived covering graph $\tilde{\Gamma}$ of $\Gamma$ in Figure~\ref{fig:perm-voltage-graph}. Edge colors denote correspondence to the edges of $\Gamma$ via the quotient map.} \label{fig:perm-derived-graph} \end{figure}
2412.18623v2
http://arxiv.org/abs/2412.18623v2
Total restrained coalitions in graphs
\documentclass[12pt]{article} \usepackage{amssymb,amsfonts,amsmath,latexsym,epsf,tikz,url} \usepackage[usenames,dvipsnames]{pstricks} \usepackage{pstricks-add} \usepackage{epsfig} \usepackage{pst-grad} \usepackage{pst-plot} \usepackage[space]{grffile} \usepackage{etoolbox} \usepackage{float} \usepackage{soul} \usepackage{tikz} \usepackage[colorinlistoftodos]{todonotes} \usepackage{pgfplots} \usepackage{mathrsfs} \usepackage[colorlinks]{hyperref} \usetikzlibrary{arrows} \makeatletter \patchcmd\Gread@eps{\@inputcheck#1 }{\@inputcheck"#1"\relax}{}{} \makeatother \newtheorem{theorem}{Theorem}[section] \newtheorem{Conjecture}[theorem]{Conjecture} \newtheorem{Observation}[theorem]{Observation} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{observation}[theorem]{Observation} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \newcommand{\proof}{\noindent{\bf Proof.\ }} \newcommand{\qed}{\hfill $\square$\medskip} \tikzstyle{black_v}=[fill=black, draw=black, shape=circle] \tikzstyle{none}=[fill=none, draw=none, shape=circle] \tikzstyle{blue_v}=[fill=blue, draw=blue, shape=circle] \tikzstyle{red_v}=[fill=red, draw={rgb,255: red,246; green,10; blue,34}, shape=circle] \tikzstyle{green_v}=[fill={rgb,255: red,17; green,255; blue,0}, draw={rgb,255: red,45; green,255; blue,8}, shape=circle] \tikzstyle{BigBlue}=[fill=blue, draw=blue, shape=circle, scale=1.3] \tikzstyle{BigRed}=[fill=red, draw=red, shape=circle, scale=1.75] \tikzstyle{BBigBlue}=[fill=blue, draw=blue, shape=circle, scale=1.75] \tikzstyle{BigGreen}=[fill={rgb,255: red,49; green,215; blue,37}, draw={rgb,255: red,0; green,184; blue,0}, shape=circle] \tikzstyle{red_E}=[-, draw=red, fill=red, ultra thick] \tikzstyle{dashed_line}=[-, dashed] \tikzstyle{green_E}=[-, draw={rgb,255: red,58; green,228; blue,83}] \tikzstyle{magenta_E}=[-, draw={rgb,255: red,246; green,101; blue,246}] \tikzstyle{blue_E}=[-, draw={rgb,255: red,32; green,32; blue,253}, ultra thick] \tikzstyle{olive_E}=[-, draw={rgb,255: red,0; green,128; blue,128}] \tikzstyle{flecha}=[->] \tikzstyle{doble}=[-, double] \tikzstyle{dots}=[-, dotted, tikzit draw={rgb,255: red,238; green,87; blue,236}] \tikzstyle{gray_e}=[-, fill=none, draw={rgb,255: red,171; green,171; blue,171}] \tikzstyle{blue_e}=[-, draw={rgb,255: red,28; green,93; blue,244}] \textwidth 14.5cm \textheight 21.0cm \oddsidemargin 0.4cm \evensidemargin 0.4cm \voffset -1cm \begin{document} \def\nt{\noindent} \title{Total restrained coalitions in graphs} \bigskip \author{ M. Chellali $^{1}$, J.C. Valenzuela-Tripodoro $^{2}$, H. Golmohammadi $^{3,4}$, \\[.5em] I.I. Takhonov $^{3}$, N.A. Matrokhin $^{3}$ } \maketitle \begin{center} $^{1}$LAMDA-RO Laboratory, Department of Mathematics, University of Blida, Blida, Algeria $^{2}$Department of Mathematics, University of C\'{a}diz, Spain $^{3}$Novosibirsk State University, Pirogova str. 2, Novosibirsk, 630090, Russia\\ \medskip $^{4}$Sobolev Institute of Mathematics, Ak. Koptyug av. 4, Novosibirsk, 630090, Russia\\ \medskip {\tt m\[email protected] ~~ [email protected] ~~ [email protected] ~~ [email protected] ~~ [email protected]} \end{center} \begin{abstract} In this paper, we introduce the concept of total restrained coalition and total restrained coalition partition in graphs. 
A vertex set in a graph without isolated vertices is a total restrained dominating set (TRD-set) if it is dominating, induces a subgraph without isolated vertices, and the vertices not in the set also induce a subgraph without isolated vertices. Two vertex sets, which are not TRD-sets, form a total restrained coalition if their union is a TRD-set. A total restrained coalition partition is a partition where none of its elements are TRD-sets, but each forms a total restrained coalition with another element. The goal is to maximize the cardinality of such a partition, denoted $C_{tr}(G)$. We initiate the study of this concept by proving certain properties, extremal values, general bounds, and its relation to known structural parameters. Exact values for specific graph families are also provided. \end{abstract} \noindent{\bf Keywords:} Coalition; total restrained coalition; total restrained dominating set. \medskip \noindent{\bf AMS Subj.\ Class.:} 05C60. \section{Introduction} Throughout this article, we only consider finite and simple graphs without isolated vertices. For such a graph $G=(V,E)$ and a vertex $v\in V$, we denote by $N(v):= \{w\in V\mid vw\in E\}$ the open neighborhood of $v$ and by $N[v] := N(v)\cup\{v\}$ its closed neighborhood. The order of a graph $G$ refers to the cardinality $|V|$ of its set of vertices. Each vertex of $N(v)$ is called a neighbor of $v$, and the cardinality $|N(v)|$ is called the degree of $v$, denoted by $\deg(v)$. The minimum and maximum degrees of the vertices of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. An isolated vertex in $G$ is a vertex of degree 0. A graph is isolate-free if it contains no isolated vertex. A set $S \subseteq V$ is called a dominating set if every vertex of $V \setminus S$ is adjacent to at least one vertex in $S$. Further, if every vertex in $G$ is adjacent to some other vertex in $S$, then $S$ is a total dominating set, abbreviated TD-set, of $G$. The domination number of $G$, denoted by $\gamma(G)$, is the minimum cardinality of a dominating set of $G$, while the total domination number $\gamma_{t}(G)$ of $G$ is the minimum cardinality of a TD-set of $G$. Various aspects of domination are well studied in the literature, and a thorough study of domination appears in \cite{A11, A12}. Given a graph $G$, a set $S \subseteq V (G)$ is said to be a total restrained dominating set (abbreviated TRD-set) of $G$ if every vertex in $V\setminus S$ is adjacent to at least one vertex in $S$ and at least one other vertex in $V\setminus S$, and every vertex in $S$ is adjacent to at least one other vertex in $S$. The total restrained domination number of $G$, denoted by $\gamma_{tr}(G)$, is the cardinality of a minimum TRD-set of $G$. It is worth mentioning that every graph without isolated vertices has a TRD-set, since $S=V$ is such a set. The concept of total restrained domination was introduced by Telle and Proskurowski \cite{A14}, although implicitly, as a vertex partitioning problem. Total restrained domination in graphs is well studied in the literature. For more details we refer the reader to the recent book chapter by Hattingh and Joubert \cite{A6}. Let $\mathcal{D}$ be a partition of the vertex set $V(G)$ of $G$. If all sets of $\mathcal{D}$ are total dominating sets in $G$, then $\mathcal{D}$ is called a total domatic partition of $G$. The maximum number of sets of a total domatic partition of $G$ is the total domatic number $d_{t}(G)$ of $G$. In \cite{Z}, Zelinka studied this concept.
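The defining conditions of a TRD-set can be checked mechanically. The following short Python sketch (included purely as an illustration; the function name \texttt{is\_trd\_set} and the dictionary-of-sets adjacency representation are ours and do not come from any standard package) tests whether a given vertex subset satisfies the two conditions of the definition above.
\begin{verbatim}
def is_trd_set(adj, S):
    # S is a TRD-set of the graph given by the adjacency dictionary
    # adj (vertex -> set of neighbours) if:
    #   (a) every vertex outside S has a neighbour in S and one outside S,
    #   (b) every vertex of S has a neighbour in S.
    V = set(adj)
    S = set(S)
    comp = V - S
    for v in comp:
        if not (adj[v] & S) or not (adj[v] & comp):
            return False
    for v in S:
        if not (adj[v] & S):
            return False
    return True

# The cycle C_3 on vertices 0, 1, 2:
C3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_trd_set(C3, {0, 1, 2}))  # True: the whole vertex set is a TRD-set
print(is_trd_set(C3, {1, 2}))     # False: vertex 0 has no neighbour outside S
\end{verbatim}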
Analogously, a total restrained domatic partition is a partition of the vertex set of a graph into total restrained dominating sets. The maximum cardinality of a total restrained domatic partition is called the total restrained domatic number, denoted by $d^r_{t}(G)$. The total restrained domatic number of a graph was introduced by Zelinka in \cite{A15}.\newline Fairly recently, the concept of coalition in graphs has triggered a great deal of interest due to its definition, which is based on dominating sets. A coalition in a graph $G$ is composed of two disjoint sets of vertices $X$ and $Y$ of $G$, neither of which is a dominating set but whose union $X \cup Y$ is a dominating set of $G$. A coalition partition is a vertex partition $\pi=\{V_1,V_2,\dots,V_k\}$ of $V$ such that for every $i\in\{1,2,\dots,k\}$ the set $V_i$ is either a dominating set with $|V_i|=1$, or there exists another set $V_j$ such that $V_i$ and $V_j$ form a coalition. The maximum cardinality of a coalition partition is called the coalition number of the graph and is denoted by $C(G)$. Coalitions in graphs were introduced and first studied by Haynes et al. in \cite{A7}, and have been studied further in \cite{A4,A8,A9,A10}. Several types of domination coalitions have been studied by imposing additional conditions on the domination coalition, see \cite{A1,A2,A3,A5,A13}. The aim of this paper is to introduce and study the concept of total restrained coalition in graphs. We begin with the following definitions. \begin{definition}[Total restrained coalition] Two disjoint sets $X,Y\subseteq V(G)$ form a total restrained coalition in a graph $G$ if they are not TRD-sets but their union is a TRD-set in $G$. \end{definition} \begin{definition}[Total restrained coalition partition]\label{2.2} A total restrained coalition partition, abbreviated as a trc-partition, of a graph $G$ is a partition $\Phi=\{V_1,V_2,\dots,V_k\}$ of the vertex set $V$ such that any $V_i\in \Phi, 1\leq i \leq k,$ is not a TRD-set but forms a total restrained coalition with another set $V_j \in \Phi$. The maximum cardinality of a total restrained coalition partition is called the total restrained coalition number of $G$ and denoted by $C_{tr}(G)$. A trc-partition of $G$ of cardinality $C_{tr}(G)$ is called a $C_{tr}(G)$-partition.\medskip \end{definition} Since every TRD-set in \(G\) is a TD-set, a natural question that arises is whether the two problems are equivalent. Consider the cycle graph \(C_3\) with \(V(C_3) = \{x, y, z\}\). The trc-partitions of \(C_3\) with two elements are \[ \Phi_1 = \{\{x\}, \{y, z\}\}, \quad \Phi_2 = \{\{y\}, \{x, z\}\}, \quad \Phi_3 = \{\{z\}, \{x, y\}\}. \] First, note that none of the trc-partitions \(\Phi_1\), \(\Phi_2\), or \(\Phi_3\) qualifies as a tc-partition (that is, a total coalition partition), because each contains a two-vertex set that is a total dominating set. Furthermore, it is straightforward to see that \(\{\{x\}, \{y\}, \{z\}\}\) is a tc-partition but not a trc-partition of \(C_3\), leading to the inequality \[ 2 = C_{tr}(C_3) < C_t(C_3) = 3. \] Therefore, the two problems are not equivalent, and it is worth studying the total restrained coalition partition problem. \medskip The main contributions of this work are as follows. In Section 2, we first discuss the possibility of the existence of trc-partitions in graphs and derive some bounds. In Section 3, we determine the total restrained coalition number for some classes of graphs. In Section 4, we are interested in graphs with a large total restrained coalition number.
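Before turning to the general results, we note that for small graphs the parameter $C_{tr}(G)$ can be computed by exhaustive search over all vertex partitions. The following self-contained Python sketch (an illustration only, exponential in the order of the graph; the names \texttt{set\_partitions} and \texttt{trc\_number} are ours) confirms the value $C_{tr}(C_3)=2$ discussed above.
\begin{verbatim}
def is_trd_set(adj, S):
    V, S = set(adj), set(S)
    comp = V - S
    return (all(adj[v] & S and adj[v] & comp for v in comp)
            and all(adj[v] & S for v in S))

def set_partitions(elems):
    # All partitions of the list elems into non-empty blocks (sets).
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] | {first}] + smaller[i + 1:]
        yield smaller + [{first}]

def trc_number(adj):
    # Brute-force total restrained coalition number (0 if none exists).
    best = 0
    for part in set_partitions(list(adj)):
        if any(is_trd_set(adj, B) for B in part):
            continue  # no block of a trc-partition may itself be a TRD-set
        if all(any(A is not B and is_trd_set(adj, A | B) for B in part)
               for A in part):
            best = max(best, len(part))
    return best

C3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(trc_number(C3))  # 2, in agreement with C_tr(C_3) = 2
\end{verbatim}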
\section{Properties and bounds} In this section, we present basic properties and bounds on the total restrained coalition number. We first recall the following simple observation that we need for what follows. \begin{observation}{\rm\cite{A6}} Every graph $G$ without an isolated vertex has a TRD-set. \end{observation} Now we state the following observation about the total restrained coalition number of a graph $G$. \begin{observation} If a graph $G$ contains an isolated vertex, then $C_{tr}(G)=0$. \end{observation} We are now in a position to prove the following result. \begin{theorem} ~\label{1} Let $G$ be an isolate-free graph. Then $G$ has at least one trc-partition and $C_{tr}(G)\ge 2d_t^r(G).$ \end{theorem} \begin{proof} Consider a graph $G$ with a total restrained domatic partition $\mathcal{D}=\{S_1, \ldots, S_k\}$, with $k=d_t^r(G)$. In what follows, we construct a trc-partition $\Phi$ of $G$. For any integer $1\leq i\leq k-1$, we may assume that $S_i$ is a minimal TRD-set of $G$: if it is not, then there exists a minimal TRD-set $S_i'\subseteq S_i$; in this case, we replace $S_i$ with $S_i'$ and move all vertices of $S_i\setminus S_i'$ into $S_k$. In order to create a trc-partition $\Phi$ of $G$, we divide each minimal TRD-set $S_i$ with $i<k$ into two non-empty sets $S_{i,1}$ and $S_{i,2}$ and add them to $\Phi$. Note that neither $S_{i,1}$ nor $S_{i,2}$ is a TRD-set, but their union is a TRD-set. Next, we consider the set $S_k$. If $S_k$ is a minimal TRD-set, we split it into two non-empty sets $S_{k,1}$ and $S_{k,2}$ and attach them to $\Phi$. So, we obtain a trc-partition $\Phi$ of cardinality $2d_t^r.$ If $S_k$ is not a minimal TRD-set, there exists a set $S_k'\subseteq S_k$ that is minimal and total restrained dominating. We split $S_k'$ into two non-empty sets $S_{k,1}'$ and $S_{k,2}'$ and attach them to $\Phi$. Let $S_k''=S_k\backslash S_k'$. It is worth emphasizing that $S_k''$ cannot be a TRD-set, as this would imply that $d^r_{t}(G)>k$, contradicting our assumption. If $S_k''$ forms a total restrained coalition with any set in $\Phi$, we attach it to $\Phi$ and finish the construction, obtaining a total restrained coalition partition $\Phi$ of cardinality $2k+1\ge 2d_t^r$. Otherwise, by replacing $S_{k,2}'$ with $S_{k,2}'\cup S_k''$ in $\Phi$ we obtain a trc-partition with cardinality $2k=2d_t^r$. $\Box$ \end{proof}\medskip It is clear that for all graphs $G$ without isolated vertices, $d^r_{t}(G)\geq 1$. By Theorem \ref{1}, we infer the following result. \begin{corollary}~\label{3} If $G$ is an isolate-free graph of order $n$, then $2\leq C_{tr}(G)\leq n$. \end{corollary} Notice that if an isolate-free graph $G$ satisfies $C_{tr}(G)=2$, then we must have $d^r_{t}(G)=1$. However, the converse is not true, as can be seen from the cycle $C_5$, where $d^r_{t}(C_5)=1$ and $C_{tr}(C_5)=3$.\medskip We next recall the following result due to Zelinka \cite{A15}. \begin{theorem} {\rm\cite{A15}}\label{A} Let $G$ be a graph without isolated vertices. Then $d^r_{t}(G)=d_{t}(G)$. \end{theorem} Plugging the result of Theorem \ref{A} into the bound of Theorem \ref{1} immediately yields the following result. \begin{corollary}\label{B} Let $G$ be a graph without isolated vertices. Then $C_{tr}(G)\geq 2d_{t}(G)$. \end{corollary} In \cite{Z}, Zelinka showed that if $G$ is an isolate-free graph of order $n$ and minimum degree $\delta$, then $d_{t}(G)\geq\left\lfloor \frac{n}{n-\delta+1}\right\rfloor$.
As a consequence of this result and Corollary \ref{B}, we have the following result. \begin{corollary} \label{delta} For any isolate-free graph $G$ of order $n$ with minimum degree $\delta,$ $C_{tr}(G)\geq2\left\lfloor \frac{n}{n-\delta+1}\right\rfloor$. \end{corollary} Restricted to connected graphs $G$ with minimum degree at least two and girth seven or more, we provide a lower bound for $C_{tr}(G)$ in terms of the maximum degree. \begin{theorem} \label{girth 7} Let $G$ be a connected graph with minimum degree $\delta (G)\geq2,$ maximum degree $\Delta(G)$ and girth at least $7.$ Then $C_{tr}(G)\geq\Delta(G)+1.$ \end{theorem} \textbf{Proof. }Let $\delta(G)=\delta$ and $\Delta(G)=\Delta.$ Let $w$ be a vertex with maximum degree, and let $w_{1},w_{2},...,w_{\Delta}$ denote the neighbors of $w$. Clearly, $N(w)$ is independent, for otherwise $G$ has a triangle, contradicting the assumption on the girth. The same girth argument, together with the fact that $\delta\geq2$, also implies that $V(G)-N[w]$ is non-empty. Let $A=V(G)-N(w).$ Clearly, since $\delta\geq2,$ each $w_{i}\in N(w)$ has at least one neighbor in $A$ other than $w.$ For any $w_{i}\in N(w),$ let $w_{i}^{\prime}$ denote a neighbor of $w_{i}$ in $A-\{w\}.$ Recall that $w$ has no neighbor in $A$ and thus $ww_{i}^{\prime}\notin E(G).$ We make some useful remarks for what follows. For any two distinct vertices $w_{i},w_{j}\in N(w),$ we have: (i) $w_{i}^{\prime}\neq w_{j}^{\prime}$, for otherwise vertices $w,w_{i},w_{j}$ and $w_{i}^{\prime}$ induce a cycle $C_{4},$ contradicting the fact that $G$ has girth at least 7. (ii) $w_{i}^{\prime}w_{j}^{\prime}\notin E(G),$ for otherwise vertices $w,w_{i},w_{j},w_{i}^{\prime}$ and $w_{j}^{\prime}$ induce a cycle $C_{5},$ again a contradiction. (iii) No vertex $x$ in $A$ is adjacent to both $w_{i}^{\prime}$ and $w_{j}^{\prime},$ for otherwise $w,w_{i},w_{j},w_{i}^{\prime},w_{j}^{\prime}$ and $x$ induce a cycle $C_{6},$ a contradiction. Accordingly, since $\delta\geq2$, each vertex of $A-\{w\}$ still has a neighbor in $A.$ In particular, $A-\{w,w_{1}^{\prime },w_{2}^{\prime},...,w_{\Delta}^{\prime}\}$ is non-empty and induces an isolate-free subgraph. Now, consider the partition $\Phi=\{V_{1},V_{2},\ldots,V_{\Delta},V_{\Delta+1}\},$ where for any $i\in\{1,...,\Delta\},$ each $V_{i}=\{w_{i},w_{i}^{\prime}\}$ and $V_{\Delta+1}=A-\{w'_1,w'_2,...,w'_\Delta\}$. Clearly, since $w\in V_{\Delta+1}$ and $w$ has no neighbor in $V_{\Delta+1},$ the set $V_{\Delta+1}$ is not a TRD-set; moreover, for any $i\leq\Delta$ and $j\neq i,$ the vertex $w_{j}$ has no neighbor in $V_{i}$ (an edge $w_{j}w_{i}$ or $w_{j}w_{i}^{\prime}$ would create a triangle or a $C_{4}$ through $w$), so no set of $\Phi$ is a TRD-set. Moreover, it is not hard to notice that $V_{\Delta+1}$ forms a total restrained coalition with any other set of $\Phi,$ leading to $C_{tr}(G)\geq\left\vert \Phi\right\vert =\Delta+1.$ $\Box$\newline The bound established in Theorem \ref{girth 7} is tight, as demonstrated, for example, by any cycle $C_n$ where $n \not\equiv 0 \pmod{4}$ and $n \geq 7$ (see Theorem~\ref{cycle}). \medskip We next present a technical lemma, which gives us the number of total restrained coalitions involving any set in a $C_{tr}(G)$-partition of $G$. \begin{lemma}\label{4} If $G$ is an isolate-free graph, then for any $C_{tr}(G)$-partition $\Phi$ and for any $X\in \Phi$, the number of total restrained coalitions formed by $X$ is at most $\Delta(G)$. \end{lemma} \begin{proof} Since $X\in\Phi$, $X$ is not a TRD-set. We now distinguish two cases. \nt {\bf Case 1.} There is a vertex $v \in V(G)$ such that $N(v) \cap X=\emptyset$.\newline We first assume that $v\in X$. If a set $A\in \Phi$ forms a total restrained coalition with $X$, then $A\cup X$ is a TRD-set of $G$. So $v$ must have at least one neighbor in $A$.
Thus, there are at most $|N(v)|-1\leq \Delta(G)-1$ other sets that can be in a total restrained coalition with $X$, and consequently, $X$ is in at most $\Delta(G)$ total restrained coalitions. Next, let $v \not\in X$ and $X\cap N(v)=\emptyset$. Then, each set of $\Phi$ which is in a total restrained coalition with $X$ must contain at least one of the members of $N[v]$. We claim that there is no set $Y\in \Phi$ that forms a total restrained coalition with $X$ and $Y\cap N[v]=\{v\}$. Suppose to the contrary that there is a set $Y\in \Phi$ that forms a total restrained coalition with $X$ and $Y\cap N[v]=\{v\}$. Thus $X\cup Y$ is a TRD-set. This implies that $v$ has a neighbor in $X\cup Y,$ contradicting the fact that $X\cap N(v)=\emptyset$ and $Y\cap N(v)=\emptyset$. This proves the claim. Consequently, among all sets of $\Phi$ forming a total restrained coalition with $X$, at most one contains $v$, and such a set must also have a non-empty intersection with $N(v)$. This implies that the largest possible number of sets in $\Phi$ forming a total restrained coalition with $X$ is no more than $|N(v)|$. Therefore, the total number of sets of $\Phi$ forming a total restrained coalition with $X$ is at most $\Delta(G)$. \nt {\bf Case 2.} There is a vertex $v \in V-X$ such that $N(v) \cap (V-X)=\emptyset$. In this case, we prove that there is exactly one set in $\Phi$ that forms a total restrained coalition with $X$. Assume that $W\in\Phi\setminus \{X\}$ is such that $\{X,W\}$ is a tr-coalition. If $v\not\in W$ then $v\not\in X\cup W$ and therefore $N(v)\cap \left(V\setminus \left(X\cup W\right)\right)\neq \emptyset$ because $X\cup W$ is a TRD-set in $G$. The latter is a contradiction because $N(v)\subseteq X.$ Hence, it must be that $v\in W$, and thus $W$ is the only set that forms a total restrained coalition with $X$. It follows from the two cases above that $X$ belongs to, at most, $\Delta(G)$ total restrained coalitions. $\Box$ \end{proof} \medskip Now we prove the following lemmas for graphs with leaves. \begin{lemma}~\label{5} Let $G$ be a graph with $\delta(G)=1$, and let $x$ be a leaf of $G$ and $y$ be the support vertex of $x$. Let $\Phi$ be a $C_{tr}(G)$-partition, and let $X, Y\in \Phi$ such that $x\in X$ and $y\in Y$ (possibly $X=Y$). For any two sets $A,B\in \Phi$ that form a total restrained coalition, we have $A\in \{X, Y\}$ or $B\in\{X,Y\}$. \end{lemma} \begin{proof} Since $A$ and $B$ form a total restrained coalition, $A\cup B$ is a TRD-set of $G$. If $A\not\in \{X,Y\}$ and $B\not\in \{X,Y\}$, then the vertex $x$ has no neighbor in $A\cup B$, which is a contradiction. Therefore, $A\in \{X, Y\}$ or $B\in\{X,Y\}$. $\Box$ \medskip \end{proof} \begin{remark}~\label{5b} By the definition of a total restrained dominating set $S$, we have $\deg(v)\ge 2$ for every vertex $v\not\in S.$ Consequently, any leaf of $G$ must belong to $S.$ \end{remark} We next establish an upper bound on the total restrained coalition number in terms of the maximum degree of $G$. \begin{theorem}\label{6} Let $G$ be an isolate-free graph with $\delta(G)=1$. Then, $C_{tr}(G) \leq \Delta(G)+1$. \end{theorem} \begin{proof} Let $x$ be a vertex of $G$ with $\deg(x)=1$ and let $\Phi=\{V_1,V_2,\ldots,V_k\}$ be a $C_{tr}(G)$-partition. Without loss of generality, we can assume that $x\in V_1.$ If two sets $V_i,V_j\in \Phi$ form a total restrained coalition, then, by Remark \ref{5b}, we have that $x\in V_i\cup V_j$. Consequently, $V_1\in \{V_i, V_j\}$.
By Lemma~\ref{4}, $V_1$ is in total restrained coalition with at most $\Delta(G)$ sets of $\Phi$. Hence, $C_{tr}(G)\leq \Delta(G)+1$. $\Box$ \end{proof} \medskip Let us point out that the bound given by Theorem~\ref{6} is sharp. To see this, it is sufficient to consider the graph depicted in Figure~\ref{fig1}, where $V_1$ forms a tr-coalition with any of the remaining sets $V_2,V_3,$ or $V_4.$ \begin{figure}[t!] \begin{center} \begin{tikzpicture}[scale=0.6] \node [style={black_v},label=above left:{\large $v_1$}] (0) at (-7, 4) {}; \node [style={black_v},label=above left:{\large $v_2$}] (1) at (-7, -1) {}; \node [style={black_v},label=below left:{\large $v_3$}] (2) at (-4, 2) {}; \node [style={black_v},label=below left:{\large $v_4$}] (3) at (-4, -3) {}; \node [style=black_v,label=above left:{\large $v_5$}] (4) at (-1, 4) {}; \node [style=black_v,label=above left:{\large $v_6$}] (5) at (-1, -1) {}; \node [style={black_v},label=above right:{\large $v_7$}] (6) at (2, 1.5) {}; \node [style={black_v},label=above right:{\large $v_8$}] (7) at (5.5, 1.5) {}; \node [style=none,label=above:{\large $\Phi=\{ V_1=\{v_1,v_2\},$}] (8) at (-9, -6.5) {}; \node [style=none,label=above:{\large $V_2=\{v_3,v_4\},$}] (9) at (-3, -6.5) {}; \node [style=none,label=above:{\large $V_3=\{v_5,v_6\},$}] (10) at (2, -6.5) {}; \node [style=none,label=above:{\large $V_4=\{v_7,v_8\}\}$}] (11) at (7, -6.5) {}; \draw (0) to (4); \draw (4) to (2); \draw (2) to (0); \draw (0) to (1); \draw (1) to (3); \draw (3) to (2); \draw (3) to (5); \draw (5) to (1); \draw (4) to (6); \draw (6) to (5); \draw (6) to (7); \end{tikzpicture} \end{center} \caption{A graph attaining the bound given by Theorem \ref{6}.}\label{fig1} \end{figure} \medskip \begin{theorem} \label{delta2} Let $G$ be an isolate-free graph with $\delta(G)=2$. Then, $C_{tr}(G) \leq 2 \Delta(G)$. \end{theorem} \begin{proof} Let $x$ be a vertex of $G$ with $\deg(x)=2$, and suppose that $N(x)=\{y,z\}$. Let $\Phi$ be a $C_{tr}(G)$-partition. We now distinguish the following cases. \begin{itemize} \item{\bf Case 1.} There is a set $U\in \Phi$ such that $\{x,y,z\}\subseteq U$. Then, each set of $\Phi\backslash \{U\}$ must form a total restrained coalition with $U$. Otherwise, we would have two distinct sets $A, B\in \Phi\backslash \{U\}$ forming a total restrained coalition. Thus, $x$ would need at least one neighbor in $A \cup B$, contradicting the fact that $N(x)=\{y,z\}\subseteq U$. Therefore, by Lemma \ref{4}, $U$ is in total restrained coalitions with at most $\Delta(G)$ sets. Consequently, $C_{tr}(G)\leq \Delta(G)+1\leq 2\Delta(G)$, since $\Delta(G)\geq\delta(G)=2$. \item{\bf Case 2.} Assume that $X, A\in \Phi$ are such that $x\in X$ and $\{y,z\}\subseteq A$. Since $N(x)\subseteq A$, there is no set $B\neq A$ that forms a total restrained coalition with~$X$. So $X$ forms a total restrained coalition only with $A$. Moreover, $A$ does not form a total restrained coalition with any set in $\Phi$ other than $X$. Otherwise, we would have a set $C\in \Phi\setminus\{X,A\}$ forming a total restrained coalition with $A$. Thus, $x$ would need at least one neighbor outside $A \cup C$, contradicting the fact that $N(x)=\{y,z\}\subseteq A$. Hence, $C_{tr}(G)\leq 2$. \item{\bf Case 3.} Assume that $Y, B\in \Phi$ are such that $y\in Y$ and $\{x,z\}\subseteq B$. Then, each set of $\Phi\backslash\{Y,B\}$ must form a total restrained coalition with $Y$ or $B$. Otherwise, we would have two distinct sets $C, D\in \Phi\backslash\{Y,B\}$ forming a total restrained coalition. Thus, $x$ would need at least one neighbor in $C \cup D$, contradicting the fact that $N(x)=\{y,z\}\subseteq Y\cup B$.
If $Y$ and $B$ form a total restrained coalition, by Lemma \ref{4}, we have $C_{tr}(G)\leq \Delta(G)-1+\Delta(G)-1+1+1=2\Delta(G)$. Next, suppose that $Y$ and $B$ do not form a total restrained coalition. We consider two subcases. \item {\bf Subcase 3.1.} There exists a vertex $w\in V(G)$ having no neighbor in $Y\cup B$. Since any set of $\Phi\backslash\{Y, B\}$ forms a total restrained coalition with $Y$ or $B$, in order to totally restrained dominate the vertex $w$, any set of $\Phi\backslash\{Y, B\}$ must contain at least one of the members of $N(w)$. So, by Lemma \ref{4}, $C_{tr}(G)\leq |N(w)|+2\leq \Delta(G)+2\leq2\Delta(G)$, as $\Delta(G)\geq 2$. \item {\bf Subcase 3.2.} There exists a vertex $w \in (V-(Y \cup B))$ such that $N(w) \cap (V-(Y \cup B))=\emptyset$. It follows that $N(w)\subseteq (Y\cup B)$. Then all TRD-sets must contain the vertex $w$, as each set of $\Phi\backslash\{Y,B\}$ forms a total restrained coalition with $Y$ or $B$. This yields that $w$ is totally restrained dominated. Since $x$ and $y$ are adjacent, we deduce that there are at most $|N(y)|-1\leq \Delta(G)-1$ sets containing a member of $N(y)$. Thus, the set $Y$ is in at most $|N(y)|-1\leq \Delta(G)-1$ total restrained coalitions. Analogously, we observe that the set $B$ is in at most $|N(z)|-1\leq \Delta(G)-1$ total restrained coalitions. Hence, $C_{tr}(G)\leq \Delta(G)-1 + \Delta(G)-1+2=2\Delta(G)$. \item {\bf Case 4.} There are two distinct sets $Z, C\in \Phi$ such that $z\in Z$ and $\{x,y\}\subseteq C$. The proof is similar to the proof of {\bf Case 3}. \item {\bf Case 5.} Assume that $X, Y, Z\in \Phi$ are such that $x\in X, y\in Y$ and $z\in Z$. We claim the following facts: \begin{itemize} \item[(5.i)] If $X,T \in \Phi$ form a tr-coalition then $T\in\{Y,Z\}$. This is because the neighbors of $x$ belong to $Y\cup Z.$ \item[(5.ii)] $Y,Z$ cannot form a tr-coalition because otherwise $x\not\in Y\cup Z$ would not be totally restrained dominated, since $N(x)=\{y,z\}\subseteq Y\cup Z$. \item[(5.iii)] If $Y,T\in \Phi\setminus \{X,Z\}$ form a tr-coalition then $N(z)\cap \left(Y\cup T\right)\neq\emptyset.$ Otherwise, the vertex $z$, which does not belong to $Y\cup T,$ would not be totally restrained dominated by $Y\cup T$. \item[(5.iv)] If $Z,T\in \Phi\setminus \{X,Y\}$ form a tr-coalition then $N(y)\cap \left(Z\cup T\right)\neq\emptyset.$ Otherwise, the vertex $y$, which does not belong to $Z\cup T,$ would not be totally restrained dominated by $Z\cup T$. \end{itemize} Now, let us distinguish three different cases: \begin{itemize} \item If $N(z)\cap Z \neq \emptyset$ or $N(z) \cap Y\neq \emptyset$ then by considering (5.iii) we know that $Y$ can form a tr-coalition with, at most, $|N(z)|-2$ different sets $T$. Since $x$ and $y$ are adjacent, we deduce that there are at most $|N(y)|-1\leq \Delta(G)-1$ sets which contain a member of $N(y)$. Thus, the set $Z$ is in at most $|N(y)|-1\leq \Delta(G)-1$ total restrained coalitions. Therefore, $$ C_{tr}(G)\le |N(z)|-2+|N(y)|-1+3\le 2\Delta(G).$$ \item If $N(y)\cap Z \neq \emptyset$ or $N(y) \cap Y\neq \emptyset$ then by considering (5.iv) we know that $Z$ can form a tr-coalition with, at most, $|N(y)|-2$ different sets $T$. Since $x$ and $z$ are adjacent, we deduce that there are at most $|N(z)|-1\leq \Delta(G)-1$ sets which contain a member of $N(z)$. Thus, the set $Y$ is in at most $|N(z)|-1\leq \Delta(G)-1$ total restrained coalitions.
Therefore, $$ C_{tr}(G)\le |N(z)|-1+|N(y)|-2+3\le 2\Delta(G).$$ \item Otherwise, assume that $N(z)\cap Z=N(z)\cap Y=N(y)\cap Z=N(y)\cap Y = \emptyset.$ If $T$ forms a tr-coalition with $Y$ then $N(z)\cap T\neq\emptyset$ because $z\not\in Y\cup T$ and $Y\cup T$ is a TRD-set. Besides, $N(y)\cap T\neq\emptyset$ because $y\in Y\cup T$, $N(y)\cap Y =\emptyset$ and $Y\cup T$ is a TRD-set. Consequently, any set $T$ that forms a tr-coalition with $Y$ (analogously, with $Z$) must contain both a neighbor of $y$ and a neighbor of $z$. Therefore, $$ C_{tr}(G)\le |N(z)|-1+3\le \Delta(G)+2\le 2\Delta(G).$$ \end{itemize} Based on the analysis of all the above cases, we infer that $C_{tr}(G)\le 2\Delta(G).$ $\Box$ \end{itemize} \end{proof} The bound described in Theorem~\ref{delta2} is sharp, as illustrated by any cycle \( C_n \) with \( n \geq 7 \) and \( n \equiv 0 \pmod{4} \) (refer to Theorem~\ref{cycle} for further details). \section{Total restrained coalition number of specific graphs} In this section, we deal with the problem of obtaining the exact value of the total restrained coalition number. We first recall the following results. \begin{proposition}{\rm\cite{A6}} Let $n \geq 4$ be a positive integer. Then $\gamma_{tr}(K_n)=2$. \end{proposition} \begin{proposition} {\rm\cite{A6}}\label{7} Let $n_1$ and $n_2$ be positive integers such that $\min\{n_1, n_2\} \geq 2$. Then $\gamma_{tr}(K_{n_1, n_2})=2$. \end{proposition} \begin{proposition} {\rm\cite{A6}} \label{8} Let $n$ be a positive integer. Then $\gamma_{tr}(K_{1,{n-1}})=n$. \end{proposition} The following proposition gives us the total restrained coalition number of the complete graph. \begin{proposition} \label{9} Let $n \geq 4$ be a positive integer. Then $C_{tr}(K_n)=n$. \end{proposition} \begin{proof} Let $G$ be a complete graph of order $n$ with vertex set $V=\{v_1, v_2,\ldots, v_n\}$. Since $\gamma_{tr}(G)=2$, any two singleton sets $\{v_i\}$ and $\{v_j\}$ with $i\neq j$ form a total restrained coalition. It follows that $\Phi=\left\{\{v_1\}, \{v_2\}, \ldots, \{v_n\}\right\}$ is a trc-partition, and hence $C_{tr}(K_n)=n$. $\Box$ \end{proof}\medskip By Proposition \ref{7}, we get the following result. \begin{observation} \label{10} Let $G=K_{p,q}$ be a complete bipartite graph such that $q\geq p\geq 2$. Then $C_{tr}(K_{p,q})=p+q=n$. \end{observation} Proposition \ref{8} gives the next result. \begin{observation} \label{11} If $G=K_{1,{n-1}}$ is a star graph, then $C_{tr}(K_{1,{n-1}})=2$. \end{observation} Next, we determine the total restrained coalition number of paths. We first need to recall the following result from \cite{A6}. \begin{theorem}{\rm\cite{A6}} \label{12} Let $n\geq 4$ be a positive integer. Then $\gamma_{tr}(P_n)=n-2\lfloor\frac{n-2}{4}\rfloor$. \end{theorem} \begin{theorem}\label{13} For any path $P_n$, \begin{equation*} C_{tr}(P_n)=\left\{ \begin{aligned}[c] 2, & \ \ \text{if } 2\leq n\leq 7 \\[.5em] 3, & \ \ \text{if } n\geq 8 \\ \end{aligned}\right. \end{equation*} \end{theorem} \begin{proof} Let $V(P_n)=\{v_1,v_2,\ldots,v_n\}$. By Theorem \ref{6} and Corollary \ref{3}, we have $2\leq C_{tr}(P_n)\leq 3$ for any path $P_n$. If $n=2$, then $P_2=K_2$ and the two singleton sets form a total restrained coalition, so $C_{tr}(P_2)=2$, while if $n=3$, the result follows from Observation \ref{11}. We next proceed to show that $C_{tr}(P_n) \ne 3$ where $4 \leq n\leq 7$. Let $\Phi=\{A, B, C\}$ be a $C_{tr}(P_n)$-partition. By Lemma \ref{4}, each set of $\Phi$ is in total restrained coalition with at most two sets of $\Phi$.
So, without loss of generality, assume that each of $B$ and $C$ forms a total restrained coalition with $A$. By Theorem \ref{12}, we have $|A|+|B|\geq n-2\lfloor\frac{n-2}{4}\rfloor$ and $|A|+|C|\geq n-2\lfloor\frac{n-2}{4}\rfloor$. Therefore, $2|A|+|B|+|C|\geq 2n-4\lfloor\frac{n-2}{4}\rfloor$. On the other hand, we know that $|A|+|B|+|C|=n$. Hence, $|A|\geq n-4\lfloor\frac{n-2}{4}\rfloor$. Now suppose that $n=4$. Hence, $|A|\geq 4$, contradicting the fact that $|A|<4$. This implies that $C_{tr}(P_4)\neq 3$. If $n=5$, then $|A|\geq 5$, which is impossible as $|A|<5$. Consequently, $C_{tr}(P_5)\neq 3$. Now assume that $n=6$. Thus, we have $|A|\geq 2$. On the other hand, $|A|\leq 5$. We now distinguish the following cases. \nt {\bf Case 1.} $\Phi$ consists of a set of cardinality 2 (namely $A$), a set of cardinality 3 (namely $B$) and a singleton set (namely $C$). Since $\gamma_{tr}(P_6)=4$, each of $A$ and $C$ must be in a total restrained coalition with $B$. This is impossible because $|A\cup B|=5$ and $P_6$ has no TRD-set of order 5. Hence, $C_{tr}(P_6) \ne 3$. \nt {\bf Case 2.} Let $|A|=|B|=|C|=2$. Then the union of any two sets of $\Phi$ has order 4. Since $P_6$ has a unique TRD-set of order 4, at most one pair of sets of $\Phi$ can form a total restrained coalition, and hence some set of $\Phi$ forms no total restrained coalition, which is impossible. Hence, $C_{tr}(P_6) \ne 3$. \nt {\bf Case 3.} $\Phi$ consists of a set of cardinality 3 (namely $A$), a set of cardinality 2 (namely $B$) and a singleton set (namely $C$). An argument analogous to that of Case 1 (interchanging the roles of $A$ and $B$) shows that no such partition $\Phi$ of order 3 exists. \nt {\bf Case 4.} $\Phi$ consists of a set of cardinality 4, say $A$, and two singleton sets $B$ and $C$. Since $\gamma_{tr}(P_6)=4$, no two singleton sets in $\Phi$ form a total restrained coalition. It follows that each of $B$ and $C$ must be in a total restrained coalition with $A$, which is impossible, as $P_6$ has no TRD-set of order 5. Hence, $C_{tr}(P_6) \ne 3$. \nt {\bf Case 5.} Let $|A|=5$. It follows that either $B$ or $C$ is an empty set. But this is impossible. Hence, $C_{tr}(P_6) \ne 3$. Next suppose that $n=7$. So, we have $|A|\geq 3$. On the other hand, $|A|\leq 6$. We now consider the following cases. \nt {\bf Case 1.} $\Phi$ consists of two sets of cardinality 3, say $A$ and $B$, and a singleton set $C$. Since $\gamma_{tr}(P_7)=5$, neither $A$ nor $B$ can be in a total restrained coalition with $C$. Consequently, $C$ forms no total restrained coalition, and there is no total restrained coalition partition of order 3. Hence, $C_{tr}(P_7) \ne 3$. \nt {\bf Case 2.} $\Phi$ consists of a set of cardinality 3 (namely $A$) and two sets of cardinality 2 (namely $B$ and $C$). Since $\gamma_{tr}(P_7)=5$ and $|B\cup C|=4$, each of $B$ and $C$ must be in a total restrained coalition with $A$, so $A\cup B$ and $A\cup C$ are two distinct TRD-sets of $P_7$ of order 5. However, the only TRD-sets of order 5 in $P_7$ are $\{v_1,v_2,v_5,v_6,v_7\}$ and $\{v_1,v_2,v_3,v_6,v_7\}$, whose union omits $v_4$, contradicting $A\cup B\cup C=V(P_7)$. Hence, $C_{tr}(P_7) \ne 3$. \nt {\bf Case 3.} $\Phi$ consists of a set of cardinality 4 (namely $A$), a set of cardinality 2 (namely $B$) and a singleton set (namely $C$). Since $\gamma_{tr}(P_7)=5$, each of $B$ and $C$ must be in a total restrained coalition with $A$. This is impossible because $P_7$ has no TRD-set of order 6. Thus, $C_{tr}(P_7) \ne 3$. \nt {\bf Case 4.} $\Phi$ consists of a set of cardinality 5, say $A$, and two singleton sets $B$ and $C$. Since $\gamma_{tr}(P_7)=5$, each of $B$ and $C$ must be in a total restrained coalition with $A$. This is impossible because $P_7$ has no TRD-set of order 6. Hence, $C_{tr}(P_7) \ne 3$. \nt {\bf Case 5.} Let $|A|=6$. It follows that either $B$ or $C$ is an empty set. But this is impossible. Hence, $C_{tr}(P_7) \ne 3$.\medskip By the above discussion, we infer that $C_{tr}(P_n)=2$ for $4 \leq n\leq 7$.\medskip Finally, let $n\geq 8$. By Theorem \ref{6}, for any path $P_n$ we have $C_{tr}(P_{n})\leq 3$.
To achieve equality, all we need is to give a total restrained coalition partition of order 3 for any $n\geq8$, which is as follows: $$\Phi(P_n)= \left\{X=\{v_1,v_2,\dots, v_{n-6},v_{n-1},v_{n}\}, Y=\{v_{n-5},v_{n-4}\}, Z=\{v_{n-3},v_{n-2}\}\right\}.$$ One can observe that each of $Y$ and $Z$ is in a total restrained coalition with $X$. Therefore, the proof is complete. $\Box$ \end{proof}\medskip We close this section by calculating the total restrained coalition number of cycles. It is straightforward to see that $C_{tr}(C_3)=2$, so we next focus on the cases where the order is at least $4$. We begin by recalling the following result. \begin{theorem}{\rm\cite{A6}} \label{cn} Let $n\geq 4$ be a positive integer. Then $\gamma_{tr}(C_n)=n-2\lfloor\frac{n}{4}\rfloor$. \end{theorem} \begin{theorem}\label{cycle} Let $n\ge 4$ be an integer and let $C_n$ be a cycle. Then, $$ C_{tr}(C_{n})=\left\{ \begin{array}{cc} 4 &\quad \mbox{if } n\equiv 0 ~(\mbox{mod } 4),\\[.5em] 3 &\quad\mbox{otherwise.} \end{array}\right. $$ \end{theorem} \begin{proof} Let $G=C_n$, where $V(G)=\lbrace v_1,v_2,\dots ,v_n\rbrace$ and $E(G)=\lbrace v_i v_{i+1} : 1\leq i\leq n\rbrace$. By Theorem \ref{delta2} and Corollary \ref{3}, we have $2\leq C_{tr}(C_n)\leq 4$ for any cycle $C_n$. Now assume that $n\equiv 0~(\mbox{mod } 4)$. We find a total restrained coalition partition of order 4 as follows: $\Phi(C_n)=\{A=\{v_{n}\}, B=\{v_{n-2}\}, C=\{v_1,v_2,v_5,v_6 \dots v_{n-7},v_{n-6}, v_{n-3}\}, D=\{v_3,v_4,v_7,v_8 \dots v_{n-5},v_{n-4}, v_{n-1}\} \}$. It is easy to verify that $A$ and $D$ form a total restrained coalition, and $B$ and $C$ form a total restrained coalition. We next suppose that $n$ is not divisible by 4. We claim that $C_{tr}(C_n)\neq 4$. Suppose, to the contrary, that $C_{tr}(C_n)=4$. Let $\Phi=\{A, B, C, D\}$ be a $C_{tr}(C_n)$-partition. By Lemma~\ref{4}, we conclude that each set of $\Phi$ forms a total restrained coalition with at most two sets of $\Phi$. Now assume without loss of generality that $A$ and $B$ form a total restrained coalition, and $C$ and $D$ form a total restrained coalition. By Theorem~\ref{cn}, we have $\gamma_{tr}(C_n)=n-2\lfloor\frac{n}{4}\rfloor$ where $n\geq 4$. Since $A \cup B$ and $C \cup D$ are total restrained dominating sets, it follows that $|A|+|B|\geq n-2\lfloor\frac{n}{4}\rfloor$ and $|C|+|D|\geq n-2\lfloor\frac{n}{4}\rfloor$. Hence, $|A|+|B|+|C|+|D|\geq 2n-4\lfloor \frac{n}{4}\rfloor$. On the other hand, we know that $|A|+|B|+|C|+|D|=n$. Thus, we obtain $4\lfloor\frac{n}{4}\rfloor \geq n$. Since $n$ is not divisible by 4, we get $n-1\geq n$. But this leads to a contradiction. Hence, $C_{tr}(C_n)\neq 4$, and so $C_{tr}(C_n)\leq 3$. In the following, we construct a total restrained coalition partition of order 3. So assume that $\Phi =\{A=\{v_1, v_2,\dots, v_{n-4}\}, B=\{v_{n},v_{n-1}\}, C=\{v_{n-2},v_{n-3}\}\}$. Note that each of $B$ and $C$ forms a total restrained coalition with $A$. $\Box$ \end{proof} \section{Graphs with large total restrained coalition number} In this section, we will be interested in connected graphs $G$ of order $n$ such that $C_{tr}(G)=n$. We start by giving the following sufficient condition for $G$ to have $C_{tr}(G)=n$. We recall that a universal vertex of a graph $G$ is a vertex that is adjacent to every other vertex in $G$. \begin{proposition} If $G$ is a graph of order $n$ with minimum degree at least three and a universal vertex, then $C_{tr}(G)=n$. \end{proposition} \textbf{Proof.
}Let $v_{1},v_{2},...,v_{n}$ denote the vertices of $G,$ and assume that $v_{1}$ is a universal vertex in $G.$ Clearly, $n\geq4$, since $\delta(G)\geq3.$ Consider the partition $\Phi=\{\{v_{1}\},\{v_{2}\},...,\{v_{n}\}\}.$ Since each vertex in $G$ has at least two neighbors besides $v_{1},$ the set $\{v_{1}\}$ forms a total restrained coalition with any other set of $\Phi.$ Hence $\Phi$ is a trc-partition of $G,$ and thus $C_{tr}(G)=\left\vert \Phi\right\vert =n.$ $\Box$ \medskip We note that the condition that the minimum degree be at least three is important: this can be seen for stars of order at least three, where $\delta(G)=1$, and for the graph $G$ obtained from $k\geq2$ triangles sharing the same vertex, where $\delta(G)=2$. Both graphs have a universal vertex but in either graph $C_{tr}(G)<n$. \bigskip We now give a necessary condition for connected graphs $G$ such that $C_{tr}(G)=n.$ \begin{proposition} \label{Ctr=n}Let $G$ be a connected graph of order $n\geq3$ such that $C_{tr}(G)=n$. Then \begin{enumerate} \item[(1)] $\gamma_{tr}(G)=2.$ \item[(2)] $\delta(G)\geq2.$ \item[(3)] $G$ has diameter at most $2.$ \end{enumerate} \end{proposition} \textbf{Proof. }Let $\Phi=\{\{v_{1}\},\{v_{2}\},...,\{v_{n}\}\}$ be a $C_{tr}(G)$-partition. Clearly, item (1) follows from the fact that each set of $\Phi$ has to be in a total restrained coalition with another set of $\Phi.$ To prove item (2), suppose to the contrary that $G$ contains a leaf, say $v_{1},$ and let $v_{2}$ be the neighbor of $v_{1}.$ Since $v_{1}$ and $v_{2}$ belong to all TRD-sets of $G$, and since $n\geq3,$ no singleton set $\{v_{i}\}$ with $i\geq3$ can form a total restrained coalition with another set of $\Phi,$ contradicting the fact that $\Phi$ is a trc-partition. Hence $\delta(G)\geq2.$ Finally, to prove item (3), we first note that by (1) $G$ has diameter at most three. Assume, for a contradiction, that $G$ has diameter three, and let $x$ and $y$ be two vertices at distance three in $G.$ Observe that $x$ and $y$ have no common neighbor, and thus the set $\{x\}$ cannot form a total restrained coalition with any other set of $\Phi,$ a contradiction. Hence $G$ has diameter at most $2.$ $\Box$ It is worth noting that the converse of Proposition \ref{Ctr=n} is not true. This can be seen by the graph $G$ obtained from a cycle $C_5$ whose vertices are labeled in order $v_1,v_2,v_3,v_4,v_5,v_1$ by adding the edge $v_2v_5$. Clearly $G$ satisfies the conditions of Proposition \ref{Ctr=n}, but $\{v_1\}$ cannot form a total restrained coalition with any other singleton set, leading to $C_{tr}(G)<5$. \bigskip Next, we characterize connected triangle-free graphs $G$ of order $n\geq2$ with $C_{tr}(G)=n$. \begin{proposition} \label{triangle-free}A connected triangle-free graph $G$ of order $n\geq2$ satisfies $C_{tr}(G)=n$ if and only if $G=P_{2}$ or $G=K_{p,q}$ for some integers $p,q\geq2.$ \end{proposition} \textbf{Proof. }Let $G$ be a connected triangle-free graph of order $n\geq2$ such that $C_{tr}(G)=n.$ Clearly, if $n=2,$ then $G$ is a path $P_{2}.$ Hence assume that $n\geq3.$ By item (1) of Proposition \ref{Ctr=n}, $\gamma _{tr}(G)=2,$ and so let $\{u,v\}$ be a minimum TRD-set of $G.$ Since $G$ is triangle free, each of $N(u)$ and $N(v)$ is an independent set. Moreover, $u$ and $v$ have no common neighbor. Let $A=N(u)-\{v\}$ and $B=N(v)-\{u\}.$ Observe that if $A=\emptyset,$ then $G$ is a star and thus $C_{tr}(G)=2<n,$ a contradiction.
Therefore $A\neq\emptyset$ and likewise $B\neq\emptyset.$ Now, since $\delta(G)\geq2,$ by Proposition \ref{Ctr=n}-(2), each vertex in $A$ has at least one neighbor in $B$ and vice versa. Moreover, if a vertex $x\in A$ has a non-neighbor $y\in B,$ then clearly $x$ and $y$ are at distance three in $G,$ implying that $G$ has diameter at least 3, contradicting item (3) of Proposition \ref{Ctr=n}. Hence $A\cup B$ induces a complete bipartite graph $K_{\left\vert A\right\vert ,\left\vert B\right\vert },$ and therefore $G=K_{\left\vert A\right\vert +1,\left\vert B\right\vert +1},$ as desired. For the converse, since any two adjacent vertices of either $P_{2}$ or $K_{p,q}$ with $p,q\geq2$ form a TRD-set of $G,$ we easily conclude that $C_{tr}(G)=n.$ $\Box$ \bigskip According to Propositions \ref{Ctr=n} and \ref{triangle-free}, for every tree $T$ of order $n\geq3,$ $C_{tr}(T)\leq n-1.$ Our aim in the following is to characterize all trees $T$ of order $n\geq3$ such that $C_{tr}(T)=n-1.$ \begin{theorem} The path $P_{3}$ is the only tree $T$ of order $n\geq3$ satisfying $C_{tr}(T)=n-1.$ \end{theorem} \textbf{Proof. }If $T=P_{3},$ then by Theorem \ref{13}, $C_{tr}(P_{3})=2=n-1.$ To prove the converse, let $T$ be a tree of order $n\geq3$ such that $C_{tr}(T)=n-1.$ Since $C_{tr}(T)\leq\Delta(T)+1,$ by Theorem \ref{6}, we deduce that $\Delta(T)\geq n-2.$ If $\Delta(T)=n-1,$ then $T$ is a star $K_{1,n-1},$ and thus by Observation \ref{11}, $n=3$, leading to $T=K_{1,2}=P_{3}.$ Hence in the following we can assume that $\Delta(T)=n-2,$ and thus $n\geq4.$ Clearly, $T$ is a double star $S_{n-3,1},$ with one support vertex having $n-3$ leaf neighbors and the other with only one leaf neighbor. Let $\Phi$ be a $C_{tr}(T)$-partition. Since $\left\vert \Phi\right\vert =n-1,$ one member of $\Phi$ has cardinality 2, and every other member is a singleton set. Now, if $n\geq5,$ then some leaf $w$ of $T$ alone will form a singleton set $\{w\}$ in $\Phi.$ But then $\{w\}$ cannot be in a total restrained coalition with any other set of $\Phi$ (since $V(T)$ is the only TRD-set of the double star $T$), a contradiction. Hence $n=4$ and so $T$ is simply a path $P_{4}.$ By Theorem \ref{13}, $C_{tr}(P_{4})=2<n-1,$ a contradiction. This completes the proof. $\Box$ \bigskip From the above, the following corollary is derived. \begin{corollary} If $T$ is a tree of order $n\geq4,$ then $C_{tr}(T)\leq n-2.$ \end{corollary} \begin{proposition} Let $T$ be a tree of order $n\geq4.$ Then $C_{tr}(T)=n-2$ if and only if $T\in\{K_{1,3},P_{4},P_{5}\}.$ \end{proposition} \textbf{Proof.
}Let $T$ be a tree of order $n\geq4$ such that $C_{tr}(T)=n-2.$ By Theorem \ref{6}, $C_{tr}(T)\leq\Delta(T)+1$ and thus $\Delta(T)\geq n-3.$ If $\Delta(T)=n-1,$ then $T$ is a star $K_{1,n-1},$ and by Observation \ref{11} we deduce that $n=4,$ that is, $T=K_{1,3}.$ Hence in the following we only consider $\Delta(T)\in\{n-3,n-2\}.$ Assume first that $\Delta(T)=n-2.$ Then $T$ is a double star $S_{n-3,1},$ and since $V(T)$ is the unique TRD-set, we deduce that any $C_{tr}(T)$-partition $\Phi$ contains only two sets, leading to $\left\vert \Phi\right\vert =n-2=2.$ Therefore $n=4$ and $T=S_{1,1}$, that is, $T$ is a path $P_{4}.$ Finally, assume that $\Delta(T)=n-3.$ Let $x$ be the vertex of maximum degree, and let $u$ and $v$ denote the two non-neighbors of $x.$ If $u$ and $v$ are adjacent, then, without loss of generality, let $u$ and $x$ have a common neighbor $y.$ As before, since $V(T)$ is the unique TRD-set of $T,$ we deduce that any $C_{tr}(T)$-partition $\Phi$ contains only two sets, leading to $\left\vert \Phi\right\vert =n-3=2.$ Therefore, $n=5,$ and $T$ is a path $P_{5}.$ In the following, we can thus assume that $uv\notin E(T).$ In this case, let $y$ and $z$ be the common neighbors of $x$ with $u$ and $v,$ respectively. Note that every neighbor of $x$ besides $y$ and $z,$ if any, is a leaf. But since $T$ has diameter $4,$ any $C_{tr}(T)$-partition $\Phi$ contains only two sets, leading as before to $n=5,$ and thus $T=P_{5}.$ The converse follows from Observation \ref{11} and Theorem \ref{13}. $\Box$ \section{Concluding remarks} In this article, we introduced and studied the total restrained coalition in graphs. We investigated the existence of a total restrained coalition partition. We derived some upper and lower bounds for the total restrained coalition number. We provided a necessary condition for graphs $G$ of order $n$ such that $C_{tr}(G)=n$, and we characterized triangle-free graphs with $C_{tr}(G)=n$. Restricted to the class of trees, we characterized those trees $T$ of order $n$ such that $C_{tr}(T)$ belongs to $\{n-1,n-2\}$. In order to expand the study of coalitions, we propose some potential research directions. \begin{enumerate} \item Characterize all graphs satisfying $C_{tr}(G)=C(G)$. \item Study the total restrained coalitions in cubic graphs. \item Similar to the coalition graph, it is natural to define and study the total restrained coalition graph for a given graph $G$ with respect to the total restrained coalition partition $\Phi$. We define it as follows. Corresponding to any total restrained coalition partition $\Phi=\{V_1,V_2,\ldots, V_k\}$ of a given graph $G$, a {\em total restrained coalition graph} $TRCG(G,\Phi)$ is associated, with a one-to-one correspondence between the vertices of $TRCG(G,\Phi)$ and the sets $V_1,V_2,\ldots,V_k$ of $\Phi$. Two vertices of $TRCG(G,\Phi)$ are adjacent if and only if their corresponding sets in $\Phi$ form a total restrained coalition. \item Study the restrained coalitions in graphs. \item Does there exist a polynomial-time algorithm for computing $C_{tr}(T)$ for any tree $T$? \item Study the complexity issue of the decision problem related to the total restrained coalition number. \end{enumerate} \section*{Author contributions statement} All authors contributed equally to this work. \section*{Conflicts of interest} The authors declare no conflict of interest. \section*{Data availability} No data was used in this investigation. \begin{thebibliography}{99} \bibitem{A1} S. Alikhani, D.
Bakhshesh and H.Golmohammadi, Total coalitions in graphs, Quaest. Math. (2024): 1–12. https://doi.org/10.2989/16073606.2024.2365365. \bibitem{A2} S. Alikhani, D. Bakhshesh, H. Golmohammadi and S. Klavzar, On independent coalition in graphs and independent coalition graphs, Discuss. Math. Graph Theory (2024) doi.org/10.7151/dmgt.2543. \bibitem{A3} S. Alikhani, D. Bakhshesh, H. Golmohammadi and E.V. Konstantinova, Connected coalitions in graphs, Discuss. Math. Graph Theory 44 (2024) 1551–1566. \bibitem{A4} D. Bakhshesh, M.A. Henning and D. Pradhan, On the coalition number of trees, Bull. Malays. Math. Sci. Soc. 46 (2023) 95. \bibitem{A5} J. Bar\'at and Z.L. Bl\'azsik, General sharp upper bounds on the total coalition number, Discuss. Math. Graph Theory 44 (2024) 1567--1584. \bibitem{A6} J. H. Hattingh and E. J. Joubert, Restrained and total restrained domination in graphs, Topics in Domination in Graphs. Dev. Math., vol., 64, Springer, Cham, 2020, 129–150. \bibitem{A7} T.W. Haynes, J.T. Hedetniemi, S.T. Hedetniemi, A.A. McRae and R. Mohan, Introduction to coalitions in graphs, AKCE Int. J. Graphs Combin. 17 (2) (2020), 653–659. \bibitem{A8} T.W. Haynes, J.T. Hedetniemi, S.T. Hedetniemi, A.A. McRae and R. Mohan, Coalition graphs of paths, cycles, and trees, Discuss. Math. Graph Theory 43 (2023) 931–946. \bibitem{A9} T.W. Haynes, J.T. Hedetniemi, S.T. Hedetniemi, A.A. McRae and R. Mohan, Upper bounds on the coalition number, Austral. J. Combin. 80 (3) (2021), 442–453. \bibitem{A10} T.W. Haynes, J.T. Hedetniemi, S.T. Hedetniemi, A.A. McRae and R. Mohan, Coalition graphs, Comm. Combin. Optim. no. 2 (2023), 423–430. \bibitem{A11}T. W. Haynes, S. T. Hedetniemi and M. A. Henning, Domination in Graphs: Core Concepts Series: Springer Monographs in Mathematics, Springer, Cham, 2023. xx + 644 pp. \bibitem{A12} T.W. Haynes, S.T. Hedetniemi and P.J. Slater, Fundamentals of Domination in Graphs, in: Chapman and Hall/CRC Pure and Applied Mathematics Series, Marcel Dekker, Inc. New York, 1998. \bibitem{A13} M.A. Henning and S.N. Jogan, A characterization of graphs with given total coalition numbers, Discrete Appl. Math. 358 (2024) 395--403. \bibitem{A14} J.A. Telle and A. Proskurowski, Algorithms for vertex partitioning problems on partial $k$-trees, SIAM J. Discrete Math. 10 (1997) 529–550. \bibitem{Z} B. Zelinka, Total domatic number and degrees of vertices of a graph, Math. Slovaca, 39 (1) (1989), 7--11. \bibitem{A15} B. Zelinka, Remarks on restrained domination and total restrained domination in graphs, Czechoslovak Math. J. 55 (2005) 393–396. \end{thebibliography} \end{document}
2412.12854v1
http://arxiv.org/abs/2412.12854v1
Non-Uniqueness Phase in Hyperbolic Marked Random Connection Models using the Spherical Transform
\documentclass[12pt,a4paper,dvipsnames]{article} \usepackage{etoolbox,fullpage,amssymb,amsmath,amsfonts,amsthm,paralist,graphicx,color,float, mathtools,tikz,xfrac} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[colorlinks=true,linkcolor=BrickRed,citecolor=blue]{hyperref} \usepackage[normalem]{ulem} \usepackage{centernot} \usepackage{ifthen,xkeyval, tikz, calc, graphicx} \usepackage{xcolor} \usepackage{framed} \usepackage[textsize=scriptsize, backgroundcolor=blue!20!white,bordercolor=red]{todonotes} \usepackage{dsfont} \usepackage{mathrsfs} \usepackage{comment} \usepackage{stmaryrd} \usepackage{enumitem} \usepackage{caption} \usepackage{subcaption} \setlength {\marginparwidth }{2cm} \usepackage{todonotes} \usepackage{thmtools} \usetikzlibrary{plotmarks} \usetikzlibrary{shapes.misc} \usepackage{thmtools} \usepackage{thm-restate} \usepackage{cleveref} \usepackage{orcidlink} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{observation}[theorem]{Observation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}{Example}[section] \newtheorem{assumption}{Assumption} \newtheorem{assumptionalt}{Assumption}[assumption] \newenvironment{assumptionp}[1]{ \renewcommand\theassumptionalt{#1} \assumptionalt }{\endassumptionalt} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \def\acknowledgementsname{Acknowledgments} \newenvironment{acks}[1][\acknowledgementsname]{\noindent\textbf{#1.}\space\ignorespaces}{\par} \definecolor{shadecolor}{named}{GreenYellow} \makeatletter \ignorespaces} \ignorespaces} \makeatother \newcommand{\al}[1]{\begin{align*}#1\end{align*}} \newcommand{\aln}[1]{\begin{align}\begin{split}#1\end{split}\end{align}} \newcommand{\algn}[1]{\begin{align}#1\end{align}} \newcommand{\eqq}[1]{\begin{equation}#1\end{equation}} \newcommand{\prob}[1]{\mathbb P[#1]} \newcommand{\p}{\mathbb P} \newcommand{\pla}{\mathbb P_\lambda} \newcommand{\ela}{\mathbb E_{\lambda}} \newcommand{\Pcal}{\mathcal P} \newcommand{\Acal}{\mathcal{A}} \newcommand{\Mcal}{\mathcal{M}} \newcommand{\Ucal}{\mathcal{U}} \newcommand{\Wcal}{\mathcal{W}} \newcommand{\Pfrak}{\mathfrak P} \newcommand{\Dcal}{\mathcal{D}} \newcommand{\Lcal}{\mathcal{L}} \newcommand{\Ical}{\mathcal{I}} \newcommand{\Jcal}{\mathcal{J}} \newcommand{\Kcal}{\mathcal{K}} \newcommand{\Vcal}{\mathcal{V}} \newcommand{\E}{\mathbb E} \newcommand{\Ecal}{\mathcal E} \newcommand{\Bcal}{\mathcal B} \newcommand{\Efrak}{\mathfrak E} \newcommand{\Tfrak}{\mathfrak T} \newcommand{\R}{\mathbb R} \newcommand{\X}{\mathbb X} \newcommand{\Xcal}{\mathcal X} \newcommand{\Y}{\mathbb Y} \newcommand{\Rd}{\mathbb R^d} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \newcommand{\N}{\mathbb N} \newcommand{\B}{\mathcal B} \newcommand{\bbB}{\mathbb B} \newcommand{\HypTwo}{{\mathbb{H}^2}} \newcommand{\HypDim}{{\mathbb{H}^d}} \newcommand{\dd}{\mathrm{d}} \newcommand{\C}{\mathscr {C}} \newcommand{\Complex}{\mathbb C} \newcommand{\thinn}[1]{\langle #1 \rangle} \newcommand{\piv}[1]{\mathsf {Piv}(#1)} \newcommand{\Leb}{{\rm Leb}} \newcommand{\xbar}{\overline{x}} \newcommand{\ybar}{\overline{y}} \newcommand{\zbar}{\overline{z}} \newcommand{\ubar}{\overline{u}} \newcommand{\wbar}{\overline{w}} \newcommand{\rbar}{\overline{r}} \newcommand{\sbar}{\overline{s}} \newcommand{\tbar}{\overline{t}} 
\newcommand{\vbar}{\overline{v}} \newcommand{\zerobar}{\overline{0}} \newcommand{\Hilbert}{\mathbb H} \DeclareMathOperator*{\esssup}{ess\,sup} \DeclareMathOperator*{\essinf}{ess\,inf} \newcommand{\Fourier}[1]{\mathcal{F}\left( #1 \right)} \newcommand{\Rmin}{R^{({\rm min})}_d} \newcommand{\Rmax}{R^{({\rm max})}_d} \newcommand{\Vmax}{V^{({\rm max})}} \newcommand{\LandauBigO}[1]{\mathcal{O}\left(#1\right)} \newcommand{\tildebeta}{\widetilde{\beta}} \newcommand{\convex}[1]{{\mathrm{conv}\left(#1\right)}} \newcommand{\Gcal}{\mathcal{G}} \newcommand{\dequal}{\,{\buildrel d \over =}\,} \newcommand{\ConstantTriangle}{C_\Delta} \newcommand{\cbar}[1]{\overline{c}_{#1}} \DeclareMathOperator{\sech}{sech} \DeclareMathOperator{\csch}{csch} \DeclareMathOperator{\arcsec}{arcsec} \DeclareMathOperator{\arccot}{arccot} \DeclareMathOperator{\arccsc}{arccsc} \DeclareMathOperator{\arcosh}{arcosh} \DeclareMathOperator{\arsinh}{arsinh} \DeclareMathOperator{\artanh}{artanh} \DeclareMathOperator{\arsech}{arsech} \DeclareMathOperator{\arcsch}{arcsch} \DeclareMathOperator{\arcoth}{arcoth} \newcommand{\conn}[3]{#1 \longleftrightarrow #2\:\textrm {in}\: #3} \newcommand{\adja}[3]{#1 \sim #2\:\textrm{in}\: #3} \newcommand{\notadja}[3]{#1 \not\sim #2\:\textrm{in}\: #3} \newcommand{\nconn}[3]{#1 \centernot\longleftrightarrow #2\:\textrm{in}\: #3} \newcommand{\dconn}[3]{#1 \Longleftrightarrow #2\textrm { in } #3} \newcommand{\ndconn}[3]{#1 \centernot\Longleftrightarrow #2\textrm { in } #3} \newcommand{\xconn}[4]{#1 \xleftrightarrow{\,\,#4\,\,} #2\textrm { in } #3} \newcommand{\offconn}[4]{#1 \longleftrightarrow #2\textrm { in } #3 \textrm{ off } #4} \newcommand{\viaconn}[4]{#1 \longleftrightarrow #2\textrm { in } #3 \textrm{ through } #4} \newcommand{\sqconn}[3]{#1 \leftrightsquigarrow #2\textrm { in } #3 } \newcommand{\sqarrow}{\leftrightsquigarrow} \newcommand{\xsqarrow}[1]{\overset{#1}{\leftrightsquigarrow}} \newcommand{\xsqconn}[4]{#1 \overset{#3}{\leftrightsquigarrow} #2 \text{ in } #4} \newcommand{\lstepconn}[4]{#1 \overset{#3}{\longleftrightarrow} #2 \text{ in } #4} \newcommand{\thconn}[3]{#1 \notin {#3}^{#1}_{\langle #2 \rangle}} \newcommand{\thinning}[2]{#1_{\langle #2 \rangle}} \newcommand{\tlam}{\tau_\lambda} \newcommand{\tlamtilt}{\tau_{\lambda,\alpha}} \newcommand{\tlamo}{\tau_\lambda^\circ} \newcommand{\tlame}{\tlam^{(\varepsilon)}} \newcommand{\glam}{\gamma_\lambda} \newcommand{\tillam}{\tilde\tau_\lambda} \newcommand{\tklam}{\tau_{\lambda,k}} \newcommand{\tilklam}{\tilde\tau_{\lambda,k}} \newcommand{\ftlam}{\widehat\tau_\lambda} \newcommand{\ftillam}{\widehat{\tilde\tau}_\lambda} \newcommand{\ftklam}{\widehat\tau_{\lambda,k}} \newcommand{\ftilklam}{\widehat{\tilde\tau}_{\lambda,k}} \newcommand{\Lace}{\Pi} \newcommand{\Lacelam}{\Pi_\lambda} \newcommand{\fLacelam}{\widehat\Pi_\lambda} \newcommand{\LacelamC}{\Pi_{\lambda_c}} \newcommand{\fLacelamC}{\widehat\Pi_{\lambda_c}} \newcommand{\fupslam}{\widehat\Upsilon_\lambda} \newcommand{\vardbtilde}[1]{\tilde{\raisebox{0pt}[0.85\height]{$\tilde{#1}$}}} \newcommand{\dtilklam}{\vardbtilde{\tau}_{\lambda,k}} \newcommand{\fdtilklam}{\widehat{\vardbtilde{\tau}}_{\lambda,k}} \newcommand{\trilam}{\triangle_\lambda} \newcommand{\trilamo}{\triangle^\circ_\lambda} \newcommand{\trilamoo}{\triangle^{\circ\circ}_\lambda} \newcommand{\trilamB}{\triangle^{(B)}_\lambda} \newcommand{\trilamBar}{\overline{\triangle_\lambda}} \newcommand{\trilamoBar}{\overline{\triangle^\circ_\lambda}} \newcommand{\trilamooBar}{\overline{\triangle^{\circ\circ}_\lambda}} 
\newcommand{\trilamBBar}{\overline{\triangle^{(B)}_\lambda}} \newcommand{\Wk}{W_k} \newcommand{\WkBar}{\overline{W_k}} \newcommand{\HkBar}{\overline{H_k}} \newcommand{\Ulam}{U_\lambda} \newcommand{\Vlam}{V_\lambda} \newcommand{\greensmu}{G_\mu} \newcommand{\gmu}{G_{\mulam}} \newcommand{\fgreensmu}{\widehat G_\mu} \newcommand{\fgmu}{\widehat G_{\mulam}} \newcommand{\dispfgmu}{\widehat{\mathcal{G}}_{\mulam}} \newcommand{\creensmu}{C_\mu} \newcommand{\cmu}{C_{\mulam}} \newcommand{\fcreensmu}{\widehat C_\mu} \newcommand{\fcmu}{\widehat C_{\mulam}} \newcommand{\fUmu}{\widehat U_\mu} \newcommand{\fUmulam}{\widehat U_\mulam} \newcommand{\Gfrakmu}{\mathfrak{G}_{\mu,l}} \newcommand{\Jfrakmu}{\mathfrak{J}_{\mu,k,l}} \newcommand{\Greenlam}{g_\lambda} \newcommand{\OpGreenlam}{\mathcal{G}_\lambda} \newcommand{\Id}{\mathds 1} \newcommand{\Optlam}{\mathcal{T}_\lambda} \newcommand{\OptlamCritL}{\mathcal{T}_{\lambda_\mathrm{c}(L),L}} \newcommand{\Optlamtilt}{\mathcal{T}_{\lambda,\alpha}} \newcommand{\OptlamT}{\mathcal{T}_{\lambda_O}} \newcommand{\fOptlam}{\widehat{\mathcal{T}}_\lambda} \newcommand{\fOptlamT}{\widehat{\mathcal{T}}_{\lambda_O}} \newcommand{\Optklam}{\mathcal{T}_{\lambda,k}} \newcommand{\fOptklam}{\widehat{\mathcal{T}}_{\lambda,k}} \newcommand{\Opconnf}{\varPhi} \newcommand{\fOpconnf}{\widehat{\Opconnf}} \newcommand{\OpLace}{\varPi} \newcommand{\OpLacelam}{\varPi_\lambda} \newcommand{\OpLacelamT}{\varPi_{\lambda_O}} \newcommand{\fOpLace}{\widehat{\OpLace}} \newcommand{\fOpLacelam}{\fOpLace_{\lambda}} \newcommand{\fOpLacelamT}{\fOpLace_{\lambda_O}} \newcommand{\StartBlock}{\kappa} \newcommand{\EndBlock}{\overline{\psi}} \newcommand{\OpStartBlock}{K} \newcommand{\OpEndBlock}{\overline{\Psi}} \newcommand{\OpEndDiagram}{\overline{K}} \newcommand{\OneNorm}[1]{\left\lVert#1\right\rVert_{1,\infty}} \newcommand{\TwoNorm}[1]{\left\lVert#1\right\rVert_{2,\infty}} \newcommand{\InfNorm}[1]{\left\lVert#1\right\rVert_{\infty,\infty}} \newcommand{\OneOneNorm}[1]{\left\lVert#1\right\rVert_{1,1}} \DeclarePairedDelimiter\abs{\lvert}{\rvert} \DeclarePairedDelimiter\norm{\lVert}{\rVert} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \newcommand{\SpecRadius}[1]{\varrho\left(#1\right)} \DeclarePairedDelimiterX{\inner}[2]{\langle}{\rangle}{#1, #2} \newcommand{\MinModulus}[1]{\left\lVert#1\right\rVert_{\rm min}} \newcommand{\SupSpec}[1]{\mathbb{S}\left(#1\right)} \newcommand{\habs}[1]{\abs*{#1}_\HypTwo} \newcommand{\closure}[1]{{\rm Cl}\left(#1\right)} \newcommand{\EssIm}[1]{{\rm ess.Im}\left(#1\right)} \newcommand{\Ess}{{\rm ess.}} \newcommand{\dist}[1]{\mathrm{dist}_{\HypDim}\left(#1\right)} \newcommand{\HSNorm}[1]{\left\lVert#1\right\rVert_{\rm HS}} \newcommand{\iphi}{1} \newcommand{\phiint}{q_\connf} \newcommand{\phiintL}{q_{\connf_L}} \newcommand{\mulam}{{\mu_\lambda}} \newcommand{\orig}{o} \newcommand{\origin}[1]{{o_{#1}}} \newcommand{\e}{\mathrm{e}} \renewcommand{\i}{\text{i}} \newcommand{\parl}{\mathrel{\raisebox{-0.05 cm}{\includegraphics{Figures/parl.pdf}}} \hspace{-0.05cm}} \newcommand{\connf}{\varphi} \newcommand{\sconnf}{\widetilde{\connf}} \newcommand{\Exconnf}[1]{\connf^{[#1]}} \newcommand{\connfrw}{D_\varphi} \newcommand{\fconnf}{\widehat\connf} \newcommand{\fconnfrw}{\widehat D_\connf} \newcommand{\unitball}{\mathbb B^d} \newcommand{\Bb}{\mathbb B} \newcommand{\ballep}{\mathsf B^{(\varepsilon)}} \newcommand{\const}{c_f} \newcommand{\constKappaOne}{c_{\kappa+1}} \definecolor{darkorange}{RGB}{255,165,0} \definecolor{altviolet}{RGB}{139,0,139} 
\definecolor{turquoise}{RGB}{64,224,208} \definecolor{lblue}{RGB}{173,216,230} \definecolor{violet}{RGB}{238,130,238} \definecolor{darkgreen}{RGB}{0,100,0} \definecolor{lgreen}{RGB}{144,238,144} \numberwithin{equation}{section} \allowdisplaybreaks \title{Non-Uniqueness Phase in Hyperbolic Marked Random Connection Models using the Spherical Transform} \author{Matthew Dickson\footnote{University of British Columbia, Department of Mathematics, Vancouver, BC, Canada, V6T 1Z2; Email: [email protected]; \orcidlink{0000-0002-8629-4796}~https://orcid.org/0000-0002-8629-4796}} \date{} \begin{document} \maketitle \vspace{-1em} {\centering{ \today}\par} \vskip-3em \begin{abstract} A non-uniqueness phase for infinite clusters is proven for a class of marked random connection models on the $d$-dimensional hyperbolic space, $\HypDim$, in a high volume-scaling regime. The approach taken in this paper utilizes the spherical transform on $\HypDim$ to diagonalize convolution by the adjacency function and the two-point function and bound their $L^2\to L^2$ operator norms. Under some circumstances, this spherical transform approach also provides bounds on the triangle diagram that allows for a derivation of certain mean-field critical exponents. In particular, the results are applied to some Boolean and weight-dependent hyperbolic random connection models. While most of the paper is concerned with the high volume-scaling regime, the existence of the non-uniqueness phase is also proven without this scaling for some random connection models whose resulting graphs are almost surely not locally finite. \end{abstract} \noindent\emph{Mathematics Subject Classification (2020).} Primary: 82B43; Secondary: 60G55, 43A90. \smallskip \noindent\emph{Keywords and phrases.} Random connection models, continuum percolation, hyperbolic space, non-uniqueness phase, mean-field critical exponents, spherical transform. \section{Introduction} Random connection models (RCMs) are random graph models in which the vertex set is a Poisson point process on some ambient space (with an intensity parameter $\lambda>0$), and the edges then exist independently with a probability that is prescribed by an \emph{adjacency function}, $\connf$, that depends upon the positions of the two vertices in question. \emph{Marked} random connection models are RCMs for which the ambient space is the product of an infinite space (say Euclidean $\Rd$ or hyperbolic $\HypDim$) with a probability space $\Ecal$ of marks. The adjacency function naturally depends on these marks, and this ensures that not all vertices behave in the same way (as they might well in a purely very symmetric ambient space). The primary aim of this paper is to prove marked RCMs on $\HypDim\times \Ecal$ exhibit a \emph{non-uniqueness} phase, in which there are almost surely infinitely many infinite clusters. Specifically, we show in Theorem~\ref{thm:NonUniqueness} that in a certain ``stretched-out" regime the percolation critical intensity $\lambda_\mathrm{c}$ (above which infinite components exist) and the uniqueness critical intensity $\lambda_\mathrm{u}$ (above which there is a unique infinite component) satisfy \begin{equation} \label{eqn:non-uniqueness} \lambda_\mathrm{c} < \lambda_\mathrm{u}. \end{equation} Notably, this ``stretched-out" regime is not the ``spread-out" regime as seen in the lace expansion arguments of \cite{HarSla90,HeyHofLasMat19}. 
As we shall see, the approach taken in this paper also helps to derive mean-field critical exponents for such models (see Theorem~\ref{thm:meanfield}). These results are also applied to Boolean disc models (with random radii) and weight-dependent RCMs on $\HypDim$. In particular, Proposition~\ref{prop:nonperturb} proves the existence of a non-uniqueness regime for some weight-dependent RCMs \emph{without} requiring the ``stretched-out" perturbation. Note that this proposition also proves a non-trivial property for some random graphs that are almost surely not locally finite. There are three main ideas in this paper that allow us to show non-uniqueness. In a similar way to \cite{hutchcroft2019percolation} we first use the two-point (or connectedness) and adjacency functions to construct operators on $L^p$ spaces - in particular the Hilbert space $L^2$. This produces the critical intensity $\lambda_{2\to 2}$, at which the $L^2\to L^2$ operator norm of the two-point operator switches from being finite to infinite. Crucially, this operator critical intensity lies between the susceptibility critical intensity $\lambda_T$ ($=\lambda_{1\to 1}$ -- see Lemma~\ref{lem:generalOperatorBounds}) and the uniqueness critical intensity $\lambda_{\mathrm{u}}$: \begin{equation} \lambda_{T}\leq \lambda_{2\to 2} \leq \lambda_{\mathrm{u}}. \end{equation} If one could show $\lambda_T=\lambda_{\mathrm{c}}$ and that one of these bounds is strict (it will be the lower bound in this argument), then a non-uniqueness phase would be proven. In practice we will find an upper bound on $\lambda_\mathrm{c}$ by comparing to a model on which the percolation and susceptibility thresholds are equal - we only prove $\lambda_T=\lambda_{\mathrm{c}}$ for models in which we also derive mean-field critical exponents. Furthermore, $\lambda_{2\to 2}$ can be bounded below by the reciprocal of the $L^2\to L^2$ operator norm of the adjacency operator. The second main idea is that this $L^2\to L^2$ operator norm of the adjacency operator can be written explicitly using the spherical transform of \cite{helgason1994geometric}. Specifically, in the case of \emph{unmarked} hyperbolic RCMs the $L^2\to L^2$ operator norm of the adjacency operator is exactly \begin{equation} \label{eqn:SimpleSphericalTransform} \int_{\HypDim}\connf\left(\dist{x,\orig}\right)Q_d\left(\dist{x,\orig}\right) \mu\left(\dd x\right), \end{equation} where $\mathrm{dist}_{\HypDim}$ is the hyperbolic metric, $\mu$ is the hyperbolic measure, and the function $Q_d\colon \R_+\to \R_+$ is given by an explicit integral in \eqref{eqn:QdFunctionDefinition} (taking $\R_+=\left[0,\infty\right)$). By studying simple properties of $Q_d$, one can show that as $\connf$ becomes more spread out, the integral \eqref{eqn:SimpleSphericalTransform} is much smaller than the total mass of $\connf$ (i.e. this same integral without $Q_d$). The third idea is that a geometric property of $\HypDim$ allows us to use a cluster of the RCM to dominate a branching process, which tells us that the susceptibility critical intensity is bounded above by a constant divided by the total mass of $\connf$. We will have therefore shown that if $\connf$ is sufficiently spread out then the susceptibility critical intensity is strictly less than $\lambda_{2\to 2}$, which is less than or equal to the uniqueness critical intensity. 
The critical intensity $\lambda_{2\to 2}$ has the additional advantage that the triangle diagram can easily be shown to be finite for $\lambda<\lambda_{2\to 2}$, and standard methods then allow us to show mean field critical exponents hold. \subsection{Background} Bernoulli bond percolation on $\Z^d$ with nearest neighbour edges is the archetypal percolation model and naturally has a very large collection of literature. A standard ergodicity and trifurcation argument proves that there is almost surely at most one infinite connected cluster (see \cite{grimmett1989percolation}). Furthermore, by coupling with Bernoulli bond percolation on a Bethe lattice, \cite{AizNew84} proved that if the triangle condition holds then the susceptibility critical exponent exists and takes its mean-field value. Similarly, \cite{AizBar87,AizBar91} showed that other critical exponents take their mean-field values under the triangle condition. The matter of whether the triangle condition held was considered in \cite{HarSla90}, in which a lace expansion argument was used to show that the triangle condition holds for so-called `spread-out' models when $d>6$, and in the nearest neighbour model for $d>d^*$ for some $d^*\geq 6$. Currently the best upper bound for this upper critical dimension is $d^*\leq 10$ by \cite{FitHof17}. The questions of Bernoulli bond percolation can also be asked on graphs other than the usual $\Z^d$. \cite{benjamini1996percolation} provides an early study of Bernoulli bond percolation on locally finite Cayley graphs, quasi-transitive graphs, and planar graphs. In particular, they conjectured that for Bernoulli bond percolation on locally finite nonamenable quasi-transitive graphs there is a non-empty interval of bond probabilities that produces infinitely many infinite clusters almost surely. For nonamenable finitely generated groups, \cite{pak2000non} proved every nonamenable group has a Cayley graph such that the associated Bernoulli bond percolation model has a non-empty non-uniqueness phase, and \cite{nachmias2012non} proved the existence of this phase if the Cayley graph has sufficiently high girth (i.e. sufficiently long shortest cycle). Likewise, \cite{benjamini2001percolation} and \cite{hutchcroft2019percolation} proved that a non-uniqueness phase exists for locally finite nonamenable transitive planar single-ended graphs and locally finite nonamenable quasi-transitive locally finite \emph{Gromov hyperbolic} graphs respectively. In particular, \cite{hutchcroft2019percolation} used an operator description of the two-point function and a geometric result that they called a `\emph{hyperbolic magic lemma}' that they derived from the so-called `\emph{magic lemma}' of \cite{benjamini2011recurrence} to prove this non-uniqueness. For percolation on finite graphs, the question of the uniqueness of the infinite component becomes a question of the relative sizes of the two largest components. In \cite{bollobas2007phase}, this question was answered for inhomogeneous random graphs, in which each edge in the complete graph of $n$ vertices is given a mark from a finite measure space and the edge between two vertices is open independently with a probability given by a kernel function (that varies with $n$ to close in on critical behaviour). 
Under the assumption that the kernel is ``irreducible" and an assumption that ensures that the expected number of edges is ``what it should be," they prove that the largest component is of order $n$ with high probability, while the second largest component is $o\left(n\right)$. Regarding critical exponents, \cite{Schon01} showed that the triangle condition holds for locally finite graphs with sufficiently high Cheeger constant, and used this to prove that various mean-field critical exponents are attained. \cite{Schon02} was then able to do this for nonamenable transitive locally finite single-ended graphs \emph{without} resorting to the triangle condition, instead using the dual graph to find a separating barrier between clusters. For certain regular tessellations of two and three dimensional hyperbolic spaces, \cite{madras2010trees} adapted this approach by using the geometric result that a positive fraction of the vertices in a finite cluster will be on the boundary of the convex hull of the cluster. In addition to the non-uniqueness phase, \cite{nachmias2012non,hutchcroft2019percolation} both derived mean-field critical exponents in their respective regimes by bounding the triangle diagram. For \cite{hutchcroft2019percolation} this came naturally from their operator description. For random connection models (RCMs) on $\Rd\times\Ecal$ many of the results that applied to Bernoulli bond percolation on $\Z^d$ can be adapted. \cite{MeeRoy96} considered unmarked RCMs with rotation invariant and decreasing adjacency functions, showing that there was no non-uniqueness regime. This was generalised to \emph{marked} RCMs (also removing the requirement of being rotation invariant and decreasing) in \cite{chebunin2024uniqueness}, by using the amenability of $\Rd$ to show that clusters in such models on $\Rd\times\Ecal$ are \emph{deletion tolerant}. Then under a natural irreducibility assumption they proved that this property is equivalent to having no non-uniqueness phase. The lace expansion argument was adapted by \cite{HeyHofLasMat19} to show that (an appropriate version) of the triangle condition holds for `spread-out' RCMs on $\Rd$ when $d>6$, and otherwise for $d>d^*$ for some $d^*\geq 6$. In contrast to the lattice version, no effort has been made to optimize $d^*$ in this context. They then demonstrated the utility of this triangle condition by using it to prove that the susceptibility critical exponent takes its mean-field value. For marked RCMs on $\Rd\times\Ecal$, the lace expansion argument was performed by \cite{DicHey2022triangle}, in which a version of the triangle condition was proven to hold for sufficiently high dimensions. This was then used by \cite{caicedo2023criticalexponentsmarkedrandom} to show that certain critical exponents take mean-field values in sufficiently high dimensions. Let us now review RCMs on $\HypDim$. For the specific case of the Boolean Disc Model with a fixed radius (variously also called Poisson Boolean model, Gilbert disc model, etc.) on $\HypDim$, \cite{tykesson2007number} proved that there is a non-uniqueness phase if $d=2$ or if $d\geq 3$ and the radius of the discs is sufficiently large. This was achieved by finding infinite geodesic rays from a point that are contained in the vacant set of the vertices' discs. In \cite{dickson2024hyperbolicrandomconnectionmodels}, the geometric arguments of \cite{madras2010trees} have been adapted to show that a wide variety of RCMs on $\HypTwo$ and $\mathbb{H}^3$ exhibit mean-field critical exponents. 
As in \cite{madras2010trees}, the argument could not be made to work for $d\geq 4$. The study of RCMs on $\HypDim$ (or $\HypDim\times\Ecal$) is not only relevant in its own right, but may also have applications for RCMs on $\Rd\times\Ecal$. It is described in \cite{Hofstad2024RGCNV2} how a (Boolean disc) RCM on $\HypTwo$ can be interpreted as a one-dimensional product \emph{geometric inhomogeneous random graph} (a form of RCM on $\R\times\Ecal$). Hyperbolic embeddings of complex networks have also attracted much attention in themselves: \cite{boguna2010sustaining} models the Internet by embedding it in $\HypTwo$. The novelty in this work is that not only do we consider marked hyperbolic RCMs (i.e. RCMs on the ambient space $\HypDim\times\Ecal$) for any $d\geq 2$, but we show non-uniqueness (Theorem~\ref{thm:NonUniqueness}) and mean-field critical exponents (Theorem~\ref{thm:meanfield}) by using spherical transforms -- a technique with no obvious analog in the Bernoulli percolation or RCM literature. The ability of this approach to deal with models that almost surely produce non-locally finite graphs is also demonstrated in Proposition~\ref{prop:nonperturb}. The non-local finiteness of the resulting graph is emphasised here because all the references above are only concerned with models that produce locally finite graphs (by having the original graph being locally finite or by having an integrable adjacency function). These were natural assumptions in those works because they wished to analyse the percolation transition (and so needed it to be non-zero), but in our case we are studying a super-critical behaviour that makes sense even if the graph ends up not being locally finite. Note that Proposition~\ref{prop:nonperturb} also provides a \emph{non-perturbative} result -- an advantage over the main Theorems~\ref{thm:NonUniqueness} and \ref{thm:meanfield}. Nevertheless these main theorems are perturbative like the works of \cite{pak2000non,Schon01,nachmias2012non}, \cite{tykesson2007number} for $d\geq 3$, and \cite{HarSla90,HeyHofLasMat19} in considering spread-out models. \subsection{Presenting the Model and the Questions} Let us now be more precise in our definition of marked hyperbolic RCMs. \paragraph{Hyperbolic space.} For $d\geq 2$, the $d$-dimensional hyperbolic space (denoted $\HypDim$) is the unique simply connected $d$-dimensional Riemannian manifold with constant sectional curvature $-1$. We will often use the Poincar{\'e} ball model (sometimes called \emph{conformal ball model}) of $\HypDim$. Let $\mathbb{B}=\left\{x\in\Rd\colon \abs*{x}<1\right\}$ be the open Euclidean unit ball in $\Rd$. The \emph{hyperbolic metric} on $\mathbb{B}$ then assigns length \begin{equation} \ell\left(\gamma\right) := 2\int^1_0 \frac{\abs*{\gamma'(t)}}{1-\abs*{\gamma(t)}^2}\dd t \end{equation} to the curve $\gamma=\left\{\gamma(t)\right\}_{t\in\left[0,1\right]}$. In particular, this means that the origin in $\mathbb{B}$ (denoted $\orig$) and an arbitrary point $x\in \mathbb{B}$ are hyperbolic distance \begin{equation} \label{eqn:HypDistance} \dist{\orig,x} = 2\artanh \abs{x} \end{equation} apart, where $\abs{x}$ is the Euclidean distance between $\orig$ and $x$ in $\Rd$. For Borel measurable subsets $E\subset \mathbb{B}$, the \emph{hyperbolic measure} (denoted $\mu$) assigns mass \begin{equation} \mu\left(E\right) := \int_E\left(\frac{2}{1-\abs*{x}^2}\right)^d\dd x. \end{equation} In Section~\ref{sec:prelims} below, the isometries of these spaces are described.
In particular, they have a transitive family that means that the point identified by $\orig$ is arbitrary. \paragraph{Marked hyperbolic random connection model.} We first let $\Ecal$ be a Borel measure space with probability measure $\Pcal$. Then, given $\lambda>0$, the vertex set of the marked hyperbolic RCM, $\eta$, is distributed as a Poisson point process on $\HypDim\times \Ecal$ with intensity measure given by the product measure of the hyperbolic measure and the probability measure: $\lambda\nu := \lambda\mu\otimes\Pcal$. Two distinct vertices then form an edge independently of all other vertices and possible edges with a probability given by a measurable \emph{adjacency function} $\connf\colon \R_+\times\Ecal^2\to \left[0,1\right]$ that is symmetric under the transposition of the two mark arguments. Note that throughout we will assume that $\connf>0$ on a $\Leb\otimes \Pcal\otimes\Pcal$-positive set so that some edges do indeed occur. We write the event that two vertices $\mathbf{x},\mathbf{y}\in\eta\subset\HypDim\times\Ecal$ form an edge as $\mathbf{x}\sim\mathbf{y}$. Given $x,y\in\HypDim$ and $a,b\in\Ecal$, \begin{equation} \label{eqn:EdgeProbs} \mathbb{P}\left(\left(x,a\right)\sim\left(y,b\right)\right) = \connf\left(\dist{x,y};a,b\right). \end{equation} We then use $\xi$ to denote the whole random graph, and we use $\pla$ to denote the law of $\xi$ (and $\ela$ for the associated expectation). A more precise and complete description of construction of a RCM can be found in \cite{HeyHofLasMat19}, where the reader need only replace instances of $\Rd$ with the space $\HypDim\times \Ecal$ we use here. Most of the results contained in this paper hold in a perturbative regime, where we make longer edges more likely. This is achieved by modifying the adjacency function with a scaling function. For all $L>0$ let $\sigma_L\colon\R_+\to \R_+$ be an increasing bijection such that \begin{itemize} \item $\sigma_1(r)=r$, \item for all $r>0$, $L\mapsto \sigma_L(r)$ is increasing and $\lim_{L\to\infty}\sigma_L(r)=\infty$, \end{itemize} and define $\connf_L\colon\R_+\times\Ecal^2\to\left[0,1\right]$ by \begin{equation} \label{eqn:scaledAdjFnct} \connf_L\left(r;a,b\right) := \connf\left(\sigma_L^{-1}(r);a,b\right). \end{equation} We call $\sigma_L$ the scaling function and $\connf_L$ the scaled adjacency function. More generally, the presence of a subscript $L$ in our notation then indicates that the scaled adjacency function $\connf_L$ is being used in the place of the reference adjacency function $\connf$. Note that this is different to the concept of \emph{spread-out} models that appear in lace expansion arguments like \cite{HarSla90,HeyHofLasMat19}, because we do not multiply the adjacency function by a small factor in addition to changing the length scales. One is of course free to do so, but it would play no role in the argument. We shall be particularly interested in a scaling function that linearly scales the volume of balls. For all dimensions $d\geq 1$ and radii $r\geq0$, define \begin{equation} \mathbf{V}_d(r):= \int^r_0\left(\sinh t\right)^{d-1}\dd t. \end{equation} Note that the function $\mathbf{V}_d\colon \R_+\to \R_+$ is a bijection. We can then define the \emph{volume-linear scaling function} to be \begin{equation} s_L(r) := \mathbf{V}_d^{-1}\left(L \mathbf{V}_d(r)\right) \end{equation} for all $d\geq 1$ and $r\geq 0$. The name arises because this scaling linearly transforms the volume of hyperbolic balls of any radius. 
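To make these definitions concrete, the following minimal Python sketch (illustrative only; the helper names are ours) evaluates the ball-model distance and the volume-linear scaling function numerically. It uses the standard ball-model identity $\cosh \dist{x,y} = 1+\frac{2\abs*{x-y}^2}{\left(1-\abs*{x}^2\right)\left(1-\abs*{y}^2\right)}$, which reduces to \eqref{eqn:HypDistance} when $y=\orig$, and the fact that for $d=2$ one has $\mathbf{V}_2(r)=\cosh r-1$ and hence the closed form $s_L(r)=\arcosh\left(1+L\left(\cosh r-1\right)\right)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def dist_ball(x, y):
    """Hyperbolic distance between two points of the Poincare ball model,
    via cosh(d) = 1 + 2|x-y|^2 / ((1-|x|^2)(1-|y|^2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + num / den)

def V(r, d):
    """V_d(r) = int_0^r (sinh t)^(d-1) dt."""
    return quad(lambda t: np.sinh(t) ** (d - 1), 0.0, r)[0]

def s(L, r, d):
    """Volume-linear scaling s_L(r) = V_d^{-1}(L V_d(r)), inverted numerically."""
    target = L * V(r, d)
    hi = max(r, 1.0)
    while V(hi, d) < target:          # bracket the root
        hi *= 2.0
    return brentq(lambda u: V(u, d) - target, 0.0, hi)

# distance from the origin agrees with 2 artanh |x|
x = np.array([0.3, 0.0])
print(dist_ball(x, np.zeros(2)), 2 * np.arctanh(0.3))
# d = 2: closed form s_L(r) = arcosh(1 + L (cosh r - 1))
L, r = 10.0, 1.5
print(s(L, r, 2), np.arccosh(1.0 + L * (np.cosh(r) - 1.0)))
\end{verbatim}
Since $\mathbf{V}_d(r)$ grows like a constant multiple of $\e^{(d-1)r}$, the closed form for $d=2$ (and the numerical inversion for general $d$) shows that, heuristically, for large $r$ the volume-linear scaling essentially \emph{shifts} distances, $s_L(r)\approx r+\frac{\log L}{d-1}$, rather than multiplying them as the length-linear scaling does.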
Scaling functions are discussed further in Appendix~\ref{app:ScalingFunctions}. For example, the advantages of the volume-linear scaling function over the simpler \emph{length-linear} $\sigma_L(r)=Lr$ scaling function are identified, and some non-intuitive properties of scaling functions are described that arise from the space $\HypDim$. \paragraph{Critical Behaviour.} Let $\orig$ designate the (arbitrary) origin of $\HypDim$ -- it will sometimes also be convenient to use the notation $\origin{a}=\left(\orig,a\right)\in\HypDim\times \Ecal$. For $\mathbf{x}\in\eta$ let $\C\left(\mathbf{x},\xi\right)$ denote the set of vertices in $\eta$ that are connected to $\mathbf{x}$ in $\xi$, and naturally $\#\C\left(\mathbf{x},\xi\right)$ then denotes the size of this set under the counting measure. The notation $\xi^\mathbf{x}$ indicates that $\xi$ has been augmented by a vertex at $\mathbf{x}$, and since $\eta$ is distributed as a Poisson point process this is equivalent to conditioning on $\mathbf{x}\in\eta$. More details of this procedure can be found in Section~\ref{sec:prelims}. For all $\lambda>0$ and $L>0$, we define the \emph{susceptibility} function $\chi_{\lambda,L}\colon \Ecal \to \left[0,\infty\right]$ and \emph{percolation probability} function $\theta_{\lambda,L}\colon \Ecal\to \left[0,1\right]$ by \begin{align} \chi_{\lambda,L}(a)&:= \E_{\lambda,L}\left[\#\C\left(\origin{a},\xi^{\origin{a}}\right)\right],\\ \theta_{\lambda,L}(a) &:= \mathbb{P}_{\lambda,L}\left(\#\C\left(\origin{a},\xi^{\origin{a}}\right) = \infty\right), \end{align} respectively. We then define the \emph{susceptibility critical intensity} and \emph{percolation critical intensity} using the susceptibility and percolation probability respectively: \begin{align} \lambda_T(L) :=& \inf\left\{\lambda>0\colon \esssup_{a\in\Ecal}\chi_{\lambda,L}(a) = \infty\right\}, \label{eqn:criticalSusceptibility} \\ \lambda_\mathrm{c}(L) :=& \inf\left\{\lambda>0\colon \esssup_{a\in\Ecal}\theta_{\lambda,L}(a)>0\right\}. \end{align} If we fix $L>0$, then we say that the critical exponents $\gamma=\gamma(L)$ and $\beta=\beta(L)$ exist in the bounded ratio sense if there exist $\lambda$-independent constants $C_1,C_2\in\left(0,\infty\right)$ and $\varepsilon>0$ such that \begin{equation} C_1\left(\lambda_{T}(L)-\lambda\right)^{-\gamma} \leq \norm*{\chi_{\lambda,L}}_p \leq C_2\left(\lambda_{T}(L)-\lambda\right)^{-\gamma} \end{equation} for all $\lambda\in\left(\lambda_{T}(L)-\varepsilon,\lambda_{T}(L)\right)$ and $p\in\left[1,\infty\right]$, and \begin{equation} C_1\left(\lambda-\lambda_{\mathrm{c}}(L)\right)^\beta \leq \norm*{\theta_{\lambda,L}}_p\leq C_2\left(\lambda - \lambda_{\mathrm{c}}(L)\right)^\beta \end{equation} for all $\lambda\in\left(\lambda_{\mathrm{c}}(L),\lambda_{\mathrm{c}}(L)+\varepsilon\right)$ and $p\in\left[1,\infty\right]$. In these, $\norm*{\cdot}_p$ denotes the $L^p$ norm with respect to the measure $\Pcal$ on $\Ecal$. We also say that the cluster tail critical exponent $\delta=\delta(L)$ exists in the bounded ratio sense if there exist constants $C_1,C_2\in\left(0,\infty\right)$ such that \begin{equation} C_1 n^{-1-\frac{1}{\delta}} \leq \mathbb{P}_{\lambda_\mathrm{c}(L),L}\left(\#\C\left(\origin{a},\xi^{\origin{a}}\right) = n\right) \leq C_2 n^{-1-\frac{1}{\delta}} \end{equation} for all $n\in\N$ and $\Pcal$-almost every $a\in\Ecal$.
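For orientation (this comparison is standard and is not used in any proof), the values $\gamma=1$, $\beta=1$, and $\delta=2$ appearing in Theorem~\ref{thm:meanfield} below are precisely the critical exponents of a Galton--Watson branching process with Poisson offspring: if the offspring mean is $m<1$, the expected total progeny equals $\left(1-m\right)^{-1}$; as $m\downarrow 1$ the survival probability behaves like $2\left(m-1\right)$; and at criticality the total progeny $T$ satisfies $\prob{T=n}\asymp n^{-3/2}=n^{-1-\frac{1}{2}}$. This is the sense in which these exponent values are called mean-field values.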
In addition to the percolation critical intensity, we will discuss the \emph{uniqueness critical intensity} defined by \begin{equation} \lambda_{\mathrm{u}}(L) := \inf\left\{\lambda>0\colon \mathbb{P}_{\lambda,L}\left(\exists! \text{ infinite cluster}\right)>0\right\}. \end{equation} If $\lambda<\lambda_\mathrm{c}(L)$, then there are almost-surely no infinite clusters, and therefore $\lambda_\mathrm{u}(L)\geq \lambda_\mathrm{c}(L)$. We will be interested in finding whether this inequality is strict. Note that the invariance of both the vertex process and adjacency function under the isometries means that all the distributions $\mathbb{P}_{\lambda,L}$ are invariant under the isometries. Therefore by standard ergodicity arguments $\mathbb{P}_{\lambda,L}\left(\exists! \text{ infinite cluster}\right)\in\left\{0,1\right\}$, and so $\lambda_{\mathrm{u}}(L)= \inf\left\{\lambda>0\colon \exists! \text{ infinite cluster } \mathbb{P}_{\lambda,L}\text{-almost surely}\right\}$. \subsection{General Results} \label{sec:results} We first state the two main general theorems, before applying them to notable specific cases in Section~\ref{sec:SpecificModels}. One of these general theorems proves that a non-uniqueness phase exists, and the other proves that critical exponents take their mean-field values. These will be proven in two types of regime. In one regime we require that there are only finitely many marks, which allows us more freedom in the scaling function chosen. In the other regime we allow for infinitely many marks, but to avoid excessive complications we require that the scaling function takes the nice volume-scaling form. \begin{assumptionp}{F}[Finitely Many Marks] \label{assump:finitelymany} There are only finitely many marks, and there exists $L_0>0$ such that $L\geq L_0$ implies that \begin{equation} \label{eqn:pointwiseIntegrable} \max_{a,b\in\Ecal}\int^\infty_0\connf_L\left(r;a,b\right)r\exp\left(\frac{1}{2}\left(d-1\right) r\right)\dd r<\infty. \end{equation} Also assume that \begin{equation} \label{eqn:EigenValueRatio} \lim_{L\to\infty}\frac{\max_{a,b\in\Ecal}\int^R_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r}{\max_{a,b\in\Ecal}\int^\infty_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r} = 0 \end{equation} for all $R<\infty$. \end{assumptionp} Note that under Assumption~\ref{assump:finitelymany}, without loss of generality we assume that $\Pcal\left(a\right)>0$ for all $a\in\Ecal$. In Appendix~\ref{app:ScalingFunctions}, it is shown that \eqref{eqn:EigenValueRatio} holds for all adjacency functions if the scaling function is volume-linear or length-linear, and that it holds for all scaling functions for some classes of adjacency function. However, Lemma~\ref{lem:BadExample} demonstrates that \eqref{eqn:EigenValueRatio} does not hold for a specific choice of scaling and adjacency function. Therefore \eqref{eqn:EigenValueRatio} is indeed a necessary part of Assumption~\ref{assump:finitelymany}. 
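As a quick numerical illustration of \eqref{eqn:EigenValueRatio} (a sketch only: the two-mark Boolean-type adjacency $\connf\left(r;a,b\right)=\Id_{\left\{r<a+b\right\}}$, the length-linear scaling $\sigma_L(r)=Lr$, the dimension $d=2$, and the cutoff $R=2$ are chosen purely for convenience), the following Python snippet evaluates the ratio in \eqref{eqn:EigenValueRatio} using the closed form $\int^u_0\sinh r\,\dd r=\cosh u-1$. For this choice \eqref{eqn:pointwiseIntegrable} holds trivially because the scaled adjacency function has bounded range, and the printed ratio decays rapidly in $L$.
\begin{verbatim}
import numpy as np

R, marks = 2.0, [1.0, 2.0]              # cutoff R and mark set (illustrative)
V2 = lambda u: np.cosh(u) - 1.0         # V_2(u) = int_0^u sinh r dr

for L in [1.0, 4.0, 16.0, 64.0]:
    # length-linear scaling: phi_L(r; a, b) = 1{ r < L (a + b) }
    num = max(V2(min(L * (a + b), R)) for a in marks for b in marks)
    den = max(V2(L * (a + b)) for a in marks for b in marks)
    print(f"L = {L:5.0f}   ratio = {num / den:.3e}")
\end{verbatim}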
\begin{assumptionp}{S}[Volume-Linear Scaling] \label{assump:specialscale} The scaling function is volume-linear, \begin{equation} \label{eqn:pointwiseIntegrableVolumeLinear} \int^\infty_0\connf\left(r;a,b\right)r\exp\left(\frac{1}{2}\left(d-1\right) r\right)\dd r< \infty \end{equation} for $\Pcal$-almost every $a,b\in\Ecal$, and \begin{equation} \label{eqn:L2normVolumeScale} \sup_{f\in L^2\left(\Ecal\right), f\ne 0}\frac{\int_\Ecal\abs*{\int_\Ecal\left(\int^\infty_0\connf\left(r;a,b\right)r\exp\left(\frac{1}{2}\left(d-1\right) r\right)\dd r\right) f(b)\Pcal\left(\dd b\right)}^2\Pcal\left(\dd a\right)}{\int_{\Ecal}\abs*{f(a)}^2\Pcal\left(\dd a\right)}<\infty. \end{equation} \end{assumptionp} The condition \eqref{eqn:pointwiseIntegrableVolumeLinear} corresponds to \eqref{eqn:pointwiseIntegrable}, taking into account the fact that the volume-linear scaling ensures that if \eqref{eqn:pointwiseIntegrableVolumeLinear} is finite for one scaling $L$ then it is finite for any $L>0$. In the operator notation we introduce in Section~\ref{sec:Operators}, the conditions \eqref{eqn:pointwiseIntegrable} and \eqref{eqn:L2normVolumeScale} ensure that the operator norms $\norm*{\Opconnf_L}_{2\to 2}<\infty$ for $L$ sufficiently large. The choice of the volume-linear scaling also means that no condition corresponding to \eqref{eqn:EigenValueRatio} is required (see Lemma~\ref{lem:VolumeLinearNice}). \begin{theorem}\label{thm:NonUniqueness} If Assumption~\ref{assump:finitelymany} or Assumption~\ref{assump:specialscale} holds, then for sufficiently large parameter $L$ we have $\lambda_u(L) > \lambda_c(L)$. \end{theorem} The restriction of Assumptions~\ref{assump:finitelymany} and \ref{assump:specialscale} to \ref{assump:finitelymanyPlus} and \ref{assump:specialscalePlus} respectively in Theorem~\ref{thm:meanfield} results from inheriting conditions from \cite{caicedo2023criticalexponentsmarkedrandom} (recalled exactly in Section~\ref{sec:ProofCritExponents}). For succinctness, we introduce some notation. Given $a,b\in\Ecal$, $k\geq 1$, and $L>0$, let us define \begin{align} D_L(a,b) &:= \int_{\HypDim}\connf_L\left(x;a,b\right)\mu\left(\dd x\right),\label{eqn:DegreeFunction}\\ D_L^{(k)}(a,b) &:= \int_{\Ecal^{k-1}}\left(\prod^k_{j=1} D_L(c_{j-1},c_j)\right)\prod^{k-1}_{i=1}\Pcal\left(\dd c_i\right),\label{eqn:kDegreeFunction} \end{align} where $c_0=a$ and $c_k=b$. The functions $D(a,b)$ and $D^{(k)}(a,b)$ (with no subscript $L$) are constructed in exactly the same way with the reference function $\connf$ in the place of $\connf_L$. 
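When $\Ecal$ is finite, \eqref{eqn:kDegreeFunction} is simply an iterated matrix product: writing $D_L$ as a matrix indexed by the marks and setting $M(a,c):=D_L(a,c)\Pcal\left(c\right)$, one has $D^{(k)}_L=M^{k-1}D_L$ as matrices. The short Python sketch below (with placeholder numbers that do not come from any model in this paper) checks this identity against the defining sum and evaluates ratios of the kind appearing in the assumptions that follow.
\begin{verbatim}
import numpy as np

# Placeholder finite-mark data: D[a, b] plays the role of D_L(a, b),
# P the mark probabilities.  The values are illustrative only.
D = np.array([[3.0, 1.0],
              [1.0, 2.0]])
P = np.array([0.5, 0.5])

def D_k(D, P, k):
    """D^{(k)} via the matrix identity D^{(k)} = (D diag(P))^{k-1} D."""
    M = D * P[None, :]                          # M[a, c] = D(a, c) P(c)
    return np.linalg.matrix_power(M, k - 1) @ D

# cross-check against the defining sum for k = 3
direct = np.array([[sum(D[a, c1] * P[c1] * D[c1, c2] * P[c2] * D[c2, b]
                        for c1 in range(2) for c2 in range(2))
                    for b in range(2)] for a in range(2)])
assert np.allclose(direct, D_k(D, P, 3))

# ratios of the type appearing in the assumptions that follow (single L)
row = (D * P[None, :]).sum(axis=1)              # a -> sum_b D(a, b) P(b)
print(D.max() / row.min())
print([float(D_k(D, P, k).min() / row.max() ** k) for k in range(1, 6)])
\end{verbatim}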
\begin{assumptionp}{F+}\label{assump:finitelymanyPlus} In addition to Assumption~\ref{assump:finitelymany}, the following inequalities hold: \begin{align} \limsup_{L\to\infty}\frac{\max_{a,b\in\Ecal} D_L(a,b)}{\min_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)}&<\infty,\label{eqn:maxmaxOverminsum}\\ \liminf_{L\to\infty}\sup_{k\geq 1}\frac{\min_{a,b\in\Ecal} D^{(k)}_L(a,b)}{\left(\max_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)\right)^k} &>0.\label{eqn:kstepRatio} \end{align} \end{assumptionp} \begin{assumptionp}{S+}\label{assump:specialscalePlus} In addition to Assumption~\ref{assump:specialscale}, the following inequalities hold for the reference adjacency function: \begin{align} \esssup_{a,b\in\Ecal} D(a,b)&<\infty,\label{eqn:bounddegreedensity}\\ \essinf_{a\in\Ecal}\int_\Ecal D(a,b)\Pcal\left(\dd b\right)&>0\label{eqn:infIntegralDegree},\\ \esssup_{a\in\Ecal}\sup_{k\geq 1}\essinf_{b\in\Ecal}D^{(k)}(a,b)&>0.\label{eqn:strongIrreducibilityCondition} \end{align} \end{assumptionp} \begin{theorem} \label{thm:meanfield} If Assumption~\ref{assump:finitelymanyPlus} or Assumption~\ref{assump:specialscalePlus} holds, then for all $L$ sufficiently large the critical exponents exist and $\gamma(L)=1$, $\beta(L)=1$, and $\delta(L)=2$. Furthermore, $\lambda_T(L)=\lambda_\mathrm{c}(L)$. \end{theorem} The additions of Assumption~\ref{assump:specialscalePlus} and Assumption~\ref{assump:finitelymanyPlus} serve the same role, in that they allow us to use results from \cite{caicedo2023criticalexponentsmarkedrandom} (summarized as Proposition~\ref{prop:summaryCD24} in this paper). \begin{remark} The requirement that $L$ be sufficiently large is very important for the proofs of both Theorem~\ref{thm:NonUniqueness} and \ref{thm:meanfield}. While the analogy with Bernoulli bond percolation on locally finite quasi-transitive nonamenable (and, say, Gromov hyperbolic) graphs suggests that the non-uniqueness phase should be non-empty for at least a large class of RCMs on $\HypDim\times\Ecal$, the general argument presented here relies heavily on the perturbative parameter $L$. That said, Proposition~\ref{prop:nonperturb} in Section~\ref{sec:SpecificModels} does show non-uniqueness for some models if one is willing to drop the requirement that the resulting graph is locally finite. \end{remark} \subsection{Applications to Specific Models} \label{sec:SpecificModels} We now see how Theorems~\ref{thm:NonUniqueness} and \ref{thm:meanfield} can apply to various notable examples of random connection models. We also see in Proposition~\ref{prop:nonperturb} a result that is not strictly a corollary of these theorems, but can be derived by applying similar techniques. In particular, it is non-perturbative unlike the general theorems. \paragraph{Scaled Boolean Disc Model on $\HypDim$.} Let $\Ecal\subset\left(0,\infty\right)$ and $\Pcal$ be a probability measure on $\Ecal$, and for all $r\geq 0$ and $a,b\in\Ecal$ let \begin{equation} \connf\left(r;a,b\right)=\Id_{\left\{r<a+b\right\}}. \end{equation} The RCM with this reference adjacency function is a Boolean disc model in that it can be interpreted by placing balls with random radii with distribution $\Pcal$ centred on the points of a Poisson point process (intensity $\lambda$) on $\HypDim$. Vertices in the RCM are then adjacent if the associated two balls overlap. We refer to the model with the scaled adjacency function $\connf_L$ as the \emph{scaled} Boolean disc model. 
The distinction is emphasised because in general the scaled Boolean disc model does not have the ``placing balls" interpretation. If the scaling function is length-linear (i.e. $\sigma_L(r)=Lr$) then the interpretation survives: if the original ball had radius $R$, then the scaled ball has radius $LR$ and vertices share an edge if and only if their balls intersect. If $\Ecal$ is a singleton (without loss of generality $\Ecal=\left\{1\right\}$), then the interpretation survives again and the balls all have radius $\frac{1}{2}\sigma_L(2)$. However, if the support of $\Pcal$ is at least two elements of $\left(0,\infty\right)$, and the scaling function is not linear in $r$, then radii cannot be assigned to each mark so that vertices are adjacent in the RCM if and only if the balls intersect. If one insisted on retaining the interpretation of ``placing balls" on the vertices, one could consider a model where $\connf_L(r;a,b) := \Id_{\left\{r<\sigma_L(a)+\sigma_L(b)\right\}}$, for some scaling function $\sigma_L$. However, if $\sigma_L(a)+\sigma_L(b)\ne \sigma_L(a+b)$ for some $a,b\in\Ecal$ for sufficiently large $L$, then this is not in our framework. \begin{restatable}{corollary}{BooleanCorollary} \label{lem:BooleanCorollary} Consider the scaled Boolean Disc model on $\HypDim$. \begin{enumerate}[label=(\alph*)] \item \label{enum:BooleanCorPart0} If $R^*:=\sup\Ecal<\infty$ and $\Pcal\left(\left\{R^*\right\}\right)>0$, then for sufficiently large $L$ we have $\lambda_u(L) > \lambda_c(L)$. \item \label{enum:BooleanCorPart1} If $\sigma_L$ is volume-linear and \begin{equation} \label{eqn:FiniteExpectedVolume} \int^\infty_0r^2\exp\left(\left(d-1\right)r\right)\Pcal\left(\dd r\right)<\infty, \end{equation} then for sufficiently large $L$ we have $\lambda_u(L) > \lambda_c(L)$. \item \label{enum:BooleanCorPart2} If \begin{equation} \label{eqn:InfiniteExpectedVolume}\int^\infty_0\exp\left(\left(d-1\right)r\right)\Pcal\left(\dd r\right)=\infty, \end{equation} then for all $L\geq 1$ almost every vertex is in the same infinite cluster, and $\lambda_{\mathrm{u}}(L)=\lambda_{\mathrm{c}}(L)=0$. \item \label{enum:BooleanCorPart3} If $\sigma_L$ is volume-linear and $R^*<\infty$, then for sufficiently large $L$ we also have $\gamma(L)=1$, $\beta(L)=1$, $\delta(L)=2$, and $\lambda_T(L)=\lambda_{\mathrm{c}}(L)$. \end{enumerate} \end{restatable} Observe that parts \ref{enum:BooleanCorPart0} and \ref{enum:BooleanCorPart2} do not require a specific scaling function. Therefore these parts apply to models with the length-linear scaling function, on which the ``placing balls" interpretation survives. Also note that apart from the extra factor of $r^2$ in \eqref{eqn:FiniteExpectedVolume}, the volume-linear scaling choice, and the requirement that $L$ be sufficiently large, the condition \eqref{eqn:FiniteExpectedVolume} is the complement of \eqref{eqn:InfiniteExpectedVolume}. This leaves only a small space between the two regimes uncovered. Corollary~\ref{lem:BooleanCorollary} parts \ref{enum:BooleanCorPart0} and \ref{enum:BooleanCorPart1} are both generalisations of the non-uniqueness result of \cite{tykesson2007number} in dimensions $d\geq 3$ -- their result in these dimensions is recovered by taking $\Ecal=\left\{1\right\}$, for example. It does not recover their result on $\HypTwo$ because the corollary here still requires the perturbative ``sufficiently large $L$" condition for that part. 
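For readers who wish to experiment numerically, the following self-contained Python sketch simulates the (unscaled) Boolean disc model on $\HypTwo$ restricted to a hyperbolic ball of radius $r_{\max}$. It samples a Poisson number of vertices with mean $\lambda\mu\left(B_{r_{\max}}\right)=2\pi\lambda\left(\cosh r_{\max}-1\right)$, draws the radial coordinate from the density proportional to $\sinh r$ on $\left[0,r_{\max}\right]$, computes pairwise distances with the hyperbolic law of cosines, and joins two vertices when their distance is smaller than the sum of their disc radii. All parameter values and helper names are illustrative, and boundary effects of the finite window are ignored.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_boolean_disc_H2(lam, rmax, radius_sampler):
    """Boolean disc model on H^2 inside a hyperbolic ball of radius rmax.
    Returns polar coordinates (r, theta), the disc radii, and the
    adjacency matrix (a crude O(n^2) construction)."""
    n = rng.poisson(lam * 2 * np.pi * (np.cosh(rmax) - 1.0))  # mean = lam * area
    u = rng.random(n)
    r = np.arccosh(1.0 + u * (np.cosh(rmax) - 1.0))           # density prop. to sinh r
    theta = rng.random(n) * 2 * np.pi
    radii = radius_sampler(n)
    # hyperbolic law of cosines for the pairwise distances
    cosh_d = (np.cosh(r)[:, None] * np.cosh(r)[None, :]
              - np.sinh(r)[:, None] * np.sinh(r)[None, :]
                * np.cos(theta[:, None] - theta[None, :]))
    dist = np.arccosh(np.clip(cosh_d, 1.0, None))
    adj = dist < radii[:, None] + radii[None, :]
    np.fill_diagonal(adj, False)
    return r, theta, radii, adj

def cluster_sizes(adj):
    """Connected-component sizes via a simple depth-first search."""
    n = len(adj)
    seen, sizes = np.zeros(n, dtype=bool), []
    for v in range(n):
        if seen[v]:
            continue
        stack, size, seen[v] = [v], 0, True
        while stack:
            w = stack.pop()
            size += 1
            for x in np.flatnonzero(adj[w]):
                if not seen[x]:
                    seen[x] = True
                    stack.append(x)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# all radii equal to 1 (so E is a singleton), lambda = 0.05, window radius 6
r, theta, radii, adj = sample_boolean_disc_H2(0.05, 6.0, lambda n: np.ones(n))
print(len(r), cluster_sizes(adj)[:5])
\end{verbatim}
Repeating the experiment over a grid of intensities $\lambda$ gives a crude impression of how the largest clusters grow, although distinguishing $\lambda_{\mathrm{c}}$ from $\lambda_{\mathrm{u}}$ genuinely requires the infinite-volume arguments developed in this paper.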
For Boolean disc models on $\R^d$, the result of \cite{chebunin2024uniqueness} proves that there is at most one infinite cluster, and \cite{hall1985continuum} proved that almost every point in $\R^d$ is covered by one of the balls if the radius distribution has infinite $d$ moment. The values of the critical exponents are less well understood. If the radius distribution has a finite $5d-3$ moment, then \cite{DumRaoTas20} proves a lower bound for the percolation function that corresponds to the exponent $\gamma=1$, and \cite{DewMui} extends this to lower bounds on the susceptibility function and critical cluster tail probabilities that correspond to the mean-field exponents $\beta=1$ and $\delta=2$. The $5d-3$ moment is not expected to be `true' and is instead a consequence of the technique used. If we further restrict to radii distributions with \emph{bounded support}, \cite{DicHey2022triangle} shows that the triangle diagram is finite for sufficiently large $d$ (a similar approach should work for $d>6$ and a sufficiently `spread-out' radius distribution), and then \cite{caicedo2023criticalexponentsmarkedrandom} can be used to derive matching upper bounds (up to a factor of a constant) for the percolation and susceptibility functions and the critical cluster tail probabilities. \paragraph{Weight-dependent hyperbolic RCMs.} Let $\Ecal=\left(0,1\right)$ with Lebesgue measure: $\Pcal=\Leb\left(0,1\right)$. Let the profile function $\rho\colon \R_+\to \left[0,1\right]$ be non-increasing and the kernel function $\kappa\colon \left(0,1\right)^2\to \left(0,\infty\right)$ be measurable and non-increasing in both arguments. We define a weight-dependent random connection model on $\HypDim$ by its having the adjacency function \begin{equation} \connf\left(r;a,b\right) = \rho\left(s^{-1}_{\kappa(a,b)}\left(r\right)\right) \end{equation} for all $r\geq 0$ and $a,b\in\left(0,1\right)$. This model has not been studied in this generality before, but is a natural analogy of the weight-dependent Euclidean RCMs discussed in literature such as \cite{KOMJATHY20201309,GraLucMor,GraHeyMonMor,HofstadHoornMaitra_2023,jorritsma2024large} (variously also called: \emph{general geometric inhomogeneous}, \emph{spatial inhomogeneous}, and \emph{kernel-based spatial} random graphs). In both the hyperbolic and Euclidean cases the form of the input to the profile function ensures that density of edges between marks is just given by the product of the mass of the profile function and a function of the kernel function evaluated on those marks. In our case, the use of the volume-linear scaling function means that the density of $a$-$b$ edges is given by \begin{equation} \int_{\HypDim}\connf_L\left(\dist{x,\orig};a,b\right)\mu\left(\dd x\right) = L\kappa\left(a,b\right)\int_{\HypDim}\rho\left(\dist{x,\orig}\right)\mu\left(\dd x\right). \end{equation} In particular this allows for the possibility that under appropriate kernel functions, the degree of a vertex with uniformly chosen mark asymptotically follows a Pareto distribution rather than a Poisson distribution as it would in the unmarked case. The homomorphism structure of $s_L$ (see \eqref{eqn:scalingHomomorphism}) means that the volume-linear scaling parameter $L$ is equivalent to a parameter multiplying the kernel by $L$. In the Euclidean literature this is often called the edge density parameter, and is denoted by the character $\beta$. In this paper $\beta$ is reserved for the percolation critical exponent. 
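For the reader's convenience, here is a brief sketch of why the edge-density identity displayed above holds; it uses nothing beyond the definitions already given. Since $\mathbf{V}_d\left(s_L(r)\right)=L\mathbf{V}_d(r)$, the volume-linear scalings satisfy $s_L\circ s_M=s_{LM}$, so that $\connf_L\left(r;a,b\right)=\rho\left(s_{\kappa(a,b)}^{-1}\left(s_L^{-1}(r)\right)\right)=\rho\left(s_{L\kappa(a,b)}^{-1}(r)\right)$. Writing the integral in geodesic polar coordinates and substituting $u=s_{L\kappa(a,b)}^{-1}(r)$, so that $\left(\sinh r\right)^{d-1}\dd r=\dd \mathbf{V}_d(r)=L\kappa(a,b)\,\dd\mathbf{V}_d(u)=L\kappa(a,b)\left(\sinh u\right)^{d-1}\dd u$, gives \begin{equation*} \int_{\HypDim}\connf_L\left(\dist{x,\orig};a,b\right)\mu\left(\dd x\right) = \omega_{d-1}\int^\infty_0\rho\left(s_{L\kappa(a,b)}^{-1}(r)\right)\left(\sinh r\right)^{d-1}\dd r = L\kappa(a,b)\,\omega_{d-1}\int^\infty_0\rho\left(u\right)\left(\sinh u\right)^{d-1}\dd u, \end{equation*} where $\omega_{d-1}$ denotes the total surface measure of the unit sphere in $\Rd$; the right-hand side is exactly $L\kappa(a,b)\int_{\HypDim}\rho\left(\dist{x,\orig}\right)\mu\left(\dd x\right)$.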
We now highlight five specific weight-dependent RCMs by specifying their respective kernel function. In each case the parameter $\zeta>0$, while $a\wedge b:= \min\left\{a,b\right\}$ and $a\vee b:= \max\left\{a,b\right\}$. \begin{itemize} \item The \emph{product kernel} is defined by \begin{equation} \label{eqn:prodkernel} \kappa^{\textrm{prod}}(a,b) := \left(ab\right)^{-\zeta}. \end{equation} \item The \emph{strong kernel} is defined by \begin{equation} \label{eqn:strongkernel} \kappa^{\textrm{strong}}(a,b) := \left(a\wedge b\right)^{-\zeta}. \end{equation} \item The \emph{sum kernel} is a related kernel function defined by \begin{equation} \kappa^{\textrm{sum}}(a,b) := a^{-\zeta} + b^{-\zeta}. \end{equation} The sum and strong kernels are related because $\kappa^{\textrm{strong}}\leq \kappa^{\textrm{sum}} \leq 2 \kappa^{\textrm{strong}}$, and therefore show qualitatively the same behaviour. Observe that in this interpretation of the weight-dependent hyperbolic RCM, the sum kernel does not produce a Boolean disc model (specifically the ``placing balls on each vertex" interpretation), because $s_L(r_1)+s_L(r_2)\ne s_L(r_1+r_2)$ in general. It is also a different model to the \emph{scaled} Boolean disc model described above, because the ``sum'' appears in the scaling function here, while it appeared in the reference adjacency function for the scaled Boolean disc model. \item The \emph{weak kernel} is defined by \begin{equation} \kappa^{\textrm{weak}}(a,b) := \left(a\vee b\right)^{-1-\zeta}. \end{equation} \item The \emph{preferential attachment kernel} is defined by \begin{equation} \label{eqn:prefattachkernel} \kappa^{\textrm{pa}}(a,b) := \left(a\vee b\right)^{-1+\zeta}\left(a\wedge b\right)^{-\zeta}. \end{equation} \end{itemize} In \cite{GraLucMor,GraHeyMonMor} the parameter $\zeta$ was denoted by $\gamma$, but this latter character will be reserved for the susceptibility critical exponent in this paper. As the following Corollaries will show, we cannot actually say anything about the behaviour of the weak and preferential attachment models, but we include them to show the limitations of the approach (which possibly suggests a different behaviour for these models). In the following results, let $\Kcal$ denote the linear operator acting on $L^2\left(\Ecal\right)$ by \begin{equation} \left(\Kcal f\right)(a) = \int_{\Ecal}\kappa\left(a,b\right)f(b)\Pcal\left(\dd b\right), \end{equation} and let $\norm*{\Kcal}_{2\to 2}$ denote the operator norm: \begin{equation} \norm*{\Kcal}_{2\to 2} = \sup\left\{\frac{\norm{\Kcal f}_2}{\norm{f}_2}\colon f\in L^2(\Ecal),f\ne 0\right\}. \end{equation} \begin{corollary} \label{cor:WDRCMGeneral} Consider a weight-dependent random connection model on $\HypDim$ with volume-linear scaling. If \begin{equation} \label{eqn:WDRCMprofileCondition} \int^\infty_0\rho\left(r\right) r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r <\infty \end{equation} and $\norm*{\Kcal}_{2\to 2}<\infty$, then for sufficiently large $L$ we have $\lambda_u(L) > \lambda_c(L)$. \end{corollary} \begin{corollary} \label{cor:WDRCMSpecific} For the product, strong, and sum kernels, \begin{equation} \zeta<\frac{1}{2} \iff \norm*{\Kcal}_{2\to 2}<\infty. \end{equation} For the weak and preferential attachment kernels, $\norm*{\Kcal}_{2\to 2}=\infty$ for all $\zeta>0$.
Therefore for weight-dependent random connection models on $\HypDim$ with product, strong, and sum kernels with parameter $\zeta<\frac{1}{2}$, if the scaling is volume-linear and the profile function satisfies \eqref{eqn:WDRCMprofileCondition} we have $\lambda_{\mathrm{u}}(L) > \lambda_{\mathrm{c}}(L)$ for $L$ sufficiently large. \end{corollary} While the following result is not strictly a corollary of the theorems above, by using the bound on the uniqueness threshold developed here we can produce a non-perturbative result for weight-dependent hyperbolic RCMs with moderately heavy-tailed profile functions. \begin{restatable}{prop}{nonperturb} \label{prop:nonperturb} Consider a weight-dependent random connection model on $\HypDim$. If \begin{enumerate}[label=(\alph*)] \item \begin{equation}\label{eqn:InfiniteDegree} \int^\infty_0\rho(r)\exp\left(\left(d-1\right)r\right)\dd r=\infty, \end{equation} \item \begin{equation}\label{eqn:ProfileL2bound} \int^\infty_0\rho(r)r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r<\infty, \end{equation} \item and \begin{equation} \norm*{\Kcal}_{2\to 2}<\infty, \end{equation} \end{enumerate} then $\lambda_T=\lambda_\mathrm{c}=0$ and $\lambda_\mathrm{u}>0$. Furthermore, $\theta_\lambda\left(a\right)=1$ for all $\lambda>0$ for a $\Pcal$-positive measure set of $a\in\Ecal$, and therefore the critical exponent $\beta=0$. \end{restatable} If $\lambda_T=0$, then it does not make sense to talk about our definition of the critical exponent $\gamma$. \begin{remark} When talking about Bernoulli bond (or site) percolation it is usually assumed that the original graph is locally finite. Similarly, when talking about (marked or unmarked) RCMs there is usually an integral condition on $\connf$ that ensures that the degrees of each vertex are almost surely finite (and so the resulting graph is locally finite). One reason this is done is that without these conditions one immediately gets $\lambda_\mathrm{c} = 0$ (and $p_\mathrm{c}=0$). If one wishes to study behaviour at or around $\lambda_\mathrm{c}$, then many properties become trivial. Proposition~\ref{prop:nonperturb} shows that uniqueness of the infinite cluster is not one of these properties. The condition \eqref{eqn:InfiniteDegree} ensures that every vertex almost surely has infinite degree (forcing $\lambda_\mathrm{c}=0$), but the proposition identifies conditions under which there are still infinitely many infinite clusters. That is, the resulting graph is not locally finite and yet there is still structure worth studying. \end{remark} \begin{remark} It is worth commenting on the difficulty that arises when trying to prove that the critical exponents take their mean-field values for the weight-dependent models. In each of the cases described by the kernels \eqref{eqn:prodkernel}--\eqref{eqn:prefattachkernel}, the expected degree of a vertex can be made arbitrarily large by taking the mark to be arbitrarily close to $0$. The argument presented here aims to show that the triangle diagram, $\triangle_\lambda$ (see Definition~\ref{defn:TriangleDiagram}), is small enough at criticality to use the result of \cite{caicedo2023criticalexponentsmarkedrandom} to prove that the critical exponents take their mean-field values. However, the presence of the $\esssup$ in the expression for $\triangle_\lambda$ means that $\triangle_\lambda=\infty$ for these weight-dependent models.
If we were to expect that some types of weight-dependent random connection models could exhibit mean-field behaviour, then it is apparent that this particular form of the triangle condition is too strong. The presence of the $\esssup$ in Assumption~\ref{assump:specialscalePlus} has similar issues. \end{remark} \subsection{Preliminaries and Remarks} \label{sec:prelims} Here are some extra details and remarks on some of the objects that appeared in the results. \paragraph{Hyperbolic space.} The (hyperbolic-)isometries of $\mathbb{B}$ can be characterized by M{\"o}bius transformations (that is, finite compositions of reflections). Every isometry of $\mathbb{B}$ extends to a unique M{\"o}bius transformation of $\mathbb{B}$, and every M{\"o}bius transformation restricts to an isometry of $\mathbb{B}$ (see \cite{ratcliffe1994foundations}). If we denote $\partial \mathbb{B}:= \left\{x\in\Rd\colon \abs*{x}=1\right\}$, then the orientation-preserving isometries can be classified by the number of fixed points of the associated M{\"o}bius transformation in $\mathbb{B}$ and in $\partial\mathbb{B}$. These isometries are sometimes called \begin{itemize} \item \emph{rotations} if they fix a point in $\mathbb{B}$, \item \emph{horolations} if they fix no point in $\mathbb{B}$ but a unique point in $\partial\mathbb{B}$, \item \emph{translations} if they fix no point in $\mathbb{B}$ but fix two points in $\partial\mathbb{B}$. \end{itemize} These classes are often called `elliptic', `parabolic', and `hyperbolic' respectively, but we will prefer the former terms to avoid overloading the term `hyperbolic.' Note that while in $\Rd$ rotations cannot be expressed as a composition of translations, in $\HypDim$ all orientation-preserving isometries can be expressed as the composition of two translations (see for example \cite[Exercise~29.13]{martin2012foundations}). The decision to make the edge probability only depend on the spatial distance between the vertices (see \eqref{eqn:EdgeProbs}) and not the full possibilities of the unordered pair $\left\{x,y\right\}$ is no great restriction. If we want our edge law to be \emph{spatially} homogeneous (i.e. there is no special spatial position), then we require that $\connf(x,a,y,b) = \connf(t(x),a,t(y),b)$ for all $x,y\in\HypDim$, $a,b\in \Ecal$, and translations $t$. However, since all rotations and horolations can be expressed as the product of two translations, we then know that $\connf(x,a,y,b) = \connf(i(x),a,i(y),b)$ for all $x,y\in\HypDim$, $a,b\in \Ecal$, and orientation-preserving isometry $i$. Therefore for all $r\geq 0$, $a,b\in\Ecal$, and $x,y\in\left\{u\in\HypDim\colon \dist{u,\orig}=r\right\}$, we have $\connf(x,a,\orig,b)=\connf(y,a,\orig,b)$ and the edge probabilities are of the form $\connf\left(\dist{x,y};a,b\right)$ in \eqref{eqn:EdgeProbs}. The use of the hyperbolic measure $\mu$ ensures that the vertex distribution is invariant under the isometries, and since the positions of the vertices only influence the edge probability through their metric distance the distribution $\mathbb{P}_\lambda$ of the whole graph $\xi$ is also invariant under these isometries. Note that we have no such assumed symmetries in the marks other than the trivial transposition of the marks (which is required since the event $\mathbf{x}\sim \mathbf{y}$ is itself symmetric). The symmetries of the hyperbolic spaces can also be viewed through identifying with the coset space \begin{equation} SO\left(d,1\right) / O(d). 
\end{equation} Specifically $\HypDim$ can be realised as the symmetric space of the simple Lie group $SO(d,1)$. It is this description as a symmetric space that allows us to use spherical transforms (see \cite{helgason1994geometric}). \paragraph{Connections and Augmentations.} Given two distinct vertices $\mathbf{x},\mathbf{y}\in\eta$, we say that $\mathbf{x}$ \emph{is connected to} $\mathbf{y}$ in $\xi$ (written ``$\conn{\mathbf{x}}{\mathbf{y}}{\xi}$") if there is a finite sequence of distinct vertices $\mathbf{x}=\mathbf{v}_0,\mathbf{v}_1,\ldots,\mathbf{v}_{k-1},\mathbf{v}_k=\mathbf{y}\in\eta$ such that $\mathbf{v}_i\sim \mathbf{v}_{i+1}$ for all $i=0,\ldots,k-1$. For $\mathbf{x}\in\HypDim\times\Ecal$, we use $\eta^\mathbf{x}$ and $\xi^{\mathbf{x}}$ to denote the vertex set and graph that have been augmented by $\mathbf{x}$. For $\eta^\mathbf{x}$ this simply means $\eta^\mathbf{x}:=\eta\cup\left\{\mathbf{x}\right\}$. For $\xi^\mathbf{x}$ this also includes the (random) edges between $\mathbf{x}$ and the vertices already present in $\eta$. This can naturally be generalized to produce $\eta^{\mathbf{x}_1,\ldots,\mathbf{x}_k}$ and $\xi^{\mathbf{x}_1,\ldots,\mathbf{x}_k}$ for any finite $k$. A more precise and complete description of this augmenting procedure can be found in \cite{HeyHofLasMat19}. This augmenting procedure allows us to define the \emph{two-point function} as \begin{equation} \tlam\left(\mathbf{x},\mathbf{y}\right):= \mathbb{P}_\lambda\left(\conn{\mathbf{x}}{\mathbf{y}}{\xi^{\mathbf{x},\mathbf{y}}}\right). \end{equation} Since $\pla$ is $\HypDim$-isometry invariant, so is $\tlam$. Therefore we will often write $\tlam\left(\left(x,a\right),\left(y,b\right)\right) = \tlam\left(\dist{x,y};a,b\right)$ and treat $\tlam$ as a $\R_+\times\Ecal^2\to\left[0,1\right]$ function like $\connf$. When we are talking about the two-point function of a model using a scaled adjacency function, $\connf_L$, we will denote it by $\tau_{\lambda,L}$. Note that this does \emph{not} mean that $\tau_{\lambda,L}$ is just $\tlam$ with the distance scaled by some $\sigma_L$. \subsection{Outline of the Paper} In Section~\ref{sec:Operators} we see what conclusions can be drawn by using operators and operator norms, without appealing to significant properties of $\HypDim$. The adjacency function and two-point function are formulated as operators from $L^2\left(\HypDim\times\Ecal\right)$ to itself, and $\lambda_{2\to 2}$ is defined as the intensity at which the operator norm of the two-point operator changes from being finite to being infinite. The main claim can be summarised as \begin{equation} \lambda_{2\to 2}(L) \geq \frac{1}{\norm*{\Opconnf_L}_{2\to 2}}, \end{equation} where $\Opconnf_L$ is the operator derived from the scaled adjacency function. The $\lambda_{2\to 2}$ critical threshold also has a role in showing that the triangle diagram is finite. In Section~\ref{sec:sphericaltransform} the spherical transform is introduced and is used to evaluate the $L^2\to L^2$ operator norms. Of particular importance is how the supremum of the spherical transform differs from the plain integral. The main claim here can be summarised as \begin{equation} \norm*{\Opconnf_L}_{2\to 2} = o\left(\norm*{D_L}_{2\to 2}\right) \end{equation} as $L\to\infty$. In Section~\ref{sec:NonUniqueness} it is demonstrated how the $L^2\to L^2$ critical threshold acts as a lower bound for the uniqueness threshold, while geometric considerations give an upper bound for the percolation critical threshold.
The main claims here can be summarised as
\begin{equation}
\lambda_\mathrm{u}(L) \geq \lambda_{2\to 2}(L),\qquad \lambda_\mathrm{c}(L) \leq \frac{C_d}{\norm*{D_L}_{2\to 2}}.
\end{equation}
For sufficiently large $L$, these bounds are the required way around to demonstrate the existence of the non-uniqueness phase. Section~\ref{sec:ProofCritExponents} then proceeds to show that not only is the triangle diagram finite, but that it is also small enough to apply the results of \cite{caicedo2023criticalexponentsmarkedrandom}. The applications of the main results to the Boolean disc model (in Section~\ref{sec:ProofBooleanModel}) and to weight-dependent hyperbolic random connection models (in Section~\ref{sec:ProofWDRCM}) are proven in Section~\ref{sec:ProofSpecificModels}. Interactions between scaling functions and adjacency functions are discussed in Appendix~\ref{app:ScalingFunctions}. The volume- and length-linear scaling functions are shown to behave nicely for all adjacency functions, while all scaling functions are shown to behave nicely for some classes of adjacency function. However, some specific pairs of scaling and adjacency function are identified that behave unintuitively.
\section{Operator Bounds}
\label{sec:Operators}
In this section we prove the lemmata that do not depend on the hyperbolic structure of the spatial component of the ambient space. The scaling $\sigma_L$ will also not be relevant in this section, so $L$ is omitted from the notation. Let $\left(\X,\Xcal\right)$ be a Borel space with associated non-atomic $\sigma$-finite measure $\nu$. Let $\connf\colon \X\times\X\to\left[0,1\right]$ be $\Xcal\times\Xcal$-measurable, and let $\pla$ denote the law of the RCM on $\X$ with intensity measure $\lambda\nu$ and adjacency function $\connf$.
\begin{definition}[Construction of Operators]
Let $\psi\colon \X\times\X\to\R_+$ be $\Xcal\times\Xcal$-measurable. Then define $\Dcal(\psi)\subset \R^\X$ to be the set of $f\in\R^\X$ such that $\int_{\X}\psi(\mathbf{x},\mathbf{y})\abs{f(\mathbf{y})}\nu(\dd \mathbf{y})<\infty$ for $\nu$-almost every $\mathbf{x}\in\X$, so that
\begin{equation}
\label{eqn:OperatorDefn}
\left(\Psi f\right)(\mathbf{x}) := \int_{\X}\psi(\mathbf{x},\mathbf{y})f(\mathbf{y})\nu\left(\dd \mathbf{y}\right)
\end{equation}
defines a linear operator $\Psi\colon \Dcal(\psi)\to \R^\X$. For each $p,q\in\left[1,\infty\right]$, the $L^p\to L^q$ operator norm of $\Psi$ is defined by
\begin{equation}
\label{eqn:OperatorNormDefn}
\norm{\Psi}_{p\to q} := \begin{cases} +\infty &\text{if }L^p(\X)\not\subset \Dcal(\psi)\\ \sup\left\{\frac{\norm{\Psi f}_q}{\norm{f}_p}\colon f\in L^p(\X),f\ne 0\right\} &\text{otherwise.} \end{cases}
\end{equation}
\end{definition}
Let $\Optlam$ and $\Opconnf$ be the linear operators constructed in this way from $\tlam$ and $\connf$ respectively. We can define the operator critical thresholds to be
\begin{equation}
\lambda_{p\to q} := \inf\left\{\lambda>0\colon \norm{\Optlam}_{p\to q} = \infty\right\}
\end{equation}
for each $p,q\in\left[1,\infty\right]$. In the following result, these thresholds are bounded from below by comparing the two-point function $\tlam$ to the Green's function of a branching process with offspring density $\lambda\connf$:
\begin{equation}
\Greenlam(\mathbf{x},\mathbf{y}) := \sum^\infty_{n=1}\lambda^{n-1}\connf^{\star n}\left(\mathbf{x},\mathbf{y}\right),
\end{equation}
where $\star$ denotes convolution.
That is, given functions $f_1,f_2\colon \X\times\X\to \R_+$ define \begin{equation} f_1\star f_2(\mathbf{x},\mathbf{y}) := \int_\X f_1(\mathbf{x},\mathbf{u})f_2(\mathbf{u},\mathbf{y})\nu\left(\dd \mathbf{u}\right), \end{equation} and similarly $f^{\star n}_1\left(\mathbf{x},\mathbf{y}\right)$ by performing this iteratively. Also observe that while $\Opconnf^{n}$ is defined as the iterated application of the operator $\Opconnf$, it is also equal to the operator constructed from the function $\connf^{\star n}$. \begin{lemma} \label{lem:qtoqintensity_bound} For all $p\in\left[1,\infty\right]$, \begin{equation} \lambda_{p\to p} \geq \frac{1}{\norm{\Opconnf}_{p\to p}}. \end{equation} \end{lemma} \begin{proof} We first note the relation that for $\lambda>0$ and $\nu$-almost all $\mathbf{x},\mathbf{y}\in\X$, \begin{equation}\label{eqn:twopointVsGreen} \tlam\left(\mathbf{x},\mathbf{y}\right) \leq \Greenlam\left(\mathbf{x},\mathbf{y}\right). \end{equation} This was established in the proof of \cite[Lemma~2.2]{caicedo2023criticalexponentsmarkedrandom} using a `method of generations' approach. The neighbours of the starting vertex $\mathbf{y}$ are distributed as a Poisson point process with intensity measure $\lambda\connf\left(\mathbf{x},\mathbf{y}\right)\nu\left(\dd \mathbf{x}\right)$. These vertices are the `$1^{st}$ generation.' The `$2^{nd}$ generation' are the vertices that are adjacent to the $1^{st}$ generation, but not adjacent to $\mathbf{y}$. These are also distributed according to a Poisson point process, but the intensity measure is now a thinned measure whose density with respect to $\nu$ is less than or equal to $\lambda^2\connf^{\star 2}\left(\mathbf{x},\mathbf{y}\right)$. This can then be repeated for any generation. By Mecke's formula (Lemma~\ref{lem:Mecke} below) and monotone convergence, we then find that for any measurable $B\subset \X$, \begin{equation} \lambda\int_B\tlam\left(\mathbf{x},\mathbf{y}\right)\nu\left(\dd \mathbf{x}\right) = \E_\lambda\left[\#\left\{\mathbf{u}\in\eta\cap B\colon \conn{\mathbf{y}}{\mathbf{u}}{\xi^{\mathbf{y}}}\right\}\right] \leq \sum^\infty_{n=1}\lambda^n\int_B\connf^{\star n}\left(\mathbf{x},\mathbf{y}\right)\nu\left(\dd \mathbf{x}\right). \end{equation} The relation \eqref{eqn:twopointVsGreen} then follows. To arrive at the result, we make the elementary observation that if $\psi_1\geq\psi_2\geq 0$ $\nu$-everywhere, then their corresponding operators follow the partial ordering $\norm*{\Psi_1}_{p\to p}\geq \norm*{\Psi_2}_{p\to p}$ for every $p\in\left[1,+\infty\right]$. Then \eqref{eqn:twopointVsGreen}, the triangle inequality, and the sub-multiplicativity of the operator norm imply that \begin{equation} \label{eqn:TwoPointBoundGreen} \norm{\Optlam}_{p\to p} \leq \norm{\OpGreenlam}_{p\to p} \leq \sum^{\infty}_{n=1}\lambda^{n-1}\norm{\Opconnf^n}_{p\to p} \leq \sum^{\infty}_{n=1}\lambda^{n-1}\norm{\Opconnf}_{p\to p}^n. \end{equation} This upper bound is finite if $\lambda\norm{\Opconnf}_{p\to p}<1$, and so the result follows. 
\end{proof}
\begin{lemma}[Mecke's Formula]\label{lem:Mecke}
Given $m\in\N$ and a measurable non-negative $f$,
\begin{equation}
\E_\lambda \left[ \sum_{\vec {\mathbf{x}} \in \eta^{(m)}} f(\xi, \vec{\mathbf{x}})\right] = \lambda^m \int \E_\lambda\left[ f\left(\xi^{\mathbf{x}_1, \ldots, \mathbf{x}_m}, \vec {\mathbf{x}}\right)\right] \nu^{\otimes m}\left(\dd \vec{\mathbf{x}}\right), \label{eq:prelim:mecke_n}
\end{equation}
where $\vec{\mathbf{x}}=(\mathbf{x}_1,\ldots,\mathbf{x}_m)$, $\eta^{(m)}=\{(\mathbf{x}_1,\ldots,\mathbf{x}_m)\colon \mathbf{x}_i \in \eta, \mathbf{x}_i \neq \mathbf{x}_j \text{ for } i \neq j\}$, and $\nu^{\otimes m}$ is the $m$-fold product measure of $\nu$ on $\X^m$.
\end{lemma}
\begin{proof}
A proof and discussion of Mecke's formula can be found in \cite[Chapter~4]{LasPen17}.
\end{proof}
For the following lemma, we can generalize the definition of $\lambda_T$ in \eqref{eqn:criticalSusceptibility} to our more general space. For $\mathbf{x}\in\X$,
\begin{align}
\chi_{\lambda}(\mathbf{x})&= \E_{\lambda}\left[\#\C\left(\mathbf{x},\xi^{\mathbf{x}}\right)\right], \label{eqn:susceptibilityGeneral}\\
\lambda_T &= \inf\left\{\lambda>0\colon \esssup_{\mathbf{x}\in\X}\chi_{\lambda}(\mathbf{x}) = \infty\right\}.
\end{align}
In the case $\X=\HypDim\times\Ecal$, the transitive symmetry of the model in $\HypDim$ ensures that we recover the previous definition. The norm $\HSNorm{\cdot}$ in the following lemma is called the \emph{Hilbert-Schmidt} norm.
\begin{lemma}
\label{lem:generalOperatorBounds}
For all linear operators $\Psi\colon \Dcal(\psi)\to \R^\X$ defined as in \eqref{eqn:OperatorDefn},
\begin{align}
\norm*{\Psi}_{1\to 1} &= \esssup_{\mathbf{y}\in\X}\int_\X\abs*{\psi\left(\mathbf{x},\mathbf{y}\right)}\nu\left(\dd \mathbf{x}\right),\\
\norm*{\Psi}_{2\to 2} &\leq \HSNorm{\Psi}:= \left(\int_\X\int_\X\abs*{\psi\left(\mathbf{x},\mathbf{y}\right)}^2\nu\left(\dd \mathbf{x}\right)\nu\left(\dd \mathbf{y}\right)\right)^\frac{1}{2}.
\end{align}
In particular, this means
\begin{equation}
\lambda_T = \lambda_{1\to 1}.
\end{equation}
\end{lemma}
\begin{proof}
The two displayed bounds are standard results. The first follows immediately from considering test functions approximating delta functions, while the second follows by applying the Cauchy-Schwarz inequality to $\norm*{\Psi f}_{2}$ in \eqref{eqn:OperatorNormDefn}. The equality of the critical intensities then follows from Mecke's formula:
\begin{multline}
\esssup_{\mathbf{y}\in\X}\chi_{\lambda}(\mathbf{y}) = 1+ \esssup_{\mathbf{y}\in\X}\E_{\lambda}\left[\sum_{\mathbf{x}\in\eta}\Id_{\left\{\conn{\mathbf{x}}{\mathbf{y}}{\xi^{\mathbf{y}}}\right\}}\right] \\= 1+ \lambda\esssup_{\mathbf{y}\in\X}\int_\X\tlam\left(\mathbf{x},\mathbf{y}\right)\nu\left(\dd \mathbf{x}\right) = 1+ \lambda\norm*{\Optlam}_{1\to 1}.
\end{multline}
\end{proof}
A crucial object in our derivation of mean-field critical exponents is the triangle diagram.
\begin{definition}
\label{defn:TriangleDiagram}
For $\lambda\geq0$, the \emph{triangle diagram} is defined as
\begin{equation}
\trilam := \lambda^2\esssup_{\mathbf{x},\mathbf{y}\in\X}\tlam^{\star 3}\left(\mathbf{x},\mathbf{y}\right).
\end{equation}
\end{definition}
\begin{lemma}
\label{lem:TriangleFinite}
If $\esssup_{\mathbf{x}\in\X}\int_{\X}\connf\left(\mathbf{x},\mathbf{y}\right)^2\nu\left(\dd \mathbf{y}\right)<\infty$ and $\lambda<\lambda_{2\to 2}$, then $\trilam<\infty$.
\end{lemma}
\begin{proof}
Here we want to bound $\trilam$ using the operator norm $\norm*{\Optlam}_{2\to 2}$, which we know to be finite because $\lambda<\lambda_{2\to 2}$.
The issue is that while we can approximate the essential suprema in the definition of $\trilam$ using $\Optlam$ and suitable test functions, the $2$-norms of these test functions may blow up as we improve the approximation. We avoid this issue by using factors of $\connf$ to mollify this blow-up. In \cite[Lemma~6.2]{DicHey2022triangle} it was proven using Mecke's formula that \begin{align} \tlam\left(\mathbf{x},\mathbf{y}\right) &\leq \connf\left(\mathbf{x},\mathbf{y}\right) + \lambda\connf\star\tlam\left(\mathbf{x},\mathbf{y}\right)\\ \tlam\left(\mathbf{x},\mathbf{y}\right) &\leq \connf\left(\mathbf{x},\mathbf{y}\right) + \lambda\tlam\star \connf\left(\mathbf{x},\mathbf{y}\right) \end{align} both hold for all $\lambda>0$ and $\mathbf{x},\mathbf{y}\in\X$. By then applying these inequalities to the first and last factors in $\tlam^{\star 3}$, we get \begin{equation} \tlam^{\star 3}\left(\mathbf{x},\mathbf{y}\right) \leq \connf\star\tlam\star\connf\left(\mathbf{x},\mathbf{y}\right) + 2\lambda\connf\star\tlam^{\star 2}\star\connf\left(\mathbf{x},\mathbf{y}\right) + \lambda^2\connf\star\tlam^{\star 3}\star\connf\left(\mathbf{x},\mathbf{y}\right) \end{equation} for all $\lambda>0$ and $\mathbf{x},\mathbf{y}\in\X$. By applying \cite[Lemma~6.3]{DicHey2022triangle} to our case, we find that for all $\varepsilon>0$ and $M\in\EssIm{\tlam^{\star 3}}$ (i.e. the essential image), there exist $\nu$-positive and finite sets $E_1,E_2\subset \X$ such that \begin{equation} f_i:= \frac{1}{\nu\left(E_i\right)}\Id_{E_i} \end{equation} for $i=1,2$ satisfy \begin{equation} \abs*{M - \inner{f_1}{\Optlam^3f_2}}\leq \varepsilon. \end{equation} Here $\inner{f}{g}:=\int_\X f\left(\mathbf{x}\right)\overline{g\left(\mathbf{x}\right)}\nu\left(\dd \mathbf{x}\right)$ denotes the inner product of two functions $f,g\in L^2\left(\X\right)$. Now since $f_1,f_2\geq 0$ everywhere, we arrive at \begin{equation} \inner{f_1}{\Optlam^3f_2} \leq \inner{f_1}{\left(\Opconnf\Optlam\Opconnf + 2\lambda\Opconnf\Optlam^2\Opconnf + \lambda^2\Opconnf\Optlam^3\Opconnf\right)f_2}. \end{equation} Observe that if $\norm*{\Opconnf}_{2\to 2}=\infty$ then $\norm*{\Optlam}_{2\to 2}=\infty$ for all $\lambda>0$ and therefore $\lambda_{2\to 2}=0$. Therefore under our assumptions $\Opconnf$ is a bounded operator, and therefore the symmetry of $\connf$ implies that $\Opconnf\colon L^2\left(\X\right)\to L^2\left(\X\right)$ is self-adjoint. This self-adjointness, the Cauchy-Schwarz inequality, the triangle inequality, and the sub-multiplicativity of the operator norm lead to \begin{equation} \inner{f_1}{\Optlam^3f_2} \leq \left(\norm*{\Optlam}_{2\to 2} + 2\lambda\norm*{\Optlam}_{2\to 2}^2 + \lambda^2\norm*{\Optlam}_{2\to 2}^3\right)\norm*{\Opconnf f_1}_2 \norm*{\Opconnf f_2}_2. \end{equation} Then we can write for $i=1,2$ that \begin{multline} \norm*{\Opconnf f_i}_2^2 = \int_\X\int_\X\int_\X f_i(\mathbf{x})\connf(\mathbf{x},\mathbf{y})\connf(\mathbf{y},\mathbf{z})f_i(\mathbf{z})\nu(\dd \mathbf{z})\nu(\dd \mathbf{y})\nu(\dd \mathbf{x}) \\\leq \norm*{f_i}^2_1\esssup_{\mathbf{x},\mathbf{z}\in\X}\connf^{\star 2}(\mathbf{x},\mathbf{z}) \leq \esssup_{\mathbf{x}\in\X}\int_{\X}\connf(\mathbf{x},\mathbf{y})^2\nu\left(\dd \mathbf{y}\right)<\infty. \end{multline} Finally since $\lambda<\lambda_{2\to 2}$, we know that $\norm*{\Optlam}_{2\to 2}<\infty$ and therefore $\trilam<\infty$. \end{proof} \section{Spherical Transform} \label{sec:sphericaltransform} We now restrict our attention to $\X=\HypDim$, and consider transforms of radial functions. 
Since we will be requiring this rotational symmetry, it will be convenient to write explicit expressions in terms of the Poincar{\`e} disc model. Letting \begin{equation} \mathbb{B} := \left\{z\in\Rd\colon \abs*{z}<1\right\}, \end{equation} the hyperbolic volume element on $\mathbb{B}$ is given by \begin{equation} \mathbf{d}z = \frac{4}{\left(1-\abs*{z}^2\right)^2}\dd z. \end{equation} We also let ${\partial\mathbb{B}} = \left\{z\in\Rd\colon \abs*{z}=1\right\}$ denote the boundary of $\mathbb{B}$. \begin{definition} Let $C^\infty_c\left(\mathbb{B}\right)$ denote the set of smooth functions on $\mathbb{B}$ with compact support (with respect to the hyperbolic metric), and $C^\infty_{c,\natural}\left(\mathbb{B}\right)$ denote the subset of such functions that are symmetric with respect to rotations about the origin $\orig$. Given a radial function $g\colon \mathbb{B}\to \R_+$, converting into polar coordinates (with a minor abuse of notation) gives the expression \begin{equation} \int_\mathbb{B} g(z)\mathbf{d}z = \mathfrak{S}_{d-1}\int^1_0 g(\varrho)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho, \end{equation} where $\mathfrak{S}_{d-1}$ denotes the (Lebesgue) $\left(d-1\right)$-volume of the unit-radius sphere ${\partial\mathbb{B}}$. For $g\in C^\infty_{c}\left(\mathbb{B}\right)$, $s\in\R$ and $b\in {\partial\mathbb{B}}$, we define the \emph{spherical transform} to be \begin{equation} \widetilde{g}(s,b) := \int_\mathbb{B}g(z)\e^{\left(-is+d-1\right)A(z,b)}\frac{4}{\left(1-\abs{z}^2\right)^2}\dd z, \end{equation} where \begin{equation} A(z,b) := \frac{1}{2}\log \frac{1-\abs{z}^2}{\abs{z-b}^2}. \end{equation} If $g\in C^\infty_{c,\natural}\left(\mathbb{B}\right)$, this expression is $b$-invariant and reduces to \begin{align} \widetilde{g}(s) &:= \mathfrak{S}_{d-2}\int^1_0 g(\varrho)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\left(\int^{\pi}_0\left(\frac{1-\varrho^2}{1+\varrho^2-2\varrho\cos\theta}\right)^{\frac{d-1-is}{2}}\left(\sin \theta\right)^{d-2}\dd \theta\right)\dd \varrho\\ &= \mathfrak{S}_{d-1}\int^1_0 g(\varrho)Q^{\mathbb{B}}_d(\varrho;s)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho, \end{align} where, for $\varrho\in\left[0,1\right)$, $d\geq 2$ and $s\in\R$, we define \begin{equation} \label{eqn:Qd_Definition} Q^{\mathbb{B}}_d(\varrho;s):= \frac{\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\int^{\pi}_0\left(\frac{1-\varrho^2}{1+\varrho^2-2\varrho\cos\theta}\right)^{\frac{d-1-is}{2}}\left(\sin \theta\right)^{d-2}\dd \theta. \end{equation} Note that for $s=0$ the integrand is real and strictly positive for all $\varrho\in\left[0,1\right)$ and $\theta\in\left(0,\pi\right)$, and therefore $Q^{\mathbb{B}}_d(\varrho;0)>0$ for all $\varrho\in\left[0,1\right)$. Furthermore, \begin{equation} \label{eqn:QdatZero} Q^{\mathbb{B}}_d(0;s) = \frac{\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\int^\pi_{0}\left(\sin \theta\right)^{d-2}\dd \theta = 1 \end{equation} for all $d\geq 2$ and $s\in\R$. It will sometimes be convenient to describe $Q^{\mathbb{B}}_d(\varrho;0)$ in terms of the hyperbolic distance rather than the Euclidean distance on $\mathbb{B}$. To this end, let us use \eqref{eqn:HypDistance} to define $Q_d\colon \R_+\to\R_+$ by \begin{equation} \label{eqn:QdFunctionDefinition} Q_d\left(r\right):= Q^{\mathbb{B}}_d\left(\tanh\frac{r}{2};0\right). 
\end{equation}
While the above definition of $\widetilde{g}(s,b)$ and $\widetilde{g}(s)$ was given for $g\in C^\infty_{c}\left(\mathbb{B}\right)$ and $C^\infty_{c,\natural}\left(\mathbb{B}\right)$ functions respectively, these definitions can naturally be extended to
\begin{align}
\mathfrak{D}\left(\mathbb{B}\right) &:= \left\{g\colon \mathbb{B}\to \R \:\left\vert\: \int_\mathbb{B}\abs*{g(z)}\e^{\left(d-1\right)A(z,b)}\frac{1}{\left(1-\abs{z}^2\right)^2}\dd z<\infty\right.\right\}\\
\mathfrak{D}_\natural\left(\mathbb{B}\right) &:= \left\{g\in\mathfrak{D}\left(\mathbb{B}\right) \:\left\vert\: g \:\text{is invariant under rotations about}\: \orig\right.\right\}
\end{align}
by using dominated convergence. An alternative expression for $\mathfrak{D}_\natural\left(\mathbb{B}\right)$ is
\begin{multline}
\label{eqn:frakDnatural}
\mathfrak{D}_\natural\left(\mathbb{B}\right) = \left\{g\colon \mathbb{B}\to \R \:\left\vert\:g \:\text{is invariant under rotations about}\: \orig, \right.\right.\\\left.\left.\:\text{and}\: \int^1_0\abs*{g\left(\varrho\right)}Q^\mathbb{B}_d\left(\varrho;0\right)\frac{\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho<\infty\right.\right\}.
\end{multline}
\end{definition}
\begin{remark}
The spherical transform for $\HypTwo$ is given explicitly in \cite{helgason2000groups}. The general expression for symmetric spaces can be found in \cite{helgason1994geometric}, from which the expression of the $\HypDim$ spherical transform for $d\geq 3$ can be deduced.
\end{remark}
\begin{lemma}
\label{lem:limitQd}
For all $d\geq 2$, $\lim_{\varrho\nearrow 1}Q^{\mathbb{B}}_d(\varrho;0)=0$. In particular, there exists a constant $c_d\in\left(0,\infty\right)$ such that
\begin{equation}
Q^{\mathbb{B}}_d(\varrho;0) \leq c_d\left(1-\varrho\right)^{\frac{d-1}{2}}\left(1\vee\abs*{\log\left(1-\varrho\right)}\right).
\end{equation}
Equivalently, there exists a constant $c'_d\in\left(0,\infty\right)$ such that
\begin{equation}
Q_d(r) \leq c'_d \left(1\vee r\right) \exp\left(-\frac{1}{2}\left(d-1\right)r\right).
\end{equation}
\end{lemma}
\begin{proof}
First observe that, by a double angle formula and the substitutions $\theta=2t$ and $\theta=\pi-2t$ (each of which contributes a factor of $2$ through $\dd \theta = \pm 2\dd t$),
\begin{align}
Q^{\mathbb{B}}_d(\varrho;0) & = \frac{\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\left(\frac{1-\varrho^2}{1+\varrho^2}\right)^\frac{d-1}{2}\int^{\pi}_0\left(1-\frac{2\varrho}{1+\varrho^2}\cos\theta\right)^{-\frac{d-1}{2}}\left(\sin \theta\right)^{d-2}\dd \theta \nonumber\\
& =\frac{2\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\left(\frac{1-\varrho^2}{\left(1-\varrho\right)^2}\right)^\frac{d-1}{2}\int^{\frac{\pi}{2}}_0\left(1+\frac{4\varrho}{\left(1-\varrho\right)^2}\left(\sin t\right)^2\right)^{-\frac{d-1}{2}}\left(\sin 2t\right)^{d-2}\dd t \nonumber\\
& =\frac{2\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\left(\frac{1-\varrho^2}{\left(1+\varrho\right)^2}\right)^\frac{d-1}{2}\int^{\frac{\pi}{2}}_0\left(1-\frac{4\varrho}{\left(1+\varrho\right)^2}\left(\sin t\right)^2\right)^{-\frac{d-1}{2}}\left(\sin 2t\right)^{d-2}\dd t \nonumber\\
& =\frac{2\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\left(\frac{1-\varrho}{1+\varrho}\right)^\frac{d-1}{2}J_d\left(\frac{4\varrho}{\left(1+\varrho\right)^2}\right),
\end{align}
where
\begin{equation}
J_d\left(m\right) := \int^{\frac{\pi}{2}}_0\left(1-m\left(\sin t\right)^2\right)^{-\frac{d-1}{2}}\left(\sin 2t\right)^{d-2}\dd t.
\end{equation}
Note that the prefactor $2$ is consistent with \eqref{eqn:QdatZero}: at $\varrho=0$ the last line equals $\frac{2\mathfrak{S}_{d-2}}{\mathfrak{S}_{d-1}}\int^{\frac{\pi}{2}}_0\left(\sin 2t\right)^{d-2}\dd t = 1$. Note also that $J_2\left(m\right)$ is the complete elliptic integral of the first kind.
Also note that $J_3(m)$ can be evaluated via the substitution $u=\left(\sin t\right)^2$ to get
\begin{equation}
J_3(m) = \int^1_0\frac{\dd u}{1-mu} = -\frac{1}{m}\log\left(1-m\right),
\end{equation}
and (since $\mathfrak{S}_1=2\pi$, $\mathfrak{S}_2=4\pi$ and $\varrho=\tanh\frac{r}{2}$) this produces
\begin{equation}
Q^{\mathbb{B}}_3\left(\varrho;0\right) = \frac{1-\varrho^2}{2\varrho}\log\frac{1+\varrho}{1-\varrho}, \qquad Q_3(r) = \frac{r}{\sinh r}.
\end{equation}
Similarly, by integration by substitution and long division it is possible to evaluate $Q^\mathbb{B}_d$ for any odd $d\geq 3$ using rational functions and logarithms. However, because of the difficulty of extracting generic expressions from the long division step, we proceed by a different approach. For $d=2,3,4,5$, $Q^{\mathbb{B}}_d\left(\varrho;0\right)$ has been plotted in Figure~\ref{fig:QdFunctions2345}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{QdFunctions2345.pdf}
\caption{Plot of $Q^\mathbb{B}_d\left(\varrho;0\right)$ for $d=2,3,4,5$ using MATLAB.}
\label{fig:QdFunctions2345}
\end{figure}
For $\varepsilon\in\left(0,1\right)$, by using the substitution $y=1-\left(1-\varepsilon\right)\left(\sin t\right)^2$,
\begin{multline}
J_d\left(1-\varepsilon\right) = \int^{\frac{\pi}{2}}_0\left(1-\left(1-\varepsilon\right)\left(\sin t\right)^2\right)^{-\frac{d-1}{2}}\left(\sin 2t\right)^{d-2}\dd t \\ = \left(1-\varepsilon\right)^{2-d}2^{d-3}\int^1_{\varepsilon}y^{-\frac{d-1}{2}}\left(1-y\right)^{\frac{d-3}{2}}\left(y-\varepsilon\right)^\frac{d-3}{2}\dd y.
\end{multline}
There exists a sequence $\left\{c_k\right\}_{k\in\N}$ such that $\left(y-\varepsilon\right)^{\frac{d-3}{2}}= \sum^\infty_{k=0}c_k\varepsilon^k y^{\frac{d-3}{2}-k}$, and only the integral resulting from the first of these terms will diverge as $\varepsilon\searrow 0$. Since $c_0=1$,
\begin{equation}
J_d\left(1-\varepsilon\right) = \left(1-\varepsilon\right)^{2-d}2^{d-3}\int^1_{\varepsilon}y^{-1}\left(1-y\right)^{\frac{d-3}{2}}\dd y + \LandauBigO{1}
\end{equation}
as $\varepsilon\searrow 0$. Therefore there exists a constant $C_d\in\left(0,\infty\right)$ such that
\begin{equation}
J_d\left(1-\varepsilon\right) \sim -C_d \log \varepsilon
\end{equation}
as $\varepsilon\searrow 0$. Therefore there exists a constant $C'_d\in\left(0,\infty\right)$ such that
\begin{equation}
Q^{\mathbb{B}}_d(\varrho;0) \sim -C'_d\left(1-\varrho\right)^{\frac{d-1}{2}}\log\left(1-\varrho\right)
\end{equation}
as $\varrho\nearrow 1$. From the expression \eqref{eqn:Qd_Definition}, it is clear that $Q^{\mathbb{B}}_d(\varrho;0)$ is continuous for $\varrho\in\left[0,1\right)$, and \eqref{eqn:QdatZero} then implies that the upper bound result holds. The bounds for $Q_d$ then follow from using \eqref{eqn:HypDistance} to get
\begin{equation}
1-\varrho \sim 2 \exp\left(-r\right).
\end{equation}
\end{proof}
We now use the spherical transform to `partially diagonalize' $L^2\left(\HypDim\times\Ecal\right)\to L^2\left(\HypDim\times\Ecal\right)$ linear operators that are invariant under rotations about $\orig$ in $\HypDim$. Lemma~\ref{lem:diagonalisePlancherel} below gives the three properties that we require from the spherical transform for our argument.
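Before stating it, we record a quick numerical sanity check (ours, purely illustrative and not part of the argument) of the closed forms obtained in the proof of Lemma~\ref{lem:limitQd}: the integral in \eqref{eqn:Qd_Definition} is evaluated by quadrature and compared with $Q^{\mathbb{B}}_3\left(\varrho;0\right)=\frac{1-\varrho^2}{2\varrho}\log\frac{1+\varrho}{1-\varrho}$ and $Q_3(r)=\frac{r}{\sinh r}$. The use of Python with \texttt{scipy} is an arbitrary choice on our part.
\begin{verbatim}
# Minimal sanity check (illustrative only) of Q^B_d(rho; 0) against the
# closed forms for d = 3; assumes only the defining integral of Q^B_d
# and the relation rho = tanh(r/2).
import math
from scipy.integrate import quad

def sphere_volume(k):
    # k-dimensional surface volume of the unit sphere S^k inside R^(k+1)
    return 2 * math.pi ** ((k + 1) / 2) / math.gamma((k + 1) / 2)

def Q_ball(rho, d):
    # Q^B_d(rho; 0), evaluated directly from its defining integral
    def integrand(t):
        base = (1 - rho ** 2) / (1 + rho ** 2 - 2 * rho * math.cos(t))
        return base ** ((d - 1) / 2) * math.sin(t) ** (d - 2)
    value, _ = quad(integrand, 0, math.pi)
    return sphere_volume(d - 2) / sphere_volume(d - 1) * value

for d in (2, 3, 4, 5):              # Q^B_d(0; 0) = 1 for every d >= 2
    assert abs(Q_ball(0.0, d) - 1.0) < 1e-6

for r in (0.5, 1.0, 2.0, 5.0):      # closed forms for d = 3
    rho = math.tanh(r / 2)
    closed = (1 - rho ** 2) / (2 * rho) * math.log((1 + rho) / (1 - rho))
    assert abs(Q_ball(rho, 3) - closed) < 1e-6
    assert abs(closed - r / math.sinh(r)) < 1e-12
print("Q_3 checks passed")
\end{verbatim}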
\begin{lemma} \label{lem:diagonalisePlancherel} For $f\in \mathfrak{D}\left(\mathbb{B}\right)$, and $z\in\mathbb{B}$, \begin{equation} f(z) = \frac{1}{w} \int_\R\int_{{\partial\mathbb{B}}}\e^{\left(+is+d-1\right)A(z,b)}\widetilde{f}(s,b)\abs{\mathbf{c}(s)}^{-2}\dd b\dd s, \end{equation} where $\mathbf{c}(s)$ is the Harish-Chandra $\mathbf{c}$-function and $w$ is the order of the Weyl group. For $f_1\in \mathfrak{D}_\natural\left(\mathbb{B}\right)$ and $f_2\in \mathfrak{D}\left(\mathbb{B}\right)$, \begin{equation}\label{eqn:diagonalise} \left(f_1\star f_2\right)^{\sim}(s,b) = \widetilde{f_1}(s)\widetilde{f_2}(s,b). \end{equation} For $f_1,f_2\in \mathfrak{D}\left(\mathbb{B}\right)$ the following Plancherel result holds: \begin{equation} \int_{\mathbb{B}}f_1(z)f_2(z) \mathbf{d}x = \frac{1}{w}\int_\R\int_{{\partial\mathbb{B}}}\widetilde{f_1}(s,b)\overline{\widetilde{f_2}(s,b)}\abs{\mathbf{c}(s)}^{-2}\dd b\dd s. \end{equation} \end{lemma} \begin{proof} For $d=2$, this is proven in \cite{helgason2000groups}, and the $d\geq 3$ cases follow from the general symmetric space case dealt with in \cite{helgason1994geometric}. Note that while \cite{helgason1994geometric,helgason2000groups} express these properties for functions $f_1,f_2$ in the space of continuous functions with compact (with respect to the $\HypDim$ metric) support, these can be extended to $\mathfrak{D}\left(\mathbb{B}\right)$ in the usual manner. Also be warned that these references define convolution in the reverse order to the convention we take here (convolution on $\HypDim$ is in general not commutative). \end{proof} In what follows, let $\psi\colon\left(\HypDim\times\Ecal\right)\times \left(\HypDim\times\Ecal\right)\to \left[0,1\right]$ be measurable and define the associated operator $\Psi\colon \Dcal\left(\psi\right)\to \R^{\HypDim\times \Ecal}$ in the sense of \eqref{eqn:OperatorDefn}. Also suppose that $\psi$ is isometry invariant in the $\HypDim$ arguments, so that $\psi\left(x,a,y,b\right)=\psi\left(\dist{x,y};a,b\right)$ for $x,y\in\HypDim$ and $a,b\in\Ecal$. Observe then that the spherical transform is defined for $r\mapsto \psi\left(r;a,b\right)$ if and only if \begin{equation} \label{eqn:SphericalTransformDefined}\int^\infty_0\psi\left(r;a,b\right)Q_d\left(r\right)\e^{\left(d-1\right)r}\dd r<\infty, \end{equation} where we have taken the integrability condition in \eqref{eqn:frakDnatural} and expressed an equivalent condition in hyperbolic distance coordinates (and using $\psi\in\left[0,1\right]$). If \eqref{eqn:SphericalTransformDefined} holds for $\Pcal$-almost every $a,b\in\Ecal$, then for each $s\in\R$ we can define an operator $\widetilde{\Psi}(s)$ using the functions $\left(a,b\right)\mapsto\widetilde{\psi}(s;a,b)$. \begin{lemma} \label{lem:TwoToTwoNormBound} If \eqref{eqn:SphericalTransformDefined} holds for $\Pcal$-almost every $a,b\in\Ecal$ and $\norm*{\widetilde{\Psi}\left(0\right)}_{2\to 2}<\infty$, \begin{equation} \norm*{\Psi}_{2\to 2} = \esssup_{s\in\R}\norm*{\widetilde{\Psi}\left(s\right)}_{2\to 2} = \norm*{\widetilde{\Psi}\left(0\right)}_{2\to 2}. \end{equation} \end{lemma} \begin{proof} The bound \eqref{eqn:SphericalTransformDefined} holding for $\Pcal$-almost every $a,b\in\Ecal$ indicates that the spherical transform is defined for these $a,b\in\Ecal$, and we can use Lemma~\ref{lem:diagonalisePlancherel} for those values. The first equality holds by the same argument as in \cite[Lemma~3.6]{DicHey2022triangle}. 
Approximately speaking, for each $s\in\R$ find a unit $g^{(s)}\in L^2\left(\Ecal\right)$ such that $\norm*{\widetilde{\Psi}(s) g^{(s)}}_2$ approximates the operator norm of $\widetilde{\Psi}(s)$. Then by suitably choosing some $\widetilde{f}\colon \R\to \Complex$ one can use the Plancherel and inverse transform parts of Lemma~\ref{lem:diagonalisePlancherel} to construct a unit $h\in L^2\left(\HypDim\times \Ecal\right)$ such that $\norm*{\Psi h}_2$ approximates some part of the essential spectrum of $\Psi$. This shows $\norm*{\Psi}_{2\to 2} \geq \esssup_{s\in\R}\norm*{\widetilde{\Psi}\left(s\right)}_{2\to 2}$. To show the reverse inequality, first take some unit $h\in L^2\left(\HypDim\times\Ecal\right)$ that approximates the operator norm of $\Psi$, and then for all $s\in\R$ define $g^{(s)}$ by fixing $a\in\Ecal$ and taking the spherical transform of $h(x,a)$ (i.e. $g^{(s)}(a) = \widetilde{h}(s,a)$). The convolution operator on $L^2\left(\Ecal\right)$ that uses $g^{(s)}$ can be `diagonalised' into a multiplication operator on some other space by a unitary transformation. This unitarity, and the Plancherel and multiplication parts of Lemma~\ref{lem:diagonalisePlancherel} then mean that $g^{(s)}$ is a unit vector in $L^2\left(\Ecal\right)$ such that $\norm*{\widetilde{\Psi}(s)g^{(s)}}_2$ approximates some part of the essential spectrum of $\widetilde{\Psi}(s)$. For the inequality $\esssup_{s\in\R}\norm*{\widetilde{\Psi}\left(s\right)}_{2\to 2} \leq \norm*{\widetilde{\Psi}\left(0\right)}_{2\to 2}$, note that \begin{align} &\abs*{\widetilde{\psi}\left(s;a,b\right)}\nonumber\\ &\hspace{1cm}=\abs*{\mathfrak{S}_{d-2}\int^1_0 \psi(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\left(\int^{\pi}_0\left(\frac{1-\varrho^2}{1+\varrho^2-2\varrho\cos\theta}\right)^{\frac{d-1-is}{2}}\left(\sin \theta\right)^{d-2}\dd \theta\right)\dd \varrho} \nonumber\\ &\hspace{1cm}\leq \mathfrak{S}_{d-2}\int^1_0 \psi(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\left(\int^{\pi}_0\abs*{\left(\frac{1-\varrho^2}{1+\varrho^2-2\varrho\cos\theta}\right)^{\frac{d-1-is}{2}}\left(\sin \theta\right)^{d-2}}\dd \theta\right)\dd \varrho\nonumber\\ &\hspace{1cm}= \mathfrak{S}_{d-2}\int^1_0 \psi(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\left(\int^{\pi}_0\left(\frac{1-\varrho^2}{1+\varrho^2-2\varrho\cos\theta}\right)^{\frac{d-1}{2}}\left(\sin \theta\right)^{d-2}\dd \theta\right)\dd \varrho\nonumber\\ &\hspace{1cm}= \widetilde{\psi}\left(0;a,b\right) \end{align} uniformly in $a,b\in\Ecal$ and $s\in\R$. The inequality then holds because of the monotonicity of the operator norm. The last equality then uses $\norm*{\widetilde{\Psi}\left(0\right)}_{2\to 2}<\infty$ and a dominated convergence argument to show that $s\to \norm*{\widetilde{\Psi}\left(s\right)}_{2\to 2}$ is continuous at $s=0$. \end{proof} Recall the definition of $D_L(a,b)$ from \eqref{eqn:DegreeFunction}. We can use this function to define a linear operator on $\Dcal\left(D_L\right)\subset \R^\Ecal$ as in \eqref{eqn:OperatorDefn}, and let $\norm*{D_L}_{p\to q}$ denote the $L^p(\Ecal)\to L^q(\Ecal)$ operator norm of this operator. \begin{lemma} \label{lem:RatioofNorms} Suppose Assumption~\ref{assump:finitelymany} or Assumption~\ref{assump:specialscale} holds. Then \begin{equation} \lim_{L\to\infty}\frac{\norm*{\Opconnf_L}_{2\to 2}}{\norm*{D_L}_{2\to 2}} = 0. \end{equation} \end{lemma} \begin{proof} Fix $R\in\left(0,1\right)$. 
Then Lemma~\ref{lem:limitQd} indicates that there exists a constant $C=C(d)<\infty$ such that $Q^{\mathbb{B}}_d\left(\varrho;0\right)\leq C$ and thus \begin{align} &\int^1_0 \connf_L(\varrho;a,b)Q^{\mathbb{B}}_d\left(\varrho;0\right)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho \nonumber \\ &\hspace{2cm}\leq C\int^R_0 \connf_L(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho + \int^1_R \connf_L(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho \esssup_{t>R}Q^{\mathbb{B}}_d(t;0)\nonumber\\ &\hspace{2cm}\leq C\int^R_0 \connf_L(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho + D_L(a,b) \esssup_{t>R}Q^{\mathbb{B}}_d(t;0). \end{align} Now define \begin{equation} D^{(\leq R)}_L\left(a,b\right):= \mathfrak{S}_{d-1}\int^R_0 \connf_L(\varrho;a,b)\frac{4\varrho^{d-1}}{\left(1-\varrho^2\right)^2}\dd \varrho \end{equation} and the associated linear operator $\Dcal\left(D^{(\leq R)}_L\right)\to \R^\Ecal$. Assumptions~\ref{assump:finitelymany} and \ref{assump:specialscale} each show that $\int^\infty_0\connf_L\left(r;a,b\right)Q_d\left(r\right)\e^{\left(d-1\right)r}\dd r<\infty$ for $\Pcal$-almost every $a,b\in\Ecal$ and $ \norm*{\widetilde{\Opconnf}_L(0)}_{2\to 2}<\infty$ in their regimes, and therefore we can apply Lemma~\ref{lem:TwoToTwoNormBound}, the triangle inequality, and the homogeneity of the norm to get \begin{equation} \norm*{\Opconnf_L}_{2\to 2} = \norm*{\widetilde{\Opconnf}_L(0)}_{2\to 2} \leq C\norm*{D^{(\leq R)}_L}_{2\to 2} + \norm*{D_L}_{2\to 2}\esssup_{t>R}Q^{\mathbb{B}}_d(t;0). \end{equation} Then under Assumption~\ref{assump:finitelymany}, the equivalence of norms and \eqref{eqn:EigenValueRatio} (and translating into Poincar{\`e} disc coordinates) implies that there exists a sequence $\left\{C'_L\right\}_{L}$ such that $C'_L\to 0$ and $\norm*{D^{(\leq R)}_L}_{2\to 2}\leq C'_L \norm*{D_L}_{2\to 2}$. Under Assumption~\ref{assump:specialscale}, the scaling function is volume-linear and therefore \begin{align} \norm*{D^{(\leq R)}_L}_{2\to 2} &\leq L \mathfrak{S}_{d-1}\mathbf{V}_d\left(s^{-1}_L(R)\right) = \mathfrak{S}_{d-1}\mathbf{V}_d\left(R\right)\\ \norm*{D_L}_{2\to 2} & = L\norm*{D}_{2\to 2}. \end{align} That is, under Assumption~\ref{assump:specialscale} there also exists a sequence $\left\{C'_L\right\}_{L}$ such that $C'_L\to 0$ and $\norm*{D^{(\leq R)}_L}_{2\to 2}\leq C'_L \norm*{D_L}_{2\to 2}$. Either way, \begin{equation} \norm*{\Opconnf_L}_{2\to 2} \leq\norm*{D_L}_{2\to 2}\left(C'_L+\esssup_{t>R}Q^{\mathbb{B}}_d(t;0)\right). \end{equation} Therefore \begin{equation} \limsup_{L\to\infty}\frac{\norm*{\Opconnf_L}_{2\to 2}}{\norm*{D_L}_{2\to 2}} \leq \esssup_{t>R}Q^{\mathbb{B}}_d(t;0). \end{equation} From Lemma~\ref{lem:limitQd}, this bound can be made arbitrarily small by taking $R\nearrow 1$ and the result follows. \end{proof} When we are working under Assumption~\ref{assump:specialscale}, Lemma~\ref{lem:RatioofNorms} doesn't give enough information when $\norm*{D}_{2\to 2}=\infty$, and we will need the following refinement. \begin{lemma} \label{lem:NormSublinear} If Assumption~\ref{assump:specialscale} holds, then $\norm*{\Opconnf_L}_{2\to 2} =o\left( L\right)$ as $L\to\infty$. \end{lemma} \begin{proof} Let $a,b\in\Ecal$. 
Then by a change of variables \begin{multline} \int^\infty_0\connf_L(r;a,b)Q_d(r)\left(\sinh r\right)^{d-1}\dd r = L \int^\infty_0\connf(r;a,b) Q_d\left(s_L\left(r\right)\right)\left(\sinh r\right)^{d-1}\dd r \\ \leq L \int^\infty_0 \connf(r;a,b) f\left(s_L(r)\right)\left(\sinh r\right)^{d-1}\dd r, \end{multline} where $f(r):= \sup_{t\geq r}Q_d\left(t\right)$. $f$ is non-increasing and inherits the $r\to\infty$ asymptotics from those derived for $Q_d$ in the proof of Lemma~\ref{lem:limitQd}. In particular Assumption~\ref{assump:specialscale} implies that \begin{equation} \mathfrak{S}_{d-1}\int^\infty_0 \connf(r;a,b) f\left(r\right)\left(\sinh r\right)^{d-1}\dd r<\infty \end{equation} for almost every $a,b\in\Ecal$, and the operator with this kernel (denote this $\Opconnf^{(f)}$) has finite $L^2\left(\Ecal\right)\to L^2\left(\Ecal\right)$ operator norm. Now we explore the asymptotics of $s_L(r)$ as $r\to\infty$ for fixed $L$. Clearly as $r\to\infty$ \begin{align} \mathbf{V}_d\left(r\right) &\sim \frac{1}{d-1}\frac{1}{2^{d-1}}\e^{\left(d-1\right)r},\\ \mathbf{V}_d^{-1}\left(r\right) &\sim \log 2 + \frac{1}{d-1}\log\left(d-1\right) + \frac{1}{d-1}\log r, \end{align} and therefore \begin{equation} s_L(r)\sim r+ \frac{1}{d-1}\log L. \end{equation} Therefore from the proof of Lemma~\ref{lem:limitQd} we have \begin{equation} \label{eqn:largerratio} \frac{f\left(s_L\left(r\right)\right)}{f\left(r\right)} \sim \frac{s_L(r)}{r}\exp\left(\frac{1}{2}\left(d-1\right)\left(r - s_L(r)\right)\right) \sim \frac{1}{\sqrt{L}} \end{equation} as $r\to \infty$ for fixed $L$. On the other hand, if we fix $r>0$ and let $L\to \infty$ we get $s_L(r)\to\infty$ and therefore \begin{equation} \lim_{L\to\infty}\frac{f\left(s_L\left(r\right)\right)}{f\left(r\right)} = 0 \end{equation} for all $r>0$. Furthermore, since $L\mapsto s_L(r)$ is increasing, $L\mapsto \tfrac{f\left(s_L\left(r\right)\right)}{f\left(r\right)}$ is non-increasing. For $L,\varepsilon>0$, define \begin{equation} R_{L,\varepsilon}:= \sup\left\{r>0\colon \frac{f\left(s_L\left(r\right)\right)}{f\left(r\right)} \geq \varepsilon\right\}, \end{equation} so that the map $L\mapsto R_{L,\varepsilon}$ is non-increasing. Now given $\varepsilon>0$, fix $R_{\varepsilon}:= R_{4\varepsilon^{-2},\varepsilon}$ (\eqref{eqn:largerratio} implies $R_\varepsilon<\infty$). Therefore for all $a,b\in\Ecal$ \begin{align} &\int^\infty_0\connf_L(r;a,b)Q_d(r)\left(\sinh r\right)^{d-1}\dd r\nonumber\\ &\hspace{1cm}\leq L\int^{R_\varepsilon}_0 f\left(s_L(r)\right)\left(\sinh r\right)^{d-1}\dd r + L\int^\infty_{R_\varepsilon}\frac{f\left(s_L(r)\right)}{f(r)}\connf(r;a,b)f\left(r\right)\left(\sinh r\right)^{d-1}\dd r\nonumber\\ &\hspace{1cm}\leq L\int^{R_\varepsilon}_0 f\left(s_L(r)\right)\left(\sinh r\right)^{d-1}\dd r + \varepsilon L\int^\infty_{0}\connf(r;a,b)f\left(r\right)\left(\sinh r\right)^{d-1}\dd r. \end{align} By Lemma~\ref{lem:TwoToTwoNormBound}, the triangle inequality, and the homogeneity of the operator norm, \begin{equation} \norm*{\Opconnf_L}_{2\to 2} \leq L\mathfrak{S}_{d-1}\int^{R_\varepsilon}_0 f\left(s_L(r)\right)\left(\sinh r\right)^{d-1}\dd r + \varepsilon L \norm*{\Opconnf^{(f)}}_{2\to 2}. \end{equation} Therefore \begin{equation} \limsup_{L\to\infty} \frac{\norm*{\Opconnf_L}_{2\to 2}}{L} \leq \varepsilon\norm*{\Opconnf^{(f)}}_{2\to 2}. \end{equation} Since this norm on the right hand side is finite, taking $\varepsilon\searrow 0$ gives the result. 
\end{proof}
\section{Proof of the Main Results}
\label{sec:MainProof}
In the first subsection, the non-uniqueness result Theorem~\ref{thm:NonUniqueness} is proven. Then Theorem~\ref{thm:meanfield} is proven in Section~\ref{sec:ProofCritExponents}.
\subsection{Proof of Non-Uniqueness}
\label{sec:NonUniqueness}
First we relate the uniqueness critical threshold to the $L^p\to L^p$ critical thresholds. The only special feature of $\HypDim$ that the following lemma uses is the existence of an infinite transitive family of transformations, such as the translations. To reflect this generality we generalize the definition of $\theta_\lambda$ like we did for $\chi_\lambda$ in \eqref{eqn:susceptibilityGeneral}: for $\mathbf{x}\in\HypDim\times \Ecal$
\begin{equation}
\theta_\lambda\left(\mathbf{x}\right) = \mathbb{P}_{\lambda}\left(\#\C\left(\mathbf{x},\xi^{\mathbf{x}}\right) = \infty\right).
\end{equation}
\begin{lemma}
\label{lem:Uniquenesstoqtoq}
For all $p\in\left[1,\infty\right]$ and $L>0$,
\begin{equation}
\lambda_\mathrm{u}(L) \geq \lambda_{p\to p}(L).
\end{equation}
\end{lemma}
\begin{proof}
If there exists a unique infinite cluster in $\xi$, denote its vertex set by $\mathcal{I}\subset\eta$. Choose $\lambda>\lambda_{\mathrm{u}}$ such that $\pla\left(\exists! \text{ infinite cluster}\right)>0$. Then ergodicity implies that there is almost surely an infinite cluster and therefore
\begin{equation}
\tlam\left(\mathbf{x},\mathbf{y}\right)\geq \pla\left(\mathbf{x}\in\mathcal{I} \text{ in }\xi^{\mathbf{x}}, \mathbf{y}\in\mathcal{I} \text{ in }\xi^{\mathbf{y}}\right) \geq \theta_\lambda\left(\mathbf{x}\right)\theta_\lambda\left(\mathbf{y}\right),
\end{equation}
where we have used the FKG inequality (Lemma~\ref{lem:FKG} below). Since $\lambda>\lambda_\mathrm{u}\geq \lambda_\mathrm{c}$, there exist $\varepsilon>0$ and $E\subset \X$ such that $\theta_\lambda(\mathbf{x})\geq\varepsilon$ for all $\mathbf{x}\in E$, and $\nu\left(E\right)>0$. From the hyperbolic translation invariance, $\theta_\lambda(\mathbf{x})\geq \varepsilon$ on a set of infinite $\nu$-measure, and therefore $\norm{\theta_\lambda}_p=\infty$ for all $p\in\left[1,\infty\right)$. Given $f\in L^p\left(\X\right)$ such that $f\geq 0$, we then have
\begin{equation}
\int_\X \tlam(\mathbf{x},\mathbf{y})f(\mathbf{y})\nu\left(\dd \mathbf{y}\right) \geq \theta_\lambda(\mathbf{x})\int_\X\theta_\lambda(\mathbf{y})f(\mathbf{y})\nu\left(\dd \mathbf{y}\right),
\end{equation}
and
\begin{equation}
\norm{\Optlam f}_{p} \geq \norm{\theta_\lambda}_p\int_\X\theta_\lambda(\mathbf{y})f(\mathbf{y})\nu\left(\dd \mathbf{y}\right).
\end{equation}
Now let $f=\Id_E$. Then $\norm*{f}_p=\nu\left(E\right)^{\frac{1}{p}}$ and
\begin{equation}
\int_\X\theta_\lambda(\mathbf{y})f(\mathbf{y})\nu\left(\dd \mathbf{y}\right) \geq \varepsilon\nu\left(E\right).
\end{equation}
Therefore, for $p\in\left[1,\infty\right)$,
\begin{equation}
\norm*{\Optlam}_{p\to p} \geq \frac{\norm*{\Optlam f}_{p}}{\norm*{f}_p}\geq \norm*{\theta_\lambda}_{p}\varepsilon \nu\left(E\right)^{1-\frac{1}{p}} = \infty.
\end{equation}
For $p=\infty$, note instead that the constant function $\Id_\X\in L^\infty\left(\X\right)$ satisfies $\int_\X\tlam\left(\mathbf{x},\mathbf{y}\right)\nu\left(\dd \mathbf{y}\right)\geq \theta_\lambda\left(\mathbf{x}\right)\int_\X\theta_\lambda\left(\mathbf{y}\right)\nu\left(\dd \mathbf{y}\right)=\infty$ whenever $\theta_\lambda\left(\mathbf{x}\right)>0$, so that $L^\infty\left(\X\right)\not\subset\Dcal\left(\tlam\right)$ and $\norm*{\Optlam}_{\infty\to\infty}=\infty$ by \eqref{eqn:OperatorNormDefn}. In either case $\norm*{\Optlam}_{p\to p}=\infty$, so $\lambda_{p\to p}\leq\lambda$. Since $\lambda>\lambda_\mathrm{u}$ was arbitrary, we conclude that $\lambda_{p\to p}\leq \lambda_\mathrm{u}$.
\end{proof}
\begin{remark}
\label{rem:2to2isminimal}
The following observation was made in an analogous case by \cite{hutchcroft2019percolation}, and applies equally well here. The symmetry of $\connf$ implies that $\norm{\Opconnf}_{p\to p} = \norm{\Opconnf}_{\frac{p}{p-1}\to\frac{p}{p-1}}$, and the Riesz--Thorin interpolation theorem implies that $p\mapsto \log \norm{\Opconnf}_{\frac{1}{p}\to\frac{1}{p}}$ is a convex function on $p\in\left[0,1\right]$. Hence $p\mapsto\norm{\Opconnf}_{p\to p}$ is decreasing on $\left[1,2\right]$ and increasing on $\left[2,\infty\right]$, and so the choice $p=2$ gives the optimal version of the bound in Lemma~\ref{lem:Uniquenesstoqtoq}.
\end{remark}
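To make Remark~\ref{rem:2to2isminimal} concrete, the following toy computation (ours; the matrix below is invented purely for illustration and is not derived from any adjacency function considered in this paper) compares the induced $p\to p$ norms of a small symmetric non-negative kernel. By symmetry the $1\to 1$ and $\infty\to\infty$ norms coincide, while the $2\to 2$ norm is never larger, so among the bounds $\lambda_{p\to p}\geq \norm*{\Opconnf}_{p\to p}^{-1}$ of Lemma~\ref{lem:qtoqintensity_bound} the choice $p=2$ gives the largest lower bound.
\begin{verbatim}
# Illustrative only: a finite symmetric non-negative matrix standing in
# for an adjacency kernel, to compare its induced p -> p operator norms.
import numpy as np

K = np.array([[0.2, 0.7, 0.1],
              [0.7, 0.3, 0.4],
              [0.1, 0.4, 0.5]])      # made-up symmetric "kernel"

norm_1   = np.linalg.norm(K, 1)      # induced 1 -> 1 norm: max column sum
norm_2   = np.linalg.norm(K, 2)      # induced 2 -> 2 norm: largest singular value
norm_inf = np.linalg.norm(K, np.inf) # induced inf -> inf norm: max row sum

print(norm_1, norm_2, norm_inf)
# For a symmetric matrix the 1->1 and inf->inf norms agree, and the 2->2
# norm is never larger, so 1/norm_2 is the best lower bound of this form.
assert abs(norm_1 - norm_inf) < 1e-12
assert norm_2 <= norm_1 + 1e-12
\end{verbatim}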
\begin{lemma}[FKG Inequality]
\label{lem:FKG}
Given two increasing events $E,F$,
\begin{equation}
\pla\left(\xi\in E\cap F\right) \geq \pla\left(\xi\in E\right)\pla\left(\xi\in F\right).
\end{equation}
\end{lemma}
\begin{proof}
This is proven in \cite[Section~2.3]{HeyHofLasMat19}, building upon the FKG inequality for point processes (for example from \cite[Theorem~20.4]{LasPen17}).
\end{proof}
For the next two (similar) lemmas, we will make use of two subsets of $\HypDim$ that we now construct. Select an arbitrary direction, which we refer to via the geodesic ray $\gamma_0$ emanating from $\orig$. Then, given $\theta\in\left[0,\pi\right]$, define
\begin{equation}
\Lcal_*\left(\theta\right) := \arcosh\left(\frac{1-\cos\theta\cos\frac{\theta}{2}}{\sin\theta\sin\frac{\theta}{2}}\right).
\end{equation}
The geometric construction of this length can be seen in Figure~\ref{fig:separattion_length}. The hyperbolic quadrilateral has internal angles $\theta$, $\pi-\theta$, $\pi-\theta$, and $0$ (the $0$ angle being at an ideal point on the boundary of the hyperbolic plane). By splitting this quadrilateral into two triangles along the geodesic between the $\theta$ angle and the $0$ angle, the formula for $\Lcal_*\left(\theta\right)$ follows by applying the cosine rule for hyperbolic triangles.
\begin{figure}
\centering
\begin{tikzpicture}[scale=2]
\draw (5,0) arc (0:60:5);
\draw (5,0) -- (0,0) -- (2.5,4.33);
\draw (4.33,2.5) arc(120:150:6.83);
\draw (4.33,2.5) arc(300:270:6.83);
\filldraw (0,0) circle (1pt);
\filldraw (1.83,0) circle (1pt);
\filldraw (0.915,1.585) circle (1pt);
\draw (0.3,0) node[above right]{$\theta$} arc(0:60:0.3);
\draw (2.13,0) arc(0:60:0.3);
\draw (1.215,1.585) arc(0:60:0.3);
\draw[<->] (0,-0.2) -- (1.83,-0.2);
\draw (0.915,-0.2) node[below]{$\Lcal_*(\theta)$};
\end{tikzpicture}
\caption{Construction of the length $\Lcal_*\left(\theta\right)$ in the Poincar{\'e} disc model.}
\label{fig:separattion_length}
\end{figure}
Given $x\in\HypDim$, let $t_x$ denote the unique translation isometry that maps $\orig\mapsto x$. Let $x_0,x_1\in\HypDim$ be distinct points satisfying $\dist{x_0,\orig},\dist{x_1,\orig}>\Lcal_*\left(\theta\right)$, where $\theta$ is the angle subtended at $\orig$ by $x_0$ and $x_1$. Also let $\left(k_1,k_2,\ldots\right)\in \left\{0,1\right\}^\N$ and $\left(l_1,l_2,\ldots\right)\in \left\{0,1\right\}^\N$ be two distinct sequences. Then it was proven in \cite{dickson2024hyperbolicrandomconnectionmodels} that there exists $\varepsilon>0$ such that
\begin{equation}
\dist{t_{x_{k_n}}t_{x_{k_{n-1}}}\ldots t_{x_{k_1}}x_0,t_{x_{l_n}}t_{x_{l_{n-1}}}\ldots t_{x_{l_1}}x_1}>\varepsilon
\end{equation}
for all $n\geq 1$ such that $\left(k_1,k_2,\ldots,k_n\right)\ne \left(l_1,l_2,\ldots,l_n\right)$. Imprecisely, this means that in the binary tree generated by $x_0$ and $x_1$ the vertices do not get close to each other. The reasoning is that if the edge labelled $\Lcal_*(\theta)$ in Figure~\ref{fig:separattion_length} were any longer, then the two branching edges would not be able to meet before reaching the boundary.
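For concreteness, the following short sketch (ours, purely illustrative) evaluates $\Lcal_*\left(\theta\right)$ directly from the $\arcosh$ formula above for a few angles; for instance $\Lcal_*\left(\frac{\pi}{2}\right)=\arcosh\sqrt{2}\approx 0.88$, and the value grows without bound as $\theta\nearrow\pi$.
\begin{verbatim}
# Illustrative evaluation of the separation length L_*(theta) defined above.
import math

def L_star(theta):
    return math.acosh((1 - math.cos(theta) * math.cos(theta / 2))
                      / (math.sin(theta) * math.sin(theta / 2)))

for theta in (math.pi / 3, math.pi / 2, 2 * math.pi / 3, 0.99 * math.pi):
    print(f"theta = {theta:.3f},  L_*(theta) = {L_star(theta):.4f}")
# e.g. L_*(pi/2) = arcosh(sqrt(2)) ~ 0.8814, and L_*(theta) -> infinity
# as theta -> pi.
\end{verbatim}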
Now let $\angle\left(x,\orig,y\right)$ denote the angle subtended at $\orig$ by $x$ and $y$, and fix distinct $x_0,x_1\in\HypDim$ such that $\dist{x_0,\orig},\dist{x_1,\orig}>2\Lcal_*\left(\theta\right)$. Then for $\varepsilon>0$ we define the two frusta (cones with the apex removed)
\begin{align}
\Vcal_0 &:= \left\{y\in\HypDim\colon \angle\left(x_0,\orig,y\right)<\varepsilon, \dist{y,\orig}>2\Lcal_*\left(\theta\right)\right\}\\
\Vcal_1 &:= \left\{y\in\HypDim\colon \angle\left(x_1,\orig,y\right)<\varepsilon, \dist{y,\orig}>2\Lcal_*\left(\theta\right)\right\}.
\end{align}
The $d=2$ versions of these are sketched in Figure~\ref{fig:VsetConstruction}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.1]
\clip (-2,-2.5) rectangle + (7,7.5);
\draw (5,0) arc (0:360:5);
\draw (0,2) arc (90:100:2);
\draw (0,2) arc (90:80:2);
\draw (2,0) arc (0:10:2);
\draw (2,0) arc (0:-10:2);
\draw[dashed] (0,0) -- (1.970,0.347);
\draw (1.970,0.347) -- (4.924,0.868);
\draw[dashed] (0,0) -- (1.970,-0.347);
\draw (1.970,-0.347) -- (4.924,-0.868);
\draw[dashed] (0,0) -- (0.347,1.970);
\draw (0.347,1.970) -- (0.868,4.924);
\draw[dashed] (0,0) -- (-0.347,1.970);
\draw (-0.347,1.970) -- (-0.868,4.924);
\node at (0,3) {$\Vcal_1$};
\node at (3,0) {$\Vcal_0$};
\draw (5.5,0) arc (0:10:5.5);
\draw (5.5,0) arc (0:-10:5.5);
\node at (5.8,0) {$\varepsilon$};
\draw[<->] (0,-0.2) -- (1.970,-0.347-0.2);
\node at (1,-1.1) {$2\Lcal_*\left(\frac{\pi}{2}\right)$};
\filldraw (0,0) circle (2pt);
\end{tikzpicture}
\caption{Sketch of the construction of the sets $\Vcal_0$ and $\Vcal_1$ in $\HypTwo$.}
\label{fig:VsetConstruction}
\end{figure}
Then for each $n\geq 1$ and $\mathbf{k}=\left(k_1,\ldots,k_n\right)\in\left\{0,1\right\}^n$, we can define
\begin{equation}
\Vcal^{(n)}_{\mathbf{k}} := \left\{y\in\HypDim\colon \exists \left(z_1,\ldots,z_n\right)\in \prod^n_{i=1}\Vcal_{k_i} \text{ s.t. } y = t_{z_n}t_{z_{n-1}}\ldots t_{z_{2}}t_{z_{1}}\orig\right\}.
\end{equation}
Then for $\varepsilon$ sufficiently small, for all $n\geq 1$ and $\mathbf{k},\mathbf{m}\in\left\{0,1\right\}^n$, $\mathbf{k}\ne \mathbf{m}$ implies $\Vcal^{(n)}_{\mathbf{k}}\cap \Vcal^{(n)}_{\mathbf{m}} = \emptyset$. Furthermore, if $n_1<n_2$ and $\left(k_1,\ldots,k_{n_1}\right)\ne \left(m_1,\ldots,m_{n_1}\right)$, then $\Vcal^{(n_1)}_{\mathbf{k}}\cap \Vcal^{(n_2)}_{\mathbf{m}} = \emptyset$.
\begin{lemma}
\label{lem:criticaltoOnetoOne-finite}
If $\#\Ecal<\infty$ and \eqref{eqn:EigenValueRatio} holds, then for all $p,q\in\left[1,\infty\right]$ there exists $C_d=C_d(\connf,p,q)<\infty$ such that for $L$ sufficiently large,
\begin{equation}
\lambda_\mathrm{c}(L) \leq \frac{C_d}{\norm*{D_L}_{p\to q}}.
\end{equation}
\end{lemma}
\begin{proof}
Since $\#\Ecal<\infty$, for each $L$ there exist $a^*_L,b^*_L\in\Ecal$ such that
\begin{equation}
D_L\left(a^*_L,b^*_L\right) = \max_{a,b\in\Ecal}D_L(a,b),
\end{equation}
where we permit this maximum to be infinite. The symmetry of $\connf$ also implies that $D_L\left(b^*_L,a^*_L\right) = \max_{a,b\in\Ecal}D_L(a,b)$. Observe that since $\Ecal$ is finite, $\max_{a,b\in\Ecal}D_L(a,b)$ and all of the $\norm*{D_L}_{p\to q}$ are equivalent norms of $D_L$. Furthermore, the finiteness of $\Ecal$ and the convention that $\Pcal\left(a\right)>0$ for all $a\in\Ecal$ imply that $\liminf_{L\to \infty}\Pcal\left(a^*_L\right)>0$ and $\liminf_{L\to \infty}\Pcal\left(b^*_L\right)>0$. Now starting from $\origin{a^*_L}$, we aim to extract an infinite tree from $\C\left(\origin{a^*_L},\xi^\origin{a^*_L}\right)$.
By Mecke's formula, the expected number of neighbours of $\origin{a^*_L}$ in $\Vcal_0\times\left\{b^*_L\right\}$ is given by
\begin{multline}
\E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times\left\{b^*_L\right\}\colon \adja{\origin{a^*_L}}{y}{\xi^{\origin{a^*_L}}}\right\}\right] = \lambda\Pcal\left(b^*_L\right)\int_{\Vcal_0}\connf_L\left(x;a^*_L,b^*_L\right)\dd x \\= \lambda\Pcal\left(b^*_L\right)c_d\int^\infty_{2\Lcal_*\left(\theta\right)}\connf_L\left(r;a^*_L,b^*_L\right)\left(\sinh r\right)^{d-1}\dd r,
\end{multline}
where $c_d=c_d(\varepsilon)$ is the $(d-1)$-Lebesgue volume of the spherical cap $\left\{y\in \partial \mathbb{B}\colon \angle\left(y,\orig,x_0\right)<\varepsilon\right\}$. Then by using \eqref{eqn:EigenValueRatio}, for $L$ sufficiently large
\begin{multline}
\int^\infty_{2\Lcal_*\left(\theta\right)}\connf_L\left(r;a^*_L,b^*_L\right)\left(\sinh r\right)^{d-1}\dd r \geq \frac{1}{2}\int^\infty_{0}\connf_L\left(r;a^*_L,b^*_L\right)\left(\sinh r\right)^{d-1}\dd r \\= \frac{1}{2\mathfrak{S}_{d-1}}D_L\left(a^*_L,b^*_L\right).
\end{multline}
Therefore
\begin{equation}
\E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times\left\{b^*_L\right\}\colon \adja{\origin{a^*_L}}{y}{\xi^{\origin{a^*_L}}}\right\}\right]\geq \lambda\Pcal\left(b^*_L\right)\frac{c_d}{2\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right)
\end{equation}
for sufficiently large $L$. Since the number of such neighbours of $\origin{a^*_L}$ is a Poisson random variable,
\begin{align}
&\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_0\times\left\{b^*_L\right\}\colon \adja{\origin{a^*_L}}{y}{\xi^{\origin{a^*_L}}}\right)\nonumber\\
&\hspace{4cm}= 1 - \exp\left(-\E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times\left\{b^*_L\right\}\colon \adja{\origin{a^*_L}}{y}{\xi^{\origin{a^*_L}}}\right\}\right]\right)\nonumber\\
&\hspace{4cm} \geq 1 - \exp\left(-\frac{1}{2}\lambda\Pcal\left(b^*_L\right)\frac{c_d}{\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right)\right)
\end{align}
for $L$ sufficiently large. Similarly,
\begin{align}
\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_1\times\left\{b^*_L\right\}\colon \adja{\origin{a^*_L}}{y}{\xi^{\origin{a^*_L}}}\right) &\geq 1 - \exp\left(-\frac{1}{2}\lambda\Pcal\left(b^*_L\right)\frac{c_d}{\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right)\right)\\
\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_0\times\left\{a^*_L\right\}\colon \adja{\origin{b^*_L}}{y}{\xi^{\origin{b^*_L}}}\right) &\geq 1 - \exp\left(-\frac{1}{2}\lambda\Pcal\left(a^*_L\right)\frac{c_d}{\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right)\right)\\
\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_1\times\left\{a^*_L\right\}\colon \adja{\origin{b^*_L}}{y}{\xi^{\origin{b^*_L}}}\right) &\geq 1 - \exp\left(-\frac{1}{2}\lambda\Pcal\left(a^*_L\right)\frac{c_d}{\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right)\right).
\end{align}
We now construct our tree. Starting from the $0^{th}$ generation $\origin{a^*_L}$, we let its offspring be the nearest neighbour in the $\Vcal_0\times\left\{b^*_L\right\}$ branch and the nearest neighbour in the $\Vcal_1\times\left\{b^*_L\right\}$ branch (if either does not exist, then that branch produces no offspring). $\Vcal_0$ and $\Vcal_1$ are disjoint, so these offspring are independent. Furthermore, the vertex set that is further from $\origin{a^*_L}$ than these nearest neighbours is also independent of these neighbours. If a vertex $\left(x,b^*_L\right)$ is one of these offspring (i.e.
in generation $1$), then it selects its offspring to be its nearest neighbour in $t_{x}\left(\Vcal_0\right)\times\left\{a^*_L\right\}$ and its nearest neighbour in $t_{x}\left(\Vcal_1\right)\times\left\{a^*_L\right\}$. This is then repeated iteratively, alternating between $a^*_L$ and $b^*_L$ in each generation. From the construction of $\Vcal_0$ and $\Vcal_1$, each offspring is independent from each other offspring in its generation and the offspring in previous generations (except its ancestral line). This random tree in $\HypDim\times\Ecal$ does not go extinct with a positive probability if the probability of each offspring is strictly greater than $\frac{1}{2}$. Therefore if \begin{equation} \frac{1}{2}\lambda\min\left\{\Pcal\left(a^*_L\right),\Pcal\left(b^*_L\right)\right\}\frac{c_d}{\mathfrak{S}_{d-1}} D_L\left(a^*_L,b^*_L\right) > \log 2, \end{equation} then the tree survives with a positive probability. Since this tree is a subset of $\C\left(\origin{a^*_L},\xi^\origin{a^*_L}\right)$, the cluster is infinite with positive probability. Since $\Pcal\left(a^*_L\right)>0$, we therefore have \begin{equation} \lambda_{\mathrm{c}}(L) \leq \frac{2\mathfrak{S}_{d-1}\log 2}{c_d\min\left\{\Pcal\left(a^*_L\right),\Pcal\left(b^*_L\right)\right\} D_L\left(a^*_L,b^*_L\right)}. \end{equation} Then from $\liminf_{L\to \infty}\Pcal\left(a^*_L\right)>0$ and $\liminf_{L\to \infty}\Pcal\left(b^*_L\right)>0$, and the equivalence of norms, the result follows. \end{proof} There are two main difficulties in applying the above argument to cases with infinitely many marks. The first is that we no longer have equivalence on norms on kernels on $\Ecal^2$. The second (and more serious) is that it is now possible that the significant marks move around infinitely many sets in ways that are very specific to the model in question and are difficult to control. Both of these issues can be solved by using the volume-linear scaling function. The property that \begin{equation} D_L(a,b) = LD(a,b) \end{equation} for all $a,b\in\Ecal$ and $L$ ensures that the same marks are the significant ones for all scales. This - with the homogeneity of the norms - also implies that all $\norm*{D_L}_{p\to q} = L\norm*{D}_{p\to q}$. Therefore if $\norm*{D}_{p\to q}<\infty$, then the $L$ in the denominator in Lemma~\ref{lem:criticaltoOnetoOne-SpecialScaling} could be replaced with $\norm*{D_L}_{p\to q}$ like in Lemma~\ref{lem:criticaltoOnetoOne-finite}. \begin{lemma} \label{lem:criticaltoOnetoOne-SpecialScaling} If $\sigma_L(r)$ is volume-linear, then there exists $C_d=C_d(\connf)<\infty$ such that for $L$ sufficiently large, \begin{equation} \lambda_\mathrm{c}(L) \leq \frac{C_d}{L}. \end{equation} \end{lemma} \begin{proof} We will proceed similarly to the proof of the finitely many mark case, but instead of the two single marks $a^*_L$ and $b^*_L$, we will use sets $E,F\subset\Ecal$ and now our choice of $s_L$ will ensure that we do not need to vary these sets as $L$ changes. Since $\connf\not\equiv 0$, we have $\esssup_{a,b\in\Ecal}D(a,b)>0$. Therefore there exists $\varepsilon>0$ such that we can choose non-$\Pcal$-null sets $E,F\subset\Ecal$ such that \begin{equation} \label{eqn:lowerboundonD} \essinf_{a\in E,b\in F}D(a,b) \in\left(\varepsilon,\infty\right). \end{equation} Now let $a\in E$. 
By Mecke's formula, the expected number of neighbours of $\origin{a}$ in $\Vcal_1\times F$ is given by \begin{multline} \E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times F\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right\}\right] = \lambda\int_F\int_{\Vcal_0}\connf_L\left(x;a,b\right)\dd x\Pcal\left(\dd b\right) \\= \lambda c_d\int_F \int^\infty_{2\Lcal_*\left(\theta\right)} \connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r\Pcal\left(\dd b\right). \end{multline} Since $\connf\leq 1$ we have \begin{multline} \E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times F\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right\}\right] \\\geq \frac{\lambda c_d}{\mathfrak{S}_{d-1}} \int_F D_L\left(a,b\right)\Pcal\left(\dd b\right) - \lambda c_d\Pcal\left(F\right) \mathbf{V}_d\left(2\Lcal_*\left(\theta\right)\right). \end{multline} Since our scaling function is volume-linear we have $D_L(a,b)= L D(a,b)$ for all $a,b\in\Ecal$, and therefore \eqref{eqn:lowerboundonD} implies that for sufficiently large $L$ we have \begin{equation} \E_{\lambda,L}\left[\#\left\{y\in\eta\cap \Vcal_0\times F\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right\}\right] \geq \lambda \frac{c_d}{2\mathfrak{S}_{d-1}}L\varepsilon\Pcal\left(F\right). \end{equation} Repeating this argument in all our required cases gives \begin{align} \essinf_{a\in E}\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_0\times F\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right) &\geq 1 - \exp\left(-\lambda \frac{c_d}{2\mathfrak{S}_{d-1}}L\varepsilon\Pcal\left(F\right)\right)\\ \essinf_{a\in E}\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_1\times F\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right) &\geq 1 - \exp\left(-\lambda \frac{c_d}{2\mathfrak{S}_{d-1}}L\varepsilon\Pcal\left(F\right)\right)\\ \essinf_{a\in F}\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_0\times E\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right) &\geq 1 - \exp\left(-\lambda \frac{c_d}{2\mathfrak{S}_{d-1}}L\varepsilon\Pcal\left(E\right)\right)\\ \essinf_{a\in F}\mathbb{P}_{\lambda,L}\left(\exists y\in\eta\cap \Vcal_1\times E\colon \adja{\origin{a}}{y}{\xi^{\origin{a}}}\right) &\geq 1 - \exp\left(-\lambda \frac{c_d}{2\mathfrak{S}_{d-1}}L\varepsilon\Pcal\left(E\right)\right). \end{align} Constructing the tree in the same way, replacing the singletons $a^*_L$ and $b^*_L$ with the sets $E$ and $F$ then tells us that \begin{equation} \lambda_{\mathrm{c}}(L) \leq \frac{2\mathfrak{S}_{d-1}\log 2}{c_d\min\left\{\Pcal\left(E\right),\Pcal\left(F\right)\right\} L \varepsilon} \end{equation} for $L$ sufficiently large. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:NonUniqueness}] Both assumptions~\ref{assump:finitelymany} and \ref{assump:specialscale} with Lemma~\ref{lem:limitQd} imply that the integral $\int^\infty_0\connf_L\left(r;a,b\right)Q_d\left(r\right)\e^{\left(d-1\right)r}\dd r<\infty$ for $\Pcal$-almost every $a,b\in\Ecal$ and $\norm*{\widetilde{\Opconnf}_L(0)}_{2\to 2}<\infty$ for all $L$ sufficiently large. Therefore by applying Lemma~\ref{lem:RatioofNorms} to the adjacency function $\connf_L$, we see that $\norm*{\Opconnf_L}_{2\to 2} = o\left( \norm*{D_L}_{2\to 2}\right)$ as $L\to \infty$. Under Assumption~\ref{assump:finitelymany}, Lemma~\ref{lem:criticaltoOnetoOne-finite} then implies that $\lambda_c(L) =o\left( \norm*{\Opconnf_L}_{2\to 2}^{-1}\right)$ as $L\to \infty$. 
If $\norm*{D}_{p\to p}<\infty$ for any $p\in\left[1,\infty\right]$, then Lemma~\ref{lem:criticaltoOnetoOne-SpecialScaling} proves the same asymptotic behaviour under Assumption~\ref{assump:specialscale}. If $\norm*{D}_{2\to 2}=\infty$ (and therefore $\norm*{D}_{p\to p}=\infty$ for all $p\in\left[1,\infty\right]$ by the reasoning in Remark~\ref{rem:2to2isminimal}), then Lemma~\ref{lem:NormSublinear} is also required to show $\lambda_\mathrm{c}(L) \leq \frac{C_d}{L} \ll \frac{1}{\norm*{\Opconnf_L}_{2\to 2}}$ as $L\to\infty$. Finally Lemma~\ref{lem:Uniquenesstoqtoq} tells us that $\frac{1}{\norm*{\Opconnf_L}_{2\to 2}} \leq \lambda_{u}(L)$ for all $L$, and therefore for sufficiently large $L$ we have $\lambda_c(L)<\lambda_u(L)$. \end{proof} \subsection{Proof of Critical Exponents} \label{sec:ProofCritExponents} \begin{remark} If we were to consider Bernoulli bond percolation on a given graph, then usually the finiteness of the triangle diagram ($\triangle_\lambda$ in our case) at criticality would be sufficient to derive mean-field behaviour for various critical exponents. This is because finiteness would show that an \emph{open triangle condition} would hold, and it is this that is actually used to derive critical exponents. For Bernoulli bond percolation on $\Z^d$ with nearest neighbour edges the fact that the triangle condition implied the open triangle condition was proven in \cite{AizBar91}, and it was proven in \cite{kozma2011triangle} for any transitive graph. Both of these use the fact that the two-point function is of positive type on the set of vertices. However, the differentiability (and therefore continuity) of the two-point function for RCMs on $\Rd$ shown by \cite[Lemma~2.2]{HeyHofLasMat19} and the equality $\tau_0\left(\mathbf{x},\mathbf{y}\right) = \connf(\mathbf{x},\mathbf{y})$ shows that need not be true for RCMs for which the adjacency function is not itself of positive type (such as the Boolean disc model). \end{remark} \begin{lemma} \label{lem:TriangleIsSmall} If Assumption~\ref{assump:finitelymany} holds, or Assumption~\ref{assump:specialscale} and \begin{equation} \label{eqn:supintegralsquaredbound} \esssup_{a\in\Ecal} \int_{\Ecal}\int_{\HypDim}\connf(x;a,b)^2\mu\left(\dd x\right)\Pcal\left(\dd b\right)<\infty \end{equation} hold, then \begin{equation} \lim_{L\to\infty}\triangle_{\lambda_{\mathrm{c}}(L)}= 0. \end{equation} \end{lemma} \begin{proof} From Lemma~\ref{lem:TriangleFinite} and having $\lambda_{2\to 2}(L)>\lambda_{\mathrm{c}}(L)$ for sufficiently large $L$, the condition \eqref{eqn:supintegralsquaredbound} implies that $\triangle_{\lambda_{\mathrm{c}}(L)}<\infty$ for $L$ sufficiently large. We now need to show that it is in fact ``small". Repeating the argument of the proof of Lemma~\ref{lem:TriangleFinite}, we can show that \begin{multline} \triangle_{\lambda_{\mathrm{c}}(L)} \leq \left(\lambda_{\mathrm{c}}(L)\norm*{\OptlamCritL}_{2\to 2} + 2\lambda_{\mathrm{c}}(L)^2\norm*{\OptlamCritL}_{2\to 2}^2 + \lambda_{\mathrm{c}}(L)^3\norm*{\OptlamCritL}_{2\to 2}^3\right)\\\times\lambda_{\mathrm{c}}(L)\esssup_{a\in\Ecal} \int_{\Ecal}\int_{\HypDim}\connf(x;a,b)^2\mu\left(\dd x\right)\Pcal\left(\dd b\right). 
\end{multline} Now we can use $\connf_L\leq 1$ to get \begin{align} \lambda_{\mathrm{c}}(L)\esssup_{a\in\Ecal}\int_{\Ecal}\int_{\HypDim}\connf_L(z;a,b)^2\dd z\Pcal\left(\dd b\right) &\leq \lambda_{\mathrm{c}}(L)\esssup_{a\in\Ecal}\int_{\Ecal}D_L(a,b)\Pcal\left(\dd b\right)\nonumber\\ &= \lambda_{\mathrm{c}}(L)\norm*{D_L}_{1\to 1}\nonumber\\ &\leq C_d, \end{align} where $C_d$ is the constant arising in Lemma~\ref{lem:criticaltoOnetoOne-finite} or Lemma~\ref{lem:criticaltoOnetoOne-SpecialScaling} as appropriate. We therefore only need to show that $\lambda_{\mathrm{c}}(L)\norm*{\OptlamCritL}_{2\to 2}\to 0$ as $L\to\infty$. By using the bound \eqref{eqn:TwoPointBoundGreen} arising from bounding $\tlam$ with $\Greenlam$, we get \begin{equation} \lambda_{\mathrm{c}}(L)\norm*{\OptlamCritL}_{2\to 2} \leq \sum_{k=1}^\infty\lambda_{\mathrm{c}}(L)^k\norm*{\Opconnf_L}_{2\to 2}^k. \end{equation} Since we have $\lambda_{\mathrm{c}}(L)\ll\frac{1}{\norm*{\Opconnf_L}_{2\to 2}}$ as $L\to\infty$, we therefore have $\lambda_{\mathrm{c}}(L)\norm*{\OptlamCritL}_{2\to 2}\to 0$ and $\triangle_{\lambda_{\mathrm{c}}(L)}\to 0$ as $L\to\infty$. \end{proof} We now want to use results from \cite{caicedo2023criticalexponentsmarkedrandom} to show that a form of triangle condition does indeed imply that certain critical exponents take their mean-field values. Let us introduce some notation so we can use their result. Given $\lambda>0$, $a\in\Ecal$, and $L>0$, define \begin{align} \Ical_{\lambda,a}(L) &:= \left(\sup_{k\geq 1}\left(\frac{\lambda}{1+\lambda}\right)^k\essinf_{b\in\Ecal}D_L^{(k)}(a,b)\right)^{-1},\\ \Jcal_{\lambda,a}(L) &:= \left(\sup_{k\geq 1}\left(\frac{\lambda}{2\left(1+\lambda\right)}\right)^k\essinf_{b\in\Ecal}D_L^{(k)}(a,b)\right)^{-1},\\ \cbar{\lambda}(L) &:= 1+\lambda\esssup_{a,b,c\in\Ecal}D_L(a,b)\Jcal_{\lambda,c}(L),\\ \ConstantTriangle(L) &:= \min\left\{\frac{1}{\left(1+\lambda_T(L)\esssup_{a,b,c\in\Ecal}D_L(a,b)\Ical_{\lambda_T(L),c}(L)\right)^2},\right.\nonumber\\ &\hspace{3cm}\left.\frac{1}{\cbar{\lambda_T(L)}(L)}\frac{\lambda_T(L)^2\left(\essinf_{a\in\Ecal} \int D_L(a,b)\Pcal(\dd b)\right)^2}{1 + 2\lambda_T(L)\esssup_{a\in\Ecal}\int D_L(a,b)\Pcal(\dd b)}\right\}. \end{align} Following the naming from \cite{caicedo2023criticalexponentsmarkedrandom}, we have three conditions: \paragraph{Conditions:} \begin{enumerate}[label=\textbf{(D.\arabic*)}] \item \label{Assump:BoundExpectedDegree} \begin{equation} \esssup_{a,b\in\Ecal}D_L(a,b) < \infty, \label{eqn:supsupbound} \end{equation} \item \label{Assump:AllReachablebySome} \begin{equation} \esssup_{a\in\Ecal}\sup_{k\geq 1}\essinf_{b\in\Ecal}D_L^{(k)}(a,b) >0,\label{eqn:supinfbound} \end{equation} \end{enumerate} \begin{enumerate}[label=\textbf{(T)}] \item \begin{equation} \triangle_{\lambda_T(L)} < \ConstantTriangle(L). \end{equation}\label{TriangleCondition_Assumption} \end{enumerate} The following proposition is an amalgamation of Theorems~1.7, 1.8, 1.9, 1.11, and 1.12, and Corollary~1.10, all from \cite{caicedo2023criticalexponentsmarkedrandom}. The notation $\asymp$ is used to denote asymptotic equality ``in the bounded ratio sense." 
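Explicitly, for positive quantities $f$ and $g$, the statement $f \asymp g$ is taken here to mean that there exist constants $0<c\leq C<\infty$ such that \begin{equation} c\, g \leq f \leq C\, g \end{equation} on the relevant range of the argument (for $\lambda$ sufficiently close to $\lambda_{\mathrm{c}}(L)$, or for $n$ sufficiently large); in other words, $\asymp$ identifies the critical exponents but not the constants.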
\begin{prop}\label{prop:summaryCD24} If Conditions~\ref{Assump:BoundExpectedDegree}, \ref{Assump:AllReachablebySome}, and \ref{TriangleCondition_Assumption} all hold for some $L$, then \begin{equation} \begin{array}{rll} \norm*{\chi_{\lambda,L}}_p &\asymp \left(\lambda_{\mathrm{c}}(L)-\lambda\right)^{-1} &\text{as }\lambda\nearrow\lambda_{\mathrm{c}}(L),\\ \norm*{\theta_{\lambda,L}}_p &\asymp \lambda-\lambda_{\mathrm{c}}(L) &\text{as }\lambda\searrow\lambda_{\mathrm{c}}(L),\\ \mathbb{P}_{\lambda_\mathrm{c}(L),L}\left(\#\C\left(\origin{a},\xi^{\origin{a}}\right) = n\right) &\asymp n^{-1-\frac{1}{2}} &\text{as }n\to\infty, \end{array} \end{equation} for $\Pcal$-almost every $a\in\Ecal$, for all $p\in\left[1,\infty\right]$, and for the $L$ in question. In particular, $\lambda_T(L)=\lambda_\mathrm{c}(L)$. \end{prop} While the argument in \cite{caicedo2023criticalexponentsmarkedrandom} is phrased for RCMs on $\Rd\times\Ecal$, the argument follows in exactly the same way for RCMs on $\HypDim\times\Ecal$. Now if Conditions~\ref{Assump:BoundExpectedDegree}, \ref{Assump:AllReachablebySome}, and \ref{TriangleCondition_Assumption} can all be shown to hold, Theorem~\ref{thm:meanfield} will be proven. Conditions \ref{Assump:BoundExpectedDegree} and \ref{Assump:AllReachablebySome} follow fairly directly from the extra ingredients of Assumptions \ref{assump:finitelymanyPlus} and \ref{assump:specialscalePlus}. In Assumption~\ref{assump:finitelymanyPlus}, \eqref{eqn:maxmaxOverminsum} implies $\max_{a,b\in\Ecal} D_L(a,b)<\infty$ for all $L$ (since $D_L(a,b)$ is non-decreasing in $L$), and \eqref{eqn:kstepRatio} implies $\max_{a\in\Ecal}\sup_{k\geq 1}\min_{b\in\Ecal} D^{(k)}_L(a,b)>0$ for all $L$ sufficiently large. Therefore Assumption~\ref{assump:finitelymanyPlus} implies that Conditions~\ref{Assump:BoundExpectedDegree} and \ref{Assump:AllReachablebySome} of \cite{caicedo2023criticalexponentsmarkedrandom} both hold. In Assumption~\ref{assump:specialscalePlus}, the conditions \eqref{eqn:bounddegreedensity} and \eqref{eqn:strongIrreducibilityCondition} are precisely the Conditions~\ref{Assump:BoundExpectedDegree} and \ref{Assump:AllReachablebySome} of \cite{caicedo2023criticalexponentsmarkedrandom} for the model with the \emph{reference} adjacency function, $\connf$, and \eqref{eqn:infIntegralDegree} makes it feasible for Condition~\ref{TriangleCondition_Assumption} to hold. The choice of a volume-linear scaling then means that these values change in an explicit way, and we can show that all of \ref{Assump:BoundExpectedDegree}, \ref{Assump:AllReachablebySome}, and \ref{TriangleCondition_Assumption} hold for the \emph{scaled} versions of the model. This leaves only Condition~\ref{TriangleCondition_Assumption} to address. \begin{lemma} \label{lem:FiniteCase-ConstantPositive} If $\#\Ecal<\infty$, and \begin{align} \liminf_{L\to\infty}\sup_{k\geq 1}\frac{\min_{a,b\in\Ecal} D^{(k)}_L(a,b)}{\left(\max_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)\right)^k} &>0, \label{eqn:FiniteIrreducibility-ish}\\ \limsup_{L\to\infty}\frac{\max_{a,b\in\Ecal} D_L(a,b)}{\min_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)}&<\infty, \label{eqn:FiniteSameOrder} \end{align} both hold, then there exists $\varepsilon>0$ such that \begin{equation} \ConstantTriangle(L) \geq \varepsilon \end{equation} for $L$ sufficiently large.
\end{lemma} \begin{proof} For succinctness, let us denote \begin{align} \overline{A}_\star(L) &:= \max_{a,b\in\Ecal} D_L(a,b)\\ \overline{A}(L) &:= \max_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)\\ \underline{A}(L) &:= \min_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal\left(b\right)\\ \underline{A}^{(k)}_\star(L) &:= \min_{a,b\in\Ecal} D^{(k)}_L(a,b). \end{align} First observe that \begin{multline} \max_{a\in\Ecal}\Ical_{\lambda_T(L),a}(L) = \left(\min_{a\in\Ecal}\sup_{k\geq 1}\left(\frac{\lambda_T(L)}{1+\lambda_T(L)}\right)^k\min_{b\in\Ecal}D_L^{(k)}(a,b)\right)^{-1} \\ = \left(\sup_{k\geq 1}\left(\frac{\lambda_T(L)}{1+\lambda_T(L)}\right)^k\underline{A}^{(k)}_\star(L)\right)^{-1}. \end{multline} Then from Lemmata~\ref{lem:qtoqintensity_bound} and \ref{lem:criticaltoOnetoOne-finite} there exist constants $c_1,c_2\in\left(0,\infty\right)$ such that \begin{equation} \frac{c_1}{\overline{A}(L)} \leq \lambda_T(L) \leq \frac{c_2}{\overline{A}(L)}. \end{equation} This means that \begin{equation} \max_{a\in\Ecal}\Ical_{\lambda_T(L),a}(L) \leq \left(\sup_{k\geq 1}\left(\frac{c_1}{c_1 + \overline{A}(L)}\right)^k\underline{A}^{(k)}_\star(L)\right)^{-1}. \end{equation} Then \eqref{eqn:FiniteIrreducibility-ish} implies that there exists a constant $C<\infty$ such that \begin{align} \max_{a\in\Ecal}\Ical_{\lambda_T(L),a}(L) &\leq C\\ \max_{a\in\Ecal}\Jcal_{\lambda_T(L),a}(L) &\leq C \end{align} for all $L\geq 1$. We also have \begin{equation} \limsup_{L\to\infty}\frac{\overline{A}_\star(L)}{\overline{A}(L)}\leq \limsup_{L\to\infty}\frac{\overline{A}_\star(L)}{\underline{A}(L)}<\infty, \end{equation} which then implies that there exists a constant $C'<\infty$ such that \begin{align} \cbar{\lambda_T(L)}(L) &\leq 1 + C',\\ \left(1+\lambda_T(L)\max_{a,b,c\in\Ecal} D_L(a,b)\Ical_{\lambda_T(L),c}(L)\right)^{-2} &\geq \left(1+C'\right)^{-2}. \end{align} Therefore \begin{align} &\frac{1}{\cbar{\lambda_T(L)}(L)}\frac{\lambda_T(L)^2\left(\min_{a\in\Ecal} \sum_{b\in\Ecal} D_L(a,b)\Pcal( b)\right)^2}{1 + 2\lambda_T(L)\max_{a\in\Ecal}\sum_{b\in\Ecal} D_L(a,b)\Pcal( b)} \nonumber\\ &\hspace{7cm} \geq \frac{1}{1 + C'}\frac{\lambda_T(L)^2\underline{A}(L)^2}{1 + 2\lambda_T(L)\overline{A}(L)}\nonumber\\ &\hspace{7cm} \geq \frac{1}{1 + C'}\frac{c_1^2}{1 + 2c_2}\left(\frac{\underline{A}(L)}{\overline{A}(L)}\right)^2 \nonumber\\ &\hspace{7cm} \geq \frac{1}{1 + C'}\frac{c_1^2}{1 + 2c_2}\left(\frac{\overline{A}_\star(L)}{\underline{A}(L)}\right)^{-2}. \end{align} The condition \eqref{eqn:FiniteSameOrder} then produces the result. \end{proof} \begin{lemma} \label{lem:ScalingCase-ConstantPositive} If $\sigma_L$ is volume-linear, and \begin{align} \esssup_{a,b\in\Ecal} D(a,b)&<\infty \label{eqn:Dnormonetoone}\\ \essinf_{a\in\Ecal}\int_\Ecal D(a,b)\Pcal\left(\dd b\right)&>0\\ \esssup_{a\in\Ecal}\sup_{k\geq 1}\essinf_{b\in\Ecal}D^{(k)}(a,b)&>0 \label{eqn:StrongIrreducibility} \end{align} all hold, then there exists $\varepsilon>0$ such that \begin{equation} \ConstantTriangle(L) \geq \varepsilon \end{equation} for $L$ sufficiently large. \end{lemma} \begin{proof} First observe that the choice of $\sigma_L=s_L$ means that \begin{equation} D_L(a,b) = LD(a,b) \end{equation} for all $a,b\in\Ecal$ and $L>0$. 
This then means that \begin{align} \esssup_{a\in\Ecal}\int D_L(a,b)\Pcal(\dd b) &= L\esssup_{a\in\Ecal}\int D(a,b)\Pcal(\dd b),\\ \essinf_{a\in\Ecal}\int D_L(a,b)\Pcal(\dd b) &= L\essinf_{a\in\Ecal}\int D(a,b)\Pcal(\dd b),\\ \essinf_{b\in\Ecal}D_L^{(k)}(a,b) &= L^k\essinf_{b\in\Ecal}D^{(k)}(a,b) \qquad\forall a\in\Ecal,k\geq 1, \end{align} for all $L>0$. From Lemma~\ref{lem:qtoqintensity_bound}, we then have \begin{equation} \lambda_T(L) = \lambda_{1\to 1}(L) \geq \frac{1}{\norm*{\Opconnf_L}_{1\to 1}} = \frac{1}{\norm*{D_L}_{1\to 1}} = \frac{1}{L\norm*{D}_{1\to 1}}>0, \end{equation} where this positivity follows from \eqref{eqn:Dnormonetoone} implying $\norm*{D}_{1\to 1}<\infty$. Therefore for $L\geq 1$, \begin{multline} \esssup_{a\in\Ecal}\sup_{k\geq 1}\left(\frac{\lambda_T(L)}{1+\lambda_T(L)}\right)^k\essinf_{b\in\Ecal}D_L^{(k)}(a,b) \\\geq \esssup_{a\in\Ecal}\sup_{k\geq 1}\left(\frac{1}{\norm*{D}_{1\to 1}+1}\right)^k\essinf_{b\in\Ecal}D^{(k)}(a,b). \end{multline} In particular, this bound is $L$-independent and \eqref{eqn:StrongIrreducibility} implies that it is strictly positive. This then means that there exists $c<\infty$ such that \begin{align} \esssup_{a\in\Ecal}\Ical_{\lambda_T(L),a}(L) &\leq c,\\ \esssup_{a\in\Ecal}\Jcal_{\lambda_T(L),a}(L) &\leq c. \end{align} Therefore \begin{equation} \cbar{\lambda_T(L)}(L) \leq 1 + cL\lambda_T(L)\esssup_{a,b\in\Ecal}D(a,b) \leq 1 + \frac{c}{\norm*{D}_{1\to 1}}\esssup_{a,b\in\Ecal}D(a,b). \end{equation} Also recall from Lemma~\ref{lem:criticaltoOnetoOne-SpecialScaling} that \begin{equation} \lambda_T(L) \leq \frac{C_d}{\norm*{D}_{1\to 1}}. \end{equation} In summary, we have \begin{equation} \left(1+\lambda_T(L)\esssup_{a,b,c\in\Ecal}D_L(a,b)\Ical_{\lambda_T(L),c}(L)\right)^{-2} \geq \left(1 + \frac{c}{\norm*{D}_{1\to 1}}\esssup_{a,b\in\Ecal}D(a,b)\right)^{-2} >0, \end{equation} and \begin{multline} \frac{1}{\cbar{\lambda_T(L)}(L)}\frac{\lambda_T(L)^2\left(\essinf_{a\in\Ecal} \int D_L(a,b)\Pcal(\dd b)\right)^2}{1 + 2\lambda_T(L)\esssup_{a\in\Ecal}\int D_L(a,b)\Pcal(\dd b)} \\\geq \frac{1}{\left(\norm*{D}_{1\to 1} + c\esssup_{a,b\in\Ecal}D(a,b)\right)\left(\norm*{D}_{1\to 1}+2C_d\esssup_{a,b\in\Ecal}D(a,b)\right)}>0. \end{multline} Therefore we have the required strictly positive and $L$-independent lower bound for $\ConstantTriangle(L)$. \end{proof} \section{Proofs for Specific Models} \label{sec:ProofSpecificModels} \subsection{Boolean Disc Model} \label{sec:ProofBooleanModel} \begin{proof}[Proof of Corollary~\ref{lem:BooleanCorollary}] For part~\ref{enum:BooleanCorPart0}, first note that we have \begin{equation} \label{eqn:BooleanPtAitem1} \lambda_\mathrm{u}(L) \geq \frac{1}{\norm*{\Opconnf_L}_{2\to 2}} \end{equation} through Lemmata~\ref{lem:qtoqintensity_bound} and \ref{lem:Uniquenesstoqtoq}. In general we cannot directly apply Lemma~\ref{lem:RatioofNorms} to our model, but the argument needs only a slight modification for our case. Fix $R\in\left(0,\infty\right)$. Since $D^{(\leq R)}_L(a,b)\leq \mathfrak{S}_{d-1}\mathbf{V}_d\left(R\right)$ for all $L$ and all $a,b\in\Ecal$, \begin{equation} \norm*{D^{(\leq R)}_L}_{2\to 2} \leq \mathfrak{S}_{d-1}\mathbf{V}_d\left(R\right) \end{equation} for all $L$. Now fix $\varepsilon>0$ such that $\Pcal\left(\left(\varepsilon,\infty\right)\right)>0$ and consider $f(a):= \Pcal\left(\left(\varepsilon,\infty\right)\right)^{-\frac{1}{2}}\Id_{\left\{a>\varepsilon\right\}}$. 
We have $\norm*{f}_2=1$ and \begin{align} \norm*{D_L}_{2\to 2}^2 \geq \norm*{D_Lf}_2^2 &= \frac{\mathfrak{S}^2_{d-1}}{\Pcal\left(\left(\varepsilon,\infty\right)\right)}\int_{\Ecal}\left(\int_{\Ecal\cap\left(\varepsilon,\infty\right)}\mathbf{V}_d\left(\sigma_L\left(a+b\right)\right)\Pcal\left(\dd a\right)\right)^2\Pcal\left(\dd b\right)\nonumber\\ &\geq \frac{\mathfrak{S}^2_{d-1}}{\Pcal\left(\left(\varepsilon,\infty\right)\right)}\int_{\Ecal}\mathbf{V}_d\left(\sigma_L\left(\varepsilon\right)\right)^2\Pcal\left(\left(\varepsilon,\infty\right)\right)^2\Pcal\left(\dd b\right)\nonumber\\ & = \mathfrak{S}^2_{d-1}\Pcal\left(\left(\varepsilon,\infty\right)\right)\mathbf{V}_d\left(\sigma_L\left(\varepsilon\right)\right)^2 \to\infty \end{align} as $L\to\infty$. Therefore $\norm*{D^{(\leq R)}_L}_{2\to 2} = o\left(\norm*{D_L}_{2\to 2}\right)$ as $L\to\infty$ and the argument in Lemma~\ref{lem:RatioofNorms} proceeds in the same way to show \begin{equation} \label{eqn:BooleanPtAitem2} \lim_{L\to\infty}\frac{\norm*{\Opconnf_L}_{2\to 2}}{\norm*{D_L}_{2\to 2}} = 0. \end{equation} In general we also cannot directly apply Lemma~\ref{lem:criticaltoOnetoOne-finite} or Lemma~\ref{lem:criticaltoOnetoOne-SpecialScaling}. However, we can bound $\lambda_\mathrm{c}(L)$ for our model by the critical intensity for another model that we can apply Lemma~\ref{lem:criticaltoOnetoOne-finite} for. Recall $R^*:= \esssup\Ecal\in\left(0,\infty\right)$, and let $\lambda^*_\mathrm{c}(L)$ denote the percolation critical intensity for the same scaled Boolean disc model but with the finite mark space $\Ecal^*=\left\{0,R^*\right\}$ and $\Pcal^*\left(\left\{0\right\}\right) = \Pcal\left(\left(0,R^*\right)\right)$ and $\Pcal^*\left(\left\{R^*\right\}\right) = \Pcal\left(\left\{R^*\right\}\right)$. This can be coupled to the original scaled Boolean disc model in such a way that the graph resulting from the new model is a sub-graph of the graph resulting from the original model. Therefore \begin{equation} \lambda^*_\mathrm{c}\left(L\right) \geq \lambda_\mathrm{c}\left(L\right). \end{equation} We can now apply Lemma~\ref{lem:criticaltoOnetoOne-finite} to this new model. From the equivalence of norms on finite dimensional spaces, \begin{equation} \norm*{D^*_L}_{2\to 2} \asymp \mathbf{V}_d\left(\sigma_L\left(2R^*\right)\right) \end{equation} as $L\to \infty$. On the other hand, bounding by the Hilbert-Schmidt norm gives \begin{multline} \norm*{D_L}_{2\to 2} \leq \HSNorm{D_L} = \mathfrak{S}_{d-1}\left(\int_{\left(0,R^*\right]}\int_{\left(0,R^*\right]}\mathbf{V}_d\left(\sigma_L\left(a+b\right)\right)^2\Pcal\left(\dd a\right)\Pcal\left(\dd b\right)\right)^\frac{1}{2}\\ \leq \mathfrak{S}_{d-1}\mathbf{V}_d\left(\sigma_L\left(2R^*\right)\right). \end{multline} Therefore there exists a constant $C>0$ such that for sufficiently large $L$ \begin{equation} \label{eqn:BooleanPtAitem3} \lambda_\mathrm{c}(L) \leq \frac{C}{\norm*{D_L}_{2\to 2}}. \end{equation} Taken together, \eqref{eqn:BooleanPtAitem1}, \eqref{eqn:BooleanPtAitem2}, and \eqref{eqn:BooleanPtAitem3} are enough to prove part~\ref{enum:BooleanCorPart0}. To prove part~\ref{enum:BooleanCorPart1}, directly use Theorem~\ref{thm:NonUniqueness} with Assumption~\ref{assump:specialscale}. This requires that the $L^2\left(\Ecal\right)\to L^2\left(\Ecal\right)$ operator norm of the operator constructed from the function \begin{equation} \left(a,b\right)\mapsto \int^{a+b}_0r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r \end{equation} is finite. 
Let us denote this operator by $\Bcal$ (for ``Boolean''). We can bound the $L^2(\Ecal)\to L^2(\Ecal)$ operator norm by the Hilbert-Schmidt norm: \begin{equation} \norm*{\Bcal}_{2\to 2} \leq \HSNorm{\Bcal} = \left(\int^\infty_0\int^\infty_0\left(\int^{a+b}_0r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r\right)^2\Pcal\left(\dd a\right)\Pcal\left(\dd b\right)\right)^\frac{1}{2}. \end{equation} There exists a constant $C_d<\infty$ such that \begin{multline} \int^{a+b}_0r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r = \frac{4}{\left(d-1\right)^2}\left(1 - \left(1-\frac{1}{2}\left(d-1\right)\left(a+b\right)\right)\e^{\frac{1}{2}\left(d-1\right)\left(a+b\right)}\right)\\ \leq C_d\left(\left(1\vee a\right)+\left(1\vee b\right)\right)\e^{\frac{1}{2}\left(d-1\right)\left(a+b\right)}, \end{multline} so that \begin{align} \norm*{\Bcal}_{2\to 2}^2 &\leq C_d^2\int^\infty_0\int^\infty_0 \left(\left(1\vee a\right)+\left(1\vee b\right)\right)^2\e^{\left(d-1\right)\left(a+b\right)}\Pcal\left(\dd a\right)\Pcal\left(\dd b\right)\nonumber\\ & = C_d^2\int^\infty_0\int^\infty_0 \left(\left(1\vee a\right)^2+\left(1\vee b\right)^2\right)\e^{\left(d-1\right)\left(a+b\right)}\Pcal\left(\dd a\right)\Pcal\left(\dd b\right) \nonumber\\ &\hspace{4cm}+ 2C_d^2 \int^\infty_0\int^\infty_0 \left(1\vee a\right)\left(1\vee b\right)\e^{\left(d-1\right)\left(a+b\right)}\Pcal\left(\dd a\right)\Pcal\left(\dd b\right)\nonumber\\ & = 2C_d^2\left(\int^\infty_0\e^{\left(d-1\right)b}\Pcal\left(\dd b\right)\int^\infty_0\left(1\vee a\right)^2\e^{\left(d-1\right)a}\Pcal\left(\dd a\right)\right.\nonumber\\ &\hspace{4cm}+ \left.\left(\int^\infty_0\left(1\vee a\right)\e^{\left(d-1\right)a}\Pcal\left(\dd a\right)\right)^2\right)\nonumber\\ & \leq \left(2C_d \int^\infty_0\left(1\vee r^2\right)\e^{\left(d-1\right)r}\Pcal\left(\dd r\right)\right)^2. \end{align} This bound is finite precisely when $\int^\infty_0r^2\e^{\left(d-1\right)r}\Pcal\left(\dd r\right)<\infty$, and therefore the condition \eqref{eqn:FiniteExpectedVolume} is sufficient to apply Theorem~\ref{thm:NonUniqueness}. For part \ref{enum:BooleanCorPart2}, let us first consider the $L=1$ version. This version still has the interpretation in which a vertex with mark $a$ is assigned a ball of radius $a$ that is centred on it, and two vertices are adjacent in the RCM if their balls overlap. For the associated model on $\Rd$, \cite{hall1985continuum} gives an argument that every point in $\Rd$ is covered by a ball if and only if the expected size of the random ball is infinite. The same argument applies for the $\HypDim$ models when $L=1$, and the expected size of the balls is finite exactly when \eqref{eqn:FiniteExpectedVolume} holds. Therefore if this expectation is infinite, the union of all the balls almost surely equals $\HypDim$; since this set is connected and covers infinitely many vertices, it follows that $\lambda_{\mathrm{u}}(1)=\lambda_{\mathrm{c}}(1)=0$. For $L>1$, observe that this family of RCMs is monotone: there is a coupling of the $L=1$ model with the $L>1$ model such that if an edge exists in the $L=1$ model then it exists in the $L>1$ model. Therefore if almost every vertex is in the same cluster in the $L=1$ model, then the same is true for the $L>1$ model. For part \ref{enum:BooleanCorPart3}, use Theorem~\ref{thm:meanfield} with Assumption~\ref{assump:specialscalePlus}.
The form of $\connf$ and $\Pcal\left(\left\{0\right\}\right)<1$ imply that \eqref{eqn:infIntegralDegree} and \eqref{eqn:strongIrreducibilityCondition} hold, and it is simple to show that the boundedness of the support of $\Pcal$ implies the condition \eqref{eqn:bounddegreedensity} holds. \end{proof} \subsection{Weight-Dependent Hyperbolic Random Connection Models} \label{sec:ProofWDRCM} \begin{proof}[Proof of Corollary~\ref{cor:WDRCMGeneral}] First we aim to use Theorem~\ref{thm:NonUniqueness} with Assumption~\ref{assump:specialscale}. The main observation is that \begin{equation} \label{eqn:SeparateProfileandKernel} \int^\infty_0\connf\left(r;a,b\right) r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r = \kappa\left(a,b\right)\int^\infty_0\rho(r)r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r. \end{equation} Therefore condition \eqref{eqn:WDRCMprofileCondition} ensures that $\int^\infty_0\connf\left(r;a,b\right) r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r<\infty$ for $\Pcal$-almost every $a,b\in\Ecal$, and \eqref{eqn:L2normVolumeScale} becomes the requirement that \begin{equation} \norm*{\Kcal}_{2\to 2}\int^\infty_0\rho(r)r\exp\left(\frac{1}{2}\left(d-1\right)r\right)\dd r<\infty. \end{equation} This is then clearly satisfied if $\norm*{\Kcal}_{2\to 2}<\infty$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:WDRCMSpecific}] This follows directly from Corollary~\ref{cor:WDRCMGeneral} once one has evaluated $\norm*{\Kcal}_{2\to 2}$ for each model. These are indeed evaluated in Lemmata~\ref{lem:ProdKernel}--\ref{lem:PAkernel} below. \end{proof} \begin{lemma} \label{lem:ProdKernel} \begin{equation} \norm*{\Kcal^{\mathrm{prod}}}_{2\to 2} = \begin{cases} \frac{1}{1-2\zeta} &: \zeta\in\left(0,\frac{1}{2}\right)\\ \infty &: \zeta\geq \frac{1}{2}. \end{cases} \end{equation} \end{lemma} \begin{proof} We first consider $\zeta<\frac{1}{2}$ and evaluate $\HSNorm{\Kcal^{\mathrm{prod}}}$. We find \begin{equation} \HSNorm{\Kcal^{\mathrm{prod}}} = \left(\int^1_0 \int^1_0 a^{-2\zeta} b^{-2\zeta} \dd a\dd b\right)^\frac{1}{2} = \int^1_0 a^{-2\zeta}\dd a = \frac{1}{1-2\zeta}. \end{equation} Therefore $\norm*{\Kcal^{\mathrm{prod}}}_{2\to 2}\leq \frac{1}{1-2\zeta}$ for $\zeta<\frac{1}{2}$. Now let us consider the normalized $L^2$-function \begin{equation} f_1(a) := \left(1-2\zeta\right)^\frac{1}{2}a^{-\zeta}. \end{equation} Then \begin{equation} \left(\Kcal^{\mathrm{prod}}f_1\right)(a) = \frac{1}{\left(1-2\zeta\right)^{\frac{1}{2}}}a^{-\zeta}, \end{equation} and \begin{equation} \norm*{\Kcal^{\mathrm{prod}}f_1}_2 = \frac{1}{1-2\zeta}. \end{equation} Therefore $\norm*{\Kcal^{\mathrm{prod}}}_{2\to 2}= \frac{1}{1-2\zeta}$ for $\zeta<\frac{1}{2}$. For $\zeta\geq \frac{1}{2}$, let us consider $f_2\equiv 1$. It is clear that $f_2\in L^2$ with $\norm*{f_2}_2=1$. We then find \begin{equation} \left(\Kcal^{\mathrm{prod}}f_2\right)(a) = \frac{1}{1-\zeta}a^{-\zeta}. \end{equation} Note that for $\zeta\geq \frac{1}{2}$, $\norm*{\Kcal^{\mathrm{prod}}f_2}_2=\infty$ and $\Kcal^{\mathrm{prod}}f_2\not\in L^2$. Therefore $\norm*{\Kcal^{\mathrm{prod}}}_{2\to 2}= \infty$ for $\zeta\geq\frac{1}{2}$. \end{proof} \begin{lemma} \label{lem:StrongKernel} \begin{equation} \norm*{\Kcal^{\mathrm{strong}}}_{2\to 2}<\infty \iff \zeta<\frac{1}{2} \end{equation} \end{lemma} \begin{proof} For $\zeta<\frac{1}{2}$, we first show that by the symmetry of the kernel \begin{equation} \HSNorm{\Kcal^{\mathrm{strong}}}^2 = 2\int^1_0\left(\int^b_0 a^{-2\zeta} \dd a\right)\dd b = \frac{1}{\left(1-2\zeta\right)\left(1-\zeta\right)}. 
\end{equation} This then bounds $\norm*{\Kcal^{\mathrm{strong}}}_{2\to 2}<\infty$. For $\zeta\geq \frac{1}{2}$, let us consider $f\equiv 1$. We find that for $a\in\left(0,1\right)$ \begin{equation} \left(\Kcal^{\mathrm{strong}}f\right)(a) = a^{-\zeta}\left(1-a\right) + \int^a_0b^{-\zeta}\dd b = \begin{cases} a^{-\zeta}\left(1-a\right) + \frac{1}{1-\zeta}a^{1-\zeta} &\colon \zeta<1\\ \infty &\colon \zeta\geq 1. \end{cases} \end{equation} Clearly $\norm*{\Kcal^{\mathrm{strong}}f}_2=\infty$ if $\zeta\geq \frac{1}{2}$, and therefore $\norm*{\Kcal^{\mathrm{strong}}}_{2\to 2}=\infty$. \end{proof} While the product and strong (and therefore sum) kernels do indeed have parameter regimes for which $\norm*{\Kcal}_{2\to 2}<\infty$, we can now show that the weak and preferential attachment kernels do not, and therefore our results do not apply to them. \begin{lemma} \label{lem:WeakKernel} For all $\zeta>0$, $\norm*{\Kcal^{\mathrm{weak}}}_{2\to 2} = \infty$. \end{lemma} \begin{proof} For $\zeta>0$, let us consider $f(a)= a^{\zeta -\frac{1}{2}}$. We have $f\in L^2$ since $\norm*{f}_2 = \frac{1}{\sqrt{2\zeta}}<\infty$. Now for $a\in\left(0,1\right)$ \begin{equation} \left(\Kcal^{\mathrm{weak}}f\right)(a) = a^{-1-\zeta}\int^a_0b^{\zeta -\frac{1}{2}}\dd b + \int^1_ab^{-\frac{3}{2}}\dd b = \frac{1}{\zeta + \frac{1}{2}}a^{-\frac{1}{2}} + 2\left(a^{-\frac{1}{2}}-1\right). \end{equation} Therefore $\norm*{\Kcal^{\mathrm{weak}}f}_2=\infty$, and $\norm*{\Kcal^{\mathrm{weak}}}_{2\to 2} = \infty$. \end{proof} \begin{lemma} \label{lem:PAkernel} For all $\zeta>0$, $\norm*{\Kcal^{\mathrm{pa}}}_{2\to 2} = \infty$. \end{lemma} \begin{proof} First we consider $\zeta<\frac{1}{2}$ and the family of functions $f_\varepsilon(a) = a^{\zeta-1}\Id_{a>\varepsilon}$ for $\varepsilon>0$. We have \begin{equation} \norm*{f_\varepsilon}^2_2 = \int^1_\varepsilon a^{2\zeta-2}\dd a = \frac{1}{1-2\zeta}\left(\varepsilon^{2\zeta-1}-1\right). \end{equation} Note $\norm*{f_\varepsilon}_2<\infty$ for all $\varepsilon>0$, but that $\norm*{f_\varepsilon}_2\to\infty$ as $\varepsilon\to 0$. Now for $a\leq\varepsilon$ we have \begin{equation} \left(\Kcal^{\mathrm{pa}}f_\varepsilon\right)(a) = \frac{a^{-\zeta}}{1-2\zeta}\left(\varepsilon^{2\zeta-1}-1\right), \end{equation} and for $a>\varepsilon$ we have \begin{equation} \left(\Kcal^{\mathrm{pa}}f_\varepsilon\right)(a) = \frac{a^{-\zeta}}{1-2\zeta}\left(a^{2\zeta-1}-1\right) + a^{\zeta-1}\log\frac{a}{\varepsilon}. \end{equation} Since $\zeta<\frac{1}{2}$, we can identify the dominant term and find \begin{equation} \norm*{\Kcal^{\mathrm{pa}}f_\varepsilon}^2_2 \asymp \int^1_\varepsilon a^{2\zeta-2}\left(\log a\right)^2\dd a \end{equation} as $\varepsilon\to 0$. Since $\left(\log a\right)^2\to\infty$ as $a\to 0$, \begin{equation} \norm*{\Kcal^{\mathrm{pa}}f_\varepsilon}_2 \gg \norm*{f_\varepsilon}_2 \end{equation} as $\varepsilon\to 0$. Therefore $\norm*{\Kcal^{\mathrm{pa}}}_{2\to 2}=\infty$ for $\zeta<\frac{1}{2}$. For $\zeta\geq \frac{1}{2}$, consider $f\equiv 1$. Then \begin{equation} \left(\Kcal^{\mathrm{pa}}f\right)(a) = a^{-\zeta}\int^1_a b^{\zeta-1}\dd b + a^{\zeta-1}\int^a_0 b^{-\zeta}\dd b = \begin{cases} \frac{1}{\zeta}a^{-\zeta} - \frac{1}{\zeta} + \frac{1}{1-\zeta} &\colon \zeta<1\\ \infty &\colon \zeta\geq 1. \end{cases} \end{equation} Therefore $\norm*{\Kcal^{\mathrm{pa}}f}_2=\infty$ for $\zeta\geq \frac{1}{2}$, and $\norm*{\Kcal^{\mathrm{pa}}}_{2\to 2}=\infty$.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop:nonperturb}] First note that by Mecke's formula, the expected degree of a vertex with mark $a$ is given by \begin{multline} \lambda\mathfrak{S}_{d-1}\int^1_0\left(\int^\infty_0\rho\left(s^{-1}_{\kappa\left(a,b\right)}\left(r\right)\right)\left(\sinh r\right)^{d-1}\dd r\right) \dd b \\= \lambda\mathfrak{S}_{d-1} \int^1_0 \kappa\left(a,b\right)\left(\int^\infty_0\rho\left(r\right)\left(\sinh r\right)^{d-1}\dd r \right)\dd b. \end{multline} Therefore since $\kappa\left(a,b\right)>0$ for a positive measure of $\left(a,b\right)\in\left(0,1\right)^2$, the expected degree of a vertex with mark $a$ equals $\infty$ for a positive measure of $a$. In particular this occurs for any $\lambda>0$. Since the degree of a vertex is Poisson distributed given its mark, a vertex with infinite expected degree almost surely has infinitely many neighbours, and its cluster is therefore almost surely infinite. Hence $\theta_\lambda(a)=1$ for all $\lambda>0$ and a positive measure of $a$, and $\lambda_\mathrm{c}=0$. To show that $\lambda_{\mathrm{u}}>0$, we utilize Lemma~\ref{lem:Uniquenesstoqtoq} to show $\lambda_{\mathrm{u}}\geq \lambda_{2\to 2}$ and Lemma~\ref{lem:qtoqintensity_bound} to show $\lambda_{2\to 2}\geq \frac{1}{\norm*{\Opconnf}_{2\to 2}}$. Then the observation \eqref{eqn:SeparateProfileandKernel} means that \eqref{eqn:ProfileL2bound} implies that \begin{equation} \int^\infty_0\connf(r;a,b) Q_d\left(r\right)\exp\left(\left(d-1\right)r\right)\dd r<\infty \end{equation} for $\Pcal$-almost every $a,b\in\Ecal$, and Lemma~\ref{lem:TwoToTwoNormBound} shows that $\norm*{\Opconnf}_{2\to 2} = \norm*{\widetilde{\Opconnf}(0)}_{2\to 2}$ if the latter is finite. Since $\int^\infty_0\rho(r) Q_d(r)\left(\sinh r\right)^{d-1}\dd r<\infty$ if \eqref{eqn:ProfileL2bound} holds, $\norm*{\widetilde{\Opconnf}(0)}_{2\to 2}<\infty$ if $\norm*{\Kcal}_{2\to 2}<\infty$. Therefore $\lambda_{2\to 2}>0=\lambda_{\mathrm{c}}$. \end{proof} \begin{appendix} \section{Scaling Functions} \label{app:ScalingFunctions} The volume-linear scaling $s_L$ appearing in Assumptions~\ref{assump:specialscale} and \ref{assump:specialscalePlus} is special because it transforms radial integrals in a predictable way. Up to an $r$-independent constant, $\mathbf{V}_d(r)$ equals the volume of a hyperbolic ball with radius $r$, and $s_L$ can therefore be viewed as a conjugation between lengths and volumes. Therefore for any measurable function $\phi\colon \R_+\to \R_+$, we have \begin{equation} \label{eqn:volumeScalingProperty} \int_\HypDim\phi\left(s_L^{-1}\left(\dist{x,\orig}\right)\right)\mu\left(\dd x\right) = L\int_\HypDim\phi\left(\dist{x,\orig}\right)\mu\left(\dd x\right). \end{equation} Also observe that $L\mapsto s_L$ is a homomorphism: for all $L,M>0$ and $r>0$ \begin{equation} \label{eqn:scalingHomomorphism} s_M\left(s_L\left(r\right)\right) = \mathbf{V}_d^{-1}\left(M \mathbf{V}_d\left(\mathbf{V}_d^{-1}\left(L \mathbf{V}_d(r)\right)\right)\right) = \mathbf{V}_d^{-1}\left(ML \mathbf{V}_d(r)\right) = s_{ML}(r). \end{equation} In particular this means $s_L^{-1} = s_{1/L}$. To give a picture of what the function $s_L(r)$ actually looks like, it is elementary to evaluate its asymptotics. As $r,L\to\infty$, \begin{equation} s_L(r) \sim \begin{cases} L^\frac{1}{d}r &\colon r=\LandauBigO{L^{-\frac{1}{d}}}\\ \frac{1}{d-1}\log L &\colon L^{-\frac{1}{d}} \ll r \ll \log L\\ r &\colon \log L=\LandauBigO{r}.
\end{cases} \end{equation} Recall that we refer to the family of functions $\left\{\sigma_L\right\}_{L>0}$ as a scaling function if all $\sigma_L\colon\R_+\to \R_+$ are increasing bijections such that \begin{itemize} \item $\sigma_1(r)=r$, \item for all $r>0$, $L\mapsto \sigma_L(r)$ is increasing and $\lim_{L\to\infty}\sigma_L(r)=\infty$, \end{itemize} Also recall that Assumption~\ref{assump:finitelymany} required of the scaling and adjacency function that \begin{equation} \label{eqn:EigenValueRatioV2} \lim_{L\to\infty}\frac{\max_{a,b\in\Ecal}\int^R_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r}{\max_{a,b\in\Ecal}\int^\infty_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r} = 0 \end{equation} for all $R<\infty$. \begin{lemma} \label{lem:VolumeLinearNice} Suppose $\#\Ecal<\infty$. For all $\connf$ such that $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\dd r>0$, the volume-linear scaling function satisfies \eqref{eqn:EigenValueRatioV2}. \end{lemma} \begin{proof} Our bound for the numerator in \eqref{eqn:EigenValueRatioV2} is independent of the scaling function. For all $a,b\in \Ecal$ and $R\in\left[0,\infty\right]$, \begin{equation} \int^R_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r \leq \mathbf{V}_d\left(R\right). \end{equation} On the other hand, for all $a,b\in\Ecal$ the volume-linear scaling function means \begin{equation} \int^\infty_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r = L\int^\infty_0\connf\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r. \end{equation} Therefore, if $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\dd r>0$ then $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r>0$ and \begin{multline} \limsup_{L\to\infty}\frac{\max_{a,b\in\Ecal}\int^R_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r}{\max_{a,b\in\Ecal}\int^\infty_0\connf_L\left(r;a,b\right) \left(\sinh r\right)^{d-1}\dd r} \\\leq \frac{\mathbf{V}_d\left(R\right)}{\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r}\lim_{L\to\infty}\frac{1}{L}=0. \end{multline} \end{proof} An alternative natural option for a scaling function is the length-linear scaling function $\sigma_L(r) = L r$. However, the volume scaling effects of this simple length-linear scaling are rather complicated. In Euclidean $\Rd$, if we scale lengths by $L$ then we scale volumes by $L^d$. Specifically, if we have a measurable function $\phi\colon \R_+\to \R_+$ then \begin{equation} \int_{\Rd}\phi\left(\frac{\abs*{x}}{L}\right)\dd x = L^d \int_{\Rd}\phi\left(\abs*{x}\right)\dd x. \end{equation} On the other hand, there is no function $f\colon \R_+\to \R_+$ such that \begin{equation} \int_\HypDim\phi\left(\frac{\dist{x,\orig}}{L}\right)\mu\left(\dd x\right) = f(L)\int_\HypDim\phi\left(\dist{x,\orig}\right)\mu\left(\dd x\right) \end{equation} for all $\phi$. \begin{lemma} \label{lem:LenthLinearNice} Suppose $\#\Ecal<\infty$. For all $\connf$ such that $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\dd r>0$, the length-linear scaling function satisfies \eqref{eqn:EigenValueRatioV2}. \end{lemma} \begin{proof} As in the proof of Lemma~\ref{lem:VolumeLinearNice}, for all $a,b\in \Ecal$ and $R\in\left[0,\infty\right]$, \begin{equation} \int^R_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r \leq \mathbf{V}_d\left(R\right). 
\end{equation} On the other hand, for all $a,b\in\Ecal$ the length-linear scaling function means \begin{multline} \int^\infty_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r = L\int^\infty_0\connf\left(r;a,b\right)\left(\sinh L r\right)^{d-1}\dd r \\\geq L\int^\infty_0\connf\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r \end{multline} for $L\geq 1$. The proof then concludes in the same way as for Lemma~\ref{lem:VolumeLinearNice}. \end{proof} It is also worth noting that there are some adjacency functions for which \eqref{eqn:EigenValueRatioV2} holds for any scaling function. \begin{lemma} \label{lem:adjacencyNearOrigin} Suppose $\#\Ecal<\infty$ and that \begin{equation} \label{eqn:adjacencyNearOrigin} \liminf_{r\searrow 0}\max_{a,b\in\Ecal}\connf\left(r;a,b\right)>0. \end{equation} Then for any scaling function \eqref{eqn:EigenValueRatioV2} holds. \end{lemma} \begin{proof} As in the previous lemmata, the numerator in \eqref{eqn:EigenValueRatioV2} is bounded by $\mathbf{V}_d\left(R\right)$, and we aim to show the denominator diverges. Since $\lim_{L\to\infty}\sigma_L(r)=\infty$ for any $r>0$, $\lim_{L\to\infty}\sigma^{-1}_L(r)=0$ for any $r>0$ and \begin{equation} \liminf_{L\to\infty}\connf_L\left(r_\star;a,b\right) = \liminf_{r\to0}\connf\left(r;a,b\right) \end{equation} for all $a,b\in\Ecal$ and $r_\star>0$. Then by Fatou's lemma and our suppositions, \begin{multline} \liminf_{L\to\infty}\max_{a,b\in\Ecal}\int^\infty_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r \\\geq \int^\infty_0\left(\liminf_{L\to\infty}\max_{a,b\in\Ecal}\connf_L\left(r;a,b\right)\right)\left(\sinh r\right)^{d-1}\dd r = \infty \end{multline} as required. \end{proof} \begin{lemma} Suppose $\#\Ecal<\infty$, $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\dd r>0$, and that there exists $\varepsilon>0$ such that \begin{equation} \max_{a,b\in\Ecal}\esssup_{r<\varepsilon}\connf\left(r;a,b\right) = 0. \end{equation} Then for any scaling function \eqref{eqn:EigenValueRatioV2} holds. \end{lemma} \begin{proof} Given $R<\infty$, let $L_0=L_0(R)$ satisfy $\sigma_{L_0}\left(\varepsilon\right)> R$. Then for all $L\geq L_0$ \begin{equation} \max_{a,b\in\Ecal}\int^R_0\connf_L\left(r;a,b\right)\left(\sinh r\right)^{d-1}\dd r = 0. \end{equation} The assumption $\max_{a,b\in\Ecal}\int^\infty_0\connf\left(r;a,b\right)\dd r>0$ then ensures that the denominator of \eqref{eqn:EigenValueRatioV2} is strictly positive for all $L>0$, and therefore the result follows. \end{proof} One may initially (and incorrectly) expect that because scaling functions turn short edges into long edges the expected degree of a vertex will diverge. While this is true for the volume-linear and length-linear scaling functions, and for adjacency functions satisfying the condition \eqref{eqn:adjacencyNearOrigin} in Lemma~\ref{lem:adjacencyNearOrigin}, the following example shows that it is not true in general. \begin{example}\label{expl:annulus} Let $\Ecal$ be a singleton (so we can neglect it from the notation) and consider the adjacency function \begin{equation} \connf\left(r\right) = \Id_{\left\{1<r<2\right\}} \end{equation} and scaling function \begin{equation} \sigma_L\left(r\right) = \begin{cases} Lr &\colon r\leq 1,\\ L + a_L(r-1) &\colon 1< r\leq \frac{L}{1-a_L},\\ r &\colon r>\frac{L}{1-a_L}, \end{cases} \end{equation} where $a_L = o\left(\e^{-L\left(d-1\right)}\right)$.
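In this example the scaled adjacency function takes a particularly transparent form. Assuming, as in the computations elsewhere in this appendix, that $\connf_L\left(r\right) = \connf\left(\sigma_L^{-1}\left(r\right)\right)$, for all $L$ large enough that $2\leq\frac{L}{1-a_L}$ we have $\sigma_L(1)=L$ and $\sigma_L(2)=L+a_L$, and therefore \begin{equation} \connf_L\left(r\right) = \Id_{\left\{L<r<L+a_L\right\}}. \end{equation} That is, two vertices are adjacent in the scaled model only when their distance lies in the interval $\left(L,L+a_L\right)$; since the volume of the corresponding annulus is of order $a_L\e^{\left(d-1\right)L}$, the choice $a_L=o\left(\e^{-L\left(d-1\right)}\right)$ is precisely what makes the expected degree vanish, as the lemma below verifies.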
\begin{figure} \centering \begin{subfigure}{0.33\textwidth} \centering \begin{tikzpicture} \draw[->] (0,0) -- (0,4) node[left]{$\sigma_L$}; \draw[->] (0,0) -- (4,0) node[right]{$r$}; \draw[dashed] (0,0)--(4,4); \draw[very thick] (0,0) -- (0.5,2) -- (2.2,2.2) -- (4,4); \draw[dashed] (0.5,0) node[below]{$1$} -- (0.5,2) -- (0,2) node[left]{$L$}; \draw[dashed] (2.2,0) node[below]{$\frac{L}{1-a_L}$} -- (2.2,2.2); \end{tikzpicture} \caption{Example scaling function} \end{subfigure} \hfill \begin{subfigure}{0.65\textwidth} \centering \begin{tikzpicture}[scale=2] \draw[->] (0,0) -- (0,2) node[left]{$\connf$}; \draw[->] (0,0) -- (3,0) node[right]{$r$}; \draw[very thick] (1,1.5) -- (2,1.5); \draw[very thick] (0,0) -- (1,0); \draw[very thick] (2,0) -- (3,0); \draw[dashed] (1,1.5) -- (0,1.5) node[left]{$1$}; \draw[dashed] (1,0) node[below]{$1$} -- (1,1.5); \draw[dashed] (2,1.5) -- (2,0) node[below]{$2$}; \end{tikzpicture} \caption{Example adjacency function} \end{subfigure} \caption{Sketches of the example scaling and adjacency functions in Example~\ref{expl:annulus}.} \label{fig:ExampleAnnulus} \end{figure} \begin{lemma} For this example, \begin{equation} \lim_{L\to\infty}\mathbb{E}_{\lambda,L}\left[\#\left\{x\in\eta\colon \adja{\orig}{x}{\xi^\orig}\right\}\right] = 0. \end{equation} \end{lemma} \begin{proof} By Mecke's formula and the substitution $r=\sigma_L(y)$, \begin{multline} \mathbb{E}_{\lambda,L}\left[\#\left\{x\in\eta\colon \adja{\orig}{x}{\xi^\orig}\right\}\right] = \lambda\mathfrak{S}_{d-1}\int^\infty_0\connf_L\left(r\right)\left(\sinh r\right)^{d-1}\dd r \\= \lambda\mathfrak{S}_{d-1}\int^2_1\sigma'_L\left(y\right)\left(\sinh \sigma_L(y)\right)^{d-1}\dd y. \end{multline} The scaling function $\sigma_L$ is simple to differentiate (almost everywhere) and find \begin{equation} \sigma'\left(r\right) = \begin{cases} L &\colon r\leq 1,\\ a_L &\colon 1< r\leq \frac{L}{1-a_L},\\ 1 &\colon r>\frac{L}{1-a_L}, \end{cases} \end{equation} Therefore \begin{equation} \int^2_1\sigma'_L\left(y\right)\left(\sinh \sigma_L(y)\right)^{d-1}\dd y = a_L\int^2_1\left(\sinh \left(L + a_L(y-1)\right)\right)^{d-1}\dd y \leq 2a_L\e^{L\left(d-1\right)}\to 0, \end{equation} where the inequality holds for sufficiently large $L$ (so $a_L\leq \frac{\log 2}{d-1}$). \end{proof} \end{example} The following example demonstrates that condition \eqref{eqn:EigenValueRatioV2} does not follow from $\sigma_L$ being a scaling function, and it is indeed a required part of Assumption~\ref{assump:finitelymany}. \begin{example}\label{expl:manyAnnulii} Let $\Ecal$ be a singleton (so we can neglect it from the notation) and consider the adjacency function \begin{equation} \connf\left(r\right) = \sum^\infty_{i=0}\Id_{\left\{2^{-2i-1}<r<2^{-2i}\right\}}. \end{equation} Before defining our scaling function, we define a `dummy' (continuous) scaling function $\overline{\sigma}_L$ by its derivative. Fix $R\in\left(0,\infty\right)$ and for $L\geq R$ let \begin{equation} \overline{\sigma}'_L\left(r\right) = \begin{cases} L &\colon r<\frac{R}{L},\\ 1 &\colon \exists i\in\N_0 \colon r\in\left(2^{-2i-1},2^{-2i}\right)\cap\left(\frac{R}{L},1\right),\\ 2^{2i+2} &\colon \exists i\in\N_0 \colon r\in\left(2^{-2i-2},2^{-2i-1}\right)\cap\left(\frac{R}{L},1\right),\\ 1 &\colon r>1. \end{cases} \end{equation} Then we can choose $a_L\in\left(0,1\right)$ such that \begin{equation} \label{eqn:SizeofA_L} a_L = o\left(\frac{1}{\int^1_{\frac{1}{3}}\left(\sinh \overline{\sigma}_L(r)\right)^{d-1}\dd r}\right) \end{equation} as $L\to\infty$. 
Now for $L\geq R$ define the true (continuous) scaling function $\sigma_L$ by its derivative: \begin{equation} \sigma'_L\left(r\right) = \begin{cases} L &\colon r<\frac{R}{L},\\ a_L &\colon \exists i\in\N_0 \colon r\in\left(2^{-2i-1},2^{-2i}\right)\cap\left(\frac{R}{L},1\right),\\ 2^{2i+2} &\colon \exists i\in\N_0 \colon r\in\left(2^{-2i-2},2^{-2i-1}\right)\cap\left(\frac{R}{L},1\right),\\ 1 &\colon r>1. \end{cases} \end{equation} Observe that $\overline{\sigma}_L(r) \geq \sigma_L(r)$ for all $r\geq 0$. The idea behind this scaling function is that for $r<\sigma_L^{-1}\left(R\right)$ the scaling function is just the normal length-linear scaling function. Then for larger $r$ the scaling function is a ``ladder" that is very shallow when $\connf(r)>0$ and very steep when $\connf(r)=0$ (the derivative for $r> 1$ is unimportant other than being $\geq 1$). The precise steepness of the steep segments is chosen so that the total increase in $\sigma_L$ on each of these segments is exactly $1$, and the dummy scaling function was introduced so that we know how shallow the shallow segments needed to be. Let us check that $\sigma_L$ is indeed a scaling function. It is clearly strictly increasing in $r$ and diverges as $r\to\infty$. As $L$ increases, the transition point $\frac{R}{L}$ decreases towards $0$, and therefore more and more shallow intervals are revealed. Since each steep interval adds $1$ to $\sigma_L(r)$, we have $\sigma_L(r)\to\infty$ as $L\to\infty$ for all $r>0$. The same argument also shows that $\overline{\sigma}_L$ is a scaling function and in particular that $\lim_{L\to\infty} \overline{\sigma}_L(r)=\infty$ for all $r>0$. This then shows that necessarily $a_L\to0$ as $L\to\infty$. \begin{figure} \centering \includegraphics[width=\linewidth]{ScalingFunctionR025.pdf} \caption{Plot of the scaling function $\sigma_L$ in Example~\ref{expl:manyAnnulii} for $R=0.25$, using MATLAB.} \label{fig:ExampleManyAnnulus} \end{figure} Now let $\connf_L(r)$ be the scaled adjacency function using the true scaling function $\sigma_L$ (not the dummy scaling function $\overline{\sigma}_L$). \begin{lemma} \label{lem:BadExample} For this example, \eqref{eqn:EigenValueRatioV2} does \emph{not} hold. \end{lemma} \begin{proof} By changing variables $y=\sigma_L(r)$ (and back for the last equality) we have \begin{multline} \int^R_0\connf_L(r)\left(\sinh r\right)^{d-1}\dd r = L\int^\frac{R}{L}_0\connf(y)\left(\sinh Ly\right)^{d-1}\dd y \\ \geq L\int^\frac{R}{3L}_0\left(\sinh Ly\right)^{d-1}\dd y = \mathbf{V}_d\left(\frac{R}{3}\right). \end{multline} In the above inequality we have used that $\left(\sinh Ly\right)^{d-1}$ is increasing in $y$ to move the intervals on which $\connf(y)> 0$ towards $0$. The total Lebesgue measure of these intervals intersected with $\left(0,\frac{R}{L}\right)$ is always greater than or equal to $\frac{R}{3L}$ (achieved when $\frac{R}{L}$ coincides with the lower bound of one of these positive intervals), and therefore $\frac{R}{3L}$ appears in the integral bound. We now prove that \eqref{eqn:EigenValueRatioV2} does not hold by showing that \begin{equation} \int^\infty_R\connf_L(r)\left(\sinh r\right)^{d-1}\dd r \to 0 \end{equation} as $L\to \infty$. 
The same change of variables as above shows that \begin{multline} \int^\infty_R\connf_L(r)\left(\sinh r\right)^{d-1}\dd r = \int^\infty_\frac{R}{L}\connf(y)\sigma'_L(y)\left(\sinh \sigma_L\left(y\right)\right)^{d-1}\dd y \\\leq a_L \int^1_\frac{R}{L}\connf(y)\left(\sinh \overline{\sigma}_L\left(y\right)\right)^{d-1}\dd y, \end{multline} where in the inequality we have used that on $r>\frac{R}{L}$ the property $\connf(y)>0$ implies $\sigma'_L(y)=a_L$, that $\overline{\sigma}_L(y)\geq \sigma_L(y)$, and that $\connf(y)=0$ for $y>1$. Then since $\int^1_0\connf(y)\dd y = \frac{2}{3}$ and $\left(\sinh \overline{\sigma}_L\left(y\right)\right)^{d-1}$ is increasing in $y$, we can move the intervals of $\connf$ towards $1$ to get \begin{equation} a_L \int^1_\frac{R}{L}\connf(y)\left(\sinh \overline{\sigma}_L\left(y\right)\right)^{d-1}\dd y \leq a_L \int^1_\frac{1}{3}\left(\sinh \overline{\sigma}_L\left(y\right)\right)^{d-1}\dd y. \end{equation} The bound on $a_L$ in \eqref{eqn:SizeofA_L} then gives us the result. \end{proof} \end{example} \end{appendix} \vspace{5mm} \begin{acks} The author would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme \emph{Stochastic systems for anomalous diffusion}, where work on this paper was undertaken. This work was supported in parts by EPSRC grant EP/Z000580/1 and by NSERC of Canada. Thanks are also due to Petr Kosenko for guiding me to the spherical transform, Markus Heydenreich for their discussions about RCMs and percolation on hyperbolic spaces, and Gordon Slade for their advice on presenting the introduction. \end{acks} \bibliography{bibliography}{} \bibliographystyle{alpha} \end{document}
2412.12921v2
http://arxiv.org/abs/2412.12921v2
Equivariant and Invariant Parametrized Topological Complexity
\documentclass[12pt,reqno,a4paper]{amsart} \usepackage{ amsmath, amsfonts, amssymb, amsthm, amscd, gensymb, graphicx, comment, etoolbox, url, booktabs, stackrel, mathtools,enumitem, mathdots, microtype, lmodern, mathrsfs, graphicx, tikz, longtable,tabularx, float, tikz, pst-node, tikz-cd, multirow, tabularx, amscd, bm, array, makecell, diagbox, booktabs,ragged2e, caption, subcaption} \usepackage{makecell,slashbox} \usepackage{comment} \allowdisplaybreaks \makeatletter \def\@tocline#1#2#3#4#5#6#7{\relax \ifnum #1>\c@tocdepth \else \par \addpenalty\@secpenalty\addvspace{#2} \begingroup \hyphenpenalty\@M \@ifempty{#4}{ \@tempdima\csname r@tocindent\number#1\endcsname\relax }{ \@tempdima#4\relax } \parindent\z@ \leftskip#3\relax \advance\leftskip\@tempdima\relax \rightskip\@pnumwidth plus4em \parfillskip-\@pnumwidth #5\leavevmode\hskip-\@tempdima \ifcase #1 #6\nobreak\relax \hfill\hbox to\@pnumwidth{\@tocpagenum{#7}}\par \nobreak \endgroup } \makeatother \usepackage{xcolor} \usepackage[utf8]{inputenc} \usepackage{microtype, fullpage, wrapfig,textcomp,mathrsfs,csquotes,fbb} \usepackage[colorlinks=true, linkcolor=blue, citecolor=blue, urlcolor=blue, breaklinks=true]{hyperref} \usepackage[capitalise]{cleveref} \setlength{\marginparwidth}{2cm} \usepackage{todonotes} \usetikzlibrary{positioning} \usetikzlibrary{shapes,arrows.meta,calc} \usetikzlibrary{arrows} \newcommand{\bigzero}{\mbox{\normalfont\Large\bfseries 0}} \newcommand{\rvline}{\hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}} \newtheorem{theorem}{Theorem}[section] \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem] {Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{observation}[theorem]{Observation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[]{Question} \newtheorem*{theorem*}{Theorem} \def\S{\mathbb{S}} \newcommand\Q{\mathbb{Q}} \newcommand\R{\mathbb{R}} \newcommand\Z{\mathbb{Z}} \newcommand\C{\mathbb{C}} \newcommand\p{\mathfrak{p}} \newcommand{\cs}{\mathcal{S}} \newcommand{\TC}{\mathrm{TC}} \newcommand{\Gr}{\mathrm{Gr}} \newcommand{\ct}{\mathrm{cat}} \newcommand{\zl}{\mathrm{zcl}} \newcommand{\secat}{\mathrm{secat}} \newcommand{\nil}{\mathrm{nil}} \newcommand{\A}{\mathcal{A}} \newcommand{\co}{\mathrm{coind}} \newcommand{\ind}{\mathrm{ind}} \newcommand{\hts}{\mathrm{ht}} \def\mbar{\overline{M}} \newcommand{\malpha}{\mathrm{M}_{\alpha}} \newcommand{\mbalpha}{\overline{\mathrm{M}}_{\alpha}} \newcommand{\colim}{\mathrm{colim}} \newcolumntype{x}[1]{>{\centering\arraybackslash}p{#1}} \newcommand\diag[4]{ \multicolumn{1}{p{#2}|}{\hskip-\tabcolsep $\vcenter{\begin{tikzpicture}[baseline=0,anchor=south west,inner sep=#1] \path[use as bounding box] (0,0) rectangle (#2+2\tabcolsep,\baselineskip); \node[minimum width={#2+2\tabcolsep},minimum height=\baselineskip+\extrarowheight] (box) {}; \draw (box.north west) -- (box.south east); \node[anchor=south west] at (box.south west) {#3}; \node[anchor=north east] at (box.north east) {#4}; \end{tikzpicture}}$\hskip-\tabcolsep}} \def\sgnote#1{\textcolor{magenta}{#1}} \hbadness=99999 \hfuzz=999pt \begin{document} \title[]{Equivariant and Invariant Parametrized Topological Complexity} \author[R. 
Singh]{Ramandeep Singh Arora} \address{Department of Mathematics, Indian Institute of Science Education and Research Pune, India} \email{[email protected]} \email{[email protected]} \author[N. Daundkar]{Navnath Daundkar} \address{Department of Mathematics, Indian Institute of Science Education and Research Pune, India.} \email{[email protected]} \begin{abstract} For a $G$-equivariant fibration $p \colon E\to B$, we introduce and study the invariant analogue of Cohen, Farber and Weinberger's parametrized topological complexity, called the invariant parametrized topological complexity. This notion generalizes the invariant topological complexity introduced by Lubawski and Marzantowicz. We establish the equivariant fibrewise homotopy invariance of this notion and derive several bounds, including a cohomological lower bound and a dimensional upper bound. Additionally, we compare invariant parametrized topological complexity with other well-known invariants. When $G$ is a compact Lie group acting freely on $E$, we show that the invariant parametrized topological complexity of the $G$-fibration $p \colon E\to B$ coincides with the parametrized topological complexity of the induced fibration $\overline{p} \colon \overline{E} \to \overline{B}$ between the orbit spaces. Finally, under minor conditions, we compute the invariant parametrized topological complexity of equivariant Fadell-Neuwirth fibrations, which measures the complexity of motion planning in presence of obstacles having unknown positions such that the order in which they are placed is irrelevant. Apart from this, we establish several bounds, including a cohomological lower bound, an equivariant homotopy dimension-connectivity upper bound and various product inequalities for the equivariant sectional category. Applying them, we obtain some interesting results for equivariant and invariant parametrized topological complexity of a $G$-fibration. \end{abstract} \keywords{equivariant parametrized topological complexity, invariant topological complexity, equivariant sectional category, Lusternik-Schnirelmann category} \subjclass[2020]{55M30, 55R91, 55S40} \maketitle \section{Introduction} \label{sec:intro} The \emph{topological complexity} of a space $X$, denoted by $\TC(X)$, is defined as the smallest positive integer $k$ such that the product space $X \times X$ can be covered by open sets $\{U_1,\dots, U_k\}$, where each $U_i$ admits a continuous section of the free path space fibration \begin{equation} \label{eq: free path space fibration} \pi \colon PX\to X\times X \quad \text{defined by} \quad \pi(\gamma)=(\gamma(0),\gamma(1)), \end{equation} where $PX$ denotes the free path space of $X$ equipped with the compact-open topology. The concept of topological complexity was introduced by Farber in \cite{FarberTC} to analyze the computational challenges associated with motion planning algorithms for the configuration space $X$ of a mechanical system. Over the past two decades, this invariant has attracted significant attention and has been a subject of extensive research. \subsection*{Parameterized motion planning problem} Recently, a novel parametrized approach to the theory of motion planning algorithms was introduced in \cite{farber-para-tc, PTCcolfree}. This approach provides enhanced universality and flexibility, allowing motion planning algorithms to operate effectively in diverse scenarios by incorporating external conditions. These external conditions are treated as parameters and form an integral part of the algorithm's input. 
A parametrized motion planning algorithm takes as input a pair of configurations subject to the same external conditions and produces a continuous motion of the system that remains consistent with these external conditions. We now briefly define the concept of parametrized topological complexity. For a fibration $p \colon E \to B$, let $E\times_B E$ denote the fibre product, which is the space of all pairs of points in $E$ that lie in a common fibre of $p$. Let $E^I_B$ denote the space of all paths in $E$ whose images are contained within a single fibre. Define the \emph{parametrized endpoint map} \begin{equation} \label{eq: Pi} \Pi \colon E^I_B\to E\times_{B}E \quad \text{by} \quad \Pi(\gamma)=(\gamma(0),\gamma(1)). \end{equation} In \cite{farber-para-tc}, it is shown that $\Pi$ is a fibration. The \emph{parametrized topological complexity} of a fibration $p \colon E \to B$, denoted by $\TC[p \colon E\to B]$, is the smallest positive integer $k$ such that there is an open cover $\{U_1,\dots, U_k\}$ of $E \times_B E$, where each $U_i$ admits a continuous section of $\Pi$. For further details and interesting computational results for parametrized topological complexity, see \cite{farber-para-tc}, \cite{PTCcolfree}, \cite{ptcspherebundles} and \cite{minowa2024parametrized}. Additionally, the concept has been extended to fibrewise spaces by Garc\'{\i}a-Calcines in \cite{fibrewise}. On the other hand, Crabb \cite{crabb2023fibrewise} established some computational results in the fibrewise setting. One of the key motivations for introducing this concept was to address the challenge of collision-free motion planning in environments where the positions of the obstacles are not known in advance. This can be described by the following scenario: A military commander oversees a fleet of $t$ submarines navigating waters with $s$ mines. The positions of these mines change every 24 hours. Each day, the commander must determine a movement plan for each submarine, ensuring that they travel from their current locations to their designated destinations without colliding with either the mines or other submarines. A parametrized motion planning algorithm will take as input the positions of the mines and the current and desired positions of the submarines, and will produce as output a collision-free motion of the fleet. Hence, the complexity of the universal motion planning algorithm in this setting can be described as the parametrized topological complexity of the Fadell-Neuwirth fibration $$ p \colon F(\R^d,s+t) \to F(\R^d, s), \quad (x_1,\dots,x_s,y_1,\dots,y_t) \mapsto (x_1,\dots,x_s), $$ where $F(\R^d,s)$ is the configuration space of $s$ distinct points lying in $\R^d$; see \Cref{sec: examples}. However, in a real-life scenario, the specific order in which the mines are placed should be irrelevant. For any two configurations of mines $$ (x_1, \dots, x_s) \quad \text{ and } \quad (x_{\sigma(1)},\dots,x_{\sigma(s)}), $$ where $\sigma$ lies in the permutation group $\Sigma_s$, the military commander should assign the same motion plan to the submarines. This is because both configurations describe the mines being placed at the same set of positions, regardless of their labeling. Thus, we should consider the unordered configuration space $ F(\R^d,s)/\Sigma_s $ for the placement of mines.
Hence, in this new perspective, the complexity of the universal motion planning algorithm should be described as the parametrized topological complexity of the induced fibration $$ \overline{p} \colon \overline{F(\R^d,s+t)} \to \overline{F(\R^d, s)}, $$ which is obtained from $p$ by taking the quotient under the natural action of $\Sigma_s$ on the configuration spaces. In this paper, we introduce the notion of invariant parametrized topological complexity for a $G$-fibration $p\colon E\to B$, denoted by $\TC^G[p \colon E\to B]$, to measure the complexity of the parametrized motion planning problem in which the order in which the mines are placed is irrelevant. The invariant parametrized topological complexity is a parametrized analogue of the invariant topological complexity, which was introduced by Lubawski and Marzantowicz \cite{invarianttc}. The invariant topological complexity of a $G$-space $X$, denoted by $\TC^G(X)$, behaves well with respect to quotients. In particular, if a compact Lie group $G$ acts freely on $X$, then the equality $\TC^G(X) = \TC(X/G)$ holds (see \cite[Theorem 3.10]{invarianttc}). Generalizing this to the parametrized setting, we establish the following theorem. \begin{theorem*} Suppose $G$ is a compact Lie group. Let $p \colon E \to B$ be a $G$-fibration and let $\overline{p} \colon \overline{E} \to \overline{B}$ be the induced fibration between the orbit spaces. If the $G$-action on $E$ is free and $\overline{E} \times \overline{E}$ is hereditarily paracompact, then \[ \TC^{G}[p \colon E \rightarrow B] = \TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}]. \] \end{theorem*} \subsection*{Outline of the paper} The aim of this paper is twofold. First, we examine various properties of the equivariant sectional category and the equivariant parametrized topological complexity. Using these properties, we develop and analyze the new concept of invariant parametrized topological complexity, which we introduce in \Cref{sec: inv-para-tc}. In \Cref{subsec: eq-secat}, we study the equivariant sectional category of a $G$-fibration, and establish multiple lower bounds in \Cref{thm: cohomological lower bound for eq-secat}, \Cref{prop: secat(overline(p)) leq secat_G(p)} and \Cref{prop: subgroup-ineq for equi-secat}. We also provide an equivariant homotopy dimension-connectivity upper bound in \Cref{thm: G-homo-dim ub on equi-secat}. Afterwards, we establish product inequalities in \Cref{prop: special-prod-ineq for equi-secat} and \Cref{cor: equi-secat of pullback fibration under diagonal map}. In \Cref{subsec:eq-LS-category}, we recall the notion of the equivariant LS category of a $G$-space, and provide an equivariant homotopy dimension-connectivity upper bound, as stated in \Cref{thm: G-hom-dim on equi-cat}. Subsequently, \Cref{subsec:invtc} is devoted to the equivariant and invariant topological complexity of a $G$-space, and we provide an equivariant homotopy dimension-connectivity upper bound for the former in \Cref{thm: G-hom-dim ub for equi-tc}. In \Cref{sec: equi-para-tc}, we explore various properties of the equivariant parametrized topological complexity of $G$-fibrations $p\colon E\to B$. Our main result, \Cref{thm: equivalent defn of a section of equi-para-tc}, characterizes the elements of a parametrized motion planning cover as the $G$-compressible subsets of the fibre product $E\times_B E$ into the diagonal $\Delta(E)$.
Furthermore, we establish some lower bounds and the product inequalities in \Cref{prop: subgroup-ineq for equi-para-tc}, \Cref{thm: non-equi coho-lower-bound for equi-para-tc} and \Cref{thm: prod-ineq for equi-para-tc}, respectively. In \Cref{sec: inv-para-tc}, we introduce the notion of invariant parametrized topological complexity for $G$-fibrations. We establish the fibrewise $G$-homotopy invariance of this notion, and show that it generalizes both the parametrized and invariant topological complexity; see \Cref{thm:G-homotopy-invariace-of-inv-para-tc} and \Cref{prop: special cases of inv-para-tc}, respectively. For a $G$-fibration $p\colon E\to B$, in \Cref{thm: equivalent defn of a section of in-para-tc}, we show that the elements of invariant parametrized motion planning cover can be characterized as the $(G\times G)$-compressible subsets of the fibre product $E\times_{B/G} E$ into the saturated diagonal $\mathbb{k}(E) = E \times_{E/G} E$. In \Cref{subsec:prop-bounds}, we investigate various properties and bounds for $\TC^G[p\colon E \to B]$. For example, we establish dimensional upper bound (\Cref{prop: inva-para-tc bounds}), lower bound (\Cref{prop: subgroup-ineq for inv-para-tc}), cohomological lower bounds (\Cref{thm: cohomological lower bound for inv-para-tc} and \Cref{thm: non-equi coho-lower-bound for inv-para-tc}), and product inequality (\Cref{thm: prod-ineq for inv-para-tc}). Finally, we prove one of our main result, \Cref{thm: inv-para-tc under free action}, which shows that the $\TC^G[p\colon E\to B]$ coincides with the parametrized topological complexity of the corresponding orbit fibration, when $G$ acts freely on $E$. In \Cref{sec: examples}, we provide sharp estimates for the invariant parametrized topological complexity of the equivariant Fadell-Neuwirth fibrations. Specifically, in \Cref{thm:estimates-for-FN-fibrations}, for configuration spaces of odd-dimensional Euclidean spaces, under certain conditions, we compute the exact value of invariant parametrized topological complexity. For the configuration spaces of even-dimensional Euclidean spaces, we show that the upper bound and lower bound differ by $1$. \section{Preliminaries} \label{sec:preliminaries} In this section, we systematically introduce various numerical invariants: equivariant sectional category, equivariant LS-category, equivariant topological complexity, $A$-Lusternik-Schnirelmann $G$-category, and invariant topological complexity. In \Cref{subsec: eq-secat}, we first establish a cohomological lower bound on the equivariant sectional category of a $G$-fibration $p \colon E \to B$ using Borel cohomology. When $G$ is a compact Hausdorff topological group, we further show that the equivariant sectional category of $p$ is bounded below by the sectional category of $p$ and the induced fibration $\overline{p} \colon \overline{E} \to \overline{B}$ between the orbit spaces. We then define the notion of $G$-homotopy dimension for $G$-spaces and establish the equivariant dimension-connectivity upper bound on the equivariant sectional category. Additionally, we prove some properties of the equivariant sectional category. As a consequence to \Cref{thm: G-homo-dim ub on equi-secat}, we obtain the equivariant dimension connectivity upper bound on the equivariant LS category in \Cref{subsec:eq-LS-category}. Later in \Cref{subsec:invtc} we define the invariant topological complexity and recall the basic information related to Clapp-Puppe invariant of LS type in \Cref{subsec:Clapp-Puppe}. 
\subsection{Equivariant sectional category} \hfill\\ \vspace{-0.7em} \label{subsec: eq-secat} The notion of the sectional category of a fibration was introduced and studied by Schwarz \cite{Sva}, and was later extended to arbitrary maps by Bernstein and Ganea in \cite{secat}. The corresponding equivariant analogue was introduced by Colman and Grant in \cite{colmangranteqtc}. We will now briefly recall its definition and present the cohomological lower bound. \begin{definition}[{\cite[Definition 4.1]{colmangranteqtc}}\label{def:eqsecat}] Let $p \colon E\to B$ be a $G$-map. The equivariant sectional category of $p$, denoted by $\secat_G(p)$, is the least positive integer $k$ such that there is a $G$-invariant open cover $\{U_1, \ldots, U_k\}$ of $B$ and $G$-maps $s_i \colon U_i \to E$, for $i=1, \ldots, k$, such that $p \circ s_i \simeq_G i_{U_i}$, where $i_{U_i} \colon U_i \hookrightarrow B$ is the inclusion map. \end{definition} First, we establish the cohomological lower bound on the equivariant sectional category. To the best of our knowledge, such a bound has not been documented in the literature. We believe that this result must already be known to experts in the field. Nevertheless, we provide a thorough proof of this result here. Suppose $EG \to BG$ is a universal principal $G$-bundle. For a $G$-space $X$, let $X^h_{G}$ be the homotopy orbit space of $X$ defined as $$ X^h_{G} := EG \times_{G} X, $$ and the Borel $G$-equivariant cohomology $H^{*}_{G}(X;R)$ of $X$ with coefficients in a commutative ring $R$ is defined as $H^{*}_{G}(X;R) := H^{*}(X^h_{G};R)$. We note that for a $G$-map $p \colon E \to B$, there is an induced map $p^h_G \colon E^h_G \to B^h_G$. \begin{theorem}[Cohomological lower bound] \label{thm: cohomological lower bound for eq-secat} Suppose $p \colon E \to B$ is a $G$-map. If there are cohomology classes $u_1,\dots,u_k \in \widetilde{H}^{*}_{G}(B;R)$ (for any commutative ring $R$) with $$ (p^h_G)^*(u_1) = \cdots = (p^h_G)^{*}(u_k) = 0 \quad \text{and} \quad u_1 \smile \dots \smile u_k \neq 0, $$ then $\secat_G(p) > k$. \end{theorem} \begin{proof} Suppose $\secat_G(p) \leq k$. Then there exists a $G$-invariant open cover $\{U_1, \dots, U_k\}$ of $B$ such that each $U_i$ admits a $G$-equivariant homotopy section $s_i$ of $p$. Let $j_i \colon U_i \hookrightarrow B$ be the inclusion map. Then $$ ((j_i)^{h}_G)^*(u_i) = ((s_i)^{h}_G)^*\left((p^{h}_G)^*(u_i)\right) = 0 $$ since $p \circ s_i \simeq_G j_i$ implies $((j_i)^{h}_G)^* = ((s_i)^{h}_G)^* \circ (p^{h}_G)^*$. Hence, the long exact sequence in cohomology associated to the pair $(B^h_G,(U_i)^h_G)$ gives an element $v_i \in H^{*}(B^h_G,(U_i)^h_G;R)$ such that $((q_i)^{h}_G)^*(v_i)=u_i$, where $q_i \colon B \hookrightarrow (B,U_i)$ is the inclusion map. Hence, since the sets $(U_i)^h_G$ cover $B^h_G$, we get $$ v_1 \smile \cdots \smile v_k \in H^*(B^h_G,\cup_{i=1}^{k}(U_i)^h_G;R) = H^*(B^h_G,B^h_G;R) = 0. $$ Moreover, by the naturality of cup products, we have $(q^h_G)^*(v_1 \smile \cdots \smile v_k) = u_1 \smile \cdots \smile u_k$, where $q \colon B \hookrightarrow (B,B)$ is the inclusion map. Hence, $u_1 \smile \cdots \smile u_k = 0$. \end{proof} \begin{remark}\ \begin{enumerate} \item Observe that if the $G$-action is trivial, then the lower bound in \Cref{thm: cohomological lower bound for eq-secat} recovers the cohomological lower bound given by Schwarz in \cite[Theorem 4]{Sva}. \item Note the following commutative diagram of $G$-maps \[ \begin{tikzcd} X \arrow[dr, "\Delta"'] \arrow{rr}{h} & & PX \arrow{dl}{\pi} \\ & X\times X, \end{tikzcd} \] where $h$ is a $G$-homotopy equivalence.
Then, the lower bound in \Cref{thm: cohomological lower bound for eq-secat} recovers the bound \cite[Theorem 5.15]{colmangranteqtc} on $\TC_G(X)$, obtained by Colman and Grant. More generally, it also recovers the bound \cite[Theorem 4.25]{D-EqPTC} on the equivariant parametrized topological complexity, obtained by the second author. \end{enumerate} \end{remark} In practice, however, the difficulty of computing cup products in Borel cohomology (or more generally, in any equivariant cohomology) makes the problem cumbersome. We can then ask whether non-equivariant cohomological bounds can be utilized in some way. When $G$ is a compact Hausdorff topological group and $p \colon E \to B$ is a $G$-fibration, we will show that non-equivariant sectional category of $p$ and the induced fibration $\overline{p} \colon \overline{E} \to \overline{B}$ between the orbit spaces are lower bounds for the equivariant sectional category of $p$. Note that $\overline{p}$ fits into the commutative diagram \begin{equation} \label{diag: orbit-map comm-diag} \begin{tikzcd} E \arrow[d, "\pi_E"'] \arrow[r, "p"] & B \arrow[d, "\pi_B"] \\ \overline{E} \arrow[r, "\overline{p}"] & \overline{B} , \end{tikzcd} \end{equation} where $\pi_B \colon B \to \overline{B}$ and $\pi_E \colon E \to \overline{E}$ are orbit maps. \begin{proposition} \label{prop: secat(overline(p)) leq secat_G(p)} Suppose $p \colon E \to B$ is a $G$-fibration such that the induced map $\overline{p} \colon \overline{E} \to \overline{B}$ between orbit spaces is a fibration. Then \[ \secat(\overline{p}) \leq \secat_G(p). \] \end{proposition} \begin{proof} Suppose $U$ is a $G$-invariant open subset of $B$ with a $G$-equivariant section $s$ of $p$ over $U$. As the orbit map $\pi_B \colon B \to \overline{B}$ is open, we have $\overline{U} := \pi_B(U)$ is an open subset of $\overline{B}$. As $U$ is $G$-invariant, it follows $U$ is saturated with respect to $\pi_B$. Hence, $\pi_B \colon U \to \overline{U}$ is a quotient map. Then, by universal property of quotient maps, there exists a unique continuous map $\overline{s} \colon \overline{U} \to \overline{E}$ such that the following diagram \[ \begin{tikzcd} U \arrow[r, "\pi_E \circ s"] \arrow[d, "\pi_B"'] & \overline{E} \\ \overline{U} \arrow[ru, "\overline{s}"', dashed] & \end{tikzcd} \] commutes. Then $$ \overline{p} ( \overline{s} (\overline{b})) = \overline{p} (\overline{s} (\pi_B(b))) = \overline{p}(\pi_E(s(b))) = \pi_B (p(s(b))) = \pi_B(b) = \overline{b} $$ implies $\overline{s}$ is a section of $\overline{p}$ over $\overline{U}$. Thus, the result follows since $\pi_B \colon B \to \overline{B}$ is surjective. \end{proof} \begin{theorem} \label{thm: secat(overline(p)) = secat_G(p) for free actions} Suppose $p \colon E \to B$ is a $G$-fibration. If \begin{enumerate} \item $G$ is a compact Hausdorff topological group, then $\overline{p} \colon \overline{E} \to \overline{B}$ is a fibration, and $$ \secat(\overline{p}) \leq \secat_G(p). $$ \item $G$ is a compact Lie group such that $G$ acts freely on $B$, then $$ \secat(\overline{p}) = \secat_G(p). $$ \end{enumerate} \end{theorem} \begin{proof} By \cite[Corollary 2]{gevorgyan2023equivariant}, it follows that $\overline{p}$ is a fibration in both cases. Hence, the statement (1) follows from \Cref{prop: secat(overline(p)) leq secat_G(p)}. Now we will show the statement (2). Note that $G$ acts freely on $E$ since $p$ is a $G$-map. 
Hence, the diagram \eqref{diag: orbit-map comm-diag} is a pullback in the category of $G$-spaces, see the proof of \cite[Theorem II.7.3]{bredon-transformation-groups}. Suppose $\overline{U}$ is an open subset of $\overline{B}$ and $\overline{s} \colon \overline{U} \to \overline{E}$ is a section of $\overline{p}$. Let $U=\pi_B^{-1}(\overline{U})$. Then, by the universal property of pullbacks, there exists a unique $G$-map $s \colon U \to E$ such that $\pi_E \circ s = \overline{s} \circ \pi_B \colon U \to \overline{E}$ and $p \circ s = i_U \colon U \to B$. Hence, $\secat_G(p) \leq \secat(\overline{p})$. \end{proof} \begin{remark} Suppose $\pi \colon PX \to X \times X$ is the free path space fibration with $\mathbb{Z}_2$-action on $PX$ given by reversing the paths and on $X \times X$ by transposition of factors. Then, by \Cref{thm: secat(overline(p)) = secat_G(p) for free actions}(1), we get $\secat(\overline{\pi}) \leq \secat_{\mathbb{Z}_2}(\pi)$. Hence, we recover the cohomological lower bound on the symmetrized topological complexity $\TC^{\Sigma}(X)$ in \cite[Theorem 4.6]{grant2019symmetrized}, since the following commutative diagram \[ \begin{tikzcd} \overline{X} \arrow[rr, "\simeq"] \arrow[rd, "\overline{\Delta}"'] & & \overline{PX} \arrow[ld, "\overline{\pi}"] \\ & \overline{X \times X} & \end{tikzcd} \] implies that the kernels of the maps induced in cohomology by $\overline{\Delta}$ and $\overline{\pi}$ coincide, and hence have the same nilpotency. \end{remark} Suppose $X$ is a $G$-space. For a subgroup $H$ of $G$, define the $H$-fixed point subspace of $X$ as $$ X^H := \{x \in X \mid h \cdot x = x \text{ for all } h \in H\}. $$ \begin{proposition} \label{prop: subgroup-ineq for equi-secat} Suppose $p \colon E \to B$ is a $G$-fibration. If $H$ and $K$ are subgroups of $G$ such that the fixed point map $p^H \colon E^H \to B^H$ is a $K$-fibration, then $$ \secat_K(p^H) \leq \secat_G(p). $$ \end{proposition} \begin{proof} Suppose $s \colon U \to E$ is a $G$-equivariant section of $p$. Define $V = U \cap B^H$. Note that $V$ is the set of $H$-fixed points of $U$ and is $K$-invariant. Since $s$ is $G$-equivariant, it takes $H$-fixed points to $H$-fixed points, and hence restricts to a $K$-equivariant map $\left.s\right |_{V} \colon V \to E^H$. It is clear that $\left.s\right|_{V}$ is a section of $p^H$ over $V$. \end{proof} \begin{corollary} \label{cor: fixed-point-and-subgroup-ineq for equi-secat} Suppose $G$ is a compact Hausdorff topological group and $p \colon E \to B$ is a $G$-fibration. Then \begin{enumerate} \item the fixed point map $p^H \colon E^H \to B^H$ is a fibration for all closed subgroups $H$ of $G$, and $$ \secat(p^H) \leq \secat_G(p). $$ \item $p \colon E \to B$ is a $K$-fibration for all closed subgroups $K$ of $G$, and $$ \secat(p) \leq \secat_K(p) \leq \secat_G(p). $$ \end{enumerate} \end{corollary} \begin{proof} Suppose $T$ is the trivial subgroup of $G$. By \cite[Theorem 4]{gevorgyan2023equivariant}, we know that for a $G$-fibration $p$, the fixed point map $p^H \colon E^H \to B^H$ is a fibration for all closed subgroups $H$ of $G$. Hence, by taking $K=T$ in \Cref{prop: subgroup-ineq for equi-secat}, it follows that $\secat(p^H) =\secat_T(p^H) \leq \secat_G(p)$. By \cite[Theorem 3]{gevorgyan2023equivariant}, we know that $p$ is a $K$-fibration for all closed subgroups $K$ of $G$. Hence, by taking $H = T$ in \Cref{prop: subgroup-ineq for equi-secat}, we get the subgroup inequality $\secat_K(p) = \secat_K(p^T) \leq \secat_G(p)$. Note that $T$ is a closed subgroup of the compact Hausdorff topological group $K$.
Hence, applying the subgroup inequality for the $K$-fibration $p$, we get $\secat(p) = \secat_T(p) \leq \secat_K(p)$. \end{proof} The following proposition states some basic properties of the equivariant sectional category. Proofs are left to the reader. For analogous results concerning the non-equivariant sectional category, refer to \cite[Lemma 2.1]{Grant-ParaTC-group}. \begin{proposition} \label{prop: properties of equi-sec-cat} Suppose $p \colon E \to B$ is a $G$-map. \begin{enumerate} \item If $p' \colon E \to B$ is $G$-homotopic to $p$, then $\secat_G(p') = \secat_G(p)$. \item If $h \colon E' \to E$ is a $G$-homotopy equivalence, then $\secat_G(p \circ h) = \secat_G(p)$. \item If $f \colon B \to B'$ is a $G$-homotopy equivalence, then $\secat_G(f \circ p) = \secat_G(p)$. \end{enumerate} \end{proposition} \begin{corollary} \label{cor: equi-sec-cat under homotopy equivalence pullback} Suppose $p \colon E \to B$ is a $G$-fibration. If $g \colon B' \to B$ is a $G$-homotopy equivalence and $p' \colon E' \to B'$ is the pullback of $p$ along $g$, then $$ \secat_G(p') = \secat_G(p). $$ \end{corollary} \begin{proof} Suppose the following diagram is a pullback \[ \begin{tikzcd} E' \arrow[r, "h"] \arrow[d, "p'"'] & E \arrow[d, "p"] \\ B' \arrow[r, "g"] & B. \end{tikzcd} \] Since $g$ is a $G$-homotopy equivalence and $p$ is a $G$-fibration, it follows that $h$ is also a $G$-homotopy equivalence. Hence, by \Cref{prop: properties of equi-sec-cat}, we get $$ \secat_G(p') = \secat_G(g \circ p') = \secat_G(p \circ h) = \secat_G(p). $$ \end{proof} Generalizing Schwarz's dimension-connectivity upper bound on the sectional category, Grant established the corresponding equivariant analogue for the equivariant sectional category in \cite[Theorem 3.5]{grant2019symmetrized}. We extend this approach to derive the equivariant version of the homotopy dimension-connectivity upper bound for the equivariant sectional category. To achieve this, we first introduce the notion of $G$-homotopy dimension for $G$-spaces. \begin{definition} Suppose $X$ is a $G$-$\mathrm{CW}$-complex. The $G$-homotopy dimension of $X$, denoted $\mathrm{hdim}_{G}(X)$, is defined to be $$ \mathrm{hdim}_{G}(X) := \min\{\dim(X') \mid X' \text{ is a $G$-}\mathrm{CW}\text{-complex}, X'\simeq_G X\}. $$ \end{definition} We are now ready to state the equivariant homotopy dimension-connectivity upper bound. \begin{theorem} \label{thm: G-homo-dim ub on equi-secat} Suppose $p \colon E \to B$ is a Serre $G$-fibration with fibre $F$, whose base $B$ is a $G$-$\mathrm{CW}$-complex of dimension at least $2$. If there exists $s \geq 0$ such that the fibre of $p^H \colon E^H \to B^H$ is $(s-1)$-connected for all subgroups $H$ of $G$, then $$ \secat_{G}(p) < \frac{\mathrm{hdim}_{G}(B)+1}{s+1} + 1. $$ \end{theorem} \begin{proof} It is enough to show that for any $G$-$\mathrm{CW}$-complex $B'$ which is $G$-homotopy equivalent to $B$, we have $$ \secat_{G}(p) < \frac{\dim(B')+1}{s+1}+1. $$ Suppose $f \colon B' \to B$ is a $G$-homotopy equivalence between the $G$-$\mathrm{CW}$-complexes $B'$ and $B$, and $p' \colon E' \to B'$ is the pullback of $p$ along $f$. Then, by \Cref{cor: equi-sec-cat under homotopy equivalence pullback}, we have $\secat_{G}(p') = \secat_{G}(p)$. Since the fibre of $p'$ is also $F$, we get $$ \secat_{G}(p') < \frac{\dim(B')+1}{s+1}+1, $$ by \cite[Theorem 3.5]{grant2019symmetrized}. \end{proof} Our next aim is to establish the product inequality for the equivariant sectional category. Before proving it, we define some useful notions.
\begin{definition}\ \begin{enumerate} \item A topological space $X$ is called completely normal if, for any two subsets $A$ and $B$ of $X$ with $\overline{A} \cap B = A \cap \overline{B} = \emptyset$, there exist disjoint open subsets of $X$ containing $A$ and $B$, respectively. \item A $G$-space $X$ is called $G$-completely normal if, for any two $G$-invariant subsets $A$ and $B$ of $X$ with $\overline{A} \cap B = A \cap \overline{B} = \emptyset$, there exist disjoint $G$-invariant open subsets of $X$ containing $A$ and $B$, respectively. \end{enumerate} \end{definition} \begin{lemma}[{\cite[Lemma 3.12]{colmangranteqtc}}] \label{lemma: completely normal G-space is G-completely normal} Suppose that $G$ is a compact Hausdorff topological group acting continuously on a Hausdorff topological space $X$. If $X$ is completely normal, then $X$ is $G$-completely normal. \end{lemma} \begin{proposition} \label{prop: special-prod-ineq for equi-secat} Suppose $p_i \colon E_i \to B_i$ is a $G$-fibration for $i =1,2$. If $G$ is compact Hausdorff, then $p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2$ is a $G$-fibration, where $G$ acts on $E_1 \times E_2$ and $B_1 \times B_2$ diagonally. Furthermore, if $B_1$ and $B_2$ are Hausdorff, and $B_1 \times B_2$ is completely normal, then $$ \secat_{G}(p_1\times p_2) \leq \secat_{G}(p_1) + \secat_{G}(p_2) - 1. $$ \end{proposition} \begin{proof} Suppose $G$ is compact Hausdorff. Then identifying $G$ with the diagonal subgroup of $G\times G$, we see that it is a closed subgroup of $G\times G$. Hence, by \cite[Theorem 3]{gevorgyan2023equivariant}, it follows that $p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2$ is a $G$-fibration, where $G$ acts diagonally on the spaces $E_1 \times E_2$ and $B_1 \times B_2$. If $B_1$ and $B_2$ are Hausdorff, and $B_1 \times B_2$ is completely normal, then \Cref{lemma: completely normal G-space is G-completely normal} implies $B_1 \times B_2$ is $(G \times G)$-completely normal. Hence, the desired inequality $$ \secat_{G}(p_1 \times p_2) \leq \secat_{G\times G}(p_1 \times p_2) \leq \secat_G(p_1) + \secat_G(p_2) - 1 $$ follows from \Cref{cor: fixed-point-and-subgroup-ineq for equi-secat} (2) and \cite[Proposition 2.9]{A-D-S}. \end{proof} \begin{corollary} \label{cor: equi-secat of pullback fibration under diagonal map} Suppose $p_i \colon E_i \to B$ is a $G$-fibration for $i =1,2$. Let $E_1 \times_B E_2 = \{(e_1,e_2) \in E_1 \times E_2 \mid p_1(e_1) = p_2(e_2)\}$ and let $p\colon E_1 \times_B E_2 \to B$ be the $G$-map given by $p(e_1,e_2)=p_1(e_1)=p_2(e_2)$, where $G$ acts on $E_1 \times_B E_2$ diagonally. If $G$ is compact Hausdorff, then $p$ is a $G$-fibration. Furthermore, if $B$ is Hausdorff and $B \times B$ is completely normal, then \[ \secat_G(p) \leq \secat_G(p_1) + \secat_G(p_2) -1. \] \end{corollary} \begin{proof} Note that the following diagram \[ \begin{tikzcd} E_1 \times_B E_2 \arrow[r, hook] \arrow[d, "p"'] & E_1 \times E_2 \arrow[d, "p_1 \times p_2"] \\ B \arrow[r, "\Delta"] & B \times B \end{tikzcd} \] is a pullback in the category of $G$-spaces, where $\Delta \colon B \to B \times B$ is the diagonal map. In \Cref{prop: special-prod-ineq for equi-secat}, we showed that $p_1 \times p_2$ is a $G$-fibration if $G$ is compact Hausdorff. Hence, $p$ is a $G$-fibration, being the pullback of the $G$-fibration $p_1 \times p_2$ along the $G$-map $\Delta$.
Thus, the desired inequality \begin{align*} \secat_{G}(p) \leq \secat_{G}(p_1 \times p_2) \leq \secat_G(p_1) + \secat_G(p_2) - 1 \end{align*} follows from \cite[Proposition 4.3]{colmangranteqtc} and \Cref{prop: special-prod-ineq for equi-secat}. \end{proof} \subsection{Equivariant LS-category} \hfill\\ \vspace{-0.7em} \label{subsec:eq-LS-category} The notion of Lusternik-Schnirelmann (LS) category was introduced by Lusternik and Schnirelmann in \cite{LScat}. In this subsection, we collect some basic information about the corresponding equivariant analogue of this notion and establish the homotopy dimension-connectivity upper bound. We now define some notions needed to state and prove the desired result of this subsection. A $G$-invariant subset $U$ of a $G$-space $X$ is said to be \emph{$G$-categorical} if the inclusion map $i_U \colon U \hookrightarrow X$ is $G$-homotopic to a map which takes values in a single orbit. \begin{definition}\cite{Fadelleqcat} The equivariant LS-category of a $G$-space $X$, denoted by $\ct_G(X)$, is the least positive integer $k$ such that there exists a $G$-categorical open cover $\{U_1,\dots,U_k\}$ of $X$. \end{definition} \begin{definition} A $G$-space $X$ is said to be \emph{$G$-connected} if $X^H$ is path-connected for every closed subgroup $H$ of $G$. \end{definition} Let $X$ be a $G$-space, and $x_0 \in X$. Define the path space of $(X,x_0)$ as $$ P_{x_0}X = \{\alpha \colon I \to X \mid \alpha(0) = x_0\}. $$ Then the map $e_X \colon P_{x_0}X \to X$, given by $e_X(\alpha)=\alpha(1)$, is a fibration. Moreover, if the point $x_0$ is fixed under the $G$-action, then $e_X$ is a $G$-fibration, where $P_{x_0}X$ admits a $G$-action via $(g\cdot \alpha)(t) := g\cdot \alpha(t)$. We note that the fibre of $e_X$ is the loop space $\Omega X = (e_X)^{-1}(x_0)$ of $X$, and the $G$-action on $P_{x_0}X$ restricts to a $G$-action on $\Omega X$. \begin{lemma}[{\cite[Corollary 4.7]{colmangranteqtc}}] \label{lemma: catG(X) = secatG(X)} If $X$ is a $G$-space such that $X$ is $G$-connected and $x_0 \in X^G$, then $$ \ct_G(X) = \secat_G(e_X). $$ \end{lemma} Now, as a consequence of \Cref{thm: G-homo-dim ub on equi-secat}, we obtain the equivariant homotopy dimension-connectivity upper bound on the equivariant LS category. \begin{theorem} \label{thm: G-hom-dim on equi-cat} Suppose $X$ is a $G$-$\mathrm{CW}$-complex of dimension at least $2$ such that $X^G \neq \emptyset$. If there exists $s \geq 0$ such that $X^H$ is $s$-connected for all subgroups $H$ of $G$, then $$ \ct_G(X) < \frac{\mathrm{hdim}_G(X)+1}{s+1}+1. $$ \end{theorem} \begin{proof} If $x_0 \in X^G$, then $e_X \colon P_{x_0}X \to X$ is a $G$-fibration with fibre $\Omega X$, which also admits a $G$-action. Note that $(\Omega X)^H = \Omega(X^H)$. Since $X^H$ is $s$-connected, the loop space $\Omega (X^H)$ is $(s-1)$-connected. Hence, by \Cref{thm: G-homo-dim ub on equi-secat}, we get $$ \secat_{G}(e_X) < \frac{\mathrm{hdim}_G(X)+1}{s+1}+1. $$ As $X^H$ is $s$-connected, it follows that $X^H$ is path-connected. Hence, $X$ is $G$-connected, and the theorem follows by \Cref{lemma: catG(X) = secatG(X)}. \end{proof} \subsection{Equivariant and invariant topological complexity} \hfill\\ \vspace{-0.7em} \label{subsec:invtc} We recall the concept of equivariant topological complexity introduced by Colman and Grant in \cite{colmangranteqtc}. Let $X$ be a $G$-space. Observe that the free path space $PX$ admits a $G$-action via $(g\cdot \alpha)(t):=g\cdot \alpha(t)$.
Similarly, the product space $X^k$ is a $G$-space with the diagonal action. The fibration $$ e_{k,X} \colon PX \to X^k, \quad \alpha \mapsto \left(\alpha(0), \alpha\left(\frac{1}{k-1}\right), \ldots, \alpha\left(\frac{i}{k-1}\right), \ldots, \alpha\left(\frac{k-2}{k-1}\right), \alpha(1)\right) $$ is a $G$-fibration. \begin{definition} The sequential equivariant topological complexity of a $G$-space $X$ is defined as $$ \TC_{k,G}(X) := \secat_G(e_{k,X}). $$ In particular, when $k=2$, we will denote $e_{2,X}$ by $\pi$ and $\TC_{2,G}(X)$ by $\TC_G(X)$. \end{definition} In \cite[Proposition 2.45]{A-D-S}, Sarkar and the authors of this paper provided a dimension-connectivity upper bound on the sequential equivariant topological complexity. We improve their result by establishing a homotopy dimension-connectivity upper bound. We omit the proof, as it is similar to the original and follows from the homotopy dimension-connectivity upper bound on the equivariant sectional category in \Cref{thm: G-homo-dim ub on equi-secat}. \begin{theorem} \label{thm: G-hom-dim ub for equi-tc} Let $X$ be a $G$-$\mathrm{CW}$-complex of dimension atleast 1 such that $X^H$ is $m$-connected for all subgroups $H \leq G$. Then $$ \TC_{k,G}(X) < \frac{k\ \mathrm{hdim}_G(X)+1}{m+1}+1. $$ \end{theorem} It is important to note that the equivariant topological complexity of $G$-spaces does not necessarily relate to the topological complexity of their orbit spaces. However, Lubawski and Marzantowicz provided an alternative definition of equivariant topological complexity, designed to facilitate such a comparison. We now present their definition and recall the corresponding result. Suppose $X$ is a $G$-space. Let $\pi_X: X \to X/G$ denote the orbit map. $$ PX \times_{X/G} PX := \{ (\gamma,\delta) \in PX \times PX \mid G\cdot \gamma(1) = G \cdot \delta(0)\} $$ That is the following diagram \[ \begin{tikzcd} PX \times_{X/G} PX \arrow[d, "\pi_1"'] \arrow[r, "\pi_2"] & PX \arrow[d, "\pi_X \circ e_0"] \\ PX \arrow[r, "\pi_X \circ e_1"'] & X/G \end{tikzcd} \] is a pullback. Define the map $$ \p\colon PX \times_{X/G} PX \to X \times X, \quad (\gamma,\delta) \mapsto (\gamma(0),\delta(1)). $$ It was shown in \cite[Proposition 3.7]{invarianttc} that the map $\p$ is a $(G \times G)$-fibration. \begin{definition} Let $X$ be a $G$-space. The invariant topological complexity of $X$ denoted by $\TC^G(X)$, is defined as \[ \TC^G(X):=\secat_{G\times G}(\p). \] \end{definition} The following theorem relates the invariant topological complexity of a free $G$-space $X$ with that of the topological complexity of its corresponding orbit space. \begin{theorem}[{\cite[Theorem 3.9 and 3.10]{invarianttc}}] \label{thm: invariance theorem for TC} Let $G$ be a compact Lie group and $X$ be a compact $G$-ANR. Then $$ \TC(X/G)\leq \TC^G(X). $$ Moreover, if $X$ has one orbit type, then $$ \TC^G(X)=\TC(X/G). $$ \end{theorem} \subsection{Clapp-Puppe invariant of Lusternik-Schnirelmann type} \label{subsec:Clapp-Puppe} \begin{definition}\label{def:G-compress} Let $A$ be a $G$-invariant closed subset of a $G$-space $X$. A $G$-invariant open subset of $X$ is said to be $G$-compressible into $A$ if the inclusion map $i_{U} \colon U \rightarrow X$ is $G$-homotopic to a $G$-map $c \colon U \to X$ which takes values in $A$. \end{definition} \begin{definition} Let $A$ be a $G$-invariant closed subset of a $G$-space $X$. 
The $A$-Lusternik-Schnirelmann $G$-category of $X$, denoted $~_{A}\ct_G(X)$, is the least positive integer $k$ such that there exists a $G$-invariant open cover $\{U_1,\dots,U_k\}$ of $X$ such that each $U_i$ is $G$-compressible into $A$. \end{definition} Colman and Grant in \cite[Lemma 5.14]{colmangranteqtc} showed that for a $G$-invariant open subset $U$ of $X \times X$, the following are equivalent: \begin{enumerate} \item there exists a $G$-equivariant section of $\pi \colon PX \to X \times X$ over $U$, \item $U$ is $G$-compressible into the diagonal $\Delta(X) \subset X \times X$. \end{enumerate} In particular, $$ \TC_G(X) = ~_{\Delta(X)}\ct_G(X \times X). $$ Later, Lubawski and Marzantowicz in \cite[Lemma 3.8]{invarianttc} showed a similar result for the invariant topological complexity. More precisely, for a $(G \times G)$-invariant open subset $U$ of $X \times X$, the following are equivalent: \begin{enumerate} \item there exists a $(G \times G)$-equivariant section of $\mathfrak{p} \colon PX \times_{X/G} PX \to X \times X$ over $U$, \item $U$ is $(G \times G)$-compressible into the saturation of the diagonal $\mathbb{k}(X) := (G \times G)\cdot \Delta(X) \subset X \times X$. \end{enumerate} In particular, $$ \TC^G(X) = ~_{\mathbb{k}(X)}\ct_{G \times G}(X \times X). $$ In \Cref{sec: equi-para-tc} and \Cref{sec: inv-para-tc}, we give analogous results for equivariant parametrized topological complexity and invariant parametrized topological complexity, respectively. We use these results to prove \Cref{thm: inv-para-tc under free action}. \section{Equivariant parametrized topological complexity} \label{sec: equi-para-tc} In this section, we define the equivariant parametrized topological complexity of $G$-fibrations $p\colon E\to B$. We show that it can be expressed as the equivariant $\Delta(E)$-LS category of the fibre product $E\times_B E$. Additionally, we establish the product inequality for the equivariant parametrized topological complexity. For a $G$-fibration $p \colon E\to B$, consider the subspace $E^I_B$ of the free path space $E^I$ of $E$ defined by \[ E^I_B := \{\gamma\in E^I \mid \gamma(t) \in p^{-1}(b) ~\text{for some}~ b\in B ~\text{and for all}~ t\in[0,1] \}. \] Consider the pullback corresponding to the fibration $p \colon E\to B$ defined by \[ E \times_B E = \{(e_1,e_2) \in E \times E \mid p(e_1)=p(e_2) \}. \] It is clear that the $G$-action on $E^I$ given by $$ (g \cdot \gamma)(t) := g \cdot \gamma(t) \quad \text{for all } g \in G, \gamma \in E^I, t \in I; $$ and the diagonal action of $G$ on $E \times E$ restrict to $E^I_B$ and $E \times_B E$, respectively. Then the map \begin{equation} \Pi \colon E^I_B \to E\times_B E, \quad \Pi(\gamma) = (\gamma(0),\gamma(1)) \end{equation} is a $G$-fibration, see \cite[Corollary 4.3]{D-EqPTC}. \begin{definition}[{\cite[Definition 4.1]{D-EqPTC}}] The equivariant parametrized topological complexity of a $G$-fibration $p\colon E\to B$, denoted by $\TC_G[p\colon E\to B]$, is defined as \[ \TC_G[p\colon E \to B] := \secat_G\left( \Pi \right). \] \end{definition} Suppose $\Delta \colon E \to E \times E$ is the diagonal map. Then it is clear that the image $\Delta(E)$ is a $G$-invariant subset of $E \times_B E$. In the next theorem, we prove the parametrized analogue of \cite[Lemma 5.14]{colmangranteqtc} in the equivariant setting. \begin{theorem} \label{thm: equivalent defn of a section of equi-para-tc} Let $p\colon E\to B$ be a $G$-fibration.
For a $G$-invariant (not necessarily open) subset $U$ of $E \times_B E$ the following are equivalent: \begin{enumerate} \item there exists a $G$-equivariant section of $\Pi \colon E^I_B\to E\times_B E$ over $U$. \item there exists a $G$-homotopy between the inclusion map $i_U\colon U \hookrightarrow E \times_B E$ and a $G$-map $f\colon U \to E \times_B E$ which takes values in $\Delta(E)$. \end{enumerate} \end{theorem} \begin{proof} $(1) \implies (2)$. Suppose $s \colon U \to E^{I}_B$ is a $G$-equivariant section of $\Pi$. Let $H \colon E^{I}_B \times I \to E^{I}_B$ be given by $$ H(\gamma,t)(s) = \gamma(s(1-t)), \quad \text{for }\gamma\in E^{I}_B \text{ and } s,t \in I. $$ It is clear that $H(\gamma,t) \in E^{I}_B$ for all $\gamma \in E^{I}_B$ and $t \in I$. Hence, $H$ is well-defined. Clearly, $H$ is $G$-equivariant such that $H(\gamma,0) = \gamma$ and $H(\gamma,1) = c_{\gamma(0)}$, where $c_{e}$ is the constant path in $E$ taking the value $e \in E$. Then $$ F := \Pi \circ H \circ (s \times \mathrm{id}_I) \colon U \times I \to E \times_{B} E $$ is a $G$-homotopy such that $F_0 = \Pi \circ \mathrm{id}_{E^{I}_B} \circ s = i_U$ and $F_1(u) = \Pi(H_1(s(u))) = \Pi(c_{s(u)(0)}) = (s(u)(0),s(u)(0)) \in \Delta(E)$. Hence, $F_1$ is the desired map. $(2) \implies (1)$. Suppose $H \colon U \times I \to E \times_{B} E$ is a $G$-homotopy between $f$ and $i_U$. Let $s \colon U \to E^{I}_B$ be the $G$-map given by $s(u) = c_{\pi_1(f(u))} = c_{\pi_2(f(u))}$, where $\pi_i \colon E \times E \to E$ is the projection map onto the $i$-th factor. By $G$-homotopy lifting property of $\Pi$, there exists a $G$-homotopy $\widetilde{H} \colon U \times I \to E^{I}_B$ such that the following diagram \[ \begin{tikzcd} U \times \{0\} \arrow[d, hook] \arrow[rr, "s"] & & E^{I}_B \arrow[d, "\Pi"] \\ U \times I \arrow[rr, "H"] \arrow[rru, "\widetilde{H}", dashed] & & E \times_{B} E \end{tikzcd} \] commutes. Then $\Pi \circ \widetilde{H}_1 = H_1 = i_U$ implies $\widetilde{H}_1$ is a $G$-equivariant section of $\Pi$ over $U$. \end{proof} As a consequence to the previous theorem we can now express the equivariant parametrized topological complexity as the equivariant $\Delta(E)$-LS category of the fibre product. \begin{corollary} \label{cor: equivalent defn of eq-para-tc} For a $G$-fibration $p \colon E \to B$, we have \[ \TC_{G}[p \colon E \to B] = ~_{\Delta(E)}\ct_G(E\times_B E). \] \end{corollary} \subsection{Properties and Bounds} \begin{proposition} \label{prop: subgroup-ineq for equi-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration. If $H$ and $K$ are subgroups of $G$ such that the fixed point map $p^H \colon E^H \to B^H$ is a $K$-fibration, then $$ \TC_K[p^H \colon E^H \to B^H] \leq \TC_G[p \colon E \to B]. $$ \end{proposition} \begin{proof} Suppose $\Pi \colon E^I_B \to E \times_B E$ is $G$-equivariant parametrized fibration corresponding to $p$. Then it is easily checked that $$ (E^I_B)^H = (E^H)^I_{B^H} \quad \text{and} \quad (E \times_B E)^H = E^H \times_{B^H} E^H, $$ and the $K$-equivariant parameterized fibration corresponding to $p^H$ is given by $\Pi^H$. Hence, it follows that $$ \TC_K[p^H \colon E^H \to B^H] = \secat_K(\Pi^H) \leq \secat_G(\Pi) = \TC_G[p \colon E \to B] $$ by \Cref{prop: subgroup-ineq for equi-secat}. \end{proof} Applying \Cref{prop: subgroup-ineq for equi-para-tc} and \Cref{cor: fixed-point-and-subgroup-ineq for equi-secat}, we obtain the following corollary. 
\begin{corollary} \label{cor: fixed-point-and-subgroup-ineq for equi-para-tc} Suppose $G$ is a compact Hausdorff topological group and $p \colon E \to B$ is a $G$-fibration. Then \begin{enumerate} \item the fixed point map $p^H \colon E^H \to B^H$ is a fibration for all closed subgroups $H$ of $G$, and $$ \TC[p^H \colon E^H \to B^H] \leq \TC_G[p \colon E \to B]. $$ \item $p \colon E \to B$ is a $K$-fibration for all closed subgroups $K$ of $G$, and $$ \TC[p \colon E \to B] \leq \TC_K[p \colon E \to B] \leq \TC_G[p \colon E \to B]. $$ \end{enumerate} \end{corollary} \subsubsection{A cohomological lower bound} \hfill\\ \vspace{-0.7em} A cohomological lower bound for the equivariant parametrized topological complexity was established by the second author in \cite[Theorem 4.5]{D-EqPTC} using Borel cohomology. In the following theorem, we provide an alternative cohomological lower bound that is easier to compute since it relies on non-equivariant cohomology. Let $E_{B,G} := (E \times_B E)/G$ and let $d_GE \subseteq E_{B,G}$ denote the image of the diagonal subspace $\Delta(E) \subseteq E \times_B E$ under the orbit map $\rho \colon E \times_B E \to E_{B,G}$. \begin{theorem} \label{thm: non-equi coho-lower-bound for equi-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration. If there exist cohomology classes $u_1,\dots,u_k \in H^*(E_{B,G};R)$ (for any commutative ring $R$) such that \begin{enumerate} \item $u_i$ restricts to zero in $H^*(d_GE;R)$ for $i=1,\dots,k$; \item $u_1 \smile \dots \smile u_k \neq 0$ in $H^*(E_{B,G};R)$, \end{enumerate} then $\TC_G[p \colon E \to B]>k$. \end{theorem} \begin{proof} Suppose $\TC_G[p \colon E \to B] \leq k$. Then there exists a $G$-invariant open cover $\{U_1,\dots,U_k\}$ of $E \times_B E$ such that each $U_i$ admits a $G$-equivariant section of $\Pi$. By \Cref{thm: equivalent defn of a section of equi-para-tc}, for each $i =1, \dots, k$, there exists a $G$-homotopy $H_i \colon U_i \times I \to E \times_B E$ from the inclusion map $j_{U_i} \colon U_i \hookrightarrow E \times_B E$ to a $G$-map $f_i \colon U_i \to E \times_B E$ which takes values in $\Delta(E)$. Let $\overline{U_i} := \rho(U_i)$. As $I$ is locally compact, $H_i$ induces a homotopy $\overline{H_i} \colon \overline{U_i} \times I \to E_{B,G}$ from the inclusion map $j_{\overline{U_i}} \colon \overline{U_i} \hookrightarrow E_{B,G}$ to a map $\overline{f_i} \colon \overline{U_i} \to E_{B,G}$ which takes values in $d_GE$. Thus, the following diagram \[ \begin{tikzcd} & d_GE \arrow[d, "j_{d_GE}", hook] \\ \overline{U_i} \arrow[r, "j_{\overline{U_i}}"', hook] \arrow[ru, "\overline{f_i}"] & {E_{B,G}} \end{tikzcd} \] is commutative up to homotopy. Hence, by hypothesis (1), each $u_i$ restricts to zero in $H^{*}(\overline{U_i};R)$. By the long exact sequence of the pair $(E_{B,G},\overline{U_i})$, there exist classes $v_i \in H^{*}(E_{B,G},\overline{U_i};R)$ such that $v_i$ maps to $u_i$ under the natural map $H^{*}(E_{B,G},\overline{U_i};R) \to H^{*}(E_{B,G};R)$. Hence, we get $$ v_1 \smile \dots \smile v_k \in H^{*}(E_{B,G},\cup_{i=1}^{k} \overline{U_i};R) = H^{*}(E_{B,G},E_{B,G};R) = 0. $$ Thus, by the naturality of cup products, we get $u_1 \smile \dots \smile u_k = 0 \in H^{*}(E_{B,G};R)$, contradicting hypothesis (2). \end{proof} When $B$ is a point, we can use the above theorem to get a cohomological lower bound for $\TC_G(E)$. \subsubsection{Product Inequalities} \hfill\\ \vspace{-0.7em} The product inequality for parametrized topological complexity was proved in \cite[Proposition 6.1]{farber-para-tc}.
We now establish the corresponding equivariant analogue. \begin{theorem} \label{thm: prod-ineq for equi-para-tc} Let $p_1 \colon E_1 \to B_1$ be a $G_1$-fibration and $p_2 \colon E_2 \to B_2$ be a $G_2$-fibration. If $(E_1 \times E_1) \times (E_2 \times E_2)$ is $(G_1 \times G_2)$-completely normal, then \[ \TC_{G_1 \times G_2}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] \leq \TC_{G_1}[p_1 \colon E_1 \to B_1]+\TC_{G_2}[p_2 \colon E_2\to B_2] - 1, \] where $G_i$ acts on $E_i \times E_i$ diagonally for $i=1,2$; and $G_1 \times G_2$ acts on $(E_1 \times E_1) \times (E_2 \times E_2)$ componentwise. \end{theorem} \begin{proof} Let $\Pi_1 \colon (E_1)^I_{B_1} \to E_1 \times_{B_1} E_1$ and $\Pi_2 \colon (E_2)^{I}_{B_2} \to E_2 \times_{B_2} E_2$ be the equivariant parametrized fibrations corresponding to $p_1$ and $p_2$, respectively. If $E := E_1 \times E_2$, $B := B_1 \times B_2$ and $p := p_1 \times p_2$ is the product $(G_1 \times G_2)$-fibration, then it is easily checked that $$ E^I_B = (E_1)^I_{B_1} \times (E_2)^I_{B_2} \quad \text{and} \quad E \times_B E = (E_1 \times_{B_1} E_1) \times (E_2 \times_{B_2} E_2) $$ and the $(G_1 \times G_2)$-equivariant parametrized fibration $\Pi \colon E^{I}_B \to E \times_B E$ corresponding to $p$ is equivalent to the product $(G_1 \times G_2)$-fibration $$ \Pi_1 \times \Pi_2 \colon (E_1)^I_{B_1} \times (E_2)^{I}_{B_2} \to (E_1 \times_{B_1} E_1) \times (E_2 \times_{B_2} E_2). $$ Since a subspace of a $(G_1 \times G_2)$-completely normal space is itself $(G_1 \times G_2)$-completely normal, it follows that $(E_1 \times_{B_1} E_1) \times (E_2 \times_{B_2} E_2)$ is $(G_1 \times G_2)$-completely normal. Hence, \begin{align*} \TC_{G_1 \times G_2}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] & = \secat_{G_1 \times G_2}(\Pi_1 \times \Pi_2) \\ & \leq \secat_{G_1}(\Pi_1) + \secat_{G_2}(\Pi_2) - 1\\ & = \TC_{G_1}[p_1 \colon E_1 \to B_1]+\TC_{G_2}[p_2 \colon E_2\to B_2] -1, \end{align*} by \cite[Proposition 2.9]{A-D-S}. \end{proof} \begin{corollary} \label{cor: special-prod-ineq for equi-para-tc} Suppose $p_i \colon E_i \to B_i$ is a $G$-fibration for $i =1,2$. If $G$ is compact Hausdorff, then $p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2$ is a $G$-fibration, where $G$ acts diagonally on the spaces $E_1 \times E_2$ and $B_1 \times B_2$. Furthermore, if $E_1$ and $E_2$ are Hausdorff, and $E_1\times E_1 \times E_2 \times E_2$ is completely normal, then \[ \TC_{G}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] \leq \TC_{G}[p_1 \colon E_1 \to B_1]+\TC_{G}[p_2 \colon E_2\to B_2]-1. \] \end{corollary} \begin{proof} In \Cref{prop: special-prod-ineq for equi-secat}, we showed that $p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2$ is a $G$-fibration if $G$ is compact Hausdorff. If $E_1$ and $E_2$ are Hausdorff, and $(E_1\times E_1) \times (E_2 \times E_2)$ is completely normal, then \Cref{lemma: completely normal G-space is G-completely normal} implies that $(E_1\times E_1) \times (E_2 \times E_2)$ is $(G \times G)$-completely normal. Hence, the desired inequality \begin{align*} \TC_{G}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] & \leq \TC_{G \times G}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] \\ & \leq \TC_{G}[p_1 \colon E_1 \to B_1]+\TC_{G}[p_2 \colon E_2\to B_2]-1 \end{align*} follows from \Cref{cor: fixed-point-and-subgroup-ineq for equi-para-tc} (2) and \Cref{thm: prod-ineq for equi-para-tc}.
\end{proof} \begin{corollary} \label{cor: equi-para-tc of pullback fibration under diagonal map} Suppose $p_i \colon E_i \to B$ is a $G$-fibration for $i =1,2$. Let $E_1 \times_B E_2 = \{(e_1,e_2) \in E_1 \times E_2 \mid p_1(e_1) = p_2(e_2)\}$ and let $p\colon E_1 \times_B E_2 \to B$ be the $G$-map given by $p(e_1,e_2)=p_1(e_1)=p_2(e_2)$, where $G$ acts on $E_1 \times_B E_2$ diagonally. If $G$ is compact Hausdorff, then $p$ is a $G$-fibration. Furthermore, if $E_1$ and $E_2$ are Hausdorff, and $E_1 \times E_1 \times E_2 \times E_2$ is completely normal, then \[ \TC_{G}[p \colon E_1 \times_B E_2 \to B] \leq \TC_{G}[p_1 \colon E_1 \to B]+\TC_{G}[p_2 \colon E_2\to B]-1, \] where $G$ acts on $E_i \times E_i$ diagonally for $i=1,2$; and $G \times G$ acts on $(E_1 \times E_1) \times (E_2 \times E_2)$ componentwise. \end{corollary} \begin{proof} Note the following diagram \[ \begin{tikzcd} E_1 \times_B E_2 \arrow[r, hook] \arrow[d, "p"'] & E_1 \times E_2 \arrow[d, "p_1 \times p_2"] \\ B \simeq \Delta(B) \arrow[r, hook] & B \times B \end{tikzcd} \] is a pullback in the category of $G$-spaces, where $\Delta \colon B \to B \times B$ is the diagonal map. In \Cref{cor: equi-secat of pullback fibration under diagonal map}, we showed that $p$ is a $G$-fibration. Hence, the desired inequality \begin{align*} \TC_{G}[p \colon E_1 \times_B E_2 \to B] & \leq \TC_{G}[p_1 \times p_2 \colon E_1 \times E_2 \to B \times B] \\ & \leq \TC_{G}[p_1 \colon E_1 \to B]+\TC_{G}[p_2 \colon E_2\to B]-1 \end{align*} follows from \cite[Proposition 4.6]{D-EqPTC} and \Cref{cor: special-prod-ineq for equi-para-tc}. \end{proof} \section{Invariant Parametrized Topological Complexity} \label{sec: inv-para-tc} In this section, we introduce the main object of our study, the invariant parametrized topological complexity. Suppose $p \colon E \to B$ is a $G$-fibration. Define the space $$ E^{I}_B \times_{E/G} E^{I}_B := \{ (\gamma,\delta) \in E^{I}_B \times E^I_B \mid G\cdot \gamma(1) = G \cdot \delta(0)\}. $$ That is the following diagram \[ \begin{tikzcd} E^{I}_B \times_{E/G} E^{I}_B \arrow[d, "\pi_1"'] \arrow[r, "\pi_2"] & E^{I}_B \arrow[d, "\pi_E \circ e_0"] \\ E^{I}_B \arrow[r, "\pi_E \circ e_1"'] & E/G \end{tikzcd} \] is a pullback. For each path $\alpha \in E^{I}_B$, let $b_{\alpha}$ denote the element in $B$ such that $\alpha$ take values in the fibre $p^{-1}(b_{\alpha})$. Define the map \begin{equation}\label{eq:inv-parametrized-fibration} \Psi \colon E^{I}_B \times_{E/G} E^{I}_B \to E \times_{B/G} E, \quad \text{by} \quad \Psi(\gamma,\delta) = (\gamma(0),\delta(1)). \end{equation} The map $\Psi$ is well-defined as $\gamma(1) = g \cdot \delta(0)$ for some $g \in G$ and $\gamma,\delta \in E^{I}_B$ implies that $b_{\gamma} = g \cdot b_{\delta}$. Hence, $p(\gamma(0)) = b_{\gamma} = g \cdot b_\delta = g \cdot p(\delta(1))$ implies $(\gamma(0),\delta(1)) \in E \times_{B/G} E$. As $E^{I}_B \times_{E/G} E^{I}_B$ and $E \times_{B/G} E$ are $(G \times G)$-invariant subsets of $E^{I}_B \times E^{I}_B$ and $E \times E$ respectively, we get $(G \times G)$-action on $E^{I}_B \times_{E/G} E^{I}_B$ and $E \times_{B/G} E$, and $\Psi$ becomes a $(G \times G)$-equivariant map. \begin{proposition}\label{prop:Pi-is-G2-fibration} If $p \colon E \to B$ is a $G$-fibration, then the map $\Psi \colon E^{I}_B \times_{E/G} E^I_B \to E \times_{B/G} E$ is a $(G \times G)$-fibration. \end{proposition} \begin{proof} Suppose $E^I_B \to E \times_B E$ is the equivariant parametrized fibration corresponding to $p$. 
Suppose $\widehat{p} \colon E^{I}_B \times E^{I}_B \to (E \times_B E) \times (E \times_B E)$ is the product $(G \times G)$-fibration. Define $$ S := \{(e_1,e_2,e_3,e_4) \in (E \times_B E) \times (E \times_B E) \mid (e_2, e_3) \in E \times_{E/G} E\}. $$ It is readily checked that $(\gamma,\delta) \in E^I_B \times_{E/G} E^{I}_B$ if and only if $(\gamma,\delta) \in (\widehat{p})^{-1}(S)$. Since $S$ is $(G \times G)$-invariant, it follows that the restriction $$ \left.\widehat{p}\right|_{E^I_B \times_{E/G} E^{I}_B} \colon E^I_B \times_{E/G} E^{I}_B \to S $$ is a $(G \times G)$-fibration. Now consider the pullback diagram \[ \begin{tikzcd} E \times_B E \arrow[r, "\pi_2"] \arrow[d, "\pi_1"'] & E \arrow[d, "p"] \\ E \arrow[r, "p"] & B . \end{tikzcd} \] As $p$ is a $G$-fibration, it follows that $\pi_1$ and $\pi_2$ are $G$-fibrations. Hence, the projection map $\pi_{1,4} \colon (E \times_B E) \times (E \times_B E) \to E \times E$, given by $(e_1,e_2,e_3,e_4) \mapsto (e_1,e_4)$, is a $(G \times G)$-fibration, being the product of the $G$-fibrations $\pi_1$ and $\pi_2$. It is readily checked that $(e_1,e_2,e_3,e_4) \in S$ if and only if $(e_1,e_2,e_3,e_4) \in (\pi_{1,4})^{-1}(E \times_{B/G} E)$. Since $E \times_{B/G} E$ is $(G \times G)$-invariant, it follows that $$ \left.\pi_{1,4}\right|_{S} \colon S \to E \times_{B/G} E $$ is a $(G \times G)$-fibration. Hence, $\Psi = \left.\pi_{1,4}\right|_{S} \circ \left.\widehat{p}\right|_{E^I_B \times_{E/G} E^{I}_B}$ is a $(G \times G)$-fibration. \end{proof} We now introduce the main object of our study, which is a parametrized analogue of the invariant topological complexity introduced by Lubawski and Marzantowicz in \cite{invarianttc}. \begin{definition}\label{def:inv-para-tc} Suppose $p\colon E \to B$ is a $G$-fibration. The invariant parametrized topological complexity, denoted by $\TC^G[p \colon E \to B]$, is defined as \[ \TC^{G}[p \colon E \rightarrow B] := \secat_{G\times G} \left(\Psi \right). \] \end{definition} The $G$-homotopy invariance of the invariant topological complexity was established by Lubawski and Marzantowicz in \cite[Proposition 2.4 and Lemma 3.8]{invarianttc}. We will now establish the corresponding parametrized analogue. In particular, we establish the fibrewise $G$-homotopy invariance of the invariant parametrized topological complexity. We refer the reader to \cite[Section 4.1]{D-EqPTC} for basic information about fibrewise equivariant homotopy equivalence. \begin{theorem} \label{thm:G-homotopy-invariace-of-inv-para-tc} If $p \colon E \to B$ and $p' \colon E' \to B$ are $G$-fibrations which are fibrewise $G$-homotopy equivalent, then \[ \TC^{G}[p \colon E \to B] = \TC^{G}[p' \colon E' \to B]. \] \end{theorem} \begin{proof} Suppose we have a fibrewise $G$-homotopy equivalence given by the following commutative diagram: \[ \begin{tikzcd} E \arrow[rr, "f", shift left] \arrow[rd, "p"', shift right] & & E' \arrow[ll, "f'", shift left] \arrow[ld, "p'", shift left] \\ & B. & \end{tikzcd} \] Suppose $\tilde{f} = f\times f$, $\tilde{f}^I(\gamma,\delta) = (f\circ \gamma,f\circ \delta)$ and $\tilde{f'}$, $\tilde{f'}^I$ are defined similarly. Note that $\tilde{f}$ and $\tilde{f'}$ are $(G\times G)$-maps. Then we have the following commutative diagram. \[ \begin{tikzcd} E^I_B\times_{E/G} E^I_B \arrow{r}{\tilde{f}^I} \arrow[swap]{d}{\Psi} & E'^I_B\times_{E'/G} E'^I_B \arrow{d}{\Psi'} \arrow{r}{\tilde{f'}^I} & E^I_B \times_{E/G} E^I_B \arrow{d}{\Psi} \\E\times_{B/G} E \arrow{r}{\tilde{f}} & E'\times_{B/G} E' \arrow{r}{\tilde{f'}} & E\times_{B/G} E.
\end{tikzcd} \] Since the maps $f'\circ f$ and $\mathrm{id}_E$ are fibrewise $G$-homotopy equivalent, it follows that the maps $\tilde{f'}\circ \tilde{f}$ and $\mathrm{id}_{E\times_{B/G} E}$ are $(G \times G)$-homotopy equivalent. Then, using \cite[Lemma 4.10(2)]{D-EqPTC}, we obtain the inequality \[ \TC^G[p \colon E\to B] = \secat_{G\times G}(\Psi) \leq \secat_{G\times G}(\Psi') = \TC^G[p' \colon E'\to B]. \] Similarly, we can derive the reverse inequality, which completes the proof. \end{proof} The next proposition shows that the invariant parametrized topological complexity of a $G$-fibration is a generalization of both the parametrized topological complexity of a fibration \cite{farber-para-tc} and the invariant topological complexity of a $G$-space \cite{invarianttc}. \begin{proposition} \label{prop: special cases of inv-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration. \begin{enumerate} \item If $G$ acts trivially on $E$ and $B$, then $\TC^{G}[p \colon E \rightarrow B] = \TC[p \colon E \rightarrow B]$. \item If $B=\{*\}$, then $\TC^{G}[p \colon E \rightarrow \{*\}] = \TC^{G}(E)$. \end{enumerate} \end{proposition} \begin{proof} (1) If $G$ acts trivially on $E$, then $\pi_E \colon E \to E/G$ is the identity map. Hence, $E^{I}_B \times_{E/G} E^{I}_B = E^{I}_B \times_{E} E^{I}_B$ which is homeomorphic to $E^{I}_B$ via the map $(\gamma, \delta) \mapsto \gamma * \delta$, where $\gamma * \delta$ is the concatenation of paths $\gamma$ and $\delta$. The inverse of this homeomorphism is given by $\alpha \mapsto \left(\left.\alpha\right|_{\left[0,\frac{1}{2}\right]},\left.\alpha\right|_{\left[\frac{1}{2},1\right]}\right)$ for $\alpha \in E^{I}_B$. If $G$ acts trivially on $B$, then $\pi_B \colon B \to B/G$ is the identity map. Hence, $E \times_{B/G} E = E \times_B E$. Therefore, the fibration $\Psi$ is given by $$ \Psi \colon E^{I}_B \to E \times_{B} E, \quad \Psi(\alpha) = (\alpha(0),\alpha(1)). $$ Hence, we get $\TC^{G}[p \colon E \rightarrow B] = \TC[p \colon E \rightarrow B]$. (2) If $B=\{*\}$, then $E^{I}_B = E^{I}$ and $E \times_{B/G} E = E \times E$. Hence, the fibration $\Psi$ is given by $$ \Psi \colon E^{I} \times_{E/G} E^I \to E \times E, \quad \Psi(\gamma,\delta) = (\gamma(0),\delta(1)). $$ Therefore, $\TC^{G}[p \colon E \rightarrow \{*\}] = \TC^{G}(E)$. \end{proof} \begin{proposition} \label{prop: inv-para-TC for trivial-G-fib} Let $p \colon B \times F \to B$ be the trivial $G$-fibration with $G$ acting trivially on $F$. Then \[ \TC^G[p \colon B \times F \to B] = \TC(F). \] \end{proposition} \begin{proof} Let $E = B \times F$. Then note that $E^I_B = B \times F^I$ and $E \times_{B/G} E = (B \times_{B/G} B) \times (F \times F)$. As $E/G = (B \times F)/G = (B/G) \times F$, we have $$ E^I_B \times_{E/G} E^I_B = (B \times_{B/G} B) \times (F^I \times_{F} F^I) \cong_G (B \times_{B/G} B) \times F^I, $$ where the last $G$-homeomorphism is induced by $(\gamma,\delta) \in F^{I} \times_F F^{I} \mapsto \gamma*\delta \in F^{I}$. Then it follows that the fibration $\Psi$ corresponding to $p$ is given by $\Psi = \mathrm{id}_{B \times_{B/G} B} \times e_F$, where $e_F \colon F^I \to F\times F$ is the free path space fibration corresponding to $F$. Thus, we obtain \[ \TC^G[p \colon E\to B] = \secat_{G\times G}(\Psi) = \secat_{G\times G}(\mathrm{id} \times e_F) = \secat(e_F) = \TC(F), \] since $G$ acts trivially on $F$. \end{proof} \begin{remark} In general, if $G$ acts non-trivially on $F$, then the equality \[ \TC^G[p \colon B \times F \to B] = \TC^G(F) \] may not hold. 
For example, let $E=S^1 \times S^1$ and $B=S^1$. If $G=S^1$ acts on $B$ by left multiplication and diagonally on $E$, then \begin{align*} \TC^{S^1}[p \colon S^1 \times S^1 \to S^1] & = \TC[p/S^1 \colon (S^1 \times S^1)/S^1 \to S^1/S^1] & \text{by \Cref{thm: inv-para-tc under free action}} \\ & = \TC(S^1 \to \{*\}) \\ & = \TC(S^1) & \text{by \Cref{prop: special cases of inv-para-tc}} \\ & = 2. \end{align*} But $\TC^{S^1}(S^1) = \TC(\{*\}) = 1$ by \cite[Theorem 3.10]{invarianttc}. \end{remark} Suppose $\mathbb{k}(E)$ is the saturation of the diagonal $\Delta(E)$ with respect to the $(G \times G)$-action on $E \times E$, i.e., $$ \mathbb{k}(E) := (G \times G)\cdot \Delta(E) \subseteq E \times E. $$ If $E \times_{E/G} E$ is the pullback corresponding to $\pi_E \colon E \to E/G$, i.e., \[ E \times_{E/G} E := \{ (e_1,e_2) \in E \times E \mid \pi_E(e_1) = \pi_E(e_2)\}, \] then it is readily checked that $\mathbb{k}(E) = E \times_{E/G} E \subseteq E \times_{B/G} E$. Hence, we will use the notation $\mathbb{k}(E)$ and $E \times_{E/G} E$ interchangeably. In the next theorem, we establish the parametrized analogue of \cite[Lemma 3.8]{invarianttc}. \begin{theorem} \label{thm: equivalent defn of a section of in-para-tc} Suppose $p\colon E \to B$ is a $G$-fibration. For a $(G\times G)$-invariant (not necessarily open) subset $U$ of $E \times_{B/G} E$ the following are equivalent: \begin{enumerate} \item there exists a $(G \times G)$-equivariant section of $\Psi \colon E^{I}_B \times_{E/G} E^{I}_B \to E \times_{B/G} E$ over $U$. \item there exists a $(G \times G)$-homotopy between the inclusion map $i_U \colon U \hookrightarrow E \times_{B/G} E$ and a $(G \times G)$-map $f \colon U \to E \times_{B/G} E$ which takes values in $E \times_{E/G} E$. \end{enumerate} \end{theorem} \begin{proof} $(1) \implies (2)$. Suppose $s = (s_1,s_2) \colon U \to E^{I}_B \times_{E/G} E^{I}_B$ is a $(G \times G)$-equivariant section of $\Psi$. Let $H \colon (E^{I}_B \times_{E/G} E^{I}_B) \times I \to E^{I}_B \times_{E/G} E^{I}_B$ be given by $$ H(\gamma,\delta,t) = (\gamma_t',\delta_t'), \quad \text{for } (\gamma,\delta) \in E^{I}_B\times_{E/G} E^{I}_B, \text{ and } t \in I, $$ where $\gamma_t'(s) = \gamma(s+t(1-s))$ and $\delta_t'(s) = \delta(s(1-t))$. It is clear that $\gamma_t', \delta_t' \in E^{I}_B$, and $\gamma_t'(1) = \gamma(1)$ and $\delta_t'(0) = \delta(0)$ for all $(\gamma,\delta) \in E^{I}_B\times_{E/G} E^{I}_B$ and for all $t \in I$. Hence, $H$ is well-defined. Clearly, $H$ is $(G\times G)$-equivariant such that $H(\gamma,\delta,0) = (\gamma, \delta)$ and $H(\gamma,\delta,1) = (c_{\gamma(1)},c_{\delta(0)})$, where $c_{e}$ is the constant path in $E$ taking the value $e \in E$. Then $$ F := \Psi \circ H \circ (s \times \mathrm{id}_I) \colon U \times I \to E \times_{B/G} E $$ is a $(G \times G)$-homotopy such that $F_0 = \Psi \circ \mathrm{id}_{E^{I}_B \times_{E/G} E^{I}_B} \circ s = i_U$ and $F_1(u) = \Psi(H_1(s(u))) = ((s_1(u))(1),(s_2(u))(0))$. As $s(u) = (s_1(u), s_2(u)) \in E^{I}_B \times_{E/G} E^{I}_B$ for all $u \in U$, it follows $F_1(u) = ((s_1(u))(1),(s_2(u))(0)) \in E \times_{E/G} E$. Hence, $F_1$ is the desired $(G\times G)$-homotopy. $(2) \implies (1)$. Suppose $H \colon U \times I \to E \times_{B/G} E$ is a $(G \times G)$-homotopy between $f$ and $i_U$. Let $s \colon U \to E^{I}_B \times_{E/G} E^{I}_B$ be the $(G \times G)$-map given by $s(u)=(c_{\pi_1(f(u))},c_{\pi_2(f(u))})$, where $\pi_i \colon E \times_{B/G} E \to E$ is the projection map onto the $i$-th factor. 
The map $s$ is well-defined since $f$ takes values in $E \times_{E/G} E$. By $G$-homotopy lifting property of $\Psi$, there exists a $(G \times G)$-homotopy $\widetilde{H} \colon U \times I \to E^{I}_B \times_{E/G} E^{I}_B$ such that the following diagram \[ \begin{tikzcd} U \times \{0\} \arrow[d, hook] \arrow[rr, "s"] & & E^{I}_B \times_{E/G} E^{I}_{B} \arrow[d, "\Psi"] \\ U \times I \arrow[rr, "H"] \arrow[rru, "\widetilde{H}", dashed] & & E \times_{B/G} E \end{tikzcd} \] commutes. Then $\Psi \circ \widetilde{H}_1 = H_1 = i_U$ implies $\widetilde{H}_1$ is a $(G\times G)$-equivariant section of $\Psi$ over $U$. \end{proof} \begin{corollary}\label{cor:inv-para-tc-as-A-cat} For a $G$-fibration $p \colon E \to B$, we have \[ \TC^{G}[p \colon E \to B] = ~_{\mathbb{k}(E)}\ct_{G \times G}(E \times_{B/G} E). \] \end{corollary} \subsection{Properties and Bounds} \label{subsec:prop-bounds} \begin{proposition} \label{prop: pullback-ineq for inv-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration and $B'$ is a $G$-invariant subset of $B$. If $E' = p^{-1}(B')$ and $p' \colon E' \to B'$ is the $G$-fibration obtained by restriction of $p$, then $$ \TC^{G}[p' \colon E' \to B'] \leq \TC^{G}[p \colon E \to B]. $$ In particular, if $b \in B^G$, then the fibre $F=p^{-1}(b)$ is a $G$-space and $$ \TC^{G}(F) \leq \TC^{G}[p \colon E \to B]. $$ \end{proposition} \begin{proof} Note that we have the following commutative diagram \[ \begin{tikzcd} (E')^I_{B'} \times_{E'/G} (E')^I_{B'} \arrow[rr, hook] \arrow[d, "\Psi'"'] & & E^I_B \times_{E/G} E^I_B \arrow[d, "\Psi"] \\ E' \times_{B'/G}E' \arrow[rr, hook] & & E \times_{B/G} E, \end{tikzcd} \] where $\Psi'$ and $\Psi$ are the fibrations corresponding to $p'$ and $p$, respectively. We will now show that this diagram is a pullback. Suppose $Z$ is a topological space with $(G \times G)$-maps $k=(k_1,k_2) \colon Z \to E^I_B \times_{E/G} E^I_B$ and $h = (h_1,h_2) \colon Z \to E' \times_{B'/G} E'$ such that $\Psi \circ k = h$. As $\Psi \circ k = h$, we have $$ k_1(z)(0) = h_1(z) \in E' \quad \text{and} \quad k_2(z)(1) = h_2(z) \in E'. $$ As $k(z) = (k_1(z),k_2(z)) \in E^I_B \times_{E/G} E^I_B$, we have $$ p(k_1(z)(t))=b_{k_1(z)},\quad p(k_2(z)(t))=b_{k_2(z)}\quad \text{and} \quad k_1(z)(1) = g_{k(z)} \cdot k_2(z)(0) $$ for some $b_{k_1(z)}, b_{k_2(z)} \in B$, $g_{k(z)} \in G$ and for all $t\in I$. Note that $b_{k_1(z)} = p(k_1(z)(t)) = p(k_1(z)(0)) = p(h_1(z))$ implies $b_{k_1(z)} \in B'$ since $h_1(z) \in E' = p^{-1}(B')$. Hence, $k_1(z) \in (E')^{I}_{B'}$ since $k_1(z)(t) \in p^{-1}(b_{k_1(z)}) \subset p^{-1}(B') = E'$ for all $t \in I$. Similarly, $b_{k_2(z)} \in B'$ and $k_2(z) \in (E')^{I}_{B'}$. Hence, $k_1(z)(1) = g_{k(z)} \cdot k_2(z)(0)$ implies $\mathrm{Im}(k) \subseteq (E')^I_{B'} \times_{E'/G} (E')^I_{B'}$. Hence, the diagram above is a pullback. Then the required inequality $$ \TC^{G}[p' \colon E' \to B'] = \secat_{G\times G}(\Psi') \leq \secat_{G \times G}(\Psi) = \TC^{G}[p \colon E \to B]. $$ follows from \cite[Proposition 4.3]{colmangranteqtc}. \end{proof} \begin{proposition} \label{prop: inva-para-tc bounds} Let $p \colon E\to B$ be a $G$-fibration. If $e \in E^G$, then the fibre $F = p^{-1}(p(e))$ is a $G$-space and $$ \ct_G(F) \leq \TC^G(F) \leq \TC^G[p \colon E\to B]. $$ Furthermore, \begin{enumerate} \item if $E\times_{B/G}E$ is $(G\times G)$-connected, then $$ \TC^G[p \colon E\to B] \leq \ct_{G\times G}(E\times_{B/G} E). 
$$ \item if $E \times_{B/G} E$ is a connected $(G\times G)$-$\mathrm{CW}$-complex, then $$ \ct_{G\times G}(E\times_{B/G} E) \leq \mathrm{dim} \left(\frac{E \times_{B/G} E}{G \times G}\right)+1. $$ \end{enumerate} Consequently, if $E \times_{B/G} E$ is a $(G\times G)$-connected $(G\times G)$-$\mathrm{CW}$-complex, then $$ \TC^G[p \colon E\to B] \leq \mathrm{dim} \left(\frac{E \times_{B/G} E}{G \times G}\right)+1. $$ \end{proposition} \begin{proof} If $e \in E^G$, then $b = p(e) \in B^G$. Hence, by \Cref{prop: pullback-ineq for inv-para-tc}, $F := p^{-1}(b)$ admits a $G$-action and $\TC^G(F) \leq \TC^G[p \colon E\to B]$. Observe that $e \in F^G$. Therefore, the inequality $\ct_G(F)\leq \TC^G(F)$ follows from \cite[Proposition 2.7]{ZK}. (1) Note that if $c_e$ is the constant path in $E$ which takes the value $e$, then $(c_e,c_e) \in (E^I_B \times_{E/G} E^I_B)^{(G\times G)}$. Moreover, since $E\times_{B/G}E$ is $(G\times G)$-connected, it follows that $$ \TC^G[p \colon E\to B] = \secat_{G \times G}(\Psi) \leq \ct_{G\times G}(E\times_{B/G} E) $$ by \cite[Proposition 4.4]{colmangranteqtc}. (2) Since $E \times_{B/G} E$ is connected and $(e, e) \in (E \times_{B/G} E)^{(G \times G)}$, it follows that $$ \ct_{G\times G}(E\times_{B/G} E) \leq \mathrm{dim} \left(\frac{E \times_{B/G} E}{G \times G}\right)+1, $$ by \cite[Corollary 1.12]{M}. Now the last inequality follows from $(1)$ and $(2)$. \end{proof} \begin{corollary}\label{cor: G-contratibility-of-fibre} Let $p \colon E\to B$ be a $G$-fibration such that $\TC^G[p \colon E \to B] = 1$. If $e \in E^G$, then the fibre $F = p^{-1}(p(e))$ is a $G$-contractible space. \end{corollary} \begin{proof} By \Cref{prop: inva-para-tc bounds}, we have $\ct_G(F)=1$, i.e., $F$ is $G$-contractible. \end{proof} We now establish sufficient conditions for $\TC^G[p\colon E\to B]$ to be $1$. This serves as a converse of \Cref{cor: G-contratibility-of-fibre}. \begin{theorem} \label{thm: suff-cond for inv-para-tc = 1 with fixed point} Suppose $p \colon E\to B$ is a $G$-fibration such that $E \times_{B/G} E$ is a $G$-$\mathrm{CW}$-complex. Let $e \in E^G$. If the fibre $F = p^{-1}(p(e))$ satisfies either \begin{itemize} \item $F$ is $G$-connected, $G$-contractible and $F^G = \{e\}$, or \item $F$ admits a strong $G$-deformation retraction to the point $e$, \end{itemize} then $\TC^G[p \colon E \to B] = 1$. \end{theorem} \begin{proof} Note that $$ \Psi^{-1}(e,e) = \{(\alpha,\beta) \in E^I_B \times E^I_B \mid \alpha(0) = \beta(1) = e, \alpha(1) = g \cdot \beta(0) \text{ for some } g \in G\}. $$ Since $\alpha(0) = \beta(1) = e$ and $\alpha,\beta \in E^I_B$, it follows that the fibre $\Psi^{-1}(e,e)$ is $(G \times G)$-homeomorphic to $$ \mathcal{F} = \{\gamma \in F^I \mid \gamma(1/2) = e, \ \gamma(0) = g \cdot \gamma(1) \text{ for some } g \in G \}, $$ where the $(G\times G)$-action on $\mathcal{F}$ is given by $$ ((g_1,g_2)\cdot \gamma)(t) = \begin{cases} g_1 \cdot \gamma(t) & 0 \leq t \leq 1/2,\\ g_2 \cdot \gamma(t) & 1/2 \leq t \leq 1. \end{cases} $$ This action is well-defined since $\gamma(1/2) = e \in E^G$. Suppose $F$ is $G$-connected, $G$-contractible and $F^G = \{e\}$. Since $F$ is $G$-connected, we have $_{\{e\}}\ct_G(F) = \ct_G(F)$, see \cite[Remark 2.3]{invarianttc} and \cite[Lemma 3.14]{colmangranteqtc}. Hence, $_{\{e\}}\ct_G(F) = 1$ as $F$ is $G$-contractible. Thus, there exists a $G$-homotopy $H \colon F \times I \to F$ such that $H(f,0)=f$ and $H(f,1)=e$ for all $f\in F$. Let $K \colon F^I \times I \to F^I$ be the homotopy given by $K(\delta,t)(s) = H(\delta(s),t)$ for all $s,t \in I$ and $\delta \in F^I$.
Note that $K$ is a $G$-homotopy. If $\gamma \in \mathcal{F}$, then $$ g \cdot K(\gamma,t)(1/2) = g \cdot H(\gamma(1/2),t) = g \cdot H(e,t) = H(g \cdot e, t) = H(e,t) $$ for all $g \in G$, i.e., $K(\gamma,t)(1/2) \in F^G$. Since $F^G = \{e\}$, we get $K(\gamma,t)(1/2) = e$ for all $t\in I$. Suppose instead that $F$ admits a strong $G$-deformation retraction to the point $e$. Then there exists a $G$-homotopy $H \colon F \times I \to F$ such that $H(f,0)=f$ and $H(f,1)=e$ and $H(e,t)=e$ for all $f\in F$ and $t\in I$. Then the homotopy $K$ defined on $F^I$ as above satisfies $K(\gamma,t)(1/2) = e$ for all $t \in I$, due to the condition $H(e,t) = e$. Moreover, $$ K(\gamma,t)(0) = H(\gamma(0),t) = H(g \cdot \gamma(1),t) = g \cdot H(\gamma(1),t) = g \cdot K(\gamma,t)(1) $$ where $\gamma(0) = g \cdot \gamma(1)$. Hence, if $\gamma \in \mathcal{F}$, we have $K(\gamma,t) \in \mathcal{F}$. Therefore, in both cases, $K$ restricts to a $(G \times G)$-homotopy $K \colon \mathcal{F} \times I \to \mathcal{F}$ such that $K(\gamma,0) = \gamma$ and $K(\gamma,1) = c_e$, where $c_e$ is the constant path in $E$ taking the value $e$. In particular, $\mathcal{F}$ is $(G\times G)$-contractible. Hence, by equivariant obstruction theory, $\Psi$ admits a $(G \times G)$-equivariant section, and therefore $\TC^G[p \colon E \to B] = 1$. \end{proof} Later, we will also provide sufficient conditions for $\TC^G[p \colon E \to B] = 1$ and its converse when the group action on the base is free, as stated in \Cref{cor: suff-cond for inv-para-tc = 1 with free action} and \Cref{cor: inv-para-tc = 1 implies F is contractible with free action}, respectively. \begin{proposition} \label{prop: subgroup-ineq for inv-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration such that $G$ acts freely on $B$. If $K$ is a subgroup of $G$ such that $p \colon E \to B$ is also a $K$-fibration, then $$ \TC^{K}[p \colon E \to B] \leq \TC^{G}[p \colon E \to B]. $$ \end{proposition} \begin{proof} Suppose $\Psi_K \colon E^I_{B} \times_{E/K} E^I_{B} \to E \times_{B/K} E$ is the invariant parametrized fibration corresponding to the $K$-fibration $p$. Then the following diagram \[ \begin{tikzcd} E^I_{B} \times_{E/K} E^I_{B} \arrow[d, "\Psi_K"'] \arrow[rr, hook] & & E^I_B \times_{E/G} E^I_B \arrow[d, "\Psi"] \\ E \times_{B/K} E \arrow[rr, hook] & & E \times_{B/G} E \end{tikzcd} \] is commutative. Suppose $U$ is a $(G \times G)$-invariant open subset of $E \times_{B/G} E$ with a $(G \times G)$-equivariant section $s \colon U \to E^I_B \times_{E/G} E^I_B$ of $\Psi$. Define $V := U \cap \left(E \times_{B/K} E\right)$. Then $V$ is a $(K \times K)$-invariant open subset of $E \times_{B/K} E$. Suppose $(e_1,e_2) \in V$ and $s(e_1,e_2) = (\gamma,\delta) \in E^I_B \times_{E/G} E^I_B$. We claim that $s(e_1,e_2) = (\gamma,\delta)$ lies in $E^I_{B} \times_{E/K} E^I_{B}$. Note that $p(e_1) = k \cdot p(e_2)$ for some $k \in K$, as $(e_1,e_2) \in E \times_{B/K} E$. Since $s$ is a section of $\Psi$, we have $$ b_\gamma = p(\gamma(0)) = p(e_1) = k \cdot p(e_2) = k \cdot p(\delta(1)) = k \cdot b_\delta, $$ where $\gamma(t) \in p^{-1}(b_\gamma)$ and $\delta(t) \in p^{-1}(b_\delta)$ for some $b_\gamma, b_\delta \in B$ and for all $t \in I$. Since $(\gamma,\delta) \in E^I_B \times_{E/G} E^I_B$, we have $\gamma(1) = g \cdot \delta(0)$ for some $g \in G$. Hence, $$ b_\gamma = p(\gamma(1)) = p(g \cdot \delta(0)) = g \cdot p(\delta(0)) = g \cdot b_\delta. $$ Thus, we get $g \cdot b_\delta = k \cdot b_\delta$. It follows that $g=k$ since $G$ acts freely on $B$. Thus, $\gamma(1) = k \cdot \delta(0)$ implies $(\gamma,\delta) \in E^I_{B} \times_{E/K} E^I_{B}$.
Hence, the restriction $\left.s\right|_{V} \colon V \to E^I_{B} \times_{E/K} E^I_{B}$ is a $(K \times K)$-equivariant section of $\Psi_K$. Since $E \times_{B/K} E \subseteq E \times_{B/G} E$, the sets $V$ obtained in this way from a cover of $E \times_{B/G} E$ by such subsets $U$ cover $E \times_{B/K} E$, which yields the desired inequality. \end{proof} \begin{corollary} \label{cor: subgroup-ineq for inv-para-tc} Suppose $G$ is a compact Hausdorff topological group. If $p \colon E \to B$ is a $G$-fibration such that $G$ acts freely on $B$, then $$ \TC^{K}[p \colon E \to B] \leq \TC^{G}[p \colon E \to B] $$ for all closed subgroups $K$ of $G$. In particular, $$ \TC(F) \leq \TC[p \colon E \to B] \leq \TC^{G}[p \colon E \to B], $$ where $F$ is the fibre of $p$. \end{corollary} \begin{proof} Note that, by \cite[Theorem 3]{gevorgyan2023equivariant}, the map $p \colon E \to B$ is a $K$-fibration. Hence, the result follows from \cite[Page 235]{farber-para-tc} and \Cref{prop: subgroup-ineq for inv-para-tc}. \end{proof} \begin{corollary} \label{cor: inv-para-tc = 1 implies F is contractible with free action} Suppose $G$ is a compact Hausdorff topological group and $p \colon E \to B$ is a $G$-fibration with fibre $F$ such that $G$ acts freely on $B$. If $\TC^{G}[p \colon E \to B] = 1$, then the fibre $F$ is contractible. \end{corollary} \begin{proof} This follows from \Cref{cor: subgroup-ineq for inv-para-tc} and the fact that $\TC(F) = 1$ if and only if $F$ is contractible. \end{proof} \subsubsection{Cohomological Lower Bounds} \begin{lemma} \label{lemma: homotopy equivalence of the saturation of the diagonal with total space} Suppose $p \colon E \to B$ is a $G$-fibration. Then the map $c \colon E \times_{E/G} E \to E^{I}_B \times_{E/G} E^{I}_B$, given by $c(e_1,e_2)=(c_{e_1},c_{e_2})$ where $c_{e_i}$ is the constant path in $E$ taking the value $e_i \in E$, is a $(G \times G)$-homotopy equivalence. \end{lemma} \begin{proof} Let $f \colon E^{I}_B \times_{E/G} E^{I}_B \to E \times_{E/G} E$ be the map given by $f(\gamma,\delta) = (\gamma(1),\delta(0))$. Then $f$ is $(G \times G)$-equivariant such that $(c \circ f) (\gamma,\delta) = (c_{\gamma(1)},c_{\delta(0)})$ and $f \circ c$ is the identity map of $E \times_{E/G} E$. Let $H \colon (E^{I}_B \times_{E/G} E^{I}_B) \times I \to E^{I}_B \times_{E/G} E^{I}_B$ be the homotopy given by $$ H(\gamma,\delta,t) = (\gamma'_t, \delta'_t), $$ where $\gamma'_t(s) = \gamma(s + t(1-s))$ and $\delta'_t(s) = \delta(s(1-t))$. Then following the proof of \Cref{thm: equivalent defn of a section of in-para-tc}, we see that $H$ is well-defined, $(G\times G)$-equivariant, $H(\gamma,\delta,0)=(\gamma,\delta)$, and $H(\gamma,\delta,1)=(c_{\gamma(1)},c_{\delta(0)})$. Hence, $c \circ f$ is $(G \times G)$-homotopic to the identity map of $E^{I}_B \times_{E/G} E^{I}_B$. \end{proof} Note that the following diagram \[ \begin{tikzcd} E \times_{E/G} E \arrow[rr, "c"] \arrow[rd, "i"', hook] & & E^{I}_B \times_{E/G} E^{I}_B \arrow[ld, "\Psi"] \\ & E \times_{B/G}E & \end{tikzcd} \] is commutative, where $i \colon E \times_{E/G} E \hookrightarrow E \times_{B/G} E$ is the inclusion map. In other words, $\Psi$ is a $(G \times G)$-fibrational substitute for the $(G \times G)$-map $i$. For ease of notation in the upcoming theorem, let $G^2$ denote the product $G \times G$. \begin{theorem} \label{thm: cohomological lower bound for inv-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration. If there exist cohomology classes $u_1,\dots,u_k \in \widetilde{H}^{*}_{G^2}(E \times_{B/G} E;R)$ (for any commutative ring $R$) such that $$ (i^h_{G^2})^*(u_1) = \cdots = (i^h_{G^2})^*(u_k) = 0 \quad \text{and} \quad u_1 \smile \cdots \smile u_k \neq 0, $$ then $\TC^G[p \colon E \to B] > k$.
\end{theorem} \begin{proof} Note that $\Psi \circ c = i$ implies $(c^h_{G^2})^* \circ (\Psi^h_{G^2})^* = (i^h_{G^2})^*$. Since $c$ is a $(G \times G)$-homotopy equivalence (see \Cref{lemma: homotopy equivalence of the saturation of the diagonal with total space}), it follows that $c^h_{G^2}$ is a homotopy equivalence. Hence, $(c^h_{G^2})^*$ is an isomorphism. Thus, the result follows from \Cref{thm: cohomological lower bound for eq-secat}. \end{proof} \begin{remark} We note that any $G$-map $f \colon X \to Y$ that is a non-equivariant homotopy equivalence induces an isomorphism on the level of Borel cohomology, see \cite{may1987characteristic}. Hence, for \Cref{thm: cohomological lower bound for inv-para-tc}, we do not need $c$ to be a $(G\times G)$-homotopy equivalence; we only require $c$ to be a $(G \times G)$-map and a non-equivariant homotopy equivalence. \end{remark} Now we give a non-equivariant cohomological lower bound for the invariant parametrized topological complexity. Let $E_{B,G^2} := (E \times_{B/G} E)/(G \times G)$ and let $\mathbb{k}_{G^2}E \subseteq E_{B,G^2}$ denote the image of the saturated diagonal subspace $\mathbb{k}(E) = E \times_{E/G} E \subseteq E \times_{B/G} E$ under the orbit map $\rho \colon E \times_{B/G} E \to E_{B,G^2}$. By using \Cref{thm: equivalent defn of a section of in-para-tc} and following the arguments in \Cref{thm: non-equi coho-lower-bound for equi-para-tc}, one can establish the following theorem. The proof is left to the reader. \begin{theorem} \label{thm: non-equi coho-lower-bound for inv-para-tc} Suppose $p \colon E \to B$ is a $G$-fibration. If there exist cohomology classes $u_1,\dots,u_k \in H^*(E_{B,G^2};R)$ (for any commutative ring $R$) such that \begin{enumerate} \item $u_i$ restricts to zero in $H^*(\mathbb{k}_{G^2}E;R)$ for $i=1,\dots,k$; \item $u_1 \smile \dots \smile u_k \neq 0$ in $H^*(E_{B,G^2};R)$, \end{enumerate} then $\TC^G[p \colon E \to B] > k$. \end{theorem} \subsubsection{Product Inequalities} \begin{theorem} \label{thm: prod-ineq for inv-para-tc} Let $p_1 \colon E_1 \to B_1$ be a $G_1$-fibration and $p_2 \colon E_2 \to B_2$ be a $G_2$-fibration. If $E_1 \times E_1 \times E_2 \times E_2$ is $(G_1 \times G_1 \times G_2 \times G_2)$-completely normal, then \[ \TC^{G_1 \times G_2}[p_1\times p_2 \colon E_1\times E_2\to B_1\times B_2] \leq \TC^{G_1}[p_1 \colon E_1 \to B_1] + \TC^{G_2}[p_2 \colon E_2\to B_2]-1. \] \end{theorem} \begin{proof} Let $\Psi_1 \colon (E_1)^I_{B_1} \times_{E_1/G_1} (E_1)^I_{B_1} \to E_1 \times_{B_1/G_1} E_1$ and $\Psi_2 \colon (E_2)^{I}_{B_2} \times_{E_2/G_2} (E_2)^I_{B_2} \to E_2 \times_{B_2/G_2} E_2$ be the invariant parametrized fibrations corresponding to $p_1$ and $p_2$, respectively. If $E := E_1 \times E_2$, $B := B_1 \times B_2$, $G := G_1 \times G_2$, and $p := p_1 \times p_2$ is the product $G$-fibration, then it is easily checked that $$ E^I_B \times_{E/G} E^I_B = \left((E_1)^I_{B_1} \times_{E_1/G_1} (E_1)^I_{B_1}\right) \times \left((E_2)^{I}_{B_2} \times_{E_2/G_2} (E_2)^I_{B_2}\right), $$ and $$ E \times_{B/G} E = \left(E_1 \times_{B_1/G_1} E_1 \right) \times \left(E_2 \times_{B_2/G_2} E_2\right), $$ and the invariant parametrized fibration $\Psi \colon E^I_B \times_{E/G} E^I_B \to E \times_{B/G} E$ corresponding to $p$ is equivalent to the product fibration $\Psi_1 \times \Psi_2$.
Hence, \begin{align*} \TC^{G_1 \times G_2}[p_1 \times p_2 \colon E_1 \times E_2 \to B_1 \times B_2] & = \secat_{(G_1 \times G_2) \times (G_1 \times G_2)}(\Psi) \\ & = \secat_{(G_1 \times G_1) \times (G_2 \times G_2)}(\Psi_1 \times \Psi_2) \\ & \leq \secat_{G_1 \times G_1}(\Psi_1) + \secat_{G_2 \times G_2}(\Psi_2) - 1\\ & = \TC^{G_1}[p_1 \colon E_1 \to B_1]+\TC^{G_2}[p_2 \colon E_2\to B_2] -1, \end{align*} by \cite[Proposition 2.9]{A-D-S}. \end{proof} The product inequality for invariant topological complexity was proved in \cite[Theorem 3.18]{invarianttc}. In the following corollary, we show that the cofibration hypothesis assumed in \cite[Theorem 3.18]{invarianttc} can be removed by using \Cref{thm: prod-ineq for inv-para-tc}. \begin{corollary} Suppose $X$ is a $G$-space and $Y$ is a $H$-space. If $X \times X \times Y \times Y$ is $(G \times G \times H \times H)$-completely normal, then $$ \TC^{G \times H}(X \times Y) \leq \TC^{G}(X) + \TC^H(Y) - 1. $$ \end{corollary} \begin{proof} Note that $X \to \{*_1\}$ is a $G$-fibration and $Y \to \{*_2\}$ is a $H$-fibration. Hence, \begin{align*} \TC^{G \times H}(X \times Y) & = \TC^{G \times H}[X\times Y \to \{*_1\} \times \{*_2\}] \\ & \leq \TC^{G}[X \to \{*_1\}] + \TC^H[Y \to \{*_2\}] - 1 \\ & = \TC^{G}(X) + \TC^H(Y) - 1, \end{align*} by \Cref{prop: special cases of inv-para-tc} and \Cref{thm: prod-ineq for inv-para-tc}. \end{proof} The proof of the following corollary is similar to \Cref{cor: special-prod-ineq for equi-para-tc} and can be shown using \Cref{cor: subgroup-ineq for inv-para-tc} and \Cref{thm: prod-ineq for inv-para-tc}. \begin{corollary} \label{cor: special-prod-ineq for inv-para-tc} Suppose $p_i \colon E_i \to B_i$ is a $G$-fibration such that $G$ acts on $B_i$ freely for $i = 1,2$. If $G$ is compact Hausdorff, then $p_1\times p_2 \colon E_1\times E_2\to B_1\times B_2$ is a $G$-fibration, where $G$ acts diagonally on the spaces $E_1 \times E_2$ and $B_1 \times B_2$. Furthermore, if $E_1$ and $E_2$ are Hausdorff, and $E_1 \times E_1 \times E_2 \times E_2$ is completely normal, then \[ \TC^{G}[p_1\times p_2 \colon E_1\times E_2\to B_1\times B_2] \leq \TC^{G}[p_1 \colon E_1 \to B_1] + \TC^{G}[p_2 \colon E_2\to B_2]-1. \] \end{corollary} The proof of the following corollary is similar to that of \Cref{cor: equi-para-tc of pullback fibration under diagonal map} and follows from \Cref{prop: pullback-ineq for inv-para-tc} and \Cref{cor: special-prod-ineq for inv-para-tc}. \begin{corollary} Suppose $p_i \colon E_i \to B$ is a $G$-fibration, for $i =1,2$, such that $G$ acts on $B$ freely. Let $E_1 \times_B E_2 = \{(e_1,e_2) \in E_1 \times E_2 \mid p_1(e_1) = p_2(e_2)\}$ and let $p\colon E_1 \times_B E_2 \to B$ be the $G$-map given by $p(e_1,e_2)=p_1(e_1)=p_2(e_2)$, where $G$ acts on $E_1 \times_B E_2$ diagonally. If $G$ is compact Hausdorff, then $p$ is a $G$-fibration. Furthermore, if $E_1$ and $E_2$ are Hausdorff, and $E_1 \times E_1 \times E_2 \times E_2$ is completely normal, then \[ \TC^{G}[p \colon E_1 \times_B E_2 \to B] \leq \TC^G[p_1 \colon E_1 \to B]+\TC^G[p_2 \colon E_2\to B]-1. \] \end{corollary} \subsection{Some technical results} \hfill\\ \vspace{-0.7em} In this subsection, we establish two technical results which will help us compute the invariant parametrized topological complexity of Fadell-Neuwirth fibrations in \Cref{sec: examples}. \begin{definition}[{\cite[Section 5]{gevorgyan2023equivariant}}] Suppose $p\colon E \to B$ is a $G$-map and $F$ is a $G$-space. 
We say that $p$ is a locally trivial $G$-fibration with fibre $F$ if for each point $b \in B$ there exists a $G$-invariant open subset $U$ containing $b$ and a $G$-equivariant homeomorphism $\phi: p^{-1}(U) \to U \times F$ such that the following diagram \[ \begin{tikzcd} p^{-1}(U) \arrow[rr, "\phi"] \arrow[rd, "p"'] & & U \times F \arrow[ld, "\pi_1"] \\ & U & \end{tikzcd} \] commutes, where $G$ acts on $U \times F$ diagonally. The map $\phi$ is called a $G$-trivialization of $p$. \end{definition} \begin{proposition} \label{prop: quotient-fibration} Suppose $p \colon E \to B$ is a locally trivial $G$-fibration with fibre $F$. If $G$ acts trivially on $F$, then the induced map $\overline{p} \colon \overline{E} \to \overline{B}$ between the orbit spaces is locally trivial with fibre $F$. \end{proposition} \begin{proof} Suppose $\phi \colon p^{-1}(U) \to U \times F$ is a $G$-trivialization of $p$ over $U$. As the quotient map $\pi_B \colon B \to \overline{B}$ is open, it follows that $\overline{U} := \pi_B(U)$ is an open subset of $\overline{B}$. Further, since $U$ is $G$-invariant, it is saturated with respect to $\pi_B$. Hence, the restriction $\left.\pi_B\right|_{U} \colon U \to \overline{U}$ is an open quotient map and so is the product map $\left(\left.\pi_B\right|_{U}\right) \times \mathrm{id}_F \colon U \times F \to \overline{U} \times F$. Hence, the induced natural map $(U \times F)/G \to \overline{U} \times F$ is a homeomorphism. If $(\overline{p^{-1}(U)}) := \pi_E(p^{-1}(U))$, then $(\overline{p^{-1}(U)}) = (\overline{p})^{-1}(\overline{U})$ since $U$ is $G$-invariant. Similarly, $\left.\pi_E\right|_{p^{-1}(U)} \colon p^{-1}(U) \to (\overline{p})^{-1}(\overline{U})$ is an open quotient map, and the induced natural map $p^{-1}(U)/G \to (\overline{p})^{-1}(\overline{U})$ is a homeomorphism. Hence, the homeomorphism $\phi/G \colon p^{-1}(U)/G \to (U \times F)/G$ induced by $\phi$ gives a trivialization $$ \overline{\phi} \colon (\overline{p})^{-1}(\overline{U}) \to \overline{U} \times F $$ for $\overline{p}$ over $\overline{U}$. As $p$ is surjective, it follows that $\overline{p}$ is locally trivial with fibre $F$. \end{proof} As noted in \Cref{thm: secat(overline(p)) = secat_G(p) for free actions}, the induced map $\overline{p} \colon \overline{E} \to \overline{B}$ is a fibration when $G$ is a compact Hausdorff topological group. However, to compute the invariant parametrized topological complexity of the equivariant Fadell-Neuwirth fibration, defined in \cite{D-EqPTC}, we require \Cref{prop: quotient-fibration}, which says that $\overline{p}$ is also locally trivial. We will introduce equivariant Fadell-Neuwirth fibrations and calculate their invariant parametrized topological complexity in \Cref{sec: examples}. Now, we present one more result which will be required in \Cref{sec: examples}. \begin{proposition} \label{prop: homotopy dimension of total space} Suppose $p \colon E \to B$ is a fibre bundle with fibre $F$, where the spaces $E,B,F$ are $\mathrm{CW}$-complexes. Then $$ \mathrm{hdim}(E) \leq \mathrm{hdim}(B) + \dim(F). $$ \end{proposition} \begin{proof} Since $p$ is locally trivial, it follows that $\dim(E) \leq \dim(B) + \dim(F)$. In particular, $\mathrm{hdim}(E) \leq \dim(B) + \dim(F)$. If $h \colon B' \to B$ is a homotopy equivalence and $E'$ is the pullback of $E$ along $h$, then $E'$ is a fibre bundle over $B'$ with fibre $F$. Thus, we have $\dim(E') \leq \dim(B') + \dim(F)$. Note that $E'$ is homotopy equivalent to $E$ as $h$ is a homotopy equivalence.
Hence, we get $\mathrm{hdim}(E) \leq \dim(B') + \dim(F)$, and choosing $B'$ to be a $\mathrm{CW}$-complex homotopy equivalent to $B$ with $\dim(B') = \mathrm{hdim}(B)$, the result follows. \end{proof} \subsection{Invariance Theorem} \hfill\\ \vspace{-0.7em} \label{subsec:invariance-thm} Suppose $p\colon E \to B$ is a $G$-fibration such that the induced map $\overline{p} \colon \overline{E} \to \overline{B}$ between the orbit spaces is a fibration. If $\overline{\Pi} \colon (\overline{E})^{I}_{\overline{B}} \to \overline{E} \times_{\overline{B}} \overline{E}$ is the parametrized fibration induced by $\overline{p} \colon \overline{E} \to \overline{B}$, then we have a commutative diagram \[ \begin{tikzcd} E^I_B \times_{E/G} E^I_B \arrow[rr, "\Psi"] \arrow[d,swap,"f"] & & E \times_{B/G} E \arrow[d,"\pi_E \times \pi_E"] \\ (\overline{E})^{I}_{\overline{B}} \arrow[rr,"\overline{\Pi}"] & & \overline{E} \times_{\overline{B}} \overline{E}, \end{tikzcd} \] where $f(\gamma,\delta) = \overline{\gamma}*\overline{\delta}$ with $\overline{\gamma} = \pi_E \circ \gamma$ and $\overline{\delta} = \pi_E \circ \delta$. \begin{lemma} \label{lemma: pi_E x pi_E is an open quotient map} The restriction $\pi_E \times \pi_E \colon E \times_{B/G} E \to \overline{E} \times_{\overline{B}} \overline{E}$ is an open quotient map. \end{lemma} \begin{proof} As $\pi_E \colon E \to \overline{E}$ is an open quotient map, it follows that $\pi_E \times \pi_E \colon E \times E \to \overline{E} \times \overline{E}$ is also an open quotient map. The subset $E \times_{B/G} E$ of $E \times E$ is saturated with respect to $\pi_E \times \pi_E$, since $E \times_{B/G} E$ is $(G \times G)$-invariant. Thus, $\pi_E \times \pi_E \colon E \times_{B/G} E \to (\pi_E \times \pi_E)(E \times_{B/G} E)$ is an open quotient map. Note that \begin{align*} (\overline{e_1},\overline{e_2}) \in \overline{E} \times_{\overline{B}} \overline{E} & \iff \overline{p} (\overline{e_1}) = \overline{p} (\overline{e_2}) \in \overline{B} \\ & \iff \overline{p(e_1)} = \overline{p(e_2)} \in \overline{B} \\ & \iff p(e_1) = g \cdot p(e_2) \text{ for some } g \in G\\ & \iff (e_1,e_2) \in E \times_{B/G} E. \end{align*} Hence, the result follows. \end{proof} \begin{proposition} Suppose $p \colon E \to B$ is a $G$-fibration such that $\overline{p} \colon \overline{E} \to \overline{B}$ is a fibration. Then \[ \TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}] \leq \TC^{G}[p \colon E \rightarrow B]. \] \end{proposition} \begin{proof} Suppose $U$ is a $(G \times G)$-invariant open subset of $E \times_{B/G} E$ with a $(G \times G)$-equivariant section $s$ of $\Psi$ over $U$. Then $\overline{U} := (\pi_E \times \pi_E)(U)$ is an open subset of $\overline{E} \times_{\overline{B}} \overline{E}$, by \Cref{lemma: pi_E x pi_E is an open quotient map}. As $U$ is $(G \times G)$-invariant, it follows that $U$ is saturated with respect to $\pi_E \times \pi_E$. Hence, $\pi_E \times \pi_E \colon U \to \overline{U}$ is a quotient map. Since $s$ is $(G \times G)$-equivariant and $f$ is $(G \times G)$-invariant, the map $f \circ s$ is constant on the fibres of $\pi_E \times \pi_E$. Then, by the universal property of quotient maps, there exists a unique continuous map $\overline{s} \colon \overline{U} \to (\overline{E})^{I}_{\overline{B}}$ such that the following diagram \[ \begin{tikzcd} U \arrow[r, "f \circ s"] \arrow[d, "\pi_E \times \pi_E"'] & (\overline{E})^{I}_{\overline{B}} \\ \overline{U} \arrow[ru, "\overline{s}"', dashed] & \end{tikzcd} \] commutes. Then $$ \overline{\Pi} ( \overline{s} (\overline{e_1},\overline{e_2})) = \overline{\Pi}(f(s(e_1,e_2))) = (\pi_E \times \pi_E) (\Psi(s(e_1,e_2))) = (\pi_E \times \pi_E)(e_1,e_2) = (\overline{e_1},\overline{e_2}) $$ implies that $\overline{s}$ is a section of $\overline{\Pi}$ over $\overline{U}$.
Thus, the result follows since $\pi_E \times \pi_E \colon E \times_{B/G} E \to \overline{E} \times_{\overline{B}} \overline{E}$ is surjective. \end{proof} \begin{theorem} \label{thm: inv-para-tc under free action} Suppose $G$ is a compact Lie group. Let $p \colon E \to B$ be a $G$-fibration and let $\overline{p} \colon \overline{E} \to \overline{B}$ be the induced fibration between the orbit spaces. If the $G$-action on $E$ is free and $\overline{E} \times \overline{E}$ is hereditary paracompact, then \[ \TC^{G}[p \colon E \rightarrow B] = \TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}]. \] \end{theorem} \begin{proof} Suppose $\overline{U}$ is an open subset of $\overline{E} \times_{\overline{B}} \overline{E}$ with section $\overline{s}$ of $\overline{\Pi}$ over $\overline{U}$. Then, by \Cref{thm: equivalent defn of a section of equi-para-tc} for the trivial group action, there exists a homotopy $\overline{H} \colon \overline{U} \times I \to \overline{E} \times_{\overline{B}} \overline{E}$ such that $\overline{H}_0$ is the inclusion map of $i_{\overline{U}} \colon \overline{U} \hookrightarrow \overline{E} \times_{\overline{B}} \overline{E}$ and $\overline{H}_1$ takes value in $\Delta(\overline{E})$. Let $U = (\pi_E \times \pi_E)^{-1}(\overline{U})$. Then $U$ is $(G \times G)$-invariant and $\overline{U}$ is hereditary paracompact. Note that the following diagram \[ \begin{tikzcd} U \times \{0\} \arrow[d, hook] \arrow[rrr, hook] & & & E \times_{B/G} E \arrow[d, "\pi_E \times\pi_E"] \\ U \times I \arrow[rr, "(\pi_E \times\pi_E) \times \mathrm{id}_I"] & & \overline{U} \times I \arrow[r, "\overline{H}"] & \overline{E} \times_{\overline{B}} \overline{E} \end{tikzcd} \] commutes. As the $G$-action on $E$ is free, it follows the action of $G \times G$ on $E \times_{E/G} E$ (and $U$) is free. Hence, by the Covering Homotopy Theorem of Palais \cite[Theorem II.7.3]{bredon-transformation-groups} and \Cref{lemma: pi_E x pi_E is an open quotient map}, it follows there exists a $(G \times G)$-homotopy $H \colon U \times I \to E \times_{B/G} E$ such that $H_0 = i_{U} \colon U \hookrightarrow E \times_{B/G} E$ and $(\pi_E \times \pi_E) \circ H = \overline{H} \circ ((\pi_E \times \pi_E) \times \mathrm{id}_I)$. As $\overline{H}_1$ takes value in $\Delta(\overline{E})$, it follows $H_1$ takes values in $E \times_{E/G} E$. Hence, by \Cref{thm: equivalent defn of a section of in-para-tc}, we get a $(G \times G)$-equivariant section of $\Psi$ over $U$. Thus, $\TC^{G}[p \colon E \rightarrow B] \leq \TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}]$. \end{proof} We note that the main theorem in Lubawski and Marzantowicz's paper, as stated in \Cref{thm: invariance theorem for TC}, can be recovered from \Cref{thm: inv-para-tc under free action} by taking the base space $B$ to be a point. Now, we state some corollaries of this theorem. \begin{corollary} \label{cor: para-tc leq para-tc of orbit fibration for free action} Suppose $G$ is a compact Lie group. Let $p \colon E \to B$ be a $G$-fibration and let $\overline{p} \colon \overline{E} \to \overline{B}$ be the induced fibration between the orbit spaces. If the $G$-action on $B$ is free and $\overline{E} \times \overline{E}$ is hereditary paracompact, then \[ \TC(F) \leq \TC[p \colon E \rightarrow B] \leq \TC^{G}[p \colon E \rightarrow B] = \TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}], \] where $F$ is the fibre of $p$. \end{corollary} \begin{proof} Observe that $G$ acts freely on $E$. 
Hence, the result follows from \Cref{cor: subgroup-ineq for inv-para-tc} and \Cref{thm: inv-para-tc under free action}. \end{proof} \begin{corollary} \label{cor: suff-cond for inv-para-tc = 1 with free action} Suppose $G$ is a compact Lie group and $p \colon E \to B$ is a locally trivial $G$-fibration with fibre $F$ such that $G$ acts trivially on $F$, $G$ acts freely on $B$, and $\overline{E} \times \overline{E}$ is hereditary paracompact. If $F$ is contractible and $\overline{E} \times_{\overline{B}} \overline{E}$ is homotopy equivalent to a $\mathrm{CW}$-complex, then $\TC^G[p \colon E \to B] =1$. \end{corollary} \begin{proof} By \Cref{prop: quotient-fibration}, the induced map $\overline{p} \colon \overline{E} \rightarrow \overline{B}$ is a locally trivial fibration with fibre $F$. We note that the fibre of the parametrized fibration $\overline{\Pi} \colon (\overline{E})^{I}_{\overline{B}} \to \overline{E} \times_{\overline{B}} \overline{E}$ induced by $\overline{p} \colon \overline{E} \to \overline{B}$ is the loop space $\Omega F$, which is contractible since $F$ is contractible. Hence, $\TC[\overline{p} \colon \overline{E} \rightarrow \overline{B}] = 1$ by obstruction theory. Thus, the result follows from the Invariance \Cref{thm: inv-para-tc under free action}. \end{proof} \section{Computations for equivariant Fadell-Neuwirth fibrations}\label{sec: examples} In this section, we provide estimates for the invariant parametrized topological complexity of equivariant Fadell-Neuwirth fibrations. We begin by defining the configuration spaces and the associated Fadell-Neuwirth fibrations, along with introducing an appropriate symmetric group action on the configuration spaces to ensure they possess an equivariant fibration structure. The \emph{ordered configuration space} of $s$ points in $\R^d$, denoted by $F(\R^d,s)$, is defined as \[ F(\R^d,s):=\{(x_1,\dots,x_s)\in (\R^d)^s \mid x_i\neq x_j ~\text{ for }~ i\neq j\}. \] \begin{definition}[{\cite{FadellNeuwirth}}] \label{def:fadellNeuiwirth} The maps \[ p \colon F(\R^d,s+t)\to F(\R^d,s) \quad \text{defined by} \quad p(x_1,\dots,x_{s},y_1,\dots,y_t)=(x_1,\dots,x_s) \] are called Fadell-Neuwirth fibrations. \end{definition} We now define an action of the permutation group $\Sigma_s$ on $F(\R^d,s+t)$. For $\sigma\in \Sigma_s$, define \[ \sigma\cdot (x_1,\dots,x_s,y_1,\dots,y_t)=(x_{\sigma(1)},\dots,x_{\sigma(s)},y_1,\dots,y_t). \] Similarly, $\Sigma_s$ acts on $F(\R^d,s)$ by permuting the coordinates. Notice that the map $p$ in \Cref{def:fadellNeuiwirth} is $\Sigma_s$-equivariant. In fact, in \cite{D-EqPTC}, it was demonstrated that this map is a $\Sigma_s$-fibration. The parametrized topological complexity of these fibrations was computed in \cite{farber-para-tc} and \cite{PTCcolfree}. In particular, the following result was proved there: \begin{theorem}[{\cite[Theorem 9.1]{farber-para-tc}} and {\cite[Theorem 4.1]{PTCcolfree}}] \label{thm: ptc-Fadell_Neuwirth} Suppose $s\geq 2$, $t\geq 1$. Then \[ \TC[p \colon F(\R^d,s+t)\to F(\R^d, s)] = \begin{cases} 2t+s, & \text{if $d$ is odd},\\ 2t+s-1, & \text{if $d$ is even}. \end{cases} \] \end{theorem} For the rest of the section, we will use the notation $p \colon E \to B$ for the Fadell-Neuwirth fibration. The fibre $F$ of $p$ is the configuration space of $t$ points in $\R^d \setminus \mathcal{O}_s$, where $\mathcal{O}_s$ denotes a fixed configuration of $s$ distinct points in $\R^d$. More precisely, $F = F(\R^d \setminus \mathcal{O}_s, t)$.
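For instance, specializing \Cref{thm: ptc-Fadell_Neuwirth} to $s = 2$ and $t = 1$ gives
\[
\TC[p \colon F(\R^2,3)\to F(\R^2, 2)] = 2t+s-1 = 3
\qquad \text{and} \qquad
\TC[p \colon F(\R^3,3)\to F(\R^3, 2)] = 2t+s = 4.
\]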
We will now demonstrate that the invariant parametrized topological complexity of the Fadell–Neuwirth fibrations coincides with that of the corresponding orbit fibrations. Furthermore, it is bounded below by the parametrized topological complexity of the Fadell–Neuwirth fibrations. This implies that the complexity of the universal motion planning algorithm is at least as large when the order in which the obstacles (mines) are placed is irrelevant. This is something one would expect, since the motion planners, in a sense, satisfy an extra condition: they remain unchanged under the reordering of the obstacles (mines). \begin{theorem} \label{thm: tc-Fadell-Neuwith-orbit-fib} The induced map $\overline{p} \colon \overline{F(\R^d,s+t)} \to \overline{F(\R^d, s)}$ is a locally trivial fibration with fibre $F$. Moreover, \begin{align*} \TC[p \colon F(\R^d,s+t) \to F(\R^d, s)] & \leq \TC^{\Sigma_s}[p \colon F(\R^d,s+t)\to F(\R^d, s)] \\ & = \TC[\overline{p} \colon \overline{F(\R^d,s+t)} \to \overline{F(\R^d, s)}]. \end{align*} \end{theorem} \begin{proof} We note that the equivariant Fadell-Neuwirth fibrations are locally $\Sigma_s$-trivial with $\Sigma_s$ acting trivially on the fibre $F$, see \cite[Section 5.1]{D-EqPTC}. Since $F(\R^d, s)$ is a manifold, it is paracompact Hausdorff. Hence, by \Cref{prop: quotient-fibration}, it follows that the induced map $\overline{p}$ is a locally trivial fibration with fibre $F$. As the action of $\Sigma_s$ on $F(\R^d,s+t)$ and $F(\R^d,s)$ is free, and since $F(\R^d,s+t)$ and $F(\R^d,s)$ are manifolds, it follows that $\overline{F(\R^d,s+t)}$ and $\overline{F(\R^d,s)}$ are also manifolds. Thus, the result follows from \Cref{cor: para-tc leq para-tc of orbit fibration for free action}. \end{proof} \begin{proposition} \label{prop: Fadell-Neuwirth-dim} Suppose $p \colon E \to B$ denotes the equivariant Fadell-Neuwirth fibration with fibre $F$, where $G = \Sigma_s$. Then \begin{enumerate} \item the space $E \times_{B/G} E$ is $(d-2)$-connected, and \item $\dim \left(\overline{E} \times_{\overline{B}} \overline{E}\right) = \dim(B) + 2 \dim(F) = ds+2dt$. \item $\mathrm{hdim}(\overline{E} \times_{\overline{B}} \overline{E}) \leq \mathrm{hdim}(\overline{B}) + 2 \dim(F) = (d-1)(s-1) + 2dt$. \end{enumerate} \end{proposition} \begin{proof} (1) Since $\Sigma_s$ is a finite group acting freely on a Hausdorff space $B$, it follows that $\pi_B \colon B \to B/G$ is a covering map. Hence, $\pi_B$ is a fibration. Thus, $\pi_1 \colon E \times_{B/G} E \to E$ is a fibration with fibre $\coprod_{g \in G} F$ since the following diagram $$ \begin{tikzcd} E \times_{B/G} E \arrow[d, "\pi_1"'] \arrow[r, "\pi_2"] & E \arrow[d, "\pi_B \circ p"] \\ E \arrow[r, "\pi_B \circ p"] & B/G \end{tikzcd} $$ is a pullback. As $E$ and $F$ are $(d-2)$-connected (see discussion after the statement of Theorem 4.1 in \cite{PTCcolfree}), it follows that the space $E \times_{B/G} E$ is $(d-2)$-connected. (2) As $\overline{p} \colon \overline{E} \to \overline{B}$ is a locally trivial fibration with fibre $F$, it follows that the obvious map $\overline{E} \times_{\overline{B}} \overline{E} \to \overline{B}$ is a locally trivial fibration with fibre $F \times F$. Hence, $$ \dim(\overline{E} \times_{\overline{B}} \overline{E}) = \dim(F \times F) + \dim(\overline{B}) = 2 \dim(F)+\dim(\overline{B}). $$ Since the $\Sigma_s$-action on the manifold $B$ is free, we get that $\overline{B}$ is a manifold with $\dim(\overline{B}) = \dim(B)$.
(3) It is shown in \cite[Example 7.1.12]{basabe2013highertopologicalcomplexitysymmetrization-arxiv-v5} that the homotopy dimension of the unordered configuration space $\overline{B}$ is $(d-1)(s-1)$. Hence, by \Cref{prop: homotopy dimension of total space}, the claim follows. \end{proof} We are now ready to present our estimates for the invariant parametrized topological complexity of the Fadell-Neuwirth fibrations. \begin{theorem}\label{thm:estimates-for-FN-fibrations} Suppose $s\geq 2$, $t\geq 1$. Then \[ \TC^{\Sigma_s}[p \colon F(\R^d,s+t)\to F(\R^d, s)] < 2t+s+\frac{2t+1}{d-1}. \] Additionally, we have \begin{equation}\label{eq:invptc-lb-FN} \TC^{\Sigma_s}[p \colon F(\R^d,s+t)\to F(\R^d, s)]\geq \begin{cases} 2t+s, & \text{if $d\geq 3$ is odd},\\ 2t+s-1 & \text{if $d\geq 2$ is even}. \end{cases} \end{equation} In particular, if $d \geq 2t+2$, then \begin{equation} \label{eq:invptc-FN-estimates} \TC^{\Sigma_s}[p \colon F(\R^d,s+t)\to F(\R^d, s)]= \begin{cases} 2t+s, & \text{if $d$ is odd},\\ \text{either } 2t+s-1 \text{ or } 2t+s & \text{if $d$ is even}. \end{cases} \end{equation} \end{theorem} \begin{proof} By \Cref{thm: tc-Fadell-Neuwith-orbit-fib}, it is enough to compute $\TC[\overline{p} \colon \overline{E}\to \overline{B}]$. Since the fibre of $\overline{p} \colon \overline{E}\to \overline{B}$ is $F$, which is $(d-2)$-connected, we obtain the following inequality \begin{align*} \TC[\overline{p} \colon \overline{E}\to \overline{B}] & < \frac{\mathrm{hdim}(\overline{E} \times_{\overline{B}} \overline{E})+1}{d-1}+1 & \text{by \cite[Proposition 7.2]{farber-para-tc}}\\ & \leq \frac{(d-1)(s-1)+2dt+1}{d-1}+1 & \text{by \Cref{prop: Fadell-Neuwirth-dim} (3)}\\ & = 2t+s +\frac{2t+1}{d-1}. \end{align*} Therefore, if $d \geq 2t+2$, then $\frac{2t+1}{d-1} \leq 1$, and since $\TC[\overline{p} \colon \overline{E}\to \overline{B}]$ is an integer, we obtain $\TC[\overline{p} \colon \overline{E}\to \overline{B}] \leq 2t+s$. From \Cref{thm: tc-Fadell-Neuwith-orbit-fib}, we have \[ \TC[p \colon F(\R^d,s+t)\to F(\R^d, s)] \leq \TC[\overline{p} \colon \overline{F(\R^d,s+t)} \to \overline{F(\R^d, s)}]. \] Thus, the inequalities in \eqref{eq:invptc-lb-FN} and \eqref{eq:invptc-FN-estimates} follow using \Cref{thm: ptc-Fadell_Neuwirth}. \end{proof} \begin{remark} As suggested by Grant, we may be able to get rid of the condition $d \geq 2t+2$ in \eqref{eq:invptc-FN-estimates} using obstruction theory. More precisely, it is enough to show that the cohomology groups with local coefficients satisfy \begin{equation} \label{eq: local cohomology groups} H^{k+1}(\overline{E} \times_{\overline{B}} \overline{E}; \pi_{k}(*_{2t+s}\Omega F)) = 0 \end{equation} for all $ k +1 \leq \mathrm{hdim}(\overline{E} \times_{\overline{B}} \overline{E}) \leq (d-1)(s-1)+2dt = (2t+s)(d-1)+2t-d + 1 $, where $*_{2t+s}\Omega F$ is the join of $(2t+s)$ copies of $\Omega F$, see \cite[Theorem 3 and Theorem 5]{Sva}. Since $*_{2t+s}\Omega F$ is $((2t+s)(d-1)-2)$-connected, it would be enough to show \eqref{eq: local cohomology groups} for all $$(2t+s)(d-1) -1 \leq k \leq (2t+s)(d-1)+2t-d.$$ This could be done by looking at the Serre spectral sequence with local coefficients for the fibration $F \times F \hookrightarrow \overline{E} \times_{\overline{B}} \overline{E} \to \overline{B}$. We note that since $F \times F$ is simply connected, we have $\pi_1(\overline{E} \times_{\overline{B}} \overline{E}) \cong \pi_1(\overline{B})$. Hence, any local coefficient system on $\overline{E} \times_{\overline{B}} \overline{E}$ is a pullback of a local coefficient system on $\overline{B}$.
Thus, we can apply \cite[Theorem 2.9]{Siegel-Spectral-sequence} to compute the desired cohomology groups of $\overline{E} \times_{\overline{B}} \overline{E}$ with local coefficients. \end{remark} \section{Acknowledgements} We thank Professor Mark Grant for his valuable feedback on an earlier version of this article, which has significantly improved the paper in various aspects. In particular, we are grateful to him for suggesting an approach to establish a cohomological lower bound through the cohomology of orbit spaces and for suggesting possible improvements in \Cref{thm:estimates-for-FN-fibrations}. The first author would like to acknowledge the IISER Pune - IDeaS Scholarship and the Siemens-IISER Ph.D. fellowship for financial support. The second author acknowledges the support of NBHM through grant 0204/10/(16)/2023/R\&D-II/2789. \bibliographystyle{plain} \bibliography{references} \end{document}
2412.12958v1
http://arxiv.org/abs/2412.12958v1
The exact subgraph hierarchy and its local variant for the stable set problem for Paley graphs
\pdfoutput=1 \documentclass{article} \usepackage{mathtools} \usepackage{graphicx} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{url} \usepackage{multirow} \usepackage{enumerate} \usepackage{hyperref} \usepackage{epstopdf} \usepackage{tikz} \usepackage{pgfplots} \usepackage{here} \usepackage{bigstrut} \usepackage{subcaption} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage[colorinlistoftodos]{todonotes} \usepackage{dsfont} \usepackage{adjustbox} \usepackage{xcolor, colortbl} \usepackage{afterpage} \usepackage[indent]{parskip} \usepackage[normalem]{ulem} \topmargin-0.9cm \textheight=22cm \oddsidemargin0.5cm \textwidth=15cm \evensidemargin0.5cm \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\Diag}{Diag} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\val}{val} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\STAB}{STAB} \DeclareMathAlphabet{\mymathbb}{U}{BOONDOX-ds}{m}{n} \newcommand{\allones}[1]{\mymathbb{1}_{#1}} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{bem}[thm]{Remark} \newtheorem{obs}[thm]{Observation} \newtheorem{definition}[thm]{Definition} \newcommand{\epr}{\hfill $\Box$} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Proof}{{\em Proof}.~} \newcommand{\elli}[1]{{\color{blue}#1}} \begin{document} \title{The exact subgraph hierarchy and its local variant for the stable set problem for Paley graphs} \author{ {Elisabeth Gaar}\thanks{University of Augsburg, Germany and Johannes Kepler University Linz, Austria, {\tt [email protected]}} \and {Dunja Pucher}\thanks{University of Klagenfurt, Austria, {\tt [email protected]} }} \date{\today} \maketitle \begin{abstract} The stability number of a graph, defined as the cardinality of the largest set of pairwise non-adjacent vertices, is NP-hard to compute. The exact subgraph hierarchy (ESH) provides a sequence of increasingly tighter upper bounds on the stability number, starting with the Lovász theta function at the first level and including all exact subgraph constraints of subgraphs of order~$k$ into the semidefinite program to compute the Lovász theta function at level~$k$. In this paper, we investigate the ESH for Paley graphs, a class of strongly regular, vertex-transitive graphs. We show that for Paley graphs, the bounds obtained from the ESH remain the Lovász theta function up to a certain threshold level, i.e., the bounds of the ESH do not improve up to a certain level. To overcome this limitation, we introduce the local ESH for the stable set problem for vertex-transitive graphs such as Paley graphs. We prove that this new hierarchy provides upper bounds on the stability number of vertex-transitive graphs that are at least as tight as those obtained from the ESH. Additionally, our computational experiments reveal that the local ESH produces superior bounds compared to the ESH for Paley graphs. \small \textbf{Keywords:} Stability number, semidefinite programming, Paley graphs \end{abstract} \section{Introduction} This paper deals with the stable set problem, a fundamental combinatorial optimization problem. Given an undirected simple graph $G = (V, E)$ with $\vert V \vert = n$ vertices and $\vert E \vert = m$ edges, a subset of vertices $S \subseteq V$ is called a stable set if no two vertices of $S$ are adjacent. 
A stable set of largest possible cardinality in $G$ is called maximum stable set, and its cardinality is denoted by $\alpha(G)$. The stable set problem asks to determine $\alpha(G)$. It is an NP-hard problem contained in Karp's list~\cite{Karp72} from 1972. Therefore, unless P = NP, there is no polynomial time algorithm to solve the stable set problem. A method that can be employed for solving the stable set problem is branch-and-bound, which strongly depends on good upper bounds. One possible upper bound on the stability number of a graph is given by the Lovász theta function~\cite{Lov:79}, which was introduced in 1979 and which can be computed as the optimal objective function value of a semidefinite program (SDP). Since its introduction, the Lovász theta function as well as its strengthenings have been extensively studied. One of the first refinements is the one proposed by Schrijver~\cite{Schrijver}. The Lovász theta function can also be tightened using a hierarchical approach. Such a hierarchy is a systematic procedure to strengthen a relaxation by adding additional variables or constraints, and, usually starting from the Lovász theta function at the first level of the hierarchy, it involves multiple levels, each providing a tighter relaxation than the previous one. The most prominent hierarchies based on semidefinite programming (SDP) are the ones from Sherali and Adams~\cite{SheraliAdamas}, Lovász and Schrijver~\cite{LovSch} and Lasserre~\cite{Lasserre}. Laurent investigated the computational application of these hierarchies to the stable set polytope in~\cite{Laurent}. In~\cite{AARW:15}, Adams, Anjos, Rendl, and Wiegele proposed the exact subgraph hierarchy for classes of NP-hard problems that are based on graphs and for which the projection of the problem onto a subgraph shares the same structure as the original problem. In particular, they considered the Max-Cut and the stable set problems. The approach from~\cite{AARW:15} for the latter starts by computing the Lovász theta function at the first level of the hierarchy. At level~$k$ of the hierarchy it considers all subsets of $k$ vertices and requires the exact subgraph constraint to be fulfilled, a constraint that ensures that for the corresponding subgraph the solution is exact. Computational studies of the exact subgraph hierarchy for the stable set, the Max-Cut, and graph coloring problems were performed by Gaar~\cite{Gaa:18} and by Gaar and Rendl~\cite{Gaa:20}. The results for the stable set problem presented in~\cite{Gaa:18} reveal that for some graphs, higher levels of the hierarchy do not necessarily yield tighter bounds on their stability numbers than the ones obtained at the first level of the hierarchy. One such example is the Paley graph on $61$ vertices, for which an improvement of the bound was realized only on the sixth level of the hierarchy. On the other hand, for certain graphs, a significant improvement of the bounds on their stability numbers was achieved already on the second level of the hierarchy. A question that naturally arises and that we address in this paper is whether, for all Paley graphs, there is no improvement of the bound on the stability number up to a certain level of the exact subgraph hierarchy and, if so, whether there is a theoretical explanation for it. So far, the stability numbers of Paley graphs have gained some attention in the literature. 
Several authors investigated closed-form bounds on stability numbers of Paley graphs already a long time ago, for example, Broere, Döman, and Ridley~\cite {Broere} and Cohen~\cite{Cohen}. The best closed-form upper bound for a certain class of Paley graphs was given only recently by Hanson and Petridis~\cite{Hanson}. Upper SDP bounds on the stability numbers of Paley graphs based on block-diagonal SDP hierarchies for 0/1 programming were investigated by Gvozdenovic, Laurent, and Vallentin~\cite{Gvozdenovic2009}. Furthermore, Magsino, Mixon, and Parshall~\cite{Magsino} argued that one can obtain bounds on the stability number of Paley graphs by considering certain subgraphs only. In particular, they showed that the Schrijver refinement of the Lovász theta function applied on these subgraphs yields bounds that are sometimes better than the ones proposed in~\cite{Hanson}. In this paper, we generalize the approach of~\cite{Magsino} for all vertex-transitive graphs, which eventually leads us to the introduction of a new hierarchy for their stable set problem. \textbf{Contribution and outline } In this paper, we consider upper bounds on the stability numbers of Paley graphs $P_q$ of order $q$ based on exact subgraph constraints from two different aspects. First, we systematically investigate various levels of the exact subgraph hierarchy for Paley graphs, starting with the construction of an optimal solution for the SDP to compute the first level---the Lovász theta function. Then, we examine whether the upper bounds for the stability numbers of Paley graphs on higher levels of the exact subgraph hierarchy improve. In particular, we identify a specific level~$\ell(q) = \left\lfloor\frac{\sqrt{q}+3}{2}\right\rfloor$ for which adding exact subgraph constraints on subgraphs of orders ${2, \ldots, \ell(q)}$ fails to provide better bounds on the stability numbers for the Paley graph $P_q$. Moreover, for certain Paley graphs, we prove that adding exact subgraph constraints on subgraphs of order~$\ell(q) + 1$ also fails to improve the Lovász theta function as an upper bound on the stability number. As a consequence, the exact subgraph hierarchy does not perform well at providing upper bounds on the stability number of a Paley graph $P_q$, as it only has the potential to improve the bound starting from level~$\ell(q)+1$ or even $\ell(q) + 2$. Second, to address this limitation, we present an alternative approach for the computation of upper bounds on the stability numbers of Paley graphs. For this purpose, we introduce the local exact subgraph hierarchy for the stable set problem for vertex-transitive graphs, which includes Paley graphs. To do so, we fix one vertex of the graph $G$ to be in a maximum stable set and apply the exact subgraph hierarchy to the local graph of $G$, i.e., to the subgraph of $G$ in which the remaining vertices of the maximum stable set can be. We prove that for any vertex-transitive graph, the bounds obtained from the local exact subgraph hierarchy are at least as good as those from the exact subgraph hierarchy. In our computational study, we demonstrate that this new hierarchy yields bounds on the stability numbers of Paley graphs that are significantly better than the bounds obtained from the exact subgraph hierarchy and that are even tighter than the ones proposed in~\cite{Hanson} and~\cite{Magsino}. 
Specifically, our computations reveal that for every Paley graph considered, the upper bound on the stability number at the second level of the local exact subgraph hierarchy is significantly tighter than at the first (and consequently $\ell(q)$-th) level of the exact subgraph hierarchy. This demonstrates that the local exact subgraph hierarchy effectively overcomes the issue of stagnant upper bounds for multiple levels of the exact subgraph hierarchy. The rest of this paper is organized as follows. In Section~\ref{Prerequisites}, we give an overview of the theoretical concepts used in this work, including Paley graphs, the Lovász theta function, and the exact subgraph hierarchy. In Section \ref{section_ESH_Paley}, we investigate various levels of the exact subgraph hierarchy for Paley graphs that do not yield improved upper bounds on their stability numbers. In Section \ref{section_ESH_local}, we introduce the local exact subgraph hierarchy for the stable set problem for vertex-transitive graphs and investigate its effectiveness in finding upper bounds on stability numbers of Paley graphs. We conclude with a short discussion of our results and open questions in Section~\ref{Conclusion}. \textbf{Notation and terminology } We close this section with some notation and terminology. We denote by $\mathbb{N}_0$ the natural numbers including zero. The vector of all-ones of length $n$ is denoted by $\mathds{1}_n$ and we write $\mathds{1}_{n \times n} =\mathds{1}_n \mathds{1}^T_n$ for the matrix of all-ones. Let $0_n$ be the zero matrix and $I_n$ the identity matrix of order~$n$. We denote by $e_i$ the column $i$ of the matrix $I_n$. Furthermore, we set \begin{align*} E_i &= e_i e_i^T \\ E_{ij} &= (e_i + e_j)(e_i + e_j)^T. \end{align*} If $a \in \mathbb{R}^n$, then $\Diag(a)$ denotes a $n \times n$ diagonal matrix with $a$ on the main diagonal, while if $X \in \mathbb{R}^{n \times n}$, then $\diag(X)$ is a vector of length $n$ containing the main diagonal of $X$. For basic graph theoretical concepts, we use the notion of graphs, the order, the complement graph $\overline{G}$ of a graph $G$, the induced subgraph, as well as the set of neighbors of a vertex of Diestel~\cite{Diestel}. We recall that a clique is a subset of vertices such that every two distinct vertices in the clique are adjacent. The cardinality of a maximum clique in $G$ is denoted by $\omega(G)$. Since a stable set in $G$ is a clique in $\overline{G}$, we have $\alpha(G) = \omega(\overline{G})$. A matrix $X \in \mathbb{R}^{n \times n}$ is circulant if $X_{i+1, j+1} = X_{ij}$ for all $i, j \in \mathbb{Z}_n = \{0, 1, \ldots, n - 1\}$. A graph $G$ on $n$ vertices is circulant if there exists a labeling of its vertices with $\mathbb{Z}_n$ such that its adjacency matrix is circulant. If $G$ is circulant, its complement $\overline{G}$ is also circulant. Furthermore, a graph $G$ is vertex-transitive if its automorphism group acts transitively on $V$. Analogously, a graph $G$ is edge-transitive if its automorphism group acts transitively on $E$. A regular graph $G$ that is neither complete nor empty is said to be strongly regular with parameters $(n, r, a, c)$ if it is $r$-regular, if every pair of adjacent vertices has $a$ common neighbors, and if every pair of distinct non-adjacent vertices has $c$ common neighbors. See Godsil and Royle~\cite{Godsil} for more information on these definitions. \section{Theoretical background}\label{Prerequisites} In this section, we provide the theoretical background required for this paper. 
In Section~\ref{Paley_graphs}, we introduce Paley graphs and discuss their properties, along with established bounds on their stability numbers. The theory concerning the Lovász theta function is briefly elaborated in Section~\ref{semidefinite_relaxation}. Lastly, we describe the exact subgraph hierarchy in Section~\ref{method_ESH}. \subsection{Paley graphs}\label{Paley_graphs} Paley graphs are named after the English mathematician Raymond Edward Alan Christopher Paley (1907-1933) and are closely related to the Paley construction for constructing Hadamard matrices, see~\cite{Paley}. Paley graphs were independently introduced by Sachs in~\cite{Sachs} and by Erd{\"o}s and R{\'e}nyi in~\cite{Erdos}. Some aspects of the history of the Paley graphs can be found in~\cite{Jones}. Paley graphs are defined in the following way. Let $p$ be a prime, $s$ a positive integer, and let $q$ be a prime power, i.e.\ $q = p^s$, such that $q \equiv 1 \pmod{4}$. Then the Paley graph has as vertex set the elements of the finite field $\mathbb{F}_q = \{0, \ldots, q - 1\}$, with two vertices being adjacent if and only if their difference is a nonzero square in $\mathbb{F}_q$. The congruence condition on $q$ implies that $-1$ is a square in $\mathbb{F}_q$; hence, the graph is undirected. For given $q$, we denote such a Paley graph by $P_q$ and write $V_q = \{0, \ldots, q - 1\}$ and $E_q$ for the set of vertices and edges, respectively. Paley graphs have been extensively studied due to their various properties; see, for instance,~\cite{Brouwer}. Among others, Paley graphs are quasi-random graphs, as shown in~\cite{Bollobas2001}, and their clique numbers play an important role in Ramsey theory; see, for instance,~\cite{Xu}. In the following text, we focus on the particular characteristics of Paley graphs necessary for determining and computing their stability numbers. The first important property is that Paley graphs are self-complementary, as shown in~\cite{Bollobas2001}. This leads to the following observation. \begin{obs}\label{alpha_equals_clique} Let $P_q$ be a Paley graph. Then $\alpha(P_q) = \omega(P_q)$. \end{obs} Furthermore, Paley graphs are vertex and edge transitive, and they are strongly regular graphs with parameters $(q, \frac{q-1}{2}, \frac{q-5}{4}, \frac{q-1}{4})$, as also demonstrated in \cite{Bollobas2001}. A property of strongly regular graphs is that the adjacency matrix has only three different eigenvalues, as shown in~\cite{Godsil}. In particular, if $G$ is a strongly regular graph with parameters $(n, r, a, c)$, then the eigenvalues of the matrix $A$ are $r$, as well as $\frac{1}{2}\Bigl((a-c) \pm \sqrt{(a-c)^2 + 4(r-c)}\Bigr)$. In the context of Paley graphs, this leads to the following observation. \begin{obs}\label{eigenvalues_Paley} The adjacency matrix $A$ of a Paley graph $P_q$ has eigenvalues $\frac{q-1}{2}$ and $\frac{-1 \pm \sqrt{q}}{2}$. \end{obs} Additionally, since Paley graphs are self-complementary, we note the following. \begin{obs}\label{eigenvalue_Paley_complement} The adjacency matrix $\overline{A}$ of $\overline{P}_q$ has the same eigenvalues as the adjacency matrix $A$ of $P_q$. \end{obs} Finally, we note that if $q$ is a prime, then $P_q$ is circulant and the subgraph of $P_q$ induced by the neighborhood of the vertex $0$ is also circulant. For further details, see~\cite {Magsino}. Next, we investigate bounds on the stability numbers of Paley graphs from the literature. 
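Before turning to these bounds, we illustrate the above definitions with a minimal Python sketch, assuming that $q$ is prime (so that $\mathbb{F}_q$ can be identified with the integers modulo $q$): it constructs the adjacency matrix of $P_q$ and numerically confirms, for $q = 13$, the regularity and the eigenvalues stated in Observation~\ref{eigenvalues_Paley}. The helper name \texttt{paley\_graph\_adjacency} is purely illustrative.
\begin{verbatim}
import numpy as np

def paley_graph_adjacency(q):
    # Adjacency matrix of the Paley graph P_q for a prime q = 1 (mod 4).
    # (For prime powers q = p^s with s > 1 one would work in the finite
    # field F_q instead of the integers modulo q.)
    squares = {(x * x) % q for x in range(1, q)}  # nonzero squares mod q
    A = np.zeros((q, q), dtype=int)
    for i in range(q):
        for j in range(q):
            if i != j and (i - j) % q in squares:
                A[i, j] = 1
    return A

q = 13
A = paley_graph_adjacency(q)

# P_q is (q-1)/2-regular ...
assert all(A[i].sum() == (q - 1) // 2 for i in range(q))

# ... and its adjacency matrix has exactly the three distinct eigenvalues
# (q-1)/2 and (-1 +/- sqrt(q))/2.
print(sorted({round(float(v), 8) for v in np.linalg.eigvalsh(A)}))
# [-2.30277564, 1.30277564, 6.0]
\end{verbatim}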
In~\cite{Cohen}, Cohen investigated lower bounds on the stability numbers of Paley graphs and showed that for any $P_q$ with $q = p^s$ for some prime $p$ and some positive integer $s$, \begin{align*} \alpha(P_q) \geq \frac{p}{p-1} \left( \frac{\frac{1}{2} \log q - 2 \log \log q}{\log 2} + 1 \right) \end{align*} holds. Regarding upper bounds on the stability numbers of Paley graphs, due to their regularity, one can apply Hoffman's bound~\cite{Hoffman} to derive a closed-form upper bound. In general, let $G$ be an $r$-regular graph on $n$ vertices, and let $\tau$ denote the smallest eigenvalue of the adjacency matrix of $G$. Then, according to Hoffman's bound, \begin{align*} \alpha(G) \leq \frac{n}{1 - \frac{r}{\tau}}. \end{align*} In the case of a Paley graph $P_q$, we have that $r = \frac{q-1}{2}$ and $\tau = \frac{-1 - \sqrt{q}}{2}$, according to Observation~\ref{eigenvalues_Paley}. Hence, \begin{align}\label{Hoffman} \alpha(P_q) \leq \sqrt{q} \end{align} for every $P_q$. If $q$ is a square, then the upper bound~\eqref{Hoffman} is tight, as shown in~\cite{Broere}. The upper bound for cases when $q$ is not a square and $q \neq 5$ was improved by Maistrelli and Penman~\cite{Maistrelli} to $\alpha(P_q) \leq \sqrt{q - 4}$. Recently, Hanson and Petridis~\cite{Hanson} considered Paley graphs $P_q$ where $q$ is a prime and gave the even better closed-form upper bound \begin{align*} \alpha(P_q) \leq b_{H}(P_q) = \frac{\sqrt{2q-1}+1}{2}, \end{align*} where equality holds for $q \in \{5, 13, 41\}$. \subsection{Lovász theta function}\label{semidefinite_relaxation} In 1979, Lovász~\cite{Lov:79} introduced the graph parameter $\vartheta(G)$, now known as the Lovász theta function, which yields an upper bound on the stability number of $G$ and which can be computed as the optimal objective function value of an SDP in polynomial time to arbitrary fixed precision, as shown in~\cite{VanBoyd}. There are several different formulations for the Lovász theta function; a study of different variants can be found in~\cite{GALLI2017159}. In~\cite{LovSch}, Lovász and Schrijver showed that for a graph $G$ on $n$ vertices the formulations \begin{align} \vartheta(G) ~&=~ \max \Biggl\{\mathds{1}^T_n x \colon \begin{pmatrix} X & x \\ x^T & 1 \end{pmatrix} \succeq 0, ~ \diag(X) = x, ~ X_{ij} = 0 ~~\forall \{i,j\} \in E\Biggr\} \label{theta_1} \\ &=~ \max \{\langle \mathds{1}_{n \times n}, X \rangle \colon X \succeq 0, ~ \tr(X) = 1, ~ X_{ij} = 0 ~~\forall \{i,j\} \in E\} \label{theta_2} \end{align} are equivalent. The relationship between the formulations~\eqref{theta_1} and~\eqref{theta_2} has been studied by several authors. In~\cite[Lemma~2.8 and Theorem~2.9]{Gru:03}, Gruber and Rendl showed the following statements. \begin{lem}~\label{opt_1} Let $(x^*, X^*)$ be a feasible solution of~\eqref{theta_1} with $\tr(X^*) > 0$. Then $X^\prime = \frac{1}{\tr(X^*)}{X^*}$ is a feasible solution of~\eqref{theta_2}. \end{lem} \begin{lem}\label{opt_2} Let $X^\prime$ be an optimal solution of~\eqref{theta_2}, and let $Y = \langle \mathds{1}_{n \times n}, X^\prime \rangle X^\prime$. Then $(\diag(Y), Y)$ is an optimal solution of~\eqref{theta_1}. \end{lem} Another property of the formulation~\eqref{theta_2} studied in the original work by Lovász~\cite[Corollary~2]{Lov:79} is the relationship between the Lovász theta function of a graph $G$ and its complement $\overline{G}$. In this context, Lovász proved the following. \begin{lem}\label{theta_complement} For any graph $G$ on $n$ vertices \begin{equation*} \vartheta(G) \vartheta(\overline{G}) \geq n.
\end{equation*} \end{lem} Additionally, if $G$ is vertex-transitive, the following theorem, also proven in~\cite[Theorem~8]{Lov:79}, holds. \begin{thm}\label{theta_vertex_transitive} If $G$ is a vertex-transitive graph on $n$ vertices, then \begin{equation*} \vartheta(G) \vartheta(\overline{G}) = n. \end{equation*} \end{thm} \subsection{Exact subgraph hierarchy}\label{method_ESH} We now recall the definitions of exact subgraph constraints and the exact subgraph hierarchy, whose aim is to strengthen the Lovász theta function $\vartheta(G)$ towards $\alpha(G)$. We follow the presentation of Gaar~\cite{GaarVersionsESH}. For a graph $G = (V,E)$ with $|V|=n$ and $V = \{0, \dots, n-1\}$, the squared stable set polytope $\STAB^{2}(G)$ is defined as \begin{align*} \STAB^{2}(G) &= \conv\left\{ss^{T}: s \in \{0,1\}^{n}, \, s_{i}s_{j} = 0 ~ \forall \{i,j\} \in E \right\}, \end{align*} i.e., $\STAB^{2}(G)$ is the convex hull of the outer products of all incidence vectors of stable sets in $G$. It turns out that when the constraint $X \in \STAB^{2}(G)$ is added to the SDP $\eqref{theta_1}$ to compute the Lovász theta function $\vartheta(G)$, the optimal objective function value is $\alpha(G)$, see Gaar~\cite{GaarVersionsESH}. Unfortunately, the resulting SDP is in general too large for computations, so the idea is to include the constraint $X \in \STAB^{2}(G)$ only partially for smaller subgraphs. In particular, for a subset of the vertices $I \subseteq V$, the subgraph of $G$ that is induced by~$I$ is denoted by $G_I$. Moreover, the submatrix of $X\in\mathbb{R}^{n\times n}$ which is indexed by~$I$ is denoted by $X_I=(X_{i,j})_{i,j\in I}$. Then the exact subgraph constraint (ESC) for $G_I$ is defined as $ X_I \in \STAB^{2}(G_I). $ Furthermore, let $J$ be a set of subsets of $V$. Then $z_{J}(G)$ is the optimal objective function value of~\eqref{theta_1} with the ESC added for every subgraph induced by a set in $J$, so \begin{equation*} z_{J}(G) = \max \Biggl\{\mathds{1}_n^T x : \begin{pmatrix} X & x \\ x^T & 1 \end{pmatrix} \succeq 0, \, \diag(X) = x, \, X_{ij} = 0 ~\forall \{i,j\} \in E, \, X_I \in \STAB^{2}(G_I) ~ \forall I \in J\Biggr\}. \end{equation*} Finally, the $k$-th level of the exact subgraph hierarchy (ESH) is defined as $z_{k}(G) = z_{J_k}(G)$ for $J_k = \{I \subseteq V: \vert I \vert = k\}$, i.e., it contains the ESC for every subgraph of $G$ of order~$k$. For $k > n$, we define $z_k(G) = z_n(G)$ as a logical continuation of the ESH. The ESH goes back to Adams, Anjos, Rendl and Wiegele~\cite{AARW:15}, who called the ESCs for subgraphs of order~$k$ the $k$-projection constraints, and who considered the hierarchy not only for the stable set problem, but also for the Max-Cut problem. Furthermore, \begin{align} \label{ineq:relationship_z} \alpha(G) = z_n(G) \leq \cdots \leq z_{k+1}(G) \leq z_k(G) \leq \cdots \leq z_1(G) = z_0(G) = \vartheta(G) \end{align} holds, so the higher the considered level~$k$ of the ESH is, the better the bounds on the stability number are. Later on, Gaar~\cite{Gaa:18} and Gaar and Rendl~\cite{Gaa:20} considered the option of not including all ESCs of subgraphs of a certain order into \eqref{theta_1}, but of including the ESCs only for some cleverly chosen subgraphs. They provided a framework to obtain strengthened upper bounds on the stability number by iteratively computing $z_{J}(G)$ by solving the corresponding SDP, searching for violated ESCs and updating $J$.
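As a small illustration of the starting point of this hierarchy, the following sketch computes $\vartheta(G) = z_1(G) = z_0(G)$ via the formulation~\eqref{theta_2} for a small graph. It is a minimal example only, assuming Python with the \texttt{cvxpy} package and an SDP-capable solver such as the bundled SCS; the helper name \texttt{lovasz\_theta} is ours and is unrelated to the implementations cited above. The ESCs of the higher levels would enter as additional constraints on the submatrices $X_I$ in~\eqref{theta_1}.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def lovasz_theta(A):
    """Lovasz theta function via formulation (theta_2):
    maximize <J, X> s.t. X psd, tr(X) = 1, X_ij = 0 for every edge {i, j}."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0
                    for i in range(n) for j in range(i + 1, n) if A[i, j] == 1]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return problem.value

# P_5 is the 5-cycle; its theta function is sqrt(5), approximately 2.236.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])
print(lovasz_theta(A))
\end{verbatim}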
The huge advantage of the ESH over other SDP-based hierarchies that start from the Lovász theta function $\vartheta(G)$ is that the size of the square matrix that needs to be positive semidefinite remains $n+1$ on each level of the ESH, see Gaar~\cite{GaarVersionsESH} for more details. Furthermore, the bounds obtained with the help of ESCs were utilized within a branch-and-bound algorithm to solve the stable set problem to optimality by Gaar, Siebenhofer and Wiegele~\cite{GaarSiebenhoferWiegeleStabBaB}. Moreover, Gaar~\cite{GaarVersionsESH} investigated including the ESCs not starting from the SDP $\eqref{theta_1}$ to compute the Lovász theta function $\vartheta(G)$, but beginning from the alternative SDP $\eqref{theta_2}$. \section{The exact subgraph hierarchy for Paley graphs}\label{section_ESH_Paley} In this section, we explore the bounds on the stability numbers of Paley graphs obtained using the ESH. We start by constructing an optimal solution for the SDP on the first level of the ESH, i.e., for the SDP to compute the Lovász theta function, in Section~\ref{optimal_solution_for_Paley}. In Section~\ref{subsection_ESH_Paley}, for each Paley graph $P_q$ we identify a specific level~$\ell(q)$ of the ESH for which adding ESCs on subgraphs of orders $\{2, \ldots, \ell(q)\}$ fails to provide better bounds on the stability numbers. A detailed computational investigation is presented in Section~\ref{comp_justification}. Finally, in Section~\ref{sec_relationship_alpha}, we show that for certain Paley graphs, including ESCs for subgraphs of order~$\ell(q) + 1$ also fails to improve the Lovász theta function as an upper bound on $\alpha(P_q)$. \subsection{Optimal solution for SDP to compute the Lovász theta function}\label{optimal_solution_for_Paley} In this section, we demonstrate that for Paley graphs, an optimal solution for the SDP to compute the Lovász theta function can be explicitly constructed. To achieve this result, we utilize the properties of Paley graphs described in Section~\ref{Paley_graphs} along with the properties of the Lovász theta function outlined in Section~\ref{semidefinite_relaxation}. We start by recalling that Paley graphs are vertex-transitive. Thus, from Theorem~\ref{theta_vertex_transitive} we have that $\vartheta(P_q) \vartheta(\overline{P_q}) = q$. Additionally, Paley graphs are self-complementary. This leads to the following important observation about the Lovász theta function for Paley graphs. \begin{obs}\label{theta_paley_graphs} Let $P_q$ be a Paley graph. Then $\vartheta(P_q) = \sqrt{q}$. \end{obs} Hence, for every $q$, we know the value of $\vartheta(P_q)$. As previously mentioned, in~\cite{Broere} it was proved that, if $q$ is a square, then $\alpha(P_q) = \sqrt{q}$. Thus, in this case $\vartheta(P_q)$ and $\alpha(P_q)$ coincide. As a result, the Lovász theta function does not need to be improved as a bound on $\alpha(P_q)$ because the following holds. \begin{obs}\label{theta_exact_q_square} Let $P_q$ be a Paley graph such that $q$ is a square. Then \begin{align*} \alpha(P_q) = \sqrt{q} = \vartheta(P_q) = z_{k}(P_q) \end{align*} holds for all $k \leq q$. \end{obs} For all values of $q$, by combining Observation~\ref{theta_paley_graphs} with Lemmas~\ref{opt_1} and \ref{opt_2}, we are able to construct an optimal solution for~\eqref{theta_1} as follows. \begin{lem}\label{optimal_solution} Let $P_q$ be a Paley graph.
Let $x^* \in \mathbb{R}^q$ and $X^* \in \mathbb{R}^{q \times q}$ with \begin{subequations}\label{solution} \begin{alignat}{3} x^*_i &= \frac{1}{\sqrt{q}} &&\quad \forall i \in V_q \label{sol_1}\\ X^*_{ii} & = x^*_{i} &&\quad \forall i \in V_q \label{sol_2}\\ X^*_{ij} &= 0 &&\quad \forall \{i,j\} \in E_q \label{sol_3} \\ X^*_{ij} &= \frac{2}{q + \sqrt{q}} &&\quad \forall \{i,j\} \notin E_q \label{sol_4}. \end{alignat} \end{subequations} Then $(x^*, X^*)$ is an optimal solution of~\eqref{theta_1}. \end{lem} \begin{proof} First, we set $X^\prime = \frac{1}{\tr(X^*)}X^* = \frac{1}{\sqrt{q}}X^*$, and show that $X^\prime$ is a feasible solution for~\eqref{theta_2}. From~\eqref{solution} we have that the constraints $\tr(X^\prime) = 1$ as well as $X^\prime_{ij} = 0$ for all edges $\{i,j\}$ are satisfied. What is left to show is that $X^\prime \succeq 0$. Since $\frac{1}{\sqrt{q}} > 0$, it is sufficient to show that $X^* \succeq 0$. Let $D = \Diag(x^*)$. Then, we can write the matrix $X^*$ as \begin{align*} X^* = D + \frac{2}{q + \sqrt{q}} \overline{A}, \end{align*} where $\overline{A}$ is the adjacency matrix of the complement of $P_q$. From Observations~\ref{eigenvalues_Paley} and~\ref{eigenvalue_Paley_complement} we know that $\overline{A}$ has eigenvalues $\frac{q-1}{2}$ and $\frac{-1 \pm \sqrt{q}}{2}$. Since all entries of $x^*$ are equal to $\frac{1}{\sqrt{q}}$, we have $D = \frac{1}{\sqrt{q}} I_q$, so $X^*$ has the same eigenvectors as $\overline{A}$ and each eigenvalue of $X^*$ is obtained by adding $\frac{1}{\sqrt{q}}$ to $\frac{2}{q + \sqrt{q}}$ times an eigenvalue of $\overline{A}$. A short computation then shows that the eigenvalues of the matrix $X^*$ are $1$, $\frac{2}{1 + \sqrt{q}}$ and $0$. Hence, the matrix $X^*$ is positive semidefinite, and thus $X^\prime$ is feasible for~\eqref{theta_2}. Next, we note that every vertex $i \in V_q$ has $\frac{q - 1}{2}$ neighbors as well as $\frac{q - 1}{2}$ non-neighbors. Therefore, the sum of the $i$-th row of the matrix $X^*$ is \begin{align*} \frac{1}{\sqrt{q}} + \frac{q - 1}{2} \frac{2}{q + \sqrt{q}} = 1. \end{align*} Since there are $q$ rows in the matrix $X^*$, the sum of all elements in $X^*$ is $q$. Hence, the objective function value of~\eqref{theta_2} of the solution $X^\prime$ is \begin{align*} \langle \mathds{1}_{q \times q}, X^\prime \rangle = \sum_{i=1}^{q} \sum_{j = 1}^q X^\prime_{ij} = \frac{1}{\sqrt{q}}\sum_{i = 1}^q \sum_{j = 1}^q X^*_{ij} = \frac{1}{\sqrt{q}} q = \sqrt{q}. \end{align*} From Observation~\ref{theta_paley_graphs} we know that $\vartheta(P_q) = \sqrt{q}$. Therefore, $X^\prime$ is an optimal solution of~\eqref{theta_2} and since $X^*~=~\langle \mathds{1}_{q \times q}, X^\prime \rangle X^\prime$ we know from Lemma~\ref{opt_2} that $(x^*, X^*)$ is an optimal solution of~\eqref{theta_1}. \end{proof} \subsection{No improvement up to a certain level}\label{subsection_ESH_Paley} We now investigate bounds for the stability numbers of Paley graphs that can be obtained by using the ESH approach as described in Section~\ref{method_ESH}. To start, we consider the second and the third levels of the ESH, i.e., $z_k(P_q)$ for $k = 2$ and $k = 3$.
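Before doing so, we give a short numerical sanity check of the construction from Lemma~\ref{optimal_solution}, on which the following arguments rely. The sketch below is a minimal illustration only, assuming Python with NumPy and a prime $q$; the helper name \texttt{check\_explicit\_solution} is ours. It builds $(x^*, X^*)$ for $P_{13}$, checks the feasibility conditions numerically, and confirms that the objective function value of~\eqref{theta_1} equals $\sqrt{q}$.
\begin{verbatim}
import numpy as np

def check_explicit_solution(q):
    """Numerically verifies the explicit solution (x*, X*) for a prime q:
    X* is psd, X*_ij = 0 on edges, and the objective 1^T x* equals sqrt(q)."""
    squares = {(x * x) % q for x in range(1, q)}
    A = np.array([[1 if i != j and (i - j) % q in squares else 0
                   for j in range(q)] for i in range(q)])
    Abar = 1 - A - np.eye(q, dtype=int)          # complement of P_q
    xstar = np.full(q, 1 / np.sqrt(q))
    Xstar = np.diag(xstar) + (2 / (q + np.sqrt(q))) * Abar
    eigenvalues = np.linalg.eigvalsh(Xstar)      # should be 1, 2/(1+sqrt(q)), 0
    psd = eigenvalues.min() > -1e-9
    zero_on_edges = np.all(Xstar[A == 1] == 0)
    return psd, zero_on_edges, xstar.sum(), np.sqrt(q)

print(check_explicit_solution(13))   # (True, True, 3.6055..., 3.6055...)
\end{verbatim}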
For an arbitrary graph $G$, the ESC for a subgraph $G_I$ with $I = \{i, j\}$ (and thus of order $k = 2$) is equivalent to \begin{subequations}\label{ESC_2} \begin{align} X_{ij} &\geq 0 \label{2_1} \\ X_{ij} &\leq X_{ii} \label{2_2} \\ X_{ij} &\leq X_{jj} \label{2_3} \\ X_{ii} + X_{jj} &\leq 1 + X_{ij} \label{2_4}, \end{align} \end{subequations} while the ESC for a subgraph $G_I$ with $I = \{i, j, \ell \}$ (and thus of order $k = 3$) is equivalent to \begin{subequations} \begin{align} X_{ij} + X_{i\ell}&\leq X_{ii} + X_{j\ell} \label{3_1} \\ X_{ij} + X_{j\ell}&\leq X_{jj} + X_{i\ell} \label{3_2} \\ X_{i\ell} + X_{j\ell}&\leq X_{\ell \ell} + X_{ij} \label{3_3} \\ X_{ii} + X_{jj} + X_{\ell\ell} &\leq 1 + X_{ij} + X_{i\ell} + X_{j\ell} \label{3_4}, \end{align} \end{subequations} as shown in~\cite{GaarVersionsESH}. The relationship between $\vartheta(P_q)$ and $z_k(P_q)$ for $k = 2$ and $k = 3$ can be derived directly from the optimal solution of the SDP for computing the Lovász theta function presented in Lemma~\ref{optimal_solution}, as the next two lemmas show. \begin{lem}\label{level_2} Let $P_q$ be a Paley graph. Then $z_2(P_q) = \vartheta(P_q)$. \end{lem} \begin{proof} Let $(x^*, X^*)$ be as in Lemma~\ref{optimal_solution} and let $I = \{i,j\} \subseteq V_q$. Then the inequality~\eqref{2_1} is satisfied, as $X^*_{ij} \geq 0$ by the definition of $X^*$. Furthermore, for all $q$ we have that $\frac{2}{q + \sqrt{q}} \leq \frac{1}{\sqrt{q}}$ and $ 0 \leq \frac{1}{\sqrt{q}}$, so the inequalities~\eqref{2_2} and~\eqref{2_3} also hold for $X^*$. Finally, since $q \geq 5$, it follows that $X^*_{ii} + X^*_{jj} = x^*_i + x^*_j = \frac{2}{\sqrt{q}} \leq 1$, so the inequality~\eqref{2_4} is satisfied. Hence, adding ESCs for subgraphs of order~$2$ will not improve the Lovász theta function. \end{proof} \begin{lem}\label{level_3} Let $P_q$ be a Paley graph with $q\geq 9$. Then $z_3(P_q) = \vartheta(P_q)$. \end{lem} \begin{proof} Let $(x^*, X^*)$ be as in Lemma~\ref{optimal_solution} and let $I = \{i,j,\ell\} \subseteq V_q$. Then, for the values of $x^*$ and $X^*$ given in Lemma~\ref{optimal_solution}, the inequality \begin{align*} X^*_{ij} + X^*_{i\ell} \leq \frac{4}{q + \sqrt{q}} \leq \frac{1}{\sqrt{q}} \leq x^*_i + X^*_{j\ell} = X^*_{ii} + X^*_{j\ell} \end{align*} holds if $q \geq 9$. Therefore,~\eqref{3_1},~\eqref{3_2} and~\eqref{3_3} hold for $P_q$ with $q \geq 9$. Furthermore, \begin{align*} X^*_{ii} + X^*_{jj} + X^*_{\ell \ell} = x^*_i + x^*_j + x^*_\ell = \frac{3}{\sqrt{q}} \leq 1 \leq 1 + X^*_{ij} + X^*_{i\ell} + X^*_{j\ell} \end{align*} holds if $q \geq 9$, so~\eqref{3_4} holds. As a result, $z_3(P_q) = \vartheta(P_q)$ holds for $P_q$ with $q \geq 9$. \end{proof} From Lemmas~\ref{level_2} and \ref{level_3}, we note that whether $z_k(P_q) = \vartheta(P_q)$ holds can be deduced from the optimal solution $(x^*, X^*)$ constructed in Lemma~\ref{optimal_solution}, and that the answer strongly depends on the value of $q$. We now consider the ESH at an arbitrary level~$k$ for $P_q$ with $q \geq 9$. Since listing all inequalities describing the ESCs explicitly becomes impractical for general $k$, we instead work directly with the definition of the squared stable set polytopes of subgraphs of arbitrary order~$k$. This approach enables us to determine levels~$k$ of the ESH for which $z_k(P_q) = \vartheta(P_q)$. We start by giving the following statement. \begin{lem}\label{level_k} Let $P_q$ be a Paley graph with $q\geq 9$, let $k \in \mathbb{N}$, let $J_k = \{I \subseteq V_q: \vert I \vert = k\}$ and let $(x^*, X^*)$ be as in Lemma~\ref{optimal_solution}.
If $k \leq \frac{\sqrt{q} + 3}{2}$, then $X^{*}_I \in \STAB^{2}(P_{q_I})$ for all $I \in J_k$. \end{lem} \begin{proof} Let $k \leq \frac{\sqrt{q} + 3}{2}$ and let $I =\{v_1, \ldots, v_k\} \in J_k$, i.e., $I \subseteq V_q$ with $\vert I \vert = k$. We have to show that $X^*_I \in \STAB^2(P_{q_I})$ holds. To do so, it is enough to show that there exist non-negative $\lambda$, $\mu_i$ for all $1 \leq i \leq k$ and $\nu_{ij}$ for all $1\leq i < j \leq k$ such that \begin{align} \lambda + \sum_{i = 1}^{k} \mu_i + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} = 1 \label{sum_is_one} \end{align} and \begin{align} X^*_I = \lambda 0_k + \sum_{i = 1}^{k} \mu_i E_i + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} E_{ij} \label{matrix_X} \end{align} hold, and where $\nu_{ij} = 0$ for all $1\leq i < j \leq k$ with $\{v_i,v_j\} \in E_q$. We set \begin{alignat}{3} \nu_{ij} &= X^*_{v_i,v_j} &&\quad 1\leq i < j \leq k, \nonumber \\ \mu_i &= x^*_{v_i} - \sum_{\substack{j=1 \\ j \neq i}}^{k} \nu_{{ij}} && \quad 1 \leq i \leq k, \label{coefficient_mu}\\ \lambda &= 1 - \sum_{i = 1}^{k} \mu_i - \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij}, \nonumber \end{alignat} so~\eqref{sum_is_one} is clearly satisfied by construction. Furthermore, \eqref{matrix_X} and $\nu_{ij} = 0$ for all $1\leq i < j \leq k$ with $\{v_i,v_j\} \in E_q$ hold because of~\eqref{sol_2} and~\eqref{sol_3}, respectively. From~\eqref{solution} we have that $X^*_{v_i,v_j}$ can either be equal to $0$ or equal to $\frac{2}{q + \sqrt{q}}$; therefore, all $\nu_{ij}$ are non-negative. Regarding $\mu_i$ computed with~\eqref{coefficient_mu}, for a fixed $i$ there are at most $k-1$ values $\nu_{ij}$ equal to $\frac{2}{q + \sqrt{q}}$. Thus, \begin{align*} \mu_i \geq \frac{1}{\sqrt{q}} - (k-1)\frac{2}{q + \sqrt{q}} = \frac{\sqrt{q} + 3 - 2k}{q + \sqrt{q}}, \end{align*} and since $k \leq \frac{\sqrt{q} + 3}{2}$, all $\mu_i$ are non-negative. What is left to show is that $\lambda$ is non-negative. The matrix $X^*_I$ is symmetric, so $X^*_{v_i,v_j} = X^*_{v_j,v_i}$ for all $1 \leq i < j \leq k$. Furthermore, we have that $k \leq \frac{\sqrt{q} + 3}{2}$ and $q\geq 9$. Therefore, \begin{align*} \sum_{i = 1}^{k} \mu_i + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} &= \sum_{i = 1}^{k} x^*_{v_i} - \sum_{i = 1}^k \sum_{\substack{j=1 \\ j \neq i}}^{k} X^*_{v_i,v_j} + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} X^*_{v_i,v_j} \\ &= \sum_{i = 1}^{k} x^*_{v_i} - 2\sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} X^*_{v_i,v_j} + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} X^*_{v_i,v_j} \\ &= \sum_{i = 1}^{k} x^*_{v_i} - \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} X^*_{v_i,v_j} \leq \sum_{i = 1}^{k} x^*_{v_i} = k\frac{1}{\sqrt{q}} \leq 1. \end{align*} Hence, $\lambda$ is non-negative, which finishes the proof. \end{proof} Note that in the proof of Lemma~\ref{level_k} we show that $X^*_I$ is in $\STAB^{2}(P_{q_I})$ by showing that it is a convex combination of outer products of incidence vectors of stable sets of at most $2$ vertices. We do not utilize the outer product of incidence vectors of stable sets of more vertices, even though they might exist in $P_q$. The following theorem is a direct consequence of Lemmas~\ref{level_2} and~\ref{level_k}. \begin{thm}\label{thm_level} Let $P_q$ be a Paley graph and let $k \in \mathbb{N}_0$ with $k \leq \frac{\sqrt{q} + 3}{2}$. Then $z_k(P_q) = \vartheta(P_q)$. \end{thm} \begin{proof} If $q = 5$, then $\left\lfloor\frac{\sqrt{q}+3}{2}\right\rfloor = 2$ and $z_2(P_q) = \vartheta(P_q)$ due to Lemma~\ref{level_2}. 
Thus, the statement holds because of \eqref{ineq:relationship_z}. If $q \geq 9$, then the statement follows directly from Lemma~\ref{level_k} and \eqref{ineq:relationship_z}. \end{proof} For a Paley graph $P_q$, we define \begin{align*} \ell(q) = \left\lfloor\frac{\sqrt{q}+3}{2}\right\rfloor. \end{align*} Clearly, it follows from Theorem~\ref{thm_level} that adding ESCs for subgraphs of orders $\{2, \dots, \ell(q)\}$ does not improve the Lovász theta function as a bound on $\alpha(P_q)$, so the next corollary holds. \begin{cor}\label{cor_level} Let $P_q$ be a Paley graph. Then $z_k(P_q) = \vartheta(P_q)$ holds for all $k \in \mathbb{N}_0$ with $k \leq \ell(q)$. \end{cor} So far, we do not know anything about adding ESCs for subgraphs of order~$\ell(q) +1$, except in the case when $q$ is a square, which is covered by Observation~\ref{theta_exact_q_square}. For some Paley graphs, we might have that $z_{\ell(q)}(P_q) = z_{\ell(q)+1}(P_q)$, while for others, adding ESCs for subgraphs of order~$\ell(q) + 1$ might yield bounds that are tighter than the Lovász theta function. In the next section, we will perform computational experiments and empirically determine levels for which imposing ESCs improves the Lovász theta function of a Paley graph $P_q$. Additionally, we will confirm computationally that adding ESCs for subgraphs of order up to $\ell(q)$ does not yield tighter bounds. \subsection{Computational study}\label{comp_justification} Next, we want to investigate if there is an improvement of the Lovász theta function as a bound on $\alpha(P_q)$ when including ESCs for subgraphs of orders $\ell(q) + 1$ and higher. To do so, we first conduct a computational study and compute upper bounds on $z_k(P_q)$ for $k \in \{2, \dots, 10\}$ and $q < 200$. Note that we do not consider any $q$ that is a square in this computational study because we already know what happens on all levels of the ESH due to Observation~\ref{theta_exact_q_square}. In order to compute upper bounds on $z_k(P_q)$ with MATLAB R2022b we use the source code~\href{https://gitlab.com/egaar/tightenthetafunction}{https://gitlab.com/egaar/tightenthetafunction}, which accompanies~\cite{Gaa:18} and led to the publications~\cite{Gaa:20,GaarSiebenhoferWiegeleStabBaB}. We call the function \texttt{tightenThetaFunction()} for the \texttt{adjacencyMatrix} of the Paley graph $P_q$, which iteratively adds violated ESCs for subgraphs of order~$k$ in each of $\texttt{nrIterations} = 10$ cycles and thus computes an upper bound on $z_k(P_q)$ by computing $z_J(P_q)$ for growing sets $J$ of subgraphs of order~$k$. To find violated ESCs to add to $J$, in every cycle the function performs a simulated annealing heuristic $\texttt{nrExecutionsSimAnn} = 4000$ times, where it tries to minimize the inner product between $X_I$ and several different separation matrices (stored in $\texttt{usedSeparationMatricesPerRun}$) determined with the function \texttt{setupSeparation\-Matrices()} with all ($\texttt{mySMVersion} = 7$, $\texttt{usedMethodToChangeNrSimAnnPerSepMat} = 1$) possible separation matrices from the function \texttt{setupPossibleSeparationMatrices()}, which include extreme copositive matrices with entries in $\{-1,0,1\}$, facet-inducing matrices of the squared stable set polytope and random matrices; see~\cite{Gaa:18,Gaa:20} for more details.
Then the code includes the $\texttt{maxNr\-NewViolatedSubgraphsPerIteration} = 1000$ most violated ESCs found by adding one inequality constraint inducing a separating hyperplane as described in~\cite{GaarSiebenhoferWiegeleStabBaB} (\texttt{usedCutting\-Planes\-PerRun} $= 3$). Each SDP is solved with MOSEK~10.0~\cite{mosek} ($\texttt{usedSolverPerRun} = 7$). All computations were done on an Intel(R) Core(TM) i7-1260P CPU @ 2.10GHz with 64GB of RAM, running Windows 10. The code and the instances are available as ancillary files from the arXiv page of this paper. The results, shown in Table~\ref{Table_2}, are organized as follows. First, we give basic information for every considered Paley graph $P_q$: the value of $q$, the stability number $\alpha(P_q)$, the value of the Lovász theta function $\vartheta(P_q)$, and the value of $\ell(q)$. Note that the values of $\alpha(P_q)$ are taken from~\cite{Gvozdenovic2009} for $61 \leq q < 200$, where $q$ is a prime, and from \url{https://aeb.win.tue.nl/graphs/Paley.html} for $q < 61$ and $q = 125$. Furthermore, we report computed upper bounds on $z_k(P_q)$ for $k \in \{2, \dots, 10\}$. The respective cell for $z_k(P_q)$ is shaded in gray if $k \leq \ell(q)$. If for some $P_q$ there exists a level~$k > \ell(q)$ such that the upper bound on $z_k(P_q)$ is equal to $\vartheta(P_q)$, we highlight the value in the respective cell in boldface. Finally, based on the computed data, we determine the empirical maximum level~$\tilde{\ell}(q)$ for which adding the ESCs does not improve the Lovász theta function as a bound on the stability number, i.e., the upper bound on $z_{\tilde{\ell}(q)}$ is equal to $\vartheta(P_q)$ and the upper bound on $z_{\tilde{\ell}(q)+1}$ is less than $\vartheta(P_q)$, so on level~$\tilde{\ell}(q)+1$ there is the first improvement over the Lovász theta function. Note that as we only compute upper bounds on $z_k(P_q)$ by computing $z_J(P_q)$ for some set $J$, it is possible that for some values of $q$ the computed bounds on $z_k(P_q)$ increase for higher values of $k$, even though the true values of $z_k(P_q)$ decrease. This could be the case because for values of $k$ up to $6$ considerable effort has gone into cleverly choosing separation matrices for finding violated ESCs; see~\cite{Gaa:18} for more details. \begin{table}[hbt!]
\caption{Computed upper bounds on $z_k(P_q)$ for Paley graphs $P_q$ } \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{ r r r | r r r r r r r r r r r} $q$ & $\alpha(P_q)$ & $\vartheta(P_q)$ & $\ell(q)$ & $\tilde{\ell}(q)$ & $z_2(P_q)$ & $z_3(P_q)$ & $z_4(P_q)$ & $z_5(P_q)$ & $z_6(P_q)$ & $z_7(P_q)$ &$z_8(P_q)$ & $z_9(P_q)$ & $z_{10}(P_q)$ \\ \hline 5 & 2 & 2.2361 & 2 & 2 & \cellcolor{lightgray} 2.2361 & 2.0000 & 2.0000 & 2.0000 & - & - & - & - & - \\ 13 & 3 & 3.6056 & 3 & 3 & \cellcolor{lightgray} 3.6056 & \cellcolor{lightgray} 3.6056 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 \\ 17 & 3 & 4.1231 & 3 & 3 & \cellcolor{lightgray} 4.1231 & \cellcolor{lightgray} 4.1231 & 3.6651 & 3.6646 & 3.6411 & 3.6404 & 3.2846 & 3.1512 & 3.0674 \\ 29 & 4 & 5.3852 & 4 & 4 & \cellcolor{lightgray} 5.3852 & \cellcolor{lightgray} 5.3852 & \cellcolor{lightgray} 5.3852 & 4.6187 & 4.6122 & 4.4966 & 4.4966 & 4.4963 & 4.4958 \\ 37 & 4 & 6.0828 & 4 & 4 & \cellcolor{lightgray} 6.0828 & \cellcolor{lightgray} 6.0828 & \cellcolor{lightgray} 6.0828 & 5.6048 & 5.6369 & 5.4941 & 5.4935 & 5.1477 & 5.1896 \\ 41 & 5 & 6.4031 & 4 & 4 & \cellcolor{lightgray} 6.4031 & \cellcolor{lightgray} 6.4031 & \cellcolor{lightgray} 6.4031 & 6.0702 & 6.1107 & 5.9933 & 5.9925 & 5.5562 & 5.5703 \\ 53 & 5 & 7.2801 & 5 & 5 & \cellcolor{lightgray} 7.2801 & \cellcolor{lightgray} 7.2801 & \cellcolor{lightgray} 7.2801 & \cellcolor{lightgray} 7.2801 & 6.9560 & 6.1956 & 6.1915 & 6.1945 & 6.1942 \\ 61 & 5 & 7.8102 & 5 & 5 & \cellcolor{lightgray} 7.8102 & \cellcolor{lightgray} 7.8102 & \cellcolor{lightgray} 7.8102 & \cellcolor{lightgray} 7.8102 & 7.6664 & 7.0000 & 6.9963 & 6.9988 & 6.9978 \\ 73 & 5 & 8.5440 & 5 & 5 & \cellcolor{lightgray} 8.5440 & \cellcolor{lightgray} 8.5440 & \cellcolor{lightgray} 8.5440 & \cellcolor{lightgray} 8.5440 & 8.5297 & 8.1970 & 8.1910 & 8.1949 & 8.1386 \\ 89 & 5 & 9.4340 & 6 & 8 & \cellcolor{lightgray} 9.4340 & \cellcolor{lightgray} 9.4340 & \cellcolor{lightgray} 9.4340 & \cellcolor{lightgray} 9.4340 & \cellcolor{lightgray} 9.4340 & \textbf{9.4340} & \textbf{9.4340} & 9.2172 & 9.2651 \\ 97 & 6 & 9.8489 & 6 & 6 & \cellcolor{lightgray} 9.8489 & \cellcolor{lightgray} 9.8489 & \cellcolor{lightgray} 9.8489 & \cellcolor{lightgray} 9.8489 & \cellcolor{lightgray} 9.8489 & 9.3042 & 9.0777 & 9.1612 & 9.3810\\ 101 & 5 & 10.0499 & 6 & 9 & \cellcolor{lightgray} 10.0499 & \cellcolor{lightgray} 10.0499 & \cellcolor{lightgray} 10.0499 & \cellcolor{lightgray} 10.0499 & \cellcolor{lightgray} 10.0499 & \textbf{10.0499} & \textbf{10.0499} & \textbf{10.0499} & 10.0415 \\ 109 & 6 & 10.4403 & 6 & 6 & \cellcolor{lightgray} 10.4403 & \cellcolor{lightgray} 10.4403 & \cellcolor{lightgray} 10.4403 & \cellcolor{lightgray} 10.4403 & \cellcolor{lightgray} 10.4403 & 10.1091 & 10.0652 & 10.1459 & 10.3095 \\ 113 & 7 & 10.6301 & 6 & 6 & \cellcolor{lightgray} 10.6301 & \cellcolor{lightgray} 10.6301 & \cellcolor{lightgray} 10.6301 & \cellcolor{lightgray} 10.6301 & \cellcolor{lightgray} 10.6301 & 10.3927 & 10.3732 & 10.4730 & 10.5898 \\ 125 & 7 & 11.1803 & 7 & 7 & \cellcolor{lightgray} 11.1803 & \cellcolor{lightgray} 11.1803 & \cellcolor{lightgray} 11.1803 & \cellcolor{lightgray} 11.1803 & \cellcolor{lightgray} 11.1803 & \cellcolor{lightgray} 11.1803 & 10.6415 & 10.7220 & 10.9749 \\ 137 & 7 & 11.7047 & 7 & 7 & \cellcolor{lightgray} 11.7047 & \cellcolor{lightgray} 11.7047 & \cellcolor{lightgray} 11.7047 & \cellcolor{lightgray} 11.7047 & \cellcolor{lightgray} 11.7047 & \cellcolor{lightgray} 11.7047 & 11.2181 & 11.3131 & 11.5605 
\\ 149 & 7 & 12.2066 & 7 & 7 & \cellcolor{lightgray} 12.2066 & \cellcolor{lightgray} 12.2066 & \cellcolor{lightgray} 12.2066 & \cellcolor{lightgray} 12.2066 & \cellcolor{lightgray} 12.2066 & \cellcolor{lightgray} 12.2066 & 11.9771 & 12.1427 & 12.1907 \\ 157 & 7 & 12.5300 & 7 & 7 & \cellcolor{lightgray} 12.5300 & \cellcolor{lightgray} 12.5300 & \cellcolor{lightgray} 12.5300 & \cellcolor{lightgray} 12.5300 & \cellcolor{lightgray} 12.5300 & \cellcolor{lightgray} 12.5300 & 12.4315 & 12.5108 & 12.5259 \\ 173 & 8 & 13.1529 & 8 & 8 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & \cellcolor{lightgray} 13.1529 & 13.0004 & 13.1152 \\ 181 & 7 & 13.4536 & 8 & $\geq 10$ & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \cellcolor{lightgray} 13.4536 & \textbf{13.4536} & \textbf{13.4536} \\ 193 & 7 & 13.8924 & 8 & $\geq 10$ & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \cellcolor{lightgray} 13.8924 & \textbf{13.8924} & \textbf{13.8924} \\ 197 & 8 & 14.0357 & 8 & 8 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & \cellcolor{lightgray} 14.0357 & 13.9961 & 14.0289 \\ \hline \end{tabular} \end{adjustbox} \label{Table_2} \end{table} First, we observe that as shown in Table~\ref{Table_2}, including ESCs on subgraphs of orders $\{2, \ldots, \ell(q)\}$ (in the gray cells) does not improve the Lovász theta function for any of the $22$ considered $P_q$, which aligns with Corollary~\ref{cor_level}. So our computational results fit the theoretical results obtained so far, and in particular, $\tilde{\ell}(q) \geq \ell(q)$ holds for all $P_q$. However, for some values of $q$, we see that $\tilde{\ell}(q) > \ell(q)$ holds, i.e., there is still no improvement of the Lovász theta function also on levels of the exact subgraph hierarchy that are higher than $\ell(q)$. We will continue to investigate these cases in Section~\ref{sec_relationship_alpha}. Generally, the computed upper bounds on the levels of the ESH yield good bounds on the stability numbers for small Paley graphs. For $P_q$ with $q \in \{5, 13\}$, the bounds are best possible and coincide with the stability number, while for $q \in \{17, 29, 41\}$ rounding down the best bounds yields the stability numbers. For $P_q$ with $q\in\{37, 53, 61, 125, 149, 197\}$, the integer part of the best computed bound is smaller by one than the integer part of the Lovász theta function. For the remaining larger Paley graphs, better integer bounds were not obtained. We will investigate in Section~\ref{section_ESH_local} how even better bounds on the stability number of Paley graphs can be obtained. \subsection{No improvement on higher levels} \label{sec_relationship_alpha} So far, we know that for a Paley graph $P_q$, there is no improvement of the Lovász theta function as a bound on the stability number up to level~$\ell(q)$ of the ESH. It is the aim of this section to investigate what happens on higher levels of the ESH. To do so, we first go back to the computational results.
For most of the considered Paley graphs $P_q$, the value of $\ell(q)$ and the empirically determined level~$\tilde\ell(q)$ up to which there is no improvement, both given in Table~\ref{Table_2}, coincide, which suggests that for these graphs $\ell(q)$ is indeed the largest order of subgraphs for which adding the ESCs does not improve the Lovász theta function as a bound on the stability number. However, for $q \in \{89,101,181,193\}$ the values of $\tilde\ell(q)$ are larger than $\ell(q)$. Interestingly, the stability number $\alpha(P_q)$ appears to influence the relationship between $\ell(q)$ and $\tilde\ell(q)$. For Paley graphs where $\alpha(P_q) \geq \ell(q)$, we note that $\ell(q) = \tilde{\ell}(q)$ for all considered values of $q$, and thus adding ESCs for subgraphs of order~$\ell(q) + 1$ led to better bounds. In contrast to that, for the four considered Paley graphs with $\alpha(P_q) < \ell(q)$, i.e., $q \in \{89,101,181,193\}$, this enhancement is not achieved, leading to $\ell(q) < \tilde{\ell}(q)$. We visualize the values of $\alpha(P_q)$, $\ell(q)$ and $\tilde{\ell}(q)$ (where we plot the value 10 as $\tilde{\ell}(q)$ for $q \in \{181, 193\}$, even though this is actually just a lower bound on the unknown value of $\tilde{\ell}(q)$) in Figure~\ref{figure_1}. \begin{figure}[hbt!] \centering \caption{The values of $\alpha(P_q)$, $\ell(q)$, and $\tilde{\ell}(q)$ for Paley graphs $P_q$ } \begin{tikzpicture}[scale=1.0] \begin{axis}[ xlabel={$q$}, xmin=0, xmax=200, ymin=0, ymax=11, xtick={25, 50, 75, 100, 125, 150, 175, 200}, ytick={0,1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}, legend pos=north west, ymajorgrids=true, grid style=dashed, ] \addplot[ color=blue, mark=square, ] coordinates { (5,2) (13, 3) (17,3) (29, 4) (37, 4) (41, 5) (53, 5) (61, 5) (73, 5) (89, 5) (97, 6) (101, 5) (109, 6) (113, 7) (125, 7) (137,7) (149, 7) (157, 7) (173, 8) (181, 7) (193, 7) (197, 8) }; \addlegendentry{$\alpha(P_q)$} \addplot[ color=red, mark=triangle*, ] coordinates { (5, 2) (13, 3) (17,3) (29, 4) (37, 4) (41, 4) (53, 5) (61, 5) (73, 5) (89, 6) (97, 6) (101, 6) (109, 6) (113, 6) (125, 7) (137,7) (149, 7) (157, 7) (173, 8) (181, 8) (193, 8) (197, 8) }; \addlegendentry{$\ell(q)$} \addplot[ color=teal, mark=asterisk, ] coordinates { (5, 2) (13, 3) (17,3) (29, 4) (37, 4) (41, 4) (53, 5) (61, 5) (73, 5) (89, 8) (97, 6) (101, 9) (109, 6) (113, 6) (125, 7) (137,7) (149, 7) (157, 7) (173, 8) (181, 10) (193, 10) (197, 8) }; \addlegendentry{$\tilde{\ell}(q)$} \end{axis} \end{tikzpicture} \label{figure_1} \end{figure} Indeed, it turns out that it is not a coincidence that $\tilde{\ell}(q) > \ell(q)$ holds whenever $\alpha(P_q) < \ell(q)$ is satisfied, as the following result shows. \begin{thm}\label{thm_alpha_levelPlus1} Let $P_q$ be a Paley graph with $q\geq 25$. If $\alpha(P_q) < \ell(q)$, then $z_{\ell(q)+1}(P_q) = z_{\ell(q)}(P_q) = \vartheta(P_q)$. \end{thm} \begin{proof} Let $P_q$ be a Paley graph such that $\alpha(P_q) < \ell(q)$ holds and let $(x^*, X^*)$ be as in Lemma~\ref{optimal_solution}. Let $H$ be a subgraph of $P_q$ on $k=\ell(q)+1$ vertices $\{v_1, \dots, v_k\}$. Due to Corollary~\ref{cor_level} it is enough to show that the ESC for $H$ is fulfilled. Towards this end we consider two cases. Case 1: If all vertices of $H$ have at least one neighbor in $H$, then in each row of $X^*_H$ there is at least one off-diagonal zero entry according to~\eqref{solution} in Lemma~\ref{optimal_solution}.
This implies that in each row of $X^*_H$ there are at most $k-2 = \ell(q)-1$ off-diagonal entries equal to $\frac{2}{q+\sqrt{q}}$. With this property in mind, the steps of the proof of Lemma~\ref{level_k} can be followed in order to finish the proof. In particular, the values of $\lambda$, $\mu_i$, and $\nu_{ij}$ are set in the same fashion, i.e., as they are set in \eqref{coefficient_mu}. The above property implies that all $\mu_i$ are non-negative, and in order to guarantee $\lambda \geq 0$ one has to have $k\frac{1}{\sqrt{q}} \leq 1$, which is fulfilled for all $q \geq 25$. Case 2: If there is at least one vertex of $H$ that does not have a neighbor in $H$, then let, without loss of generality, $v_1, \dots, v_p$ be the vertices of $H$ that do not have any neighbor in $H$. If $p \geq 3$, then we define $S = \{v_1, \dots, v_p\}$. If $p \leq 2$, then due to Observation~\ref{alpha_equals_clique} we know that $\omega(P_q) = \alpha(P_q) < \ell(q)$, so there are two distinct vertices $v^*$ and $v^{**}$ among the $\ell(q)$ vertices $\{v_2, \dots, v_k\}$ that are not adjacent and we define $S = \{v_1, \dots, v_p\} \cup \{v^*, v^{**}\}$. So independently of the value of $p$, the set $S$ is a stable set in $H$ on at least three vertices, and all vertices in $H$ that have no neighbor in $H$ are in $S$. Let $s \in \{0,1\}^k$ be the incidence vector of the stable set $S$ in $H$. In order to show that the ESC for $H$ is satisfied, we set \begin{alignat}{3} \sigma &= \frac{2}{q+\sqrt{q}}, \nonumber \\ \nu_{ij} &= \left\{ \begin{array}{ll} 0 & \text{ if } v_i \in S \text{ and } v_j \in S\\ X^*_{v_i,v_j} & \text{ otherwise} \end{array} \right. &&\quad 1\leq i < j \leq k, \nonumber \\ \mu_i &= \left\{ \begin{array}{ll} x^*_{v_i} - \sigma - \sum_{\substack{j=1 \\ j \neq i}}^{k} \nu_{{ij}} & \text{ if } v_i \in S \\ x^*_{v_i} - \sum_{\substack{j=1 \\ j \neq i}}^{k} \nu_{{ij}} & \text{ otherwise} \end{array} \right. && \quad 1 \leq i \leq k, \nonumber\\ \lambda &= 1 - \sum_{i = 1}^{k} \mu_i - \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} - \sigma , \nonumber \end{alignat} and see that \begin{align*} &\lambda + \sum_{i = 1}^{k} \mu_i + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} + \sigma = 1, \\ &X^*_H = \lambda 0_k + \sum_{i = 1}^{k} \mu_i E_i + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} \nu_{ij} E_{ij} + \sigma ss^T, \ \end{align*} and $\nu_{ij} = 0$ for all $1\leq i < j \leq k$ with $\{v_i,v_j\} \in E_q$ hold. Furthermore, clearly $\sigma$ and all $\nu_{ij}$ are non-negative by construction. Additionally, \begin{align*} \mu_i \geq \frac{1}{\sqrt{q}} - (k-2)\frac{2}{q + \sqrt{q}} \end{align*} holds for all values of $1 \leq i \leq k$. In particular, if $v_i \in S$, then by construction ($S$ has at least three elements) at least two values of $\nu_{ij}$ for $1 \leq j \leq k$, $j \neq i$ are zero, and $\sigma$ has the same value as the remaining values of $\nu_{ij}$. If $v_i \notin S$, then $v_i$ has at least one neighbor in $H$ and hence at least one of $\nu_{ij}$ for $1 \leq j \leq k$, $j \neq i$ is zero. As a consequence, all values of $\mu_i$ are non-negative since $k = \ell(q) + 1 = \lfloor \frac{\sqrt{q} + 5}{2} \rfloor$. Finally, $\lambda \geq 0$ holds by analogous arguments as in the proof of Lemma~\ref{level_k} as soon as $k\frac{1}{\sqrt{q}} \leq 1$ holds, which is the case for all $q \geq 25$. As a consequence, the ESC for $H$ is also fulfilled in this case.
\end{proof} Note that in the proof of Theorem~\ref{thm_alpha_levelPlus1} we show that all ESCs for subgraphs of order~$\ell(q) + 1$ are satisfied by considering outer products of incidence vectors of stable sets, where all but one of the stable sets have at most $2$ vertices and one stable set has at least $3$ vertices. Thus, it is not surprising that we are able to prove a stronger result than Lemma~\ref{level_k}, where in the proof we only use outer products of incidence vectors of stable sets of at most $2$ vertices. As a result of Theorem~\ref{thm_alpha_levelPlus1}, if $\alpha(P_q) < \ell(q)$ holds, then there is no improvement of the ESH not only up to level~$\ell(q)$, but also none on the next level~$\ell(q)+1$. When we again turn to the computational results in Table~\ref{Table_2}, then for the four instances with $\alpha(P_q) < \ell(q)$, i.e., $q \in \{89,101,181,193\}$, also including ESCs for subgraphs of order~$\ell(q) + 2$ fails to yield any improvement, suggesting that the minimum level of the ESH where an improvement may be achieved is at least $\ell(q) + 3$ for these values of $q$. It remains an open question whether, if $\alpha(P_q) < \ell(q)$ holds, there is indeed no improvement up to level~$\ell(q) + 2$ for all values of $q$, and it is still open to prove this even for $q \in \{89,101,181,193\}$. Most likely, such proofs need to consider outer products of incidence vectors of stable sets on more vertices, or more of these outer products. \section{The local exact subgraph hierarchy for vertex-transitive graphs}\label{section_ESH_local} In this section, we introduce a new hierarchy for the stable set problem for vertex-transitive graphs, which builds upon the ESH and which we will call the local ESH. The basic concept of this new hierarchy is described in Section~\ref{intro_LESH}. In Section~\ref{relationship_ESH_LESH}, we investigate the relationship between the ESH and the local ESH. Additionally, in Section~\ref{LESH_Paley}, we explore the behavior of the local ESH for Paley graphs. Finally, in Section~\ref{ESH_comp_local_Paley}, we present a computational study on the upper bounds on the stability numbers of Paley graphs obtained by using the local ESH. \subsection{Definition}\label{intro_LESH} In~\cite{Magsino}, Magsino, Mixon, and Parshall proposed a new approach for computing upper bounds on clique numbers of Paley graphs exploiting their vertex-transitivity. These bounds can be transferred to bounds on the stability number, as the maximum clique problem is equivalent to the stable set problem in Paley graphs due to Observation~\ref{alpha_equals_clique}. We now modify and extend the bounds from~\cite{Magsino} to arbitrary vertex-transitive graphs and introduce a new hierarchy for computing upper bounds on the stability number of vertex-transitive graphs. Towards this end, let $G = (V,E)$ with $\vert V \vert = n$ and $V = \{0, \dots, n-1\}$ be a vertex-transitive graph. Since $G$ is vertex-transitive, we can, without loss of generality, choose the vertex $0$ to be in a maximum stable set of $G$. Therefore, any other vertex adjacent to the vertex $0$ cannot be contained in this maximum stable set. Consequently, all other vertices contained in this maximum stable set are elements of $V^L = V \setminus (\{0\} \cup N(0))$. We call the subgraph of $G$ induced by the vertices in $V^L$ the local graph of $G$ and denote it by $G^L$. From this argumentation, we note the following.
\begin{obs}\label{stability_local_graph} Let $G = (V,E)$ with $\vert V \vert = n$ and $V = \{0, \dots, n-1\}$ be a vertex-transitive graph, and let $G^L$ be the local graph of $G$. Then, \begin{align*} \alpha(G) = 1 + \alpha(G^L). \end{align*} \end{obs} Hence, calculating the stability number of a vertex-transitive graph $G$ reduces to computing the stability number of $G^L$, and we obtain a maximum stable set of $G$ by taking a maximum stable set in $G^L$ and adding the vertex $0$. Since every vertex-transitive graph is $r$-regular for some $r \in \mathbb{N}_0$, an immediate advantage of Observation~\ref{stability_local_graph} is that the order of the graph that is considered reduces from $n$ for $G$ to $n - r - 1$ for $G^L$. However, the computation of the stability number of $G^L$, and thus $G$, remains NP-hard. Therefore, we introduce the local ESH to explore upper bounds on $\alpha(G)$ using the following principle. Following the theory of the ESH from Section~\ref{method_ESH}, we know that when the constraint $X \in \STAB^{2}(G^L)$ is added to the SDP $\eqref{theta_1}$ for $G^L$ to compute the Lovász theta function $\vartheta(G^L)$, the optimal objective function value of the resulting SDP is $\alpha(G^L)$. Due to Observation~\ref{stability_local_graph}, this allows us to deduce the stability number of $G$. Nevertheless, the obtained SDP is still too large to solve, so we include the constraint $X \in \STAB^{2}(G^L)$ only partially for smaller subgraphs. In particular, for $k \in \{0, \ldots, n - r - 1\}$ we define the $k$-th level of the local ESH (LESH) as \begin{align}\label{k_th_level_LESH} z^\prime_k(G) &= 1 + z_k(G^L). \end{align} Furthermore, we define $z^\prime_k(G) = z^\prime_{n-r-1}(G)$ for $k > n - r - 1$, analogously to the definition of the ESH. From Observation~\ref{stability_local_graph} we obtain the relationship \begin{align} \label{ineq:relationship_zPrime} \alpha(G) = 1 + \alpha(G^L) = z^\prime_{n - r - 1}(G) \leq \cdots \leq z^\prime_2(G) \leq z^\prime_1(G) = z^\prime_0(G) = 1 + \vartheta(G^L), \end{align} so the higher the considered level~$k$ of the LESH of a vertex-transitive graph is, the better the bounds on the stability number are, just as it is the case for the ESH. \subsection{Relationship between hierarchies}\label{relationship_ESH_LESH} As the next step, we investigate the relationship between the bounds on the $k-$th level of the ESH and the LESH for vertex-transitive graphs. We begin by establishing the following statement. \begin{lem}\label{ESH_subgraphs} Let $G = (V, E)$ be a graph on $n$ vertices, let $I\subseteq V$, $I \neq \emptyset$, and let $(x, X)$ be a feasible solution for~\eqref{theta_1}. Let $i \in I$ such that \begin{enumerate}[(a)] \item\label{first} $X_{ij} = 0$ for all $j \in I$, or \item\label{second} $X_{ii} = 1$ and $X_{ij} = X_{jj}$ for all $j \in I\setminus \{i\}$ \end{enumerate} holds. Then \begin{align*} X_I \in \STAB^2(G_I) \quad \Leftrightarrow \quad X_{I \setminus \{i\}} \in \STAB^2(G_{I \setminus \{i\}}). \end{align*} \end{lem} \begin{proof} We proceed by proving both directions of the equivalence separately. "\(\Rightarrow\)": Suppose \( X_I \in \mathrm{STAB}^2(G_I) \). Gaar~\cite[Observation 2.1.5]{Gaa:18} showed that if an ESC is fulfilled for a subgraph $H$, it is also satisfied for each subgraph of $H$. Thus, $X_{I \setminus \{i\}} \in \STAB^2(G_{I \setminus \{i\}})$ holds. "\(\Leftarrow\)": Suppose $X_{I \setminus \{i\}} \in \STAB^2(G_{I \setminus \{i\}})$. 
Without loss of generality, we assume that $i$ is the last entry of $I$. Let $k=|I|$ and let $\mathcal{S}(G_{I \setminus \{i\}})$ be the set of all incidence vectors of stable sets in $G_{I \setminus \{i\}}$. Then, there exist some $t \geq 0$, some $s_\ell \in \mathcal{S}(G_{I \setminus \{i\}})$ and some $\lambda_{\ell} > 0$ for all $1 \leq \ell \leq t$, such that \begin{align}\label{sum_of_coef} \sum_{\ell = 1}^{t} \lambda_{\ell} = 1 \end{align} and such that the matrix $X_{I \setminus \{i\}}$ can be expressed as \begin{align}\label{matrix_X_I_without_i} X_{I \setminus \{i\}} = \sum_{\ell = 1}^{t} \lambda_\ell s_{\ell} s^T_{\ell}. \end{align} We now show that $X_{I} \in \mathrm{STAB}^2(G_{I})$ by considering two cases, depending on whether~\eqref{first} or~\eqref{second} holds for $i$. Case 1: Assume~\eqref{first} holds, so $X_{i j} = 0$ for all $j \in I$. In this case, every incidence vector of a stable set $s_{\ell} \in \mathcal{S}(G_{I \setminus \{i\}})$ can be extended to an incidence vector of a stable set $\tilde{s}_{\ell} \in \mathcal{S}(G_{I})$ by setting $\tilde{s}_{\ell} = \begin{pmatrix} s_{\ell} & 0 \end{pmatrix}^T$. We set $\tilde\lambda_{\ell} = \lambda_{\ell}$ for all $1 \leq \ell \leq t$. Then, $\tilde\lambda_{\ell} \geq 0$ for all $1 \leq \ell \leq t$, and $\sum_{\ell = 1}^{t} \tilde\lambda_{\ell} = 1$ due to~\eqref{sum_of_coef}. Furthermore, because of~\eqref{first} we have \begin{align*} X_{I} = \begin{pmatrix} X_{I \setminus \{i\}} & 0_{k-1} \\ 0^T_{k-1} & 0 \end{pmatrix} \end{align*} and \begin{align*} \tilde{s}_\ell \tilde{s}^T_\ell = \begin{pmatrix} s_\ell s^T_\ell & 0_{k-1} \\ 0^T_{k-1} & 0 \end{pmatrix}, \end{align*} so we can write $X_I$ as \begin{align*} X_{I} = \sum_{\ell = 1}^{t} \tilde{\lambda}_{\ell} \tilde{s}_{\ell} \tilde{s}^T_{\ell}. \end{align*} Hence, $X_I \in \STAB^2(G_I)$ holds. Case 2: Assume~\eqref{second} holds, so $X_{i i} = 1$ and $X_{i j} = X_{jj}$ for all $j \in I \setminus \{i\}$. First, we argue that $s^\prime_{\ell} = \begin{pmatrix} s_{\ell} & 1 \end{pmatrix}^T$ is an incidence vector of a stable set in $G_I$ for all $1 \leq \ell \leq t$. Indeed, assume that this is not the case. Then, there exists some $\ell^\prime$ such that $s^\prime_{\ell^\prime} = \begin{pmatrix} s_{\ell^\prime} & 1 \end{pmatrix}^T$ is not an incidence vector of a stable set in $G_I$. Hence, there exists some $j^\prime \in I \setminus \{i\}$ such that $[s_{\ell^\prime}]_{j^\prime} = 1$ and $\{i, j^\prime\} \in E$. Now, since $\{i, j^\prime\} \in E$, we have that $X_{i{j^\prime}} = 0$, and therefore $X_{j^\prime j^\prime} = 0$ because of~\eqref{second}. Due to the representation~\eqref{matrix_X_I_without_i} and because $\lambda_\ell > 0$ and $s_{\ell} \in \{0,1\}^{|I|-1}$ for all $1 \leq \ell \leq t$, this implies $\lambda_{\ell}[s_{\ell}]_{j^\prime} = 0$ for all $1 \leq \ell \leq t$, and hence $[s_{\ell^\prime}]_{j^\prime} = 0$, a contradiction. Therefore, $s^\prime_{\ell}$ is indeed an incidence vector of a stable set in $G_I$ for all $1 \leq \ell \leq t$. Now, we set $\lambda^\prime_{\ell} = \lambda_{\ell}$ for all $1 \leq \ell \leq t$, and obtain $\lambda^\prime_{\ell} \geq 0$ for all $1 \leq \ell \leq t$ and $\sum_{\ell = 1}^{t} \lambda^\prime_{\ell} = 1$ due to~\eqref{sum_of_coef}.
Furthermore, \begin{align*} X_{I} = \begin{pmatrix} X_{I \setminus \{i\}} & x_{I \setminus \{i\}} \\ x_{I \setminus \{i\}}^T & 1 \end{pmatrix} \end{align*} and \begin{align*} s^\prime_{\ell} s ^{\prime T}_{\ell} = \begin{pmatrix} s_{\ell} s^T_{\ell}& s_{\ell} \\ s^T_{\ell} & 1 \end{pmatrix}. \end{align*} Thus, we can express the matrix $X_I$ as \begin{align*} X_{I} = \sum_{\ell = 1}^{t} \lambda^\prime_{\ell} s^\prime_{\ell} s^{\prime T}_{\ell}. \end{align*} Hence, also in this case, we obtain that $X_I \in \STAB^2(G_I)$ holds, which finishes the proof. \end{proof} With the statement of Lemma~\ref{ESH_subgraphs} we are now ready to show that the bounds obtained by the LESH are at least as good as the ones obtained by the ESH. \begin{thm}\label{LESH_at_least_as_good_as_ESH} Let $G = (V, E)$ be a vertex-transitive graph. Then, for all $k \in \mathbb{N}_0$, \begin{align*}z^\prime_k(G) \leq z_k(G). \end{align*} \end{thm} \begin{proof} Let $n$ be the number of vertices of $G$. Given that $G$ is vertex-transitive, we know that it is $r$-regular for some $r \in \mathbb{N}_0$. If $k > n - r - 1$, then \begin{align*} z^\prime_k(G) = z^\prime_{n-r-1}(G) = \alpha(G) \leq z_{k}(G) \end{align*} holds according to the definition of the LESH,~\eqref{ineq:relationship_zPrime} and~\eqref{ineq:relationship_z}. Thus, what is left to show is that $z^\prime_k(G) \leq z_k(G)$ holds for all $k \in \{0, \ldots, n - r - 1\}$. From the definition of $z^\prime_k(G)$ given in~\eqref{k_th_level_LESH}, we have $z^\prime_k(G) = 1 + z_k(G^L)$, where $G^L = (V^L, E^L)$ is the local graph of $G$. Thus, we need to show that $1 + z_k(G^L) \leq z_k(G)$. Let $(x^*,X^*)$ be an optimal solution of the SDP to compute $z_k(G^L)$, i.e., of~\eqref{theta_1} for the local graph $G^L$ with the ESC for each subgraph of order~$k$ added. Assume the vertex set of $G$ is $V = \{0, \ldots, n-1\}$, and the vertex set of $G^L$ is $V^L = \{0^\prime, \ldots, n-r-2^\prime\}$. Now let $\pi \colon V^L \to V$ such that $\pi(i^\prime) \in \{0, \ldots, n-1\}$ is the vertex in $G$ that corresponds to the vertex $i^\prime$ in $G^L$. We define $y \in \mathbb{R}^n$ as well as $Y \in \mathbb{R}^{n \times n}$ as follows. First, we set \begin{alignat*}{3} y_{\pi(i^\prime)} &=x^*_{i^\prime} && \quad \forall i^\prime \in V^L \\ Y_{\pi(i^\prime)\pi(j^\prime)} & = X^*_{i^\prime j^\prime} && \quad \forall i^\prime, j^\prime \in V^L. \end{alignat*} Furthermore, since the vertex $0$ is removed from $G$ to obtain $G^L$, we have that $0 \notin \{\pi(0^\prime), \ldots, \pi(n-r-2^\prime)\}$. Hence, we set \begin{align*} y_{0} = Y_{00} &= 1 \\ Y_{0j} = Y_{j0} &= x^*_j \quad \forall j \in \{\pi(0^\prime), \ldots, \pi(n-r-2^\prime)\}. \end{align*} Finally, for all other indices, we define \begin{alignat*}{3} y_{i} &= 0 && \quad \forall i \in V \setminus \{0, \pi(0^\prime), \ldots, \pi(n-r-2^\prime)\} \\ Y_{ij} & = 0 && \quad ~i \in V \setminus \{0, \pi(0^\prime), \ldots, \pi(n-r-2^\prime)\} \textrm{ or } j \in V \setminus \{0, \pi(0^\prime), \ldots, \pi(n-r-2^\prime)\}. \end{alignat*} This constructs an embedding of the solution $(x^*, X^*)$ into $(y, Y)$, which represents taking the solution $(x^*, X^*)$ for $G^L$ and the vertex $0$ as solution for the graph $G$. 
In particular, $y$ and $Y$ can be seen as a permutation of the vector $\tilde{y} = \begin{pmatrix} 1 & x^{*T} & 0^T_{r} \end{pmatrix}^T $ and the matrix $$\tilde{Y} = \begin{pmatrix} 1 & x^{*T} & 0^T_{r} \\ x^* & X^* & 0_{(n-r-1) \times r} \\ 0_{r} & 0_{r\times (n-r-1)} & 0_{r\times r} \end{pmatrix},$$ respectively. Next, we show that $(y, Y)$ is a feasible solution for the SDP to compute $z_k(G)$. From the construction of $(y, Y)$ and from $\diag(X^*) = x^*$ and $X^*_{ij} = 0 ~\forall \{i,j\}\in E$ it immediately follows that $\diag(Y) = y$ and $Y_{ij} = 0$ for all $\{i,j\} \in E$. In order to show that $\begin{pmatrix} Y & y \\ y^T & 1 \end{pmatrix} \succeq 0$ holds, it is enough to show that $\begin{pmatrix} \tilde{Y} & \tilde{y} \\ \tilde{y}^T & 1 \end{pmatrix} \succeq 0$. Due to Schur's complement, this is the case if and only if $$ \tilde{Y} - \tilde{y}\tilde{y}^T = \begin{pmatrix} 0 & 0^T_{n-r-1} & 0^T_{r} \\ 0_{n-r-1} & X^* - x^*x^{*T} & 0_{(n-r-1) \times r} \\ 0_{r} & 0_{r\times (n-r-1)} & 0_{r\times r} \end{pmatrix} \succeq 0, $$ which clearly holds because $\begin{pmatrix} X^* & x^* \\ x^{*T} & 1 \end{pmatrix} \succeq 0$ and hence with Schur's complement $X^* - x^*x^{*T} \succeq 0$. What is left to show is that $Y_I \in \STAB^{2}(G_I)$ for all $I \subseteq V$ with $\vert I \vert = k$. First, we note that \begin{align}\label{property_subgraphs} X^*_{I^L} \in \STAB^{2}(G^L_{I^L}) ~ \forall I^L \subseteq V^L: \vert I^L \vert = k. \end{align} Now let $I \subseteq V$ with $\vert I \vert = k$ be arbitrary but fixed, and let $\tilde{I} \subseteq I$ be the subset of vertices of $I$ that correspond to vertices of $G^L$, i.e., $\tilde{I} = I \cap \{\pi(0^\prime), \ldots, \pi(n-r-2^\prime)\}$, and let $\tilde{k} = \vert \tilde{I} \vert$. Then, we can distinguish two cases. Case 1: If $\tilde{k} = k$ (so $\tilde{I} = I$), then from the construction of $(y, Y)$ and~\eqref{property_subgraphs}, we obtain that $Y_I \in \STAB^{2}(G_I)$. Case 2: If $\tilde{k} < k$, then we can apply the reasoning from the first case and obtain that $Y_{\tilde{I}} \in \STAB^{2}(G_{\tilde{I}})$. Now let $\{v_1, \dots, v_{k-\tilde k}\} = I \setminus \tilde{I}$. From the construction of $(y, Y)$, we have that $Y_{v_1j} = 0$ for all $j \in I$ if $v_1 \neq 0$, or $Y_{v_1v_1} = 1$ and $Y_{v_1j} = y_j$ for all $j \in I \setminus \{v_1\}$ if $v_1 = 0$. Hence, by applying Lemma~\ref{ESH_subgraphs}, we obtain that $Y_{\tilde{I} \cup \{v_1\}} \in \STAB^{2}(G_{\tilde{I} \cup \{v_1\} })$. We apply this argumentation iteratively for $v_2$, \dots, $v_{k-\tilde k}$ to finally conclude for $I = \tilde{I} \cup \{v_1, \dots, v_{k-\tilde k}\}$ that $Y_I \in \STAB^{2}(G_I)$ holds. In both cases, we see that $Y_I$ is contained in $\STAB^{2}(G_I)$. Thus, $(y, Y)$ is feasible for the SDP~\eqref{theta_1} for the graph $G$ with the ESC for each subgraph of order~$k$ added, i.e., for the SDP to compute $z_k(G)$, and its objective function value equals \begin{align*} \mathds{1}_n^T y = 1 + \mathds{1}_{n-r-1}^T x^* = 1 + z_k(G^L), \end{align*} which implies that $1 + z_k(G^L) \leq z_k(G)$, and thus $z^\prime_k(G) \leq z_k(G)$. \end{proof} \subsection{Behavior for Paley graphs}\label{LESH_Paley} In Section~\ref{section_ESH_Paley}, we observed that for every $P_q$ there exists a level~$\ell(q)$ for which adding ESCs on subgraphs of orders $\{2, \ldots, \ell(q)\}$ does not yield an improvement over the Lovász theta function in terms of bounding the stability number $\alpha(P_q)$. 
We now investigate whether this behavior persists with the LESH or if the LESH provides stronger bounds for these graphs. To this end, we first revisit the approach and results presented by Magsino, Mixon, and Parshall in~\cite{Magsino}. In that work, the authors exploited the vertex-transitivity of Paley graphs and computed upper bounds for their clique numbers employing the Lovász theta function and its strengthening given by Schrijver~\cite{LovSch}. Since they dealt with the maximum clique problem, they considered the vertex $0$ as well as its neighborhood as a subgraph that contains a maximum clique. Denoting this subgraph by $L_q$, they computed the Lovász theta function for the complement $\overline{L_q}$. The graph $P^L_q$ is isomorphic to $\overline{L_q}$, allowing us to transfer their bounds to our setting. Specifically, given that $\alpha(P_q) = 1 + \alpha(P^L_q)$ according to Observation~\ref{stability_local_graph}, and by computing the Lovász theta function $\vartheta(P^L_q)$, they derived \begin{align*} b_{M}(P_q) = 1 + \vartheta(P^L_q) \end{align*} as an upper bound on $\alpha(P_q)$. In~\cite{Magsino}, the authors strengthen $b_{M}(P_q)$ as an upper bound on $\alpha(P_q)$ by considering the Schrijver refinement~\cite{Schrijver} of the Lovász theta function. In particular, for a graph $G$ on $n$ vertices, Schrijver adds the non-negativity constraint into~\eqref{theta_2} and defines \begin{align*} \vartheta^*(G) ~&=~ \max \{\langle \mathds{1}_{n \times n}, X \rangle \colon X \succeq 0, ~X \geq 0, ~ \tr(X) = 1, ~ X_{ij} = 0 ~~\forall \{i,j\} \in E\}. \end{align*} Galli and Letchford~\cite{GALLI2017159} argued that incorporating this non-negativity constraint $X \geq 0$ into~\eqref{theta_1} results in the same bound $\vartheta^*(G)$. Since $\vartheta^*(G) \leq \vartheta(G)$, using $\vartheta^*(G)$ provides a potentially better upper bound on $\alpha(P_q)$ than $b_{M}(P_q)$, namely \begin{align*} b_{M^*}(P_q) = 1 + \vartheta^*(P^L_q). \end{align*} Moreover, Schrijver~\cite{Schrijver} demonstrated that the SDP formulations of both $\vartheta(G)$ and $\vartheta^*(G)$ can be reduced to linear programs for circulant graphs. Since $P_q$, and consequently $P^L_q$, is circulant when $q$ is a prime, the values of both $b_M(P_q)$ and $b_{M^*}(P_q)$ can be computed by solving a linear program in this case. In order to assess the behavior of the LESH for Paley graphs, we now explore the relationship between the bounds on the $k$-th level of the LESH for Paley graphs $P_q$ and the bounds $b_{M}(P_q)$ and $b_{M^*}(P_q)$. First, we note the following. \begin{obs}\label{ESH_level_0_1} Let $P_q$ be a Paley graph. Then, \begin{align*} b_{M}(P_q) = 1 + \vartheta(P^L_q) = z^\prime_1(P_q) = z^\prime_0(P_q). \end{align*} \end{obs} Therefore, the bound $b_M(P_q)$ coincides with $z^\prime_0(P_q)$ and $z^\prime_1(P_q)$. To connect the LESH with the bound $b_{M^*}(P_q)$, we first establish a relationship between the ESH and Schrijver's refinement of the Lovász theta function. \begin{prop}\label{observation_Schrijver_leve_2} For any graph $G$, \begin{align*} z_2(G) \leq \vartheta^*(G) \leq z_1(G). \end{align*} \end{prop} \begin{proof} From the definition of the ESH, we know that $z_1(G) = \vartheta(G)$. Since $\vartheta^*(G) \leq \vartheta(G)$, it follows that $\vartheta^*(G) \leq z_1(G)$. Furthermore, Schrijver's refinement $\vartheta^*(G)$ is obtained by adding the non-negativity constraint $X \geq 0$ into any SDP \eqref{theta_1} or \eqref{theta_2} to compute the Lovász theta function $\vartheta(G)$. 
However, for subgraphs of order~$2$, the non-negativity constraint~\eqref{2_1} is only one of four inequalities describing the ESC~\eqref{ESC_2}. This implies that the bound on the second level of the ESH is at least as good as $\vartheta^*(G)$. Therefore, $z_2(G) \leq \vartheta^*(G)$. \end{proof} Hence, Schrijver's refinement $\vartheta^*(G)$ lies between the first and second levels of the ESH. In the LESH framework, Proposition~\ref{observation_Schrijver_leve_2} leads to the following relationship. \begin{cor}\label{cor_1} For any vertex-transitive graph $G$, \begin{align*} z^\prime_2(G) \leq 1 + \vartheta^*(G^L) \leq z^\prime_1(G). \end{align*} \end{cor} Finally, based on Observation~\ref{ESH_level_0_1} and Corollary~\ref{cor_1}, we are now able to establish the relationship between the bound $b_M^*(P_q)$ and the LESH for Paley graphs. \begin{cor}\label{cor_b_M^*} Let $P_q$ be a Paley graph. Then, \begin{align*} z^\prime_2(P_q) \leq b_{M^*}(P_q) \leq b_{M}(P_q) = z^\prime_1(P_q). \end{align*} \end{cor} Thus, the bound on the second level of the LESH is at least as good as the bound $b_{M^*}(P_q)$. Naturally, we ask whether there exist Paley graphs $P_q$ for which the strict inequalities $z^\prime_2(P_q) < b_{M^*}(P_q)$ or $b_{M^*}(P_q) < b_{M}(P_q)$ hold. In either case, this would imply $z^\prime_2(P_q) < z^\prime_1(P_q)$, demonstrating that the LESH provides better performance than the ESH for some Paley graphs, yielding tighter bounds on their stability numbers already at the second level of the hierarchy. To answer this question, we refer to the computational results presented in~\cite{Magsino}, where the bounds $b_M(P_q)$ and $b_{M^*}(P_q)$ were computed and compared with the bound $b_H(P_q)$ for $P_q$ with $q < 3000$. According to Table 1 in~\cite{Magsino}, for 63 Paley graphs $P_q$ with $q < 3000$, the inequality $b_{M^*}(P_q) < b_{M}(P_q)$ holds. Together with Corollary~\ref{cor_b_M^*}, we draw the following conclusion. \begin{obs}\label{LESH_makes_improvement} There exist Paley graphs $P_q$ for which \begin{align*} z^\prime_2(P_q) < z^\prime_1(P_q). \end{align*} \end{obs} This finding is noteworthy because, as stated in Lemma~\ref{level_2}, adding ESCs for subgraphs of order~$2$ in the ESH does not improve the Lovász theta function as bound on the stability number for all Paley graphs $P_q$. Due to Observation~\ref{LESH_makes_improvement} the LESH does bring an improvement even on level~$2$ for some Paley graphs, so this hierarchy is significantly stronger. \subsection{Computational study}\label{ESH_comp_local_Paley} We now computationally employ the LESH to compute upper bounds on the stability numbers of Paley graphs $P_q$. For this purpose, we revisit Paley graphs $P_q$ with $q < 200$ and compute upper bounds on $z^\prime_k(P_q)$ for $k \in \{2, \dots, 10\}$. The main goal of this computational study is to compare the bounds on $z^\prime_k(P_q)$ and on $z_k(P_q)$, as well as to investigate the quality of the bounds on stability numbers of Paley graphs obtained by utilizing the LESH by comparing $z^\prime_k(P_q)$ to $b_{H}(P_q)$, $b_{M}(P_q)$ and $b_{M^*}(P_q)$. We use the same computational setup and code as in Section~\ref{comp_justification}. We again do not consider any $q$ that is a square because of Observation~\ref{theta_exact_q_square}. 
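For orientation, the basic ingredients of this computation (building $P_q$, passing to its local graph $P^L_q$, and evaluating $\vartheta$ and $\vartheta^*$ on the local graph to obtain $b_M(P_q) = z^\prime_1(P_q)$ and $b_{M^*}(P_q)$) can be sketched in a few lines. The following is a simplified illustration in Python with cvxpy for a small prime $q$, with illustrative function names; it is not the solver setup used for the reported experiments.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def paley_graph(q):
    # q prime with q = 1 (mod 4); {i, j} is an edge iff i - j is a nonzero square mod q
    squares = {(x * x) % q for x in range(1, q)}
    return q, [(i, j) for i in range(q) for j in range(i + 1, q)
               if (i - j) % q in squares]

def local_graph(n, edges, v=0):
    # delete v and its neighbourhood, keep the subgraph induced on the remaining vertices
    nbrs = {v} | {j for i, j in edges if i == v} | {i for i, j in edges if j == v}
    keep = [u for u in range(n) if u not in nbrs]
    idx = {u: k for k, u in enumerate(keep)}
    return len(keep), [(idx[i], idx[j]) for i, j in edges if i in idx and j in idx]

def lovasz_theta(n, edges, schrijver=False):
    # formulation (theta_2): max <J, X> s.t. X psd, tr X = 1, X_ij = 0 on edges
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, cp.trace(X) == 1] + [X[i, j] == 0 for i, j in edges]
    if schrijver:
        cons.append(X >= 0)   # Schrijver's refinement theta*
    return cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()

n, E = paley_graph(13)
nL, EL = local_graph(n, E)
print(1 + lovasz_theta(nL, EL))                    # b_M(P_13)  = z'_1(P_13)
print(1 + lovasz_theta(nL, EL, schrijver=True))    # b_M*(P_13)
\end{verbatim}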
Furthermore, as we only compute upper bounds on $z^\prime_k(P_q)$ and because of the same arguments as in Section~\ref{comp_justification}, it is again possible that for some values of $q$ the bounds on $z^\prime_k(P_q)$ increase for higher values of $k$, even though the values of $z^\prime_k(P_q)$ decrease. The results of this computational experiment are presented in Table~\ref{Table_3}. We provide the values of $q$, $\alpha(P_q)$ and $\vartheta(P_q)$. Then, the bounds $b_{H}(P_q)$, $b_{M}(P_q)$, and $b_{M^*}(P_q)$ are given. Finally, we present computed upper bounds on $z^\prime_k(P_q)$ for $k \in \{2, \dots, 10\}$. If $z^\prime_k(P_q)$ for some $k \in \{2, \dots, 10\}$ yields a better integer bound on $\alpha(P_q)$ than both bounds $b_{H}(P_q)$ and $b_{M^*}(P_q)$, the respective value is printed in bold. Furthermore, we recall that the bound $b_{H}(P_q)$ is valid only for $P_q$ where $q$ is a prime, and therefore we do not present the bound $b_{H}(P_q)$ for $q = 125$. \begin{table}[hbt!] \caption{Comparison of upper bounds on $\alpha(P_q)$ } \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{ r r r r r r | r r r r r r r r r} $q$ & $\alpha(P_q)$ & $\vartheta(P_q)$ & $b_{H}(P_q)$ & $b_{M}(P_q)$ & $b_{M^*}(P_q)$ & $z^\prime_2(P_q)$ & $z^\prime_3(P_q)$ & $z^\prime_4(P_q)$ & $z^\prime_5(P_q)$ & $z^\prime_6(P_q)$ & $z^\prime_7(P_q)$ & $z^\prime_8(P_q)$ & $z^\prime_9(P_q)$ & $z^\prime_{10}(P_q)$ \\ \hline 5 & 2 & 2.2361 & 2.0000 & 2.0000 & 2.0000 & 2.0000 & 2.0000 & 2.0000 & 2.0000 & - & - & - & - & - \\ 13 & 3 & 3.6056 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 \\ 17 & 3 & 4.1231 & 3.3723 & 3.3431 & 3.3431 & 3.3431 & 3.2928 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 & 3.0000 \\ 29 & 4 & 5.3852 & 4.2749 & 4.3177 & 4.3177 & 4.3177 & 4.3177 & 4.0000 & 4.0000 & 4.0000 & 4.0000 & 4.0000 & 4.0000 & 4.0000 \\ 37 & 4 & 6.0828 & 4.7720 & 4.7599 & 4.7599 & 4.7599 & 4.7599 & 4.5264 & 4.0000 & 4.0000 & 4.0000 & 4.0000 & 4.0014 & 4.0188 \\ 41 & 5 & 6.4031 & 5.0000 & 5.4721 & 5.4721 & 5.4721 & 5.3818 & 5.0000 & 5.0000 & 5.0000 & 5.0000 & 5.0000 & 5.0000 & 5.0000 \\ 53 & 5 & 7.2801 & 5.6235 & 5.6783 & 5.6783 & 5.6783 & 5.6761 & 5.5983 & 5.0634 & 5.0065 & 5.0004 & 5.0000 & 5.0116 & 5.0653 \\ 61 & 5 & 7.8102 & 6.0000 & 5.9009 & 5.8886 & 5.8886 & 5.8886 & 5.8474 & 5.5173 & 5.4913 & 5.4384 & 5.1922 & 5.2066 & 5.2101 \\ 73 & 5 & 8.5540 & 6.5208 & 6.3772 & 6.3772 & 6.3772 & 6.3772 & 6.3373 & 6.1044 & \textbf{5.9488} & \textbf{5.6720} & \textbf{5.6263} & \textbf{5.6076} & \textbf{5.6081} \\ 89 & 5 & 9.4340 & 7.1521 & 7.1553 & 7.0600 & 7.0599 & 7.0598 & 7.0403 & \textbf{6.8056} & \textbf{6.7391} & \textbf{6.2342} & \textbf{6.2320} & \textbf{6.2179} & \textbf{6.1961} \\ 97 & 6 & 9.8489 & 7.4462 & 7.9483 & 7.9451 & 7.9451 & 7.9449 & 7.8032 & 7.4022 & 7.4942 & 7.0127 & 7.0085 & 7.0116 & 7.0136 \\ 101 & 5 & 10.0499 & 7.5887 & 7.2903 & 7.2891 & 7.2891 & 7.2891 & 7.2891 & 7.1738 & 7.0916 & \textbf{6.7396} & \textbf{6.7373} & \textbf{6.6830} & \textbf{6.6628} \\ 109 & 6 & 10.4403 & 7.8655 & 8.0070 & 8.0018 & 8.0018 & 8.0017 & 7.9916 & 7.6901 & 7.8152 & 7.3638 & 7.2925 & 7.3116 & 7.3178 \\ 113 & 7 & 10.6301 & 8.0000 & 8.3305 & 8.3305 & 8.3305 & 8.3305 & 8.3183 & 8.0452 & 8.0425 & \textbf{7.4031} & \textbf{7.3836} & \textbf{7.4326} & \textbf{7.4408} \\ 125 & 7 & 11.1803 & - & 8.5700 & 8.5700 & 8.5700 & 8.5700 & 8.4404 & 8.3771 & 8.3924 & \textbf{7.2679} & \textbf{7.4508} & \textbf{7.8357} & 8.0555 \\ 137 & 7 & 11.7047 & {8.7614} & 
8.8290 & 8.8261 & 8.8261 & 8.8259 & 8.8145 & 8.7179 & 8.7227 & \textbf{7.9157} & \textbf{7.9101} & 8.0265 & 8.0564 \\ 149 & 7 & 12.2066 & 9.1168 & 9.1884 & 9.1884 & 9.1884 & 9.1884 & 9.1884 & 9.1380 & 9.0119 & \textbf{8.1717} & \textbf{8.1820} &\textbf{8.2875} & \textbf{8.3928} \\ 157 & 7 & 12.5300 & 9.3459 & 9.6949 & 9.6704 & 9.6699 & 9.6692 & 9.6692 & 9.4742 & 9.4869 & \textbf{8.8095} & \textbf{8.7167} & \textbf{8.8073} & \textbf{8.9108} \\ 173 & 8 & 13.1529 & 9.7871 & 10.3165 & 10.2339 & 10.2336 & 10.2331 & 10.2294 & 10.0904 & 10.1463 & 9.3973 & 9.3308 & 9.4518 & 9.5914 \\ 181 & 7 & 13.4563 & 10.0000 & 10.3241 & 10.3207 & 10.3205 & 10.3203 & 10.3203 & 10.2424 & 10.1963 & \textbf{9.4891} & \textbf{9.3425} & \textbf{9.5454} & \textbf{9.6373} \\ 193 & 7 & 13.8924 & 10.3107 & 10.5058 & 10.4379 & 10.4370 & 10.4360 & 10.4359 & 10.3395 & 10.2916 & \textbf{9.7385} & \textbf{9.5777} & \textbf{9.7065} & \textbf{9.8309} \\ 197 & 8 & 14.0357 & 10.4121 & 10.6517 & 10.6517 & 10.6517 & 10.6517 & 10.6500 & 10.5206 & 10.5547 & \textbf{9.8377} & \textbf{9.7119} & \textbf{9.8523} & \textbf{10.0225} \\ \hline \end{tabular} \end{adjustbox} \label{Table_3} \end{table} First, we examine whether applying the LESH on Paley graphs yields better bounds on $\alpha(P_q)$ than $b_{H}(P_q)$, $b_{M}(P_q)$ and $b_{M^*}(P_q)$, and if so, whether we also obtain better integer bounds on $\alpha(P_q)$ for the considered instances. We note in Table~\ref{Table_3} that for $q \in \{5, 13\}$, the bounds $b_{H}(P_q)$, $b_{M}(P_q)$ and $b_{M^*}(P_q)$ coincide with $\alpha(P_q)$, as does the bound on $z^\prime_2(P_q)$. Thus, the bounds $z^\prime_2(P_q)$, $b_{H}(P_q)$, $b_{M}(P_q)$ and $b_{M^*}(P_q)$ are of the same quality. Nevertheless, for all other considered instances, we observe that the best computed bounds on $z^\prime_k(P_q)$ are significantly better than the bounds $b_{H}(P_q)$, $b_{M}(P_q)$ and $b_{M^*}(P_q)$. In particular, for $q \in \{17, 29, 37, 41, 53\}$ those bounds are best possible, as they coincide with $\alpha(P_q)$. Furthermore, for $11$ of $15$ considered graphs $P_q$ with $61\leq q < 200$, the best computed bounds on $z^\prime_k(P_q)$ are better than the bounds $b_{H}(P_q)$ and $b_{M^*}(P_q)$ by one integer. Thus, we see in our computational study that, in general, our bounds obtained with the LESH are much better than the bounds proposed in~\cite{Hanson} and~\cite{Magsino}. Second, we analyze the bounds on $z^\prime_2(P_q)$ and $b_{M^*}(P_q)$. Note that in our computational results for all values of $q$ we have that $z^\prime_2(P_q) \leq b_{M^*}(P_q)$, which is in accordance with the theory from Corollary~\ref{cor_b_M^*}. We observe that for $5$ of $22$ considered instances, i.e.\ for $q \in \{89, 157, 173, 181, 193\}$, the strict inequality $z^\prime_2(P_q) < b_{M^*}(P_q)$ holds. Hence, empirical data show that there exist Paley graphs for which adding not only non-negativity constraints as it is done in $b_{M^*}(P_q)$, but also the inequalities \eqref{2_2}-\eqref{2_4} into the bound as it is done in $z^\prime_2(P_q)$ improves the upper bound on $\alpha(P_q)$. Nevertheless, we note that this improvement was rather modest, and the best improvement of $0.0009$ was obtained for $P_q$ with $q = 193$. Finally, and most importantly, we compare the bounds on $z_k(P_q)$ in Table~\ref{Table_2} and $z^\prime_k(P_q)$ in Table~\ref{Table_3} and analyze whether the LESH yields better (integer) bounds on $\alpha(P_q)$ than the ESH. 
Here we note that for all considered $P_q$ with $q \geq 17$ and for all $k \in \{2, \ldots, 10\}$ the relation $$ \lfloor z^\prime_k(P_q) \rfloor \leq \lfloor z_k(P_q) \rfloor - 1$$ holds, so on all levels $k \geq 2$ the bounds from the LESH are at least one integer better than the bounds based on the ESH. \begin{figure}[hbt!] \centering \caption{The upper bounds on $z_2(P_q)$ and $z^\prime_2(P_q)$ for Paley graphs $P_q$ with $17 \leq q < 200$} \begin{tikzpicture}[scale=1.0] \begin{axis}[ xlabel={$q$}, xmin=10, xmax=200, ymin=0, ymax=16, xtick={25, 50, 75, 100, 125, 150, 175, 200}, ytick={0, 2, 4, 6, 8, 10, 12, 14, 16}, legend pos=north west, ymajorgrids=true, grid style=dashed, ] \addplot[ color=blue, mark=square, ] coordinates { (17,3) (29, 4) (37, 4) (41, 5) (53, 5) (61, 5) (73, 5) (89, 5) (97, 6) (101, 5) (109, 6) (113, 7) (125, 7) (137,7) (149, 7) (137,7) (149, 7) (157, 7) (173, 8) (181, 7) (193, 7) (197, 8) }; \addlegendentry{$\alpha(P_q)$} \addplot[ color=red, mark=triangle, ] coordinates { (17, 4.1231) (29, 5.3852) (37, 6.0828) (41, 6.4031) (53, 7.2801) (61, 7.8102) (73, 8.5440) (89, 9.4340) (97, 9.8499) (101, 10.0499) (109, 10.4403) (113, 10.6301) (125, 11.1803) (137,11.7047) (149, 12.2066) (157, 12.5300) (173, 13.1529) (181, 13.4536) (193, 13.8924) (197, 14.0357) }; \addlegendentry{$z_2(P_q)$} \addplot[ color=cyan, mark=o, ] coordinates { (17, 3.3431) (29, 4.3177) (37, 4.7599) (41, 5.4721) (53, 5.6783) (61, 5.8886) (73, 6.3772) (89, 7.0599) (97, 7.9451) (101, 7.2891) (109, 8.0018) (113, 8.3305) (125, 8.5700) (137, 8.8261) (149, 9.1884) (157, 9.6699) (173, 10.2336) (181, 10.3205) (193, 10.4370) (197, 10.6517) }; \addlegendentry{$z^\prime_2(P_q)$} \end{axis} \end{tikzpicture} \label{figure_2a} \end{figure} In Figure~\ref{figure_2a}, we examine the bounds on $\alpha(P_q)$ from the second level of the ESH and the LESH in more detail. We see that for $q \in \{17, 29, 41\}$, the bound $z^\prime_2(P_q)$ is better than the bound $z_2(P_q)$ by one integer, while improvement by two integers is observed for $q \in \{37, 53, 61, 73, 89, 97, 109, 113\}$. Furthermore, for $q \in \{101, 125, 137, 149, 157, 173, 181, 193\}$, an improvement of the bound by three integers is recorded. The best improvement in bound by four integers is observed for $q = 197$. This shows that already on the second level, the LESH clearly outperforms the ESH significantly, and the bound on $z^\prime_2(P_q)$ is better than the bound on $z_2(P_q)$ by at least one integer for all considered Paley graphs. This is not a surprise, as we know from Observation~\ref{level_2} that $z_2(P_q) = \vartheta(P_q)$ for all $P_q$, so the bounds from the second level of the ESH do not improve on $\vartheta(P_q)$ as bound on $\alpha(P_q)$. \begin{figure}[hbt!] 
\centering \caption{The upper bounds on $z_8(P_q)$ and $z^\prime_8(P_q)$ for Paley graphs $P_q$ with $17 \leq q < 200$} \begin{tikzpicture}[scale=1.0] \begin{axis}[ xlabel={$q$}, xmin=10, xmax=200, ymin=0, ymax=16, xtick={25, 50, 75, 100, 125, 150, 175, 200}, ytick={0, 2, 4, 6, 8, 10, 12, 14, 16}, legend pos=north west, ymajorgrids=true, grid style=dashed, ] \addplot[ color=blue, mark=square, ] coordinates { (17,3) (29, 4) (37, 4) (41, 5) (53, 5) (61, 5) (73, 5) (89, 5) (97, 6) (101, 5) (109, 6) (113, 7) (125, 7) (137,7) (149, 7) (137,7) (149, 7) (157, 7) (173, 8) (181, 7) (193, 7) (197, 8) }; \addlegendentry{$\alpha(P_q)$} \addplot[ color=red, mark=triangle, ] coordinates { (17,3.2864) (29, 4.4966) (37, 5.4935) (41, 5.9925) (53, 6.1915) (61, 6.9963) (73, 8.1910) (89, 9.4340) (97, 9.0777) (101, 10.0499) (109, 10.0652) (113, 10.3732) (125, 10.6415) (137,11.2181) (149, 11.9771) (157, 12.4315) (173, 13.1529) (181, 13.4536) (193, 13.8924) (197, 14.0357) }; \addlegendentry{$z_8(P_q)$} \addplot[ color=cyan, mark=o, ] coordinates { (17,3) (29, 4) (37, 4) (41, 5) (53, 5) (61, 5.1922) (73, 5.6293) (89, 6.2320) (97, 7.0085) (101, 6.7373) (109, 7.2925) (113, 7.3836) (125, 7.4508) (137,7.9101) (149, 8.1820) (157, 8.7167) (173, 9.3308) (181, 9.3425) (193, 9.5777) (197, 9.7119) }; \addlegendentry{$z^\prime_8(P_q)$} \end{axis} \end{tikzpicture} \label{figure_2} \end{figure} When we consider higher levels of the LESH, the results are even better. Exemplary, we compare our computed upper bounds on $z_8(P_q)$ and $z^\prime_8(P_q)$ in Figure~\ref{figure_2}. For $q \in \{17, 29, 37, 41, 53\}$, the bounds on $z^\prime_8(P_q)$ coincide with $\alpha(P_q)$, whereas for all other considered $P_q$, the bounds on $z^\prime_8(P_q)$ surpass the bounds on $z_8(P_q)$ by at least one integer. Specifically, for the graph $P_q$ with $q = 61$, the bound on $z^\prime_8(P_q)$ is better than the bound on $z_8(P_q)$ by one integer. Furthermore, for $q = 97$, $q \in \{73, 89, 109, 113, 125, 149\}$ and $q \in \{101, 137, 157, 173, 181, 193\}$ improvements of bounds by two, three and four integers are recorded, respectively. Finally, the best improvement in bound by five integers is observed for $q = 197$. Thus, using the LESH for computing upper bounds on $\alpha(P_q)$ produces significantly better bounds than the ESH, highlighting the strong potential of the LESH for computing upper bounds on the stability number of Paley graphs. \section{Conclusions and open questions}\label{Conclusion} This work provides an in-depth analysis of ESC-based approaches for the computation of upper bounds on the stability numbers of Paley graphs. First, we examined the bounds on the stability number of Paley graphs by employing the ESH, which starts with the Lovász theta function $\vartheta$ on the first level and includes ESCs for all subgraphs of order~$k$ on level~$k$. In particular, we showed that for every Paley graph $P_q$, there exists a level~$\ell(q) = \left\lfloor\frac{\sqrt{q}+3}{2}\right\rfloor$ for which adding ESCs on subgraphs of orders $\{2, \ldots, \ell(q)\}$ fails to provide better bounds on the stability number $\alpha(P_q)$ than the Lovász theta function, i.e., there is no improvement of the bound from the ESH up to level~$\ell(q)$. Moreover, we showed that for certain Paley graphs $P_q$ there is also no improvement of the bound on the level~$\ell(q)+1$ of the ESH. To overcome this stagnation of the bounds, we introduced the LESH for the stable set problem for vertex-transitive graphs, and thus also for Paley graphs. 
We proved that the bounds obtained by the LESH are at least as good as the ones obtained from the ESH. A computational study of bounds based on the LESH for the stable set problem for Paley graphs showed superior results compared to bounds based on the ESH, often improving the bounds by several integers. Several questions remain open. First, we have shown that there is no improvement of the bound from the ESH up to level~$\ell(q)$ for all Paley graphs in Corollary~\ref{cor_level}, and that there is also no improvement on level~$\ell(q)+1$ if $\alpha(P_q) < \ell(q)$ and $q\geq 25$ in Theorem~\ref{thm_alpha_levelPlus1}. So it is unclear if, indeed, there is an improvement of the bound on the ESH on level~$\ell(q)+1$ for all values of $q$ for which $\alpha(P_q) \geq \ell(q)$ holds, as our computations suggest for $q \leq 200$. In the case that $\alpha(P_q) < \ell(q)$ holds, it remains open to prove that there is no improvement also on level~$\ell(q)+2$ for $q \in \{89,101,181,193\}$, and even on level~$\ell(q)+3$ for $q = 101$, which is indicated by our computations. Furthermore, it would be interesting to know whether there is no improvement up to the level~$\ell(q) + 2$ for all values of $q$ with $\alpha(P_q) < \ell(q)$. From the results for $P_q$ with $q < 200$ presented in Table~\ref{Table_2}, we observe that for all considered graphs $\ell(q) \in \{\alpha(P_q) - 1, \alpha(P_q), \alpha(P_q) + 1\}$. It would be interesting to know if this is the case for all values of $q$, and, if not, to determine a graph $P_q$ for which this is not the case. Of course, a positive answer to this question would imply a new bound on $\alpha(P_q)$ and thus be a remarkable result. Another question regarding the computational implementation of the ESH and LESH for Paley graphs of prime order is whether the SDP in order to compute bounds based on the ESH and LESH can be reduced to a linear program, as this is possible for the Lovász theta function and Schrijver's refinement due to the circulant nature of such Paley graphs. Furthermore, it would be interesting to investigate bounds obtained by the LESH for other classes of vertex-transitive graphs, given that the newly introduced LESH is applicable to all vertex-transitive graphs. Finally, since we were able to construct an optimal solution for the SDP to compute the Lovász theta function for Paley graphs, and the Lovász theta function serves as a bound on both the stability number and the chromatic number, it would be interesting to explore the ESH for the graph coloring problem for Paley graphs. \section*{Funding} This research was funded in part by the Austrian Science Fund (FWF) [10.55776/DOC78] and by the Johannes Kepler University Linz, Linz Institute of Technology (LIT): LIT-2021-10-YOU-216. For the purposes of open access, the authors have applied a CC BY public copyright license to all author-accepted manuscript versions resulting from this submission. \afterpage{\clearpage} \bibliographystyle{plain} \bibliography{PaleyBib} \end{document}
2412.13011v1
http://arxiv.org/abs/2412.13011v1
All non-Gaussian states are advantageous for channel discrimination: Robustness of non-convex continuous variable quantum resources
\documentclass[aps,pra,groupedaddress,bibnotes,twocolumn,reprint]{revtex4-2} \usepackage{graphicx} \usepackage{dsfont} \usepackage{amssymb} \usepackage{amsthm} \usepackage{braket} \usepackage{tikz} \usepackage{amsmath} \usetikzlibrary{positioning,automata} \usepackage{ragged2e} \usepackage[caption=false,labelformat=simple,labelfont={normalsize}]{subfig} \renewcommand\thesubfigure{(\alph{subfigure})} \newtheoremstyle{bfnote} {}{} {\itshape}{} {\bfseries}{.} { }{\thmname{#1}\thmnumber{ #2}\thmnote{ (#3)}} \theoremstyle{bfnote} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{remark}{Remark} \usepackage{xcolor,hyperref} \usepackage{times,txfonts} \usepackage{blindtext} \usepackage{geometry} \geometry{ a4paper, total={170mm,257mm}, left=25mm, top=25mm, right=25mm, bottom=25mm, } \newcommand{\lscr}{\underline{\mathcal{R}_{\mathcal{F}}}(\rho)} \newcommand{\rob}{\mathcal{R}_{\mathcal{F}}(\rho)} \newcommand{\lscrm}{\underline{\mathcal{R}^{(m)}_{\mathcal{F}}}(\rho)} \newcommand{\robm}{\mathcal{R}^{(m)}_{\mathcal{F}}(\rho)} \newcommand{\ff}{\mathcal{F}} \newcommand{\xiseq}{\{\xi_n\}_n} \newcommand{\ddh}{\mathcal{D(H)}} \newcommand{\tth}{\mathcal{T(H)}} \newcommand{\bbh}{\mathcal{B(H)}} \newcommand{\glscr}{\underline{\mathcal{R_G}}} \newcommand{\grob}{\mathcal{R}_{\mathcal{G}}} \DeclareMathOperator{\tr}{Tr} \DeclareMathOperator{\cone}{cone} \newcommand{\kkh}{\mathcal{K(H)}} \DeclareMathOperator{\povm}{POVM} \newcommand{\swap}{U_{\textup{SWAP}}} \begin{document} \title{All non-Gaussian states are advantageous for channel discrimination: \\ Robustness of non-convex continuous variable quantum resources} \author{Leah Turner} \email{[email protected]} \affiliation{School of Mathematical Sciences and Centre for the Mathematical and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom} \author{M\u{a}d\u{a}lin Gu\c{t}\u{a}} \email{[email protected]} \affiliation{School of Mathematical Sciences and Centre for the Mathematical and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom} \author{Gerardo Adesso} \email{[email protected]} \affiliation{School of Mathematical Sciences and Centre for the Mathematical and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom} \begin{abstract} Quantum resource theories provide a mathematical framework for quantifying the advantage given by quantum phenomena in various tasks. The generalized robustness is one such quantifier, and enjoys an operational interpretation in the setting of channel discrimination. It is a well studied resource monotone in finite-dimensional or convex resource theories, however as of yet it has not been studied in the setting of infinite-dimensional resource theories with a non-convex set of free states. In this work, we define the generalized robustness for an arbitrary resource theory, without restricting to convex sets or to finite dimensions. We show it has two operational interpretations: firstly, it provides an upper bound on the maximal advantage in a multi-copy channel discrimination task. Secondly, in many relevant theories, it quantifies the worst-case advantage in single-copy channel discrimination when considering a decomposition of the free states into convex subsets. 
Finally, we apply our results to the resource theory of non-Gaussianity, thus showing that all non-Gaussian states can provide an advantage in some channel discrimination task, even those that are simply mixtures of Gaussian states. To illustrate our findings, we provide exact formulas for the robustness of non-Gaussianity of Fock states, along with an analysis of the robustness for a family of non-Gaussian states within the convex hull of Gaussian states. \end{abstract} \maketitle \section{Introduction}\label{sec:Intro} The integration of quantum mechanics into modern science and technology has led to the development of quantum technologies, which hold the potential to outperform their classical counterparts in numerous applications. Such technologies utilize uniquely quantum phenomena, such as entanglement \cite{entanglement}, coherence \cite{coherence}, and non-Gaussianity \cite{nonGstates}, to achieve capabilities unattainable by conventional means. The systematic characterization of these and other quantum features has given rise to the field of {\it quantum resource theories} \cite{QRTs}, in which quantum phenomena are rigorously analyzed to explore what advantages they may provide. Quantum resource theories categorize states as either free or resourceful, with the expectation that resource states are advantageous over those that are free. The study then aims to quantify how useful a particular state would be in some relevant task. Beyond specific examples, resource theories can also be studied in a general sense, allowing for the understanding of properties common in a wide range of resource scenarios and unveiling links between different quantum properties. One such link has been successfully established by focusing on a resource measure known as {\it generalized robustness} \cite{original_robustness}, which turns out to quantify how useful a resource state is in a class of hypothesis testing tasks commonly referred to as {\it channel discrimination}. This result was first shown in finite dimensions for general resource theories with convex sets of free states \cite{robustnessofcoherence,finiteconvexrobustness}, and later extended to generalized probabilistic theories \cite{GPTRobustness}, infinite-dimensional convex theories \cite{infiniteconvex,longinfiniteconvex}, and resource theories with non-convex free sets in finite dimensions \cite{finitenonconvex,longfinitenonconvex}. Similar general results have been shown for channel exclusion tasks by quantifying advantages in terms of resource weight measures \cite{WoR} and also for a family of quantum betting tasks which interpolates between channel discrimination and exclusion \cite{Betting}. All these results are nevertheless not universal, as they leave out the nontrivial case of quantum resources defined by a non-convex set of free states in infinite dimensions, which play a prominent role in continuous variable quantum information processing. As an alternative to the discrete variable approach, {\it continuous variable quantum information} \cite{QI_with_CV,alessiobook} is growing as a promising area, largely due to its ease of implementation with current quantum optical setups. Although quantum technologies in this regime are rapidly advancing, less is understood about the specific resources needed to underpin the quantum advantages such technologies aim to achieve. 
This is largely due to the fact that mathematical techniques developed for the discrete variable setting do not necessarily apply to continuous variables, as the infinite-dimensional Hilbert spaces involved are substantially more complex than their finite-dimensional counterparts. Several results in this area make restricting assumptions on the spaces involved, such as a bounded mean energy which effectively makes the Hilbert space finite \cite{continuity_too_strong}, or work solely within the simplified Gaussian regime. Gaussian states admit an elegant mathematical formalism \cite{CVQI_and_beyond} and arise naturally in physical systems as the ground and thermal equilibrium states of harmonic Hamiltonians \cite{alessiobook}. They can be created and manipulated with relative ease using linear optics, which makes Gaussian quantum information a popular area to explore theoretically and experimentally \cite{GaussianQI}. However, simplicity comes at the price of significant limitations. While basic protocols in domains such as quantum communication and metrology can be implemented using Gaussian states and Gaussian operations alone, their performance may be improved and optimized only by resorting to non-Gaussian elements \cite{estimation,estimationng,photonsubtraction,teleportationng,gaussianqkd,gaussianqkdlimited,variationalcentrone}. Additionally, a series of no-go results involving e.g.~resource distillation, universal quantum computation and error correction, mean that leaving the Gaussian world is necessary in order to even achieve these and other quantum information tasks \cite{noGdistillation,noGdistillation2,noGdistillation3,gaussianQRTs,noGEC,wignernegativity}. We see therefore that {\it non-Gaussianity} emerges quite naturally as an essential resource for continuous variable quantum information processing \cite{nonGstates,quantifyingnG,nonGfilip}. This raises the central question: Is any non-Gaussian state useful for something? Does any form and amount of non-Gaussianity translate into a concrete advantage in some quantum task? In previous literature, a distinction has often been made between non-Gaussian states which cannot be obtained as convex mixtures of Gaussian states --- referred to as possessing `quantum' or `genuine' non-Gaussianity --- whose usefulness has been more thoroughly investigated \cite{convexG1,convexG2,infiniteconvex,longinfiniteconvex,FilipMista,Tommoguitar,stellartois,stellaresource}, and states within the convex hull of the (non-convex) set of Gaussian states, whose resourcefulness has not been generally appreciated, as they admit an efficient classical simulation \cite{wignernegativity} while still being non-Gaussian. Quantitative approaches to witness or measure non-Gaussianity have been proposed e.g.~in terms of how distinguishable a given non-Gaussian state is from a reference Gaussian state (for general non-Gaussianity) \cite{quantifyingnG,gaussianHS,gaussianRE,GREmimimized,husimi,negentropy} or from the convex hull of Gaussian states (for the `genuine' variation) \cite{infiniteconvex,longinfiniteconvex,FilipMista,Tommoguitar,hahn24}. However, as anticipated, the operational framework for the generalized robustness so far does not include the case of infinite-dimensional, non-convex resource theories, and hence as of yet cannot be used to study non-Gaussianity and the advantages it may give in its most general incarnation. 
In this paper, we show that every resource state in any resource theory, without assuming convexity or finite dimensions, can provide an advantage in some channel discrimination task, even if it lies within the convex hull of free states [Section~\ref{sec:TwoCopies}]. We extend the generalized robustness to the case of infinite-dimensional, non-convex resource theories, prove its faithfulness and monotonicity under general free operations, and give it two operational interpretations: the worst case advantage with respect to a convex set decomposition of the free states in single-copy channel discrimination, and an upper bound on the advantage given by a resource state in multi-copy channel discrimination [Section~\ref{sec:generalroberto}]. This approach also provides observable bounds to estimate the robustness of any resource in experiments by measuring suitable witnesses on one or two copies of the system of interest. The framework we establish in this paper can finally be applied to study non-Gaussianity as a resource in full generality [Section~\ref{sec:NG}]. Specializing to this case, we show that the generalized robustness of non-Gaussianity is lower semi-continuous and can be calculated exactly in some relevant cases, including Fock states for any number of photons. We further show that simple mixtures of Gaussian states, which have neither `genuine' non-Gaussianity nor optical nonclassicality, can nonetheless outperform all Gaussian states at some channel discrimination task, and we provide lower bounds on the advantage characterized by the robustness of non-Gaussianity. Our results prove in particular that {\it all} non-Gaussian states, within and outside of the convex hull of Gaussian states, yield provably useful resources for quantum communication and discrimination technologies, thus providing an affirmative answer to the main question raised in this work. \section{Preliminaries}\label{sec:backup} Here we begin by introducing the notation and background material necessary for this paper. Throughout this paper, $\mathcal{H}$ refers to a separable, not necessarily finite-dimensional, Hilbert space; $\bbh$ is the space of bounded operators on $\mathcal{H}$, $\tth\subseteq\bbh$ is the space of trace class operators, $\kkh \subseteq \bbh$ is the space of compact operators, and $\ddh\subseteq\tth$ is the space of density operators, or states, on $\mathcal{H}$. The trace norm and the operator norm are denoted by $\|\cdot\|_1$ and $\|\cdot\|_{\infty}$, respectively. We further use $\cone(X)$ to denote the cone generated by the set $X\subseteq \tth$, that is $\cone(X):=\{\mu x \colon x\in X, \mu\in\mathbb{R}_{\geq 0}\}$. \subsection{Topologies in infinite dimensions} When working with an infinite-dimensional state space, we encounter a rich topological structure, including different topologies that no longer coincide (see e.g. \cite{Megginson}). We provide here a brief overview of the topologies we use in this paper. The trace norm $\|\cdot\|_1$ induces the {\em trace norm topology} on $\tth$, in which a sequence of trace class operators $\{T_n\}_n$ is said to converge to $T$ if $\lim_{n\to\infty}\|T-T_n\|_1=0$, and we write this as $T_n\xrightarrow{\text{tn}} T$. Due to the operational significance of the trace norm in quantifying state distinguishability \cite{Holevo1973,Helstrom1976}, the trace norm topology is an important topology when working with quantum states, and we will use this topology by default unless otherwise specified. 
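Numerically, in finite dimensions, the trace norm is simply the sum of the singular values (the nuclear norm); a minimal toy illustration in Python (the matrices are arbitrary example states):
\begin{verbatim}
import numpy as np

# ||T||_1 = sum of singular values; toy qubit example
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])
sigma = np.eye(2) / 2
print(np.linalg.norm(rho - sigma, ord='nuc'))
\end{verbatim}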
A different norm topology that we will also use in this paper is the {\em Hilbert-Schmidt topology}, induced by the Hilbert-Schmidt norm $\|X\|_2:=\tr[XX^{\dagger}]^{\frac{1}{2}}$. A sequence $\{T_n\}_n$ converges in the Hilbert-Schmidt topology to $T$ if $\lim_{n\to\infty}\|T-T_n\|_2=0$, which we write as $T_n\xrightarrow{\text{H-S}} T$. Another topology that will prove useful is the {\em weak* topology} on $\tth$. This is the topology induced on $\tth$ by its predual, the space of compact operators $\kkh$. A sequence of trace class operators $\{T_n\}_n$ is said to converge in the weak* topology to $T\in\tth$ if $\lim_{n\to\infty}\tr[T_nK]=\tr[TK]$ for all $K\in \kkh$, and we write this as $T_n\xrightarrow{\text{w*}} T$. The usefulness of this topology in this paper is due to the Banach-Alaoglu theorem \cite[Theorem~2.6.18]{Megginson}, which says the trace norm unit ball in $\tth$ is compact in the weak* topology, and hence this topology lends itself well to arguments involving sequences. Since we are working in infinite dimensions, all these topologies are different. It is, however, worth noting that when working specifically with states, convergence of a sequence of states to another state is equivalent in all of these topologies, i.e. for a sequence of states $T_n\in\ddh$ and another state $T\in\ddh$, we have (see e.g.~\cite{SWOT} and references therein) \begin{equation} T_n\xrightarrow{\text{tn}} T \iff T_n\xrightarrow{\text{H-S}} T \iff T_n\xrightarrow{\text{w*}} T. \end{equation} \subsection{Quantum resource theories} \begin{figure}[t] \centering \subfloat[]{\includegraphics[height=2.5cm]{pringlesman.pdf}\label{fig:pringlesman}}\hfill \subfloat[]{\includegraphics[height=2.5cm]{binarychanneldiscrimination5.pdf}\label{fig:pringleschannel}} \caption{\label{fig:pringles} (a) Illustration of a general resource theory. The oval represents the space $\ddh$ of states on a finite- or infinite-dimensional Hilbert space $\mathcal{H}$. The bow tie represents the closed, but possibly non-convex set $\ff\subseteq\ddh$ of free states. (b) Schematic of a $m$-copy binary channel discrimination task. The goal is to maximize the probability $p_{succ}$ of correctly guessing which channel was applied to the factorized input state $\rho^{\otimes m}$, out of an ensemble of possible channels $\{\Psi_i\}_i$ with prior probabilities $\{p_i\}_i$, by implementing an optimal measurement $\{M_i\}_i$ at the output. In this paper we show that, in any resource theory, every $\rho \notin \ff$ can strictly outperform all free states $\sigma \in \ff$ at some $m$-copy channel discrimination task (with $m=1$ if $\ff$ is convex and $m \geq 2$ otherwise).} \end{figure} Quantum resource theories are motivated by the fact that, in a realistic setting, the accessible states and operations will be restricted \cite{QRTs}. In this framework, the set of all possible quantum states is partitioned into the set of free states $\ff\subseteq\ddh$ (those that are readily available in the considered context) and the set of resource states $\ddh \backslash \ff$. The idea is that a non-free state $\rho\notin\ff$ can then potentially be used as a \emph{resource} to enable or improve the performance of useful tasks. The set of operations that can be implemented under the given restrictions are known as free operations, and have the requirement that $\ff$ is closed under their action; in other words, free operations cannot create resource states out of free states, and do not require any resource expenditure to be implemented. 
Resource theories can be considered in specific cases (entanglement \cite{entanglement} and coherence \cite{coherence} being prime examples), or in a general setting, allowing for the study of results that apply to broad classes of resources beyond their physical specifications. Here, we work mostly in this general case. We make the standard assumption that $\ff$ is closed (in the trace norm topology) -- if a state can be approximated by a sequence of free states then it must also be free. To allow for full generality in the applicability of our results, this is the only assumption we make [Fig.~\ref{fig:pringlesman}]. Note that although convexity of $\ff$ is often also assumed in a general resource theory \cite{BrandaoGour}, we do not make this assumption here. This is because there exist several interesting and well-motivated sets of free states which are naturally non-convex, a notable example here being the set of Gaussian states of continuous variable systems \cite{CVQI_and_beyond}. Additionally, there are situations in which randomness can be costly and may be accounted for as a resource \cite{randomnessresource}, and therefore it is not necessarily always productive to consider mixtures of free states as free \cite{finitenonconvex,longfinitenonconvex,robustnesscontinuity,starresourcetheories}. It is a natural assumption that states with a higher resource content should be more useful. Given $\rho\in\ddh\backslash\ff$, the task is then to quantify the amount of resource in $\rho$. This is done by introducing a resource monotone $\mathcal{M}:\rho \mapsto [0,\infty)$. Whilst there is no unique way of defining such an $\mathcal{M}$, two standard conditions may be imposed in order to ensure the monotone is meaningful: faithfulness, $\mathcal{M}(\rho) = 0 \iff \rho \in \ff$, and monotonicity under free operations, $\mathcal{M}(\Phi(\rho))\leq \mathcal{M}(\rho)$ for any free operation $\Phi$. It is also instructive to consider monotones with some {\it operational} meaning, i.e. relate the resource content of a state with respect to that monotone to the relative advantage it gives in some specific task of interest. When working with monotones in the continuous variable case, continuity issues often arise. Intuitively, continuity of a monotone is desirable since if two states can be related by only a slight perturbation, their resource content should be similar. In the finite-dimensional case, this is often trivially given by the compactness of $\ff$, whereas when defined the same way in infinite dimensions, many quantities are discontinuous everywhere \cite{continuity_too_strong}. It is therefore necessary to impose a weaker version of continuity: lower semi-continuity. A monotone $\mathcal{M}$ is lower semi-continuous if it satisfies the property $\liminf_{n\to\infty}\mathcal{M}(\xi_n)\geq M(\rho)$ for any sequence of states $\xiseq \rightarrow\rho$. This direction of semi-continuity imposes that approximating a state by a sequence of states does not give a lower value of resource than in the actual state, i.e. taking the limit will not increase the resource. \subsection{Channel discrimination tasks} In a channel discrimination task, we are provided with a channel ensemble $\{p_i,\Psi_i\}_i$, from which the channel $\Psi_i$ is picked with probability $p_i$ and applied to an input state $\eta$. 
The task is to correctly guess which channel was picked by measuring the output, and the probability of success can be maximized by optimizing the choice of $\povm$ (positive operator valued measure) implemented at the output. The success probability of correctly guessing from the ensemble $\{p_i,\Psi_i\}_i$ using input state $\eta$ and using POVM $\{M_i\}_i$ is denoted \cite{robustnessofcoherence,finiteconvexrobustness} \begin{equation} p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\eta):=\sum_i p_i \tr[\Psi_i(\eta) M_i]. \end{equation} These tasks are very relevant in quantum information, since any scenario in which we want to discover which of a set of processes are occurring can be reformulated as a channel discrimination task. For example, promising uses of binary discrimination tasks include quantum illumination \cite{IlluminationIntro,GaussianIllumination,ExperimantalIllumination} or quantum reading \cite{ReadingIntro,ReadingAndIllumination,ExperimentalReading}. We can further consider the more general case of multi-copy channel discrimination tasks [Fig.~\ref{fig:pringleschannel}], in which the channels $\{\Psi_i\}_i$ act on, say, $m$ copies of an input state $\eta$ \cite{finitenonconvex,longfinitenonconvex}. The probability of success is accordingly denoted as $p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\eta^{\otimes m})$. In general, we say that a state $\rho$ provides an {\it advantage} over another state $\tau$ in a given $m$-copy channel discrimination task $\{p_i,\Psi_i\}_i$, if \begin{equation} \begin{aligned} &\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m}) \\ &> \sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\tau^{\otimes m}). \end{aligned} \end{equation} A nontrivial question is thus whether {\it any} quantum resource will give an advantage over free states in some instance of channel discrimination. In the single-copy scenario ($m=1$), the answer is affirmative provided the set $\ff$ of free states is convex \cite{finiteconvexrobustness}. Extending this result to arbitrary resource theories in the multi-copy scenario is the focus of Section~\ref{sec:TwoCopies}. \subsection{Continuous variables}\label{sec:CV} A continuous variable bosonic system is described as a system of $m$ modes, each corresponding to a quantum harmonic oscillator with a different frequency \cite{alessiobook}. In the phase space representation, each mode $i$ has an associated pair of quadrature operators $\hat{x}_i, \hat{y}_i$. The overall system is then described by the collection of these: $\mathbf{\hat{r}}=(\hat{x}_1,\hat{y}_1,...,\hat{x}_m,\hat{y}_m)$. These satisfy the commutation relations $[\hat{r}_j,\hat{r}_k]=i\Omega_{jk}$, where $\boldsymbol{\Omega}$ is the $m$-mode symplectic form, given by $$\boldsymbol{\Omega} = \bigoplus_{l=1}^m \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}.$$ The eigenvalue associated with eigenvector $\ket{r_i}$ of the quadrature operator $\hat{r}_i$ is denoted $r_i$. Information is then encoded in the continuous spectra of these operators. Continuous variable states can be represented in phase space by their (symmetrically ordered) characteristic function $\chi^{\rho}(\mathbf{r})$, defined as \begin{equation} \chi^{\rho}(\mathbf{r}) = \tr[{\rm e}^{-i\mathbf{r}^\top \boldsymbol{\Omega} \mathbf{\hat{r}}}\rho]. \end{equation} Alternatively, states can be described by quasi-probability distributions. 
One such distribution is the Wigner function, given by the Fourier transform of $\chi^{\rho}(\mathbf{r})$, and defined for $m$ modes as \begin{equation} \mathcal{W}^\rho(\mathbf{r}) = \frac{1}{\pi^{2m}} \int_{\mathbb{R}^{2m}} \chi^\rho(\mathbf{s}) {\rm e}^{i\mathbf{s}^\top\boldsymbol{\Omega} \mathbf{r}} d^m \mathbf{s}. \end{equation} Within the continuous variable regime, the set of Gaussian states plays a special role \cite{CVQI_and_beyond}. Gaussian states are defined to be all states with a Gaussian Wigner function \begin{equation}\label{gaussianwigner} \mathcal{W}^\rho(\mathbf{r}) = \frac{1}{\pi^m\sqrt{\det(\boldsymbol{V})}}{\rm e}^{{-(\mathbf{r}-\boldsymbol{\mu})^\top \boldsymbol{V}^{-1}(\mathbf{r}-\boldsymbol{\mu})}} \end{equation} where $\rho$ is uniquely specified by two quantities: the vector of first moments, $\boldsymbol{\mu}:=\tr[\rho \mathbf{\hat{r}}]$, and the covariance matrix, $(\boldsymbol{V})_{jk}:=\tr[\rho\{\hat{r}_j-\mu_j,\hat{r}_k-\mu_k\}]$, satisfying the {\it bona fide} condition $\boldsymbol{V}+i\boldsymbol{\Omega}\geq \boldsymbol{0}$ \cite{alessiobook}. \section{Operational advantage of general resources in two-copy channel discrimination}\label{sec:TwoCopies} In this Section, we show that every resource state in any resource theory provides an advantage in some two-copy channel discrimination task, see Fig.~\ref{fig:pringles}. The proof strategy involves directly constructing such a task, using the properties of {\it resource witness} operators. When $\ff$ is convex, the hyperplane separation theorem guarantees the existence of a linear (i.e.~acting on a single copy of a state) witness operator \cite{witnessexistence}. When the set $\ff$ is non-convex, such a linear operator is no longer guaranteed to exist. We therefore turn to multi-copy witness operators. These were first introduced in entanglement theory \cite{swaptrick}, and amount to a witness operator $W\in\mathcal{B(H}^{\otimes m})$ that acts on multiple copies of a state $\rho$ in order to separate it from $\ff$, i.e. \begin{equation}\label{witnessdef} \begin{array}{l} \tr[W\rho^{\otimes m}]<0\,, \\ \tr[W\sigma^{\otimes m}]\geq 0 \ \ \forall \sigma\in\ff. \end{array} \end{equation} Using witnesses of this form was the approach taken in Ref. \cite{finitenonconvex} to show an advantage in channel discrimination in finite-dimensional theories even for non-convex sets of free states, however such a witness is yet to be used in a general infinite-dimensional theory. We begin by showing that a multi-copy witness always exists. \begin{proposition} \label{WitnessExistence} For any set of free states $\ff\subseteq\ddh$ that is closed in the trace norm topology and some resource state $\rho\in\ddh \backslash \ff$, there exists a two-copy witness operator $W$ such that $\tr[W\rho^{\otimes 2}]<0$ and $\tr[W\sigma^{\otimes 2}]\geq 0$ for all $\sigma \in \ff$. \end{proposition} \begin{proof} Consider the two-copy witness operator, introduced in the finite-dimensional case in Ref. \cite{finitenonconvex}, given by \begin{equation} \label{lockness} W(\rho, \varepsilon) = \swap+(\tr[\rho^2]-\varepsilon)\mathds{1}\otimes \mathds{1}-2\rho \otimes \mathds{1} \end{equation} where the $\swap$ operator is defined via $\swap \ket{\phi}\otimes \ket{\psi} = \ket{\psi}\otimes \ket{\phi}$ and has the property $\tr[\swap\ \eta^{\otimes 2}] = \tr[\eta^2]$ \cite{swaptrick}. 
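A quick numerical sanity check of this swap-trick identity in a toy finite-dimensional setting (the random state is an arbitrary example; the construction itself is dimension-independent):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

d = 3
eta = random_state(d)

# SWAP |i>|j> = |j>|i> in the product basis, with |i>|j> at index i*d + j
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[j * d + i, i * d + j] = 1.0

lhs = np.trace(swap @ np.kron(eta, eta))
rhs = np.trace(eta @ eta)
print(np.isclose(lhs, rhs))   # True: Tr[SWAP eta x eta] = Tr[eta^2]
\end{verbatim}
With this identity in hand, the evaluation of $W(\rho, \varepsilon)$ on product inputs is immediate.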
When evaluated on two copies of a state $\eta$, this gives \begin{equation}\tr[W\eta^{\otimes 2}] = \tr[(\rho-\eta)^2]-\varepsilon \end{equation} and is therefore a witness provided $\varepsilon$ is picked such that $0<\varepsilon \leq \tr[(\rho -\sigma)^2]$ for all $\sigma \in \ff$. We now need to argue that such an $\varepsilon$ can always be chosen. To this end, assume towards a contradiction that no such $\varepsilon$ exists, that is, $\tr[(\rho -\sigma)^2]$ can be made arbitrarily small. This means there exists some sequence $\{\sigma_n\}_n$ in $\ff$ such that $\sigma_n \xrightarrow{\text{H-S}} \rho$, which, by the equivalence of these topologies on states, also implies $\sigma_n \xrightarrow{\text{tn}} \rho$. This contradicts the assumption that $\ff$ is closed in the trace norm topology, and therefore such an $\varepsilon$ must always exist. \end{proof} This shows that, although there may not exist a linear witness operator in some resource theory, there will always exist a multi-copy witness operator. Interestingly, it also shows that two copies of the states involved suffice to strictly separate $\rho$ from $\ff$, regardless of the system's dimension, including in infinite-dimensional theories. We use this result to show that, for any resource state $\rho$, there always exists a multi-copy channel discrimination task for which $\rho$ performs strictly better than any free state. \begin{theorem} \label{DiscriminationAdvantage} $\rho \in \ddh \backslash \ff$ if and only if there exist two channels $\Psi_0, \Psi_1$ acting on $\mathcal{D(H}^{\otimes 2})$ such that \begin{equation*}\frac{\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\}_i,\rho^{\otimes 2})}{\sup_{\sigma\in\ff}\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\}_i,\sigma^{\otimes 2})}>1.\end{equation*} \end{theorem} \begin{proof} The ``if" direction follows directly from the assumption that $\ff$ is closed. For the ``only if" direction, we directly construct a channel discrimination task for which $\rho\in \ddh \backslash \ff$ gives an advantage. Given $\rho$, take any bounded, two-copy witness operator (the existence of which is shown in Proposition \ref{WitnessExistence}) and define $X:=\mathds{1}^{\otimes 2} - \frac{W}{\|W\|_{\infty}}$. Note that $X$ has the properties $\tr[X\rho^{\otimes 2}]>1$ and $0\leq \tr[X\sigma^{\otimes 2}]\leq 1$ for all $\sigma\in\ff$. Now consider the two quantum channels given by \begin{equation*} \begin{aligned} &\Psi_0(\eta^{\otimes 2}):= \mbox{$\left(\frac{1}{2} + \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{0}\!\bra{0} +\left(\frac{1}{2} - \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right) \ket{1}\!\bra{1}$}\\ &\Psi_1(\eta^{\otimes 2}):= \mbox{$\left(\frac{1}{2} - \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{0}\!\bra{0} + \left(\frac{1}{2} + \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{1}\!\bra{1}$} \end{aligned} \end{equation*} where $\ket{0},\ket{1}$ are mutually orthogonal states. Consider the task of determining which of these two channels was applied when either channel is picked with equal probability.
By the Holevo-Helstrom theorem \cite{Holevo1973,Helstrom1976}, the maximum success probability of correctly guessing is \begin{equation} \begin{aligned} \sup_{\{M_i\}_i}\ &p_{succ}\left(\{\tfrac{1}{2},\Psi_i\}_i,\{M_i\}_i, \eta^{\otimes 2}\right) \\ &= \frac{1}{2}\left(1+\frac{1}{2} \|(\Psi_0 - \Psi_1)\eta^{\otimes 2}\|_1\right)\\ &=\frac{1}{2}\left(1+\frac{\tr[X\eta^{\otimes 2}]}{\|X\|_{\infty}}\right)\\ & \begin{cases} > \frac{1}{2}\left(1+\frac{1}{\|X\|_{\infty}}\right) & \text{for} \ \eta = \rho\\ \leq \frac{1}{2}\left(1+\frac{1}{\|X\|_{\infty}}\right) & \text{for} \ \eta = \sigma \in \ff \end{cases} \end{aligned} \end{equation} and therefore the ratio of success probabilities is \begin{equation*} \begin{aligned} & \frac{\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\}_i,\rho^{\otimes 2})}{\sup_{\sigma\in\ff}\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\}_i,\sigma^{\otimes 2})} \\ & \qquad >\frac{\frac{1}{2}\left(1+\frac{1}{\|X\|_{\infty}}\right)}{\frac{1}{2}\left(1+\frac{1}{\|X\|_{\infty}}\right)} = 1. \end{aligned}\vspace*{-0.4cm} \end{equation*} \end{proof} This shows an operational advantage of every resource state in any resource theory in the channel discrimination setting. Remarkably, we only need two copies of the states to unlock such an advantage, even in the most general scenario. Since the only assumption we make is that $\ff$ is closed, this result applies to every resource theory, generalizing and complementing the finite-dimensional, non-convex case \cite{finitenonconvex,longfinitenonconvex} and the infinite-dimensional, convex case \cite{infiniteconvex,longinfiniteconvex}. \section{Generalized robustness monotones in non-convex, continuous variable resource theories}\label{sec:generalroberto} We begin this Section by introducing the generalized robustness monotones considered in this paper and discussing their properties. \begin{definition}[Generalized robustness]\label{robDef} The generalized robustness of a state $\rho$ with respect to the set of free states $\ff$ is given by \begin{equation} \rob :=\inf_{\tau\in\ddh}\left\{\lambda\in \mathbb{R}_{\geq 0} \colon \frac{\rho + \lambda \tau}{1+\lambda} = \sigma \in \ff \right\}. \end{equation} \end{definition} The generalized robustness monotone was first introduced in entanglement theory \cite{original_robustness}, and later extended to arbitrary resource theories in finite dimensions \cite{finiteconvexrobustness} and convex resource theories within the framework of general probabilistic theories \cite{GPTRobustness}. The generalized robustness $\rob$ quantifies the amount of resource in a state $\rho$ by the amount of noise that can be added via mixing with another state before all the resource is lost. \begin{remark}The generalized robustness can equivalently be expressed as \begin{equation}\label{robDmax} \begin{aligned} \rob &= \inf_{\sigma \in \ff} \left\{\lambda\in \mathbb{R}_{\geq 0} : \rho \leq (1+\lambda) \sigma \right\} \\ & = \inf_{\sigma \in \ff} \left\{\exp\big[D_{\max}(\rho \| \sigma)\big] -1 \right\}\,, \end{aligned} \end{equation} where \begin{equation}\label{Dmax} D_{\max}(\rho \| \sigma) := \inf \{\gamma \in \mathbb{R} : \rho \leq {\rm e}^\gamma\ \sigma\}, \end{equation} is the {\it max-relative entropy} between $\rho$ and $\sigma$ \cite{min-maxRE,datta2009max}. 
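In a finite-dimensional theory with a convex free set, this formulation becomes a small semidefinite program. As an illustrative sketch only (a qubit coherence example with the free set taken to be the diagonal states and an arbitrary example state, outside the continuous variable setting of interest here):
\begin{verbatim}
import numpy as np
import cvxpy as cp

# R_F(rho) = min { lam >= 0 : rho <= (1 + lam) sigma, sigma in F },
# here with F = diagonal (incoherent) qubit states.  Substituting
# s = (1 + lam) * sigma gives the SDP below: minimise sum(s) - 1
# subject to diag(s) - rho >= 0 (psd) and s >= 0.
rho = np.array([[0.6, 0.3],
                [0.3, 0.4]])

s = cp.Variable(2, nonneg=True)      # unnormalized diagonal free operator
prob = cp.Problem(cp.Minimize(cp.sum(s) - 1),
                  [cp.diag(s) - rho >> 0])
prob.solve()
print(prob.value)                    # approximately 0.6 for this rho
\end{verbatim}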
\end{remark} As an operational interpretation, when used in discrete variable theories, $\rob$ quantifies the maximal potential advantage given by a resource state $\rho$ in single-copy channel discrimination tasks, provided $\ff$ is convex \cite{finiteconvexrobustness}. In the case where $\ff$ is non-convex, it has been shown to quantify the worst case maximal advantage in single-copy channel discrimination tasks, with respect to a decomposition of $\ff$ into convex subsets \cite{finitenonconvex}. When used in continuous variable theories, $\rob$ as in Definition \ref{robDef} is no longer guaranteed to be lower semi-continuous. We therefore resort to a modified definition that characterizes the generalized robustness in such a way that it is automatically lower semi-continuous: \begin{definition}[Lower semi-continuous generalized robustness]\label{lscrDef} The lower semi-continuous generalized robustness of a state $\rho$ with respect to $\ff$ is given by \cite{infiniteconvex,note1inf} \begin{equation}\label{lscr} \begin{aligned} \!\!\!\!\lscr &:= \liminf_{\xi\rightarrow\rho}\mathcal{R_F}(\xi)\\ &\:= \inf_{\{\tau_n\}_n, \xiseq}\left\{\lambda \in \mathbb{R}_{\geq 0} \colon \frac{\xi_n + \lambda \tau_n}{1+\lambda} = \sigma_n \in \ff, \right. \\ &\qquad \qquad\!\! \quad\left.\tau_n, \xi_n \in \ddh, \ \xiseq \xrightarrow{\textup{tn}} \rho \right\}. \end{aligned} \end{equation} \end{definition} Here we optimize over all possible sequences of states $\xiseq$ such that $\xi_n\xrightarrow{\text{tn}}\rho$. So far, this lower semi-continuous robustness $\lscr$ has only been studied for convex $\ff$ \cite{infiniteconvex,longfinitenonconvex}. Whilst it is no longer a convex function when this assumption is removed, we can show that $\lscr$ is still monotonic and faithful. \begin{proposition}\label{propoprops} The resource quantifier $\lscr$ defined with respect to a (not necessarily convex) set $\ff$ is both {\em (a)} monotonic under free operations and {\em (b)} faithful. \end{proposition} \begin{proof} We prove each claim individually. To establish (a), from the definition of $\lscr$ we have $\forall \varepsilon > 0$, there exist sequences $\{\sigma(\varepsilon)_{n}\}_n$ in $\ff$ and $\{\xi(\varepsilon)_n\}_n \rightarrow\rho$ such that $\xi_n \leq (1+\lscr + \varepsilon)\sigma_n$ for each $n$. For any completely positive, linear, trace non-increasing free operation $\Phi$, positivity implies $\Phi(\xi_n)\leq (1+\lscr + \varepsilon)\Phi(\sigma_n)$, and contractivity of the trace norm under $\Phi$ means that $\Phi(\xi_n)\rightarrow \Phi(\rho)$ also. Hence $\lscr$ is a suboptimal value for $\underline{\mathcal{R_F}}(\Phi(\rho))$, i.e., $\underline{\mathcal{R_F}}(\Phi(\rho))\leq \lscr$. To see that (b) holds, note that if $\rho \in \ff$ then clearly $\lscr = 0$. For the converse, $\lscr=0$ implies that there exist a sequence $\lambda^{(k)} \to 0$ and, for each $k$, sequences $\{\xi_n^{(k)}\}_n \xrightarrow{\textup{tn}}\rho$, $\{\sigma_n^{(k)}\}_n$ in $\ff$ and $\{\tau_n^{(k)}\}_n$ in $\ddh$ such that $\xi_n^{(k)}-\sigma_n^{(k)} = \lambda^{(k)}(\sigma_n^{(k)} - \tau_n^{(k)})$. For fixed $k$, we have $\|\xi_n^{(k)}-\sigma_n^{(k)}\|_1\leq 2\lambda^{(k)}$, and thus $\|\rho-\sigma_n^{(k)}\|_1 \leq \|\rho - \xi_n^{(k)}\|_1 + \|\xi_n^{(k)}-\sigma_n^{(k)}\|_1 \leq \|\rho - \xi_n^{(k)}\|_1 + 2\lambda^{(k)}$. Taking the limits $n\to \infty$ and $k\to \infty$, we see that $\sigma_n^{(k)}$ converges to $\rho$ in trace norm and hence, since $\ff$ is closed, $\rho\in\ff$. \end{proof} This means $\lscr$ remains a valid resource quantifier in any resource theory, even in the continuous variable case when $\ff$ is non-convex. A natural question is then to ask when Definitions \ref{robDef} and \ref{lscrDef} are equal.
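As a purely illustrative aside, the formulation of $\rob$ in terms of the max-relative entropy in Eq.~(\ref{robDmax}) can be evaluated directly when $\ff$ is a finite (and hence generally non-convex) collection of full-rank states, using $D_{\max}(\rho\|\sigma)=\log\lambda_{\max}\big(\sigma^{-1/2}\rho\,\sigma^{-1/2}\big)$. The minimal Python sketch below is not part of the formal development; the qubit resource state and the two-element free set are arbitrary choices made only for the example.
\begin{verbatim}
import numpy as np

def dmax(rho, sigma, tol=1e-12):
    # D_max(rho||sigma) = log lambda_max(sigma^{-1/2} rho sigma^{-1/2}),
    # with the convention +inf if supp(rho) is not inside supp(sigma).
    vals, vecs = np.linalg.eigh(sigma)
    if np.any(vals < tol):
        return np.inf
    isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return float(np.log(np.max(np.linalg.eigvalsh(isqrt @ rho @ isqrt))))

def robustness(rho, free_states):
    # R_F(rho) = min over the finite free set of exp[D_max(rho||sigma)] - 1
    return min(np.exp(dmax(rho, s)) for s in free_states) - 1.0

# Toy free set: two fixed full-rank qubit states (their mixtures
# are not declared free, so the set is non-convex).
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
free = [0.9 * ket0 @ ket0.T + 0.1 * ket1 @ ket1.T,
        0.1 * ket0 @ ket0.T + 0.9 * ket1 @ ket1.T]
plus = (ket0 + ket1) / np.sqrt(2.0)
rho = plus @ plus.T            # resource state |+><+|
print(robustness(rho, free))   # ~4.56 for this particular choice
\end{verbatim}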
The equality of Definitions \ref{robDef} and \ref{lscrDef} has already been shown in Ref. \cite[Proposition~7]{longinfiniteconvex} under the condition that $\ff$ is compact, and therefore the two definitions coincide in finite-dimensional resource theories. In the case of convex $\ff$, they have also been shown to be equal if $\cone(\ff)$ is weak* closed \cite[Theorem~12]{longinfiniteconvex}. Our next result provides a proof that this equivalence holds even in the non-convex case. \begin{proposition}\label{ClosedCone} If $\cone({\ff})$ is closed in the weak* topology, then $\lscr = \rob$. \end{proposition} \begin{proof} We have $\lscr \leq \rob$ by definition, so it remains to show that $\lscr \geq \rob$. Let $B_1 := \{X\in \mathcal{T(H)}: \|X\|_1 \leq 1\}$ be the trace norm unit ball and note that via the Banach-Alaoglu theorem \cite[Theorem~2.6.18]{Megginson}, $B_1$ is weak* compact. Consider the space $$\tilde{\mathcal{F}}=\cone({\mathcal{F}})\cap B_1.$$ Since $\cone(\ff)$ is weak* closed by assumption, $\tilde{\ff}$ is weak* (sequentially \footnote{Note that compactness and sequential compactness are not necessarily the same; however, they are equivalent on metrizable spaces. The weak* topology is not itself induced by a metric, and hence the Banach-Alaoglu theorem only guarantees compactness. However, since $\mathcal{H}$ is separable and $\tilde{\ff}\subseteq \tth$ is compact, we can conclude that $\tilde{\ff}$ is also sequentially compact; see the discussion in Ref. \cite[Remark~1]{attainability} for details.}) compact. By definition of $\rob$, $\forall \varepsilon > 0$ there exists some $\sigma_{\varepsilon} \in \ff$ with $\rho \leq (1+\rob +\varepsilon)\sigma_{\varepsilon}$. Now apply this to the elements $\xi_n$ of some sequence of states $\xiseq$ such that $\xi_n \xrightarrow{\textup{tn}} \rho$ and pick $\varepsilon = \frac{1}{n}$, which shows there exists a sequence $\{\sigma_n\}_n$ in $\ff$ such that $(1+\mathcal{R_F}(\xi_n)+\frac{1}{n})\sigma_n-\xi_n \geq 0$ for each $n$. We now wish to show that $\lim_{n\to\infty}\mathcal{R_F}(\xi_n)\geq \rob$. Note that, up to extracting a subsequence, the sequence given by $\{\mathcal{R_F}(\xi_n)\}_n$ can be assumed to converge, else this inequality is trivial. Let $\lim_{n\to\infty}\mathcal{R_F}(\xi_n)=:\tilde{R}$. Take now some compact operator $K\in\kkh$ such that $K\geq 0$. We then have $$\tr\left[K\left(\left(1+\mathcal{R_F}(\xi_n)+\tfrac{1}{n}\right)\sigma_n-\xi_n\right)\right]\geq 0.$$ Since $\tilde{\ff}$ is weak* sequentially compact, there exists a subsequence $\{\sigma_{n_k}\}_k$ that converges to $\mu\sigma \in \tilde{\ff}$ in the weak* topology, for some $\sigma\in \mathcal{F}$ and $0\leq \mu\leq 1$. This implies \begin{equation*} \begin{aligned} \lim_{k\to\infty}&\tr\left[K\left(\left(1+\mathcal{R_F}(\xi_{n_k})+\tfrac{1}{n_k}\right)\sigma_{n_k}-\xi_{n_k}\right)\right] \\ =& \tr\left[K\left(\left(1+\tilde{R}\right)\mu\sigma-\rho\right)\right]\geq 0. \end{aligned} \end{equation*} Pick now $K=\ket{\psi}\!\bra{\psi}$ for an arbitrary $\ket{\psi}\in\mathcal{H}$. Since the above inequality holds for every such choice, it implies $$\left(1+\tilde{R}\right)\mu\sigma-\rho\geq 0.$$ From this we conclude \begin{equation*} \begin{aligned} \rho \leq \left(1+\tilde{R}\right)\mu\sigma &\leq \left(1+\tilde{R}\right)\sigma \end{aligned} \end{equation*} It follows that, for any $\xiseq \xrightarrow{\textup{tn}}\rho$, we have \begin{equation*} 1+\rob \leq 1+\tilde{R}. \end{equation*} Optimizing additionally over all such sequences $\xiseq$ yields $\rob \leq \lscr$, as required.
\end{proof} Although the requirement in Proposition \ref{ClosedCone} may seem rather abstract, it has been shown to hold true for the free states in many cases of resource theories, most notably separable states \cite[Lemma~25]{longinfiniteconvex} and incoherent states \cite[Lemma~31]{longinfiniteconvex}. Crucially, it also holds for the set of Gaussian states of continuous variable systems \cite[Lemma~34]{attainability}, allowing for an easier evaluation of advantages in one of the most important resource theories in the non-convex, infinite-dimensional setting: non-Gaussianity. This will be our focus in Section~\ref{sec:NG}. \subsection{Interpretations for generalized robustness}\label{sec:interpret} In this Section, we establish operational interpretations for $\lscr$ when used in a general setting, without convexity or finite-dimensional restrictions. We find that it can be used to provide an upper bound on the advantage in multi-copy channel discrimination, and thus remains a valuable quantifier in channel discrimination tasks. We also investigate cases in which $\lscr$ exactly quantifies the worst case maximal advantage in a convex set decomposition of $\ff$, extending results found in Ref. \cite{finitenonconvex} to the continuous variable setting. \subsubsection{Upper bound in multi-copy channel discrimination} We begin by defining a multi-copy variant of the generalized robustness monotone. \begin{definition} The $m$-copy generalized robustness of a state $\rho$ with respect to a free set $\ff$ is defined as \begin{equation} \robm:=\inf_{\tau\in\mathcal{D(H}^{\otimes m})}\left\{\lambda \!\in\! \mathbb{R}_{\geq 0} \colon \frac{\rho^{\otimes m}+\lambda \tau}{1+\lambda}=\sigma^{\otimes m}, \ \sigma \!\in\! \ff \right\}. \end{equation} \end{definition} This quantifies the resource in a state $\rho$ by the amount of mixing with another state $\tau$ that $m$ copies of $\rho$ can tolerate before turning into $m$ copies of some free state $\sigma$. We further define a lower semi-continuous version of $\robm$. \begin{definition} The lower semi-continuous $m$-copy generalized robustness with respect to $\ff$ is defined as \begin{equation} \begin{aligned} &\lscrm:=\liminf_{\xi\rightarrow\rho}\mathcal{R}_{\ff}^{(m)}(\xi)\\ &\!\!=\!\!\inf_{\xiseq \to \rho}\inf_{\tau_n\in\mathcal{D(H}^{\otimes m})}\left\{\lambda \in \mathbb{R}_{\geq 0} \colon \frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}, \ \sigma_n \!\in\! \ff \right\}. \end{aligned} \end{equation} \end{definition} We can relate this multi-copy version of robustness to the single-copy version by using the link (\ref{robDmax}) between the generalized robustness and the max-relative entropy (\ref{Dmax}) \cite{min-maxRE,datta2009max}, along with the known additivity property of the latter under tensor products. \begin{proposition}\label{prop1m1m} For any integer $m\geq 1$, we have \begin{equation}\label{oneversusmany}1+\lscrm=\left(1+\lscr\right)^m.\end{equation} \end{proposition} \begin{proof} \begin{equation} \begin{aligned} \!\!\!\! 1+\lscrm&=\liminf_{\xi\to\rho}\ \big(1+\mathcal{R}_\mathcal{F}^{(m)}(\xi)\big)\\ &=\liminf_{\xi\to\rho} \inf_{\sigma\in\ff}\exp \big(D_{\max}(\xi^{\otimes m}\|\sigma^{\otimes m})\big)\\ &=\liminf_{\xi\to\rho} \inf_{\sigma\in\ff}\exp \big(m D_{\max}(\xi\|\sigma)\big)\\ &=\liminf_{\xi\to\rho}\ \big(1+\mathcal{R_F}(\xi)\big)^m\\ &=\big(1+\lscr\big)^m. \end{aligned} \end{equation} \end{proof} Note that the faithfulness and monotonicity under free operations of $\lscrm$ follow directly from this relation.
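The additivity of the max-relative entropy under tensor products, which is the key ingredient in the proof of Proposition~\ref{prop1m1m}, can be checked numerically on randomly generated low-dimensional states. The short Python sketch below is only such a consistency check under finite-dimensional, full-rank assumptions, and is not a component of the proof; it compares $D_{\max}(\xi^{\otimes 2}\|\sigma^{\otimes 2})$ with $2\,D_{\max}(\xi\|\sigma)$.
\begin{verbatim}
import numpy as np

def dmax(rho, sigma):
    # D_max(rho||sigma) = log lambda_max(sigma^{-1/2} rho sigma^{-1/2});
    # sigma is assumed full rank here.
    vals, vecs = np.linalg.eigh(sigma)
    isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return float(np.log(np.max(np.linalg.eigvalsh(isqrt @ rho @ isqrt))))

def random_state(d, rng):
    # Random full-rank density matrix of dimension d.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T + 1e-3 * np.eye(d)
    return m / np.trace(m).real

rng = np.random.default_rng(0)
xi, sigma = random_state(2, rng), random_state(2, rng)
lhs = dmax(np.kron(xi, xi), np.kron(sigma, sigma))   # m = 2 copies
rhs = 2.0 * dmax(xi, sigma)
print(lhs, rhs)   # the two values agree up to numerical precision
\end{verbatim}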
Starting by adapting part of the proof in Ref. \cite[Theorem~10]{longinfiniteconvex}, we can now show the multi-copy lower semi-continuous robustness $\lscrm$ upper bounds the maximal advantage given by $\rho$ in multi-copy channel discrimination tasks. The link between $\lscrm$ and $\lscr$ then allows us to show that $\lscr$ can directly be used to upper bound a regularization of such an advantage. \begin{theorem}\label{theoremupperbody} In a general resource theory, the lower semi-continuous generalized robustness of a state $\rho$ upper bounds the regularized advantage enabled by $\rho$ over any free state in $m$-copy channel discrimination, \begin{equation*} \begin{gathered} \left(\sup_{\{p_i,\Psi_i\}_i}\frac{\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m})}{\sup_{\{M_i\}_i}\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m})}\right)^{\frac1m} \\ \leq 1+\lscr. \end{gathered} \end{equation*} \end{theorem} \begin{proof} Consider a sequence of the form $\{\xi_n^{\otimes m}\}_n = \{(1+\lambda)\sigma_n^{\otimes m} - \lambda \tau_n\}_n$ such that $\xiseq \to \rho$. For a channel ensemble $\{p_i,\Psi_i\}_i$ and POVM $\{M_i\}_i$, the success probability of correctly guessing with $m$ copies of a state is \begin{equation*} \begin{aligned} &p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m}) \\ &= \sum_ip_i\tr[M_i\Psi_i(\rho^{\otimes m})]\\ &=\lim_{n\to\infty}\sum_ip_i\tr[M_i\Psi_i(\xi_n^{\otimes m})] \\ &\leq \limsup_{n\to\infty}\sum_ip_i\tr[M_i\Psi_i((1+\lambda)\sigma_n^{\otimes m})]\\ &\leq (1+\lambda)\sup_{\tilde{\sigma}\in\ff}\sum_ip_i\tr[M_i\Psi_i(\tilde{\sigma}^{\otimes m})]\\ &=(1+\lambda)\sup_{\tilde{\sigma}\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\tilde{\sigma}^{\otimes m}). \end{aligned} \end{equation*} Since this holds for any $\lambda$ such that $\frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}$ for some $\sigma_n \in \ff$, we have \begin{equation*} \begin{aligned} &p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m})\\ &\leq \inf\left\{(1+\lambda)\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m}) \colon \right.\\ &\qquad \quad \left.\frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}, \ \sigma_n \in \ff\right\}\\ &=(1+\lscrm)\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m}) . \end{aligned} \end{equation*} It follows that $ \dfrac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m})}{\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m})}\leq (1+\lscrm) = (1+\lscr)^m $ and, since this holds for an arbitrary $\povm$ and channel ensemble $\{p_i,\Psi_i\}_i$, we have \begin{equation*} \begin{aligned} &\sup_{\{M_i\}_i,\{p_i,\Psi_i\}_i}\frac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m})}{\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m})} \\ &\leq(1+\lscr)^m. \end{aligned} \end{equation*} The above considers the case in which the same measurement strategy is used throughout for $\rho$ and free states. We can also then upper bound the more realistic scenario in which the measurement strategy can be optimized individually for $\rho$ and $\sigma\in\ff$ since this will only reduce the overall ratio of success probabilities: \begin{equation} \begin{aligned} & \sup_{\{p_i,\Psi_i\}_i}\frac{\sup_{\{M_i\}_i}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho^{\otimes m})}{\sup_{\{M_i\}_i}\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma^{\otimes m})}\\ &\leq (1+\lscr)^m. 
\end{aligned} \end{equation} \end{proof} In convex resource theories, the above bound is tight already for a single copy, $m=1$ \cite{robustnessofcoherence,finiteconvexrobustness,infiniteconvex}. For non-convex resource theories, the above bound may not be tight, and, as argued in Section~\ref{sec:TwoCopies}, we require at least $m=2$ to show a genuine advantage of resource states. This shows that, even in the case of non-convex resource theories in any dimension, $\lscr$ still retains a valuable interpretation in characterizing the advantage in channel discrimination tasks, by providing an upper bound on the maximum advantage that can be achieved. To the best of our knowledge, this interesting result was not presented in previous literature on non-convex resource theories \cite{finitenonconvex,longfinitenonconvex,robustnesscontinuity,starresourcetheories}. \subsubsection{Worst case advantage} Here we consider an alternative operational interpretation for the generalized robustness, which originates from partitioning the non-convex free set $\ff$ into a collection of convex subsets, and analyzing advantages in each of the corresponding convex subtheories. Let $\ff=\bigcup_k \ff_k$ be a decomposition of $\ff$ into a union of convex sets $\ff_k$. In the finite-dimensional case, $\rob$ has been shown to quantify the worst case maximal advantage given by $\rho$ when compared to $\sigma\in\ff_k$ in single-copy channel discrimination tasks \cite{finitenonconvex,longfinitenonconvex}. The aim of this Section is to investigate conditions on $\ff$ for which this result also holds for $\lscr$ in the infinite-dimensional case. Whilst it is not necessarily true in full generality, we show that it does hold in most cases of interest. We begin by looking at conditions under which the non-convex robustness and the worst case convex robustness are equal. \begin{proposition}\label{lscr=inflscr} Let $\ff=\bigcup_k \ff_k$, where the index $k$ ranges over an arbitrary (not necessarily countable) index set and each $\ff_k$ is closed and convex. Let $\rho\in\ddh\backslash\ff$. Then \begin{equation} \lscr = \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho) \end{equation} provided one of the following conditions is true: \begin{enumerate} \item $\lscr = \rob$ \item The decomposition $\ff = \bigcup_k\ff_k$ consists of a finite number of convex sets \end{enumerate} \end{proposition} \begin{proof} Firstly, note that the following inequality always holds: \begin{equation} \lscr \leq \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho). \end{equation} This is because, since $\ff_k \subseteq \ff$, we have $\lscr \leq \underline{\mathcal{R}_{\mathcal{F}_k}}(\rho)$ for all $k$. Taking the infimum over $k$ then yields $\lscr \leq \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho)$ as required. For the reverse inequality, we prove each case individually. {\it Case 1.}~Let $\lscr=\rob$ and assume, towards a contradiction, that $\rob<\inf_k\mathcal{R}_{\ff_k}(\rho)$. This implies $\exists \ \delta>0$ such that $\rob+\delta<\inf_k\mathcal{R}_{\ff_k}(\rho)$. From the definition of $\rob$, we have that $\forall \varepsilon>0$ there exists some $\sigma_{\varepsilon}$ such that $\rho\leq (1+\rob+\varepsilon)\sigma_{\varepsilon}$, where each $\sigma_{\varepsilon}\in\ff_{k_\varepsilon}$ for some ($\varepsilon$-dependent) $k$.
Fix some $\varepsilon<\delta/2$; then, for this value of $\varepsilon$, we have \begin{equation*} \begin{aligned} \rho&\leq\left(1+\rob+\varepsilon\right)\sigma_{\varepsilon}\\&<\left(1+\rob+\delta/2\right)\sigma_{\varepsilon}\\&<\left(1+\inf_k\mathcal{R}_{\ff_k}(\rho)-\delta/2\right)\sigma_{\varepsilon} \end{aligned} \end{equation*} where $\sigma_{\varepsilon}$ is in some fixed $\ff_k$. This implies $\mathcal{R}_{\ff_k}(\rho)\leq \inf_k\mathcal{R}_{\ff_k}(\rho)-\delta/2<\inf_k\mathcal{R}_{\ff_k}(\rho)$, which is a contradiction. We therefore must have $\rob\geq\inf_k\mathcal{R}_{\ff_k}(\rho)$. Since by definition $\underline{\mathcal{R}_{\ff_k}}(\rho)\leq \mathcal{R}_{\ff_k}(\rho)$ for fixed $k$ (and hence $\inf_k\underline{\mathcal{R}_{\ff_k}}(\rho)\leq \inf_k\mathcal{R}_{\ff_k}(\rho)$), we also have $\rob\geq\inf_k\underline{\mathcal{R}_{\ff_k}}(\rho)$, as required. {\it Case 2.}~We have $\forall \varepsilon >0$, there exist sequences $\xiseq \xrightarrow{\textup{tn}}\rho$ and $\{\sigma(\varepsilon)_n\}_n$ in $\ff$ such that $\xi_n \leq (1+ \lscr + \varepsilon)\sigma_n$. Since the decomposition $\ff = \bigcup_k\ff_k$ consists of finitely many convex sets, at least one of them, say $\ff_{\tilde{k}}$, contains a subsequence $\{\sigma_{n_j}\}_{j}$ of $\{\sigma_n\}_{n}$ for fixed $\varepsilon$. This means that, for any $\varepsilon>0$, we can find sequences with $\sigma_{n_j}\in\ff_{\tilde{k}}$ such that $\xi_{n_j} \leq (1+ \lscr + \varepsilon)\sigma_{n_j}$, and therefore $\lscr$ acts as a suboptimal value for $\underline{\mathcal{R}_{\mathcal{F}_{\tilde{k}}}}(\rho)$. This implies $\lscr \geq \underline{\mathcal{R}_{\mathcal{F}_{\tilde{k}}}}(\rho)$ and hence $\lscr \geq \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho)$. \end{proof} The first condition covers the cases where $\ff$ is compact in the trace norm topology. This includes any energy-constrained resource theory, any resource theory with a finite number of free states, and recovers the result in finite dimensions. It also covers the case where $\cone(\ff)$ is weak* closed even when non-convex, and thus includes the resource theory of non-Gaussianity. The second condition relates to practically relevant cases where the free states are given by multiple constraints, such as being incoherent in multiple bases \cite{finitenonconvex}, or belonging to a finite collection of thermal equilibrium states at different temperatures \cite{resourceengines}. The associated resource theories arise naturally in applications such as quantum metrology and thermal engineering. Our next result establishes that, under the conditions of Proposition \ref{lscr=inflscr}, $\lscr$ quantifies the worst case maximal advantage given by a resource state in a single-copy channel discrimination task, with respect to the decomposition into convex sets. \begin{theorem}\label{theoremin} When $\lscr = \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho)$, we have \begin{equation*} \begin{aligned}\inf_k & \!\! \sup_{\{M_i\}_i,\{p_i,\Psi_i\}_i} \frac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma)}\\ &= 1 + \lscr. \end{aligned} \end{equation*} \end{theorem} \begin{proof} Since each $\ff_k$ is convex, we can use the result from \cite[Theorem~1]{infiniteconvex}, and hence \begin{equation} \sup_{\{M_i\}_i,\{p_i,\Psi_i\}_i} \frac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma)} = 1 + \underline{\mathcal{R}_{\mathcal{F}_k}}(\rho).
\end{equation} Taking the infimum on each side then gives \begin{equation} \begin{aligned} \inf_k & \sup_{\{M_i\}_i,\{p_i,\Psi_i\}_i} \frac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma)} \\ &=1 + \inf_k\underline{\mathcal{R}_{\mathcal{F}_k}}(\rho)\\ &= 1 + \lscr. \end{aligned} \end{equation} \end{proof} This generalizes the result from \cite[Theorem~4]{finitenonconvex} and shows that, in most cases of practical interest, $\lscr$ remains an exact quantifier for the advantage given in a worst case variant of channel discrimination tasks, even in the continuous variable regime and without convexity restrictions. \subsection{Observable lower bounds to robustness}\label{sec:lowerbounds} Here we show that one can provide experimentally accessible lower bounds to the generalized robustness from $m$-copy witness operators. We will exploit again the properties of the max-relative entropy (\ref{Dmax}). By further manipulating the expressions in the proof of Proposition~\ref{prop1m1m}, we can write \begin{equation} \begin{aligned} &(1+\lscr)^m \\ &=\liminf_{\xi\to\rho}\inf_{\sigma \in \ff}\exp(D_{\max}(\xi^{\otimes m}\|\sigma^{\otimes m}))\\ &=\liminf_{\xi\to\rho}\inf_{\sigma \in \ff} \sup_{X\in \mathcal{B(H}^{\otimes m})}\! \left\{\frac{\tr[\xi^{\otimes m}X]}{\tr[\sigma^{\otimes m}X]} \colon \!\!\begin{array}{c}0\leq X \leq I, \\ \tr[\sigma^{\otimes m}X]>0\end{array}\!\right\}\\ &\geq \liminf_{\xi\to\rho} \sup_{X\in \mathcal{B(H}^{\otimes m})}\! \left\{\frac{\tr[\xi^{\otimes m}X]}{\sup_{\sigma\in\ff}\tr[\sigma^{\otimes m}X]} \colon \!\!\begin{array}{c}0\leq X \leq I, \\ \tr[\sigma^{\otimes m}X]>0\end{array}\!\right\}\\ &\geq \sup_{X\in \mathcal{B(H}^{\otimes m})}\! \liminf_{\xi\to\rho}\left\{\frac{\tr[\xi^{\otimes m}X]}{\sup_{\sigma\in\ff}\tr[\sigma^{\otimes m}X]} \colon \!\!\begin{array}{c}0\leq X \leq I, \\ \tr[\sigma^{\otimes m}X]>0\end{array}\!\right\}\\ &=\sup_{X\in \mathcal{B(H}^{\otimes m})}\! \left\{\frac{\tr[\rho^{\otimes m}X]}{\sup_{\sigma\in\ff}\tr[\sigma^{\otimes m}X]} \colon \!\! \begin{array}{c}0\leq X \leq I, \\ \tr[\sigma^{\otimes m}X]>0\end{array}\!\right\}\\ &\geq \tr[\rho^{\otimes m}X]\,,\quad {0 \leq X\in \mathcal{B(H}^{\otimes m})}, 0< \tr[\sigma^{\otimes m}X] \leq 1, \end{aligned}\label{obwound} \end{equation} where in the second equality we use \cite[Corollary~3.46]{Mosonyi2023}. Note that an operator $X$ satisfying the conditions in the last line of (\ref{obwound}) can be defined as $X:=\mathds{1}^{\otimes m} - \frac{W}{\|W\|_{\infty}}$, where $W$ is an $m$-copy resource witness fulfilling Eq.~(\ref{witnessdef}) with the additional constraint $\tr[\sigma^{\otimes m} W] \neq \|W\|_{\infty}$ $\forall \sigma \in \ff$. We then have \begin{equation} \label{obiwan} \lscr \geq \left(1+\max \left\{0,\, - \frac{\tr[\rho^{\otimes m} W] }{\|W\|_{\infty}}\right\}\right)^{\frac1m}-1. \end{equation} \section{Generalized robustness of non-Gaussianity and its applications}\label{sec:NG} In this Section, we specialize to the resource theory of non-Gaussianity. We take the set of free states to be the set of all and only Gaussian states, $\mathcal{G}$, i.e.~all states with a Gaussian Wigner distribution \cite{CVQI_and_beyond,alessiobook} (see Sec.~\ref{sec:CV}). This forms a closed \cite{gaussianQRTs} yet non-convex subset in the infinite-dimensional space $\ddh$, and therefore does not fit into previously studied resource-theoretic frameworks.
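To make the non-convexity of $\mathcal{G}$ concrete before proceeding, the following minimal numerical sketch (an illustration with arbitrarily chosen parameters, in the convention where the vacuum quadrature variance is $1/2$) compares the position-quadrature distribution of a balanced mixture of two displaced vacuum states with the Gaussian distribution having the same first and second moments: the two differ markedly, so a convex mixture of Gaussian states need not be Gaussian.
\begin{verbatim}
import numpy as np

def gauss(x, mu, var):
    # Normal density with mean mu and variance var.
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

d = 2.5                               # arbitrary displacement
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Balanced mixture of two coherent-state marginals, each of variance 1/2,
# versus the Gaussian with matched mean (0) and variance (d^2 + 1/2).
p_mix = 0.5 * gauss(x, +d, 0.5) + 0.5 * gauss(x, -d, 0.5)
p_ref = gauss(x, 0.0, d ** 2 + 0.5)

print(np.sum(p_mix) * dx, np.sum(p_ref) * dx)  # both normalized
print(np.max(np.abs(p_mix - p_ref)))           # large pointwise gap
\end{verbatim}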
In particular, our approach differs from the study of so-called `quantum' or `genuine' non-Gaussianity, in which one takes as free set the {\it convex hull} of the set of Gaussian states, $\overline{\mathcal{G}}:= \text{cl conv}(\mathcal{G})$ \cite{convexG1,convexG2,infiniteconvex,longinfiniteconvex,Tommoguitar}. Our goal is to identify advantageous features of any non-Gaussian state, including those which can be obtained as convex mixtures of Gaussian states and hence have a positive, yet non-Gaussian Wigner function. Since the cone generated by Gaussian states is weak* closed \cite{attainability}, Proposition \ref{ClosedCone} implies that Definitions \ref{robDef} and \ref{lscrDef} coincide, and hence we define the lower semi-continuous generalized robustness of non-Gaussianity of a state $\rho$ as \begin{equation} \underline{\mathcal{R_G}}(\rho)=\mathcal{R_G}(\rho):=\inf_{\tau\in\ddh}\left\{\lambda\in \mathbb{R}_{\geq 0} \colon \frac{\rho + \lambda \tau}{1+\lambda} = \sigma \in \mathcal{G} \right\}. \end{equation} We will refer to $\underline{\mathcal{R_G}}=\mathcal{R_G}$ simply as the {\em (generalized) robustness of non-Gaussianity}, and omit the underline from now on. By virtue of Proposition~\ref{propoprops}, the robustness $\mathcal{R_G}$ is a faithful measure of non-Gaussianity and is monotonically nonincreasing under Gaussian channels, i.e., free operations mapping Gaussian states into Gaussian states. Recalling Proposition~\ref{lscr=inflscr}, and observing that a convex set decomposition of $\mathcal{G}$ consists of an infinite number of sets, each containing a single state, we note that an intuitive operational interpretation of ${\mathcal{R_G}}(\rho)$ arising from Theorem~\ref{theoremin} is the following: given the choice between using a non-Gaussian state $\rho$ and an arbitrary (but fixed) Gaussian state $\sigma$, there will always exist a single-copy channel discrimination task for which $\rho$ performs strictly better than $\sigma$ by a factor of at least $1+{\mathcal{R_G}}(\rho)$. In formula, \begin{equation}\inf_{\sigma \in \mathcal{G}} \sup_{\{M_i\}_i,\{p_i,\Psi_i\}_i} \frac{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\rho)}{p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\sigma)} = 1 + \mathcal{R_G}(\rho).\end{equation} Alternatively, from Theorem~\ref{theoremupperbody} the robustness of non-Gaussianity $\mathcal{R_G}(\rho)$ also upper bounds the regularized advantage of using a non-Gaussian state $\rho$ versus any Gaussian state $\sigma \in \mathcal{G}$ in multi-copy channel discrimination tasks. In what follows, we will evaluate and analyze the robustness of non-Gaussianity for some insightful examples, illustrated in Fig.~\ref{fig:wigners}. \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=0.5\columnwidth]{fockstaten4v2.pdf}\label{fig:wignerfock}}\hfill \subfloat[]{\includegraphics[width=0.5\columnwidth]{coherentmixture.pdf}\label{fig:wignermix}} \caption{Illustration of the Wigner distributions for two classes of non-Gaussian states: (a)~Fock state $\ket{n}\!\bra{n}$ with $n=4$; (b)~balanced mixture $\rho_{0,d}$ of two coherent states with amplitudes $\pm d/\!\sqrt{2}$ for $d=5/2$.} \label{fig:wigners} \end{figure} \subsection{Fock states} As a first case study, here we directly calculate $\mathcal{R_G}$ for Fock states $\ket{n}\!\bra{n}$; see Fig.~\ref{fig:wignerfock}. We begin by recalling known results for the generalized robustness $\rob$ of an arbitrary resource. 
For pure states $\ket{\psi}\!\bra{\psi}$ we have \cite[Lemma~13]{longinfiniteconvex} \begin{equation}\label{upper} 1+\mathcal{R_F}(\ket{\psi}\!\bra{\psi})= \inf_{\sigma \in \ff} \bra{\psi}\sigma^{-1}\ket{\psi} \end{equation} and for arbitrary states we can use that the (Umegaki) relative entropy $D(\rho\|\sigma):=\tr[\rho (\log \rho - \log \sigma)]$ always lower bounds the max-relative entropy (\ref{Dmax}) \cite{min-maxRE}: \begin{equation}\label{lower} \inf_{\sigma\in\ff}D(\rho\| \sigma) \leq \inf_{\sigma\in\ff}D_{\max}(\rho\| \sigma) =\log(\rob+1). \end{equation} We can then use the above bounds to calculate the robustness of non-Gaussianity of Fock states. \begin{proposition}\label{propofocker} For an $n$-photon Fock state $\ket{n}\!\bra{n}$, the generalized robustness of non-Gaussianity is given by \begin{equation}\label{fockn} {\mathcal{R_G}}(\ket{n}\!\bra{n}) = \frac{(n+1)^{n+1}}{n^n}-1. \end{equation} \end{proposition} \begin{proof} We use the reference Gaussian state for a Fock state $\ket{n}\!\bra{n}$ (i.e. the Gaussian state with the same first and second moments \cite{extremality,gaussianHS,gaussianRE}), which is a thermal state $\tau_n$ with average photon number $n$, written as \begin{equation} \tau_n = \frac{1}{n+1}\sum_{m=0}^{\infty}\left(\frac{n}{n+1}\right)^m\ket{m}\!\bra{m}. \end{equation} Therefore for an upper bound we use (\ref{upper}) to get \begin{equation} {\mathcal{R_G}}(\ket{n}\!\bra{n})\leq\bra{n}\tau_n^{-1}\ket{n} -1 = \frac{(n+1)^{n+1}}{n^n}-1. \end{equation} For a lower bound, we use the fact that the relative entropy of non-Gaussianity of a state $\rho$ \cite{gaussianRE} is minimized by the Gaussian reference state $\tau_\rho$ \cite{GREmimimized}, and its value when $\rho$ is pure is given simply by the von Neumann entropy of the reference state. We hence have \begin{equation} \begin{aligned} \inf_{\sigma \in \mathcal{G}}D(\ket{n}\!\bra{n}\|\ \sigma) &= -\tr(\tau_n \log \tau_n)\\ &= \log \left(\frac{(n+1)^{n+1}}{n^n}\right) \\ &\leq \log({\mathcal{R_G}}(\ket{n}\!\bra{n})+1). \end{aligned} \end{equation} The upper and lower bounds coincide. \end{proof} \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{rngfocks.pdf} \caption{\label{fig:rngfocks} Comparison of non-Gaussianity measures for Fock states $\ket{n}\!\bra{n}$ as a function of $n$. The solid blue line with circle markers denotes the generalized robustness of non-Gaussianity $\mathcal{R_G}(\ket{n}\!\bra{n})$ computed analytically in Eq.~(\ref{fockn}) with respect to the free set of Gaussian states ${\mathcal{G}}$. The dashed red line with square markers denotes the robustness of `genuine' non-Gaussianity $\mathcal{R}_{\overline{\mathcal{G}}}(\ket{n}\!\bra{n})$ obtained numerically in Ref.~\cite{longinfiniteconvex} with respect to the convex hull of Gaussian states $\overline{\mathcal{G}}$.} \end{figure} We plot our exact expression (\ref{fockn}) for the robustness of non-Gaussianity of Fock states $\mathcal{R_G}(\ket{n}\!\bra{n})$ as a function of $n$ in Fig.~\ref{fig:rngfocks}. This is compared with the numerical expression for the robustness of `genuine' non-Gaussianity $\mathcal{R}_{\overline{\mathcal{G}}}(\ket{n}\!\bra{n})$ studied in \cite{infiniteconvex,longinfiniteconvex,note1inf} by considering the convex hull of Gaussian states $\overline{\mathcal{G}}$ as a free set. 
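Proposition~\ref{propofocker} can be cross-checked numerically. The sketch below is a simple consistency check rather than an independent derivation: it evaluates the closed form (\ref{fockn}) together with the upper bound $\bra{n}\tau_n^{-1}\ket{n}-1$ and the entropic lower bound $\exp[S(\tau_n)]-1$, confirming that the three expressions coincide.
\begin{verbatim}
import numpy as np

def robustness_fock(n):
    # Closed form of Eq. (fockn): (n+1)^(n+1)/n^n - 1.
    return (n + 1.0) ** (n + 1) / float(n) ** n - 1.0

def upper_bound(n):
    # <n|tau_n^{-1}|n> - 1, tau_n thermal with mean photon number n.
    p_n = (1.0 / (n + 1.0)) * (n / (n + 1.0)) ** n
    return 1.0 / p_n - 1.0

def lower_bound(n):
    # exp[S(tau_n)] - 1, with S the entropy of the reference state.
    S = (n + 1.0) * np.log(n + 1.0) - n * np.log(float(n))
    return np.exp(S) - 1.0

for n in (1, 2, 5, 10, 20):
    print(n, robustness_fock(n), upper_bound(n), lower_bound(n))
\end{verbatim}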
We notice that the robustness of non-Gaussianity evaluated in this paper grows linearly with $n$, $\mathcal{R_G}(\ket{n}\!\bra{n}) \approx {\rm e}\, n$ for large $n$, showing that the advantage of using Fock states in channel discrimination tasks over any Gaussian state is directly proportional to the photon number $n$. In contrast, the robustness evaluated with respect to the convex hull of Gaussian states \cite{infiniteconvex,longinfiniteconvex} grows only sublinearly, $\mathcal{R}_{\overline{\mathcal{G}}}(\ket{n}\!\bra{n}) = O(\sqrt{n})$. We can also consider the multimode case where each mode $i$ has a Fock state with photon number $n_i$, with the overall state given by $\ket{\psi} = \ket{n_1} \otimes ... \otimes \ket{n_m}$. In this case, the Gaussian reference state is a tensor product of thermal states with mean photon number $n_i$ in each mode, and, using the same method as in Proposition~\ref{propofocker}, we get \begin{equation} {\mathcal{R_G}}(\ket{\psi}\!\bra{\psi}) = \prod_{i=1}^m\left(\frac{(n_i+1)^{(n_i+1)}}{n_i^{n_i}}\right)-1. \end{equation} \subsection{Mixtures of Gaussian states} We now consider a mixture of coherent states \begin{equation}\label{rhomix} \rho_{q,d} := \frac{1+q}{2} \ket{\alpha_d}\!\bra{\alpha_d} + \frac{1-q}{2} \ket{\alpha_{-d}}\!\bra{\alpha_{-d}}, \end{equation} with $q \in [-1,1]$, where \begin{equation} \ket{\alpha} = {\rm e}^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}\ket{n} \end{equation} denotes a single-mode coherent state and we take without loss of generality the coherent amplitude to be real, $\alpha_{\pm d} = \text{Re}[\alpha_{\pm d}]= \pm d/\sqrt 2$, with $d \in {\mathbb{R}}_{\geq 0}$; see Fig.~\ref{fig:wignermix}. Note that $\rho_{q,d} \in \overline{\mathcal{G}}$, which means that the states $\rho_{q,d}$ have vanishing `genuine' non-Gaussianity, and also vanishing optical non-classicality, being mixtures of semiclassical states. Therefore, these states would be considered useless for typical resource theoretic applications constrained by convexity \cite{convexG1,convexG2,infiniteconvex,longinfiniteconvex}. Nevertheless, they are non-Gaussian, $\rho_{q,d} \not\in \mathcal{G}$, for all $d\neq 0$ and $|q|\neq 1$, which means they can still be advantageous for channel discrimination following the results of this paper. Since such an advantage is quantifiable via the robustness of non-Gaussianity $\mathcal{R_G}(\rho_{q,d})$, we will provide useful bounds on the latter. The states in (\ref{rhomix}) can equivalently be described by their Wigner distribution in phase space, \begin{equation}\label{wignermix} \mathcal{W}^\rho_{q,d}(x,y) = \frac{1+q}{2\pi} {\rm e}^{-(x-d)^2-y^2} + \frac{1-q}{2\pi}{\rm e}^{-(x+d)^2-y^2}. \end{equation} Alternatively, they can be expressed as qubit-like density operators with respect to the orthonormal basis $\{\ket{+},\ket{-}\}$, with $\ket{\pm} = \big(2(1\pm {\rm e}^{-d^2})\big)^{-1/2} \big(\ket{\alpha_d} \pm \ket{\alpha_{-d}}\big)$, as \begin{equation}\label{qubitlike} \rho_{q,d} = \frac12\left( \begin{array}{cc} 1+{\rm e}^{-d^2} & q \sqrt{1-{\rm e}^{-2d^2}} \\ q \sqrt{1-{\rm e}^{-2d^2}} & 1-{\rm e}^{-d^2} \\ \end{array} \right).
\end{equation} The latter expression is useful to construct a witness operator such as the one based on Hilbert-Schmidt distance in Eq.~(\ref{lockness}), which detects non-Gaussianity in the states $\rho_{q,d}$ and enters in the explicit construction of a two-copy discrimination task for which these states outperform all Gaussian states according to Theorem~\ref{DiscriminationAdvantage}. To provide a lower bound on the robustness, we can consider once again the relative entropy of non-Gaussianity \cite{gaussianRE,GREmimimized} and its hierarchical relation (\ref{lower}) with the max-relative entropy \cite{datta2009max}, \begin{equation} \begin{aligned} \inf_{\sigma \in \mathcal{G}}D(\rho_{q,d}\|\sigma) &= D(\rho_{q,d}\|\tau_{q,d}) \\ &= S(\tau_{q,d}) - S(\rho_{q,d}) \\ &\leq \log\big({\mathcal{R_G}}(\rho_{q,d})+1\big). \end{aligned} \end{equation} Here the reference Gaussian state $\tau_{q,d}$ is a single-mode squeezed thermal state with displacement vector $\boldsymbol{\mu}_{q,d}$ and covariance matrix $\boldsymbol{V}_{q,d}$ given respectively by \begin{equation} \boldsymbol{\mu}_{q,d} =\left( \begin{array}{c} d q \\ 0 \\ \end{array} \right),\quad \boldsymbol{V}_{q,d}=\left( \begin{array}{cc} 2 d^2 \left(1-q^2\right)+1 & 0 \\ 0 & 1 \\ \end{array} \right). \end{equation} The von Neumann entropy of $\tau_{q,d}$ can be evaluated via established methods \cite{holewer}, while the von Neumann entropy of $\rho_{q,d}$ can be computed by diagonalising Eq.~(\ref{qubitlike}). Explicitly, \begin{equation} \begin{aligned} S(\tau_{q,d}) &= (1 + \nu_{q,d}) \log(1 + \nu_{q,d}) - \nu_{q,d} \log \nu_{q,d}\,, \\ S(\rho_{q,d}) &= - \lambda^{+}_{q,d} \log(\lambda^{+}_{q,d}) - \lambda^{-}_{q,d} \log(\lambda^{-}_{q,d})\,, \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} \nu_{q,d} & = \frac{1}{2} \left(-1+\sqrt{1+2 d^2 (1- q^2)}\right), \\ \lambda^{\pm}_{q,d} & =\frac{1}{2} \left(1 \pm \sqrt{q^2+{\rm e}^{-2 d^2} \left(1-q^2\right)}\right). \end{aligned} \end{equation*} We then find \begin{equation}\label{lowermixrelentless} {\mathcal{R_G}}(\rho_{q,d}) \geq \exp\big[S(\tau_{q,d}) - S(\rho_{q,d})\big]-1\,. \end{equation} The above bound is faithful as it vanishes only for limiting values of the parameters, $|q|=1$ or $d=0$, in which cases the mixture $\rho_{q,d}$ trivially collapses into a single Gaussian state. A tighter bound on the robustness can be found by bounding the max-relative entropy directly in terms of the {\it measured} max-relative entropy evaluated on the classical probability distribution resulting from a POVM or projective measurement performed on the states \cite{bertasquallor}. Consider a resource theory for which $\lscr = \rob$ and let us denote by ${\cal M(H)}$ the set of POVMs on ${\cal H}$. We have then \cite{Mosonyi2015,Mosonyi2023,bertasquallor} \begin{equation} \begin{aligned} 1+ {\cal R_F}(\rho)&= \inf_{\sigma \in \ff} \sup_{k,\{M_k\}} \left\{\frac{\tr [\rho M_k]}{\tr [\sigma M_k]} : \{M_k\}_k \in {\cal M(H)}\right\} \\ &= \inf_{\sigma \in \ff} \sup_{k,\{\ket{k}\}} \left\{\frac{\bra{k}\rho\ket{k}}{\bra{k}\sigma\ket{k}} : \mbox{$\sum_k \ket{k}\!\bra{k}=\mathds{1}$}\right\} \\ &\geq \sup_{k,\{\ket{k}\}} \inf_{\sigma \in \ff}\left\{\frac{\bra{k}\rho\ket{k}}{\bra{k}\sigma\ket{k}} : \mbox{$\sum_k \ket{k}\!\bra{k}=\mathds{1}$}\right\}, \end{aligned} \end{equation} where the sum $\sum_k$ can be replaced by an integral for a continuous basis $\{\ket{k}\}$. 
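Before specializing the measured bound to a concrete measurement, we note that the relative-entropy bound (\ref{lowermixrelentless}) is immediate to evaluate numerically. The following sketch (an illustrative evaluation at a few arbitrarily chosen values of $d$, with $q=0$) implements the expressions for $S(\tau_{q,d})$ and $S(\rho_{q,d})$ given above.
\begin{verbatim}
import numpy as np

def rel_entropy_bound(q, d):
    # Eq. (lowermixrelentless): exp[S(tau_{q,d}) - S(rho_{q,d})] - 1.
    nu = 0.5 * (-1.0 + np.sqrt(1.0 + 2.0 * d ** 2 * (1.0 - q ** 2)))
    lam_p = 0.5 * (1.0 + np.sqrt(q ** 2
                                 + np.exp(-2.0 * d ** 2) * (1.0 - q ** 2)))
    lam_m = 1.0 - lam_p
    S_tau = (1.0 + nu) * np.log(1.0 + nu) - nu * np.log(nu) if nu > 0 else 0.0
    S_rho = -lam_p * np.log(lam_p)
    if lam_m > 0:
        S_rho -= lam_m * np.log(lam_m)
    return np.exp(S_tau - S_rho) - 1.0

for d in (0.5, 1.0, 2.5, 5.0):
    print(d, rel_entropy_bound(0.0, d))   # balanced mixture, q = 0
\end{verbatim}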
\begin{figure}[t] \includegraphics[width=0.95\columnwidth]{rngbounds.pdf} \caption{\label{fig:rngbounds} Lower bounds on the generalized robustness of non-Gaussianity $\mathcal{R_G}$ of the states $\rho_{q,d}$ [Eq.~(\ref{rhomix})] with $q=0$, corresponding to balanced mixtures of two Gaussian coherent states with coherent amplitude $\pm d/\sqrt{2}$. The solid red curve corresponds to the bound (\ref{lowermixrelentless}) obtained from the relative entropy of non-Gaussianity. The dashed blue curve corresponds to the tighter bound (\ref{lowerhomo}) obtained from the measured max-relative entropy via an optimized quadrature measurement.} \end{figure} In our example, we can consider a homodyne measurement in the position basis $\{\ket{x}\}$, leading to a probability distribution given by the marginal $\bra{x} \rho \ket{x} = \int_{-\infty}^\infty \mathcal{W}^\rho(x,y) dy$ of the Wigner distribution $\mathcal{W}^\rho(x,y)$ for a single-mode state $\rho$. We can thus write \begin{equation}\label{wigratio} \begin{aligned} 1+ {\cal R_G}(\rho_{q,d}) \geq \sup_x \inf_{\mu,V}\frac{\int_{-\infty}^\infty \mathcal{W}^\rho_{q,d}(x,y) dy}{\int_{-\infty}^\infty \mathcal{W}^\sigma_{\mu,V}(x,y) dy}, \end{aligned} \end{equation} where $\mathcal{W}^\rho_{q,d}(x,y)$ is given in (\ref{wignermix}) and $\mathcal{W}^\sigma_{\boldsymbol{\mu},\boldsymbol{V}}$ is the Wigner function (\ref{gaussianwigner}) of a generic single-mode Gaussian state $\sigma \in \mathcal{G}$ with displacement vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{V}$. From now on, we specialize to the case of a balanced mixture, $q=0$. Given the symmetry of the state $\rho_{0,d}$, we can restrict the optimization over the position eigenstates $\{\ket{x}\}$ to the positive semiaxis with no loss of generality, and we find that the optimization over the Gaussian state $\sigma \in \mathcal{G}$ is solved by $\boldsymbol{\mu} = (0,0)^T$ and $\boldsymbol{V}=\mathrm{diag}(a,1)$ with $a\geq 1$. This gives \begin{equation}\label{envelope} 1+ {\cal R_G}(\rho_{0,d}) \geq \sup_{x \geq 0} \inf_{a \geq 1}\frac{{\rm e}^{-(d+x)^2} \left({\rm e}^{4 d x}+1\right)/2}{{\rm e}^{-\frac{x^2}{a}}/\sqrt{a}}. \end{equation} Evaluating the right-hand side in (\ref{envelope}) amounts to finding the tightest Gaussian envelope to the even Gaussian-mixture distribution appearing in the numerator. The minimization over $a$ is solved analytically by $a = 2x^2$, leading to the lower bound \begin{equation}\label{lowerhomo} {\cal R_G}(\rho_{0,d}) \geq \sup_{x \geq 0} \left[{\rm e}^{\frac{1}{2}-(d+x)^2} \left({\rm e}^{4 d x}+1\right)\frac{x}{\sqrt2}\right]-1. \end{equation} The value of $x$ that solves the remaining maximization, say $x = x^{\text{opt}}_d$, can be evaluated numerically. An analytical approximation yields $2 ({x^{\text{opt}}_d})^2 \geq 2 d^2+1+\tanh^8\big(\!\sqrt{d}\,\big) \geq 1$ $\forall d\geq 0$, which confirms that the optimal Gaussian state $\sigma$ featured in the bound (\ref{wigratio}) is physical. A comparison between the two lower bounds (\ref{lowermixrelentless}) and (\ref{lowerhomo}) to $\mathcal{R_G}(\rho_{0,d})$ is presented in Fig.~\ref{fig:rngbounds}.
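The remaining one-dimensional maximization in Eq.~(\ref{lowerhomo}) is easily carried out on a grid. The following sketch (a numerical illustration at a few arbitrarily chosen values of $d$, exploiting the analytical minimizer $a=2x^2$ derived above and working with logarithms for numerical stability) evaluates the homodyne lower bound and compares it with the linear growth discussed below.
\begin{verbatim}
import numpy as np

def homodyne_bound(d, n_grid=20001):
    # Eq. (lowerhomo): sup over x >= 0 of
    #   exp(1/2 - (d+x)^2) * (exp(4 d x) + 1) * x / sqrt(2),  minus 1.
    x = np.linspace(1e-3, d + 10.0, n_grid)
    log_vals = (0.5 - (d + x) ** 2
                + np.logaddexp(4.0 * d * x, 0.0)   # log(e^{4dx} + 1)
                + np.log(x / np.sqrt(2.0)))
    return float(np.exp(np.max(log_vals)) - 1.0)

for d in (0.5, 1.0, 2.5, 5.0):
    # second column: the bound; third: the large-d linear behaviour
    print(d, homodyne_bound(d), np.sqrt(np.e / 2.0) * d - 1.0)
\end{verbatim}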
We see that the robustness of non-Gaussianity of $\rho_{0,d}$ (and hence the channel discrimination advantage enabled by these states) grows at least linearly with $d$, namely ${\cal R_G}(\rho_{0,d}) \gtrsim \sqrt{{\rm e}/2}\ d - 1$ for $d \gg 0$, despite the fact that these states are simple convex mixtures of Gaussian coherent states, with both a positive Wigner function and a positive Glauber-Sudarshan $P$-representation. \section{Conclusions} In this paper, we showed that \emph{any} resource state in \emph{any} resource theory can provide an advantage in the task of channel discrimination, without restricting to convex sets of free states or to finite dimensions. We began by considering multi-copy witness operators and showing that, for an arbitrary resource state, a two-copy witness operator always exists. Based on the existence of such a witness, we constructed a discrimination task to show that every resource state is advantageous over all free states in some two-copy channel discrimination task. Furthermore, we extended the definition of lower semicontinuous generalized robustness to the case of non-convex sets of free states in infinite-dimensional theories, and proved that it remains a faithful monotone. We investigated conditions under which this definition, when used in the general case, coincides with the simpler definition of generalized robustness employed in finite dimensions. We provided two operational interpretations for the generalized robustness monotone: firstly, we showed that it can always be used to find an upper bound for the maximal advantage given by a resource state compared to all free states in some multi-copy channel discrimination task. Secondly, we showed that, in many resource theories of interest, it exactly quantifies the worst-case advantage given by a resource state compared to each set in a decomposition of the free states into convex sets. Our methods also allowed us to provide experimentally observable lower bounds for the robustness, based on the measurement of suitable multi-copy witness operators. Finally, we specialized to the resource theory of non-Gaussianity. Here, we calculated the generalized robustness of non-Gaussianity for Fock states with arbitrary photon number, and compared it to the robustness with respect to the convex hull of Gaussian states. We also investigated the generalized robustness of a mixture of two coherent states and provided lower bounds, thus demonstrating that even simple classical-like mixtures of Gaussian states can provide an advantage over all Gaussian states in the context of channel discrimination. This work overcomes limitations of previous studies \cite{finiteconvexrobustness,infiniteconvex,longinfiniteconvex,finitenonconvex,longfinitenonconvex} and provides a universal operational scenario to characterize the usefulness of quantum resources. Our work demonstrates in particular that any instance of non-Gaussianity can be advantageous for specific discrimination tasks in continuous variable quantum technologies. A more systematic analysis of the power of non-Gaussian states and operations to provide enhancements in quantum sensing and metrology applications, inspired by the methods presented in this work, deserves further consideration. \begin{acknowledgments} We acknowledge fruitful discussions with M.~Berta, K.~Kuroiwa, M.~Mosonyi, M.~Piani, B.~Regula, S.~Sibilia, R.~Takagi, and especially L.~Lami. 
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) [Grants No.~EP/W524402/1, EP/X010929/1, and EP/T022140/1]. The authors confirm that no new data were created during this study. \end{acknowledgments} \bibliographystyle{apsrevfixedwithtitles} \bibliography{bib} \end{document}
This is largely due to the fact that mathematical techniques developed for the discrete variable case do not necessarily apply to continuous variables, and the infinite dimensional Hilbert spaces involved are more complex than their discrete variable counterparts. Many results in this area make restricting assumptions on the spaces involved, or work solely within the simplified Gaussian regime. Whilst Gaussian quantum information is a popular area due to relative ease of creating and manipulating states with linear optics, it is known that leaving this Gaussian formalism is necessary in order to improve or even achieve certain tasks \cite{noGEC,noGdistillation,gaussianQRTs,wignernegativity,estimation,photonsubtraction}, and therefore can be considered as a quantum resource. But is any non-Gaussian state useful for something? Even those within the convex hull? The framework for generalized robustness so far does not include the case of infinite dimensional, nonconvex resource theories, and hence as of yet cannot be used to study non-Gaussianity and the advantages it may give. \item In this paper, we show that every resource state in any resource theory, without assuming convexity or finite dimensions, can provide an advantage in some channel discrimination task, even if it lies within the convex hull of free states. We extend the generalized robustness monotone to the case of infinite dimensional, nonconvex resource theories, and give it two interpretations: worst case advantage with respect to a convex set decomposition of the free states in single-copy channel discrimination, and to construct an upper bound on the advantage given by a resource state in multi-copy channel discrimination. Further specializing to the case of non-Gaussianity, we show that the generalized robustness can be calculated exactly in some specific cases, and provide analysis for an example of a state within the convex hull of Gaussian states. \item Organization. \end{itemize} \section{Preliminaries} Here we begin by introducing the notation and background material necessary for this paper. Throughout this paper, $\mathcal{H}$ refers to a separable, not necessarily finite dimensional, Hilbert space. $\bbh$ is the space of bounded operators on $\mathcal{H}$. $\tth\subseteq\bbh$ is the space of trace class operators, $\kkh \subseteq \bbh$ is the space of compact operators. $\ddh\subseteq\bbh$ is the space of density operators, or states, on $\mathcal{H}$. $\|\cdot\|_1$ is the trace norm and $\|\cdot\|_{\infty}$ is the operator norm. We use $\cone(X)$ to denote the cone generated by the set $X\subseteq \tth$, that is $\cone(X)=\{\mu x \colon x\in X, \mu\in\mathbb{R}_{\geq 0}\}$. \subsection{Topologies in infinite dimensions} Unlike in the finite dimensional case, when working with an infinite dimensional state space the topologies do not coincide. We provide here a brief overview of the topologies we use in this paper. The trace norm $\|\cdot\|_1$ induces the trace norm topology on $\tth$, in which a sequence of trace class operators $T_n$ is said to converge in the trace norm topology to $T$ if $\lim_{k\to\infty}\|T-T_k\|_1=0$, which we write this as $T_n\xrightarrow{\text{tn}} T$. Due to its operational significance in quantifying of state distinguishability, the trace norm topology is one of the most important topologies when working with quantum states, and we will use this topology unless otherwise specified. 
Another norm topology that we will use in this paper is the Hilbert-Schmidt topology, induced by the Hilbert-Schmidt norm $\|X\|_2:=\tr[XX^{\dagger}]^{\frac{1}{2}}$. A sequence $T_n$ converges in the Hilbert-Schmidt topology to $T$ if $\lim_{k\to\infty}\|T-T_k\|_2=0$, which we write as $T_n\xrightarrow{\text{H-S}} T$. Another topology that will prove useful is the weak* topology on $\tth$. This is the topology induced on $\tth$ by its predual, the space of compact operators $\kkh$. A sequence $T_n\in\tth$ is said to converge in the weak* topology to $T\in\tth$ if $\tr[T_nK]\xrightarrow{n\to \infty} \tr[TK]$ for all $K\in \kkh$, and we write this as $T_n\xrightarrow{\text{w*}} T$. The usefulness of this topology here is due to the Banach-Alaoglu theorem \cite[Theorem~2.6.18]{Megginson}, which says the unit ball in $\tth$ is compact in the weak* topology, and hence this topology lends itself well to arguments involving sequences. Since we are working in infinite dimensions, all these topologies are different. It is, however, worth noting that when working only with states, convergence of a sequence of states to another state is equivalent in all of these topologies, i.e. for a sequence of states $T_n\in\ddh$ and another state $T\in\ddh$, we have \cite{SWOT} $$T_n\xrightarrow{\text{tn}} T \iff T_n\xrightarrow{\text{H-S}} T \iff T_n\xrightarrow{\text{w*}} T$$ \subsection{Quantum resource theories} Quantum resource theories are motivated by the fact that, in a realistic setting, the accessible states and operations will be restricted. In this framework, the set of all possible quantum states is partitioned into the set of free states $\ff\subseteq\ddh$ (those that are readily available in the considered context) and resource states $\ddh \backslash \ff$. The idea is that a state $\rho\notin\ff$ can then potentially be used as a \emph{resource} to achieve better results. The set of operations that can be implemented are known as free operations, and have the requirement that $\ff$ is closed under their action. Resource theories can be considered in specific cases, or in a general setting, allowing for the study of results that apply to general classes of resources. Here, we work mostly in this general case. We make the standard assumption that $\ff$ is closed (in the trace norm topology) - if a state can be approximated by a sequence of free states then it must also be free. To allow for full generality in the applicability of our results, this is the only assumption we make. Note that although convexity of $\ff$ is often also assumed in a general resource theory, we do not make this assumption here. There exist several interesting and well-motivated sets of free states which are naturally nonconvex, a notable example here being the set of Gaussian states \cite{CVQI_and_beyond}. Additionally, there are situations in which randomness can be costly and used as a resource \cite{randomnessresource}, and therefore it is not necessarily always constructive to consider mixtures of free states as free. It is a natural assumption that states with a higher resource content should be more useful. Given $\rho\in\ddh\backslash\ff$, the task is then to quantify the amount of resource in $\rho$. This is done by introducing a resource monotone $\mathcal{M}:\rho \mapsto [0,\infty)$. 
Whilst there is no unique way of defining a $\mathcal{M}$, two standard conditions are imposed in order to ensure the monotone is meaningful: faithfulness $\mathcal{M}(\rho) = 0 \iff \rho \in \ff$ and monotonicity under free operations $\mathcal{M}(\Phi(\rho))\leq \mathcal{M}(\rho)$ for any free operation $\Phi$. It is also instructive to consider monotones with some operational meaning, i.e. relate the resource content of a state with respect to that monotone to the relative advantage it gives in some specific task of interest. When working with monotones in the continuous variable case, continuity issues often arise. Intuitively, continuity of a monotone is desirable since if two states can be related by only a slight perturbation, their resource content should be similar. In the finite dimensional case, this is often trivially given by the compactness of $\ff$, whereas when defined the same way in infinite dimensions, many quantities are discontinuous everywhere \cite{continuity_too_strong}. It is therefore necessary to impose a weaker version of continuity: lower semi-continuity. A monotone $M$ is lower semi-continuous if it satisfies the property $\liminf_{n\to\infty}M(\xi_n)\geq M(\rho)$ for any sequence of states $\xiseq \xrightarrow{tn}\rho$. This direction of semi-continuity imposes that approximating a state by a sequence of states does not give a lower value of resource than in the actual state, i.e. taking the limit will not increase the resource. \section{Two copy channel discrimination advantage} In this section, we show that every resource state in any resource theory provides an advantage in some two-copy channel discrimination task. The proof strategy involves directly constructing such a task, using the properties of witness operators. When $\ff$ is convex, the hyperplane separation theorem guarantees the existence of a linear (i.e. acting on a single copy of a state) witness operator \cite{witnessexistence}. When the set $\ff$ is nonconvex, such a linear operator is no longer guaranteed to exist. We therefore turn to multi-copy witness operators. These were first introduced in entanglement theory \cite{swaptrick}, and amount to a witness operator $W\in\mathcal{B(H}^{\otimes m})$ that acts on multiple copies of a state $\rho$ in order to separate it from $\ff$, i.e. $\tr[W\rho^{\otimes m}]<0$ and $\tr[W\sigma^{\otimes m}] \ \forall \sigma\in\ff$. Using witnesses of this form was the approach taken in \cite{finitenonconvex} to show an advantage in channel discrimination in finite dimensional theories even for nonconvex sets, however such a witness is yet to be used in a general infinite dimensional theory. We begin be showing that such a witness always exists, in any resource theory. 
\begin{proposition} \label{WitnessExistence} For any set of free states $\ff\subseteq\ddh$ that is closed in the trace norm topology and any resource state $\rho\in\ddh \backslash \ff$, there exists a two-copy witness operator $W$ such that $\tr[W\rho^{\otimes 2}]<0$ and $\tr[W\sigma^{\otimes 2}]\geq 0$ for all $\sigma \in \ff$. \end{proposition} \begin{proof} Consider the two-copy witness operator, introduced for the finite dimensional case in \cite{finitenonconvex}, given by \begin{equation} W(\rho, \varepsilon) = \swap+(\tr[\rho^2]-\varepsilon)\mathds{1}\otimes \mathds{1}-2\rho \otimes \mathds{1} \end{equation} where the $\swap$ operator is defined via $\swap \ket{\phi}\otimes \ket{\psi} = \ket{\psi}\otimes \ket{\phi}$ and has the property $\tr[\swap \eta^{\otimes 2}] = \tr[\eta^2]$ \cite{swaptrick}. When evaluated on two copies of a state $\eta$, this gives \begin{equation}\tr[W\eta^{\otimes 2}] = \tr[(\rho-\eta)^2]-\varepsilon \end{equation} and is therefore a witness provided $\varepsilon$ is picked such that $0<\varepsilon \leq \tr[(\rho -\sigma)^2]$ for all $\sigma \in \ff$. We now need to argue that such an $\varepsilon$ can always be chosen. To this end, assume towards a contradiction that no such $\varepsilon$ exists, that is, $\tr[(\rho -\sigma)^2]$ can be made arbitrarily small over $\sigma\in\ff$. This means there exists some sequence $\{\sigma_k\}_k$ in $\ff$ such that $\sigma_k \xrightarrow{\text{H-S}} \rho$ and hence, by the equivalence of these convergences on states, also $\sigma_k \xrightarrow{\text{tn}} \rho$. Since $\ff$ is closed in the trace norm topology, this would imply $\rho\in\ff$, a contradiction; therefore such an $\varepsilon$ must always exist. \end{proof} This shows that, although there may not exist a linear witness operator in some resource theory, there will always exist a multi-copy witness operator. Interestingly, it also shows we only require two copies of the states involved to be able to strictly separate $\rho$ from $\ff$, regardless of the system's dimension and including infinite dimensional theories. We use this result in the following to show that a resource state always gives an advantage in some variant of a channel discrimination task. In a channel discrimination task, we are provided with a channel ensemble $\{p_i,\Psi_i\}$, from which the channel $\Psi_i$ is picked with probability $p_i$ and applied to a state $\eta$. The task is to correctly guess which channel was picked by measuring the output, and the probability of success can be maximized by optimizing the choice of $\povm$. The success probability of correctly guessing from the ensemble $\{p_i,\Psi_i\}_i$ using input state $\eta$ and POVM $\{M_i\}_i$ is denoted $p_{succ}(\{p_i,\Psi_i\}_i,\{M_i\}_i,\eta):=\sum_i p_i \tr[\Psi_i(\eta) M_i]$. Such tasks are vital in quantum information, since any scenario in which we want to discover which of a set of processes is occurring can be reformulated as a channel discrimination task. Promising uses of binary discrimination tasks include quantum illumination \cite{IlluminationIntro,GaussianIllumination,ExperimantalIllumination} and quantum reading \cite{ReadingIntro,ReadingAndIllumination,ExperimentalReading}. A nontrivial question is whether any quantum resource will give an advantage over free states in such a task. In a multi-copy channel discrimination task, the channels $\{\Psi_i\}_i$ act on multiple copies of a state $\eta$. We now show that, for any resource state $\rho$, there always exists a task of this form for which $\rho$ performs strictly better than any free state.
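Before turning to this statement, we record a concrete numerical illustration of the witness construction in Proposition~\ref{WitnessExistence}: the identity $\tr[W\eta^{\otimes 2}]=\tr[(\rho-\eta)^2]-\varepsilon$ and the resulting sign separation can be checked directly in small dimensions. The following is a minimal Python sketch; the qubit resource state $\rho=\ket{+}\!\bra{+}$, the diagonal ``free'' state and the value $\varepsilon=0.1$ are illustrative assumptions, not choices made elsewhere in this paper.
\begin{verbatim}
import numpy as np

# Illustrative qubit example: rho = |+><+| as the resource state and a
# diagonal state as a stand-in free state (assumptions for illustration only).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)
sigma = np.diag([0.5, 0.5])

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
eps = 0.1   # any 0 < eps <= tr[(rho - sigma)^2] for all free sigma
W = SWAP + (np.trace(rho @ rho) - eps) * np.eye(4) - 2.0 * np.kron(rho, np.eye(2))

def w_value(eta):
    # tr[W eta (x) eta], which should equal tr[(rho - eta)^2] - eps
    return float(np.trace(W @ np.kron(eta, eta)))

for eta in (rho, sigma):
    assert abs(w_value(eta) - (np.trace((rho - eta) @ (rho - eta)) - eps)) < 1e-12

print(w_value(rho), w_value(sigma))   # -0.1 (negative) and 0.4 (non-negative)
\end{verbatim}
If the free states were taken to be the diagonal qubit states, then $\tr[(\rho-\sigma)^2]\geq \tfrac{1}{2}$ for every such $\sigma$, so the choice $\varepsilon=0.1$ above is indeed admissible.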
\begin{theorem} \label{DiscriminationAdvantage} $\rho \in \ddh \backslash \ff$ if and only if there exist two channels $\Psi_0, \Psi_1$ acting on $\mathcal{D(H}^{\otimes 2})$ and prior probabilities $\{p_i\}_{i=0,1}$ such that $$\frac{\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\},\rho^{\otimes 2})}{\sup_{\sigma\in\ff}\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\},\sigma^{\otimes 2})}>1$$ \end{theorem} \begin{proof} The ``if'' direction is immediate: if $\rho\in\ff$, then the numerator is one of the terms appearing in the supremum over $\sigma\in\ff$ in the denominator, so the ratio cannot exceed $1$ for any choice of channels. For the other direction, we construct a channel discrimination task for which $\rho\in \ddh \backslash \ff$ gives an advantage. Given $\rho$, take any bounded, two-copy witness operator (the existence of which is shown in Proposition \ref{WitnessExistence}) and define $X:=\mathds{1}^{\otimes 2} - \frac{W}{\|W\|_{\infty}}$. Note that $X$ has the properties $\tr[X\rho^{\otimes 2}]>1$ and $0\leq \tr[X\sigma^{\otimes 2}]\leq 1$ for all $\sigma\in\ff$. Now consider the two CPTP maps given by \begin{equation} \begin{aligned} &\Psi_0(\eta^{\otimes 2}):= \left(\frac{\tr[\eta^{\otimes 2}]}{2} + \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{0}\!\bra{0} +\left(\frac{\tr[\eta^{\otimes 2}]}{2} - \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right) \ket{1}\!\bra{1}\\ &\Psi_1(\eta^{\otimes 2}):= \left(\frac{\tr[\eta^{\otimes 2}]}{2} - \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{0}\!\bra{0} + \left(\frac{\tr[\eta^{\otimes 2}]}{2} + \frac{\tr[X\eta^{\otimes 2}]}{2\|X\|_{\infty}}\right)\ket{1}\!\bra{1} \end{aligned} \end{equation} where $\ket{0},\ket{1}$ are mutually orthogonal states. Consider the task of determining which of these two channels was applied when there was an equal probability of picking either channel. By the Holevo-Helstrom theorem, the maximum success probability of correctly guessing is \begin{equation} \begin{aligned} \sup_{\{M_i\}}p_{succ}\left(\{\tfrac{1}{2},\Psi_i\},\{M_i\}, \eta^{\otimes 2}\right) &= \frac{1}{2}\left(1+\frac{1}{2} \|(\Psi_0 - \Psi_1)\eta^{\otimes 2}\|_1\right)\\ &=\frac{1}{2}\left(1+\frac{\tr[X\eta^{\otimes 2}]}{\|X\|_{\infty}}\right)\\ & \begin{cases} > \frac{1}{2}(1+\frac{1}{\|X\|_{\infty}}) & \text{for} \ \eta = \rho\\ \leq \frac{1}{2}(1+\frac{1}{\|X\|_{\infty}}) & \text{for} \ \eta = \sigma \in \ff \end{cases} \end{aligned} \end{equation} and therefore the ratio of success probabilities is \begin{equation} \begin{aligned} \frac{\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\},\rho^{\otimes 2})}{\sup_{\sigma\in\ff}\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\}_{i=0,1},\{M_i\},\sigma^{\otimes 2})}>\frac{\frac{1}{2}(1+\frac{1}{\|X\|_{\infty}})}{\frac{1}{2}(1+\frac{1}{\|X\|_{\infty}})} = 1 \end{aligned} \end{equation} \end{proof} This shows an operational advantage of every resource state in every resource theory in the channel discrimination setting. Remarkably, we only need two copies of the states to see such an advantage, even in the most general setting. Since the only assumption we make is that $\ff$ is closed, this result applies to every resource theory. \section{Generalized robustness monotone in nonconvex, continuous variable resource theories} We begin this section by introducing the generalized robustness monotones considered in this paper and discussing their properties.
\begin{definition}[generalized robustness]\label{robDef} The generalized robustness of a state $\rho$ with respect to the set of free states $\ff$ is given by \begin{equation} \rob :=\inf_{\tau\in\ddh}\left\{\lambda\in \mathbb{R}_{\geq 0} \colon \frac{\rho + \lambda \tau}{1+\lambda} = \sigma \in \ff \right\} \end{equation} \end{definition} The generalized robustness monotone was first introduced in entanglement theory \cite{original_robustness}, and later extended to arbitrary resource theories in finite dimensions \cite{finiteconvexrobustness} and convex resource theories within the framework of general probabilistic theories \cite{GPTRobustness}. $\rob$ quantifies the amount of resource in a state by the amount of noise that can be added via mixing with another state before all the resource is lost. As an operational interpretation, when used in discrete variable theories, it quantifies the maximal advantage given by a resource state in single copy channel discrimination tasks \cite{finiteconvexrobustness}. In the case where $\ff$ is nonconvex, it has been shown to quantify the worst case maximal advantage in single copy channel discrimination tasks, with respect to a decomposition of $\ff$ into convex sets \cite{finitenonconvex}. When used in continuous variable theories, definition \ref{robDef} no longer guarantees lower semi-continuity. We therefore use here the modified definition that defines the generalized robustness in such a way that it is automatically lower semi-continuous: \begin{definition}[lower semi-continuous generalized robustness]\label{lscrDef} The lower semi-continuous generalized robustness of $\rho\in\ddh$ with respect to $\ff$ is given by \cite{infiniteconvex} \begin{equation}\label{lscr} \begin{aligned} \lscr :&= \liminf_{\xi\xrightarrow{tn}\rho}\mathcal{R_F}(\xi)\\ &= \inf_{\{\tau_n\}_n, \xiseq}\left\{\lambda \in \mathbb{R}_{\geq 0} \colon \frac{\xi_n + \lambda \tau_n}{1+\lambda} = \sigma_n \in \ff, \tau_n, \xi_n \in \ddh, \ \xiseq \xrightarrow{\text{tn}} \rho \right\} \end{aligned} \end{equation} \end{definition} Note, this definition is shifted by $1$ compared to \cite{infiniteconvex} where it was first introduced. So far, this lower semi-continuous robustness $\lscr$, has only been studied for convex $\ff$. Whilst it is no longer a convex function when this assumption is removed, it is still monotonic and faithful. \begin{proposition} $\lscr$ when defined with respect to (not necessarily convex) $\ff$ is both (a) monotonic under free operations and (b) faithful. \end{proposition} \begin{proof} We prove each claim individually. To establish (a), from the definition of $\lscr$ we have $\forall \varepsilon > 0, \ \exists \ \sigma_n\in\ff$ and $\xi_n \xrightarrow{tn}\rho$ such that $\xi_n \leq (1+\lscr + \varepsilon)\sigma_n$. For any CP, linear, trace non-increasing map $\Phi$, positivity implies $\Phi(\xi_n)\leq (1+\lscr + \varepsilon)\Phi(\sigma_n)$, and contractivity of the trace norm under $\Phi$ means that $\Phi(\xi_n)\xrightarrow{tn} \Phi(\rho)$ also. Hence $\lscr$ is a suboptimal value for $\underline{\mathcal{R_F}}(\Phi(\rho))$, i.e. $\underline{\mathcal{R_F}}(\Phi(\rho))\leq \lscr$. To see (b) holds, if $\rho \in \ff$, clearly $\lscr = 0$. For the other implication, $\lscr=0$ implies $\exists$ a sequence $\lambda^{(k)} \to 0$ such that $\xi_n^{(k)}-\sigma_n^{(k)} = \lambda^{(k)}(\sigma_n^{(k)} - \tau_n^{(k)})$. 
For fixed $k$, we have $\|\xi_n^{(k)}-\sigma_n^{(k)}\|_1\leq 2\lambda^{(k)}$, and thus $\|\rho-\sigma_n^{(k)}\|_1 \leq \|\rho - \xi_n^{(k)}\|_1 + \|\xi_n^{(k)}-\sigma_n^{(k)}\|_1 \leq \|\rho - \xi_n^{(k)}\|_1 + 2\lambda^{(k)}$. Taking the limits $n\to \infty$ and $k\to \infty$ then shows that $\sigma_n^{(k)}$ converges to $\rho$ in trace norm, and hence, since $\ff$ is closed, $\rho\in\ff$. \end{proof} This means $\lscr$ is a valid resource quantifier in any resource theory, even in the continuous variable case when $\ff$ is nonconvex. A natural question is then to ask when the definitions \ref{robDef}, \ref{lscrDef} are equal. This has already been shown in \cite[Proposition~7]{longinfiniteconvex} under the condition that $\ff$ is compact, and therefore the two definitions coincide in finite dimensional resource theories. In the case of convex $\ff$, they are also shown to be equal if $\cone(\ff)$ is weak* closed \cite[Theorem~12]{longinfiniteconvex}. Our next result provides a proof that this equivalence holds even in the nonconvex case. \begin{proposition}\label{ClosedCone} If $\cone({\ff})$ is closed in the weak* topology, then $\lscr = \rob$. \end{proposition} \begin{proof} We have $\lscr \leq \rob$ by definition, so it remains to show that $\rob \leq \lscr$. Let $B_1 = \{X\in \mathcal{T(H)}: \|X\|_1 \leq 1\}$ be the trace norm unit ball and note that via the Banach-Alaoglu theorem \cite[Theorem~2.6.18]{Megginson}, $B_1$ is weak* compact. Consider the space $$\tilde{\mathcal{F}}=\cone({\mathcal{F}})\cap B_1.$$ Since $\cone(\ff)$ is weak* closed by assumption, $\tilde{\ff}$ is weak* (sequentially \footnote{Note that compactness and sequential compactness are not necessarily the same, however they are equivalent on metrizable spaces. The weak* topology is not itself induced by a metric, and hence the Banach-Alaoglu theorem only guarantees compactness. However, since $\mathcal{H}$ is separable and $\tilde{\ff}\subseteq \tth$ is compact, we can conclude that $\tilde{\ff}$ is also sequentially compact; see the discussion in \cite[Remark~1]{attainability} for details.} ) compact. By definition of the generalized robustness, $\forall \varepsilon > 0$ there exists some $\sigma \in \ff$ such that $\rho \leq (1+\rob +\varepsilon)\sigma$. Now take a sequence of states $\xiseq \xrightarrow{tn} \rho$ such that $\mathcal{R_F}(\xi_n)\to\lscr$, which exists by the definition of $\lscr$ as a $\liminf$ (if $\lscr=\infty$ the inequality $\rob\leq\lscr$ is trivial, so we may assume this limit is finite). Applying the above to each $\xi_n$ and picking $\varepsilon = \frac{1}{n}$ shows that there exists a sequence $\{\sigma_n\}_n$, $\sigma_n\in \ff$, such that $(1+\mathcal{R_F}(\xi_n)+\frac{1}{n})\sigma_n-\xi_n \geq 0$. Since $\ff \subseteq \tilde{\ff}$, there exists a weak* convergent subsequence $\sigma_{n_k}\xrightarrow{w*} \tilde{\sigma} \in \tilde{\ff}$. Take now some compact operator $K\in\kkh$ such that $K\geq 0$. We then have $\tr[K((1+\mathcal{R_F}(\xi_n)+\frac{1}{n})\sigma_n-\xi_n)]\geq 0$. Taking the $n_k$th elements of this and using the definition of weak* convergence gives $$\tr[K((1+\mathcal{R_F}(\xi_{n_k})+\frac{1}{n_k})\sigma_{n_k}-\xi_{n_k})] \xrightarrow{n_k \to \infty} \tr[K((1+\lscr)\tilde{\sigma}-\rho)]\geq 0$$ Picking $K=\ket{\psi}\!\bra{\psi}$ with $\ket{\psi}$ arbitrary, this gives $$\tr[K((1+\lscr)\tilde{\sigma}-\rho)]\geq 0 \ \text{ for all } \ket{\psi} \implies (1+\lscr)\tilde{\sigma}-\rho\geq 0$$ Note that since $\tilde{\sigma}\in \tilde{\ff}$, it is of the form $\tilde{\sigma}=\mu \sigma$ for some $0\leq \mu \leq 1$ and $\sigma \in \ff$.
From this we conclude that for any $\varepsilon >0$, $$\rho \leq (1+\lscr)\mu\sigma \leq (1+\lscr)\sigma \leq (1+\lscr+\varepsilon)\sigma$$ By definition $\rob$ is the minimal quantity for which $\rho \leq (1+\lscr+\varepsilon)\sigma$ holds $\forall \varepsilon$, hence it follows that $\rob \leq \lscr$. \end{proof} Although the requirement in Proposition \ref{ClosedCone} may seem rather abstract, it has been shown to hold true for the free states in many cases of resource theories, most notably separable states \cite[Lemma~25]{longinfiniteconvex}, incoherent states \cite[Lemma~31]{longinfiniteconvex}. Crucially, it also holds for the set of Gaussian states \cite[Lemma~34]{attainability}, allowing for an easier evaluation of advantages in one of the most important resource theories in the continuous variable, nonconvex setting: non-Gaussianity. \subsection{Interpretations for generalized robustness} In this section, we establish operational interpretations for $\lscr$ when used in a general setting, without convexity or finite dimensional restrictions. We find it can be used to provide an upper bound on the advantage in multi-copy channel discrimination, and thus remains a valuable quantifier in channel discrimination tasks. We also investigate cases in which $\lscr$ exactly quantifies the worst case maximal advantage in a convex set decomposition of $\ff$, extending results found in \cite{finitenonconvex} to the continuous variable setting. \subsubsection{Upper bound in multi-copy channel discrimination} We begin by defining a multi-copy variant of the generalized robustness monotone. \begin{definition} The $m$-copy generalized robustness $\rob^{(m)}$ is defined as \begin{equation} \rob^{(m)}:=\inf_{\tau\in\mathcal{D(H}^{\otimes m})}\left\{\lambda \in \mathbb{R}_{\geq 0} \colon \frac{\rho^{\otimes m}+\lambda \tau}{1+\lambda}=\sigma^{\otimes m}, \ \sigma \in \ff \right\} \end{equation} \end{definition} This quantifies the resource in a state $\rho$ by how much $m$ copies of $\rho$ can be mixed with another state before it becomes $m$ copies of some free state. We further define a lower semi-continuous version of $\rob^{(m)}$. \begin{definition} The lower semi-continuous $m$-copy generalized robustness is defined as \begin{equation} \begin{aligned} \lscr^{(m)}:&=\liminf_{\xi\to\rho}\mathcal{R_{\ff}}(\xi)^{(m)}\\ &=\inf_{\xiseq \to \rho}\inf_{\tau\in\mathcal{D(H}^{\otimes m})}\left\{\lambda \in \mathbb{R}_{\geq 0} \colon \frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}, \ \sigma_n \in \ff \right\} \end{aligned} \end{equation} \end{definition} We can relate this multi-copy version of robustness to the single copy version by using the link between robustness and max-relative entropy \cite{min-maxRE}, along with the known additivity property of the latter. \begin{proposition} For any integer m, we have $(1+\lscr^{(m)})=(1+\lscr)^m$ \end{proposition} \begin{proof} \begin{equation} \begin{aligned} 1+\lscr^{(m)}&=\liminf_{\xi_n\to\rho}(1+\mathcal{R_F}(\xi_n)^{(m)})\\ &=\liminf_{\xi_n\to\rho} \inf_{\sigma_n\in\ff}\exp (D_{max}(\xi_n^{\otimes m}\|\sigma_n^{\otimes m}))\\ &=\liminf_{\xi_n\to\rho} \inf_{\sigma_n\in\ff}\exp (mD_{max}(\xi_n\|\sigma_n))\\ &=\liminf_{\xi_n\to\rho}(1+\mathcal{R_F}(\xi_n))^m\\ &=(1+\lscr)^m \end{aligned} \end{equation} \end{proof} Note that the faithfulness and monotonicity under free operations of $\lscr^{(m)}$ follow directly from this relation. 
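The additivity used in the proof above, $D_{max}(\xi^{\otimes m}\|\sigma^{\otimes m})=m\,D_{max}(\xi\|\sigma)$, can be verified numerically in low dimensions via the characterization $D_{max}(\rho\|\sigma)=\log_2\lambda_{\max}\!\left(\sigma^{-1/2}\rho\,\sigma^{-1/2}\right)$, valid for full-rank $\sigma$. The following minimal Python sketch checks the case $m=2$; the two qubit density matrices are arbitrary illustrative choices and play no role elsewhere in the paper.
\begin{verbatim}
import numpy as np

def d_max(rho, sigma):
    # D_max(rho||sigma) = log2 of the largest eigenvalue of
    # sigma^{-1/2} rho sigma^{-1/2}, assuming sigma is full rank
    vals, vecs = np.linalg.eigh(sigma)
    s_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return np.log2(np.linalg.eigvalsh(s_inv_half @ rho @ s_inv_half).max())

# arbitrary full-rank qubit states (illustrative choices only)
rho   = np.array([[0.8, 0.3], [0.3, 0.2]])
sigma = np.array([[0.6, 0.1], [0.1, 0.4]])

single = d_max(rho, sigma)
double = d_max(np.kron(rho, rho), np.kron(sigma, sigma))
print(single, double, np.isclose(double, 2 * single))   # additivity: double == 2*single
\end{verbatim}
This elementary fact is what underlies the relation $(1+\lscr^{(m)})=(1+\lscr)^m$ established above.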
By adapting part of the proof of \cite[Theorem~10]{longinfiniteconvex}, we can show that the multi-copy lower semi-continuous robustness $\lscr^{(m)}$ upper bounds the maximal advantage given by $\rho$ in multi-copy channel discrimination tasks. The link between $\lscr^{(m)}$ and $\lscr$ then allows us to show that $\lscr$ can also be used to upper bound such an advantage. \begin{theorem} For any state $\rho\in\ddh$, any integer $m\geq 1$ and any channel ensemble $\{p_i,\Psi_i\}$ acting on $\mathcal{D(H}^{\otimes m})$, \begin{equation} \frac{\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m})}{\sup_{\{M_i\}}\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m})}\leq (1+\lscr)^m \end{equation} \end{theorem} \begin{proof} Consider a sequence of the form $\{\xi_n^{\otimes m}\}_n = \{(1+\lambda)\sigma_n^{\otimes m} - \lambda \tau_n\}_n$ such that $\xiseq \to \rho$ (or equivalently $\{\xi_n^{\otimes m}\}_n\to \rho^{\otimes m}$). For a set of channels $\{\Psi_i\}$ and POVM $\{M_i\}$, the success probability of correctly guessing with $m$ copies of the state is \begin{equation} \begin{aligned} p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m}) &= \sum_ip_i\tr[M_i\Psi_i(\rho^{\otimes m})]\\ &=\lim_{n\to\infty}\sum_ip_i\tr[M_i\Psi_i(\xi_n^{\otimes m})] \\ &\leq \limsup_{n\to\infty}\sum_ip_i\tr[M_i\Psi_i((1+\lambda)\sigma_n^{\otimes m})]\\ &\leq (1+\lambda)\sup_{\tilde{\sigma}\in\ff}\sum_ip_i\tr[M_i\Psi_i(\tilde{\sigma}^{\otimes m})]\\ &=(1+\lambda)\sup_{\tilde{\sigma}\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\tilde{\sigma}^{\otimes m}) \end{aligned} \end{equation} Since this holds for any $\lambda$ such that $\frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}$ with $\sigma_n \in \ff$, we have \begin{equation} \begin{aligned} &p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m})\\ &\leq \inf\left\{(1+\lambda)\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m}) \colon \frac{\xi_{n}^{\otimes m}+\lambda \tau_n}{1+\lambda}=\sigma_{n}^{\otimes m}, \ \sigma_n \in \ff\right\}\\ &=(1+\lscr^{(m)})\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m}) \end{aligned} \end{equation} and hence we have \begin{equation} \frac{p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m})}{\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m})}\leq (1+\lscr^{(m)}) = (1+\lscr)^m \end{equation} and since this holds for an arbitrary $\povm$ we have \begin{equation} \sup_{\{M_i\}}\frac{p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m})}{\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m})}\leq(1+\lscr)^m \end{equation} The above considers the case in which the same measurement strategy is used throughout for $\rho$ and for the free states. We can also then upper bound the more realistic scenario in which the measurement strategy is optimized individually for $\rho$ and for $\sigma\in\ff$, since optimizing the denominator separately can only reduce the overall ratio of success probabilities: \begin{equation} \frac{\sup_{\{M_i\}}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho^{\otimes m})}{\sup_{\{M_i\}}\sup_{\sigma\in\ff}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma^{\otimes m})}\leq (1+\lscr)^m \end{equation} \end{proof} In convex resource theories, the above bound is tight for $m=1$ \cite{infiniteconvex}. For nonconvex resource theories, the bound may not be tight, and we require at least $m=2$ to show a genuine advantage of resource states. This shows that, even in the case of nonconvex resource theories, $\lscr$ still retains a valuable interpretation in quantifying the advantage in channel discrimination tasks by providing an upper bound on the maximum advantage that can be achieved.
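To connect the bound above to a computable quantity, it is instructive to evaluate the generalized robustness in a simple convex example. A minimal sketch, assuming purely for illustration the resource theory of coherence on a qubit (free states: the diagonal states) with $\rho=\ket{+}\!\bra{+}$: in this case $\rob=\min\{\tr[D]-1 \colon D \text{ diagonal},\ D\geq\rho\}$, a small semidefinite program, which the Python code below evaluates with the \texttt{cvxpy} package. All of these choices are illustrative assumptions rather than constructions from the text.
\begin{verbatim}
import numpy as np
import cvxpy as cp

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)                # maximally coherent qubit state

d = cp.Variable(2, nonneg=True)           # diagonal of the dominating incoherent operator D
problem = cp.Problem(cp.Minimize(cp.sum(d) - 1.0),
                     [cp.diag(d) - rho >> 0])   # D >= rho in the semidefinite order
problem.solve()

R = problem.value                         # generalized robustness of coherence; equals 1 here
m = 2
print(R, (1.0 + R) ** m)                  # (1 + R)^m bounds the m-copy advantage
\end{verbatim}
The convexity of the free set is what makes this reformulation as a convex program possible; the example is included only to attach a concrete number to the right-hand side of the bound.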
\subsubsection{Worst case advantage} Let $\ff=\bigcup_k \ff_k$ be a decomposition of $\ff$ into a union of convex sets $\ff_k$. In the finite dimensional case, $\rob$ has been shown to quantify the worst case maximal advantage given by $\rho$ when compared to $\sigma\in\ff_k$ in single copy channel discrimination tasks \cite{finitenonconvex}. The aim of this section is to investigate conditions on $\ff$ for which this result also holds for $\lscr$ in the infinite dimensional case. Whilst it is not necessarily true in full generality, we show it does hold true in most cases of interest. We begin by looking at conditions under which the nonconvex robustness and the worst case convex robustness are equal. \begin{proposition}\label{lscr=inflscr} Let $\ff=\bigcup_k \ff_k$ where each $\ff_k$ is closed and convex, and let $\rho\in\ddh\backslash\ff$. Then \begin{equation} \lscr = \inf_k\underline{\mathcal{R_{F}}_k}(\rho) \end{equation} provided one of the following conditions is true: \begin{enumerate} \item $\lscr = \rob$ \item The decomposition $\ff = \bigcup_k\ff_k$ is composed of a finite number of convex sets \end{enumerate} \end{proposition} \begin{proof} Firstly, note that the following inequality always holds: \begin{equation} \lscr \leq \inf_k\underline{\mathcal{R_{F}}_k}(\rho) \end{equation} This is because, since $\ff_k \subseteq \ff$, we have $\lscr \leq \underline{\mathcal{R_{F}}_k}(\rho)$ for all $k$. Taking the infimum over $k$ then yields $\lscr \leq \inf_k\underline{\mathcal{R_{F}}_k}(\rho)$ as required. For the reverse inequality, we prove each case individually. \begin{enumerate} \item Let $\rob=\lscr$. From the definition of $\rob$, we have that $\forall\varepsilon > 0$ there exists some $\sigma \in \ff$ such that $\rho \leq (1+\rob +\varepsilon)\sigma$. Since $\ff = \bigcup_k \ff_k$, it must be that $\sigma\in \ff_k$ for some $k$ (possibly depending on $\varepsilon$), and hence $\mathcal{R_{\ff}}_k(\rho)\leq \rob+\varepsilon$. Therefore $\inf_k \underline{\mathcal{R_{\ff}}_k}(\rho) \leq \inf_k \mathcal{R_{\ff}}_k(\rho) \leq \rob+\varepsilon$ for every $\varepsilon>0$, and so $\inf_k \underline{\mathcal{R_{\ff}}_k}(\rho) \leq \rob = \lscr$ as required. \item From the definition of $\lscr$, $\forall \varepsilon >0$, there exist sequences $\xiseq \xrightarrow{tn}\rho$ and $\{\sigma_n\}_n$ in $\ff$ such that $\xi_n \leq (1+ \lscr + \varepsilon)\sigma_n$. For $\ff = \bigcup_k\ff_k$, since the union into convex sets is finite, there will always exist (at least) one of these convex sets, say $\ff_{\tilde{k}}$, that contains a subsequence $\{\sigma_{n_j}\}_{n_j}$ of $\{\sigma_n\}_{n}$. This means that we have $\xi_{n_j} \leq (1+ \lscr + \varepsilon)\sigma_{n_j}$ with $\sigma_{n_j}\in\ff_{\tilde{k}}$, and therefore $\lscr+\varepsilon$ is an admissible value in the definition of $\underline{\mathcal{R_{F}}_{\tilde{k}}}(\rho)$, i.e. $\underline{\mathcal{R_{F}}}_{\tilde{k}}(\rho)\leq\lscr+\varepsilon$. Hence $\inf_k\underline{\mathcal{R_{F}}_k}(\rho)\leq\lscr+\varepsilon$ for every $\varepsilon>0$, and so $\lscr \geq \inf_k\underline{\mathcal{R_{F}}_k}(\rho)$. \end{enumerate} \end{proof} The first condition covers the cases where $\ff$ is compact, which includes any energy constrained resource theory, any resource theory where the set of free states is finite, and recovers the result in finite dimensions. It also covers the case where $\cone(\ff)$ is weak* closed even when nonconvex, and thus includes the resource theory of non-Gaussianity.
The second condition relates to practically relevant cases where the free states are given by multiple constraints, as considered in \cite{finitenonconvex}, such as being incoherent in one of multiple bases. Our next result establishes that, under the conditions of Proposition \ref{lscr=inflscr}, $\lscr$ quantifies the worst case maximal advantage given by a resource state in a single-copy channel discrimination task, with respect to the decomposition into convex sets. \begin{theorem} When $\lscr = \inf_k\underline{\mathcal{R_{F}}_k}(\rho)$, we have $$\inf_k \sup_{\{M_i\},\{p_i,\Psi_i\}} \frac{p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma)} = 1 + \lscr$$ \end{theorem} \begin{proof} Since each $\ff_k$ is convex, we can use the result from \cite[Theorem~1]{infiniteconvex}, and hence \begin{equation} \sup_{\{M_i\},\{p_i,\Psi_i\}} \frac{p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma)} = 1 + \underline{\mathcal{R_{F}}_k}(\rho) \end{equation} Taking the infimum over $k$ on each side then gives \begin{equation} \begin{aligned} \inf_k \sup_{\{M_i\},\{p_i,\Psi_i\}} \frac{p_{succ}(\{p_i,\Psi_i\},\{M_i\},\rho)}{\sup_{\sigma\in\ff_k}p_{succ}(\{p_i,\Psi_i\},\{M_i\},\sigma)} &=1 + \inf_k\underline{\mathcal{R_{F}}_k}(\rho)\\ &= 1 + \lscr \end{aligned} \end{equation} \end{proof} This generalizes the result from \cite[Theorem~4]{finitenonconvex} and shows that, in most cases of practical interest, $\lscr$ remains an exact quantifier for the advantage given in a variant of channel discrimination tasks, even in the continuous variable regime and without convexity restrictions. \section{Generalized robustness of non-Gaussianity} In this section, we specialize to the resource theory of non-Gaussianity. We take the set of free states to be the set of all Gaussian states, $\mathcal{G}$, i.e. all states with a Gaussian Wigner distribution. This forms a nonconvex subset of the infinite dimensional space $\ddh$, and therefore does not fit into previously studied frameworks. Since the cone generated by Gaussian states is weak* closed \cite{attainability}, Proposition \ref{ClosedCone} implies that the definitions \ref{robDef}, \ref{lscrDef} are equal, and hence we define the lower semi-continuous robustness of non-Gaussianity as: \begin{equation} \underline{\mathcal{R_G}}(\rho)=\mathcal{R_G}(\rho):=\inf_{\tau\in\ddh}\left\{\lambda\in \mathbb{R}_{\geq 0} \colon \frac{\rho + \lambda \tau}{1+\lambda} = \sigma \in \mathcal{G} \right\} \end{equation} Using that a convex set decomposition of $\mathcal{G}$ consists of an infinite number of sets, each containing a single state, we note that an interpretation of $\underline{\mathcal{R_G}}(\rho)$ is the following: given the choice between using $\rho$ and an arbitrary (but fixed) Gaussian state $\sigma$, there will always exist a single-copy channel discrimination task for which $\rho$ performs better than $\sigma$ by a factor arbitrarily close to $1+\underline{\mathcal{R_G}}(\rho)$. \subsection{Fock states} As an example, in this section we directly calculate $\lscr$ for Fock states $\ket{n}\!\bra{n}$. We begin by discussing known results for $\rob$.
For pure states $\ket{\psi}\!\bra{\psi}$ we have \cite[Lemma~13]{longinfiniteconvex} \begin{equation}\label{upper} \mathcal{R_F}(\ket{\psi}\!\bra{\psi})+1= \inf_{\sigma \in \ff} \bra{\psi}\sigma^{-1}\ket{\psi} \end{equation} and for arbitrary states we can use that the relative entropy always lower bounds the max-relative entropy: \begin{equation}\label{lower} \inf_{\sigma\in\ff}D(\rho\| \sigma) \leq \inf_{\sigma\in\ff}D_{max}(\rho\| \sigma) =\log_{2}(\rob+1) \end{equation} We can then use the above bounds to calculate the robustness of Fock states. \begin{proposition} For an $n$-particle Fock state $\ket{n}\!\bra{n}$, the generalized robustness of non-Gaussianity is given by \begin{equation} \underline{\mathcal{R_G}}(\ket{n}\!\bra{n}) = \frac{(n+1)^{n+1}}{n^n}-1 \end{equation} \end{proposition} \begin{proof} We use the reference Gaussian state of the Fock state $\ket{n}\!\bra{n}$ (i.e. the Gaussian state with the same first and second moments), which is the thermal state $\tau$ with average photon number $n$, written as \begin{equation} \tau := \frac{1}{n+1}\sum_{m=0}^{\infty}\left(\frac{n}{n+1}\right)^m\ket{m}\!\bra{m} \end{equation} For an upper bound we then use \eqref{upper} to get \begin{equation} \underline{\mathcal{R_G}}(\ket{n}\!\bra{n})\leq\bra{n}\tau^{-1}\ket{n} -1 = \frac{(n+1)^{n+1}}{n^n}-1 \end{equation} For the lower bound, the relative entropy of non-Gaussianity is minimized by the Gaussian reference state $\tau$, and its value is given for pure states by the von Neumann entropy of $\tau$ \cite{GREmimimized}. We hence have \begin{equation} \begin{aligned} \inf_{\sigma \in \mathcal{G}}D(\ket{n}\!\bra{n}\|\sigma) &= -\tr(\tau \log_2 \tau)\\ &= \log_2 \left(\frac{(n+1)^{n+1}}{n^n}\right) \\ &\leq \log_{2}(\underline{\mathcal{R_G}}(\ket{n}\!\bra{n})+1) \end{aligned} \end{equation} The upper and lower bounds coincide, which gives the result. \end{proof} We can also consider the multimode case where each mode $m_i$ has a Fock state with photon number $n_i$, with the overall state given by $\ket{\psi} = \ket{n_1} \otimes \dots \otimes \ket{n_m}$. In this case, the Gaussian reference state is a tensor product of thermal states with mean photon number $n_i$ in each mode, and, using the same method as above, we get \begin{equation} \underline{\mathcal{R_G}}(\ket{\psi}\!\bra{\psi}) = \prod_{i=1}^m\left(\frac{(n_i+1)^{(n_i+1)}}{n_i^{n_i}}\right)-1 \end{equation} \section{Conclusions} \newpage \bibliography{bib} \bibliographystyle{unsrt} \end{document}
2412.13075v1
http://arxiv.org/abs/2412.13075v1
Monogenic Cyclic Cubic Trinomials
\documentclass[12]{amsart} \usepackage{amsmath,amssymb,amsthm,color,enumerate,comment,centernot,enumitem,url,cite} \usepackage{graphicx,relsize,bm} \usepackage{mathtools} \usepackage{array} \makeatletter \newcommand{\tpmod}[1]{{\@displayfalse\pmod{#1}}} \makeatother \newcommand{\ord}{\operatorname{ord}} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{sch}[thm]{Scholium} \newtheorem{guess}[thm]{Guess} \newtheorem{ex}[thm]{Example} \newtheorem{exs}[thm]{Examples} \newtheorem{question}[thm]{Question} \theoremstyle{remark} \newtheorem*{remark}{{\bf Remark}} \newtheorem*{rems}{{\bf Remarks}} \newtheorem*{note}{Note} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{hyp}[thm]{Hypothesis} \newtheorem{rem}[thm]{Remark} \newtheorem{Con}{Conditions}[section] \theoremstyle{THM} \newtheorem*{THM}{Theorem} \DeclareMathOperator{\st}{\left\vert\vphantom{\frac{1}{1}}\right.} \DeclareMathOperator{\lcm}{lcm} \newcommand{\abs}[1]{\left|{#1}\right|} \def\ds{\displaystyle} \def\UU {{\widehat{\mathcal U}}} \def\g {{\widehat{g}}} \def\uu {{\widehat{u}}} \def\VV {{\widehat{\mathcal V}}} \def\vv {{\widehat{v}}} \def\P {{\mathcal P}} \def\O {{\mathcal O}} \def\p {{\mathfrak p}} \def\FF {{\mathcal F}} \def\GG {{\mathcal G}} \def\TT {{\mathcal T}} \def\R {{\mathbb R}} \def\Z {{\mathbb Z}} \def\ZZ {{\mathcal Z}} \def\NN {{\mathcal N}} \def\QQ {{\mathcal Q}} \def\N{{\mathbb N}} \def\Q {{\mathbb Q}} \def\T {{\mathcal T}} \def\E {{\mathcal E}} \def\B {{\mathcal B}} \def\C {{\mathcal C}} \def\S {{{\mathcal S}}} \def\A {{\mathcal A}} \def\D {{\mathcal D}} \def\L {{\mathcal L}} \def\Norm{{\mathcal N}} \def\M {{\mathcal M}} \def\h {{\mathfrak h}} \def\X {{\mathcal X}} \def\U {{\mathcal U}} \def\V {{\mathcal V}} \def\K{{\mathcal K}} \def\PP {{\widehat{\mathcal P}}} \def\d {{\rm det}} \def\PPP {{{\mathfrak P}}} \newcommand{\im}{\operatorname{im}} \def\k {k} \def\GG {{\mathcal G}} \def\UU {{\widehat{\mathcal U}}} \def\uu {{\widehat{u}}} \def\VV {{\widehat{\mathcal V}}} \def\vv {{\widehat{v}}} \def\P {{\mathcal P}} \def\F {{\mathbb F}} \def\D {{\mathcal D}} \def\Z {{\mathbb Z}} \def\Q {{\mathbb Q}} \def\C {{\mathbb C}} \def\S {{{\mathcal S}}} \def\U {{\mathcal U}} \def\V {{\mathcal V}} \def\W {{\mathcal W}} \def\PP {{\widehat{\mathcal P}}} \def\RR {{{\mathcal R}}} \def\PPP {{{\mathfrak P}}} \def\CC {{\mathcal C}} \def\Gal{{\mbox{{\rm{Gal}}}}} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \def\n{{\mathbf n}} \def\tr{\mathrm{tr}} \def\red#1 {\textcolor{red}{#1 }} \def\blue#1 {\textcolor{blue}{#1 }} \numberwithin{equation}{section} \def\ds{\displaystyle} \def\Z {{\mathbb Z}} \newcommand{\Mod}[1]{\ (\mathrm{mod}\enspace #1)} \newcommand{\mmod}[1]{\ \mathrm{mod}\enspace #1} \begin{document} \title[Monogenic Cyclic Cubic Trinomials]{Monogenic Cyclic Cubic Trinomials} \author{Lenny Jones} \address{Professor Emeritus, Department of Mathematics, Shippensburg University, Shippensburg, Pennsylvania 17257, USA} \email[Lenny~Jones]{[email protected]} \date{\today} \begin{abstract} A series of recent articles has shown that there exist only three monogenic cyclic quartic trinomials in ${\mathbb Z}[x]$, and they are all of the form $x^4+bx^2+d$. In this article, we conduct an analogous investigation for cubic trinomials in ${\mathbb Z}[x]$. 
Two irreducible cyclic cubic trinomials are said to be \emph{equivalent} if their splitting fields are equal. We show that there exist two infinite families of non-equivalent monogenic cyclic cubic trinomials of the form $x^3+Ax+B$. We also show that there exist exactly four monogenic cyclic cubic trinomials of the form $x^3+Ax^2+B$, all of which are equivalent to $x^3-3x+1$. \end{abstract} \subjclass[2020]{Primary 11R16, 11R32}\keywords{monogenic, cubic, trinomial, Galois} \maketitle \section{Introduction}\label{Section:Intro} A monic polynomial $f(x)\in \Z[x]$ of degree $n$ that is irreducible over $\Q$ is called \emph{cyclic} if the Galois group over $\Q$ of $f(x)$, denoted $\Gal(f)$, is the cyclic group $C_n$ of order $n$, while $f(x)$ is called \emph{monogenic} if $\{1,\theta,\theta^2,\ldots, \theta^{n-1}\}$ is a basis for the ring of integers of $\Q(\theta)$, where $f(\theta)=0$. Hence, $f(x)$ is monogenic if and only if $\Z_K=\Z[\theta]$, where $\Z_K$ denotes the ring of integers of $K=\Q(\theta)$. For the minimal polynomial $f(x)$ of an algebraic integer $\theta$ over $\Q$, it is well known \cite{Cohen} that \begin{equation} \label{Eq:Dis-Dis} \Delta(f)=\left[\Z_K:\Z[\theta]\right]^2\Delta(K), \end{equation} where $\Delta(f)$ and $\Delta(K)$ are the respective discriminants over $\Q$ of $f(x)$ and the number field $K$. Thus, from \eqref{Eq:Dis-Dis}, $f(x)$ is monogenic if and only if $\Delta(f)=\Delta(K)$. In a series of recent articles \cite{HJF1,JonesBAMSEven,JonesQuartic} it was shown that the only monogenic cyclic quartic trinomials are \begin{equation*}\label{Eq:Complete} x^4-4x^2+2,\quad x^4+4x^2+2\quad \mbox{and} \quad x^4-5x^2+5. \end{equation*} In this article, we investigate the analogous question for monogenic cyclic cubic trinomials. We say that two monogenic cyclic cubic trinomials are \emph{equivalent} if their splitting fields are equal. Otherwise, we say they are \emph{distinct}. Trivially, a monogenic cyclic cubic trinomial is equivalent to itself. Our main result is: \begin{thm}\label{Thm:Main} Let $A,B\in \Z$ with $AB\ne 0$. \begin{enumerate} \item \label{Main:I1} There exist two infinite single-parameter families, $\FF_1$ and $\FF_2$, of monogenic cyclic cubic trinomials of the form $x^3+Ax+B$, such that $\FF_1 \cap \FF_2=\varnothing$ and no two distinct elements of $\FF_1 \cup \FF_2$ are equivalent. \item \label{Main:I2} There are exactly four monogenic cyclic trinomials of the form $x^3+Ax^2+B$: \[x^3+3x^2-3,\quad x^3-3x^2+3,\quad x^3-3x^2+1, \quad x^3+3x^2-1;\] and they are all equivalent to $T(x)=x^3-3x+1\in \FF_1$. \end{enumerate} \end{thm} Monogenic cyclic cubic fields have been investigated by various authors \cite{A,Gras1,Gras2,KS}. Gras \cite{Gras1,Gras2} gave three characterizations of monogenic cyclic cubic fields in terms of the solvability of certain Diophantine equations. More recently, Kashio and Sekigawa \cite{KS} showed that every monogenic cyclic cubic field is the splitting field of $x^3-tx^2-(t+3)x-1$ for some $t\in \Z$, and they gave a description of this characterization in terms of the parameter $t$. Most of these characterizations involve the conductor of the field. Theorem \ref{Thm:Main} represents a new and different approach to the characterization of monogenic cyclic cubic fields with a specific focus on trinomials and their particular forms. \begin{rem} The polynomials $x^3-tx^2-(t+3)x-1$ were first studied by Daniel Shanks \cite{Shanks}, and their splitting fields are known as the \emph{simplest cubic fields}.
\end{rem} \section{Preliminaries}\label{Section:Prelim} The formula for the discriminant of an arbitrary monic trinomial, due to Swan \cite[Theorem 2]{Swan}, is given in the following theorem. \begin{thm} \label{Thm:Swan} Let $f(x)=x^n+Ax^m+B\in \Q[x]$, where $0<m<n$, and let $d=\gcd(n,m)$. Then $\Delta(f)=$ \[ (-1)^{n(n-1)/2}B^{m-1}\left(n^{n/d}B^{(n-m)/d}-(-1)^{n/d}(n-m)^{(n-m)/d}m^{m/d}A^{n/d}\right)^d. \] \end{thm} \noindent Note that if $f(x)$ is a cubic trinomial, then $f(x)$ is cyclic if and only if $\Delta(f)$ is a square. The next result, due to Jakhar, Khanduja and Sangwan \cite[Theorem 1.1]{JKS2}, is helpful in determining the monogenicity of a trinomial. We use the notation $q^e\mid \mid W$ to mean that $q^e$ is the exact power of the prime $q$ that divides the integer $W$. \begin{thm}{\rm \cite{JKS2}}\label{Thm:JKS} Let $N\ge 2$ be an integer. Let $K=\Q(\theta)$ be an algebraic number field with $\theta\in \Z_K$, the ring of integers of $K$, having minimal polynomial $f(x)=x^{N}+Ax^M+B$ over $\Q$, with $\gcd(M,N)=d_0$, $M=M_1d_0$ and $N=N_1d_0$. A prime factor $q$ of $\Delta(f)$ does not divide $\left[\Z_K:\Z[\theta]\right]$ if and only if $q$ satisfies one of the following conditions: \begin{enumerate}[font=\normalfont] \item \label{JKS:I1} when $q\mid A$ and $q\mid B$, then $q^2\nmid B$; \item \label{JKS:I2} when $q\mid A$ and $q\nmid B$, then \[\mbox{either } \quad q\mid A_2 \mbox{ and } q\nmid B_1 \quad \mbox{ or } \quad q\nmid A_2\left((-B)^{M_1}A_2^{N_1}-\left(-B_1\right)^{N_1}\right),\] where $A_2=A/q$ and $B_1=\frac{B+(-B)^{q^j}}{q}$ with $q^j\mid\mid N$; \item \label{JKS:I3} when $q\nmid A$ and $q\mid B$, then \begin{gather*} \mbox{either} \quad q\mid A_1 \quad \mbox{and}\quad q\nmid B_2\\ \mbox{or}\\ q\nmid A_1B_2^{M-1}\left((-A)^{M_1}A_1^{N_1-M_1}-\left(-B_2\right)^{N_1-M_1}\right), \end{gather*} where $A_1=\frac{A+(-A)^{q^\ell}}{q}$ with $q^\ell\mid\mid (N-M)$, and $B_2=B/q$; \item \label{JKS:I4} when $q\nmid AB$ and $q\mid M$ with $N=s^{\prime}q^k$, $M=sq^k$, $q\nmid \gcd\left(s^{\prime},s\right)$, then the polynomials \begin{equation*} H_1(x):=x^{s^{\prime}}+Ax^s+B \quad \mbox{and}\quad H_2(x):=\dfrac{Ax^{sq^k}+B+\left(-Ax^s-B\right)^{q^k}}{q} \end{equation*} are coprime modulo $q$; \item \label{JKS:I5} when $q\nmid ABM$, then \[q^2\nmid \left(B^{N_1-M_1}N_1^{N_1}-(-1)^{M_1}A^{N_1}M_1^{M_1}(M_1-N_1)^{N_1-M_1} \right).\] \end{enumerate} \end{thm} Although the following lemma is little more than an observation, it will be useful enough to state formally. \begin{lemma}\label{Lem:Equiv} Let $f(x)$ and $g(x)$ be monogenic cyclic cubic trinomials. Then $f(x)$ and $g(x)$ are equivalent if and only if $\Delta(f)=\Delta(g)$. \end{lemma} \begin{proof} Let $K$ be the splitting field of $f(x)$. Then the lemma is an immediate consequence of the fact that because $f(x)$ is monogenic, it follows from \eqref{Eq:Dis-Dis} that $\Delta(f)=\Delta(K)$. \end{proof} \section{The Proof of Theorem \ref{Thm:Main}}\label{Section:MainProof} \begin{proof} For item \eqref{Main:I1}, let $k\in \Z$. Define \begin{gather} \label{F1} f_{1,k}(x):=x^3-3\delta_1 x+(6k+1)\delta_1\\ \nonumber \mbox{and}\\ \label{F2} f_{2,k}(x)=x^3-\delta_2 x+(2k+1)\delta_2, \end{gather} where \[\delta_1:=9k^2+3k+1 \quad \mbox{and} \quad \delta_2:=27k^2+27k+7.\] Then, we have from Theorem \ref{Thm:Swan} that \begin{equation}\label{Eq:Dis} \Delta(f_{1,k})=3^4\delta_1^2 \quad \mbox{and} \quad \Delta(f_{2,k})=\delta_2^2. 
\end{equation} Observe that \[f_{1,k}(x)\equiv f_{2,k}(x)\equiv x^3+x+1 \pmod{2}.\] Hence, since $x^3+x+1$ is irreducible in $\F_2[x]$, we deduce that $f_{i,k}(x)$ is irreducible over $\Q$. Therefore, since $\Delta(f_{i,k})$ is a perfect square, it follows that $\Gal(f_{i,k})\simeq C_3$. We claim that $f_{i,k}(x)$ is monogenic if and only if $\delta_i$ is squarefree. We give details only for $i=1$ since the approach is similar for $i=2$. Let $\Z_K$ denote the ring of integers of $K=\Q(\theta)$, where $f_{1,k}(\theta)=0$. Suppose first that $q^2\mid \delta_1$ for some prime $q$. Then, it is easy to see that condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} is not satisfied, so that $q\mid [\Z_K:\Z[\theta]]$ and $f_{1,k}(x)$ is not monogenic. Conversely, suppose that $\delta_1$ is squarefree, and let $q$ be a prime divisor of $\delta_1$. Note that $q\not \in \{2,3\}$. If $q\mid (6k+1)$, then $k\equiv -1/6 \pmod{q}$ and \[\delta_1 \equiv 9(-1/6)^2+3(-1/6)+1\equiv 3/4 \not \equiv 0 \pmod{q}.\] Hence, $q^2\nmid (6k+1)\delta_1$, so that condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} is satisfied. Thus, $q\nmid [\Z_K:\Z[\theta]]$. We must still examine condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS} for the prime $q=3$. In this case we have that \[B=(6k+1)\delta_1, \quad A_2=-\delta_1 \quad \mbox{and}\quad B_1=\frac{(6k+1)\delta_1-(6k+1)^3\delta_1^3}{3}.\] As previously noted, $3\nmid A_2$ so that the first possibility in condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS} is not satisfied. Using Maple to examine the second possibility in condition \eqref{JKS:I2} yields \[(-B)^{M_1}A_2^{N_1}-\left(-B_1\right)^{N_1}=(6k+1)\delta_1^4+\left(\frac{(6k+1)\delta_1-(6k+1)^3\delta_1^3}{3}\right)^3\equiv 1 \pmod{3},\] which confirms that this second possibility of condition \eqref{JKS:I2} is indeed satisfied. Consequently, $3\nmid [\Z_K:\Z[\theta]]$ and $f_{1,k}(x)$ is monogenic, which establishes the claim. We can now define two families of monogenic trinomials: \begin{gather}\label{Eq:Fams} \FF_1:=\{f_{1,k}(x):k\in \Z, \ \mbox{with $\delta_1$ squarefree}\}\\ \nonumber \mbox{and}\\ \FF_2:=\{f_{2,k}(x):k\in \Z, \ \mbox{with $k\ge 0$ and $\delta_2$ squarefree}\}. \end{gather} For each $i\in \{1,2\}$, it follows from a result of Nagel \cite{Nagel} that there exist infinitely many $k\in \Z$ such that $\delta_i$ is squarefree. Hence, each of the families $\FF_i$ is an infinite set. If $f_{1,k}(x)=f_{2,m}(x)$ for some $k,m\in \Z$, then $\Delta(f_{1,k})=\Delta(f_{2,m})$, so that \[k=\frac{3\pm \sqrt{3(108m^2+108m+19)}}{18}\in \Z,\] which is impossible since $108m^2+108m+19\equiv 1\pmod{3}$. Hence, $\FF_1\cap \FF_2=\varnothing$. We show next that no two elements of $\FF_1$ are equivalent. By way of contradiction, assume that for some $f_{1,k}(x),f_{1,m}(x)\in \FF_1$ with $k\ne m$ and $f_{1,k}(\alpha)=f_{1,m}(\beta)=0$, we have $\Q(\alpha)=\Q(\beta)$. Since $f_{1,k}(x)$ and $f_{1,m}(x)$ are both monogenic, it follows from Lemma \ref{Lem:Equiv} and \eqref{Eq:Dis} that \begin{equation}\label{Eq:1} 3^4(9k^2+3k+1)^2=3^4(9m^2+3m+1)^2. \end{equation} Using Maple to solve \eqref{Eq:1} with the restriction that $k\ne m$, we get three solutions. The first solution has $k=-m-1/3$, which contradicts the fact that $k,m\in \Z$. The remaining two solutions require that $-36m^2-12m-7$ is the square of an integer. However, it is easy to see that $-36m^2-12m-7<0$ for all $m\in \R$. Therefore, the elements of $\FF_1$ are distinct.
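As an aside on verification: the proof above relies on Maple for the polynomial manipulations, and the discriminant evaluations in \eqref{Eq:Dis} together with the reduction modulo $2$ are easy to confirm independently with a computer algebra system. The following minimal SymPy sketch (the specialization $k=1$ in the irreducibility check is an arbitrary illustrative choice) is only a cross-check and not part of the argument.
\begin{verbatim}
from sympy import symbols, discriminant, expand, Poly

x, k = symbols('x k')
d1 = 9*k**2 + 3*k + 1
d2 = 27*k**2 + 27*k + 7
f1 = x**3 - 3*d1*x + (6*k + 1)*d1
f2 = x**3 - d2*x + (2*k + 1)*d2

# discriminants as claimed in (Eq:Dis): Delta(f1) = 3^4*d1^2, Delta(f2) = d2^2
print(expand(discriminant(f1, x) - 3**4 * d1**2))   # 0
print(expand(discriminant(f2, x) - d2**2))          # 0

# f1 reduces to x^3 + x + 1 modulo 2, which is irreducible over F_2
print(Poly(f1.subs(k, 1), x, modulus=2))            # Poly(x**3 + x + 1, x, modulus=2)
print(Poly(f1.subs(k, 1), x, modulus=2).is_irreducible)   # True
\end{verbatim}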
Turning to $\FF_2$, we apply the same strategy to show that no two elements of $\FF_2$ are equivalent, and we use Maple to solve \[(27k^2+27k+7)^2=(27m^2+27m+7)^2,\] where $k\ne m$. In this situation, Maple again gives three solutions. The first solution has $k=-m-1$, which is impossible since $km\ge 0$. The remaining two solutions require that $-324m^2-324m-87$ is the square of an integer, which is impossible since $m\ge 0$. Hence, the elements of $\FF_2$ are distinct. We show now that no element of $\FF_1$ is equivalent to some element of $\FF_2$. Using the same approach as before, we assume that \begin{equation}\label{Eq:3} 3^4(9k^2+3k+1)^2=(27m^2+27m+7)^2, \end{equation} for some $k,m\in \Z$ with $m\ge 0$. Solving \eqref{Eq:3} using Maple gives four solutions. Two of these solutions require that $-108m^2-108m-55$ is the square of an integer, which is impossible since $m\ge 0$. A third solution has \[k=\frac{-3+\sqrt{108m^2+108m+1}}{18},\] which contradicts the fact that $k\in \Z$ since $108m^2+108m+1\equiv 1 \pmod{3}$. The fourth solution has $k=(-3-\sqrt{108m^2+108m+1})/18$ and so we arrive at the same contradiction. Thus, we have established the fact that the trinomials in $\FF_1\cup \FF_2$ are all distinct. For item \eqref{Main:I2}, let $f(x)=x^3+Ax^2+B$ be a monogenic cyclic trinomial. By Theorem \ref{Thm:Swan}, we have that \begin{equation}\label{Eq:Delf} \Delta(f)=-B(4A^3+27B), \end{equation} which must be a square since $f(x)$ is cyclic. It is straightforward to verify that $\Delta(f)>1$, and we omit the details. We let $q$ be a prime divisor of $\Delta(f)$, and we apply Theorem \ref{Thm:JKS} to obtain necessary conditions on $A$ and $B$ for the monogenicity of $f(x)$. Suppose that $\abs{B}>1$ and that $q\mid B$. If $q\mid A$, then since condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} must be satisfied, we conclude that $q^2\nmid B$. Hence, $q\mid \mid B$. If $q\nmid A$, then since $N-M=1$, we have that $\ell=0$ and $A_1=0$ in condition \eqref{JKS:I3} of Theorem \ref{Thm:JKS}. Thus, $q\mid A_1$, and so $q^2\nmid B$ since $f(x)$ is monogenic. Hence, again we see that $q\mid\mid B$. Therefore, we have shown that $B$ is squarefree. Suppose next that $q\nmid A$ and $q\nmid B$. If $q=2$, then we see from condition \eqref{JKS:I4} of Theorem \ref{Thm:JKS} that $H_2(x)=0$. Thus, $H_1(x)$ and $H_2(x)$ are not coprime modulo $q=2$, contradicting the fact that $f(x)$ is monogenic. Hence, $q\ne 2$ and $q\nmid 2AB$. Then, from condition \eqref{JKS:I5} of Theorem \ref{Thm:JKS}, we have that $q^2\nmid (4A^3+27B)$, which contradicts the fact that $\Delta(f)$ is a square. It follows that any prime divisor $q$ of $\Delta(f)$ must divide either $A$ or $B$. Suppose then that $q\mid A$ and $q\nmid B$. Thus, we see from \eqref{Eq:Delf} that $q=3$. Observe if $9\mid A$, then it follows from \eqref{Eq:Delf} that $3^3\mid\mid \Delta(f)$, which contradicts the fact that $\Delta(f)$ is a square. Therefore, $3\mid \mid A$. We provide the following summary to emphasize three facts that we have gleaned from Theorem \ref{Thm:JKS}: \begin{gather} \label{Info:I1} \mbox{$B$ is squarefree,}\\ \label{Info:I2} q\mid A \quad \mbox{or} \quad q\mid B,\\ \label{Info:I0} \mbox{if $q\mid A$ and $q\nmid B$, then $q=3$ and $3\mid \mid A$,} \end{gather} where $q$ is a prime divisor of $\Delta(f)$. Since $\Delta(f)$ is a square, we deduce from \eqref{Eq:Delf} and \eqref{Info:I1} that $B\mid (4A^3+27B)$. Hence, $B\mid 2A$, so that $B^2\mid 4A^2$. 
Thus, \begin{gather} \nonumber \Delta(f)=B^2Z, \quad \mbox{where} \\ \label{Eq:Z} Z=-\left(\frac{4A^3}{B}+27\right)=-\left(\left(\frac{4A^2}{B^2}\right)AB+27\right)\in \Z \end{gather} is a square. Note that if $Z=1$, then $-A^3=7B$, which is impossible by \eqref{Info:I1}. Hence, $Z>1$ and we let $q$ be a prime divisor of $Z$. It follows from \eqref{Info:I2} and \eqref{Eq:Z} that $q=3$. Therefore, $Z=3^{2k}$ for some integer $k\ge 1$, and we have from \eqref{Eq:Z} that \begin{equation}\label{Eq:3^{2k}} B=\frac{-4A^3}{3^{2k}+27}\in \Z. \end{equation} Next, we examine the solutions to \eqref{Eq:3^{2k}} when $k\in \{1,2\}$. When $k=1$, we get the equation $-(A/3)^3=B/3$ from \eqref{Eq:3^{2k}} and, by \eqref{Info:I1}, the solutions are easily seen to be \[(A,B)\in \{(3,-3), (-3,3)\}.\] It is straightforward to verify that both of the trinomials \begin{equation}\label{Eq:k=1} x^3+3x^2-3 \quad \mbox{and} \quad x^3-3x^2+3 \end{equation} are monogenic and cyclic. When $k=2$, we arrive at the equation $-(A/3)^3=B$ from \eqref{Eq:3^{2k}}, and it is easy to see that the solutions are \[(A,B)\in \{(-3,1), (3,-1)\}\] by \eqref{Info:I1}. As before, it is straightforward to confirm that both of the trinomials \begin{equation}\label{Eq:k=2} x^3-3x^2+1 \quad \mbox{and}\quad x^3+3x^2-1 \end{equation} are monogenic and cyclic. We assume now that $k\ge 3$. It is easy to see that $3\mid A$ in \eqref{Eq:3^{2k}}, and since $2^2\mid \mid (3^{2k}+27)$, we can rewrite \eqref{Eq:3^{2k}} as \begin{gather}\label{Eq:3^{2k} again} \begin{split} BW=\left(\frac{-A}{3}\right)^3,\\ \mbox{where \ $W=\dfrac{3^{2k-3}+1}{4}\in \Z$.} \end{split} \end{gather} If $3\mid B$, then $3\mid \mid B$ by \eqref{Info:I1}, which implies that $3\mid \mid BW$ since $3\nmid W$, contradicting the fact that $BW$ is a cube in \eqref{Eq:3^{2k} again}. Thus, $3\nmid B$ and we conclude from \eqref{Info:I0} that $3\mid \mid A$. We therefore examine condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS} with $q=3$. Straightforward calculations with $B$ as in \eqref{Eq:3^{2k}} show that \[A_2=A/3 \quad \mbox{and} \quad B_1=\frac{4A_2^3\left(16A_2^6-(3^{2k-3}+1)^2\right)}{3(3^{2k-3}+1)^3}.\] Since $3\nmid A_2$ and \[\frac{16A_2^6-(3^{2k-3}+1)^2}{3}\equiv 2 \pmod{3},\] it follows that \begin{equation*} (-B)^2A_2^3+B_1^3=\frac{16A_2^9\left((3^{2k-3}+1)^7+4\left(\frac{16A_2^6-(3^{2k-3}+1)^2}{3}\right)^3\right)}{(3^{2k-3}+1)^9}\equiv 0 \pmod{3}, \end{equation*} which proves that $f(x)$ is not monogenic by Theorem \ref{Thm:JKS}. Hence, the four monogenic cyclic trinomials given in \eqref{Eq:k=1} and \eqref{Eq:k=2} are the only monogenic cyclic trinomials of the form $x^3+Ax^2+B$. To see that the trinomials in \eqref{Eq:k=1} and \eqref{Eq:k=2} are equivalent to $T(x)=x^3-3x+1$, suppose that $T(\alpha)=0$. Then $T(x)$ factors over $\Q(\alpha)$ as \[T(x)=x^3-3x+1=(x-\alpha)(x-\alpha^2+2)(x+\alpha^2+\alpha-2),\] and it is easy to verify that the four trinomials in \eqref{Eq:k=1} and \eqref{Eq:k=2} factor over $\Q(\alpha)$ as: \begin{align*} x^3+3x^2-3&=(x+\alpha+1)(x+\alpha^2-1)(x-\alpha^2-\alpha+3),\\ x^3-3x^2+3&=(x-\alpha-1)(x-\alpha^2+1)(x+\alpha^2+\alpha-3),\\ x^3-3x^2+1&=(x+\alpha-1)(x+\alpha^2-3)(x-\alpha^2-\alpha+1),\\ x^3+3x^2-1&=(x-\alpha+1)(x-\alpha^2+3)(x+\alpha^2+\alpha-1).\qedhere \end{align*} \end{proof} \section{Final Remarks} Theorem \ref{Thm:Main} gives two infinite mutually exclusive single-parameter families $\FF_1$ and $\FF_2$ of monogenic cyclic cubic trinomials of the form $x^3+Ax+B$. However, these families are not exhaustive. 
That is, there exist other monogenic cyclic cubic trinomials of the form $x^3+Ax+B$ that are not contained in $\FF_1\cup \FF_2$ and not equivalent to any element of $\FF_1\cup \FF_2$. We do not know if these trinomials can somehow be parameterized into one or more infinite families, or whether they are isolated, and we leave such a task for future research. We end by providing some examples of these trinomials: \begin{gather*} x^3-6447x+199243,\quad x^3-23907x+1422773,\quad x^3-66063x+6535601,\\ x^3-123411x+16687025, \quad x^3-1738191x+882052345,\\ x^3-47970741x+127882981837. \end{gather*} \begin{thebibliography}{99} \bibitem{A} G. Archinard, \emph{Extensions cubiques cycliques de $\Q$ dont l'anneau des entiers est monog\`{e}ne}, (French) Enseign. Math. (2) {\bf 20} (1974), 179--203. \bibitem{Cohen} H. Cohen, \emph{A Course in Computational Algebraic Number Theory}, {Springer-Verlag}, 2000. \bibitem{Gras1} Marie-Nicole Gras, \emph{Sur les corps cubiques cycliques dont l'anneau des entiers est monog\`{e}ne}, (French) Ann. Sci. Univ. Besan\c{c}on Math. {\bf (3)} 1973, no. 6, 26 pp. \bibitem{Gras2} Marie-Nicole Gras, \emph{Sur les corps cubiques cycliques dont l'anneau des entiers est monog\`{e}ne}, (French) C. R. Acad. Sci. Paris S\'{e}r. A {\bf 278} (1974), 59--62. \bibitem{HJF1} J. Harrington and L. Jones, \emph{Monogenic trinomials of the form $x^4+ax^3+d$ and their Galois groups}, J. Algebra Appl. (to appear). \bibitem{JKS2} A. Jakhar, S. Khanduja and N. Sangwan, \emph{Characterization of primes dividing the index of a trinomial}, Int. J. Number Theory {\bf 13} (2017), no. 10, 2505--2514. \bibitem{JonesBAMSEven} L. Jones, \emph{Monogenic even quartic trinomials}, Bull. Aust. Math. Soc. (to appear). \bibitem{JonesQuartic} L. Jones, \emph{Monogenic cyclic trinomials of the form $x^4+cx+d$}, \url{arXiv:2411.10572}. \bibitem{KS} T. Kashio and R. Sekigawa, \emph{The characterization of cyclic cubic fields with power integral bases}, Kodai Math. J. {\bf 44} (2021), no. 2, 290--306. \bibitem{Nagel} T. Nagel, \emph{Zur Arithmetik der Polynome}, Abh. Math. Sem. Hamburg. Univ. {\bf 1} (1922), 179--194. \bibitem{Shanks} D. Shanks, \emph{The simplest cubic fields}, Math. Comp. {\bf 28} (1974), 1137--1152. \bibitem{Swan} R. Swan, \emph{Factorization of polynomials over finite fields}, Pacific J. Math. {\bf 12} (1962), 1099--1106. \end{thebibliography} \end{document}
2412.13068v1
http://arxiv.org/abs/2412.13068v1
Regularity lost: the fundamental limitations and constraint qualifications in the problems of elastoplasticity
\documentclass{article} \usepackage{geometry} \usepackage{mathtools} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{dsfont} \usepackage{amsthm} \usepackage{cases} \usepackage{hyperref} \usepackage{abstract} \usepackage{float} \usepackage{titling} \usepackage{extarrows} \usepackage[dvipsnames]{xcolor} \usepackage[ruled,vlined]{algorithm2e} \usepackage{enumerate} \usepackage{bm} \usepackage{multirow} \usepackage[shortlabels]{enumitem} \usepackage[ safeinputenc, backend=biber, style=alphabetic, maxnames = 10, minnames = 1, isbn=false, url=false, eprint=false ]{biblatex} \addbibresource{regularity_lost_arXiv.bib} \hypersetup{ colorlinks=True, linkcolor=blue, filecolor=magenta, urlcolor=cyan, citecolor=ForestGreen } \urlstyle{same} \geometry{a4paper} \pdfminorversion=7 \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}{Proposition}[section] \newtheorem{Lemma}{Lemma}[section] \newtheorem{Corollary}{Corollary}[section] \newtheorem{Assumption}{Assumption} \theoremstyle{definition} \newtheorem{Example}{Example} \newtheorem{Definition}{Definition}[section] \newtheorem{Remark}{Remark}[section] \newtheorem{Observation}{Observation} \def\ra{\rangle} \def\la{\langle} \def\intOm{\int\limits_\Omega} \begin{document} \title{Regularity lost: the fundamental limitations and constraint qualifications in the problems of elastoplasticity} \author{Ivan Gudoshnikov \thanks{Institute of Mathematics of the Czech Academy of Sciences, \v{Z}itn\'{a} 25, 115 67, Praha 1, Czech Republic, \url{[email protected]}}} \thanksmarkseries{arabic} \date{2024} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{Key words:} perfect plasticity, plasticity with hardening, sweeping process, constraint qualification, measure-valued solution.} \renewcommand{\thefootnote}{\arabic{footnote}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{Mathematics Subject Classification (2020):} 74C05, 47J22, 47B02, 47B93.} \renewcommand{\thefootnote}{\arabic{footnote}} \maketitle \begin{abstract} We investigate the existence and non-existence of a function-valued strain solution in various models of elastoplasticity from the perspective of the constraint-based ``dual'' formulations. We describe abstract frameworks for linear elasticity, elasticity-perfect plasticity and elasticity-hardening plasticity in terms of adjoint linear operators and convert them to equivalent formulations in terms of differential inclusions (the sweeping process in particular). Within such frameworks we consider several manually solvable examples of discrete and continuous models. Despite their simplicity, the examples show how for discrete models with perfect plasticity it is possible to find the evolution of stress and strain (elongation), yet continuum models within the same framework may not possess a function-valued strain. Although some examples with such phenomenon are already known, we demonstrate that it may appear due to displacement loading. The central idea of the paper is to explain the loss of strain regularity in the dual formulation by the lack of additivity of the normal cones and the failure of Slater's constraint qualification. In contrast to perfect plasticity, models with hardening are known to be well-solvable for strains. We show that more advanced constraint qualifications can help to distinguish between those cases and, in the case of hardening, ensure the additivity of the normal cones, which means the existence of a function-valued strain rate. 
\end{abstract} \tableofcontents \newpage \section{Introduction} \label{sect:intro} The treatment of elastoplasticity as a variational inequality problem in the PDE setting goes back to the book by Duvaut and Lions \cite{Duvaut1972}. Already then it was understood that for the seemingly simplest elastoplastic constitutive law, namely for perfect plasticity, the quasistatic evolution of stress is well-solvable, but the evolution of strain is problematic. J.-J. Moreau approached this issue with the technique of differential inclusions with the normal cone operator, which worked well in a finite-dimensional setting \cite[Ch. 6]{Moreau1973}, \cite{Moreau1973-3}. Onward, Moreau could solve the strain problem for one-dimensional continuous rod \cite{Moreau1976} by using the space of bounded measures as the state space, instead of a space of Lebesgue-integrable functions. In order to apply this idea to elastic-perfectly plastic three-dimensional continua, the theory of the space of bounded deformations ($BD$) was developed in the 1980's, with major contributions by P.-M. Suquet \cite{Suquet1978a}, \cite{Suquet1980}, \cite{Suquet1981}, \cite{Suquet1988}, R. Temam and G. Strang \cite{Strang1980}, \cite{TemamStrangBD}, \cite{Temam1985}. More recent treatments of the problem with perfect plasticity include \cite{Ebobisse2004}, \cite{DalMaso2006}, \cite{Demyanov2009}, \cite{Francfort2016}, \cite{Mora2016}, which are also based on $BD$-spaces. In the current text we primarily explore the necessity for the measure-valued solutions in the continuous setting, i.e. we go back to the transition from \cite{Moreau1973} to \cite{Moreau1976}. We begin with a rigorous presentation of abstract functional-analytic frameworks for quasistatic evolution with elasticity and with Prandtl-Reuss elasticity-perfect plasticity. These frameworks are suitable for discrete as well as continuous models of media, and we provide toy examples for both. Using the ideas of \cite{Moreau1973} and \cite{Kunze2000} we convert the abstract problem into a pair of differential inclusions of the form \begin{numcases}{} -\frac{d}{dt}\bm{y}\in N_{\left(\Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V}}(\bm{y}), \label{eq:sp-elastoplastic-intro}\\ \frac{d}{dt} \bm{\varepsilon} \in M\left(\left(N_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y})+ \frac{d}{dt} \bm{y}\right)\cap \mathcal{V}^\perp\right), \label{eq:inclusion2-elastoplastic-intro} \end{numcases} where $\bm{y}=\bm{\sigma}-\bm{\widetilde{\sigma}}(t)$ is the difference between the stress in the elastoplastic problem and the known stress $\bm{\widetilde{\sigma}}$ in the corresponding elastic problem, $\bm{\varepsilon}$ is the strain in the elastoplastic problem, $N$ denotes a normal cone, $M$ is a known affine map, $\Sigma$ is the set of stress distributions which fit in the elastic range and $\mathcal{V}$ is the subspace of stress distributions which satisfy the homogeneous equilibrium equation. On this initial step we consider the classical sufficient condition of solvability of \eqref{eq:sp-elastoplastic-intro}--\eqref{eq:inclusion2-elastoplastic-intro}, which is \begin{equation} \left({\rm int}\, \Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V} \neq \varnothing. \label{eq:Slater-I-intro} \end{equation} Our main goal is to compare the outcomes for discrete (finite-dimensional) and continuous models being plugged into {\it the same} abstract framework \eqref{eq:sp-elastoplastic-intro}--\eqref{eq:Slater-I-intro}. 
More specifically, \begin{itemize} \item for a finite-dimensional state space we provide easily-solvable toy examples of discrete mechanical systems and observe that the abstract framework admits solutions for both strain and stress. In such a case the sufficient condition \eqref{eq:Slater-I-intro} is adequate. \item for the continuous models in $L^2$ we provide another simple example, for which the inclusion \eqref{eq:inclusion2-elastoplastic-intro} admits no solution with values in $L^2$. We will explain that, in contrast to the discrete case, the sufficient condition \eqref{eq:Slater-I-intro} does not hold for any of the models with plasticity in $L^2$. \end{itemize} While examples merely exhibiting the lack of a function-valued strain solution have long been present in the literature (see \cite[Sect. 3.g, pp.~69--70]{Moreau1976}, \cite[Sect. 3.4.a), pp.~312--313]{Suquet1988}), we aim to understand {\it how} Moreau's method \eqref{eq:sp-elastoplastic-intro}--\eqref{eq:inclusion2-elastoplastic-intro} fails in $L^2$ and what the mathematical reason for that failure is. For the dissipation-based strain formulation of the elastoplasticity problem (``primal formulation'' in terms of \cite{Han2012}) the mathematical reason for the unsolvability is generally known: the dissipation function, defined on $L^2$, has linear growth at infinity and does not admit a minimum (see e.g. \cite[pp. vi, 79--80]{Temam1985}). We follow Moreau's constraint-based formulation \eqref{eq:sp-elastoplastic-intro}--\eqref{eq:inclusion2-elastoplastic-intro} (``dual formulation''), and it turns out that in such a case the unsolvability is due to the {\it lack of additivity of the normal cones}, i.e. in \eqref{eq:inclusion2-elastoplastic-intro} we may have \begin{equation} N_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}) +N_{\mathcal{V}}(\bm{y}) \subsetneq N_{\left(\Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V}}(\bm{y}). \label{eq:strict-subadditivity-intro} \end{equation} Moreover, as we will show by an example, a continuous model with elasticity-perfect plasticity may get stuck in the situation of \eqref{eq:strict-subadditivity-intro} for a time-interval of positive length, during which the right-hand side of \eqref{eq:inclusion2-elastoplastic-intro} remains an empty set. It is known, however, that similar models with the {\it hardening} type of plasticity (replacing perfect plasticity) can be solved for both strain and stress in $L^2$, see e.g. \cite{Han2012}. We can extend the abstract framework \eqref{eq:sp-elastoplastic-intro}--\eqref{eq:inclusion2-elastoplastic-intro} to include such models. In the extended version we follow the same idea to use the additivity of the normal cones to obtain $\bm{\varepsilon}$. However, a condition of the type \eqref{eq:Slater-I-intro} fails in $L^2$ for the models with hardening as well. Thus, we turn to explore the class of conditions for the additivity of the normal cones, generally known as {\it constraint qualifications}. Our second goal is to show that \begin{itemize} \item constraint qualifications can be used to distinguish solvable models (discrete, continuous with hardening) among those which fit in the abstract framework. \end{itemize} Such constraint qualifications are usually written for a pair of {\it Fenchel-Rockafellar dual problems} to ensure the existence and equality of their minimizers ({\it strong duality}). The proof of equivalence between the additivity of the normal cones and the strong duality of specially constructed dual problems is included in the text.
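As a brief illustration of how the inclusion \eqref{eq:strict-subadditivity-intro} can be strict (a standard two-dimensional example, included here only to visualize the mechanism and not taken from the mechanical models of this paper), consider in $\mathbb{R}^2$ the two closed unit disks
\[
\mathcal{C}_1=\left\{\bm{x}\in\mathbb{R}^2:\|\bm{x}-(0,1)\|\leqslant 1\right\}, \qquad \mathcal{C}_2=\left\{\bm{x}\in\mathbb{R}^2:\|\bm{x}-(0,-1)\|\leqslant 1\right\},
\]
which touch only at the origin, so that $\mathcal{C}_1\cap\mathcal{C}_2=\{0\}$ and no point of one set lies in the interior of the other, i.e. a condition of the type \eqref{eq:Slater-I-intro} fails. At the origin the normal cone to each disk is a single outward ray, and
\[
N_{\mathcal{C}_1}(0)+N_{\mathcal{C}_2}(0)=\{0\}\times\mathbb{R} \subsetneq \mathbb{R}^2 = N_{\mathcal{C}_1\cap\mathcal{C}_2}(0).
\]
The phenomenon behind \eqref{eq:strict-subadditivity-intro} is of the same nature, but it occurs in an infinite-dimensional space of stress distributions.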
While constraint qualifications in optimization go back to Slater in the 1950's \cite{Slater1950} and Rockafellar in the 1970's \cite{Rockafellar1974}, the topic matured and was systematically developed in the context of infinite-dimensional spaces only in the late 1980's--early 1990's \cite{Attouch1986}, \cite{Gowda1990}, \cite{Jeyakumar1992}. This is much later than the pioneering works of Moreau \cite{Moreau1973}, \cite{Moreau1976} on elastoplasticity, whose method we can now analyze with the help of the constraint qualifications machinery. The paper is organized as follows. Section \ref{sect:convex_prelims} is a reference for the basic definitions and facts about convex analysis, which we will need throughout the paper. This also includes the definition and well-posedness of the general type of problem, known as the ``sweeping process'', to which \eqref{eq:sp-elastoplastic-intro} belongs. Section \ref{sect:elasticity} contains a formal framework for the problems in linear elasticity in terms of adjoint operators between abstract spaces, together with examples of finite-dimensional (simple discrete networks of springs) and infinite-dimensional (continuous rod with lateral displacements) models. We solve the abstract problem and the examples with the method of orthogonal subspaces (cf. \cite{Gudoshnikov2025}). The components of the framework and the method, such as the fundamental subspaces, as well as the final formulas for elastic solutions, are reused in Section \ref{sect:perfect-plasticity}, where we consider the problem of elasticity-perfect plasticity in the abstract form and discuss its well-posedness for discrete examples. In contrast, in Section \ref{sect:regularity-lost} we uncover how for the continuous rod example, although it is written within the same abstract framework, there exists no strain solution in $L^2$ due to the lack of additivity of normal cones. In Section \ref{sect:constraint-qualification-and-hardening} we discuss constraint qualifications, which can guarantee the additivity of normal cones in infinite-dimensional spaces. There we also extend the abstract framework to cover plasticity with hardening. We show that the continuous elastoplastic rod satisfies some of the constraint qualifications and, therefore, possesses additivity of normal cones in $L^2$ when it has hardening with a linear growth estimate. Finally, Section \ref{sect:conclusions} contains summarizing conclusions and ideas for future work, and Appendix \ref{sect:appendix} contains some additional mathematical facts we rely on in the paper. Let us clarify some conventions about notation: \begin{enumerate}[{\it i)}] \item Because we study evolution problems in vector spaces, and, particularly, in spaces of functions, for the values in such state spaces we will use a bold font. For example, for a time-dependent solution $\sigma\in W^{1,\infty}([0, T], L^2(\Omega))$ we write its value at a time moment $t\in[0,T]$ as $\bm{\sigma}(t)\in L^2(\Omega)$, while numerical values for a.a. $x\in \Omega$ we write as $\sigma(t,x)$. We also use a bold font for the special quantities ${\bf E},{\bf E'}, {\bf D}, {\bf C}, \bm{C}, \bm{H}$, which are operators between the state spaces and the real functions used to define such operators. \item With the tilde symbol we indicate the state variables in the problems with elasticity, e.g. $\bm{\widetilde{\sigma}}$. The state variables without a tilde correspond to the problems with elastoplasticity.
\item Some quantities play a similar role in the problems with perfect plasticity, as well as in the problems with hardening. With the hat symbol (e.g. $\widehat{\Sigma}$ vs $\Sigma$) we indicate the context of a problem with hardening. \end{enumerate} \section{Preliminaries from convex analysis} \label{sect:convex_prelims} \begin{Definition} Let $\mathcal{H}$ be a Hilbert space, and $\mathcal{C} \subset \mathcal{H}$ be a closed, convex, nonempty set. \begin{enumerate}[{\it i)}] \item Let a point $\bm{x}\in \mathcal{C}$. Then the {\it outward normal cone} to $\mathcal{C}$ at $x$ is the set \begin{equation} N_{\mathcal{C}}(\bm{x}) = \left\{\bm{y}\in \mathcal{H}: \text{for any }\bm{c}\in \mathcal{C}\text{ we have } \la\bm{y}, \bm{c}-\bm{x}\ra_{\mathcal{H}}\leqslant 0\right\}, \label{eq:nc-abstract-def} \end{equation} see Fig. \ref{fig:fig-normal-cone} for an illustration. \begin{figure}[H]\center \includegraphics{Fig-normal-cone.pdf} \caption{\footnotesize Examples of typical normal cones in a finite-dimensional setting. Here we illustrate it with $\mathcal{C}$ taken as a polygon in $\mathcal{H}=\mathbb{R}^2$. We depict the normal cones $N_{\mathcal{C}}(\bm{x})$ as translated from $0$ to $\bm{x}$. Vectors $\bm{y}$ are generic elements of $N_{\mathcal{C}}(\bm{x})$, called {\it supporting vectors} to the set $\mathcal{C}$ at $\bm{x}$. } \label{fig:fig-normal-cone} \end{figure} \item Let a point $\bm{y}\in \mathcal{H}$. Then the {\it metric projection} of $\bm{y}$ on $\mathcal{C}$ is a point $\bm{x}\in \mathcal{C}$, denoted \[ {\rm proj}(\bm{y}, \mathcal{C}) \] such that \[ \|\bm{x}-\bm{y}\| = \inf_{\bm{c}\in \mathcal{C}} \|\bm{c}-\bm{y}\|. \] Such point always exists and it is unique \cite[Th. 5.2, p. 132]{Brezis2011}. \item The {\it indicator function} of $\mathcal{C}$ is the function \begin{equation} \delta_\mathcal{C}: \mathcal{H} \to \mathbb{R}\cup\{+\infty\},\qquad \delta_\mathcal{C}(\bm{x})= \begin{cases} 0&\text{when }\bm{x}\in \mathcal{C},\\ +\infty & \text{when }\bm{x}\notin \mathcal{C}. \end{cases} \label{eq:def-indicator-function} \end{equation} \item The {\it support function} of $\mathcal{C}$ is the function \begin{equation} \delta^*_{\mathcal{C}}: \mathcal{H} \to \mathbb{R}\cup\{+\infty\},\qquad \delta^*_{\mathcal{C}}(\bm{y})= \sup_{\bm{x}\in \mathcal{C}}\,\la \bm{y}, \bm{x} \ra_{\mathcal{H}}. \label{eq:def-support-function} \end{equation} \item The {\it conic hull} of $\mathcal{C}$ (when $\mathcal{C}$ is convex, cf. \cite[p. 2]{Zalinescu2002} and \cite[p. 32]{Hiriart-UrrutyANDLemarechal}) is the set \begin{equation} {\rm cone}\, \mathcal{C} = \{\lambda \bm{c}: \lambda\geqslant 0, \bm{c}\in \mathcal{C}\}. \label{eq:conic-hull-def} \end{equation} \end{enumerate} \end{Definition} \noindent We are going to use the following well-known properties. \begin{Proposition} \label{prop:nc-equivalent} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{C} \subset \mathcal{H}$ be a closed, convex, nonempty set. For vectors $\bm{x}\in \mathcal{C}$ and $\bm{y}\in \mathcal{H}$ the following statements are equivalent: \begin{enumerate}[{\it i)}] \item $\bm{y}\in N_\mathcal{C}(\bm{x})$, \item \label{enum:abstract-nc-in-proj-notation} ${\rm proj}(\bm{y}+\bm{x}, \mathcal{C})=\bm{x}$, \item \label{enum:abstract-nc-in-support-func-notation} $\la\bm{y}, \bm{x}\ra_\mathcal{H} = \delta^*_\mathcal{C}(\bm{y})$. 
\end{enumerate} \end{Proposition} \begin{Proposition} \label{prop:nc-abstract-properties} Formulas \eqref{eq:nc-abstract-def} and \eqref{eq:def-support-function} yield the following properties: \begin{enumerate}[{\it i)}] \item \label{enum:prop:nc-abstract-properties-nc-translation} For any $\bm{x_1}, \bm{x_2}\in \mathcal{C}$ \begin{equation} N_{\mathcal C}(\bm{x_1}+\bm{x_2}) = N_{\mathcal C-\bm{x_2}}(\bm{x_1}) \label{eq:argument-sum-nc} \end{equation} \item \label{enum:prop:nc-abstract-properties-nc-subadditivity} \cite[Section 2.f, p. 206]{Moreau1973}, \cite[Lemma 1 (b), pp.~4--5]{Kunze2000} For any closed convex nonempty sets $\mathcal{C}_1,\mathcal{C}_2\subset\mathcal{H}$ and any $\bm{x}\in \mathcal{C}_1\cap\mathcal{C}_2$ the subadditivity of normal cones holds: \begin{equation} N_{\mathcal{C}_1}(\bm{x}) + N_{\mathcal{C}_2}(\bm{x}) \subset N_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{x}). \label{eq:nc-subadditivity} \end{equation} Additionally, if the so-called Slater's constraint qualification holds: \begin{equation} {\mathcal{C}_1}\cap {{\rm int}\, \mathcal{C}_2}\neq \varnothing, \label{eq:nontrivial-intersection-condition} \end{equation} then we have the equality (i.e. the additivity of normal cones): \begin{equation} N_{\mathcal{C}_1}(\bm{x}) + N_{\mathcal{C}_2}(\bm{x}) = N_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{x}). \label{eq:nc-additivity} \end{equation} \item \label{enum:prop:nc-abstract-properties-support-function} For closed, convex nonempty sets $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ \begin{equation} \mathcal{C}_1\subset \mathcal{C}_2 \qquad\Longleftrightarrow \qquad \text{for any }y\in \mathcal{H}\quad \delta^*_{\mathcal{C}_1}(y)\leqslant\delta^*_{\mathcal{C}_2}(y). \label{eq:sp-subset-property} \end{equation} \end{enumerate} \end{Proposition} \noindent The normal cone can be used to define the following nonsmooth evolution problem. \begin{Definition}{\cite[Section 5.f]{Moreau1973}, \cite[p. 9]{Kunze2000}} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{C}:[0,T]\rightrightarrows \mathcal{H}$ be a set-valued map such that its values $\mathcal{C}(t)$ are closed convex nonempty subsets of $\mathcal{H}$ for every $t$. Given a $\bm{y_0}\in C(0)$, we call the initial value problem \begin{numcases}{} -\frac{d}{dt} \bm{y} \in N_{\mathcal{C}(t)}(\bm{y}), \label{eq:abstract-sp}\\ \bm{y}(0)=\bm{y_0}, \label{eq:abstract-sp-ic} \end{numcases} a {\it (Moreau's) sweeping process}. We say that a function $y:[0,T]\to \mathcal{H}$ is a {\it solution of the sweeping process} \eqref{eq:abstract-sp}--\eqref{eq:abstract-sp-ic} if it satisfies the following conditions: \begin{enumerate}[{\it i)}] \item $\bm{y}(0)=\bm{y_0}$, \item $\bm{y}(t)\in \mathcal{C}(t)$ for all $t\geqslant 0$, \label{enum:abstract-sp-solution-def-constraint} \item $\frac{d}{dt}\bm{y}(t)$ exists and the inclusion \eqref{eq:abstract-sp} holds for a.a. $t\in [0,T]$. \label{enum:abstract-sp-solution-def-nc} \end{enumerate} \label{def:abstract-sp} \end{Definition} The sweeping process gets its name from the intuition about the solution $\bm{y}$ being ``swept'' by the time-dependent unilateral constraint of Definition \ref{def:abstract-sp} \ref{enum:abstract-sp-solution-def-constraint}. For more details we refer to \cite[Sections 1 and 3.1]{Kunze2000} as an excellent introduction to the topic and to Moreau's original works \cite[Section 5.f]{Moreau1973}, \cite{Moreau1977}. 
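The intuition above also suggests the classical way to approximate solutions of \eqref{eq:abstract-sp}--\eqref{eq:abstract-sp-ic} numerically, namely Moreau's catching-up scheme: on a time grid one repeatedly projects the previous iterate onto the current set, $\bm{y}_{k+1}={\rm proj}(\bm{y}_k,\mathcal{C}(t_{k+1}))$. The following minimal Python sketch illustrates this scheme on a toy moving set (a unit box in $\mathbb{R}^2$ translating at unit speed); the set and the data are illustrative choices only and are not related to the elastoplastic models considered later.
\begin{verbatim}
import numpy as np

# Catching-up discretization of the sweeping process
#   -dy/dt in N_{C(t)}(y),   y(0) = y0,
# on a uniform grid t_k:  y_{k+1} = proj(y_k, C(t_{k+1})).
# Toy moving set (illustration only): the unit box translated
# horizontally with unit speed, C(t) = [t, t+1] x [0, 1].

def proj_box(y, lower, upper):
    # Metric projection onto the box {x : lower <= x <= upper}.
    return np.minimum(np.maximum(y, lower), upper)

def catching_up(y0, T=2.0, n_steps=200):
    y = np.array(y0, dtype=float)
    trajectory = [y.copy()]
    for k in range(1, n_steps + 1):
        t = k * T / n_steps
        lower = np.array([t, 0.0])
        upper = np.array([t + 1.0, 1.0])
        y = proj_box(y, lower, upper)   # y_{k+1} = proj(y_k, C(t_{k+1}))
        trajectory.append(y.copy())
    return np.array(trajectory)

if __name__ == "__main__":
    traj = catching_up(y0=[0.2, 0.5])
    # The point stays put until the left wall x = t reaches it,
    # after which it is "swept" to the right together with the wall.
    print(traj[0], traj[-1])
\end{verbatim}
This moving set is Lipschitz-continuous in the Hausdorff distance, so the assumptions of the existence theorem stated next are satisfied; for Lipschitz moving sets the catching-up iterates are known to converge to the unique solution as the grid is refined.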
Here we include the following existence theorem, which we will use to find the evolution of stress in problems with elastoplasticity. \begin{Theorem}{\bf \cite[Th. 2]{Kunze2000}} \label{th:abstract-sp-existence} \newline Let $\mathcal{H}$ be a Hilbert space and $\mathcal{C}:[0,T]\rightrightarrows \mathcal{H}$ be a set-valued map. Assume that \begin{enumerate}[{\it i)}] \item the values $\mathcal{C}(t)$ are closed convex nonempty sets for every $t\in[0,T]$, \item the map $\mathcal{C}$ is Lipschitz-continuous with a constant $L$ with respect to the Hausdorff distance, i.e. there is $L\geqslant 0$ such that \begin{equation} d_H(\mathcal{C}(t_1),\mathcal{C}(t_2))\leqslant L|t_1-t_2| \qquad \text{for any } t_1,t_2\in[0,T], \label{eq:h-d-Lipschitz-def} \end{equation} where for any nonempty subsets $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ the Hausdorff distance between the subsets is \begin{multline} d_H(\mathcal{C}_1,\mathcal{C}_2)=\max\left(\sup_{x\in \mathcal{C}_2}{\rm dist}(x, \mathcal{C}_1),\,\sup_{x\in \mathcal{C}_1}{\rm dist}(x, \mathcal{C}_2)\right)=\\ =\max\left(\sup_{x\in \mathcal{C}_1}\inf_{y\in \mathcal{C}_2} \|x-y\|,\,\sup_{x\in \mathcal{C}_2}\inf_{y\in \mathcal{C}_1} \|x-y\|\right)=\\ =\inf\left\{r>0: \mathcal{C}_1\subset B_r(0)+\mathcal{C}_2 \text{ and }\mathcal{C}_2\subset B_r(0)+\mathcal{C}_1\right\}. \label{eq:h-d-def} \end{multline} \end{enumerate} Then for any $\bm{y_0}\in \mathcal{C}(0)$ there is a unique solution $y$ of \eqref{eq:abstract-sp}--\eqref{eq:abstract-sp-ic} and $y$ is Lipschitz-continuous with the same constant $L$, i.e. $y\in W^{1,\infty}([0,T], \mathcal{H})$. \end{Theorem} \section{The abstract geometric framework and examples for linear elasticity} \label{sect:elasticity} \subsection{An abstract linear operator and its adjoint} \label{ssect:adjoint-operatorsED} We will use the concept of an adjoint to a linear operator to unite discrete and continuum models in one generalization, but we need to account for a non-densely defined operator. The following definition introduces the operators and the setting, which we will use throughout the paper. \begin{Definition} \label{def:abstract-adjoint} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{X}$ be a reflexive Banach space, in which $\mathcal{W}_0$ is a linear, but not necessarily closed, subspace. Let a linear operator ${\bf E}$ be defined on $\mathcal{W}_0$, i.e. \begin{equation} {\bf E}: D({\bf E})=\mathcal{W}_0\subset \mathcal{X} \to \mathcal{H}. \label{eq:ae-abstract-E-def} \end{equation} We assume that ${\bf E}$ is a closed operator (i.e. its graph is closed in $\mathcal{X}\times \mathcal{H}$) with its image also closed. Let \[ \mathcal{X}_0=\overline{\mathcal{W}_0}, \] which is a reflexive Banach space itself \cite[Prop. 3.20, p. 70]{Brezis2011}. Subspace $\mathcal{X}_0$ may coincide with $\mathcal{W}_0$ or $\mathcal{X}$. To clarify, here and anywhere in the paper the closure $\overline{\mathcal{W}_0}$ is in the sense of the norm of $\mathcal{X}$. Consider now the closed densely defined linear operator with closed image \begin{equation} {\bf E'}: \mathcal{W}_0\subset {\mathcal{X}_0} \to \mathcal{H}, \end{equation} and denote by ${\bf D}$ its adjoint in the sense of the unbounded operator theory \cite[Section 2.6, pp. 43--48]{Brezis2011}: \[ {\bf D}:D({\bf D})\subset \mathcal{H}\to \mathcal{X}_0^*.
\] In other words, the operator ${\bf D}$ is defined over the set \[ D({\bf D})= \left\{\bm{f}\in \mathcal{H}: \text{there exists } c\geqslant 0: \text{ for all } \bm{u}\in \mathcal{W}_0:|\left\la \bm{f}, {\bf E}\, \bm{u} \right\ra_{\mathcal{H}}|\leqslant c\|\bm{u}\|_\mathcal{X}\right\}. \] A value ${\bf D}\bm{f}$ is a continuous linear functional, defined by the following formula for $\bm{u}\in \mathcal{W}_0$: \[ {\bf D}\bm{f}: \bm{u}\mapsto \la \bm{f}, {\bf E}\, \bm{u}\ra_{\mathcal{H}}, \] and extended by continuity to $\bm{u}\in \mathcal{X}_0$. \end{Definition} \begin{Lemma} Under the assumptions of Definition \ref{def:abstract-adjoint} the following properties hold: \begin{enumerate}[{\it i)}] \item The operator ${\bf D}$ is also densely defined, and ${\bf E'} = {\bf D}^*$, see \cite[Th. 3.24, p.~72]{Brezis2011}. The image of ${\bf D}$ is also closed, see \cite[Th. 2.19, p.~46]{Brezis2011}. \item The image of ${\bf E}$ and the kernel of ${\bf D}$ are orthogonal complements in the space $\mathcal{H}$, see \cite[pp.~45--46]{Brezis2011}. \end{enumerate} \end{Lemma} \begin{Remark} \label{rem:abstract-elasticity-nonunique-ext} By the Hahn-Banach theorem \cite[Corollary 1.2, p. 3]{Brezis2011} it is possible to extend each functional ${\bf D}\bm{f}$ to take its argument from the entire space $\mathcal{X}$. When ${\bf E}$ happens to be densely defined (i.e. ${\bf E}={\bf E'}$) such an extension is unique, see \cite[p. 44]{Brezis2011}. However, in some of the examples we will have $\mathcal{X}_0$ being a proper closed subspace of $\mathcal{X}$, which implies that multiple different extensions of ${\bf D}\bm{f}$ are possible. \end{Remark} \subsection{The abstract geometric framework for linear elasticity} \label{ssect:abstract-elasticity} We start by considering the quasistatic evolution of elastic models, which can be written via the following abstract framework. \begin{Definition} \label{def:ae} Let spaces $\mathcal{X}, \mathcal{X}_0, \mathcal{W}_0, \mathcal{H}$ and operators ${\bf E}, {\bf D}$ be as in the previous Section \ref{ssect:adjoint-operatorsED}. Additionally, let us be given a linear operator \begin{equation} {\bf C}:\mathcal{H}\to \mathcal{H} \label{eq:ae-C-def} \end{equation} which is \begin{enumerate}[{\it i)}] \item bounded, i.e. there is $C_0>0$ s.t. for all $\bm{\tau} \in \mathcal{H}$ \[ \|{\bf C} \,\bm{\tau}\| \leqslant C_0 \|\bm{\tau}\|, \] \item coercive (see \cite[Section A.2.4, pp. 473--474]{Ern2004}, \cite[p. 269]{Kress2014}), i.e. there is $c_0>0$ s.t. for all $\bm{\tau} \in \mathcal{H}$ \[ \la{\bf C} \,\bm{\tau}, \bm{\tau}\ra \geqslant c_0 \|\bm{\tau}\|^2, \] \item symmetric, i.e. for all $\bm{\tau}_1, \bm{\tau}_2 \in \mathcal{H}$ \[ \la {\bf C}\, \bm{\tau}_1, \bm{\tau}_2 \ra = \la \bm{\tau}_1, {\bf C} \,\bm{\tau}_2 \ra. \] \end{enumerate} Finally, let us be given the functions we call the {\it Dirichlet offset} and the {\it external force} (called together the {\it loads}), defined on the time interval $I=[0,T]$: \begin{equation} g\in W^{1, \infty}(I, \mathcal{H}), \qquad f\in W^{1, \infty}(I,{\rm Im}\, {\bf D}), \label{eq:loads-abstract-def} \end{equation} where ${\rm Im}\, {\bf D}$ is a closed subspace of $\mathcal{X}_0^*$.
We say that unknown variables \[ \widetilde{\varepsilon}, \,\widetilde{\sigma} \in W^{1,\infty}(I, \mathcal{H}) \] solve the {\it abstract problem of quasi-static evolution in elasticity} if they satisfy for all $t\in I$ \begin{align} \bm{\widetilde{\varepsilon}}& \in {\rm Im}\, {\bf E}+\bm{g}(t) , \label{eq:ae-1} \tag{EL1}\\ \bm{\widetilde{\sigma}}& = {\bf C} \, \bm{\widetilde{\varepsilon}}, \label{eq:ae-2} \tag{EL2}\\ \bm{\widetilde{\sigma}}\in D(\bf D)\quad \text{ and }\quad {\bf D}\, \bm{\widetilde{\sigma}}&=\bm{f}(t). \label{eq:ae-3} \tag{EL3} \end{align} As we will show in the examples below, \eqref{eq:ae-1} is an abstract form of the {\it compatibility equation} and {\it boundary conditions}, \eqref{eq:ae-2} is the {\it constitutive law} of linear elasticity ({\it Hooke's law}), and \eqref{eq:ae-3} is an abstract form of the {\it equation of equilibrium}, see also the scheme of the Fig. \ref{fig:elasticity-scheme}. In this text we will refer to $\widetilde{\sigma}$ as the {\it stress solution for elasticity}, and finding it is the main goal of the current Section \ref{sect:elasticity}. \end{Definition} \begin{Remark} \label{rem:abstract-operator-to-restrict} On top of Definition \ref{def:ae} we would like to add some details, which are common in mechanical models. Although these details can be left outside of the abstract Definition \ref{def:ae}, they will appear in the construction of the concrete examples. \begin{enumerate}[{\it i)}] \item As we will see in the examples to follow, the operator ${\bf E}$ is usually defined as a restriction of another operator, which we will write as \begin{equation} {\rm E}: D({\rm E})=\mathcal{W}\subset \mathcal{X} \to \mathcal{H}, \label{eq:ae-general-strain} \end{equation} where $\mathcal{W}$ is a linear space, such that $\mathcal{W}_0\subset\mathcal{W}\subset \mathcal{X}$. The subspace $\mathcal{W}$ may or may not coincide with the entire $\mathcal{X}$. Given $\mathcal{W}_0$ and such ${\rm E}$ we set \begin{equation} {\bf E}: \bm{u} \mapsto {\rm E}\, \bm{u}\qquad \text{for any }\bm{u}\in \mathcal{W}_0. \label{eq:ex11-E-def} \end{equation} \item In turn, equation \eqref{eq:ae-1} usually appears in the form \begin{align} \bm{\widetilde{\varepsilon}} &= {\rm E}\, \bm{\widetilde{u}}, \label{eq:ae-compatibility-abstract} \\ \bm{\widetilde{u}} & \in \mathcal{W}_0 + \bm{u}_{\bf D}(t)\subset \mathcal{W}, \label{eq:ae-bc-abstract} \end{align} where $u_{\rm D}\in W^{1\infty}(I,\mathcal{W})$ is given, and $\bm{\widetilde{u}}\in \mathcal{W}$ is the unknown variable, which represents in mechanical models the vector distribution of {\it displacements} from a reference configuration. The purpose of equation \eqref{eq:ae-compatibility-abstract} is to ensure the {\it geometric compatibility}, and \eqref{eq:ae-bc-abstract} is an abstract form of the {\it boundary condition}. Both equations \eqref{eq:ae-compatibility-abstract}--\eqref{eq:ae-bc-abstract} are combined in \eqref{eq:ae-1}, where \begin{equation} \bm{g}(t) = {\rm E}\, \bm{u}_{\bf D}(t). \label{eq:ae-abstract-g-def} \end{equation} In this paper we focus on solving problems in terms of $\bm{\widetilde{\varepsilon}}$ and $\bm{\widetilde{\sigma}}$, and do not attempt to determine the concrete values of $\bm{\widetilde{u}}$. In general, provided with $\bm{\widetilde{\varepsilon}}$, the corresponding $\bm{\widetilde{u}}$ can be found from \eqref{eq:ae-compatibility-abstract}--\eqref{eq:ae-bc-abstract}. 
\item \label{rem:abstract-operator-to-restrict-about-forces} We think of $\mathcal{X}$ as the space of {\it geometric configurations of the system in the physical space}, such as a a finite collection of displacement vectors in a system of particles or a vector field of displacements in a continuous body. Therefore, it is natural to think of the external force to be given for each $t\in I$ as a functional \[ \bm{F}(t) \in \mathcal{X}^*, \] i.e. as a counterpart of $\bm{\widetilde{u}}\in \mathcal{W}\subset \mathcal{X}$. Thus for any two displacement distributions $\bm{\widetilde{u}}_1, \bm{\widetilde{u}}_2\in \mathcal{W}\subset \mathcal{X}$ the expression \[ \la\bm{F}(t),\bm{\widetilde{u}}_2- \bm{\widetilde{u}}_1\ra_{\mathcal{X}^*, \mathcal{X}} \] is precisely how the {\it virtual work} of the force $\bm{F}(t)$ is defined over a path from $\bm{\widetilde{u}}_1$ to $\bm{\widetilde{u}}_2$. We should make this compatible with the operator ${\bf D}$ taking values in $\mathcal{X}_0^*$. For a given external force load $\bm{F}\in \mathcal{X}^*$ (or even $\bm{F}\in \mathcal{X}$ when $\mathcal{X}$ is a Hilbert space and the Riesz theorem is applicable), we define the functional $\bm{f}\in \mathcal{X}_0^*$ as the restriction of $\bm{F}$, i.e. \begin{equation} \bm{f}: \bm{u}\mapsto \la \bm{F},\bm{u} \ra_{\mathcal{X}^*, \mathcal{X}} \qquad \text{for }\bm{u} \in \mathcal{X}_0, \label{eq:ae-force-functional-abstract} \end{equation} and this is the given load, which we use in the right-hand side of \eqref{eq:ae-3}. However, some caution is required, as the mapping $\bm{F}\mapsto \bm{f}$ will not always be injective, see also Remark \ref{rem:abstract-elasticity-nonunique-ext} on the same issue. \end{enumerate} These discussions allow us to represent the abstract problem of quasi-static evolution in elasticity as a scheme of the Fig. \ref{fig:elasticity-scheme}. \end{Remark} \begin{figure}[H]\center \includegraphics{Fig-elasticity-scheme.pdf} \caption{\footnotesize Schematic representation of the problem of Definition \ref{def:ae} and Remark \ref{rem:abstract-operator-to-restrict}. The unknown variables are indicated by blue color. In the problem of Definition \ref{def:ae} we are only looking for the unknowns $\bm{\widetilde{\varepsilon}}$ and $\bm{\widetilde{\sigma}}$. } \label{fig:elasticity-scheme} \end{figure} \subsection{Stress solution for elasticity in the abstract geometric framework} To find the stress solution we need to redefine the spaces and operators to take into account ${\bf C}$. Recall, that by Lax-Milgram lemma (see e.g. \cite[Th. 13.29, p. 269]{Kress2014}, \cite[Sect. 1.1, pp. 1--5]{Gatica2014}), the operator ${\bf C}$ has an inverse ${\bf C}^{-1}$, which is also bounded, coercive and symmetric. \begin{Definition} \label{def:ae-weighted} In the setting of Definition \ref{def:ae} denote by $\mathcal{H}_{{\bf C}^{-1}}$ the space $\mathcal{H}$ equipped with the following inner product and norm: \begin{gather} \la\bm{\sigma}, \bm{\tau}\ra_{{\bf C}^{-1}} = \la \bm{\sigma}, {\bf C}^{-1} \bm{ \tau} \ra_\mathcal{H} \qquad \text{for any } \bm{\sigma}, \bm{\tau}\in \mathcal{H}, \label{eq:weighted-ip} \\ \|\bm{\sigma}\|_{{\bf C}^{-1}}=\left(\la\bm{\sigma}, \bm{\sigma}\ra_{{\bf C}^{-1}}\right)^{\frac{1}{2}} \qquad \text{for any } \bm{\sigma}\in \mathcal{H}. 
\label{eq:weighted-norm} \end{gather} Further, define the operators \begin{equation} \begin{array}{cc} {\bf E_{C}}:\mathcal{W}_0\subset \mathcal{X} \to \mathcal{H}_{{\bf C}^{-1}}, & {\bf D_{C}}:D({\bf D})\subset \mathcal{H}_{{\bf C}^{-1}} \to \mathcal{X}_0^* ,\\[2mm] {\bf E_{C}} = {\bf C\, E}, & {\bf D_{C}}\, \bm{\sigma} ={\bf D}\, \bm{\sigma} \text { for any } \bm{\sigma} \in D({\bf D}). \end{array} \label{eq:EC-DC-def} \end{equation} The operators ${\bf E_{C}}$ and ${\bf D_{C}}$ themselves fit into the framework of Section \ref{ssect:adjoint-operatorsED}, so that they are also closed, have closed ranges and ${\bf D_{C}}$ is densely defined. Furthermore, the subspaces \begin{equation} \mathcal{U} = {\rm Im}\, {\bf E_C}, \qquad \mathcal{V} = {\rm Ker}\, {\bf D_{C}}, \label{eq:ae-UV-def} \end{equation} are closed in $\mathcal{H}_{{\bf C}^{-1}}$, orthogonal in the sense of \eqref{eq:weighted-ip} and \[ \mathcal{H}_{{\bf C}^{-1}}=\mathcal{U}\oplus\mathcal{V}. \] We call $\mathcal{U}$ and $\mathcal{V}$ the {\it fundamental subspaces}, and throughout the paper we consider them as equipped with the norm of $\mathcal{H}_{{\bf C}^{-1}}$. Finally, define the operators of orthogonal (in the sense of \eqref{eq:weighted-ip}) projections \begin{equation} P_{\mathcal U}: \mathcal{H}_{{\bf C}^{-1}} \to \mathcal{U}, \qquad P_{\mathcal V}: \mathcal{H}_{{\bf C}^{-1}} \to \mathcal{V}, \label{eq:ae-PU-PV-def} \end{equation} i.e. for any $\bm{\tau}\in \mathcal{H}_{{\bf C}^{-1}} $ we have a unique Helmholtz- or Beltrami-type decomposition \begin{equation} {\bm \tau} = P_{\mathcal{U}}\,{\bm \tau} + P_{\mathcal{V}}\,\bm{\tau}. \label{eq:projection_decomposition} \end{equation} \end{Definition} The next proposition will be instrumental in finding the stress solution. \begin{Proposition} \label{prop:elasticity-affine-planes} Let $G$ and $Q$ be the following bounded operators. The operator $G$ is defined by \begin{equation} G: \mathcal{H} \to \mathcal{V}\subset \mathcal{H}_{{\bf C}^{-1}} , \qquad G=P_{\mathcal{V}}\, {\bf C}, \label{eq:ae-G-def} \end{equation} and \[ Q: {\rm Im}\,{\bf D} \to \mathcal{U} \subset \mathcal{H}_{{\bf C}^{-1}} \] is defined as the inverse of the restriction ${\bf D}_{{\bf C},r}$ of ${\bf D_C}$ to $\mathcal{U}$, acting as \[ {\bf D}_{{\bf C},r}: \mathcal{U}\cap D({\bf D})\subset \mathcal{U}\to {\rm Im}\, {\bf D}. \] We have the following facts. \begin{enumerate}[\it i)] \item A vector $\bm{\widetilde{\varepsilon}}\in \mathcal{H}$ satisfies \eqref{eq:ae-1} if and only if \begin{equation} {\bf C}\, \bm{\widetilde{\varepsilon}} \in \mathcal{U} + G\, {\bm g}. \label{eq:elasticity-affine-plane1} \end{equation} \item A vector $\bm{\widetilde{\sigma}}\in \mathcal{H}$ satisfies \eqref{eq:ae-3} if and only if \begin{equation} \bm{\widetilde{\sigma}} \in \mathcal{V} + Q\, {\bm f}. \label{eq:elasticity-affine-plane2} \end{equation} \end{enumerate} \end{Proposition} \noindent {\bf Proof.} {\it i)} Since the operator ${\bf C}$ is invertible, equation \eqref{eq:ae-1} is equivalent to \[ {\bf C}\, \bm{\widetilde{\varepsilon}} \in \mathcal{U} + {\bf C}\, {\bm g}, \] which, by the decomposition \eqref{eq:projection_decomposition} of ${\bf C}\, {\bm g}$, is equivalent to \eqref{eq:elasticity-affine-plane1}. \noindent {\it ii)} As ${\bf D}_{\bf C}$ is closed and densely defined, we follow the lines of \cite[Appendix A]{Beutler1976} to show that the operator ${\bf D}_{{\bf C},r}$ is closed, densely defined and bijective.
Hence there exists the inverse $Q$ of ${\bf D}_{{\bf C},r}$, and the continuity of $Q$ follows from the closed graph theorem (see e.g. \cite[Th. 2.9, p. 37]{Brezis2011}). Thus \eqref{eq:ae-3} is equivalent to \eqref{eq:elasticity-affine-plane2}. $\blacksquare$ At last, we have a convenient expression of the stress solution for elasticity. \begin{Theorem} \label{th:elasticity-solution} A function $\bm{\widetilde{\sigma}}(t)$ is a stress solution for elasticity if and only if \begin{equation} \bm{\widetilde{\sigma}}(t)\in ( \mathcal{U}+G\, {\bm g}(t)) \cap( \mathcal{V}+Q\,{\bm f}(t)). \label{eq:abstract_intersection} \end{equation} Inclusion \eqref{eq:abstract_intersection} has exactly one solution \begin{equation} \bm{\widetilde{\sigma}}(t) = G\, {\bm g}(t) + Q\,{\bm f}(t). \label{eq:abstract_linear_solution} \end{equation} \end{Theorem} \noindent {\bf Proof.} Inclusion \eqref{eq:abstract_intersection} is equivalent to \eqref{eq:ae-1}--\eqref{eq:ae-3} by Proposition \ref{prop:elasticity-affine-planes}. To derive \eqref{eq:abstract_linear_solution} we notice that \eqref{eq:abstract_intersection} is also equivalent to \[ \bm{\widetilde{\sigma}}- G\,\bm{g} -Q\, \bm{f} \in (\mathcal{U}+G\,\bm{g}) \cap(\mathcal{V}+Q\, \bm{f})-G\, \bm{g} -Q\, \bm{f}. \] In turn, the right-hand side is \begin{multline*} (\mathcal{U}+G\, \bm{g}) \cap(\mathcal{V}+Q\, \bm{f})-G\, \bm{g} -Q\, \bm{f} = (\mathcal{U}+G\, \bm{g}-G\, \bm{g} -Q\, \bm{f}) \cap(\mathcal{V}+Q\, \bm{f}-G\, \bm{g} -Q\, \bm{f})=\\ = (\mathcal{U}-Q\, \bm{f}) \cap(\mathcal{V}-G\, \bm{g})=\mathcal{U}\cap\mathcal{V}=\{0\}, \end{multline*} where the last two equalities are due to $Q\, \bm{f} \in \mathcal{U}, G\,\bm{g}\in \mathcal{V}$ and the triviality of the intersection of the orthogonal subspaces, respectively. Therefore, \eqref{eq:abstract_linear_solution} is indeed equivalent to \eqref{eq:abstract_intersection}. $\blacksquare$ \begin{Remark} \label{rem:ae-time-regularity} When the loads have time-regularity \eqref{eq:loads-abstract-def}, the stress solution for elasticity $\widetilde{\sigma}$, defined via \eqref{eq:abstract_linear_solution}, also has time-regularity $W^{1, \infty}(I, \mathcal{H}_{{\bf C}^{-1}}) \cong W^{1, \infty}(I, \mathcal{H})$. Indeed, by the continuity of the operators $G$ and $Q$ we have by Proposition \ref{prop:prelim-classical-derivatives-ae} that for a.a. $t\in I$ \[ \frac{d}{dt}\, \bm{\widetilde{\sigma}}(t) = \lim_{\Delta\to 0} \frac{\bm{\widetilde{\sigma}}(t+\Delta)-\bm{\widetilde{\sigma}}(t)}{\Delta}=G \frac{d}{dt}\, \bm{g}(t)+ Q\,\frac{d}{dt}\, \bm{f}(t) \] and, therefore, \[ \left\|\frac{d}{dt}\, \bm{\widetilde{\sigma}}(t)\right\|_{{\bf C}^{-1}} \leqslant \|G\|_{\rm op} \,\left\|\frac{d}{dt}\bm{g}(t)\right\|_{\mathcal{H}}+\|Q\|_{\rm op}\,\left\|\frac{d}{dt}\bm{f}(t)\right\|_{\mathcal{X}_0^*}, \] where the operator norms are \[ \|G\|_{\rm op} = \sup_{\bm{\tau}\in \mathcal{H}\setminus\{0\}} \frac{\|G \,\bm{\tau}\|_{{\bf C}^{-1}}}{\|\bm{\tau}\|_{\mathcal{H}}}, \qquad \|Q\|_{\rm op} = \sup_{\bm{\varphi}\in {\rm Im}\, {\bf D}\setminus \{0\}}\frac{ \|Q\, \bm{\varphi}\|_{{\bf C}^{-1}}}{ \|\bm{\varphi}\|_{\mathcal{X}_0^*}}.
\] We conclude that \begin{equation} \left\|\widetilde{\sigma}\right\|_{W^{1,\infty}\left(I, \mathcal{H}_{{\bf C}^{-1}}\right)}\leqslant \|G\|_{\rm op}\, \|g\|_{W^{1, \infty}(I, \mathcal{H})} + \|Q\|_{\rm op} \, \|f\|_{W^{1, \infty}\left(I, \mathcal{X}_0^*\right)}. \label{eq:ae-time-regularity} \end{equation} \end{Remark} \begin{Remark} In the problem of elasticity the unknown $\bm{\widetilde{\varepsilon}}$ can be recovered from \eqref{eq:ae-2} by using bounded operator ${\bf C}^{-1}$. \end{Remark} \subsection{Example 1 --- a discrete model with one-dimensional \texorpdfstring{$\mathcal{V}$}{V}} \label{ssect:ex11-elasticity} \begin{figure}[H]\center \includegraphics{Fig-discrete-models-elastic.pdf} \caption{\footnotesize Discrete models of Examples 1 ({\it a}) and 2 ({\it b}). Red arrows denote the external forces $\bm{F}$, applied at the nodes. } \label{fig:discrete-models-elastic} \end{figure} As the first example, consider the discrete model of Fig. \ref{fig:discrete-models-elastic} {\it a)}, which consists of two elastic springs connected in series between three nodes. The left endpoint is held at $x=0$ and the displacement of the right endpoint from its position in the relaxed configuration is prescribed as a known function $l\in W^{1,\infty}(I)$. \subsubsection{Example 1 --- kinematics} In terms of the abstract framework of Section \ref{ssect:abstract-elasticity} we have the following quantities. The ambient spaces are given as \[ \mathcal{X} = \mathbb{R}^3, \qquad \mathcal{H} = \mathbb{R}^2, \] and we write the boundary condition as the following constraint on the vector of displacements $\bm{\widetilde u}\in \mathcal{X}$: \begin{equation} R\, \bm{\widetilde u} + r(t) =0, \label{eq:ex11-bc} \end{equation} where \[ R=\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}, \qquad r(t)= \begin{pmatrix} 0\\ -l(t) \end{pmatrix}. \] Further, denote the following matrix \begin{equation} {\rm E} = \begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}. \label{ae:ex11-E-matrix} \end{equation} The {\it elongations} $\bm{\widetilde{\varepsilon}} \in \mathcal{H}$ of the springs are computed for each $t\in I$ via the expression \eqref{eq:ae-compatibility-abstract}. We define the operator ${\bf E}$ from \eqref{eq:ae-abstract-E-def} by the formula \eqref{eq:ex11-E-def} with \[ \mathcal{W} = \mathcal{X}=\mathbb{R}^3,\qquad \mathcal{W}_0=D({\bf E}) = {\rm Ker}\, R. \] \noindent Since $\mathcal{W}_0$ is closed we have $\mathcal{X}_0=\mathcal{W}_0$. It is a one-dimensional space, parametrized by $u_2\in \mathbb{R}$, so that \begin{equation} \mathcal{W}_0=\mathcal{X}_0=D({\bf E})={\rm Ker}\, R= {\rm Im}\, R_0 = \left\{R_0\, u_2:u_2\in \mathbb{R} \right\} \qquad \text{with }R_0=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}. \label{eq:ex11-DE-parametrization} \end{equation} To derive \eqref{eq:ae-1} we write \eqref{eq:ex11-bc} in the equivalent form \eqref{eq:ae-bc-abstract} by using the Moore-Penrose pseudoinverse of $R$ (see Corollary \ref{cor:mp-solving-linear-systems}) and setting \[ \bm{u}_{\bf D} (t) = -R^+ r(t), \qquad R^+= R^{\top}\left(RR^{\top}\right)^{-1}= \begin{pmatrix}1 & 0 \\ 0 & 0 \\ 0 & 1\end{pmatrix}. \] Thus, we follow \eqref{eq:ae-abstract-g-def} and let \[ \bm{g}(t) ={\rm E}\,\bm{u}_{\bf D} (t) = - {\rm E}\, R^+ r(t) = \begin{pmatrix} 0 \\ l(t)\end{pmatrix}. \] \noindent We have set up all of the quantities, which appear in \eqref{eq:ae-1} and \eqref{eq:ae-general-strain}--\eqref{eq:ae-abstract-g-def}. \subsubsection{Example 1 --- Hooke's law} In the two-spring the model of Fig. 
\ref{fig:discrete-models-elastic} {\it a)} we have the operator ${\bf C}$ from \eqref{eq:ae-C-def} and its inverse defined by the respective diagonal matrices \[ {\bf C} =\begin{pmatrix} k_1 & 0\\[1mm] 0 & k_2 \end{pmatrix}, \qquad {\bf C}^{-1} =\begin{pmatrix} k_1^{-1} & 0\\[1mm] 0 & k_2^{-1} \end{pmatrix}, \] in which $k_1, k_2>0$ are the {\it stiffness} parameters of the springs. Equation \eqref{eq:ae-2} in this model is Hooke's law, connecting the {\it stresses} $\bm{\widetilde{\sigma}}\in \mathbb{R}^2$ of the springs with the elongations $\bm{\widetilde{\varepsilon}}$. \subsubsection{Example 1 --- statics} Assume that a given {\it external force} $F\in W^{1,\infty}(I, \mathbb{R}^3)$ is applied in the model of Fig. \ref{fig:discrete-models-elastic} {\it a)}. Specifically, for each $t\in I$, each of the three components of $\bm{F}$ denotes a longitudinal force, applied at the respective node. From the {\it principle of virtual work} (see e.g. \cite[Section 1.4, pp.~16--17]{Goldstein2002}) we write the equation of equilibrium on the unknown $\bm{\widetilde{\sigma}}\in \mathbb{R}^2$ as \begin{equation} \left\la - {\rm E}^\top \,\bm{\widetilde{ \sigma}} + \bm{F}, \bm{\delta u} \right\ra_{\mathbb{R}^3} = 0 \qquad \text{for any }\bm{\delta u}\in {\rm Ker}\, R. \label{eq:ex11-equilibrium-basic-formulation} \end{equation} We use the convention, that positive values of $\bm{\widetilde{\varepsilon}}$ denote a configuration with stretched springs (with respect to the stress-free reference configuration). Therefore, Hooke's law \eqref{eq:ae-2} implies that positive values of $\bm{\widetilde{\sigma}}$ correspond to the forces acting towards {\it contraction}. Substitute ${\rm E}$ from \eqref{ae:ex11-E-matrix} to observe that the term $ - {\rm E}^\top \,\bm{\widetilde{ \sigma}}$ in \eqref{eq:ex11-equilibrium-basic-formulation} indeed expresses the forces at the nodes, produced by a stress vector $\bm{\widetilde{\sigma}}$. Now we transform equation \eqref{eq:ex11-equilibrium-basic-formulation} to the abstract form \eqref{eq:ae-3}. For each value $\bm{F}(t)\in \mathbb{R}^3$ we define a functional $\bm{f}(t)\in \mathcal{X}_0^*$ acting as \begin{equation} {\bm f}: \bm{u} \mapsto \left\la \bm{F}, \bm{u} \right\ra_{\mathbb{R}^3} \qquad \text{for any } \bm{u}\in \mathcal{X}_0, \label{eq:ex11-functionals-def} \end{equation} which coincides with \eqref{eq:ae-force-functional-abstract} in the current setting. In fact, due to the representation \eqref{eq:ex11-DE-parametrization}, we can write \eqref{eq:ex11-functionals-def} explicitly as \begin{equation} {\bm f}: \bm{u} \mapsto F_2\, u_2 \qquad \text{for any } \bm{u}\in \mathcal{X}_0. \label{eq:ex11-functionals-explicit} \end{equation} \noindent We can now rewrite the equation of equilibrium \eqref{eq:ex11-equilibrium-basic-formulation} as \begin{equation} \left\la \bm{\widetilde{\sigma}}, {\bf E}\, \bm{\delta u}\right\ra_{\mathcal{H}} = \la\bm{f}, \bm{\delta u}\ra_{\mathcal{X}_0^*, \mathcal{X}_0} \qquad \text{for any } \bm{\delta u}\in \mathcal{X}_0, \label{eq:ex11-equilibrium-functional-formulation} \end{equation} which becomes \eqref{eq:ae-3} when we take the operator ${\bf D}$ as the adjoint, according to the Definition \ref{def:abstract-adjoint}. 
The adjoint operator has its domain \[ D({\bf D})= \mathcal{H}^*\cong \mathcal{H}=\mathbb{R}^2 \] and it is defined by \begin{gather} {\bf D}: \mathcal{H}^* \to \mathcal{X}_0^*, \label{eq:ex11-D-def1} \\ \left\la{\bf D}\, \bm{\sigma}, \bm{\delta u}\right \ra_{\mathcal{X}_0^*,\mathcal{X}_0} = \left\la \bm{\sigma}, {\bf E}\, \bm{\delta u}\right\ra_{\mathcal{H}} \qquad \text{for any } \bm{\sigma}\in \mathcal{H},\, \bm{\delta u} \in \mathcal{X}_0. \label{eq:ex11-D-def2} \end{gather} \begin{Remark} \label{rem:ex11-all-forces-are-resolvable} We can use the representation \eqref{eq:ex11-DE-parametrization} to observe for any $\bm{\sigma} = \begin{pmatrix} \sigma_1\\ \sigma_2 \end{pmatrix} \in \mathbb{R}^2,\, \bm{u}\in \mathcal{X}_0\subset \mathbb{R}^3$ that \begin{equation} \left\la {\bf D} \bm{\sigma}, \bm{u} \right\ra_{\mathcal{X}_0^*,\mathcal{X}_0} = \left\la \begin{pmatrix} \sigma_1\\ \sigma_2 \end{pmatrix}, {\rm E} R_0\, u_2 \right\ra_{\mathbb{R}^2} = \begin{pmatrix} 1 & -1 \end{pmatrix} \begin{pmatrix} \sigma_1 \\ \sigma_2 \end{pmatrix} u_2 =(\sigma_1-\sigma_2)\,u_2, \label{eq:ex11-explicit-D} \end{equation} from which we deduce that ${\bf D}$ is surjective onto $\mathcal{X}_0^*$, since both ${\rm Im}\, {\bf D}$ and $\mathcal{X}_0^*$ are one-dimensional (respectively, due to \eqref{eq:ex11-explicit-D}, and due to $\mathcal{X}_0$ being one-dimensional, see \eqref{eq:ex11-DE-parametrization}). In terms of the mechanics of the model of Fig. \ref{fig:discrete-models-elastic} {\it a)}, the surjectivity of ${\bf D}$ means that {\it any external force } $\bm{F}\in \mathbb{R}^3$ {\it applied at the nodes can be compensated (i.e. brought to an equilibrium state) by at least one stress configuration $\bm{\sigma}\in \mathbb{R}^2$. } If we borrow the terminology of \cite[p. 424]{RothWhiteley1981} (see also \cite[Sect. 3]{Gudoshnikov2023preprint}), we can say that each force load $\bm{F}\in \mathbb{R}^3$ is {\it resolvable}. \end{Remark} \begin{Remark} \label{rem:ex11-elasticity-nonunique-ext} We observed earlier that the space $\mathcal{X}_0^*$ is one-dimensional, as the dual of the one-dimensional space $\mathcal{X}_0$. At the same time, we have just shown that any $\bm{F}$ from the three-dimensional space $\mathbb{R}^3$ can be compensated. This can be explained by the fact that the mapping \[ \bm{F} \mapsto \bm{f}, \qquad \mathbb{R}^3 \to \mathcal{X}_0^*, \] defined by \eqref{eq:ex11-functionals-def} is not injective. Indeed, for any $\bm{F_0} \in \mathcal{X}_0^\perp = \left({\rm Ker}\, R\right)^\perp = {\rm Im}\, R^\top$ both $\bm F$ and $\bm{F}+\bm{F_0}$ define the same functional by \eqref{eq:ex11-functionals-def}. Put differently, a functional $\bm{f}\in \mathcal{X}_0^*$ admits many such extensions $\bm{F}+\bm{F_0}\in \mathcal{X}^*$, cf. Remark \ref{rem:abstract-elasticity-nonunique-ext} and Remark \ref{rem:abstract-operator-to-restrict} \ref{rem:abstract-operator-to-restrict-about-forces}. \end{Remark} \subsubsection{Example 1 --- the fundamental subspaces and the stress solution} In the above we have provided all of the data required in Definition \ref{def:ae} to set up a problem of the type \eqref{eq:ae-1}--\eqref{eq:ae-3}. Now we will give exact expressions of the quantities from Definition \ref{def:ae-weighted} and Theorem \ref{th:elasticity-solution} for the model of Fig. \ref{fig:discrete-models-elastic} {\it a)}.
Space $\mathcal{H}_{{\bf C}^{-1}}$ is $\mathbb{R}^2$ equipped with the inner product \begin{equation} \left\la \begin{pmatrix} \sigma_1\\ \sigma_2\end{pmatrix}, \begin{pmatrix}\tau_1\\ \tau_2\end{pmatrix} \right\ra_{{\bf C}^{-1}} = \sigma_1 k_1^{-1} \tau_1+ \sigma_2 k_2^{-1} \tau_2 \qquad \text{for all }\begin{pmatrix} \sigma_1\\ \sigma_2\end{pmatrix}, \begin{pmatrix}\tau_1\\ \tau_2\end{pmatrix}\in \mathbb{R}^2. \label{eq:ex11-inner-product} \end{equation} Subspace $\mathcal{U}$ is \[ \mathcal{U} ={\rm Im}\,{\bf C} {\rm E} R_0 = {\rm Im}\, \begin{pmatrix}k_1 & 0 \\ 0 & k_2\end{pmatrix} \begin{pmatrix} -1 & 1 & 0\\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = {\rm Im}\, \begin{pmatrix} k_1 \\ -k_2 \end{pmatrix}, \] and, from \eqref{eq:ex11-explicit-D} used in the definition of ${\bf D}_{\bf C}$ in \eqref{eq:EC-DC-def}, we have that \[ \mathcal{V} = {\rm Im}\, \begin{pmatrix} 1 \\ 1\end{pmatrix}, \] see Fig. \ref{fig:discrete-models-spaces} {\it a)}. \begin{figure}[H]\center \includegraphics{Fig-discrete-models-spaces.pdf} \caption{\footnotesize The fundamental spaces and the stress solution for elasticity in the discrete models of Examples 1.1 ({\it a}) and 1.2 ({\it b}). The figure shows the situation with the stiffness parameters $k_i$ equal to $1$. } \label{fig:discrete-models-spaces} \end{figure} By the formula for the projection on a subspace in terms of a weighted inner product (see e.g. \cite[(5.31), p. 247]{OlverLinearAlgebra2018}) we have the matrices of projections \begin{multline*} P_{\mathcal{U}}= \begin{pmatrix} k_1 \\ -k_2\end{pmatrix}\left(\begin{pmatrix} k_1 & -k_2\end{pmatrix}\begin{pmatrix} k_1^{-1} & 0\\ 0 & k_2^{-1}\end{pmatrix}\begin{pmatrix} k_1 \\ -k_2\end{pmatrix}\right)^{-1}\begin{pmatrix} k_1 & -k_2\end{pmatrix}\begin{pmatrix} k_1^{-1} & 0\\ 0 & k_2^{-1} \end{pmatrix}=\\ = \frac{1}{k_1+k_2}\begin{pmatrix} k_1 & -k_1 \\ -k_2 & k_2\end{pmatrix}, \end{multline*} \[ P_{\mathcal{V}} = \begin{pmatrix} 1 \\ 1\end{pmatrix}\left(\begin{pmatrix} 1 & 1\end{pmatrix}\begin{pmatrix} k_1^{-1}& 0 \\ 0 & k_2^{-1}\end{pmatrix}\begin{pmatrix} 1 \\ 1\end{pmatrix}\right)^{-1}\begin{pmatrix} 1 & 1\end{pmatrix}\begin{pmatrix} k_1^{-1}& 0 \\ 0 & k_2^{-1}\end{pmatrix} = \frac{1}{k_1^{-1}+k_2^{-1}}\begin{pmatrix} k_1^{-1}& k_2^{-1}\\[1mm] k_1^{-1} & k_2^{-1}\end{pmatrix}. \] Further, operator $G$ is represented by the matrix \[ G= P_{\mathcal{V}}{\bf C} = \frac{1}{k_1^{-1}+k_2^{-1}}\begin{pmatrix} k_1^{-1}& k_2^{-1}\\ k_1^{-1} & k_2^{-1}\end{pmatrix} \begin{pmatrix} k_1& 0 \\ 0 & k_2\end{pmatrix} =\frac{1}{k_1^{-1}+k_2^{-1}}\begin{pmatrix} 1& 1 \\ 1 &1\end{pmatrix}. \] Now we only have to find a representation for the operator $Q$ in terms of an external force $\bm{F}$. Notice, that for any $\bm{\sigma}\in \mathcal{U}$ there exists a unique $\sigma\in \mathbb{R}$, such that \[ \bm{\sigma}= \begin{pmatrix}k_1\\ -k_2\end{pmatrix} \sigma. \] Plug that into \eqref{eq:ex11-explicit-D}, and then plug the result together with \eqref{eq:ex11-functionals-explicit} into \eqref{eq:ae-3} to see that \[ (k_1+k_2)\sigma = F_2, \] i. e. \[ \sigma = \frac{F_2}{k_1+k_2}. \] Therefore, for a given external force $\bm{F}\in \mathbb{R}^{3}$ and the corresponding (by \eqref{eq:ex11-functionals-def}) functional $\bm{f}\in \mathcal{X}_0^*$ we have the value of the operator $Q$ \[ Q\, \bm{f} = \frac{F_2}{k_1+k_2}\begin{pmatrix}k_1 \\ -k_2\end{pmatrix}\in \mathcal{U}. \] By Theorem \ref{th:elasticity-solution} we conclude that the stress solution for elasticity in the model of Fig. 
\ref{fig:discrete-models-elastic} {\it a)} has the closed form \begin{equation} \bm{\widetilde{\sigma}}(t) = G\, \bm{g}(t) + Q\, \bm{f}(t) = \frac{l(t)}{k_1^{-1}+ k_2^{-1}} \begin{pmatrix}1 \\ 1\end{pmatrix} + \frac{F_2(t)}{k_1+k_2}\begin{pmatrix} k_1\\-k_2\end{pmatrix}, \label{eq:ex11-linear-solution} \end{equation} see Fig. \ref{fig:discrete-models-spaces} {\it a)}. \subsection{Example 2 --- a discrete model with two-dimensional \texorpdfstring{$\mathcal{V}$}{V}} \label{ssect:ex12-elasticity} Our second example is the discrete model of Fig. \ref{fig:discrete-models-elastic} {\it b)} with three elastic springs connected in series and prescribed constraints on elongations \begin{equation} \begin{array}{c} \widetilde{\varepsilon}_1+ \widetilde{\varepsilon}_2 = l_1(t),\\ \widetilde{\varepsilon}_2+ \widetilde{\varepsilon}_3 = l_2(t), \end{array} \label{eq:ex12-constraint} \end{equation} where $l_1, l_2\in W^{1, \infty}(I)$ are given. \subsubsection{Example 2 --- kinematics} We proceed similarly to the previous example and start with the ambient spaces \[ \mathcal{X}=\mathbb{R}^4, \qquad \mathcal{H} = \mathbb{R}^3. \] and the displacement-elongation relation \eqref{eq:ae-compatibility-abstract} with $\bm{\widetilde{\varepsilon}}\in \mathcal{H}, \,\bm{\widetilde{u}}\in \mathcal{X}$, \[ {\rm E} = \begin{pmatrix} -1 & 1 & 0 & 0\\ 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 1 \end{pmatrix}. \] But, instead of the boundary condition \eqref{eq:ex11-bc}, we write the constraint \eqref{eq:ex12-constraint} in terms of $\bm{\widetilde{\varepsilon}}\in \mathcal{H}$ as \begin{equation} R\,\bm{\widetilde{\varepsilon}} + r(t) = 0 \label{eq:ex12-constraint-matrix-form} \end{equation} with \[ R= \begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1 \end{pmatrix}, \qquad r(t) = -\begin{pmatrix}l_1(t) \\ l_2(t) \end{pmatrix}. \] In terms of $\bm{\widetilde{u}}\in \mathcal{X}$ the constraint can be written as \begin{equation} R {\rm E}\,\bm{\widetilde{u}} +r(t) =0, \label{eq:ex12-constrain-u-form} \end{equation} where \[ R {\rm E} = \begin{pmatrix}-1 & 0 & 1& 0\\ 0 & -1& 0 & 1 \end{pmatrix}. \] \noindent We set the spaces \[ \mathcal{W}=\mathcal{X}=\mathbb{R}^4, \qquad \mathcal{W}_0= D({\bf E}) ={\rm Ker}\, (R {\rm E}), \] and by \eqref{eq:ex11-E-def} we set the operator ${\bf E}$ from \eqref{eq:ae-abstract-E-def}. We again have $\mathcal{W}_0=\mathcal{X}_0$ with a parametrization \begin{equation} \mathcal{W}_0=\mathcal{X}_0=D({\bf E}) = {\rm Im}\, R_0 = \left\{R_0 \begin{pmatrix}u_1\\ u_2 \end{pmatrix}: \begin{pmatrix} u_1\\ u_2 \end{pmatrix}\in \mathbb{R}^2\right\}\qquad \text{with } R_0 = \begin{pmatrix}1& 0\\ 0 & 1\\ 1& 0 \\ 0 & 1\end{pmatrix}. \label{eq:ex12-DE-parametrization} \end{equation} Similarly to the previous example, to derive \eqref{eq:ae-1} we write \eqref{eq:ex12-constrain-u-form} in the equivalent form \eqref{eq:ae-bc-abstract} with the use of Moore-Penrose pseudoinverse as in Corollary \ref{cor:mp-solving-linear-systems}. Specifically, we set \[ \bm{u}_{\bf D} (t) = -(R{\rm E})^+ r(t), \qquad (R {\rm E})^+= (R{\rm E})^{\top}\left(R{\rm E}(R{\rm E})^{\top}\right)^{-1}= \frac{1}{2}\begin{pmatrix}-1 & 0 \\ 0 & -1 \\ 1 & 0 \\ 0 & 1\end{pmatrix}. \] Again, we follow \eqref{eq:ae-abstract-g-def} and let \begin{equation} \bm{g}(t) ={\rm E}\,\bm{u}_{\bf D} (t) = - {\rm E}\, (R{\rm E})^+ r(t) = \frac{1}{2}\begin{pmatrix}1 & -1 \\ 1 & 1 \\ -1 & 1\end{pmatrix} \begin{pmatrix} l_1(t) \\ l_2(t)\end{pmatrix}. \label{eq:ex12-g-def} \end{equation} \noindent For the model of Fig. 
\ref{fig:discrete-models-elastic} {\it b)} we have set up all of the quantities in \eqref{eq:ae-1} and \eqref{eq:ae-general-strain}--\eqref{eq:ae-abstract-g-def}. \begin{Remark} In the particular case (which holds in the present example) of the matrix ${\rm E}$ being surjective onto $\mathcal{H}$, instead of \eqref{eq:ex12-g-def} one can use the simpler expression \begin{equation*} \bm{g}(t)= - R^+ r(t), \end{equation*} which leads to \begin{equation} \bm{g}(t) =\frac{1}{3}\begin{pmatrix}2 & -1 \\ 1 & 1 \\ -1 & 2\end{pmatrix} \begin{pmatrix} l_1(t) \\ l_2(t)\end{pmatrix} \label{eq:ex12-g-def-alt-concrete} \end{equation} when the current values are plugged in. One can verify the equivalence between \eqref{eq:ex12-g-def} and \eqref{eq:ex12-g-def-alt-concrete} in terms of \eqref{eq:ae-1} by checking that their difference belongs to the subspace ${\rm Im}\,{\bf E} = {\rm Im}\, {\rm E}R_0$. \end{Remark} \subsubsection{Example 2 --- Hooke's law} In the three-spring model of Fig. \ref{fig:discrete-models-elastic} {\it b)} we have the operator ${\bf C}$ from \eqref{eq:ae-C-def} and its inverse defined by the respective diagonal matrices \[ {\bf C} =\begin{pmatrix} k_1 & 0 & 0\\[1mm] 0 & k_2 & 0\\[1mm] 0 & 0 & k_3 \end{pmatrix}, \qquad {\bf C}^{-1} =\begin{pmatrix} k_1^{-1} & 0 & 0\\[1mm] 0 & k_2^{-1} & 0\\[1mm] 0 & 0 & k_3^{-1} \end{pmatrix}, \] in which $k_1, k_2, k_3>0$. The stress vectors $\bm{\widetilde{\sigma}}$, defined via \eqref{eq:ae-2}, are now from $\mathbb{R}^3$. \subsubsection{Example 2 --- statics} Similarly to the previous example, we assume that a given external force $F\in W^{1,\infty}(I, \mathbb{R}^4)$ is applied in the model of Fig. \ref{fig:discrete-models-elastic} {\it b)}, so that for each $t\in I$ each of the four components of $\bm{F}$ denotes a longitudinal force, applied at the respective node. The equilibrium equation on an unknown stress $\bm{\widetilde{\sigma}}\in \mathbb{R}^3$ is \begin{equation} \left\la - {\rm E}^\top \,\bm{\widetilde{ \sigma}} + \bm{F}, \bm{\delta u} \right\ra_{\mathbb{R}^4} = 0 \qquad \text{for any }\bm{ \delta u}\in {\rm Ker}\, (R{\rm E}). \label{eq:ex12-equilibrium-basic-formulation} \end{equation} Again, we need to transform equation \eqref{eq:ex12-equilibrium-basic-formulation} to the abstract form \eqref{eq:ae-3}. For each $\bm{F}\in \mathbb{R}^4$ we define the corresponding functional $\bm{f}\in \mathcal{X}_0^*$ by the formula \begin{equation} {\bm f}: \bm{u} \mapsto \left\la \bm{F}, \bm{u} \right\ra_{\mathbb{R}^4} \qquad \text{for any } \bm{u}\in \mathcal{X}_0. \label{eq:ex12-functionals-def} \end{equation} The parametrization \eqref{eq:ex12-DE-parametrization} yields the following explicit formula for $\bm{f}$: \begin{equation} {\bm f}: \bm{u} \mapsto \begin{pmatrix}F_1+ F_3& F_2+ F_4\end{pmatrix}\begin{pmatrix}u_1\\ u_2\end{pmatrix} \qquad \text{for any } \bm{u}\in \mathcal{X}_0. \label{eq:ex12-functionals-explicit} \end{equation} Again, we rewrite the equilibrium equation as \eqref{eq:ex11-equilibrium-functional-formulation}, which becomes \eqref{eq:ae-3} when we take the operator ${\bf D}$ as the adjoint, according to the Definition \ref{def:abstract-adjoint}. In other words, the adjoint operator is defined by \eqref{eq:ex11-D-def1}--\eqref{eq:ex11-D-def2} on its domain \[ D({\bf D}) =\mathcal{H}^*\cong\mathcal{H}=\mathbb{R}^3.
\] \begin{Remark} We can use the representation \eqref{eq:ex12-DE-parametrization} to observe, for any $\bm{\sigma}=\begin{pmatrix} \sigma_1\\ \sigma_2\\ \sigma_3 \end{pmatrix} \in \mathbb{R}^3,\, \bm{u}\in \mathcal{X}_0\subset \mathbb{R}^4$, that \begin{multline} \left\la {\bf D} \bm{\sigma}, \bm{u} \right\ra_{\mathcal{X}_0^*,\mathcal{X}_0} = \left\la \begin{pmatrix} \sigma_1\\ \sigma_2\\ \sigma_3 \end{pmatrix}, {\rm E} R_0 \begin{pmatrix}u_1\\ u_2\end{pmatrix} \right\ra_{\mathbb{R}^3} = \begin{pmatrix} \sigma_1 & \sigma_2 & \sigma_3 \end{pmatrix}\begin{pmatrix}-1 & 1\\ 1 & -1 \\ -1 & 1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}=\\= \begin{pmatrix} \sigma_1 & \sigma_2 & \sigma_3 \end{pmatrix}\begin{pmatrix}-1 \\ 1 \\ -1 \end{pmatrix}\left(u_1 - u_2\right)= (-\sigma_1+\sigma_2-\sigma_3)\left(u_1 - u_2\right). \label{eq:ex12-explicit-D} \end{multline} We can see that ${\bf D}$ is not surjective, since ${\rm Im}\, {\bf D}$ is one-dimensional (due to \eqref{eq:ex12-explicit-D}), but $\mathcal{X}_0^*$ is two-dimensional (due to the two-dimensional $\mathcal{X}_0$ in \eqref{eq:ex12-DE-parametrization}), cf. Remark \ref{rem:ex11-all-forces-are-resolvable}. Therefore, we indeed must specifically require the values $\bm{f}(t)$ to be from ${\rm Im}\, {\bf D}$ as in \eqref{eq:loads-abstract-def}. Notice that, in order to have \eqref{eq:ex12-functionals-explicit} of the type \eqref{eq:ex12-explicit-D}, we must take $\bm{F}$ such that \begin{equation} F_1 + F_3 = -(F_2+ F_4), \label{eq:ex12-F-restriction-form1} \end{equation} i.e. \begin{equation} F_1+F_2+F_3+F_4 =0. \label{eq:ex12-F-restriction-form2} \end{equation} This agrees with the physical reasoning: since the model of Fig. \ref{fig:discrete-models-elastic} {\it b)} is not fixed in place, an external force can be compensated by internal stresses (i.e. the force is resolvable) only when the total applied force is zero. \end{Remark} \begin{Remark} \label{rem:ex12-elasticity-nonunique-ext} Notice that the space $\mathcal{X}_0^*$ is two-dimensional, but $\bm{F}$ is taken from the three-dimensional subspace of $\mathbb{R}^4$ given by \eqref{eq:ex12-F-restriction-form2}. Thus, we again have the situation with a non-injective mapping $\bm{F}\mapsto \bm{f}$, i.e. a functional $\bm{f}\in \mathcal{W}^*$ admits many extensions $\bm{F}\in \mathcal{X}^*$. Compare this to Remarks \ref{rem:abstract-elasticity-nonunique-ext}, \ref{rem:abstract-operator-to-restrict}, \ref{rem:abstract-operator-to-restrict-about-forces} and \ref{rem:ex11-elasticity-nonunique-ext}. \end{Remark}
\subsubsection{Example 2 --- the fundamental subspaces and the stress solution} In the above we have provided all of the data required in Definition \ref{def:ae} to set up a problem of the type \eqref{eq:ae-1}--\eqref{eq:ae-3}. Now we will give exact expressions of the quantities from Definition \ref{def:ae-weighted} and Theorem \ref{th:elasticity-solution} for the model of Fig. \ref{fig:discrete-models-elastic} {\it b)}. The space $\mathcal{H}_{{\bf C}^{-1}}$ is $\mathbb{R}^3$ equipped with the inner product \begin{equation} \left\la \begin{pmatrix} \sigma_1\\ \sigma_2\\ \sigma_3\end{pmatrix}, \begin{pmatrix}\tau_1\\ \tau_2\\ \tau_3\end{pmatrix} \right\ra_{{\bf C}^{-1}} = \sigma_1 k_1^{-1} \tau_1+ \sigma_2 k_2^{-1} \tau_2+ \sigma_3 k_3^{-1} \tau_3 \qquad \text{for all } \begin{pmatrix} \sigma_1\\ \sigma_2\\ \sigma_3\end{pmatrix}, \begin{pmatrix}\tau_1\\ \tau_2\\ \tau_3\end{pmatrix}\in \mathbb{R}^3.
\label{eq:ex12-inner-product} \end{equation} Subspace $\mathcal{U}$ is \begin{multline*} \mathcal{U} ={\rm Im}\,{\bf C} {\rm E} R_0 = {\rm Im}\, \begin{pmatrix}k_1 & 0 & 0\\ 0 & k_2 & 0 \\ 0 & 0 & k_3\end{pmatrix} \begin{pmatrix} -1 & 1 & 0 & 0\\ 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 1 \end{pmatrix} \begin{pmatrix}1& 0\\ 0 & 1\\ 1& 0 \\ 0 & 1\end{pmatrix} =\\= {\rm Im}\, \begin{pmatrix} -k_1 & k_1 \\ k_2 & -k_2 \\ -k_3 & k_3 \end{pmatrix}= {\rm Im}\, \begin{pmatrix} -k_1 \\ k_2 \\ -k_3 \end{pmatrix}, \end{multline*} and, from \eqref{eq:ex12-explicit-D} used in the definition of ${\bf D}_{\bf C}$ in \eqref{eq:EC-DC-def}, we have that \[ \mathcal{V} = {\rm Ker}\, \begin{pmatrix} -1 & 1 & -1\end{pmatrix} = {\rm Im}\, \begin{pmatrix} 1 & 0 \\1 & 1 \\ 0 & 1\end{pmatrix}, \] see Fig. \ref{fig:discrete-models-spaces} {\it b)}. Again, by the formula \cite[(5.31), p. 247]{OlverLinearAlgebra2018} we have the matrices of projections \[ P_{\mathcal{U}} = \frac{1}{k_1+k_2+k_3}\begin{pmatrix} k_1 & -k_1 & k_1 \\ -k_2 & k_2 & -k_2\\ k_3 & -k_3 & k_3\end{pmatrix}, \] \[ P_{\mathcal{V}} = \frac{1}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}} \begin{pmatrix} k_1^{-1}k_2^{-1}+k_1^{-1}k_3^{-1} & k_2^{-1}k_3^{-1}& -k_2^{-1}k_3^{-1} \\[1mm] k_1^{-1}k_3^{-1} & k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1} & k_1^{-1}k_3^{-1} \\[1mm]-k_1^{-1}k_2^{-1}& k_1^{-1}k_2^{-1}&k_1^{-1}k_3^{-1}+ k_2^{-1}k_3^{-1} \end{pmatrix}. \] Further, operator $G$ is represented by the matrix \[ G= P_{\mathcal{V}}{\bf C} = \frac{1}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}}\begin{pmatrix} k_2^{-1}+k_3^{-1} & k_3^{-1}& -k_2^{-1} \\[1mm]k_3^{-1} & k_1^{-1}+k_3^{-1} & k_1^{-1} \\[1mm]-k_2^{-1}& k_1^{-1}&k_1^{-1}+ k_2^{-1} \end{pmatrix}. \] We now find a representation for the operator $Q$ in terms of an external force $\bm{F}$. Notice, that for any $\bm{\sigma}\in \mathcal{U}$ there exists a unique $\sigma\in \mathbb{R}$, such that \[ \bm{\sigma}= \begin{pmatrix}-k_1\\ k_2\\-k_3\end{pmatrix} \sigma. \] Plug that into \eqref{eq:ex12-explicit-D}, and then plug the result together with \eqref{eq:ex12-functionals-explicit} into \eqref{eq:ae-3} to see that \[ (k_1+k_2+k_3)\,\sigma \,(u_1-u_2) = \begin{pmatrix}F_1+ F_3& F_2+ F_4\end{pmatrix}\begin{pmatrix}u_1\\ u_2\end{pmatrix}. \] Due to \eqref{eq:ex12-F-restriction-form1} we get that \[ (k_1+k_2+k_3)\sigma = F_1+F_3, \] i. e. \[ \sigma = \frac{F_1+F_3}{k_1+k_2+k_3}. \] Therefore, for a given external force $\bm{F}\in \mathbb{R}^{4}$ and the corresponding (by \eqref{eq:ex11-functionals-def}) functional $\bm{f}\in \mathcal{X}_0^*$ we have the value of the operator $Q$ \[ Q\, \bm{f} = \frac{F_1+F_3}{k_1+k_2+k_3}\begin{pmatrix}-k_1 \\ k_2\\ -k_3\end{pmatrix}. \] By Theorem \ref{th:elasticity-solution} we conclude that the stress solution for elasticity in the model of Fig. \ref{fig:discrete-models-elastic} {\it b)} exists whenever \eqref{eq:ex12-F-restriction-form2} holds and it has the closed form \begin{multline} \bm{\widetilde{\sigma}}(t) = G\, \bm{g}(t) + Q\, \bm{f}(t) = \\ =\frac{1}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}}\begin{pmatrix} k_2^{-1}+k_3^{-1} & -k_2^{-1} \\[1mm]k_3^{-1} & k_1^{-1} \\[1mm]-k_2^{-1}& k_1^{-1}+ k_2^{-1} \end{pmatrix}\begin{pmatrix} l_1(t) \\ l_2(t)\end{pmatrix} + \frac{F_1(t)+F_3(t)}{k_1+k_2+k_3}\begin{pmatrix}-k_1 \\ k_2\\ -k_3\end{pmatrix}, \label{eq:ex12-linear-solution} \end{multline} see Fig. \ref{fig:discrete-models-spaces} {\it b)}. 
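As a side remark, the closed form \eqref{eq:ex12-linear-solution} can also be verified numerically; the following minimal sketch (our own illustration, assuming Python with \texttt{numpy}, and not part of the derivation above) draws random stiffnesses and loads, evaluates \eqref{eq:ex12-linear-solution}, and checks both the equilibrium relation $-\widetilde{\sigma}_1+\widetilde{\sigma}_2-\widetilde{\sigma}_3 = F_1+F_3$ (cf. \eqref{eq:ex12-explicit-D} and \eqref{eq:ae-3}) and the constraint \eqref{eq:ex12-constraint} for $\bm{\varepsilon}={\bf C}^{-1}\bm{\widetilde{\sigma}}$.
\begin{verbatim}
# Numerical sanity check of the closed-form stress solution of Example 2
# (illustration only; assumes Python 3 with numpy).
import numpy as np

rng = np.random.default_rng(0)
k = rng.uniform(0.5, 2.0, size=3)    # stiffnesses k_1, k_2, k_3
l = rng.uniform(-1.0, 1.0, size=2)   # prescribed values l_1(t), l_2(t) at a fixed t
F13 = rng.uniform(-1.0, 1.0)         # the combination F_1(t) + F_3(t) of the load

ki = 1.0 / k
den = ki[0]*ki[1] + ki[1]*ki[2] + ki[0]*ki[2]
M = np.array([[ki[1] + ki[2], -ki[1]],          # matrix multiplying (l_1, l_2)
              [ki[2],          ki[0]],          # in the closed-form solution
              [-ki[1],         ki[0] + ki[1]]]) / den
sigma = M @ l + F13 / k.sum() * np.array([-k[0], k[1], -k[2]])

# Equilibrium: -sigma_1 + sigma_2 - sigma_3 must equal F_1 + F_3.
assert np.isclose(-sigma[0] + sigma[1] - sigma[2], F13)
# Kinematic constraint for eps = C^{-1} sigma: eps_1+eps_2 = l_1, eps_2+eps_3 = l_2.
eps = sigma / k
assert np.isclose(eps[0] + eps[1], l[0]) and np.isclose(eps[1] + eps[2], l[1])
\end{verbatim}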
\subsection{Example 3 --- a continuum model} \label{ssect:ex21-elasticity} \begin{figure}[H]\center \includegraphics{Fig-continuum-model.pdf} \caption{\footnotesize A continuum model of Example 3 in the relaxed reference configuration ({\it a}) and an arbitrary current configuration ({\it b}). } \label{fig:continuous-rod-model-elastic} \end{figure} As the third example we describe the quasi-static evolution of a one-dimensional elastic rod subjected to Dirichlet boundary conditions at both endpoints and a body force load distributed along its length. We use the Lagrangian description with respect to a relaxed reference configuration of the rod occupying the interval $\overline{\Omega}$ with \[ \Omega = (a,b) \] for given reference values $a<b$.
\subsubsection{Example 3 --- kinematics} The displacement variable is now an unknown function \[ \widetilde{u}\in W^{1,\infty}(I, W^{1,2}(\Omega)) \] of time $t\in I$ and Lagrangian coordinate $x\in \Omega$. Let the Dirichlet boundary conditions at $x=a$ and $x=b$ be given by scalars $u_a(t), u_b(t)\in \mathbb{R}$ respectively. We assume that \begin{equation} u_a, u_b\in W^{1, \infty}(I) \label{eq:ex21-bc-1} \end{equation} and that $u_b(t) - u_a(t)>b-a$ for all $t>0$. Using $u_a(t)$ and $u_b(t)$ we choose a function \begin{equation} u_{\rm D} \in W^{1, \infty}(I, W^{1,2}(\Omega)), \label{eq:ex21-bc-2} \end{equation} such that \begin{equation} u_{\rm D}(t,a)= u_a (t), \qquad u_{\rm D}(t,b)= u_b (t), \qquad \text{for all }t\in I, \label{eq:ex21-bc-3} \end{equation} e.g. we can choose the linear interpolation \[ u_{\rm D}(t,x) = \frac{b-x}{b-a}\,u_a(t)+\frac{x-a}{b-a}\,u_b(t). \] Using such $u_{\rm D}$ we write the Dirichlet boundary condition as \begin{equation} \bm{\widetilde{u}}(t) \in W_0^{1,2}(\Omega) +{\bm{u}_{\bf D}}(t). \label{eq:ex21-bc} \end{equation} Define the following operator: \begin{gather} {\rm E}: W^{1,2}(\Omega) \to L^2(\Omega), \label{eq:strain-operator-def1}\\ {\rm E}: \bm{u}\mapsto \frac{d}{d x}\bm{u} \qquad \text{for any } \bm{u}\in W^{1,2}(\Omega). \label{eq:strain-operator-def2} \end{gather} We call {\it strain} the unknown function \[ \widetilde{\varepsilon}\in W^{1, \infty}(I, L^{2}(\Omega)), \] defined for each $t\in I$ by formula \eqref{eq:ae-compatibility-abstract}. In terms of the framework of Definition \ref{def:ae} and Remark \ref{rem:abstract-operator-to-restrict} we set \[ \mathcal{H} = L^2(\Omega), \qquad \mathcal{X}=\mathcal{X}_0=L^2(\Omega), \] \begin{equation} \mathcal{W} = D({\rm E})=W^{1,2}(\Omega), \qquad \mathcal{W}_0 = D({\bf E}) = W^{1,2}_0(\Omega), \label{eq:ex21-E-domain-def} \end{equation} and we define the operator ${\bf E}$ by the formula \eqref{eq:ex11-E-def}. We follow \eqref{eq:ae-abstract-g-def} and set \begin{equation} \bm{g}(t) = {\rm E}\, {\bm{u}_{\bf D}}(t) = \frac{\partial}{\partial x}{u_ {\rm D}}(t,x), \label{eq:ex21-g-def} \end{equation} so that we have set up \eqref{eq:ae-compatibility-abstract}, \eqref{eq:ae-bc-abstract} (which is \eqref{eq:ex21-bc}), and \eqref{eq:ae-1}. It is a standard exercise in functional analysis to check that the operator ${\bf E}$, as we defined it here, is unbounded, densely defined, closed and has closed image.
\subsubsection{Example 3 --- Hooke's law} \label{sssect:ex21-Hooke-law} The elastic property of the rod is characterized by the {\it stiffness}, which is a given function $\bm{C}\in L^{\infty}(\Omega)$, for which there exist $0<c_0\leqslant C_0$ such that \[ c_0\leqslant \bm{C}(x)\leqslant C_0 \qquad \text{for a.a. }x\in \Omega.
\] We define the operator ${\bf C}$, acting as \eqref{eq:ae-C-def}, by the formula \[ {\bf C}: \bm{\widetilde{\varepsilon}} \mapsto \bm{C}\bm{\widetilde{\varepsilon}}, \] and then \eqref{eq:ae-2} is the constitutive law of elasticity, which connects strain $\widetilde{\varepsilon}$ and {\it stress} $\widetilde{\sigma}\in W^{1,\infty}(I,L^2(\Omega))$ for each $t\in I$.
\subsubsection{Example 3 --- statics} Consider a given {\it body force load} \begin{equation} F\in W^{1, \infty}(I, L^{2}(\Omega)), \label{eq:ex21-body-force-load} \end{equation} applied at each $x\in \Omega$ along the direction of the rod. The principle of virtual work (for the continuum mechanics formulations see \cite[Section 2.2.2, pp.~63--65]{Altenbach2018}, \cite[Appendix 1, pp.~589--591]{Anandarajah2011}, \cite[Remark 5.1.1, p.~61]{Necas1981}) yields the following equation of equilibrium of the rod \begin{equation} -\intOm \bm{\widetilde{\sigma}}(t)\, {\rm E}(\delta u)\, dx + \intOm \bm{F}(t)\, \delta u\, dx = 0 \qquad \text{for all }\delta u\in W^{1,2}_0(\Omega). \label{eq:ex21-equilibrium-basic-formulation} \end{equation} On the other hand, when ${\bf E}$ is defined via \eqref{eq:ae-abstract-E-def}, \eqref{eq:ex11-E-def}, \eqref{eq:ex21-E-domain-def}, its adjoint ${\bf D} ={\bf E}^*$ is defined by the formula \begin{equation} {\bf D}: \bm{\sigma} \mapsto -\frac{d}{dx} \bm{\sigma} \label{eq:ex21-D-def} \end{equation} for any $\bm{\sigma}$ from the domain \begin{equation} D({\bf D}) = W^{1,2}(\Omega). \label{eq:ex21-D-domain-def} \end{equation} To prove that ${\bf D}$ is indeed the adjoint, and to check that it is unbounded, densely defined, closed and has closed image, is again a standard exercise in functional analysis (we also refer to \cite[Th. 4.1]{Gudoshnikov2025}, which is a similar statement for a more complicated case of many spatial dimensions). Notice, in particular, that the operator ${\bf D}$ is surjective. \begin{Remark} Naturally, we use the Riesz representation theorem (see e.g. \cite[Th. 5.5, p. 135]{Brezis2011}) to understand the values in \eqref{eq:ex21-D-def} as functionals over $\mathcal{W}_0 = W^{1,2}_0(\Omega)$ and, by extension, over $\mathcal{X}=\mathcal{X}_0=L^{2}(\Omega)$. In particular, due to the density of $W_0^{1,2}(\Omega)$ in $L^2(\Omega)$ we have this time $\mathcal{X}=\mathcal{X}_0$, cf. Remarks \ref{rem:abstract-elasticity-nonunique-ext}, \ref{rem:abstract-operator-to-restrict}, \ref{rem:abstract-operator-to-restrict-about-forces}, \ref{rem:ex11-elasticity-nonunique-ext}, \ref{rem:ex12-elasticity-nonunique-ext}. \end{Remark} For each $t\in I$ define $\bm{f}(t)\in \mathcal{X}_0^*$ by $\bm{F}(t)$ in accordance with the Riesz representation theorem, i.e. \[ \bm{f}: \bm{\delta u} \mapsto \intOm \bm{F}\, \bm{\delta u} \,dx \qquad \text{for any }\bm{\delta u}\in \mathcal{X}_0=\mathcal{X}= L^2(\Omega). \] By plugging this and the formula \eqref{eq:ex11-E-def} into the equation of equilibrium \eqref{eq:ex21-equilibrium-basic-formulation} we get exactly \eqref{eq:ae-3}.
\subsubsection{Example 3 --- the fundamental subspaces and the stress solution} In the continuum model we described above we have the space $\mathcal{H}_{\bf C^{-1}}$ being $L^2(\Omega)$, equipped with the inner product \begin{equation} \la\bm{\tau}_1, \bm{\tau}_2\ra_{{\bf C}^{-1}} = \intOm \bm{\tau}_1\, {\bf C}^{-1}\,\bm{\tau}_2\, dx =\intOm \frac{\tau_1(x)\, \tau_2(x)}{\bm{C}(x)}\, dx. \label{eq:ex21-ip} \end{equation} We denote the space by $L^2_{{\bf C}^{-1}}(\Omega)$.
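Note that, due to the two-sided bound $c_0\leqslant \bm{C}(x)\leqslant C_0$, the norm of $L^2_{{\bf C}^{-1}}(\Omega)$ is equivalent to the usual norm of $L^2(\Omega)$: for every $\bm{\tau}\in L^2(\Omega)$ we have \[ C_0^{-1}\, \|\bm{\tau}\|^2_{L^2(\Omega)}\leqslant \la\bm{\tau}, \bm{\tau}\ra_{{\bf C}^{-1}} \leqslant c_0^{-1}\, \|\bm{\tau}\|^2_{L^2(\Omega)}, \] so $L^2_{{\bf C}^{-1}}(\Omega)$ coincides with $L^2(\Omega)$ as a set, and only the inner product is changed.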
From the definition of ${\mathcal{V}}$ in \eqref{eq:ae-UV-def}, the definition of ${\bf D_C}$ in \eqref{eq:EC-DC-def}, and the formulas \eqref{eq:ex21-D-def}--\eqref{eq:ex21-D-domain-def} for ${\bf D}$ we deduce with the help of the fundamental theorem of calculus for Lebesgue integrals (see e.g. \cite[Section 3.2, pp.~71--84]{Leoni2017}) that the subspace $\mathcal{V}$ is the one-dimensional space of constant functions: \begin{equation} \mathcal{V} = \left\{\bm{\sigma} \in L^2_{{\bf C}^{-1}}(\Omega): \text{ there exists }c\in \mathbb{R} \text{ such that for a.a. }x\in \Omega \text{ we have } \sigma(x)\equiv c \right\}. \label{eq:ex21-space-V} \end{equation} Therefore, its orthogonal complement in the sense of the inner product \eqref{eq:ex21-ip} is the subspace of functions with zero weighted average: \[ \mathcal{U} = \left\{\bm{\sigma}\in L^2_{{\bf C}^{-1}}(\Omega): \intOm \frac{1}{\bm{C}(x)}\,\sigma(x)\, dx=0\right\}. \] To find the formulas for the operators of projection \eqref{eq:ae-PU-PV-def} we take an arbitrary $\bm{\sigma}\in L^2_{{\bf C}^{-1}}(\Omega)$ and solve for $c\in \mathbb{R}$ such that \[ \intOm \frac{\sigma(x)\lambda}{\bm{C}(x)} dx= \intOm \frac{c \lambda}{\bm{C}(x)} dx\qquad \text{for any }\lambda\in \mathbb{R}, \] from which it follows that \[ c=\frac{\intOm \frac{\sigma(x)}{\bm{C}(x)} dx}{\intOm \frac{1}{\bm{C}(x)} dx}. \] Thus the operators of projection are \[ (P_{\mathcal{V}}\, \bm{\sigma})(x) \equiv w_0 \intOm \frac{1}{\bm{C}(y)}\,\sigma(y)\, dy, \qquad (P_{\mathcal{U}}\, \bm{\sigma})(x) = \sigma(x)-w_0 \intOm \frac{1}{\bm{C}(y)}\,\sigma(y)\, dy, \] where $w_0\in \mathbb{R}$ is the constant \begin{equation} w_0 = \frac{1}{\intOm\frac{1}{\bm{C}(y)}\, dy}. \label{eq:w0-const-def} \end{equation} It follows that the operator $G$ from \eqref{eq:ae-G-def} is \begin{equation} (G\, \bm{\varepsilon})(x) \equiv w_0 \intOm \varepsilon(y)\, dy \qquad \text{for any }\bm{\varepsilon}\in \mathcal{H} =L^2(\Omega). \label{eq:ex21-operator-G-def} \end{equation} To find the representation of the operator $Q$ in terms of a given $\bm{F}\in L^2(\Omega)$ and the corresponding $\bm{f}\in \mathcal{X}_0^*$ we must find $\bm{\sigma}\in \mathcal{U}$ satisfying \eqref{eq:ae-3} (since the values of ${\bf D}$ and ${\bf D_C}$ coincide by the construction, see \eqref{eq:EC-DC-def}). This means that \[ \intOm \frac{1}{\bm{C}(x)}\,\sigma(x)\, dx=0 \] and, by \eqref{eq:ae-3}, \eqref{eq:ex21-D-def} and the fundamental theorem of calculus for Lebesgue integrals, we have that \[ \sigma(x) - \sigma(a) = -\int\limits_a^x F(y)\, dy\qquad\text{for any } x\in \Omega=(a,b), \] i.e. \begin{equation} \sigma(x) =\sigma(a) -\int\limits_a^x F(y)\, dy. \label{eq:ex21-stress-intermediate-expression} \end{equation} Therefore, \[ \intOm \frac{1}{\bm{C}(x)}\left(\sigma(a) -\int\limits_a^x F(y)\, dy\right) dx=0, \] i.e. \[ \sigma(a)\intOm \frac{1}{\bm{C}(x)}\,dx= \intOm \frac{1}{\bm{C}(x)}\int\limits_a^x F(y)\, dy\, dx \] and, renaming the integration variable $x$ to $z$, we can compute \[ \sigma(a) = w_0 \intOm \frac{1}{\bm{C}(z)}\int\limits_a^z F(y)\, dy\, dz. \] Thus, using \eqref{eq:ex21-stress-intermediate-expression}, we can write that \begin{equation} (Q\bm{f})(x)= \sigma(x) =w_0 \intOm \frac{1}{\bm{C}(z)}\int\limits_a^z F(y)\, dy\, dz -\int\limits_a^x F(y)\, dy.
\label{eq:ex21-operator-Q-def} \end{equation} We plug \eqref{eq:ex21-g-def}, \eqref{eq:ex21-operator-G-def} and \eqref{eq:ex21-operator-Q-def} into \eqref{eq:abstract_linear_solution} to write the stress solution for elasticity in terms of given time-dependent displacement boundary data \eqref{eq:ex21-bc-1}--\eqref{eq:ex21-bc-3} and a time-dependent longitudinal body force load \eqref{eq:ex21-body-force-load} in the closed form \begin{equation} \begin{aligned} \widetilde{\sigma}(t,x)& = \left(G\, \bm{g}(t)\right)(x) + \left(Q\, \bm{f}(t)\right)(x)=\\[1mm] &= w_0 \intOm \frac{d}{dy} u_{\rm D}(t,y)\, dy + w_0 \intOm \frac{1}{\bm{C}(z)}\int\limits_a^z F(t,y)\, dy\, dz -\int\limits_a^x F(t,y)\, dy = \\[1mm] &= w_0 \left(u_b (t)- u_a (t)+ \intOm \frac{1}{\bm{C}(z)}\int\limits_a^z F(t,y)\, dy\, dz\right) - \int\limits_a^x F(t,y)\, dy. \end{aligned} \label{eq:ex21-linear-solution} \end{equation} For the reader's convenience we collect the quantities from all three examples of linear elasticity in Table \ref{tab:linear_elasticity_examples}. \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|c|} \hline \textbf{Quantity} & \textbf{Example 1 (Fig. \ref{fig:discrete-models-elastic} {\it a})} & \textbf{Example 2 (Fig. \ref{fig:discrete-models-elastic} {\it b})} & \textbf{Example 3 (Fig. \ref{fig:continuous-rod-model-elastic})}\\ \hline $\mathcal{X}$ & $\mathbb{R}^3$ & $\mathbb{R}^4$ & $L^2(\Omega), \, \Omega=(a,b)$ \\ \hline $\mathcal{H}$ & $\mathbb{R}^2$ & $\mathbb{R}^3$ & $L^2(\Omega)$\\ \hline $\mathcal{W}$ & $\mathbb{R}^3$ & $\mathbb{R}^4$ &$W^{1,2}(\Omega)$ \\ \hline ${\rm E}:\mathcal{W}\to \mathcal{H}$ & $\begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}$ & $\begin{pmatrix} -1 & 1 & 0 & 0\\ 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 1 \end{pmatrix}$ & $ \bm{u}\mapsto \frac{d}{d x}\bm{u}$ \\ \hline $\mathcal{W}_0$ & $\begin{array}{c} {\rm Ker}\, R, \\ R=\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix} \end{array} $ & $\begin{array}{c} {\rm Ker}\, ({R} {\rm E}),\\ {R} = \begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1 \end{pmatrix} \end{array} $ & $W_0^{1,2}(\Omega)$ \\ \hline $\mathcal{X}_0 =\overline{\mathcal{W}_0}$ & ${\rm Ker}\, R$ & ${\rm Ker}\, ({R} {\rm E})$ & $L^2(\Omega)$\\ \hline ${\bf E}: \mathcal{W}_0 \subset \mathcal{X} \to \mathcal{H}$ & ${\rm E} \text{ restricted to } \mathcal{W}_0$ & ${\rm E} \text{ restricted to } \mathcal{W}_0$ & ${\rm E} \text{ restricted to } \mathcal{W}_0$ \\ \hline ${\bf D}: D({\bf D})\subset \mathcal{H}\to \mathcal{X}_0^* $ & $\begin{array}{c} \bm{\sigma} \mapsto \text{the functional}\\\left(\bm{u}\in \mathcal{X}_0\mapsto \left({\rm E}^\top \bm{\sigma}\right)\cdot \bm{u}\right)\end{array}$ & $\begin{array}{c} \bm{\sigma} \mapsto \text{the functional}\\\left(\bm{u}\in \mathcal{X}_0\mapsto \left({\rm E}^\top \bm{\sigma}\right)\cdot \bm{u}\right)\end{array}$ & $\bm{\sigma}\mapsto -\frac{d}{d x}\bm{\sigma}$ \\ \hline $D({\bf D})$ & $\mathbb{R}^2$ & $\mathbb{R}^3$ & $W^{1,2}(\Omega)$\\ \hline ${\bf C}:\mathcal{H}\to \mathcal{H}$ & $\begin{pmatrix} k_1 & 0\\[1mm] 0 & k_2 \end{pmatrix}$ & $\begin{pmatrix} k_1 & 0 & 0\\[1mm] 0 & k_2 & 0\\[1mm] 0 & 0 & k_3 \end{pmatrix}$ & $\bm{C}\in L^\infty(\Omega)$ \\ \hline $\begin{array}{c}\mathcal{U}={\rm Im}\, {\bf E_C}=\\={\rm Im}\, {\bf C E}\subset \mathcal{H}_{\bf C^{-1}}\end{array}$&${\rm Im}\, \begin{pmatrix} k_1 \\ -k_2 \end{pmatrix}$ & $ {\rm Im}\, \begin{pmatrix} -k_1 \\ k_2 \\ -k_3 \end{pmatrix}$ & $ \begin{array}{c}\left\{\bm{\sigma}\in L^2_{{\bf C}^{-1}}(\Omega):\vphantom{\intOm}\right.\\
\left. \intOm \frac{1}{\bm{C}(x)}\,\sigma(x)\, dx=0\right\}\end{array}$ \\ \hline $\begin{array}{c}\mathcal{V}={\rm Ker}\, {\bf D_C}=\\ ={\rm Ker}\, {\bf D}\subset \mathcal{H}_{\bf C^{-1}}\end{array} $& ${\rm Im}\, \begin{pmatrix} 1 \\ 1\end{pmatrix}$ & $ {\rm Im}\, \begin{pmatrix} 1 & 0 \\1 & 1 \\ 0 & 1\end{pmatrix}$ & $\begin{array}{c}\left\{\bm{\sigma} \in L^2_{{\bf C}^{-1}}(\Omega): \exists c\in \mathbb{R}\right.\\\left. \text{for a.a. }x\in \Omega \quad \sigma(x)\equiv c \right\}\end{array}$ \\ \hline $\bm{g}(t)$& $\begin{pmatrix} 0 \\ l(t)\end{pmatrix}$ & $\frac{1}{2}\begin{pmatrix}1 & -1 \\ 1 & 1 \\ -1 & 1\end{pmatrix} \begin{pmatrix} l_1(t) \\ l_2(t)\end{pmatrix}$&$ \frac{\partial}{\partial x}{u_ {\rm D}}(t,x)$ \\ \hline $\begin{array}{c}\text{Stress solution }\\ \bm{\widetilde{\sigma}}(t)= G\, {\bm g}(t) + Q\,{\bm f}(t) \end{array}$ &\text{formula \eqref{eq:ex11-linear-solution}}& \text{formula \eqref{eq:ex12-linear-solution}} & \text{formula \eqref{eq:ex21-linear-solution}}. \\ \hline \end{tabular} \end{center} \caption{The quantities in the examples of linear elasticity of the current paper.} \label{tab:linear_elasticity_examples} \end{table} \newpage \section{The sweeping process framework for elasticity-perfect plasticity} \label{sect:perfect-plasticity} \subsection{The abstract geometric framework and the abstract sweeping process} Now we consider the quasistatic evolution with elasticity and perfect plasticity combined, see Definition \ref{def:aepp} and Fig. \ref{fig:elasticity-pp-scheme} below. \begin{Definition} \label{def:aepp} Let spaces $\mathcal{H}, \mathcal{X}, \mathcal{W}_0$, operators ${\bf E}, {\bf D}, {\bf C}$ and functions $\bm{g}, \bm{f}$ be as in Section \ref{ssect:adjoint-operatorsED} and Definition \ref{def:ae}, but we require $\mathcal{H}$ to be separable. In addition, let us be given a bounded closed convex nonempty set $\Sigma\subset \mathcal{H}$. We say, that the unknown variables \begin{equation} \varepsilon, \varepsilon_{\rm el}, \varepsilon_{\rm p}, \sigma \in W^{1, \infty}(I, \mathcal{H}) \label{eq:aepp-unknowns} \end{equation} solve the {\it abstract problem of quasi-static evolution in elasticity-perfect plasticity} if they satisfy \begin{align} \bm{\varepsilon}& \in {\rm Im}\, {\bf E}+\bm{g} , \label{eq:aepp-1} \tag{EPP1}\\ \bm{\varepsilon}& = \bm{\varepsilon_{\bf el}}+\bm{\varepsilon_{\bf p}}, \label{eq:aepp-2} \tag{EPP2}\\ \bm{\sigma}& = {\bf C} \, \bm{\varepsilon_{\bf el}}, \label{eq:aepp-3} \tag{EPP3}\\ \frac{d}{dt} \bm{\varepsilon_{\bf p}}& \in N_\Sigma (\bm{\sigma}), \label{eq:aepp-4} \tag{EPP4}\\ \bm{\sigma}\in D(\bf D)\quad \text{ and }\quad {\bf D}\, \bm{\sigma}&=\bm{f}, \label{eq:aepp-5} \tag{EPP5} \end{align} and the initial condition \[ (\bm{\varepsilon}(0), \bm{\varepsilon}_{\bf el}(0), \bm{\varepsilon}_{\bf p}(0), \bm{\sigma}(0)) =(\bm{\varepsilon}_{\bf 0}, \bm{\varepsilon}_{\bf el0}, \bm{\varepsilon}_{\bf p0}, \bm{\sigma}_{\bf 0}) \] with some given right-hand side from $\mathcal{H}^4$ satisfying \eqref{eq:aepp-1}--\eqref{eq:aepp-3}, \eqref{eq:aepp-5} at $t=0$ and \begin{equation} {\bm \sigma}_{\bf 0} \in \Sigma. \label{eq:aepp-sigma-ic-compatible} \end{equation} Equations \eqref{eq:aepp-1}--\eqref{eq:aepp-3}, \eqref{eq:aepp-5} are required to hold for all $t\in I$. Equation \eqref{eq:aepp-4} is required to hold for a.a. $t\in I$, as its left-hand side is the following vector-valued derivative (see Section \ref{ssect:prelim-bochner-sobolev}) \[ \frac{d}{dt} \varepsilon_{\rm p}\in L^\infty(I,\mathcal{H}). 
\] \noindent In turn, the right-hand side of \eqref{eq:aepp-4} is the outward normal cone to $\Sigma$ at $\bm{\sigma}$, as defined by formula \eqref{eq:nc-abstract-def}. \end{Definition} \begin{figure}[H]\center \includegraphics{Fig-elasticity-pp-scheme.pdf} \caption{\footnotesize Schematic representation of the problem of Definition \ref{def:aepp}. The unknown variables are indicated by blue color. In the problem of Definition \ref{def:aepp} we are only looking for the unknowns $\varepsilon, \varepsilon_{\rm el}, \varepsilon_{\rm p}$ and $\sigma$. Red rectangles indicate the constitutive relations. } \label{fig:elasticity-pp-scheme} \end{figure} We are going to write the abstract problem of Definition \ref{def:aepp} in terms of differential inclusions. In the current setting we have all the data, required by Definition \ref{def:ae-weighted}, so that we have the space $\mathcal{H}_{{\bf C}^{-1}}$, the operators ${\bf E_C}, {\bf D_C}$ and the fundamental subspaces $\mathcal{U}, \mathcal{V}$ defined accordingly. Moreover, from Proposition \ref{prop:elasticity-affine-planes} we have the operators $G, Q$, and from Theorem \ref{th:elasticity-solution} we have the stress solution for elasticity $\widetilde{\sigma}$, which we will treat further as a {\it known} function of time, defined by \eqref{eq:abstract_linear_solution}. In space $\mathcal{H}_{{\bf C}^{-1}}$ consider the outward normal cone \eqref{eq:nc-abstract-def} in the sense of the inner product \eqref{eq:weighted-ip}, which, for a closed convex nonempty set ${\mathcal C}\subset\mathcal{H}_{{\bf C}^{-1}}$ and a point $\bm{\tau}\in \mathcal{C}$, is \begin{equation} N^{{\bf C}^{-1}}_{\mathcal C}(\bm{\tau})=\left\{\bm{y}\in \mathcal{H}_{{\bf C}^{-1}}: \text{ for any }\bm{c}\in \mathcal{C} \text{ we have } \la\bm{y}, \bm{c}-\bm{\tau}\ra_{{\bf C}^{-1}}\leqslant 0 \right\}. \label{eq:nc-C-inv-def} \end{equation} \noindent On top of Proposition \ref{prop:nc-abstract-properties}, such normal cone has the following properties. \begin{Proposition} \label{prop:c-inv-normal-cone-properties} Formula \eqref{eq:nc-C-inv-def} yields the following properties: \begin{enumerate}[{\it i)}] \item For any $\bm{\tau}\in \mathcal{C}$ \begin{equation} N^{{\bf C}^{-1}}_{\mathcal C}(\bm{\tau}) = {\bf C} N_{\mathcal C}(\bm{\tau}), \label{eq:regular-and-weighted-nc} \end{equation} \item For any $\bm{v}\in \mathcal{V}$ we have \begin{equation} \mathcal{U}= N^{{\bf C}^{-1}}_{\mathcal{V}}(\bm{v}) \label{eq:orthogonal-space-nc} \end{equation} \end{enumerate} \end{Proposition} Now we are ready to derive the differential inclusions to solve the problem of Definition \ref{def:aepp}. 
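Before doing so, let us illustrate Proposition \ref{prop:c-inv-normal-cone-properties} with the data of Example 1 (see Table \ref{tab:linear_elasticity_examples}); this elementary computation is given only as an illustration and is not used below. There $\mathcal{V}={\rm Im}\begin{pmatrix}1\\1\end{pmatrix}$ is a linear subspace of $\mathbb{R}^2$, so for any $\bm{v}\in \mathcal{V}$ the usual outward normal cone is the orthogonal complement $N_{\mathcal{V}}(\bm{v})={\rm Im}\begin{pmatrix}1\\-1\end{pmatrix}$, and \eqref{eq:regular-and-weighted-nc} gives \[ N^{{\bf C}^{-1}}_{\mathcal{V}}(\bm{v}) = {\bf C}\, {\rm Im}\begin{pmatrix}1\\-1\end{pmatrix} = {\rm Im}\begin{pmatrix}k_1\\-k_2\end{pmatrix} = \mathcal{U}, \] in agreement with \eqref{eq:orthogonal-space-nc}.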
\begin{Theorem} \label{th:aepp-to-sweeping-process} Functions \eqref{eq:aepp-unknowns} solve the abstract problem of quasi-static evolution in elasticity-perfect plasticity if and only if the unknown \begin{equation} y = \sigma-\widetilde{\sigma}\in W^{1,\infty}(I, \mathcal{H}_{{\bf C}^{-1}}) \label{eq:yielding-to-sweeping} \end{equation} and the unknown $\varepsilon \in W^{1,\infty}(I, \mathcal{H})$ solve the differential inclusions \begin{numcases}{} -\frac{d}{dt}\bm{y}\in N^{{\bf C}^{-1}}_{\mathcal{C}(t)}(\bm{y}), \label{eq:sp-elastoplastic}\\ \frac{d}{dt} \bm{\varepsilon} \in {\bf C}^{-1}\left(\left(N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y})+ \frac{d}{dt} \bm{y}\right)\cap \mathcal{U}+ \frac{d}{dt} \bm{\widetilde{\sigma}}(t)\right) \label{eq:inclusion2-elastoplastic} \end{numcases} with the initial conditions \begin{align} \bm{y}(0) &= \bm{\sigma}_{\bf 0}- \bm{\widetilde{\sigma}}(0)\in \mathcal{C}(0), \label{eq:diff-incs-ic-y}\\ \bm{\varepsilon}(0)& = \bm{\varepsilon}_{\bf 0}\in {\bf C}^{-1}(\mathcal{U}+\bm{\widetilde{\sigma}}(0)), \label{eq:diff-incs-ic-eps} \end{align} where the moving set is \begin{equation} \mathcal{C}(t)=\left(\Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V}. \label{eq:aepp-sp-moving-set} \end{equation} \end{Theorem} \begin{Remark} Notice that the differential inclusion \eqref{eq:sp-elastoplastic} is a sweeping process which does not depend on $\varepsilon$; hence a unique $y$ can be found whenever the sweeping process is solvable. Moreover, the right-hand side of \eqref{eq:inclusion2-elastoplastic} is independent of $\varepsilon$ as well, and, if we already know $y$ from \eqref{eq:sp-elastoplastic}, \eqref{eq:diff-incs-ic-y}, we could find $\varepsilon$ by integrating a {\it measurable selection} from the right-hand side of \eqref{eq:inclusion2-elastoplastic}. We will discuss the existence and integrability of such a selection for the solution of the sweeping process \eqref{eq:sp-elastoplastic} after the proof of Theorem \ref{th:aepp-to-sweeping-process}. \end{Remark} \noindent{\bf Proof of Theorem \ref{th:aepp-to-sweeping-process}.} Since \eqref{eq:aepp-5} and \eqref{eq:ae-3} are the same equation for the unknown $\bm{\sigma}$, by Proposition \ref{prop:elasticity-affine-planes} {\it ii)} we have equation \eqref{eq:aepp-5} equivalent to \begin{equation*} \bm{\sigma}\in \mathcal{V}+ Q\,\bm{f}. \end{equation*} Notice that $G$ takes values in $\mathcal{V}$; therefore we can write \begin{equation*} \bm{\sigma}\in \mathcal{V}+ Q\,\bm{f} + G\, \bm{g}, \end{equation*} i.e. \begin{equation} \bm{\sigma}- \bm{\widetilde{\sigma}}\in \mathcal{V}. \label{eq:stress-constraint} \end{equation} In particular, together with \eqref{eq:aepp-sigma-ic-compatible} this implies the compatibility of the initial value \eqref{eq:diff-incs-ic-y}. In turn, since \eqref{eq:aepp-1} is also the same as \eqref{eq:ae-1}, by Proposition \ref{prop:elasticity-affine-planes} {\it i)} we have equation \eqref{eq:aepp-1} equivalent to \begin{equation*} {\bf C}\, \bm{\varepsilon}\in \mathcal{U}+ G \,\bm{g}, \end{equation*} and, since $Q$ takes values in $\mathcal{U}$, \[ {\bf C}\, \bm{\varepsilon}\in \mathcal{U}+ G \,\bm{g} + Q\, \bm{f}, \] i.e. \begin{equation} {\bf C}\, \bm{\varepsilon}\in \mathcal{U}+ \bm{\widetilde{\sigma}}.
\label{eq:compatibility-affine-constraint-proof} \end{equation} This yields the compatibility of the initial value \eqref{eq:diff-incs-ic-eps} and the equation on the strain rate \begin{equation} \frac{d}{dt} {\bf C}\, \bm{\varepsilon}\in \mathcal{U}+ \frac{d}{dt} \bm{\widetilde{\sigma}}. \label{eq:rate-strain-constraint} \end{equation} On the other hand, from \eqref{eq:aepp-2}--\eqref{eq:aepp-3} we get that \[ {\bf C}\, \bm{\varepsilon} = \bm{\sigma}+ {\bf C} \, \bm{\varepsilon}_{\bf p}, \] which also yields the following rate equation for a.a. $t\in I$ \begin{equation} -\frac{d}{dt} \bm{\sigma} = {\bf C} \,\frac{d}{dt} \bm{\varepsilon}_{\bf p} -\frac{d}{dt} {\bf C}\, \bm{\varepsilon}, \label{eq:rate-additive-decomposition} \end{equation} where ${\bf C}$ commutes with $\frac{d}{dt}$ due to Proposition \ref{prop:prelim-classical-derivatives-ae} and the continuity of ${\bf C}$. Observe that by applying ${\bf C}$ to \eqref{eq:aepp-4} and using the expression \eqref{eq:regular-and-weighted-nc} at the right-hand side, we get \begin{equation} {\bf C} \frac{d}{dt} \bm{\varepsilon}_{\bf p} \in N^{{\bf C}^{-1}}_{\Sigma}(\bm{\sigma}). \label{eq:plastic-flow-rule-in-proof} \end{equation} Substitute \eqref{eq:rate-strain-constraint} and \eqref{eq:plastic-flow-rule-in-proof} into \eqref{eq:rate-additive-decomposition}: \[ -\frac{d}{dt} \bm{\sigma}\in N^{{\bf C}^{-1}}_{\Sigma}(\bm{\sigma}) - \mathcal{U}- \frac{d}{dt} \bm{\widetilde{\sigma}}. \] Recall that $\mathcal{U}=-\mathcal{U}$. Therefore, \[ -\left(\frac{d}{dt} \bm{\sigma}- \frac{d}{dt} \bm{\widetilde{\sigma}} \right)\in N^{{\bf C}^{-1}}_{\Sigma}(\bm{\sigma}) + \mathcal{U}. \] We use \eqref{eq:argument-sum-nc} in the normal cone term and substitute $\mathcal{U}$ as in \eqref{eq:orthogonal-space-nc} using \eqref{eq:stress-constraint}: \[ -\left(\frac{d}{dt} \bm{\sigma}- \frac{d}{dt} \bm{\widetilde{\sigma}} \right)\in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}}(\bm{\sigma}-\bm{\widetilde{\sigma}}) + N^{{\bf C}^{-1}}_{\mathcal{V}}(\bm{\sigma}-\bm{\widetilde{\sigma}}). \] Next, we use \eqref{eq:nc-subadditivity} to get \[ -\frac{d}{dt} \left( \bm{\sigma}- \bm{\widetilde{\sigma}} \right)\in N^{{\bf C}^{-1}}_{\left(\Sigma-\bm{\widetilde{\sigma}}\right)\cap \mathcal{V}}(\bm{\sigma}-\bm{\widetilde{\sigma}}) \] and substitute \eqref{eq:yielding-to-sweeping} to obtain the sweeping process \eqref{eq:sp-elastoplastic}. On the other hand, we can express the term with $\bm{\varepsilon}$ from \eqref{eq:rate-additive-decomposition}: \[ \frac{d}{dt} {\bf C}\, \bm{\varepsilon} = \frac{d}{dt} \bm{\sigma}+ {\bf C} \,\frac{d}{dt} \bm{\varepsilon}_{\bf p}, \] substitute there \eqref{eq:plastic-flow-rule-in-proof}: \begin{equation} \frac{d}{dt} {\bf C}\, \bm{\varepsilon} \in \frac{d}{dt} \bm{\sigma}+ N^{{\bf C}^{-1}}_{\Sigma}(\bm{\sigma}), \label{eq:additive-dec-rate-nc-proof} \end{equation} and insert there $\bm{\widetilde{\sigma}}$: \[ \frac{d}{dt} {\bf C}\, \bm{\varepsilon} \in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}}(\bm{\sigma}-\bm{\widetilde{\sigma}})+ \frac{d}{dt} (\bm{\sigma}-\bm{\widetilde{\sigma}})+ \frac{d}{dt} \bm{\widetilde{\sigma}}. 
\] Combine this with \eqref{eq:rate-strain-constraint} to obtain \begin{equation} \frac{d}{dt} {\bf C}\, \bm{\varepsilon} \in \left(N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}}(\bm{\sigma}-\bm{\widetilde{\sigma}})+ \frac{d}{dt} (\bm{\sigma}-\bm{\widetilde{\sigma}})\right)\cap \mathcal{U}+ \frac{d}{dt} \bm{\widetilde{\sigma}}, \label{eq:near-ready-sp-proof} \end{equation} and then substitute \eqref{eq:yielding-to-sweeping} and apply ${\bf C}^{-1}$ to obtain the inclusion \eqref{eq:inclusion2-elastoplastic}. Assume now that we have a solution \[ (y, \varepsilon)\in W^{1,\infty}(I, \mathcal{H}_{{\bf C}^{-1}})\times W^{1,\infty}(I, \mathcal{H}) \] of \eqref{eq:sp-elastoplastic}--\eqref{eq:diff-incs-ic-eps}. Define $\sigma$ via $y$ according to \eqref{eq:yielding-to-sweeping}. Apply ${\bf C}$ to \eqref{eq:inclusion2-elastoplastic} and substitute \eqref{eq:yielding-to-sweeping} to get \eqref{eq:near-ready-sp-proof}, which is equivalent to \eqref{eq:additive-dec-rate-nc-proof} and \eqref{eq:rate-strain-constraint} together. Define $\varepsilon_{\rm el}$ and $\varepsilon_{\rm p}$ via both $\varepsilon$ and $\sigma$ according to \eqref{eq:aepp-2}--\eqref{eq:aepp-3}. We have that \eqref{eq:additive-dec-rate-nc-proof}, \eqref{eq:aepp-2} and \eqref{eq:regular-and-weighted-nc} imply \eqref{eq:aepp-4}. In turn, \eqref{eq:rate-strain-constraint} and the initial condition \eqref{eq:diff-incs-ic-eps} imply \eqref{eq:compatibility-affine-constraint-proof}, which is equivalent to \eqref{eq:aepp-1} by Proposition \ref{prop:elasticity-affine-planes} {\it i)}. Finally, from \eqref{eq:sp-elastoplastic} we have \eqref{eq:stress-constraint}, which means \eqref{eq:aepp-5} by Proposition \ref{prop:elasticity-affine-planes} {\it ii)}. $\blacksquare$ Now let us discuss the existence of solutions to \eqref{eq:sp-elastoplastic} and \eqref{eq:inclusion2-elastoplastic}. \begin{Theorem} \label{th:aepp-well-posedness-safe-load-strict} In the setting of Theorem \ref{th:aepp-to-sweeping-process} assume that Slater's condition holds, i.e. \begin{equation} \left({\rm int}\, \Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V} \neq \varnothing \qquad \text{for all }t\in I. \label{eq:safe-load-strict} \end{equation} Then \begin{enumerate}[{\it i)}] \item \label{enum:th:aepp-well-posedness-sp} the sweeping process \eqref{eq:sp-elastoplastic}, \eqref{eq:diff-incs-ic-y}, \eqref{eq:aepp-sp-moving-set} satisfies the conditions of Theorem \ref{th:abstract-sp-existence} and it has a unique solution $y\in W^{1,\infty}(I, \mathcal{H}_{{\bf C}^{-1}})$, \item \label{enum:th:aepp-well-posedness-nonempty-rhs} the right-hand side of the differential inclusion \eqref{eq:inclusion2-elastoplastic} is a nonempty set for a.a. $t\in I$, \item \label{enum:th:aepp-well-posedness-estimate-rhs} there exists at least one $\varepsilon \in W^{1,\infty}(I, \mathcal{H})$ which solves \eqref{eq:inclusion2-elastoplastic}, \eqref{eq:diff-incs-ic-eps}. \end{enumerate} \end{Theorem} \noindent {\bf Proof.} \begin{enumerate}[{\it i)}] \item Indeed, for each $t\in I$ we have the moving set \eqref{eq:aepp-sp-moving-set} being closed, convex and nonempty. Moreover, the set $\Sigma-\bm{\widetilde{\sigma}}(t)$ is a time-dependent translation of $\Sigma$, therefore it is Lipschitz-continuous with the constant $\left\|\widetilde{\sigma}\right\|_{W^{1,\infty}(I, \mathcal{H}_{{\bf C}^{-1}})}$ (see \eqref{eq:ae-time-regularity}) with respect to Hausdorff distance.
In turn, condition \eqref{eq:safe-load-strict} and the boundedness of $\Sigma$ are sufficient for the general argument of \cite[Section 5.c, pp. 274--278]{Moreau1973}, \cite{Moreau1973-2}, \cite{Moreau1975}. From there we deduce that $\mathcal{C}(t)$, being the intersection \eqref{eq:aepp-sp-moving-set}, is also Lipschitz-continuous with respect to Hausdorff distance. We will also discuss other ways to justify Lipschitz continuity of $\mathcal{C}(t)$, which are suitable for particular examples. Observe that all of the conditions of Theorem \ref{th:abstract-sp-existence} hold for the sweeping process \eqref{eq:sp-elastoplastic}, \eqref{eq:diff-incs-ic-y}, \eqref{eq:aepp-sp-moving-set}, which means that statement {\it i)} of the current theorem also holds. \item Condition \eqref{eq:safe-load-strict} now plays the role of \eqref{eq:nontrivial-intersection-condition} and it guarantees that for all $t\in I$ \begin{equation} N^{{\bf C}^{-1}}_{\left(\Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V}}(\bm{y}(t)) = N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t))+ N^{{\bf C}^{-1}}_{\mathcal{V}}(\bm{y}(t)). \label{eq:nc-additivity-in-aepp-proof} \end{equation} Hence for a.a. $t\in I$ we have \[ -\frac{d}{dt}\bm{y}(t)\in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) + \mathcal{U}. \] Thus for a.a. $t\in I$ there exists an element $\bm{\omega}(t)\in -\mathcal{U}=\mathcal{U}$ such that \[ \bm{\omega}(t) \in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) + \frac{d}{dt}\bm{y}(t), \] i.e. \begin{equation} \bm{\omega}(t) \in \left(N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) + \frac{d}{dt}\bm{y}(t)\right)\cap \mathcal{U}, \label{eq:aepp-omega-inclusion} \end{equation} from which the statement about \eqref{eq:inclusion2-elastoplastic} follows. \item It is enough to show that there exists a function $\omega\in L^{\infty}(I, \mathcal{H}_{{\bf C}^{-1}})$ such that \eqref{eq:aepp-omega-inclusion} holds. Indeed, then we can set \begin{equation} \bm{\varepsilon} (t) = \bm{\varepsilon_0}+{\bf C}^{-1}\left(\bm{\widetilde{\sigma}}(t)-\bm{\widetilde{\sigma}}(0)+\int\limits_0^t \bm{\omega}(s)\, ds\right), \label{eq:aepp-eps-from-omega} \end{equation} which then belongs to $W^{1, \infty}(I, \mathcal{H})$, since so does $\widetilde{\sigma}$ by Remark \ref{rem:ae-time-regularity}. To find such $\omega\in L^{\infty}(I, \mathcal{H}_{{\bf C}^{-1}})$ we follow the ideas originally published in \cite[Section 6.d, pp. 315--318]{Moreau1973}, \cite[Section 2.e, pp. 12.14--12.18]{Moreau1973-3}, \cite[Chapter III]{Castaing1975}, which can also be found in \cite[pp. 20--23]{Kunze2000}. \noindent\underline{Claim: The right-hand side of \eqref{eq:aepp-omega-inclusion} admits a measurable selection.} The topic of selections from measurable multifunctions is present in many monographs; here we use \cite[Chapter 8]{Aubin2009}. Denote by $J\subset I$ the set of full measure where the right-hand side of \eqref{eq:aepp-omega-inclusion} is well-defined and nonempty.
By Proposition \ref{prop:nc-equivalent} \ref{enum:abstract-nc-in-proj-notation}, the normal cone there can be written as \begin{equation} N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) = \left\{\bm{\tau}\in \mathcal{H}_{{\bf C}^{-1}}: {\rm proj}^{{\bf C}^{-1}}\left(\bm{\tau}+\bm{y}(t), \Sigma-\bm{\widetilde{\sigma}}(t)\right)=\bm{y}(t) \right\}, \label{eq:aepp-proof-nc-measurable} \end{equation} where by ${\rm proj}^{{\bf C}^{-1}}(\bm{\tau}, \mathcal{C})$ we denote the metric projection, in the sense of the inner product \eqref{eq:weighted-ip}, of a point $\bm{\tau}$ onto a closed convex nonempty set $\mathcal{C}$. Such a projection is nonexpansive with respect to $\bm{\tau}$ in the corresponding norm, hence it is continuous. Overall, the mapping \begin{align*} \pi: J\times \mathcal{H}_{{\bf C}^{-1}}& \to \mathcal{H}_{{\bf C}^{-1}},\\ \pi: (t, \bm{\tau})& \mapsto {\rm proj}^{{\bf C}^{-1}}(\bm{\tau}+\bm{y}(t), \Sigma-\bm{\widetilde{\sigma}}(t)) \end{align*} is Carath\'eodory \cite[p. 311]{Aubin2009}, and, by the Inverse Image Theorem \cite[Th. 8.2.9, p. 315]{Aubin2009} used with $g=\pi$, $F:t\mapsto \mathcal{H}_{{\bf C}^{-1}}$ and $G:t \mapsto \{\bm{y}(t)\}$, the normal cone \eqref{eq:aepp-proof-nc-measurable} is a measurable multimapping \cite[Def. 8.1.1, p. 307]{Aubin2009} from the domain $J$ endowed with Lebesgue measure. Moreover, the mapping $t\mapsto \frac{d}{dt}\bm{y}(t)$ is measurable by the choice of $y$. Therefore, by \cite[Th. 8.2.8, p. 314]{Aubin2009} the sum $N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) + \frac{d}{dt}\bm{y}(t)$ is also a measurable multimapping from $J$. By \cite[Th. 8.2.4, p. 312]{Aubin2009} the intersection, which constitutes the right-hand side of \eqref{eq:aepp-omega-inclusion}, is also a measurable multimapping on $J$, thus, by \cite[Th. 8.1.3, p. 308]{Aubin2009}, there exists a measurable selection $\omega$. Finally, we must estimate $\|\omega\|_{L^{\infty}(I,\mathcal{H}_{{\bf C}^{-1}})}<\infty$, for which we need to prove the following. \noindent \underline{Claim: under condition \eqref{eq:safe-load-strict} the following statement is true:} \begin{multline} \text{there exists}\quad \rho>0 \quad \text{such that for any}\quad t\in I \quad\text{there exists} \quad \bm{h}_t\in\mathcal{C}(t) \\ \text{such that} \quad B_\rho (\bm{h}_t)\subset \Sigma -\bm{\widetilde{\sigma}}(t), \label{eq:aepp-safe-load-bound-claim} \end{multline} where $B_\rho (\bm{h}_t)$ is an open ball in $\mathcal{H}_{{\bf C}^{-1}}$ with a center $\bm{h}_t$ and a radius $\rho$, and $\mathcal{C}(t)$ is as in \eqref{eq:aepp-sp-moving-set}. Assume the contrary, that \begin{multline*} \text{for any}\quad \rho>0 \quad \text{there exists}\quad t\in I \quad \text{such that for all} \quad \bm{h}\in\mathcal{C}(t) \\ \text{ there exists }\quad \bm{\tau}_t\in B_\rho (\bm{h}) \quad \text{such that}\quad \bm{\tau}_t \notin \Sigma -\bm{\widetilde{\sigma}}(t). \end{multline*} Take a sequence $\rho_i\to 0$, $i\in \mathbb{N}$, together with the corresponding values $t_i\in I$ from the statement above, and pass to a subsequence (keeping the same notation) so that $t_i\to t^*$ for some $t^*\in I$. We have that \begin{equation} \text{for all} \quad i\in \mathbb{N},\quad \bm{h}\in\mathcal{C}(t_i)\quad \text{ there exists }\quad \bm{\tau}_{i}\in B_{\rho_i} (\bm{h}) \quad \text{such that}\quad \bm{\tau}_{i} \notin \Sigma -\bm{\widetilde{\sigma}}(t_i).
\label{eq:aepp-safe-load-bound-claim-contrary} \end{equation} On the other hand, condition \eqref{eq:safe-load-strict} means that \begin{equation} \text{there exist}\qquad \rho^*>0\quad \text{and}\quad \bm{h}^*\in\mathcal{C}(t^*) \quad\text{such that} \quad B_{\rho^*}(\bm{h}^*) \subset \Sigma-\bm{\widetilde{\sigma}}(t^*). \label{eq:eq:safe-load-strict-aepp-proof-implication} \end{equation} However, as we discussed in the part {\it i)} of the proof, condition \eqref{eq:safe-load-strict} implies Lipschitz-continuity of $\mathcal{C}(t)$ with respect to the Hausdorff distance. Thus, \begin{multline} \text{for any small number}\quad \delta>0 \quad \text{we can find} \quad N\in\mathbb{N}\\ \text{ such that for any}\quad i\geqslant N\quad \text{we will have}\quad \bm{h}^*\in B_\delta(0)+\mathcal{C}(t_i). \label{eq:aepp-proof-haus-implication} \end{multline} We proceed further in three steps. \begin{enumerate}[1.] \item Take $\delta<\frac{\rho^*}{3}$ and use \eqref{eq:aepp-proof-haus-implication} to find the corresponding $N\in \mathbb{N}$. Denote \[ \bm{h}_{i}={\rm proj}^{{\bf C}^{-1}}(\bm{h}^*, \mathcal{C}(t_i)) \in B_\delta(\bm{h}^*) \] for $i\geqslant N$. \item Use the Lipschitz-continuity of $\widetilde{\sigma}$ (Remark \ref{rem:ae-time-regularity}) to make sure that $N$ is large enough for \[ \|\bm{\widetilde{\sigma}}(t_i)-\bm{\widetilde{\sigma}}(t^*)\|_{{\bf C}^{-1}}<\frac{\rho^*}{3}. \] Redefine $N$, if necessary. \item Now choose $i\geqslant N$ such that $\rho_i<\frac{\rho^*}{3}$, and use \eqref{eq:aepp-safe-load-bound-claim-contrary} to find the corresponding $\bm{\tau}_{i}\in B_{\rho_i}(\bm{h}_i)$. \end{enumerate} Observe that \begin{multline*} \left\|\bm{\tau}_i+\bm{\widetilde{\sigma}}(t_i)-\bm{\widetilde{\sigma}}(t^*)-\bm{h}^*\right\|_{{\bf C}^{-1}}\leqslant \|\bm{\tau}_i-\bm{h}_i\|_{{\bf C}^{-1}}+\|\bm{\widetilde{\sigma}}(t_i)-\bm{\widetilde{\sigma}}(t^*)\|_{{\bf C}^{-1}}+\|\bm{h}_i-\bm{h}^*\|_{{\bf C}^{-1}}<\\<\rho_i +\frac{\rho^*}{3}+\delta<\rho^*, \end{multline*} i.e. \begin{equation} \bm{\tau}_i+\bm{\widetilde{\sigma}}(t_i)-\bm{\widetilde{\sigma}}(t^*) \in B_{\rho^*}(\bm{h}^*). \label{eq:aepp-proof-in-ball-to-contadict} \end{equation} But, by the construction in \eqref{eq:aepp-safe-load-bound-claim-contrary}, \[ \bm{\tau}_i\notin \Sigma -\bm{\widetilde{\sigma}}(t_i), \] i.e. \[ \bm{\tau}_i+\bm{\widetilde{\sigma}}(t_i)-\bm{\widetilde{\sigma}}(t^*) \notin \Sigma -\bm{\widetilde{\sigma}}(t^*), \] which, together with \eqref{eq:aepp-proof-in-ball-to-contadict}, contradicts \eqref{eq:eq:safe-load-strict-aepp-proof-implication}. Therefore, condition \eqref{eq:safe-load-strict} yields our claim \eqref{eq:aepp-safe-load-bound-claim}. \noindent \underline{Claim: $\|\omega\|_{L^{\infty}(I,\mathcal{H}_{{\bf C}^{-1}})}<\infty$.} From \eqref{eq:aepp-safe-load-bound-claim} we get that for every $t\in I$ and every $\bm{\tau}\in \mathcal{H}_{{\bf C}^{-1}}$ \begin{equation} \delta^*_{B_\rho(\bm{h}_t)}(\bm{\tau})\leqslant \delta^*_{\Sigma -\bm{\widetilde{\sigma}}(t)}(\bm{\tau}), \label{eq:support-functions-embedded-proof} \end{equation} where $\delta^*$ denotes the support function \eqref{eq:def-support-function} and we use the property \eqref{eq:sp-subset-property}. 
Denote \[ \bm{\zeta}(t)=\bm{\omega}(t)- \frac{d}{dt}\bm{y}(t) \] for $t\in J$ and plug it into \eqref{eq:support-functions-embedded-proof} as $\bm{\tau}$ to obtain (using the explicit form of the support function of the ball $B_\rho(\bm{h}_t)$) \[ \rho \left\|\bm{\zeta}(t) \right\|_{{\bf C}^{-1}} + \left\la\bm{\zeta}(t) , \bm{h}_t\right\ra_{{\bf C}^{-1}} \leqslant \delta^*_{\Sigma -\bm{\widetilde{\sigma}}(t)}\left(\bm{\zeta}(t) \right). \] On the other hand, from \eqref{eq:aepp-omega-inclusion} we have that \[ \bm{\zeta}(t) \in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)), \] i.e. by Proposition \ref{prop:nc-equivalent} \ref{enum:abstract-nc-in-support-func-notation} \[ \left\la\bm{\zeta}(t), \bm{y}(t)\right\ra_{{\bf C}^{-1}} = \delta^*_{\Sigma -\bm{\widetilde{\sigma}}(t)}\left(\bm{\zeta}(t)\right). \] Therefore, \[ \rho \left\|\bm{\zeta}(t) \right\|_{{\bf C}^{-1}} + \left\la\bm{\zeta}(t) , \bm{h}_t\right\ra_{{\bf C}^{-1}} \leqslant \left\la\bm{\zeta}(t), \bm{y}(t)\right\ra_{{\bf C}^{-1}}, \] i.e. \[ \rho \left\|\bm{\omega}(t)- \frac{d}{dt}\bm{y}(t) \right\|_{{\bf C}^{-1}} \leqslant \left\la\bm{\omega}(t)- \frac{d}{dt}\bm{y}(t), \bm{y}(t)-\bm{h}_t\right\ra_{{\bf C}^{-1}}. \] Recall, however, that $\bm{\omega}(t)\in \mathcal{U}$ by \eqref{eq:aepp-omega-inclusion}, and both $\bm{y}(t)$ and $\bm{h}_t$ belong to $\mathcal{C}(t)\subset \mathcal{V}$ by \eqref{eq:sp-elastoplastic} and \eqref{eq:aepp-safe-load-bound-claim}, respectively. Since $\mathcal{U}$ and $\mathcal{V}$ are linear ${\bf C}^{-1}$-orthogonal subspaces, the corresponding term vanishes and we get that \begin{multline*} \rho \left\|\bm{\omega}(t)\right\|_{{\bf C}^{-1}}-\rho\left\| \frac{d}{dt}\bm{y}(t) \right\|_{{\bf C}^{-1}} \leqslant \rho \left\|\bm{\omega}(t)- \frac{d}{dt}\bm{y}(t) \right\|_{{\bf C}^{-1}} \leqslant\\[2mm] \leqslant \left\la - \frac{d}{dt}\bm{y}(t), \bm{y}(t)-\bm{h}_t\right\ra_{{\bf C}^{-1}}\leqslant \left\|\frac{d}{dt}\bm{y}(t)\right\|_{{\bf C}^{-1}}\left\|\bm{y}(t)-\bm{h}_t\right\|_{{\bf C}^{-1}}. \end{multline*} Since the set $\Sigma$ is bounded (say, by a constant $L_\Sigma>0$) and $\bm{h}_t\in \Sigma - \bm{\widetilde{\sigma}}(t)$ by \eqref{eq:aepp-safe-load-bound-claim}, we have the estimate \[ \left\|\bm{\omega}(t)\right\|_{{\bf C}^{-1}} \leqslant \|y\|_{W^{1,\infty}(I, \mathcal{H}_{\bf{C}^{-1}})}+ \frac{1}{\rho} \|y\|_{W^{1,\infty}(I, \mathcal{H}_{\bf{C}^{-1}})} \left(\|y\|_{W^{1,\infty}(I, \mathcal{H}_{\bf{C}^{-1}})}+ L_\Sigma + \|\widetilde{\sigma}\|_{W^{1,\infty}(I, \mathcal{H}_{\bf{C}^{-1}})}\right) \] for a.a. $t\in I$. Therefore, we indeed have $\omega\in L^{\infty}(I,\mathcal{H}_{{\bf C}^{-1}})$ and we can use \eqref{eq:aepp-eps-from-omega} to construct $\varepsilon \in W^{1,\infty}(I, \mathcal{H})$. \end{enumerate} $\blacksquare$ \begin{Corollary} \label{cor:aepp-full-solution} Given the data as in Definition \ref{def:aepp} such that the condition \eqref{eq:safe-load-strict} holds (with $\widetilde{\sigma}$ found via \eqref{eq:abstract_linear_solution}), the problem \eqref{eq:sp-elastoplastic}--\eqref{eq:aepp-sp-moving-set} has at least one solution $(y, \varepsilon)\in W^{1,\infty}(I, \mathcal{H}_{{\bf C}^{-1}})\times W^{1,\infty}(I, \mathcal{H})$, and the abstract problem of quasi-static evolution in elasticity-perfect plasticity (Definition \ref{def:aepp}) also has at least one solution. The unknowns $y, \sigma$ and $\varepsilon_{\rm el}$ can be determined uniquely, while the unknowns $\varepsilon$ and $\varepsilon_{\rm p}$ may be determined non-uniquely.
\end{Corollary} \begin{Remark} \label{rem:aepp-sp-in-V} Notice that the moving set \eqref{eq:aepp-sp-moving-set} always lies in the subspace $\mathcal{V}$, i.e. one can think of the sweeping process \eqref{eq:sp-elastoplastic} as being defined in the Hilbert space $\mathcal{V}$ with the inner product induced from $\mathcal{H}_{{\bf C}^{-1}}$. Moreover, recall that in \eqref{eq:abstract_linear_solution} the first and the second terms take values in the orthogonal subspaces $\mathcal{V}$ and $\mathcal{U}$, respectively. Therefore, a change of the load $\bm{f}(t)$ changes the {\it shape} of the moving set \eqref{eq:aepp-sp-moving-set}, while a change of the load $\bm{g}(t)$ only causes a parallel translation of $\mathcal{C}(t)$ within $\mathcal{V}$. In particular, this leads to the following corollaries. \end{Remark} \begin{Corollary} \label{cor:constant-force} Assume that during the time interval $I$ the load $\bm{f}(t)$ remains constant and such that the moving set \eqref{eq:aepp-sp-moving-set} is nonempty (for at least one $t$, which implies nonemptiness for all $t\in I$). If the load $\bm{g}(t)$ has regularity as in \eqref{eq:loads-abstract-def}, then the time-dependent set $\mathcal{C}(t)$ only moves by a parallel translation within $\mathcal{V}$. Hence $\mathcal{C}(t)$ is Lipschitz-continuous with respect to Hausdorff distance with Lipschitz constant $ \|G\|_{\rm op}\, \|g\|_{W^{1, \infty}(I, \mathcal{H})}$ and the conditions of Theorem \ref{th:abstract-sp-existence} are satisfied for the sweeping process \eqref{eq:sp-elastoplastic}, \eqref{eq:diff-incs-ic-y}, \eqref{eq:aepp-sp-moving-set} regardless of condition \eqref{eq:safe-load-strict}. \end{Corollary} \begin{Corollary} Since in \eqref{eq:abstract_linear_solution} the operator $G$ takes values in $\mathcal{V}$, condition \eqref{eq:safe-load-strict} is equivalent to \begin{equation} \left({\rm int}\, \Sigma- Q\,{\bm f}(t)\right)\cap \mathcal{V} \neq \varnothing \qquad \text{for all }t\in I. \label{eq:safe-load-forces-strict} \end{equation} \end{Corollary} \subsection{Example 1 --- a discrete elastic-perfectly plastic model with one-dimensional sweeping process} Now we modify the example of Section \ref{ssect:ex11-elasticity} to fit into the sweeping process framework. Recall the model of Fig. \ref{fig:discrete-models-elastic} {\it a)} and assume now that each of the two springs is elastic-perfectly plastic, i.e. that Hooke's law is valid for each spring only as long as the stress of the spring remains inside a given interval $(\sigma^-_i, \sigma^+_i), i\in \overline{1,2}$, which is called the {\it elastic range} of the spring. The threshold values $\sigma^-_i, \sigma^+_i$ are called {\it yield limits}, and we assume that for $i\in \overline{1,2}$ \begin{equation} \sigma_i^-<0<\sigma_i^+. \label{eq:ex11-epp-yield-limits} \end{equation} Our assumption on the mechanical behavior of the springs is that \begin{equation*} \sigma_i(t) \in [\sigma^-_i, \sigma^+_i] \end{equation*} always holds. If a deformation is imposed upon a spring in such a manner that Hooke's law would require the stress to go beyond the yield limits, then a plastic deformation occurs. Accumulated plastic deformation does not influence the evolution of stress, which is characteristic of {\it perfect plasticity}. The equations in Definition \ref{def:aepp} model such behavior.
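Indeed, in the scalar case of a single spring the outward normal cone \eqref{eq:nc-abstract-def} of the elastic range is \[ N_{[\sigma^-_i, \sigma^+_i]}(\sigma_i)= \begin{cases} \{0\}, & \sigma_i\in(\sigma^-_i, \sigma^+_i),\\ {[0,+\infty)}, & \sigma_i=\sigma^+_i,\\ (-\infty,0], & \sigma_i=\sigma^-_i, \end{cases} \] so, by \eqref{eq:aepp-4}, the plastic elongation of a spring may change only while its stress stays at a yield limit, and only in the corresponding direction; strictly inside the elastic range the evolution is purely elastic.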
For the discrete model the unknowns $\varepsilon, \varepsilon_{\rm el}$ and $\varepsilon_{\rm p}$ are called, respectively, {\it total elongations}, {\it elastic elongations} (or the elastic component of the elongations) and {\it plastic elongations} (or the plastic component of the elongations). On top of the relations \eqref{eq:aepp-1}, \eqref{eq:aepp-3}, \eqref{eq:aepp-5}, which are similar to the elasticity problem and bear the same physical meaning, we have the {\it additive decomposition} \eqref{eq:aepp-2} of elongations into elastic and plastic components and the {\it normal form of the plastic flow rule} \eqref{eq:aepp-4}. The latter is the constitutive law for the plastic component, just like Hooke's law \eqref{eq:aepp-3} is the constitutive law for the elastic component, see Fig. \ref{fig:elasticity-pp-scheme}. We take spaces $\mathcal{H}, \mathcal{X}, \mathcal{W}_0$, operators ${\bf E}, {\bf D}, {\bf C}$ and functions $\bm{g}, \bm{f}$ as in Section \ref{ssect:ex11-elasticity}, and the set \begin{equation} \Sigma = [\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2]\subset \mathbb{R}^2\cong \mathcal{H}_{{\bf C}^{-1}}. \label{eq:ex11-epp-sigma} \end{equation} The stress solution for elasticity is explicitly written in \eqref{eq:ex11-linear-solution}, and the moving set \eqref{eq:aepp-sp-moving-set} is \begin{multline} \mathcal{C}(t) = \left([\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2] - \frac{l(t)}{k_1^{-1}+ k_2^{-1}} \begin{pmatrix}1 \\ 1\end{pmatrix} - \frac{F_2(t)}{k_1+k_2}\begin{pmatrix} k_1\\-k_2\end{pmatrix}\right)\cap {\rm Im}\, \begin{pmatrix}1\\1\end{pmatrix} = \\[2mm] \begin{aligned} =\left\{\begin{pmatrix}y\\y\end{pmatrix}: y\in \mathbb{R} \text{ such that }\right. &\sigma^-_1\leqslant y+\frac{l(t)}{k_1^{-1}+ k_2^{-1}}+\frac{F_2(t)\, k_1}{k_1+k_2}\leqslant \sigma^+_1 \\ \text{ and } &\sigma^-_2\leqslant \left. y+\frac{l(t)}{k_1^{-1}+ k_2^{-1}}-\frac{F_2(t)\, k_2}{k_1+k_2}\leqslant \sigma^+_2 \right\},\end{aligned} \label{eq:ex11-epp-moving-set} \end{multline} see Fig. \ref{fig:discrete-models-moving-sets} {\it a)}. In turn, condition \eqref{eq:safe-load-forces-strict} (equivalent to \eqref{eq:safe-load-strict}) is written as \begin{equation} \begin{aligned} \text{for each}\quad t\in I \quad\text{there exists}\quad y\in \mathbb{R} \quad \text{such that}\quad &\sigma^-_1<y+\frac{F_2(t)\, k_1}{k_1+k_2}<\sigma^+_1 \\ \text{and}\quad &\sigma^-_2< y-\frac{F_2(t)\, k_2}{k_1+k_2}< \sigma^+_2. \end{aligned} \label{eq:ex11-epp-safe-load-strict} \end{equation} Notice that condition \eqref{eq:safe-load-strict} exactly means that in Fig. \ref{fig:discrete-models-moving-sets} {\it a)} the magnitude of the load $\bm{f}(t)$ is not too large, so that the segment $\mathcal{C}(t)$ does not degenerate to a point or an empty set. We also suggest that the interested reader illustrate the proof of Theorem \ref{th:aepp-well-posedness-safe-load-strict} \ref{enum:th:aepp-well-posedness-nonempty-rhs} and \ref{enum:th:aepp-well-posedness-estimate-rhs} by sketching the right-hand sides of \eqref{eq:inclusion2-elastoplastic} and \eqref{eq:aepp-omega-inclusion}, since in the context of the current example those are subsets of $\mathbb{R}^2$. Moreover, from such an illustration it is possible to observe non-uniqueness of the evolution of $\varepsilon$ when stresses in multiple springs reach the yield limits.
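For illustration, take $k_1=k_2=1$ (the values used in Fig. \ref{fig:discrete-models-moving-sets}). Then the two conditions in \eqref{eq:ex11-epp-moving-set} read \[ \sigma^-_1\leqslant y+\frac{l(t)+F_2(t)}{2}\leqslant \sigma^+_1, \qquad \sigma^-_2\leqslant y+\frac{l(t)-F_2(t)}{2}\leqslant \sigma^+_2, \] and a direct computation shows that condition \eqref{eq:ex11-epp-safe-load-strict} is equivalent to \[ \sigma^-_1-\sigma^+_2 < F_2(t)< \sigma^+_1-\sigma^-_2 \qquad \text{for all }t\in I, \] i.e. the applied force must stay within a window determined by the yield limits alone, while the prescribed elongation $l(t)$ only translates the segment $\mathcal{C}(t)$ along $\mathcal{V}$, cf. Corollary \ref{cor:constant-force}.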
\begin{figure}[H]\center \includegraphics{Fig-discrete-models-moving-sets.pdf} \caption{\footnotesize The construction of the moving set in the discrete models of Examples 1 ({\it a}) and 2 ({\it b}). The figure shows the situation with the stiffness parameters $k_i$ equal to $1$. } \label{fig:discrete-models-moving-sets} \end{figure} \begin{Remark} \label{rem:ex11-plasticity-vertices-velocity} Since $\mathcal{C}(t)$ in the current example is a segment, i.e. it is a bounded {\it polyhedron with fixed normal directions to its faces} in a one-dimensional space $\mathcal{V}$, we can estimate the Lipschitz constant of $\mathcal{C}(t)$ by estimating the velocities of its endpoints (of the vertices of the polyhedron in the general case), see \cite[Lemma A.3, pp. 36--37]{Gudoshnikov2023ESAIM}, cf. the argument in the proof of Theorem \ref{th:aepp-well-posedness-safe-load-strict} \ref{enum:th:aepp-well-posedness-sp}. \end{Remark} We can now interpret Corollary \ref{cor:aepp-full-solution} as follows. \begin{Corollary} Let us be given the data of Section \ref{ssect:ex11-elasticity} and set $\Sigma$ as in \eqref{eq:ex11-epp-sigma}, such that the condition \eqref{eq:ex11-epp-safe-load-strict} holds. Then the problem of quasi-static evolution in the model of Fig. \ref{fig:discrete-models-elastic} {\it a)} with elastic-perfectly plastic springs has at least one solution. In particular, the evolution of stresses $\bm{\sigma}(t)\in \mathbb{R}^2$ can be found as \begin{equation} \bm{\sigma}(t) = \bm{y}(t) + \bm{\widetilde{\sigma}}(t), \label{eq:finding-stresses-epp} \end{equation} where $\bm{\widetilde{\sigma}}$ is the stress solution for elasticity \eqref{eq:ex11-linear-solution} and ${\bm{y}}$ is the solution to the sweeping process \eqref{eq:sp-elastoplastic} in $\mathbb{R}^2$ with the inner product \eqref{eq:ex11-inner-product}. The moving set \eqref{eq:ex11-epp-moving-set} of the sweeping process is a time-dependent segment within the one-dimensional subspace $\mathcal{V}$, see Fig. \ref{fig:discrete-models-moving-sets} {\it a)}. \end{Corollary} \subsection{Example 2 --- a discrete elastic-perfectly plastic model with two-dimensional sweeping process} Consider now the model of Fig. \ref{fig:discrete-models-elastic} {\it b)} and assume that all three springs are elastic-perfectly plastic, i.e. that they are characterized by stiffness parameters $k_i$ and elastic ranges $(\sigma^-_i, \sigma^+_i), \, i\in \overline{1,3}$, for which \eqref{eq:ex11-epp-yield-limits} holds. We take spaces $\mathcal{H}, \mathcal{X}, \mathcal{W}_0$, operators ${\bf E}, {\bf D}, {\bf C}$ and functions $\bm{g}, \bm{f}$ as in Section \ref{ssect:ex12-elasticity}, and the set \begin{equation} \Sigma = [\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2]\times[\sigma^-_3, \sigma^+_3]\subset \mathbb{R}^3\cong \mathcal{H}_{{\bf C}^{-1}}. \label{eq:ex12-epp-sigma} \end{equation} The stress solution for elasticity is explicitly written in \eqref{eq:ex12-linear-solution}, and the moving set \eqref{eq:aepp-sp-moving-set} is \begin{multline} \mathcal{C}(t) = \left([\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2]\times[\sigma^-_3, \sigma^+_3] - \phantom{\begin{pmatrix} k_2^{-1}+k_3^{-1} & -k_2^{-1} \\[1mm]k_3^{-1} & k_1^{-1} \\[1mm]-k_2^{-1}& k_1^{-1}+ k_2^{-1} \end{pmatrix}}\right. \\ \left.
-\frac{1}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}}\begin{pmatrix} k_2^{-1}+k_3^{-1} & -k_2^{-1} \\[1mm]k_3^{-1} & k_1^{-1} \\[1mm]-k_2^{-1}& k_1^{-1}+ k_2^{-1} \end{pmatrix}\begin{pmatrix} l_1(t) \\ l_2(t)\end{pmatrix} - \frac{F_1(t)+F_3(t)}{k_1+k_2+k_3}\begin{pmatrix}-k_1 \\ k_2\\ -k_3\end{pmatrix}\right)\cap\\[2mm] \cap {\rm Ker}\, \begin{pmatrix} -1 & 1 & -1\end{pmatrix} = \\[2mm] \begin{aligned}=\left\{\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\in \mathbb{R}^3\right. &\text{ such that } -y_1+y_2-y_3=0\\ \text{ and } &\sigma^-_1\leqslant y_1+\frac{\left(k_2^{-1}+k_3^{-1}\right)l_1(t) -k_2^{-1} l_2(t)}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}} -\frac{k_1\left(F_1(t)+F_3(t)\right)}{k_1+k_2+k_3}\leqslant \sigma^+_1\\[2mm] \text{ and } & \sigma^-_2\leqslant y_2+\frac{k_3^{-1}\, l_1(t)+ k_1^{-1}\, l_2(t)}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}}+\frac{ k_2 \left(F_1(t)+F_3(t)\right)\,}{k_1+k_2+k_3}\leqslant \sigma^+_2\\[2mm] \text{ and } &\sigma^-_3\leqslant y_3+\frac{-k_2^{-1} l_1(t)+\left(k_1^{-1}+k_2^{-1}\right) l_2(t)}{k_1^{-1}k_2^{-1}+k_2^{-1}k_3^{-1}+k_1^{-1}k_3^{-1}} -\frac{k_3\left(F_1(t)+F_3(t)\right)}{k_1+k_2+k_3}\leqslant \sigma^+_3 \left.\vphantom{\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}}\right\},\end{aligned} \label{eq:ex12-epp-moving-set} \end{multline} see Fig. \ref{fig:discrete-models-moving-sets} {\it b)}. In turn, condition \eqref{eq:safe-load-forces-strict} (equivalent to \eqref{eq:safe-load-strict}) is written as \begin{equation} \begin{aligned} \text{for each}\quad t\in I\quad \text{there exists}\quad \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\in \mathbb{R}^3\quad \text{such that}\quad & -y_1+y_2-y_3=0\\ \text{and}\quad &\sigma^-_1< y_1 -\frac{k_1\left(F_1(t)+F_3(t)\right)}{k_1+k_2+k_3}< \sigma^+_1\\[2mm] \text{and}\quad & \sigma^-_2< y_2+\frac{ k_2 \left(F_1(t)+F_3(t)\right)\,}{k_1+k_2+k_3}< \sigma^+_2\\[2mm] \text{and}\quad &\sigma^-_3< y_3 -\frac{k_3\left(F_1(t)+F_3(t)\right)}{k_1+k_2+k_3}< \sigma^+_3. \end{aligned} \label{eq:ex12-epp-safe-load-strict} \end{equation} Again, notice that condition \eqref{eq:safe-load-strict} exactly means that in Fig. \ref{fig:discrete-models-moving-sets} {\it b)} the magnitude of the load $\bm{f}(t)$ is not too large, so that the polygon $\mathcal{C}(t)$ does not degenerate to a point or an empty set. \begin{Remark} Since $\mathcal{C}(t)$ in the current example is a polygon, i.e. it is again a bounded {\it polyhedron with fixed normal directions to its faces} in a two-dimensional space $\mathcal{V}$, we can again estimate the Lipschitz constant of $\mathcal{C}(t)$ by estimating the velocities of its vertices, see \cite[Lemma A.3, pp. 36--37]{Gudoshnikov2023ESAIM}, cf. Remark \ref{rem:ex11-plasticity-vertices-velocity} and the proof of Theorem \ref{th:aepp-well-posedness-safe-load-strict} \ref{enum:th:aepp-well-posedness-sp}. \end{Remark} We can now interpret Corollary \ref{cor:aepp-full-solution} as follows. \begin{Corollary} Let us be given the data of Section \ref{ssect:ex12-elasticity} and the set $\Sigma$ as in \eqref{eq:ex12-epp-sigma}, such that the condition \eqref{eq:ex12-epp-safe-load-strict} holds. Then the problem of quasi-static evolution in the model of Fig. \ref{fig:discrete-models-elastic} {\it b)} with elastic-perfectly plastic springs has at least one solution. 
In particular, the evolution of stresses $\bm{\sigma}(t)\in \mathbb{R}^3$ can be found as \eqref{eq:finding-stresses-epp}, where $\bm{\widetilde{\sigma}}$ is the stress solution for elasticity \eqref{eq:ex12-linear-solution} and ${\bm{y}}$ is the solution to the sweeping process \eqref{eq:sp-elastoplastic} in $\mathbb{R}^3$ with the inner product \eqref{eq:ex12-inner-product}. The moving set \eqref{eq:ex12-epp-moving-set} of the sweeping process is a time-dependent polygon (with fixed normal directions to its faces) within the two-dimensional subspace $\mathcal{V}$, see Fig. \ref{fig:discrete-models-moving-sets} {\it b)}. \end{Corollary} The sweeping process framework of Definition \ref{def:aepp}, Theorem \ref{th:aepp-to-sweeping-process} and condition \eqref{eq:safe-load-strict} are readily applicable to networks of elastic-perfectly plastic springs in general, of which the models in Fig. \ref{fig:discrete-models-elastic} are particular cases. For the general construction with one spatial dimension we refer to \cite{Gudoshnikov2021ESAIM}, \cite{Gudoshnikov2020}, and for many spatial dimensions we refer to \cite{Gudoshnikov2023preprint}. The former references focus on the models with the boundary conditions in terms of elongations, such as \eqref{eq:ex12-constraint-matrix-form} in Example 2, while in the latter reference we considered the boundary conditions in terms of displacements, i.e. of the type \eqref{eq:ex11-bc}. In \cite[Fig. 8, p. 3378]{Rachinskii2018}, \cite[Fig. 2, p. 16]{Gudoshnikov2023ESAIM} and \cite[Fig. 1, p.2]{Oleg2024preprint} the reader can find a very interesting network model, known in engineering as the {\it bridge circuit} \cite[Fig. 2.31, p. 58]{Sundararajan2020}, \cite[Fig. 11.15, p. 216]{Bracke2024reliability}. This model results in a sweeping process with three-dimensional $\mathcal{V}$ and $\mathcal{C}(t)$ (see \cite[Figs. 1, 3, pp. 16--17]{Gudoshnikov2023ESAIM}) and five-dimensional $\mathcal{H}$. Its quantities can also be written explicitly, but the formulas are even more cumbersome than those of the discrete examples in the present paper. \section{Regularity lost (Example 3 with elasticity-perfect plasticity)} \label{sect:regularity-lost} \subsection{The sweeping process in the continuum model} Let us now consider the continuous rod of Section \ref{ssect:ex21-elasticity}, endowed with the elastic-perfectly plastic behavior at each point $x\in \Omega$. Such a rod is characterized by its stiffness function $\bm{C}\in L^\infty(\Omega)$ (see Section \ref{sssect:ex21-Hooke-law}) and its {\it local yield limits} $\sigma^-, \sigma^+\in L^{\infty}(\Omega)$, so that, analogously to \eqref{eq:ex11-epp-yield-limits}, we require \[ \sigma^-(x)<0<\sigma^+(x) \qquad \text{for a.a. }x\in\Omega. \] Functions $\sigma^-, \sigma^+$ are the boundaries of the {\it elastic range} at each point $x\in \Omega$, so that the one-sided constraint of the stress now takes the form \[ \sigma(t,x)\in \left[\sigma^-(x), \sigma^+(x)\right]\qquad \text{for all }t\in I \text{ and a.a. }x\in \Omega. \] In Definition \ref{def:aepp} we now use the data of Section \ref{ssect:ex21-elasticity} with \begin{equation} \Sigma = \left\{\bm{\sigma}\in L^2(\Omega): \text{ for a.a. }x\in \Omega \text{ we have }\sigma^-(x)\leqslant\sigma(x)\leqslant\sigma^+(x) \right\}. 
\label{eq:ex21-Sigma-def} \end{equation} The sweeping process \eqref{eq:sp-elastoplastic} happens within the subspace $\mathcal{V}$ of constant functions (see \eqref{eq:ex21-space-V}) in the space $L^2_{{\bf C}^{-1}}(\Omega)$, which is $L^2(\Omega)$ with the inner product \eqref{eq:ex21-ip}. We use the corresponding stress solution for elasticity \eqref{eq:w0-const-def}, \eqref{eq:ex21-linear-solution} to write the moving set \eqref{eq:aepp-sp-moving-set} as \begin{multline} \mathcal{C}(t) =\left\{\vphantom{ L^2_{{\bf C}^{-1}}}\right.\bm{y} \in L^2_{{\bf C}^{-1}}(\Omega): \text{ there exists }c\in \mathbb{R} \text{ such that for a.a. }x\in \Omega \text{ we have}\\ c\equiv y(x)\quad\text{and}\quad \sigma^-(x)-\widetilde{\sigma}(t,x) \leqslant c\leqslant \sigma^+(x)-\widetilde{\sigma}(t,x) \left.\vphantom{ L^2_{{\bf C}^{-1}}} \right\}, \label{eq:ex21-moving-set} \end{multline} which is closed and convex. For the reader's convenience we collect the new quantities related to plasticity in all three examples in Table \ref{tab:elasticity_perfect_plasticity_examples}. \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|c|} \hline \textbf{Quantity} & \textbf{Example 1 (Fig. \ref{fig:discrete-models-elastic} {\it a})} & \textbf{Example 2 (Fig. \ref{fig:discrete-models-elastic} {\it b})} & \textbf{Example 3 (Fig. \ref{fig:continuous-rod-model-elastic})}\\ \hline $\Sigma\subset \mathcal{H}$ & $ [\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2]$ & $ [\sigma^-_1, \sigma^+_1]\times[\sigma^-_2, \sigma^+_2]\times[\sigma^-_3, \sigma^+_3]$ & $ \begin{array}{c}\left\{\bm{\sigma}\in L^2(\Omega): \text{ for a.a. }x\in \Omega \right.\\ \left.\sigma^-(x)\leqslant\sigma(x)\leqslant\sigma^+(x) \right\}\end{array}$ \\ \hline $\begin{array}{c} \text{Moving set}\\ \mathcal{C}(t) \end{array}$ & formula \eqref{eq:ex11-epp-moving-set} & formula \eqref{eq:ex12-epp-moving-set} & formula \eqref{eq:ex21-moving-set} \\ \hline \end{tabular} \end{center} \caption{The quantities related to perfect plasticity in the examples of the current paper, in addition to Table \ref{tab:linear_elasticity_examples}.} \label{tab:elasticity_perfect_plasticity_examples} \end{table} \subsection{The continuum model with particular data and its stress solution} \label{ssect:ex21-concrete-data} Let us examine the implications of the problem \eqref{eq:sp-elastoplastic}--\eqref{eq:aepp-sp-moving-set} with particular data of the continuous rod. Consider the space- and time-domains \[ \Omega=(-1,1), \qquad I=[0,T],\, T=3, \] the parameters of the rod \[ \bm{C}(x)\equiv 1,\qquad \sigma^-(x)\equiv -1, \qquad \sigma^+(x)\equiv 1, \] the initial stress state \[ \sigma_0(x) \equiv 0 \qquad \text{for all }x\in \Omega \] and the following loading programme: \[ F(t,x) = \begin{cases} 2xt &\text{when } t\in[0,1),\\ 2x &\text{when } t\in[1, T], \end{cases} \qquad u_a(t)\equiv 0 \text{ for all }t\in I, \qquad u_b(t)=\begin{cases}0& \text{when }t\in[0, 1),\\ t-1 &\text{when } t\in [1,T], \end{cases} \] i.e. we gradually increase the body force load from zero to $F(1,x) = 2x$ during the time-interval $[0,1)$, and then keep the body force load constant, while monotonically increasing the required length in the displacement boundary condition. 
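Before plugging this data into the general formula \eqref{eq:ex21-linear-solution}, let us sketch how the result can also be recovered by hand (under the sign convention consistent with the data above: equilibrium reads $\frac{\partial}{\partial x}\widetilde{\sigma}(t,x)+F(t,x)=0$, and the elastic elongation $\int_\Omega \widetilde{\sigma}(t,x)/\bm{C}(x)\, dx$ must equal $u_b(t)-u_a(t)$). For instance, for $t\in[1,T]$ equilibrium forces $\widetilde{\sigma}(t,x)=a(t)-x^2$ for some function $a$ of time only, and the elongation requirement determines that function:
\begin{equation*}
\int_{-1}^{1}\left(a(t)-x^2\right)dx = 2a(t)-\frac{2}{3}=u_b(t)-u_a(t)=t-1, \qquad\text{i.e.}\qquad a(t)=\frac{1}{2}(t-1)+\frac{1}{3}.
\end{equation*}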
We compute the stress solution for elasticity \eqref{eq:ex21-linear-solution} as \begin{equation} \widetilde{\sigma}(t,x) = \begin{cases} t\left(\frac{1}{3}-x^2\right) & \text{when } t\in [0,1),\\ \frac{1}{2}(t-1)+ \frac{1}{3}-x^2& \text{when }t\in[1,T], \end{cases} \label{eq:ex21-linear-solution-concrete-loads} \end{equation} see \cite[\nolinkurl{Example_3_sigma_tilde.mp4}]{SupplMat} for the animation. The sweeping process \eqref{eq:sp-elastoplastic} in this case is defined in terms of the regular inner product of $L^2(\Omega)$, its initial condition is \begin{equation} y(0,x) = \sigma_0(x) - \widetilde{\sigma}(0,x) \equiv 0 \qquad \text{for all }x\in \Omega \label{eq:ex21-ic-data} \end{equation} and its moving set is \eqref{eq:ex21-moving-set}. In particular, for $t\in[1,T]$ the inequalities in \eqref{eq:ex21-moving-set} take the form \[ -\frac{4}{3} - \frac{1}{2}(t-1)+x^2\leqslant c\leqslant \frac{2}{3}-\frac{1}{2}(t-1)+x^2, \] see Fig. \ref{fig:ex21-moving-set}. \begin{figure}[H]\center \includegraphics{Fig-ex21-moving-set.pdf} \caption{\footnotesize The moving set \eqref{eq:ex21-moving-set} for the data of Section \ref{ssect:ex21-concrete-data}. } \label{fig:ex21-moving-set} \end{figure} Observe that all the way through the interval $t\in\left[0, t^*\right],\, t^*=\frac{7}{3},$ the stationary constant-zero function is the solution of the sweeping process \eqref{eq:sp-elastoplastic} with initial condition \eqref{eq:ex21-ic-data}. Indeed, by plugging $c=0$ and \eqref{eq:ex21-linear-solution-concrete-loads} into \eqref{eq:ex21-moving-set} we can check the condition \ref{enum:abstract-sp-solution-def-constraint} of Definition \ref{def:abstract-sp}, and zero velocity clearly belongs to the normal cone \eqref{eq:nc-C-inv-def}, therefore the condition \ref{enum:abstract-sp-solution-def-nc} of Definition \ref{def:abstract-sp} also holds. The time-moment $t^*=\frac{7}{3}$ is special, as that is when the {\it purely elastic deformation} ends and the {\it plastic deformation} of the rod starts. Figs. \ref{fig:ex21-moving-set} {\it b)} and {\it c)} suggest that for the interval $(t^*, T)$ the solution of the sweeping process is \[ y(t,x) = -\frac{1}{2}(t-t^*) \qquad \text{with}\qquad \frac{d}{dt} y(t,x) =- \frac{1}{2}. \] It can be verified analytically that such a function $\bm{y}(t)$ belongs to $\mathcal{C}(t)$ for $t\in[t^*, T]$. Moreover, for any $t\in[t^*, T]$ one can also see from Figs. \ref{fig:ex21-moving-set} {\it b)} and {\it c)} that any constant function $c(x)$ that fits into $\mathcal{C}(t)$ has to be such that \[ c(x)-y(t,x)\leqslant 0, \] i.e. for the velocity we have \[ -\left(\frac{d}{dt} y(t,x)\right)\,\left(c(x)-y(t,x)\right)\leqslant 0 \qquad \text{for any }x\in \Omega, \,t\in[t^*, T]. \] This means that condition \ref{enum:abstract-sp-solution-def-nc} of Definition \ref{def:abstract-sp} holds with the normal cone \eqref{eq:nc-C-inv-def} defined in terms of the regular integral inner product of $L^2(\Omega)$. To summarize, the solution to the sweeping process \eqref{eq:sp-elastoplastic} with the data of the current section is \[ y(t,x) = \begin{cases}0&\text{when }t\in[0, t^*),\\ -\frac{1}{2}(t-t^*)& \text{when } t\in [t^*, T],\end{cases} \qquad \text{with}\qquad \frac{d}{dt} y(t,x) =\begin{cases}0&\text{when }t\in[0, t^*),\\ - \frac{1}{2}& \text{when } t\in (t^*, T],\end{cases} \] see \cite[\nolinkurl{Example_3_y.mp4}]{SupplMat} for the animation. 
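For completeness, here is the elementary computation behind the two claims above. First, $0\in\mathcal{C}(t)$ if and only if $\sigma^-(x)\leqslant\widetilde{\sigma}(t,x)\leqslant\sigma^+(x)$ for a.a. $x\in \Omega$. For $t\in[0,1)$ we have $|\widetilde{\sigma}(t,x)|\leqslant\frac{2}{3}$, so there is nothing to check; for $t\in[1,T]$ the lower constraint is never active (since $\widetilde{\sigma}(t,x)\geqslant \frac{1}{2}(t-1)-\frac{2}{3}>-1$), while the upper one is tightest at $x=0$, where $\widetilde{\sigma}(t,0)=\frac{1}{2}(t-1)+\frac{1}{3}\leqslant 1$ holds exactly when $t\leqslant \frac{7}{3}=t^*$. Second, for $t\in[t^*, T]$ and $c=-\frac{1}{2}(t-t^*)=\frac{2}{3}-\frac{1}{2}(t-1)$ the inequalities displayed above become
\begin{equation*}
-\frac{4}{3}-\frac{1}{2}(t-1)+x^2\leqslant \frac{2}{3}-\frac{1}{2}(t-1)\leqslant \frac{2}{3}-\frac{1}{2}(t-1)+x^2,
\end{equation*}
i.e. $x^2\leqslant 2$ and $0\leqslant x^2$, which hold for all $x\in \Omega$ (with equality in the latter at $x=0$, so $\bm{y}(t)$ indeed travels along the boundary of $\mathcal{C}(t)$). 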
By formula \eqref{eq:finding-stresses-epp} this gives the evolution of stress \[ \sigma(t,x) = \begin{cases} t\left(\frac{1}{3}-x^2\right) & \text{when } t\in [0,1)\\ \frac{1}{2}(t-1)+ \frac{1}{3}-x^2& \text{when }t\in[1,t^*)\\ 1-x^2& \text{when } t\in [t^*, T],\end{cases} \] which, indeed, belongs to $\Sigma$, as required by perfect plasticity; see also \cite[\nolinkurl{Example_3_sigma.mp4}]{SupplMat} for the animation. \subsection{Non-existence of a strain solution as an \texorpdfstring{$L^2$}{L2}-function} Now let us consider the differential inclusion \eqref{eq:inclusion2-elastoplastic} and its component \eqref{eq:aepp-omega-inclusion} in particular. During the time-interval $[0, t^*]$, when $\frac{d}{dt} \bm{y}=0$, we can clearly solve \eqref{eq:aepp-omega-inclusion} with $\omega\equiv 0$ and \eqref{eq:inclusion2-elastoplastic} with $\frac{d}{dt} \bm{\varepsilon}=\frac{d}{dt} \bm{\widetilde{\sigma}}$, which agrees with the mechanical expectation of purely elastic evolution. However, during the time-interval $(t^*, T)$ we encounter the following phenomenon. \begin{Observation} \label{obs:nonupport} For all $t\in (t^*, T)$ in \eqref{eq:aepp-omega-inclusion} we have \begin{equation} N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) = \{0\}. \label{eq:ex21-singleton-zero-normal-cone} \end{equation} Indeed, one can observe from Figs. \ref{fig:ex21-moving-set} {\it b)} and {\it c)} that the only function $\bm{\tau}\in L_{{\bf C}^{-1}}^2(\Omega)$ for which \[ {\rm proj}^{{\bf C}^{-1}}(\bm{\tau}, \Sigma-\bm{\widetilde{\sigma}}(t)) = {\bm y}(t) \qquad \text{i.e.} \qquad \bm{\tau} - {\bm y}(t) \in N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) \] is the vector $\bm{y}(t)$ itself (and the projection is meant in terms of the standard inner product of $L^2(\Omega)$, since we chose $\bm{C}(x)\equiv 1$). In other words, the set $\Sigma-\bm{\widetilde{\sigma}}(t)$ in the space $L^2(\Omega)$ {\it does not possess a nonzero supporting vector} at the point $\bm{y}(t)$, although the latter clearly lies on the boundary of the set. \end{Observation} This situation is in striking contrast with the typical picture of convex analysis in finite dimensions (see Fig. \ref{fig:fig-normal-cone}), where the normal cone degenerates to the singleton-zero set if and only if $\bm{y}$ belongs to the interior of $\Sigma$, see Fig. \ref{fig:fig-normal-cone} {\it a)} specifically. In general, such points $\bm{y}$ with a degenerate normal cone fall into the category of {\it nonsupport points}, see \cite[p. 184]{Peressini1967}, \cite[p. 97]{Jeyakumar1992}, \cite[Def. 2.15, p. 21]{Borwein1992}, \cite[Def. 2.6 (b), p. 2544]{Borwein2003}, \cite[p. 155]{Mordukhovich2022}. From Observation \ref{obs:nonupport} it follows that for all $t\in (t^*, T)$ the first term in the intersection \eqref{eq:aepp-omega-inclusion} is the singleton set \begin{equation} N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t)) + \frac{d}{dt}\bm{y}(t) =\{0\}+ \frac{d}{dt}\bm{y}(t) =\left\{\frac{d}{dt}\bm{y}(t)\right\} =\left\{-\frac{1}{2}\right\}, \label{eq:ex21-with-data-nonzero-component} \end{equation} where $-\frac{1}{2}$ is meant as a constant function over $\Omega$. But recall that $\bm{y}(t)\in \mathcal{V}$ for all $t$ (see Remark \ref{rem:aepp-sp-in-V}), hence $\frac{d}{dt}\bm{y}(t)\in \mathcal{V}$ as well. As $\mathcal{U}$ and $\mathcal{V}$ are orthogonal subspaces, they intersect only at zero. But $\frac{d}{dt}\bm{y}(t)$ in \eqref{eq:ex21-with-data-nonzero-component} is not zero, from which we deduce the following. 
\begin{Observation} For all $t\in(t^*, T]$ the right-hand sides of \eqref{eq:aepp-omega-inclusion} and \eqref{eq:inclusion2-elastoplastic} are empty sets. Therefore, the problem \eqref{eq:sp-elastoplastic}--\eqref{eq:aepp-sp-moving-set} as a whole has no solution $(y, \varepsilon)\in W^{1, \infty}(I, \mathcal{H}_{{\bf C}^{-1}})\times W^{1, \infty}(I, \mathcal{H})$ for any $\bm{\varepsilon_0}$. By the statement of Theorem \ref{th:aepp-to-sweeping-process}, the problem of Definition \ref{def:aepp} also has no solution \eqref{eq:aepp-unknowns} with the data of Section \ref{ssect:ex21-concrete-data}. \end{Observation} This obviously contradicts the conclusion \ref{enum:th:aepp-well-posedness-nonempty-rhs} of Theorem \ref{th:aepp-well-posedness-safe-load-strict}. Therefore, condition \eqref{eq:safe-load-strict} must fail. Indeed, any function $\bm{\sigma}\in \Sigma$ can be perturbed to another function $\bm{\sigma}'\notin \Sigma$ with \[ \|\bm{\sigma}-\bm{\sigma}'\|_{{\bf C}^{-1}} = \left(\intOm \frac{\left|\sigma(x)-\sigma'(x)\right|^2}{\bm{C}(x)}\, dx\right)^{\frac{1}{2}} \] arbitrarily small, see Fig. \ref{fig:misc-1} {\it a)}. That is, the following holds. \begin{Observation} \label{obs:ex21-Slater-cq-fails} In the setting of the model of the continuous rod \[ {\rm int}\, \Sigma =\varnothing \] and condition \eqref{eq:safe-load-strict} is not satisfied. \end{Observation} \begin{figure}[H]\center \includegraphics{Fig-misc-1.pdf} \caption{\footnotesize {\it a)} Any function $\bm{\sigma}\in \Sigma$ can be perturbed into a function $\bm{\sigma}'\notin \Sigma$ with an arbitrarily small $\|\cdot\|_{{\bf C}^{-1}}$-norm of the difference, so that ${\rm int}\, \Sigma=\varnothing$. {\it b)} A finite-dimensional situation in which constraint qualification \eqref{eq:nontrivial-intersection-condition} fails and the additivity of normal cones is lost. } \label{fig:misc-1} \end{figure} It may appear somewhat surprising that the sweeping process \eqref{eq:sp-elastoplastic} with the set \eqref{eq:aepp-sp-moving-set} has a well-defined solution, and yet the normal cone \eqref{eq:ex21-singleton-zero-normal-cone} is degenerate and the differential inclusion \eqref{eq:inclusion2-elastoplastic} is ill-posed. To understand the situation, revisit the proof of Theorem \ref{th:aepp-well-posedness-safe-load-strict} \ref{enum:th:aepp-well-posedness-nonempty-rhs} and observe that the failure of condition \eqref{eq:safe-load-strict} means that the additivity of normal cones \eqref{eq:nc-additivity-in-aepp-proof} is no longer guaranteed. And, indeed, we merely have the one-sided relation \begin{equation} N^{{\bf C}^{-1}}_{\Sigma-\bm{\widetilde{\sigma}}(t)}(\bm{y}(t))+ N^{{\bf C}^{-1}}_{\mathcal{V}}(\bm{y}(t)) = \{0\} + \mathcal{U}\subset N^{{\bf C}^{-1}}_{\left(\Sigma-\bm{\widetilde{\sigma}}(t)\right)\cap \mathcal{V}}(\bm{y}(t)), \label{eq:ex21-nc-additivity-failure} \end{equation} cf. \eqref{eq:nc-subadditivity}, because we have already found the vector $-\frac{d}{dt}\bm{y}(t)$, which belongs to the right-hand side, but not to the left-hand side, see the argument after \eqref{eq:ex21-with-data-nonzero-component}. We conclude with the following. \begin{Observation} \label{obs:ex21-cq-importance} When the additivity of normal cones \eqref{eq:nc-additivity-in-aepp-proof} fails to hold with equality, in the problem of Definition \ref{def:aepp} and Theorem \ref{th:aepp-to-sweeping-process} it may be impossible to extract the evolution of $\varepsilon$ and $\varepsilon_{\rm p}$ from the evolution of $\sigma,\, \varepsilon_{\rm el}$ and $y$. 
Condition \eqref{eq:safe-load-strict} serves as a {\it constraint qualification} to ensure such additivity, see Proposition \ref{prop:nc-abstract-properties} \ref{enum:prop:nc-abstract-properties-nc-subadditivity}, but condition \eqref{eq:safe-load-strict} fails when $\Sigma$ is described via inequalities on the state variable in a space with an integral norm. If, for this one or for a similar problem, we could find a less demanding constraint qualification which ensures the additivity of normal cones, we could solve the problem in full. \end{Observation} Note that a failure of constraint qualification \eqref{eq:nontrivial-intersection-condition} can lead to the loss of additivity of normal cones in a finite-dimensional setting as well (see Fig. \ref{fig:misc-1} {\it b)}), but not when the involved sets are polyhedral (see \cite[Corollary 23.8.1, p. 224]{Rockafellar1970}). \subsection{Concluding remarks on the continuum elastic-perfectly plastic model} As we have demonstrated, the lack of additivity of the normal cones explains why the problem of quasi-static evolution in elasticity-perfect plasticity, formulated in Definition \ref{def:aepp}, cannot be solved in general by means of Theorem \ref{th:aepp-to-sweeping-process} when we search for the unknown variable \[ \bm{\varepsilon}\in \mathcal{H}= L^2(\Omega), \] although this approach works in a finite-dimensional setting. In fact, J.-J. Moreau encountered this issue and showed in \cite{Moreau1976} that for the problem of an elastic-perfectly plastic rod one must search for a displacement field $\bm{u}$ and the corresponding strain field $\bm{\varepsilon}$ so that \[ \bm{u}\text{ has bounded variation on }\Omega, \qquad \bm{\varepsilon}=\frac{\partial}{\partial x} \bm{u} \text{ is a bounded Borel signed measure on } \Omega, \] where the derivative is meant in the appropriate sense. The development of the mathematical theory of perfect plasticity for two- and three-dimensional domains $\Omega\subset \mathbb{R}^n$ followed, in which the displacement field $\bm{u}$ belongs to the so-called space of bounded deformation $BD(\Omega)$, i.e. \[ \bm{u}\in L^1(\Omega, \mathbb{R}^n), \qquad \bm{\varepsilon}=\frac{1}{2}\left(D\bm{u}+(D\bm{u})^T\right)\text{ is a bounded Borel vector measure on }\Omega, \] when $\Omega$ is bounded. We will leave this theory beyond the scope of the current text, but recommend the references from Section \ref{sect:intro}. \section{Constraint qualifications and elastoplasticity} \label{sect:constraint-qualification-and-hardening} \subsection{Constraint qualifications} Let us focus now on the following property (we repeat Proposition \ref{prop:nc-abstract-properties} \ref{enum:prop:nc-abstract-properties-nc-subadditivity} for presentation purposes): \begin{Proposition} Let $\mathcal{H}$ be a Hilbert space, $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ be closed, convex, nonempty sets and $\bm{x}\in \mathcal{C}_1\cap \mathcal{C}_2$. Then \begin{equation} N_{\mathcal{C}_1}(\bm{x}) + N_{\mathcal{C}_2}(\bm{x}) \subset N_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{x}). \label{eq:nc-subadditivity-general} \end{equation} \end{Proposition} In the proof of Theorem \ref{th:aepp-to-sweeping-process} we used \eqref{eq:nc-subadditivity-general} to justify that the evolution of stress in problems with elastoplasticity can be found from the corresponding sweeping process. 
We also observed that to extract the evolution of the plastic component and of the total strain from the evolution of stresses, we need the equality of sets in \eqref{eq:nc-subadditivity-general}, which is the property called {\it additivity of normal cones}: \begin{equation} N_{\mathcal{C}_1}(\bm{x}) + N_{\mathcal{C}_2}(\bm{x}) = N_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{x}). \label{eq:nc-additivity-general} \end{equation} To justify such equality we can use conditions on $\mathcal{C}_1$ and $\mathcal{C}_2$, which are called {\it constraint qualifications}. Constraint qualifications are usually discussed in the context of optimization problems written in a specific form, so we reformulate our question about the additivity of normal cones in terms of {\it Fenchel--Rockafellar dual problems}. \begin{Proposition} \label{prop:from-nc-to-dual-problems} Let $\mathcal{H}$ be a Hilbert space, $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ be closed, convex, nonempty sets and $\bm{x}\in \mathcal{C}_1\cap \mathcal{C}_2$. Let \begin{equation} \bm{v} \in N_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{x}) \label{eq:nc-additivity-to-primal-dual-v-taken-1} \end{equation} and consider two optimization problems with optimal values, respectively, $p^*$ and $d^*$: \begin{align} p^*&=- \delta^*_{\mathcal{C}_1\cap \mathcal{C}_2}(\bm{v})&&=\inf_{\bm{c}\in \mathcal{H}}\left(\delta_{\mathcal{C}_1}(\bm{c})+\left(\delta_{\mathcal{C}_2}(\bm{c})-\la \bm{v},\bm{c}\ra\right)\right),& \label{eq:abstract-primal-problem} \\[2mm] d^*&= - \inf_{\bm{y}\in \mathcal{H}} \left(\delta^*_{\mathcal{C}_1}(\bm{y})+ \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y}) \right) &&= \sup_{\bm{y}\in \mathcal{H}}\left(-\delta^*_{\mathcal{C}_1}(\bm{y})- \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y}) \right),& \label{eq:abstract-dual-problem} \end{align} where $\delta$ and $\delta^*$ denote the indicator function \eqref{eq:def-indicator-function} and the support function \eqref{eq:def-support-function}, respectively. Then \begin{equation} \bm{v} \in N_{\mathcal{C}_1}(\bm{x}) + N_{\mathcal{C}_2}(\bm{x}) \label{eq:nc-additivity-to-primal-dual-v-taken-2} \end{equation} if and only if the following property, called strong duality, holds: \begin{equation} p^*=d^* \text{ and the optimal value } d^* \text{ is achieved in \eqref{eq:abstract-dual-problem} at some }\bm{y^*}\in \mathcal{H}. \label{eq:nc-additivity-to-primal-dual-strong-duality} \end{equation} \end{Proposition} \begin{Remark} Optimization problems \eqref{eq:abstract-primal-problem} and \eqref{eq:abstract-dual-problem} are also sometimes written (see e.g. \cite{Attouch1986}) in the following forms, respectively, \begin{align*} -p^*&= (\delta_{\mathcal{C}_1}+\delta_{\mathcal{C}_2})^*(\bm{v}),\\ -d^*&= (\delta^*_{\mathcal{C}_1}\square \,\delta^*_{\mathcal{C}_2})(\bm{v}), \end{align*} where $f^*$ is the {\it Fenchel conjugate} (or {\it Legendre transform}) of a function $f:\mathcal{H}\to \mathbb{R}\cup\{+\infty\}$, defined as \[ f^*: \mathcal{H}\to \mathbb{R}\cup\{+\infty\}, \qquad f^*(\bm{y}) = \sup_{\bm{x}\in\mathcal{H}}\left(\la\bm{y}, \bm{x}\ra - f(\bm{x})\right), \] and $f_1\square\, f_2$ is the {\it infimal convolution} of two functions $f_1, f_2:\mathcal{H}\to \mathbb{R}\cup\{+\infty\}$, defined as \[ (f_1\square \,f_2): \mathcal{H}\to \mathbb{R}\cup\{+\infty\}, \qquad (f_1\square \,f_2)(\bm{v}) = \inf_{\bm{y}\in\mathcal{H}}\left(f_1(\bm{y})+f_2(\bm{v}-\bm{y})\right). \] \end{Remark} \noindent Before we prove Proposition \ref{prop:from-nc-to-dual-problems}, let us verify the following statement. 
\begin{Proposition} \label{prop:weak-duality} {\bf (weak duality)} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ be closed, convex, nonempty sets such that $\mathcal{C}_1\cap \mathcal{C}_2$ is also nonempty. For a given $\bm{v}\in \mathcal{H}$ let $p^*, d^*\in \mathbb{R}\cup \{-\infty\}$ be defined by \eqref{eq:abstract-primal-problem}--\eqref{eq:abstract-dual-problem}. Then \[ p^*\geqslant d^*. \] \end{Proposition} \noindent{\bf Proof of Proposition \ref{prop:weak-duality}.} Clearly, for any $\bm{y} \in \mathcal{H}$ and any $\bm{c}\in \mathcal{C}_1\cap \mathcal{C}_2$ \[ \la \bm{y}, \bm{c}\ra + \la\bm{v}- \bm{y}, \bm{c}\ra =\la \bm{v},\bm{c} \ra. \] Since $\mathcal{C}_1\cap\mathcal{C}_2$ is contained in both $\mathcal{C}_1$ and $\mathcal{C}_2$, \[ \sup_{\bm{c}\in \mathcal{C}_1} \la \bm{y}, \bm{c}\ra +\sup_{\bm{c}\in \mathcal{C}_2} \la \bm{v}- \bm{y}, \bm{c}\ra \geqslant \sup_{\bm{c}\in \mathcal{C}_1 \cap \mathcal{C}_2} \la \bm{v}, \bm{c}\ra, \] i.e. for any $\bm{y}\in \mathcal{H}$ \[ \delta^*_{\mathcal{C}_1}(\bm{y}) +\delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y}) \geqslant \delta^*_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{v}). \] Taking the infimum with respect to $\bm{y}$, we get \[ \inf_{\bm{y}\in \mathcal{H}} \left(\delta^*_{\mathcal{C}_1}(\bm{y}) +\delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y})\right) \geqslant \delta^*_{\mathcal{C}_1 \cap \mathcal{C}_2}(\bm{v}), \] and multiplying by $-1$ we obtain the claimed inequality. $\blacksquare$ \noindent {\bf Proof of Proposition \ref{prop:from-nc-to-dual-problems}.} \underline{Part ``if''.} From the statement of the Proposition we are given $\bm{x}\in \mathcal{C}_1\cap \mathcal{C}_2, \bm{v}\in \mathcal{H}$ such that \eqref{eq:nc-additivity-to-primal-dual-v-taken-1} holds, and $\bm{y^*}\in \mathcal{H}$, at which the optimal value $d^*$ is achieved in \eqref{eq:abstract-dual-problem}. Since $p^*=d^*$ we have \begin{equation*} 0=p^*-d^* = - \delta^*_{\mathcal{C}_1\cap \mathcal{C}_2}(\bm{v}) + \delta^*_{\mathcal{C}_1}(\bm{y^*}) + \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}) =-\la\bm{v},\bm{x}\ra + \sup_{\bm{c}\in \mathcal{C}_1}\,\la \bm{y^*}, \bm{c} \ra+ \sup_{\bm{c}\in \mathcal{C}_2}\,\la \bm{v}-\bm{y^*}, \bm{c} \ra, \end{equation*} where the last equality is due to \eqref{eq:nc-additivity-to-primal-dual-v-taken-1} and the basic property of the normal cone, see Proposition \ref{prop:nc-equivalent} {\it i), iii)}. Therefore, \begin{equation*} \la \bm{y^*},\bm{x}\ra+ \la\bm{v} - \bm{y^*},\bm{x}\ra= \sup_{\bm{c}\in \mathcal{C}_1}\,\la \bm{y^*}, \bm{c} \ra+ \sup_{\bm{c}\in \mathcal{C}_2}\,\la \bm{v}-\bm{y^*}, \bm{c} \ra. \end{equation*} Observe that, since $\bm{x}\in \mathcal{C}_1$ and $\bm{x}\in \mathcal{C}_2$ simultaneously, this is only possible when \[ \la \bm{y^*},\bm{x}\ra = \sup_{\bm{c}\in \mathcal{C}_1}\,\la \bm{y^*}, \bm{c} \ra \qquad\text{and} \qquad \la\bm{v}- \bm{y^*},\bm{x}\ra = \sup_{\bm{c}\in \mathcal{C}_2}\,\la \bm{v}-\bm{y^*}, \bm{c} \ra, \] i.e. \begin{equation} \la \bm{y^*},\bm{x}\ra = \delta^*_{\mathcal{C}_1}(\bm{y^*}) \qquad\text{and} \qquad \la\bm{v}- \bm{y^*},\bm{x}\ra = \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}). 
\label{eq:nc-additivity-to-primal-dual-proof-separate-support-funcs} \end{equation} Again, by Proposition \ref{prop:nc-equivalent} {\it i), iii)} this means that \begin{equation} \bm{y^*}\in N_{\mathcal{C}_1}(\bm{x})\qquad\text{and} \qquad \bm{v}-\bm{y^*}\in N_{\mathcal{C}_2}(\bm{x}), \label{eq:nc-additivity-to-primal-dual-proof-separate-nc} \end{equation} from which \eqref{eq:nc-additivity-to-primal-dual-v-taken-2} follows. \newline\noindent \underline{Part ``only if''.} Assume now that both \eqref{eq:nc-additivity-to-primal-dual-v-taken-1} and \eqref{eq:nc-additivity-to-primal-dual-v-taken-2} hold for some $\bm{v}, \bm{x}$. From \eqref{eq:nc-additivity-to-primal-dual-v-taken-2} we deduce the existence of $\bm{y^*}$ such that \eqref{eq:nc-additivity-to-primal-dual-proof-separate-nc} holds, which yields \eqref{eq:nc-additivity-to-primal-dual-proof-separate-support-funcs} and \[ \la \bm{y^*},\bm{x}\ra+ \la\bm{v} - \bm{y^*},\bm{x}\ra= \delta^*_{\mathcal{C}_1}(\bm{y^*}) + \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}), \] i.e. \[ \la\bm{v}, \bm{x}\ra = \delta^*_{\mathcal{C}_1}(\bm{y^*}) + \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}). \] On the other hand, \eqref{eq:nc-additivity-to-primal-dual-v-taken-1} yields \[ \la\bm{v}, \bm{x}\ra=\delta^*_{\mathcal{C}_1\cap \mathcal{C}_2}(\bm{v}), \] hence we have \[ \delta^*_{\mathcal{C}_1\cap \mathcal{C}_2}(\bm{v}) = \delta^*_{\mathcal{C}_1}(\bm{y^*}) + \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}), \] i.e. \[ p^* = -\delta^*_{\mathcal{C}_1}(\bm{y^*}) - \delta^*_{\mathcal{C}_2}(\bm{v}-\bm{y^*}). \] As we have shown in Proposition \ref{prop:weak-duality}, the supremum in \eqref{eq:abstract-dual-problem} cannot be larger than $p^*$, and we have presented the specific $\bm{y}=\bm{y}^*$ at which the function inside the supremum in \eqref{eq:abstract-dual-problem} attains the value $p^*$. This is exactly what we claimed in \eqref{eq:nc-additivity-to-primal-dual-strong-duality}. $\blacksquare$ Proposition \ref{prop:from-nc-to-dual-problems} gives the connection between the additivity of normal cones and the dual optimization problems. It allows us to guarantee the additivity of normal cones when at least one of the constraint qualifications below holds. \begin{Proposition} \label{prop:constraint-qualifications} Let $\mathcal{H}$ be a Hilbert space, $\mathcal{C}_1, \mathcal{C}_2\subset \mathcal{H}$ be closed, convex, nonempty sets. Assume that at least one of the conditions listed below holds. Then for any $\bm{x}, \bm{v}$ as in \eqref{eq:nc-additivity-to-primal-dual-v-taken-1} the strong duality \eqref{eq:nc-additivity-to-primal-dual-strong-duality} holds, and, therefore, the additivity of normal cones \eqref{eq:nc-additivity-general} holds for any $\bm{x}\in \mathcal{C}_1\cap\mathcal{C}_2$. \begin{enumerate}[{\it i)}] \item \label{cq-Slater-I} (Slater's constraint qualification I. \cite[p. 239]{Aubin2000}, \cite[Section 2.f, p. 206]{Moreau1973}, \cite[Lemma 1 (b), pp.~4--5]{Kunze2000}) \[ {\mathcal{C}_1}\cap {{\rm int}\, \mathcal{C}_2}\neq \varnothing. \] \item \label{cq-Slater-II} (Slater's constraint qualification II. \cite[Th. 10.5.3, pp. 239--240]{Aubin2000}, \cite[Th. 2.1, p. 926]{Gowda1990}) \[ 0\in {\rm int} \left({\mathcal{C}_1} - {\, \mathcal{C}_2}\right). \] \item \label{cq-Rockafellar} (Rockafellar's constraint qualification \cite[Th. 17, Th. 18, pp. 41--42]{Rockafellar1974}, \cite[Th. 2.2, p. 926]{Gowda1990}) \[ \left\{\lambda \left(\bm{c_1}-\bm{c_2}\right): \lambda \geqslant 0, \bm{c_1}\in \mathcal{C}_1, \bm{c_2}\in \mathcal{C}_2 \right\}=\mathcal{H}. 
\] \item \label{cq-GCQ} (Attouch--Brezis's constraint qualification \cite{Attouch1986}, also known as the ``general constraint qualification'' \cite[Th. 3.5, Def. 3.3, Prop. 3.4, pp. 930--932]{Gowda1990}) \[ \left\{\lambda \left(\bm{c_1}-\bm{c_2}\right): \lambda \geqslant 0, \bm{c_1}\in \mathcal{C}_1, \bm{c_2}\in \mathcal{C}_2 \right\}\quad \text{is a closed linear subspace of }\mathcal{H}. \] \end{enumerate} \end{Proposition} \noindent We should make several comments on Proposition \ref{prop:constraint-qualifications}. First, note that \ref{cq-Slater-I}$\Rightarrow$\ref{cq-Slater-II}$\Rightarrow$\ref{cq-Rockafellar}$\Rightarrow$\ref{cq-GCQ}. Slater's constraint qualifications take their name from the work \cite{Slater1950}, where conditions of this type were introduced in 1950. Constraint qualifications \ref{cq-Rockafellar} and \ref{cq-GCQ} are related, respectively, to the concepts of the {\it core} and the {\it strong quasi-relative interior} of a set. Both concepts are generalizations of the relative interior of a set to Banach spaces. A reader interested in the topic may also find the following works useful: \cite{Jeyakumar1992}, \cite{Jeyakumar1992b}, \cite{Borwein1992}, \cite{Zalinescu2002}, \cite{Borwein2003}, \cite{Grad2010}, \cite{Daniele2014}. \subsection{Attempting to apply constraint qualifications to the continuum elastic-perfectly plastic model} \label{ssect:ex2-pp-cq} In Section \ref{ssect:ex21-concrete-data} we have verified directly that in the continuum elastic-perfectly plastic model the additivity of normal cones can fail (see \eqref{eq:ex21-nc-additivity-failure}), which, in turn, leads to the impossibility of solving \eqref{eq:aepp-omega-inclusion} and \eqref{eq:inclusion2-elastoplastic}. We have also checked that the constraint qualification with the strongest condition in our list, Proposition \ref{prop:constraint-qualifications} \ref{cq-Slater-I}, fails, see Observation \ref{obs:ex21-Slater-cq-fails}. For completeness, let us confirm that the constraint qualification with the weakest condition (Proposition \ref{prop:constraint-qualifications} \ref{cq-GCQ}) also fails in the same situation. Recall that we apply the constraint qualifications to the pair of sets \[ \mathcal{C}_1=\Sigma - \bm{\widetilde{\sigma}}(t), \qquad \mathcal{C}_2=\mathcal{V}, \] in the Hilbert space \[ \mathcal{H} = L^2_{{\bf C}^{-1}}(\Omega)= L^2(\Omega), \qquad \Omega=(-1, 1), \] where $\Sigma$ is given by \eqref{eq:ex21-Sigma-def} with $\sigma^+\equiv-\sigma^-\equiv 1$, the vector $\bm{\widetilde{\sigma}}(t)$ is given by \eqref{eq:ex21-linear-solution-concrete-loads} (and we are interested in $t\in\left[t^*, T\right] = \left[\frac{7}{3}, 3\right]$ in particular), and $\mathcal{V}$ is the subspace of constant functions \eqref{eq:ex21-space-V}, see Fig. \ref{fig:ex21-moving-set}. \begin{Lemma} \label{lemma:ex21-qc-generate-L-infty} With the data recalled above (in the current Section \ref{ssect:ex2-pp-cq}), we have \begin{equation} \left\{\lambda \left(\bm{c_1}-\bm{c_2}\right): \lambda \geqslant 0, \bm{c_1}\in \mathcal{C}_1, \bm{c_2}\in \mathcal{C}_2 \right\} = L^\infty(\Omega). \label{eq:ex21-qc-generate-L-infty} \end{equation} \end{Lemma} \noindent{\bf Proof.} Indeed, in the particular case \eqref{eq:ex21-linear-solution-concrete-loads}, as well as in the general case \eqref{eq:ex21-linear-solution}, $\bm{\widetilde{\sigma}}(t)$ is an absolutely continuous function of $x$ on $\overline{\Omega}$ for any fixed $t$. 
Hence \[ \bm{\widetilde{\sigma}}(t)\in L^\infty(\Omega), \] so that \[ \Sigma - \bm{\widetilde{\sigma}}(t)\subset L^\infty(\Omega)\] due to the construction of $\Sigma$ in \eqref{eq:ex21-Sigma-def}, and, of course, \[ \left\{\lambda \left(\bm{c_1}-c_2\right): \lambda \geqslant 0, \bm{c_1}\in \Sigma - \bm{\widetilde{\sigma}}(t), c_2\in \mathbb{R} \right\} \subset L^\infty(\Omega), \] cf. \eqref{eq:ex21-qc-generate-L-infty}. On the other hand, as long as \begin{equation} \underset{x\in \Omega}{\rm ess\, inf} \left(\sigma^+(x)-\widetilde{\sigma}(t,x)\right)> \underset{x\in \Omega}{\rm ess\, sup} \left(\sigma^-(x)-\widetilde{\sigma}(t,x)\right), \label{eq:ex21-pp-nondegeneracy} \end{equation} i.e. as long as the rectangle in Fig. \ref{fig:ex21-pp-gcq-fail} {\it a)} does not degenerate to a horizontal segment or an empty set, any $\bm{v}\in L^\infty(\Omega)$ can be represented as \begin{equation} {\bm v} =\lambda(\bm{c_1}-c_2) \label{eq:ex21-cq-L-infty-proof-affine-transformation} \end{equation} for some $\lambda>0, c_2\in \mathbb{R}$ and \begin{multline*} \bm{c_1}\in \left\{\bm{c}\in L^\infty(\Omega): \vphantom{\underset{x\in \Omega}{\rm ess\, inf}} \text{for a.a. }y\in \Omega \text{ we have }\right.\\ \left.\underset{x\in \Omega}{\rm ess\, sup} \left(\sigma^-(x)-\widetilde{\sigma}(t,x)\right) \leqslant c(y)\leqslant \underset{x\in \Omega}{\rm ess\, inf} \left(\sigma^+(x)-\widetilde{\sigma}(t,x)\right) \right\} \subset \Sigma - \bm{\widetilde{\sigma}}(t). \end{multline*} This finishes the proof of \eqref{eq:ex21-qc-generate-L-infty}. $\blacksquare$ \begin{figure}[H]\center \includegraphics{Fig-ex21-pp-gcq-fail.pdf} \caption{\footnotesize {\it a)} As long as the blue rectangle is nondegenerate, both \eqref{eq:ex21-pp-nondegeneracy} and \eqref{eq:ex21-qc-generate-L-infty} hold, because any function $\bm{v}\in L^\infty(\Omega)$ can be rescaled and translated to fit in the blue rectangle, cf. \eqref{eq:ex21-cq-L-infty-proof-affine-transformation}. {\it b)} An explicit example showing that $L^\infty(\Omega)$ is not closed in the $L^2$-norm. Take $\bm{y^*}\in L^2(\Omega)$ as $y^*(x) = (-x+1)^{-\frac{1}{4}}$, and approximate it by $y_i(x) = \min(y^*(x), i),\,i \in \mathbb{N}$. We have $\|\bm{y^*}-\bm{y_i}\|_{L^2(\Omega)}\to 0$, $\bm{y_i}\in L^\infty(\Omega)$, yet $\bm{y^*}\notin L^\infty(\Omega)$. } \label{fig:ex21-pp-gcq-fail} \end{figure} Recall, however, that $L^\infty(\Omega)$ is not a closed subspace of $L^2(\Omega)$, see Fig. \ref{fig:ex21-pp-gcq-fail} {\it b)} for an illustration. Thus Lemma \ref{lemma:ex21-qc-generate-L-infty} means that even the constraint qualification with the weakest condition (Proposition \ref{prop:constraint-qualifications} \ref{cq-GCQ}) fails for the continuum elastic-perfectly plastic model. This was expected, though, as we have already shown that \eqref{eq:ex21-nc-additivity-failure} does not hold with equality. \subsection{The abstract framework suitable for elasticity-hardening plasticity} Let us modify Definition \ref{def:aepp} of an abstract plasticity problem to allow for a different kind of constraint $\Sigma$. \begin{Definition} \label{def:aehp} Let spaces $\mathcal{H}, \mathcal{X}, \mathcal{W}_0$, operators ${\bf E}, {\bf D}, {\bf C}$ and functions $\bm{g}, \bm{f}$ be as in Section \ref{ssect:adjoint-operatorsED} and Definition \ref{def:ae}, but we require $\mathcal{H}$ to be separable. In addition, let us be given another separable Hilbert space $\mathcal{H}'$, which we call the {\it space of internal variables}. 
We denote the entire configuration space by \[ \widehat{\mathcal{H}} =\mathcal{H}\times\mathcal{H}'. \] Let us be given a bounded closed convex nonempty set $\widehat{\Sigma}\subset \widehat{\mathcal{H}}$. We say that the unknown variables \begin{equation} \varepsilon, \varepsilon_{\rm el}, \varepsilon_{\rm p}, \sigma \in W^{1, \infty}(I, \mathcal{H}), \qquad \xi\in W^{1, \infty}(I, \mathcal{H}') \label{eq:aehp-unknowns} \end{equation} solve the {\it abstract problem of quasi-static evolution in elasticity-monotone plasticity} if they satisfy \begin{align} \bm{\varepsilon}& \in {\rm Im}\, {\bf E}+\bm{g} , \label{eq:aehp-1} \tag{EMP1}\\ \bm{\varepsilon}& = \bm{\varepsilon_{\bf el}}+\bm{\varepsilon_{\bf p}}, \label{eq:aehp-2} \tag{EMP2}\\ \bm{\sigma}& = {\bf C} \, \bm{\varepsilon_{\bf el}}, \label{eq:aehp-3} \tag{EMP3}\\ \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon_{\bf p}}\\ -\bm{\xi} \end{pmatrix}& \in N_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}, \label{eq:aehp-4} \tag{EMP4}\\ \bm{\sigma}\in D(\bf D)\quad \text{ and }\quad {\bf D}\, \bm{\sigma}&=\bm{f}, \label{eq:aehp-5} \tag{EMP5} \end{align} and the initial condition \[ (\bm{\varepsilon}(0), \bm{\varepsilon}_{\bf el}(0), \bm{\varepsilon}_{\bf p}(0), \bm{\sigma}(0), \bm{\xi}(0)) =(\bm{\varepsilon}_{\bf 0}, \bm{\varepsilon}_{\bf el0}, \bm{\varepsilon}_{\bf p0}, \bm{\sigma}_{\bf 0}, \bm{\xi}_{\bf 0}) \] with some given right-hand side from $\mathcal{H}^4\times \mathcal{H}'$ satisfying \eqref{eq:aehp-1}--\eqref{eq:aehp-3}, \eqref{eq:aehp-5} at $t=0$ and \begin{equation*} \begin{pmatrix}\bm{\sigma}_{\bf 0}\\ \bm{\xi}_{\bf 0}\end{pmatrix} \in \widehat{\Sigma}. \label{eq:aehp-sigma-ic-compatible} \end{equation*} \end{Definition} The problem of Definition \ref{def:aehp} can be represented as the diagram of Fig. \ref{fig:elasticity-hp-scheme}. Similarly to Theorem \ref{th:aepp-to-sweeping-process}, we will convert the problem to a sweeping process and a differential inclusion with a known right-hand side. Both elasticity-perfect plasticity and elasticity-hardening plasticity can be modeled via the framework of Definition \ref{def:aehp}, and we use the term elasticity-{\it monotone} plasticity because the normal cone is a monotone operator. In contrast, the phenomenon of {\it softening} (also known as ``negative hardening'') cannot be modeled within this framework and requires a further nontrivial modification of it. \begin{figure}[H]\center \includegraphics{Fig-elasticity-hp-scheme.pdf} \caption{\footnotesize Schematic representation of the problem of Definition \ref{def:aehp}. The unknown variables are indicated in blue. In the problem of Definition \ref{def:aehp} we are only looking for the unknowns $\varepsilon, \varepsilon_{\rm el}, \varepsilon_{\rm p}, \sigma$ and $\xi$. Red rectangles indicate the constitutive relations. } \label{fig:elasticity-hp-scheme} \end{figure} Before we derive the sweeping process from Definition \ref{def:aehp}, we will need the counterpart of the space $\widehat{\mathcal{H}}$ with the weighted inner product. 
\begin{Definition} Consider the operator \[ \widehat{{\bf C}}:\widehat{\mathcal{H}} \to \widehat{\mathcal{H}}, \qquad \widehat{{\bf C}}:\begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} \mapsto \begin{pmatrix}{\bf C}\, \bm{\sigma}\\ \bm{\xi}\end{pmatrix} \] and its inverse \[ \widehat{{\bf C}}^{-1}:\widehat{\mathcal{H}} \to \widehat{\mathcal{H}}, \qquad \widehat{{\bf C}}^{-1}:\begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} \mapsto \begin{pmatrix}{\bf C}^{-1}\, \bm{\sigma}\\ \bm{\xi}\end{pmatrix}. \] Define the space $\widehat{\mathcal{H}}_{{\bf C}^{-1}}$ as \[ \widehat{\mathcal{H}}_{{\bf C}^{-1}}= \mathcal{H}_{{\bf C}^{-1}} \times \mathcal{H}', \] i.e. $\widehat{\mathcal{H}}_{{\bf C}^{-1}}$ is the space $\widehat{\mathcal{H}}$ with the following inner product \begin{multline} \left\la\begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}, \begin{pmatrix}\bm{\tau}\\ \bm{\zeta}\end{pmatrix}\right\ra_{\widehat{{\bf C}}^{-1}}= \left\la\begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}, \widehat{{\bf C}}^{-1}\begin{pmatrix}\bm{\tau}\\ \bm{\zeta}\end{pmatrix}\right\ra_{\widehat{\mathcal{H}}}= \la \bm{\sigma}, {\bf C}^{-1} \bm{ \tau} \ra_\mathcal{H}+\la\bm{\xi}, \bm{\zeta}\ra_{\mathcal{H}'} \\ \text{for any } \bm{\sigma}, \bm{\tau}\in \mathcal{H},\, \bm{\xi}, \bm{\zeta}\in \mathcal{H}'. \label{eq:weighted-ip-hardening} \end{multline} We denote by $N^{\widehat{{\bf C}}^{-1}}$ the normal cone in the Hilbert space $\widehat{\mathcal{H}}_{{\bf C}^{-1}}$, defined by formula \eqref{eq:nc-abstract-def} with the inner product \eqref{eq:weighted-ip-hardening}. Finally, we denote by $P_{1}$ the projection from the space $\widehat{\mathcal{H}}_{{\bf C}^{-1}}$ to its subspace $\mathcal{H}_{{\bf C}^{-1}}$, i.e. the extraction of the first component: \[ P_1:\widehat{\mathcal{H}}_{{\bf C}^{-1}} \to \mathcal{H}_{{\bf C}^{-1}}, \qquad P_1:\begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} \mapsto \bm{\sigma}. \] \end{Definition} \begin{Theorem} \label{th:aehp-to-sweeping-process} Functions \eqref{eq:aehp-unknowns} solve the abstract problem of quasi-static evolution in elasticity-monotone plasticity if and only if the unknown $y$ as in \eqref{eq:yielding-to-sweeping} and the unknowns $\varepsilon \in W^{1,\infty}(I, \mathcal{H}),\, \xi \in W^{1,\infty}(I, \mathcal{H}')$ solve the differential inclusions \begin{numcases}{} -\frac{d}{dt}\begin{pmatrix}\bm{y}\\ \bm{\xi} \end{pmatrix}\in N^{\widehat{{\bf C}}^{-1}}_{\widehat{\mathcal{C}}(t)}\begin{pmatrix}\bm{y}\\ \bm{\xi}\end{pmatrix}, \label{eq:sp-elastoplastic-hardening}\\ \frac{d}{dt}\bm{\varepsilon} \in {\bf C}^{-1}\left(P_1\left(\left(N^{\widehat{{\bf C}}^{-1}}_{\widehat{\Sigma}-\begin{pmatrix}\bm{\widetilde{\sigma}}(t)\\0\end{pmatrix}}\begin{pmatrix}\bm{y}\\\bm{\xi}\end{pmatrix}+ \frac{d}{dt} \begin{pmatrix}\bm{y}\\ \bm{\xi}\end{pmatrix}\right)\cap \left(\mathcal{U}\times\{0\}\right)\right)+ \frac{d}{dt} \bm{\widetilde{\sigma}}(t)\right) \label{eq:inclusion2-elastoplastic-hardening} \end{numcases} with the initial conditions \begin{align*} \bm{y}(0) &= \bm{\sigma}_{\bf 0}- \bm{\widetilde{\sigma}}(0),\\ \begin{pmatrix} \bm{y}(0) \\ \bm{\xi}(0)\end{pmatrix}& \in \widehat{\mathcal{C}}(0), \\ \bm{\varepsilon}(0)& = \bm{\varepsilon}_{\bf 0}\in {\bf C}^{-1}(\mathcal{U}+\bm{\widetilde{\sigma}}(0)), \end{align*} where the moving set is \begin{equation} \widehat{\mathcal{C}}(t)=\left(\widehat{\Sigma}-\begin{pmatrix}\bm{\widetilde{\sigma}}(t)\\ 0\end{pmatrix}\right)\cap \left(\mathcal{V}\times \mathcal{H}'\right). 
\label{eq:aehp-sp-moving-set} \end{equation} \end{Theorem} \noindent {\bf Proof.} We follow the lines of the proof of Theorem \ref{th:aepp-to-sweeping-process} and by the same reasoning we obtain \eqref{eq:stress-constraint}, \eqref{eq:rate-strain-constraint} and \eqref{eq:rate-additive-decomposition} from \eqref{eq:aehp-1}--\eqref{eq:aehp-3}, \eqref{eq:aehp-5}. Apply $\widehat{\bf C}$ to \eqref{eq:aehp-4} to obtain \begin{equation} \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon_{\bf p}}\\ -\bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}, \label{eq:plastic-flow-rule-in-proof-hardening} \end{equation} i.e. \[ \frac{d}{dt} \begin{pmatrix}{\bf C}\, \bm{\varepsilon}_{\bf p}\\ -\bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}. \] Substitute \eqref{eq:rate-additive-decomposition} there to get \[ -\frac{d}{dt} \begin{pmatrix}\bm{\sigma} \\ \bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} - \frac{d}{dt} \begin{pmatrix}{\bf C}\, \bm{\varepsilon}\\ 0\end{pmatrix} \] and use \eqref{eq:rate-strain-constraint} to obtain \[ -\frac{d}{dt} \begin{pmatrix}\bm{\sigma} \\ \bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} - \frac{d}{dt}\begin{pmatrix} \bm{\widetilde{\sigma}}\\ 0\end{pmatrix} - (\mathcal{U}\times \{0\}), \] i.e. \[ -\frac{d}{dt} \begin{pmatrix}\bm{\sigma}- \bm{\widetilde{\sigma}} \\ \bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} + (\mathcal{U}\times \{0\}). \] On the other hand, from \eqref{eq:stress-constraint} we have \[ \begin{pmatrix} \bm{\sigma}-\bm{\widetilde{\sigma}}\\ \bm{\xi} \end{pmatrix}\in \mathcal{V}\times \mathcal{H}', \] where the right-hand side is the orthogonal complement of $\mathcal{U}\times \{0\}$ in the space $\widehat{\mathcal{H}}_{{\bf C}^{-1}}$. Therefore, \[ -\frac{d}{dt} \begin{pmatrix}\bm{\sigma}- \bm{\widetilde{\sigma}} \\ \bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma} - \begin{pmatrix}\bm{\widetilde{\sigma}}\\0\end{pmatrix}} \begin{pmatrix}\bm{\sigma}-\bm{\widetilde{\sigma}}\\ \bm{\xi}\end{pmatrix}+ N^{\widehat{\bf C}^{-1}}_{\mathcal{V}\times \mathcal{H}'} \begin{pmatrix}\bm{\sigma}-\bm{\widetilde{\sigma}}\\ \bm{\xi}\end{pmatrix}. \] By the subadditivity of the normal cones \eqref{eq:nc-subadditivity-general} we have \[ -\frac{d}{dt} \begin{pmatrix}\bm{\sigma}- \bm{\widetilde{\sigma}} \\ \bm{\xi} \end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\left(\widehat{\Sigma} - \begin{pmatrix}\bm{\widetilde{\sigma}}\\0\end{pmatrix}\right)\cap \left(\mathcal{V}\times \mathcal{H}'\right)} \begin{pmatrix}\bm{\sigma}-\bm{\widetilde{\sigma}}\\ \bm{\xi}\end{pmatrix}. \] Substitute \eqref{eq:yielding-to-sweeping} to obtain the desired sweeping process \eqref{eq:sp-elastoplastic-hardening}. To derive \eqref{eq:inclusion2-elastoplastic-hardening} observe that \eqref{eq:rate-additive-decomposition} implies \[ \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}\\0\end{pmatrix} = \frac{d}{dt} \begin{pmatrix}\bm{\sigma}\\\bm{\xi}\end{pmatrix}+\widehat{\bf C}\frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}_{\bf p}\\ -\bm{\xi}\end{pmatrix}. 
\] Substitute \eqref{eq:plastic-flow-rule-in-proof-hardening} to get \[ \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}\\0\end{pmatrix} \in \frac{d}{dt} \begin{pmatrix}\bm{\sigma}\\\bm{\xi}\end{pmatrix}+N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma}} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix} \] i.e. \[ \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}\\0\end{pmatrix} \in N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma} - \begin{pmatrix}\bm{\widetilde{\sigma}}\\ 0\end{pmatrix}} \begin{pmatrix}\bm{\sigma}- \bm{\widetilde{\sigma}}\\ \bm{\xi}\end{pmatrix}+ \frac{d}{dt} \begin{pmatrix}\bm{\sigma}-\bm{\widetilde{\sigma}}\\\bm{\xi}\end{pmatrix}+\frac{d}{dt}\begin{pmatrix}\bm{\widetilde{\sigma}}\\0 \end{pmatrix}. \] But from \eqref{eq:rate-strain-constraint} we have \[ \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}\\0\end{pmatrix} \in \left(\mathcal{U}\times \{0\}\right)+ \frac{d}{dt}\begin{pmatrix}\bm{\widetilde{\sigma}}\\0 \end{pmatrix}, \] hence \[ \widehat{\bf C} \frac{d}{dt} \begin{pmatrix}\bm{\varepsilon}\\0\end{pmatrix} \in \left( N^{\widehat{\bf C}^{-1}}_{\widehat{\Sigma} - \begin{pmatrix}\bm{\widetilde{\sigma}}\\ 0\end{pmatrix}} \begin{pmatrix}\bm{\sigma}- \bm{\widetilde{\sigma}}\\ \bm{\xi}\end{pmatrix}+ \frac{d}{dt} \begin{pmatrix}\bm{\sigma}-\bm{\widetilde{\sigma}}\\\bm{\xi}\end{pmatrix}\right)\cap \left(\mathcal{U}\times \{0\}\right)+\frac{d}{dt}\begin{pmatrix}\bm{\widetilde{\sigma}}\\0 \end{pmatrix}. \] Substitute \eqref{eq:yielding-to-sweeping} and apply $\widehat{\bf C}^{-1}$ to obtain \[ \frac{d}{dt}\begin{pmatrix} \bm{\varepsilon}\\0 \end{pmatrix} \in \widehat{{\bf C}}^{-1}\left(\left(N^{\widehat{{\bf C}}^{-1}}_{\widehat{\Sigma}-\begin{pmatrix}\bm{\widetilde{\sigma}}\\0\end{pmatrix}}\begin{pmatrix}\bm{y}\\\bm{\xi}\end{pmatrix}+ \frac{d}{dt} \begin{pmatrix}\bm{y}\\ \bm{\xi}\end{pmatrix}\right)\cap \left(\mathcal{U}\times\{0\}\right)+ \frac{d}{dt} \begin{pmatrix}\bm{\widetilde{\sigma}}\\0 \end{pmatrix}\right), \] which is equivalent to the desired inclusion \eqref{eq:inclusion2-elastoplastic-hardening}. The derivation of \eqref{eq:aehp-1}--\eqref{eq:aehp-5} from \eqref{eq:yielding-to-sweeping}, \eqref{eq:sp-elastoplastic-hardening}--\eqref{eq:aehp-sp-moving-set} is similar to the corresponding part of the proof of Theorem \ref{th:aepp-to-sweeping-process}. $\blacksquare$ \subsection{The continuum model with elasticity-hardening plasticity} \label{ssect:aehp-rod} Consider again the continuous rod of Section \ref{ssect:ex21-elasticity}, endowed with the elastic-hardening plastic behavior at each point $x\in \Omega$. The rod is characterized by its stiffness function $\bm{C}\in L^\infty(\Omega)$ (see Section \ref{sssect:ex21-Hooke-law}) and its yield limits $\sigma^-_x(\xi), \sigma^+_x(\xi)$, which in this paper we consider as functions \[ (x,\xi)\in \Omega \times\mathbb{R} \longmapsto \sigma^-_x(\xi)\in \mathbb{R}, \] \[ (x,\xi)\in \Omega \times\mathbb{R} \longmapsto \sigma^+_x(\xi) \in \mathbb{R}, \] depending on the point $x\in \Omega$ and the value of the internal variable $\xi$ at that point. In Definition \ref{def:aehp} we now use the data of Section \ref{ssect:ex21-elasticity} with \[ \mathcal{H}'= L^2(\Omega), \qquad \widehat{\mathcal{H}} = L^2(\Omega)\times L^2(\Omega), \] \begin{equation} \widehat\Sigma = \left\{\begin{pmatrix} \bm{\sigma}\\ \bm{\xi} \end{pmatrix}\in L^2(\Omega)\times L^2(\Omega): \text{ for a.a. }x\in \Omega \text{ we have }\sigma^-_x(\xi(x))\leqslant \sigma(x)\leqslant \sigma^+_x(\xi(x))\right\}. 
\label{eq:aehp-general-set-sigma} \end{equation} The constraint that the stress belongs to the elastic range now takes the form \[ \sigma(t,x) \in [\sigma^-_x(\xi(t,x)), \sigma^+_x(\xi(t,x))] \qquad \text{for all }t\in I \text{ and a.a. }x\in \Omega, \] where $\xi(t,x)$ is the new internal unknown variable we introduced in \eqref{eq:aehp-unknowns}, which allows us to model the change of the elastic range during the plastic evolution. We will only consider the cases for which we can ensure that \begin{equation} \sigma^-_x(\xi(t,x))< \sigma^+_x(\xi(t,x)) \qquad \text{for a.a. }x\in \Omega \label{eq:aehp-nondegenerate-elastic-range} \end{equation} during the entire evolution. In particular, we assume that \eqref{eq:aehp-nondegenerate-elastic-range} holds for the initial value $\bm{\xi}(0)=\bm{\xi_0}$. We must make several further assumptions on the data. First, for a.a. $x\in \Omega$ \begin{align} \text{ the map}\quad& \xi\longmapsto \sigma^-_x(\xi) \quad\text{is convex, strictly monotone and surjective onto }\mathbb{R}, \label{eq:aehp-top-yield-limit-map} \\ \text{ and the map}\quad& \xi\longmapsto \sigma^+_x(\xi) \quad \text{is concave, strictly monotone and surjective onto }\mathbb{R}. \label{eq:aehp-bottom-yield-limit-map} \end{align} We consider the following two simple cases, in which we can guarantee the non-degeneracy of the elastic range \eqref{eq:aehp-nondegenerate-elastic-range} and a constraint qualification for the intersection in \eqref{eq:aehp-sp-moving-set}. \subsubsection{Expansion of the elastic range, e.g. nonlinear isotropic hardening} \label{sssect:aehp-rod-isotropic} As the first case, consider the situation when the maps \eqref{eq:aehp-top-yield-limit-map}--\eqref{eq:aehp-bottom-yield-limit-map} have opposite types of monotonicity. For definiteness, we assume that for a.a. $x\in \Omega$ the map \eqref{eq:aehp-top-yield-limit-map} is decreasing, and \eqref{eq:aehp-bottom-yield-limit-map} is increasing, see Fig. \ref{fig:hardening-rules} {\it a)}. Under our assumptions both maps are invertible, and we denote their respective inverses by $\xi^-_x(\sigma),\, \xi^+_x(\sigma)$, i.e. we have the maps \begin{align*} \sigma\in \mathbb{R} \longmapsto \xi^-_x(\sigma) \in \mathbb{R},\\ \sigma \in \mathbb{R} \longmapsto \xi^+_x(\sigma) \in \mathbb{R}. \end{align*} \begin{figure}[H]\center \includegraphics{Fig-hardening-rules.pdf} \caption{\footnotesize Elastic range (red interval) depending on the internal variable $\xi$ for a particular $x\in \Omega$. {\it a)} Elastic range expands nonlinearly in the plastic regime (isotropic hardening in particular). {\it b)} Linear growth condition \eqref{eq:aehp-isotropic-hardening-linear-growth}. {\it c)} Elastic range translates linearly, which is called linear kinematic hardening. } \label{fig:hardening-rules} \end{figure} The maps \eqref{eq:aehp-top-yield-limit-map}--\eqref{eq:aehp-bottom-yield-limit-map} are continuous on the entire $\mathbb{R}$ as finite-valued convex and concave functions \cite[Th. 10.1, p. 82]{Rockafellar1970}. Hence their inverses are also continuous. We assume that for each $\xi\in \mathbb{R}$ the maps \[ x\longmapsto\sigma^-_x(\xi), \qquad x\longmapsto\sigma^+_x(\xi) \] are Lebesgue-measurable, so that \begin{align} (x,\sigma)\in \Omega \times\mathbb{R} \longmapsto \xi^-_x(\sigma)\in \mathbb{R}, \label{eq:eph-bottom-yield-limit-map-inverse} \\ (x,\sigma)\in \Omega \times\mathbb{R} \longmapsto \xi^+_x(\sigma) \in \mathbb{R} \label{eq:eph-top-yield-limit-map-inverse} \end{align} are Carath\'{e}odory maps. 
Finally, we must assume their linear growth with respect to $\sigma$: \begin{multline} \text{there exist } \psi\in L^2(\Omega) \text{ and } c>0 \text{ such that for a.a. }x\in \Omega \text{ and all } \sigma\in \mathbb{R}\\ \max\left(|\xi^-_x(\sigma)|, |\xi^+_x(\sigma)|\right)\leqslant \psi(x)+c|\sigma|, \label{eq:aehp-isotropic-hardening-linear-growth} \end{multline} see Fig. \ref{fig:hardening-rules} {\it b)}. Under such assumptions we can write the set $\widehat \Sigma$ in \eqref{eq:aehp-general-set-sigma} as \begin{equation} \widehat\Sigma = \left\{\begin{pmatrix} \bm{\sigma}\\ \bm{\xi} \end{pmatrix}\in L^2(\Omega)\times L^2(\Omega): \text{ for a.a. }x\in \Omega \text{ we have }\xi(x)\geqslant \max\left(\xi^-_x(\sigma(x)), \xi_x^+(\sigma(x))\right)\right\} \label{eq:ehp-isortopic-set-sigma-with-inverses} \end{equation} and observe that $\widehat\Sigma$ is a closed, convex, nonempty set. We use the set $\widehat\Sigma$ to construct the sweeping process \eqref{eq:sp-elastoplastic-hardening}, which happens within the subspace $\mathcal{V}\times L^2(\Omega)$, cf. \eqref{eq:ex21-space-V}, in the space $L^2_{{\bf C}^{-1}}(\Omega)\times L^2(\Omega)$, which is $L^2(\Omega)\times L^2(\Omega)$ with the inner product \eqref{eq:weighted-ip-hardening}. In particular, the moving set \eqref{eq:aehp-sp-moving-set} of the sweeping process is \begin{multline*} \mathcal{C}(t) =\left\{\vphantom{ L^2_{{\bf C}^{-1}}}\right.(\bm{y}, \bm{\xi}) \in L^2_{{\bf C}^{-1}}(\Omega)\times L^2(\Omega): \text{ there exists }c\in \mathbb{R} \text{ such that for a.a. }x\in \Omega \text{ we have}\\ c\equiv y(x)\quad\text{and}\quad \sigma_x^-(\xi(x)) \leqslant c+ \widetilde{\sigma}(t,x)\leqslant \sigma^+_x(\xi(x))\left.\vphantom{ L^2_{{\bf C}^{-1}}} \right\}, \end{multline*} where $\widetilde{\sigma}$ is the corresponding stress solution for elasticity \eqref{eq:w0-const-def},\eqref{eq:ex21-linear-solution}. The particular symmetric situation when \[ \sigma_x^-(\xi)=-\sigma_x^+(\xi) \qquad \text{for a.a. }x\in \Omega \text{ and all } \xi\in \mathbb{R} \] is known as {\it nonlinear isotropic hardening} (see e.g. \cite[Sect. 3.5, pp.~70--72]{Han2012}) for the one-dimensional rod. \subsubsection{Linear translation of the elastic range, i.e. linear kinematic hardening} \label{sssect:aehp-rod-kinematic} We would like to also include the case of {\it kinematic hardening} (see e.g. \cite[Sect. 5.4.3, 5.4.4, pp.~205--240]{Lemaitre1994}), but in order to simultaneously maintain the well-posedness \eqref{eq:aehp-nondegenerate-elastic-range}, the convexity of $\widehat{\Sigma}$ and the simplicity of the model (i.e. without considering the multiyield Mroz-type models, see e.g. \cite[pp.~216--218]{Lemaitre1994}, \cite[pp.~15--18]{Krejci1996}) we will only consider the case of {\it linear kinematic hardening}, see Fig. \ref{fig:hardening-rules} {\it c)}. In such a case we specifically consider the yield limits of the form \[ \sigma^-_x(\xi) = \sigma^-_{\rm offset}(x)+ \bm{H}(x)\,\xi, \] \[ \sigma^+_x(\xi) = \sigma^+_{\rm offset}(x)+ \bm{H}(x)\, \xi, \] where $\sigma_{\rm offset}^-, \sigma_{\rm offset}^+\in L^\infty (\Omega)$ are called the {\it initial yield limits} and $\bm{H}\in L^\infty(\Omega)$ is the {\it hardening modulus}, for which we assume a lower bound preventing the degeneration to perfect plasticity: \begin{equation} \text{there exists } \eta\in \mathbb{R} \text{ such that for a.a. } x\in \Omega \text{ we have}\qquad 0<\eta\leqslant\bm{H}(x).
\label{eq:hardening-modulus-lower-estimate} \end{equation} Observe from \eqref{eq:aehp-general-set-sigma} that the set $\widehat \Sigma$ is also closed, convex and nonempty in such a case. The sweeping process \eqref{eq:sp-elastoplastic-hardening} happens again within the subspace $\mathcal{V}\times L^2(\Omega)$, cf. \eqref{eq:ex21-space-V}, in the space $L^2_{{\bf C}^{-1}}(\Omega)\times L^2(\Omega)$, which is $L^2(\Omega)\times L^2(\Omega)$ with the inner product \eqref{eq:weighted-ip-hardening}. The moving set \eqref{eq:aehp-sp-moving-set} of the sweeping process is \begin{equation*} \begin{aligned} \mathcal{C}(t) =&\left\{\vphantom{ L^2_{{\bf C}^{-1}}}\right.(\bm{y}, \bm{\xi}) \in L^2_{{\bf C}^{-1}}(\Omega)\times L^2(\Omega): \text{ there exists }c\in \mathbb{R} \text{ such that for a.a. }x\in \Omega \text{ we have}\\ &c\equiv y(x)\quad\text{and}\quad \sigma_{\rm offset}^-(x) \leqslant c+\widetilde{\sigma}(t,x) - \bm{H}(x)\,\xi(x)\leqslant \sigma_{\rm offset}^+(x)\left.\vphantom{ L^2_{{\bf C}^{-1}}} \right\}, \end{aligned} \end{equation*} where $\widetilde{\sigma}$ is given by \eqref{eq:w0-const-def},\eqref{eq:ex21-linear-solution} again. \subsection{Constraint qualifications for the continuum model with elasticity-hardening plasticity } Let us now verify that a constraint qualification is satisfied for the continuum model with hardening. Specifically, we claim the following. \begin{Proposition} For any $\bm{\widetilde{\sigma}} \in L^2_{{\bf C}^{-1}}(\Omega)$ denote \begin{equation} \mathcal{C}_1=\widehat{\Sigma} -\begin{pmatrix}\bm{\widetilde{\sigma}}\\ 0\end{pmatrix}, \qquad \mathcal{C}_2=\mathcal{V}\times L^2(\Omega), \label{eq:ih-check-cq} \end{equation} where $\widehat{\Sigma}$ and $\mathcal{V}$ are given by \eqref{eq:aehp-general-set-sigma} and \eqref{eq:ex21-space-V} respectively. Under the assumptions of Section \ref{ssect:aehp-rod} (including either Section \ref{sssect:aehp-rod-isotropic} or \ref{sssect:aehp-rod-kinematic}) Proposition \ref{prop:constraint-qualifications} applies to $\mathcal{C}_1$ and $\mathcal{C}_2$, so that constraint qualification \ref{cq-Slater-I} fails, but constraint qualifications \ref{cq-Slater-II}--\ref{cq-GCQ} are satisfied. \end{Proposition} \noindent {\bf Proof.} Constraint qualification of Proposition \ref{prop:constraint-qualifications} \ref{cq-Slater-I} fails, since both $\mathcal{C}_1$ and $\mathcal{C}_2$ in \eqref{eq:ih-check-cq} have empty interiors due to the same argument as in the case of perfect plasticity, see Observation \ref{obs:ex21-Slater-cq-fails} and Fig. \ref{fig:misc-1} {\it a)}. To show that Proposition \ref{prop:constraint-qualifications} \ref{cq-Slater-II} is true we demonstrate that \[ \mathcal{C}_1-\mathcal{C}_2=\widehat{\mathcal{H}}. \] Take arbitrary \[ \begin{pmatrix}\bm{\sigma'}\\ \bm{\xi'}\end{pmatrix}\in\widehat{\mathcal{H}} = L^2_{{\bf C}^{-1}}(\Omega)\times L^2(\Omega). \] We need to show that there are \begin{equation*} \begin{pmatrix}\bm{\sigma}\\ \bm{\xi}\end{pmatrix}\in \widehat{\Sigma},\qquad c\in \mathbb{R}, \qquad \bm{\zeta}\in L^{2}(\Omega) \end{equation*} such that \begin{equation*} \bm{\sigma'} = \bm{\sigma}- \bm{\widetilde{\sigma}} - c \qquad\text{and}\qquad \bm{\xi'} = \bm{\xi}-\bm{\zeta}. 
\end{equation*} Due to the arbitrariness of $\bm{\sigma'}$ we can absorb $\bm{\widetilde{\sigma}} + c$ into it, so that it is sufficient to find \begin{equation} \bm{\xi},\, \bm{\zeta}\in L^{2}(\Omega) \label{eq:in-proof-cq-hardening-1} \end{equation} such that \begin{equation} \begin{pmatrix}\bm{\sigma}'\\ \bm{\xi}\end{pmatrix}\in \widehat{\Sigma} \qquad\text{and}\qquad \bm{\xi'} = \bm{\xi}-\bm{\zeta}. \label{eq:in-proof-cq-hardening-2} \end{equation} In the case of nonlinearly expanding elastic range (Section \ref{sssect:aehp-rod-isotropic}) choose \begin{equation} \xi(x) = \max\left(\xi^-_x(\sigma'(x)), \xi_x^+(\sigma'(x))\right), \qquad \zeta(x) = \xi(x) - \xi'(x)\qquad \text{for a.a. }x\in \Omega. \label{eq:in-proof-isotropic-hardening-choice} \end{equation} Notice that, since the functions \eqref{eq:eph-bottom-yield-limit-map-inverse}--\eqref{eq:eph-top-yield-limit-map-inverse} are Carath\'{e}odory and satisfy the linear growth condition \eqref{eq:aehp-isotropic-hardening-linear-growth}, the corresponding Nemytskii operator maps $L^2(\Omega)\cong L^2_{{\bf C}^{-1}}(\Omega)$ into $L^2(\Omega)$ \cite[Def. 5.2, Th. 5.1, pp.~62--63]{Precup2002}, so that we can indeed fulfill \eqref{eq:in-proof-cq-hardening-1} with such a choice. It is evident from \eqref{eq:ehp-isortopic-set-sigma-with-inverses} that \eqref{eq:in-proof-cq-hardening-2} is satisfied as well. In the case of linear kinematic hardening (Section \ref{sssect:aehp-rod-kinematic}) choose \[ \xi(x) = \frac{\sigma'(x)-\tau(x)}{\bm{H}(x)}, \qquad \zeta(x) = \xi(x) - \xi'(x)\qquad \text{for a.a. }x\in \Omega, \] with an arbitrary measurable function $\bm{\tau}$ such that \[ \tau(x) \in \left[ \sigma^-_{\rm offset}(x), \sigma^+_{\rm offset}(x)\right]. \] Due to the assumption \eqref{eq:hardening-modulus-lower-estimate} we have \eqref{eq:in-proof-cq-hardening-1}--\eqref{eq:in-proof-cq-hardening-2} satisfied in this case as well. Therefore, in both cases the set $\mathcal{C}_1-\mathcal{C}_2$ is the entire space, and the interior of $\mathcal{C}_1-\mathcal{C}_2$ is also the entire space, which includes the zero element. This fulfills the condition of Proposition \ref{prop:constraint-qualifications} \ref{cq-Slater-II}, from which \ref{cq-Rockafellar} and \ref{cq-GCQ} follow as well. $\blacksquare$ \begin{Remark} The linear growth condition \eqref{eq:aehp-isotropic-hardening-linear-growth} is sharp in the sense of the constraint qualifications of Proposition \ref{prop:constraint-qualifications} \ref{cq-Slater-II}--\ref{cq-GCQ}. Indeed, \eqref{eq:aehp-isotropic-hardening-linear-growth} is not only a sufficient, but also a necessary condition for a Nemytskii operator to map $L^2(\Omega)$ into $L^2(\Omega)$, see \cite[Th. 3.4.4, p.~407]{Gasinski2005}. If the linear growth condition does not hold, the function $\bm{\xi}$ in \eqref{eq:in-proof-isotropic-hardening-choice} will not be in $L^2(\Omega)$ for some $\bm{\sigma'}\in L^2(\Omega)$, and even a translation by a constant function or a multiplication by a constant (which appears in Proposition \ref{prop:constraint-qualifications} \ref{cq-GCQ}) will not change the class of integrability. \end{Remark} \begin{Remark} The linear growth condition \eqref{eq:aehp-isotropic-hardening-linear-growth} implies the estimate \[ \max\left( \|\xi^-_{\cdot}(\sigma(\cdot))\|_{L^2},\|\xi^+_{\cdot}(\sigma(\cdot))\|_{L^2} \right)\leqslant \|\psi\|_{L^2}+ c\|\bm{\sigma}\|_{L^2}\qquad \text{for any } \bm{\sigma}\in L^2(\Omega), \] see again \cite[Th. 5.1, p.~63]{Precup2002}. In \cite[Assumption 8.4, p.
230]{Han2012} one can find another way to state the linear growth condition, which is very similar to \eqref{eq:aehp-isotropic-hardening-linear-growth} due to the convexity of $\widehat{\Sigma}$. \end{Remark} Let us now state the consequence of the applicability of the constraint qualifications to the problem with hardening. \begin{Corollary} Under the assumptions of Section \ref{ssect:aehp-rod} (including either Section \ref{sssect:aehp-rod-isotropic} or \ref{sssect:aehp-rod-kinematic}), and provided with a solution of the sweeping process \eqref{eq:sp-elastoplastic-hardening}, the right-hand side of the inclusion \eqref{eq:inclusion2-elastoplastic-hardening} is nonempty for a.a. $t\in[0,T]$. \label{cor:hardening} \end{Corollary} \noindent Since we have the additivity of the normal cones in the right-hand side of \eqref{eq:sp-elastoplastic-hardening} due to Proposition \ref{prop:constraint-qualifications}, the proof of Corollary \ref{cor:hardening} is analogous to the proof of Theorem \ref{th:aepp-well-posedness-safe-load-strict} \ref{enum:th:aepp-well-posedness-nonempty-rhs}. \section{Conclusions} \label{sect:conclusions} We have formulated rigorous frameworks for linear elasticity, elasticity-perfect plasticity and elasticity-hardening plasticity (Definitions \ref{def:ae}, \ref{def:aepp}, \ref{def:aehp} resp.) in terms of abstract adjoint linear operators (Section \ref{ssect:adjoint-operatorsED}) and converted them to equivalent formulations in terms of differential inclusions (Th. \ref{th:elasticity-solution}, \ref{th:aepp-to-sweeping-process}, \ref{th:aehp-to-sweeping-process} resp.). These differential inclusions have the convenient forms of a sweeping process and an ``open loop'' problem, i.e.\ a problem with a known right-hand side. Curiously, notice that even the equivalent formulation \eqref{eq:abstract_intersection} of the elasticity problem can be seen as a sweeping process with a singleton moving set. These frameworks are ready to be used for discrete models, as well as for continuous models. Recall that the ``dual'' approach, which we use, gives additional insight into the behavior of the constraint set $\mathcal{C}(t)$. We mean that if the force boundary conditions remain constant, then $\mathcal{C}(t)$ moves only by a translation, see Remark \ref{rem:aepp-sp-in-V} and Corollary \ref{cor:constant-force}. At least in the discrete case, one can think of the displacement boundary conditions as being ``more respectful'' of perfect plasticity, or more compatible with it. In this line of thinking it may be tempting to conjecture that in elasticity-perfect plasticity any ``pathological'' situations, such as a measure-valued strain, may appear only during yielding caused by a prescribed change in a force load. This is not the case, as we have demonstrated by Example 3 in Section \ref{sect:regularity-lost}, where the force load was only used to form a stress profile well within the elastic range at a.a. $x\in \Omega$, and after that a displacement load was applied to drive the system into the plastic regime with no strain solution in $L^2(\Omega)$. By the same mechanism, any extremum of the stress or the yield limit distributions (e.g. a defect in the material) can lead to a measure-valued strain, even if the plastic deformation is due to a change in the displacement load and the motion of $\mathcal{C}(t)$ is just a parallel translation. The comparison of discrete and continuous models within the same framework is our main achievement.
On the one hand, discrete examples are well-solvable for both strain (elongation) and stress in the case of perfect plasticity. On the other hand, a continuous example may not have a strain solution in $L^2(\Omega)$, because the lack of additivity of normal cones prevents us from extracting the evolution of total strain $\bm{\varepsilon}$ from the evolution of stress $\bm{\sigma}$. As one would expect, the classical condition (Slater I constraint qualification) to ensure the additivity of normal cones fails hopelessly for plasticity-type unilateral constraints due to properties of $L^2(\Omega)$. But this failure also includes elastoplasticity with hardening, which is known to be a well-solvable problem. Using later advances in infinite-dimensional optimization, we were able to show that elastoplasticity with hardening possesses additivity of the normal cones, although it requires the use of more sophisticated constraint qualifications and an assumption on linear growth. We can classify elastoplastic models to distinguish those for which Theorems \ref{th:aepp-to-sweeping-process} and \ref{th:aehp-to-sweeping-process} can be used to extract the evolution of strain in a certain space, see Table \ref{tab:constraint-qualifications-plasticity}. This work is a foundation for future developments, such as: \begin{itemize} \item Limit analysis for cyclical loads, previously performed for discrete models \cite{Gudoshnikov2021ESAIM}, could be done for the current abstract framework in general, although there are certain challenges to overcome. Such analysis can include the model of a three-dimensional continuous medium \cite{Gudoshnikov2025}, which fits in the current framework. \item It would be interesting to find examples of a similar kind, which would satisfy just one or two constraint qualifications from Proposition \ref{prop:constraint-qualifications}, cf. Table \ref{tab:constraint-qualifications-plasticity}, or an even weaker constraint qualification than those of Proposition \ref{prop:constraint-qualifications}. Although we admit that the condition \cite[Assumption 8.4, p. 230]{Han2012} is available for the models with hardening, over which Proposition \ref{prop:constraint-qualifications} cannot offer much improvement, it seems that the natural relation between constraint qualifications in infinite-dimensional optimization and elastoplastic models is mostly unexplored. We can only name \cite{Daniele2014}, where the authors apply a constraint qualification to the static elastoplastic torsion problem. \item To keep the volume of the text at least somewhat reasonable, for Theorem \ref{th:aehp-to-sweeping-process} we focused on the aspect of additivity of the normal cones in \eqref{eq:inclusion2-elastoplastic-hardening}. In other words, for the models with hardening we have only proven the fact which corresponds to part \ref{enum:th:aepp-well-posedness-nonempty-rhs} of Theorem \ref{th:aepp-well-posedness-safe-load-strict}. Strictly speaking, to claim the solvability in the models with hardening, we also have to prove facts similar to parts \ref{enum:th:aepp-well-posedness-sp} and \ref{enum:th:aepp-well-posedness-estimate-rhs} of Theorem \ref{th:aepp-well-posedness-safe-load-strict}, i.e. the Lipschitz-continuity of $\widehat{\mathcal{C}}(t)$ with respect to the Hausdorff distance in \eqref{eq:sp-elastoplastic-hardening} and an estimate on a selection from the right-hand side of \eqref{eq:inclusion2-elastoplastic-hardening}. While some results for particular cases are available, e.g.
in \cite{Duvaut1972} and \cite{Han2012}, it would be interesting to provide an answer regarding the Lipschitz-continuity of the moving set with respect to the Hausdorff distance, so that such an answer would be general in the same sense as the additivity of normal cones and constraint qualifications provide a general answer to the question of nonemptiness of the right-hand sides in \eqref{eq:inclusion2-elastoplastic} and \eqref{eq:inclusion2-elastoplastic-hardening}. One might think about some kind of ``differential constraint qualifications''. \end{itemize} \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{3}{*}{\textbf{Constraint qualification}} &\multirow{3}{*}{$\begin{array}{c} \textbf{Discrete}\\\textbf{models}\end{array}$}& \multicolumn{3}{c|}{\textbf{Continuous models in $L^2(\Omega)$}} \\\cline{3-5} && \multirow{2}{*}{ $\begin{array}{c} \textbf{perfect}\\\textbf{plasticity}\end{array}$} & \multicolumn{2}{c|}{\textbf{hardening}}\\\cline{4-5} &&& $\begin{array}{c} \textbf{linear}\\\textbf{growth} \\\textbf{\eqref{eq:aehp-isotropic-hardening-linear-growth}, \eqref{eq:hardening-modulus-lower-estimate}}\\\textbf{fails}\end{array}$ & $\begin{array}{c} \textbf{linear}\\ \textbf{growth} \\\textbf{\eqref{eq:aehp-isotropic-hardening-linear-growth}, \eqref{eq:hardening-modulus-lower-estimate}}\\\textbf{holds}\end{array}$ \\ \hline Slater I, Prop. \ref{prop:constraint-qualifications} \ref{cq-Slater-I} & {\color{ForestGreen}YES} &{\color{red}NO} &{\color{red}NO} & {\color{red}NO} \\ \hline Slater II, Prop. \ref{prop:constraint-qualifications} \ref{cq-Slater-II} & {\color{ForestGreen}YES} &{\color{red}NO} & {\color{red}NO} & {\color{ForestGreen}YES} \\ \hline Rockafellar, Prop. \ref{prop:constraint-qualifications} \ref{cq-Rockafellar} & {\color{ForestGreen}YES} &{\color{red}NO}&{\color{red}NO} & {\color{ForestGreen}YES} \\ \hline Attouch--Brezis, Prop. \ref{prop:constraint-qualifications} \ref{cq-GCQ}& {\color{ForestGreen}YES} &{\color{red}NO}&{\color{red}NO} & {\color{ForestGreen}YES} \\ \hline \end{tabular} \end{center} \caption{Constraint qualifications for different types of elastoplastic models.} \label{tab:constraint-qualifications-plasticity} \end{table} \appendix \section{Some specific preliminaries} \label{sect:appendix} \subsection{Sobolev functions with values in Banach spaces} \label{ssect:prelim-bochner-sobolev} Let $I=[0,T]$, $T>0$, be the time domain of an evolution problem. For Bochner-Sobolev spaces $W^{1,p}(I,X)$ of functions with values in a Banach space $X$ we refer to \cite[Section 8.5, pp.~229--236]{Leoni2017}, \cite[Section II.5, pp.~92--110]{Boyer2012} and \cite[Section 2.4, pp.~37--49]{Migorsky2013}. We would like to remind the reader that the characterization of a function as Bochner-Sobolev is equivalent to classical differentiability a.e. on $I$. Specifically, we mean the following facts. \begin{Proposition}{\rm (\cite[Prop. 2.50, p.~47]{Migorsky2013}, cf. \cite[Th. 8.55, p.~230]{Leoni2017})} Let $X$ be a reflexive Banach space and $u:I\to X$ be an absolutely continuous function. Then $u$ is differentiable a.e. on $I$ in the classical sense with $u'\in L^{1}(I, X)$ and \[ u(t)=u(0) +\int\limits_{0}^{t} u'(s) ds \qquad \text{for all }t\in I. \] \end{Proposition} \begin{Proposition} \label{prop:prelim-classical-derivatives-ae} {\rm (\cite[Prop. 2.51, p.~47]{Migorsky2013}, cf. \cite[Th. 8.57, p.~232]{Leoni2017})} Let $1\leqslant p\leqslant \infty$, $k\in \mathbb{N}$ and $u\in L^{p}(I, X)$.
Then the following properties are equivalent: \begin{enumerate}[{\it i)}] \item $u\in W^{k,p}(I, X)$, \item there exists an absolutely continuous function $\widetilde{u}$, such that $u=\widetilde{u}$ a.e. on $I$ and for all $m\in \overline{0, k}$ the classical derivatives $\widetilde{u}^{(m)}$ of $\widetilde{u}$ exist a.e. on $I$ with $\widetilde{u}^{(m)} \in L^{p}(I, X)$. \end{enumerate} \end{Proposition} \subsection{Moore-Penrose pseudoinverse matrix} In this text we use the (real) Moore-Penrose pseudoinverse matrix, as it is an important practical tool for solving linear algebraic equations, and it is also available in many numerical packages. Here we remind the reader of the definition of the Moore-Penrose pseudoinverse and some of its basic properties. \begin{Proposition}{\bf \cite[p. 9]{Bapat2010}} Let $A$ be an $m \times n$-matrix. Then there exists a unique $n \times m$-matrix $A^+$, called {\it Moore-Penrose pseudoinverse of $A$}, such that all of the following hold: \begin{align} AA^+A&=A, \label{eq:MP1}\\ A^+AA^+&=A^+, \label{eq:MP2}\\ (AA^+)^\top&=AA^+,\label{eq:MP3}\\ (A^+A)^\top&=A^+A.\label{eq:MP4} \end{align} \label{prop:MP1} \end{Proposition} \begin{Proposition}{\bf \cite[Def. 1.1.2]{Campbell2008}} A matrix $A^+$ is a Moore-Penrose pseudoinverse of $A$ if and only if $AA^+$ and $A^+A$ are orthogonal projection matrices onto, respectively, ${\rm Im}\,A$ and ${\rm Im}\,A^\top$. \label{prop:MP2} \end{Proposition} \noindent The following proposition can be verified by direct substitution into \eqref{eq:MP1}-\eqref{eq:MP4}: \begin{Proposition} If the columns of $A$ are linearly independent, then $A^+= (A^\top A)^{-1}A^\top$ and $A^+$ is a {\it left inverse} of $A$, i.e. \begin{equation*} A^+A=I_{n\times n}. \end{equation*} Similarly, if the rows of $A$ are linearly independent, then $A^+= A^\top(AA^\top)^{-1}$ and $A^+$ is a {\it right inverse} of $A$, i.e. \begin{equation} AA^+=I_{m\times m}. \label{eq:mp-right-inverse} \end{equation} \label{prop:MP3} \end{Proposition} \begin{Corollary} Let $A$ be an $m\times n$ matrix with linearly independent rows and $b\in \mathbb{R}^m$. Then for $x\in \mathbb{R}^n$ \begin{equation} Ax = b \qquad \Longleftrightarrow \qquad x\in {\rm Ker}\, A + A^+ b, \label{eq:mp-solving-linear-systems} \end{equation} where \[ A^+=A^\top(AA^\top)^{-1}. \] \label{cor:mp-solving-linear-systems} \end{Corollary} \noindent {\bf Proof.} Assume that the left part of \eqref{eq:mp-solving-linear-systems} holds. Apply $A^+$ to it to get \begin{equation} A^+ A x=A^+b. \label{eq:mp-in-proof1} \end{equation} From Proposition \ref{prop:MP2} we know that $A^+A$ is the matrix of orthogonal projection onto ${\rm Im}\, A^\top$, therefore there is $y\in\left( {\rm Im}\, A^\top\right)^\perp = {\rm Ker}\, A$ such that \[ x=A^+Ax + y. \] Combine this with \eqref{eq:mp-in-proof1} to obtain the right part of \eqref{eq:mp-solving-linear-systems}, as claimed. Vice versa, assume that the right part of \eqref{eq:mp-solving-linear-systems} holds. Apply $A$ to it to get \[ Ax=AA^+b. \] Use \eqref{eq:mp-right-inverse} to derive the left part of \eqref{eq:mp-solving-linear-systems}. $\blacksquare$ \section*{Acknowledgements} The author thanks Pavel Krej{\v{c}}{\'{i}}, Giselle Antunes Monteiro and Michal K{\v{r}}{\'{i}}{\v{z}}ek from the Institute of Mathematics CAS for helpful scientific discussions. The author thanks A. Phung for his copy of the book \cite{Aubin2009}, which came in useful for the current work. The author honors the memory of Prof.
Zalman Balanov, from whom the author first learned about the Nemytskii operator. \subsection*{Funding} \noindent This research is supported by the GA{\v{C}}R project GA24-10586S and the Czech Academy of Sciences (RVO: 67985840). \printbibliography \end{document}
2412.13056v1
http://arxiv.org/abs/2412.13056v1
Profiniteness of higher rank volume
\documentclass[12pt, a4paper]{amsart} \usepackage[english]{babel} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amscd} \usepackage{hyperref} \usepackage{verbatim} \usepackage{color} \usepackage{tikz-cd}\usetikzlibrary{babel} \usepackage{enumerate} \usepackage{mathtools} \usepackage{cite} \usepackage[initials,msc-links,backrefs]{amsrefs} \usepackage[all]{xy} \title[Profiniteness of Volume] {Profiniteness of higher rank volume} \author[H. Kammeyer]{Holger Kammeyer} \author[S. Kionke]{Steffen Kionke} \author[R. K\"ohl]{Ralf K\"ohl} \address{Heinrich Heine University D{\"u}sseldorf, Faculty of Mathematics and Natural Sciences, Mathematical Institute, Germany} \email{[email protected]} \address{FernUniversit\"at in Hagen, Faculty of Mathematics and Computer Science, Germany} \email{[email protected]} \address{Kiel University, Faculty of Mathematics and Natural Sciences, Department of Mathematics, Germany} \email{[email protected]} \subjclass[2010]{22E40, 20E18} \keywords{profinite rigidity, volume} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{conjectureU}{Conjecture (U)} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \numberwithin{equation}{section} \numberwithin{theorem}{section} \DeclareMathOperator{\id}{Id} \DeclareMathOperator{\inn}{int} \DeclareMathOperator{\Lie}{Lie} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Out}{Out} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\rank}{rk} \DeclareMathOperator{\pr}{pr} \providecommand{\normal}{\trianglelefteq} \providecommand{\calO}{\mathcal{O}} \providecommand{\fg}{\mathfrak{g}} \providecommand{\fh}{\mathfrak{h}} \providecommand{\fp}{\mathfrak{p}} \providecommand{\fk}{\mathfrak{k}} \providecommand{\fa}{\mathfrak{a}} \providecommand{\fso}{\mathfrak{so}} \providecommand{\up}[1]{\,^{#1}} \providecommand{\bbN}{\mathbb{N}} \providecommand{\bbR}{\mathbb{R}} \providecommand{\bbQ}{\mathbb{Q}} \providecommand{\bbZ}{\mathbb{Z}} \providecommand{\bbF}{\mathbb{F}} \providecommand{\bbA}{\mathbb{A}} \providecommand{\bbC}{\mathbb{C}} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Spin}{Spin} \DeclareMathOperator{\Spn}{\alg{\Spin}} \DeclareMathOperator{\vol}{vol} \providecommand{\bHom}{\underline{\Hom}_{\text{gr}}} \renewcommand{\arraystretch}{1.5} \renewcommand{\phi}{\varphi} \providecommand{\ignore}[1]{} \providecommand{\alg}[1]{\mathbf{#1}} \providecommand{\lie}[1]{\textup{#1}} \providecommand{\N}{\mathbb{N}} \providecommand{\R}{\mathbb{R}} \providecommand{\Q}{\mathbb{Q}} \providecommand{\Z}{\mathbb{Z}} \providecommand{\F}{\mathbb{F}} \providecommand{\A}{\mathbb{A}} \providecommand{\C}{\mathbb{C}} \newcommand{\hooklongrightarrow}{\lhook\joinrel\longrightarrow} \newcommand{\hooklongleftarrow}{\longleftarrow\joinrel\rhook} \newcommand*{\arXiv}[1]{ \href{http://www.arxiv.org/abs/#1}{arXiv:\textbf{#1}}} \newcounter{commentcounter} \usepackage{ifthen,srcltx} \newcommand{\showcomments}{yes} \newsavebox{\commentbox} \newenvironment{comnz}{\ifthenelse{\equal{\showcomments}{yes}}{\footnotemark \begin{lrbox}{\commentbox} 
\begin{minipage}[t]{1.25in}\raggedright\sffamily\tiny \footnotemark[\arabic{footnote}]} {\begin{lrbox}{\commentbox}}} {\ifthenelse{\equal{\showcomments}{yes}} {\end{minipage}\end{lrbox}\marginpar{\usebox{\commentbox}}} {\end{lrbox}}} \newcommand{\commentho}[1]{\stepcounter{commentcounter} \begin{comnz} \textbf{(Holger)}: \textcolor{blue}{#1}\end{comnz}} \newcommand{\commentst}[1]{\stepcounter{commentcounter} \begin{comnz} \textbf{(Steffen)}: \textcolor{magenta}{#1}\end{comnz}} \newcommand{\commentra}[1]{\stepcounter{commentcounter} \begin{comnz} \textbf{(Ralf)}: \textcolor{green}{#1}\end{comnz}} \begin{document} \selectlanguage{english} \begin{abstract} We show that the covolume of an irreducible lattice in a higher rank semisimple Lie group with the congruence subgroup property is determined by the profinite completion. Without relying on CSP, we additionally show that volume is a profinite invariant of octonionic hyperbolic congruence manifolds. \end{abstract} \maketitle \section{Introduction} A group \(\Gamma\) is called \emph{residually finite} if every \(1 \neq g \in \Gamma\) maps to a nontrivial element in some finite quotient of \(\Gamma\). It then becomes a natural question to what extent \(\Gamma\), or at least some property of \(\Gamma\), is determined by all finite quotient groups of \(\Gamma\); or equivalently, by the profinite completion \(\widehat{\Gamma}\). A well-known problem of this sort has been around for quite some time and was in particular advertised as the final problem in A.\,Reid's 2018 ICM address~\cite{Reid:ICM}*{Question~7.4}. \begin{question} \label{question:volume-3-manifolds} Let \(M\) and \(N\) be finite volume hyperbolic 3-manifolds. Suppose that \(\widehat{\pi_1 M} \cong \widehat{\pi_1 N}\). Does \(\operatorname{vol} M = \operatorname{vol} N\)? \end{question} This question is in fact the case \(G = \operatorname{SL}_2 (\C)\) of the more general question whether profinitely isomorphic irreducible lattices \(\Gamma, \Delta \le G\) in a semisimple Lie group \(G\) have fundamental domains of equal Haar measure. We answer the general question affirmatively if \(G\) has \emph{higher rank} (at least two) and possesses the \emph{congruence subgroup property}. \begin{theorem} \label{thm:same-lie-group} Let \(G\) be a connected semisimple Lie group with higher rank and finite center and without compact factors. Fix a Haar measure \(\mu\) on \(G\) and suppose \(\Gamma, \Delta \le G\) are irreducible lattices with CSP* such that \(\widehat{\Gamma} \cong \widehat{\Delta}\). Then \(\mu(G/\Gamma) = \mu(G/\Delta)\). \end{theorem} Conjecturally, the assumption of CSP* is always satisfied. Nevertheless, we highlight the following unconditional conclusion for lattices with non-compact quotient. \begin{theorem} \label{thm:non-uniform-same-lie-group} Let \(G\) be a connected semisimple Lie group with higher rank and finite center and without compact factors. Fix a Haar measure \(\mu\) on \(G\) and let \(\Gamma, \Delta \le G\) be irreducible non-uniform lattices such that \(\widehat{\Gamma} \cong \widehat{\Delta}\). Then \(\mu(G/\Gamma) = \mu(G/\Delta)\). \end{theorem} We will explain the precise meaning of CSP* in Section~\ref{sec:csp}.
At this point, let us only inform the experts that essentially, it shall refer to a finite \emph{congruence kernel} with the two additional requirements that ``Conjecture U'' should hold true if the algebraic group in which $\Gamma$ is arithmetic is a certain outer form of type $A_n$ and moreover that the congruence kernel has the same order as the \emph{metaplectic kernel}. The well-known \emph{Margulis--Platonov conjecture} asserts that the latter condition should actually be automatic and this is in fact known in the majority of cases. Even better, another well-known conjecture due to Serre says that CSP should hold under our assumptions and the status of this conjecture is likewise advanced. So in many cases, requiring CSP* is not needed and conjecturally, it is redundant altogether. A notable case in which CSP* is known occurs if the algebraic group is isotropic. By the Borel--Harish-Chandra Theorem, this translates back to Theorem~\ref{thm:non-uniform-same-lie-group}. \medskip While our methods break down with regard to the original Question~\ref{question:volume-3-manifolds}, we do offer a result for another type of rank one locally symmetric spaces, for which the congruence subgroup property is still unknown. Recall that there exists an exceptional rank one symmetric space called the \emph{octonionic hyperbolic plane} \(\mathbb{OH}^2\). We will refer to any 16-dimensional connected Riemannian manifold whose universal covering is isometric to \(\mathbb{OH}^2\) as an \emph{octonionic hyperbolic manifold}. Note that the isometry group of \(\mathbb{OH}^2\) is the exceptional real Lie group \(F_{4(-20)}\) so that the fundamental group of any octonionic hyperbolic manifold embeds as a subgroup of \(F_{4(-20)}\) by the deck transformation action. \begin{theorem} \label{thm:octonionic} Let \(M\) and \(N\) be finite volume octonionic hyperbolic manifolds with fundamental groups \(\Gamma = \pi_1 M\) and \(\Delta = \pi_1 N\). Assume that \(\Gamma\) and \(\Delta\) are arithmetic congruence lattices in \(F_{4(-20)}\) and \(\widehat{\Gamma} \cong \widehat{\Delta}\). Then \(\vol M = \vol N\). \end{theorem} By the work of Corlette~\cite{Corlette:archimedean-superrigidity} and Gromov--Schoen~\cite{GromovSchoen}, arithmeticity for lattices in $F_{4(-20)}$ is known. Contrary to Serre's original conjecture mentioned above, several results have meanwhile pointed in the direction that CSP might also hold in type $F_{4(-20)}$. So if that is true, our result gives profiniteness of octonionic hyperbolic volume in general. Moreover, in that case one can construct non-isomorphic, profinitely isomorphic lattices as in the theorem using non-isomorphic number fields with isomorphic adele rings. This contrasts with Question~\ref{question:volume-3-manifolds}: Conjecturally, all Kleinian groups are profinitely rigid. \medskip Finally, it has long been known that even lattices \(\Gamma \le G\) and \(\Delta \le H\) in different Lie groups can have isomorphic profinite completions (e.g.~\cite{Aka:arithmetic}). One may still ask if they should have equal covolume. This question, however, depends on the normalization of the Haar measures on \(G\) and \(H\). Two more or less canonical such normalizations come to mind. One can use the \emph{Killing form} to obtain such a normalization as we will explain in Section~\ref{sec:measures}. 
Another option is to use the \emph{Euler-Poincar{\'e}-measure} \cite{Serre:cohomologie}*{Section~1.6}, which is the Haar measure \(\mu\) on \(G\) such that \(\mu(G/\Gamma) = \chi (\Gamma)\) is the Euler characteristic for every torsion-free cocompact lattice \(\Gamma \le G\), provided \(\chi(\Gamma) \neq 0\). However, neither normalization turns volume into a profinite invariant for lattices in higher rank Lie groups. While it will be apparent from this investigation that the Killing form normalization does not work, it was shown before in \cite{Kammeyer-Kionke-Raimbault-Sauer}*{Theorem~1.2} that there exist profinitely isomorphic spinor groups with distinct Euler characteristics. So the only thing one can still hope for is that on each higher rank Lie group, one can fix one particular normalization of the Haar measure such that volume becomes a profinite invariant among all lattices in all such Lie groups. This is indeed what will be accomplished in this article, even in the more general setting where a ``semisimple Lie group'' is understood as a product of simple Lie groups over various local fields of characteristic zero. To formulate these most general results precisely, we shall now delve a bit deeper into the theory. \begin{definition} \label{def:simply-connected} Let $A$ be a finite set. For each $\alpha \in A$ let $k_\alpha$ be a local field with $\mathrm{char}(k_\alpha)=0$ and let $\mathbf{G}_\alpha$ be a simply-connected absolutely almost simple linear algebraic $k_\alpha$-group. We call a locally compact topological group $G$ of the form \[ G = \prod_{\alpha\in A} \mathbf{G}_\alpha(k_\alpha) \] an \emph{algebraically simply-connected semisimple Lie group}. The \emph{rank} of $G$ is defined as \[ \rank G = \sum_{\alpha \in A} \rank_{k_\alpha} \mathbf{G}_\alpha. \] We say that $G$ \emph{has no compact factors} if each $\mathbf{G}_\alpha$ is $k_\alpha$-isotropic or, equivalently, if none of the groups $\mathbf{G}_\alpha(k_\alpha)$ is compact. \end{definition} The point of this definition is that an irreducible lattice $\Gamma \le G$ in an algebraically simply-connected semisimple Lie group with $\rank G \ge 2$ and without compact factors is \emph{$S$-arithmetic} by \emph{Margulis arithmeticity} \cite{Margulis:discrete-subgroups}*{Theorem~IX.1.11, p.\,298}. This means that there exists a number field $k$, a connected absolutely almost simple $k$-group $\mathbf{H}$, and a finite subset $S \subset V(k)$ of the set of all places of $k$ such that $S$ contains all infinite places and, finally, there exists a continuous homomorphism of topological groups \[ \varphi \colon \prod_{v \in S^{\text{is}}} \mathbf{H}({k_v}) \longrightarrow G \] such that $\varphi(\mathbf{H}(\mathcal{O}_{k, S}))$ is commensurable with $\Gamma$. Here, $S^{\text{is}}$ denotes the subset of $S$ obtained by removing all infinite places at which $\mathbf{H}$ is anisotropic, and $\mathcal{O}_{k, S}$ denotes the ring of $S$-integers in $k$, meaning the subring of $k$ consisting of all $x \in k$ with $|x|_v \le 1$ for all finite places $v \notin S$. The group $\mathbf{H}(\mathcal{O}_{k, S})$ is defined by picking an embedding $\mathbf{H} \subset \mathbf{GL_n}$ and is thus well-defined up to commensurability. By~\cite{Margulis:discrete-subgroups}*{Remark~IX.1.6.(i), p.\,294}, we may and will moreover assume that $\mathbf{H}$ is simply-connected.
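To fix ideas, we recall a standard example, included purely for illustration and not used later: for $k=\Q$, $S=\{\infty, p\}$ and $\mathbf{H}=\mathbf{SL_n}$ with $n\geq 2$ one has $\mathcal{O}_{k,S}=\Z[1/p]$, and the group $\operatorname{SL}_n(\Z[1/p])$, embedded diagonally, is an irreducible non-uniform $S$-arithmetic lattice in $G=\operatorname{SL}_n(\R)\times \operatorname{SL}_n(\Q_p)$; here $\mathbf{H}$ is isotropic at both places of $S$, so $S^{\text{is}}=S$.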
Moreover, our assumption that the groups $\mathbf{G}_\alpha$ are all absolutely simple and simply-connected guarantees that $\varphi$ is in fact an isomorphism of topological groups given by a product of isomorphisms of the factors~\cite{Margulis:discrete-subgroups}*{Remarks~(i) and~(iii) on p.\,291}. In fact, we have a bijection $\alpha \colon S^{\text{is}} \rightarrow A$ such that $\varphi = \prod_{v \in S^{\text{is}}} \varphi_v$ for isomorphisms $\varphi_v \colon \mathbf{H}_v \rightarrow \mathbf{G}_{\alpha(v)}$ defined over an isomorphism $k_v \cong k_{\alpha(v)}$. \medskip Finally, it follows from superrigidity~\cite{Margulis:discrete-subgroups}*{Theorem~C, Chapter~VIII, p.\,259} that $k$, $\mathbf{H}$, and $S$ are unique in the strongest sense: For $k'$, $\mathbf{H}'$, and $S'$ with the above properties, there exists a field isomorphism $\sigma \colon k \rightarrow k'$ inducing a bijection from $S$ to $S'$ and there exists a $k$-isomorphism $\mathbf{H} \cong {}^\sigma \mathbf{H}'$. In particular, it is meaningful to define that $\Gamma$ has the \emph{congruence subgroup property (CSP)} if the uniquely defined $k$-group $\mathbf{H}$ has finite $S$-congruence kernel $C(\mathbf{H}, S)$. We remark that we could have equivalently and intrinsically defined that $\Gamma$ has CSP by requiring that it have polynomial representation growth~\cite{Lubotzky-Martin:rep-growth}, or, still equivalently, polynomial index growth~\cite{Lubotzky-Segal:subgroup-growth}*{Theorem~12.10, p.\,223}. Similarly, we define that \(\Gamma\) has CSP* if \(\mathbf{H}\) has CSP* with respect to \(S\) according to Definition~\ref{definition:cspstar}. \bigskip Let $k_\alpha$ be a local field of characteristic $0$ and let $G$ be a semisimple Lie group over $k_\alpha$. The Killing form on the Lie algebra gives rise to a canonical Haar measure $\mu^{\dagger}_G$ on $G$ (see Section~\ref{sec:measures}); we refer to this measure as the Killing measure of $G$. \begin{definition}\label{def:renormalized-measure} Let $G$ be an algebraically simply-connected semisimple Lie group. We define the \emph{renormalized Killing measure} on $G$ as \[ \mu^\diamond_G = \delta \prod_{\alpha \in A} c_\alpha^{-1} \mu^\dagger_{\mathbf{G}_\alpha(k_\alpha)} \] where $c_\alpha$ is the Killing volume of the compact real form of $\mathbf{G}_\alpha$ if $k_\alpha$ is archimedean and otherwise $c_\alpha = 1$. Here $\delta = 2$ if all $k_\alpha$ are archimedean and there is some $\alpha \in A$ such that $k_\alpha=\bbR$ and $\mathbf{G}_\alpha(k_\alpha)$ is not topologically simply-connected. In all other cases, we define $\delta = 1$. \end{definition} \begin{theorem} \label{thm:main-lattices} Let $G$ and $H$ be algebraically simply-connected semisimple Lie groups of rank at least two without compact factors. Suppose $\Gamma \subseteq G$ and $\Delta \subseteq H$ are irreducible lattices with CSP* such that $\widehat{\Gamma} \cong \widehat{\Delta}$. Then the renormalized Killing covolumes of $\Gamma$ and $\Delta$ are equal. \end{theorem} Again, since CSP* is known if the algebraic group is isotropic, the Harish-Chandra Theorem gives the following unconditional result. \begin{theorem} \label{thm:non-uniform} Let $G$ and $H$ be algebraically simply-connected semisimple Lie groups of rank at least two without compact factors. Suppose $\Gamma \le G$ and $\Delta \le H$ are irreducible non-uniform lattices with $\widehat{\Gamma} \cong \widehat{\Delta}$. Then the renormalized Killing covolumes of $\Gamma$ and $\Delta$ are equal.
\end{theorem} The outline of the article is as follows. In Section~\ref{sec:csp} we recall aspects of the congruence subgroup problem needed to formulate our results and we provide the definition of CSP*. In Section~\ref{sec:measures}, we discuss the renormalization of Killing measures. The main results are proven in Section~\ref{sec:main}, which ends with a remark on profinite almost rigidity and how our results relate to the work of M.\,Aka~\cite{Aka:arithmetic}. The article concludes with Appendix~\ref{appendix:s-adelic-superrigidity} on the \(S\)-arithmetic extension of the adelic superrigidity theorem. \medskip The authors are grateful to Deutsche Forschungsgemeinschaft (DFG) for financial support in the Priority Program ``Geometry at Infinity'' (DFG 441848266, 441425994). H.\,K. and S.\,K. acknowledge additional funding from the DFG Research Training Group ``Algebro-Geometric Methods in Algebra, Arithmetic, and Topology'', DFG 284078965. R.\,K. acknowledges additional funding via the DFG Individual Research Grant KO 4323/15-1. The authors thank Andrei Rapinchuk for valuable discussions concerning CSP. \section{Around the congruence subgroup property} \label{sec:csp} To fix the notation and to define CSP* concisely, we review some aspects of the congruence subgroup property. Let $k$ be a number field and let $S \subset V(k)$ be a finite subset that contains all infinite places. We still denote the ring of $S$-integers in $k$ by $\mathcal{O}_{k,S}$. Let $\mathbf{G}$ be a simply-connected absolutely almost simple linear algebraic $k$-group. Picking an embedding $\mathbf{G} \subset \mathbf{GL_n}$, the $\mathcal{O}_{k,S}$-points $\mathbf{G}(\mathcal{O}_{k,S})$ are defined. A subgroup $\Gamma \le \mathbf{G}(\mathcal{O}_{k,S})$ is called a \emph{congruence subgroup} if for some nonzero ideal $\mathfrak{a} \trianglelefteq \mathcal{O}_{k,S}$, the group $\Gamma$ contains the kernel of the reduction homomorphism $\mathbf{G}(\mathcal{O}_{k,S}) \rightarrow \mathbf{G}(\mathcal{O}_{k,S}/\mathfrak{a})$. Taking the congruence subgroups as a unit neighborhood base defines the \emph{congruence topology} on $\mathbf{G}(k)$. It is a priori coarser than the \emph{arithmetic topology} on $\mathbf{G}(k)$, which is obtained by taking all finite index subgroups of $\mathbf{G}(\mathcal{O}_{k,S})$ as a unit neighborhood base. Both topologies are independent of the chosen embedding $\mathbf{G} \subset \mathbf{GL_n}$. With respect to the canonical uniform structure on $\mathbf{G}(k)$, the arithmetic topology defines the completion $\widehat{\mathbf{G}(k)}$ and the congruence topology defines the completion $\overline{\mathbf{G}(k)}$. We have a canonical surjective homomorphism $\widehat{\mathbf{G}(k)} \rightarrow \overline{\mathbf{G}(k)}$. The kernel of this homomorphism is denoted by $C(\mathbf{G},S)$ and is called the \emph{congruence kernel} of $\mathbf{G}$ with respect to $S$. We say that $\mathbf{G}$ \emph{has CSP with respect to $S$} if the profinite group $C(\mathbf{G}, S)$ is actually finite. The congruence kernel is closely related to the \emph{metaplectic kernel} $M(\mathbf{G}, S)$ of $\mathbf{G}$, where for any subset $V \subset V(k)$ we define \[ M(\mathbf{G}, V) = \ker [H_m^2(\mathbf{G}(\mathbb{A}_{k, V}), I) \longrightarrow H^2(\mathbf{G}(k), I)].
\] Here $I = \R/\Z$ denotes the one-dimensional real torus, $H_m^2$ refers to cohomology defined by measurable cocycles, and $\mathbb{A}_{k,V}$ is the ring of \emph{$V$-adeles} consisting of all elements in the product $\prod_{v \notin V} k_v$ almost all of whose coordinates lie in the ring of integers $\mathcal{O}_v \subset k_v$. We will later write $\mathbb{A}_k = \mathbb{A}_{k, \emptyset}$ and $\mathbb{A}_k^f = \mathbb{A}_{k, V_\infty}$ where $V_\infty$ is the set of infinite places of $k$. In fact, as explained in~\cite[Theorem 2]{Prasad-Rapinchuk:survey}, it is known that if $\mathbf{G}$ satisfies $\sum_{v \in S} \rank_{k_v} \mathbf{G} \ge 2$ and $\rank_{k_v} \mathbf{G} \ge 1$ for all finite $v \in S$ and if $\mathbf{G}$ has CSP with respect to $S$, then \[ C(\mathbf{G},S) \text{ is isomorphic to the Pontrjagin dual of } M(\mathbf{G}, S) \] provided the following is true. \begin{conjecture}[Margulis--Platonov] Let $T$ be the (possibly empty) set of finite places of $k$ at which $\mathbf{G}$ is $k_v$-anisotropic. If $G_T = \prod_{v \in T} \mathbf{G}(k_v)$ denotes the corresponding profinite group, then every non-central normal subgroup $N \trianglelefteq \mathbf{G}(k)$ is given by intersecting the diagonally embedded $\mathbf{G}(k) \le G_T$ with an open normal subgroup $W \trianglelefteq G_T$. \end{conjecture} So the conjecture states in particular that $\mathbf{G}(k)$ does not admit non-central normal subgroups if $\mathbf{G}$ is isotropic at all finite places. Note that this is automatic if $\mathbf{G}$ does not have type $A_n$~\cite{Kneser:galois}*{Satz~3}. The metaplectic kernel has been calculated in general by Prasad and Rapinchuk~\cite{Prasad-Rapinchuk:metaplectic} with the caveat that ``Conjecture (U)'' has to be assumed if $\mathbf{G}$ is \emph{special}, meaning $\mathbf{G}$ is a type ${}^2 A_n$-group given by $\mathbf{G} = \mathbf{SU}(h)$ where $h$ is a non-degenerate hermitian form over a non-commutative division algebra with involution of the second kind. To state the conjecture, for any finite set $V \subset V(k)$, we define \[ M_V(\mathbf{G}) = \ker\left[H_m^2(\textstyle\prod_{v \in V} \mathbf{G}(k_v), I) \longrightarrow H^2(\mathbf{G}(k), I)\right]. \] Note that for $V \subset W$, we get a canonical inclusion $M_V(\mathbf{G}) \subseteq M_W(\mathbf{G})$. \begin{conjectureU}[see \cite{Prasad-Rapinchuk:metaplectic}] Suppose $\mathbf{G}$ is special and $V \subset V(k)$ is finite. If $V_0 \subset V$ is the subset of finite places, then $M_V(\mathbf{G}) = M_{V_0}(\mathbf{G})$. \end{conjectureU} Now we can state the calculation of metaplectic kernels by Prasad--Rapinchuk. For the reader's convenience, we slightly reformulate the result needed for our purpose and we comment on the proof. This is well-known, but we could not locate this exact statement in the literature. The group of roots of unity in $k$ is denoted by $\mu(k) \le k^*$. \begin{theorem}\label{thm:metaplectic} Assume $\mathbf{G}$ is isotropic at every finite place in $S$. In case $\mathbf{G}$ is special, assume moreover Conjecture~(U) holds true. If $S$ consists of the infinite places only and $\mathbf{G}(k_v)$ is topologically simply-connected at every real place, then the metaplectic kernel $M(\mathbf{G}, S)$ is isomorphic to $\mu(k)$. In all other cases, $M(\mathbf{G}, S)$ is trivial. \end{theorem} \begin{proof} The main theorem of~\cite{Prasad-Rapinchuk:metaplectic} by Prasad--Rapinchuk states explicitly that in the remaining cases $M(\mathbf{G}, S)$ is trivial.
If on the other hand, $S$ is the set of infinite places and $\mathbf{G}$ is topologically simply-connected at all real places, then the real Lie group \[ G = \prod_{v \in S} \mathbf{G}(k_v) \] is simply-connected because at complex places the notions of algebraic and topological simply-connectedness coincide. Therefore, we have $H^2_m(G,I) \cong H^2_m(G, \R)$ as proven by Moore in~\cite{Moore:extensions}*{Theorem~A}. The latter can be computed by the Lie algebra cohomology $H^2_m(G, \R) \cong H^2(\mathfrak{g}, \mathfrak{k}; \R)$ according to a result of Wigner \cite{Wigner:algebraic-cohomology}*{Corollary to Theorem~3}, where $\mathfrak{k}$ is the Lie algebra of a maximal compact subgroup $K \le G$. By the classical work of Chevalley, Eilenberg, and Cartan, we have $H^2(\mathfrak{g}, \mathfrak{k}; \R) \cong H^2(G_u/K;\R)$, where $G_u$ is the compact form of the complexified Lie group $G_\C \cong \prod_{v \mid \infty} \operatorname{Res}_{k_v/\R} \mathbf{G}_v(\C)$. Since $G_u$ is a deformation retract of $G_\C$, while $K$ is a deformation retract of $G$, both $G_u$ and $K$ are 2-connected, so the exact sequence of homotopy groups of the fibration \[ 1 \longrightarrow K \longrightarrow G_u \longrightarrow G_u/K \longrightarrow 1 \] shows that $\pi_2(G_u/K) \cong \pi_1 (G_u/K) \cong 1$. Finally, the Hurewicz theorem in combination with the universal coefficient theorem imply that $H^2(G_u/K, \R) \cong 0 $ so that $H^2_m(G, I) \cong 0$. Moore's work also shows that $H^1_m(G, I) \cong H^1_c(G, I)$ coincides with the first group cohomology with respect to continuous cocycles~\cite{Moore:group-extensions}*{Theorem~3 and Corollary~1}. As $G$ acts trivially on $I$ and since $I$ is abelian, $H^1_c(G,I)$ is given by continuous group homomorphisms $G \rightarrow I$ and any such homomorphism must be trivial because $G$ is semisimple. So also $H^1_m(G,I) \cong 0$. Therefore, the inflation-restriction exact sequence for measurable cohomology associated with the short exact sequence \[ 1 \longrightarrow G \longrightarrow \mathbf{G}(\mathbb{A}_k) \longrightarrow \mathbf{G}(\mathbb{A}_k^f) \longrightarrow 1 \] shows that the inflation map \[ H_m^2(\mathbf{G}(\mathbb{A}^f_k), I) \longrightarrow H_m^2(\mathbf{G}(\mathbb{A}_k), I) \] induced by the projection $\mathbf{G}(\mathbb{A}_k) \rightarrow \mathbf{G}(\mathbb{A}^f_k)$ is an isomorphism. By functoriality, the kernels of the restriction maps to $H^2(\mathbf{G}(k), I)$ are thus also isomorphic, meaning $M(\mathbf{G}, S) \cong M(\mathbf{G}, \emptyset)$. Under the assumption of Conjecture~$(U)$ if $\mathbf{G}$ is special, the Prasad--Rapinchuk theorem now gives $M(\mathbf{G}, \emptyset) \cong \mu(k)$. \end{proof} In order to have these calculations fully available, we give the following definition. \begin{definition} \label{definition:cspstar} We say that $\mathbf{G}$ \emph{has CSP* with respect to $S$} if \begin{enumerate} \item $\mathbf{G}$ has CSP with respect to $S$, \item \label{item:cequalsm} $|C(\mathbf{G}, S)| = |M(\mathbf{G}, S)|$, \item \label{item:conjecture-u} Conjecture~(U) holds true if $\mathbf{G}$ is special. \end{enumerate} Let $\Gamma$ be an irreducible lattice in an algebraically simply-connected semisimple Lie group $G$ of rank at least two and without compact factors and let $\mathbf{G}$ be the unique simple algebraic group in which $\Gamma$ is $S$-arithmetic given by Margulis' arithmeticity theorem. Then we say \emph{$\Gamma$ has CSP*} if $\mathbf{G}$ has CSP* with respect to~$S$. 
\end{definition} Assuming $\sum_{v \in S} \rank_{k_v} \mathbf{G} \ge 2$ and $\rank_{k_v} \mathbf{G} \ge 1$ for all finite places $v \in S$, we just explained that \eqref{item:cequalsm} holds true if the Margulis--Platonov conjecture is true for $\mathbf{G}$. In that case, finiteness of $C(\mathbf{G},S)$ is actually equivalent to centrality \cite{Prasad-Rapinchuk:survey}*{Theorem~2} and a well-known conjecture of Serre \cite{Platonov-Rapinchuk:algebraic-groups}*{(9.45), p.\,556} says that this should be true under our assumption on the ranks. According to \cite{Prasad-Rapinchuk:survey}*{Section~3.5}, the Margulis--Platonov conjecture has been established in all cases except for certain anisotropic outer forms of type $A_n$, anisotropic forms of type ${}^3 D_4$ and ${}^6 D_4$, and certain anisotropic type $E_6$ forms. Similarly, according to \cite{Prasad-Rapinchuk:survey}*{Section~5.1}, centrality of $C(\mathbf{G},S)$ is open for all anisotropic inner forms of type $A_n$, ``most'' anisotropic outer forms of type $A_n$, anisotropic forms of type ${}^3 D_4$ and ${}^6 D_4$, and ``most'' anisotropic groups of type $E_6$. To summarize: \begin{theorem} \label{thm:status-of-csp} Suppose $\sum_{v \in S} \rank_{k_v} \mathbf{G} \ge 2$ and $\rank_{k_v} \mathbf{G} \ge 1$ for all finite places $v \in S$. Then the group $\mathbf{G}$ has CSP* with respect to $S$ if either \begin{enumerate} \item $\mathbf{G}$ is $k$-isotropic, or \item $\mathbf{G}$ is $k$-anisotropic but not of type $A_n$, not a triality form of type $D_4$, and not of exceptional type $E_6$. \end{enumerate} \end{theorem} Recall that according to the Harish-Chandra Theorem, an irreducible lattice $\Gamma$ in an algebraically simply-connected semisimple Lie group $G$ is uniform if and only if the corresponding unique simple algebraic group $\mathbf{G}$ is $k$-anisotropic. This explains how Theorem~\ref{thm:non-uniform} follows from Theorem~\ref{thm:main-lattices}. Finally, we will explain in the proof of Theorem~\ref{thm:same-lie-group} that any irreducible lattice \(\Gamma \le G\) in a higher rank semisimple Lie group with finite center and without compact factors has a finite index subgroup \(\Gamma' \le \Gamma\) which embeds as an irreducible lattice in a uniquely determined algebraically simply-connected semisimple Lie group of rank at least two without compact factors. It is then well-defined to say that \(\Gamma\) has CSP* if \(\Gamma'\) has CSP*. \section{Measures on algebraic groups} \label{sec:measures} In this section we fix measures on algebraically simply-connected semisimple Lie groups. To this end, let $(K,|\cdot|)$ be a local field of characteristic $0$ with absolute value $|\cdot|$. We normalize a Haar measure $\mu_K$ on $K$ in the following way: \begin{itemize} \item $K = \bbR$: fix $\mu_\bbR([0,1]) = 1$ \item $K = \bbC$: fix $\mu_\bbC([0,1]+i[0,1]) = 2$ \item $K$ nonarchimedean with ring of integers $\mathcal{O}$: fix $\mu_K(\mathcal{O}) = 1$. \end{itemize} Let $G$ be a semisimple Lie group over $K$ of dimension $d$ and let $\fg$ denote its Lie algebra. Every non-trivial form $\omega_0 \in \bigwedge^d \fg^*$ extends to a unique left-invariant top-degree differential form $\omega$ on $G$ and induces a left Haar measure $\mu_\omega$ (that depends on the fixed Haar measures $\mu_K$ on $K$). As $\fg$ is semisimple and $K$ is of characteristic $0$, the Killing form $B_\fg$ on $\fg$ is non-degenerate. The Killing form can be used to define a canonical normalization of the Haar measure $\mu^{\dagger}_G$. 
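Before turning to this construction, we record a small illustration of the normalizations fixed above (a side remark only, not used later): for $K=\Q_p$ the choice $\mu_{\Q_p}(\Z_p)=1$ forces $\mu_{\Q_p}(p^n\Z_p)=p^{-n}$ for every integer $n\geq 0$, since $\Z_p$ is the disjoint union of $p^n$ cosets of $p^n\Z_p$, while for $K=\C$ the measure $\mu_\C$ is twice the Lebesgue measure of $\C \cong \R^2$.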
Let $e_1,\dots, e_d$ be an ordered basis of $\fg$ and let $e_1^*,\dots,e_d^*$ denote the associated ordered dual basis. For $\omega_0 = e_1^*\wedge e_2^* \wedge \dots \wedge e_d^*$ we define the \emph{Killing measure} \[ \mu^{\dagger}_G = \sqrt{|\det\bigl((B(e_i,e_j))_{i,j}\bigr)|} \; \mu_{\omega}. \] This is independent of the chosen ordered basis. Indeed, if the base change is given by a transformation matrix $T$, then $\omega_0$ transforms with the inverse determinant of $T$ and $\det\bigl((B(e_i,e_j))_{i,j}\bigr)$ transforms with $\det(T)^2$. \begin{lemma}\label{lem:tamagawa-to-killing} Let $\mathbf{G}$ be a connected semisimple $d$-dimensional linear algebraic group over a number field $k$ with discriminant~$d_k$. Then the Tamagawa measure $\tau$ and the product measure $\mu^\dagger = \prod_{v \in V(k)} \mu^\dagger_{\mathbf{G}(k_v)}$ on $\mathbf{G}(\bbA_k)$ satisfy \[ |d_k|^{-d/2} \mu^\dagger = \tau. \] \end{lemma} \begin{proof} Let $\fg$ be the $k$-Lie algebra of $\mathbf{G}$. We pick an ordered basis $e_1,\dots, e_d$ of $\fg$ so that the multilinear form $\omega_0 = e_1^* \wedge \dots \wedge e_d^*$ extends to a left-invariant algebraic top-dimensional $k$-form $\omega$ on $\mathbf{G}$. Then for every $v \in V(k)$, the form $\omega$ extends in turn to a differential $k_v$-form on $\mathbf{G}(k_v)$ and gives us a measure $\mu_{\omega,v}$ as explained above. The Tamagawa measure $\tau$ is then given by $\tau = |d_k|^{-d/2} \prod_{v \in V(k)} \mu_{\omega,v}$. For a different basis, the differential form $\omega$ is scaled by the determinant of the transformation matrix and we have the product formula $\prod_{v \in V(k)} |a|_v = 1$ for all $a \in k$, so $\tau$ is well-defined. Since $\det B(e_i, e_j) \in k$, the product formula also gives \[ \prod_{v \in V(k)} \mu_{\omega,v} = \prod_{v \in V(k)} \sqrt{|\det B(e_i, e_j)|_v} \ \mu_{\omega, v} = \prod_{v \in V(k)} \mu^\dagger_{\mathbf{G}(k_v)} = \mu^\dagger, \] whence the assertion. \end{proof} Recall that we defined the renormalized Killing measure in Definition~\ref{def:renormalized-measure} in the introduction. \begin{lemma} \label{lem:forget-compact-factors} Let $G$ be an algebraically simply-connected semisimple Lie group and let $\pi\colon G \to G^{\text{is}}$ be the canonical projection to the product of the noncompact factors. Assume the kernel of $\pi$ is a real Lie group. If $\Gamma \subseteq G$ is a torsion-free lattice, then the renormalized Killing covolume of $\Gamma$ in $G$ equals the renormalized Killing covolume of $\pi(\Gamma)$ in~$G^{\text{is}}$. \end{lemma} \begin{proof} We first note that the factors $\delta$ in the renormalized Killing measures for $G$ and $G^{\text{is}}$ agree. Indeed, if $\mathbf{G}_{\alpha}$ is anisotropic and $k_\alpha = \bbR$, then $\mathbf{G}_{\alpha}(k_\alpha)$ is topologically simply-connected, being a deformation retract of $\mathbf{G}_\alpha(\C)$, which is topologically simply-connected because it is algebraically so. By assumption, we have $G = K \times G^{\text{is}}$ for a compact Lie group~$K$. Since $\Gamma$ is torsion-free and discrete, we have $\Gamma \cap K = \{1\}$. We choose a measurable fundamental domain $\mathcal{F}$ for the action of $\Gamma$ on $G^{\text{is}}$. Then $K \times \mathcal{F}$ is a measurable fundamental domain for $\Gamma$ in $G$. Hence the ratio of the covolumes of $\Gamma$ and $\pi(\Gamma)$ is the renormalized Killing volume of $K$, which is one by construction.
\end{proof} \section{Proof of the main theorem} \label{sec:main} Let $k$ be a number field and let $S \subset V(k)$ be a finite subset containing all infinite places. We extend the terminology in \cite{KammeyerKionke}*{Definition~3.1} and say that a simply-connected absolutely almost simple $k$-group $\mathbf{G}$ is \emph{$S$-algebraically superrigid} if for every field $l$ of characteristic zero, every $S$-arithmetic subgroup $\Gamma$ of $\mathbf{G}$, and every homomorphism $\delta \colon \Gamma \rightarrow \mathbf{H}(l)$ to the $l$-points of a connected and absolutely almost simple $l$-group $\mathbf{H}$ with Zariski dense image, there exists a unique homomorphism $\sigma \colon k \rightarrow l$, a unique $l$-epimorphism $\eta \colon {}^\sigma \mathbf{G} \rightarrow \mathbf{H}$, and a unique homomorphism $\nu \colon \Gamma \rightarrow Z(\mathbf{H})(l)$ such that $\delta(\gamma) = \nu(\gamma)\eta(\gamma)$ for all $\gamma \in \Gamma$. By a standard descent argument (e.g.~using \cite[Lemma 2.2]{KammeyerKionke}), it suffices to verify this condition for $l = \bbC$. The \emph{Margulis superrigidity theorem} \cite{Margulis:discrete-subgroups}*{Theorem~(C), p.\,259} asserts that $\mathbf{G}$ is $S$-algebraically superrigid if $\sum_{v \in S} \rank_{k_v} \mathbf{G} \ge 2$. We outline in Appendix~\ref{appendix:s-adelic-superrigidity} that the \emph{adelic superrigidity} theorem that the first two authors concluded from this property in~\cite{KammeyerKionke}*{Theorem~3.2} extends effortlessly to $S$-algebraically superrigid groups: if $l$ is a number field, then homomorphisms $\Gamma \rightarrow \mathbf{H}(\mathbb{A}_{l,T})$ from an $S$-arithmetic subgroup $\Gamma$ of $\mathbf{G}$ to the $T$-adele points of $\mathbf{H}$ also extend to homomorphisms of group schemes over the adele points. This theorem will be key in our argument. Two number fields are called \emph{arithmetically equivalent} over $\Q$ if almost all rational primes have the same (unordered) tuple of inertia degrees in both number fields. The \emph{congruence hull} $\Gamma^c$ of an $S$-arithmetic group $\Gamma$ is the smallest congruence subgroup containing~$\Gamma$. \begin{proposition}\label{prop:S-arithmetic-volume-base} Let $\mathbf{G}$ and $\mathbf{H}$ be simply-connected absolutely almost simple groups defined over number fields $k$ and $l$. Let $\Gamma \subseteq \mathbf{G}(k)$ be an $S$-arithmetic group and let $\Delta \subseteq \mathbf{H}(l)$ be a $T$-arithmetic group such that there exists an isomorphism $\Psi \colon \widehat{\Gamma} \rightarrow \widehat{\Delta}$. Assume that $\mathbf{G}$ and $\mathbf{H}$ are $S$- and $T$-algebraically superrigid, respectively. Then the following hold: \begin{enumerate} \item \label{it:arithmetically-equivalent} The fields $k$ and $l$ are arithmetically equivalent over $\bbQ$. \item\label{it:finite-places} $S$ contains a finite place if and only if $T$ contains a finite place. \item There is an open normal subgroup $U \subseteq \widehat{\Gamma}$ such that the congruence hulls $\Gamma_0^c$ of $\Gamma_0 := U \cap \Gamma$ and $\Delta_0^c$ of $\Delta_0 := \Psi(U) \cap \Delta$ have equal Killing covolumes in $\prod_{v \in S} \mathbf{G}(k_v)$ and in $\prod_{w\in T} \mathbf{H}(l_w)$, respectively. \end{enumerate} \end{proposition} \begin{proof} Let $\Psi \colon \widehat{\Gamma} \to \widehat{\Delta}$ be an isomorphism.
Using Theorem~\ref{thm:adelic-rigidity-s-arithmetic} and the existence of $\Psi$ and its inverse as in the proof of Theorem~3.4 in \cite{KammeyerKionke}, we deduce that there is an isomorphism $j_1 \colon \mathbb{A}_{l, T} \to \mathbb{A}_{k,S}$ of adele rings and an isomorphism of group schemes $\eta_1 \colon \alg{G} \times_k \mathbb{A}_{k,S} \to \alg{H} \times_l \mathbb{A}_{l,T}$ over $j_1$ and an open normal subgroup $U \normal_o \widehat{\Gamma}$ such that \begin{equation*} \xymatrix{ U \ar[d]_{q_\alg{G}}\ar[r]^{\cong}_{\Psi} & \Psi(U) \ar[d]_{q_{\alg{H}}}\\ \alg{G}(\mathbb{A}_{k, S}) \ar[r]^{\cong}_{\eta_1} & \alg{H}(\mathbb{A}_{l, T}), } \end{equation*} commutes. Here $U$ is any open normal subgroup contained in the kernels of the homomorphisms $\hat{\nu}_1 \colon \widehat{\Gamma} \to Z(\mathbf{H})(\mathbb{A}_{l, T})$ and $\hat{\nu}_2\circ \Psi \colon \widehat{\Gamma} \to Z(\alg{G})(\mathbb{A}_{k, S})$ induced on profinite completions by the homomorphisms $\nu_1$ and $\nu_2$ coming from Theorem~\ref{thm:adelic-rigidity-s-arithmetic}. We write $\widetilde{U} =q_{\alg{G}}(U)$. The existence of the isomorphism $\mathbb{A}_{l, T} \to \mathbb{A}_{k,S}$ implies that almost all rational primes have the same decomposition type in $k$ and $l$, which gives \eqref{it:arithmetically-equivalent}, and that $S$ contains a place over $p$ if and only if $T$ does, which shows~\eqref{it:finite-places}. Let $\Gamma_0^c$ and $\Delta_0^c$ denote the congruence hulls of $\Gamma_0$ and $\Delta_0$, respectively, so $\Gamma_0^c = \mathbf{G}(k) \cap \widetilde{U}$ and $\Delta_0^c = \mathbf{H}(l) \cap \eta_1(\widetilde{U})$. Kottwitz proved \cite{Kottwitz} that the Tamagawa number equals $1$ for simply-connected semisimple groups (the missing Hasse principle for $E_8$ was later established by Chernousov~\cite{Chernousov}). Hence, Lemma \ref{lem:tamagawa-to-killing} implies that the Killing covolume of $\mathbf{G}(k)$ in $\mathbf{G}(\bbA_k)$ is $|d_k|^{\frac{\dim(\mathbf{G})}{2}}$. Let $\mathcal{F}$ be a measurable fundamental domain for the action of $\Gamma^c_0$ on $\prod_{v\in S} \mathbf{G}(k_v)$. Then it is straightforward to check using strong approximation that $\mathcal{F} \times \widetilde{U}$ is a measurable fundamental domain for the action of $\mathbf{G}(k)$ on $\mathbf{G}(\bbA_k)$ (cf.~\cite[\S 4.2]{Ono:algebraicgroups}). In turn, we obtain \[ \vol_{\mu^\dagger_S}\left(\prod_{v \in S} \mathbf{G}(k_v)/\Gamma_0^c\right) = |d_k|^{\frac{\dim(\mathbf{G})}{2}} \vol_{\mu^\dagger_{V(k) \setminus S}}(\widetilde{U})^{-1} \] where $\mu^\dagger_S = \prod_{v \in S} \mu^\dagger_{\mathbf{G}(k_v)}$ and $\mu^\dagger_{V(k) \setminus S} = \prod_{v \notin S} \mu^\dagger_{\mathbf{G}(k_v)}$. The same argument gives a formula for the Killing covolume of $\Delta_0^c$ in $\prod_{w \in T} \mathbf{H}(l_w)$ as a product of $|d_l|^{\frac{\dim(\mathbf{H})}{2}}$ and $\vol_{\mu^\dagger_{V(l) \setminus T}}(q_{\alg{H}}(\Psi(U)))^{-1}$. It follows (for instance) from adelic superrigidity that $\dim(\mathbf{G}) = \dim(\mathbf{H})$. The number fields $l$ and $k$ are arithmetically equivalent over $\bbQ$, and thus have the same discriminant; see \cite[Theorem (1.4)]{Klingen:similarities}. Since an isomorphism of Lie algebras maps the Killing forms to one another, the isomorphism $\eta_1$ preserves Killing volumes, so that \[ \vol_{\mu^\dagger_{V(k) \setminus S}}(\widetilde{U}) = \vol_{\mu^\dagger_{V(l) \setminus T}}(\eta_1(\widetilde{U})) = \vol_{\mu^\dagger_{V(l) \setminus T}}(q_{\alg{H}}(\Psi(U))).
\] We deduce that the Killing covolumes of $\Gamma_0^c$ in $\prod_{v\in S} \mathbf{G}(k_v)$ and of $\Delta_0^c$ in $\prod_{w\in T} \mathbf{H}(l_w)$ coincide. \end{proof} \begin{corollary}\label{cor:S-arithmetic-volume-H1} In the setting of Proposition~\ref{prop:S-arithmetic-volume-base}, assume that \[\Hom(H_1(\Gamma,\bbZ), Z(\mathbf{H})(\bbC)) = \{1\}.\] Then the Killing covolumes of the congruence hulls $\Gamma^c$ in $\prod_{v \in S} \mathbf{G}(k_v)$ and $\Delta^c$ in $\prod_{w\in T} \mathbf{H}(l_w)$ are equal. \end{corollary} \begin{proof} As the abelianization is a profinite invariant, we have $H_1(\Gamma,\bbZ) \cong H_1(\Delta,\bbZ)$. Moreover, $\alg{G}$ and $\alg{H}$ are isomorphic over $\bbC$, so that $Z(\alg{G})(\bbC) \cong Z(\alg{H})(\bbC)$. This shows that if $\Hom(H_1(\Gamma,\bbZ), Z(\mathbf{H})(\bbC))$ is trivial, then one can choose $U = \widehat{\Gamma}$ in the proof of Proposition \ref{prop:S-arithmetic-volume-base} (compare to the proof of \cite[Theorem 3.4]{KammeyerKionke}). Hence $\Gamma = \Gamma_0$ and $\Delta = \Delta_0$ and the assertion follows. \end{proof} In particular, this result applies to groups of type $F_4$ because they have trivial center. \begin{corollary}\label{cor:F4} In the setting of Proposition~\ref{prop:S-arithmetic-volume-base}, assume $\mathbf{G}$ and $\mathbf{H}$ have type~$F_4$. Then the Killing covolumes of the congruence hulls $\Gamma^c$ in $\prod_{v \in S} \mathbf{G}(k_v)$ and $\Delta^c$ in $\prod_{w\in T} \mathbf{H}(l_w)$ are equal. \end{corollary} Theorem~\ref{thm:octonionic} is the special case when $k$ and $l$ are totally real, $S$ and $T$ consist of the infinite places only, $\mathbf{G}$ and $\mathbf{H}$ are type $F_4$ forms which are isomorphic to $F_{4(-20)}$ at one real place and anisotropic at all other real places, and $\Gamma$ and $\Delta$ are congruence subgroups. Note that $\mathbf{G}$ and $\mathbf{H}$ are $S$- and $T$-algebraically superrigid even though they have rank one. As mentioned in the first paragraph of this section, superrigidity over $\mathbb{C}$ entails algebraic superrigidity. This can be deduced from the work of Corlette~\cite{Corlette:archimedean-superrigidity} and Gromov--Schoen~\cite{GromovSchoen} following along the lines of Margulis \cite{Margulis:discrete-subgroups}. \medskip In the proof of the next result, it will become apparent why renormalization of Killing measures is necessary. \begin{theorem} \label{thm:S-arithmetic-main} Let $\mathbf{G}$ and $\mathbf{H}$ be simply-connected absolutely almost simple algebraic groups defined over number fields $k$ and $l$. Let $\Gamma \subseteq \mathbf{G}(k)$ be $S$-arithmetic and let $\Delta \subseteq \mathbf{H}(l)$ be $T$-arithmetic with $\widehat{\Gamma} \cong \widehat{\Delta}$. Assume that $\mathbf{G}$ and $\mathbf{H}$ are $S$- and $T$-algebraically superrigid and have CSP* with respect to $S$ and $T$, respectively. Then the renormalized Killing covolumes of $\Gamma$ in $\prod_{v \in S} \mathbf{G}(k_v)$ and $\Delta$ in $\prod_{w\in T} \mathbf{H}(l_w)$ are equal. \end{theorem} \begin{proof} As before, we denote by $\Psi \colon \widehat{\Gamma} \to \widehat{\Delta}$ a fixed isomorphism. We may apply Proposition \ref{prop:S-arithmetic-volume-base} to find a suitable open normal subgroup $U \trianglelefteq \widehat{\Gamma}$. We may shrink $U$ to achieve that $U \cap C(\alg{G},S) = \{1\}$ and $\Psi(U)\cap C(\alg{H},T) = \{1\}$. As usual, we define $\Gamma_0 = \Gamma \cap U$ and $\Delta_0 = \Delta \cap \Psi(U)$.
As \[ |\Gamma:\Gamma_0| = |\widehat{\Gamma}:U| = |\widehat{\Delta}:\Psi(U)| = |\Delta:\Delta_0| \] it is sufficient to show that the subgroups $\Gamma_0$ and $\Delta_0$ have the same renormalized Killing covolumes. Since $U \cap C(\alg{G},S) = \{1\}$, we have \[|\Gamma_0^c:\Gamma_0| = |C(\alg{G},S)| = |M(\alg{G}, S)|\] and similarly $|\Delta_0^c:\Delta_0| = |M(\alg{H}, T)|$. Proposition~\ref{prop:S-arithmetic-volume-base} gives that the Killing covolumes of $\Delta_0^c$ and $\Gamma_0^c$ agree, that $S$ contains a finite place if and only if $T$ does, and that $k$ and $l$ are arithmetically equivalent. In particular $[k:\bbQ] = [l:\bbQ]$, and $k$ and $l$ have the same number of real and complex places; see \cite[Theorem (1.4)]{Klingen:similarities}. Therefore, the rescaling of the Killing volume by the Killing volumes of the compact real forms is the same for $k$ and $l$ and can be disregarded. Moreover, if $S$ and $T$ contain a finite place, then Theorem~\ref{thm:metaplectic} implies $|M(\alg{G}, S)|= |M(\alg{H},T)| = 1$ and the assertion follows. So suppose now that $S$ and $T$ consist of the archimedean places of $k$ and~$l$ only. As we just said, $k$ is totally imaginary if and only if $l$ is. If they are totally imaginary, then $|M(\alg{G}, S)|= |\mu(k)| = |\mu(l)| = |M(\alg{H}, T)|$ because arithmetically equivalent fields contain the same roots of unity; see \cite[Theorem (1.4)]{Klingen:similarities}. So finally assume that $k$ and $l$ have at least one real place. By Theorem~\ref{thm:metaplectic}, we have $|M(\alg{G}, S)| = 1$ if and only if $\alg{G}(k_v)$ is not topologically simply-connected at some real place $v$ and then the factor $\delta_G$ in the renormalized Killing measure is given by $\delta_G = 2$. If on the other hand $\alg{G}(k_v)$ is topologically simply-connected at all real places $v$, then $|M(\alg{G}, S)| = 2$ and $\delta_G = 1$. So in both cases $\delta_G \cdot |M(\alg{G}, S)| = 2$ and the same applies to $\mathbf{H}$. It follows that the renormalized Killing covolumes of $\Gamma_0$ and $\Delta_0$ agree. \end{proof} \begin{example} As an addendum to the last proof, we give examples of \(\mathbf{G}\) and \(\mathbf{H}\) whose metaplectic kernels differ, in order to show that the factor \(\delta\) in the renormalization of Killing measures cannot be omitted. In fact, we can use M.\,Aka's examples of spinor groups which exhibit that Kazhdan's property (T) is not a profinite property~\cite{Aka:property-t}*{Theorem~1}. Consider the quadratic forms \[ f = \langle 1, 1, 1, 1, 1, 1, -1 \rangle \quad \text{and} \quad g = \langle 1, 1, -1, -1, -1, -1, -1 \rangle \] and let \(\mathbf{G} = \mathbf{Spin}(f)\) and \(\mathbf{H} = \mathbf{Spin}(g)\), both considered as groups over the number field \(\Q(\sqrt{2})\). Then \(\mathbf{G}\) is topologically simply-connected at both real places because the maximal compact subgroup \(\operatorname{Spin}(6)\) is a simply-connected deformation retract. In contrast, \(\mathbf{H}\) is not topologically simply-connected at the real places because at these places, \(\mathbf{H}\) is a twofold covering of \(\operatorname{SO}^0(2,5)\) whose fundamental group is infinite. For \(S\) consisting of the real places of \(\Q(\sqrt{2})\), Theorem~\ref{thm:metaplectic} gives that \(M(\mathbf{G}, S)\) has order two whereas \(M(\mathbf{H}, S)\) is trivial. Both \(\mathbf{G}\) and \(\mathbf{H}\) are algebraically superrigid and have CSP* by Theorem~\ref{thm:status-of-csp}.
Since the quadratic forms \(\langle 1, 1, 1, 1 \rangle\) and \(\langle -1, -1, -1, -1 \rangle\) are isometric over \(\Z_p\) for all primes \(p\), we see that the congruence completions of the arithmetic groups \(\mathbf{G}(\Z[\sqrt{2}])\) and \(\mathbf{H}(\Z[\sqrt{2}])\) (defined as in~\cite{Kammeyer-Sauer:spinor}*{Section~3}) are isomorphic. It follows that suitable finite index subgroups \(\Gamma\) and \(\Delta\) of these groups have isomorphic profinite completions. \end{example} \begin{proof}[Proof of Theorem \ref{thm:main-lattices}] As we explained below Definition~\ref{def:simply-connected}, we have \[ G \cong \prod_{v \in S^{\text{is}}} \mathbf{G}(k_v) \quad \text{and} \quad H \cong \prod_{v \in T^{\text{is}}} \mathbf{H}(l_v) \] for simply-connected absolutely almost simple linear algebraic groups $\mathbf{G}$ and $\mathbf{H}$ over $k$ and $l$, respectively, and for finite sets of places $S^{\text{is}}$ and $T^{\text{is}}$ of $k$ and $l$ containing all the infinite places at which $\mathbf{G}$ and $\mathbf{H}$ are isotropic, respectively. Let $S$ and $T$ be the union of $S^{\text{is}}$ and $T^{\text{is}}$ with all infinite places, respectively. The group $\Gamma$ is commensurable with $\alg{G}(\mathcal{O}_{k,S})$ and $\Delta$ is commensurable with $\alg{H}(\mathcal{O}_{l,T})$. Pick $\Gamma_0 \le \Gamma$ of finite index such that $\Gamma_0 \subset \mathbf{G}(\mathcal{O}_{k,S})$ and such that $\Gamma_0$ is torsion-free (Selberg's lemma) and pick $\Delta_0 \le \Delta$ similarly. Then the open subgroup $U = \overline{\Gamma_0} \cap \overline{\Delta_0}$ of $\widehat{\Gamma} \cong \widehat{\Delta}$ gives torsion-free subgroups $\Gamma_1 = U \cap \Gamma \le \alg{G}(k)$ and $\Delta_1 = U \cap \Delta \le \alg{H}(l)$ such that $[\Gamma : \Gamma_1 ] = [\Delta : \Delta_1]$ and $\widehat{\Gamma_1} \cong \widehat{\Delta_1}$. The assumption of CSP* gives $|C(\mathbf{G},S)| = |M(\mathbf{G}, S)|$ and similarly for~$\mathbf{H}$. Since neither $G$ nor $H$ has compact factors, it follows that neither $S^{\text{is}}$ nor $T^{\text{is}}$ contains infinite places at which $\mathbf{G}$ or $\mathbf{H}$ would be anisotropic, respectively. As $\Gamma_1$ and $\Delta_1$ are torsion-free, Lemma~\ref{lem:forget-compact-factors} shows that the renormalized Killing covolume of $\Gamma_1 \le G$ and $\Delta_1 \le H$ is the same as the renormalized Killing covolume of $\Gamma_1 \le \prod_{v \in S} \alg{G}(k_v)$ and $\Delta_1 \le \prod_{v \in T} \alg{H}(l_v)$, respectively. As we explained at the beginning of this section, the higher rank assumption on $G$ and $H$ implies that $\mathbf{G}$ and $\mathbf{H}$ are $S$- and $T$-algebraically superrigid. Now the latter renormalized Killing covolumes are equal by Theorem~\ref{thm:S-arithmetic-main}, and this completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:same-lie-group}] Since the center \(Z(G)\) of \(G\) is finite, there exist finite index subgroups \(\Gamma_0 \le \Gamma\) and \(\Delta_0 \le \Delta\) intersecting \(Z(G)\) trivially. Fixing the isomorphism \(\widehat{\Gamma} \cong \widehat{\Delta}\), the intersection of the closures \(\overline{\Gamma_0} \cap \overline{\Delta_0}\) in \(\widehat{\Gamma}\) intersects \(\Gamma\) and \(\Delta\) in finite index subgroups \(\Gamma_1\) and \(\Delta_1\) and we have \([\Gamma \colon \Gamma_1] = [\Delta \colon \Delta_1]\).
Since \(\Gamma_1\) and \(\Delta_1\) intersect \(Z(G)\) trivially, the projection map gives embeddings \(\Gamma_1, \Delta_1 \le \operatorname{Ad} G\) into the adjoint form \(\operatorname{Ad} G = G / Z(G)\). The adjoint form can be identified with the group of inner automorphisms \(\operatorname{Int} (\mathfrak{g})\) of the Lie algebra \(\mathfrak{g}\) of \(G\). As such, it is given by the unit path component of the \(\R\)-group \(\operatorname{Aut}(\mathfrak{g})\). It also is the unit path component of the Zariski connected \(\R\)-group given by the product of the Zariski unit components of the automorphism groups \(\operatorname{Aut} (\mathfrak{g}_i)\) of the simple ideals of \(\mathfrak{g} \cong \mathfrak{g}_1 \oplus \cdots \oplus \mathfrak{g}_r\). The non-absolutely simple factors in this product decomposition are given by restriction of scalars from an absolutely simple group over \(\C\) and can thus be replaced by these \(\C\)-groups. Finally, taking the product of all the algebraically simply-connected coverings of the absolutely simple factors, we obtain an algebraically simply-connected Lie group \(G_0\). Note that \(G_0\) is path connected because the real points of an algebraically simply connected \(\R\)-group are path connected (the \(\C\)-points of the \(\C\)-factors are even topologically simply-connected). The preimages \(\Gamma_2\) and \(\Delta_2\) of \(\Gamma_1\) and \(\Delta_1\) along the covering morphism \(p \colon G_0 \rightarrow \operatorname{Ad} G\) are irreducible lattices in \(G_0\). Moreover, \(G_0\) has no anisotropic factor because \(G\) has no compact factor by assumption and similarly \(G_0\) has higher rank because \(G\) does. However, we cannot yet ensure that \(\widehat{\Gamma_2} \cong \widehat{\Delta_2}\). Therefore we argue similarly as above and let \(\Gamma_3 \le \Gamma_2\) and \(\Delta_3 \le \Delta_2\) be finite index subgroups intersecting \(\ker p\) trivially. Then \(p\) embeds \(\Gamma_3\) and \(\Delta_3\) as finite index subgroups of \(\Gamma_1\) and \(\Delta_1\). The intersection \(\overline{\Gamma_3} \cap \overline{\Delta_3}\) in \(\widehat{\Gamma_1} \cong \widehat{\Delta_1}\) intersects \(\Gamma_3\) and \(\Delta_3\) in finite index subgroups \(\Gamma_4\) and \(\Delta_4\) and these are finally the irreducible lattices in \(G_0\) with \(\widehat{\Gamma_4} \cong \widehat{\Delta_4}\) and \([\Gamma : \Gamma_4] = [\Delta : \Delta_4]\). If we endow \(G_0\) with the Haar measure induced from \(G\) via \(\operatorname{Ad} G\) and denote it likewise by \(\mu\), then by construction \[ \frac{\mu(G/\Gamma)}{\mu(G_0 / \Gamma_4)} = \frac{|Z(G)|}{[\Gamma : \Gamma_4] \cdot |\ker p|} = \frac{|Z(G)|}{[\Delta : \Delta_4] \cdot |\ker p|} = \frac{\mu(G/\Delta)}{\mu(G_0 / \Delta_4)}. \] Hence the assertion now follows from Theorem~\ref{thm:main-lattices}. \end{proof} Since CSP* is known in the isotropic case (Theorem~\ref{thm:status-of-csp}), Theorem~\ref{thm:non-uniform-same-lie-group} is immediate from Theorem~\ref{thm:same-lie-group}. \begin{remark}[\emph{On the profinite almost rigidity of \(S\)-arithmetic groups}] A theorem of Borel~\cite{Borel:bunded-covolume}*{Theorem~4.2} applies to our situation: For every algebraically simply-connected Lie group \(G\) of higher rank without compact factors and with fixed Haar measure \(\mu\) and for every given bound \(c > 0\), there exist only finitely many conjugacy classes of lattices \(\Gamma \le G\) with \(\mu(G/\Gamma) \le c\). 
Moreover, by arithmeticity, any two profinitely isomorphic lattices in such Lie groups determine ambient algebraic groups which are isomorphic locally almost everywhere by Theorem~\ref{thm:adelic-rigidity-s-arithmetic}. There are only finitely many possible isomorphism types over each of the remaining archimedean or \(\mathfrak{p}\)-adic completions. Thus Borel's theorem and our Theorem~\ref{thm:main-lattices} have the corollary that only finitely many isomorphism types of irreducible lattices in groups \(G\) as above have the same profinite completion if CSP* is granted. This recovers a theorem of M.\,Aka~\cite{Aka:arithmetic}*{Theorem~2}. In fact, in his proof, Aka also uses Borel's theorem above, but only as an intermediate step after showing (in \cite{Aka:arithmetic}*{Section~5}) that \emph{commensurable} \(S\)-arithmetic groups have the same covolume if they are profinitely isomorphic. This naturally led us to the question of whether the covolume of higher rank \(S\)-arithmetic groups is a profinite invariant in general. Let us also point out that R.\,Spitler showed in his thesis~\cite{Spitler:thesis} that if a general finitely generated residually finite group \(\Lambda\) is profinitely isomorphic to a higher rank \(S\)-arithmetic group \(\Gamma\), then \(\Lambda\) embeds in a \(T\)-arithmetic group \(\Delta\). If, moreover, \(\Delta\) has CSP, then \(\widehat{\Gamma} \cong \widehat{\Delta}\). It is moreover a decades-old question, advertised by Platonov and Rapinchuk in~\cite{Platonov-Rapinchuk:algebraic-groups}*{Problem on p.\,424}, whether \(S\)-arithmetic groups with CSP have proper \emph{Grothendieck subgroups}. In our setting, this just asks whether the inclusion \(\Lambda \le \Delta\) can be proper. To sum up, assuming CSP* and that the Platonov--Rapinchuk problem has a negative solution, we can conclude that a higher rank \(S\)-arithmetic group \(\Gamma\) is \emph{absolutely profinitely almost rigid}: up to isomorphism, only finitely many finitely generated residually finite groups \(\Lambda\) satisfy \(\widehat{\Gamma} \cong \widehat{\Lambda}\). Finally, coming back to Question~\ref{question:volume-3-manifolds}, we should mention that Yi Liu~\cite{Liu}*{Theorem~1.1} has recently shown that finite volume hyperbolic 3-manifold groups are profinitely almost rigid among finitely generated 3-manifold groups. However, he writes explicitly~\cite{Liu}*{p.\,743} that he does not know whether all the finitely many hyperbolic 3-manifold groups with the same profinite completion have equal volume. \end{remark} \appendix \section{$S$-adelic superrigidity} \label{appendix:s-adelic-superrigidity} We briefly present the extension of adelic superrigidity as proven in \cite{KammeyerKionke} to the $S$-arithmetic case. \begin{theorem}\label{thm:adelic-rigidity-s-arithmetic} Let $k$ and $l$ be two algebraic number fields and let $S \subseteq V(k)$ and $T \subseteq V(l)$ be two finite subsets containing all the archimedean places. Let $\mathbf{G}$ and $\mathbf{H}$ be simply-connected absolutely almost simple groups over $k$ and $l$, respectively. Assume that $\mathbf{G}$ is $S$-algebraically superrigid and that $S$ does not contain a finite place at which $\mathbf{G}$ is anisotropic. Let $\Gamma \subseteq \mathbf{G}(k)$ be an $S$-arithmetic subgroup. Suppose there is a homomorphism $\phi \colon \Gamma \to \mathbf{H}(\mathbb{A}_{l,T})$ such that $\overline{\phi(\Gamma)}$ is compact and has non-empty interior.
Then there are, uniquely determined, \begin{itemize} \item an injective map $w \colon V(l) \setminus T \to V(k) \setminus S$, \item isomorphisms of valued fields $j_v \colon l_v \to k_{w(v)}$ for all $v \in V(l) \setminus T$ which induce an injection $j \colon \mathbb{A}_{l,T} \to \mathbb{A}_{k,S}$ of topological rings, \item a homomorphism of group schemes over $j$ \[ \eta\colon \mathbf{G}\times_k \mathbb{A}_{k,S} \to \mathbf{H}\times_l \mathbb{A}_{l,T}, \] \item a group homomorphism $\nu\colon \Gamma \to Z(\mathbf{H})(\mathbb{A}_{l,T})$ \end{itemize} such that $\phi(\gamma) = \nu(\gamma)\eta(\gamma)$ for all $\gamma \in \Gamma$. Moreover, for every prime number $p$ \[ \sum_{\substack{v \in V(l)\setminus T\\v\mid p}} [l_v:\mathbb{Q}_p] \leq \sum_{\substack{w \in V(k)\setminus S\\w\mid p}} [k_w:\mathbb{Q}_p] \] and if equality occurs for every prime, then $w$ is a bijection and $j$ is an isomorphism. \end{theorem} The proof of Theorem 3.2 in \cite{KammeyerKionke} carries over almost verbatim. One needs to verify that places not in $T$ are mapped to places not in $S$. To this end, it suffices to see that in \cite[Lemma 3.3]{KammeyerKionke}, the group $\overline{\phi_v(\Gamma)}$ is compact if and only if $\overline{\Gamma}$ is compact. The proof of Step 2 in \cite{KammeyerKionke} is not correct as stated. We use the opportunity to give a correct argument. Assume that two places $v_1,v_2$ of $l$ are mapped to the same place $w$ of $k$. Our aim (as in \cite{KammeyerKionke}) is to show that the map $(\eta_{v_1},\eta_{v_2})\colon \mathbf{G}(k_w)\to \mathbf{H}(l_{v_1})\times\mathbf{H}(l_{v_2})$ has a nowhere dense image, to derive a contradiction. Since the image is a subgroup, it suffices to show that it does not contain an open neighbourhood of the identity. Now, the inverse image of \(\mathbf{H}(l_{v_1})\times \{1\}\) under $(\eta_{v_1},\eta_{v_2})$ is the kernel of $\eta_{v_2}$. We recall that $\eta_{v_1}$ and $\eta_{v_2}$ are central isogenies, i.e., their kernels are finite, and hence the intersection of the image of $(\eta_{v_1},\eta_{v_2})$ with \(\mathbf{H}(l_{v_1})\times \{1\}\) is finite and cannot be open. \begin{bibdiv}[References] \begin{biblist} \bib{Aka:arithmetic}{article}{ author={Aka, Menny}, title={Arithmetic groups with isomorphic finite quotients}, journal={J. Algebra}, volume={352}, date={2012}, pages={322--340}, issn={0021-8693}, review={\MR{2862189}}, } \bib{Aka:property-t}{article}{ author={Aka, Menny}, title={Profinite completions and Kazhdan's property (T)}, journal={Groups Geom. Dyn.}, volume={6}, date={2012}, number={2}, pages={221--229}, issn={1661-7207}, review={\MR{2914858}}, } \bib{Borel:bunded-covolume}{article}{ author={Borel, A.}, title={On the set of discrete subgroups of bounded covolume in a semisimple group}, journal={Proc. Indian Acad. Sci. Math. Sci.}, volume={97}, date={1987}, number={1-3}, pages={45--52 (1988)}, issn={0253-4142}, review={\MR{0983603}}, } \bib{Chernousov}{article}{ author={Chernousov, V. I.}, title={The Hasse principle for groups of type $E_8$}, language={Russian}, journal={Dokl. Akad. Nauk SSSR}, volume={306}, date={1989}, number={5}, pages={1059--1063}, issn={0002-3264}, translation={ journal={Soviet Math. Dokl.}, volume={39}, date={1989}, number={3}, pages={592--596}, issn={0197-6788}, }, review={\MR{1014762}}, } \bib{Corlette:archimedean-superrigidity}{article}{ author={Corlette, Kevin}, title={Archimedean superrigidity and hyperbolic geometry}, journal={Ann. of Math.
(2)}, volume={135}, date={1992}, number={1}, pages={165--182}, issn={0003-486X}, review={\MR{1147961}}, } \bib{GromovSchoen}{article}{ author={Gromov, Mikhail}, author={Schoen, Richard}, title={Harmonic maps into singular spaces and $p$-adic superrigidity for lattices in groups of rank one}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={76}, date={1992}, pages={165--246}, issn={0073-8301}, review={\MR{1215595}}, } \bib{KammeyerKionke}{article}{ author={Kammeyer, Holger}, author={Kionke, Steffen}, title={Adelic superrigidity and profinitely solitary lattices}, journal={Pacific J. Math.}, volume={313}, date={2021}, number={1}, pages={137--158}, issn={0030-8730}, review={\MR{4313430}}, } \bib{Kammeyer-Kionke-Raimbault-Sauer}{article}{ author={Kammeyer, Holger}, author={Kionke, Steffen}, author={Raimbault, Jean}, author={Sauer, Roman}, title={Profinite invariants of arithmetic groups}, journal={Forum Math. Sigma}, volume={8}, date={2020}, pages={Paper No. e54, 22}, review={\MR{4176758}}, } \bib{Kammeyer-Sauer:spinor}{article}{ author={Kammeyer, Holger}, author={Sauer, Roman}, title={$S$-arithmetic spinor groups with the same finite quotients and distinct $\ell^2$-cohomology}, journal={Groups Geom. Dyn.}, volume={14}, date={2020}, number={3}, pages={857--869}, issn={1661-7207}, review={\MR{4167024}}, } \bib{Klingen:similarities}{book}{ author={Klingen, Norbert}, title={Arithmetical similarities}, series={Oxford Mathematical Monographs}, note={Prime decomposition and finite group theory; Oxford Science Publications}, publisher={The Clarendon Press, Oxford University Press, New York}, date={1998}, pages={x+275}, isbn={0-19-853598-8}, review={\MR{1638821}}, } \bib{Kneser:galois}{article}{ author={Kneser, Martin}, title={Galois-Kohomologie halbeinfacher algebraischer Gruppen \"uber ${\germ p}$-adischen K\"orpern. II}, language={German}, journal={Math. Z.}, volume={89}, date={1965}, pages={250--272}, issn={0025-5874}, review={\MR{0188219}}, } \bib{Kottwitz}{article}{ author={Kottwitz, Robert E.}, title={Tamagawa numbers}, journal={Ann. of Math. (2)}, volume={127}, date={1988}, number={3}, pages={629--646}, issn={0003-486X}, review={\MR{0942522}}, } \bib{Liu}{article}{ author={Liu, Yi}, title={Finite-volume hyperbolic 3-manifolds are almost determined by their finite quotient groups}, journal={Invent. Math.}, volume={231}, date={2023}, number={2}, pages={741--804}, issn={0020-9910}, review={\MR{4542705}}, } \bib{Lubotzky-Martin:rep-growth}{article}{ author={Lubotzky, Alexander}, author={Martin, Benjamin}, title={Polynomial representation growth and the congruence subgroup problem}, journal={Israel J. Math.}, volume={144}, date={2004}, pages={293--316}, issn={0021-2172}, review={\MR{2121543}}, } \bib{Lubotzky-Segal:subgroup-growth}{book}{ author={Lubotzky, Alexander}, author={Segal, Dan}, title={Subgroup growth}, series={Progress in Mathematics}, volume={212}, publisher={Birkh\"auser Verlag, Basel}, date={2003}, pages={xxii+453}, isbn={3-7643-6989-2}, review={\MR{1978431}}, } \bib{Margulis:discrete-subgroups}{book}{ author={Margulis, G. A.}, title={Discrete subgroups of semisimple Lie groups}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3)}, volume={17}, publisher={Springer-Verlag, Berlin}, date={1991}, pages={x+388}, isbn={3-540-12179-X}, review={\MR{1090825}}, } \bib{Moore:extensions}{article}{ author={Moore, Calvin C.}, title={Extensions and low dimensional cohomology theory of locally compact groups. I, II}, journal={Trans. Amer. Math. 
Soc.}, volume={113}, date={1964}, pages={40--63; ibid. {\bf 113 (1964), 64--86}}, issn={0002-9947}, review={\MR{0171880}}, } \bib{Moore:group-extensions}{article}{ author={Moore, Calvin C.}, title={Group extensions and cohomology for locally compact groups. III}, journal={Trans. Amer. Math. Soc.}, volume={221}, date={1976}, number={1}, pages={1--33}, issn={0002-9947}, review={\MR{0414775}}, } \bib{Ono:algebraicgroups}{article}{ author={Ono, Takashi}, title={On algebraic groups and discontinuous groups}, journal={Nagoya Math. J.}, volume={27}, date={1966}, pages={279--322}, issn={0027-7630}, review={\MR{0199193}}, } \bib{Platonov-Rapinchuk:algebraic-groups}{book}{ author={Platonov, Vladimir}, author={Rapinchuk, Andrei}, title={Algebraic groups and number theory}, series={Pure and Applied Mathematics}, volume={139}, note={Translated from the 1991 Russian original by Rachel Rowen}, publisher={Academic Press, Inc., Boston, MA}, date={1994}, pages={xii+614}, isbn={0-12-558180-7}, review={\MR{1278263}}, } \bib{Prasad-Rapinchuk:metaplectic}{article}{ author={Prasad, Gopal}, author={Rapinchuk, Andrei S.}, title={Computation of the metaplectic kernel}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={84}, date={1996}, pages={91--187 (1997)}, issn={0073-8301}, review={\MR{1441007}}, } \bib{Prasad-Rapinchuk:survey}{article}{ author={Prasad, G.}, author={Rapinchuk, A. S.}, title={Developments on the Congruence Subgroup Problem after the Work of Bass, Milnor and Serre} book={ title={Collected papers of John Milnor. V. Algebra}, editor={Bass, Hyman}, editor={Lam, T. Y.}, note={Edited by Hyman Bass and T. Y. Lam}, publisher={American Mathematical Society, Providence, RI}, date={2010}, }, date={2010}, pages={307--325}, review={\MR{2841244}}, } \bib{Reid:ICM} {article}{ author={Reid, Alan W.}, title={Profinite rigidity}, conference={ title={Proceedings of the International Congress of Mathematicians---Rio de Janeiro 2018. Vol. II. Invited lectures}, }, book={ publisher={World Sci. Publ., Hackensack, NJ}, }, isbn={978-981-3272-91-0}, isbn={978-981-3272-87-3}, date={2018}, pages={1193--1216}, review={\MR{3966805}}, } \bib{Serre:cohomologie}{article}{ author={Serre, Jean-Pierre}, title={Cohomologie des groupes discrets}, language={French}, conference={ title={S\'eminaire Bourbaki, 23\`eme ann\'ee (1970/1971)}, }, book={ series={Lecture Notes in Math.}, volume={Vol. 244}, publisher={Springer, Berlin-New York}, }, date={1971}, pages={Exp. No. 399, pp. 337--350}, review={\MR{0422504}}, } \bib{Spitler:thesis}{thesis}{ author = {R. F. Spitler}, title = {Profinite Completions and Representations of Finitely Generated Groups}, year = {2019}, note = {PhD thesis}, organization = {Purdue University}, review = {\newline \url{https://www.doi.org/10.25394/PGS.9117068.v1}}, } \bib{Wigner:algebraic-cohomology}{article}{ author={Wigner, David}, title={Algebraic cohomology of topological groups}, journal={Trans. Amer. Math. Soc.}, volume={178}, date={1973}, pages={83--93}, issn={0002-9947}, review={\MR{0338132}}, } \end{biblist} \end{bibdiv} \end{document}
2412.13118v1
http://arxiv.org/abs/2412.13118v1
Entanglement principle for the fractional Laplacian with applications to inverse problems
\documentclass[12pt,final]{amsart} \usepackage{amsmath,amscd} \usepackage{amssymb} \usepackage{amsthm} \usepackage{comment} \usepackage{mathtools} \usepackage{graphicx, xcolor} \usepackage{geometry}\geometry{margin=1in} \usepackage{mathrsfs} \usepackage[ocgcolorlinks, linkcolor=blue]{hyperref} \usepackage{bm} \usepackage{bbm} \usepackage{url} \usepackage[utf8]{inputenc} \usepackage{mathtools,amssymb} \usepackage{esint} \usepackage{tikz} \usepackage{dsfont} \usepackage{relsize} \usepackage{url} \urlstyle{same} \usepackage{xcolor} \usepackage{graphicx} \usepackage{mathrsfs} \usepackage[shortlabels]{enumitem} \usepackage{lineno} \usepackage{amsmath} \usepackage{enumitem} \usepackage{amsthm} \usepackage{verbatim} \usepackage{dsfont} \numberwithin{equation}{section} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \allowdisplaybreaks \newcommand{\para}[1]{\vspace{3mm} \noindent\textbf{#1.}} \mathtoolsset{showonlyrefs} \graphicspath{{images/}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{question}{Question} \newtheorem{remark}[theorem]{Remark} \title[Entanglement principle for the fractional Laplacian]{Entanglement principle for the fractional Laplacian with applications to inverse problems} \author[A. Feizmohammadi]{Ali Feizmohammadi} \address{Department of Mathematics, University of Toronto, 3359 Mississauga Road, Deerfield Hall, 3015, Mississauga, ON, Canada L5L 1C6} \curraddr{} \email{[email protected]} \author[Y.-H. Lin]{Yi-Hsuan Lin} \address{Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan \& Fakult\"at f\"ur Mathematik, University of Duisburg-Essen, Essen, Germany} \curraddr{} \email{[email protected]} \keywords{Fractional Laplacian, entanglement principle, Calderón problem, unique continuation, spherical mean transform, Runge approximation, Bernstein functions, super-exponential decay. 
} \subjclass[2020]{Primary: 35R30, secondary 26A33, 42B37} \newcommand{\todo}[1]{\footnote{TODO: #1}} \newcommand{\C}{{\mathbb C}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\N}{{\mathbb N}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\A}{{\mathcal A}} \newcommand{\Order}{{\mathcal O}} \newcommand{\order}{o} \newcommand{\eps}{\epsilon} \newcommand{\der}{{\mathrm d}} \newcommand{\id}{\mathrm{Id}} \newcommand {\p} {\partial} \newcommand{\LC}{\left(} \newcommand{\RC}{\right)} \newcommand{\wt}{\widetilde} \newcommand{\Kelvin}{K}\newcommand{\riesz}{I_{\alpha}}\newcommand{\xrt}{X}\newcommand{\dplane}{R_d} \newcommand{\no}{N}\newcommand{\nod}{N_d} \newcommand{\schwartz}{\mathscr{S}} \newcommand{\cschwartz}{\mathscr{S}_0} \newcommand{\tempered}{\mathscr{S}^{\prime}} \newcommand{\rapidly}{\mathscr{O}_C^{\prime}} \newcommand{\slowly}{\mathscr{O}_M} \newcommand{\fraclaplace}{(-\Delta)^s} \newcommand{\fourier}{\mathcal{F}} \newcommand{\ifourier}{\mathcal{F}^{-1}} \newcommand{\vev}[1]{\left\langle#1\right\rangle} \newcommand{\pol}{\mathcal{O}_M} \newcommand{\borel}{\mathcal{M}} \newcommand{\Hcirc}{\overset{\hspace{-0.08cm}\circ}{H^s}} \newcommand{\test}{\mathscr{D}}\newcommand{\smooth}{\mathscr{E}}\newcommand{\cdistr}{\mathscr{E}'}\newcommand{\distr}{\mathscr{D}^{\prime}}\newcommand{\dimens}{n}\newcommand{\kernel}{h_{\alpha}} \newcommand{\norm}[1]{\lVert #1 \rVert} \newcommand{\abs}[1]{\left\lvert #1 \right\rvert}\newcommand{\aabs}[1]{\left\lVert #1 \right\rVert}\newcommand{\ip}[2]{\left\langle #1,#2 \right\rangle}\DeclareMathOperator{\spt}{spt}\DeclareMathOperator{\ch}{ch}\DeclareMathOperator{\Div}{div} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\loc}{loc} \newcommand{\radon}{\mathscr{M}}\newcommand{\weak}{\rightharpoonup}\newcommand{\weakstar}{\overset{\ast}{\rightharpoonup}} \begin{document} \maketitle \begin{abstract} We prove an entanglement principle for fractional Laplace operators on $\mathbb R^n$ for $n\geq 2$ as follows; if different fractional powers of the Laplace operator acting on several distinct functions on $\mathbb R^n$, which vanish on some nonempty open set $\mathcal O$, are known to be linearly dependent on $\mathcal O$, then all the functions must be globally zero. This remarkable principle was recently discovered to be true for smooth functions on compact Riemannian manifolds without boundary \cite{FKU24}. Our main result extends the principle to the noncompact Euclidean space stated for tempered distributions under suitable decay conditions at infinity. We also present applications of this principle to solve new inverse problems for recovering anisotropic principal terms as well as zeroth order coefficients in fractional polyharmonic equations. Our proof of the entanglement principle uses the heat semigroup formulation of fractional Laplacian to establish connections between the principle and the study of several topics including interpolation properties for holomorphic functions under certain growth conditions at infinity, meromorphic extensions of holomorphic functions from a subdomain, as well as support theorems for spherical mean transforms on $\mathbb R^n$ that are defined as averages of functions over spheres. 
\end{abstract} \tableofcontents \section{Introduction}\label{sec: introduction} Fractional Laplace operators are a well-known example of nonlocal operators that satisfy a surprising \emph{unique continuation property} (UCP); if $u\in H^{r}(\R^n)$ for some $r\in \R$, and if $u$ and some fractional power of its Laplacian, namely $(-\Delta)^s u$ for some $s\in (0,1)$, both vanish on some nonempty open set, then $u$ must vanish globally on $\mathbb R^n$, see e.g. \cite{GSU20}. We also refer the reader to \cite{Riesz} for a classical result with stronger assumptions on $u$; see also \cite{Fall01022014,ruland2015unique,Yu17} for related results. A (UCP) analogous to the above has been derived in \cite{CMR20} for the higher-order fractional Laplacian $(-\Delta)^s$ with $s\in (-\frac{n}{2},\infty) \setminus \Z$. The above (UCP) with $s\in (0,1)$ was further extended in \cite{GLX} to the case of the fractional Laplace--Beltrami operators $(-\Delta_g)^s$ on $\R^n$ with a smooth Riemannian metric $g$. We also mention the recent work \cite{kenig2024fractional} that derives (UCP) results for certain classes of variable coefficient fractional dynamical Schr\"odinger equations. A common technique in the derivation of (UCP) results for fractional Laplace operators is the Caffarelli--Silvestre extension procedure \cite{Caffarelli08082007} together with Carleman estimates from \cite{ruland2015unique}, see also \cite{ghosh2021non} for an alternative proof using heat semigroups. The above-mentioned (UCP) has been a key tool in solving inverse problems for certain classes of nonlocal equations. We refer the reader to \cite{GSU20} for the first result in this direction, which subsequently led to significant research on inverse problems for nonlocal equations. This will be further discussed in Section~\ref{sec_ip_applications}. \subsection{Entanglement principle for the fractional Laplace operator} In this paper, we are partly concerned with establishing (UCP) for \emph{fractional polyharmonic operators} on $\R^n$. Precisely, let $N\geq 2$ and let $\mathcal O\subset \R^n$ be a nonempty open set. Suppose that $u\in H^{r}(\R^n)$ for some $r\in \R$ and that there holds \begin{equation}\label{UCP_poly} u|_{\mathcal O}= \sum_{k=1}^N b_k ((-\Delta)^{s_k} u)|_{\mathcal O} =0, \end{equation} for some $\{b_k\}_{k=1}^N \subset \C\setminus \{0\}$ and some $\{s_k\}_{k=1}^N\subset (0,\infty)\setminus \N$. Does it follow that $u=0$ on $\R^n$? Let us mention that such operators are physically motivated by some probabilistic models; see e.g. \cite[Appendix B]{DLV21}. To the best of our knowledge, no prior results address the above (UCP) formulated in this generality. The explicit Caffarelli--Silvestre extension procedure \cite{Caffarelli08082007} for representing fractional Laplace operators as Dirichlet-to-Neumann maps for degenerate elliptic equations has been a key tool in the study of (UCP) for single-term fractional Laplace operators (see e.g. \cite{ruland2015unique,GSU20}). Such explicit representations are not known for fractional polyharmonic operators. In addition, approaches based on heat semigroup representations of fractional Laplace operators face several technical difficulties, arising from the fact that multiple nonlocal terms contribute to the expression \eqref{UCP_poly} and isolating the terms is not feasible.
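For orientation, we briefly recall one standard (Balakrishnan-type) heat semigroup representation underlying such approaches: for $s\in(0,1)$ and, say, Schwartz functions $u$, one has
\[
(-\Delta)^{s}u \;=\; \frac{1}{\Gamma(-s)}\int_0^{\infty}\bigl(e^{t\Delta}u-u\bigr)\,\frac{dt}{t^{1+s}},
\]
where $(e^{t\Delta})_{t>0}$ denotes the heat semigroup on $\R^n$ (higher powers can be treated by composing with integer powers of $-\Delta$). When several such terms with distinct exponents $s_k$ are summed as in \eqref{UCP_poly}, their contributions mix inside a single integral in $t$ and cannot be separated term by term in any obvious way.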
In this paper, we establish (UCP) for \eqref{UCP_poly} as a particular case of a much broader principle that we refer to as the {\em entanglement principle} for fractional Laplace operators, stated as the following broad question. \begin{question}\label{question} Let $N\in \N$, let $\{s_k\}_{k=1}^N\subset (0,\infty)\setminus \N$ and let $\mathcal{O}\subset \R^n$ be a nonempty open set. Let $\{u_k\}_{k=1}^N$ be sufficiently fast decaying functions at infinity and assume that \begin{equation}\label{ent_u_cond} u_1|_{\mathcal O}=\ldots=u_N|_{\mathcal O}=0 \quad \text{and} \quad \sum_{k=1}^N b_k((-\Delta)^{s_k}u_k)\big|_{\mathcal O}=0, \end{equation} for some $\{b_k\}_{k=1}^N\subset \C\setminus \{0\}$. Does it follow that $u_k\equiv 0$ in $\R^n$ for all $k=1,\ldots, N$? \end{question} When $N=1$, the above question has an affirmative answer, as it reduces to the well-known (UCP) for the fractional Laplace operator. However, for $N\geq 2$, this is a much stronger statement than (UCP), since it involves several distinct functions simultaneously in one equation. The nomenclature of the principle comes from \cite[Theorem 1.8]{FKU24} where, among other theorems proved in that paper, the authors discovered the entanglement principle for fractional Laplace-Beltrami operators on closed Riemannian manifolds, i.e. compact Riemannian manifolds without boundary. We thus aim to extend that principle to the case of Euclidean spaces. The main difference here lies in the noncompactness of the Euclidean space $\R^n$ which, as we will discuss later in Section~\ref{sec_outline_proof}, creates several important difficulties; see also \cite[Remark 1.9]{FKU24} on why compactness of the ambient manifold is an important feature there. We will affirmatively answer the above question under suitable decay rates for $\{u_k\}_{k=1}^{N}$ at infinity together with an additional assumption for the fractional exponents $\{s_k\}_{k=1}^N$. To state our result, we first need to define the notion of \emph{super-exponential decay at infinity} for a distribution on $\mathbb R^n$ as follows. \begin{definition}[Super-exponential decay at infinity] \label{def_exp} Let $u\in H^{-r}(\mathbb R^n)$ for some $r\in \R$. We say that $u$ has super-exponential decay at infinity if there exist constants $C,\rho>0$ and $\gamma>1$ such that given each $R>0$ there holds \begin{equation}\label{super-exponential decay weak} |\langle u, \phi\rangle| \leq C e^{-\rho R^\gamma} \|\phi\|_{H^{r}(\mathbb R^n)}, \quad \text{for all } \phi \in C^{\infty}_0(\mathbb R^n\setminus B_R(0)). \end{equation} Here, $\langle \cdot,\cdot\rangle$ is the continuous extension of the Hermitian $L^2(\R^n)$-inner product as a sesquilinear form to $H^{-r}(\R^n)\times H^{r}(\R^n)$ and $B_R(0)$ is the closed ball of radius $R>0$ centered at the origin in $\R^n$. \end{definition} To answer Question \ref{question}, we need to impose the following additional assumption on $\{s_k\}_{k=1}^N$: \begin{enumerate}[\textbf{(H)}] \item\label{exponent condition} We assume $\{s_k\}_{k=1}^N \subset (0,\infty)\setminus \N$ with $s_1<s_2<\ldots <s_N$ and that \begin{equation} \begin{cases} s_k-s_j \notin \Z \quad &\text{for all $j\neq k$,} \quad \quad\text{if the dimension $n$ is even}\\ s_k -s_j\notin \frac{1}{2}\Z \quad &\text{for all $j\neq k$,} \quad \quad \text{if the dimension $n$ is odd}. \end{cases} \end{equation} \end{enumerate} Our main result may be stated as follows, which will be proved in Section~\ref{sec: entanglement}. 
\begin{theorem}[Entanglement principle]\label{thm: ent} Let $\mathcal{O}\subset \R^n$, $n\geq 2$, be a nonempty bounded open set and let $\{s_k\}_{k=1}^N$ satisfy \ref{exponent condition}. Assume that $\{u_k\}_{k=1}^N\subset H^{-r}(\R^n)$ for some $r\in \R$ and that each $u_k$ exhibits super-exponential decay at infinity in the sense of Definition~\ref{def_exp}. If \begin{align}\label{condition_UCP_u} u_1|_{\mathcal O}=\ldots=u_N|_{\mathcal O}=0 \quad \text{and} \quad \sum_{k=1}^N (b_k(-\Delta)^{s_k}u_k)\big|_{\mathcal O}=0, \end{align} for some $\{b_k\}_{k=1}^N\subset \C\setminus \{0\}$, then $u_k\equiv 0$ in $\R^n$ for each $k=1,\ldots,N$. \end{theorem} \begin{remark} Let us make some remarks about the optimality of the assumptions in the above theorem: \begin{enumerate}[(i)] \item The assumption in \ref{exponent condition} that $s_k-s_j \notin \Z$ for all $k\neq j$ is optimal in the sense that there are counterexamples in its absence, i.e., there would exist functions $\{u_k\}_{k=1}^N$ fulfilling \eqref{condition_UCP_u} but with $u_k \not \equiv 0$ for some $k=1,\ldots, N$. We refer the reader to \cite[Remark 1.10]{FKU24} for the construction of such counterexamples. However, as can be seen in the statement of the theorem, when the dimension is odd, we impose an additional requirement that $s_k-s_j$ is not an odd multiple of $\frac{1}{2}$. This assumption may not necessarily be optimal and may be an artifact of our proof. In odd dimensions, when two exponents $s_k$ and $s_j$ differ by an odd multiple of $\frac{1}{2}$, the corresponding terms $(-\Delta)^{s_k} u_k$ and $(-\Delta)^{s_j} u_j$ appear to create a resonance effect in our analysis which does not allow us to disentangle them from each other. For the sake of clarity of our presentation, we impose this additional assumption in odd dimensions. \item The super-exponential decay at infinity \eqref{super-exponential decay weak} imposed in the theorem may also not be optimal. In fact, for all but one step in our proof (see Proposition~\ref{prop_reduction from SM transform}), it suffices to assume that $\{u_k\}_{k=1}^{N}$ have Schwartz decay at infinity. It appears to us that some decay assumption on the distributions is unavoidable unless one were to use an entirely new methodology. Fortunately, when it comes to applications of our entanglement principle to inverse problems as well as Runge approximation properties of solutions to fractional polyharmonic equations, we only need to work with functions that are compactly supported in $\R^n$, and as such the super-exponential decay required in the theorem is automatically satisfied in these applications. \end{enumerate} We will present the key ideas in the proof of Theorem~\ref{thm: ent}, as well as a comparison with \cite[Theorem 1.8]{FKU24}, in Section~\ref{sec_outline_proof}. \end{remark} \subsection{Applications to inverse problems} \label{sec_ip_applications} Let us first give a brief overview of the literature on inverse problems for nonlocal equations. In \cite{GSU20}, it was discovered that the (UCP) for $(-\Delta)^s$, $s\in (0,1)$, can be used as a key tool to solve certain inverse problems for nonlocal equations. There, the authors showed that it is possible to determine an unknown function $q\in L^{\infty}(\Omega)$ from the knowledge of the so-called exterior Dirichlet-to-Neumann mapping $$ C^{\infty}_0(W_1)\ni f\mapsto \left.
(-\Delta)^su \right|_{W_2} \in H^{-s}(W_2),$$ where $u\in H^{s}(\R^n)$ is the unique solution to \begin{equation} \begin{cases} (-\Delta)^s u+q(x) u =0 &\text{ in }\Omega, \\ u=f &\text{ in }\Omega_e:=\R^n\setminus \overline{\Omega} \end{cases} \end{equation} and $W_1, W_2\Subset \Omega_e$ are two nonempty open sets. This inverse problem may be viewed as a nonlocal version of the well-known isotropic Calder\'{o}n problem in electrical impedance tomography. The connection between inverse problems for certain nonlocal equations and their strong (UCP) has since led to significant research in this direction. We mention several examples of related works. The works \cite{GRSU20,cekic2020calderon} investigate (UCP) under low regularity assumptions and recovery of lower order coefficients from a finite number of exterior measurements. In \cite{GLX,CLL2017simultaneously,KLZ-2022,GU2021calder}, the authors investigate inverse problems for nonlocal variable coefficient operators. The works \cite{CMR20,CMRU20} study inverse problems for higher-order fractional operators, and \cite{KRZ-2023,lili24b,LL2022inverse,lin2020monotonicity} consider nonlocal equations in the presence of additional nonlinear lower order terms. In the articles \cite{LLR2019calder,KLW2021calder,ruland2018exponential,RS17}, the authors derived stability results for similar inverse problems and studied inverse problems for certain evolution-type nonlocal equations involving both space and time. Very recently, a different perspective has also been employed: using a Caffarelli--Silvestre type reduction formula, some uniqueness results for inverse problems related to nonlocal PDEs have been obtained by reducing them to inverse problems for local PDEs, see e.g. \cite{CGRU2023reduction,LLU2023calder,LZ2024approximation}. Let us also mention that a new direction of research was initiated in the recent works \cite{feizmohammadi2024fractional_closed,FGKU_2025fractional} for solving nonlocal versions of the well-known and widely open anisotropic Calder\'{o}n problem stated on closed Riemannian manifolds. These works use entirely different properties of fractional Laplace--Beltrami operators and do not rely on (UCP). We refer the reader to the follow-up works \cite{Chien23,choulli2023fractional,FKU24,lili24a,lin2024fractional,LLU2022para,Quan24,ruland2023revisiting} in this direction. Before stating our first result on inverse problems, let us briefly motivate it by recalling an equivalent global formulation of the anisotropic Calder\'{o}n problem in the setting of the Euclidean space $\R^n$ using Cauchy datasets. The problem goes back to the pioneering paper of Calder\'{o}n \cite{calderon}. Let $\Omega\subset \R^n$ be a bounded Lipschitz domain with $n\geq 2$ and suppose that $A(x)=(A^{jk}(x))_{j,k=1}^n$ is a real-valued positive definite symmetric matrix on $\R^n$ that is equal to the identity in the exterior of $\Omega$. Consider the Cauchy dataset $$ \mathscr S_A=\{ \left(u, \nabla\cdot(A\nabla u)\right)\big|_{\Omega_e}\,:\, u\in C^{\infty}(\R^n)\quad\text{and}\quad \nabla\cdot(A\nabla u)=0 \quad \text{in $\Omega$}\,\}. $$ The anisotropic Calder\'{o}n problem is equivalent to the question of determining an a priori unknown matrix $A$ inside $\Omega$ from the knowledge of $\mathscr S_A$. It was noted by Tartar (an account is given in \cite{KV1984_Tartar}) that uniqueness is possible only modulo a gauge, as follows.
If $\Phi:\R^n\to \R^n$ is a diffeomorphism that fixes the exterior region $\Omega_e$, then there holds: $$ \mathscr S_{A} = \mathscr S_{B},\quad \text{where}\quad B(x)= \frac{(D\Phi)A(D\Phi)^t}{|\det(D\Phi)|}\bigg|_{\Phi^{-1}(x)}\quad \forall\,x\in \R^n.$$ Here, $D\Phi$ is the Jacobian matrix of $\Phi$. We mention that the anisotropic Calder\'{o}n problem has been solved in dimension two \cite{ASENS_2001_4_34_5_771_0,dd646efa-ce8c-38c2-804a-74a1c777c7c0} but remains widely open in higher dimensions outside the category of real-analytic $A$ (the real-analytic case was solved in \cite{https://doi.org/10.1002/cpa.3160420804}). We refer the reader to \cite{SLSEDP_2012-2013____A13_0} for a survey of the anisotropic Calder\'{o}n problem. \subsection{Recovery of an anisotropic principal order term} For our first result, we solve a variant of the anisotropic Calder\'{o}n problem discussed above in the presence of lower order nonlocal terms in the equation. As we will see, in dimensions three and higher, we can determine the matrix $A$ uniquely without any gauge. To describe the result, let us start with the setup. Let $\Omega \subset \R^n$ be a bounded Lipschitz domain with $n\geq 3$ and let us fix \begin{equation}\label{delta_0} \delta_0 \in \left(-\frac{n}{2}, \frac{n}{2}-2\right). \end{equation} Let $A(x)=\LC A^{jk}(x)\RC_{j,k=1}^n$ be a smooth real-valued positive definite symmetric matrix on $\R^n$ such that \begin{equation} \label{matrix_iden} \LC A^{jk}(x)\RC_{j,k=1}^n =\mathds{1}_{n\times n} \quad \text{for $x\in \Omega_e=\R^n\setminus \overline\Omega$.} \end{equation} We consider the equation \begin{align}\label{equ: main} L_Au:=-\nabla \cdot (A(x)\nabla u)+P_0 u=0 \qquad &\text{ in }\Omega, \end{align} in the distributional sense, where the local anisotropic principal part has the divergence form $$-\nabla \cdot (A(x)\nabla u)=-\sum_{j,k=1}^n \frac{\partial}{\partial x_j}\left(A^{jk}(x)\frac{\partial u}{\partial x_k} \right),$$ and where $P_0$ is the fractional (variable coefficient) polyharmonic operator defined by \begin{equation}\label{P_0_def} P_0u := \sum_{k=1}^N p_k\, (-\Delta)^{s_k}\left(p_k u\right), \end{equation} for some $N\geq 1$, some $\{s_k\}_{k=1}^N\subset (0,\frac{1}{2}]$ with $s_1<\ldots<s_N$ and some collection of complex-valued functions $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$ that satisfy \begin{equation} \label{p_k_def} p_k(x) \equiv b_k \quad \text{for $k=1,\ldots,N$ and all $x$ in some open neighbourhood $\mathcal U$ of $\overline{\Omega}$,} \end{equation} where $\{b_k\}_{k=1}^N\subset \C\setminus \{0\}$ is some set of $N$ nonzero numbers. We aim to study an inverse problem for \eqref{equ: main} subject to exterior measurements of its solutions. Precisely, our goal is to determine an a priori unknown matrix $A(x)$ in \eqref{equ: main} from the knowledge of the following (exterior) Cauchy dataset $$ \mathcal C_A\subset W^{2,2}_{\delta_0}(\Omega_e) \times L^2_{\delta_0+2}(\Omega_e) \qquad \text{with}\qquad \Omega_e=\R^n\setminus \overline{\Omega},$$ defined by \begin{equation} \label{eq: Cauchy data} \mathcal{C}_A:= \left\{\left( u|_{\Omega_e}, \left.(L_Au) \right|_{\Omega_e} \right)\,:\, \text{$u\in W^{2,2}_{\delta_0}(\R^n)$ \, with \, $L_Au=0$ \, in $\Omega$} \right\}, \end{equation} where $\delta_0$ is as in \eqref{delta_0} and we recall that the operator $L_A$ is given by \eqref{equ: main} and that the equation is to be understood in the distributional sense.
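Before recalling the precise definition of these weighted spaces, let us note one elementary point that is convenient to keep in mind: in the standard conventions for such spaces (see e.g. \cite{McOwen1979TheBO, Bartnik1986TheMO}), the weights are smooth and locally bounded, so that
\[
C^{\infty}_0(\R^n)\subset W^{2,2}_{\delta}(\R^n) \qquad \text{for every } \delta\in\R.
\]
In particular, any $u\in C^{\infty}_0(\R^n)$ with $L_Au=0$ in $\Omega$ contributes the pair $\left(u|_{\Omega_e},(L_Au)|_{\Omega_e}\right)$ to $\mathcal C_A$.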
We will define the weighted Sobolev spaces $L^2_\delta(\R^n)$ and $W^{k,2}_\delta(\R^n)$ in Section \ref{sec: preliminary}. The richness of the Cauchy dataset, $\mathcal C_A$, will be discussed in Proposition~\ref{prop_solvability}, see also (iii) in Remark~\ref{rmk_thm_ip1}. Our first inverse problem can now be formulated as follows: \begin{enumerate}[\textbf{(IP1)}] \item\label{Q:IP1} \textbf{Inverse Problem 1.} Can one uniquely determine the matrix-valued function $A$ in $\Omega$ from the exterior Cauchy dataset $\mathcal{C}_A$ defined by \eqref{eq: Cauchy data}? \end{enumerate} We prove the following global uniqueness result for \ref{Q:IP1} in Section~\ref{sec: IP}. \begin{theorem}[Global anisotropic uniqueness result]\label{Thm: global uniqueness 1} Let $\Omega \subset \R^n$, $n\geq 3$, be a bounded Lipschitz domain, let $N \in \N$, let $\{s_k\}_{k=1}^N\subset (0,\frac{1}{2}]$ with $s_1<\ldots<s_N$. Let $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$ satisfy \eqref{p_k_def}. For $j=1,2,$ let $A_j\in C^{\infty}(\R^n;\R^{n^2})$ be a positive definite symmetric matrix that satisfies \eqref{matrix_iden} and subsequently define $\mathcal{C}_{A_j}$ as the exterior Cauchy dataset \eqref{eq: Cauchy data} (with $A=A_j$ and $\delta_0$ fixed as in \eqref{delta_0}). Then, \begin{align} \mathcal{C}_{A_1}=\mathcal{C}_{A_2}\quad \text{implies that}\quad A_1=A_2 \text{ in }\Omega. \end{align} \end{theorem} \begin{remark} \label{rmk_thm_ip1} Let us make some remarks about the above theorem. \begin{itemize} \item[(i)]{To the best of our knowledge, the above theorem is new even in the case $N=1$. Let us also point out that in the case that $N=1$, the condition \eqref{p_k_def} is not needed as long as $p_1$ does not vanish at any point in an open neighborhood of $\overline{\Omega}$. Observe that in comparison with the anisotropic Calder\'{o}n problem, and somewhat surprisingly, we have proven that in the presence of the nonlocal lower order terms $P_0u$ in the equation $L_Au=0$ in $\Omega$, there is no diffeomorphism gauge and one indeed recovers the matrix $A$ uniquely from the exterior Cauchy dataset.} \item[(ii)]{The assumption on the dimension, namely $n\geq 3$, is made for simplicity of presentation of the forward problem and is related to the study of Fredholm properties of the operator $u\mapsto \nabla\cdot(A(x)\nabla u)$, which itself depends in a key way on mapping properties of the Laplacian, see, for example, \cite{McOwen1979TheBO, Bartnik1986TheMO}. In dimensions three and higher, the Laplacian acts as an isomorphism between certain pairs of weighted Sobolev spaces of the form $W^{k,2}_{\delta}(\R^n)$. This is no longer true in dimension two and one needs to work with more complicated Sobolev spaces.} \item[(iii)]{The assumptions that $\{s_k\}_{k=1}^{N}\subset (0,\frac{1}{2}]$ and that $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$ are also related to the forward theory and the structure of the Cauchy dataset $\mathcal C_A$. Under these two assumptions, the operator $L_A$ enjoys the Fredholm properties proven in Proposition~\ref{prop_solvability}. Finally, let us mention that, as we will see in the proof of Theorem~\ref{Thm: global uniqueness 1}, it suffices for us to work with certain smooth elements in $\mathcal C_A$ subject to certain growth conditions at infinity.
Having this in mind, let us note that we could have alternatively defined a broader Cauchy dataset as follows $$ \widetilde{\mathcal C}_{A} = \{ \left( u|_{\Omega_e}, (L_Au)|_{\Omega_e}\right)\,:\, \text{$u\in C^{\infty}(\R^n)$ \, with \, $L_Au=0$ \, in $\Omega$}\},$$ where we recall that $L_A:C^{\infty}(\R^n) \to C^{\infty}(\R^n)$ since $\{p_k\}_{k=1}^N \subset C^{\infty}_0(\R^n)$. Thus, our formulation of the Cauchy dataset $\mathcal C_A$ should be viewed as working with less data compared to $\widetilde{\mathcal C}_A$. } \end{itemize} \end{remark} \subsection{Recovery of a zeroth order local term} Next, we discuss a partial data inverse problem for recovering zeroth order coefficients for fractional polyharmonic equations. We assume the more restrictive condition that $\{ b_k \}_{k=1}^N\subset (0,\infty)$ and an additional condition on the zeroth order coefficient. To state this partial data inverse problem, let us again consider $\Omega \subset \R^n$ to be a bounded Lipschitz domain with $n\geq 2$, let $\{s_k\}_{k=1}^N$ be such that \ref{exponent condition} is satisfied. Next, let $q\in L^{\infty}(\Omega)$ and consider the exterior Dirichlet value problem \begin{equation}\label{equ: main 2} \begin{cases} P_q u =0 &\text{ in }\Omega, \\ u=f &\text{ in }\Omega_e, \end{cases} \end{equation} where \begin{equation}\label{P_def} P_qu := \sum_{k=1}^N b_k (-\Delta)^{s_k}u + q(x)u , \end{equation} We will assume that $b_k>0$ for all $k=1,\ldots, N$, and that \begin{equation}\label{eigenvalue condition} \text{$0$ is not a Dirichlet eigenvalue of $P_q$} \end{equation} in the sense that \begin{equation} \begin{cases} P_q u =0 & \text{ in }\Omega \\ u=0 &\text{ in }\Omega_e \end{cases} \text{ implies }u\equiv 0 \text{ in }\R^n. \end{equation} Letting $W_1, W_2\Subset \Omega_e$ be two bounded nonempty open sets, and by using the condition \eqref{eigenvalue condition}, we can define the so-called \emph{Dirichlet-to-Neumann} (DN) map of \eqref{equ: main 2} via \begin{equation}\label{DN map} \Lambda_q: \wt H^{s_N}(W_1) \to H^{-s_N}(W_2), \quad f\mapsto \left. (P_q u_f) \right|_{W_2}, \end{equation} where $u_f \in H^{s_N}(\R^n)$ is the unique solution to \eqref{equ: main 2}. We refer the reader to Section~\ref{sec: preliminary: fcn} for the definition of the $\wt H^{s}(U)$ spaces (for $s\in \R$) and to Section~\ref{sec: exterior problem} for the well-posedness of equation \eqref{equ: main 2}. \begin{enumerate}[\textbf{(IP2)}] \item\label{Q:IP2} \textbf{Inverse Problem 2.} Can one uniquely determine the potential $q$ in $\Omega$ from the exterior DN map $\Lambda_q$ defined by \eqref{DN map}? \end{enumerate} We prove the following uniqueness result for \ref{Q:IP2} in Section~\ref{sec: IP}. \begin{theorem}[Global uniqueness for bounded potentials]\label{Thm: global uniqueness 2} Let $\Omega \subset \R^n$, $n\geq 2$, be a bounded Lipschitz domain, and let $W_1, W_2\Subset \Omega_e$ be nonempty bounded open sets. Let $N \in \N$, $\{b_k\}_{k=1}^N \subset (0,\infty)$ and let $\{s_k\}_{k=1}^N$ satisfy \ref{exponent condition}. For each $j=1,2,$ let $q_j\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue condition} and define $\Lambda_{q_j}$ to be the DN map \eqref{DN map} (with $q=q_j$). Then, \begin{align} \Lambda_{q_1}f=\Lambda_{q_2}f \text{ for any }f\in C^\infty_0(W_1), \quad \text{implies that}\quad q_1=q_2 \text{ in }\Omega. \end{align} \end{theorem} \begin{remark} Note that when $N=1$, and the exponent $s_1$ belongs to $(0,1)$, the preceding theorem reduces to the known main result of \cite{GSU20}. 
Let us also comment that by rewriting the nonlocal Schr\"odinger operator $P_q$ as $$ P_q := \psi(-\Delta) +q(x), $$ where $\psi (\lambda)=\sum_{k=1}^N b_k \lambda^{s_k}$, we obtain a formulation of the inverse problem via the \emph{Bernstein function}\footnote{A function $f:(0,\infty) \to \R$ is a Bernstein function if $f\in C^\infty((0,\infty))$, $f(\lambda)\geq 0 $ for all $\lambda>0$, and $(-1)^{k-1} \frac{d^kf (\lambda)}{d\lambda^k} \geq 0$ for all $\lambda>0$ and for all $k\in \N$. A typical example of a Bernstein function is $f(\lambda)=\lambda^s$, for any $s\in (0,1)$. It is also known that $b_1f_1+b_2f_2$ is a Bernstein function for any constants $b_1,b_2>0$, provided $f_1$ and $f_2$ are Bernstein functions.}. Hence, Theorem \ref{Thm: global uniqueness 2} can be regarded as a generalization of the fractional Calder\'on type inverse problem to Bernstein-type operators, which, to the best of our knowledge, has not been solved before. We refer the reader to \cite{KM18_Bernstein} for related studies of extension problems for complete Bernstein functions associated with the Laplace operator. Another related problem was investigated in \cite{LLL_poly}. \end{remark} Theorem \ref{Thm: global uniqueness 2} can be proved by using the following Runge approximation property for solutions of \eqref{equ: main 2}, which itself relies on the entanglement principle of Theorem \ref{thm: ent}. The Runge approximation property may be of independent interest in control theory. We prove this theorem in Section~\ref{sec: IP}. \begin{theorem}[Runge approximation]\label{Thm: Runge} Let $\Omega\subset \R^n$ be a bounded open set, and let $W\Subset \Omega_e$ be a bounded nonempty open set. Let $N \in \N$, $\{b_k\}_{k=1}^N \subset (0,\infty)$ and let $\{s_k\}_{k=1}^N$ satisfy \ref{exponent condition}. Let $q\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue condition}. Then, given any function $g\in L^2(\Omega)$ and any $\eps>0$, there exists a solution $u=u_\eps\in H^{s_N}(\R^n)$ of equation \eqref{equ: main 2} for some exterior Dirichlet data $f=f_\epsilon\in C^{\infty}_0(W)$ such that $ \left\| u_\eps -g \right\|_{L^2(\Omega)}<\eps.$ \end{theorem} Apart from Theorems \ref{Thm: global uniqueness 1} and \ref{Thm: global uniqueness 2}, we would expect further applications of our entanglement principle in the study of inverse problems for systems of nonlocal equations. We leave this as a future direction of research. \subsection{Outline of the key ideas in the proof of Theorem~\ref{thm: ent}} \label{sec_outline_proof} Let us discuss some of the key ideas in the proof of Theorem~\ref{thm: ent} in Section~\ref{sec: entanglement}. Our starting point will be to show that the principle can be derived from an analogous statement for smooth functions with super-exponential decay at infinity, see the statement of Theorem~\ref{thm_ent_smooth} for this version of the theorem. This is not surprising, as fractional Laplace operators commute with convolution operators on $\R^n$, as can be readily seen from their definition via Fourier transforms. The proof of the smooth case $\{u_k\}_{k=1}^N\subset C^{\infty}(\R^n)$ will then be divided into three main steps.
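Before describing these steps, let us record, as a simple observation (stated here only for Schwartz functions, which suffices for the mollification argument used later), the commutation property alluded to above: for $u,\psi\in \mathcal S(\R^n)$ and $s\geq 0$, since the Fourier transform turns convolution into a product (up to a normalizing constant), one has
$$ (-\Delta)^s\left(u\ast \psi\right) \;=\; \mathcal F^{-1}\left\{ |\xi|^{2s}\,\mathcal F\left(u\ast\psi\right)(\xi)\right\} \;=\; \left((-\Delta)^s u\right)\ast\psi \qquad \text{in }\R^n.$$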
\\ \noindent {\bf Step I.} In the first step, we uncover a hidden connection between the analogue of equation \eqref{ent_u_cond} in the smooth case (see Theorem~\ref{thm_ent_smooth}) and the holomorphic function $$F:\{z\in \C\,:\, \mathrm{Re}(z)\geq 0\} \to \C,$$ that is (for now formally) defined via the expression \begin{equation}\label{F_intro} F(z):= \sum_{k=1}^N \frac{\Gamma(z+1+\alpha_k)}{\Gamma(-\alpha_k)\Gamma(1+\alpha_k)}\int_0^\infty (e^{t\Delta} \,u_k)(x) t^{-(z+1+\alpha_k)}\, dt\end{equation} where $\alpha_k\in (0,1)$ is the fractional part of $s_k$ for $k=1,\ldots,N$, and $x$ is a fixed point inside $\mathcal O$. We will prove that \eqref{ent_u_cond} implies that the function $F(z)$ above must vanish on the positive integers, that is to say, \begin{equation}\label{F_m_intro} F(m)=0 \quad \text{for $m=1,2,\ldots$}.\end{equation} We will carefully analyze the well-posedness of the definition \eqref{F_intro}, showing that $F$ is indeed a holomorphic function of $z$ in the right half-plane. We will then derive precise bounds for its growth rates as $|z|\to \infty$, see Lemma~\ref{lem_analytic_bounds}. The remaining part of this first step is to establish Proposition~\ref{prop_F_zero}, showing that the only holomorphic function on $\left\{ z\in \C: \, \textrm{Re}(z)\geq 0\right\}$ that vanishes on the positive integers and enjoys the growth rates of Lemma~\ref{lem_analytic_bounds} is the zero function. This part relies crucially on an interpolation theorem for holomorphic functions in the same spirit as Carlson's theorem in complex analysis. The version that we need here is due to Pila \cite{pila_05}, see Theorem~\ref{thm_Pila} for its statement. \\ \noindent{{\bf Step II.}} Once we have established that $F(z)=0$ for all $z\in \{z\in \C\,:\,\textrm{Re}(z)\geq 0\}$, we aim to see what further information about the functions $u_k$, $k=1,\ldots,N$ may be inferred from it. Let us also comment that this is a key step that diverges from the approach in \cite{FKU24}. In \cite{FKU24}, the authors showed that an expression analogous to $F(z)$ above appears on closed Riemannian manifolds. Subsequently, they showed that $F(z)$ in their setup is globally holomorphic away from the nonpositive integers, thanks in part to the large time exponential decay of the heat semigroup $e^{t\Delta_g}$ when acting on $\Delta_g u$ with $u$ smooth (the operator $\Delta_g$ denotes the Laplace-Beltrami operator). This allowed them to perform singularity analysis near the poles of $F(z)$ and conclude that each of the functions $u_k$ must be zero in the case of closed Riemannian manifolds, see also \cite[Remark 1.9]{FKU24}. However, as can be readily seen from the expression \eqref{heat_kernel} for the heat kernel on $\R^n$, the large time behaviour of the heat semigroup in $\R^n$ only exhibits a polynomial decay of order $t^{-\frac{n}{2}}$, and thus the expression $F(z)$ becomes divergent as one moves into the left half-plane $\textrm{Re}(z)< 0$. Nevertheless, we will prove that under Schwartz class decay for the functions $u_k$, it is possible to meromorphically continue the function $F(z)$ into the left half-plane. This is analogous to the well-known meromorphic continuation of the Gamma function to the left half-plane based on a recursive equation that it enjoys in the right half-plane, namely \eqref{recursion of Gamma function}. We refer the reader to Lemma~\ref{lem_meromorphic} for the expression of this meromorphic extension.
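To fix ideas, let us illustrate this recursion-based extension in the simplest case of the Gamma function itself: iterating \eqref{recursion of Gamma function} once gives
$$ \Gamma(z)=\frac{\Gamma(z+2)}{z(z+1)},$$
whose right-hand side is holomorphic on $\{\mathrm{Re}(z)>-2\}$ apart from simple poles at $z=0$ and $z=-1$; repeating this argument extends $\Gamma$ step by step to all of $\C$ with simple poles at the nonpositive integers. The meromorphic extension of $F(z)$ in Lemma~\ref{lem_meromorphic} should be thought of in the same spirit.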
Once this extension is obtained, we proceed to perform singularity analysis near its poles and show that this leads to disentanglement of the terms in the expression \eqref{ent_u_cond}. Assuming the condition \ref{exponent condition}, this leads us to show that the following specific moments must vanish for each fixed $x\in \mathcal O$ and each $m\in \N\cup \{0\}$, \begin{equation}\label{moments_intro} \begin{cases} \int_{\R^n} u_k(y)\, |x-y|^{2m}\,dy =0, &\text{if $n$ is even,}\\ \int_{\R^n} u_k(y)\, |x-y|^{2m+1}\,dy =0, &\text{if $n$ is odd}. \end{cases} \end{equation} We remark that, in odd dimensions, if two exponents differ by an odd multiple of $\frac{1}{2}$, the singularity analysis becomes more complicated, as some pairs of terms in \eqref{ent_u_cond} create a resonance effect leading to more complicated expressions. For the sake of clarity of our presentation, we rule out this possibility via the extra assumption in odd dimensions. We comment that up to the end of this step, only Schwartz class decay is needed.\\ \noindent{{\bf Step III.}} The last step of our analysis is to show that the vanishing of the previous moments for each $x\in \mathcal O$ would imply that the functions $u_k$ must all vanish globally on $\R^n$. This is the only step of the proof where we need to impose more spatial decay on the functions $u_k$ than Schwartz class decay. Indeed, it seems we must have super-exponential decay to be able to conclude this step. Our proof relies on showing that under super-exponential decay, the vanishing of the previous moments implies that the spherical averages of $u_k$ must be zero over all spheres centered at points of $\mathcal O$. This step is based on the study of Fourier--Laplace transforms. The proof is then completed thanks to well-known support theorems for these geometric Radon-type transforms. \subsection{Organization of the paper} The paper is organized as follows. In Section \ref{sec: preliminary}, we introduce basic notions used in this work and prove the solvability and well-posedness for \eqref{equ: main} and \eqref{equ: main 2}, respectively. In Section \ref{sec: entanglement}, we derive the entanglement principle, using several tools, including analytic interpolation and a reduction to spherical mean transforms. In Section~\ref{sec: IP}, we apply the entanglement principle to show the global uniqueness results for \ref{Q:IP1}--\ref{Q:IP2} and prove the Runge approximation property for solutions to \eqref{equ: main_solv}. \section{Preliminaries}\label{sec: preliminary} \subsection{Function spaces} \label{sec: preliminary: fcn} We briefly discuss the notation for weighted Sobolev spaces as well as for fractional Sobolev spaces. \subsubsection{Weighted Sobolev spaces} Following the notations in \cite{4b44167c-d575-3418-b803-b8adaeefcb60, McOwen1979TheBO}, we define for $\delta\in \R$ the weighted Sobolev space $L^2_\delta(\R^n)$ as the space of measurable functions $u\in L^2_{\textrm{loc}}(\R^n)$ such that the norm \begin{equation}\label{def_weight_sobolev} \|u\|_{L^2_\delta(\R^n)}= \left(\int_{\R^n} (1+|x|^2)^{\delta}\,|u(x)|^2\,dx \right)^{\frac{1}{2}} \end{equation} is finite.
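As a simple illustration of the role of the weight, note that the constant function $u\equiv 1$ belongs to $L^2_\delta(\R^n)$ if and only if $\delta<-\frac{n}{2}$, since $\int_{\R^n}(1+|x|^2)^{\delta}\,dx$ is finite precisely in that range; in particular, constants do not belong to $L^2_{\delta_0}(\R^n)$ for $\delta_0$ as in \eqref{delta_0}.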
Next, given any $k=0,1,\ldots$ and any $\delta\in \R$ we define $W^{k,2}_\delta(\R^n)$ in an analogous way corresponding to the norms \begin{equation}\label{def_weight_sobolev_derivative} \|u\|_{W^{k,2}_\delta(\R^n)}= \sum_{j=0}^k \sum_{|\beta|=j}\|D^{\beta}u\|_{L^2_{\delta+j}(\R^n)}, \end{equation} where the second summation is taken over multi-indices $\beta\in (\N\cup\{0\})^n$ with $|\beta|=\beta_1+\ldots+\beta_n=j$ and we have $D^\beta = \frac{\p^{|\beta|}}{\p x_1^{\beta_1}\ldots\, \p x_n^{\beta_n}}.$ The spaces $W^{k,2}_\delta(U)$ are to be understood similarly for any open set $U\subset \R^n$. Let us also mention that the notations $W^{k,2}(\R^n)=H^{k}(\R^n)$ for $k\in \N$ and the notation $W^{0,2}(\R^n)=L^2(\R^n)$ will be reserved for the standard Sobolev spaces, not to be confused with the weighted Sobolev spaces above. The above weighted Sobolev spaces will be key when it comes to discussing the structure of the Cauchy dataset $\mathcal C_A$. In particular, we will need to use the following elliptic regularity result of \cite{Bartnik1986TheMO} that we recall here. We caution the reader that the notations of Bartnik for weighted Sobolev spaces are slightly different from the standard notation \eqref{def_weight_sobolev}-\eqref{def_weight_sobolev_derivative}. Thus, for the sake of the reader's convenience, we will state the lemma here, adjusted to fit our notation. \begin{lemma}(Elliptic regularity, cf. Proposition 1.6 in \cite{Bartnik1986TheMO}) \label{lem_elliptic_regularity_weighted} Let $n\geq 2$ and let $A$ be a real-valued positive definite symmetric matrix that satisfies \eqref{matrix_iden}. Let $\mathcal L_0:= \nabla\cdot(A(x)\nabla)$ and finally let $\delta\in \R$. If $u\in L^{2}_{\delta}(\R^n)$ and $\mathcal L_0 u \in L^{2}_{\delta+2}(\R^n)$, then $u \in W^{2,2}_{\delta}(\R^n)$ and there holds $$ \|u\|_{W^{2,2}_{\delta}(\R^n)} \leq C \left( \|\mathcal L_0 u\|_{L^{2}_{\delta+2}(\R^n)} + \|u\|_{L^{2}_{\delta}(\R^n)} \right),$$ for some $C>0$ independent of $u$. \end{lemma} \subsubsection{Fractional Sobolev spaces} Recall that the fractional Laplacian $(-\Delta)^s$, $s\geq 0$, is given via the Fourier transform as follows. \begin{equation} (-\Delta)^s u=\mathcal{F}^{-1}\left\{ \abs{\xi}^{2s}\mathcal{F}u(\xi)\right\}, \quad \text{for }u\in \mathcal{S}(\R^n), \end{equation} where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier and inverse Fourier transform, respectively. For the functional spaces, we write $H^s (\R^n ) = W^{s,2}(\R^n)$ for the $L^2$-based Sobolev space with norm \begin{equation} \norm{u}_{H^s(\R^n)} = \left\| \langle \xi \rangle^s \mathcal F u\right\|_{L^2(\R^n)}, \end{equation} for any $s\in \R$, where $\langle \xi \rangle =\LC1+\abs{\xi}^2\RC^{1/2}$. Given a nonempty open set $U\subset \R^{n}$, the space $C_0^{\infty}(U)$ consists of all $C^{\infty}(\mathbb{R}^{n})$-smooth functions with compact support in $U$. Analogously to \cite{GSU20}, we adopt, for each $s\in \R$, the notations \begin{align*} H^{s}(U) & :=\left\{u|_{U}: \, u\in H^{s}(\R^{n})\right\},\\ \wt H^{s}(U) & :=\text{closure of \ensuremath{C_{0}^{\infty}(U)} in \ensuremath{H^{s}(\R^{n})}},\\ H_{0}^{s}(U) & :=\text{closure of \ensuremath{C_{0}^{\infty}(U)} in \ensuremath{H^{s}(U)}}. \end{align*} The space $H^{s}(U)$ is a Banach space when equipped with the quotient norm \[ \|u\|_{H^{s}(U)}:=\inf\left\{ \|v\|_{H^{s}(\mathbb{R}^{n})}: \, v\in H^{s}(\mathbb{R}^{n})\mbox{ and }v|_{U}=u\right\} . \] The space $H^{-s}(U)$, with any $s\in \R$, may be viewed as the topological dual space of $\wt H^s(U)$.
We also use the notation $\langle v, w\rangle_{H^{-s}(U),\widetilde H^s(U)},$ to denote the continuous extension of $$ (v,w)_{L^2(U)}= \int_{U} v(x)\,\overline{w}(x)\,dx \qquad \forall\, v,w \in C^{\infty}_0(U),$$ as a sesquilinear form to all of $H^{-s}(U) \times \widetilde H^{s}(U).$ We also recall a mapping property for the fractional Laplacian. \begin{lemma}[Lemma 2.1 in \cite{GSU20}]\label{Lem: mapping prop of frac Lap} For $s\geq 0$, the fractional Laplacian extends as a bounded map \begin{equation} (-\Delta)^s : H^a(\R^n)\to H^{a-2s}(\R^n), \text{ for }a\in \R. \end{equation} \end{lemma} \subsection{Nonlocal operators defined via the heat semigroup} For values $s\in (0,1)$, there are several equivalent definitions of the fractional Laplace operator $(-\Delta)^s$. Here, let us use the heat semigroup definition, which will be suitable for our analysis. We first recall the definition of the heat semigroup. Let \begin{align}\label{heat_kernel} p_t(y):=\frac{1}{(4\pi t)^{n/2}}e^{-\frac{|y|^2}{4t}}, \text{ for }y\in \R^n \text{ and }t>0, \end{align} be the heat kernel of the heat operator $\p _t -\Delta$ for $(t,x)\in (0,\infty)\times \R^n$. Let $u\in H^s(\R^n)$, and define $e^{t\Delta}u\colon [0,\infty)\times \R^n\to\C$ by \begin{align}\label{heat kernel representation} e^{t\Delta}u(x):=\int_{\R^n} p_t(x-y)u(y)\, dy\quad \text{for}\quad t>0 \end{align} and also $e^{0\Delta}u(x):=u(x)$. Then $ e^{t\Delta}u \in C^{\infty}([0,\infty);H^s(\R^n))$ is the unique solution to \begin{align}\label{heat equation} \begin{cases} \LC \p_t -\Delta \RC U=0 &\text{ in }(0,\infty) \times \R^n,\\ U(x,0)=u(x)&\text{ in }\R^n. \end{cases} \end{align} It is well known that \begin{align} \left\| e^{t\Delta}u\right\|_{H^s(\R^n)}\leq \norm{u}_{H^s(\R^n)}, \text{ for }t\geq 0. \end{align} Next, for $s\in (0,1)$, we recall the well-known equivalent expression for the fractional Laplacian given via \begin{equation}\label{fractional heat semi} \LC -\Delta \RC ^s u(x) =\frac{1}{\Gamma (-s)}\int_0^\infty \frac{e^{t\Delta} u(x)-u(x)}{t^{1+s}}\, dt, \end{equation} where $\Gamma(\cdot)$ is the Gamma function defined by \begin{equation}\label{Gamma function} \Gamma(z)=\int_0^\infty e^{-t}t^{z-1}\, dt, \quad \text{for $\textrm{Re}(z)>0$.} \end{equation} As the Gamma function plays an essential role in our analysis, let us also mention the recursion formula \begin{equation}\label{recursion of Gamma function} \Gamma(z+1)=z\,\Gamma(z)\quad \text{or}\quad \Gamma(z)=\frac{\Gamma(z+1)}{z} \quad \text{for $\textrm{Re}(z)>0$}. \end{equation} Indeed, the above recursion formula is important as it allows one to extend the Gamma function as a holomorphic function to all of the complex plane $\C$ except at the nonpositive integers, where the extended function has simple poles. Throughout the remainder of this paper, the Gamma function $\Gamma$ is to be understood in this extended sense. Let us close this section by noting that several other equivalent definitions of the fractional Laplace operator are known, see, for example, the survey article \cite{Kwansnicki17} for ten equivalent definitions. \subsection{On the Cauchy dataset $\mathcal C_A$} \label{sec: C_q} In order to prove Theorem~\ref{Thm: global uniqueness 1}, we first need to show that the Cauchy dataset $\mathcal C_A$ makes sense, that is to say, it is not empty and possesses enough elements for us to study the inverse problem.
In fact, as we will see in a moment, this set is infinite-dimensional and we can precisely characterize many of its elements, which suffices for our purpose of solving \ref{Q:IP1}. To this end, let us discuss the Poisson equation \begin{equation}\label{equ: main_solv} L_A u =F \qquad \text{ in }\R^n, \end{equation} with $F\in L^2_{\delta_0+2}(\R^n)$, where $\delta_0$ is as in \eqref{delta_0} and is fixed throughout this manuscript; we refer the reader to Section~\ref{sec: preliminary: fcn} for the definition of weighted Sobolev spaces. Later, when it comes to discussing the Cauchy dataset $\mathcal C_A$, we will naturally impose that $\supp F \subset \overline{\Omega_e}$, where we recall that $\Omega_e=\R^n\setminus \overline\Omega$. To solve the inverse problem \ref{Q:IP1}, it suffices for us to work only with $F \in C^{\infty}_0(\Omega_e)$. \begin{remark} \label{remark_fractional_domain_weight} Let us remark that, given any $\delta\in \R$, the operator $L_A: W^{2,2}_\delta(\R^n) \to L^2_{\delta+2}(\R^n)$ defined by \eqref{equ: main} is a bounded linear operator. Recall that $L_A$ has a local divergence part for which this mapping property is obvious. Considering next the fractional polyharmonic part $P_0$ given by \eqref{P_0_def}, we note that given any $u\in W^{2,2}_\delta(\R^n)$, we have that $p_k u \in H^2(\R^n)$ for $k=1,\ldots,N$, as the functions $p_k$ are all smooth and compactly supported. Thus, using again the fact that the $p_k$'s are compactly supported together with the mapping property for fractional Laplace operators in Lemma~\ref{Lem: mapping prop of frac Lap}, we deduce that $p_k (-\Delta)^{s_k} (p_ku) \in L^2_{\delta'}(\R^n)$ for any $\delta'\in \R$. Recall that $\{s_k\}_{k=1}^N \subset (0,\frac{1}{2}].$ \end{remark} In the following proposition, it is useful to note that given any $\delta\in \R$, the topological dual of $L^2_{\delta}(\R^n)$ is $L^2_{-\delta}(\R^n)$ and also that the adjoint of $L_A:W^{2,2}_{\delta}(\R^n)\to L^2_{\delta+2}(\R^n)$ is given by $\overline{L_A}: L^{2}_{-\delta-2}(\R^n) \to (W^{2,2}_{\delta}(\R^n))^\star$. Note also that for $n\geq3$, we have for any $\delta\in \R$, \begin{equation}\label{delta_0_property} \delta \in \left( -\frac{n}{2}, \frac{n}{2}-2\right) \iff -\delta-2 \in \left( -\frac{n}{2}, \frac{n}{2}-2\right).\end{equation} \begin{proposition}[Solvability of Poisson equation]\label{prop_solvability} Let $n\geq 3$. Let $A$ be a smooth positive definite real-valued symmetric matrix on $\R^n$ that satisfies \eqref{matrix_iden}. Let $N \in \N$, let $\{p_k\}_{k=1}^N \subset C^\infty_0(\R^n)$ satisfy \eqref{p_k_def} and finally let $\{s_k\}_{k=1}^N \subset (0,\frac{1}{2}]$. Let $\delta_0$ be as in \eqref{delta_0}. The operator \begin{equation}\label{mapping solvable} L_A: W^{2,2}_{\delta_0}(\R^n) \to L^2_{\delta_0+2}(\R^n) \end{equation} defined in \eqref{equ: main} is Fredholm with index zero. Defining \begin{align}\label{K_A_def} \mathcal{K}_{A}:=\left\{ u\in W^{2,2}_{\delta_0}(\R^n): \, L_A u=0 \text{ in }\R^n\right\}, \end{align} we have the inclusions \begin{equation}\label{K_A_reg} \mathcal K_A \subset W^{k,2}_{-2-\delta_0}(\R^n) \cap W^{k,2}_{\delta_0}(\R^n) \quad \text{for any $k\in \{0\}\cup \N$.} \end{equation} There are two mutually exclusive possibilities: \begin{enumerate}[(i)] \item $\mathcal{K}_A=\{0\}$. In this case, for $F \in L^2_{\delta_0+2}(\R^n)$, the Poisson equation \eqref{equ: main_solv} has a unique solution $u\in W^{2,2}_{\delta_0}(\R^n)$. \item $\dim(\mathcal{K}_A)=m$, for some $m\in \N$.
In this case, for $F \in L^2_{\delta_0+2}(\R^n)$, the Poisson equation \eqref{equ: main_solv} admits some solution $u\in W^{2,2}_{\delta_0}(\R^n)$ if and only if \begin{equation}\label{phi_ortho} \left(F, \overline{\zeta}\right)_{L^{2}(\R^n)}=0 \qquad \forall\, \zeta \in \mathcal K_A.\end{equation} \end{enumerate} Moreover, if $F\in C^{\infty}_0(\R^n)$ satisfies \eqref{phi_ortho} then any solution $u\in W^{2,2}_{\delta_0}(\R^n)$ to the Poisson equation \eqref{equ: main_solv} will also be in $W^{k,2}_{\delta_0}(\R^n)\cap W^{k,2}_{-\delta_0-2}(\R^n)$ for all $k\in \N$ and thus in particular globally smooth in $\R^n$. \end{proposition} \begin{remark} Let us remark that the appearance of the conjugate $\overline{\zeta}$ in \eqref{phi_ortho} above is because we are using sesquilinear forms and that the (formal) adjoint of $L_A$ is $\overline{L_A}$. Note also that in case (ii) above, if $u\in W^{2,2}_{\delta_0}(\R^n)$ is a solution to the equation \eqref{equ: main_solv}, then any other solution to the Poisson equation must then be of the form $u+\zeta$ for some $\zeta\in \mathcal K_A$. \end{remark} \begin{remark} In the above proposition, it is crucial for us that $\delta_0$ is as in \eqref{delta_0}, that $\{s_k\}_{k=1}^N \subset (0,\frac{1}{2}]$, that $A$ satisfies \eqref{matrix_iden} and that $\{p_k\}_{k=1}^N \subset C^{\infty}_0(\R^n)$, so that the $p_k$'s have compact support. The assumption on $\delta_0$ provides suitable mapping properties for the local divergence part in the equation while the other assumptions simplify the treatment of the nonlocal perturbation and let us avoid discussing elliptic regularity properties in fractional weighted Sobolev spaces. We believe that some sets of assumptions need to be imposed here to have a well-posedness theory. \end{remark} \begin{proof}[Proof of Proposition \ref{prop_solvability}] Applying \cite[Theorem 0]{McOwen1979TheBO} we note that the mapping \begin{equation} \label{Delta_weight_map} \Delta: W^{2,2}_{\delta_0}(\R^n)\to L^{2}_{\delta_0+2}(\R^n) \end{equation} is an isomorphism. Using this result together with the fact that $A$ satisfies \eqref{matrix_iden}, we can apply \cite[Proposition 1.15]{Bartnik1986TheMO} to deduce that the operator $$ \mathcal L_0: W^{2,2}_{\delta_0}(\R^n) \to L^2_{\delta_0+2}(\R^n),$$ defined via $$\mathcal L_0(u) := -\nabla \cdot\left( A(x)\nabla u\right),$$ is also an isomorphism. We caution the reader that our notations for weighted Sobolev spaces follow the standard convention and are a bit different from that of Bartnik (up to a shift and sign change for the weight $\delta$). Note also that our choice of $\delta_0$ is non-exceptional in his framework which is crucial. We write $\mathcal L_0^{-1}:L^2_{\delta_0+2}(\R^n)\to W^{2,2}_{\delta_0}(\R^n)$ for its inverse. Next, by noting that $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$ and using the mapping property given in Lemma~\ref{Lem: mapping prop of frac Lap} together with the fact that $\{s_k\}_{k=1}^N\subset (0,\frac{1}{2}]$ it is straightforward to see that the mapping $$ p_k\,(-\Delta)^s p_k\,\mathcal L_0^{-1}: L^{2}_{\delta_0+2}(\R^n) \to H^{1}_0(U),$$ is continuous for every $s\in (0,\frac{1}{2}]$ where $U$ is a nonempty bounded open set that contains the support of all the $p_k$'s. 
Together with the fact that $H^{1}_0(U)$ is compactly embedded in $L^2(U)$ for every nonempty bounded open set $U\subset \R^n$ and that the $p_k$'s are compactly supported there, we deduce that the mapping $$ p_k(-\Delta)^{s_k} p_k\,\mathcal L_0^{-1}: L^{2}_{\delta_0+2}(\R^n) \to L^2_{\delta_0+2}(\R^n),$$ is compact for every $k=1,\ldots,N$. This proves that the operator $L_A$ is indeed Fredholm with index zero. That the Fredholm alternative holds in the way that it is written is due to the regularity properties that we describe next. We only provide a sketch of the proof here, as the details follow classical techniques from elliptic regularity theory. We begin with the claim in \eqref{K_A_reg} that $\mathcal K_A \subset W^{k,2}_{\delta_0}(\R^n)$ for any $k\in \N$. The claim is trivial if $m=0$. For $m\geq 1$, this follows from bootstrapping the boundedness of the maps (see Lemma~\ref{Lem: mapping prop of frac Lap}) $$ (-\Delta)^{s_k} : H^{s}(\R^n) \to H^{s-2s_k}(\R^n)\subset H^{s-1}(\R^n) \quad k=1,\ldots,N,$$ with the elliptic regularity property given by Lemma~\ref{lem_elliptic_regularity_weighted} (cf. \cite[Proposition 1.6]{Bartnik1986TheMO}), noting crucially that $\{s_k\}_{k=1}^N\subset (0,\frac{1}{2}]$ and that the $p_k$'s are compactly supported. (In particular, since $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$, the operator of multiplication by $p_k$ is continuous from $H^l(\R^n)$ to $W^{l,2}_{\delta'}(\R^n)$ for $k=1,\ldots,N$, any $\delta'\in \R$ and $l\in \N \cup \{0\}.$) Let us now consider the second claim in \eqref{K_A_reg}. Assume again that $m\geq 1$ and consider any nonzero element $\zeta \in \mathcal K_A$. There holds: $$ \mathcal L_0 \zeta = -P_0 \zeta \in L^{2}_{\delta'}(\R^n),$$ for any $\delta'\in \R$, where we used the fact that $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$. Recalling that $\zeta \in W^{2,2}_{\delta_0}(\R^n)$ together with the fact that \eqref{delta_0_property} is satisfied with $\delta=\delta_0$, we may now apply \cite[Proposition 1.14]{Bartnik1986TheMO} to obtain the additional regularity property that $\zeta \in W^{2,2}_{-\delta_0-2}(\R^n)$. Analogously to the previous paragraph, we can bootstrap this observation via Lemma~\ref{lem_elliptic_regularity_weighted} to show the claim. Finally, the claim regarding the extra regularity of solutions to \eqref{equ: main_solv} when $F\in C^{\infty}_0(\R^n)$ follows analogously. \end{proof} We will also need the following two lemmas for future reference in Section~\ref{sec: IP}. \begin{lemma} \label{lem_ortho} Let $n\geq 3$. Assume that $\dim(\mathcal{K}_{A}) = m$, for some $m\in \N$, where $\mathcal{K}_A$ is given by \eqref{K_A_def}. Let $\zeta_1, \dots, \zeta_m \in \mathcal{K}_{A}$ be linearly independent functions. Then, for any $c = (c_1, \dots, c_m) \in \mathbb{C}^m$, there exists $g \in C^\infty_0(\Omega_e)$ such that $\LC g, \overline{\zeta_l}\RC _{L^2(\R^n)} = c_l$ for $l = 1, \dots, m$. \end{lemma} \begin{proof} The proof of this lemma is similar to the proof of \cite[Lemma 3.9]{FKU24}, with minor modifications. We include it for the convenience of the reader. We shall show that the following linear map \[ T: C_0^\infty(\Omega_e) \ni g \mapsto \big( \LC g, \overline{\zeta_1}\RC_{L^2(\R^n)}, \dots, \LC g, \overline{\zeta_m}\RC_{L^2(\R^n)}\big) \in \mathbb{C}^m \] is surjective. Suppose, for the sake of contradiction, that $T$ is not surjective.
Then there exists a nonzero vector $a = (a_1, \dots, a_m) \in \mathbb{C}^m$ such that \begin{equation} \label{eq_lemma_anal_1} 0 = T(g) \cdot a = \LC g, \overline{\zeta}\RC_{L^2(\R^n)}, \end{equation} for all $g \in C^\infty_0(\Omega_e)$. Here, $\zeta := \sum_{l=1}^m a_l \zeta_l \in W^{2,2}_{\delta_0}(\R^n)$, and $\xi \cdot \eta = \sum_{l=1}^m \xi_l \overline{\eta_l}$ denotes the inner product of vectors $\xi, \eta \in \mathbb{C}^m$. It follows from \eqref{eq_lemma_anal_1} that $\zeta|_{\Omega_e} = 0$. Together with the fact that $\zeta \in \mathcal{K}_{A}$, we obtain from the expression for $L_A$ in \eqref{equ: main} that $$\zeta|_{\Omega_e}=0 \quad \text{and}\quad \sum_{k=1}^N p_k (-\Delta)^{s_k} (p_k \zeta) =0 \quad \text{in $\Omega_e$.}$$ In particular, note that the first identity above also implies that the function $\zeta\in W^{2,2}_{\delta_0}(\R^n)$ satisfies super-exponential decay, as it vanishes identically outside the bounded set $\overline{\Omega}$. Our entanglement principle Theorem~\ref{thm: ent} (applied with the choices $N=m$, $u_k=a_k\zeta$, $b_k=1$ for $k=1,\ldots,m$ and any nonempty bounded open set $\mathcal O\subset \Omega_e$) implies that $a_k\zeta = 0$ on $\R^n$ for all $k=1,\ldots,m$, and thus, using the linear independence of $\zeta_1,\ldots,\zeta_m$, $a = 0$, which is a contradiction. \end{proof} \begin{lemma} \label{lem_ortho_cond} Let $n\geq 3$. We adopt the same notations as in Proposition~\ref{prop_solvability}. Suppose that $v\in L^{2}_{\delta_0}(\R^n)$ is a function such that the following conditional statement holds: $$\text{if}\quad \left(\textrm{$F\in C^{\infty}_0(\Omega_e)$ with $(F,\overline{\zeta})_{L^2(\Omega_e)}=0$ for all $\zeta\in \mathcal K_A$}\right) \quad \text{then}\quad (F,\overline{v})_{L^2(\Omega_e)}=0.$$ Then, $v|_{\Omega_e}=\zeta|_{\Omega_e}$ for some $\zeta\in \mathcal K_A$. \end{lemma} \begin{proof} The argument that we present closely follows a part of the proof of \cite[Lemma 3.10]{FKU24} with some adjustments, as we are using weighted Sobolev spaces on $\R^n$ and some care is needed. Let $\textrm{dim}\,\mathcal{K}_A=m\geq 1$, as the claim is trivial in the case $m=0$. Let $\mathcal W:=(\sigma^{\delta_0}\mathcal K_A)|_{\Omega_e}$, where $\sigma=(1+|x|^2)^{\frac{1}{2}}$. Based on the proof of the previous lemma, we know that $\textrm{dim}\, \mathcal W=m$ and that if $\{\zeta_1,\ldots,\zeta_m\}$ is a basis for $\mathcal K_A$, then \[ \mathcal W = \mathrm{span}\left\{(\sigma^{\delta_0}\zeta_1)|_{\Omega_e}, \dots, (\sigma^{\delta_0}\zeta_{m})|_{\Omega_e}\right\} \subset L^2(\Omega_e). \] Define $w:=(\sigma^{\delta_0}v)|_{\Omega_e} \in L^2(\Omega_e)$. Writing the orthogonal decomposition $L^2(\Omega_e) = \mathcal W \oplus \mathcal W^\perp$, we deduce that \begin{equation} \label{lem_density_new_1_2} w = (\sigma^{\delta_0}\zeta)|_{\Omega_e} + w_0, \end{equation} where $\zeta \in \mathcal K_A$ and $w_0 \in \mathcal W^\perp$, i.e., \begin{equation} \label{lem_density_new_2} \big(w_0, \sigma^{\delta_0}\zeta_k\big)_{L^2(\Omega_e)} = 0 \quad \text{for all} \quad k = 1, \dots, m. \end{equation} We shall next show that the conditional statement in the lemma implies that $w_0 = 0$. To see this, let $\{h_\ell\}_{\ell=1}^\infty \subset C_0^\infty(\Omega_e)$ be such that \begin{equation} \label{lem_density_new_3} \left\|h_\ell - \overline{w_0}\right\|_{L^2(\Omega_e)} \to 0 \quad \text{as} \quad \ell \to \infty. \end{equation} It follows from \eqref{lem_density_new_2} and \eqref{lem_density_new_3} that \begin{equation} \label{lem_density_new_4} \lim_{\ell \to \infty} (h_\ell, \sigma^{\delta_0}\overline{\zeta_k})_{L^2(\Omega_e)} = 0 \quad \text{for all} \quad k = 1, \dots, m.
\end{equation} By Lemma~\ref{lem_ortho}, there exist functions $\{\theta_k\}_{k=1}^m \subset C_0^\infty(\Omega_e)$ such that \begin{equation} \label{lem_density_new_5} \left(\theta_k, \sigma^{\delta_0}\overline{\zeta_j}\right)_{L^2(\Omega_e)} = \delta_{kj} \quad \text{for all} \quad k, j = 1, \dots, m. \end{equation} Consider the sequence of functions $F_\ell \in C_0^\infty(\Omega_e)$ defined by \[ F_\ell = h_\ell - \sum_{j=1}^{m} \left(h_\ell, \sigma^{\delta_0}\overline{\zeta_j}\right)_{L^2(\Omega_e)} \theta_j, \quad \ell = 1,2, \dots. \] It follows from \eqref{lem_density_new_5} that $$\left(\sigma^{\delta_0}F_\ell, \overline{\zeta_k}\right)_{L^2(\Omega_e)} =\left(F_\ell, \sigma^{\delta_0}\overline{\zeta_k}\right)_{L^2(\Omega_e)} = 0 \quad \text{for all $k = 1, \dots, m$, and $\ell=1,2,\dots$},$$ and therefore, by the hypothesis of the lemma, and the definition of $w$, \begin{equation} \label{lem_density_new_6} \left(F_\ell, \overline{w}\right)_{L^2(\Omega_e)} = 0 \quad \text{for all} \quad \ell = 1, 2, \dots. \end{equation} We observe from \eqref{lem_density_new_3} and \eqref{lem_density_new_4} that \begin{equation} \label{lem_density_new_7} \|F_\ell - \overline{w_0}\|_{L^2(\Omega_e)} \to 0 \quad \text{as} \quad \ell \to \infty. \end{equation} It follows from \eqref{lem_density_new_6} and \eqref{lem_density_new_7} that $\left(\overline{w_0}, \overline{w_0}\right)_{L^2(\Omega_e)} = 0$ and therefore, $w_0 = 0$, thus completing the proof of the lemma. \end{proof} \subsection{Well-posedness of the DN map with partial data} \label{sec: exterior problem} We will now discuss the well-posedness of the exterior-value problem \eqref{equ: main 2} under an additional constraint on $\{b_k\}_{k=1}^N$, namely that they are all positive real numbers. \begin{lemma}[Well-posedness]\label{Lem: well-posedness} Let $\Omega \subset \R^n$, $n\geq 2$, be a bounded Lipschitz domain, and let $W\Subset \Omega_e$ be a nonempty bounded open set. Let $N \in \N$, let $\{b_k\}_{k=1}^N \subset (0,\infty)$ and let $\{ s_k\}_{k=1}^N$ satisfy \ref{exponent condition}. Let $q\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue condition}. Then, given any $f\in C^\infty_0(W)$, there exists a unique solution $u\in H^{s_N}(\R^n)$ to \eqref{equ: main 2} subject to the exterior Dirichlet data $f$. \end{lemma} \begin{proof} We give a brief sketch of the proof of this standard lemma. By considering $u=v+f$ in $\R^n$, we can study the well-posedness of the equivalent problem \begin{equation} \begin{cases} P_q v = \varphi &\text{ in }\Omega, \\ v=0 &\text{ in }\Omega_e, \end{cases} \end{equation} where $\varphi = \left. - (P_q f) \right|_{\Omega}$. Recalling that $\{b_k\}_{k=1}^N\subset (0,\infty)$, consider the sesquilinear form \begin{equation} B_0 (v,w):=\sum_{k=1}^N b_k \big( (-\Delta)^{s_k/2} v , (-\Delta)^{s_k/2} w \big)_{L^2(\R^n)}, \end{equation} for any $v,w\in \wt H^{s_N}(\Omega)$. It is straightforward to see that $B_0(\cdot, \cdot)$ is both bounded and coercive. In other words, we have \begin{equation} \left| B_0(v,w)\right| \leq \sum_{k=1}^N b_k \big\| (-\Delta)^{s_k/2} v \big\|_{L^2(\R^n)}\big\| (-\Delta)^{s_k/2} w \big\|_{L^2(\R^n)}, \end{equation} and \begin{equation} B_0(v,v)\geq \sum_{k=1}^N b_k \big\| (-\Delta)^{s_k/2} v \big\|_{L^2(\R^n)}^2 \geq b_N \big\| (-\Delta)^{s_N/2} v \big\|_{L^2(\R^n)}^2 , \end{equation} for any $v,w\in \wt H^{s_N}(\Omega)$, where we use $\{ b_k\}_{k=1}^N\subset (0,\infty)$ in the last inequality.
Hence, the rest of the proof follows the standard Lax-Milgram argument (see, for example, \cite{GSU20,GLX}), and is therefore omitted. In short, for the sesquilinear form \begin{equation}\label{B_q bilinear} B_q(v,w):=B_0(v,w)+\LC qv , w \RC_{L^2(\Omega)}, \end{equation} one can find a unique $v\in \wt H^{s_N}(\Omega)$ satisfying $$ B_q(v,w)=\langle \varphi , w \rangle_{H^{-s_N}(\Omega),\wt H^{s_N}(\Omega)}, $$ for any $w\in \wt H^{s_N}(\Omega)$, provided the condition \eqref{eigenvalue condition} holds. This concludes the proof. \end{proof} With the above-mentioned well-posedness result, the DN map \eqref{DN map} is well-defined. Specifically, there is the relation \begin{equation}\label{DN and bilinear} \left\langle \Lambda_q f ,g \right\rangle :=\left\langle \Lambda_q f ,g \right\rangle_{H^{-s_N}(\Omega_e),\widetilde{H}^{s_N}(\Omega_e)} = B_q (u_f,w_g), \end{equation} where $u_f\in H^{s_N}(\R^n)$ is the solution to \eqref{equ: main 2}, $w_g\in H^{s_N}(\R^n)$ is any function with $\left. w_g \right|_{\Omega_e}=g$ and $B_q(\cdot, \cdot)$ is given by \eqref{B_q bilinear}. Furthermore, one can derive the following integral identity. \begin{lemma}[Integral identity] Let $\Omega\subset \R^n$, $n\geq 2$, be a bounded domain with Lipschitz boundary. Let $\{b_k\}_{k=1}^N\subset (0,\infty)$ and $\{s_k\}_{k=1}^N\subset (0,\infty)$ with $0<s_1<s_2<\ldots< s_N$ satisfy \ref{exponent condition}. Let $q, q_{1},q_{2}\in L^{\infty}(\Omega)$ satisfy \eqref{eigenvalue condition}. For any $f_1,f_2\in C^\infty_0(\Omega_{e})$, we have the symmetry property \begin{equation}\label{eq:adjoint operator} \left\langle \Lambda_q f_1 , \overline{f_2}\right\rangle = \left\langle f_1, \Lambda_{\overline q} f_2\right\rangle , \end{equation} and the integral identity \begin{equation}\label{eq:integral identity} \left\langle (\Lambda_{q_{1}}-\Lambda_{q_{2}})f_{1},f_{2}\right\rangle=\LC \LC q_{1}-q_{2}\RC u_{f_1},u_{f_2}\RC_{L^2(\Omega)} \end{equation} where, for $j=1,2$, $u_{f_j}\in H^{s_N}(\mathbb{R}^{n})$ is the unique solution to \eqref{equ: main 2} with $q=q_j$ and $f=f_j$. \end{lemma} \begin{proof} The symmetry \eqref{eq:adjoint operator} of the DN map comes from the symmetry of the sesquilinear form $B_q(\cdot, \cdot)$ (see e.g. \eqref{DN and bilinear}). On the other hand, by \eqref{eq:adjoint operator}, we have \begin{equation*} \begin{split} \left\langle (\Lambda_{q_{1}}-\Lambda_{q_{2}})f_{1},\overline{f_{2}}\right\rangle & =\left\langle \Lambda_{q_{1}}f_{1},\overline{f_{2}}\right\rangle-\left\langle f_{1},\Lambda_{\overline{q_{2}}}\overline{f_{2}}\right\rangle\\ &=B_{q_{1}}(u_{f_1},u_{f_2})-B_{q_{2}}(u_{f_1},u_{f_2})\\ & =\LC \LC q_{1}-q_{2}\RC u_{f_1},u_{f_2}\RC_{L^2(\Omega)}. \end{split} \end{equation*} This concludes the proof. \end{proof} \section{Entanglement principle}\label{sec: entanglement} We first show that the proof of Theorem~\ref{thm: ent} follows from an analogous statement for smooth functions whose derivatives of all orders enjoy super-exponential decay at infinity.
\begin{theorem}\label{thm_ent_smooth} Let $\{\alpha_k\}_{k=1}^N\subset (0,1)$ with $\alpha_1<\ldots<\alpha_N$ satisfy \begin{equation} \label{exp_condition_alpha} \left(|\alpha_j-\alpha_k|\neq \frac{1}{2} \quad \text{for $j,k=1,\ldots,N$} \right), \quad \text{if the dimension $n$ is odd.} \end{equation} Let $\mathcal{O}\subset \R^n$, $n\geq 2$, be a nonempty open set and assume that $\{v_k\}_{k=1}^N\subset C^{\infty}(\R^n)$ and that there exist constants $\rho>0$ and $\gamma>1$ such that given any multi-index $\beta=(\beta_1,\ldots,\beta_n) \in \LC \N \cup \{0\} \RC^n$ there holds \begin{equation}\label{exp_decay} \left|D^{\beta} v_k(x)\right| \leq C_\beta\, e^{-\rho|x|^\gamma} \quad \forall\, x\in \R^n \qquad k=1,\ldots,N, \end{equation} for some $C_\beta>0$, where $D^\beta = \frac{\p^{|\beta|}}{\p x_1^{\beta_1}\ldots\, \p x_n^{\beta_n}}.$ If \begin{align}\label{condition_ent} v_1|_{\mathcal O}=\ldots=v_N|_{\mathcal O}=0 \quad \text{and} \quad \sum_{k=1}^N ((-\Delta)^{\alpha_k}v_k)\big|_{\mathcal O}=0, \end{align} then $v_k\equiv 0$ in $\R^n$ for each $k=1,\ldots,N$. \end{theorem} At first glance, the above theorem may appear slightly weaker than our entanglement principle of Theorem \ref{thm: ent}, since it imposes more regularity and decay on the functions and the exponents are restricted to $(0,1)$. As we will see in a moment, a mollifier argument allows us to show that Theorem~\ref{thm: ent} can be proven from this weaker version. \begin{proof}[Proof of Theorem~\ref{thm: ent} via Theorem~\ref{thm_ent_smooth}] We will assume that the hypothesis of Theorem~\ref{thm: ent} is satisfied. Assume without loss of generality that $\{u_k\}_{k=1}^N\subset H^{-r}(\R^n)$ for some $r\in \R$. Let $\phi \in C^{\infty}_0(\R^n)$ be a nonnegative function with compact support inside the open unit ball centered at the origin such that $\|\phi\|_{L^1(\R^n)}=1$. We fix a nonempty open set $\widetilde{\mathcal O} \Subset \mathcal O$ so that \begin{equation} \label{tilde_O} \textrm{dist}(x, \R^n\setminus \mathcal O)>\epsilon_0 \qquad \forall\, x\in \overline{\widetilde{\mathcal O}} \end{equation} for some $\epsilon_0\in (0,1).$ Define, for each $\epsilon \in (0,\epsilon_0)$, the function $$\psi_\epsilon(x) := \epsilon^{-n}\phi(\epsilon^{-1}x).$$ Next, we define for each $x\in \R^n$, and each $\epsilon \in (0,\epsilon_0)$, the function $\wt v_{k,\epsilon}\in C^{\infty}(\R^n)$ by $$ \wt v_{k,\epsilon}(x)= b_k\,\LC u_k\ast \psi_\epsilon\RC(x):=b_k \langle u_k(\cdot) ,\psi_\epsilon(x-\cdot)\rangle\quad k=1,\ldots,N,$$ where $\langle \cdot,\cdot\rangle$ denotes the sesquilinear pairing between $H^{-r}(\R^n)$ and $H^r(\R^n)$ as explained in Section~\ref{sec: preliminary: fcn}. As $u_k$ with $k=1,\ldots,N$ all vanish on $\mathcal O$, we obtain in view of \eqref{tilde_O} that \begin{equation} \label{v_k_zero} \wt v_{k,\epsilon}(x)=0 \quad \forall\, x\in \widetilde{\mathcal O} \quad \epsilon \in (0,\epsilon_0) \quad k=1,\ldots,N.
\end{equation} Furthermore, given any multi-index $\beta \in \LC \N \cup \{0\}\RC ^n$ and in view of the fact that the distributions $\{u_k\}_{k=1}^N$ all have super-exponential decay in the sense of Definition~\ref{def_exp}, we obtain for each $x\in \R^n$ with $|x|>2$ and each $k=1,\ldots,N$, $$ \left|D^\beta \wt v_{k,\epsilon}(x)\right| = \left| b_k \langle u_k,D^\beta \psi_\epsilon(x-\cdot)\rangle\right| \leq \left|b_k\right|\,C \,e^{-\rho \,(|x|-1)^\gamma} \left\|\psi_\epsilon\right\|_{H^{r+|\beta|}(\R^n)}, $$ where we used the fact that $\psi_{\epsilon}(x-\cdot)$ is supported outside the closed ball $B_{|x|-1}(0)$ together with Definition~\ref{def_exp} with the choice $R=|x|-1$. Therefore, by modifying the constant $C>0$ above we deduce that there exists $C_\beta>0$ (depending on $\beta$ and $\epsilon$) such that \begin{equation}\label{v_beta_decay} \left|D^\beta \wt v_{k,\epsilon}(x)\right| \leq C_\beta \,e^{-{\rho 2^{-\gamma}\, |x|^\gamma}}, \quad \text{for all $x\in \R^n$ and all $k=1,\ldots,N.$} \end{equation} Next, let us write $ s_k = \lfloor s_k \rfloor +\alpha_k, $ where $\lfloor s_k\rfloor$ is the greatest integer not exceeding $s_k$ and $\alpha_k \in (0,1)$ is its fractional part. The fractional parts $\alpha_k$ are never zero here, thanks to \ref{exponent condition}. Define $$ v_{k,\epsilon}(x) = (-\Delta)^{\lfloor s_k\rfloor}\wt{v}_{k,\epsilon}(x) \quad k=1,\ldots,N \quad \epsilon \in (0,\epsilon_0), $$ noting that the factor $b_k$ is already included in the definition of $\wt v_{k,\epsilon}$. It is now straightforward to see that the hypothesis of Theorem~\ref{thm_ent_smooth} is satisfied with $\{v_k\}_{k=1}^N$ in its statement replaced with the functions $\{v_{k,\epsilon}\}_{k=1}^N$ and with $\mathcal O$ in its statement replaced with $\widetilde{\mathcal O}$. Indeed, thanks to \eqref{v_beta_decay}, we see that these functions enjoy the super-exponential decays stated in \eqref{exp_decay} and also that they satisfy the condition \eqref{condition_ent}. Moreover, by \ref{exponent condition}, the fractional parts of $s_k$ all belong to $(0,1)$ and additionally satisfy \eqref{exp_condition_alpha}. Thus, applying Theorem~\ref{thm_ent_smooth} to these functions, we conclude that there holds $$ (-\Delta)^{\lfloor s_k\rfloor}\wt{v}_{k,\epsilon}=0 \quad \text{in $\R^n$ for all $k=1,\ldots,N$.} $$ The latter equation implies that $\wt{v}_{k,\epsilon}$ is identically zero. Indeed, this is trivial if $\lfloor s_k\rfloor=0$, while in the case $\lfloor s_k\rfloor\in \N$ it follows from \eqref{v_k_zero} together with the unique continuation principle for the Laplace operator on $\R^n$. Therefore, $$ \langle u_k(\cdot), \psi_\epsilon(x-\cdot)\rangle=0 \quad \text{for all $x\in \R^n$ and all $k=1,\ldots,N$.} $$ Finally, we obtain the desired claim by letting $\epsilon$ approach zero and noting that $b_k\neq 0$ for $k=1,\ldots,N$. \end{proof} The rest of this section is concerned with proving Theorem~\ref{thm_ent_smooth}. Thus, we will assume throughout the remainder of the section that $\{\alpha_k\}_{k=1}^N\subset (0,1)$ and that $\{v_k\}_{k=1}^N\subset C^{\infty}(\R^n)$ are as stated in the hypothesis of Theorem~\ref{thm_ent_smooth}. In the rest of this section, we make the standing assumption that $\omega \Subset \mathcal O$ is a fixed nonempty bounded open set and that there holds \begin{equation}\label{kappa} \textrm{dist}(\overline{\omega},\R^n\setminus \mathcal O)\geq 2\kappa >0, \end{equation} for some constant $\kappa\in (0,1)$. For our purposes, it suffices to think of $\omega$ as a sufficiently small neighbourhood of some fixed point inside $\mathcal O$.
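Let us also record an elementary example of the decay class \eqref{exp_decay}, meant only to illustrate the decay assumption (and not the full hypotheses of Theorem~\ref{thm_ent_smooth}): the Gaussian $v(x)=e^{-|x|^2}$ satisfies \eqref{exp_decay} with $\gamma=2$ and any $\rho\in (0,1)$, since each derivative $D^{\beta}v$ is of the form $P_\beta(x)\,e^{-|x|^2}$ for some polynomial $P_\beta$, so that
$$ \left|D^{\beta}v(x)\right| \leq \Big(\sup_{y\in \R^n}\left|P_\beta(y)\right|\,e^{-(1-\rho)|y|^2}\Big)\, e^{-\rho|x|^2} \qquad \forall\, x\in \R^n.$$
Similarly, any function in $C^{\infty}_0(\R^n)$ trivially satisfies \eqref{exp_decay} for every $\rho>0$ and $\gamma>1$.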
\subsection{Analytic interpolation in the right half-plane} We remark that throughout the remainder of Section~\ref{sec: entanglement}, the notation $\log(z)$, $z\in \C$, stands for the principal branch of the logarithm function. Also, given any $a>0$, the notation $a^z$ stands for $e^{z\log a}$. We begin this section with a few lemmas. \begin{lemma}\label{lem_analytic} Given each fixed $x\in \omega$, the function $$ F: \{ z\in \C\,:\, \mathrm{Re}(z) \geq 0\} \to \C$$ defined via \begin{equation}\label{F(z)} F(z):= \sum_{k=1}^N \frac{\Gamma(z+1+\alpha_k)}{\Gamma(-\alpha_k)\Gamma(1+\alpha_k)}\int_0^\infty (e^{t\Delta} v_k)(x)\, t^{-(z+1+\alpha_k)}\, dt \end{equation} is holomorphic. \end{lemma} \begin{proof} As each of the functions $z\mapsto \Gamma(z+1+\alpha_k)$, $k=1,\ldots,N$, is holomorphic in the right half-plane $\left\{ z\in \C: \, \mathrm{Re}(z)\geq 0 \right\}$, it suffices to show that for each $k=1,\ldots,N$ the function \begin{equation}\label{g_def} G_k(z):= \int_0^{\infty} (e^{t\Delta}v_k)(x)\,t^{-(z+1+\alpha_k)}\,dt,\end{equation} is also holomorphic in the right half-plane. It is straightforward to see that for each $z$ in the right half-plane the above integrands are absolutely integrable. Our task is to show that they depend analytically on $z$. We write \begin{equation}\label{g_rand} \begin{split} G_k(z) := G_{1,k}(z)+G_{2,k}(z):= \int_0^{1} (e^{t\Delta}v_k)(x)\,t^{-(z+1+\alpha_k)}\,dt + \int_1^{\infty} (e^{t\Delta}v_k)(x)\,t^{-(z+1+\alpha_k)}\,dt, \end{split} \end{equation} and proceed to show that for each fixed $x\in \omega$, each of $G_{j,k}$, $j=1,2$, $k=1,\ldots, N$, depends analytically on $z$ with $\textrm{Re}(z)\geq 0$. We begin by noting that given any $h\in \C$ and any $t>0$ there holds \begin{equation}\label{complex_plane_bound} \left|t^{-h}-1+h\log(t)\right| \leq \frac{1}{2} (\log t)^2\, |h|^2\, e^{|h\log t|} \quad \forall\, h\in \C, \end{equation} where we used the two-term Taylor expansion of $e^{-h\,\log t}$ around $h=0$. Recalling that $\omega \Subset \mathcal O$, that $v_k$ vanishes on $\mathcal O$, and the bound \eqref{kappa}, we also record the following straightforward pointwise bound for heat semigroups on $\R^n$, \begin{equation} \label{heat_bound_rand} \left|e^{t\Delta}v_k(x)\right| \leq \frac{\left\|v_k\right\|_{L^1(\R^n)} }{(4\pi t)^{\frac{n}{2}}}\, e^{-\frac{\kappa^2}{t}} \qquad t>0,\quad x\in \omega. \end{equation} For the function $G_{1,k}$ given by \eqref{g_rand}, we note by using \eqref{complex_plane_bound} that given each $z\in \C$ and each $h\in \C$ with $|h|< 1$, we have \begin{equation} \begin{split} & \bigg|G_{1,k}(z+h)- G_{1,k}(z) + h \underbrace{\int_0^1 (e^{t\Delta}v_k)(x)\,t^{-(z+1+\alpha_k)}\,(\log t)\,dt}_{\textrm{Absolutely convergent for all $z\in \C$}} \bigg| \\ &\qquad \leq \frac{\|v_k\|_{L^1(\R^n)}}{2(4\pi)^{\frac{n}{2}}}|h|^2 \underbrace{\int_0^1 (\log t)^2\,t^{-\frac{n}{2}-2-|z|-\alpha_k}\, e^{-\frac{\kappa^2}{t}}\,dt}_{\textrm{Absolutely convergent for all $z\in \C$}} \end{split} \end{equation} showing in fact that the function $G_{1,k}(z)$ is holomorphic everywhere on $\C$.
On the other hand, for the function $G_{2,k}(z)$ we note that given each $z$ with $\textrm{Re}(z)\geq 0$ and each $|h|<1$ there holds \begin{equation} \begin{split} & \bigg|G_{2,k}(z+h)- G_{2,k}(z) + h \int_1^\infty (e^{t\Delta}v_k)(x)\,t^{-(z+1+\alpha_k)}\,(\log t)\,dt \bigg| \\ &\qquad \leq \frac{\|v_k\|_{L^1(\R^n)}}{2(4\pi)^{\frac{n}{2}}}|h|^2 \int_1^\infty (\log t)^2 t^{-\frac{n}{2}-\alpha_k}\,dt \end{split} \end{equation} showing indeed that the function $G_{2,k}$ is holomorphic on $\textrm{Re}(z)\geq 0$. \end{proof} \begin{lemma}\label{lem_meromorphic} Given each fixed $x\in \omega$, the function $F$ defined by \eqref{F(z)} admits a meromorphic extension to $\C$ (with isolated poles of order at most two) given by \begin{equation}\label{F_def} F(z)= \sum_{k=1}^N \frac{4^{\alpha_k+z}}{\pi^{\frac{n}{2}}} \, \frac{\Gamma\LC z+1+\alpha_k\RC \Gamma \LC z +\frac{n}{2}+\alpha_k\RC}{\Gamma(-\alpha_k)\,\Gamma(1+\alpha_k)} \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{\abs{x-y}^{n+2\alpha_k +2z}}\, dy. \end{equation} \end{lemma} \begin{proof} By direct computation, we obtain for each $k=1,\ldots,N$ and each $z\in \C$ with $\mathrm{Re}(z)\geq 0$ that \begin{equation} \begin{split} &\quad \, \int_0^\infty \LC e^{t\Delta}v_k\RC(x)\, t^{-(z+\alpha_k+1)}\, dt \\ &= \underbrace{\frac{1}{(4\pi)^{n/2}}\int_0^\infty \LC \int_{\R^n\setminus \mathcal{O}}\frac{1}{t^{n/2}}e^{-\frac{\abs{x-y}^2}{4t}} v_k(y)\, dy\RC t^{-(\alpha_k+z+1)}\, dt}_{\text{By Fubini's theorem, heat kernel formula \eqref{heat kernel representation} and }v_k=0 \text{ in }\mathcal{O}} \\ &= \underbrace{\int_{\R^n\setminus \mathcal O} \left(\int_0^\infty t^{-(\frac{n}{2}+\alpha_k+z+1)} e^{-\frac{\abs{x-y}^2}{4t}}\, dt\right)v_k(y)\,dy}_{\text{Absolutely convergent for }\mathrm{Re}(z)\geq 0} \\ &= \frac{4^{\alpha_k+z}}{\pi^{\frac{n}{2}}} \underbrace{\bigg( \int_0^\infty t^{\frac{n}{2}+\alpha_k+z-1}\,e^{-t}\, dt\bigg) }_{\text{By change of variable }} \underbrace{\int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{|x-y|^{n+2\alpha_k+2z}}\, dy}_{\textrm{Absolutely convergent for all $z\in \C$ by} \, \eqref{exp_decay}}\\ &= \frac{4^{\alpha_k+z}}{\pi^{\frac{n}{2}}} \Gamma\LC z+\frac{n}{2}+\alpha_k\RC \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{|x-y|^{n+2\alpha_k+2z}}\, dy, \end{split} \end{equation} where we use $x\in \omega$ and $y\in \R^n\setminus \mathcal{O}$ so that \eqref{kappa} holds. Next, we note that the expression just derived for $\mathrm{Re}(z)\geq 0$ in fact defines a meromorphic function on the entire complex plane. Indeed, analogously to the proof of Lemma~\ref{lem_analytic} and due to the fast decay of $v_k$ given by \eqref{exp_decay}, we have that the mapping $$ z \mapsto \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{|x-y|^{n+2\alpha_k+2z}}\, dy, \quad k=1,\ldots,N, \quad x\in \omega$$ is holomorphic for all $z\in \C$ (let us point out that only Schwartz decay for $v_k$ is needed here). The Gamma function is also meromorphic on the entire complex plane. We have thus completed the proof of the lemma. \end{proof} The following lemma also appears in \cite{FKU24} and is a consequence of the upper bound for the Gamma function, see \cite[formula (2.1.19) on page 34]{Paris_Kaminski_book}; see also \cite[page 300]{Olverbook}, \begin{equation} \label{eq_bound_Gamma_f} |\Gamma(z)| \le \sqrt{2\pi} |z|^{a-\frac{1}{2}} e^{-\frac{\pi}{2}|b|} e^{\frac{1}{6} |z|^{-1}}, \end{equation} valid for all $ z = a + ib \in \C$, where $a\ge 0$. \begin{lemma}[Lemma 3.4 in \cite{FKU24}]\label{lem_H_bounds} Let $C:=\sqrt{2\pi}e^{\frac{1}{6}}$.
Given $s \in (0,\infty)$, the function $$ H(z):=\Gamma(z+1+s),$$ is meromorphic on $\mathbb{C}$, with its only singularities being simple poles at $z=-k-1-s$ for $k=0,1,2,\dots$, and additionally satisfies \begin{enumerate}[(1)] \item\label{item 1 lem 3.4} {$|H(\textrm{\em i}b)|\leq C (2|b|)^{s+\frac{1}{2}} e^{-\frac{\pi}{2}|b|}$ for all $b\in \R$ such that $|b|\ge 1+s$,} \item\label{item 2 lem 3.4} {$|H(a)| \leq C e^{a \log a}e^{a\log 2+(s+\frac{1}{2})\log(2a)}$ for all $a\ge 1+s$,} \item\label{item 3 lem 3.4} {$|H(z)|\leq C e^{2|z|\log(2|z|)}$ for all $z\in \{a+\textrm{\em i}b\in \C :\, a\ge 0,\ b\in \R\}$ such that $|z|\ge 1+s$}. \end{enumerate} \end{lemma} \begin{lemma}[cf. Lemma 3.5 \cite{FKU24}] \label{lem_analytic_bounds} Given each $k=1,\ldots,N$ and each fixed $x\in \omega$, there exist constants $c_1, c_2 > 0$, depending on $\kappa\in (0,1)$ in \eqref{kappa}, $\|v_k\|_{L^1(\R^n)}$ and on $\alpha_k$ such that the holomorphic function $G_k: \{\mathrm{Re}(z)\geq 0\}\to \C$ defined by \eqref{g_def} satisfies the bounds \begin{enumerate}[(i)] \item\label{item i G_k} $\left|G_k(\mathrm{i}b)\right| \leq c_1$ for all $b \in \mathbb{R}$, \item\label{item ii G_k} $\left|G_k(a)\right| \leq c_1\,e^{c_2 a}\, e^{a \log a}$ for all $a \ge 1$, \item\label{item iii G_k} $\left|G_k(z)\right| \leq c_1\,e^{c_2|z|} e^{2|z| |\log z|}$ for all $z \in \{a + \mathrm{i}b \in \mathbb{C}: \, a \ge 0,\ b \in \mathbb{R}\}$ with $|z| \ge 1$. \end{enumerate} \end{lemma} \begin{proof} We first note that $t^{-z} = e^{-z \log t}$ for $t > 0$ where the logarithm is defined using its principal branch. By definition of $G_k$ together with \eqref{kappa} and \eqref{heat_bound_rand}, we write for each $z=a+\mathrm{i}b$ with $a\geq 0$ that \begin{equation} \begin{split} |G_k(z)|&\leq \int_0^1 |(e^{t\Delta}v_k)(x)|\,t^{-(a+1+\alpha_k)}\,dt+ \int_1^{\infty}|(e^{t\Delta}v_k)(x)|\,t^{-(a+1+\alpha_k)}\,dt \\ &\leq \int_0^1 \frac{\|v_k\|_{L^1(\R^n)} }{(4\pi t)^{\frac{n}{2}}}\, e^{-\frac{\kappa^2}{t}} \,t^{-(a+1+\alpha_k)}\,dt+ \int_1^{\infty}\frac{\|v_k\|_{L^1(\R^n)} }{(4\pi t)^{\frac{n}{2}}}\, e^{-\frac{\kappa^2}{t}}t^{-(a+1+\alpha_k)}\,dt \\ &\leq \underbrace{\frac{\|v_k\|_{L^1(\R^n)} }{(4\pi)^{\frac{n}{2}}} \int_1^\infty e^{-\kappa^2 t} \,t^{a+\alpha_k+\frac{n}{2}-1}\,dt}_{\text{Change of variables}}+ \frac{\|v_k\|_{L^1(\R^n)} }{(4\pi)^{\frac{n}{2}}} \int_1^\infty t^{-(a+\alpha_k+\frac{n}{2}+1)}\,dt\\ &=\frac{\|v_k\|_{L^1(\R^n)} }{(4\pi)^{\frac{n}{2}}\kappa^{2a+2\alpha_k+n}} \int_{\kappa^2}^\infty e^{-t} \,t^{a+\alpha_k+\frac{n}{2}-1}\,dt+ \frac{\|v_k\|_{L^1(\R^n)} }{(4\pi)^{\frac{n}{2}}} \int_1^\infty t^{-(a+\alpha_k+\frac{n}{2}+1)}\,dt\\ &\leq \frac{\|v_k\|_{L^1(\R^n)}}{(4\pi)^{\frac{n}{2}}}\left(\kappa^{-2a-2\alpha_k-n}\Gamma\LC a+\alpha_k+\frac{n}{2}\RC+ \frac{1}{\frac{n}{2}+1+a}\right)\\ &\leq 2\frac{\|v_k\|_{L^1(\R^n)}}{(4\pi)^{\frac{n}{2}}}\,\kappa^{-2a-2\alpha_k-n}\,\Gamma\LC a+\alpha_k+\frac{n}{2}\RC. \end{split} \end{equation} All the bounds \ref{item i G_k}--\ref{item iii G_k} follow immediately from combining the previous bound with Lemma~\ref{lem_H_bounds} \ref{item 1 lem 3.4}--\ref{item 3 lem 3.4}. This completes the proof. \end{proof} The following proposition is a key step in the proof of Theorem~\ref{thm_ent_smooth}. \begin{proposition}\label{prop_F_zero} Let the hypothesis of Theorem~\ref{thm_ent_smooth} be satisfied. Given each $x\in \omega$, the meromorphic function $F$ defined by \eqref{F_def} vanishes everywhere in the complex plane, away from its isolated poles. 
\end{proposition} In order to prove the above proposition, we need to make use of a sharp interpolation theorem for holomorphic functions that vanish on positive integers subject to certain growth rates at infinity, due to Pila \cite{pila_05}. For the convenience of the reader, we include Pila's theorem here. \begin{theorem} \label{thm_Pila} Let $\alpha,\beta\in \R$ with $\alpha+\beta<1$ and let $\epsilon>0$. Write $z=a+ib$ and suppose that $h(z)$ is holomorphic in the region $a\ge 0$, satisfying: \begin{enumerate}[(i)] \item {$\limsup\limits_{|b|\to \infty}\frac{\log|h(ib)|}{\pi |b|}\le \beta$,} \item {$\limsup\limits_{a\to \infty}\frac{\log|h(a)|}{2a\log a}\le \alpha$,} \item {$\log|h(z)|=\mathcal{O}(|z|^{2-\epsilon})$, throughout $a\ge 0$, as $|z|\to\infty$}. \end{enumerate} Suppose that $h(m)=0$ for all $m\in \N$. Then $h\equiv 0$ on the set $\{ z\in \C: \, \mathrm{Re}(z)\geq 0\}$. \end{theorem} \begin{proof}[Proof of Proposition~\ref{prop_F_zero}] The proof is analogous to the proof of \cite[Lemma 3.7]{FKU24} but with some modifications as we are working on $\R^n$. Throughout this proof, it is important to recall that the elements of $\{v_k\}_{k=1}^{N}$ are smooth and in fact belong to the Schwartz class $\mathcal S(\R^n)$. We begin by noting that the condition \eqref{condition_ent} implies in particular that \begin{equation} \begin{split} \Delta^{m}v_1 =\ldots = \Delta^{m}v_k=0 \text{ in }\mathcal{O},\quad m\in \N, \end{split} \end{equation} and subsequently that \begin{equation} \sum_{k=1}^N (-\Delta)^{\alpha_k}\Delta^{m} v_k=0 \text{ in }\mathcal{O}, \end{equation} for any $m\in \N$, where we use $(-\Delta)^{\alpha+\beta}=(-\Delta)^\alpha (-\Delta)^{\beta}$ on $H^{2\alpha+2\beta}(\R^n)$, for all $\alpha, \beta \in \R$. Recalling that $\omega \Subset \mathcal{O}$ is a nonempty bounded open set and that $\kappa\in (0,1)$ is as in \eqref{kappa}, the above identity implies that \begin{align}\label{pf of UCP 1} \sum_{k=1}^N \frac{1}{\Gamma(-\alpha_k)} \int_0^\infty \frac{e^{t\Delta}\Delta^{m} v_k(x)}{t^{1+\alpha_k}}\, dt=0 ,\text{ for }x\in \omega, \end{align} for all $m\in \N$. We claim that the previous identity \eqref{pf of UCP 1} can be reduced to \begin{equation}\label{eq_important_m} \sum_{k=1}^N \frac{\Gamma(1+m+\alpha_k)}{\Gamma(-\alpha_k)\Gamma(1+\alpha_k)} \int_0^{\infty} (e^{t\Delta}v_k)(x)\,t^{-(1+m+\alpha_k)}\,dt =0 \quad \forall\, x\in \omega\quad \forall m\in \N. \end{equation} In order to prove \eqref{eq_important_m} from \eqref{pf of UCP 1} we will employ an integration by parts trick that has been used recently in several related works, see for example, the proofs of \cite[Lemma 3.7]{FKU24}, \cite[Proposition 3.1]{GU2021calder} and \cite[Theorem 1.1]{FGKU_2025fractional}). Using the fact that $t\mapsto e^{t\Delta} (\Delta v_k)\in C^\infty([0,\infty); C^\infty(\R^n))$, and that $e^{t\Delta}\Delta^{m}= \Delta^{m} e^{t\Delta}$ for all $t\ge 0$ on $\mathcal{D}(\Delta^m)=H^{2m}(\R^n)$, we obtain for any $m\in \N$, \begin{equation} \label{eq_100_4} \big(e^{t\Delta}\Delta^{m} v_k\big)(x)=\p_t^{m} \big(e^{t\Delta}v_k\big)(x), \end{equation} for $x\in \omega$ and $k=1,\dots, N$. Combining \eqref{pf of UCP 1} and \eqref{eq_100_4}, we get \begin{equation} \label{eq_100_5} \sum_{k=1}^N \frac{1}{\Gamma(-\alpha_k)}\int_0^\infty \p_t^m(e^{t\Delta}v_k)(x) \frac{dt}{t^{1+\alpha_k}}=0, \end{equation} for $x\in \omega$ and all $m\in \N$. We shall next repeatedly integrate by parts in \eqref{eq_100_5} $m$ times and show that no contributions arise from the endpoints of the integral. 
Indeed, for $t > 0$ and $x \in \omega$, using \eqref{eq_100_4} and \eqref{exp_decay}, we obtain (analogously to \eqref{heat_bound_rand}) that \begin{equation} \label{eq_100_5_1} \left|\p_t^l (e^{t\Delta}v_k)(x)\right|=\left|(e^{t\Delta}\Delta^{l}v_k)(x)\right|\le \frac{\left\|\Delta^l v_k\right\|_{L^1(\R^n)} }{(4\pi t)^{\frac{n}{2}}}\, e^{-\frac{\kappa^2}{t}}, \qquad t>0,\quad x\in \omega, \end{equation} where $l = 0, 1, \dots, m-1$. The bound \eqref{eq_100_5_1} shows that no contribution from $t =0$ or $t=\infty$ arises when integrating by parts in \eqref{eq_100_5}. Thus, by integrating by parts $m$ times in \eqref{eq_100_5}, we obtain that \begin{equation} \label{eq_100_8} \sum_{k=1}^N \frac{1}{\Gamma(-\alpha_k)}c_k\int_0^\infty (e^{t\Delta} v_k)(x) \frac{dt}{t^{m+1+\alpha_k}}=0, \end{equation} for $x\in \omega$ and all $m\in \N$. Here \begin{equation} \label{eq_100_9} c_k=(1+\alpha_k)(2+\alpha_k)\dots (m+\alpha_k)=\frac{\Gamma(m+1+\alpha_k)}{\Gamma(1+\alpha_k)}. \end{equation} This completes the proof of \eqref{eq_important_m}. Recalling the definition \eqref{F(z)}, we observe next that \eqref{eq_important_m} may be rewritten in the form \begin{equation} \label{F(m)} F(m)=0, \quad \text{for all } m\in \N. \end{equation} Recall from Lemma~\ref{lem_analytic} that $F(z)$ is holomorphic in $\left\{ z: \, \mathrm{Re}(z)\geq 0\right\}$ and also that it admits a meromorphic extension to all of $\C$ via Lemma~\ref{lem_meromorphic}. Furthermore, recalling that $$ F(z)= \sum_{k=1}^N \frac{\Gamma(z+1+\alpha_k)}{\Gamma(-\alpha_k)\Gamma(1+\alpha_k)}\,G_k(z),$$ it follows from Lemma~\ref{lem_H_bounds} and Lemma~\ref{lem_analytic_bounds} that the function $F(z)$ satisfies the bounds on the set $\left\{ z\in \C: \, \mathrm{Re}(z)\geq 0\right\}$ stated in Theorem~\ref{thm_Pila} (with $h$ replaced by $F$ and with the choices $\alpha=1$ and $\beta=-\frac{1}{2}$). Thus, we conclude from Pila's theorem that $F(z)$ vanishes identically on the right half-plane, and subsequently, by analytic continuation, that its meromorphic extension given by Lemma~\ref{lem_meromorphic} vanishes identically on $\C$ away from its isolated poles. These poles occur precisely at those values of $z$ for which either $z+1+\alpha_k$ or $z+\frac{n}{2}+\alpha_k$ is a nonpositive integer for some $k=1,\ldots,N$. \end{proof} \subsection{Analysis of singularities of the meromorphic extension} We aim to use Proposition~\ref{prop_F_zero} and perform an analysis of the poles of the function $F(z)$. Recall from Lemma~\ref{lem_meromorphic} that given a fixed $x\in \omega$, the meromorphic function $F:\C \to \C$ is given by the expression $$ F(z)= \sum_{k=1}^N \frac{4^{\alpha_k+z}}{\pi^{\frac{n}{2}}} \, \frac{\Gamma\LC z+1+\alpha_k\RC \Gamma \LC z +\frac{n}{2} +\alpha_k \RC}{\Gamma(-\alpha_k)\,\Gamma(1+\alpha_k)} \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{\abs{x-y}^{n+2\alpha_k +2z}}\, dy.$$ Based on the proof of Lemma~\ref{lem_meromorphic} we also know that a point $z\in \C$ is a pole of the function $F$ above if and only if it is of the form $z=-\alpha_k-m-\frac{n}{2}$ or $z=-\alpha_k-m-1$ for some $k=1,\ldots,N$ and some $m=0,1,2,\ldots$. Our analysis will need to be modified depending on the parity of the dimension $n \geq 2$. \\ \noindent{\bf \em II.I.
case of even dimension.} We will assume first that $n=2p$ for some $p\in \N$ so that $$ F(z)= \sum_{k=1}^N \frac{4^{\alpha_k+z}}{\pi^{p}} \, \frac{\Gamma\LC z+1+\alpha_k\RC \Gamma \LC z +p +\alpha_k\RC}{\Gamma(-\alpha_k)\,\Gamma(1+\alpha_k)} \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{\abs{x-y}^{2p+2\alpha_k +2z}}\, dy.$$ Let us now fix a number $m \in \N \cup \{0\}$ and an index $j\in \{1,\ldots,N\}$ and subsequently consider the limit \begin{equation}\label{residue_limit_even} \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)^2\,F(z).\end{equation} On the one hand, the above limit is zero, thanks to Proposition~\ref{prop_F_zero}. On the other hand, we can evaluate the limit explicitly by residue calculus. Let us compute the limit above by breaking the sum into its $N$ components and see their contributions. It is straightforward to see that for each $k=1,\ldots,N$ with $k\neq j$, there holds: \begin{equation}\label{residue_limit_even_1} \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)^2\,\Gamma(z+1+\alpha_k)\Gamma(z+\alpha_k+p) =0,\end{equation} because the Gamma functions on the right-hand side do not have a pole in the limit, thanks to $\alpha_k \neq \alpha_j$ and $\alpha_k \in (0,1)$. Thus, the only term in the summand that could contribute to the limit would be the term $k=j$. For this term, both the Gamma functions have a simple singularity there and as such we have that \begin{equation} \begin{split} &\quad \, \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)^2\,\Gamma(z+1+\alpha_j)\Gamma(z+p+\alpha_j)\\ &=\Big( \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)\Gamma(z+1+\alpha_j)\Big) \Big( \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)\Gamma(z+p+\alpha_j)\Big)\\ &=\textrm{Res}(\Gamma;-m-p+1)\,\textrm{Res}(\Gamma;-m), \end{split} \end{equation} where we recall that for each $\ell\in \N\cup \{0\}$, the notation $\textrm{Res}(\Gamma,-\ell)$, $\ell\in \N\cup \{0\}$ stands for the residue\footnote{Recall that if $c$ is a simple pole of the function $f$, the residue of $f$ is given by $\mathrm{Res}(f;c)=\displaystyle\lim_{z\to c}(z-c)f(z)$.} of Gamma function $\Gamma(z)$ at the point $z=-\ell$ that equals $\frac{(-1)^\ell}{\ell!}$. Therefore, \begin{equation}\label{residue_limit_even_2} \lim\limits_{z\to -m-p-\alpha_j} (z+m+p+\alpha_j)^2\,\Gamma(z+1+\alpha_j)\Gamma(z+p+\alpha_j) =\frac{(-1)^{p-1}}{m!(m+p-1)!}.\end{equation} By combining \eqref{residue_limit_even_1}--\eqref{residue_limit_even_2} into the limit \eqref{residue_limit_even} and recalling that the answer must be zero, we obtain \begin{equation} \int_{\R^n\setminus \mathcal O} v_j(y)\, |x-y|^{2m}\,dy =0 \quad \text{for all $m\in \N\cup\{0\}$ and all $j=1,\ldots,N.$} \end{equation} In particular, since $v_j$'s all vanish on the set $\mathcal O$, we obtain that \begin{equation} \label{residue_limit_even_final} \int_{\R^n} v_j(y)\, |x-y|^{2m}\,dy =0 \quad \text{for any $x\in \omega$, $m\in \N\cup\{0\}$ and $j=1,\ldots,N.$} \end{equation} We will hold onto this and move on to the singularity analysis in the odd dimension.\\ \noindent{\bf \em II.II. 
case of odd dimension.} We will assume now that $n=2p-1$ for some $p\in \N$ so that $$ F(z)= \sum_{k=1}^N \frac{4^{\alpha_k+z}}{\pi^{p-\frac{1}{2}}} \, \frac{\Gamma\LC z+1+\alpha_k\RC \Gamma \LC z+\alpha_k +p-\frac{1}{2}\RC}{\Gamma(-\alpha_k)\,\Gamma(1+\alpha_k)} \int_{\R^n\setminus \mathcal{O}} \frac{v_k(y)}{\abs{x-y}^{2p-1+2\alpha_k +2z}}\, dy.$$ Let us now fix a number $m \in \N \cup \{0\}$ and an index $j\in \{1,\ldots,N\}$ and subsequently consider the limit \begin{equation}\label{residue_limit_odd} \lim\limits_{z\to -m-1-\alpha_j} (z+m+1+\alpha_j)\,F(z).\end{equation} As in the previous case, on the one hand, the above limit is zero, thanks to Proposition~\ref{prop_F_zero}. On the other hand, we can evaluate the limit explicitly by residue calculus. It is straightforward to see that for each $k=1,\ldots,N$ with $k\neq j$, there holds: \begin{equation}\label{residue_limit_odd_1} \lim\limits_{z\to -m-1-\alpha_j} (z+m+1+\alpha_j)\,\Gamma \LC z+1+\alpha_k\RC \Gamma\LC z+\alpha_k+p-\frac{1}{2}\RC =0,\end{equation} because the Gamma functions on the right-hand side do not have a pole at the limit, thanks to the fact that $\alpha_k \neq \alpha_j$ as well as \eqref{exp_condition_alpha} and $\alpha_k \in (0,1)$. Thus, the only term in the summation that could contribute to the limit would be the term $k=j$. For this term, we note that \begin{equation}\label{residue_limit_odd_2} \begin{split} &\quad\, \lim\limits_{z\to -m-1-\alpha_j} (z+m+1+\alpha_j)\,\Gamma(z+1+\alpha_j)\Gamma\LC z+\alpha_j+p-\frac{1}{2}\RC \\ &=\textrm{Res}(\Gamma;-m)\, \Gamma\LC -m+p-\frac{3}{2}\RC\\ &=\frac{(-1)^m}{m!} \underbrace{\Gamma\LC -m+p-\frac{3}{2}\RC}_{\text{Nonzero}} \end{split} \end{equation} By combining \eqref{residue_limit_odd_1}--\eqref{residue_limit_odd_2} into the limit \eqref{residue_limit_odd} and recalling that the answer must be zero, we obtain \begin{equation} \int_{\R^n\setminus \mathcal O} v_k(y)\, |x-y|^{2m-2p+3}\,dy =0, \quad \text{for all $m\in \N\cup\{0\}$ and all $k=1,\ldots,N.$} \end{equation} In particular, since $p\geq 1$ and since $v_k$'s vanish on the set $\mathcal O$, we obtain that \begin{equation} \label{residue_limit_odd_final} \int_{\R^n} v_k(y)\, |x-y|^{2m+1}\,dy =0, \quad \text{for any $x\in \omega$, $m\in \N\cup\{0\}$ and $k=1,\ldots,N.$} \end{equation} We conclude this section by noting that our singularity analysis of $F(z)$ together with Proposition~\ref{prop_F_zero} has given us the two equations \eqref{residue_limit_even_final} and \eqref{residue_limit_odd_final} depending on the parity of the dimension $n$. The proof of Theorem~\ref{thm_ent_smooth} now follows immediately from the next proposition. \subsection{Support theorems for spherical mean transform} We aim to complete the proof of Theorem~\ref{thm_ent_smooth} via the following proposition. \begin{proposition}\label{prop_reduction from SM transform} Let $\omega \subset \R^n$ be a nonempty bounded open set and let $v\in C^{\infty}(\mathbb R^n)$ be identical to zero on $\omega$. Assume that there exist constants $C,\rho>0$ and $\gamma>1$ such that \begin{equation} \label{v_super_decay} |v(x)| \leq C\,e^{-\rho |x|^\gamma} \quad \text{for any } x\in \R^n. \end{equation} Moreover, assume that given each $x\in \omega$ and each $m\in \N\cup \{0\}$, there holds \begin{equation}\label{moments} \begin{cases} \int_{\R^n} v(y)\, |x-y|^{2m}\,dy =0, &\text{if $n$ is even,}\\ \int_{\R^n} v(y)\, |x-y|^{2m+1}\,dy =0, &\text{if $n$ is odd}. \end{cases} \end{equation} Then, $v$ must vanish identically on $\R^n$. 
\end{proposition} The proof of the above proposition relies on the spherical mean transform, which can be found in \cite[Theorem 3.3]{Quinto} (see also \cite[Theorem 1.1, Remark 1.3]{RAMM20021033}). Let us collect the result about the spherical mean transform in the next lemma for readers' convenience. \begin{lemma}[Spherical mean transform]\label{Lem: SMT} Adopting all assumptions in Proposition \ref{prop_reduction from SM transform}, let us consider the spherical mean transform of $v\in C^\infty(\R^n)$ by \begin{equation} \mathrm{SM}v(x,t):=\int_{\mathbb{S}^{n-1}}v(x-t\theta)\, dV_{\mathbb{S}^{n-1}}(\theta), \quad t>0, \end{equation} where $\mathbb S^{n-1}$ is the unit sphere in $\R^n$ and $dV_{\mathbb S^{n-1}}$ stands for its surface measure. Suppose that $\mathrm{SM}v(x,t)=0$ for all $(x,t)\in \omega\times (0,\infty)$. Then $v\equiv 0$ on $\R^n$. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop_reduction from SM transform}] We will only prove the proposition for the case that $n$ is even. The case of odd dimensions $n\geq 2$ can be obtained by using similar arguments. First, let us fix $x\in \omega$ and observe that by our hypothesis (and since $n$ is even) there holds \begin{equation}\label{iden_even} \int_{\R^n} v(x-y)\, |y|^{2m}\,dy =0 \quad \forall\, m\in \N \cup \{0\}. \end{equation} Fixing $x\in \omega$, and recalling that $v$ vanishes on $\omega$. Let us define the function $h\in C^{\infty}([0,\infty))$ via spherical mean transforms \begin{equation}\label{sphere_mean} h(t)= t^{n-1}\int_{\mathbb S^{n-1}} v(x-t\theta)\,dV_{\mathbb S^{n-1}}(\theta) \qquad t\geq 0. \end{equation} Let us also define $f\in L^{\infty}(\R)$ via $$ f(t) = \begin{cases} h(t) \quad &\text{if $t \geq0$},\\ 0 \quad &\text{if $t<0$}. \end{cases} $$ As the function $v$ satisfies the super-exponential decay \eqref{v_super_decay}, it is straightforward to see that there is a $C'>0$ and $\rho' \in (0,\rho)$ such that \begin{equation} \label{f_decay} |f(t)| \leq C' e^{-\rho' |t|^{\gamma}}. \end{equation} In terms of the function $f$, identity \eqref{iden_even} now reduces to \begin{equation} \label{f_iden} \int_\R f(t) \,t^{2m}\,dt =0, \quad \text{for all } m\in \N\cup \{0\}. \end{equation} Let us next define $\mathcal F_f \in C^{\infty}(\C)$ to be the Fourier--Laplace transform of $f$ defined by \begin{equation} \label{Fourier_Laplace} \mathcal F_f(z) := \int_{\R} f(t) e^{-\textrm{i}tz}\,dt, \quad z\in \C. \end{equation} Observe that the super-exponential decay \eqref{f_decay} of the function $f$ makes the above definition well-defined and makes $\mathcal F_f$ to be an entire function. The condition \eqref{f_iden} and analytic continuation imply that all the even order derivatives of the function $\mathcal F_f(z)$ must vanish at the origin, that is to say $$ \LC \frac{\partial^{2m}}{\partial z^{2m}} \mathcal F_f \RC (0)=0, \quad \text{for all } m\in \N \cup \{0\}. $$ As $\mathcal F_f$ is entire, its power series expansion at zero is absolutely convergent everywhere and therefore the latter identity implies that $\mathcal F_f$ is an odd function. Using the inverse Fourier transform $$ f(t) = \frac{1}{2\pi}\,\int_\R \mathcal F_f(\xi)\, e^{\textrm{i}\xi t}\,d\xi,$$ we deduce that $f$ must also be an odd function. As it vanishes for all $t<0$, it must vanish everywhere. Recalling the definition of $f$ we obtain next that the spherical means of the function $v$ must vanish on any sphere whose center lies on the set $\omega$. 
By Lemma \ref{Lem: SMT}, we can conclude that the function $v$ must vanish everywhere in $\R^n$. \end{proof} \section{Proofs for inverse problems \ref{Q:IP1}--\ref{Q:IP2}}\label{sec: IP} In this section, we prove both Theorems \ref{Thm: global uniqueness 1} and \ref{Thm: global uniqueness 2}. The proof of both Theorems crucially relies on special cases of our entanglement principle. \begin{proof}[Proof of Theorem \ref{Thm: global uniqueness 1}] Recalling Proposition~\ref{prop_solvability}, we choose any nonzero $F \in C^{\infty}_0(\Omega_e)$ satisfying the (exterior) orthogonality condition \eqref{phi_ortho} so that equation \eqref{equ: main_solv} (with $A$ replaced by $A_1$) admits some solution $u^{(1)}\in W^{2,2}_{\delta_0}(\R^n)$. Observe that there are infinitely many such $F$'s as $\mathcal K_{A_1}$ is finite-dimensional (see also Lemma~\ref{lem_ortho}). Note also that by Proposition~\ref{prop_solvability}, we have $u^{(1)}\in W^{k,2}_{\delta_0}(\R^n)\cap W^{k,2}_{-\delta_0-2}(\R^n)$ for any $k\in \{0\}\cup \N$. Let us now consider \begin{equation}\label{def_fg} (f,g) := (u^{(1)}|_{\Omega_e}, F) =(u^{(1)}|_{\Omega_e}, (L_{A_1} u^{(1)})|_{\Omega_e})\in \mathcal C_{A_1}.\end{equation} As $\mathcal C_{A_1}=\mathcal C_{A_2}$, it follows from the definition of $\mathcal C_{A_2}$ that there exists $u^{(2)}\in W^{2,2}_{\delta_0}(\R^n)$ with $$ L_{A_2} u^{(2)} =0 \quad \text{in $\Omega$},$$ such that there holds $$ \left. u^{(2)}\right|_{\Omega_e}=\left. u^{(1)}\right|_{\Omega_e}=f \quad \text{and}\quad (L_{A_2}u^{(2)})|_{\Omega_e}= g=F.$$ As $L_{A_2}u^{(2)}=0$ in $\Omega$ and as it equals $F$ in $\Omega_e$ and finally as $F$ is compactly supported in $\overline{\Omega_e}$, we conclude that $$ L_{A_2} u^{(2)} = F \quad \text{in $\R^n$},$$ holds in the $L^2_{\delta_0+2}$-sense. We note here that since $F\in C^{\infty}_0(\Omega_e)$ and since $u^{(2)}\in W^{2,2}_{\delta_0}(\R^n)$ we have that $u^{(2)}\in W^{k,2}_{\delta_0}(\R^n)\cap W^{k,2}_{-\delta_0-2}(\R^n)$ for all $k\in \N$, thanks to Proposition~\ref{prop_solvability}. This also implies that the equation is satisfied point-wise. Next, introducing $$u:=u^{(2)}-u^{(1)} \in W^{k,2}_{\delta_0}(\R^n)\cap W^{k,2}_{-\delta_0-2}(\R^n), \quad k=0,1,\ldots,$$ we note that \begin{equation}\label{u_2_diff_eq} u|_{\Omega_e}=0 \quad \text{and}\quad L_{A_2}u= \nabla\cdot(\underbrace{(A_{2}-A_{1}}_{\textrm{Henceforth $\widetilde{A}$}})\nabla u^{(1)}) \quad \text{in $\R^n$}. \end{equation} Recalling that $A_{1}|_{\Omega_e}=A_{2}|_{\Omega_e}=\mathds{1}_{n\times n}$ and that $P_0$ is given by \eqref{P_0_def}, we obtain \begin{equation} \label{u_12_eq} u|_{\Omega_e}=0 \quad \text{and} \quad P_{0} u= 0 \quad \text{in $\Omega_e$}. \end{equation} Noting that the functions $\{p_k\}_{k=1}^N\subset C^{\infty}_0(\R^n)$ satisfy \eqref{p_k_def} and thus are each a nonzero constant in an open neighbourhood of $\overline{\Omega}$, we can now apply our entanglement principle, namely Theorem~\ref{thm: ent} with the choice $u_k:=p_k\,u$ for $k=1,\ldots,N$, in its statement and with $\mathcal O=\mathcal U\setminus \overline{\Omega}$ where $\mathcal U$ is as in \eqref{p_k_def}. The entanglement principle now implies that the function $u$ must vanish everywhere on $\R^n$. Consequently, $P_{0}u$ must also vanish globally and thus in particular, \begin{equation}\label{A_eq_0} \nabla\cdot\left(\widetilde{A}(x)\nabla u^{(1)} \right) =0 \quad \text{in $\R^n$}, \end{equation} where we recall that $\widetilde{A}(x)=A_{2}(x)-A_{1}(x)$. 
To summarize, we have proved that given any $F\in C^{\infty}_0(\Omega_e)$ that satisfies \eqref{phi_ortho} and any solution $u^{(1)}\in W^{2,2}_{\delta_0}(\R^n)$ that solves \eqref{equ: main_solv} (with $A=A_{1}$), equation \eqref{A_eq_0} must be satisfied globally. As $u^{(1)}+\zeta$ is also a solution to equation \eqref{equ: main_solv} for any $\zeta\in \mathcal K_{A_1}$ (recalling the definition \eqref{K_A_def}), and as $\widetilde{A}$ is real-valued, we deduce in particular that \begin{equation} \label{A_eq_0_1} \nabla\cdot\left(\widetilde{A}(x)\nabla \overline{\zeta} \right) =0 \quad \text{in $\R^n$}, \quad \forall\, \zeta \in \mathcal K_{A_1}. \end{equation} Next, let $h \in C^{\infty}_0(\Omega)$ be an arbitrary function. By \eqref{A_eq_0_1} we have that \begin{equation} \label{A_eq_0_2} \left( \nabla\cdot\left(\widetilde{A}(x)\nabla h \right), \overline{\zeta}\right)_{L^2(\R^n)} =0 \qquad \forall\, \zeta \in \mathcal K_{A_1}. \end{equation} Using this equality together with Proposition~\ref{prop_solvability}, we deduce that there exists a solution $w\in W^{2,2}_{\delta_0}(\R^n)\cap W^{2,2}_{-2-\delta_0}(\R^n)$ to the equation $$L_{A_1} w = \nabla\cdot\left(\widetilde{A}(x)\nabla h \right) \quad \text{in $\R^n$}.$$ Recalling equation \eqref{A_eq_0} and the fact that $h$ is compactly supported in $\Omega$, we deduce that \begin{equation} \begin{split} 0&=\LC \nabla\cdot\left(\widetilde{A}(x) \nabla u^{(1)} \right),\overline{h}\RC_{L^2(\Omega)}=\LC u^{(1)}, \nabla\cdot\left(\widetilde{A}(x) \nabla \overline{h}\right)\RC_{L^2(\Omega)}\\ &=\LC u^{(1)}, \nabla\cdot\left(\widetilde{A}(x) \nabla \overline{h}\right)\RC_{L^2(\R^n)}\\ &=\LC u^{(1)},\overline{L_{A_1}w}\RC_{L^2(\R^n)}=\LC L_{A_1}u^{(1)},\overline{w}\RC_{L^2(\R^n)}\\ &=\LC F,\overline{w}\RC_{L^2(\Omega_e)}. \end{split} \end{equation} As $F\in C^{\infty}_0(\Omega_e)$ is any function that satisfies \eqref{phi_ortho}, we deduce via Lemma~\ref{lem_ortho_cond} that \begin{equation}\label{w_eq_1} w|_{\Omega_e} = \zeta|_{\Omega_e}, \quad \text{for some $\zeta \in \mathcal K_{A_1}$}. \end{equation} Defining the function $\widetilde w \in W^{2,2}_{\delta_0}(\R^n)\cap C^{\infty}(\R^n)$ via $\widetilde w = w-\zeta$, we obtain that \begin{equation}\label{tilde_w_eq} L_{A_1}\widetilde w = \nabla\cdot\left(\widetilde{A}(x)\nabla h \right) \quad \text{in $\R^n$}\end{equation} and that $\widetilde{w}=0$ in $\Omega_e$, and therefore, recalling that $h\in C^{\infty}_0(\Omega)$, there holds $$ \left. (P_{0}\widetilde w) \right|_{\Omega_e}= \left. \widetilde{w}\right|_{\Omega_e}=0. $$ We can now apply our entanglement principle, namely Theorem~\ref{thm: ent} with the choice $u_k=p_k\widetilde w$ for $k=1,\ldots,N$, in its statement and with $\mathcal O=\mathcal U$ where $\mathcal U$ is as in \eqref{p_k_def}). Noting that $p_k$'s are nonzero on $\Omega$, the theorem yields that $\wt w =0 $ in $\Omega$ so that $\widetilde w$ must vanish on $\R^n$, where we used \eqref{w_eq_1}. Hence, we conclude via \eqref{tilde_w_eq} that \begin{equation}\label{h_final}\nabla\cdot\left(\widetilde{A}(x) \nabla h\right) =0 \quad \text{in $\R^n$}.\end{equation} To summarize, we have shown that given any $h\in C^{\infty}_0(\Omega)$, the above equation must be satisfied globally. Next, let us choose an arbitrary point $x_0 \in \Omega$ and let $B_\delta(x_0)$ be the closed ball of radius $\delta>0$ centered at $x_0$ where $\delta\in (0,1)$ is sufficiently small, so that $B_\delta(x_0)\subset \Omega$. 
Let $\chi_0:\R\to \R$ be a nonnegative smooth cutoff function such that $\chi_0(t):=\begin{cases} 1 & \text{ for } |t|\leq \frac{1}{4} \\ 0 & \text{ for } |t|\geq \frac{1}{2} \end{cases}$. Subsequently, let us consider for each $\lambda\in \R$ and each vector $\xi=(\xi_1,\ldots,\xi_n) \in \R^n$, the function $$ h_{\lambda,\xi}(x)= e^{\textrm{i} \lambda \xi\cdot (x-x_0)} \chi_0(\delta^{-1}|x-x_0|).$$ Substituting $h=h_{\lambda,\xi}$ into equation \eqref{h_final}, evaluating it at $x_0$, dividing by $\lambda^2$ and finally taking the limit as $\lambda \to \infty$, we conclude that there holds $$ \sum_{j,k=1}^n \widetilde{A}^{jk}(x_0) \,\xi_j \xi_k=0 , \quad \forall\, \xi \in \R^n,$$ and finally (due to symmetry) that $\widetilde{A}(x_0)=0$. The theorem is now proved since $x_0\in \Omega$ is arbitrary. \end{proof} Now, we turn to prove Theorem \ref{Thm: global uniqueness 2}. We present a different perspective based on the Runge approximation property for the sake of interest. We first show the Runge approximation property. \begin{proof}[Proof of Theorem \ref{Thm: Runge}] Consider the set \begin{equation} \mathcal{R}:=\left\{ u_f -f : \, f\in C^\infty_0(W)\right\}, \end{equation} where the notation $u_f \in H^{s_N}(\R^n)$ stands for the unique solution to \eqref{equ: main 2}. It suffices to show that $\mathcal{R}$ is dense in $L^2(\Omega)$. The proof is standard by using the duality argument with the Hahn-Banach theorem. It suffices to show that if $v\in L^2(\Omega)$ satisfies $\LC v, w \RC_{L^2(\Omega)}=0$ for any $w\in \mathcal{R}$, then $v\equiv0$. To show this, let $v$ be such a function, that is to say, \begin{equation}\label{eq:11111111} \LC v,u_{f}-f\RC_{L^2(\Omega)} =0,\quad \text{for any }f\in C_{0}^{\infty}(W). \end{equation} Let $\phi\in\widetilde{H}^{s_N}(\R^n)$ be the unique solution of $P_q \phi=v$ in $\Omega$ subject to vanishing exterior Dirichlet data, namely $\phi|_{\Omega_e}=0$. Next, observe that for any $f\in C_{0}^{\infty}(W)$, there holds \begin{equation}\label{eq:formal_adjoint_relation} B_{q}(\phi,f)=B_{q}(\phi,f-u_{f})=\LC v,f-u_{f}\RC _{L^2(\Omega)} \end{equation} where $B_q(\cdot,\cdot)$ is the sesquilinear form given by \eqref{B_q bilinear} and we have used the facts that $u_{f}$ is the solution of \eqref{equ: main 2}, and $\phi\in\widetilde{H}^{s}(\Omega)$ (recall that $\phi|_{\Omega_e}=0$). Applying \eqref{eq:11111111} and \eqref{eq:formal_adjoint_relation} together with the fact that $q$ is zero outside $\Omega$, we deduce that \[ \big( \sum_{k=1}^N b_k (-\Delta)^{s_k}\phi,f\big)_{L^2(\mathbb{R}^{n})}=0\mbox{ for any }f\in C_{0}^{\infty}(W). \] Thus, the function $\phi\in \wt H^{s_N}(\Omega)$ satisfies \[ \phi=\sum_{k=1}^N b_k (-\Delta)^{s_k}\phi =0 \text{ in }W. \] Thanks to the entanglement principle of Theorem \ref{thm: ent} again, we obtain $\phi\equiv0$ and then $v\equiv0$. Note that the application of the entanglement principle is justified here as $\phi=0$ in $\Omega_e$, and thus in particular satisfies the super-exponential decay condition automatically. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: global uniqueness 2}] We follow the same argument as the proof of \cite[Theorem 1.1]{GSU20}. If $\left. 
\Lambda_{q_{1}}f\right|_{W_{2}}=\left.\Lambda_{q_{2}}f\right|_{W_{2}}$ for any $f\in C_{0}^{\infty}(W_{1})$, where $W_{1}$ and $W_{2}$ are open subsets of $\Omega_{e}$, by the integral identity \eqref{eq:integral identity}, we have \[ \int_{\Omega}(q_{1}-q_{2})u_{1}\,\overline{u_{2}}\, dx=0, \] where $u_{j}\in H^{s_N}(\R^{n})$ solves $P_{q_j}u_{j}=0$ in $\Omega$ with $u_{j}$ having exterior values $f_{j}\in C_{0}^{\infty}(W_{j})$, for $j=1,2$. Given an arbitrary $\phi\in L^{2}(\Omega)$, by using the Runge approximation of Theorem \ref{Thm: Runge}, there exist two sequences of functions $\big( u_{\ell}^{1}\big)_{\ell\in \N}$, $\big(u_{\ell}^{2}\big)_{\ell\in \N}\subset H^{s_N}(\mathbb{R}^{n})$ that fulfill \begin{align*} & P_{q_1} u_{\ell}^{1}=P_{q_2} u_{\ell}^{2}=0\text{ in \ensuremath{\Omega}},\\ & \supp\big(u_{\ell}^{1}\big)\subseteq\overline{\Omega_{1}}, \quad \supp\big(u_{\ell}^{2}\big)\subseteq\overline{\Omega_{2}},\\ & \left. u_{\ell}^{1}\right|_{\Omega}=\phi+r_{\ell}^{1},\quad \left. u_{\ell}^{2}\right|_{\Omega}=1+r_{\ell}^{2}, \end{align*} where $\Omega_{1}$, $\Omega_{2}\subset \R^n$ are two open sets containing $\Omega$, and $r_{\ell}^{1},r_{\ell}^{2}\to0$ in $L^{2}(\Omega)$ as $\ell\to\infty$. Plugging these solutions into the integral identity and passing to the limit as $\ell\to\infty$, we infer that \[ \int_{\Omega}\LC q_{1}-q_{2}\RC \phi \, dx=0. \] As $\phi\in L^{2}(\Omega)$ is arbitrary, we can conclude that $q_{1}=q_{2}$ in $\Omega$. This concludes the proof. \end{proof} \medskip \noindent\textbf{Acknowledgment.} Y.-H. Lin is partially supported by the Ministry of Science and Technology Taiwan, under projects 113-2628-M-A49-003 and 113-2115-M-A49-017-MY3. \bibliography{refs} \bibliographystyle{alpha} \end{document}
2412.13424v1
http://arxiv.org/abs/2412.13424v1
Remarks on retracts of polynomial rings in three variables in any characteristic
\documentclass[a4paper, 12pt]{amsart} \usepackage[top=40truemm,bottom=40truemm,left=35truemm,right=35truemm]{geometry} \usepackage{amssymb,amsmath} \usepackage{longtable} \usepackage{color} \usepackage[normalem]{ulem} \makeatletter \@namedef{subjclassname@2010}{ \textup{2010} Mathematics Subject Classification} \makeatother \renewcommand{\baselinestretch}{1.1} \setlength{\footskip}{30pt} \numberwithin{equation}{section} \newcommand{\mvskip}{\vspace{4mm}} \newcommand{\svskip}{\vspace{2mm}} \newcommand{\A}{\mathbb{A}} \newcommand{\C}{\mathbb{C}} \newcommand{\G}{\mathbb{G}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\BP}{\mathbb{P}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\F}{\mathbb{F}} \newcommand{\Ker}{\operatorname{Ker}} \newcommand{\Nil}{\operatorname{Nil}} \newcommand{\Spec}{\operatorname{Spec}} \newcommand{\EXP}{\operatorname{EXP}} \newcommand{\ML}{\operatorname{ML}} \newcommand{\AK}{\operatorname{AK}} \newcommand{\ol}[1]{\overline{#1}}\newcommand{\trdeg}{{\rm tr.deg}\:} \newcommand{\ch}{{\rm char}\:} \newcommand{\lc}{{\rm lc}\:} \newtheorem{thm}{Theorem}[section]\newtheorem{prop}[thm]{Proposition}\newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{defn}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{remark}[thm]{Remark} \newtheorem{problem}[thm]{Problem} \newtheorem*{problem1}{Problem 1} \newtheorem*{claim1}{Claim 1} \newtheorem*{claim2}{Claim 2} \newtheorem*{claim3}{Claim 3} \pagestyle{plain} \begin{document} \title[Remarks on retracts of polynomial rings in three variables]{Remarks on retracts of polynomial rings in three variables in any characteristic} \author{Hideo Kojima} \address[H. Kojima]{Department of Mathematics, Faculty of Science, Niigata University, 8050 Ikarashininocho, Nishi-ku, Niigata 950-2181, Japan} \email{[email protected]} \author{Takanori Nagamine} \address[T. Nagamine]{Department of Mathematics, College of Science and Technology, Nihon University, 1-8-14 Kanda-Surugadai, Chiyoda-ku, Tokyo 101-8308, Japan} \email{[email protected]} \author{Riko Sasagawa} \address[R. Sasagawa]{Niigata Prefectural Tokamachi Senior High School, 1-203 Honcho-Nishi, Tokamachi-shi, Niigata, 948-0083, Japan} \subjclass[2020]{Primary 13B25; Secondary 13N15, 13A50, 14R20} \thanks{Research of the first author is supported by JSPS KAKENHI Grant Number JP 21K03200, and the second author by JSPS KAKENHI Grant Number JP 21K13782.} \keywords{Polynomial rings, Exponential maps, Retracts.} \begin{abstract} Let $A$ be a retract of the polynomial ring in three variables over a field $k$. It is known that if $\ch(k) = 0$ or $\trdeg_k A \not= 2$ then $A$ is a polynomial ring. In this paper, we give some sufficient conditions for $A$ to be the polynomial ring in two variables over $k$ when $\ch(k) > 0$ and $\trdeg_k A = 2$. \end{abstract} \maketitle \setcounter{section}{0} \section{Introduction} Throughout the paper, $k$ denotes a field. For an integral domain $R$ and a positive integer $n$, $R^{[n]}$ denotes the polynomial ring in $n$ variables over $R$. $Q(R)$ denotes the quotient field of $R$. Let $B$ be a $k$-algebra and $A$ its $k$-subalgebra. We say that $A$ is a \textit{retract} of $B$ if there exists an ideal $I$ of $B$ such that $B = A \oplus I$, where the sum is $A$-direct (cf.\ Definition 2.1). Several mathematicians have studied retracts of $k$-domains. 
See, e.g., Costa \cite{C77}, Shpilrain--Yu \cite{SY00}, Liu--Sun \cite{LS18}, \cite{LS21}, the second author \cite{N19}, Chakraborty--Dasgupta--Dutta--Gupta \cite{CDDG21}, Gupta--the second author \cite{GN23}. In particular, retracts of polynomial rings have been studied extensively because they are closely related to the Zariski cancellation problem. In \cite{C77}, Costa posed the following problem. \begin{problem1} Is every retract of $k^{[n]}$ a polynomial ring? \end{problem1} We note that Problem 1 includes the Zariski cancellation problem. Indeed, if a $k$-algebra $A$ satisfies $A^{[1]} \cong k^{[n]}$ as $k$-algebras, then $A$ can be regarded as a retract of $k^{[n]}$. So, if Problem 1 has an affirmative answer for fixed $k$ and $n$, then we conclude that $A \cong k^{[n-1]}$. Let $A$ be a retract of $B:= k^{[n]}$. We have the following results on Problem 1. \begin{itemize} \item[(1)] If $\trdeg_k A = 0$ (resp.\ $\trdeg_k A = n$), then $A = k$ (resp.\ $A = B$). \item[(2)] (\cite[Theorem 3.5]{C77})\ \ If $\trdeg_k A = 1$, then $A \cong k^{[1]}$. In particular, if $n = 2$, then $A$ is a polynomial ring. \item[(3)] (\cite[Theorem 2.5]{N19}, \cite[Theorem 5.8]{CDDG21})\ \ If $\ch(k) = 0$ and $\trdeg_k A = 2$, then $A \cong k^{[2]}$. In particular, if $n = 3$ and $\ch(k) = 0$, then $A$ is a polynomial ring. \item[(4)] In the case $n \geq 3$ and $\ch(k) > 0$, Gupta \cite{G14-1}, \cite{G14-2} proved that the Zariski cancellation problem does not hold for the affine $n$-space $\A^n$. So, Problem 1 has counterexamples when $n \geq 4$ and $\ch(k) > 0$. \end{itemize} Therefore, Problem 1 remains open in the following cases: \begin{itemize} \item $n = 3$ and $\ch (k) > 0$. \item $n \geq 4$ and $\ch (k) = 0$. \end{itemize} Let $A$ be a retract of $k^{[3]}$ with $\trdeg_k A = 2$. When $\ch (k) = 0$, $A \cong k^{[2]}$ by \cite[Theorem 2.5]{N19} (or \cite[Theorem 5.8]{CDDG21}). Unfortunately, their proofs do not work in the case $\ch (k) > 0$. In fact, the second author \cite{N19} used the result \cite[Theorem 1.1 (a)]{K80} in the proof of \cite[Theorem 2.5]{N19}, which does not hold true when $\ch(k) > 0$. In the proof of \cite[Theorem 5.8]{CDDG21}, the authors used \cite[Theorem 2.8]{CDDG21}, which cannot be used when $\ch (k) > 0$. Furthermore, they used the fact that $k^{[2]}$ over a field $k$ of characteristic zero has no non-trivial forms (cf.\ \cite[Theorem 3]{K75}), which does not hold true when $\ch(k) > 0$. In this paper, we study the case where $n = 3$ and $\ch (k) > 0$. In Section 2, we recall some results on retracts and exponential maps and their rings of constants on integral domains. In Section 3, we give some sufficient conditions for a retract $A$ of $k^{[3]}$ to be a polynomial ring in two variables over $k$ when $\ch(k) > 0$ and $\trdeg_k A = 2$. More precisely, we prove that if one of the following conditions holds, then $A \cong k^{[2]}$: \begin{enumerate} \item $A$ is the ring of constants of some non-trivial exponential map on $k^{[3]}$ (cf.\ Corollary 3.2). \item $\ML(A) \not= A$, that is, there exists a non-trivial exponential map on $A$ (cf.\ Theorem 3.3). \item $k$ is algebraically closed and there exists a $\Z$-grading on $A$ with $A_0\not=A$ (cf.\ Theorem 3.7). \end{enumerate} In Section 4, we give several examples of retracts of low-dimensional polynomial rings over $k$. All of the retracts in these examples turn out to be polynomial rings. \section{Preliminary results} Let $R$ be an integral domain. In this section, all rings we consider are $R$-domains.
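Before recalling the formal definition, it may help to keep in mind the simplest example of a retract (a minimal illustration of Definition 2.1 below): for $B=R[x,y]$ and $A=R[x]$, the $R$-algebra endomorphism $\varphi$ of $B$ determined by $\varphi(x)=x$ and $\varphi(y)=0$ is idempotent with image $A$, and one has the $A$-module decomposition $$ B = A \oplus yB, $$ since every $g\in B$ can be written uniquely as $g(x,0)+\big(g-g(x,0)\big)$ with $g-g(x,0)\in yB$.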
\begin{defn} {\rm An $R$-subalgebra $A$ of an $R$-algebra $B$ is called a {\em retract} if it satisfies one of the following equivalent conditions: \begin{itemize} \item[(1)] There exists an idempotent $R$-algebra endomorphism $\varphi$ of $B$, which is called a {\em retraction}, such that $\varphi(B) = A$. \item[(2)] There exists an $R$-algebra homomorphism $\varphi: B \to A$ such that $\varphi|_{A} = {\rm id}_A$. \item[(3)] There exists an ideal $I$ of $B$ such that $B = A \oplus I$ as $A$-modules. \end{itemize} } \end{defn} We recall some properties of retracts. \begin{lem} \label{properties of retracts} Let $B$ be an $R$-domain and $A$ a retract of $B$. Then the following assertions hold true. \begin{itemize} \item[(1)] $A$ is algebraically closed in $B$, namely, every algebraic element of $B$ over $A$ belongs to $A$. So $Q(A) \cap B = A$. \item[(2)] If $B$ is finitely generated over $R$, then so is $A$. \item[(3)] If $B$ is a UFD, then so is $A$. \item[(4)] If $B$ is regular, then so is $A$. \end{itemize} \end{lem} \begin{proof} (1)\ \ See \cite[Lemma 1.3]{C77}. (2) \ \ Obvious. (3)\ \ See \cite[Proposition 1.8]{C77}. (4)\ \ See \cite[Corollary 1.11]{C77}. \end{proof} Let $B$ be an $R$-domain and let $\sigma: B \to B^{[1]}$ be an $R$-algebra homomorphism from $B$ to $B^{[1]}$. For an indeterminate $U$ over $B$, let $\sigma_U$ denote the map $\sigma: B \to B[U]$. Then $\sigma$ is called an \textit{exponential map} on $B$ if the following conditions hold. \begin{itemize} \item[(i)] $\epsilon_0 \circ \sigma_U = {\rm id}_B$, where $\epsilon_0 : B[U] \to B$ is the evaluation at $U = 0$. \item[(ii)] For indeterminates $U, \, V$, we have $\sigma_V \circ \sigma_U = \sigma_{U+V}$, where $\sigma_V: B \to B[V]$ is extended to an $R$-algebra homomorphism $\sigma_V : B[U] \to B[U,\, V] (= B^{[2]})$ by setting $\sigma_V(U) = U$. \end{itemize} For an exponential map $\sigma : B \to B^{[1]}$, we denote the ring of constants of $\sigma$ by $B^{\sigma}$. Namely, $B^{\sigma} = \{ x \in B \ | \ \sigma(x) = x \} = \sigma^{-1}(B)$. The exponential map $\sigma$ is said to be \textit{trivial} if $B^{\sigma} = B$. We note that the conditions (i) and (ii) as above is equivalent to the condition that $\sigma$ is a co-action of the additive group scheme $\G_a = \Spec R[U]$ on $\Spec B$. If $R$ is a field of $\ch(R) = 0$, then an exponential map $\sigma: B \to B^{[1]}$ is corresponding to a locally nilpotent $R$-derivation $D: B \to B$. In particular, $B^{\sigma} = \Ker D$. For each $b \in B$, we define $\deg_{\sigma} (b)$ (resp.\ $\lc_{\sigma} (b)$) to be the degree of $\sigma(b)$ as a polynomial in the indeterminate over $B$ (resp.\ the leading coefficient of $\sigma (b)$). Assume that $\sigma$ is non-trivial, i.e., $B^{\sigma} \not= B$. We call $s \in B$ a {\em local slice} of $\sigma$ if $\deg_{\sigma} (s) = \min \{ \deg_{\sigma} (b) \ | \ b \in B \setminus B^{\sigma} \}$. For a local slice $s \in B$ of $\sigma$, it is clear that $\lc_{\sigma} (s) \in B^{\sigma}$. We recall the following well-known result. \begin{lem} Let $\sigma$ be a non-trivial exponential map on an $R$-domain $B$. For a local slice $s \in B$ of $\sigma$, set $a = \lc_{\sigma} (s)$. Then $B_a (:= B[\frac{1}{a}]) = (B^{\sigma})_a [s]$ and $s$ is indeterminate over $B^{\sigma}$. \end{lem} \begin{proof} See, e.g., \cite[Corollary 2.4]{Kuroda17}. \end{proof} Let $\EXP(B)$ denote the set of all exponential maps on $B$. 
Then the Makar-Limanov invariant of $B$, which is denoted by $\ML (B)$, is defined as $$ \ML (B) = \bigcap_{\sigma \in \EXP (B) } B^{\sigma}. $$ The Makar-Limanov invariant $\ML(B)$ of $B$ is also called the AK invariant of $B$ (the ring of absolute constants of $B$) and denoted by $\AK(B)$. Let $A$ be a subring of $B$. Then $A$ is said to be {\em factorially closed} in $B$ if, for any two nonzero elements $b_1, b_2 \in B$, $b_1 b_2 \in A$ implies that $b_1, b_2 \in A$. In some papers, a factorially closed subring is called an {\em inert} subring. It is well known that $B^{\sigma}$ is factorially closed in $B$ for any $\sigma \in \EXP (B)$ (see e.g., \cite[Lemma 2.1 (1)]{K16}). \section{Retracts and rings of invariants of exponential maps} In this section, we give some sufficient conditions for a retract $A$ of $k^{[3]}$ to be a polynomial ring. Throughout this subsection, $R$ denotes a domain. \subsection{Retracts and rings of constants of exponential maps} In this subsection, we consider retracts and rings of invariants of exponential maps on $R$-domains. The following result is a generalization of \cite[Proposition 4.2]{CDDG21} and \cite[Theorem 2.5]{LS21}. In fact, the proof of Theorem 3.1 is almost the same as that of \cite[Theorem 2.5]{LS21}. In Theorem 3.1, the implication (1) $\Longrightarrow$ (2) and the equivalence of (1) and (3) when $B$ is a UFD follow from \cite[Proposition 4.2]{CDDG21}. Here, we use the technique given in the proof of \cite[Theorem 2.5]{LS21}. \begin{thm} Let $B$ be an $R$-domain and $A$ an $R$-subalgebra of $B$. Then the following conditions are equivalent to each other{\rm :} \begin{itemize} \item[(1)] $B \cong A^{[1]}$ as an $A$-algebra. \item[(2)] $A$ is a retract of $B$ with $B = A \oplus Bh$ for some $h \in B \setminus \{ 0 \}$ and $A = B^{\sigma}$ for some non-trivial exponential map $\sigma$ on $B$. \end{itemize} Moreover, if $B$ is a UFD, then the condition {\rm (1)} is equivalent to the following condition {\rm (3)}. \begin{itemize} \item[(3)] $A$ is a retract of $B$ and $A = B^{\sigma}$ for some non-trivial exponential map $\sigma$ on $B$. \end{itemize} \end{thm} \begin{proof} The proof consists of three parts. \noindent \textbf{(1) $\Longrightarrow$ (2) and (1) $\Longrightarrow$ (3).} By (1), $B = A[h]$, where $h$ is an indeterminate. So $A$ is a retract of $B$ and $B = A \oplus Bh$. We have an $R$-algebra homomorphism $\sigma: B \to B^{[1]} = B[t]$ such that $\sigma(h) = h + t$ and $\sigma(a) = a$ for any $a \in A$. Then $\sigma$ is an exponential map on $B$ and $B^{\sigma} = A$. This proves (2) and (3). \smallskip \noindent \textbf{(2) $\Longrightarrow$ (1).} Let $\pi: B \to A=B^{\sigma}$ be the projection form $B$ onto $A$ regarding to the decomposition $B = A \oplus Bh$. We know that $\pi$ is a retraction with $\Ker \pi = Bh$. Then $h$ is a local slice of $\sigma$. Indeed, for a local slice $s \in B$, $s':= s - \pi(s) \in Bh$, say $s' = bh$ for some $b \in B\setminus \{0 \}$. Since $h \not\in A = B^{\sigma}$, $\deg_{\sigma} (h) \geq \deg_{\sigma} (s)$. Moreover, $$ \deg_{\sigma} (s) = \deg_{\sigma} (s') = \deg_{\sigma} (b) + \deg_{\sigma} (h) \geq \deg_{\sigma} (h), $$ which implies $\deg_{\sigma} (h) = \deg_{\sigma} (s)$. Hence $h$ is a local slice of $\sigma$. By Lemma 2.3, $B_a = A_a [h]$, where $a = \lc_{\sigma}(h) \in A$. Since $A$ is algebraically closed in $B$ and $h \not\in A$, $A [h] \cong A^{[1]}$ as $A$-algebras. For any $f \in B$, there exists a positive integer $m$ such that $a^m f \in A[h]$. 
We may assume that $a^m f \in M:= A \oplus Ah \oplus \cdots \oplus Ah^r$ for some $r \geq 0$. Since $$ B = A \oplus Bh = A \oplus (A \oplus Bh) h = A \oplus Ah \oplus Bh^2, $$ we have $B = M \oplus Bh^{r+1}$. If $f = g + bh^{r+1}$ for some $g \in M$ and $b \in B$, then $a^m b h^{r+1} = a^m f - a^m g \in M \cap Bh^{r+1} = \{ 0 \}$, hence $b = 0$. Therefore, $f = g \in M \subset A[h]$. This proves (1). \smallskip \noindent \textbf{(3) $\Longrightarrow$ (2) in the case where $B$ is a UFD.} Let $\pi: B \to A$ be a retraction and set $I = \Ker \pi$. Then $B = A \oplus I$. Take a local slice $s$ of $\sigma$, which exists since $\sigma$ is non-trivial. Since $s - \pi(s) \in I$ and $s - \pi(s)$ is a local slice of $\sigma$ because $\pi(s) \in A = B^{\sigma}$, we may assume that $s \in I$. Let $s = p_1 \cdots p_{\ell}$ be the prime decomposition of $s$. Since $ \deg_{\sigma} (s) = \sum_{i=1}^{\ell} \deg_{\sigma}(p_i) $ and $\deg_{\sigma} (p_i) = 0$ or $\geq \deg_{\sigma} (s)$ for $i = 1, \ldots, \ell$, there exists unique $i$ such that $\deg_{\sigma} (p_i ) = \deg_{\sigma} (s)$, say $\deg_{\sigma} (p_1) = \deg_{\sigma} (s)$, and $\deg_{\sigma} (p_2) = \cdots = \deg_{\sigma} (p_{\ell}) = 0$. Namely, $p_1$ is a local slice of $\sigma$ and $p_2, \ldots, p_{\ell} \in A$. Set $a = \lc_{\sigma} (p_1)$. By Lemma 2.3, $B_{a} = A_{a}[p_1]$ and $p_1$ is indeterminate over $A$. Since $s = p_1 p_2 \cdots p_s \in I$ and $p_2, \ldots, p_s \in A$, $p_1 \in I$. For any $u \in I$, there exist a positive integer $n(u)$ and some $a_0, a_1, \ldots, a_n \in A$ such that $$ a^{n(u)} u = a_0 + a_1 p_1 + \cdots + a_n p_1^n. $$ Since $u, p_1 \in I$, $a_0 \in I \cap A$. Then $a_0 = 0$ and so $a^{n(u)} u \in p_1 B$. Since $p_1$ is a prime element of $B$, $u \in p_1 B$ or $a^{n(u)} \in p_1 B$. If $a^{n(u)} \in p_1 B$, then $v p_1 = a^{n(u)}$ for some $v \in B$. Since $a \in A = B^{\sigma}$ and $A$ is factorially closed in $B$, $p_1 \in A$. This is a contradiction. Therefore, $I = p_1 B$ is a principal ideal. This proves (2). \end{proof} By Theorem 3.1 and the cancellation theorem for the affine plane over a field, we have the following result. \begin{cor} Let $k$ be a field and $A$ a $k$-algebra. If $A$ is a retract of $B = k^{[3]}$ and there exists a non-trivial exponential map $\sigma$ on $B$ such that $A = B^{\sigma}$, then $A \cong k^{[2]}$. \end{cor} \begin{proof} By Theorem 3.1, $B \cong A^{[1]}$ as $k$-algebras. By the cancellation property of $k^{[2]}$ (see \cite[Theorem 2.1]{BG15}, \cite[Theorem 1.2]{Kuroda17} or \cite[Theorem 3.4]{K16}), we have $A \cong k^{[2]}$. \end{proof} \subsection{Retracts of $k^{[3]}$ with non-trivial exponential maps} In this subsection, we prove the following result. \begin{thm} Let $k$ be a field of $p:= \ch (k) \geq 0$ and $B = k^{[3]}$ the polynomial ring in three variables over $k$. Let $A$ be a retract of $B$ with $\trdeg_k A = 2$. If $p = 0$ or $\ML(A) \not= A$, then $A \cong k^{[2]}$ as $k$-algebras. \end{thm} \begin{proof} If $p = 0$, then $A \cong k^{[2]}$ by \cite[Theorem 5.8]{CDDG21} or \cite[Theorem 2.5]{N19}. From now on, assume that $p > 0$ and $\ML(A) \not= A$. Since $A$ is a retract of $B = k^{[3]}$ and $B$ is finitely generated as a $k$-algebra, $A$ is finitely generated as a $k$-algebra by Lemma 2.2 (2). Further, since $B$ is a UFD, so is $A$ by Lemma 2.2 (3). In particular, $A$ is normal. By $\ML(A) \not= A$, there exists a non-trivial exponential map $\sigma$ on $A$. Set $R = A^{\sigma}$. \begin{claim1} $R \cong k^{[1]}$. 
\end{claim1} \begin{proof} Since $\sigma$ is a non-trivial exponential map on $A$, we have $$ \trdeg_k R = \trdeg_k A - 1 = 2 -1 = 1. $$ Since $A$ is a UFD, so is $R$ by \cite[Lemma 2.1 (1)]{K16}. Then we infer from \cite[Theorem III]{EZ70} (or \cite[Theorem (4.1)]{AEH72}) that $R \cong k^{[1]}$. \end{proof} \begin{claim2} {\rm $A$ is geometrically factorial over $k$, i.e., $A \otimes_k k'$ is a UFD for any algebraic extension field $k'$ of $k$. } \end{claim2} \begin{proof} Let $k'$ be an algebraic extension field of $k$. By \cite[Lemma 1.2]{N19}, $A \otimes_k k'$ is a retract of $B \otimes_k k'$ over $k'$. Since $B \otimes_k k' \cong k'^{[3]}$ is a UFD, so is $A \otimes_k k'$. This proves the assertion. \end{proof} \begin{claim3} {\rm $A \cong k^{[2]}$. } \end{claim3} \begin{proof} By Claim 1, $R = k[f]$ for some $f \in A \setminus k$. By \cite[Lemma 2.1 (3)]{K16}, $Q(R) \cap A = R$. Hence, $k(f) \cap A = k[f]$. Set $S= k[f] \setminus \{ 0 \}$. Then, by Lemma 2.3, we know that $S^{-1} A \cong k(f)^{[1]}$ because $R = k[f]$. By Claim 2, $A$ is geometrically factorial over $k$. Therefore, by virtue of Lemma 3.4 below, we know that $A \cong k[f]^{[1]} \cong k^{[2]}$. \end{proof} Theorem 3.3 is thus verified. \end{proof} \begin{lem} {\rm (Russell--Sathaye \cite[Theorem 2.4.2]{RS79})} Let $k$ be a field, $A$ a finitely generated $k$-domain and $f \in A$. Set $S = k[f] \setminus \{ 0 \}$. Assume that the following conditions {\rm (1)}--{\rm (3)} are satisfied{\rm :} \begin{enumerate} \item[{\rm (1)}] $S^{-1} A = k(f)^{[1]}$. \item[{\rm (2)}] $k(f) \cap A = k[f]$. \item[{\rm (3)}] $A$ is geometrically factorial over $k$. \end{enumerate} Then $A \cong k[f]^{[1]}$. \end{lem} \begin{proof} See \cite[2.4]{RS79}, where we consider $L$ and $F$ as $k$ and $f$, respectively. \end{proof} Let $A$ be a $k$-subalgebra of $k^{[n]}$ and assume that $A$ is factorially closed in $k^{[n]}$. We easily see that $A$ is a polynomial ring if $k$ is algebraically closed and $\trdeg_k A\in \{0,1,n\}$. In particular, in \cite[Theorem 1 (2)]{CGM15}, Chakraborty, Gurjar and Miyanishi proved that if $k$ is algebraically closed of characteristic zero, then every factorially closed subring of $k^{[3]}$ is a polynomial ring. By the argument of the proof of Theorem 3.3, we obtain the following result. \begin{prop} Let $k$ be an algebraically closed field of $p:= \ch (k) \geq 0$ and $B = k^{[3]}$ the polynomial ring in three variables over $k$. Let $A$ be a factorially closed $k$-subalgebra of $B$ with $\trdeg_k A = 2$. If $p = 0$ or $\ML(A) \not= A$, then $A \cong k^{[2]}$ as $k$-algebras. \end{prop} \begin{proof} If $p = 0$, then $A \cong k^{[2]}$ by \cite[Theorem 1 (2)]{CGM15}. So we assume that $p > 0$ and $\ML(A) \not= A$. Since $A$ is algebraically closed in $B$, $A = Q(A) \cap B$. By $\trdeg_k A = 2 (<3)$ and Zariski's theorem in \cite{Z54}, we know that $A$ is finitely generated over $k$. By $\ML(A) \not= A$, there exists a non-trivial exponential map $\sigma$ on $A$. Then the proof of Claim 1 in the proof of Theorem 3.3 works in our setting. So $R:= A^{\sigma} \cong k^{[1]}$. Since $A$ is factorially closed in $B$ and $B$ is a UFD, $A$ is a UFD. In particular, $A$ is geometrically factorial since $k$ is algebraically closed. Finally, we easily see that the proof of Claim 3 in the proof of Theorem 3.3 works in our setting. This proves $A \cong k^{[2]}$. \end{proof} \subsection{Retracts of $k^{[3]}$ with $\Z$-gradings} In this subsection, we consider retracts having $\Z$-gradings. 
Let $A$ be a finitely generated $k$-domain and $X=\Spec A$. $\G_m$ denotes the $1$-dimensional algebraic torus over $k$. It is well known that a $\G_m$-action on $X$ corresponds to a $\Z$-grading on $A$, that is, $A=\bigoplus_{d\in\Z}A_d$, where $A_d$ denotes the homogeneous component of $A$ of degree $d\in\Z$. A $\G_m$-action on $X$ is said to be {\it effective} if $A\not=A_0$ with respect to the corresponding $\Z$-grading on $A$. $X$ is called an {\it affine $\G_m$-variety} if $X$ has an effective $\G_m$-action. In \cite{K23}, the first author proved the following. \begin{thm} {\rm (\cite[Theorem 1.2]{K23})}\label{G_m-surf} Let $k$ be an algebraically closed field of any characteristic and $S=\Spec A$ be an affine $\G_m$-surface over $k$. If $S$ is smooth and $A$ is a UFD with $A^*=k^*$, then $S\cong\A^2$. That is, $A\cong k^{[2]}$. \end{thm} In \cite[Corollary 5.15]{CDDG21}, Chakraborty, Dasgupta, Dutta and Gupta proved that if $A$ is a graded subalgebra of $B\cong k^{[n]}$ with the standard grading and there exists a retraction $\pi:B\to A$ such that $\pi(B_+)\subseteq B_+$, then $A$ is a polynomial ring over $k$. The following theorem suggests that these assumptions are unnecessary when $n=3$. \begin{thm} \label{graded} Let $k$ be an algebraically closed field of any characteristic and let $A$ be a retract of $B\cong k^{[n]}$ for some $n\geq2$ with $\trdeg_k\:A=2$. If $A$ has a $\Z$-grading with $A_0\not=A$, then $A\cong k^{[2]}$. Therefore, every retract of $k^{[3]}$ admitting a $\Z$-grading with $A_0\not=A$ is a polynomial ring over $k$. \end{thm} \begin{proof} Let $S:=\Spec A$. By the assumption, $S$ is an affine $\G_m$-surface. Since $A$ is a retract of $B\cong k^{[n]}$, Lemma \ref{properties of retracts} implies that $A$ is a regular UFD with $A^*=k^*$. In particular, $S$ is smooth. Therefore, it follows from Theorem \ref{G_m-surf} that $A\cong k^{[2]}$. \end{proof} \section{Examples of retracts of low-dimensional polynomial rings} In this section, we give some examples of retracts of polynomial rings in two or three variables and we determine their generators. Additionally, we check whether each retract is a polynomial ring or not. Let $k$ be a field and $A$ a retract of $B=k^{[n]}=k[x_1, \ldots, x_n]$ for a positive integer $n$. It is clear that, if $n = 1$, then either $A=B$ or $A = k$. If either $n=2$, or $n=3$ and $\ch(k) = 0$, then we know that $A$ is a polynomial ring over $k$. Furthermore, if $\trdeg_k A =1$ (resp.\ $n=3$, $\ch(k) = 0$ and $\trdeg_k A = 2$), then $A=k[f]$ (resp.\ $A=k[f_1,f_2]$) for some $f \in B \setminus k$ (resp.\ $f_1,f_2 \in B \setminus k$). When $n = 2$ and $\ch(k) = 0$, Shpilrain--Yu \cite{SY00} gave a characterization of all the retracts of $B$ up to an automorphism, and gave several applications of this characterization to the $2$-dimensional Jacobian conjecture. However, it is difficult to determine the retracts of $B$ when $n \ge 2$. Let $\varphi$ be a retraction from $B$ to $A$. That is, $A = \operatorname{Im} \varphi$, where $\varphi : B \to B$ is a $k$-algebra endomorphism of $B$ and $\varphi ^{2} = \varphi$. We set $f_i=\varphi(x_i)$ for $i=1, \dots ,n$. Since $\varphi$ is a $k$-algebra homomorphism, we have $$ \varphi(g) = g(\varphi(x_1), \dots ,\varphi(x_n)) = g(f_1, \dots ,f_n) $$ for any polynomial $g \in B$. In particular, \begin{equation} f_i=\varphi(x_i)=\varphi (\varphi(x_i)) = \varphi(f_i) =f_i(f_1, \dots ,f_n) \tag{4.1} \end{equation} for each $i=1, \dots ,n$. Furthermore, $A=k[f_1, \dots ,f_n]$.
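Before turning to the converse direction, let us record a simple non-monomial illustration of (4.1). Take $B=k[x,y]$ and the $k$-algebra endomorphism $\varphi$ of $B$ determined by $\varphi(x)=x$ and $\varphi(y)=x^{2}+x$, so that $f_1=x$ and $f_2=x^{2}+x$. Then $$ f_1(f_1,f_2)=x=f_1, \qquad f_2(f_1,f_2)=f_1^{2}+f_1=x^{2}+x=f_2, $$ so (4.1) is satisfied and $\varphi$ is a retraction (see Lemma 4.1 below); here $A=k[x,\,x^{2}+x]=k[x]$ is a polynomial ring and $B=A\oplus(y-x^{2}-x)B$.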
Conversely, suppose $A=k[f_1, \dots ,f_n]$ is a $k$-subalgebra of $B$ generated by $n$ polynomials $f_1, \dots ,f_n$. We define the $k$-algebra endomorphism $\varphi : B \to B$ by $\varphi(x_i)=f_i$ for each $i=1, \dots ,n$. If the $f_i$'s satisfy (4.1), then $\varphi$ is a retraction. Thus we have the following lemma. \begin{lem} With the same notation as above, $\varphi$ is a retraction if and only if the $f_i$'s satisfy {\rm (4.1)}. \end{lem} Now, we give some examples of retracts of polynomial rings by using Lemma 4.1. In this section, we consider the cases $n=2, 3$ only. We note that some of the results of this section can be generalized. However, we do not give such a generalization since even the low-dimensional case has not been fully understood. We sometimes set $k^{[2]}=k[x,y]$ and $k^{[3]}=k[x,y,z]$. In the following example, we determine the retracts of $B=k^{[n]} \ (n=2,3)$ generated by monomials. Here, we assume that all monomials are monic. \begin{example} {\rm Let $f_1, \dots ,f_n$ be $n$ polynomials of $B=k^{[n]}$ that satisfy {\rm (4.1)} and let $A=k[f_1, \dots ,f_n]$. Assume that $A \ne k$ and every $f_i \ (i=1, \dots ,n)$ is either $0$ or a monomial. Then there exist a positive integer $r$ with $1\leq r\leq n$ and indices $1\leq m_1<\cdots<m_r\leq n$ such that $f_{m_1},\ldots,f_{m_r}$ are non-zero monomials and $f_{j}=0$ for $j\not\in\{m_1,\ldots,m_r\}$. For $1\leq i\leq r$, let $f_{m_i}=x_1^{a_{m_i1}}\cdots x_n^{a_{m_in}}$ be a monomial, where $a_{m_ij}\geq 0$. Then the $r\times r$ matrix $(a_{m_im_j})_{1\leq i,j\leq r}$ is idempotent. Indeed, by (4.1), $a_{m_ij}=0$ when $f_j=0$, thus $f_{m_i}\in k[x_{m_1},\ldots,x_{m_r}]$. Therefore, (4.1) implies that $(a_{m_im_j})_{1\leq i,j\leq r}$ is an idempotent matrix. By simple calculation, the following assertions hold. \begin{enumerate} \item[(1)] If $n=2$, then $f_1$ and $f_2$ are as in the following table, where $m$ is a non-negative integer. \begin{longtable}{cc|c} $f_1$ & $f_2$ & $A=k[f_1,f_2]$ \\ \hline $x$ & $0$ & $k[x]$ \\ $0$ & $y$ & $k[y]$ \\ $x$ & $x^{m}$ & $k[x]$ \\ $x$ & $y$ & $k[x,y]$ \\ $xy^{m}$ & $1$ & $k[xy^{m}]$ \\ $y^{m}$ & $y$ & $k[y]$ \\ $1$ & $x^{m}y$ & $k[x^{m}y]$ \\ \end{longtable} \item[(2)] If $n=3$, then $f_1$, $f_2$ and $f_3$ are as in the following table, where $m$ and $l$ are non-negative integers.
\vspace{1cm} \begin{longtable}{ccc|c} $f_1$ & $f_2$ & $f_3$ & $A=k[f_1,f_2,f_3]$ \\ \hline $x$ & $0$ & $0$ & $k[x]$ \\ $0$ & $y$ & $0$ & $k[y]$ \\ $0$ & $0$ & $z$ & $k[z]$ \\ $x$ & $x^{m}$ & $0$ & $k[x]$ \\ $x$ & $y$ & $0$ & $k[x,y]$ \\ $xy^{m}$ & $1$ & $0$ & $k[xy^{m}]$ \\ $y^{m}$ & $y$ & $0$ & $k[y]$ \\ $1$ & $x^{m}y$ & $0$ & $k[x^{m}y]$ \\ $x$ & $0$ & $x^{m}$ & $k[x]$ \\ $x$ & $0$ & $z$ & $k[x,z]$ \\ $xz^{m}$ & $0$ & $1$ & $k[xz^{m}]$ \\ $z^{m}$ & $0$ & $z$ & $k[z]$ \\ $1$ & $0$ & $x^{m}z$ & $k[x^{m}z]$ \\ $0$ & $y$ & $y^{m}$ & $k[y]$ \\ $0$ & $y$ & $z$ & $k[y,z]$ \\ $0$ & $yz^{m}$ & $1$ & $k[yz^{m}]$ \\ $0$ & $z^{m}$ & $z$ & $k[z]$ \\ $0$ & $1$ & $y^{m}z$ & $k[y^{m}z]$ \\ $x$ & $x^{l}$ & $x^{m}$ & $k[x]$ \\ $y^{l}$ & $y$ & $y^{m}$ & $k[y]$ \\ $z^{l}$ & $z^{m}$ & $z$ & $k[z]$ \\ $x$ & $y$ & $x^{l}y^{m}$ & $k[x,y]$ \\ $y^{l}z^{m}$ & $y$ & $z$ & $k[y,z]$ \\ $x$ & $x^{l}y^{m}$ & $z$ & $k[x,z]$ \\ $x$ & $y$ & $z$ & $k[x,y,z]$ \\ $xy^{l}$ & $1$ & $x^{m}y^{lm}$ & $k[xy^{l}]$ \\ $xz^{l}$ & $x^{m}z^{lm}$ & $1$ & $k[xz^{l}]$ \\ $y^{l}z^{lm}$ & $yz^{m}$ & $1$ & $k[yz^{m}]$ \\ $y^{lm}z^{l}$ & $1$ & $y^{m}z$ & $k[y^{m}z]$ \\ $1$ & $x^{l}y$ & $x^{lm}y^{m}$ & $k[x^{l}y]$ \\ $1$ & $x^{lm}z^{l}$ & $x^{m}z$ & $k[x^{m}z]$ \\ $xy^{l}$ & $1$ & $y^{m}z$ & $k[xy^{l},y^{m}z]$ \\ $xz^{l}$ & $yz^{m}$ & $1$ & $k[xz^{l},yz^{m}]$ \\ $1$ & $x^{l}y$ & $x^{m}z$ & $k[x^{l}y, x^{m}z]$ \\ $xy^{l}z^{m}$ & $1$ & $1$ & $k[xy^{l}z^{m}]$ \\ $1$ & $x^{l}yz^{m}$ & $1$ & $k[x^{l}yz^{m}]$ \\ $1$ & $1$ & $x^{l}y^{m}z$ & $k[x^{l}y^{m}z]$ \\ \end{longtable} \end{enumerate} In these cases, $A$ is a polynomial ring. } \end{example} We give elementary results on sufficient conditions for a retract of $k^{[3]}$ to be a polynomial ring in Propositions 4.4 and 4.6. In their proofs, we use the following. \begin{lem} Let $f_1, f_2 ,f_3$ be three polynomials of $B=k^{[3]} = k[x_1, x_2, x_3]$ and let $A=k[f_1, f_2 ,f_3]$. Assume that $f_1, f_2, f_3$ satisfy {\rm (4.1)} and one of the following conditions. \begin{enumerate} \item[{\rm (1)}] $f_1,f_2,f_3$ are algebraically independent over $k$. \item[{\rm (2)}] There exists $i\in\{1,2,3\}$ such that $f_i\in k$. \item[{\rm (3)}] There exists $i\in\{1,2,3\}$ such that $f_i=x_i$. \end{enumerate} Then $A$ is a polynomial ring. \end{lem} \begin{proof} Suppose that the condition (1) holds. Then $\trdeg_kA=3$, hence $A=B=k^{[3]}$. Suppose that the condition (2) holds. We may assume that $f_3\in k$. Then $A=k[f_1,f_2]$. If $f_1$ and $f_2$ are algebraically independent over $k$, then $\trdeg_k A = 2$ and hence $A\cong k^{[2]}$. Otherwise, since $\trdeg_kA \leq 1$, the assertion follows from \cite[Theorem 3.5]{C77}. Suppose that the condition (3) holds. We may assume that $f_3=x_3$. Let $R=k[x_3]$. Then $B=R[x_1,x_2]\cong R^{[2]}$, $A=R[f_1,f_2]$ and $\varphi$ is a homomorphism of $R$-algebras. Thus, $A$ is an $R$-algebra retract of $B\cong R^{[2]}$. Since $R$ is a UFD, it follows from \cite[Theorem 3.5]{C77} that $A\cong R^{[e]}$ for some $0\leq e\leq 2$. Therefore, $A\cong k^{[e+1]}$. \end{proof} Here we prove the following. \begin{prop} Let $f_1, f_2 ,f_3$ be three polynomials of $B=k^{[3]}$ that satisfy {\rm (4.1)} and let $A=k[f_1, f_2 ,f_3]$. Assume that $A \ne k$ and at least one of $f_1, f_2, f_3$ is a monomial. Then $A$ is a polynomial ring. \end{prop} \begin{proof} We may assume that $f_1$ is a monomial. If at least one of $f_1$, $f_2$, $f_3$ is an element of $k$, then the assertion follows from Lemma 4.3. So we may assume that $f_1$, $f_2$, $f_3$ are non-constant. 
By the assumption, we set $f_1 = x^{p}y^{q}z^{r}$, where $p$,$q$,$r$ are non-negative integers and $(p,q,r) \ne (0,0,0)$. Assume that neither $f_2$ nor $f_3$ is a monomial. If $q \geq 1$, $f_1(f_1,f_2,f_3)$ is not a monomial because $f_2$ is not a monomial. So we have $f_1(f_1,f_2, f_3) \ne f_1$, which is a contradiction. Similarly, we derive a contradiction if $r \geq 1$. So $q=0$ and $r=0$. Then $f_1=x^{p}$ and $f_1(f_1,f_2,f_3)=x^{p^{2}}$. Since $f_1$ satisfies the condition (4.1), we have $p=1$ and $f_1=x$. By Lemma 4.3, $A$ is a polynomial ring. Assume that $f_2$ is a monomial and $f_3$ is not a monomial. We set $f_2 = x^{s}y^{t}z^{u}$, where $s$,$t$,$u$ are non-negative integers and $(s,t,u) \ne (0,0,0)$. By the same argument as above, we have $r=u=0$. Then we have $(f_1,f_2) =(x,x^{m}), (x,y),(y^{m},y)$ by Example 4.2 (1). We know that $A$ is a polynomial ring by Lemma 4.3. \end{proof} In Proposition 4.4, we can determine the $f_1, f_2, f_3$ under some assumptions. See the list of Example 4.2 (2), where every $f_i $ ($i = 1,2,3$) is either $0$ or a monomial. In fact, we have the following. \begin{example} {\rm Let $f_1, \dots ,f_n \in B \setminus k$ be $n$ polynomials of $B=k^{[n]}$ that satisfy {\rm (4.1)} and let $A=k[f_1, \dots ,f_n]$. Assume further that the following conditions hold: \begin{enumerate} \item[(i)] The constant term of $f_i$ is equal to $0$ for each $i=1, \dots ,n$. \item[(ii)] If $f_i \neq 0$, then its leading coefficient with respect to the lexicographical monomial order with $x > y > z$ is equal to $1$. \end{enumerate} In fact, for every retract of $B$, we can take its generator satisfying the conditions (i) and (ii) as above. By simple computation, we have: \begin{enumerate} \item[(1)] In case $n=2$, we assume that $f_1$ is a monomial and $f_2$ is not a monomial. Then $f_1$ and $f_2$ are as in the following table. \begin{longtable}{cc|c} $f_1$ & $f_2$ & $A=k[f_1,f_2]$ \\ \hline $x$ & $f_2 \in k[x]$ & $k[x]$ \\ \end{longtable} \item[ (2)] In case $n=3$, we assume both $f_1$ and $f_2$ are monomials and $f_3$ is not a monomial. Then $f_1$, $f_2$ and $f_3$ are as in the following table, where $m$ is a non-negative integer and $g$ is an element of $B$. \begin{longtable}{ ccc|c} $f_1$ & $f_2$ & $f_3$ & $A=k[f_1,f_2,f_3]$ \\ \hline $x$ & $x^{m}$ & $f_3 \in k[x]$ & $k[x]$ \\ $x$ & $x^{m}$ & $(x^{m}-y)g+z$ & $k[x,f_3]$ \\ $x$ & $y$ & $f_3 \in k[x,y]$ & $k[x,y]$ \\ $y^{m}$ & $y$ & $f_3 \in k[y]$ & $k[y]$ \\ $y^{m}$ & $y$ & $(x-y^{m})g+z$ & $k[y,f_3]$ \\ \end{longtable} \end{enumerate} } \end{example} In Proposition 4.4, we have not determined the $f_1, f_2, f_3$ when only one of them is a monomial. Of course, there are many such examples. Finally, we give the following result. \begin{prop} Let $f_1, f_2 ,f_3$ be three polynomials of $B=k^{[3]} = k[x_1, x_2, x_3]$ that satisfy {\rm (4.1)} and let $A=k[f_1, f_2 ,f_3]$. Assume that $A \ne k$ and $f_1, f_2 ,f_3$ satisfy the conditions {\rm (i)} and {\rm (ii)} in Example {\rm 4.5}. If there exists $i \in \{1, 2 ,3 \}$ such that $f_i$ is a binomial and $x_i$ is a term of $f_i$, then $A$ is a polynomial ring. \end{prop} \begin{proof} We may assume that $f_1$ is a binomial and $x_1$ is a term of $f_1$. We may set $f_1=x_1 + \alpha x_1^{p}x_2^{q}x_3^{r}$, where $\alpha \in k \setminus \{0\}$, $p,q,r \in \Z_{\ge 0}$ and $(p,q,r) \ne (0,0,0)$. If $q=0$ and $r=0$, then $f_1=x_1+\alpha x_1^{p}$. Since $f_1$ is a binomial and the constant term of $f_1$ is equal to $0$ by the condition (i) in Example 4.5, we have $p \ge 2$. 
So we know that $f_1(f_1,f_2,f_3)=f_1+\alpha f_1^{p}$. By (4.1), we have $\alpha f_1^{p} =0$, which is a contradiction. Therefore, we have $q \ne 0$ or $r \ne 0$. Now we know that $f_1(f_1,f_2,f_3)=f_1+\alpha f_1^{p}f_2^{q}f_3^{r}$. By (4.1) again, we have $\alpha f_1^{p}f_2^{q}f_3^{r} =0$. Since $\alpha\ne0$ and $f_1\ne0$, and $q \ne 0$ or $r \ne 0$, we have $f_2=0$ or $f_3=0$. Thus we know that $A$ is a polynomial ring by Lemma 4.3. \end{proof} \begin{thebibliography}{30} \bibitem{AEH72} S. Abhyankar, P. Eakin and W. Heinzer, On the uniqueness of the coefficient ring in a polynomial ring, J. Algebra, \textbf{23} (1972), 310--342. \bibitem{BG15} S. M. Bhatwadekar and N. Gupta, A note on the cancellation property of $k[X,Y]$, J. Algebra Appl., \textbf{14} (2015), 1540007 (5 pages). \bibitem{CDDG21} S. Chakraborty, N. Dasgupta, A. K. Dutta and N. Gupta, Some results on retracts of polynomial rings, J. Algebra, \textbf{567} (2021), 243--268. \bibitem{CGM15} S. Chakraborty, R. V. Gurjar and M. Miyanishi, Factorially closed subrings of commutative rings, Algebra Number Theory, \textbf{9} (2015), 1137--1158. \bibitem{C77} D. L. Costa, Retracts of polynomial rings, J. Algebra, \textbf{44} (1977), 492--502. \bibitem{EZ70} A. Evyatar and A. Zaks, Rings of polynomials, Proc.\ Amer.\ Math.\ Soc., \textbf{25} (1970), 559--562. \bibitem{EMS136} G. Freudenburg, Algebraic Theory of Locally Nilpotent Derivations (second edition), Encyclopaedia of Mathematical Sciences vol.\ 136, Invariant Theory and Algebraic Transformation Groups VII, Springer-Verlag, 2017. \bibitem{G14-1} N. Gupta, On the cancellation problem for the affine space $\A^3$ in characteristic $p$, Invent.\ Math., \textbf{195} (2014), 279--288. \bibitem{G14-2} N. Gupta, On Zariski's cancellation problem in characteristic $p$, Adv.\ Math., \textbf{264} (2014), 296--307. \bibitem{GN23} N. Gupta and T. Nagamine, Retracts of Laurent polynomial rings, preprint (arXiv:2301.12681v3). \bibitem{K75} T. Kambayashi, On the absence of non-trivial separable forms of the affine plane, J. Algebra, \textbf{35} (1975), 449--456. \bibitem{K80} T. Kambayashi, On Fujita's cancellation theorem for the affine plane, J. Fac.\ Sci.\ Univ.\ Tokyo, Sect.\ IA, Math., \textbf{27} (1980), 535--548. \bibitem{K16} H. Kojima, Notes on the kernels of locally finite iterative higher derivations in polynomial rings, Comm.\ Algebra, \textbf{44} (2016), 1924--1930. \bibitem{K23} H. Kojima, Smooth affine $\mathbb{G}_m$-surfaces with finite Picard groups and trivial units, Tokyo J. Math., {\bf 46} (2023), 93--109. \bibitem{Kuroda17} S. Kuroda, A generalization of Nakai's theorem on locally finite iterative higher derivations, Osaka J. Math., \textbf{54} (2017), 335--341. \bibitem{LS18} D. Liu and X. Sun, A class of retracts of polynomial rings, J. Pure Appl.\ Algebra, \textbf{222} (2018), 382--386. \bibitem{LS21} D. Liu and X. Sun, Retracts that are kernels of locally nilpotent derivations, Czechoslovak Math.\ J., \textbf{72 (147)} (2021), 191--199. \bibitem{N19} T. Nagamine, A note on retracts of polynomial rings in three variables, J. Algebra, \textbf{534} (2019), 339--343. \bibitem{RS79} P. Russell and A. Sathaye, On finding and cancelling variables in $k[X,Y,Z]$, J. Algebra, \textbf{57} (1979), 151--166. \bibitem{SY00} V. Spilrain and J.-T. Yu, Polynomial retracts and the Jacobian conjecture, Trans.\ Amer.\ Math.\ Soc., \textbf{352} (2000), 477--484. \bibitem{Z54} O.
Zariski, Interpr\'{e}tations alg\'{e}brico-g\'{e}om\'{e}triques du quatorzi\`{e}me probl\`{e}me de Hilbert, Bull. Sci. Math., {\bf 78} (1954), 155--168. \end{thebibliography} \end{document}
2412.13427v2
http://arxiv.org/abs/2412.13427v2
Spectrality of Moran-type measures with staggered contraction ratios
\documentclass[12pt, reqno]{amsart} \usepackage{} \usepackage{graphicx} \usepackage{subfigure} \usepackage{latexsym} \usepackage{amsmath} \usepackage{amssymb} \usepackage[usenames]{color} \usepackage{soul} \usepackage{pstricks} \usepackage{enumerate} \usepackage{amscd} \usepackage{color} \setlength{\textwidth}{15.0cm} \setlength{\textheight}{22.0cm} \hoffset=-1cm \pagestyle {plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{Def}[theorem]{Definition} \newtheorem{Prop}[theorem]{Proposition} \newtheorem{Lem}[theorem]{Lemma} \newtheorem{Cor}[theorem]{Corollary} \newtheorem{Rem}[theorem]{Remark} \newtheorem{Coj}[theorem]{Conjecture} \newtheorem{Exa}[theorem]{Example} \newtheorem{Main}{Main Theorem} \renewcommand{\theMain}{} \newcommand\Let{\mathrel{\mathop:\!\!=}} \topmargin=0cm \errorcontextlines=0 \numberwithin{equation}{section} \renewcommand{\rm}{\normalshape} \renewcommand{\baselinestretch}{1.1} \newcommand{\E}{{\mathcal E}} \newcommand{\F}{{\mathcal F}} \newcommand{\D}{{\mathcal D}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{\mathcal{Z}} \begin{document} \title{Spectrality of Moran-type measures with staggered contraction ratios} \author{Jun Jason Luo} \address{College of Mathematics and Statistics, Key Laboratory of Nonlinear Analysis and its Applications (Chongqing University), Ministry of Education, Chongqing University, Chongqing, 401331, P.R. China} \email{[email protected]} \author{Lin Mao}\address{College of Mathematics and Statistics, Key Laboratory of Nonlinear Analysis and its Applications (Chongqing University), Ministry of Education, Chongqing University, Chongqing, 401331, P.R. China} \email{[email protected]} \author{Jing-cheng Liu}\address{Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education), School of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, P.R. China} \email{[email protected]} \begin{abstract} Consider a Moran-type iterated function system (IFS) \( \{\phi_{k,d}\}_{d\in D_{2p_k}, k\geq 1} \), where each contraction map is defined as \[ \phi_{k,d}(x) = (-1)^d b_k^{-1}(x + d), \] with integer sequences \( \{b_k\}_{k=1}^\infty \) and \( \{p_k\}_{k=1}^\infty \) satisfying \( b_k \geq 2p_k \geq 2 \), and digit sets \( D_{2p_k} = \{0, 1, \ldots, 2p_k - 1\} \) for all \( k \geq 1 \). We first prove that this IFS uniquely generates a Borel probability measure \( \mu \). Furthermore, under the divisibility constraints \[ p_2 \mid b_2, \quad 2 \mid b_2, \quad \text{and} \quad 2p_k \mid b_k \ \text{for} \ k \geq 3, \] with \(\{b_k\}_{k=1}^\infty\) bounded, we prove that \( \mu \) is a spectral measure, that is, $ L^2(\mu) $ admits an orthogonal basis of exponentials. To fully characterize the spectral properties, we introduce a multi-stage decomposition strategy for spectrums. By imposing the additional hypothesis that all parameters \( p_k \) are even, we establish a complete characterization of \( \mu \)'s spectrality. This result unifies and extends the frameworks proposed in \cite{An-He2014, Deng2022, Wu2024}, providing a generalized criterion for such measures. \end{abstract} \keywords{Moran-type measure; Spectral measure; Fourier transfrom} \thanks{The research was supported by the NNSF of China (No. 12071125), the Natural Science Foundation of Chongqing (No. CSTB2023NSCQ-MSX0553), the Hunan Provincial NSF (No. 2024JJ3023) and the education Department Important Foundation of Hunan province in China (No. 
23A0059).} \subjclass[2010]{Primary 28A80; Secondary 42B10, 42C05} \maketitle \section{Introduction} A Borel set \(\Omega \subset \mathbb{R}^n\) with positive, finite Lebesgue measure is termed a spectral set if the space \(L^2(\Omega)\) admits an orthogonal basis of exponential functions. This concept, central to functional analysis, gained prominence with Fuglede's 1974 conjecture \cite{Fuglede1974}, which posited that \emph{\(\Omega\) is a spectral set if and only if it tiles \(\mathbb{R}^n\) translationally, i.e., there exists \(\mathcal{J} \subset \mathbb{R}^n\) such that \(\{\Omega + t : t \in \mathcal{J}\}\) partitions \(\mathbb{R}^n\).} While disproven in dimensions \(n \geq 3\) by Tao, Kolountzakis and Matolcsi \cite{Tao2004, Matolcsi2005, Kolountzakis-Matolcsi2006-1, Kolountzakis-Matolcsi2006-2}, the conjecture remains unresolved for \(n = 1, 2\). Recent breakthroughs by Lev and Matolcsi \cite{LeMa_2022} established its validity under convexity constraints across all dimensions, revitalizing interest in restricted cases. The development of harmonic analysis and fractal geometry has propelled spectral measures, generalizations of spectral sets, to the forefront of mathematical inquiry. Formally, a compactly supported Borel probability measure \(\mu\) on \(\mathbb{R}^n\) is called a \emph{spectral measure} if there exists a countable subset \(\Lambda \subset \mathbb{R}^n\) (termed its \emph{spectrum}) such that the exponential system $ \{e^{2\pi i\langle\lambda,\cdot \rangle}:\lambda\in\Lambda\}$ forms an orthonormal basis for \(L^2(\mu)\). Identifying such measures, particularly among singular fractal measures, constitutes a pivotal challenge in modern analysis. Seminal work by Jorgensen and Pedersen \cite{Jorgensen-Pedersen1998} revealed that the \(\frac{1}{n}\)-Cantor measure \(\mu_{1/n}\) is spectral if and only if \(n\) is even. Subsequent advancements extended this to self-similar measures \cite{Laba-Wang2002}, where Dai, He, and Lau \cite{Dai-He-Lau2014} characterized that a self-similar measure $\mu_\rho$ with a consecutive digit set $ D_N=\{0,1,\dots, N-1\}$ is a spectral measure if and only if the reciprocal of its contraction ratio $p=\rho^{-1}$ is an integer and $ N\mid p $. Parallel developments resolved long-standing questions on Bernoulli convolutions \cite{Hu-Lau2008,Dai2012}, while self-affine measures and infinite convolutions were systematized through Hadamard triples \cite{An-He-Lau2015, Dutkay-Han-Lai2019, Li-Miao-Wang2022,Li-Miao-Wang2024-1,Li-Miao-Wang2024-2}. Moran-type measures, as natural extensions of self-similar and self-affine measures, exhibit intricate infinite product structures. Strichartz \cite{Strichartz2000} pioneered their spectral analysis in 2000. Later, An and He \cite{An-He2014} concerned the Moran-type measure $ \mu_{\{b_k\},\{D_{p_k}\}} $ generated by a sequence of integers $ \{b_k\}_{k=1}^{\infty} $ with $ b_k \geq 2 $ and a sequence of consecutive digit sets $ \{D_{p_k}\}_{k=1}^{\infty} $ with $ p_k \geq 2 $ by proving sufficiency of divisibility conditions (\(p_k \mid b_k\)) for spectrality. Recent work by An, Li and Zhang \cite{An-Li-Zhang2022} and Deng and Li \cite{Deng2022} established necessity, culminating in a complete characterization. Current research explores generalized Moran-type measures under varied constraints \cite{An-Fu-Lai2019, An-He-Li2015, Deng2023, He-He2017, Liu-Lu-Zhou2023, Liu-Liu-Luo2024, Liu2024, Shi2019, Tang-Yin2018,Wu-Xiao2024}, demonstrating their rich spectral properties. 
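To make the orthogonality underlying these criteria concrete, the following small numerical sketch is included for illustration only. It takes the Jorgensen--Pedersen example in one common normalization, namely the self-similar measure of the maps $x\mapsto x/4$ and $x\mapsto (x+2)/4$, together with the standard candidate spectrum consisting of finite sums of distinct powers of $4$, and evaluates a truncation of $|\widehat{\mu}|$ at differences of spectrum points.
\begin{verbatim}
import itertools
import numpy as np

def mu_hat_abs(xi, depth=60):
    # |mu^(xi)| = prod_{k>=1} |cos(2 pi xi / 4**k)| for the maps x/4, (x+2)/4
    ks = np.arange(1, depth + 1)
    return np.prod(np.abs(np.cos(2 * np.pi * xi / 4.0 ** ks)))

# candidate spectrum: finite sums of distinct powers of 4 (empty sum = 0)
powers = [1, 4, 16, 64]
spectrum = [sum(c) for r in range(len(powers) + 1)
            for c in itertools.combinations(powers, r)]

diffs = {a - b for a in spectrum for b in spectrum if a != b}
print(max(mu_hat_abs(d) for d in diffs))   # ~1e-16: mu^ vanishes on the differences
print(mu_hat_abs(0.0))                     # 1.0
\end{verbatim}
Every nonzero difference of spectrum points annihilates one factor of the product, which is the basic mechanism behind the orthogonality arguments recalled above.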
Notably, all preceding achievements on spectrality of fractal measures fundamentally rely on the uniformity of linear components in their defining contraction maps. A paradigm shift emerged through Wu's recent work \cite{Wu2024}, which constructed the first self-similar spectral measure $\mu_{\pm\rho}$ with staggered contraction ratios $\pm\rho$ and consecutive digit set $D_{2N}$. The study established a sharp characterization: $\mu_{\pm\rho}$ is spectral if and only if the reciprocal of contraction parameter $p = \rho^{-1}$ is an integer and satisfies $2N \mid p$. Building on this conceptual breakthrough, we systematically investigate the spectrality of a novel class of Moran-type measures on $\mathbb{R}$ featuring hierarchically staggered contraction ratios. Our approach diverges from classical methodologies. We first rigorously establish the existence and uniqueness of Moran-type measures through Moran-type iterated function systems, then study their spectral properties by revealing intrinsic arithmetic constraints in non-uniform scaling systems. \begin{Def}\label{def-moran IFS} Let $ \{b_k\}_{k=1}^{\infty},\{n_k\}_{k=1}^{\infty} $ be two sequences of integer numbers with all $ b_k\geq n_k\geq 2 $ and let $\Phi_k=\{\phi_{k,i}\}_{i=0}^{n_k-1}$ be a family of contracting similitudes on $\R$ for each $ k\ge 1 $, where \begin{equation}\label{ifs} \phi_{k,i}(x)=(-1)^i b_k^{-1}(x+i),\quad x\in \R \quad (i=0,1,\dots, n_k-1). \end{equation} We call the sequence $\{\Phi_k\}_{k=1}^{\infty}$ a Moran-type iterated function system (IFS) with staggered contraction ratios. \end{Def} \begin{theorem}\label{thm-main1} Let $ \{\Phi_k\}_{k=1}^{\infty} $ be the Moran-type IFS as in Definition \ref{def-moran IFS}, then the following two statements hold: \begin{enumerate}[(\romannumeral1)] \item There exists a unique sequence of nonempty compact sets $\{E_k\}_{k= 1}^\infty$ in $ \mathbb R $ such that \begin{equation}\label{equi-supp} E_k=\Phi_k(E_{k+1}):=\bigcup_{i=0}^{n_k-1}\phi_{k,i}(E_{k+1}), \quad k\geq1 \end{equation} and \begin{equation}\label{conv-point} \lim_{l\to\infty}|\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(0)-\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(a_{l+1})|=0 \end{equation} uniformly for all sequences $ \{j_l:0\leq j_l\leq n_l-1\}_{l=k}^{\infty} $ and any $ a_{l+1}\in E_{l+1} $. \item There exists a unique sequence of probability measures $\{\mu_k\}_{k= 1}^\infty$ satisfying \begin{equation}\label{equi-mea} \mu_k=\frac{1}{n_k}\mu_{k+1}\circ \Phi_k^{-1}:=\sum_{i=0}^{n_k-1}\frac{1}{n_k}\mu_{k+1}\circ \phi_{k,i}^{-1}, \quad k\geq1, \end{equation} where each $\mu_k$ is supported on $E_k$. \end{enumerate} \end{theorem} The first measure $\mu:=\mu_1$ is called a \emph{Moran-type measure with staggered contraction ratios}, which will be our research object throughout the paper. The supporting set $E_1$ is usually called a \emph{Moran-type set}. The geometric fashion and Hausdorff dimension of Moran-type sets were investigated in detail by Hua \emph{et al.} \cite{Hua-Rao-Wen-Wu2000}. \begin{theorem}\label{thm-main2} Let $\{\Phi_k\}_{k=1}^{\infty}$ be the Moran-type IFS as in Definition \ref{def-moran IFS} with assumption that the sequence $\{b_k\}_{k=1}^\infty$ is bounded and $ 2\mid n_k $ for all $ k\geq1 $. Let $\mu:=\mu_1$ be as in \eqref{equi-mea}. Then $ \mu $ is a spectral measure if $ 2\mid b_2,\ n_2\mid 2b_2 $ and $ n_k\mid b_k $ for $ k\geq 3 $. 
\end{theorem} The necessity of this condition generally fails to hold, as demonstrated by the counterexample in Theorem \ref{Bernoulli situation} (Section \ref{sect.4}). However, under the additional hypothesis that $4$ divides all $n_k$, we give a complete characterization of the spectrality of $ \mu $ in the next theorem. This result unifies and extends the frameworks proposed in \cite{An-He2014, Deng2022, Wu2024}, providing a generalized criterion for such measures. \begin{theorem}\label{thm-main3} Let $\{\Phi_k\}_{k=1}^{\infty}$ be the Moran-type IFS as in Definition \ref{def-moran IFS} with assumption that the sequence $ \{b_k\}_{k=1}^{\infty} $ is bounded and $ 4\mid n_k $ for all $ k\geq1 $. Let $\mu:=\mu_1$ be as in \eqref{equi-mea}. Then $ \mu $ is a spectral measure if and only if $ n_2\mid 2b_2 $ and $ n_k\mid b_k $ for $ k\geq 3 $. \end{theorem} The primary challenges in establishing these theorems stem from two fundamental considerations. First, unlike classical self-similar measures, the Moran-type measure $ \mu $ lacks a convolutional structure, which precludes direct analysis via the framework of infinite convolution measures. Notably, by leveraging the iterative relation \eqref{equi-mea}, we demonstrate that its spectrality can be equivalently characterized through an infinite convolution (Proposition \ref{equivalent}). Second, in addressing the necessity condition of Theorem \ref{thm-main3}, we introduce a multi-stage decomposition strategy for spectrums (Remark \ref{Remark5.2}). This approach enables the extraction of hierarchical constraints by recursively partitioning the spectrum of the spectral measure. A natural extension lies in investigating the spectral consistency across the sequence of measures $\{\mu_k\}_{k=1}^\infty$. For some typical Moran-type measures, Liu \emph{et al.} \cite{Liu-Liu-Luo2024} recently showed that spectrality of $ \mu_1 $ propagates inductively to all $ \mu_k \ (k \geq 1)$. Nevertheless, the general case remains an open question, as the interplay between scaling parameters and spectral inheritance mechanisms resists straightforward generalization. The rest of this paper is organized as follows. Section \ref{sect.2} synthesizes foundational results, including the Fourier transform of measures and some criteria for identifying spectral measures. Section \ref{sect.3} establishes Theorem \ref{thm-main1}, while Sections \ref{sect.4} and \ref{sect.5} develop the proofs of Theorems \ref{thm-main2} and \ref{thm-main3}, respectively. \section{Preliminaries}\label{sect.2} Let $ \mu $ be a Borel probability measure on $ \mathbb{R} $, the Fourier transform of $ \mu $ is defined by \begin{equation*} \widehat{\mu}(t)=\int e^{2\pi itx}d\mu(x),\ t\in\mathbb{R}. \end{equation*} We denote by $ \mathcal{Z}(\widehat{\mu}):=\{t\in \mathbb{R}: \widehat{\mu}(t)=0\}$ the zero set of $ \widehat{\mu} $. Let $ \Lambda\subset\mathbb{R} $ be a countable set, then the set $E(\Lambda) = \{e^{2\pi i\lambda x}:\lambda\in\Lambda\}$ is an orthogonal set of $ L^2(\mu) $ if and only if \begin{equation*} (\Lambda-\Lambda)\setminus\{0\} \subset\mathcal{Z}(\widehat{\mu}). \end{equation*} If this is the case, then $ \Lambda $ is called a \emph{bi-zero set of $ \mu $}. Moreover, if $\Lambda$ is a bi-zero set of $\mu$ but $\Lambda \cup\{\alpha\}$ is not a bi-zero set for any $\alpha\in{\mathbb R}\setminus \Lambda$, then we call $\Lambda$ a \emph{maximal bi-zero set of $\mu$}. The following is a basic criterion for determining $ \Lambda $ to be a bi-zero set or spectrum of $ \mu $. 
\begin{theorem}[\cite{Jorgensen-Pedersen1998}]\label{J-P} Let $ \Lambda $ be a countable subset of $ \mathbb{R} $ and \begin{equation*} Q_{\Lambda, \mu}(t):=\sum_{\lambda \in\Lambda}|\widehat{\mu}(t+\lambda)|^2,\quad t\in\mathbb{R}. \end{equation*} Then \begin{enumerate}[(\romannumeral1)] \item $ \Lambda $ is a bi-zero set of $ \mu $ if and only if $ Q_{\Lambda,\mu}(t)\leq 1 $ for all $ t\in\mathbb{R} $; \item $ \Lambda $ is a spectrum of $ \mu $ if and only if $ Q_{\Lambda, \mu}(t)=1 $ for all $ t\in\mathbb{R} $. \end{enumerate} \end{theorem} \begin{Lem}[\cite{Dai-He-Lau2014}]\label{Dai} Let $ \mu=\nu*\omega $ be a convolution of two measures, and let $ \Lambda $ be a bi-zero set of $ \nu $. Then $ \Lambda $ is also a bi-zero set of $ \mu $, but it cannot be a spectrum of $ \mu $ when $ \omega $ is not a Dirac measure. \end{Lem} \begin{Lem}\label{coefficient equiv} For any $ c\in \mathbb{R}\setminus\{0\} $, the measure $ \mu $ is a spectral measure if and only if $ \mu':=\mu(c\ \cdot) $ is a spectral measure. \end{Lem} \begin{proof} If $ \mu $ is a spectral measure with a spectrum $ \Lambda $, then for any $ t\in\mathbb{R} $, \begin{eqnarray*} Q_{c\Lambda, \mu'}(t)&=&\sum_{\lambda \in\Lambda}|\widehat{\mu'}(t+c\lambda)|^2\\ &=&\sum_{\lambda \in\Lambda}\left|\int e^{2\pi i(t+c\lambda)x}d\mu(cx)\right|^2\\ &=&\sum_{\lambda \in\Lambda}\left|\int e^{2\pi i(\frac{t}{c}+\lambda)y}d\mu(y)\right|^2\\ &=&\sum_{\lambda \in\Lambda}|\widehat{\mu}(\frac{t}{c}+\lambda)|^2\equiv1. \end{eqnarray*} Hence $ \mu' $ is also a spectral measure, with spectrum $c\Lambda$. The converse is proved in the same way. \end{proof} For any $ n\in\mathbb{Z}\setminus\{0\} $, we let $ v_2(n) $ be the greatest integer $k\ge 0$ such that $2^k$ divides $n$ in $\mathbb{Z}$, that is, $ 2^{-k}n\in 2\mathbb{Z}+1 $, and we let $ v_2(0)=\infty $. For any rational number $ \frac{m}{n}\in \mathbb{Q} $ with $ m,n\in\mathbb{Z} $ where $ n\ne0 $, we define $$ v_2\left(\frac{m}{n}\right)=v_2(m)-v_2(n).$$ Recently, Deng and Li \cite{Deng2023} gave a necessary and sufficient condition for the spectrality of the Moran-type Bernoulli convolution of the form \begin{equation*} \mu=\delta_{b_1^{-1}\{0,d_1\}}*\delta_{(b_1b_2)^{-1}\{0,d_2\}}*\cdots*\delta_{(b_1\cdots b_k)^{-1}\{0,d_k\}}*\cdots, \end{equation*} where $ b_k, d_k $ are integers. \begin{theorem}[\cite{Deng2023}]\label{Bernoulli-sn} Let $ \mu $ be as above with $ |b_k|>|d_k| $ for all $ k\geq2 $, and assume further that the sequence $ \{|d_k|\}_{k=1}^{\infty} $ is bounded. Then $ \mu $ is a spectral measure if and only if $ s_i\ne s_j $ for all $ j>i\geq1 $, where \begin{equation*} s_k=v_2\left(\frac{b_1b_2\cdots b_k}{2d_k}\right),\quad k=1,2,\dots. \end{equation*} \end{theorem} \begin{Def} Let $ R $ be an integer satisfying $ |R|>1 $, and $ B,L\subset\mathbb{Z} $ be finite integer sets with the same cardinality $ \# B=\# L\geq 2 $. If the matrix \begin{equation*} \frac{1}{\sqrt{\# B}}\left(e^{-2\pi i R^{-1}b\ell}\right)_{b\in B,\ell\in L} \end{equation*} is unitary, we call $ (R,B,L) $ a Hadamard triple in $ \mathbb{R} $. \end{Def} Hadamard triples are an important tool in studying the spectrality of measures, as they are closely related to spectra of discrete measures. Let $ A $ be a finite subset of $ \mathbb{R} $. We define a discrete measure on $A$ by \begin{equation*} \delta_A=\frac{1}{\# A}\sum_{a\in A}\delta_a, \end{equation*} where $ \delta_a $ is the Dirac measure at $ a $. Moreover, we also define the mask polynomial of the set $ A $ by \begin{equation*} m_A(x)=\frac{1}{\# A}\sum_{a\in A}e^{2\pi iax},\ x\in\mathbb{R}.
\end{equation*} The mask polynomial is obviously equal to the Fourier transform of the discrete measure $ \delta_A $. Trivially, $ (R,B,L) $ is a Hadamard triple if and only if $ L $ is a spectrum of the measure $ \delta_{R^{-1}B}$ \cite{Dutkay-Han-Lai2019}. \begin{Lem}[\cite{Li-Miao-Wang2024-1}]\label{finite Hadamard} If $ \{(N_j,B_j,L_j)\}_{j=1}^{n} $ are finitely many Hadamard triples in $ \mathbb{R} $, let \begin{equation*} N=N_1N_2\cdots N_n, \quad B=(N_2\cdots N_n)B_1+\cdots+N_nB_{n-1}+B_{n} \end{equation*} and \begin{equation*} L=L_1+N_1L_2+\cdots+(N_1N_2\cdots N_{n-1})L_n, \end{equation*} then $ (N,B,L) $ is a Hadamard triple. \end{Lem} Let $ \{(N_k,B_k,L_k)\}_{k=1}^{\infty} $ be a sequence of Hadamard triples in $\mathbb{R}$. The infinite convolution generated by the Hadamard triples is defined by \begin{equation}\label{def-infinite convolution} \mu=\delta_{N_1^{-1}B_1}*\cdots*\delta_{(N_1\cdots N_k)^{-1}B_k}*\cdots \end{equation} which is usually represented as a convolution of two measures $$ \mu=\mu_k*\mu_{>k}$$ where $$\mu_k=\delta_{N_1^{-1}B_1}*\cdots*\delta_{(N_1\cdots N_k)^{-1}B_k}$$ and $$\mu_{>k} = \delta_{(N_1\cdots N_{k+1})^{-1}B_{k+1}}*\delta_{(N_1\cdots N_{k+1}N_{k+2})^{-1}B_{k+2}}*\cdots.$$ Under the assumption that $ B_k \subset\{0, 1,\dots, N_k-1\} $ for all $k\geq 1 $, An \emph{et al.} \cite{An-Fu-Lai2019} found a sufficient condition for the spectrality of the infinite convolution $\mu$. \begin{Lem}[\cite{An-Fu-Lai2019}]\label{An} Suppose that $ \{(N_k,B_k,L_k)\}_{k=1}^{\infty} $ is a sequence of Hadamard triples with $ B_k \subset\{0, 1,\dots, N_k-1\} $ for all $k\geq 1 $, and that \begin{equation*} \liminf_{k\to\infty}\#B_k<\infty. \end{equation*} Then the infinite convolution $\mu$ as in \eqref{def-infinite convolution} is a spectral measure and it always admits a spectrum in $\mathbb{Z} $. \end{Lem} The \emph{integral periodic zero set} of a Borel probability measure $\mu$ on $\R$ is defined to be the set \begin{equation*} Z(\mu)=\{\xi\in\mathbb{R}:\widehat{\mu}(\xi+k)=0\text{ for all } k\in\mathbb{Z}\}. \end{equation*} Let $\mathcal{P}(E)$ be the set of all Borel probability measures on $E\subset \mathbb{R}$. A family of measures $ \Psi\subset\mathcal{P}(E) $ is called an \emph{admissible family} if $ Z(\mu)=\emptyset $ for every $ \mu\in \text{cl}(\Psi) $, the closure of $ \Psi $ with respect to the weak topology on $ \mathcal{P}(E) $. We say that $ \Psi $ is \emph{tight} if for each $ \varepsilon>0 $, there exists a compact subset $ K\subset\mathbb{R} $ such that $\inf_{\mu\in\Psi}\mu(K)>1-\varepsilon.$ \begin{Lem}[\cite{An-Fu-Lai2019}]\label{empty} Let $ \mu\in\mathcal{P}([0, 1]) $. Then $ Z(\mu)\ne \emptyset $ if and only if $ \mu=\frac{1}{2}(\delta_0+\delta_1) $. \end{Lem} Let $\mu=\mu_k*\mu_{>k}$ be as in \eqref{def-infinite convolution}. We define \begin{equation*} \omega_{>k}(\cdot)=\mu_{>k}\left(\frac{\cdot}{N_1\cdots N_k}\right). \end{equation*} That is, $$\omega_{>k}=\delta_{N_{k+1}^{-1}B_{k+1}}*\delta_{(N_{k+1}N_{k+2})^{-1}B_{k+2}}*\cdots.$$ \begin{theorem}[\cite{Li-Miao-Wang2024-1}]\label{Li} Let $ \{(N_k,B_k,L_k)\}_{k=1}^{\infty} $ be a sequence of Hadamard triples in $\mathbb{R}$ and suppose that the infinite convolution measure $ \mu $ defined by \eqref{def-infinite convolution} exists. If there exists a subsequence $ \{\omega_{>n_j}\} $ which is tight and admissible, then $ \mu $ is a spectral measure with a spectrum in $ \mathbb{Z} $.
\end{theorem} \begin{theorem}[\cite{Li-Miao-Wang2024-2}]\label{weak-conv} Let $ \{(N_k,B_k,L_k)\}_{k=1}^{\infty} $ be a sequence of Hadamard triples in $\mathbb{R}$ and suppose that the infinite convolution measure $ \mu $ defined by \eqref{def-infinite convolution} exists. If there exists a subsequence $ \{\omega_{>n_j}\} $ which converges weakly to $ \omega $, and $ Z(\omega)=\emptyset $, then $ \mu $ is a spectral measure with a spectrum in $ \mathbb{Z} $. \end{theorem} If the sequence of Hadamard triples $ \{(N_k,B_k,L_k) \}_{k=1}^{\infty} $ has only a finite number of distinct elements, then the following consequences are clear. \begin{Lem}[\cite{Li-Miao-Wang2024-1}]\label{cl} Let $ \{(N_j,B_j,L_j)\}_{j=1}^{m} $ be finitely many Hadamard triples, and let $ \Sigma=\{1,2,\dots,m\}^{\mathbb{N}} $. For $ \sigma=\sigma_1 \sigma_2\cdots\in\Sigma $, define \begin{equation*} \mu_{\sigma}=\delta_{N_{\sigma_1}^{-1}B_{\sigma_1}}*\cdots*\delta_{(N_{\sigma_1}\cdots N_{\sigma_k})^{-1}B_{\sigma_k}}*\cdots. \end{equation*} Let $ \Psi=\{\mu_{\sigma}:\sigma\in\Sigma\} $. Then we have $ \text{cl}(\Psi)=\Psi $. \end{Lem} \begin{Lem}[\cite{Li-Miao-Wang2024-2}]\label{finite-gcd} Suppose that $ \{(N_k,B_k,L_k)\}_{k=1}^{\infty} $ is chosen from a finite set of Hadamard triples. If for each $ k\geq1 $, \begin{equation*} \gcd\left(\bigcup_{j=k}^{\infty}(B_j-B_j)\right)=1, \end{equation*} then $ Z(\mu)=\emptyset $, where $\mu$ is as in \eqref{def-infinite convolution}. \end{Lem} The final useful lemma is due to Deng and Li \cite{Deng2022}. \begin{Lem}[\cite{Deng2022}]\label{combined sum} Let $ p_{i,j}$ be positive numbers such that $ \sum_{j=1}^{n}p_{i,j}=1 \ (i=1,2,\dots,m) $ and $x_{i,j}\geq 0$ such that $ \sum_{i=1}^{m}\max_{1\leq j\leq n} \{x_{i,j}\} \leq1 $. Then $ \sum_{j=1}^{n}\sum_{i=1}^{m}p_{i,j}x_{i,j}=1 $ if and only if $ \sum_{i=1}^{m}x_{i,1}=1 $ and $ x_{i,1}=x_{i,2}=\cdots=x_{i,n} $ for $ 1\leq i\leq m $. \end{Lem} \section{Existence and uniqueness of Moran-type measures}\label{sect.3} In this section, we establish the theory of Moran-type IFS and show the existence and uniqueness of the Moran-type measure by proving Theorem \ref{thm-main1}. \begin{proof}[Proof of Theorem \ref{thm-main1}] \quad (\romannumeral1) Recall that $\Phi_k=\{\phi_{k,i}\}_{i=0}^{n_k-1}$ is a family of contracting similitudes on $\R$ with $ b_k\geq n_k\geq2 $ for all $k\ge 1$. For any $l\ge k$, $m\ge 1$ and for all sequences $ \{j_i:0\leq j_i\leq n_i-1\}_{i=k}^{\infty} $, we have \begin{eqnarray*} & &|\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(0)-\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l+m,j_{l+m}}(0)|\\ &=&b_k^{-1}b_{k+1}^{-1}\cdots b_l^{-1}\cdot\left|\phi_{l+1,j_{l+1}}\circ\cdots\circ \phi_{l+m,j_{l+m}}(0)\right|\\ &=&b_k^{-1}b_{k+1}^{-1}\cdots b_l^{-1}\cdot\left|\sum_{i=1}^m(-1)^{j_{l+1}+\cdots+j_{l+i}}b_{l+1}^{-1}\cdots b_{l+i}^{-1}\cdot j_{l+i}\right|\\ &\leq&b_k^{-1}b_{k+1}^{-1}\cdots b_l^{-1}\cdot\sum_{i=1}^m b_{l+1}^{-1}\cdots b_{l+i}^{-1}\cdot j_{l+i}\\ &\leq&2^{k-l+2}. \end{eqnarray*} This implies \begin{equation}\label{conv} \lim_{l\to\infty}\ \sup_{\substack{0\leq j_i\leq n_i-1 \\ k\le i\le l+m, m\geq 1}}|\phi_{k,j_k}\circ \cdots\circ \phi_{l,j_l}(0)-\phi_{k,j_k}\circ\cdots\circ \phi_{l+m,j_{l+m}}(0)|=0. \end{equation} Moreover, \begin{eqnarray}\label{seq-bound} \begin{aligned} |\phi_{k,j_k}\circ \cdots\circ \phi_{l,j_l}(0)| &\le b_k^{-1} \cdot j_k+\cdots+b_k^{-1}\cdots b_l^{-1}\cdot j_l\\ &< 1+\frac{1}{2}+\cdots+\frac{1}{2^{l-k}}<2.
\end{aligned} \end{eqnarray} Thus the sequence $ \{\phi_{k,j_k}\circ\cdots\circ \phi_{l,j_l}(0)\}_{l=k}^{\infty} $ is uniformly bounded in $[-2,2]$. Together with \eqref{conv}, we can get that the sequence $ \{\phi_{k,j_k}\circ\cdots\circ \phi_{l,j_l}(0)\}_{l=k}^{\infty} $ converges uniformly for all sequences $ \{j_l:0\leq j_l\leq n_l-1\}_{l=k}^{\infty} $. For each $k\ge 1$, define \begin{equation}\label{def-set} E_k=\left\{\lim_{l\to\infty}\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_{l}}(0): 0\leq j_i\leq n_i-1, k\le i\le l\right\}. \end{equation} Obviously, $ \{E_k\}_{k=1}^{\infty} $ satisfies \eqref{equi-supp}. We just need to show the compactness of $ E_k $. From \eqref{seq-bound}, it follows that $ E_k\subset [-2,2] $ for each $k\ge 1$, hence $\{E_k\}_{k=1}^{\infty}$ is uniformly bounded. Let $ \{x_n\}_{n=1}^{\infty} $ be a Cauchy sequence of $ E_k $, then for every $x_n$, there is a sequence $ \{j_{n,i}\}_{i=k}^{\infty} $ such that \begin{equation*} x_n=\lim_{l\to\infty}\phi_{k,j_{n,k}}\circ\cdots\circ \phi_{l,j_{n,l}}(0). \end{equation*} Since $ 0\leq j_{n,k}\leq n_k-1 $, there exists $ \eta_k $ such that the index set $ \{n:j_{n,k}=\eta_k\} $ is infinite. Repeating this process, there are $\eta_{k+1}, \eta_{k+2}, \dots,\eta_l,\dots $ such that the index set $ \{n: j_{n,i}=\eta_i, i=k,\dots,l\} $ is infinite for all $ l\geq k $. Let \begin{equation*} x=\lim_{l\to\infty}\phi_{k,\eta_k}\circ \phi_{k+1,\eta_{k+1}}\circ\cdots\circ \phi_{l,\eta_l}(0). \end{equation*} By \eqref{conv}, for any $ \varepsilon>0 $, there is an integer $ N>k $ such that for any $ m,n\geq N$, \begin{equation}\label{bounded} |\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{m,j_m}(0)-\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{n,j_{n}}(0)|<\varepsilon. \end{equation} That is, $ E_k $ is contained in the $ \varepsilon $-neighborhood of the set $ \{\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{N,j_N}(0):0\leq j_i\leq n_i-1,i=k,\dots,N\} $. Since $ \{x_n\}_{n=1}^\infty $ is a Cauchy sequence in $ E_k $, there exists $ N'\geq N $ such that $ |x_n-x_m|<\varepsilon $ for all $ m,n>N' $. Choose $ n>N' $ such that $ j_{n,i}=\eta_i $ for $ i=k,k+1,\dots,N $. Then for any $ l>N' $, by \eqref{bounded}, we have \begin{eqnarray*} |x-x_l|&\leq&|x-x_n|+|x_n-x_l|\\ &\leq&|x-x_n|+\varepsilon\\ &\leq&|x-\phi_{k,\eta_k}\circ \phi_{k+1,\eta_{k+1}}\circ\cdots\circ \phi_{N,\eta_{N}}(0)|\\ & &+|\phi_{k,\eta_k}\circ \phi_{k+1,\eta_{k+1}}\circ\cdots\circ \phi_{N,\eta_N}(0)-x_n|+\varepsilon\\ &\leq&3\varepsilon. \end{eqnarray*} Hence $ \lim\limits_{n\to\infty}x_n= x \in E_k $, and $ E_k $ is compact. Moreover, the above argument also shows that $$ \lim_{l\to\infty}\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(E_{l+1})= \left\{\lim_{l\to\infty}\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_{l}}(0)\right\} $$ uniformly for all sequences $ \{j_i:0\leq j_l\leq n_i-1\}_{i=k}^{\infty} $. Thus \eqref{conv-point} follows and the existence is proved. Next we prove the uniqueness. Suppose $ \{F_k\}_{k=1}^{\infty} $ is another sequence of compact sets on $ \mathbb{R} $ satisfying \eqref{equi-supp}. Let $ x\in F_k $. \eqref{equi-supp} shows that there exist $ x_i\in F_i $ and $ 0\leq j_i\leq n_i-1 $ for $ i=k+1,k+2,\dots $ such that \begin{equation*} x=\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(x_{l+1}), \quad l=k,k+1,\dots. 
\end{equation*} By \eqref{conv-point}, $ x=\lim\limits_{l\to\infty}\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{l,j_l}(0) $. Then $ x\in E_k $. Hence $ F_k\subset E_k $. Similarly we can show $ E_k\subset F_k $. Therefore $ F_k=E_k$ for all $k\ge 1$, and the uniqueness follows. (\romannumeral2) For each $k\ge 1$, we denote the symbolic space starting from the $k$-th level by $$\Sigma_k:=\{j_k \cdots j_{i}\cdots:0\leq j_i\leq n_i-1, \ i\geq k\}$$ and the corresponding cylinder sets of $\Sigma_k$ by $$[j_k\cdots j_{i}]:=\{j_k \cdots j_i\sigma:\sigma\in\Sigma_{i+1}\},\quad\text{where} \ i\ge k.$$ Define a metric $\rho_k$ on $\Sigma_k$ by $$\rho_k(j_k\cdots j_{i}\cdots, j'_k\cdots j'_{i}\cdots)=2^{-\min\{i:j_i\ne j'_i\}}.$$ Then $(\Sigma_k,\rho_k) $ is a compact metric space, and the cylinder sets $[j_k \cdots j_i]$ form a basis of open sets of the space. We put a mass on every cylinder set of $\Sigma_k$ as follows: \begin{equation}\label{index-mea} \omega_k([j_k \cdots j_{i}])=n_k^{-1} \cdots n_i^{-1}, \quad i\ge k. \end{equation} Then $ \omega_k $ extends to a unique Borel probability measure on $ \Sigma_k $. Note that \eqref{def-set} defines a continuous map $ \pi_k:\Sigma_k\to E_k $. Hence $ \mu_k:=\omega_k\circ\pi_k^{-1} $ is a Borel probability measure on $ E_k $. For any Borel subset $ A\subset E_k $, by \eqref{index-mea}, \begin{eqnarray*} \mu_k(A)&=&\omega_k\circ \pi_k^{-1}(A)\\ &=&\omega_k\left(\{j_k j_{k+1}\cdots j_{n}\cdots:\lim_{n\to\infty}\phi_{k,j_k}\circ \phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{n,j_n}(0)\in A\}\right)\\ &=&\sum_{j_k=0}^{n_k-1}\frac{1}{n_k}\omega_{k+1}\left(\{j_{k+1}\cdots j_{n}\cdots:\lim_{n\to\infty}\phi_{k+1,j_{k+1}}\circ\cdots\circ \phi_{n,j_n}(0)\in \phi_{k,j_k}^{-1}(A)\}\right)\\ &=&\sum_{i=0}^{n_k-1}\frac{1}{n_k}\mu_{k+1}\circ\phi_{k,i}^{-1}(A). \end{eqnarray*} Hence $ \mu_k $ satisfies \eqref{equi-mea} and the support of $ \mu_k $ is $ E_k $. Finally, we show the uniqueness of the sequence of measures $\{\mu_k \}_{k=1}^\infty$. Let $ \mathcal{P}([-2,2]) $ be the set of all Borel probability measures on $[-2,2]$, and equip it with the dual Lipschitz metric \begin{equation*} L(\nu,\nu')=\sup_{\text{Lip}(g)\leq 1} \left|\int gd\nu-\int gd\nu'\right|, \end{equation*} where $ \text{Lip}(g)=\sup\limits_{x\ne y}\dfrac{|g(x)-g(y)|}{|x-y|}$ is the Lipschitz constant of $ g $. Then $(\mathcal{P}([-2,2]), L)$ is a compact metric space \cite{BP-2017}. Let $ g:[-2,2]\to\mathbb{R} $ be a function with $\text{Lip}(g)\leq 1 $, and let \begin{equation}\label{eq-h-function} h=\sum_{i=0}^{n_k-1}\frac{1}{n_k}g\circ \phi_{k,i}. \end{equation} Then \begin{eqnarray*} |h(x)-h(y)|&\leq&\sum_{i=0}^{n_k-1}\frac{1}{n_k}|g\circ \phi_{k,i}(x)-g\circ \phi_{k,i}(y)|\\ &\leq&\sum_{i=0}^{n_k-1}\frac{1}{n_k}\cdot b_k^{-1} \cdot |x-y| \\ &=& b_k^{-1} \cdot |x-y|. \end{eqnarray*} That is, $\text{Lip} (h)\le b_k^{-1}<1$. Suppose that $ \{\mu'_k\}_{k=1}^{\infty} $ is another sequence of measures satisfying \eqref{equi-mea}. By \eqref{eq-h-function}, we have \begin{eqnarray*} \left|\int gd\mu_k-\int gd\mu'_k\right| &=&\left|\int h d\mu_{k+1}-\int h d\mu'_{k+1}\right|\\ &\leq& \text{Lip}(h)L(\mu_{k+1},\mu'_{k+1})\\ &\leq&b_k^{-1} L(\mu_{k+1},\mu'_{k+1}). \end{eqnarray*} Hence $L(\mu_k,\mu'_k)\leq b_k^{-1} L(\mu_{k+1},\mu'_{k+1})\leq\cdots\leq (b_k b_{k+1}\cdots b_{k+n-1})^{-1} L(\mu_{k+n},\mu'_{k+n})\leq 2^{-n} L(\mu_{k+n},\mu'_{k+n})$ for any $n\ge 1$. Since $L$ is bounded on $\mathcal{P}([-2,2])$, letting $ n\to \infty $ gives $ L(\mu_k,\mu'_k)=0 $.
Therefore, $ \mu_k\equiv\mu'_k $, proving the uniqueness. \end{proof} \section{Proof of Theorem \ref{thm-main2}} \label{sect.4} In this section, we study the spectrality of the Moran-type measure $\mu:=\mu_1$ defined in \eqref{equi-mea}. Throughout the section, we assume that $n_k$'s are all even numbers, i.e., $n_k=2p_k$ where $p_k\in {\mathbb N}$ for $k\ge 1$. The sequence of measures $\{\mu_k\}_{k=1}^\infty$ can be reformulated as \begin{equation}\label{def-mu_k} \mu_k=\sum_{j=0}^{2p_k-1}\frac{1}{2p_k}\mu_{k+1}\circ \phi_{k,j}^{-1}, \quad k\geq1. \end{equation} \begin{Lem}\label{lem-Fourier transform} Let $ \mu:=\mu_1 $ be the Moran-type measure defined by the above, then its Fourier transform is $$\widehat{\mu}(t)=e^{-b_1^{-1}\pi ti}\prod_{k=1}^{\infty} f_k\left((b_k\cdots b_1)^{-1}t\right),\quad t\in \mathbb{R},$$ where $ f_k(t)=\frac{1}{p_k}\sum_{j=0}^{p_k-1}\cos(4j+1-b_{k+1}^{-1})\pi t $. \end{Lem} \begin{proof} From \eqref{def-mu_k}, the Fourier transform of $\mu $ has the expression \begin{eqnarray*} \widehat{\mu}_{k}(t)&=&\sum_{j=0}^{2p_k-1}\frac{1}{2p_k}\widehat{\mu_{k+1}\circ \phi_{k,j}^{-1}}(t)\\ &=&\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}\int e^{2\pi itx}d\mu_{k+1}\circ\phi_{k,j}^{-1}(x)\\ &=&\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}\int e^{2\pi it (-1)^j b_k^{-1}(x+j)}d\mu_{k+1}(x)\\ &=&\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}e^{2\pi it (-1)^j b_k^{-1}j}\int e^{2\pi it (-1)^j b_k^{-1}x}d\mu_{k+1}(x)\\ &=&\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}e^{2\pi it (-1)^j b_k^{-1}j}\widehat{\mu}_{k+1}((-1)^j b_k^{-1}t),\quad t\in\mathbb{R}. \end{eqnarray*} By considering the real and imaginary parts, denoted by `Re' and `Im', respectively, we have $$\widehat{\mu}_{k+1}((-1)^j b_k^{-1}t)=\text{Re } \widehat{\mu}_{k+1}(b_k^{-1}t)+i(-1)^j \text{Im } \widehat{\mu}_{k+1}(b_k^{-1}t).$$ Hence \begin{equation}\label{eq-iteration of transform} \left(\begin{array}{cc} \text{Re }\widehat{\mu}_k(t) \\ \text{Im }\widehat{\mu}_k(t) \end{array} \right)=M_k(b_k^{-1}t)\left(\begin{array}{ccc} \text{Re }\widehat{\mu}_{k+1}(b_k^{-1}t) \\ \text{Im }\widehat{\mu}_{k+1}(b_k^{-1}t) \end{array} \right) \end{equation} where the function matrix $ M_k(t)=\left(\begin{array}{cc}a_{k,1}(t) & a_{k,2}(t)\\ a_{k,3}(t) & a_{k,4}(t)\end{array}\right) $ is given by \begin{eqnarray*} &&a_{k,1}(t) =\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}\cos2\pi tj,\qquad \quad \ a_{k,2}(t)=-\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}\sin2\pi tj, \\ &&a_{k,3}(t) =\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}(-1)^j\sin2\pi tj,\quad a_{k,4}(t)=\frac{1}{2p_k}\sum_{j=0}^{2p_k-1}(-1)^j\cos2\pi tj. \end{eqnarray*} By iterating \eqref{eq-iteration of transform}, one can get \begin{eqnarray*} && \left(\begin{array}{cc} \text{Re }\widehat{\mu}_1(t) \\ \text{Im }\widehat{\mu}_1(t) \end{array} \right) \\ &=& M_1(b_1^{-1}t)M_2((b_2 b_1)^{-1}t)\cdots M_k((b_k\cdots b_2 b_1)^{-1}t)\left(\begin{array}{ccc} \text{Re }\widehat{\mu}_{k+1}((b_k\cdots b_2 b_1)^{-1}t) \\ \text{Im }\widehat{\mu}_{k+1}((b_k\cdots b_2 b_1)^{-1}t) \end{array} \right) \\ &=&\lim_{k\to\infty}M_1(b_1^{-1}t)M_2((b_2 b_1)^{-1}t)\cdots M_k((b_k\cdots b_2 b_1)^{-1}t)\left(\begin{array}{ccc} 1 \\ 0 \end{array} \right). \end{eqnarray*} A simple calculation yields that $ \det M_k(t)\equiv0 $ for $ t\in\mathbb{R}, k\ge 1$. 
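This identity can also be confirmed numerically; the following sketch is a sanity check only, not part of the argument, and assumes NumPy is available. It samples the entries $a_{k,1},\dots,a_{k,4}$ for several values of $p_k$ and $t$ and evaluates the determinant.
\begin{verbatim}
import numpy as np

def M(p, t):
    # the 2x2 matrix M_k(t) for n_k = 2p, built from a_{k,1}, ..., a_{k,4}
    j = np.arange(2 * p)
    c, s = np.cos(2 * np.pi * t * j), np.sin(2 * np.pi * t * j)
    sign = (-1.0) ** j
    return np.array([[c.mean(), -s.mean()],
                     [(sign * s).mean(), (sign * c).mean()]])

rng = np.random.default_rng(0)
for p in (1, 2, 3, 5):
    for t in rng.uniform(-10.0, 10.0, size=4):
        assert abs(np.linalg.det(M(p, t))) < 1e-12
print("det M_k(t) vanishes for all sampled p and t")
\end{verbatim}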
Thus the function matrix $ M_k(t) $ can be expressed as $$M_k(t)=\alpha(t)^T\beta_k(t),$$ where $$\alpha(t)=(\cos\pi t,-\sin\pi t)$$ and $$\beta_k(t)=\left(\frac{1}{p_k}\sum_{j=0}^{p_k-1}\cos(4j+1)\pi t, \ -\frac{1}{p_k}\sum_{j=0}^{p_k-1}\sin(4j+1)\pi t\right).$$ On the other hand, one can check that \begin{eqnarray*} \beta_k(t)\alpha(b_{k+1}^{-1}t)^T&=&\frac{1}{p_k}\sum_{j=0}^{p_k-1}\left(\cos(4j+1)\pi t \cos b_{k+1}^{-1}\pi t+\sin(4j+1)\pi t \sin b_{k+1}^{-1}\pi t\right) \\ &=&\frac{1}{p_k}\sum_{j=0}^{p_k-1}\cos(4j+1-b_{k+1}^{-1})\pi t \\ &:=& f_k(t). \end{eqnarray*} Since \begin{eqnarray*} \sum_{k=1}^{\infty}\left(1-f_k((b_k\cdots b_1)^{-1}t)\right)&=&\sum_{k=1}^{\infty}\frac{2}{p_k}\sum_{j=0}^{p_k-1}\sin^2\frac{(4j+1-b_{k+1}^{-1})(b_k\cdots b_1)^{-1}t}{2}\\ &\leq&\sum_{k=1}^{\infty}\frac{2}{p_k}\sum_{j=0}^{p_k-1}\left(\frac{(4j+1-b_{k+1}^{-1})(b_k\cdots b_1)^{-1}t}{2}\right)^2\\ &\leq&\sum_{k=1}^{\infty}\frac{2}{p_k}\sum_{j=0}^{p_k-1}\left(2p_k(b_k\cdots b_1)^{-1}t\right)^2\\ &\leq&\sum_{k=1}^{\infty}2^{3-2k}t^2<\infty, \end{eqnarray*} the infinite product $ \prod_{k=1}^{\infty}f_k\left((b_k\cdots b_1)^{-1}t\right) $ converges uniformly on each compact subset of $ \mathbb{R} $. From the above discussion, it follows that \begin{small} \begin{eqnarray*} && \left(\begin{array}{cc} \text{Re }\widehat{\mu}_1(t) \\ \text{Im }\widehat{\mu}_1(t) \end{array} \right) \\ &=& \lim_{k\to\infty}M_1(b_1^{-1}t)M_2((b_2 b_1)^{-1}t)\cdots M_k((b_k\cdots b_2 b_1)^{-1}t)\left(\begin{array}{ccc} 1 \\ 0 \end{array} \right) \\ &=&\lim_{k\to\infty}\alpha(b_1^{-1}t)^T\beta_1(b_1^{-1}t)\alpha((b_2 b_1)^{-1}t)^T\beta_2((b_2 b_1)^{-1}t)\cdots \alpha((b_k\cdots b_1)^{-1}t)^T\beta_k((b_k\cdots b_1)^{-1}t)\left(\begin{array}{ccc} 1 \\ 0 \end{array} \right) \\ &=&\alpha(b_1^{-1}t)^T\lim_{k\to\infty}\left(\prod_{n=1}^{k-1}\beta_n((b_n\cdots b_1)^{-1}t)\alpha((b_{n+1}\cdots b_1)^{-1}t)^T\right) \beta_{k}((b_k\cdots b_1)^{-1}t)\left(\begin{array}{ccc} 1 \\ 0 \end{array} \right) \\ &=&\alpha(b_1^{-1}t)^T\prod_{n=1}^{\infty}f_n((b_n\cdots b_1)^{-1}t). \end{eqnarray*} \end{small} Hence $ \widehat{\mu}(t)=\widehat{\mu}_1(t)=e^{-b_1^{-1}\pi ti}\prod_{n=1}^{\infty} f_n((b_n\cdots b_1)^{-1}t) $ as desired. \end{proof} Let $D_n$ denote the consecutive digit set $\{0,1,\dots, n-1\}$ for $n\ge 1$, and let \begin{equation}\label{eq-D_k} \mathcal{D}_k=D_{p_k}\oplus(p_k-\frac{1+b_{k+1}^{-1}}{2})D_2. \end{equation} Define an infinite convolution measure $ \nu:=\nu_{\{b_k\},\{\mathcal{D}_k\}}$ by \begin{equation}\label{eq-infinite convolution-equivalent} \nu=\delta_{b_1^{-1}\mathcal{D}_1}*\delta_{(b_1 b_2)^{-1}\mathcal{D}_2}*\cdots*\delta_{(b_1 \cdots b_k)^{-1}\mathcal{D}_k}*\cdots. \end{equation} We shall show that the measure $ \nu $ has the same spectrality as the Moran-type measure $ \mu:=\mu_1 $ defined in \eqref{def-mu_k}. \begin{Prop}\label{equivalent} The Moran-type measure $ \mu:=\mu_1 $ as in \eqref{def-mu_k} is a spectral measure if and only if $ \nu $ as in \eqref{eq-infinite convolution-equivalent} is a spectral measure. 
\end{Prop} \begin{proof} By using the notation in Lemma \ref{lem-Fourier transform}, we notice that $$f_k(t)=\frac{1}{p_k}\text{Re }\left(e^{(1-b_{k+1}^{-1})\pi it}\sum_{j=0}^{p_k-1}e^{4j\pi it}\right),\quad t\in\mathbb{R}.$$ If $ t\notin \frac{1}{2}\mathbb{Z} $, then $$f_k(t)=\frac{\sin2p_k\pi t}{p_k \sin2\pi t}\cos(2p_k-1-b_{k+1}^{-1})\pi t.$$ Since $ \delta_{\mathcal{D}_k}=\delta_{D_{p_k}}*\delta_{(p_k-\frac{1+b_{k+1}^{-1}}{2})D_2} $, the mask polynomial of $ \mathcal{D}_k $ can be expressed as \begin{eqnarray}\label{eq-mask polyn.} m_{\mathcal{D}_k}(t)&=&m_{D_{p_k}}(t)\cdot m_{(p_k-\frac{1+b_{k+1}^{-1}}{2})D_2}(t) \nonumber\\ &=&\left(\frac{1}{p_k}\sum_{j=0}^{p_k-1}e^{2\pi i tj}\right)\left(\frac{1}{2}+\frac{1}{2}e^{2\pi i(p_k-\frac{1+b_{k+1}^{-1}}{2})t}\right) \nonumber\\ &=&\frac{\sin p_k\pi t}{p_k \sin\pi t}\cos(p_k-\frac{1+b_{k+1}^{-1}}{2})\pi t\cdot e^{(2p_k-\frac{3+b_{k+1}^{-1}}{2})\pi it} \\ &=&f_k(t/2)e^{(2p_k-\frac{3+b_{k+1}^{-1}}{2})\pi it},\quad t\notin\mathbb{Z}. \nonumber \end{eqnarray} By the continuity of the functions $ f_k $ and $ m_{\mathcal{D}_k} $, the above result holds for all $ t\in\mathbb{R} $. Then for $ t\in\mathbb{R} $, \begin{eqnarray*} \widehat{\nu}(t)&=&\prod_{k=1}^{\infty} m_{\mathcal{D}_k}\left((b_k\cdots b_1)^{-1}t\right)\\ &=&\prod_{k=1}^{\infty} f_k\left((b_k\cdots b_1)^{-1}t/2\right) e^{(2p_k-\frac{3+b_{k+1}^{-1}}{2})(b_k\cdots b_1)^{-1}\pi ti}\\ &=&\prod_{k=1}^{\infty} f_k\left((b_k\cdots b_1)^{-1}t/2\right)\cdot e^{-\pi b_1^{-1}\frac{t}{2}i}\cdot e^{\pi bti}\\ &=&\widehat{\mu}({t}/{2})\cdot e^{\pi bti} \end{eqnarray*} where $ b:=\frac12 b_1^{-1}+\sum_{k=1}^{\infty}(2p_k-\frac{3+b_{k+1}^{-1}}{2})(b_k\cdots b_1)^{-1}<\infty$ as $ b_k\geq 2p_k\geq 2 $. Consequently, for any $ \Lambda\subset\mathbb{R} $ and $ t\in\mathbb{R} $, $$Q_{\Lambda, \nu}(t)=\sum_{\lambda \in\Lambda}\left|\widehat{\nu}(\lambda+t)\right|^2=\sum_{\lambda \in\frac12 \Lambda}\left|\widehat{\mu}(\lambda+t/2)\right|^2=Q_{\frac12 \Lambda, \mu}(t/2).$$ Therefore, $\Lambda$ is a spectrum of $\nu$ if and only if $\frac12 \Lambda$ is a spectrum of $\mu$ by Theorem \ref{J-P}. \end{proof} According to Proposition \ref{equivalent}, it suffices to consider the spectrality of $ \nu $. The following special case that $n_k\equiv 2$ (i.e., $p_k\equiv 1$) for $k\ge 1$ provides a counterexample to show that the condition in Theorem \ref{thm-main2} for $\nu$ to be a spectral measure is not a necessary condition. \begin{theorem}\label{Bernoulli situation} Assume that $ p_k\equiv1 $ for $ k\geq 1 $. If $ 2\mid b_k $ for $ k\geq 2 $, then $ \nu $ defined by \eqref{eq-infinite convolution-equivalent} is a spectral measure. Conversely, if $ \nu $ is a spectral measure, $ b_k $ is not always even for $ k\geq 2 $. \end{theorem} \begin{proof} Since $ p_k\equiv1 $ for $ k\geq 1 $, we have \begin{equation*} \mathcal{D}_k=\left\{0,\frac{1-b_{k+1}^{-1}}{2}\right\}. \end{equation*} Define $ \tilde{D}_k:=2b_{k+1}\mathcal{D}_k=\{0,b_{k+1}-1\} $, the measure $ \nu $ defined by \eqref{eq-infinite convolution-equivalent} can be expressed as \begin{equation*} \nu=\delta_{(2b_1 b_2)^{-1}\tilde{D}_1}*\delta_{(2b_1 b_2 b_3)^{-1}\tilde{D}_2}*\delta_{(2b_1 b_2b_3 b_4)^{-1}\tilde{D}_3}*\cdots \end{equation*} which is a Moran-type Bernoulli convolution. If $ 2\mid b_k $ for all $ k\geq 2 $. 
We can check the condition in Theorem \ref{Bernoulli-sn} that \begin{eqnarray*} s_{k+1}&=&v_2\left(\frac{b_1b_2\cdots b_{k+1}b_{k+2}}{b_{k+2}-1}\right)\\ &=&v_2\left(b_1b_2\cdots b_{k+1}b_{k+2}\right)\\ &>&v_2\left(b_1b_2\cdots b_{k+1}\right)\\ &=&v_2\left(\frac{b_1b_2\cdots b_{k+1}}{b_{k+1}-1}\right)\\ &=&s_k \end{eqnarray*} for all $ k\geq 1 $. Hence $ \nu $ is a spectral measure. On the other hand, let $ b_3=7, b_k=8 $ for $ k\ne3 $, we can get \begin{eqnarray*} &&s_{1}=v_2\left(\frac{b_1b_2}{b_{2}-1}\right)=6,\quad s_{2} = v_2\left(\frac{b_1b_2b_3}{b_{3}-1}\right)=5,\\ &&s_k=v_2\left(\frac{b_1b_2\cdots b_{k+1}}{b_{k+1}-1}\right)=3k, \quad k=3,4,\dots. \end{eqnarray*} Then $s_k$'s are different from each other for all $ k\geq 1 $. Therefore, $ \nu $ is a spectral measure but $2 \nmid b_3$. \end{proof} In the rest of this section, our main concern is the general case that $p_k \not \equiv 1$. First of all, we need to modify the expression of $\nu$ in a more appropriate way. By using \eqref{eq-D_k}, we have \begin{eqnarray*} \delta_{(b_1\cdots b_k)^{-1}\mathcal{D}_k} &=&\delta_{(b_1\cdots b_k)^{-1}D_{p_k}}*\delta_{(b_1\cdots b_k)^{-1}(p_k-\frac{1+b_{k+1}^{-1}}{2})D_2}\\ &=&\delta_{(b_1\cdots b_k)^{-1}D_{p_k}}*\delta_{(b_1\cdots b_k2b_{k+1})^{-1}(b_{k+1}(2p_k-1)-1)D_2}. \end{eqnarray*} Then the measure $ \nu $ in \eqref{eq-infinite convolution-equivalent} is of the following form \begin{eqnarray*} \nu&=&\delta_{b_1^{-1}\mathcal{D}_1}*\delta_{(b_1 b_2)^{-1}\mathcal{D}_2}*\cdots\delta_{(b_1\cdots b_k)^{-1}\mathcal{D}_k}*\cdots\\ &=& \delta_{b_1^{-1}D_{p_1}}*\delta_{(b_12b_2)^{-1}(b_2(2p_1-1)-1)D_2}*\delta_{(b_1b_2)^{-1}D_{p_2}}*\delta_{(b_1b_22b_3)^{-1}(b_3(2p_2-1)-1)D_2}*\cdots * \\ &&\delta_{(b_1\cdots b_k)^{-1}D_{p_k}}*\delta_{(b_1\cdots b_k2b_{k+1})^{-1}(b_{k+1}(2p_k-1)-1)D_2}*\cdots. \end{eqnarray*} Starting from the second term, by swapping the positions of two consecutive terms in the above expression, one can get \begin{eqnarray}\label{eq-form of measure nu} \nu &=&\delta_{b_1^{-1}D_{p_1}}*\delta_{b_1^{-1} b_2^{-1}D_{p_2}}*\delta_{b_1^{-1} b_2^{-1} 2^{-1}(b_2(2p_1-1)-1)D_2}*\delta_{b_1^{-1} b_2^{-1} 2^{-1} {(\frac{b_3}{2})}^{-1}D_{p_3}}* \nonumber \\ &&\delta_{b_1^{-1} b_2^{-1} 2^{-1} {(\frac{b_3}{2})}^{-1}2^{-1}(b_3(2p_2-1)-1)D_2}*\delta_{b_1^{-1}b_2^{-1}b_3^{-1}2^{-1}(\frac{b_4}{2})^{-1}D_{p_4}}*\cdots*\\ &&\delta_{b_1^{-1}b_2^{-1}\cdots b_{k-1}^{-1}2^{-1}(\frac{b_k}{2})^{-1}D_{p_k}}*\delta_{b_1^{-1}b_2^{-1}\cdots b_{k-1}^{-1}2^{-1}(\frac{b_k}{2})^{-1}2^{-1}(b_k(2p_{k-1}-1)-1)D_2}* \cdots. \nonumber \end{eqnarray} By removing the terms in which $p_k=1$, we can relabel the above infinite convolution to ensure that every digit set contains at least two elements in the following way: \begin{enumerate}[(i)] \item if $ p_1=1 $ and $ p_2=1 $, let $ b'_1=2b_1b_2 $, $ D'_1=(b_2(2p_1-1)-1)D_2 $; \item if $ p_1=1 $ and $ p_2\ne 1 $, let $ b'_1=b_1b_2 $, $ D'_1=D_{p_2} $ and $ b'_2=2 $, $ D'_2=(b_2(2p_1-1)-1)D_2 $; \item if $ p_1\ne1 $ and $ p_2=1 $, let $ b'_1=b_1 $, $ D'_1=D_{p_1} $ and $ b'_2=2b_2 $, $ D'_2=(b_2(2p_1-1)-1)D_2 $; \item if $ p_1\ne 1 $ and $ p_2\ne 1 $, let $ b'_1=b_1 $, $ D'_1=D_{p_1} $, $ b'_2=b_2 $, $ D'_2=D_{p_2} $ and $ b'_3=2 $, $ D'_3=(b_2(2p_1-1)-1)D_2 $. In general, for $ k\geq 3 $, we set $$m_k=2k-2-\#\{n:p_n=1,n<k\}.$$ \item If $ p_k\ne1 $, let $ b'_{m_k}=\frac{b_k}{2} $, $ D'_{m_k}=D_{p_k} $ and $ b'_{m_k+1}=2 $, $ D'_{m_k+1}=(b_k(2p_{k-1}-1)-1)D_2 $; \item if $ p_k=1 $, let $ b'_{m_k}=b_k $, $ D'_{m_k}=(b_k(2p_{k-1}-1)-1)D_2 $. 
\end{enumerate} Therefore, we get a sequence of integers $\{b'_k\}_{k=1}^\infty$ and a sequence of digit sets $\{D'_k\}_{k=1}^\infty$. Accordingly, the measure $ \nu $ in \eqref{eq-infinite convolution-equivalent} can be simplified to \begin{equation}\label{rearrangement} \nu=\delta_{{b'_1}^{-1}D'_1}*\delta_{(b'_1 b'_2)^{-1}D'_2}*\cdots \end{equation} where $ \#D'_k\geq2 $ for all $ k\geq 1 $. Decompose such $\nu$ into two parts as before: \begin{equation*} \nu=\nu_k*\nu_{>k} \end{equation*} where $$\nu_k=\delta_{{b'_1}^{-1}D'_1}*\delta_{(b'_1 b'_2)^{-1}D'_2}*\cdots*\delta_{(b'_1\cdots b'_k)^{-1}D'_k}$$ and $$\nu_{>k}=\delta_{(b'_1\cdots b'_k b'_{k+1})^{-1}D'_{k+1}}*\delta_{(b'_1\cdots b'_k b'_{k+1}b'_{k+2})^{-1}D'_{k+2}}*\cdots.$$ Moreover, we scale the measure $\nu_{>k}$ by \begin{equation}\label{omega.n} \omega_{>k}(\cdot):=\nu_{>k}\left(\frac{\cdot}{b'_1\cdots b'_k}\right)=\delta_{{b'_{k+1}}^{-1}D'_{k+1}}*\delta_{(b'_{k+1}b'_{k+2})^{-1}D'_{k+2}}*\cdots. \end{equation} \begin{Prop}\label{4.4} If $ p_{1}\mid b_{1},\ p_{2}\mid b_{2},\ 2\mid b_{2} $ and $ 2p_{k}\mid b_{k} $ for $k\ge 3$, then there exists a sequence of sets $ \{L_k\}_{k=1}^{\infty}$ in $\mathbb{Z} $ such that $(b'_k,D'_k,L_k) $ are all Hadamard triples. \end{Prop} \begin{proof} Cases (i)--(iv) are easily verified, so we only check the general cases (v) and (vi). For $ k\geq m_3 $, there exists $ i\geq3 $ such that $ m_i\leq k< m_{i+1} $. If $ p_i\ne1 $ and $ k=m_i $, then $ b'_k=b'_{m_i}=\frac{b_i}{2}$ and $ D'_k=D'_{m_i}=D_{p_i} $, and we can get \begin{equation*} \delta_{{b'_k}^{-1}D'_k}=\delta_{(\frac{b_i}{2})^{-1}D_{p_i}}. \end{equation*} It follows that the zero set \begin{equation*} \mathcal{Z}(\widehat{\delta}_{{b'_k}^{-1}D'_k})=\frac{b_i}{2p_i}(\mathbb{Z}\setminus p_i\mathbb{Z}). \end{equation*} Let $ L_k=\frac{b_i}{2p_i}D_{p_i}$. By the assumption that $ 2p_{i}\mid b_{i} $, we have that $ L_k $ is a spectrum of $ \delta_{{b'_k}^{-1}D'_k} $ with $ L_k\subset\mathbb{Z} $. Hence $ (b'_k,D'_k,L_k) $ is a Hadamard triple. If $ p_i\ne1 $ and $ k=m_i+1 $, then $ b'_k=b'_{{m_i}+1}=2$ and $ D'_k=D'_{m_i+1}=(b_i(2p_{i-1}-1)-1)D_{2} $, and we can get \begin{equation*} \delta_{{b'_k}^{-1}D'_{k}}=\delta_{2^{-1}(b_i(2p_{i-1}-1)-1)D_{2}}. \end{equation*} The zero set is \begin{equation*} \mathcal{Z}(\widehat{\delta}_{{b'_k}^{-1}D'_k})=\frac{2\mathbb{Z}+1}{b_i(2p_{i-1}-1)-1}. \end{equation*} Let $ L_k=D_2 $. By the assumption that $ 2\mid b_{i} $, we have that $ L_k $ is a spectrum of $ \delta_{{b'_k}^{-1}D'_k} $ with $ L_k\subset\mathbb{Z} $. Hence $ (b'_k,D'_k,L_k) $ is a Hadamard triple. If $ p_i=1 $ and $ k=m_i$, then $b'_k=b_i$ and $ D'_k=(b_i(2p_{i-1}-1)-1)D_2 $, and we can get \begin{equation*} \delta_{{b'_k}^{-1}D'_k}=\delta_{{b_i}^{-1}(b_i(2p_{i-1}-1)-1)D_{2}}. \end{equation*} The zero set is \begin{equation*} \mathcal{Z}(\widehat{\delta}_{{b'_k}^{-1}D'_k})=\frac{b_i(2\mathbb{Z}+1)}{2(b_i(2p_{i-1}-1)-1)}. \end{equation*} Let $ L_k=\frac{b_i}{2} D_2 $. By the assumption that $ 2\mid b_{i} $, it follows that $ L_k $ is a spectrum of $ \delta_{{b'_k}^{-1}D'_k} $ with $ L_k\subset\mathbb{Z} $. Hence $ (b'_k,D'_k,L_k) $ is also a Hadamard triple. \end{proof} \begin{theorem}\label{thm3- spectral measure} If $ p_{2}\mid b_{2}$, $ 2\mid b_{2}$ and $ 2p_{k}\mid b_{k}$ for $k\ge 3$, and the sequence $ \{b_k\}_{k=1}^{\infty} $ is bounded, then $ \nu $ in \eqref{rearrangement} is a spectral measure. \end{theorem} \begin{proof} By Lemma \ref{coefficient equiv}, the value of $ b_1 $ does not affect the spectrality of $ \nu $, hence we can assume that $ p_1\mid b_1 $.
Depending on whether $ p_k $ is equal to $ 1 $ for $ k\geq 1 $, we divide $ \{p_k\}_{k=1}^{\infty} $ into the following three cases: Case 1: $ p_k\equiv1 $ for all $ k\geq1 $, the desired result follows from Theorem \ref{Bernoulli situation}. Case 2: There are only finitely many $ k $'s that make $ p_k\geq2 $. Then there exists a positive number $\kappa\in\mathbb{N} $ such that $ p_k\equiv1 $ for all $ k\geq \kappa $. Let $ \{L_k\}_{k=1}^{\infty}\subset \mathbb{Z} $ be as in Proposition \ref{4.4}. Write \begin{equation*} B=b'_1\cdots b'_{m_\kappa}, \quad L=L_1+b'_1L_2+\cdots+(b'_1\cdots b'_{m_\kappa-1})L_{m_\kappa} \end{equation*} and \begin{equation*} D=(b'_2\cdots b'_{m_\kappa})D'_1+(b'_3\cdots b'_{m_\kappa})D'_2+\cdots+b'_{m_\kappa}D'_{m_\kappa-1}+D'_{m_\kappa}. \end{equation*} Then $$\nu_{m_\kappa}=\delta_{(b'_1)^{-1}D'_1}*\delta_{(b'_1 b'_2)^{-1}D'_2}*\cdots*\delta_{(b'_1\cdots b'_{m_\kappa})^{-1}D'_{m_\kappa}}=\delta_{B^{-1}D}.$$ Since $ \{(b'_k,D'_k,L_k)\}_{k=1}^{m_\kappa} $ are finite many Hadamard triples, from Lemma \ref{finite Hadamard}, it follows that $ (B,D,L) $ is also a Hadamard triple. That is, $ \nu_{m_\kappa} $ is a spectral measure with a spectrum $ L $. On the other hand, by \eqref{omega.n}, \begin{eqnarray*} \omega_{>m_\kappa}&=&\delta_{{b'}_{m_\kappa+1}^{-1}D'_{m_\kappa+1}}*\delta_{(b'_{m_\kappa+1}b'_{m_\kappa+2})^{-1}D'_{m_\kappa+2}}*\cdots\\ &=&\delta_{b_{\kappa+1}^{-1}(b_{\kappa+1}-1)D_2}*\delta_{(b_{\kappa+1}b_{\kappa+2})^{-1}(b_{\kappa+2}-1)D_2}*\cdots. \end{eqnarray*} Thus $ \{(b'_k,D'_k,L_k)\}_{k=m_\kappa+1}^{\infty} $ are Hadamard triples with $ \#D'_k=2 $ and $D'_k=\{0,b'_k-1\}\subset\{0,1,\dots,b'_k-1\} $ for all $ k\geq m_\kappa+1 $. Since $\{b_k'\}$ is bounded, by Lemma \ref{An}, we have that $ \omega_{>m_\kappa} $ is a spectral measure with a spectrum $ \Gamma\subset \mathbb{Z} $. Hence $ \nu_{>m_\kappa}$ is also a spectral measure with a spectrum $ B\Gamma\subset B\mathbb{Z} $. Set \begin{equation*} \Lambda=L\oplus B\Gamma. \end{equation*} For any $ t\in\mathbb{R}$, \begin{eqnarray*} Q_{\Lambda, \nu}(t)&=&\sum_{\eta\in\Lambda}|\widehat{\nu}(t+\eta)|^2\\ &=&\sum_{\eta\in\Lambda}|\widehat{\nu}_{m_\kappa}(t+\eta)|^2\cdot |\widehat{\nu}_{>m_\kappa}(t+\eta)|^2\\ &=&\sum_{\lambda\in L}\sum_{\gamma\in\Gamma}\left|\widehat{\nu}_{m_\kappa}(t+\lambda+B\gamma)\right|^2\cdot \left|\widehat{\nu}_{>m_\kappa}(t+\lambda+B\gamma)\right|^2\\ &=&\sum_{\lambda\in L}\left(\left|\widehat{\nu}_{m_\kappa}(t+\lambda)\right|^2\cdot\sum_{\gamma\in\Gamma} \left|\widehat{\nu}_{>m_\kappa}(t+\lambda+B\gamma)\right|^2\right)\\ &=&\sum_{\lambda\in L}\left|\widehat{\nu}_{m_\kappa}(t+\lambda)\right|^2\equiv1. \end{eqnarray*} Therefore, $ \nu $ is a spectral measure with a spectrum $ \Lambda $ by Theorem \ref{J-P}. Case 3: There are infinitely many $ k $'s that make $ p_k\geq2 $. That is, for all $ l,k\geq1 $, there exists an integer $ N(k)>k+l $, such that $ p_{N(k)}\geq 2 $. We can get \begin{equation*} \gcd(D'_{m_{N(k)}}-D'_{m_{N(k)}})=\gcd(D_{p_{N(k)}}-D_{p_{N(k)}})=1, \end{equation*} then \begin{equation}\label{gcd} \gcd\left(\bigcup_{j=l+k}^{\infty}(D'_j-D'_j)\right)=1. \end{equation} Since $ \{b_k\}_{k=1}^{\infty} $ is bounded, by Proposition \ref{4.4}, the sequence $ \{(b'_k,D'_k,L_k)\}_{k=1}^{\infty} $ is chosen from a finite set of Hadamard triples. Thus the measure $ \omega_{>l} $ is generated by finitely many Hadamard triples. By Lemma \ref{finite-gcd} and \eqref{gcd}, the integral periodic zero set $ Z(\omega_{>l})=\emptyset $ for all $ l\geq1 $. 
(i) If $\{ \omega_{>l}\}_{l=1}^{\infty} $ has no weakly covergent subsequences. In this case, we have $\text{cl}(\{ \omega_{>l}\}_{l=1}^{\infty})=\{ \omega_{>l}\}_{l=1}^{\infty} $. Moreover, we may find a compact subset $ K\subset\mathbb{R} $ such that $ \omega_{>l}\in\mathcal{P}(K) $ for all $ l\geq 1 $. Hence the family of measures $\{ \omega_{>l}\}_{l=1}^{\infty} $ is admissible and tight. Theorem \ref{Li} implies that $ \nu $ is a spectral measure with a spectrum in $ \mathbb{Z} $. (ii) If there exists a subsequence, say $\{ \omega_{>l_j}\}_{j=1}^{\infty} $, which converges weakly to $\omega$. Let $ \Psi $ be the set of all measures generated by finitely many Hadamard triples, and let $ \Psi_0\subset\Psi $ be the set of the measures where there are infinitely many $ k $'s such that $ p_k\geq2 $. By Lemma \ref{cl}, we have $ \omega'\in\Psi $. If $ \omega'\in\Psi_0 $, then $ Z(\omega')=\emptyset $. If $ \omega'\in\Psi\setminus\Psi_0 $, then $ \omega'$ belongs to Case 1 or Case 2. That is, there exists a sufficiently large $ M>0 $ such that $p_k\equiv 1$ for all $k\ge M$ in the expression of $ \omega'_{>M}$. The sequence $\{\omega_{>l_j+M}\}_{j=1}^\infty$ converges weakly to $\omega'_{>M}\in\Psi\setminus\Psi_0$. In this situation, we may regard $\omega'_{>M}$ as $\omega_{>k}$ for some $k\ge m_{\kappa}$ in Case 2. The support of measure $ \omega_{>k} $ is \begin{equation*} \text{spt}(\omega_{>k})\subset\left[0,\sum_{l=k+1}^{\infty}(b'_{k+1}\cdots b'_l)^{-1}(b'_l-1)\right]\subset[0,1]. \end{equation*} That means $ \omega_{>k}\in\mathcal{P}([0,1]) $. Since $ \omega_{>k} $ is generated by Hadamard triples, it is purely singular without atoms, $ \omega_{>k}(\{1\}) =0 $. Lemma \ref{empty} yields that $ Z(\omega_{>k}) =\emptyset $. Therefore, $ Z(\omega'_{>M})=\emptyset $. Consequently, $ \nu $ is a spectral measure by Theorem \ref{weak-conv}. \end{proof} \bigskip \begin{proof}[Proof of Theorem \ref{thm-main2}] The theorem directly follows from Proposition \ref{equivalent} and Theorem \ref{thm3- spectral measure}. \end{proof} \section{Proof of Theorem \ref{thm-main3}}\label{sect.5} In this section, we follow the same notation as in Section \ref{sect.4}, that is, $n_k=2p_k, k\ge 1$ and $\nu$ is defined by \eqref{eq-infinite convolution-equivalent}. To prove Theorem \ref{thm-main3}, we define a new Moran-type measure generated by a sequence of positive rational numbers \( \{d_k:=\frac{l_k}{t_k}\}_{k=1}^\infty \) and a sequence of consecutive digit sets \( \mathcal {D}_{k} = \{0, 1, \dots, \gamma_k-1\} \) as follows: \begin{eqnarray}\label{new moran} \nu=\delta_{d_1^{-1}\mathcal{D}_1}*\delta_{(d_1d_2)^{-1}\mathcal {D}_{2}}*\cdots*\delta_{(d_1\cdots d_k)^{-1}\mathcal {D}_k}*\cdots. \end{eqnarray} Write \begin{equation}\label{new moran k} \omega_{>k} =\delta_{({d_{k+1}})^{-1}\mathcal {D}_{{k+1}}}*\delta_{({d_{k+1}d_{k+2}})^{-1}\mathcal {D}_{{k+2}}}*\delta_{({d_{k+1}d_{k+2}d_{k+3}})^{-1}\mathcal {D}_{{k+3}}}*\cdots. \end{equation} If $ \{t_k\}_{k=1}^{\infty} $ and $ \{\gamma_k\}_{k=1}^{\infty}$ both are bounded, we let $ c\in {\mathbb N} $ be the common multiple of $\{t_k\}_{k=1}^\infty$ and $\{\gamma_k\}_{k=1}^\infty$. Obviously, the zero set of $\widehat{\nu}$ is \begin{equation*} \mathcal{Z}(\widehat{\nu})=\bigcup_{k=1}^{\infty} d_1\cdots d_k\left(\frac{\mathbb{Z}\setminus \gamma_k\mathbb{Z}}{\gamma_k}\right)\subset\frac{d_1}{c} \mathbb{Z}. \end{equation*} Assuming that $ \Lambda $ containing $0$ is a spectrum of $\nu$, we have $ \frac{c}{d_1}\Lambda\subset\mathbb{Z} $. 
It follows that \begin{equation*} \frac{1}{d_1}\Lambda=\bigcup_{n=0}^{c-1} \left(\frac{n}{c}+\Lambda_n\right), \end{equation*} where $ \Lambda_n=\mathbb{Z}\cap\left(\frac{\Lambda}{d_1}-\frac{n}{c}\right) $and $\frac{n}{c}+\Lambda_n=\emptyset$ if $\Lambda_n=\emptyset$. Denote by $ q_k:=\frac{c}{\gamma_k}\in\mathbb{N}, \ k\geq1 $. For any $ n\in\{0,1,\dots,c-1\} $, there exists a unique pair of integers $ (i,j)\in\{0,1,\dots,q_1-1\}\times \{0,1,\dots,\gamma_1-1\} $ such that $$ n=i+q_1 j. $$ Hence we further have \begin{equation} \label{decomposition} \frac{1}{d_1}\Lambda=\bigcup_{n=0}^{c-1} \left(\frac{n}{c}+\Lambda_n\right)=\bigcup_{i=0}^{q_1-1} \bigcup_{j=0}^{\gamma_1-1}\left(\frac{i+q_1 j}{c}+\Lambda_{i+q_1 j}\right). \end{equation} The proof of the following technical lemma is similar to that of Proposition 3.1 in \cite{Wu-Xiao2024} (also Proposition 3.3 in \cite{Deng2022}), while $ d_k $'s here are rational numbers. \begin{Lem}\label{5.2} Let $\nu$ be defined by \eqref{new moran} with bounded $ \{t_k\}_{k=1}^\infty$ and $\{\gamma_k\}_{k=1}^\infty$. If $\nu$ is a spectral measure with a spectrum $\Lambda $ containing $0$, then $ \omega_{>1} $ as in \eqref{new moran k} is a spectral measure and for any $ \{j_i:0\leq i\leq q_1-1\}\subset\{0,1,\dots,\gamma_1-1\} $, the set \begin{equation*} \Gamma=\bigcup_{i=0}^{q_1-1}\left(\frac{i+q_1 j_i}{c}+\Lambda_{i+q_1 j_i}\right) \end{equation*} is a spectrum of $ \omega_{>1} $ if $ \Gamma\ne\emptyset $. \end{Lem} \begin{proof} We first show that $ \Gamma $ is a bi-zero set of $ \omega_{>1} $ if $ \Gamma\ne\emptyset $. It is obviously true if $ \Gamma $ has only one element. We just have to consider that $ \Gamma $ has at least two elements. For any $ \lambda_1\ne\lambda_2\in\Gamma $, there exist $ i_1,i_2\in\{0,1,\dots,q_1-1\} $, $ z_1\in\Lambda_{i_1+q_1 j_{i_1}}, z_2\in\Lambda_{i_2+q_1 j_{i_2}} $ such that \begin{equation*} \lambda_1=\frac{i_1+q_1 j_{i_1}}{c}+z_1, \quad \lambda_2=\frac{i_2+q_1 j_{i_2}}{c}+z_2. \end{equation*} By the definitions of $ \Lambda $ and $ \Gamma $, we have $ d_1 \Gamma\subset\Lambda $. Since $ \nu(\cdot)=\delta_{d_1^{-1} \mathcal{D}_1}(\cdot)*\omega_{>1}(d_1\ \cdot) $, it follows that \begin{equation*} 0=\widehat{\nu}(d_1(\lambda_1-\lambda_2))=m_{\mathcal{D}_1}(\lambda_1-\lambda_2)\widehat{\omega}_{>1}(\lambda_1-\lambda_2). \end{equation*} Since $ m_{\mathcal{D}_1} $ is a $ \mathbb{Z} $-periodic function, it yields that \begin{eqnarray*} m_{\mathcal{D}_1}(\lambda_1-\lambda_2)&=&m_{\mathcal{D}_1}\left(\frac{i_1-i_2+q_1 (j_{i_1}-j_{i_2})}{c}+z_1-z_2\right)\\ &=&m_{\mathcal{D}_1}\left(\frac{i_1-i_2+q_1 (j_{i_1}-j_{i_2})}{c}\right). \end{eqnarray*} If $ i_1=i_2 $, then $ j_{i_1}=j_{i_2} $, $ m_{\mathcal{D}_1}(\lambda_1-\lambda_2)=m_{\mathcal{D}_1}(0)=1 $. Hence $ \widehat{\omega}_{>1}(\lambda_1-\lambda_2)=0 $. If $ i_1\ne i_2 $, since $ i_1-i_2\in[1-q_1,q_1-1]\setminus\{0\} $, we have $ i_1-i_2\notin q_1\mathbb{Z} $. Suppose that $ \lambda_1-\lambda_2\in \mathcal{Z}(m_{\mathcal{D}_1})=\frac{\mathbb{Z}\setminus \gamma_{1}\mathbb{Z}}{\gamma_{1}} $, then there exists $ z\in\mathbb{Z}\setminus \gamma_{1}\mathbb{Z} $ such that \begin{equation*} \frac{i_1-i_2+q_1 (j_{i_1}-j_{i_2})}{c}=\frac{z}{\gamma_{1}}. \end{equation*} Thus $c=q_1 \gamma_1$ shows that $ i_1-i_2=q_1(z-j_{i_1}+j_{i_2})\in q_1\mathbb{Z} $ , which is impossible. Therefore, $ m_{\mathcal{D}_1}(\lambda_1-\lambda_2)\ne0 $ and $ \widehat{\nu}_{>1}(\lambda_1-\lambda_2)=0 $, proving that $ \Gamma $ is a bi-zero set of $ \omega_{>1} $. 
Next we show that $ \Gamma $ is a spectrum of $ \omega_{>1} $. Denote $ {\Lambda}_{i,j}:=\Lambda_{i+q_1 j} $. For any $ t\in\mathbb{R} $, \begin{equation*} 1=Q_{\Lambda,\nu}(d_1 t)=\sum_{i=0}^{q_1-1} \sum_{j=0}^{\gamma_{1}-1}\sum_{\lambda\in i+q_1 j+c\Lambda_{i,j}}\left|\widehat{\nu}\left(\frac{d_1 \lambda}{c}+d_1 t\right)\right|^2, \end{equation*} where $ \sum_{\lambda\in i+q_1 j+c\Lambda_{i,j}}\left|\widehat{\nu}\left(\frac{d_1 \lambda}{c}+d_1 t\right)\right|^2=0 $ if $ \Lambda_{i,j}=\emptyset $. Since $ \Lambda_{i,j}\subset\mathbb{Z} $, we have \begin{eqnarray*} 1&=&Q_{\Lambda,\nu}(d_1 t)\\ &=&\sum_{i=0}^{q_1-1} \sum_{j=0}^{\gamma_{1}-1}\sum_{\tilde{\lambda}\in\Lambda_{i,j}}\left|m_{\mathcal{D}_1}\left(\frac{i+q_1 j}{c}+\tilde{\lambda}+t\right)\right|^2\left|\widehat{\omega}_{>1}\left(\frac{i+q_1 j}{c}+\tilde{\lambda}+t\right)\right|^2\\ &=&\sum_{i=0}^{q_1-1} \sum_{j=0}^{\gamma_1-1}\left|m_{\mathcal{D}_1}\left(\frac{i+q_1 j}{c}+t\right)\right|^2\sum_{\tilde{\lambda}\in\Lambda_{i,j}}\left|\widehat{\omega}_{>1}\left(\frac{i+q_1 j}{c}+\tilde{\lambda}+t\right)\right|^2. \end{eqnarray*} Let $ p_{i,j}:=\left|m_{\mathcal{D}_1}\left(\frac{i+q_1 j}{c}+t\right)\right|^2 $ and $ x_{i,j}:=\sum_{\tilde{\lambda}\in\Lambda_{i,j}}\left|\widehat{\omega}_{>1}\left(\frac{i+q_1 j}{c}+\tilde{\lambda}+t\right)\right|^2 $, we have $ \sum_{i=0}^{q_1-1} \sum_{j=0}^{\gamma_1-1} p_{i,j} x_{i,j}=1 $. Let $ t\in\mathbb{R}\setminus\mathbb{Q} $, since $ \mathcal{Z}(\mathcal{D}_1)\subset \mathbb{Q} $, we have $ p_{i,j}>0 $. For any $ i\in\{0,1,\dots,q_1-1\} $, let $ L_i=\{i+q_1 j:0\leq j\leq \gamma_1-1\} $, then $ (c,\mathcal{D}_1,L_i) $ is Hadamard triple, that is, \begin{equation*} \sum_{j=0}^{\gamma_1-1} p_{i,j}=\sum_{j=0}^{\gamma_1-1}\left|m_{\mathcal{D}_1}\left(\frac{i+q_1 j}{c}+t\right)\right|^2=1. \end{equation*} Since $ \{j_i:0\leq i\leq q_1-1\}\subset\{0,1,\dots,\gamma_1-1\} $, $ \Gamma $ is a bi-zero set of $ \omega_{>1} $, then \begin{equation*} \sum_{i=0}^{q_1-1}\max\{x_{i,0},x_{i,1},\dots,x_{i,\gamma_1-1}\}\leq1. \end{equation*} By Lemma \ref{combined sum}, we have \begin{equation*} \sum_{i=0}^{q_1-1} \sum_{\tilde{\lambda}\in\Lambda_{i,j}}\left|\widehat{\omega}_{>1}\left(\frac{i+q_1 j}{c}+\tilde{\lambda}+t\right)\right|^2=1 \end{equation*} for all $ j\in\{0,1,\dots,\gamma_1-1\} $, and \begin{eqnarray}\label{5.4.0} &&\sum_{\tilde{\lambda}\in\Lambda_{i,0}}\left|\widehat{\omega}_{>1}\left(\frac{i}{c}+\tilde{\lambda}+t\right)\right|^2\nonumber\\ &=&\sum_{\tilde{\lambda}\in\Lambda_{i,1}}\left|\widehat{\omega}_{>1}\left(\frac{i+q_1}{c}+\tilde{\lambda}+t\right)\right|^2 \\ &\vdots& \nonumber\\ &=&\sum_{\tilde{\lambda}\in\Lambda_{i,\gamma_1-1}}\left|\widehat{\omega}_{>1}\left(\frac{i+q_1 (\gamma_1-1)}{c}+\tilde{\lambda}+t\right)\right|^2 \nonumber \end{eqnarray} for all $ i\in\{0,1,\dots,q_1-1\} $. Hence \begin{equation*} Q_{\Gamma,{\omega}_{>1}}(t)=\sum_{\gamma\in\Gamma}|\widehat{\omega}_{>1}(t+\gamma)|^2=1 \end{equation*} for $ t\in\mathbb{R}\setminus\mathbb{Q} $. By the continuity of $ Q_{\Gamma,{\omega}_{>1}} $, we have $ Q_{\Gamma,{\omega}_{>1}}(t)\equiv1 $ for $ t\in\mathbb{R} $. Theorem \ref{J-P} implies that $ {\omega}_{>1} $ is a spectral measure with the spectrum $ \Gamma $. \end{proof} \begin{Rem}\label{Remark5.2} From the proof of Lemma \ref{5.2}, it is easy to see that $ {\omega}_{>k} $ is a spectral measure for all $k\ge 1$ provided that $\nu$ is a spectral measure. 
Furthermore, if $\Gamma_k$ containing $0$ is a spectrum of $ {\omega}_{>k}$, then for any $ \{j_i:0\leq i\leq q_k-1\}\subset\{0,1,\dots,\gamma_k-1\} $, the set \begin{equation*} \Gamma_{k+1}=\bigcup_{i=0}^{q_k-1}\left(\frac{i+q_k j_i}{c}+\Lambda_{i+q_k j_i}\right) \end{equation*} is a spectrum of $ \omega_{>k+1} $ if $ \Gamma_{k+1}\ne\emptyset $, where $c=q_k\gamma_k$ and $ \Lambda_{i+q_k j_i}=\mathbb{Z}\cap\left(\frac{\Gamma_k}{d_k}-\frac{i+q_k j_i}{c}\right)$. \end{Rem} \begin{Prop}\label{5.3New} Let $\nu$ be defined by \eqref{new moran} is a spectral measure with bounded $ \{t_k\}$ and $\{\gamma_k\}$, then \item (\romannumeral1) If $\gamma_{i}\nmid t_{i+1}$ for some $i$, then $\gamma_{i+1}\mid l_{i+1}$. \item (\romannumeral2) If $\gamma_i\nmid t_{i+1}$, $\gcd(l_{i+1},t_{i+1})=1$, and $d_{i+2}=l_{i+2}$ is an integer satisfying $t_{i+1}\mid l_{i+2}$ for some $i$, then $t_{i+1}\gamma_{i+2}\mid l_{i+2}$. \end{Prop} \begin{proof} By Remark \ref{Remark5.2}, we only need to prove the case that $i=1$. Let $ \Lambda$ be a spectrum of $\nu$ with $0\in \Lambda$, then Lemma \ref{5.2} shows that \begin{equation*} \Gamma=\bigcup_{i=0}^{q_1-1}\left(\frac{i+q_1 j_i}{c}+\Lambda_{i+q_1 j_i}\right)=\bigcup_{i=0}^{q_1-1}\left(\frac{i}{c}+\frac{j_i}{\gamma_1}+\Lambda_{i,j_i}\right) \end{equation*} is a spectrum of $\omega_{>1}$ if $\Gamma\ne\emptyset$. (\romannumeral1) If $\gamma_{1}\nmid t_{2}$. We set $j_0=0$, and for $ i\ne 0 $ we set $ j_i=0 $ if $\frac{l_2}{\gamma_2}\not \equiv t_{2}\frac{i}{c}\pmod 1$, $ j_i=1$ if $\frac{l_2}{\gamma_2} \equiv t_{2}\frac{i}{c}\pmod 1$. Then $\gamma_{1}\nmid t_{2}$ implies that $\frac{l_2}{\gamma_2}\not \equiv t_{2}\frac{i+q_1j_i}{c}\pmod 1$ for $ i\ne 0 $. Using \eqref{decomposition} for $\omega_{>1}$, the spectrum $\Gamma$ has the following multi-stage decomposition \begin{eqnarray}\label{eq-multi-stage deco} \Gamma&=&\bigcup_{i=0}^{q_1-1}\left(\frac{i+q_1 j_i}{c}+\Lambda_{i,j_i}\right) \nonumber\\ &=&d_2 \bigcup_{i'=0}^{q_{2}-1}\bigcup_{j'=0}^{\gamma_2-1}\left(\frac{i'+q_{2}j'}{c}+\Gamma_{i',j'}\right) \end{eqnarray} where $\Gamma_{i',j'}=\mathbb{Z}\cap\left(\frac{\Gamma}{d_2}-\frac{i'+q_2 j'}{c}\right)$. Since $0\in\Gamma_{0,0}\neq \emptyset$, by \eqref{5.4.0}, we have $\Gamma_{0,j'}\neq \emptyset$ for all $0\leq j'\leq\gamma_2-1$. Choose an integer $z'_0\in \Gamma_{0,1}$, then there exists $ (i,j_i)\in\{0,1,\dots,q_1-1\}\times \{0,1,\dots,\gamma_1-1\} $ and an integer $z_0\in \Lambda_{i,j_i}$ such that $$ \frac{l_2}{t_2}(\frac{1}{\gamma_2}+z'_0)=\frac{i+q_1 j_i}{c}+z_0. $$ This yields that $i=j_i=0$ since $\frac{l_2}{\gamma_2}\not \equiv t_{2}\frac{i+q_1j_i}{c}\pmod 1$ for any $1\leq i\leq q_1-1$. Hence $\frac{l_2}{\gamma_2}+l_2 z'_0 =t_2 z_0$, showing that $\gamma_{2}\mid l_{2}$. (\romannumeral2) If $\gamma_1\nmid t_2$, $\gcd(l_2,t_2)=1$, and $d_3=l_3$ is an integer satisfying $t_{2}\mid l_{3}$, We set $j_0=0$, and for $ i\ne 0 $ we set $ j_i=0 $ if $ \frac{t_2 i}{c}\notin \mathbb{Z} $, $ j_i=1 $ if $ \frac{t_2 i}{c}\in \mathbb{Z} $. Hence $ 0\in\Lambda_{0,0}\subset\Gamma $, $ \Gamma $ is a spectrum of $\omega_{>1}$. Since $\gamma_1\nmid t_2$, one can check that \begin{equation*} \left(\frac{i+q_1j_i}{c}\right)t_2=\frac{t_2 i}{c}+\frac{t_2 }{\gamma_1}j_i\notin \mathbb{Z} \end{equation*} for any $i\neq 0$. By using \eqref{eq-multi-stage deco} and the fact that $ \Lambda_{i,j_i},\Gamma_{i',j'}\subset \mathbb{Z} $ for any $ i,i',j' $, we have $l_2\Gamma_{0,0}\subset t_2\Lambda_{0,0}.$ It follows from $\gcd(l_2,t_2)=1$ that $\Gamma_{0,0}\subset t_2 \mathbb{Z}$. 
For any $0\leq \iota \leq q_2-1$, choose $0\leq \tau_{\iota}\leq \gamma_2-1$ and let \begin{equation*} \Lambda^*=\bigcup_{\iota=0}^{q_2-1}(\frac{\iota+q_2\tau_{\iota}}{c}+\Gamma_{\iota,\tau_{\iota}}) \end{equation*} where $\tau_{\iota}=0$ for $\iota=0$. Then $\Lambda^*$ is a spectrum of the measure $\omega_{>2}$ by Remark \ref{Remark5.2}. Using multi-stage decomposition again, we have \begin{eqnarray*} \Lambda^*&=&\bigcup_{\iota=0}^{q_2-1}(\frac{\iota+q_2\tau_{\iota}}{c}+\Gamma_{\iota,\tau_{\iota}})\\ &=&d_3 \bigcup_{s=0}^{q_{3}-1}\bigcup_{t=0}^{\gamma_3-1}(\frac{s+q_{3}t}{c}+\Lambda^*_{s,t})\\ &=&l_3 \bigcup_{s=0}^{q_{3}-1}\bigcup_{t=0}^{\gamma_3-1}(\frac{s}{c}+\frac{t}{\gamma_3}+\Lambda^*_{s,t}) \end{eqnarray*} where $\Lambda^*_{s,t}=\mathbb{Z}\cap\left(\frac{\Lambda^*}{d_3}-\frac{s+q_3 t}{c}\right)$. That $d_3=l_{3}$ implies that $t_3=1$ and $\gamma_{2}\nmid t_{3}$, hence $\gamma_{3}\mid l_{3}$ by (i), it follows that $$ l_3\left(\frac{t}{\gamma_3}+\Lambda^*_{0,t}\right)\subset\mathbb{Z} $$ for any $0\leq t\leq \gamma_3-1$. It can be seen that \begin{equation*} l_3\left(\frac{t}{\gamma_3}+\Lambda^*_{0,t}\right)\subset \Gamma_{0,0}\subset t_2\mathbb{Z}. \end{equation*} Since $t_2\mid l_3$, let $l_3=l'_3t_2$, then $l'_3(\frac{t}{\gamma_3}+\Lambda^*_{0,t})\subset\mathbb{Z}$ for any $0\leq t\leq \gamma_3-1$. Consequently, we have $\gamma_3\mid l'_3$ and $t_2\gamma_3\mid l_3$, completing the proof. \end{proof} \begin{Prop}\label{5.1} Suppose that $ 2\mid p_k $ for all $ k\geq1 $. If $ \nu $ in \eqref{eq-infinite convolution-equivalent} is a spectral measure, then $ 2\mid b_{k+1} $ for all $ k\geq 1 $. \end{Prop} \begin{proof} Since $ 2\mid p_k , k\ge 1$, we get \begin{equation*} D_{p_k}=D_2\oplus D_{\frac{p_k}{2}}. \end{equation*} The measure $ \nu $ can be expressed as \begin{eqnarray*} \nu&=&\delta_{b_1^{-1}D_2}*\delta_{b_1^{-1}D_{\frac{p_1}{2}}}* \delta_{{b_1^{-1}(p_1-\frac{1+b_{2}^{-1}}{2})D_2}}*\cdots*\delta_{(b_1\cdots b_k)^{-1}D_2}*\delta_{(b_1\cdots b_k)^{-1}D_{\frac{p_k}{2}}}*\\ &&\delta_{(b_1\cdots b_k)^{-1}(p_k-\frac{1+b_{k+1}^{-1}}{2})D_2}*\delta_{(b_1\cdots b_kb_{k+1})^{-1}D_2}*\delta_{(b_1\cdots b_kb_{k+1})^{-1}D_{\frac{p_{k+1}}{2}}}*\cdots. \end{eqnarray*} If $ 2\mid b_{k+1} $ does not hold for all $ k\geq 1 $, that is, there exists $ n\geq 1 $ such that $ b_{n+1}\in 2\mathbb{Z}+1 $. Then we have $$\mathcal{Z}\left(\widehat{\delta}_{(b_1\cdots b_{n+1})^{-1}D_2}\right)=\frac{b_1\cdots b_{n+1}}{2} (2\mathbb{Z}+1) \subset \frac{b_1\cdots b_n}{2} (2\mathbb{Z}+1)=\mathcal{Z}\left(\widehat{\delta}_{(b_1\cdots b_n)^{-1}D_2}\right).$$ Writing $ \nu=\delta_{(b_1\cdots b_{n+1})^{-1}D_2}*\nu' $. Since $ \mathcal{Z}\left(\widehat{\delta}_{(b_1\cdots b_{n+1})^{-1}D_2}\right)\subset \mathcal{Z}(\widehat{\nu'}) $, if $ \Lambda $ is a spectrum of $ \nu $, then $ \Lambda $ is a bi-zero set of $ \nu' $. But $ \Lambda $ can not be a spectrum of $ \nu $ by Lemma \ref{Dai}. That is a contradiction. 
\end{proof} For the measure $ \nu $ as in \eqref{eq-form of measure nu}, we may change the digit sets in each level into the consecutive digit sets by adjusting the contraction ratios as follows: \begin{eqnarray*} \nu &=&\delta_{b_1^{-1}D_{p_1}}*\delta_{b_1^{-1} b_2^{-1}D_{p_2}}*\delta_{b_1^{-1} b_2^{-1} 2^{-1}(b_2(2p_1-1)-1)D_2}*\delta_{b_1^{-1} b_2^{-1} 2^{-1} {(\frac{b_3}{2})}^{-1}D_{p_3}}*\\ &&\delta_{b_1^{-1}\cdot b_2^{-1} 2^{-1}\cdot {(\frac{b_3}{2})}^{-1} 2^{-1}(b_3(2p_2-1)-1)D_2}*\cdots\\ &=&\delta_{b_1^{-1}D_{p_1}}*\delta_{b_1^{-1} b_2^{-1}D_{p_2}}*\delta_{b_1^{-1} b_2^{-1} (\frac{2}{b_2(2p_1-1)-1})^{-1}D_2}*\delta_{b_1^{-1} b_2^{-1} (\frac{2}{b_2(2p_1-1)-1})^{-1} {(\frac{b_3(b_2(2p_1-1)-1)}{2})}^{-1}D_{p_3}}*\\ &&\delta_{b_1^{-1} b_2^{-1} (\frac{2}{b_2(2p_1-1)-1})^{-1} {(\frac{b_3(b_2(2p_1-1)-1)}{2})}^{-1} (\frac{2}{b_3(2p_2-1)-1})^{-1}D_2}*\cdots. \end{eqnarray*} Let \begin{eqnarray}\label{b''k} \begin{aligned} & b''_1=b_1,\quad b''_{2k+1}=\frac{2}{b_{k+1}(2p_k-1)-1}, \\ & b''_2=b_2, \quad b''_{2k+2}=\frac{b_{k+2}(b_{k+1}(2p_k-1)-1)}{2}, \end{aligned} \end{eqnarray} and \begin{equation}\label{p''k} p''_1=p_1,\quad p''_2=p_2,\quad p''_{2k+1}=2,\quad p''_{2k+2}=p_{k+2}. \end{equation} Then $\nu$ can be rewritten as \begin{equation}\label{nu} \nu =\delta_{{b''_1}^{-1}D_{p''_1}}*\delta_{{b''_1}^{-1}{b''_2}^{-1}D_{p''_2}}*\delta_{{b''_1}^{-1}{b''_2}^{-1}{b''_3}^{-1}D_{p''_3}}*\cdots. \end{equation} Accordingly, let \begin{equation}\label{eq-omega>k} \omega_{>k} =\delta_{{b''_{k+1}}^{-1}D_{p''_{k+1}}}*\delta_{{b''_{k+1}}^{-1}{b''_{k+2}}^{-1}D_{p''_{k+2}}}*\delta_{{b''_{k+1}}^{-1}{b''_{k+2}}^{-1}{b''_{k+3}}^{-1}D_{p''_{k+3}}}*\cdots. \end{equation} \begin{theorem}\label{necessity} Let $ 2\mid p_k $ for all $ k\geq1 $ and $ \{b_k\}_{k=1}^{\infty} $ be bounded. If $ \nu $ in \eqref{nu} is a spectral measure, then $ p_2\mid b_2 $ and $ 2p_k\mid b_k $ for $ k\geq 3 $. \end{theorem} \begin{proof} Propsition \ref{5.1} implies that $ 2\mid b_k $ for all $ k\geq2 $. By \eqref{b''k}, \eqref{p''k} and Propsition \ref{5.3New}(i), we have $ p_2\mid b_2 $. From Lemma \ref{5.2}, it follows that $\omega_{>1}$ as in \eqref{eq-omega>k} is a spectral measure where \begin{eqnarray*} \omega_{>1}&=&\delta_{{b''_{2}}^{-1}D_{p''_{2}}}*\delta_{{b''_{2}}^{-1}{b''_{3}}^{-1}D_{p''_{3}}}*\delta_{{b''_{2}}^{-1}{b''_{3}}^{-1}{b''_{4}}^{-1}D_{p''_{4}}}*\cdots \\ &=&\delta_{ b_2^{-1}D_{p_2}}*\delta_{b_2^{-1} (\frac{2}{b_{2}(2p_1-1)-1})^{-1}D_2}*\delta_{b_2^{-1} (\frac{2}{b_{2}(2p_1-1)-1})^{-1} {(\frac{b_{3}(b_{2}(2p_1-1)-1)}{2})}^{-1}D_{p_{3}}}*\cdots. \end{eqnarray*} Since $p_2$ is even and $(b_{2}(2p_1-1)-1)$ is odd, we get $ p_2\nmid (b_{2}(2p_1-1)-1) $ and $ \gcd(2,b_{2}(2p_1-1)-1)=1 $. Moreover, $ b_{2}(2p_1-1)-1\mid \frac{b_{3}(b_{2}(2p_1-1)-1)}{2}$ (as $2\mid b_3$). Therefore, Propsition \ref{5.3New}(ii) yields that $ p_3(b_{2}(2p_1-1)-1)\mid \frac{b_{3}(b_{2}(2p_1-1)-1)}{2} $, and hence $ p_3\mid \frac{b_{3}}{2} $. Repeating the above argument, we conclude that $ p_k \mid \frac{b_k}{2} $ for all $ k \geq 3 $, completing the proof. \end{proof} \bigskip \begin{proof}[Proof of Theorem \ref{thm-main3}] The theorem directly follows from Proposition \ref{equivalent}, Theorem \ref{thm3- spectral measure} and Theorem \ref{necessity}. \end{proof} \begin{thebibliography}{99} \bibitem{An-Fu-Lai2019} L.X. An, X. Fu and C.K. Lai, \emph{On spectral Cantor-Moran measures and a variant of Bourgain's sum of sine problem}, Adv. Math. 349 (2019) 84-124. \bibitem{An-He2014} L.X. An and X.G. 
He, \emph{A class of spectral Moran measures}, J. Funct. Anal. 266 (1) (2014) 343-354. \bibitem{An-He-Lau2015} L.X. An, X.G. He and K.S. Lau, \emph{Spectrality of a class of infinite convolutions}, Adv. Math. 283 (2015) 362-376. \bibitem{An-He-Li2015} L.X. An, X.G. He and H.X. Li, \emph{Spectrality of infinite Bernoulli convolutions}, J. Funct. Anal. 269 (2015) 1571-1590. \bibitem{An-Li-Zhang2022} L.X. An, Q. Li and M.M. Zhang, \emph{The generalized Fuglede's conjecture holds for a class of Cantor-Moran measures}, Pacific J. Math. 334 (2) (2025) 189-209. L.X. An, Q. Li and M.M. Zhang, \emph{Characterization of spectral Cantor-Moran measures with consecutive digits}, preprint (2022). \bibitem{BP-2017} C. Bishop and Y. Peres, \emph{Fractals in probability and analysis}, Cambridge studies in advanced mathematics, (2017). \bibitem{Dai2012} X.R. Dai, \emph{When does a Bernoulli convolution admit a spectrum?}, Adv. Math. 231 (3-4) (2012) 1681-1693. \bibitem{Dai-He-Lau2014} X.R. Dai, X.G. He and K.S. Lau, \emph{On spectral $ N $-Bernoulli measures}, Adv. Math. 259 (2014) 511-531. \bibitem{Deng2022} Q.R. Deng and M.T. Li, \emph{Spectrality of Moran-type self-similar measures on $\mathbb{R}$}, J. Math. Anal. Appl. 506 (1) (2022) 125547. \bibitem{Deng2023} Q.R. Deng and M.T. Li, \emph{Spectrality of Moran-type Bernoulli convolutions}, Bull. Malays. Math. Sci. Soc. 46 (4) (2023) 136. \bibitem{Dutkay-Han-Lai2019} D. Dutkay, J. Haussermann and C.K. Lai, \emph{Hadamard triples generate self-affine spectral measures}, Trans. Amer. Math. Soc. 371 (2) (2019) 1439-1481. \bibitem{Fuglede1974} B. Fuglede, \emph{Commuting self-adjoint partial differential operators and a group theoretic problem}, J. Funct. Anal. 16 (1974) 101-121. \bibitem{He-He2017} L. He and X.G. He, \emph{On the Fourier orthonormal bases of Cantor–Moran measures}, J. Funct. Anal. 272 (5) (2017) 1980-2004. \bibitem{Hu-Lau2008} T.Y. Hu and K.S. Lau, \emph{Spectral property of the Bernoulli convolutions}, Adv. Math. 219 (2) (2008) 554-567. \bibitem{Hua-Rao-Wen-Wu2000} S. Hua, H. Rao, Z. Wen and J. Wu, \emph{On the structures and dimensions of Moran sets}, Sci. China Ser. A-Math. 43 (2000) 836-852. \bibitem{Jorgensen-Pedersen1998} P. Jorgensen and S. Pedersen, \emph{Dense analytic subspaces in fractal $ L^2 $-spaces}, J. Anal. Math. 75 (1998) 185-228. \bibitem{Kolountzakis-Matolcsi2006-1} M.N. Kolountzakis and M. Matolcsi, \emph{Tiles with no spectra}, Forum Math. 18 (3) (2006) 519-528. \bibitem{Kolountzakis-Matolcsi2006-2} M.N. Kolountzakis and M. Matolcsi, \emph{Complex Hadamard matrices and the spectral set conjecture}, Collect. Math. Extra (2006) 281-291. \bibitem{Laba-Wang2002} I. \L aba and Y. Wang, \emph{On spectral Cantor measures}, J. Funct. Anal. 193 (2) (2002) 409-420. \bibitem{LeMa_2022} N. Lev and M. Matolcsi, \emph{The Fuglede conjecture for convex domains is true in all dimensions}, Acta Math. 228 (2) (2022) 385-420. \bibitem{Li-Miao-Wang2022} W.X. Li, J.J. Miao and Z.Q. Wang, \emph{Weak convergence and spectrality of infinite convolutions}, Adv. Math. 404 (2022) 108425. \bibitem{Li-Miao-Wang2024-1} W.X. Li, J.J. Miao and Z.Q. Wang, \emph{Spectrality of random convolutions generated by finitely many Hadamard triples}, Nonlinearity 37 (1) (2024) 015003. \bibitem{Li-Miao-Wang2024-2} W.X. Li, J.J. Miao and Z.Q. Wang, \emph{Spectrality of infinite convolutions and random convolutions.}, J. Funct. Anal. 287 (7) (2024) 110539. \bibitem{Liu-Lu-Zhou2023} J. Liu, Z.Y. Lu and T. 
Zhou, \emph{Spectrality of Moran-Sierpinski type measures}, J. Funct. Anal. 284 (6) (2023) 109820. \bibitem{Liu-Liu-Luo2024} J.C. Liu, Q.Q. Liu and J.J. Luo and J.J. Wang, \emph{Spectrality of a class of Moran measures on $\mathbb{R}^2$}, preprint, (2024). \bibitem{Liu2024} Z.S. Liu, \emph{Spectrality of homogeneous Moran measures on the plane}, Chaos Solitons Fract. 183 (2024) 114926. \bibitem{Matolcsi2005} M. Matolcsi, \emph{Fuglede’s conjecture fails in dimension $ 4 $}, Proc. Amer. Math. Soc. 133 (10) (2005) 3021-3026. \bibitem{Shi2019} R. Shi, \emph{Spectrality of a class of Cantor–Moran measures}, J. Funct. Anal. 276 (12) (2019) 3767-3794. \bibitem{Strichartz2000} R.S. Strichartz, \emph{Mock Fourier series and transforms associated with certain Cantor measures}, J. Anal. Math. 81 (2000) 209-238. \bibitem{Tang-Yin2018} M.W. Tang and F.L. Yin, \emph{Spectrality of Moran measures with four-element digit sets}, J. Math. Anal. Appl. 461 (1) (2018) 354-363. \bibitem{Tao2004} T. Tao, \emph{Fuglede’s conjecture is false in $ 5 $ and higher dimensions}, Math. Res. Let. 11 (2-3) (2004) 251-258. \bibitem{Wu2024} H.H. Wu, \emph{Spectral self-similar measures with alternate contraction ratios and consecutive digits}, Adv. Math. 443 (2024) 109585. \bibitem{Wu-Xiao2024} S. Wu and Y.Q. Xiao, \emph{Spectrality of a class of infinite convolutions on $\mathbb{R}$}, Nonlinearity, 37 (5) (2024) 055015. \end{thebibliography} \end{document}
2412.13557v1
http://arxiv.org/abs/2412.13557v1
The $L_p$ Gauss dual Minkowski problem
\documentclass[12pt]{amsart} \usepackage{amsmath,amsfonts,amsbsy,amsgen,amscd,mathrsfs,amssymb,amsthm} \usepackage[usenames,dvipsnames]{xcolor} \usepackage[colorlinks=true,citecolor=red,linkcolor=blue]{hyperref} \usepackage{geometry} \geometry{a4paper,left=3cm,right=3cm,top=2.5cm,bottom=2.5cm} \usepackage{setspace} \setstretch{1.0} \renewcommand{\baselinestretch}{1.1} \allowdisplaybreaks[4] \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem{problem}[theorem]{Problem} \newtheorem{conjecture}[theorem]{Conjecture} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \title[$L_p$ Gauss dual Minkowski problem] {The $L_p$ Gauss dual Minkowski problem} \author{Bin Chen} \address{Bin Chen } \curraddr{} \email{[email protected]} \thanks{} \author{Chao Li} \address{Chao Li } \curraddr{} \email{[email protected]} \author{Weidong Wang} \address{Weidong Wang } \email{[email protected]} \subjclass[2020]{52A20; 52A40; 35K96} \keywords{$L_p$-Gauss dual curvature measure; $L_p$-Gauss dual Minkowski problem; Monge-Amp\`{e}re equation; Gauss curvature flow} \begin{document} \begin{abstract} This article introduces the $L_p$-Gauss dual curvature measure and proposes its related $L_p$-Gauss dual Minkowski problem as: for $p,q\in\mathbb{R}$, under what necessary and/or sufficient condition on a non-zero finite Borel measure $\mu$ on unit sphere does there exist a convex body $K$ such that $\mu$ is the $L_p$ Gauss dual curvature measure? If $K$ exists, to what extent is it unique? This problem amounts to solving a class of Monge-Amp\`{e}re type equations on unit sphere in smooth case: \begin{align}\label{0.1} e^{-\frac{|\nabla h_K|^2+h_K^2}{2}}h_K^{1-p} (|\nabla h_K|^2+h_K^2)^{\frac{q-n}{2}} \det(\nabla^2h_K+h_KI)=f, \end{align} where $f$ is a given positive smooth function on unit sphere, $h_k$ is the support function of convex body $K$, $\nabla h_K$ and $\nabla^2h_K$ are the gradient and Hessian of $h_K$ on unit sphere with respect to an orthonormal basis, and $I$ is the identity matrix. We confirm the existence of solution to the new problem with $p,q>0$ and the existence of smooth solution to the equation (\ref{0.1}) with $p ,q\in\mathbb{R}$ by variational method and Gaussian curvature flow method, respectively. Furthermore, the uniqueness of solution to the equation (\ref{0.1}) in the case $p,q\in\mathbb{R}$ with $q<p$ is established. \end{abstract} \maketitle \vskip 20pt \section{Introduction } The Brunn-Minkowski type theories (such as classical Brunn-Minkowski theory, $L_p$-Brunn-Minkowski theory, Orlicz Brunn-Minkowski theory and their dual theories) of convex bodies (compact convex sets with nonempty interiors) in $n$-dimensional Euclidean spaces $\mathbb{R}^n$ play an important role in the study of convex geometric analysis. Geometric invariants, Minkowski sum and geometric measures associated with convex bodies are the core of the Brunn-Minkowski type theories. Geometric invariants can be viewed as geometric functionals of convex bodies and geometric measures are the differential of geometric functionals. It is well known that Minkowski-type problems related to geometric measures (including surface area measures, curvature measures and dual curvature measures, etc.) 
are the cornerstone of Brunn-Minkowski theories, see e.g., \cite{BB,BH,BoL,CFL,CHZ,CHZ1,CW,G,GHW,GaHW,HaLYZ,HLYZ, HuLY,HLY,JW,L,Lu,LYZ,S,Z} and the references therein. In the last decade, the Minkowski type problems for the measures associated with solution to the boundary-value problems are doubtless extremely important variant, as some typical examples, we can refer to some representative papers \cite{CF,CNS,FZH,HYZ,J,ZX}. Moreover, a particularly interesting Minkowski type problem, the Gauss Minkowski problem, was first proposed by Huang, Xi and Zhao \cite{HXZ}, and more research results on this problem can be found in \cite{BLYZ,FHX,FLX,HQ,Liu,W,WW}. Very recently, Feng, Li and Xu \cite{F} devepoped a theory analogous to the one for the Minkowski problem in which geometric measure are generated by the differential of {\it Gauss dual quermassintegral}. Let $K\in\mathcal{K}_o^n$ be the set of convex bodies containing the origin in their interior. For $q\in\mathbb{R}$ and $K\in\mathcal{K}_o^n$, the Gauss dual quermassintegral, denoted by $\widetilde{V}_{\gamma,q}(K)$, of $K$ is defined by \begin{align}\label{1.1} \widetilde{V}_{\gamma,q}(K)=\int_{\mathbb{R}^n\backslash K} e^{-\frac{|x|^2}{2}}|x|^{q-n}dx. \end{align} The Gauss dual curvature measure, denoted by $\widetilde{C}_{\gamma,q}(K,\cdot)$, of $K\in\mathcal{K}_o^n$ is uniquely determined by the following variational formula (see \cite[Thorem 3.3]{F}) $$\lim_{t\rightarrow0}\frac{\widetilde{V}_{\gamma,q}([h_t]) -\widetilde{V}_{\gamma,q}(K)}{t} =-\int_{\mathbb{S}^{n-1}}f(v) d\widetilde{C}_{\gamma,q}(K,v),$$ where $h_t=e^{tf}h_K$ and a continuous function $f: \mathbb{S}^{n-1}\rightarrow\mathbb{R}$, and $[h_t]$ is the Wulff shape generated by $h_t$ (see Section \ref{S2} for details). For each Borel measurable set $\omega\subset\mathbb{S}^{n-1}$, the Gauss dual curvature measure $\widetilde{C}_{\gamma,q}(K,\cdot)$ is defined by $$\widetilde{C}_{\gamma,q}(K,\omega)= \int_{\nu_K^{-1}}x\cdot\nu_K(x)e^{-\frac{|x|^2}{2}} |x|^{q-n}d\mathcal{H}^{n-1}(x).$$ where $\cdot$ represents the inner product in $\mathbb{R}^n$. The {\bf Gauss dual Minkowski problem} was posed: given a non-zero finite Borel measure $\mu$ on $\mathbb{S}^{n-1}$ and $q\in\mathbb{R}$, what are necessary and sufficient conditions on $\mu$ to guarantee the existence of a convex body $K\in\mathcal{K}_o^n$ such that $$\mu=\widetilde{C}_{\gamma,q}(K,\cdot)?$$ If $K$ exists, is it unique? In \cite{F}, the existence of smooth solutions to Gauss dual Minkowski problem for $q<0$ was obtained by continuity method. Then, an approximation argument was used to give the weak solution to this problem. Moreover, the existence of smooth for $q=0$ was established by a degree-theory approach, and the uniqueness of smooth solution for $q\leq0$ was also established. The main purpose of this paper is to construct $L_p$-Gauss dual curvature measure based on the research in \cite{F}. An $L_p$ version of the variational formula similar to \cite[Theorem 6.2]{LYZ} can be shown: for $p\neq0, q\in\mathbb{R}$, $K\in\mathcal{K}_o^n$ and a continuous function $f :\mathbb{S}^{n-1}\rightarrow\mathbb{R}$, then $$\lim_{t\rightarrow0}\frac{\widetilde{V}_{\gamma,q}([h_t]) -\widetilde{V}_{\gamma,q}(K)}{t} =-\frac{1}{p}\int_{\mathbb{S}^{n-1}}f(v)^p d\widetilde{C}_{\gamma,p,q}(K,v),$$ where $h_t(v)=(h_K(v)^p+tf(v)^p)^{\frac{1}{p}}$. 
This naturally leads to the definition of $L_p$ Gauss dual curvature measure: $$\widetilde{C}_{\gamma,p,q}(K,\omega) =\int_{\nu_K^{-1}(\omega)}(x\cdot\nu_K(x))^{1-p} e^{-\frac{|x|^2}{2}}|x|^{q-n}d\mathcal{H}^{n-1}(x),$$ for Borel measurable set $\omega\subset\mathbb{S}^{n-1}$. Obviously, the $L_p$-Gauss surface area measure and the Gauss dual curvature measure are special cases of the $L_p$-Gauss dual curvature measure in the sense that for $p, q\in\mathbb{R}$ and $K\in\mathcal{K}_o^n$, $$\widetilde{C}_{\gamma,0,q}(K,\cdot) =\widetilde{C}_{\gamma,q}(K,\cdot), \ \ \widetilde{C}_{\gamma,p,n}(K,\cdot) =S_{\gamma,p}(K,\cdot). $$ Naturally, the $L_p$-Gauss dual Minkowski problem is posed as follows: \begin{problem}\label{p1.1} (The $L_p$-Gauss dual Minkowski problem) For $p,q\in\mathbb{R}$, under what necessary and/or sufficient condition on a non-zero finite Borel measure $\mu$ on $\mathbb{S}^{n-1}$ does there exist a set $K\in\mathcal{K}_o^n$ such that \begin{align*} \mu=\widetilde{C}_{\gamma,p,q}(K,\cdot)? \end{align*} If $K$ exists, to what extent is it unique$?$ \end{problem} When $K$ is sufficiently smooth and the given measure $\mu$ has a density $f: \mathbb{S}^{n-1}\rightarrow \mathbb{R}$, the $L_p$-Gauss dual Minkowski problem is equivalent to solving the following Monge-Amp\`{e}re type equation on $\mathbb{S}^{n-1}$: \begin{align}\label{1.2} e^{-\frac{|\nabla h_K|^2+h_K^2}{2}}h_K^{1-p} (|\nabla h_K|^2+h_K^2)^{\frac{q-n}{2}} \det(\nabla^2h_K+h_KI)=f, \end{align} where $\nabla h_K$ and $\nabla^2h_K$ are the gradient and Hessian of $h_K$ on $\mathbb{S}^{n-1}$ with respect to an orthonormal basis, and $I$ is the identity matrix. The first objective is to consider the existence of symmetric solution to Problem \ref{p1.1} by the variational method (see Theorem \ref{t4.3} for details). \begin{theorem}\label{t1.2} For $p>0$ and $q>0$. Let $\mu$ be a non-zero finite even Borel measure on $\mathbb{S}^{n-1}$ and not concentrated in any closed hemisphere. Then there exists an origin-symmetric convex body $K$ such that \begin{align*} \mu=\widetilde{C}_{\gamma,p,q}(K,\cdot). \end{align*} \end{theorem} Next, using the Gauss curvature flow method, the existence of smooth solution to the equation (\ref{1.2}) is obtained as below (see Theorem \ref{t5.5} for details). \begin{theorem}\label{t1.3} Let {$p, q\in\mathbb{R}$}, and $f: \mathbb{S}^{n-1}\rightarrow(0,\infty)$ be a smooth, positive function satisfying \begin{align}\label{1.3} \limsup_{s\rightarrow+\infty} (s^{q-p}e^{-\frac{s^2}{2}})<f<\liminf_{s\rightarrow0^+} (s^{q-p}e^{-\frac{s^2}{2}}). \end{align} Then there exists a smooth solution $h_K$ to the equation (\ref{1.2}). \end{theorem} The Gauss curvature flow was first introduced and studied by Firey \cite{Fi} to model the shape change of worn stones. Since then, many scholars have found that using curvature flow to study the hypersurfaces is a very effective tool, such as solving the Minkowski-type problems, see e.g., \cite{BIS,CC,CH,CL,CWX,LSW,LL} and the references therein. Finally, we establish the uniqueness of the solution to the equation (\ref{1.2}) as follows (see Theorem \ref{t6.1} for details). \begin{theorem}\label{t1.} Let $p, q\in\mathbb{R}$ with $q<p$, and $f: \mathbb{S}^{n-1}\rightarrow(0,\infty)$ be a smooth, positive function on $\mathbb{S}^{n-1}$. Then the solution to the equation (\ref{1.2}) is unique. \end{theorem} This paper is organized as follows. In Section \ref{S2}, we collect some basic facts on convex bodies and convex hypersurfaces. 
In Section \ref{S3}, we establish a $L_p$-variational formula of Gauss dual quermassintegral, and introduce the $L_p$-Gauss dual curvature measure and $L_p$-Gauss dual Minkowski problem. In Section \ref{S4}, the existence of weak solution to the $L_p$-Gauss dual Minkowski problem will be discussed by variational method. In Section \ref{S5}, the existence of smooth solution to the $L_p$-Gauss dual Minkowski problem will be obtained by curvature flow method. Finally, the uniqueness of the solution of $L_p$-Gauss dual Minkowski problem will be established in Section \ref{S6}. \section{Preliminaries}\label{S2} In this section we list some facts about convex bodies and convex hypersurfaces that readers can refer to a celebrated books of Gardner and Schneider \cite{G,S} and papers \cite{HLYZ,U}. For a convex body $K\in\mathcal{K}_o^n$, the support function, $h(K,\cdot): \mathbb{S}^{n-1}\rightarrow\mathbb{R}$, of $K$ is defined by $$h_K(u)=h(K,u)=\max\{u\cdot Y: Y\in K\}, \ u\in\mathbb{S}^{n-1}.$$ We clearly know that the support function is a continuous and homogeneous convex function with degree $1$. The Minkowski sum of $K, L\in\mathcal{K}_o^n$ is defined, for $t>0$, by $$K+tL=\{x+ty: x\in K, \ y\in L\}.$$ The set $K+tL$ is a convex body whose support function is given by $$h(K+tL,\cdot)=h(K,\cdot)+th(L,\cdot).$$ For $K, L\in\mathcal{K}_o^n$ and $p\geq1$, the $L_p$ Minkowski sum, $K+_pt\cdot L$, of $K$ and $L$ is defined through its support function $$h(K+_pt\cdot L,\cdot)^p=h(K,\cdot)^p+th(L,\cdot)^p.$$ In particular, when $p=1$, the $L_p$ Minkowski sum is just classical Minkowski sum. For each $K\in\mathcal{K}_o^n$, the radial function $\rho(K,\cdot)$ is defined by $$\rho_K(x)=\rho(K,x)=\max\{\lambda: \lambda x\in K\},\ x\in\mathbb{R}^n\backslash\{0\}.$$ Obviously, the radial function is a continuous function and homogeneous of degree $-1$. Moreover $\partial K=\{\rho_K(v)v: v\in\mathbb{S}^{n-1}\}$. For each $K\in\mathcal{K}_o^n$, the polar body $K^*$ of $K$ is the convex body defined by $$K^*=\{x\in\mathbb{R}^n: x\cdot y\leq1, \ for\ all\ y\in K\}.$$ It is easy to verify that \begin{align}\label{2.1} (K^*)^*=K. \end{align} The support function and radial function of $K\in\mathcal{K}_o^n$ and its polar body are related by \begin{align}\label{2.2} h_K(x)\rho_{K^*}(x)=1 \ \ and \ \ \rho_{K}(x)h_{K^*}(x)=1, \end{align} for $x\in\mathbb{R}^n\backslash\{0\}$. Let Let $\mathbb{C}(\mathbb{S}^{n-1})$ be the set of all continuous functions on $\mathbb{S}^{n-1}$, $\mathbb{C}^+(\mathbb{S}^{n-1})$ be the set of all strictly positive continuous functions on $\mathbb{S}^{n-1}$ and $\mathbb{C}^+_e(\mathbb{S}^{n-1})$ be the subset of even functions from $\mathbb{C}^+(\mathbb{S}^{n-1})$. For each $f\in C^+(\mathbb{S}^{n-1})$, the Wulff shape $[f]$ generated by $f$ is the convex body defined by $$[f]=\{x\in\mathbb{R}^n: x\cdot v\leq f(v),\ for \ all \ v\in\mathbb{S}^{n-1}\}.$$ It is apparent that $h_{[f]}\leq f$ and $[h_K]=K$ for $K\in\mathcal{K}_o^n$. The convex hull $\langle f\rangle$ generated by $f$ is defined by $$\langle f\rangle=conv\{f(u)u: u\in\mathbb{S}^{n-1}\}\in\mathcal{K}_o^n.$$ If $K\in\mathcal{K}_o^n$, then $\langle \rho_K\rangle=K$. Moreover, \begin{align}\label{2.3} [f]^*=\langle\frac{1}{f}\rangle. \end{align} The boundary point of convex body $K$ which only has one supporting hyperplane is called the singular point. The set of singular points is denoted as $\sigma K$, it is well known that $\sigma K$ has spherical Lebesgue measure 0. 
For $x\in\partial K\setminus \sigma K$, the Gauss map $\nu_K:x\in\partial K\setminus \sigma K\rightarrow \mathbb{S}^{n-1}$ is represented by \begin{align*} \nu_K(x)=\{v\in\mathbb{S}^{n-1}:x\cdot v=h_K(v)\}. \end{align*} Correspondingly, for a Borel set $\eta\subset\mathbb{S}^{n-1}$, the inverse Gauss map is denoted by $\nu_K^{-1}$, \begin{align*} \nu_K^{-1}(\eta)=\{x\in\partial K: \nu_K(x)\in\eta\}. \end{align*} Note that $h_K(x)$ is differentiable at $x\in\mathbb{R}^n$ if and only if $\partial h_K(x)$ consists of exactly one vector which is the gradient, denoted by $Dh_K(x)$, of $h_K$ at $x$. Let $h_K$ be differentiable at $v\in\mathbb{S}^{n-1}$ for a convex body $K$, where $v=\nu_K(x)$ is an outer unit normal vector at $x\in\partial K$. Then $$x=\nu_K^{-1}(v)=Dh_K(v).$$ From this, we have \begin{align}\label{2.5} &\nonumber h_K(v)=h_K(\nu_K(x))=\langle x,\nu_K(x)\rangle=\langle Dh_K(v),v\rangle,\\ &Dh_K(v)=\nabla h_K(v)+h_K(v)v,\\ &\nonumber |x|^2=|Dh_K(v)|^2=|\nabla h_K(v)|^2+h_K(v)^2. \end{align} For a convex hypersurface $\Omega$ of class $C^2_+$ ($\partial\Omega$ is $C^2$ smooth and has positive Gauss curvature), then the support function of $\Omega$ can be stated as $$h(\Omega,x)=x\cdot\nu^{-1}_\Omega(x).$$ Let $e=\{e_{ij}\}$ be the standard metric of $\mathbb{S}^{n-1}$. The second fundamental form of $\Omega$ is defined as \begin{align}\label{2.5} \Pi_{ij}=\nabla_{ij}h+he_{ij}, \end{align} where $\nabla_{ij}$ is the second order covariant derivative with respect to $e_{ij}$. By the Weingarten's formula and (\ref{2.5}), the principal radii of $\Omega$, under a smooth local orthonormal frame on $\mathbb{S}^{n-1}$, are the eigenvalues of matrix \begin{align*} b_{ij}=\nabla_{ij}h+h\delta_{ij}. \end{align*} Particularly, the Gauss curvature of $\Omega$ can be expressed as \begin{align*} \mathcal{K}(x)=\frac{1}{\det(\nabla_{ij}h+h\delta_{ij})}. \end{align*} \section{$L_p$-Gaussian dual curvature measure and $L_p$ Gaussian dual Minkowski problem}\label{S3} In this section, we establish the $L_p$ variational formula of Gauss dual quermassintegral, and introduce the $L_p$ Gauss dual curvature measure and its $L_p$ Gauss dual Minkowski problem. \begin{lemma}\label{l3.1} Let $K\in\mathcal{K}_o^n$, $f: \mathbb{S}^{n-1}\rightarrow\mathbb{R}$ be a continuous function, and $\varepsilon>0$ be sufficiently small. 
Define $h_t\in\mathbb{C}^+(\omega)$, for $t\in(-\varepsilon,\varepsilon)$ and $v\in\mathbb{S}^{n-1}$, by $$h_t(v)=(h_K(v)^p+tf(v)^p)^{\frac{1}{p}}.$$ Then, for almost all $u\in\mathbb{S}^{n-1}$ with respect to the spherical Lebesgue measure, $$\lim_{t\rightarrow0}\frac{\rho_{[h_t]}(u) -\rho_K(u)}{t}=\frac{f(\alpha_K(u))^p} {ph_K(\alpha_K(u))^p}\rho_K(u).$$ Moreover, there exists $M_0>0$ and $\varepsilon_0>0$ such that $$|\rho_{[h_t]}(u)-\rho_K(u)|\leq M_0|t|,\ t\in(-\varepsilon_0,\varepsilon_0).$$ \end{lemma} \begin{proof} From \cite[Lemma 4.1]{HLYZ}, we have for $u\in\mathbb{S}^{n-1}$ and $t\in(-\varepsilon,\varepsilon)$ \begin{align*} &\frac{d}{dt}(\log h_{\langle\varrho_t\rangle}(u))|_{t=0} \lim_{t=0}\frac{h_{\langle\varrho_t\rangle}(u)- h_{\langle\varrho_0\rangle}(u)}{t}\\ &=\frac{1}{h_{\langle\varrho_t\rangle}(u)}\bigg|_{t=0} \lim_{t=0}\frac{h_{\langle\varrho_t\rangle}(u)- h_{\langle\varrho_0\rangle}(u)}{t} =g(\alpha^\ast_{\langle\varrho_0\rangle}(u)), \end{align*} i.e., \begin{align}\label{3.1} \lim_{t=0}\frac{h_{\langle\varrho_t\rangle}(u)- h_{\langle\varrho_0\rangle}(u)}{t} =h_{\langle\varrho_t\rangle}(u) g(\alpha^\ast_{\langle\varrho_0\rangle}(u)), \end{align} where $g: \mathbb{S}^{n-1}\rightarrow \mathbb{R}$ is continuous and $\varrho_t: \mathbb{S}^{n-1}\rightarrow(0,\infty)$ is defined by $$\log\varrho_t(u)=\log\varrho_0(u)+tg(u)+o(t,u).$$ By (\ref{2.2}) and (\ref{2.3}), one has \begin{align}\label{3.2} \nonumber&\lim_{t\rightarrow0}\frac{\rho_{[h_t]}(u) -\rho_K(u)}{t}=\lim_{t\rightarrow0} \frac{h_{[h_t]^*}^{-1}(u) -h_{K^*}^{-1}(u)}{t}\\ \nonumber&=-\lim_{t\rightarrow0}\frac{h_{ \langle\frac{1}{h_t}\rangle}(u) -h_{\langle\frac{1}{h_K}\rangle}(u)} {th_{\langle\frac{1}{h_t}\rangle}(u) h_{\langle\frac{1}{h_K}\rangle}(u)}\\ \nonumber&=-\frac{1}{h_{ \langle\frac{1}{h_K}\rangle}(u)^2} \lim_{t\rightarrow0}\frac{h_{\langle\frac{1}{h_t}\rangle}(u) -h_{\langle\frac{1}{h_K}\rangle}(u)}{t}\\ &=-\rho_K^2(u)\lim_{t\rightarrow0} \frac{h_{\langle\frac{1}{h_t}\rangle}(u) -h_{\langle\frac{1}{h_0}\rangle}(u)}{t}. \end{align} Let $\varrho_t=\frac{1}{h_t}$. Then $$\log h_t=\log h_K-tg-o(t).$$ Since $$h_t(v)=(h_K(v)^p+tf(v)^p)^{\frac{1}{p}},\ v\in\mathbb{S}^{n-1},$$ it follows that $$\log h_t=\log h_K+\frac{tf^p}{ph_K^p}+o(t).$$ Thus \begin{align*} g=-\frac{f^p}{ph_K^p}. \end{align*} By (\ref{2.1}), (\ref{3.1}), (\ref{3.2}) and \cite[Lemma 2.6]{HLYZ}, we have \begin{align*} \lim_{t\rightarrow0}\frac{\rho_{[h_t]}(u) -\rho_K(u)}{t}&=\frac{1}{p}\rho_K^2(u) h_{\langle\frac{1}{h_0}\rangle}(u) g(\alpha^\ast_{\langle\varrho_0\rangle}(u))\\ &=\frac{f(\alpha_K(u))^p}{ph_K(\alpha_K(u))^p}\rho_K(u). \end{align*} Since $[h_0]=K\in\mathcal{K}_o^n$ and $[h_t]\rightarrow K$ as $t\rightarrow0$ by Aleksandrov's convergence theorem for Wulff shapes (see \cite{S}), then there exist $m_1, m_2\in(0,\infty)$ and $\varepsilon_0>0$ such that $$0<m_1<\rho_{[h_t]}<m_2<\infty \ \ on \ \ \mathbb{S}^{n-1},$$ for $t\in(-\varepsilon_0,\varepsilon_0)$. Thus \begin{align}\label{3.3} 0<\frac{\rho_{[h_t]}}{\rho_{K}}<m_3 \ \ on \ \ \mathbb{S}^{n-1}, \end{align} for some $m_3>1$. Observe that $s-1\geq\log s$ for $s\in(0,1)$ while $s-1\leq\log s\leq m_3\log s$ for $s\in[1,m_3]$. 
Thus, when $s\in(0,m_3)$ $$|s-1|\leq m_3|\log s|.$$ By (\ref{3.3}), we have $$\bigg|\frac{\rho_{[h_t]}}{\rho_{K}}-1\bigg|\leq m_3\bigg|\log\frac{\rho_{[h_t]}}{\rho_{K}}\bigg|.$$ This, together with (\ref{3.3}) and fact that $|\log\rho_{[h_t]}-\log\rho_{K}|\leq M|t|$ (by \cite[Lemma 4.1]{HLYZ}), implies that $$|\rho_{[h_t]}-\rho_{K}|\leq m_2\rho_{K}|\log\rho_{[h_t]}-\log\rho_{K}|\leq m_2m_3M|t|=M_0|t|,$$ on $\mathbb{S}^{n-1}$ for $t\in(-\varepsilon_0,\varepsilon_0)$. Here $M_0=m_2m_3M$ where $M$ comes from \cite[Lemma 4.1]{HLYZ}. The proof is completed. \end{proof} The following variational formula gives rise to the $L_p$ Gauss dual surface area measure. \begin{theorem}\label{t3.2} For $p\neq0$ and $q\in\mathbb{R}$. Let $K\in\mathcal{K}_o^n$, $f\in\mathbb{C}(\mathbb{S}^{n-1})$ and $h_t$ be given in Lemma \ref{l3.1}. Then $$\lim_{t\rightarrow0}\frac{\widetilde{V}_{\gamma,q}([h_t]) -\widetilde{V}_{\gamma,q}(K)}{t}=-\frac{1}{p} \int_{\mathbb{S}^{n-1}}f(v)^p d\widetilde{C}_{\gamma,p,q}(K,v).$$ \end{theorem} \begin{proof} From the definition of $h_t$, we have $$\log h_t(u)=\log h_K(u)+\frac{tf(u)^p}{ph_K(u)^p},\ u\in\mathbb{S}^{n-1}.$$ From (\ref{1.1}), it is equivalent to $$\widetilde{V}_{\gamma,q}([h_t])=\int_{\mathbb{S}^{n-1}} \int_{\rho_{[h_t]}}^\infty e^{-\frac{r^2}{2}}r^{q-1}drdu.$$ Let $t\rightarrow0$. Then there exist $\lambda_1, \lambda_2>0$ such that $\rho_{[h_t]}, \rho_{K}\in[\lambda_1, \lambda_2]$. Set $$F(s)=\int_s^\infty e^{-\frac{r^2}{2}}r^{q-1}dr.$$ According to the mean value theorem, there exists $\xi$ between $\rho_{[h_t]}$ and $\rho_{K}$ such that \begin{align}\label{3.4} |F(\rho_{[h_t]})-F(\rho_{K})|=|F^\prime(\xi)| |\rho_{[h_t]}-\rho_{K}|\leq e^{-\frac{\xi^2}{2}}\xi^{q-1}M_0|t|\leq\lambda_3|t|, \end{align} where $\lambda_3=\max_{\xi\in[\lambda_1,\lambda_2]} (e^{-\frac{\xi^2}{2}}\xi^{q-1}M_0)$, and $M_0$ comes from Lemma \ref{3.1}. By Lemma \ref{3.1}, (\ref{3.4}) and \cite[(3.39)]{LYZ}, employing the dominated convergence theorem, one has \begin{align*} &\lim_{t\rightarrow0}\frac{\widetilde{V}_{\gamma,q}([h_t])- \widetilde{V}_{\gamma,q}(K)}{t} =\lim_{t\rightarrow0}\int_{\mathbb{S}^{n-1}} \frac{F(\rho_{[h_t]})-F(\rho_{K})}{t}du\\ &=-\frac{1}{p}\int_{\mathbb{S}^{n-1}}e^{-\frac{\rho_K^2(u)}{2}} \rho_K^{q}(u)\frac{f(\alpha_K(u))^p}{h_K(\alpha_K(u))^p}du\\ &=-\frac{1}{p}\int_{\mathbb{S}^{n-1}}f(\alpha_K(u))^p e^{-\frac{\rho_K^2(u)}{2}}\rho_K^{q-n}(u)\frac{\rho_K^{n}(u)} {h_K(\alpha_K(u))^p}du\\ &=-\frac{1}{p}\int_{\partial^\prime K}f(\nu_K(x))^p e^{-\frac{|x|^2}{2}}|x|^{q-n}(x\cdot\nu_K(x))^{1-p} d\mathcal{H}^{n-1}(x)\\ &=-\frac{1}{p}\int_{\mathbb{S}^{n-1}}f(v)^p d\widetilde{C}_{\gamma,p,q}(K,v). \end{align*} The proof is completed. \end{proof} Thus, the definition of $L_p$-Gaussian dual curvature measure deduced from the variational formula in Theoren \ref{t3.2} is obtained. \begin{definition}\label{d3.3} Let $K\in\mathcal{K}_o^n$ and $p,q\in\mathbb{R}$. Define the $L_p$-Gaussian dual curvature measure of $K$ by: for Borel set $\omega\subset\mathbb{S}^{n-1}$, $$\widetilde{C}_{\gamma,p,q}(K,\omega) =\int_{\nu_K^{-1}(\omega)}(x\cdot\nu_K(x))^{1-p} e^{-\frac{|x|^2}{2}}|x|^{q-n}d\mathcal{H}^{n-1}(x).$$ \end{definition} The following result gives the weak convergence of the $L_p$-Gaussian dual curvature measure. \begin{proposition}\label{p3.4} If $K_i\in\mathcal{K}_o^n$ such that $K_i$ converges to $K_0\in\mathcal{K}_o^n$ in Hausdorff metric, then $\widetilde{C}_{\gamma,p,q}(K_i,\cdot)$ converges to $\widetilde{C}_{\gamma,p,q}(K_0,\cdot)$ weakly. 
\end{proposition} \begin{proof} Let $f: \omega\rightarrow\mathbb{R}$ be a continuous function. Suppose that $K_i\rightarrow K_0$ with respect to Hausdorff metric. Then $\rho_{K_i}\rightarrow\rho_{K_0}$ uniformly on $\mathbb{S}^{n-1}$, and $\alpha_{K_i}\rightarrow\alpha_{K_0}$ almost everywhere on $\mathbb{S}^{n-1}$ with respect to spherical Lebesgue measure (by \cite[Lemma 2.2]{HLYZ}). Moreover, $h_{K_i}(\alpha_{K_i},\cdot)\rightarrow h_{K_0}(\alpha_{K_0},\cdot)$ uniformly on $\mathbb{S}^{n-1}$. Since $K_0\in\mathcal{K}_o^n$, then there exists a constant $C>0$ such that $\frac{1}{C}B\subset K_i,K_0\subset CB$ for sufficiently large $i$. Thus \begin{align*} e^{-\frac{\rho_{K_i}^2(u)}{2}}\rho_{K_i}^q(u) \frac{f(\alpha_{K_i}(u))^p}{h_{K_i}(\alpha_{K_i}(u)^p} \rightarrow e^{-\frac{\rho_{K_0}^2(u)}{2}}\rho_{K_0}^q(u) \frac{f(\alpha_{K_0}(u))^p}{h_{K_0}(\alpha_{K_0}(u)^p} \end{align*} uniformly on $\mathbb{S}^{n-1}$. By Definition \ref{d3.3} and Lebesgue's dominated convergence theorem, one has \begin{align*} &\int_{\mathbb{S}^{n-1}}f(v)^pd\widetilde{C}_{\gamma,p,q}(K_i,v)\\ &=\int_{\partial^\prime {K_i}}f(\nu_{K_i}(x))^p(x\cdot\nu_{K_i}(x))^{1-p} e^{-\frac{|x|^2}{2}}|x|^{q-n}d\mathcal{H}^{n-1}(x)\\ &=\int_{\mathbb{S}^{n-1}}f(\alpha_{K_i}(u))^p h_{K_i}(\alpha_{K_i}(u))^{-p} e^{-\frac{\rho_{K_i}^2(u)}{2}}\rho_{K_i}^q(u)du\\ &\rightarrow\int_{\mathbb{S}^{n-1}}f(\alpha_{K_0}(u))^p h_{K_0}(\alpha_{K_0}(u))^{-p} e^{-\frac{\rho_{K_0}^2(u)}{2}}\rho_{K_0}^q(u)du\\ &=\int_{\mathbb{S}^{n-1}}f(\alpha_{K_0}(u))^p e^{-\frac{\rho_{K_0}^2(u)}{2}}\rho_{K_0}(u)^{q-n} \frac{\rho_{K_0}(u)^n} {h_{K_0}(\alpha_{K_0}(u))^{p}}du\\ &=\int_{\partial^\prime {K_0}}f(\nu_{K_0}(x))^p (x\cdot\nu_{K_0}(x))^{1-p} e^{-\frac{|x|^2}{2}}|x|^{q-n}d\mathcal{H}^{n-1}(x)\\ &=\int_{\mathbb{S}^{n-1}}f(v)^p d\widetilde{C}_{\gamma,p,q}({K_0},v). \end{align*} The proof is completed. \end{proof} {\bf The $L_p$-Gauss dual Minkowski problem}. For $p,q\in\mathbb{R}$, under what necessary and/or sufficient condition on a non-zero finite Borel measure $\mu$ on $\mathbb{S}^{n-1}$ does there exist a set $K\in\mathcal{K}_o^n$ such that \begin{align}\label{3.5} \mu=\widetilde{C}_{\gamma,p,q}(K,\cdot)? \end{align} If $K$ exists, to what extent is it unique? As we all know, if $\partial K$ is smooth with positive Gauss curvature, then the surface area measure $S(K,\cdot)$ of $K$ is absolutely continuous with respect to spherical Lebesgue measure $S$ and its density is \begin{align}\label{3.6} \frac{dS(K,\cdot)}{dS}=\det(\nabla^2h_K+h_KI). \end{align} Assume $K$ is strictly convex. By the definitions of $L_p$-Gauss dual curvature measure and surface area measure, and (\ref{2.5}), one has \begin{align*} \widetilde{C}_{\gamma,p,q}(K,\eta)&=\int_{\nu_K^{-1}(\eta)} (x\cdot\nu_K(x))^{1-p}|x|^{q-n} e^{-\frac{|x|^2}{2}}d\mathcal{H}^{n-1}(x)\\ &=\int_{\eta}h_K(v)^{1-p}(|\nabla h_K(v)|^2+h_K(v)^2)^{\frac{q-n}{2}}e^{-\frac{|\nabla h_K(v)|^2+h_K(v)^2}{2}}dS(K,v). \end{align*} Namely, \begin{align}\label{3.7} d\widetilde{C}_{\gamma,p,q}(K,\cdot) =h_K^{1-p}(|\nabla h_K|^2+h_K^2)^{\frac{q-n}{2}}e^{-\frac{|\nabla h_K|^2+h_K^2}{2}}dS(K,\cdot). \end{align} Let $\partial K$ is smooth with positive Gauss curvature for $K\in\mathcal{K}_o^n$. 
By (\ref{3.6}) and (\ref{3.7}), we have $$\frac{d\widetilde{C}_{\gamma,p,q}(K,\cdot)}{dS} =h_K^{1-p}(|\nabla h_K|^2+h_K^2)^{\frac{q-n}{2}}e^{-\frac{|\nabla h_K|^2+h_K^2}{2}}\det(\nabla^2h_K+h_KI).$$ Therefore, when the given measure $\mu$ has a density $f$ that is an integral nonnegative function on $\mathbb{S}^{n-1}$, the $L_p$-Gauss dual Minkowski problem is equivalent to solving the following Monge-Amp\`{e}re type equation \begin{align*} e^{-\frac{|\nabla h_K|^2+h_K^2}{2}}h_K^{1-p} (|\nabla h_K|^2+h_K^2)^{\frac{q-n}{2}} \det(\nabla^2h_K+h_KI)=f. \end{align*} \section{The variational method to generate weak solution}\label{S4} This section aims to prove Theorem \ref{t1.} by transforming it into an optimization problem. \subsection{An associated optimization problem} Let $\mathcal{K}_e^n$ be the set of origin-symmetric convex bodies. For any non-zero finite Borel measure $\mu$ on $\mathbb{S}^{n-1}$, $p>0$ and $q>0$. Define $\Phi: \mathcal{K}_e^n\rightarrow\mathbb{R}$ by $$\Phi(Q)=\frac{1}{p}\int_{\mathbb{S}^{n-1}}h_Q(v)^pd\mu(v) +\int_{\mathbb{S}^{n-1}}F(\rho_Q(u))du, \ for \ Q\in\mathcal{K}_e^n,$$ where $F(\rho_Q(u))=\int_{\rho_Q(u)}^{\infty} e^{-\frac{r^2}{2}}r^{q-1}dr$. We consider the minimization problem as follows: \begin{align}\label{4.0} \inf\{\Phi(Q):Q\in\mathcal{K}_e^n\}. \end{align} \begin{lemma}\label{l4.1} Let $\mu$ be a non-zero finite Borel measure on $\mathbb{S}^{n-1}$, $p>0$ and $q>0$. If $K\in\mathcal{K}_e^n$ satisfies \begin{align}\label{4.1} \Phi(K)=\inf\{\Phi(Q):Q\in\mathcal{K}_e^n\}, \end{align} then $$\mu=\widetilde{C}_{\gamma,p,q}(K,\cdot).$$ \end{lemma} \begin{proof} For each $f\in \mathbb{C}^+_e(\mathbb{S}^{n-1})$, define a functional $\phi: \mathbb{C}^+_e(\mathbb{S}^{n-1})\rightarrow\mathbb{R}$ by $$\phi(f)=\frac{1}{p}\int_{\mathbb{S}^{n-1}}f(v)^pd\mu(v) +\int_{\mathbb{S}^{n-1}}F(\rho_{[f]})du.$$ We claim that for each $f\in \mathbb{C}^+_e(\mathbb{S}^{n-1})$ \begin{align}\label{4.2} \phi(f)\leq\phi(h_K). \end{align} Since $h_{[f]}\leq f$ and $[h_K]=K$, one has $$\phi(f)\leq\phi(h_{[f]})=\Phi([f])\leq \Phi(K)=\phi(h_k),$$ where the second inequality sign follows from (\ref{4.1}). For any $g\in \mathbb{C}(\mathbb{S}^{n-1})$ and $t\in(-\varepsilon,\varepsilon)$ where $\varepsilon>0$ is sufficiently small. Let $$h_t(v)=h_K(v)e^{tg(v)}.$$ By (\ref{4.2}), since $h_K$ is a minimizer of (\ref{4.1}), the Lagrange multiplier method says, $$\frac{d}{dt}\phi(h_t)|_{t=0}=0.$$ This, together with the formula in \cite[Lemma 3.1]{F}, implies $$\int_{\mathbb{S}^{n-1}}h_K(v)^pg(v)d\mu(v) =\int_{\mathbb{S}^{n-1}}g(v)d\widetilde{C}_{\gamma,q}(K,v).$$ Thus $$d\mu(v)=h_K(v)^{-p}d\widetilde{C}_{\gamma,q}(K,v).$$ We conclude that $\mu=\widetilde{C}_{\gamma,p,q}(K,\cdot)$. \end{proof} \subsection{Existence of an optimizer} \begin{lemma}\label{l4.2} For $p>0$ and $q>0$. Let $\mu$ be a non-zero finite even Borel measure on $\mathbb{S}^{n-1}$ and not concentrated in any closed hemisphere. Then there exists a convex body $K\in\mathcal{K}_e^n$ such that \begin{align*} \Phi(K)=\inf\{\Phi(Q):Q\in\mathcal{K}_e^n\}. \end{align*} \end{lemma} \begin{proof} Suppose $Q_l$ is a sequence of origin-symmetric convex bodies and \begin{align}\label{4.3} \lim_{l\rightarrow\infty}\Phi(Q_l) =\inf\{\Phi(Q):Q\in\mathcal{K}_e^n\}. \end{align} We claim that $Q_l$ is uniformly bounded. If not, then there exists $u_l\in\mathbb{S}^{n-1}$ such that $\rho_{Q_l}(u_l)\rightarrow\infty$ as $l\rightarrow\infty$. 
By the definition of the support function, one has $$\rho_{Q_l}(u_l)(v\cdot u_l)_+\leq h_{Q_l}(v),$$ where $(t)_+=\max\{t,0\}$ for any $t\in\mathbb{R}$. From the definition of $\Phi$ and the fact that $F\geq0$, we have \begin{align*} \Phi(Q_l)&\geq\frac{1}{p}\int_{\mathbb{S}^{n-1}}h_{Q_l}(v)^pd\mu(v)\\ &\geq \frac{1}{p}\int_{\mathbb{S}^{n-1}} \rho_{Q_l}(u_l)^p(v\cdot u_l)_+^pd\mu(v)\\ &=\frac{\rho_{Q_l}(u_l)^p}{p}\int_{\mathbb{S}^{n-1}} (v\cdot u_l)_+^pd\mu(v). \end{align*} By the fact that $\mu$ is not concentrated in any closed hemisphere, the function $u\mapsto\int_{\mathbb{S}^{n-1}}(v\cdot u)_+^pd\mu(v)$ is positive and continuous on the compact set $\mathbb{S}^{n-1}$, so we may find $c_0>0$ such that $$\int_{\mathbb{S}^{n-1}} (v\cdot u_l)_+^pd\mu(v)>c_0 \ \ {\rm for\ all\ } l.$$ Therefore $$\Phi(Q_l)>\frac{c_0\rho_{Q_l}(u_l)^p}{p}\rightarrow\infty$$ as $l \rightarrow\infty$. But this contradicts (\ref{4.3}). So we conclude that $Q_l$ is uniformly bounded. By the Blaschke selection theorem in \cite{S}, there is a subsequence of $Q_l$, still denoted by $Q_l$, which converges to a compact convex set $K$ of $\mathbb{R}^n$. Since $K$ is origin-symmetric, this means that $K$ contains the origin as an interior point. Therefore, the convex body $K$ is a minimizer of the optimization problem (\ref{4.0}). \end{proof} \subsection{Existence of a symmetric solution to the $L_p$-Gauss dual Minkowski problem} Combining Lemma \ref{l4.1} and Lemma \ref{l4.2}, we obtain the existence result, Theorem \ref{t1.2}, for the $L_p$-Gauss dual Minkowski problem stated in the introduction. \begin{theorem}\label{t4.3} Let $p>0$ and $q>0$, and let $\mu$ be a non-zero finite even Borel measure on $\mathbb{S}^{n-1}$ that is not concentrated in any closed hemisphere. Then there exists a convex body $K\in\mathcal{K}_e^n$ such that \begin{align*} \mu=\widetilde{C}_{\gamma,p,q}(K,\cdot). \end{align*} \end{theorem} \section{The curvature flow method to generate smooth solution}\label{S5} In this section, the existence of a smooth solution to the $L_p$-Gauss dual Minkowski problem will be obtained by a curvature flow method. \subsection{Gauss curvature flow and its related geometric functional} We consider a type of Gauss curvature flow of a family of closed convex hypersurfaces $\{\Omega_t\}$ given by $\Omega_t=F(\mathbb{S}^{n-1},t)$, where $F: \mathbb{S}^{n-1}\times [0,T)\rightarrow\mathbb{R}^n$ is the smooth map that satisfies \begin{align}\label{5.1} \begin{cases} \frac{\partial F(x,t)}{\partial t}=-f(x)(x\cdot F(x,t))^{p}e^{\frac{|F(x,t)|^2}{2}} |F(x,t)|^{n-q}\mathcal{K}(x,t)\nu+F(x,t),\\ F(x,0)=F_0(x), \end{cases} \end{align} where $f$ is a given positive smooth function on $\mathbb{S}^{n-1}$, $\mathcal{K}$ is the Gauss curvature of $\Omega_t$ at $F(x,t)$, $\nu$ is the outer unit normal of $\Omega_t$ at $F(x,t)$, and $T$ is the maximal time for which the solution of (\ref{5.1}) exists.
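To illustrate the flow (\ref{5.1}) and the role of the condition (\ref{1.3}), it is instructive to record the simplest radially symmetric example; it is only an illustration and is not used in the sequel. Suppose that $f\equiv f_0$ is a positive constant and that $\Omega_0$ is a sphere centred at the origin. By symmetry, $\Omega_t$ remains an origin-centred sphere; writing $F(x,t)=r(t)x$, so that $x\cdot F=|F|=r(t)$, $\nu=x$ and $\mathcal{K}=r(t)^{1-n}$, the flow (\ref{5.1}) reduces to the ordinary differential equation
\begin{align*}
\frac{dr}{dt}=-f_0r^{p}e^{\frac{r^2}{2}}r^{n-q}r^{1-n}+r
=r\Big(1-f_0r^{p-q}e^{\frac{r^2}{2}}\Big).
\end{align*}
A radius $r_0$ is stationary if and only if $r_0^{q-p}e^{-\frac{r_0^2}{2}}=f_0$, which is exactly the equation (\ref{1.2}) for constant $h$ and constant $f$. Moreover, the condition (\ref{1.3}) (see also the hypothesis of Theorem \ref{t5.5}) requires that $f_0$ lie strictly between $\limsup_{s\rightarrow+\infty}(s^{q-p}e^{-\frac{s^2}{2}})$ and $\liminf_{s\rightarrow0^+}(s^{q-p}e^{-\frac{s^2}{2}})$; consequently the right-hand side above is positive for all sufficiently small $r$ and negative for all sufficiently large $r$, so $r(t)$ remains in a compact subinterval of $(0,\infty)$. This is precisely the behaviour established for general solutions in the $C^0$-estimate of Lemma \ref{l5.2} below.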
As discussed in Section 2, the support function of the convex hypersurface $\Omega_t$ can be expressed as $h(x,t)=x\cdot F(x,t)$. We thus derive the evolution equation for $h(\cdot,t)$ from the flow (\ref{5.1}) as follows: \begin{align}\label{5.2} \begin{cases} \frac{\partial h(x,t)}{\partial t} =-f(x)h^{p}e^{\frac{|\nabla h|^2+h^2}{2}} (|\nabla h|^2+h^2)^{\frac{n-q}{2}}\mathcal{K} +h(x,t),\\ h(x,0)=h_0(x). \end{cases} \end{align} From (\ref{2.5}) and the definition of the radial function, we have \begin{align}\label{5.3} \rho(u)u=\nabla h(x)+h(x)x. \end{align} Writing $x=x(u,t)$, we have $$\log\rho(u,t)=\log h(x,t)-\log (x\cdot u).$$ Differentiating the above identity with respect to $t$, we obtain \begin{align}\label{5.4} \frac{\partial\rho(u,t)}{\partial t} =\frac{\rho(u,t)}{h(x,t)}\frac{\partial h(x,t)}{\partial t}. \end{align} By (\ref{5.2}), (\ref{5.3}) and (\ref{5.4}), the evolution equation of $\rho(u,t)$ can be written as \begin{align}\label{5.5} \begin{cases} \frac{\partial\rho(u,t)}{\partial t} =-f(x(u,t))h^{p-1}\rho^{n-q+1} e^{\frac{\rho^2}{2}}\mathcal{K}+\rho(u,t),\\ \rho(u,0)=\rho_0(u). \end{cases} \end{align} Now we introduce a geometric functional that is key to proving the long-time existence of solutions to the flow (\ref{5.1}). Firstly, we define $$\phi(r)=\int^{r}s^{q-1}e^{-\frac{s^2}{2}}ds,$$ and $$\varphi(t)=\int^t\frac{1}{s^{1-p}}ds.$$ \begin{lemma}\label{l5.1} For a convex hypersurface $\Omega_t$, we define the functional $$\Phi(\Omega_t)=\int_{\mathbb{S}^{n-1}}f(x) \varphi(h(x,t))dx-\int_{\mathbb{S}^{n-1}}\phi(\rho(u,t))du.$$ Let $p,q\in\mathbb{R}$. Then $\Phi(\Omega_t)$ is monotone non-increasing along the flow (\ref{5.1}). That is, $$\frac{\partial}{\partial t}\Phi(\Omega_t)\leqslant0,$$ with equality if and only if the support function of $\Omega_t$ satisfies the equation (\ref{1.2}). \end{lemma} \begin{proof} By (\ref{5.2}) and (\ref{5.4}), it follows from $du=\frac{h}{\rho^n\mathcal{K}}dx$ that \begin{align*} \partial_t\Phi(\Omega_t)&=\int_{\mathbb{S}^{n-1}} \frac{f(x)}{h^{1-p}}\partial_thdx-\int_{\mathbb{S}^{n-1}} \rho^{q-1}e^{-\frac{\rho^2}{2}}\partial_t\rho du\\ &=\int_{\mathbb{S}^{n-1}}\frac{f(x)}{h^{1-p}}\partial_thdx -\int_{\mathbb{S}^{n-1}} \frac{\rho^{q-n}e^{-\frac{\rho^2}{2}}} {\mathcal{K}}\partial_thdx\\ &=\int_{\mathbb{S}^{n-1}}\frac{\rho^{q-n}e^{-\frac{\rho^2}{2}}} {\mathcal{K}}\bigg(\frac{f}{h^{1-p}}\frac{\mathcal{K}} {\rho^{q-n}e^{-\frac{\rho^2}{2}}}-1\bigg)\partial_thdx\\ &=-\int_{\mathbb{S}^{n-1}}\frac{h\rho^{q-n}e^{-\frac{\rho^2}{2}}} {\mathcal{K}}\bigg(\frac{f}{h^{1-p}}\frac{\mathcal{K}} {\rho^{q-n}e^{-\frac{\rho^2}{2}}}-1\bigg)^2dx\\ &\leq0. \end{align*} Equality holds if and only if $$\frac{f}{h^{1-p}}\frac{\mathcal{K}} {\rho^{q-n}e^{-\frac{\rho^2}{2}}}=1,$$ which is the equation (\ref{1.2}). \end{proof} \subsection{A priori estimates} \subsubsection{$C^0,C^1$-estimates} \begin{lemma}\label{l5.2} Let $h(\cdot,t)$ be a smooth solution of (\ref{5.2}). Assume that $f$ is a positive smooth function on $\mathbb{S}^{n-1}$ satisfying (\ref{1.3}) and $p, q\in\mathbb{R}$. Then there exist positive constants $c$ and $C$ independent of $t$ such that \begin{align}\label{5.6} c\leqslant h(\cdot,t)\leqslant C,\ c\leqslant \rho(\cdot,t)\leqslant C, \ \forall (\cdot,t)\in \mathbb{S}^{n-1}\times [0,T). \end{align} \end{lemma} \begin{proof} From the definitions of the support and radial functions, we only need to prove one of the two estimates in (\ref{5.6}). Suppose the spatial maximum of $h(\cdot,t)$ is attained at $x_1\in\mathbb{S}^{n-1}$.
Then, from (\ref{5.3}), at $x_1$ $$\nabla h=0, \ \nabla^2 h\leqslant0\ and \ h=\rho.$$ By (\ref{5.2}), one has \begin{align*} \partial_th&\leqslant-f(x)h^{p-q+1}e^{\frac{h^2}{2}}+h\\ &=h^{p-q+1}e^{\frac{h^2}{2}} (h^{q-p}e^{-\frac{h^2}{2}}-f(x)). \end{align*} Set $A=\limsup_{s\rightarrow+\infty}(s^{q-p} e^{-\frac{s^2}{2}})$. By (\ref{1.3}), $\epsilon=\frac{1}{2}(\min_{\mathbb{S}^{n-1}}f(x)-A)$ is positive, and there exists some positive constant $c_1$ such that $$h^{q-p}e^{-\frac{h^2}{2}}<A+\epsilon \ for \ h>c_1.$$ This, together with (\ref{1.3}), yields $$h^{q-p}e^{-\frac{h^2}{2}}-f(x) <A+\epsilon-\min_{\mathbb{S}^{n-1}}f(x)<0,$$ which implies that at $x_1$ $$\partial_th<0.$$ Therefore $$h\leqslant\max\{c_1,\max_{\mathbb{S}^{n-1}}h(x,0)\}.$$ Similarly, we can estimate the spatial minimum of $h(\cdot,t)$. Suppose the spatial minimum of $h(\cdot,t)$ is attained at $x_2\in\mathbb{S}^{n-1}$. Then, from (\ref{5.3}), at $x_2$ $$\nabla h=0, \ \nabla^2 h\geqslant0\ and \ h=\rho.$$ By (\ref{5.2}), one has \begin{align*} \partial_th&\geq-f(x)h^{p-q+1}e^{\frac{h^2}{2}}+h\\ &=h^{p-q+1}e^{\frac{h^2}{2}} (h^{q-p}e^{-\frac{h^2}{2}}-f(x)). \end{align*} Set $a=\liminf_{s\rightarrow0^+}(s^{q-p} e^{-\frac{s^2}{2}})$. By (\ref{1.3}), $\epsilon=\frac{1}{2}(a-\max_{\mathbb{S}^{n-1}}f(x))$ is positive, and there exists some positive constant $c_2$ such that $$h^{q-p}e^{-\frac{h^2}{2}}>a-\epsilon \ for \ h<c_2.$$ This, together with (\ref{1.3}), yields $$h^{q-p}e^{-\frac{h^2}{2}}-f(x) >a-\epsilon-\max_{\mathbb{S}^{n-1}}f(x)>0,$$ which implies that at $x_2$ $$\partial_th>0.$$ Therefore $$h\geqslant\min\{c_2,\min_{\mathbb{S}^{n-1}}h(x,0)\}.$$ The proof is completed. \end{proof} \begin{lemma}\label{l5.3} Under the assumptions of Lemma \ref{l5.2}, we have \begin{align*} |\nabla h(\cdot,t)|\leqslant C \ and \ |\nabla\rho(\cdot,t)|\leqslant C, \ \forall (\cdot,t)\in \mathbb{S}^{n-1}\times [0,T), \end{align*} where $C$ is a positive constant as in Lemma \ref{l5.2}. \end{lemma} \begin{proof} By (\ref{2.5}) and (\ref{5.3}), we know that $$\min_{\mathbb{S}^{n-1}}h(\cdot,t)\leqslant\rho(\cdot,t)\leqslant\max_{\mathbb{S}^{n-1}}h(\cdot,t),$$ and $$\rho^2=h^2+|\nabla h|^2.$$ These facts together with Lemma \ref{l5.2} imply the result. \end{proof} \subsubsection{$C^2$-estimates} In this subsection, we will establish the upper and lower bounds of the principal curvatures. These estimates can be obtained by considering suitable auxiliary functions, see e.g., \cite{LL} for similar techniques. \begin{lemma}\label{l5.4} Let $h(\cdot,t)$ be a smooth solution of (\ref{5.2}). Assume that $f$ is a positive smooth function on $\mathbb{S}^{n-1}$ satisfying (\ref{1.3}). Then there exist positive constants $c$ and $C$, independent of $t$, such that the principal curvatures $\kappa_i$, $i=1,\cdots,n-1$, of $\Omega_t$ satisfy \begin{align}\label{5.7} c\leqslant \kappa_i\leqslant C, \ \forall (\cdot,t)\in \mathbb{S}^{n-1}\times [0,T). \end{align} \end{lemma} \begin{proof} {\bf Step 1:} We first prove the upper bound of the Gauss curvature $\mathcal{K}$. To this end, we construct the following auxiliary function \begin{align}\label{5.8} \Theta(x,t)=\frac{-\partial_th+h}{h-\varepsilon_0} =\frac{f(x)h^p\rho^{n-q}e^{\frac{\rho^2}{2}} \mathcal{K}}{h-\varepsilon_0}, \end{align} where $\varepsilon_0=\frac{1}{2}\min_{\mathbb{S}^{n-1}\times[0,T)} h$ is a positive constant by Lemma \ref{l5.2}. From (\ref{5.8}) and Lemma \ref{l5.2}, an upper bound for $\mathcal{K}$ follows from an upper bound for $\Theta(\cdot,t)$.
Hence it suffices to derive the upper bound of $\Theta(x,t)$. Suppose the spatial maximum of $\Theta$ is attained at $x_0$. Then \begin{align}\label{5.9} 0=\nabla_i\Theta=\frac{-\partial_th_i+h_i}{h-\varepsilon_0} +\frac{(\partial_th-h)h_i}{(h-\varepsilon_0)^2}, \end{align} and using (\ref{5.9}), we get \begin{align}\label{5.10} 0\geq\nabla_{ij}\Theta=\frac{-\partial_th_{ij} +h_{ij}}{h-\varepsilon_0}+\frac{(\partial_th-h) h_{ij}}{(h-\varepsilon_0)^2}, \end{align} where $\nabla_{ij}\Theta\leq0$ should be understood in the sense of negative semi-definite matrices. As in Section \ref{S2}, we write $b_{ij}=h_{ij}+h\delta_{ij}$ and denote by $\{b^{ij}\}$ its inverse matrix. This, together with (\ref{5.10}), yields \begin{align*} \partial_tb_{ij}&=\partial_th_{ij}+\partial_th\delta_{ij}\\ &\geq h_{ij}+\frac{(\partial_th-h)h_{ij}}{h-\varepsilon_0} +\partial_th\delta_{ij}\\ &=b_{ij}+\frac{(\partial_th-h)h_{ij}}{h-\varepsilon_0} +(\partial_th-h)\delta_{ij}\\ &=b_{ij}+\frac{\partial_th-h}{h-\varepsilon_0}(h_{ij} +h\delta_{ij}-\varepsilon_0\delta_{ij})\\ &=b_{ij}-\Theta(b_{ij}-\varepsilon_0\delta_{ij}). \end{align*} By the definition of the Gauss curvature, we obtain \begin{align}\label{5.11} \nonumber \partial_t\mathcal{K}&=-\mathcal{K}b^{ij}\partial_tb_{ij}\\ &\leq-\mathcal{K}b^{ij}[b_{ij}-\Theta(b_{ij} -\varepsilon_0\delta_{ij})]\\ \nonumber &=-\mathcal{K}[(n-1)(1-\Theta)+\Theta\varepsilon_0\mathcal{H}], \end{align} where $\mathcal{H}$ denotes the mean curvature of $F(\cdot,t)$. From (\ref{5.8}) and Lemma \ref{l5.2}, there exists a positive constant $c_1$ such that \begin{align}\label{5.12} \frac{1}{c_1}\Theta(\cdot,t)\leq \mathcal{K}(\cdot,t)\leq c_1\Theta(\cdot,t). \end{align} Noting \begin{align*} \frac{1}{n-1}\mathcal{H}\geq \mathcal{K}^{\frac{1}{n-1}}, \end{align*} and combining (\ref{5.11}) and (\ref{5.12}), we obtain \begin{align}\label{5.13} \partial_t\mathcal{K}&\leq(n-1)c_1^{-1}\Theta^2 -(n-1)c_1^{-1}\varepsilon_0\Theta^{\frac{2n-1}{n-1}}. \end{align} From (\ref{5.8}), we have \begin{align*} \partial_t\Theta=f(x)\mathcal{K}\partial_t \bigg(\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg) +\frac{f(x)h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\partial_t\mathcal{K}, \end{align*} where \begin{align*} \partial_t\bigg(\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg) &=\bigg[ph^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}}\partial_th +(n-q)\rho^{n-q-1}h^pe^{\frac{\rho^2}{2}}\partial_t\rho\\ &\ \ \ +h^p\rho^{n-q+1}e^{\frac{\rho^2}{2}} \partial_t\rho\bigg]\frac{1}{h-\varepsilon_0} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}\partial_th} {(h-\varepsilon_0)^2}\\ &=(\alpha+\rho^2)\frac{h^{p-1}\rho^{n-q} e^{\frac{\rho^2}{2}}\partial_th}{h-\varepsilon_0} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}\partial_th} {(h-\varepsilon_0)^2}\\ &=\bigg[(\alpha+\rho^2)h^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg] \frac{h-(h-\varepsilon_0)\Theta}{h-\varepsilon_0}, \end{align*} where $\alpha=n+p-q$.
By (\ref{5.8}), (\ref{5.12}) and (\ref{5.13}), we have at $x_0$ \begin{align}\label{5.14} \nonumber\partial_t\Theta&\leq c_1f \bigg[(\alpha+\rho^2)h^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg] \frac{(h-(h-\varepsilon_0)\Theta)\Theta}{h-\varepsilon_0}\\ &\nonumber\ \ \ +\frac{(n-1)fh^p\rho^{n-q} e^{\frac{\rho^2}{2}}}{c_1(h-\varepsilon_0)} (\Theta^2-\varepsilon_0\Theta^{\frac{2n-1}{n-1}})\\ &=\frac{c_1fh}{h-\varepsilon_0}\bigg[(\alpha+\rho^2) h^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg]\Theta\\ &\nonumber\ \ \ -c_1f\bigg[(\alpha+\rho^2) h^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg]\Theta^2\\ &\nonumber\ \ \ +\frac{(n-1)fh^p\rho^{n-q} e^{\frac{\rho^2}{2}}}{c_1(h-\varepsilon_0)} (\Theta^2-\varepsilon_0\Theta^{\frac{2n-1}{n-1}}). \end{align} Then by the definition of $\varepsilon_0$, there exist positive constants $c_2, c_3$ and $c_4$, depending only on $\min_{\mathbb{S}^{n-1}\times[0,T)}h$, $\min_{\mathbb{S}^{n-1}\times[0,T)}\rho$, $\min_{\mathbb{S}^{n-1}}f$ and $\varepsilon_0$, such that $$c_1f\bigg[(\alpha+\rho^2) h^{p-1}\rho^{n-q}e^{\frac{\rho^2}{2}} -\frac{h^p\rho^{n-q}e^{\frac{\rho^2}{2}}} {h-\varepsilon_0}\bigg]\leq c_2,$$ $$\frac{(n-1)fh^p\rho^{n-q} e^{\frac{\rho^2}{2}}}{c_1(h-\varepsilon_0)}\leq c_3,\ \ and \ \ \frac{h}{h-\varepsilon_0}\leq c_4.$$ Therefore (\ref{5.14}) can be further estimated as \begin{align*} \partial_t\Theta&\leq c_2c_4\Theta-c_2\Theta^2 +c_3(\Theta^2-\varepsilon_0\Theta^{\frac{2n-1}{n-1}})\\ &=\Theta^2[c_2c_4\Theta^{-1}-(c_2-c_3)-c_3\varepsilon_0 \Theta^{\frac{1}{n-1}}]. \end{align*} Since the quantity in the brackets tends to $-\infty$ as $\Theta\rightarrow\infty$, there exists a constant $C'>0$, depending only on $c_2$, $c_3$, $c_4$, $\varepsilon_0$ and $n$, such that whenever $\Theta>C'$ we have \begin{align*} \partial_t\Theta<0. \end{align*} This implies that $\Theta$ has a uniform upper bound. For any $(x,t)$ $$\mathcal{K}(x,t)=\frac{(h-\varepsilon_0)\Theta(x,t)} {fh^p\rho^{n-q}e^{\frac{\rho^2}{2}}} \leq\frac{(h-\varepsilon_0)\Theta(x_0,t)} {fh^p\rho^{n-q}e^{\frac{\rho^2}{2}}}\leq C.$$ Namely, $\mathcal{K}$ has a uniform upper bound. {\bf Step 2:} Now we prove the lower bound of the principal curvatures. Consider the following auxiliary function \begin{align}\label{5.15} \mathcal{E}(x,t)=\log\lambda_{\max}(\{b_{ij}\}) -a\log h(x,t)-b|\nabla h|^2, \end{align} where $a,b$ are positive constants to be specified later, and $\lambda_{\max}(\{b_{ij}\})$ is the maximal eigenvalue of $\{b_{ij}\}$. As shown in Section \ref{S2}, the eigenvalues of $\{b_{ij}\}$ and $\{b^{ij}\}$ are, respectively, the principal radii of curvature and the principal curvatures of $\Omega_t$. For any fixed $t\in[0,T)$, suppose that the maximum of $\mathcal{E}(x,t)$ is attained at $x_1\in\mathbb{S}^{n-1}$. By a rotation of coordinates, we may assume that $\{b^{ij}(x_1,t)\}$ is diagonal, and $\lambda_{\max}(\{b_{ij}(x_1,t)\})=b_{11}(x_1,t)$. Therefore, deriving a positive lower bound for the principal curvatures is equivalent to estimating an upper bound for $b_{11}$. Based on the above assumption, (\ref{5.15}) can be transformed into \begin{align}\label{5.16} \widetilde{\mathcal{E}}(x,t)=\log b_{11} -a\log h(x,t)-b|\nabla h|^2.
\end{align} Using the above assumption again, for any fixed $t\in[0,T)$, $\widetilde{\mathcal{E}}(x,t)$ has a local maximum at $x_1$, i.e., \begin{align}\label{5.17} \nonumber0=\nabla_i\widetilde{\mathcal{E}}&=b^{11}\nabla_ib_{11} -a\frac{h_i}{h}+2b\Sigma_jh_jh_{ji}\\ &=b^{11}\nabla_i(h_{11}+h\delta_{11}) -a\frac{h_i}{h}+2bh_ih_{ii}, \end{align} and \begin{align}\label{5.18} 0\geq\nabla_{ii}\widetilde{\mathcal{E}} =b^{11}\nabla_{ii}b_{11}-(b^{11})^2 (\nabla_ib_{11})^2-a(\frac{h_{ii}}{h}-\frac{h_i^2}{h^2}) +2b(\Sigma_jh_jh_{jii}+h_{ii}^2). \end{align} Moreover \begin{align}\label{5.19} \nonumber\partial_t\widetilde{\mathcal{E}} &=b^{11}\partial_tb_{11}-a\frac{h_t}{h}+2b\Sigma_jh_jh_{jt}\\ &=b^{11}(h_{11t}+h_t)-a\frac{h_t}{h}+2b\Sigma_jh_jh_{jt}. \end{align} From (\ref{5.8}), we write \begin{align}\label{5.20} \nonumber\log(h-h_t)&=\log(fh^p\rho^{n-q} e^{\frac{\rho^2}{2}}\mathcal{K})\\ &=-\log\det(\nabla^2h+hI)+A(x,t), \end{align} where $A(x,t):=\log(fh^p\rho^{n-q}e^{\frac{\rho^2}{2}})$. Differentiating (\ref{5.20}) with respect to $e_j$, we have \begin{align}\label{5.21} \frac{h_j-h_{jt}}{h-h_t}=-\sum_{i,k} b^{ik}\nabla_jb_{ik}+\nabla_jA=-\sum_{i}b^{ii} (h_{jii}+h_i\delta_{ij})+\nabla_jA, \end{align} and \begin{align}\label{5.22} \frac{h_{11}-h_{11t}}{h-h_t}-\frac{(h_1-h_{1t})^2}{(h-h_t)^2} =-\sum_ib^{ii}\nabla_{11}b_{ii}+\sum_{i,k}b^{ii}b^{kk} (\nabla_1b_{ik})^2+\nabla_{11}A. \end{align} By the Ricci identity on $\mathbb{S}^{n-1}$ $$\nabla_{11}b_{ij}=\nabla_{ij}b_{11}-\delta_{ij}b_{11} +\delta_{11}b_{ij}-\delta_{1i}b_{1j}+\delta_{1j}b_{1i},$$ and (\ref{5.18}), (\ref{5.19}), (\ref{5.21}) and (\ref{5.22}), one has \begin{align}\label{5.23} \nonumber\frac{\partial_t\widetilde{\mathcal{E}}}{h-h_t} &=b^{11}\bigg(\frac{h_{11t}-h_{11}+h_{11}+h-h+h_t}{h-h_t}\bigg) -a\frac{h_t-h+h}{h(h-h_t)}+2b\frac{\Sigma_jh_jh_{jt}}{h-h_t}\\ \nonumber&=b^{11}\bigg(-\frac{(h_1-h_{1t})^2}{(h-h_t)^2} +\sum_ib^{ii}\nabla_{11}b_{ii}-\sum_{i,k}b^{ii}b^{kk} (\nabla_1b_{ik})^2-\nabla_{11}A\bigg)\\ \nonumber&\ \ \ +\frac{1}{h-h_t}-b^{11}+\frac{a}{h} -\frac{a}{h-h_t}+2b\frac{\Sigma_jh_jh_{jt}}{h-h_t}\\ \nonumber&\leq b^{11}\bigg(\sum_ib^{ii}(\nabla_{ii}b_{11} -b_{11}+b_{ii})-\sum_{i,k}b^{ii}b^{kk}(\nabla_1b_{ik})^2\bigg)\\ \nonumber&\ \ \ +\frac{1-a}{h-h_t}-b^{11}\nabla_{11}A +\frac{a}{h}+2b\frac{\Sigma_jh_jh_{jt}}{h-h_t}\\ \nonumber&\leq\sum_ib^{ii}\bigg[(b^{11})^2(\nabla_ib_{11})^2 +a\bigg(\frac{h_{ii}}{h}-\frac{h_i^2}{h^2}\bigg) -2b\bigg(\sum_jh_jh_{jii}+h_{ii}^2\bigg)\bigg]\\ &\ \ \ -b^{11}\sum_{i,k}b^{ii}b^{kk}(\nabla_1b_{ik})^2 -b^{11}\nabla_{11}A+\frac{a}{h} +2b\frac{\Sigma_jh_jh_{jt}}{h-h_t}+\frac{1-a}{h-h_t}\\ \nonumber&\leq\sum_ib^{ii}a\bigg(\frac{h_{ii}+h-h}{h} -\frac{h_i^2}{h^2}\bigg)+2b\sum_jh_j\bigg(-\sum_i b^{ii}h_{jii}+\frac{h_{jt}}{h-h_t}\bigg)\\ \nonumber&\ \ \ -2b\sum_ib^{ii}h_{ii}^2 -b^{11}\nabla_{11}A+\frac{a}{h}+\frac{1-a}{h-h_t}\\ \nonumber&\leq-a\sum_ib^{ii}-2b\sum_ib^{ii}(b_{ii}^2-2b_{ii}h) -b^{11}\nabla_{11}A+\frac{na}{h}+\frac{1-a}{h-h_t}\\ \nonumber&\ \ \ +2b\sum_jh_j\bigg(\frac{h_j}{h-h_t} +\sum_ib^{ii}h_j-\nabla_jA\bigg)\\ \nonumber&\leq-b^{11}\nabla_{11}A-2b\sum_jh_j\nabla_jA +(2b|\nabla h|^2-a)\sum_ib^{ii}-2b\sum_ib_{ii}\\ \nonumber&\ \ \ +\frac{2b|\nabla h|^2-a+1}{h-h_t} +4b(n-1)h+\frac{na}{h}. \end{align} Next we calculate $-b^{11}\nabla_{11}A-2b\sum_jh_j\nabla_jA$. 
From the expression of $A(x,t)$, we have \begin{align}\label{5.24} \nabla_jA=\frac{f_j}{f}+\frac{ph_j}{h} +\frac{(n-q+\rho^2)\rho_j}{\rho}, \end{align} and \begin{align}\label{5.25} \nonumber\nabla_{11}A&=\frac{ff_{11}-f_1^2}{f^2} +p\frac{hh_{11}-h_1^2}{h^2}\\ &\ \ \ +\frac{(n-q+\rho^2)\rho\rho_{11} -(n-q-\rho^2)\rho_1^2}{\rho^2}, \end{align} where \begin{align*} &\rho_i=\frac{hh_i+\sum h_kh_{ki}}{\rho}=\frac{h_ib_{ii}}{\rho},\\ &\rho_{ij}=\frac{hh_{ij}+h_ih_j+\sum h_kh_{kij}+\sum h_{ki}h_{kj}}{\rho}-\frac{h_ih_jb_{ii}b_{jj}}{\rho^3}. \end{align*} By (\ref{5.24}) and (\ref{5.25}), one has \begin{align}\label{5.26} \nonumber&-b^{11}\nabla_{11}A-2b\sum_jh_j\nabla_jA\\ \nonumber&=-b^{11}\frac{ff_{11}-f_1^2}{f^2} -b^{11}p\frac{hh_{11}-h_1^2}{h^2}-b^{11} \frac{(n-q+\rho^2)\rho_{11}}{\rho}\\ \nonumber&\ \ \ +b^{11}\frac{(n-q-\rho^2)\rho_1^2}{\rho^2} -2b\sum_jh_j\frac{f_j}{f} -2bp\sum_j\frac{h_j^2}{h}\\ \nonumber&\ \ \ -2b\sum_jh_j\frac{(n-q+\rho^2)\rho_j}{\rho}\\ &=-b^{11}\frac{ff_{11}-f_1^2}{f^2}-\frac{(n-1)p}{h} +pb^{11}+b^{11}\frac{ph_1^2}{h^2}-b^{11} \frac{(n-q+\rho^2)\rho_{11}}{\rho}\\ \nonumber&\ \ \ + (n-1)b_{11}\frac{(n-q-\rho^2)h_1^2}{\rho^4} -2b\sum_jh_j\frac{f_j}{f}-2bp\sum_j\frac{h_j^2}{h}\\ \nonumber&\ \ \ -2b\sum_jh_j^2\frac{(n-q+\rho^2)b_{jj}}{\rho^2}\\ \nonumber&\leq C_1b^{11}+C_2b_{11}+C_3b+C_4 \end{align} where $C_1, C_2, C_3$ and $C_4$ are positive constants, depending only on $\|f\|_{C^1(\mathbb{S}^{n-1})}$, $\|h\|_{C^1(\mathbb{S}^{n-1}\times[0,T))}$, $\min_{\mathbb{S}^{n-1}\times[0,T)}h$, $\min_{\mathbb{S}^{n-1}\times[0,T)}\rho$, and $\min_{\mathbb{S}^{n-1}}f$. Substituting (\ref{5.26}) into (\ref{5.23}), at $x_1$, we have \begin{align*} \frac{\partial_t\widetilde{\mathcal{E}}}{h-h_t} &\leq C_1b^{11}+C_3b+C_4-(2b|\nabla h|^2-a)\sum_ib^{ii}\\ &\ \ \ -b\sum_ib_{ii}+4b(n-1)h+\frac{na}{h}<0 \end{align*} provided $a<2b|\nabla h|^2$ and $b_{11}$ large enough. Hence, $$\mathcal{E}(x_1,t)=\widetilde{\mathcal{E}}(x_1,t)\leq C,$$ for some positive constant $C$, independent of $t$. The proof of Lemma \ref{l5.4} is completed. \end{proof} \subsection{Existence of smooth solution} With the help {of the priori estimates} in subsection 5.2, we show that the flow (\ref{5.1}) exists for all time. \begin{theorem}\label{t5.5} Let {$p,q\in\mathbb{R}$}, and $f: \mathbb{S}^{n-1}\rightarrow(0,\infty)$ be a smooth, positive function satisfying $$\limsup_{s\rightarrow+\infty} (s^{q-p}e^{-\frac{s^2}{2}})<f<\liminf_{s\rightarrow0^+} (s^{q-p}e^{-\frac{s^2}{2}}).$$ Then the flow (\ref{5.1}) has a smooth solution $\Omega_t=F(\mathbb{S}^{n-1},t)$ for all time $t>0$. Furthermore, a subsequence of $\Omega_t$ converges in $C^{\infty}$ to a smooth, closed, and uniformly convex hypersurface whose support function is smooth solution to the equation \begin{align}\label{5.27} e^{-\frac{|\nabla h|^2+h^2}{2}}h^{1-p} (|\nabla h|^2+h^2)^{\frac{q-n}{2}} \det(\nabla^2h+hI)=f. \end{align} \end{theorem} \begin{proof} From the $C^2$-estimates obtained in Lemma \ref{l5.4}, we know that Equation (\ref{5.1}) is uniformly parabolic on any finite time interval and has the short time existence. By $C^0, C^1$ and $C^2$-estimates (Lemmas \ref{l5.2}, \ref{l5.3} and \ref{l5.4}), and the Krylov's theory \cite{K}, we get the H\"{o}lder continuity of $\nabla^2h$ and $\partial_th$. Then we get the higher order derivation estimates by the regularity theory of the uniformly parabolic equations. Therefore, we obtain the long-time existence and regularity of the solution to Equation (\ref{5.1}). 
Moreover, we have \begin{align*} \|h\|_{C_{x,t}^{i,j}(\mathbb{S}^{n-1}\times[0,T))}\leq C \end{align*} for some $C>0$, independent of $t$, and for each pair of nonnegative integers $i, j$. With the aid of the Arzel\`{a}-Ascoli theorem and a diagonal argument, there exists a sequence of times, denoted by $\{t_k\}_{k\in\mathbb{N}}\subset(0,\infty)$, and a smooth function $h(x)$ such that \begin{align*} \|h(x,t_k)-h(x)\|_{C^i(\mathbb{S}^{n-1})}\rightarrow0 \end{align*} for each nonnegative integer $i$ as $t_k\rightarrow\infty$. This implies that $h(x)$ is a support function. Let $\Omega_{\infty}$ be the convex body determined by $h(x)$; we conclude that $\Omega_{\infty}$ is smooth and strictly convex with the origin in its interior. In the following, we prove that Equation (\ref{5.27}) has a smooth solution. From Lemma \ref{l5.1}, we see that \begin{align}\label{5.28} \partial_t\Phi(\Omega_t)\leq0. \end{align} If there exists a time $\tilde{t}$ such that $$\partial_t\Phi(\Omega_t)\bigg|_{t=\tilde{t}}=0,$$ then by the equality condition in Lemma \ref{l5.1}, we have $$e^{-\frac{|\nabla h|^2+h^2}{2}}h^{1-p} (|\nabla h|^2+h^2)^{\frac{q-n}{2}} \det(\nabla^2h+hI)=f,$$ namely, the support function $h(x,\tilde{t})$ of $\Omega_{\tilde{t}}$ satisfies Equation (\ref{5.27}). Next we consider the case where $\partial_t\Phi(\Omega_t)<0$ for all $t$. From Lemma \ref{l5.1} and the uniform bounds in Lemma \ref{l5.2}, there exists a positive constant $C$, independent of $t$, such that \begin{align}\label{5.29} |\Phi(\Omega_t)|\leq C, \end{align} and $\partial_t\Phi(\Omega_t)$ is uniformly continuous. Combining (\ref{5.28}) and (\ref{5.29}) with the fundamental theorem of calculus, we obtain $$\int_0^t\big|\partial_s\Phi(\Omega_s)\big|ds =\Phi(\Omega_0)-\Phi(\Omega_t) \leq2C \ \ {\rm for\ every\ } t>0,$$ which leads to $$\int_0^\infty\big|\partial_t\Phi(\Omega_t)\big|dt\leq2C<\infty.$$ This implies that there exists a subsequence of times $t_j\rightarrow\infty$ such that $$\lim_{t_j\rightarrow\infty}\partial_t\Phi(\Omega_{t_j})=0.$$ From the proof of Lemma \ref{l5.1}, we have \begin{align*} \partial_t\Phi(\Omega_t)\bigg|_{t=t_j} =-\int_{\mathbb{S}^{n-1}}\frac{h\rho^{q-n} e^{-\frac{\rho^2}{2}}}{\mathcal{K}}\bigg(\frac{f}{h^{1-p}} \frac{\mathcal{K}}{\rho^{q-n} e^{-\frac{\rho^2}{2}}}-1\bigg)^2dx\bigg|_{t=t_j}. \end{align*} Passing to the limit, we have \begin{align*} 0&=\lim_{t_j\rightarrow\infty}\partial_t \Phi(\Omega_t)\bigg|_{t=t_j}\\ &=-\int_{\mathbb{S}^{n-1}}\frac{h_{\infty} \rho^{q-n}_{\infty}e^{-\frac{\rho^2_{\infty}}{2}}} {\mathcal{K}_{\infty}}\bigg(\frac{f}{h_{\infty}^{1-p}} \frac{\mathcal{K}_{\infty}}{\rho_{\infty}^{q-n} e^{-\frac{\rho^2_{\infty}}{2}}}-1\bigg)^2dx\\ &\leqslant0, \end{align*} which implies that $$\frac{f}{h_{\infty}^{1-p}}\frac{\mathcal{K}_{\infty}} {\rho_{\infty}^{q-n}e^{-\frac{\rho^2_{\infty}}{2}}}=1,$$ where $h_\infty$, $\rho_{\infty}$ and $\mathcal{K}_{\infty}$ are the support function, radial function and Gauss curvature of the limit convex hypersurface $\Omega_\infty$, respectively. The proof of Theorem \ref{t5.5} is completed. \end{proof} As an application, we have \begin{corollary}\label{c5.6} Under the assumptions of Theorem \ref{t5.5}, there exists a smooth solution to the $L_p$-Gauss dual Minkowski problem (\ref{1.2}) for $p,q\in\mathbb{R}$. \end{corollary} \section{Uniqueness of solution}\label{S6} In this section, we provide a uniqueness result for the $L_p$-Gauss dual Minkowski problem under an appropriate condition; see e.g. \cite{CW,LL} for similar techniques.
\begin{theorem}\label{t6.1} Let $p, q\in\mathbb{R}$ with $q<p$, and let $f: \mathbb{S}^{n-1}\rightarrow(0,\infty)$ be a smooth, positive function on $\mathbb{S}^{n-1}$. Then the solution to the equation \begin{align}\label{6.1} e^{-\frac{|\nabla h|^2+h^2}{2}}h^{1-p} (|\nabla h|^2+h^2)^{\frac{q-n}{2}} \det(\nabla^2h+hI)=f \end{align} is unique. \end{theorem} \begin{proof} Let $h_1$ and $h_2$ be two solutions to the equation (\ref{6.1}), and let $x_0\in\mathbb{S}^{n-1}$ be such that $$M(x_0)=\max_{x\in\mathbb{S}^{n-1}}\frac{h_1(x)}{h_2(x)} =\frac{h_1(x_0)}{h_2(x_0)}.$$ We now prove $M(x_0)\leq1$ by contradiction. Suppose $M(x_0)>1$; then \begin{align}\label{6.2} h_1(x_0)>h_2(x_0). \end{align} At $x_0$, we have \begin{align}\label{6.3} 0=\nabla M=\frac{h_2\nabla h_1-h_1\nabla h_2}{h_2^2}, \ \ i.e., \ \ \frac{\nabla h_1}{h_1}=\frac{\nabla h_2}{h_2}, \end{align} and by (\ref{6.3}), one has \begin{align}\label{6.4} 0\geq\nabla^2M=\frac{h_2\nabla^2 h_1-h_1\nabla^2h_2}{h_2^2}, \ \ i.e., \ \ \frac{\nabla^2 h_1}{h_1}\leq\frac{\nabla^2 h_2}{h_2}. \end{align} From the equation (\ref{6.1}), using (\ref{6.3}) and (\ref{6.4}), we have \begin{align*} 1&=\frac{e^{-\frac{|\nabla h_2|^2+h_2^2}{2}}h_2^{1-p} (|\nabla h_2|^2+h_2^2)^{\frac{q-n}{2}} \det(\nabla^2h_2+h_2I)} {e^{-\frac{|\nabla h_1|^2+h_1^2}{2}}h_1^{1-p} (|\nabla h_1|^2+h_1^2)^{\frac{q-n}{2}} \det(\nabla^2h_1+h_1I)}\\ &=\frac{e^{-\frac{(|\frac{\nabla h_2}{h_2}|^2+1)h_2^2}{2}}h_2^{1-p}[h_2^2 (|\frac{\nabla h_2}{h_2}|^2+1)]^{\frac{q-n}{2}}h_2^{n-1} \det(\frac{\nabla^2h_2}{h_2}+I)} {e^{-\frac{(|\frac{\nabla h_1}{h_1}|^2+1)h_1^2}{2}}h_1^{1-p}[h_1^2 (|\frac{\nabla h_1}{h_1}|^2+1)]^{\frac{q-n}{2}}h_1^{n-1} \det(\frac{\nabla^2h_1}{h_1}+I)}\\ &=\frac{e^{-\frac{(|\frac{\nabla h_2}{h_2}|^2+1)h_2^2}{2}}h_2^{q-p}\det(\frac{\nabla^2h_2}{h_2}+I)} {e^{-\frac{(|\frac{\nabla h_1}{h_1}|^2+1)h_1^2}{2}}h_1^{q-p} \det(\frac{\nabla^2h_1}{h_1}+I)}\\ &\geq\frac{e^{-\frac{(|\frac{\nabla h_2}{h_2}|^2+1)h_2^2}{2}}h_2^{q-p}} {e^{-\frac{(|\frac{\nabla h_1}{h_1}|^2+1)h_1^2}{2}}h_1^{q-p}}, \end{align*} where the last inequality follows from (\ref{6.4}), i.e., $$\bigg(\frac{e^{\frac{h_2^2}{2}}}{e^{\frac{h_1^2}{2}}}\bigg) ^{(|\frac{\nabla h_1}{h_1}|^2+1)}\geq \bigg(\frac{h_2}{h_1}\bigg)^{q-p}.$$ By (\ref{6.2}), and noting $q<p$, we have $$\bigg(\frac{h_2}{h_1}\bigg)^{q-p}\geq1.$$ Hence $$e^{\frac{h_2^2}{2}}\geq e^{\frac{h_1^2}{2}}.$$ Namely, $h_2(x_0)\geq h_1(x_0)$. This contradicts (\ref{6.2}). Thus $h_2(x)\geq h_1(x)$ for all $x\in\mathbb{S}^{n-1}$. On the other hand, let $$m(x_1)=\min_{x\in\mathbb{S}^{n-1}}\frac{h_1(x)}{h_2(x)} =\frac{h_1(x_1)}{h_2(x_1)},\ \ x_1\in\mathbb{S}^{n-1}.$$ Applying the same argument as above, we can get $$h_1(x)\geq h_2(x)$$ for all $x\in\mathbb{S}^{n-1}$. Therefore, we obtain $$h_1(x)\equiv h_2(x)$$ for all $x\in\mathbb{S}^{n-1}$. \end{proof} \vskip 1.0cm \begin{thebibliography}{11} {\small \bibitem{BB} G. Bianchi, K. J. B\"{o}r\"{o}czky, A. Colesanti and D. Yang, {\it The $L_p$ Minkowski problem for $-n<p<1$}, Adv. Math., {\bf 341} (2019), 493-535. \bibitem{BH} K. J. B\"{o}r\"{o}czky, P. Heged\"{u}s and G. Zhu, {\it On the discrete logarithmic Minkowski problem}, Int. Math. Res. Not., {\bf 6} (2016), 1807-1838. \bibitem{BoL} K. J. B\"{o}r\"{o}czky, E. Lutwak, D. Yang and G. Zhang, {\it The logarithmic Minkowski problem}, J. Amer. Math. Soc., {\bf 26} (2013), 831-852. \bibitem{BLYZ} K. J. B\"{o}r\"{o}czky, E. Lutwak, D. Yang, G. Zhang and Y. Zhao, {\it The Gauss image problem}, Comm. Pure Appl. Math., {\bf 73} (2020), 1406-1452. \bibitem{BIS} P. Bryan, M. N. Ivaki and J.
Scheuer, {\it Orlicz-Minkowski flows}, Calc. Var. Partial Differential Equations, {\bf 60} (2021), Paper No. 41. \bibitem{CC} B. Chen, J. Cui and P. Zhao, {\it Inverse Gauss curvature flows and Orlicz Minkowski problem}, Anal. Geom. Metr. Spaces, {\bf 10} (2022), 330-343. \bibitem{CFL} S. Chen, Y. Feng and W. Liu, {\it Uniqueness of solutions to the logarithmic Minkowski problem in $\mathbb{R}^3$}, Adv. Math., {\bf 411} (2022), 108782. \bibitem{CH} C. Chen, Y. Huang and Y. Zhao, {\it Smooth solutions to the $L_p$ dual Minkowski problem}, Math. Ann., {\bf 373} (2019), 953-976. \bibitem{CL} H. Chen and Q.-R. Li, {\it The $L_p$ dual Minkowski problem and related parabolic flows}, J. Funct. Anal., {\bf 281} (2021), Paper No. 109139, 65. \bibitem{CHZ} S. Chen, Q.-R. Li and G. Zhu, {\it On the $L_p$ Monge-Amp\`ere equation}, J. Differential. Equqtions, {\bf 263} (2017), 4997-5011. \bibitem{CHZ1} S. Chen, Q.-R. Li and G. Zhu, {\it The logarithmic Minkowski problem for non-symmetric measures}, Trans. Amer. Math. Soc., {\bf 371} (2019), 2623-2641. \bibitem{CWX} L. Chen, D. Wu and N. Xiang, {\it Smooth solutions to the Gauss image problem}, Pacific J. Math., {\bf 317} (2022), 275-295. \bibitem{CF} A. Colesanti and M. Fimiani, {\it The Minkowski problem for the torsional rigidity}, Indiana Univ. Math. J., {\bf 59} (2010), 1013-1039. \bibitem{CNS} A. Colesanti, K. Nystr\"{o}m, P. Salani, J. Xiao, D. Yang and G. Zhang, {\it The Hadamard variational formula and the Minkowski problem for $p$-capacity}, Adv. Math., {\bf 285} (2015), 1511-1588. \bibitem{CW} K.-S. Chou and X.-J. Wang, {\it The $L_p$-Minkowski problem and the Minkowski problem in centroaffine geometry}, Adv. Math., {\bf 205} (2006), 33-823. \bibitem{FHX} Y. Feng, S. Hu and L. Xu, {\it On the $L_p$ Gaussian Minkowski problem}, J. Differential Equations, {\bf 363} (2023), 350-390. \bibitem{FLX} Y. Feng, W. Liu and L. Xu, {\it Existence of non-symmetric solutions to the Gaussian Minkowski problem}, J. Geom. Anal., {\bf 33} (2023), Paper No. 89. \bibitem{F} Y. Feng, Y. Li and L. Xu, {\it Existence of solutions to the Gaussian dual Minkowski problem}, J. Differential Equations, {\bf 416} (2025), 368-298. \bibitem{FZH} Y. Feng, Y. Zhou and B. He, {\it The $L_p$ electrostatic $q$-capacitary Minkowski problem for general measures}, J. Math. Anal. Appl., {\bf 487} (2020) 123959. \bibitem{Fi} W. J. Firey, {\it Shapes of worn stones}, Mathematika, {\bf 21} (1974), 1-11. \bibitem{G} R.J. Gardner, Geometric Tomography, second ed., Cambridge Univ. Press, New York, 2006. \bibitem{GHW} R. J. Gardner, D. Hug and W. Weil, {\it The Orlicz-Brunn-Minkowski theory: a general framework, additions, and inequalities}, J. Differential. Geom., {\bf 97} (2014), 427-476. \bibitem{GaHW} R. J. Gardner, D. Hug, W. Weil, S. Xing and D. Ye, {\it General volumes in the Orlicz Brunn-Minkowski theory and a related Minkowski problem I}, Calc. Var. Partial Differential Equations, {\bf 58} (2019), Paper No. 12. \bibitem{HaLYZ} C. Haberl, E. Lutwak, D. Yang and G. Zhang, {\it The even Orlicz Minkowski problem}, Adv. Math., {\bf 224} (2010), 2485-2510. \bibitem{HYZ} H. Hong, D. Ye and N. Zhang, {\it The $p$-capacitary Orlicz-Hadamard variational formula and Orlicz-Minkowski problems}, Calc. Var. Partial Differential Equations, {\bf 57} (2018), Paper No. 5. \bibitem{HH} Q. Huang and B. He, {\it On the Orlicz Minkowski problem for polytopes}, Discrete Comput. Geom., {\bf 48} (2012), 281-297. \bibitem{HQ} Y. Huang and L. Qin, {\it The Gaussian chord Minkowski problem}, Discrete Contin. 
Dyn. Syst. Ser. S, {\bf 17} (2024), 930-944. \bibitem{HLYZ} Y. Huang, E. Lutwak, D. Yang and G. Zhang, {\it Geometric measures in the dual Brunn-Minkowski theory and their associated Minkowski problems}, Acta Math., {\bf 216} (2016), 325-388. \bibitem{HuLY} Y. Huang, E. Lutwak, D. Yang and G. Zhang, {\it The $L_p$-Aleksandrov problem for $L_p$-integral curvature}, J. Differential Geom., {\bf 110} (2018), 1-29. \bibitem{HXZ} Y. Huang, D. Xi and Y. Zhao, {\it The Minkowski problem in Gaussian probability space}, Adv. Math., {\bf 385} (2021) 107769. \bibitem{HLY} D. Hug, E. Lutwak, D. Yang and G. Zhang, {\it On the $L_p$ Minkowski problem for polytopes}, Discrete Comput. Geom., {\bf 33} (2005), 699-715. \bibitem{J} D. Jerison, {\it A Minkowski problem for electrostatic capacity}, Acta Math., {\bf 176} (1996), 1-47. \bibitem{JW} H. Jian, J. Lu and X.-J. Wang, {\it Nonuniqueness of solutions to the $L_p$-Minkowski problem}, Adv. Math., {\bf 281} (2015), 845-856. \bibitem{K} N. V. Krylov, {\it Nonlinear elliptic and parabolic equations of the second order}, Mathematics and its Applications (Soviet Series), 7. D. Reidel Publishing Co., Dordrecht, 1987. \bibitem{LSW} Q.-R. Li, W. Sheng and X.-J. Wang {\it Flow by Gauss curvature to the Aleksandrov and dual Minkowski problem}, J. Eur. Math. Soc., {\bf 22} (2020), 893-923 \bibitem{Liu} J. Liu, {\it The $L_p$-Gaussian Minkowski problem}, Calc. Var. Partial Differential Equations, {\bf 61} (2022), Paper No. 28. \bibitem{LL} Y. Liu and J. Lu, {\it A flow method for the dual Orlicz-Minkowski problem}, Trans. Amer. Math. Soc., {\bf 373} (2020), 5833-5853. \bibitem{L} E. Lutwak, {\it The Brunn-Minkowski-Firey theory I. Mixed volumes and the Minkowski problem}, J. Differential. Geom., {\bf 38} (1993), 131-150. \bibitem{Lu} E. Lutwak, D. Yang and G. Zhang, {\it On the $L_p$-Minkowski problem}, Trans. Amer. Math. Soc., {\bf 356} (2004), 4359-4370. \bibitem{LYZ} E. Lutwak, D. Yang and G. Zhang, {\it $L_p$ dual curvature measures}, Adv. Math., {\bf 329} (2018), 85-132. \bibitem{S} R. Schneider, Convex Bodies: The Brunn-Minkowski theory, 2nd edn, {\it Cambridge Univ. Press}, Cambridge, 2014. \bibitem{U} J. Urbas, {\it An expansion of convex hypersurfaces}, J. Differential Geom., {\bf 33} (1991), 91-125. \bibitem{W} H. Wang, {\it Continuity of the solution to the $L_p$ Minkowski Pproblem in Gaussian probability space}, Acta Math. Sin. (Engl. Ser.), {\bf 38} (2022), 2253-2264. \bibitem{WW} C. Wu, D. Wu and N. Xiang, {\it The $L_p$ Gauss image problem}, Geom. Dedicata, {\bf 216} (2022), No. 62. \bibitem{Z} G. Zhu, {\it The logarithmic Minkowski problem for polytopes}, Adv. Math., {\bf 262} (2014), 909-931. \bibitem{ZX} D. Zou and G. Xiong, {\it The $L_p$ Minkowski problem for the electrostatic $\mathfrak{p}$-capacity}, J. Differential Geom., {\bf 116} (2020), 555-596. } \end{thebibliography} \end{document}
2412.13583v1
http://arxiv.org/abs/2412.13583v1
Spectrum and Lifshitz tails for the Anderson model on the Sierpinski gasket graph
\documentclass[11pt,reqno]{amsart} \usepackage{amsmath,amsfonts,amsthm,amssymb,amsxtra} \usepackage[colorlinks,citecolor=red,hypertexnames=false]{hyperref} \usepackage{color} \usepackage{tikz} \usepackage[shortlabels]{enumitem} \usepackage{bbm} \usepackage{stmaryrd} \usepackage{nicematrix} \usepackage{mathrsfs} \usepackage{float} \usepackage{graphicx,subfigure} \usepackage[margin=1.1in]{geometry} \newtheorem{theorem}{Theorem}[section] \newtheorem{thm}[theorem]{Theorem} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}{Claim} \theoremstyle{definition} \newtheorem{definition}{Definition} \newtheorem{defn}[definition]{Definition} \newtheorem{example}{Example} \newtheorem{ex}[example]{Example} \newtheorem{assumption}{Assumption} \newtheorem{question}{Question} \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \newtheorem{rmk}[remark]{Remark} \newtheorem{remarks}{Remarks}[section] \numberwithin{equation}{section} \allowdisplaybreaks \newcommand{\A}{\mathbf{A}} \newcommand{\B}{\mathfrak{B}} \newcommand{\ball}{B} \newcommand{\C}{\mathbb{C}} \newcommand{\cl}{\mathrm{cl}} \newcommand{\const}{\mathrm{const}\ } \newcommand{\D}{\mathcal{D}} \renewcommand{\epsilon}{\varepsilon} \newcommand{\I}{\mathbb{I}} \newcommand{\F}{\mathcal{F}} \newcommand{\loc}{{\rm loc}} \newcommand{\mg}{\mathrm{mag}} \newcommand{\N}{\mathbb{N}} \newcommand{\V}{\mathbb{V}} \newcommand{\G}{\mathbb{G}} \newcommand{\ope}{\mathrm{op}} \renewcommand{\phi}{\varphi} \newcommand{\R}{\mathbb{R}} \newcommand{\Rp}{\text{Re\,}} \newcommand{\Sph}{\mathbb{S}} \newcommand{\T}{\mathbb{T}} \newcommand{\w}{\mathrm{weak}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\om}{\omega} \newcommand{\Om}{\Omega} \newcommand{\lat}{\mathcal{L}} \newcommand{\E}{\mathbb{E}} \newcommand{\lan}{\langle} \newcommand{\ran}{\rangle} \newcommand{\g}{\gamma} \newcommand{\vecone}{\overrightarrow{1}} \newcommand{\one}{\mathbbm{1}} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \DeclareFontFamily{U}{mathx}{} \DeclareFontShape{U}{mathx}{m}{n}{<-> mathx10}{} \DeclareSymbolFont{mathx}{U}{mathx}{m}{n} \DeclareMathAccent{\widehat}{0}{mathx}{"70} \DeclareMathAccent{\widecheck}{0}{mathx}{"71} \renewcommand{\check}{\widecheck} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \newcommand{\pdelta}{\Delta^{(p)}} \newcommand{\cdelta}{\Delta^{(c)}} \newcommand{\ip}[1]{\left\langle #1 \right \rangle} \newcommand{\intbr}[2]{\llbracket #1 , #2 \rrbracket } \newcommand{\ipc}[2]{ \langle #1 , #2 \rangle } \newcommand{\ipca}[2]{\left ( #1 , #2 \right )_a } \newcommand{\braket}[3]{\left \langle #1 \, | #2 |\, #3 \right \rangle } \newcommand{\Ev}[1]{\mathbb{E} \left( #1 \right)} \newcommand{\Evc}[2]{\mathbb{E} \left ( #1 \middle | #2 \right )} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\setb}[2]{\left \{ #1 \ \middle | \ #2 \right \} } \newcommand{\com}[2]{\left[ #1 , #2 \right ]} \newcommand{\bra}[1]{\left < #1 \right |} \newcommand{\ket}[1]{\left | #1 \right >} \newcommand{\dirac}[3]{\bra{#1} #2 \ket{#3}} \newcommand{\diracip}[2]{\left <#1 \middle | #2 \right >} \newcommand{\Evac}[2]{\bb{E} \left ( #1 \middle | #2 \right )} \newcommand{\Eva}[1]{\bb{E} \left ( #1 \right )} \newcommand{\Oh}[1]{\mc{O} \left (#1 \right )} \newcommand{\card}[1]{{\rm Card}\left\{\, #1 \, \right\}} \newcommand{\cards}[1]{{\rm Card}\left( #1 \right)} 
\renewcommand{\P}{\mathbb{P}} \renewcommand{\Pr}[1]{ {\P}\left\{\, #1 \, \right\}} \newcommand{\Prr}[1]{{\P}\left (\, #1 \, \right)} \renewcommand{\emptyset}{\o} \newcommand{\cH}{\mathcal H} \newcommand{\eps}{\varepsilon} \newcommand{\bigzero}{\mbox{\normalfont\Large\bfseries 0}} \newcommand{\rvline}{\hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}} \newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\Div}{div} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\re}{Re} \DeclareMathOperator{\spec}{spec} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\tr}{Tr} \DeclareMathOperator{\den}{den} \DeclareMathOperator{\Bern}{Bern} \newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}} \newcommand{\HA}{H^A} \begin{document} \title[Spectrum and Lifshitz tails for SG]{Spectrum and Lifshitz tails for the Anderson model on the Sierpinski gasket graph} \author[L. Shou, W. Wang, S. Zhang]{Laura Shou, Wei Wang, Shiwen Zhang} \begin{abstract} In this work, we study the Anderson model on the Sierpinski gasket graph. We first identify the almost sure spectrum of the Anderson model when the support of the random potential has no gaps. We then prove the existence of the integrated density of states of the Anderson model and show that it has Lifshitz tails with Lifshitz exponent determined by the ratio of the volume growth rate and the random walk dimension of the Sierpinski gasket graph. \end{abstract} \maketitle \section{Introduction} Random Schr\"odinger operators play an important role in describing the quantum state of a particle in a disordered material. The tight-binding approximation simplifies the model further by restricting the positions of electrons, leading to a Hamiltonian given by a discrete random Schr\"odinger operator $-\Delta+V$ acting on a Hilbert space $\ell^2(\mathbb G)$, with $\mathbb G$ the vertices of a graph, $\Delta$ the graph Laplacian, and $V$ a random potential, acting as a multiplication operator $V(x)=V_\omega(x)$. The simplest, but also one of the most prominent, forms of the random potential is to take $\{V(x)\}_{x\in\G}$ as independent and identically distributed (i.i.d.) random variables. The corresponding random operator is called the \emph{Anderson model}. The Anderson model, along with more general random operators, has long been investigated in the ergodic setting, on a vertex-transitive graph such as the standard $\Z^d$ lattice, or the Bethe lattice (a regular tree graph); see a thorough discussion in e.g. \cite{aizenman2015random}. Fractal graphs are notable examples that are not vertex-transitive. In this paper, we study spectral properties of the Anderson model on such a fractal graph, the Sierpinski gasket graph, also called the \emph{Sierpinski lattice}\footnote{The `true' Sierpinski gasket $K$ is a compact fractal subset of $\R^2$, constructed by a self-similar iterated function scheme, see e.g. \cite{fal1990,barlow2017random}. The Sierpinski lattice $\G$, as an infinite combinatorial graph, is defined with a similar structure, whose large-scale structure mimics the microscopic structure of the Sierpinski gasket $K$. }. There is a large literature about the free Laplacian $\Delta$ on the Sierpinski gasket or Sierpinski lattice, as well as more general nested fractals and p.c.f.
self-similar sets or graphs, see e.g. \cite{alex1984,ram1984, kusuoka1987diffusion,goldstein1987random, barlow1988, kig1989, fuku1988,shima1991eigenvalue,fuku1992,lind1990,lapidus1994,malo1993, teplyaev1998spectral,dalrymple1999fractal,stri1999}, which is far from a complete list. For the Anderson model and other random Hamiltonians on fractals, there are many works in the physics literature, see e.g. \cite{schreib1996,stic2016,kosi2017,manna2024}, among others. But limited work has been done for the Anderson model on fractals in the math literature, except, e.g., in \cite{shima1991lifschitz, pietruska1991lifschitz,ma95,balsam2023density}. In the absence of an ergodic setting, it is in general not easy to determine the full spectrum of a random operator, nor its different spectral components. In particular, to the best of our knowledge, there is no work on the (topological) structure or identification of deterministic spectra for the Anderson model on the Sierpinski gasket or Sierpinski lattice, nor on other fractal sets or graphs. The first result of this paper is the almost-sure spectrum of the discrete random Schr\"odinger operator on the Sierpinski lattice when the support of the random potential has no gaps. We also obtain partial results on the spectrum for general random potentials. On the other hand, the works of \cite{shima1991lifschitz, pietruska1991lifschitz,balsam2023density} concern the continuous/differential random Schr\"odinger operator on the Sierpinski gasket (rather than the discrete/graphical Sierpinski lattice), and prove the existence and the Lifshitz-type singularity of the integrated density of states (IDS) for the random Schr\"odinger operator. There is no discrete analogue of these results. The second part of this paper obtains the existence of the IDS for the Anderson model on the Sierpinski lattice, and establishes Lifshitz tail estimates for the IDS near the bottom of the spectrum. \subsection{Main results} To state our main results, we first introduce the Sierpinski lattice and the associated Anderson model. Let $T_0\subseteq \R^2$ be the unit equilateral triangle with the vertex set $\G_0=\{(0,0),(1,0),(1/2,\sqrt 3/2)\}=\{ a_1,a_2,a_3\}$. Now, recursively define $T_n$ by \begin{align}\label{eqn:T-rec} T_{n+1}=T_n\cup \big(T_n+2^na_2\big) \cup \big(T_n+2^na_3\big), \ n\ge 0. \end{align} Each $T_n$ consists of $3^n$ translated copies of the unit triangle $T_0$. Let $\G_n$ be the collection of all vertices of the triangles in $T_n$. The set of edges $\mathcal E_n\subseteq\{(x,y):x,y\in \G_n\}$ is defined by the relation $(x,y)\in \mathcal E_n$ iff there exists a triangle $T$ of side length 1 in $T_n$ with $x,y\in T$. Let \begin{align}\label{eqn:SG-def} \G=\bigcup_{n\ge 0}(\G_n\, \cup \, \G_n'), \ \ \mathcal E=\bigcup_{n\ge 0}(\mathcal E_n\, \cup \mathcal E_n'), \ \ \Gamma=(\G,\mathcal E), \end{align} where $\G_n'$ and $\mathcal E_n'$ are the symmetric images of $\G_n$ and $\mathcal E_n$ with respect to the $y$-axis, respectively; see Figure~\ref{fig:V3}. \begin{figure} \centering \includegraphics[width=.8\linewidth]{Fig/fig1.png} \caption{ The unit triangle $T_0$ is located next to the origin, with vertices $\G_0=\{(0,0),(1,0),(1/2,\sqrt3/2)\}$. The right-hand side of the $y$-axis shows the third construction step $T_3$, and the left-hand side is its mirror image with respect to the $y$-axis. The picture contains $2\times 27$ unit triangles $T$, which are all translates of $T_0$. The dots form the vertex set $\G_3'\cup\G_3$.
The edge set $\mathcal E_3'\cup\mathcal E_3$ consists of all edges of length 1 of the unit triangles. } \label{fig:V3} \end{figure} $\Gamma=(\G,\mathcal E)$ is called the (infinite) Sierpinski gasket graph, or Sierpinski lattice, with the vertex set $\G$ and the edge set $\mathcal E$. We write $x\sim y$ to mean $(x,y)\in \mathcal E$, and say that $y$ is a neighbor of $x$. The (combinatorial) graph Laplacian on the Sierpinski lattice is given by \begin{align}\label{eqn:Lap} \Delta f(x)= \sum_{y\in \G:y\sim x}\left(f(y)- f(x)\right), \ x\in \G, \end{align} acting on $\ell^2(\G)$ equipped with the usual inner product $\langle f,g\rangle=\sum_{x\in\G} f(x)\overline{ g(x)}$. The Anderson model on the Sierpinski lattice is given by the random Hamiltonian $H_\omega=-\Delta+V_\omega$ on $\ell^2(\G)$: \begin{align}\label{eqn:AM} H_\omega f(x)=-\sum_{y\in \G: y\sim x}(f(y)-f(x))+V_\omega(x)f(x), \ \ x\in \G, \end{align} where $V_\omega$ is a random potential, acting as the usual multiplicative operator. $\{V_\omega(x)\}_{x\in \G}$ are independent, identically distributed (i.i.d.) random variables, with a common distribution $P_0$. We denote by $ \supp P_0$ the (essential) support of the measure $P_0$, defined as \begin{align}\label{eqn:Vsupp} \supp P_0=\{x\in \R\ | \ P_0(x-\varepsilon,x+\varepsilon)>0\ {\rm for\ all\ } \varepsilon>0\ \}. \end{align} The first result of the paper is the a.s. structure of the spectrum of $H_\omega=-\Delta+V_\omega$. \begin{theorem}\label{thm:AM-as-spectrum} Let $H_\omega=-\Delta+V_\omega$ be the Anderson model \eqref{eqn:AM} on the Sierpinski lattice $\G$. Then almost surely, \begin{align}\label{eqn:AM-as-spectrum1} \sigma(-\Delta)+ \supp P_0 \subseteq \sigma(H_\omega)\subseteq \sigma(-\Delta)+[\inf V_\omega,\sup V_\omega] , \end{align} and \begin{align}\label{eqn:AM-as-spectrum3} \sigma(H_\omega)\subseteq[0,6]+ \supp P_0 \end{align} In particular, if the essential support of the potential is an interval, i.e., $ \supp P_0 =[a,b]$ for $-\infty\le a<b\le \infty$, then the spectrum of $H_\omega$ is almost surely a constant set given by \begin{align}\label{eqn:AM-as-spectrum2} \sigma(H_\omega) =\sigma(-\Delta)+ \supp P_0 . \end{align} \end{theorem} \begin{remark} The spectrum of the free Laplacian $-\Delta$ on the Sierpinski lattice as a set was first computed by \cite{belli1982stab,belli1988ren,fuku1992}. The nature of $\sigma(-\Delta)$ and the structure of eigenfunctions were later determined in \cite{teplyaev1998spectral}. As shown in \cite[Theorem 2]{teplyaev1998spectral}, the spectrum $\sigma(-\Delta)\subseteq[0,6]$ consists of isolated eigenvalues and a Cantor set. In particular, $0=\inf \sigma(-\Delta)$ is in the essential/Weyl spectrum, and $6=\sup \sigma(-\Delta)$ is the largest isolated eigenvalue of infinite multiplicity. For the Anderson model $-\Delta+V_\omega$, from Theorem~\ref{thm:AM-as-spectrum} we only know that the almost sure constant spectrum structure \eqref{eqn:AM-as-spectrum2} holds when there are no gaps in the support of the potential, e.g. for the uniform distribution, Gaussian distribution, etc. For general distributions, we do not know how to show $ \sigma(-\Delta+V_\omega)$ is a non-random set. In addition, the relation \eqref{eqn:AM-as-spectrum2} may not hold for, e.g. Bernoulli distributions; see for example numerical experiments in Figure~\ref{fig:gasket-ids}. 
Given a Bernoulli potential with $ \supp P_0 =\{a,b\}$, the best that we can obtain from \eqref{eqn:AM-as-spectrum3} is \begin{align} \sigma(-\Delta+V_\omega)\subseteq[0,6]+\{a,b\}= \Big(a+[0,6] \Big)\cup \Big(b+[0,6]\Big). \end{align} \end{remark} \begin{remark} It is well known, dating back to L. Pastur \cite{pastur1980}, that the spectrum of a family of \emph{ergodic} operators is almost surely non-random. However, since the Sierpinski lattice $\G$ is not vertex transitive, \eqref{eqn:AM} is not realized as an ergodic family in the usual way by a natural vertex-transitive group of graph automorphisms (see e.g. \cite[Definition 3.4.]{aizenman2015random}). We thus do not know if there is a similar approach to show the deterministic nature of the spectrum. The more specific structure of the spectrum as described by the form of \eqref{eqn:AM-as-spectrum2} was determined for the Anderson model on $\Z^d$ in \cite{kunz1980,kirsch1982cmp} using the Weyl criterion. Theorem~\ref{thm:AM-as-spectrum} can be viewed as an extension of the result of \cite{kunz1980} from the $\Z^d$ lattice to the Sierpinski lattice $\G$ in certain cases. \end{remark} \begin{figure}[!ht] \centering \subfigure []{ \includegraphics[width=7.0 cm]{Fig/fig2a.png} } \quad \subfigure[]{ \includegraphics[width=7.0cm]{Fig/fig2b.png} } \caption{ (a) IDS of $-\Delta$ and $-\Delta+V_\omega$ with a 0-10 Bernoulli potential, for a finite gasket with sidelength $2^8$. The spectrum $\mathcal C=\sigma(-\Delta)$ is a Cantor subset of $[0,6]$. Clearly, $\sigma(-\Delta+V_\omega)\subseteq[0,6]\cup[10,16]$. But it does not appear that $\sigma(-\Delta+V_\omega)\subseteq\mathcal C\cup (10+\mathcal C)$. (b) The enlarged view of the bottom part of the IDS shows the Lifshitz tails in the Bernoulli case (red), along with a reference exponential function (light blue) where $\tau=\frac{\log 3}{\log 5}$, $m_1=1.38$, $m_2=-4.64$. Additionally, we plot the IDS for the standard Laplacian (in black) for comparison along with the reference line $c_1E^{\tau}$, with $c_1=0.135$, in yellow.}\label{fig:gasket-ids} \end{figure} Next, we study the integrated density of states of the random Schr\"odinger operator $H_\omega$, as the limit of a sequence of finite volume eigenvalue counting functions. Given $L\in \N$, let $B_{L}=B(O,2^L)\subseteq\G$ be the (graph metric) ball, centered at the origin $O=(0,0)$, with radius $2^L$. Let $H_\omega^{B_L}=\one_{B_L}H_\omega \one_{B_L}$ be the restriction of $H_\omega$ to the finite-dimensional space $\ell^2(B_L)$. Denote the eigenvalue counting function below the energy $E$ of $H_\omega^{B_L}$ by \begin{align} \mathcal N(E;H_\omega^{B_L})=\#\big\{\ {\rm eigenvalues}\ E'\ {\rm of}\ H_\omega^{B_L} \ {\rm such\ that\ }\ E'\le E\ \big\}. \end{align} \begin{theorem}\label{thm:IDS-exist-lif} Let $H_\omega=-\Delta+V_\omega$ be the Anderson model \eqref{eqn:AM} on the Sierpinski lattice $\G$. Then there exists a non-random, right-continuous, non-decreasing function $N(E)$ such that almost surely, \begin{align}\label{eqn:IDS-exist-intro} N(E) = \lim_{L\to\infty}\frac{1}{|B_{L}|} \E \mathcal N(E; H_\omega^{ B_{L}})=\lim_{L\to\infty}\frac{1}{|B_{ L}|} \mathcal N(E; H_\omega^{ B_{ L}}) \ \ {\textrm{for all continuity points of}}\; N(E). \end{align} The limit $N(E)$ is defined to be the integrated density of states of $H_\omega$. Suppose, in addition, that the common distribution $P_0$ of $\{V_\omega(x)\}_{x\in \G}$ satisfies $\inf \supp P_0=0$ and $P_0([0,\eps])\ge C\eps^\kappa$ for some $C,\kappa>0$.
Then \begin{align}\label{eqn:lif-SG-intro} \lim_{E\searrow 0} \frac{\log \big|\log N (E)\big|}{\log E}=-\frac{\log 3}{\log5}. \end{align} \end{theorem} \begin{remark} In general, to relate the finite volume approximations to the infinite operator, it is convenient to set $H_\omega^{B_L}$ to be zero in the complement of $B_L$, corresponding to the so-called simple or zero Dirichlet boundary conditions. This is how one can interpret the operators in $H_\omega^{B_L}$ in \eqref{eqn:IDS-exist-intro}. The finite volume operator can however take many rather arbitrary choices of boundary conditions: free, Neumann, or wired in some way. We will state a more general version of the existence result \eqref{eqn:IDS-exist-intro} in Theorem~\ref{thm:IDS-exist-gen} in Section~\ref{sec:exi-ids}, where we see boundary conditions do not affect the limiting IDS. \end{remark} \begin{remark} We know by \eqref{eqn:AM-as-spectrum1} that the bottom of the spectrum $\sigma(-\Delta+V_\omega)$ is 0 assuming $\inf \supp V=0$. The limit \eqref{eqn:lif-SG-intro} is a weak version of what we expect to be a stronger form of the asymptotic behavior of the IDS near 0, \begin{align}\label{eqn:lif-gen} N(E)\sim C_1e^{-C_2E^{-\tau}},\ \tau=\frac{\log 3}{\log 5}, \ \ \ {\rm as}\ E\searrow 0. \end{align} (See Figure~\ref{fig:gasket-ids}(b) for a numerical illustration.) Such a drastic thinning tail of the IDS as in Eqs.~\eqref{eqn:lif-SG-intro} and \eqref{eqn:lif-gen} is referred to as Lifshitz tails. Note that for the free Laplacian $-\Delta$ on the Sierpinski gasket, the IDS $N(E)$ near the bottom $0$ of $\sigma(-\Delta)$ vanishes at the rate $N(E)\sim CE^{\tau}$ as $E\searrow 0$, as obtained in \cite{fuku1988,fuku1992}. In contrast to the polynomial behavior of the free Laplacian, the IDS for a random Schr\"odinger operator exhibits a different extreme behavior near the bottom such as \eqref{eqn:lif-gen}. The double-log asymptotic behavior \eqref{eqn:lif-SG-intro} will be proved in Section~\ref{sec:lif}. We first discuss more background about Lifshitz tails to finish the introduction. \end{remark} \subsection{More background and historical work on the Lifshitz tails}\label{sec:background} On the (continuum) Sierpinski gasket in $\R^2$, Lifshitz tails of the IDS were first proved for the Laplacian with Poisson obstacles in \cite{pietruska1991lifschitz}. Approximately the same time, similar Lifshitz tails were obtained for random Schr\"odinger operators on general nested fractals in $\R^d,d\ge2$ in \cite{shima1991lifschitz}. More recently, there are more generalizations in \cite{balsam2023density} for the differential case on nested fractals with good labeling properties. All these works are for continuous/differential operators on the Sierpinski gasket or other continuous fractals. In this context, Eq.~\eqref{eqn:lif-SG-intro} of Theorem~\ref{thm:IDS-exist-lif} extends the Lifshitz tails to the Anderson model on the discrete/graphical Sierpinski lattice $\G$. The original Lifshitz tails phenomenon was first identified in the 1960s by I.M. Lifshitz \cite{lif1965}. It has since been extensively studied with rigorous proof for various random models on $\R^d$ or $\Z^d$. We do not discuss all the related works here, but we refer readers to \cite[Section 6]{kirsch2007invitation} and \cite[Chapter 4]{aizenman2015random} for a thorough review. Note that the original Lifshitz tail of the IDS $N(E)$ on $\R^d$ or $\Z^d$ is asymptotically $ N(E)\sim C_1e^{-C_2E^{-d/2}}$, as $ E\searrow0$ (assuming the bottom of the spectrum is at 0). 
The index $d/2$ is usually referred to as the Lifshitz exponent. One obtains a different Lifshitz exponent $\tau=\log3/\log5$ on the Sierpinski gasket (\cite{pietruska1991lifschitz,shima1991lifschitz}) and on the Sierpinski lattice \eqref{eqn:lif-SG-intro}. To relate $\tau$ to the exponent $d/2$, one writes $\tau=d_s/2$, where $d_s$ is the `spectral dimension' of the gasket. In fact, there is a more intrinsic way to link the Lifshitz exponent to the two parameters of the so-called \emph{Heat Kernel Bound} $\mathrm{HK}(\alpha,\beta)$, a property of the free Laplacian on the corresponding space. The Euclidean space/lattice $\R^d$ or $\Z^d$ satisfies $\mathrm{HK}(d,2)$, while the Sierpinski gasket or Sierpinski lattice satisfies $\mathrm{HK}(\log3/\log2,\log5/\log 2)$. In either case, we see that the Lifshitz exponent is given by the ratio of the two parameters $\alpha/\beta$. It is of great interest to us whether the Lifshitz singularity with exponent $\alpha/\beta$ holds for the Anderson model on more general graphs satisfying the \emph{Heat Kernel Bound} $\mathrm{HK}(\alpha,\beta)$, without additional regularity, self-similarity, or good labeling properties of the graph. For random differential/continuous operators on the Sierpinski gasket in $\R^2$ (or more generally, on nested fractals in $\R^d$), there are two main ways in the literature to prove the existence of the IDS and Lifshitz tails. \begin{itemize} \item One approach is that of the works of Pietruska-Paluba, Balsam, Kaleta, and Olszewski \cite{pietruska1991lifschitz,balsam2023density}. The existence of the IDS is obtained by the convergence of the expected values of the underlying Laplace transforms. The Lifshitz tail is then obtained from the long time behavior of the associated Laplace transform. \item The other approach is that of Shima \cite{shima1991lifschitz}. The existence of the IDS was obtained directly as the limit of the finite volume IDS, by the law of large numbers. The Lifshitz tail of the IDS is then obtained by the Dirichlet--Neumann bracketing method, combined with a specific bound on the Dirichlet form due to Kusuoka \cite{dobrushin1993lecture}. \end{itemize} In this work, we extend the existence and the Lifshitz tail of the IDS of \cite{shima1991lifschitz} to the discrete Sierpinski lattice $\G$, using an adapted Dirichlet and Neumann bracketing method for $\G$. There are numerous proofs of Lifshitz tails for different models using the method of large deviations. The idea of using Dirichlet--Neumann bracketing for Lifshitz tails first appeared in a physics paper \cite{harris1973rigorous}. The Dirichlet--Neumann bracketing method was first established rigorously for continuum models in $\R^d$ by Kirsch and Martinelli \cite{kirsch1983large}, with the advantage of being very close to Lifshitz's intuition. It was later extended by Simon \cite{simon1985lifschitz} to the $\Z^d$ setting, where some technical aspects are special to the discrete model; see the discussion in \cite{kirsch2007invitation} and \cite[\S 4.3]{aizenman2015random}. The main technical difficulties of adapting the bracketing method to the Sierpinski lattice (which are not present in the $\Z^d$ or continuum Sierpinski gasket cases) are: \begin{enumerate}[(i)] \item There is not a natural disjoint partition of the Sierpinski lattice $\G$. One has to carefully treat the overlap of the subdomains and the associated edge energy in order to bound the Hamiltonian.
\item The required bounds on the (low lying) eigenvalues of the Dirichlet or Neumann Laplacian on a finite Sierpinski triangle have not previously been derived in the literature. These estimates might be of independent interest in studying the spectrum of the associated Anderson model. \end{enumerate} One of our goals is to describe some of these technical difficulties, since Dirichlet--Neumann bracketing is not so commonly used on the Sierpinski lattice, in the spirit of the bracketing method of Kirsch, Martinelli, and Simon and its adaptation by Shima. These results may also be useful for further studying Anderson localization and other open questions for random Schr\"odinger operators on the Sierpinski lattice, and on more general discrete graphs. \subsection{Outline} The rest of this article is organized as follows. \begin{itemize} \item In Section~\ref{sec:pre}, we introduce background and preliminaries on the Sierpinski lattice. \item In Section~\ref{sec:spe}, we prove Theorem~\ref{thm:AM-as-spectrum}, the almost sure spectrum of the Anderson model on the Sierpinski lattice. \item In Section~\ref{sec:exi-ids}, we study the partition of the Sierpinski lattice and show the existence of the IDS under different boundary conditions. \item In Section~\ref{sec:lif}, we prove the Lifshitz tail \eqref{eqn:lif-SG-intro}, first the upper bound using the Neumann bracketing, and then the lower bound using the (modified) Dirichlet bracketing. \end{itemize} Throughout the paper, constants such as $C$, $c$, and $c_i$ may change from line to line. We will use the notation $X\lesssim Y$ to mean $X\le cY$, and $X\gtrsim Y$ to mean $X\ge cY$, for some constant $c$ depending only on $\Gamma$. If $X\lesssim Y\lesssim X$, we may also write $X\approx Y$. \section{Preliminaries}\label{sec:pre} In this section, we collect several useful facts about Sierpinski lattices. We refer readers to \cite{shima1991eigenvalue,teplyaev1998spectral} for more background on the Laplacian on Sierpinski lattices, to \cite{barlow2017random} for random walks on general graphs, and to \cite{kirsch2007invitation,aizenman2015random} for random Schr\"odinger operators. Let $\Gamma=(\G,\mathcal E)$ be the Sierpinski lattice defined in \eqref{eqn:SG-def}. $\Gamma$, or $\G$, is also called the full Sierpinski lattice with empty boundary/corner, while the one without the symmetric image $\G_n'$ is referred to as the right half Sierpinski lattice with the boundary or corner $O=(0,0)$. Let $A\subseteq\G$ be a subset of vertices. The exterior boundary of $A$ is $\partial A=\{ x\not\in A: \exists y \in A \ \ {\rm with}\ \ x\sim y \}, $ and the interior boundary is defined as $\partial^i A=\partial(\G\backslash A) $. The subset $A$ induces a subgraph $\Gamma_A=(A,\mathcal E_A),$ where $\mathcal E_A=\{(x,y)\in \mathcal E: x,y\in A\}$. When there is no ambiguity, we may identify a graph or a subgraph with its vertex set and vice versa. For example, we may call either $\Gamma_A$ or just $A$ a subgraph of the Sierpinski lattice. We may also abuse the notation and write $x\in \Gamma_A$ if $x\in A$. The vertex degree of $x$, $\deg(x)=\#\{y: x \sim y\}$, is the number of neighbors of $x$. Notice that on the full Sierpinski lattice $\deg(x)\equiv 4$, while on the right half Sierpinski lattice, $\deg(O)=2$ and $\deg(x)=4$ for $x\neq O$. As usual, a ball in $\Gamma$, centered at $x$, with radius $r$, is defined as $B(x,r)=\{y:d(x,y)\le r\}$, where the natural graph metric $d(x,y)$ is the length of the shortest path from $x$ to $y$.
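As a simple illustration of these notions (not needed later), let $A\subseteq\G$ be a triangle of side $2$ in the full Sierpinski lattice, consisting of its three corner vertices and the three midpoints of its sides. In the induced subgraph $\Gamma_A$ each corner has degree $\deg_A(x)=2$ and each midpoint has degree $\deg_A(x)=4$, while $\deg(x)=4$ in $\G$ for every vertex; consequently, the interior boundary $\partial^i A$ consists exactly of the three corners, since only they have neighbors outside $A$, and the exterior boundary $\partial A$ consists of the neighbors of these corners lying outside $A$.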
For any subset $A\subseteq\G$, we denote by $|A|=\#\{x: x\in A\}$ the cardinality of $A$. For any $L\in \N$, $\G_L$ in \eqref{eqn:SG-def} induces a subgraph. The entire Sierpinski lattice consists of infinitely many translations of $\G_L$, denoted as $\{\G_{L,j}\}_{j=1}^\infty$, glued together at corner points. We call either $ \G_L$ or any of its translations a $2^L$-triangle. The extreme points/vertices of $\G_{L,j}$ are the three vertices of the biggest triangle, i.e., the interior boundary of $\G_{L,j}$. We say that two $2^L$-triangles are adjacent if they share an extreme point. Due to the recursive relation \eqref{eqn:T-rec}, \begin{align} |\G_L|=\frac{1}{2}(3^{L+1}+3),\ \ \ L=0,1,2,\cdots. \end{align} Suppose $L\ge \ell$. Then the $2^L$-triangle $\G_L$ consists of $3^{L-\ell}$ many $2^\ell$-triangles $ \G_{\ell,j},j=1,\cdots,3^{L-\ell}$. All these $ \G_{\ell,j}$ are subgraphs isometric to $\G_\ell$. We consider functions on the vertices $\G$, which will be denoted by the function space $C(\G):=\C^{\G}=\{f: \G\to \C \}$. The space $\ell^2(\G)$ is defined via the $\ell^2$ norm induced by the usual (non-weighted) inner product $\langle f,g\rangle:=\sum_{x\in \G} f(x) \overline{ g(x)}. $ The subspace $\ell^2(A)$ is defined accordingly for any subset $A\subseteq\G$. The Laplacian $\Delta$ on the full Sierpinski lattice $\G$ is defined as in \eqref{eqn:Lap}. It is a bounded nonpositive selfadjoint operator in $\ell^2(\G)$. For $f,g\in \ell^2(\G)$, the following Discrete Gauss–Green theorem holds: \begin{align}\label{eqn:gauss-green} \sum_{x\in \G}g(x)\Delta f(x)=-\frac{1}{2}\sum_{x\in \G}\sum_{\substack{y\in \G\\ y\sim x}}\big(f(x)-f(y)\big)\big(g(x)-g(y)\big). \end{align} The probabilistic Laplacian $\Delta_p$ on the full Sierpinski lattice $\G$ is defined to be $\Delta_p=\frac{1}{4}\Delta$ since $\deg(x)\equiv 4,x\in \G$. The structure of the spectrum $\sigma(\Delta_p)=\frac{1}{4}\sigma(\Delta)$ was fully determined in \cite{teplyaev1998spectral}. \section{Spectrum of the Anderson model on the Sierpinski lattice}\label{sec:spe} In this section we prove Theorem~\ref{thm:AM-as-spectrum}. \begin{proof}[Proof of Theorem~\ref{thm:AM-as-spectrum}] Suppose $ \supp P_0 \subseteq[a,b]$ for some $a\le b$, so that almost surely $a\le V_\omega \le b$. By the standard power series expansion argument of self-adjoint operators (see e.g. \cite[Problem~3.6]{teschl2014mathematical}), for $a, b$ finite we obtain the right hand side inclusion of \eqref{eqn:AM-as-spectrum1}, \[ \sigma(-\Delta+V_\omega)\subseteq\sigma(-\Delta)+[a,b]. \] If $a$ or $b$ is infinite, the inclusion is immediate since the right hand side is then an interval $(-\infty,\infty)$, $[a,\infty)$, or $(-\infty,b+6]$. The same argument as in the $a, b$ finite case implies \eqref{eqn:AM-as-spectrum3}, using the fact that $\sigma(-\Delta)\subseteq[0,6]$ and switching the role of $-\Delta$ and $V_\omega$. It remains to prove the left hand side of \eqref{eqn:AM-as-spectrum1}. The outline follows from the proof of the $\Z^d$ case, see e.g. \cite[Theorem 3.9]{kirsch2007invitation}. There are two key ingredients needed for the Sierpinski lattice, stated in the following two claims. \begin{claim}\label{clm:Vconst} There is a set $\Omega_0$ of probability one such that the following is true: For any $\omega\in \Omega_0$, any $\mu\in \supp P_0 $, $\ell>0$ and $\varepsilon>0$, there exists a $2^\ell$-triangle $\G_\ell\subseteq\G$ such that \begin{align} \sup_{x\in \G_{\ell}}|V_\omega(x)-\mu|<\varepsilon. 
\end{align} \end{claim} \begin{remark} This is a ``Sierpinski lattice analogue'' of the result on $\Z^d$. It is important that the full probability set is independent of $\mu,\ell$ and $\eps$. We only need the existence of some (one) triangle for any size $\ell$. The conclusion can be made stronger by requiring infinitely many triangles; see the analogue for $\Z^d$ in e.g. \cite[Proposition 3.8]{kirsch2007invitation}. \end{remark} \begin{claim}\label{clm:ef-move} Eigenvalues of $-\Delta$ are dense in the spectrum $\sigma(-\Delta)$. Each eigenvalue has infinitely many compactly supported eigenfunctions. For any eigenfunction $\varphi$ supported on some $2^\ell$-triangle $\G_\ell$ and for any other $2^\ell$-triangle $\G_{\ell,1}$, there is a translate of $\varphi$, denoted by $\psi$, supported on $\G_{\ell,1}$ and associated with the same eigenvalue. \end{claim} Claim~\ref{clm:Vconst} is a consequence of the Borel--Cantelli lemma, and guarantees that the potential can be arbitrarily close to any constant on a ``far away'' triangle. Claim~\ref{clm:ef-move} is essentially a rephrasing of \cite[Theorem 2]{teplyaev1998spectral}, which allows one to move a compactly supported eigenfunction anywhere on the lattice. The detailed proofs of the two claims are left to the end of the section. We first use them to complete the proof of Theorem~\ref{thm:AM-as-spectrum}. Suppose $\lambda\in \sigma(-\Delta)$ and $\mu\in \supp P_0 $. We will construct a Weyl sequence of $-\Delta+V_\omega$ associated with $\lambda+\mu$. By Claim~\ref{clm:ef-move}, there is a sequence of compactly supported eigenfunctions $\varphi_k$ associated with eigenvalues\footnote{ If $\lambda$ itself is an eigenvalue with an eigenfunction $\varphi$, then we take $\lambda_k\equiv \lambda$ and $\varphi_k\equiv \varphi$.} $\lambda_k$ such that $|\lambda_k-\lambda|<1/k$ and $\|\varphi_k\|=1$ for all $k$. For each $\varphi_k$, since it is compactly supported, we assume $\supp \varphi_k$ is contained in some $2^{\ell_k}$-triangle $\G_{\ell_k}$. Now take $\omega\in \Omega_0$ given as in Claim~\ref{clm:Vconst}. Then for $\mu\in \supp P_0 $, $\ell=\ell_k$ and $\eps=1/k$, there is a $2^{\ell_k}$-triangle $ \G_{\ell_k,1}$ (not necessarily the same as $\G_{\ell_k}$ where $\varphi_k$ is supported) such that \begin{align} \sup_{x\in \G_{\ell_k,1}}\big|V_\omega(x)-\mu\big|<\frac{1}{k}. \end{align} Next, we use Claim~\ref{clm:ef-move} to move $\varphi_k$ from $\G_{\ell_k}$ to $\G_{\ell_k,1}$, which gives a new eigenfunction $\psi_k$ of $-\Delta$ satisfying $\|\psi_k\|=1$, $-\Delta\psi_k=\lambda_k \psi_k$, and $\supp \psi_k\subseteq \G_{\ell_k,1}$. Then almost surely one has \begin{align} \|(-\Delta +V_\omega-(\lambda+\mu))\psi_k\|\le & \|(-\Delta -\lambda)\psi_k\|+\|(V_\omega-\mu)\psi_k\| \\ \le & |\lambda_k-\lambda|\cdot \|\psi_k\|+ \sup_{x\in \G_{\ell_k,1}} \big|V_\omega(x)-\mu\big| \cdot\|\psi_k\|\le \frac{2}{k}. \end{align} Hence $\{\psi_k\}_k$ is a Weyl sequence of $-\Delta +V_\omega$ associated with $\lambda+\mu\in \sigma(-\Delta )+ \supp P_0 $. The Weyl criterion (see e.g. \cite[Theorem VII.12]{reed1981functional} or \cite[Lemma 6.17]{teschl2014mathematical}) implies $\lambda+\mu \in \sigma(-\Delta+V_\omega)$, which completes the proof of \eqref{eqn:AM-as-spectrum1}. \end{proof} The rest of this section contains the proofs of the two claims. We first show \begin{proof}[Proof of Claim~\ref{clm:Vconst}] Fix $\ell>0$, $\mu\in \supp P_0 $, and $\eps>0$.
Let $\{\G_{\ell,j}\subseteq\G\}_{j\ge 1}$ be infinitely many disjoint $2^\ell$-triangles, and let $\mathcal E_j=\{\omega:|V_\omega(x)-\mu|<\varepsilon,\ x\in \G_{\ell,j}\}$. Since $\{V_\omega(x)\}$ are i.i.d., the probability of $\mathcal E_j$ is $\P(\mathcal E_j)= P_0\big((\mu-\varepsilon,\mu+\varepsilon) \big)^{|\G_{\ell,j}|}=:p_{\mu,\ell,\varepsilon}>0,$ which is positive since $\mu\in \supp P_0$, and independent of $j$. Hence, $\sum_{j=1}^\infty\P(\mathcal E_j)=\infty.$ Since all $\G_{\ell,j}$ are disjoint and all the events $\mathcal E_j$ are independent, by the (second) Borel--Cantelli lemma (see e.g. \cite[Theorem 3.6]{kirsch2007invitation}), the set \[\Big\{\omega\ | \ \omega\ {\textrm{belongs to infinitely many}}\ \mathcal E_j \Big\} \] has probability one, which implies that \begin{align} \Omega_{\ell,\varepsilon,\mu}=\big\{\omega\ |\ {\rm for\ some\ }2^\ell\ {\rm triangle}\ \G_{\ell}\subseteq\G: \sup_{x\in \G_{\ell}}|V_\omega(x)-\mu|<\varepsilon \ \big\} \end{align} has probability one. Since the set $ \supp P_0\subseteq\R $ contains a countable dense subset $\mathcal C_0$, the countable intersection \begin{align} \Omega_0=\bigcap_{\substack{\ell\in \N, n\in \N\\ \mu\in \mathcal C_0}}\Omega_{\ell,1/n,\mu} \end{align} also has probability one. It satisfies the requirement of the claim: given any $\mu\in\supp P_0$, $\ell>0$ and $\varepsilon>0$, pick $\mu'\in\mathcal C_0$ with $|\mu-\mu'|<\varepsilon/2$ and $n\in\N$ with $1/n<\varepsilon/2$; the defining property of $\Omega_{\ell,1/n,\mu'}$ then yields a $2^\ell$-triangle on which $\sup_{x}|V_\omega(x)-\mu|<\varepsilon$. \end{proof} Now we verify Claim~\ref{clm:ef-move}. \begin{proof}[Proof of Claim~\ref{clm:ef-move}] By \cite[Theorem 2]{teplyaev1998spectral}, the spectrum of $-\Delta$ is $\sigma(-\Delta)=-4(\mathcal D \cup \mathcal J)$, where $\mathcal D=\{-3/2\}\cup \big(\bigcup_{m=0}^\infty R_{-m}\{-3/4\}\big)$, for $R(z)=z(4z+5)$ and $R_{-m}A$ the preimage of a set $A$ under the $m$-th composition power of $R$; the set $-4\mathcal D$ is part of the set of eigenvalues of $-\Delta$. The set $\mathcal J$ is the Julia set of $R$, which coincides with the set of limit points of $\mathcal D$. Hence, $-4\mathcal D$ is dense in $\sigma(-\Delta)$. Also by \cite[Theorem 2]{teplyaev1998spectral}, any eigenvalue of $-\Delta$ has infinitely many compactly supported eigenfunctions. It remains to verify the last assertion of the claim, which allows us to move a compactly supported eigenfunction anywhere on the Sierpinski lattice. This follows from the repeated structure of the Sierpinski lattice. More precisely, if $\varphi$ is an eigenfunction of $-\Delta$ supported on a $2^\ell$-triangle $\G_{\ell}$, then one can translate $\varphi$ to any other $2^\ell$-triangle $\G_{\ell,1}$ to create another eigenfunction with the same eigenvalue. This uses that $\deg(x)=4$ for all $x\in\G$, so that the restrictions of $-\Delta$ on $\G_\ell$ and $\G_{\ell,1}$ are exactly the same (as finite-dimensional matrices). \end{proof} \begin{remark}\label{rem:full-half} The proof of Claim~\ref{clm:ef-move} is for the combinatorial Laplacian $\Delta$, or equivalently (4 times) the probabilistic Laplacian $4\Delta_p$, on the full Sierpinski lattice $\G=\bigcup_n(\G_n\cup\G_n')$. A similar argument (accounting for the origin) applies to $-\Delta_p$ on the right half Sierpinski lattice, using the spectrum structure of $\sigma(\Delta_p)$ obtained in \cite{teplyaev1998spectral}. \end{remark} \section{Existence of the IDS for random Schr\"odinger operators on the Sierpinski lattice}\label{sec:exi-ids} Let $H_\omega=-\Delta+V_\omega$ be the Anderson model as in \eqref{eqn:AM}. For simplicity, we omit the subscript $\omega$ and write $H=H_\omega$ and $V=V_\omega$. We first discuss restrictions of $H$ to a finite-dimensional subspace of $\ell^2(\G)$ with different boundary conditions.
For any finite subset $A\subseteq\G$, we need to consider the following three Laplacians with different boundary conditions: \begin{itemize} \item Simple boundary condition (the usual zero Dirichlet boundary condition): for $f\in \ell^2(A)$, \begin{align}\label{eqn:Lap-simple} -\Delta^{A}f(x)=-\Delta^{A,S}f(x)= \deg_{\G\backslash A}(x) f(x)+\sum_{\substack{y\in A \\ y\sim x}}\big(f(x)-f(y)\big)=\deg(x)f(x)-\sum_{\substack{y\in A \\ y\sim x}}f(y). \end{align} \item Neumann boundary condition: for $f\in \ell^2(A)$, \begin{align}\label{eqn:Lap-N} -\Delta^{A,N}f(x)= \sum_{\substack{y\in A \\ y\sim x}}\big(f(x)-f(y)\big)=\deg_A(x)f(x)-\sum_{\substack{y\in A \\ y\sim x}}f(y). \end{align} \item Modified Dirichlet boundary condition: for $f\in \ell^2(A)$, \begin{align}\label{eqn:Lap-D} -\Delta^{A,D}f(x)=2\deg_{\G\backslash A}(x) f(x)+\sum_{\substack{y\in A \\ y\sim x}}\big(f(x)-f(y)\big)=\big(2\deg(x)-\deg_A(x)\big)f(x)-\sum_{\substack{y\in A \\ y\sim x}}f(y). \end{align} \end{itemize} The corresponding Schr\"odinger operator is $H^{A,\bullet}=-\Delta^{A,\bullet}+V^A$, where $\bullet$ is one of the above three boundary conditions and $V^A$ is just the restriction of $V$ to $A$. The zero/simple boundary Laplacian $\Delta^{A}=\Delta^{A,S}$ corresponds to application of $\Delta$ on the subspace $\{f\in \ell^2(\G):f(x)=0,x\notin A\}$, and is associated with the simple random walk killed upon exiting $A$. The modified Dirichlet boundary Laplacian $\Delta^{A,D}=\Delta^{A}-\deg_{\G\backslash A}$ is a Dirichlet-type operator with zero/simple boundary conditions on $\G\backslash A$, modified however at the interior boundary vertices of $A$, where it penalizes such boundary vertices. The Neumann boundary Laplacian $\Delta^{A,N}$ is the same as the regular graph Laplacian on the subgraph induced by $A$ (i.e. without consideration for the larger graph $\G$), and thus satisfies the same type of Gauss--Green theorem as the infinite lattice \eqref{eqn:gauss-green}: \begin{align}\label{eqn:gauss-green-finite} \ipc{f}{ -\Delta^{A,N} f}_{\ell^2(A)} =&\, \frac{1}{2}\sum_{\substack{x,y \in A \\ x\sim y } }\big(f(x)-f(y)\big)^2. \end{align} \begin{remark} Notice that in the interior, if $x\notin \partial^iA$, then $\deg(x)=\deg_A(x)$ and the three Laplacians behave in the same way. The simple and modified Dirichlet boundary conditions only add extra diagonal terms (corresponding to vertex degrees) to the interior boundary vertices. This will allow us to bound the extra edge energy terms in the quadratic form when we partition the graph into subgraphs. These modified vertex degree operators were first used in \cite{simon1985lifschitz} for $\Z^d$ case, demonstrating some technical aspects in the bracketing method special to the discrete case; see also the discussion in \cite[\S 4.3]{aizenman2015random}. We will see how these boundary conditions will affect the energy partition on the Sierpinski lattice in Section \ref{sec:lif}. \end{remark} For any $L\in \N$, let $\Gamma_{L,j}=(\G_{L,j},\mathcal E)$ be a $2^L$-triangle. The collection $\{\G_{L,j}\}_j$ forms a non-disjoint cover of $\G$, \begin{align}\label{eqn:Part1-SG} \mathcal P=\{\G_{L,j}\}_{j\ge 1} ,\ \ \G =\bigcup_{j\ge 1} \G_{L,j}. \end{align} Two adjacent triangles $\G_{L,j}$ and $\G_{L,j'}$ share only one extreme vertex. The overlap of $\mathcal P$ will not affect the Neumann bracketing side or the upper bound of the eigenvalue counting. 
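To make the three boundary conditions concrete, consider the smallest possible case: a single $2^0$-triangle $A=\{x_1,x_2,x_3\}\subseteq\G$, so that $\deg(x_i)=4$ and $\deg_A(x_i)=2$ for each $i$ (this is a minimal illustration of the definitions above and is not used later). In the basis $\{x_1,x_2,x_3\}$, the formulas \eqref{eqn:Lap-simple}--\eqref{eqn:Lap-D} give
\begin{align*}
-\Delta^{A,S}=\begin{pmatrix}4&-1&-1\\-1&4&-1\\-1&-1&4\end{pmatrix},\qquad
-\Delta^{A,N}=\begin{pmatrix}2&-1&-1\\-1&2&-1\\-1&-1&2\end{pmatrix},\qquad
-\Delta^{A,D}=\begin{pmatrix}6&-1&-1\\-1&6&-1\\-1&-1&6\end{pmatrix},
\end{align*}
with eigenvalues $\{2,5,5\}$, $\{0,3,3\}$, and $\{4,7,7\}$, respectively. The three matrices differ only on the diagonal, which is exactly the feature exploited in the counting arguments below (Lemma~\ref{lem:XL-induc}), and $-\Delta^{A,N}\le-\Delta^{A,S}\le-\Delta^{A,D}$ as quadratic forms, since the diagonal entries increase by $\deg_{\G\backslash A}(x)$ at each step.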
For the Dirichlet bracketing side, we will need the following surgeries on $\mathcal P$ to deal with the overlap of two adjacent triangles. For any $\G_{L,j}\in \mathcal P$, let $\partial ^i\G_{L,j}=\{o_1,o_2,o_3\}$ be the three extreme vertices. Denote by $\wt \G_{L}=\G_{L}\backslash (\partial ^i\G_{L})$ the truncated $2^L$-triangle associated with $\G_L$ (Figure~\ref{fig:wtV}). Consider the following disjoint collection \begin{align}\label{eqn:Part2-SG} \wt{\mathcal P} =\{\wt \G_{L,j}\}_{j\ge 1}, \ \ \G=(\bigcup_{j\ge 1} \wt \G_{L,j})\cup \mathcal R, \end{align} where $\mathcal R$ is the collection of all the extreme vertices of all $\G_{L,j}\in \mathcal P$. The associated subgraph is denoted by $\wt \Gamma_{L,j}=(\wt \G_{L,j},\mathcal E)$; see Figure \ref{fig:wtV}. Then $\{ \wt{\mathcal P},\mathcal R\}$ forms a partition of $\G$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig/fig3a.png} \includegraphics[width=0.45\textwidth]{Fig/fig3b.png} \caption{The left figure is the subgraph induced by $\G_2$. Removing the three extreme vertices in $\G_2$ leads to the truncated triangle $\wt \G_2$ on the right. The dashed edges are also removed from the subgraph induced by $\wt \G_2$.} \label{fig:wtV} \end{figure} We now denote by \[\mathcal N(E;X)=\#\{{\rm eigenvalues}\ E'\ {\rm of}\ X \ {\rm such\ that\ }\ E'\le E\}\] the eigenvalue counting function of an operator $X$ below the energy $E$. \begin{lemma}\label{lem:XL-induc} Let $X^L,Y^L$ be any two of the operators in the set $\bigcup_{\bullet\in\{S,N,D\}}\{H^{\G_L,\bullet},H^{\wt\G_L,\bullet}\}$, where $\bullet\in\{S,N,D\}$ is one of the three boundary conditions \eqref{eqn:Lap-simple}--\eqref{eqn:Lap-D}. Then \begin{align}\label{eqn:XY} \big| \mathcal N(E; X^L)-\mathcal N(E; Y^L)\big|\le 9, \end{align} and \begin{align}\label{eqn:XL} \big| \mathcal N(E; X^L)-\sum_{i=1}^3\mathcal N(E; X^{L-1,i})\big|\le 30, \end{align} where $\{X^{L-1,i}\}_{i=1}^3$ are the corresponding Schr\"odinger operators on the three $2^{L-1}$-triangles contained in $\G_L$, with the same boundary condition as $ X^L $. \end{lemma} The bounds in \eqref{eqn:XY} and \eqref{eqn:XL} essentially follow from the min-max principle, using its applications to finite-rank diagonal perturbations and orthogonal projections (restrictions) of matrices. We give the proof here, referencing some standard technical lemmas included in Appendix~\ref{sec:min-max}. We will see in the proof that the general upper bounds on the right hand side of \eqref{eqn:XY} and \eqref{eqn:XL} hold for any choice of the boundary conditions, and that better estimates hold for specific choices of $X^L$. \begin{proof} The three boundary conditions \eqref{eqn:Lap-simple}--\eqref{eqn:Lap-D} differ only at the 3 extreme vertices of the triangle $\G_L$. In other words, the operators $H^{\G_L,D},H^{\G_L}, H^{\G_L,N}$ all act on $\ell^2(\G_L)$ in the same way, except for the 3 diagonal terms at the 3 extreme vertices. Then by a perturbation argument (Lemma~\ref{lem:A1} Eq.~\eqref{eqn:NH-diag-pert}), for $X^L,Y^L$ being any two of $\{H^{\G_L,D},H^{\G_L}, H^{\G_L,N}\}$, we have $ \big| \mathcal N(E; X^L)-\mathcal N(E; Y^L)\big|\le 3$. Next, we link the counting between $\G_L$ and its truncation $\wt \G_L$. By the definition of the simple boundary condition \eqref{eqn:Lap-simple}, $H^{\wt\G_L}$ is the orthogonal projection (restriction) of $H^{\G_L,N}$ onto the subspace $\ell^2(\wt \G_L)$, since the diagonal vertex degree terms are 4 in all cases for non-extreme vertices.
By Lemma~\ref{lem:A1} Eq.~\eqref{eqn:NH-cauchy-inter}, \begin{align}\label{eqn:NH+2} \mathcal N(E;H^{\wt\G_L})\le \mathcal N(E;H^{\G_L,N})\le \mathcal N(E;H^{\wt\G_L})+3. \end{align} On the other hand, when we remove the extreme vertices $o_i$, $i=1,2,3$, from $\G_L$, we create 6 new interior boundary points in $\wt \G_L$, denoted by $o_{ij}\in \wt \G_L$, $i=1,2,3$, $j=1,2$ with $o_{ij}\sim o_i$ (see e.g. Figure~\ref{fig:wtV}). When we count eigenvalues on $\wt \G_L$ corresponding to the different boundary conditions, we only need to consider the differences in the vertex degree at these 6 points $o_{ij}$. Similarly to the case of $\G_L$, the operators $H^{\wt\G_L,D},H^{\wt\G_L}, H^{\wt\G_L,N}$ act on $\ell^2(\wt \G_L)$ with exactly the same matrix elements, except for the 6 diagonal terms at $o_{ij}\in \wt \G_L$. Then again by \eqref{eqn:NH-diag-pert}, for $X^L,Y^L$ being any two of $\{H^{\wt\G_L,D},H^{\wt\G_L}, H^{\wt\G_L,N}\}$, we obtain $ \big| \mathcal N(E; X^L)-\mathcal N(E; Y^L)\big|\le 6$. Together with \eqref{eqn:NH+2}, this yields \eqref{eqn:XY}. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{Fig/fig4.png} \caption{The $2^3$-triangle $\G_3$ (the big triangle) consists of three $2^2$-triangles $\{\G_{2,i}\}_{i=1}^3$. The three $2^2$-triangles have 6 extreme vertices $\{e_j\}_{j=1}^6$ in total (in blue). After removing these extreme vertices, the truncated triangles $\wt \G_{2,i}$ are disjoint, and $(\cup_{i=1}^3\wt \G_{2,i})\cup\{e_j\}_{j=1}^6$ forms a disjoint partition of $\G_3$.} \label{fig:Vpart} \end{figure} For the upper bound in \eqref{eqn:XL}, we write the decomposition $\G_L=\cup_{i=1}^3\G_{L-1,i}$ where each $\G_{L-1,i}$ is a $2^{L-1}$-triangle isometric to $\G_{L-1}$. These three triangles only (pairwise) intersect at their three extreme vertices, $\cup_{i\neq j}\big(\G_{L-1,i}\cap\G_{L-1,j}\big)=\{e_1,e_2,e_3\}$ (see Figure~\ref{fig:Vpart}). We consider only the Neumann boundary condition on each of these three triangles, since we can then use \eqref{eqn:XY} for the other boundary conditions. By the definition of the Neumann Laplacian \eqref{eqn:Lap-N}, we have \begin{align}\label{eqn:Lap-N-quad} \ipc{f}{-\Delta^{\G_L,N} f}_{\ell^2(\G_L)}=\sum_{i=1}^3\ipc{f}{-\Delta^{\G_{L-1,i},N} f}_{\ell^2(\G_{L-1,i})}. \end{align} Set $\wt V(x):=V(x)$ if $x\neq e_i$ and $\wt V(e_i):=2V(e_i)$. Since each $e_i$ belongs to exactly two of the three $2^{L-1}$-triangles, we have \begin{align}\label{eqn:Lap-N-quad-sch} \ipc{f}{\big(-\Delta^{\G_L,N}+\wt V^{\G_L}\big) f}_{\ell^2(\G_L)}=\sum_{i=1}^3\ipc{f}{\big(-\Delta^{\G_{L-1,i},N}+V^{\G_{L-1,i}} \big)f}_{\ell^2(\G_{L-1,i})}. \end{align} Hence, by Lemma~\ref{lem:NH<NH12} Eq.~\eqref{eqn:NH<H12}, we obtain \begin{align}\label{eqn:412} \mathcal N(E; -\Delta^{\G_L,N}+\wt V^{\G_L})\le \sum_{i=1}^3\mathcal N(E; -\Delta^{\G_{L-1,i},N}+V^{\G_{L-1,i}}) . \end{align} Since $\wt V^{\G_L}$ is a diagonal perturbation of $V^{\G_L}$ of rank 3, by Lemma~\ref{lem:A1} Eq.~\eqref{eqn:NH-diag-pert}, \begin{align}\label{eqn:413} \mathcal N(E; -\Delta^{\G_L,N}+\wt V^{\G_L})\ge \mathcal N(E; -\Delta^{\G_L,N}+ V^{\G_L})-3=\mathcal N(E; H^{\G_L,N})-3. \end{align} Putting \eqref{eqn:412} and \eqref{eqn:413} together with \eqref{eqn:XY} gives \begin{align} \mathcal N(E; X^L)\le \sum_{i=1}^3\mathcal N(E; X^{ L-1,i})+30 , \end{align} for $X^L$ being any of $\{H^{\G_L,N},H^{\wt\G_L,D},H^{\wt\G_L}, H^{\wt\G_L,N}\}$.
For the lower bound in \eqref{eqn:XL}, we need to split $\G_L$ into disjoint (truncated) smaller triangles in order to apply Lemma~\ref{lem:A3} (see Figure~\ref{fig:Vpart}). Let $\G_L=\cup_{i=1}^3\G_{L-1,i}$ be as above. Let $\wt \G_{L-1,i}\subseteq\G_{L-1,i},i=1,2,3$ be the three truncated $2^{L-1}$ triangles isometric to $\wt \G_{L-1}$ as in \eqref{eqn:Part2-SG}. Denote by $\mathcal R=\{e_j\}_{j=1}^6:=\G_L\backslash\cup_{i=1}^3\wt \G_{L-1,i}$, where $e_j$ are the 6 extreme vertices of $ \{\G_{L-1,i}\}_{i=1}^3$. Let $\mathcal H_i=\ell^2(\wt \G_{L-1,i}),i=1,2,3$ and $\mathcal H_0=\ell^2(\mathcal R)$. Hence, $\mathcal H_i,i=0,1,2,3$ are orthogonal to each other and $\oplus_{i=0}^3 \mathcal H_i =\ell^2(\G_L)$. When we split the graph $ (\G_L,\mathcal E)$ into subgraphs $(\wt \G_{L-1,i},\mathcal E)$ (and the 6 isolated vertices $e_j$), we removed 18 edges connecting $\wt \G_{L-1,i}$ and $e_j$. Starting similarly to \eqref{eqn:Lap-N-quad}, we have, using \eqref{eqn:gauss-green}, \begin{align*} \ipc{f}{ -\Delta^{\G_L,N} f}_{\ell^2(\G_L)} =& \sum_{i=1}^3\ipc{f}{ -\Delta^{\wt\G_{L-1,i},N} f}_{ \mathcal H_i } + \frac{1}{2}\sum_{\substack{x\sim y \\ x\, {\rm or}\, y \in \mathcal R} } \big(f(x)-f(y)\big)^2 \\ \le & \sum_{i=1}^3\ipc{f}{ -\Delta^{\wt\G_{L-1,i},N} f}_{ \mathcal H_i } + \sum_{\substack{x\sim y \\ x\, {\rm or}\, y \in \mathcal R} } \big(f(x)^2+f(y)^2 \big) \\ \le & \sum_{i=1}^3\ipc{f}{ -\Delta^{\wt\G_{L-1,i},N} f}_{ \mathcal H_i } +\sum_{i=1}^3 \sum_{\substack{x\in \wt\G_{L-1,i} \\ x\sim \mathcal R}} 2f(x)^2 +8\sum_{j=1}^6 f(e_j)^2 \\ \numberthis=& \sum_{i=1}^3\ipc{f}{ -\Delta^{\wt\G_{L-1,i},D} f}_{ \mathcal H_i } + 8\sum_{j=1}^6 f(e_j)^2. \label{eqn:energy-bound1} \end{align*} In the last line, we used that if $ x\in \wt\G_{L-1,i}$ and $x\sim\mathcal R$ (i.e. $ x\sim y$ for some $y\in \mathcal R $), then $x$ is an interior boundary point of $\wt\G_{L-1,i}$. Since \[\deg_{\G\backslash \wt\G_{L-1,i}}(x)=\deg_{\G }(x)-\deg_{ \wt\G_{L-1,i}}(x)=\begin{cases} 1, \ &x \ {\textrm{is an interior boundary point of }}\ \wt\G_{L-1,i}\\ 0, \ \ & {\rm otherwise} \end{cases}, \] then by the definition \eqref{eqn:Lap-D} of the modified Dirichlet boundary condition, we have \[\ipc{f}{ -\Delta^{\wt\G_{L-1,i},D} f} =\ipc{f}{ -\Delta^{\wt\G_{L-1,i},N} f} +2\sum_{\substack{x\in \wt\G_{L-1,i} \\ x\sim \mathcal R}} f(x)^2, \] leading to \eqref{eqn:energy-bound1}. Adding the potential to \eqref{eqn:energy-bound1} and applying Lemma~\ref{lem:A3} with the subspaces $\wt \G_{L-1,i}$ and those spanned by each $e_j$, we get (dropping the contributions from the $e_j$ below) \begin{align}\label{eqn:4.16} \mathcal N(E; -\Delta^{\G_L,N}+ V^{\G_L})\ge \sum_{i=1}^3\mathcal N(E; -\Delta^{\wt\G_{L-1,i},D}+V^{\wt \G_{L-1,i}}) . \end{align} Similar to the upper bound, applying \eqref{eqn:XY} then implies \begin{align} \mathcal N(E; X^L)\ge \sum_{i=1}^3\mathcal N(E; X^{ L-1,i})-27 , \end{align} for $X^L$ being any of $\{H^{\G_L,N},H^{\wt\G_L,D},H^{\wt\G_L}, H^{\wt\G_L,N}\}$. This completes the proof of \eqref{eqn:XL}. \end{proof} The relation \eqref{eqn:XL} suggests the eigenvalue counting $\mathcal N(E; X^L)$ is almost (or very close to) a subadditive process (up to some constant shift). Applying \eqref{eqn:XL} inductively on each smaller $2^{L-j}$-triangle ($j=0,1,2,\cdots$) and then applying the ergodic theory/law of large numbers, we obtain the existence of the IDS for the Anderson model on the (infinite) Sierpinski lattice. 
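For later use, we record the elementary consequence of iterating \eqref{eqn:XL} (a short computation with the triangle inequality): for $L>\ell\ge1$ and $X^L$ as in Lemma~\ref{lem:XL-induc}, applying \eqref{eqn:XL} at the scales $L,L-1,\dots,\ell+1$ gives
\begin{align*}
\Big|\mathcal N(E;X^L)-\sum_{i=1}^{3^{L-\ell}}\mathcal N(E;X^{\ell,i})\Big|
\le 30\big(1+3+\cdots+3^{L-\ell-1}\big)=15\big(3^{L-\ell}-1\big)\le 15\cdot3^{L-\ell},
\end{align*}
where the $X^{\ell,i}$ are the Schr\"odinger operators, with the same boundary condition as $X^L$, on the $3^{L-\ell}$ many $2^\ell$-triangles (respectively truncated $2^\ell$-triangles) contained in $\G_L$. This is the form in which \eqref{eqn:XL} enters the proof of the next theorem.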
\begin{theorem}\label{thm:IDS-exist-gen} The integrated density of states $N(E)$ of the Anderson model $H=-\Delta+V$ \eqref{eqn:AM} on the right half Sierpinski lattice exists and is a non-random function, defined by the following limit \begin{align}\label{eqn:IDS-exist-half} N(E)=\lim_{L\to\infty}\frac{1}{|\G_L|} \E\mathcal N(E; X^L)=\lim_{L\to\infty}\frac{1}{|\G_L|} \mathcal N(E; X^L),\ \ a.s. , \end{align} where $X^L$ is any choice in $\mathcal T_L=\bigcup_{\bullet\in\{S,N,D\}}\{H^{\G_L,\bullet},H^{\wt\G_L,\bullet}\}$. As a consequence, the integrated density of states $N(E)$ of the Anderson model on the full Sierpinski lattice exists as a non-random function and can be defined by the following limit \begin{align}\label{eqn:IDS-exist-full} N(E)=\lim_{L\to\infty}\frac{1}{|B_{L}|} \E \mathcal N(E; H^{ B_{L},\bullet})=\lim_{L\to\infty}\frac{1}{|B_{ L}|} \mathcal N(E; H^{ B_{ L},\bullet}), \ \ a.s., \end{align} with any boundary condition $\bullet=S,N$, or $D$, and where $B_{L}=B(O,2^L)$ is the (graph metric) ball, centered at the origin $O$, with radius $2^L$. \end{theorem} \begin{proof} \noindent {\bf Case I: the right half Sierpinski lattice $\bigcup_{L}\G_L$.} Let $\mathcal T_L=\bigcup_{\bullet\in\{S,N,D\}}\{H^{\G_L,\bullet},H^{\wt\G_L,\bullet}\}$ be as in the theorem. Suppose $\{V(x)\}_{x\in \G}$ are i.i.d. random variables. We first show that the limit \begin{align}\label{eqn:expNL} n_E:= \lim_{L\to\infty}\frac{1}{3^L}\E \mathcal N(E; X^L) \end{align} exists and depends only on $E$ (with the same value for any choice of $X^L\in \mathcal T_L$). We retain the same notation as in Lemma \ref{lem:XL-induc}. Given $X^L$, denote by $ X^{L-1,i},i=1,2,3$ the corresponding Schr\"odinger operator on one of the smaller component $2^{L-1}$-triangles, $\G_{L-1,i}$ (or $\wt \G_{L-1,i}$, respectively) with the same boundary condition as $X^L$. All eigenvalue counting functions $\mathcal N(E; X^{L-1,i}), i=1,2,3$, have the same expectation value since $\G_{L-1,i}$ (or $\wt \G_{L-1,i}$) are isometric to $\G_{L-1}$ (or $\wt \G_{L-1}$ respectively), and $\{V(x)\}_{x\in \G}$ are i.i.d. Taking the expectation in \eqref{eqn:XL} of Lemma~\ref{lem:XL-induc} gives \begin{align} \E\mathcal N(E; X^L)\le 3\E\mathcal N(E; X^{L-1})+30. \end{align} Thus for fixed $E$, the limit $n_E=\lim_{L\to\infty}\frac{1}{3^L} \E\mathcal N(E; X^L)$ exists since the sequence $a_L:=\frac{1}{3^L} \big(\E\mathcal N(E; X^L)+15\big)$ is non-increasing. Clearly, $n_E$ does not depend on the choice of $X^L$ due to \eqref{eqn:XY}. Next, we study the a.s. limit (the second equality in \eqref{eqn:IDS-exist-half}). For any $L> \ell \ge 1$, we apply \eqref{eqn:XL} inductively down to the $2^\ell$-triangle size to obtain \begin{align} \mathcal N(E; X^L)\le \sum_{i=1}^{3^{L-\ell}}\mathcal N(E; X^{\ell,i})+ 15\cdot 3^{L-\ell}. \end{align} Dividing both sides by $3^L$ gives \begin{align}\label{eqn:220} \frac{1}{3^L} \mathcal N(E; X^L)\le \frac{1}{3^\ell} \cdot \frac{1}{3^{L-\ell}}\sum_{i=1}^{3^{L-\ell}}\mathcal N(E; X^{\ell,i})+ 15\cdot3^{ -\ell} . \end{align} If $X^{\ell,i}$ is any one of $\{H^{\wt\G_{\ell,i},D},H^{\wt\G_{\ell,i}}, H^{\wt\G_{\ell,i},N}\}$ where $\wt \G_{\ell,i}$ is a truncated $2^\ell$-triangle isometric to $\wt \G_\ell$, then $\{\mathcal N(E; X^{\ell,i})\}_i$ are identically distributed with the common mean $\E \mathcal N(E; X^{\ell})$. In addition, they are all independent since the triangles $\{\wt \G_{\ell,i}\}_{i=1}^{3^{L-\ell}}$ are disjoint.
Hence, by the (strong) law of large numbers, for fixed $\ell$, \begin{align}\label{eqn:221} \lim_{L\to \infty} \frac{1}{3^{L-\ell}}\sum_{i=1}^{3^{L-\ell}}\mathcal N(E; X^{\ell,i})=\E \mathcal N(E; X^{\ell}), \ \ a.s. \end{align} For fixed $\ell$, taking the limit as $L\to \infty$ in \eqref{eqn:220} thus gives \begin{align} \limsup_{L\to \infty} \frac{1}{3^L} \mathcal N(E; X^L)\le \frac{1}{3^\ell} \E \mathcal N(E; X^{\ell})+ 15\cdot3^{ -\ell}, \ \ a.s. \end{align} Then taking the limit as $\ell\to \infty$ and recalling the definition \eqref{eqn:expNL} of $n_E$, we obtain \begin{align} \limsup_{L\to \infty} \frac{1}{3^L} \mathcal N(E; X^L)\le n_E, \ \ a.s. \end{align} The same argument via the lower bound in \eqref{eqn:XL} gives \begin{align} \liminf_{L\to \infty} \frac{1}{3^L} \mathcal N(E; X^L)\ge n_E, \ \ a.s. \end{align} Putting the two together, we obtain \begin{align} \label{eqn:NE-limit-as} \lim_{L\to \infty} \frac{1}{3^L} \mathcal N(E; X^L)= n_E=\lim_{L\to\infty}\frac{1}{3^L}\E \mathcal N(E; X^L), \ \ a.s. \end{align} We proved the above limit for $X^{L}\in \{H^{\wt\G_{L},D},H^{\wt\G_{L}}, H^{\wt\G_{L},N}\}$ where we used the independence of the eigenvalue counting on disjoint triangles $\wt \G_{\ell,i}$. However, due to \eqref{eqn:XY}, Eq.~\eqref{eqn:NE-limit-as} holds for $X^L=H^{\G_L,\bullet}$ as well. Finally, since $ |\G_L|=\frac{1}{2}(3^{L+1}+3)$, the following limit exists \begin{align}\label{eqn:IDS-half-limit-pf} N(E)=\frac{2}{3}n_E=\lim_{L\to\infty}\frac{1}{| \G_L|} \E\mathcal N(E; X^L)=\lim_{L\to\infty}\frac{1}{| \G_L|} \mathcal N(E; X^L),\ \ a.s., \end{align} where $X^{L}$ is any choice in $ \mathcal T_L$. \noindent {\bf Case II: the full Sierpinski lattice $\bigcup_{L}(\G_L\cup \G_L')$. } Notice that $B_{L}=B(O,2^L)=\G_L\cup \G_L'$ and $\G_L\cap \G_L'=\{O\}$, where $\G_L'$ is the reflection of $\G_L$ with respect to the $y$-axis. By the same argument used in Lemma~\ref{lem:XL-induc}, one can obtain \begin{align}\label{eqn:full-half} \mathcal N(E; H^{\wt \G_L,D})+ \mathcal N(E; H^{\wt \G_L',D}) \le \mathcal N(E; H^{B_{L},N}) \le \mathcal N(E; H^{\G_L,N})+ \mathcal N(E; H^{\G_L',N})+1. \end{align} Since $\G_L'$ is isometric to $\G_L$, \eqref{eqn:IDS-half-limit-pf} holds for $\G_L'$ (with any boundary condition). The resulting limit for $\G_L'$ equals the limit for $\G_L$, still denoted by $N(E)$. Using $ |B_L|=2|\G_L|-1$ and \eqref{eqn:full-half}, we obtain \begin{align}\label{eqn:full-half2} \lim_{L\to \infty} \frac{1}{|B_L|}\mathcal N(E; H^{B_{L},N})=\frac{1}{2}N(E)+\frac{1}{2}N (E)=N(E),\ a.s. , \end{align} which defines the integrated density of states on the full Sierpinski lattice. The cases of the other boundary conditions on $B_L$ are treated similarly. \end{proof} \section{Lifshitz tails for the Anderson model on the Sierpinski lattice} \label{sec:lif} Throughout this section, we set $\alpha=\log3/\log 2, \beta=\log5/\log 2$. Because of \eqref{eqn:full-half} and \eqref{eqn:full-half2} in the previous section, it is enough to study the tail behavior of the IDS $N(E)$ only on the right half Sierpinski lattice. We will prove \begin{theorem}\label{thm:IDS-Lif} Let $H_\omega=-\Delta+V_\omega$ be the Anderson model as in \eqref{eqn:AM}. Suppose $\{V_\omega(x)\}_{x\in \G}$ are i.i.d. random variables with a (non-trivial) common distribution $P_0$, satisfying \begin{align}\label{eqn:V-Lif-ass} \inf \supp P_0=0, \ {\rm and}\ P_0([0,\eps])\ge C\eps^\kappa, \end{align} for some $C,\kappa>0$ and all sufficiently small $\varepsilon>0$. Let $N(E)$ be the IDS given by Theorem~\ref{thm:IDS-exist-lif}.
Then \begin{align}\label{eqn:lif-SG} \lim_{E\searrow 0} \frac{\log \big|\log N (E)\big|}{\log E}=-\frac{\alpha}{\beta}. \end{align} \end{theorem} In the following, we again omit the subscript $\omega$ and write $H=H_\omega$ and $V=V_\omega$. The proof relies on the method called Dirichlet--Neumann bracketing as reviewed in Section~\ref{sec:background}. The original Dirichlet--Neumann bracketing principle refers to bounds on the spectrum obtained through additive schemes on a (disjoint) partition of the vertex set of a graph. The Neumann and Dirichlet Laplacians (and the associated Schr\"odinger operators) are picked so that the corresponding quadratic forms give a pair of complementary bounds on the original Hamiltonian. Here, we continue to use Laplacians on finite triangles with the three boundary conditions defined in \eqref{eqn:Lap-simple}, \eqref{eqn:Lap-N} and \eqref{eqn:Lap-D} in the previous section. We will divide the large triangle $\G_L$ into small triangles (fundamental subdomains) of size $2^\ell$, where $2^\ell\sim E^{-1/\beta}$ is picked according to the energy level $E$. Then the quadratic form of the operator on an arbitrarily large triangle can be approximated by a sum of quadratic forms on these small triangles, leading to the desired bound on the eigenvalue counting function by the Rayleigh--Ritz principle. \subsection{Neumann bracketing and the Lifshitz tail upper bound } For the upper bound, we will use the non-disjoint cover $\mathcal P$ defined in \eqref{eqn:Part1-SG} and the Neumann Laplacian on each of the small $2^\ell$-triangles, where $2^\ell\sim E^{-1/\beta}$ will be specified later. More precisely, given $L>\ell>0$, write \begin{align} \G_L=\bigcup_{j=1}^{3^{L-\ell}}\G_{\ell,j}, \end{align} where $\G_{\ell,j}$ are all the $2^\ell$-triangles in $\G_L$. Let $-\Delta^{\G_{\ell,j},N}$ be the Neumann Laplacian on $\G_{\ell,j}$ as in \eqref{eqn:Lap-N}. The same argument in \eqref{eqn:Lap-N-quad}, inductively on $\G_L,\G_{L-1},\cdots, \G_\ell$ (and all the triangles isometric to them), gives \[ \ipc{f}{\Delta^{\G_L,N} f}_{\ell^2(\G_L)}=\sum_{\G_{\ell,j}\subseteq \G_L }\ipc{f}{\Delta^{\G_{\ell,j},N} f}_{\ell^2(\G_{\ell,j})}. \] Since the cover is \emph{not} disjoint, considering the overlapped onsite potential at the extreme vertices as in \eqref{eqn:Lap-N-quad-sch}, we obtain \begin{align}\label{eqn:N-brack} \ipc{f}{ \big(-\Delta^{\G_L,N}+ V^{\G_L}\big) f}_{\ell^2(\G_L)} \ge \sum_{\G_{\ell,j}\subseteq \G_L }\ipc{f}{\big(-\Delta^{\G_{\ell,j},N}+ \frac{1}{2}V^{\G_{\ell,j}}\big) f}_{\ell^2(\G_{\ell,j})}. \end{align} As before, let $\mathcal N(E;X)=\#\{{\rm eigenvalues}\ E'\ {\rm of}\ X \ {\rm such\ that\ }\ E'\le E\}$ be the eigenvalue counting function of an operator $X$ below the energy $E$. Applying Lemma \ref{lem:NH<NH12} to \eqref{eqn:N-brack}, we obtain \begin{align}\label{eqn:N-sum-upper} \mathcal N(E; -\Delta^{\G_L,N}+ V^{\G_L}) \le \sum_{\G_{\ell,j}\subseteq \G_L } \mathcal N(E;-\Delta^{\G_{\ell,j},N}+ \frac{1}{2}V^{\G_{\ell,j}}). \end{align} Next we divide both sides by $|\G_L|=\frac{1}{2}(3^{L+1}+3)\ge 3^L$ and take the expectation value. 
Since there are $3^{L-\ell}$ terms in the sum which are identically distributed, the above inequality yields \begin{align}\label{eqn:SG-Lif-upper} \frac{1}{|\G_L|} \E\mathcal N(E; H^{\G_L,N} )\le \frac{3^{L-\ell}}{3^L} \E\mathcal N(E;-\Delta^{\G_{\ell },N}+ \frac{1}{2}V^{\G_{\ell }})\le 2\,\P \big( E_0 \le E \big), \end{align} where $E_0=E_0(H^\ell)$ is the smallest eigenvalue of $H^\ell=-\Delta^{\G_{\ell },N}+ \frac{1}{2}V^{\G_{\ell }}$, and we used that $\mathcal N( E;H^\ell)\le |\G_\ell| \one_{ \{E_0 \le E \} } \le 2\cdot 3^\ell \one_{ \{E_0 \le E \} } $ (recall $|\G_\ell|=\frac{1}{2}(3^{\ell+1}+3)\le 2\cdot 3^\ell$ for $\ell\ge1$) in the last inequality. It is enough to bound $\P \big( E_0(H^\ell)\le E \big)$ from above. This will be achieved by bounding $E_0\big( H^{ \ell}\big)$ from below. The key ingredient is the following Temple's inequality. \begin{proposition}[Temple, \cite{temple1928theory}] \label{prop:temple} Let $H$ be a self-adjoint operator with an isolated non-degenerate eigenvalue $E_0=\inf \sigma(H)$, and let $E_1=\inf \big(\sigma(H) \backslash\{E_0\}\big)$. Then for any $\psi\in \mathcal D (H)$ (the domain of $H$) with $\|\psi\|=1$ and $\ipc{\psi}{H\psi}<E_1$, the following bound holds: \begin{align}\label{eqn:temple} E_0\ge \ipc{\psi}{H\psi}-\frac{\ipc{H\psi}{H\psi}-\ipc{\psi}{H\psi}^2}{E_1-\ipc{\psi}{H\psi}}. \end{align} \end{proposition} The proof of Temple's inequality can be found in e.g. \cite{simon1985lifschitz,kirsch2007invitation,aizenman2015random}. To apply Temple's inequality to $H^\ell$, we also need a lower bound on the first non-zero eigenvalue of the Neumann Laplacian. \begin{proposition}\label{prop:Neumann-ev} Let $E_1=E_1(-\Delta^{\G_\ell,N})$ be the first non-zero eigenvalue of the Neumann Laplacian $-\Delta^{\G_\ell,N}$. There are numerical constants $c_0=15/2,c_0'=60$ such that \begin{align}\label{eqn:Neumann-ev} \frac{c_0}{2^{\ell \beta }} \le E_1\le \frac{c_0'}{2^{\ell \beta }}. \end{align} \end{proposition} \begin{remark} To apply Temple's inequality, we only need the lower bound of $E_1$. The upper bound is provided for completeness. Note that $\G_\ell$ is actually the (half-sided) ball $B_R=B(O,R)$ (with respect to the gasket graph metric), centered at the origin $O=(0,0)$, with radius $R=2^\ell$. The proposition is thus equivalent to the (two-sided) asymptotic behavior $E_1(-\Delta^{B_R,N}) \sim R^{-\beta}$. Note that the first (smallest) Dirichlet eigenvalue on a ball or a triangle of the same size has the same order of asymptotics, $R^{-\beta}$; see Proposition~\ref{prop:Diri-ev}. \end{remark} We will first use these two propositions to complete the proof of the Lifshitz upper bound. The proof of Proposition \ref{prop:Neumann-ev} uses the explicit iteration formula for the Neumann eigenvalues from \cite{teplyaev1998spectral} and is left to the end of the section. \begin{proof}[Proof of the upper bound of Eq.~\eqref{eqn:lif-SG}] Throughout the proof, denote by $E_0(X)$ and $E_1(X)$ the smallest and second smallest eigenvalues of an operator $X$, respectively. We consider a truncated potential \begin{align}\label{eqn:wtV-def} \wt V(x):=\min \left\{\frac{1}{2}V(x), \frac{c_0}{3}2^{-\ell \beta }\right\} , \end{align} where $c_0=15/2$ is the constant given in \eqref{eqn:Neumann-ev}. Let $\wt H^{ \ell}:= -\Delta^{\G_{\ell },N}+ \wt V^{\G_{\ell }}$, and recall $H^\ell=-\Delta^{\G_\ell,N}+\frac{1}{2}V^{\G_\ell}$. Clearly, $\wt H^{ \ell} \le H^\ell$ by the definition of $\wt V$. By the min-max principle \eqref{eqn:NH1<NH2}, \begin{align}\label{eqn:SGE00} E_0\big( \wt H^{ \ell}\big)\le E_0\big( H^{ \ell}\big) .
\end{align} The rest of the work is bounding $E_0\big( \wt H^{ \ell}\big)$ from below by Temple's inequality. We will apply Proposition \ref{prop:temple} to $\wt H^{ \ell}$, with $E_0=E_0\big( \wt H^{ \ell}\big)$, $E_1=E_1\big( \wt H^{ \ell}\big)$, and \[\psi(x)=\frac{1}{\sqrt{|\G_\ell|}}, \ x\in \G_\ell \] being the normalized constant ground state of the Neumann Laplacian $-\Delta^{\G_\ell, N}$. To proceed, we need a lower bound of $E_1\big( \wt H^{ \ell}\big)$. Combining the min-max principle, the inequality $ \wt H^{ \ell}\ge -\Delta^{\G_\ell,N}$ and the lower bound of $E_1$ in Proposition~\ref{prop:Neumann-ev}, we obtain \[ E_1\big( \wt H^{ \ell}\big)\ge E_1(-\Delta^{\G_\ell,N})\ge c_0\frac{1}{2^{\ell \beta }}. \] Recall that $\psi= |\G_\ell|^{-1/2} $ is the constant eigenfunction of $\Delta^{\G_\ell,N}$ associated with the eigenvalue $0$. Then $\Delta^{\G_\ell,N}\psi=0$, and so \begin{align} \ipc{\psi}{\wt H^{ \ell}\psi}=\ipc{\psi}{\wt V^{\G_\ell}\psi}= \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x)\le \frac{c_0}{3}2^{-\ell \beta }< E_1\big( \wt H^{ \ell}\big). \end{align} Hence, the conditions of Temple's inequality are all met. The second term on the right hand side of Temple's inequality \eqref{eqn:temple} can be bounded from above as \begin{align} \frac{\ipc{\wt H^{ \ell}\psi}{\wt H^{ \ell}\psi}-\ipc{\psi}{\wt H^{ \ell}\psi}^2}{E_1-\ipc{\psi}{\wt H^{ \ell}\psi}} \le &\, \frac{\ipc{\wt H^{ \ell}\psi}{\wt H^{ \ell}\psi} }{ E_1-\ipc{\psi}{\wt H^{ \ell}\psi}} = \frac{\ipc{\wt V^{\G_\ell}\psi}{\wt V^{\G_\ell}\psi} }{ E_1-\ipc{\psi}{\wt H^{ \ell}\psi}}\nonumber \\ \le &\, \frac{\frac{c_0}{3}2^{-\ell \beta }|\G_\ell|^{-1}\sum_{x\in \G_\ell}\wt V(x) }{ c_02^{-\ell \beta }-\frac{c_0}{3}2^{-\ell \beta }}=\frac{1}{2|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x). \label{eqn:5.14} \end{align} Applying Temple's inequality \eqref{eqn:temple}, together with \eqref{eqn:SGE00} and \eqref{eqn:5.14}, thus gives \begin{align} E_0\big( H^{\G_\ell}\big)\ge E_0\big( \wt H^{ \ell}\big)\ge &\, \ipc{\psi}{\wt H^{ \ell}\psi}-\frac{\ipc{\wt H^{ \ell}\psi}{\wt H^{ \ell}\psi}-\ipc{\psi}{\wt H^{ \ell}\psi}^2}{E_1-\ipc{\psi}{\wt H^{ \ell}\psi}} \\ \ge &\, \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x)-\frac{1}{2|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x)=\frac{1}{2|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x). \label{eqn:Temple-app} \end{align} Note that $\{2^{\ell \beta}\wt V(x)\}_{x\in \G_\ell}$ are i.i.d. random variables with range in $[0,c_0/3]$, and with common mean \begin{align} \mu_\ell=\E\Big(\min\big\{2^{\ell \beta}V(x),\ \frac{c_0}{3} \big\}\Big)\ge \frac{c_0}{3}\P\big(2^{\ell \beta}V(x)>\frac{c_0}{3}\big)=\frac{c_0}{3}\Big[1-\P\big(V(x)\le\frac{c_0}{3\cdot 2^{\ell \beta}}\big)\Big]. \end{align} Therefore, \begin{align}\label{eqn:inf-mu} \liminf_{\ell\to \infty} \mu_\ell\ge \frac{c_0}{3}[1-\P(V(x)=0)]=:\frac{c_0}{3}p_1>0, \end{align} using that $p_0=1-p_1=\P(V(x)=0)<1$ since the distribution is non-trivial (the support contains more than one point). Then for $E>0$, let \begin{align}\label{eqn:lE} \ell=\Big\lfloor \frac{1}{\beta\log 2}\log\Big(\frac{c_0p_1}{16}E^{-1} \Big)\Big\rfloor \end{align} so that \begin{align}\label{eqn:lE2} 2^{\ell \beta+1}E\le \frac{c_0p_1}{8}, \ \ {\rm and}\ \ 2^\ell\ge \frac{1}{2}\Big(\frac{c_0p_1}{16 }\Big)^{1/\beta} \cdot E^{-\frac{1}{\beta}} . 
\end{align} Combining the first inequality in \eqref{eqn:lE2} with \eqref{eqn:Temple-app} and \eqref{eqn:SG-Lif-upper}, we obtain \begin{align}\label{eqn:5.21} \frac{1}{|\G_L|} \E\mathcal N(E; H^{\G_L,N} )\le 2\,\P \big( E_0 \le E \big)\le 2\,\P \Big( \frac{1}{2|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x)\le E \Big) \le 2\,\P \Big( \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}2^{\ell \beta}\wt V(x)\le \frac{c_0p_1}{8} \Big), \end{align} and so the problem is reduced to estimating the right-most probability from above. Applying a standard large deviation estimate (Hoeffding's inequality) to $2^{\ell \beta}\wt V(x)$, for $E$ sufficiently small (the smallness depending only on $c_0,p_0$), we obtain \begin{align}\label{eqn:LDT-app} \P \left( \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}2^{\ell \beta}\wt V(x)\le\frac{c_0p_1}{8}\right) \le e^{-cE^{-\frac{\alpha}{\beta}} }, \end{align} where $c$ only depends on $c_0,p_0,\alpha,\beta$ (in particular, is independent of $E,\ell$). The argument is quite standard and close to the proof for the $\Z^d$ case, except for the choice of the size of the fundamental domain $\ell$ in \eqref{eqn:lE} due to the specific volume control parameter and the walk dimension on the Sierpinski lattice. We sketch the proof of \eqref{eqn:LDT-app} for completeness. The exponentially decaying probability estimate is provided by Hoeffding's inequality for sums of bounded i.i.d. random variables. \begin{proposition}[Hoeffding {\cite{hoe1963}}]\label{prop:hoeffding} If $\{Y_k\}_{1\le k\le K}$ are i.i.d. random variables ranging in $[0,b]$, then for any $\eps>0$, there is $c=c(\eps,b)>0$ such that \begin{align} \P\Big(\frac{1}{K}\sum_{k=1}^KY_k-\E(Y_k)\ge \eps\Big)\le e^{-cK}. \end{align} \end{proposition} We apply it with $Y_k=2^{\ell \beta}\wt V(x_k)$, where $\{x_k\}$ enumerates $\G_\ell$, and $K=|\G_\ell|$. Using \eqref{eqn:inf-mu}, take $\ell$ sufficiently large (depending only on $c_0$ and $p_1$) so that $\mu_\ell\ge c_0p_1/4$. Then Proposition~\ref{prop:hoeffding} gives \begin{align} \P \Big( \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}2^{\ell \beta}\wt V(x)\le \frac{c_0p_1}{8} \Big) \le &\,\P \Big( \frac{1}{|\G_\ell|}\sum_{x\in \G_\ell}2^{\ell \beta}\wt V(x)-\mu_\ell\le \frac{c_0p_1}{8}-\frac{c_0p_1}{4} \Big) \\ \le &\, e^{-c|\G_\ell|}, \end{align} where $c>0$ only depends on $c_0$ and $p_1$. By the volume lower bound $|\G_\ell|\ge 3^\ell= 2^{\ell \alpha}$, and the lower bound $2^\ell \gtrsim E^{-1/\beta}$ from \eqref{eqn:lE2}, we see $|\G_\ell|\ge c'E^{-\frac{\alpha}{\beta}}$ for some constant $c'$ depending only on $c_0,p_0,\alpha,\beta$. This completes the proof of \eqref{eqn:LDT-app}. The smallness condition of $E$ is determined by the largeness requirement of $\ell$ through the relation \eqref{eqn:lE}. Thus \eqref{eqn:5.21} becomes \begin{align} \frac{1}{|\G_L|} \E\mathcal N(E; H^{\G_L,N} ) \le 2\,\P \big( E_0(H^{ \ell})\le E \big) \le 2\,\P \big( \frac{1}{2|\G_\ell|}\sum_{x\in \G_\ell}\wt V(x)\le E \big)\le ce^{-c_1E^{-\alpha/\beta}}. \end{align} Finally, for fixed $E$, taking the limit as $L\to \infty$ and using \eqref{eqn:IDS-exist-half}, we obtain $ N(E)\le ce^{-c_1E^{-\alpha/\beta}}$. Then taking the double log limit as $E\searrow 0$ implies the desired Lifshitz tail upper bound \[ \limsup_{E\searrow 0} \frac{\log \big|\log N (E)\big|}{\log E}\le -\frac{\alpha}{\beta}. \] \end{proof} It remains to complete the \begin{proof}[Proof of Proposition~\ref{prop:Neumann-ev}] Let $\Delta^{\G_\ell,N}$ be the (combinatorial) Laplacian on $\ell^2(\G_\ell)$ with Neumann boundary condition, i.e., it is the subgraph Laplacian on $\G_\ell$.
Denote the associated probabilistic Laplacian by $\Delta^{\G_\ell,N}_p=D \Delta^{\G_\ell,N} $, where $D={\rm Diag}\{ \deg_{\G_\ell}(x)^{-1} \}$ is the multiplication operator (diagonal matrix) by the reciprocal of the vertex degree (so all entries are either $1/2$ or $1/4$). All the eigenvalues of $\Delta^{\G_\ell,N}_p$ can be explicitly determined by the decimation method as described in \cite{shima1991eigenvalue,teplyaev1998spectral}. We will use the following formulation in \cite[Proposition 3.12]{teplyaev1998spectral}: For $\ell\ge 1$, the eigenvalues of $\Delta^{\G_\ell,N}_p$ are given by \begin{align}\label{eqn:Tep-eigen} \sigma(\Delta^{\G_\ell,N}_p)=\left\{-\frac{3}{2}\right\}\cup \left(\bigcup_{m=0}^{\ell-1} R_{-m}\left\{0,-\frac{3}{4}\right\}\right), \end{align} where $R(z)=z(4z+5)$, and $R_{-m}A$ is the preimage of a set $A\subseteq \R$ under the $m$-th composition power of $R$. For any $x\in \R$, its preimage under $R$ is \[R_{-1}\{x\}=\Big\{\ \frac{-5-\sqrt{25+16x}}{8},\ \frac{-5+\sqrt{25+16x}}{8}\ \Big\}. \] Denote the larger root (at least for $x\ge-25/16$) by \begin{align}\label{eqn:deci-f} f(x)=\frac{-5+\sqrt{25+16x}}{8}. \end{align} By \eqref{eqn:Tep-eigen} and the monotonicity of $f$, the largest eigenvalue of $\Delta^{\G_\ell,N}_p$ is $0$, and the second largest eigenvalue of $\Delta^{\G_\ell,N}_p$ is the $(\ell-1)$-th iteration of $-3/4$ under $f$, i.e., $f^{\circ(\ell-1)}(-3/4)=f\circ f \circ \cdots \circ f(-3/4)$. Computation of the series expansion of $f$ with the Taylor remainder theorem shows that for $-1\le x\le0$, \begin{align} \frac{1}{5}x(1-x) \le f(x)\le \frac{1}{5}x, \end{align} which implies that for $n\ge 0$ and $-1\le x\le 0$ \begin{align}\label{eqn:f-it} \frac{4}{5^n}x\le f^{\circ n}(x)\le \frac{1}{5^n}x. \end{align} The upper bound is immediate, the lower bound of \eqref{eqn:f-it} can be proved by a direct induction and the constant is not optimal. We include the computation in Appendix~\ref{sec:f-it}. By \eqref{eqn:f-it}, we obtain \begin{align} -15\frac{1}{2^{\beta \ell}} =- \frac{3}{4}\frac{4}{5^{\ell-1}}\le f^{\circ(\ell-1)}(-3/4)\le -\frac{3}{4}\frac{1}{5^{\ell-1}}=-\frac{15}{4}\frac{1}{2^{\beta \ell}}, \end{align} in which we used $5^\ell=2^{\ell \log 5/\log 2}=2^{\beta\ell}$. In other words, the first (smallest) eigenvalue of $-\Delta^{\G_\ell,N}_p$ is $E_0(-\Delta^{\G_\ell,N}_p)=0$, and the second (the first non-zero) eigenvalue of $-\Delta^{\G_\ell,N}_p$ is $E_1(-\Delta^{\G_\ell,N}_p)=- f^{\circ(\ell-1)}(-3/4)$ satisfying \begin{align}\label{eqn:E1-lower-p} \frac{15}{4}\frac{1}{2^{\beta \ell}}\le E_1(-\Delta^{\G_\ell,N}_p) \le 15\frac{1}{2^{\beta \ell}}. \end{align} Note that both $D $ and $-\Delta^{\G_\ell,N}$ are positive semidefinite. In addition, the largest eigenvalue of $D $ is $1/2$. Then by the majorization theory of eigenvalues \eqref{eqn:EjAB}, one has \begin{align} \frac{1}{2}E_1\big( -\Delta^{\G_\ell,N} \big)\ge E_1\Big(D (-\Delta^{\G_\ell,N})\Big)=E_1\Big( -\Delta_p^{\G_\ell,N} \Big) , \end{align} which, together with \eqref{eqn:E1-lower-p}, implies that \[ E_1(-\Delta^{\G_\ell,N}) \ge 2E_1(-\Delta^{\G_\ell,N}_p) \ge \frac{15}{2}\frac{1}{2^{\beta \ell}}. \] Similarly, using that the smallest eigenvalue of $D $ is $1/4$ and applying \eqref{eqn:EjAB} again, we obtain \[ E_1(-\Delta^{\G_\ell,N}) \le 4E_1(-\Delta^{\G_\ell,N}_p) \le 60\frac{1}{2^{\beta \ell}}. 
\] \end{proof} \subsection{(Modified) Dirichlet bracketing and the Lifshitz tail lower bound} For the lower bound of \eqref{eqn:lif-SG}, we use the disjoint partition $\wt{\mathcal P}$ \eqref{eqn:Part2-SG} and the modified Dirichlet Laplacian \eqref{eqn:Lap-D}. Similar to the Neumann bracketing, given $L>\ell>0$, write \begin{align} \G_L=\Bigg(\bigcup_{j=1}^{3^{L-\ell}}\wt \G_{\ell,j}\Bigg)\cup \mathcal R, \end{align} where $\wt \G_{\ell,j}$ are all the truncated (and hence disjoint) $2^\ell$-triangles associated with $\G_{\ell,j}\subseteq \G_L$ and $\mathcal R$ is the collection of all the extreme vertices of all $\G_{\ell,j}\subseteq \G_L$; see Figure~\ref{fig:Vpart2}. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{Fig/fig5.png} \caption{The $2^4$-triangle $\G_4$ (the big triangle) consists of nine $2^2$-triangles $\{\G_{2,i}\}_{i=1}^9$. The $2^2$-triangles have 15 extreme vertices in total (the filled circles), $\mathcal R=\{e_j\}_{j=1}^{15}$. After removing $\mathcal R$ from $\G_4$, the truncated triangles $\wt \G_{2,i}$ (the shaded ones) are disjoint, and $(\cup_{i=1}^9\wt \G_{2,i})\cup \mathcal R$ forms a disjoint partition of $\G_4$.} \label{fig:Vpart2} \end{figure} Let $-\Delta^{\wt \G_{\ell,j},D}$ be the (modified) Dirichlet Laplacian on $\wt \G_{\ell,j}$ as in \eqref{eqn:Lap-D}. Using the same argument as in \eqref{eqn:energy-bound1} to bound the removed edge energy between $\G_{\ell,j}$ and $\mathcal R$, we obtain \begin{align} \ipc{f}{-\Delta^{\G_L,N} f}_{\ell^2(\G_L)}\le \sum_{\wt \G_{\ell,j}\subseteq \G_L }\ipc{f}{-\Delta^{\wt \G_{\ell,j},D} f}_{\ell^2(\wt \G_{\ell,j})}+8\sum_{e_j\in \mathcal R}f(e_j)^2, \end{align} which implies \begin{align} \ipc{f}{(-\Delta^{\G_L,N}+V^{\G_L}) f}_{\ell^2(\G_L)}\le \sum_{\wt \G_{\ell,j}\subseteq \G_L }\ipc{f}{(-\Delta^{\wt \G_{\ell,j},D}+V^{\wt \G_{\ell,j}}) f}_{\ell^2(\wt \G_{\ell,j})}+\sum_{e_j\in \mathcal R}(8+V(e_j))f(e_j)^2. \end{align} Then by Lemma~\ref{lem:A3}, dropping terms on the right hand side as in \eqref{eqn:4.16}, \begin{align} \mathcal N( E; H^{\G_L,N} ) \ge \sum_{\wt \G_{\ell,j}\subseteq \G_L } \mathcal N(E; H^{\wt \G_{\ell,j},D} ). \end{align} Taking the expectation on both sides and using the fact that all $\wt \G_{\ell,j}$ are isometric to $\G_\ell$ and $\{V(x)\}$ are i.i.d., we obtain \begin{align}\label{eqn:D-direct-sum-lower} \E \mathcal N(E; H^{\G_L,N}) \ge 3^{L-\ell} \E \mathcal N( E; H^{\wt \G_{\ell},D}). \end{align} Let $E_0=E_0(H^{\wt \G_{\ell},D})$ be the ground state energy of $H^{\wt \G_{\ell},D}$. Given $E>0$, if $E_0(H^{\wt \G_{\ell},D})\le E$, then $\mathcal N(E; H^{\wt \G_{\ell},D})$ is at least one. Hence, \begin{align} \label{eqn:D-N-P-lower} \E \mathcal N(E; H^{\wt \G_{\ell},D})\ge \P\Big(E_0(H^{\wt \G_{\ell},D})\le E\Big). \end{align} It is enough to bound $\P\Big(E_0(H^{\wt \G_{\ell},D})\le E \Big)$ from below, or equivalently, to bound the ground state energy $E_0(H^{\wt \G_{\ell},D})$ from above. In order to make use of estimates for the ground state energy of the Laplacian with \emph{simple} boundary conditions (Proposition~\ref{prop:Diri-ev}), we consider slightly smaller truncated triangles that avoid the boundary vertices. For $\ell$ sufficiently large, let $\wt T \subsetneq \wt \G_\ell$ be a truncated $2^{\ell_1}$-triangle of side length $2^{\ell_1}$, $\ell_1=\ell-2$, located away from the (interior) boundary vertices $\{o_i\}_{i=1}^6$ of $\wt \G_\ell$ (see Figure \ref{fig:wtB}).
\begin{figure} \centering \includegraphics[width=.5\textwidth]{Fig/fig6.png} \caption{The entire figure is a truncated $2^4$-triangle $\wt \G_4$. The shaded region is a smaller truncated $2^2$-triangle $\wt T\subsetneq \wt \G_4$, located near the midpoint of the bottom edge so that $\wt T$ is strictly away from the 6 interior boundary vertices of $\wt \G_4$. } \label{fig:wtB} \end{figure} Let $\phi_0$ be the ground state of the Laplacian $-\Delta^{\wt T}$ on $\wt T$, with the simple boundary condition as in \eqref{eqn:Lap-simple}. Extend $\phi_0$ to $\ell^2(\wt \G_\ell)$ by setting $\phi_0(x)=0$ on $\wt \G_\ell\backslash \wt T$. Then $-\Delta^{\wt \G_\ell,D}\phi_0=-\Delta^{\wt T}\phi_0$ since $\wt T$ is located away from the interior boundaries of $\wt \G_\ell$. Then, by the min-max principle, \begin{align} E_0(H^{\wt \G_{\ell},D})=\inf_{\phi\neq 0}\frac{\ipc{\phi}{H^{\wt \G_{\ell},D}\phi}}{\ipc{\phi}{\phi}} \le & \,\inf_{\phi\neq 0}\frac{\ipc{\phi}{- \Delta^{\wt \G_\ell,D}\phi}}{\ipc{\phi}{\phi}}+\max_{\wt \G_\ell}V(x) \nonumber\\ \le & \,\frac{\ipc{\phi_0}{- \Delta^{\wt \G_\ell,D}\phi_0}}{\ipc{\phi_0}{\phi_0}}+\max_{\wt \G_\ell}V(x) = \frac{\ipc{\phi_0}{-\Delta^{\wt T }\phi_0}}{\ipc{\phi_0}{\phi_0}}+\max_{\wt \G_\ell}V(x) \nonumber\\ = &\, E_0\big(-\Delta^{\wt T }\big)+\max_{\wt \G_\ell}V(x). \label{eqn:E0-1} \end{align} We need the following upper bound for the ground state energy of the simple (zero Dirichlet) Laplacian on truncated triangles. \begin{proposition}\label{prop:Diri-ev} Let $\wt \G_\ell \subseteq \G$ be a truncated $2^\ell$-triangle. Let $E_0(-\Delta^{\wt \G_\ell})$ be the ground state energy (smallest eigenvalue) of the Laplacian with simple boundary conditions $-\Delta^{\wt \G_\ell}$. There are numerical constants $c_0=40$ and $c'_0=10$ such that for any $\ell\in \N$, \begin{align}\label{eqn:Diri-E0-upper-tri} \frac{c'_0}{2^{\ell \beta}} \le E_0(-\Delta^{\wt \G_\ell})\le \frac{c_0}{2^{\ell \beta}} . \end{align} \end{proposition} This is the analogue of Proposition~\ref{prop:Neumann-ev} for the simple (zero Dirichlet) Laplacian. One proof is again based on the recursive expression for the eigenvalue obtained by the decimation method in \cite{shima1991eigenvalue,teplyaev1998spectral}. By \cite[\S 6]{teplyaev1998spectral}, the ground state eigenvalue is given by \begin{align} E_0(-\Delta^{\wt \G_\ell})=-4f^{\circ(\ell-1)}\Big(-\frac{1}{2}\Big), \end{align} where $f(x)$ is the same function as in \eqref{eqn:deci-f}. Using again the iteration estimate $f^{\circ n}(x)\sim x/5^n$ from \eqref{eqn:f-it}, one obtains \eqref{eqn:Diri-E0-upper-tri} with $c_0=40$ and $c_0'=10$. We omit the details here. \begin{remark} By the min-max principle, the same estimate holds for the Dirichlet Laplacian on a (non-truncated) $2^\ell$-triangle $\G_\ell$. More generally, the asymptotic behavior of the first Dirichlet eigenvalue on a graph ball, $ E_0(-\Delta^{B(x,r)}) \approx r^{-\beta}$, always holds on graphs satisfying the Heat Kernel Bound $\mathrm{HK}(\alpha,\beta)$, using the fact that $E_0$ is always proportional to the reciprocal of the exit time from balls; see e.g. \cite[Corollary 2]{shou2024}. \end{remark} Applying the upper bound of \eqref{eqn:Diri-E0-upper-tri} to \eqref{eqn:E0-1} with $\wt T=\wt \G_{\ell_1}$, $\ell_1=\ell-2$, and setting $c_1=2^{2\beta}c_0$, we arrive at \begin{align} E_0(H^{\wt \G_{\ell},D})\le \frac{c_1}{2^{\ell \beta}}+\max_{\wt \G_\ell}V(x).
\end{align} Let \begin{align*} \ell=\Big\lceil \frac{1}{\beta\log 2}\log\big(2c_1 E^{-1}\big) \Big\rceil, \end{align*} so that \begin{align}\label{eqn:lE-upper} \frac{c_1}{2^{\ell \beta }}\le \frac{1}{2}E,\ \ {\rm and}\ \ \ \ 2^\ell\le 2 (2c_1)^{\frac{1}{\beta}}\cdot E^{-\frac{1}{\beta}}. \end{align} Then $E_0(H^{\wt \G_{\ell},D})\le E/2+\max_{\wt \G_\ell}V(x) $, which implies for sufficiently small $E$, \begin{align} \P\Big(E_0(H^{\wt \G_{\ell},D})\le E\Big)\ge \P\Big(\max_{x\in \wt \G_\ell}V(x)\le E/2\Big) \ge&\; \P\Big({\textrm{For all} } \ x\in\wt \G_\ell, V(x)\le E/2\Big)\nonumber \\ \ge &\; \Big(C(E/2)^{\kappa}\Big)^{|\wt \G_\ell|} \\ \ge &\; Ce^{c_2 (\log(E/2)) E^{-\frac{\alpha}{\beta}}}, \label{eqn:PE0-lower} \end{align} where we used the assumption on the probability distribution, $\P\big(V(x)\le E\big)\ge CE^{\kappa}$ from \eqref{eqn:V-Lif-ass}, the independence of $\{V(x)\}$, and the upper bound \[|\wt \G_\ell|\le 3^{\ell+1}=3\times 2^{\ell \alpha}\lesssim E^{-\alpha/\beta} \] from \eqref{eqn:lE-upper} (since $\log(E/2)$ is negative for small $E$). Putting \eqref{eqn:PE0-lower} together with \eqref{eqn:D-direct-sum-lower} and \eqref{eqn:D-N-P-lower}, we obtain \begin{align} \frac{1}{|\G_L|} \E\mathcal N(E; H^{\G_L,N})\ge \frac{3^{L-\ell}}{|\G_L|} \P\Big(E_0(H^{\wt \G_{\ell},D})\le E\Big)\ge c_3E^{\frac{\alpha}{\beta}}e^{c_2 (\log(E/2)) E^{-\frac{\alpha}{\beta}}}. \end{align} Similarly as for the upper bound, for fixed $E$, taking the limit as $L\to \infty$ and again using \eqref{eqn:IDS-exist-half}, we obtain $ N(E)\ge c_3E^{\frac{\alpha}{\beta}}e^{c_2 (\log(E/2)) E^{-\frac{\alpha}{\beta}}}$. Then taking the double log limit as $E\searrow 0$ implies the desired Lifshitz tail lower bound \[ \liminf_{E\searrow 0} \frac{\log \big|\log N (E)\big|}{\log E}\ge -\frac{\alpha}{\beta}. \] \appendix \section{Eigenvalue counting comparison}\label{sec:min-max} The min-max principle of E. Fischer \cite{fischer1905quadratische} and R. Courant \cite{courant1920eigenwerte} for self-adjoint operators (bounded from below) is a useful tool to count the eigenvalues below the essential spectrum of the operator. In this appendix, we briefly summarize some of the consequences of the min-max principle for self-adjoint linear operators $H$ (actually real symmetric matrices) on a finite dimensional Hilbert space $\mathcal H=\C^K$. In this case, $H$ has eigenvalues $\lambda_1\le \cdots\le \lambda_K$ and the eigenvalue counting function is denoted by $ \mathcal N(E;H)=\#\{i:\lambda_i\le E\}$. Clearly, $ \mathcal N(E;H)={\rm dim\ span}\{\phi^i\in \mathcal H:\ H\phi^i=\lambda_i\phi^i,\ \lambda_i\le E\}$, which is also the maximal dimension of a subspace of $\mathcal H$ on which $\ipc{f}{Hf}\le E\ipc{f}{f}$. As a consequence, if $\ipc{f}{Hf}> E\ipc{f}{f}$ for all nonzero $f$ in a subspace $S\subseteq \mathcal H$, then \begin{align}\label{eqn:codim} \mathcal N(E;H)\le {\rm codim}\,S, \end{align} and if $\ipc{f}{Hf}\le E\ipc{f}{f}$ for all $f\in S,$ then \begin{align}\label{eqn:dim} \mathcal N(E;H)\ge {\rm dim}\,S. \end{align} For the matrix case, the min-max principle reads (see e.g. \cite[{Theorem 4.13}]{aizenman2015random}), with the minimum taken over linearly independent families $\phi^1,\cdots,\phi^n$, \begin{align} \lambda_n=\min_{\phi^1,\cdots,\phi^n \in \mathcal H}\max\big\{\ipc{\phi}{H\phi}:\ \ \phi\in {\rm span}(\phi^1,\cdots,\phi^n) , \ \|\phi\|=1\big\}. \end{align} Hence, if $H_i,i=1,2$ are two self-adjoint operators on $\mathcal H$ satisfying $H_1\ge H_2$ in the sense that $H_1-H_2$ is positive semidefinite, then \begin{align}\label{eqn:NH1<NH2} \mathcal N(E;H_1) \le \mathcal N(E;H_2).
\end{align} We would like to compare the eigenvalue counting functions between matrices up to some rank one perturbations or orthogonal projections onto a smaller subspace. \begin{lemma} \label{lem:A1} Let $H$ be a real symmetric matrix on $\mathcal H=\C^K$. Let $H_1$ be the orthogonal projection of $H$ acting on a linear subspace $\mathcal H_1\cong \C^{L}$. Then \begin{align}\label{eqn:NH-cauchy-inter} \mathcal N(E;H_1)\le \mathcal N(E;H)\le \mathcal N(E;H_1)+{\rm codim}H_1. \end{align} Now suppose $H_2=H+\sum_{j=1}^mc_je_{i_j}e^T_{i_j}$ is a rank $m$ diagonal perturbation of $H$. Then \begin{align}\label{eqn:NH-diag-pert} |\mathcal N(E;H)-\mathcal N(E;H_2)|\le m. \end{align} \end{lemma} \begin{proof} Suppose $H$ has eigenvalues $\lambda_1\le \cdots\le \lambda_K$ and $H_1$ has eigenvalues $\mu_1\le \cdots\le \mu_{L}$. By the Cauchy interlacing theorem, \begin{align} \lambda_i\le \mu_i\le \lambda_{i+K-L}, i=1,\cdots,L. \end{align} which implies \eqref{eqn:NH-cauchy-inter}. For the second part, note that $H$ and $H_2$ have the same orthogonal projection $\wt H$, obtained by deleting the rows and columns from $H$ for $i_1,\cdots,i_m$, onto the subspace $\wt {\mathcal H}=\{e_{i_1},\ldots,e_{i_m}\}^\perp$ with ${\rm codim}\wt {\mathcal H}=m$. Then \eqref{eqn:NH-cauchy-inter} implies \begin{align} \mathcal N(E; H_2 )-m\le \mathcal N(E;\wt H )\le \mathcal N(E;H)\le \mathcal N(E;\wt H )+m\le \mathcal N(E; H_2 )+m, \end{align} which is \eqref{eqn:NH-diag-pert}. \end{proof} \begin{lemma}\label{lem:NH<NH12} Let $H$ be a self-adjoint matrix on $\mathcal{H}\cong\C^n$, and let $\{v_1,\ldots,v_n\}$ be an orthonormal basis for $\mathcal{H}$. For $i=1,\ldots,k$, set $\mathcal{H}_i=\operatorname{span}(\{v_m:m\in I_i\})$ for sets $I_i\subseteq\{1,\ldots,n\}$, and let $H_i$ be a self-adjoint operator on $\mathcal{H}_i$ for each $i=1,\ldots,k$. Suppose $\bigcup_{i=1}^kI_i=\{1,\ldots,n\}$, and that \begin{align}\label{eqn:H>H12} \ipc{f}{Hf}_{\mathcal H}\ge \sum_{i=1}^k\ipc{P_if}{H_iP_if}_{\mathcal H_i}, \quad\text{for all }f\in\mathcal H, \end{align} where $P_i$ is the orthogonal projection onto $\mathcal H_i$. Then \begin{align}\label{eqn:NH<H12} \mathcal N(E;H)\le \sum_{i=1}^k\mathcal N(E;H_i), \end{align} where $\mathcal N(E;H_i)$ is the eigenvalue counting function for the restriction of $H_i$ on $\mathcal H_i$. \end{lemma} \begin{proof} For any $i$, let $S_i=\{\phi\in \mathcal H_i\subseteq\mathcal H: \ipc{\phi}{H_i\phi}\le E\ipc{\phi}{\phi}\}$. Let $ S=S_1+S_2+\cdots+ S_k\subseteq \mathcal H$. Suppose $f \in S^\perp\subseteq\cap_i S_i^{\perp} $. For each $i$, write $f=P_if+ P_i^{\perp}f$, where $P_i^\perp$ is the orthogonal projection onto $\mathcal{H}_i^\perp$. If $P_if\neq 0$, then $f \in S_i^{\perp}$ implies $P_if \in S_i^{\perp}$. By the definition of $S_i$, then $ \ipc{P_if}{H_iP_if}_{\mathcal H_i}\ge E\ipc{P_if}{P_if}$. Note if $P_if= 0$, the same inequality holds trivially. Then \eqref{eqn:H>H12} implies that \begin{align} \ipc{f}{Hf}_{\mathcal H}\ge \sum_{i=1}^k E\ipc{P_if}{P_if} \ge E\ipc{f}{f}, \end{align} where in the last inequality we used the definition of the $\mathcal{H}_i$ and that $\bigcup_{i=1}^kI_i=\{1,\ldots,n\}$. Therefore, by \eqref{eqn:codim}, \begin{align} \mathcal N(E;H)\le {\rm codim}S^\perp={\rm dim} S \le \sum_{i=1}^k {\rm dim} S_i . \end{align} Recalling that $\mathcal N(E;H_i)$ is taken on the Hilbert space $\mathcal{H}_i$, we see that ${\rm dim}S_i= \mathcal N(E;H_i)$ by definition. \end{proof} \begin{lemma}\label{lem:A3} Let $H$ be a real symmetric matrix on $\mathcal H$. 
Let $\mathcal H_i\subseteq \mathcal H$, for $i=1,\cdots,k$ with $k\ge2$, be subspaces, and let $H_i$ be a self-adjoint linear operator on $\mathcal H_i$ for each $i=1, \cdots,k$. Suppose that $\mathcal H_i$ and $\mathcal H_j$ are orthogonal for $i\neq j$, and that \begin{align}\label{eqn:H<H12} \ipc{f}{Hf}_{\mathcal H}\le \sum_{i=1}^k\ipc{P_if}{H_iP_if}_{\mathcal H_i}, \quad\text{for all }f\in\mathcal H, \end{align} where $P_i$ is the orthogonal projection onto $\mathcal H_i$. Then \begin{align}\label{eqn:NH>H12} \mathcal N(E;H)\ge \sum_{i=1}^k\mathcal N(E;H_i), \end{align} where $\mathcal N(E;H_i)$ is the eigenvalue counting function for the restriction of $H_i$ on $\mathcal H_i$. In particular, $\mathcal N(E;H)\ge \mathcal N(E;H_i)$ for any $i$. \end{lemma} \begin{proof} Let $S_i=\{\phi\in \mathcal H_i: \ipc{\phi}{H_i\phi}\le E\ipc{\phi}{\phi}\}$. Let $ S=\oplus_i S_i\subseteq\mathcal{H}$ be the direct sum of $S_i$ (which is well-defined since $S_i\subseteq \mathcal H_i$ are orthogonal). Suppose $f \in S$, so that $P_if\in S_i$, which implies $ \ipc{P_if}{H_iP_if}_{\mathcal H_i}\le E\ipc{P_if}{P_if}$ by the definition of $S_i$. Then \eqref{eqn:H<H12} implies that \begin{align} \ipc{f}{Hf}_{\mathcal H}\le \sum_{i=1}^k E\ipc{P_if}{P_if}\le E\ipc{f}{f}, \end{align} where in the last inequality we used that the $ {S}_i$ are orthogonal so that $\sum_{i=1}^k P_i$ is the orthogonal projection onto $S$. Therefore, by \eqref{eqn:dim}, \begin{align} \mathcal N(E;H)\ge {\rm dim} S =\sum_{i=1}^k {\rm dim} S_i=\sum_{i=1}^k \mathcal N(E;H_i). \end{align} \end{proof} \section{Asymptotic behavior of the iteration \eqref{eqn:f-it}}\label{sec:f-it} Suppose that for $-1\le x\le0$, \begin{align}\label{eqn:f-lower} 0\ge f(x)\ge \frac{1}{5}x(1-x). \end{align} We show that for $n\ge 1$, the $n$-th iteration of $f$ satisfies, for $-1\le x \le 0$, \begin{align}\label{eqn:fn-lower} 0\ge f^{\circ n}(x)\ge \frac{1}{5^n}x \big(1- S_n x\big),\ \ \text{where } S_n=\sum_{m=0}^{n-1}\frac{(m+1)^2}{5^m}. \end{align} Since $S_1=1$, \eqref{eqn:fn-lower} holds for $n=1$. Now suppose that \eqref{eqn:fn-lower} holds for some $n \ge 1$. Note that \eqref{eqn:f-lower} implies that $0\ge f^{\circ m}(x)\ge -1$ for $-1\le x\le0$ for any $m\ge1$. Then \begin{align*} f^{\circ ({n+1})}(x)= f\big(f^{\circ n}(x)\big) \ge &\, \frac{1}{5}f^{\circ n}(x)(1-f^{\circ n}(x)) \\ \ge &\, \frac{1}{5}\frac{1}{5^n}x \big(1- S_n x\big)\Big[1-\frac{1}{5^n}x \big(1- S_n x\big) \Big] \\ =& \, \frac{1}{5^{n+1}}x \Big[1- S_n x-\frac{1}{5^n}x \big(1- S_n x\big)^2 \Big]. \numberthis \end{align*} Using the very loose bound$0<S_n\le n$ and $-1\le x\le 0$, one has $1- S_n x\le 1+n$. Hence, \begin{align} f^{\circ ({n+1})}(x)\ge \frac{1}{5^{n+1}}x \Big[1- S_n x-\frac{1}{5^n}x (1+n )^2 \Big]= \frac{1}{5^{n+1}}x (1- S_{n+1} x ), \end{align} which completes the induction. Direct computation gives $S_n\le S_\infty=75/32$ for all $n$. Therefore, for $-1\le x \le 0$, \begin{align} f^{\circ n}(x)\ge \frac{1}{5^n}x \big(1- S_\infty x\big)\ge \frac{1}{5^n}x \left(1+\frac{75}{32}\right)\ge \frac{4}{5^n}x , \end{align} which is the lower bound in \eqref{eqn:f-it}. \section{Ordered eigenvalues of the product of two semidefinite matrices} \begin{lemma} Denote by $E_0(X)\le E_1(X)\le \cdots E_{n-1}(X)$ the ordered eigenvalues (if they are all real) of a $n\times n$ matrix $X$. Suppose $A,B$ are two $n\times n$ positive semidefinite Hermitian matrices. Then for $j=0,\cdots,n-1$, \begin{align}\label{eqn:EjAB} E_{0}(A)E_j(B) \le E_j(AB)\le E_{n-1}(A)E_j(B). 
\end{align} \end{lemma} This is some well-known result of the theory majorization of eigenvalues. A more general version can be found in e.g. \cite[p.340, H.1.d. Theorem (Lidski\v{i}, 1950)]{marsh2011}. We sketch the proof of the special case \eqref{eqn:EjAB} here for the reader's convenience. \begin{proof} Since $B$ is positive semidefinite Hermitian, then $B^{1/2}$ is well-defined, and is also positive semidefinite Hermitian. Rewrite $AB=(AB^{1/2})\cdot B^{1/2}$ which is similar to $B^{1/2}\cdot(AB^{1/2})$. Hence, the ordered eigenvalues (counting multiplicity) of $AB$ are the same as $B^{1/2} AB^{1/2} $, i.e., \[E_{j}(AB)=E_j(B^{1/2} AB^{1/2}),\ \ j=0,\cdots,n-1. \] On the other hand, denote by $E_{\max}=E_{n-1}(A)$ the largest eigenvalue of $A$. Then $E_{\max}-A\ge 0$ implies $B^{1/2}E_{\max}B^{1/2}\ge B^{1/2}AB^{1/2}$ (both inequalities being in the positive semidefinite sense). Then by the min-max principle, one has for $j=0,\cdots,n-1$, \[E_j(B^{1/2} AB^{1/2})\le E_j(B^{1/2}E_{\max}B^{1/2})=E_{\max}\cdot E_j(B^{1/2} B^{1/2})=E_{\max}\cdot E_j(B ), \] which is the upper bound. The lower bound can be proved exactly in the same way using $B^{1/2}E_{0}(A)B^{1/2}\le B^{1/2}AB^{1/2}$. \end{proof} \noindent\textbf{Acknowledgments.} \phantomsection \addcontentsline{toc}{section}{Acknowledgments} The authors would like to thank Jeffrey Schenker for many useful discussions about the Anderson model on the Sierpinski gasket, in particular, for suggesting studying the spectrum using the Weyl criterion. S.Z. was supported by the NSF grant DMS-2418611. \bibliographystyle{abbrv} \bibliography{SG.bib} { \bigskip \vskip 0.08in \noindent -------------------------------------- \footnotesize \medskip L.~Shou, {Joint Quantum Institute, Department of Physics, University of Maryland, College Park, MD 20742, USA}\par\nopagebreak \textit{E-mail address}: \href{mailto:[email protected]}{[email protected]} \vskip 0.4cm W. ~Wang, {LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China}\par\nopagebreak \textit{E-mail address}: \href{mailto:[email protected]}{[email protected]} \vskip 0.4cm S.~Zhang, {Department of Mathematics and Statistics, University of Massachusetts Lowell, Southwick Hall, 11 University Ave. Lowell, MA 01854 }\par\nopagebreak \textit{E-mail address}: \href{mailto:shiwen\[email protected]}{shiwen\[email protected]} } \end{document}
2412.13634v1
http://arxiv.org/abs/2412.13634v1
Kinetically constrained models
\documentclass[graybox,envcountchap,sectrefs,envcountsame]{svmono} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb,psfrag} \usepackage{enumitem} \usepackage{bbm} \usepackage{tikz} \usepackage{tkz-graph} \setlist[itemize]{leftmargin=*} \setlist[enumerate]{leftmargin=*,label=(\roman*),ref=(\roman*)} \spnewtheorem{observation}[theorem]{Observation}{\bf}{\it} \usepackage[numeric,initials,nobysame,msc-links,abbrev]{amsrefs} \renewcommand{\eprint}[1]{\href{https://arxiv.org/abs/#1}{arXiv:#1}} \newcommand{\pageafter}[1]{#1~pp.} \BibSpec{article}{+{} {\PrintAuthors} {author} +{,} { \textit} {title} +{.} { } {part} +{:} { \textit} {subtitle} +{,} { \PrintContributions} {contribution} +{.} { \PrintPartials} {partial} +{,} { } {journal} +{} { \textbf} {volume} +{} { \PrintDatePV} {date} +{,} { \issuetext} {number} +{,} { \pageafter} {pages} +{,} { } {status} +{,} { \PrintDOI} {doi} +{,} { available at \eprint} {eprint} +{} { \parenthesize} {language} +{} { \PrintTranslation} {translation} +{;} { \PrintReprint} {reprint} +{.} { } {note} +{.} {} {transition} +{} {\SentenceSpace \PrintReviews} {review} } \BibSpec{collection.article}{+{} {\PrintAuthors} {author} +{,} { \textit} {title} +{.} { } {part} +{:} { \textit} {subtitle} +{,} { \PrintContributions} {contribution} +{,} { \PrintConference} {conference} +{} {\PrintBook} {book} +{,} { } {booktitle} +{,} { \PrintDateB} {date} +{,} { \pageafter} {pages} +{,} { } {status} +{,} { \PrintDOI} {doi} +{,} { available at \eprint} {eprint} +{} { \parenthesize} {language} +{} { \PrintTranslation} {translation} +{;} { \PrintReprint} {reprint} +{.} { } {note} +{.} {} {transition} +{} {\SentenceSpace \PrintReviews} {review} } \usepackage{verbatim} \usetikzlibrary{arrows} \usetikzlibrary[patterns] \usetikzlibrary{shapes.misc} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{arrows.meta} \usepackage{subcaption} \usepackage{array} \usepackage{multirow} \renewcommand{\familydefault}{bch} \usepackage{hyperref} \tikzset{cross/.style={cross out, draw=black, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, cross/.default={1pt}} \usepackage{newtxtext} \usepackage{newtxmath} \setcounter{tocdepth}{1} \newcommand{\Var}{\operatorname{Var}} \newcommand{\Ent}{\operatorname{Ent}} \newcommand{\cA}{\ensuremath{\mathcal A}} \newcommand{\cB}{\ensuremath{\mathcal B}} \newcommand{\cC}{\ensuremath{\mathcal C}} \newcommand{\cD}{\ensuremath{\mathcal D}} \newcommand{\cE}{\ensuremath{\mathcal E}} \newcommand{\cF}{\ensuremath{\mathcal F}} \newcommand{\cG}{\ensuremath{\mathcal G}} \newcommand{\cH}{\ensuremath{\mathcal H}} \newcommand{\cI}{\ensuremath{\mathcal I}} \newcommand{\cJ}{\ensuremath{\mathcal J}} \newcommand{\cK}{\ensuremath{\mathcal K}} \newcommand{\cL}{\ensuremath{\mathcal L}} \newcommand{\cM}{\ensuremath{\mathcal M}} \newcommand{\cN}{\ensuremath{\mathcal N}} \newcommand{\cO}{\ensuremath{\mathcal O}} \newcommand{\cP}{\ensuremath{\mathcal P}} \newcommand{\cQ}{\ensuremath{\mathcal Q}} \newcommand{\cR}{\ensuremath{\mathcal R}} \newcommand{\cS}{\ensuremath{\mathcal S}} \newcommand{\cT}{\ensuremath{\mathcal T}} \newcommand{\cU}{\ensuremath{\mathcal U}} \newcommand{\cV}{\ensuremath{\mathcal V}} \newcommand{\cW}{\ensuremath{\mathcal W}} \newcommand{\cX}{\ensuremath{\mathcal X}} \newcommand{\cY}{\ensuremath{\mathcal Y}} \newcommand{\cZ}{\ensuremath{\mathcal Z}} \newcommand{\bbA}{{\ensuremath{\mathbb A}} } \newcommand{\bbB}{{\ensuremath{\mathbb B}} } \newcommand{\bbC}{{\ensuremath{\mathbb C}} } \newcommand{\bbD}{{\ensuremath{\mathbb D}} } 
\newcommand{\bbE}{{\ensuremath{\mathbb E}} } \newcommand{\bbF}{{\ensuremath{\mathbb F}} } \newcommand{\bbG}{{\ensuremath{\mathbb G}} } \newcommand{\bbH}{{\ensuremath{\mathbb H}} } \newcommand{\bbI}{{\ensuremath{\mathbb I}} } \newcommand{\bbJ}{{\ensuremath{\mathbb J}} } \newcommand{\bbK}{{\ensuremath{\mathbb K}} } \newcommand{\bbL}{{\ensuremath{\mathbb L}} } \newcommand{\bbM}{{\ensuremath{\mathbb M}} } \newcommand{\bbN}{{\ensuremath{\mathbb N}} } \newcommand{\bbO}{{\ensuremath{\mathbb O}} } \newcommand{\bbP}{{\ensuremath{\mathbb P}} } \newcommand{\bbQ}{{\ensuremath{\mathbb Q}} } \newcommand{\bbR}{{\ensuremath{\mathbb R}} } \newcommand{\bbS}{{\ensuremath{\mathbb S}} } \newcommand{\bbT}{{\ensuremath{\mathbb T}} } \newcommand{\bbU}{{\ensuremath{\mathbb U}} } \newcommand{\bbV}{{\ensuremath{\mathbb V}} } \newcommand{\bbW}{{\ensuremath{\mathbb W}} } \newcommand{\bbX}{{\ensuremath{\mathbb X}} } \newcommand{\bbY}{{\ensuremath{\mathbb Y}} } \newcommand{\bbZ}{{\ensuremath{\mathbb Z}} } \newcommand{\sB}{\ensuremath{\mathscr B}} \newcommand{\bone}{{\ensuremath{\mathbf 1}} } \newcommand{\bzero}{{\ensuremath{\mathbf 0}} } \newcommand{\ba}{{\ensuremath{\mathbf a}} } \newcommand{\bb}{{\ensuremath{\mathbf b}} } \newcommand{\bc}{{\ensuremath{\mathbf c}} } \newcommand{\bd}{{\ensuremath{\mathbf d}} } \newcommand{\be}{{\ensuremath{\mathbf e}} } \newcommand{\bg}{{\ensuremath{\mathbf g}} } \newcommand{\bh}{{\ensuremath{\mathbf h}} } \newcommand{\bi}{{\ensuremath{\mathbf i}} } \newcommand{\bj}{{\ensuremath{\mathbf j}} } \newcommand{\bk}{{\ensuremath{\mathbf k}} } \newcommand{\bl}{{\ensuremath{\mathbf l}} } \newcommand{\bm}{{\ensuremath{\mathbf m}} } \newcommand{\bn}{{\ensuremath{\mathbf n}} } \newcommand{\bo}{{\ensuremath{\mathbf o}} } \newcommand{\bp}{{\ensuremath{\mathbf p}} } \newcommand{\bq}{{\ensuremath{\mathbf q}} } \newcommand{\br}{{\ensuremath{\mathbf r}} } \newcommand{\bs}{{\ensuremath{\mathbf s}} } \newcommand{\bt}{{\ensuremath{\mathbf t}} } \newcommand{\bu}{{\ensuremath{\mathbf u}} } \newcommand{\bv}{{\ensuremath{\mathbf v}} } \newcommand{\bw}{{\ensuremath{\mathbf w}} } \newcommand{\bx}{{\ensuremath{\mathbf x}} } \newcommand{\by}{{\ensuremath{\mathbf y}} } \newcommand{\bz}{{\ensuremath{\mathbf z}} } \newcommand{\var}{\operatorname{Var}} \newcommand{\<}{\langle} \renewcommand{\>}{\rangle} \newcommand{\diam}{{\ensuremath{\mathrm{diam}}} } \newcommand{\qe}{{\ensuremath{q_{\mathrm{eff}}}} } \newcommand{\1}{{\ensuremath{\mathbbm{1}}} } \newcommand{\trel}{T_{\mathrm{rel}}} \newcommand{\tmix}{{\ensuremath{T_{\mathrm{mix}}}} } \newcommand{\ent}{{\ensuremath{\mathrm{Ent}}} } \newcommand{\csob}{{\ensuremath{c_{\mathrm{sob}}}} } \newcommand{\qc}{{\ensuremath{q_{\mathrm{c}}}} } \newcommand{\qct}{{\ensuremath{\tilde q_{\mathrm{c}}}} } \newcommand{\tbp}{{\ensuremath{\tau_0^{\mathrm{BP}}}} } \newcommand{\Hb}{{\ensuremath{\overline{\mathbb H}}} } \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \DeclareDocumentCommand \to { o o } { \IfNoValueTF {#1} {\IfNoValueTF{#2}{\rightarrow}{\xrightarrow[#2]}}{\IfNoValueTF{#2}{\xrightarrow{#1}}{\xrightarrow[#2]{#1}}}} \begin{document} \frontmatter \title{Kinetically constrained models} \author{Ivailo Hartarsky and Cristina Toninelli} \maketitle {\dedication{To our friend, mentor and collaborator Fabio Martinelli, \newline without whom this book would have ended here.}} \tableofcontents \preface This manuscript focuses on Kinetically Constrained Models (KCM), a topic which lies at the intersection between 
probability and statistical mechanics. KCM are a class of Markov processes. They belong to the larger class of interacting particle systems with stochastic dynamics on discrete lattices. KCM were introduced in the physics literature in the 1980's to model the liquid-glass transition, a longstanding open problem in condensed matter physics. The key feature of KCM is that the update at a given lattice site can occur only if the configuration verifies a kinetic constraint requiring that there are no particles in a suitable neighbourhood. Extensive numerical simulations indicate that KCM display a remarkable behavior typical of glassy systems. Therefore, they have been the subject of several investigations in the last forty years with the aim of providing a deeper understanding of the liquid-glass transition and of more general jamming transitions. Mathematically, KCM pose very challenging and interesting problems. In fact, the presence of the constraints induces non-attractiveness, the occurrence of several invariant measures, and the failure of many powerful tools to analyze relaxation to equilibrium (coercive inequalities, coupling, censoring\dots). Remarkably, the degeneracy of the rates caused by the constraints is not a mere technical obstacle which prevents using the classic tools. Indeed, the behavior of KCM is qualitatively different from that of interacting particle systems without constraints. Peculiar features include anomalously long mixing times, aging, singularities in the dynamical large deviation function, dynamical heterogeneities, and atypical ergodicity breaking transitions corresponding to the emergence of a large variety of amorphous structures. All in all, we can definitely say that KCM open a new chapter in the well established field of interacting particle systems. Major progress has been made in the last twenty years towards a full and rigorous understanding of the large time behavior of KCM at stationarity. We present these results, illustrating both the high level ideas and some novel technical tools that have been devised to deal with the presence of constraints and with the lack of attractiveness. On the way, we unveil some remarkable connections of KCM with other mathematical subjects, in particular with bootstrap percolation cellular automata. We also present a choice of open problems concerning particularly the out of equilibrium dynamics. Indeed, despite some achievements, robust tools to analyse KCM in this regime are still lacking and several beautiful questions remain open, even for simple choices of constraints. This book aims at being accessible to both mathematicians and physicists. Hopefully it will be a useful tool to reinforce the bridge between the two communities which, in our opinion, have still much to learn from each other on KCM and glassy dynamics. \section*{Outline} The content of the manuscript is as follows. \begin{itemize} \item In Chapter \ref{chap:intro} we provide the physics background and motivation for studying KCM. \item In Chapter \ref{chap:models} we formally introduce KCM along with the relevant notation and key quantities of interest. It may be viewed as defining the scope of the manuscript. \item In Chapter \ref{chap:BP} we discuss deterministic monotone cellular automata known as bootstrap percolation and their fundamental relation to KCM. \item In Chapter \ref{chap:1d} we explore KCM in one dimension and introduce some basic tools, notably the bisection-constrained method. 
We focus particularly on the Fredrickson--Andersen 1-spin facilitated model (FA-1f) and on the East model, which not only serve as a warm-up for more advanced models, but also as a tool for their study. \item In Chapter \ref{chap:FA2f} we consider the Fredrickson--Andersen 2-spin facilitated model in 2 dimensions. We develop progressively more sophisticated tools for its study, culminating with determining its sharp asymptotic behaviour at low temperature. These tools, which are flexible enough to be generalised to treat other models, include a robust long range Poincar\'e inequality and a very flexible multi-scale renormalisation tool, the Matryoska dolls. \item In Chapter \ref{chap:universality} we examine the universality theory for KCM in one and two dimensions. It further elaborates our techniques and establishes a detailed map of the domain. \item In Chapter \ref{chap:out} we turn our attention to results on KCM out of equilibrium. Convergence to equilibrium and mixing times are investigated, using a set of tools completely separate from previous chapters. \item In Chapter \ref{chap:other} we briefly mention several settings, other than the one of Chapter~\ref{chap:models}, in which KCM have been studied. We also mention some closely related models, and provide more detailed references for the interested reader. \end{itemize} Dependencies between different chapters are shown in the next diagram. \begin{center} \begin{tikzpicture}[x=0.20\textwidth,y=0.1\textwidth] \GraphInit[vstyle=Normal] \SetVertexNormal[Shape = rectangle, LineWidth = 1pt]\tikzset{VertexStyle/.append style = {outer sep=1pt}} \SetVertexNormal[Shape = circle, LineWidth = 1pt]\tikzset{VertexStyle/.append style = {outer sep=1pt}} \Vertex[x=0,y=2,L=\ref{chap:intro}]{intro} \Vertex[x=2,y=1,L=\ref{chap:1d}]{1d} \Vertex[x=1,y=1,L=\ref{chap:BP}]{BP} \Vertex[x=3,y=1,L=\ref{chap:FA2f}]{FA2f} \Vertex[x=1,y=2,L=\ref{chap:other}]{other} \Vertex[x=0,y=1,L=\ref{chap:models}]{models} \Vertex[x=2,y=2,L=\ref{chap:out}]{out} \Vertex[x=4,y=1,L=\ref{chap:universality}]{universality} \tikzset{EdgeStyle/.style = {->}} \Edge(1d)(FA2f) \Edge(FA2f)(universality) \Edge(BP)(out) \Edge(BP)(1d) \Edge(models)(BP) \end{tikzpicture} \end{center} Consequently, Chapters~\ref{chap:intro} and \ref{chap:other} can be regarded as optional general knowledge. Chapters~\ref{chap:models} and~\ref{chap:BP} are indispensable core material. A graduate course on the subject could cover these two chapters and a selection of Chapters~\ref{chap:1d} and/or~\ref{chap:out}, which both introduce a large variety of techniques in an accessible setting. The remaining Chapters~\ref{chap:FA2f} and~\ref{chap:universality} are intended for a more expert audience, particularly for newcomers to the field, who have already covered the basics, but need some background and intuition before delving into the details of specific papers. The more basic material (Chapters~\ref{chap:intro}-\ref{chap:1d}) is covered including full proofs or detailed sketches and featuring exercises to help assimilating the content. Subsequent chapters are less detailed and often refer to original papers for technical details. We apologise for the inevitable inaccuracies due to favouring simplicity over technical completeness. Indeed, we aim at highlighting the heuristic ideas and the guiding lines behind each method and result. We have tried to keep the presentation as self-contained as possible, but there are some prerequisites. 
We do not assume, but hope for some familiarity with the basics of standard textbooks in the field such as \cite{Levin09} on Markov chains and \cites{Liggett99,Liggett05} on interacting particle systems, while some of the contents of an undergraduate course in probability theory, as covered for instance in \cites{Durrett19,Legall22,Feller68,Dudley02}, will be used without notice. While it is possible to only refer to these books as needed, it may be a good investment to first acquire some superficial experience with their content, which is excellent to have in any case. \section*{Acknowledgements} We thank Damiano de Gaspari, Fabio Martinelli, Quentin Moulard and Fabio Toninelli for helpful discussions. We thank Giulio Biroli, Oriane Blondel, Laure Mar\^ech\'e, Fabio Martinelli, Assaf Shapira, R\'eka Szab\'o for insightful comments and corrections on the presentation and Filippo Nuccio for further support. I.H.~was supported by the Austrian Science Fund (FWF): P35428-N. Most of his work on this manuscript was done in 2023 and 2024, when he was affiliated with TU Wien, whose hospitality is gratefully acknowledged. C.T.~was supported by ERC through Grant 680275 'MALIG'. \mainmatter \foreach \i in {1,...,8}{ \input{\i} } \cleardoublepage \backmatter \phantomsection \addcontentsline{toc}{chapter}{References} \bibliography{Bib} \end{document} \chapter{One-dimensional models} \label{chap:1d} \abstract{In this chapter we investigate one-dimensional KCM. Most notably, these include the FA-1f and East models. We present the techniques used to determine the scaling of their characteristic times as $q\to 0$. We familiarise ourselves with the use of test functions, non BP-based canonical paths, combinatorial bottlenecks and bisection in the simplest possible setting. We then move on to FA-2f and general KCM, still in one dimension. These one-dimensional models are not only interesting in their own right, but will also serve as tools for the study of higher-dimensional KCM via renormalisation.\\} In one dimension there are three nearest neighbour KCM corresponding to the update families $\{\{-1\},\{1\}\}$, $\{\{1\}\}$ and $\{\{-1,1\}\}$ (excluding the trivial cases $\{\varnothing\}$ and $\varnothing$, as usual). We recognise the FA-1f, East and FA-2f models respectively. The latter is not very interesting from our viewpoint, but we consider it for completeness. FA-1f and East on the other hand are not only of interest themselves, but also provide fundamental building blocks for renormalisation arguments for more complex models (as in the proof of Theorem~\ref{th:exponential}). \section{FA-1f} \label{sec:FA1f} In this section, we consider $\cU=\{\{-1\},\{1\}\}$. From Theorem~\ref{th:BP}, we have $q_c=0$. We therefore look for asymptotics as $q\to0$, which are provided by the work of Cancrini, Martinelli, Roberto and Toninelli \cite{Cancrini08} and Shapira \cite{Shapira20} (and its arXiv version). \begin{theorem}[FA-1f asymptotics] \label{th:FA1f} For FA-1f in $d=1$ dimension there exists $C>0$ such that for $q$ small enough \begin{align*}1/C\le q^{3}\trel\le{}&C,&1/C\le q^3\bbE_{\mu_q}[\tau_0]\le{}&C,&\bbP_\mu(\tau_0>t)\le{}& e^{-Cq^3t}.\end{align*} \end{theorem} \begin{remark}[{Arrhenius law}] \label{rem:Arrhenius} Note that if we rewrite Theorem~\ref{th:FA1f} in terms of the inverse temperature, using \eqref{eq:temperature}, the above scaling corresponds to an Arrhenius divergence, $\tau_0,T_{\rm rel}\approx \exp(c\beta)$, as for strong glass forming liquids (see Figure~\ref{fig:vetri:1}). 
\end{remark} Before embarking on the proof, let us explain the heuristics behind Theorem~\ref{th:FA1f}. For $q$ small empty sites are typically isolated (and therefore cannot be immediately removed). However, if $x\in\bbZ$ is empty, at rate $q$ we can empty $x-1$ or $x+1$. Suppose the first event occurs. At this point the constraint at $x$ is also satisfied and, in a time of order one, we will (with equal probability) either occupy $x-1$ or $x$. In the latter case, the net result is that we have ``moved'' the empty site from $x$ to $x-1$. So, we intuitively expect empty sites to behave like random walks moving at rate $q$ until they meet. When they meet, they typically coalesce. This explains the scaling $1/q^3$: it is the time required to overcome the typical distance $\ell=1/q$ between two consecutive empty sites (inverse rate times distance squared, that is $1/q \times 1/q^2$). \begin{proof}[Theorem~\ref{th:FA1f}] In order to show that $\trel\ge q^{-3}/C$, we recall \eqref{eq:def:Trel}. Define the test function $f(\omega)=\min\{k\ge 1:\omega_{k}\omega_{-k+1}=0\}$, that is, the distance from $1/2$ to the nearest empty site rounded up. As in the proof of Theorem~\ref{th:BP}, one can check that $\mu(f(\omega)=k+1)=(2q-q^2)(1-q)^{2k}$ for $k\ge 0$. This geometric random variable has $\var(f)=(1-q)^2/(2q-q^2)^2$. On the other hand, by \eqref{eq:dirichlet}, \begin{align*}\cD(f)={}&2\sum_{x\ge 1} \mu(c_x\var_x(f))=2q(1-q)\sum_{x\ge 1}\mu\left((1-\omega_{x+1})\prod_{y=-x+1}^{x-1}\omega_y\right)\\={}& 2q^2\sum_{x\ge 1}(1-q)^{2x}=\frac{q(1-q)^2}{1-q/2}.\end{align*} Hence, by \eqref{eq:def:Trel}, \[\trel\ge \frac{\var(f)}{\cD(f)}=\frac{1}{4q^3(1-q/2)}\ge \frac{1}{4q^{3}}.\] The inequality $\bbE_{\mu}[\tau_0]\ge 1/(Cq^3)$ is proved in a similar way, but using a more subtle variant of \eqref{eq:def:Trel} regarding hitting times. See \cite{Shapira20}*{Section~4.2} and \cite{Shapira20a}. The proof of $\trel\le C/q^3$ proceeds similarly to the implication from \ref{item:expo:6} to \ref{item:expo:7} in Theorem~\ref{th:exponential}. However, we use canonical paths reflecting the heuristic mechanism discussed above rather than the brutal legal paths provided by bootstrap percolation in Lemma~\ref{lem:legal:path}. The first step is proving a Poincar\'e inequality for the generalised East model, which will be discussed in Section~\ref{sec:general:KCM}. It implies that there exists a constant $C<\infty$ independent of $q$ such that for any local function $f$, \begin{equation} \label{eq:East:Poincare:renorm}\var(f)\le C\sum_{x\in\bbZ}\mu(\tilde c_x\var_{\Lambda_x}(f)),\end{equation} where $\Lambda_x=\{x\lceil 1/q\rceil,\dots,(x+1)\lceil1/q\rceil-1\}$ and $\tilde c_x=1-\prod_{y\in\Lambda_{x+1}} \omega_{y}$. This is precisely the unidimensional analogue of the renormalisation of Figure \ref{fig:exponential:renormalisation:2}. In view of \eqref{eq:East:Poincare:renorm}, we seek to bound $\mu(\tilde c_x\var_{\Lambda_x}(f))$ with terms $\mu(c_y\var_y(f))$ of the Dirichlet form from \eqref{eq:dirichlet}. To do so, for each $x\in\bbZ$, $\omega\in\Omega$ such that $\omega_{\Lambda_{x+1}}\neq \bone$ and $y\in\Lambda_x$ with $\omega_y=1$, we define a legal path $(\omega^{(i)})_{i=0}^l$ from $\omega$ to $\omega^y$ as follows. 
Set $\xi=\min\{z>y:\omega_z=0\}$, $l=2(\xi-y-1)+1$ and define $\omega^{(0)}=\omega$, $\omega^{(l)}=\omega^y$ and \begin{align} \label{eq:path:FA1f:1d} \omega^{(2i-1)}={}&\omega^{\xi-i},& \omega^{(2i)}={}&(\omega^{\xi-i})^{\xi-i-1} \end{align} for $i\in \{1,\dots,\xi-y-1\}$ ($\omega^{(0)}=\omega$ and $\omega^{(l)}=\omega^y$ by definition). In words, this is the legal path sending an empty interval of length oscillating between one and two sites from $\xi$ to $y$. Let $x^{(i)}$ be the site such that $\omega^{(i)}=(\omega^{(i-1)})^{x^{(i)}}$. Observe that for odd $i\in\{3,\dots,l\}$ we have $\omega_{x^{(i)}}^{(i)}=1=\omega^{(i)}_{x^{(i+1)}}$, so it is convenient to set $j_i=2\lceil i/2\rceil-1$ (that is, the odd number in $\{i,i-1\}$) for $i\in\{2,\dots,l\}$ and $j_1=0$. Thus, by the Cauchy--Schwarz inequality, any $\omega$ such that $\tilde c_x(\omega)=1$ satisfies \begin{align*} q\omega_y\left(f(\omega^y)-f(\omega)\right)^2&{}\le ql\sum_{i=1}^{l}c_{x^{(i)}}\left(\omega^{(i)}\right)\left(f\left(\omega^{(i)}\right)-f\left(\omega^{(i-1)}\right)\right)^2,\\ &{}=ql\sum_{i=1}^l\omega^{(j_i)}_{x^{(i)}}c_{x^{(i)}}\left(\omega^{(j_i)}\right)\left(f\left(\omega^{(j_i)}\right)-f\left(\left(\omega^{(j_i)}\right)^{x^{(i)}}\right)\right)^2.\end{align*} Next note that, given $y$, $\omega^{(j_i)}$ and whether $i=1$ or not, we can recover $\omega$ and that $\mu(\omega)/\mu(\omega^{(j_i)})\le (1-q)/q$ (equality holds except for $i=1$). Integrating the last display over $\omega$, we obtain \[\mu(\tilde c_x\var_y(f))\le 4\lceil 1/q\rceil\sum_{z\in\Lambda_x\cup\Lambda_{x+1}}\mu\left(\tilde c_x(\omega)c_{z}(\omega)\omega_z\left(f(\omega)-f(\omega^z)\right)^2\right),\] since $l\le 2\lceil 1/q\rceil$. Summing the last result over $y\in \Lambda_x$ and then $x\in\bbZ$, we obtain \[\sum_{x\in\bbZ}\mu(\tilde c_x\var_{\Lambda_x}(f))\le \sum_{x\in\bbZ}\mu\left(\tilde c_x\sum_{y\in\Lambda_x}\var_y(f)\right)\le 8\lceil1/q\rceil^3\cD(f),\] recalling \eqref{eq:dirichlet} and \eqref{eq:Poincare:unconstrained}. Plugging this into \eqref{eq:East:Poincare:renorm} and recalling \eqref{eq:def:Trel} concludes the proof. The proof that $\bbP_\mu(\tau_0>t)\le e^{-Cq^3t}$ follows similar but more delicate lines (see \cite{Shapira20}*{Section~4.1} for the details). Finally the upper bound on $\bbE_{\mu}[\tau_0]$ follows directly from the last inequality. \end{proof} We note that higher dimensional analogues of Theorem~\ref{th:FA1f} are known and the scaling is $\log(1/q)/q^{2}$ in $d=2$ and $q^{-2}$ in $d\ge 3$ \cites{Cancrini08,Shapira20} with the exception of the following conjecture, whose lower bound remains open. \begin{conjecture}[{Relaxation time in two dimensions}] \label{conj:FA1f} For FA-1f in $d=2$ dimensions, there exists $C>0$ such that for $q$ small enough $1/C\le q^2\trel/\log(1/q)\le C$. \end{conjecture} \section{East} Recall that the East model in one dimension corresponds to the update family $\cU=\{\{1\}\}$. It is, in a sense, the simplest non-trivial KCM and lies at the base of the theory. Indeed, the very first rigorous results on KCM were proved for the East model around the turn of the century \cites{Chung01,Aldous02}. Furthermore, the East model appears also in other contexts including the study of random walks on the group of upper triangular matrices \cites{Peres13,Ganguly19}. Once again, by Theorem~\ref{th:BP}, we have $\qc=0$, so we are interested in asymptotics as $q\to 0$. We address upper and lower bounds separately, showcasing two important techniques. 
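Before doing so, we record a minimal simulation sketch of the East dynamics, which some readers may find helpful for building intuition. It is only a toy illustration of the graphical representation of Section~\ref{subsec:markov} on a finite interval with an empty boundary site to its right, written in Python with arbitrary choices of $q$ and of the interval length; it is not used anywhere in the arguments.
\begin{verbatim}
# Toy continuous-time simulation of the East model on {0,...,L-1} with an
# empty boundary site at L (illustration only; parameters are arbitrary).
# Convention: omega[x] = 0 means site x is empty; a site may be resampled
# (empty with probability q) only when the site to its right is empty.
import random

def east_tau0(q, L, rng):
    omega = [1] * L                       # start fully occupied
    t = 0.0
    while omega[0] == 1:                  # until the leftmost site is emptied
        t += rng.expovariate(L)           # next ring among L rate-one clocks
        x = rng.randrange(L)              # the ringing site, chosen uniformly
        right_empty = omega[x + 1] == 0 if x + 1 < L else True
        if right_empty:                   # East constraint: right neighbour empty
            omega[x] = 0 if rng.random() < q else 1
    return t

rng = random.Random(0)
for q in (0.4, 0.3, 0.2):
    mean = sum(east_tau0(q, 8, rng) for _ in range(200)) / 200
    print(q, round(mean, 1))
\end{verbatim}
The empirical mean grows quickly as $q$ decreases; needless to say, such toy experiments probe only moderate values of $q$ and are no substitute for the bounds established below.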
Together, the two bounds give the following result of Aldous and Diaconis, and of Cancrini, Martinelli, Roberto and Toninelli \cites{Aldous02, Cancrini08}. \begin{theorem}[East asymptotics] \label{th:East} For the East KCM in $d=1$ dimension we have \begin{align*} \lim_{q\to0}\frac{\log \trel}{(\log(1/q))^2}&{}=\frac{1}{2\log 2}, &\lim_{q\to 0}\bbP_\mu\left(\left|\frac{2\log 2\cdot \log \tau_0}{(\log(1/q))^{2}}-1\right|<\varepsilon\right)&{}=1 \end{align*} for any $\varepsilon>0$. \end{theorem} We refer the reader to \cite{Chleboun14} for finer results including the scaling of $T_{\rm rel}^{\Lambda}$ as a function of $q$ and $|\Lambda|$ and for the equivalence, up to a length scale $|\Lambda|=O(1/q)$, of the relaxation and mixing time of the East model; to \cite{Faggionato13} for a survey dedicated to this model; to \cite{Chleboun16} for higher dimensions; to \cite{Couzinie22b} for a multicolour version. \begin{remark}[{Super-Arrhenius law}] \label{rem:superArrhenius} If we rewrite Theorem~\ref{th:East} in terms of temperature using \eqref{eq:temperature}, we get $\tau_0,\trel\approx \exp(\beta^2/(2\log 2))$. This super-Arrhenius divergence is reminiscent of the scaling for fragile super-cooled liquids (see Figure~\ref{fig:vetri:1}), which is an important reason for interest in this model among physicists. Also note that this scaling diverges much faster than the emptying time for the corresponding BP (recall Theorem~\ref{th:BP}) or FA-1f (recall Theorem~\ref{th:FA1f} and Remark~\ref{rem:Arrhenius}). \end{remark} \subsection{Lower bound: combinatorial bottleneck} \label{subsec:East:lower} The lower bound is proved via a \emph{combinatorial bottleneck}. In rough terms, the strategy is as follows. We consider the stationary KCM started at a typical configuration under $\mu_q$ with $q$ small. We identify a set of configurations around the origin which (deterministically) cannot be avoided if the origin is to be infected. For instance, this could be having an atypically large number of empty sites in the vicinity of the origin, but it will usually be more subtle. We then seek to evaluate the probability and the number (entropy) of these \emph{bottleneck configurations}. Finally, we use stationarity and a union bound over time to show that, if these configurations are unlikely and there are few of them, then one needs to wait a long time before observing any of them close to the origin. The hard part of such arguments is identifying the correct bottleneck. \subsubsection{Combinatorics for the East model} The key ingredient to understand the behaviour of the one-dimensional East model is the following combinatorial result of Chung, Diaconis and Graham \cite{Chung01}. \begin{proposition}[{Combinatorial bottleneck for East}] \label{prop:comb} Consider the East model on $\Lambda=\{\dots,-2,-1\}\subset \mathbb Z$ with boundary condition $\bzero_{\{0,1,\dots\}}$. For $n\ge 0$, let $V(n)$ be the set of all configurations that the process can reach from $\bone_\Lambda$ via a legal path (recall Definition~\ref{def:legal:path}) in which all configurations contain at most $n$ empty sites. For $k\in\{0,\dots,n\}$, let $V(n,k)=\{\omega\in V(n):\sum_{x\in\Lambda}(1-\omega_x)=k\}$ be the configurations in $V(n)$ with $k$ empty sites. Finally, let $\ell(n)$ be the largest distance of an empty site (in $\Lambda$) from $0$ in $V(n)$, that is \[ \ell(n)=\sup\left\{-y: y \in \Lambda, \exists \omega\in V(n),\omega_y=0\right\} \] with the convention $\sup\varnothing=0$.
For $n \geq 0$, \begin{align} \label{eq:combi:1} \ell(n)&{}=2^{n}-1,\\ \label{eq:combi:2} \left|V(n,n)\right|&{}\leq n!2^{\binom{n}{2}} . \end{align} \end{proposition} The first part, \eqref{eq:combi:1}, was already observed in \cites{Sollich03,Sollich99}. The second statement, \eqref{eq:combi:2}, is not very far from being tight. Indeed, \cite{Chung01} proved that $c^n\le |V(n)|/(2^{\binom{n}{2}}n!)\le C^n$ with $c\approx 0.67$ the largest root of $384x^3-336x^2+54x-1$ and $C=1/\log 4\approx 0.72$. Proving Proposition \ref{prop:comb} is an excellent exercise, which we invite the reader to do before moving on. We provide a full proof, as it is very instructive regarding the mechanism governing the relaxation of the East model (see Figure~\ref{fig:East_path} for an illustration). \begin{figure} \begin{subfigure}{0.25 \textwidth} \begin{tikzpicture}[line cap=round,line join=round,x=2.0cm,y=2.0cm, scale=0.45]
\draw(0,0) -- (3.5,0); \fill (0, 0) circle (2mm); \fill (0.5, 0) circle (2mm); \fill (1, 0) circle (2mm); \fill (1.5, 0) circle (2mm); \fill (2, 0) circle (2mm); \fill (2.5, 0) circle (2mm); \fill (3, 0) circle (2mm); \draw (3.4, -0.1) rectangle (3.6,0.1); \draw(4.5,0) -- (8,0); \fill (4.5, 0) circle (2mm); \fill (5, 0) circle (2mm); \fill (5.5, 0) circle (2mm); \fill (6, 0) circle (2mm); \fill (6.5, 0) circle (2mm); \fill (7, 0) circle (2mm); \draw (7.5, 0) circle (2mm); \draw (7.9, -0.1) rectangle (8.1,0.1); \draw(9,0) -- (12.5,0); \fill (9, 0) circle (2mm); \fill (9.5, 0) circle (2mm); \fill (10, 0) circle (2mm); \fill (10.5, 0) circle (2mm); \fill (11, 0) circle (2mm); \draw (11.5, 0) circle (2mm); \draw(12, 0) circle (2mm); \draw (12.4, -0.1) rectangle (12.6,0.1);
\draw(0,-1) -- (3.5,-1); \fill (0, -1) circle (2mm); \fill (0.5, -1) circle (2mm); \fill (1, -1) circle (2mm); \fill (1.5, -1) circle (2mm); \fill (2, -1) circle (2mm); \draw (2.5, -1) circle (2mm); \fill (3, -1) circle (2mm); \draw (3.4, -1.1) rectangle (3.6,-0.9); \draw(4.5,-1) -- (8,-1); \fill (4.5, -1) circle (2mm); \fill (5, -1) circle (2mm); \fill (5.5, -1) circle (2mm); \fill (6, -1) circle (2mm); \draw (6.5, -1) circle (2mm); \draw (7, -1) circle (2mm); \fill (7.5, -1) circle (2mm); \draw (7.9, -1.1) rectangle (8.1,-0.9); \draw(9,-1) -- (12.5,-1); \fill (9, -1) circle (2mm); \fill (9.5, -1) circle (2mm); \fill (10, -1) circle (2mm); \draw (10.5, -1) circle (2mm); \draw (11, -1) circle (2mm); \draw (11.5, -1) circle (2mm); \fill(12, -1) circle (2mm); \draw (12.4, -1.1) rectangle (12.6,-.9);
\draw(0,-2) -- (3.5,-2); \fill (0, -2) circle (2mm); \fill (0.5, -2) circle (2mm); \fill (1, -2) circle (2mm); \draw (1.5, -2) circle (2mm); \fill (2, -2) circle (2mm); \draw (2.5, -2) circle (2mm); \fill (3, -2) circle (2mm); \draw (3.4, -2.1) rectangle (3.6,-1.9); \draw(4.5,-2) -- (8,-2); \fill (4.5, -2) circle (2mm); \fill (5, -2) circle (2mm); \fill (5.5, -2) circle (2mm); \draw (6, -2) circle (2mm); \fill (6.5, -2) circle (2mm); \draw (7, -2) circle (2mm); \draw (7.5, -2) circle (2mm); \draw (7.9, -2.1) rectangle (8.1,-1.9); \draw(9,-2) -- (12.5,-2); \fill (9, -2) circle (2mm); \fill (9.5, -2) circle (2mm); \fill (10, -2) circle (2mm); \draw (10.5, -2) circle (2mm); \fill (11, -2) circle (2mm); \fill(11.5, -2) circle (2mm); \draw(12, -2) circle (2mm); \draw (12.4, -2.1) rectangle (12.6,-1.9);
\draw(0,-3) -- (3.5,-3); \fill (0, -3) circle (2mm); \fill (0.5, -3) circle (2mm); \fill (1, -3) circle (2mm); \draw (1.5, -3) circle (2mm); \fill (2, -3) circle (2mm); \fill (2.5, -3) circle (2mm); \fill (3, -3) circle (2mm); \draw (3.4, -3.1) rectangle (3.6,-2.9); \draw(4.5,-3) -- (8,-3); \fill (4.5, -3) circle (2mm); \fill (5, -3) circle (2mm); \draw (5.5, -3) circle (2mm); \draw(6, -3) circle (2mm); \fill (6.5, -3) circle (2mm); \fill (7, -3) circle (2mm); \fill (7.5, -3) circle (2mm); \draw (7.9, -3.1) rectangle (8.1,-2.9); \draw(9,-3) -- (12.5,-3); \fill (9, -3) circle (2mm); \draw (9.5, -3) circle (2mm); \draw (10, -3) circle (2mm); \draw (10.5, -3) circle (2mm); \fill (11, -3) circle (2mm); \fill(11.5, -3) circle (2mm); \fill(12, -3) circle (2mm); \draw (12.4, -3.1) rectangle (12.6,-2.9);
\draw(0,-4) -- (3.5,-4); \fill (0, -4) circle (2mm); \draw (0.5, -4) circle (2mm); \fill (1, -4) circle (2mm); \draw (1.5, -4) circle (2mm); \fill (2, -4) circle (2mm); \fill (2.5, -4) circle (2mm); \fill (3, -4) circle (2mm); \draw (3.4, -4.1) rectangle (3.6,-3.9); \draw(4.5,-4) -- (8,-4); \draw (4.5, -4) circle (2mm); \draw (5, -4) circle (2mm); \fill (5.5, -4) circle (2mm); \draw (6, -4) circle (2mm); \fill (6.5, -4) circle (2mm); \fill (7, -4) circle (2mm); \fill (7.5, -4) circle (2mm); \draw (7.9, -4.1) rectangle (8.1,-3.9); \end{tikzpicture} \end{subfigure} \caption{\label{fig:East_path} A legal path on $\{\dots,-2,-1\}$ with at most $3$ simultaneous empty sites starting from a completely occupied configuration and creating a vacancy at $-(2^3-1)$. Successive steps of the path should be read from left to right and top to bottom. The empty square at site $0$ stands for the empty boundary condition.}\end{figure} \begin{proof}[Proposition~\ref{prop:comb}] We prove \eqref{eq:combi:1} by induction on $n$. The statement is trivial for $n=0$. Fix $n\ge 1$ and assume that for all $m< n$ we have $\ell(m)= 2^m-1$. Given $\omega\in V(n)\setminus\{\bone_\Lambda\}$, let $k\le n$, $x_k<\dots<x_1<x_0=0$ and $X=\{x_1,\dots,x_k\}$ be such that $\omega=\bzero_X\cdot\bone_{\Lambda\setminus X}$. \begin{claim} There exists $i\in\{1,\dots,k\}$ such that $x_{i-1}-x_{i}\le 2^{n-k}$. \end{claim} \begin{proof} Assume the contrary. It is important to recall that the inverse of a legal path is legal, so there exists a legal path from $\omega$ to $\bone$ via configurations with at most $n$ empty sites. If $k=n$, this immediately gives that there exists $i\in\{1,\dots,k\}$ with $x_i=x_{i-1}-1$, since otherwise no legal move can remove empty sites and there are already $n$ of them. Assume $k<n$ and let $\omega^{(j)}$ be a legal path from $\omega$. We prove by a further induction on $j\ge 0$ that $\omega^{(j)}_X=\bzero_X$. The statement is trivial for $j=0$. If it is true for all $t<j$, for some $j\ge1$, then we can decompose the dynamics into the intervals $\{x_{i}+1,\dots,x_{i-1}-1\}$ for $i\in\{1,\dots,k\}$. Each such interval starts with $\bone$ initial condition and has $\bzero$ boundary condition. Thus, by the first induction hypothesis for $m=n-k$, we necessarily have $\omega_{x_i+1}^{(j-1)}=1$. Recalling that $\cU=\{\{1\}\}$ and \eqref{eq:def:cx}, this implies that $\omega_{x_i}^{(j)}=\omega_{x_i}^{(j-1)}=0$, completing the second induction. Since there is a legal path from $\omega$ to $\bone$, but the empty sites of $\omega$ cannot be removed, we obtain the desired contradiction proving the claim. \end{proof} Returning to the first induction, let $i\in\{1,\dots,k\}$ be such that $x_{i-1}-x_{i}\le 2^{n-k}$. By the induction hypothesis for $m=n-k$, we can find a legal path $\gamma$ in $\Lambda_i=\{x_i+1,\dots,x_{i-1}-1\}$ with boundary condition $\omega_{\Lambda\setminus\Lambda_i}\cdot\bzero_{\bbZ\setminus\Lambda}$ from $\omega_{\Lambda_i}=\bone_{\Lambda_i}$ to a configuration $\omega'\in\Omega_{\Lambda_i}$ with $\omega'_{x_i+1}=0$, such that $\gamma$ features at most $n-k$ empty sites in $\Lambda_i$ simultaneously.
To see this, consider a legal path placing an empty site at $-\ell(n-k)\le x_i+1-x_{i-1}$, truncate it at the first step when an empty site is placed at $x_i+1-x_{i-1}$, and shift this path by $x_{i-1}$. We can now form the path from $\omega$ to $\omega^{x_i}$ by performing $\gamma$, then occupying $x_i$ and then performing the inverse of $\gamma$ (which is still legal, because only the boundary condition at $x_{i-1}$ is used due to the orientation of the East update family). By construction this path never creates more than $k+(n-k)=n$ empty sites in $\Lambda$, so $\omega^{x_i}\in V(n,k-1)$. Let $\omega^{x_i}=\bzero_{X'}\cdot\bone_{\Lambda\setminus X'}$ with $X'=\{x'_1,\dots,x'_{k-1}\}$ and $0>x'_1>\dots>x'_{k-1}$. Then \[-x_k=\sum_{j=1}^k(x_{j-1}-x_j)\le 2^{n-k}+\sum_{j=1}^{k-1}(x'_{j-1}-x_j')=2^{n-k}-x'_{k-1}.\] Iterating this inequality, we obtain \[-x_k\le \sum_{j=n-k}^{n-1}2^j\le \sum_{j=0}^{n-1}2^j=2^{n}-1,\] so $\ell(n)\le 2^n-1$ as desired. To see that this is an equality, it suffices to follow the equalities above, which naturally leads to the path depicted in Figure~\ref{fig:East_path}. Finally, \eqref{eq:combi:2} also follows easily from the above. Namely, each configuration $\omega\in V(n,k)$ can be encoded by the index $i\in\{1,\dots,k\}$, the distance $x_{i-1}-x_i\in\{1,\dots,2^{n-k}\}$ and the configuration $\omega^{x_i}\in V(n,k-1)$. Iterating this encoding gives \[|V(n,n)|\le \prod_{k=1}^{n}(k2^{n-k})=n!2^{\binom n2},\] which concludes the proof. \end{proof} \subsubsection{From the combinatorial result to the emptying time} \label{subsubsec:bottleneck:deduction} We now deduce the lower bound of Theorem~\ref{th:East} from Proposition~\ref{prop:comb}. This was done in somewhat different ways in \cites{Aldous02,Cancrini10}, but we instead present a proof along the lines of \cite{Hartarsky22univlower}, which is better adapted to generalisations. Let $n=\lfloor (\log(1/q)-\log\log(1/q))/\log2\rfloor$ and $\Lambda_n=\{0,\dots,2^n-1\}$. In view of Proposition~\ref{prop:comb}, we identify configurations in $V(n)$ and $V(n,n)$ with their restriction to $\Lambda_n$. Let $\cA=\{\omega\in\Omega:\omega_{\Lambda_n}=\bone_{\Lambda_n}\}$. By \eqref{eq:combi:1}, if $\cA$ occurs at time $0$, then there exists $t\le \tau_0$ such that $\omega_{\Lambda_n}(t)\not\in V(n)$. But exiting $V(n)\times\Omega_{\bbZ\setminus\Lambda_n}$ requires visiting $V(n,n)\times\Omega_{\bbZ\setminus\Lambda_n}$. Let $T=(n^2q)^{-n/2}$ and let $N$ denote the number of updates (legal or illegal) at sites $x\in\Lambda_n$ up to time $T$. Then $N$ has the Poisson law with parameter $2^nT$. For $i\in\{1,\dots,N\}$, let $\theta_i$ denote the times of these updates. Observe that, by stationarity, $\omega(\theta_{{i}})$ is distributed according to $\mu$ (see \cite{Hartarsky22univlower}*{Claim~3.11} for a formal proof). Putting this together, we get \begin{align} \nonumber\bbP_\mu(\tau_0\le T)&{{}\le 1-\mu(\cA)+\bbP_\mu\left(\bigcup_{i=1}^N\left\{\omega_{\Lambda_n}(\theta_i)\in V(n,n)\right\}\right)}\\ &{}\le 1-\mu(\cA)+\bbP(N\ge 2\cdot 2^nT)+2^{n+1}T\mu_{\Lambda_n}(V(n,n)),\label{eq:East:union:bound} \end{align} that is, if $\tau_0\le T$, then we start outside $\cA$, or there are many updates, or at some update the configuration is in $V(n,n)$. It remains to bound \eqref{eq:East:union:bound}. Firstly, for $q\to0$ we have \[\mu(\cA)=(1-q)^{2^n}\ge (1-q)^{1/(q\log(1/q))}\to 1.\] Furthermore, by the Bienaym\'e-Chebyshev inequality, $\bbP(N\ge 2\cdot 2^nT)\to 0$, as $2^nT\to \infty$, which is the case when $q\to 0$.
Finally, by \eqref{eq:combi:2}, \begin{align*} 2^{n+1}T\mu_{\Lambda_n}(V(n,n))&{}= 2^{n+1}T|V(n,n)|q^n(1-q)^{2^n-1-n}\le 2^{n+1}Tn!2^{\binom n2}q^n\\ &{}\le 2\cdot 2^n\left(n^2q\right)^{-n/2} en(n/e)^n2^{n^2/2}q^n= 2en\left(2/e\right)^{n}\left(q2^n\right)^{n/2}\to 0, \end{align*} where we used that $q2^n\le 1/\log(1/q)\le 1$ for small $q$ and that $2/e<1$. Inserting these bounds into \eqref{eq:East:union:bound}, we obtain that $\bbP_\mu(\tau_0>T)\to 1$ as $q\to 0$. This concludes the proof for the emptying time, since \[\frac{\log T}{(\log(1/q))^2}\to\frac{1}{2\log 2}.\] The analogous lower bound for $\trel$ follows directly from \eqref{eq:F0F1}. \subsection{Upper bound: the bisection technique} \label{subsec:East:upper} The upper bound of Theorem~\ref{th:East} is our first encounter with the bisection technique. It was introduced in \cite{Cancrini08}, drawing inspiration from \cite{Martinelli99}*{Proposition 3.5}. It is not only very useful for the study of KCM, but has also been applied in other settings \cites{Caputo12,Bhatnagar07}. \subsubsection{Two-block dynamics} The reader may have noticed that up to now we have not really proved any Poincar\'e inequality. We have only been reducing one inequality to another one we already know via renormalisation and canonical paths. The next lemma is, in a sense, the only Poincar\'e inequality we prove from scratch in this monograph. Morally, it deals with the East model on only two sites with empty boundary condition. This being a Markov process with only 4 states, which even happens to be a birth-death chain, one could compute the spectrum of its generator explicitly by hand. However, we state the result directly for the generalised version of the East model, as this is the version that is useful in renormalisation arguments. While the result is originally from \cite{Cancrini08}*{Proposition~4.5}, we instead give the proof from \cite{Hartarsky22phd}*{Lemma~1.3.8}, which is more probabilistic. \begin{lemma}[Two-block dynamics] \label{lem:two-block} Let $(\bbX,\pi)$ be the product of two finite probability spaces $(\bbX_1,\pi_1)$ and $(\bbX_2,\pi_2)$. Let $\var_{1}(f)=\var_\pi(f(X_1,X_2)|X_2)$ and similarly for $\var_{2}(f)$. Fix a nonempty event $\cX\subset \bbX_1$. Then for any $f:\bbX\to\bbR$ \[\var_{\pi}(f)\le \frac{\bbE_\pi(\var_{1}(f)+\1_{\cX}\var_{2}(f))}{1-\sqrt{1-\pi(\cX)}}\le \frac{2}{\pi(\cX)}\bbE_\pi\left(\var_{1}(f)+\1_{\cX}\var_{2}(f)\right).\] \end{lemma} A way to interpret this is as a Poincar\'e inequality (i.e.\ bound on the relaxation time) for a continuous time Markov chain which updates $X_1$ at rate $1$ and updates $X_2$ at rate $1$, provided that $\cX$ occurs. In fact, the relaxation time bound $1/(1-\sqrt{1-\pi(\cX)})$ is optimal. \begin{proof} Couple two copies of the chain described above, by attempting the same updates in both. For this, use a graphical representation as in Section~\ref{subsec:markov} attempting updates at $X_1$ and $X_2$ with rate $1$, but deeming those in $X_2$ illegal if $\cX$ does not occur. The two chains clearly coalesce as soon as we update $X_1$ so that $\cX$ occurs and then immediately update $X_2$. Consider (legal or illegal) updates on $X_2$ preceded by an update at $X_1$. Their number up to time $T$ is $\lfloor N/2\rfloor$ with $N$ a Poisson random variable with mean $T$. Each one succeeds in coupling the chains independently with probability $\pi(\cX)$. It is elementary to check that $\bbE(\lambda^N)=e^{-T(1-\lambda)}$ for any $\lambda\in(0,\infty)$.
Thus, the probability that the two chains are not equal at time $T$ is at most \[\bbE\left[(1-\pi(\cX))^{\lfloor N/2\rfloor}\right]\le \frac{1}{(1-\pi(\cX))\exp(T(1-\sqrt{1-\pi(\cX)}))}.\] Classical results on Markov chains \cite{Levin09}*{Proposition~4.7, Corollary~12.6, Remark~13.13}\footnote{For continuous time Markov chains the spectral radius in \cite{Levin09}*{Corollary~12.6} is replaced by $e^{-1/\trel}$.} then give that $\trel\le 1/(1-\sqrt{1-\pi(\cX)})$, as desired. Finally, for the second inequality we use the Bernoulli inequality: $x\le 2(1-\sqrt{1-x})$ for any $x\in[0,1]$. \end{proof} If we apply Lemma \ref{lem:two-block} to $\bbX_1=\bbX_2=\{0,1\}$, $\pi_1=\pi_2=\mu_q$ and $\cX=\{0\}$, we obtain that the relaxation time of the East KCM with boundary condition $\bzero_{\bbZ\setminus\{0,1\}}$ is at most $1/(1-\sqrt{1-q})$. When $q\to0$ this is approximately $2/q$. However, the great advantage of the first bound in Lemma~\ref{lem:two-block} is that its prefactor tends to $1$ as $\pi(\cX)\to 1$. Thus, we can hope to apply this result to progressively larger volumes and more likely events $\cX$ and obtain a relaxation time bound uniform in the volume. This is the main idea behind the upper bound in Theorem~\ref{th:East}, which we discuss next. \subsubsection{Bisection technique} \label{subsubsec:bisection:1d} Recall that by Proposition~\ref{prop:infinite:to:finite}, in order to prove the upper bound of Theorem~\ref{th:East}, it suffices to bound the finite volume relaxation times $\trel^\Lambda$ uniformly as the volume diverges. The bisection technique consists in an iterative application of Lemma~\ref{lem:two-block}. Rather than presenting the somewhat technical and artificial-looking proof of \cite{Cancrini08}, let us take a more instructive approach to see how the proof is conceived. \paragraph{Basic idea} Let $\Lambda_k=\{1,\dots,2^k\}$ and $\Lambda'_k=\Lambda_{k+1}\setminus\Lambda_k$ for any $k\ge 0$. Clearly, $\trel^{\Lambda_0}=1$. We further seek to relate $\trel^{\Lambda_k}$ and $\trel^{\Lambda_{k+1}}$. Fix $k\ge 0$ and apply Lemma~\ref{lem:two-block} with $\bbX_1=\Omega_{\Lambda'_k}$, $\bbX_2=\Omega_{\Lambda_{k}}$, $\pi_1=\mu_{\Lambda'_k}$, $\pi_2=\mu_{\Lambda_k}$ and $\cX=\{\omega_{2^k+1}=0\}\subset\bbX_1$. This gives \begin{equation} \label{eq:East:bisection:1}\var(f)\le \frac{\mu(\var_{\Lambda'_k}(f)+\1_\cX\var_{\Lambda_k}(f))}{1-\sqrt{\varepsilon_k}}\end{equation} for any $f:\Omega_{\Lambda_{k+1}}\to\bbR$, where $\varepsilon_k=1-\pi_1(\cX)=1-q$. For the first term above, \eqref{eq:poincare} and translation invariance directly give \begin{equation} \label{eq:East:bisection:2}\var_{\Lambda'_k}(f)\le \trel^{\Lambda_k}\cD_{\bzero_{\bbZ\setminus\Lambda'_k}}(f).\end{equation} Yet, the fact that the East update family $\cU=\{\{1\}\}$ only looks to the right gives that for any $x\in\Lambda'_k$ we have $c_x^{\bzero_{\bbZ\setminus\Lambda'_k}}=c_x^{\bzero_{\bbZ\setminus\Lambda_{k+1}}}$, so \begin{equation} \label{eq:East:bisection:3}\cD_{\bzero_{\bbZ\setminus\Lambda'_k}}(f)=\sum_{x\in\Lambda'_k}\mu\left(c_x^{\bzero_{\bbZ\setminus\Lambda_{k+1}}}\var_x(f)\right)\end{equation} by \eqref{eq:dirichlet}. For the second term in \eqref{eq:East:bisection:1}, we similarly have \begin{equation} \label{eq:East:bisection:4} \1_\cX\cD_{\bzero_{\bbZ\setminus\Lambda_k}}(f)\le \sum_{x\in\Lambda_k}\mu\left(c_x^{\bzero_{\bbZ\setminus\Lambda_{k+1}}}(f)\right),\end{equation} since $\cX$ guarantees precisely the presence of an empty site as in the boundary condition $\bzero_{\bbZ\setminus\Lambda_k}$. 
Combining \eqref{eq:East:bisection:1}-\eqref{eq:East:bisection:4} and \eqref{eq:def:Trel}, we obtain \begin{equation} \label{eq:East:bisection:recurrence}\trel^{\Lambda_{k+1}}\le \frac{\trel^{\Lambda_k}}{1-\sqrt{\varepsilon_k}}. \end{equation} Iterating the above relation, and recalling Proposition~\ref{prop:infinite:to:finite}, we get \begin{equation} \label{eq:East:product}\trel\le \frac2q\prod_{k=0}^\infty\frac{1}{1-\sqrt{\varepsilon_k}}.\end{equation} \paragraph{Spread the boundary condition} Unfortunately, this first attempt fails, because $\varepsilon_k=1-q$ does not decay with $k$, so the product in \eqref{eq:East:product} is infinite. In order to fix this problem, we should define the event $\cX$ differently. Namely, fix some $\delta>0$ small enough and let $\delta_k=\lfloor 2^{k(1-\delta)}\rfloor$. Then let $\cX=\{\omega_{2^k+\{1,\dots,\delta_k\}}\neq \bone\}$, that is, there is an empty site among the first $\delta_k$ sites to the East of $\Lambda_k$. With this choice $\varepsilon_k=(1-q)^{\delta_k}$ does decay sufficiently fast for the right hand side of \eqref{eq:East:product} to be finite. In fact, one can compute that the product is at most $q^{-C}(1/q)^{\log_2(1/q)/(2-2\delta)}$ for some constant $C>0$ (see \cite{Cancrini08}*{Section 6.1} for more details). Taking $\delta$ small, this is exactly the upper bound we want. However, with this choice of $\cX$, \eqref{eq:East:bisection:4} is no longer valid. To deal with this issue, on $\cX$ we can define the random variable \begin{equation} \label{eq:def:xi}\xi(\omega)=\left\{\max i\le \delta_k:\omega_{2^k+i}=0\right\}\ge 1\end{equation} indicating the position of the rightmost empty site in $\Lambda'_k$ at distance at most $\delta_k$ from the boundary of $\Lambda_k$. Then we can rewrite \begin{align} \label{eq:East:muLk+1}\mu_{\Lambda_{k+1}}\left(\1_\cX\var_{\Lambda_k}(f)\right)&{}=\sum_{i=1}^{\delta_k}\mu_{\Lambda_{k+1}}\left(\1_{\xi=i}\var_{\Lambda_k}(f)\right)\\ &{}=\sum_{i=1}^{\delta_k}\mu_{\Lambda_{k+1}}\left(\1_{\xi=i}\mu_{\{1,\dots,2^k+i-1\}}\left(\var_{\Lambda_k}(f)\right)\right),\nonumber\end{align} since $\1_{\xi=i}$ is independent of $\omega_{\{1,\dots,2^k+i-1\}}$. But, setting $V_i=\{1,\dots,2^k+i-1\}$, we have \begin{equation} \1_{\xi=i}\mu_{V_i}(\var_{\Lambda_k}(f))\le \1_{\xi=i}\var_{V_i}(f)\le \trel^{V_i}\1_{\xi=i}\cD_{\bzero_{\bbZ\setminus V_i}}(f), \label{eq:East:1xii} \end{equation} since $\xi=i$ guarantees that $\omega_{2^k+i}=0$, so the boundary condition is indeed empty. Note that in the first inequality above, we used the convexity of the variance that implies that for any volumes $A,B$ it holds \begin{equation}\label{eq:convexity:var} \mu_{A}(\var_{B}(f))\leq\var_{A\cup B}(f). \end{equation} Combining \eqref{eq:East:muLk+1} and \eqref{eq:East:1xii} together with the monotonicity property \eqref{eq:monotonicity}, we get \begin{align*}\mu_{\Lambda_{k+1}}\left(\1_\cX\var_{\Lambda_k}(f)\right)&{}\le {\trel^{V_{\delta_k}}\sum_{i=1}^{\delta_k}\mu_{\Lambda_{k+1}}\left(\1_{\xi=i}\cD_{\bzero_{\bbZ\setminus V_i}}(f)\right)}\\ &{}\le \trel^{V_{\delta_k}}\sum_{x\in V_{\delta_k}}\mu_{{\Lambda_{k+1}}}\left(c_x\var_x(f)\right),\end{align*} using $\sum_{i=1}^{\delta_k}\1_{\xi=i}\le 1$ and the fact that $\1_{\xi=i}\cdot c_x^{\bzero_{\bbZ\setminus V_i}}\le \1_{\omega_{2^{k}+i}=0}\cdot c_x^{\bzero_{\bbZ\setminus V_i}}\le c_x$ for any $i\in\{1,\dots,\delta_k\}$ and $x\in V_i$. 
Further combining this with \eqref{eq:East:bisection:1}-\eqref{eq:East:bisection:3}, we obtain \begin{equation} \label{eq:East:bisection:almost}\var(f)\le \frac{\trel^{V_{\delta_k}}}{1-\sqrt{\varepsilon_k}}\mu\left(\cD_{\Lambda_{k+1}}(f)+\sum_{x\in V_{\delta_k}\cap\Lambda'_k}\left(c_x^{\bzero_{\bbZ\setminus\Lambda_{k+1}}}\var_x(f)\right)\right).\end{equation} Recalling \eqref{eq:def:Trel}, we see that \eqref{eq:East:bisection:almost} is almost the result we seek to prove, \eqref{eq:East:bisection:recurrence}. \paragraph{Final adjustments} We are only left with mending two technical problems with the previous argument. Firstly, in the right hand side of \eqref{eq:East:bisection:almost}, the terms corresponding to $x\in V_{\delta_k}\cap\Lambda'_k$ appear twice (once in the Dirichlet form). In order to solve this, we consider many possible choices of the partition of $\Lambda_{k+1}$ into $\Lambda_k$ and $\Lambda'_k$, keeping the total volume $2^{k+1}$ and the overlap $\delta_k$ fixed. We then average \eqref{eq:East:bisection:almost} over these choices. This yields an additional factor $1+1/s_k$ in \eqref{eq:East:bisection:recurrence}, where $s_k=\lfloor2^{k\varepsilon/3}\rfloor$ is the number of choices we consider. Secondly, $V_{\delta_k}$ is slightly larger than $\Lambda_k$ and so is the corresponding relaxation time. This issue is solved by choosing the sizes of all $\Lambda_k$ growing as $2^k+2^{k(1-\varepsilon/3)}$ rather than $2^k$. Once these problems are solved, we get \eqref{eq:East:bisection:recurrence} (with the additional factor $1+1/s_k$) and conclude the proof of the upper bound in Theorem~\ref{th:East}. The upper bound on $\tau_0$ follows from \eqref{eq:F0F1}. Interestingly, prior to \cite{Cancrini08}, the conjecture in the physics literature on the exponent was $T_{\rm rel} \sim q^{{\log_2(1/q)}}$, with an exponent off by a factor $2$. In order to get the correct scaling one has to take into account a subtle balance between the energetic and entropic contributions that, atypically, lie at the same level for the one-dimensional East model. Remarkably, the bisection technique is able to automatically take into account this subtle balance and provide a tight result correcting the conjectured exponent. \section{FA-2f} \label{sec:FA2f:1d} In this section, we briefly discuss the one-dimensional FA-2f update family $\cU=\{\{-1,1\}\}$. From Theorem~\ref{th:BP}, we have $\qc=1$, because two neighbouring occupied sites remain occupied at all times. Normally, our study of the model would end here, because the phase $q<\qc$ is rather complicated, but in the one-dimensional setting, we are able to say more. Specifically for FA-2f, one can check that the BP transformation (recall \eqref{eq:def:BP}) satisfies $[\omega]=\sB_\cU(\omega)$ for any $\omega\in\Omega$.That is, the BP process becomes stationary after one step. Taking Corollary~\ref{cor:legal:path} into account, the KCM dynamics can be decomposed into independent dynamics on intervals delimited by two occupied sites. On each such interval, we recover what is known as the hard-core Glauber dynamics with fugacity $\lambda=(1-q)/q$. That is because occupied sites cannot appear next to other occupied sites, while emptying is always possible (within an interval delimited by two occupied sites). There is a rich literature on Glauber dynamics of the hard-core model, particularly on general graphs with bounded degree, but also lattices of higher dimension (see e.g.\ \cites{Levin09,Sly10}). 
However, the one-dimensional lattice is somewhat degenerate from the standard viewpoint and does not appear to have been the subject of much study. One natural question one could ask is how the system behaves on its \emph{ergodic component}, that is, the set of configurations such that $[\omega]=\bzero$. One can then still study the $q\to 0$ regime, which can be viewed as quenching the model from inverse temperature $\beta=-\infty$ to $\beta$ large. In this setting, it is possible to prove that the relaxation time (this time with respect to the Gibbs measure of the hard-core model, see \cite{Georgii11} for background, rather than the plain $\mu_q$ product measure) is finite for any $q>0$. This can be obtained, for example using classical techniques such as block dynamics and strong spatial mixing \cites{Martinelli94,Martinelli94a,Martinelli94b}. However, as we will see in Section~\ref{sec:general:KCM}, this can also be achieved via the bisection technique, which applies more broadly. \section{General KCM} \label{sec:general:KCM} We have so far seen two ways in which it may be desirable to generalise KCM. Namely, allowing a state space larger than $\{0,1\}$ for each site, and working with the dynamics restricted to an ergodic component. Furthermore, for the purposes of studying higher-dimensional models, it is also useful to consider inhomogeneous KCM with site-dependent update families. We next define general KCM incorporating all these features, following \cite{Hartarsky21b}. Fix $R>0$, $q\in(0,1)$ and $\Lambda\Subset\bbZ$. For each $x\in\Lambda$, fix a probability space $(\Omega_x,\pi_x)$ with $|\Omega_x|<\infty$ and an event $\cI_x\subset\Omega_x$ with $\pi_x(\cI_x)\ge q$. Let $(\Omega,\pi)=(\prod_{x\in\Lambda}\Omega_x,\bigotimes_{x\in\Lambda}\pi_x)$ be the corresponding product space. A boundary condition is any configuration $\eta\in\{0,1\}^{\bbZ\setminus\Lambda}$. Further fix an update family $\cU_x$ for each $x\in\Lambda$ so that for any $U\in\cU_x$ and $u\in U$ we have $|u|\le R$. The constraint at $x\in\Lambda$ is defined by \[c_x^\eta(\omega)=\max_{U\in\cU_x}\prod_{u\in U,x+u\in\Lambda}\1_{\omega_{x+u}\in\cI_{x+u}}\prod_{u\in U,x+u\in\bbZ\setminus\Lambda}\left(1-\eta_{x+u}\right)\] for a configuration $\omega\in\Omega$ and a boundary condition $\eta\in\{0,1\}^{\bbZ\setminus\Lambda}$. Consider the Markov process such that for each site $x\in\Lambda$ such that $c_x=1$, the state of site $x$ is updated to an independent random variable with law $\pi_x$. That is, the process with generator \[\cL(f)=\sum_{x\in\Lambda}c_x^\eta\cdot(\pi_x(f)-f).\] Let $\cC$ be an arbitrarily chosen irreducible component of $\Omega$ for this dynamics and $\mu=\pi(\cdot|\cC)$. Then define $\trel$ via \eqref{eq:dirichlet} and \eqref{eq:def:Trel}. We refer to this Markov process restricted to $\cC$ as a \emph{general KCM with range $R$ and facilitating parameter $q$}. The following result was proved by Hartarsky \cite{Hartarsky21b} via the bisection technique adapted for going back and forth several times between the two blocks in the two-block Lemma~\ref{lem:two-block}. 
\begin{theorem}[General KCM upper bound] \label{th:general:1d} There exists $C>0$ depending only on $R$ such that for every $q\in(0,1)$, \[\trel\le (2/q)^{C\log\min(|\Lambda|,2/q)}.\] \end{theorem} In words, Theorem~\ref{th:general:1d} states that, for any one-dimensional general KCM with uniformly bounded update rule range and probability of the facilitating state uniformly bounded away from 0 has a finite relaxation time scaling at most like the one of the East model (recall Theorem~\ref{th:East}). Note that the minimum reflects the fact that the product in \eqref{eq:East:product} approaches its limiting value for scales $k\approx\log(1/q)$. Theorem~\ref{th:general:1d} can also be extended to infinite volume along the lines of Proposition~\ref{prop:infinite:to:finite}, but one needs to be careful in defining the irreducible components (see \cite{Hartarsky21b}*{Observation 3}). Let us discuss a few useful applications of Theorem~\ref{th:general:1d}. Firstly, FA-2f on its ergodic component is covered, just like any 1-dimensional (homogeneous binary) KCM. More importantly, we have the following bounds for the generalised FA-1f and East KCM. They were both derived in \cite{Martinelli19a}*{Proposition~3.4}, using the methods discussed in Sections~\ref{sec:FA1f} and~\ref{subsec:East:upper}, the second one also following from Theorem~\ref{th:general:1d}. \begin{proposition}[Generalised FA-1f and East upper bound] \label{prop:FA1f:East:generalised} Let $\Lambda$ be a segment and $\eta\in\{0,1\}^{\bbZ\setminus\Lambda}$ with $\eta_x=0$ if $x>\max\Lambda$ and $\eta_x=1$ if $x<\min\Lambda$. Consider a general KCM on $\Lambda$ and assume it to be homogeneous with $\cU_x=\cU$ for all $x\in\Lambda$, with range 1, facilitating parameter $q\in(0,1)$ and boundary condition $\eta$. Then for some absolute $C>0$ we have \[\trel\le \begin{cases} (2/q)^{C}&\cU=\{\{-1\},\{1\}\},\\ (2/q)^{C\log\min(|\Lambda|,2/q)}&\cU=\{\{1\}\}. \end{cases}\] \end{proposition} Let us note that one can also prove a polynomial bound on the relaxation time of (generalised) FA-1f on a segment with $\bone$ boundary condition on its ergodic component (that is, all configurations except $\bone$), see \cite{Blondel13}. \section{Conclusion} \label{subsec:tools:1d} Let us review the state of our toolbox after the developments of this chapter (recall Section~\ref{subsec:tools:BP}). \noindent\textbf{Test functions.} The lower bound of Theorem~\ref{th:FA1f} was proved by guessing a non-trivial test function. This technique will not take us any further in the sequel for more sophisticated models, because guessing a suitable function and being able to compute the variance and Dirichlet form are quite implausible. \noindent\textbf{Canonical paths.} The upper bound of Theorem~\ref{th:FA1f} relied on more subtle canonical paths than the ones provided by BP in Section~\ref{sec:legal:paths}. They reflect the heuristic view of the dynamics of FA-1f. Once we have a good intuition about the dominant relaxation mechanism of a KCM, we could, in principle try to implement it in a canonical path. Unfortunately, this approach quickly goes out of hand as the models get more complicated, since explicitly defining and analysing the paths involved becomes very laborious and quite tricky. We therefore avoid further recourse to canonical paths. \noindent\textbf{Renormalisation.} In the proof of Theorem~\ref{th:FA1f}, we saw the details of the 1-dimensional renormalisation we already saw in Section~\ref{sec:exponential:decay}. 
This technique for proving upper bounds will be developed much further in the next chapter. Now that we have some simple KCM to build on, it will become our bread and butter tool for proving upper bounds. \noindent\textbf{Combinatorial bottlenecks.} This method discussed in Section~\ref{subsec:East:lower} will be our method of choice for proving lower bounds on time scales in what follows. The content of Section~\ref{subsubsec:bottleneck:deduction} will require essentially no adaptation. The main difficulty in implementing this approach lies in identifying what needs to happen before the origin can be updated and proving that it is indeed necessary. Finding the correct bottleneck is usually guided by heuristics of the dominant relaxation mechanism (from upper bounds), estimating the probability of the bottleneck will usually be done using ideas from BP, while entropy tends not to pose problems. Thus, the main issue is proving rough analogues of \eqref{eq:combi:1} in more advanced settings. \noindent\textbf{Bisection.} The idea used in Section~\ref{subsec:East:upper} was to iterate the simple two-block dynamics of Lemma~\ref{lem:two-block}. Bisection is our primary technique for proving directly that a KCM has finite relaxation time in infinite volume. In the next chapter we will see how to do the same in higher dimensions. \noindent\textbf{General KCM.} Thanks to bisection, we were able to treat one-dimensional KCM in great generality. They are ready to use in renormalisation schemes. Although we will need to introduce some higher-dimensional models with general state space, one-dimensional general KCM will be sufficient for most of our purposes. \chapter{Out of equilibrium} \label{chap:out} \abstract{In this chapter, we study KCM with initial state not distributed according to the stationary measure. We start by presenting detailed results on the East model, illustrating the kind of results one would like: exponential convergence to equilibrium after a temperature quench, mixing time cutoff in finite volume, etc. We then treat KCM in full generality at the cost of weakening the results. We conclude with open problems, such as bringing the two aspects above together, and some additional out-of-equilibrium settings. This chapter is independent from Chapters \ref{chap:1d}-\ref{chap:universality}.\\} In the last three chapters, we discussed properties of the stationary KCM, that is, starting with initial distribution given by the invariant non-trivial product measure $\mu=\mu_q$. While this is the first setting to explore in order to understand these models, it is not the only one of interest. Indeed, both from the mathematics and the physics perspective, it is relevant to study KCM with other initial conditions, that is, out of equilibrium. An initial basic question is to determine under which conditions on $\cU$, $q$ and the initial configuration $\omega $, the law of the corresponding infinite volume KCM $(\eta(t))_{t\ge0}$ converges to the equilibrium measure $\mu$, as time goes to infinity. 
It is natural to expect that, in the ergodic regime, $q>\qc$, provided $\omega $ has ``enough empty sites'' (at the very least, one should have $[\omega ]=\bzero$ in view of Section~\ref{sec:legal:paths}) it should hold that for any local function $f$, \begin{equation} \lim_{t\to\infty}|\bbE_{\omega }(f(\eta(t)))-\mu(f)|=0\label{hope},\end{equation} Similarly, if the process is initialised according to a distribution $\nu$, one could expect that if $\nu$ has a sufficiently high density of empty sites and $q>\qc$, we should have \begin{equation}\lim_{t\to\infty}|\bbE_{\nu}(f(\eta(t)))-\mu(f)|=0\label{hope_nu}.\end{equation} A natural choice is $\nu=\mu_{q_0}$ with $q_0\neq q$. In the physics jargon this is known as a \emph{temperature quench}: abruptly changing the temperature from one value to another (recall \eqref{eq:temperature}).\footnote{ The extreme case, $q=1$, is well understood. Indeed, the process becomes a continuous-time version of BP, and essentially behaves like BP. } Unfortunately, robust tools to prove \eqref{hope} and \eqref{hope_nu} are not yet available. Indeed, with the notable exception of the East model (see Section~\ref{sec:East-out}), results are far from being satisfactory and are limited to a restrictive regime of high $q$ (see Section \ref{sec:high-temp}). The first and foremost reason for this is the fact that, even though the constraints \eqref{eq:def:cx} are monotone, KCM are not attractive (that is, the product partial order is not preserved by the semi-group of the process, see \cite{Liggett05}*{Sections II.2 and III.2} for background). This is due to the fact that the presence of more empty sites may make certain constraints satisfied and therefore allow certain empty sites to become occupied. Consequently, many of the powerful techniques (e.g.\ censoring or coupling arguments) which have been developed for the study of other Glauber dynamics (e.g.\ the contact process and stochastic Ising model, see \cites{Liggett05,Liggett99,Martinelli99}), fail for KCM. To make matters worse, the usual Holley--Stroock strategy \cite{Holley89} to prove convergence to equilibrium does not apply. Indeed, this approach uses the finiteness of the logarithmic Sobolev constant, which implies hypercontractivity of the semigroup. However, due to the presence of constraints, the logarithmic Sobolev constant is infinite for KCM (see Corollary \ref{cor:LS}) and the technique fails. Another natural question is how the mixing time (recall \eqref{eq:def:tmix}) scales with the volume $\Lambda^{(n)}=\{0,\dots,n\}^d$ , as $n\to\infty$ with $\cU$ and $q$ fixed. For $q>\qc$, we expect linear scaling: there exists $C>0$ such that, for any $\delta\in(0,1)$ and $n$ large enough depending on $\delta$, \begin{equation}\label{hope_tmix} t_{\rm{mix}}^{(n)}(\delta):=t_{\rm{mix}}^{\Lambda^{(n)}}(\delta)\leq Cn. \end{equation} While lower bounds linear in $n$ are easy to obtain (see Proposition \ref{prop:tmix:easy}), proving \eqref{hope_tmix} is more challenging and will be addressed in the next sections. Once linear upper and lower bounds are established, it is natural to seek a finer \emph{cutoff} result. Namely, we expect that there exists $v$ independent of $\delta$ such that \begin{align} \label{hope_cutoff} t_{\rm{mix}}^{(n)}(\delta)&{}= v n +\epsilon_{n,\delta},&\lim_{n\to\infty}\epsilon_{n,\delta}/n&{}=0.\end{align} Proving this cutoff is essentially equivalent to establishing a limit shape result like those known e.g.\ for the contact process (see \cite{Durrett80}). 
This question was raised by Kordzakhia and Lalley \cite{Kordzakhia06} for the North-East KCM and is readily supported by simulations (see \cite{Kordzakhia06}*{Figure~1}). \section{Oriented KCM} Before turning to the East model, let us gather a few useful facts which hold more generally for oriented models. For $\vec u\in\bbR^d\setminus \{0\}$, we consider the open half-space $\bbH_{\vec u}=\{\vec x\in\bbR^d:\<\vec x,\vec u\><0\}$, as in \eqref{eq:openhalf}. \begin{definition}[Oriented KCM] \label{def:oriented} Fix a dimension $d\ge 1$ and a $d$-dimensional update family $\cU$. We say that the $\cU$ is oriented (and that the corresponding KCM is \emph{oriented}) if there exists $\vec v\in \bbR^d\setminus\{0\}$ such that $\bbH_{\vec v}\supset\bigcup_{U\in\cU}U$. \end{definition} \begin{remark}[{Oriented examples}] Among the models introduced in Section \ref{sec:rules} only the East model and the North-East model are oriented (in every dimension). In both cases a possible choice is $\vec v=-\sum_{i=1}^d \vec e_i$. \end{remark} The propositions below state two very handy properties shared by all oriented KCMs: \begin{itemize} \item dependence propagates only in one direction among well-chosen hyperplanes; \item conditionally on a given site having been legally updated, its occupation variable has its equilibrium distribution. \end{itemize} \begin{proposition}[{Oriented dependence}] \label{prop:handy1} Fix an oriented update family $\mathcal U$, a site $\vec x\in\mathbb Z^d$ and let $\bbH_{\vec v}\supset\bigcup_{U\in\cU}U$. Recalling the graphical construction (see Section \ref{subsec:markov}), the restriction $(\eta_{\vec x+\bbH_v}(t))_{t\ge 0}$ of the KCM process is independent of the initial condition, clock rings and coin tosses in $\bbZ^d\setminus(\vec x+\bbH_{\vec v})$. \end{proposition} In the case of the one-dimensional East model, Proposition \ref{prop:handy1} states that a site $x\in\bbZ$ is only influenced by the restriction of the process to its right. \begin{corollary}[{Exact equilibrium}] \label{cor:handy2} Fix an oriented update family $\mathcal U$, an initial configuration $\omega$, and a site $\vec x\in\mathbb Z^d$. Let $\mathcal E_{\vec x}(t)$ for $t\geq 0$ be the event that there has already been a legal update (see Section \ref{subsec:markov}) at $\vec x$ by time $t$. Then \[\mathbb P_{\omega}(\eta_t(x)=1|\mathcal E_x(t))=q.\] \end{corollary} The proofs of Proposition~\ref{prop:handy1} and Corollary~\ref{cor:handy2} are left as an exercise to the reader. The following result was proved by Chleboun and Martinelli \cite{Chleboun13}, using an idea similar to Proposition~\ref{prop:handy1}. \begin{theorem}[Quasi-linear mixing for oriented KCM] \label{th:CM} Fix a dimension $d\ge 1$ and a $d$-dimensional oriented update family $\cU$. For any $q>\qc$ there exists $C>0$ such that \[t^{(n)}_{\mathrm{mix}}(\delta)\le C n\log n\] for any $\delta\in(0,1)$ and $n$ large enough depending on $\delta$. \end{theorem} Though Theorem~\ref{th:CM} is expected to be sub-optimal (see \eqref{hope_tmix} and Conjecture \ref{conj:tmix} below), we recall the main ideas of its simple and instructive proof. \begin{proof}[Sketch] Let $\Lambda=\{1,\dots,n\}^d$. We proceed iteratively by decomposing $\Lambda$ into its sections \[\left\{\bx\in\Lambda:\<\bx,\bv\>=\lambda\right\}\] for $\lambda\in\bbR$ and $\bv\in\bbR^d\setminus\{0\}$ as in the definition of an oriented update family. Choosing the direction $\bv\in\bbZ^d$, we obtain order $n$ such sections of cardinality at most order $n^{d-1}$. 
Observe that sites on the first hyperplane (corresponding to the smallest value of $\lambda$ above) are unconstrained thanks to the boundary condition, so a classical coupon collector argument (see e.g.\ \cite{Levin09}*{Section~5.3.3}) shows that its mixing time is of order at most $\log n$. The idea is to show that after a time of order $i\log n$, the distribution of the state of the first $i$ hyperplanes is very close to the stationary one. Then the inductive step is performed using the fact that $\trel<\infty$ whenever $q>\qct$, by Theorem~\ref{th:exponential} and $\qc=\qct$ for oriented update families by \cite{Hartarsky22sharpness}*{Corollary 1.8} (recall Conjecture~\ref{conj:qc:qct}, which is established in this case). Indeed, using $\trel<\infty$ along the lines of the proof of Theorem~\ref{th:exponential}, we show that it is likely that, at time of order $\log n$, each site of the last hyperplane has received a legal update, assuming that the initial marginal on the first $i-1$ sections is exactly (or close to) the equilibrium measure $\mu_q$. Of course, the fact that the equilibrium marginal is preserved over time is true thanks to the oriented nature of the constraint. \end{proof} \section{East model} \label{sec:East-out} \subsection{Results} In addition to orientation, the East model has other helpful features enabling the proof of rather detailed results outlined next. Let $\bbN=\{0,1,\dots\}$ and $\Delta_{\vec x}^d=\vec x-(\bbN^d\setminus\{0\})$. \begin{theorem} [Exponential convergence in all dimensions]\label{theo:East_out} Consider the East model on $\mathbb{Z}^d$ with $q\in(0,1]$. Fix $\vec x\in\mathbb Z^d$ and an initial configuration $\omega \in \Omega$ such that $\omega_{\vec x+\bbN^d}\neq\bone_{\vec x+\bbN^d}$. Then there exists $m=m(\omega,q)>0$ and for each local function $f$ with support contained in $\Delta_{\vec x}^d$ there exists $C=C(f,\omega,q)$ such that for $t>0$ sufficiently large it holds that \[ \left| \mathbb{E}_{\omega}(f(\eta(t))) - \mu(f) \right|\leq C e^{-m t}. \] \end{theorem} \begin{remark}\label{rem:East}Theorem \ref{theo:East_out}, which was proved in one dimension by Cancrini, Schonmann, Martinelli and Toninelli \cite{Cancrini10} and extended by Mar\^ech\'e \cite{Mareche19a} to any dimension, implies relaxation to equilibrium in the sense of \eqref{hope} with exponential decay for the minimal possible initial condition, namely as soon as $\omega$ has at least one empty site in $\vec x+\bbN^d $ for any $\vec x\in\mathbb Z^d$. Using Theorem \ref{theo:East_out}, one can also easily prove convergence to equilibrium in the sense of \eqref{hope_nu} with $\nu= \mu_{q_0}$ for any $q_0\in(0,1]$ (see \cite{Cancrini10}*{Theorem 4.3} and \cite{Mareche19a}*{Theorem 2.2 and Remark 2.1}). Furthermore, this convergence occurs exponentially fast, namely there exists $m=m(q_0,q)>0$ and, for $f$ local, there exists $C=C(f,q_0,q)$ such that \[ \left| \mathbb{E}_{\mu_{q_0}}(f(\eta(t))) - \mu(f) \right|\leq C e^{-m t}. \] In dimension one, the dominant term of the time scale of convergence to equilibrium when $q\downarrow 0$ has also been established. It holds that $\trel \leq m^{-1} \leq \trel \log (1/q)=e^{(\log (1/q))^2/(2\log 2)} \log (1/q)$. The lower bound follows by using an argument similar to the one in Section \ref{subsubsec:bottleneck:deduction}, the upper bound may be found in \cite{Faggionato13}*{Theorem 3.5}. We also refer to \cite{Blondel13b}*{Proposition~4.3} for non-local functions. 
\end{remark} \begin{theorem}[Linear mixing time in all dimensions]\label{th:East:precut} For the East model in any dimension \eqref{hope_tmix} holds. Furthermore, the result also holds when instead of the completely empty boundary condition we fix any ergodic boundary condition (see Definition \ref{def:ergoBC}). \end{theorem} We will provide a proof of Theorem \ref{theo:East_out} in Section \ref{sec:East-out1} for $d=1$ and refer the reader to \cite{Mareche19a} for $d>1$. Concerning the proof of Theorem \ref{th:East:precut}, which is based on similar ingredients, we refer the reader to \cite{Chleboun15}. Let us now turn to finer results which have been proved only in the one dimensional case. Consider the East model on $\mathbb Z$ with initial condition $\omega\in\Omega$ such that $\omega_{-\bbN}=\bone_{-\bbN\setminus\{0\}}\cdot\bzero_{\{0\}}$. Let $X_t=\min\{x\in\bbZ:\eta_x(t)=0\}$ be the position of the leftmost empty site of $\eta(t)$, which we call the \emph{front}. Notice that $X_t$ makes only nearest neighbour jumps. Indeed, an empty site can appear or disappear only when its left neighbour is empty. Since, once a site is legally updated its occupation variable is forever set to equilibrium (see Corollary~\ref{cor:handy2}), one could imagine that to the right of the front the distribution is $\mu$. If this were true, the front would move as a biased random walk: negative increments would occur at rate $q$ (the constraint is always satisfied on the site $X_{t}-1$), and positive increments at rate $q(1-q)$ (the occupation variable at the position of the front can be updated to occupied only if $X_t+1$ is empty). This would yield a speed $q^2$ to the left. While it is not true that the configuration seen from the front has distribution $\mu$, the following result due to Ganguly, Lubetzky and Martinelli \cite{Ganguly15}, confirms that (as for a biased random walk) the front moves at a negative speed with normal fluctuations and that its concentrated passage times imply cutoff (see \eqref{hope_cutoff}) with a window $\sqrt n$. \begin{theorem}[East in one dimension: CLT for the front] \label{theo:CLT} Let $d=1$ and $\omega\in\Omega$ be such that $\omega_{-\bbN}=\bone_{-\bbN\setminus\{0\}}\cdot\bzero_{\{0\}}$. For any $q\in(0,1)$, there exist constants $\sigma=\sigma(q)>0$, $v=v(q)<0$ and $C=C(q)>0$ such that, the front $X_t$ of the one-dimensional East process with initial condition $\omega$ satisfies \begin{align*} \bbP\left(\lim_{t\to\infty}\frac{X_t}{t}=v\right)&{}=1,& \left|\mathbb E(X_t)-vt\right|&{}\le C,\\ \lim_{t\to\infty}\frac{\Var(X_t)}{t}&{}=\sigma^2,& \lim_{t\to\infty}\frac{X_t-vt}{\sigma \sqrt t} &{}=Z \end{align*} in distribution for a standard normal random variable $Z$. \end{theorem} \begin{corollary}[Cutoff in one dimension] \label{th:East:cutoff} For any $q\in(0,1)$, there exists $C=C(q)>0$ such that the one-dimensional East model satisfies \eqref{hope_cutoff} with \[|\epsilon_{n,\delta}|\le C\phi^{-1}(1-\delta)\sqrt n,\] where $\phi$ is the cumulative distribution function of the standard normal law. \end{corollary} We refer the reader to \cite{Ganguly15} for the proof of Theorem \ref{theo:CLT} based on: \begin{itemize} \item ergodicity of the process seen from the front (see Theorem \ref{theo:ergoEast} below); \item showing that, after an initial burn-in time, the front increments behave like a stationary sequence of weakly dependent random variables, and applying an ingenious Stein's method argument of Bolthausen \cite{Bolthausen82} to derive the central limit theorem. 
\end{itemize} Corollary~\ref{th:East:cutoff} follows almost immediately. We further refer the reader to \cite{Couzinie22} for a first step in the direction of proving \eqref{hope_cutoff} for East in higher dimensions. \begin{theorem}[Ergodicity of the process seen from the front]\label{theo:ergoEast} Let $d=1$ and $\omega\in\Omega$ be such that $\omega_{-\bbN}=\bone_{-\bbN\setminus\{0\}}\cdot \bzero_{\{0\}}$. Let $(\eta(t))_{t\ge 0}$ be the one-dimensional East KCM with initial condition $\omega$. For all $t\ge 0$ and $x\in\bbZ$, let $\tilde\eta_x(t)=\eta_{x+X_t}(t)$, defining the process seen from the front. There exists a unique measure $\tilde\mu=\tilde\mu(q)$ such that $\tilde\eta(t)\to\tilde\mu$ in distribution as $t\to\infty$. Moreover, there exist $C=C(q)$ and $m=m(q)>0$, such that for any $x\in\bbZ$ \[\|\tilde\mu-\mu\|_{[x,\infty)}\le C e^{-mx},\] where, for $\Lambda\subset\mathbb Z$, $\|\tilde\mu-\mu\|_{\Lambda}$ denotes the total variation distance between the marginals of $\tilde\mu$ and $\mu$ on $\Lambda$. \end{theorem} \begin{remark}[{Front velocity}]\label{rem:velo} The velocity of the front $v$ in Theorem \ref{theo:CLT} can be expressed in terms of the invariant measure as $v=-q+(1-q)\tilde\mu(\omega_1=0)$. \end{remark} The main ingredients for the proof of Theorem~\ref{theo:ergoEast} due to Blondel \cite{Blondel13b}, in addition to the techniques needed for Theorem~\ref{theo:East_out}, are: \begin{itemize} \item coupling the processes seen from the front, starting with two different initial conditions, in order to prove that their laws converge to the same limit (see \cite{Blondel13b}*{Theorem 4.7}); \item using the fact that the front moves at most linearly (by the finite speed argument of Proposition~\ref{prop:tmix:easy}) and leaves empty sites behind to prove that far from the front the process is almost at equilibrium (see \cite{Blondel13b}*{Theorem 4.7}). Here the distinguished zero of Definition~\ref{def:distinguished} below plays a key role. \end{itemize} \begin{problem}[{Front measure}] Theorems \ref{theo:CLT}-\ref{theo:ergoEast} leave various questions unanswered. For instance, can one quantify the correlations between adjacent occupation variables in the invariant measure $\tilde\mu $? Can one determine the asymptotics of the velocity $v$ of Theorem~\ref{theo:CLT} and Remark~\ref{rem:velo} in the $q\to 0$ limit? \end{problem} \subsection{Exponential decay to equilibrium for East in one dimension} \label{sec:East-out1} Let us start by introducing a key notion, due to Aldous and Diaconis \cite{Aldous02}. \begin{definition}[Distinguished zero] \label{def:distinguished} Fix $x\in\mathbb Z$ and an initial configuration $\omega \in \Omega$ with $\omega_x=0$. Call the site $x$ \emph{distinguished}. We set $\xi_0=x$. The position $\xi_s\in \mathbb{Z}$ of the \emph{distinguished zero} at time $s >0$ is defined according to the following iterative rule. For all times $s$ strictly smaller than the first legal update $t_1$ at $x$, we set $\xi_s=x$, while $\xi_{t_1}=x+1$. Then we wait for the first legal update $t_{2}$ at $x+1$ after $t_1$, at which point we set $\xi_{t_{2}}=x+2$ and so on. \end{definition} Note that, almost surely, the trajectory $(\xi_s)_{s\ge 0}$ is right-continuous, piece-wise constant, increasing by 1 at each jump and not exploding in finite time. Also note that, by definition of the legal updates, necessarily the state $\eta_{\xi_s}(s)$ of the East process at the position of the distinguished zero is 0 for any $s\ge 0$, hence the name. 
\begin{proposition}[{Properties of the distinguished zero}] \label{prop:handyEast} Fix $t>0$. In the setting of Definition \ref{def:distinguished}, conditioning on the knowledge of the trajectory $(\xi_s)_{s \leq t}$ and denoting by $0 < t_1 < \dots< t_{n-1}<t$ the times of the distinguished zero's jumps (with $t_0=0$, $t_n=t$), the following holds for all $i\in\{0,\dots,n-1\}$ \begin{itemize}\item for $s\in[t_i,t_{i+1})$, $\xi_s=x+i$ and the restriction of the process to $\{x,\dots,x+i-1\}$ follows an East dynamics with zero boundary condition; \item at time $t_{i+1}$, the configuration at site $x+i$ is updated according to a Bernoulli$(1-q)$ variable. \end{itemize} \end{proposition} The proof, which is derived using Definition \ref{def:distinguished}, Proposition \ref{prop:handy1} and Corollary \ref{cor:handy2}, is left to the reader. \begin{remark}[{Conditional graphical construction}] Recall the graphical construction of Section \ref{subsec:markov}. Proposition \ref{prop:handyEast} implies that, given $(\xi_{t'})_{t'\le t}$, for any $(s, y)$ satisfying $s\leq t$ and $y\leq\xi_s$, the variable $\eta_y(s)$ is determined by the following ``conditional graphical construction''. In the time interval $[0,t_1)$, the occupation variables in the interval $\{y,\dots, x-1\}$ evolve as an East process with fixed empty site at $x$, using the clock rings $(t_{z,k})_{z\in\{y,\dots, x-1\}}$ and coin tosses $(s_{z,k})_{z\in\{y,\dots, x-1\}}$. In the time interval $[t_1,t_2)$ the same happens with empty boundary condition at $x+1$ instead of $x$ and so on. Note that, at $t_1$, the clock at $x$ rings and $\eta_x(t_1)$ takes the value of the corresponding coin toss. \label{rem:conditional} \end{remark} Furthermore, Proposition \ref{prop:handyEast} yields the following. \begin{corollary}[{Equilibrium zone}] \label{cor:handyEast} Fix $a\leq x\in\bbZ$ and two measures $\psi^-$ and $\psi^+$ on $\Omega_{a-1-\bbN}$ and $\Omega_{x+1+\bbN}$ respectively. Let \[ \psi= \begin{cases} \psi^-\otimes \mu_{\{a,\dots,x-1\}}\otimes\delta_{0}\otimes\psi^+ & \text{ if $a<x$}\\ \psi^-\otimes\delta_{0}\otimes\psi^+ & \text{if $a=x$} \end{cases} \] Let $(\eta(t))_{t\ge0}$ be the East process with initial distribution $\psi$. Fix $t\ge 0$. Then the conditional distribution of $\eta_{\{a,\dots,\xi_t-1\}}(t)$, given the distinguished zero trajectory $(\xi_s)_{s\le t}$, is the equilibrium one, $\mu_{\{a,\dots,\xi_t-1\}}$. \end{corollary} The following proof, due to Cancrini, Martinelli, Schonmann and Toninelli \cite{Cancrini10}, uses as key ingredients Proposition \ref{prop:handyEast}, Corollary \ref{cor:handyEast} and the $\mathrm{L}^2$ convergence to equilibrium guaranteed by Theorem~\ref{th:exponential}\ref{item:expo:7} and \eqref{eq:expo}. \begin{proof}[Theorem \ref{theo:East_out} for $d=1$] Assume for simplicity that $\mu(f)=0$ , $x=0$, $\omega_0=0$ and let the support of $f$ be contained in $\{a,\dots,a'\}\Subset-1-\bbN$. Let $b\leq 0$ be the position of the first empty site in $\omega$ to the right of $a'$. Make $b$ distinguished and denote by $\xi_s$ its position at time $s$. Given the trajectory $(\xi_s)_{s \leq t}$, let $0 < t_1 < t_2 < \dots< t_{n-1}<t$ be the times when the distinguished zero jumps, and set $t_0=0$, $t_n=t$. 
For $i\in\{0,\dots,n-1\}$, let \begin{align*} {\Xi} _i&{}=(\xi_s)_{s\in [t_i,t]}&V_i&{}=\{a,\dots,b+i-1\}.\end{align*} We claim that \begin{align} \nonumber\left|\bbE_\omega\left(f(\eta(t))\right)\right| & {}\leq \bbE_\omega \left( \left|\bbE_\omega(f(\eta(t)) | \Xi_0) \right| \right) \\ \nonumber &{}\leq(\min(q,1-q))^{-(b-a)}\bbE_\omega \left( \int \mathrm{d}\mu_{V_0}(\omega') \left|\bbE_{{\omega'}} (f(\eta '(t)) | \Xi _0) \right| \right)\\ \nonumber&{} \leq (\min(q,1-q))^{-(b-a)} \bbE_\omega \left(\sqrt{\int \mathrm{d}\mu_{V_0}(\omega')(\bbE_{\omega'}(f(\eta'(t)) |\Xi _0))^2}\right) \\ \label{eq:Eastbound}&{} \leq (\min(q,1-q))^{-(b-a)} \bbE_\omega \left( \sqrt{\var_{V_0} \left(g_t^{(0)}\right)}\right), \end{align} where we let $\eta '(t)$ be the configuration obtained following the conditional graphical construction of Remark \ref{rem:conditional} with $\eta'_{V_0}(0)=\omega'\in\Omega_{V_0}$ and \[g_t^{(0)}(\omega')=\bbE_{\omega'}(f(\eta '(t)) | \Xi _0 ).\] In order to obtain \eqref{eq:Eastbound}, we used Remark \ref{rem:conditional} together with the fact that for any $\omega\in\{0,1\}^{\mathbb Z}$ it holds $\mu_{V_0}(\omega|_{V_0})\geq (\min(q,1-q))^{-(b-a)}$ to obtain the second inequality, Cauchy-Schwarz for the third inequality and, for the last inequality, we used Corollary \ref{cor:handyEast} and the hypothesis that the support of $f$ is contained in $[a,b)$, which yield \begin{equation} \int \mathrm{d}\mu_{V_0 }(\omega') \bbE_{\omega'}(f(\eta '(t)) | (\xi_s)_{s \leq t})= \mu_{V_0 } (f) =0.\label{berluingalera} \end{equation} Let $P_s^{(i)}$ for $ s\in [t_i,t_{i+1})$ be the Markov semigroup associated to the East process in the interval $V_i$ with a fixed empty site at the right boundary $b+i$. Then, using Propositions \ref{prop:handy1} and \ref{prop:handyEast}, we get \begin{align} \label{fru} g_t^{(0)}({\omega'})& = \sum_{\sigma \in \{0,1\}^{V_0}}\sum_{\sigma' \in \{0,1\}} P_{t_1}^{(0)}(\omega',\sigma)\mu_b(\sigma')g^{(1)}_{t-t_1}(\sigma\cdot\sigma'), \end{align} where, for $s\geq 0$, we define $g_s^{(1)}:\Omega_{V_1}\to\mathbb R$ by \begin{equation}\label{g0}g_s^{(1)}(\omega'')=\bbE_{\omega''}(f(\eta ''(s)) | \Xi_1 ) \end{equation} where $\eta''(s)$ for $s\leq t$ denotes the configuration in the interval $\{a,\dots,\xi_t-1\}$ obtained starting at time $t_1$ from the configuration $\omega''\in\Omega_{V_1}$ and evolving according to the conditional graphical construction described in Remark \ref{rem:conditional} applied to the time interval $(t_1, t_1+s]$. Therefore, using \eqref{eq:expo} we get \begin{align}\nonumber \var_{V_0} \left(g_t^{(0)}\right) &{}\leq e^{-2 t_1 /T_{\rm{rel}}^{V_0}} \var_{V_0} \left( \sum_{\sigma' \in \{0,1\}} \mu_b(\sigma') \mu_{{V_1}}\left(g_{t-t_1}^{(1)}\right)\right)\\ &{}\leq e^{-2 t_1/T_{\rm{rel}}} \var_{V_1} \left(g_{t-t_1}^{(1)}\right) \label{fru2} \end{align} where, as usual, we denote by $T_{\rm{rel}}^{V_0}$ (resp.\ $T_{\rm{rel}}$) the relaxation time of the East model on $V_0$ with empty boundary condition (resp.\ $\mathbb Z$). In order to obtain \eqref{fru2} we use convexity of the variance and \eqref{eq:finite:volume:chain}. 
We can now proceed analogously to get \begin{equation} \var_{V_1} \left(g^{(1)}_{t-t_1}\right) \leq e^{-2(t_2-t_1)/{T_{\rm{rel}}}} \var_{V_2}\left(g^{(2)}_{t-t_1-t_2}\right) \end{equation} and, by induction, we get \[\var_{V_0} \left(g_t^{(0)}\right) \leq e^{-2 t/T_{\rm{rel}}} \var_{\{a,\dots,\xi_t-1\}}(f).\] Plugging this bound into \eqref{eq:Eastbound} and recalling that $T_{\rm{rel}}>0$ for any $q\in{(0,1]}$ for the East model, we finally get \[|\bbE_\omega(f(\eta(t)))| \leq c {e}^{-t/\trel} \bbE_\omega \left( \sqrt{\var_{\{a,\dots,\xi_t-1\}} (f)} \right) \leq ce^{-t/\trel} \|f\|_\infty \] for some $c=c(q,a,b)>0$. \end{proof} While the distinguished zero does not generalise to oriented KCM, a version of it was used in \cite{Cancrini10}*{Section 4} for an oriented KCM on a tree. \section{High vacancy density regime} \label{sec:high-temp} In this section we focus on results on KCM with arbitrary update family with $q$ close enough to 1. While the same results should hold whenever $q>\qc$, the current techniques do not allow proving this. \subsection{Results} The next two theorems, proved by Hartarsky and F.~Toninelli \cite{Hartarsky24KCM-CP}, establish \eqref{hope_nu} with an exponential convergence when $\nu$ is a product measure with a vacancy density $q_0>q_c$ and a linear upper bound for the mixing time \eqref{hope_tmix}. Both results apply to all models but are restricted to a high vacancy density regime for $q$ and their proof is delegated to Section~\ref{subsec:out-of-eq:high:proof}. Before stating them we need to introduce the notion of trivial subcritical models. \begin{definition}[{Trivial subcritical}] We say that an update family $\cU$ is not \emph{trivial subcritical}, if there exists $U\in \cU$ and a direction $\vec v\in \bbR^d$ such that $\<\vec u,\vec v\><0$ for all $\vec u\in U$. \end{definition} In \cite{Balister24}*{Theorem~7.1} it is proved that $q_c=1$ iff $\cU$ is trivial subcritical. Therefore, excluding trivial subcritical models from the following two results is necessary. \begin{theorem}[Exponential convergence to equilibrium after a temperature quench] \label{th:convergence:out-of-eq} Fix a dimension $d\ge 1$ and an update family $\cU$ which is not trivial subcritical. For any $\alpha>0$ there exist $\varepsilon=\varepsilon(\alpha)>0$ and $c=c(\alpha)>0$ such that the following holds for the $\cU$-KCM $\eta$. For any $q_0\in[\qct+\alpha,1]$, $q\in[1-\varepsilon,1]$ and local function $f:\Omega\to \bbR$, \[\left|\bbE_{\mu_{q_0}}(f(\eta(t)))-\mu_q(f)\right|\le \frac{\|f\|_\infty\cdot|\mathrm{supp}f|}{ce^{ct}},\] where $\mathrm{supp}f$ is the set of sites on whose state the value of $f$ depends. \end{theorem} \begin{theorem}[Linear mixing at high vacancy density] \label{th:mixing} Fix a dimension $d\ge 1$ and a $d$-dimensional update family $\cU$ which is not trivial subcritical. Then there exist $\varepsilon>0$ and $C>0$ such that for any $q\in[1-\varepsilon,1]$, inequality \eqref{hope_tmix} holds for any $\delta\in(0,1)$ and $n$ large enough depending on $\delta$. \end{theorem} It should be noted that Theorem~\ref{th:mixing} also applies to domains of non-hypercubic shape, but is stated as is for the sake of simplicity. We refer the reader to \cite{Hartarsky24KCM-CP}*{Section~4} for an account of previous works in this direction, in particular \cites{Blondel13,Mountford19,Mareche20}. 
Moving on to more precise results, it only remains to report the following one due to Ertul \cite{Ertul22} (also recall Corollary~\ref{th:East:cutoff} for the one dimensional East model). \begin{theorem}[Cutoff for one-dimensional FA-1f] \label{th:FA1f:cutoff} Consider the FA-1f model in one dimension. There exists an explicit $\varepsilon>0$ such that for any $q\in(1-\varepsilon,1]$, there exists $v,\alpha,\beta>0$ such that \eqref{hope_cutoff} holds with $$-\alpha\sqrt n\leq \epsilon_{n,\delta}\leq \beta\sqrt n.$$ \end{theorem} Furthermore, the number $v$ in Theorem~\ref{th:FA1f:cutoff} corresponds to twice the speed at which the rightmost empty site, known as \emph{front}, moves in FA-1f on $\{1,2,\dots\}$ with boundary condition $\bzero_{\{0,-1,\dots\}}$ (compare with Remark \ref{rem:velo} for East). Indeed, as for Corollary~\ref{th:East:cutoff}, the proof of Theorem~\ref{th:FA1f:cutoff} is based on the identification of the front speed thanks to the convergence of the process seen from the front \cite{Blondel19} (as in Theorem~\ref{theo:ergoEast}). Then, considering each interval of occupied sites, one may show that it shrinks at the front speed at both ends, leading to a double speed (see \cite{Ertul22} for more details). Nonetheless, deducing Theorem~\ref{th:FA1f:cutoff} from the analogue of Theorem~\ref{theo:ergoEast} is not immediate, as in the case of Corollary~\ref{th:East:cutoff}. \subsection{Proofs via cooperative contact processes} \label{subsec:out-of-eq:high:proof} We now turn to outlining the proofs of Theorems~\ref{th:convergence:out-of-eq} and~\ref{th:mixing}. In order to simplify the presentation (see \cite{Hartarsky24KCM-CP} for the full details\footnote{Beware that in \cite{Hartarsky24KCM-CP} the roles of 0 and 1 states are exchanged.}), we focus on Theorem~\ref{th:mixing} in the case of FA-2f in two dimensions (see Figure~\ref{fig:FA2f}). The only additional ingredient needed in the general case and in Theorem~\ref{th:convergence:out-of-eq} are one-scale renormalisations along the lines of Section~\ref{sec:exponential:decay}. The proof proceeds in several steps involving a number of interacting particle systems other than KCM. In Section~\ref{subsubsec:CP}, we reduce the mixing time problem for FA-2f to a more complex problem for a simpler model. Namely, studying the space-time connected components of occupied sites in a cooperative contact process known as the sexual contact process. In Section~\ref{subsubsec:LPP}, we use a comparison with last passage percolation to replace the arbitrary initial condition by the fully empty one. In Section~\ref{subsubsec:BPwD} we discretise time to transform the sexual contact process into a North-East BP with death. In Section~\ref{subsubsec:Toom}, we introduce Toom cycles to show that, at $q$ close enough to 1, with empty initial condition, BP with death mostly has empty sites. Finally, in Section~\ref{subsubsec:chains}, we show that long chains of such Toom cycles are also unlikely. \subsubsection{Sexual contact process} \label{subsubsec:CP} Let us start by introducing the \emph{sexual contact process} (SCP) of \cite{Durrett86}. It is a continuous time Markov process on $\{0,1\}^{\bbZ^2}$ (or in finite volume $\Lambda$ with $\bzero_{\bbZ^2\setminus\Lambda}$ boundary condition as in Section~\ref{sec:finite_vol}) which, using a graphical representation similar to the one for KCM from Section~\ref{subsec:markov}, is defined as follows. 
Each site $\bx\in\bbZ^2$ waits an independent exponentially distributed time with mean one before attempting to update. At that time, if both $\bx+\be_1$ and $\bx+\be_2$ are in state 0, the state of $\bx$ becomes $0$ with probability $q$ and becomes $1$ with probability $1-q$. Otherwise the state of $\bx$ remains unchanged with probability $q$ and becomes $1$ with the remaining probability $1-q$. SCP has two notable advantages over FA-2f and one disadvantage. Namely, SCP is attractive and oriented, but its upper invariant measure is not explicit when it is not the trivial $\delta_\bone$. The above formulation of the process suggests a canonical coupling with \mbox{FA-2f}, using the same clock rings and Bernoulli variables with parameter $q$, the same domain $\Lambda=\{1,\dots,n\}^2$ and boundary condition $\bzero_{\bbZ^2\setminus\Lambda}$. It is an exercise to check the following. \begin{lemma}[Sexual contact process comparison] \label{lem:SCP:comparison} Under the canonical coupling of FA-2f $(\eta(t))_{t\ge 0}$ and SCP $(\zeta(t))_{t\ge 0}$, if $\zeta(0)=\bone_\Lambda$, then $\eta_\bx(t)\le \zeta_\bx(t)$ for all $\bx\in\Lambda$ and $t\ge 0$. \end{lemma} However, we require a finer relation from \cite{Hartarsky24KCM-CP}*{Section~7} (also see \cite{Liggett99}*{Section~III.1}, \cite{Busic13}*{Section~4.2} and \cite{Gottschau18}*{Section~1.3} for similar ideas), since the two processes do not share the invariant measure $\mu_q$. Consider a set of \emph{orange} sites $O_t\subset \Lambda$ defined as follows for $t\ge 0$. At time $0$, we set $O_0=\Lambda$. At each clock ring $t\ge 0$ at site $\bx\in\Lambda$, we obtain $O_t$ from $O_{t-}$ by: \begin{itemize} \item removing $\bx$, if $\zeta_\bx(t)=0$; \item adding $\bx$, if $\zeta_\bx(t)=1$ and there is an orange site around $\bx$, that is, $O_{t-}\cap \{\bx,\bx+\be_1,\bx+e_2,\bx-\be_1,\bx-\be_2\}\neq \varnothing$; \item changing nothing otherwise. \end{itemize} The purpose of orange sites is to ensure that FA-2f processes with different initial conditions are coupled outside orange sites. \begin{lemma}[Orange set coupling] \label{lem:orange} Under the canonical coupling of SCP $(\zeta(t))_{t\ge 0}$ with initial condition $\bone_\Lambda$ and two FA-2f processes $(\eta(t))_{t\ge 0}$ and $(\eta'(t))_{t\ge 0}$ with different initial conditions, for all $t\ge 0$, we have \begin{equation} \label{eq:Ot}\left\{\bx\in\Lambda:\eta_\bx(t)\neq\eta'_\bx(t)\right\}\subset O_t.\end{equation} \end{lemma} \begin{proof} We proceed by induction on the number of clock rings in $\Lambda$. Removing $\bx$, if $\zeta_\bx(t)=0$ is justified by Lemma~\ref{lem:SCP:comparison}. Setting $O_t=O_{t-}\cup\{\bx\}$ cannot violate \eqref{eq:Ot}, at time $t$, if the induction hypothesis is verified at $t-$. Finally, assume that there is no orange site around $\bx$. Then by induction hypothesis $\eta(t-)$ and $\eta'(t-)$ are equal around $\bx$ and, therefore, $\eta_\bx(t)=\eta'_\bx(t)$. \end{proof} Owing to Lemma~\ref{lem:orange} and the standard results on coupling and mixing times \cite{Levin09}*{Corollary~5.5}, proving Theorem~\ref{th:mixing} is reduced to showing that with high probability the set of orange sites is empty at time $Cn$ for $C$ large enough. 
We say that two space-time points $(t,\bx)$ and $(t',\bx')$ are connected in $X\subset[0,\infty)\times\Lambda$, if there exists a sequence of segments $[s_i,t_i]\times\{\bx_i\}\subset X$ indexed by $i\in\{0,\dots,N\}$ such that $[s_i,t_i]\cap[s_{i-1},t_{i-1}]\neq \varnothing$ and $\bx_{i-1}$ and $\bx_{i}$ are neighbours in $\Lambda$, such that $(t,\bx)\in[s_0,t_0]\times\{\bx_0\}$ and $(t',\bx')\in[s_N,t_N]\times\{\bx_N\}$. By the definition of orange sites, it is clear that $\bigcup_{t\ge 0}O_t$ is contained in the connected component $\cC$ of $\{(t,\bx)\in[0,\infty)\times\Lambda:\zeta_\bx(t)=1\}$ containing $\{0\}\times\Lambda$. Thus, it suffices to show that, for $q\ge 1-\varepsilon$, \begin{equation} \label{eq:connected:comp} \bbP\left(\cC\subset[0,Cn]\times\Lambda\right)\ge 1-\delta \end{equation} for some constant $C$ independent of $\delta>0$, provided $n$ is large enough. In order to simplify the exposition, we fix $q=1-\varepsilon<1$ in the rest of the section with $\varepsilon$ to be chosen small enough in Section~\ref{subsubsec:BPwD}. \subsubsection{Last passage percolation} \label{subsubsec:LPP} Following \cite{Hartarsky24KCM-CP}*{Section~11}, our next goal is to replace the worst initial condition, $\bone_\Lambda$, of SCP $\zeta$ of Section~\ref{subsubsec:CP} by the best one, $\bzero_\Lambda$. For this we use a comparison with a version of last passage percolation, which will play a similar role to the orange set above. Define the \emph{last passage} set $L_t\subset\Lambda$ for $t\ge 0$ as follows. Set $L_0=\Lambda$ and at each clock ring $t\ge 0$ at site $\bx\in\Lambda$, we obtain $L_t$ from $L_{t-}$ by removing $\bx$, if the following conditions are both satisfied: \begin{itemize} \item at time $t$ the Bernoulli variable with parameter $q=1-\varepsilon$ takes the value 0, that is, SCP changes the state of $\bx$ to 1 regardless of the current configuration; \item $\{\bx+\be_1,\bx+\be_2\}\cap L_{t-}=\varnothing$. \end{itemize} Otherwise, we set $L_t=L_{t-}$. The proof of the following observation, similar to Lemma~\ref{lem:orange}, is left as an exercise for the reader. \begin{lemma}[Last passage percolation coupling] \label{lem:LPP:coupling} Under the above coupling, we have \[\left\{\bx\in\Lambda:\zeta_\bx(t)\neq\zeta'_\bx(t)\right\}\subset L_t\] for any $t\ge 0$, where $\zeta$ and $\zeta'$ are SCP with initial condition $\bone_\Lambda$ and $\bzero_\Lambda$ respectively. \end{lemma} We next invoke a classical fact about last passage percolation. We note that much more precise results are available, but not more useful for our purposes. \begin{lemma}[Linear last passage time] \label{lem:LPP} For some absolute constant $C'>0$ we have \[\bbP\left(L_{C'n/\varepsilon}\neq\varnothing\right)\le \delta\] for any $n$ large enough depending on $\delta>0$. \end{lemma} \begin{proof}[Sketch] A proof of Lemma~\ref{lem:LPP} generalising to arbitrary dimension was given by Greenberg, Pascoe and Randall \cite{Greenberg09}. The idea is to introduce an exponential metric on the set of possible values of the last passage set $L_t$: \[d(A,B)= \sum_{\bx\in A\Delta B}e^{\gamma\<\bx,\bone\>},\] where $\gamma>0$ is a suitably large constant depending only on dimension and $\Delta$ denotes the symmetric difference of sets. One can verify that the expected distance of two canonically coupled last passage percolations starting from neighbouring configurations contracts. 
Applying the path coupling method (see \cite{Levin09}*{Chapter 14}), this yields that the hitting time of $\varnothing$ is logarithmic in the diameter of the space, which is exponential in $n$. \end{proof} Combining Lemmas~\ref{lem:LPP:coupling} and~\ref{lem:LPP} and recalling \eqref{eq:connected:comp}, it is now sufficient to prove that, with probability $1-\delta$, the connected components of $\{(t,\bx)\in[0,\infty)\times\Lambda:\zeta'_\bx(t)=1\}$ intersecting $\{C'n/\varepsilon\}\times\Lambda$ are contained in $[0,Cn]\times\Lambda$. Taking $C>C'/\varepsilon$ and performing a union bound, it suffices to show that for any space-time point $(t,\bx)$, its connected component $\cC_{t,\bx}$ satisfies \begin{equation} \label{eq:diameter}\bbP(\diam(\cC_{t,\bx})\ge k)\le e^{-ck}\end{equation} for some $c>0$ independent of $n$ and all $k$ large enough. Since SCP is attractive, $\zeta'$ has initial condition $\bzero_\Lambda$ and boundary condition $\bzero_{\bbZ^2\setminus\Lambda}$, it suffices to prove \eqref{eq:diameter} for the infinite volume SCP $\zeta''$ with initial condition $\bzero$. Indeed, by induction on the number of clock rings in $\Lambda$ in the graphical construction, one can show that for every $t\ge 0$ and $\vec x\in\bbZ^2$, we have $(\bzero_{\bbZ^2\setminus\Lambda}\cdot \zeta'(t))_{\vec x}\le \zeta''_{\vec x}(t)$, which implies that the connected component $\cC_{t,\vec x}$ for $\zeta''$ contains the one for $\zeta'$. \subsubsection{BP with death} \label{subsubsec:BPwD} The next step of the proof is a discretisation in time \cite{Hartarsky24KCM-CP}*{Section~8}. Namely, we fix $T$ large enough but such that $T\varepsilon$ is small. This way, in each time interval of length $T$, it is likely that SCP $\zeta''$ with initial condition $\bzero$ attempts to change the state of a given site $\bx$ to 0 (and succeeds, if $\bx+\be_1$ and $\bx+\be_2$ are in state 0), but never attempts to change the state of $\bx$, $\bx+\be_1$ or $\bx+\be_2$ to 1. We declare each space-time point $(m,\bx)\in\{0,1,\dots\}\times\bbZ^2$ \emph{good}, if the above event occurs for site $\bx$ in the time interval $[mT,(m+1)T)$. Note that almost surely, no clocks ring at times $(mT)_{m=0}^\infty$, so these events are 1-dependent and have probability $1-\varepsilon'$ for $\varepsilon'>0$ that can be chosen arbitrarily small, if $\varepsilon$ is small enough. Using the Liggett-Schonmann-Stacey theorem \cite{Liggett97}, we can replace this 1-dependent field of indicators of good sites by an independent one with high marginals (at the cost of changing $\varepsilon'$). With the good space-time points at hand, we define the discrete time version $\xi$ of SCP, which we refer to as North-East BP with death parameter $\varepsilon'$. Set $\xi(0)=\bzero$. For $m\ge 1$ and $\bx\in\bbZ^2$, define $\xi_\bx(m)=0$, if $(m-1,\bx)$ is good and $\xi_{\bx}(m-1)=0$, or if $(m-1,\bx)$ is good and $\xi_{\bx+\be_1}(m-1)=\xi_{\bx+\be_2}(m-1)=0$. Otherwise, set $\xi_\bx(m)=1$. The name is justified by the fact that, in the absence of non-good space-time points, this process is exactly BP with the North-East update family (see Figure~\ref{fig:NE}). It is not hard to check that, if $\xi_\bx(m)=\xi_\bx(m+1)=0$, then $\zeta''_\bx(t)=0$ for all $t\in[mT,(m+1)T)$. 
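Before using the last implication, we record a short simulation sketch of the discretised process. It is purely illustrative (the finite window, the convention that sites outside it are occupied, which is conservative, and all names are ours) and uses an i.i.d.\ field of good points, as after the Liggett--Schonmann--Stacey replacement; for small $\varepsilon'$ the density of occupied sites remains small uniformly in time, in line with the bound proved in the next subsection.
\begin{verbatim}
import numpy as np

def north_east_bp_with_death(n, steps, eps_prime, seed=0):
    """North-East BP with death parameter eps_prime on a finite n x n window;
    sites outside the window are treated as occupied (a conservative choice).
    Convention: 0 = empty, 1 = occupied; the initial condition is all empty."""
    rng = np.random.default_rng(seed)
    xi = np.zeros((n, n), dtype=int)
    for _ in range(steps):
        good = rng.random((n, n)) >= eps_prime   # i.i.d. good space-time points
        east = np.ones_like(xi)                  # state of x + e_1
        east[:-1, :] = xi[1:, :]
        north = np.ones_like(xi)                 # state of x + e_2
        north[:, :-1] = xi[:, 1:]
        stays_empty = xi == 0
        becomes_empty = (east == 0) & (north == 0)
        xi = np.where(good & (stays_empty | becomes_empty), 0, 1)
    return xi

# the density of occupied sites stays small for small eps_prime, e.g.
print(north_east_bp_with_death(200, 100, 0.01).mean())
\end{verbatim}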
In view of this, we consider the set
\begin{equation}
\label{eq:m'x'}
X=\left\{(m,\bx)\in\{0,1,\dots\}\times\bbZ^2:\xi_{\bx}(m)=1\right\},\end{equation}
equipped with all edges of the form $((m,\bx),(m',\bx'))$ with
\begin{equation}
\label{eq:connected:chain}
(m'-m,\bx'-\bx)\in\{-1,0,1\}\times\{\bzero,\be_1,\be_2,-\be_1,-\be_2\}.\end{equation}
Hence, it suffices to prove \eqref{eq:diameter} for the connected component $\cC'_{m,\bx}$ of any space-time point $(m,\bx)$ in $X$.
\subsubsection{Toom cycles}
\label{subsubsec:Toom}
Before treating connected components $\cC'_{m,\bx}$ in $X$ of \eqref{eq:m'x'}, it is useful to first show that $\bbP(\xi_\bx(m)=1)$ is small for any space-time point $(m,\bx)$. This is an instance of a classical result of Toom \cite{Toom80} for stability of cellular automata subjected to random noise. However, since our cellular automaton is BP with update family consisting of a single rule, it is possible to use a simpler argument of Swart, Szab\'o and Toninelli \cite{Swart22}*{Section~3.5} presented below. We present the construction of a \emph{Toom cycle} in the form of an algorithm illustrated in Figure~\ref{fig:Toom}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{looperas.pdf}
\caption{Illustration of the algorithm used to construct a Toom cycle rooted at the bottom space-time point $(m,\bx)$. Time increases downwards.}
\label{fig:Toom}
\end{figure}
Fix a realisation of the good space-time points such that $\xi_\bx(m)=1$. For each $(m',\bx')$ such that $\xi_{\bx'}(m')=1$ and $(m'-1,\bx')$ is good, fix some $\be(m',\bx')\in\{\be_1,\be_2\}$ such that $\xi_{\bx'+\be(m',\bx')}(m'-1)=1$, which necessarily exists by construction. We construct a sequence of space-time points starting at $(m,\bx)$. Elements of the sequence such that both the previous and next element have larger time coordinate are called \emph{sinks}. Initially the sequence is the single point $\cT_0=(m,\bx)$. If $\be(m,\bx)$ is not defined, we stop and output $\cT_0$. Otherwise, we \emph{explore}: replace the point $(m,\bx)$ by the sequence $\cT_1=(m,\bx),(m-1,\bx),(m,\bx),(m-1,\bx+\be(m,\bx)),(m,\bx)$.
For $l\ge 1$, given $\cT_l$, define $\cT_{l+1}$ as follows. Among the sinks $(m',\bx')$ of $\cT_l$ such that $\be(m',\bx')$ is defined, we find the first one that maximises $m'$. If no such element exists, we stop and output $\cT_l$. Otherwise, explore: replace the selected space-time point $(m',\bx')$ in $\cT_l$ by $(m',\bx'),(m'-1,\bx'),(m',\bx'),(m'-1,\bx'+\be(m',\bx')),(m',\bx')$. Denote the result of this exploration operation by $\cT_l'$. If the points $(m'-1,\bx')$ and $(m'-1,\bx'+\be(m',\bx'))$ are not already present in $\cT_l$, we set $\cT_{l+1}=\cT_l'$. If $(m'-1,\bx')$ appears in $\cT_l$, we remove from $\cT_l'$ all vertices of the corresponding sub-cycle except one of the two occurrences of $(m'-1,\bx')$ to obtain $\cT_l''$. Otherwise, set $\cT''_l=\cT'_l$. Finally, we do the same \emph{loop erasure} operation on $\cT''_l$, if $(m'-1,\bx'+\be(m',\bx'))$ appears twice in this sequence. The final result defines $\cT_{l+1}$.
The output of this algorithm is called the \emph{Toom cycle rooted at} $(m,\bx)$. A number of combinatorial properties of this object are needed. All of their proofs are fairly simple, but fiddly, so we refer to \cite{Swart22} for the details.
Firstly, the Toom cycle is well defined, contains its root and its increments belong to \[\{(-1,0,0),(1,0,0),(-1,1,0),(1,-1,0),(-1,0,1),(1,0,-1)\}.\] Consequently, the number of Toom cycles with given root of length $l$ is at most $6^l$. Secondly, for each sink $(m',\bx')$, the space-time point $(m'-1,\bx')$ is not good and each sink appears only once in the cycle. Observe that there are three types of vertices in a Toom cycle: the ones with two neighbours with smaller time coordinate, those with two neighbours with larger time coordinate and the others. We call them \emph{sources}, \emph{sinks} and \emph{internal vertices}. The number $n_*$ of sources and the number of sinks are the same by double counting. One can prove that internal vertices $(m_i,\bx_i)$ such that their neighbours are not sinks or sources satisfy \[((m_{i+1}-m_i),\<\bone,\bx_{i+1}-\bx_i\>)=((m_i-m_{i-1}),\<\bone,\bx_{i}-\bx_{i-1}\>)\in\{(-1,0),(1,-1)\}.\] When the above quantity is $(1,-1)$ (resp.\ $(-1,0)$), we call the internal vertex \emph{blue} (resp.\ \emph{red}) and denote the number of such vertices $n_b$ (resp.\ $n_r$). Examining the time increments, it is clear that $|n_b-n_r|\le 4n_*$. On the other hand, by examining the projected space increments $\<\bone,\bx_{i+1}-\bx_i\>$, we see that $n_b\le 6n_*$. Combining these facts, we obtain that the total length of the Toom cycle is at most $6n_*+n_b+n_r\le 22n_*$. As noted above, sinks are not good and distinct, so the probability that all sinks of a Toom cycle are bad is at most $(\varepsilon')^{n_*}$. We are then able to use a union bound \[\bbP\left(\xi_\bx(m)=1\right)\le\varepsilon'+\sum_{l\ge 1} 6^{2l}(\varepsilon')^{2l/22}<(\varepsilon')^{1/12},\] for $\varepsilon'$ small enough, taking into account that cycles have even length. \subsubsection{Chains of Toom cycles} \label{subsubsec:chains} In Section~\ref{subsubsec:Toom}, we showed that for any space-time point $(m,\bx)$, the probability $\bbP(\xi_\bx(m)=1)$ that it is in state 1 in North-East BP with death parameter $\varepsilon'$ and initial condition $\bzero$ is small. However, following Section~\ref{subsubsec:BPwD}, we need to prove that the probability of a connected component of such points is exponentially low in the diameter of the component. We already have one exponential bound, namely for a given space-time point, the probability that its Toom cycle has length more than $l\ge 1$ is at most $6^l(\varepsilon')^{l/22}$. It therefore remains to treat connected components of Toom cycles. Unfortunately, if Toom cycles for different points intersect at a sink, we lose independence. In order to deal with this issue, we introduce the following notion of chain (of Toom cycles), following \cite{Hartarsky24KCM-CP}*{Section~9.3}. \begin{definition}[Chain] A \emph{chain} is a finite sequence of vertex-disjoint Toom cycles $T_i$ rooted at space-time points $(m_i,\bx_i)$ of lengths $l_i$ such that \begin{equation} \label{eq:chain:distance} d((m_i,\bx_i),(m_{i+1},\bx_{i+1}))\le 7(l_i+l_{i+1}).\end{equation} The length of a chain is $\sum_i l_i$. \end{definition} The next result \cite{Hartarsky24KCM-CP}*{Lemma~9.15} shows that we can extract a chain from a large connected component of occupied sites. \begin{lemma}[Existence of chains] \label{lem:chain} There exists a constant $C>0$ (depending only on the dimension, which we fixed equal to 2) such that the following holds. 
If the connected component $\cC_{m,\bx}'$ of $(m,\bx)$ in $X$ from \eqref{eq:m'x'} has diameter at least $k$, then there exists a chain of length $l\ge k/C$ contained in the ball $B_{(m,\bx)}(Cl)$ of radius $Cl$ centered at $(m,\bx)$. \end{lemma} \begin{proof}[Sketch] The construction of the chain is obtained algorithmically by pruning the sequence $(T_i)_{i=1}^N$ of Toom cycles rooted at the space-time points of a path from $(m,\bx)$ to the farthest point in its connected component $\cC_{m,\bx}'$. Initially the Toom cycles in this sequence may intersect. We start by discarding the Toom cycles rooted at space-time points at distance at most $6l_1$ from $(m_1,\bx_1)=(m,\bx)$ except $T_1$. We inspect the first remaining Toom cycle $T_i$ with $i>1$ (if any). If $T_1\cap T_i\neq \varnothing$, we also remove $T_1$ from our sequence, in which case we observe that necessarily $l_i> 3l_1$, so \[\left\{\left(m_j,\bx_j\right):j<i\right\}\subset B_{(m_1,\bx_1)}(6l_1)\subset B_{(m_i,\bx_i)}(6l_i).\] If, on the contrary, $T_1\cap T_i=\varnothing$, we keep both $T_1$ and $T_i$ for the moment and observe that $d((m_1,\bx_1),(m_i,\bx_i))\le 6l_1+\sqrt{2}$ by construction, since the roots initially form a connected path in the sense of \eqref{eq:connected:chain}. In subsequent steps, we proceed similarly. Namely, we first remove the Toom cycles rooted at space-time points $(m_j\bx_j)$, with $j>i,$ at distance at most $6l_i$ from $(m_i,\bx_i)$. Then, for the first remaining Toom cycle $T_j$ with $j>i$, we remove those among the remaining ones with smaller index which intersect $T_j$. After the algorithm terminates, denoting the set of remaining indices of Toom cycles by $I$, one can show that $T_i\cap T_j=\varnothing$ for distinct $i,j\in I$ and \[\bigcup_{i\in I}B_{(m_i,\bx_i)}(6l_i)\] is a connected set containing the initial path. This allows us to extract the desired chain. See \cite{Hartarsky24KCM-CP} for more details. \end{proof} With Lemma~\ref{lem:chain} at hand, the proof of Theorem~\ref{th:mixing} is nearly complete. We use a union bound over the possible chains. Recall that Toom cycles in a chain are disjoint (so independent) and their probability is at most $(\varepsilon')^{l/22}$ for a chain length $l$. It therefore remains to show that the number of possible chains of length $l$ is bounded by $C^l$ for some constant $C$ independent of $\varepsilon'$. Indeed, the number of chains modulo the choice of the roots of the Toom cycles is at most $6^l$ as discussed above. Finally, the number of choices for the roots can be bounded using \eqref{eq:chain:distance} and the fact that $\bbZ^2$ does not have super-exponential growth. Putting the above together, we obtain an exponential bound on $\bbP(\diam(\cC'_{m,\bx})\ge k)$, concluding the proof of Theorem~\ref{th:mixing} for FA-2f in two dimensions. \section{Other out-of-equilibrium results} \label{sec:out-of-eq:other} We next briefly mention a few more works tackling KCM out of equilibrium from angles different from those discussed above. \subsection{The biased annihilating branching process} The biased annihilating branching process (BABP) is an interacting particle system closely related to FA-1f, to which several works have been devoted in the 1990s \cites{Sudbury95, Sudbury97,Neuhauser93}. Here sites are updated to $0$ (resp.\ $1$) with rate proportional to the number of neighbouring empty sites. 
More precisely BABP has generator \eqref{eq:generator} with $c_{\vec x}$ replaced by $\tilde c_{\vec x}$, where \[\tilde c_{\vec x}(\omega):=\sum_{i=1}^d\sum_{\varepsilon\in\{-1,1\}}(1-\omega_{\vec x+\varepsilon\vec e_i}).\] Like FA-1f, BABP is reversible w.r.t.\ $\mu$ and not attractive. Furthermore, for all $\omega\in\Omega$, \[\tilde c_{\vec x}(\omega)=0\Leftrightarrow c_{\vec x}(\omega)=0\] with $c_{\vec x}$ the FA-1 constraint function (recall \eqref{eq:def:cx} and Section~\ref{sec:rules}). Despite these similarities, BABP has some special features not shared by FA-1f, which make it more tractable. In particular, it enjoys a self-duality property (see \cite{Sudbury95}*{Eq.~34}) and quasi-duality with another model known as double flip process (see \cite{Sudbury97}*{Section 6}). Thanks to these features, which correspond to some special algebraic properties satisfied by the generator, Sudbury \cite{Sudbury97} established that for BABP in any dimension convergence to equilibrium (i.e.\ \eqref{hope_nu}) holds when $\nu=\mu_{q_0}$, provided $q_0\neq 0$. Unfortunately, this result does not seem to provide any insight or tool for proving \eqref{hope_nu} at all vacancy densities for FA-1f. \subsection{FA-1f at low density} \label{sec:low}The FA-1f model has been particularly investigated, since it is the most accessible model beyond the East one. It has the notable advantage of being non-cooperative in the sense that a single empty site can move around and allow the configuration to be resampled. Moreover, as we saw in Section~\ref{sec:FA1f}, empty sites move more or less like random walks, albeit possibly coalescing when they meet and occasionally branching. If one works in a suitably chosen finite volume with $q$ small and only follows the process over a relatively short amount of time, it can be possible to track these random walks and deal with delicate collision events. More precisely, consider a `critical' volume, that is, a torus (or any other bounded degree finite connected graph) $\Lambda=(\bbZ/n\bbZ)^d$ of cardinal $n^d$ of order $1/q$. Since the graph is connected, by Corollary \ref{cor:legal:path}, the FA-1f model is ergodic on $\Omega\setminus\{\bone\}$, so we exclude the $\bone$ configuration in the sequel.Typically, under $\mu_q$, $\Lambda$ only contains a bounded number of empty sites and they are well separated. If we follow the process up to a time horizon $1/q^2$, say, we do not expect to see more than three empty sites near each other. Since moving an empty site requires creating another one, this means that only binary collisions or branchings occur and these can be analysed. Furthermore, the above reasoning can also work out of equilibrium, provided that one can control the length of the initial period of time it takes for the number of empty sites to drop down to order 1. This delicate strategy was employed by Pillai and Smith \cites{Pillai17,Pillai19} yielding that the mixing time in the above setting is of order $q^{-2}$, up to logarithmic corrections, for $d\ge 2$. As already discussed in the proof of Proposition~\ref{prop:CBSEP}, one can prove the same result, but also for other graphs by establishing a finite-volume logarithmic Sobolev inequality \cite{Hartarsky22CBSEP} (recall \eqref{eq:logsob}, Corollary~\ref{cor:LS}). 
In fact, such inequalities already proved useful in \cite{Blondel13} for studying FA-1f at high vacancy density on infinite graphs (e.g.\ $\bbZ^d$) with initial condition such that there are empty sites at bounded distance from all vertices of the graph. However, in the absence of attractiveness, it is usually difficult to relate finite and infinite volume results.
\subsection{Large deviations in trajectory space}
A radically different viewpoint consists in studying trajectories of KCM rather than their state at a given time. Given, e.g., a finite box $\Lambda=\{1,\dots,n\}^d$ with suitable boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and a time $t$, the \emph{activity} $\cA(t)$ is the total number of times any site changed its state up to time $t$. One is interested in the large deviation properties of $\cA(t)$ as $t\to\infty$. That is, for $a\in[0,\infty)$, one expects that
\[f(a)=\lim_{\varepsilon\to0}\lim_{n\to\infty}\lim_{t\to\infty}\frac{-\log\bbP_{\mu}(|\cA(t)/(n^dt)-a|<\varepsilon)}{n^dt}\]
exists, and one would like to study the properties of the large deviation rate $f$. The constraint of the KCM impacts the function $f$ in that $f(a)=0$ for all $a<\lim_{n\to\infty}\lim_{t\to\infty}\bbE_{\mu}(\cA(t))/(n^dt)$: low activity is easy to achieve. This can be seen as a constraint-induced dynamical phase transition. We refer the reader to \cites{Garrahan09,Garrahan07,Bodineau12,Bodineau12a,Jack06}, where these and finer matters are investigated for FA-1f and East in one dimension.
\subsection{Aging for the one-dimensional East}
Remark \ref{rem:East} guarantees that, for $q\ll 1$, relaxation to equilibrium after a density quench (namely starting from $\nu=\mu_{q_0}$ with a fixed $q_0\in(0,1]$) occurs at an exponential rate on a time scale of order $T_{\rm rel}\sim e^{c|\log q|^2}$ with $c=(2\log 2)^{-1}$ (see Theorem \ref{th:East}). In this section we will discuss a peculiar behaviour that occurs at intermediate times $1\ll t \ll T_{\rm rel}$. Fix $\epsilon, q\in(0,1)$, and, for $n\ge 1$, set
\begin{align}
t_0&{}=1,&t_0^-&{}=0,&t_0^+&{}=\left(\frac{1}{q}\right)^{\epsilon},\nonumber\\
t_n&{}=\left(\frac{1}{q}\right)^n,&t_n ^-&{}=t_n ^{1-\epsilon},&t_n ^+&{}=t_n ^{1+\epsilon}.\label{deftn}
\end{align}
The time intervals $[t_n^-, t_n^+]$ and $[t_n^+,t_{n+1}^-]$ are called \emph{$n$\textsuperscript{th} active} and \emph{$n$\textsuperscript{th} stalling} periods respectively. Note that in the limit $q\to 0$ there is a sharp separation of time scales, $t_n/t_{n'}\to 0$ for $n<n'$ and, if $\epsilon\ll 1$, the $n$\textsuperscript{th} stalling period is much longer than the $n$\textsuperscript{th} active period. Suppose that $q\ll 1$ and initialise the process from a Bernoulli distribution at density $q_0>q$. Most of the non-equilibrium evolution will try to remove the excess of empty sites present initially and will thus be dominated by the coalescence of domains corresponding to the intervals separating two consecutive empty sites. This process must occur in a cooperative way because, in order to remove an empty site, another empty site must be created to its right.
Furthermore, recalling that creating an empty site at distance $\ell$ requires creating $\sim\log_2 \ell$ additional empty sites (see Proposition \ref{prop:comb}), the following heuristic picture emerges:
\begin{enumerate}
\item during the $n$\textsuperscript{th} active period, only the empty sites with another empty site to their right at distance less than $2^n$ can be removed;
\item at the end of the $n$\textsuperscript{th} active period, no empty sites at distance less than $2^n+1$ are present any more, and there is no empty site that was not present at the beginning of the period;
\item during the $n$\textsuperscript{th} stalling period, nothing happens: none of the empty sites present at the beginning is destroyed and no new empty site is present at the end.
\end{enumerate}
The above heuristics, set out by Evans and Sollich in two physics papers \cites{Sollich99,Sollich03}, were turned into rigorous results by Faggionato, Martinelli, Roberto and Toninelli in \cite{Faggionato12}. This implies that the vacancy density displays a peculiar staircase behaviour (see \cite{Faggionato12}*{Theorem 2.5(i)}) and the two-time autocorrelation function depends in a non-trivial way on the two times and not just on their difference (see \cite{Faggionato12}*{Theorem 2.5(i)}). This phenomenon is known as \emph{aging}. In the same paper, sharp results for the statistics of the interval between two consecutive zeros in the different periods were established (see \cite{Faggionato12}*{Theorem 2.6}). The key ingredients for these results are:
\begin{itemize}
\item proving that the non-equilibrium dynamics of the East model starting from a renewal process is well approximated when $q\downarrow 0$ by a hierarchical coalescence process whose rates depend on large deviation probabilities of the East model;
\item universality results for the scaling limit of this coalescence process (see \cite{Faggionato12a}).
\end{itemize}
\begin{problem}
It could be interesting to investigate whether a staircase behaviour for local functions and aging for two-time functions hold also for East in higher dimensions or for other KCM featuring a sharp separation of time scales, in particular for supercritical rooted models where logarithmic energy barriers also occur (see Proposition \ref{prop:Laure:combi}).
\end{problem}
\section{Basic open problems}
\label{sec:open-pbs}
The results discussed in the previous sections leave several basic questions open.
\subsection{Ergodic regime}
Fix a dimension $d\geq 1$, a $d$-dimensional update family $\cU$ and $q>q_c(\cU)$. The following conjectures can be viewed either as an extension to the whole set of $\cU$ of the results of Section~\ref{sec:East-out} for the East model or as an extension to the whole ergodic regime of the high temperature results of Section~\ref{sec:high-temp}.
\begin{conjecture}[Convergence to equilibrium after a temperature quench]
Let $\nu$ be the Bernoulli measure $\mu_{q_0}$, $q_0>\qc$. Then,
\begin{enumerate}
\item for any local function $f$, \eqref{hope_nu} holds;
\item furthermore, the convergence occurs exponentially fast.
\end{enumerate}
\end{conjecture}
\begin{conjecture}[Linear pre-cutoff]
There exists $C$ such that \eqref{hope_tmix} holds for any $\delta\in (0,1)$ and $n$ large enough.\label{conj:tmix}
\end{conjecture}
\begin{conjecture}[Cutoff]
There exists $v$ such that \eqref{hope_cutoff} holds.
\end{conjecture}
\subsection{Beyond the ergodic regime}
Fix a dimension $d\geq 1$, a $d$-dimensional update family $\cU$ and $q<q_c(\cU)$.
\begin{problem}[Quench from high to low temperature] Let the initial distribution $\nu$ be $\mu_{q_0}$ with $q_0>\qc$. Since a.s.\ the initial state has closure $\bzero$, this will be true at any later time (see Lemma \ref{lem:closure}). However, it is natural to expect a behaviour dominated by the growth of occupied ``clusters'' that can be unblocked only from their boundary. Is this intuition correct and how fast do these clusters grow? E.g.\ how does $\tbp$ of the current configuration scale with time? Another natural question is whether, locally, the measure converges to the equilibrium density, namely whether $\lim_{t\to\infty}\mathbb E_{\mu_{q_0}}(\eta_0(t))=1-q$. In principle, this is still compatible with the fact that the closure is $\bzero$ at any time. See however \cite{Cancrini10}*{Section 4} for an example of KCM defined on trees (see Section \ref{sec:tree}) for which such a result is ruled out. \end{problem} The inverse regime, $q_0<\qc<q$, is often uninteresting, because the set of sites which can be updated may partition into an infinite collection of independent finite Markov chains. However, in some cases, an infinite chain remains. This leads to a setting similar to inhomogeneous KCM, discussed in Section~\ref{sec:general:KCM} and \ref{sec:inhomogeneous} below. \section{Techniques} Finally, let us review the new techniques encountered in this chapter. In fact, the only one that was not new (recall Section~\ref{subsec:tools:BP}) is renormalisation used in the proof of the general case of Theorems~\ref{th:convergence:out-of-eq} and~\ref{th:mixing}. \noindent\textbf{Distinguished empty site.} Definition~\ref{def:distinguished} provided this essential tool for the analysis of the one dimensional East model. It can be seen as a boundary condition moving to the right and leaving equilibrium to its left. \noindent\textbf{Couplings.} In Sections~\ref{subsubsec:CP}-\ref{subsubsec:BPwD}, we saw useful couplings between KCM and various attractive processes: cooperative contact processes, BP with death and last passage percolation, all based on the graphical construction of the models. The drawback of such comparisons is that, in order for them to be useful, the attractive process may need to be supercritical, leading to an artificial limitation to the high vacancy density regime. \noindent\textbf{Toom cycles.} This combinatorial tool for proving stability of BP with death, discussed in Section~\ref{subsubsec:Toom}, has broader applications. Namely, suitable extensions can be used for perturbations of arbitrary cellular automata and, in some cases, interacting particle systems \cite{Swart22}. \noindent\textbf{Chains.} In order to control connected components rather than individual sites, in Section~\ref{subsubsec:chains}, we relied on chains. This is a rather general percolation method not specific to Toom cycles. \noindent\textbf{Cutoff strategy.} In both Corollary~\ref{th:East:cutoff} and Theorem~\ref{th:FA1f:cutoff} the same strategy was used to prove a linear time cutoff with square root window. While the entire programme has only been implemented for specific models in one dimension, it is, in principle, feasible for any KCM. The first step is proving a (possibly stretched) exponential convergence to equilibrium as in Theorem~\ref{th:convergence:out-of-eq}. Then one shows a positive speed result as in Theorem~\ref{th:mixing}. The third step is showing ergodicity for the process seen from the front and a law of large numbers for the front position. 
Finally, a central limit theorem for the front can be sought in order to control the cutoff window. \chapter{Fredrickson-Andersen 2-spin facilitated model} \label{chap:FA2f} \abstract{This chapter uses the setting of the FA-2f model to develop several new tools for determining the emptying time of the origin with high precision as $q\to 0$. We begin by using bisection in higher dimensions to show that this time scale is finite. We then discuss a robust long range Poincar\'e inequality approach for proving upper bounds. Finally, we assess the sharp threshold of FA-2f, which relies on a robust relation with bootstrap percolation for the lower bound and on the very flexible method of matryoshka dolls for the upper bound. All of these methods generalise to treat various other models.\\} Recall from Section~\ref{sec:rules} that the FA-2f model's constraint requires at least two empty neighbours in order to change the state of a site. Throughout this section we work in two dimensions for simplicity of notation, but the arguments apply equally well in any dimension. For FA-2f (in two dimensions), the natural geometry is rectangular. It is therefore, convenient to denote by \begin{equation} \label{eq:def:rectangle} R(a,b)=\{0,\dots,a-1\}\times\{0,\dots,b-1\}\subset\bbZ^2\end{equation} the rectangle of side lengths $a,b\in\bbN$. \section{Bisection in higher dimensions} \label{sec:bisection:2d} Let us begin by paying our debt by proving that $\trel<\infty$ for any $q>0$ for FA-2f. Recall that in the proof of Theorem~\ref{th:exponential}, the only fact whose proof was postponed until now is that for $q$ close enough to 1, the KCM with update family \[\cU'=\{U'\}=\left\{\{(1,0),(1,1),(0,1)\}\right\}\] has a finite relaxation time. In fact, we rather used this result for the version of this KCM with general state space, as in Section~\ref{sec:general:KCM}, but the proof is very similar, so we focus on the binary case. In Section~\ref{subsec:East:upper} we discussed how to prove this in one dimension via the bisection technique. We next explain how the method adapts to higher dimensions, following \cite{Cancrini08}. In fact, the proof works for any $q>\qc(\cU')$. We start with a simple observation, which could be attributed to Schonmann~\cite{Schonmann90}. \begin{observation}[Oriented percolation correspondence] \label{obs:GOSP} Endow $\bbZ^2$ with the oriented graph structure defined by the edge set $E=\{(\bx,\bx+\bu):\bx\in\bbZ^2,\bu\in U'\}$. Then for any configuration $\omega\in\Omega$, in $\cU'$-BP, the emptying time $\tbp$ is given by the number of sites in the longest (oriented) path from $0$ whose sites are all occupied. \end{observation} The proof of this fact by induction on the number of iterations of the BP map $\sB_\cU$ from \eqref{eq:def:BP} is left as an exercise to the reader. In view of Observation~\ref{obs:GOSP}, for $\bx\in A\subset\bbZ^2$, $B\subset\bbZ^2$ and $\omega\in\Omega_A$, we write $\bx\to[A]B$ if there exists a sequence of sites $(\bx_i)_{i=0}^{l}\in A^{l+1}$ occupied in $\omega$ with $\bx_0=\bx$, $\bx_l\in B$ and $(\bx_{i-1},\bx_i)\in E$ for all $i\in\{1,\dots,l\}$. 
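Readers wishing to check Observation~\ref{obs:GOSP} numerically may use the following sketch. It is purely illustrative (the helper names are ours and we work on a finite window with empty sites outside, which confines both the BP dynamics and the oriented paths): it compares the $\cU'$-BP emptying time of the origin with the number of sites of the longest occupied oriented path started there.
\begin{verbatim}
import numpy as np

U_PRIME = ((1, 0), (1, 1), (0, 1))

def bp_emptying_time(occ):
    """Number of iterations of the U'-BP map needed to empty the origin (0,0).
    occ is a boolean n x n array (True = occupied); sites outside are empty."""
    occ = occ.copy()
    n = occ.shape[0]
    t = 0
    while occ[0, 0]:
        padded = np.zeros((n + 1, n + 1), dtype=bool)
        padded[:n, :n] = occ
        # a site stays occupied iff it is occupied and some site of x + U' is
        occ = occ & (padded[1:, :n] | padded[1:, 1:] | padded[:n, 1:])
        t += 1
    return t

def longest_occupied_path(occ):
    """Number of sites of the longest oriented occupied path started at the
    origin, along the edges (x, x + u) with u in U'."""
    n = occ.shape[0]
    length = np.zeros((n + 1, n + 1), dtype=int)   # 0 outside and on empty sites
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if occ[i, j]:
                length[i, j] = 1 + max(length[i + 1, j],
                                       length[i + 1, j + 1],
                                       length[i, j + 1])
    return length[0, 0]

rng = np.random.default_rng(1)
for _ in range(100):                               # empirical check of the observation
    occ = rng.random((12, 12)) < 0.6
    assert bp_emptying_time(occ) == longest_occupied_path(occ)
\end{verbatim}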
\begin{figure}
\centering
\begin{tikzpicture}[x=0.4cm,y=0.4cm]
\draw (0,0)--(20,0)--(20,10)--(0,10)--cycle;
\draw (10,0)--(10,10);
\draw (15,0)--(15,10);
\foreach \i in {0,1,2,3,4,5,7,8} \draw (10.5,0.5+\i) circle (0.25);
\foreach \i in {1,2,3,5,8} \fill (11.5,0.5+\i) circle (0.25);
\foreach \i in {0,4,6,7} \draw (11.5,0.5+\i) circle (0.25);
\foreach \i in {0,4,6,7,8} \fill (12.5,0.5+\i) circle (0.25);
\foreach \i in {3,5} \draw (12.5,0.5+\i) circle (0.25);
\foreach \i in {0,4,6,8} \fill (13.5,0.5+\i) circle (0.25);
\foreach \i in {2,3,5,7,9} \draw (13.5,0.5+\i) circle (0.25);
\foreach \i in {0,3,4,7,9} \fill (14.5,0.5+\i) circle (0.25);
\foreach \i in {1,2,5,6,8} \draw (14.5,0.5+\i) circle (0.25);
\draw (14.5,9.5)--(12.5,7.5);
\draw (13.5,8.5)--(11.5,8.5);
\draw (12.5,8.5)--(12.5,6.5);
\draw (14.5,7.5)--(13.5,6.5)--(12.5,6.5)--(11.5,5.5);
\draw (14.5,4.5)--(12.5,4.5)--(11.5,3.5)--(11.5,1.5);
\draw (14.5,4.5)--(14.5,3.5);
\draw (14.5,0.5)--(12.5,0.5);
\draw [decorate,decoration={brace,amplitude=10pt}] (10,0) -- (0,0) node [midway,yshift= -0.5cm] {$\Lambda_k$};
\draw [decorate,decoration={brace,amplitude=10pt}] (20,0) -- (10,0) node [midway,yshift= -0.5cm] {$\Lambda'_k$};
\draw [decorate,decoration={brace,amplitude=10pt}] (0,10) -- (15,10) node [midway,yshift=0.5cm] {$V_k$};
\fill[opacity=0.25] (10,0)--(15,0)--(15,10)--(10,10);
\end{tikzpicture}
\caption{\label{fig:bisection:2d}Illustration of the bisection in Section~\ref{sec:bisection:2d}. The rectangle $\Lambda_{k+1}$ is partitioned into $\Lambda_k\sqcup\Lambda'_k$. The shaded rectangle $V_k'$ has width $\delta_k$. The full dots in it represent occupied sites in $\Gamma_k$, while $\partial\Gamma_k$ consists of the empty sites. In the figure the event $\cX_k$ occurs, since the paths do not reach the leftmost column of $V'_k$.}
\end{figure}
For any $k\ge 0$, consider the rectangles
\begin{align*}
\Lambda_{2k}&{}=R\left(2^k,2^k\right)&\Lambda_{2k+1}&{}=R\left(2^{k+1},2^k\right)\end{align*}
and $\Lambda'_k=\Lambda_{k+1}\setminus\Lambda_k$, which is a translate of $\Lambda_k$. Note that the rectangles $(\Lambda_k)_{k\ge 0}$ are nested in such a way that each is obtained by stretching the previous rectangle by a factor of two either horizontally or vertically (see Figure~\ref{fig:bisection:2d}). These rectangles will play the role of the intervals $\Lambda_k$ in Section~\ref{subsubsec:bisection:1d}. As in one dimension, for any $k\ge 0$, we set $\delta_k=\lfloor 2^{k(1-\delta)}\rfloor$ with some fixed $\delta>0$ small enough. Further define
\begin{align*}
V'_{2k}&{}=\left(2^k,0\right)+R\left(\delta_k,2^k\right)&V'_{2k+1}&{}=\left(0,2^k\right)+R\left(2^{k+1},\delta_k\right)\\
\partial_+ V'_{2k}&{}=\left(2^{k}+\delta_k-1,0\right)+R\left(1,2^k\right)&\partial_+ V'_{2k+1}&{}=\left(0,2^k+\delta_k-1\right)+R\left(2^{k+1},1\right)\\
\partial_- V'_{2k}&{}=\left(2^{k},0\right)+R\left(1,2^k\right)&\partial_- V'_{2k+1}&{}=\left(0,2^k\right)+R\left(2^{k+1},1\right)\end{align*}
and $V_k=\Lambda_k\cup V'_k$ (see Figure~\ref{fig:bisection:2d}). We next need to choose the facilitating event $\cX$ in Lemma~\ref{lem:two-block}, so that its probability gets close to 1 as the scale $k$ increases. Given $k\ge 0$, let
\begin{align}
\label{eq:def:Gammak}
\cX_k&{}=\left\{\omega\in\Omega_{V'_k}:\Gamma_k\cap \partial_- V_k'=\varnothing\right\},& \Gamma_k&{}=\left\{\bx\in V'_k:\bx\to[V'_k]\partial_+ V_k'\right\}.\end{align}
That is, $\cX_k$ is the event that no site of $\partial_-V'_k$ is connected to $\partial_+V'_k$ via an oriented path of occupied sites in $V'_k$ (see Figure~\ref{fig:bisection:2d}).
With this definition it is a classical percolation result of Menshikov and Aizenman--Barsky \cites{Menshikov86,Aizenman87} that for some $c>0$ depending only on $q>\qc$,
\begin{equation}
\label{eq:exp:decay:GOSP}
\varepsilon_k=1-\mu(\cX_k)\le \exp\left(-c\delta_k\right)
\end{equation}
(see \cite{Duminil-Copin16} for a simple proof, \cite{Grimmett99} for more background on percolation and \cite{Hartarsky22GOSP} for more background on oriented percolation). Indeed, taking Observation~\ref{obs:GOSP} into account, $1-\qc$ is the critical parameter of the oriented percolation model on $\bbZ^2$ with edge set $E$. Plugging \eqref{eq:exp:decay:GOSP} into \eqref{eq:East:product}, we easily obtain that the product there is finite.
Our next task is to bound the second term in \eqref{eq:East:bisection:1} as
\begin{equation}
\label{eq:East:2d:dirichlet}\mu_{V_k}\left(\1_{\cX_k}\var_{\Lambda_k}(f)\right)\le \trel^{V_k}\sum_{\bx\in V_k}\mu_{V_{k}}\left(c_\bx^{\bzero_{\bbZ^2\setminus \Lambda_{k+1}}}\var_\bx(f)\right).\end{equation}
Once \eqref{eq:East:2d:dirichlet} is established, the proof is concluded by the same final adjustments as in Section~\ref{subsubsec:bisection:1d}. In order to prove \eqref{eq:East:2d:dirichlet}, we proceed similarly to the one-dimensional case, but we need to pay attention to the definition of the `rightmost empty site'. For $\gamma\subset V'_k$, let
\begin{equation}
\label{eq:def:Gammakbar}\overline\gamma=\partial_+V'_k\cup\gamma\cup\left(V'_k\cap\left(\gamma-U'\right)\right).\end{equation}
Then $\partial\Gamma_k=\overline\Gamma_k\setminus\Gamma_k$ (see Figure~\ref{fig:bisection:2d}) will play the role of the rightmost empty site $\xi$ in \eqref{eq:def:xi}. By \eqref{eq:def:Gammak} and \eqref{eq:def:Gammakbar}, $\Gamma_k$ (and $\overline \Gamma_k$) is measurable with respect to $\omega_{\overline\Gamma_k}$ and, if $\cX_k$ occurs, then $\omega_{\partial\Gamma_k}=\bzero$. Finally, observe that, by construction, $\partial\Gamma_k$ is a cut-set separating $\partial_+V'_k$ from $\Lambda_k$. Therefore, recalling \eqref{eq:convexity:var}, this gives
\begin{align*}
\mu_{V_{k}}\left(\1_{\cX_k}\var_{\Lambda_k}(f)\right)&{}\le \sum_{\gamma\subset V'_k\setminus\partial_-V'_k}\mu_{V_{k}}\left(\1_{\Gamma_k=\gamma}\var_{\Lambda_k}(f)\right)\\
&{}\le \sum_{\gamma\subset V'_k\setminus\partial_-V'_k}\mu_{V_{k}}\left(\1_{\Gamma_k=\gamma}\1_{\omega_{\partial\gamma}=\bzero}\var_{V_k\setminus\overline\gamma}(f)\right)\\
&{}\le \sum_{\gamma\subset V'_k\setminus\partial_-V'_k}\trel^{V_k\setminus\overline\gamma}\sum_{\bx\in V_k\setminus\overline\gamma}\mu_{V_{k}}\left(\1_{\Gamma_k=\gamma}c_\bx^{\bzero_{\bbZ^2\setminus \Lambda_{k+1}}}\var_\bx(f)\right),
\end{align*}
where we used $\1_{\omega_{\partial\gamma}=\bzero}c_\bx(\omega)\le c_\bx(\omega)$ for all $\bx\in V_k\setminus\overline{\gamma}$ and $\omega\in\Omega$. Recalling the monotonicity property \eqref{eq:monotonicity}, we recover \eqref{eq:East:2d:dirichlet} as desired.
Bisection is good for proving $\trel<\infty$. However, if we follow the proof of Theorem~\ref{th:exponential} and take into account Theorem~\ref{th:BP}, we obtain the following very poor bound.
\begin{corollary}[{Basic FA-2f upper bound}]
\label{cor:FA2f:bound:trivial}
For FA-2f in $d$ dimensions there exists $C>0$ such that for all $q>0$,
\[\trel\le \exp^{\circ 2}\left(Cq^{-1/(d-1)}\right).\]
\end{corollary}
In \cite{Cancrini08} this was improved to $\exp(Cq^{-5})$ using less brutal canonical paths than the ones suggested in the proof of Theorem~\ref{th:exponential}.
However, this is still very far from the truth. The next sections examine other techniques for obtaining more accurate bounds.
\section{Long range renormalisation}
\label{sec:LRP}
In order to improve the bound on the relaxation time of FA-2f provided by Corollary~\ref{cor:FA2f:bound:trivial}, we need a new technique by Martinelli and Toninelli \cite{Martinelli19}. The key idea is to transform our KCM, which has short range but unlikely constraints, into a model with long range but very likely constraints. These constraints require the occurrence of a certain \emph{droplet} far away and of a \emph{good environment}, on which the droplet can move, connecting the droplet location to the origin. We start by proving a \emph{long range Poincar\'e inequality}, which is the key building block of this technique, in Section~\ref{sec:constrainedpoincare}. In Section \ref{sec:FA2fbetter} we discuss how to combine it with renormalisation to obtain results about our model of interest.
\subsection{A long range constrained Poincar\'e inequality}
\label{sec:constrainedpoincare}
Fix a finite probability space $(\bbX_0,\nu_0)$ and let $(\bbX, \nu)=(\prod_{\bx\in\bbZ^d}\bbX_0,\bigotimes_{\bx\in\bbZ^d}\nu_0)$ be the corresponding product space. For each $\bx\in \bbZ^d$ let $\bbN^d_{\bx}:=\bx+(\{0,1,\dots\}^d\setminus\{\bzero\})$. Then fix a finite set $\Delta_0\subset\bbN^d_0$. Let $\cA_0\subset\bbX$ be an event depending only on the restriction $\eta_{\Delta_0}\in\bbX_{\Delta_0}$ of the state $\eta\in\bbX$ to $\Delta_0$. Define $\cA_\bx$ by translating $\cA_0$ by $\bx\in\bbZ^d$. For $\bx\in\bbZ^d$, set
\begin{align*}
r_\bx&{}=\1_{\cA_\bx},&\epsilon&{}=1-\nu_0(\cA_0).
\end{align*}
The following result gives a taste of the more general \cite{Martinelli19}*{Theorem 2}.
\begin{proposition}[Long range constrained Poincar\'e inequality]
\label{teo:constrainedpoincare}
Assume that
\begin{equation}\label{cond}
(1+|\Delta_0|)\epsilon<1/4.\end{equation}
Then for any local function $f:\bbX \to \bbR$ it holds that
\begin{equation}
\label{CP}
\var(f)\le 4 \sum_{\bx\in\bbZ^d} \nu\left(r_\bx \var_\bx(f)\right).
\end{equation}
\end{proposition}
\begin{proof}[Sketch]
The starting point to prove \eqref{CP} is the inequality \cite{Martinelli19}*{Lemma 2.5}
\begin{equation}
\label{AB}
\var(f)\leq\sum_{\bx\in\bbZ^d} \nu\left(\var_\bx \left(\nu_{\bbN^d_{\bx}}(f) \right) \right),
\end{equation}
which can be obtained using the law of total variance and the product form of $\nu$. Then one can rewrite a generic term in the r.h.s.\ of \eqref{AB} using the decomposition
\[ \nu_{\bbN^d_{\bx}}(f)=\nu_{\bbN^d_{\bx}}\left(r_\bx f \right) + \nu_{\bbN^d_{\bx}}\left(\left(1-r_\bx \right)f\right), \]
and repeatedly use the Cauchy--Schwarz inequality and convexity of the variance to obtain the final result \eqref{CP}.
\end{proof}
\begin{remark}
\label{rem:pos_tree}
Proposition \ref{teo:constrainedpoincare} also plays a key role for the study of KCM on regular trees. It is actually in this context that it was introduced in \cite{Martinelli13} (also see \cite{Cancrini15} for a refinement).
\end{remark}
In \cite{Martinelli19}, Proposition~\ref{teo:constrainedpoincare} is proved in a more general setting allowing the constraints $r_\bx$ to be the product of several indicator functions and at the same time transforming \eqref{cond} into a more flexible condition involving the supports and the probabilities of these events. Using this, Proposition~\ref{3.4} below is deduced in \cite{Martinelli19}*{Proposition 3.4}.
Before stating it, we require some more notation.
\begin{definition}[Good path]
\label{def:good:path}
We call a sequence of sites $\gamma=(\bx_i)_{i=0}^k$ with $\bx_i-\bx_{i-1}\in\{\be_j:j\in\{1,\dots,d\}\}$ for any $i\in\{1,\dots,k\}$ an \emph{oriented path of length $k+1$}. Fix a \emph{good} and a \emph{super good} event $\cG,\cS\cG\subset\bbX_0$. Given a configuration $\eta\in\bbX$, we say that the oriented path $\gamma$ is \emph{good}, if $\eta_{\bx_i}\in\cG$ for all $i\in\{0,\dots,k\}$ and $\eta_{\bx_k}\in\cS\cG$. For any $\bx\in\bbZ^d$ and $K\ge 1$, we denote by $\Gamma_\bx^{K}$ the event that for every $i\in\{1,\dots,d\}$ there exists a good oriented path of length at most $K$ starting at $\bx+\be_i$.
\end{definition}
\begin{proposition}[Good path constraint dynamics]
\label{3.4}
There exists $\delta>0$ such that for all $p_1,p_2\in[0,1]$ with $\max(p_2,(1-p_1)(\log(1/p_2))^2)\le \delta$, the following holds. If $\nu_0(\cG)=p_1$ and $\nu_0(\cS\cG)=p_2$, then for any local function $f:\bbX\to\bbR$,
\begin{equation}\label{eq:3.4}
\var (f)\leq 4\sum_{\bx\in\bbZ^d}\nu\left(\1_{\Gamma_\bx^{p_2^{-2}}}\var_{\bx}(f)\right).
\end{equation}
\end{proposition}
\subsection{Combining renormalisation and the long range Poincar\'e inequality}
\label{sec:FA2fbetter}
We next explain how to apply Proposition~\ref{3.4} to FA-2f. As suggested by the fact that we worked with a general state space in Section~\ref{sec:constrainedpoincare}, we intend to use renormalisation. Once again, the argument works in any dimension, but we present it in two dimensions in order to simplify notation. For any $\bx\in\bbZ^2$, the (renormalised) site $\bx$ will correspond to the rectangle $R_\bx=\ell\cdot\bx+R(\ell,\ell)$ (recall \eqref{eq:def:rectangle}), where $\ell=\lceil3\log(1/q)/q\rceil$. We consider the state space $\bbX_0=\Omega_{R_0}$ and the corresponding states $\eta_\bx=\omega_{R_\bx}$ for $\bx\in\bbZ^2$.
It remains to choose the good and super good events in Definition~\ref{def:good:path}. A simple choice is to define $\omega\in\cS\cG\subset\Omega_{R_0}$, if
\begin{equation}
\label{eq:def:SG:naive}
\omega_{R(\ell,\ell)\setminus ((1,1)+R(\ell-2,\ell-2))}=\bzero.\end{equation}
That is, a renormalised site is super good, if the perimeter of the corresponding rectangle is empty. The good event is defined by $\omega\in\cG\subset\Omega_{R_0}$, if
\begin{equation}
\label{eq:def:G}\forall i\in\{0,\dots,\ell-1\},\omega_{(i,0)+R(1,\ell)}\neq\bone,\omega_{(0,i)+R(\ell,1)}\neq\bone.\end{equation}
That is, a renormalised site is good, if each of its rows and columns contains at least one empty site. The idea behind \eqref{eq:def:SG:naive} and \eqref{eq:def:G} is that the following statements hold (recall the BP closure $[\cdot]$ from \eqref{eq:def:closure}):
\begin{itemize}
\item $\cS\cG\subset\{\omega\in\Omega_{R_0}:[\omega]=\bzero_{R_0}\}$;
\item for any $\bx\in\{\be_1,-\be_1,\be_2,-\be_2\}$, if $\omega_{R_0}\in\cS\cG$ and $\omega_{R_{\bx}}\in\cG$, then $[\omega_{R_0\cup R_\bx}]=\bzero$;
\item if $\omega_{R_0}\in\cS\cG$, $\omega_{R_{(-1,0)}}\in\cG$ and $\omega_{R_{(0,-1)}}\in\cG$, then $[\omega_{[-\ell,\ell)^2}]=\bzero$.
\end{itemize}
The reader is invited to verify these deterministic BP claims (see also the sketch below). They will allow us to transport a super good renormalised site along a good path. We now have all the ingredients necessary to deal with FA-2f, following \cite{Martinelli19} to prove Theorem~\ref{th:FA2f:MT} below, which greatly improves on the bound from Corollary~\ref{cor:FA2f:bound:trivial}.
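Before stating the theorem, here is a small computational sketch of the first two deterministic BP claims above (the third is checked in the same way). The helper names, the small value of $\ell$ and the sampling choices are ours and purely illustrative; the closure is computed with occupied boundary conditions outside the considered region.
\begin{verbatim}
import numpy as np

def two_neighbour_closure(occ):
    """2-neighbour BP closure of a finite configuration (True = occupied);
    sites outside the array are treated as occupied."""
    occ = occ.copy()
    while True:
        padded = np.ones((occ.shape[0] + 2, occ.shape[1] + 2), dtype=bool)
        padded[1:-1, 1:-1] = occ
        empty_nbrs = ((~padded[:-2, 1:-1]).astype(int) + ~padded[2:, 1:-1]
                      + ~padded[1:-1, :-2] + ~padded[1:-1, 2:])
        new = occ & (empty_nbrs < 2)   # a site empties once it has 2 empty neighbours
        if new.sum() == occ.sum():
            return new
        occ = new

def sample_super_good(l, q, rng):
    occ = rng.random((l, l)) >= q      # Bernoulli interior...
    occ[0, :] = occ[-1, :] = occ[:, 0] = occ[:, -1] = False   # ...empty perimeter
    return occ

def sample_good(l, q, rng):
    while True:                        # rejection sampling of the good event
        occ = rng.random((l, l)) >= q
        if (~occ).any(axis=0).all() and (~occ).any(axis=1).all():
            return occ

rng = np.random.default_rng(0)
l, q = 8, 0.4                          # small illustrative values
for _ in range(50):
    sg, g = sample_super_good(l, q, rng), sample_good(l, q, rng)
    assert not two_neighbour_closure(sg).any()                  # first claim
    assert not two_neighbour_closure(np.hstack([sg, g])).any()  # second claim
\end{verbatim}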
\begin{theorem}[{FA-2f upper bound up to logarithmic corrections}]
\label{th:FA2f:MT}
For FA-2f in $d\ge2$ dimensions there exists a constant $C>0$ such that for any $q$ small enough,
\[\trel\le \exp\left(\frac{C(\log(1/q))^2}{q^{1/(d-1)}}\right).\]
\end{theorem}
\begin{proof}[Sketch]
The reasoning proceeds in three steps. We first apply Proposition~\ref{3.4} as described above. We then need to bound the generic term in the right hand side of \eqref{eq:3.4}, say for $\bx=0$. The occurrence of the event $\Gamma_0^{p_2^{-2}}$ guarantees good paths of length at most $p_2^{-2}$. We can view a good path as a one-dimensional general KCM with FA-1f constraint, as covered by Proposition~\ref{prop:FA1f:East:generalised}. Roughly speaking, this enables us to transform the generic term into a sum of terms of the form
\begin{equation}
\label{eq:long-range:short-range}\mu\left(\1_{\omega_{R_{\by}}\in\cS\cG}\var_{R_{\bx}}(f|\cG)\right)
\end{equation}
for $\bx$ and $\by$ neighbours along the good path. The prefactor incurred in this transformation is at most $p_2^{-C}$ for some constant $C>0$. Note that \eqref{eq:long-range:short-range} only features a short range constraint. The final step is to bound \eqref{eq:long-range:short-range} by the Dirichlet form of FA-2f in $R_{\bx}\cup R_\by$ with $\bone$ boundary condition. This can be done using canonical paths. If done brutally, using the legal paths of Section~\ref{sec:legal:paths}, this would lead to a prefactor of order $q^{-2\ell^2}$. However, proceeding a little more carefully (only creating one or two empty columns/rows and moving them rather than emptying the entire rectangles), one can easily reduce this cost to $q^{-2\ell}$. Putting everything together and computing $p_2\ge q^{4\ell}$, we obtain the desired result (see \cites{Martinelli19,Martinelli19a} for more details).
\end{proof}
The three steps in the last proof can be understood as follows. The first step (applying Proposition~\ref{3.4}) allows us to reduce the study of the infinite volume KCM to one in a large but finite volume ($1/p_2^2$, which is exponential in $1/q$) containing sufficiently many empty sites to efficiently empty the origin. The second step goes from the `global' scale $p_2^{-2}$ to the much smaller `mesoscopic' scale $\ell$. Finally, the third step goes from the mesoscopic to the scale of the lattice. In the proof of Theorem~\ref{th:FA2f:MT}, we used a generalised FA-1f dynamics for the second and third steps, as well as a very simple choice of $\cS\cG$ event. In order to go further, we will need to reconsider all these choices.
\section{Sharp threshold}
\label{sec:sharp}
Our next goal is a much stronger result than Theorem~\ref{th:FA2f:MT}, providing a sharp threshold for FA-2f. This is the most precise result for KCM of this type and is due to Hartarsky, Martinelli and Toninelli \cite{Hartarsky23FA}*{Theorem~1.3} (also see that reference for a quantitative bound on the second order correction in two dimensions).
\begin{theorem}[Sharp threshold for FA-2f]
\label{th:FA2f:sharp:threshold}
For FA-2f in $d\ge 2$ dimensions,
\begin{equation}
\label{eq:FA2f:sharp:threshold}q\log \tau_0\to d\cdot\lambda(d,2)\end{equation}
in $\bbP_{\mu_q}$-probability, as $q\to 0$, where $\lambda(d,2)$ is the constant from Theorem~\ref{th:BP} for $2$-neighbour BP. Furthermore, $q\log\bbE_{\mu_q}(\tau_0)\to d\cdot\lambda(d,2)$.
\end{theorem}
In other words, for FA-2f we have $\tau_0=(\tbp)^{d+o(1)}$ as $q\to0$.
Before moving on to the proof, let us mention that it would be good to improve the statement above to a relaxation time result (the lower bound follows from \eqref{eq:tau0} and \eqref{eq:FA2f:sharp:threshold}).
\begin{conjecture}[{FA-2f relaxation time}]
For FA-2f in $d\ge 2$, as $q\to0$, we have
\[q\log \trel\to d\cdot\lambda(d,2).\]
\end{conjecture}
\subsection{Lower bound: combinatorial bottleneck}
\label{sec:FA2f:lower}
We start by discussing the lower bound of \eqref{eq:FA2f:sharp:threshold}. It is a relatively simple consequence of known results in BP and generalises well to other models. Recall that from \eqref{eq:t0t0bp} (also recall \eqref{eq:tau0}) and Theorem~\ref{th:BP}, we have $\liminf_{q\to0}q\log\tau_0\ge \lambda(d,2)$, but we would like to improve this bound. The heuristic behind the improvement is the following combinatorial bottleneck (see \cite{Hartarsky23FA}*{Section~2} for more details).
\begin{proof}[Sketch of the lower bound of Theorem~\ref{th:FA2f:sharp:threshold}]
According to Theorem~\ref{th:BP} and Corollary~\ref{cor:legal:path}, with high probability, the origin cannot be emptied using only empty sites within distance, say, $1/q^3$ from it in the initial configuration. We may therefore consider the first time $\tau$ at which the origin can be emptied using only empty sites at distance at most $1/q^3$ from it. That is, setting $\ell=\lceil1/q^3\rceil$ and $\Lambda=[-\ell,\ell]^d$, we define
\[\tau=\inf\left\{t\ge 0:\left[\omega_\Lambda(t)\right]^{\bone_{\bbZ^d\setminus\Lambda}}_0=0\right\},\]
where $\omega_\Lambda(t)$ is the restriction of the stationary FA-2f process to $\Lambda$ at time $t$ and we recall \eqref{eq:def:closure} and \eqref{eq:def:BP:finite}. The crucial observation is that in $\omega_\Lambda(\tau)$, there is a site $\bx$ at the boundary of $\Lambda$ such that
\begin{equation}
\label{eq:FA2f:bottleneck}\left[\omega_\Lambda(\tau)\right]^{\bone_{\bbZ^d\setminus\Lambda}}_0=1\neq\left[\omega_\Lambda^\bx(\tau)\right]^{\bone_{\bbZ^d\setminus\Lambda}}_0.\end{equation}
That is, at $\tau$ a site at the boundary of $\Lambda$ becomes empty and this is essential (pivotal in the percolation jargon) to being able to empty the origin inside the box $\Lambda$. Since we are working with the stationary process, we may perform a union bound over the attempted updates in $\Lambda$ as in Section~\ref{subsubsec:bottleneck:deduction}. Thus, it suffices to show that for any $\varepsilon>0$ and $q$ small enough,
\begin{equation}
\label{eq:FA2f:bottleneck:bound}
\mu_q(\cA)\le \exp\left(\frac{-d\cdot\lambda(d,2)+\varepsilon}{q}\right),
\end{equation}
where $\cA$ is the union over $\bx\in\partial\Lambda$ of the event in \eqref{eq:FA2f:bottleneck}. Note that \eqref{eq:FA2f:bottleneck:bound} only makes reference to 2-neighbour BP, so we have successfully reduced the problem for FA-2f to its BP counterpart. The bound \eqref{eq:FA2f:bottleneck:bound} is indeed known in BP and is, in fact, the main step in the proof of Theorem~\ref{th:BP} for this update family.
\end{proof}
In rough terms, we harnessed the fact that, in order for the origin to be `locally emptiable', a certain `critical droplet' (similar to the $\cS\cG$ event in Section~\ref{sec:FA2fbetter}) needs to be present at the origin at some point in time. We then plugged this combinatorial bottleneck into the standard bound of Section~\ref{subsubsec:bottleneck:deduction}.
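To make the bottleneck event concrete, the following sketch evaluates the indicator of $\cA$ on a small box: the centre must be occupied in the closure, yet become emptiable after flipping a single boundary site. It re-uses the \texttt{two\_neighbour\_closure} helper from the sketch in Section~\ref{sec:FA2fbetter}; the box size, vacancy density and names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def bottleneck_event(occ):
    """Indicator of the event A: the centre of the box is occupied in the
    2-neighbour BP closure of occ (occupied boundary conditions), but becomes
    empty in the closure once a single boundary site is flipped."""
    n = occ.shape[0]
    c = n // 2
    if not two_neighbour_closure(occ)[c, c]:
        return False                       # the centre is already emptiable
    boundary = [(i, j) for i in range(n) for j in range(n)
                if i in (0, n - 1) or j in (0, n - 1)]
    for i, j in boundary:
        flipped = occ.copy()
        flipped[i, j] = not flipped[i, j]
        if not two_neighbour_closure(flipped)[c, c]:
            return True                    # this boundary site is pivotal
    return False

# toy Monte Carlo estimate of mu_q(A); the empirical frequency is tiny,
# in line with the exponential bound above
rng = np.random.default_rng(2)
q, n, trials = 0.15, 11, 2000
print(sum(bottleneck_event(rng.random((n, n)) >= q) for _ in range(trials)) / trials)
\end{verbatim}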
\subsection{Coalescing and branching simple symmetric exclusion process} \label{subsec:CBSEP} We require one more ingredient as preparation for the upper bound in Theorem~\ref{th:FA2f:sharp:threshold}. It comes in the form of a model strongly related to the generalised FA-1f of Section~\ref{sec:general:KCM}, but not belonging to the class of KCM we defined in Section~\ref{subsec:markov}. For the sake of simplicity, we introduce only the binary (non-generalised) model and refer to \cite{Hartarsky22CBSEP} for the generalised version. Let $G=(V,E)$ be the box $V=\{1,\dots,\ell\}^d$ with its usual graph structure, where $\ell$ is a positive integer. We consider the state space $\Omega=\{0,1\}^V$ as usual. We define $\Omega_+=\Omega\setminus \{\bone\}$ to be the event that there exists at least one empty site. Similarly, for any edge $e=\{x,y\}\in E$ we refer to $(\omega_x,\omega_y)\in \{0,1\}^{\{x,y\}}$ as the state of $e$ in $\omega$ and write $E_e=\{\omega\in\Omega:\omega_x\omega_y=0\}$ for the event that $e$ is not occupied (at least one of its vertices is empty). Given $p\in (0,1)$, let $\pi=\bigotimes_{x\in V}\pi_x$ be the product Bernoulli measure in which each vertex is empty with probability $p$ and let $\mu(\cdot):=\pi(\cdot| \Omega_+)$. Given an edge $e=\{x,y\}$, we write $\pi_e:=\pi_x\otimes\pi_y$. \emph{CBSEP} is a continuous time Markov chain on $\Omega_+$ for which the state of any edge $e\in E$ such that $E_e$ occurs is resampled with rate one w.r.t.\ $\pi_e(\cdot| E_e)$. Thus, any edge containing exactly one empty site moves the empty site to the other endpoint of the edge (the \emph{SEP move}) with rate $(1-p)/(2-p)$ and creates an extra empty site at the occupied endpoint (the \emph{branching move}) with rate $p/(2-p)$. Moreover, any edge containing two empty sites occupies one of the two chosen uniformly (the \emph{coalescing move}) with rate $2(1-p)/(2-p)$. The chain is readily seen to be reversible with respect to $\mu$ and ergodic on $\Omega_+$, because it can reach the configuration with all sites empty. If $c(\omega,\omega')$ denotes the jump rate from $\omega$ to $\omega'$, the Dirichlet form $\cD^{\mathrm{CBSEP}}(f)$ of the chain has the expression \begin{align} \label{eq:def:cD:CBSEP} \cD^{\mathrm{CBSEP}}(f)&{}=\frac 12 \sum_{\omega,\omega'\in\Omega_+}\mu(\omega)c(\omega,\omega')\left(f(\omega')-f(\omega) \right)^2\\ &{}=\sum_{e\in E}\mu(\1_{E_e}\var_{\pi_e}(f| E_e)).\nonumber\end{align} Notice that the branching and coalescing moves of CBSEP are exactly the moves allowed in FA-1f (recall Section~\ref{sec:FA1f}). Moreover, the SEP move can be reconstructed by a branching and a coalescing move. This leads to a comparison between the corresponding Dirichlet forms (see e.g.\ \cite{Levin09}*{Section 13.4}). Although the two models are clearly closely related, we stress that CBSEP has many advantages over \mbox{FA-$1$f}, making its study simpler. Most notably, {CBSEP} is \emph{attractive} in the sense of interacting particle systems, that is the natural stochastic order is preserved by the dynamics (see \cite{Liggett05}*{Section III.2} for background). Furthermore, it is natural to embed in {CBSEP} a continuous time random walk $(W_t)_{t\ge 0}$ on $G$ such that {CBSEP} has an empty site at $W_t$ for all $t\ge 0$. This feature is challenging to reproduce for \mbox{FA-$1$f} \cite{Blondel13}. Finally, it is possible to move an empty site in CBSEP without creating more empty sites, contrary to what is the case in FA-1f (recall Section~\ref{sec:FA1f}). 
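A minimal discrete-time sketch of these dynamics reads as follows (illustrative only and in our own notation; a faithful continuous-time version would attach independent rate-one Poisson clocks to the edges). Note that the update never removes the last empty site, so $\Omega_+$ is indeed preserved.
\begin{verbatim}
import numpy as np

def cbsep_step(omega, p, rng):
    """One attempted CBSEP update on a two-dimensional configuration omega
    (0 = empty, 1 = occupied): pick a uniformly random edge of the box; if it
    contains an empty site, resample its endpoints from pi_e conditioned on E_e."""
    n = omega.shape[0]
    if rng.random() < 0.5:                       # horizontal edge
        i, j = rng.integers(n), rng.integers(n - 1)
        x, y = (i, j), (i, j + 1)
    else:                                        # vertical edge
        i, j = rng.integers(n - 1), rng.integers(n)
        x, y = (i, j), (i + 1, j)
    if omega[x] == 1 and omega[y] == 1:
        return                                   # E_e fails: the edge is blocked
    while True:                                  # sample from pi_e(.|E_e) by rejection
        a, b = rng.random() < p, rng.random() < p          # True = empty
        if a or b:
            break
    omega[x], omega[y] = int(not a), int(not b)  # SEP, branching or coalescing move

rng = np.random.default_rng(3)
omega = np.ones((20, 20), dtype=int)
omega[10, 10] = 0                                # start in Omega_+
for _ in range(10000):
    cbsep_step(omega, p=0.1, rng=rng)
assert (omega == 0).any()                        # the last empty site is never removed
\end{verbatim}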
The main result on CBSEP we will need is the following \cite{Hartarsky23FA}*{Proposition 5.2}.\footnote{We record a mistake in \cite{Hartarsky23FA} leading to the weaker bound stated here. Indeed, in the last but one sentence of the proof of \cite{Hartarsky23FA}*{Proposition~5.2} in Appendix~B, the bounds on the cover time of the random walk and logarithmic Sobolev constant of CBSEP are not correctly imported from \cite{Levin09}*{Chapter 11} and \cite{Hartarsky22CBSEP}*{Corollary 3.2}. This mistake has no impact on the rest of the paper.} \begin{proposition}[CBSEP relaxation time] \label{prop:CBSEP} Assume that $d\ge2$ and consider a sequence of box sizes $\ell_n$ and parameters $p_n$ such that $p_n\ell_n^d\to\infty$ and $p_n\to0$. Then, for some $C>0$ and all $n$ large enough, \[\trel^{\mathrm{CBSEP}}\le \frac{C\log^{3}(1/p_n)}{p_n}.\] \end{proposition} \begin{proof}[Sketch] The first step of the proof is to renormalise the model by considering boxes of volume approximately $1/p_n$. This brings us to treating CBSEP with parameter $p$ of order 1 on arbitrary volume and CBSEP on a box of volume approximately $1/p_n$. The former relaxation time is uniformly bounded (see \cite{Hartarsky23FA}*{Lemma B.1} and \cite{Hartarsky22CBSEP}*{Theorem~1} for details), while the latter is bounded by $C\log^2(1/p_n)/p_n$ thanks to \cite{Hartarsky22CBSEP}*{Corollary 3.1}. The proof of the first upper bound follows from the fact that $\trel^{\mathrm{FA-1f}}$ is bounded for $q$ bounded away from 0 (recall Theorem~\ref{th:exponential}) together with a comparison between the Dirichlet forms of CBSEP and FA-1f. The second upper bound may be proved, using canonical paths along the lines of Theorem~\ref{th:FA1f} (also recall Conjecture~\ref{conj:FA1f}, whose upper bound is known from \cite{Cancrini08} for $d=2$, and its analogue for $d\ge3$ from \cite{Shapira20}), see \cite{Hartarsky22CBSEP}*{Proposition~4.6}. While this concludes the sketch for CBSEP, its generalised version is more subtle to analyse. Indeed, in \cite{Hartarsky22CBSEP}*{Theorem~2} it is shown that one can bound the mixing time (and therefore the relaxation time) of generalised CBSEP, using the mixing time of CBSEP and the cover time of the continuous time simple random walk on the box of interest. The cover time is classically bounded by $\log^2(1/p_n)/p_n$ \cite{Levin09}*{Chapter 11}, but more work is needed to bound the mixing time of CBSEP, as opposed to its relaxation time. The approach of \cite{Hartarsky22CBSEP} is to prove a logarithmic Sobolev inequality, which classically bounds the mixing time. This is achieved, using deep input from \cites{Lee98,Alon20}. An alternative approach to bounding the mixing time of FA-1f can be found in \cites{Pillai17,Pillai19}. \end{proof} \subsection{Upper bound} \label{sec:FA2f:upper} We are now ready to discuss the proof of the upper bound of Theorem~\ref{th:FA2f:sharp:threshold} in dimension $d=2$, following \cite{Hartarsky23FA}. On the high level, the proof resembles Section~\ref{sec:FA2fbetter}. The first two steps bringing us to a mesoscopic scale are quite general, while reaching the mesoscopic scale from the microscopic one is much more delicate and specific to FA-2f. \subsubsection{Reduction to the mesoscopic scale via CBSEP} \label{subsec:FA2f:reduction:to:meso} Recall from Theorem~\ref{th:BP} that $\lambda(2,2)=\pi^2/18$. 
Fixing some $\varepsilon>0$ and setting \begin{equation} \label{eq:def:t*} t_*=\exp\left(\frac{\pi^2 +\varepsilon}{9q}\right)\end{equation} (note that $\pi^2/9=2\lambda(2,2)$), our goal is to prove that \begin{equation}\label{eq:FA2f:goal} \lim_{q\to 0}\bbP_\mu\left(\tau_0>t_*\right)=0. \end{equation} We next use finite speed of propagation (recall \eqref{eq:speed:propagation}) to show that it is unlikely that, before time $t_*$, the state of the origin is influenced by the configuration outside the box $B=[-2t_*,2t_*]^2$. Thus, it suffices to prove \eqref{eq:FA2f:goal} for FA-2f with boundary condition $\bone_{\bbZ^2\setminus B}$. The next step is a renormalisation to the generalised CBSEP model discussed in Section~\ref{subsec:CBSEP}. The renormalisation itself resembles the one of Section~\ref{sec:FA2fbetter}, but the choice of super good events is much more delicate. We fix a mesoscopic scale $\ell=\lceil q^{-9}\rceil$ (the exponent is fairly arbitrary) and divide $B$ into boxes $Q_\bx=\ell\bx+R(\ell,\ell)$ (recall \eqref{eq:def:rectangle}) for $\bx\in\bar B=[-\lceil 2t_*/\ell\rceil,\lceil 2t_*/\ell\rceil]^2\cap \bbZ^2$. Each renormalised site $\bx\in\bar B$ will be in one of two states---good and super good. As in Section~\ref{sec:FA2fbetter}, $\bx\in\bar B$ is \emph{good}, if $\omega_{Q_\bx}$ has at least one empty site on each row and column of $Q_\bx$. We denote the corresponding event by $\cG_\bx$. By a union bound, it holds that \[1-\mu\left(\bigcap_{\bx\in\bar B}\cG_\bx\right)\le 5t_*^2(1-q)^\ell\le 1/t_*^4,\] owing to our choice of sufficiently large $\ell$. Since this probability is so small, as in Section~\ref{subsubsec:bottleneck:deduction} (union bound over attempted updates, using stationarity), it is likely that all renormalised sites in $\bar B$ remain good at all times up to $t_*$. We will choose the super good event $\cS\cG_\bx$ in such a way that \begin{equation} \label{eq:SG:proba} \mu\left(\cS\cG_\bx\right)\ge \exp\left(-\frac{\pi^2/9+\varepsilon/2}{q}\right).\end{equation} Comparing this with \eqref{eq:def:t*}, we similarly deduce that it is likely that, at all times up to $t_*$, there is at least one super good renormalised site in $\bar B$. We therefore assume that \[\cE=\bigcap_{\bx\in\bar B}\cG_\bx\cap\bigcup_{\bx\in\bar B}\cS\cG_\bx\] occurs for all $t\le t_*$. The event $\cE$ corresponds to $\Omega_+$ in Section~\ref{subsec:CBSEP} and super good renormalised sites play the role of empty sites for CBSEP. By a standard result (see \cite{Asselah01}*{Theorem~2}) similar to \eqref{eq:F0F1} (also recall \eqref{eq:def:Trel}), but taking into account that we require not exiting $\cE$, in order to prove \eqref{eq:FA2f:goal}, it suffices to establish a Poincar\'e type inequality of the form \begin{equation} \label{eq:FA2f:goal2} \frac{\cD_{\bone_{\bbZ^2\setminus B}}(f)}{\var(f|\cE)}\ge \exp\left(-\frac{\pi^2/9+2\varepsilon/3}{q}\right) \end{equation} for any function $f:\Omega_{B}\to\bbR$ such that $f(\omega)=0$, if $\omega\not\in\cE$. We are now ready to apply the Poincar\'e inequality for generalised CBSEP provided by Proposition~\ref{prop:CBSEP}. Recalling \eqref{eq:SG:proba}, this yields \begin{equation} \label{eq:FA2f:CBSEP}\var(f|\cE)\le Cq^{-3}\exp\left(\frac{\pi^2/9+\varepsilon/2}{q}\right)\sum_{\bx\sim\by}\mu\left(\1_{\cS\cG_{\bx,\by}}\var_{Q_\bx\cup Q_\by}\left(f|\cS\cG_{\bx,\by}\right)|\cE\right),\end{equation} where $\cS\cG_{\bx,\by}=\cS\cG_\bx\cup\cS\cG_\by$ for neighbours $\bx\sim\by$ in $\bar B$.
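For the reader's convenience, let us record where the prefactor in \eqref{eq:FA2f:CBSEP} comes from (a bookkeeping step spelled out here for concreteness). Proposition~\ref{prop:CBSEP} is applied with $p_n$ playing the role of the probability that a renormalised site is super good, which, by \eqref{eq:SG:proba}, is at least $\exp(-(\pi^2/9+\varepsilon/2)/q)$. Hence
\[\frac{C\log^{3}(1/p_n)}{p_n}\le C\left(\frac{\pi^2/9+\varepsilon/2}{q}\right)^{3}\exp\left(\frac{\pi^2/9+\varepsilon/2}{q}\right)\le \frac{C'}{q^{3}}\exp\left(\frac{\pi^2/9+\varepsilon/2}{q}\right)\]
for a suitable constant $C'>0$, which is the factor multiplying the renormalised Dirichlet form in \eqref{eq:FA2f:CBSEP}, up to renaming the constant.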
Note that, if we had used FA-1f (recall Section~\ref{sec:FA1f}) instead of CBSEP at this point, the exponent above would become roughly $2\pi^2/(9q)$ instead of $\pi^2/(9q)$, which is not enough for proving Theorem~\ref{th:FA2f:sharp:threshold}. Notice that relating $\var_{Q_\bx\cup Q_\by}(f|\cS\cG_{\bx,\by})$ to the terms of the Dirichlet form of FA-2f in $Q_\bx\cup Q_\by$ (recall \eqref{eq:dirichlet}) is essentially equivalent to establishing a Poincar\'e inequality for FA-2f on this mesoscopic volume, conditioned to remain in the event $\cS\cG_{\bx,\by}$. Indeed, some simple but delicate manipulations (see \cite{Hartarsky23FA}*{Claim 5.5}) allow deducing \eqref{eq:FA2f:goal2} from \eqref{eq:FA2f:CBSEP} and the Poincar\'e inequality \begin{align} \label{eq:FA2f:meso} \var_{Q}(f|\cS\cG)&{}\le \gamma(Q)\sum_{\bz\in Q}\mu\left(c_\bz^{\bone_{\bbZ^2\setminus Q}}\var_\bz(f)|\cS\cG\right),\\\nonumber\gamma(Q)&{}\le\exp\left(\frac{\varepsilon}{7q}\right)\end{align} with $Q=Q_\bx\cup Q_\by$ and $\cS\cG=\cS\cG_{\bx,\by}$, which holds for all $f:\Omega_Q\to\bbR$. The exponents indeed add up: $\varepsilon/2+\varepsilon/7=9\varepsilon/14<2\varepsilon/3$, which leaves room to absorb the polynomial factor $Cq^{-3}$ from \eqref{eq:FA2f:CBSEP}. The next section is devoted to the proof of \eqref{eq:FA2f:meso}. \subsubsection{Mesoscopic Poincar\'e inequality: the matryoshka doll technique} \label{subsubsec:matryoshka} The inequality \eqref{eq:FA2f:meso}, which we seek to establish, should be interpreted as stating that, once a critical droplet is present at a given location, it is rather easy to completely change the state there. Note that we have not yet specified our choice of super good event beyond requiring \eqref{eq:SG:proba} to hold. Our actual choice and the proof of \eqref{eq:FA2f:meso} go hand in hand and follow a multi-scale renormalisation scheme which we refer to as the \emph{matryoshka doll technique}. For simplicity of presentation, we present the argument in a way that yields a sharp threshold only for slight variants of the model. The additional technical difficulties consist in dealing with more general boundary conditions in \eqref{eq:FA2f:meso}, requiring at least one empty site on every two consecutive lines, and implementing a microscopic FA-1f dynamics on the boundary of a droplet. The interested reader can find these features in \cite{Hartarsky23FA}. The model we will treat is the KCM corresponding to the modified two-neighbour update family obtained by removing the lowest two rules in Figure~\ref{fig:FA2f}. We refer to it as modified FA-2f and seek to prove \eqref{eq:FA2f:meso}, but with a super good event $\cS\cG$ satisfying \eqref{eq:SG:proba} with $\lambda(2,2)$ replaced by $\lambda'=\pi^2/6$, which is the correct sharp threshold constant for this model (the matching lower bound is proved exactly as described in Section~\ref{sec:FA2f:lower}). The matryoshka doll technique requires us to keep track of several features of a droplet $\Lambda$ in parallel: \begin{itemize} \item geometry, that is, the shape and size of the droplet; \item super good event $\cS\cG(\Lambda)\subset\Omega_\Lambda$, which guarantees that the droplet's state can be resampled efficiently, conditionally on the super good event; \item occurrence probability, that is, $\mu_q(\cS\cG(\Lambda))$; \item relaxation time, that is, the smallest constant $\gamma(\Lambda)\ge 1$ such that \eqref{eq:FA2f:meso} holds. \end{itemize} Given the multi-scale nature of the technique, all of the above are defined or bounded recursively for a sequence of droplets $\Lambda^{(m)}$, starting from a single empty site and reaching the droplet $Q$ from Section~\ref{subsec:FA2f:reduction:to:meso}.
\paragraph{Geometry} In the case of (modified) FA-2f, droplets are simply rectangles. Let $m_0=\lfloor1/\sqrt q\rfloor$, \begin{align} \label{eq:def:ellm:Lambdam} \ell_m&{}=\begin{cases} m+1&m< m_0,\\ \left\lfloor \frac{e^{(m-m_0)\sqrt q}}{\sqrt q}\right\rfloor &m\ge m_0, \end{cases}&\Lambda^{(m)}&{}= R\left(\ell_{\lceil m/2\rceil},\ell_{\lfloor m/2\rfloor}\right), \end{align} for any $m\ge 0$. Thus, $\Lambda^{(m)}$ form a nested sequence of rectangular droplets, each second one being a square. At each step only the width or only the length (depending on parity) is increased. The corresponding length scales increase linearly up to $1/\sqrt q$ and then exponentially, reaching our final size of interest $\ell=\lceil q^{-9}\rceil$ after approximately $\frac{17\log(1/q)}{2\sqrt q}$ steps (indeed, solving $e^{(m-m_0)\sqrt q}/\sqrt q=q^{-9}$ gives $m-m_0=\frac{17\log(1/q)}{2\sqrt q}$, while $m_0$ is of lower order). The exact choice of scales is not of fundamental importance, but some care is needed. The scale $1/\sqrt q$ is known to be relevant thanks to BP results (recall \eqref{eq:2nBP}), though it is not crucial for our purposes. \paragraph{Super good event} The definition of the super good event is guided by the idea that lower scale droplets should be allowed to move freely within a larger scale droplet. This is vital for obtaining an efficient relaxation mechanism (that is, a small Poincar\'e constant $\gamma(Q)$ in \eqref{eq:FA2f:meso}). For any $\bx\in\bbZ^2$, this intuition leads us to define $\cS\cG(\{\bx\})=\{\omega_\bx=0\}$ (recall that $\Lambda^{(0)}=\{0\}$ is just a single site) and, for $m\ge 0$, \begin{multline} \label{eq:def:SG} \cS\cG\left(\bx+\Lambda^{(2m+1)}\right)=\bigcup_{s=0}^{\ell_{m+1}-\ell_m}\Big(\cS\cG\left((s,0)+\bx+\Lambda^{(2m)}\right)\\\cap\bigcap_{\substack{t\in\{0,\dots,\ell_{m+1}-1\}\\t\not\in\{s,\dots,s+\ell_m-1\}}}\left\{\omega\in\Omega_{\bx+\Lambda^{(2m+1)}}:\omega_{(t,0)+\bx+R(1,\ell_m)}\neq\bone\right\}\Big) \end{multline} and similarly for odd scales. In words, at each scale we require that some translate of the lower scale droplet occurs and that each of the remaining rows or columns (depending on parity) contains at least one empty site. The definition is illustrated in Figure~\ref{fig:FA2f:droplets}. \begin{figure} \centering \begin{tikzpicture}[>=latex,x=0.5cm,y=0.5cm] \draw[fill] (10,3.3) rectangle (13,6.3); \draw[pattern=vertical lines] (9,3.3) rectangle (14,6.3); \draw[pattern=horizontal lines] (9,2.5) rectangle (14,3.3); \draw[pattern=horizontal lines] (9,6.3) rectangle (14,7.5); \draw[pattern=horizontal lines] (8,1) rectangle (16,2.5); \draw[pattern=horizontal lines] (8,7.5) rectangle (16,9); \draw[pattern=vertical lines] (8,2.5) rectangle (9,7.5); \draw[pattern=vertical lines] (14,2.5) rectangle (16,7.5); \draw [<->] (22,1)--(22,9) node [midway, right] {$\ell_{m}$}; \draw [<->] (17.3,3.3)--(17.3,6.3) node [midway, right]{$\ell_{m-2}$}; \draw [<->] (19.5,2.5)--(19.5,7.5) node [midway, right] {$\ell_{m-1}$}; \draw (8, 1)-- (8,9); \draw (16,1)-- (16,9); \draw (8, 1)-- (16,1); \draw (8, 9)-- (16,9); \draw (8, 7.5)-- (16,7.5); \draw (8, 2.5)-- (16,2.5); \draw (9, 2.5)-- (9,7.5); \draw (14, 2.5)-- (14,7.5); \draw (9, 3.3)-- (14,3.3); \draw (9, 6.3)-- (14,6.3); \draw (10, 3.3)-- (10,6.3); \draw (14, 3.3)-- (14,6.3); \end{tikzpicture} \caption{An example structure of the super good event $\cS\cG(\Lambda^{(2m)})$. Only the super good translates of $\Lambda^{(n)}$ for $m-n\in\{0,\dots,6\}$ are shown.
The hatched regions are required to contain at least one empty site per line in the direction of the hatching.} \label{fig:FA2f:droplets} \end{figure} \paragraph{Occurrence probability} We next turn to lower bounding $\mu(\cS\cG(\Lambda^{(m)}))$. The argument goes back to \cite{Aizenman88} and is only about BP, but we include it, as it is informative. Set $f:(0,\infty)\to(0,\infty):x\mapsto-\log(1-e^{-x})$, which is decreasing and convex. Systematically taking $s=0$ in \eqref{eq:def:SG}, we get that for any $m\in[0,17\log(1/q)/\sqrt q]$, \begin{align*}\mu\left(\cS\cG\left(\Lambda^{(m)}\right)\right)&{}\ge \prod_{n=0}^{\lceil m/2\rceil}\left(1-(1-q)^{\ell_n}\right)^{2(\ell_{n+1}-\ell_n)}\\ &{}\ge \exp\left(-2\sum_{n=0}^{\lceil m/2\rceil}(\ell_{n+1}-\ell_n)f(q\ell_n)\right)\\ &{}=\exp\left(-\frac2q\int_0^\infty f+\frac{o(1)}{q}\right)=\exp\left(-\frac{\pi^2+o(1)}{3q}\right),\end{align*} where the asymptotic notation is as $q\to 0$. The error in the approximation of the Riemann sum by an integral above is controlled in more detail in \cite{Hartarsky23FA}*{Appendix A}, while the integral can be computed using the series expansion of $\log(1-\cdot)$ and the fact that $\zeta(2)=\pi^2/6$: \[\int_0^\infty f(x)\,\mathrm{d}x=\sum_{k\ge 1}\frac1k\int_0^\infty e^{-kx}\,\mathrm{d}x=\sum_{k\ge 1}\frac1{k^2}=\frac{\pi^2}{6}.\] We have thus concluded the proof of the analogue of \eqref{eq:SG:proba} in the context of modified FA-2f. \paragraph{Relaxation time} Proving \eqref{eq:FA2f:meso} is the most challenging part of \cite{Hartarsky23FA}. The proof proceeds by establishing the recursive bound \begin{equation} \label{eq:FA2f:recursion}\gamma\left(\Lambda^{(m)}\right)\le e^{C\log^2(1/q)}\gamma\left(\Lambda^{(m-1)}\right)\end{equation} for some constant $C>0$. In turn, proving \eqref{eq:FA2f:recursion} is done via a version of the bisection technique, whose base step is unusually delicate. For concreteness, we only discuss one parity. Fix some $m$ such that $\ell_{m}\le 9\log(1/q)/q$. Set $\Lambda'=\Lambda^{(2m+1)}=R(\ell_{m+1},\ell_m)$ and $\Lambda=\Lambda^{(2m)}=R(\ell_m,\ell_m)$. The bisection will be used to decrease the width difference $\ell_{m+1}-\ell_m$ down to 1 in roughly $\log(\ell_{m+1}-\ell_m)$ steps. For translates of $R(l,\ell_m)$ with $l\in(\ell_m,\ell_{m+1})$, we extend \eqref{eq:def:SG} by replacing $\ell_{m+1}-\ell_m$ by $l-\ell_m$ and $\Lambda^{(2m+1)}$ by $R(l,\ell_m)$. We consider the rectangles $R^{(k)}=R(\ell_m+d_k,\ell_m)$ with $d_k=\lceil (2/3)^k(\ell_{m+1}-\ell_m)\rceil$ for $k\ge 0$. We then seek to prove a recursive bound of the form \begin{equation} \label{eq:FA2f:recursion:step} \gamma\left(R^{(k)}\right)\le a_k\gamma\left(R^{(k+1)}\right)\end{equation} with $a_k\le q^{-C}$ for some constant $C>0$. In order to prove \eqref{eq:FA2f:recursion:step}, we use an auxiliary dynamics somewhat similar to the two-block one of Lemma~\ref{lem:two-block}, but based on a different mechanism. Contrary to Lemma~\ref{lem:two-block}, which uses an East mechanism, the dynamics behind Lemma~\ref{lem:three-block} below is non-oriented. It has three sites and two types of constrained updates occurring at rate 1. The first update resamples the first and second sites (blocks) conditionally on some event occurring there before and after the update. The second update is similar for the second and third sites. The first two blocks together correspond to $R^{(k+1)}$, all three blocks form $R^{(k)}$ and the second and third blocks form a translate of $R^{(k+1)}$ (see Figure~\ref{fig:3block}). This leads us to the following lemma adapted from \cite{Hartarsky23FA}*{Proposition 3.5}.
\begin{lemma}[{Three-block dynamics}] \label{lem:three-block}There exists $C>0$ such that the following holds. Let $(\bbX,\pi)$ be the product of three finite probability spaces $(\bbX_i,\pi_i)_{i=1}^3$. Fix some events $\cA_1\subset\bbX_1$, $\cA_3\subset\bbX_3$, $\cB_{1,2}\subset\bbX_1\times\bbX_2$ and $\cB_{2,3}\subset\bbX_2\times\bbX_3$. Set $\cH=\cB_{1,2}\times\cA_3$ and $\cK=\cA_1\times \cB_{2,3}$. Consider the Dirichlet form \[\cD(f)=\pi\left(\1_{\cH}\var(f|\cB_{1,2}\times\{\omega_3\})+\1_{\cK}\var(f|\{\omega_1\}\times\cB_{2,3})|\cH\cup\cK\right)\] defined for any $f:\bbX\to\bbR$. Consider some event $\cC_2\subset\bbX_2$ such that $\cA_1\times\cC_2\times\cA_3\subset(\cH\cap\cK)$. Then \begin{equation} \label{eq:3block} \var_{1,2,3}(f|\cH\cup\cK)\le C\frac{\pi_{1,2}(\cB_{1,2})\pi_{2,3}(\cB_{2,3})}{\pi_1(\cA_1)(\pi_2(\cC_2))^2\pi_3(\cA_3)}\cD(f).\end{equation} \end{lemma} \begin{proof}[Sketch] The mechanism behind the proof is the following. Consider two copies of the chain described above with Dirichlet form $\cD$, coupled by attempting the same updates. Observe that the following sequence of update attempts guarantees that the two chains reach the same state. First, attempt to resample sites 1 and 2 so that $\cA_1\times\cC_2$ occurs. Then, before any other update is attempted, update sites 2 and 3 (after the first update, $\cK$ necessarily occurs, regardless of whether the first update attempt was successful) so that $\cC_2\times\cA_3$ occurs. Finally, again before any other update is attempted, update sites 1 and 2. Estimating the rate at which such a sequence of updates is attempted yields the desired result (see \cite{Hartarsky23FA}*{Proposition 3.5} for more detail). \end{proof} \begin{figure} \centering \begin{tikzpicture}[>=latex,x=0.5cm,y=0.5cm] \fill[opacity=0.3] (-0.1,6) -- (-0.1,0) -- (6,0) -- (6,6) -- cycle; \draw (-3,0)-- (12,0); \draw (-3,6)-- (12,6); \draw (9,0)-- (9,6); \draw (12,0)-- (12,6); \draw (9,6)-- (0,6); \draw (-3,6)-- (-3,0); \draw[pattern=vertical lines] (-3,0) rectangle (0,6); \draw[pattern=vertical lines] (9,0) rectangle (12,6); \draw[pattern=vertical lines] (6,0) rectangle (9,6); \draw (3,3) node {$\Lambda^{(m)}+(d_k-d_{k+1},0)$}; \draw [decorate,decoration={brace,amplitude=10pt}] (9,0) -- (-3,0) node [midway,yshift= - 0.5cm] {$R^{(k+1)}$}; \draw [decorate,decoration={brace,amplitude=10pt}] (-3,6) -- (0,6) node [midway,yshift= 0.5cm] {$V_1$}; \draw [decorate,decoration={brace,amplitude=10pt}] (0,6) -- (9,6) node [midway,yshift= 0.5cm] {$V_2$}; \draw [decorate,decoration={brace,amplitude=10pt}] (9,6) -- (12,6) node [midway,yshift= 0.5cm] {$V_3$}; \end{tikzpicture} \caption{ \label{fig:3block}The partition of $R^{(k)}$ into the rectangles $V_1$, $V_2$, $V_3$.} \end{figure} In order to prove \eqref{eq:FA2f:recursion:step}, using Lemma~\ref{lem:three-block}, we consider the blocks $V_1,V_2,V_3$ depicted in Figure~\ref{fig:3block}. The events $\cA_1$ and $\cA_3$ require each column to contain at least one empty site. The events $\cB_{1,2}$, $\cB_{2,3}$ and $\cC_2$ are the super good events for $V_1\cup V_2$, $V_2\cup V_3$ and $V_2$, which were already defined. In order to show that the fraction in \eqref{eq:3block} is at most $q^{-C}$ for some $C>0$, it remains to observe that \begin{equation} \label{eq:uniform position} \pi_1(\cA_1)\pi_2(\cC_2)\ge \pi_{1,2}(\cB_{1,2})/d_k.
\end{equation} The last inequality follows from the observations that the events in \eqref{eq:def:SG} have the same probability for each $s$, and that the position depicted in Figure~\ref{fig:3block} guarantees the occurrence of $\cH\cap\cK$. The above proves \eqref{eq:FA2f:recursion:step} as desired. However, we are not done proving \eqref{eq:FA2f:recursion}. Indeed, iterating \eqref{eq:FA2f:recursion:step} yields \begin{equation} \label{eq:FA2f:recursion:step:final}\gamma\left(\Lambda^{(2m+1)}\right)\le e^{C\log^2(1/q)}\gamma\left(R\left(\ell_{m}+1,\ell_m\right)\right)\end{equation} for some $C>0$, the rectangle on the right being the thinnest rectangle $R^{(k)}$ we can obtain, which is one column wider than the desired $\Lambda^{(2m)}$. The reason for this is visible in Figure~\ref{fig:3block}. Indeed, if $d_{k}=1$, we cannot fit a translate of $\Lambda^{(2m)}$ in $V_2$ in such a way that both $V_1$ and $V_3$ remain non-empty. Thus, this base case requires a separate argument contained in \cite{Hartarsky23FA}*{Proposition 3.7, Lemma 4.10}, which we briefly discuss next. \begin{figure}[b] \centering \begin{tikzpicture}[>=latex,x=0.5cm,y=0.5cm] \fill[opacity=0.3] (3,3) rectangle (8,8); \draw (5.5,5.5) node {$\Lambda^{(2m-2)}+\bx$}; \draw (-0.5,0)--(-0.5,10); \draw (9.5,0)--(9.5,10); \draw [pattern=vertical lines] (0,3) rectangle (3,8); \draw [pattern=vertical lines] (8,3) rectangle (9,8); \draw [pattern=horizontal lines] (0,8) rectangle (9,10); \draw [pattern=horizontal lines] (0,0) rectangle (9,3); \draw[anchor=east] (-0.5,2) node {$V_1$}; \draw [decorate,decoration={brace,amplitude=10pt}] (0,10.2) -- (9,10.2) node [midway,yshift=0.5cm] {$V_2$}; \draw [decorate,decoration={brace,amplitude=10pt}] (9.7,8) -- (9.7,3) node [midway,xshift=0.5cm] {$I_3$}; \draw [decorate,decoration={brace,amplitude=10pt}] (-0.7,3) -- (-0.7,8) node [midway,xshift=-0.5cm] {$I_1$}; \draw[anchor=west] (9.5,2) node {$V_3$}; \end{tikzpicture} \caption{ \label{fig:3block:base} Partition of $R(\ell_m+1,\ell_m)$ into the rectangle $V_2$ and the lines $V_1$ and $V_3$. The internal structure of the contracted super good event for $V_2$ is shown. The empty sites in $V_1$ and $V_3$ need to be in $I_1$ and $I_3$ respectively in order to match it.} \end{figure} The proof that \begin{equation} \label{eq:FA2f:recursion:base} \gamma\left(R(\ell_m+1,\ell_m)\right)\le q^{-C}\gamma\left(\Lambda^{(2m)}\right)\end{equation} for a constant $C>0$ proceeds along somewhat similar lines to \eqref{eq:FA2f:recursion:step}, but is more subtle. We do use a decomposition as in Figure~\ref{fig:3block}, with $V_1$ and $V_3$ each consisting of a single column. In order to take into account the fact that the remaining $V_2$ is slightly smaller than $\Lambda^{(2m)}$, we define a ``contracted'' version of $\cS\cG(\Lambda^{(2m)})$ which requires as much of its internal structure as can be fit into $V_2$. That is, we require a super good translate of $\Lambda^{(2m-2)}$ (assuming $m\neq 0$) and lines containing at least one empty site within $V_2$, mimicking Figure~\ref{fig:FA2f:droplets}. This contracted super good event plays the role of $\cC_2$ in an appropriate analogue of Lemma~\ref{lem:three-block} (the events $\cA_1$ and $\cA_3$ now need to fit well with the position of the super good translate of $\Lambda^{(2m-2)}$, see Figure~\ref{fig:3block:base}).
Finally, using a double iteration of \eqref{eq:uniform position}, we bound the resulting factor as desired, since the position of the super good translate of $\Lambda^{(2m-2)}$ is somewhat uniform inside $\Lambda^{(2m)}$. Putting \eqref{eq:FA2f:recursion:step:final} and \eqref{eq:FA2f:recursion:base} together, we obtain \eqref{eq:FA2f:recursion}. In turn, iterating \eqref{eq:FA2f:recursion} and using a trivial bound at scale $m=0$ gives \eqref{eq:FA2f:meso} with $\gamma(Q)\le \exp(o(1/q))$, as $q\to 0$, as desired. This concludes the sketch of the proof of Theorem~\ref{th:FA2f:sharp:threshold}. \section{Conclusion} Let us summarise the new techniques presented in this chapter, recalling Sections~\ref{subsec:tools:BP} and~\ref{subsec:tools:1d}. \noindent\textbf{Combinatorial bottlenecks.} Thanks to this method, we were able to prove precise lower bounds. The approach of Section~\ref{sec:FA2f:lower} is quite general. It is based on BP results, but is more precise than the BP lower bound we saw in \eqref{eq:t0t0bp}. The intuition behind it is that a critical droplet needs to visit the origin in order to change its state, which does not happen before the time given by the inverse of the probability of a droplet. \noindent\textbf{Bisection.} In Section~\ref{sec:bisection:2d} we saw that the bisection technique can also be applied in higher dimensions by alternating two-block steps in different directions. \noindent\textbf{Long range renormalisation.} This is a new technique in our repertoire. It allows us to focus on a one-dimensional path leading a critical droplet to the origin. This effectively reduces the problem to treating the movement of a critical droplet on its own scale, rather than a much larger one. It has the advantage of yielding upper bounds on $\trel$, but the one-dimensional character of its path does not allow obtaining sharp thresholds. \noindent\textbf{Matryoshka dolls.} This is our state-of-the-art technique for proving sharp upper bounds. It may be viewed as a very adaptable multi-scale renormalisation scheme, whose flexibility will be unleashed in the next chapter. In a sense, bisection in higher dimensions is also an instance of matryoshka dolls. The idea of the method is to recursively prove Poincar\'e inequalities on progressively larger and larger droplets. At each step, we have full freedom in choosing the auxiliary dynamics. Once a sufficiently large scale is reached via the matryoshka dolls, we conclude by a single step renormalisation, whose auxiliary global dynamics may also be adapted to our needs. A major advance of this technique is that it allows us to prove very precise results incorporating tailored relaxation mechanisms, completely bypassing the need to build any explicit canonical paths. While for modified FA-2f canonical paths reflecting the multi-scale structure of droplets could be envisioned, such approaches very quickly get out of hand in more general settings such as the ones investigated in the next chapter. \noindent\textbf{Auxiliary dynamics.} Let us review the auxiliary dynamics we have plugged into the matryoshka dolls technique so far. In Section~\ref{sec:bisection:2d}, we always used the simple two-block dynamics of Lemma~\ref{lem:two-block}. In Section~\ref{sec:FA2fbetter}, we employed a one-dimensional FA-1f global dynamics. In Section~\ref{subsec:CBSEP} we developed a new possible global dynamics given by CBSEP, which was applied to FA-2f in Section~\ref{subsec:FA2f:reduction:to:meso}.
Finally, in Section~\ref{subsubsec:matryoshka}, we developed a three-block alternative, Lemma~\ref{lem:three-block}, to the two-block Lemma~\ref{lem:two-block}. The three-block dynamics was used repeatedly in one direction before switching to the other, alternating until reaching the desired mesoscopic scale. The key feature of CBSEP and the three-block lemma is that they allow us to move a very unlikely droplet without paying the price of its creation from scratch, but rather only the price of a little internal reshuffling. \chapter{Models} \label{chap:models} \abstract{ In this chapter we provide a general definition of KCM and associated quantities of interest. We introduce the choices of constraints corresponding to some of the most studied models, including the East, Fredrickson--Andersen $j$-spin facilitated (FA-$j$f), Duarte, North-East (NE) and Spiral models.} \section{Notation}\label{Section:Setting} We denote the sets of nonnegative integers, integers and reals by $\bbN$, $\bbZ$ and $\bbR$ respectively. The models that we consider in this manuscript are most often defined on the infinite integer lattice $\mathbb Z^d$ or a subset thereof. We denote by $\vec x=(x_1,\dots,x_d)$ the \emph{sites} of $\mathbb Z^d$, by $\vec e_1,\dots,\vec e_d$ its \emph{canonical basis vectors} and by $\Omega=\{0,1\}^{\mathbb Z^d}$ the \emph{configuration space}. Elements of $\Omega$ are called \emph{configurations} and denoted by Greek letters $\sigma,\eta,\omega,\dots$. For a configuration $\omega\in\Omega$ and a site $\vec x\in\bbZ^d$, $\omega_{\vec x}$ denotes the \emph{occupancy variable} (the value) of $\omega$ at $\vec x$. When $\omega_{\vec x}=1$, we say that site $\vec x$ is \emph{occupied}. When $\omega_{\vec x}=0$ we say that $\vec x$ is \emph{empty}. For $\omega\in\Omega$, we write $|\omega|=|\{\bx\in\bbZ^d:\omega_\bx=0\}|$ for the number of empty sites in $\omega$. KCM are endowed with a parameter $q\in(0,1]$ called the \emph{vacancy density}, corresponding to the inverse temperature $\beta$ (in the right units) via the relation \begin{equation}\label{eq:temperature} q=1/(1+e^{\beta}).\end{equation} In particular, the limit $q\to 0$ corresponds to the zero temperature limit, $\beta\to\infty$. Given $q\in[0,1]$, we denote the product Bernoulli$(1-q)$ measure on $\Omega$, under which a site is empty with probability $q$, by $\mu_q$ or simply $\mu$, when $q$ is clear from the context. The mean with respect to a measure $\nu$ on $\Omega$ of a function $f:\Omega\to\bbR$ is denoted by $\nu(f)$, while its variance is denoted by $\var_\nu(f)$ or simply $\var(f)$ when $\nu=\mu$. The functions $f$ we are interested in take only a finite number of values, so no integrability issues arise. We sometimes work on a subset $\Lambda\subset\bbZ^d$ of the lattice. Correspondingly, we set $\Omega_{\Lambda}=\{0,1\}^{\Lambda}$. The \emph{restriction} of a configuration $\omega\in\Omega$ to $\Lambda$ is denoted by $\omega_\Lambda\in\Omega_\Lambda$. For any measure $\nu$ that is the product of a measure on ${\Omega_\Lambda}$ and ${\Omega_{\Lambda^c}}$, we similarly denote by $\nu_{\Lambda}$ the restriction to $\Lambda$. When $\Lambda=\{\vec x\}$ is a singleton, we simply write $\nu_{\vec x}$ for $\nu_{\Lambda}$. We write $\var_\Lambda$ for $\var_{\mu_\Lambda}$, that is the variance with respect to the occupation variables in $\Lambda$.
Given disjoint sets $\Lambda_1,\Lambda_2\subset \bbZ^d$, and $\omega^{(1)}\in \Omega_{\Lambda_1}$ and $\omega^{(2)}\in\Omega_{\Lambda_2}$, we write $\omega^{(1)}\cdot\omega^{(2)}\in \Omega_{\Lambda_1\cup\Lambda_2}$ for the configuration such that \begin{equation} \label{eq:def:concatenation}(\omega^{(1)}\cdot\omega^{(2)})_{\vec x}=\begin{cases} \omega^{(1)}_{\vec x}&\vec x\in\Lambda_1,\\ \omega^{(2)}_{\vec x}&\vec x\in\Lambda_2. \end{cases}\end{equation} We denote the fully occupied (resp.\ empty) configurations by $\bone$ (resp.\ $\bzero$) and omit the domain $\Lambda$ in $\bone_\Lambda$ (resp.\ $\bzero_\Lambda$), when it is clear from the context. \section{Update families} \label{sec:rules} KCM are Glauber type Markov processes on $\Omega$ (or $\Omega_{\Lambda}$) reversible w.r.t.\ $\mu$ ($\mu_{\Lambda}$, respectively). We give the general definition introduced in \cite{Cancrini08} to cover all the models studied in physics. Each KCM is characterised by its \emph{update family}, namely a finite collection $\mathcal U=\{U_1,\dots,U_m\}$ of finite subsets of $\mathbb Z^d\setminus \{0\}$ called \emph{update rules}. Given a vertex ${\vec x}\in \bbZ^d$, we will say that the \emph{constraint} at $\vec x$ is satisfied by the configuration $\omega\in\Omega$ if at least one of the update rules translated at ${\vec x}$ is completely empty, namely, \begin{equation} c_{{\vec x}}(\omega)=c_{\vec x}^{\cU}(\omega)= \begin{cases} 1&\exists U\in\cU, \forall \vec u\in U,\omega_{{\vec x}+\vec u}=0,\\ 0&\text{otherwise}. \end{cases}\label{eq:def:cx} \end{equation} Observe that the constraints $c_{\vec x}$ are non-increasing w.r.t.\ the product partial order on configurations in $\Omega$ given by $\omega\le\omega'$ if $\omega_{\vec y}\le \omega'_{\vec y}$ for all $\vec y\in\bbZ^d$. Conversely, any non-increasing function $c_0:\Omega\to\{0,1\}$ depending only on the restriction of the configuration to a finite subset of $\bbZ^d\setminus\{0\}$ can be written in the form $c_0^\cU$ for some update family $\cU$. Correspondingly, there is a natural partial order on update families: $\cU_1\le \cU_2$, if $c_0^{\cU_1}(\omega)\le c_0^{\cU_2}(\omega)$ for every $\omega\in\Omega$.\footnote{Note that since we only care about the update family through the constraint it induces, we identify update families yielding the same constraints, e.g.\ $\{\{1\}\}$ and $\{\{1\},\{1,2\}\}$ in one dimension.} The minimal and maximal update families correspond to $\cU=\varnothing$ (i.e.\ $c_0^\cU\equiv0$) and $\cU=\{\varnothing\}$ (i.e.\ $c_0^\cU\equiv1$), respectively. Since the goal of KCM is to explore the effect of constraints on the dynamics, we discard these cases by systematically assuming update families and update rules to be nonempty. Moreover, we usually drop $\cU$ from the notation, as $\cU$ is fixed or arbitrary.
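As a simple illustration of \eqref{eq:def:cx} (added here for concreteness), consider the one-dimensional update families from the footnote above. For $\cU=\{\{1\}\}$ and $x\in\bbZ$ we simply get \[c_{x}(\omega)=1-\omega_{x+1},\] while for $\cU=\{\{1\},\{1,2\}\}$ the rule $\{1,2\}$ is redundant, since it can only be fully empty when the rule $\{1\}$ already is; the induced constraint is therefore the same, which is precisely the identification mentioned in the footnote.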
\begin{figure} \centering \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw (-1,-1.5) circle (1 mm) ; \draw (0,-1) circle (1 mm); \draw[step=0.5cm,gray,very thin](-2,-2)grid(-1,-1); \draw[step=0.5cm,gray,very thin](-0.5,-2)grid(0.5,-1); \draw (-1.5cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (0,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (0.5 cm,1.8cm) node {};\end{tikzpicture} \caption{East\label{fig:East}} \end{center} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw (-1.5,-1.5) circle (1 mm) ; \draw (0.5,-2) circle (1 mm); \draw (0.5,0.5) circle (1 mm); \draw (-0.5,0) circle (1 mm); \draw[step=0.5cm,gray,very thin](-1.5,-2)grid(-0.5,-1); \draw[step=0.5cm,gray,very thin](0,-2)grid(1,-1); \draw[step=0.5cm,gray,very thin](-1.5,-0.5)grid(-0.5,0.5); \draw[step=0.5cm,gray,very thin](0,-0.5)grid(1,0.5); \draw (-1 cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (0.5 cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (-1 cm,0.cm) node[cross=3pt,rotate=0] {}; \draw (0.5 cm,0cm) node[cross=3pt,rotate=0] {}; \end{tikzpicture} \caption{FA-$1$f\label{fig:FA1f}} \end{center} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw[step=0.5cm,gray,very thin](-1.5,-2)grid(-0.5,-1); \draw[step=0.5cm,gray,very thin](0,-2)grid(1,-1); \draw[step=0.5cm,gray,very thin](-1.5,-0.5)grid(-0.5,0.5); \draw[step=0.5cm,gray,very thin](0,-0.5)grid(1,0.5); \draw[step=0.5cm,gray,very thin](-1.5,0.999)grid(-0.5,2); \draw[step=0.5cm,gray,very thin](-0.0001,0.999)grid(1,2); \draw (-1cm,1.5cm) node[cross=3pt,rotate=0] {}; \draw (0,1.5) circle (1 mm) ; \draw (0.5,2) circle (1 mm) ; \draw (-1.5,1.5) circle (1 mm) ; \draw (-1,1) circle (1 mm) ; \draw (-1,0.5) circle (1 mm) ; \draw (-0.5,0) circle (1 mm) ; \draw (1,0) circle (1 mm) ; \draw (0.5,-0.5) circle (1 mm) ; \draw (0.5,-1) circle (1 mm) ; \draw (0.5,-2) circle (1 mm) ; \draw (-0.5,-1.5) circle (1 mm) ; \draw (-1.5,-1.5) circle (1 mm) ; \draw (-1cm,0cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,1.5cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,0cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (-1 cm,-1.5cm) node[cross=3pt,rotate=0] {}; \end{tikzpicture} \caption{FA-$2$f\label{fig:FA2f}} \end{center} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw[step=0.5cm,gray,very thin](-1.5,-2)grid(-0.5,-1); \draw[step=0.5cm,gray,very thin](0,-2)grid(1,-1); \draw[step=0.5cm,gray,very thin](-1.5,-0.5)grid(-0.5,0.5); \draw (-1,0.5) circle (1 mm) ; \draw (-1,-0.5) circle (1 mm) ; \draw (-1.5,-1.5) circle (1 mm) ; \draw (-1,-2) circle (1 mm) ; \draw (0,-1.5) circle (1 mm) ; \draw (0.5,-1) circle (1 mm) ; \draw (-1cm,0cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (-1 cm,-1.5cm) node[cross=3pt,rotate=0] {}; \end{tikzpicture} \caption{Duarte\label{fig:Duarte}} \end{center} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw[step=0.5cm,gray,very thin](-2,-2)grid(-1,-1); \draw (-1,-1.5) circle (1 mm) ; \draw (-1.5,-1) circle (1 mm) ; \draw (-1.5,-1.5) node[cross=3pt,rotate=0] {}; \end{tikzpicture} \caption{North-East\label{fig:NE}} \end{center} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{center} \begin{tikzpicture} \draw[step=0.5cm,gray,very thin](-1.5,-2)grid(-0.5,-1); \draw[step=0.5cm,gray,very thin](0,-2)grid(1,-1); \draw[step=0.5cm,gray,very thin](-1.5,-0.5)grid(-0.5,0.5); \draw[step=0.5cm,gray,very thin](0,-0.5)grid(1,0.5); 
\draw (-1,0.5) circle (1 mm) ; \draw (-0.5,0) circle (1 mm) ; \draw (0,0.5) circle (1 mm) ; \draw (1,0.5) circle (1 mm) ; \draw (0,0) circle (1 mm) ; \draw (-0.5,0.5) circle (1 mm) ; \draw (0.5,0.5) circle (1 mm) ; \draw (-0.5,-0.5) circle (1 mm) ; \draw (0,-1) circle (1 mm) ; \draw (0.5,-2) circle (1 mm) ; \draw (-0.5,-1.5) circle (1 mm) ; \draw (-1,-2) circle (1 mm) ; \draw (-1.5,-2) circle (1 mm) ; \draw (-0.5,-2) circle (1 mm) ; \draw (0,-2) circle (1 mm) ; \draw (0,-1.5) circle (1 mm) ; \draw (-1cm,0cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,0cm) node[cross=3pt,rotate=0] {}; \draw (0.5cm,-1.5cm) node[cross=3pt,rotate=0] {}; \draw (-1 cm,-1.5cm) node[cross=3pt,rotate=0] {}; \end{tikzpicture} \caption{Spiral\label{fig:spiral}} \end{center} \end{subfigure} \caption{The two-dimensional update families introduced in Section \ref{sec:rules}.\label{fig:rules}} \end{figure} We now present some of the most commonly considered update families (see Figure~\ref{fig:rules} for their two-dimensional representation). While at this point they may seem arbitrary, in Chapter~\ref{chap:universality}, we will see that they are representatives of different universality classes displaying very different behaviour. \begin{itemize} \item The \textbf{East} \cites{Jackle91,Ashton06} update family is $\cU=\{\{\vec e_1\},\dots,\{\vec e_d\}\}$. That is, the East constraint at $\vec x\in\bbZ^d$ is satisfied, if $\vec x$ has an empty neighbour in at least one of the positive coordinate directions. \item The \textbf{Fredrickson--Andersen $j$-spin facilitated (FA-$j$f or $j$-neighbour)} \cites{Fredrickson84,Fredrickson85} update family is $\cU=\{U\subset\{\vec e_1,\dots,\vec e_d,-\vec e_1,\dots,-\vec e_d\}:|U|=j\}$, where $j\in\{1,\dots,2d\}$ is a parameter. That is, the FA-$j$f constraint at $\vec x\in\bbZ^d$ is satisfied, if $\vec x$ has at least $j$ empty neighbours. \item The \textbf{Duarte} \cite{Duarte89} update family is $\cU=\{\{-\vec e_1,\vec e_2\},\{-\vec e_1,-\vec e_2\},\{-\vec e_2,\vec e_2\}\}$ in two dimensions. That is, the Duarte constraint at $\vec x\in\bbZ^2$ is satisfied, if $\vec x$ has at least two empty neighbours other than $\vec x+\vec e_1$. \item The \textbf{North-East (NE)} \cite{Reiter92} update family is $\cU=\{\{\vec e_1,\dots,\vec e_d\}\}$ for $d\ge 2$ (for $d=1$ we get the East model). That is, the constraint at $\vec x\in\bbZ^d$ is satisfied, if all the neighbours of $\vec x$ in the positive coordinate directions are empty. \item The \textbf{Spiral} \cite{Toninelli08} update family is $\cU=\{U_1,U_2,U_3,U_4\}$ in two dimensions, where $U_1=\{(1,-1),(1,0),(1,1),(0,1)\}$ and $U_2,U_3,U_4$ are obtained by rotating $U_1$ by $\pi/2$, $\pi$ and $3\pi/2$ around the origin respectively. \end{itemize} \section{The Markov process} \label{subsec:markov} Let us first informally describe KCM on $\mathbb Z^d$ via their so-called \emph{graphical representation}. Each vertex ${\vec x}\in\bbZ^d$ is equipped with a unit intensity Poisson process, whose atoms $t_{\vec x,k}$ for $k\in\bbN$ are the \emph{clock rings}. We are further given independent Bernoulli random variables $s_{\vec x,k}$ with parameter $1-q$ called \emph{coin tosses}. At the clock ring $t_{\vec x,k}$, if the current configuration $\omega$ satisfies the constraint at ${\vec x}$, we set the occupation variable $\omega_{\vec x}$ to $s_{\vec x,k}$. Such updates are called \emph{legal}. If, on the contrary, the constraint is not satisfied, the configuration remains unchanged at time $t_{\vec x,k}$.
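The graphical representation translates directly into a simple simulation algorithm on a finite box. The following minimal sketch (ours, not taken from the literature; all names are ours, and the fully occupied boundary condition, anticipating Section~\ref{sec:finite_vol}, is chosen only for concreteness) runs the FA-1f KCM on an $L\times L$ box started from its equilibrium measure: the next clock ring occurs after an exponential time of rate $L^2$, the ringing site is uniform, and a legal update replaces the occupation variable by an independent Bernoulli$(1-q)$ coin toss.
\begin{verbatim}
import random

def fa1f_constraint(config, x, y, L):
    # FA-1f: the constraint at (x, y) holds iff at least one nearest
    # neighbour is empty; sites outside the box are treated as occupied
    # (fully occupied boundary condition, chosen for concreteness).
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < L and 0 <= ny < L and config[nx, ny] == 0:
            return True
    return False

def simulate_fa1f(L=20, q=0.2, t_max=100.0, seed=0):
    rng = random.Random(seed)
    # Stationary initial condition: each site is empty with probability q.
    config = {(x, y): 0 if rng.random() < q else 1
              for x in range(L) for y in range(L)}
    t, total_rate = 0.0, float(L * L)  # one unit-rate Poisson clock per site
    while True:
        t += rng.expovariate(total_rate)        # time of the next clock ring
        if t > t_max:
            return config
        x, y = rng.randrange(L), rng.randrange(L)      # the ringing site
        if fa1f_constraint(config, x, y, L):           # legal update?
            config[x, y] = 0 if rng.random() < q else 1    # coin toss
\end{verbatim}
For instance, recording along such a run the first time the central site becomes empty, respectively occupied, gives naive estimates of the emptying and occupation times defined in Section~\ref{sec:time_scales}.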
More formally, the Markov process can be constructed via its self-adjoint Markov \emph{semigroup} $P_t:=e^{t\cL}$ on $\mathrm{L}^2(\mu)$, where the \emph{generator} $\cL$ is a non-positive self-adjoint operator with domain $\mathrm{Dom}(\mathcal L)$ that can be constructed in a standard way (see e.g.\ \cite{Liggett05}*{Sections~I.3, IV.4}) starting from its action on \emph{local functions} (i.e.\ functions depending on the occupancy variables of a finite number of sites): \begin{equation} \label{eq:generator} \cL f=\sum_{\vec x\in \bbZ^d}c_{\vec x}\cdot \left(\mu_{\vec x}(f)-f\right). \end{equation} Spelling out the definition of $\mu_{\vec x}$, the generator can be equivalently rewritten as \begin{equation} \label{eq:generatorbis} \cL f(\omega)=\sum_{\vec x\in \bbZ^d}c_{\vec x}(\omega)\left((1-\omega_{\vec x})(1-q)+\omega_{\vec x}q\right) (f(\omega^{\vec x})-f(\omega)) \end{equation} with $\omega^{\vec x}$ the configuration obtained from $\omega$ by flipping its value at $\vec x$, i.e. \begin{equation} \label{eq:def:omega:flip} \omega^{\vec x}_{\vec y}= \begin{cases} \omega_{\vec y} &\vec y \neq \vec x,\\ 1-\omega_{\vec x} &\vec y=\vec x. \end{cases}\end{equation} We further introduce the Dirichlet form $\cD:{\mbox{Dom}}(\cL)\to\mathbb R$ defined as \begin{equation}\label{eq:dirichlet} \cD(f)=- \mu(f \cdot \cL f)=\sum_{\vec x\in \bbZ^d}\mu\left(c_{\vec x} \Var_{\vec x}(f)\right). \end{equation} Using the formulation \eqref{eq:generatorbis}, it is not hard to verify that \[\cD(f)=\frac{1}{2}\int\mu(\mathrm{d}\omega)\sum_{\vec x\in \bbZ^d}c_{\vec x}(\omega)\left((1-\omega_{\vec x})(1-q)+\omega_{\vec x}q\right)\left(f(\omega^{\vec x})-f(\omega)\right)^2.\] When the initial distribution at time $t=0$ is $\nu$, the law and expectation of the KCM process on the Skorokhod space $D([0,\infty),\Omega)$ of c\`adl\`ag functions are denoted by $\bbP_{\nu}$ and $\bbE_{\nu}$ respectively (see \cite{Billingsley99}*{Chapter III} for background). If $\nu$ is concentrated over a single configuration, $\nu=\delta_{\sigma}$, we simply write $\bbP_\sigma$ and $\bbE_\sigma$ for $\bbP_\nu$ and $\bbE_\nu$, while if $\nu=\mu$, we simply write $\bbP$ and $\bbE$. We use $\omega(t)$ to denote the state of the KCM at time $t\ge 0$. We next discuss an important property of KCM---reversibility (see \cite{Liggett05}*{Section II.5} for background). It is not hard to verify that, since the constraint $c_{\vec x}(\omega)$ does not depend on $\omega_{\vec x}$, the dynamics satisfies detailed balance w.r.t.\ the product measure $\mu$. Therefore, $\mu$ is reversible (i.e.\ $\mu(f\cdot P_t g)=\mu(g\cdot P_t f)$ for all $f,g\in \mathrm{L}^2(\mu)$ and $t\ge0$) and hence an invariant measure for the process (i.e.\ $\mu P_t=\mu$ for all $t\ge0$). However, $\mu$ is \emph{not} the unique invariant measure---e.g.\ the Dirac measure on the fully occupied configuration is clearly invariant in view of \eqref{eq:def:cx} and \eqref{eq:generator}. We nevertheless refer to the KCM with initial condition $\mu$ as the \emph{stationary process}. \section{Boundary conditions} \label{sec:finite_vol} KCM can also be defined on finite or infinite subsets $\Lambda\subset \mathbb Z^d$ (we write $\Lambda\Subset\bbZ^d$ when we assume that $\Lambda$ is finite). In this case the most natural choice is to imagine that the configuration is defined also outside $\Lambda$, where it is frozen and equal to some reference configuration $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$, the \emph{boundary condition}.
Then, for $\vec x\in\Lambda$ and $\omega\in\Omega_{\Lambda}$, the constraint is defined as \begin{equation} \label{eq:def:cx:finite} c^{\sigma}_{\vec x}(\omega)= c_{\vec x}(\omega\cdot\sigma) \end{equation} (recall \eqref{eq:def:concatenation} and \eqref{eq:def:cx}). We denote by $\cL_{\sigma}$ and $\mathcal{D}_{\sigma}$ the generator and Dirichlet form of this process on $\Omega_{\Lambda}$, which are obtained by restricting the sums in \eqref{eq:generator} and \eqref{eq:dirichlet} to sites in $\Lambda$ and substituting $c_{\vec x}$ with $c^{\sigma}_{\vec x}$. We similarly denote by $\bbP_{\nu}^{\sigma}$ and $\bbE_{\nu}^\sigma$ ($\bbP_\zeta^{\sigma}$ and $\bbE_\zeta^\sigma$, if $\nu=\delta_\zeta$, and $\bbP^\sigma$ and $\bbE^\sigma$, if $\nu=\mu$) the law and expectation of the process with initial distribution $\nu$ and by $\omega^\sigma(t)$ the process at time $t$. Note that $\mu_\Lambda$ is reversible for this process. \section{Characteristic times and critical parameters}\label{sec:time_scales} Having defined KCM, we next discuss the kinds of questions and quantities that we seek to tackle for them. Let us start with three natural observables. The \emph{emptying}, \emph{occupation} and \emph{persistence} times of a KCM $(\omega(t))_{t\ge 0}$ are given by \begin{align} \label{eq:def:tau} \tau_0&=\inf\left\{t\ge 0:\omega_0(t)=0\right\},&\tau_1&=\inf\left\{t\ge 0:\omega_0(t)=1\right\},&\tau_\vee&=\max(\tau_0,\tau_1) \end{align} respectively. Turning to more analytic quantities, the \emph{relaxation time} (also known as the inverse of the \emph{spectral gap} of $\cL$) is defined as \begin{align} \label{eq:def:Trel} \trel={}&\frac{1}{\mbox{gap}},& {\mbox{gap}}={}&\inf_{\substack{f\in \mathrm{Dom}(\cL)\\ \Var(f)\neq 0}}\frac{\cD(f)}{\Var(f)} \end{align} where $\cD$ is the Dirichlet form \eqref{eq:dirichlet}. This definition is equivalent to saying that $\trel$ is the smallest constant $C\ge 0$ such that the {\emph{Poincar\'e inequality}} \begin{equation} \label{eq:poincare} \Var(f)\leq C \cD(f) \end{equation} is satisfied for any $f\in\mathrm{Dom}(\cL)$. A finite relaxation time is equivalent to the fact that the measure $\mu$ is mixing for the semigroup $P_t$ with exponentially decaying correlations (see e.g.\ \cite{Saloff-Coste97}), namely for all $f\in \mathrm{L}^2(\mu)$ it holds that\footnote{Indeed, by reversibility, $ \frac{\mathrm d}{\mathrm{d}t}\var (P_t f)=-2\mathcal{D}(P_t f)$.} \begin{equation}\label{eq:expo}\Var\left(P_t f\right)\leq e^{-2 t/\trel}\Var(f).\end{equation} Thus, the relaxation time controls the decay of correlations in the stationary process. Of course, the above time scales have no reason to be finite, so it is natural to consider the corresponding critical parameters. We define the \emph{ergodicity} and \emph{exponential decay} critical parameters \begin{align} \label{eq:def:qc}\qc&=\qc(\cU)=\inf\left\{q>0:\bbP(\tau_0<\infty)=1\right\},\\ \label{eq:def:qct}\qct&=\qct(\cU)=\inf\left\{q>0:\trel<\infty\right\}. \end{align} Our main goals are to determine (or characterise) \qc and \qct and to study the asymptotics of $\tau_0$ and $\trel$ for the stationary process as $q\to\qc+$, as well as the behaviour of the KCM out of equilibrium (with initial condition different from $\mu$) for any $q>0$. Once a Poincar\'e inequality \eqref{eq:poincare} is established, it is natural to ask whether one can also prove a stronger coercive inequality for the generator.
In particular, one can investigate whether a logarithmic or modified logarithmic Sobolev inequality holds (see \cite{Saloff-Coste97} for background). These correspond, respectively, to the existence of finite constants $C_{\mathrm{LS}}$ and $C_{\mathrm{MLS}}$ such that for any nonnegative $f\in\mathrm{Dom}(\cL)$ we have \begin{align} \label{eq:logsob} \Ent(f)&{}\leq C_{\mathrm{LS}}\mathcal{E}\left(\sqrt{f},\sqrt{f}\right),\\ \label{eq:modlogsob}\Ent(f) &{}\leq C_\mathrm{MLS}\mathcal{E} (f,\log f),\end{align} where for any two functions $f,g$ we set \begin{align*} \Ent(f)&{}=\mu\left(f\log(f/\mu(f))\right),&\cE(f,g)&{}=-\mu(f\mathcal L g).\end{align*} The (stronger) inequality \eqref{eq:logsob} is known to be equivalent to the hypercontractivity property of the semigroup $P_t$ \cite{Diaconis96}. Instead, \eqref{eq:modlogsob} is equivalent to exponential decay of the relative entropy along the semigroup $P_t$ (see \cite{Bobkov06}), namely for each probability measure $\nu$ on $\Omega$ it holds that\footnote{Indeed, if we let $f_t$ be the relative density of $\nu P_t$ w.r.t.\ $\mu$ it holds that $\frac{\mathrm d}{\mathrm{d}t} H(\nu P_t ||\mu)= \frac{\mathrm d}{\mathrm{d}t} \Ent(f_t)=-\cE(f_t,\log f_t)$.} \begin{equation}\label{eq:rel_ent_dec_1}H(\nu P_t ||\mu) \leq e^{-t/C_{\mathrm{MLS}}}H(\nu ||\mu) \end{equation} where for any two measures $\nu_1,\nu_2$ on $\Omega$ we denote by $H(\nu_1||\nu_2)$ the relative entropy (or Kullback--Leibler divergence) of $\nu_1$ w.r.t.\ $\nu_2$. The definitions in \eqref{eq:def:tau}, \eqref{eq:def:Trel}, \eqref{eq:logsob}, \eqref{eq:modlogsob} naturally extend to the finite volume setting. We denote the relaxation time (resp.\ logarithmic and modified logarithmic Sobolev constants) for the KCM with boundary condition $\sigma$ by $T^{\sigma}_{\rm{rel}}$ (resp.\ $C_\mathrm{LS}^{\sigma}$ and $C_\mathrm{MLS}^{\sigma}$). When $\sigma=\bzero_{\bbZ^d\setminus\Lambda}$ for $\Lambda\subset\bbZ^d$, we simplify the notation to $T^{\Lambda}_{\rm{rel}}$ (resp.\ $C_\mathrm{LS}^{\Lambda}$ and $C_\mathrm{MLS}^{\Lambda}$). We finally define another natural time scale, known as the mixing time, for the KCM on $\Lambda\subset\bbZ^d$ with boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ (see \cite{Levin09} for background). The total variation distance $d_{\rm{TV}}$ between two measures $\mu_1,\mu_2$ on $\Omega_\Lambda$ is given by \begin{equation} \label{eq:def:dTV} d_{\rm{TV}} (\mu_1,\mu_2)=\sup_{A\in\mathcal F} \left|\mu_1(A)-\mu_2(A)\right|, \end{equation} where $\mathcal F$ denotes the Borel $\sigma$-field generated by the open sets of $\Omega_\Lambda$. For $\varepsilon\in(0,1)$, the mixing time is \begin{equation}\label{eq:def:tmix}t_{\rm{mix}}^{\sigma}(\varepsilon)=\inf\left\{t>0:\ \max_{\omega\in \Omega_{\Lambda}}d_{\rm{TV}} \left(\delta_\omega P_t,\mu_\Lambda\right)\le \varepsilon\right\}. \end{equation} If $\sigma=\bzero_{\bbZ^d\setminus\Lambda}$, we simplify the notation $t_{\rm{mix}}^{\sigma}$ to $t_{\rm{mix}}^{\Lambda}$. The mixing time has a natural probabilistic interpretation: it is the time after which the law of the process can no longer be reliably distinguished (e.g.\ by a statistical test) from the equilibrium measure, regardless of the initial state. \chapter{Background from physics} \label{chap:intro} \abstract{In this chapter we discuss the physics background.
We recall the basic phenomenology of the liquid-glass transition and more general jamming transitions, and explain the role of KCM as models for glassy dynamics.} \section{The liquid-glass transition} \label{sec:vetri} From the point of view of statistical physics, a key motivation behind the study of KCM is the effort to understand the \emph{liquid-glass transition}. Glass is widely present in our daily life: it is a very versatile material, easily produced and manipulated on an industrial scale by cooling different liquid mixtures (e.g.\ silica, sodium carbonate and calcium oxide). And yet a microscopic understanding of this state of matter (which, according to archaeological findings in Egypt and Eastern Mesopotamia, people have been manufacturing since 3000 B.C.) and of how glass forms is still an open challenge for condensed matter physicists (see e.g.\ the review \cite{Arceri21}). In 1995, the Nobel laureate Philip W.\ Anderson \cite{Anderson95} wrote: \emph{``The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition.''} He added, \emph{``This could be the next breakthrough in the coming decade.''} Thirty years later, physicists still disagree about the nature of glass and how it forms. At the heart of this puzzle lies the intriguing fact that glasses display properties of both solids and liquids. In fact, we could either say ``glass is an extremely viscous liquid that does not flow'' or ``glass is an unstructured, amorphous solid''. Indeed, despite its macroscopic rigidity, the microscopic structure of a glass has the same disordered arrangement of molecules as a liquid. In other words, based on a single snapshot, liquid and glass are essentially indistinguishable. This lack of order might seem in contrast with the thermodynamic paradigm according to which, when a liquid is cooled below its melting temperature, an ordered structure should form and the liquid should become a crystal. The secret is to perform the cooling sufficiently fast. This way the nucleation of the crystal is prevented and the liquid enters a long-lived metastable state, the \emph{super-cooled liquid} phase. Very roughly speaking, the liquid-crystal transition is avoided because molecules do not have enough time to reorganise and form the ordered crystal structure. The molecules move more and more slowly, forming a thick syrup, and eventually they get trapped in the structureless glass state. Since the nucleation time of the stable crystal structure is out of reach for any reasonable experiment, the super-cooled liquid, though not thermodynamically stable, can for all practical purposes be considered an equilibrium system. In particular, one can define a relaxation time (and measure it experimentally via viscosity) and establish fluctuation-dissipation relations connecting the response to an external driving force and correlation functions. Essentially, we can forget about the crystal and just focus on the super-cooled phase. A common feature of super-cooled liquids around the glass transition is the sharp slowdown of the dynamics. As shown in Figure \ref{fig:vetri:1}, the viscosity can increase by over 14 orders of magnitude upon a small decrease in temperature. The figure also highlights the fact that super-cooled liquids can be classified into two groups: \emph{strong} and \emph{fragile}.
If we let $\eta$ be the viscosity and $T$ the temperature, and define the activation energy as $E:=T\log\eta$, strong liquids are characterised by a temperature-independent $E$, while for fragile liquids $E$ increases as $T$ decreases. The corresponding scaling forms for $\eta$ are called \emph{Arrhenius} and \emph{super-Arrhenius}, respectively. This dramatic growth of time scales\footnote{For example, for fragile glass-forming liquids time scales at the melting temperature are typically of the order of the picosecond (which is also roughly the time scale of molecular motion), and are of the order of 100 seconds when temperature equals $2/3$ of the melting temperature.} is related to the fact that when the temperature is lowered, the density increases: molecules tend to obstruct each other, blocked structures may arise, and motion becomes very cooperative\footnote{{That is, a growing number of particles need to move in a coordinated way in order for relaxation to occur.}}. A key experimental signature of this cooperative motion is that, when a glass-forming liquid cools, the molecules do not slow down uniformly. There is indeed a clear coexistence of fast and slow regions. This phenomenon is called \emph{dynamical heterogeneity}: some regions of the liquid jam, while in other regions molecules continue to move around \cites{Berthier11a,Ediger00}. Thus, even if a change of structure does not occur when the glass is formed, an underlying dynamical phase transition separating slow and fast trajectories seems to occur (see Figure \ref{fig:vetri:2}). Indirect experimental evidence of dynamical heterogeneity is provided by the decoupling of the self-diffusion coefficient $D_s$ and the viscosity $\eta$: super-cooled liquids violate the phenomenological Stokes--Einstein relation, $D_s\eta/T=\mathrm{constant}$, which holds in homogeneous liquids. Though both $D_s^{-1}$ and $\eta$ increase when the temperature is lowered, the former does not increase as fast as the latter. This leads to $D_s\eta/T$ increasing by 2--3 orders of magnitude when the temperature is decreased towards the glass transition temperature (see e.g.\ \cite{Mapes06}). The reason why this decoupling of self-diffusion and viscosity is interpreted as a hallmark of dynamical heterogeneity is that the diffusion of the tracer particle (see Section~\ref{sec:tracer} for a more formal description) should be dominated by the fastest regions and the structural relaxation (measured by the viscosity) by the slowest regions. Despite a great deal of experimental and theoretical investigation, a complete understanding of this behaviour and of other peculiar phenomena occurring in the vicinity of the glass transition (aging, hysteresis, rejuvenation, anomalous transport phenomena \dots) is still far out of reach. None of the numerous theories covers all the above phenomenology and a common consensus around ``the'' theory of the glass transition is still lacking in the physics community (see \cite{Arceri21} for a review of various theories). A central theoretical difficulty is the fact that from the point of view of critical phenomena the situation is very peculiar: the liquid-glass transition displays a \emph{mixed character}. Indeed, diverging time and length scales (typical of second order phase transitions) are accompanied by a discontinuous order parameter (typical of first order transitions). The jump of the order parameter corresponds to the discontinuous emergence of an amorphous density profile.
Furthermore, from both the experimental and the theoretical point of view, the degeneracy of ground states complicates the problem. Thus, everybody agrees at least on one point: this is certainly not a standard type of ergodicity breaking transition! The active research on the glass transition is also stimulated by the fact that a dynamical arrest towards an amorphous state displaying similar properties occurs in a large variety of systems upon tuning a proper external parameter (see e.g.\ \cite{Arceri21}*{Sec. IV}). These phenomena, which are generically dubbed \emph{jamming transitions}, occur for several materials: grains in powders (granular media), emulsions, foams, colloidal suspensions, polymers, plastics, ceramics, etc. Last but not least, understanding the glass transition would probably yield novel theoretical and numerical tools that could be useful in other fields of science handling systems displaying a non-trivial global collective behaviour. This is the vast realm of systems that goes under the general name of \emph{complex systems}. \begin{figure} \begin{minipage}{\textwidth} \rightfigure[c]{\includegraphics[width=0.4\textwidth]{eterogeneita}} \end{minipage} \leftcaption{\label{fig:vetri:1}Logarithm of the viscosity of several glass-forming liquids plotted against the inverse temperature. Here temperature is rescaled by the empirical glass transition temperature, $T_g$, defined as the value at which the viscosity equals $10^{12}~\mathrm{Pa\cdot s}$. Reprinted from \cite{Debenedetti01} with permission. Copyright © 2001, Springer Nature Limited. } \rightcaption{\label{fig:vetri:2}Spatial map of single particle displacements in a molecular dynamics simulation of a super-cooled liquid in two dimensions. Arrows show the displacement of each particle in a trajectory of length comparable to the relaxation time. Reprinted from \cite{Berthier11} with permission. Copyright © 2011 American Physical Society.} \end{figure} \section{KCM as models for glassy dynamics} Kinetically constrained models (KCM) are toy models for the liquid-glass and jamming transitions. They rely on the idea that these are dynamical phenomena in which static interactions play a minor role. The kinetic constraints are therefore devised to mimic the mechanism of local caging which slows the dynamics down at low temperature or high density. Originally motivated by free volume theories \cite{Glarum60}, KCM have been promoted in the last decades as a paradigmatic model for the so-called \emph{dynamical facilitation theory} of the glass transition (see e.g.\ \cites{Biroli13,Chacko24}). Indeed, despite their simplicity and their trivial statics, KCM display many key dynamical features of real materials that undergo glass or jamming transitions: anomalous ergodicity breaking transitions, percolation of blocked structures, dynamical arrest, non-trivial spatio-temporal fluctuations, dynamical heterogeneities, aging\dots Furthermore, depending on the choice of the constraints, they feature either a super-Arrhenius or an Arrhenius behaviour for the relaxation time, thus recovering the behaviour of both fragile and strong super-cooled liquids. On the other hand, a major criticism of KCM is that a convincing derivation of these toy models via a coarse graining from realistic molecular models of liquids is missing (though some attempts in this direction have recently been performed both in experiments on granular glasses \cites{Candelier09,Candelier10} and in numerical simulations of super-cooled liquids \cite{Ozawa23}).
In particular, it is not clear how one should identify the facilitating (empty) sites at the molecular level. We refer the reader to \cites{Biroli13,Arceri21,Ritort03,Garrahan11} for further comments on the successes and limitations of KCM as toy models for real glass-forming liquids, as well as for an illustration of alternative theories. What we can confidently say, adopting a sentence from \cite{Arceri21}, is that KCM have been influential and very instructive in developing a theoretical understanding of glassy phenomena. Regarding derivations of KCM, we should also mention a different class of models which have been introduced to prove that kinetic constraints can emerge spontaneously from static interactions: the so-called plaquette models. These are spin models with the usual Glauber dynamics reversible w.r.t.\ a Gibbs measure corresponding to a particular Hamiltonian $\cH$. For certain choices of $\cH$, the relaxation at low temperature is dominated by the motion of ``excited'' plaquettes. These excitations act as a source of mobility, since the energy barrier for flipping a spin is smaller in their vicinity. Thus, at low temperature, their dynamics can, in a certain sense, be mapped to a KCM. We will return to plaquette models in Section~\ref{sec:plaquettes}. We also mention that in recent years there has been an increasing amount of work on the quantum versions of KCM that have been proposed in the study of Rydberg atoms \cite{Turner18} and as models for quantum many-body localization \cite{Garrahan18}. For example, in \cite{Pancotti20}, the quantum version of the East KCM has been analyzed and the occurrence of a first-order quantum transition at which the ground state becomes exponentially localized (with a consequent slowdown of dynamics) has been detected.
\chapter{Related settings and models}
\label{chap:other}
\abstract{In this final chapter, we conclude by surveying a few additional settings not covered by the models defined in Chapter~\ref{chap:models}, but strongly related to them. Indeed, so far we have allowed general update families and, in Chapter~\ref{chap:out}, non-equilibrium initial conditions. However, we have restricted our attention to the equilibrium measure $\mu$ being product, the constraints being identical at all sites, the dynamics changing the state of a single site at a time, and the underlying graph being a $d$-dimensional lattice. Each of these hypotheses may be relaxed and leads to interesting models and questions, many of which have not yet been explored.}
\section{KCM on other graphs}
\label{sec:tree}
KCM can also be defined on graphs different from $\mathbb Z^d$, including arbitrary graphs \cites{Cancrini09,Hartarsky22CBSEP}, trees \cites{Cancrini15,Martinelli13,Cancrini10}, hyperbolic lattices \cite{Sausset10} and many more, such as Bienaym\'e--Galton--Watson trees or various models of random graphs, waiting to be explored. See e.g.\ \cite{Aldous05} for a possible application of FA-1f to information storage in a sensor network. The most studied case is FA-$j$f on oriented or non-oriented regular trees of degree $k+1$. In the non-oriented version, the constraint at site $\bx$ is satisfied if at least $j$ of the neighbours of $\bx$ in the tree are empty. In the oriented version, at least $j$ of the $k$ children should be empty. As for KCM on $\mathbb Z^d$, the ergodicity thresholds for the KCM and for the corresponding BP dynamics coincide (see Theorem \ref{th:ergo}).
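To illustrate the kind of recursion that determines these thresholds, consider the oriented model and let $x$ be the probability that a given site is eventually emptied by the BP dynamics using only the empty sites in its own subtree (a hedged sketch of the standard argument; the precise formulation in the references cited below may differ). Conditioning on the initial state of the site and on its children, one arrives at the fixed-point equation
\[
x=q+(1-q)\,\mathbb{P}\bigl(\mathrm{Bin}(k,x)\ge j\bigr),
\]
and $\qc$ is the supremum of the densities $q$ for which this equation admits a solution $x<1$. For instance, for $j=1$ no such solution exists for any $q>0$, while for $j=k$ the equation reduces to a branching-process survival criterion and yields $\qc=1-1/k$.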
Thanks to the tree structure, it is not difficult to write recursive equations for the critical thresholds \cite{Balogh06a}, yielding that, for both oriented and non-oriented models, we have $\qc=1$ for $j> k$, $\qc=0$ for $j=1$ and $0<\qc<1$ for $j\in\{2,\dots,k\}$. In \cite{Martinelli13}, Martinelli and Toninelli prove that $\trel<\infty$ for all models in the whole ergodic regime, $q>\qc$. In \cite{Cancrini15}, the same authors, together with Cancrini and Roberto, analyse the scaling in the critical regime in the case $j=k$ for the oriented model and prove a power law divergence of $\trel$ as $q\downarrow \qc$. An analogous scaling is also conjectured in all cases $j\in\{2,\dots,k-1\}$, but has yet to be proven. A fundamental difference between the case $j=k$ and the cases $j\in\{2,\dots,k-1\}$, at the root of the difficulty in handling the missing cases, is that the BP transition is continuous for $j=k$ (it essentially corresponds to a standard percolation transition), while for all $j\in\{2,\dots,k-1\}$ it is discontinuous, namely $\mu_{\qc}(\tbp=\infty)>0$. Using a strategy similar to the proof of Theorem \ref{theo:East_out}, for the oriented model with $k=j=2$, exponential convergence to equilibrium when $q>\qc$ starting from an initial distribution $\mu_{q'}$ with $q'>\qc$ is proven in \cite{Cancrini10} (see Theorem 4.3 therein together with \cite{Martinelli13}*{Theorem 2}). The proof can be readily extended to all oriented models for $j=1$ or $j=k$. The result is conjectured to hold also for the remaining cases $j\in\{2,\dots,k-1\}$.
\section{Inhomogeneous KCM}
\label{sec:inhomogeneous}
We briefly considered an inhomogeneous setting in Section~\ref{sec:general:KCM}, where the update family defining the constraint is allowed to depend on the site. While our treatment in one dimension is very general, one can also consider such inhomogeneous models in higher dimensions, possibly choosing the update families at random. For instance, one may consider sites $\bx\in\bbZ^d$ having FA-$j_\bx$f constraint with $j_\bx$ chosen i.i.d.\ at random according to some distribution on $\{0,\dots,2d\}$. A few such models are studied in \cites{Shapira20a,Shapira19polluted}, but most remain unexplored.
\section{KCM with interactions}
\label{sec:interaction}
Another natural modification of KCM is to introduce static interactions between occupied sites. This may be achieved by updating each site w.r.t.\ a measure depending on the current state of other sites. For instance, one could consider the $\cU$-constrained Glauber dynamics for the Ising model with inverse temperature $\beta$ with generator \eqref{eq:generator}, where, instead of $\mu_\bx(\omega_\bx=0)=q$, we set
\[\mu_\bx(\omega_\bx=0)=\frac{1}{1+\exp(\beta\sum_{\by\sim\bx}(2\omega_\by-1))},\]
the sum running over nearest neighbours of $\bx$ in $\bbZ^d$. In fact, this was already considered in \cite{Fredrickson84} together with an external magnetic field. More generally, Gibbs measures were considered in \cite{Cancrini09}*{Section~5}. While the initial motivation behind KCM is to investigate the extent to which glassy phenomenology can be explained by purely dynamical means, in reality interactions are certainly present. It is therefore interesting, but probably challenging, to study such models.
\section{Plaquette models}
\label{sec:plaquettes}
Instead of kinetic constraints, \emph{plaquette models} have static interactions as in Section~\ref{sec:interaction}, but of multi-body type.
They were introduced to show that kinetic constraints can emerge from static interactions at low temperatures {\cites{Garrahan02,Newman99,Turner15}}. An example is the \emph{square plaquette model} on $\bbZ^2$. For any $\bx\in\bbZ^2$, the plaquette of $\bx$ in this model is the square $P_\bx=\{\bx,\bx+\be_1,\bx+\be_2,\bx+\be_1+\be_2\}$. The Gibbs weights are defined by the Hamiltonian
\[-\sum_{\bx\in\bbZ^2}\prod_{\by\in P_\bx}(2\omega_\by-1)\]
and one then considers an unconstrained single-site Glauber dynamics. The behaviour of the square plaquette model turns out to be similar to that of FA-1f, while a similar triangular plaquette model with plaquettes of the form $P_\bx=\{\bx,\bx+\be_1,\bx+\be_1+\be_2\}$ is conjectured to behave like the East KCM. Work on these models can be found in \cites{Chleboun17,Chleboun20,Chleboun21}.
\section{Conservative models}
\label{sec:conservative}
The physical motivation behind KCM (recall Chapter~\ref{chap:intro}) views sites of $\bbZ^d$ as mesoscopic volumes whose particle density may be lower or higher, as reflected by the state of the site. However, if we take a microscopic perspective, it is more natural to consider constrained models in which the number of particles is conserved. The first and most classical such models are the Kob--Andersen ones \cite{Kob93}. In KA-$j$f, one may exchange the states of any two neighbouring sites, $\vec x$ and $\vec y$, provided they both have at least $j-1$ empty neighbours in $\bbZ^d\setminus\{\vec x,\vec y\}$. The case $j=1$ coincides with the simple symmetric exclusion process (SSEP) and we will disregard it in the following. One can similarly define conservative versions of $\cU$-KCM, obtaining the class of \emph{kinetically constrained lattice gases} (KCLG) (see \cite{Cancrini10a}*{Section 2} for a formal definition of this class). It is immediate to verify that for any $q\in[0,1]$, $\mu_q$ is a reversible measure for these dynamics. As for the non-conservative models, we define $\qc$ by \eqref{eq:def:qc}. In \cite{Cancrini10a}*{Proposition 2.16}, a conservative analogue of Theorem~\ref{th:ergo} is established, stating that for any KCLG the ergodicity threshold coincides with the \emph{exchangeability threshold}, defined as the minimal value of $q$ above which, for $\mu_q$-almost every configuration and for any pair of sites, there exists a legal path exchanging their occupation variables. The first mathematical result on KCLG is due to Biroli, Fisher and Toninelli \cites{Toninelli04, Toninelli05}, who proved that $\qc=0$ for any KA-$j$f with $j\in\{2,\dots,d\}$.\footnote{The cases $j>d$ trivially lead to $\qc=1$.} The key ingredients of this proof are:
\begin{itemize}
\item the construction of a set of configurations, the so-called \emph{frameable configurations}, which can be connected by a legal path to a configuration with well-chosen boundary state for the dynamics on finite volume with occupied boundary condition (see \cite{Martinelli20}*{Definition 3.3} and \cite{Toninelli05});
\item the construction of legal paths connecting any two frameable configurations with the same number of empty sites;
\item the fact that the $\mu_q$-probability that a configuration is frameable goes to one sufficiently fast as the volume increases (see \cite{Martinelli20}*{Proposition 3.26}).
More precisely, there exist $c_+,c_->0$, depending on $d$ and $j$, such that, setting
\[\Xi_{\pm}(q,j,d)=\exp^{\circ (j-1)}\left(\frac{c_{\pm}}{q^{\frac{1}{d-j+1}}}\right),\]
when $q\to 0$ and $L\to\infty$ faster (resp.\ slower) than $\Xi_+(q,j,d)$ (resp.\ $\Xi_-(q,j,d)$), the probability of being frameable goes to one (resp.\ to zero).
\end{itemize}
Combining these results with canonical paths, renormalization arguments and tools borrowed from oriented percolation, Martinelli, Shapira and Toninelli \cite{Martinelli20}*{Theorem 1} prove that the spectral gap of the KA-$j$f models on $\{1,\dots,L\}^d$ with unconstrained sources at the boundary in any dimension $d$ and for any $j\in [2,d]$ scales diffusively (as for SSEP), namely as $L^{-2}$, and with a density pre-factor of the form $\Xi_{\pm}(q,j,d)^{-1}$ (while for SSEP there is no density pre-factor). In \cite{Blondel18}, Blondel and Toninelli consider the behavior of a tagged particle and prove (following the ideas sketched in \cite{Toninelli04}) that for all $d\geq 2$ and for any $j\in [2,d]$, diffusive behaviour holds at any density $q\in (0,1)$. More precisely, if we distribute the configuration according to $\mu_q$, condition on the presence of a particle initially at the origin, tag it and denote by $\vec X_t$ its position at time $t$, for some matrix $D(q)$ such that $\vec e_i \cdot D(q) \vec e_i>0$ for all $i\in\{1,\dots,d\}$, it holds that
\begin{equation} \label{eq:diff} \lim_{\epsilon\to 0} \epsilon \vec X_{\epsilon^{-2}t}=\sqrt{2D(q)}B_t, \end{equation}
where $B_t$ is a standard $d$-dimensional Brownian motion and the convergence holds in the sense of weak convergence of path measures. This contradicts conjectures based on numerical simulations \cites{Kob93,Kurchan97} claiming the occurrence of a diffusive/non-diffusive transition. Furthermore, Ertul and Shapira prove (see \cite{Ertul21}*{Theorem 2.3}) upper and lower bounds for $D(q)$ of the form $\Xi_{\pm}(q,j,d)^{-1}$ (modulo a logarithmic correction in the case $d=2$). The fast shrinking of $D(q)$ to zero explains why it was incorrectly inferred from numerical simulations \cite{Kob93} that for $d=3$ and $j=3$ a diffusive/non-diffusive transition would occur.
\begin{problem}
\label{prob:conservative}
Two other natural issues in the conservative setting are
\begin{enumerate}
\item \label{item:prob1} determining the evolution of macroscopic density profiles, namely establishing the hydrodynamic limit, and the fluctuations around these profiles;
\item \label{item:prob2} establishing relaxation at equilibrium in infinite volume.
\end{enumerate}
Concerning \ref{item:prob1}, a natural candidate for the hydrodynamic limit is a parabolic equation of porous medium type that degenerates when the density approaches one. As for fluctuations, it is reasonable to expect Edwards--Wilkinson Gaussian fluctuations, as for SSEP \cite{DeMasi84}. Establishing these results in the presence of constraints is particularly challenging (see \cite{Goncalves09} where this is achieved for a different KCLG). Concerning \ref{item:prob2}, a first result \cite{Cancrini10} shows that there exists $C(q)>0$ such that for any local function $f$ it holds that, for all $t>0$,
\[\var_{\mu_q} (P_t f) \leq \frac{C(q) \|f\|_{\infty}}{t}.\]
We expect the correct behavior to be of the form $t^{-d/2}$, as for SSEP.
\end{problem}
Other kinetically constrained lattice gases can be found in \cites{Bertini04,Goncalves09,Bonorino20,Goncalves11,dePaula21,Jackle91a,Ertel88,Nagahata12,Shapira23}.
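To fix ideas about the exchange rule defining KA-$j$f, here is a minimal, unoptimised Python sketch of the dynamics on a two-dimensional torus (an illustration only: the box size, the choice $j=2$, the initial density and the convention that nearest-neighbour pairs are attempted uniformly at random are our own simplifying assumptions, not taken from the references above).
\begin{verbatim}
import random

L, j, q = 12, 2, 0.2   # torus side, KA-jf parameter, density of empty sites

# omega[x] = 0 means empty, omega[x] = 1 means occupied, as in the text
omega = {(x, y): 0 if random.random() < q else 1
         for x in range(L) for y in range(L)}

def neighbours(site):
    x, y = site
    return [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def exchange_allowed(a, b):
    # KA-jf constraint: both a and b must have at least j-1 empty
    # neighbours outside the pair {a, b}
    for site, other in ((a, b), (b, a)):
        empties = sum(omega[n] == 0 for n in neighbours(site) if n != other)
        if empties < j - 1:
            return False
    return True

def attempt_exchange():
    # elementary move: pick a nearest-neighbour pair uniformly at random
    # and swap the two occupation variables if the constraint is satisfied
    a = random.choice(list(omega))
    b = random.choice(neighbours(a))
    if exchange_allowed(a, b):
        omega[a], omega[b] = omega[b], omega[a]

for _ in range(10 ** 5):
    attempt_exchange()

# the number of empty sites is conserved by construction
print(sum(v == 0 for v in omega.values()) / L ** 2)
\end{verbatim}
Since the move swaps the two occupation variables and the constraint is symmetric in the pair, the dynamics conserves the number of particles and is reversible with respect to $\mu_q$, in agreement with the discussion above.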
A model that is currently being studied very actively is the \emph{facilitated exclusion process}. In this one-dimensional model, a particle is allowed to jump to a neighbouring empty site, provided its other neighbour is occupied. We direct the reader to \cites{Blondel21,Erignoux24,Erignoux24a,Ayre24,DaCunha24,Erignoux23,Goldstein21,Blondel20,Massoulie24} for work on this topic.
\section{Tracer diffusion}
\label{sec:tracer}
In the previous section we described the tagged particle behavior for KCLG. Though the non-conservative dynamics of KCM is not diffusive, one can define the following similar problem. Consider a stationary KCM evolving from a configuration distributed according to $\mu_q$ and inject at time zero a particle (the tracer) at the origin. The tracer moves like a modified random walk attempting to jump at rate one to a site chosen uniformly at random among its nearest neighbours, with the jump being allowed if and only if both the sites occupied by the walker before and after the move are empty (see \cite{Blondel14} for a precise definition). Note that the KCM constitutes a dynamical random environment in which the tracer evolves; the environment is not influenced by the motion of the tracer. Blondel proved (see \cite{Blondel14}*{Propositions 3.1 and 3.2}) that if the underlying KCM has a positive spectral gap, the tracer has a diffusive behavior with a non-degenerate diffusion matrix, namely \eqref{eq:diff} holds. Furthermore, for the FA-1f model in any dimension $d\geq 1$, it holds that $\vec e_i \cdot D(q)\vec e_i\sim q^2$ for $q\downarrow 0$ and $i\in\{1,\dots,d\}$. Instead, for the East model \cite{Blondel14}*{Theorem 3.5} proves that $D$ scales as the spectral gap (modulo power law corrections in $q$). This corrects the conjecture put forward by physicists \cites{Jung04,Jung05} affirming that for the East model $D$ would scale as $\trel^{-\xi}$ with $\xi<1$. Asymmetric tracers on stationary KCM have also been the object of investigation. See for example \cite{Avena16} for results on a tracer on the one-dimensional East model with a positive (resp.\ negative) drift when the occupation variable of the East model at the tracer's current position is occupied (resp.\ empty).
\section{Upper triangular matrix walk}
We conclude with a further context in which KCM arise naturally beyond the study of glassy dynamics and interacting particle systems. Let $G_n$ be the group of $n\times n$ upper triangular matrices with entries in the two-element field $\bbF_2$ and ones on the diagonal. The following Markov chain was considered in \cite{Coppersmith00}. At each step, with probability $1/2$ nothing happens and, for each $i\in\{1,\dots,n-1\}$, with probability $1/(2n-2)$, we add row $i+1$ to row $i$. This corresponds to performing a lazy random walk on $G_n$ with generator set $(I+E_{i,i+1})_{i=1}^{n-1}$. If we restrict our attention to column $j$ of the matrix, this Markov chain becomes exactly the East process with parameter $q=1/2$ on the segment $\Lambda_j=\{1,\dots, j-1\}$ with boundary condition $\bzero_{\bbZ\setminus\Lambda_j}$. We direct the reader to \cites{Peres13,Ganguly19} for works on this random walk.
\chapter{Universality}
\label{chap:universality}
\abstract{In this chapter, we consider KCM with completely arbitrary update families. The goal of universality is to identify what kinds of behaviour are possible within this vast collection of models and to efficiently classify all update families with respect to their behaviour.
We begin by considering one-dimensional models, which feature three rough universality classes, represented by the three nearest neighbour models studied in Chapter~\ref{chap:1d}. We then provide some background on the two-dimensional BP universality theory, before moving on to two-dimensional KCM. We present the complete rough and refined universality theory for KCM in two dimensions and cover the essential elements of their proofs, building on the techniques exhibited in Chapters~\ref{chap:1d} and~\ref{chap:FA2f}.}
\section{KCM universality in one dimension}
Let us warm up by considering one-dimensional KCM. Recall from Theorems~\ref{th:FA1f} and~\ref{th:East} that the correct scaling of $\trel$ (or $\tau_0$) for FA-1f is $1/q^3$, while it is $\exp((\log(1/q))^2/(2\log 2))$ for the East KCM. For FA-2f, we have $\qc=1$, as discussed in Section~\ref{sec:FA2f:1d}. The outcome of the universality theory in one dimension is that the only possible behaviours are those of FA-1f, East and FA-2f. In order to state the universality result, we need to define stable directions, which will determine the universality class.
\begin{definition}[Stable directions]
Fix a one-dimensional update family $\cU$. We say that the positive (resp.\ negative) direction is \emph{unstable}, if there exists $U\in\cU$ such that $U\subset\{-1,-2,\dots\}$ (resp.\ $\{1,2,\dots\}$) and \emph{stable} otherwise.
\end{definition}
Indeed, it is not hard to check that FA-1f, East and FA-2f have zero, one and two stable directions, respectively. The stability of a direction governs whether empty sites can reproduce in that direction. The following result is the rough universality classification in one dimension due to Mar\^ech\'e, Martinelli, Morris and Toninelli~\cites{Mareche20combi,Mareche20Duarte,Martinelli19a}.
\begin{theorem}[One-dimensional rough universality]
\label{th:univ:1d}
For a one-dimensional KCM with update family $\cU$ we have that
\begin{itemize}
\item if $\cU$ has two unstable directions, then $\qc=0$ and, for some $C>0$, \[\lim_{q\to 0}\bbP_\mu\left(1/C\le \frac{\log\tau_0}{\log(1/q)}\le C\right)=1;\]
\item if $\cU$ has one unstable direction, then $\qc=0$ and, for some $C>0$, \[\lim_{q\to 0}\bbP_\mu\left(1/C\le \frac{\log\tau_0}{\log^2(1/q)}\le C\right)=1;\]
\item if $\cU$ has no unstable direction, then $\qc=1$.
\end{itemize}
\end{theorem}
\begin{remark}
The asymptotics in Theorem~\ref{th:univ:1d} also hold for $\trel$ instead of $\tau_0$.
\end{remark}
The proof of this result follows the same lines as what we have already seen, so, rather than repeating it, we only provide the appropriate pointers. The lower bound for update families with two unstable directions follows from the fact that there is typically no empty site at distance much less than $1/q$ from the origin. The corresponding upper bound follows from {\eqref{eq:F0F1},} Proposition~\ref{prop:FA1f:East:generalised} and renormalisation (empty renormalised sites correspond to completely empty intervals of sites with large fixed length). The upper bound for update families with one unstable direction follows from Theorem~\ref{th:general:1d}. The corresponding lower bound is proved as in Section~\ref{subsec:East:lower}, again with a renormalisation in order to apply the combinatorial Proposition~\ref{prop:comb}, considering a renormalised site empty if it contains at least one empty site (not to be confused with the upper bound renormalisation above).
Finally, the result for update families with no unstable directions is immediate, since the state of any sufficiently long interval of occupied sites can never be modified. It would be interesting to know the sharp asymptotics of $\log\tau_0$ for general $\cU$, as in the case of FA-1f and East, but this matter is still open. It is good to note that the corresponding problem for BP is easy (see \cite{Hartarsky22phd}*{Proposition 1.3.4}). \section{BP universality in two dimensions} Before we move on to two-dimensional universality for KCM, we require some background on the side of BP. Since our focus is on KCM, we take these BP results for granted and refer the interested reader to \cites{Morris17a} for a detailed survey of the methods involved. \subsection{Rough universality in BP} We start by generalising the definition of stable directions. We use $\|\cdot\|$ and $\<\cdot,\cdot\>$ to denote the Euclidean norm and scalar product. Let $S^{1}=\{\bu\in\bbR^2:\|\bu\|=1\}$ be the unit circle. We call the elements of $S^{1}$ \emph{directions}. We consider the open half-plane with outer normal $\bu\in S^{1}$ \begin{equation}\label{eq:openhalf}\bbH_\bu=\left\{\bx\in\bbR^2:\<\bx,\bu\><0\right\}.\end{equation} \begin{definition}[Stable directions] Fix an update family $\cU$. We say that $\bu\in S^{1}$ is \emph{unstable}, if there exists $U\in\cU$ such that $U\subset \bbH_\bu$ and \emph{stable} otherwise. A direction is called \emph{strongly stable}, if it belongs to the interior of the set of stable directions. {A stable direction $\bu\in S^1$ is \emph{isolated stable}, if there exists an open interval $I\subset S^1$ such that the only stable direction in $I$ is $\bu$.} \end{definition} The stable directions of the models in Figure~\ref{fig:rules} are given in Figure~\ref{fig:stable}. Stable directions allow us to define the rough universality classes in two dimensions. \begin{definition}[Rough universality partition] \label{def:2d:rough} Let $\cC=\{\bbH_\bu\cap S^1:\bu\in S^1\}$ denote the set of open semicircles of $S^1$. An update family $\cU$ is: \begin{itemize} \item \emph{supercritical} if there exists $C\in\cC$ containing no stable direction. If additionally \begin{itemize} \item there exist two non-opposite stable directions, $\cU$ is \emph{rooted}; \item there do not exist two non-opposite stable directions, $\cU$ is \emph{unrooted}. \end{itemize} \item \emph{critical} if every $C\in\cC$ contains a stable direction and there exists $C\in\cC$ containing finitely many stable directions.\\ \item \emph{subcritical} if every $C\in\cC$ contains infinitely many stable directions. If additionally \begin{itemize} \item there exists an unstable direction, $\cU$ is \emph{nontrivial}; \item all directions are stable, $\cU$ is \emph{trivial}. \end{itemize} \end{itemize} \end{definition} Comparing Definition~\ref{def:2d:rough} with the one-dimensional case of Theorem~\ref{th:univ:1d}, we see that the new rough universality classes in two dimensions are the critical and subcritical nontrivial ones. The following rough universality theorem for BP is due to Balister, Bollob\'as, Przykucki, Smith and Uzzell \cites{Balister16,Bollobas15}. 
\begin{figure}
\centering
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\draw [very thick,color=red] plot[domain=0:pi/2,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw (0.5,0.5) node {$\infty$};
\end{tikzpicture}
\caption{East (Figure~\ref{fig:East})}
\end{center}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\end{tikzpicture}
\caption{FA-$1$f (Figure~\ref{fig:FA1f})}
\end{center}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\fill [color=red] (0,1) circle (2pt) node[anchor=north,black] {$1$};
\fill [color=red] (0,-1) circle (2pt) node[anchor=south,black] {$1$};
\fill [color=red] (1,0) circle (2pt) node[anchor=east,black] {$1$};
\fill [color=red] (-1,0) circle (2pt) node[anchor=west,black] {$1$};
\end{tikzpicture}
\caption{FA-$2$f (Figure~\ref{fig:FA2f})}
\end{center}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\fill [color=red] (1,0) circle (2pt) node[anchor=east,black] {$1$};
\draw [very thick,color=red] plot[domain=pi/2:3*pi/2,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw (-0.7,0) node {$\infty$};
\end{tikzpicture}
\caption{Duarte (Figure~\ref{fig:Duarte})}
\end{center}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\draw [very thick,color=red] plot[domain=3*pi/2:3*pi,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw (0.5,0.5) node {$\infty$};
\end{tikzpicture}
\caption{North-East (Figure~\ref{fig:NE})}
\end{center}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=-0.3\textwidth,y=0.3\textwidth]
\clip (-1.1,-1.1) rectangle (1.1,1.1);
\draw[very thin](0,0) circle (0.3\textwidth);
\draw [very thick,color=red] plot[domain=0:pi/4,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw [very thick,color=red] plot[domain=pi/2:3*pi/4,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw [very thick,color=red] plot[domain=pi:5*pi/4,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw [very thick,color=red] plot[domain=3*pi/2:7*pi/4,variable=\t] ({cos(\t r)},{sin(\t r)});
\draw (0.65,0.27) node {$\infty$};
\draw (-0.27,0.65) node {$\infty$};
\draw (-0.65,-0.27) node {$\infty$};
\draw (0.27,-0.65) node {$\infty$};
\end{tikzpicture}
\caption{Spiral (Figure~\ref{fig:spiral})}
\end{center}
\end{subfigure}
\caption{Stable directions of the two-dimensional update families from Figure~\ref{fig:rules}.\label{fig:stable}}
\end{figure}
\begin{theorem}[Two-dimensional rough universality for BP]
\label{th:2d:univ:rough}
Let $\cU$ be a two-dimensional update family.
If $\cU$ is \begin{itemize} \item supercritical, then $\qc=0$ and, for some $C>0$, \[\lim_{q\to 0}\mu\left(1/C\le \frac{\log\tbp}{\log(1/q)}\le C\right)=1;\] \item critical, then $\qc=0$ and, for some $C>0$, \[\lim_{q\to 0}\mu\left(1/C\le \frac{\log\log\tbp}{\log(1/q)}\le C\right)=1;\] \item subcritical nontrivial, then $0<\qc<1$. \item subcritical trivial, then $\qc=1$. \end{itemize} \end{theorem} We note that a complete generalisation of Theorem~\ref{th:2d:univ:rough} has already been established in arbitrary dimension by Balister, Bollob\'as, Morris and Smith \cites{Balister22,Balister24,BalisterNaNb}, but its KCM counterpart is still missing. The reader interested in subcritical models is also encouraged to consult \cites{Hartarsky22Toom,Toom80,Gray99,Hartarsky22sharpness,Hartarsky21} for different approaches to bounding $\qc$. While the behaviour of nontrivial subcritical models is very interesting, it is quite challenging and not much is known currently, so we discard them in the sequel. We further refer to \cite{Hartarsky22phd}*{Section~1.5.1} for a detailed account of the history of universality in BP and KCM. \subsection{Refined universality in BP} The bounds on $\tbp$ for critical models in Theorem~\ref{th:2d:univ:rough} are quite loose due to the iterated logarithm, particularly compared to results for specific models (recall Theorem~\ref{th:BP}). In two dimensions, it is possible to obtain much more precise asymptotics. In order to state them, we require a refinement of the notion of stable direction (see Figure~\ref{fig:stable} for examples). Recall that $|\cdot|$ is the number of empty sites and $[\cdot]$ is the closure from \eqref{eq:def:closure}. \begin{definition}[Difficulty] \label{def:alpha} The \emph{difficulty} $\alpha(\bu)$ of $\bu\in S^1$ is \begin{itemize} \item $0$ if $\bu$ is unstable; \item $\infty$ if $\bu$ is stable, but not isolated stable; \item $\min\{|Z|:Z\subset\bbZ^2,|[\bzero_{\bbH_\bu\cup Z}\cdot\bone_{\bbZ^2\setminus(\bbH_\bu\cup Z)}]_{\bbZ^2\setminus\bbH_\bu}|=\infty\}$ otherwise. \end{itemize} The \emph{difficulty} of $\cU$ is \[\alpha=\alpha(\cU)=\min_{C\in\cC}\max_{\bu\in C}\alpha(\bu).\] We say that a direction $\bu\in S^1$ is \emph{hard} if $\alpha(\bu)>\alpha$. We say that $\cU$ is \emph{unbalanced} if there exist two opposite hard directions and \emph{balanced} otherwise. \end{definition} In words, the difficulty of an isolated stable direction is the smallest number of empty sites needed to empty an infinite number of sites with the help of an empty half-plane with outer normal the direction. The difficulty of the update family is given by the easiest open semi-circle, a semi-circle being as hard as the hardest direction it contains. Comparing Definitions~\ref{def:2d:rough} and~\ref{def:alpha}, it can be shown \cite{Bollobas23} that an update family is supercritical if $\alpha=0$, subcritical if $\alpha=\infty$ and critical if $0<\alpha<\infty$. Among the critical examples of Figure~\ref{fig:rules}, FA-2f is balanced and Duarte is unbalanced, both having difficulty $1$ (see Figure~\ref{fig:stable}). With these notions, the refined universality result for BP of Bollob\'as, Duminil-Copin, Morris and Smith \cite{Bollobas23} is the following. \begin{theorem}[Two-dimensional refined universality for BP] \label{th:2d:univ:refined:BP} Let $\cU$ be a two-dimensional critical update family of difficulty $\alpha$. 
Then, for some $C>0$, \[\lim_{q\to0}\mu\left(1/C\le \frac{q^\alpha\log\tbp}{(\log(1/q))^\gamma}\le C\right)=1,\] where $\gamma=0$, if $\cU$ is balanced, and $\gamma=2$, if $\cU$ is unbalanced. \end{theorem} Naturally, Theorem~\ref{th:2d:univ:refined:BP} is consistent with Theorem~\ref{th:BP} for 2-neighbour and Duarte BP. \section{KCM universality in two dimensions} \subsection{Statement} With BP universality at hand, we may turn to KCM. For subcritical models, in view of Theorems~\ref{th:ergo} and~\ref{th:exponential} combined with the BP rough universality Theorem~\ref{th:2d:univ:rough}, we do not have anything new to say. We therefore focus on supercritical and critical models in the next result due to Mar\^ech\'e, Martinelli, Morris and Toninelli \cites{Mareche20combi,Mareche20Duarte,Martinelli19a}. Recall Definition~\ref{def:2d:rough}. \begin{theorem}[Two-dimensional rough universality for KCM] \label{th:universality:rough:KCM:2d}For any two-dimensional KCM with update family $\cU$ we have that \begin{itemize} \item if $\cU$ is supercritical unrooted, then for some $C>0$, \begin{equation} \label{eq:rough:supercrit:unrooted}\lim_{q\to0}\bbP_\mu\left(1/C\le \frac{\log \tau_0}{\log(1/q)}\le C\right)=1;\end{equation} \item if $\cU$ is supercritical rooted, then for some $C>0$, \begin{equation} \label{eq:rough:supercrit:rooted}\lim_{q\to0}\bbP_\mu\left(1/C\le \frac{\log \tau_0}{\log^2(1/q)}\le C\right)=1;\end{equation} \item if $\cU$ is critical, then for some $C>0$, \begin{equation} \label{eq:rough:critical}\lim_{q\to0}\bbP_\mu\left(1/C\le \frac{\log\log \tau_0}{\log(1/q)}\le C\right)=1.\end{equation} \end{itemize} The same asymptotics hold for $\trel$ instead of $\tau_0$. \end{theorem} The proof of Theorem~\ref{th:universality:rough:KCM:2d} will be explained in Section~\ref{subsec:rough}, but before that, let us first discuss refined universality. Recalling Definition~\ref{def:alpha}, we only need the following vocabulary in order to define the refined KCM universality classes. \begin{definition}[Further refined universality types] \label{def:2d:refined} A critical two-dimensional update family $\cU$ is \emph{rooted}, if there exist two non-opposite hard directions and \emph{unrooted} otherwise. We say that $\cU$ is \emph{semi-directed}, if there is exactly one hard direction and \emph{isotropic} if there are no hard directions. \end{definition} Notice that balanced unrooted update families are either semi-directed or isotropic. The above notions allow us to state the refined universality result obtained over a series of works of Hartarsky, Mar\^ech\'e, Martinelli, Morris and Toninelli \cites{Hartarsky22univlower,Hartarsky24univupper,Martinelli19a,Hartarsky21a,Hartarsky20} (see \cite{Hartarsky24univupper} for the final step and a detailed discussion), also relying on the BP refined universality Theorem~\ref{th:2d:univ:refined:BP} of \cite{Bollobas23}. \begin{theorem}[Two-dimensional refined universality for KCM] \label{th:2d:univ:refined:KCM} Let $\cU$ be a two-dimensional critical update family of difficulty $\alpha$. Then for some $C>0$, \[\lim_{q\to0}\bbP_\mu\left(1/C\le \frac{q^{\alpha\cdot\beta}\log\tau_0}{(\log(1/q))^\gamma(\log\log(1/q))^\delta}\le C\right)=1,\] where the exponents $\beta$, $\gamma$ and $\delta$ are given in the following table depending on whether $\cU$ has finite or infinite number of stable directions; is balanced or unbalanced; is rooted or unrooted; is semi-directed or isotropic, if balanced and unrooted. 
\begin{center} \begin{tabular}{c| >{\centering\arraybackslash}m{4cm}| >{\centering\arraybackslash}m{2cm}| >{\centering\arraybackslash}m{2cm}} \multirow{2}{*}{$\beta, \gamma, \delta$}&Infinite stable directions & \multicolumn{2}{c}{Finite stable directions}\\\cline{2-4} & Rooted & Rooted & Unrooted\\ \hline Unbalanced & $2,4,0$ & $1,3,0$ & $1,2,0$\\ \hline \multirow{2}{*}{Balanced} & \multirow{2}{*}{$2,0,0$} & \multirow{2}{*}{$1,1,0$} & S.-dir.~$1,0,1$\\\cline{4-4} &&&{Iso.~$1,0,0$}\\ \end{tabular}\end{center} In particular, $\beta=2$, if $\cU$ has a strongly stable direction, and $\beta=1$ otherwise. \end{theorem} It is not hard to check that the seven refined universality classes in Theorem~\ref{th:2d:univ:refined:KCM} do exhaust the critical rough universality class of Theorem~\ref{th:universality:rough:KCM:2d}. We emphasise that Theorem~\ref{th:2d:univ:refined:KCM} is currently the sharpest result available for any critical KCM with the exception of FA-2f and slight variations thereof, namely modified FA-2f and the Frob\"ose KCM (recall Section~\ref{subsubsec:matryoshka} and see \cites{Frobose89,Gravner09}). As in Theorem~\ref{th:FA2f:sharp:threshold}, the upper bounds in Theorem~\ref{th:2d:univ:refined:KCM} are not known to hold for $\trel$. Interestingly, an analogue of Theorem~\ref{th:2d:univ:refined:KCM} is not available in one dimension. It would be good to fill this gap by solving the following problem. \begin{problem}[{Unrooted scaling in one dimension}] \label{prob:1d:refined:univ} Let $\cU$ be a one-dimensional update family with two unstable directions. Prove existence of and determine $\alpha\in(0,\infty)$ such that \[\lim_{C\to\infty}\liminf_{q\to0}\bbP_\mu\left(1/C\le q^\alpha\tau_0\le C\right)=1.\] \end{problem} It should be noted that, while the analogous problem for BP is an exercise \cite{Hartarsky22phd}*{Proposition~1.3.4}, in view of Theorem~\ref{th:FA1f}, Problem~\ref{prob:1d:refined:univ} is not. \subsection{Rough universality proofs} \label{subsec:rough} We next outline the proof of Theorem~\ref{th:universality:rough:KCM:2d}. Firstly, the lower bound of \eqref{eq:rough:supercrit:unrooted} is immediate, since there is typically no empty site at distance much smaller than $q^{-1/2}$ from the origin. The lower bound in \eqref{eq:rough:critical} follows from Theorem~\ref{th:2d:univ:rough} together with \eqref{eq:t0t0bp} and \eqref{eq:tau0}. The upper bounds in \eqref{eq:rough:supercrit:unrooted} and \eqref{eq:rough:supercrit:rooted} are proved using Proposition~\ref{prop:FA1f:East:generalised} and \eqref{eq:F0F1} together with a simple renormalisation, as for Theorem~\ref{th:univ:1d}. Namely, each empty site corresponds to a suitably oriented rectangle whose sites are all empty. The exact shape of this rectangle is chosen based on the proof of BP rough universality, so that, if it is empty, it is able to reproduce an empty copy of itself (see \cites{Bollobas15,Martinelli19a}). The upper bound in \eqref{eq:rough:critical} proceeds like the proof of Theorem~\ref{th:FA2f:MT}, but using a generalised one-dimensional KCM with East instead of FA-1f constraint, still covered by Proposition~\ref{prop:FA1f:East:generalised}. We are left with the lower bound in \eqref{eq:rough:supercrit:rooted} which is the only one requiring additional ideas with respect to what we have already seen. 
The overall scheme remains the same---we seek to establish a combinatorial bottleneck akin to the one of Proposition~\ref{prop:comb} and convert it into the desired bound as in Section~\ref{subsubsec:bottleneck:deduction}. However, contrary to the one-dimensional case of Theorem~\ref{th:univ:1d}, the combinatorial bottleneck cannot be deduced from the one-dimensional Proposition~\ref{prop:comb} via renormalisation. We next present a sketch of the proof of the following result of Mar\^ech\'e \cite{Mareche20combi}*{Theorem~4}, which is the crucial ingredient.
\begin{proposition}[Combinatorial bottleneck for rooted models]
\label{prop:Laure:combi}
Let $\cU$ be a two-dimensional update family which is not supercritical unrooted. There exists an integer $C>0$ such that the following holds for any integer $n\ge 1$. Consider the $\cU$-KCM on $\Lambda_n=\{-Cn2^n,\dots,Cn2^n\}^2$ with boundary condition $\bzero_{\bbZ^2\setminus \Lambda_n}$. Let $V(n)$ be the set of all configurations that the process can reach from $\bone_{\Lambda_n}$ via a legal path (recall Definition~\ref{def:legal:path}) in which all configurations contain at most $n$ empty sites. Then $\omega_0=1$ for all $\omega\in V(n)$.
\end{proposition}
\begin{figure}
\centering
\begin{tikzpicture}[x=0.2cm,y=0.2cm]
\draw (-9,-9) rectangle (9,9);
\draw (-3,-3) rectangle (3,3);
\draw [fill=black, fill opacity=0.5,even odd rule] (-5,-5) rectangle (5,5) (-7,-7) rectangle (7,7);
\draw (0,0) node{$\Lambda_{n-1}$};
\draw (9,0) node[right]{$\Lambda_{n}$};
\end{tikzpicture}
\caption{Illustration of the proof of Proposition~\ref{prop:Laure:combi}. The buffer zone is shaded.}
\label{fig:Laure}
\end{figure}
\begin{proof}[Sketch]
For simplicity, we focus on the two-dimensional East KCM (see Figure~\ref{fig:East}). The proof proceeds by induction on $n$, claiming that for any $\omega\in V(n)$, $|\omega_{\Lambda_{n-1}}|\le n-1$. Indeed, iterating this fact, we obtain that for any $\omega\in V(n)$, $|\omega_{\Lambda_1}|\le 1$, so that the single empty site in $\Lambda_1$ cannot be at distance more than $C$ from the boundary of $\Lambda_1$, so it cannot reach the origin. In order to show the claim, using the reversibility of legal paths (recall Definition~\ref{def:legal:path}), we may instead prove that, if we start from $\omega\neq\bone_{\Lambda_n}$ such that $\omega_{\Lambda_n\setminus\Lambda_{n-1}}=\bone$ and never visit configurations with more than $n$ empty sites, then we cannot reach the $\bone_{\Lambda_n}$ configuration. To do this, we prove by a second induction, on the number of steps in the legal path, that the following two conditions remain valid. Firstly, a frame-shaped buffer zone around $\Lambda_{n-1}$ with no empty site remains intact (see Figure~\ref{fig:Laure}). Secondly, there always remains an empty site in the internal region encircled by the buffer, so the dynamics cannot reach $\bone_{\Lambda_n}$. Since an empty site remains trapped in this internal region, only $n-1$ empty sites are available for disrupting the buffer from the outside, which is impossible by the induction hypothesis on $n$. Therefore, it suffices to show that the buffer cannot be disrupted from the inside either. By projecting the two-dimensional East model on each axis it is clear that no empty site can enter the right and top parts of the buffer from the inside, and the projections of the topmost and rightmost empty sites in the region inside the buffer need to remain where they were initially.
The left part of the buffer (and similarly for the bottom one) cannot be reached from the inside, because at least one empty site needs to remain as far right as the rightmost initial one was, so we only have $n-1$ empty sites with which to reach the left part of the buffer, which is impossible by the induction hypothesis on $n$.
\end{proof}
With the above sketch in mind, we encourage the reader to consult the full proof in \cite{Mareche20combi}, which actually takes only four pages. We also remark that Proposition~\ref{prop:Laure:combi} and its proof generalise immediately to any dimension.
\subsection{Refined universality proofs}
\subsubsection{Lower bounds}
\label{subsec:refined:lower}
We start with the lower bounds in Theorem~\ref{th:2d:univ:refined:KCM}, since they are closely related to Proposition~\ref{prop:Laure:combi}. We follow \cite{Hartarsky22univlower} and refer to that work for the formal proof. Somewhat surprisingly, all seven refined universality classes are governed by the same combinatorial bottleneck, but on different length scales and for different reasons. For the sake of concreteness, we focus on the model from Figure~\ref{fig:log1}, whose difficulty is $\alpha=1$.
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\begin{tikzpicture}[x=0.5cm,y=0.5cm]
\draw (-1.5,1.5) circle (0.15);
\draw (-2,0.5) circle (0.15);
\draw (-1.5,1) circle (0.15);
\draw (1,1) circle (0.15);
\draw (1.5,0.5) circle (0.15);
\draw (-2,-2) circle (0.15);
\draw (-1.5,-2.5) circle (0.15);
\draw (-2.5,-2) circle (0.15);
\draw (-1.5,-3) circle (0.15);
\draw (1.5,-2) circle (0.15);
\draw (1,-2.5) circle (0.15);
\draw (2,-2) circle (0.15);
\draw[step=0.5,gray,very thin](-2.5,-3)grid(-0.5,-1);
\draw[step=0.5,gray,very thin](0,-3)grid(2,-1);
\draw[step=0.5,gray,very thin](-2.5,-0.5)grid(-0.5,1.5);
\draw[step=0.5,gray,very thin](0,-0.5)grid(2,1.5);
\draw (-1.5,-2) node[cross=2pt,rotate=0] {};
\draw (1,-2) node[cross=2pt,rotate=0] {};
\draw (-1.5,0.5) node[cross=2pt,rotate=0] {};
\draw (1,0.5) node[cross=2pt,rotate=0] {};
\end{tikzpicture}\quad
\begin{tikzpicture}[x=1cm,y=1cm]
\clip (-1.2,-1.2) rectangle (1.2,1.2);
\draw[very thin](0,0) circle (1);
\fill [color=red] (0,1) circle (2pt) node[anchor=north,black] {$2$};
\fill [color=red] (1,0) circle (2pt) node[anchor=east,black] {$2$};
\fill [color=red] (-1,0) circle (2pt) node[anchor=west,black] {$1$};
\fill [color=red] (0,-1) circle (2pt) node[anchor=south,black] {$1$};
\end{tikzpicture}
\caption{Balanced model of Section~\ref{subsec:refined:lower}.\label{fig:log1}}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\begin{tikzpicture}[x=0.5cm,y=0.5cm]
\draw (-1.5,1.5) circle (0.15);
\draw (-2,0.5) circle (0.15);
\draw (-2.5,0.5) circle (0.15);
\draw (-1.5,1) circle (0.15);
\draw (1,1) circle (0.15);
\draw (1.5,0.5) circle (0.15);
\draw (2,0.5) circle (0.15);
\draw (-2,-2) circle (0.15);
\draw (-1.5,-2.5) circle (0.15);
\draw (-2.5,-2) circle (0.15);
\draw (-1.5,-3) circle (0.15);
\draw (1.5,-2) circle (0.15);
\draw (1,-2.5) circle (0.15);
\draw (2,-2) circle (0.15);
\draw[step=0.5,gray,very thin](-2.5,-3)grid(-0.5,-1);
\draw[step=0.5,gray,very thin](0,-3)grid(2,-1);
\draw[step=0.5,gray,very thin](-2.5,-0.5)grid(-0.5,1.5);
\draw[step=0.5,gray,very thin](0,-0.5)grid(2,1.5);
\draw (-1.5,-2) node[cross=2pt,rotate=0] {};
\draw (1,-2) node[cross=2pt,rotate=0] {};
\draw (-1.5,0.5) node[cross=2pt,rotate=0] {};
\draw (1,0.5) node[cross=2pt,rotate=0] {};
\end{tikzpicture}\quad
\begin{tikzpicture}[x=1cm,y=1cm]
\clip (-1.2,-1.2) rectangle (1.2,1.2);
\draw[very thin](0,0) circle (1);
\fill [color=red] (0,1) circle (2pt) node[anchor=north,black] {$2$};
\fill [color=red] (1,0) circle (2pt) node[anchor=east,black] {$2$};
\fill [color=red] (-1,0) circle (2pt) node[anchor=west,black] {$1$};
\fill [color=red] (0,-1) circle (2pt) node[anchor=south,black] {$2$};
\end{tikzpicture}
\caption{Unbalanced model of Section~\ref{subsec:refined:upper}.\label{fig:log3}}
\end{subfigure}
\caption{The update rules, stable directions and difficulties of two example critical rooted update families with difficulty $\alpha=1$.}
\end{figure}
Morally speaking, in this model the smallest mobile entity (`droplet') is an empty square of size roughly $1/q$, similarly to FA-2f (recall Chapter~\ref{chap:FA2f}). Indeed, typically, on its left and bottom sides, one can find an empty site, which allows it to empty the column of sites on its left and the row of sites below it. However, it is essentially impossible for the droplet to grow up or right, as this requires two consecutive empty sites and those are typically only available at distance $1/q^2$ from the droplet. We will only work in the box $\Lambda=\{-1/q^{7/4},\dots,1/q^{7/4}\}^2$, so such pairs of empty sites are not available for most columns and rows. Thus, we expect droplets to essentially follow the dynamics of the two-dimensional East KCM (see Figure~\ref{fig:East}). At a very high level, we proceed in the same way as in the proof of Proposition~\ref{prop:Laure:combi}. However, there are several serious obstacles to making the above reasoning rigorous. Firstly, much like in Figure~\ref{fig:FA2f:droplets}, droplets can be more complex than empty squares of size $1/q$. Thus, one needs to identify an event certifying that a droplet is present, and this event should be deterministically necessary for empty sites to spread. Moreover, the event should have probability of the correct order $\exp(-1/q)$, as suggested by the BP Theorem~\ref{th:2d:univ:refined:BP}. It turns out that the notion of `spanning' introduced in \cite{Bollobas23} for the proof of the lower bound of Theorem~\ref{th:2d:univ:refined:BP} for unbalanced models, following \cite{Cerf99}, is flexible enough for our purposes. Roughly speaking, a droplet (rectangle) is spanned if the empty sites present inside it are sufficient to empty a connected set touching all its sides (see \cite{Morris17a}*{Section~8} for an overview of the BP result and its proof). We call a droplet critical if it has size roughly $1/q$. If our goal were only to find a single critical droplet, we could proceed as in Section~\ref{sec:FA2f:lower}. Indeed, for isotropic and unbalanced unrooted models the proof of Section~\ref{sec:FA2f:lower} or directly \eqref{eq:t0t0bp} combined with Theorem~\ref{th:BP} suffices. However, for other refined universality classes, including the one of the model of Figure~\ref{fig:log1}, we need to work much harder. In order to obtain the exponent $\gamma=1>0$ in Theorem~\ref{th:2d:univ:refined:KCM}, we need many droplets. Unfortunately, given a configuration, spanned critical droplets may overlap, so, in order to obtain good bounds on the probability of the configuration, we need to consider disjointly occurring ones (droplets occur disjointly if they admit disjoint witness sets of empty sites). We may then define the number of spanned critical droplets as the maximal number of disjointly occurring ones.
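To see, at least heuristically, why requiring $\log(1/q)$ disjointly occurring droplets produces the exponent $\gamma=1$, one may invoke the van den Berg--Kesten inequality for the increasing spanning events (a hedged sketch; the constants and the polynomial volume factor below are indicative only): if a critical droplet is spanned somewhere in $\Lambda$ with probability of order at most $|\Lambda|e^{-c/q}\approx q^{-7/2}e^{-c/q}$, then
\[
\mu\bigl(\text{at least $k$ disjointly occurring spanned critical droplets in }\Lambda\bigr)\le\bigl(q^{-7/2}e^{-c/q}\bigr)^{k}\le e^{-c'k/q},
\]
so a bottleneck requiring $k\approx\log(1/q)$ such droplets has probability at most $e^{-c'\log(1/q)/q}$, which roughly translates into the lower bound $\log\tau_0\gtrsim\log(1/q)/q$, matching $\beta=1$ and $\gamma=1$ for this balanced rooted model.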
Considering the KCM on $\Lambda$ with $\bzero_{\bbZ^2\setminus\Lambda}$ boundary condition, starting from a typical initial configuration (which in particular does not contain more than three empty sites close to each other), our aim is to prove that, before we can empty the origin, we need to visit a configuration with at least order $\log(1/q)$ disjointly occurring critical droplets. If droplets did follow a two-dimensional East dynamics exactly (up to renormalisation), this would follow from Proposition~\ref{prop:Laure:combi}. But this is not the case. Indeed, by changing their internal structure, droplets may move a bit without creating another droplet, as we saw in Section~\ref{sec:FA2f:upper}. Worse yet, they are not really forbidden to move right or up, but simply are not likely to be able to do so wherever they want: it depends on the dynamical environment. In order to handle these problems, we need the crucial notion of crossing. Consider a vertical strip $S$ of width $1/q^{3/2}$ of our domain $\Lambda$. Roughly speaking, we say that $S$ has a crossing if the following two events occur. Firstly, the empty sites in $S$ together with the entire half-plane to the left of $S$ are enough to infect a path from left to right in $S$. Secondly, $S$ does not contain a spanned critical droplet. Notice that these two events have opposite monotonicity in the configuration. Employing BP tools, it can be shown (see \cite{Hartarsky22univlower}*{Appendix B}) that the probability of a crossing decays exponentially with the width of $S$ at our scales of interest. In particular, the probability under $\mu$ that such a strip $S$ is crossed is of order $\exp(-1/q^{3/2})$. Roughly speaking, the proof proceeds by splitting $S$ into smaller strips which are either crossed by a single spanned droplet of subcritical size, or contain a pair of adjacent empty sites. Having appropriate bounds on the probability of spanned subcritical droplets as a function of their size, one may prove the desired bound on crossing by a union bound over the partition of $S$ into the smaller strips. We note that in the case of an infinite number of stable directions, bounding the probability of crossings is quite different \cite{Hartarsky22univlower}*{Appendix~B.2}, but remains possible. Having established such a bound on the probability of crossings, we may incorporate them into the combinatorial bottleneck---we are satisfied if we visit either a configuration with a crossed strip of width $1/q^{3/2}$, or with $\log(1/q)$ disjointly spanned critical droplets. The lack of crossings allows us to exclude the possibility of a droplet reaching the right side of the vertical strip $S$ without help from the right of $S$, since the KCM dynamics can never infect more than what bootstrap percolation can (recall Section~\ref{sec:legal:paths}). With these additional inputs, the proof scheme of Proposition~\ref{prop:Laure:combi} can be carried out to give the lower bounds in Theorem~\ref{th:2d:univ:refined:KCM}. The only difference between refined universality classes comes in the choice of length scales, in the bounds on the probability of spanning droplets and of crossing strips, and in their proofs.
\subsubsection{Upper bounds}
\label{subsec:refined:upper}
We next turn to the upper bounds in Theorem~\ref{th:2d:univ:refined:KCM}. Contrary to the lower bounds, the proofs of upper bounds are highly dependent on the refined universality class. However, there are two classes for which all the elements of the proof have already been discussed.
The weakest upper bound, corresponding to $\beta=2,\gamma=4,\delta=0$ in Theorem~\ref{th:2d:univ:refined:KCM}, applies to all models, but is only sharp for unbalanced critical families with an infinite number of stable directions. It in fact follows from the proof of the rough Theorem~\ref{th:universality:rough:KCM:2d} mentioned in Section~\ref{subsec:rough} (see \cite{Martinelli19a}). At the other extreme, the upper bound for isotropic models ($\beta=1,\gamma=0,\delta=0$) is proved similarly to Theorem~\ref{th:FA2f:sharp:threshold}, using the matryoshka doll technique (recall Section~\ref{sec:FA2f:upper}), up to some technical modifications (see \cite{Hartarsky24univupper}*{Section~5}). Our next goal is to outline the proof of the upper bound of Theorem~\ref{th:2d:univ:refined:KCM} with $\beta=1,\gamma=3,\delta=0$, which applies to any critical KCM with a finite number of stable directions, but is only sharp for unbalanced rooted families with a finite number of stable directions. This will clarify the relevance of the absence of any strongly stable directions, which governs the value of the most important exponent $\beta$. For the sake of simplicity, we focus on the model depicted in Figure~\ref{fig:log3}. Let us start with some heuristic considerations before explaining how they can be turned into the proof originating from \cite{Hartarsky21a}. Since we are interested in an upper bound, we may choose the notion of droplet in a simple way. Namely, a droplet $D$ is (a translate of) an empty square frame of size $C\log(1/q)/q$ and thickness $2$, where $C$ is a suitably large constant (see Figure~\ref{fig:mechanism:1}). Then typically (under $\mu$), there is an empty site in the column to the left of $D$, allowing us to empty $D-(1,0)$. However, it is unlikely to find a pair of adjacent empty sites on any of the other sides of $D$. We conclude that it is easy for $D$ to advance only to the left. An efficient way to perform this leftward motion is given by the legal path for the East KCM from Figure~\ref{fig:East_path}, where each empty site represents an empty translate of $D$.
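The length scale $1/q^2$ appearing here and in the figures below comes from an elementary computation (a rough sketch): for a fixed pair of adjacent sites,
\[
\mu\bigl(\omega_\bx=\omega_{\bx+\be_1}=0\bigr)=q^2,
\]
so along a given row the nearest pair of adjacent empty sites is typically found at distance of order $q^{-2}$. By contrast, a column of height $C\log(1/q)/q$ contains no empty site with probability $(1-q)^{C\log(1/q)/q}\approx q^{C}$, which is why the droplet can typically advance freely only to the left.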
\begin{figure}[b]
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}[x=-0.25cm,y=0.25cm,scale=0.5]
\clip(-15,-5) rectangle (55,11);
\fill[color=gray] (10,0) rectangle (40,10);
\fill (0,0) -- (10,0) -- (10,10) -- (0,10) -- cycle;
\fill[line width=0pt,color=white,fill=white] (2,2) rectangle (8,8);
\fill[fill=black] (40,10) -- (30,10) -- (30,0) -- (40,0) -- cycle;
\fill[line width=0pt,color=gray,fill=gray] (32,2) rectangle (38,8);
\draw (39.5,10.5) circle (0.3);
\draw (38.5,10.5) circle (0.3);
\draw [thick,->] (12,5)--(27,5);
\draw [<->] (-2,0) -- (-2,10) node[midway, right]{$\frac{C\log(1/q)}{q}$};
\draw [<->] (0,-1) -- (40,-1) node[midway, below]{$1/q^2$};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}[x=-0.25cm,y=0.25cm,scale=0.5]
\clip(-12,-0.5) rectangle (52,11);
\fill[color=gray] (10,0) rectangle (40,11);
\fill (0,0) rectangle (10,11);
\fill[line width=0pt,color=white,fill=white] (2,3) rectangle (8,8);
\fill[fill=black] (30,0) rectangle (40,11);
\fill[line width=0pt,color=white,fill=gray] (32,3) rectangle (38,8);
\draw [thick,<-] (12,5)--(27,5);
\end{tikzpicture}
\end{subfigure}
\caption{ \label{fig:mechanism:1} The mechanism for the droplet to grow up.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tikzpicture}[x=0.25cm,y=0.25cm,scale=0.5]
\clip(-55,0) rectangle (15,40);
\begin{scope}[shift={(-10,15)}]
\fill (0,0) -- (10,0) -- (10,10) -- (0,10) -- cycle;
\fill[line width=0pt,color=white,fill=white] (2,2) rectangle (8,8);
\fill[fill=black] (-20,0) rectangle (-30,10);
\fill[line width=0pt,color=white,fill=white] (-22,2) rectangle (-28,8);
\draw (-29.5,10.5) circle (0.3);
\draw (-28.5,10.5) circle (0.3);
\draw [thick,<->] (-2,5)--(-17,5);
\fill[color=gray] (0,10) rectangle (10,11);
\fill[color=gray] (2,2) rectangle (8,3);
\end{scope}
\begin{scope}[rotate=90]
\fill (0,0) -- (10,0) -- (10,10) -- (0,10) -- cycle;
\fill[line width=0pt,color=white,fill=white] (2,2) rectangle (8,8);
\fill[fill=black] (40,10) -- (30,10) -- (30,0) -- (40,0) -- cycle;
\fill[line width=0pt,color=white,fill=white] (32,2) rectangle (38,8);
\draw (39.5,-0.5) circle (0.3);
\draw (38.5,-0.5) circle (0.3);
\fill[color=gray] (0,0) rectangle (10,-1);
\fill[color=gray] (8,8) rectangle (2,7);
\draw [thick, <->] (11,5)--(13,5);
\draw [thick, <->] (26.5,5)--(28.5,5);
\end{scope}
\draw [<->] (2,0) -- (2,40) node[midway, right]{$\frac{1}{q^2}$};
\end{tikzpicture}
\caption{The mechanism for the droplet to grow to the right by making a long excursion up, each of whose steps is a long excursion to the left, as in Figure~\ref{fig:mechanism:1}.}
\label{fig:mechanism:2}
\end{figure}
The key idea is that it suffices to perform this East-like motion for a distance of order $1/q^2$ in order to find a pair of adjacent empty sites on the row above the droplet. Once the droplet reaches them, it is able to move one step up. It is then possible to reverse the East path to bring the droplet to the original position, but shifted one lattice step up. This procedure effectively yields a step in the hard up direction. We may then iterate this idea, moving upwards in an East-like manner, where each step up is, in fact, a long East-like path to the left and back. This way, we eventually reach a pair of adjacent empty sites allowing the droplet to move to the right, etc. See Figures~\ref{fig:mechanism:1} and~\ref{fig:mechanism:2} for an illustration of the mechanism. Based on the above heuristics, we expect droplets to be able to move freely in all directions by creating only about $\log(1/q)$ additional droplets at a time.
Hence, we expect the time necessary for droplets to move to be of order $\rho^{-\log(1/q)}$, where $\rho\approx q^{8C\log(1/q)/q}$ is the probability of a single droplet under $\mu$. This gives the desired $\exp(8C\log^3(1/q)/q)$ time scale, since $\rho^{-\log(1/q)}=\exp(\log(1/q)\log(1/\rho))\approx\exp(8C\log^3(1/q)/q)$. \begin{figure}[b] \centering \begin{tikzpicture}[line cap=round,line join=round,x=-0.15cm,y=0.15cm] \draw [thick] (0,0) rectangle (30,5) node[midway]{$B$}; \draw [thick] (0,5.3) rectangle (30,30) node[midway]{$R_0$}; \draw [thick] (-0.3,0) rectangle (-15.3,30) node[midway]{$R_1^-$}; \draw [thick] (30.3,0) rectangle (45.3,30) node[midway]{$R_1^+$}; \fill (0,0) rectangle (5,5); \fill[line width=0pt,color=white,fill=white] (1,1) rectangle(4,4); \draw (2.5,2.5) node{$\mathring D$}; \draw [<->] (0,29)--(30,29) node[midway,below] {$C \frac{\log(1/q)}{q^2}$}; \draw [<->] (-16,0)--(-16,30) node[midway,right] {$C \frac{\log(1/q)}{q^2}$}; \draw [<->] (-1,0)--(-1,5) node[midway,right] {$C \frac{\log(1/q)}{q}$}; \draw (2,4)--(5,7) node[above left]{$D$}; \end{tikzpicture} \caption{Geometry of the matryoshka dolls in Section~\ref{subsec:refined:upper} for the update family depicted in Figure~\ref{fig:log3}.} \label{fig:mechanism:3} \end{figure} In order to turn the above into a proof, we use the matryoshka doll technique from Section~\ref{subsubsec:matryoshka}. The geometry of the consecutive regions is given in Figure~\ref{fig:mechanism:3}. The super good event $\cS\cG(\Lambda)$ for $\Lambda=D\cup \mathring D\cup B\cup R_0\cup R_1^+\cup R_1^-$ requires that: \begin{itemize} \item the droplet $D$ is empty; \item each column of the \emph{base} $B$ contains an empty site; \item each row of the rectangle $R_0$ contains a pair of adjacent empty sites; \item each column of the rectangles $R_1^+$ and $R_1^-$ contains a pair of adjacent empty sites. \end{itemize} Thanks to our choice of geometry, it is not hard to check that the latter three events are very likely under $\mu$, so $\mu(\cS\cG(\Lambda))\approx \rho=q^{|D|}\approx \exp(-8C(\log(1/q))^2/q)$. Recalling Section~\ref{sec:finite_vol}, we then seek to prove the Poincar\'e inequality \begin{align} \label{eq:snail:Poincare}\var_{\Lambda}(f|\cS\cG(\Lambda))&{}\le \gamma(\Lambda)\cD_{\bone_{\bbZ^2\setminus\Lambda}}(f),& \gamma(\Lambda)&{}\le e^{(C\log(1/q))^3/q}\end{align} for any local function $f:\Omega\to\bbR$. Once \eqref{eq:snail:Poincare} is proved, concluding the proof of Theorem~\ref{th:2d:univ:refined:KCM} for the model under consideration can be done along the lines of Section~\ref{subsec:FA2f:reduction:to:meso}. The proof of \eqref{eq:snail:Poincare} proceeds in a roughly similar way to \eqref{eq:FA2f:meso}, by proving Poincar\'e inequalities successively for $D$, $D\cup\mathring D$, $D\cup\mathring D\cup B$,\dots, $\Lambda$. The one for $D$ is trivial, since $\cS\cG(D)=\{\bzero_D\}\times\Omega_{\bbZ^2\setminus D}$ is a single configuration if restricted to $D$. The inequality for $D\cup \mathring D$ is proved by dividing $\mathring D$ into vertical strips of width 2 and using Proposition~\ref{prop:FA1f:East:generalised} for generalised FA-1f in one dimension. This yields the relaxation time bound \[\gamma\left(D\cup\mathring D\right)\le q^{-C^2\log(1/q)/q},\] where $\gamma(D\cup\mathring D)$ is defined as in \eqref{eq:snail:Poincare}. Proving that $\gamma(D\cup \mathring D\cup B)\le \exp(C^2(\log(1/q))^3/q)$ (and similarly for adding the remaining rectangles one by one) is done along the lines of the proof of \eqref{eq:FA2f:recursion}. Namely, we use bisection to reduce the length of the base until it reaches 1.
However, the factor $a_k$ in the analogue of \eqref{eq:FA2f:recursion:step} is only bounded by $\rho$, because we use the original two-block dynamics of Lemma~\ref{lem:two-block} rather than the non-oriented three-block variant, Lemma~\ref{lem:three-block}. This reflects the fact that our heuristic is based on East-like motion rather than CBSEP-like. Thus, the only remaining ingredient is dealing with adding a single column to the left of $D\cup\mathring D$. To do this, we observe that, viewing the empty droplet $D$ as a boundary condition, the problem reduces to dealing with FA-1f on a one-dimensional segment, given that it contains at least one empty site. This was already done in Theorem~\ref{th:general:1d}. This completes the sketch of the proof of Theorem~\ref{th:2d:univ:refined:KCM} for the model in Figure~\ref{fig:log3}. Let us note that the fact that Theorem~\ref{th:general:1d} covers inhomogeneous one-dimensional KCM on their ergodic components is crucial for dealing with more general update families at this point. For the remaining unbalanced refined universality class ($\beta=1,\gamma=2,\delta=0$), the proof of \cite{Hartarsky24univupper} is still quite similar, using CBSEP-type instead of East-type dynamics in the above argument (Lemma~\ref{lem:three-block} versus Lemma~\ref{lem:two-block}). However, balanced models, particularly those with a finite number of stable directions, are much more delicate. The reason is that one needs to treat scales below the critical one as well, taking into account the non-trivial internal structure of critical droplets, which has a multi-scale form as in Figure~\ref{fig:FA2f:droplets}. However, the present situation is more complex than in Section~\ref{sec:FA2f:upper}, because some directions are hard, so one needs to use East-like dynamics on some scales and CBSEP-like on others, also carefully choosing directions in which to grow, depending on the scale. When coupled with the not necessarily rectangular geometry required for general models, as well as with the need to bound conditional probabilities of droplets as in Section~\ref{subsubsec:matryoshka}, but now in the absence of symmetry, the proof of \cite{Hartarsky24univupper} becomes quite involved. We direct the reader to \cite{Hartarsky24univupper}*{Section~2} for a detailed description of all mechanisms underlying the proof. \section{Conclusion} No new tools were encountered in the present chapter, as compared to previous ones. Instead, we saw how to combine and generalise several of the techniques we were already familiar with, in order to obtain extremely general and precise results. This should clearly showcase the robustness of these methods. For lower bounds we still relied on combinatorial bottlenecks generalising the one for the East model and also incorporating BP ideas. Rough upper bounds used long-range renormalisation, while refined ones were proved via the matryoshka dolls technique. One of the takeaways from universality is that a thorough understanding of lower-dimensional models (in our case, one-dimensional), together with respect for the natural geometry and directional preferences of the model, can allow one to understand higher-dimensional models. The universality viewpoint not only gives a unified framework for understanding the landscape of KCM theory, but also historically supplied the motivation and playground for developing many of the tools presented in the previous chapters.
\chapter{From bootstrap percolation to kinetically constrained models} \label{chap:BP} \abstract{ In this chapter we introduce bootstrap percolation cellular automata, which are instrumental for the study of KCM. We first state some relevant known results for these automata. We then show that several fundamental properties of KCM, including ergodicity, mixing and exponential relaxation, can be directly related to their bootstrap percolation counterparts.} \section{Bootstrap percolation} Bootstrap percolation (BP) is a family of monotone cellular automata. They may be viewed as the discrete time synchronous monotone analogue of KCM. Specific instances of BP have been studied since the 1970s \cites{Chalupa79,Kogut81,Pollak75}, but it is convenient to directly introduce them in greater generality as considered in \cites{Schonmann90,Gravner99,Bollobas15}. Like KCM, BP is defined by an update family $\cU$ (recall Section~\ref{sec:rules}). Given $\omega\in\Omega$, we define $\sB_\cU(\omega)\in\Omega$ by\footnote{Note that, in most of BP literature, the roles of the 0 and 1 states are reversed.} \begin{equation} \label{eq:def:BP} (\sB_\cU(\omega))_{\vec x}=\begin{cases} 0&\omega_{\vec x}=0\text{ or }\exists U\in\cU,\forall \vec u\in U,\omega_{\vec x+\vec u}=0,\\ 1&\text{otherwise} \end{cases} \end{equation} for all $\vec x\in\bbZ^d$. In words, in one discrete time step we empty all sites for which the constraint (recall \eqref{eq:def:cx}) is satisfied. In BP, empty sites remain empty. In view of this monotonicity, it is natural to define the \emph{closure} \begin{equation} \label{eq:def:closure} [\omega]=[\omega]_\cU=\inf_{t\in\bbN}\sB_\cU^{\circ t}(\omega)=\lim_{t\to\infty}\sB_\cU^{\circ t}(\omega)\in\Omega, \end{equation} where $\sB_\cU^{\circ t}$ denotes the $t$-fold iteration of $\sB_\cU$ and the infimum and limit are taken with respect to the product partial order and product topology respectively. That is, $[\omega]$ is the configuration obtained upon iterating the bootstrap percolation map of \eqref{eq:def:BP}. Similarly to \eqref{eq:def:tau}, we define the \emph{BP emptying time} \begin{equation}\tbp=\inf\left\{t\in\bbN:\left(\sB_\cU^{\circ t}(\omega)\right)_0=0\right\}\in\bbN\cup\{\infty\}.\end{equation} On a domain $\Lambda\subset\bbZ^d$ with boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$, we set \begin{equation} \label{eq:def:BP:finite} \left(\sB_\cU^\sigma(\omega)\right)_{\vec x}=\begin{cases} 0&\omega_{\vec x}=0\text{ or }\exists U\in\cU,\forall \vec u\in U,(\sigma\cdot\omega)_{\vec x+\vec u}=0,\\ 1&\text{otherwise} \end{cases} \end{equation} for $\omega\in\Omega_\Lambda$ and $\vec x\in\Lambda$. We further define $[\omega]^{\sigma}=\lim_{t\to\infty}(\sB_\cU^\sigma)^{\circ t}(\omega)$ pointwise. So far BP is completely deterministic. We next introduce randomness by considering an initial condition $\omega$ distributed according to the product Bernoulli measure $\mu_q$ with parameter (density of initially occupied sites) $1-q\in[0,1]$ (recall Section \ref{Section:Setting}). Following \eqref{eq:def:qc} and \eqref{eq:def:qct}, we define the \emph{emptying} and \emph{exponential decay} critical thresholds \begin{align} \label{eq:def:qcbp}q_{\mathrm c}^{\mathrm{BP}}&=\inf\left\{q>0:\mu_q\left(\tbp<\infty\right)=1\right\},\\ \label{eq:def:qctbp}\qct^{\mathrm {BP}}&=\inf\left\{q>0:\liminf_{t\to\infty}\frac{-\log\mu_q(\tbp>t)}{t}>0\right\}. 
\end{align} Note that by ergodicity of the product measure (with respect to translations, see e.g.\ \cite{Keller98} for background), we have $\mu_q(\tbp<\infty)=1$ if and only if $\mu_q([\omega]=\bzero_{\bbZ^d})=1$. As for KCM, one is primarily interested in determining these thresholds and the asymptotics of $\tbp$ as $q\to q_{\mathrm c}^{\mathrm{BP}}+$. In the present text our focus is on KCM, so we take BP results for granted. Let us therefore gather a few facts about the BP models corresponding to the update families introduced in Section~\ref{sec:rules}. \begin{theorem}[BP background]\label{th:BP}\leavevmode \begin{itemize} \item For East BP in dimension $d\ge 1$, $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=0$ and $q^{1/d}\tbp$ converges to a Weibull distribution, as $q\to0$: $\mu(q^{1/d}\tbp\ge t)\to e^{-t^d/d!}$ for $t\ge 0$. \item For 1-neighbour BP in dimension $d\ge 1$, $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=0$ and $q^{1/d}\tbp$ converges to a Weibull distribution, as $q\to0$: $\mu(q^{1/d}\tbp\ge t)\to e^{-(2t)^d/d!}$ for $t\ge 0$. \item For $j$-neighbour BP in dimension $d$ with $d\ge j\ge 2$ we have $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=0$ and $q^{1/(d-j+1)}\log^{\circ (j-1)}\tbp$ converges in probability to a constant\footnote{See \cite{Balogh09a}*{(1)-(3)} for an explicit expression of $\lambda(d,j)$.} $\lambda(d,j)>0$, as $q\to0$. \item For $2$-neighbour BP in $d=2$ dimensions, there exist positive constants\footnote{We have $\lambda=\pi^2/18$ and $\lambda_2\approx7.0545$, see \cite{Hartarsky24locality}*{Section~A.1.2} for an explicit expression.} $\lambda,\lambda_2$ such that, as $q\to0$, we have \begin{equation} \label{eq:2nBP}\mu_q\left(\left|\log \tbp-\frac{\lambda}{q}+\frac{\lambda_2}{\sqrt q}\right|\le \frac{\log^2(1/q)}{\sqrt[3]q} \right)\to1. \end{equation} \item For Duarte BP we have $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=0$ and $q\log \tbp/\log^2(1/q)$ converges in probability to a positive constant, as $q\to0$. \item For North-East BP in $d\ge 2$ dimensions we have $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=1-p_{\mathrm c}^{\mathrm{OP},d}$, where $p_{\mathrm c}^{\mathrm{OP},d}\in(0,1)$ is the critical probability of oriented site percolation in $d$ dimensions (see e.g.\ \cites{Durrett84,Liggett05,Liggett99,Hartarsky22GOSP} for background) and $\mu_{q_{\mathrm c}^{\mathrm{BP}}}(\tbp=\infty)=0$. \item For Spiral BP we have $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=1-p_{\mathrm c}^{\mathrm{OP},2}$ and $\mu_{q_{\mathrm c}^{\mathrm{BP}}}(\tbp=\infty)>0$. \item For $j$-neighbour BP in dimension $d$ with $j>d\ge 1$ we have $q_{\mathrm c}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=1$. \end{itemize} \end{theorem} \begin{proof} For East BP, we only sketch the argument and leave the details as an exercise to the reader. First, we verify that for any $t\in\bbN$, we have $\mu_q(\tbp>t)=(1-q)^{N_d(t)}$, where $N_d(t)=\left|\left\{(x_1,\dots,x_d)\in\bbN^d:\sum_{i=1}^dx_i\le t\right\}\right|$. One can check that $N_d(t)=\binom{t+d}{d}=t^d/d!+O_d(t^{d-1})$, where the implicit constant may depend on the dimension $d$. In particular, for any $q>0$, $\mu_q(\tbp=\infty)=0$ and $\mu_q(\tbp>t)$ decays at least exponentially, so $q_{\mathrm{c}}^{\mathrm{BP}}=\qct^{\mathrm{BP}}=0$. Moreover, the convergence in distribution follows from the asymptotics of $N_d(t)$. The proof for 1-neighbour BP is analogous, replacing $N_d(t)$ by the volume of the discrete $\ell^1$ ball of radius $t$, whose cardinality is asymptotically equivalent to $2^dN_d(t)$.
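For the reader's convenience, here is the elementary computation behind the Weibull limit claimed for East BP (a routine verification of the exercise above, in which we ignore integer-part effects): for fixed $t\ge 0$,
\[
\mu_q\left(q^{1/d}\tbp\ge t\right)=(1-q)^{N_d\left(tq^{-1/d}+O(1)\right)}=\exp\left(\left(\frac{t^d}{q\,d!}+O\left(q^{-(d-1)/d}\right)\right)\log(1-q)\right)\xrightarrow[q\to0]{}e^{-t^d/d!},
\]
using $N_d(s)=s^d/d!+O_d(s^{d-1})$ and $\log(1-q)=-q+O(q^2)$. The same computation with $N_d$ replaced by the cardinality of the discrete $\ell^1$ ball accounts for the factor $(2t)^d$ in the 1-neighbour case.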
For $j$-neighbour BP with general $2\le j\le d$, the asymptotics of $\log^{\circ(j-1)}\tbp$ is due to Balogh, Bollob\'as, Duminil-Copin and Morris \cite{Balogh12} (the case $j=2$ was established by Holroyd \cites{Holroyd03,Holroyd06}), while the identification of the critical value is due to Schonmann \cite{Schonmann92}*{Theorem~3.1}. The result for 2-neighbour BP is due to Hartarsky and Teixeira \cite{Hartarsky24locality}. The quantitative result for Duarte BP is due to Bollob\'as, Duminil-Copin, Morris and Smith \cite{Bollobas17}, while the qualitative one is due to Schonmann \cites{Schonmann92,Schonmann90}. The result for North-East BP follows from the fact that $\tbp=\infty$ if and only if there is an infinite oriented path of occupied sites from the origin (see \cite{Schonmann90}), together with classical results on oriented site percolation \cites{Aizenman87,Menshikov86,Bezuidenhout90}. The result for Spiral BP is due to Toninelli and Biroli \cite{Toninelli08} (also see \cites{Hartarsky21,Toninelli07,Toninelli07a,Toninelli06,Jeng08}). For $j$-neighbour BP with $j>d\ge 1$ it suffices to observe that for $q<1$, we have \[\mu_q\left(\tbp=\infty\right)\ge\mu_q\left(A_0\cap\{0,1\}^d=\varnothing\right)=(1-q)^{2^d}>0.\] This concludes the proof for all models. \end{proof} The reader may have noticed the following pattern \cite{Hartarsky21}*{Conjecture~8.1} (also see \cite{Schonmann92}) in Theorem~\ref{th:BP}. \begin{conjecture}[Sharp phase transition] \label{conj:qc:qct} For any update family $\cU$ in any dimension it holds that $q_{\mathrm{c}}^{\mathrm{BP}}=\qct^{\mathrm{BP}}$. \end{conjecture} This is an important open problem in BP theory, which has so far been resolved for update families contained in an open half-space with the origin on its boundary \cite{Hartarsky22sharpness}*{Theorem~1.6}, as well as those with $q_{\mathrm{c}}^{\mathrm{BP}}=0$. For the latter assertion, note that \cite{BalisterNaNb} proves a stretched exponential bound on the tail of $\tbp$, but a standard renormalisation argument \cite{Schonmann92} can be used to recover an exponential decay. Concerning the continuity of the phase transition, there is not even a guess what the answer should be, leaving the following problem wide open. \begin{problem} Determine which update families satisfy $\mu_{q_{\mathrm c}^{\mathrm{BP}}}(\tbp=\infty)=0$ like North-East BP and unlike Spiral BP. \end{problem} \section{Legal paths} \label{sec:legal:paths} We next introduce the notion of legal path that will be instrumental in several proofs. In words, a legal path is a sequence of configurations differing by a legal update (recall Section~\ref{subsec:markov}). \begin{definition}[Legal paths] \label{def:legal:path} Given a domain $\Lambda\subset\bbZ^d$, boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and two configurations $\omega,\omega'\in\Omega_\Lambda$, a \emph{legal path from $\omega$ to $\omega'$ in $\Lambda$} is a finite sequence $(\omega^{(i)})_{i=0}^n$ of configurations in $\Omega_\Lambda$ such that $\omega^{(0)}=\omega$, $\omega^{(n)}=\omega'$ and for each $i\in\{1,\dots,n\}$ it holds that either $\omega^{(i)}=\omega^{(i-1)}$ or there exists $\vec x^{(i)}\in\Lambda$ such that $\omega^{(i)}=(\omega^{(i-1)})^{\vec x^{(i)}}$ (recall \eqref{eq:def:omega:flip}) and $c^\sigma_{\vec x^{(i)}}(\omega^{(i)})=1$ (recall \eqref{eq:def:cx:finite}). The \emph{length} of the legal path $(\omega^{(i)})_{i=0}^n$ is $n$. 
Notice that if $(\omega^{(i)})_{i=0}^n$ is a legal path from $\omega$ to $\omega'$ in $\Lambda$, then its \emph{inverse} $(\omega^{(n-i)})_{i=0}^n$ is a legal path from $\omega'$ to $\omega$ in $\Lambda$. \end{definition} It turns out that BP provides a simple way to know when a legal path exists and to construct it. We start by observing that the BP closure is invariant along legal paths. \begin{lemma}[Invariance of closure] \label{lem:closure} Let $\omega\in\Omega$ and $\vec x\in\bbZ^d$ be such that $c_{\vec x}(\omega)=1$. Then $[\omega]=[\omega^{\vec x}]$ and for any $\Lambda\subset\bbZ^d$ such that $\vec x\in\Lambda$ we have $[\omega_\Lambda]^{\omega_{\bbZ^d\setminus\Lambda}}=[\omega^{\vec x}_\Lambda]^{\omega_{\bbZ^d\setminus\Lambda}}$. Consequently, for $\omega',\omega''\in\Omega_\Lambda$ connected by a legal path in $\Lambda$ with boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ we have $[\omega'\cdot\sigma]=[\omega''\cdot\sigma]$ and $[\omega']^\sigma=[\omega'']^{\sigma}$. \end{lemma} \begin{proof} Assume without loss of generality that $\omega_{\vec x}=0$. By \eqref{eq:def:cx} and \eqref{eq:def:BP}, we have that $(\sB_\cU(\omega^{\vec x}))_{\vec x}=0=\omega_{\vec x}$, so $\sB_\cU(\omega^{\vec x})\le \omega\le \omega^{\vec x}$. By \eqref{eq:def:closure}, since $[\cdot]$ is monotone and $[\sB_\cU(\omega^{\vec x})]=[\omega^{\vec x}]$, this yields $[\omega]=[\omega^{\vec x}]$. The proof of $[\omega_\Lambda]^{\omega_{\bbZ^d\setminus\Lambda}}=[\omega^{\vec x}_\Lambda]^{\omega_{\bbZ^d\setminus\Lambda}}$ is analogous. The statement on legal paths follows by induction on the length. \end{proof} \begin{lemma} \label{lem:legal:path} Let $\Lambda\subset\bbZ^d$, $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and $\omega\in\Omega_\Lambda$. Let $\vec x\in\Lambda$ be such that $\omega_{\vec x}=1$. There exists a legal path from $\omega$ to $\omega^{\vec x}$ if and only if $[\omega]^\sigma_{\vec x}=0$. Moreover, if it exists, the legal path can be chosen with length at most $2|\Lambda|$. \end{lemma} \begin{proof} The only if direction was proved in Lemma~\ref{lem:closure}. Assume that $[\omega]_{\vec x}^\sigma=0$. Then there exists a finite set $\Lambda'\subset\Lambda$ such that $[\omega_{\Lambda'}]^{\sigma\cdot\omega_{\Lambda\setminus\Lambda'}}_{\vec x}=0$. Up to replacing $\Lambda$ by $\Lambda'$, we may assume that $\Lambda$ is finite. We construct a legal path in $\Lambda$ of length at most $|\Lambda|$ from $\omega$ to $[\omega]^\sigma$. To achieve this, we empty an arbitrary occupied site whose constraint with boundary condition $\sigma$ is satisfied. Such a vertex always exists until we reach $[\omega]^\sigma$. We similarly obtain a legal path from $\omega^{\vec x}$ to $[\omega^{\vec x}]^\sigma=[\omega]^\sigma$ and conclude by concatenating its inverse with the legal path from $\omega$. \end{proof} Deducing or proving the following corollary is left as an exercise to the reader. \begin{corollary}[Legal paths and closure] \label{cor:legal:path} Let $\Lambda\subset\bbZ^d$, $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and $\omega,\omega'\in\Omega_\Lambda$ be such that $\sum_{\vec x\in \Lambda}|\omega_{\vec x}-\omega'_{\vec x}|<\infty$. Then there exists a legal path from $\omega$ to $\omega'$ in $\Lambda$ with boundary condition $\sigma$, if and only if $[\omega]^\sigma=[\omega']^{\sigma}$ (recall \eqref{eq:def:BP:finite}). \end{corollary} \begin{definition}[Ergodic boundary condition]\label{def:ergoBC} Let $\Lambda\Subset\bbZ^d$.
We say that a boundary condition $\sigma\in\Omega_{\mathbb Z^d\setminus \Lambda}$ is \emph{ergodic}, if $[\bone_\Lambda]^\sigma=\bzero_\Lambda$. By Corollary~\ref{cor:legal:path}, this is equivalent to $\mathcal L^{\sigma}$ defining an ergodic process on $\Omega_{\Lambda}$.\end{definition} \section{Ergodicity} \label{sec:ergo} The simple deterministic statements of Section~\ref{sec:legal:paths} entail the following result of Cancrini, Martinelli, Roberto and Toninelli \cite{Cancrini08}*{Proposition~2.4}, whose fundamental importance for KCM is apparent in view of Theorem~\ref{th:BP}. \begin{theorem}[Ergodicity] \label{th:ergo} For any update family $\cU$ and $q\in(0,1)$, the following are equivalent \begin{enumerate} \item \label{item:ergo:5}$\bbP_{\mu_q}(\tau_\vee<\infty)=1$, \item \label{item:ergo:4}$\bbP_{\mu_q}(\tau_0<\infty)=1$, \item \label{item:ergo:1}$\mu_q(\tbp<\infty)=1$, \item \label{item:ergo:2}$0$ is a simple eigenvalue of $\cL$ on $\mathrm{L}^2(\mu_q)$, that is, the dynamics is \emph{ergodic}, \item \label{item:ergo:3}for all $f\in\mathrm{L}^2(\mu_q)$ we have $\lim_{t\to\infty} P_t f=\mu_q(f)$, that is, the dynamics is \emph{mixing}. \end{enumerate} In particular, $\qc=q_{\mathrm c}^{\mathrm{BP}}$ (recall \eqref{eq:def:qc} and \eqref{eq:def:qcbp}). \end{theorem} \begin{proof} \textbf{\ref{item:ergo:5} implies \ref{item:ergo:4}.} This follows directly from the definition \eqref{eq:def:tau}.\\ \textbf{\ref{item:ergo:4} implies \ref{item:ergo:1}.} Consider the function $f:\Omega\to\{0,1\}:\omega\mapsto[\omega]_0$ in $\mathrm{L}^2(\mu_q)$. By Lemma~\ref{lem:closure}, we have that $\cL_{n}f=0$, where $\cL_n$ is the KCM generator on $\Omega$ defined by restricting the sum in \eqref{eq:generator} to $\Lambda_n=\{-n,\dots,n\}^d$. Moreover, clearly $\cL_{n}g\to\cL g$ for any local function $g$, so $\cL_{n}f\to\cL f$ (see \cite{Liggett05}*{Corollary I.3.14}\footnote{As explained in \cite{Liggett05}*{Section IV.4}, the corollary and other results apply in $\mathrm L^2(\mu_q)$ instead of the space of continuous functions for the product topology.}). Thus, $\cL f=0$, so that for any $t\ge 0$, $\bbP_{\mu_q}$-a.s.\ $f(\omega(t))=f(\omega(0))$. Consequently, \begin{equation} \label{eq:ergo:Q}\bbP_{\mu_q}\left(\forall t\in\bbQ\cap[0,\infty), f(\omega(t))=f(\omega(0))\right)=1.\end{equation} Assume that $\bbP_{\mu_q}(\tau_0<\infty)=1$. Then $\omega_0(t)=0$ for all $t\in[\tau_0,\tau_0+\varepsilon]$ for some random $\varepsilon>0$. Thus, \eqref{eq:ergo:Q} gives $\bbP_{\mu_q}(f(\omega(0))=f(\omega(\tau_0))=0)=1$, which concludes the proof, since $\mu_q(\tbp<\infty)=1-\mu_q(f)=1$.\\ \textbf{\ref{item:ergo:1} implies \ref{item:ergo:2}.} Fix $f\in\mathrm{L}^2(\mu_q)$ such that $\cL f=0$. Then, recalling \eqref{eq:dirichlet}, we have $\sum_{\vec x\in\bbZ^d}\mu_q(c_{\vec x}\var_{\vec x}(f))=\cD(f)=0$, so each of the (non-negative) summands is 0. We seek to prove that $\var(f)=0$ and our starting point is the unconstrained Poincar\'e inequality\footnote{In other words, the spectral gap of the generator of the product KCM corresponding to $\cU=\{\varnothing\}$ is $1$, since it is the tensor product of irreducible 2-state Markov processes with total rate $1$. This classical fact also follows e.g.\ by taking $\cX=\bbX_1$ in Lemma~\ref{lem:two-block}.} \begin{equation} \label{eq:Poincare:unconstrained}\var(f)\le \sum_{\vec x\in\bbZ^d}\mu(\var_{\vec x}(f)).\end{equation} Assume further that \ref{item:ergo:1} holds and fix some $\vec x\in\bbZ^d$. 
Then \[\mu\left(\var_{\vec x}(f)\right)\le \mu\left(\omega_0\left(f\left(\omega^{\vec x}\right)-f(\omega)\right)^2\right)=\sum_{n=1}^\infty \mu\left(\1_{\cE_n\setminus\cE_{n-1}}(\omega)\left(f\left(\omega^{\vec x}\right)-f(\omega)\right)^2\right)\] where $\cE_n=\{\omega\in\Omega:[\omega_{\Lambda_n}]^{\bone_{\bbZ^d\setminus\Lambda_n}}_{{\bx}}=0\}$ and $\Lambda_n=\vec x+\{-n,\dots,n\}^d$. But by Lemma \ref{lem:legal:path} for any $\omega\in\cE_n\setminus\cE_{0}$ there exists a legal path $(\omega^{(i)})_{i=0}^{2|\Lambda_{n}|}$ from $\omega$ to $\omega^{\vec x}$ in $\Lambda_{n}$. Writing $f(\omega^{\vec x})-f(\omega)$ telescopically along this legal path and using the Cauchy--Schwarz inequality yields \[\1_{\cE_n\setminus\cE_{n-1}}(\omega)\left(f\left(\omega^{\vec x}\right)-f(\omega)\right)^2\le 2|\Lambda_n|\sum_{i=1}^{2|\Lambda_n|}c_{\vec x^{(i)}}\left(\omega^{(i)}\right)\left(f\left(\omega^{(i)}\right)-f\left(\omega^{(i-1)}\right)\right)^2,\] where $\vec x^{(i)}$ is the location of the legal update from $\omega^{(i-1)}$ to $\omega^{(i)}$. In order to recover $\mu(c_{\vec x^{(i)}}\var_{\vec x^{(i)}}(f))$ from the $\mu$-average of the last summand, we only need to perform a change of measure and observe that $\mu(\omega)/\mu(\omega^{(i)})\le (q(1-q))^{-|\Lambda_n|}$. Putting everything together, we obtain \[\mu(\var_{\vec x}(f))\le \sum_{n=1}^\infty 4|\Lambda_n|^2(q(1-q))^{-|\Lambda_n|-1}\sum_{\vec y\in\Lambda_n}\mu\left(c_{\vec y}\var_{\vec y}(f)\right)=0.\] Recalling \eqref{eq:Poincare:unconstrained}, this gives that $f$ is $\mu$-a.s.\ constant as desired.\\ \textbf{\ref{item:ergo:2} implies \ref{item:ergo:5}.} This follows from the ergodic theorem (see e.g.\ \cite{Billingsley65}) applied to the function $\omega\mapsto\omega_{0}$.\\ \textbf{\ref{item:ergo:2} is equivalent to \ref{item:ergo:3}.} This is \cite{Liggett05}*{Theorem~IV.4.13} (based on operator spectral theory). \end{proof} Before moving on, let us comment on the proof, which showcases two important ideas. Firstly, in order to empty the origin, we need to be able to do so in BP. Secondly, if BP is able to empty some site, we can turn that into a Poincar\'e inequality for the corresponding KCM. Building on these two insights, one can go surprisingly far. The first one is useful for obtaining lower bounds on $\tau_0$, while the second one enables upper bounds. Finally, let us mention that the proof of the implication from \ref{item:ergo:1} to \ref{item:ergo:2} is our first encounter with the canonical path technique for Markov chains originating in \cites{Sinclair89,Lawler88} (also see e.g.\ \cite{Levin09}*{Section 13.5}). \section{Exponential decay} \label{sec:exponential:decay} In Section~\ref{sec:ergo} we saw that ergodicity and mixing of KCM are equivalent to BP a.s.\ emptying $\bbZ^d$. All these results are purely qualitative and provide no quantitative control whatsoever, e.g.\ on the tails of emptying times. Our next task is to transfer the tail behaviour of BP to KCM, by proving the following result adapted from \cites{Hartarsky21,Cancrini08,Martinelli19,Cancrini09}. Once again, its interest is made clear by Theorem~\ref{th:BP} (also recall Conjecture~\ref{conj:qc:qct} and Theorem~\ref{th:ergo}). \begin{theorem}[Exponential decay] \label{th:exponential} For any update family $\cU$ and $q\in(0,1)$, the following are equivalent. 
\begin{enumerate} \item \label{item:expo:2}$\liminf_{t\to\infty}-\log\bbP_{\mu_q}(\tau_\vee>t)/t>0$, \item \label{item:expo:4}$\liminf_{t\to\infty}-\log\bbP_{\mu_q}(\tau_0>t)/t>0$, \item \label{item:expo:6}$\liminf_{t\to\infty}-\log\mu_q(\tbp>t)/t>0$, \item \label{item:expo:7}$\trel<\infty$ (recall \eqref{eq:def:Trel}). \end{enumerate} In particular, $\qct=\qct^{\mathrm{BP}}$ (recall \eqref{eq:def:qct} and \eqref{eq:def:qctbp}). \end{theorem} \begin{proof}[Sketch] \textbf{\ref{item:expo:2} implies \ref{item:expo:4}.} This follows directly from the definition \eqref{eq:def:tau}.\\ \textbf{\ref{item:expo:4} implies \ref{item:expo:6}.} We claim that there exists a constant $\delta>0$ (depending on $\cU$ and $q$) such that if we run BP and KCM from the same initial configuration $\omega\in\Omega$, \begin{equation} \label{eq:t0t0bp}\bbP_\omega\left(\tau_0\le \delta \tbp\right)\le e^{-\tbp(\omega)}.\end{equation} The idea is that, in order to empty the origin, legal updates have to occur in the right order along some path of length $\tbp(\omega)$, starting at the origin. Consecutive vertices of the path are allowed to be at distance at most $\max_{U\in\cU,\vec u\in U} \|\vec u\|$. However, the number of such paths is at most $e^{C\tbp(\omega)}$ for some $C>0$. Moreover, the probability that the sum of $N$ exponential random variables of mean 1 is smaller than $\delta N$ is at most $e^{-2CN}$, choosing $\delta$ sufficiently small depending on $C$. We conclude the proof of the claim by a union bound on the possible paths. See \cite{Martinelli19}*{Lemma~4.3} for more details.\\ \textbf{\ref{item:expo:6} implies \ref{item:expo:7}.} For simplicity of the presentation, we focus on the two-dimensional case and assume that $\{(2,1),(1,2)\}\in\cU$. The proof proceeds in three steps. First, we prove that for $q=1-\varepsilon_0$ sufficiently close to 1, $\trel<\infty$ for the update family $\cU'=\{U'\}=\{\{(1,0),(1,1),(0,1)\}\}$. Second, we perform a renormalisation,\footnote{In physics, one would rather speak of coarse-graining, but we adopt the mathematical jargon.} by tessellating space into large square boxes, which are deemed good if the $\cU$-KCM restricted to the box is able to empty most of the bottom and left boundaries of the box in the current configuration. Third, we show how to completely empty a given (possibly non-good) box, assuming that the three neighbouring boxes corresponding to $\cU'$ are good. We postpone the discussion of the first step to Section~\ref{sec:bisection:2d}, where the bisection technique is presented (for the full details, refer to \cite{Cancrini08}*{Section~4}). 
\begin{figure} \begin{minipage}{\textwidth} \leftfigure[c]{\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=4.0cm,y=4.0cm] \fill[fill=black,fill opacity=0.25] (0,0) -- (0.7,0) -- (0.7,0.7) -- (0,0.7) -- cycle; \draw (0,0)-- (1,0); \draw (1,0)-- (1,1); \draw (1,1)-- (0,1); \draw (0,1)-- (0,0); \draw [very thick] (0.1,0.1)-- (0.9,0.1); \draw [very thick] (0.9,0.1)-- (0.9,0.9); \draw [very thick] (0.9,0.9)-- (0.1,0.9); \draw [very thick] (0.1,0.9)-- (0.1,0.1); \draw (0,0)-- (0.7,0) node[below]{$(1-3\varepsilon)n$}; \draw (0.7,0)-- (0.7,0.7); \draw (0.7,0.7)-- (0,0.7); \draw (0,0.7)-- (0,0); \draw [dashed] (0.9,0.1)-- (0.7,0); \draw [dashed] (0.1,0.9)-- (0,0.7); \draw (0.1,0) node[below]{$\varepsilon n$}; \draw (1,1) node[below right]{$\Lambda$}; \end{tikzpicture}} \rightfigure[c]{\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2cm,y=2cm] \fill[fill=black,fill opacity=0.25] (1,0) -- (1.7,0) -- (1.7,0.7) -- (1,0.7) -- cycle; \fill[fill=black,fill opacity=0.25] (1,1) -- (1.7,1) -- (1.7,1.7) -- (1,1.7) -- cycle; \fill[fill=black,fill opacity=0.25] (0,1) -- (0.7,1) -- (0.7,1.7) -- (0,1.7) -- cycle; \draw [very thick] (0,0)-- (1,0)-- (1,1)-- (0,1)--cycle; \draw (1,0)-- (1.7,0); \draw (1.7,0)-- (1.7,0.7); \draw (1.7,0.7)-- (1,0.7); \draw (1,0.7)-- (1,0); \draw (1,1)-- (1.7,1); \draw (1.7,1)-- (1.7,1.7); \draw (1.7,1.7)-- (1,1.7); \draw (1,1.7)-- (1,1); \draw (0,1)-- (0.7,1); \draw (0.7,1)-- (0.7,1.7); \draw (0.7,1.7)-- (0,1.7); \draw (0,1.7)-- (0,1); \draw (1,0)-- (2,0); \draw (2,0)-- (2,1); \draw (2,1)-- (1,1); \draw (1,1)-- (1,0); \draw (1,1)-- (2,1); \draw (2,1)-- (2,2); \draw (2,2)-- (1,2); \draw (1,2)-- (1,1); \draw (0,1)-- (1,1); \draw (1,1)-- (1,2); \draw (1,2)-- (0,2); \draw (0,2)-- (0,1); \draw [dashed] (1,1.7)-- (0.7,1.1); \draw [dashed] (1.7,1)-- (1.1,0.7); \draw (0.5,0.5) node{$\Lambda$}; \draw (1.7,0) node[below]{$(2-3\varepsilon)n$}; \draw (1,0) node[below]{$n$}; \end{tikzpicture}} \end{minipage} \leftcaption{\label{fig:exponential:renormalisation:1}A good box in the renormalisation of the proof of Theorem~\ref{th:exponential}. The thick box is emptied in \eqref{eq:expo:renorm:1}. This allows emptying the shaded one in \eqref{eq:expo:renorm:2}. The dashed lines with slopes $1/2$ and $2$ indicate how empty sites propagate via the rule $\{(2,1),(1,2)\}$.}\rightcaption{\label{fig:exponential:renormalisation:2}If the three shaded boxes are empty, we are able to empty the thick box $\Lambda$, as indicated by the dashed lines with slopes $1/2$ and $2$, thanks to the update rule $\{(2,1),(1,2)\}$.} \end{figure} We turn to the second step. Fix $\varepsilon>0$ small enough depending on $\varepsilon_0$ and then take $n\in\bbN$ large enough. Let $\Lambda=\{0,\dots,n-1\}^2$ be the renormalisation box. Then the exponential decay provided by \ref{item:expo:6} and a union bound give \begin{equation} \label{eq:expo:renorm:1}\mu_q\left(\left[\omega_\Lambda\right]^{\bone_{\bbZ^{{2}}\setminus\Lambda}}_{[\varepsilon n,(1-\varepsilon)n)^2}=\bzero_{[\varepsilon n,(1-\varepsilon)n)^2}\right)\ge 1-\varepsilon_0.\end{equation} In words, it is likely that BP in $\Lambda$ empties all of $\Lambda$ except a thin frame (see Figure~\ref{fig:exponential:renormalisation:1}), in which case we say that $\Lambda$ is \emph{good}. Thus, \eqref{eq:expo:renorm:1} states that the probability that a box is good is at least $1-\varepsilon_0$.
However, since $\{(2,1),(1,2)\}\in\cU$, we have \begin{equation} \label{eq:expo:renorm:2}\left[\bzero_{[\varepsilon n,(1-\varepsilon)n)^2}\cdot\bone_{\Lambda\setminus[\varepsilon n,(1-\varepsilon)n)^2}\right]^{\bone_{\bbZ^{{2}}\setminus\Lambda}}_{[0,\dots,(1-3\varepsilon)n)^2}=\bzero_{[0,\dots,(1-3\varepsilon)n)^2},\end{equation} that is, BP empties all but a thin strip along the top and right boundaries of $\Lambda$, as desired (see Figure~\ref{fig:exponential:renormalisation:1}). See \cite{Hartarsky21}*{Section~7.4} for more details on the second step. We move on to the third step. Again using $\{(2,1),(1,2)\}\in\cU$, we have \begin{equation} \label{eq:expo:renorm:3}\left[\bzero_{[0,(1-3\varepsilon)n)+nU'}\cdot\bone_{[0,2n)^2\setminus([0,\dots,(1-3\varepsilon)n)+nU')}\right]_\Lambda^{[0,2n)^2}=\bzero_\Lambda\end{equation} (see Figure~\ref{fig:exponential:renormalisation:2}). Consequently, if the three boxes of the form $\Lambda+n\vec u$ with $\vec u\in U'$ are good, then $\Lambda$ can be emptied by BP in the union of these four boxes. Then Lemma~\ref{lem:legal:path} provides a legal path in $[0,2n)^2$ from $\omega_{[0,2n)^2}$ to $\omega_{[0,2n)^2}^{\vec x}$ for any $\vec x\in\Lambda$. Using this path and the canonical path approach as in the proof of Theorem~\ref{th:ergo}, this yields that for some $\gamma<\infty$ depending on $n$ and $q$ and any local function $f$, \begin{align*} \mu\left(\1_{\forall\vec u\in U',\Lambda+n\vec u\text{ is good}}\var_{\Lambda}(f)\right)\le& \sum_{\vec x\in\Lambda}\mu\left(\1_{\forall\vec u\in U',\Lambda+n\vec u\text{ is good}}\var_{\vec x}(f)\right)\\ \le&\gamma\sum_{\vec x\in[0,2n)^2}\mu(c_{\vec x}\var_{\vec x}(f)).\end{align*} Finally, it remains to see that for some $\gamma'<\infty$ and any local function $f$, \[\var(f)\le \gamma'\sum_{\vec x\in\bbZ^{{2}}}\mu\left(\1_{\forall\vec u\in U',n\vec x+\Lambda+n\vec u\text{ is good}}\var_{n\vec x+\Lambda}(f)\right).\] Recalling \eqref{eq:def:cx} and \eqref{eq:poincare}, we recognise exactly the Poincar\'e inequality for the $\cU'$-KCM except that each site (box) now has more than two states and the parameter $q$ is replaced by $\mu_q(\Lambda\text{ is good})\ge 1-\varepsilon_0$. We will frequently encounter such general state-space KCM arising from renormalisation procedures (see Section~\ref{sec:general:KCM}) and will see that usually the corresponding Poincar\'e inequalities are proved just like the ones for ordinary KCM. Since $\trel<\infty$ for the $\cU'$-KCM at $q\ge 1-\varepsilon_0$, by the first step, this completes the proof. See \cite{Cancrini08}*{Section~5} for more details.\\ \textbf{\ref{item:expo:7} implies \ref{item:expo:2}.} Define \[\lambda_0=\inf\left\{\cD(f):f\in\mathrm{L}^2(\mu),\mu\left(f^2\right)=1,\forall \omega\in\Omega,\omega_0=0\Rightarrow f(\omega)=0\right\}.\] Observe that any $f$ as above satisfies $\var(f)\ge q$, since $\mu^2(f)=(1-q)^2\mu^2(f|\omega_0=1)\le (1-q)^2\mu(f^2|\omega_0=1)=(1-q)$. Recalling \eqref{eq:def:Trel}, this implies $\lambda_0\ge q/\trel$. Finally, a general result on hitting times for Markov chains \cite{Aldous02a}*{Proposition~3.21} yields \begin{equation} \label{eq:F0F1} \bbP(\tau_0>t)\le e^{-t\lambda_0}\le e^{-qt/\trel}\end{equation} for any $t\ge 0$. This proves \ref{item:expo:4} and one can recover \ref{item:expo:2}, by proceeding analogously for $\tau_1$ (recall \eqref{eq:def:tau}). See \cite{Cancrini08}*{Theorem~3.6} for a different proof. 
\end{proof} The next result \cite{Cancrini08}*{Lemma 2.11, Proposition~2.13} reduces the relaxation time of a KCM in infinite volume to the one in finite volume (recall Section~\ref{sec:finite_vol}). \begin{proposition}[Finite volume reduction] \label{prop:infinite:to:finite}Let $\cU$ be an update family and $q\in(0,1)$. Given $\Lambda\Subset\bbZ^d$, recall that $\trel^{\Lambda}$ denotes the relaxation time of the finite-volume generator $\cL_{\bzero_{\bbZ^d\setminus\Lambda}}$. Then \[\trel=\lim_{\Lambda\Subset\bbZ^d,\Lambda\to\bbZ^d}\trel^\Lambda .\] The same holds for $C_{\mathrm{LS}}$ and $C_{\mathrm{MLS}}$. \end{proposition} \begin{proof}We prove the chain of inequalities \begin{equation} \label{eq:finite:volume:chain} \trel\le\sup_{\Lambda\Subset\bbZ^d} \trel^\Lambda\le \lim_{\Lambda\Subset\bbZ^d,\Lambda\to\bbZ^d}\trel^\Lambda\le \trel. \end{equation} For the first one, it suffices to prove the Poincar\'e inequality \eqref{eq:poincare} with $C=\sup_{\Lambda\Subset\bbZ^d}\trel^\Lambda$ for any local function $f$. Since $f$ is local, for $\Lambda$ large enough we have $\var(f)=\var_{\Lambda}(f)$ and $\cD(f)=\cD_{\bzero_{\bbZ^d\setminus\Lambda}}(f)$. So we are done by \eqref{eq:def:Trel}. For the second and third inequalities and the existence of the limit in \eqref{eq:finite:volume:chain}, it suffices to show that for $V\subset\Lambda\subset \bbZ^d$ we have \begin{equation} \label{eq:monotonicity} \trel^V\le \trel^\Lambda. \end{equation} Consider $f:\Omega_V\to\bbR$ and extend $f$ to $\tilde f:\Omega_\Lambda\to \bbR$ by $\tilde f(\omega)=f(\omega_V)$. Clearly, $\var(f)=\var(\tilde f)$, since $\mu_\Lambda=\mu_V\otimes\mu_{\Lambda\setminus V}$, so it remains to check that \[\cD_{\bzero_{\bbZ^d\setminus\Lambda}}(\tilde f)\le\cD_{\bzero_{\bbZ^d\setminus V}}(f).\] But, recalling \eqref{eq:dirichlet}, this follows from $c_\bx^{\bzero_{\bbZ^d\setminus\Lambda}}\le c_\bx^{\bzero_{\bbZ^d\setminus V}}$. The latter is true, using \eqref{eq:def:cx:finite} and the fact that $c_\bx$ is non-increasing. The proof for the (modified) logarithmic Sobolev constant is identical. \end{proof} \section{Stronger functional inequalities} We next consider the time scales corresponding to mixing, modified logarithmic Sobolev and logarithmic Sobolev inequalities (recall Section~\ref{sec:time_scales}). We start with general bounds on the mixing time. \begin{proposition}[{Basic mixing time bounds}] \label{prop:tmix:easy} For any update family $\cU$ and $q>0$ there exists $C>0$ such that the following holds. If $q>\qct$, then for all $\varepsilon\in(0,1)$ and $\Lambda\subset\bbZ^d$, \begin{equation} \label{eq:tmix:upper:volume} t_{\mathrm{mix}}^{\bzero_{\bbZ^d\setminus\Lambda}}(\varepsilon)\le C(\log(1/\varepsilon)+|\Lambda|).\end{equation} For all $\varepsilon\in(0,1)$ and all $n$ large enough, for any $\sigma\in\Omega_{\bbZ^d\setminus\{1,\dots,n\}^d}$, \begin{equation}\label{eq:tmix:lower:linear} t_{\mathrm{mix}}^{\sigma}(\varepsilon)\ge n/C.\end{equation} \end{proposition} \begin{proof} For \eqref{eq:tmix:upper:volume}, we use a general inequality for reversible Markov chains (see e.g.\ \cite{Levin09}*{Theorem~20.6}) \[t_{\mathrm{mix}}^{\bzero_{\bbZ^d\setminus\Lambda}}(\varepsilon)\le \log\left(\frac{1}{\varepsilon\mu_*}\right)\trel^\Lambda,\] with $\mu_*=\min_{\omega\in\Omega_{\Lambda}}\mu(\omega)=\min(q,1-q)^{|\Lambda|}$. By \eqref{eq:def:qct}, for $q>\qct$, we have $\trel<\infty$. 
Moreover, by \eqref{eq:monotonicity}, $\trel\ge \trel^{\bzero_{\bbZ^d\setminus\Lambda}}$, concluding the proof of \eqref{eq:tmix:upper:volume}. For \eqref{eq:tmix:lower:linear}, we choose $\omega=\bone_\Lambda$ in \eqref{eq:def:tmix} with $\Lambda=\{1,\dots,n\}^d$. We denote the $\cU$-KCM with parameter $q$, boundary condition $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and initial condition $\omega$ by $(\eta(t))_{t\ge0}$. Fix $C>0$ large enough and set $\Lambda'=\{\lceil n/\sqrt C\rceil,\dots,n-\lceil n/\sqrt C\rceil\}^d$. It suffices to show that for any $q\in[0,1]$ \begin{equation} \label{eq:speed:propagation} \lim_{n\to\infty}\bbP\left(\eta_{\Lambda'}(n/C)=\bone_{\Lambda'}\right)=1.\end{equation} Let us denote $\|\cU\|=\max\{\|u\|:\exists U\in\cU,u\in U\}$. A \emph{path from $\bx\in\Lambda'$ to $\partial\Lambda$} is a finite sequence of sites $(\bx_i)_{i=1}^N$ in $\Lambda$ with $\bx_1=\bx$, $\bx_N$ at distance at most $\|\cU\|$ from $\bbZ^d\setminus\Lambda$, and, for each $i\in\{2,\dots,N\}$, $\|\bx_i-\bx_{i-1}\|\le\|\cU\|$. Assume that $\eta_t(\bx)=0$ for some $t>0$. Then we can construct a path from $\bx$ to $\partial\Lambda$ inductively as follows, assuming that $\bx_i$ is chosen (by definition, $\bx_1=\bx$). If $d(\bx_i,\bbZ^d\setminus\Lambda)\le \|\cU\|$, we set $N=i$ and stop. Otherwise, consider the first time $t'\le t$ such that $\eta_{t'}(\bx_i)=0$ and let $\bx_{i+1}$ be arbitrarily chosen in $\bx_i+\bigcup_{U\in\cU}U$ so that $\eta_{\bx_{i+1}}(t'-)=0$. We call this a \emph{decreasing path of $\bx$}, since the emptying times of its sites are decreasing. For a given path from $\bx$ to $\partial\Lambda$ of length $k$, the probability that it is decreasing is at most the probability that the sum of $k$ i.i.d.\ standard exponential random variables $E_i$ is at most $t$. Indeed, this follows from the graphical construction of Section~\ref{subsec:markov}. Moreover, $k\ge (n/\sqrt C)/\|\cU\|$, so, setting $t=n/C$, for $n$ large enough we obtain \begin{align*} \bbP(\eta_t(\bx)=0)&{}\le \sum_{k\ge \frac{n}{\|\cU\|\sqrt C}}\left|\{(\bx_i)_i\text{ path from $\bx$ to $\partial\Lambda$ of length $k$}\}\right|\cdot\bbP\left(\sum_{l=1}^kE_l\le t\right)\\ &{}\le \sum_{k\ge \frac{n}{\|\cU\|\sqrt C}} (2\|\cU\|+1)^{kd}\cdot\bbP\left(\sum_{l=1}^kE_l\le \frac{k}{C^{1/3}}\right)\\ &{}\le \sum_{k\ge \frac{n}{\|\cU\|\sqrt C}}(2\|\cU\|+1)^{kd}\cdot\frac{(\bbE(e^{-C^{1/3}E_1}))^k}{e^{-k}}\le e^{-n/(\sqrt C\|\cU\|)}, \end{align*} where we used the exponential Markov inequality in the last line and took into account that $C$ can be chosen large enough depending on $\|\cU\|$ and $d$. The desired \eqref{eq:speed:propagation} then follows by a union bound over $\bx\in\Lambda'$. The argument used in the proof of \eqref{eq:tmix:lower:linear} is commonly referred to as \emph{finite speed of propagation} and applies equally well to any finite-range interacting particle system \cite{Liggett99}*{Section~I.1}. In fact, this is a way to show that the graphical construction of the KCM produces a well-defined Markov process. \end{proof} \begin{corollary}[{Infinite logarithmic Sobolev constants}] \label{cor:LS} For any update family $\cU$ and $q\in(0,1)$, we have \begin{equation}\label{inf}C_{\mathrm{LS}}=C_{\mathrm{MLS}}=\infty.\end{equation} Furthermore, there exists $C=C(q)>0$ such that the following holds. \begin{itemize}\item[(i)] Fix $n$ and let $\Lambda=\{-n,\dots,n\}^d$.
For any $\sigma\in\Omega_{\bbZ^d\setminus\Lambda}$ and for $n$ large enough, we have \begin{equation}\label{eq:ls:lower:linear} C_{\mathrm{LS}}^{\sigma}\ge C_{\mathrm{MLS}}^{\sigma}\ge n/C.\end{equation} \item[(ii)] If $q>\qct$, then, for any $\Lambda\Subset\bbZ^d$, we have \begin{equation} \label{eq:ls:upper:volume} C_{\mathrm{LS}}^{\Lambda}\leq C|\Lambda|. \end{equation} \end{itemize} \end{corollary} \begin{proof} The first inequality in \eqref{eq:ls:lower:linear} is a general and classical result (see \cite{Diaconis96}*{Lemma 2.7}). To prove the second inequality we set \[f(\omega)=(1-q)^{-|\Lambda|}\1_{\omega_\Lambda=\bone_\Lambda},\] so that $\mu(f)=1$. Let $\mu^f$ be the probability measure with density $f$ w.r.t.\ $\mu$. Then, by \eqref{eq:speed:propagation}, for a properly chosen $C>0$, we have \begin{equation} \label{cont} \lim_{n\to\infty}\mathbb E_{\mu^f}(\eta_{0}(Cn))=1. \end{equation} Combining \eqref{eq:rel_ent_dec_1}, \eqref{cont} and Pinsker's inequality implies the second inequality of \eqref{eq:ls:lower:linear}. Inequality \eqref{eq:ls:upper:volume} follows from Theorem~\ref{th:exponential}\ref{item:expo:7} together with \eqref{eq:finite:volume:chain} and the general and classical bound (see \cite{Saloff-Coste97} Corollary 2.2.10) \begin{equation} C_{\mathrm{LS}}^{\sigma} \le \frac{\log((1/\mu^*)-1)}{1-2\mu^*}\trel^{\sigma} \end{equation} where $\mu^*:=\min_{\omega\in\Omega_{\Lambda}} \mu(\omega)$. Finally, \eqref{inf} follows from \eqref{eq:ls:lower:linear} and Proposition ~\ref{prop:infinite:to:finite}. \end{proof} \section{Conclusion} \label{subsec:tools:BP} In Theorems~\ref{th:ergo} and~\ref{th:exponential} we completely reduced the critical values $\qc$ and $\qct$ to their BP counterparts. As we saw in Theorem~\ref{th:BP} and will see more generally in Chapter~\ref{chap:universality}, modulo Conjecture~\ref{conj:qc:qct}, this gives complete information about $\qc$ and $\qct$. Our next goal is to find the asymptotics of $\tau_0$ as $q\to\qc+$, as in the BP Theorem~\ref{th:BP}. In reality, even in the simplest BP models with $\qc\in(0,1)$, such as North-East BP, the asymptotics of $\tbp$ as $q\to \qc+$ and of $\mu_{\qc}(\tbp>t)$ as $t\to\infty$ remain inaccessible, even though it is classically conjectured that they should be governed by certain critical exponents. Therefore, in the remainder of the manuscript we mostly focus on update families for which $\qc=0$. Before we direct our efforts to seeking asymptotics as $q\to0$, let us summarise the techniques we learned in this chapter, as several of them will be of further use. \noindent\textbf{BP lower bound.} In \eqref{eq:t0t0bp} we have seen that BP provides a natural lower bound for KCM time scales. Let us also record the general quantitative bound obtained by combining \eqref{eq:t0t0bp} and \eqref{eq:F0F1}: \begin{equation}\label{eq:tau0} \delta\mu_q\left(\tbp\right)\le \bbE_{\mu_q}(\tau_0)\leq \frac{\trel}{q} \end{equation} for some $\delta>0$ depending only on $\cU$ (not necessarily equal to the one in \eqref{eq:t0t0bp}). \noindent\textbf{Test functions.} Thanks to the variational definition \eqref{eq:def:Trel} of $\trel$, one can obtain lower bounds by plugging well-chosen functions. We used this in the proof of the implication \ref{item:ergo:4} to \ref{item:ergo:1} of Theorem~\ref{th:ergo}. The test function we used here simply reflected BP, but in the next chapter we will need a more subtle choice. 
\noindent\textbf{Canonical paths.} This method for proving upper bounds on $\trel$ was used in the implication \ref{item:ergo:1} to \ref{item:ergo:2} of Theorem~\ref{th:ergo}. In this instance, the canonical paths were simply granted by BP, but in the next chapter, paths will incorporate more refined heuristics on the KCM dynamics. \noindent\textbf{Renormalisation.} In the proof of the implication \ref{item:expo:6} to \ref{item:expo:7} we used the idea of renormalisation. We regarded large boxes as single sites in an auxiliary (generalised) KCM dynamics. This splits the problem of proving upper bounds on $\trel$ in two. First, we prove an upper bound on the relaxation time of the auxiliary dynamics. Second, we show (e.g.\ by canonical paths) how to ``locally'' reconstruct the original Dirichlet form from the auxiliary one, taking advantage of the auxiliary constraint. We will make extensive use of this technique in the next chapters. \noindent \textbf{Finite speed of propagation.} This observation allows us to show that with very high probability no information about the state of the process at a given place or its boundary condition can travel faster than linearly. The proof of the lower bound of Proposition~\ref{prop:tmix:easy} was an application of this fact. \noindent \textbf{Stronger functional inequalities.} In Corollary~\ref{cor:LS}, we established that, contrary to Poincar\'e inequalities, logarithmic and modified logarithmic Sobolev inequalities in infinite volume are \emph{not} a suitable tool for studying KCM. Nonetheless, such techniques can be of use on suitably chosen finite volumes.
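To complement the above summary with something concrete, here is a minimal Python sketch (ours, not part of the text) of the bootstrap percolation map \eqref{eq:def:BP} and of the closure \eqref{eq:def:closure} for 2-neighbour BP on a finite box with fully occupied boundary condition, together with the finite-volume analogue of $\tbp$. The function names and the parameters at the end are ours and purely illustrative.
\begin{verbatim}
# Minimal illustration (not from the text) of the BP map and closure for
# 2-neighbour BP on {0,...,n-1}^2 with occupied boundary; 0 = empty, 1 = occupied.
import random

NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# 2-neighbour update family: all pairs of distinct nearest neighbours.
UPDATE_FAMILY = [(NEIGHBOURS[i], NEIGHBOURS[j])
                 for i in range(4) for j in range(i + 1, 4)]

def bp_step(config, n):
    """One synchronous application of the BP map; empty sites stay empty."""
    new = dict(config)
    for x in range(n):
        for y in range(n):
            if config[(x, y)] == 1:
                for U in UPDATE_FAMILY:
                    # sites outside the box count as occupied (boundary condition)
                    if all(config.get((x + u[0], y + u[1]), 1) == 0 for u in U):
                        new[(x, y)] = 0
                        break
    return new

def closure_and_emptying_time(config, n):
    """Iterate bp_step to a fixed point; also record the first time the
    origin (0, 0) becomes empty (None if it never does)."""
    t, t_origin = 0, (0 if config[(0, 0)] == 0 else None)
    while True:
        new = bp_step(config, n)
        if new == config:
            return config, t_origin
        config, t = new, t + 1
        if t_origin is None and config[(0, 0)] == 0:
            t_origin = t

if __name__ == "__main__":
    random.seed(0)
    n, q = 40, 0.08   # box side and density of (initially) empty sites
    omega = {(x, y): (0 if random.random() < q else 1)
             for x in range(n) for y in range(n)}
    closure, tau = closure_and_emptying_time(omega, n)
    print("initially empty:", sum(v == 0 for v in omega.values()))
    print("empty in closure:", sum(v == 0 for v in closure.values()))
    print("finite-volume emptying time of the origin:", tau)
\end{verbatim}
Such a simulation only illustrates the definitions; none of the quantitative results of this chapter rely on it.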
2412.13613v1
http://arxiv.org/abs/2412.13613v1
Some New Non-binary Quantum Codes from One-generator Quasi-cyclic Codes
\documentclass[11pt, a4paper]{article} \usepackage{amsmath} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{amsthm,amssymb} \usepackage{epsfig} \usepackage{float} \usepackage{color} \usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry} \usepackage[thinlines]{easytable} \usepackage{lscape} \usepackage{algorithm,algorithmic} \usepackage{amsmath,amssymb,amsfonts} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\lcm}{lcm} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\rank}{rank} \numberwithin{figure}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{note}[theorem]{Note} \begin{document} \title{Some New Non-binary Quantum Codes from One-generator Quasi-cyclic Codes \footnote{Email: [email protected] (T. Bag) [corresponding author], [email protected] (H. Q. Dinh), [email protected] (D. Panario)}} \author{Tushar Bag$^{1}$, Hai Q. Dinh$^2$ , Daniel Panario$^3$} \date{ \small{ 1. Univ Lyon, Inria, ENS Lyon, UCBL, LIP, 69342 Lyon Cedex 07, France\\ 2. Kent State University, Warren, OH 44483, USA\\ 3. School of Mathematics and Statistics, Carleton University, Ottawa, Ontario K1S 5B6, Canada. } \today} \maketitle \begin{abstract} This article studies one-generator and two-generator quasi-cyclic codes over finite fields. We present two versions of necessary and sufficient conditions for the symplectic self-orthogonality of one-generator quasi-cyclic codes, using both matrix and polynomial approaches. We provide two versions of necessary and sufficient conditions for two-generator quasi-cyclic codes for symplectic self-orthogonality and the symplectic dual-containing condition. Additionally, using these necessary and sufficient conditions, we construct new quantum codes with record-breaking parameters that improve upon current records. \end{abstract} \text{\bf Keywords:} \small{Quasi-cyclic codes (QCs), quantum error-correcting codes (QECCs).}\\ \text{\bf Mathematics Subject Classifications(2010):} \small{ 94B05, 94B15, 94B60.}\\ \section{Introduction} Quasi-cyclic (QC) codes are a prominent class of linear error-correcting codes. They possess a well-developed algebraic structure that generalizes the concept of cyclic codes. This generalization allows for greater flexibility in code design, enabling the construction of asymptotically good codes that approach the modified Gilbert-Varshamov bound \cite{Kas, L5}. The study of QC codes has yielded numerous record-breaking linear codes, particularly over small finite fields. Several key contributions have been made to the understanding of QC codes. Research by Conan et al.~\cite{Con} studied into the structural attributes of quasi-cyclic codes, providing both an enumeration of these codes and a characterization of their duals. Another study by Seguin \cite{Se} examined the properties of a specific class of one-generator QC codes. Ling and Solé research deeply into the algebraic structure of QC codes across a series of articles \cite{L1, L2, L3, L4}. Lally et al. \cite{La} explored the structure and duals of arbitrary QC codes, with a particular focus on self-dual QC codes with an index of 2. Aydin et al. \cite{Nuh1} investigated the structure of 1-generator quasi-twisted codes and constructed new linear codes. 
\vskip 3pt Quantum error-correcting codes (QECCs) are essential for protecting quantum information from decoherence and quantum noise, playing a significant role in both quantum computing and communication. Quantum computers are theorized to solve problems significantly faster than classical computers. However, mitigating errors caused by decoherence and noise remains a critical challenge. Here, QECCs emerge as a powerful tool, safeguarding quantum information in both communication and computation. The concept of QECCs was first proposed in \cite{cal1996, St, S95}. The Calderbank-Shor-Steane (CSS) structure \cite{CRSS98} has served as the foundation for a substantial portion of recent research on QECCs. Construction of non-binary quantum codes techniques is explored in \cite{0, A, Ket}. \vskip 3pt Symplectic self-orthogonal quasi-cyclic (QC) codes have not only proven themselves to be an excellent family for constructing new linear codes, but they have also become pivotal in constructing numerous new binary quantum codes. The study of quantum code construction from QC codes began recently after the work of Galindo et al. \cite{Gal}, where the authors studied a specific class of 2-generator QC codes using Euclidean, Hermitian, and symplectic structures of QC codes. Following this, Lv et al. \cite{Q3, Q2} and Guan et al. \cite{Q4, Q1} constructed many record-breaking binary quantum codes utilizing the symplectic structure of QC codes. Additionally, explicit dual generators of QC codes have been studied in \cite{QC,iitr}. \vskip 3pt The motivation for this study is to examine the necessary and sufficient conditions for symplectic self-orthogonality and the symplectic dual-containing condition in a simplified form that applies to the general version of quasi-cyclic codes. While some specific forms have been previously studied in the literature, we aim to encompass all these forms, provide a simpler, more general version, and demonstrate that our approach can construct new codes with record-breaking parameters. \vskip 3pt This paper is organized as follows. In Section 2, we present the basics of linear codes and quasi-cyclic codes. In Section 3, we study one-generator quasi-cyclic codes and present the symplectic self-orthogonality condition over finite fields. Section 4 focuses on two-generator quasi-cyclic codes, showing both the symplectic self-orthogonality condition and the symplectic dual-containing condition for the general form of two-generator quasi-cyclic codes over finite fields. In Section 5, we construct new quantum codes with record-breaking parameters based on our study. Finally in Section 6, we conclude our work giving further research problems. \section{Preliminaries} Let \(\mathbb{F}_q\) be the finite field with \(q = p^r\) elements, where \(p\) is a prime number and \(r\) is a positive integer. A code \(C\) is a linear code of length \(2n\) over $\mathbb F_q$ if \(C\) forms a subspace of the vector space \(\mathbb{F}_q^{2n}\). The elements of \(C\) are called codewords. Suppose \(\mathbf{a} = (a_0, a_1, \dots, a_{2n-1})\) and \(\mathbf{b} = (b_0, b_1, \dots, b_{2n-1})\) are codewords of \(C\). We define the (minimum) Hamming weight of \(C\) as $w_H(C) = \min \{ w_H(\mathbf{a}) \mid \mathbf{a} \in C, \mathbf{a} \ne 0 \},$ where \(w_H(\mathbf{a})\) is the number of non-zero components of \(\mathbf{a}\). 
The (minimum) Hamming distance of \(C\) is $ d_H(C) = \min \{ d_H(\mathbf{a}, \mathbf{b}) \mid \mathbf{a},\mathbf{b}\in C, \mathbf{a} \ne \mathbf{b} \},$ where \( d_H(\mathbf{a}, \mathbf{b}) = |\{ i \mid a_i \ne b_i \}|\). \vskip 5pt The symplectic inner product of \(\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2) \in \mathbb{F}_q^{2n}\) and \(\mathbf{v} = (\mathbf{v}_1, \mathbf{v}_2) \in \mathbb{F}_q^{2n}\) is defined as \[ \langle \mathbf{u}, \mathbf{v} \rangle_s = \langle \mathbf{u}_1, \mathbf{v}_2 \rangle_e - \langle \mathbf{v}_1, \mathbf{u}_2 \rangle_e, \] where \(\mathbf{u}_1, \mathbf{u}_2, \mathbf{v}_1, \mathbf{v}_2 \in \mathbb{F}_q^n\) and \(\langle \cdot, \cdot \rangle_e\) is the standard Euclidean inner product in \(\mathbb{F}_q^n\). We observe that this inner product can also be written as \[ \langle \mathbf{u}, \mathbf{v} \rangle_s = \mathbf{u} \Omega \mathbf{v}^t, \] where \[ \Omega = \begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix}. \] Here, \( I_n \) denotes the \( n \times n \) identity matrix, and \( O_n \) denotes the \( n \times n \) zero matrix. \vskip 5pt The symplectic dual code \(C^{\perp_s}\) of \(C\) is defined as $ C^{\perp_s} = \{ \mathbf{u} \in \mathbb{F}_q^{2n} \mid \langle \mathbf{u}, \mathbf{v} \rangle_s = 0, \text{ for all } \mathbf{v} \in C \}. $ A linear code \(C\) is called symplectic self-orthogonal if \(C \subseteq C^{\perp_s}\), and symplectic dual-containing if \(C^{\perp_s} \subseteq C\). Let \(\mathbf{c} = (\mathbf{x}, \mathbf{y}) \in \mathbb{F}_q^{2n}\), where \(\mathbf{x}, \mathbf{y} \in \mathbb{F}_q^n\). We define the (minimum) symplectic weight of \(C\) as $ w_S(C) = \min \{ w_S(\mathbf{c}) \mid \mathbf{c} \in C, \mathbf{c} \ne 0 \}, $ where \(w_S(\mathbf{c}) = w_S(\mathbf{x}, \mathbf{y}) = |\{ i \mid (x_i, y_i) \ne (0,0) \}|\). \vskip 3pt Suppose $C$ is a linear code of length $n$ over $\mathbb F_q$. Then $C$ is a cyclic code if for any codeword $ (c_0,c_1,\dots,c_{n-1})\in C$, we have $(c_{n-1},c_0,c_1,\dots,c_{n-2})\in C$. Let us denote $R=\mathbb F_q[x]/\langle x^n-1 \rangle$. We can identify a codeword $(c_0,c_1,\dots,c_{n-1})\in C$ with the polynomial $c(x)=c_0+c_1x+\cdots+c_{n-1}x^{n-1}\in R$. It is easy to show that $C$ is a cyclic code of length $n$ over $\mathbb F_q$ if and only if it forms an ideal of $R$. \begin{definition} Suppose $C$ is a linear code of length $ln$ over $\mathbb F_q$. Any codeword ${\mathfrak c}=(c_{0,0},c_{0,1},\dots,\linebreak c_{0,l-1},c_{1,0},\dots, c_{1,l-1}, \dots, c_{n-1,0},\dots,c_{n-1,l-1}) \in C$ can be written as \[{\mathfrak c}=\begin{pmatrix} c_{0,0} & c_{0,1} &\cdots &c_{0,l-1}\\ c_{1,0} & c_{1,1} &\cdots &c_{1,l-1}\\ \vdots & \vdots & \vdots &\vdots \\ c_{n-1,0} & c_{n-1,1} &\cdots &c_{n-1,l-1}\\ \end{pmatrix}. \] In this case, $C$ is a quasi-cyclic (QC) code of index $l$ if for any ${\mathfrak c}\in C$, we get \[\begin{pmatrix} c_{n-1,0} & c_{n-1,1} &\cdots &c_{n-1,l-1}\\ c_{0,0} & c_{0,1} &\cdots &c_{0,l-1}\\ \vdots & \vdots & \vdots &\vdots \\ c_{n-2,0} & c_{n-2,1} &\cdots &c_{n-2,l-1}\\ \end{pmatrix}\in C.\] \end{definition} \section{One-generator QC codes} A QC code of length \(2n\) and index \(2\) can be represented as \(C = (C_1, C_2)\), where each \(C_j\) is a cyclic code of length \(n\). Suppose \(C_j\) is generated by a polynomial \(c_j(x)\) such that \(c_j(x) \mid x^n - 1\) for \(j = 1, 2\). Then, a one-generator QC code with index \(2\) can be interpreted as a 2-tuple of polynomials \((c_1(x), c_2(x))\).
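Before turning to generators, here is a small numerical sketch (ours, not from the paper) of the symplectic inner product defined above and of its matrix form $\langle \mathbf{u}, \mathbf{v} \rangle_s = \mathbf{u}\,\Omega\,\mathbf{v}^t$. The function names and the parameters $q=3$, $n=5$ are ours and chosen only for illustration.
\begin{verbatim}
# Small illustration (not from the paper) of the symplectic inner product
# <u, v>_s = <u1, v2>_e - <v1, u2>_e and its matrix form u * Omega * v^T.
import numpy as np

def symplectic_inner_product(u, v, q):
    """<u,v>_s computed directly from the definition, modulo q (q prime here)."""
    u, v = np.asarray(u), np.asarray(v)
    n = len(u) // 2
    u1, u2 = u[:n], u[n:]
    v1, v2 = v[:n], v[n:]
    return int((u1 @ v2 - v1 @ u2) % q)

def omega_matrix(n):
    """The 2n x 2n matrix Omega = [[O, I], [-I, O]]."""
    O, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
    return np.block([[O, I], [-I, O]])

if __name__ == "__main__":
    q, n = 3, 5
    rng = np.random.default_rng(1)
    u = rng.integers(0, q, size=2 * n)
    v = rng.integers(0, q, size=2 * n)
    Omega = omega_matrix(n)
    print("direct definition :", symplectic_inner_product(u, v, q))
    print("via u*Omega*v^T    :", int((u @ Omega @ v) % q))  # the two agree
    # Every vector is symplectically orthogonal to itself:
    print("<u,u>_s =", symplectic_inner_product(u, u, q))
\end{verbatim}
In particular, the two computations agree, and $\langle \mathbf{u}, \mathbf{u} \rangle_s = 0$ for every vector, reflecting the fact that the symplectic form is alternating.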
\vskip 3pt Any one-generator QC code of length \(2n\) and index \(2\) can be written as \[ C = \{r(x)\big(c_1(x), c_2(x)\big) \mid r(x) \in R\} = \{\big(r(x)c_1(x), r(x)c_2(x)\big) \mid r(x) \in R\}. \]
\begin{theorem}\label{gen} Let $C$ be a one-generator QC code of length $2n$ and index $2$ over $\mathbb F_q$. Then a generator of $C$ is of the form $(r_1(x)g_1(x), r_2(x)g_2(x))$, where ${g_i}(x)h_i(x)= x^n -1$ and $\gcd\big(r_i(x),h_i(x)\big)=1$ for $i = 1, 2$. \end{theorem}
\begin{proof} Let \( C \) be a one-generator QC code of length \( 2n\) and index \( 2 \) over \( \mathbb{F}_q \), generated by \((c_1(x), c_2(x))\). Any element of \( C \) can be written in the form \(\big(s(x)c_1(x), s(x)c_2(x)\big)\) for some polynomial \( s(x) \in R \).
\vskip 3pt For \( i = 1, 2 \), we define a map \(\Psi_i : C \longrightarrow R\) by \[ \Psi_i\big(s(x)c_1(x), s(x)c_2(x)\big) = s(x)c_i(x). \] It is easy to show that this map \(\Psi_i\) is a module homomorphism. Since \(\Psi_i(C)\) is the image of \( C \) under a module homomorphism, it forms an ideal of \( R \).
\vskip 3pt In the ring \( R = \mathbb{F}_q[x] / \langle x^n - 1 \rangle \), every ideal is a cyclic code of length \( n \) over \( \mathbb{F}_q \). Because \( R \) is a principal ideal ring, any ideal \(\Psi_i(C)\) is generated by a single polynomial \( g_i(x) \) such that \( g_i(x) \mid x^n - 1 \). Therefore, we can write \(\Psi_i(C) = ( g_i(x))\). Since \(\Psi_i(C)\) is also generated by \(c_i(x)\), we have \((c_i(x)) = (g_i(x))\), and hence \(c_i(x) = r_i(x)g_i(x)\) for some \(r_i(x) \in R\) with \(\gcd\big(r_i(x), h_i(x)\big) = 1\), where \(g_i(x)h_i(x) = x^n - 1\).
\vskip 3pt Thus, the code \( C \) can be expressed as \[ C = \big(r_1(x)g_1(x), r_2(x)g_2(x)\big) , \] where \( g_i(x) h_i(x) = x^n - 1 \) and \(\gcd(r_i(x), h_i(x)) = 1\) for \( i = 1, 2 \). This completes the proof. \end{proof}
\begin{definition} Let $C$ be a one-generator QC code of length $2n$ and index $2$ over $\mathbb F_q$. Then the monic polynomial $h(x)$ of minimum degree such that \[h(x)\big(r_1(x)g_1(x), r_2(x)g_2(x)\big)=(0,0)\] \noindent is the parity-check polynomial of $C$. \end{definition}
\begin{theorem}\label{gcd1} Let $C=\big(r_1(x)g_1(x), r_2(x)g_2(x)\big)$ be a one-generator QC code of length $2n$ and index $2$ over $\mathbb F_q$, where \( g_i(x) h_i(x) = x^n - 1 \) and \(\gcd(r_i(x), h_i(x)) = 1\) for \( i = 1, 2 \). Then $\dim(C)=\deg(h(x))$. \end{theorem}
\begin{proof} We define a module homomorphism \(\Phi: R \rightarrow C\) by \[\Phi(a(x)) = a(x)\big(r_1(x)g_1(x), r_2(x)g_2(x)\big). \] Let \( h(x) = \mathrm{lcm}(h_1(x), h_2(x)) \) be the parity-check polynomial of \(C\). We note that \(\text{Ker}(\Phi) = (h(x))\). Since the map \(\Phi\) is surjective, we get \( R/(h(x)) \cong C \), which implies that $\dim(C) =\deg(h(x))$. \end{proof}
\begin{remark}{\em An important fact about Theorem \ref{gcd1} is that, without the conditions \(\gcd(r_i(x), h_i(x)) = 1\) for \(i = 1, 2\), we cannot assert that \(\dim(C) = \deg(h(x))\). We illustrate this with the following example.\hfill$\square$ }\end{remark}
\begin{example}{\em We consider \(R = \mathbb{F}_3[x]/\langle x^{10} - 1 \rangle\), and \[x^{10}-1 = (x+1)(x+2)(x^4 + x^3 + x^2 + x + 1)(x^4 + 2x^3 + x^2 + 2x + 1)\in \mathbb{F}_3[x].\] We take \[g_1(x) =(x+2)(x^4 + 2x^3 + x^2 + 2x + 1),~~g_2(x) = (x^4 + x^3 + x^2 + x + 1)(x^4 + 2x^3 + x^2 + 2x + 1),\] \[r_1(x) = 2x^4 + 2x^3 + 2x^2 + 2x + 2,~~\mbox{and}~~ r_2(x) = 2x^5 + 2x^4 + x^3 + x^2 + 2x.\] \vskip 3pt We have that $g_i(x) \mid x^n - 1$, and $r_i(x)\in R$, for \(i = 1, 2\).
Also, as \( g_i(x) h_i(x) = x^n - 1 \) for \( i = 1, 2 \), we get \[h_1(x) = x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 1,~~\mbox{and}~~ h_2(x) = x^2 + 2.\] \vskip 3pt From the above we get \[\gcd(r_1(x),h_1(x)) = x^4 + x^3 + x^2 + x + 1,~~\mbox{and}~~ \gcd(r_2(x),h_2(x)) = 1. \] Then $h(x)=\mathrm{lcm}(h_1(x),h_2(x))=x^6 + x^5 + 2x + 2$, implying $\deg(h(x))=6$.
\vskip 4pt On the other hand, using MAGMA computations, we find that the dimension of the QC code generated by \((r_1(x)g_1(x), r_2(x)g_2(x))\) is 2. \hfill$\square$ }\end{example}
\begin{corollary}\cite{Nuh1} Let $C=(r_1(x)g(x), r_2(x)g(x))$ be a one-generator QC code of length $2n$ and index $2$ over $\mathbb F_q$. Then $\dim(C)=n-\deg(g(x))$. Moreover, $d'(C) \ge 2d_H$, where $d'$ is the minimum Hamming distance of $C$ and $d_H$ is the minimum Hamming distance of $C_i$ for $i=1,2$. \hfill$\square$ \end{corollary}
\begin{theorem}\label{mt}\cite{xu2022} Let \( C \) be a linear code of length \( 2n \) over \( \mathbb{F}_q \) with generator matrix \( G \). Suppose \( G \) is an \( m \times 2n \) matrix. Then \( C \) is a symplectic self-orthogonal code if and only if \( G \Omega G^t = O_m \), where \( O_m \) is the \( m \times m \) zero matrix and \( G^t \) denotes the transpose of \( G \). \end{theorem}
\begin{proof} Let $c=uG$ and $d=vG$ be arbitrary codewords of $C$, where \( u, v \in \mathbb F^m_q \). Then \[\langle c, d\rangle_s = c\Omega d^t= (uG)\Omega (vG)^t=u(G\Omega G^t)v^t.\] Therefore, $\langle c, d\rangle_s=0$ for all $c,d\in C$ if and only if $G\Omega G^t=O_m$; that is, $C$ is symplectic self-orthogonal if and only if $G\Omega G^t=O_m$. \end{proof}
We recall a generator of a one-generator QC code of length \( 2n \) and index \( 2 \) over \( \mathbb{F}_q \) as \( C = (r_1(x)g_1(x), r_2(x)g_2(x)) \), and we denote \( a(x) = r_1(x)g_1(x) \) and \( b(x) = r_2(x)g_2(x) \). We note that $a(x), b(x) \in R$. Then a generator matrix of \( C \) can be expressed as \( G = (A \mid B) \), where \( A \) and \( B \) are \( n \times n \) circulant matrices generated by \( a(x) \) and \( b(x) \), respectively. Here, \( \mid \) denotes the horizontal concatenation of the two circulant matrices \( A \) and \( B \). Then we have the following result.
\begin{theorem}\label{t1} Let \(C\) be a one-generator QC code of length $2n$ and index $2$ over \(\mathbb{F}_q\), whose generator matrix is \(G = (A \mid B)\). Then \(C \subseteq C^{\perp_s}\) if and only if \(AB^t = BA^t\), where \(A^t\) and \(B^t\) represent the transposes of \(A\) and \(B\), respectively. \end{theorem}
\begin{proof} By Theorem \ref{mt}, \( C \subseteq C^{\perp_s} \) if and only if \( G \Omega G^t \) is a zero matrix. For this one-generator QC code, the generator matrix \( G \) is of the form \( G = (A \mid B) \). Thus, we have \begin{align*} G \Omega G^t &= \begin{pmatrix} A & B \end{pmatrix} \begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix} \begin{pmatrix} A & B \end{pmatrix}^t= \begin{pmatrix} -B & A \end{pmatrix} \begin{pmatrix} A^t \\ B^t \end{pmatrix} = - B A^t + A B^t. \end{align*} Therefore, $G\Omega G^t=O_n$ if and only if $A B^t - B A^t = O_n$, that is, $A B^t = B A^t $. Hence, $C\subseteq C^{\perp_s}$ if and only if $A B^t=B A^t$. \end{proof}
We can also present Theorem \ref{t1} in terms of polynomials. To do so, we need to discuss the transpose of a polynomial and its relation with the generator matrix, as follows.
\vskip 5pt Let $t(x)=t_0 +t_1 x+t_2 x^2+\cdots+t_{n-2}x^{n-2}+t_{n-1}x^{n-1} \in \mathbb F_q[x]/\langle x^n-1 \rangle$.
We define the transpose polynomial $\overline{t}(x)$ of $t(x)$ as \[\overline{t}(x) =x^n t(x^{-1}) =t_0 +t_{n-1} x+t_{n-2} x^2+\cdots+t_{2}x^{n-2}+t_{1}x^{n-1}. \] \vskip 5pt We present the following result on the symplectic self-orthogonality of a one-generator quasi-cyclic code in terms of the generator polynomials. \begin{theorem}\label{prf} Let \(C\) be a one-generator QC code of length $2n$ and index $2$ generated by \((a(x), b(x))\). Then \(C \subseteq C^{\perp_s}\) if and only if \(a(x)\overline{b}(x) - b(x)\overline{a}(x) \equiv 0 \mod (x^n - 1) \). \end{theorem} \begin{proof} By Theorem \ref{t1}, the condition \( C \subseteq C^{\perp_s} \) holds if and only if \( AB^t = BA^t \). We aim to express this condition using polynomials. \vskip 3pt If the circulant matrix \( A \) is generated by the polynomial \( a(x) \), then its transpose \( A^t \) is generated by the transpose polynomial \( \overline{a}(x) \). Similarly, the transpose \( B^t \) of the circulant matrix \( B \), generated by the polynomial \( b(x) \), is represented by the transpose polynomial \( \overline{b}(x) \). \vskip 3pt Additionally, considering that the circulant matrix \( A \) is generated by \( a(x) \) and \( B^t \) is generated by \( \overline{b}(x) \), the product matrix \( AB^t \) corresponds to the circulant matrix generated by \( a(x) \overline{b}(x) \mod (x^n - 1) \). Similarly, \( BA^t \) corresponds to the circulant matrix generated by \( b(x) \overline{a}(x) \mod (x^n - 1) \). \vskip 3pt Therefore, \(AB^t = BA^t\) if and only if \(a(x)\overline{b}(x) - b(x)\overline{a}(x) \equiv 0 \mod (x^n - 1)\). \end{proof} Here, we present a detailed example explaining all the concepts discussed above. \begin{example}{\em We consider $R=\mathbb F_3[x]/\langle x^{11}-1 \rangle$, and \[x^{11}-1=(x+2)(x^5 + 2x^3 + x^2 + 2x + 2)(x^5 + x^4 + 2x^3 + x^2 + 2)\in \mathbb F_3[x].\] We take \[g_1(x)=(x+2)(x^5 + x^4 + 2x^3 + x^2 + 2),~~\mbox{and}~~g_2(x)=x^5 + x^4 + 2x^3 + x^2 + 2.\] Then \[h_1(x)=(x^5 + 2x^3 + x^2 + 2x + 2),~~\mbox{and}~~h_2(x)=(x+2)(x^5 + 2x^3 + x^2 + 2x + 2).\] \noindent We also consider \[r_1(x)=2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2,~~\mbox{and}\]\[r_2(x) = 2x^7 + 2x^6 + 2x^5 + x^4 + x^3 + 2x^2 + x\in R.\] \noindent Then $C=(r_1(x)g_1(x), r_2(x)g_2(x))$ is a one-generator QC code of length $22$ over $\mathbb F_3$. We can check that $\gcd(r_i(x), h_i(x))= 1$ for $i = 1, 2$. Then $\dim(C)=\deg(h(x))=6$, where \[h(x)=\mathrm{lcm}(h_1(x),h_2(x))=x^6 + 2x^5 + 2x^4 + 2x^3 + x^2 + 1.\] As per Theorem \ref{t1}, we take $a(x)=r_1(x)g_1(x)\in R,$ and $b(x)=r_2(x)g_2(x)\in R$. Then $A$ is generated by $a(x)$, $B$ is generated by $b(x)$, $A^t$ is generated by $\overline{a}(x)$ and $B^t$ is generated by $\overline{b}(x)$, where \[a(x)=x^9 + x^5 + x^4 + x^3 + x + 1,\] \[\overline{a}(x)=x^{10} + x^8 + x^7 + x^6 + x^2 + 1,\] \[b(x)=2x^{10} + 2x^8 + 2x^7 + x^6 + x^5 + x^2 + x + 1,\] \[\overline{b}(x)=x^{10} + x^9 + x^6 + x^5 + 2x^4 + 2x^3 + 2x + 1.\] \noindent It is easy to check that $AB^t = BA^t$, also \(a(x)\overline{b}(x) - b(x)\overline{a}(x) \equiv 0 \mod (x^n - 1)\). Thus, \(C \subseteq C^{\perp_s}\). 
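For illustration, this congruence can also be verified with a few lines of code. The following plain-Python sketch (the computations reported in this paper use MAGMA) multiplies coefficient vectors modulo $x^{11}-1$ over $\mathbb{F}_3$ and prints the coefficients of $a(x)\overline{b}(x)-b(x)\overline{a}(x)$, which are expected to be all zero.
\begin{verbatim}
# Illustrative check of a(x)*bbar(x) - b(x)*abar(x) = 0 mod (x^n - 1)
# over F_3 for the polynomials of this example. Polynomials are
# coefficient lists of length n in ascending order: p[i] is the
# coefficient of x^i.
n, q = 11, 3

def from_terms(terms):              # build a length-n coefficient list
    p = [0] * n
    for e, c in terms:
        p[e] = c % q
    return p

def transpose(p):                   # pbar(x) = x^n p(1/x) mod (x^n - 1)
    return [p[0]] + p[1:][::-1]

def mul(p, r):                      # product mod (x^n - 1, q)
    out = [0] * n
    for i, pi in enumerate(p):
        for j, rj in enumerate(r):
            out[(i + j) % n] = (out[(i + j) % n] + pi * rj) % q
    return out

a = from_terms([(9, 1), (5, 1), (4, 1), (3, 1), (1, 1), (0, 1)])
b = from_terms([(10, 2), (8, 2), (7, 2), (6, 1), (5, 1), (2, 1), (1, 1), (0, 1)])
diff = [(x - y) % q
        for x, y in zip(mul(a, transpose(b)), mul(b, transpose(a)))]
print(diff)                         # expected: all zeros, as asserted above
\end{verbatim}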
\hfill$\square$ }\end{example} \vskip 10pt
\begin{remark} {\em A key property of the transpose polynomial is that if \( C \) is an \([n, \dim(C)]\) code with generator matrix \( G \in \mathbb{F}_q^{n \times n} \), where \( G \) is the circulant matrix generated by \( g(x) \), then the code \( C^t \) is also an \([n, \dim(C)]\) code with generator matrix \( G^t \in \mathbb{F}_q^{n \times n} \), which is generated by \( \overline{g}(x) \), the transpose polynomial of \( g(x) \). This holds because \( \dim(C) = \rank(G) = \rank(G^t) = \dim(C^t) \). It is important to note that the circulant generator matrix is not required to have full rank. \hfill$\square$ }\end{remark}
In the following result, we present a theorem that allows us to determine the dimension of a one-generator QC code without the $\gcd$ conditions of Theorem \ref{gcd1}.
\begin{theorem} Let \(C\) be a one-generator QC code of length $2n$ and index $2$ generated by \((a(x), b(x))\), where $a(x),b(x)\in R$. Then \(\dim(C) = n - \deg(f(x))\), where \(f(x)=\gcd\big(a(x),b(x),x^n-1\big)\). \end{theorem}
\begin{proof} Suppose \(\gcd\big(a(x), b(x), x^n - 1\big) = f(x)\). Consider the surjective module homomorphism \(\Phi: R \rightarrow C\) defined by \(\Phi(r(x)) = r(x)\big(a(x), b(x)\big)\), so that \(C \cong R/\ker(\Phi)\).
\vskip 3pt We show that \(\ker(\Phi)\) is the ideal generated by \(\frac{x^n-1}{f(x)}\). As \(f(x)\) divides both \(a(x)\) and \(b(x)\), we have \(\frac{x^n-1}{f(x)}\,a(x) \equiv 0 \equiv \frac{x^n-1}{f(x)}\,b(x) \mod (x^n - 1)\), and hence \(\big(\frac{x^n-1}{f(x)}\big) \subseteq \ker(\Phi)\).
\vskip 5pt For the other side, we take \(r(x) \in \ker(\Phi)\), so that \(r(x)a(x) \equiv 0 \equiv r(x)b(x) \mod (x^n - 1)\). The first congruence forces \(\frac{x^n-1}{\gcd(a(x),\,x^n-1)}\) to divide \(r(x)\), and the second forces \(\frac{x^n-1}{\gcd(b(x),\,x^n-1)}\) to divide \(r(x)\); hence their least common multiple also divides \(r(x)\). Since \(\gcd\big(\gcd(a(x),x^n-1),\gcd(b(x),x^n-1)\big)=f(x)\), this least common multiple equals \(\frac{x^n-1}{f(x)}\). Therefore, \(\ker(\Phi) \subseteq \big(\frac{x^n-1}{f(x)}\big)\).
\vskip 5pt Thus, \(\ker(\Phi)=\big(\frac{x^n-1}{f(x)}\big)\) and \(C \cong R/\big(\frac{x^n-1}{f(x)}\big)\), whose dimension over \(\mathbb{F}_q\) is \(\deg\big(\frac{x^n-1}{f(x)}\big) = n - \deg(f(x))\). Hence, \(\dim(C) = n - \deg(f(x))\).\end{proof}
\begin{remark} {\em There are no non-trivial dual-containing one-generator quasi-cyclic codes. This is because the dimension of a one-generator QC code of length $2n$ and index $2$ is given by \(\dim(C) = n - \deg(f(x))\). Consequently, the dimension of the dual code is \(n + \deg(f(x))\). For a code to be dual-containing, the dimension of the dual code must be less than or equal to the dimension of the original code, hence \(n - \deg(f(x)) \ge n + \deg(f(x))\), which implies \(\deg(f(x)) \le 0\).\hfill$\square$ }\end{remark}
\section{Two-generator QC codes} In this section, we present two-generator QC codes, along with the necessary and sufficient conditions for self-orthogonality and the dual-containing property. In the earlier section, we discussed one-generator QC codes of the form $(a_1(x), b_1(x))$, where $a_1(x)=r_1(x)g_1(x), b_1(x)=r_2(x)g_2(x)$ such that $g_i(x) \mid x^n - 1$ and $r_i(x) \in R$ for $i = 1, 2$.
Similarly, we now introduce two-generator QC codes, where the generators are of the form $(a_1(x), b_1(x))$ and $(a_2(x), b_2(x))$. These generators are defined as follows: \begin{align}\label{2g1} a_1(x)=t_1(x)g_1(x),~~~ b_1(x)=t_2(x)g_2(x),~~~ a_2(x)=t_3(x)g_3(x),~~~ b_2(x) = t_4(x)g_4(x), \end{align} where \(a_i(x), b_i(x) \in R\), $g_j(x) \mid x^n - 1$ and $t_j(x) \in R$ for $j = 1, 2, 3, 4$.
\vskip 5pt In this two-generator case, handling four factors \( g_i(x) \) of \( x^n-1 \) and four other polynomials \( t_i(x) \in R \) can be quite involved. Therefore, some special forms have been considered for study. For example, in \cite{Gal}, \cite{Q2}, and \cite{Q4}, respectively, the generators are considered as follows: \[ \begin{pmatrix} f(x) & h(x)f(x)\\ 0 & g(x) \end{pmatrix}, ~~\begin{pmatrix} g_1(x) & g_1(x)\\ g_2(x) & u(x)g_2(x) \end{pmatrix},~~\mbox{and}~~ \begin{pmatrix} v(x)g_1(x) & g_1(x)\\ g_2(x) & v(x)g_2(x) \end{pmatrix}. \]
\vskip 5pt In this work, we aim to present a self-orthogonality and a dual-containing condition that apply to any two-generator QC code and can be viewed as a continuation of the one-generator case study. To achieve this, we consider the generator matrix corresponding to the generator \((a_1(x), b_1(x))\) as \(G_1 = (A_1 \mid B_1)\) and for \((a_2(x), b_2(x))\) as \(G_2 = (A_2 \mid B_2)\), where \(A_i\) are circulant matrices generated by the polynomial \(a_i(x)\) for \(i=1,2\), and \(B_i\) are circulant matrices generated by the polynomial \(b_i(x)\) for \(i=1,2\). A generator matrix of the two-generator QC code is then constructed as follows: \begin{align}\label{2gm} G =\begin{pmatrix} G_1\\ G_2 \end{pmatrix}= \begin{pmatrix} A_1 & B_1\\ A_2 & B_2 \end{pmatrix}. \end{align}
Next, we give the dimension formula for a two-generator QC code of length \(2n\) and index \(2\) over \(\mathbb{F}_q\).
\begin{theorem}\label{dim2} Let \( C \) be a two-generator QC code of length \(2n\) and index \(2\) over \(\mathbb{F}_q\), with generator matrix \( G \) of the form $(\ref{2gm})$ given by \[ G =\begin{pmatrix} G_1\\ G_2 \end{pmatrix}.\] Then, $\dim(C) = \rank(G) = \rank(G_1) + \rank(G_2) - \dim(\text{row space}(G_1) \cap \text{row space}(G_2)).$ \end{theorem}
\begin{remark}{\em The result of Theorem \ref{dim2} can also be expressed as \[\dim(C) = \dim(C_1) + \dim(C_2) - \dim(C_1 \cap C_2),\] where we can think of \( C_i \) as the one-generator QC code generated by \( G_i \), for \( i = 1, 2 \). We observed that \[\dim(C_i) = n - \deg(f_i(x)),~\mbox{where}~f_i(x)=\gcd\big(a_i(x), b_i(x), x^n - 1\big),~\mbox{for} ~i=1,2.\] So far, we have been unable to establish a corresponding degree formula for \(\dim(C_1 \cap C_2)\) using the polynomials considered in this study. Addressing this issue likely demands further investigation and a more detailed exploration of the polynomials, which we plan to undertake in a future project concentrating on the explicit dual construction of two-generator QC codes. \hfill$\square$ }\end{remark}
\subsection{Self-orthogonal QC codes} \begin{theorem}\label{gproof} Let \(C\) be a two-generator QC code of length \(2n\) and index \(2\) generated by \((a_1(x), b_1(x))\) and \((a_2(x), b_2(x))\), where \(a_i(x), b_i(x) \in R\) and are of the form $(\ref{2g1})$ for \(i = 1, 2\). A generator matrix of this QC code \(C\) is of the form $(\ref{2gm})$.
Then \(C \subseteq C^{\perp_s}\) if and only if the following conditions hold: \[ A_1 B_1^t = B_1 A_1^t, \quad A_2 B_2^t = B_2 A_2^t, \quad \text{and} \quad A_1 B_2^t = B_1 A_2^t, \] where \(A_i^t\) and \(B_i^t\) denote the transposes of \(A_i\) and \(B_i\), respectively, for $i=1,2$. \end{theorem}
\begin{proof} By Theorem \ref{mt}, $C$ is a symplectic self-orthogonal code if and only if $G\Omega G^t $ is a zero matrix. Here $C$ is a two-generator QC code of length $2n$ over $\mathbb F_q$ whose generator matrix $G$ is of the form $(\ref{2gm})$. Then \begin{align*} G \Omega G^t &= \begin{pmatrix} A_1 & B_1 \\ A_2 & B_2 \end{pmatrix} \begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix} \begin{pmatrix} A_1^t & A_2^t \\ B_1^t & B_2^t \end{pmatrix} \\ &= \begin{pmatrix} -B_1 & A_1 \\ -B_2 & A_2 \end{pmatrix} \begin{pmatrix} A_1^t & A_2^t \\ B_1^t & B_2^t \end{pmatrix} \\ &= \begin{pmatrix} -B_1A_1^t + A_1B_1^t & -B_1A_2^t + A_1B_2^t\\ -B_2A_1^t + A_2B_1^t & -B_2A_2^t + A_2B_2^t \end{pmatrix} . \end{align*} Therefore, \[G \Omega G^t = \begin{pmatrix} -B_1A_1^t + A_1B_1^t & -B_1A_2^t + A_1B_2^t\\ -B_2A_1^t + A_2B_1^t & -B_2A_2^t + A_2B_2^t \end{pmatrix} = \begin{pmatrix} O_n & O_n\\ O_n & O_n \end{pmatrix}.\] By noting that \( -B_1 A_2^t + A_1 B_2^t = -\big(-B_2 A_1^t + A_2 B_1^t\big)^t \), and comparing both sides, we obtain the result. \end{proof}
\begin{remark}{\em Using ideas from Theorem \ref{t1}, we can consider the conditions \( A_1 B_1^t = B_1 A_1^t \) and \( A_2 B_2^t = B_2 A_2^t \) as the symplectic self-orthogonality conditions for the two constituent one-generator QC codes \((a_1(x), b_1(x))\) and \((a_2(x), b_2(x))\), respectively. Additionally, the condition \( A_1 B_2^t = B_1 A_2^t \) imposes symplectic orthogonality between the two constituent one-generator QC codes \((a_1(x), b_1(x))\) and \((a_2(x), b_2(x))\), which together generate the two-generator QC code. \hfill$\square$}\end{remark}
Similar to the one-generator case, we also present an alternative criterion for the symplectic self-orthogonality condition in terms of the generator polynomials for the two-generator QC codes.
\begin{theorem} Let $C$ be a two-generator QC code of length $2n$ and index $2$ generated by $(a_1(x),b_1(x))$ and $(a_2(x),b_2(x))$, where \(a_i(x), b_i(x) \in R\) and are of the form $(\ref{2g1})$ for \(i = 1, 2\). Then $C\subseteq C^{\perp_s}$ if and only if $a_1(x)\overline{b}_1(x)- b_1(x)\overline{a}_1(x) \equiv 0 \mod (x^n - 1)$, $a_2(x)\overline{b}_2(x)-b_2(x)\overline{a}_2(x) \equiv 0 \mod (x^n - 1)$ and $a_1(x)\overline{b}_2(x)-b_1(x)\overline{a}_2(x) \equiv 0 \mod (x^n - 1)$. \end{theorem}
\begin{proof} This proof follows a similar line of argument to the proof of Theorem \ref{prf}. \end{proof}
\subsection{Dual-containing QC codes} We have examined the symplectic self-orthogonality condition \( C \subseteq C^{\perp_s} \) for two-generator QC codes. Similarly, we can derive a necessary and sufficient condition for the symplectic dual-containing property \( C^{\perp_s} \subseteq C \). Before proceeding, we need the following result.
\begin{theorem}\label{mt2} Let \( C \) be a linear code of length \( 2n \) over \( \mathbb{F}_q \) with a parity-check matrix \( H \). Suppose \( H \) is an \( m \times 2n \) matrix. Then \( C \) is a symplectic dual-containing code if and only if \( H \Omega H^t = O_m \), where \( O_m \) is the \( m \times m \) zero matrix and \( H^t \) denotes the transpose of \( H \).
\end{theorem} \begin{proof} Let us assume \( C \) is a symplectic dual-containing code, which means \( C^\perp_s \subseteq C \). This gives us: \begin{align*} C^\perp_s \subseteq C & \Longleftrightarrow \forall x \in C^\perp_s, \quad x \in C\\ & \Longleftrightarrow \forall x \in C^\perp_s, \quad H\Omega x^t = 0\\ & \Longleftrightarrow \text{for all rows } r \text{ of } H, \quad H\Omega r^t = 0\\ & \Longleftrightarrow H\Omega H^t = 0. \end{align*} \end{proof} To determine the dual-containing property of two-generator QC codes, we need to construct a parity-check matrix. Our objective is to start with a generator matrix \( G \) of a two-generator QC code of length \( 2n \) and index \( 2 \) in the form \((\ref{2gm})\). We consider circulant matrices \( P_i \) generated by the polynomial \( p_i(x) \) for \( i = 1, 2 \), and circulant matrices \( Q_i \) generated by the polynomial \( q_i(x) \) for \( i = 1, 2 \) to form a parity-check matrix \( H \) of the form: \begin{align} \label{2pm} H = \begin{pmatrix} P_1 & Q_1 \\ P_2 & Q_2 \end{pmatrix}, \end{align} \noindent such that \( G\Omega H^T = O_{2n} \), where \( O_{2n} \) denotes the \( 2n \times 2n \) zero matrix. \vskip 5pt By solving \( G\Omega H^T = O_{2n} \), we derive the following equations: \begin{align*} A_1 \cdot Q_1^T &= B_1 \cdot P_1^T \\ A_1 \cdot Q_2^T &= B_1 \cdot P_2^T \\ A_2 \cdot Q_1^T &= B_2 \cdot P_1^T \\ A_2 \cdot Q_2^T &= B_2 \cdot P_2^T. \end{align*} The generator matrix \( G \) in the form \((\ref{2gm})\) and the parity-check matrix \( H \) in the form \((\ref{2pm})\) may not always have full rank. Consequently, \( H \) does not always generate the dual QC code of \( C \). The condition \( G\Omega H^T = O_{2n} \) indicates that if a two-generator QC code \( C \) is generated by the matrix \( G \) in the form \((\ref{2gm})\), and another two-generator QC code \( C' \) is generated by the matrix \( H \) in the form \((\ref{2pm})\), then all codewords of \( C \) are orthogonal to those of \( C' \). However, \( C' \) is not always equal to \( C^\perp_s \), the symplectic dual of \( C \). If the matrix \( H \) satisfies \(\text{rank}(G) + \text{rank}(H) = 2n\), i.e., \( \dim(C^\perp_s) + \dim(C') =2n\), we can assert that \( H \) is the parity-check matrix of \( C \) that generates \( C^\perp_s \). \vskip 5pt \begin{example}\label{e1}{\em We consider $R=\mathbb{F}_3[x]/\langle x^{15}-1 \rangle$, and \[ x^{15}-1 = (x+2)^3(x^4 + x^3 + x^2 + x + 1)^3 \in \mathbb{F}_3[x]. \] We take \[g_1(x) = (x+2)(x^4 + x^3 + x^2 + x + 1), ~~~~g_2(x) = (x+2)^3(x^4 + x^3 + x^2 + x + 1),\] \[g_3(x) = (x^4 + x^3 + x^2 + x + 1)^2,~~\mbox{and}~~ g_4(x) = (x^4 + x^3 + x^2 + x + 1)\] such that $g_i(x) \mid x^n - 1$ for $i=1,2,3,4$. We also take $t_i(x) \in R$ for $i=1,2,3,4$ such that \begin{align*} t_1(x) &= 2x^{12} + 2x^{11} + 2x^{10} + 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2, \\ t_2(x) &= 2x^{11} + 2x^{10} + 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2, \\ t_3(x) &= 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2, \\ t_4(x) &= 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2. \end{align*} \noindent Then \(C\) is a two-generator QC code of length \(30\) and index \(2\) generated by \((a_1(x), b_1(x))\) and \((a_2(x), b_2(x))\), where $a_i(x) \equiv t_i(x)g_i(x) \mod (x^n - 1)$ for $i=1,2$ and $b_j(x) \equiv t_j(x)g_j(x) \mod (x^n - 1)$ for $j=3,4$. Then the generator matrix $G$ is of the form $(\ref{2gm})$. 
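As an aside, a generator matrix of the form $(\ref{2gm})$ can be assembled from its defining polynomials and its rank, and hence $\dim(C)$ by Theorem \ref{dim2}, computed with a short script. The Python sketch below is given for illustration only (our computations use MAGMA), and the toy polynomials at the end are hypothetical ones, not those of this example.
\begin{verbatim}
# Sketch: assemble G = [[A1, B1], [A2, B2]] from ascending coefficient
# lists modulo a prime q, and compute dim(C) = rank(G) over F_q.
def circulant(p, n, q):
    row = [(p[i] if i < len(p) else 0) % q for i in range(n)]
    return [row[-i:] + row[:-i] for i in range(n)]   # row i is x^i * p(x)

def hstack(A, B):
    return [ra + rb for ra, rb in zip(A, B)]

def rank_mod_p(M, p):                     # Gaussian elimination over F_p
    M, rank = [row[:] for row in M], 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        M[rank] = [(v * inv) % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

n, q = 4, 3                               # toy, hypothetical data
a1, b1 = [1, 1, 0, 0], [1, 0, 1, 0]       # a1 = 1 + x,  b1 = 1 + x^2
a2, b2 = [0, 1, 0, 0], [2, 0, 0, 1]       # a2 = x,      b2 = 2 + x^3
G = hstack(circulant(a1, n, q), circulant(b1, n, q)) \
  + hstack(circulant(a2, n, q), circulant(b2, n, q))
print(rank_mod_p(G, q))                   # dim(C) = rank(G), dimension theorem above
\end{verbatim}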
\vskip 3pt We consider $p_i(x) \equiv g^\perp_i(x)\overline{t}_i(x) \mod (x^n - 1)$ for $i=1,2$ and $q_j(x) \equiv g^\perp_j(x)\overline{t}_j(x) \mod (x^n - 1)$ for $j=3,4$. Then the parity-check matrix $H$ is of the form \((\ref{2pm})\). We can check that \( G \Omega H^T = O_{2n} \) and \(\text{rank}(G) + \text{rank}(H) = 2n\). Thus $H$ generates the symplectic dual of the QC code $C$. }\hfill$\square$ \end{example} \vskip 5pt Assuming we have a parity-check matrix for two-generator QC codes of length \(2n\) and index \(2\) that generate \( C^\perp_s \), we derive the necessary and sufficient condition for dual-containing two-generator QC codes of length \(2n\) and index \(2\) over \(\mathbb{F}_q\), expressed in terms of matrices. \begin{theorem}\label{1} Let \(C\) be a two-generator QC code of length \(2n\) and index \(2\) over $\mathbb F_q$. A parity-check matrix of this QC code \(C\) is of the form $(\ref{2pm})$. Then \(C^{\perp_s} \subseteq C\) if and only if the following conditions hold: \[ P_1 Q_1^t = Q_1 P_1^t, \quad P_2 Q_2^t = Q_2 P_2^t, \quad \text{and} \quad P_1 Q_2^t = Q_1 P_2^t, \] where \(P_i^t\) and \(Q_i^t\) denote the transposes of \(P_i\) and \(Q_i\), respectively, for \(i=1,2\). \end{theorem} \begin{proof} The proof follows a similar approach to the proof of Theorem \ref{gproof}. \end{proof} A necessary and sufficient condition for two-generator QC codes of length \(2n\) and index \(2\) over \(\mathbb{F}_q\) that contain their dual can be expressed in terms of polynomials as follows. \begin{theorem}\label{2} Let \(C\) be a two-generator QC code of length \(2n\) and index \(2\) over $\mathbb F_q$. A parity-check matrix of this QC code \(C\) is of the form $(\ref{2pm})$. Then \(C^{\perp_s} \subseteq C\) if and only if $ p_1(x)\overline{q}_1(x) - q_1(x)\overline{p}_1(x) \equiv 0 \mod (x^n - 1), p_2(x)\overline{q}_2(x) - q_2(x)\overline{p}_2(x) \equiv 0 \mod (x^n - 1),$ and $p_1(x)\overline{q}_2(x) - p_2(x)\overline{q}_1(x) \equiv 0 \mod (x^n - 1),$ where \(\overline{p}(x)\) denotes the transpose polynomial of \(p(x)\). \end{theorem} \begin{proof} This proof follows a similar line of arguments as the proof of Theorem \ref{prf}. \end{proof} \begin{example}{\em Continuing from Example \ref{e1}, we can demonstrate that the two-generator QC code $C$ described in Example \ref{e1} meets both the necessary and sufficient conditions for the dual-containing property as stated in Theorem \ref{1} and Theorem \ref{2}. Consequently, this code is a dual-containing QC code of length $2n$ and index $2$ over $\mathbb{F}_3$.\hfill$\square$ }\end{example} \begin{remark}{\em The duals of single-generator QC codes have been addressed in \cite{QC, iitr}. However, duals of two-generator quasi-cyclic codes pose significantly greater complexity, primarily due to the management of the eight polynomials in the generator matrix \( G \). This study aims to identify symplectic self-orthogonal and symplectic dual-containing codes without explicitly deriving the generators of the dual code. While minimum distance bounds for specific types of two-generator quasi-cyclic codes have been discussed in \cite{Gal, Q2}, establishing these bounds for general two-generator QC codes remains an open challenge. \hfill$\square$}\end{remark} \section{QECCs from QC codes} Most of the quantum codes that have been studied in the literature are primarily based on the well-known CSS structure \cite{CRSS98}. 
The study of quantum codes has also developed through the use of Hermitian and symplectic structures over \(\mathbb{F}_q\), where \(q\) is a prime power, as explored in \cite{A, Ket}. We recall the symplectic self-orthogonality result from \cite{Ket} and present a similar result corresponding to the dual-containing property.
\vskip 3pt Consider a linear code \( C \). To construct a quantum code from \( C \), it is necessary to satisfy the condition \( C \subseteq C^{\perp_s} \) or \( C^{\perp_s} \subseteq C \). The primary motivation of this paper is to establish necessary and sufficient conditions for efficiently constructing such linear codes, ensuring they possess the symplectic self-orthogonal or symplectic dual-containing property. Based on these two properties, we derive the corresponding results that we use to construct quantum codes from our study.
\begin{theorem} (\cite{0}) \label{Q} Let \( C \) be a linear code of length \( 2n \) over \( \mathbb{F}_q \) with parameters \([2n, k]\). If \( C \subseteq C^{\perp_s} \), then there exists a quantum error-correcting code \( Q \) with parameters \( [[n, n-k, d_s]] \) over \( \mathbb{F}_q \), where \( d_s = \min\{w_s(c) \mid c \in (C^{\perp_s} \setminus C)\} \). \end{theorem}
We can also state the above result in terms of the dual-containing property.
\begin{theorem}\label{dc} Let \( C \) be a linear code of length \( 2n \) over \( \mathbb{F}_q \) with parameters \([2n, k]\). If \( C^{\perp_s} \subseteq C \), then there exists a quantum error-correcting code \( Q \) with parameters \( [[n, k-n, d_s]] \) over \( \mathbb{F}_q \), where \( d_s = \min\{w_s(c) \mid c \in (C \setminus C^{\perp_s})\} \). \end{theorem}
\begin{proof} Let \( C \) be a linear code of length \( 2n \) over \( \mathbb{F}_q \) with parameters \([2n, k]\), such that \( C^{\perp_s} \subseteq C \). Consider \( D = C^{\perp_s} \), which is a linear code of length \( 2n \) over \( \mathbb{F}_q \) with parameters \([2n, 2n-k]\). Since \( D = C^{\perp_s} \), it follows that \( D^{\perp_s} = C \). Therefore, \( C^{\perp_s} \subseteq C \) implies \( D \subseteq D^{\perp_s} \).
\vskip 3pt By Theorem \ref{Q} applied to \( D \), there exists a quantum error-correcting code \( Q \) with parameters \( [[n, n-(2n-k), d_s]] \), which simplifies to \( [[n, k-n, d_s]] \), where \( d_s = \min \{ w_s(c) \mid c \in (C \setminus C^{\perp_s}) \} \). \end{proof}
For ease of computation, we primarily consider one-generator quasi-cyclic codes \( C \) of the form \((r_1(x)g(x), r_2(x)g(x))\), where \( g(x) \mid x^n - 1 \) and \( r_1(x), r_2(x) \in R \). We observe that a quantum code generated from a symplectic self-orthogonal quasi-cyclic code \( C \) of this form has dimension \(n-k = n-(n-\deg(g(x)))=\deg(g(x))\). The advantage of this form is that it allows us to fix the dimension of the quantum code in advance, so that it matches the dimension of the code whose parameters we want to improve. All computations are done using MAGMA \cite{Mag}.
\begin{example}{\em We consider \( q = 5 \) and \( n = 11 \). Then \( R = \mathbb{F}_5[x]/\langle x^{11} - 1 \rangle \). We take \( g(x) = 1 \) and two polynomials \( r_1(x), r_2(x) \in R \), where \[ r_1(x) = 4x^8 + 4x^7 + 4x^6 + 4x^5 + 4x^4 + 4x^3 + 4x^2 + 4x + 4, \] \[ r_2(x) = 4x^6 + 2x^5 + 4x^4 + 2x^3 + 4x^2. \] Next, we consider two circulant matrices of size 11, \( A \) and \( B \), generated by \( r_1(x) \) and \( r_2(x) \) over \( \mathbb{F}_5 \).
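For illustration (our computations use MAGMA), the condition $AB^t=BA^t$ of Theorem \ref{t1} can be checked for these two circulant matrices with a short Python sketch such as the following.
\begin{verbatim}
# Illustrative check of A B^t = B A^t over F_5 for the circulant
# matrices generated by r1(x) and r2(x) in this example.
n, q = 11, 5

def circulant(p, n, q):
    row = [(p[i] if i < len(p) else 0) % q for i in range(n)]
    return [row[-i:] + row[:-i] for i in range(n)]

def mul_t(X, Y, q):                 # X * Y^t mod q
    return [[sum(a * b for a, b in zip(rx, ry)) % q for ry in Y] for rx in X]

r1 = [4] * 9                        # 4 + 4x + ... + 4x^8
r2 = [0, 0, 4, 2, 4, 2, 4]          # 4x^2 + 2x^3 + 4x^4 + 2x^5 + 4x^6
A, B = circulant(r1, n, q), circulant(r2, n, q)
print(mul_t(A, B, q) == mul_t(B, A, q))   # expected: True, per this example
\end{verbatim}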
This \( C \) is a QC code of length $22$ of index $2$ whose generator matrix is \( G = (A \mid B) \), where \(\mid\) represents the horizontal concatenation of the two circulant matrices \( A \) and \( B \). We note that \( AB^t = BA^t \), which implies \( C \) is a symplectic self-orthogonal code with parameters \([22, 11, 8]\) over \( \mathbb{F}_5 \). Therefore, by Theorem \(\ref{Q}\), we obtain a QECC with parameters \([[11, 0, 6]]\), which are record-breaking parameters. The previous record was \([[11, 0, 5]]\). This newly constructed code has been already updated to the quantum code table \cite{gra}. \hfill$\square$}\end{example} \begin{example}{\em We consider \( q = 3 \) and \( n = 13 \). Then \( R = \mathbb{F}_3[x]/\langle x^{13} - 1 \rangle \). We take $g(x)\mid x^{13}-1$ and \( r_1(x), r_2(x) \in R \), where \[ g(x) = 2x^6 + x^5 + 2x^4 + x^3 + x^2 + x + 2, \] \[ r_1(x) = 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 1, \] \[ r_2(x) = 2x^6 + 2x^5 + 2x^4 + x^3 + 2x^2 + 2x + 2. \] Next, we consider two circulant matrices of size $13$, \( A \) and \( B \), generated by \( g(x)r_1(x) \) and \( g(x)r_2(x) \) over \( \mathbb{F}_3 \). Then \( C \) is a QC code of length $26$ with index $2$ whose generator matrix is \( G = (A \mid B) \), where \(\mid\) represents the horizontal concatenation of the two circulant matrices \( A \) and \( B \). We note that \( AB^t = BA^t \), which implies \( C \) is a symplectic self-orthogonal code with parameters \([26, 7, 12]\) over \( \mathbb{F}_3 \). By Theorem \(\ref{Q}\), we obtain a QECC with parameters \([[13, 6, 4]]\), which are record-breaking parameters. The previous record was \([[13, 6, 3]]\). This newly constructed code is also already updated to the online quantum code table \cite{gra}. \hfill$\square$}\end{example} \begin{example}{\em We consider \( q = 3 \) and \( n = 23 \). Then \( R = \mathbb{F}_3[x]/\langle x^{23} - 1 \rangle \). We take $g(x)\mid x^{23}-1$ and \( r_1(x), r_2(x) \in R \), where \[ g(x) = x^{12} + 2x^{11} + 2x^9 + x^8 + 2x^7 + x^6 + x^5 + x^3 + 1, \] \[ r_1(x) = 2x^{11} + 2x^{10} + 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2, \] \[ r_2(x) = 2x^{10} + 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + x^3 + 2x^2 + 1. \] Next, we consider two circulant matrices of size 23, \( A \) and \( B \), generated by \( g(x)r_1(x) \) and \( g(x)r_2(x) \) over \( \mathbb{F}_3 \). Then \( C \) is a QC code of length $46$ with index $2$ whose generator matrix is \( G = (A \mid B) \), where \(\mid\) represents the horizontal concatenation of the two circulant matrices \( A \) and \( B \). We note that \( AB^t = BA^t \), which implies \( C \) is a symplectic self-orthogonal code with parameters \([46, 11, 21]\) over \( \mathbb{F}_3 \). By Theorem \(\ref{Q}\), we obtain a QECC with parameters \([[23, 12, 5 ]]\), which are record-breaking parameters. The previous record was \([[23, 12, 4 ]]\). This newly constructed code already appears updated in the online quantum code table \cite{gra}. \hfill$\square$}\end{example} \begin{example}{\em We consider \( q = 3 \) and \( n = 16 \). Then \( R = \mathbb{F}_3[x]/\langle x^{16} - 1 \rangle \). We take $g(x)\mid x^{16}-1$ and \( r_1(x), r_2(x) \in R \), where \[ g(x) = 2x^6 + x^4 + 1, \] \[ r_1(x) = 2x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 1, \] \[ r_2(x) = 2x^9 + 2x^8 + x^7 + 2x^6 + x^5 + x^4 + 2x^3 + x. \] Next, we consider two circulant matrices of size $16$, \( A \) and \( B \), generated by \( g(x)r_1(x) \) and \( g(x)r_2(x) \) over \( \mathbb{F}_3 \). 
Then \( C \) is a QC code of length $32$ with index $2$ whose generator matrix is \( G = [A \mid B] \), where \(\mid\) represents the horizontal concatenation of the two circulant matrices \( A \) and \( B \). We note that \( AB^t = BA^t \), which implies \( C \) is a symplectic self-orthogonal code with parameters \([32, 10, 12]\) over \( \mathbb{F}_3 \). By Theorem \(\ref{Q}\), we obtain a QECC with parameters \([[16, 6, 5]]\), which are record-breaking parameters. The previous record was \([[16, 6, 4 ]]\). This newly constructed code is in online the quantum code table \cite{gra}. \hfill$\square$}\end{example} \begin{example}{\em By \cite[Theorem 6]{cal1996}, if a quantum code with parameters $ [[n,k,d]]$ exists then a quantum code with parameters $ [[n+1,k,d]]$ also exists, when $k > 0$. Therefore, from the above-constructed quantum code parameters $ [[ 16, 6, 5 ]]$, we get a quantum code with parameters $ [[ 17, 6, 5 ]]$ which is also new and breaks the previous record which is $ [[ 17, 6, 4 ]]$. This newly constructed code is in the online quantum code table \cite{gra}. \hfill$\square$}\end{example} \section{Conclusion and Future work} In this work, we study one-generator and two-generator quasi-cyclic (QC) codes over \(\mathbb{F}_q\), where \(q\) is a prime power. We present a necessary and sufficient condition for symplectic self-orthogonal one-generator quasi-cyclic codes. Based on this condition, we have constructed new quantum codes that set new records. Extending our study to two-generator QC codes over finite fields, we present necessary and sufficient conditions for both symplectic self-orthogonality and symplectic dual-containing properties. For each factor \(g(x)\) of \(x^n - 1\), we choose two polynomials \(r_1(x)\) and \(r_2(x)\) to construct a quantum code from the one-generator QC codes. We know that skew polynomial rings are not unique factorization domains; hence, any skew polynomial can have multiple factorizations over our standard commutative polynomial ring \(\mathbb{F}_q[x]\). This multiplicity increases the potential to find more factors and, consequently, more possibilities to construct codes. It will be interesting to study one-generator skew quasi-cyclic codes and apply our necessary and sufficient conditions to explore new record-breaking quantum codes. \section*{Acknowledgement} T. Bag's work is funded by the European Research Council (ERC Grant AlgoQIP, Agreement No. 851716). T. Bag also acknowledges support from a government grant administered by the Agence Nationale de la Recherche under the Plan France 2030, reference ANR-22-PETQ-0006. T. Bag is grateful to Prof. Markus Grassl for numerous discussions on quantum codes. D. Panario was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), reference number RPGIN-2018-05328. \begin{thebibliography}{99} \bibitem{QC} K. Abdukhalikov, T. Bag and D. Panario, One-generator quasi-cyclic codes and their dual codes, Discrete Math. 346(6), 113369, (2023). \bibitem{0} S. A. Aly, A. Klappenecker and P. K. Sarvepalli, On quantum and classical BCH codes, IEEE Trans. Inf. Theory 53(3), 1183--1188, (2007). \bibitem{A} A. Ashikhmin and E. Knill, Nonbinary quantum stabilizer codes, IEEE Trans. Inf. Theory 47, 3065--3072, (2001). \bibitem{Nuh1} N. Aydin, I. Siap and D. K. Ray-Chaudhuri, The structure of 1-generator quasi-twisted codes and new linear codes, Des. Codes Cryptogr. 24, 313--326, (2001). \bibitem{Mag} W. Bosma, J. Cannon, C. Fieker and A. 
Steel (eds.), Handbook of magma functions, Edition 2.19, 5488 pages, (2013). \bibitem{iitr} S. Benjwal, M. Bhaintwal, On the duals of quasi-cyclic codes and their application to quantum codes, Quantum Inf. Process 23, 113, (2024). \bibitem{cal1996} A. R. Calderbank and P. W. Shor, Good quantum error-correcting codes exist, Phys. Rev. A 54(2), 1098--1105, (1996). \bibitem{CRSS98} A. R. Calderbank, E. M. Rains, P. M. Shor and N. J. A. Sloane, Quantum error-correction via codes over $\text{GF}(4)$, IEEE Trans. Inf. Theory 44, 1369--1387, (1998). \bibitem{Con} J. Conan and G. Seguin, Structural properties and enumeration of quasi cyclic codes, AAECC 4, 25--39, (1993). \bibitem{Gal} C. Galindo, F. Hernando and R. Matsumoto, Quasi-cyclic constructions of quantum codes, Finite Fields their Appl. 52, 261--280, (2018). \bibitem{gra} M. Grassl, Bounds on the minimum distance of linear codes and quantum codes, Online available at http://www.codetables.de, Accessed on 2024-06-08. \bibitem{Q4} C. Guan, R. Li and L. Lu, New binary quantum codes constructed from quasi-cyclic codes, Int. J. Theor. Phys 61, 172, (2022). \bibitem{Q1} C. Guan, R. Li, J. Lv and Z. Ma, Symplectic self-orthogonal quasi-cyclic codes, ArXiv:2212.14225. \bibitem{Kas} T. Kasami, A Gilbert-Varshamov bound for quasi-cyclic codes of rate $\frac{1}{2}$, IEEE Trans. Inf. Theory 20, 679, (2018). \bibitem{Ket} A. Ketkar, A. Klappenecker, S. Kumar, and P. K. Sarvepalli, Nonbinary stabilizer codes over finite fields, IEEE Trans. Inf. Theory 52, 4892–-4914, (2006). \bibitem{La} K. Lally and P. Fitzpatrick, Algebraic structure of quasicyclic codes, Discrete Appl. Math. 11, 157--175, (2001). \bibitem{L1} S. Ling and P. Sol\'e, On the algebraic structure of quasi-cyclic codes I: finite fields, IEEE Trans. Inf. Theory 47(7), 2751--2760, (2001). \bibitem{L5} S. Ling and P. Sol\'e, Good self-dual quasi-cyclic codes exist, IEEE Trans. Inf. Theory 49(4), 1052--1053, (2003). \bibitem{L2} S. Ling and P. Sol\'e, On the algebraic structure of quasi-cyclic codes II: chain rings, Des. Codes Cryptogr. 30, 113--130, (2003). \bibitem{L3} S. Ling and P. Sol\'e, On the algebraic structure of quasi-cyclic codes III: generator theory, IEEE Trans. Inf. Theory 51(5), 2692--2700, (2005). \bibitem{L4} S. Ling and P. Sol\'e, On the algebraic structure of quasi-cyclic codes IV: repeated roots, Des. Codes Cryptogr. 38, 337--361, (2006). \bibitem{Q3} J. Lv, R Li and J. Wang, New binary quantum codes derived from one-generator quasi-cyclic codes, IEEE Access 7, 85782--85785, (2019). \bibitem{Q2} J. Lv, R. Li and J. Wang, An explicit construction of quantum stabilizer codes from quasi-cyclic codes, IEEE Commun. Lett. 24(5), 1067--1071, (2020). \bibitem{Se} G. E. Seguin, A class of 1-generator quasi-cyclic codes, IEEE Trans. Inf. Theory 50, 1745--1753, (2004). \bibitem{St} A. M. Steane, Simple quantum error-correcting codes, Phys. Rev. A 54, 4741--4751, (1996). \bibitem{S95} P. W. Shor, Scheme for reducing decoherence in quantum memory, Phys. Rev. A 52, 2493--2496, (1995). \bibitem{xu2022} H. Xu and W. Du, On some binary symplectic self-orthogonal codes, AAECC 33, 321--337, (2022). \end{thebibliography} \end{document}
2412.13675v1
http://arxiv.org/abs/2412.13675v1
On the algebraic structure of the Schröder monoid
\UseRawInputEncoding \documentclass[10pt]{article} \oddsidemargin 0 cm \evensidemargin 0 cm \textwidth 16.9 cm \textheight 22.0 cm \usepackage{relsize} \usepackage[dvips]{color} \usepackage{epsfig} \usepackage{float,amsthm,amssymb,amsfonts} \usepackage{ amssymb,amsmath,graphicx, amsfonts, latexsym} \usepackage{xcolor} \begin{document} \theoremstyle{plain} \newtheorem{theorem}{{\bf Theorem}}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{defn}{Definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \def\im{\mathop{\rm Im}\nolimits} \def\dom{\mathop{\rm Dom}\nolimits} \def\rank{\mathop{\rm rank}\nolimits} \def\nullset{\mbox{\O}} \def\ker{\mathop{\rm ker}\nolimits} \def\implies{\; \Longrightarrow \;} \def\GR{{\cal R}} \def\GL{{\cal L}} \def\GH{{\cal H}} \def\GD{{\cal D}} \def\GJ{{\cal J}} \def\set#1{\{ #1\} } \def\z{\set{0}} \def\Sing{{\rm Sing}_n} \def\nullset{\mbox{\O}} \title{On the algebraic structure of the Schr\"{o}der monoid} \author{\bf Muhammad Mansur Zubairu\footnote{Corresponding Author. ~~Email: \emph{[email protected]}}, Abdullahi Umar and Fatma Salim Al-Kharousi \\ \it\small Department of Mathematics, Bayero University Kano, P. M. B. 3011, Kano, Nigeria\\ \it\small \texttt{[email protected]}\\[3mm] \it\small Department of Mathematical Sciences,\\ \it\small Khalifa University, P. O. Box 127788, Sas al Nakhl, Abu Dhabi, UAE\\ \it\small \texttt{[email protected]}\\[3mm] \it\small Department of Mathematics,\\ \it\small College of Science,\\ \it\small Sultan Qaboos University.\\ \it\small \texttt{[email protected]}} \date{\today} \maketitle\ \begin{abstract} Let $[n]$ be a finite chain $\{1, 2, \ldots, n\}$, and let $\mathcal{LS}_{n}$ be the semigroup consisting of all isotone and order-decreasing partial transformations on $[n]$. Moreover, let $\mathcal{SS}_{n} = \{\alpha \in \mathcal{LS}_{n} : \, 1 \in \textnormal{Dom } \alpha\}$ be the subsemigroup of $\mathcal{LS}_{n}$, consisting of all transformations in $\mathcal{LS}_{n}$ each of whose domain contains $1$. For $1 \leq p \leq n$, let $K(n,p) = \{\alpha \in \mathcal{LS}_{n} : \, |\im \, \alpha| \leq p\}$ and $M(n,p) = \{\alpha \in \mathcal{SS}_{n} : \, |\im \, \alpha| \leq p\}$ be the two-sided ideals of $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$, respectively. Furthermore, let ${RLS}_{n}(p)$ and ${RSS}_{n}(p)$ denote the Rees quotients of $K(n,p)$ and $M(n,p)$, respectively. It is shown in this article that for any $S \in \{\mathcal{SS}_{n}, \mathcal{LS}_{n}, {RLS}_{n}(p), {RSS}_{n}(p)\}$, $S$ is abundant and idempotent generated for all values of $n$. Moreover, the ranks of the Rees quotients ${RLS}_{n}(p)$ and ${RSS}_{n}(p)$ are shown to be equal to the ranks of the two-sided ideals $K(n,p)$ and $M(n,p)$, respectively. Finally, these ranks are computed to be $\sum\limits_{k=p}^{n} \binom{n}{k} \binom{k-1}{p-1}$ and $\binom{n-1}{p-1}2^{n-p}$, respectively. \end{abstract} \emph{2020 Mathematics Subject Classification. 20M20.}\\ \textbf{Keywords:} Isotone maps, Order decreasing, abundant semigroup, Rank properties \section{Introduction and Preliminaries} For a natural number $n$, denote $[n]$ to be the finite chain $\{1,2, \ldots ,n\}$. 
A map $\alpha$ with its domain and range being subsets of $[n]$ (or with the domain being the entire set $[n]$ and the range being a subset of $[n]$) is referred to as a \emph{partial} \emph{transformation} (resp., \emph{full transformation}). The notations $\mathcal{P}_{n}$ and $\mathcal{T}_{n}$ usually represent \emph{the semigroups of all partial and full transformations}, respectively. A transformation $\alpha\in \mathcal{P}_{n}$ is said to be an \emph{ isotone} map (resp., an \emph{anti-tone} map) if (for all $x,y \in \dom\,\alpha$) $x\leq y$ implies $x\alpha\leq y\alpha$ (resp., $x\alpha\geq y\alpha$); \emph{order decreasing} if (for all $x\in \dom\,\alpha$) $x\alpha\leq x$. The notations $\mathcal{DP}_n$ and $\mathcal{OP}_n$ shall denote \emph{the semigroup of order-decreasing partial transformations} on $[n]$ and \emph{the semigroup of all isotone partial transformations} on $[n]$, respectively. As in \cite{auc}, we shall refer to $\mathcal{PC}_{n}$ (\emph{semigroup of all isotone order-decreasing partial transformation} on $[n]$) as the \emph{large} \emph{Schr\"{o}der} monoid and we shall denote it as: \begin{equation}\label{qn111}\mathcal{LS}_{n}= \mathcal{OP}_n\cap \mathcal{DP}_n .\end{equation} \noindent These monoids have been extensively studied in various contexts, see for example \cite{zua, gu1, gm, al1, al2, al3, al4, al5}. The composition of two elements $\alpha $ and $\gamma$ in $\mathcal{P}_{n}$ is defined as $x(\alpha\circ\gamma)=((x)\alpha)\gamma$ for all $x\in\dom\, \alpha$. Without ambiguity, we shall be using the notation $\alpha\gamma$ to denote $\alpha\circ\gamma$. We shall also use the notations $1_{[n]}$, $\im \alpha$, $\dom \alpha$, $h(\alpha)=|\im \, \alpha|$ to denote the identity map on $[n]$, the image set of a map $\alpha$, the domain set of the map $\alpha$ and the height of $\alpha$, respectively. Furthermore, let $P$ denote a linearly ordered partition of $[n]$ in the sense that, for any two sets $A$ and $B$ in $P$, we write $A<B$ if each element in $A$ is less than every element in $B$. Now let \begin{equation}\label{qn1} \mathcal{SS}_{n} = \{\alpha \in \mathcal{LS}_{n} : 1 \in \textnormal{Dom } \alpha \} \end{equation} \noindent be the set of all maps in $\mathcal{LS}_{n}$ each of whose domain contains 1 and \begin{equation}\label{qn2} \mathcal{SS}^{\prime}_n = \{\alpha \in \mathcal{LS}_{n} : 1 \notin \text{Dom } \alpha\} \end{equation} \noindent be the set of all maps in $\mathcal{LS}_{n}$ each of whose domain do not contains 1. In other words, $\mathcal{SS}^{\prime}_n$ is the set complement of $\mathcal{SS}_{n}$. The monoid $\mathcal{LS}_{n}$ first appeared in Ganyushkin and Mazorchuk \cite {gmv}, where it was shown that it is idempotent-generated. Moreover, the combinatorial properties of the semigroup have been explored in \cite{al3}, where it was shown that the size (or order) of $\mathcal{LS}_{n}$ corresponds to the \emph{large} (or \emph{double}) \emph{Schr\"{o}der number}: \[s_{0}=1, \quad s_{n}= \frac{1}{n+1} \sum\limits_{r=0}^{n}\binom{n+1}{n-r}\binom{n+r}{r} \quad (n\geq 1).\] The set $\mathcal{SS}_{n}$ and its complement $\mathcal{SS}_{n}^{\prime}$ were initially introduced by Laradji and Umar \cite{al5}, who showed that both are subsemigroups of $\mathcal{LS}_{n}$. 
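The order of $\mathcal{LS}_{n}$ is easy to confirm for small $n$ by brute force; the following Python sketch (given only for illustration) enumerates all isotone and order-decreasing partial transformations on $[n]$ and compares the count with the formula above.
\begin{verbatim}
# Brute-force sketch (illustration only): count the isotone,
# order-decreasing partial transformations on [n] and compare with the
# large Schroeder number formula quoted above.
from itertools import combinations, product
from math import comb

def large_schroeder(n):
    return sum(comb(n + 1, n - r) * comb(n + r, r)
               for r in range(n + 1)) // (n + 1)

def count_LS(n):
    total = 0
    for k in range(n + 1):
        for dom in combinations(range(1, n + 1), k):
            # order-decreasing: the image of x lies in {1, ..., x}
            for img in product(*(range(1, x + 1) for x in dom)):
                # isotone: images are non-decreasing along the domain
                if all(img[i] <= img[i + 1] for i in range(k - 1)):
                    total += 1
    return total

for n in range(1, 6):
    print(n, count_LS(n), large_schroeder(n))   # the two counts agree
\end{verbatim}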
Interestingly, these two semigroups were found to have the same size, which coincides with the (\emph{small}) \emph{Schr\"{o}der number}: \[s_{n}= \frac{1}{2(n+1)} \sum\limits_{r=0}^{n}\binom{n+1}{n-r}\binom{n+r}{r}.\] As in \cite{al5}, we shall refer to the semigroup $\mathcal{SS}_{n}$, as the \emph{small} \emph{Schr\"{o}der} monoid. Moreover, for $1\le p\le n$, let \begin{equation} \label{kn} K(n,p)=\{\alpha\in \mathcal{LS}_{n}: \, |\im \, \alpha|\le p\}\end{equation} \noindent and \begin{equation}\label{mn} M(n,p)=\{\alpha\in \mathcal{SS}_{n}: \, |\im \, \alpha|\le p\}\end{equation} \noindent be the two sided ideals of $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$, respectively, consisting of all decreasing isotone maps with a height of no more than $p$. Furthermore, for $p\geq 1$, let \begin{equation}\label{knn} {RLS}_{n}(p)= K(n,p)/ K(n, p-1) \end{equation} \noindent be the Rees quotient semigroup of $K(n,p)$, and for $p\geq 2$ \begin{equation}\label{mnn} {RSS}_{n}(p)= M(n,p)/M(n, p-1) \end{equation} \noindent be the Rees quotient semigroup of $M(n,p)$. The elements of ${RLS}_{n}(p)$ (or ${RSS}_{n}(p)$) can be considered as the elements of $\mathcal{LS}_{n}$ (or $\mathcal{SS}_{n}$) of exactly height $p$. The product of two elements of ${RLS}_{n}(p)$ (or ${RSS}_{n}(p)$) is $0$ if their product in ${RLS}_{n}(p)$ (or ${RSS}_{n}(p)$) has a height strictly less than $p$, otherwise it is in ${RLS}_{n}(p)$ (or ${RSS}_{n}(p)$). The algebraic and rank properties of these subsemigroups have not been studied to our knowledge, see [\cite{al5}, Remark 4.1]. In this paper we are going to study certain algebraic and rank properties of these semigroups. For more details about basic terms and concepts in semigroup theory, see the books of Howie \cite{howi} and Higgins \cite{ph}. \indent Following the approach outlined in \cite{HRS}, every $\alpha\in \mathcal{LS}_{n} $ can be represented as \begin{equation}\label{1}\alpha=\begin{pmatrix}A_1&\ldots&A_p\\a_1&\ldots&a_p\end{pmatrix} \, (1\le p\le n),\end{equation} where $a_{i}\leq \min A_{i}$ for all $1\leq i\leq p$ and $A_i$ $(1\le i\le p)$ denote equivalence classes defined by the relation $\textnormal{ker }\alpha=\{(x, y)\in \dom \, \alpha\times \dom \, \alpha: \, x\alpha=y\alpha\}$, we shall denote this collection by $\textnormal{\bf Ker }\alpha=\{A_1, A_2, \ldots, A_p\}$. Furthermore, $\textnormal{\bf Ker }\alpha$ is linearly ordered (i.e., for $i<j$, $A_{i}<A_{j}$ if and only if $a<b$ for all $a\in A_{i}$ and $b\in A_{j}$). Moreover, we may without loss of generality assume that $1\leq a_{1}<a_{2}<\ldots<a_{p}\leq n$, since $\alpha$ is an isotone map. It is important to mention that the domain of each element in $\mathcal{SS}_{n}$ contains $1$, in particular, $1\in A_{1}$, and so, each element in $\mathcal{SS}_{n}$ of height $1\leq p\leq n$ can be expressed as: \begin{equation} \label{eq3} \alpha = \begin{pmatrix}A_1&A_2&\ldots& A_p\\1&a_2&\ldots& a_p\end{pmatrix}. \end{equation} \section{Regularity, Green's relations and starred Green's relations} In a semigroup $S$, an element $a\in S$ is said to be \emph{regular} if there is $b$ in $S$ such that $a=aba$ and $S$ is said to be a \emph{regular semigroup} if every element of $S$ is regular. When faced with a new type of transformation semigroup, the initial algebraic inquiry typically involves determining the characteristics of its Green's equivalences. These relations are commonly utilized to categorize elements within a semigroup. 
For definition of these relations, we recommend that the reader consults Howie \cite{howi}. In semigroup theory, there are five Green's relations, namely $\mathcal{L,R,D , J\ \text{and } H}$. It is a known fact in finite semigroups that the relations $\mathcal{D }$ and $\mathcal{J}$ are equivalent (see [\cite{howi}, Proposition 2.1.4]). Therefore, we will focus on characterizing the relations $\mathcal{L,R,D \, \text{and } H}$ on the large and small Schr\"{o}der monoids $\mathcal{LS}_{n} \ \text{and } \mathcal{SS}_{n}$, respectively. From this point forward in this section, we shall refer to $\alpha$ and $\beta$ in $\mathcal{LS}_{n}$ as \begin{equation} \label{eqq3} \alpha = \begin{pmatrix}A_1&\ldots& A_p\\a_{1}&\ldots& a_p\end{pmatrix} \text{and} \ \beta = \begin{pmatrix} B_1 & \ldots & B_p \\ b_{1} & \ldots & b_p \end{pmatrix} \, (1\leq p\leq n) \end{equation} \noindent and $\alpha$ and $\beta$ in $\mathcal{SS}_{n}$ as \begin{equation} \label{eqq4} \alpha = \begin{pmatrix}A_1&A_2&\ldots& A_p\\ 1&a_2&\ldots& a_p\end{pmatrix} \text{and} \ \beta = \begin{pmatrix} B_1 & B_2 & \ldots & B_p \\ 1 & b_2& \ldots & b_p \end{pmatrix} \, (1\leq p\leq n). \end{equation} Now let $S\in \{\mathcal{LS}_{n}, \, \mathcal{SS}_{n} \}$. Then we have the following theorem. \begin{theorem}\label{l} Let $S\in \{\mathcal{LS}_{n}, \, \mathcal{SS}_{n} \}$ and let $\alpha,\beta \in S $ be as in \eqref{eqq3} or \eqref{eqq4}. Then $\alpha\mathcal{L}\beta$ if and only if $\im \, \alpha=\im \, \beta$ \emph{(}i.e., $a_i = b_i$ for $1\leq i\leq p$\emph{)} and $\min A_i = \min B_i$ for all $1\leq i\leq p$. \end{theorem} \begin{proof} The proof going forward resembles the proof in [\cite{umar}, Lemma 2.2.1(2)]. Conversely, suppose that $\im \, \alpha=\im \, \beta$ and $\min A_i = \min B_i$ for all $1\leq i\leq p$. Let $t_i = \min A_i$ and $h_i = \min B_i$ for $1 \le i\le p$. Now if $\alpha, \beta\in \mathcal{LS}_{n}$, then define $\gamma_{1}, \gamma_{2}$ as: \begin{equation} \gamma_1 = \begin{pmatrix}A_1&\ldots& A_p\\t_{1}&\ldots& t_p\end{pmatrix} \ \text{and } \gamma_{2} = \begin{pmatrix} B_1 & \ldots & B_p\\ h_{1} & \ldots & h_p \end{pmatrix}. \end{equation} \noindent If $\alpha, \beta\in \mathcal{SS}_{n}$, then we can use the definition of $\gamma_{1}, \gamma_{2}$ as above after substituting $t_{1}=1=h_{1}$. In both scenarios, it is evident that $\gamma_{1}, \gamma_{2} \ \in S$ and $\alpha = \gamma_{1}\beta,\ \beta = \gamma_{2}\alpha$. Thus, ($\alpha$,$\beta$) $\in \mathcal{L}$, as required. \end{proof} \begin{theorem}\label{r} Let $S\in \{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$. Then $S$ is $\mathcal{R}-$trivial. \end{theorem} \begin{proof} $\mathcal{LS}_{n}$ is known to be $\mathcal{R}$ trivial by [\cite{ph1}, Theorem 4.2] and so $\mathcal{SS}_{n}$ is $\mathcal{R}-$trivial follows from the fact that $\mathcal{LS}_{n}$ is $\mathcal{R}$ trivial and $\mathcal{R}(\mathcal{SS}_{n})\subseteq \mathcal{R}(\mathcal{LS}_{n})\cap (\mathcal{SS}_{n} \times \mathcal{SS}_{n}).$ \end{proof} As a consequence of the above theorem, we readily have the following corollaries. \begin{corollary} On the semigroup $S\in \{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$, $\mathcal{H} = \mathcal{R}$. \end{corollary} \begin{corollary}\label{rem1} Let $\alpha \in S\in \{\mathcal{LS}_{n}, \mathcal{SS}_{n}\}$. Then $\alpha$ is regular if and only if $\alpha$ is an idempotent. Hence, the semigroup $S \in \{\mathcal{LS}_{n}, \mathcal{SS}_{n}\}$ is nonregular. 
\end{corollary} \begin{proof} The result follows from the fact that in an $\mathcal{R}$-trivial semigroup, no nonidempotent element is regular. \end{proof}
\begin{theorem} On the semigroup $S\in \{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$, $ \mathcal{D} = \mathcal{L}$. \end{theorem}
\begin{proof} The result follows from the fact that $S$ is $\mathcal{R}$-trivial (Theorem \ref{r}) and that $\mathcal{D}=\mathcal{L}\circ \mathcal{R}$. \end{proof}
As a consequence of the three theorems above, we deduce the following characterizations of Green's equivalences on the semigroup $S$ in $\{{RSS}_{n}(p), \, {RLS}_{n}(p), \, M(n,p), \, K(n,p) \}$.
\begin{theorem} Let $S\in \{{RSS}_{n}(p), \, {RLS}_{n}(p), \, M(n,p), \, K(n,p) \}$ and let $\alpha, \, \beta \in S$ be as in \eqref{eqq3} or \eqref{eqq4}. Then \begin{itemize} \item[(i)] $\alpha \mathcal{L} \beta$ if and only if $\im \, \alpha = \im \, \beta$ \emph{(}i.e., $a_i = b_i$ for $1 \leq i \leq p$\emph{) }and $\min A_i = \min B_i$ for all $1 \leq i \leq p$; \item[(ii)] $S$ is $\mathcal{R}$-trivial; \item[(iii)] $\mathcal{H} = \mathcal{R}$; \item[(iv)] $\mathcal{D} = \mathcal{L}$.\end{itemize} Hence, for $p \geq 3$, the semigroup $S$ is nonregular. \end{theorem}
If a semigroup is not regular, it is customary to examine the starred Green's relations in order to classify the algebraic class to which it belongs. Therefore, we will now proceed to characterize the starred analogues of Green's equivalences on these semigroups. For the definitions of these relations, we refer the reader to Fountain \cite{FOUN2}. There are five starred Green's equivalences, namely: $\mathcal{L}^*$, $\mathcal{R}^*$, $\mathcal{D}^*$, $\mathcal{J}^*$, and $\mathcal{H}^*$. The relation $\mathcal{D}^*$ is the join of $\mathcal{L}^*$ and $\mathcal{R}^*$, while $\mathcal{H}^*$ is the intersection of $\mathcal{L}^*$ and $\mathcal{R}^*$. A semigroup $S$ is said to be \emph{left abundant} if each $\mathcal{L}^*$-class contains an idempotent; it is said to be \emph{right abundant} if each $\mathcal{R}^*$-class contains an idempotent; and it is said to be \emph{abundant} if each $\mathcal{L}^*$-class and each $\mathcal{R}^*$-class of $S$ contains an idempotent. These classes of semigroups were introduced by Fountain \cite{FOUN, FOUN2}. Many classes of transformation semigroups have been shown to be either left abundant, right abundant, or abundant; see for example \cite{al1, um,umar, quasi, ua3, zm1}. Before we characterize the starred Green's relations, we need the following definition and lemmas from \cite{quasi}: A subsemigroup $U$ of $S$ is called an \emph{inverse ideal} of $S$ if for all $u \in U$, there exists $u^{\prime} \in S$ such that $uu^{\prime}u = u$ and both $u^{\prime}u$ and $uu^{\prime}$ are in $U$.
\begin{lemma}[\cite{quasi}, Lemma 3.1.8.]\label{inv1} Every inverse ideal $U$ of a semigroup $S$ is abundant. \end{lemma}
\begin{lemma}[\cite{quasi}, Lemma 3.1.9.]\label{inv2} Let $U$ be an inverse ideal of a semigroup $S$. Then \begin{itemize} \item[(1)] $\mathcal{L}^{*} (U) = \mathcal{L}(S) \cap (U \times U)$; \item[(2)] $\mathcal{R}^{*}(U) = \mathcal{R}(S) \cap (U \times U)$; \item[(3)] $\mathcal{H}^{*}(U) = \mathcal{H}(S) \cap (U \times U).$\end{itemize} \end{lemma}
We now have the following result.
\begin{theorem}\label{inv} Let \(\mathcal{LS}_{n}\) be as defined in \eqref{qn111}. Then \(\mathcal{LS}_{n}\) is an inverse ideal of $\mathcal{P}_{n}$.
\end{theorem} \begin{proof} Let $\alpha\in \mathcal{LS}_{n}$ be as expressed in \eqref{1}, and let $t_{i}=\min A_{i}$ for all $1\leq i\leq p$. Now define $\alpha^{\prime}$ as: \[\alpha^{\prime}=\begin{pmatrix} a_1 & \ldots & a_p\\ t_1 & \ldots & t_p \end{pmatrix} .\] \noindent Clearly, $\alpha^{\prime}$ is in $\mathcal{P}_{n}$. Notice that: \begin{align*}\alpha\alpha^{\prime}\alpha &=\begin{pmatrix} A_1 & \ldots & A_p\\ a_1 & \ldots & a_p \end{pmatrix}\begin{pmatrix} a_1 & \ldots & a_p\\ t_1 & \ldots & t_p \end{pmatrix}\begin{pmatrix} A_1 & \ldots & A_p\\ a_1 & \ldots & a_p \end{pmatrix}\\&= \begin{pmatrix} A_1 & \ldots & A_p\\ a_1 & \ldots & a_p \end{pmatrix}=\alpha. \end{align*} \noindent Moreover, \[\alpha^{\prime}\alpha=\begin{pmatrix} a_1 & \ldots & a_p\\ t_1 & \ldots & t_p \end{pmatrix}\begin{pmatrix} A_1 & \ldots & A_p\\ a_1 & \ldots & a_p \end{pmatrix}=\begin{pmatrix} a_1 & \ldots & a_p\\ a_1 & \ldots & a_p \end{pmatrix}=\text{1}_{\im \, \alpha}\in \mathcal{LS}_{n},\]\noindent and also \[\alpha\alpha^{\prime}=\begin{pmatrix} A_1 & \ldots & A_p\\ a_1 & \ldots & a_p \end{pmatrix}\begin{pmatrix} a_1 & \ldots & a_p\\ t_1 & \ldots & t_p \end{pmatrix} =\begin{pmatrix} A_1 & \ldots & A_p\\ t_1 & \ldots & t_p \end{pmatrix}\in E(\mathcal{LS}_{n})\subset \mathcal{LS}_{n}.\] \noindent Thus, $\mathcal{LS}_{n}$ is an inverse ideal of $\mathcal{P}_{n}$, as required. \end{proof} \begin{remark}\label{gg} By letting $a_{1}=t_{1}=1$ in the above theorem and its proof, we deduce that $\mathcal{SS}_{n}$ is an inverse ideal of $\mathcal{P}_{n}$. \end{remark} Consequently, we have the following result. \begin{theorem} Let $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$ be as defined in \eqref{qn111} and \eqref{qn1}, respectively, and let $S\in \{ \mathcal{LS}_{n}, \mathcal{SS}_{n} \}$. Then $S$ is abundant. \end{theorem} \begin{proof} The result follows from Theorem \ref{inv} (resp., Remark \ref{gg}) and Lemma \ref{inv1}. \end{proof} \begin{theorem} \label{a1} Let $S\in \{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$. Then for $\alpha, \beta\in S$ we have: \begin{itemize} \item[(i)] $\alpha\mathcal{L}^*\beta$ if and only if $\im \alpha = \im \beta$; \item[(ii)] $\alpha\mathcal{R}^*\beta$ if and only if $\ker \alpha = \ker \beta$; \item[(iii)] $\alpha\mathcal{H}^*\beta$ if and only if $\alpha=\beta$; \item[(iv)] $\alpha\mathcal{D}^*\beta$ if and only if $|\im \alpha| = |\im \beta|$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)--(iii)] Items (i) and (ii) follow from Theorem \ref{inv}, Lemma \ref{inv2} and [\cite{howi}, Exercise 2.6.17], while (iii) follows from (i) and (ii) together with the fact that $\alpha$ and $\beta$ are isotone. \item[(iv)] Assume that $\alpha\mathcal{D}^{*}\beta$. Then, by [\cite{howi}, Proposition 1.5.11], there exist elements $\gamma_{1},\gamma_{2}, \ldots,\gamma_{2n-1}\in S$ such that $\alpha\mathcal{L}^{*}\gamma_{1}$, $\gamma_{1}\mathcal{R}^{*}\gamma_{2}$, $\gamma_{2}\mathcal{L}^{*}\gamma_{3},\ldots,$ $\gamma_{2n-1}\mathcal{R}^{*}\beta$ for some $n\in \mathbb{N}$. Consequently, from (i) and (ii), we deduce that $\im~\alpha=\im~\gamma_{1}$, ${\ker}~\gamma_{1}={\ker}~\gamma_{2}$, $\im~\gamma_{2}=\im~\gamma_{3},\ldots,$ $\ker~\gamma_{2n-1}=\ker~\beta$.
Now it follows that $|\im~\alpha|=|\im~\gamma_{1}|=|\dom~\gamma_{1}/ \ker~\gamma_{1}|=|\dom~\gamma_{2}/ \ker~\gamma_{2}|=\ldots=|\dom~\gamma_{2n-1}/ \ker~\gamma_{2n-1}|=|\dom~\beta/ \ker~\beta|=|\im~\beta|.$ Conversely, suppose that $|\im~\alpha|=|\im~\beta|$ where \begin{equation*}\label{2} \alpha=\left(\begin{array}{ccc} A_{1} & \ldots & A_{p} \\ a_{1} & \ldots & a_{p} \end{array} \right)\text{ and } \beta=\left(\begin{array}{ccc} B_{1} & \ldots & B_{p} \\ b_{1} & \ldots & b_{p} \end{array} \right).\end{equation*} Now define \begin{equation*}\label{2} \delta=\left(\begin{array}{ccc} A_{1} & \ldots & A_{p} \\ {1} & \ldots & {p} \end{array} \right)\text{ and } \gamma=\left(\begin{array}{ccc} B_{1} & \ldots & B_{p} \\ {1} & \ldots & {p} \end{array} \right).\end{equation*} \noindent Clearly, $\delta$ and $\gamma$ are in $S$. Notice that $\ker \, \alpha= \ker \, \delta$, $\im \, \delta=\im \, \gamma$ and $\ker \, \gamma=\ker \, \beta$. Thus by (i) and (ii) we see that $\alpha \mathcal{R}^{*} \delta \mathcal{L}^{*} \gamma \mathcal{R}^{*} \beta$. \noindent Similarly, define $\delta=\left(\begin{array}{ccc} n-p+{1} & \ldots & n \\ a_{1} & \ldots & a_{p} \end{array} \right)$ and $\gamma=\left(\begin{array}{ccc} n-p+1 & \ldots & n \\ b_{1} & \ldots & b_{p} \end{array} \right)$. Clearly, $\delta$ and $\gamma\in S$. Moreover, notice that $\im \, \alpha=\im \, \delta$, $\ker \, \delta= \ker \, \gamma$, $\im \, \gamma=\im \, \beta$. Thus by (i) and (ii) we have $\alpha \mathcal{L}^{*} \delta \mathcal{R}^{*} \gamma \mathcal{L}^{*}\beta$. Hence, by (\cite{howi}, Proposition 1.5.11) it follows that $\alpha\mathcal{D}^{*}\beta$. The proof is now complete. \end{itemize} \end{proof} \begin{lemma}\label{uaaaa} On the Schr\"{o}der monoids $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$ \emph{(}$n\geq 3$\emph{)}, we have $\mathcal{D}^{*}=\mathcal{R}^{*}\circ\mathcal{L}^{*}\circ\mathcal{R}^{*}=\mathcal{L}^{*}\circ\mathcal{R}^{*}\circ\mathcal{L}^{*}$. \end{lemma} \begin{proof} The sufficiency follows from the converse of the proof of (iv) in the above theorem, while for the necessity, we have to prove that $\mathcal{L}^{*}\circ\mathcal{R}^{*}\neq \mathcal{R}^{*}\circ\mathcal{L}^{*}$. Take \[\alpha=\left(\begin{array}{cc} 1 & 2 \\ {1} &2 \end{array} \right) \text{ and } \beta=\left(\begin{array}{cc} 1 & 3 \\ {1} &3 \end{array} \right). \] \noindent Now define $\delta=\left(\begin{array}{cc} 1 & 3 \\ {1} &2 \end{array} \right).$ Then clearly $\im \, \alpha=\im \, \delta$ and $\dom \, \delta=\dom \, \beta$, and so $\alpha \mathcal{L}^{*} \delta \mathcal{R}^{*}\beta$. i.e., $(\alpha, \beta)\in \mathcal{L}^{*} \circ \mathcal{R}^{*}$. On the other hand, if we have $(\alpha, \beta)\in \mathcal{R}^{*} \circ \mathcal{L}^{*}$, then there must exist $\gamma \in\mathcal{SS}_{n} \subseteq \mathcal{LS}_{n}$ such that $\alpha \mathcal{R}^{*} \gamma \mathcal{L}^{*}\beta$. However, this means that $\dom \, \alpha= \dom \, \gamma=\{1,2\}$ and $\im \, \gamma=\im \, \beta=\{1,3\}$, which is impossible. The result now follows. \end{proof} \begin{lemma}\label{uaaa} On the semigroups ${RLS}_{n}(p)$ and ${RSS}_{n}(p)$, we have $\mathcal{D}^{*}=\mathcal{R}^{*}\circ\mathcal{L}^{*}\circ\mathcal{R}^{*}=\mathcal{L}^{*}\circ\mathcal{R}^{*}\circ\mathcal{L}^{*}$. \end{lemma} \begin{proof} The proof is the same as the proof of the above lemma. \end{proof} As in \cite{FOUN2}, to define the relation $\mathcal{J}^{*}$ on a semigroup $S$, we first denote the $\mathcal{L}^{*}$-class containing the element $a\in S$ by $L^{*}_{a}$. 
(The corresponding notation can be used for the classes of the other relations.) A \emph{left} (resp., \emph{right}) $*$-\emph{ideal} of a semigroup $S$ is defined to be a \emph{left} (resp., \emph{right}) ideal $I$ of $S$ such that $L^{*}_{a} \subseteq I$ (resp., $R^{*}_{a} \subseteq I$), for all $a \in I$. A subset $I$ of $S$ is a $*$-ideal of $S$ if it is both a left and a right $*$-ideal. The \emph{principal $*$-ideal} $J^{*}(a)$ generated by the element $a\in S$ is defined to be the intersection of all $*$-ideals of $S$ to which $a$ belongs. The relation $\mathcal{J}^{*}$ is defined by the rule that $a \mathcal{J}^{*} b$ if and only if $J^{*}(a) = J^{*}(b)$. The next lemma is crucial to our investigation of the properties of $\mathcal{J}^{*}$ on the semigroup $S\in\{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$. \begin{lemma}[\cite{FOUN2}, Lemma 1.7]\label{jj} Let $a$ be an element of a semigroup $S$. Then $b \in J^{*}(a)$ if and only if there are elements $a_{0},a_{1},\ldots, a_{n}\in S$, $x_{1},\ldots,x_{n}, y_{1}, \ldots,y_{n} \in S^{1}$ such that $a = a_{0}$, $b = a_{n}$, and $(a_{i}, x_{i}a_{i-1}y_{i}) \in \mathcal{D}^{*}$ for $i = 1,\ldots,n.$ \end{lemma} As in \cite{ua}, we now have the following: \begin{lemma}\label{jjj} For $\alpha, \, \beta\in S\in\{\mathcal{LS}_{n}, \mathcal{SS}_{n} \}$, let $\alpha\in J^{*}(\beta)$. Then $|\im \, \alpha|\leq |\im \,\beta|$. \end{lemma} \begin{proof} Let $\alpha \in J^{*}(\beta)$. Then, by Lemma \ref{jj}, there exist $\beta_{0}, \beta_{1},\ldots, \beta_{n}\in S$ and $\gamma_{1}, \ldots, \gamma_{n}$, $\tau_{1}, \ldots, \tau_{n}\in S^{1}$ such that $\beta=\beta_{0}$, $\alpha=\beta_{n}$, and $(\beta_{i}, \gamma_{i}\beta_{i-1}\tau_{i})\in \mathcal{D}^{*}$ for $i =1,\ldots,n.$ Thus, by Theorem \ref{a1}(iv), this implies that \[|\im \,\beta_{i}|= |\im \, \gamma_{i}\beta_{i-1}\tau_{i}|\leq |\im \, \beta_{i-1}| ,\] \noindent so that \[|\im \, \alpha|\leq |\im \,\beta|,\] \noindent as required. \end{proof} \begin{lemma}\label{uaaaaa} On the large and small Schr\"{o}der monoids $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$, we have $\mathcal{J}^{*}=\mathcal{D}^{*}$. \end{lemma} \begin{proof} Notice that we only need to show that $\mathcal{J}^{*} \subseteq \mathcal{D}^{*}$ (since $\mathcal{D}^{*} \subseteq \mathcal{J}^{*}$). So, suppose that $(\alpha,\beta) \in \mathcal{J}^{*}$. Then $J^{*}(\alpha)=J^{*}(\beta)$, so that $\alpha\in J^{*}(\beta)$ and $\beta\in J^{*}(\alpha)$. However, by Lemma \ref{jjj}, this implies that \[|\im \, \alpha| \leq |\im \, \beta| \text{ and } |\im \, \beta| \leq |\im \, \alpha|,\] \noindent so that $|\im \, \alpha|= |\im \, \beta|$. Thus by Theorem \ref{a1}(iv), we have \[\mathcal{J}^{*} \subseteq \mathcal{D}^{*},\]\noindent as required. \end{proof} \begin{lemma}\label{un} On the semigroup $S$ in $\{\mathcal{LS}_{n}, \, \mathcal{SS}_{n}, \, {RSS}_{n}(p), \, {RLS}_{n}(p), \, M(n,p), \, K(n,p) \}$, every $\mathcal{R}^{*}$-class contains a unique idempotent. \end{lemma} \begin{proof} An idempotent must fix every element of its image, so an idempotent with kernel classes $A_{1},\ldots, A_{p}$ maps each $A_{i}$ to an element of $A_{i}$, and the order-decreasing property then forces that element to be $\min A_{i}$. Thus the kernel of $\alpha$ admits exactly one choice of images yielding an idempotent of $S$, and since each $\mathcal{R}^{*}$-class is determined by the kernel, every $\mathcal{R}^{*}$-class contains a unique idempotent.
\end{proof} \begin{remark}\begin{itemize} \item[(i)] It is now clear that, for each $1\le p \le n$, the number of $\mathcal{R}^{*}$-classes in $J^{*}_{p}=\{\alpha\in \mathcal{LS}_{n}: \, |\im \, \alpha|=p\}$ is equal to the number of all possible partial ordered partitions of $[n]$ into $p$ parts. This is equivalent to the number of $\mathcal{R}$-classes in $ \{\alpha\in \mathcal{OP}_n: \, |\im \, \alpha|=p\}$, which is known to be $\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}$ from [\cite{al3}, Lemma 4.1]. \item[(ii)] If $S\in \{{RSS}_{n}(p), \, {RLS}_{n}(p), \, M(n,p), \, K(n,p) \}$, then the characterizations of the starred Green's relations in Theorem \ref{a1} also hold in $S$. \end{itemize} \end{remark} Thus, the semigroup $K(n,p)$, like $\mathcal{LS}_{n}$, is the union of the $\mathcal{J}^{*}$-classes \[ J_{0}^{*}, \, J_{1}^{*}, \, \ldots, \, J_{p}^{*},\] where \[J_{p}^{*}=\{\alpha\in K(n,p): \, |\im \, \alpha|=p\}.\] Furthermore, $K(n,p)$ has $\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}$ $\mathcal{R}^{*}$-classes and $\binom{n}{p}$ $\mathcal{L}^{*}$-classes in $J^{*}_{p}$. Consequently, the Rees quotient semigroup ${RLS}_{n}(p)$ has $\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}+1$ $\mathcal{R}^{*}$-classes and $\binom{n}{p}+1$ $\mathcal{L}^{*}$-classes. (The additional $1$ comes, in each case, from the singleton class containing the zero element.) Now, let $J^{*}_{p}=\{\alpha\in \mathcal{SS}_{n}: \, h(\alpha)=p\}$. We compute the number of $\mathcal{R}^{*}$-classes in $J^{*}_{p}$ and the number of idempotents in $\mathcal{SS}_{n}$ in the lemmas below. \begin{lemma} For $1\leq p\leq n$, the number of $\mathcal{R}^{*}$-classes in $J^{*}_{p}$ is \[\sum\limits_{r=p}^{n}{\binom{n-1}{r-1}}{\binom{r-1}{p-1}}.\] \end{lemma} \begin{proof} Let $\alpha\in \mathcal{SS}_{n}$ be such that $h(\alpha)=p$ and $|\dom \, \alpha|=r$ for $p\leq r\leq n$. Next observe that, since $1\in \dom \, \alpha$, we can choose the remaining $r-1$ elements of $\dom \, \alpha$ from $[n]\setminus \{1\}$ in $\binom{n-1}{r-1}$ ways. Moreover, we can partition $\dom \, \alpha$ into $p$ convex (modulo $\dom \, \alpha$) subsets in $\binom{r-1}{p-1}$ ways. The result follows after multiplying these two binomial coefficients and taking the sum from $r=p$ to $r=n$. \end{proof} \begin{lemma}\label{ssch} For $1\le p \le n$, we have $\sum\limits_{r=p}^{n}{\binom{n-1}{r-1}}{\binom{r-1}{p-1}}=\binom{n-1}{p-1}2^{n-p}$. \end{lemma} \begin{proof} \begin{align*} \sum\limits_{r=p}^{n}{\binom{n-1}{r-1}}{\binom{r-1}{p-1}}&= \sum\limits_{r=p}^{n}{\frac{(n-1)!}{(n-r)!(r-1)!}\cdot\frac{(r-1)!}{(r-p)!(p-1)!}}\\&= \sum\limits_{r=p}^{n}{\frac{(n-1)!}{(n-r)!(r-p)!(p-1)!}}\\&= \sum\limits_{r=p}^{n}{\frac{(n-1)!(n-p)!}{(n-r)!(p-1)!(r-p)!(n-p)!}} \, \, \left(\textnormal{multiplying by $\frac{(n-p)!}{(n-p)!}$}\right)\\&=\sum\limits_{r=p}^{n}{\frac{(n-1)!}{(p-1)!(n-p)!}\cdot\frac{(n-p)!}{(n-r)!(r-p)!}} \textnormal{ (by splitting and rearranging the fractions)}\\& = \sum\limits_{r=p}^{n}{\binom{n-1}{p-1}\binom{n-p}{n-r}}\\& = \binom{n-1}{p-1}\sum\limits_{r=p}^{n}{\binom{n-p}{n-r}}\\&= \binom{n-1}{p-1}2^{n-p}, \end{align*} as required. \end{proof} Now we have the theorem below. \begin{theorem} Let $\mathcal{SS}_{n}$ be as defined in \eqref{qn1}. Then $|E(\mathcal{SS}_{n})|=3^{n-1}$. \end{theorem} \begin{proof} By Lemma \ref{un} and the two preceding lemmas, the number of idempotents of height $p$ in $\mathcal{SS}_{n}$ is $\binom{n-1}{p-1}2^{n-p}$; summing this quantity from $p=1$ to $p=n$ and applying the binomial theorem gives $3^{n-1}$. \end{proof} \begin{remark} Notice that $1\in \dom \, \alpha$ for every $\alpha \in M(n,p)$.
Thus $M(n,p)$ has $\binom{n-1}{p-1}2^{n-p}$ $\mathcal{R}^{*}$-classes and $\binom{n}{p}$ $\mathcal{L}^{*}$-classes in its $J^{*}_{p}$. Similarly, the Rees quotient semigroup ${RSS}_{n}(p)$ has $\binom{n-1}{p-1}2^{n-p}+1$ $\mathcal{R}^{*}$-classes and $\binom{n}{p}+1$ $\mathcal{L}^{*}$-classes. \end{remark} \section{Rank properties} Let $S$ be a semigroup and $A$ a nonempty subset of $S$. The smallest subsemigroup of $S$ that contains $A$ is called the \emph{subsemigroup generated by $A$}, usually denoted by $\langle A \rangle$. If there exists a finite subset $A$ of a semigroup $S$ such that $\langle A \rangle$ equals $S$, then $S$ is referred to as a \emph{finitely generated semigroup}. The \emph{rank} of a finitely generated semigroup $S$ is defined as the minimum cardinality of a subset $A$ such that $\langle A \rangle = S$, that is, \[ \text{rank}(S) = \min\{|A| : \langle A \rangle = S\}. \] \noindent If $S$ is generated by a set of idempotents, then $S$ is called \emph{idempotent-generated} (equivalently, a \emph{semiband}), and the minimum cardinality of an idempotent generating set, the \emph{idempotent rank}, is denoted by $\text{idrank}(S)$. The monoid $\mathcal{LS}_{n}$ first appeared in Ganyushkin and Mazorchuk \cite{gmv}, where it was shown to be idempotent-generated. Moreover, the combinatorial properties of the semigroup have been explored in \cite{al3}, where it was shown that the size (or order) of $\mathcal{LS}_{n}$ corresponds to the \emph{large} (or \emph{double}) \emph{Schr\"{o}der number}: \[s_{0}=1, \quad s_{n}= \frac{1}{n+1} \sum\limits_{r=0}^{n}\binom{n+1}{n-r}\binom{n+r}{r} \quad (n\geq 1).\] \noindent It is important to note that Dimitrova and Koppitz \cite{dm} examine the rank of the semigroup of all order-preserving and \emph{extensive} (i.e., order-increasing) partial transformations on a finite chain, denoted by $\mathcal{POE}_{n}$. This monoid can easily be shown to be isomorphic to the large Schr\"{o}der monoid $\mathcal{LS}_{n}$ (see \cite{umar}); thus, the rank of $\mathcal{LS}_{n}$ can easily be obtained from [\cite{dm}, Proposition 4.0] by isomorphism. However, we present the result and its proof for the sake of completeness and to exhibit the generating elements. Moreover, Ping \emph{et al.} \cite{png} generalized the results of Dimitrova and Koppitz \cite{dm} by obtaining the ranks of the two-sided ideals of $\mathcal{POE}_{n}$. The ranks of the two-sided ideals of $\mathcal{LS}_{n}$ can also be obtained, by isomorphism, from [\cite{png}, Proposition 2.6]. Furthermore, both articles fail to recognize that each of the objects considered has a minimum generating set (not merely a minimal one), since each of these objects is an $\mathcal{R}$-trivial semigroup; thus, most of the proofs given in the two articles are belabored. In this section, we provide, among other results, proofs of the rank properties of these objects. For a more detailed discussion of ranks in semigroup theory, we refer the reader to \cite{hrb, hrb2}. Several authors have explored the ranks, idempotent ranks, and nilpotent ranks of various classes of semigroups of transformations. Notably, the works of Gomes and Howie \cite{gm, gm2, gm3}, Howie and McFadden \cite{hf}, Garba \cite{g1, g2, gu1}, Umar \cite{umar, ua1, ua} and Zubairu \emph{et al.} \cite{zm1} are emphasized here. The large Schr\"{o}der monoid $\mathcal{LS}_{n}$ was shown to be idempotent-generated in [\cite{gmv}, Theorem 14.4.5], where it first appeared.
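As a quick numerical sanity check on the displayed formula for $s_{n}$ and on the idempotent count $|E(\mathcal{SS}_{n})|=3^{n-1}$ obtained above, the following short Python sketch evaluates both closed forms for small $n$; it is purely illustrative (the range of $n$ is an arbitrary choice) and is not part of the results of this paper. The values $s_{1},\ldots,s_{5}=2,6,22,90,394$ agree with the known large Schr\"{o}der numbers.
\begin{verbatim}
# Illustrative sanity check only (not part of the paper's results).
from math import comb

def schroder(n):
    # s_0 = 1,  s_n = (1/(n+1)) * sum_{r=0}^{n} C(n+1, n-r) * C(n+r, r)
    if n == 0:
        return 1
    return sum(comb(n + 1, n - r) * comb(n + r, r) for r in range(n + 1)) // (n + 1)

def idempotents_SSn(n):
    # |E(SS_n)| = sum_{p=1}^{n} C(n-1, p-1) * 2^(n-p)
    return sum(comb(n - 1, p - 1) * 2 ** (n - p) for p in range(1, n + 1))

for n in range(1, 6):
    assert idempotents_SSn(n) == 3 ** (n - 1)   # matches the closed form 3^(n-1)
    print(n, schroder(n), idempotents_SSn(n))
# prints: 1 2 1,  2 6 3,  3 22 9,  4 90 27,  5 394 81
\end{verbatim}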
Our aim is to compute the rank of the two-sided ideal $M(n,p)$ of the Schr\"{o}der monoid $\mathcal{SS}_{n}$, thereby obtaining the rank of $\mathcal{SS}_{n}$ as a special case. We first record the following definitions and a well-known result about order-decreasing maps from \cite{umar, ua3}. Let $f(\alpha)$ be the cardinality of \[F(\alpha) =\{x\in \dom \, \alpha: \, x\alpha=x\},\] the set of fixed points of the map $\alpha$. Then we have the following lemma. \begin{lemma}\label{hq} For all order-decreasing partial maps $\alpha$ and $\beta$ on $A\subseteq [n]$, $F(\alpha\beta)=F(\alpha)\cap F(\beta)=F(\beta\alpha)$. \end{lemma} \begin{proof} If $\alpha$ or $\beta$ is zero (i.e., the empty map), the result follows. Now suppose $\alpha$ and $\beta$ are nonzero order-decreasing partial maps. The proof is then the same as that of [\cite{ua3}, Lemma 2.1]. \end{proof} We initiate our examination with the following result about generating elements of $\mathcal{LS}_{n}$. \begin{lemma}\label{idem} The large Schr\"{o}der monoid $\mathcal{LS}_{n}$ is idempotent-generated. \end{lemma} \begin{proof} Let $\alpha\in \mathcal{LS}_{n}$ be as expressed in \eqref{1}. If $p=0$, then $\alpha$ is the empty map, which is an idempotent, and the result follows. Now suppose $1\le p\le n$ and let $t_{i}=\min A_{i}$ for $1\le i \le p$. Notice that the maps defined as \[\epsilon=\begin{pmatrix}A_1&\cdots&A_p\\t_1&\cdots&t_p\end{pmatrix}\] \noindent and \[\epsilon_{i}=\begin{pmatrix}a_1&\cdots & a_{i-1}&\{a_{i},t_{i}\}&t_{i+1}&\cdots &t_p\\a_1&\cdots&a_{i-1}&a_{i}&t_{i+1}&\cdots&t_p\end{pmatrix} \, \, (1\le i\le p)\] \noindent are idempotents in $J^{*}_{p}$. Moreover, \begin{align*}\epsilon\epsilon_{1}\cdots\epsilon_{p}=& \begin{pmatrix} A_1 & \cdots & A_p \\ t_1 & \cdots & t_p \end{pmatrix} \begin{pmatrix} \{t_{1},a_1\} & t_2 & \cdots & t_p \\ a_1 & t_2 & \cdots & t_p \end{pmatrix} \begin{pmatrix} a_1 & \{a_{2},t_2\} & t_{3} & \cdots & t_p \\ a_1 & a_2 & t_{3} & \cdots & t_p \end{pmatrix} \cdots \begin{pmatrix} a_1 & \cdots & a_{p-1} & \{a_{p}, t_p\} \\ a_1 & \cdots & a_{p-1} & a_p \end{pmatrix} \\=& \begin{pmatrix} A_1 & \cdots & A_p \\ a_1 & \cdots & a_p \end{pmatrix} = \alpha. \end{align*} The result now follows. \end{proof} Notice that in the proof of the above lemma, $|\im \, \alpha|=h(\alpha)=h(\epsilon)=h(\epsilon_{i})=p$ for all $1\le i\le p$. Consequently, we have the following result. \begin{lemma}\label{hh} Every element in $S\in\{{RSS}_{n}(p), \, {RLS}_{n}(p), \, K(n,p), \, M(n,p) \}$ of height $p$ can be expressed as a product of idempotents in $S$, each of height $p$. \end{lemma} The next result shows that the set of nonzero idempotents in ${RLS}_{n}(p)$ (resp., ${RSS}_{n}(p)$) is the minimum generating set of ${RLS}_{n}(p)\setminus \{0\}$ (resp., ${RSS}_{n}(p)\setminus \{0\}$). \begin{lemma} Let $\alpha$, $\beta$ be elements in ${RLS}_{n}(p)\setminus \{0\}$ \emph{(}resp., $\alpha$, $\beta$ in ${RSS}_{n}(p)\setminus \{0\}$\emph{)}. Then $\alpha\beta\in E({RLS}_{n}(p)\setminus \{0\})$ \emph{(}resp., $\alpha\beta\in E({RSS}_{n}(p)\setminus \{0\})$\emph{)} if and only if $\alpha, \, \beta\in E({RLS}_{n}(p)\setminus \{0\})$ and $\alpha\beta=\alpha$ \emph{(}resp., $\alpha, \, \beta\in E({RSS}_{n}(p)\setminus \{0\})$ and $\alpha\beta=\alpha$\emph{)}. \end{lemma} \begin{proof} Suppose $\alpha\beta\in E({RLS}_{n}(p)\setminus \{0\})$ (resp., $\alpha\beta\in E({RSS}_{n}(p)\setminus \{0\})$).
Then \[ p= f(\alpha\beta)\leq f(\alpha)\leq |\im \, \alpha|=p,\] \[ p= f(\alpha\beta)\leq f(\beta)\leq |\im \, \beta|=p.\] This ensures that \[F(\alpha)=F(\alpha\beta)=F(\beta), \] \noindent and so $\alpha, \, \beta\in E({RLS}_{n}(p)\setminus \{0\})$ and $\alpha\beta=\alpha$ (resp., $\alpha, \, \beta\in E({RSS}_{n}(p)\setminus \{0\})$ and $\alpha\beta=\alpha$). The converse is obvious. \end{proof} In particular, by Lemma \ref{hq}, a product involving a non-idempotent element can never be a nonzero idempotent, so every nonzero idempotent must belong to any generating set. Consequently, the rank and the idempotent rank of ${RLS}_{n}(p)$ (resp., ${RSS}_{n}(p)$) coincide. We can now establish the key results of this section. \begin{theorem}\label{pb} Let ${RLS}_{n}(p)$ be as defined in \eqref{knn}. Then \[\text{rank } {RLS}_{n}(p)= \text{idrank } {RLS}_{n}(p)=|E({RLS}_{n}(p)\setminus\{0\})|=\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}.\] \end{theorem} \begin{proof} It follows from the fact that there are $\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}$ $\mathcal{R}^{*}$-classes of nonzero elements in ${RLS}_{n}(p)$ and each $\mathcal{R}^{*}$-class contains a unique idempotent, by Lemma \ref{un}. \end{proof} \begin{theorem}\label{mnnn} Let ${RSS}_{n}(p)$ be as defined in \eqref{mnn}. Then \[\text{rank } {RSS}_{n}(p)= \text{idrank } {RSS}_{n}(p)=|E({RSS}_{n}(p)\setminus\{0\})|=\binom{n-1}{p-1}2^{n-p}.\] \end{theorem} \begin{proof} It follows from the fact that there are $\binom{n-1}{p-1}2^{n-p}$ $\mathcal{R}^{*}$-classes of nonzero elements in ${RSS}_{n}(p)$ and each $\mathcal{R}^{*}$-class contains a unique idempotent, by Lemma \ref{un}. \end{proof} The next lemma is crucial for determining the ranks of the Schr\"{o}der monoids $\mathcal{LS}_{n}$ and $\mathcal{SS}_{n}$. Now for $1\leq p\leq n$, let \[J^{*}_{p}=\{\alpha\in \mathcal{LS}_{n}: |\im \, \alpha|=p \}.\] \begin{lemma}\label{lm1} For $0\leq p\leq n-2$, $J^{*}_{p}\subset \langle J^{*}_{p+1}\rangle$. In other words, if $\alpha\in J^{*}_{p}$ then $\alpha\in \langle J^{*}_{p+1}\rangle$ for $1\leq p\leq n-2$. \end{lemma} \begin{proof} It suffices to prove that every idempotent of height $p$ can be expressed as a product of idempotents of height $p+1$, by Lemma \ref{idem}. Let $\epsilon\in E(J^{*}_{p})$ be expressed as: \[\epsilon= \begin{pmatrix} A_1 & \cdots & A_p \\ t_1 & \cdots & t_p \end{pmatrix}, \] \noindent where $\min A_{i}=t_{i}$ for all $1\le i\le p$. Now $\epsilon$ is either a full map or a partial map. We shall first consider the case when $\epsilon$ is a full map. \noindent \textbf{Case 1.} Suppose $\epsilon$ is a full idempotent map of height $p$. Since $p\le n-2$, there exists $1\le i\le p$ such that $|A_{i}|\ge 2$. Thus, there are two subcases to consider.\\ \noindent\textbf{Subcase i.} Suppose $|A_{i}|=2$ and let $A_{i}=\{t_{i}, x_{i_{1}}\}$. Then there exists $1\leq j\leq p$ such that $|A_{j}|\ge 2$. Now let $A_{j}=\{t_{j}, y_{j_{1}},y_{j_{2}}, \ldots, t_{j+1}-1\}$, where we may suppose without loss of generality that $i<j$.
Then define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i-1} & t_{i} & x_{i_{1}}& A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i-1} &t_{i}&x_{i_{1}}& t_{i+1}& \cdots & t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\left( \begin{array}{cccccccccccc} t_{1} & \cdots & t_{i-1} & \{t_{i}, x_{i_{1}} \} & t_{i+1} & \cdots & t_{j} & y_{j_{1}} & t_{j+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i-1} & t_{i} & t_{i+1} & \cdots & t_{j} & y_{j_{1}} & t_{j+1} & \cdots & t_{p} \end{array} \right).\] \\ \noindent\textbf{Subcase ii.} Suppose $|A_{i}|>2$, and let $A_{i}=\{t_{i}, x_{i_{1}}, \, x_{i_{2}} \, \ldots, \, t_{i+1}-1\}$. Thus define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i-1} & t_{i} & \{x_{i_{1}}, \, x_{i_{2}} \, \ldots, \, t_{i+1}-1 \}& A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i-1} &t_{i}& x_{i_{1}} & t_{i+1}& \cdots & t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\left( \begin{array}{cccccccccccc} t_{1} & \cdots & t_{i-1} & \{t_{i}, x_{i_{1}} \} & x_{i_{2}} & t_{i+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i-1} & t_{i} & x_{i_{2}}& t_{i+1} & \cdots & t_{p} \end{array} \right).\] \noindent Clearly, in either of the subcases, one can easily see that $\epsilon_{1}$ and $\epsilon_{2}$ are in $E(J^{*}_{p+1})$, and also in each subcase $\epsilon_{1}\epsilon_{2}=\epsilon$.\\ \noindent\textbf{Case 2.} Now suppose $\epsilon$ is strictly partial. Thus $|(\dom \, \epsilon)^{c}|\ge 1$. Let $q\in (\dom \, \epsilon)^{c}$. Then there are three subcases to consider, i.e., either (i) there exists $x_{i_{k}}$ and $x_{i_{k+1}}$ in $A_{i}$ (for some $1\le i\le p$ ), such that $x_{i_{k}}< q< x_{i_{k+1}}$; (ii) or there exists $1\le i\le p-1$, such that $\max A_{i}<q<\min A_{i+1}$; (iii) or $\max A_{p}< q$. Thus, we consider the three cases separately. \textbf{Subcase i.} Suppose there exists $x_{i_{k}}$ and $x_{i_{k+1}}$ in $A_{i}$ (for some $1\le i\le p$ ), such that $x_{i_{k}}< q< x_{i_{k+1}}$. Let $A_{i}=\{t_{i}, x_{i_{1}}, \ldots, x_{i_{{k}}}, x_{i_{k+1}}, \ldots\}$. Thus define $\epsilon_{1}$, $\epsilon_{2}$ and $\epsilon_{3}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i-1} & \{ t_{i}, x_{i_{1}}, \, \ldots, \, x_{i_{k}} \}& \{ x_{i_{k+1}}, x_{i_{k+2}}, \, \ldots \}&A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i-1} &t_{i}& x_{i_{k+1}}& t_{i+1}& \cdots & t_p \end{pmatrix};\] \[\epsilon_{2}=\left( \begin{array}{ccccccccccc} t_{1} & \cdots & t_{i} & \{q, x_{i_{k+1}} \} & t_{i+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i} & q& t_{i+1} & \cdots & t_{p} \end{array} \right)\] \noindent and \[\epsilon_{3}=\left( \begin{array}{cccccccccccc} t_{1} & \cdots & t_{i-1} & \{t_{i}, q \} & x_{i_{k+1}} & t_{i+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i-1} & t_{i} & x_{i_{k+1}}& t_{i+1} & \cdots & t_{p} \end{array} \right).\] \textbf{Subcase ii.} Suppose $\max A_{i}< q< \min A_{i+1}$. Now either \textbf{(a)} $|A_{i}|\geq 2$; or \textbf{(b)} $|A_{i}|= 1$ and $|A_{i+1}|\geq 2$; or \textbf{(c)} $|A_{i}|=|A_{i+1}|=1$ and $|A_{j}|\geq 2$ for some $i\neq j\neq i+1$; or \textbf{(d)} all blocks of $\epsilon$ are singleton, and so there exists $q^{\prime}\in \, (\dom \, \epsilon)^{c}$ such that $\max A_{j}< q< \min A_{j+1}$ for some $1\leq j\leq p-1$. \noindent \textbf{(a.)} Suppose $\max A_{i}< q< \min A_{i+1}$ such that $|A_{i}|\geq 2$. Now let $A_{i}=\{t_{i}, x_{i_{1}}, x_{i_{2}}, \dots\}$. 
Then define $\epsilon_{1}$, $\epsilon_{2}$ and $\epsilon_{3}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i-1} & t_{i}& \{ x_{i_{1}}, x_{i_{2}}, \, \ldots, \}&A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i-1} &t_{i}& x_{i_{1}}& t_{i+1}& \cdots & t_p \end{pmatrix};\] \[\epsilon_{2}=\left( \begin{array}{ccccccccccc} t_{1} & \cdots & t_{i} & \{ x_{i_{1}}, q \} & t_{i+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i} & x_{i_{1}}& t_{i+1} & \cdots & t_{p} \end{array} \right)\] \noindent and \[\epsilon_{3}=\left( \begin{array}{cccccccccccc} t_{1} & \cdots & t_{i-1} & \{t_{i}, x_{i_{1}} \} & q & t_{i+1} & \cdots & t_{p}\\ t_{1} & \cdots & t_{i-1} & t_{i} & q& t_{i+1} & \cdots & t_{p} \end{array} \right).\] \noindent In either of the two subcases above, one can easily verify that $\epsilon_{1}$, $\epsilon_{2}$ and $\epsilon_{3}$ are in $E(J^{*}_{p+1})$, and also \[\epsilon_{1}\epsilon_{2}\epsilon_{3}=\epsilon.\] \noindent\textbf{(b.)} Suppose $\max A_{i}< q< \min A_{i+1}$ such that, $|A_{i}|=1$ and $|A_{i+1}|\geq 2$. Now, let $A_{i+1}=\{t_{i+1}, x_{{(i+1)}_{1}}, x_{{(i+1)}_{2}}, \dots\}$. Then define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i} & t_{i+1}& \{ x_{{(i+1)}_{1}}, x_{{(i+1)}_{2}}, \, \ldots, \}&A_{i+2}&\cdots & A_{p} \\ t_1 & \cdots & t_{i} &t_{i+1}& x_{{(i+1)}_{1}}& t_{i+2}& \cdots & t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\left( \begin{array}{ccccccccccc} t_{1} & \cdots & t_{i} & q & \{t_{i+1}, x_{{(i+1)}_{1}}\} & t_{i+2}& \cdots & t_{p}\\ t_{1} & \cdots & t_{i} & q & t_{i+1} & t_{i+2}& \cdots & t_{p} \end{array} \right).\] \noindent Clearly $\epsilon_{1}$ and $\epsilon_{2}$ in $E(J^{*}_{p+1})$ and $\epsilon_{1}\epsilon_{2}=\epsilon$. \noindent \textbf{(c.)} Suppose $\max A_{i}< q< \min A_{i+1}$ such that $|A_{i}|=|A_{i+1}|=1$ and suppose there exists $1\leq j\leq p-2$ such that $|A_{j}|\geq 2$, where $i\neq j\neq i+1$. Now let $A_{j}=\{t_{j}, y_{j_{1}}, y_{j_{2}}, \, \ldots\}$ and we may suppose without loss of generality that $i<j$. Thus $q<y_{j_{1}}$. Now, define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i} &q& A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i} &q&t_{i+1}& \cdots & t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\left( \begin{array}{cccccccccc} t_{1} & \cdots & t_{j+1} & y_{j_{1}} & t_{j+2}& \cdots & t_{p}\\ t_{1} & \cdots & t_{j+1} & y_{j_{1}} & t_{j+2}& \cdots & t_{p} \end{array} \right).\] \noindent\textbf{(d.)} Suppose $\max A_{i}< q< \min A_{i+1}$ is such that $|A_{i}|=1$ for all $1\le i \le p$, and suppose there exists $d\in (\dom \, \epsilon)^{c} $, such that $\max A_{j}< d<\min A_{j+1}$ for some $1< j< p$. We may suppose without loss of generality that $d<q$. If $i=j$, then $\max A_{i}< d<q< \min A_{i+1}$. Thus, define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i} &q& A_{i+1}&\cdots & A_{p} \\ t_1 & \cdots & t_{i} &q&t_{i+1}& \cdots & t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\left( \begin{array}{cccccccccc} t_{1} & \cdots & t_{i} & d & t_{i+1}& \cdots & t_{p}\\ t_{1} & \cdots & t_{i} & d & t_{i+1} & \cdots & t_{p} \end{array} \right).\] However, if $i\neq j$ then $j<i$ since $d<q$. 
So define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{j} &d& A_{j+1}&\cdots&A_{i}& \cdots& A_{p} \\ t_1 & \cdots & t_{j} &d&t_{j+1}& \cdots &t_{i} &\cdots& t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\begin{pmatrix} t_1 & \cdots & t_{j} &t_{j+1}&\cdots&t_{i}& q&t_{i+1}& \cdots& t_{p} \\ t_1 & \cdots & t_{j} &t_{j+1}& \cdots &t_{i} &q&t_{i+1}&\cdots& t_p \end{pmatrix}. \] Thus in all the above cases, it can easily be seen that $\epsilon_{1}$ and $\epsilon_{2}$ are in $E(J^{*}_{p+1})$, and $\epsilon_{1}\epsilon_{2}=\epsilon$. \noindent\textbf{Case 3.} If $\max A_{p}< q$, then either (i) there exists $1\leq i\leq p$ such that $|A_{i}|\geq 2$; or (ii) there exists $d\in (\dom \, \epsilon)^{c}$ such that $q<d$. \noindent \textbf{(i.)} Suppose there exists $1\leq i\leq p$ such that $|A_{i}|\geq 2$. Now let $A_{i}=\{t_{i}, x_{i_{1}}, \ldots\}$. Then define $\epsilon_{1}$ and $\epsilon_{2}$ as: \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{i-1} &t_{i}&\{x_{i_{1}}, x_{i_{2}}, \ldots\}&A_{i+1}& \cdots& A_{p} \\ t_1 & \cdots & t_{i-1} &t_{i}&x_{i_{1}}& t_{i+1} &\cdots& t_p \end{pmatrix}\] \noindent and \[\epsilon_{2}=\begin{pmatrix} t_1 & \cdots & t_{i-1} &\{t_{i}, x_{i_{1}}\}&t_{i+1}& \cdots& t_{p}&q \\ t_1 & \cdots & t_{i-1} &t_{i}& t_{i+1} &\cdots& t_p& q \end{pmatrix} .\] \noindent \textbf{(ii.)} Suppose there exists $d\in (\dom \, \epsilon)^{c}$ such that $q<d$. Then define \[\epsilon_{1}=\begin{pmatrix} A_1 & \cdots & A_{p}&q \\ t_1 & \cdots & t_p&q \end{pmatrix}\] \noindent and \[\epsilon_{2}=\begin{pmatrix} t_1 & \cdots& t_{p}&d \\ t_1 & \cdots& t_p& d \end{pmatrix} . \] \noindent Clearly, in (i) and (ii) above, $\epsilon_{1}$ and $\epsilon_{2}$ are in $E(J^{*}_{p+1})$ and $\epsilon_{1}\epsilon_{2}=\epsilon$. The proof of the lemma is now complete. \end{proof} \begin{remark}\label{nn} The above lemma also holds when the large Schr\"{o}der monoid $\mathcal{LS}_{n}$ is replaced by the small Schr\"{o}der monoid $\mathcal{SS}_{n}$, with a slight modification of the proof by substituting $t_{1} = 1$ in the definition of $\epsilon$. \end{remark} Consequently, we have the following result (which can equivalently be obtained from [\cite{png}, Proposition 2.6] by isomorphism). \begin{theorem}\label{knp} Let $K(n,p)$ be as defined in \eqref{kn}. Then \[\text{rank } K(n,p)= \text{idrank } K(n,p)=|E(J^{*}_{p})|=\sum\limits_{r=p}^{n}{\binom{n}{r}}{\binom{r-1}{p-1}}.\] \end{theorem} \begin{proof}Notice that by Lemma \ref{lm1}, $\langle J^{*}_{p} \rangle= K(n,p)$ for all $p$. Notice also that $\langle E({RLS}_{n}(p)\setminus\{0\})\rangle= J^{*}_{p}$. The result now follows from Theorem \ref{pb}. \end{proof} We now deduce the following corollary, which can also be obtained from [\cite{dm}, Proposition 4] by isomorphism. \begin{corollary} We have \(\text{rank } \mathcal{LS}_{n}= \text{idrank } \mathcal{LS}_{n}=2n.\) \end{corollary} \begin{proof} Notice that by Lemma \ref{lm1}, $\langle J^{*}_{n-1} \rangle=\mathcal{LS}_{n}\setminus J^{*}_{n}=K(n,n-1)$. Notice also that $J^{*}_{n}$ contains only the identity element $1_{[n]}$. Thus $\text{rank } \mathcal{LS}_{n}= \text{idrank } \mathcal{LS}_{n}= \text{idrank }K(n,n-1)+1$. The result now follows from Theorem \ref{knp}. \end{proof} Similarly, we also obtain the next theorem and its corollary: \begin{theorem}\label{mnp} Let $M(n,p)$ be as defined in \eqref{mn}.
Then \[\text{rank } M(n,p)= \text{idrank } M(n,p)=|E(J^{*}_{p})|=\binom{n-1}{p-1}2^{n-p}.\] \end{theorem} \begin{corollary} We have \(\text{rank } \mathcal{SS}_{n}= \text{idrank } \mathcal{SS}_{n}=2n-1.\) \end{corollary} \begin{proof} The proof is similar to that of the preceding corollary, using Remark \ref{nn} and Theorem \ref{mnp}. \end{proof} \noindent{\bf Acknowledgements, Funding and/or Conflicts of interests/Competing interests.} The first-named author would like to thank Bayero University and TETFund (TETF/ES/UNI/KANO/TSAS/2022) for financial support. He would also like to thank Sultan Qaboos University, Oman, for its hospitality during a one-year postdoctoral research visit to the institution. \begin{thebibliography}{99} \markboth{Reference}{} \bibitem{zua} Ali, B., Umar, A. and Zubairu, M. M. Regularity and Green's relations for the semigroups of partial and full contractions of a finite chain. \emph{Scientific African}, 21, (2023) p.e01890. \bibitem{dm} Dimitrova, I., Koppitz, J. On the monoid of all partial order-preserving extensive transformations. \emph{Comm. Algebra} 40(5), (2012) 1821-1826. \bibitem{FOUN} Fountain, J. B. Adequate Semigroups. \emph{Proc. Edinb. Math. Soc.} \textbf{22} (1979), 113--125. \bibitem{FOUN2} Fountain, J. B. Abundant Semigroups. \emph{Proc. Lond. Math. Soc.} \textbf{44} (1982), 103--129. \bibitem{g1} Garba, G. U. On the nilpotent rank of certain semigroups of transformations, \emph{Glasgow Math. J.} \textbf{36} (1), (1994), 1--9. \bibitem{g2} Garba, G. U. Nilpotents in partial one-one order-preserving transformations, \emph{Semigroup Forum}, \textbf{48} (1994), 37--49. \bibitem{gu1} Garba, G. U. On the idempotent ranks of certain semigroups of order-preserving transformations, \emph{Portugaliae Mathematica}, \textbf{51} (1994), 185-204. \bibitem{gmv} Ganyushkin, O. and Mazorchuk, V. Classical Finite Transformation Semigroups. An Introduction. Algebra and Applications 9, Springer, London, 2009. \bibitem{gm} Gomes, G. M. S. and Howie, J. M. On the ranks of certain semigroups of order preserving transformations, \emph{Semigroup Forum}, \textbf{45} (1992), 272-282. \bibitem{gm2} Gomes, G. M. S. and Howie, J. M. Nilpotents in finite symmetric inverse semigroup, \emph{Proc. Edinburgh Math. Soc.,} 30 (1987), 383--395. \bibitem{gm3} Gomes, G. M. S. and Howie, J. M. On the ranks of certain finite semigroups of transformations, \emph{Math. Proc. Cambridge Phil. Soc.}, \textbf{101} (1987), 395--403. \bibitem{howi} Howie, J. M. \emph{Fundamentals of semigroup theory}. London Mathematical Society, New series 12. The Clarendon Press, Oxford University Press, 1995. \bibitem{hrb} Howie, J. M. and Marques Ribeiro, M. I. Rank properties in finite semigroups, \emph{Comm. Algebra}, \textbf{27}: 11, (1999), 5333--5347. \bibitem{hrb2} Howie, J. M. and Marques Ribeiro, M. I. Rank Properties in Finite Semigroups II: The Small Rank and the Large Rank. \emph{Southeast Asian Bulletin of Mathematics,} 24, (2000), 231-237. \bibitem{hf} Howie, J. M. and McFadden, R. B. Idempotent rank in finite full transformation semigroups, \emph{Proc. Roy. Soc. Edinb.} A, \textbf{114} (1990), 161--167. \bibitem{HRS} Howie, J. M., Robertson, E. F. and Schein, B. M. A combinatorial property of finite full transformation semigroups. \emph{Proc. Roy. Soc. Edinb.} \textbf{109A} (1988), 319--328. \bibitem{ph} Higgins, P. M. \emph{Techniques of semigroup theory.} Oxford University Press, 1992. \bibitem{ph1} Higgins, P. M. Divisors of semigroups of order-preserving mappings on a finite chain, \emph{Intern.
Journal of Algebra and Computations} \textbf{5} (1995), 725-742. \bibitem{al1} Laradji, A. and Umar, A. On certain finite semigroups of order-decreasing transformations I, \emph{Semigroup Forum}, \textbf{69} (2004), 184-200. \bibitem{al2} Laradji, A. and Umar, A. Combinatorial results for semigroups of order-decreasing partial transformations, \emph{Journal of Integer Sequences}, \textbf{7} (2004), 04.3.8. \bibitem{al3} Laradji, A. and Umar, A. Combinatorial results for semigroups of order-preserving partial transformations, \emph{Journal of Algebra}, \textbf{278} (2004), 342-359. \bibitem{al4} Laradji, A. and Umar, A. Combinatorial results for semigroups of order-preserving full transformations, \emph{Semigroup Forum}, \textbf{72} (2006), 51-62. \bibitem{al5} Laradji, A. and Umar, A. Lattice paths and order-preserving partial transformations, \emph{Utilitas Mathematica}, \textbf{101} (2016), 23--36. \bibitem{png} Ping, Z., Huabi, H. and Yunyun, Q. The ideals of the monoid of all partial order-preserving extensive transformations, \emph{Semigroup Forum} \textbf{104} (2022), 494-508. \bibitem{auc} Umar, A. Some combinatorial problems in the theory of symmetric inverse semigroups, \emph{Algebra Discrete Math.}, \textbf{9} (2010), 2, 115-126. \bibitem{umar} Umar, A. \emph{Semigroups of order-decreasing transformations}. PhD thesis, University of St Andrews. (1992). \bibitem{quasi} Umar, A. A class of quasi-adequate transformation semigroups. \emph{Portugaliae Mathematica} \textbf{ 51}, (1994), 4, 553--570. \bibitem{ua1} Umar, A. On the semigroups of order-decreasing finite full transformations. \emph{Proc. Roy. Soc. Edinburgh Sect. A} \textbf{120}, (1992), no. 1-2, 129--142. \bibitem{ua} Umar, A. On certain infinite semigroups of order-decreasing transformations I. \emph{Comm. Algebra} \textbf{25} (1997), no. 1-2, 2989--2999. \bibitem{ua3} Umar, A. On the ranks of certain finite semigroups of order-decreasing transformations. \emph{Portugaliae Mathematica} \textbf{ 53}, (1996), 1, 23--34. \bibitem{um} Umar, A. and Zubairu, M. M. On certain semigroups of contraction mappings of a finite chain. \emph{Algebra Discrete Math.} \textbf{32} (2021), No. 2, 299--320. \bibitem{zm1} Zubairu, M. M., Umar, A. and Aliyu, J. A. On certain semigroups of order decreasing full contraction mappings of a finite chain. \emph{Recent Developments in Algebra and Analysis: (Trend in Mathematics), Springer Int. Pub.}, \textbf{1}, (2024), 35--45. \end{thebibliography} \end{document}
2412.13754v1
http://arxiv.org/abs/2412.13754v1
Optimal Exact Recovery in Semi-Supervised Learning: A Study of Spectral Methods and Graph Convolutional Networks
\pdfoutput=1 \documentclass[10pt]{amsart} \usepackage{amsthm,amsmath,amsfonts,amssymb} \usepackage[section]{algorithm} \usepackage{algpseudocode} \usepackage{amssymb} \usepackage[toc,page]{appendix} \usepackage{array} \usepackage{bbm} \usepackage{calligra} \usepackage{color} \usepackage{comment} \usepackage[dvipsnames]{xcolor} \usepackage{enumerate} \usepackage{enumitem} \usepackage{fullpage} \usepackage{graphicx} \usepackage{mathMacros} \usepackage{mathtools} \usepackage{multirow} \usepackage{pifont}\usepackage{caption} \usepackage{subcaption} \usepackage{stmaryrd} \usepackage{tablefootnote} \usepackage[square,numbers]{natbib} \bibliographystyle{unsrtnat} \usepackage{hyperref} \usepackage{cleveref} \usepackage{autonum} \cslet{blx@noerroretextools}\empty \hypersetup{ colorlinks=true, linkcolor={red!85!black}, citecolor={blue!75!black}, filecolor=magenta, urlcolor={blue!75!black}, } \colorlet{linkequation}{blue} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{axiom}{Axiom} \newtheorem{claim}[axiom]{Claim} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{remark}[theorem]{Remark} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem*{example}{Example} \newtheorem*{fact}{Fact} \newtheorem{notation}[theorem]{Notation} \DeclareMathAlphabet{\mathcalligra}{T1}{calligra}{m}{n} \DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it} \algrenewcommand\algorithmicrequire{\textbf{Input:}} \algrenewcommand\algorithmicensure{\textbf{Output:}} \newcommand{\todo}[1]{\textcolor{magenta}{[#1]}} \newcommand{\vphi}{\varphi} \renewcommand{\epsilon}{\varepsilon} \newcommand{\iid}{\stackrel{\mathrm{\tiny{i.i.d.}}}{\sim}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\CSBM}{\textnormal{CSBM}} \newcommand{\SBM}{\textnormal{SBM}} \newcommand{\GMM}{\textnormal{GMM}} \newcommand{\Ber}{\textnormal{Ber}} \newcommand{\diag}{\textnormal{diag}} \newcommand{\tra}{\mathrm{tr}} \newcommand{\te}{\mathrm{te}} \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \newcommand{\bGamma}{\boldsymbol{\Gamma}} \renewcommand{\sT}{\top} \title{Optimal Exact Recovery in Semi-Supervised Learning:\\ A Study of Spectral Methods and Graph Convolutional Networks} \author{Hai-Xiao Wang} \address{Department of Mathematics, University of California, San Diego, La Jolla, CA 92093} \email{[email protected]} \author{Zhichao Wang} \address{Department of Mathematics, University of California, San Diego, La Jolla, CA 92093} \email{[email protected]} \date{\today} \begin{document} \maketitle \input{abstract} \section{Introduction}\label{sec:intro} Graph Neural Networks (GNNs) have emerged as a powerful method for tackling various problems in the domain of graph-structured data, such as social networks, biological networks, and knowledge graphs. The versatility of GNNs allows for applications ranging from node classification to link prediction and graph classification. To explore the mechanisms and functionality behind GNNs, it is natural to assume certain data-generation models, so that the fundamental limits of certain tasks can be characterized mathematically. In particular, we focus on synthetic data sampled from the \emph{Contextual Stochastic Block Model} (CSBM) introduced in \cite{deshpande2018contextual}.
In the binary \emph{Stochastic Block Model} (SBM), vertices are connected with probability $p$ when they are from the same community, and with probability $q$ otherwise. The CSBM extends the traditional SBM by additionally associating each node with a feature vector sampled from a corresponding \emph{Gaussian Mixture Model} (GMM). The parameters of the CSBM consist of the connection probabilities $p$ and $q$ of the SBM and the \emph{signal-to-noise ratio} (SNR) of the GMM. We investigate the \emph{semi-supervised} node classification problem, which aims to recover the labels of unknown nodes when some node labels are revealed. Existing work in the literature has focused on the generalization properties of GNNs \cite{esser2021learning,bruna2017community,baranwal2021graph}, the role of nonlinearity \cite{lampert2023self} and the phenomenon of oversmoothing \cite{wu2022non}. In this paper, by contrast, we focus on the fundamental limits of the CSBM and explore the following questions. \begin{enumerate} \item What is the necessary condition on the parameters of CSBM to classify all nodes correctly? \item What is the best possible accuracy for any algorithm when given the model parameters? \item Can we design an efficient estimator to achieve the best possible accuracy? \item How well do GNNs perform under this evaluation metric? \end{enumerate} In addressing these challenges, we consider the \emph{transductive learning} framework, where the connections between known and unknown nodes are utilized efficiently, in contrast to \emph{inductive learning}, where only the connections among known nodes are involved. For the first time, we identify the \emph{Information-Theoretic} (IT) limits of CSBM for all algorithms and the necessary condition to classify all nodes correctly, especially all unknown nodes. This discovery is pivotal as it sets a benchmark for evaluating the performance of various algorithms on this type of data for the node classification problem. \subsection{Related work} We review some relevant previous literature below. \subsubsection*{Unsupervised learning on \emph{CSBM}} For benchmarking and demonstrating theoretical guarantees, the CSBM has emerged as a widely adopted data model for the theoretical analysis of diverse algorithms. In the extremely \emph{sparse} graph setting, the first tight IT analysis for community detection on CSBM was provided by \cite{deshpande2018contextual} via a non-rigorous cavity method of statistical physics. Later, \cite{lu2023contextual} proved the conjecture in \cite{deshpande2018contextual}, and characterized the sharp threshold for detecting the community. Meanwhile, \cite{abbe2022lp} established the sharp threshold for correctly classifying all nodes and provided a simple spectral algorithm that reaches the IT limits for optimal recovery. \subsubsection*{Semi-supervised learning} Theoretical analyses within the semi-supervised learning framework have previously addressed various aspects. First, semi-supervised linear regression was explored in a line of research work, e.g. \cite{azriel2022semi,ryan2015semi,chakrabortty2018efficient,tony2020semisupervised}. Using an information-theoretic approach, the generalization error was characterized in \cite{he2022information} for iterative methods and in \cite{aminian2022information} under the covariate-shift setting. Moreover, \cite{belkin2004regularization} explored the task of labeling a large partially labeled graph via regularization and semi-supervised regression on graphs.
\cite{lelarge2019asymptotic, nguyen2023asymptotic} explored asymptotic Bayes risks on GMM in semi-supervised learning, whereas we extend this to CSBM under the perfect classification setting. \subsubsection*{Graph Convolutional Networks \emph{(GCNs)}} GCN is one of the most fundamental GNN architectures, introduced by \cite{kipf2017semisupervised}. There are many works that theoretically analyze GCNs from different perspectives. For example, \cite{wu2022non} studied the phenomenon of oversmoothing of GCN in CSBM; the expressivity of deep GCNs is studied by \cite{oono2019graph}; \cite{wei2022understanding,baranwal2023optimality} analyzed Bayesian inference in nonlinear GCNs; \cite{ma2022is} showed that GCNs can perform well over some heterophilic graphs, especially in CSBM. Additionally, \cite{huang2023graph} analyzed the feature learning of GCNs on modified CSBM (SNM therein), and \cite{lu2022learning} showed the learning performance on SBM based on the dynamics of coordinate descent on GCNs. However, currently, there is no complete analysis of the training dynamics for GCNs on CSBM. Based on the analysis of GCNs, there are many modifications of GCN architectures with theoretical improvements, for instance, line GNNs with the non-backtracking operator \cite{chen2018supervised} for community detection, simple spectral graph convolution \cite{zhu2021simple}, and graph attention \cite{fountoulakis2023graph,fountoulakis2022classification}. \subsubsection*{Generalization theory of \emph{GCNs}} Many works have studied the generalization performance of GCNs. \cite{tang2023generalization} controlled the transductive generalization gap of GCNs trained by SGD, and \cite{bruna2017community} explored the community detection for SBM with GCNs. The generalization performance of GCNs on CSBM has been considered in \cite{baranwal2021graph,baranwal2022effects,chen2018supervised}. Compared with \cite{baranwal2021graph}, our result studied the exact recovery performance of linear GCNs on sparser CSBM. Moreover, \cite{shi2024homophily} provided heuristic formulas of the regression generalization error in GCN for CSBM, showing the double descent phenomenon. Later, \cite{duranthon2024asymptotic} extended the computation to arbitrary convex loss and regularization for extreme sparse CSBMs. Differently, we proved the asymptotic training and test errors for linear regression on GCNs for sparse CSBMs. Recently, \cite{duranthon2023optimal} compared the optimal belief-propagation-based algorithm with general GNNs for CSBM under the constant degree regime. In terms of the generalization, the roles of self-loops and nonlinearity in GCNs have been studied in \cite{lampert2023self,kipf2017semisupervised}. Our results in GCNs also provide a way to choose the optimal self-loop weight in GCN to achieve optimal performance. \subsection{Main contributions} Our contribution lies in the following five perspectives. \begin{enumerate} \item Mathematically, for any algorithm, we derive the necessary and sufficient conditions for correctly classifying all nodes on CSBM under the semi-supervised setting. \item When perfect classification is impossible, we characterize the lower bound of the asymptotic misclassification ratio for any algorithm. \item We devise a spectral estimator, provably achieving perfect classification down to IT limits. \item We evaluate the efficacy of graph ridge regression and GCN on the CSBM for perfect classification. 
\item We present a method for selecting the \textit{optimal self-loop} weight in a graph to optimize its performance. This approach offers novel insights into the modification of GCN architectures. \end{enumerate} \section{Preliminaries}\label{sec:prel} \subsection{Node classification} Let $\cV$ and $\cE$ denote the set of vertices and edges of graph $\gG$ respectively, with $|\cV| = N \in \N_{+}$. Assume that $\cV$ is composed of two disjoint sets $\cV_{+}, \cV_{-}$, i.e., $\cV = \cV_{+} \cup \cV_{-}$ and $\cV_{+} \cap \cV_{-} = \emptyset$. Let $\by \coloneqq [y_1, \ldots, y_N]^{\top} \in \{\pm 1\}^{N}$ denote the label vector encoding the community memberships, i.e., $\cV_{+} = \{i\in [N]: y_i >0\}$ and $\cV_{-} = \{i\in [N]: y_i < 0\}$. Assume the access to $\cG$ in practice. The goal is to recover the underlying $\by$ using the observations. Let $\widehat{\by}$ denote some estimator of $\by$. To measure the performance of the above estimator, define the \textit{mismatch} ratio between $\by$ and $\widehat{\by}$ by \begin{align} \psi_N(\by, \widehat{\by}) \coloneqq \frac{1}{N} \min_{s \in \{\pm 1\} } |\{i\in[N]:\by_i\neq s\cdot \widehat{\by}_i\}|. \end{align} For the symmetric case, $|\cV_{+}| = |\cV_{-}| = N/2$, the \emph{random guess} estimation, i.e., determining the node label by flipping a fair coin, would achieve $50\%$ accuracy on average. An estimator is meaningful only if it outperforms the random guess, i.e., $\psi_N(\by, \widehat{\by}) \leq 0.5$. If so, $\widehat{\by}$ is said to accomplish \emph{weak} recovery. See \cite{abbe2018community} for a detailed introduction. In this paper, we aim to address another interesting scenario when all the nodes can be perfectly classified, i.e., $\psi_N = 0$, which leads to the concept of \emph{exact} recovery. \begin{definition}[Exact recovery] The $\widehat{\by}$ is said to achieve the exact recovery (strong consistency) if \begin{align} \lim_{N\to \infty} \P(\psi_N(\by, \widehat{\by}) = 0) = \lim_{N\to \infty} \P(\widehat{\by} = \pm \,\, \by) = 1. \end{align} \end{definition} \subsection{Contextual Stochastic Block Model} It is natural to embrace certain data generation models to study the mathematical limits of algorithm performance. The following model is in particular of our interests. \begin{definition}[Binary Stochastic Block Model, \SBM] Assume $\ones^{\top}\by= 0$, i.e., $|\cV_{+}| = |\cV_{-}| = N/2$. Given $0< \alpha, \beta < 1$, for any pair of node $i$ and $j$, the edge $\{i, j\}\in\cE$ is sampled independently with probability $\alpha$ if $y_i = y_j$, i.e., $\P(A_{ij} = 1) = \alpha$, otherwise $\P(A_{ij} = 1) = \beta$. Furthermore, if $\bA \in \{0, 1\}^{N \times N}$ is symmetric and $A_{ii} = 0$ for each $i\in [N]$, we then write $\bA \sim \SBM(\by, \alpha, \beta)$. \end{definition} For each node $i \in \cV$, there is a feature vector $\bx_i$ attached to it. We are interested in the scenario where $\bx_i$ is sampled from the following \emph{Gaussian Mixture Model}. \begin{definition}[Gaussian Mixture Model, \GMM] Given $N, d\in \N_{+}$, label vector $\by \in \{\pm 1\}^{N}$ and some fixed $\bmu \in \sS^{d-1}$ with $\|\bmu\|_2 = 1$, we write $\{\bx_i\}_{i=1}^{N} \sim \GMM (\bmu, \by, \theta)$ if $\bx_i = \theta y_i \bmu + \bz_i \in \R^d$ for each $i\in [N]$, where $\theta >0$ denote the signal strength, and $\{\bz_i \}_{i=1}^{N}\subset \R^{d}$ are i.i.d. random column vectors sampled from $\Normal (\bzero, \bI_d)$. 
Then by denoting $\bZ \coloneqq [\bz_1,\ldots,\bz_N]^{\sT}$, we re-write $\bX \in \R^{N \times d}$ as \begin{align} \bX \coloneqq [ \bx_1, \bx_2, \ldots, \bx_N]^{\sT} =\theta \by \bmu^{\top} + \bZ.\label{eqn:gauss_mixture} \end{align} \end{definition} In particular, this paper focuses on the scenario where $\cG$ and $\bX$ are generated in the following manner. \begin{definition}[Contextual Stochastic Block Model, \CSBM]\label{def:CSBM} Suppose that $N, d \in \N_{+}$, $0 \leq \alpha, \beta \leq 1$ and $\theta > 0$. We write $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$ if \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item the label vector $\by$ is uniformly sampled from the set $\{\pm 1\}^{N}$, satisfying $\ones^{\sT} \by = 0$; \item independently, $\bmu$ is sampled from uniform distribution over the $\sS^{d -1}\coloneqq \{ \bv \in \R^{d}: \|\bv\|_2 = 1\}$; \item given $\by$, independently, we sample $\bA \sim \SBM (\by, \alpha, \beta)$ and $\bX \sim \GMM (\bmu, \by, \theta)$. \end{enumerate} \end{definition} {\CSBM} was first introduced in \cite{deshpande2018contextual}, where a tight analysis for the inference of latent community structure was provided. The \emph{information-theoretic} thresholds on \emph{exact} recovery \cite{abbe2022lp} and weak recovery \cite{lu2023contextual} were established under the \emph{unsupervised} learning regime, i.e., none of the node labels is revealed. However, the modern learning methods \cite{kipf2017semisupervised} performed on the popular datasets \cite{yang2016revisiting, bojchevski2018deep, shchur2018pitfalls} rely on the model training procedure, i.e., a fraction of node labels are revealed, which is the regime we will focus on. \subsection{Semi-supervised learning on graph} Assume that $n \in [0, N)$ node labels are revealed, denoted by $y_1, \ldots, y_n$ without loss of generality. Let $\sL = \{(\bx_{i}, y_i)\}_{i=1}^{n}$ denote the training samples and $\sU = \{ \bx_j\}_{j = n + 1}^{N}$ denote the set of feature vectors corresponding to the unrevealed nodes. Each vertex $v\in \cV$ is assigned to either $\cV_{\sL}$ or $\cV_{\sU}$ depending on the disclosure of its label, where $n \coloneqq |\cV_{\sL}|$, $m \coloneqq |\cV_{\sU}|$ with $N = n + m$. Let $\tau \coloneqq n/N$ denote the \emph{training ratio}. For simplicity, let $\by_{\sL} \in \{\pm 1\}^{n}$ and $\by_{\sU} \in \{\pm 1\}^{m}$ denote the \emph{revealed} and \emph{unrevealed} label vectors. We further denote $\cV_{\sL,+}=\{i\in\cV_{\sL}:y_i>0\}$, $\cV_{\sL,-}=\{i\in\cV_{\sL}:y_i<0\}$, $\cV_{\sU,+}=\{i\in\cV_{\sU}:y_i>0\}$ and $\cV_{\sU,-}=\{i\in\cV_{\sU}:y_i<0\}$. For instance in Figure~\ref{fig:SBM}, we aim to recover the labels of $\cV_{\sU,+}$ and $\cV_{\sU,-}$ based on known labels in $\cV_{\sL,+}$ and $\cV_{\sL,-}$. Under the \emph{semi-supervised} regime, the graph $\cG$ and feature matrix $\bX$ are generated in the following manner. \begin{definition}[Semisupervised CSBM]\label{def:SemiCSBM} Suppose that $\by_{\sL}$, $\by_{\sU}$ are uniformly sampled from $\{\pm 1\}^{n}$, $\{\pm 1\}^{m}$ respectively, satisfying $\ones^{\sT}_n \by_{\sL} = \ones^{\sT}_m \by_{\sU} = 0$. After concatenating $\by = [\by_{\sL}^{\sT}, \by_{\sU}^{\sT}]^{\sT}$, we have $(\bA, \{\bx_i\}_{i=1}^{N})$ sampled from $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$ in Definition~\ref{def:CSBM}. \end{definition} \begin{remark} It reduces to the unsupervised regime if $n = 0$. 
\end{remark} Let $\cX = \mathrm{span}(\{\bx_i\}_{i=1}^{N})$ denote the \emph{feature space} and $\cY = \{\pm 1\}^{N}$ denote the \emph{label space}. In practice, we are given the graph $\gG = (\cV, \cE)$, the feature vectors $\{\bx_i\}_{i=1}^{N}$, and the revealed labels $\by_{\sL}$. At this stage, finding a predictor $h: \cX \times \cG \times \cY_{\sL} \mapsto \cY_{\sU}$ is our primary interest. Let $\widehat{\by}_{\sU}$ denote some estimator of $\by_{\sU}$. The \textit{mismatch} ratio under the semi-supervised regime can be rewritten as \begin{align} \psi_m(\by_{\sU}, \widehat{\by}_{\sU}) = \frac{1}{m} \min_{s \in \{\pm 1\} } |\{i\in[m]:(\by_{\sU})_i\neq s(\widehat{\by}_{\sU})_i\}|. \end{align} \begin{figure} \centering \begin{minipage}[h]{0.4\linewidth} \centering {\includegraphics[width=1\textwidth]{fig/SBM2.png}} \end{minipage} \caption{An example of SBM under semi-supervised learning. Red: $\cV_{\sL,+}$; blue: $\cV_{\sL,-}$; yellow: $\cV_{\sU,+}$; orange: $\cV_{\sU,-}$.} \label{fig:SBM} \end{figure} \subsection{Graph-based transductive learning} To efficiently represent the training and test data, we define the following two \emph{sketching} matrices. \begin{definition}\label{def:sketching} Define $\bS_{\sL} \in \{0, 1\}^{n \times N}$, $\bS_{\sU} \in \{0, 1\}^{m \times N}$ by \begin{align} (\bS_{\sL})_{ij} \coloneqq &\, \indi\{i = j\}\cdot \indi \{i \in \cV_{\sL}\},\\ (\bS_{\sU})_{ij} \coloneqq &\, \indi\{i = j\}\cdot \indi \{i \in \cV_{\sU}\}. \end{align} Immediately, $\by_{\sL} = \bS_{\sL}\by$, $\by_{\sU} = \bS_{\sU} \by$. Define $\bX_{\sL} \coloneqq \bS_{\sL} \bX$, $\bX_{\sU} \coloneqq \bS_{\sU} \bX$, so that $\bX = [\bX_{\sL}^{\sT}, \bX_{\sU}^{\sT}]^{\sT}$. The adjacency matrix $\bA \in \R^{N \times N}$ admits the following block form \begin{align} \bA = \begin{bmatrix} \bA_{\sL} & \bA_{\sL \sU}\\ \bA_{\sU\sL} & \bA_{\sU} \end{bmatrix} \coloneqq \begin{bmatrix} \bS_{\sL} \bA \bS_{\sL}^{\sT} & \bS_{\sL} \bA \bS_{\sU}^{\sT}\\ \bS_{\sU} \bA \bS_{\sL}^{\sT} & \bS_{\sU} \bA \bS_{\sU}^{\sT}\label{eqn:decomposion_A} \end{bmatrix}. \end{align} \end{definition} In \textit{inductive} learning, algorithms are unaware of the test nodes during the learning stage, i.e., only $\bA_{\sL}$ and $\bX$ are used for training. The test graph $\bA_{\sU}$ is disjoint from $\bA_{\sL}$ and entirely unseen by the algorithm during the training procedure, since $\bA_{\sL \sU}$ is not used either. Notably, this waste of information reduces the estimator's accuracy. In contrast, under \textit{transductive} learning the entire graph $\bA$ is used for training, and the estimator benefits from the message-passing mechanism between seen and unseen nodes. \section{Main results}\label{sec:main_results_CSBM} To state our main results, we start with several basic assumptions. Recall that $\tau \coloneqq n/N$ denotes the \emph{training ratio}, where $\tau \in (0, 1)$ is some fixed constant. \begin{assumption}[Asymptotics]\label{ass:asymptotics} Let $q_m$ be some function of $m$ with $q_m \to \infty$ as $m \to \infty$. For $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$ in Definition~\ref{def:CSBM}, we assume $\alpha = a \cdot q_m /m$ and $\beta = b \cdot q_m /m$, for some constants $a\neq b\in \R^+$, and that \begin{align} c_\tau \coloneqq \theta^4 [q_m (\theta^2 + (1-\tau)d/m)]^{-1},\label{eqn:ctau} \end{align} is a fixed positive constant as $m \to \infty$. Furthermore, we fix $n/N =\tau\in (0,1)$ as $N,n,m\to\infty$.
\end{assumption} For instance, $\tau =0.2$, $\alpha=0.3$ and $\beta=0.05$ in Figure~\ref{fig:SBM}. For $a, b \in \R^{+}$, denote $a_{\tau} = (1 - \tau)^{-1}a$, $b_{\tau} =(1 - \tau)^{-1}b$. Define the following \emph{rate function} \begin{align} I(a_\tau,b_\tau,c_\tau) \coloneqq [(\sqrt{a_\tau} - \sqrt{b_\tau})^2 + c_\tau]/2 \label{eqn:rate_I_abc_tau}, \end{align} which will appear in our large deviation analysis. \subsection{Information-theoretic limits}\label{sec:ITLowerBoundsCSBM} Recall that $\by = [\by_{\sL}^{\sT}, \by_{\sU}^{\sT}]^{\sT}$ and $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$. We first present the necessary condition for any estimator $\widehat{\by}_{\sU}$ to reconstruct $\by_{\sU}$ exactly. \begin{theorem}[Impossibility]\label{thm:impossibility_CSBM} Under \Cref{ass:asymptotics} with $q_m = \log(m)$, as $m \to \infty$, every algorithm will misclassify at least $2$ vertices with probability tending to $1$ if $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$. \end{theorem} We briefly sketch the proof idea. For the node classification problem, the best estimator is the Maximum Likelihood Estimator (MLE). If MLE fails exact recovery, then no other algorithm can achieve exact recovery. When $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$, we can prove that, with high probability, MLE will not return the true label vector $\by_{\sU}$ but some other configuration $\widetilde{\by}_{\sU} \neq \by_{\sU}$ instead, which leads to the failure of exact recovery. Similar ideas appeared in \cite{abbe2015exact, kim2018stochastic, wang2023strong}. On the other hand, the following result concerns the fundamental limits of any algorithm. \begin{theorem}\label{thm:ITlowerbounds_CSBM} Under \Cref{ass:asymptotics} with $q_{m} \gg 1$, any sequence of estimators $\widehat{\by}_{\sU}$ satisfies \begin{align} \liminf_{m \to \infty} q^{-1}_m \log \E \psi_m(\by_{\sU}, \widehat{\by}_{\sU}) \geq - I(a_{\tau}, b_{\tau}, c_{\tau}). \end{align} \end{theorem} Informally, the result of \Cref{thm:ITlowerbounds_CSBM} can be interpreted as $\E \psi_m(\by_{\sU}, \widehat{\by}_{\sU}) \geq e^{-I(a_{\tau}, b_{\tau}, c_{\tau})q_m}$, which gives a lower bound on the expected mismatch ratio for any estimator $\widehat{\by}_{\sU}$. The rate function $I(a_\tau,b_\tau,c_\tau)$ in \eqref{eqn:rate_I_abc_tau} is derived from a \emph{large deviation principle} (LDP) analysis for $\bA$ and $\bX$, with details deferred to \Cref{lem:WmuLDP}. \subsection{Optimal spectral estimator}\label{sec:spectral} \subsubsection{The construction of spectral estimators} Define the \emph{hollowed Gram} matrix $\bG = \cH(\bX \bX^{\top}) \in \R^{N \times N}$ by $G_{ij} = \<\bx_i, \bx_j\>\indi_{\{i \neq j\}}$. Similarly, $\bG$ admits the block form in \eqref{eqn:decomposion_A}. Let $\lambda_i(\bA)$, $\lambda_i(\bA_{\sU})$ (resp. $\lambda_i(\bG)$, $\lambda_i(\bG_{\sU})$) denote the $i$-th largest eigenvalue of $\bA$, $\bA_{\sU}$ (resp. $\bG$, $\bG_{\sU}$), and let $\bu_i(\bA_{\sU})$, $\bu_i(\bG_{\sU})$ denote the corresponding unit eigenvectors. Define the index $\ell^{\star} = 2 \cdot \indi\{ a > b\} + m \cdot \indi\{ a < b\}$ and the ratio \begin{align} \widehat{\kappa}_{\ell^{\star}} = \log\Big( \frac{\lambda_1(\bA_{\sU}) + \lambda_{\ell^{\star}}(\bA_{\sU})}{\lambda_1(\bA_{\sU}) - \lambda_{\ell^{\star}}(\bA_{\sU})} \Big). \label{eqn:hatkappa_lstar} \end{align} The index $\ell^{\star}$ is used to distinguish between homophilic $(a > b)$ and heterophilic $(a < b)$ graphs.
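The quantities above are straightforward to compute numerically. The following minimal NumPy sketch is our own illustration rather than the simulation code used for the figures; the inputs \texttt{A\_U} and \texttt{X} and the flag \texttt{a\_gt\_b} (indicating whether $a>b$) are assumed to be supplied by the user. It builds the hollowed Gram matrix $\bG$ and evaluates $\ell^{\star}$ and $\widehat{\kappa}_{\ell^{\star}}$ from the spectrum of $\bA_{\sU}$.
\begin{verbatim}
import numpy as np

def hollowed_gram(X):
    # G_{ij} = <x_i, x_j> for i != j, with a zeroed diagonal.
    G = X @ X.T
    np.fill_diagonal(G, 0.0)
    return G

def kappa_hat(A_U, a_gt_b=True):
    # Eigenvalues of the symmetric block A_U, sorted in decreasing order.
    m = A_U.shape[0]
    evals = np.sort(np.linalg.eigvalsh(A_U))[::-1]
    ell = 2 if a_gt_b else m            # the index ell^* depends on the homophily side
    lam1, lam_ell = evals[0], evals[ell - 1]
    return np.log((lam1 + lam_ell) / (lam1 - lam_ell))
\end{verbatim}
In the heterophilic case $a<b$, the relevant eigenvalue $\lambda_{\ell^{\star}}(\bA_{\sU})$ is the smallest one, which the choice $\ell^{\star}=m$ selects; its negative sign makes $\widehat{\kappa}_{\ell^{\star}}$ negative, consistent with $\log(a/b)<0$.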
We then define \begin{subequations} \begin{align} \widehat{\by}_{\mathrm{SBM}} &\, \coloneqq \widehat{\kappa}_{\ell^{\star}} \big(\frac{1}{\sqrt{m}} \bA_{\sU \sL} \by_{\sL} + \lambda_{\ell^{\star}}(\bA_{\sU}) \bu_{\ell^{\star}}(\bA_{\sU}) \big) \label{eqn:yhatSBM} \\ \widehat{\by}_{\mathrm{GMM}} &\, \coloneqq \frac{2\lambda_1 (\bG)}{N\lambda_1(\bG) + dN} \Big( \frac{\bG_{\sU\sL} \by_{\sL}}{\sqrt{m}} + \lambda_1 (\bG_{\sU})\bu_1(\bG_{\sU}) \Big). \label{eqn:yhatGMM} \end{align} \end{subequations} It is natural to discard the graph estimator when $a = b$, which is reflected by $\widehat{\kappa}_{\ell^{\star}} = 0$, since no algorithm can outperform the random guess on an Erd\H{o}s-R\'{e}nyi graph. Consequently, the ideal estimator, inspired by \emph{semi-supervised} \emph{principal component analysis}, is given by $\sign(\widehat{\by}_{\mathrm{PCA}})$, where \begin{equation} \widehat{\by}_{\mathrm{PCA}} \coloneqq \widehat{\by}_{\mathrm{SBM}} + \widehat{\by}_{\mathrm{GMM}}. \label{eqn:pcaEstimator} \end{equation} Pseudocode of the spectral algorithm is given below. \begin{algorithm} \caption{Partition via spectral estimator} \label{alg:PCA} \begin{algorithmic} \Require $\bA$, $\bX$, $\by_{\sL}$. \State{Compute the hollowed Gram matrix $\bG$.} \State{Construct $\widehat{\by}_{\mathrm{SBM}}$ and $\widehat{\by}_{\mathrm{GMM}}$ defined in \eqref{eqn:yhatSBM} and \eqref{eqn:yhatGMM} respectively. } \State{Construct $\widehat{\by}_{\mathrm{PCA}}$ as in \eqref{eqn:pcaEstimator}. } \Ensure $\widehat{\cV}_{\sU, +} \coloneqq \{i \in \cV_{\sU}: (\widehat{\by}_{\mathrm{PCA}})_{i} > 0\}$ and $\widehat{\cV}_{\sU, -} \coloneqq \{i \in \cV_{\sU}: (\widehat{\by}_{\mathrm{PCA}})_{i} < 0\}$. \end{algorithmic} \end{algorithm} \subsubsection{The regime $q_m \gtrsim \log(m)$} \Cref{thm:impossibility_CSBM} and \Cref{thm:achievability_CSBM} (a) establish the sharp threshold for exact recovery, i.e., $I(a_{\tau}, b_{\tau}, c_{\tau}) = 1$, as verified by the numerical simulations in Figures \ref{fig:pca_exact_c50} and \ref{fig:pca_optimal_c50}. \begin{theorem}\label{thm:achievability_CSBM} Let \Cref{ass:asymptotics} hold and $q_m \gtrsim \log(m)$. \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item (Exact). When $I_{\tau} = I(a_{\tau}, b_{\tau}, c_{\tau}) > 1$, $\widehat{\by}_{\mathrm{PCA}}$ achieves exact recovery with probability at least $1 - m^{1 - I_{\tau}}$. \item (Optimal). When $I_{\tau} = I(a_{\tau}, b_{\tau}, c_{\tau}) \leq 1$, it follows that $$ \limsup_{m \to \infty} q^{-1}_m \log \E \psi_m(\by_{\sU}, \widehat{\by}_{\mathrm{PCA}}) \leq - I_{\tau}. $$ \end{enumerate} \end{theorem} Informally, the second part of \Cref{thm:achievability_CSBM} can be understood as $\E \psi_m(\by_{\sU}, \widehat{\by}_{\mathrm{PCA}}) \leq e^{-I_{\tau} \cdot q_m}$, which establishes an upper bound on the expected mismatch ratio that matches the lower bound in \Cref{thm:ITlowerbounds_CSBM}. In that sense, even though exact recovery is impossible when $I(a_{\tau}, b_{\tau}, c_{\tau}) \leq 1$ by \Cref{thm:impossibility_CSBM}, the estimator $\widehat{\by}_{\mathrm{PCA}}$ in \eqref{eqn:pcaEstimator} attains the lowest possible error rate when $q_m \gtrsim \log(m)$.
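For concreteness, the sketch below assembles Algorithm~\ref{alg:PCA} in NumPy. It is a hedged illustration in our notation rather than the experiment code: the helpers \texttt{hollowed\_gram} and \texttt{kappa\_hat} are the ones sketched earlier, \texttt{y\_L} is a $\pm 1$ vector, and the labeled vertices are assumed to come first.
\begin{verbatim}
import numpy as np

def kth_eigpair(M, k=1):
    # k-th largest eigenvalue and eigenvector of a symmetric matrix.
    evals, evecs = np.linalg.eigh(M)
    return evals[-k], evecs[:, -k]

def pca_estimator(A, X, y_L, a_gt_b=True):
    n = len(y_L); N = A.shape[0]; m = N - n; d = X.shape[1]
    A_U, A_UL = A[n:, n:], A[n:, :n]
    G = hollowed_gram(X)
    G_U, G_UL = G[n:, n:], G[n:, :n]

    # Graph part, eq. (yhatSBM).
    ell = 2 if a_gt_b else m
    lam_ell, u_ell = kth_eigpair(A_U, k=ell)
    y_sbm = kappa_hat(A_U, a_gt_b) * (A_UL @ y_L / np.sqrt(m) + lam_ell * u_ell)

    # Feature part, eq. (yhatGMM).
    lam1_G, _ = kth_eigpair(G)
    lam1_GU, u1_GU = kth_eigpair(G_U)
    y_gmm = 2 * lam1_G / (N * lam1_G + d * N) * (G_UL @ y_L / np.sqrt(m) + lam1_GU * u1_GU)

    return np.sign(y_sbm + y_gmm)        # predicted labels on V_U
\end{verbatim}
One practical caveat is that numerical eigensolvers return eigenvectors only up to a global sign; the theoretical statements implicitly fix this sign, and the mismatch ratio in any case minimizes over a global label flip.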
\begin{figure*}[h] \centering \begin{minipage}[t]{0.49\linewidth} \centering \subcaptionbox{$c_{\tau} = 0.5$.} {\includegraphics[width=1.18\textwidth]{fig/pca_tau_25_N_800_c_50_test_exact.png}} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centering \subcaptionbox{$c_{\tau} = 1.5$.} {\includegraphics[width=0.95\textwidth]{fig/pca_tau_25_N_800_c_150_test_exact.png}} \end{minipage} \caption{ \small{Performance of $\widehat{\by}_{\mathrm{PCA}}$ in \eqref{eqn:pcaEstimator}: fix $N = 800$, $\tau = 0.25$ and vary $a$ ($y$-axis) and $b$ ($x$-axis) from $1$ to $10.5$. For each parameter configuration $(a_{\tau}, b_{\tau}, c_{\tau})$, we compute the frequency of exact recovery over $20$ independent runs. Light color represents a high chance of success. The phase transition occurs at the red curve $I(a_{\tau}, b_{\tau}, c_{\tau}) = 1$, as proved by Theorems \ref{thm:impossibility_CSBM} and \ref{thm:achievability_CSBM}.}} \label{fig:pca_exact_c50} \end{figure*} \subsubsection{The regime $1 \ll q_m \ll \log(m)$} When the graph becomes even sparser, so that the expected degree of each vertex grows to infinity more slowly than $\log(m)$, the previous estimator $\widehat{\by}_{\mathrm{PCA}}$ in \eqref{eqn:pcaEstimator} is no longer valid. There are two main issues. First, $\widehat{\kappa}_{\ell^{\star}}$ was designed to estimate $\log(a_{\tau}/b_{\tau})$, but it no longer converges to $\log(a_{\tau}/b_{\tau})$ when $1 \ll q_m \ll \log(m)$, since $\lambda_{1}(\bA_{\sU})$ and $\lambda_{\ell^{\star}}(\bA_{\sU})$ no longer concentrate around $\frac{\alpha + \beta}{2}$ and $\frac{\alpha - \beta}{2}$ \cite{feige2005spectral}. To remedy this, we instead rely on the quadratic forms $\ones^{\sT} \bA_{\sL}\ones$ and $ \by_{\sL}^{\sT}\bA_{\sL}\by_{\sL}$, which still enjoy good concentration properties. Formally, we use the following $\widetilde{\kappa}_{\ell^{\star}}$ instead \begin{align} \widetilde{\kappa}_{\ell^{\star}} \coloneqq \log\Big( \frac{ \ones^{\sT} \bA_{\sL}\ones + \by_{\sL}^{\sT}\bA_{\sL}\by_{\sL} }{ \ones^{\sT} \bA_{\sL}\ones - \by_{\sL}^{\sT}\bA_{\sL}\by_{\sL} } \Big). \end{align} The second issue is that the entrywise eigenvector analysis of $\bu_{2}(\bA_{\sU})$ breaks down due to the lack of concentration. To overcome this, we let $\widehat{\by}_{\mathrm{G}} = \sign(\widehat{\by}_{\mathrm{GMM}})$. Noting that $\bA_{\sU}\widehat{\by}_{\mathrm{G}}$ is close to $\sqrt{m}\lambda_{\ell^{\star}}(\bA_{\sU}) \bu_{\ell^{\star}}(\bA_{\sU})$, the new graph estimator is defined through \begin{align} \widetilde{\by}_{\mathrm{SBM}} &\, \coloneqq \widetilde{\kappa}_{\ell^{\star}} \big(\bA_{\sU \sL} \by_{\sL} + \bA_{\sU} \widehat{\by}_{\mathrm{G}}\big)/\sqrt{m}. \label{eqn:ytildeSBM} \end{align} Combining the above reasoning, the estimator for the general case is given by $\sign(\widetilde{\by}_{\mathrm{PCA}})$, where \begin{align} \widetilde{\by}_{\mathrm{PCA}} = \widetilde{\by}_{\mathrm{SBM}} + \widehat{\by}_{\mathrm{GMM}}.\label{eqn:tildepcaEstimator} \end{align} The following result shows that $\widetilde{\by}_{\mathrm{PCA}}$ achieves the lowest possible expected error rate when $1 \ll q_{m} \ll \log(m)$. \begin{theorem}\label{thm:general_pcaEstimator} Let \Cref{ass:asymptotics} hold. Then \begin{align} \limsup_{m \to \infty} q^{-1}_m \log \E \psi_m \big(\by_{\sU}, \sign(\widetilde{\by}_{\mathrm{PCA}}) \big) \leq - I(a_{\tau}, b_{\tau}, c_{\tau}).
\end{align} \end{theorem} \begin{figure}[h] \centering \begin{minipage}[t]{0.5\linewidth} \centering {\includegraphics[width=1\textwidth]{fig/pca_compareN_tau_25_b_500_c_50.png}} \end{minipage} \caption{ \small{The $y$-axis is $q_m^{-1}\log(\E \psi_m)$, the average mismatch ratio on the logarithmic scale. The $x$-axis is $a$, varying from $0$ to $10.5$. Fix $b = 5$, $\tau = 0.25$, $c_{\tau} = 0.5$. The red curve is $-I(a_{\tau}, b_{\tau}, c_{\tau})$, the lower bound predicted by Theorem \ref{thm:ITlowerbounds_CSBM}. The experiments over different $N$ show that $\widehat{\by}_{\mathrm{PCA}}$ achieves the information-theoretic limits, as proved in Theorems \ref{thm:achievability_CSBM} and \ref{thm:general_pcaEstimator}. }} \label{fig:pca_optimal_c50} \vspace{-3mm} \end{figure} \subsubsection{Comparison with the unsupervised regime} When only the sub-graph $\cG_{\sU} = (\cV_{\sU}, \cE_{\sU})$ is observed, the task becomes unsupervised learning on $\cG_{\sU}$, where the data are equivalently sampled from $(\bA_{\sU}, \{\bx_i\}_{i=1}^{m}) \sim \CSBM (\by_{\sU}, \bmu, \alpha, \beta, \theta)$ with $\alpha = a q_m/m$ and $\beta = b q_m/m$. The rate function can be obtained by simply taking $\tau = 0$ with $a_0 = a$, $b_0=b$, and $c_0 = q_m^{-1} (\theta^2 + d/m)^{-1} \theta^4$, aligning with the result in \cite{abbe2022lp}. The difference between the two boundaries $I(a_\tau,b_\tau,c_\tau) = 1$ (\textcolor{red}{red}) and $I(a_0,b_0,c_0) = 1$ (\textcolor{blue}{blue}) is presented in \Cref{fig:pca_exact_c50}. A crucial observation is that the extra information from $\by_{\sL}$, $\bX_{\sL}$, $\bA_{\sL}$ and $\bA_{\sU\sL}$ shrinks the boundary for exact recovery, making the task easier than in the unsupervised regime. \subsection{Performance of ridge regression on linear GCN}\label{sec:ridge} In this section, for $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$, we analyze how the parameters $a,b,c_{\tau}$ and $\tau$ defined in Assumption~\ref{ass:asymptotics} affect the learning performance of \textit{linear} graph convolutional networks. We consider a graph convolutional kernel $h(\bX) \in\R^{N\times d}$, which is a function of the data matrix $\bX$ and the adjacency matrix $\bA$ sampled from $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$. We add self-loops and define the new adjacency matrix $\bA_{\rho} \coloneqq \bA + \rho \bI_{N}$, where $\rho \in \R$ represents the intensity of self-connections in the graph. Let $\bD_{\rho}$ be the diagonal matrix whose diagonal entries all equal the average degree of $\bA_{\rho}$, i.e., $[\bD_{\rho}]_{ii} = \frac{1}{N}\sum_{k=1}^{N}\sum_{j=1}^N(\bA_{\rho})_{kj}$ for each $i \in [N]$. For the linear graph convolutional layer, we consider the following normalization: \begin{align}\label{eq:h(X)} h(\bX)=\frac{1}{\sqrt{Nq_m}} \bD^{-1}_{\rho}\bA_{\rho}\bX. \end{align} Denote $\bD:=\bD_0$, indicating that no self-loop is added, and let $D_0$ be the first diagonal entry of $\bD$. We study linear ridge regression on $h(\bX)$. Compared with the general GCN defined in \cite{kipf2017semisupervised}, here we simplify the graph convolutional layer by replacing the degree matrix of $\bA$ with the average degree over all vertices. In this case, we can directly employ $ \widetilde{h}(\bX)=\frac{1}{\widetilde d \cdot\sqrt{Nq_m}} \bA_{\rho}\bX $ to approximate $h(\bX)$, where $\widetilde d$ is the expected average degree defined in \eqref{eq:tilde_d}.
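As a concrete reference point, the following sketch is our own illustration (not the experiment code) of the convolution $h(\bX)$ in \eqref{eq:h(X)} combined with ridge regression on the revealed labels; the regularization level \texttt{lam}, the self-loop weight \texttt{rho}, and the convention that labeled vertices come first are assumptions. It mirrors the closed-form linear ridge regression estimator analyzed in the remainder of this subsection.
\begin{verbatim}
import numpy as np

def linear_gcn_ridge(A, X, y_L, q_m, rho=0.0, lam=1e-2):
    """Ridge regression on the averaged-degree graph convolution h(X)."""
    N, d = X.shape
    n = len(y_L)                      # labeled vertices come first (w.l.o.g.)
    A_rho = A + rho * np.eye(N)       # add weighted self-loops
    avg_deg = A_rho.sum() / N         # average degree of A_rho
    h = (A_rho @ X) / (avg_deg * np.sqrt(N * q_m))

    # Closed-form ridge solution restricted to the labeled rows of h(X).
    h_L = h[:n]
    beta = np.linalg.solve(h_L.T @ h_L + lam * np.eye(d), h_L.T @ y_L)

    # Predicted labels on the unrevealed vertices.
    return np.sign(h[n:] @ beta)
\end{verbatim}
Setting \texttt{rho} to the optimally weighted self-loop derived below corresponds to the choice analyzed in Theorem~\ref{thm:exact_linear}(b).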
Notice that for a sparse graph $\bA$ under Assumption~\ref{ass:asymptotics}, the concentration of individual vertex degrees is \textit{not} guaranteed, in contrast to the setting of \cite{baranwal2021graph,baranwal2022effects}. We now consider transductive learning on CSBM, following the ideas of \cite{baranwal2021graph,shi2024homophily}. Recall that the vertex set $\cV$ is split into two disjoint sets $\cV_{\sL}$ and $\cV_{\sU}$, where $n = |\cV_{\sL}|$, $m = |\cV_{\sU}|$ and $N = n + m$, and that the training ratio $\tau = \frac{n}{N}$ is fixed as $N\to\infty$. From Definition~\ref{def:sketching}, we know that $\bS_{\sL} \in \{0, 1\}^{n \times N}$, $\bS_{\sU} \in \{0, 1\}^{m \times N}$, $\bS_{\sL} \bX = \bX_{\sL} \in \R^{n\times d}$, and $\bS_{\sU} \bX = \bX_{\sU} \in \R^{m \times d}$. Then, the empirical loss of linear ridge regression (LRR) on $h(\bX)$ can be written as \begin{align} L(\bbeta) = \frac{1}{n}\|\bS_{\sL} (h(\bX) \bbeta - \by ) \|_2^2 + \frac{\lambda}{n} \|\bbeta\|_2^2, \end{align} for any $\lambda>0$, whose solution is \begin{align} \widehat{\bbeta} =~& \underset{\bbeta \in \R^d}{\arg \min} \, L(\bbeta)\\ = ~& (h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}h(\bX)^{\top} \bP_{\sL}\by,\label{eq:regression_solu} \end{align} where $\bP_\sL =\bS_{\sL}^\top\bS_\sL\in \R^{N\times N}$ is a diagonal matrix. Similarly, define $\bP_\sU =\bS_{\sU}^\top\bS_\sU\in \R^{N\times N}$. Then the estimator of this linear ridge regression for $\lambda>0$ is \begin{equation}\label{eq:regression_solu_y} \widehat\by_{\mathrm{LRR}}=\bS_\sU h(\bX)(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}h(\bX)^{\top} \bP_{\sL}\by. \end{equation} In the following, we analyze the misclassification rate $\psi_m(\by_\sU,\widehat\by_{\mathrm{LRR}})$ and the associated test and training risks in mean squared error (MSE), defined by \begin{align} \cR(\lambda) \coloneqq~& \frac{1}{m}\|\bS_{\sU}(h(\bX)\widehat\bbeta - \by)\|_2^2\label{eq:test}\\ \cE(\lambda) \coloneqq~& \frac{1}{n}\|\bS_{\sL}(h(\bX)\widehat\bbeta - \by)\|_2^2.\label{eq:train} \end{align} Notice that \citet{shi2024homophily} also studied the asymptotic test and training risks for CSBM on linear GCNs, but for a sparser graph $\bA$ with constant average degree; they utilized statistical-physics methods together with a Gaussian equivalence conjecture to compute the asymptotic risks. Below, we provide detailed statements for the exact recovery thresholds of $\widehat\by_{\mathrm{LRR}}$. \begin{figure*}[t!] \centering \begin{minipage}[t]{0.49\linewidth} \centering \subcaptionbox{Without self-loop.} {\includegraphics[width=\textwidth]{fig/ridge_graph_tau_25_N_800_rho_0_c_50_test_exact.png}} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centering \subcaptionbox{With optimal self-loop.} {\includegraphics[width=\textwidth]{fig/ridge_graph_tau_25_N_800_rho_1_c_50_test_exact.png}} \end{minipage} \caption{ \small{Performance of $\widehat{\by}_{\mathrm{LRR}}$ in \eqref{eq:regression_solu_y}. Fix $N = 800$, $\tau = 0.25$, $c_{\tau} = 0.5$. We compute the frequency of exact recovery over $20$ independent runs.
When $I(a_{\tau}, b_{\tau}, c_{\tau}) > 1$, $\widehat{\by}_{\mathrm{LRR}}$ achieves exact recovery, as proved in Theorem \ref{thm:exact_linear} (a) and (b).}} \label{fig:lrr_exact_c50} \end{figure*} \begin{theorem}[Exact recovery for graph convolution linear ridge regression]\label{thm:exact_linear} Consider the ridge regression on the linear graph convolution $h(\bX)$ defined in \eqref{eq:h(X)}, with estimator $\widehat\by_{\mathrm{LRR}}$ in \eqref{eq:regression_solu_y}. Assume that $\rho\lesssim q_m$, $\theta^2 = (1 + o(1)) c_{\tau} q_m$ and $q_m\lesssim d\lesssim \sqrt{Nq_m}$. Then, under Assumption~\ref{ass:asymptotics}, the following hold. \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item When $\rho=0$, then $\P(\psi_m(\by_\sU,\sign(\widehat\by_{\mathrm{LRR}}))=0)\to 1$ as long as $I(a_\tau,b_\tau, 0 )>1$. \item When \begin{equation}\label{eq:optimal_rho} \rho = \frac{2c_{\tau}}{\log(a_{\tau}/b_{\tau})} q_m, \end{equation} then $\P(\psi_m(\by_\sU,\sign(\widehat\by_{\mathrm{LRR}}))=0)\to 1$ as long as $I(a_\tau,b_\tau,c_\tau)>1$. \item When $\rho = s q_m$ for some constant $s\in\R$, then $$\P(\psi_m(\by_\sU,\sign(\widehat\by_{\mathrm{LRR}}))=0)\to 1$$ when $J(a_\tau, b_\tau, c_\tau, \zeta, s )>1$, as $m\to\infty,$ where $\zeta:=\frac{\kappa \tau }{\kappa^2\tau+\lambda}$ and $\kappa:=\sqrt{c_\tau}\cdot\frac{ a_\tau-b_\tau+2s }{a_\tau+b_\tau+2s}$ for $\lambda>0$. Here the rate function $J(a_\tau, b_\tau, c_\tau, \zeta, s )$ is defined in Lemma~\ref{lem:rate_fun}. Additionally, we know that the exact recovery region $\{(a_\tau,b_\tau,c_\tau):J(a_\tau, b_\tau, c_\tau, \zeta, s )>1\}$ is a subset of the optimal region $\{(a_\tau,b_\tau,c_\tau):I(a_\tau, b_\tau, c_\tau )>1\}.$ \end{enumerate} \end{theorem} Consequently, $\rho = \frac{2c_{\tau}}{\log(a/b)} q_m$ is the \textit{optimal} self-loop weight for attaining exact recovery of the labels $\by_\sU$ in this semi-supervised learning problem with linear ridge regression on $h(\bX)$: with this choice, the exact recovery threshold of $\widehat\by_{\mathrm{LRR}}$ matches the information-theoretic lower bound in Theorem~\ref{thm:ITlowerbounds_CSBM}, i.e., below this threshold, no algorithm can perfectly recover all the unknown labels in $\cV_\sU$. \begin{theorem}[Asymptotic training and test errors]\label{thm:error} Consider $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$. Suppose that $\rho/q_m\to s\in\R$ and $d\lesssim N$. Under Assumption~\ref{ass:asymptotics}, the training and test errors for linear ridge regression on $h(\bX)$ defined by \eqref{eq:regression_solu} satisfy the following asymptotics. For any fixed $\lambda>0,$ both the training and test errors in MSE, defined in \eqref{eq:train} and \eqref{eq:test}, satisfy \begin{align} \cE(\lambda) \text{ and }\cR(\lambda)\to~& \frac{\lambda^2}{(\kappa^2\tau+\lambda)^2}, \end{align} almost surely, as $m,N\to\infty$, where $\kappa$ is defined in Theorem~\ref{thm:exact_linear} (c). \end{theorem} \begin{figure}[h] \centering \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width=1\textwidth]{fig/ridge_graph_tau_25_N_800_compare_rho_c_50_b_40_test_acc.png}} \end{minipage} \caption{\small{The $y$-axis is $\E \psi_m$, the average mismatch ratio over $20$ independent runs. The $x$-axis is $a$, varying from $0$ to $10.5$. Fix $b = 4$, $\tau = 0.25$, $c_{\tau} = 0.5$, $N = 400$. The red curve is $m^{-I(a_{\tau}, b_{\tau}, c_{\tau})}$, the lower bound predicted by Theorem \ref{thm:ITlowerbounds_CSBM} with $q_m = \log(m)$.
This experiment shows that $\widehat{\by}_{\mathrm{LRR}}$ achieves a lower mismatch ratio when the self-loop is added, in the region $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$ where exact recovery is impossible.}}\label{fig:lrr_optimal_c50} \end{figure} \subsection{Performance of GCN with gradient-based training}\label{sec:NN} In this section, we study the feature learning of a GCN on $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$ with $n$ known labels and $m$ unknown labels to be classified, focusing on gradient-based training processes. Section~\ref{sec:ridge} indicates that the self-connection (or self-loop) weight $\rho$ plays an important role in exact recovery on the test vertices. It turns out that the self-loop weight must be set to the optimal $\rho$ in \eqref{eq:optimal_rho} to ensure that the exact recovery threshold matches the IT bound for graph learning studied in Section~\ref{sec:ITLowerBoundsCSBM}. A similar idea is also mentioned in \cite{kipf2017semisupervised}, although granting self-connections and edges to neighboring nodes equal status may not be a good assumption for a general graph dataset. Hence, we propose a modified training process for GCNs: learning on graphs should, in general, also include learning the optimal self-loop weight, i.e., the parameter $\rho$ in the graph $\bA_\rho=\bA+\rho\bI_N$ should be viewed as a trainable parameter. Although the optimal $\rho$ of Section~\ref{sec:ridge} for semi-supervised learning on CSBM follows from an LDP analysis (see Appendix~\ref{sec:LDP_ridge}), one can generally apply a spectral method to estimate the oracle $\rho$ in \eqref{eq:optimal_rho}. We denote \[\bA_s=\bA+s q_m\bI_N,\quad \bD_s = sq_m\bI_N+\bD,\] where $\bD$ is the diagonal matrix whose diagonal entries all equal the average degree. In this section, we view $s\in\R$ as a trainable parameter. Let us consider a general two-layer graph convolutional neural network defined by \begin{equation}\label{eq:NN} f(\bX):=\frac{1}{\sqrt{K}}\sigma(\bD_s^{-1}\bA_s\bX\bW)\ba, \end{equation} where the first-layer weight matrix is $\bW\in\R^{d\times K}$ and the second-layer weight vector is $\ba\in\R^K$ for some $K\in\N$. Here, $\bW$ and $s$ are the trainable parameters of this GCN. We aim to train this neural network with the training labels $\by_\sL$ to predict the labels of the vertices in $\cV_\sU$. Notice that when training $\bW$, we want $\bW$ to learn (align with) the correct feature $\bmu$ in the dataset. As studied in \cite{baranwal2021graph}, for CSBM above a sufficiently large threshold the node features are linearly separable, hence there is no need to introduce a nonlinear convolution layer in \eqref{eq:NN}; we therefore take $\sigma(x)=x$ below. In practice, nonlinearity may not be important for node classification in certain graph learning problems \cite{wu2019simplifying,he2020lightgcn}. We train this GCN in two steps. First, we train $\bW$ with one large gradient descent step on the training labels; by choosing a suitable learning rate, we allow $\bW$ to learn the feature $\bmu$. Let us define the MSE loss by \begin{equation}\label{eq:loss} \cL(\bW,s)= \frac{1}{2n}(f(\bX)-\by)^\top\bP_\sL(f(\bX)-\by). \end{equation} The analysis of GD with a large learning rate achieving feature learning is analogous to \cite{ba2022high,damian2022neural}; we extend this analysis to one-layer GCNs.
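A minimal sketch of the forward map \eqref{eq:NN} and the loss \eqref{eq:loss} is given below. It is our own illustration rather than the training code used in the experiments; the identity activation, the scalar form of $\bD_s$, and the index set \texttt{labeled\_idx} of revealed vertices are assumptions consistent with the discussion above.
\begin{verbatim}
import numpy as np

def gcn_forward(A, X, W, a, s, q_m, sigma=lambda z: z):
    """Two-layer GCN f(X) = sigma(D_s^{-1} A_s X W) a / sqrt(K)."""
    N = A.shape[0]
    K = W.shape[1]
    avg_deg = A.sum() / N                 # average degree of A
    A_s = A + s * q_m * np.eye(N)         # graph with trainable self-loop weight s
    D_s_inv = 1.0 / (s * q_m + avg_deg)   # D_s is a multiple of the identity here
    return sigma(D_s_inv * (A_s @ X) @ W) @ a / np.sqrt(K)

def mse_loss(f_X, y, labeled_idx):
    """Empirical MSE loss restricted to the revealed labels."""
    r = f_X[labeled_idx] - y[labeled_idx]
    return 0.5 * np.mean(r ** 2)
\end{verbatim}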
Precisely, we take one GD step with weight decay $\lambda_1$ and learning rate $\eta_1$: \begin{equation} \bW^{(1)} = \bW^{(0)} - \eta_1 \Big(\nabla_{\bW^{(0)}} \mathcal{L}(\bW^{(0)},s^{(0)}) + \lambda_1 \bW^{(0)} \Big). \end{equation} Second, we determine the optimal $s$ based on \eqref{eq:optimal_rho}, using only the training labels $\by_\sL$. Let \begin{equation} s^{(1)}=\frac{2}{n^2q_m}(\by_\sL^\top\bX_\sL\bW^{(1)}\ba)\Big/\log\Big(\frac{\mathbf{1}^\top\bA\mathbf{1} + \by^\top_\sL\bA_\sL\by_\sL}{\mathbf{1}^\top\bA\mathbf{1} -\by^\top_\sL\bA_\sL\by_\sL}\Big). \end{equation} This construction resembles the spectral method defined in \eqref{eqn:ytildeSBM}. Meanwhile, we can also replace this estimator with a gradient-based method that optimizes $s$ under the MSE loss, as shown in Appendix~\ref{sec:NN_proof}. However, to attain the IT bound, the \textit{nonlinearity} of $\sigma(x)$ in \eqref{eq:NN} plays an important role when applying GD to find the optimal self-loop weight $s$. This observation is consistent with the results of \citet{wei2022understanding,baranwal2023optimality}, where nonlinearity is needed for the GCN to achieve a certain Bayes optimality in sparse graph learning. \begin{assumption} \label{assump:NN} Consider $N,d,K\to\infty$, $n\asymp N$, $K\asymp N$, $q_m = \log (m)$ and $d=o(q_m^2)$. We assume that at initialization $s^{(0)}=0$, and $\sqrt{K}\cdot[\bW^{(0)}]_{ij}\iid\cN(0,1), ~\sqrt{K}\cdot[\ba]_j\iid \Unif\{\pm 1\}$, for all $i\in[d],j\in [K]$. \end{assumption} With the initialization stated in Assumption~\ref{assump:NN} and the trained parameters $\bW^{(1)}$ and $s^{(1)}$, we derive a GCN estimator for the unknown labels whose threshold matches Theorems~\ref{thm:ITlowerbounds_CSBM} and \ref{thm:impossibility_CSBM}. \begin{theorem}\label{thm:NN} Under Assumptions~\ref{ass:asymptotics} and~\ref{assump:NN}, suppose that the learning rate satisfies $\eta_1=\Theta(K/\sqrt{q_m})$ and the weight decay rate $\lambda_1=\eta_1^{-1}$. Then, the estimator $\widehat\by_{\mathrm{GCN}}=\bS_{\sU}f(\bX)$ with $\bW=\bW^{(1)}$ and $s=s^{(1)}$ satisfies $$\P(\psi_m(\by_\sU,\sign(\widehat\by_{\mathrm{GCN}}))=0)\to 1$$ when $I(a_\tau, b_\tau, c_\tau )>1$, as $m\to\infty.$ Hence, the GCN can attain the IT bound for exact recovery of CSBM. \end{theorem} \begin{figure} \begin{minipage}[h]{0.45\linewidth} \centering {\includegraphics[width=1.0\textwidth] {fig/gcn_tau_25_N_800_compare_rho_c_50_b_40_test_acc.png}} \end{minipage} \caption{\small{Mismatch ratio of $\widehat\by_{\mathrm{GCN}}$ with and without the self-loop, for fixed $b = 4$, $c_{\tau} = 0.5$, $N = 400$.}} \label{fig:GCN_optimal_c50} \end{figure} \begin{remark} \cite{duranthon2023optimal} proposed the AMP-BP algorithm to solve the community detection problem under CSBM in the regime where the expected degree of each vertex is constant, i.e., $q_m = O(1)$. By contrast, this manuscript focuses on the regime $q_m \gg 1$, as in Assumption~\ref{ass:asymptotics}. Theorem~\ref{thm:NN} shows that the GCN achieves exact recovery when $I(a_\tau, b_\tau, c_\tau )>1$. However, the performance of the GCN is not characterized when $I(a_\tau, b_\tau, c_\tau )<1$, and it is still unclear whether it would match the lower bound proved in Theorem~\ref{thm:ITlowerbounds_CSBM}, i.e., the optimality of the GCN remains open. From the simulations in Figure~\ref{fig:GCN_optimal_c50}, we observe that below the IT bound ($I(a_\tau, b_\tau, c_\tau )<1$), there is a gap between the theoretically optimal error (red curve) and the simulated mismatch ratio of the GCN estimator.
\end{remark} \begin{figure*}[h] \centering \begin{minipage}[h]{0.48\linewidth} \centering \subcaptionbox{Exact recovery counts without self-loop.} {\includegraphics[width=1\textwidth]{fig/gcn_tau_25_N_400_rho_0_c_50_BCE_test_exact.png}} \end{minipage} \begin{minipage}[h]{0.48\linewidth} \centering \subcaptionbox{Exact recovery counts with self-loop $\rho$.} {\includegraphics[width=1\textwidth] {fig/gcn_tau_25_N_400_rho_1_c_50_BCE_test_exact.png}} \end{minipage} \caption{ \small Performance of $\widehat\by_{\mathrm{GCN}}$ when $N = 400$, $\tau = 0.25$, $c_{\tau} = 0.5$.} \label{fig:GCN_exact_c50} \end{figure*} \section{Numerical simulations}\label{sec:simulation} In this section, we present numerical simulations for the algorithms investigated above. \subsection{Optimal spectral method} The efficacy of the spectral estimator $\widehat{\by}_{\mathrm{PCA}}$ is demonstrated in Figures \ref{fig:pca_exact_c50} (a) and \ref{fig:pca_optimal_c50} for $c_{\tau} = 0.5$ and in Figure \ref{fig:pca_exact_c50} (b) for $c_{\tau} = 1.5$. We fix $N = 800$, $\tau = 0.25$, but vary $a$ ($y$-axis) and $b$ ($x$-axis) from $1$ to $10.5$ in \Cref{fig:pca_exact_c50}, and compute the frequency of exact recovery over $20$ independent trials for each parameter configuration $(a_{\tau}, b_{\tau}, c_{\tau})$. Here, a lighter color represents a higher chance of success. The (\textcolor{red}{red}) and (\textcolor{blue}{blue}) curves represent the boundaries for exact recovery under the semi-supervised and unsupervised regimes, respectively. A larger $c_{\tau}$ implies a stronger signal in the node features, which shrinks the boundary for exact recovery and makes the problem easier. In Figure \ref{fig:pca_optimal_c50}, we fix $b = 5$ but vary $a$ ($x$-axis) from $1$ to $10.5$. The simulated average mismatch ratios are presented on the logarithmic scale for different choices of $N$. Clearly, $\log \E \psi_m$ approaches the lower bound (red curve), as proved in Theorems \ref{thm:ITlowerbounds_CSBM} and \ref{thm:achievability_CSBM} (b). \subsection{Ridge regression on linear GCNs} The efficacy of the ridge estimator $\widehat\by_{\mathrm{LRR}}$ is presented in Figures \ref{fig:lrr_exact_c50} and \ref{fig:lrr_optimal_c50}. We fix $N = 800$, $\tau = 0.25$ and $c_{\tau} = 0.5$ in \Cref{fig:lrr_exact_c50}, but vary $a$ ($y$-axis) and $b$ ($x$-axis) from $1$ to $10.5$, where $20$ independent trials are performed for each $(a_{\tau}, b_{\tau}, c_{\tau})$. The difference between (a) and (b) lies in the choice of the self-loop weight $\rho$: we take $\rho = 0$ in (a) and $\rho = 2c_{\tau} q_m / \log(a_{\tau}/b_{\tau})$ in (b), as in \eqref{eq:optimal_rho}. In \Cref{fig:lrr_optimal_c50}, we fix $b = 4$, $N = 400$ but vary $a$ ($x$-axis) from $1$ to $10.5$. The performance difference between the two choices of $\rho$ when $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$ is presented. From the simulations, the average mismatch ratio is closer to the predicted lower bound (red curve) when the optimal self-loop is added. \subsection{Gradient-based training on GCN} The efficacy of $\widehat\by_{\mathrm{GCN}}$ is presented in Figures \ref{fig:GCN_optimal_c50} and \ref{fig:GCN_exact_c50}. Similarly, we fix $N = 400$, $\tau = 0.25$ and $c_{\tau} = 0.5$, but vary $a$ ($y$-axis) and $b$ ($x$-axis) from $1$ to $9$ in \Cref{fig:GCN_exact_c50}. For each $(a_{\tau}, b_{\tau}, c_{\tau})$, $10$ independent trials are performed.
We plot the performance with and without self-loops added to the graph data, taking $\rho = 0$ in (a) and $\rho = 2c_{\tau} q_m / \log(a_{\tau}/b_{\tau})$ in (b), as in \eqref{eq:optimal_rho}. In Figure \ref{fig:GCN_optimal_c50}, we fix $b = 4$, $c_{\tau} = 0.5$, $N = 400$ but vary $a$ ($x$-axis) from $1$ to $9$. The performance difference between the two choices of $\rho$ when $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$ is presented. From the simulations, the average mismatch ratio is closer to the predicted bound (red curve) when the optimal self-loop is added. \section{Discussion and conclusion}\label{sec:conclusions} Our work investigates the precise recovery threshold for semi-supervised learning on the CSBM. We present several strategies for achieving exact recovery, including the spectral method, linear ridge regression applied to linear GCNs, and gradient-based training techniques for GCNs. First, through the index $\ell^*$ and the ratio $\widehat\kappa_{\ell^*}$ defined in \eqref{eqn:hatkappa_lstar}, all of our methods cover Erd\H{o}s-R\'{e}nyi graphs ($a=b$), homophilic graphs ($a>b$) and heterophilic graphs $(a<b)$. When $a=b$, we can only utilize the GMM node features for classification, which reduces to semi-supervised learning on the GMM \cite{lelarge2019asymptotic,oymak2021theoretical,nguyen2023asymptotic}. For heterophilic graphs with $a<b$, the optimal self-loop strength $\rho$ defined in \eqref{eq:optimal_rho} is negative, which validates the observation in Figure 5 of \cite{shi2024homophily}. Furthermore, for each method, we establish precise asymptotic error bounds that depend on the sparsity of the SBM and the SNR of the GMM. In many instances, these bounds match the IT bound and are therefore optimal. Notably, our findings support the notion that GCNs, when equipped with certain gradient-based training protocols, can flawlessly recover all unlabeled vertices provided the SNR exceeds the IT bound. This finding underscores the effectiveness of GCNs in addressing classification problems within CSBM settings. For future research, one can explore the precise recovery rates for more complex and nonlinear graph models, such as the XOR-SBM and random geometric Gaussian graphs. Moreover, it would also be interesting to illuminate the process of feature learning in GCNs and to identify the optimal GCN architectures that mitigate over-smoothing while adhering to information-theoretic constraints. \section*{Acknowledgements} H.X.W. and Z.W. acknowledge the support from NSF DMS-2154099. Z.W. is also partially supported by NSF DMS-2055340. \bibliographystyle{abbrvnat} \bibliography{bibliography.bib} \appendix \addcontentsline{toc}{section}{Appendices} \newpage \input{appendix} \end{document} \begin{abstract} We delve into the challenge of semi-supervised node classification on the Contextual Stochastic Block Model (CSBM) dataset. Here, nodes from the two-cluster Stochastic Block Model (SBM) are coupled with feature vectors, which are derived from a Gaussian Mixture Model (GMM) that corresponds to their respective node labels. With only a subset of the CSBM node labels accessible for training, our primary objective becomes the accurate classification of the remaining nodes. Venturing into the transductive learning landscape, we, for the first time, pinpoint the information-theoretic threshold for the exact recovery of all test nodes in CSBM.
Concurrently, we design an optimal spectral estimator inspired by Principal Component Analysis (PCA), using the training labels and essential data from both the adjacency matrix and the feature vectors. We also evaluate the efficacy of graph ridge regression and Graph Convolutional Networks (GCN) on this synthetic dataset. Our findings underscore that graph ridge regression and GCN can achieve the information threshold of exact recovery in a manner akin to the optimal estimator when the optimally weighted self-loops are used. This highlights the potential role of feature learning in augmenting the proficiency of GCN, especially in the realm of semi-supervised learning. \end{abstract} \section{Information-theoretic limits}\label{sec:ITLowerBoundsCSBM_proof} In this section, we provide the proofs of \Cref{thm:impossibility_CSBM} and \Cref{thm:ITlowerbounds_CSBM}. \subsection{Impossibility for exact recovery} The proof sketch of \Cref{thm:impossibility_CSBM} is presented in this subsection, with the proofs of some lemmas deferred. Let $\by\in \{\pm 1\}^{N}$ denote the true label vector with $\by = [\by_{\sL}^{\sT}, \by_{\sU}^{\sT}]^{\sT}$, where $\by_{\sL}$ and $\by_{\sU}$ denote the revealed and unrevealed label vectors, respectively. Assume that $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$ as in Definition~\ref{def:SemiCSBM}, and that access to $\bA$, $\bX$, $\by_{\sL}$ is provided. Let $\widehat{\by}_{\sU} \in \{\pm 1\}^{m}$ denote an estimator of $\by_{\sU}$ obtained by some algorithm. The probability that $\widehat{\by}_{\sU}$ fails to recover every entry of $\by_{\sU}$ is \begin{align} &\,\P_{\mathrm{fail}}\coloneqq \P(\widehat{\by}_{\sU} \neq \pm \by_{\sU}) = \sum_{\bA, \bX, \by_{\sL}}[1 -\P(\widehat{\by}_{\sU} = \pm \by_{\sU} | \bA, \bX, \by_{\sL}) ] \cdot \P(\bA, \bX, \by_{\sL}), \label{eqn:probFailureExactRecovery} \end{align} which is minimized by the \textit{Maximum A Posteriori} (MAP) estimator. Since the prior distribution of $\by$ is uniform by Definition~\ref{def:SemiCSBM}, the discussion of the ideal estimator can be transferred to \textit{Maximum Likelihood Estimation} (MLE), \begin{align} \widehat{\by}_{\textnormal{MLE}} \coloneqq \underset{\bz \in \{\pm 1\}^{m}, \ones^{\top}\bz = 0}{\arg\max} \P(\bA |\by_{\sL}, \by_{\sU} = \bz) \cdot \P(\bX|\by_{\sL}, \by_{\sU} = \bz).\label{eqn:MLEestimator} \end{align} Furthermore, \Cref{lem:MAPMLEMax} shows that MLE maximizes the following function over $\bz \in \{\pm 1\}^{m}$ \begin{align} f(\bz) \coloneqq \log \P(\bA |\by_{\sL}, \by_{\sU} = \bz) + \log \P(\bX|\by_{\sL}, \by_{\sU} = \bz). \label{eqn:MLEMax} \end{align} From the discussion above, MLE is equivalent to the best estimator MAP, so no algorithm would be able to assign all labels correctly if MLE fails. In view of \eqref{eqn:MLEMax}, the failure of MLE indicates that some configuration $\bsigma \in \{\pm 1\}^{m}$ other than the true $\by_{\sU}$ achieves the maximum, i.e., MLE prefers $\bsigma$ over $\by_{\sU}$. To establish the necessity, we explicitly construct some $\bsigma \in \{\pm 1\}^{m}$ with $\ones^{\sT}\bsigma = 0$ such that $\bsigma \neq \by_{\sU}$ but $f(\bsigma) \geq f(\by_{\sU})$ below the threshold, i.e., when $I(a_{\tau}, b_{\tau}, c_{\tau}) < 1$. An example of such $\bsigma$ can be constructed as follows. Pick $u\in \cV_{\sU, +}$ and $v\in \cV_{\sU, -}$, where $\cV_{\sU, \pm} = \cV_{\sU} \cap \cV_{\pm}$, and swap the labels of $u$ and $v$ in $\by_{\sU}$ while keeping all the others.
\Cref{lem:WmuLDP} characterizes the scenarios in which exact recovery fails in terms of $u$ and $v$. \begin{lemma}\label{lem:WmuLDP} Given some subset $\cS \subset \cV = [N]$, for a vertex $u\in \cV_{\sU}$, define the following random variable \begin{align} W_{m,u}(\cS) \coloneqq y_u \cdot \bigg( \log (a/b)\cdot \sum_{j\in \cS} A_{uj}y_j + \frac{2}{N + d/\theta^2}\sum_{j\in \cS} \langle \bx_u,\bx_j\rangle y_j \bigg).\label{eqn:Wmu} \end{align} Denote $W_{m, u} \coloneqq W_{m,u}([N] \setminus \{u\})$ for any $u \in \cV_{\sU}$. Define the rate function \begin{align} I(t,a_{\tau},b_{\tau},c_\tau) \coloneqq\frac{1}{2}\Big(a_{\tau} - a_{\tau} \Big(\frac{a_{\tau}}{b_{\tau}}\Big)^t + b_{\tau} - b_{\tau} \Big(\frac{b_{\tau}}{a_{\tau}} \Big)^t \Big) -2c_\tau(t+t^2). \label{eqn:rateFunctiontau} \end{align} Then, its supremum over $t$ is attained at $t^{\star} = -1/2$, \begin{align} \sup_{t\in\R} I(t,a_{\tau},b_{\tau},c_\tau) = I(-1/2,a_{\tau},b_{\tau},c_{\tau}) = \frac{1}{2} \Big( (\sqrt{a_\tau} - \sqrt{b_\tau})^2 + c_\tau \Big) \eqqcolon I(a_\tau,b_\tau,c_\tau), \end{align} where the last equality holds as in \eqref{eqn:rate_I_abc_tau}. \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item For any $\eps<\frac{a-b}{2(1-\tau)}\log (a/b)+2c_\tau$ and $\delta>0$, there exists some sufficiently large $m_0>0$ such that, with $I(t, a_{\tau}, b_{\tau}, c_{\tau})$ as in \eqref{eqn:rateFunctiontau}, the following holds for any $m\ge m_0$ \begin{align} \P(W_{m,u}\le \eps q_m) = (1 + o(1))\cdot \exp{ \Big( -q_m\cdot\big(-\delta+\sup_{t\in\R}\{\eps t+I(t,a_{\tau},b_{\tau},c_\tau)\} \big) \Big) }. \end{align} \item For the pair $u\in \cV_{\sU, +}$ and $v\in \cV_{\sU, -}$, the event $\{ W_{m,u} \leq 0\} \cap \{ W_{m,v} \leq 0\}$ implies $f(\by_{\sU}) \leq f(\bsigma)$ with probability at least $1 - e^{-q_m}$. \end{enumerate} \end{lemma} However, for any pair $u\in \cV_{\sU, +}$, $v\in \cV_{\sU, -}$, the variables $W_{m, u}$ and $W_{m, v}$ are not independent, due to the shared random edges. To remove this dependency, let $\cU$ be a subset of $\cV_{\sU}$ with cardinality $|\cU| = \delta m$, where $\delta = \log^{-3}(m)$, such that $|\cU \cap \cV_{\sU, +}| = |\cU \cap \cV_{\sU, -}| = \delta m/2$. Define the following random variables \begin{align} U_{m, u} \coloneqq W_{m,u}([N] \setminus \cU ),\quad J_{m, u} \coloneqq W_{m,u}(\cU \setminus \{u\} ), \quad J_{m} \coloneqq &\, \max_{u\in \cV_{\sU}} J_{m, u}. \label{eqn:UmuJmuJm} \end{align} Obviously, for some $\zeta_m >0$, the event $\{U_{m, u} \leq -\zeta_{m} q_m \} \cap \{J_m \leq \zeta_{m} q_m\}$ implies $\{W_{m, u} \leq 0\}$, since $W_{m, u} = U_{m, u} + J_{m, u}$. Furthermore, $\{U_{m, u} \leq -\zeta_{m} q_m \}$ does not rely on $\{J_m \leq \zeta_{m} q_m\}$, since $J_m$ is independent of $U_{m,u}$ for any vertex $u \in \cU$. Also, $\{ U_{m, u}\}_{u \in \cV_{\sU, +} \cap \cU}$ is a set of independent random variables, since they involve no overlapping edges. Thus the failure probability can be lower bounded by \begin{align} \P_{\mathrm{fail}} \geq &\, \P(\exists u \in \cV_{\sU, + },\, v \in \cV_{\sU, -} \textnormal{ s.t.
} f(\by_{\sU}) \leq f(\bsigma))\\ \geq &\, \P\Big(\cup_{u \in \cV_{\sU, +}} \{ W_{m,u} \leq 0 \} \bigcap \cup_{v \in \cV_{\sU, -}} \{ W_{m,v}\leq 0 \} \Big) \geq \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ W_{m,u} \leq 0 \} \bigcap \cup_{v \in \cV_{\sU, -} \cap \cU } \{ W_{m,v}\leq 0 \} \Big) \\ \geq &\, \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \bigcap \cup_{v \in \cV_{\sU, -} \cap \cU } \{ U_{m,v}\leq -\zeta_{m} q_m \} \Big| \{J_m \leq \zeta_{m} q_m\} \Big) \cdot \P(J_m \leq \zeta_{m} q_m) \\ \geq &\, \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big| \{J_m \leq \zeta_{m} q_m\} \Big)\\ &\,\cdot \P \Big( \cup_{v \in \cV_{\sU, -} \cap \cU } \{ U_{m,v}\leq -\zeta_{m} q_m \} \Big| \{J_m \leq \zeta_{m} q_m\} \Big) \cdot \P(J_m \leq \zeta_{m} q_m) \\ = &\, \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big)\cdot \P \Big( \cup_{v \in \cV_{\sU, -} \cap \cU } \{ U_{m,v}\leq -\zeta_{m} q_m \} \Big) \cdot \P(J_m \leq \zeta_{m} q_m). \end{align} \begin{lemma}\label{lem:lowboundsJmuUmu} For $\zeta_{m} = (\log\log m)^{-1}$ and $q_m = \log(m)$ and some constant $\widetilde{\delta}> 0$, the following holds \begin{align} \P(J_m \leq \zeta_{m} q_m) \geq 1 - \log^{-3}(m) \cdot m^{-1 + o(1)}, \quad \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big) \geq 1 - \exp\Big( -\frac{m^{1 - I(a_{\tau}, b_{\tau}, c_{\tau}) + \widetilde{\delta}}}{2\log^3(m)}\Big). \end{align} \end{lemma} With the lower bounds of the three components obtained in \Cref{lem:lowboundsJmuUmu}, while $I(a_{\tau}, b_{\tau}, c_{\tau}) = 1 - \epsilon < 1$ for some $\epsilon > 0$ and $\widetilde{\delta}> 0$, one has \begin{align} \P_{\mathrm{fail}} \geq &\, \Big[ 1 - \exp\Big( -\frac{m^{1 - I(a_{\tau}, b_{\tau}, c_{\tau}) + \widetilde{\delta}}}{2\log^3(m)}\Big) \Big]^2 \cdot \Big(1 - \frac{m^{-1 + o(1)}}{\log^{3}(m)} \Big) \\ \geq &\, 1 - 2\exp\Big( -\frac{m^{\epsilon + \widetilde{\delta}}}{2\log^3(m)}\Big) - \frac{m^{-1 + o(1)}}{\log^{3}(m)} - \exp\Big( -\frac{m^{\epsilon + \widetilde{\delta}}}{2\log^3(m)}\Big) \cdot \frac{m^{-1 + o(1)}}{\log^{3}(m)} \,\, \overset{m \to \infty }{\rightarrow} 1. \end{align} Therefore, the with probability tending to $1$, the best estimator MLE (MAP) fails exact recovery, hence no other algorithm could succeed. \subsection{Information-theoretic lower bounds} \begin{proof}[Proof of \Cref{thm:ITlowerbounds_CSBM}] For each node $i\in \cV_{\sU}$, denote $f(\cdot| \widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}, \widetilde{\by}_{\sU, -i}) = \P( y_i = \cdot| \bA = \widetilde{\bA}, \bX = \widetilde{\bX}, \by_{\sL} = \widetilde{\by}_{\sL}, \by_{\sU, -i} = \widetilde{\by}_{\sU, -i})$. Due to the symmetry of the problem, vertices are interchangeable if in the same community, then it suffices to consider the following event \begin{align} \cA = \big \{ f(y_1| \widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}, \widetilde{\by}_{\sU, -1}) < f(-y_1| \widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}, \widetilde{\by}_{\sU, -1}) \big \}. \end{align} By {Lemma F.3 in \cite{abbe2022lp}} and symmetry between vertices, for any sequence of estimators $\widehat{\by}_{\sU}$, the following holds \begin{align} \E \psi_m (\widehat{\by}_{\sU}, \by_{\sU}) \geq \frac{m-1}{3m-1} \cdot \P( \cA ). \end{align} Recall the definition of $W_{m, u}$ in \eqref{eqn:Wmu}. We denote $W_{m, 1}$ by taking $u = 1$ and $\sS = [N] \setminus \{u\}$. 
Define the following two events \begin{align} \cB_{\epsilon} \coloneqq &\, \bigg\{ \bigg| \log\Big( \frac{ f(y_1| \widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}, \widetilde{\by}_{\sU, -1}) }{f(-y_1| \widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}, \widetilde{\by}_{\sU, -1})} \Big) - W_{m, 1}\bigg| < \epsilon q_m\bigg\}, \quad \cC_{\epsilon} = \Big\{ W_{m, 1} \leq -\epsilon q_m \Big\}. \end{align} By the triangle inequality, $\cB_{\epsilon} \cap \cC_{\epsilon}$ implies $\cA$, thus $\cB_{\epsilon} \cap \cC_{\epsilon} \subset \cA$, and \begin{align} \E \psi_m (\widehat{\by}_{\sU}, \by_{\sU}) \gtrsim \P(\cA) \geq \P(\cB_{\epsilon} \cap \cC_{\epsilon}) \geq \P(\cC_{\epsilon}) - \P(\cB_{\epsilon}^{ \text{c}}). \end{align} According to \Cref{lem:optimalApprox}, $\P(\cB_{\epsilon}^{\text{c}}) \ll e^{-q_m}$. Together with the results above, and by \Cref{lem:WmuLDP}, we have \begin{align} \liminf_{m \to \infty} q^{-1}_m \log \E \psi_{m}(\widehat{\by}_{\sU}, \by_{\sU}) \geq -\sup_{t\in \R} \{\eps t + I(t, a_{\tau}, b_{\tau}, c_{\tau})\}, \end{align} and the proof is finished by taking $\eps \to 0$. \end{proof} \subsection{Deferred proofs} For convenience, we introduce the following notation for the remainder of this section. For some realization $\bA = \widetilde{\bA}$, $\bX = \widetilde{\bX}$, $\by_{\sL} =\widetilde{\by}_{\sL}\in \{\pm 1\}^{n}$ and $\bmu = \widetilde{\bmu}$, we write \begin{align} &\,\P(\widetilde{\bA}, \widetilde{\bX}, \widetilde{\by}_{\sL}) = \P(\bA = \widetilde{\bA}, \bX = \widetilde{\bX}, \by_{\sL} =\widetilde{\by}_{\sL}), \quad \P(\widetilde{\bA}, \widetilde{\bX}| \widetilde{\by}_{\sL}, \by_{\sU} = \bz ) = \P(\bA = \widetilde{\bA}, \bX = \widetilde{\bX}| \by_{\sL} = \widetilde{\by}_{\sL}, \by_{\sU} = \bz)\\ &\,\P(\widetilde{\bA}|\widetilde{\by}_{\sL}, \by_{\sU} = \bz) = \P(\bA = \widetilde{\bA}|\by_{\sL} = \widetilde{\by}_{\sL}, \by_{\sU} = \bz),\quad \P(\widetilde{\bX}| \widetilde{\by}_{\sL}, \by_{\sU} = \bz) = \P(\bX = \widetilde{\bX}| \by_{\sL} = \widetilde{\by}_{\sL}, \by_{\sU} = \bz). \end{align} \begin{lemma}\label{lem:MAPMLEMax} The MAP estimator minimizes $\P_{\textnormal{fail}}$, and MAP is equivalent to the MLE \eqref{eqn:MLEestimator}. The quantity that MLE maximizes is given in \eqref{eqn:MLEMax}. \end{lemma} \begin{proof}[Proof of \Cref{lem:MAPMLEMax}] By Definition~\ref{def:SemiCSBM}, $\by_{\sL}$ and $\by_{\sU}$ are independently and uniformly distributed over the spaces $\{\pm 1\}^{n}$ and $\{\pm 1\}^{m}$ respectively, thus the following factorization holds \begin{align} \P( \by_{\sL}, \by_{\sU} = \bz) \coloneqq \P(\by_{\sL} = \widetilde{\by}_{\sL}, \by_{\sU} = \bz) = \P(\by_{\sL} = \widetilde{\by}_{\sL}) \cdot \P(\by_{\sU} = \bz), \end{align} which is some constant irrelevant to $\bz$.
The first sentence of the Lemma can be established by Bayes Theorem, since \begin{align} \widehat{\by}_{\mathrm{MAP}} =&\, \underset{\bz \in \{\pm 1\}^{m}, \ones^{\top}\bz = 0}{\arg\max} \P(\by_{\sU} = \bz|\bA,\bX, \by_{\sL}) = \underset{\bz \in \{\pm 1\}^{m}, \ones^{\top}\bz = 0}{\arg\max} \frac{\P(\bA, \bX|\by_{\sL}, \by_{\sU} = \bz)\cdot \P(\by_{\sL}, \by_{\sU} = \bz)}{\P(\bA, \bX, \by_{\sL})}\\ =&\, \underset{\bz \in \{\pm 1\}^{m}, \ones^{\top}\bz = 0}{\arg\max} \P(\bA, \bX |\by_{\sL}, \by_{\sU} = \bz) = \underset{\bz \in \{\pm 1\}^{m}, \ones^{\top}\bz = 0}{\arg\max} \P(\bA |\by_{\sL}, \by_{\sU} = \bz) \cdot \P(\bX|\by_{\sL}, \by_{\sU} = \bz) \end{align} where $\P( \by_{\sL}, \by_{\sU} = \bz)$ and $\P(\bA, \bX, \by_{\sL})$ in the first line are factored out since they are irrelevant to $\bz$, and the last equality holds due to the independence between $\bA$ and $\bX$ when given $\by$. For the second sentence of the Lemma, the function $f(\bz)$ could be easily obtained by taking the logarithm of the objectve probability. \end{proof} \begin{proof}[Proof of \Cref{lem:WmuLDP} (1)] By definition of $W_{m,i}$, we have \begin{align} \E[e^{tW_{m,i}}|y_i]=~&\E\left[\exp{\left(\frac{2t\theta^2}{N\theta^2+d}\sum_{j\in[N]\setminus\{i\}} \langle \bx_i,\bx_j\rangle y_jy_i\right)}\;\middle|\;y_i\right]\\ ~&\cdot\E\left[\exp{\left(t\log (a/b)\sum_{j\in\cV_{\sU}\setminus\{i\}} A_{ij} y_jy_i\right)}\;\middle|\;y_i\right]\cdot\E\left[\exp{\left(t\log (a/b)\sum_{j\in\cV_{\sL}} A_{ij} y_jy_i\right)}\;\middle|\;y_i\right]. \end{align} Following the same calculation as Lemma F.2 in \cite{abbe2022lp}, we know that \begin{align} \log \E\left[\exp{\left(\frac{2t\theta^2}{N\theta^2+d}\sum_{j\in[N]\setminus\{i\}} \langle \bx_i,\bx_j\rangle y_jy_i\right)}\;\middle|\;y_i\right]=~& 2c_\tau(t+t^2)(1+o(1))q_m\\ \log \E\left[\exp{\left(t\log (a/b)\sum_{j\in\cV_{\sU}\setminus\{i\}} A_{ij} y_jy_i\right)}\;\middle|\;y_i\right]=~& \frac{1}{2}\left(-a+a\left(\frac{a}{b}\right)^t-b+b\left(\frac{b}{a}\right)^t\right)(1+o(1))q_m. \end{align} Meanwhile, since \begin{align} \E[e^{-tA_{ij}y_iy_j}|y_i]=1+\frac{1}{2}\big(\alpha(e^t-1)+\beta(e^{-t}-1)\big), \end{align} we can get \begin{align} \E\left[\exp{\left(t\log (a/b)\sum_{j\in\cV_{\sL}} A_{ij} y_jy_i\right)}\;\middle|\;y_i\right] =~& n \log \left(1+\frac{1}{2}\big(\alpha(e^t-1)+\beta(e^{-t}-1)\big)\right)\\ =~& \frac{n}{2m}\left(a\left(\frac{a}{b}\right)^t-a+b\left(\frac{b}{a}\right)^t-b\right)(1+o(1))q_m. \end{align} Thus by using $\log(1 + x) = x$ when $x = o(1)$, we obtain \begin{align} &\,\lim_{m \to \infty} q_m^{-1}\log \E[e^{tW_{m,i}}|y_i] = \lim_{m \to \infty} \left( 2c_\tau(t+t^2) +\frac{N}{2m}\left(a\left(\frac{a}{b}\right)^t-a+b\left(\frac{b}{a}\right)^t-b\right)\right)(1+o(1))\\ =&\, -I(t,a_{\tau},b_{\tau},c_{\tau})(1+o(1)). \end{align} The proof is then completed by applying {Lemma H.5 in \cite{abbe2022lp}}. 
\end{proof} \begin{proof}[Proof of \Cref{lem:WmuLDP} (2)] First, we plug in $\sigma$, $\by_{\sU}$ into \eqref{eqn:MLEMax}, and consider the effect of $u$ and $v$, \begin{align} f(\by_{\sU}) - f(\bsigma) =&\, \log\P(\bA |\by_{\sL}, \by_{\sU} = \bsigma) - \log\P(\bA |\by_{\sL}, \by_{\sU}) + \log \P(\bX|\by_{\sL}, \by_{\sU} = \bsigma) - \log \P(\bX|\by_{\sL}, \by_{\sU})\\ =&\, \log \P(\bA |y_{u} = 1, y_v = -1, \by_{\sL}, \by_{\sU \setminus \{ u, v \} }) - \log \P(\bA |y_{u} = -1, y_v = 1, \by_{\sL}, \by_{\sU \setminus \{ u, v \} })\\ &\, + \log \P(\bX|y_{u} = 1, y_v = -1, \by_{\sL}, \by_{\sU \setminus \{ u, v \} }) - \log \P(\bX|y_{u} = -1, y_v = 1, \by_{\sL}, \by_{\sU \setminus \{ u, v \} } ). \end{align} By \Cref{eqn:decoupleuandv}, the term above can be further reformulated as \begin{align} f(\by_{\sU}) - f(\bsigma) = &\, \log \Big( \frac{p_{\bA}(\bA|y_{u}, \by_{-u})}{p_{\bA}(\bA|-y_{u}, \by_{-u})} \Big) + \log \Big( \frac{p_{\bX}(\bX|y_{u}, \by_{-u})}{p_{\bX}(\bX|-y_{u}, \by_{-u})} \Big)\\ &\, + \log \Big( \frac{p_{\bA}(\bA|y_{v}, \by_{-v})}{p_{\bA}(\bA|-y_{v}, \by_{-v})} \Big) + \log \Big( \frac{p_{\bX}(\bX|y_{v}, \by_{-v})}{p_{\bX}(\bX|-y_{v}, \by_{-v})} \Big). \end{align} Note that $y_u^2 = 1$, according to \Cref{lem:F4ABBElp,lem:F5ABBElp}, with probability at least $1 - e^{-q_m}$, we have \begin{align} &\, \bigg| \log \Big( \frac{p_{\bA}(\bA|y_{u}, \by_{-u})}{p_{\bA}(\bA|-y_{u}, \by_{-u})} \Big) + \log \Big( \frac{p_{\bX}(\bX|y_{u}, \by_{-u})}{p_{\bX}(\bX|-y_{u}, \by_{-u})} \Big) - y_{u}\log\Big( \frac{a}{b}\Big) \sum_{j \neq i} A_{ij} y_j - y_{u}\frac{2\theta^2}{N\theta^2 + d} \sum_{j\neq i} \<\bx_i, \bx_{j}\>y_j \bigg| \\ \leq &\, \bigg| \log \Big( \frac{p_{\bA}(\bA|y_{u}, \by_{-u})}{p_{\bA}(\bA|-y_{u}, \by_{-u})} \Big) - y_{u}\log\Big( \frac{a}{b}\Big) \sum_{j \neq i} A_{ij} y_j \Bigg| + \Bigg |\log \Big( \frac{p_{\bX}(\bX|y_{u}, \by_{-u})}{p_{\bX}(\bX|-y_{u}, \by_{-u})} \Big) - y_{u}\frac{2\theta^2}{N\theta^2 + d} \sum_{j\neq i} \<\bx_i, \bx_{j}\>y_j \bigg| \\ \ll &\, q_m. \end{align} Consequently by triangle inequality, there exists some large enough constant $c > 0$ such that with probability at least $1 - e^{-cq_m}$, \begin{align} |f(\by_{\sU}) - f(\bsigma) - W_{m, u} - W_{m, v}|/ q_m = o(1), \end{align} The proof is then completed. \end{proof} \begin{proof}[Proof of \Cref{lem:lowboundsJmuUmu}] First, $J_{m} \coloneqq \max_{u\in \cV_{\sU}} J_{m, u}$ in \eqref{eqn:UmuJmuJm}, then it suffices to focus on $\P(J_{m, u} >\zeta_{m} q_m)$, since an argument based on the union bound leads to \begin{align} \P(J_{m} > \zeta_{m} q_m ) = \P(\exists u \in \cV_{\sU} \textnormal{ s.t. } J_{m, u} >\zeta_{m} q_m) \leq \sum_{u\in \cV_{\sU}} \P(J_{m, u} >\zeta_{m} q_m). \end{align} We claim the following upper bound with the proof deferred later \begin{align} \E J_{m, u} \leq \exp(q_m \log^{-3}m). \label{eqn:expectationJmu} \end{align} Then by Markov inequality and the fact $q_m = \log(m)$, one has $\P(J_{m, u} > \zeta_{m} q_m) \leq \E J_{m, u}/ (\zeta_{m} q_m) \asymp n^{-2 + o(1)}$. Thus by the union bound \begin{align} \P( J_{m} \leq \zeta_{m} q_m) = 1 - \P(J_{m} > \zeta_{m} q_m) \geq 1 - \gamma m \cdot m^{-2 + o(1)} = 1 - \log^{-3}(m) \cdot m^{-1 + o(1)}. \end{align} For the second desired inequality, note that the difference between $U_{m, u}$ \eqref{eqn:UmuJmuJm} and $W_{m, u}$ \eqref{eqn:Wmu} is relatively negligible since $|\cU|/m = \log^{-3}(m) = o(1)$, thus $U_{m, u}$ exhibits the same concentration behavior as $W_{m, u}$. 
One could follow the same calculation as in \Cref{lem:WmuLDP} to figure out that for any $m\ge m_0$ and $\widetilde{\delta} >0$, with $I(t,a_{\tau},b_{\tau},c_\tau)$ defined in \eqref{eqn:rateFunctiontau}, the following holds \begin{align} \P(U_{m, u}\leq -\zeta_{m} q_m) = \exp{ \big(-q_m \cdot(-\widetilde{\delta} +\sup_{t\in\R}\{ -\zeta_{m} t+I(t,a_{\tau},b_{\tau},c_\tau)\}) \big)}. \end{align} Note that $\{ U_{m, u}\}_{u \in \cV_{\sU, +} \cap \cU}$ is a set of independent for different since no edge overlap, then one has \begin{align} &\, \P\Big(\cap_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big) = \prod_{u \in \cV_{\sU, +}\cap \cU} \P( U_{m,u} > -\zeta_{m} q_m ) \\ =&\, \Big( 1 - m^{-I(a_{\tau}, b_{\tau}, c_{\tau}) + \widetilde{\delta}}\Big)^{\delta m/2} \leq \exp\Big( -\frac{m^{1 - I(a_{\tau}, b_{\tau}, c_{\tau}) + \widetilde{\delta}}}{2\log^3(m)}\Big), \end{align} where the last inequality holds since $1 - x \leq e^{-x}$, and it leads to our desired result \begin{align} \P\Big(\cup_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big) = 1 - \P\Big(\cap_{u \in \cV_{\sU, +} \cap \cU} \{ U_{m,u} \leq -\zeta_{m} q_m \} \Big) \geq 1 - \exp\Big( -\frac{m^{1 - I(a_{\tau}, b_{\tau}, c_{\tau}) + \widetilde{\delta}}}{2\log^3(m)}\Big). \end{align} We now establish the proof of \eqref{eqn:expectationJmu}. Recall that $J_{m, u}$ \eqref{eqn:UmuJmuJm} is a summation of independent random variables, where the number of such type random variables is at most $|\cU| = \gamma m \asymp m\log^{-3}(m)$. Denote $\widehat{\bmu}^{(-u)} = \frac{1}{|\cU| - 1} \sum_{j\in \cU\setminus \{u\}} \bx_j y_j$. Recall $\bx_i = \theta y_i \bmu + \bz_i$ with $\|\bmu\|_2 = 1$ in \eqref{eqn:gauss_mixture}, then $y_i \bx_i \sim \Normal (\theta\bmu, \bI_d)$ given $y_i$, and $\sqrt{|\cU| - 1}\,\, \widehat{\bmu}^{(-u)} \sim \Normal(\sqrt{|\cU| - 1}\,\,\theta \bmu, \bI_d)$, while $y_i \bx_i$ and $\sqrt{|\cU| - 1}\,\, \widehat{\bmu}^{(-u)}$ are independent. Following {Lemma H.4 in \cite{abbe2022lp}}, for all $t\in (-\sqrt{|\cU| - 1}, \sqrt{|\cU| - 1})$, one has \begin{align} &\, \log\E\big( \exp(t \langle \bx_u , \widehat{\bmu}^{(-u)}\rangle y_u ) | y_u \big) = \log\E\big( \exp(t/\sqrt{|\cU| - 1} \langle \bx_u , \sqrt{|\cU| - 1}\,\,\widehat{\bmu}^{(-u)}\rangle y_u ) | y_u \big)\\ = &\, \frac{\frac{t^2}{|\cU|-1}}{2(1 - \frac{t^2}{|\cU|-1} )} \big(\theta^2 \|\bmu\|_2^2 + (|\cU| -1)\cdot \theta^2 \|\bmu\|^2_2 \big) + \frac{\frac{t}{\sqrt{|\cU| - 1}} }{1 - \frac{t^2}{|\cU|-1}} \theta^2 \langle \bmu, \sqrt{|\cU| - 1} \bmu \rangle - \frac{d}{2}\log\Big(1 - \frac{t^2}{|\cU|-1} \Big)\\ =&\, \frac{t \theta^2}{1 - \frac{t^2}{|\cU|-1}} \Big(1 + \frac{|\cU|t}{2(|\cU| - 1)} \Big) - \frac{d}{2}\log\Big(1 - \frac{t^2}{|\cU|-1} \Big) = \log\E\big( \exp(t \langle \bx_u , \widehat{\bmu}^{(-u)}\rangle y_u ), \end{align} where the last inequality holds since the result above is independent of $y_u$. We substitute $s = 2t\Tilde{p}/\theta^2$, where $\Tilde{p} = \theta^4(|\cU| - 1)/(N\theta^2 + d)$. 
We focus on the critical case $\theta^2 \asymp q_m \asymp \log(m)$, $|\cU| = m \log^{-3}m$, $d/N = \gamma \asymp 1$; thus $s^2/(|\cU| - 1) = m^{-1}\log^{-3}(m) = o(1)$ and, using $\log(1 - x) = -x$ for $x = o(1)$, we obtain \begin{align} &\, \log\E\big( \exp(s \langle \bx_u , \widehat{\bmu}^{(-u)}\rangle y_u ) \big) = \frac{s \theta^2}{1 - \frac{s^2}{|\cU|-1}} \Big(1 + \frac{|\cU|s}{2(|\cU| - 1)} \Big) - \frac{d}{2}\log\Big(1 - \frac{s^2}{|\cU|-1} \Big)\\ =&\, [1 + o(1)] s\theta^2 (1 + s/2) + \frac{d}{2} \cdot \frac{s^2}{|\cU| - 1} = [1 + o(1)] \cdot \Big[2t\Tilde{p}(1 + \frac{t\Tilde{p}}{\theta^2}) + \frac{d}{2(|\cU| - 1)} \cdot \frac{4t^2\Tilde{p}^2}{\theta^4} \Big]\\ =&\, [1 + o(1)] \cdot 2\Tilde{p}t \Big[ 1 + \frac{t\Tilde{p}}{\theta^2} \Big(1 + \frac{d}{\theta^2(|\cU| - 1)} \Big)\Big] = [1 + o(1)]\cdot 2\Tilde{p}t\Big[\Big(1 + t \, \frac{d + \theta^2 (|\cU| - 1)}{d + N\theta^2}\Big)\Big]\\ =&\, [1 + o(1)]\cdot 2\Tilde{p}t \big(1 + t (1 - \tau)\log^{-3}(m) \big). \end{align} By \eqref{eqn:ctau}, in this critical case we have \begin{align} c_{\tau} q_m = \frac{\theta^4}{\theta^2 + (1 - \tau)d/m} \asymp q_m, \quad \Tilde{p} = \frac{\theta^4(|\cU| - 1)}{N\theta^2 + d}\asymp \log^{-2}(m) \asymp c_{\tau}q_m \cdot \log^{-3}(m), \end{align} which leads to \begin{align} &\, \log \E \exp \bigg(\frac{2t}{N + d/\theta^2}\, y_u \sum_{j\in \cU \setminus \{ u \} } \langle \bx_u,\bx_j\rangle y_j \bigg)\\ = &\, \log\E\big( \exp(s \langle \bx_u , \widehat{\bmu}^{(-u)}\rangle y_u ) \big) = 2c_{\tau} \big(t + t^2 (1 - \tau)\log^{-3}(m) \big) \log^{-3}(m)\cdot q_m. \end{align} On the other hand, we have \begin{align} &\, \E[e^{tA_{uj}y_u y_j}|y_u] = \frac{1}{2} \E[e^{tA_{uj}}|y_u y_j = 1] + \frac{1}{2} \E[e^{-tA_{uj}}|y_u y_j = -1] = \frac{1}{2}[\alpha e^t + (1 - \alpha)] + \frac{1}{2}[\beta e^{-t} + (1 - \beta)]\\ = &\, 1 + \frac{\alpha(e^t - 1) + \beta(e^{-t} - 1)}{2} = \E[e^{tA_{uj}y_u y_j}], \end{align} where, as before, the last equality holds since the result on the second line does not depend on $y_u$. Conditioned on $y_u$, the variables $\{A_{uj}y_u y_j\}_{j \neq u}$ are i.i.d.; then, using $\alpha = aq_m /m$, $\beta = bq_m /m$ and $\log(1 + x) = x$ for $x = o(1)$, we have \begin{align} &\, \log\E \Big[ \exp\Big(t\log(a/b) y_u \sum_{j\in \cU\setminus \{u\} } A_{uj}y_j \Big) \Big] = (|\cU| - 1) \cdot \log\Big(1 + \frac{aq_m [(a/b)^t - 1] + bq_m [(b/a)^t - 1] }{2m}\Big)\\ =&\, (1 + o(1)) \frac{a[(a/b)^t - 1] + b[(b/a)^t - 1]}{2} \log^{-3}(m) \cdot q_m. \end{align} Due to the independence between the graph $\bA$ and feature vectors $\bX$ conditioned on $y_{u}$, one has \begin{align} &\, q^{-1}_m \log \E e^{tJ_{m, u}} = q^{-1}_m \log \E \exp \bigg(\frac{2t}{N + d/\theta^2}\, y_u \sum_{j\in \cU \setminus \{ u \} } \langle \bx_u,\bx_j\rangle y_j \bigg) + q^{-1}_m \log\E \Big[ \exp\Big(t\log(a/b) y_u \sum_{j\in \cU\setminus \{u\} } A_{uj}y_j \Big) \Big]\\ =&\, [1 + o(1)]\cdot \Big[ \frac{a[(a/b)^t - 1] + b[(b/a)^t - 1]}{2} + 2c_{\tau} \big(t + t^2 (1 - \tau)\log^{-3}(m) \big)\Big] \log^{-3}(m) \asymp \log^{-3}(m), \end{align} where the last line holds since $a, b, c_{\tau} \asymp 1$. The proof of \eqref{eqn:expectationJmu} is then established once the large deviation results from graph $\bA$ and feature matrix $\bX$ are added together.
\end{proof} \begin{lemma}\label{eqn:decoupleuandv} Denote by $p_{\bA}(\cdot|\widetilde{\ell}_i, \widetilde{\by}_{-i})$ the conditional probability mass function of $\bA$ given $y_i = \widetilde{\ell}_i\in \{\pm 1\}$ and $\by_{-i} = \widetilde{\by}_{-i} \in \{\pm 1\}^{N-1}$. Denote by $p_{\bX}(\cdot|\widetilde{\ell}_i, \widetilde{\by}_{-i})$ the conditional probability density function of $\bX$ given $y_i = \widetilde{\ell}_i\in \{\pm 1\}$ and $\by_{-i} = \widetilde{\by}_{-i} \in \{\pm 1\}^{N-1}$. Then \begin{align} \log \bigg( \frac{\P(\bA |y_{u}, y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} })}{\P(\bA |-y_{u}, -y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} })} \bigg) = \log \Big( \frac{p_{\bA}(\bA|y_{u}, \by_{-u})}{p_{\bA}(\bA|-y_{u}, \by_{-u})} \Big) + \log \Big( \frac{p_{\bA}(\bA|y_{v}, \by_{-v})}{p_{\bA}(\bA|-y_{v}, \by_{-v})} \Big),\\ \log \bigg( \frac{\P(\bX |y_{u}, y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} })}{\P(\bX | -y_{u}, -y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} })} \bigg) = \log \Big( \frac{p_{\bX}(\bX|y_{u}, \by_{-u})}{p_{\bX}(\bX|-y_{u}, \by_{-u})} \Big) + \log \Big( \frac{p_{\bX}(\bX|y_{v}, \by_{-v})}{p_{\bX}(\bX|-y_{v}, \by_{-v})} \Big). \end{align} \end{lemma} \begin{proof}[Proof of \Cref{eqn:decoupleuandv}] We start with the graph part. For each vertex $u\in \cV_{\sU}$, denote $\cT_{u} = \{j\in [N]\setminus{u} : y_u y_j = 1\}$ and $\cS_{u} = \{j\in [N]\setminus{u} : A_{uj} = 1\}$, then \begin{align} p_{\bA}(\bA|y_{u}, \by_{-u}) \propto &\, \alpha^{|\cT_{u} \cap \cS_{u}|} \cdot (1 - \alpha)^{|\cT_{u} \setminus \cS_{u}|} \cdot \beta^{|\cS_{u} \setminus \cT_{u}|} \cdot (1 - \beta)^{|[N] \setminus (\cT_{u} \cup \cS_{u} \cup \{u\}) |},\\ p_{\bA}(\bA|-y_{u}, \by_{-u}) \propto &\, \alpha^{|\cS_{u}\setminus \cT_{u}|} \cdot (1 - \alpha)^{|[N] \setminus (\cT_{u} \cup \cS_{u} \cup \{u\}) |} \cdot \beta^{|\cT_{u} \cap \cS_{u}|} \cdot (1 - \beta)^{|\cT_{u} \setminus \cS_{u} |}, \end{align} where $\propto$ hides the factor not involving $\{A_{uj}\}_{j=1}^{N}$ and $y_u$. Then \begin{align} \log \Big( \frac{p_{\bA}(\bA|y_{u}, \by_{-u})}{p_{\bA}(\bA|-y_{u}, \by_{-u})} \Big) = \Big( |\cT_{u} \cap \cS_{u}| - |\cS_{u}\setminus \cT_{u}|\Big) \log\Big( \frac{\alpha}{\beta} \Big) + \Big( |\cT_{u} \setminus \cS_{u}| - |[N] \setminus (\cT_{u} \cup \cS_{u} \cup \{u\}) |\Big) \log\Big(\frac{1 - \alpha}{1 - \beta} \Big). \end{align} For the left hand side, we assume $y_{u} = 1, y_v = -1$ and factor out the terms irrelevant to $u$ and $v$, then \begin{align} \P(\bA |y_{u}, y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} }) \propto \beta^{A_{uv}}(1 - \beta)^{1 - A_{uv}} \cdot &\, \alpha^{|\cT_{u} \cap \cS_{u}|} \cdot (1 - \alpha)^{|\cT_{u} \setminus \cS_{u}|} \cdot \beta^{|\cS_{u} \setminus \cT_{u}|} \cdot (1 - \beta)^{|[N] \setminus (\cT_{u} \cup \cS_{u} \cup \{u\}) |}\\ \cdot &\,\alpha^{|\cT_{v} \cap \cS_{v}|} \cdot (1 - \alpha)^{|\cT_{v} \setminus \cS_{v}|} \cdot \beta^{|\cS_{v} \setminus \cT_{v}|} \cdot (1 - \beta)^{|[N] \setminus (\cT_{v} \cup \cS_{v} \cup \{v\}) |}\, . 
\end{align} We perform the same calculation under the assumption $y_{u} = -1, y_v = 1$, which gives \begin{align} \P(\bA |-y_{u}, -y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} }) \propto \beta^{A_{uv}}(1 - \beta)^{1 - A_{uv}} \cdot &\, \alpha^{|\cS_{u}\setminus \cT_{u}|} \cdot (1 - \alpha)^{|[N] \setminus (\cT_{u} \cup \cS_{u} \cup \{u\}) |} \cdot \beta^{|\cT_{u} \cap \cS_{u}|} \cdot (1 - \beta)^{|\cT_{u} \setminus \cS_{u} |}\\ \cdot &\, \alpha^{|\cS_{v}\setminus \cT_{v}|} \cdot (1 - \alpha)^{|[N] \setminus (\cT_{v} \cup \cS_{v} \cup \{v\}) |} \cdot \beta^{|\cT_{v} \cap \cS_{v}|} \cdot (1 - \beta)^{|\cT_{v} \setminus \cS_{v} |}\,, \end{align} where the probability of generating edge $(u, v)$ remains unchanged when flipping the signs of $u$ and $v$ at the same time. The proof follows easily by rearranging and separating relevant terms. For the second part, note that \begin{align} p_{\bX}(\bX|\by) \propto \E_{\bmu} \exp\Big( - \frac{1}{2}\sum_{j\in \cV} \|\bx_j - y_j \bmu\|_2^2 \Big) \propto \E_{\bmu} \exp\Big( \Big\langle \sum_{j\in \cV} \bx_j y_j, \bmu \Big\rangle \Big), \end{align} where $\propto$ hides quantities that do not depend on $\by$. Consequently, \begin{align} \frac{p_{\bX}(\bX|y_{u}, \by_{-u})}{p_{\bX}(\bX|-y_{u}, \by_{-u})} = \E_{\bmu}\exp(2y_u\bx_u^{\sT}\bmu). \end{align} For the left-hand side, similarly, let $\propto$ hide the quantities independent of $u$ and $v$; then \begin{align} \P(\bX |y_{u}, y_v, \by_{\sL}, \by_{\sU \setminus \{ u, v \} }) \propto \E_{\bmu}\exp(y_u\bx_u^{\sT}\bmu + y_v\bx_v^{\sT}\bmu) = \E_{\bmu}\exp(y_u\bx_u^{\sT}\bmu) \cdot \E_{\bmu}\exp(y_v\bx_v^{\sT}\bmu). \end{align} The conclusion follows easily by the linearity of expectation. \end{proof} \begin{lemma}[{Lemma F.4, \cite{abbe2022lp}}]\label{lem:F4ABBElp} Denote by $p_{\bX}(\cdot|\widetilde{\ell}_i, \widetilde{\by}_{-i})$ the conditional probability density function of $\bX$ given $y_i = \widetilde{\ell}_i\in \{\pm 1\}$ and $\by_{-i} = \widetilde{\by}_{-i} \in \{\pm 1\}^{N-1}$. Then there exists some large enough constant $c>0$ such that for each $i \in [N]$, with probability at least $1 - e^{-c q_N}$, \begin{align} \bigg| y_i \log \bigg( \frac{p_{\bX}(\bX|y_i, \by_{-i}) }{p_{\bX}(\bX|-y_i, \by_{-i}) } \bigg) - \frac{2\theta^2}{N\theta^2 + d} \sum_{j\neq i} \<\bx_i, \bx_{j}\>y_j \bigg|/ q_N = o(1). \end{align} \end{lemma} \begin{lemma}[{Lemma F.5, \cite{abbe2022lp}}]\label{lem:F5ABBElp} Denote by $p_{\bA}(\cdot|\widetilde{\ell}_i, \widetilde{\by}_{-i})$ the conditional probability mass function of $\bA$ given $y_i = \widetilde{\ell}_i\in \{\pm 1\}$ and $\by_{-i} = \widetilde{\by}_{-i} \in \{\pm 1\}^{N-1}$. Then there exists some large enough constant $c>0$ such that for each $i \in [N]$, with probability at least $1 - e^{-c q_N}$, \begin{align} \bigg| y_i \log \bigg( \frac{p_{\bA}(\bA|y_i, \by_{-i}) }{p_{\bA}(\bA|-y_i, \by_{-i}) } \bigg) - \log\Big( \frac{a}{b}\Big) \sum_{j \neq i} A_{ij} y_j \bigg| / q_N = o(1).
\end{align} \end{lemma} \section{Performance of optimal spectral estimator} According to \cite{abbe2022lp} and the discussion in \Cref{sec:ITLowerBoundsCSBM}, the ideal estimator for the label of the node $i\in\cV_{\sU}$ could be derived from \begin{align} \widehat{y}_i^{\,\,\mathrm{genie}} = \underset{y = \pm 1} {\arg \max}~\P(y_i = y|\bA, \bX, \by_{-i}) \end{align} \begin{lemma}\label{lem:optimalApprox} For each given $i\in \cV_{\sU}$, following the $o_{\P}(q_m;q_m)$ notation in \cite{abbe2022lp}, we have \begin{align} \bigg| \log \Big( \frac{\P(y_i = 1|\bA, \bX, \by_{-i})}{\P(y_i = -1|\bA, \bX, \by_{-i})} \Big) - \Big[\log\big(\frac{a}{b} \big) \bA \by + \frac{2}{N + d/\theta^2}\bG \by \Big]_i\bigg| = o_{\P}(q_m;q_m).\notag \end{align} \end{lemma} \begin{proof}[Proof of \Cref{lem:optimalApprox}] By definition of conditional probability and the independence between $\bA|\by$ and $\bX|\by$, for vertex $i\in \cV_{\sU}$, one has \begin{subequations} \begin{align} &\,\log \Big( \frac{\P(y_i = 1|\bA, \bX, \by_{-i})}{\P(y_i = -1|\bA, \bX, \by_{-i})} \Big) = \log \Big( \frac{\P(\bA, \bX| y_i = 1, \by_{-i})}{\P(\bA, \bX| y_i = -1, \by_{-i})} \Big)\\ =&\, \log \Big( \frac{\P(\bA| y_i = 1, \by_{-i})}{\P(\bA| y_i = -1, \by_{-i})} \Big) + \log \Big( \frac{\P(\bX| y_i = 1, \by_{-i})}{\P(\bX| y_i = -1, \by_{-i})} \Big). \end{align} \end{subequations} Then, one could apply Lemmas F.4, F.5 in \cite{abbe2022lp} separately to conclude the results for the two terms above. \end{proof} From \Cref{lem:optimalApprox} above, the ideal estimator $(\widehat{y}_1^{\mathrm{genie}},\ldots, \widehat{y}_m^{\mathrm{genie}})^\sT$ can be approximated by \begin{equation}\label{eq:approx_genie} \sign \left(\log(a/b) (\bA_{\sU\sL} \by_{\sL} +\bA_{\sU}\by_{\sU} ) + \frac{2}{N + d/\theta^2}(\bG_{\sU\sL}\by_{\sL}+\bG_{\sU}\by_{\sU})\right). \end{equation} Note that $\bA_{\sU\sL}, \by_{\sL}$ and $\bG_{\sU\sL}$ are accessible for us in semi-supervised setting. Below, \Cref{lem:intermediate_close_w} indicates that a scaled version of \eqref{eq:approx_genie} is entrywisely close to $\widehat{\by}_{\mathrm{PCA}}$ in \eqref{eqn:pcaEstimator} up to a global sign flip. \begin{lemma}\label{lem:intermediate_close_w} Denote $\bar{\bu} \coloneqq \by_{\sU}/\sqrt{m}$. For each $i \in \sU$, define \begin{align} \bw \coloneqq \log(a/b) \Big( \bA_{\sU\sL} \by_{\sL}/\sqrt{m} + \bA_{\sU} \bar{\bu} \Big) + \frac{2}{N + d/\theta^2}(\bG_{\sU\sL}\by_{\sL}/\sqrt{m}+\bG_{\sU}\bar{\bu}). \end{align} Then for $\widehat{\by}_{\mathrm{PCA}}$ in \eqref{eqn:pcaEstimator}, there exists some sequence $\{\epsilon_m\}_{m}$ going to $0$ such that \begin{align} \P(\min_{c = \pm 1}\|c\widehat{\by}_{\mathrm{PCA}}- \bw\|_{\infty} \geq \epsilon_m \, m^{-1/2}\log(m)) \lesssim m^{-2}. \end{align} \end{lemma} \begin{proof}[Proof of \Cref{lem:intermediate_close_w}] Define the following intermediate-term \begin{align} \bv = \log \Big( \frac{\alpha}{\beta} \Big) \cdot \bigg( \bA_{\sU\sL} \frac{1}{\sqrt{m}} \by_{\sL}+ \frac{m(\alpha - \beta)}{2} \cdot \bu_{2}(\bA_{\sU}) \bigg) + \frac{2\theta^2}{N\theta^2 + d} \Bigg( \bG_{\sU\sL} \frac{1}{\sqrt{m}}\by_{\sL} + m\theta^2 \cdot \bu_1(\bG_{\sU}) \Bigg). \end{align} By definition of $\alpha$ and $\beta$ in \Cref{ass:asymptotics}, $\alpha/\beta = a/b$. 
We focus on the case $a > b$; then \begin{align} \|\bv - \bw\|_{\infty} \leq &\, \log(a/b)\cdot \|\bA_{\sU}\bar{\bu} - \frac{(a - b)q_m}{2}\bu_2(\bA_{\sU})\|_{\infty} + \frac{2\theta^2}{N\theta^2 + d} \|\bG_{\sU}\bar{\bu} - m\theta^2 \bu_1(\bG_{\sU})\|_{\infty}\\ \|\bv - \widehat{\by}_{\mathrm{PCA}}\|_{\infty} \leq &\, \bigg| \log \Big( \frac{\lambda_1(\bA) + \lambda_2(\bA)}{\lambda_1(\bA) - \lambda_2(\bA)} \Big) - \log(\alpha/\beta) \bigg| \cdot \frac{1}{\sqrt{m}} \| \bA_{\sU \sL} \by_{\sL}\|_{\infty} \\ & + \, \bigg| \log \Big( \frac{\lambda_1(\bA) + \lambda_2(\bA)}{\lambda_1(\bA) - \lambda_2(\bA)} \Big) \lambda_2(\bA_{\sU}) - \log(a/b) \frac{(a - b)q_m}{2} \bigg| \cdot \|\bu_2(\bA_{\sU})\|_{\infty}\\ & + \, \frac{1}{\sqrt{m}}\bigg|\frac{2\lambda_1 (\bG)}{N\lambda_1(\bG) + dN} - \frac{2\theta^2}{N\theta^2 + d}\bigg| \cdot \|\bG_{\sU\sL}\by_{\sL}\|_{\infty} \\ & + \, \bigg|\frac{2\lambda_1 (\bG) \lambda_1 (\bG_{\sU})}{N\lambda_1(\bG) + dN} - \frac{2m\theta^4}{N\theta^2 + d}\bigg| \cdot \|\bu_1(\bG_{\sU})\|_{\infty}. \end{align} Without loss of generality, we assume $\<\bu_1(\bG_{\sU}), \bar{\bu}\> \geq 0$ and $\<\bu_2(\bA_{\sU}), \bar{\bu}\> \geq 0$. Also, by {Lemma B.1, Theorem 2.1 in \cite{abbe2022lp}}, with probability at least $1 - e^{-n}$, \begin{align} \lambda_1 (\bG) = (1 + o(1)) \cdot N\theta^2, \quad \lambda_1 (\bG_{\sU}) = (1 + o(1)) \cdot m\theta^2, \end{align} and for some large constant $c>4$, with probability at least $1 - n^{-c}$, there exists some vanishing sequence $\{\epsilon_m\}_m$ such that \begin{align} \| \bu_1(\bG_{\sU}) - \bG_{\sU} \bar{\bu}/(m \theta^2) \|_{\infty} \lesssim \epsilon_m \cdot m^{-1/2}, \quad \| \bu_1(\bG_{\sU}) \|_{\infty} \lesssim m^{-1/2}. \end{align} One can also obtain the corresponding upper bounds for $\| \bA_{\sU\sL}\|_{2 \to \infty}$ and $\| \bG_{\sU\sL}\|_{2 \to \infty}$. The remaining argument follows similarly to Lemma F.1 in \cite{abbe2022lp} and Corollary 3.1 in \cite{abbe2020entrywise}. \end{proof} \begin{proof}[Proof of \Cref{thm:achievability_CSBM} (1)] First, for each node $i\in \cV_{\sU}$, if there exists some positive constant $\xi$ such that $q_m^{-1}\sqrt{m} y_i (\widehat{\by}_{\mathrm{PCA}})_{i} \geq \xi$, then the estimator $\sign(\widehat{\by}_{\mathrm{PCA}})$ recovers the label of node $i$ correctly. Thus a sufficient condition for exact recovery is \begin{align} q_m^{-1} \sqrt{m}\min_{i\in \sU} y_i (\widehat{\by}_{\mathrm{PCA}})_{i} \geq \xi, \quad \textnormal{ for some positive constant } \xi. \end{align} Recall from \Cref{lem:intermediate_close_w} that, for some vanishing positive sequence $\{\epsilon_{m}\}_m$, we have $\min_{c = \pm 1}\|c\widehat{\by}_{\mathrm{PCA}} - \bw\|_{\infty} \geq \epsilon_m \, m^{-1/2}q_m$ with probability at most $m^{-2}$. Denote $\hat{c} \coloneqq \argmin_{c = \pm 1} \|c\widehat{\by}_{\mathrm{PCA}} - \bw\|_{\infty}$ and $\hat{\bv} = \hat{c} \cdot \widehat{\by}_{\mathrm{PCA}}$. Based on the facts above, the sufficient condition for exact recovery can be further simplified as \begin{align} q_m^{-1} \sqrt{m}\min_{i\in \sU} y_i\hat{v}_{i} \geq q_m^{-1} \sqrt{m}\min_{i\in \sU} y_iw_{i} - \epsilon_m \geq \xi, \end{align} where the last inequality holds for all large $m$ since $\epsilon_{m}$ vanishes to $0$.
Then we have \begin{align} \P(\psi_m = 0) = &\, \P(\sign(\widehat{\by}_{\mathrm{PCA}}) = \by ) \geq \P(q_m^{-1}\sqrt{m}\min_{i\in \sU} y_i \cdot (\widehat{\by}_{\mathrm{PCA}})_{i} \geq \xi )\\ \geq &\, \P(q_m^{-1}\sqrt{m}\min_{i\in \sU} y_i \hat{v}_{i} \geq \xi, \,\, q_m^{-1}\sqrt{m}\|\hat{\bv} - \bw\|_{\infty} < \epsilon_m)\\ \geq &\, \P(q_m^{-1}\sqrt{m}\min_{i\in \sU} y_i w_i \geq \xi, \,\, q_m^{-1}\sqrt{m}\|\hat{\bv} - \bw\|_{\infty} < \epsilon_m)\\ \geq &\, \P(q_m^{-1}\sqrt{m}\min_{i\in \sU} y_i w_i \geq \xi) - \P( q_m^{-1}\sqrt{m}\|\hat{\bv} - \bw\|_{\infty} \geq \epsilon_m)\\ \geq &\, 1 - \sum_{i \in \sU} \P(q_m^{-1}\sqrt{m} y_i w_i < \xi ) - m^{-2} = 1 - m\cdot \P(q_m^{-1}\sqrt{m} y_i w_i < \xi ) - m^{-2}. \end{align} Note that $\sqrt{m} w_i y_i = W_{n, i}([N])$ defined in \Cref{lem:WmuLDP}. We take $0 < \epsilon < \frac{a-b}{2(1 - \tau)}\log(a/b) + 2c_{\tau}$, then for any $\delta >0$, there exists some large enough positive constant $M$ such that for $m \geq M$, $\epsilon_m < \xi$, it follows that \begin{align} \P(\sqrt{m} w_i y_i \leq \xi q_m) \leq \exp\Big( - \Big(\sup_{t\in \R} \Big\{ \xi t + I(t, a_{\tau}, b_{\tau}, c_{\tau})) \Big\} + \delta \Big)\cdot \log(m)\Big). \end{align} By combining the arguments above, the probability of accomplishing exact recovery is lower bounded by \begin{align} \P(\psi_m = 0) \geq 1 - m^{1 - \sup_{t\in \R}\{ \xi t + I(t, a_{\tau}, b_{\tau}, c_{\tau}))\} + \delta} - m^{-2}\quad \overset{m \to \infty}{\longrightarrow} 1, \end{align} since $I(a_{\tau}, b_{\tau}, c_{\tau}) = \sup_{t\in \R}\{ \xi t + I(t, a_{\tau}, b_{\tau}, c_{\tau}))\} > 1$ by assumption when choosing small enough $\xi$ and $\delta$. \end{proof} \begin{proof}[Proof of \Cref{thm:achievability_CSBM} (2)] The proof procedure follows similarly to Theorem 4.2 in \cite{abbe2022lp}, where we should apply the large deviation results \Cref{lem:WmuLDP} and \Cref{lem:intermediate_close_w} instead. \end{proof} \begin{proof}[Proof of \Cref{thm:general_pcaEstimator}] The proof procedure follows similarly to Theorem 4.4 in \cite{abbe2022lp}, where we should apply the new large deviation result \Cref{lem:WmuLDP} instead. The proof is simplified since $\by_{\sL}$ is accessible under the semi-supervised learning regime. \end{proof} \section{The analysis of the ridge regression on linear graph convolution} For $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$, in this section, we focus on analyzing how these parameters $c_\tau, a_\tau $ and $b_\tau$ defined in Assumption~\ref{ass:asymptotics} affect the learning performances of the \textit{linear} graph convolutional networks. We consider a graph convolutional kernel $h(\bX) \in\R^{N\times d}$ which is a function of data matrix $\bX$ and adjacency matrix $\bA$ sampled from $\CSBM ( \by, \bmu, \alpha, \beta, \theta)$. We add self-loop and define the new adjacency matrix $\bA_{\rho} \coloneqq \bA + \rho \bI_{N}$, where $\rho \in\R$ represents the intensity. Let $\bD_{\rho}$ be the diagonal matrix whose diagonals are the average degree for $\bA_{\rho}$, i.e., $[\bD_{\rho}]_{ii} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^N(\bA_{\rho})_{ij}$ for each $i\in [N]$. Denote $\bD:=\bD_0$, indicating no self-loop added. Recall that the normalization we applied for the linear graph convolutional layer is \begin{align} h(\bX)=\frac{1}{\sqrt{Nq_m}} \bD^{-1}_{\rho}\bA_{\rho}\bX. 
\end{align} Let us denote the expectation of the average degree of $\bA_\rho$ by \begin{equation}\label{eq:tilde_d} \widetilde{d} \coloneqq \frac{1}{N} \sum_{i=1}^{N} \E[\bD_\rho]_{ii} = \frac{a_\tau + b_\tau}{2}q_m + \rho. \end{equation} However, $h(\bX)$ is hard to deal with when we consider its large deviation principle. Instead, we use the following $\widetilde{h}(\bX)$ for analysis \begin{align} \widetilde{h}(\bX)=\frac{1}{\widetilde d \cdot\sqrt{Nq_m}} \bA_{\rho}\bX. \end{align} \subsection{Large Deviation Results for Ridge Regression Estimators}\label{sec:LDP_ridge} For any $i\in\cV$, we denote $\cN_i\subset \cV$ as the neighborhood of vertex $i\in\cV$. We consider the case that the feature learned by the GCN is $\zeta \sqrt{q_m}\, \bmu$ for some constant $\zeta$, i.e., $\bw =\zeta \sqrt{q_m} \, \bmu$. Notice that \begin{align} h_i =&\, y_i \zeta \sqrt{q_m} (\E \bD_\rho)^{-1} ( \bA_\rho \bX)_{i:} \bmu \\ =&\, \frac{y_i \zeta \sqrt{q_m} }{ \widetilde{d} } \Big(\sum_{j\in \cN_i} \bmu^{\sT}(\theta y_j \bmu + \bz_j) + \rho\bmu^{\sT}(\theta y_i \bmu + \bz_i)\Big)\\ =&\, \underbrace{\frac{\rho \zeta \theta \sqrt{q_m} y_i^2 \|\bmu\|_2^2}{\widetilde{d}} }_{I_1} + \underbrace{\frac{\rho \zeta \sqrt{q_m} y_i \bmu^{\sT}\bz_i}{\widetilde{d}}}_{I_2} + \underbrace{\frac{\zeta \theta \sqrt{q_m} \|\bmu\|_2^2}{\widetilde{d}}\sum_{j\neq i} \bA_{ij} y_i y_j }_{I_3} + \underbrace{\frac{\zeta \sqrt{q_m}}{\widetilde{d}}\sum_{j\neq i} y_i \bA_{ij}\bmu^{\sT} \bz_j }_{I_4}.\label{eq:decompose_h_i} \end{align} Here $\cN_i$ is the neighborhood of vertex $i\in\cV_{\sU}$. \begin{proposition}[LDP for Ridge Regression]\label{prop:LDP_regression} Under the Assumption~\ref{ass:asymptotics} with $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$ and $\rho = s q_m$ for some constant $s \in \R$. Assume $d/N \ll q_m$, then for any fixed $i\in\cV_\sU$ and constant $\zeta>0$, we have \[\lim_{m\to\infty}q_m^{-1}\log \P(y_i \zeta (\E \bD_\rho)^{-1} ( \bA_\rho \bX)_{i:} \bmu \le \eps \sqrt{q_m})=-\sup_{t\in\R}\{\eps t+g(a, b, c, \tau, \zeta, s, t)\}\] for sufficiently small $\eps>0$ and all large $m$, where \begin{align} g(a_\tau, b_\tau, c_\tau, \zeta, s, t)= &\, g_1(t) + g_2(t),\\ g_1(t) =&\, - \frac{2ts \zeta \sqrt{c_\tau}}{a_\tau+b_\tau+2s} - \frac{2t^2 s^2 \zeta^2}{(a_\tau+b_\tau+2s)^2},\\ g_2(t) =&\, - \frac{a_\tau}{2 }\Big[\exp\Big( \frac{2t\zeta \sqrt{c_\tau}}{a_\tau+b_\tau+2s}\Big) - 1\Big] - \frac{b_\tau}{2 }\Big[\exp\Big( - \frac{2t\zeta \sqrt{c_\tau}}{a_\tau+b_\tau+2s} \Big) - 1\Big]. \end{align} Consequently, for any sufficiently small $\eps>0$ and any $\delta>0$, there exists some $N_0>0$ such that for all $N\ge N_0$, we have \[\P(y_i \zeta (\E \bD_\rho)^{-1} ( \bA_\rho \bX)_{i:} \bmu \le \eps \sqrt{q_m})=\exp{(-q_m[\sup_{t\in\R}\{\eps t+g(a_\tau, b_\tau, c_\tau, \zeta, s, t)\}-\delta])}\] \end{proposition} \begin{proof}[Proof of \Cref{prop:LDP_regression}] Our goal is to calculate the following moment-generating function \begin{align} \E[\exp(t h_i)] \coloneqq \E_{\bA}[\E_{\bX}[\exp(t h_i) | \bA]]. \end{align} First, since $\|\bmu\|_2 = 1$, $y_i^2 = 1$, then in \eqref{eq:decompose_h_i}, $ I_1 = \rho \zeta \theta \sqrt{q_m}/\widetilde{d}$, and it is deterministic. Second, $\bmu^{\sT} \bz_i \sim \Normal(0, 1)$, then $I_2 \sim \Normal(0, \rho^2\zeta^2q_m/\widetilde{d}^2)$, and \begin{align} \E_{\bX}[\exp(tI_2)|y_i] = \exp\Big( \frac{t^2 \rho^2 \zeta^2 q_m }{2\widetilde{d}^2} \Big) = \E[\exp(tI_2)], \end{align} where the last equality holds since the result we obtained is independent of $y_i, \bA$. 
Let $\cN_i$ denote the set of neighbors of node $i$ and $|\cN_i|$ denote its cardinality. Conditioned on $\bA, \by, \bmu$, then $I_4 \sim \Normal(0, |\cN_i|\zeta^2 q_m/\widetilde{d}^2)$, and \begin{align} \E_{\bX}[\exp(tI_4)|\bA, y_i, \bmu] = \exp\Big( \frac{t^2 \zeta^2 |\cN_i|q_m}{2\widetilde{d}^2} \Big). \end{align} Note that $|\cN_i| = \sum_{j = 1}^{N} A_{ij}$, and $I_3$ is independent of $\bX$, then \begin{align} \E_{\bX}[\exp\big(t(I_3 + I_4)\big)|\bA, y_i, \bmu] = \exp\bigg( \frac{t\zeta\theta\sqrt{q_m}}{\widetilde{d}}\sum_{j\neq i} A_{ij} \Big(y_i y_j + \frac{t\zeta\sqrt{q_m}}{2\widetilde{d}\theta} \Big) \bigg) \end{align} One could take the expectation over $\bA$ conditioned on $\by$, then \begin{align} &\, \E_{\bA}\Big[\exp \bigg( \frac{t\zeta \theta \sqrt{q_m}}{\widetilde{d}} A_{ij} \Big(y_i y_j + \frac{t\zeta\sqrt{q_m}}{2\widetilde{d}\theta} \Big) \bigg) \Big| y_i \Big] \\ =&\, \frac{1}{2} \E_{\bA}\Big[\exp \bigg( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} A_{ij} \Big( 1 + \frac{t\zeta\sqrt{q_m}}{2\widetilde{d}\theta} \Big) \bigg) \Big| y_i y_j = 1\Big] + \frac{1}{2} \E_{\bA}\Big[\exp \bigg( \frac{t\zeta\theta\sqrt{q_m}}{\widetilde{d}} A_{ij} \Big( -1 + \frac{t\zeta\sqrt{q_m}}{2\widetilde{d}\theta} \Big) \bigg)\Big| y_i y_j = -1\Big]\\ =&\, \frac{\alpha}{2} \exp\bigg( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} + \frac{t^2 \zeta^2 q_m}{2\widetilde{d}^2} \bigg) + \frac{1 - \alpha}{2} + \frac{\beta}{2}\exp\bigg( -\frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} + \frac{t^2 \zeta^2 q_m}{2\widetilde{d}^2} \bigg) + \frac{1 - \beta}{2}\\ =&\, 1 + \frac{\alpha}{2} \bigg( \exp\Big( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} + \frac{t^2 \zeta^2 q_m}{2\widetilde{d}^2} \Big) - 1\bigg) + \frac{\beta}{2} \bigg( \exp\Big( - \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} + \frac{t^2 \zeta^2 q_m}{2\widetilde{d}^2} \Big) - 1\bigg)\\ =&\, 1 + \frac{\alpha}{2} \bigg( \exp\Big( (1 + o(1))\frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} \Big) - 1\bigg) + \frac{\beta}{2} \bigg( \exp\Big( - (1 + o(1)) \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} \Big) - 1\bigg), \end{align} where the last equality holds since $\widetilde{d} \asymp q_m$, $\theta \asymp \sqrt{q_m}$, $\zeta \asymp 1$, and for some fix $t$, $1 \asymp |\frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}}| \gg \frac{t^2 \zeta^2 q_m}{2\widetilde{d}^2} \asymp q^{-1}_m = o(1)$ for sufficiently large $m$. At the same time, the result above is again independent of $y_i, \bmu$. Recall $\alpha = a q_m/m = o(1)$, $\beta = bq_m/m = o(1)$, $\frac{\theta^4}{\theta^2 + d/N} = c_{\tau} q_m$ in \Cref{ass:asymptotics}. 
By using $\log(1 + x) = x$ for $x = o(1)$, we then have \begin{align} q^{-1}_m \log \E_{\bA} \big[ \E_{\bX}[\exp\{ t(I_3 + I_4)\}] \big] =&\, \frac{1}{q_m}\sum_{j\in [N]\setminus \{ i \} } \log \bigg( 1 + \frac{\alpha}{2}\Big[\exp\Big( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}}\Big) - 1\Big] + \frac{\beta}{2}\Big[\exp\Big( - \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} \Big) - 1\Big]\bigg)\\ =&\, \frac{N - 1}{q_m} \cdot \bigg( \frac{a q_m}{2m}\Big[\exp\Big( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}}\Big) - 1\Big] + \frac{b q_m}{2m}\Big[\exp\Big( - \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} \Big) - 1\Big] \bigg)\\ =&\, \frac{a}{2(1 - \tau)}\Big[\exp\Big( \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}}\Big) - 1\Big] + \frac{b}{2(1 - \tau)}\Big[\exp\Big( - \frac{t\zeta \theta\sqrt{q_m}}{\widetilde{d}} \Big) - 1\Big]\\ = &\, \frac{a}{2(1 - \tau)}\Big[\exp\Big( \frac{2t\zeta c_{\tau} }{a + b + 2s}\Big) - 1\Big] + \frac{b}{2(1 - \tau)}\Big[\exp\Big( - \frac{2t\zeta c_{\tau} }{a + b + 2s} \Big) - 1\Big], \end{align} where the last equality holds since we apply $\widetilde{d} = (\frac{a + b}{2} + s)q_m$ and $\theta^2 = (1 + o(1)) c_{\tau} q_m$ since $d/N \ll q_m$ by assumption. Combining the calculations above, we then compute the following rate function \begin{align} g(a_\tau, b_\tau, c_\tau, \zeta, s, t) =&\, - \,q_m^{-1}\log\E[\exp(t h_i)] = - \,q_m^{-1}\log\E_{\bA}\big[ \E_{\bX}[\exp(t(I_1 + I_2 + I_3 + I_4))| \bA] \big]\\ =&\, -(g_1(t) + g_2(t)). \end{align} Then, we can apply Lemma H.5 in \cite{abbe2022lp} to conclude our results in this proposition. \end{proof} \begin{lemma}\label{lem:rate_fun} For function $g(a_\tau, b_\tau, c_\tau, \zeta, s, t)$ defined in Proposition~\ref{prop:LDP_regression}, we know that \[J(a_\tau, b_\tau, c_\tau, \zeta, s )\coloneqq \sup_{t\in\R}g(a_\tau, b_\tau, c_\tau, \zeta, s, t)\le I(a_\tau, b_\tau, c_\tau)\] and the equality is attained when $s = \frac{2c_{\tau}}{\log(a_\tau/b_\tau)}=\frac{2c_{\tau}}{\log(a/b)} $. Moreover, if $s=0$, then $g_1(t)\equiv0$ and \[J(a_\tau, b_\tau, c_\tau, \zeta, 0 )= I(a_\tau, b_\tau, 0)\le J(a_\tau, b_\tau, c_\tau, \zeta, s ).\] \end{lemma} \begin{proof}[Proof of \Cref{lem:rate_fun}] Notice that both $g_1(t)$ and $g_2(t)$ are concave. First, $g_1(t)$ achieves its maximum at the point $t_1 := - \sqrt{c_\tau}(a+b+2s)/(2s \zeta )$, and $g_2(t)$ achieves its maximum at the point $t_2 := (a+b+2s)\log(b/a) /(4\zeta \sqrt{c_\tau})$. Note that \begin{align} \sup_{t\in\R}g(a_\tau, b_\tau, c_\tau, \zeta, s, t) \leq \max_{t\in \R} g_1(t) + \max_{t\in \R} g_2(t) = g_1(t_1) + g_2(t_2),\label{eq:sup_g} \end{align} where \begin{align} g_1(t_1)=c_{\tau}/2,\quad \text{ and }g_2(t_2) = (\sqrt{a} - \sqrt{b})^2/(2(1 - \tau)). \end{align} Thus, this proves the upper bound on $J(a_\tau, b_\tau, c_\tau, \zeta, s )$. Notice that the equality in \eqref{eq:sup_g} holds if $t_1=t_2$. It turns out that when $s = \frac{2c_{\tau}}{\log(a/b)} $, then $t_1 = t_2$, and $g_1(t_1) = c_{\tau}/2$, $g_2(t_2) = (\sqrt{a} - \sqrt{b})^2/(2(1 - \tau))$. Therefore, in this case, we have \begin{align} \max_{t\in \R} g(a, b, c, \tau, \zeta, t) = \frac{(1 - \tau)^{-1}(\sqrt{a} - \sqrt{b})^2 + c_{\tau}}{2}=I(a_\tau, b_\tau, c_\tau). 
\end{align} \end{proof} \subsection{Preliminary Lemmas on Ridge Regression Estimator} Note the facts that $(\bB^{\top}\bB + \bI_{d})^{-1}\bB^{\top} = \bB^{\top}(\bB\bB^{\top} + \bI_{N})^{-1}$ for any matrix $\bB^{N \times d}$, $\bP_\sU^2=\bP_\sU$, $\bP_\sL^2=\bP_\sL$ and $\bP_\sU = \bI_N - \bP_\sL$, then, \begin{align} h(\bX)\widehat{\bbeta} &\,= h(\bX)(h(\bX)^{\top} \bP_\sL h(\bX)+ \lambda \bI_d )^{-1}h(\bX)^{\top} \bP_\sL\by \\ &\, = h(\bX)[ (\bP_\sL h(\bX))^{\top} \bP_\sL h(\bX) + \lambda \bI_d ]^{-1}(\bP_\sL h(\bX))^{\top}\by \\ &\, = h(\bX) h(\bX)^{\top} \bP_\sL[ \bP_\sL h(\bX)h(\bX)^{\top}\bP_\sL + \lambda \bI_N ]^{-1}\by. \end{align} Consequently, the test risk can be written as \begin{align} \cR(\lambda ) =&\, \frac{1}{m}\by^{\top} \bQ^{\top}\bP_{\sU}\bQ\by,\,\text{ where } \bQ:=h(\bX) h(\bX)^{\top} \bP_\sL[ \bP_\sL h(\bX)h(\bX)^{\top}\bP_\sL + \lambda \bI_N ]^{-1}-\bI_N \end{align} \begin{lemma}\label{lem:approx_A} Assume that $|\rho|\lesssim q_m$ and $q_m\gtrsim \log N$. Let $\bD_{\rho}$ be the diagonal matrix where each diagonal represents the average degree of the graph $\bA_{\rho}$ after adding the self-loop $\rho$ and let $\widetilde{d}$ denote the expected average degree of $\bA_{\rho}$. Then $\|\bD^{-1}_{\rho} - (\widetilde{d})^{-1}\|\lesssim q_m^{-3/2}$ with probability at least $1 - e^{-N}$. Furthermore, with probability at least $1-2N^{-10}- 2e^{-N}$, \begin{align} \| \bD^{-1}_{\rho}\bA_{\rho} - \E \bA_{\rho} /\widetilde{d} \| \lesssim q_m^{-1/2}.\label{eq:approx_A} \end{align} Consequently, $\|\bD^{-1}_{\rho}\bA_{\rho}\| \leq C$ with probability at least $1-2N^{-10}- 2e^{-N}$ for some constant $C > 0$. \end{lemma} \begin{proof}[Proof of \Cref{lem:approx_A}] First, for any $i\in[N]$, note that $[\bD_{\rho}]_{ii} = \frac{1}{N} \sum_{i = 1}^{N} (\rho + \sum_{j \neq i}\bA_{ij}) = \rho + \frac{1}{N}\sum_{i = 1}^{N}\sum_{j \neq i}\bA_{ij}$, and $\widetilde{d} = \E [\bD_{\rho}]_{ii} = \frac{a_{\tau} + b_{\tau}}{2 }q_m + \rho$, then by Bernstein inequality, \begin{align} \P\Big( \big| [\bD_{\rho}]_{ii} - \widetilde{d} \big| \geq \sqrt{q_m} \Big) = \P\Big( \Big| \sum_{i = 1}^{N}\sum_{j \neq i} (\bA_{ij} - \E \bA_{ij}) \Big| \geq N \sqrt{q_m} \Big) \lesssim \exp(-N). \end{align} Thus by comparing the entrywise difference of $[\bD_{\rho}]_{ii} - \widetilde{d}$, with probability at least $1 - e^{-N}$, we have \begin{align} \|\bD^{-1}_{\rho} - (\widetilde{d})^{-1}\| \lesssim q_m^{-3/2}.\label{eq:d_inv_diff} \end{align} For the second part of the statement, by the triangle inequality, we have \begin{align} \| \bD^{-1}_{\rho}\bA_{\rho} - \E \bA_{\rho} /\widetilde{d} \| \leq \| \bD^{-1}_{\rho}\bA_{\rho} - \bD^{-1}_{\rho}\E \bA_{\rho}\| + \| \bD^{-1}_{\rho}\E \bA_{\rho} - \E \bA_{\rho} /\widetilde{d} \| \end{align} For the first term, we proved that with probability at least $1 - e^{-N}$, $[D_{\rho}]_{ii} = (1 + o(1)) \widetilde{d} \asymp q_m$ with deviation at most $\sqrt{q_m}$, then according to \Cref{lem:concentrateA}, with probability at least $1-2N^{-10}- 2e^{-N}$, when $q_m \gtrsim \log(N)$, the following is bounded by \begin{align} &\, \|\bD^{-1}_{\rho}\,(\bA_{\rho} - \E\bA_{\rho})\| \leq (\|\widetilde{d}^{-1}\| + \|\bD^{-1}_{\rho} - \widetilde{d}^{-1}\| ) \cdot \|(\bA_{\rho} - \E\bA_{\rho})\| \leq (q_m^{-1} + q_m^{-3/2} )\sqrt{q_m} \asymp q_m^{-1/2}. 
\end{align} For the second term, note that $\|\E \bA_{\rho}\| \lesssim q_m$, then by results above \begin{align} &\, \| \big( \bD^{-1}_{\rho} - \widetilde{d}^{-1} \big) \, \E\bA_{\rho}\| \leq \| \bD^{-1}_{\rho} - (\widetilde{d})^{-1}\| \cdot \|\E\bA_{\rho}\|\lesssim \frac{1}{\sqrt{q_m}}. \end{align} Therefore, with probability at least $1-2N^{-10}- 2e^{-N}$, \begin{align} \| \bD^{-1}_{\rho}\bA_{\rho} - \E \bA_{\rho} /\widetilde{d} \| \lesssim q_m^{-1/2}. \end{align} For the last part, $\|\E \bA_{\rho} /\widetilde{d}\| \lesssim 1$ since \begin{align} \E \bA_{\rho} /\widetilde{d} = \frac{1}{N} \ones \ones^{\sT} + \frac{(a - b)q_m + 2\rho}{(a + b)q_m + 2\rho} \frac{1}{N} \by \by^{\sT}. \end{align} Then the proof is completed by triangle inequality since $q_m \gg 1$, and \begin{align} \|\bD^{-1}_{\rho}\bA_{\rho}\| \leq \|\E \bA_{\rho} /\widetilde{d}\| + \| \bD^{-1}_{\rho}\bA_{\rho} - \E \bA_{\rho} /\widetilde{d} \| \lesssim 1 + q_m^{-1/2}\lesssim 1. \end{align} \end{proof} \begin{lemma}\label{lem:approx_X} Consider $\bX\sim\GMM(\bmu,\by,\theta)$. Suppose that $d\lesssim N$, then we have \begin{align} \Big\|\frac{1}{\sqrt{Nq_m}}\bX -\frac{\theta}{\sqrt{Nq_m}}\by\bmu^\top\Big\|\le \frac{C}{\sqrt{q_m}}, \label{eq:approx_X} \end{align} with probability at least $1-2e^{-cN}$ for some constants $C,c>0$. \end{lemma} \begin{proof}[Proof of \Cref{lem:approx_X}] Recall the concentration on the operator norm of the Gaussian random matrix for $\bZ\in\R^{N\times d}$ (see \cite{vershynin2018high}). Then for every $t>0$, there exists some constant $c>0$ such that \begin{align}\label{eq:Gaussian-spectral-norm} \P(\norm{\bZ}\ge \sqrt{N}+\sqrt{d}+t)\le 2\exp{(-ct^2)}. \end{align} Then, we know that $\frac{\norm{\bZ}}{\sqrt{N}}\lesssim 1$ with probability at least $1-2e^{-cN}$ by taking $t = \sqrt{N}$. Then we have \begin{align} \norm{\frac{1}{\sqrt{Nq_m}}\bX -\frac{\theta}{\sqrt{Nq_m}}\by\bmu^\top}\le \frac{\|\bZ\| }{ \sqrt{Nq_m}} \lesssim \frac{\sqrt{N} + \sqrt{d} }{ \sqrt{Nq_m}} \asymp \frac{1}{\sqrt{q_m}}, \end{align}with probability at least $1-2e^{-cN}$. \end{proof} \begin{lemma}\label{lem:approx_kernel} Consider $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$. Under the Assumption~\ref{ass:asymptotics}, when $d\lesssim N$, we have that \[\|h(\bX)-\bH\|\leq \frac{C}{\sqrt{q_m}},\] with probability at least $1-cN^{-10}$, where $\bH:=\frac{ \kappa_m }{ \sqrt{N }}\by\bmu^\top$ and $\kappa_m:=\frac{\alpha-\beta+2\rho}{\alpha+\beta+2\rho}\cdot\frac{\theta}{\sqrt{q_m}}$, for all large $m$ and $n$ and some constants $c,C>0$. \end{lemma} \begin{proof}[Proof of \Cref{lem:approx_kernel}] Notice that $\bH=\frac{\theta}{\widetilde d\sqrt{Nq_m}}(\E\bA_\rho)\by\bmu^\top$. We apply Lemmas~\ref{lem:approx_A} and \ref{lem:approx_X} to derive that \begin{align} \|h(\bX)-\bH\|\le~& \norm{h(\bX)-\frac{1}{\widetilde d}(\E\bA_\rho)\frac{\bX}{\sqrt{Nq_m}}}+\norm{\frac{1}{\widetilde d}(\E\bA_\rho)\frac{\bX}{\sqrt{Nq_m}}-\bH}\\ \le ~& \frac{1}{\sqrt{q_m}}(\theta+2+\sqrt{N/d})\cdot\norm{\bD_\rho^{-1}\bA_\rho-\frac{1}{\widetilde d}(\E\bA_\rho) }+\frac{\|\E\bA_\rho\|}{\widetilde d}\norm{\frac{\bX}{\sqrt{Nq_m}}-\frac{\theta}{\sqrt{Nq_m}}\by\bmu^\top}\lesssim 1/\sqrt{q_m}, \end{align}with probability at least $1-cN^{-10}$. Here we apply the fact that $\theta/\sqrt{q_m}\lesssim 1$. \end{proof} \begin{lemma}\label{lem:approx_beta} Consider $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$. 
Under the Assumption~\ref{ass:asymptotics} with $d\lesssim N$, the ridge regression solution $\widehat\bbeta$ defined in \eqref{eq:regression_solu} satisfies \[\frac{1}{\sqrt{N}}\|\widehat\bbeta-\widetilde\bbeta\|\le \frac{C}{\sqrt{q_m}},\] with probability at least $1-cN^{-10}$, where $\widetilde\bbeta:=\frac{\sqrt{N}\kappa_m\tau}{\kappa_m^2\tau+\lambda}\bmu$ and $\kappa_m =\frac{\alpha-\beta+2\rho}{\alpha+\beta+2\rho}\cdot\frac{\theta}{\sqrt{q_m}} $, for all large $m$ and $n$ and some constants $c,C>0$. Moreover, $\|\widehat\bbeta\|\lesssim\sqrt{N}$ with probability at least $1-cN^{-10}$. \end{lemma} \begin{proof} Notice that $\widetilde\bbeta= (\bH^{\top} \bP_{\sL} \bH + \lambda \bI_d )^{-1}\bH^{\top} \bP_{\sL}\by,$ where $\bH$ is defined by Lemma~\ref{lem:approx_kernel}. From Lemma~\ref{lem:approx_A}, we know that $\|h(\bX)\|\lesssim 1$ and $\|\bH\|\lesssim 1$ with probability at least $1-2N^{-10}$. Moreover, $\|(\bH^{\top} \bP_{\sL} \bH + \lambda \bI_d )^{-1}\|\le \lambda^{-1}$ and $\|(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}\|\le \lambda^{-1}$. Therefore, applying Lemma~\ref{lem:approx_kernel}, we derive that \begin{align} \frac{1}{\sqrt{N}}\|\widehat\bbeta-\widetilde\bbeta\|\le~& \|(\bH^{\top} \bP_{\sL} \bH + \lambda \bI_d )^{-1}\bH^{\top} -(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}h(\bX)^\top\|\cdot\|\bP_{\sL}\by\|/\sqrt{N}\\ \le ~& \|(\bH^{\top} \bP_{\sL} \bH + \lambda \bI_d )^{-1}\|(\bH-h(\bX))\|+\|(\bH^{\top} \bP_{\sL} \bH + \lambda \bI_d )^{-1}\|\\ ~& \cdot\|\bH -h(\bX)\|\cdot(\|\bH \|+\|h(\bX)\|)\cdot\|(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}\|\cdot\|h(\bX)\| \\ \lesssim ~& \|\bH -h(\bX)\|\lesssim 1/\sqrt{q_m}, \end{align}with at least $1-cN^{-10}$, for some constant $c>0$. \end{proof} \subsection{Exact Recovery Threshold for Ridge Regression on Linear GCN} \begin{lemma}\label{lem:his} Let $h(\bX)^\top=[\bh_1,\ldots,\bh_N]$ and $\widetilde{h}(\bX)^\top=[\bar\bh_1,\ldots,\bar\bh_N]$. For any $i\in [N]$ and deterministic unit vector $\bu\in\R^d$, there exists some $c,C>0$ such that \begin{align} \P( |(\bar\bh_i-\bh_i)^\top\bu| \le C/\sqrt{Nq_m})\ge~& 1-cN^{-10},\label{eq:h_i_diff}\\ \P(|\bar\bh_i^\top\bu|\le C/\sqrt{N })\ge~& 1-cN^{-10},\label{eq:bar_h_i}\\ \P\left(\|\bh_i\|\le C\sqrt{d/(Nq_m)} \right)\ge~& 1-cN^{-10},\label{eq:norm_h_i} \end{align}for all large $n$ and $m$. \end{lemma} \begin{proof} For any unit vector $\bu\in\R^d$ and $i\in[N]$, conditioning on event \eqref{eq:d_inv_diff}, we have \begin{align} \Big|(\bar\bh_i-\bh_i)^\top\bu\Big|\le~& \frac{1}{\sqrt{Nq_m}}\big\|(\widetilde d)^{-1}\bI_N-\bD_\rho^{-1}\big\|\cdot |(\bA_\rho\bX)_{i:}\bu | \lesssim \frac{1}{q_m^2\sqrt{N} }\Big| ( \bA_\rho\bX)_{i:}\bu\Big|,\label{eq:AX_row0} \end{align} where we employ Lemma~\ref{lem:approx_A}. Then, for any $i\in[N]$, we can further have \begin{align} |(\bA_\rho\bX)_{i:}\bu |= ~& \left|\sum_{j\in\cN_i}(\theta y_j\bmu^\top\bu+\bz_j^\top\bu)+\theta\rho\bmu^\top\bu+\rho\bz_i^\top\bu\right|\\ \le ~& \theta (|\cN_i|+\rho)+\Big|\sum_{j\in\cN_i} \bz_j^\top\bu+ \rho\bz_i^\top\bu\Big|\label{eq:AX_row} \end{align} Based on {Lemma 3.3 in \cite{alt2021extremal}}, we can upper bound the degree of each vertex by \begin{equation}\label{eq:degree_bound} \P(|\cN_i|\le C\log N)\ge 1-C_DN^{-D}, \end{equation} for any $i\in [N]$, some constants $C,C_D>0$ and sufficiently large constant $D>0$. 
Meanwhile, since each $\bz_j^\top\bu\sim\cN(0,1)$, by applying Hoeffding's inequality ({Theorem 2.6.2 in \cite{vershynin2018high}}), we can deduce that \begin{equation}\label{eq:sum_Gaussian_1} \P\Big(\Big|\sum_{j\in\cN_i} \bz_j^\top\bu+ \rho\bz_i^\top\bu\Big|\le t\Big||\cN_i|=k\Big)\ge 1- 2\exp{\Big(-\frac{ct^2}{k+\rho^2}\Big)}, \end{equation} for any $k\in\N$, $t>0$, and some constant $c>0$. Then combining \eqref{eq:degree_bound} and \eqref{eq:sum_Gaussian_1}, for any large $D>0$, there exists some constants $C,C_D>0$ such that \begin{equation}\label{eq:sum_Gaussian} \P\Big(\Big|\sum_{j\in\cN_i} \bz_j^\top\bu+ \rho\bz_i^\top\bu \Big|\le C\log N, ~ |\cN_i|\le C\log N\Big)\ge 1- 2C_DN^{-D}. \end{equation} Thus, with \eqref{eq:AX_row} and $\rho \asymp \log N$, we can conclude that $|(\bA_\rho\bX)_{i:}\bu |\lesssim q_m^{3/2}$ with probability at least $1- 2C_DN^{-D}.$ Following with \eqref{eq:AX_row}, we can conclude that \begin{equation} \P\left( \Big|(\bar\bh_i-\bh_i)^\top\bu\Big|\le C/\sqrt{q_mN}\right)\ge 1-cN^{-10}. \end{equation} For the second part, we can analogously get $|\bar\bh_i^\top\bu|\lesssim \frac{1}{q_m^{3/2}\sqrt{N}}|(\bA_\rho\bX)_{i:}\bu |$. Then, we can apply \eqref{eq:AX_row} and \eqref{eq:sum_Gaussian} to conclude \eqref{eq:bar_h_i}. Finally, notice that \begin{align} \|(\bA_\rho\bX)_{i:}\|= ~& \left\|\sum_{j\in\cN_i}(\theta y_j\bmu+\bz_j)+\theta\rho\bmu+\rho\bz_i\right\|\\ \le ~& \theta (|\cN_i|+\rho)+\Big\|\sum_{j\in\cN_i} \bz_j+ \rho\bz_i\Big\|. \end{align} Applying Theorem 3.1.1 in \cite{vershynin2018high}, we know that \begin{equation}\label{eq:z_i_norm} \P(\|\bz_i\|\le 2\sqrt{d})\ge 1-2\exp{(-cd)} \end{equation} for some constant $c>0$ and any $i\in [N]$. Thus, combining \eqref{eq:degree_bound} and Lemma~\ref{lem:Z_concentration}, we have that with probability at least $1-cN^{-10},$ \[\|\bh_i\|\le \frac{1}{q_m^{3/2}\sqrt{N}}\|(\bA_\rho\bX)_{i:}\|\lesssim \sqrt{\frac{d}{Nq_m}}\] because of the fact that $q_m\lesssim d$ and $ N\asymp m$. This completes the proof of this lemma. \end{proof} Inspired by \cite{abbe2020entrywise,abbe2022lp}, we now apply a general version of leave-one-out analysis for $\widehat\bbeta$ by defining the following approximated estimator. For any $i\in\cV_\sU$, denote by \begin{equation}\label{eq:beta_i} \widehat\bbeta^{(i)}=(h^{(i)}(\bX)^{\top} \bP_{\sL} h^{(i)}(\bX) + \lambda \bI_d )^{-1}h^{(i)}(\bX)^{\top} \bP_{\sL}\by, \end{equation} where $h^{(i)}(\bX):=\frac{1}{\sqrt{Nq_m}}\bD_\rho^{-1}\bA_\rho(\bX-\bZ^{(i)})$ and $\bZ^{(i)}:=[\bz_1\mathbf{1}_{1\in\cN_i\cup \{i\}},\ldots,\bz_k\mathbf{1}_{k\in\cN_i\cup \{i\}},\ldots, \bz_N\mathbf{1}_{N\in\cN_i\cup \{i\}}]^\top\in\R^{N\times d}$. Here, the difference between $h^{(i)}(\bX)$ and $h(\bX)$ is that we turn off the feature noises $\bz_i$ for vertices $\cN_i\cup\{i\}$. In this case, conditional on $\by$ and $\bmu$, both $\widetilde{\bbeta}$ and $\widehat\bbeta^{(i)}$ are independent with $\bh_i$ and $\bar\bh_i$ given any $i\in\cV_\sU$. Next, we present the following properties for $\widehat\bbeta^{(i)}$. \begin{lemma}\label{lem:beta_i} Assume that $q_m\ll d\ll Nq_m$. 
For \eqref{eq:regression_solu} and \eqref{eq:beta_i}, we have \begin{align} \frac{1}{\sqrt{N}}\norm{\widehat\bbeta^{(i)}-\widehat\bbeta}\le~& C\sqrt{\frac{d}{q_mN}},\\ \big\|\widehat\bbeta^{(i)}\big\|\le~& C\sqrt{N}, \end{align}with a probability at least $1-cN^{-10}$, for some constants $c,C>0$ \end{lemma} \begin{proof} Based on Lemma~\ref{lem:approx_A}, we have that \begin{align} \|h^{(i)}(\bX)-h(\bX)\|=\frac{1}{\sqrt{Nq_m}}\|\bD_\rho^{-1}\bA_\rho \bZ^{(i)}\|\le \frac{1}{\sqrt{q_mN}}\|\bZ^{(i)}\| \lesssim \sqrt{\frac{d}{q_mN}} \end{align} with probability at least $1-cN^{-10}$, where we utilize \eqref{eq:degree_bound} and \eqref{eq:Gaussian-spectral-norm} for $\bZ^{(i)}$ as well. Thus, we know that $\|h^{(i)}(\bX)\|\lesssim 1$ with probability at least $1-cN^{-10}$. Then, analogously with Lemma~\ref{lem:approx_beta}, we have \begin{align} \frac{1}{\sqrt{N}}\|\widehat\bbeta-\widehat\bbeta^{(i)}\|\le~& \|(h^{(i)}(\bX)^{\top} \bP_{\sL} h^{(i)}(\bX) + \lambda \bI_d )^{-1}h^{(i)}(\bX)^{\top} -(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}h(\bX)^\top\| \\ \le ~& \|(h^{(i)}(\bX)^{\top} \bP_{\sL} h^{(i)}(\bX)+ \lambda \bI_d )^{-1}\|\cdot \|(h^{(i)}(\bX)-h(\bX))\|+\|(h^{(i)}(\bX)^{\top} \bP_{\sL}h^{(i)}(\bX)+ \lambda \bI_d )^{-1}\|\\ ~& \cdot\|h^{(i)}(\bX) -h(\bX)\|\cdot(\|h^{(i)}(\bX) \|+\|h(\bX)\|)\cdot\|(h(\bX)^{\top} \bP_{\sL} h(\bX) + \lambda \bI_d )^{-1}\|\cdot\|h(\bX)\| \\ \lesssim ~& \|h^{(i)}(\bX) -h(\bX)\|\lesssim \sqrt{\frac{d}{q_mN}}, \end{align}with a probability at least $1-cN^{-10}$, for some constant $c>0$. Also, with Lemma~\ref{lem:approx_beta}, we can show that $\|\widehat\bbeta^{(i)}\|\lesssim \sqrt{N}$ with very high probability. \end{proof} \begin{lemma}\label{lem:approx_beta_i} Under the same assumption as Lemma~\ref{lem:approx_beta}, for each $i\in\cV_\sU$ the estimator $\widehat\bbeta^{(i)}$ defined in \eqref{eq:beta_i} satisfies \[\frac{1}{\sqrt{N}}\|\widehat\bbeta^{(i)}-\widetilde\bbeta\|\le \frac{C}{\sqrt{q_m}},\] with probability at least $1-cN^{-10}$, where $\widetilde\bbeta $ is defined in Lemma~\ref{lem:approx_beta}, for all large $m$ and $n$ and some constants $c,C>0$. \end{lemma} The proof of this lemma is the same as Lemma~\ref{lem:approx_beta}, so we ignore the details here. \begin{proof}[Proof of Theorem~\ref{thm:exact_linear}] Recall that $$\zeta=\frac{\kappa\tau}{\kappa^2\tau+\lambda}\quad \text{ and }\quad\kappa=\sqrt{c_\tau}\cdot\frac{a_\tau-b_\tau+2s}{a_\tau+b_\tau+2s},$$ where both $\zeta$ and $\kappa$ are some constants in $\R$. Hence, we know that \[ \frac{\kappa_m\tau }{\kappa_m^2\tau +\lambda}=\zeta(1+o(1)).\] Then, $\widetilde\bbeta/\sqrt{N}=\zeta\bmu+o(1/\sqrt{N})$. Denote $\by_{\sU,i}$ as the $i$-th entry of label $\by_{\sU}$. Firstly, we consider the general case when $\rho = s q_m$ for some fixed constant $s\in\R$. For each $i\in\cV_\sU$, we can utilize Lemmas~\ref{lem:his}, ~\ref{lem:beta_i}, and~\ref{lem:approx_beta_i} to obtain that \begin{align} \left|\frac{y_i}{\sqrt{N}}\bar\bh_i^\top\widetilde{\bbeta}-\frac{y_i}{\sqrt{N}} \bh_i^\top\widehat{\bbeta}\right|\le ~& \frac{1}{\sqrt{N}}\Big|(\bar\bh_i-\bh_i)^\top\widetilde{\bbeta}\Big|+\frac{1}{\sqrt{N}}\Big|(\bar\bh_i-\bh_i)^\top(\widetilde{\bbeta}-\widehat\bbeta^{(i)})\Big|\\ ~&+\frac{1}{\sqrt{N}}\Big|\bar\bh_i^\top(\widetilde{\bbeta}-\widehat\bbeta^{(i)})\Big|+\frac{1}{\sqrt{N}}\Big| \bh_i^\top(\widehat\bbeta^{(i)}-\widehat\bbeta)\Big|\\ \le ~& \frac{C}{\sqrt{Nq_m}}+C\frac{d}{Nq_m}, \end{align} with probability at least $1-cN^{-10}$ for some constants $c,C>0$. 
Here, we applied \eqref{eq:h_i_diff} when $\bu=\widetilde{\bbeta}/\sqrt{N}$ and $\bu=(\widetilde{\bbeta}-\widehat\bbeta^{(i)})/\sqrt{N}$, \eqref{eq:bar_h_i} when $\bu=(\widetilde{\bbeta}-\widehat\bbeta^{(i)})/\sqrt{N}$, and \eqref{eq:norm_h_i}. Then, if $d\lesssim \sqrt{Nq_m}$, we can conclude that \begin{equation} \left|\frac{y_i}{\sqrt{N}}\bar\bh_i^\top\widetilde{\bbeta}-\frac{y_i}{\sqrt{N}} \bh_i^\top\widehat{\bbeta}\right|\le \frac{C}{\sqrt{Nq_m}}, \end{equation}with very high probability for some universal constant $C>0$. Therefore, we can take $\eps_m=1/\sqrt{q_m}$ to get \begin{align} \P(\psi_m(\sign(\widehat\by_{\sU}),\by_{\sU})=0)=~&\P\big(\min_{i\in[m]}\by_{\sU,i}\cdot\widehat\by_{\sU,i}>0\big)= \P\Big( \min_{i\in\cV_{\sU}} \frac{y_{i}\cdot \bh_i^\top \widehat\bbeta}{\sqrt{N}}>0\Big)\\ \ge ~& \P\Big( \min_{i\in\cV_{\sU}}\frac{y_{i}}{\sqrt{N}}\cdot \bar\bh_{i}^\top \widetilde\bbeta>\frac{C\eps_m}{\sqrt{N}}\Big)- \sum_{i\in\cV_{\sU}}\P\Big(\left|\frac{y_i}{\sqrt{N}}\bar\bh_i^\top\Tilde{\bbeta}-\frac{y_i}{\sqrt{N}} \bh_i^\top\widehat{\bbeta}\right| > \frac{C\eps_m}{\sqrt{N}}\Big)\\ \ge ~& \P\big(\min_{i\in\cV_{\sU}}y_{i}\zeta\cdot \frac{1}{ \widetilde d}(\bA_{\rho}\bX)_{i:} \bmu>C\sqrt{q_m}\eps_m\big)-Cm^{-2}\\ \ge ~& 1-\sum_{i\in\cV_{\sU}}\P\Big(y_{i}\cdot \frac{\zeta}{ \widetilde d}(\bA_\rho\bX)_{i:} \bmu\le C\sqrt{q_m}\eps_m\Big)-Cm^{-2}\\ \ge ~& 1-m\cdot\P\Big(y_{i}\cdot \frac{\zeta}{ \widetilde d}(\bA_\rho\bX)_{i:} \bmu\le C\sqrt{q_m}\eps_m\Big)-Cm^{-2}\\ \ge~& 1-m^{1- \sup_{t\in\R} \{\eps_m t+g(a ,b ,c,\tau,\zeta, s,t)\}+\delta}-Cm^{-2}, \end{align} for any $\delta>0$ and sufficiently large $m$, where in the last line we employ Proposition~\ref{prop:LDP_regression}. Thus, applying Lemma~\ref{lem:rate_fun}, we know that when $J(a_\tau, b_\tau, c_\tau, \zeta, s )>1$, $\P(\psi_m(\sign(\widehat\by_{\sU}),\by_{\sU})=0)\to 1$ as $m\to\infty$. When $s=\frac{2c_{\tau}}{\log(a/b)} $, Lemma~\ref{lem:rate_fun} implies that $J(a_\tau, b_\tau, c_\tau, \zeta, s )=I(a_\tau, b_\tau, c_\tau )$ defined in \eqref{eqn:rate_I_abc_tau}. Notice that $J(a_\tau, b_\tau, c_\tau, \zeta, s )\le I(a_\tau, b_\tau, c_\tau )$ for any $s\in\R$. Whereas $s=0$, Lemma~\ref{lem:rate_fun} implies that $J(a_\tau, b_\tau, c_\tau, \zeta, s )=I(a_\tau, b_\tau, 0)$. Hence, this completes the proof of this theorem. \end{proof} \subsection{Asymptotic Errors for Ridge Regression on Linear GCN} \begin{lemma}\label{lem:risks_app} Under the Assumption~\ref{ass:asymptotics}, there exist some constant $c,C>0$ such that with probability at least $1-cN^{-2}$, \begin{align} |\overline{\cR}(\lambda)-\cR(\lambda)|\le~&\frac{C}{\sqrt{q_m}},\\ |\overline{\cE}(\lambda)-\cE(\lambda)|\le~& \frac{C}{\sqrt{q_m}}, \end{align} where \begin{align} \overline{\cR}(\lambda):=~&\frac{1}{m}(\bH\Tilde{\bbeta}-\by)^\top\bP_{\sU}(\bH\Tilde{\bbeta}-\by),\\ \overline{\cE}(\lambda):=~&\frac{1}{n}(\bH\Tilde{\bbeta}-\by)^\top\bP_{\sL}(\bH\Tilde{\bbeta}-\by). \end{align} \end{lemma} \begin{proof} From Lemmas~\ref{lem:approx_kernel} and~\ref{lem:approx_beta}, we know that \[\frac{1}{\sqrt{m}}\|\bH\Tilde{\bbeta}-h(\bX)\widehat\bbeta\|\le\frac{1}{\sqrt{m}}\|\bH\|\cdot\|\Tilde{\bbeta}-\widehat\bbeta\|+\frac{1}{\sqrt{m}}\|\bH -h(\bX)\|\cdot\|\widehat\bbeta\|\lesssim \frac{1}{\sqrt{q_m}},\] with probability at least $1-CN^{-2}$ for some constant $C>0$. Since $\norm{\bP_\sL}$ and $\norm{\bP_\sU}$ are both upper bounded by one, we can directly conclude Lemma~\ref{lem:risks_app}. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:error}] Based on Lemma~\ref{lem:risks_app}, we can instead compute $\overline{\cE}(\lambda)$ and $\overline{\cR}(\lambda)$. Recall that $\bH=\frac{\kappa_m}{\sqrt{N}}\by\bmu^\top$ and $\frac{1}{\sqrt{N}}\widetilde\bbeta=\frac{\kappa_m\tau }{\kappa_m^2\tau +\lambda}\bmu$. Thus, $\bH\Tilde{\bbeta}=\frac{\kappa_m^2\tau }{\kappa_m^2\tau+\lambda}\by$. Then, since $\frac{1}{m}\by^\top\bP_{\sU}\by=\frac{1}{n}\by^\top\bP_{\sL}\by=1$, we have \begin{align} \overline{\cR}(\lambda) =~&\frac{1}{m}(\bH\Tilde{\bbeta}-\by)^\top\bP_{\sU}(\bH\Tilde{\bbeta}-\by)=\Big(1-\frac{\kappa_m^2\tau_n}{\kappa_m^2\tau_n+\lambda}\Big)^2=\frac{\lambda^2}{(\kappa^2\tau+\lambda)^2}+o(1),\\ \overline{\cE}(\lambda) =~&\frac{1}{n}(\bH\Tilde{\bbeta}-\by)^\top\bP_{\sL}(\bH\Tilde{\bbeta}-\by)=\Big(1-\frac{\kappa_m^2\tau_n}{\kappa_m^2\tau_n+\lambda}\Big)^2=\frac{\lambda^2}{(\kappa^2\tau+\lambda)^2}+o(1). \end{align} Then, taking $m\to\infty$, we obtain the results of this theorem. \end{proof} \section{Feature Learning of Graph Convolutional Networks}\label{sec:NN_proof} In this section, we complete the proof of Theorem~\ref{thm:NN} in Section~\ref{sec:NN}. Recall that \begin{equation}\label{eq:W_GD} \bW^{(1)} = \bW^{(0)} - \eta_1 \Big(\nabla_{\bW^{(0)}} \mathcal{L}(\bW^{(0)},s^{(0)}) + \lambda_1 \bW^{(0)} \Big) \end{equation} and \begin{equation}\label{eq:s_Trained} s^{(1)}=\frac{2}{n^2q_m}\by_\sL^\top\bX_\sL\bW^{(1)}\ba\Big/\log\left(\frac{N\cdot D_0+\by^\top_\sL\bA_\sL\by_\sL}{N\cdot D_0-\by^\top_\sL\bA_\sL\by_\sL}\right). \end{equation} The algorithm we apply in Theorem~\ref{thm:NN} is given by Algorithm~\ref{alg:gradient0}. Below, we first construct an optimal solution for this problem and present the LDP analysis. Then, we will use \cite{ba2022high,damian2022neural} to analyze the feature learned from $\bW^{(1)}$. Finally, inspired by the optimal solution, we will prove that $s^{(1)}$ is close to the optimal $s$ in \eqref{eq:optimal_rho}. Meanwhile, we also present an additional gradient-based method in Algorithm~\ref{alg:gradient} to approach the optimal $s$ in \eqref{eq:optimal_rho}. We leave the theoretical analysis of Algorithm~\ref{alg:gradient} as a future direction to explore. \begin{algorithm} \caption{Gradient-based training for GCN in Theorem~\ref{thm:NN}} \label{alg:gradient0} \begin{algorithmic} \Require Learning rate $\eta_1$, weight decay $\lambda_1$ \State {\textbf{Initialization:}} $s^{(0)}=0$, $\sqrt{K}\cdot[\bW^{(0)}]_{ij}\iid\cN(0,1), ~ \sqrt{K}\cdot[\ba]_j\iid \Unif\{\pm 1\}$, $\forall i\in[d],j\in [K]$. \State {\textbf{Training Stage 1:}} \State $\quad\quad$ Set $\sigma(x)=x$ in \eqref{eq:NN} \State $\quad\quad$ $\bW^{(1)} \gets \bW^{(0)} - \eta_1 (\nabla_{\bW^{(0)}} \mathcal{L}(\bW^{(0)},s^{(0)}) + \lambda_1 \bW^{(0)} )$ \State {\textbf{Training Stage 2:}} \State $\quad\quad$ $s^{(1)}\gets s^{(0)}+\frac{2}{n^2q_m} \by_\sL^\top\bX_\sL\bW^{(1)}\ba \Big/\log\left(\frac{N\cdot D_0+\by^\top_\sL\bA_\sL\by_\sL}{N\cdot D_0-\by^\top_\sL\bA_\sL\by_\sL}\right)$ \Ensure Prediction function for unknown labels: $\sign(\bS_\sU\bD_{s^{(1)}}^{-1}\bA_{s^{(1)}}\bX\bW^{(1)}\ba)$ \end{algorithmic} \end{algorithm} \subsection{Thresholds for GCNs and LDP analysis} Consider $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$. Let $\bA\in\R^{N\times N}$ denote the adjacency matrix of the graph $G$ and let us define the degree matrix by $\bD_0 \coloneqq \diag\{D_0, \ldots, D_0\}\in\R^{N\times N}$ where $D_0=\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{N} \bA_{ij}$.
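To make the two training stages of Algorithm~\ref{alg:gradient0} concrete before turning to the analysis, the following minimal Python sketch runs the algorithm on a synthetic CSBM sample. All numerical values, the data-generation shortcut, and the helper name \texttt{conv} are illustrative assumptions rather than the setup used in our analysis or experiments; the weight decay is set to $\lambda_1=\eta_1^{-1}$, as in the analysis below, and the prediction step applies the trained weights to the convolved features $\bD_{s^{(1)}}^{-1}\bA_{s^{(1)}}\bX\bW^{(1)}\ba$.
\begin{verbatim}
import numpy as np

# Minimal sketch of Algorithm alg:gradient0 on a synthetic CSBM sample.
# All parameter values below are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)
N, n, d, K = 400, 200, 50, 100        # nodes, labeled nodes, feature dim, width
a, b = 8.0, 2.0
q_m = np.log(N - n)                    # q_m ~ log(m) with m = N - n unlabeled nodes
theta = np.sqrt(q_m)                   # signal strength in the critical regime
eta1, lam1 = 1.0, 1.0                  # learning rate; weight decay lam1 = 1/eta1

# Synthetic CSBM sample (labels y, spike mu, features X, adjacency A).
y = rng.choice([-1.0, 1.0], size=N)
mu = rng.standard_normal(d); mu /= np.linalg.norm(mu)
X = theta * y[:, None] * mu[None, :] + rng.standard_normal((N, d))
P = np.where(np.equal.outer(y, y), a * q_m / (N - n), b * q_m / (N - n))
A = (rng.random((N, N)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T
L, U = np.arange(n), np.arange(n, N)   # labeled / unlabeled index sets
D0 = A.sum() / N                       # average degree

def conv(s):
    # normalized convolution with self-loop intensity rho = s * q_m
    rho = s * q_m
    return (A + rho * np.eye(N)) @ X / ((D0 + rho) * np.sqrt(N * q_m))

# Initialization as in the algorithm.
W0 = rng.standard_normal((d, K)) / np.sqrt(K)
avec = rng.choice([-1.0, 1.0], size=K) / np.sqrt(K)

# Stage 1: one gradient step on W with sigma(x) = x and s^(0) = 0.
H_L = conv(0.0)[L]
resid = y[L] - H_L @ W0 @ avec / np.sqrt(K)
G1 = H_L.T @ np.outer(resid, avec) / (n * np.sqrt(K))   # G1 = -grad_W loss
W1 = W0 - eta1 * (-G1 + lam1 * W0)     # equals eta1 * G1 when lam1 = 1/eta1

# Stage 2: closed-form update of the self-loop weight s^(1).
yAy = y[L] @ A[np.ix_(L, L)] @ y[L]
s1 = (2.0 / (n**2 * q_m)) * (y[L] @ X[L] @ W1 @ avec) \
     / np.log((N * D0 + yAy) / (N * D0 - yAy))

# Prediction for the unlabeled nodes.
y_hat_U = np.sign(conv(s1)[U] @ W1 @ avec)
print("agreement on unlabeled nodes:", np.mean(y_hat_U == y[U]))
\end{verbatim}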
Let $\rho=sq_m$ denote the self-loop intensity \cite{kipf2017semisupervised,wu2019simplifying,shi2024homophily} for some $s\in\R$, and $\bA_s = \bA + \rho \bI$, $\bD_s = \bD + \rho \bI$ denote the adjacency, average degree matrices of the graph after adding self-loops, respectively. The convolutional feature vector is $\widetilde{\bx}_i \coloneqq \big((\bD_s)^{-1} {\bA_s}\bX\big)_{i:}$. Ideally, our goal is to prove that the convoluted feature vectors are linearly separable, i.e., find some $\bw \in \R^{d}$ such that \begin{align} \widetilde{\bx}_i^{\sT} \bw + b > 0 \textnormal{ if } y_i = 1, \quad \widetilde{\bx}_i^{\sT} \bw + b < 0 \textnormal{ if } y_i = -1, \end{align} for some $b\ge0$. We consider the case that the feature learned by the GCN is exactly $ \bmu$ in GMM, i.e., the optimal margin is $\bw = \bmu$. Under this setting, we show the LDP results for this estimator. The proof is similar to Proposition~\ref{prop:LDP_regression}. \begin{proposition}[LDP for GCNs]\label{prop:LDP} For $(\bA, \bX) \sim \CSBM (\by, \bmu, \alpha, \beta, \theta)$ with Assumption~\ref{ass:asymptotics}, when $s=2c_\tau/\log(\frac{a}{b})$ and $c_\tau=\theta^2/q_m+o(1)$, we have that \[\lim_{m\to\infty}q_m^{-1}\log \P(y_i \cdot\sqrt{q_m} (\E {\bD_s})^{-1} ( {\bA_s} \bX)_{i:} \bmu \le \eps q_m)=-\sup_{t\in\R}\{\eps t+I(a_\tau,b_\tau,c_\tau,t),\}\] where $\sup_{t\in\R}I(a_\tau,b_\tau,c_\tau,t)=I(a_\tau,b_\tau,c_\tau)$ defined in \eqref{eqn:rate_I_abc_tau}. \end{proposition} \begin{proof} For simplicity, let $\widetilde{d} \coloneqq \frac{1}{N} \sum_{i=1}^{N} \widetilde{D}_i = \frac{a + b}{2}q_m + \rho$, denoting the expected degree of each node $i\in [N]$. Then \begin{align} h_i :=&\, y_i \widetilde{\bx}_i^{\sT} \bw = y_i \theta (\E \widetilde{\bD})^{-1} (\widetilde{\bA} \bX)_{i:} \bmu \\ =&\, \frac{y_i \theta}{\widetilde{d}} \Big(\sum_{j\in \cN_i} \bmu^{\sT}(\theta y_j \bmu + \bz_j) + \rho\bmu^{\sT}(\theta y_i \bmu + \bz_i)\Big)\\ =&\, \underbrace{\frac{\rho \theta^2 y_i^2 \|\bmu\|_2^2}{\widetilde{d}}}_{I_1} + \underbrace{\frac{\rho \theta y_i \bmu^{\sT}\bz_i}{\widetilde{d}}}_{I_2} + \underbrace{\frac{\theta^2 \|\bmu\|_2^2}{\widetilde{d}}\sum_{j\neq i} A_{ij} y_i y_j }_{I_3} + \underbrace{\frac{\theta}{\widetilde{d}}\sum_{j\neq i} y_i A_{ij}\bmu^{\sT} \bz_j }_{I_4}. \end{align} Our goal is to calculate the following moment-generating function \begin{align} \E[\exp(t h_i)] \coloneqq \E_{\bA}[\E_{\bX}[\exp(t h_i) | \bA]]. \end{align} First, since $\|\bmu\|_2 = 1$, $y_i^2 = 1$, $ I_1 = \rho \theta^2/\widetilde{d}^2$, and it is deterministic. Second, $\bmu^{\sT} \bz_i \sim \Normal(0, 1)$, then $I_2 \sim \Normal(0, \rho^2\theta^2/\widetilde{d}^2)$, and \begin{align} \E_{\bX}[\exp(tI_2)|y_i] = \exp\Big( \frac{t^2 \rho^2 \theta^2}{2\widetilde{d}^2} \Big) = \E[\exp(tI_2)], \end{align} where the last equality holds since the result we obtained is independent of $y_i$. Let $\cN_i$ denote the set of neighbors of node $i$ and $|\cN_i|$ denote its cardinality. Conditioned on $\bA, \by, \bmu$, $I_4 \sim \Normal(0, |\cN_i|\theta^2/\widetilde{d}^2)$, and \begin{align} \E_{\bX}[\exp(tI_4)|\bA, y_i, \bmu] = \exp\Big( \frac{t^2 \theta^2 |\cN_i|}{2\widetilde{d}^2} \Big). 
\end{align} Note that $|\cN_i| = \sum_{j = 1}^{N} A_{ij}$, and $I_3$ is independent of $\bX$, then \begin{align} \E_{\bX}[\exp\big(t(I_3 + I_4)\big)|\bA, y_i, \bmu] = \exp\bigg( \frac{t\theta^2}{\widetilde{d}}\sum_{j\neq i} A_{ij} \Big(y_i y_j + \frac{t}{2\widetilde{d}} \Big) \bigg) \end{align} One could take the expectation over $\bA$ conditioned on $\by$, then \begin{align} &\, \E_{\bA}\Big[\exp \Big( \frac{t\theta^2}{\widetilde{d}} A_{ij} \Big(y_i y_j + \frac{t}{2\widetilde{d}} \Big) \Big) \Big| y_i \Big] \\ =&\, \frac{1}{2} \E_{\bA}\Big[\exp \Big( \frac{t\theta^2}{\widetilde{d}} A_{ij} \Big(y_i y_j + \frac{t}{2\widetilde{d}} \Big) \Big) \Big| y_i y_j = 1\Big] + \frac{1}{2} \E_{\bA}\Big[\exp \Big( \frac{t\theta^2}{\widetilde{d}} A_{ij} \Big(y_i y_j + \frac{t}{2\widetilde{d}} \Big) \Big) \Big| y_i y_j = -1\Big]\\ =&\, \frac{1}{2} \Big[ \alpha \exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} + \frac{t \theta^2}{\widetilde{d}}\Big) + (1 - \alpha) + \beta \exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} - \frac{t \theta^2}{\widetilde{d}}\Big) + (1 - \beta) \Big]\\ =&\, 1 + \frac{\alpha}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} + \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big] + \frac{\beta}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} - \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big], \end{align} where the result is again independent of $y_i, \bmu$. Recall $\alpha = a q_m/m = o(1)$, $\beta = bq_m/m = o(1)$, $\frac{\theta^4}{\theta^2 + (1 - \tau )d/m} = c_{\tau} q_m$ in \Cref{ass:asymptotics}, thus $\theta^2 = (1 + o(1)) c_{\tau} q_m$. By using $\log(1 + x) = x$ for $x = o(1)$, we then have \begin{align} q^{-1}_m \log \E_{\bA} \big[ \E_{\bX}[\exp\{ t(I_3 + I_4)\}] \big] =&\, \log \bigg( 1 + \frac{\alpha}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} + \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big] + \frac{\beta}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} - \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big]\bigg)\\ =&\, \frac{a}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} + \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big] + \frac{b}{2}\Big[\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} - \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big]\\ =&\, (1 + o(1)) \frac{a}{2}\Big[\exp\Big( \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big] + (1 + o(1)) \frac{b}{2}\Big[\exp\Big( - \frac{t \theta^2}{\widetilde{d}}\Big) - 1\Big] \end{align} Combining the calculations above, compute the following rate function \begin{align} g(a, b, c_\tau, t) \coloneqq - \,q_m^{-1}\log\E[\exp(t h_i)]. \end{align} Recall $\alpha = a q_m/m = o(1)$, $\beta = bq_m/m = o(1)$, $\frac{\theta^4}{\theta^2 + (1 - \tau )d/m} = c_{\tau} q_m$ in \Cref{ass:asymptotics}, thus $\theta^2 = (1 + o(1)) c_{\tau} q_m$. By using $\log(1 + x) = x$ for $x = o(1)$, the rate function $g(a, b, c_\tau, t)$ can be calculated as \begin{align} g(a, b, c_\tau, t) =&\, - \frac{t\rho \theta^2}{q_m \widetilde{d}} - \frac{t^2\rho^2 \theta^2 }{2q_m \widetilde{d}^2} + \frac{(N-1)}{2m} \Big[ a - a\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} + \frac{t \theta^2}{\widetilde{d}}\Big) + b - b\exp\Big( \frac{t^2 \theta^2}{2\widetilde{d}^2} - \frac{t \theta^2}{\widetilde{d}}\Big) \Big] \\ =&\, \frac{1}{2(1 - \tau)} \Big[a\Big(1 - \exp\Big( \frac{2c_{\tau}t}{a + b+ 2s} \Big) \Big) + b\Big(1 - \exp\Big( -\frac{2c_{\tau} t}{a+ b + 2s} \Big) \Big) \Big] - \frac{2c_{\tau}st}{a+ b + 2s} - \frac{2c_{\tau} s^2t^2}{(a + b + 2s)^2}, \end{align} where in the last line, we used $\rho = s q_m$, $\widetilde{d} = (\frac{a + b}{2} + s)q_m$. 
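Before optimizing over $t$, the following minimal numerical sketch can be used as a sanity check of the expression just obtained; the values of $a$, $b$, $\tau$ and $c_\tau$ below are illustrative assumptions and are not taken from our results. It verifies that, with the self-loop intensity $s = 2c_\tau/\log(a/b)$ chosen next, the supremum over $t$ of this rate function matches $\frac{(1-\tau)^{-1}(\sqrt{a}-\sqrt{b})^2 + c_\tau}{2}$.
\begin{verbatim}
import numpy as np

# Illustrative parameters for a sanity check; these values are assumptions.
a, b, tau, c_tau = 6.0, 1.0, 0.3, 0.8
s = 2 * c_tau / np.log(a / b)          # candidate optimal self-loop intensity
A = a + b + 2 * s

def g(t):
    # rate function derived above: graph part plus self-loop part
    graph = (a * (1 - np.exp(2 * c_tau * t / A))
             + b * (1 - np.exp(-2 * c_tau * t / A))) / (2 * (1 - tau))
    loop = -2 * c_tau * s * t / A - 2 * c_tau * s**2 * t**2 / A**2
    return graph + loop

t_grid = np.linspace(-10.0, 10.0, 200001)
sup_g = g(t_grid).max()                # numerical supremum over t
I_abc = ((np.sqrt(a) - np.sqrt(b))**2 / (1 - tau) + c_tau) / 2
print(sup_g, I_abc)                    # the two values agree up to grid resolution
\end{verbatim}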
By choosing $s = \frac{2c_\tau}{\log(a/b)}$, we can conclude that \begin{align} I^{\star} = \sup_{t\in \R} I(a_\tau, b_\tau, c_\tau, t) = \frac{(1 - \tau)^{-1}(\sqrt{a} - \sqrt{b})^2 + c_\tau}{2}\equiv I(a_\tau,b_\tau,c_\tau), \end{align} which completes the proof. \end{proof} \subsection{Gradient descent for the first layer weight matrix $\bW$} For simplicity, we denote $\widetilde\bX=\bD_s^{-1}\bA_s\bX=(\widetilde\bx_1,\ldots,\widetilde\bx_N)^\top$ where $\widetilde\bx_i\in\R^d$ for $i\in[N]$ and $s=0$. In this case, we will explore the feature learning on $\bW$. Below, we will always fix $\ba$ (at initialization in Assumption~\ref{assump:NN}) and perform gradient descent on $\bW$ in \eqref{eq:W_GD}. To ease the notions, we write the initialized first-layer weights as $\bW^{(0)}=\bW_{\!0}$, and the weights after one gradient step as $\bW^{(1)}=\bW_{1}$, where the learning rate of the first gradient descent is $\eta_1>0$. Let $s^{(0)}=0$. Following the notions in \cite{ba2022high}, we denote that \begin{align}\label{eq:gradient-step-MSE} \bG_1 := -\nabla_{\bW_{0}}\cL(\bW_0,s^{(0)}) = \frac{1}{n} \widetilde\bX_\sL^\top \left[\left(\frac{1}{\sqrt{K}}\left(\by_\sL - \frac{1}{\sqrt{K}}\sigma(\widetilde\bX_\sL\bW_{\!0})\ba\right)\ba^\top\right) \odot \sigma'(\widetilde\bX_\sL\bW_{\!0})\right], \end{align} where $\widetilde\bX_\sL=\bS_\sL\widetilde\bX\in \R^{n\times d}$, $\odot$ is the Hadamard product, and $\sigma'$ is the derivative of $\sigma$ (acting entry-wise). Here $K$ represents the number of neurons in the hidden GCN layer in \eqref{eq:NN}. Then, from \eqref{eq:W_GD} with $\lambda_1=\eta_1^{-1}$, we have $$\bW_{\! 1} = \bW_{\!0} + \eta_1 \cdot\bG_1-\bW_{\!0}=\eta_1 \cdot\bG_1.$$ Thus, our target is to analyze the gradient matrix $\bG_1$. The following proposition is similar to Proposition 2 in \cite{ba2022high}, implies that this gradient matrix is approximately rank one. \begin{proposition}\label{prop:G_1} Under the same assumption as Theorem~\ref{thm:NN}, we have that \begin{align} \Big\|\bG_1-\frac{1}{n\sqrt{K}} \widetilde\bX_{\sL}^\top\by_\sL\ba^\top\Big\|_F\le~& \frac{Cq_m}{K}, \end{align} with probability at least $1-\exp{(-c\log^2N)}$, for some constant $c,C>0$. \end{proposition} \begin{proof} First of all, analogously to the proof of Lemma~\ref{lem:approx_kernel}, we can show that \begin{equation}\label{eq:bound_tild_X} \norm{\widetilde\bX_\sL}\le \sqrt{q_mN}, \end{equation} with very high probability, since $d\lesssim N$. Moreover, $\|\by_\sL\|=\sqrt{n}$ and we can always view $\by$ as a deterministic vector in $\R^N$. By the definition, the gradient matrix $\bG_1$ under the MSE can be simplified as follows \begin{align} \bG_1 &= -\frac{1}{n}\widetilde\bX_\sL^\top \left[\left(\frac{1}{\sqrt{K}}\left(\frac{1}{\sqrt{K}}\sigma(\widetilde\bX_\sL\bW_0)\ba - \by_\sL\right)\ba^\top\right) \odot \sigma'(\widetilde\bX_\sL\bW_0)\right] \\ &= \frac{1}{n}\cdot\frac{\mu_1}{\sqrt{K}}\widetilde\bX_\sL^\top\left(\by_\sL - \frac{1}{\sqrt{K}}\sigma(\widetilde\bX_\sL\bW_0)\ba \right)\ba^\top + \frac{1}{n}\cdot\frac{1}{\sqrt{K}}\bX^\top\left(\left(\by_\sL - \frac{1}{\sqrt{N}}\sigma(\widetilde\bX_\sL\bW_0)\ba\right)\ba^\top\odot\sigma'_\perp(\widetilde\bX_\sL\bW_0)\right), \end{align} where we utilized the orthogonal decomposition: $\sigma'(z) = \mu_1 + \sigma'_\perp(z)$. By Stein's lemma, we know that $\E[z\sigma(z)] = \E[\sigma'(z)] = \mu_1$, and hence $\E[\sigma'_\perp(z)]=0$ for $z\sim\cN(0,1)$. Notice that we consider $\sigma(x)=x$, hence $\mu_1=1$ and $\sigma'_\perp(z)\equiv 0$. 
Therefore, we have \begin{align} \bG_1 &=\frac{1}{n}\cdot\frac{1}{\sqrt{K}}\widetilde\bX_\sL^\top \by_\sL\ba^\top - \underbrace{\frac{1}{nK}\widetilde\bX_\sL^\top \widetilde\bX_\sL\bW_0 \ba \ba^\top}_{\bDelta}. \end{align} Notice that $\|\ba\|=1$ and we can apply \eqref{eq:Gaussian-spectral-norm} for Gaussian random matrix $\bW_0$. Thus, because of Lemma~\ref{lem:approx_kernel}, $d\lesssim n\asymp N$ and $d\lesssim K$, we have that \begin{align} \|\bDelta\|_F = \|\bDelta\|\le \frac{Cq_m}{K}(1+\sqrt{d/K}), \end{align} with very high probability, which completes the proof of this proposition. \end{proof} This proposition shows that for $\bW$ at Gaussian initialization, the corresponding gradient matrix can be approximated in operator norm by the \textbf{rank-1} matrix only related to labels $\by_\sL$, feature matrix $\widetilde\bX_\sL$, and $\ba$. In the following, we will use the parameter \[\zeta:= \sqrt{c_\tau}\frac{\eta_1\sqrt{q_m}}{K}\frac{\alpha-\beta}{\alpha+\beta}.\] Notice that $\zeta=\Theta(1)$ if $K/\eta_1 = \Theta(\sqrt{q_m})$. Then we can tune the learning rate $\eta_1$ to ensure that this trained and normalized weight matrix $\frac{1}{\sqrt{K}}\bW^{(1)}$ can be aligned with $\bmu$ perfectly. \begin{lemma}\label{lem:Wa} Under the assumption as Theorem~\ref{thm:NN}, we have that \[\norm{\frac{1}{\sqrt{K}}\bW^{(1)}\ba-\sqrt{c_\tau}\frac{\eta_1\sqrt{q_m}}{K}\frac{\alpha-\beta}{\alpha+\beta} \bmu}=O\big(\frac{\eta_1}{K}\big),\] with a probability at least $1-cN^{-10}$, for some constants $c,C>0$. \end{lemma} \begin{proof} Notice that $\bW_{\! 1} =\eta_1 \cdot\bG_1$ and $\ba^\top\ba=1$. Notice that $\widetilde\bX_{\sL}^\top\by_\sL=\sqrt{Nq_m}\cdot h(\bX)^\top\bP_\sL\by$. Following from Proposition~\ref{prop:G_1} and Lemma~\ref{lem:his}, we can have \begin{align} \norm{\frac{1}{\sqrt{K}}\bW^{(1)}\ba-\zeta \bmu}\le~& \frac{1}{\sqrt{K}}\Big\| \eta_1 \bDelta\ba\Big\|+\Big\|\zeta\bmu-\frac{\eta_1 }{nK} \widetilde\bX_{\sL}^\top\by_\sL\ba^\top\ba\Big\|\\ \lesssim~& \frac{\eta_1}{K}+ \frac{\eta_1 q_m}{K^{3/2}}, \end{align}with very high probability. Notice that here $\ba^\top\ba=1$. Then, we can assume $q_m^2\lesssim K$ to finish this proof. \end{proof} \subsection{Learning the optimal self-loop weight} \begin{lemma}\label{lem:D_0} Under Assumption~\ref{ass:asymptotics}, we know that \[\left|D_0 - \bar{d}\right|\le \frac{C}{q_m^{1/2}},\] with probability at least $1-ce^{-N}$ for some constants $c,C>0$, where $\bar d :=\frac{a_\tau+b_\tau}{2} q_m$. \end{lemma} This is straightforward based on the proof of Lemma~\ref{lem:approx_A}, hence we ignore the proof here. \begin{lemma} Under the assumption as Theorem~\ref{thm:NN}, we have that \[\left|\frac{2}{n^2q_m} \by_\sL^\top\bX_\sL\bW^{(1)}\ba -2c_\tau\right|\le \frac{C}{n\sqrt{q_m}}\] with probability at least $1-cN^{-10}$, for some constants $c,C>0$. \end{lemma} \begin{proof} By Proposition~\ref{prop:G_1} and Lemma~\ref{lem:Wa}, we can replace $\bW^{(1)}\ba$ by $\zeta\bmu $. Notice that \eqref{eq:bound_tild_X} and Lemma~\ref{lem:Wa} indicate that \begin{align} \left|\frac{2}{n^2q_m} \by_\sL^\top\bX_\sL(\bW^{(1)}\ba -\zeta\bmu)\right|\le \frac{2}{n^2q_m}\|\by_\sL\|\cdot\|\bX_\sL\|\cdot\|\bW^{(1)}\ba-\zeta \bmu\|\lesssim 1/(nq_m), \end{align} with very high probability. for $s=0$. 
Then, we can apply Lemmas~\ref{lem:Z_concentration} and \ref{lem:approx_X} to conclude that \begin{align} \left|\frac{1}{n^2q_m} \by_\sL^\top\bX_\sL \zeta\bmu-c_\tau \right| \lesssim \frac{1}{n\sqrt{q_m}}, \end{align} with a very high probability for sufficiently large $n$ and $m$. \end{proof} \begin{lemma} Following the assumptions in Theorem~\ref{thm:NN}, we have that \[\left|\frac{1}{n}\by^\top_\sL\bA_\sL\by_\sL-\frac{a_\tau-b_\tau}{2}q_m\right|=o(q_m),\] with a probability of at least $1-cN^{-10}$, for some constants $c,C>0$. \end{lemma} \begin{proof} This lemma follows from Lemma F.6 in \cite{abbe2022lp} and Corollary 3.1 in \cite{abbe2020entrywise}. Notice that \begin{align} \frac{1}{n}\by^\top_\sL\bA_\sL\by_\sL=~& \frac{1}{n}\sum_{i,j\in\cV_\sL}\bA_{ij}y_iy_j\\ =~& \frac{1}{n}\sum_{i,j\text{ in same block of } \cV_{\sL }}\bA_{ij} -\sum_{i,j\text{ in different blocks of }\cV_{\sL }}\bA_{ij}. \end{align} Then, we can apply the proof of (F.23) in \cite{abbe2022lp} to conclude this lemma. \end{proof} Combining all the above lemmas in this section, we can derive the following lemma. \begin{lemma}\label{lem:s_1} Following the assumptions in Theorem~\ref{thm:NN}, we have that \[\left|s^{(1)}-\frac{2c_\tau}{\log\Big(\frac{a_\tau}{b_\tau}\Big)}\right|\le \frac{C}{\sqrt{q_m}},\] with probability at least $1-cN^{-10}$, for some constants $c,C>0$, where $s^{(1)}$ is given by \eqref{eq:s_Trained}. \end{lemma} \begin{algorithm} \caption{Gradient-based training for both $\bW$ and $s$ in GCN} \label{alg:gradient} \begin{algorithmic} \Require Learning rates $\eta_t$, weight decay $\lambda_t$, number of steps $T$ \State {\textbf{Initialization:}} $s^{(0)}\sim\Unif([-1,1])$, $\sqrt{K}\cdot[\bW^{(0)}]_{ij}\iid\cN(0,1), ~ \sqrt{K}\cdot[\ba]_j\iid \Unif\{\pm 1\}$, $\forall i\in[d],j\in [K]$. \State {\textbf{Training Stage 1:}} \State {$\quad\quad$ Set $\sigma(x)=x$ in \eqref{eq:NN}} \State $\quad\quad$ $\bW^{(1)} \gets \bW^{(0)} - \eta_1 (\nabla_{\bW^{(0)}} \mathcal{L}(\bW^{(0)},s^{(0)}) + \lambda_1 \bW^{(0)} )$ \State $\quad\quad$ $s^{(1)} \gets s^{(0)}$ \State $\quad\quad$ $\bW^{(1)} \gets \bW^{(1)}\ba$ \State $\quad\quad$ $\ba \gets 1$ \State {\textbf{Training Stage 2:}} \State $\quad\quad$ Set $\sigma(x)=\tanh(x)$ in \eqref{eq:NN} \State $\quad\quad$ {\textbf{For} $t=2$ to $T$ \textbf{do}} \State $\quad\quad\quad\quad$ $s^{(t)} \gets s^{(t-1)} - \eta_t \nabla_{s^{(t)} } \mathcal{L}(\bW^{(1)},s^{(t-1)}) + \lambda_t s^{(t-1)}$ \State $\quad\quad$ \textbf{End For} \Ensure Prediction function for unknown labels: $\sign(\bS_\sU\bD_{s^{(T)}}^{-1}\bA_{s^{(T)}}\bW^{(1)}\ba)$ \end{algorithmic} \end{algorithm} \subsection{Proof of Theorem~\ref{thm:NN}} Let us recall that we consider $K/\eta_1\asymp \sqrt{q_m}$ and $d=o(q_m^2)$ with $\bW_{i,j}\sim\cN(0,1/K)$. Finally, we complete the proof of Theorem~\ref{thm:NN} for Algorithm~\ref{alg:gradient0} as follows. Recall that $\bD_s:=(D_0+sq_m)\bI\in\R^{n\times n}$, for any $s\in\R$, where $D_0$ is the average degree of the graph. Denote that \begin{align} s_{\text{opt}}:=~&\frac{2c_\tau}{\log\Big(\frac{a_\tau}{b_\tau}\Big)}\\ \bD_{s^{(1)}}^{-1}\bA_{s^{(1)}}\bX=:~&[\widehat\bg_1,\ldots,\widehat\bg_N]^\top\in\R^{N\times d},\\ \bD_{s_{\text{opt}}}^{-1}\bA_{s_{\text{opt}}}\bX=:~&[ \bar{\bg}_1,\ldots, \bar{\bg}_N]^\top\in\R^{N\times d},\\ (\tilde{d}+s_{\text{opt}}\cdot q_m)^{-1} \bA_{s_{\text{opt}}}\bX=:~&[ \tilde\bg_1,\ldots, \tilde\bg_N]^\top\in\R^{N\times d},\\ \frac{1}{\sqrt{K}}\bW^{(1)}\ba =:~& \widehat{\bmu}. 
\end{align} Then, by definition, $\widehat\by_{\mathrm{GCN},i}=\widehat\bg_i^\top \widehat\bmu$ for $i\in\cV_{\sU}$. As a remark, notice that Lemma~\ref{lem:his} verifies that with high probability $\norm{\widehat\bg_i}\lesssim \sqrt{d}$. Because of this bound, we can only consider the regime when $d=o(q_m^2)$ for our following analysis. To improve this to a high dimensional regime, e.g., $d\asymp N$, we improve the following concentration without simply using the bound of $\norm{\widehat\bg_i}$. Similarly with the proof of Lemma~\ref{lem:beta_i} in ridge regression of linear GCN part, we need to do certain leave-one-out analysis to achieve a larger regime for $d$. Next, based on the above decomposition, we follow the proof idea of Theorem~\ref{thm:exact_linear} to complete the proof of Theorem~\ref{thm:NN}. Combining Lemmas~\ref{lem:his},~\ref{lem:Wa} and \ref{lem:s_1}, for each $i\in\cV_\sU$, we can obtain that \begin{align} |y_{i}\cdot \widehat\bg_i^\top \widehat\bmu-y_{i}\cdot \tilde\bg_i^\top \bmu|\le ~& \Big|(\tilde\bg_i-\bar\bg_i)^\top {\bmu}\Big|+ \Big|(\bar\bg_i- \widehat\bg_i)^\top {\bmu}\Big| + \Big| \widehat\bg_i^\top(\widehat\bmu- \bmu)\Big| = o(\sqrt{q_m}), \end{align} with probability at least $1-cN^{-10}$ for some constants $c,C>0$. Therefore, we can take $\zeta=1$, $\eps_m=o(1)$ and $\rho= s_{\text{opt}}\cdot q_m$ to get \begin{align} \P(\psi_m(\sign(\widehat\by_{\mathrm{GCN}}),\by_{\sU})=0)=~&\P\big(\min_{i\in[m]}\by_{\sU,i}\cdot\widehat\by_{\mathrm{GCN},i}>0\big)= \P\Big( \min_{i\in\cV_{\sU}} y_{i}\cdot \widehat\bg_i^\top \widehat\bmu >0\Big)\\ \ge ~& \P\Big( \min_{i\in\cV_{\sU}} y_{i}\cdot \tilde\bg_i^\top \bmu >\epsilon_m \sqrt{q_m},~|y_{i}\cdot \widehat\bg_i^\top \widehat\bmu-y_{i}\cdot \tilde\bg_i^\top \bmu|\le \epsilon_m \sqrt{q_m},~\forall i\in \cV_{\sU}\Big)\\ \ge ~& \P\Big( \min_{i\in\cV_{\sU}} y_{i} \cdot \tilde\bg_{i}^\top \bmu> \eps_m {\sqrt{q_m}}\Big)- \sum_{i\in\cV_{\sU}}\P\Big(|y_{i}\cdot \widehat\bg_i^\top \widehat\bmu-y_{i}\cdot \tilde\bg_i^\top \bmu|> \epsilon_m \sqrt{q_m}\Big)\\ \ge ~& \P\big(\min_{i\in\cV_{\sU}}y_{i}\zeta\cdot \frac{1}{ \widetilde d}(\bA_{\rho}\bX)_{i:} \bmu>C\sqrt{q_m}\eps_m\big)-Cm^{-2}\\ \ge ~& 1-\sum_{i\in\cV_{\sU}}\P\Big(y_{i}\cdot \frac{\zeta}{ \widetilde d}(\bA_\rho\bX)_{i:} \bmu\le C\sqrt{q_m}\eps_m\Big)-Cm^{-2}\\ \ge ~& 1-m\P\Big(y_{i}\cdot \frac{\zeta}{ \widetilde d}(\bA_\rho\bX)_{i:} \bmu\le C\sqrt{q_m}\eps_m\Big)-Cm^{-2}\\ \ge~& 1-m^{1- \sup_{t\in\R} \{\eps_m t+g(a ,b ,c,\tau,1, s_{\text{opt}},t)\}+\delta}-Cm^{-2}, \end{align} for any $\delta>0$ and sufficiently large $m$, where in the last line we employ Proposition~\ref{prop:LDP}. Thus, applying Lemma~\ref{lem:rate_fun}, we know that when $J(a_\tau, b_\tau, c_\tau, 1, s_{\text{opt}} )=I(a_\tau,b_\tau,c_\tau)>1$, $\P(\psi_m(\sign(\widehat\by_{\mathrm{GCN}}),\by_{\sU})=0)\to 1$ as $m\to\infty$. \section{Auxiliary Lemmas and Proofs} \begin{lemma}\label{lem:Z_concentration} Let $\bZ\in\R^{N\times d}$ defined in \eqref{eqn:gauss_mixture}. Then, there exists some constant $c,K>0$ such that for any $t>0$ \begin{align} \Parg{\left|\frac{1}{\sqrt{N}} \ones^{\top} \bZ\bmu\right|\ge t}\le~&2\exp{(-ct^2d)},\\ \Parg{\left|\frac{1}{N } \ones^{\top} \bZ\bZ^{\top} \by\right|\ge t}\le~&2\exp{\left(-cd\min\left\{\frac{t^2}{K^2 },\frac{t}{K}\right\}\right)},\\ \Parg{\left|\frac{1}{N } \ones^{\top} \bZ\bZ^{\top} \ones - 1\right|\ge t}\le~&2\exp{\left(-cd\min\left\{\frac{t^2}{K^2 },\frac{t}{K}\right\}\right)}. 
\end{align} \end{lemma} \begin{proof} Based on general Hoeffding’s inequality Theorem 2.6.3 in \cite{vershynin2018high}, we can get \begin{align} \Parg{\left|\frac{1}{\sqrt{N}} \ones^{\top} \bZ\bmu\right|\le t}=~&\Parg{\left|\frac{1}{\sqrt{N}} \sum_{i=1}^N \bz_i^\top\bmu\right|\le t} \le 1-2\exp{(-ct^2d)}. \end{align} Similarly, by Bernstein’s inequality Theorem 2.8.2 in \cite{vershynin2018high}, we have \begin{align} \Parg{\left|\frac{1}{N } \ones^{\top} \bZ\bZ^{\top} \by\right|\le t}\ge~& 1- 2\exp{\left(-cd\min\left\{\frac{t^2}{K^2 },\frac{t}{K}\right\}\right)},\\ \Parg{\left|\frac{1}{N } \ones^{\top} \bZ\bZ^{\top} \ones - 1\right|\le t}\ge~& 1- 2\exp{\left(-cd\min\left\{\frac{t^2}{K^2 },\frac{t}{K}\right\}\right)}, \end{align} where $K=\|\xi\|_{\psi_2}^2$ for $\xi\sim\cN(0,1)$. \end{proof} \begin{lemma}[Simplified version of Theorem 3.3 in \cite{dumitriu2023exact}]\label{lem:concentrateA} Let $G = ([N], E)$ be an inhomogeneous Erd\H{o}s-R\'{e}nyi graph associated with the probability matrix $\bP$, that is, each edge $e = \{i, j\}\subset [N]^2$ is sampled from $\mathrm{Ber}(P_{ij})$, namely, $\P(A_{ij} = 1) = P_{ij}$. Let $\bA$ denote the adjacency matrix of $G$. Denote $P_{\max}\coloneqq \max_{i, j\in [N]} P_{ij}$. Suppose that \begin{align}\label{eqn:concentrateA_condition} N \cdot P_{\max} \geq c\log N\,, \end{align} for some positive constant $c$, then with probability at least $1-2n^{-10}- 2e^{-N}$, adjacency matrix $\bA$ satisfies \begin{align}\label{eqn:concentrateA} \|\bA - \E \bA \| \leq \const_{\eqref{eqn:concentrateA}}\cdot \sqrt{N \cdot P_{\max}}\,, \end{align} \end{lemma} \begin{lemma}[Bernstein's inequality, Theorem 2.8.4 of \cite{vershynin2018high}]\label{lem:Bernstein} Let $X_1,\dots,X_n$ be independent mean-zero random variables such that $|X_i|\leq K$ for all $i$. Let $\sigma^2 = \sum_{i=1}^{n}\E X_i^2$. Then for every $t \geq 0$, \begin{align} \P \Bigg( \Big|\sum_{i=1}^{n} X_i \Big| \geq t \Bigg) \leq 2 \exp \Bigg( - \frac{t^2/2}{\sigma^2 + Kt/3} \Bigg)\,. \end{align} \end{lemma}
2412.13751v1
http://arxiv.org/abs/2412.13751v1
Notions of entropy for unitary representations
\documentclass[11pt,a4paper]{book} \usepackage{amssymb,amsbsy,amsthm,amsmath,graphicx,epsfig} \usepackage{mathabx,times,hyperref,eucal} \usepackage{appendix} \numberwithin{equation}{section} \usepackage{sectsty} \chapterfont{\Large} \sectionfont{\large} \subsectionfont{\normalsize} \newenvironment{nmath}{\begin{center}\begin{math}}{\end{math}\end{center}} \newtheorem{thm}{Theorem}[chapter] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{dfn}[thm]{Definition} \newtheorem{prob}[thm]{Problem} \newtheorem{ques}[thm]{Question} \newtheorem{mainthm}{Theorem} \renewcommand*{\themainthm}{\Alph{mainthm}} \theoremstyle{remark} \newtheorem{ex}[thm]{Example} \newtheorem{rmk}[thm]{Remark} \newtheorem*{warning}{Warning} \newtheorem{rmks}[thm]{Remarks} \newcommand{\bb}[1]{\mathbb{#1}} \newcommand{\bs}[1]{\boldsymbol{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\rm}[1]{\mathrm{#1}} \renewcommand{\cal}[1]{\mathcal{#1}} \newcommand{\bbC}{\mathbf{C}} \newcommand{\bbD}{\mathbf{D}} \newcommand{\bbE}{\mathbf{E}} \newcommand{\bbF}{\mathbf{F}} \newcommand{\bbN}{\mathbf{N}} \newcommand{\bbP}{\mathbf{P}} \newcommand{\bbQ}{\mathbf{Q}} \newcommand{\bbR}{\mathbf{R}} \newcommand{\bbT}{\mathbf{T}} \newcommand{\bbZ}{\mathbf{Z}} \newcommand{\sfE}{\mathsf{E}} \newcommand{\rmS}{\mathrm{S}} \newcommand{\T}{\mathrm{T}} \newcommand{\rmH}{\mathrm{H}} \newcommand{\rmI}{\mathrm{I}} \newcommand{\rmM}{\mathrm{M}} \newcommand{\rmU}{\mathrm{U}} \newcommand{\rma}{\mathrm{a}} \renewcommand{\d}{\mathrm{d}} \newcommand{\rme}{\mathrm{e}} \newcommand{\rmh}{\mathrm{h}} \newcommand{\rmi}{\mathrm{i}} \newcommand{\calA}{\mathcal{A}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calC}{\mathcal{C}} \newcommand{\E}{\mathcal{E}} \newcommand{\F}{\mathcal{F}} \newcommand{\calG}{\mathcal{G}} \newcommand{\calH}{\mathcal{H}} \newcommand{\I}{\mathcal{I}} \newcommand{\J}{\mathcal{J}} \renewcommand{\P}{\mathcal{P}} \newcommand{\Q}{\mathcal{Q}} \newcommand{\R}{\mathcal{R}} \newcommand{\calS}{\mathcal{S}} \newcommand{\calT}{\mathcal{T}} \newcommand{\U}{\mathcal{U}} \newcommand{\V}{\mathcal{V}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\Z}{\mathcal{Z}} \newcommand{\A}{\mathfrak{A}} \newcommand{\B}{\mathfrak{B}} \newcommand{\frE}{\mathfrak{E}} \newcommand{\frH}{\mathfrak{H}} \newcommand{\frL}{\mathfrak{L}} \newcommand{\M}{\mathfrak{M}} \newcommand{\N}{\mathfrak{N}} \newcommand{\frR}{\mathfrak{R}} \newcommand{\fre}{\mathfrak{e}} \newcommand{\G}{\Gamma} \renewcommand{\L}{\Lambda} \renewcommand{\S}{\Sigma} \renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \newcommand{\eps}{\varepsilon} \renewcommand{\phi}{\varphi} \newcommand{\g}{\gamma} \renewcommand{\l}{\lambda} \newcommand{\s}{\sigma} \newcommand{\co}{\mathrm{co}} \newcommand{\cx}{\mathrm{cx}} \newcommand{\GP}{\mathrm{GP}} \newcommand{\Op}{\mathrm{Op}} \renewcommand{\hat}[1]{\widehat{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\ul}[1]{\underline{#1}} \renewcommand{\tilde}[1]{\widetilde{#1}} \renewcommand{\t}[1]{\widetilde{#1}} \newcommand{\into}{\hookrightarrow} n}{\nolinebreak\hspace{\stretch{1}}$\lhd$} \newcommand{\midotimes}{\hbox{$\bigotimes$}} \newcommand{\midvee}{\hbox{$\bigvee$}} \newcommand{\mprod}{\hbox{$\prod$}} \newcommand{\mcap}{\hbox{$\bigcap$}} \newcommand{\sbinom}[2]{\hbox{$\binom{#1}{#2}$}} \newcommand{\actson}{\curvearrowright} \newcommand{\longto}{\longrightarrow} \newcommand{\bspi}{\boldsymbol{\pi}} \newcommand{\ann}{\mathrm{ann}} 
\newcommand{\hann}{\mathrm{h}_\mathrm{ann}} \newcommand{\htyp}{\mathrm{h}_\mathrm{typ}} \newcommand{\FK}{\mathrm{FK}} \newcommand{\red}{\mathrm{red}} \newcommand{\dom}{\mathrm{dom}\,} \newcommand{\vol}{\mathrm{vol}} \newcommand{\Sew}{\mathrm{Sew}} \newcommand{\Rep}{\mathrm{Rep}} \newcommand{\PSD}{\mathrm{PSD}} \newcommand{\PPSD}{\mathrm{PPSD}} \newcommand{\PD}{\mathrm{PD}} \newcommand{\sa}{\mathrm{sa}} \newcommand{\tr}{\overline{\mathrm{tr}}} \newcommand{\Tr}{\mathrm{tr}} \newcommand{\Det}{\mathrm{det}} \newcommand{\pd}{\mathrm{pd}} \newcommand{\st}{\mathrm{stiff}} \begin{document} \title{Notions of entropy for unitary representations} \author{Tim Austin} \maketitle \setcounter{tocdepth}{1} \tableofcontents \pagestyle{plain} \chapter{Introduction} In the last twenty years, Lewis Bowen's sofic entropy and its annealed version (originally called the `f-invariant') have taken a central place in the ergodic theory of actions of non-amenable groups. In this work we pursue these notions of entropy across the analogy between ergodic theory and representation theory. We arrive at new quantities associated to unitary representations of groups and representations of other C$\sp*$-algebras. We find connections to both classical constructs in operator theory and the study of tuples of independent random unitary matrices. Fix a countable group $\G$, and consider actions of two different kinds: measure-preserving actions on probability spaces (ergodic theory); and unitary actions on Hilbert spaces (representation theory). The analogy between these is mostly straightforward and widely appreciated. For example, it is clear in Kechris' adaptation of the relation of weak containment to measure-preserving systems~\cite{KecGAEGA,BurKec20}. This analogy crystallizes into at least two different formal relationships. On the one hand, any measure-preserving system gives rise to its Koopman representation~\cite[Section 10]{KecGAEGA}. On the other hand, any orthogonal real representation of $\G$ can be used to construct a measure-preserving action on a Gaussian Hilbert space, and this construction is easily adapted to start with a unitary representation instead~\cite[Appendices C--E]{KecGAEGA}. However, the first of these relationships does not correctly connect the notions of entropy that we study in this work, and the second is apt to mislead. The new entropy that we define for unitary representations does not reduce to sofic entropy when applied to a Koopman representation -- indeed, our new notion is not even an invariant of unitary equivalence in general. On the other hand, if we start with a unitary representation and construct the associated Gaussian system, then there are cases in which our notion of entropy for the former should equal a differential-entropy analog of sofic entropy for the latter. But even for single transformations the study of `differential' Kolmogorov--Sinai entropy meets extensive difficulties~\cite{HamParTho08}, and really seems to behave well only for linear transformations between Gaussian systems. If we insist on that restriction, then we might as well stay within the setting of representation theory. Overall, an analogy at the level of intuition is more revealing than either the Koopman or the Gaussian construction for our work below. To begin filling out this analogy, we identify certain common constructs on each side as follows. 
Observables (equivalently, finite measurable partitions) of a probability space are analogous to vectors in Hilbert space, or more generally to finite tuples of vectors. If $(X,\mu,T)$ is a measure-preserving $\G$-system and $\a:X\to A$ is an observable, then it generates the shift-system $(A^\G,\a^\G_\ast \mu,S)$, where $\a^\G_\ast \mu$ is the law of the $\G$-indexed stochastic process $(\a\circ T^g:\ g \in \G)$. If $\pi$ is a unitary representation of $\G$ and $v \in H_\pi$, then it defines the positive definite function $\langle \pi(g)v,v\rangle$. This is the analog of a joint distribution. Its construction can be extended to finite tuples of vectors by considering matrix-valued positive definite functions. \section*{\emph{Entropy for representations of separable C$\sp*$-algebras}} Our first main topic is an analog of sofic entropy for representations. To introduce it, consider again a countable group $\G$ and the analogy between its measure-preserving actions and its unitary representations. If $\G$ is amenable, then the classical Kolmogorov--Sinai entropy of $(X,\mu,T)$ is defined by considering laws such as $\a^\G_\ast \mu$, taking an appropriate limit of normalized Shannon entropies along a F\o lner sequence, and then taking a supremum over the choice of $\a$. However, if $\G$ is not amenable then no simple generalization of this idea has met with success. Circumventing this problem, Lewis Bowen introduced `sofic entropy' in~\cite{Bowen10}. It is defined with reference to an auxiliary sofic approximation to $\G$ itself: a sequence of `near actions' of $\G$ by permutations on finite sets. For a system $(X,\mu,T)$ and observable $\a$ as above, over each of these `near actions', Bowen counts the $A$-colourings of the finite set whose `local statistics' are close to the law of $\a$. Very roughly, he takes the lim-sup exponential growth rate of these counts as the definition of a new notion of entropy. Our first innovation below is the analogous construction for a unitary representation $\pi$ and a vector $v \in H_\pi$. This time we try approximating by a sequence of finite-dimensional representations, and for each of these we consider the vectors that give roughly the same positive definite function on $\G$ as $v$ does in $\pi$. Our first notion of entropy is (roughly) the lim-sup exponential growth rate of the volumes of these sets of vectors. In pursuing this idea, we quickly find that it is easier to allow greater generality still. We fix a separable, unital C$\sp*$-algebra $\A$ and consider all its representations on separable Hilbert spaces. This generality is the work of Part~\ref{part:general} below. The case of a group $\G$ is recovering by taking $\A = C^\ast \G$. However, even in that case certain helpful auxiliary constructions lead us to consider other C$\sp*$-algebras as well. Fix a positive integer $k$. Given a representation $\pi$ of $\A$ and vectors $v_1$, \dots, $v_k \in \A$, define their \textbf{type} to be the $\rmM_k$-valued map \[\Phi^\pi_{v_1,\dots,v_k}(a) := [\langle \pi(a)v_j,v_i\rangle]_{i,j =1}^k \qquad (a \in \A).\] This is a completely positive map on $\A$, and any $\rmM_k$-valued completely positive map arises this way by Stinespring's theorem. 
Next, if $O$ is any set of such maps for a fixed value of $k$, and $\pi$ is any representation, then we define \[\X(\pi,O) := \{(v_1,\dots,v_k) \in H_\pi^{(k)}:\ \Phi^\pi_{v_1,\dots,v_k} \in O\}.\] Imagining that $O$ is a small neighbourhood of a particular completely positive map $\phi$, this is the analog of a set of `good models' for a given shift-invariant measure in sofic entropy theory. Finally, consider a sequence $\bspi := (\pi_n)_{n\ge 1}$ of representations of $\A$ whose dimensions are finite but diverge. We refer to it as an \textbf{almost periodic sequence} for $\A$. See Definition~\ref{dfn:AP} and the discussion in the rest of that section. For an almost periodic sequence $\bspi$ and a completely positive map $\phi$, we define the \textbf{almost periodic entropy of $\phi$ along $\bspi$} by \[\rmh_{\bspi}(\phi) := \inf_O \limsup_{i\to \infty}\frac{1}{d_i}\log \frac{\vol_{2kd_i}\X(\pi_i,O)}{c(k,d_i)},\] where $O$ ranges over neighbourhoods of $\phi$, $d_i$ is the dimension of $\pi_i$, $\vol_{2kd_i}$ refers to Lebesgue measure in $2kd_i$ real dimensions, and $c(k,d_i)$ is a certain choice of normalizing constant. A full explanation is given in Definition~\ref{dfn:APent}. The formula above is a direct analog of the usual definition of sofic entropy for a measure-preserving system with a distinguished generating observable. After a few chapters on necessary background (which more experienced readers may prefer to skip), we introduce this new notion of entropy carefully in Chapter~\ref{chap:AP}, and develop its basic properties. Many of these are suggested by the analogy with ergodic theory, but some differences emerge. For example, it is not an invariant of unitary equivalence of representations. We also study this notion from the point of view of some different modes of convergence of the sequence $\bspi$ itself: convergence of the resulting traces on $\A$; convergence in the Fell topology; or convergence in the related by stronger Fell-global topology (see Section~\ref{sec:catalog-tops}). Chapter~\ref{chap:det-form} presents the first main theorem of this work: a formula for $\rmh_{\bspi}(\phi)$ as a Fuglede--Kadison determinant. It holds whenever $\phi$ is `asymptotically associated' to $\bspi$ and the pulled-back traces $d_i^{-1}\Tr_{d_i}\circ \pi_i$ converge to a limiting tracial state $\tau$ of $\A$. `Asymptotic association' means that, for every neighbourhood $O$ of $\phi$, the set $\X(\pi_i,O)$ is nonempty for infinitely many $i$; without this assumption $\rmh_{\bspi}(\phi)$ is simply forced to be $-\infty$. For a completely positive map $\phi$ and a tracial state $\tau$, our determinantal formula refers to the `Lebesgue decomposition' of $\phi$ with respect to $\tau$: this is recalled in Corollary~\ref{cor:Leb}. It also refers to the Fuglede--Kadison determinant $\Delta_\tau \phi$ when $\phi$ is `absolutely continuous' with respect to $\tau$. We define this in terms of a particular kind of `Radon--Nikodym derivative' in Definition~\ref{dfn:FK-det-ac}. Assuming this technical background, the conclusion is as follows. \begin{mainthm}\label{mainthm:det-form} Suppose that $d_n^{-1}\Tr_{d_n}\circ \pi_n \to \tau$ and that $\phi$ is asymptotically associated to $\bspi$. Let $\phi_\rm{ac} + \phi_\rm{sing}$ be the Lebesgue decomposition of $\phi$ relative to $\tau$. Then \[\rmh_{\bspi}(\phi) = \log \Delta_\tau \phi_{\rm{ac}}.\] \end{mainthm} Fuglede--Kadison determinants have appeared in formulas for entropy in a number of previous works. 
They mostly concern Kolmogorov--Sinai or sofic entropy of some special classes of measure-preserving systems, or closely related notions in probability theory. As far as I know, the instances closest to Theorem~\ref{mainthm:det-form} are those of Li and Thom in~\cite{LiTho14}, Lyons in~\cite{Lyons05,Lyons10}, and especially Hayes in~\cite{Hayes16,Hayes21}. Theorem~\ref{mainthm:det-form} can also be seen as a somewhat loose analog of Szeg\H{o}'s limit theorem for positive definite Toeplitz determinants. This connection also has a known role from previous works in this area: for Szeg\H{o}'s original theorem on $\bbZ$ in~\cite{HamParTho08}, and for certain other analogs of it in~\cite{Lyons05,LiTho14}. A much more precise version of this connection emerges once we focus on annealed AP entropy for free groups in Part~\ref{part:free} -- see the remarks following Theorem~\ref{mainthm:annealed-formula}. Section~\ref{sec:prev-det-form} discusses the connections above in more detail. \section*{\emph{Random AP sequences for free groups}} In Part~\ref{part:free} we restrict our attention to unitary representations of a free group $\Gamma$ that is freely generated by a finite set $S$. For these, it is easy to find many representations of any desired dimension, since independent generators can be chosen arbitrarily. In particular, in any finite number of dimensions, we can choose a generating tuple of unitary matrices independently at random from Haar measure. This gives us a natural notion of a `uniformly random representation' in each finite dimension, and hence a natural model of random almost periodic sequences. This is the setting of Voiculescu's theory of `free probability'~\cite{VoiDykNic92,Voi02}, which has exploded in the last thirty years and already includes a notion of `free entropy'. But the notion of entropy that we study here seems to be new. For these random sequences, we can define a kind of annealed average of almost periodic entropy. This is analogous to Bowen's annealed sofic entropy from~\cite{Bowen10free} (originally referred to as the `f-invariant'; see the discussion of terminology in~\cite{AusBowShr}). Many of our results in Part~\ref{part:free} lie parallel to that theory, but sometimes with quite different proofs. For each positive integer $n$, let $\pi_n$ be a random representation of $\G$ on $\bbC^{(n)}$ so that the generators $\pi_n(s)$ for $s \in S$ are independent random elements of $\rmU(n)$, all distributed according to Haar measure. Let $\phi$ be a positive definite $\rmM_k$-valued function on $\G$. Then we define the \textbf{annealed almost periodic entropy} of $\phi$ to be \begin{equation}\label{eq:hann-preview} \hann(\phi) := \inf_O\limsup_{n\to\infty}\frac{1}{n}\log \bbE\frac{\vol_{2kn}\X(\pi_n,O)}{c(k,n)}, \end{equation} where as before the infimum ranges over neighbourhoods $O$ of $\phi$. The expectation refers to the random choice of the generators. After some more chapters of background, we introduce annealed entropy carefully in Definition~\ref{dfn:aAP-ent}, and derive some simple properties of it subsequently in Chapter~\ref{chap:free-gps}. We borrow the term `annealed' for these averages from similar calculations in statistical physics: see the discussion following Definition~\ref{dfn:aAP-ent} and in~\cite[Section I.1]{MezParVir--book}. Because we have averaged over all $n$-dimensional representations, the quantity $\hann(\phi)$ is a function of $\phi$ alone. 
Moreover, a kind of first-moment calculation (explained further below) leads to concrete formulas for it. The next theorem states two of these; some others are derived in Section~\ref{sec:alt-forms}. Let $B_n$ denote the closed $n$-ball around the identity in the left Cayley graph of $\G$. Let $\phi_{(B_n)}$ denote the positive semi-definite $(k|B_n|)$-by-$(k|B_n|)$ matrix determined by the elements ${\phi(g^{-1}h) \in \rmM_k}$ for $g,h \in B_n$, and similarly for other finite subsets of $\G$. \begin{mainthm}\label{mainthm:annealed-formula} The function $\hann$ takes values in $[-\infty,\infty)$ for each $k$. For any positive definite map $\phi:\G\to \rmM_k$, the value $\hann(\phi)$ is equal to both \begin{equation}\label{eq:LDP-formula-1} \lim_{n\to\infty}\Big[\log\Det \,\phi_{(B_n)} - \sum_{s \in S}\log\Det\,\phi_{(B_n\cap sB_n)}\Big] \end{equation} and \begin{equation}\label{eq:LDP-formula-2} \lim_{n\to\infty}\Big[\sum_{s \in S}\log\Det\,\phi_{(B_n\cup sB_n)} - (2r-1)\log\Det\,\phi_{(B_n)}\Big] \end{equation} (interpreting these as $-\infty$ if any of the determinants appearing here equals $0$). \end{mainthm} Much earlier in our work, in Chapter~\ref{chap:log-det-ent}, we introduce an `entropy' formalism for logarithms of determinants. This idea is classical, but the notation that accompanies it lets us write and manipulate quantities such as those above more easily. According to that formalism, formula~\ref{eq:LDP-formula-2} is the direct analog of Bowen's original definition of annealed sofic entropy in~\cite{Bowen10free}. \section*{\emph{Large deviations principles}} One of our first results about $\hann$ shows that, in case $\phi$ is unital, we can replace $\vol_{2kn}$ in~\eqref{eq:hann-preview} with the unique unitary-invariant probability measure $m_{\rm{U}(k,n)}$ on the space $\rm{U}(k,n)$ of all orthonormal $k$-tuples in $\bbC^{(n)}$. This is a consequence of Lemma~\ref{lem:normalized3}, which also lets us reduce from general $\phi$ to the unital case. This substitution of measures reveals the crucial advantage to working with annealed averages. The Fubini--Tonelli theorem lets us exchange probability and expectation like this: \[\bbE m_{\rm{U}(k,n)}\X(\pi_n,O) = \bbE\int 1_{\X(\pi_n,O)}\,dm_{\rm{U}(k,n)} = \int\bbP(\Phi^{ \pi_n}_V \in O)\,dm_{\rm{U}(k,n)}(V).\] Since the distribution of $\pi_n$ is invariant under the transitive action of $\rm{U}(n)$ on $\rm{U}(k,n)$, the last integrand above is actually constant, and so it simply equals $\bbP(\Phi^{\pi_n}_V \in O)$ for some fixed choice of $V \in \rm{U}(k,n)$. Intuitively, the random function \[\Phi^{\pi_n}_V(g) := V^\ast \pi_n(g)V \qquad (g \in \G)\] describes how the random unitary matrices that generate $\pi_n$ `sit together' through their actions on a fixed orthonormal $k$-tuple of vectors $V$. Inserting this single probability back into our alternative formula for $\hann(\phi)$, we see that $\hann$ is really an exponent that governs certain probabilities relating to $\Phi^{\pi_n}_V$ as $n\to\infty$. With this in mind, we might hope that $\Phi^{\pi_n}_V$ actually satisfies a large deviations principle with $-\hann$ as rate function. This turns out to be the case, and its proof comes with calculations that lead back back to~\eqref{eq:LDP-formula-1} and~\eqref{eq:LDP-formula-2} above. Our main result here is actually a large deviations principle for certain finite-dimensional restrictions of the random functions $\Phi^{\pi_n}_V$. These restrictions are over subsets $F$ of $\G$ that are `grounded'. 
This means that $F$ contains $e$, is finite, and is connected in the left Cayley graph of $\G$. These sets are introduced more carefully in Section~\ref{sec:group-geom}. Fix $V \in \rm{U}(k,n)$. Given a grounded set $F$ and also the random function $\Phi^{\pi_n}_V$, consider the random finite matrices \begin{equation}\label{eq:Qpin} Q^{\pi_n}_{(F)} := [\Phi^{\pi_n}_V(g^{-1}h)]_{g,h \in F}. \end{equation} These lie in the compact set $\S_k(F)$ of positive semidefinite $F$-by-$F$ block matrices that (i) have each block equal to $I_k$ and (ii) are invariant under left-translation by $\G$ as far as possible (so they are `Toeplitz on $\G$'). See Section~\ref{sec:restrict-extend} for a careful introduction. Define the following function of an element $q$ of $\S_k(F)$: \[h_F(q) := \left\{\begin{array}{ll} \log\Det\, q - \sum_{s \in S}\log\Det\, q_{(F\cap sF)} &\quad \hbox{if $q$ is nonsingular} \\ -\infty & \quad \hbox{if $q$ is singular.}\end{array}\right.\] Notice that $h_F(q) > -\infty$ if and only if $q$ is nonsingular, and that $h_F(q)$ agrees with the expression inside the limit~\eqref{eq:LDP-formula-1} when $F = B_n$ and $\phi$ is unital. \begin{mainthm}\label{mainthm:LDP} Fix a grounded subset $F$ of $\G$ and an element $q \in \S_k(F)$. \begin{enumerate} \item[a.] The random Gram matrix $Q^{\pi_n}_{(F)}$ is almost surely nonsingular for all sufficiently large $n$. \item[b.] For every neighbourhood $O$ of $q$, we have \[\log \bbP\big(Q^{\pi_n}_{(F)} \in O\big) \ge h_F(q)\cdot n - o(n) \quad \hbox{as}\ n\to\infty.\] \item[c.] For every real number $a > h_F(q)$, there is a neighbourhood $O$ of $q$ such that \[\log \bbP\big(Q^{\pi_n}_{(F)} \in O\big) \le a n + o(n) \quad \hbox{as}\ n\to\infty.\] \end{enumerate} \end{mainthm} Theorem~\ref{mainthm:LDP} implies an infinite-dimensional large deviations principle for the whole of $\Phi^{\pi_n}_V$ with rate function $-\hann$ by a standard limiting argument: see Corollary~\ref{cor:LDP}. We actually complete the prove of Theorem~\ref{mainthm:LDP} before that of Theorem~\ref{mainthm:annealed-formula}. The proof of Theorem~\ref{mainthm:LDP} is by induction on the grounded set $F$ (in the sense explained in Section~\ref{sec:group-geom}), and it occupies Chapter~\ref{chap:random-orbits} and Section~\ref{sec:completed-LDP}. We prove formula~\eqref{eq:LDP-formula-1} of Theorem~\ref{mainthm:annealed-formula} as a corollary of Theorem~\ref{mainthm:LDP}. We then prove formula~\eqref{eq:LDP-formula-2} as a corollary of formula~\eqref{eq:LDP-formula-1} and some further manipulations in Section~\ref{sec:alt-forms}. During this work we also meet some other formulas for $\hann$, including two series expansions in Corollaries~\ref{cor:first-expansion} and~\ref{cor:Sew}. The second of these is the analog of Seward's formula for annealed sofic entropy from~\cite[Theorem 1.7]{Seward--freestab}, so we call it the `Seward expansion'. It subsequently plays an important role in Chapter~\ref{chap:tempered}. Our proofs of Theorem~\ref{mainthm:LDP} and these other results all depend on a way of building up a positive definite function over a finitely generated free group through a sequence of enlargements of underyling grounded sets. At each stage, a space of possible extensions of a partially-defined positive definite function is parameterized by certain contraction matrices. These provide an analog for free groups of the classical Verblunsky coefficients for a sequence of orthogonal polynomials on the unit circle: see, for instance,~\cite{SimOPUCI} or~\cite{SimSzeg}. 
We therefore refer to these contraction matrices as Verblunsky coefficients in our setting as well (Definition~\ref{dfn:Verb}). (Beware that they also depend on choosing a certain enumeration of the group $\G$ a priori.) When we obtain $\Phi^{\pi_n}_V$ at random as above, it is described by a random sequence of Verblunsky coefficients. Our main technical step towards Theorem~\ref{mainthm:LDP} is the result that any finite initial segment of this random sequence of Verblunsky coefficients becomes independent once $n$ is large enough: see Proposition~\ref{prop:dil-dist} and Corollary~\ref{cor:KilNenfree}. This is the analog for free groups of a theorem of Killip and Nenciu from~\cite{KilNen04} for single random unitary matrices, and our use of it to prove Theorem~\ref{mainthm:LDP} resembles recent work on a large-deviations approach to Szeg\H{o}'s limit theorems~\cite{GamNagRou16,BreSimZei18}. Having developed the basic theory of $\hann$ this far, we next begin to develop more significant consequences of it. First, in Chapter~\ref{chap:zeroth-order}, we use it to study the probability that a given positive definite function $\phi:\G\to \rmM_k$ is asymptotically associated to our random AP sequence $(\pi_n)_{n\ge 1}$ at all. For the sake of brevity, we describe our main results here less formally than above, referring to the main text for details. Assume that $\phi$ is unital for simplicity. Then, roughly put, we show that small neighbourhoods $O$ of $\phi$ satisfy \[\bbP(\X(\pi_n,O)\ne \emptyset) \approx e^{\rmh^0(\phi)\cdot n + o(n)} \qquad \hbox{as}\ n\to\infty,\] where \[\rmh^0(\phi) := \lim_{t\downarrow 0}\hann(t\phi + (1-t)(\tau\otimes I_k)),\] and where $\tau$ is the regular character on $\G$. The precise statement is Theorem~\ref{thm:exp-typ}, which has roughly the form of another large deviations principle. The fact that the limit defining $\rmh^0(\phi)$ exists is among the corollaries of that theorem. The functional $\rmh^0$ turns out to have several further consequences. We refer to it as \textbf{zeroth-order entropy}. First, it provides the exponential decay of upper-tail probabilities for the random operator norms $\|\pi_n(a)\|$ for any $a \in C^\ast \G$: see Proposition~\ref{prop:norm-upper-tails}. This large deviations principle is a natural companion to the Collins--Male theorem~\cite{ColMal14} giving strong asymptotic freeness of these random representations. On the other hand, it turns out that $\rmh^0(\phi)$ is a much less sensitive function of $\phi$ than sofic entropy or $\hann$, and can be used to define a new invariant of representation themselves up to approximate unitary equivalence. This invariant and its properties are explored in Sections~\ref{sec:zeroth-order} and~\ref{sec:three-entropies}. Consequences include a much more complete analog of Veblunsky's form of Szeg\H{o}'s theorem over free groups: this is at the end of Section~\ref{sec:three-entropies}. Finally, Chapter~\ref{chap:tempered} proves the last major theorem of our work. This identifies which positive definite functions $\phi$ have $\rmh^0(\phi) = 0$, or equivalently which ones are asymptotically associated to the random sequence $(\pi_n)_{n\ge 1}$ with high probability. \begin{mainthm}\label{mainthm:tempered} Let $\phi \in \S_k(\G)$ for some positive integer $k$. 
If $\phi$ is tempered, then for every neighbourhood $O$ of $\phi$ we have \[\bbP(\X(\pi_n,O) \ne \emptyset) \to 1.\] On the other hand, if $\phi$ is not tempered, then there are a neighbourhood $O$ of $\phi$ and a positive value $c$ such that \[\bbP(\X(\pi_n,O) \ne \emptyset) \le e^{-cn + o(n)}.\] \end{mainthm} Theorem~\ref{mainthm:tempered} is not a new result. The case $k=1$ is equivalent to the Collins--Male theorem from~\cite{ColMal14}. That in turn is deduced from the analogous theorem for GUE random matrices due to Haagerup and Thorbj\o rnsen~\cite{HaaTho05}, which was the first real breakthrough in this area when it appeared. And the case of larger $k$ is reduced to the case $k=1$ fairly easily. However, our new proof is independent of those previous works, and seems to have little in common with them. Very roughly, the methods of~\cite{HaaTho05} and many other works in this area depend on controlling the expected traces of powers or other transforms of a random matrix $\pi_n(a)$ so well that one can infer bounds on expected operator norms as a consequence. By contrast, annealed AP entropy lets us work directly with positive definite functions on $\Gamma$, without ever picking an element $a$ of $C^\ast \G$ and considering the resulting random operator $\pi_n(a)$ or its norm. The first part of Theorem~\ref{mainthm:tempered}, which assumes that $\phi$ is tempered, follows from some simple theory and a measure concentration result. The second part requires most of the work. It amounts to proving that $\rmh^0(\phi) < 0$ whenever $\phi$ is unital and not tempered. We do this by combining a Seward expansion of $\hann$ as in~\ref{cor:Sew} with a classic characterization of tempered positive definite functions on finitely generated free groups due to Haagerup~\cite{Haa79}. With both of those in hand, the remaining work depends on some careful but elementary estimates. It occupies Sections~\ref{sec:cyclic}--~\ref{sec:proof-of-D}, and is following by a discussion in Section~\ref{sec:tempered-reflections}. In the last few sections of Chapter~\ref{chap:tempered}, we revisit some of our earlier topics in the light of Theorem~\ref{mainthm:tempered}, and derive some further consequences as a result. These include an abstract large deviations principle for the sequence $(\pi_n)_{n\ge 1}$ in the `Fell-global topology' on a compact metrizable space $\cal{Z}\G$ that we call the `catalog' of $\G$. This space parametrizes approximate equivalence classes of all separable representations of $\G$: see Section~\ref{sec:approx-equiv} and the analogous constructions in~\cite{AbeEle11,TucDro15}. Proposition~\ref{prop:weak-global-in-prob} gives the Fell-global convergence of $\pi_n$ in probability and then Theorem~\ref{thm:exp-typ-2} gives the large deviations principle; necessary background definitions can be found in Chapter~\ref{chap:approx}. Using this, in Section~\ref{sec:MF} we obtain some more abstract consequences that describe those representations of $\G$ whose zeroth-order entropy is finite. These provide a new source of examples of C$\sp*$-algebras that have the `matricial field' property (recalled in Section~\ref{sec:FD-conv}). \subsection*{Acknowledgements} This work depended on insightful conversations and correspondence with many people. In this regard I am particularly grateful to Uri Bader, Lewis Bowen, Peter Burton, Amir Dembo, Magdalena Musat, Narutaka Ozawa, Sorin Popa, Mikael R\o rdam, Brandon Seward, Dima Shlyakhtenko, Dan Timotin, Hugo Woerderman, and Ofer Zeitouni. 
For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission. \part{REPRESENTATIONS OF GENERAL C$\sp*$-ALGEBRAS}\label{part:general} \chapter{Notation, conventions, and basic facts}\label{chap:basic-notn} This chapter sets some notation that we use throughout, some of which is slightly non-standard. Apart from that, most readers should be able to skip it and then refer back to it as necessary. \section{Linear algebra}\label{sec:lin-alg} We assume several standard ideas from linear algebra and matrix analysis in the sequel. For definiteness I use~\cite{HorJohMA} as a reference wherever possible Throughout this work, our focus is restricted to linear algebra and functional analysis over $\bbC$ rather than $\bbR$. This is the appropriate choice for studying unitary representations and C$\sp*$-algebras later. Much of our work could be re-fashioned over $\bbR$ without requiring major new ideas. For any positive integer $k$, we write $\bbC^{(k)}$ for the space of height-$k$ column vectors with complex entries. More generally, if $H$ is a vector space, then we write $H^{(k)}$ for the $k$-fold \textbf{inflation} of $H$, which is the vector space of height-$k$ column vectors with entries in $H$. If $S$ is a set, possible infinite, and $H$ is a Hilbert space, then we extend this notation further by writing $H^{(S)}$ for the Hilbert-space direct sum of an $S$-indexed family of copies of $H$, still regarded as a space of column vectors. This insistence on column vectors is slightly unusual in functional analysis, but for finite $k$ it enables us to use matrix-vector notation from linear algebra in places where it greatly simplifies some reasoning We write $\rmM_{n,k}$ for the vector space of $n$-by-$k$ matrices over the complex numbers, and identify these with linear maps from $\bbC^{(k)}$ to $\bbC^{(n)}$ using matrix-vector multiplication. By writing such a matrix as $[v_1,\dots,v_k]$, where $v_1$, \dots, $v_k$ are its columns, we identify it with a $k$-tuple of vectors in $\bbC^{(n)}$. We generalize this notation further by allowing columns from a general vector space $H$, meaning that a linear map from $\bbC^{(k)}$ to $H$ may still be written in the form $[v_1,\dots,v_k]$. This identification of tuples with linear maps is so convenient that we often abuse notation by calling the linear map $V$ itself a `$k$-tuple of vectors in $H$'. If $H$ is an inner product space, the adjoint $V^\ast$ is the map from $H$ to $\bbC^{(k)}$ whose output coordinates are given by the inner products with the vectors $v_i$. We abbreviate $\rmM_{k,k}$ to $\rmM_k$ and regard it as a $\ast$-algebra over $\bbC$ in the usual way. We write $I_k$ for the $k$-by-$k$ identity matrix. We write $\Tr_k$ and $\Det$ for the usual trace and determinant on any such algebra, and we set \[\tr_k M := k^{-1}\Tr_k\,M \qquad (M \in \rmM_k).\] We write $\rmM_{k,\sa}$ for the subspace of self-adjoint members of $\rmM_k$, $\rmM_{k+}$ for the subset of positive semidefinite members of $\rmM_k$, and $\rmM^\circ_{k+}$ for the further subset of positive definite members. The set $\rmM_{k+}$ is a closed cone in $\rmM_k$, and $\rmM^\circ_{k+}$ is the relative interior of $\rmM_{k+}$ in $\rmM_{k,\sa}$. 
This means that, if $A$ is positive definite, then any sufficiently small self-adjoint perturbation of $A$ is still positive definite; and this stability property characterizes the positive definite matrices among the positive semidefinite ones. For a linear operator on an inner product space, or a matrix that can be regarded as such, the notation $\|\cdot\|$ means the operator norm by default. We occasionally write it as $\|\cdot\|_{\rm{op}}$ where disambiguation may be helpful. We also use the following less standard notations. When $k\le n$, we write $\rm{GL}(k,n)$ for the set of all linear injections from $\bbC^{(k)}$ to $\bbC^{(n)}$, or equivalently $n$-by-$k$ matrices with linearly independent columns. When $k=n$ this is the usual general linear group $\rm{GL}(n,\bbC)$. We write $\rm{U}(k,n)$ for the subset of unitary embeddings, or equivalently matrices whose columns are orthonormal. When $k=n$ this gives the unitary group $\rmU(n)$. Finally, we write $\Xi(k,n)$ for the subset of elements of $\rmM_{n,k}$ that are contractions for the Euclidean distances on $\bbC^{(k)}$ and $\bbC^{(n)}$, and $\Xi^\circ(k,n)$ for the further subset of strict contractions. These are the closed and open unit balls of $\rmM_{n,k}$ in the operator norm, respectively. \section{Gram matrices, embeddings, and projections} Let $H$ be a complex inner product space and let $V = [v_1, \dots, v_k]$ be $k$-tuple in $H$, interpreted as a linear map from $\bbC^{(k)}$ to $H$. The \textbf{Gram matrix} $Q$ of this tuple is the $k$-by-$k$ matrix of their inner products: \begin{equation}\label{eq:Gram} Q = [\langle v_j,v_i\rangle]_{i,j=1}^k = V^\ast V. \end{equation} Gram matrices are positive semidefinite, and every positive semidefinite matrix can be written as a Gram matrix. The choice of this representation is not unique, but given $Q$ one can make the canonical choice $H := \bbC^{(k)}$ and $V:= Q^{1/2}$: this is the only choice for which $V$ is again positive semidefinite. The Gram matrix in~\eqref{eq:Gram} nonsingular if and only if the vectors are linearly independent, hence if and only if $V$ is injective. More generally, a positive semidefinite matrix $Q$ is the Gram matrix of a tuple of vectors in some inner product space $H$ if and only if $\dim H \ge \rm{rank}\,Q$, and in this case the choice of those vectors is unique up to a unitary transformation of $H$. These facts follow at once from~\cite[Theorem 7.2.10]{HorJohMA}, for example. The Gram matrix $Q$ has all diagonal entries equal to $1$ if and only if every $v_i$ is a unit vector. Let $V = [v_1, \dots, v_k]$ and $Q = V^\ast V$ be as above. Any orthonormal set of vectors with the same span as $v_1$, \dots, $v_k$ consists of linear combinations of $v_1$, \dots, $v_k$, and there are general methods for finding such an orthonormal set which use the entries of $Q$ to find the coefficients. The simplest general method is the Gram--Schmidt procedure, but this depends on the order of the vectors $v_1$, \dots, $v_k$. If $Q$ is nonsingular, then a more symmetric method is available. Indeed, in this case the polar decomposition of $V$ is given by \begin{equation}\label{eq:polar} V = (VQ^{-1/2})\cdot Q^{1/2}, \end{equation} and therefore \begin{equation}\label{eq:new-basis} W := [w_1,\dots,w_k] := VQ^{-1/2} \end{equation} is a unitary embedding of $\bbC^{(k)}$ into $H$, and so its columns are an orthonormal basis for $\rm{span}\{v_1,\dots,v_k\}$. 
These calculations also yield a formula for the orthogonal projection $P$ from $H$ onto \[M := \rm{span}\,\{v_1,\dots,v_k\} = V[\bbC^{(k)}].\] This orthogonal projection is the same if we replace $V$ by $W$, and since $W$ is a unitary embedding this projection is simply given by \begin{equation}\label{eq:proj-onto-span} P = WW^\ast = VQ^{-1}V^\ast. \end{equation} If $P$ is an orthogonal projection on a Hilbert space, we sometimes $P^\perp:=1-P$ for brevity. \section{Group algebras and other algebras} If $\G$ is a discrete group, then we write $\bbC[\G]$ for the complex group algebra of $\G$, and regard it concretely as the space of functions from $\G$ to $\bbC$ that have finite support. Given $g \in \G$, we write $\delta_g$ for its canonical image in $\bbC[\G]$, which is the indicator function of the singleton $\{g\}$. We may express a general element of $\bbC[\G]$ as either an indexed family $(a_g:\ g \in \G)$ or a sum like $\sum_g a_g\delta_g$, where in either case $a_g$ is zero for all but finitely many $g$. The vector space operations of $\bbC[\G]$ are pointwise, and its multiplication is convolution: \[a\ast b := \sum_{g,h} a_g b_h \delta_{gh} \qquad (a,b \in \bbC[\G]).\] We also always endow $\bbC[\G]$ with the involution \begin{equation}\label{eq:CG-inv} a^\ast := \sum_g \ol{a_g}\delta_{g^{-1}} \qquad (a \in \bbC[\G]), \end{equation} making it a $\ast$-algebra. We extend all of these notations to any larger class of complex-valued functions on $\G$ on which they are still well-defined. Given any algebra $A$ over $\bbC$ and a positive integer $k$, we write $\rmM_k(A)$ for the algebra of $k$-by-$k$ matrices with entries in $A$. If $A$ is a $\ast$-algebra, then we define an involution on $\rmM_k(A)$ by transposing and applying the involution of $A$ entry-wise. If $A$ is an algebra and $\pi$ is a representation of $A$ on a vector space $H$, then using these rules it defines a representation $\pi_{(k)}$ of $\rmM_k(A)$ on $H^{(k)}$. If $A$ is a $\ast$-algebra and $\pi$ is a $\ast$-representation on a Hilbert space, then these features extend to $\pi_{(k)}$. Slightly more generally, given such an algebra $A$ and positive integers $k$ and $\ell$, we write $\rmM_{k,\ell}(A)$ for the vector space of $k$-by-$\ell$ matrices with entries in $A$. We multiply entries of $M_{k,\ell}(A)$ and $\rmM_{\ell,m}(A)$ according to the usual rules of matrix multiplication. Similarly, if $A$ is an algebra of operators on a vector space $H$, then $\rmM_{k,\ell}(A)$ is naturally identified with a vector space of linear maps from $H^{(\ell)}$ to $H^{(k)}$ by the usual rule for multiplying a vector by a matrix. \section{Real analysis} \subsection*{\emph{Asymptotic notation}} We use Landau's `big-$O$' and `little-$o$' notation widely, but only for sequences indexed by natural numbers. In case these sequences involve other parameters as well, each function hidden behind this asymptotic notation may depend freely on those other parameters. Thus, for example, if $X$ is any set and $f_1$, $f_2$, \dots and $f$ are real-valued functions on $X$, then the assertion \[``\ \ f_n(x) = f(x) + o(1) \ \ "\] means that $f_n \to f$ pointwise, whereas uniform convergence could be written \[``\ \ \sup_x |f_n(x) - f(x)| = o(1)\ \ ".\] At a few points, we write $f(n) \lesssim g(n)$ or $g(n) \gtrsim f(n)$ as an alternative notation for $f(n) = O(g(n))$. We write $f(n) \simeq g(n)$ if both $f(n) \lesssim g(n)$ and $f(n) \gtrsim g(n)$. 
\subsection*{\emph{Measures}}
For any dimension $d$, we write $\vol_d$ for Lebesgue measure on $\bbR^d$. For any positive integers $d$ and $k$, we also write $\vol_{2kd}$ for the measure on $\rmM_{k,d}(\bbC)$ obtained by identifying this space with $\bbR^{2kd}$. If $G$ is any compact metric group, then we write $m_G$ for its Haar probability measure.
\chapter{Preliminaries: log-determinants as entropies}\label{chap:log-det-ent}
This chapter explores an analogy between linear maps from one finite-dimensional inner product space to another (and a suitable notion of log-determinant) and discrete random variables (and discrete Shannon entropy).
The connection between log-determinants and entropy is almost as old as information theory itself. The tightest link is actually with differential entropy rather than discrete entropy. This is because, up to a normalization, the differential entropy of a multi-variate Gaussian random vector is the logarithm of the determinant of its covariance matrix~\cite[Theorem 8.4.1]{CovTho06}. All the standard identities and inequalities enjoyed by differential entropy have formulations for log-determinants. They are, in turn, largely analogous to the properties of discrete Shannon entropy, with a few key differences. This can be understood as a consequence of that calculation for Gaussian distributions, but these facts are elementary enough that one can also regard the two sets of properties as equal cousins with analogous proofs. We prefer the second point of view in this work, where log-determinants play an essential role throughout, but in a purely `linear setting' where Gaussian random variables would be an extraneous construct.
In this chapter we set up our notation for log-determinants as a kind of entropy, along with the analogs of conditional entropy and mutual information, and we establish the basic properties that we need. In Section~\ref{sec:types} we also provide an interpretation of log-determinants in terms of volumes which is the analog of the classical `method of types' for discrete Shannon entropy.
None of the ideas in this chapter are new. On the contrary, the analogy between log-determinants and other entropy-like quantities has appeared in many different forms across pure and applied mathematics for decades. However, for this very reason, our later work is made easier by having all the finite-dimensional results that we need collected in one place and with a consistent notation. Chapter~\ref{chap:lin-alg} provides some additional background of a similar flavour that we do not need until Part~\ref{part:free}. Readers who have experience with these topics can probably skip this chapter and follow references back to it when needed.
\section{Abstract log-determinants}
Let $K$ and $H$ be complex inner product spaces of equal dimension, and let $T$ be a linear map from $K$ to $H$. We may express $T$ as a matrix using any choice of orthonormal bases $\calB$ for $K$ and $\calC$ for $H$. Let $\calB$ and $\calB'$ be two such choices for $K$, let $\calC$ and $\calC'$ be two such choices for $H$, and let \[M := \!\phantom{i}_{\calC}[T]_{\calB} \quad \hbox{and} \quad M' := \!\phantom{i}_{\calC'}[T]_{\calB'}.\] Then these matrices are related by \begin{equation}\label{eq:det-prod} M' = U'MU, \end{equation} where $U$ is a base-change matrix from $\calB$ to $\calB'$ and $U'$ is a base-change matrix from $\calC$ to $\calC'$.
Since we assume all these bases are orthonormal, the matrices $U$ and $U'$ are unitary, and as a result we have \[\det M' = \det U' \cdot \det M \cdot \det U.\] Taking absolute values, this shows that the number \[|\det M|\] does not depend on the choice of the bases $\calB$ and $\calC$ provided they are orthonormal. Henceforth we write this number simply as $|\det T|$, and regard it as an abstract property of linear transformations between finite-dimensional inner product spaces of equal dimension. (So, importantly, `$\det$' may appear only inside an absolute value unless specific bases for our inner product spaces have been chosen.)
More generally, if $K$ and $H$ are any inner product spaces with $\dim K <\infty$, then we extend our definition above as follows.
\begin{dfn} If $T$ is a linear injection from $K$ to $H$, so $\rm{rank}\,T = \dim K$, then we regard it as a map from $K$ to $\rm{ran}\,T$ and then define $|\det T|$ as above (that is, we `ignore the rest of $H$'). If $T$ is not injective, so $\rm{rank}\,T < \dim K$, then we let $|\det T| := 0$. \end{dfn}
This agrees with our previous definition if $\dim K = \dim H$ because in that case $|\det T| = 0$ if and only if $\rm{rank}\,T < \dim H$. We must keep the assumption that $\dim K < \infty$ so that the determinants in~\eqref{eq:det-prod} are defined.
This convention for extending determinants to maps from one vector space to another of possibly higher dimension has widespread uses. For example, it gives a convenient expression of the area formula for integration against Hausdorff measures~\cite[Sections 3.2--3]{EvaGar}.
This generalized notion of determinant may also be evaluated as follows.
\begin{lem}\label{lem:Gram-det} If $T:K\to H$ is as above, then \begin{equation}\label{eq:det-T*T} |\det T|^2 = \det T^\ast T. \end{equation} \end{lem}
\begin{proof} If necessary, we may replace $H$ by $\rm{ran}\,T$ without affecting either side of~\eqref{eq:det-T*T}, and so we may assume that $H = \rm{ran}\,T$. If $T$ is non-injective, then $T^\ast T$ is singular, and both sides of~\eqref{eq:det-T*T} are zero. If $T$ is injective, then $\dim K = \dim H$. By choosing orthonormal bases, we may therefore assume that $K = H = \bbC^{(\dim H)}$, at which point~\eqref{eq:det-T*T} is an identity for classical matrix determinants. \end{proof}
In equation~\eqref{eq:det-T*T}, note that $T^\ast T$ is a self-adjoint linear transformation from $K$ to itself, and so it may be evaluated as a traditional determinant using any choice of orthonormal basis for $K$. However, beware that $T^\ast T$ and $TT^\ast$ may not have the same determinant if $H$ and $K$ are different spaces.
If $T:K\to H$ is as above, and if $H'$ is any other inner product space that contains $\rm{ran}\,T$, then the value $|\det T|$ is unchanged if we regard $T$ as a map taking values in $H'$ rather than $H$. However, the choice of this target space does become important when we consider a composition of two linear transformations.
\begin{lem}[Multiplicativity]\label{lem:mult} Let \[J \stackrel{T}{\longto} K \stackrel{S}{\longto} H\] be linear maps between finite-dimensional inner product spaces, and assume that $T$ is surjective. Then \[|\det ST| = |\det S||\det T|.\] \end{lem}
\begin{proof} If $T$ is not injective, then neither is $ST$, so in this case both sides of the desired equality equal $0$. If $S$ is not injective, then neither is $ST$, because $T$ is surjective. So in this case also both sides of the desired equality equal $0$.
Finally, if $T$ and $S$ are both injective, then we may (i) restrict our attention from $H$ to $\rm{ran}\,S$ if necessary, and so assume that $S$ is also surjective, and then (ii) pick orthonormal bases of $H$, $K$ and $J$ so that $T$ and $S$ are both represented by square matrices. Now the desired equality follows by the multiplicativity of $\det$ as a function of matrices. \end{proof}
\begin{ex} The assumption that $T$ is surjective is necessary in the lemma above. This is a consequence of our conventions for handling spaces of differing dimensions. To see this, consider the example \[\bbC \stackrel{T}{\longto} \bbC^2 \stackrel{S}{\longto} \bbC\] where $Tx = (x,0)$ and $S(x,y) = x$. Then $S$ is not injective, so $|\det S| = 0$, but $ST$ is the identity of $\bbC$, so $|\det ST| = 1$. \end{ex}
\begin{rmk} An alternative way to write abstract log-determinants is offered by exterior algebra: see, for instance,~\cite[Exercise 2.13]{WarFDMLG}. If $V = [v_1,\dots,v_k]$ is a map from $\bbC^{(k)}$ to $H$, then that yields the formula \[|\det V| = \|v_1\wedge \cdots \wedge v_k\|.\] \end{rmk}
\section{Log-determinant entropy and related quantities}\label{sec:log-det-ent-etc}
Let $T$ be a linear map from a finite-dimensional inner product space $K$ to another inner product space $H$, as in the previous section.
\begin{dfn}\label{dfn:log-det-ent} The \textbf{log-determinant entropy of $T$} is the value \[\rmH(T) := \log|\det T|,\] with the usual convention that $\log 0 := -\infty$. \end{dfn}
We sometimes call this just `entropy' when no confusion can arise. If $K = \bbC^{(k)}$, then we may identify $T$ with $[v_1,\dots,v_k]$ for some $v_1$, \dots, $v_k \in H$. In this case we sometimes write $\rmH(v_1,\dots,v_k)$ in place of $\rmH(T)$.
Now consider also a subspace $L$ of $H$. Let $\iota_L$ be the inclusion map from $L$ into $H$, let $P_L = \iota_L^\ast$ be the orthogonal projection from $H$ onto $L$, and let $P_L^\perp := 1-P_L$.
\begin{dfn}\label{dfn:cond-log-det-ent} The \textbf{conditional log-determinant entropy of $T$ given $L$} is the value \[\rmH(T\mid L) := \rmH(P_L^\perp T).\] If $S$ is another linear map into $H$, then we define \[\rmH(T\mid S) := \rmH(T\mid \rm{ran}\,S).\] \end{dfn}
\begin{ex}\label{ex:1D-cond-ent} A single vector $v \in H$ may be regarded as a linear map from $\bbC$ to $H$. Then the definitions above give $\rmH(v) = \log \|v\|$, and more generally \[\rmH(v\mid L) = \log\|v - P_L v\|\] for any closed subspace $L$ of $H$. \qed \end{ex}
As a heuristic principle, in Definition~\ref{dfn:cond-log-det-ent} and others in the sequel, if $L$ is a subspace of some ambient inner product space, then `conditioning' on $L$ means \emph{projecting everything to $L^\perp$}. This intuition fits well with our first version of the chain rule.
Let $T:K\to H$ and $S:L\to H$ be two linear maps into the same space. Let us use the matrix notation $[T,S]$ for the combined map \[K\oplus L \to H :[x,y]^\rm{T}\mapsto Tx+Sy.\]
\begin{prop}[Chain rule, I]\label{prop:chain1} We have \[\rmH([T,S]) = \rmH(S) + \rmH(T\mid S)\] (including the possibility that both sides equal $-\infty$).
\end{prop}
\begin{proof} If $\rm{ran}\,S$ and $\rm{ran}\,T$ are orthogonal, then $\rm{ran}([T,S])$ is unitarily equivalent to $\rm{ran}\,S \oplus \rm{ran}\,T$, and this unitary equivalence converts the map $[T,S]$ into $\rm{diag}(T,S)$, which satisfies \[\rmH(\rm{diag}(T,S)) = \rmH(T) + \rmH(S).\]
In the general case, let $J:= \rm{ran}\,S$, let $M := P_J^\perp[\rm{ran}\,T]$, and let $\iota_J$ and $\iota_M$ be the inclusions of these spaces in $H$. Then $J$ and $M$ are orthogonal, their sum is equal to $\rm{ran}([T,S])$, and we have the factorization \[[T,S] = [\iota_MP_J^\perp T + \iota_J P_J T, \iota_J P_J S] = [\iota_M,\iota_J]\left[\begin{array}{cc}P_J^\perp T & 0\\ P_JT & P_J S\end{array}\right],\] where the matrix is regarded as a map from $K\oplus L$ to $M\oplus J$. As such, this map is surjective, and so Lemma~\ref{lem:mult} and the special case of orthogonal spaces give \begin{align*} \rmH([T,S]) &= \rmH([\iota_M,\iota_J]) + \rmH\left(\left[\begin{array}{cc}P_J^\perp T & 0\\ P_JT & P_JS\end{array}\right]\right)\\ &= 0 + \rmH(S) + \rmH(P_J^\perp T) \\ &= \rmH(S) + \rmH(T \mid S). \end{align*} \end{proof}
Given a pair of linear maps $T:\bbC^{(k)}\to H$ and $S:\bbC^{(\ell)}\to H$, the combined map $[T,S]$ has a Gram matrix of size $k+\ell$ which naturally has a two-by-two-block structure. We can express $\rmH(T \mid S)$ using this Gram matrix through a generalization of~\eqref{eq:Gram-ent}, but we defer this until Section~\ref{sec:block-Gram} where such two-by-two-block Gram matrices are discussed more carefully.
Now consider two finite-dimensional subspaces $L$ and $J$ of $H$.
\begin{dfn}\label{dfn:log-det-mut-inf} The \textbf{mutual information between $L$ and $J$} is the value \[\rmI(L\,;\,J) := -\rmH(P_J^\perp \iota_L)\] (so this may take values in $(-\infty,\infty]$). If $T$ is a linear map from another inner product space $K$ to $H$, then we define \[\rmI(T\,;\,J) := \rmI(\rm{ran}\,T\,;\,J).\] We may replace the second argument of $\rmI$ by a linear map using the same convention. \end{dfn}
\begin{rmk} For linear maps $S$ and $T$ taking values in $H$, observe that $\rmI(S\,;\,T)$ depends only on the subspaces $\rm{ran}\,S$ and $\rm{ran}\,T$. In particular, pre-composing either $S$ or $T$ with an invertible map of its domain does not change the value of $\rmI(S\,;\,T)$. \end{rmk}
Unlike log-determinant entropy, log-determinant mutual information cannot take arbitrary real values.
\begin{lem}\label{lem:mut-inf-nonneg} Any subspaces $L$ and $J$ satisfy $\rmI(L\,;\,J) \ge 0$, with equality if and only if $L$ and $J$ are orthogonal. \end{lem}
\begin{proof} The definition gives $\rmI(L\,;\,J) = - \rmH(P_J^\perp \iota_L)$. The map $P_J^\perp \iota_L$ is a composition of contractions, so it is again a contraction, and therefore $|\det(P_J^\perp \iota_L)|$ is at most one. This gives the lower bound for $\rmI(L\,;\,J)$. Equality holds if and only if the map $P_J^\perp \iota_L$ is actually an isometry, which is equivalent to $L \subset J^\perp$. \end{proof}
The next result is a second chain rule, now involving log-determinant mutual information. Let $T: K\to H$ be a linear map between finite-dimensional inner product spaces and let $L$ be a subspace of $H$.
\begin{prop}[Chain rule, II]\label{prop:chain2} We have \[\rmH(T) - \rmI(T\,;\,L) = \rmH(T\mid L)\] (including the possibility that both sides equal $-\infty$). \end{prop}
\begin{proof} Let $J := \rm{ran}\,T$.
Then \[P_L^\perp T = (P_L^\perp \iota_J)T.\] Since $T$ is surjective when regarded as a map from $K$ to $J$, we may apply Lemma~\ref{lem:mult} to the composition on the right-hand side here (but not necessarily to the one on the left). Since $\log$ converts multiplication to addition, the result follows. \end{proof}
\begin{cor}\label{cor:symmetry} If $S$ and $T$ are two linear maps taking values in $H$, then \[\rmH([S,T]) = \rmH(S) + \rmH(T) - \rmI(S\,;\,T)\] (including the possibility that both sides equal $-\infty$). In particular, $\rmI(\cdot\,;\,\cdot)$ is a symmetric function of its two arguments. \end{cor}
\begin{proof} The identity follows by combining Propositions~\ref{prop:chain1} and~\ref{prop:chain2}, and then the symmetry holds because the rest of that identity is symmetric in $S$ and $T$. \end{proof}
\begin{cor}\label{cor:conditioning-monotone} If $T$ is a linear map into $H$ and $L$ is a subspace of $H$, then $\rmH(T\mid L)\le \rmH(T)$, with equality if and only if either (i) $\rmH(T) = -\infty$ or (ii) $\rm{ran}\,T$ and $L$ are orthogonal.
Similarly, if $S$ and $T$ are two linear maps into $H$, then $\rmH([S,T]) \le \rmH(S) + \rmH(T)$, with equality if and only if either (i) the right-hand side equals $-\infty$ or (ii) $\rm{ran}\,S$ and $\rm{ran}\,T$ are orthogonal.
More generally, if $R$ is a third linear map into $H$, then \begin{equation}\label{eq:strong-subadd} \rmH([R,S,T]) + \rmH(R) \le \rmH([R,S]) + \rmH([R,T]). \end{equation} \end{cor}
Inequality~\eqref{eq:strong-subadd} is often called `strong subadditivity' in information theory.
\begin{proof} The first few conclusions all follow by combining Lemma~\ref{lem:mut-inf-nonneg}, Proposition~\ref{prop:chain2}, and Corollary~\ref{cor:symmetry}. Finally, the inequality~\eqref{eq:strong-subadd} follows by applying the unconditional version to $P_L^\perp S$ and $P_L^\perp T$, where $L := \rm{ran}\,R$, and then adding $2\rmH(R)$ to both sides and using Proposition~\ref{prop:chain1}. \end{proof}
Now let $L$, $J$ and $M$ be three subspaces of $H$.
\begin{dfn} The \textbf{conditional mutual information between $L$ and $J$ given $M$} is the quantity \[\rmI(L\,;\,J\mid M) := \rmI(P_M^\perp [L]\,;\,P_M^\perp [J]).\] We apply this definition to linear maps rather than subspaces by inserting the ranges of those linear maps, as previously. \end{dfn}
If $M$ is a subspace of $H$ and $T$ is a linear map taking values in $H$, then $\rm{ran}\,P_M^\perp T$ is equal to $P_M^\perp [\rm{ran}\,T]$, so conditional mutual information applied to a pair of maps $T$ and $S$ satisfies \[\rmI(T\,;\,S\mid M) = \rmI(P_M^\perp T\,;\,P_M^\perp S).\]
\begin{lem}\label{lem:ortho-sum} If $L$ and $J$ are orthogonal subspaces of $H$ then \[P_{L + J}^\perp = P_J^\perp P_L^\perp.\] \end{lem}
\begin{proof} We have $(L + J)^\perp = L^\perp \cap J^\perp$, and now the result is a standard lemma for the orthogonal projection onto an intersection of complements of orthogonal subspaces.\end{proof}
\begin{prop}[Conditional chain rule]\label{prop:cond-chain} If $T$ is a linear map taking values in $H$, and $L$ and $M$ are subspaces of $H$, then \[\rmH(T\mid M) - \rmI(T\,;\,L\mid M) = \rmH(T\mid L+M)\] (including the possibility that both sides equal $-\infty$).
\end{prop}
\begin{proof} Observe that \[L + M = \underbrace{P_M^\perp[L] + M}_{\rm{orthogonal}},\] and hence Lemma~\ref{lem:ortho-sum} gives \[P_{L + M}^\perp T = P_{P_M^\perp[L]}^\perp P_M^\perp T.\] Combining this with the definitions above, the desired equality may be written \[\rmH(P_M^\perp T) - \rmI(P_M^\perp T\,;\,P_M^\perp[L]) = \rmH(P_{P_M^\perp[L]}^\perp P_M^\perp T) = \rmH(P_M^\perp T \mid P_M^\perp [L]).\] This is now the unconditional chain rule from Proposition~\ref{prop:chain2} applied to the map $P_M^\perp T$ and subspace $P_M^\perp [L]$. \end{proof}
Later, we often express log-determinant entropies in terms of Gram matrices. If $T$ is a $k$-tuple in the inner product space $H$, and $Q = T^\ast T$ is its Gram matrix, then Lemma~\ref{lem:Gram-det} gives \begin{equation}\label{eq:Gram-ent} \rmH(T) = \frac{1}{2}\rmH(Q). \end{equation} We use this relation frequently in explicit calculations in the sequel.
More generally, suppose that $T$, $S$, and $R$ are a $k$-tuple, $\ell$-tuple and $m$-tuple in $H$, respectively. Combining these into one $(k+\ell+m)$-tuple as previously, they have a combined Gram matrix $Q = [T,S,R]^\ast [T,S,R]$, which naturally has a three-by-three-block structure. This Gram matrix determines all the conditional log-determinant entropies and mutual informations among them. Explicit formulas for these that generalize~\eqref{eq:Gram-ent} can be given in terms of $Q$, but they are rather involved, and we do not need them until Part~\ref{part:free}. We therefore defer them for now: see~\eqref{eq:Gram-cond-ent},~\eqref{eq:Gram-mut-inf} and~\eqref{eq:cond-mut-inf-contraction}.
On the other hand, we do sometimes need a notation that indicates which Gram matrix lies behind an entropy or related value, without manipulating that Gram matrix explicitly. Let $Q$ be an $n$-by-$n$ positive semidefinite matrix, and let $\a$, $\b$, and $\gamma$ be three disjoint subsets of $\{1,2,\dots,n\}$. Let $T$ be an $n$-tuple in a Hilbert space that has Gram matrix $Q$, and let $T_\a$ be the sub-tuple indexed by $\a$, and similarly for the other subsets. Then we set \begin{equation}\label{eq:pre-Gram-ent} \rmH_Q(\a\mid \b) := \rmH(T_\a\mid T_\b), \qquad \rmI_Q(\a\,;\,\b\mid \gamma) := \rmI(T_\a\,;\,T_\b\mid T_\gamma), \end{equation} and similarly for other expressions. This usage is well-defined because the tuple $T$ is determined by $Q$ up to unitary equivalence, which does not change any of the log-determinants involved.
\subsection*{\emph{Notes and further references}}
It would be impossible to review the full history of log-determinants as entropies here; we can attempt only an indicative sketch.
An early reference from information theory which depends on this connection for applied purposes is~\cite{Burg75}. The paper~\cite{ChoCov84} broadened Burg's work and demonstrated the convenience of information theoretic inequalities as basic techniques.
Stimulated by ideas such as those in~\cite{Burg75}, many results in matrix analysis identify solutions to natural problems about positive semidefinite matrices through a variational principle. These include certain matrix completion problems, which may be phrased in terms of maximizing determinants (as in~\cite{GroJohSaWol84}) or maximizing `entropies' (as in~\cite{BakWoe92}), even if no operational meaning for entropy is intended.
An analogous use of `entropy' for certain integrals of logarithms appears in the study of orthogonal polynomials, interpolation problems and the `commutant lifting theorem': see, for instance,~\cite{FoiFraCL} or~\cite[Section 2.3]{SimOPUCI}. A modern account of such `maximum entropy' arguments is given in~\cite[Theorem 2.1.4 and Section 2.6]{BakWoeMCetc}. Some simple completion problems of this kind appear in Part~\ref{part:free} below, so we return to them in Chapter~\ref{chap:lin-alg}.
From another point of view, the fact that log-determinants are precisely the differential entropies of appropriate multi-variate Gaussian distributions can be used to give new proofs of various classical inequalities for log-determinants by treating them as special cases of inequalities for differential entropy and mutual information. The classic paper on these proofs is~\cite{CovTho88}. The results in~\cite[Section 3]{CovTho88} include the classical inequalities of Fischer, Koteljanski\u{\i} (also called Hadamard--Fischer) and Sz\'asz: it is worth comparing that treatment with the traditional linear algebra proofs (as in~\cite[Section 7.8]{HorJohMA}, for example). A textbook reference for the information-theoretic arguments is~\cite[Section 17.9]{CovTho06}.
Our own work in Section~\ref{sec:log-det-ent-etc} offers a kind of `middle ground'. The techniques used in that section remain within linear algebra, but the `entropy' formalism shows how close these results are to analogous inequalities of information theory. For example, if we express the subadditivity given by the second part of Corollary~\ref{cor:conditioning-monotone} in terms of Gram matrices as in~\eqref{eq:pre-Gram-ent}, it is simply Fischer's inequality~\cite[Theorem 7.8.5]{HorJohMA}, and then the strong subadditivity in~\eqref{eq:strong-subadd} corresponds to Koteljanski\u{\i}'s inequality~\cite[Theorem 7.8.9]{HorJohMA}.
From another direction, our log-determinant entropy $\rmH$ is a special case of the relative entropy of density matrices as studied in quantum information theory, which is often denoted by $S$. In particular, in the notation of~\cite[equation (4.19)]{BhatPDM}, we have $\rmH(A) = -S(I_k\mid A)$ whenever $A$ is a $k$-by-$k$ positive semidefinite matrix of trace equal to $1$.
\section{Polar decomposition of integrals}\label{sec:polar-decomp-int}
Recall the notations $\rm{GL}(k,n)$, $\rm{U}(k,n)$, $\rmM_{k+}$ and $\rmM^\circ_{k+}$ from Chapter~\ref{chap:basic-notn}.
The set $\rm{GL}(k,n)$ is a dense open subset of $\rmM_{n,k} \cong (\bbC^{(n)})^k$. Its complement is defined by the vanishing of determinants of submatrices, so it has zero volume. Therefore we may regard Lebesgue measure $\vol_{2kn}$ as a measure on $\rm{GL}(k,n)$.
The set $\rmU(k,n)$ has a natural action of $\rmU(n)$ by left multiplication, and this action is transitive. So $\rmU(k,n)$ is a homogeneous space of the compact Lie group $\rmU(n)$; the stabilizer of any fixed choice of $k$ orthonormal vectors in $\bbC^{(n)}$ is a copy of ${\rmU(n-k)}$. As a result, $\rmU(k,n)$ inherits a Haar probability measure $m_{\rmU(k,n)}$ from $\rmU(n)$. For example, if $k = 1$, then $\rmU(1,n)$ is the unit sphere $\rmS^{2n-1}$, and $m_{\rm{U}(1,n)}$ is the normalized surface-area measure.
Lastly, the set $\rmM^\circ_{k+}$ is relatively open in the real-linear subspace $\rmM_{k,\rm{sa}}$ of $\rmM_k$, and $\rmM_{k,\rm{sa}}$ has real dimension $k^2$, so the restriction of $\vol_{k^2}$ to $\rmM^\circ_{k+}$ gives a nontrivial sigma-finite measure.
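As a quick consistency check on these three spaces, and as a preview of the parametrization introduced next, one can compare real dimensions:
\[\dim_{\bbR} \rm{GL}(k,n) = 2nk, \qquad \dim_{\bbR} \rm{U}(k,n) = n^2 - (n-k)^2 = 2nk - k^2, \qquad \dim_{\bbR} \rmM^\circ_{k+} = k^2,\]
so the first of these dimensions is the sum of the other two.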
The continuity of spectral calculus lets us define a continuous map \begin{equation}\label{eq:polar-decomp-map} \rm{U}(k,n)\times \rmM^\circ_{k+} \to \rm{GL}(k,n):(V,Q)\mapsto VQ^{1/2}. \end{equation} This is a homeomorphism with inverse given by polar decomposition: \[T\mapsto (T(T^\ast T)^{-1/2},T^\ast T).\] The map~\eqref{eq:polar-decomp-map} leads to the following formula for integration over $\rm{GL}(k,n)$. \begin{prop}\label{prop:int-form} Any positive Borel function $f$ on $\rm{GL}(k,n)$ satisfies \begin{multline}\label{eq:int-form} \int_{\rm{GL}(k,n)} f(T)\,d\vol_{2kn}(T) \\ = c(k,n)\int_{\rmM^\circ_{k+}} (\det Q)^{n-k} \int_{\rm{U}(k,n)} f(VQ^{1/2})\ dm_{\rm{U}(k,n)}(V)\ d\vol_{k^2}(Q), \end{multline} where \begin{equation}\label{eq:ckn} c(k,n) := \frac{\pi^{nk}}{\pi^{k(k-1)/2}\prod_{j=1}^k(n-j)!}. \end{equation} \end{prop} When $k=1$,~\eqref{eq:int-form} is simply the formula for integration over a complex vector space using polar coordinates: \begin{align*} \int_{\bbC^n} f(z)\,d\vol_{2n}(z) &= \frac{\pi^n}{(n-1)!}\int_0^\infty q^{n-1}\int_{\rmS^{2n-1}}f(\sqrt{q}u)\ d\s_{2n-1}(u)\ dq \\ &= \frac{2n\pi^n}{n!}\int_0^\infty r^{2n-1}\int_{\rmS^{2n-1}}f(ru)\ d\s_{2n-1}(u)\ dr, \end{align*} where the second line follows by substituting $q := r^2$. See, for instance,~\cite[equation (1.5)]{AxlBouRamHF} together with~\cite[Proposition A.1]{AxlBouRamHF}. The same reference also gives the identification \begin{equation}\label{eq:ball-vol} c(1,n) = \frac{\pi^n}{(n-1)!} = n\cdot \vol_{2n}B_1(0), \end{equation} where $B_1(0)$ is the unit ball in $\bbC^{(n)}$. \begin{proof}[Proof of Proposition~\ref{prop:int-form}] The general case of~\eqref{eq:int-form} can be proved in various ways, but some careful manipulations with the change-of-variables formula are unavoidable. Happily, the result is known classically from the study of complex Wishart distributions. Real Wishart distributions are one of the standard families that arise in high-dimensional statistical inference: see, for instance,~\cite[Subsection 2.3.6 and Appendix B]{Bishop}. Their complex counterparts have been studied since at least Goodman's work~\cite{Goo63}, which includes the following result. Let $T$ be a random matrix in $\rmM_{n,k}$ whose rows are independent with complex $k$-variate Gaussian distribution with mean zero and complex covariance matrix $I_k$. This means that the overall distribution of $T$ is \begin{equation}\label{eq:C-Gauss} \frac{1}{\pi^{nk}}e^{-\Tr_k(T^\ast T)}\ d\vol_{2kn}(T) \qquad (T \in \rmM_{n,k}); \end{equation} this is a product of $n$ copies of~\cite[equation (1.5)]{Goo63}. Then the resulting random Gram matrix $Q := T^\ast T$ has the \textbf{complex Wishart distribution} \begin{equation}\label{eq:C-Wish} \frac{1}{\pi^{k(k-1)/2}\prod_{j=1}^k(n-j)!}(\det Q)^{n-k}e^{-\Tr_k Q}\ d\vol_{k^2}(Q) \qquad (Q \in \rmM_{k+}): \end{equation} see~\cite[equation (1.6)]{Goo63}. To derive~\eqref{eq:int-form} from this fact, first observe that both sides of~\eqref{eq:int-form} are invariant under the action of $\rm{U}(n)$ by left-multiplication. We may therefore average over that action first and so reduce to the case when $f(T) = g(T^\ast T)$ for some positive Borel function $g$ on $\rmM_{k+}$. 
Then, letting $h(Q) := e^{\Tr_k Q}g(Q)$, the distribution formulas~\eqref{eq:C-Gauss} and~\eqref{eq:C-Wish} give the equality of expectations \begin{multline*} \frac{1}{\pi^{nk}}\int_{\rm{GL}(k,n)} e^{-\Tr_k(T^\ast T)}h(T^\ast T)\ d\vol_{2kn}(T) \\ = \frac{1}{\pi^{k(k-1)/2}\prod_{j=1}^k(n-j)!}\int_{\rmM^\circ_{k+}} (\det Q)^{n-k}e^{-\Tr_k Q}h(Q)\ d\vol_{k^2}(Q), \end{multline*} and this is~\eqref{eq:int-form} after multiplying through by $\pi^{nk}$. \end{proof} We refer to the normalizing constants $c(k,n)$ frequently in the sequel, but we generally do not need the exact formula for them. The only conclusion we really use is the following asymptotic behaviour for $k$ fixed and $n\to\infty$, which follows from~\eqref{eq:ckn} and Stirling's approximation: \begin{equation}\label{eq:ckn-asymp} c(k,n) = \frac{(e\pi)^{nk}}{n^{nk}}\cdot e^{o(n)} \qquad \hbox{and hence} \qquad c(k,n) = k^{kn}\cdot c(1,kn)\cdot e^{o(n)}. \end{equation} \subsection*{\emph{Notes and further references}} In connection with Wishart distributions, Proposition~\ref{prop:int-form} has several intricate variants and enhancements, for example by passing from the distribution of $Q$ to the distribution of its eigenvalues. This is a well-studied topic: see, for instance,~\cite[Exercise 2.1.18 and Proposition 4.1.3]{AndGuiZei--book} (as well as the bibliographical notes to both chapters),~\cite[Section 6.2]{WaiHDS}, or~\cite[pp129--131 and Lemma 4.4.7]{HiaPetSL}. \section{The method of types}\label{sec:types} Section~\ref{sec:log-det-ent-etc} treats log-determinant entropy as a purely formal analog of Shannon entropy. But log-determinant entropy also has an interpretation in terms of the volumes of certain sets of `typical tuples' of vectors in high dimensions. This is the analog of the `method of types' in information theory~\cite[Section 11.1]{CovTho06}, which interprets Shannon entropy in terms of some basic counting problems. When log-determinant entropy appears as the differential entropy of a jointly Gaussian random vector, it also has a standard method-of-types interpretation in terms of volumes: the basic idea is covered in~\cite[Section 8.2]{CovTho06}. The results of this section are slightly different, but the spirit is the same. For any $n\ge k$ and $O \subset \rmM_{k+}$, let \begin{equation}\label{eq:TnO} T(n,O) := \{X \in \rmM_{n,k}:\ X^\ast X \in O\}. \end{equation} \begin{thm}\label{thm:types-1} Let $Q$ be a $k$-by-$k$ positive semidefinite matrix. \begin{itemize} \item[a.] (Lower bound) If $Q$ is nonsingular and $O$ is any neighbourhood of $Q$ in $\rmM_{k+}$, then \[\frac{\vol_{2kn}T(n,O)}{c(k,n)} \ge e^{\rmH(Q)n - o(n)}.\] \item[b.] (Upper bound) For any $a > \rmH(Q)$ there is a neighbourhood $O$ of $Q$ in $\rmM_{k+}$ such that \[\frac{\vol_{2kn}T(n,O)}{c(k,n)} \le e^{an + o(n)}.\] \end{itemize} \end{thm} \begin{proof} When $Q$ is nonsingular, we can prove both parts together rather directly from Proposition~\ref{prop:int-form}. Indeed, in that case we can shrink $O$ if necessary to assume that it is contained in $\rmM^\circ_{k+}$, and then that proposition gives \begin{align*} \vol_{2kn}T(n,O) &= c(k,n)\int_O (\det Q)^{n-k}\int_{\rm{U}(k,n)} 1\ dm_{\rm{U}(k,n)}(V)\ d\vol_{k^2}(Q)\\ &=c(k,n)\int_O (\det Q)^{n-k}\ d\vol_{k^2}(Q). 
\end{align*} If $\eps > 0$, then we may choose $O$ so small that every $Q' \in O$ satisfies \begin{equation}\label{eq:O-above-and-below} e^{-\eps} \det Q \le \det Q' \le e^\eps \det Q, \end{equation} at which point the above integral implies that \[e^{(\rmH(Q)-\eps)(n-k)}\vol_{k^2}O \le \frac{\vol_{2kn}T(n,O)}{c(k,n)} \le e^{(\rmH(Q) +\eps)(n-k)}\vol_{k^2}O.\] This implies both the lower and upper bounds, because (i) $\eps$ is arbitrary, (ii) $k$ is fixed while $n\to\infty$, and (iii) the lower bound for a smaller choice of $O$ implies it for a larger choice of $O$, so $\eps n$ can be improved to $o(n)$ in the lower bound.
The proof of the upper bound when $Q$ is singular is analogous, except this time we choose $O$ so small that every $Q' \in O$ satisfies $\det Q' \le e^a$, rather than~\eqref{eq:O-above-and-below}. \end{proof}
Although simple, Theorem~\ref{thm:types-1} is our basic template for Theorem~\ref{mainthm:det-form}, which is the equivariant analog of Theorem~\ref{thm:types-1} for a countable group and representations of it that may be infinite-dimensional. The definition of AP entropy in Chapter~\ref{chap:AP} considers the volumes of sets that are the equivariant analogs of the sets $T(n,O)$, and the log-determinant that gives the leading exponents in Theorem~\ref{thm:types-1} becomes its equivariant analog: a Fuglede--Kadison determinant (see Definition~\ref{dfn:FK-det-ac}).
\subsection*{\emph{Notes and further references}}
The book~\cite{CsiKor--book} develops much of information theory starting from the `method of types', and the same is done for large deviations theory in~\cite[Section 2.1]{DemZei--LDPbook}.
For discrete Shannon entropy, the usual proofs in the method of types involve separately counting strings with a given empirical distribution and then judiciously applying Stirling's approximation. While this is elementary, Theorem~\ref{thm:types-1} is arguably even simpler. This is because of the changes of variable that are encapsulated in Proposition~\ref{prop:int-form}. These are ultimately an expression of the symmetries of $\vol_{2kn}$ and other relevant measures on spaces of matrices, which have no obvious analog for probability distributions over finite sets.
A related use of symmetry is also essential to our later proof of Theorem~\ref{mainthm:det-form}. For that theorem, a `discrete' analog would probably require an explicit `formula' for sofic entropy of a stationary process over a finite alphabet. At present, no candidate for such a formula has even been conjectured. Indeed, there are reasons why such an analog might be impossible, in that sofic entropy might depend on the chosen sofic approximation in a fundamentally more complicated way than AP entropy depends on the AP approximation: see the discussion following Corollary~\ref{cor:det-form}.
By combining Theorem~\ref{thm:types-1} with a Gaussian density for $T$, one obtains a fairly simple proof of the large deviations principle for the Wishart matrices $T^\ast T$ when $T\in \rmM_{n,k}$ is chosen from a Gaussian distribution, $k$ is fixed, and $n\to\infty$. Section~\ref{sec:matrix-completion-LDP} below derives some related large deviations principles in a similar way. However, much of the research interest in Wishart matrices concerns regimes in which $k$ grows with $n$, and these demand finer techniques. See, for instance,~\cite[Section 5.5]{HiaPetSL}.
\chapter{Preliminaries on operator algebras and their representations}\label{chap:rep-th-op-alg}
This is another chapter containing essentially standard material, although we use non-standard notation at a few points below. Once again, readers who have experience with these topics can probably skip this chapter and follow references back to it when needed.
\section{C$\sp*$-algebras and their representations}
\subsection*{\emph{C$\sp*$-algebras and operator algebras}}
Throughout Part~\ref{part:general}, $\A$ is a \emph{separable, unital} C$\sp\ast$-algebra and we study \emph{separable} representations, meaning that they act on separable complex Hilbert spaces. We usually denote a representation by a single letter such as $\pi$, and then write its Hilbert space as $H_\pi$ when necessary.
In the first few chapters, separability is a convenience more than a necessity. A few arguments that use sequences could use nets instead, and a few facts about compact metric spaces could be replaced with generalizations for compact Hausdorff spaces. However, once the more novel work begins in Chapter~\ref{chap:AP}, this assumption of separability becomes more important. Our basic objects of study are sequences of finite-dimensional representations and their `limits' in appropriate senses, which are necessarily separable. On the other hand, if the C$\sp\ast$-algebra itself is not separable, then convergence for sequences no longer captures all the properties of the various topologies we need, and so some statements would at least require substantial changes, particularly once we reach the notion of `Fell-global' convergence in the next chapter.
Throughout this part, our guiding examples are the group C$\sp\ast$-algebras $C^\ast \G$ of countable groups $\G$. The algebra $C^\ast \G$ is the maximal C$\sp\ast$-completion of the group algebra $\bbC[\G]$, so it is indeed separable and unital. Our main results would also apply to the unital augmentation of the group C$\sp\ast$-algebra of a locally compact, second countable group.
If $\A$ is a separable C$^\ast$-algebra and $k$ is a positive integer, then $\rmM_k(\A)$ is also a separable C$\sp\ast$-algebra in a canonical way~\cite[Section 34]{ConOT}.
If $\A = C^\ast \G$, then representations of $\A$ correspond to unitary representations of $\G$. When discussing these examples we pass back and forth between groups and their C$^\ast$-algebras rather freely, for instance by using the same notation for a representation of $\G$ and for its extension to $C^\ast\G$.
Infinite-dimensional representation theory and operator algebras have been entwined since their inception. For definiteness, I mostly cite~\cite{ConFA,ReeSimFA} for background from functional analysis,~\cite{FolAHA,Mac76} for representation theory, and~\cite{ConOT,Dix--Cstar,Dix--vN} for C$\sp*$-algebras and operator algebras. I assume standard definitions and results from abstract algebra at about the level of~\cite{GriAA}. Beyond the level of those classic texts, I try to include complete proofs. This is partly for the benefit of readers who are not specialists in these areas, but also because, at various points later, we need to take apart the proofs of some general theorems and see how to approximate their ingredients, rather than just citing their conclusions. Several of the sections below end with a review of more advanced references for the interested reader.
Once a particular representation $\pi$ is being considered, operator theory provides many auxiliary constructions of operators in $\B(H_\pi)$, for instance via the Borel functional calculus or symmetry considerations. These often fall outside the operator-norm closure of $\pi(\A)$, which is a C$\sp*$-algebra, but within its bi-commutant $\pi(\A)''$, which agrees with the weak-operator closure of $\pi(\A)$ and is a von Neumann algebra. To help us keep in mind the proper context for each object we consider, we adhere rather strictly to the following convention:
\begin{itemize} \item C$^\ast$-algebras may exist \emph{in the abstract}; \item a von Neumann algebra is a weak-operator closed $\ast$-subalgebra of $\B(H)$ for some particular Hilbert space $H$. \end{itemize}
In particular, we may be somewhat casual about identifying C$^\ast$-algebras that are isomorphic, but two `copies' of a von Neumann algebra acting on different Hilbert spaces are regarded as different von Neumann algebras. This convention follows~\cite{FolAHA} and~\cite{ConOT}, for example. Occasionally it can be a little fussy: for instance, $\B(\bbC^k)$ is a von Neumann algebra while the matrix algebra $\rmM_k$ is not, although the former is the isomorphic image of the latter under the obvious representation. However, even for finite-dimensional matrix algebras, this convention helps to separate the different roles they play in our work.
\subsection*{\emph{Representations}}
If $\pi$ is a representation of $\A$ and $M$ is a closed $\pi$-invariant subspace of $H_\pi$ then we write $\pi^M$ for the associated \textbf{subrepresentation}. A representation $\pi$ is \textbf{irreducible} if its only closed invariant subspaces are $\{0\}$ and $H_\pi$.
We use $\oplus$ for general direct sums in the categories of Hilbert spaces or representations. For a single representation $\pi$ and any $k \in \{1,2,\dots\} \cup \{\infty\}$, we write $\pi^{(k)}$ for the direct sum of $k$ copies of $\pi$, using the indexing set $\bbN$ when $k=\infty$. We call this the \textbf{$k$-fold inflation} of $\pi$.
If $\pi$ is a representation of $\A$, then a subset $S$ of $H_\pi$ is \textbf{cyclic} for $\pi$ if it is not contained in any proper closed invariant subspace of $H_\pi$, or equivalently if \[\ol{\sum_{v \in S}\pi(\A)v} = H_\pi.\]
If $\pi$ and $\rho$ are two representations of $\A$, then:
\begin{itemize} \item $\pi$ is \textbf{equivalent} to $\rho$, written $\pi \cong \rho$, if there is a unitary operator from $H_\pi$ to $H_\rho$ that intertwines $\pi$ with $\rho$; \item $\pi$ is \textbf{contained} in $\rho$, written $\pi \lesssim \rho$, if $\pi$ is equivalent to a subrepresentation of $\rho$; \item $\pi$ and $\rho$ are \textbf{disjoint} if no nontrivial subrepresentations of them are equivalent. \end{itemize}
If $\A$ is separable, then so is any irreducible representation, and so any such representation is equivalent to one on a chosen separable `reference' Hilbert space. We can therefore define the \emph{set} of equivalence classes of irreducible representations of $\A$. It is called the \textbf{spectrum} of $\A$ and denoted by $\hat{\A}$. If $\A = C^\ast \G$, then $\hat{\A}$ is called the \textbf{unitary dual} of $\G$ and denoted by $\hat{\G}$. See~\cite[Chapter 7]{FolAHA} or~\cite[Chapter 18]{Dix--Cstar}.
Another possible relation is that $\pi$ is contained in an inflation of $\rho$. We do not introduce a separate term for this relation. If $H_\pi$ is separable, then we can take it to mean that $\pi \lesssim \rho^{(\infty)}$.
However, we do use a term for the resulting equivalence relation: $\pi$ and $\rho$ are \textbf{quasi-equivalent} if each is contained in an inflation of the other. See~\cite[Proposition 5.3.1]{Dix--Cstar} for several alternative formulations of quasi-equivalence.
The next proposition is a basic comparison between two arbitrary representations.
\begin{prop}\label{prop:Leb-reps} If $\pi$ and $\rho$ are two representations of $\A$, then $\pi$ has a unique invariant subspace $M$ such that $\pi^M$ is contained in $\rho^{(\infty)}$ and $\pi^{M^\perp}$ is disjoint from $\rho$. The orthogonal projection onto $M$ lies in the centre of $\pi(\A)''$. \qed \end{prop}
For unitary representations of groups, this is~\cite[Theorem 1.11]{Mac76}; the case of general C$\sp\ast$-algebras has the same proof.
\subsection*{\emph{Representations of matrix C$\sp*$-algebras}}
If $\pi$ is a representation of $\A$ on $H$ and $k$ is a positive integer, then we define a representation $\pi_{(k)}$ of $\rmM_k(\A)$ on $H^{(k)}$ by combining $\pi$ with the usual rules of matrix-vector multiplication: \[\pi_{(k)}([a_{ij}])\cdot [v_1,\dots,v_k]^\rm{T} := \Big(\Big[\sum_{j=1}^k\pi(a_{ij})v_j\Big]_{i=1}^k\Big)^\rm{T}.\] From another point of view, we can identify $H^{(k)}$ with $H\otimes \bbC^{(k)}$, and then $\pi_{(k)}$ is the Kronecker product of $\pi$ with the canonical representation \[\rm{canon}_k:\rmM_k \to \B(\bbC^{(k)}).\] (The construction of $\pi_{(k)}$ for a faithful choice of $\pi$ actually gives a quick way to \emph{define} the C$\sp*$-algebra structure on $\rmM_k(\A)$: see, for instance, the second paragraph of~\cite[Section 34]{ConOT}.)
On the other hand, any representation $\kappa$ of $\rmM_k(\A) \cong \A\otimes \rmM_k$ on a Hilbert space $K$ is generated by commuting representations $\kappa|\rmM_k$ and $\kappa|\A$. For $\rmM_k$, every representation is an inflation of $\rm{canon}_k$: this is a classical result of pure algebra, or a special case of the representation theory of C$\sp*$-algebras of compact operators~\cite[Section 16]{ConOT}. We can therefore write $K$ as $H^{(k)}$ for some auxiliary Hilbert space $H$ so that $\kappa|\rmM_k$ is identified with $I_H\otimes \rm{canon}_k$. From here, the commutant of $\kappa|\rmM_k$ is identified with $\B(H) \otimes I_k$~\cite[Section 50]{ConOT}. Since $\kappa|\A$ takes values in this commutant, it in turn must have the form $\pi\otimes I_k \cong \pi^{(k)}$ for some representation $\pi$ of $\A$ on $H$. Finally, if a projection of $K$ commutes with both $\kappa|\rmM_k$ and $\kappa|\A$, then it must have the form $P\otimes I_k$ for some projection $P \in \pi(\A)'$.
Putting these ingredients together gives parts (a) and (b) of the next lemma, and part (c) is a direct calculation.
\begin{lem}\label{lem:reps-of-matrices} The following hold.
\begin{itemize} \item[a.] Every representation of $\rmM_k(\A)$ is equivalent to $\pi_{(k)}$ for some representation $\pi$ of $\A$. \item[b.] Every subrepresentation of $\pi_{(k)}$ is equivalent to $\rho_{(k)}$ for some $\rho \lesssim \pi$. \item[c.] We have $(\pi_{(k)})^{(\ell)} \cong (\pi^{(\ell)})_{(k)}$ for every positive integer $k$ and cardinal $\ell$. \qed \end{itemize} \end{lem}
\begin{ex}\label{ex:matrix-mult} Matrix multiplication from the left defines another representation \[\rm{mult}_k:\rmM_k\to \B(\rmM_k).\] By regarding the columns of a $k$-by-$k$ matrix as a $k$-tuple of separate vectors in $\bbC^{(k)}$, we see that $\rm{mult}_k \cong \rm{canon}_k^{(k)}$.
This equivalence survives upon taking Kronecker products with another representation: in particular, $\pi\otimes \rm{mult}_k$ is quasi-equivalent to $\pi_{(k)}$ for every $k$ and $\pi$. In spite of this, Kronecker products with $\rm{mult}_k$ do have the special advantage of preserving the existence of a tracial vector: see Example~\ref{ex:tensor-prod} below. \qed \end{ex} Generalizing further, if $\pi$ is a representation of $\A$, $k$ and $\ell$ are two positive integers, and $a \in \rmM_{\ell,k}(\A)$, then we write $\pi_{(\ell,k)}(a)$ as short for $[\pi(a_{ij})]_{ij}$, and interpret it as an operator from $H^{(k)}$ to $H^{(\ell)}$ using matrix-vector multiplication. If $k\ne \ell$ then $\rmM_{\ell,k}(\A)$ is not an algebra, but this construction still appears as a way to move between different Hilbert spaces later. \section{Completely positive maps}\label{sec:CP} \subsection*{\emph{Positive functionals and completely positive maps}} If $\pi$ is a representation of $\A$ and $v \in H_\pi$, then the formula \[\Phi^\pi_v(a) := \langle \pi(a)v,v\rangle\] defines a positive linear functional on $\A$. All positive linear functionals arise this way because of the GNS construction. More generally, let $v_1$, \dots, $v_k \in H_\pi$, and regard the tuple $V := [v_1,\dots,v_k]$ as a linear map from $\bbC^{(k)}$ to $H$. Then the formula \begin{equation}\label{eq:cp-assoc} \Phi^\pi_V(a) := V^\ast \pi(a) V = [\langle \pi(a)v_j,v_i\rangle]_{i,j=1}^k \end{equation} defines a linear map from $\A$ to $\rmM_k$. Notice that the order of the indices in each matrix entry matches the convention for the Gram matrix of a tuple of vectors in~\eqref{eq:Gram}. We sometimes write $\Phi^\pi_{v_1,\dots,v_k}$ instead of $\Phi^\pi_V$. The map $\Phi^\pi_V:\A\to \rmM_k$ is always completely positive. The basic theory of completely positive maps can be found in~\cite[Section 34]{ConOT}; this particular instance of them is~\cite[Example 34.3(b)]{ConOT}. In the sequel, we write $\B(\A,\rmM_k)$ for the space of all continuous linear maps from $\A$ to $\rmM_k$, and $\B(\A,\rmM_k)_+$ for the subset of all completely positive (not just positive) ones. We say that the completely positive map in~\eqref{eq:cp-assoc} is \textbf{associated to $\pi$ by $v_1$, \dots, $v_k$}. Alternatively, if the representation $\pi$ is understood, we adapt a term from information theory by calling $\Phi^\pi_V$ the \textbf{type} of the tuple $V$ (compare the usage in~\cite[Section 11.1]{CovTho06}, for example). Much of our work later concerns classifying the tuples in a representation space according to their type. Any completely positive map $\phi:\A \to \rmM_k$ is necessarily continuous~\cite[Proposition 33.4]{ConOT}. Moreover, a generalization of the GNS construction shows that $\phi$ is associated to some representation $\pi$ by a tuple $v_1$, \dots, $v_k$. If this tuple is cyclic for $\pi$, then $\pi$ and the tuple are unique up to unitary equivalence; in this case we call them a \textbf{minimal dilation} of $\phi$, and often denote the representation by $\pi_\phi$. This fact is a special case of Stinespring's theorem~\cite[Theorem 34.7]{ConOT}. A continuous linear map $\phi:\A\to \rmM_k$ may be regarded as a matrix $[\phi_{ij}]$ of continuous linear functionals on $\A$. We may therefore use it to define an element of $\rmM_k(\A)^\ast$ by the pairing \begin{equation}\label{eq:pairing} \langle \phi,a\rangle := \frac{1}{k}\sum_{ij}\phi_{ij}(a_{ij}). 
\end{equation} The division by $k$ is convenient later because it preserves the property of $\phi$ being `unital' (Definition~\ref{dfn:normalized} below). In general, the pairing~\eqref{eq:pairing} defines a canonical isomorphism \begin{equation}\label{eq:pairing-iso} \B(\A,\rmM_k) \to \rmM_k(\A)^\ast. \end{equation} With this isomorphism understood, we henceforth regard the weak$^\ast$ topology as defined on either space. By default we equip any subset of either space with the restriction of this topology. If $\phi:\A\to\rmM_k$ is completely positive and it is associated to $\pi$ by the tuple $v_1$, \dots, $v_k \in H_\pi$, and if $a = [a_{ij}] \in \rmM_k(\A)$, then we find that \begin{equation}\label{eq:phi-brackets-associated} \langle \phi,a\rangle = \frac{1}{k}\sum_{ij}\langle \pi(a_{ij})v_j,v_i\rangle = \frac{1}{k}\left\langle \pi_{(k)}(a)\left[\begin{array}{c}v_1\\ \vdots \\ v_k\end{array}\right],\left[\begin{array}{c}v_1\\ \vdots \\ v_k\end{array}\right]\right\rangle. \end{equation} Thus, when applied to a completely positive map, the isomorphism in~\eqref{eq:pairing-iso} has the following effect on minimal dilations. \begin{lem}\label{lem:dilation-matrix-dilation} If $\phi$ is associated to $\pi$ by the cyclic tuple $v_1,\dots,v_k\in H_\pi$, then $\langle \phi,\cdot\rangle$ is associated to $\pi_{(k)}$ by the cyclic vector $k^{-1/2}[v_1,\dots,v_k]^\rm{T}$. In particular, \[\pi_{\langle \phi,\cdot\rangle}\cong (\pi_\phi)_{(k)}.\] \qed \end{lem} On the other hand, by Lemma~\ref{lem:reps-of-matrices}, any representation of $\rmM_k(\A)$ is of the form $\pi_{(k)}$ up to equivalence, and so any positive functional on $\rmM_k(\A)$ is associated to some such representation by a cyclic vector. Now reading~\eqref{eq:phi-brackets-associated} from right to left, we conclude the following. \begin{prop}\label{prop:pairing-positive} The pairing isomorphism~\eqref{eq:pairing-iso} identifies $\B(\A,\rmM_k)_+$ with the closed cone $\rmM_k(\A)^\ast_+$ of positive elements of $\rmM_k(\A)^\ast$. \qed \end{prop} Concrete examples of representations and associated completely positive maps are considered for group C$\sp*$-algebras in the next section. The next two examples give constructions of new maps from old ones. \begin{ex}\label{ex:diagonal-phi} Given a positive functional $\phi$ on $\A$ and a positive integer $k$, the tensor product $\phi\otimes I_k$ defines a completely positive map $\A\to \rmM_k$. If $\phi$ is associated to $\pi$ by $\xi$, then $\phi \otimes I_k$ is associated to $\pi^{(k)}$ by the vectors \begin{equation}\label{eq:xi-i} \xi_i = [0,\dots,0,\xi,0,\dots,0]^\rm{T} \qquad (i=1,2,\dots,k), \end{equation} where only the $i^\rm{th}$ coordinate of $\xi_i$ is nonzero. Alternatively, the constant $k$-by-$k$ matrix \[ \phi_{ij} := \phi \quad (1 \le i,j \le k)\] is also positive definite. If $\phi$ is associated to $\pi$ by $\xi$, then this matrix is associated to the same representation $\pi$ by the $k$-tuple $\xi$, \dots, $\xi$. \qed \end{ex} \begin{ex}\label{ex:tensor-prod} Given a positive functional $\phi$ on $\A$ and a positive integer $k$, the tensor product $\phi\otimes \Tr_k$ defines a positive functional on $\A\otimes \rmM_k \cong \rmM_k(\A)$. 
If $\phi$ is associated to $\pi$ by $\xi$, then $\phi\otimes \Tr_k$ can be written using the representation $\pi_{(k)}$ and the same tuple of vectors from~\eqref{eq:xi-i}, because \[(\phi\otimes \Tr_k)([a_{ij}]) = \sum_i\phi(a_{ii}) = \sum_i\langle \pi_{(k)}(a)\xi_i,\xi_i\rangle \qquad (a = [a_{ij}] \in \rmM_k(\A)).\] However, this does not associate $\phi\otimes \Tr_k$ to $\pi_{(k)}$ by a single cyclic vector. To do that, we must use instead the larger representation $\t{\pi} := \pi\otimes \rm{mult}_k$ from Example~\ref{ex:matrix-mult}, which gives \[(\phi\otimes \Tr_k)([a_{ij}]) = \langle \t{\pi}(a)(\xi\otimes I_k),\xi\otimes I_k\rangle \qquad (a = [a_{ij}] \in \rmM_k(\A)).\] Since $\xi\otimes I_k$ is also cyclic for $\t{\pi}$, this identifies the GNS representation of $\phi\otimes \Tr_k$. \qed \end{ex}
Some manipulations with types are most easily expressed in terms of pairings like~\eqref{eq:pairing} and identities like~\eqref{eq:phi-brackets-associated}.
\begin{lem}\label{lem:Ad-formula} Fix positive integers $k$ and $\ell$. Let $\pi$ be a representation of $\A$, let $v_1,\dots,v_k$ be a $k$-tuple in $H_\pi$, and let $\phi := \Phi^\pi_{v_1,\dots,v_k}$. Let $a \in \rmM_{\ell,k}(\A)$, define \[\left[\begin{array}{c}y_1 \\ \vdots \\ y_\ell\end{array}\right] = \pi_{(\ell,k)}(a)\left[\begin{array}{c}v_1\\ \vdots \\ v_k\end{array}\right]\ \in H_\pi^{(\ell)}\] and let $\psi := \Phi^\pi_{y_1,\dots,y_\ell}$. Then these completely positive maps are related by \begin{equation}\label{eq:psi-phi} \ell\langle \psi,b\rangle = k\langle \phi,a^\ast ba \rangle \qquad (b \in \rmM_\ell(\A)). \end{equation} \end{lem}
\begin{proof} This is just a matter of matrix multiplication and taking adjoints: \[\ell\langle \psi,b\rangle = \left\langle \pi_{(\ell,k)}(ba)\left[\begin{array}{c}v_1 \\ \vdots \\ v_k\end{array}\right],\pi_{(\ell,k)}(a)\left[\begin{array}{c}v_1 \\ \vdots \\ v_k\end{array}\right]\right\rangle = k\langle \phi,a^\ast ba\rangle.\] \end{proof}
To turn~\eqref{eq:psi-phi} into an expression for $\psi$ as a map from $\A$ to $\rmM_\ell$, we can choose a pair of indices $(i,j)$ and apply~\eqref{eq:psi-phi} to the matrix with $b_{ij} = b$ and all other entries equal to $0$. When we do this, the right-hand side of~\eqref{eq:psi-phi} still involves pairing $\phi$ with a $k$-by-$k$ operator matrix that could be nonzero in any of its entries: this is why the pairing notation is so convenient in this case.
\begin{dfn}\label{dfn:normalized} A completely positive map $\phi:\A \to \rmM_k$ is \textbf{unital} if $\phi(e) = I_k$. The set of these is denoted by $\S_k(\A)$. \end{dfn}
In particular, $\S_1(\A)$ is the usual state space of $\A$. For each $k$ the set $\S_k(\A)$ is a compact and convex subset of $\B(\A,\rmM_k)$.
\begin{dfn}\label{dfn:typical} For any positive integer $k$ and subset $O$ of $\B(\A,\rmM_k)$, let \[ \X(\pi,O) := \big\{[v_1,\dots,v_k]^\rm{T} \in H_\pi^{(k)} :\ \Phi^\pi_{v_1,\dots,v_k} \in O \big\}. \] The elements of $\X(\pi,O)$ are the \textbf{$O$-typical} tuples of the representation $\pi$. \end{dfn}
We often use Definition~\ref{dfn:typical} when $O$ is a small neighbourhood around a given `target' completely positive map $\phi$. In this case we may refer informally to elements of $\X(\pi,O)$ as `approximately $\phi$-typical vectors'. This resembles the use of terms such as `microstate' or `good model' in free probability and the study of sofic entropy in ergodic theory (compare~\cite[Section 2.3]{Bowen--survey}, for example).
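As an aside that follows directly from the definitions, note that if $U$ is a unitary operator in the commutant $\pi(\A)'$, then
\[\Phi^\pi_{Uv_1,\dots,Uv_k}(a) = [\langle \pi(a)Uv_j,Uv_i\rangle]_{i,j=1}^k = [\langle U\pi(a)v_j,Uv_i\rangle]_{i,j=1}^k = \Phi^\pi_{v_1,\dots,v_k}(a) \qquad (a \in \A),\]
so every set $\X(\pi,O)$ is invariant under the diagonal action of such unitaries on $H_\pi^{(k)}$.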
Lemma~\ref{lem:dilation-matrix-dilation} gives a simple relationship between the approximately typical vectors for $\phi$ and those for $\langle \phi,\cdot\rangle$. For any representation $\pi$ of $\A$ and any $x_1$, \dots, $x_k \in H_\pi$, that lemma gives \[\langle\Phi^\pi_{x_1,\dots,x_k},\cdot\rangle = \Phi^{\pi_{(k)}}_{k^{-1/2}[x_1,\dots,x_k]^\rm{T}}.\] Therefore, if $O$ is any neighbourhood of $\phi$ and $\t{O}$ is the corresponding neighbourhood of $\langle \phi,\cdot\rangle$ under the isomorphism~\eqref{eq:pairing-iso}, then \begin{equation}\label{eq:X-and-X} \X(\pi_{(k)},\t{O}) = k^{-1/2}\X(\pi,O). \end{equation}
As a vector space topology, the weak$^\ast$ topology on $\B(\A,\rmM_k)$ also defines a canonical uniform structure on this space (see, for instance,~\cite[Section 8.1]{Eng89} for the basics of uniform structures). Restricted to any bounded subset of $\B(\A,\rmM_k)$, this uniform structure is generated by any translation-invariant metric compatible with the topology. We need this uniform structure only occasionally, usually through the following lemma:
\begin{lem}\label{lem:unif-cts} For any $\pi$ and $k$, the type map \[H_\pi^{(k)} \to \B(\A,\rmM_k):[v_1,\dots,v_k]^\rm{T}\mapsto \Phi^\pi_{v_1,\dots,v_k}\] is continuous, and uniformly continuous on any bounded subset of $H_\pi^{(k)}$. \end{lem}
\begin{proof} These properties are elementary for the inner product map $H_\pi\times H_\pi\to \bbC$. They follow for types by arguing pointwise for each $i$, $j$, and $a \in \A$. \end{proof}
We also need a notation for the images of the maps in Lemma~\ref{lem:unif-cts} when we restrict to orthonormal tuples.
\begin{dfn}\label{dfn:type-set} For a given representation $\pi$, its \textbf{$k$-fold type-set} is \[\S_k(\pi) := \big\{\Phi^\pi_{v_1,\dots,v_k}:\ v_1,\dots,v_k \in H_\pi\ \hbox{are orthonormal}\big\},\] the subset of elements of $\S_k(\A)$ that are associated to $\pi$. \end{dfn}
Observe that \[O\cap \S_k(\pi) \ne \emptyset \quad \Leftrightarrow \quad \X(\pi,O) \ne \emptyset\] for any representation $\pi$ and subset $O$ of $\S_k(\A)$.
\subsection*{\emph{Tracial functionals}}
A linear functional $\tau$ on $\A$ is \textbf{tracial} if \begin{equation}\label{eq:trace} \tau(ab) = \tau(ba) \qquad (a,b \in \A). \end{equation} The study of these is motivated by two fundamental sources of examples. First, if $\pi$ is a $d$-dimensional representation with $d$ finite, then the normalized trace on $\B(\bbC^d)$ pulls back to the tracial state $\tr_d \circ \pi$ on $\A$. Second, for group C$\sp*$-algebras, tracial states correspond to characters of the group; we return to these below.
The normalized trace on $\rmM_k$ can be combined with any other tracial functional $\tau$ on a C$\sp*$-algebra $\A$ by taking their tensor product. This defines a linear functional on $\rmM_k\otimes \A = \rmM_k(\A)$. Written out in full, this gives the new functional \[(\tr_k\otimes \tau)([a_{ij}]) := \frac{1}{k}\sum_i \tau(a_{ii}) \qquad ([a_{ij}] \in \rmM_k(\A)).\] This is still tracial, because \[(\tr_k\otimes \tau)([a_{ij}]\cdot[b_{ij}]) = (\tr_k\otimes \tau)\Big(\Big[\sum_\ell a_{i\ell}b_{\ell j}\Big]\Big) = \frac{1}{k}\sum_{i,\ell}\tau(a_{i\ell}b_{\ell i}),\] and we can swap the order of each product $a_{i\ell}b_{\ell i}$ on the right-hand side because of~\eqref{eq:trace}. If $\tau$ is a state on $\A$, then $\tr_k\otimes \tau$ is a state on $\rmM_k(\A)$.
In general, if a tracial positive functional $\tau$ is associated to the representation $\l$ on $H$ by a vector $\xi$, then~\eqref{eq:trace} becomes an identity for $\xi$: \begin{equation}\label{eq:trace-vec} \langle \l(a)\l(b)\xi,\xi\rangle = \langle \l(b) \l(a)\xi,\xi\rangle \qquad (a,b \in \A). \end{equation} In any representation, a vector satisfying~\eqref{eq:trace-vec} is called \textbf{tracial}. \subsection*{\emph{The example of discrete group C$\sp*$-algebras}} Throughout this work, our guiding examples of C$\sp*$-algebras are the C$\sp*$-algebras $C^\ast\G$ of locally compact, second countable groups $\G$. Their construction and main properties are given in~\cite[Section 7.1]{FolAHA} and~\cite[Section 13.9]{Dix--Cstar}. In this section we assume further that $\G$ is a countable discrete group, which is simplest to describe because then $C^\ast \G$ is unital and contains a canonical copy of $\G$ itself. If $\G$ is such a group, then any completely positive map $\phi:\A\to\rmM_k$ may be restricted to define a map from $\G$ itself to $\rmM_k$: \[g\mapsto \phi(\delta_g) \qquad (g\in \G).\] In the sequel we often write simply $\phi(g)$ instead of $\phi(\delta_g)$ in this situation. If $\phi = \Phi^\pi_V$ for some representation $\pi$ and $k$-tuple $V$ as in~\eqref{eq:cp-assoc}, then this becomes \begin{equation}\label{eq:matrix-elements} \Phi^\pi_V(\delta_g) = [\langle \pi(g)v_j,v_i\rangle] \qquad (g \in \G). \end{equation} In representation theory, the function on $\G$ given by $\langle \pi(g)v,u\rangle$ is called the \textbf{$(u,v)$-matrix element} of the representation. If a map $\phi:\G\to \rmM_k$ arises this way, then it is a \textbf{positive definite} map (sometimes called a `map of positive type'). This means that it is bounded and satisfies \begin{equation}\label{eq:pdf} \sum_{1 \le i,j\le k}\phi_{ij}(a_i^\ast a_j) = \sum_{g,h \in \G,\,1 \le i,j \le k}\ol{a_{i,h}}a_{j,g}\phi_{ij}(h^{-1}g) \ge 0 \end{equation} for any $a_1$, \dots, $a_k \in \bbC[\G]$. For instance, for the map in~\eqref{eq:matrix-elements}, this holds because the sum in~\eqref{eq:pdf} is just equal to the squared length of the vector $\sum_i\pi(a_i)v_i$. On the other hand, if $\phi$ is any map on $\G$ satisfying~\eqref{eq:pdf}, then another variant of the GNS construction produces a unitary representation of $\G$ to which $\phi$ is associated as in~\eqref{eq:matrix-elements}. Using this, $\phi$ then extends to a completely positive map on the whole of $C^\ast\G$. This result is used most often when $k=1$, and that case can be found in such standard sources as~\cite[Section 3.3]{FolAHA} or~\cite[Section 13.4]{Dix--Cstar}. The proof for larger $k$ is analogous. These results establish a bijection between positive definite maps from $\G$ to $\rmM_k$ and the space $\B(C^\ast \G,\rmM_k)_+$. Under this bijection the weak$^\ast$ topology on $\B(C^\ast \G,\rmM_k)_+$ corresponds to the usual weak$^\ast$ topology of $\ell^\infty(\G;\rmM_k)$, and when restricted to bounded subsets it coincides with the topology of pointwise convergence for functions on $\G$. We carry all of our previous notation and terminology for completely positive maps such as `association', `dilations', and `types' over to this setting in the obvious way. A positive definite map $\phi:\G\to\rmM_k$ is \textbf{unital} if $\phi(e) = I_k$. When working with a countable group $\G$ in the sequel, we often write $\S_k(\G)$ for the space of these unital maps, in view of the identification of this space with $\S_k(C^\ast \G)$ as defined previously.
It is a convex subset of $\ell^\infty(\G;\rmM_k)$, and compact by the Banach--Alaoglu theorem. (A more normal notation for $\S_1(\G)$ would be $\P_1(\G)$, but I have made this choice to emphasize the common nature of these spaces.) A state on $C^\ast \G$ is tracial if and only if it arises from a \textbf{character} of $\G$, meaning a unital positive definite function $\chi$ that satisfies \begin{equation}\label{eq:char} \chi(g^{-1}hg) = \chi(h) \qquad (g,h \in \G). \end{equation} \begin{ex}\label{ex:regular} The function $1_{\{e\}}$ is positive definite, and it is associated to the left regular representation on $\ell^2(\G)$ by the function $\delta_e$. It satisfies~\eqref{eq:char}. We call it the \textbf{regular character} of $\G$, and often denote it by $\chi_{\rm{reg}}$, and its extension to $C^\ast \G$ by $\tau_{\rm{reg}}$. More generally, $\delta_e \otimes I_k$ is the \textbf{regular} $\rmM_k$-valued positive definite function. \qed \end{ex} \begin{ex}\label{ex:subgroup} If $H$ is a subgroup of $\G$, then the function $1_H$ is positive definite. It is associated to the quasi-regular representation of $\G$ on $\ell^2(\G/H)$ by the function $\delta_{eH}$. It is a character if and only if $H$ is normal in $G$. \qed \end{ex} For a free group, we discuss another simple family of positive definite functions with more diverse properties in Example~\ref{ex:Haa-fns} below. \subsection*{\emph{Notes and further references}} The textbook~\cite{PauCB} is dedicated to completely positive and completely bounded maps between operator algebras. The basic results that we need are mostly covered in Chapters 1--6 of that book. In particular, Stinespring's theorem is~\cite[Theorem 4.1]{PauCB}, and Proposition~\ref{prop:pairing-positive} is covered by~\cite[Theorem 6.1]{PauCB} (which allows the greater generality of maps defined on `operator systems', and where the proof given is independent of Stinespring's theorem). Stinespring's theorem is originally from~\cite{Sti55}; the chapter notes in~\cite{PauCB} give a more complete guide to original references. Some more recent uses of completely positive maps in the study of C$^\ast$-algebras, including group C$^\ast$-algebras, can be found in~\cite[Sections 1.2 and 2.5 and Appendix D]{BroOza08}. Besides the references used above, the basic theory of unitary group representations and their relation to the group C$\sp*$-algebra are recounted succinctly in~\cite[Appendices A--C]{BekdelaHarValKaz} or in more detail in~\cite[Chapters 1 and 2]{BekdelaHarUDC}. The reference~\cite[Chapter VI]{DorFelRepI} is also largely focused on groups but admits the generality of arbitrary C$\sp*$-algebras. The general construction of a unitary representation from a positive definite function appears first in work of Gelfand and Raikov~\cite{GelRai43}. It is an analog for groups of the GNS construction for a state on a C$\sp*$-algebra. Similarly, the generalization to $\rmM_k$-valued positive definite functions on a group is a cousin of Stinespring's theorem. It can be found (with a further generalization) in~\cite{Kun66}, or in the standard text~\cite[Theorem 4.8]{PauCB}, where the key ideas are traced back to Naimark. The relationship between positive definite functions on a group and completely positive maps on the group C$\sp*$-algebra is discussed further at the end of~\cite[Chapter 4]{PauCB}. The proofs of all these representation theorems come from the same mould, and for this reason many authors now attach the label `GNS' to all of them. 
Examples are~\cite[Appendix C]{BekdelaHarValKaz} and~\cite[Section I.B]{BekdelaHarUDC}, which recount all this basic theory in some detail in the case $k=1$. However, beware that the versions for groups are not quite special cases of the versions for C$\sp*$-algebras. This is because the notions of `positivity' in their hypotheses are not known to match beforehand. In Stinespring's theorem, the complete positivity of the map is understood with reference to positivity of elements of a C$\sp*$-algebra. On the other hand, a positive definite function on $\G$ defines a linear functional on $\bbC[\G]$ that is positive when applied to elements of the form $a^\ast a$ for $a \in \bbC[\G]$, but these may not be all of the elements of $\bbC[\G]$ that are positive when regarded as elements of $C^\ast \G$. For this reason, it is not clear beforehand that positive definiteness of a function on a group implies complete positivity of the extended map on the group C$\sp*$-algebra. This turns out to be the case, but I do not know a way to see this that does not essentially prove the representation theorem in the process. In addition to~\cite{Dix--Cstar}, group C$\sp*$-algebras appear as a diverse source of examples of C$\sp*$-algebras in more recent references such as~\cite{Dav--Cstar},~\cite{BroOza08} and~\cite{PisOST}. The survey~\cite{Oza13} explains specific connections between group C$\sp*$-algebras and Connes' embedding conjecture. In this reference, Ozawa considers carefully how notions of positivity for various classes of $\sp*$-algebra relate to positivity in C$\sp*$-algebra completions of them. This generalizes the discussion above of the relationship between Naimark's and Stinespring's theorems. The overarching theme here is generalizations of Hilbert's 17th problem, and it has also been widely studied: see~\cite{Helt02} or the references in~\cite[Chapter 3]{BakWoeMCetc}, for example. \section{Comparing completely positive maps} We now return to the study of completely positive maps on a general C$\sp*$-algebra. \subsection*{\emph{Lebesgue decomposition and Radon--Nikodym theorems}} Completely positive maps enjoy various properties analogous to positive Borel measures regarded as positive functionals on a space of continuous functions. Let $\phi:\A\to \rmM_k$ and $\psi:\A\to \rmM_\ell$ be completely positive. Here are some ways they may be related: \begin{itemize} \item if $k=\ell$, then $\psi$ \textbf{dominates} $\phi$, written $\phi \le \psi$, if $\psi - \phi$ is positive definite --- this defines the \textbf{positive definite} ordering on $\B(\A,\rmM_k)_+$ for each $k$; \item $\phi$ is \textbf{absolutely continuous} with respect to $\psi$, written $\phi \ll \psi$, if $\pi_\phi$ is contained in $\pi_\psi^{(\infty)}$; \item $\phi$ and $\psi$ are \textbf{mutually singular}, written $\phi \perp \psi$, if $\pi_\phi$ and $\pi_\psi$ are disjoint. \end{itemize} The second and third items here deliberately suggest the analogy with measures. In these two items we sometimes omit mentioning the `reference' function $\psi$ when it is clear from the context. These definitions lead to analogs of the Lebesgue decomposition and Radon--Nikodym theorem from measure theory. The first of these is obtained by applying Proposition~\ref{prop:Leb-reps} to $\pi_\phi$ and $\pi_\psi$.
\begin{cor}\label{cor:Leb} Given $\phi$ and $\psi$ as above, there is a unique decomposition \begin{equation}\label{eq:Leb} \phi = \phi_{\rm{ac}} + \phi_{\rm{sing}} \end{equation} into positive summands such that $\phi_{\rm{ac}} \ll \psi$ and $\phi_{\rm{sing}} \perp \psi$. \end{cor} \begin{proof} Apply Proposition~\ref{prop:Leb-reps} to the representations $\pi_\phi$ and $\pi_\psi$ to obtain a canonical subspace $M$ of $H_\phi$, and let $P$ be the orthogonal projection of $H_\phi$ onto $M$. Let $x_1$, \dots, $x_k$ be vectors that associate $\phi$ to $\pi_\phi$. Since $P$ commutes with $\pi$, we have \begin{align*} \phi_{ij}(a) &= \langle \pi_\phi(a) x_j,x_i\rangle \\ &= \langle \pi_\phi(a) Px_j,Px_i\rangle + \langle \pi_\phi(a) (1-P)x_j,(1-P)x_i\rangle. \end{align*} The two terms on the right give $\phi_{\rm{ac}}$ and $\phi_{\rm{sing}}$, respectively. They have the desired relationships with $\psi$ because their unique minimal dilations are simply $\pi_\phi^M$ and $\pi_\phi^{M^\perp}$ again, and uniqueness holds because those subrepresentations of $\pi_\phi$ are unique. \end{proof} We refer to~\eqref{eq:Leb} as the \textbf{Lebesgue decomposition} of $\phi$ with respect to $\psi$, again in analogy with measure theory. At some points below our work is made much easier by analysing the Lebesgue decomposition of $\langle \phi,\cdot\rangle$ rather than of $\phi$. The next lemma tells us how to switch between these. \begin{lem}\label{lem:Leb-of-pairing} Let $\phi \in \B(\A,\rmM_k)_+$ and $\psi \in \A^\ast_+$, and let~\eqref{eq:Leb} be the Lebesgue decomposition of $\phi$ with respect to $\psi$. Then the Lebesgue decomposition of $\langle \phi,\cdot\rangle$ with respect to $\psi\otimes \Tr_k$ is \[\langle \phi_{\rm{ac}},\cdot\rangle + \langle \phi_{\rm{sing}},\cdot\rangle.\] \end{lem} \begin{proof} The GNS representation of $\langle\phi,\cdot\rangle$ is equivalent to $(\pi_\phi)_{(k)}$ by Lemma~\ref{lem:dilation-matrix-dilation}, and similarly with $\phi_{\rm{ac}}$ or $\phi_{\rm{sing}}$ in place of $\phi$. The GNS representation of $\psi\otimes \Tr_k$ is equivalent to $\pi_\psi\otimes \rm{mult}_k$ as in Example~\ref{ex:tensor-prod}, and this is quasi-equivalent to $(\pi_\psi)_{(k)}$ as in Example~\ref{ex:matrix-mult}. Now Lemma~\ref{lem:reps-of-matrices} shows that \[\langle \phi_{\rm{ac}},\cdot\rangle \ll \psi\otimes \Tr_k \qquad \hbox{and} \qquad \langle \phi_{\rm{sing}},\cdot\rangle \perp \psi\otimes \Tr_k.\] This completes the proof by the uniqueness of the Lebesgue decomposition. \end{proof} Analogs of the Radon--Nikodym theorem for completely positive maps are more complicated than the analog of the Lebesgue decomposition. They separate into several strands which require different assumptions and allow for the non-commutativity of $\A$ in different ways. For now we need only the simplest of these. It reflects the non-commutativity of $\A$ by invoking the commutant $\pi_\psi(\A)'$. It evades other technical issues by assuming that $k = \ell$ and $\phi \le \psi$; after proving it we can deduce that this is indeed stronger than $\phi \ll \psi$. \begin{prop}\label{prop:RadNik} Assume that $\psi$ is associated to $\pi_\psi$ by $v_1$, \dots, $v_k$. Let $c > 0$. 
Then $\phi \le c\psi$ if and only if there exists $T \in \pi_\psi(\A)'$ such that $0\le T \le c$ and \[\langle \phi,a\rangle = \left\langle \pi_\psi(a)T^{(k)}\left[\begin{array}{c}v_1\\ \vdots \\ v_k\end{array}\right],T^{(k)}\left[\begin{array}{c}v_1\\ \vdots \\ v_k\end{array}\right]\right\rangle \qquad (a \in \rmM_k(\A))\] (using again pairings as in~\eqref{eq:pairing}). If such a $T$ exists, then it is unique. In particular, if $\phi$ is bounded in the positive definite ordering by a multiple of $\psi$, then $\phi$ is associated to $\pi_\psi$, and so $\phi \ll \psi$. \end{prop} \begin{proof} The special case $k=1$ is standard: see, for example,~\cite[Proposition 32.1]{ConOT}. The extension to larger values of $k$ follows by applying that special case to the positive linear functionals $\langle \phi,\cdot\rangle$ and $\langle \psi,\cdot\rangle$ on $\rmM_k(\A)$. We have seen previously how $\pi_\psi$ extends to a representation of this matrix C$\sp*$-algebra on $H_\psi^{(k)}$, and $\langle \psi,\cdot\rangle$ is associated to this representation by the tuple $[v_1,\dots,v_k]^\rm{T}$: this is precisely~\eqref{eq:phi-brackets-associated}. So we can apply the special case to these constructs. The result is an operator $\t{T}$ on $H_\psi^{(k)}$ that commutes with $\pi_\psi(\rmM_k(\A))$, and this has the desired form $T^{(k)}$ by~\cite[Proposition 50.11]{ConOT}. \end{proof} Proposition~\ref{prop:RadNik} also has the following standard consequence. \begin{cor}\label{cor:extreme} An element $\phi$ of $\S_k(\A)$ is an extreme point of that set if and only if $\pi_\phi$ is irreducible. \end{cor} \begin{proof} The case $k=1$ is~\cite[Theorem 32.7]{ConOT}. With Proposition~\ref{prop:RadNik} in hand, the proof in the general case is the same. \end{proof} The issues arising from non-commutativity are somewhat more tractable when comparing a completely positive functional against a \emph{tracial} positive functional. Later, we need a refinement of Proposition~\ref{prop:RadNik} for this setting: it appears as Theorem~\ref{thm:RadNik} below. \subsection*{\emph{Approximate association and weak association}} Let $\pi$ be a separable representation and $\phi \in \B(\A,\rmM_k)_+$. We say that $\phi$ is \textbf{approximately associated} to $\pi$ if it lies in the closure in $\B(\A,\rmM_k)_+$ of the set of maps associated to $\pi$. Since $\A$ is separable, this holds if and only if there is a sequence of $k$-tuples $v_{n,1}$, \dots, $v_{n,k}$ in $H_\pi$ such that \[\Phi^\pi_{v_{n,1},\dots,v_{n,k}}(a) \to \phi(a) \qquad (a\in \A).\] If $\phi$ is unital, and the sequence of tuples $v_{n,1}$, \dots, $v_{n,k}$ witnesses the convergence above, then we can always perturb those tuples slightly so that they are orthonormal (compare, for instance, parts (iv) and (v) of~\cite[Proposition 2.2]{Kec05}). It follows that an element of $\S_k(\A)$ is approximately associated to $\pi$ if and only if it lies in the closure $\ol{\S_k(\pi)}$. The classical notion of weak association is even weaker than approximate association. A unital completely positive map $\phi \in\S_k(\A)$ is \textbf{weakly associated} to a representation $\pi$ if it can be approximated by convex combinations of elements of $\S_k(\pi)$: that is, if $\phi \in \ol{\rm{conv}}\,\S_k(\pi)$. This definition is conventional only in the case $k=1$, but no new ideas are needed if we allow it for larger values of $k$. We can describe the relationship between approximate and weak association in terms of dilations. 
This is because we always have $\ol{\rm{conv}}\,\S_k(\pi) = \ol{\S_k(\pi^{(\infty)})}$, so a functional is weakly associated to $\pi$ if and only if it is approximately associated to $\pi^{(\infty)}$. See, for instance,~\cite[Proposition 2.2]{Kec05}, where this result is formulated for unitary group representations, but the proof given works with only cosmetic changes in the general case. On the other hand, if $\phi$ happens to be an extreme point of $\S_k(\A)$, so its GNS representation is irreducible, then allowing a convex hull makes no difference: such a $\phi$ is weakly associated to another representation $\pi$ if and only if it is approximately associated (see, for instance, the proof of~\cite[Theorem 3.4.10]{Dix--Cstar}). We can compare representations themselves by comparing their sets of approximately or weakly associated completely positive maps. This classical idea is the basis for several important notions in representation theory such as the Fell topology. We return to it in the next chapter. \begin{ex}\label{ex:tempered} When $\A = C^\ast \G$, the left regular representation $\l$ plays a distinguished role in the representation theory of $\A$. A positive definite function $\phi$ on $\G$ is called \textbf{tempered} if it is approximately associated to $\l$. Since the left and right regular representations of $\G$ are equivalent, we could equivalently define temperedness using the right regular representation throughout. Temperedness is more often defined with `weakly associated' in place of `approximately associated', but in the case of $\l$ these definitions coincide: this fact is contained in~\cite[Proposition 18.3.5(a)]{Dix--Cstar}, for example. Tempered positive definite functions on free groups are the key objects in Theorem~\ref{mainthm:tempered}, and they play an important role in much of Part~\ref{part:free}. \qed \end{ex} Now assume that $\phi \in \B(\A,\rmM_k)_+$ and $\psi \in \B(\A,\rmM_\ell)_+$. The next few lemmas compare typical tuples for these two maps if they are related by association or approximate association. \begin{lem}\label{lem:Ad-cty} Suppose that $\phi$ and $\psi$ are related via some $a \in \rmM_{\ell,k}(\A)$ as in equation~\eqref{eq:psi-phi} in Lemma~\ref{lem:Ad-formula}. If $a$ is held fixed, then $\psi$ varies continuously with $\phi$. \end{lem} \begin{proof} For fixed $a$ the map $b\mapsto a^\ast b a$ is norm continuous. \end{proof} \begin{lem}\label{lem:lin-maps} Assume that $\phi$ and $\psi$ are related via $a$ as in~\eqref{eq:psi-phi}. For any neighbourhood $U$ of $\psi$, there is a neighbourhood $V$ of $\phi$ such that \begin{equation}\label{eq:lin-maps} \pi_{(\ell,k)}(a)[\X(\pi,V)] \subset \X(\pi,U) \end{equation} for any representation $\pi$. \end{lem} \begin{proof} Let $H$ be the space of $\pi$. Consider a $k$-tuple $v_1$, \dots, $v_k$ in $H$, and define the $\ell$-tuple $y_1$, \dots, $y_\ell$ by \[[y_1,\dots, y_\ell]^\rm{T} := \pi_{(\ell,k)}(a)[v_1,\dots,v_k]^\rm{T}.\] If these two tuples have types $\phi'$ and $\psi'$, respectively, then $\psi'$ is again related to $\phi'$ as in~\eqref{eq:psi-phi}. Now the existence of $V$ for a given $U$ follows by the continuity given by Lemma~\ref{lem:Ad-cty}. \end{proof} \begin{cor}\label{cor:typ-trans} Assume that $\psi$ is approximately associated to $\pi_\phi$. Then for any neighbourhood $U$ of $\psi$ there is a neighbourhood $V$ of $\phi$ such that \[\X(\pi,V)\ne \emptyset \quad \Rightarrow \quad \X(\pi,U) \ne \emptyset\] for any representation $\pi$.
\end{cor} \begin{proof} Let $\phi$ be associated to $\pi_\phi$ by the cyclic tuple $x_1$, \dots, $x_k$. By cyclicity and Lemma~\ref{lem:unif-cts}, there is some $a \in \rmM_{\ell,k}(\A)$ such that the tuple defined by \[[y_1, \dots, y_\ell]^\rm{T} := \pi_{(\ell,k)}(a)[x_1,\dots,x_k]^\rm{T}\] satisfies $\psi' := \Phi^{\pi_\phi}_{y_1,\dots,y_\ell} \in U$. Now apply Lemma~\ref{lem:lin-maps} to $\phi$, $\psi'$ and $a$. \end{proof} \subsection*{\emph{Diagonal joinings and other joinings}} Let $\phi \in \S_k(\A)$ and $\psi \in \S_\ell(\A)$. Their \textbf{diagonal joining} is the map \[\rm{diag}(\phi,\psi) := \left[\begin{array}{cc} \phi & 0\\ 0 & \psi\end{array}\right],\] which lies in $\S_{k+\ell}(\A)$. Let $K := \{1,\dots,k\}$ and $L := \{k+1,\dots,k+\ell\}$, and for any $(k+\ell)$-by-$(k+\ell)$ matrix $\theta$ write $\theta_{(K)}$ and $\theta_{(L)}$ for the diagonal-block submatrices. The next lemma is a stability property of the diagonal joining. \begin{lem}\label{lem:disjoint-no-join} If $\phi \perp \psi$ and $O$ is a neighbourhood of $\rm{diag}(\phi,\psi)$, then there are neighbourhoods $U$ of $\phi$ and $V$ of $\psi$ such that the following holds: \begin{quote} If $\theta \in \B(\A,\rmM_{k+\ell})_+$ satisfies $\theta_{(K)} \in U$ and $\theta_{(L)} \in V$, then $\theta \in O$. \end{quote} \end{lem} \begin{proof} Since $\A$ is separable, the relevant topologies are first countable. Therefore, if the conclusion fails, then we can find a sequence $(\theta'_n)_{n\ge 1}$ of c.p. maps $\A\to \rmM_{k+\ell}$ such that \begin{equation}\label{eq:blocks-converge} (\theta'_n)_{(K)} \to \phi \quad \hbox{and} \quad (\theta'_n)_{(L)} \to \psi, \end{equation} but such that $\theta'_n \not\in O$ for every $n$. The assumption~\eqref{eq:blocks-converge} confines the sequence $(\theta'_n)_{n\ge 1}$ to the intersection of a closed ball with $\B(\A;\rmM_{k+\ell})_+$, and this is compact by the Banach--Alaoglu theorem. Therefore we may pass to a subsequence and then assume that $\theta'_n$ converges to an element $\theta'$ of $\B(\A;\rmM_{k+\ell})_+$, which cannot equal $\rm{diag}(\phi,\psi)$ because $O$ is open. Now $\pi_{\theta'}$ contains a tuple with type $\phi$ and another with type $\psi$ such that these tuples do not generate orthogonal subrepresentations, contradicting the mutual singularity of $\phi$ and $\psi$. \end{proof} Lemma~\ref{lem:disjoint-no-join} has the following consequence for sets of typical vectors. \begin{lem}\label{lem:pairs} Let $\theta := \rm{diag}(\phi,\psi)$. \begin{enumerate} \item[a.] If $\phi \perp \psi$, then for every neighbourhood $O$ of $\theta$ there are neighbourhoods $U$ of $\phi$ and $V$ of $\psi$ such that \[\X(\pi,O)\supset \X(\pi,U) \times \X(\pi,V)\] for any representation $\pi$. \item[b.] For any neighbourhoods $U$ of $\phi$ and $V$ of $\psi$ there is a neighbourhood $O$ of $\theta$ such that \[\X(\pi,O) \subset \X(\pi,U) \times \X(\pi,V)\] for any representation $\pi$. \end{enumerate} \end{lem} \begin{proof} For part (a), let $U$ and $V$ be given for $O$ by Lemma~\ref{lem:disjoint-no-join}. For part (b), let \[O := \big\{\theta':\ \theta'_{(K)} \in U\ \hbox{and}\ \theta'_{(L)} \in V\big\}.\] This is open because the maps $\theta'\mapsto \theta'_{(K)}$ and $\theta'\mapsto \theta'_{(L)}$ are continuous, and the required containment follows because the type of a concatenated tuple has the types of its two sub-tuples as its diagonal blocks. \end{proof} The restriction to mutually singular positive definite functions in part (a) above is not superfluous. Indeed, if $\phi = \psi$ and $\pi = \pi_\phi$ is finite-dimensional and irreducible, then $\X(\pi,U)$ is nonempty for any neighbourhood $U$ of $\phi$, but $\rm{diag}(\phi,\phi)$ is not necessarily approximately associated to $\pi$, only to $\pi^{(2)}$.
This is analogous to the reasons why sofic entropy may fail to be additive for Cartesian products in ergodic theory~\cite{Aus--soficentadd}. If $\phi \in \S_k(\A)$ and $\psi \in \S_\ell(\A)$ are not mutually singular, then copies of $\pi_\phi$ and $\pi_\psi$ may `sit together' inside a larger representation $\pi$ in a variety of ways. We can keep track of this using another unital completely positive map as follows. Choose tuples $X$ and $Y$ in $H_\pi$ so that $\Phi^\pi_X = \phi$ and $\Phi^\pi_Y = \psi$. Let $M$ be the sum of $\rm{ran}\,X$ and $\rm{ran}\,Y$, and let $m := \dim M$, so this lies between $\max\{k,\ell\}$ and $k+\ell$. Finally, let $W$ be a unitary isomorphism from $\bbC^{(m)}$ to $M$, and let $\theta := \Phi^\pi_W$, so this lies in $\S_m(\A)$. Since $\rm{ran}\,X \subset \rm{ran}\,W$, there is a unitary embedding $U \in \rm{U}(k,m)$ such that $X = WU$, and similarly there is a unitary embedding $V \in \rm{U}(\ell,m)$ such that $Y = WV$. From these identities we obtain \[\phi(a) = X^\ast \pi(a) X = U^\ast \theta(a) U \qquad \hbox{and} \qquad \psi(a) = V^\ast \theta(a)V \qquad (a \in \A).\] To invoke this construction in the sequel, we make the following definition. \begin{dfn}\label{dfn:joining} A \textbf{joining} of $\phi \in \S_k(\A)$ and $\psi \in \S_\ell(\A)$ is a triple $(\theta,U,V)$ with the following properties: \begin{itemize} \item[i.] $\theta \in \S_m(\A)$ for some $m \le k+\ell$; \item[ii.] $U \in \rm{U}(k,m)$ and $V \in \rm{U}(\ell,m)$; \item[iii.] these data satisfy \begin{equation}\label{eq:phi-psi-theta} U^\ast \theta(a) U = \phi(a) \qquad \hbox{and}\qquad V^\ast \theta(a) V = \psi(a)\qquad (a \in \A). \end{equation} \end{itemize} \end{dfn} The term `joining' is not standard in representation theory. It is borrowed from ergodic theory, where it has played a valuable role since being introduced by Furstenberg in~\cite{Fur67}. If $(\theta,U,V)$ is a joining of $\phi$ and $\psi$, then $\pi_\phi$ and $\pi_\psi$ are both contained in $\pi_\theta$. This reverses the construction in the paragraph preceding Definition~\ref{dfn:joining}. For example, $\rm{diag}(\phi,\psi)$ gives a joining if we take $U$ and $V$ to be the injections to the first $k$ and last $\ell$ coordinates in $\bbC^{(k+\ell)}$, respectively. This joining is associated to the coordinate inclusions of $\pi_\phi$ and $\pi_\psi$ into $\pi_\phi \oplus \pi_\psi$. If $\phi$ and $\psi$ are mutually singular, then any copies of $\pi_\phi$ and $\pi_\psi$ inside a larger representation must be orthogonal, so in this case the diagonal joining is the only joining up to a unitary change of variables in $\bbC^{(k+\ell)}$. In ergodic theory, one of Furstenberg's key proposals in~\cite{Fur67} was to take the analog of this property as the \emph{definition} of `disjointness'. \subsection*{\emph{Sums}} Lemma~\ref{lem:pairs} has a useful variant concerning sums of mutually singular positive definite functions. Assume now that $k=\ell$ and let $\theta:= \rm{diag}(\phi,\psi)$ and $\g := \phi + \psi$. \begin{lem}\label{lem:sums} \begin{itemize} \item[a.] If $\theta$ is associated to $\pi_\theta$ by $x_1$, \dots, $x_{2k}$, then $\g$ is associated to $\pi_\theta$ by ${x_1 + x_{k+1}}$, \dots, ${x_k+x_{2k}}$. \item[b.] If $\phi \perp \psi$, then $\theta$ is also associated to $\pi_\g$. \end{itemize} \end{lem} \begin{proof} Part (a) follows by a direct calculation. It relies on the fact that the off-diagonal blocks of $\theta$ are zero.
On the other hand, assume that $\phi \perp \psi$, and suppose that $\g$ is associated to $\pi_\g$ by $y_1$, \dots, $y_k$. The representations $\pi_{\phi}$ and $\pi_{\psi}$ are disjoint by assumption, and they are both contained in $\pi_\g$ by Proposition~\ref{prop:RadNik}. Therefore~\cite[Proposition 5.2.4]{Dix--Cstar} gives a central projection $P$ in $\pi_\g(\A)''$ onto an invariant subspace $M$ so that $\pi_\g^M \cong \pi_\phi$. Since $P$ commutes with $\pi_\g(\A)$ and $P(1-P) = 0$, it follows that the type of the tuple $(1-P)y_1$, \dots, $(1-P)y_k$ is equal to $\g - \phi = \psi$, and moreover that the type of the $(2k)$-tuple $Py_1$, \dots, $Py_k$, $(1-P)y_1$, \dots, $(1-P)y_k$ is $\theta$. This proves (b). \end{proof} \begin{cor}\label{cor:sums} Assume that $\phi \perp \psi$. \begin{enumerate} \item[a.] For every neighbourhood $W$ of $\g$ there are neighbourhoods $U$ of $\phi$ and $V$ of $\psi$ such that \[\X(\pi,W)\supset \X(\pi,U) + \X(\pi,V)\] for any representation $\pi$. \item[b.] For any neighbourhoods $U$ of $\phi$ and $V$ of $\psi$ there is a neighbourhood $W$ of $\g$ such that \[\X(\pi,W) \subset \X(\pi,U) + \X(\pi,V)\] for any representation $\pi$. \end{enumerate} \end{cor} \begin{proof} The set sum of $\X(\pi,U)$ and $\X(\pi,V)$ is the image of their Cartesian product under the operation on tuples that appears in part (a) of Lemma~\ref{lem:sums}. Therefore the two parts of this corollary follow from the respective parts of that lemma together with Lemma~\ref{lem:lin-maps}, which relates typical sets for neighbourhoods of $\theta$ to typical sets for neighbourhoods of $\phi$ and $\psi$ through that operation. \end{proof} \subsection*{\emph{Notes and further references}} Proposition~\ref{prop:RadNik} is essentially as old as the GNS construction: see~\cite[Proposition 2.5.1]{Dix--Cstar} and Dixmier's references for that section. This proposition is a step towards several other `Radon--Nikodym' theorems that hold between positive functionals under additional assumptions, such as for normal functionals on von Neumann algebras. These reflect the non-commutativity of the algebra in different ways. Two early examples are due to Sakai, and are covered in~\cite[Section 1.24]{SakCandW}; another textbook treatment that includes several variants is in~\cite[Sections 7.2--3]{KadRin97}. We meet another variant in Theorem~\ref{thm:RadNik} below, which is special to the case of absolute continuity with respect to a tracial functional. Let $\psi$ be a state on $\A$ and let $\pi := \pi_\psi$ and $H := H_{\pi_\psi}$. By writing out the elements of $\pi^{(\infty)}$ in coordinates, we find that the set of all states $\phi$ with $\phi \ll\psi$ is equal to \[\Big\{\sum_{i\ge 1} \a_i \kappa_i:\ \kappa_i \in \S_1(\pi),\ \a_i \ge 0,\ \sum_{i\ge 1}\a_i = 1\Big\}.\] We can consider this set for any representation $\pi$, ignoring its relationship to $\psi$. It is a convex subset of $\ol{\rm{conv}}\S_1(\pi)$, but in general it is not closed. Some texts on quantum physics call this set the \textbf{folium} of $\pi$. One consequence of Theorem~\ref{thm:RadNik} below is that, if $\pi$ has a cyclic tracial vector, then its folium simply equals $\S_1(\pi)$: in particular, in these examples $\S_1(\pi)$ is already convex (although still not necessarily closed).
The Lebesgue decomposition from Corollary~\ref{cor:Leb} has a more sophisticated characterization in terms of extensions of these maps to the von Neumann completion $\pi_\psi(\A)''$: one can show that $\phi_{\rm{ac}}$ is the largest completely positive map which (i) is bounded above by $\phi$ and (ii) extends to a normal completely positive map on $\pi_\psi(\A)''$. See~\cite[Section 10.1]{KadRin97}, particularly their Theorem 10.1.15(iii) and Proposition 10.1.17, where these results are obtained using an auxiliary construction with the universal representation of $\A$. (Our Corollary~\ref{cor:Leb} is more general in allowing the cases $k \ge 2$ and $\ell \ge 2$, but the differences required in the proofs are cosmetic.) A third alternative approach to this Lebesgue decomposition is possible using Simon's `Lebesgue decomposition' for unbounded quadratic forms~\cite{Sim78}. In ergodic theory, joinings are a very rich class of objects indeed, and have proven valuable across a wide variety of topics. The textbook~\cite{Gla03} develops the whole of ergodic theory from this point of view as far as possible. Joinings also appear briefly in~\cite[Section 10.(C)]{KecGAEGA} to help establish some properties of weak containment of measure-preserving group actions, which is perhaps close in spirit to the present work. Their representation theoretic counterparts are simpler, because one can use orthogonal complements to break up representations. But they still prove to be a convenient way of capturing several constructions in the sequel. \chapter{Approximation and convergence of representations}\label{chap:approx} In this chapter we recall the notion of approximate equivalence of representations, and study natural topologies on the space of approximate equivalence classes. We explain these as examples of abstract Vietoris and lower Vietoris topologies, so we quickly recall the general definitions of those in Section~\ref{sec:prelim-hyperspace}. Then the main theory is laid out in Sections~\ref{sec:approx-equiv} and~\ref{sec:catalog-tops}. The origins of these ideas lie at least as far back as Voiculescu's work in~\cite{Voi76}. More recently, the main precedent is Ab\'ert and Elek's study of related ideas for measure-preserving systems in~\cite{AbeEle11}; see also~\cite{TucDro15} and~\cite{BurKec20}. Those papers introduce a natural topology on the space of approximate equivalence classes and show that it is compact. Ab\'ert and Elek already include variants for unitary representations as well. We include complete proofs of our versions of these results. Firstly, this is in order to present them in the generality of C$\sp*$-algebras, although this does not require any novel ideas. We also take a slightly different approach, avoiding the use of ultralimits from~\cite{AbeEle11,TucDro15}, and leading to a description of the space of approximate equivalence classes as the `catalog' in Definition~\ref{dfn:catalog}. Finally, in Section~\ref{sec:FD-conv} we discuss finite-dimensional representations and various modes of convergence for them. Our approach to these is worth comparing with modes of convergence for sequences of large finite graphs: see the related discussion in~\cite{AbeEle11}. The material in this chapter is not new, but not as standard as the preceding chapter. Readers familiar with~\cite{AbeEle11} may find that skimming this chapter quickly suffices. 
\section{Preliminaries on hyperspaces and Vietoris topologies}\label{sec:prelim-hyperspace} If $T$ is a topological space, then we write $\calH(T)$ for its \textbf{hyperspace}: the space of all nonempty compact subsets of $T$. Unless specified otherwise, this is endowed with the \textbf{Vietoris topology}. This is generated by the \textbf{Vietoris basic sets}: these are the sets \begin{multline*} \V(U_1,\dots,U_k) := \{K \in \calH(T):\ K\subset U_1\cup\cdots \cup U_k\\ \hbox{and}\ K\cap U_i \ne \emptyset\ \hbox{for}\ i=1,2,\dots,k\} \end{multline*} as $U_1$, \dots, $U_k$ range over all open subsets of $T$. The basic properties of the Vietoris topology are covered in~\cite[Ex.s 2.7.20 and 3.12.27]{Eng89}. If $T$ is compact and metrizable, then so is $\calH(T)$. For example, starting from a compact metric for $T$, a compact metric for $\calH(T)$ is given by the resulting Hausdorff metric~\cite[Ex. 4.5.23]{Eng89}. The examples in our applications below fall into this class, but often without any particular canonical choice of metric, so we generally use the more abstract description in terms of Vietoris basic sets. If $T$ is a compact metrizable space, the next lemma gives a convenient criterion for convergence of a sequence in $\calH(T)$. \begin{lem}\label{lem:Vietoris-conv} Let $(K_n)_{n\ge 1}$ be a sequence in $\calH(T)$. It converges to a limit in $\calH(T)$ if and only if every $t \in T$ satisfies one of the following: \begin{itemize} \item[i.] every neighbourhood $U$ of $t$ satisfies $U\cap K_n \ne \emptyset$ for all sufficiently large $n$; \item[ii.] some neighbourhood $U$ of $t$ satisfies $U \cap K_n = \emptyset$ for all sufficiently large $n$. \end{itemize} In this case, \begin{align*} \lim_n K_n &= \{t \in T:\ t\ \hbox{satisfies (i)}\}\\ &= \{\lim_n t_n:\ t_n \in K_n\ \hbox{and}\ (t_n)_{n\ge 1}\ \hbox{converges}\}. \end{align*} \end{lem} \begin{proof} \emph{($\Rightarrow$).}\quad Suppose that $K_n \to K$ in $\calH(T)$. If $t \in K$ and $U$ is a neighbourhood of $t$, then $\V(U,T)$ is a neighbourhood of $K$, and condition (i) asserts that $K_n \in \V(U,T)$ for all sufficiently large $n$. On the other hand, if $t \not\in K$, then, because $T$ is compact and Hausdorff, $t$ has a neighbourhood $U$ such that $\ol{U}\cap K=\emptyset$. It follows that $\V(T\setminus \ol{U})$ is a neighbourhood of $K$, and so $K_n \in \V(T\setminus \ol{U})$ for all sufficiently large $n$, which implies condition (ii). \vspace{7pt} \emph{($\Leftarrow$).}\quad Suppose that (i) and (ii) hold. First, choose a point $t_n \in K_n$ for each $n$, and then let $t$ be a subsequential limit of the sequence $(t_n)_{n\ge 1}$. This point $t$ cannot satisfy condition (ii), so it must satisfy (i). Therefore the set $K$ of points satisfying (i) is nonempty. It is also closed, because its complement is described by condition (ii) as a union of open sets. Therefore $K \in \calH(T)$. It remains to show that $K_n\to K$. Let $\V(U_1,\dots,U_k)$ be a general Vietoris neighbourhood of $K$, and choose $s_i \in K\cap U_i$ for each $i$. Then $U_i$ is a neighbourhood of $s_i$, so condition (i) gives $U_i\cap K_n \ne \emptyset$ for all sufficiently large $n$. On the other hand, the set $L:= T\setminus (U_1\cup \cdots \cup U_k)$ is compact and disjoint from $K$. Condition (ii) provides a neighbourhood of every point in $L$, and extracting an open subcover from these gives an open neighbourhood $V$ of $L$ such that $V\cap K_n = \emptyset$ for all sufficiently large $n$. 
This implies that $K_n \subset U_1\cup \cdots \cup U_k$ for all sufficiently large $n$. Altogether, we have shown that $K_n \in \V(U_1,\dots,U_k)$ for all sufficiently large $n$. \vspace{7pt} \emph{Identification of the limit.}\quad The first formula for $\lim_n K_n$ is already given by the proof of the forward implication above. The second follows from properties (i) and (ii) and the fact that $T$ is first countable. \end{proof} An alternative topology on $\calH(T)$ is the \textbf{lower Vietoris topology}: see, for instance,~\cite[Appendix]{Mic51}. This is generated by the basic open sets of the form \[ \V'(U_1,\dots,U_k) := \{K \in \calH(T):\ K\cap U_i \ne \emptyset\ \hbox{for}\ i=1,2,\dots,k\}\] for some open subsets $U_1$, \dots, $U_k$ of $T$. In the previous notation, this is simply equal to $\V(U_1,\dots,U_k,T)$, so the lower Vietoris topology is weaker than the Vietoris topology. Like the Vietoris topology, it is second countable in case $T$ is compact and metrizable. However, in all but degenerate cases, the lower Vietoris topology is much weaker than the Vietoris topology, and also less well-behaved. Indeed, if $T$ is a Hausdorff space with at least two points, then the lower Vietoris topology is not T$_1$, because for any $K \in \calH(T)$ the lower-Vietoris closure of the singleton $\{K\}$ is the whole of $\calH(K)$, regarded as a subset of $\calH(T)$. On the other hand, since the lower Vietoris topology is weaker than the Vietoris topology, any subset of $\calH(T)$ that is Vietoris-closed is still quasi-compact in the lower Vietoris topology (meaning that every open cover has a finite sub-cover, but the space fails to be Hausdorff~\cite[p132]{Eng89}). A lower Vietoris topology can be used to define the Fell topology on sets of representations, which we recall in Section~\ref{sec:catalog-tops} below. \section{Approximate equivalence and containment}\label{sec:approx-equiv} The next notions are weakenings of containment and equivalence. \begin{dfn}\label{dfn:approx-contain} If $\pi$ and $\rho$ are two separable representations of $\A$, then $\pi$ is \textbf{approximately contained} in $\rho$, written $\pi \lesssim_\rm{a} \rho$, if there is a sequence of unitary operators $U_n:H_\pi \to H_\rho$ such that \[U_n^\ast \rho(a) U_n \stackrel{\rm{WOT}}{\to} \pi(a) \qquad (a \in \A).\] If each of $\pi$ and $\rho$ is approximately contained in the other, then they are \textbf{approximately equivalent}, written $\pi \cong_\rm{a} \rho$. \end{dfn} An introduction to approximate equivalence is given in~\cite[Section 41]{ConOT}. While the results we need about this notion are classical, beware that notation and terminology are somewhat variable across the literature. Approximate equivalence turns out to have several alternative definitions whose strengths initially appear very different. Most of our needs from this story are covered by the next two results. \begin{lem}\label{lem:approx-contain} If $\pi$ and $\rho$ are separable representations, then $\pi \lesssim_\rm{a} \rho$ if and only if $\ol{\S_k(\pi)}\subset \ol{\S_k(\rho)}$ for every $k$. \end{lem} \begin{proof} For the forward implication, we can approximate any element of $\S_k(\pi)$ by elements of $\S_k(\rho)$ by applying the unitary operators $U_n$ witnessing the approximate containment to tuples in $H_\pi$. 
For the reverse implication, we first observe that the spaces $H_\pi$ and $H_\rho$ must have the same dimension, and then we can use the containments $\ol{\S_k(\pi)}\subset \ol{\S_k(\rho)}$ to assemble partial isometries between finite-dimensional subspaces of $H_\pi$ and $H_\rho$ and then extend them to unitary operators. See, for instance, the proof of~\cite[Proposition H.2]{KecGAEGA} and the remark that follows it (this reference is only for group representations, but the method is the same). \end{proof} \begin{thm}\label{thm:approx-equiv} For separable representations $\pi$ and $\rho$, the following are equivalent: \begin{itemize} \item[i.] they are approximately equivalent; \item[ii.] they satisfy $\ol{\S_k(\pi)} = \ol{\S_k(\rho)}$ for every $k$; \item[iii.] there is a sequence of unitary operators $U_n:H_\pi \to H_\rho$ such that \[\|\pi(a) - U_n^\ast \rho(a) U_n\| \to 0 \quad \hbox{for every}\ a\in \A.\] \qed \end{itemize} \end{thm} Having proved Lemma~\ref{lem:approx-contain}, the only non-trivial part of Theorem~\ref{thm:approx-equiv} is that (i) or (ii) implies (iii). This is covered by parts (b) and (c) of~\cite[Theorem 41.12]{ConOT}. (Beware that Conway temporarily uses the notation ``$\cong_\rm{w}$'' for another of the formulations of approximate equivalence in~\cite[Theorem 41.12]{ConOT}, but there it does not mean weak equivalence in the sense used in the present work.) By Theorem~\ref{thm:approx-equiv}, the sequence of closed sets $\ol{\S_k(\pi)}$ for $k\ge 1$ determines the approximate equivalence class of a separable representation $\pi$. Henceforth we often abbreviate that sequence to $\ol{\S_\bullet(\pi)}$, and regard it as a single element of the space \[\prod_{k\ge 1} \calH(\S_k(\A)).\] Our next goal is to describe the image of the map $\pi \mapsto \ol{\S_\bullet(\pi)}$ more explicitly. \begin{dfn}\label{dfn:catalog} The \textbf{catalog} of $\A$ is the set $\cal{Z}\A$ of all $Z_\bullet \in \prod_{k\ge 1}\calH(\S_k(\A))$ that have the following two closure properties: \begin{itemize} \item[i.] if $\phi \in Z_k$ then $\S_\ell(\pi_\phi) \subset Z_\ell$ for every $\ell$; \item[ii.] if $\phi,\psi \in \bigcup_kZ_k$ then they have a joining $(\theta,U,V)$ with $\theta \in \bigcup_k Z_k$ (recall Definition~\ref{dfn:joining} for the term `joining'). \end{itemize} If $\G$ is a discrete group, then its \textbf{catalog} $\cal{Z}\G$ is the copy of the space $\cal{Z}(C^\ast \G)$ obtained by substituting $\S_k(\G)$ for $\S_k(C^\ast \G)$ in the obvious way. \end{dfn} I do not know a standard name for $\cal{Z}\A$, so the term `catalog' is new. \begin{prop}\label{prop:approx-equiv} An element $Z_\bullet$ of $\prod_{k\ge 1}\calH(\S_k(\A))$ is equal to $\ol{\S_\bullet(\pi)}$ for some separable representation $\pi$ if and only if it lies in $\cal{Z}A$. \end{prop} \begin{proof} \emph{($\Rightarrow$).}\quad Let $Z_\bullet := \ol{\S_\bullet(\pi)}$. It satisfies property (i) by the transitivity of approximation association (see Corollary~\ref{cor:typ-trans}). To check property (ii), suppose that $\phi \in \ol{\S_k(\pi)}$ and $\psi \in \ol{\S_\ell(\pi)}$. Then there are sequences of unitary embeddings $X_n:\bbC^{(k)}\to H_\pi$ and $Y_n:\bbC^{(\ell)} \to H_\pi$ such that \[\phi_n(a):= X_n^\ast \pi(a)X_n \to \phi(a) \qquad \hbox{and} \qquad \psi_n(a):= Y_n^\ast\pi(a)Y_n \to \psi(a)\] for every $a \in \A$. 
As described prior to Definition~\ref{dfn:joining}, for each $n$ the pair of embeddings $X_n$ and $Y_n$ can be turned into a joining $(\theta_n,U_n,V_n)$ of $\phi_n$ and $\psi_n$ such that $\theta_n$ is associated to $\pi$. By passing to a subsequence if necessary, we may assume that there is a single value $m$ such that $\theta_n \in \S_m(\pi)$ for every $n$. Then, by compactness, and passing to a further subsequence if necessary, we may assume that $U_n\to U$, $V_n\to V$ and $\theta_n\to \theta$. Then $\theta$ lies in $\ol{\S_m(\pi)} = Z_m$, and $(\theta,U,V)$ is a joining of $\phi$ and $\psi$. \vspace{7pt} \emph{($\Leftarrow$).}\quad Given $Z_\bullet$, we must construct a suitable representation $\pi$. Let $S_k$ be a countable dense subset of $Z_k$ for each $k$, and let $S$ be their union. Enumerate $S$ as $\{\phi_1,\phi_2,\dots\}$, and let $\rho_i = \pi_{\phi_i}$ for each $i$. For notational purposes, let $\pi_0$ be a trivial representation. We now define by recursion a sequence of representations $\pi_i$, $i\ge 1$, that has the following properties: \begin{itemize} \item $\pi_{i+1}$ contains both $\pi_i$ and $\rho_{i+1}$ for each $i\ge 0$; \item each $\pi_i$ has a finite cyclic tuple; \item $Z_\bullet$ contains $\ol{\S_\bullet(\pi_i)}$ for each $i$. \end{itemize} To begin the recursion, let $\pi_1 := \rho_1$. Now suppose that $\pi_1$, \dots, $\pi_i$ have already been constructed. Since $\pi_i$ and $\rho_{i+1}$ both have finite cyclic tuples, and since \[Z_\bullet \supset \ol{\S_\bullet(\pi_i)} \cup \ol{\S_\bullet(\rho_{i+1})},\] applying property (ii) and then property (i) from Definition~\ref{dfn:catalog} gives a new representation $\pi_{i+1}$ with a finite cyclic tuple that contains both $\pi_i$ and $\rho_{i+1}$ and satisfies $Z_\bullet \supset \ol{\S_\bullet(\pi_{i+1})}$. This continues the recursion. To finish, choose for each $i$ an invariant subspace $M_i$ of $H_{\pi_{i+1}}$ such that the subrepresentation $\pi^{M_i}_{i+1}$ is equivalent to $\pi_i$, and define \[\pi:= \pi_1 \oplus \pi_2^{M_1^\perp}\oplus \pi_3^{M_2^\perp}\oplus \cdots.\] This contains an increasing sequence of subrepresentations equivalent to $\pi_1$, $\pi_2$, \dots, and their union is dense in $H_\pi$. It follows by Lemma~\ref{lem:unif-cts} that \[\ol{\S_k(\pi)} = \ol{\bigcup_{i\ge 1}\S_k(\pi_i)} \subset Z_k \qquad \hbox{for each}\ k.\] On the other hand, we also have \[\ol{\S_k(\pi)} \supset \ol{\bigcup_{i \ge 1}\S_k(\rho_i)} \supset Z_k \qquad \hbox{for each}\ k\] by the density of our initial choice of each $S_k$. \end{proof} \subsection*{\emph{Notes and further references}} The first main results about approximate equivalence of representations are due to Voiculescu~\cite{Voi76}, building on some fundamental technical results of Glimm~\cite{Gli60}. See also~\cite{Arv77} for a survey and a different approach that introduced some versatile new techniques. Later accounts such as the one in~\cite{ConOT} generally follow the lines of~\cite{Arv77}. We recall some more of the general theory later in Section~\ref{sec:stiff}, in preparation for some slightly more advanced applications of it. \section{Topologies on the catalog}\label{sec:catalog-tops} The catalog $\cal{Z}\A$ is a subset of $\prod_{k\ge 1}\calH(\S_k(\A))$. We can endow that product space with the product of either the Vietoris topologies or the lower Vietoris topologies, and then restrict that product topology to $\cal{Z}\A$. Having done so, we refer slightly abusively to either the Vietoris or the lower Vietoris topology `on' $\cal{Z}\A$.
These two topologies on $\cal{Z}\A$ have different properties and capture different phenomena. We start with the lower Vietoris topology, which leads to the classical Fell topology on representations themselves. \subsection*{\emph{The lower Vietoris and Fell topologies}} Let $R$ be any set of separable representations, or of equivalence classes of such. Then the map \begin{equation}\label{eq:lower-Vietoris-to-Fell} R\to \cal{Z}\A:\ \big(\ \pi\ \hbox{or}\ [\pi]\ \big)\ \mapsto \ol{\S_\bullet(\pi)} \end{equation} is well-defined, and we can use it to pull the lower Vietoris topology on $\cal{Z}\A$ back to a topology on $R$. Like the lower Vietoris topology itself, this pullback is second countable. This construction is used most often when $R = \hat{\A}$. In that case the resulting topology on $\hat{\A}$ is usually called the \textbf{Fell topology}, so we extend the use of this term to any other choice of $R$ as well. When $R = \hat{\A}$, this topology has many other equivalent characterizations, for instance by pulling back the Jacobson topology on $\rm{Prim}(\A)$, or in terms of weak containment of positive functionals. A full account is given in~\cite[Chapter 3]{Dix--Cstar}, or~\cite[Section 7.2]{FolAHA} gives a quicker overview in the case of a group C$\sp*$-algebra. Some of these characterizations can cease to coincide for other choices of $R$. In these cases, the `Fell topology' in our sense is the `quotient topology' in Fell's own terminology from~\cite{Fel62}. This is because it does still agree with the quotient to equivalence classes of the pointwise weak operator topology on spaces of actual representations. This agreement follows quickly by the methods in~\cite{Fel62} or~\cite[Section 3.5]{Dix--Cstar}, we do not need this alternative definition. Upon restricting to the catalog, we can simplify the family of basic open sets for the lower Vietoris topology. \begin{lem}\label{lem:lower-Vietoris-simplify} For any positive integer $k$ and open subset $U$ of $\S_k(\A)$, let \[\cal{W}(k,U) := \Big\{Z_\bullet \in \prod_{k\ge 1}\calH(\S_k(\A)):\ Z_k\cap U \ne \emptyset\Big\}.\] Then the collection of all subsets of $\cal{Z}\A$ of the form $\cal{Z}\A\cap \cal{W}(k,U)$ is a base for the lower Vietoris topology on $\cal{Z}\A$. \end{lem} \begin{proof} The sets $\cal{W}(k,U)$ are a sub-base for the product of the lower Vietoris topologies on $\prod_{k\ge 1}\cal{H}(\S_k(\A))$ by definition. The point is to show that they actually become a base upon intersecting all sets with $\cal{Z}\A$. To this end, suppose that $Z_\bullet \in \cal{Z}\A$ and $Z_\bullet \in \cal{W}(k,U)\cap \cal{W}(\ell,V)$. Choose a representation $\pi$ so that $Z_\bullet = \ol{\S_\bullet(\pi)}$, and now choose $X \in H_\pi^{(k)}$ and $Y \in H_\pi^{(\ell)}$ so that $\Phi^\pi_X \in U$ and $\Phi^\pi_Y \in V$. Let $V$ be an orthonormal tuple with the same span as $[X,Y]$, and suppose it consists of $m$ vectors where $m\le k+\ell$. Then $\theta:= \Phi^\pi_V \in \S_m(\pi)$, and both $\Phi^\pi_X$ and $\Phi^\pi_Y$ are associated to $\pi_\theta$. Therefore Corollary~\ref{cor:typ-trans} gives a neighbourhood $W$ of $\theta$ such that, if $Z'_\bullet \in \cal{Z}\A$ and $Z'_m\cap W\ne \emptyset$, then also $Z'_k \cap U\ne \emptyset$ and $Z'_\ell\cap V\ne \emptyset$. This shows that \[\cal{Z}\A\cap \cal{W}(k,U) \cap \cal{W}(\ell,V) \supset \cal{Z}\A\cap \cal{W}(m,W) \ni Z_\bullet.\] The analogous result for a larger intersection of sets of the form $\cal{W}(k,U)$ follows by induction, so this completes the proof. 
\end{proof} Pulling this lemma back to a collection $R$ of representations, we find that a base for the Fell topology there is given by the sets of the form \[\{\pi \in R:\ \X(\pi,U)\ne \emptyset\}\] as $k$ ranges over positive integers and $U$ ranges over open subsets of $\S_k(\A)$, and similarly if $R$ is a set of equivalence classes of representations. Like lower Vietoris topologies in general, the Fell topology can fail to be T$_1$. It can even fail to be T$_0$ in case the map~\eqref{eq:lower-Vietoris-to-Fell} is not injective. Unpacking the definitions, we find that for two representations $\rho$ and $\pi$ the singletons $\{[\rho]\}$ and $\{[\pi]\}$ have the same closures in the Fell topology if and only if $\rho \cong_{\rm{a}} \pi$. In case $\rho$ and $\pi$ are irreducible, one can also replace approximate equivalence with weak equivalence here~\cite[Section 3.4]{Dix--Cstar}. \subsection*{\emph{The Vietoris and Fell-global topologies}} With the product of the Vietoris topology on each coordinate factor, the space $\prod_{k\ge 1}\calH(\S_k(\A))$ is compact and Hausdorff. Since we assume $\A$ is separable, this product topology is also metrizable, and hence so is its restriction to $\cal{Z}\A$. It may therefore be understood in terms of sequences. The specialization of Lemma~\ref{lem:Vietoris-conv} gives the following. \begin{lem}\label{lem:typ-not-typ} A sequence $(Z_{n,\bullet})_{n\ge 1}$ in $\cal{Z}\A$ converges to a limit in the Vietoris topology if and only if, for every positive integer $k$, every $\phi \in \S_k(\A)$ satisfies one of the following: \begin{itemize} \item[i.] every neighbourhood $U$ of $\phi$ satisfies $\X(\pi_n,U) \ne \emptyset$ for all sufficiently large $n$; \item[ii.] some neighbourhood $U$ of $\phi$ satisfies $\X(\pi_n,U) = \emptyset$ for all sufficiently large $n$. \end{itemize} In this case, \begin{align*} \lim_n Z_{n,k} &= \{\phi \in \S_k(\A):\ \phi\ \hbox{satisfies (i)}\}\\ &= \{\lim_n\phi_n:\ \phi_n \in Z_{n,k}\ \hbox{and}\ (\phi_n)_{n\ge 1}\ \hbox{converges}\}. \end{align*} \qed \end{lem} Let us use this lemma to verify the following basic property. \begin{lem}\label{lem:catalog-closed} The space $\cal{Z}\A$ is a Vietoris-closed subset of $\prod_{k\ge 1}\calH(\S_k(\A))$. \end{lem} \begin{proof} Let $(Z_{n,\bullet})_{n\ge 1}$ be a sequence in $\cal{Z}\A$ that converges to $Z_\bullet$. We must show that $Z_\bullet$ still satisfies properties (i) and (ii) from Definition~\ref{dfn:catalog}. \vspace{7pt} \emph{Property (i).}\quad Suppose that $\phi \in Z_k$, $\psi \in \S_\ell(\pi_\phi)$, and $U$ is a neighbourhood of $\psi$. Then Corollary~\ref{cor:typ-trans} gives a neighbourhood $V$ of $\phi$ such that \begin{equation}\label{eq:U-V-nonempty} \X(\pi,V)\ne \emptyset \qquad \Rightarrow \qquad \X(\pi,U) \ne \emptyset \end{equation} for any representation $\pi$. Since $\phi \in Z_k$, Lemma~\ref{lem:typ-not-typ} gives a sequence of elements $\phi_n \in Z_{n,k}$ that converges to $\phi$. These elements witness that $\X(\pi_{\phi_n},V)$ is nonempty for all sufficiently large $n$, and so $\X(\pi_{\phi_n},U)$ is also nonempty for all sufficiently large $n$. Therefore $U\cap Z_{n,\ell}$ is also nonempty for all sufficiently large $n$, by~\eqref{eq:U-V-nonempty}. By another appeal to Lemma~\ref{lem:typ-not-typ}, this shows that $\psi \in Z_\ell$. \vspace{7pt} \emph{Property (ii).}\quad Let $\phi \in Z_k$ and $\psi \in Z_\ell$. By Lemma~\ref{lem:typ-not-typ}, there are sequences $\phi_n \in Z_{n,k}$ converging to $\phi$ and $\psi_n \in Z_{n,\ell}$ converging to $\psi$. 
Passing to a subsequence if necessary, there are a fixed integer $m$ and joinings $(\theta_n,U_n,V_n)$ of $\phi_n$ and $\psi_n$ such that $\theta_n \in Z_{n,m}$. By compactness, and passing to a further subsequence if necessary, we may also assume that $\theta_n$, $U_n$ and $V_n$ converge to some limits $\theta$, $U$ and $V$. These limits satisfy $\theta \in Z_m$ by Lemma~\ref{lem:typ-not-typ}, and they define a joining of $\phi$ and $\psi$. \end{proof} Thus, with its Vietoris topology, the space $\cal{Z}\A$ is again compact and metrizable. Compare~\cite[Theorem 4]{AbeEle11} and~\cite[Theorem 5.1]{TucDro15}. By comparing Lemma~\ref{lem:lower-Vietoris-simplify} with the definition of the Vietoris topology, we obtain a counterpart of that lemma for this topology. \begin{lem}\label{lem:Vietoris-simplify} Consider the sets that have the form \[\big\{Z_\bullet \in \cal{Z}\A:\ Z_\ell \subset O\ \hbox{and}\ Z_k\cap U \ne \emptyset\big\}\] for some positive integers $k$ and $\ell$ and some open subsets $O$ of $\S_\ell(\A)$ and $U$ of $\S_k(\A)$. These are a base for the Vietoris topology of $\cal{Z}\A$. \end{lem} \begin{proof} For any $\ell$ and $O$ as above, let \[\cal{W}'(\ell,O) := \Big\{Z_\bullet \in \prod_{k\ge 1}\calH(\S_k(\A)):\ Z_\ell \subset O\Big\}.\] Now let $\ell_1$ and $\ell_2$ be positive integers and consider open subsets $O_1$ of $\S_{\ell_1}(\A)$ and $O_2$ of $\S_{\ell_2}(\A)$. Let $\ell:= \max\{\ell_1,\ell_2\}$ and let $P$ and $Q$ be any unitary embeddings of $\bbC^{(\ell_1)}$ and $\bbC^{(\ell_2)}$ into $\bbC^{(\ell)}$, respectively. Finally, let \[O := \{\phi \in \S_\ell(\A):\ P^\ast \phi P \in O_1\ \hbox{and}\ Q^\ast \phi Q \in O_2\}.\] Now a manipulation using the axioms of the catalog shows that \[\cal{Z}\A\cap \cal{W}'(\ell,O) \subset \cal{Z}\A \cap \cal{W}'(\ell_1,O_1)\cap \cal{W}'(\ell_2,O_2).\] Arguing as for Lemma~\ref{lem:lower-Vietoris-simplify}, and using the fact already proved in that lemma, the present lemma follows as well. \end{proof} If $R$ is any set of separable representations or their equivalence classes, then we can pull the Vietoris topology from $\cal{Z}\A$ back to $R$ through the map~\eqref{eq:lower-Vietoris-to-Fell}. This is essentially the definition of the topologies (on measure-preserving actions or representations) given in~\cite{AbeEle11,TucDro15}. I do not know a standard term for this topology, so we call it the \textbf{Fell-global topology}. We refer to convergence of a sequence in this topology as \textbf{Fell-global convergence}. This is one of the basic modes of convergence that we study for sequences of finite-dimensional representations later. The following notion makes sense even if a sequence $(\pi_n)_{n\ge 1}$ of representations does not Fell-global converge. \begin{dfn}\label{dfn:asymp-assoc} A completely positive map $\phi$ is \textbf{asymptotically associated} to $(\pi_n)_{n\ge 1}$ if every neighbourhood $O$ of $\phi$ satisfies $\X(\pi_n,O)\ne \emptyset$ for infinitely many $n$. \end{dfn} Suppose that $\phi \in \B(\A,\rmM_k)_+$. Since $\A$ is separable, the weak$^\ast$ topology on bounded subsets of $\B(\A,\rmM_k)$ is first countable.
As a result, $\phi$ is asymptotically associated to $(\pi_n)_{n\ge 1}$ as above if and only if there are a subsequence $n_1 < n_2 < \dots$ and tuples $v_{i,1}$, \dots, $v_{i,k}$ in $H_{\pi_{n_i}}$ such that \[\Phi^{\pi_{n_i}}_{v_{i,1},\dots,v_{i,k}} \to \phi.\] If the sequence $(\pi_n)_{n\ge 1}$ Fell-global converges, then the limit of $\ol{\S_\bullet(\pi_n)}$ is precisely the set of maps asymptotically associated to $(\pi_n)_{n\ge 1}$, by Lemma~\ref{lem:typ-not-typ}. \subsection*{\emph{Notes and further references}} For a locally compact group $\G$, Godement defined a topology on the unitary dual $\hat{\G}$ in~\cite{God48}, and showed that it corresponds to the Jacobson topology on $\rm{Prim}(\G)$. His definition is in terms of the weakly associated sets of positive definite functions, and he also defined the relation of weak containment on representations of $\G$ in the process. For a general C$\sp*$-algebra $\A$, Fell then introduced his topology on $\hat{\A}$ and related it to weak containment in~\cite{Fel60}. He studied its generalizations to larger classes of representations in~\cite{Fel62}, for the purpose of applying it to induced representations. Thorough textbook accounts of this theory are given in~\cite[Appendix F]{BekdelaHarValKaz} (for group representations) and~\cite[Chapter VII]{DorFelRepI} (for C$\sp*$-algebras); note that the latter reference calls the resulting topology on a space of representations the `regional topology'. The terms `approximate association' and `approximate containment' are not standard. I have chosen them to make the connection with `approximate equivalence' (which is a standard term). Nevertheless, the idea of approximate containment does have a well-established place in the literature: see, for instance, the way that Doran and Fell initially define the `regional topology' in~\cite[Subsection VII.1]{DorFelRepI}. Another convention is to refer to our approximate containment as `weak containment in the sense of Zimmer': see, for instance,~\cite{BekdelaHarValKaz,KecGAEGA,BekdelaHarUDC,AbeEle11}. Zimmer does indeed use the term `weak containment' for what we call `approximate containment' in~\cite[Definition 7.3.5]{Zim84}, which differs from Godement and Fell's usage in general. However, the history seems murky. In~\cite{Zim84}, Zimmer himself attributes this definition to Fell's original papers. Zimmer's formulation refers to the types of finite tuples in a way that appears only inside the proofs in~\cite{Fel62} (see~\cite[Lemma 1.3]{Fel62}). But Zimmer also elides Fell's own more careful discussion of how Godement's `weak containment' differs from his topologies when one allows representations that are not necessarily irreducible. \section{Almost periodic sequences}\label{sec:FD-conv} In the next chapter we begin to study approximations of general representations by sequences of finite-dimensional ones. It helps to have a more compact terminology for these. \begin{dfn}[Almost periodic sequence]\label{dfn:AP} An \textbf{almost periodic} (`\textbf{AP}') \textbf{sequence} for $\A$ is any sequence of finite-dimensional representations of $\A$ whose dimensions tend to $\infty$. \end{dfn} Fix an AP sequence $\bspi = (\pi_i)_{i\ge 1}$. The remainder of Part~\ref{part:general} is largely concerned with two issues: \begin{itemize} \item What is the asymptotic behaviour of $\S_\bullet(\pi_i)$ as $i\to\infty$?
\item If $\phi \in \B(\A;\rmM_k)_+$ is asymptotically associated to $(\pi_i)_{i\ge 1}$, how `large' is the set of approximately $\phi$-typical tuples as $i\to\infty$? \end{itemize} A good answer to the first question could be showing that the sequence $(\pi_i)_{i\ge 1}$ approximately converges to a particular limit. A precise meaning for the second question is elaborated in Chapter~\ref{chap:AP}. In the rest of this section we discuss three different possible modes of convergence for an AP sequence $\bspi = (\pi_i)_{i\ge 1}$. The first two are Fell and Fell-global convergence from Section~\ref{sec:catalog-tops}. For finite-dimensional representations, a third natural mode of convergence is also available. \begin{dfn} If $\pi_n$ is a $d_n$-dimensional representation for each $n$ and $\tau$ is a tracial state on $\A$, then the AP sequence $(\pi_n)_{n\ge 1}$ \textbf{trace converges} to $\tau$ if $\tr_{d_n}\circ \pi_n \to \tau$ in the weak$^\ast$ topology. \end{dfn} Trace convergence is rather different from the other two modes, but the phenomenon of measure concentration does give rise to a connection from the former to the latter: see Corollary~\ref{cor:char-and-weak-global} below. (That result is the main reason we insist that an AP sequence should have dimensions tending to $\infty$.) All three of the modes of convergence above appear naturally in explorations of how the finite-dimensional representations are distributed among all representations in either topology, and have been explored widely in previous works, although our terms `Fell-global' and `trace convergence' are not standard. We next recall a few aspects of this story. First, a group $\G$ is called \textbf{maximally almost periodic} if it has an AP sequence $(\pi_i)_{i\ge 1}$ that separates the elements of the group. For finitely generated groups, this property turns out to be equivalent to residual finiteness. Maximally almost periodic groups were introduced by von Neumann~\cite{vonNeu34} and covered by Weil in~\cite[Chapter VII]{WeilIGT}. More modern introductions to them are given in~\cite[Sections 16.4--5]{Dix--Cstar}, where they are called `injectable' groups, and in~\cite[Subsection 4.C.b]{BekdelaHarUDC}. See also~\cite[Section 6.4.2]{BroOza08} for a connection from maximal almost periodicity to Kirchberg's `factorization property'. If $\A = C^\ast \G$ where $\G$ is countable and discrete, then an AP sequence that trace converges to $\tau_{\rm{reg}}$ is called \textbf{asymptotically regular}. Such an AP sequence is necessarily separating. The reverse implication can fail, but by forming running tensor products and direct sums one can always create an asymptotically regular AP sequence from a separating one. So a group has an asymptotically regular AP sequence if and only if it is residually finite, by the result recalled above. A general C$\sp*$-algebra is called \textbf{residually finite dimensional} if it has a faithful family of finite-dimensional representations. This is equivalent to the subset $\hat{\A}_\rm{fd}$ of finite-dimensional irreducible representations being dense in $\hat{\A}$ for the Fell topology. For a group C$\sp*$-algebra, this is strictly stronger than maximal almost periodicity, and it is not known to hold for many groups. One important family of examples is free groups, for which this property follows from work of Lubotzky and Shalom~\cite{LubotSha04}. In fact, they prove the even stronger result that representations induced by finite permutation actions are dense in the Fell topology. 
By an extension of this argument one can actually approximately `count' the finite-permutation actions that weakly approximate a given measure-preserving free-group action. This discovery emerges independently from Bowen's works~\cite{Bowen10free,Bowen10c} on annealed sofic entropy (called the `f-invariant' in those references; see the discussion of terminology in~\cite{AusBowShr}). The survey~\cite{BurKec20} goes over this story more carefully. The analog of this counting is one of the uses to which we can put our notion of annealed AP entropy later: see Corollary~\ref{cor:exp-typ}. The stronger notion of Fell-global convergence leads to the class of `\textbf{matricial field}' (`\textbf{MF}') C$\sp*$-algebras introduced by Blackadar and Kirchberg in~\cite{BlaKir97}. To be precise, if $\bspi$ Fell-global converges to $\pi$, then the composition \[\A \stackrel{(\pi_1,\pi_2,\dots)}{\longrightarrow} \prod_{i\ge 1}\rmM_{d_i} \stackrel{\rm{quotient}}{\longrightarrow} \prod_{i\ge 1}\rmM_{d_i} \Big\slash \big\{(A_i)_{i\ge 1}:\ \|A_i\|\to 0\big\}\] has kernel equal to $\ker \pi$, and so witnesses that $\pi(\A)$ is MF. General results connecting the MF property to other aspects of the taxonomy of C$\sp*$-algebras are recounted in~\cite{Bro--QDsurvey} and~\cite[Sections 11.1--2]{BroOza08}; see also the papers~\cite{RaiSch19} and~\cite{EbaJab--MF}. Many specific examples of C$\sp*$-algebras are known to be MF or have another related property such as quasidiagonality. Several of these are full or reduced group C$\sp*$-algebras for different classes of groups, including examples in~\cite{CarDadEck13,TikWhiWin17,RaiSch19,Sch--findimapp}. Those works give more careful descriptions and more complete sets of references. Several of the results mentioned above are special to certain classes of groups such as the residually finite groups. However, the modes of convergence above can be applied to many more groups by requiring that the group law hold only `in the limit'. To be specific, let $\G$ be any countable group, and write it as $F/N$ for some free group $F$ and normal subgroup $N$. Now we can look for finite-dimensional representations of $F$ that trace converge to $1_N$ rather than finite-dimensional representations of $\G$ that trace converge to $1_{\{e\}}$. This allows considerable extra flexibility, because those finite-dimensional representations of $F$ need not have trivial restriction to $N$ until we take their limit. Allowing convergence in this sense, the availability of finite-dimensional approximants to the regular representation of $\G$ is equivalent to $\G$ being what R\u{a}dulescu called `hyperlinear': see, for instance, the original paper~\cite{Rad08} or the introductions in~\cite{KwiPes13} or~\cite{CapLup15}. In this way, free groups are essentially universal among all countable groups for the purposes of the present work, since the desired convergence for other groups can always be captured by choosing the right limit character over the free group. For this reason, results that are formulated for free groups, including the results of Part~\ref{part:free} below, can still lead to new understanding of other classes of groups or other C$\sp*$-algebras entirely. Focusing on free groups in particular, we arrive naturally at Voiculescu's theory of `free probability' and the use of probabilistic techniques.
For example, if $\G$ itself is freely generated by $\{s_1,\dots,s_r\}$, and $\bspi = (\pi_n)_{n\ge 1}$ is an AP sequence for it, then $\bspi$ is asymptotically regular if and only if the tuples of generators \[\pi_n(s_1),\dots,\pi_n(s_r)\in \rm{U}(d_n) \qquad (n\ge 1)\] are \textbf{asymptotically free} in Voiculescu's sense: see the survey~\cite{Voi02--survey} or textbooks such as~\cite{VoiDykNic92,HiaPetSL} or~\cite[Chapter 5]{AndGuiZei--book}. One of the basic results of free probability theory asserts that `randomly chosen' $n$-dimensional representations are asymptotically free with high probability as $n\to\infty$: see Theorems~\ref{thm:asymp-free1} and~\ref{thm:asymp-free2} below. Beyond this, essentially following~\cite{CapDon-Mar07,ColMal14}, an AP sequence $\bspi = (\pi_n)_{n\ge 1}$ for a free group $\G$ is called \textbf{strongly asymptotically free} if it converges Fell-globally to the regular representation. As explained above, the existence of such an AP sequence implies that the reduced C$\sp*$-algebra of the free group is MF. The known proofs that such sequences exist for free groups are probabilistic, starting with the result of Collins and Male~\cite{ColMal14} for uniform random representations (which is based on an analogous result for the Gaussian unitary ensemble by Haagerup and Thorbj\o rnsen~\cite{HaaTho05}). This is effectively our own Theorem~\ref{mainthm:tempered}, which we prove in a new way using entropy arguments in Chapter~\ref{chap:tempered}. Other examples come from random permutation representations, as shown more recently by Bordenave and Collins~\cite{BordCol19}. For any countable group $\G$, if an AP sequence $\bspi$ Fell-global converges to the left regular representation $\l$, then any subsequential limit of the sequence $\tr_{d_n}\circ \pi_n$ in the compact space $\S_1(\G)$ extends to a tracial state on the reduced C$\sp*$-algebra $C_{\rm{red}}^\ast \G := C^\ast(\l)$. In case $C_{\rm{red}}^\ast \G$ has only the regular tracial state, it follows that $(\pi_n)_{n\ge 1}$ must also trace converge to the regular tracial state. This property is quite widespread. For free groups it is a classical result of Powers~\cite{Pow75}, so strong asymptotic freeness is indeed stronger than asymptotic freeness. For other groups, the definitive recent work~\cite{BreKalKenOza17} explains the story in full. In Section~\ref{sec:MF} we use our main results below to give a probabilistic proof that a large class of other representations of free groups besides the regular representation also generate MF C$\sp*$-algebras: see Proposition~\ref{prop:MF}. Another famous open question in a similar spirit is Voiculescu's proposed variational principle from~\cite{Voi02} between free entropy and its topological version. However, that principle would apply in a different regime from our theory, and I doubt the methods below can shed light on it. \chapter{Almost periodic entropy}\label{chap:AP} \section{Definition and first properties} We now arrive at the first main definition of this work. Let $\A$ be a separable C$\sp*$-algebra. Fix an AP sequence $\bspi = (\pi_n)_{n\ge 1}$, a positive integer $k$, and an element $\phi$ of $\B(\A,\rmM_k)_+$. Let $d_n$ be the dimension of $\pi_n$ for each $n$. Recall the normalizing constants $c(k,d)$ from Proposition~\ref{prop:int-form}.
\begin{dfn}\label{dfn:APent} The \textbf{almost periodic} (`\textbf{AP}') \textbf{entropy of $\phi$ along $\bs{\pi}$} is the quantity \begin{equation}\label{eq:APent} \rmh_{\bspi}(\phi) := \inf_{O \ni \phi} \limsup_{n \to\infty} \frac{1}{d_n}\log \frac{\vol_{2kd_n}\X(\pi_n,O)}{c(k,d_n)}, \end{equation} where the infimum runs over all neighbourhoods of $\phi$. \end{dfn} (Recall Definition~\ref{dfn:typical} for the set $\X(\pi_n,O)$ of $O$-typical vectors. These sets appear frequently throughout the sequel.) The insertion of $c(k,d_n)$ into~\eqref{eq:APent} is a normalization choice. This particular choice seems to result in the lightest notation later. It makes for a natural `zero' in the scale that we use for entropy: see Proposition~\ref{prop:max-ent} below. The division by $d_n$ outside the logarithm in~\eqref{eq:APent} is another normalization choice, and is somewhat arbitrary. One could use $2d_n$ instead, but I have preferred to emphasize complex rather than real dimension. Another natural choice would be $kd_n$, the complex dimension of the ambient space containing $\X(\pi_n,O)$. However, in Part~\ref{part:free} this would introduce an extra factor of $k$ into the rates of all our large deviations principles, so I have avoided this as well. We use `$\limsup$' in Definition~\ref{dfn:APent} to allow for possible non-convergence. This matches our earlier choice to allow arbitrary subsequences in the definition of asymptotic association (Definition~\ref{dfn:asymp-assoc}). Having made no extra assumptions on $\bspi$, there is no reason why using `$\liminf$' should give the same value. Indeed, one would expect this to fail in case either (i) the sequence of tracial functionals $\tr_{d_n}\circ \pi_n$ has more than one subsequential limit or (ii) a positive definite function $\phi$ is asymptotically associated to some subsequence of $\bspi$ but not to some other subsequence. However, once we account for these two possibilities, we find that using `$\limsup$' or `$\liminf$' does give the same quantity in~\eqref{eq:APent}. This is a consequence of (b) of Corollary~\ref{cor:det-form} below. Anticipating this, a number of the results below study AP entropy under an assumption that the functionals $\tr_{d_n}\circ \pi_n$ converge. We next establish the first basic properties of AP entropy. \begin{lem} For any $\bspi$ and $k$, the function $\rmh_{\bspi}$ is upper semicontinuous on $\B(\A,\rmM_k)_+$. \end{lem} \begin{proof} This holds because $\rmh_{\bspi}(\phi)$ is an infimum of a function of open sets applied to the neighbourhoods of $\phi$. \end{proof} \begin{lem}\label{lem:e-upper-bd} We always have $\rmh_{\bspi}(\phi) \le \rmH(\phi(e))$, where $\rmH$ is the log-determinant entropy from Section~\ref{sec:log-det-ent-etc}. \end{lem} \begin{proof} If $O$ is any neighbourhood of $\phi(e)$ in $\rmM_{k+}$, then the set \[U := \{\psi \in \B(\A,\rmM_k)_+:\ \psi(e) \in O\}\] is a neighbourhood of $\phi$ in $\B(\A,\rmM_k)_+$. If $\pi$ is a $d$-dimensional representation, then we obtain \[\X(\pi,U) = \{V = [v_1,\dots,v_k]^\rm{T} \in (\bbC^{(d)})^{(k)}:\ V^\ast V \in O\}.\] As we shrink $O$ around $\phi(e)$, the volume of this set is controlled by part (b) of Theorem~\ref{thm:types-1}. \end{proof} Lemma~\ref{lem:e-upper-bd} is analogous to the inequality between the entropy of a partition and the entropy rate of the generated process under a measure-preserving transformation: see, for instance,~\cite[Theorem 4.12(i)]{Walters--book}. 
Later we also characterize the cases of equality: see Proposition~\ref{prop:max-ent} below. For a given AP sequence $\bspi = (\pi_n)_{n\ge 1}$, the property of asymptotic association to $\bspi$ is insensitive to changing from one completely positive map to another one which is approximately equivalent. However, the actual values of AP entropy are much more sensitive. This is an important point in our story where expectations from ergodic theory could lead one astray. In ergodic theory, one of the most essential properties of sofic entropy is its independence of the choice of generating observable~\cite{Bowen10}, and hence its invariance under isomorphism of abstract measure-preserving systems. The analog of this for AP entropy is false. Even if we restrict attention to single cyclic unit vectors in a representation $\pi$, the resulting values of AP entropy can vary greatly. Indeed, rather than invariance, AP entropy enjoys a general transformation law when one cyclic tuple is exchanged for another: see Corollary~\ref{cor:AP-ent-transf-law} and the paragraphs that follow it. In these respects, AP entropy is really more like a `differential' version of sofic entropy than the standard discrete version. Even for single measure-preserving transformations, differential entropy rate is not an isomorphism-invariant: see the explorations in~\cite{HamParTho08}. In some cases, proofs about AP entropy are much easier to digest in the special case $k=1$, if only because the notation is lighter. The next result lets us make this simplification without losing any generality. Its main benefits are felt in the next two chapters. \begin{prop}\label{prop:AP-for-pairing} Let $\phi\in\B(\A,\rmM_k)_+$ be completely positive, let $\bspi = (\pi_n)_{n\ge 1}$ be an AP sequence, and let $\bspi_{(k)} := ((\pi_n)_{(k)})_{n\ge 1}$. \begin{itemize} \item[a.] If $\tr_{d_n}\circ \pi_n\to \tau$, then $\tr_{kd_n}\circ (\pi_n)_{(k)}\to \tau \otimes \tr_k$. \item[b.] If $\phi$ is asymptotically associated to $\bspi$, then $\langle \phi,\cdot\rangle$ is asymptotically associated to $\bspi_{(k)}$. \item[c.] We have $\rmh_{\bspi}(\phi) = k\cdot \rmh_{\bspi_{(k)}}(\langle \phi,\cdot\rangle)$. \end{itemize} \end{prop} \begin{proof} Since $\tr_{kd_n} = \tr_{d_n}\otimes \tr_k$, part (a) is an assertion of the continuity of forming tensor products. It can be checked entry-wise for an element of $\rmM_k(\A)$. Parts (b) and (c) follow from the relation~\eqref{eq:X-and-X}. In the first place, this shows that $\X(\pi_n,O)$ is nonempty if and only if $\X((\pi_n)_{(k)},\t{O})$ is nonempty. Secondly, taking volumes, normalizing, and combining with~\eqref{eq:ckn-asymp}, it gives the asymptotic relation \begin{align*} \frac{\vol_{2kd_n}\X(\pi_n,O)}{c(k,d_n)} &= \frac{k^{kd_n}\cdot \vol_{2kd_n}\X((\pi_n)_{(k)},\t{O})}{c(k,d_n)} \\ &= e^{o(n)}\cdot \frac{\vol_{2kd_n}\X(\pi_n,O)}{c(1,kd_n)}. \end{align*} Taking logarithms, normalizing (by $d_n$ on the left and $kd_n/k$ on the right), and letting $n\to\infty$, this turns into the desired identity. \end{proof} If $\A = C^\ast \G$, then $\rmM_k(\A)$ is not a group C$\sp*$-algebra in a canonical way. The simplification offered by Proposition~\ref{prop:AP-for-pairing} is one reason why the work in Part~\ref{part:general} is actually easier if we allow general C$\sp*$-algebras. 
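
Before moving on to the alternative formula of the next section, it may help to see the kind of quantity appearing in Definition~\ref{dfn:APent} in a toy computation. The following sketch is purely illustrative and plays no role in the formal development: it takes $k=1$ and a single diagonal unitary with equidistributed eigenvalues, and it estimates by Monte Carlo the proportion of uniformly random unit vectors whose vector state takes a prescribed value on that unitary up to a tolerance $\eps$. This proportion is a sphere-measure stand-in for the normalized volumes in~\eqref{eq:APent} (the precise relation between the two normalizations is the subject of the next section), and testing closeness on a single element is only a crude substitute for a genuine neighbourhood. All names in the snippet are ad hoc, and it assumes NumPy is available.
\begin{verbatim}
import numpy as np

def random_unit_vectors(d, n_samples, rng):
    # Uniform points on the unit sphere of C^d, via normalized complex Gaussians.
    z = rng.standard_normal((n_samples, d)) + 1j * rng.standard_normal((n_samples, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def typical_fraction(d, target, eps, n_samples=100_000, seed=0):
    # pi_d(u) = diagonal unitary with equidistributed eigenvalues, so tr_d(pi_d(u)) -> 0.
    rng = np.random.default_rng(seed)
    eigs = np.exp(2j * np.pi * np.arange(d) / d)
    x = random_unit_vectors(d, n_samples, rng)
    vector_states = np.sum(eigs * np.abs(x) ** 2, axis=1)   # <pi_d(u) x, x> for each sample
    return float(np.mean(np.abs(vector_states - target) < eps))

for d in (10, 20, 40):
    frac = typical_fraction(d, target=0.5, eps=0.1)
    rate = np.log(max(frac, 1e-12)) / d   # crude analogue of (1/d) log of the measure
    print(d, frac, rate)
\end{verbatim}
For targets far from the limiting trace the event is exponentially rare, so a moderate number of samples may record no hits at all; the guard inside the logarithm prevents an error in that case, but the reported rate should then not be over-interpreted.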
\section{Alternative formula using orthonormal tuples} If the type of a tuple of vectors in some representation is unital, then the vectors themselves must be orthonormal, because the value of the map at $e$ is their Gram matrix. This is still approximately true if the type is only close to unital. Applying an argument of this kind inside the integrals of Proposition~\ref{prop:int-form} gives the following. \begin{lem}\label{lem:normalized} Let $\phi \in \B(\A,\rmM_k)_+$, assume that $\phi(e)$ is nonsingular, and define an element of $\S_k(\A)$ by \begin{equation}\label{eq:phi1-dfn} \phi_1(a) := \phi(e)^{-1/2}\cdot \phi(a)\cdot \phi(e)^{-1/2} \qquad (a \in \A). \end{equation} Let $\eps > 0$. If $O$ and $O_1$ are neighbourhoods of $\phi$ and $\phi_1$, respectively, then there are a positive constant $C$ and smaller respective neighbourhoods $U$ and $U_1$ such that \begin{multline}\label{eq:normalized} C\cdot (e^{-\eps}\det \phi(e))^{d-k}\cdot m_{\rm{U}(k,d)}\X(\pi,U_1) \le \frac{\vol_{2kd}\X(\pi,U)}{c(k,d)} \\ \le C\cdot (e^\eps \det \phi(e))^{d-k}\cdot m_{\rm{U}(k,d)}\X(\pi,U_1) \end{multline} for any $d\ge k$ and any $d$-dimensional representation $\pi$. \end{lem} \begin{proof} In view of the homeomorphism in~\eqref{eq:polar-decomp-map}, we can choose neighbourhoods $U_1$ of $\phi_1$ in $\S_k(\A)$ and $U_2$ of $\phi(e)$ in $\rmM^\circ_{k+}$ such that $U_1 \subset O_1$ and such that the following two properties hold: \begin{itemize} \item[i.] every $Q \in U_2$ is invertible and satisfies $e^{-\eps}\det \phi(e) < \det Q < e^\eps \det\phi(e)$; \item[ii.] the set \[U := \big\{\psi \in \B(\A,\rmM_k)_+:\ \psi(e) \in U_2\ \hbox{and}\ \psi(e)^{-1/2}\cdot \psi\cdot \psi(e)^{-1/2} \in U_1\big\}\] is a neighbourhood of $\phi$ and is contained in $O$. \end{itemize} Property (ii) implies that $\X(\pi,U) \subset \rm{GL}(k,d)$, so Proposition~\ref{prop:int-form} gives \begin{align*} &\vol_{2kd}\X(\pi,U) \\ &= c(k,d)\int_{\rmM^\circ_{k+}} (\det Q)^{d-k} \int_{\rm{U}(k,d)} 1_{\X(\pi,U)}(VQ^{1/2})\ dm_{\rm{U}(k,d)}(V)\ d\vol_{k^2}(Q)\\ &= c(k,d)\cdot m_{\rm{U}(k,d)}\X(\pi,U_1)\cdot \int_{U_2} (\det Q)^{d-k}\ d\vol_{k^2}(Q). \end{align*} Re-arranging and using property (i), this implies that \begin{multline*} (e^{-\eps}\det \phi(e))^{d-k}\cdot \vol_{k^2} U_2\cdot m_{\rm{U}(k,d)}\X(\pi,U_1) \le \frac{\vol_{2kd}\X(\pi,U)}{c(k,d)} \\ \le (e^{\eps}\det \phi(e))^{d-k}\cdot \vol_{k^2} U_2\cdot m_{\rm{U}(k,d)}\X(\pi,U_1). \end{multline*} This is~\eqref{eq:normalized} with $C = \vol_{k^2}U_2$, which is positive because $U_2$ is open and nonempty. \end{proof} As a result of this lemma, the entropy of any completely positive map can be reduced to an estimate using the measures $m_{\rm{U}(k,n)}$ by normalizing as in~\eqref{eq:phi1-dfn}. \begin{cor}\label{cor:normalized} Let $\phi \in \B(\A,\rmM_k)_+$ and let $\bspi$ be an AP sequence. \begin{itemize} \item[a.] If $\phi(e)$ is singular, then $\rmh_{\bspi}(\phi) = -\infty$. \item[b.] If $\phi(e)$ is nonsingular, and $\cal{O}$ is a base of neighbourhoods around ${\phi(e)^{-1/2}\cdot \phi \cdot \phi(e)^{-1/2}}$ in $\B(\A,\rmM_k)$, then \begin{equation}\label{eq:normalized1} \rmh_{\bspi}(\phi) = \log \Det\, \phi(e) + \inf_{O \in \cal{O}} \limsup_{n \to\infty} \frac{1}{d_n}\log m_{\rm{U}(k,d_n)}\X(\pi_n,O). \end{equation} \end{itemize} \end{cor} \begin{proof} If $\phi(e)$ is singular, then $\rmH(\phi(e)) = -\infty$, so part (a) follows from Lemma~\ref{lem:e-upper-bd}.
If $\phi(e)$ is nonsingular, then part (b) follows by inserting the estimates~\eqref{eq:normalized} into Definition~\ref{dfn:APent} and taking infima over neighbourhoods. \end{proof} If $\phi(e)$ is invertible, then we can apply the formula~\eqref{eq:normalized1} to both $\phi$ and $\phi(e)^{-1/2}\cdot \phi\cdot \phi(e)^{-1/2}$ to deduce the identity \[\rmh_{\bspi}(\phi) = \rmH(\phi(e)) + \rmh_{\bspi}\big(\phi(e)^{-1/2}\cdot \phi\cdot \phi(e)^{-1/2}\big).\] This is a special case of the general transformation law that we prove later: see Corollary~\ref{cor:AP-ent-transf-law}. \begin{rmk}\label{rmk:KL-div?} The formula~\eqref{eq:normalized1} is somewhat suggestive of this simple identity for discrete Shannon entropy: \begin{equation}\label{eq:H-log-D} \rmH(\mu) = \log|A| - \rm{D}(\mu\,\|\,p), \end{equation} where $A$ is a finite alphabet, $p$ is the uniform distribution on $A$, $\mu$ is any other probability distribution on $A$, and $\rm{D}$ denotes Kullback--Leibler divergence~\cite[equation (2.94)]{CovTho06}. The relation~\eqref{eq:H-log-D} also has an equivariant generalization: if $A$ is a finite alphabet, $p$ is the uniform distribution on $A$, and $\mu$ is any shift-invariant probability measure on $A^\bbZ$, then their entropy rates are related by \[\rmh(\mu) = \rmH(p) - \rm{d}(\mu\,\|\,p^\bbZ),\] where $\rm{d}$ is a suitably-defined `divergence rate' between $\mu$ and $p^\bbZ$. \end{rmk} Henceforth we have a choice between the original definition of AP entropy and the alternative given by Corollary~\ref{cor:normalized}. Each seems to be preferable under some conditions. The first major advantage of the second alternative is contact with the phenomenon of measure concentration. We explain this in the next section. \section{Consequences of concentration}\label{sec:conc} Our next few results are consequences of the measure concentration phenomenon on the groups $\rm{U}(n)$, which is needed again later. The specific concentration inequality that we cite is given in the next theorem. It follows from the logarithmic Sobolev inequality obtained in~\cite[Theorem 15]{MecMec13} via the standard Herbst argument (see~\cite{Led95}). \begin{thm}\label{thm:Un-conc} Suppose that $F:\rm{U}(n)^r\to \bbR$ is $L$-Lipschitz for the Hilbert--Schmidt metric defined by \[d_{\rm{HS}}\big((U_{1,1},\dots,U_{1,r}),(U_{2,1},\dots,U_{2,r})\big) := \sqrt{\sum_{i=1}^r \Tr_n\big((U_{1,i}-U_{2,i})^\ast (U_{1,i}-U_{2,i})\big)}.\] Then \[m_{\rm{U}(n)}^{\times r}\Big\{\Big|F - \int F\,dm_{\rm{U}(n)}^{\times r}\Big| \ge t\Big\} \le 2e^{-nt^2/24L^2}.\] \qed \end{thm} We use the full strength of Theorem~\ref{thm:Un-conc} for $\rm{U}(n)$ in Part~\ref{part:free} via Theorem~\ref{thm:asymp-free2}. In the rest of Part~\ref{part:general} we actually need concentration results for the spaces $\rm{U}(k,n)$ as $n\to\infty$ with $k$ fixed. These follow from Theorem~\ref{thm:Un-conc} by a simple contraction argument. Let $1 \le k \le n$, and let $V_0$ be the fixed element of $\rm{U}(k,n)$ that embeds $\bbC^{(k)}$ as the first $k$ coordinates of $\bbC^{(n)}$. Then the map \[G:\rm{U}(n) \to \rm{U}(k,n):U\mapsto UV_0\] is a surjection that pushes $m_{\rm{U}(n)}$ to $m_{\rm{U}(k,n)}$, and it is $1$-Lipschitz for the Hilbert--Schmidt metric on $\rm{U}(k,n)$ given by \[d(V_1,V_2) := \sqrt{\Tr_n((V_1 - V_2)^\ast (V_1-V_2))}.\] Therefore any $L$-Lipschitz function on $\rm{U}(k,n)$ may be composed with $G$ to become an $L$-Lipschitz function on $\rm{U}(n)$.
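
Before stating the resulting corollary, here is a quick numerical illustration of the scale of concentration promised by Theorem~\ref{thm:Un-conc}, in the simplest case $k = 1$: for a fixed self-adjoint matrix $A$ of operator norm one, the vector state $x\mapsto \langle Ax,x\rangle$ is a $(2\|A\|)$-Lipschitz function of the unit vector $x$, and its empirical fluctuations around the normalized trace shrink as the dimension grows. The sketch below is an illustration only, assuming NumPy is available; it is not used in any proof.
\begin{verbatim}
import numpy as np

def vector_state_fluctuations(d, n_samples=5_000, seed=1):
    # Observable A = diag(+1,...,+1,-1,...,-1): normalized trace 0, operator norm 1.
    rng = np.random.default_rng(seed)
    eigs = np.where(np.arange(d) < d // 2, 1.0, -1.0)
    z = rng.standard_normal((n_samples, d)) + 1j * rng.standard_normal((n_samples, d))
    x = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform unit vectors in C^d
    states = np.sum(eigs * np.abs(x) ** 2, axis=1)     # <A x, x> for each sample
    return float(np.std(states)), float(np.max(np.abs(states - eigs.mean())))

for d in (16, 64, 256, 1024):
    std, worst = vector_state_fluctuations(d)
    print(d, std, worst)   # both shrink with d, roughly like d ** -0.5
\end{verbatim}
The observation that most vector states approximate the normalized trace in this way is exactly the phenomenon formalized in Corollary~\ref{cor:conc0} below.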
Applying Theorem~\ref{thm:Un-conc} with $r = 1$ to such a composition gives the following. \begin{cor}\label{cor:Ukn-conc} If $F:\rm{U}(k,n)\to \bbR$ is $L$-Lipschitz for the Hilbert--Schmidt norm above, then \[m_{\rm{U}(k,n)}\Big\{\Big| F - \int F\,dm_{\rm{U}(k,n)}\Big| \ge t\Big\} \le 2e^{-nt^2/24L^2}.\] \qed \end{cor} Now let $\pi$ be a $d$-dimensional representation of $\A$. Since the trace of a matrix is invariant under unitary conjugation, averaging over all of $\rm{U}(d)$ gives \begin{equation}\label{eq:trace-integral} \tr_d\circ \pi = \int_{\rmS^{2d-1}}\Phi^\pi_x\ d\sigma_{2d-1}(x), \end{equation} where $\s_{2d-1} = m_{\rm{U}(1,d)}$ is the uniform distribution on $\rmS^{2d-1}$. When $d$ is large, an appeal to Corollary~\ref{cor:Ukn-conc} improves on~\eqref{eq:trace-integral} considerably: $\Phi^\pi_x$ is actually close to $\tr_d\circ \pi$ for `most' $x \in \rmS^{2d-1}$ individually. We can also handle cross terms to estimate the combined type of a random $k$-tuple drawn from $m_{\rm{U}(k,d)}$ together with another fixed tuple. To do this we first compute the following average. \begin{lem}\label{lem:expected-joining} Let $\pi$ be a $d$-dimensional representation. Assume that $\ell$ and $k$ are positive integers with $\ell + k \le d$, and let $X = [x_1,\dots,x_\ell]$ be a fixed element of $\rmM_{d,\ell}$. Then \[\int_{\rm{U}(k,d)} \Phi^\pi_{[X,V]}(a)\ dm_{\rm{U}(k,d)}(V) = \rm{diag}(\Phi^\pi_X(a),\,\tr_d\pi(a) \cdot I_k) \qquad (a \in \A).\] \end{lem} \begin{proof} Let $\theta(a) \in \rmM_{k+\ell}$ be the left-hand side of the desired equality. If $V = [v_1,\dots,v_k]$ is drawn at random from $m_{\rm{U}(k,d)}$, then each $v_i$ has distribution $\s_{2d-1}$. Therefore, by~\eqref{eq:trace-integral}, the diagonal term $\theta_{\ell+i,\ell+i}(a)$ is equal to \[\int \Phi^\pi_{v_i}(a)\ dm_{\rm{U}(k,d)}(V) = \int \Phi^\pi_v(a)\ d\s_{2d-1}(v) = \tr_d\pi(a)\] for each $a \in \A$ and $i=1,2,\dots,k$. On the other hand, for each $i=1,2,\dots,k$, the tuples \[v_1,\dots,v_{i-1},v_i,v_{i+1},\dots,v_k \qquad \hbox{and} \qquad v_1,\dots,v_{i-1},-v_i,v_{i+1},\dots,v_k\] have the same distribution, by the invariance of $m_{\rm{U}(k,d)}$. Therefore, for any $j\in\{1,2,\dots,k\}\setminus \{i\}$, the corresponding off-diagonal term \[\theta_{\ell+i,\ell+j}(a) = \int \langle \pi(a)v_j,v_i\rangle\ dm_{\rm{U}(k,d)}(V)\] is unchanged if we replace $v_i$ with $-v_i$, so its value must be $0$. The same argument shows that $\theta_{\ell+i,m}(a) = 0$ whenever $1 \le i \le k$ and $1\le m \le \ell$. \end{proof} Now fix positive integers $k$ and $\ell$, let $\phi \in \B(\A,\rmM_\ell)_+$, and let $\tau$ be a tracial state of $\A$. \begin{cor}\label{cor:conc} For every neighbourhood $U$ of $\rm{diag}(\phi,\tau \otimes I_k)$ there are neighbourhoods $O$ of $\phi$ and $V$ of $\tau$ and positive constants $C$ and $c$ such that we have \[m_{\rm{U}(k,d)}\{V:\ [X,V]^\rm{T} \in \X(\pi,U)\} \ge 1 - Ce^{-cd}\] whenever $X \in \X(\pi,O)$ and $\tr_d\circ \pi \in V$.
\end{cor} \begin{proof} By shrinking $U$ if necessary, we may assume it has the form \[U_1\cap \bigcap_{1\le j,j'\le k,a \in F}U_{2,j,j',a} \cap \bigcap_{1\le i \le k,1\le j\le k,a \in F}U_{3,i,j,a},\] where \begin{align*} U_1 &= \{\theta:\ \theta_{(\{1,2,\dots,\ell\})} \in O\},\\ U_{2,j,j',a} &= \{\theta:\ |\theta_{\ell+j,\ell+j'}(a) - \delta_{j,j'}\tau(a)|<\eps\},\\ \hbox{and} \quad U_{3,i,j,a} &= \{\theta:\ |\theta_{i,\ell+j}(a)| < \eps\} \end{align*} for some neighbourhood $O$ of $\phi$, some finite subset $F$ of $\A$, and some $\eps > 0$. The correct choice of $O$ is the one in the definition of $U_1$ above. We now focus on the other sets in the intersection. The total number of these depends only on $|F|$, $k$ and $\ell$, which all remain fixed as $d\to\infty$. It therefore suffices to prove that suitable $V$, $C$ and $c$ exist for each of those other sets separately. We classify them into three cases. \vspace{7pt} \emph{Case 1.}\quad First let $1 \le j\le k$ and $a \in \A$. Let \[V := \{\tau' \in \A^\ast:\ |\tau'(a) - \tau(a)| < \eps/2\},\] and assume that $\tr_d\circ \pi \in V$. Then we have \begin{align}\label{eq:small-prob-1} \{V:\ [X,V]^\rm{T} \not\in \X(\pi,U_{2,j,j,a})\} &= \{V:\ |\langle \pi(a)v_j,v_j\rangle - \tau(a)| \ge \eps\} \nonumber\\ &\subset \{V:\ |\langle \pi(a)v_j,v_j\rangle - \tr_d\pi(a)| \ge \eps/2\}. \end{align} As a function of $V \in \rm{U}(k,d)$ with the Hilbert--Schmidt norm, the expression $\langle \pi(a)v_j,v_j\rangle$ is $(2\|a\|)$-Lipschitz. By Lemma~\ref{lem:expected-joining}, its integral with respect to $m_{\rm{U}(k,d)}$ is equal to $\tr_d\pi(a)$. Therefore Corollary~\ref{cor:Ukn-conc} gives an upper bound of the form $2e^{-cd}$ for the probability of the event~\eqref{eq:small-prob-1}, where $c$ depends on only $\eps$ and $\|a\|$. \vspace{7pt} \emph{Case 2.}\quad Now let $1 \le j,j'\le k$ with $j\ne j'$ and let $a \in \A$. Then \[ m_{\rm{U}(k,d)}\{V:\ [X,V]^\rm{T} \not\in \X(\pi,U_{2,j,j',a})\} = m_{\rm{U}(k,d)}\{V:\ |\langle \pi(a)v_{j'},v_j\rangle| \ge \eps\}.\] Once again, the expression $\langle \pi(a)v_{j'},v_j\rangle$ is $(2\|a\|)$-Lipschitz as a function of $V \in \rm{U}(k,d)$, and this time its integral with respect to $m_{\rm{U}(k,d)}$ is zero, again by Lemma~\ref{lem:expected-joining}. We can therefore apply Corollary~\ref{cor:Ukn-conc} as in Case 1. \vspace{7pt} \emph{Case 3.}\quad Finally, let $1 \le i \le \ell$ and $1 \le j \le k$ and let $a \in \A$. Then \[ m_{\rm{U}(k,d)}\{V:\ [X,V]^\rm{T} \not\in \X(\pi,U_{3,i,j,a})\} = m_{\rm{U}(k,d)}\{V:\ |\langle \pi(a)v_j,x_i\rangle| \ge \eps\}.\] This time, since $x_i$ is a fixed unit vector, the expression $\langle \pi(a)v_j,x_i\rangle$ is $\|a\|$-Lipschitz as a function of $V \in \rm{U}(k,d)$. Once again its integral with respect to $m_{\rm{U}(k,d)}$ is zero by Lemma~\ref{lem:expected-joining}. We can therefore apply Corollary~\ref{cor:Ukn-conc} again as in Case 1. \end{proof} The following special case of Corollary~\ref{cor:conc} is worth recording by itself. \begin{cor}\label{cor:conc0} For every neighbourhood $U$ of $\tau \otimes I_k$ there are a neighbourhood $V$ of $\tau$ and positive constants $C$ and $c$ such that we have \[m_{\rm{U}(k,d)}\X(\pi,U) \ge 1 - Ce^{-cd}\] whenever $\tr_d\circ \pi \in V$. \qed \end{cor} The next corollary is a consequence of Corollary~\ref{cor:conc} for Fell-global limits of AP sequences. \begin{cor}\label{cor:char-and-weak-global} Suppose that $\tr_{d_n}\circ \pi_n \to \tau$ and let $\l := \pi_\tau$. 
If $\phi \in \ol{\S_k(\l^{(\infty)})}$, then $\phi$ is asymptotically associated to $(\pi_n)_{n\ge 1}$. \end{cor} \begin{proof} Any tuple in $H_\l^{(\infty)}$ can be approximated by tuples that are non-zero in only finitely many of the coordinate copies of $H_\l$, and so Lemma~\ref{lem:unif-cts} gives \[\ol{\S_k(\l^{(\infty)})} = \ol{\bigcup_{\ell \ge 1}\S_k(\l^{(\ell)})}.\] Therefore, by Corollary~\ref{cor:typ-trans}, it suffices to show that $\tau\otimes I_\ell$ itself is asymptotically associated to $\bspi$ for every positive integer $\ell$. This holds by Corollary~\ref{cor:conc0}: indeed, according to that corollary, a random tuple drawn from $m_{\rm{U}(\ell,d_n)}$ is approximately typical for $\tau\otimes I_\ell$ with high probability once $d_n$ is large enough. \end{proof} If $(\pi_n)_{n\ge 1}$ Fell-global converges, then Corollary~\ref{cor:char-and-weak-global} gives a lower bound on the set $\lim_n\ol{\S_\bullet(\pi_n)}$ by Lemma~\ref{lem:typ-not-typ}. Sometimes this can be an equality: indeed, Theorem~\ref{mainthm:tempered} is a probabilistic version of this statement, and relates it to the strong asymptotic freeness of tuples of independent random unitary matrices. In other cases the containment can be strict. Corollary~\ref{cor:conc} is worth comparing with part (a) of Lemma~\ref{lem:pairs}. That lemma requires the disjointness of $\phi$ and $\psi$, whereas this corollary does not. This time we find that the analogous containment `nearly' holds in the sense of measure, simply because vectors that are approximately typical for $\tau$ are so prevalent around the high-dimensional sphere $\rmS^{2d-1}$. In combination with the formula from Corollary~\ref{cor:normalized}, Corollary~\ref{cor:conc} also helps with estimating AP entropy values. The next proposition determines the cases of equality in Lemma~\ref{lem:e-upper-bd}. \begin{prop}\label{prop:max-ent} Assume that $\tr_{d_n}\circ \pi_n\to \tau$ and that $\phi(e)$ is nonsingular. Then equality holds in Lemma~\ref{lem:e-upper-bd} if and only if $\phi = \tau\otimes \phi(e)$. \end{prop} \begin{proof} First, replacing $\phi$ with $\phi(e)^{-1/2}\cdot \phi\cdot \phi(e)^{-1/2}$ and applying part (b) of Corollary~\ref{cor:normalized} if necessary, we may assume that $\phi(e) = I_k$. If $\phi = \tau \otimes I_k$, then for any neighbourhood $U$ of $\phi$ we let $V$, $C$ and $c$ be given by Corollary~\ref{cor:conc0}. Since $\tr_{d_n}\circ \pi_n \in V$ for all sufficiently large $n$, the estimate from that corollary gives \[m_{\rm{U}(k,d_n)}\X(\pi_n,U) \to 1.\] In particular, in this case $\rmh_{\bspi}(\tau \otimes I_k) = 0$, by Corollary~\ref{cor:normalized}. On the other hand, if $\phi(e) = I_k$ but $\phi \ne \tau \otimes I_k$, then they have disjoint neighbourhoods, say $U$ and $U'$ respectively. Let $V$, $C$ and $c$ be given by Corollary~\ref{cor:conc0} for the neighbourhood $U'$. Since $\tr_{d_n}\circ \pi_n \in V$ for all sufficiently large $n$, this time Corollary~\ref{cor:conc0} gives \[m_{\rm{U}(k,d_n)}\X(\pi_n,U) \le m_{\rm{U}(k,d_n)}\big(\rm{U}(k,d_n)\setminus \X(\pi_n,U')\big)\le Ce^{-cd_n}.\] Therefore, in particular, $\rmh_{\bspi}(\phi) \le -c < 0$, again by Corollary~\ref{cor:normalized}. \end{proof} \subsection*{\emph{Notes and further references}} The analog of Theorem~\ref{thm:Un-conc} for $\rm{SU}(n)$ can be deduced from the lower bound on its Ricci curvature as a Riemannian manifold.
Starting from this lower bound, one can either use the volume-comparison estimates of Gromov to show that these inequalities are implied by their counterparts on the sphere~\cite{GroMil83}, or run the semigroup argument of Bakry and Emery~\cite{BakEme85,Led92} to prove a logarithmic Sobolev inequality. Theorem~\ref{thm:Un-conc} for $\rm{U}(n)$ is based on the result for $\rm{SU}(n)$, but requires an additional argument: the coupling used in~\cite{MecMec13} gives the optimal constant. See~\cite[Section 5.3]{MecRMTCCG} or~\cite[Subsection 4.4.2]{AndGuiZei--book} for general accounts of concentration for compact matrix groups with more thorough selections of references. \section{Mollifying}\label{sec:mollify} Fix a tracial state $\tau$ on $\A$ and a positive integer $k$. For any $\phi \in \B(\A,\rmM_k)_+$, let \[\phi_t := t\phi + (1-t)(\tau\otimes I_k) \qquad (0\le t\le 1).\] Where a finite-dimensional representation $\pi$ (resp. $\pi_n$) appears below, its dimension is $d$ (resp. $d_n$). The next lemma is a generalization of Lemma~\ref{lem:normalized} that compares variable tuples against a fixed one. This time we give only the lower bound. \begin{lem}\label{lem:normalized2} Let $\phi \in \B(\A,\rmM_k)_+$, let $U$ be a neighbourhood of ${\rm{diag}(\phi,\tau\otimes I_k)}$ in $\B(\A,\rmM_{2k})_+$, and let $\eps > 0$. Then there are a neighbourhood $W$ of ${\rm{diag}(\phi,\tau\otimes I_k)}$ in $\B(\A,\rmM_{2k})_+$ and a positive constant $C$ such that $W \subset U$ and such that we have \begin{equation}\label{eq:normalized2} \frac{\vol_{2kd}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\}}{c(k,d)} \ge Ce^{-\eps d}\cdot m_{\rm{U}(k,d)}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\} \end{equation} whenever $d \ge 2k$, $\pi$ is a $d$-dimensional representation, and $X \in \bbC^{(d)}$. (Notice that both sides of~\eqref{eq:normalized2} may be zero, for example if the type of $X$ itself is not close enough to $\phi$.) \end{lem} \begin{proof} Let $L := \{k+1,\dots,2k\}$. Arguing as in the proof of Lemma~\ref{lem:normalized}, there are (i) a neighbourhood $W_2$ of $I_k$ in $\rmM^\circ_{k+}$ such that \[e^{-\eps} < \det Q < e^\eps \qquad \hbox{for every}\ Q \in W_2,\] and (ii) a neighbourhood $W$ of ${\rm{diag}(\phi,\tau\otimes I_k)}$ in $\B(\A,\rmM_{2k})_+$ such that $W\subset U$ and such that \begin{multline*} W = \big\{\psi \in \B(\A,\rmM_{2k})_+:\ \psi(e)_{(L)} \in W_2\ \hbox{and}\\ \rm{diag}(I_k,\phi(e)_{(L)}^{-1/2})\cdot \psi \cdot \rm{diag}(I_k,\phi(e)_{(L)}^{-1/2})\in W\big\}. \end{multline*} Now let $d\ge 2k$, let $\pi$ be any $d$-dimensional representation, and let $X \in \bbC^{(d)}$. Just as in the proof of Lemma~\ref{lem:normalized}, we can insert the special form of $W$ into Proposition~\ref{prop:int-form} and obtain \begin{align*} &\vol_{2kd}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\} \\ &= c(k,d)\cdot m_{\rm{U}(k,d)}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\}\cdot \int_{W_2}(\det Q)^{d-k}\ d\vol_{k^2}(Q)\\ &\ge c(k,d)\cdot e^{-\eps d}\cdot \vol_{k^2}W_2\cdot m_{\rm{U}(k,d)}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\}. \end{align*} This is the desired inequality with $C = \vol_{k^2}W_2$. \end{proof} The convex combinations $\phi_t$ for $0 < t < 1$ give us a way to `mollify' a positive definite function $\phi$. Importantly, a nontrivial such convex combination has typical vectors only if $\phi$ itself does, but when such vectors exist there are many of them: if $\phi$ is typical along $\bspi$ then the AP entropy of $\phi_t$ tends to $0$ as $t\downarrow 0$. The next proposition makes these assertions precise. 
\begin{prop}\label{prop:mollify} Let $\phi \in \B(\A,\rmM_k)_+$ and let $0 < t \le 1$. \begin{itemize} \item[a.] For every neighbourhood $U$ of $\phi_t$ and $\eps > 0$ there are neighbourhoods $O$ of $\phi$ and $V$ of $\tau$ such that \[\frac{\vol_{2kd} \X(\pi,U)}{c(k,d)} \ge e^{-\eps d}\cdot (1-t)^{kd}\] whenever $\X(\pi,O)\ne \emptyset$, $\tr_d\circ \pi \in V$, and $d$ is sufficiently large. \item[b.] For every neighbourhood $O$ of $\phi$ there is a neighbourhood $U$ of $\phi_t$ such that \[\X(\pi,U)\ne \emptyset \quad \Rightarrow \quad \X(\pi,O) \ne \emptyset\] for any representation $\pi$. \end{itemize} \end{prop} \begin{proof} \emph{Part (a).}\quad First, by part (a) of Lemma~\ref{lem:sums}, Lemma~\ref{lem:lin-maps}, and Lemma~\ref{lem:normalized2}, we can choose a neighbourhood $W$ of $\rm{diag}(\phi,\tau\otimes I_k)$ and a positive constant $C$ so that we have \[[X,Y]^\rm{T} \in \X(\pi,W) \qquad \Rightarrow \qquad \sqrt{t}X + \sqrt{1-t}Y \in \X(\pi,U)\] and also so that~\eqref{eq:normalized2} holds whenever $d\ge 2k$ and $X \in \bbC^{(d)}$. Next, by Corollary~\ref{cor:conc}, we can choose neighbourhoods $O$ of $\phi$ and $V$ of $\tau$ so that \[m_{\rm{U}(k,d)}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\} \ge 1/2\] whenever $X \in \X(\pi,O)$, $\tr_d\circ \pi \in V$, and $d$ is sufficiently large. Now assume that $\X(\pi,O)$ is nonempty, let $X$ be an element of it, and assume also that $\tr_d\circ \pi \in V$ and $d$ is sufficiently large. Then the choices above and~\eqref{eq:normalized2} give \begin{align*} \frac{\vol_{2kd}\X(\pi,U)}{c(k,d)} &= \frac{(1-t)^{kd}}{c(k,d)} \cdot \vol_{2kd}\{Y:\ \sqrt{t}X + \sqrt{1-t}Y \in \X(\pi,U)\}\\ &\ge \frac{(1-t)^{kd}}{c(k,d)} \cdot \vol_{2kd}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\}\\ &\ge (1-t)^{kd}\cdot Ce^{-\eps d}\cdot m_{\rm{U}(k,d)}\{Y:\ [X,Y]^\rm{T} \in \X(\pi,W)\}\\ &\ge Ce^{-\eps d}(1-t)^{kd}/2. \end{align*} Since $\eps$ is arbitrary, this completes the proof. \vspace{7pt} \emph{Part (b).}\quad By Proposition~\ref{prop:RadNik}, $\phi$ is associated to the GNS representation of $\phi_t$ provided $t > 0$, so this conclusion follows from Corollary~\ref{cor:typ-trans}. \end{proof} \begin{cor}\label{cor:mollify} Assume that $\tr_{d_n}\circ \pi_n \to \tau$. Let $\phi \in \B(\A;\rmM_k)_+$ and let ${0 < t < 1}$. Then $\phi$ is asymptotically associated to $\bspi$ if and only if $\phi_t$ is asymptotically associated to $\bspi$, and in this case \[\rmh_{\bspi}(\phi_t) \ge k\log (1-t).\] \end{cor} \begin{proof} The equivalence of asymptotic association for $\phi$ and $\phi_t$ follows directly from Proposition~\ref{prop:mollify}. The reverse implication uses part (b); the forward implication uses part (a) and the observation that $\X(\pi_n,U)$ cannot be empty if its volume is positive. Finally the lower bound on AP entropy also follows from that volume lower bound upon taking logarithms, normalizing, and letting $n\to\infty$. \end{proof} \begin{cor} If $\tr_{d_n}\circ \pi \to\tau$ and $\bspi$ Fell-global converges, then \begin{align*} \lim_n \ol{\S_k(\pi_n)} &= \{\phi \in \S_k(\A):\ \rmh_{\bspi}(\phi_t) > -\infty\ \forall t \in (0,1)\}\\ &= \{\phi \in \S_k(\A):\ \rmh_{\bspi}(\phi_t) \to 0\ \hbox{as}\ t\downarrow 0\}. 
\end{align*} \qed \end{cor} Thus, even though the value $\rmh_{\bspi}(\phi)$ could be $-\infty$ for some $\phi$ that is asymptotically associated to $\bspi$, we have established the following: if $\bspi$ Fell-global converges, and we know (i) the limit of the tracial states $\tr_{d_n}\circ \pi_n$ and (ii) the whole function $\rmh_{\bspi}$ on $\S_k(\A)$, then these determine the limiting set $\lim_n\ol{\S_k(\pi_n)}$. \chapter{Preliminaries on tracial functionals and affiliated operators}\label{chap:char} The first result of this section is Theorem~\ref{thm:RadNik} below. It is an improvement on the `Radon--Nikodym' theorem in Proposition~\ref{prop:RadNik}. It applies when a normal positive functional on a von Neumann algebra is compared to normal tracial state. That result belongs to the larger theory of von Neumann algebras with faithful normal tracial states, and it depends on other parts of that theory. This material is very classical, but lies beyond many first introductions to operator algebras. It is one of the most substantial prerequisites for our work, so I include it for readers who do not already have this background. Nevertheless, I cite most results without proof, using~\cite{Dix--vN} for definiteness wherever possible. An exception is the extension of Fuglede--Kadison determinants to affiliated operators, which is a somewhat more recent topic and for which I cite~\cite[Section 2]{HaaSch07}. This background from von Neumann algebras is used only during the proof of Theorem~\ref{mainthm:det-form} in Chapter~\ref{chap:det-form}. After that chapter, Theorem~\ref{mainthm:det-form} is cited again a few times, but it is not used within the proofs of the other main theorems. So the reader can skip the present section and the detailed proofs in Chapter~\ref{chap:det-form} without losing much understanding of the remaining chapters if they wish. \section{Tracial vectors and canonical involutions}\label{sec:char} Let $\tau$ be a tracial state on a C$\sp*$-algebra $\A$, and assume it is associated to a representation $\l$ by a cyclic vector $\xi$. Then $\xi$ satisfies the trace identity~\eqref{eq:trace-vec}, which may also be written in the form \[\langle \l(b)\xi,\l(a^\ast)\xi\rangle = \langle \l(a)\xi,\l(b^\ast)\xi\rangle \qquad (a,b \in \A).\] In this form, we can take weak-operator limits in $a$ and then separately in $b$ to conclude that the positive functional \begin{equation}\label{eq:tau-on-N} \tau(A) := \langle A\xi,\xi\rangle \end{equation} is actually tracial on the whole von Neumann algebra $\N := \l(\A)''$. Clearly $\xi$ is still cyclic for $\N$. It follows that $\xi$ is also cyclic for $\N'$~\cite[Corollary I.6.1]{Dix--vN}, hence also separating for both $\N$ and $\N'$~\cite[Proposition I.1.5]{Dix--vN}, and that $\N$ and $\N'$ are both finite von Neumann algebra~\cite[Proposition I.6.9(ii)]{Dix--vN}. \begin{ex}\label{ex:regular-again} If $\G$ is a countable discrete group, then the regular character $\chi_{\rm{reg}}$ is associated to the left regular representation by the function $1_{\{e\}}$, regarded as an element of $\ell^2(\G)$. This is the origin of the term `regular character'. \qed \end{ex} \begin{ex}\label{ex:normal-subgroup} If $N$ is a normal subgroup of $\G$, then the character $1_N$ is associated to the left action of $\G$ on $\ell^2(\G/N)$ by the element $1_{\{N\}}$. 
\qed \end{ex} If $\xi$ is tracial for the von Neumann algebra $\N$, then the formula \begin{equation}\label{eq:canonical-involution} J(A\xi) := A^\ast \xi \qquad (A \in \N) \end{equation} gives a well-defined involution on the subspace $\N\xi$. The map $J$ is anti-linear, and it converts inner products to their conjugates as a consequence of the trace identity. In particular, it is an isometry of the restriction of the norm of $H$. It therefore extends by continuity to an involution of the whole of $H$ that has the same properties, which we still denote by $J$. It is called the \textbf{canonical involution} associated to $\N$ and $\xi$. This construction is recounted in~\cite[Section I.5.1]{Dix--vN} in the alternative framework of Hilbert algebras, which are shown to be equivalent to algebras with tracial cyclic vectors in~\cite[Section I.6.2]{Dix--vN}. The following facts can all be found in those sections. \begin{lem}\label{lem:J} The canonical involution has the following properties: \begin{itemize} \item[i.] $J\xi = \xi$; \item[ii.] $JA\xi = A^\ast \xi$ for every $A \in \N$; \item[iii.] the map $A\mapsto JAJ$ is an involutive $\ast$-anti-automorphism of $\B(H)$; \item[iv.] the map from (iii) preserves $\tau$ and exchanges the von Neumann subalgebras $\N$ and $\N'$ of $\B(H)$. \qed \end{itemize} \end{lem} \subsection*{\emph{Notes and further references}} Tracial vectors and canonical involutions already appear in the work of Murray and von Neumann~\cite{MurvonNI,MurvonNII,MurvonNIV}, with additional early contributions by Ambrose~\cite{Amb45,Amb49}. A more complete set of original references is in~\cite[Sections I.5.1--2]{Dix--vN}. Example~\ref{ex:normal-subgroup} is a special case of a much more general construction that starts with an `invariant random subgroup' of $\G$. This construction can be found in~\cite[Section 15.F]{BekdelaHarUDC}, which gives further references. On the other hand, In general, representations having tracial vectors (or more complicated but related structures) occupy a special place and often admit a finer analysis. Their basic theory is covered in~\cite[Chapters 6 and 17]{Dix--Cstar}, and~\cite[Chapters 10--12]{BekdelaHarUDC} covers finer aspects for unitary group representations. \section{Affiliated operators and Fuglede--Kadison determinants} Some of the constructions we need below involve operators on $H$ that are closed and densely defined but unbounded. The basic theory of these can be found in~\cite[Chapter X]{ConFA}, including the unbounded version of the spectral theorem~\cite[Section X.4]{ConFA} which leads to an unbounded extension of the polar decomposition~\cite[Exercise X.4.6]{ConFA} (or see~\cite[Section VIII.9]{ReeSimFA}). For any closed operator $T$, its kernel is a closed subspace of $H$, because \[\ker T = \{x \in H:\ (x,0) \in \rm{graph}\,T\}.\] As a result, if a closed and densely defined operator vanishes on a dense subspace, then it is identically zero. We say that a closed and densely defined operator $T$ between two Hilbert spaces is \textbf{nonsingular} if $\ker T = \{0\}$, and \textbf{singular} otherwise. A closed and densely defined operator $T$ is \textbf{affiliated} to the von Neumann algebra $\N$ if it commutes with every unitary element of $\N'$~\cite[Exercise I.1.10]{Dix--vN}. This includes the assertion that those unitary elements preserve $\dom T$. 
The collection of operators affiliated to $\N$ is closed under forming adjoints, and if such an operator $T$ has polar decomposition $U|T|$, then the uniqueness of that decomposition implies that $U \in \N$ and that $|T|$ is also affiliated to $\N$. If $\N$ is a finite von Neumann algebra, then sums and products of affiliated operators are also still affiliated operators, provided each of these constructions is followed by taking graph closures~\cite[Exercise III.1.13.c]{Dix--vN}. Now let $\A$ be a C$^\ast$-algebra and $\tau$ a tracial positive functional on it. For any positive invertible $a \in \A$ we define its \textbf{Fuglede--Kadison determinant with respect to $\tau$} by \[\Delta_\tau a:= \exp(\tau(\log a)).\] (This definition is sometimes extended to other invertible elements $a$ by applying it to $|a|$, but we do not need this.) If $\N$ is a von Neumann algebra, $\tau$ is a normal tracial positive functional on it, and $A \in \N$ is positive and invertible, then $\Delta_\tau A$ can be expressed in terms of the spectral resolution of $A$. In this setting its basic properties are covered in~\cite[Section I.6.11]{Dix--vN}. This formulation also permits an extension to certain affiliated operators. We describe this quickly here, referring to~\cite[Section 2]{HaaSch07} for a more careful exposition with complete proofs. If $T$ is a non-negative operator affiliated to $\N$ and $E(\cdot)$ is its spectral measure on $[0,\infty)$, then $T$ is called \textbf{log-integrable} with respect to $\tau$ if \[\int_0^\infty \log^+ t\ \tau E(dt) < \infty\] (note that this controls the singularity of $\log$ at $\infty$ but not at $0$). If this holds, then we define its \textbf{Fuglede--Kadison determinant} with respect to $\tau$ to be \begin{equation}\label{eq:FK-det} \Delta_\tau T := \exp \int_0^\infty \log t\ \tau E(dt). \end{equation} The integral here is well-defined by log-integrability, but may take the value $-\infty$, in which case $\Delta_\tau T := 0$. During the coming proof of Theorem~\ref{mainthm:det-form}, we need a relative of Kaplansky's density theorem that (i) allows affiliated operators and (ii) provides an approximation of determinants as well as a strong-operator approximation. Its proof is a routine combination of the spectral theorem and the original Kaplansky density theorem. I have not found this result elsewhere, so I include a complete proof. \begin{prop}\label{prop:Kap+} Let $\pi$ be a representation of $\A$, let $\N := \pi(\A)''$, and let $\tau$ be a normal tracial positive functional on $\N$. Suppose that $T$ is a non-negative operator affiliated to $\N$ which is log-integrable with respect to $\tau$. For any $x \in \rm{dom}\,T$ and $\eps > 0$, there is a positive element $a$ of $\A$ such that \begin{equation}\label{eq:approxs} \|\pi(a)x - Tx\| < \eps \quad \hbox{and} \quad |\Delta_\tau \pi(a) - \Delta_\tau T| < \eps. \end{equation} If $T$ is nonsingular, then we may require in addition that \begin{equation}\label{eq:approxs2} \|x - \pi(a^{-1})Tx\| < \eps. \end{equation} \end{prop} \begin{proof} Assume that $x$ is nonzero and let $y:= Tx$. Consider the spectral resolution \[T = \int_0^\infty t\ E(dt).\] For each $\delta \in (0,1)$, define \[T_\delta := (T\vee \delta)\wedge \delta^{-1} := \int_0^\infty (t\vee \delta)\wedge \delta^{-1}\ E(dt),\] where `$\vee$' stands for `$\max$' and `$\wedge$' stands for `$\min$'. This is an element of $\N$ satisfying $\delta \le T_\delta \le \delta^{-1}$.
The spectral theorem gives \[\|T_\delta x - y\|^2 = \int_0^\infty |t - (t\vee \delta)\wedge \delta^{-1}|^2\ \langle E(dt)x,x\rangle.\] The expression $\langle E(\cdot)x,x\rangle$ is a Borel measure on $\bbR$, and the function $t^2$ is integrable with respect to it because of the assumption that $x \in \rm{dom}\,T$. Therefore the dominated convergence theorem gives \begin{equation}\label{eq:T-delta-small-1} \|T_\delta x - y\| < \eps \end{equation} for all sufficiently small $\delta$. Another calculation from the spectral theorem gives \[\|x - T_\delta^{-1}y\|^2 = \int_0^\infty |1 - \phi_\delta(t)|^2\ \langle E(dt)x,x\rangle,\] where \[\phi_\delta(t) = \left\{\begin{array}{ll}t/\delta &\quad t < \delta\\ 1 & \quad \delta \le t \le \delta^{-1}\\ \delta t &\quad t > \delta^{-1}.\end{array}\right.\] These functions are uniformly bounded by $t+1$, and they converge pointwise to $0$ on $(0,\infty)$ as $\delta \downarrow 0$. If $T$ is nonsingular, then $E\{0\} = 0$, so in that case the dominated convergence theorem also gives \begin{equation}\label{eq:T-delta-small-2} \|x - T_\delta^{-1} y\| < \eps \end{equation} for all sufficiently small $\delta$. Since $\tau$ is normal, the composition $\tau E$ is a Borel measure on $[0,\infty)$, and by assumption the function $\log^+ t$ is integrable with respect to this measure. Therefore we also have \begin{equation}\label{eq:T-delta-det-conv} \log \Delta_\tau T_\delta = \int_0^\infty \log((\delta \vee t)\wedge \delta^{-1})\ \tau E(dt) \to \log \Delta_\tau T \quad \hbox{as}\ \delta \downarrow 0, \end{equation} where we use (i) the log-integrability of $T$ and the dominated convergence theorem to control the integrals over $[1,\infty)$, and (ii) the monotone convergence theorem to control the integrals over $[0,1)$. The convergence~\eqref{eq:T-delta-det-conv} holds even if $\log \Delta_\tau T = -\infty$, and once we exponentiate it implies that \begin{equation}\label{eq:T-delta-small-3} |\Delta_\tau T_\delta - \Delta_\tau T| < \eps \end{equation} for all sufficiently small $\delta$. Pick $\delta > 0$ small enough that~\eqref{eq:T-delta-small-1} and~\eqref{eq:T-delta-small-3} both hold for all sufficiently small $\delta$. In case $T$ is nonsingular, pick $\delta$ small enough that~\eqref{eq:T-delta-small-2} also holds. Let $\mathfrak{S}_\delta$ be the set of self-adjoint elements $S$ of $\N$ that satisfy $\delta \le S \le \delta^{-1}$. The function $S\mapsto \log S$ is strong-operator continuous on the set $\mathfrak{S}_\delta$: for instance, this holds by applying~\cite[Lemma 44.2]{ConOT} to a Lipschitz function that agrees with $\log$ on $[\delta,\delta^{-1}]$. Since $\tau$ is positive and normal, it is ultraweakly continuous~\cite[Theorem 46.4]{ConOT}, and this in turn implies strong-operator continuity when restricted to any bounded subset of $\N$ such as $\mathfrak{S}_\delta$ (this follows, for instance, from the explicit description of such functionals in~\cite[Theorem 46.4(d)]{ConOT}). It follows that the expression \[\Delta_\tau S = \exp(\tau(\log S))\] is also strong-operator continuous on $\mathfrak{S}_\delta$. Now the existence of $a \in \A$ satisfying $\pi(a) \in \mathfrak{S}_\delta$ and also the approximations~\eqref{eq:approxs} follows from the Kaplansky density theorem: see~\cite[Theorem 44.1]{ConOT}, specifically part (c). Since $\delta$ is fixed, any $a \in \A$ such that $\pi(a) \in \mathfrak{S}_\delta$ also satisfies \begin{equation}\label{eq:pre-approxs2} \|\pi(a^{-1})y - T_\delta^{-1}y\| \le \delta^{-1}\|y - \pi(a)T_\delta^{-1}y\|. 
\end{equation} Therefore, in case $T$ is nonsingular, the application of Kaplansky's density theorem above can be made so that the right-hand side of~\eqref{eq:pre-approxs2} is less than the gap in the inequality~\eqref{eq:T-delta-small-2}. In this case~\eqref{eq:approxs2} is obtained as well. \end{proof} \subsection*{\emph{Notes and further references}} Affiliated operators also have their origins in~\cite{MurvonNI}. A careful introduction to working with them can be found in~\cite[Chapter 8]{Luc02}, where the goal is applications to $L^2$-invariants in topology. Since its introduction in~\cite{FugKad51,FugKad52}, the Fuglede--Kadison determinant has played an increasingly important role in various connections of operator algebras to other parts of mathematics. A succinct survey of some of these is given in~\cite{delaHar13}. A much more thorough exposition with a view towards $L^2$-invariants in topology is given in~\cite[Section 3.2]{Luc02}; see also~\cite[Chapter 13]{Luc02} on the approximation and determinant conjectures. \section{Comparing positive functionals and traces}\label{sec:reachRadNik} Let $\xi$ be a cyclic tracial vector for the von Neumann algebra $\N$, let $\tau$ be the tracial functional defined by $\xi$, and let $\phi$ be another normal positive functional on $\N$. Since $\N$ has the separating vector $\xi$, by~\cite[Theorem III.1.4]{Dix--vN} there exists $y \in H$ such that \[\phi(A) = \langle Ay,y\rangle \qquad (A \in \N).\] If $y_1$ is another vector that represents $\phi$ in the same way, then any $A,B \in \N$ satisfy \[\langle Ay_1,By_1\rangle = \phi(B^\ast A) = \langle Ay,By\rangle.\] Therefore the map \[\N y\to \N y_1 : Ay \mapsto Ay_1\] is well-defined, linear, and unitary, and so it extends to a well-defined partial isometry $W \in \N'$ with initial space $\ol{\N y}$ and final space $\ol{\N y_1}$. Since $\N'$ is finite as a von Neumann algebra,~\cite[Exercise III.1.13.d]{Dix--vN} gives an operator $S$ affiliated to $\N'$ such that $y = S\xi$. If $W|S|$ is the polar decomposition of $S$, then $W$ is a partial isometry that commutes with $\N$, so the vector $|S|\xi$ also represents the same functional $\phi$. Therefore, by altering the choice of $y$ if necessary, we may assume that $S$ is non-negative, and having done so both $y$ and $S$ are unique by the uniqueness of polar decomposition. Since $\xi \in \dom\,S$, the spectral measure $E(\cdot)$ of $S$ satisfies \begin{equation}\label{eq:sq-int} \|S\xi\|^2 = \int_0^\infty s^2\ \tau E(ds) < \infty. \end{equation} Therefore $S$ is log-integrable, because $\log^+ s \le s^2$, and so $\Delta_\tau S$ is defined. Finally, let $T := JS^\ast J$, where $J$ is the canonical involution. This is affiliated to $\N$ by part (iv) of Lemma~\ref{lem:J}, and using parts (i) and then (ii) of that lemma we obtain \[T\xi = JS^\ast J\xi = JS^\ast \xi = S\xi.\] Conjugation by $J$ preserves $\tau$, and it is a $\ast$-anti-automorphism (property (iii) from Lemma~\ref{lem:J}), so it respects spectral decompositions. Therefore $T$ is also log-integrable and $\Delta_\tau T = \Delta_\tau S$. In the sequel, we need to apply the construction above starting from a positive functional $\phi$ on a C$\sp*$-algebra $\A$ rather than a normal positive functional on a von Neumann algebra. Let $\tau$ be a tracial positive functional on $\A$ and assume that $\phi \ll \tau$. Let $\tau$ be associated to the representation $\l$ by the cyclic tracial vector $\xi$, and let $\N := \l(\A)''$.
Define the normal tracial functional $\t{\tau}$ on $\N$ using the vector $\xi$ as in~\eqref{eq:tau-on-N}, so $\tau = \t{\tau}\circ \l$. Since $\phi \ll \tau$, we know that $\phi$ is the type of some vector in $\l^{(\infty)}$, and using this formula for $\phi$ it extends to a normal positive functional $\t{\phi}$ on $\N$. With these ingredients in hand, we can apply our previous construction to the functionals $\t{\tau}$ and $\t{\phi}$ on $\N$. The result is the following. \begin{thm}\label{thm:RadNik} In the notation above, $\phi$ is associated to $\l$ by the vector $T\xi$ for some non-negative log-integrable operator $T$ affiliated to $\l(\A)''$. \qed \end{thm} More generally still, suppose that $\phi:\A\to\rmM_k$ is completely positive, that $\tau$ is a tracial state of $\A$, and that $\phi\ll\tau$. Then the positive functional $\langle \phi,\cdot\rangle$ on $\rmM_k(\A)$ is absolutely continuous with respect to the tracial functional $\tau\otimes \Tr_k$ by Lemma~\ref{lem:Leb-of-pairing}. This enables us to extend the use of Theorem~\ref{thm:RadNik} from positive functionals to completely positive maps. In doing so, Example~\ref{ex:tensor-prod} shows that $\l$ is replaced by ${\t{\l}:= \l\otimes \rm{mult}_k}$, and the unbounded operator $T$ is now affiliated to $\t{\l}(\rmM_k(\A))''$. The representation of $\phi$ using the non-negative operator $T$ is an equivariant analog of the representation of a positive semidefinite matrix $Q$ as the square of another such matrix $V$. In that situation we have $\det Q = (\det V)^2$, and this motivates the following definition. \begin{dfn}\label{dfn:FK-det-ac} If $\phi$ is a positive functional and $\phi \ll\tau$, then the \textbf{Fuglede--Kadison determinant of $\phi$ with respect to $\tau$} is \[\Delta_\tau \phi := (\Delta_{\t{\tau}}T)^2,\] where $T$ is the operator from Theorem~\ref{thm:RadNik}. More generally, if $\phi:\A\to\rmM_k$ is completely positive and $\phi \ll\tau$, then the \textbf{Fuglede--Kadison determinant of $\phi$ with respect to $\tau$} is \[\Delta_\tau \phi := \Delta_{\tau\otimes \Tr_k}(\langle \phi,\cdot\rangle).\] \end{dfn} \subsection*{\emph{Notes and further references}} To arrive at Theorem~\ref{thm:RadNik}, we make essential use of~\cite[Theorem III.1.4]{Dix--vN} guaranteeing that normal positive functionals can be associated to single vectors in $H$. In~\cite{Dix--vN} that fact is deduced from the result that, if $\M$ and $\N$ are von Neumann algebras and each of them has a cyclic and separating vector, then any isomorphism from $\N$ to $\M$ is spatial~\cite[Theorem III.1.3]{Dix--vN}. Another textbook account is in~\cite[Sections 7.1--3]{KadRin97}, where the representation of normal states using single vectors is provided by~\cite[Theorem 7.2.3]{KadRin97}. Historically, versions of Theorem~\ref{thm:RadNik} developed hand-in-hand with results guaranteeing that certain isomorphisms are spatial. A thorough account and list of original references is given in~\cite[Section III.1.4]{Dix--vN}. As far as I know, the specific origins of Theorem~\ref{thm:RadNik} are in~\cite{Dye52}. In particular, Theorem 4 of that paper provides a more general comparison between two normal states on a finite and sigma-finite von Neumann algebra. In time, those developments evolved into much more general and powerful techniques for uncovering the abstract structure of von Neumann algebras. 
The Tomita--Takesaki theory extended the use of tracial vectors to any vectors that are cyclic and separating: standard textbook references are~\cite[Chapters VI--VIII]{Tak--bookII} and~\cite[Section 9.2]{KadRin97}. That theory was then an ingredient in the use of bimodules (also called `correspondences') to characterize different classes of factors and their properties. These ideas originate in unpublished work of Connes, and were then extended and disseminated by Popa~\cite{Pop86}. See also~\cite[Section IX.3]{Tak--bookII} for an introduction, or~\cite[Appendix F and Chapter 6]{BroOza08} or~\cite[Appendix V.B]{ConnesNG} for more advanced overviews. \chapter{The determinantal formula}\label{chap:det-form} This chapter proves Theorem~\ref{mainthm:det-form}. We can begin by reducing our work to the case $k=1$. Indeed, suppose we know the result for \emph{any} separable C$\sp*$-algebra in that case. Assume that $\phi \in \B(\A,\rmM_k)_+$ is asymptotically associated to an AP sequence $\bspi = (\pi_n)_{n\ge 1}$, and suppose that $\tr_{d_n}\circ \pi_n \to \tau$. Then part (a) of Proposition~\ref{prop:AP-for-pairing} gives \[\tr_{kd_n}\circ (\pi_n)_{(k)}\to \tau\otimes \tr_k,\] and parts (b) and (c) of that proposition give that $\langle \phi,\cdot\rangle$ is asymptotically associated to $\bspi_{(k)}:= ((\pi_n)_{(k)})_{n\ge 1}$ and satisfies \[\rmh_{\bspi}(\phi) = k\cdot \rmh_{\bspi_{(k)}}(\langle \phi,\cdot\rangle).\] Applying the case $k=1$ of Theorem~\ref{mainthm:det-form} to the algebra $\rmM_k(\A)$, the functional $\langle \phi,\cdot\rangle$, and the AP sequence $\bspi_{(k)}$, we find that \[\rmh_{\bspi_{(k)}}(\langle \phi,\cdot\rangle) = \log \Delta_{\tau\otimes \tr_k}(\langle \phi,\cdot\rangle_{\rm{ac}}) = \log \Delta_{\tau\otimes \tr_k}(\langle \phi_{\rm{ac}},\cdot\rangle) = \frac{1}{k}\log \Delta_\tau \phi_{\rm{ac}}.\] The second equality here holds by Lemma~\ref{lem:Leb-of-pairing}, and the last follows from Definition~\ref{dfn:FK-det-ac}, noting the change in normalization from $\tr_k$ to $\Tr_k$. Combining the equalities above, we arrive at \[\rmh_{\bspi}(\phi) = \log \Delta_{\tau\otimes \Tr_k}(\langle \phi_{\rm{ac}},\cdot\rangle),\] as required for the general case of Theorem~\ref{mainthm:det-form}. We now assume that $k=1$ for the rest of the chapter, which significantly lightens the notation. In Section~\ref{sec:det-form-reform}, we rephrase Theorem~\ref{mainthm:det-form} as a pair of inequalities (Theorems~\ref{thm:det-form-lower} and~\ref{thm:det-form-upper}) for finite-dimensional representations satisfying certain hypotheses. This makes the proof of Theorem~\ref{mainthm:det-form} more transparent, but we also need those explicit inequalities, rather than the statement of Theorem~\ref{mainthm:det-form} itself, in Section~\ref{sec:three-entropies}. Sections~\ref{sec:det-form-lower} and~\ref{sec:det-form-upper} prove Theorems~\ref{thm:det-form-lower} and~\ref{thm:det-form-upper}. They begin by citing parts (a) and (b) of Corollary~\ref{cor:sums}, respectively. Corollary~\ref{cor:sums} applies because $\phi_{\rm{ac}}$ and $\phi_{\rm{sing}}$ are mutually singular. That corollary compares the sets $\X(\pi,O)$ for neighbourhoods $O$ of $\phi$ with sums of the analogous sets for $\phi_{\rm{ac}}$ and $\phi_{\rm{sing}}$ separately, first from below and then from above.
Finally, Section~\ref{sec:two-cors} presents two first corollaries of Theorem~\ref{mainthm:det-form} and compares them to sofic entropy, and Section~\ref{sec:prev-det-form} discusses the relationship between Theorem~\ref{mainthm:det-form} and some of its predecessors in the literature. \section{Reformulation of Theorem~\ref{mainthm:det-form}}\label{sec:det-form-reform} For the rest of this chapter, let $\A$ be a separable C$\sp*$-algebra with a tracial state $\tau$, let $\phi \in \A^\ast_+$, and let $\phi = \phi_{\rm{ac}} + \phi_{\rm{sing}}$ be the Lebesgue decomposition of $\phi$ relative to $\tau$. Finally let \[h := \log \Delta_\tau \phi_{\rm{ac}} \in [-\infty,\infty).\] We deduce Theorem~\ref{mainthm:det-form} from a pair of inequalities for fixed finite-dimensional representations. \begin{thm}\label{thm:det-form-lower} Assume that $h > -\infty$, let $\eps > 0$, and let $O$ be a neighbourhood of $\phi$. Then there are a positive integer $d_0$ and neighbourhoods $V$ of $\tau$ and $U$ of $\phi_{\rm{sing}}$ such that the following holds: \begin{quote} If $\pi$ is a $d$-dimensional representation of $\A$ such that \[d \ge d_0,\quad \tr_d\circ \pi \in V, \quad \hbox{and} \quad \X(\pi,U) \ne \emptyset,\] then \[\frac{\vol_{2d}\X(\pi,O)}{c(1,d)} \ge e^{(h-\eps)d}.\] \end{quote} \end{thm} \begin{thm}\label{thm:det-form-upper} Let $h_1 > h$. Then there are a positive integer $d_0$ and neighbourhoods $V$ of $\tau$ and $O$ of $\phi$ such that the following holds: \begin{quote} If $\pi$ is a $d$-dimensional representation of $\A$ such that $d\ge d_0$ and $\tr\circ \pi \in V$, then \[\frac{\vol_{2d}\X(\pi,O)}{c(1,d)} \le e^{h_1d}.\] \end{quote} \end{thm} \begin{proof}[Proof of Theorem~\ref{mainthm:det-form} from Theorems~\ref{thm:det-form-lower} and~\ref{thm:det-form-upper}] As explained at the start of the chapter, it suffices to prove Theorem~\ref{mainthm:det-form} assuming that $k=1$. First assume that $h > -\infty$, let $\eps > 0$, and let $O$ be a neighbourhood of $\phi$. Let the integer $d_0$ and neighbourhoods $V$ of $\tau$ and $U$ of $\phi_{\rm{sing}}$ be given by Theorem~\ref{thm:det-form-lower}. By passing to a subsequence of $\bspi$ if necessary and applying Corollary~\ref{cor:typ-trans}, we may assume that $\X(\pi_n,U)$ is nonempty for all $n$. Since also $\tr_{d_n}\circ \pi_n \to \tau$, all the hypotheses of Theorem~\ref{thm:det-form-lower} are now satisfied by $\pi_n$ for all sufficiently large $n$, giving the lower bound \[\limsup_{n\to \infty}\frac{1}{d_n}\log\frac{\vol_{2d_n}\X(\pi_n,O)}{c(1,d_n)} \ge h-\eps.\] Since $\eps$ and $O$ are arbitrary, this proves that $\rmh_{\bspi}(\phi) \ge h$. This inequality also holds trivially in case $h = -\infty$. On the other hand, let $h_1 > h$, and let the neighbourhoods $V$ of $\tau$ and $O$ of $\phi$ be given by Theorem~\ref{thm:det-form-upper}. This theorem also applies to $\pi_n$ for all sufficiently large $n$ (this time without needing to pass to a subsequence), giving the upper bound \[\limsup_{n\to \infty}\frac{1}{d_n}\log \frac{\vol_{2d_n}\X(\pi_n,O)}{c(1,d_n)} \le h_1.\] By the arbitrariness of $h_1$, this proves that $\rmh_{\bspi}(\phi) \le h$. \end{proof} \section{Proof of the lower bound}\label{sec:det-form-lower} This section proves Theorem~\ref{thm:det-form-lower}. Assume that $h > -\infty$, let $\eps > 0$, and let $O$ be a neighbourhood of $\phi$. 
Since ${\phi_{\rm{ac}} \perp \phi_{\rm{sing}}}$, part (a) of Corollary~\ref{cor:sums} gives neighbourhoods $U$ of $\phi_{\rm{sing}}$ and $W$ of $\phi_{\rm{ac}}$ such that \begin{equation}\label{eq:det-form-O-lower} \X(\pi,O) \supset \X(\pi,U) + \X(\pi,W) \end{equation} for any representation $\pi$. In particular, if $\pi$ is finite-dimensional and $\X(\pi,U)$ is nonempty, then $\X(\pi,O)$ contains a translate of $\X(\pi,W)$, and so the volume of the former is at least the volume of the latter. To use this lower bound, we prove Theorem~\ref{thm:det-form-lower} in two steps. The first handles the special case when $\phi$ itself is absolutely continuous with respect to $\tau$, and the second deduces the general case from that special case. The first step requires a simple volume-comparison estimate. We present this next as a separate lemma. We use it again in the `reverse direction' in the next section. \begin{lem}\label{lem:vol-compare} Let $\phi \in \A^\ast_+$, let $a \in \A$, and let $\psi := \phi(a^\ast(\cdot)a)$. For any neighbourhood $O$ of $\psi$, there is a neighbourhood $O'$ of $\phi$ such that \[\vol_{2d}\X(\pi,O)\ge \Det\,|\pi(a)|^2\cdot \vol_{2d}\X(\pi,O')\] for any $d$-dimensional representation $\pi$. \end{lem} \begin{proof} If $O$ is a neighbourhood of $\psi$, then Lemma~\ref{lem:lin-maps} gives a neighbourhood $O'$ of $\phi$ such that \[\vol_{2d}\X(\pi,O) \ge \vol_{2d}\big(\pi(a)[\X(\pi,O')]\big) = \Det\,|\pi(a)|^2\cdot \vol_{2d}\X(\pi,O').\] The determinant is squared here because $\pi(a)$ is a complex linear map in $d$ complex dimensions, but we must treat it as a real linear map in $2d$ real dimensions for the purpose of computing these volumes. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:det-form-lower}] Let $\tau$ be associated to $\l$ by the cyclic tracial vector $\xi$, and let $\t{\tau}$ be the normal tracial state on $\l(\A)''$ defined by $\xi$, so $\tau = \t{\tau}\circ \l$. \vspace{7pt} \emph{Step 1: absolutely continuous case.}\quad Assume first that $\phi \ll \tau$. In this case there is no need for a neighbourhood of $\phi_{\rm{sing}}$, so it remains to find $d_0$ and $V$. By Theorem~\ref{thm:RadNik}, $\phi$ is associated to $\l$ by a vector of the form $T\xi$ for some non-negative unbounded operator $T$ affiliated to $\l(\A)''$, and then Definition~\ref{dfn:FK-det-ac} gives $h = 2\log\Delta_{\t{\tau}} T$. Now Lemma~\ref{lem:unif-cts} and Proposition~\ref{prop:Kap+} give a positive invertible element $a$ of $\A$ such that (i) the vector $y := \l(a)\xi$ has type lying in $O$ and (ii) we have \begin{equation}\label{eq:approx-by-invertible-2} 2\tau(\log a) = 2\log\Delta_{\tau}(a) > h - \eps/3. \end{equation} The type of $y$ is equal to $\tau(a^\ast(\cdot) a)$, by Lemma~\ref{lem:Ad-formula}, so Lemma~\ref{lem:vol-compare} gives a neighbourhood $O'$ of $\tau$ such that \begin{align*} \vol_{2d} \X(\pi,O) &\ge (\Det\,\pi(a))^2\cdot \vol_{2d}\X(\pi,O')\\ &= \exp(2d(\tr_d\circ \pi)(\log a))\cdot \vol_{2d}\X(\pi,O') \end{align*} for any $d$-dimensional representation $\pi$ (note: we can omit the absolute value from $\pi(a)$ because $a$ itself is positive). To finish, let $V$ be a neighbourhood of $\tau$ and $d_0$ a positive integer such that, whenever $d\ge d_0$ and $\tr_d\circ \pi \in V$, both of the following hold: \[2(\tr_d\circ \pi)(\log a) > 2\tau(\log a) - \eps/3\] and \[\frac{\vol_{2d}\X(\pi,O')}{c(1,d)} \ge e^{-d\eps/3}.\] The first of these is possible because that inequality itself defines a neighbourhood of $\tau$.
The second is possible by Corollary~\ref{cor:conc0} and the lower bound from Lemma~\ref{lem:normalized}. Combining these lower bounds with the inequalities obtained above, we arrive at \[\frac{\vol_{2d} \X(\pi,O)}{c(1,d)} \ge \exp\big(2d\tau(\log a) - d\eps/3 - d\eps/3\big) \ge \exp(hd - \eps d)\] as required. \vspace{7pt} \emph{Step 2: general case.}\quad Now allow $\phi$ to be arbitrary. Given the neighbourhood $O$ of $\phi$, choose neighbourhoods $U$ of $\phi_{\rm{sing}}$ and $W$ of $\phi_{\rm{ac}}$ so that~\eqref{eq:det-form-O-lower} holds. Provided $\X(\pi,U)$ is nonempty, this implies that \[\vol_{2d}\X(\pi,O) \ge \vol_{2d}\X(\pi,W).\] Now Step 1 gives suitable $d_0$ and $V$ for this neighbourhood $W$ of $\phi_{\rm{ac}}$. \end{proof} \section{Proof of the upper bound}\label{sec:det-form-upper} This section proves Theorem~\ref{thm:det-form-upper}. The first idea is similar to the proof of Theorem~\ref{thm:det-form-lower}. This time we must find a neighbourhood $O$ of $\phi$ for which we can bound $\vol_{2d}\X(\pi,O)$ from above. To do this, we first find suitable separate neighbourhoods $U$ of $\phi_{\rm{sing}}$ and $W$ of $\phi_{\rm{ac}}$, and then use part (b) of Corollary~\ref{cor:sums} to obtain a neighbourhood $O$ of $\phi$ such that \begin{equation}\label{eq:det-form-O-upper} \X(\pi,O) \subset \X(\pi,U) + \X(\pi,W) \end{equation} for any representation $\pi$. However, compared to Theorem~\ref{thm:det-form-lower}, it takes more work to turn~\eqref{eq:det-form-O-upper} into the necessary volume upper bound. Two ingredients are needed: an upper bound on the volume of $\X(\pi,W)$, and an upper bound on the \emph{covering numbers} -- not just the volume -- of $\X(\pi,U)$. We derive these two upper bounds separately, and then show how they combine in the proof of the full theorem. Both of those separate upper bounds depend in different ways on a further technical lemma, which we explain first of all. \begin{lem}\label{lem:into-subspace} Let $\phi \in \A^\ast_+$, let $\eps > 0$, and let $r^2:= \max\{\phi(e),1\}$. Suppose that $a \in \A$ satisfies $0 \le a \le 1$ and $\phi(a) < \eps^3$. Then there are neighbourhoods $V$ of $\tau$ and $O$ of $\phi$ such that the following holds: \begin{quote} If $\pi$ is a $d$-dimensional representation of $\A$ such that $\tr_d\circ \pi \in V$, then there is a subspace $M$ of $\bbC^{(d)}$ such that \[\dim M \leq \frac{1-\tau(a) + \eps}{1 - \eps}\,d\] and \[\X(\pi,O) \subset (M\cap B_{2r}(0)) + B_\eps(0).\] \end{quote} \end{lem} Intuitively, this lemma uses $a$ to `trap' $\X(\pi,O)$ close to the lower-dimensional subspace $M$. \begin{proof} Let \[O := \{\phi' \in \A^\ast_+:\ \phi'(e) < 4r^2,\ \phi'(a) < \eps^3\}.\] This is a neighbourhood of $\phi$ by our assumptions. In addition, let \[V := \{\tau' \in \A^\ast_+:\ \tau'(a) > \tau(a) - \eps\}.\] Now let $\pi$ be a $d$-dimensional representation satisfying $\tr_d\circ \pi \in V$. Consider the matrix $A := \pi(a)$. It still satisfies $0\le A \le 1$. Let \[M := \sum_{\l \le \eps}\ker (A - \l)\] and let $P$ be the orthogonal projection of $\bbC^{(d)}$ onto $M$. Then $(1-\eps)P \le 1 - A$, and hence \[(1 - \eps)\dim M \le \Tr_d(1-A) = d(1-(\tr_d\circ \pi)(a)) < (1 - \tau(a) + \eps)d,\] where the last inequality holds because $\tr_d\circ \pi \in V$. On the other hand, we also have $\eps (1-P) \le A$, so any $x \in \X(\pi,O)$ satisfies \[\eps\|(1 - P)x\|^2 = \langle \eps(1- P)x,x\rangle \le \langle Ax,x\rangle = \langle \Phi^\pi_x,a\rangle < \eps^3\] and also $\|x\|^2 = \Phi^\pi_x(e) < 4r^2$.
Therefore any such $x$ lies within distance $\eps$ of $M \cap B_{2r}(0)$. \end{proof} Our next step is to prove Theorem~\ref{thm:det-form-upper} in the special case when $\phi$ is absolutely continuous. The proof includes our first application of Lemma~\ref{lem:into-subspace}. We use that lemma a second time during the proof of Lemma~\ref{lem:pre-reduce-to-ac} below. \begin{proof}[Proof of Theorem~\ref{thm:det-form-upper} in absolutely continuous case] If $\phi \ll \tau$, Theorem~\ref{thm:RadNik} associates $\phi$ to $\l$ by the vector $x = T\xi$ for some non-negative unbounded operator $T$ affiliated to $\l(\A)''$, and then Definition~\ref{dfn:FK-det-ac} gives $h = 2\log\Delta_{\t{\tau}} T$. The proof is divided into two further cases according to whether $T$ is singular or nonsingular. \vspace{7pt} \emph{Case 1: singular.}\quad In this case $h = -\infty$. On the other hand, since $T$ is singular, the orthogonal projection $Q$ from $H_\l$ to $\ker T$ lies in $\l(\A)''$ and is nonzero, hence satisfies \[0\le Q \le 1 \qquad \hbox{and} \qquad \delta := \t{\tau} Q = \|Q\xi\|^2 > 0,\] because $\xi$ is separating for $\l(\A)''$. It also satisfies $\langle Qx,x\rangle = \|QT\xi\|^2 = 0$. Now let $\eps > 0$, and assume that $\eps < \min\{\delta/8,1\}$. The Kaplansky density theorem~\cite[Theorem 44.1]{ConOT} applied to $Q$ gives some $a \in \A$ such that \[0 \le a \le 1, \qquad \tau(a) > \delta/2, \qquad \hbox{and} \qquad \phi(a) = \langle \l(a)x,x\rangle < \eps^3.\] (Note that we have used here that $\tau$ is strong-operator continuous on $\l(\A)$, not just ultraweakly continuous.) Using this $a$, Lemma~\ref{lem:into-subspace} gives neighbourhoods $O$ of $\phi$ and $V$ of $\tau$. Provided $\tr_d\circ \pi \in V$, the conclusion of that lemma gives an orthogonal decomposition $M\oplus M^\perp$ of $\bbC^{(d)}$ under which $\X(\pi,O)$ is mapped to a subset of \[B'_{2r}(0)\times B''_\eps(0),\] where $B'$ and $B''$ denote open balls in $M$ and $M^\perp$, respectively, and where $r$ is $\max\{\sqrt{\phi(e)},1\}$. Letting $\a := \dim M/d$, and inserting~\eqref{eq:ball-vol} and then~\eqref{eq:ckn-asymp}, it follows that \begin{align*} \vol_{2d}\X(\pi,O) &\le \vol_{2\a d}(B'_{2r}(0))\cdot \vol_{2(1-\a) d}B''_\eps(0)\\ & \le c(1,\a d)\cdot (2r)^{2\a d}\cdot c(1,(1-\a)d)\cdot \eps^{2(1-\a) d}\\ &= e^{o(d)}\cdot \frac{(e\pi)^d\cdot (2r)^{2\a d}\cdot \eps^{2(1-\a) d}}{\a^{\a d}(1-\a)^{(1-\a)d}d^d}\\ &= e^{o(d)}\cdot c(1,d)\cdot e^{\rmH(\a,1-\a)d}\cdot (2r)^{2\a d}\cdot \eps^{2(1-\a) d}, \end{align*} where on the last line $\rmH(\a,1-\a)$ is binary Shannon entropy, and where the quantity hidden inside the notation `$o(d)$' is a function of $d$ alone. On the other hand, the bound from Lemma~\ref{lem:into-subspace} gives \[1 - \a > 1 - \frac{1-\tau(a) + \eps}{1-\eps} = \frac{\tau(a) - 2\eps}{1-\eps} > \delta/4.\] Inserting this into the volume upper bound above, re-arranging, and recalling that binary entropy is at most $\log 2$, we have shown that \[\frac{\vol_{2d}\X(\pi,O)}{c(1,d)} \le e^{o(d)}\cdot 2^d\cdot (2r)^{2d}\cdot \eps^{\delta d/2}.\] Since $\eps$ is chosen after $r$ and $\delta$, this upper bound can be made smaller than any chosen exponential in $d$ for all sufficiently large $d$. \vspace{7pt} \emph{Case 2: nonsingular.}\quad Let $\eps > 0$ and let $h_1 > h$. Let $U$ be a neighbourhood of $\tau$ such that $\X(\pi,U) \subset B_{1+\eps}(0)$ for any $d$-dimensional representation $\pi$. 
Since $T$ is nonsingular, Proposition~\ref{prop:Kap+} gives a positive element $a$ of $\A$ such that $2\tau(\log a) < h_1$ and such that the vector $y := \l(a^{-1})x$ has type that lies in $U$. Now Lemma~\ref{lem:vol-compare} provides a neighbourhood $O$ of $\phi$ such that \[\vol_{2d}\X(\pi,U) \ge \Det |\pi(a^{-1})|^2\cdot \vol_{2d}\X(\pi,O)\] for any $d$-dimensional representation $\pi$. Re-arranging and using our choice of $U$ and the properties of $a$, this becomes \begin{align*} \vol_{2d}\X(\pi,O) &\le \vol_{2d}B_{1+\eps}(0)\cdot \Det|\pi(a)|^2\\ &\le c(1,d)\cdot (1+\eps)^{2d}\cdot \exp(2\Tr_d(\pi(\log a)))\\ &= c(1,d)\cdot (1+\eps)^{2d}\cdot \exp(2d(\tr_d\circ \pi)(\log a)), \end{align*} where the second inequality follows from~\eqref{eq:ball-vol}. Choosing a small enough neighbourhood $V$ of $\tau$, if $\tr_d\circ \pi \in V$ then the quantity above is at most \[c(1,d)\cdot (1+\eps)^{2d}\cdot \exp(2d\tau(\log a)+\eps d) \le c(1,d)\cdot (1+\eps)^{2d}\cdot e^{(h_1 + \eps)d}.\] By the arbitrariness of $\eps$ and $h_1$, this completes the proof. \end{proof} To prove Theorem~\ref{thm:det-form-upper} for a general choice of $\phi$, we need to combine the estimate above with some control over the approximately typical vectors for $\phi_{\rm{sing}}$. This is provided by the next lemma, whose proof contains our second appeal to Lemma~\ref{lem:into-subspace}. \begin{lem}\label{lem:pre-reduce-to-ac} Assume that $\phi \perp \tau$. For any $\eps > 0$ there are neighbourhoods $V$ of $\tau$ and $O$ of $\phi$ such that the following holds: \begin{quote} If $\pi$ is a $d$-dimensional representation such that $\tr_d\circ \pi \in V$, then there is a subset $F\subset \bbC^{(d)}$ such that \[|F| \le e^{\eps d} \quad \hbox{and} \quad \X(\pi,O) \subset F + B_\eps(0).\] \end{quote} \end{lem} \begin{proof} Let $r^2 := \max\{\phi(e),1\}$ and let $\eps \in (0,r)$. Let $\phi$ be associated to the representation $\kappa$ on $K$ by the cyclic vector $x$. Let $\g := \kappa \oplus \l$. Since $\kappa$ and $\l$ are disjoint, the orthogonal projection $P$ from $K\oplus H$ onto $H$ is a central element of $\g(\A)''$~\cite[Proposition 5.2.4]{Dix--Cstar}. By the Kaplansky density theorem~\cite[Theorem 44.1]{ConOT}, there exists $a\in \A$ such that $0 \le a \le 1$ and also \begin{equation}\label{eq:a-props} \phi(a) = \langle \kappa(a)x,x\rangle < \eps^3\qquad \hbox{and} \qquad \tau(a) = \langle \l(a)\xi,\xi\rangle > 1 - \eps. \end{equation} With this choice of $a$, let $V$ and $O$ be given by Lemma~\ref{lem:into-subspace}. Let $\pi$ be a $d$-dimensional representation of $\A$ with $\tr_d\circ \pi \in V$, and then let $M$ be the subspace of $\bbC^{(d)}$ provided by that lemma. Finally, let $F$ be a maximal $\eps$-separated subset of $M\cap B_{2r}(0)$. On the one hand, maximality gives \[\X(\pi,O) \subset (M\cap B_{2r}(0)) + B_\eps(0) \subset F + B_{2\eps}(0).\] On the other hand, the balls $M \cap B_{\eps/2}(v)$ for $v \in F$ must be disjoint and all contained in $M\cap B_{3r}(0)$, so a volume comparison gives \[|F| \le \frac{(3r)^{\dim M}}{(\eps/2)^{\dim M}} = (6r/\eps)^{\dim M} \le \exp\Big(\frac{2\eps\log (6r/\eps)}{1-\eps}d\Big),\] where the last inequality holds upon inserting the bound on $\dim M$ from Lemma~\ref{lem:into-subspace} and then simplifying it using the second inequality in~\eqref{eq:a-props}. Since $\eps$ is arbitrary and $\eps\log (6r/\eps) \to 0$ as $\eps \to 0$, this completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:det-form-upper}] Let $h_1 > h$.
By the absolutely continuous case of the theorem, already proved above, there are neighbourhoods $V_1$ of $\tau$ and $U_1$ of $\phi_{\rm{ac}}$ such that \[\vol_{2d} \X(\pi,U_1) \le c(1,d)\cdot e^{h_1d}\] whenever $\tr_d\circ \pi \in V_1$. By Lemma~\ref{lem:unif-cts}, choose $\delta > 0$ and a smaller neighbourhood $U$ of $\phi_{\rm{ac}}$ so that we also have \[\X(\pi,U) + B_\delta(0) \subset \X(\pi,U_1).\] Now let $\eps > 0$, and reduce it if necessary so that $\eps < \delta$. By Lemma~\ref{lem:pre-reduce-to-ac}, there are neighbourhoods $V_2$ of $\tau$ and $W$ of $\phi_{\rm{sing}}$ such that, provided $\tr_d\circ \pi \in V_2$, there is a subset $F \subset \bbC^{(d)}$ such that \[|F| \le e^{\eps d} \quad \hbox{and} \quad \X(\pi,W) \subset F + B_\delta(0).\] Finally, let $V := V_1\cap V_2$, and let $O$ be a neighbourhood of $\phi$ such that~\eqref{eq:det-form-O-upper} holds for this $U$ and $W$, as described at the beginning of this section. If $\tr_d\circ \pi \in V$, then~\eqref{eq:det-form-O-upper} combines with the containments above like this: \[\X(\pi,O) \subset \X(\pi,U) + \X(\pi,W) \subset \X(\pi,U) + F + B_\delta(0) \subset \X(\pi,U_1) + F.\] As a result, we have \[\vol_{2d}\X(\pi,O) \le |F|\cdot \vol_{2d}\X(\pi,U_1) \le c(1,d)\cdot e^{(h_1+\eps)d}.\] By the arbitrariness of $h_1 > h$ and $\eps > 0$, this completes the proof. \end{proof} \section{Some first corollaries}\label{sec:two-cors} Let us retain the notation of Theorem~\ref{mainthm:det-form}, but now return to allowing any value of $k$. \begin{cor}\label{cor:det-form} Assume that $\tr_{d_n}\circ \pi_n \to \tau$. \begin{enumerate} \item[a.] The map $\phi$ is asymptotically associated to $\bspi$ if and only if $\phi_{\rm{sing}}$ is. \item[b.] We have $\rmh_{\bspi}(\phi_{\rm{ac}}) = \log\Delta_\tau\phi_{\rm{ac}}$ and \[\rmh_{\bspi}(\phi) = \left\{\begin{array}{ll}\rmh_{\bspi}(\phi_{\rm{ac}}) & \quad \hbox{if $\phi_{\rm{sing}}$ is asymptotically associated to $\bspi$}\\ -\infty & \quad \hbox{otherwise.} \end{array}\right.\] \item[c.] If $\rmh_{\bspi}(\phi) > -\infty$, then $\phi_{\rm{ac}}$ is associated to $\l^{(k)}$ by a $k$-tuple that is cyclic and separating for $\l^{(k)}(\A)''$, and so \[\pi_\phi \gtrsim \pi_{\phi_{\rm{ac}}} \cong \l^{(k)}.\] \end{enumerate} \qed \end{cor} Corollary~\ref{cor:det-form} somewhat limits the novelty of AP entropy. It is really just a Fuglede--Kadison log-determinant, \emph{provided} one knows which positive definite functions are asymptotically associated to a given AP approximation $\bspi$. By passing to a subsequence, one can always assume that $\bspi$ Fell-global converges, and then Theorem~\ref{mainthm:det-form} shows that the AP entropy function $\rmh_{\bspi}$ depends only on the values of the two limits $\tau$ and $\lim_n \ol{\S_\bullet(\pi_n)}$. If $\tr_{d_n}\circ \pi_n \to \tau$ but $\bspi$ does not Fell-global converge, then Theorem~\ref{mainthm:det-form} may give different values of AP entropy along different subsequences, but the only two possible values are $\log\Delta_\tau \phi_{\rm{ac}}$ or $-\infty$ (and sometimes these are equal). In ergodic theory, an important open problem asks whether the sofic entropy of a measure-preserving system along any sofic approximation must always equal either one particular value or $-\infty$. This would mean that the sofic entropy of a measure-preserving system is essentially `unambiguous', in that it does not depend on a choice of sofic approximation provided it is finite.
Corollary~\ref{cor:det-form} answers the analogous question positively for AP entropy. On the other hand, it is known that the answer is negative for the topological variant of sofic entropy~\cite{AirBowLin22}. If the answer is negative for sofic entropy itself, then any general method for evaluating or estimating sofic entropy must depend on the particular sofic approximation through more than just its set of asymptotically associated stationary processes. I do not know of good candidates for what additional features of a sofic approximation could play a role here. \begin{prob} Is there any `non-linear' adaptation of our proof of Corollary~\ref{cor:det-form} that can reveal new classes of measure-preserving systems whose sofic entropy is unambiguous in the sense described above? \end{prob} By combining Theorem~\ref{mainthm:det-form} with the multiplicativity of Fuglede--Kadison determinants (see~\cite[Proposition 2.5]{HaaSch07}), we obtain a transformation law for AP entropies: \begin{cor}\label{cor:AP-ent-transf-law} Let $\phi,\psi \in \B(\A;\rmM_k)_+$, let $a \in \rmM_k(\A)$ be invertible, and suppose they are all related as in equation~\eqref{eq:psi-phi} of Lemma~\ref{lem:Ad-formula}. Let $\bspi = (\pi_n)_{n\ge 1}$ be an AP sequence such that $\tr_{d_n}\circ \pi_n \to \tau$. Then $\phi$ is asymptotically associated to $\bspi$ if and only if $\psi$ is, and in that case \[\rmh_{\bspi}(\psi) = 2\log \Delta_{\tau\otimes \Tr_k}|a| + \rmh_{\bspi}(\phi).\] \qed \end{cor} Corollary~\ref{cor:AP-ent-transf-law} can also be proved directly from Lemma~\ref{lem:vol-compare} in much the same way as Theorem~\ref{thm:det-form-lower} itself. One can easily construct examples as in Corollary~\ref{cor:AP-ent-transf-law} in which $\pi_\phi \cong \pi_\psi$ but $\Delta_\tau |a| \ne 1$. Any such example shows that $\rmh_{\bspi}(\phi)$ is not an invariant of the equivalence class of $\pi_\phi$ alone. In addition to the corollaries above, some more basic results from earlier chapters are not used during the proof of Theorem~\ref{mainthm:det-form}, and can in hindsight be seen as special cases of that theorem. Part (a) of Proposition~\ref{prop:mollify} is an example. \section{Connections to previous work}\label{sec:prev-det-form} \subsection*{\emph{Determinantal entropy formulas in ergodic theory}} Fuglede--Kadison determinants already have an established role in ergodic theory as formulas for the entropy of certain special kinds of measure-preserving system. Most of those examples are systems of algebraic origin. In such a system, a countable discrete group $\G$ acts by automorphisms on a compact Abelian group, necessarily preserving its Haar measure. A system of this kind can be constructed from any element $f$ of the integral group ring $\bbZ[\G]$ by letting $\G$ act on the Pontrjagin dual of $\bbZ[\G]/\bbZ[\G]f$; we denote this example by $\boldsymbol{X}_f$. When $\G$ is $\bbZ$ or $\bbZ^d$, the entropy of the resulting measure-preserving system is equal to the Mahler measure of $f$: see~\cite[Chapter V]{Sch95} and the references given there. That Mahler measure can be recognized as the Fuglede--Kadison determinant of $f$ when regarded as an element of the group von Neumann algebra $\mathfrak{L}\G$. Starting from this observation, Deninger conjectured that this determinant should equal the entropy of $\boldsymbol{X}_f$ whenever $\G$ is amenable, and proved this under various extra hypotheses: see~\cite{Den06,Den09,Den12} and also the joint work~\cite{DenSch07}. 
Li improved on these with the general result in~\cite{Li12}, which assumes only that $f$ is invertible in $\mathfrak{L}\G$. Finally, Li and Thom proved the full conjecture in~\cite{LiTho14}, and generalized it still further to a formula for the entropy of systems constructed by Pontrjagin duality from any $\bbZ[\G]$-module of type FL. Further generalizations have weakened the assumption of amenability to soficity. Finitary approximations of Fuglede--Kadison determinants arising from sofic approximations to groups already appear in~\cite{EleSza05}, which verifies large cases of L\"uck's determinant conjecture as a result. Equalities relating entropy and Fuglede--Kadison determinants also appeared in Lyons' works~\cite{Lyons05,Lyons10}. These concern asymptotically counting spanning trees in finite connected graphs using their random weak limits. He shows that this `tree entropy' of the random limit graph is given by a Fuglede--Kadison determinant of its Laplacian. His setting does not require a group action, but it yields results for sofic groups as a special case. Most recently, this theme was taken up by Hayes. In~\cite{Hayes16}, he generalized Deninger's conjecture to equate the sofic entropy of $\boldsymbol{X}_f$ with the Fuglede--Kadison determinant of $f \in \bbZ[\G]$ whenever $\G$ is sofic. His main result actually allows larger finite matrices over $\bbZ[\G]$ in place of $f$, which approaches a generalization of the Li--Thom theorem to sofic groups, but Hayes also shows that the full generalization of the Li--Thom theorem is false. See also the alternative proof with further refinements in~\cite{Hayes21}. Alongside these papers, Hayes has developed other connections between sofic entropy and representation theory. In~\cite{Hayes18} he proves that an arbitrary measure-preserving $\G$-system can have completely positive sofic entropy only if its Koopman representation is contained in the infinite inflation of the regular representation. In~\cite{Hayes17}, he computes the sofic entropy of a stationary Gaussian process over $\G$ in terms of the real orthogonal representation that defines its first chaos, generalizing one of the results from~\cite{HamParTho08} for single transformations. Hayes' Fuglede--Kadison determinant formula is the closest ergodic theoretic predecessor of our Theorem~\ref{mainthm:det-form}. Although the analogies between the two settings are limited, several elements of our proof of Theorem~\ref{mainthm:det-form} also have precedents in those earlier papers. In particular, our use of Lemma~\ref{lem:pre-reduce-to-ac} resembles the key technical steps in~\cite[Sections 4.1--2]{Hayes18}, and the ways in which $\phi_{\rm{sing}}$ and $\phi_{\rm{ac}}$ control whether $\rmh_{\bspi}(\phi)$ equals $-\infty$ (Corollary~\ref{cor:det-form} above) closely resemble the main theorem in~\cite{Hayes17}. Our determinantal formula may also be worth comparing with the formula for the Ihara zeta function of `measure graphs' presented in~\cite{LenPogSch19}, but I have not learned about this thoroughly. \subsection*{\emph{Versions of Szeg\H{o}'s theorem}} Fuglede--Kadison determinants also appear in generalizations of Szeg\H{o}'s limit theorem for positive definite Toeplitz determinants. This very classical result is a cornerstone of the study of positive definite functions on $\bbZ$ and orthogonal polynomials on the unit circle, and we refer to it again several times later. 
At those points I generally cite the standard texts~\cite{SimOPUCI,SimOPUCII,SimSzeg} and use terminology that matches Simon's. If $\phi$ is a positive definite function on $\bbZ$, then Bochner's theorem identifies it as the Fourier--Stieltjes transform of a finite Borel measure $\mu$ on the circle group $\bbT$. For each $n$, let $\phi_{(n)}$ be the $n$-by-$n$ positive semidefinite Toeplitz matrix obtained by restricting $\phi$ to $\{0,1,2,\dots,n-1\}$. Then Szeg\H{o}'s theorem is the equality \begin{equation}\label{eq:Szeg} \lim_n \frac{1}{n}\log\det \phi_{(n)} = \int \log \frac{d\mu_{\rm{ac}}}{dm_{\bbT}}\,dm_\bbT, \end{equation} where $\mu_{\rm{ac}}$ is the absolutely continuous part of $\mu$, and where both sides may equal $-\infty$. See~\cite[Theorem 1.6.1]{SimSzeg}. Once again, the integral on the right-hand side of~\eqref{eq:Szeg} can be recognized as a Fuglede--Kadison determinant. This time the von Neumann algebra is $L^\infty(m_\bbT)$, and we regard the integrable function $d\mu_{\rm{ac}}/dm_\bbT$ as an operator affiliated to it. Comparing with Theorem~\ref{mainthm:det-form}, we now find a precedent for other important features of that theorem. For example, the singular part of $\mu$ has no effect on the limit in~\eqref{eq:Szeg}; in Theorem~\ref{mainthm:det-form} the analog is true provided $\phi_{\rm{sing}}$ is asymptotically associated to $\bspi$. This link between Fuglede--Kadison determinants and Szeg\H{o}'s theorem was previously appreciated in classical works on Mahler measure, and also by Lyons in~\cite{Lyons05} and Li and Thom in~\cite{LiTho14}. However, their results apply the Fuglede--Kadison determinant to rather special choices of operators, and not all aspects of Szeg\H{o}'s theorem make an appearance. In particular, there is no separation into singular and absolutely continuous parts in those papers. If $\A = C^\ast \G$ and $\G$ is amenable, then the methods of the present chapter can be adapted to show that \begin{equation}\label{eq:Szeg-Folner} \lim_n \frac{1}{|F_n|}\log\det \phi_{(F_n)} = \log \Delta_\tau \phi_{\rm{ac}}, \end{equation} along any F\o lner sequence $(F_n)_{n\ge 1}$ of $\G$, where $\tau$ is the regular tracial state. This provides a canonical generalization of both sides of~\eqref{eq:Szeg}, and generalizes another of the main results from~\cite{LiTho14}. I expect to treat this result separately in a future paper. If $\A = C^\ast \G$ but $\G$ is not amenable, then AP entropy is not really a straightforward generalization of the left-hand side of~\eqref{eq:Szeg-Folner}. Indeed, in ergodic theory, the inability to define a good notion of entropy for a non-amenable acting group using its finite subsets is well-known. This is precisely the obstacle that Bowen's introduction of sofic entropy overcame. One can formulate a notion of entropy defined using finite subsets of a non-amenable group, but it turns out to have wild properties~\cite{Bur17}, and we do not present its representation theoretic analog. In another direction, a sequence of papers by Popescu introduced and studied a kind of limiting log-determinant entropy for representations of free semigroups. Popescu used these in developing a theory of Toeplitz matrices and dilations for such semigroups: see~\cite{Popescu96,Popescu01} and the further references given there. But the semigroups of operators in those works are highly non-invertible and very different from representations of free groups.
I have not found a link between Popescu's entropy in those papers and any of the quantities studied in the present work. So, for non-amenable groups, or outside the class of group C$\sp*$-algebras entirely, the connection between Theorem~\ref{mainthm:det-form} and~\eqref{eq:Szeg} remains somewhat loose. However, for representations of free groups, we find a much tighter link in Part~\ref{part:free} below. In that setting, we first meet a generalization of Verblunsky coefficients: see Section~\ref{sec:restrict-extend}. Then our proof of Theorem~\ref{mainthm:annealed-formula} yields a very complete generalization of Verblunsky's form of Szeg\H{o}'s theorem (see~\cite[Theorem 1.8.6]{SimSzeg}). We return to this at the end of Section~\ref{sec:three-entropies}. \part{RANDOM AP SEQUENCES FOR FINITELY GENERATED FREE GROUPS}\label{part:free} \chapter{Preliminaries on large deviations principles}\label{chap:LDP-prelims} Large deviations theory is treated well in several standard texts, including~\cite{Var--LDPbook2} and~\cite{DemZei--LDPbook}. This chapter recalls the formulation that we use in the sequel, and a few technical results that we need. Many readers will be able to skip this chapter and follow references back to it if necessary. Let $\X$ be a complete separable metric space, let $(\mu_n)_{n\ge 1}$ be a sequence of Borel probability measures on it, and let $I$ be a function from $\X$ to $[0,\infty]$. \begin{dfn}[Large deviations principle]\label{dfn:LDP} The sequence $(\mu_n)_{n\ge 1}$ obeys the \textbf{large deviations principle} (`\textbf{LDP}') with \textbf{rate function} $I$ if the following hold: \begin{itemize} \item[i.] (Upper bound) If $x \in \X$ and $a < I(x)$, then there is a neighbourhood $U$ of $x$ such that \[\mu_n U \le e^{-an + o(n)}.\] \item[ii.] (Lower bound) If $x \in \X$ and $I(x) < a < \infty$, then any neighbourhood $U$ of $x$ satisfies \[\mu_n U \ge e^{-a n - o(n)}.\] \item[iii.] (Exponential tightness) For every $a > 0$ there is a compact subset $K$ of $\X$ such that \[\mu_n (\X\setminus K) \le e^{-an + o(n)}.\] \end{itemize} \end{dfn} Put together, conditions (i) and (ii) from Definition~\ref{dfn:LDP} imply that \[I(x) = -\inf_U \limsup_{n\to \infty} \frac{1}{n}\log \mu_n U = -\inf_U \liminf_{n\to \infty} \frac{1}{n}\log \mu_n U,\] where $U$ runs over all neighbourhoods of $x$. These in turn imply that $I$ must be lower semicontinuous because of the following general principle. \begin{lem}\label{lem:upper-semicts} Let $\X$ be a topological space, let $\cal{U}$ be an open cover of it, let $F$ be a function from $\cal{U}$ to $[-\infty,\infty)$, and let \[f(x) := \inf\{F(U):\ U\in \cal{U},\,U\ni x\} \qquad (x \in \X).\] Then $f$ is upper semicontinuous. \end{lem} \begin{proof} If $f(x) < a$, then there must be some $U \in \cal{U}$ such that $U \ni x$ and ${F(U) < a}$, and this implies that $f(x') < a$ for all $x' \in U$. \end{proof} We apply Lemma~\ref{lem:upper-semicts} to various other examples later, sometimes with the opposite sign to conclude lower semicontinuity. In addition, conditions (i)--(iii) from Definition~\ref{dfn:LDP} imply that $I$ must be proper, meaning that its level set $\{I \le a\}$ is compact for every $a \in [0,\infty)$. An alternative to Definition~\ref{dfn:LDP} is perhaps more standard.
This alternative assumes explicitly that $I$ is lower semicontinuous and proper, and then asserts that \begin{equation}\label{eq:LDP2-1} \mu_n C \le \exp\big(-\inf_{x \in C}I(x)\cdot n + o(n)\big) \end{equation} for every closed subset $C$ of $\X$, and that \[\mu_n U \ge \exp\big(-\inf_{x \in U}I(x)\cdot n - o(n)\big)\] for every open subset $U$ of $\X$. The equivalence of this definition with Definition~\ref{dfn:LDP} can be found in~\cite[Section 2.1]{Var--LDPbook2} or~\cite[Subsection 4.1.2]{DemZei--LDPbook}. (Beware that~\cite[Section 2.1]{Var--LDPbook2} starts with an apparently weaker definition of `exponential tightness' than our condition (iii), but then derives this equivalence as well.) Now suppose that $\Y$ and $\X$ are complete separable metric spaces and that $\pi:\Y \to \X$ is continuous. Suppose in addition that $(\nu_n)_{n\ge 1}$ is a sequence of Borel probability measures on $\Y$, and let $\mu_n := \pi_\ast\nu_n$. If the sequence $(\nu_n)_{n\ge 1}$ obeys the LDP with rate function $I$, then the sequence $(\mu_n)_{n\ge 1}$ obeys the LDP with rate function \begin{equation}\label{eq:contraction} J(x) := \inf_{y \in \pi^{-1}\{x\}}I(y). \end{equation} This is the \textbf{contraction principle}: see~\cite[Section 2.3]{Var--LDPbook2} or~\cite[Subsection 4.2.1]{DemZei--LDPbook}. The next lemma can simplify the family of open sets that we need to check when proving an LDP. \begin{lem}\label{lem:LDP-base-enough} Let $\cal{U}$ be a base for the topology of $\X$. Let (i)--(iii) be the conditions from Definition~\ref{dfn:LDP}. If condition (i) holds with the restriction that $U \in \cal{U}$, then it holds in full, and similarly for condition (ii). \end{lem} \begin{proof} Among the neighbourhoods of a given point $x$, both conditions get stronger as $U$ gets smaller, so these implications hold because $\cal{U}$ is a base. \end{proof} Two simple applications of Lemma~\ref{lem:LDP-base-enough} are used in the sequel. \begin{lem}\label{lem:LDP-product} Suppose $(\mu_n)_{n\ge 1}$ and $(\nu_n)_{n\ge 1}$ obey the LDPs on $\cal{X}$ and $\cal{Y}$ with rate functions $I$ and $J$, respectively. Then $(\mu_n \times \nu_n)_{n\ge 1}$ obeys the LDP on $\cal{X}\times \cal{Y}$ with the rate function \[I(x) + J(y) \qquad ((x,y) \in \cal{X}\times \cal{Y}).\] \qed \end{lem} Lemma~\ref{lem:LDP-product} holds by applying Lemma~\ref{lem:LDP-base-enough} to the base of product open sets, and by observing that products of compact sets can be used to verify exponential tightness of the product measures. If we assume that $I$ and $J$ both have minimum value $0$, then the reverse of the implication in Lemma~\ref{lem:LDP-product} also holds, by applying the contraction principle to each coordinate projection. Our second application of Lemma~\ref{lem:LDP-base-enough} requires a little more preamble. Let $\cal{X}_i$ be a compact metric space for every $i\ge 1$, let $\cal{X} := \prod_{i\ge 1}\cal{X}_i$, and let $\Y$ be a nonempty closed subset of $\X$. For any subsets $F \subset G\subset \bbN$ let \[\pi_F:\prod_{i \in G}\cal{X}_i\to \prod_{i \in F}\cal{X}_i\] be the coordinate projection, and let $\Y_F := \pi_F[\Y]$. Let $(\mu_n)_{n\ge 1}$ be a sequence of Borel probability measures on $\cal{Y}$, and let $\mu_{F,n} := \pi_{F\ast}\mu_n$ for any $n\in \bbN$ and $F\subset \bbN$. Finally, let $\cal{F}$ be an upwards-directed cover of $\bbN$ by finite subsets.
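For orientation, here is a simple instance of this setup; it is only an illustration, and nothing later depends on it. Take $\cal{X}_i := [0,1]$ for every $i$, let $\Y := \cal{X}$, let $\cal{F}$ be the collection of initial segments $\{1,\dots,m\}$, and let $\mu_n$ be the joint law of an independent sequence $(Y_{n,i})_{i\ge 1}$ in which each $Y_{n,i}$ is the average of $n$ independent Bernoulli random variables with parameter $1/2$. For each finite set $F$, the classical Cram\'er theorem and an iterated application of Lemma~\ref{lem:LDP-product} give the LDP for $(\mu_{F,n})_{n\ge 1}$ with rate function \[I_F\big((x_i)_{i\in F}\big) = \sum_{i \in F}\big(\log 2 + x_i\log x_i + (1-x_i)\log(1-x_i)\big)\] (with the convention $0\log 0 := 0$), exponential tightness being automatic because all the spaces involved are compact. Part (b) of the lemma below then assembles these into the LDP for $(\mu_n)_{n\ge 1}$ itself, with rate function given by the corresponding sum over all $i\ge 1$.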
\begin{lem}\label{lem:LDP-inf-product} If $(\mu_{F,n})_{n\ge 1}$ obeys the LDP with rate function $I_F:\cal{Y}_F\to [0,\infty]$ for every $F\in \cal{F}$, then \begin{itemize} \item[a.] these rate functions are related by \[I_F(x) = \inf\{I_G(y):\ y \in \cal{Y}_G,\ \pi_F(y) = x\}\] whenever $F,G \in \cal{F}$, $F\subset G$, and $x \in \cal{Y}_F$, and \item[b.] the original sequence $(\mu_n)_{n \ge 1}$ obeys the LDP with rate function \[I(y) := \sup\{I_F(\pi_F(y)):\ F\in \cal{F}\} \qquad (y \in \Y).\] \qed \end{itemize} \end{lem} Part (a) of Lemma~\ref{lem:LDP-inf-product} is a case of the contraction principle. Part (b) follows from part (a) by applying Lemma~\ref{lem:LDP-base-enough} to the base of open subsets of $\cal{Y}$ that have the form $\Y\cap \pi_F^{-1}[U]$ for some $F\in\cal{F}$ and some open subset $U$ of $\cal{Y}_F$. Indeed, for these sets we have \[\mu_n(\Y\cap \pi_F^{-1}[U]) = \mu_{F,n}U,\] and for each fixed $F$ the right-hand measure here is governed by $I_F$ as $n\to\infty$. The last result of this section concerns how an LDP depends on the underlying space on which it is formulated. Let $\cal{X}$ be a separable and completely metrizable space with an open subset $\cal{X}_0$. Let $(\mu_n)_{n\ge 1}$ be a sequence of Borel probability measures on $\cal{X}$ such that ${\mu_n \cal{X}_0 = 1}$ for all sufficiently large $n$. Then we can regard $\cal{X}_0$ as a separable and completely metrizable space in its own right, and choose a sequence $(\nu_n)_{n\ge 1}$ of Borel probability measures on it such that $\nu_n = \mu_n(\,\cdot\,\cap \cal{X}_0)$ for all sufficiently large $n$. It is usually easy to pass probability limit laws between the measures $\mu_n$ on $\cal{X}$ and $\nu_n$ on $\cal{X}_0$. The next lemma gives general conditions for an LDP to survive such a passage. The important change here is not between one sequence of measures and the other, but rather between choosing $\X_0$ or $\X$ as the ambient space. \begin{lem}\label{lem:LDP-open-subset} In the situation above, let $I:\cal{X}\to [0,\infty]$ be lower semicontinuous, and assume that $I(x) = \infty$ for every $x \in \cal{X}\setminus \cal{X}_0$. Then $(\mu_n)_{n\ge 1}$ obeys the LDP on $\cal{X}$ with rate function $I$ if and only if $(\nu_n)_{n\ge 1}$ obeys the LDP on $\cal{X}_0$ with rate function $I|\cal{X}_0$. \end{lem} \begin{proof} By omitting finitely many terms of the sequence, we may assume that $\mu_n \cal{X}_0 = 1$ and $\nu_n = \mu_n(\,\cdot\, \cap \cal{X}_0)$ for all $n$. In both directions below we refer to conditions (i)--(iii) from Definition~\ref{dfn:LDP}. \vspace{7pt} \emph{Step 1: ($\Rightarrow$)}.\quad If conditions (i) and (ii) hold for neighbourhoods of arbitrary points in $\cal{X}$, then they also hold at points in $\cal{X}_0$ if we use neighbourhoods contained in $\cal{X}_0$. These are always available because $\cal{X}_0$ is open. Now let $a > 0$, and let $K$ be a compact subset of $\X$ witnessing condition (iii) of Definition~\ref{dfn:LDP} for the sequence $(\mu_n)_{n\ge 1}$. By our assumptions on $I$, the compact set $\{I \le a\}$ is disjoint from the closed set $\cal{X}\setminus \cal{X}_0$, and so the set $\{I\le a\}$ has an open neighbourhood $U$ whose closure is still disjoint from $\cal{X}\setminus \cal{X}_0$. Now our choice of $K$ and an application of~\eqref{eq:LDP2-1} give \[\mu_n(\cal{X}\setminus (K\cap \ol{U})) \le \mu_n(\cal{X}\setminus K) + \mu_n(\cal{X}\setminus U) \le e^{-an + o(n)} + e^{-an + o(n)}.
\] So $K\cap \ol{U}$ is a compact subset of $\cal{X}_0$ that witnesses condition (iii) for $(\nu_n)_{n\ge 1}$. \vspace{7pt} \emph{Step 2: ($\Leftarrow$)}.\quad Conditions (i) and (ii) for $(\nu_n)_{n\ge 1}$ imply the same conditions for $(\mu_n)_{n\ge 1}$ around any point that lies in $\cal{X}_0$. In addition, for any $a > 0$, condition (iii) for $(\nu_n)_{n\ge 1}$ gives a compact subset $K$ of $\cal{X}_0$ such that \[\nu_n (\cal{X}_0\setminus K) \le e^{-an + o(n)}.\] Since $K$ is also compact as a subset of $\cal{X}$, this verifies condition (iii) for the sequence $(\mu_n)_{n\ge 1}$ as well. Moreover, it also verifies condition (i) for $(\mu_n)_{n\ge 1}$ at every point of $\cal{X}\setminus \cal{X}_0$, because $I(x) = \infty$ for such points, and the set $\cal{X}\setminus K$ is an open neighbourhood of them. Lastly, condition (ii) for $(\mu_n)_{n\ge 1}$ is vacuous at any $x$ satisfying $I(x) = \infty$, so we have now verified all three conditions for $(\mu_n)_{n \ge 1}$ everywhere. \end{proof} A concrete application of Lemma~\ref{lem:LDP-open-subset} appears as Corollary~\ref{cor:matrix-LDP1} below. \subsection*{\emph{Notes and further references}} Most of the work in this chapter is subsumed by more general machinery that can be found in dedicated textbooks such as~\cite{DemZei--LDPbook}. See~\cite[Section 1.2]{DemZei--LDPbook} for a discussion of the difference between a `full' LDP (as we have defined it) and a `weak LDP', which promises~\eqref{eq:LDP2-1} only for compact sets and does not assume exponential tightness. Lemma~\ref{lem:LDP-inf-product} is really a special case of the Dawson--G\"artner theorem about large deviations theory on `projective limit' spaces: see, for instance,~\cite[Theorems 4.6.1 and 4.6.9]{DemZei--LDPbook}. I have formulated this special case explicitly to make its later application easier for non-specialists to digest. \chapter{Preliminaries on parametrizing positive semidefinite matrices}\label{chap:lin-alg} The results in the first section below are available in standard textbooks. Those in Section~\ref{sec:three-block-Gram} are also standard among experts, but perhaps not known so widely. Once again, some readers may prefer to skip this chapter until its contents are cited later. \section{Two-by-two block Gram matrices}\label{sec:block-Gram} Let $V = [v_1,\dots,v_k]$ and $U = [u_1, \dots, u_\ell]$ be two tuples in the same Hilbert space $H$. The combined tuple $[v_1, \dots, v_k, u_1, \dots, u_\ell]$ has a $(k+\ell)$-by-$(k+\ell)$ Gram matrix which we can write in the block form \begin{equation}\label{eq:two-block} Q = \left[\begin{array}{cc} Q_{11} & R\\ R^\ast & Q_{22}\end{array}\right]. \end{equation} The diagonal blocks $Q_{11}$ and $Q_{22}$ are the separate Gram matrices of $V$ and $U$, and \[R= [\langle u_j,v_i\rangle]_{1 \le i \le k,\,1\le j \le \ell} = V^\ast U.\] Let $M$ be the span of $v_1$, \dots, $v_k$ and $P$ the orthogonal projection onto $M$, as before. We can separate each $u_i$ into $Pu_i$ and $P^\perp u_i$, and then form two new Gram matrices from these tuples of components. Because $M$ and $M^\perp$ are orthogonal, the full Gram matrix $Q_{22}$ of $u_1$, \dots, $u_\ell$ is the sum of these two Gram matrices of projections. Now assume that the block $Q_{11}$ is nonsingular.
To find the Gram matrix of $[Pu_1,\dots,Pu_\ell]$, we can use~\eqref{eq:proj-onto-span} to express each $Pu_i$: \begin{equation}\label{eq:proj-Gram} [\langle Pu_{i'},Pu_i\rangle]_{i,i'=1}^\ell = U^\ast P^2 U = U^\ast P U = U^\ast VQ_{11}^{-1} V^\ast U = R^\ast Q_{11}^{-1} R. \end{equation} It follows that, when $Q_{11}$ is nonsingular, the Gram matrix of the orthogonal projections $P^\perp u_1$, \dots, $P^\perp u_\ell$ is equal to the difference \begin{equation}\label{eq:comp-Gram} Q_{22} - R^\ast Q_{11}^{-1} R. \end{equation} This is the \textbf{Schur complement} of $Q_{11}$ in $Q$. It appears widely in the analysis of positive semidefinite matrices. Firstly, it appears in the formula for inverting any nonsingular $2$-by-$2$ block matrix~\cite[Section 0.7.3]{HorJohMA}, and in the corresponding determinant formula \begin{equation}\label{eq:Schur-det} \det Q = \det Q_{11} \cdot \det (Q_{22} - R^\ast Q_{11}^{-1} R), \end{equation} which holds whenever $Q_{11}$ is nonsingular~\cite[Section 0.8.5]{HorJohMA}. If $Q_{11}$ is singular then the calculations above are not available, but of course the full matrix $Q$ still determines the Gram matrix of the orthogonal projections $P^\perp u_1$, \dots, $P^\perp u_\ell$ uniquely. In this case other methods must be employed, leading to a notion of generalized Schur complements: see~\cite[Exercises 7.1.P28 and then 7.3.P7--8]{HorJohMA}, for example. For our later purposes it is generally easiest to find ways to reduce our work to the nonsingular case. Consider again a two-by-two-block self-adjoint matrix $Q$ written as in~\eqref{eq:two-block}, and assume that $Q_{11}$ and $Q_{22}$ are both positive semidefinite. The next proposition characterizes those off-diagonal blocks for which the whole of $Q$ is still positive semidefinite. It features the Schur complement again. \begin{prop}\label{prop:two-block-completion} Let $Q$ be as in~\eqref{eq:two-block}, and assume that $Q_{11}$ and $Q_{22}$ are positive semidefinite. Then $Q$ is positive semidefinite if and only if $R$ is equal to $Q_{11}^{1/2}CQ_{22}^{1/2}$ for some contraction $C$ from $\bbC^{(\ell)}$ to $\bbC^{(k)}$. If $Q_{11}$ is nonsingular, then this holds if and only if the Schur complement $Q_{22} - R^\ast Q^{-1}_{11} R$ is positive semidefinite. If $Q_{11}$ and $Q_{22}$ are both nonsingular, then the contraction $C$ promised above is unique, and the following are equivalent: \begin{itemize} \item the whole of $Q$ is nonsingular; \item the contraction $C$ is strict; \item the Schur complement $Q_{22} - R^\ast Q^{-1}_{11} R$ is nonsingular. \qed \end{itemize} \end{prop} See~\cite[Theorems 7.7.7 and 7.7.9]{HorJohMA}. The proof can be written fairly quickly in terms of matrix manipulations, but the conditions appearing here also fit naturally into our previous geometric description. If the whole of $Q$ is positive semidefinite, then it is a Gram matrix, say of the concatenated tuple $[V,U]$ as above. In that case we have already seen that the Schur complement $Q_{22} - R^\ast Q^{-1}_{11} R$ is the Gram matrix of the projected tuple $P^\perp u_1$, \dots, $P^\perp u_\ell$. To interpret $C$, assume first that $Q_{11}$ and $Q_{22}$ are both nonsingular. In this case two applications of~\eqref{eq:new-basis} give new tuples \begin{equation}\label{eq:new-bases} W := VQ_{11}^{-1/2} \qquad \hbox{and} \qquad Y := UQ_{22}^{-1/2} \end{equation} such that $W$ (resp. $Y$) is a unitary embedding of $\bbC^{(k)}$ (resp. $\bbC^{(\ell)}$) into $H$ with the same image as $V$ (resp. $U$).
Therefore the linear map $C := W^\ast PY$ is a composition of contractions, and hence a contraction. Geometrically, it is the restricted orthogonal projection $P|\rm{span}\,\{u_1,\dots,u_\ell\}$, written in terms of the canonical bases of $\rm{span}\,\{u_1,\dots,u_\ell\}$ and $\rm{span}\,\{v_1,\dots,v_k\}$ provided by~\eqref{eq:new-bases}. On the other hand, substituting from~\eqref{eq:new-bases}, this contraction satisfies \[C = Q_{11}^{-1/2}V^\ast P UQ_{22}^{-1/2} = Q_{11}^{-1/2} R Q_{22}^{-1/2},\] where the second equality holds because $P$ is the orthogonal projection onto the image of $V$, and so $V^\ast P = V^\ast$. Re-arranging, we have shown that \[R = Q_{11}^{1/2}CQ_{22}^{1/2}.\] If either $Q_{11}$ or $Q_{22}$ is singular, then we cannot quite argue as above. For example, if $Q_{11}$ is singular, then the first part of~\eqref{eq:new-bases} cannot be used to select a \emph{canonical} orthonormal basis of $\rm{span}\,\{v_1,\dots,v_k\}$. However, \emph{some} choice of basis is still possible, and an adjusted version of the reasoning above can then still be completed. This shows that the representation of $R$ using a contraction still holds in case $Q_{11}$ or $Q_{22}$ is singular, but that this representation is no longer unique. (Alternatively, the singular case can be deduced from the nonsingular case by a limiting argument: see, for instance, the use of~\cite[Lemma 7.7.8]{HorJohMA}.) \begin{ex}\label{ex:k1} Suppose that $\ell = 1$, so $Q$ is the $(k+1)$-by-$(k+1)$ Gram matrix of a tuple $[v_1,\dots,v_k,u]$. Assume that $v_1$, \dots, $v_k$ are linearly independent and $u \ne 0$. In this case the top right block of $Q$ is a column vector in $\bbC^{(k)}$ which specifies the inner product of $u$ with each $v_i$. This column vector is parametrized by a contraction from $\bbC$ to $\bbC^{(k)}$, or equivalently by a vector in $\bbC^{(k)}$ of length at most $1$. Geometrically, we have a canonical choice of orthonormal basis for $\rm{span}\,\{v_1,\dots,v_k\}$ given by~\eqref{eq:new-basis}, and the parametrizing vector is the orthogonal projection of $u/\|u\|$ onto that span written in this basis. Most simply of all, if also $k = 1$, then the parametrizing contraction is simply a complex number of modulus $\cos \theta$, where $\theta$ is the angle between $v_1$ and $u$. \qed \end{ex} Now suppose that $V:\bbC^{(k)}\to H$ and $U:\bbC^{(\ell)}\to H$ are two linear maps into a complex inner product space, and write $[V,U]$ for the combined map from $\bbC^{(k+\ell)}$ to $H$. Its Gram matrix $Q = [V,U]^\ast [V,U]$ has a two-by-two-block structure as in~\eqref{eq:two-block}. Assume again for simplicity that $Q_{11}$ is nonsingular, so the Schur complement $S := Q_{22} - R^\ast Q_{11}^{-1}R$ from~\eqref{eq:comp-Gram} is defined. As explained above, $S$ is the Gram matrix of the map $P_{\rm{ran}\,V}^\perp U$, and so it appears in the generalization of~\eqref{eq:Gram-ent} for conditional entropy \begin{equation}\label{eq:Gram-cond-ent} \rmH(U\mid V) = \rmH(U \mid \rm{ran}\,V) = \frac{1}{2}\rmH(S). \end{equation} Now let $C$ be the contraction provided by Proposition~\ref{prop:two-block-completion}. Either by recalling the geometric interpretation of this contraction described above, or by combining formula~\eqref{eq:Schur-det} with the chain rule (Proposition~\ref{prop:chain2}) and re-arranging, we find a simple partner of the formula above for mutual information: \begin{equation}\label{eq:Gram-mut-inf} \rmI(U\,;\,V) = -\frac{1}{2}\rmH(I_\ell - C^\ast C) = -\rmH((I_\ell - C^\ast C)^{1/2}).
\end{equation} Accompanying formula~\eqref{eq:Gram-cond-ent}, we can prove a method-of-types interpretation for log-determinant mutual information that generalizes Theorem~\ref{thm:types-1}. Assume that $n\ge \ell + k$, let $e_1$, \dots, $e_n$ be a fixed orthonormal basis for $\bbC^{(n)}$, and let $E := [e_1,\dots,e_\ell]$. For any subset $O$ of $\rmM_{(k+\ell)+}$, let \begin{equation}\label{eq:T'} T'(n,O) := \{X \in \rmM_{n,k}:\ [E,X]^\ast [E,X] \in O\}. \end{equation} Let \[Q = \left[\begin{array}{cc}I_\ell & R\\ R^\ast & Q_{22}\end{array}\right]\] be a fixed $(k+\ell)$-by-$(k+\ell)$ positive semidefinite matrix, and let $S$ be the Schur complement $Q_{22} - R^\ast R$. By~\eqref{eq:Gram-cond-ent}, the conditional entropy $\rmH(X\mid E)$ is equal to $\rmH(S)/2$ whenever the tuple $[E,X]$ has Gram matrix $Q$. \begin{thm}\label{thm:types-2} The following hold: \begin{itemize} \item[a.] (Lower bound) If $Q$ is nonsingular and $O$ is any neighbourhood of $Q$ in $\rmM_{(k+\ell)+}$, then \[\frac{\vol_{2kn}T'(n,O)}{c(k,n)} \ge e^{\rmH(S)n - o(n)}.\] \item[b.] (Upper bound) For any $a > \rmH(S)$ there is a neighbourhood $O$ of $Q$ in $\rmM_{(k+\ell)+}$ such that \[\frac{\vol_{2kn}T'(n,O)}{c(k,n)} \le e^{an + o(n)}.\] \end{itemize} \end{thm} \begin{proof} Denote a typical element of $\rmM_{(k+\ell)+}$ by \begin{equation}\label{eq:Q'blocks} Q' = \left[\begin{array}{cc}Q_{11}' & R'\\ (R')^\ast & Q'_{22}\end{array}\right]. \end{equation} In this space, $Q$ has a neighbourhood base consisting of the sets of the form \[O = \{Q':\ Q_{11}' \in O_{11},\ R' \in O_{12},\ Q'_{22} - (R')^\ast R' \in O_{22}\}\] as $O_{11}$, $O_{12}$ and $O_{22}$ range over neighbourhoods of $I_\ell$ in $\rmM_{\ell+}$, of $R$ in $\rmM_{\ell,k}$, and of $S$ in $\rmM_{k+}$ respectively. It therefore suffices to prove parts (a) and (b) using sets of this form. However, for such a set $O$, we have \[T'(n,O) = \{X \in\rmM_{n,k}: E^\ast X \in O_{12} \ \hbox{and}\ (PX)^\ast(PX) \in O_{22}\},\] where $P$ is the orthogonal projection of $\bbC^{(n)}$ onto the last $n-\ell$ coordinates, regarded as spanning a copy of $\bbC^{(n-\ell)}$. As a result, $T'(n,O)$ has the same volume as the Cartesian product \[O_{12} \times T(n-\ell,O_{22}) \subset \rmM_{\ell,k} \times \rmM_{(n-\ell),k},\] where $T(n-\ell,O_{22})$ is defined as in~\eqref{eq:TnO}. The resulting factor of $\vol_{2k\ell} O_{12}$ does not depend on $n$, and the factor of $\vol_{2k(n-\ell)}T(n-\ell,O_{22})$ is governed by Theorem~\ref{thm:types-1}. This completes the proof when combined with~\eqref{eq:ckn-asymp}, which gives the asymptotic \[c(k,n-\ell) = e^{o(n)}c(k,n) \qquad \hbox{as}\ n\to\infty\ \hbox{for fixed}\ k\ \hbox{and}\ \ell.\] \end{proof} \section{Three-by-three-block Gram matrices}\label{sec:three-block-Gram} This section extends the work of Section~\ref{sec:block-Gram} by allowing three blocks rather than two. Let $H$ be a complex inner product space as before. Let $k$, $\ell$ and $m$ be non-negative integers, and in $H$ consider tuples of vectors \begin{equation}\label{eq:full-tuple} V = [v_1,\dots,v_k], \quad U = [u_1,\dots,u_\ell] \quad \hbox{and} \quad X = [x_1,\dots,x_m]. \end{equation} Combine them into a single $(k+\ell+m)$-tuple, and write their joint Gram matrix explicitly in $3$-by-$3$ block form \begin{equation}\label{eq:full-Gram} Q := \left[\begin{array}{ccc}V^\ast V & V^\ast U & V^\ast X\\ U^\ast V & U^\ast U & U^\ast X \\ X^\ast V & X^\ast U & X^\ast X\end{array}\right].
\end{equation} Let $M$ be the span of $u_1$, \dots, $u_\ell$ (not $v_1$, \dots, $v_k$, as previously), and let $P$ be the orthogonal projection onto $M$. Let us decompose the matrix $Q$ into two summands by splitting each vector into its components in $M$ and $M^\perp$. In terms of the linear maps in~\eqref{eq:full-tuple}, this simply means we write $V = PV + P^\perp V$, and similarly. Since $PU = U$ and $P^\perp U = 0$, we find that $Q$ is equal to \begin{equation}\label{eq:three-block-decomp} \left[\begin{array}{ccc} V^\ast P V & V^\ast U & V^\ast P X\\ U^\ast V & U^\ast U & U^\ast X \\ X^\ast P V & X^\ast U & X^\ast P X\end{array}\right] + \left[\begin{array}{ccc} V^\ast P^\perp V & 0 & V^\ast P^\perp X\\ 0 & 0 & 0 \\ X^\ast P^\perp V & 0 & X^\ast P^\perp X\end{array}\right]. \end{equation} The first summand here is the Gram matrix of three tuples of vectors that are all contained in $M$. On the other hand, by ignoring the middle row and column, the second summand is effectively just a two-by-two-block Gram matrix. This decomposition leads naturally to a generalization of Proposition~\ref{prop:two-block-completion} to three-by-three-block self-adjoint matrices. This time we formulate it only in the case of some extra nonsingularity assumptions. These are not really necessary, but removing them requires a considerably fiddlier discussion, and the case presented here covers our later needs. \begin{prop}\label{prop:three-block-completion} Consider a $(k+\ell+m)$-by-$(k+\ell+m)$ self-adjoint matrix \begin{equation}\label{eq:full-Gram2} Q := \left[\begin{array}{ccc} Q_{11} & Q_{12} & R\\ Q_{12}^\ast & Q_{22} & Q_{23} \\ R^\ast & Q_{23}^\ast & Q_{33}\end{array}\right]. \end{equation} Assume that the submatrices \[Q_{(1\cup 2)} := \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{12}^\ast & Q_{22}\end{array}\right] \qquad \hbox{and} \qquad Q_{(2\cup 3)} := \left[\begin{array}{cc} Q_{22} & Q_{23}\\ Q_{23}^\ast & Q_{33}\end{array}\right]\] are both positive semidefinite, and that the further submatrix $Q_{22}$ is strictly positive definite. Then $Q$ is positive semidefinite if and only if it has the form \begin{equation}\label{eq:three-block-decomp2} \left[\begin{array}{ccc} Q_{12}Q_{22}^{-1} Q_{12}^\ast & Q_{12} & Q_{12} Q_{22}^{-1} Q_{23}\\ Q_{12}^\ast & Q_{22} & Q_{23} \\ Q_{23}^\ast Q_{22}^{-1}Q_{12}^\ast & Q_{23}^\ast & Q_{23}^\ast Q_{22}^{-1} Q_{23}\end{array}\right] + \left[\begin{array}{ccc} S_{11} & 0 & S_{11}^{1/2}CS_{33}^{1/2}\\ 0 & 0 & 0 \\ S_{33}^{1/2}C^\ast S_{11}^{1/2} & 0 & S_{33}\end{array}\right], \end{equation} where \begin{equation}\label{eq:Schurs} S_{11} := Q_{11} - Q_{12}Q_{22}^{-1}Q_{12}^\ast \quad \hbox{and} \quad S_{33} := Q_{33} - Q_{23}^\ast Q_{22}^{-1}Q_{23} \end{equation} are Schur complements and $C$ is a contraction from $\bbC^{(m)}$ to $\bbC^{(k)}$. If $Q_{(1\cup 2)}$ and $Q_{(2\cup 3)}$ are both nonsingular, then the contraction $C$ promised here is unique, and the whole of $Q$ is nonsingular if and only if $C$ is a strict contraction. \end{prop} \begin{proof} \emph{Necessity.} \quad Let $n := k+\ell + m$. If the whole of $Q$ is positive semidefinite, then it is the Gram matrix of some tuples of vectors in $\bbC^{(n)}$ as in~\eqref{eq:full-tuple}, and so it decomposes as in~\eqref{eq:three-block-decomp}. Applying~\eqref{eq:proj-Gram} to the four corner blocks of the first summand in~\eqref{eq:three-block-decomp}, we find that it equals the first summand in~\eqref{eq:three-block-decomp2}. 
On the other hand, by~\eqref{eq:comp-Gram}, the two nonzero diagonal blocks in the second summand in~\eqref{eq:three-block-decomp} are equal to $S_{11}$ and $S_{33}$. Finally, Proposition~\ref{prop:two-block-completion} tells us that this second summand is positive semidefinite if and only if it has the asserted form for some contraction $C$. \vspace{7pt} \emph{Sufficiency.} \quad Suppose that $Q$ has the form in~\eqref{eq:three-block-decomp2}. The second summand is positive semidefinite by Proposition~\ref{prop:two-block-completion}. It therefore suffices to prove that $Q$ is positive semidefinite in case it equals the first summand in~\eqref{eq:three-block-decomp2}, meaning that $Q_{11} = Q_{12}Q_{22}^{-1}Q_{12}^\ast$ and $Q_{33} = Q_{23}^\ast Q_{22}^{-1}Q_{23}$. By~\eqref{eq:proj-Gram}, this means that $Q_{(1\cup 2)}$ and $Q_{(2\cup 3)}$ are unchanged by the procedure of writing each as a Gram matrix and then projecting all the vectors onto the span of the vectors corresponding to the block $Q_{22}$. We may therefore pick a tuple $U$ whose Gram matrix is $Q_{22}$ and then two more tuples $V$ and $X$ contained in $\rm{ran}\,U$ so that $Q_{(1\cup 2)}$ and $Q_{(2\cup 3)}$ are the Gram matrices of $[V,U]$ and $[U,X]$ respectively. Finally, applying~\eqref{eq:proj-Gram} for the projection onto $\rm{ran}\,U$ and the whole triple tuple $[V,U,X]$, we find that under these conditions the Gram matrix of $[V,U,X]$ is the first summand in~\eqref{eq:three-block-decomp2}, so that summand must be positive semidefinite. \vspace{7pt} \emph{Uniqueness and nonsingularity.} \quad These follow by checking the corresponding conditions for uniqueness and nonsingularity in our application of Proposition~\ref{prop:two-block-completion} above. \end{proof} Before continuing our general discussion of Proposition~\ref{prop:three-block-completion}, we illustrate the parametrization by contractions in the following continuation of Example~\ref{ex:k1}. \begin{ex}\label{ex:kell1} Suppose that $m=1$, and let $Q$ as above be the Gram matrix of the combined tuple $[v_1,\dots,v_k,u_1,\dots,u_\ell,x]$ in $\bbC^{(n)}$. Let $P$ be the orthogonal projection onto $M = \rm{span}\,\{u_1,\dots,u_\ell\}$. The parametrizing contraction $C$ appears in the second summand in~\eqref{eq:three-block-decomp2}. That, in turn, corresponds to the second summand in~\eqref{eq:three-block-decomp}, whose nonzero blocks together form the $(k+1)$-by-$(k+1)$ Gram matrix of the tuple $[P^\perp v_1,\dots,P^\perp v_k,P^\perp x]$. As in Example~\ref{ex:k1}, the top right block here is a column vector in $\bbC^{(k)}$ which specifies the inner product of $P^\perp x$ with each $P^\perp v_i$. This column vector is parametrized by a contraction from $\bbC$ to $\bbC^{(k)}$, meaning simply a vector in $\bbC^{(k)}$ of length at most $1$. Under the canonical choice of orthonormal basis for $\rm{span}\,\{P^\perp v_1,\dots,P^\perp v_k\}$, this vector corresponds to the orthogonal projection of $P^\perp x/\|P^\perp x\|$ onto that span. \qed \end{ex} Proposition~\ref{prop:three-block-completion} can be regarded as the solution to a `matrix completion problem'. Referring to~\eqref{eq:full-Gram2}, imagine that we know all the blocks of this self-adjoint matrix apart from $R$ and $R^\ast$, and that the submatrices $Q_{(1\cup 2)}$ and $Q_{(2\cup 3)}$ are positive semidefinite. The `completion problem' asks whether we can choose $R$ so that the whole of $Q$ remains positive semidefinite.
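Before recording the general answer in the next paragraph, here is a small numerical illustration of Proposition~\ref{prop:three-block-completion}. It is only a sketch, not part of the formal development, and every name in it is ad hoc. Written in Python with \texttt{numpy}, it generates a random partial matrix from three tuples of vectors, forms the Schur complements in~\eqref{eq:Schurs}, inserts the top-right block prescribed by~\eqref{eq:three-block-decomp2} for one chosen strict contraction $C$, and checks that the resulting three-by-three-block matrix is positive semidefinite.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 2, 10

def cgauss(rows, cols):
    # A complex Gaussian matrix; generic enough to give nonsingular Gram blocks.
    return rng.normal(size=(rows, cols)) + 1j * rng.normal(size=(rows, cols))

V, U, X = cgauss(n, k), cgauss(n, l), cgauss(n, m)

# Known blocks of the partial Gram matrix.
Q11, Q12, Q22 = V.conj().T @ V, V.conj().T @ U, U.conj().T @ U
Q23, Q33 = U.conj().T @ X, X.conj().T @ X

# Schur complements as in (eq:Schurs).
Q22inv = np.linalg.inv(Q22)
S11 = Q11 - Q12 @ Q22inv @ Q12.conj().T
S33 = Q33 - Q23.conj().T @ Q22inv @ Q23

def psd_sqrt(A):
    # Square root of a positive semidefinite matrix via its spectral decomposition.
    w, P = np.linalg.eigh(A)
    return P @ np.diag(np.sqrt(np.clip(w, 0, None))) @ P.conj().T

# A strict contraction C from C^(m) to C^(k): normalize a matrix to operator norm 1/2.
G = cgauss(k, m)
C = G / (2 * np.linalg.norm(G, 2))

# Top-right block of the completion, read off from (eq:three-block-decomp2).
R = Q12 @ Q22inv @ Q23 + psd_sqrt(S11) @ C @ psd_sqrt(S33)

Q = np.block([[Q11, Q12, R],
              [Q12.conj().T, Q22, Q23],
              [R.conj().T, Q23.conj().T, Q33]])
print("completion is positive semidefinite:", np.linalg.eigvalsh(Q).min() > -1e-8)
\end{verbatim}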
Assuming that $Q_{22}$ is nonsingular, Proposition~\ref{prop:three-block-completion} tells us that the answer is Yes, and moreover that the possible choices for $R$ are parametrized by the elements of $\Xi(m,k)$ (that is, linear contractions from $\bbC^{(m)}$ to $\bbC^{(k)}$ --- recall this notation from Section~\ref{sec:lin-alg}) as in~\eqref{eq:three-block-decomp2}. Putting the various steps above together we arrive at the formula \begin{equation}\label{eq:param} R = Q_{12}Q_{22}^{-1}Q_{23} + S_{11}^{1/2} C S_{33}^{1/2}, \end{equation} where $S_{11}$ and $S_{33}$ are as in~\eqref{eq:Schurs}. At some points in the sequel, it is important to think of this problem and its solution as a relationship between whole spaces of matrices. Recall that $\rmM_{k+}$ denotes the closed cone of positive semidefinite elements of $\rmM_{k,sa}$, and that $\rmM^\circ_{k+}$ denotes the subset of positive definite elements, which is the relative interior of $\rmM_{k+}$ in $\rmM_{k,\sa}$. The next space is less standard, so we define it formally. Let $k$, $\ell$ and $m$ be non-negative integers, as before. \begin{dfn} A \textbf{partial positive semidefinite} matrix with \textbf{block sizes} $k$, $\ell$ and $m$ is a $5$-tuple \[(Q_{11},Q_{12},Q_{22},Q_{23},Q_{33}) \in \rmM_{k+} \times \rmM_{k,\ell}\times \rmM_{\ell+} \times \rmM_{\ell,m}\times \rmM_{m+}\] with the property that both of the two-by-two-block matrices \[\left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{12}^\ast & Q_{22}\end{array}\right] \qquad \hbox{and} \qquad \left[\begin{array}{cc} Q_{22} & Q_{23}\\ Q_{23}^\ast & Q_{33}\end{array}\right]\] are positive semidefinite. A partial positive semidefinite matrix is \textbf{partially nonsingular} if both of those two-by-two-block matrices are nonsingular; otherwise it is \textbf{partially singular}. We write $\PPSD_{k,\ell,m}$ for the space of partial positive semidefinite matrices with block sizes $k$, $\ell$ and $m$, and $\PPSD^\circ_{k,\ell,m}$ for the further subset of its partially nonsingular members. \end{dfn} Rather than writing out a $5$-tuple as above, henceforth we write the corresponding partial positive semidefinite matrices as a three-by-three-block matrix with some unknown entries: \[\left[\begin{array}{ccc} Q_{11} & Q_{12} & ?\\ Q_{12}^\ast & Q_{22} & Q_{23} \\ ? & Q_{23}^\ast & Q_{33}\end{array}\right].\] We often indicate such an entity by a raised question mark, as in ``$Q^?$''. There is a natural map \[\pi^?:\rmM_{(k+\ell+m)+}\to \PPSD_{k,\ell,m}\] that simply erases the $k$-by-$m$ block in the top right of its argument and its adjoint in the bottom left. If $Q^? \in \PPSD_{k,\ell,m}$, then let us write \[\Delta(Q^?) := \{Q \in \rmM_{(k+\ell+m)+}:\ \pi^?(Q) = Q^?\},\] and refer to this fibre as the set of \textbf{completions} of $Q^?$. With these notations in hand, we can turn Proposition~\ref{prop:three-block-completion} into the following. \begin{cor}\label{cor:three-block-completion} The map $\pi^?$ is a surjection from $\rmM_{(k+\ell+m)+}$ onto $\PPSD_{k,\ell,m}$. Its restriction to $\rmM^\circ_{(k+\ell+m)+}$ is surjective onto $\PPSD^\circ_{k,\ell,m}$.
We can define a map \begin{multline}\label{eq:param-map} \PPSD^\circ_{k,\ell,m}\times \Xi(m,k) \\ \to \big\{Q \in \rmM_{(k+\ell+m)+}:\ Q_{(1\cup 2)}\ \hbox{and}\ Q_{(2\cup 3)}\ \hbox{nonsingular}\big\} \end{multline} according to \[(Q^?,C) \mapsto \left[\begin{array}{ccc}Q_{11} & Q_{12} & R\\ Q_{12}^\ast & Q_{22} & Q_{23}\\ R^\ast & Q_{23}^\ast & Q_{33}\end{array}\right],\] where $S_{11}$, $S_{33}$ are given by~\eqref{eq:Schurs} and then $R$ is given by~\eqref{eq:param}. This map is a homeomorphism, and it maps $\PPSD^\circ_{k,\ell,m}\times \Xi^\circ(m,k)$ onto $\rmM^\circ_{(k+\ell+m)+}$. \end{cor} \begin{proof} This is mostly just a reformulation of Proposition~\ref{prop:three-block-completion}, together with the observation that the functions of matrices defined by~\eqref{eq:Schurs} and~\eqref{eq:param} are continuous on the open subsets where the necessary matrices are invertible. The only outstanding issue is the surjectivity of $\pi^?$ onto the whole of $\PPSD_{k,\ell,m}$. This follows by a continuity argument and the fact that continuous images of compact sets are compact: compare, for instance, the use of~\cite[Lemma 7.7.8]{HorJohMA} to prove~\cite[Theorem 7.7.9]{HorJohMA}. \end{proof} Intuitively, for any partially nonsingular $Q^?$, the map in~\eqref{eq:param-map} defines a homeomorphic parametrization of $\Delta(Q^?)$ by $\Xi(m,k)$ under which the nonsingular completions are identified with the strict contractions. Moreover, this parametrization also varies continuously as $Q^?$ varies in the open subset $\PPSD^\circ_{k,\ell,m}$ of $\PPSD_{k,\ell,m}$. If $Q^?$ is partially singular then it still has completions, but the formula~\eqref{eq:param} may degenerate. As in the previous section, alternative constructions are available with the help of Moore--Penrose generalized inverses and generalized Schur complements, but we avoid the need for these in the sequel. If $Q_{22}$ is nonsingular, then a canonical choice of completion for $Q^?$ results by setting $C$ to be $0$. Equivalently, this is the choice for which the second summand in~\eqref{eq:three-block-decomp2} vanishes. If $Q$ is the combined Gram matrix of the tuples in~\eqref{eq:full-tuple}, then this decomposition is the same as~\eqref{eq:three-block-decomp}, and there the second term vanishes if and only if the subspaces $\rm{ran}\,V + \rm{ran}\,U$ and $\rm{ran}\,U + \rm{ran}\,X$ are relatively orthogonal over their common further subspace $\rm{ran}\,U$. This particular choice of $Q$ is called the \textbf{central completion} of $Q_{(1\cup 2)}$ and $Q_{(2\cup 3)}$. With our analogy between tuples of vectors and random variables in mind, the central completion is the analog of a relatively independent joint distribution for a triple of random variables; compare, for instance, the main technical construction in~\cite{Boz89}. If the three tuples $V$, $U$, and $X$ have the three-by-three-block Gram matrix $Q$ in~\eqref{eq:full-Gram}, then the blocks of this matrix may be used to generalize~\eqref{eq:Gram-mut-inf} to conditional mutual information. If $C$ is the contraction associated to this matrix by Proposition~\ref{prop:three-block-completion}, and we keep in mind its geometric interpretation that appears in the proof, then the resulting formula is \begin{equation}\label{eq:cond-mut-inf-contraction} \rmI(V\,;\,X\mid U) = -\rmH((I_m - C^\ast C)^{1/2}).
\end{equation} \subsection*{\emph{Notes and further references}} Completion problems for positive semidefinite matrices, as well as many other classes of matrices, have been studied in great depth, both within pure linear algebra and for the sake of numerous engineering applications. This is also an area of longstanding interaction between matrix analysis and abstract operator theory: see, for instance,~\cite[Lemma 3.1]{PauCB} for an abstract generalization of Proposition~\ref{prop:two-block-completion} in terms of $2\times 2$ matrices over a C$\sp*$-algebra. The way we use Corollary~\ref{cor:three-block-completion} below also belongs to that interaction. A classic introduction to this branch of matrix analysis can be found in~\cite{Joh90}, and the textbooks~\cite{FoiFraCL} and~\cite{BakWoeMCetc} both cover it very thoroughly. They can be consulted for topics we have left aside, such as how to handle singular or degenerate cases. In particular, positive two-by-two-block and three-by-three-block matrix completion are both treated thoroughly in~\cite[Chapter XVI]{FoiFraCL}, and both that chapter and~\cite[Chapter 2]{BakWoeMCetc} consider the natural further generalization of Proposition~\ref{prop:three-block-completion} to `banded' matrices with larger numbers of blocks. For these, possible completions are parametrized by sequences of contractions in a way that generalizes~\eqref{eq:param}: see~\cite[Section XVI.8]{FoiFraCL} or~\cite[Theorem 2.1.2]{BakWoeMCetc}. For a different point of view on the more standard results in Section~\ref{sec:block-Gram} above, the book~\cite[Sections 1.3 and 1.5]{BhatPDM} relates them to the positivity or monotonicity of certain maps between matrices. That book also explores some different but closely related completion questions for positive semidefinite matrices in~\cite[Section 3.4]{BhatPDM}, where they are also related to positivity of certain associated Schur product maps between spaces of matrices. See~\cite[Chapter 3]{PauCB} for operator theoretic connections with this point of view. \section{Large deviations for some random Gram matrices}\label{sec:matrix-completion-LDP} This section is a continuation of Chapter~\ref{chap:log-det-ent}. It interprets log-determinant conditional information as the negative rate function in some large deviations principles about random Gram matrices. In addition to having intrinsic interest, we need these results explicitly in Chapter~\ref{chap:LDP-proof}. We proceed in two steps. The first step considers a special case and shows how log-determinant mutual information appears. It is deduced from the `method of types' calculation from Section~\ref{sec:types}. The second step generalizes the first by adding more ingredients, and arrives at log-determinant conditional mutual information as the negative rate function. These large deviations principles illustrate the intuitive meanings of these quantities. Let $k$, $\ell$ and $n$ be positive integers with $n\ge k+\ell$. In $\bbC^{(n)}$, let $e_1$, \dots, $e_n$ be the standard basis, and let $E := [e_1,\dots,e_\ell]$. In addition, let $V$ be a random element of $\rmU(k,n)$ with distribution $m_{\rm{U}(k,n)}$. The Gram matrices of $E$ and $V$ are $I_\ell$ and $I_k$, respectively. The matrix of inner products $E^\ast V$ is a random element of $\Xi(k,\ell)$: it is simply given by the first $\ell$ rows of $V$.
We denote its distribution by \begin{equation}\label{eq:projected-contraction-dist} \s_{n,\ell,k}A := m_{\rmU(k,n)}\{V:\ E^\ast V \in A\} \qquad (A \subset \Xi(k,\ell),\ A\ \hbox{Borel}). \end{equation} \begin{ex}\label{ex:k=1} Suppose that $k = 1$, so that $V$ is given by a single random vector $v$ distributed uniformly over the sphere $\rmS^{2n-1}$, and $E^\ast v$ is the vector of its first $\ell$ coordinates. The distribution $\s_{n,\ell,1}$ of this projection is a classical object in high-dimensional analysis. When $\ell=n$, it is simply equal to the usual normalized surface measure of $\rmS^{2n-1}$. When $\ell < n$, it is given explicitly by \[\frac{(n-1)!}{\pi^\ell(n-\ell-1)!}\cdot (1 - |y|^2)^{n-\ell-1}\cdot 1_{\{|y| \le 1\}}\cdot d\vol_{2\ell}(y) \qquad (y \in \bbC^{(\ell)}).\] This standard calculation is a corollary of~\cite[Theorem A.4]{AxlBouRamHF}, for example (beware that in our setting $n$ and $\ell$ are \emph{complex} dimensions, so they appear without a factor of $1/2$ compared to the usual formulas as in that reference). One can also derive an explicit formula for $\s_{n,\ell,k}$ when $\ell$ and $k$ are both at least $2$, but it becomes rapidly more complicated as the parameters grow, and we avoid using it in the sequel. \qed \end{ex} \begin{thm}\label{thm:matrix-LDP1} For fixed $k$ and $\ell$, the sequence of distributions $(\s_{n,\ell,k})_{n\ge 1}$ obeys the large deviations principle on $\Xi(k,\ell)$ with rate function \[-\rmH(I_\ell - C^\ast C) \qquad (C \in \Xi(k,\ell)).\] \end{thm} \begin{proof} Since $\Xi(k,\ell)$ is compact, exponential tightness holds automatically. It remains to prove conditions (i) and (ii) from Definition~\ref{dfn:LDP}. We prove these by reducing our work to the volume estimates from Theorem~\ref{thm:types-2}. Let $O_{11}$ be a neighbourhood of $I_\ell$ in $\rmM_{\ell+}$. Let $C\in \Xi(k,\ell)$ and let $O_{12}$ be a neighbourhood of it in that space. Finally, let $\eps > 0$, and let $O_{22}$ be a neighbourhood of $I_k$ in $\rmM_{k+}$ which is so small that \begin{equation}\label{eq:det-control} e^{-\eps} \le \Det\, Q'_{22} \le e^\eps \qquad \hbox{for every}\ Q'_{22} \in O_{22}. \end{equation} (so, in particular, $O_{22} \subset \rmM^\circ_{k+}$). Write a typical element $Q'$ of $\rmM_{(k+\ell)+}$ as in~\eqref{eq:Q'blocks}, and let \[O := \{Q' \in \rmM_{(k+\ell)+}:\ Q'_{11} \in O_{11},\ (Q_{11}')^{-1/2}R'(Q_{22}')^{-1/2} \in O_{12},\ Q_{22}' \in O_{22}\}.\] As we vary $O_{11}$, $O_{12}$ and $O_{22}$, such sets form a neighbourhood base in $\rmM_{(k+\ell)+}$ around \[Q := \left[\begin{array}{cc} I_\ell & C\\ C^\ast & I_k\end{array}\right].\] Let $S := I_\ell - C^\ast C$, the Schur complement of the $(1,1)$ block within $Q$. Define $T'(n,O)$ from $O$ as in~\eqref{eq:T'}. Applying Proposition~\ref{prop:int-form} and simplifying gives \[\frac{\vol_{2kn}T'(n,O)}{c(k,n)} = \s_{n,\ell,k}O_{12}\cdot \int_{O_{22}}(\Det\, Q'_{22})^{n-k}\ d\vol_{k^2}(Q'_{22}).\] Combined with~\eqref{eq:det-control}, this re-arranges to give \begin{equation}\label{eq:vol-and-sigma} e^{-\eps(n-k)}\frac{\vol_{2kn}T'(n,O)}{c(k,n)\vol_{k^2}O_{22}} \le \s_{n,\ell,k}O_{12} \le e^{\eps(n-k)}\frac{\vol_{2kn}T'(n,O)}{c(k,n)\vol_{k^2}O_{22}}. 
\end{equation} To prove the probability lower bound, note that for any neighbourhood $O_{12}$ and any $\eps > 0$ we may choose $O_{22}$ so that~\eqref{eq:det-control} holds, and then the left-hand side of~\eqref{eq:vol-and-sigma} is estimated by part (a) of Theorem~\ref{thm:types-2} to give \[\s_{n,\ell,k}O_{12} \ge e^{(\rmH(S) - \eps)n - o(n)}.\] On the other hand, if $a > \rmH(S)$, then we may choose $O_{11}$, $O_{12}$ and $O_{22}$ small enough that~\eqref{eq:det-control} holds, and now the right-hand side of~\eqref{eq:vol-and-sigma} and part (b) of Theorem~\ref{thm:types-2} give \[\s_{n,\ell,k}O_{12} \le e^{(a + \eps)n - o(n)}.\] By the arbitrariness of $\eps$ and $a$, this completes the proof. \end{proof} \begin{ex}\label{eq:k=1II} When $k = 1$ as in Example~\ref{ex:k=1}, so that each $V$ is given by a single random vector $v$, Theorem~\ref{thm:matrix-LDP1} asserts that the random $\ell$-dimensional projection $E^\ast v$ obeys a large deviations principle in the closed unit ball $\ol{B}$ of $\bbC^{(\ell)}$ with rate function \begin{equation}\label{eq:ball-LDP} -\log (1 - |y|^2) \qquad (y \in \ol{B}). \end{equation} This classical fact can also be read off directly from the formula in Example~\ref{ex:k=1}. In this example, notice that $v$ is a uniform random element of the sphere whose \emph{real} dimension is $2n-1$. This might make it more natural to define the rate function as the constant that precedes $2n$ rather than $n$ in the relevant exponents of probabilities. This would be compensated by changing the formula~\eqref{eq:ball-LDP} into \[-\frac{1}{2}\log (1 - |y|^2) = -\log \sqrt{1 - |y|^2}.\] However, because Definition~\ref{dfn:LDP} insists on normalizing by the complex dimension $n$ rather than the real dimension $2n$, the formula without the square root appears above. This is also why the general rate function in Theorem~\ref{thm:matrix-LDP1} involves no square root, unlike the expression in~\eqref{eq:Gram-mut-inf}. Analogous remarks apply to Theorem~\ref{mainthm:LDP} in full, and to its 1-dimensional predecessor in~\cite{GamNagRou16} (see also~\cite[Theorem 4.1]{BreSimZei18}), which is phrased with `speed' $n$ and no square roots. \qed \end{ex} The next property of our measures on $\Xi(k,\ell)$ is also needed later. \begin{lem}\label{lem:a-s-strict} If $n \ge k+\ell$ then \[\s_{n,\ell,k}(\Xi^\circ(k,\ell)) = 1.\] \end{lem} \begin{proof} If we fix $X \in \rm{U}(k,n)$ and $Y \in \rm{U}(\ell,n)$, then it is equivalent to show that \[m_{\rmU(n)}\{U:\ \|X^\ast U Y\| < 1\} = 1.\] The complement of this event is the same as the event that the concatenation $[X,UY]$ is not an injection, or equivalently the event that \[\det \left[\begin{array}{cc}I_k & X^\ast UY\\ Y^\ast U^\ast X & I_\ell \end{array}\right] = 0.\] Regarded as a function of $U$, this determinant is a polynomial in its entries. It does not vanish identically, because when $n \ge k +\ell$ we can certainly make some choice of $U$ so that the ranges of $X$ and $UY$ intersect only at the origin. Therefore the zero set of this polynomial has zero measure. \end{proof} Because of Theorem~\ref{thm:matrix-LDP1} and Lemma~\ref{lem:a-s-strict}, Lemma~\ref{lem:LDP-open-subset} lets us transfer our large deviations principle between $\Xi(k,\ell)$ and $\Xi^\circ(k,\ell)$. Indeed, Lemma~\ref{lem:a-s-strict} shows that $\s_{n,\ell,k}$ is supported on the open subset $\Xi^\circ(k,\ell)$ of $\Xi(k,\ell)$ when $n\ge k+\ell$.
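In passing, samples from $\s_{n,\ell,k}$ are easy to generate, and the support property just described can be observed directly. The following short sketch (in Python with \texttt{numpy}; it is only an illustration, and all names in it are ad hoc) draws Haar-distributed isometries by the usual QR construction and records the operator norms of their first $\ell$ rows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def haar_isometry(n, k, rng):
    # Haar-random n-by-k complex isometry: QR of a complex Gaussian matrix,
    # with the standard diagonal phase correction.
    Z = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

n, k, l = 50, 3, 4
samples = [haar_isometry(n, k, rng)[:l, :] for _ in range(1000)]  # E^* V = first l rows of V
norms = [np.linalg.norm(S, 2) for S in samples]

# Every sample is a strict contraction (Lemma lem:a-s-strict), and the typical
# operator norm is of order n^{-1/2}.
print("all strict contractions:", max(norms) < 1.0)
print("median operator norm:", np.median(norms))
\end{verbatim}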
Let $\s^\circ_{n,\ell,k}$ be a Borel probability measure on $\Xi^\circ(k,\ell)$ which agrees with the restriction of $\s_{n,\ell,k}$ whenever $n\ge k+\ell$. \begin{cor}\label{cor:matrix-LDP1} For fixed $k$ and $\ell$, the sequence of distributions $(\s^\circ_{n,\ell,k})_{n\ge 1}$ obeys the LDP on $\Xi^\circ(k,\ell)$ with rate function \[-\rmH(I_\ell - C^\ast C) \qquad (C \in \Xi^\circ(k,\ell)).\] \end{cor} \begin{proof} Theorem~\ref{thm:matrix-LDP1} gives the analogous conclusion for the set $\Xi(k,\ell)$ of all contractions rather than its open subset $\Xi^\circ(k,\ell)$. To apply Lemma~\ref{lem:LDP-open-subset}, it remains to observe that $-\rmH(I_\ell - C^\ast C) = \infty$ whenever $C \in \Xi(k,\ell)\setminus \Xi^\circ(k,\ell)$. Indeed, if $C \in \Xi(k,\ell)$ is not a strict contraction, then the matrix $I_\ell - C^\ast C$ has a nontrivial kernel, in which case $\rmH(I_\ell - C^\ast C)$ is the log-determinant of a singular matrix, so it equals $-\infty$ and the quantity above equals $\infty$. \end{proof} We now generalize Theorem~\ref{thm:matrix-LDP1} to the setting of Section~\ref{sec:three-block-Gram}, starting with three tuples of vectors as in~\eqref{eq:full-tuple}. We first identify a certain distribution exactly, and then turn this into a more general large deviations principle. Thus, suppose that $n\ge k+\ell+m$, and fix three tuples \[V = [v_1,\dots,v_k],\quad U = [u_1,\dots,u_\ell], \quad \hbox{and} \quad X_0 = [x_1,\dots,x_m].\] Let $M$ be $\rm{ran}\,U$, and let $G$ be the compact subgroup of $\rmU(n)$ that fixes $M$ pointwise, so $G$ is canonically isomorphic to $\rmU(M^\perp)$. Let $W$ be a random element of $G$ with distribution $m_G$, and let $X = WX_0$. Because $W$ is unitary and it fixes $M$, the combined tuple $[U,X] = [U,WX_0]$ always has the same joint Gram matrix as $[WU,WX_0]$, and hence the same as $[U,X_0]$. Intuitively, $X$ is a tuple drawn `uniformly at random' conditionally on having this joint Gram matrix with $U$. As a result, the overall joint Gram matrix \[Q := \left[\begin{array}{ccc} Q_{11} & Q_{12} & R\\ Q_{12}^\ast & Q_{22} & Q_{23} \\ R^\ast & Q_{23}^\ast & I_m\end{array}\right] := \left[\begin{array}{ccc}V^\ast V & V^\ast U & V^\ast X\\ U^\ast V & U^\ast U & U^\ast X \\ X^\ast V & X^\ast U & X^\ast X\end{array}\right] \] is random only in the blocks $R$ and $R^\ast$. Thus, the partial matrix $Q^? := \pi^?(Q)$ is deterministic, fixed by our initial choice of $U$, $V$ and $X_0$, and $Q$ is a random element of $\Delta(Q^?)$ (see Section~\ref{sec:three-block-Gram} for notation). We now add the assumption that $Q^?$ is partially nonsingular. In this case, the relation~\eqref{eq:param} provides a random contraction $C\in \Xi(k,m)$ corresponding to the random completion $Q \in \Delta(Q^?)$. Having fixed $Q^?$, each of $Q$ and $C$ determines the other uniquely and continuously, by Corollary~\ref{cor:three-block-completion}. \begin{prop}\label{prop:law-of-contraction} Under the assumptions above, $C$ has distribution $\s_{n-\ell,m,k}$. \end{prop} \begin{proof} Let $P$ be the orthogonal projection onto $M$. Since every $W \in G$ satisfies $PW = WP = P$, the relevant block of $Q$ is given by \begin{align*} V^\ast X &= V^\ast WX_0\\ &= V^\ast P WX_0 + V^\ast P^\perp WX_0 \\ &= V^\ast P X_0 + V^\ast P^\perp WX_0\\ &= V^\ast P X_0 + (P^\perp V)^\ast W (P^\perp X_0). \end{align*} The partial Gram matrix $Q^?$ determines the two Schur complements $S_{11}$ and $S_{33}$ in~\eqref{eq:Schurs}.
Using these and the terms that appear above, the random contraction $C$ is given by \[C = S_{11}^{-1/2}(P^\perp V)^\ast W (P^\perp X_0)S_{33}^{-1/2} = (P^\perp V S_{11}^{-1/2})^\ast W (P^\perp X_0S_{33}^{-1/2}). \] By~\eqref{eq:proj-Gram}, $S_{11}$ and $S_{33}$ are the Gram matrices of $P^\perp V$ and $P^\perp X_0$, and therefore by~\eqref{eq:new-basis} the tuples $P^\perp VS_{11}^{-1/2}$ and $P^\perp X_0S_{33}^{-1/2}$ are unitary maps from $\bbC^{(k)}$ into $M^\perp$ and from $\bbC^{(m)}$ into $M^\perp$, respectively. Since $W$ corresponds to a uniformly random element of $\rmU(M^\perp)$, making a final unitary identification of $M^\perp$ with $\bbC^{(n-\ell)}$ completes the proof. \end{proof} Proposition~\ref{prop:law-of-contraction} leads directly to the following generalization of Theorem~\ref{thm:matrix-LDP1}. Fix a partially nonsingular partial Gram matrix $Q^?$ as above, and for all sufficiently large $n$ let $V_n$, $U_n$ and $X_{n,0}$ be three tuples in $\bbC^{(n)}$ whose Gram matrix lies in $\Delta(Q^?)$. Let $M_n := \rm{ran}\,U_n$, let $G_n$ be the subgroup of $\rmU(n)$ that fixes $M_n$ pointwise, and let $W_n$ be a random element of $G_n$ with distribution $m_{G_n}$. Finally, let $X_n := W_n X_{n,0}$, and let $Q_n$ be the Gram matrix of the combined tuple $[V_n,U_n,X_n]$. This is a random element of $\Delta(Q^?)$ of the kind studied above. Since $Q^?$ is partially nonsingular, Corollary~\ref{cor:three-block-completion} establishes a homeomorphism between $\Delta(Q^?)$ and $\Xi(k,m)$. For any $Q \in \Delta(Q^?)$, let $C$ be the image of $Q$ in $\Xi(k,m)$ under this homeomorphism, and in particular let $C_n$ be the random contraction obtained from $Q_n$ this way. \begin{cor}\label{cor:matrix-LDP2} The random Gram matrix $Q_n$ obeys a large deviations principle in the space $\Delta(Q^?)$ with rate function \begin{equation}\label{eq:matrix-LDP2} -\rmH(I_\ell - C^\ast C) \qquad (Q \in \Delta(Q^?)). \end{equation} \end{cor} \begin{proof} By the contraction principle, it is equivalent to prove that the random contraction $C_n$ obeys the large deviations principle in $\Xi(k,m)$ with rate function given by~\eqref{eq:matrix-LDP2}. This follows from Proposition~\ref{prop:law-of-contraction} and Theorem~\ref{thm:matrix-LDP1} (since $n\to \infty$ and $\ell$ is fixed, the difference between $n$ and $n-\ell$ in the exponents vanishes). \end{proof} The assumption that $Q^?$ be partially nonsingular is essential for making sense of Proposition~\ref{prop:law-of-contraction}, because without it the contraction $C$ is not even uniquely determined by $Q$. But for Corollary~\ref{cor:matrix-LDP2} this assumption is merely a technical convenience. If it does not hold, we can proceed by trimming some of the vectors in $V_n$, $U_n$ or $X_{n,0}$ (equivalently, removing some rows and columns of $Q_n$) until we reach nonsingularity of the remaining partial Gram matrix; apply Corollary~\ref{cor:matrix-LDP2} to this reduced problem; and then check that this does not change the log-determinant mutual information from the original full Gram matrix. The steps are routine but fiddly, so we omit the details. \chapter{Preliminaries on free groups and their positive definite functions}\label{chap:free-prelims} Most of the results in this chapter are again standard, but the point of view in Section~\ref{sec:restrict-extend} is somewhat new. \section{Geometry of finitely generated free groups}\label{sec:group-geom} \subsection*{\emph{Cayley graphs}} Let $\G$ be a group generated by a finite subset $S$.
Then the set $S\cup S^{-1}$ defines a Cayley graph on the vertex set $\G$. The group $\G$ is free on $S$ if and only if this graph is a tree~\cite[Proposition I.15]{Ser--T}. In this work we use Cayley graphs mostly in this special case. To be more precise, we have a choice between the left and right Cayley graphs. We mostly use the \emph{left} Cayley graph, and later references to `the Cayley graph' all imply this choice. The edges of this graph are the unordered pairs $\{g,sg\}$ for $g \in \G$ and $s \in S\cup S^{-1}$. The graph metric of the Cayley graph is a right-invariant metric on $\G$. For a group element $g$, its \textbf{length} $|g|$ is its distance from $e$ in this metric. Equivalently, it is the minimal length of a word in $S\cup S^{-1}$ that evaluates to $g$. We write $B_n$ and $S_n$ for the closed ball and sphere of radius $n$ around $e$, respectively. In particular, $S_1 = S\cup S^{-1}$. Our main uses for this graph boil down to the following. We wish to interpret paths in the graph as sequences of moves around an orbit of a point under an action of $\G$, where each individual step is implemented by a generator or its inverse. Since our actions are all from the left, this requires that we use the left Cayley graph. (By contrast, the right Cayley graph is important as a graph on which $\G$ acts naturally by graph automorphisms.) If $\G$ is the free group on a set $S$, then every element of $\G$ has a unique representation as a reduced word in $S\cup S^{-1}$. If elements $g$ and $h$ of $\G$ are represented by reduced words $v$ and $w$ respectively, then $g$ is adjacent to $h$ in the left Cayley graph if and only if one of those words is an extension of the other by a single letter on the left. More generally, if $v$ is an extension of $w$ by any number of letters on the left (resp. right), then we call $h$ a \textbf{suffix} (resp. \textbf{prefix}) of $g$. For any $g$, its suffixes (resp. prefixes) are the vertices along the shortest path from $e$ to $g$ in the left (resp. right) Cayley graph. If $E$ is a subset of $\G$, then its \textbf{exterior boundary} is the set of elements of $\G$ that are not in $E$ but are adjacent to $E$ in the left Cayley graph. Similarly, the \textbf{interior boundary} of $E$ is the exterior boundary of $\G\setminus E$. \subsection*{\emph{Grounded subsets}} The following class of finite subsets of $\G$ plays a central role throughout the remaining chapters. \begin{dfn}\label{dfn:grounded} A finite subset $F$ of $\G$ is \textbf{grounded} if it contains $e$ and is connected in the left Cayley graph. \end{dfn} Grounded sets may be visualized as the finite subtrees of the left Cayley graph that are rooted at the identity. They (or their analogs for the right Cayley graph) appear naturally in many works on free groups. For instance, they offer revealing choices of transversals to subgroups in Schreier's theorem: see, for example,~\cite[Proposition I.16]{Ser--T}. They also arise implicitly in the procedure of `splitting' observables that Bowen uses for proving some of the properties of annealed sofic entropy (the `f-invariant') in~\cite{Bowen10free}. Our term for them follows~\cite[Section 14]{Oza13}. If $F$ is a grounded subset of $\G$, and $g$ is any element of the exterior boundary of $F$, then $F\cup \{g\}$ is also grounded. We often abbreviate such a union to $F\cup g$ in the sequel.
Because the Cayley graph of $\G$ is a tree and $F$ is a connected subset of it, in this case there is a unique element $s$ of $S\cup S^{-1}$ such that $g \in sF$, and then we call $F\cup g$ an \textbf{enlargement of $F$ in direction $s$}. By removing interior boundary points one at a time and then reversing the process, any grounded set may be reached from $\{e\}$ by a sequence of enlargements. Several proofs about grounded sets in the sequel are by \textbf{induction on the grounded set}. For a desired assertion, this means we prove (i) that it holds for $\{e\}$, and then (ii) that if it holds for a grounded set $F$ then it also holds for any enlargement of $F$. This implies the assertion for any grounded set by the assembly procedure described above. In some of our inductive proofs of this kind, we must keep careful track of the directed edges of the Cayley graph that are contained in each grounded set. These directed edges are labeled by the generators from $S$ that give rise to them: the directed edges within $F$ that are labeled by generator $t$ emanate from the vertices in $t^{-1}F\cap F$. The next lemma tracks how these families of edges grow when $F$ itself is enlarged. \begin{lem}\label{lem:shift-enlargement} If $F$ is a grounded set and $F' = F \cup g$ is an enlargement in direction $s \in S\cup S^{-1}$, then \[F'\cap sF' = (F\cap sF) \cup \{g\},\] \[F'\cap s^{-1}F' = (F\cap s^{-1}F)\cup\{s^{-1}g\},\] and \[F'\cap tF' = F\cap tF \quad \hbox{for all}\ t \in (S\cup S^{-1})\setminus \{s,s^{-1}\}.\] \end{lem} \begin{proof} Let $t \in S\cup S^{-1}$, and suppose that \[h \in (F'\cap tF')\setminus (F\cap tF).\] In particular, $h$ lies in both $F'$ and $tF'$, but must actually lie in at least one of $F'\setminus F = \{g\}$ and $tF'\setminus tF = \{tg\}$. This gives us two cases: \begin{itemize} \item If $h=g$, then we must also have $g \in tF'$, and hence $t^{-1}g \in F$. Since $g$ is an exterior boundary point of $F$, this is possible only if $t=s$. \item If $h = tg$, then we must also have $tg \in F'$, and hence $tg \in F$. Since $g$ is an exterior boundary point of $F$, this is possible only if $t=s^{-1}$. \end{itemize} This proves the left-to-right inclusions. The reverse inclusions are immediate: $g \in F'$ and $g \in sF \subseteq sF'$, while $s^{-1}g \in F \subseteq F'$ and $s^{-1}g \in s^{-1}F'$ because $g \in F'$. \end{proof} There are many ways one can begin with $\{e\}$ and grow a sequence of grounded sets that exhaust $\G$ by enlargements. At a few points below we need such sequences explicitly. A \textbf{grounded enumeration} is an enumeration \[e = g_0, g_1, g_2, \dots\] of $\G$ such that $g_{i+1}$ is an exterior boundary point of $\{g_0,\dots,g_i\}$ for every $i$. In any grounded enumeration, $\{g_0,\dots,g_i\}$ is a grounded set for every $i$, and each of these sets is an enlargement of its predecessor for $i\ge 1$. For example, any enumeration that satisfies \begin{equation}\label{eq:growing-length} 0=|g_0| \le |g_1| \le |g_2|\le \cdots \end{equation} is grounded -- we refer to these examples as \textbf{length-first} enumerations. An even more specialized way to select an enumeration that satisfies~\eqref{eq:growing-length} is to place a total order on $S\cup S^{-1}$, and then apply the \textbf{length-lexicographic} ordering to elements of $\G$ by regarding them as reduced words. In this ordering, $g$ precedes $h$ if and only if either $|g| < |h|$, or $|g| = |h|$ and the first letter of $g$ that differs from the corresponding letter of $h$ precedes that letter of $h$.
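To make these notions concrete, the following short sketch (Python; only an illustration, with ad hoc names) enumerates the free group of rank $2$ in a length-lexicographic order, representing group elements as reduced words, and checks that the resulting enumeration is grounded in the sense above.
\begin{verbatim}
from itertools import islice

# Letters 1, 2 stand for the generators and -1, -2 for their inverses;
# the list below fixes a total order on S cup S^{-1}.
letters = [1, -1, 2, -2]

def reduced_words():
    # Yield reduced words in length-lexicographic order, starting with the empty word e.
    current = [()]                       # all reduced words of the current length, in order
    while True:
        yield from current
        nxt = []
        for s in letters:                # order by first letter, then by the remaining suffix
            for w in current:
                if not w or w[0] != -s:  # extend on the left only if the word stays reduced
                    nxt.append((s,) + w)
        current = nxt

words = list(islice(reduced_words(), 200))

def is_grounded(seq):
    # For a length-first enumeration, each new word w is adjacent in the left Cayley
    # graph to its suffix w[1:], which is shorter and hence already listed.
    seen = {seq[0]}
    for w in seq[1:]:
        if w[1:] not in seen:
            return False
        seen.add(w)
    return True

print("first elements:", words[:7])
print("enumeration is grounded:", is_grounded(words))
\end{verbatim}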
This particular kind of grounded enumeration plays a special role in a series expansion of annealed AP entropy that we derive in Section~\ref{sec:alt-forms}, and which is then an essential ingredient in the proof of Theorem~\ref{mainthm:tempered} in Chapter~\ref{chap:tempered}. \section{Verblunsky coefficients for positive definite functions}\label{sec:restrict-extend} Let $\phi \in \S_k(\G)$ (recall this notation from the discussion of group C$\sp*$-algebras in Section~\ref{sec:CP}). For any finite $F\subset \G$, we can consider the $F$-by-$F$ block matrix \begin{equation}\label{eq:phi-QF} \phi_{(F)} := [\phi(g^{-1}h)]_{g,h \in F}, \end{equation} where each block lies in $\rmM_k$. This belongs to the following set. \begin{dfn} For any finite subset $F$ of $\G$, we write $\S_k(F)$ for the set of all positive semidefinite elements $Q$ of $\rmM_F(\rmM_k)$ that (i) have all diagonal blocks $Q(g,g)$ equal to $I_k$, and (ii) satisfy the symmetry \begin{equation}\label{eq:Toe} Q(g_1,h_1) = Q(g_2,h_2) \quad \hbox{whenever}\ g_1,g_2,h_1,h_2 \in F\ \hbox{and}\ g_1^{-1}h_1 = g_2^{-1}h_2. \end{equation} We refer to the elements of $\S_k(F)$ as \textbf{partial positive definite functions over $F$}. We write $\S_k^\circ(F)$ for the subset of nonsingular members of $\S_k(F)$. \end{dfn} If $\phi \in \S_k(\G)$, then it is \textbf{nonsingular} if $\phi_{(F)} \in \S^\circ_k(F)$ for every finite $F\subset \G$. Starting from $\phi$, the matrices $\phi_{(F)}$ in~\eqref{eq:phi-QF} are consistent in that $\phi_{(F)}$ and $\phi_{(F')}$ have the same submatrix indexed by $F\cap F'$. On the other hand, given a consistent family of matrices $Q_F \in \S_k(F)$ indexed by some upwards-directed family of finite subsets $F$ that cover $\G$, we can set $\phi(g) := Q_F(e,g)$ for any $F$ that contains $\{e,g\}$, and so produce an element of $\S_k(\G)$ that gives rise to this family of matrices via~\eqref{eq:phi-QF}. In this way, $\S_k(\G)$ is identified with the space of consistent families of such $F$-by-$F$ block matrices. For general locally compact groups, a classic question of abstract harmonic analysis asks when a partial positive definite function over a subset has a positive definite extension to the whole group. When $\G = \bbR$ and $F$ is an interval, a classic result of Krein~\cite{Kre40} shows that this is always possible. The corresponding result for discrete intervals in $\bbZ$ is similar but simpler. On the other hand, even when $\G = \bbZ^2$ and $F$ is a box, examples of Rudin~\cite{Rud63,Rud70} show that extension may be impossible, and similar examples seem to be available over most other groups. See~\cite{BakTim11} for recent advances and a selection of further references. However, generalizing that result about $\bbZ$, extension is always possible when $\G$ is a free group and $F$ is a grounded subset. That is, for any such $F$ and any positive integer $k$, the map \begin{equation}\label{eq:big-surj} \S_k(\G) \to \S_k(F) \end{equation} defined by~\eqref{eq:phi-QF} is surjective. Indeed, this is true even if we replace $\rmM_k$ with $\B(H)$ for any complex Hilbert space $H$. This is essentially a result of Bakonyi and Timotin~\cite[Theorem 4.3]{BakTim07}, although they do not consider general grounded sets explicitly. The simplified proof in~\cite[Lemma 25]{Oza13} does allow this generality. That section of~\cite{Oza13} offered strong inspiration for several of the arguments in this chapter and also Chapter~\ref{chap:LDP-proof}. 
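As a quick illustration of these definitions in the scalar case $k=1$ (again a Python sketch with ad hoc names, not part of the formal development), the following code builds the matrix $\phi_{(F)}$ of~\eqref{eq:phi-QF} for a small grounded set $F$ in the free group of rank $2$, taking for $\phi$ one of the radial positive definite functions $e^{-t|g|}$ recalled in Example~\ref{ex:Haa-fns} below, and checks numerically that the result is positive semidefinite with unit diagonal; the symmetry~\eqref{eq:Toe} holds by construction, since each entry depends only on $g^{-1}h$.
\begin{verbatim}
import numpy as np

def inverse(w):
    # Inverse of a reduced word: reverse it and negate each letter.
    return tuple(-s for s in reversed(w))

def multiply(v, w):
    # Product of two reduced words, cancelling at the junction so the result is reduced.
    v, w = list(v), list(w)
    while v and w and v[-1] == -w[0]:
        v.pop()
        w.pop(0)
    return tuple(v + w)

def phi(g, t=0.7):
    # The radial function phi(g) = exp(-t |g|), with |g| the length of the reduced word.
    return np.exp(-t * len(g))

# A grounded set: it contains the empty word e and is connected in the left Cayley graph.
F = [(), (1,), (1, 1), (2, 1), (-2,), (2,)]

M = np.array([[phi(multiply(inverse(g), h)) for h in F] for g in F])
print("unit diagonal:", np.allclose(np.diag(M), 1.0))
print("positive semidefinite:", np.linalg.eigvalsh(M).min() > -1e-12)
\end{verbatim}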
The key step in~\cite[Section 14]{Oza13} is showing that the submatrix map \begin{equation}\label{eq:little-surj} \S_k(F\cup g)\to \S_k(F) \end{equation} is surjective when $F$ is grounded and $F\cup g$ is an enlargement of it. This implies the surjectivity of~\eqref{eq:big-surj} by extending an element of $\S_k(F)$ through a sequence of enlargements that exhausts $\G$. This surjectivity can be recast as an example of the completion problem for three-by-three-block positive semidefinite matrices from Section~\ref{sec:three-block-Gram}. In~\cite[Lemma 24]{Oza13}, Ozawa simply chooses the central completion of the relevant partial Gram matrix at each stage (see the remarks at the end of Section~\ref{sec:three-block-Gram}). Theorem~\ref{mainthm:LDP} is a large deviations principle for certain random elements of the spaces $\S_k(F)$. It is proved by induction on $F$. Along a sequence of grounded sets $F$, a sequence of these random positive semidefinite matrices is revealed one enlargement at a time. To describe the distributions of these random matrices, we need not just the surjectivity of the map~\eqref{eq:little-surj} for each $F$ and $F\cup g$, but also a parametrization of all possible pre-images of an element of $\S_k(F)$. We obtain this using Proposition~\ref{prop:three-block-completion}. Then we make contact with the large deviations principles that are already established in Section~\ref{sec:matrix-completion-LDP}. In the rest of this section we explain this parametrization in more detail. Suppose that $F$ is a grounded set and that $F\cup g$ is an enlargement of $F$ in direction $s$. Let $Q \in \S_k(F)$, and consider all possible extensions $Q' \in \S_k(F\cup g)$. Then $Q$ is the submatrix of $Q'$ obtained by removing the block-row and block-column indexed by $g$. The symmetry~\eqref{eq:Toe} dictates some elements of this row and column: for example, we must have $Q'(g,h) = Q(s^{-1}g,s^{-1}h)$ whenever $h \in F\cap sF$. It turns out that these are the only entries of $Q'$ that are determined by $Q$ through equation~\eqref{eq:Toe}. This fact is contained within the proof of~\cite[Lemma 25]{Oza13}. \begin{lem}\label{lem:Toe} If $h,a,b \in F$ satisfy $a^{-1}b = h^{-1}g$, then $h \in F\cap sF$. \end{lem} \begin{proof} Since $F$ is grounded, it contains $e$ and is connected in the left Cayley graph. Therefore, in addition to $a$ and $b$, $F$ contains all their suffixes. We may therefore cancel a common prefix and so assume that $a$ and $b$ begin with different letters. Now there are two cases. First, if $s$ is the first letter of the reduced word of $h$, then $s^{-1}h$ is a suffix of $h$, so $s^{-1}h \in F$ by connectedness. On the other hand, if $s$ is not the first letter of $h$, then $h$ and $g$ have no prefix in common. In this case there is no cancellation between the reduced words of $h^{-1}$ and $g$ when they are multiplied, as we have already arranged for $a$ and $b$. Since $b\in F$ and $a^{-1}b = h^{-1}g$, this is possible only if $b$ is a proper suffix of $g$, and therefore $h$ is a proper suffix of $a$. Because of the equality $h^{-1}g = a^{-1}b$, this properness implies that $s^{-1}h$ is still a suffix of $a$ (now possibly equal to $a$), and so again it lies in $F$ by connectedness. \end{proof} Now imagine ordering the set $F\cup g$ so that $F\setminus sF$ comes first, $F\cap sF$ comes next, and $g$ comes last. 
Use this ordering to write $Q$ and $Q'$ as block matrices: \begin{equation}\label{eq:K-ext} Q = \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{12}^\ast & Q_{22}\end{array}\right] \quad \hbox{and} \quad Q' := \left[\begin{array}{ccc} Q_{11} & Q_{12} & R\\ Q_{12}^\ast & Q_{22} & Q_{23} \\ R^\ast & Q_{23}^\ast & I_k\end{array}\right]. \end{equation} Let $Q^?$ be the partial matrix obtained from $Q'$ by replacing $R$ with $?$. The possible extensions $Q'$ of $Q$ are the positive semidefinite completions of $Q^?$. If $Q^?$ is partially nonsingular, then these completions are parametrized by contractions from $\bbC^{(k)}$ to $\bbC^{(k|F\setminus sF|)}$ according to Proposition~\ref{prop:three-block-completion}. This partial nonsingularity holds if and only if $Q$ is nonsingular, since the two relevant two-by-two-block submatrices of $Q^?$ are both submatrices of $Q$ by translation-invariance. \begin{dfn}\label{dfn:Verb} If $Q \in \S^\circ_k(F)$ and $Q'$ is an extension of it in $\S_k(F\cup g)$, then the \textbf{Verblunsky coefficient} of $Q'$ over $Q$ is the element of $\Xi(k,k|F\setminus sF|)$ that parametrizes it according to Proposition~\ref{prop:three-block-completion}. If $\phi \in \S_k(\G)$, $F$ is a grounded set, $\phi_{(F)}$ is nonsingular, and $F\cup g$ is an enlargement of $F$, then the \textbf{Verblunsky coefficient} of $\phi$ from $F$ to $F\cup g$ is the Verblunsky coefficient of $\phi_{(F\cup g)}$ over $\phi_{(F)}$. \end{dfn} The use of this term is extended from the classical case when $\G = \bbZ$ and $k=1$. By Bochner's theorem, a unital positive definite function $\phi$ on $\bbZ$ is the Fourier-Stieltjes transform of a probability measure $\mu$ on the unit circle $\bf{K}$ in $\bbC$. As a result, if $F = \{1,2,\dots,m\}$, then $\phi_{(F)}$ is the Toeplitz matrix $[\langle z^j,z^i\rangle_{L^2(\mu)}]_{i,j=1}^m$. The associated sequence $(p_n)_{n\ge 0}$ of orthogonal polynomials is obtained by applying the Gram--Schmidt procedure to the sequence $1$, $z$, $z^2$, \dots in $L^2(\mu)$. These polynomials are related by a recursive formula due to Szeg\H{o}~\cite[Theorem 1.8.2]{SimSzeg}. Provided $\mu$ is not finitely supported, so $1$, $z$, \dots are linearly independent in $L^2(\mu)$, that recursion introduces an infinite sequence of coefficients $\a_n$ called the Verblunsky coefficients. They lie in the open disk in the complex plane, and their geometric interpretation explained in~\cite[Section 1.8]{SimSzeg} shows that they are precisely the special case of our Definition~\ref{dfn:Verb}. The generalization with $\G = \bbZ$ but $k > 1$ has also been studied fairly completely: see the introduction and further references in~\cite[Section 2.13]{SimOPUCI}, or the overview in~\cite{Treil89}. Our work in this section is a generalization for free groups of this classical parametrization of positive definite functions on $\bbZ$ by their Verblunsky coefficients. This point of view is also taken by Bakonyi and Timotin in~\cite{BakTim07} (they refer to `Szeg\H{o} parameters' rather than Verblunsky), and our parametrization is a technical variation on theirs. Some of our other results later are also free-group generalizations of known theorems about orthogonal polynomials, and this provides important motivation throughout Part~\ref{part:free}. Our next result is the free-group generalization of Verblunsky's original theorem in this area: compare~\cite[Theorem 1.8.5]{SimSzeg}.
\begin{cor}\label{cor:Verb} Fix a grounded enumeration $g_0,g_1,\dots$ of $\G$, let $F_n := \{g_0,g_1,\dots,g_n\}$ for each $n$, and assume that $F_{n+1}$ is an enlargement of $F_n$ in direction $s_n$. Given $\phi \in \S^\circ_k(\G)$, let $C_n$ be the Verblunsky coefficient of $\phi$ from $F_n$ to $F_{n+1}$. Then the map \begin{equation}\label{eq:Verb-map} \S^\circ_k(\G) \to \prod_{n\ge 0}\Xi^\circ(k,k|F_n\setminus s_nF_n|):\phi\mapsto (C_0,C_1,\dots) \end{equation} is a bijection. \end{cor} \begin{proof} First suppose we know the sequence $C_0,C_1,\dots$ for some $\phi \in \S^\circ_k(\G)$. Then $\phi_{(F_0)} = I_k$, and once $\phi_{(F_n)}$ is known, it and the contraction $C_n$ determine $\phi_{(F_{n+1})}$. So the given map is injective. On the other hand, given any such sequence $C_0,C_1,\dots$, we can construct a compatible sequence of elements $Q_{(F_n)} \in \S_k^\circ(F_n)$ by setting $Q_{(F_0)} := I_k$ and then recursively appealing to Proposition~\ref{prop:three-block-completion}. As explained at the start of the section, these all come from a single element of $\S_k(\G)$, and that element is nonsingular by construction. So the given map is surjective. \end{proof} By allowing generalized Schur complements, the inverse of the map~\eqref{eq:Verb-map} can be extended to parametrize the whole of $\S_k(\G)$, not just $\S_k^\circ(\G)$. But this is more complicated, particularly when the topologies on those spaces of contractions must be taken into account, so instead we find ways to work around this case in the sequel. The necessary modifications are already a little fiddly when $\G = \bbZ$. See, for instance,~\cite[Introduction]{BreSimZei18}, where the authors have to allow finitely supported measures because they are working towards a large deviations principle on the space of \emph{all} probability measures on $\bbT$. We also sometimes need the continuity given by the following consequence of Corollary~\ref{cor:three-block-completion}. \begin{lem}\label{lem:pdf-completion} Let $F$ be a grounded set and let $F\cup g$ be an enlargement of $F$ in direction $s$. Given $Q \in \S_k(F\cup g)$ such that $Q_{(F)}$ is nonsingular, let $C$ be its Verblunsky coefficient over $Q_{(F)}$. Then the map \begin{align*} \big\{Q \in \S_k(F\cup g):\ Q_{(F)}\ \hbox{nonsingular}\big\} &\to \S_k^\circ(F)\times \Xi(k,k|F\setminus sF|):\\ Q &\mapsto (Q_{(F)},C) \end{align*} is a homeomorphism, and it maps $\S_k^\circ(F\cup g)$ onto $\S_k^\circ(F)\times \Xi^\circ(k,k|F\setminus sF|)$. \end{lem} \begin{proof} We identify $\S_k(F\cup g)$ as the subset of $\rmM_{k(|F| + 1)+}$ defined by the symmetries~\eqref{eq:Toe}; restrict the map from Corollary~\ref{cor:three-block-completion} to this subset; and then identify its image as described above. \end{proof} The use of Verblunsky coefficients to parametrize completely positive maps on free groups is a great advantage in the sequel. It has no clear analog in the ergodic theory of stationary processes over free groups. As we find repeatedly, this enables new proofs about annealed AP entropy that are quite different from their counterparts for annealed sofic entropy, including in cases where those ergodic theoretic proofs do not extend. For $\G = \bbZ$, Verblunsky's theorem is the beginning of a long-running search for implications between properties of a measure $\mu$ on $\bf{K}$ and properties of its Verblunsky coefficients. Many such results are centrepieces of~\cite{SimOPUCI,SimOPUCII} and~\cite[Chapters 1 and 2]{SimSzeg}.
First in line is a version of Szeg\H{o}'s theorem in terms of Verblunsky coefficients: see~\cite[Theorem 1.8.7 and Corollary 1.8.6]{SimSzeg}, and also the end of Section~\ref{sec:three-entropies} below. Several further examples are listed in~\cite[Section 1.1]{SimOPUCI}: consider especially Baxter's theorem about the absolute continuity of $\mu$ and properties of its density~\cite[Chapter 5]{SimOPUCI}; Rakhmanov's theorem about everywhere-positivity of $\mu_{\rm{ac}}$~\cite[Chapter 9]{SimOPUCII}; and also implications from Verblunsky coefficients to essential spectra of measures (see the end of~\cite[Section 3.4]{SimOPUCI}). It could be interesting to seek analogs of such implications over general free groups. Beware, however, that positive definite functions on a non-Abelian group are more mysterious than measures, and the conversion of the story into complex analysis via the spectral theorem ceases to be available. We do not pursue this topic further here. \begin{ex}\label{ex:Haa-fns} Haagerup~\cite{Haa79} proved that the functions on $\G$ given by $\phi(g) = e^{-t|g|}$ are positive definite for any $t > 0$. These were generalized to a non-radial family of examples in~\cite{DeMFigTal80}, referred to as \textbf{Haagerup functions}. To define one of these, pick values $\phi(s) = \phi(s^{-1})$ in the closed unit disk for each $s \in S$, and then extend to any reduced word by multiplication along the letters. These simple and instructive examples depend on the choice of the free generating set $S$, and they are never characters apart from the trivial examples $\chi_{\rm{reg}}$ and $1_\G$. They are also introduced carefully in~\cite[Section 8.III]{FigTalPicHAFG}. If the Haagerup function $\phi$ is associated to $\pi = \pi_\phi$ by the cyclic vector $z$, and if $A$ and $B$ are two finite connected subsets of the \emph{right} Cayley graph of $(\G,S)$ such that $A\cap B$ is a singleton $\{g\}$, then the subspaces $\rm{span}\,\pi(A)z$ and $\rm{span}\,\pi(B)z$ are relatively orthogonal over $\bbC\cdot \pi(g)z$. This is an aspect of a more complete `independence' phenomenon which is explained more fully in~\cite[Section 5]{BakTim07}. In view of our interpretation of `conditioning' as projecting to an orthogonal subspace (see Section~\ref{sec:log-det-ent-etc}), this shows that Haagerup functions are a representation-theoretic analog of free-group Markov processes (as already noted by Burton and Juschenko in~\cite{BurJus22}). The reference~\cite[Section 5]{BakTim07} also explains the generalization of this construction using matrices to obtain $\rmM_k$-valued positive definite functions. Now pick a length-first enumeration $g_0$, $g_1$, \dots of $\G$, let $F_n := \{g_0,g_1,\dots,g_n\}$, and suppose that $F_{n+1}$ is an enlargement of $F_n$ in direction $s_n$. Let us show that the Verblunsky coefficient of $\phi$ from $F_n$ to $F_{n+1}$ vanishes for all $n$ such that $|g_{n+1}| \ge 2$. For such a value of $n$, let $h$ be the prefix of $g_{n+1}$ of length $|g_{n+1}|-1$. Then $h$ is one step from $g_{n+1}$ towards $e$ in the \emph{right} Cayley graph. Since $|h| < |g_{n+1}|$, $h$ also lies in $F_n$, because we picked a length-first enumeration. Finally, $s_n$ is the first letter of the reduced word of $g_{n+1}$, and also of $h$ because $h$ still has length at least $1$. Overall, this shows that $h \in F_n\cap s_n F_n$, and $h$ separates $g_{n+1}$ from the rest of $F_n$ in the right Cayley graph.
Therefore Proposition~\ref{prop:three-block-completion} and the relative orthogonality from the previous paragraph give the desired vanishing of the Verblunsky coefficient. On the other hand, as long as $|\phi(s)| < 1$ for every $s$, the first few Verblunsky coefficients all have absolute value $<1$. This provides a family of examples with finite but non-zero annealed AP entropy. \end{ex} Many other distinguished families of free-group representations have been studied, and it could be interesting to study our notions of entropy in those as well. For example,~\cite[Chapter 5]{FigTalPicHAFG} introduces two other one-parameter families of irreducible free-group representations. Such examples are often constructed as `boundary representations', and more of them are studied in~\cite{KuhSte92}, which also includes several further references concerning this family. \subsection*{\emph{Notes and further references}} Even for locally compact Abelian groups, problems of characterizing and extending locally-defined positive definite functions quickly become rich and subtle, and have been studied widely. The monograph~\cite{JorPedTia16} is mostly dedicated to such questions, with a number of sections that allow non-Abelian Lie groups as well. Alongside~\cite{SimOPUCI,SimOPUCII,SimSzeg}, Nikolski's book~\cite{NikTMO} offers a somewhat terser introduction to Toeplitz matrices and orthogonal polynomials. See~\cite[Section 5.6]{NikTMO} for Szeg\H{o}'s limit theorems for determinants and spectra. Toeplitz operators are central examples in more abstract operator theory for many reasons. Other aspects of their study are introduced in~\cite[Chapter 7]{DouBAOT} or~\cite[Section 2.3]{HigRoeAKH}. \chapter{Annealed AP entropy over free groups}\label{chap:free-gps} We now begin to specialize the ideas of Part~\ref{part:general} to representations of free groups and their group C$\sp*$-algebras. For these a natural notion of `uniformly random $n$-dimensional representation' is available, and we can study their average or typical entropy behaviour. \subsection*{\emph{Important warning}} In the sequel, we study only \emph{finitely generated} free groups, but we often omit to mention this for brevity. The free group generated by a countably infinite set should enjoy a similar theory, but its analysis will be complicated by more technicalities such as additional convergence issues, and I have not looked into these closely. \section{Random representations}\label{sec:intro-random-AP} Let $\G$ be the group freely generated by a finite set $S$ of size $r$. Fix a background probability space $(\Omega, \F, \bbP)$, and for each positive integer $n$ let $(\pi_n(s):\ s \in S)$ be an $S$-tuple of random elements of $\rmU(n)$, independent and each with distribution $m_{\rm{U}(n)}$. Let $\pi_n$ be the resulting random representation of $\G$ on $\bbC^n$. We refer to any $\pi_n$ with this distribution as \textbf{uniformly random}. In the sequel, we never consider probabilities that involve $\pi_n$ for more than one value of $n$ at a time. For this reason, the joint distribution of these random representations is unimportant. A reader wanting a definite choice may assume they are independent. The functional \[\chi_{\pi_n}(g) := \tr_n\pi_n(g) \qquad (g \in \G)\] is a random character on $\G$. Some first properties of $\pi_n$ follow from the expectation of this random functional together with the measure concentration from Theorem~\ref{thm:Un-conc}.
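Before stating these properties, here is a minimal numerical sketch of the model in Python (the helper \texttt{haar\_unitary} and all other names are ours, introduced purely for illustration): it samples the generators of $\pi_n$ from $m_{\rm{U}(n)}$ via the standard QR recipe and evaluates $\chi_{\pi_n}$ at one nontrivial word, for which the normalized trace is then close to $\chi_{\rm{reg}}(g) = 0$.
\begin{verbatim}
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with the usual phase correction,
    # yields a sample from the Haar probability measure on U(n).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
n, r = 400, 2
gens = [haar_unitary(n, rng) for _ in range(r)]    # pi_n(s) for s in S
word = [(0, +1), (1, +1), (0, -1), (1, -1)]        # the commutator of the two generators
g = np.eye(n, dtype=complex)
for i, e in word:
    g = g @ (gens[i] if e == +1 else gens[i].conj().T)
print(abs(np.trace(g) / n))                        # close to chi_reg(g) = 0 for large n
\end{verbatim}
The phase correction by $\rm{diag}(r_{ii}/|r_{ii}|)$ is what makes the output of the QR step exactly Haar-distributed rather than only approximately so.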
\begin{thm}\label{thm:asymp-free1} The expectation $\bbE\chi_{\pi_n}$ converges to $\chi_{\rm{reg}}$ in $\S_1(\G)$ as $n\to\infty$. \qed \end{thm} \begin{thm}\label{thm:asymp-free2} For every neighbourhood $V$ of $\chi_{\rm{reg}}$ in $\S_1(\G)$ there is a positive constant $c$ such that \[\bbP(\chi_{\pi_n} \not\in V) = O(e^{-cn^2}).\] \end{thm} \begin{proof} The topology of $\S_1(\G)$ agrees with the topology of pointwise convergence. Recalling the usual sub-base for this topology, it suffices to prove that, for each $g \in \G$ and $\eps > 0$, there exists $c > 0$ such that \[\bbP(|\chi_{\pi_n}(g) - \chi_{\rm{reg}}(g)| \ge \eps) = O(e^{-cn^2}).\] By Theorem~\ref{thm:asymp-free1}, this follows if we find $c > 0$ such that \[\bbP(|\chi_{\pi_n}(g) - \bbE \chi_{\pi_n}(g)| \ge \eps/2) = O(e^{-cn^2}).\] Finally, this follows from Theorem~\ref{thm:Un-conc} because, for each $g \in \G$, the map \[(\pi_n(s):\ s \in S) \mapsto \tr_n\pi_n(g)\] is $O(|g|/n)$-Lipschitz for the Hilbert--Schmidt metric on $\rm{U}(n)^r$. \end{proof} Theorems~\ref{thm:asymp-free1} and~\ref{thm:asymp-free2} both originate in Voiculescu's foundational work on free probability: see~\cite[Theorems 3.8 and 3.9]{Voi91} (and note that the first of these covers rather more than our Theorem~\ref{thm:asymp-free1}). I include the proof of Theorem~\ref{thm:asymp-free2} above to show the connection with Section~\ref{sec:conc}, and because we need the particular shape of the resulting estimate later. \subsection*{\emph{Notes and further references}} A single random unitary matrix with distribution $m_{\rm{U}(n)}$ is one of the most well-studied models in random matrix theory, where this distribution is traditionally called the `circular unitary ensemble': see, for instance,~\cite[Section 10.3]{MehRM}. Its analysis is driven by Weyl's classical integration formula for central functions on unitary groups~\cite[Proposition 4.1.6]{AndGuiZei--book}. However, the interaction of several independent random unitary matrices is not amenable to such exact calculations. The alternative point of view provided by operator algebras via Voiculescu's free probability theory has been key to uncovering the phenomena of such random tuples, and much remains to be explored. \section{Annealed AP entropy} For the group C$\sp*$-algebra $C^\ast\G$, completely positive maps to $\rmM_k$ can be identified with positive definite maps from $\G$ itself to $\rmM_k$, as explained in the last part of Section~\ref{sec:CP}. We modify the notation from Chapter~\ref{chap:AP} accordingly by writing $\X(\pi,O)$ when $\pi$ is any representation of $\G$ and $O$ is any set of functions from $\G$ to $\rmM_k$. Let $\pi_n$ be a uniform random $n$-dimensional representation. \begin{dfn}\label{dfn:aAP-ent} If $\phi:\G\to\rmM_k$ is positive definite, then its \textbf{annealed AP entropy} is \[ \hann(\phi) := \inf_O \limsup_{n \to\infty} \frac{1}{n}\log \frac{\bbE \vol_{2kn}\X(\pi_n,O)}{c(k,n)},\] where the infimum runs over all neighbourhoods $O$ of $\phi$ in $\ell^\infty(\G;\rmM_k)$. \end{dfn} \begin{rmk} The term `annealed' is taken from statistical physics. When using probability to model disordered materials, an `annealed average' refers to the expectation of a random cardinality or a random volume in phase space. By contrast, a `quenched average' should capture the high-probability behaviour of that cardinality or volume, but this may not be reflected accurately by the annealed average because of large values on low-probability events.
Annealed averages are often easier to work with, and can be a useful step towards understanding high-probability behaviour even if they do not capture it directly. See, for instance, the use of these terms in~\cite{MezParVir--book}. \end{rmk} An alternative formula for $\hann$ using the measure $m_{\rm{U}(k,n)}$ follows from Lemma~\ref{lem:normalized} by exactly the same reasoning as for Corollary~\ref{cor:normalized}. \begin{lem}\label{lem:normalized3} Let $\phi:\G\to \rmM_k$ be positive definite. \begin{itemize} \item[a.] If $\phi(e)$ is singular, then $\hann(\phi) = -\infty$. \item[b.] If $\phi(e)$ is nonsingular, and $\cal{O}$ is a base of neighbourhoods around ${\phi(e)^{-1/2}\cdot \phi\cdot\phi(e)^{-1/2}}$ in $\S_k(\G)$, then \begin{equation}\label{eq:normalized3} \hann(\phi) = \log \det \phi(e) + \inf_{O\in \cal{O}} \limsup_{n \to\infty} \frac{1}{n}\log \bbE m_{\rm{U}(k,n)}\X(\pi_n,O). \end{equation} \qed \end{itemize} \end{lem} In particular, if $\phi$ is unital, then the second term on the right-hand side of~\eqref{eq:normalized3} is an alternative expression for $\hann(\phi)$. Similarly to Remark~\ref{rmk:KL-div?}, we can think of the negative of this quantity as an annealed version of a Kullback--Leibler divergence. This alternative formula leads naturally to a more probabilistic interpretation in terms of a chosen orthonormal tuple, say $V = [e_1,\dots,e_k]$. Starting from the definition of $m_{\rm{U}(k,n)}$ as a pushforward of $m_{\rm{U}(n)}$, we have \begin{align}\label{eq:probab-reform} \bbE m_{\rm{U}(k,n)}\X(\pi_n,O) &= \bbE \int_{\rm{U}(n)}1_{\X(\pi_n,O)}(UV)\ dm_{\rm{U}(n)}(U) \nonumber\\ &= \int_{\rm{U}(n)}\bbP(UV \in \X(\pi_n,O))\ dm_{\rm{U}(n)}(U) \nonumber\\ &= \int_{\rm{U}(n)}\bbP(V \in \X(U^\ast\pi_n U,O))\ dm_{\rm{U}(n)}(U) \nonumber \\ &= \bbP(V \in \X(\pi_n,O))\nonumber\\ &= \bbP(\Phi^{\pi_n}_V \in O), \end{align} where the second equality holds by Tonelli's theorem, the fourth because the law of $\pi_n$ is invariant under conjugation by any fixed element of $\rm{U}(n)$, and the last by the definition of $\Phi^{\pi_n}_V$. This calculation is typical of the ways in which annealed averages are simpler than other statistics in models such as ours. Corollary~\ref{cor:LDP} below asserts that the random type $\Phi^{\pi_n}_V$ obeys a large deviations principle in the space $\S_k(\G)$ with rate function $-\hann$ as $n\to\infty$. Then~\eqref{eq:normalized3} and~\eqref{eq:probab-reform} connect this back to Theorem~\ref{mainthm:LDP}. Corollary~\ref{cor:LDP} also provides an explicit formula for the rate function $-\hann$, which in turn gives the proof of Theorem~\ref{mainthm:annealed-formula}. This large deviations principle continues the analogy between $-\hann$ for unital maps and Kullback--Leibler divergence, which is the rate function in Sanov's theorem~\cite[Section 11.4]{CovTho06}. \section{The chosen-tuple large deviations principles}\label{sec:chosen-tuple-statement} In the notation of~\eqref{eq:probab-reform}, $\Phi^{\pi_n}_V$ is a random element of the infinite-dimensional compact convex set $\S_k(\G)$, which we can regard as a limit of its projections over grounded subsets of $\G$. Theorem~\ref{mainthm:LDP} asserts that these obey the LDP with rate function $-h_F$. Because $V$ is fixed but $\pi_n$ is random, we sometimes refer to part (b) of this theorem as the `chosen-tuple' large deviations principle.
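For concreteness, here is a small Python sketch of this random element (all names are ours and purely illustrative, and we read the projection $(\Phi^{\pi_n}_V)_{(F)}$ as the block Gram matrix $[\,(\pi_n(g)V)^\ast(\pi_n(h)V)\,]_{g,h\in F}$ of the orbit patch of $V$): it assembles that matrix over the small grounded set $F = \{e,a,b\}$, where $a$ and $b$ denote the two free generators. By construction the matrix is positive semidefinite, and for $n \ge k|F|$ it is almost surely nonsingular (see Proposition~\ref{prop:dil-dist} below).
\begin{verbatim}
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix with phase correction: a Haar sample from U(n).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
n, k = 300, 2
a, b = haar_unitary(n, rng), haar_unitary(n, rng)   # pi_n(a), pi_n(b)
V = np.eye(n, k)                                    # the chosen orthonormal k-tuple [e_1,...,e_k]
F = [np.eye(n), a, b]                               # the grounded set {e, a, b}, as matrices
cols = np.hstack([u @ V for u in F])                # the orbit patch, as an n x k|F| matrix
Q = cols.conj().T @ cols                            # the block Gram matrix (Phi^{pi_n}_V)_(F)
print(Q.shape, np.linalg.eigvalsh(Q).min())         # 6 x 6, least eigenvalue positive a.s.
\end{verbatim}
The quantities entering the formula for $h_F$ below are then log-determinants (suitably normalized) of principal block submatrices of such a matrix.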
Using our notation for log-determinant entropy, we can write the function $h_F$ explicitly: \[h_F(q) := \left\{\begin{array}{ll} \rmH(q) - \sum_{s \in S}\rmH(q_{(F\cap sF)}) &\quad \hbox{if $q$ is nonsingular} \\ -\infty & \quad \hbox{if $q$ is singular}\end{array}\right.\] for $q \in \S_k(F)$. This is worth bearing in mind because later we manipulate this quantity using results such as the chain rules from Section~\ref{sec:log-det-ent-etc}. Assuming Theorem~\ref{mainthm:LDP}, we also obtain an LDP for the whole of $\Phi^{\pi_n}_V$. \begin{cor}\label{cor:LDP} The random positive definite function $\Phi^{\pi_n}_V$ obeys the large deviations principle in the space $\S_k(\G)$ with rate function equal to \begin{equation}\label{eq:LDP-formula-inf} -\inf\big\{h_F(\phi_{(F)}):\ F\subset \G,\ F\ \hbox{grounded}\big\} = -\lim_{n\to\infty}h_{B_n}(\phi_{(B_n)}) \end{equation} for $\phi \in \S_k(\G)$. \end{cor} \begin{proof}[Proof of Corollary~\ref{cor:LDP} from Theorem~\ref{mainthm:LDP}.] Since grounded sets form an upwards-directed cover of the whole of $\G$, the first formula in~\eqref{eq:LDP-formula-inf} follows from Theorem~\ref{mainthm:LDP} by an application of part (b) of Lemma~\ref{lem:LDP-inf-product}. Since the balls $B_n$ are all grounded and also form an upwards-directed cover of $\G$, the same reasoning gives the analogous formula when $F$ is restricted to the set of balls $\{B_n:\ n\ge 1\}$. By part (a) of Lemma~\ref{lem:LDP-inf-product}, the quantity appearing inside that infimum is non-increasing in $n$, so the infimum is equal to the limit as $n\to\infty$. \end{proof} Corollary~\ref{cor:LDP} now leads back to the first formula of Theorem~\ref{mainthm:annealed-formula}. Recall the notation from~\eqref{eq:pre-Gram-ent} for log-determinant entropy and other quantities in terms of a Gram matrix, which we use frequently here and in the next few chapters. \begin{proof}[Proof of formula~\eqref{eq:LDP-formula-1} of Theorem~\ref{mainthm:annealed-formula} from Corollary~\ref{cor:LDP}] First, if $\phi(e)$ is singular, then $\hann(\phi)=-\infty$, and the explicit convention in Theorem~\ref{mainthm:annealed-formula} says that~\eqref{eq:LDP-formula-1} is also equal to $-\infty$ in this case. Second, if $\phi$ is unital, then~\eqref{eq:normalized3},~\eqref{eq:probab-reform}, and the second expression in~\eqref{eq:LDP-formula-inf} give \[\hann(\phi) = \inf_O\limsup_{n\to\infty}\frac{1}{n}\log \bbP(\Phi^{\pi_n}_V \in O) = \lim_{n\to\infty}h_{B_n}(\phi_{(B_n)}),\] and this agrees with~\eqref{eq:LDP-formula-1} in this case. Finally, if $\phi(e)$ is nonsingular but not unital, then~\eqref{eq:normalized3} gives \begin{equation}\label{eq:reduce-to-normalized} \hann(\phi) = \log \det \phi(e) + \hann(\phi_1), \end{equation} where $\phi_1 := \phi(e)^{-1/2}\cdot \phi \cdot \phi(e)^{-1/2}$. On the other hand, for any finite subset $F$ of $\G$, we have \[\phi_{(F)} = \rm{diag}(\underbrace{\phi(e)^{1/2},\dots,\phi(e)^{1/2}}_{F})\cdot (\phi_1)_{(F)}\cdot \rm{diag}(\underbrace{\phi(e)^{1/2},\dots,\phi(e)^{1/2}}_{F}),\] and hence \[\rmH(\phi_{(F)}) = |F|\log \det \phi(e) + \rmH((\phi_1)_{(F)})\] (where both sides may equal $-\infty$).
Applying this equality to the balls $B_n$ and their intersections $B_n\cap sB_n$, we find that \begin{align*} h_{B_n}(\phi_{(B_n)}) &= \rmH(\phi_{(B_n)}) - \sum_{s \in S}\rmH(\phi_{(B_n\cap sB_n)})\\ &= \log \det \phi(e) + h_{B_n}((\phi_1)_{(B_n)}), \end{align*} where we have used the calculation \[|B_n| - \sum_{s \in S}|B_n\cap sB_n| = \Big(1 + 2r\cdot \frac{(2r-1)^n-1}{2r-2}\Big) - r\cdot \Big(2\cdot \frac{(2r-1)^n-1}{2r-2}\Big) = 1,\] and where as usual either both sides are finite and equal or both sides are $-\infty$. Comparing this with~\eqref{eq:reduce-to-normalized}, we see that the unital case of identity~\eqref{eq:LDP-formula-1} implies the general case. \end{proof} We prove formula~\eqref{eq:LDP-formula-2} of Theorem~\ref{mainthm:annealed-formula} in Section~\ref{sec:alt-forms}, along with some other alternative formulas for annealed AP entropy. We prove Theorem~\ref{mainthm:LDP} itself in Section~\ref{sec:completed-LDP} by an induction on $F$. Because this inductive proof involves evaluating the change in $h_F$ as $F$ is enlarged, it also gives another alternative formula for $\hann$ as an infinite series of the increments along a grounded enumeration: see Corollary~\ref{cor:first-expansion}. Later, Theorem~\ref{mainthm:LDP} is the basis for a number of other large deviations principles in Chapters~\ref{chap:zeroth-order} and~\ref{chap:tempered}, as well as the proof of Theorem~\ref{mainthm:tempered} in Chapter~\ref{chap:tempered}. Before leaving this section, let us note another essential feature of Theorem~\ref{mainthm:LDP} and Corollary~\ref{cor:LDP}. These give large deviations principles on the compact convex spaces $\S_k(F)$ and $\S_k(\G)$, respectively. However, in both cases the rate functions are usually not convex (that is, $h_F$ and $\hann$ are not concave). More precisely, the failure of concavity of $\hann$ appears as soon as one leaves the subset of tempered elements of $\S_k(\G)$. This is a proper subset of that space provided $r\ge 2$, so we can attribute this failure to the non-amenability of non-Abelian free groups. These facts about $\hann$ become clear during Chapters~\ref{chap:zeroth-order} and~\ref{chap:tempered} below --- indeed, the notion of `zeroth-order' entropy that we define in Chapter~\ref{chap:zeroth-order} would be trivial if $\hann$ were concave. Because of the non-convexity, the machinery of Legendre--Fenchel transforms and the G\"artner--Ellis theorem (see~\cite[Section 4.5]{DemZei--LDPbook}, and also the discussion that accompanies their Lemma 4.1.21) is not available to us. As a result, the coming proofs of Theorem~\ref{mainthm:LDP} and Corollary~\ref{cor:LDP} are much closer to `first principles' than those techniques. \section{Joint distribution of random Verblunsky coefficients} To reach Theorem~\ref{mainthm:LDP}, we first develop some auxiliary calculations about conditioning on grounded subsets of orbits in a random representation. This is the work of Chapter~\ref{chap:random-orbits}. It leads naturally to a description of the distribution of $(\Phi^\pi_V)_{(F)}$ for each $F$ when $\pi$ is a uniformly random representation of sufficiently high dimension, and this in turn is the key to our proof of Theorem~\ref{mainthm:LDP}. This description has intrinsic interest of its own, so we formulate it separately here. Let $Q^\pi_{(F)} := (\Phi^\pi_V)_{(F)}$ to lighten notation. The description is recursive from one grounded set $F$ to an enlargement $F\cup g$ in direction $t$.
When $Q^\pi_{(F)}$ is nonsingular, let \[C^\pi_{F,g} \in \Xi(k,k|F\setminus tF|)\] be the Verblunsky coefficient of $Q^\pi_{(F\cup g)}$ over $Q^\pi_{(F)}$ (Definition~\ref{dfn:Verb}), so this is a random element of that space of contractions once we condition on the event that it is well-defined. For notation, recall the distributions $\s_{n,\ell,k}$ on the space of contractions $\Xi(k,\ell)$ defined in~\eqref{eq:projected-contraction-dist}. \begin{prop}\label{prop:dil-dist} If $n\ge k|F|$, then the following hold: \begin{itemize} \item[a.] $Q^\pi_{(F)}$ is nonsingular almost surely, so $C^\pi_{F,g}$ is defined almost surely; \item[b.] $C^\pi_{F,g}$ is independent from $Q^\pi_{(F)}$; \item[c.] $C^\pi_{F,g}$ has distribution $\s_{n-k|F\cap tF|,k|F\setminus tF|,k}$. \end{itemize} \end{prop} Notice that part (a) of this proposition is needed for parts (b) and (c) to make sense, and it also gives part (a) of Theorem~\ref{mainthm:LDP}. The independence given by part (b) of Proposition~\ref{prop:dil-dist} is a great advantage of working with Verblunsky coefficients during the subsequent proof of the rest of Theorem~\ref{mainthm:LDP}. Proposition~\ref{prop:dil-dist} has the following corollary, which also seems worth emphasizing explicitly. Fix any grounded enumeration $e = g_0$, $g_1$, $g_2$, \dots of $\G$, and let $F_i := \{g_0,\dots,g_i\}$ for each $i\ge 0$. Suppose that $F_{i+1}$ is an enlargement of $F_i$ in direction $s_{i+1}$ for each $i$. \begin{cor}\label{cor:KilNenfree} Let $\pi$ be a uniform random $n$-dimensional representation of $\G$, and let $m := \lfloor n/k\rfloor$. The resulting Gram matrix $Q^\pi_{(F_i)}$ is nonsingular almost surely for all $i = 0,1,2,\dots,m-1$, and the corresponding random Verblunsky coefficients \begin{equation}\label{eq:first-m-Verblunskys} C^\pi_{F_0,g_1},\ C^\pi_{F_1,g_2},\ \dots,\ C^\pi_{F_{m-1},g_m} \end{equation} are defined almost surely, are independent, and have distributions \[\s_{n - k|F_i\cap s_{i+1}F_i|,k|F_i\setminus s_{i+1}F_i|,k} \qquad \hbox{for}\ i=0,1,2,\dots,m-1.\] \qed \end{cor} If $k=r=1$, then~\eqref{eq:first-m-Verblunskys} is the sequence of the first $m$ Verblunsky coefficients of the random positive definite function on $\bbZ$ given by the orbit of $e_1$ under a uniform random $n$-dimensional unitary matrix. In this case the independence of the first $n$ of these random Verblunsky coefficients, together with their individual distributions, originally appears as~\cite[Proposition 3.3]{KilNen04}. For $r= 1$ but $k \ge 2$, their result is generalized in~\cite[Theorem 3.2]{GamRou14}. Corollary~\ref{cor:KilNenfree} is a further generalization of those results to random representations of higher-rank free groups. One could also re-write Theorem~\ref{mainthm:LDP} directly in terms of the sequences of Verblunsky coefficients that appear in Corollary~\ref{cor:KilNenfree}. The rate function has a simple sum form in terms of those coefficients: see Corollary~\ref{cor:first-expansion} below. \subsection*{\emph{Notes and further references}} The result of Killip and Nenciu that the Verblunsky coefficients of a single uniformly random unitary matrix are independent is an important step towards finding random-matrix models of various natural ensembles of $n$-point subsets of the unit circle. This development and further references are discussed on~\cite[p321]{AndGuiZei--book}.
Orthogonal polynomials on both the real line and the unit circle play a large role in calculations with the joint eigenvalue distributions of various random matrix models. For many of the most classical models of random self-adjoint or symmetric matrices, those joint distributions are determinantal with kernels expressed in terms of a related family of orthogonal polynomials on the real line. See, for instance, the example of the Gaussian unitary ensemble in~\cite[Subsection 3.2.1]{AndGuiZei--book} and then the bibliographical notes that mention orthogonal polynomials on~\cite[p184 and p321]{AndGuiZei--book}. For orthogonal polynomials on the unit circle, the connection begins with the appearance of certain Toeplitz determinants inside the Weyl integration formula when computing expectations with respect to $m_{\rm{U}(n)}$: see~\cite[pp67--9]{SimOPUCI} or~\cite[Proposition 4.1.6 or Remark 4.1.7]{AndGuiZei--book}. However, these calculations for single random matrices are very precise, and I do not see analogs of them in the present setting. I do not know whether there is some more substantial connection between those appearances of orthogonal polynomials and the way we use a generalization of Verblunsky coefficients in the present work. \chapter{Conditioning on orbits in random representations}\label{chap:random-orbits} Towards Theorem~\ref{mainthm:LDP}, we first derive a precise formula for the distribution of certain finite subsets of the orbit of a chosen tuple under the random representations $\pi_n$. The derivation proceeds by induction on the size of the subsets, and the inductive hypothesis depends on understanding the conditional distribution of $\pi_n$ itself given a smaller orbit subset that has already been revealed. These conditional distributions have natural expressions in terms of the Haar measures of various closed subgroups of unitary matrices. The underlying principles are very general, so we formulate them for random homomorphisms from $\G$ to an arbitrary compact group $G$ with a homogeneous space $X$. Quite possibly they have appeared in other works before, but I have not found a reference. \section{The general calculation}\label{sec:random-orbits} Let $G$ be a compact metric group and let $m$ be its Haar probability measure. More generally, if $gK$ is any coset of a closed subgroup $K$ of $G$, let $m_{gK} = \delta_g \ast m_K$ be the translate to that coset of the Haar probability measure $m_K$ of $K$. The results of this section are phrased as identifying disintegrations of certain measures over certain maps. Disintegrations are also called `regular conditional probabilities', and are classical constructs in measure theoretic probability. They can be used to express conditional expectations in terms of measurably-varying families of measures. The basic theory that we assume is covered in standard texts such as~\cite[Sections V.6--8]{ParPMMS} and~\cite[Sections 4.2--3]{Var--prob}. Suppose now that $L\subset K$ is a nested pair of closed subgroups. Fix a coset $gK$, and write $gK/L$ for the set of cosets of $L$ that it contains: that is, \[gK/L := \{gkL \in G/L:\ k \in K\}.\] \begin{lem}\label{lem:cosets-disint} The family of measures \[m_{gkL} \qquad (gkL \in gK/L)\] is a disintegration of $m_{gK}$ over the map \[f:gK \to gK/L:gh \mapsto ghL.\] \end{lem} \begin{proof} Following the definition in~\cite[Section V.8]{ParPMMS}, for instance, we must check two criteria.
Firstly, for any $gkL \in gK/L$, we have \[f^{-1}\{gkL\} = \{gk':\ gk'L = gkL\} = gkL,\] and $m_{gkL}$ is supported on this fibre by definition. Secondly, we must check that \[\int m_{gkL}\,dm_K(k) = m_{gK}.\] Translating from the left by $g^{-1}$, we may reduce this to the case $g = e$, so it remains to check that \[\int m_{kL}\,dm_K(k) = m_K.\] This is now a standard relation between a compact group and a closed subgroup: see, for instance,~\cite[Theorem 2.51]{FolAHA} (which is formulated for all unimodular locally compact groups). \end{proof} Now suppose that $G$ acts on a homogeneous space $X$ from the left. Given a point $x$ of $X$, let $G(x)$ denote its stabilizer. More generally, given a tuple $\bf{x}$ in $X^I$ for some index set $I$, let $G(\bf{x})$ be the mutual stabilizer $\bigcap_iG(x_i)$. In addition, given another tuple $\bf{y}$ in $X^I$, let $H(\bf{x};\bf{y})$ be the set of elements $g$ of $G$ such that \[gx_i = y_i \qquad \forall i\in I.\] If $\bf{y} = \bf{x}$ then this set is simply $G(\bf{x})$. In other cases it may be empty. If it is not empty, then it is equal to the left coset $hG(\bf{x})$ and to the right coset $G(\bf{y})h$ for any single element $h$ in $H(\bf{x};\bf{y})$. In that case, we also write $m(\,\cdot\,|\,\bf{x};\bf{y})$ for $m_{H(\bf{x};\bf{y})}$. Observe that \begin{equation}\label{eq:measure-id} m(\,\cdot\,|\,\bf{x};h\bf{y}) = \delta_h \ast m(\,\cdot\,|\,\bf{x};\bf{y}). \end{equation} We may apply Lemma~\ref{lem:cosets-disint} to the cosets $H(\bf{x};\bf{y})$ as follows. Assume that $H(\bf{x};\bf{y})$ is nonempty, let $z \in X$, and let $w = gz$ for some $g \in H(\bf{x};\bf{y})$. Then $H(\bf{x},z;\bf{y},w)$ is still nonempty, because it contains $g$. For each such $g$, the set $H(\bf{x},z;\bf{y},w)$ is equal to the set $gG(\bf{x},z)$, and it is contained in $H(\bf{x};\bf{y}) = gG(\bf{x})$. Therefore Lemma~\ref{lem:cosets-disint} applies to tell us that the family \[m(\,\cdot\mid \bf{x},z;\bf{y},w) \qquad (w \in H(\bf{x};\bf{y})z)\] is a disintegration of $m(\,\cdot\mid \bf{x};\bf{y})$ over the orbit-map $g\mapsto gz$. We need a slightly more general version of this result that allows larger Cartesian products, but is proved in the same way. \begin{cor}\label{cor:cond-ingredient} Let $I_1$, \dots, $I_r$ be finite index sets, and for each $i$ let $\bf{x}_i$ and $\bf{y}_i$ be two $I_i$-tuples such that $H(\bf{x}_i;\bf{y}_i)$ is nonempty. Let $j \in \{1,2,\dots,r\}$ and let $z \in X$. Then the kernel \[\prod_{i<j}m(\,\cdot\mid \bf{x}_i;\bf{y}_i)\times m(\,\cdot\mid \bf{x}_j,z;\bf{y}_j,w)\times \prod_{i > j}m(\,\cdot\mid \bf{x}_i;\bf{y}_i) \qquad (w \in H(\bf{x}_j;\bf{y}_j)z)\] is a disintegration of the measure \[\prod_{i=1}^r m(\,\cdot \mid \bf{x}_i;\bf{y}_i)\] over the map \[(g_1,\dots,g_r)\mapsto g_jz.\] \end{cor} \begin{proof} The case $r=1$ is outlined above. For the general case, in the group $G^r$, apply Lemma~\ref{lem:cosets-disint} to the large coset \[\prod_{i=1}^r H(\bf{x}_i;\bf{y}_i)\] and the smaller cosets \[\prod_{i<j}H(\bf{x}_i;\bf{y}_i)\times H(\bf{x}_j,z;\bf{y}_j,w)\times \prod_{i > j}H(\bf{x}_i;\bf{y}_i) \qquad (w \in H(\bf{x}_j;\bf{y}_j)z).\] \end{proof} With the above preparations complete, we can return to random actions of a free group. Suppose now that $\G$ is freely generated by a finite set $S$ of size $r$, and that a random homomorphism $\pi:\G \to G$ is obtained by choosing $g_s$ for $s \in S$ independently from $m$ and then setting $\pi(s) := g_s$. 
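As a toy sanity check of this machinery (entirely ours, with a finite group so that Haar measure is uniform and everything can be enumerated), the following Python sketch verifies the disintegration described after Lemma~\ref{lem:cosets-disint}: if $g$ is sampled uniformly from $H(x;y)$ and we condition on the image $w = gz$, then $g$ is uniform on $H(x,z;y,w)$.
\begin{verbatim}
import itertools, random, collections

# G = S_4 acting on X = {0,1,2,3}; H(x;y) = {g in G : g(x) = y} is a coset of the
# stabiliser of x.  Sample g uniformly from H(x;y), record w = g(z), and check that
# conditionally on w the sample is uniform on the fibre H(x,z;y,w).
G = list(itertools.permutations(range(4)))
x, y, z = 0, 2, 1
H_xy = [g for g in G if g[x] == y]
rng = random.Random(0)
counts = collections.defaultdict(collections.Counter)
for _ in range(60000):
    g = rng.choice(H_xy)
    counts[g[z]][g] += 1
for w in sorted(counts):
    fibre = [g for g in H_xy if g[z] == w]        # this is H(x,z;y,w)
    total = sum(counts[w].values())
    print(w, [round(counts[w][g] / total, 3) for g in fibre])   # roughly uniform
\end{verbatim}
The same check could of course be run for the larger Cartesian products of Corollary~\ref{cor:cond-ingredient}, at the cost of more bookkeeping.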
In our application of Corollary~\ref{cor:cond-ingredient} below, the tuples $\bf{x}_i$ and $\bf{y}_i$ are indexed by certain subsets of $\G$. The following terminology helps keep track of them. \begin{dfn}\label{dfn:possible} If $\pi \in \rm{Hom}(\G,G)$ and $x \in X$, then the \textbf{orbit map at $x$} is \[\pi^{(x)}(g) := \pi(g)x \qquad (g \in \G).\] If $F \subset \G$, then the resulting \textbf{orbit patch} is the restriction $\pi^{(x)}|F$, which we regard as a tuple in $X^F$. If we fix $F$ and $x$, then the set of all orbit patches for different possible choices of $\pi$ is a subset $Y_{F,x}$ of $X^F$. It is closed as a continuous image of the compact space $\rm{Hom}(\G,G)$. We refer to the tuples in $Y_{F,x}$ as \textbf{$F$-possible starting from $x$}. \end{dfn} Now consider a grounded set $F$ and a tuple $\bf{y} \in Y_{F,x}$. For each $t \in S$, the sets \[t^{-1}F\cap F \quad \hbox{and} \quad F\cap tF\] are naturally identified by left-translation by $t$. With this identification, we can consider the cosets $H(\bf{y}_{t^{-1}F\cap F};\bf{y}_{F\cap tF})$ for each $t$. The fact that $\bf{y}$ is $F$-possible implies that these cosets are all nonempty: if $\bf{y} = \pi^{(x)}|F$ then $\pi(t)$ must lie in $H(\bf{y}_{t^{-1}F\cap F};\bf{y}_{F\cap tF})$ for every $t$. Here is the main result of this section. It describes the conditional distribution of the generators of a random action given an orbit patch. \begin{prop}\label{prop:condition-action} Fix $x \in X$ and let $F$ be a grounded subset of $\G$. Let $\pi$ be a random action in which the generators $\pi(s)$, $s \in S$, are independent and distributed according to $m$. Then a regular conditional distribution for $(\pi(s):\ s \in S)$ given the orbit patch $\pi^{(x)}|F$ is given by \begin{equation}\label{eq:prod-meas} m_F(\,\cdot \mid \bf{y}) := \prod_{s \in S} m(\,\cdot\,|\,\bf{y}_{F\cap s^{-1}F};\bf{y}_{s F\cap F}) \qquad (\bf{y} \in Y_{F,x}). \end{equation} \end{prop} \begin{proof} We prove this by induction on the grounded set $F$. When $F = \{e\}$, the set $Y_{F,x}$ is the singleton $\{x\}$. The left-hand side of~\eqref{eq:prod-meas} is simply equal to $m^{\times S}$, and so is the right-hand side because $F\cap sF$ and $s^{-1}F\cap F$ are both empty for every $s$. So now suppose that the result is known for some grounded set $F$, and let $F' = F\cup g$ be an enlargement of it in direction $s$. Then, for any action $\pi$, we have $\pi(g)x = \pi(s)\pi(s^{-1}g)x$, and so $\pi^{(x)}|F'$ can be identified with $(\pi^{(x)}|F,\,\pi(s)\pi(s^{-1}g)x)$. Accordingly, $Y_{F',x}$ can be identified with a subset of $Y_{F,x}\times X$. For $\bf{y} \in Y_{F,x}$, let us write \[Y^\bf{y}_{F',x} := \{w \in X:\ (\bf{y},w) \in Y_{F',x}\}.\] To extend the result from $F$ to $F\cup g$, by the tower property of conditional expectation~\cite[Theorem 4.5(vi)]{Var--prob}, it suffices to prove the following: \begin{quote} If $\pi$ is a random action in which the generators are distributed according to $m_F(\,\cdot\mid \bf{y})$, then a regular conditional distribution for ${(\pi(s):\ s \in S)}$ given the image $\pi(s)y_{s^{-1}g}$ is given by \begin{equation}\label{eq:prod-meas-2} m_{F'}(\,\cdot\mid \bf{y},w) \qquad (w \in Y^\bf{y}_{F',x}). \end{equation} \end{quote} To do this, we first identify the measures appearing in~\eqref{eq:prod-meas-2} more carefully. 
Writing an element of $Y_{F',x}$ as $\bf{y}' := (\bf{y},w)$, consider the factor measures \[m(\,\cdot\mid \bf{y}'_{F'\cap t^{-1}F'};\bf{y}'_{tF'\cap F'}) \qquad (t \in S).\] If $t=s^{\pm 1}$ (according as $s$ lies in $S$ or $S^{-1}$), then either the first or the second equality of Lemma~\ref{lem:shift-enlargement} gives \[m(\,\cdot\mid \bf{y}'_{F'\cap t^{-1}F'};\bf{y}'_{tF'\cap F'}) = m(\,\cdot\mid \bf{y}_{F\cap s^{-1}F},y_{s^{-1}g};\bf{y}_{sF\cap F},w).\] On the other hand, if $t \ne s^{\pm 1}$, then the third equality of Lemma~\ref{lem:shift-enlargement} gives \[m(\,\cdot\mid \bf{y}'_{F'\cap t^{-1}F'};\bf{y}'_{tF'\cap F'}) = m(\,\cdot\mid \bf{y}_{F\cap t^{-1}F};\bf{y}_{tF\cap F}).\] (This is the key point where we need $F$ to be grounded.) Combining the identifications above, we see that the asserted conditional distribution is precisely the one given by Corollary~\ref{cor:cond-ingredient} when we let $r = |S|$, index all the data by $S$ rather than by $\{1,2,\dots,r\}$, and identify \[I_t,\ \bf{x}_t, \hbox{and}\ \bf{y}_t \quad \hbox{with} \quad t^{-1}F\cap F,\ \bf{y}_{t^{-1}F\cap F},\ \hbox{and}\ \bf{y}_{F\cap tF}\] for each $t \in S$. This continues the induction. \end{proof} \begin{rmk} In the proof above, the assumption that $F$ is grounded enables us to apply Lemma~\ref{lem:shift-enlargement}. Proposition~\ref{prop:condition-action} is not true if we omit that assumption. To see this more clearly, consider a simplified version of Proposition~\ref{prop:condition-action} that partitions the space of actions according to orbit patches but ignores conditional distributions. That version tells us that, if $\bf{y}$ is $F$-possible starting from $x$, then \[\big\{(\pi(s):\ s \in S) \in G^S: \pi(g)x = y_g\ \forall g\in F\big\} = \prod_{s \in S}H(\bf{y}_{s^{-1}F\cap F};\bf{y}_{F\cap sF}).\] \end{rmk} \section{Application to random unitary representations} Fix positive integers $k$ and $n$ with $k\le n$. We can now prove Proposition~\ref{prop:dil-dist} by specializing the results of the previous section to the case when $G = \rm{U}(n)$ and $X = \rm{U}(k,n)$. Let $V = [e_1,\dots,e_k]$, the canonical embedding of $\bbC^{(k)}$ as the first $k$ coordinates in $\bbC^{(n)}$. For now $k$ and $n$ are still fixed, but we turn to asymptotic results as $n\to\infty$ in the next chapter. \begin{proof}[Proof of Proposition~\ref{prop:dil-dist}.] For a grounded set $F$ and an enlargement $F\cup g$ in direction $t$, we prove the following implications among the parts of Proposition~\ref{prop:dil-dist}: \begin{multline*} \hbox{part (a) for $F$} \ \ \Rightarrow \ \ \hbox{parts (b) and (c) for $(F,g)$} \ \ \Rightarrow \ \ \hbox{part (a) for $F\cup g$}. \end{multline*} When $F = \{e\}$, we have $Q^\pi_{(F)} = I_k$, so part (a) is immediate in this case. Starting from there, the above implications imply the result in full by induction on $F$. So now assume that $n\ge k|F|$, and suppose we already know that $Q^\pi_{(F)}$ is nonsingular almost surely. Therefore the Verblunsky coefficient $C := C^\pi_{F,g}$ is defined almost surely. It takes values in $\Xi := \Xi(k,k|F\setminus tF|)$. Let us also assume that $t \in S$; the case $t \in S^{-1}$ is analogous. Let $(\Omega,\F,\bbP)$ be the background probability space, and let $\cal{G}$ be the sigma-subalgebra of $\F$ generated by the random orbit patch $\pi^{(V)}|F$.
Since $Q^\pi_{(F)}$ is a function of this orbit patch, the law of total probability gives \[\bbP\{Q^\pi_{(F)} \in A,\ C \in B\} = \bbE\big[\bbP(C\in B\mid \cal{G});\ Q^\pi_{(F)} \in A\big]\] for any Borel subsets $A\subset \S_k(F)$ and $B \subset \Xi$. Therefore parts (b) and (c) for $(F,g)$ follow if we show that the constant $\s_{n-k|F\cap tF|,\,k|F\setminus tF|,\,k}(B)$ is a version of $\bbP(C\in B\mid \cal{G})$. Now, $C$ is a function of $Q^\pi_{(F\cup g)}$, and this in turn is a function of $(\pi^{(V)}|F,\,\pi(t))$. Therefore Proposition~\ref{prop:condition-action} asserts that a version of $\bbP(C\in B\mid \cal{G})$ is given by \begin{equation}\label{eq:C-indep} m(\{\pi(t):\ C \in B\}\mid \bf{y}_{F\cap t^{-1}F};\bf{y}_{tF\cap F}) \qquad \hbox{when}\ \bf{y} := \pi^{(V)}|F \in Y_{F,V}. \end{equation} To use this expression, fix $\bf{y} \in Y_{F,V}$ and also some $U_0 \in H(\bf{y}_{F\cap t^{-1}F};\bf{y}_{tF\cap F})$, and let $U$ be drawn at random from the Haar probability measure on $\rm{U}((\bf{y}_{F\cap tF})^\perp)$. Under the conditional measure in~\eqref{eq:C-indep}, $\pi(t)$ has the same distribution as $UU_0$, by the definitions in Section~\ref{sec:random-orbits}, and then $Q^\pi_{(F\cup g)}$ has the same distribution as the Gram matrix of the combined tuple $[\bf{y}_{F\setminus tF},\ \bf{y}_{F\cap tF},\ UU_0y_{t^{-1}g}]$, where only the last entry is random. If it happens that $Q^\pi_{(F)} \in \S^\circ_k(F)$, then the combined tuple $[\bf{y}_{F\setminus tF},\ \bf{y}_{F\cap tF}]$ is linearly independent, and so is the combined tuple $[\bf{y}_{F\cap tF},\ UU_0y_{t^{-1}g}]$ because it is contained in the image $UU_0\,\bf{y}$ of the linearly independent tuple $\bf{y}$. We can therefore apply Proposition~\ref{prop:law-of-contraction}, which tells us that the quantity in~\eqref{eq:C-indep} is equal to $\s_{n-k|F\cap tF|,k|F\setminus tF|,k}(B)$, as required. Since $Q^\pi_{(F)}$ lies in $\S^\circ_k(F)$ almost surely by part (a) for $F$, this completes the proof of parts (b) and (c) for $(F,g)$. Finally, by Lemma~\ref{lem:a-s-strict}, $\s_{n-k|F\cap tF|,\,k|F\setminus tF|,\,k}$ is supported on the subset ${\Xi^\circ(k,k|F\setminus tF|)}$ of proper contractions provided \[n-k|F\cap tF| \ge k + k|F\setminus tF|, \quad \hbox{or equivalently} \quad n\ge k(|F| + 1).\] In this case, the dilation $Q^\pi_{(F\cup g)}$ of $Q^\pi_{(F)}$ is still almost surely nonsingular, by Proposition~\ref{prop:three-block-completion}. So if $n\ge k(|F|+1)$ then we can also conclude part (a) for $F\cup g$, and the induction continues. \end{proof} \chapter{The chosen-tuple large deviations principle}\label{chap:LDP-proof} This chapter proves Theorem~\ref{mainthm:LDP}, and then derives various further consequences. Let $\pi_n$ be a uniform random $n$-dimensional representation of $\G$. We retain some notations from the previous chapter, giving reminders as we re-encounter them. \section{Proof of the chosen-tuple large deviations principle}\label{sec:completed-LDP} The proof of Theorem~\ref{mainthm:LDP} is another induction on the grounded set $F$. The main ingredient is Proposition~\ref{prop:dil-dist}.
For an enlargement $F\cup g$ of $F$, the basic idea for the inductive step is straightforward: \begin{itemize} \item the inductive hypothesis gives us the LDP for $Q^{\pi_n}_{(F)}$; \item Proposition~\ref{prop:dil-dist} and Corollary~\ref{cor:matrix-LDP2} tell us that the next Verblunsky coefficient $C$ is independent from $Q^{\pi_n}_{(F)}$ and satisfies its own LDP; \item Lemma~\ref{lem:LDP-product} tells us how to combine these for the pair $(Q^{\pi_n}_{(F)},C)$; \item finally, since $Q^{\pi_n}_{(F\cup g)}$ is parametrized uniquely by $Q^{\pi_n}_{(F)}$ and $C$, applying the contraction principle should continue the induction. \end{itemize} There is a slight complication in the last of these steps. This is because $Q^{\pi_n}_{(F\cup g)}$ is not a continuous image of the pair $(Q^{\pi_n}_{(F)},C)$ until we exclude the event that $Q^{\pi_n}_{(F)}$ is singular. For this reason, the proof also involves passing between the desired target spaces $\S_k(F)$ and $\S_k(F\cup g)$ and their open subsets $\S_k^\circ(F)$ and $\S_k^\circ(F\cup g)$. This is where we need Lemma~\ref{lem:LDP-open-subset}. \begin{proof}[Proof of Theorem~\ref{mainthm:LDP}] Part (a) of Theorem~\ref{mainthm:LDP} is already contained in Proposition~\ref{prop:dil-dist}, so it remains to prove parts (b) and (c) (the two bounds of the LDP). \vspace{7pt} \emph{Step 1.}\quad In view of part (a), we may choose Borel probability measures $\mu_{F,n}$ supported on the open subset $\S_k^\circ(F)$ of $\S_k(F)$ such that $\mu_{F,n}$ agrees with the distribution of $Q^{\pi_n}_{(F)}$ for all sufficiently large $n$. In addition, we have \[\S_k^\circ(F) = \{-h_F <\infty\}\] for any grounded set $F$. Therefore, by Lemma~\ref{lem:LDP-open-subset}, parts (b) and (c) of Theorem~\ref{mainthm:LDP} are equivalent to the fact that $(\mu_{F,n})_{n\ge 1}$ obeys the LDP on $\S_k^\circ(F)$ with rate function $-h_F|\S_k^\circ(F)$. This is the version we prove in the remaining steps. \vspace{7pt} \emph{Step 2.}\quad The rest of the proof is an induction on $F$. The result is vacuous when $F = \{e\}$, so we suppose the result is known for a grounded set $F$ and consider an enlargement $F\cup g$ in direction $s$. Combining the inductive hypothesis with Corollary~\ref{cor:matrix-LDP1} and applying Lemma~\ref{lem:LDP-product}, the product measures \[\mu_{F,n}\times \s_{n - k|F\cap sF|,k|F\setminus sF|,k} \qquad (n\ge 1)\] obey the LDP on the space \begin{equation}\label{eq:open-prod} \S_k^\circ(F)\times \Xi^\circ(k,k|F\setminus sF|) \end{equation} with the rate function \[-h_F(q) -\rmH(I_k - C^\ast C) \qquad ((q,C) \in \S_k^\circ(F)\times \Xi^\circ(k,k|F\setminus sF|)).\] \vspace{7pt} \emph{Step 3.}\quad Under the inverse of the homeomorphism in Lemma~\ref{lem:pdf-completion}, the product set~\eqref{eq:open-prod} is identified with $\S_k^\circ(F\cup g)$, and Proposition~\ref{prop:dil-dist} shows that $\mu_{F\cup g,n}$ is the image of $\mu_{F,n}\times \s_{n - k|F\cap sF|,k|F\setminus sF|,k}$ under this inverse homeomorphism. Therefore, by the identity~\eqref{eq:cond-mut-inf-contraction}, the measures $\mu_{F\cup g,n}$ obey the LDP on ${\S_k^\circ(F\cup g)}$ with rate function \[-h_F(q_{(F)}) + 2\rmI_q(g\,;\,F\setminus sF\mid F\cap sF) \qquad (q \in \S_k^\circ(F\cup g)),\] recalling again the notation from~\eqref{eq:pre-Gram-ent} for conditional log-determinant mutual information and suchlike in terms of a Gram matrix. To finish the proof we show that the expression above equals $-h_{F\cup g}(q)$.
Since $q$ and hence $q_{(F)}$ are nonsingular, we have \begin{multline*} \frac{1}{2}\big(h_{F\cup g}(q) - h_F(q_{(F)})\big) \\ = \rmH_q(F\cup g) - \rmH_q(F) - \sum_{t\in S}\big(\rmH_q((F\cup g)\cap t(F\cup g)) - \rmH_q(F\cap tF)\big), \end{multline*} where all terms are finite. Lemma~\ref{lem:shift-enlargement} tells us the difference between \[(F\cup g)\cap t(F\cup g) \qquad \hbox{and} \qquad F\cap tF\] for each $t \in S$. Using this and entropy identities, we can express the right-hand side above more succinctly in terms of conditional entropies. First, by the third equality from Lemma~\ref{lem:shift-enlargement}, all terms in the sum with $t \ne s^{\pm 1}$ cancel. Next, if $s \in S$ and $t = s$, then by the first equality from Lemma~\ref{lem:shift-enlargement} and the conditional chain rule (Proposition~\ref{prop:cond-chain}) the remaining differences become \[\rmH_q(g\mid F) - \rmH_q(g\mid F\cap tF) = - \rmI_q(g\,;\,F\setminus sF\mid F\cap sF),\] as required in this case. Finally, if $s \in S^{-1}$ and $t = s^{-1}$, then by the second equality from Lemma~\ref{lem:shift-enlargement} the remaining differences become \begin{align*} \rmH_q(g\mid F) - \rmH_q(tg\mid F\cap tF) &= \rmH_q(g\mid F) - \rmH_q(g\mid sF\cap F)\\ &= - \rmI_q(g\,;\,F\setminus sF\mid F\cap sF), \end{align*} where the first equality holds by the invariance of $q$ under translation by $s = t^{-1}$, and the second follows by Proposition~\ref{prop:cond-chain} as before. Once again this verifies the desired equality. This continues the induction and so completes the proof. \end{proof} \begin{rmk} It is worth comparing our appeal to Lemma~\ref{lem:LDP-open-subset} with the exposition of the case of positive definite functions on $\bbZ$ in~\cite{BreSimZei18}, which does not involve a version of that lemma. Instead, they introduce a slightly larger space of finite or infinite sequences of Verblunsky coefficients that can be used to parametrize positive definite functions without assuming nonsingularity, then apply the Killip--Nenciu result to prove a large deviations principle there, and finish with a simple appeal to the contraction principle. \end{rmk} \begin{rmk} The proof of Theorem~\ref{mainthm:LDP} accumulates the desired rate function for $F$ as a sum of contributions along a sequence of enlargements. It seems a little surprising that this sum can then be telescoped into the relatively simple closed form in formula~\eqref{eq:LDP-formula-1}. \end{rmk} To finish this section, let us observe that $\hann$ can also be written as an infinite series of contributions that appear as one enlarges a grounded set one element at a time to cover $\G$. Let $e = g_0$, $g_1$, $g_2$, \dots be a grounded enumeration of $\G$ as discussed in Section~\ref{sec:group-geom}, let $F_n = \{g_0,\dots,g_n\}$ for each $n\ge 0$, and let $s_n \in S\cup S^{-1}$ be such that $F_{n+1}$ is an enlargement of $F_n$ in direction $s_n$ for every $n$. \begin{cor}\label{cor:first-expansion} For any $\phi \in \S_k(\G)$ we have \[\hann(\phi) = - \sum_{n=0}^\infty 2\rmI_\phi(g_{n+1}\,;\,F_n\mid F_n\cap s_nF_n).\] If $\phi$ is nonsingular and its Verblunsky coefficient from $F_n$ to $F_n\cup g_{n+1}$ is $C_n$, then we also have \[\hann(\phi) = \sum_{n=0}^\infty \rmH(I_k - C_n^\ast C_n).\] \end{cor} \begin{proof} Since $\phi$ is normalized, we have $h_{F_0}(\phi_{(F_0)}) = 0$.
From this starting point, repeating the calculation in Step 3 of the proof of Theorem~\ref{mainthm:LDP} gives \[h_{F_n}(\phi_{(F_n)}) = -\sum_{i=0}^{n-1} 2\rmI_\phi(g_{i+1}\,;\,F_i\mid F_i\cap s_iF_i)\] by induction on $n$. By equation~\eqref{eq:cond-mut-inf-contraction}, the $i^{\rm{th}}$ term $-2\rmI_\phi(g_{i+1}\,;\,F_i\mid F_i\cap s_iF_i)$ of this sum is equal to $\rmH(I_k - C_i^\ast C_i)$. These partial sums are non-increasing in $n$ and converge to the desired infinite sum as $n\to\infty$. Since every grounded set is contained in $F_n$ for all sufficiently large $n$, the final conclusion is now another reformulation of Corollary~\ref{cor:LDP}. \end{proof} The partial sums of the infinite series above depend on our particular choice of enumeration of $\G$, but their limit does not. Presumably this fact can be proved directly using only rules for manipulating log-determinant entropies such as the chain rule, but I have not found such a proof. From the formulas for $\hann$ obtained so far, we can now derive a basic property that we need later. \begin{lem}\label{lem:hann-additivity} If $\phi_i:\G\to\rmM_{k_i}$ is positive definite for $i=1,\dots,\ell$ and $\psi = \rm{diag}(\phi_1,\dots,\phi_\ell)$, then \[\hann(\psi) = \sum_{i=1}^\ell\hann(\phi_i)\] (where, as usual, both sides may equal $-\infty$). \end{lem} \begin{proof} Consider the first of the formulas for $\hann(\psi)$ given by Theorem~\ref{mainthm:annealed-formula}: it is the limit as $n\to\infty$ of the expression \begin{equation}\label{eq:hann-psi} \rmH(\psi_{(B_n)}) - \sum_{s \in S}\rmH(\psi_{(B_n\cap sB_n)}). \end{equation} In our present setting, $\psi_{(B_n)}$ is the $\ell$-by-$\ell$-block-diagonal matrix with blocks equal to $(\phi_i)_{(B_n)}$ for $i=1,2,\dots,\ell$ (after permuting coordinates if necessary), and so \[\rmH(\psi_{(B_n)}) = \sum_{i=1}^\ell\rmH((\phi_i)_{(B_n)}).\] The same additivity holds with any of the intersections ${B_n\cap s B_n}$ in place of $B_n$. Inserting these into~\eqref{eq:hann-psi} and letting $n\to\infty$ gives the result. \end{proof} Any of the other formulas that we derive for $\hann$ could be used to prove Lemma~\ref{lem:hann-additivity} with a similar amount of effort. But it seems to be much harder to prove this lemma directly from the interpretation of $\hann$ as the large deviations rate function in Corollary~\ref{cor:LDP}. The next problem asks about another widely-studied operation on positive definite functions. \begin{prob} Given an inclusion of discrete groups, one can use induction to extend representations of the smaller group to the larger one, and this construction can be carried out at the level of positive definite functions~\cite[Chapter 6]{FolAHA}. If they are both free groups, how does $\hann$ behave under this construction? A partial answer could be related to Seward's work~\cite{Sew14} in the ergodic theory setting. \end{prob} \section{Alternative formulas and expansions for annealed entropy}\label{sec:alt-forms} In this section we derive various other limit expressions or series expansions for $\hann$. These include formula~\eqref{eq:LDP-formula-2} from Theorem~\ref{mainthm:annealed-formula}, which is the direct analog of Bowen's original formula from his introduction of annealed sofic entropy (then called the `f-invariant'). They also include the analog of a series expansion due to Seward~\cite{Seward--freestab}, which we derive by averaging the two sequences appearing in~\eqref{eq:LDP-formula-1} and~\eqref{eq:LDP-formula-2}.
This Seward expansion greatly simplifies our proof of Theorem~\ref{mainthm:tempered} later in the paper because it admits some more straightforward estimates than either~\eqref{eq:LDP-formula-1} or~\eqref{eq:LDP-formula-2} separately. Throughout this section, we fix a positive definite function $\phi:\G \to \rmM_k$ (it need not be unital now), and abbreviate $\rmH_\phi$ to $H$ and $\rmI_\phi$ to $I$. Let us also name these two sequences: \begin{align}\label{eq:En-1} E_n &:= H(B_{n+1}) - \sum_{s \in S}H(B_{n+1}\cap sB_{n+1})\\ \hbox{and} \quad E_n' &:= \sum_{s \in S}H(B_n \cup sB_n) - (2r-1)\cdot H(B_n). \nonumber \end{align} So $2E_n$ is the $(n+1)^{\rm{th}}$ term of the sequence in~\eqref{eq:LDP-formula-1}, and $2E_n'$ is the $n^{\rm{th}}$ term of the sequence in~\eqref{eq:LDP-formula-2}. This normalization and indexing prove convenient shortly. Next, we make a simple but crucial observation about balls in $\G$: \begin{equation}\label{eq:cup-cap} B_{n+1}\cap sB_{n+1} = B_n\cup sB_n \quad (s \in S,\ n\ge 0). \end{equation} To see this, first note that $|sg| = |g|\pm 1$ for every group element $g$. Consequently, the two sides of~\eqref{eq:cup-cap} both contain $B_n$ and are both contained in $B_{n+1}$. Finally, if $|g| = n+1$, then $g$ lies in $sB_{n+1}$ if and only if its reduced word begins with $s$, hence if and only if it lies in $sB_n$. As a result of~\eqref{eq:cup-cap}, we have the alternative expression \begin{equation}\label{eq:En-2} E_n = H(B_{n+1}) - \sum_{s \in S}H(B_n\cup sB_n). \end{equation} We use both~\eqref{eq:En-1} and~\eqref{eq:En-2} below. \begin{lem}\label{lem:1-2-ineqs} We have $E_{n+1}'\le E_n \le E_n'$ for all $n\ge 0$. \end{lem} \begin{proof} \emph{Step 1.}\quad Writing $E_n$ as in~\eqref{eq:En-1}, we have \begin{align*} E_n - E_{n+1}' &= 2rH(B_{n+1}) -\sum_{s \in S}\big(H(B_{n+1}\cap sB_{n+1}) + H(B_{n+1}\cup sB_{n+1})\big)\\ &= \sum_{s \in S}\big(2H(B_{n+1}) - H(B_{n+1}\cap sB_{n+1}) - H(B_{n+1}\cup sB_{n+1})\big). \end{align*} By the strong subadditivity inequality~\eqref{eq:strong-subadd} and translation invariance, we have \[H(B_{n+1}\cap sB_{n+1}) + H(B_{n+1}\cup sB_{n+1}) \le H(B_{n+1}) + H(sB_{n+1}) = 2H(B_{n+1}),\] so $E_n - E_{n+1}'$ is a sum of non-negative terms. \vspace{7pt} \emph{Step 2.}\quad Writing $E_n$ as in~\eqref{eq:En-2}, we have \begin{equation*} E_n' - E_n = 2\sum_{s \in S}H(B_n \cup sB_n) - H(B_{n+1}) - (2r-1)\cdot H(B_n). \end{equation*} The sphere $S_{n+1}$ is the disjoint union of $sB_n\setminus B_n$ as $s$ ranges over $S\cup S^{-1}$. Therefore conditioning and applying the subadditivity from Corollary~\ref{cor:conditioning-monotone} gives \begin{equation}\label{eq:from-Sew} H(S_{n+1}\mid B_n) \le \sum_{s \in S\cup S^{-1}}H(sB_n\setminus B_n\mid B_n). \end{equation} Adding $2rH(B_n)$ and using the chain rule (Proposition~\ref{prop:chain1}), this becomes \[H(B_{n+1}) + (2r-1)\cdot H(B_n) \le \sum_{s\in S\cup S^{-1}}H(B_n \cup sB_n) = 2\sum_{s\in S}H(B_n \cup sB_n),\] where the equality holds by translation-invariance. Re-arranging, this asserts that $E_n' - E_n$ is non-negative. \end{proof} \begin{proof}[Proof of formula~\eqref{eq:LDP-formula-2} of Theorem~\ref{mainthm:annealed-formula}] Formula~\eqref{eq:LDP-formula-1} shows that $2E_n$ converges to $\hann(\phi)$, and $2E_n'$ must converge to the same limit by Lemma~\ref{lem:1-2-ineqs}.
\end{proof} \begin{prob} When $\phi$ is unital, Theorem~\ref{mainthm:LDP} says that the value $2E_n$ gives the rate function in the large deviations principle obeyed by the random element \[(\Phi^\pi_V)_{(B_n)} \in \S_k(B_n).\] The values $2E_n'$ should have an analogous interpretation, but this time for the random $S$-tuples \[\big((\Phi^\pi_V)_{(B_n\cup sB_n)}:\ s \in S\big)\] taking values in the space \[\Big\{(Q_s:\ s \in S) \in \prod_{s \in S}\S_k(B_n\cup sB_n):\ (Q_s)_{(B_n)}\ \hbox{is the same for all $s$}\Big\}.\] If this is true, then both inequalities in Lemma~\ref{lem:1-2-ineqs} become instances of the contraction principle. I believe one can prove this along the same lines as Theorem~\ref{mainthm:LDP} without any really new ideas, but working with these tuples is sure to be more complicated than working with $(\Phi^\pi_V)_{(B_n)}$ alone. \end{prob} Lemma~\ref{lem:1-2-ineqs} implies that the sums $E_n+E_n'$ also converge to $\hann(\phi)$. This is significant, because these sums actually take a form that is arguably simpler than either sequence individually. From~\eqref{eq:En-2}, we obtain \begin{equation}\label{eq:pre-Sew} E_n + E_n' = H(B_{n+1}) - (2r-1)\cdot H(B_n). \end{equation} All the terms that involve shifting balls by individual generators have canceled out. This sequence of averages is still non-increasing, since both $E_n$ and $E_n'$ have this property. Starting from~\eqref{eq:pre-Sew}, we can now derive a non-negative series expansion for $\hann(\phi)$. It is crucial during our proof of Theorem~\ref{mainthm:tempered} below. Let $g_0 = e$, $g_1$, $g_2$, \dots be any length-lexicographic ordering of $\G$ as described in Section~\ref{sec:group-geom}, and for each positive integer $N$ let \[P(g_N):= \{g_0,g_1,\dots,g_{N-1}\}:\] that is, the set of predecessors of $g_N$ in this ordering. Observe that \[P(g_N) = B_n \quad \hbox{when}\ N = |B_n|.\] \begin{cor}\label{cor:Sew} If $\phi:\G\to\rmM_k$ is positive definite, then $\hann(\phi)$ is equal to \begin{multline}\label{eq:Sew} 2H(B_0) -\sum_{n=0}^\infty \sum_{s_{n+1}\cdots s_1 \in S_{n+1}}\big(H(s_n\cdots s_1\,|\,P(s_n\cdots s_1)) \\ \qquad \qquad \qquad \qquad \qquad \qquad - H(s_{n+1}\cdots s_1\mid P(s_{n+1}\cdots s_1))\big). \end{multline} \end{cor} This is the analog of~\cite[Theorem 1.7]{Seward--freestab}, one of the key technical innovations of that paper. For this reason we call it the \textbf{Seward expansion} of $\hann(\phi)$ corresponding to the given ordering of $\G$. (Our expression differs from Seward's by a factor of $2$, but this is just a consequence of our normalization choices.) \begin{proof} For any $n\ge 0$, iterating the chain rule from Proposition~\ref{prop:chain1} gives \begin{equation*} H(S_{n+1}\mid B_n) = \sum_{s_{n+1}\cdots s_1 \in S_{n+1}}H(s_{n+1}\cdots s_1\mid P(s_{n+1}\dots s_1)). \end{equation*} When $n=0$, we combine this with~\eqref{eq:pre-Sew} to obtain \begin{align*} E_0 + E_0' &= H(B_1) - (2r-1)\cdot H(B_0)\\ &= 2H(B_0) + H(S_1\mid B_0) -2r\cdot H(B_0)\\ &= 2H(B_0) - \sum_{s \in S_1}\big(H(B_0) - H(s\mid P(s))\big). \end{align*} Similarly, for $n\ge 1$ the increments of the sequence in~\eqref{eq:pre-Sew} satisfy \begin{align*} &(E_n + E'_n) - (E_{n-1} + E_{n-1}') \\ &= H(S_{n+1}\mid B_n) - (2r-1)\cdot H(S_n\mid B_{n-1})\\ &= \sum_{s_{n+1}\cdots s_1 \in S_{n+1}}H(s_{n+1}\cdots s_1\mid P(s_{n+1}\dots s_1)) \\ &\qquad \qquad \qquad \qquad - (2r-1)\cdot\sum_{s_n\cdots s_1 \in S_n}H(s_n\cdots s_1\mid P(s_n\dots s_1)).
\end{align*} Since every reduced word in $S_n$ has $(2r-1)$ neighbours in $S_{n+1}$, the difference above is equal to \[\sum_{s_{n+1}\dots s_1 \in S_{n+1}}\big(H(s_{n+1}\cdots s_1\,|\,P(s_{n+1}\cdots s_1)) - H(s_n\cdots s_1\mid P(s_n\cdots s_1))\big).\] So~\eqref{eq:Sew} is the infinite series of increments of the sequence~\eqref{eq:pre-Sew}, and so this series converges to the same limit as that sequence. \end{proof} \begin{rmk} The argument above actually proves~\eqref{eq:Sew} for any length-first enumeration of $\G$, not just length-lexicographic orderings. But our use of this formula in Chapter~\ref{chap:tempered} does need some additional properties that are special to length-lexicographic orderings, so we restrict our attention to this case. n \end{rmk} \begin{prob} With some re-arranging, the terms of a Seward expansion can also be written as log-determinant conditional mutual informations (see~\eqref{eq:Sew-cond-mut-inf} below). Is there a simple description of the contraction matrices associated to those terms, and do they offer another parametrization of positive definite functions on free groups? \end{prob} \begin{prob} In~\cite{BakTim07}, Bakonyi and Timotin provide a different way of extending a partial positive definite function on $\G$. They also proceed along a sequence of larger and larger finite subsets of $\G$, but of a different kind. Their work also introduces its own analog of `Verblunsky coefficients' over free groups. What is the joint distribution of their coefficients for a uniform random representation? Can those coefficients be used to give another expression for $\hann$? \end{prob} \begin{prob} In~\cite{Seward--freestab}, Seward remarks on how the vanishing of all but finitely many terms in his expansion characterizes the special classes of tree-indexed Markov process or their generalization to larger steps (see~\cite{Bowen10b} or~\cite{Sew14}). Presumably the analogous story is available here as well. Markov processes should correspond to Haagerup functions or their matrix generalizations (Example~\ref{ex:Haa-fns}). More generally, the higher-step Markov property should correspond to central extensions of partial positive definite functions on balls (see~\cite[Section 5]{BakTim07}). This correspondence is already remarked by Burton and Juschenko in~\cite{BurJus22}. It complies with our general rule of thumb that conditional independence in probability corresponds to relative orthogonality in linear algebra. \end{prob} \chapter{Zeroth-order entropy}\label{chap:zeroth-order} Consider a general countable group $\G$. Let $\bspi$ be an AP sequence for $\G$ with $\chi_{\pi_n} \to \chi_{\rm{reg}}$. For any $\phi \in \S_k(\G)$, Section~\ref{sec:mollify} shows how the property of approximate association to $\bspi$ is determined by the AP entropy function $\rmh_{\bspi}$. In the proof, we mollify $\phi$ by forming convex combinations with $\chi_{\rm{reg}} \otimes I_k$ in order to reduce the possible variation in the volumes of sets of typical vectors, and observe $\rmh_{\bspi}$ of these mollified positive definite functions. The present chapter contains an analogous theory for annealed entropy when $\G$ is free. We mollify $\phi$ by forming the same convex combinations, then use the annealed entropy of those convex combinations to determine whether $\phi$ is `approximately associated in probability' to a random AP sequence, or find the decay of the relevant probabilities if it is not. 
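Concretely, recall that the regular character is supported at the identity, $\chi_{\rm{reg}}(g) = \delta_{g,e}$, so the convex combinations used for this mollification (they are denoted $\phi_t$ shortly) have the following simple pointwise description; this is an immediate computation rather than part of the argument itself:
\[\big(t\phi + (1-t)\chi_{\rm{reg}}\otimes I_k\big)(g) = \left\{\begin{array}{ll} t\cdot\phi(e) + (1-t)\cdot I_k &\quad g = e\\ t\cdot \phi(g) &\quad g \ne e.\end{array}\right.\]
In particular, if $\phi$ is unital then so is each of these combinations, and all values away from $e$ are simply damped by the factor $t$.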
This strategy is made precise by Theorem~\ref{thm:exp-typ} in Section~\ref{sec:LDP-near-assoc}, which can be regarded as another large deviations principle. The steps in this section broadly correspond to those in Section~\ref{sec:mollify}, but some of them are more delicate here.

After proving Theorem~\ref{thm:exp-typ}, we can also prove Fell-global convergence of $\pi_n$ in probability (Proposition~\ref{prop:weak-global-in-prob}) and give a first, somewhat abstract identification of the limit. That result is a step towards the full temperedness theorem (Theorem~\ref{mainthm:tempered}), which we prove in the next chapter. In Section~\ref{sec:LDP-op-norms}, we apply a version of the contraction principle to Theorem~\ref{thm:exp-typ} in order to prove a large deviations principle for the operator norms of elements of $C^\ast \G$ in a uniform random representation of $\G$.

After proving these probabilistic results, in Section~\ref{sec:zeroth-order} we observe that the exponent appearing in Theorem~\ref{thm:exp-typ} defines a new functional of representations themselves that we call `zeroth-order entropy'. We derive several properties of this functional. Finally, Section~\ref{sec:three-entropies} gives a formula that relates annealed entropy, zeroth-order entropy, and the determinantal formula for AP entropy from Theorem~\ref{mainthm:det-form}. This formula has some simple but revealing consequences, and adds to the connection between our work and Verblunsky's form of Szeg\H{o}'s limit theorem for determinants.

In addition to their intrinsic interest, these results contain the remaining preparations needed for the proof of Theorem~\ref{mainthm:tempered} (the temperedness theorem) in the next chapter, and the consequent new proofs of the Haagerup--Thorbj\o rnsen and Collins--Male theorems.

Throughout this chapter, if $\phi \in \S_k(\G)$ and $0 < t \le 1$, then
\[\phi_t := t\phi + (1-t)\chi_{\rm{reg}}\otimes I_k.\]

\section{The zeroth-order entropy of a positive definite function}\label{sec:LDP-near-assoc}

The next lemma is a probabilistic extension of Proposition~\ref{prop:mollify}. When comparing the two, beware that we switch parts (a) and (b) here. This is because the new order connects better with the lower and upper bounds that we prove for some further large deviations principles below.

\begin{lem}\label{lem:exp-typ}
Let $\phi \in \S_k(\G)$, and let $0 < t \le 1$.
\begin{itemize}
\item[a.] For every neighbourhood $O$ of $\phi$ there is a neighbourhood $U$ of $\phi_t$ such that
\[\bbP(\X(\pi_n,O) \ne \emptyset) \ge \bbE m_{\rmU(k,n)}\X(\pi_n,U).\]
\item[b.] For every neighbourhood $U$ of $\phi_t$ and $\eps > 0$ there are a neighbourhood $O$ of $\phi$ and a positive constant $c$ such that
\[\bbE m_{\rmU(k,n)}\X(\pi_n,U) \ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O)\ne \emptyset) - O(e^{-cn^2})\]
as $n\to\infty$.
\end{itemize}
\end{lem}

\begin{proof}
\emph{Part (a).}\quad If $O$ is a neighbourhood of $\phi$, then let $U$ be the neighbourhood of $\phi_t$ given by part (b) of Proposition~\ref{prop:mollify}. Applying the conclusion of that part to the random representation $\pi_n$ gives
\[\bbP(\X(\pi_n,O)\ne \emptyset) \ge \bbP(\X(\pi_n,U)\ne \emptyset).\]
This in turn is bounded below by $\bbE m_{\rmU(k,n)}\X(\pi_n,U)$ because $m_{\rmU(k,n)}$ is a probability measure.

\vspace{7pt}

\emph{Part (b).}\quad Let $U$ be a neighbourhood of $\phi_t$ and let $\eps > 0$.
First, because of the upper bound in~\eqref{eq:normalized}, it suffices to prove this result with $\vol_{2kn}/c(k,n)$ in place of $m_{\rm{U}(k,n)}$. With this modified objective, for this $U$ and $\eps$, let $O$ and $V$ be the respective neighbourhoods of $\phi$ and $\chi_{\rm{reg}}$ given by part (a) of Proposition~\ref{prop:mollify}. Applying the conclusion of that part to $\pi_n$ gives \begin{align*} \frac{\bbE \vol_{2kn}\X(\pi_n,U)}{c(k,n)} &\ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O)\ne \emptyset,\ \chi_{\pi_n} \in V)\\ &\ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O)\ne \emptyset) - \bbP(\chi_{\pi_n} \not\in V)\\ &\ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O)\ne \emptyset) - O(e^{-cn^2}), \end{align*} where $c$ is the positive constant provided by Theorem~\ref{thm:asymp-free2}. \end{proof} As Lemma~\ref{lem:exp-typ} is a probabilistic analog of Proposition~\ref{prop:mollify}, so the next result is a probabilistic analog of Corollary~\ref{cor:mollify}. It is the first main result of this chapter. To formulate it, consider the function \[ a(t) := \hann(\phi_t) \qquad (0 < t \le 1)\] and the constant \[a_\ast := \sup\{a(t):\ 0 < t \le 1\}.\] Beware that $a(t)$ and $a_\ast$ may take the value $-\infty$. \begin{thm}\label{thm:exp-typ} Let $\phi \in \S_k(\G)$, and use the notation introduced above. \begin{enumerate} \item[a.] If $a_\ast > -\infty$, then \[ \bbP\big(\X(\pi_n,O) \ne \emptyset\big) \ge e^{a_\ast n - o(n)}\] for every neighbourhood $O$ of $\phi$. \item[b.] For every $A > a_\ast$ there is a neighbourhood $O$ of $\phi$ such that \[ \bbP\big(\X(\pi_n,O) \ne \emptyset\big) \le e^{A n + o(n)}.\] \item[c.] If $a_\ast > -\infty$, then \[a(t) \ge a_\ast + k\log(1-t) \qquad (0 < t < 1),\] and so $a(t) \to a_\ast$ as $t\downarrow 0$. \end{enumerate} \end{thm} \begin{proof} \emph{Part (a).}\quad Assume that $a(t) > -\infty$ for some $t \in (0,1]$. Let $U$ be the neighbourhood of $\phi_t$ given by part (a) of Lemma~\ref{lem:exp-typ}. Then the conclusion of that lemma and Corollary~\ref{cor:LDP} give \[ \bbP\big(\X(\pi_n,O)\ne \emptyset\big) \ge \bbP(\Phi^\pi_V \in U) \ge e^{a(t)n - o(n)}.\] Since $t$ is arbitrary, this proves part (a). \vspace{7pt} \emph{Part (b).}\quad Suppose that $A > a_\ast$. Choose $t > 0$, $\eps > 0$, and $A' \in \bbR$ so that \[A + k\log(1-t) > A' + \eps > A' > a_\ast.\] Because of the right-hand inequality here, another appeal to Corollary~\ref{cor:LDP} gives a neighbourhood $U$ of $\phi_t$ such that \begin{equation}\label{eq:E-ub} \bbE m_{\rmU(k,n)}\X(\pi_n,U) = \bbP(\Phi^\pi_V \in U) \le e^{A' n + o(n)}. \end{equation} Combining this with part (b) of Lemma~\ref{lem:exp-typ}, we obtain a neighbourhood $O$ of $\phi$ and a positive constant $c$ such that \[e^{A'n + o(n)} \ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O) \ne \emptyset) - O(e^{-cn^2}).\] Re-arranging and comparing the orders of magnitude of the exponents here, this is possible only if \begin{equation}\label{eq:nearly-b} \bbP\big(\X(\pi_n,O)\ne \emptyset\big)\le e^{(A' + \eps - k\log(1-t))n + o(n)}, \end{equation} and this in turn is at most $e^{An + o(n)}$ by our choice of $t$, $\eps$, and $A'$. \vspace{7pt} \emph{Part (c).}\quad Assume that $a_\ast > -\infty$, let $t \in (0,1)$, and let $A > a(t)$. Then Corollary~\ref{cor:LDP} gives a neighbourhood $U$ of $\phi_t$ such that \[\bbE m_{\rmU(k,n)}\X(\pi_n,U) \le e^{An + o(n)}.\] We combine this with Lemma~\ref{lem:exp-typ} (as in the proof of part (b) above) and also with the conclusion of part (a). 
For $U$ as above and any $\eps > 0$, these give a neighbourhood $O$ of $\phi$ and a positive constant $c$ such that
\begin{align*}
e^{An + o(n)} &\ge e^{-\eps n}\cdot (1-t)^{kn}\cdot \bbP(\X(\pi_n,O) \ne \emptyset) - O(e^{-cn^2})\\
&\ge e^{-\eps n}\cdot (1-t)^{kn}\cdot e^{a_\ast n - o(n)} - O(e^{-cn^2}).
\end{align*}
Comparing exponents, this is possible only if
\[A \ge a_\ast - \eps + k\log (1-t).\]
Since this holds for any $A > a(t)$ and $\eps > 0$, re-arranging completes the proof.
\end{proof}

The value $a_\ast$ that appears in Theorem~\ref{thm:exp-typ} plays an important role. We need to consider how it depends on $\phi$. To this end, we introduce the following terminology and notation.

\begin{dfn}
If $\phi \in \S_k(\G)$, then its \textbf{zeroth-order entropy} is the quantity
\begin{equation}\label{eq:h0-lim-t}
\rmh^0(\phi) := \lim_{t\downarrow 0}\hann(t\phi + (1-t)\chi_{\rm{reg}}\otimes I_k).
\end{equation}
\end{dfn}

The limit in~\eqref{eq:h0-lim-t} exists and equals the corresponding supremum over $t$ by part (c) of Theorem~\ref{thm:exp-typ}. We allow any positive integer $k$ here, but it is suppressed from the notation for $\rmh^0$. We could easily extend this definition to non-unital positive definite functions, but the benefits of doing so are nullified by the results of the next section.

By parts (a) and (b) of Theorem~\ref{thm:exp-typ}, this function can also be written
\begin{equation}\label{eq:h0-inf-O}
\rmh^0(\phi) = \inf_O\Big(\limsup_{n\to\infty}\frac{1}{n}\log \bbP\big(\X(\pi_n,O) \ne \emptyset\big)\Big),
\end{equation}
where the infimum runs over all neighbourhoods $O$ of $\phi$ in $\S_k(\G)$. As a result, Lemma~\ref{lem:upper-semicts} immediately gives the following.

\begin{lem}\label{lem:usc}
When restricted to $\S_k(\G)$ for any fixed $k$, the function $\rmh^0$ is upper semicontinuous. \qed
\end{lem}

The term `zeroth-order' reflects the conceptual interpretation of $\rmh^0$ in~\eqref{eq:h0-inf-O}. It determines the asymptotic probability that $\X(\pi_n,O)$ is nonempty for small neighbourhoods $O$ of $\phi$ and sufficiently large $n$, but it is not affected by the `size' of the set $\X(\pi_n,O)$ provided it is nonempty. In this respect it resembles the classical interpretation of the R\'enyi entropy of order zero of a random variable, which is a simple function of the probability that the random variable is not zero: see, for instance,~\cite[eqn. (17.98)]{CovTho06}. As with R\'enyi entropy, we could also define an `order-$p$ annealed entropy' for any $p > 0$, recovering our original quantity $\hann$ when $p=1$, but this turns out not to provide any additional insight. We explain this further in Section~\ref{sec:three-entropies} below.

If $\rmh^0(\phi) = 0$, then Theorem~\ref{thm:exp-typ} tells us that the probability that $\phi$ is asymptotically associated to $\bspi$ does not decay exponentially. However, it turns out that the latter conclusion can be strengthened considerably: in this case asymptotic association actually occurs with high probability as $n\to\infty$. This is another consequence of measure concentration.

\begin{lem}\label{lem:typ-or-very-atyp}
If $\rmh^0(\phi) = 0$, then in fact
\[ \bbP\big(\X(\pi_n,O) \ne \emptyset\big) \to 1\]
for every neighbourhood $O$ of $\phi$.
\end{lem}

\begin{proof}
Let $O$ be a neighbourhood of $\phi$.
Choose a smaller neighbourhood $O'$ of $\phi$ and $\eps > 0$ such that \[ \Phi_X^\pi \in O'\quad \hbox{and}\quad \max_{s\in S}\|\pi(s)-\rho(s)\| < \eps \quad \Rightarrow \quad \Phi^\rho_X \in O\] for any $X \in \rm{U}(k,n)$ and $n$-dimensional representations $\pi$ and $\rho$. This implies that \[ B_\eps(\{(\pi(s):\ s\in S):\ \X(\pi,O')\ne \emptyset \}) \subset \{(\pi(s):\ s \in S):\ \X(\pi,O)\ne \emptyset\},\] where both sides are subsets of $\rm{U}(n)^S$, and $B_\eps$ is the $\eps$-neighbourhood in the maximum of the operator norm applied to each coordinate in $\rmU(n)^S$. By assumption, we have \[ \bbP\big(\X(\pi_n,O') \ne \emptyset \big) \ge e^{-o(n)}.\] That maximum of operator norms is $O(1)$-Lipschitz with respect to the Hilbert--Schmidt norm in Theorem~\ref{thm:Un-conc}, so the set inclusion above and a standard consequence of exponential concentration give \[ \bbP\big(\X(\pi_n,O)\ne \emptyset\big) = 1 - o(1). \] \end{proof} As we study other properties of $\rmh^0$ below, we find that the most basic ones are easier to derive from the more conceptual meaning given by~\eqref{eq:h0-inf-O} than from the rather complicated formula in~\eqref{eq:h0-lim-t}. However,~\eqref{eq:h0-lim-t} becomes crucial for some more substantial calculations and estimates later, particularly during the proof of the temperedness theorem (Theorem~\ref{mainthm:tempered}) in the next chapter. The next main result of this section identifies the limit in probability of the random sequence $\S_k(\pi_n)$ in $\cal{Z}\G$. We formulate and prove it following an auxiliary lemma, which is a probabilistic analog of Lemma~\ref{lem:Vietoris-conv}. \begin{lem}\label{lem:Vietoris-conv-prob} Let $T$ be a compact metric space and give $\calH(T)$ the Vietoris topology. On some probability space, let $X_1$, $X_2$, \dots be a sequence of $\calH(T)$-valued random variables. Then $X_n$ converges in probability to some deterministic limit in $\calH(T)$ if and only if every $t \in T$ satisfies one of the following: \begin{itemize} \item[i.] every neighbourhood $U$ of $t$ satisfies $\bbP(U\cap X_n \ne \emptyset) \to 1$; \item[ii.] some neighbourhood $U$ of $t$ satisfies $\bbP(U\cap X_n = \emptyset) \to 1$. \end{itemize} In this case, the limit is the set of those $t$ that satisfy (i), which is necessarily nonempty. \end{lem} \begin{proof} Recall the notation $\V(\cdot)$ for Vietoris neighbourhoods from Section~\ref{sec:prelim-hyperspace}. \vspace{7pt} \emph{($\Rightarrow$).}\quad Suppose that $K \in \calH(T)$ and $X_n\to K$ in probability. If $t \in K$, then for any neighbourhood $U$ of $t$ we have \[\bbP(U\cap X_n\ne \emptyset) = \bbP(X_n \in \V(U,T)) \to 1.\] On the other hand, if $t\in T\setminus K$, then $t$ has a neighbourhood $U$ such that $K \subset T\setminus \ol{U}$, and now \[\bbP(U\cap X_n = \emptyset) \ge \bbP(\ol{U}\cap X_n = \emptyset) = \bbP(X_n \in \V(T\setminus \ol{U})) \to 1.\] \vspace{7pt} \emph{($\Leftarrow$).}\quad Suppose that (i) and (ii) hold, and let $K$ be the set of points $t$ that satisfy (i). Then $K$ is closed, because its complement is explicitly a union of open sets. If $K$ were empty, then for every $t \in T$ assumption (ii) would give a neighbourhood $V_t$ such that \[\bbP(V_t \cap X_n = \emptyset) \to 1.\] Choosing a finite subcover of the compact space $T$, this would imply that also \[\bbP(X_n = \emptyset) \to 1,\] contradicting our assumption that $X_n$ takes values in $\calH(T)$. So $K\ne \emptyset$, and hence $K \in\calH(T)$. 
Now consider a general Vietoris neighbourhood $\V(U_1,\dots,U_k)$ of $K$. Let \[L := T\setminus (U_1\cup \cdots \cup U_k).\] This is an element of $\calH(T)$ that is disjoint from $K$. Therefore every $t \in L$ has a neighbourhood $V_t$ such that \[\bbP(V_t\cap X_n = \emptyset) \to 1.\] By compactness we can choose a finite subcover $V_{t_1}$, \dots, $V_{t_\ell}$ of $L$, and it follows that \[\bbP(X_n \in \V(U_1,\dots,U_k))\ge \bbP\Big(\bigcap_{i=1}^k\{U_i\cap X_n \ne \emptyset\}\cap \bigcap_{j=1}^\ell \{V_{t_j}\cap X_n = \emptyset\}\Big) \to 1,\] applying assumption (i) for the events featuring $U_i$ and assumption (ii) for the events featuring $V_{t_j}$. \end{proof} \begin{prop}\label{prop:weak-global-in-prob} For each positive integer $k$, the set \[\{\phi \in \S_k(\G):\ \rmh^0(\phi) = 0\}\] is compact and nonempty, and the random sets $\S_k(\pi_n)$ converge to it in probability in the Veitoris topology of $\calH(\S_k(\G))$. \end{prop} \begin{proof} Fix $k$, and accordingly regard $\S_k(\G)$ as the domain of $\rmh^0$ for the rest of the proof. We verify the conditions from Lemma~\ref{lem:Vietoris-conv-prob}. If $\rmh^0(\phi) = 0$ and $U$ is a neighbourhood of $\phi$, then Lemma~\ref{lem:typ-or-very-atyp} gives \[\bbP(\S_k(\pi_n)\cap U \ne \emptyset) = \bbP(\X(\pi_n,U) \ne \emptyset) \to 1.\] This verifies condition (i) from Lemma~\ref{lem:Vietoris-conv-prob}. Now suppose that $\rmh^0(\phi) < 0$. Whenever $0 > A > \rmh^0(\phi)$, part (b) of Theorem~\ref{thm:exp-typ} gives a neighbourhood $U$ of $\phi$ such that \[\bbP(\S_k(\pi_n)\cap U \ne \emptyset) \le e^{An + o(n)} \to 0.\] This verifies condition (ii) from Lemma~\ref{lem:Vietoris-conv-prob}. \end{proof} Proposition~\ref{prop:weak-global-in-prob} is a prelude to Theorem~\ref{thm:exp-typ-2}, which gives a large deviations principle for the sequence $\S_k(\pi_n)$ in the Vietoris topology of $\cal{Z}\G$. However, that result also depends on Theorem~\ref{mainthm:tempered}, which tells us that $\{\rmh^0 = 0\}$ is precisely the set of tempered positive definite functions. The last result of this section is easy to deduce from~\eqref{eq:h0-lim-t} and Lemma~\ref{lem:hann-additivity}, but seems much less obvious from the interpretation in~\eqref{eq:h0-inf-O}. \begin{lem}\label{lem:additivity} If $\phi_i \in \S_{k_i}(\G)$ for $i=1,\dots,\ell$ and $\psi = \rm{diag}(\phi_1,\dots,\phi_\ell)$, then \[\rmh^0(\psi) = \sum_{i=1}^\ell\rmh^0(\phi_i)\] (where, as usual, both sides may equal $-\infty$). \end{lem} \begin{proof} Each $\psi_t$ is equal to $\rm{diag}((\phi_1)_t,\dots,(\phi_\ell)_t)$, so the result follows by applying Lemma~\ref{lem:hann-additivity} and letting $t\downarrow 0$. \end{proof} \section{Large deviations for operator norms}\label{sec:LDP-op-norms} Theorem~\ref{thm:exp-typ} implies the following large deviations control over the upper tails of operator norms. It supplements the Collins--Male theorem, which gives the approximate behaviour of those norms with high probability. After we prove Theorem~\ref{mainthm:tempered} in the next chapter, we can combine it with this proposition to give a new proof of the Collins--Male theorem. \begin{prop}\label{prop:norm-upper-tails} Let $a \in C^\ast \G$ be non-negative. 
For each $c > 0$, let \[S_c := \{\phi \in \S_1(\G):\ \phi(a) \ge \tau_{\rm{reg}}(a) + c\},\] and define \[I(c) := \sup\{\rmh^0(\phi):\ \phi \in S_c\}.\] Then \[I(c)\cdot n + o(n) \ge \log\bbP(\|\pi_n(a)\| \ge \|\l(a)\| + c) \ge \sup\{I(c'):\ c' > c\}\cdot n - o(n)\] as $n\to\infty$ (with the obvious adjustments if $I(c) = -\infty$). \end{prop} \begin{proof} \emph{Upper bound.}\quad Let $h > I(c)$. By continuity and Theorem~\ref{thm:exp-typ}, every $\phi \in S_c$ has a neighbourhood $O_\phi$ which satisfies \[\log\bbP(\X(\pi_n,O_\phi)\ne \emptyset)\le hn + o(n).\] Since $S_c$ is compact, we can cover it with finitely many such neighbourhoods, say $O_{\phi_1}\cup \cdots \cup O_{\phi_m}$. Therefore, since the operator norm $\pi_n(a)$ is achieved by some vectors in the finite-dimensional sphere $\rmS^{2n-1}$, we have \begin{align*} \log\bbP(\|\pi_n(a)\| \ge \|\l(a)\| + c) &\le \log\bbP(\X(\pi_n,O_{\phi_1}\cup \cdots \cup O_{\phi_m}) \ne \emptyset)\\ &\le h n + o(n). \end{align*} By the arbitrariness of $h$, this proves the upper bound. \vspace{7pt} \emph{Lower bound.}\quad Let $c' > c$, and assume that $I(c') > -\infty$, for otherwise the result is vacuous. Let $\eps > 0$, and pick $\phi \in S_{c'}$ such that $\rmh^0(\phi) > I(c') - \eps$. Then the set \[O := \{\psi \in \S_1(\G):\ \psi(a) > c\}\] is a neighbourhood of $\phi$, and so Theorem~\ref{thm:exp-typ} gives \begin{multline*} \log\bbP(\|\pi_n(a)\| \ge \|\l(a)\| + c) \ge \log\bbP(\X(\pi_n,O) \ne \emptyset) \\ \ge\rmh^0(\phi)\cdot n - o(n) \ge (I(c') - \eps)\cdot n - o(n). \end{multline*} By the arbitrariness of $c'$ and $\eps$, this completes the proof. \end{proof} The conclusion of Proposition~\ref{prop:norm-upper-tails} simplifies to an exact asymptotic if $I$ is continuous at $c$. \begin{prob} Is there any choice of $a$ in Proposition~\ref{prop:norm-upper-tails} for which the function $I$ is not continuous everywhere? \end{prob} Proposition~\ref{prop:norm-upper-tails} can of course be adapted to a general element $b$ of $C^\ast \G$ by applying it to $a := b^\ast b$. In Theorem~\ref{thm:exp-typ-2} below we prove a large deviations principle for the random sets $\ol{\S_\bullet(\pi_n)}$ in the space $\cal{Z}\G$ with the Vietoris topology. That principle gives another way to prove Proposition~\ref{prop:norm-upper-tails} by a routine application of the contraction principle (see~\eqref{eq:contraction}). \begin{rmk} n \end{rmk} \section{The zeroth-order entropy of a representation}\label{sec:zeroth-order} It turns out that $\rmh^0(\phi)$ is much less sensitive to the choice of $\phi$ than AP entropy or annealed AP entropy. In particular, we prove shortly that $\rmh^0(\phi)$ depends only on the GNS representation $\pi_\phi$, and is monotone under containment of representations. This property motivates the definition of $\rmh^0$ for general representations in Definition~\ref{dfn:0-ent} below. In working towards these results, an alternative notation is sometimes helpful. If $\pi$ is a representation of $\G$ on $H$, and $x_1, \dots, x_k \in H$, then we write \[\rmh^0(\pi,[x_1,\dots,x_k]) := \rmh^0(\Phi^\pi_{[x_1,\dots,x_k]}).\] The next lemma is the basic step for the rest of this section. \begin{lem}\label{lem:h0-almost-assoc} If $\psi$ is approximately associated to $\pi_\phi$, then $\rmh^0(\psi)\ge \rmh^0(\phi)$. 
\end{lem}

\begin{proof}
By Corollary~\ref{cor:typ-trans}, if $U$ is any neighbourhood of $\psi$ then there is a neighbourhood $O$ of $\phi$ such that
\[\X(\pi,O)\ne \emptyset \quad \Rightarrow \quad \X(\pi,U)\ne \emptyset\]
for any representation $\pi$. With this choice of $O$, it follows that
\[\bbP(\X(\pi_n,O)\ne \emptyset) \le \bbP(\X(\pi_n,U)\ne \emptyset) \qquad (n\ge 1).\]
Taking logarithms, normalizing, letting $n\to\infty$, and then taking the infimum over $U$, this turns into the desired inequality.
\end{proof}

\begin{cor}
If $\pi_\phi \cong_{\rm{a}} \pi_\psi$, then $\rmh^0(\phi) = \rmh^0(\psi)$. \qed
\end{cor}

The next corollary gives the simple half of Theorem~\ref{mainthm:tempered}.

\begin{cor}\label{cor:easy-tempered}
If $\phi \in \S_k(\G)$ is tempered then $\rmh^0(\phi) = 0$.
\end{cor}

\begin{proof}
Since $\chi_{\pi_n} \to \chi_{\rm{reg}}$ in probability (Theorem~\ref{thm:asymp-free2}), this follows from Corollary~\ref{cor:char-and-weak-global}.
\end{proof}

Lemma~\ref{lem:h0-almost-assoc} inspires our more general definition of $\rmh^0$.

\begin{dfn}\label{dfn:0-ent}
The \textbf{zeroth-order entropy} of a representation $\pi$ of $\G$ on $H$ is
\begin{align*}
\rmh^0(\pi) &:= \inf\Big\{\rmh^0(\phi):\ \phi \in \bigcup_{k\ge 1}\S_k(\pi)\Big\}\\
&= \inf\big\{\rmh^0(\pi,[x_1,\dots,x_k]):\ k\ge 1,\ x_1,\dots,x_k \in H\big\}.
\end{align*}
\end{dfn}

The rest of this section develops the properties of zeroth-order entropy. These flow from Definition~\ref{dfn:0-ent} rather similarly to the traditional treatment of Kolmogorov--Sinai entropy in ergodic theory when it is defined as a supremum over partitions: see, for example,~\cite[Sections 4.4--6]{Walters--book}. Let us stress, however, that $\rmh^0$ is defined in general as an infimum rather than a supremum, even though Definition~\ref{dfn:0-ent} and Kolmogorov--Sinai entropy follow the same sign convention for an `entropy'.

Sometimes a slightly different formula is more convenient.

\begin{lem}\label{lem:0-ent-alt-dfn}
We have
\[\rmh^0(\pi) = \inf\Big\{\rmh^0(\phi):\ \phi \in \bigcup_{k \ge 1}\ol{\S_k(\pi)}\Big\}.\]
\end{lem}

\begin{proof}
This follows from the original formula in Definition~\ref{dfn:0-ent} together with the upper semicontinuity from Lemma~\ref{lem:usc}.
\end{proof}

The next results allow us to restrict which vectors in a representation we consider when computing $\rmh^0$. The first is analogous to the Kolmogorov--Sinai generator theorem~\cite[Theorem 4.17]{Walters--book}.

\begin{lem}\label{lem:h0-cyclic}
If $[x_1,\dots,x_k]$ is cyclic for $\pi$, then
\[\rmh^0(\pi) = \rmh^0(\pi,[x_1,\dots,x_k]).\]
Equivalently, if $\phi \in \S_k(\G)$, then $\rmh^0(\pi_\phi) = \rmh^0(\phi)$.
\end{lem}

\begin{proof}
The inequality ``$\le$'' follows at once from the definition of $\rmh^0$. On the other hand, if $\psi$ is any other positive definite function associated to $\pi$, then Lemma~\ref{lem:h0-almost-assoc} shows that $\rmh^0(\psi)$ is bounded below by $\rmh^0(\pi,[x_1,\dots,x_k])$, and this gives the reverse inequality ``$\ge$''.
\end{proof}

The next lemma generalizes Lemma~\ref{lem:h0-cyclic} to representations that do not have finite cyclic tuples. It is an analog of~\cite[Theorems 4.21 and 4.22]{Walters--book}.

\begin{lem}\label{lem:h0-nearly-cyclic}
Let $\pi$ be a representation of $\G$ on $H$, and let $S$ be a subset of $H$ such that the linear span of $\{\pi(g)x:\ g \in \G, x \in S\}$ is dense.
Then \[\rmh^0(\pi) = \inf\big\{\rmh^0(\pi,[x_1,\dots,x_k]):\ k\ge 1,\ x_1,\dots,x_k \in S\big\}.\] \end{lem} \begin{proof} The inequality ``$\le$'' follows at once from the definition of $\rmh^0$. For the reverse direction, suppose that $Y = [y_1,\dots,y_\ell]$ is a tuple in $H$. For any tuple $X = [x_1,\dots,x_k]$ in $H$, let $M_X$ be the $\pi$-invariant subspace of $H$ generated by $X$. By our assumption on $S$, there are tuples $X_1$, $X_2$, \dots drawn from $S$ and further tuples $Y_1$, $Y_2$, \dots such that $Y_n$ lies in $M_{X_n}$ and $Y_n \to Y$, and hence also $\Phi^\pi_{Y_n}\to \Phi^\pi_Y$ by Lemma~\ref{lem:unif-cts}. By Lemma~\ref{lem:usc} and Lemma~\ref{lem:h0-cyclic}, it follows that \[\rmh^0(\pi,Y) \ge \limsup_{n\to\infty} \rmh^0(\pi,Y_n) \ge \limsup_{n\to\infty} \rmh^0(\pi^{M_{X_n}}) = \limsup_{n\to\infty} \rmh^0(\pi,X_n).\] Since each $X_n$ is a tuple drawn from $S$, this proves the inequality ``$\ge$''. \end{proof} Using Lemma~\ref{lem:0-ent-alt-dfn}, we can improve Lemma~\ref{lem:h0-almost-assoc} to the following monotonicity for an arbitrary pair of representations $\pi$ and $\rho$. \begin{cor}\label{cor:h0-almost-contained} If $\rho$ is approximately contained in $\pi$, then $\rmh^0(\rho) \ge \rmh^0(\pi)$. In particular, if $\rho$ is approximately contained in an inflation of the regular representation, then $\rmh^0(\rho) = 0$. \end{cor} \begin{proof} If $\phi$ is approximately associated to $\rho$, then it is also approximately associated to $\pi$, so the first conclusion follows by Lemma~\ref{lem:0-ent-alt-dfn}. Then the second conclusion follows from Corollary~\ref{cor:easy-tempered}. \end{proof} \begin{cor}\label{cor:approx-equiv-invar} The function $\rmh^0$ on representations is an invariant of approximate equivalence (and hence certainly of unitary equivalence). \qed \end{cor} \begin{rmk} n \end{rmk} The abstract properties of $\rmh^0$ developed so far enable the following modest generalization of Theorem~\ref{thm:exp-typ}. \begin{cor}\label{cor:exp-typ} Fix a separable representation $\pi$ of $\G$. \begin{itemize} \item[a.] If $\rmh^0(\pi) > -\infty$, then for any Fell neighbourhood $U$ of $\pi$ and any $\eps > 0$ we have \[\bbP(\pi_n \in U) \ge e^{(\rmh^0(\pi) - \eps)n - o(n)}.\] \item[b.] If $h_0 >\rmh^0(\pi)$, then $\pi$ has a Fell neighbourhood $U$ such that \[\bbP(\pi_n \in U) \le e^{h_0n + o(n)}.\] \end{itemize} \end{cor} \begin{proof} By Lemma~\ref{lem:lower-Vietoris-simplify}, we may restrict attention to the base of Fell neighbourhoods that have the form \[U = \{\rho:\ \X(\rho,O) \ne \emptyset\}\] for some positive integer $k$ and some open subset $O$ of $\S_k(\G)$ such that $\X(\pi,O)$ is nonempty. However, for neighbourhoods $U$ of this form for a fixed value of $k$, the probabilities in parts (a) and (b) of the present corollary simply reduce to parts (a) and (b) of Theorem~\ref{thm:exp-typ}, so they are governed by $\rmh^0$ as a function on $\S_k(\G)$. Taking an infimum over associated positive definite functions completes the proof. \end{proof} Corollary~\ref{cor:exp-typ} looks like a large deviations principle for the sequence $\pi_n$ in the Fell topology. However, that topology is far from metrizable, so this corollary does not fall within the usual scope of large deviations theory. A better point of view is that Corollary~\ref{cor:exp-typ} is an abstraction of Proposition~\ref{prop:norm-upper-tails} that controls the `upper tails' of the whole random sets $\S_k(\pi)$. 
Indeed, the earlier proof of Proposition~\ref{prop:norm-upper-tails} is essentially the special case of the proof above that focuses on Fell neighbourhoods defined by lower bounds on the norms of operators (compare~\cite[Proposition 3.3.2 and Lemma 3.3.3]{Dix--Cstar}). In Theorem~\ref{thm:exp-typ-2} below we improve it to true a large deviations principle in the separable and completely metrizable space $\cal{Z}\G$ with its Vietoris topology. The next proposition is the analog of the additivity of Kolmogorov--Sinai entropy under Cartesian products~\cite[Theorem 4.23]{Walters--book}. \begin{prop}\label{prop:additivity} Let $(\pi_i:\ i\in I)$ be any family of representations of $\G$ and let $\pi$ be their direct sum. Then \[\rmh^0(\pi) = \inf\Big\{\sum_{i\in J}\rmh^0(\pi_i):\ J\subset I,\ J\ \hbox{finite}\Big\}.\] \end{prop} In the sequel, we usually write simply \[\sum_{i \in I}\rmh^0(\pi_i)\] for the infimum on the right-hand side in Proposition~\ref{prop:additivity}. \begin{proof} Let $H_i$ be the space of $\pi_i$ for each $i$, so the space of $\pi$ is \[H := \bigoplus_{i \in I}H_i.\] Let $S$ be the set of vectors in $H$ that are non-zero in at most one coordinate. This satisfies the hypotheses of Lemma~\ref{lem:h0-nearly-cyclic}, so \[\rmh^0(\pi) = \inf\big\{\rmh^0(\pi,X):\ X\ \hbox{is a tuple drawn from}\ S\big\}.\] If $X$ is a tuple drawn from $S$, then it uses only finitely many of the direct summands in $H$. Therefore, permuting the entries of $X$ if necessary, we may write it as the concatenation of some sub-tuples, say $X_{i_1}$, \dots, $X_{i_\ell}$, so that $X_{i_j}$ is a tuple of vectors nonzero in only the $i_j^{\rm{th}}$ coordinate. Since these sub-tuples lie in orthogonal sub-representations of $\pi$, Lemma~\ref{lem:additivity} gives \[\rmh^0(\pi,X) = \sum_{r=1}^\ell \rmh^0(\pi,X_{i_r}).\] By taking the infimum over all possible choices of $i_1$, \dots, $i_r$ and then $X_{i_1}$, \dots, $X_{i_r}$, this completes the proof. \end{proof} \begin{cor}\label{cor:h0-sing-only} Let $\pi$ be a representation of $\G$ on $H$, and let $M$ be the closed subspace of $H$ given by Proposition~\ref{prop:Leb-reps}, so $\pi^M$ is contained in $\l^{(\infty)}$ and $\pi^{M^\perp}$ is disjoint from $\l$. Then \[\rmh^0(\pi) = \rmh^0(\pi^{M^\perp}).\] \end{cor} \begin{proof} Proposition~\ref{prop:additivity} gives \[\rmh^0(\pi) = \rmh^0(\pi^{M^\perp}) + \rmh^0(\pi^M).\] Since $\pi^M$ is contained in $\l^{(\infty)}$, Corollary~\ref{cor:h0-almost-contained} shows that $\rmh^0(\pi^M) = 0$. \end{proof} In the next chapter, Proposition~\ref{prop:form-for-h0} makes the description from Corollary~\ref{cor:h0-sing-only} much more precise. \begin{prob} Evaluate or estimate $\rmh^0$ for some concrete examples of positive definite functions or representations. One example could be a Haagerup function (Example~\ref{ex:Haa-fns}) $\phi$ which is not tempered. Provided $|\phi(s)| < 1$ for every $s$, the formula for $\hann(\phi)$ from Corollary~\ref{cor:first-expansion} has only finitely many terms, and they are all finite. This implies that $\rmh^0(\phi) > -\infty$ using part (a) of Lemma~\ref{lem:exp-typ} with $t=1$. But on the other hand Theorem~\ref{mainthm:tempered} shows that $\rmh^0(\phi) < 0$ if $\phi$ is not tempered, and for a non-Abelian free group this holds when the values $|\phi(s)|$ are close enough to $1$. Another example could be the function $1_H$ for $H$ a nontrivial subgroup of $\G$. I guess that $\rmh^0(1_H) = -\infty$, but I have not proved this. 
Generalizing that example, a third could be positive definite functions given by invariant random subgroups as in~\cite[Section 15.F]{BekdelaHarUDC}. Finally, I also guess that any finite-dimensional representation $\pi$ has ${\rmh^0(\pi) = -\infty}$, but I have not proved this either.
\end{prob}

\section{The three-entropy formula}\label{sec:three-entropies}

At this point, for a positive definite function $\phi$ over a free group $\G$, we have met three different entropy values:
\begin{itemize}
\item AP entropy along a fixed AP sequence, which always equals either $-\infty$ or the Fuglede--Kadison log-determinant from Theorem~\ref{mainthm:det-form}.
\item Annealed AP entropy.
\item Zeroth-order entropy, which depends only on the GNS representation $\pi_\phi$.
\end{itemize}
These entropies are related by the following formula.

\begin{thm}\label{thm:three-entropies}
Let $\tau$ be the regular tracial state on $C^\ast \G$. For any positive integer $k$ and positive definite function $\phi:\G\to\rmM_k$, we have
\[\hann(\phi) = \rmh^0(\pi_\phi) + \log \Delta_\tau\phi_{\rm{ac}}.\]
\end{thm}

Theorem~\ref{thm:three-entropies} is a probabilistic analog of conclusion (b) in Corollary~\ref{cor:det-form}. Its proof refers to the probabilistic interpretations of $\hann(\phi)$ (Corollary~\ref{cor:LDP}) and $\rmh^0(\pi_\phi)$ (Theorem~\ref{thm:exp-typ}). It combines these with the reformulation of Theorem~\ref{mainthm:det-form} given by Theorems~\ref{thm:det-form-lower} and~\ref{thm:det-form-upper}, which makes the relevant dependences more explicit. The proof does not use the explicit formulas for $\hann$ given by Theorem~\ref{mainthm:annealed-formula} or the defining formula~\eqref{eq:h0-lim-t} for $\rmh^0$.

\begin{proof}
This is rather similar to the proof of Theorem~\ref{mainthm:det-form} itself from Theorems~\ref{thm:det-form-lower} and~\ref{thm:det-form-upper} (see Section~\ref{sec:det-form-reform}), but now combined with an appeal to Theorem~\ref{thm:exp-typ}.

If $\phi(e)$ is singular, then both sides equal $-\infty$ and there is nothing more to prove, so assume it is nonsingular. Replacing $\phi$ with $\phi(e)^{-1/2}\cdot \phi\cdot \phi(e)^{-1/2}$ and subtracting $\rmH(\phi(e))$ from both sides of the desired equality, we may then assume that $\phi$ is unital. In addition, by Lemma~\ref{lem:h0-cyclic} and Corollary~\ref{cor:h0-sing-only}, it is equivalent to prove that
\begin{equation}\label{eq:three-entropies}
\hann(\phi) = \rmh^0(\phi_{\rm{sing}}) + \log \Delta_\tau\phi_{\rm{ac}}.
\end{equation}
We prove~\eqref{eq:three-entropies} in the form of two inequalities. Let $h_0$ and $h_1$ be the two summands on the right-hand side of~\eqref{eq:three-entropies}.

\vspace{7pt}

\emph{Step 1: ($\ge$).}\quad Assume that $h_0$ and $h_1$ are both finite, let $\eps > 0$, and let $O$ be any neighbourhood of $\phi$. Let $\t{O}$ be the corresponding neighbourhood of $\langle \phi,\cdot\rangle$ under the isomorphism from Proposition~\ref{prop:pairing-positive}. Let the integer $d_0$ and the neighbourhoods $\t{V}$ of $\tau\otimes \tr_k$ and $\t{U}$ of $\langle\phi_{\rm{sing}},\cdot\rangle$ be given by Theorem~\ref{thm:det-form-lower}, and let $V$ and $U$ be the corresponding neighbourhoods of $\tau$ and $\phi_{\rm{sing}}$.
Recalling the identity~\eqref{eq:X-and-X} and the asymptotic~\eqref{eq:ckn-asymp}, once $kn\ge d_0$, Theorem~\ref{thm:det-form-lower} and part (c) of Proposition~\ref{prop:AP-for-pairing} give the following lower bound: \begin{align*} \bbE\frac{\vol_{2kn}\X(\pi_n,O)}{c(k,n)} &\ge \bbE\Big[\frac{\vol_{2kn}\X((\pi_n)_{(k)},\t{O})}{c(1,kn)}\,;\, \tr_n\circ \pi_n \in V,\ \X(\pi_n,U) \ne \emptyset\Big] \\ &\ge e^{(h_1/k - \eps)kn}\cdot \bbP(\tr_n\circ \pi_n \in V,\ \X(\pi_n,U) \ne \emptyset). \end{align*} By Theorem~\ref{thm:asymp-free2} and part (a) of Theorem~\ref{thm:exp-typ}, there exists $c > 0$ such that this lower bound is at least \begin{multline*} e^{(h_1 - k\eps)n}\cdot \big(\bbP\big(\X(\pi_n,U) \ne \emptyset\big) - O(e^{-cn^2})\big) \ge e^{(h_1 - k\eps)n}\cdot \big(e^{(h_0 - \eps)n} - O(e^{-cn^2})\big). \end{multline*} Taking logarithms, normalizing by $n$, and letting $n \to \infty$, this shows that \[\hann(\phi) \ge h_0 + h_1 - (k+1)\eps.\] Since $\eps > 0$ is arbitrary, this completes the proof. \vspace{7pt} \emph{Step 2: ($\le$).}\quad Let $a_0 > h_0$, and for this choice of $a_0$ let $O_0$ be the neighbourhood of $\phi_{\rm{sing}}$ given by part (b) of Theorem~\ref{thm:exp-typ}. Let $a_1 > h_1$, and for this choice of $a_1$ let the integer $d_0$ and neighbourhoods $\t{V}$ of $\tau\otimes \tr_k$ and $\t{O}$ of $\langle\phi,\cdot\rangle$ be given by Theorem~\ref{thm:det-form-upper}, and let $V$ and $O$ be the corresponding neighourhoods of $\tau$ and $\phi$, respectively. Finally, after shrinking $O$ further if necessary, assume in addition that \begin{equation}\label{eq:O-to-ball} \X(\pi,O) \subset B_R(0) \subset (\bbC^{(d)})^{(k)} \end{equation} for some $R > 0$ and any $d$-dimensional representation $\pi$, and also that \begin{equation}\label{eq:O-to-O0} \X(\pi,O) \ne \emptyset \quad \Rightarrow \quad \X(\pi,O_0) \ne \emptyset \end{equation} for any representation $\pi$ (this last implication can be enforced by Corollary~\ref{cor:typ-trans}). Now the properties gathered here and another appeal to~\eqref{eq:X-and-X} and~\eqref{eq:ckn-asymp} give the following upper bound for all sufficiently large $n$: \begin{align*} &\bbE\frac{\vol_{2kn}\X(\pi_n,O)}{c(k,n)} \\ &= \bbE\Big[\frac{\vol_{2kn}\X(\pi_n,O)}{c(k,n)}\,;\,\tr_n\circ \pi_n \not\in V\Big] + \bbE\Big[\frac{\vol_{2kn}\X(\pi_n,O)}{c(k,n)}\,;\,\tr_n\circ \pi_n \in V\Big] \\ &\le R^{2kn}\cdot \bbP(\tr_n\circ \pi_n \not\in V) \\ &\qquad \qquad + \bbE\Big[\frac{\vol_{2kn}\X((\pi_n)_{(k)},\t{O})}{c(1,kn)}\,;\,\tr_n\circ \pi_n \in V,\ \X(\pi_n,O_0) \ne \emptyset\Big] \\ &\le R^{2kn}\cdot \bbP(\tr_n\circ \pi_n \not\in V) + e^{a_1n}\cdot \bbP(\tr_n\circ \pi_n \in V,\ \X(\pi_n,O_0) \ne \emptyset), \end{align*} where the first inequality holds by applying~\eqref{eq:O-to-ball} to the first term and~\eqref{eq:O-to-O0} to the second term. By Theorem~\ref{thm:asymp-free2} and part (b) of Theorem~\ref{thm:exp-typ}, there exists $c > 0$ such that this upper bound is at most \[O(R^{2kn}e^{-cn^2}) + e^{(a_0+a_1)n}.\] Taking logarithms, normalizing by $n$, and letting $n\to\infty$, this shows that \[\hann(\phi) \le a_0+a_1.\] By the arbitrariness of $a_0$ and $a_1$, this completes the proof. \end{proof} \begin{rmk} n \end{rmk} Let $\phi \in \S_k(\G)$ and let $p$ be a positive real number. 
Then essentially the same proof as above gives the generalization \[\inf_O\limsup_{n\to\infty}\frac{1}{n}\log \bbE\big[(m_{\rm{U}(k,n)}\X(\pi_n,O))^p\big] = \rmh^0(\pi_\phi) + p\cdot \log\Delta_\tau \phi_{\rm{ac}}.\] Theorem~\ref{thm:three-entropies} is the case $p=1$. Letting $p\downarrow 0$, we recover simply $\rmh^0(\pi_\phi)$, so this further justifies the name `zeroth-order'. This shows why defining entropies of other `orders' has limited value in our setting: they are simply obtained from $\rmh^0$ and $\hann$ by linear interpolation. Now let $\phi \in \S_k(\G)$, let $g_0$, $g_1$, \dots be a grounded enumeration of $\G$, and let $C_0$, $C_1$, \dots be the resulting Verblunsky coefficients for $\phi$. Then Corollary~\ref{cor:first-expansion} and Theorem~\ref{thm:three-entropies} combine to give \begin{equation}\label{eq:free-Verb} \sum_{n=0}^\infty\rmH(I_k - C_n^\ast C_n) = \rmh^0(\pi_\phi) + \log\Delta_\tau \phi_{\rm{ac}}. \end{equation} If $\G = \bbZ$, then all positive definite functions are tempered, so $\rmh^0$ is identically zero. Then the remaining equation above is precisely Verblunsky's form of Szeg\H{o}'s limit theorem for determinants when $k=1$ (see~\cite[Theorem 1.8.6]{SimSzeg}), or its matrix extension when $k > 1$ (see~\cite[Theorem 2.13.5]{SimOPUCI}). These classical theorems have many different proofs. Among them, the proof via large deviations is a relatively recent discovery of Gamboa, Nagel and Rouault~\cite{GamNagRou16}; see also the survey~\cite{BreSimZei18}, which gives references to subsequent refinements and variants on this work by those same three authors. Our work here generalizes these results, and that large-deviations proof, to finitely generated free groups, as promised in Section~\ref{sec:prev-det-form}. When $\G$ has at least two generators, it is not amenable, and $\rmh^0$ may be nonzero. We thus discover that the free-groups generalization of Verblunsky's form of Szeg\H{o}'s theorem has an additional term. This term vanishes if $\phi$ is tempered. However, it turns out to be strictly negative if $\phi$ is not tempered because of Theorem~\ref{mainthm:tempered} (proved in the next chapter). In a sense, it `feels' how far $\phi$ is from being tempered. We can look at~\eqref{eq:free-Verb} another way. Let $a \in C^\ast \G$, let $\l$ be the regular representation with its usual tracial vector $\xi$, and let $\phi := \Phi^\pi_{\l(a)\xi}$. This is tempered by construction. In this case $k=1$, so the resulting Verblunsky coefficients are single vectors $c_n$, and~\eqref{eq:free-Verb} turns into \begin{equation}\label{eq:free-Verb2} \log\Delta_\tau |a| = \sum_{n=0}^\infty \log\sqrt{1 - |c_n|^2}. \end{equation} As far as I know, this is a new formula for the regular Fuglede--Kadison determinant of any element of $C^\ast \G$. Its utility is unclear, since estimating the quantities $|c_n|$ may be difficult for most choices of $a$. \begin{prob} Give a direct derivation of~\eqref{eq:free-Verb2} without going through $\hann$ and $\rmh^0$. \end{prob} \chapter{The temperedness theorem}\label{chap:tempered} In view of Theorem~\ref{thm:exp-typ}, the main point of Theorem~\ref{mainthm:tempered} is that the zeroth-order entropy of a representation vanishes if and only if that representation is approximately contained in the regular representation. The proof of this occupies the first four sections below. In Section~\ref{sec:cyclic} we reduce our work to the study of cyclic representations. 
We also digress briefly to show how our proof of Theorem~\ref{mainthm:tempered} gives a new proof of the Collins--Male theorem. In Section~\ref{sec:Haa} we recall Haagerup's classic characterization of tempered positive definite functions on free groups, and derive some other consequences of the specific estimates that appear in Haagerup's work. Section~\ref{sec:more-on-hann} looks more closely at Seward expansions of the functional $\hann$: the main results are combinatorial properties of the various subsets of $\G$ that appear in that expansion. Finally, Section~\ref{sec:proof-of-D} combines these preliminaries into the full proof of Theorem~\ref{mainthm:tempered}. Once the proof is complete, we offer some reflections on it in Section~\ref{sec:tempered-reflections}. The remaining sections of this chapter concern some consequences of Theorem~\ref{mainthm:tempered}. The main ones are a large deviations principle for the random set-sequences $\ol{\S_\bullet(\pi_n)}$ in the Vietoris topology of $\cal{Z}\G$ (Theorem~\ref{thm:exp-typ-2}), and some results on the structure of representations with finite zeroth-order entropy in Section~\ref{sec:MF}. \section{Reduction to the cyclic case}\label{sec:cyclic} The heart of Theorem~\ref{mainthm:tempered} is the following. \begin{thm}\label{thm:tempered-cyclic} If $\phi \in \S_1(\G)$ and $\phi$ is not tempered, then $\rmh^0(\phi) < 0$. \end{thm} \begin{proof}[Proof of Theorem~\ref{mainthm:tempered} from Theorem~\ref{thm:tempered-cyclic}] The easy implication already appeared in Corollary~\ref{cor:easy-tempered}, so it remains to prove the reverse. Thus, suppose that $\pi$ is any separable representation satisfying $\rmh^0(\pi) = 0$. Writing it as a direct sum of cyclic representations, these must also all have $\rmh^0$ equal to $0$ by Proposition~\ref{prop:additivity}. Therefore Theorem~\ref{thm:tempered-cyclic} shows that they are all approximately contained in the regular representation. Now the same follows for their sum $\pi$, because $\l \cong_{\rm{a}} \l^{(\infty)}$ (this is a standard result, but we recall its proof in Example~\ref{ex:left-reg-no-stiff} below). \end{proof} It remains to prove Theorem~\ref{thm:tempered-cyclic}. Let $\tau$ be the regular tracial state on $C^\ast \G$. If $\phi \in \S_1(\G)$ is not tempered, then we must show that \begin{equation}\label{eq:supneg} \lim_{t \downarrow 0}|\hann(t\phi + (1-t)\tau)| > 0. \end{equation} This becomes our main goal until the end of Section~\ref{sec:proof-of-D}. However, before leaving this section, let us return to the Collins--Male theorem. Although we have mentioned this theorem during several earlier passages, none of the results proved so far depend on it, so we may use them freely to give a new proof of it. \begin{proof}[New proof of the Collins--Male theorem from Theorem~\ref{mainthm:tempered}] Let $a \in C^\ast \G$, and for $c > 0$ let $S_c$ and $I(c)$ be the set and exponent introduced in Proposition~\ref{prop:norm-upper-tails}, respectively. In view of that proposition, it suffices to prove that $I(c) < 0$ for every $c > 0$. However, Theorem~\ref{mainthm:tempered} shows that $\rmh^0(\phi) < 0$ for every individual $\phi \in S_c$, and now the negativity of $I(c)$ follows because $S_c$ is compact and $\rmh^0$ is upper semicontinuous (Lemma~\ref{lem:usc}). 
\end{proof} Having re-proved the Collins--Male theorem, I expect one can quickly give a new proof of the Haagerup--Thorbj\o rnsen theorem by reversing the spectral-calculus steps in Collins and Male's original proof in~\cite{ColMal14}. But I have not checked this carefully.. \section{Some preliminaries on temperedness}\label{sec:Haa} For a free group $\G$, Haagerup's paper~\cite{Haa79} introduced some fundamental $\ell^2$-norm estimates for convolutions in $\bbC[\G]$, and used them to deduce structural results about the reduced C$\sp*$-algebra and Fourier algebras of $\G$ and also a characterization of its tempered positive definite functions. That characterization is the basic ingredient for the work of this chapter, along with some other consequences of Haagerup's original inequalities. This background is also recounted in~\cite[Section 2.1]{FigTalPicHAFG}. As in Section~\ref{sec:group-geom}, we write $B_n$ and $S_n$ for the closed ball and sphere of radius $n$ around $e$ in the left Cayley graph of $\G$, respectively. Haagerup's characterization is the following. \begin{prop}[{\cite[Theorem 3.1]{Haa79}}]\label{prop:Haa-temp} If $\phi \in \S_1(\G)$, then $\phi$ is tempered if and only if \[\sqrt{\sum_{g \in S_n}|\phi(g)|^2} = e^{o(n)} \quad \hbox{as}\ n\to\infty.\] \qed \end{prop} Recall from Example~\ref{ex:tempered} that, for the left regular representation of a countable group, approximate association and weak association agree, and so one can define `tempered' using either. \begin{rmk} Proposition~\ref{prop:Haa-temp} has been generalized to groups with length functions that satisfy `property RD', which has also found other applications~\cite{Jol90,JolVal91}. n \end{rmk} \begin{prob} Proposition~\ref{prop:Haa-temp} characterizes the temperedness of $\phi$ in terms of a growth condition on the values it takes. Is there a tractable characterization of temperedness in terms of generalized Verblunsky coefficients instead? Such a characterization could possibly make our work in this chapter much easier. \end{prob} In the rest of this chapter, we write $\|\cdot\|_2$ for the norm of $\ell^2(\G)$. Thus, the left-hand side of the condition in Proposition~\ref{prop:Haa-temp} is equal to $\|\phi 1_{S_n}\|_2$. Haagerup's original proof gives more: if $\phi$ is tempered then in fact \[\|\phi 1_{S_n}\|_2 = O(n) \quad\hbox{as}\ n\to\infty.\] Thus, for members of $\S_1(\G)$, there is a `gap' in the possible growth-types of these sums over spheres between linear (if tempered) and roughly exponential (if not). In the present work we need only the cruder characterization via subexponential growth as in Proposition~\ref{prop:Haa-temp}. This is the crucial ingredient for the rest of this chapter. \begin{rmk} n \end{rmk} Haagerup's proof of Proposition~\ref{prop:Haa-temp} uses the following inequality. \begin{lem}[{\cite[Lemma 1.3]{Haa79}}]\label{lem:Haa-basic} If $a$ and $b$ are elements of $\bbC[\G]$ that are supported on $S_n$ and $S_m$, respectively, then \[\|(a\ast b) 1_{S_k}\|_2 \le \|a\|_2\|b\|_2\] when \[|n-m| \le k \le n+m \quad \hbox{and}\quad k = n+m \mod 2.\] The left-hand side is zero for all other $k$. \qed \end{lem} We need this inequality itself via the following consequence. \begin{cor}\label{cor:Haa-basic1} If $a$ and $b$ are elements of $\bbC[\G]$ that are supported on $B_n$ and $B_m$, respectively, then \[\|(a\ast b)1_{S_k}\|_2 \le \left\{\begin{array}{ll} \|a\|_2\|b\|_2 &\quad k = 0\\ (n+m+1-k)^2\|a\|_2\|b\|_2 &\quad 1 \le k \le n+m\\ 0 &\quad k > n+m. 
\end{array}\right.\] \end{cor} \begin{proof} First, $S_0 = \{e\}$, and \[(a\ast b)_e = \sum_h a_h b_{h^{-1}}.\] This has absolute value at most $\|a\|_2\|b\|_2$ by the CBS inequality. Now suppose that $k\ge 1$. Decomposing over spheres of different radii, the triangle inequality gives \[ \|(a\ast b) 1_{S_k}\|_2 \le \sum_{i=0}^n\sum_{j=0}^m \|((a 1_{S_j})\ast (b 1_{S_i}))1_{S_k}\|_2.\] By Lemma~\ref{lem:Haa-basic}, for each $i$ and $j$ the summand here is zero unless \[i+j \ge k \quad \hbox{and} \quad |i-j|\le k\] (ignoring the additional parity constraint). The number of terms that satisfy these inequalities is bounded above by $(n+m+1-k)^2$, and for these terms Lemma~\ref{lem:Haa-basic} gives the upper bound $\|a1_{S_i}\|_2\|b1_{S_j}\|_2$, which is at most $\|a\|_2\|b\|_2$. \end{proof} \begin{rmk} n \end{rmk} Lemma~\ref{lem:Haa-basic} implies estimates for inner products in the GNS space of a given positive definite function. We present two versions of these in Lemma~\ref{lem:Haa-basic2} and Corollary~\ref{cor:Haa-basic2} below, but we prepare some more notation first. Let $\phi \in \S_1(\G)$. During the proof of Theorem~\ref{mainthm:tempered}, the norms around spheres $\|\phi 1_{S_n}\|_2$ appear both because of Proposition~\ref{prop:Haa-temp} and also as a result of another calculation inside the proof of Proposition~\ref{prop:hann-aux} below. For technical reasons, spheres of even and odd radii occur slightly differently in the proof, and this makes it convenient to collect spheres into consecutive pairs. Thus, to lighten notation, let us write \begin{equation}\label{eq:norms-on-spheres} w(n) := \|\phi 1_{S_{2n-1}}\|_2 + \|\phi 1_{S_{2n}}\|_2 \qquad (n \ge 1). \end{equation} Now suppose that $\phi$ is associated to its GNS representation $\pi$ by the cyclic vector $z$, and let $H := H_\pi$. In addition, define \begin{equation}\label{eq:L} L:\bbC[\G]\to H:a\mapsto \pi(a)z. \end{equation} The relationship between $\phi$, $\pi$, $z$ and $L$ given by the GNS construction is conveniently expressed as \begin{equation}\label{eq:phi-dfn} \langle La,Lb\rangle = \sum_{g,h} a_g\ol{b_h}\phi(h^{-1}g) = \phi(b^\ast \ast a) \qquad (a,b \in \bbC[\G]). \end{equation} \begin{lem}\label{lem:Haa-basic2} Let $a$ and $b$ be elements of $\bbC[\G]$, and suppose they are supported on $gB_{2n-i}$ and $gB_i$ respectively for some $g \in \G$ and some integers $n \ge 1$ and $0 \le i \le 2n$. Then \[|\langle La,Lb\rangle| \le W(n) \|a\|_2\|b\|_2,\] where \begin{equation}\label{eq:more-norms-on-spheres} W(n) := 1 + 4\sum_{\ell=1}^n (n+1-\ell)^2w(\ell) \qquad (n\ge 1) \end{equation} with $w(\ell)$ as in~\eqref{eq:norms-on-spheres}. \end{lem} \begin{proof} First, since \[\langle La,Lb\rangle = \langle \pi(g)^{-1}La,\pi(g)^{-1}Lb\rangle = \langle L(\delta_{g^{-1}}\ast a),L(\delta_{g^{-1}}\ast b)\rangle,\] we can assume without loss of generality that $g = e$. Applying the triangle inequality and then the CBS inequality to~\eqref{eq:phi-dfn} gives \[ |\langle La,Lb\rangle| = |\phi(b^\ast \ast a)| \le \sum_{k=0}^\infty |\phi((b^\ast\ast a)1_{S_k})| \le \sum_{k=0}^\infty \|(b^\ast\ast a)1_{S_k}\|_2\|\phi 1_{S_k}\|_2.\] Substituting the bound from Corollary~\ref{cor:Haa-basic1}, and noting that $\|\phi 1_{S_0}\|_2 = |\phi(e)| = 1$, we deduce that \begin{equation}\label{eq:inner-prod-bound} |\langle La,Lb\rangle| \le \Big(1 + \sum_{k=1}^{2n}(2n+1-k)^2\|\phi 1_{S_k}\|_2\Big) \|a\|_2\|b\|_2. 
\end{equation} To finish, we bound the coefficient in parentheses in~\eqref{eq:inner-prod-bound} like this: \begin{align*} &1 + \sum_{k=1}^{2n}(2n+1-k)^2\|\phi 1_{S_k}\|_2 \\ &= 1 + \sum_{\ell = 1}^n\big((2n+1 - (2\ell-1))^2\|\phi 1_{S_{2\ell-1}}\|_2 + (2n+1 - 2\ell)^2\|\phi 1_{S_{2\ell}}\|_2\big)\\ &\le 1 + 4\sum_{\ell = 1}^n(n+1 - \ell)^2(\|\phi 1_{S_{2\ell-1}}\|_2 + \|\phi 1_{S_{2\ell}}\|_2). \end{align*} This equals $W(n)$ from~\eqref{eq:more-norms-on-spheres}. \end{proof} We apply Lemma~\ref{lem:Haa-basic2} later via the following asymmetrical corollary of it. \begin{cor}\label{cor:Haa-basic2} Assume that $\phi \ge c\tau$ in the ordering of positive functionals for some $c > 0$. For some $g \in \G$ and some integers $n \ge 1$ and $0 \le i \le 2n$, let $x$ be an element of $L(\bbC^{gB_{2n-i}})$, and let $b$ be an element of $\bbC[\G]$ supported on $gB_i$. Then \[|\langle x,Lb\rangle| \le \frac{W(n)}{\sqrt{c}} \|x\|\|b\|_2,\] where $W(n)$ is as in~\eqref{eq:more-norms-on-spheres}. \end{cor} \begin{proof} Suppose that $x = La$ with $a \in \bbC^{gB_{2n-i}}$. Our assumption on $\phi$ implies that \[\|x\|^2 = \|La\|^2 = \phi(a^\ast \ast a) \ge c\tau(a^\ast \ast a) = c\|a\|_2^2.\] Use this to bound the factor of $\|a\|_2$ in the conclusion of Lemma~\ref{lem:Haa-basic2}. \end{proof} In our proof of Theorem~\ref{mainthm:tempered}, a key role is played by the fact that, if $\phi$ is not tempered, then the quantities $w(n)$ from~\eqref{eq:norms-on-spheres} and $W(n)$ from~\eqref{eq:more-norms-on-spheres} are comparable in size for infinitely many values of $n$. This is Corollary~\ref{cor:Gronwall-in-action} below. We deduce it from the exponential growth promised by Haagerup's characterization (Proposition~\ref{prop:Haa-temp}) and a variant of Gronwall's lemma. (Gronwall's original lemma is for functions of a continuous variable, and is a basic tool for studying partial differential equations: see, for instance,~\cite[Appendix B.2.j]{Evans--PDE}. It has many discrete variants in the literature, but I have not found one that covers precisely what we need.) \begin{lem}[Discrete Gronwall-type lemma]\label{lem:Gronwall} For every $\eps > 0$ there exists $\delta > 0$ such that the following holds. Let $v(1)$, $v(2)$, \dots be non-negative real numbers, and define \[V(n) := 1 + 4\sum_{\ell=1}^{n-1}(n+1-\ell)^2v(\ell) \qquad (n\ge 1)\] (beware: this sum stops at $n-1$, whereas the sum in~\eqref{eq:more-norms-on-spheres} stops at $n$). Let $A\ge 1$, and assume further that \begin{equation}\label{eq:conds-on-v} [ \quad \hbox{either} \quad v(n) \le Ae^{\eps n} \quad \hbox{or} \quad v(n) \le \delta V(n) \quad ] \end{equation} for every $n\ge 1$. Then in fact $v(n) \le A e^{\eps n}$ for every $n \ge 1$. \end{lem} \begin{proof} In anticipation of the rest of the proof, let \begin{equation}\label{eq:assumptions-on-C} \delta := \Big(1 + \frac{4e^{2\eps}(1+e^{-\eps})}{(1 - e^{-\eps})^3}\Big)^{-1}. \end{equation} The proof is by induction on $n$. To begin, observe from the definitions that $\delta V(1) = \delta < 1$, whereas $Ae^{\eps} > 1$, so the second inequality in~\eqref{eq:conds-on-v} is stronger than the first when $n=1$. Now suppose that the desired bound is known up to some value of $n$, and consider assumption~\eqref{eq:conds-on-v} at the argument $n+1$. If the first inequality in~\eqref{eq:conds-on-v} holds at $n+1$, then there is nothing to prove. 
If the second inequality in~\eqref{eq:conds-on-v} holds at $n+1$, then we combine it with the inductive hypothesis and some crude upper bounds like this: \begin{align*} v(n+1) &\le \delta + 4\delta A\sum_{\ell=1}^{n}(n+2-\ell)^2e^{\eps\ell} \\ &\le \delta + 4\delta Ae^{2\eps+\eps n}\sum_{k=1}^\infty k^2e^{-\eps k}\\ &\le \delta A\Big(1 + 4e^\eps\sum_{k=1}^\infty k^2e^{-\eps k}\Big)e^{\eps(n+1)}. \end{align*} Using standard formulas for the term-by-term derivatives of a geometric series, this last expression is at most \[\delta A\Big(1 + \frac{4e^{2\eps }(1+e^{-\eps})}{(1 - e^{-\eps})^3}\Big)e^{\eps(n+1)}.\] By our choice in~\eqref{eq:assumptions-on-C}, this is at most $Ae^{\eps (n+1)}$, so the induction continues. \end{proof} \begin{cor}\label{cor:Gronwall-in-action} Let $\phi \in \S_1(\G)$ and define $w(n)$ and $W(n)$ as in~\eqref{eq:norms-on-spheres} and~\eqref{eq:more-norms-on-spheres}, respectively. If $\phi$ is not tempered then there are positive integers $n_1 < n_2 < \cdots$ such that \[\log w(n_i) \gtrsim n_i \quad \hbox{and} \quad w(n_i) \simeq W(n_i)\] as $i\to\infty$ (where the implied constants depend on $\phi$). \end{cor} \begin{proof} We assume the conclusion fails for some $\phi$ and prove that $\phi$ is tempered. Let $\eps > 0$, and let $\delta > 0$ be given by Lemma~\ref{lem:Gronwall} for this choice of $\eps$. By our assumption on $\phi$, we must have \[\hbox{either} \quad w(n) \le e^{\eps n} \quad \hbox{or} \quad w(n) \le \frac{\delta}{1+4\delta} W(n)\] for all but finitely many $n$. Hence, for some $A \ge 1$, we have \[\hbox{either} \quad w(n) \le Ae^{\eps n} \quad \hbox{or} \quad w(n) \le \frac{\delta}{1+4\delta} W(n)\] for all $n\ge 1$. Also, the second of these possibilities re-arranges into \[w(n) \le \delta \Big(1 + 4\sum_{\ell=1}^{n-1}(n+1-\ell)^2w(\ell)\Big)\] (that is, we re-arrange to remove the term with $\ell = n$ from~\eqref{eq:more-norms-on-spheres}). We now have the conditions to apply Lemma~\ref{lem:Gronwall} with $v(n) := w(n)$. It shows that $w(n) \le Ae^{\eps n}$ for all $n$. Since $\eps$ is arbitrary, this implies that $\phi$ is tempered, by Proposition~\ref{prop:Haa-temp}. \end{proof} \section{Manipulations with Seward series}\label{sec:more-on-hann} Let $\phi \in \S_1(\G)$, and let it be associated to a representation $\pi$ by a cyclic vector $z$. Pick a total order on $S\cup S^{-1}$, denote it by `$<$', and extend it to the resulting length-lexicographic ordering of $\G$, as described in Section~\ref{sec:group-geom} (so $e$ is the first element in this ordering). Having fixed this ordering, write $P(g)$ for the set of predecessors of any group element $g$. This includes setting $P(e) := \emptyset$. This ordering determines a Seward expansion of $\hann(\phi)$ as in Corollary~\ref{cor:Sew}. Since $\phi(e) = 1$, the first term of that expansion vanishes. By translation invariance, the remaining terms may be written using the quantities \[\rmH_\phi(e\mid (s_n\cdots s_1)^{-1}P(s_n\cdots s_1))\] for all reduced words $s_n\cdots s_1$. To express these quantities more succinctly, let \begin{equation}\label{eq:P-and-Q} Q(g) := g^{-1}P(g) \qquad (g \in \G). \end{equation} The estimates that appear during the proof of Theorem~\ref{thm:tempered-cyclic} depend on some basic combinatorial properties of these sets. First, since $P(g)$ does not contain $g$ itself, $Q(g)$ does not contain $e$.
Also, since \[B_{|g|-1} \subset P(g) \subset B_{|g|},\] we have \[g^{-1}B_{|g|-1} \subset Q(g) \subset g^{-1}B_{|g|}.\] Beware: the left translate of a ball by $g^{-1}$ is still a ball in the \emph{right} Cayley graph, but not in the \emph{left} Cayley graph, which is the one that concerns us here. Inversion switches the two Cayley graphs, so if desired one could visualize $Q(g)^{-1}$ as roughly a closed ball of radius $|g|$ around $g$, except that it does not include the whole interior boundary of that ball. The next three results describe relations among the sets $Q(g)$. \begin{lem}\label{lem:nested} Every reduced word $s_{n+1}\cdots s_1$ with $n\ge 0$ satisfies \begin{equation}\label{eq:nested} Q(s_n\cdots s_1) \subset Q(s_{n+1}\cdots s_1). \end{equation} \end{lem} This inclusion is also central to the manipulations in~\cite{Seward--freestab}: see, for instance, the paragraph preceding~\cite[Theorem 3.5]{Seward--freestab}. \begin{proof} Translating on the left by $s_{n+1}\cdots s_1$, it is equivalent to prove that \[s_{n+1}P(s_n\dots s_1)\subset P(s_{n+1}\cdots s_1).\] This holds by the definition of the length-lexicographic ordering. \end{proof} With Lemma~\ref{lem:nested} in mind, we also define \[C(s_n\cdots s_1) := Q(s_n\cdots s_1)\setminus Q(s_{n-1}\cdots s_1)\] for any nonempty reduced word $s_n\cdots s_1$. We refer to $C(s_n\cdots s_1)$ as the \textbf{crescent of $s_n\cdots s_1$}, motivated by the description of it in the next lemma. To motivate our interest in these, notice that the conditional chain rule (Proposition~\ref{prop:cond-chain}) lets us re-arrange the Seward expansion from Corollary~\ref{cor:Sew} into the following \begin{equation}\label{eq:Sew-cond-mut-inf} \hann(\phi) = -\sum_{n=1}^\infty \sum_{s_n\cdots s_1 \in S_n} \rmI_\phi(e\,;\,C(s_n\cdots s_1)\mid Q(s_{n-1}\cdots s_1)). \end{equation} In the end we do not use this formula directly in our work below; instead, the original expression in Corollary~\ref{cor:Sew} is manipulated into a lower bound with a somewhat simpler form, and then crescents enter into our analysis of that quantity. \begin{lem}\label{lem:Q-diff} Let $n \ge 1$, let $s_n \cdots s_1$ be a reduced word, and let $C := C(s_n\cdots s_1)$. Then \[C \subset S_{2n-2}\cup S_{2n-1}\cup S_{2n}.\] Moreover: \begin{itemize} \item $C\cap S_{2n-2}$ is empty if either $n=1$ or $s_{n-1} > s_n^{-1}$, and if $s_{n-1} < s_n^{-1}$ then \[C\cap S_{2n-2} = \big\{t_{2n-2}\cdots t_1:\ \hbox{$s_1^{-1}\cdots s_n^{-1}$ is a prefix of $t_{2n-2}\cdots t_1$}\big\};\] \item we have \[C\cap S_{2n-1} = \big\{t_{2n-1}\cdots t_1:\ \hbox{$s_1^{-1}\cdots s_n^{-1}$ is a prefix of $t_{2n-1}\cdots t_1$}\big\};\] \item we have \[C\cap S_{2n} = \big\{t_{2n}\cdots t_1:\ \hbox{$s_1^{-1}\cdots s_n^{-1}$ is a prefix of $t_{2n}\cdots t_1$ and $t_n < s_n$}\big\}.\] \end{itemize} \end{lem} \begin{proof} We first describe the set \[P(s_n\cdots s_1)\setminus s_nP(s_{n-1}\cdots s_1) = \big\{h \in P(s_n\cdots s_1):\ s_n^{-1}h \not\in P(s_{n-1}\cdots s_1)\big\}.\] Suppose that $h$ lies in this set and has reduced word $t_m\cdots t_1$. In the first place this implies $m \le n$. It also implies $m\ge n-2$, for otherwise $s_n^{-1}h$ would have length at most $n-2$ and hence lie in $P(s_{n-1}\cdots s_1)$. Now consider the three possible values for $m$ in turn: \begin{itemize} \item If $m = n-2$, then since $s_n^{-1}h \not\in P(s_{n-1}\cdots s_1)$ we must have that $s_n \ne t_{n-2}$ and $s_n^{-1} > s_{n-1}$, and these conditions are also sufficient. 
\item If $m = n-1$, then since $s_n^{-1}h \not\in P(s_{n-1}\cdots s_1)$ we must have that $s_n \ne t_{n-1}$, and this is also sufficient, for then $|s_n^{-1}h| = n$. \item If $m = n$, then since $s_n^{-1}h \not\in P(s_{n-1}\cdots s_1)$ we must have either $s_n \ne t_n$, or $s_n = t_n$ but $t_{n-1} > s_{n-1}$, and either of these possibilities is sufficient. \end{itemize} Translating by $s_1^{-1}\cdots s_n^{-1}$, these cases turn into the three intersections listed. \end{proof} \begin{cor}\label{cor:cover-disjoint} As $g$ ranges over $\G\setminus \{e\}$, the crescents $C(g)$ are pairwise disjoint and their union is $\G\setminus \{e\}$. \end{cor} \begin{proof} None of the sets $Q(g)$ contains $e$, so nor do any of the crescents. On the other hand, if $t_m\cdots t_1$ is a nonempty reduced word in $S\cup S^{-1}$, then it lies in the crescent $C(s_n\cdots s_1)$ if and only if it lies in one of the three sets listed in Lemma~\ref{lem:Q-diff}. There is exactly one choice of $s_n\cdots s_1$ for which this is the case, determined as follows: \begin{itemize} \item if $m = 2p-1$, then $n=p$ and $s_n\cdots s_1 = (t_{2p-1}\cdots t_p)^{-1}$; \item if $m = 2p$ and $t_p < t_{p+1}^{-1}$, then $n = p$ and $s_n\cdots s_1 = (t_{2p}\cdots t_{p+1})^{-1}$; \item if $m = 2p$ and $t_p > t_{p+1}^{-1}$, then $n = p+1$ and $s_n\cdots s_1 = (t_{2p}\cdots t_p)^{-1}$. \end{itemize} \end{proof} Finally, for $n\ge 1$, let us write \begin{equation}\label{eq:crescent-union} R_n := \bigcup_{g \in S_n}C(g). \end{equation} By Lemma~\ref{lem:Q-diff}, this is the union of $S_{2n-1}$, some of $S_{2n-2}$, and some of $S_{2n}$. Its intersections with $S_{2n-2}$ and $S_{2n}$ depend on the original choice of ordering of $S_1$. In order to make contact with the estimates of the previous subsection, we need to be able to choose that ordering so that $R_n$ sees `enough' of those spheres, in the sense of the next corollary. \begin{cor}\label{cor:remove-Rn} Let $\phi \in \S_1(\G)$. For every $n\ge 1$, there is a choice of ordering of $S_1$ (which may depend on $\phi$) such that, if $R_n$ is the resulting union of crescents in~\eqref{eq:crescent-union}, then \begin{equation}\label{eq:R-big-enough} \|\phi 1_{R_n}\|_2 \ge \frac{w(n)}{2}, \end{equation} where $w(n)$ is as in~\eqref{eq:norms-on-spheres}. \end{cor} \begin{proof} Let $<$ be a choice of ordering of $S_1$, and let $<'$ be its reverse: if $s,t \in S_1$ are distinct, then \[s < t \quad \Leftrightarrow \quad t <' s.\] Let $P'$, $Q'$, $C'$, and $R'$ denote the analogs of $P$, $Q$, $C$ and $R$ constructed from $<'$ rather than $<$. Now observe that $R_n$ always contains $S_{2n-1}$, and the third set in Lemma~\ref{lem:Q-diff} gives that \[R_n \cap S_{2n} = \{t_{2n}\cdots t_1 \in S_{2n}:\ t_n < t_{n+1}^{-1}\}= S_{2n}\setminus R_n'.\] So $R_n \cup R_n'$ contains $S_{2n-1}\cup S_{2n}$, and therefore \[\|\phi 1_{R_n}\|_2^2 + \|\phi 1_{R'_n}\|_2^2 \ge \|\phi 1_{S_{2n-1}\cup S_{2n}}\|_2^2 = \|\phi 1_{S_{2n-1}}\|_2^2 + \|\phi 1_{S_{2n}}\|_2^2.\] Therefore, by the inequality of arithmetic and geometric means, at least one of the sets $R_n$ and $R_n'$ must satisfy~\eqref{eq:R-big-enough}. \end{proof} Beware: the ordering found by this corollary depends on $n$. When we apply it during the proof of Proposition~\ref{prop:hann-aux} later, we need it only for one value of $n$ at a time.
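Before moving on, here is a small concrete illustration of the definitions above (it is not needed in the sequel). Suppose that $\G$ is freely generated by $S = \{a,b\}$ and that $S_1$ is ordered by $a < a^{-1} < b < b^{-1}$. The predecessor sets of the four elements of $S_1$ are then \[P(a) = \{e\}, \qquad P(a^{-1}) = \{e,a\}, \qquad P(b) = \{e,a,a^{-1}\}, \qquad P(b^{-1}) = \{e,a,a^{-1},b\},\] so the corresponding sets from~\eqref{eq:P-and-Q}, which for words of length one coincide with the crescents, are \[C(a) = Q(a) = \{a^{-1}\}, \qquad C(a^{-1}) = Q(a^{-1}) = \{a,a^2\},\] \[C(b) = Q(b) = \{b^{-1},b^{-1}a,b^{-1}a^{-1}\}, \qquad C(b^{-1}) = Q(b^{-1}) = \{b,ba,ba^{-1},b^2\}.\] In agreement with Lemma~\ref{lem:Q-diff}, their union $R_1$ contains all of $S_1$ together with six of the twelve elements of $S_2$, and the reversed ordering of $S_1$ picks out the complementary six elements of $S_2$, as in the proof of Corollary~\ref{cor:remove-Rn}.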
\section{Completed proof of Theorem~\ref{mainthm:tempered}}\label{sec:proof-of-D} We have shown in Section~\ref{sec:cyclic} that Theorem~\ref{mainthm:tempered} is implied by Theorem~\ref{thm:tempered-cyclic}, and that this in turn is implied by~\eqref{eq:supneg}. To prove~\eqref{eq:supneg}, we need a lower bound on $\hann$ in terms of quantities that are easier to estimate in terms of the individual values taken by $\phi$. The next proposition accomplishes this (under a mild extra assumption) in terms of the sequences $w(n)$ from~\eqref{eq:norms-on-spheres} and $W(n)$ from~\eqref{eq:more-norms-on-spheres}. \begin{prop}\label{prop:hann-aux} If $\phi \ge c\tau$ in the ordering of positive functionals for some $c > 0$, then \begin{equation}\label{eq:hann-aux} |\hann(\phi)| \ge \sup_{n\ge 1}\frac{c w(n)^2}{8n W(n)^2}. \end{equation} \end{prop} Before starting the proof of Proposition~\ref{prop:hann-aux}, we show how it completes the main task of this chapter. \begin{proof}[Proof of Theorem~\ref{thm:tempered-cyclic} given Proposition~\ref{prop:hann-aux}] Suppose that $\phi$ is not tempered, and consider the mollified functions \[\phi_t := t\phi + (1-t)\tau \qquad (0 \le t \le 1).\] Let $w_t(n)$ and $W_t(n)$ be the analogs of $w(n)$ and $W(n)$ defined with $\phi_t$ in place of $\phi$. The definition of $\phi_t$ gives \[w_t(n) = \sqrt{\sum_{g \in S_{2n-1}}|\phi_t(g)|^2} + \sqrt{\sum_{g \in S_{2n}}|\phi_t(g)|^2} = tw(n)\qquad (n\ge 1)\] and \[W_t(n) = 1 + 4\sum_{\ell=1}^n(n+1-\ell)^2 w_t(\ell) = 1 - t + tW(n) \qquad (n\ge 1).\] If $t\le 1/2$, then $\phi_t \ge \tau/2$, and then Proposition~\ref{prop:hann-aux} and the calculations above give \begin{equation}\label{eq:hann-final-LB} |\hann(\phi_t)| \ge \sup_{n\ge 1} \frac{w_t(n)^2}{16nW_t(n)^2} \ge \sup_{n\ge 1} \frac{t^2w(n)^2}{16n(1 + tW(n))^2}. \end{equation} Since $\phi$ is not tempered, by Corollary~\ref{cor:Gronwall-in-action} we can choose positive integers $n_1 < n_2 < \dots$ such that \[\log w(n_i) \gtrsim n_i \quad \hbox{and} \quad w(n_i) \simeq W(n_i).\] Let $t_i := \min\{1/w(n_i),1/2\}$ for each $i$. This is a sequence in $(0,1/2]$ that converges to $0$, and our choices imply that \[t_iw(n_i) \simeq t_iW(n_i) \simeq 1 \quad \hbox{and} \quad n_i \simeq |\log t_i|.\] Therefore~\eqref{eq:hann-final-LB} gives \[ |\hann(\phi_{t_i})| \ge \frac{t_i^2w(n_i)^2}{16n_i(1 + t_iW(n_i))^2} \simeq \frac{1}{|\log t_i|}.\] In particular, it follows that \begin{equation}\label{eq:not-fast-enough} \limsup_{t\downarrow 0}\frac{1}{t}|\hann(\phi_t)| = \infty. \end{equation} However, by part (c) of Theorem~\ref{thm:exp-typ}, we know that \[|\hann(\phi_t)| \le |\rmh^0(\phi) + \log (1-t)| = |\rmh^0(\phi)| + O(t) \qquad \hbox{as}\ t \downarrow 0.\] This is incompatible with~\eqref{eq:not-fast-enough} if $\rmh^0(\phi) = 0$, so $\rmh^0(\phi) < 0$, as required. \end{proof} \begin{rmk}\label{rmk:weakness} As the proof above shows, Proposition~\ref{prop:hann-aux} is only just strong enough to imply~\eqref{eq:supneg}. Its direct application gives the weaker estimate~\eqref{eq:not-fast-enough}, and only the intervention of Theorem~\ref{thm:exp-typ} enables us to complete the proof. \end{rmk} We spend the rest of this section proving Proposition~\ref{prop:hann-aux}. We deduce it from another, more technical inequality, which we present separately as Lemma~\ref{lem:hann-aux} below. To formulate this, consider again a total ordering `$<$' of $S_1 = S\cup S^{-1}$, and extend it to a length-lexicographic ordering of $\G$.
Assume that $\phi \in \S_1(\G)$ is associated to the representation $\pi$ by the cyclic vector $z$, let $H := H_\pi$, and let $L$ be the linear map from $\bbC[\G]$ to $H$ defined in~\eqref{eq:L}. For each group element $g$, let \[M(g) := L[\bbC^{Q(g)}] = \rm{span}(\pi(Q(g))z),\] where $Q(g)$ is as in~\eqref{eq:P-and-Q}. In particular, $M(e) = \{0\}$. Let $P_{M(g)}$ be the orthogonal projection from $H$ onto $M(g)$. For each nonempty reduced word $s_n\cdots s_1$, these subspaces satisfy \[M(s_n\cdots s_1) \supset M(s_{n-1}\cdots s_1),\] and we can consider the difference of the orthogonal projections of $z$, say \begin{equation}\label{eq:ys} y_{s_n\cdots s_1} := (P_{M(s_n\cdots s_1)} - P_{M(s_{n-1}\cdots s_1)})z. \end{equation} Our more technical inequality concerns the squared lengths of these projections. It relates them to the $\ell^2$-norms of $\phi$ over the unions of crescents $R_n$ defined in~\eqref{eq:crescent-union}. \begin{lem}\label{lem:hann-aux} If $\phi \ge c\tau$, then the vectors above satisfy \begin{equation}\label{eq:pre-hann-aux-0} \sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|y_{s_i\cdots s_1}\|_2^2 \ge \frac{c}{n W(n)^2}\|\phi 1_{R_n}\|_2^2 \end{equation} for every $n\ge 1$. \end{lem} \begin{proof}[Proof of Proposition~\ref{prop:hann-aux} from Lemma~\ref{lem:hann-aux}] Using the same length-lexicographic ordering as above, consider the resulting Seward expansion for $\hann(\phi)$ from Corollary~\ref{cor:Sew}. By translation-invariance, it is a combination of the quantities \begin{align*} &\rmH_\phi(s_{n-1}\cdots s_1\mid P(s_{n-1}\cdots s_1)) - \rmH_\phi(s_n\cdots s_1\mid P(s_n\cdots s_1))\\ &= \rmH_\phi(e\mid Q(s_{n-1}\cdots s_1)) - \rmH_\phi(e \mid Q(s_n\cdots s_1))\\ &= \log \|z - P_{M(s_{n-1}\cdots s_1)}z\| - \log\|z - P_{M(s_n\cdots s_1)}z\| \end{align*} (recall Example~\ref{ex:1D-cond-ent} for the second equality here). Since $z$ is a unit vector, the Pythagorean identity turns this quantity into \[\frac{1}{2}\big(\log (1 - \|P_{M(s_{n-1}\cdots s_1)}z\|^2) - \log(1 - \|P_{M(s_n\cdots s_1)}z\|^2)\big).\] Since $\log$ has slope greater than $1$ on $(0,1)$, this difference is at least \[\frac{1}{2}\big(\|P_{M(s_n\cdots s_1)}z\|^2 - \|P_{M(s_{n-1}\cdots s_1)}z\|^2\big) = \frac{1}{2}\|y_{s_n\cdots s_1}\|^2,\] where $y_{s_n\cdots s_1}$ is as in~\eqref{eq:ys}, using the Pythagorean identity again for the last equality. Inserting this lower bound term-by-term into the Seward expansion, we have shown that \begin{equation}\label{eq:pre-hann-aux} |\hann(\phi)| \ge \frac{1}{2}\sum_{n=1}^\infty\sum_{s_n\cdots s_1 \in S_n}\|y_{s_n\cdots s_1}\|^2. \end{equation} Since the terms on the right-hand side of~\eqref{eq:pre-hann-aux} are non-negative, that inequality combines with Lemma~\ref{lem:hann-aux} to give \[|\hann(\phi)|\ge \sup_{n\ge 1}\frac{c}{2n W(n)^2}\|\phi 1_{R_n}\|_2^2.\] Finally, the left-hand side of this inequality does not depend on our chosen ordering of $S_1$, so we are free to optimize the right-hand side over this ordering. By Corollary~\ref{cor:remove-Rn}, doing so gives the inequality~\eqref{eq:hann-aux}. \end{proof} It remains to prove Lemma~\ref{lem:hann-aux}. This is the most demanding work of this section. In order to relate the summands in~\eqref{eq:pre-hann-aux-0} more directly to $\phi$, we bound each norm $\|y_{s_n\dots s_1}\|$ by picking another element of $\bbC[\G]$ as a `test vector', applying $L$ to it, and then applying Corollary~\ref{cor:Haa-basic2} to the resulting inner products in $H$.
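Concretely, since $y_{s_n\cdots s_1}$ lies in $M(s_n\cdots s_1) = L[\bbC^{Q(s_n\cdots s_1)}]$ and $Q(s_n\cdots s_1) \subset (s_n\cdots s_1)^{-1}B_n$, Corollary~\ref{cor:Haa-basic2} re-arranges into the lower bound \[\|y_{s_n\cdots s_1}\|\cdot \|b\|_2 \ \ge\ \frac{\sqrt{c}}{W(n)}\,\big|\langle y_{s_n\cdots s_1},Lb\rangle\big|\] for any $b \in \bbC[\G]$ whose support is positioned as in that corollary, so a good choice of the test vector $b$ converts control of the inner product on the right into control of $\|y_{s_n\cdots s_1}\|$.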
The choice of the test vectors is the really delicate aspect of the proof. To motivate it, let us first notice that, if we had \[M(s_{n-1}\cdots s_1) \ \perp \ \rm{span}(\pi(C(s_n\cdots s_1))z) = L[\bbC^{C(s_n\cdots s_1)}],\] then $y_{s_n\cdots s_1}$ would equal the projection of $z$ onto $L[\bbC^{C(s_n\cdots s_1)}]$. While this orthogonality usually fails if $\phi \ne \tau$, heuristically we can still imagine that $y_{s_n\cdots s_1}$ is `trying to stay close' to $L[\bbC^{C(s_n\cdots s_1)}]$, so it makes sense to test it against vectors from that subspace. For example, consider the vector \[a_{s_n\cdots s_1} := \ol{\phi} 1_{C(s_n\cdots s_1)}.\] This is supported in $C(s_n\cdots s_1)$, and hence also in $Q(s_n\cdots s_1)$. Intuitively, $La_{s_n\cdots s_1}$ is a `first order' approximation to the projection of $z$ onto $L[\bbC^{C(s_n\cdots s_1)}]$: for each individual $g\in C(s_n\cdots s_1)$, the vector $L(\ol{\phi}(g) 1_{\{g\}})$ equals the projection of $z$ onto the one-dimensional subspace $\bbC\cdot \pi(g)z$, and $La_{s_n\cdots s_1}$ is the sum of these vectors. In addition, the image of $a_{s_n\cdots s_1}$ under $L$ reflects its original length in $\ell^2(\G)$ through a simple special case of~\eqref{eq:phi-dfn}: \begin{equation}\label{eq:phi-dfn-2} \langle La_{s_n\cdots s_1},z\rangle = \sum_{g\in C(s_n\cdots s_1)} \ol{\phi}(g)\phi(g) = \|a_{s_n\cdots s_1}\|_2^2. \end{equation} One could test each $y_{s_n\cdots s_1}$ against $La_{s_n\cdots s_1}$, but I have not been able to complete the proof using this idea alone. To see why, consider that \begin{align}\label{eq:without-telescope} \langle y_{s_n\cdots s_1},La_{s_n\cdots s_1}\rangle &= \langle P_{M(s_n\cdots s_1)}z,La_{s_n\cdots s_1}\rangle - \langle P_{M(s_{n-1}\cdots s_1)}z,La_{s_n\cdots s_1}\rangle \nonumber \\ &= \langle z,La_{s_n\cdots s_1}\rangle - \langle P_{M(s_{n-1}\cdots s_1)}z,La_{s_n\cdots s_1}\rangle \nonumber \\ &= \|a_{s_n\cdots s_1}\|_2^2 - \langle P_{M(s_{n-1}\cdots s_1)}z,La_{s_n\cdots s_1}\rangle. \end{align} (The second of these equalities holds because $a_{s_n\cdots s_1}$ is supported in $Q(s_n\cdots s_1)$ and so $La_{s_n\cdots s_1}$ lies in $M(s_n\cdots s_1)$. The last equality is an application of~\eqref{eq:phi-dfn-2}.) In~\eqref{eq:without-telescope}, the first term is a fairly simple function of $\phi$. But I do not have a separate estimate for the second term which is precise enough to control the \emph{difference} of these terms. Moreover, I expect that in some cases this difference is much smaller than either term separately. As a result, I have not been able to exert enough control over $\|y_{s_n\cdots s_1}\|$ by this approach. To overcome this difficulty, we instead test each individual vector in~\eqref{eq:pre-hann-aux} against a certain \emph{sum} of vectors of the form $a_{s_n\cdots s_1}$, carefully chosen so that the resulting inner products mostly cancel overall. We make this choice in~\eqref{eq:test} below. The remaining sum contains only a much smaller number of inner products of the form $\langle P_{M(s_n\cdots s_1)}z,La_{s_n\cdots s_1}\rangle$, where both indexing words have length $n$, and we can then simplify all of these because of~\eqref{eq:phi-dfn-2}. Here are the precise details. 
\begin{proof}[Proof of Lemma~\ref{lem:hann-aux}] \emph{Step 1: choice of test vectors.}\quad For any $i = 0,1,\dots,n$ and any $s_i\cdots s_1$, let \begin{equation}\label{eq:test} b_{s_i\cdots s_1} := \sum_{\substack{s_{i+1}\cdots s_n \in S_{n-i}\\ s_is_{i+1}\ne e}}a_{s_n\cdots s_1} = \ol{\phi}\cdot 1_{\bigcup \{C(s_n\cdots s_1):\, s_{i+1}\cdots s_n \in S_{n-i},\, s_is_{i+1}\ne e\}}. \end{equation} (Notice that the terms of this sum are indexed by words of length $n$, even if $i$ is smaller than $n$. This is crucial for the cancellation in Step 3 below.) By Corollary~\ref{cor:cover-disjoint}, for any such $i$ we can decompose \[\|\phi 1_{R_n}\|^2_2 = \sum_{s_i\cdots s_1 \in S_i}\|b_{s_i\cdots s_1}\|_2^2.\] Applying this identity and then the CBS inequality, we have \begin{align}\label{eq:first-CBS} &\sqrt{n}\cdot \|\phi 1_{R_n}\|_2\cdot \sqrt{\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|y_{s_i\cdots s_1}\|^2} \nonumber\\ &= \sqrt{\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|b_{s_i\cdots s_1}\|_2^2} \cdot \sqrt{\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|y_{s_i\cdots s_1}\|^2}\nonumber\\ &\ge \sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|b_{s_i\cdots s_1}\|_2\cdot \|y_{s_i\cdots s_1}\|. \end{align} \vspace{7pt} \emph{Step 2: comparing with inner products in $H$.}\quad Fix a word $s_i\cdots s_1 \in S_i$. The vector $y_{s_i\cdots s_1}$ lies in $M(s_i\cdots s_1)$, so it is the image under $L$ of an element of $\bbC[\G]$ that is supported on $Q(s_i\cdots s_1) \subset (s_i\cdots s_1)^{-1}B_i$. On the other hand, the vector $b_{s_i\cdots s_1}$ is itself an element of $\bbC[\G]$, and is supported on the set \begin{multline*}\bigcup_{\substack{s_{i+1}\cdots s_n \in S_{n-i}\\ s_is_{i+1}\ne e}}Q(s_n\cdots s_1) \\ \subset \bigcup_{\substack{s_{i+1}\cdots s_n \in S_{n-i}\\ s_is_{i+1}\ne e}}(s_n\cdots s_1)^{-1}B_n \subset (s_i\cdots s_1)^{-1}B_{2n-i}.\end{multline*} Therefore, by Corollary~\ref{cor:Haa-basic2}, the lower bound reached in~\eqref{eq:first-CBS} is at least \begin{multline*} \frac{\sqrt{c}}{W(n)}\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}|\langle y_{s_i\cdots s_1},Lb_{s_i\cdots s_1}\rangle|\\ = \frac{\sqrt{c}}{W(n)}\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\Big|\sum_{\substack{s_{i+1}\cdots s_n \in S_{n-i}\\ s_is_{i+1}\ne e}}\big\langle (P_{M(s_i\dots s_1)} - P_{M(s_{i-1}\dots s_1)})z,La_{s_n\cdots s_1}\big\rangle\Big|, \end{multline*} where the second line simply unpacks the definitions of $y_{s_i\cdots s_1}$ and $b_{s_i\cdots s_1}$. \vspace{7pt} \emph{Step 3: cancellation.}\quad By the triangle inequality, the expression at the end of Step 2 is greater than or equal to \begin{align*} &\frac{\sqrt{c}}{W(n)}\Big|\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\sum_{\substack{s_{i+1}\cdots s_n \in S_{n-i}\\ s_is_{i+1}\ne e}}\big\langle (P_{M(s_i\dots s_1)} - P_{M(s_{i-1}\dots s_1)})z,La_{s_n\cdots s_1}\big\rangle\Big|\\ &= \frac{\sqrt{c}}{W(n)}\Big|\sum_{s_n\cdots s_1 \in S_n}\Big\langle \sum_{i=1}^n(P_{M(s_i\dots s_1)} - P_{M(s_{i-1}\dots s_1)})z,La_{s_n\cdots s_1}\Big\rangle\Big| \\ &= \frac{\sqrt{c}}{W(n)}\Big|\sum_{s_n\cdots s_1 \in S_n}\langle P_{M(s_n\cdots s_1)}z,La_{s_n\cdots s_1}\rangle\Big|, \end{align*} where the last equality holds because $M(e) = \{0\}$. \vspace{7pt} \emph{Step 4: simplifying and re-arranging.}\quad Since $La_{s_n\cdots s_1}$ lies in $M(s_n\cdots s_1)$, we can omit the orthogonal projections $P_{M(s_n\cdots s_1)}$ from the last line above.
At that point,~\eqref{eq:phi-dfn-2} gives \[\sum_{s_n\cdots s_1 \in S_n}\langle z,La_{s_n\cdots s_1}\rangle = \sum_{s_n\cdots s_1 \in S_n}\|\phi 1_{C(s_n\cdots s_1)}\|_2^2 = \|\phi 1_{R_n}\|_2^2.\] Concatenating the estimates obtained so far, we have shown that \[\sqrt{n}\cdot \|\phi 1_{R_n}\|_2\cdot \sqrt{\sum_{i=1}^n\sum_{s_i\cdots s_1 \in S_i}\|y_{s_i\cdots s_1}\|_2^2} \ge \frac{\sqrt{c}}{W(n)}\|\phi 1_{R_n}\|_2^2.\] Re-arranging and squaring, this becomes~\eqref{eq:pre-hann-aux-0}. \end{proof} \begin{rmk} In Lemma~\ref{lem:hann-aux}, the left-hand side of~\eqref{eq:pre-hann-aux-0} is non-decreasing with $n$, but the same may not be true of the right-hand side. This seems peculiar at first sight, but is natural in view of Steps 1--3 of the proof. These estimate the norms appearing on the left-hand side in terms of inner products against a family of test vectors. For each fixed $n$, the test vector that we use for $y_{s_i\dots s_1}$ depends explicitly on $n$, even when $i < n$, because we choose the test vectors in order to arrive at the telescoping sums at the end of Step 3 of the proof. \end{rmk} \section{Some reflections on the proof}\label{sec:tempered-reflections} To prove Theorem~\ref{thm:tempered-cyclic}, the overarching need is to turn Haagerup's characterization of temperedness (Proposition~\ref{prop:Haa-temp}) into a lower bound on the annealed entropy of $\phi_t$ for small values of $t$. Having expressed $\hann$ using the Seward expansion and then bounded it from below as in~\eqref{eq:pre-hann-aux}, the main insight is Step 3 in the proof of Lemma~\ref{lem:hann-aux}. In this step, the expression we start with is re-arranged into a collection of telescoping sums, and most of the apparent complexity cancels as a result.
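Explicitly, once the sums over $i$ and over the continuations $s_{i+1}\cdots s_n$ have been interchanged, the differences of projections telescope: \[\sum_{i=1}^n\big(P_{M(s_i\cdots s_1)} - P_{M(s_{i-1}\cdots s_1)}\big)z \ =\ P_{M(s_n\cdots s_1)}z,\] because $M(e) = \{0\}$, and only these final projections survive in the resulting sum over $s_n\cdots s_1 \in S_n$.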
This factor should be removable in light of the reasoning about $\hann$ that concludes the proof of Theorem~\ref{mainthm:tempered}. Indeed, this factor of $n$ is why the estimates we obtain on ${\hann(t\phi + (1-t)\tau)}$ for various values of $t$ are not strong enough to yield~\eqref{eq:supneg} directly: the most they can tell us is~\eqref{eq:not-fast-enough}, which implies~\eqref{eq:supneg} only in conjunction with part (c) of Theorem~\ref{thm:exp-typ}. Methods for exploiting almost orthogonality when comparing large families of vectors are a classical topic in harmonic analysis on Euclidean spaces: see, for instance,~\cite[Chapter VII]{Ste--HA}. Moreover, many techniques from harmonic analysis on Euclidean spaces or homogeneous spaces have been adapted successfully to functions on trees and free groups: see~\cite{Cartier73} or~\cite{FigTalPicHAFG} for an introduction. However, I have not been able to turn any results from those fields into useful estimates here. The first problem seems to be that, even considering the individual crescents $C(s_n\cdots s_1)$, their size grows exponentially with $n$, and this easily introduces new pre-factors into the relevant estimates that swamp any improvements obtained elsewhere. It could be interesting to look into this further. \section{Approximate containment and compact operators}\label{sec:stiff} The rest of this chapter derives some consequences of Theorem~\ref{mainthm:tempered}. For some of these, we need a little more background about the relations of approximate containment and approximate equivalence. This background applies to separable representations of any separable C$\sp*$-algebra, although our later uses are all for the group C$\sp*$-algebras of free groups. As in Chapter~\ref{chap:approx},~\cite[Sections 41 and 42]{ConOT} serves as a general reference. This next lemma gives some necessary conditions for approximate containment. It relates two separable representations $\rho$ and $\pi$ by comparing (i) their kernels, (ii) which elements of $\A$ become compact operators in each representation, and (iii) the (finite or infinite) ranks of the images of elements of $\A$ in each representation. \begin{lem}\label{lem:approx-contain-compact} If $\rho \gtrsim_{\rm{a}} \pi$, then \begin{itemize} \item[i.] $\ker \rho \subset \ker \pi$; \item[ii.] if $a \in \A$ and $\rho(a)$ is compact, then $\pi(a)$ is compact; \item[iii.] for any $a \in \A$ we have $\rm{rank}\,\pi(a) \le \rm{rank}\,\rho(a)$. \end{itemize} \end{lem} \begin{proof} For each conclusion, it suffices to show how the relevant properties can be described in terms of the sets $\ol{\S_k(\rho)}$ and $\ol{\S_k(\pi)}$. \vspace{7pt} \emph{Part (i).}\quad For any $a \in \A$, we have \[\|\rho(a)\|^2 = \sup\{\phi(a^\ast a):\ \phi \in \ol{\S_1(\rho)}\}.\] \vspace{7pt} \emph{Part (ii).}\quad By the spectral theorem for compact self-adjoint operators, the operator $\rho(a)$ is compact if and only if, for every $\eps > 0$, there exists an integer $\ell$ such that \[\min\{\phi_{ii}(a^\ast a):\ 1\le i \le k\} < \eps\] whenever $k\ge \ell$ and $\phi \in \ol{\S_k(\rho)}$. \vspace{7pt} \emph{Part (iii).}\quad For any positive integer $k$, we have $\rm{rank}\,\rho(a) \le k$ if and only if \[\Det\,\phi(a^\ast a) = 0 \qquad \hbox{for every}\ \phi \in \bigcup_{\ell \ge k+1}\ol{\S_\ell(\rho)}.\] \end{proof} The converse of Lemma~\ref{lem:approx-contain-compact} is also true, but the proof is longer and we do not need it. Compare~\cite[Theorem 41.12]{ConOT} about approximate equivalence.
The next lemma is~\cite[Corollary 41.13(b)]{ConOT}. \begin{lem}\label{lem:get-sum} If $\rho$ and $\pi$ are two non-degenerate separable representations with the property that $\pi(a)$ is zero whenever $\rho(a)$ is compact, then $\rho\oplus \pi \cong_{\rm{a}} \rho$. \qed \end{lem} Our main need later is for a pair of corollaries of Lemma~\ref{lem:get-sum}. To explain these, the following terminology is helpful. For any separable representation $\pi$, its \textbf{stiff ideal} is the closed ideal $\mathfrak{I}$ of all $a \in \A$ such that $\pi(a)$ is compact, and the \textbf{stiff part} of $\pi$ is the invariant subspace \[\ol{\rm{span}}\{\pi(a)v:\ v \in H_\pi,\ a \in \mathfrak{I}\}.\] We call $\pi$ itself \textbf{stiff} if it is equal to its stiff part; that is, if $\pi|\mathfrak{I}$ is non-degenerate. Beware: If $M$ is the stiff part of $\pi$, then $\pi^{M^\perp}$ may have a nontrivial stiff part of its own. So the decomposition into $M$ and $M^\perp$ is not generally a decomposition into a stiff representation and a representation with no stiff part. On the other hand, clearly any subrepresentation of a stiff representation is stiff. \begin{rmk} In some references, including~\cite{Arv77} and~\cite{ConOT}, the stiff part of $\pi$ is called the `essential part'. This seems unfortunate, because the case of a single unitary transformation reduces to the study of the essential spectrum $\s_{\rm{ess}}(U)$, but with a mismatch in which objects are labeled `essential'. \end{rmk} \begin{cor}\label{cor:approx-contain-contain} If $\rho \gtrsim_{\rm{a}} \pi$, then $\rho$ is approximately equivalent to a representation that contains $\pi$. \end{cor} \begin{proof} Let $\mathfrak{I}$ be the stiff ideal of $\rho$, let $M$ be the stiff part of $\rho$, and let \[N := \ol{\rm{span}}\{\pi(a)v:\ v \in H_\pi,\ a \in \mathfrak{I}\}.\] Since $\mathfrak{I}$ is an ideal, $M$ and $N$ are invariant subspaces of $\rho$ and $\pi$, respectively. On the one hand, if $a \in \mathfrak{I}$, then $\pi^{N^\perp}(a) = 0$ by construction. So Lemma~\ref{lem:get-sum} gives $\rho \oplus \pi^{N^\perp}\cong_{\rm{a}} \rho$. On the other hand, $\rho^M|\mathfrak{I}$ and $\pi^N|\mathfrak{I}$ are two non-degenerate representations of $\mathfrak{I}$ by compact operators, and Lemma~\ref{lem:approx-contain-compact} shows that \[\rm{rank}\,\rho^M(a) = \rm{rank}\,\rho(a) \ge \rm{rank}\,\pi(a) = \rm{rank}\,\pi^N(a) \qquad (a \in \mathfrak{I}).\] Therefore, by the general classification of representations by compact operators, we can conclude that $\rho^M|\mathfrak{I}$ contains $\pi^N|\mathfrak{I}$. See, for instance, the proof of~\cite[Proposition 16.23]{ConOT}, which is easily adapted to give containment rather than equivalence. Since every non-degenerate representation of $\mathfrak{I}$ has a unique extension to a representation of $\A$~\cite[Proposition 6.10]{ConOT}, it follows that $\rho^M$ contains $\pi^N$. Putting these conclusions together, we find that \[\pi \cong \pi^N \oplus \pi^{N^\perp} \lesssim \rho^M\oplus \pi^{N^\perp} \lesssim \rho\oplus \pi^{N^\perp} \cong_{\rm{a}} \rho.\] \end{proof} Approximate equivalence becomes a very loose relation in case a certain stiff part is trivial. \begin{cor} Suppose that $\rho \gtrsim_{\rm{a}} \pi$ and that $\pi(a)$ is either zero or non-compact for every $a \in \A$. Then \begin{equation}\label{eq:two-approx-equivs} \rho\cong_{\rm{a}} \rho\oplus \pi \cong_{\rm{a}} \rho\oplus \pi^{(\infty)}. \end{equation} In particular, with that assumption on $\pi$, we have $\pi \cong_{\rm{a}} \pi^{(\infty)}$.
\end{cor} \begin{proof} By Lemma~\ref{lem:approx-contain-compact} and the assumptions, we have $\ker \rho \subset \ker\pi$ and also \[\rho(a)\ \hbox{compact} \quad \Rightarrow \quad \pi(a)\ \hbox{compact} \quad \Rightarrow \quad \pi(a) = 0 \qquad (a \in \A).\] Given this, it also follows that $\pi^{(\infty)}(a) = 0$ whenever $\rho(a)$ is compact. Now both parts of~\eqref{eq:two-approx-equivs} follow from Lemma~\ref{lem:get-sum}. The final conclusion follows by applying~\eqref{eq:two-approx-equivs} with $\rho = \pi$, since $\pi\oplus \pi^{(\infty)}$ is unitarily equivalent to $\pi^{(\infty)}$. \end{proof} \begin{ex}\label{ex:left-reg-no-stiff} If $\G$ is an infinite countable group and $\l$ is the left regular representation of $\A = C\sp\ast \G$, then $\l(a)$ is either zero or non-compact for every $a \in \A$. This can be seen in many ways. For instance, if $\l(a)$ is compact for some $a \in \A$, then so is $\l(a^\ast a)$, and now any eigenspace of $\l(a^\ast a)$ for a non-zero eigenvalue would be a finite-dimensional invariant subspace for the right-regular representation, which commutes with $\l$. However, since $\G$ is infinite, the right-regular representation is mixing, and hence has no non-trivial finite-dimensional subrepresentations. Given this, the preceding corollary shows the `absorbing property' that any $\rho \gtrsim_{\rm{a}} \l$ satisfies $\rho\oplus \l^{(\infty)} \cong_{\rm{a}} \rho$. \qed \end{ex} To formulate the next result, recall that the \textbf{support} of a representation $\pi$ is \[\rm{supp}\,\pi := \{[\rho] \in \hat{\A}:\ \rho\ \hbox{approximately contained in }\ \pi\}\] (see~\cite[Subsection 3.4.6]{Dix--Cstar}). This is a closed subset of $\hat{\A}$ in the Fell topology, and it depends only on the approximate equivalence class of $\pi$. \begin{thm}\label{thm:approx-equiv-2} If $\pi$ is a separable representation with stiff part $M$, then the following hold. \begin{itemize} \item[a.] The stiff part $\pi^M$ is equivalent to \[\bigoplus_i \kappa_i^{m_i}\] for some (finite or infinite) countable subset $\{[\kappa_1],[\kappa_2],\dots\}$ of $\hat{\A}$ and some positive integers $m_1$, $m_2$, \dots. \item[b.] For any countable dense subset $S$ of $\rm{supp}\,\pi^{M^\perp}$, we have \[\pi \ \cong_{\rm{a}} \ \pi^M \oplus \bigoplus_{[\kappa] \in S}\kappa^{(\infty)}.\] \item[c.] None of the classes $[\kappa_i]$ lies in $\rm{supp}\,\pi^{M^\perp}$. \item[d.] If $\rho$ is another separable representation approximately equivalent to $\pi$, then their stiff parts are unitarily equivalent. \qed \end{itemize} \end{thm} Parts (a), (b) and (c) of this theorem are contained in~\cite[Theorems 41.12 and 42.1]{ConOT}. Part (d) is~\cite[Lemma 41.9]{ConOT}. Related arguments also show that each class $[\kappa_i]$ is an isolated point of $\rm{supp}\,\pi$, but we do not need this extra fact in the sequel so we leave it aside here. The results above shed light on the failure of the sets $\ol{\S_k(\rho)}$ to be convex for some representation $\rho$. This can happen, but only if $\rho$ has a non-trivial stiff part, in which case the multiplicities of irreducible subrepresentations are constrained under approximate equivalence by part (d) of Theorem~\ref{thm:approx-equiv-2}. \subsection*{\emph{Notes and further references}} Considering compact operators in the image of a representation leads to the definition of liminary and postliminary C$\sp\ast$-algebras~\cite[Sections 4.3--4]{Dix--Cstar}.
Their representation theory is the most rigid and complete among separable C$\sp*$-algebras, culminating in Glimm's theorem from~\cite{Gli61} that a C$\sp*$-algebra is postliminary if and only if all of its representations are of Type I. An accessible introduction to this class of C$\sp*$-algebras is given in~\cite[Chapter 4]{Arv76}, and textbook accounts that include Glimm's theorem are~\cite[Chapter 9]{Dix--Cstar} and~\cite[Section 4.6]{SakCandW}. Folland discusses the significance of these results for representations of groups in~\cite[Section 7.2]{FolAHA}. The group C$\sp*$-algebras of various classes of connected Lie groups turn out to be postliminary, but on the other hand this is very rare among discrete groups by a theorem of Thoma: see~\cite[Section 7.D]{BekdelaHarUDC}. For representations of a general separable C$\sp*$-algebra, the connection between approximate equivalence and compact operators goes back to the foundational papers~\cite{Voi76,Arv77}, and is also covered in~\cite[Sections 41 and 42]{ConOT}. Among the most consequential results is that, if $\pi \cong_{\rm{a}} \rho$, then, in addition to the vanishing of the operator norms in part (iii) of Theorem~\ref{thm:approx-equiv}, one can also require that the differences $\pi(a) - U_n^\ast \rho(a) U_n$ be compact operators. This fact is another part of~\cite[Theorem 41.12]{ConOT}. This aspect has become central because of connections with the study of $\rm{Ext}$, K-theory and K-homology for operator algebras initiated by Brown, Douglas and Fillmore~\cite{BroDouFil73,BroDouFil77}. It is the main focus of some more recent accounts: see, for instance,~\cite[Chapter 3]{HigRoeAKH}. Injective representations that have the property assumed for $\pi$ in Lemma~\ref{lem:get-sum} also appear at several points throughout~\cite{HigRoeAKH}, where they are called `ample' representations. \section{The large deviations principle for Vietoris limits} The next result shows that, when $\rmh^0$ is finite, it is `coercive' modulo tempered representations. \begin{prop}\label{prop:h0-strict-ineq} Suppose that $\rho \gtrsim_{\rm{a}}\pi \gtrsim_{\rm{a}}\l$. If $\rmh^0(\pi) = \rmh^0(\rho) > -\infty$, then $\rho \cong_{\rm{a}}\pi$. \end{prop} \begin{proof} By Corollaries~\ref{cor:approx-contain-contain} and~\ref{cor:approx-equiv-invar}, we may alter $\rho$ within its approximate equivalence class in order to assume that $\rho = \pi\oplus \xi$. Then Proposition~\ref{prop:additivity} gives \[\rmh^0(\rho) = \rmh^0(\pi) + \rmh^0(\xi).\] By the assumed equality of zeroth-order entropies, and then Theorem~\ref{mainthm:tempered}, this is possible only if $\xi$ is tempered. By the analysis in Example~\ref{ex:left-reg-no-stiff}, this implies that $\rho \lesssim_{\rm{a}} \pi \oplus \l \cong_{\rm{a}} \pi$. \end{proof} \begin{lem}\label{lem:exp-typ-3-upper} Let $\pi$ be a separable representation. \begin{enumerate} \item[a.] If $h_0 > \rmh^0(\pi)$, then there is a Vietoris neighbourhood $U$ of $\ol{\S_\bullet(\pi)}$ such that \[\bbP(\S_\bullet(\pi_n) \in U) \le e^{h_0n + o(n)}.\] \item[b.]
For any compact subset $L$ of $\cal{Z}\G$ and any \[h_1 > \sup\{\rmh^0(\pi):\ \ol{\S_\bullet(\pi)} \in L\}\] we have \[\bbP(\S_\bullet(\pi_n) \in L) \le e^{h_1n + o(n)}.\] \end{enumerate} \end{lem} \begin{proof} If $h_0 > \rmh^0(\pi)$, then part (b) of Corollary~\ref{cor:exp-typ} gives a Fell neighbourhood $U$ of $\pi$ such that \[\bbP(\pi_n \in U) \le e^{h_0n + o(n)}.\] Since any Fell neighbourhood is the $\S_\bullet$-pre-image of a lower Vietoris neighbourhood, this proves part (a). Now consider a compact set $L$ as in part (b). Whenever $\ol{\S_\bullet(\pi)} \in L$, part (a) gives a Vietoris neighbourhood $U_\pi$ of $\ol{\S_\bullet(\pi)}$ such that \[\bbP(\S_\bullet(\pi_n) \in U_\pi) \le e^{h_1n + o(n)}.\] By compactness, $L$ can be covered by finitely many of the sets $U_\pi$, so also \begin{equation}\label{eq:Vietoris-cpct-upper-bd} \bbP(\S_\bullet(\pi_n) \in L) \le e^{h_1n + o(n)}. \end{equation} \end{proof} \begin{lem}\label{lem:nearby-h0-upper-bd} Let $\pi$ be a separable representation such that $\pi \gtrsim_{\rm{a}} \l$ and $\rmh^0(\pi) > -\infty$. For some positive integer $k$, let $K$ be a compact subset of $\S_k(\G)$ which is disjoint from $\ol{\S_k(\pi)}$. Then there are $c > 0$ and a Fell neighbourhood $U$ of $\pi$ such that \begin{equation}\label{eq:U-and-K-upper-bd} \bbP\big(\pi_n \in U\ \hbox{and}\ \S_k(\pi_n)\cap K \ne\emptyset\big) \le e^{(\rmh^0(\pi)-c)n + o(n)}. \end{equation} \end{lem} \begin{proof} \emph{Step 1.}\quad We first consider the special case when $K$ is the singleton $\{\psi\}$, and prove that there are $c > 0$, a neighbourhood $O$ of $\psi$, and a Fell neighbourhood $U$ of $\pi$ such that the following holds: \begin{quote} If $\rho \in U$ and $\X(\rho,O) \ne \emptyset$ then $\rmh^0(\rho) < \rmh^0(\pi) - c$. \end{quote} We prove this by contradiction. Since both $\S_k(\G)$ and the Fell topology are second countable, if the conclusion is false then there is a sequence of separable representations $\rho_n$ such that \[\rho_n \stackrel{\rm{Fell}}{\to} \pi, \quad \rho_n \stackrel{\rm{Fell}}{\to} \pi_\psi, \quad \hbox{and} \quad \liminf_n\rmh^0(\rho_n) \ge \rmh^0(\pi).\] By passing to a subsequence, we may assume that $\ol{\S_\bullet(\rho_n)} \to \ol{\S_\bullet(\rho)}$ in $\cal{Z}\G$ for some limit representation $\rho$. Then $\rho$ approximately contains both $\pi$ and $\pi_\psi$, and it satisfies $\rmh^0(\rho)\ge \rmh^0(\pi)$ by Vietoris upper-semicontinuity of $\rmh^0$. But the first two conditions here imply that $\rho$ is not approximately equivalent to $\pi$, and so this lower bound on $\rmh^0(\rho)$ contradicts Proposition~\ref{prop:h0-strict-ineq}. \vspace{7pt} \emph{Step 2.}\quad Now consider a general compact set $K$. For each $\psi \in K$, Step 1 gives a neighbourhood $O_\psi$ of $\psi$, a Fell neighbourhood $U_\psi$ of $\pi$, and a constant $c_\psi > 0$. By compactness, we can choose $\psi_1$, \dots, $\psi_n \in K$ such that $K$ is contained in $O := O_{\psi_1}\cup \cdots\cup O_{\psi_n}$. Let \[U_0 := U_{\psi_1}\cap \cdots \cap U_{\psi_n} \qquad \hbox{and} \qquad c:= \min\{c_{\psi_1},\dots,c_{\psi_n}\},\] so now we have $\rmh^0(\rho) \le \rmh^0(\pi)-c$ whenever $\rho \in U_0$ and $\X(\rho,O)\ne\emptyset$. By shrinking $U_0$ further if necessary, we may also assume that \[U_0 = \{\rho:\ \S_\ell(\rho)\cap V_0\ne\emptyset\}\] for some positive integer $\ell$ and open subset $V_0$ of $\S_\ell(\G)$.
Since this must be a neighbourhood of $\pi$, we can also choose some $\phi \in \S_\ell(\pi)\cap V_0$, and now we can choose a neighbourhood $V$ of $\phi$ with $\ol{V}\subset V_0$. Let \[U := \{\rho:\ \S_\ell(\rho)\cap V\ne\emptyset\}.\] Finally, let \[L := \{Z_\bullet \in \cal{Z}\G:\ Z_\ell\cap \ol{V}\ne \emptyset\ \hbox{and}\ Z_k\cap K\ne\emptyset\}.\] This is a compact subset of $\cal{Z}\G$ for the Vietoris topology, and any $\rho$ satisfying $\S_\bullet(\rho) \in L$ also satisfies $\rho \in U_0$ and $\X(\rho,O)\ne \emptyset$, and hence $\rmh^0(\rho)\le \rmh^0(\pi)-c$. Now applying part (b) of Lemma~\ref{lem:exp-typ-3-upper} to $L$, we finally obtain \[\bbP(\pi_n \in U\ \hbox{and}\ \S_k(\pi_n)\cap K\ne\emptyset) \le \bbP(\S_\bullet(\pi_n)\in L)\le e^{(\rmh^0(\pi)-c)n + o(n)},\] as required. \end{proof} \begin{thm}\label{thm:exp-typ-2} The sequence $\ol{\S_\bullet(\pi_n)}$ obeys the large deviations principle in $\cal{Z} \G$ with rate function \[I(\ol{\S_\bullet(\pi)}) := \left\{\begin{array}{ll}-\rmh^0(\pi) &\quad \hbox{if}\ \pi\gtrsim_{\rm{a}} \l\\ \infty &\quad \hbox{otherwise}\end{array}\right. \qquad (\ol{\S_\bullet(\pi)} \in \cal{Z}\G).\] \end{thm} In the definition of $I$ above, note that the right-hand side does indeed depend only on $\ol{\S_\bullet(\pi)}$, by Theorem~\ref{thm:approx-equiv}, Corollary~\ref{cor:approx-equiv-invar}, and the fact that approximately containing $\l$ is an invariant of approximate equivalence. \begin{proof} \emph{Step 1.}\quad If $\pi$ does not approximately contain $\l$, then there are a positive integer $k$, an element $\phi \in \S_k(\l)$, and a neighbourhood $U$ of $\phi$ such that $\ol{\S_k(\pi)}$ is disjoint from $\ol{U}$. By Corollary~\ref{cor:typ-trans}, $\chi_{\rm{reg}}$ has a neighbourhood $V$ such that \[\X(\pi_n,V) \ne \emptyset \qquad \Rightarrow \qquad \X(\pi_n,U) \ne \emptyset.\] Now Theorem~\ref{thm:asymp-free2} gives a positive constant $c$ such that \[\bbP(\S_k(\pi_n) \subset \S_k(\G)\setminus \ol{U}) = \bbP(\X(\pi_n,\ol{U}) = \emptyset) \le \bbP(\X(\pi_n,V)=\emptyset) = O(e^{-cn^2}),\] and this is eventually smaller than any exponential function of $n$. Since the left-hand event displayed above is defined by a Vietoris neighbourhood of $\pi$, this implies the desired upper bound. \vspace{7pt} \emph{Step 2.}\quad In case $\pi \gtrsim_{\rm{a}}\l$, the correct upper bound is already provided by part (a) of Lemma~\ref{lem:exp-typ-3-upper}. To finish the proof, we now assume further that $\rmh^0(\pi) > -\infty$ and prove the lower bound. Consider a basic Vietoris neighbourhood of $\ol{\S_\bullet(\pi)}$ of the form \[W := \{Z_\bullet \in \cal{Z}\G:\ Z_\ell\cap O_1 \ne \emptyset,\ Z_k\subset O_2\}\] for some positive integers $k$ and $\ell$ and open subsets $O_1 \subset \S_\ell(\G)$ and $O_2 \subset \S_k(\G)$ (see Lemma~\ref{lem:Vietoris-simplify}). Let \[U_1 := \{\rho:\ \X(\rho,O_1)\ne \emptyset\},\] so this is a Fell neighbourhood of $\pi$. Let $K := \S_k(\G)\setminus O_2$, so this is compact and disjoint from $\ol{\S_k(\pi)}$. Now Lemma~\ref{lem:nearby-h0-upper-bd} gives some $c > 0$ and a Fell neighbourhood $U_2$ of $\pi$ such that the upper bound~\eqref{eq:U-and-K-upper-bd} holds with $U_2$ in place of $U$.
Finally, let $U:= U_1\cap U_2$, so this is still a Fell neighbourhood of $\pi$, and observe that \begin{align*} \bbP(\ol{\S_\bullet(\pi_n)} \in W) &\ge \bbP(\pi_n \in U\ \hbox{and}\ \ol{\S_k(\pi_n)} \subset O_2)\\ &= \bbP(\pi_n \in U) - \bbP(\pi_n \in U\ \hbox{and}\ \ol{\S_k(\pi_n)} \cap K \ne\emptyset)\\ &\ge e^{\rmh^0(\pi)n - o(n)} - e^{(\rmh^0(\pi)-c)n + o(n)}, \end{align*} where we bound the first term from below by Corollary~\ref{cor:exp-typ} and bound the absolute value of the second term by Lemma~\ref{lem:nearby-h0-upper-bd} as above. Comparing the asymptotic behaviours as $n\to\infty$, this gives the desired probability lower bound. \end{proof} \begin{rmk} Theorem~\ref{thm:exp-typ-2} is worth comparing with Voiculescu's conjectured variational principle between `topological free entropy' and `free capacity' from~\cite{Voi02}. When specialized to representations of a free group $\G$, the unknown part of that principle should be a consequence of the following heuristic phenomenon: given a character $\chi$ on $\G$ and a sufficiently large integer $n$, `most' $n$-dimensional representations $\pi$ of $\G$ that satisfy $\chi_\pi \approx \chi$ also satisfy $\ol{\S_\bullet(\pi)} \approx \ol{\S_\bullet(\pi_\chi)}$. \end{rmk} \section{Representations with finite zeroth-order AP entropy}\label{sec:MF} \begin{prop}\label{prop:MF} If $\pi$ is a separable representation with $\pi \gtrsim_{\rm{a}} \l$ and ${\rmh^0(\pi) > -\infty}$, then $\pi$ is a Fell-global limit of an AP sequence, and $C^\ast(\pi)$ is MF. \end{prop} \begin{proof} The Vietoris topology on $\cal{Z}\G$ is second countable, and hence so is the Fell-global topology on separable representations. Therefore, for the first conclusion, it suffices to show that every Fell-global neighbourhood $U$ of $\pi$ contains finite-dimensional representations of arbitrarily high dimension. However, by Theorem~\ref{thm:exp-typ-2}, our assumptions on $\pi$ give \[\bbP(\pi_n \in U) \ge e^{\rmh^0(\pi)n - o(n)},\] so in particular this must be strictly positive for all sufficiently large $n$. The last conclusion follows from the general connection between Fell-global limits and the MF property: see Section~\ref{sec:FD-conv}. \end{proof} The connection between results like Theorem~\ref{thm:exp-typ-2} and the MF property was pointed out to me by Lewis Bowen. \begin{ex} Recall that a Haagerup function $\phi$ as in Example~\ref{ex:Haa-fns} has finite annealed AP entropy provided $|\phi(s)| < 1$ for every $s$. Its GNS representation therefore has finite zeroth-order AP entropy as well, and so any Haagerup function $\phi$ satisfying that inequality gives a C$\sp*$-algebra $C^\ast(\pi_\phi)$ that is MF. \qed \end{ex} An even finer result holds as a consequence of Theorem~\ref{thm:approx-equiv-2}. \begin{prop}\label{prop:form-for-h0} Let $\pi$ be a separable representation and let $M$ be its stiff part. Represent it as the direct sum \begin{equation}\label{eq:big-direct-sum} \pi \cong_{\rm{a}} \bigoplus_i \kappa_i^{m_i} \oplus \bigoplus_{[\kappa] \in S}\kappa^{(\infty)} \end{equation} as in parts (a) and (b) of Theorem~\ref{thm:approx-equiv-2}. If $\rmh^0(\pi) > -\infty$ then (i) $\pi^{M^\perp} \lesssim_{\rm{a}}\l$, and (ii) we have \begin{equation}\label{eq:form-for-h0} \rmh^0(\pi) = \sum_i m_i\cdot \rmh^0(\kappa_i).
\end{equation} \end{prop} \begin{proof} By Corollary~\ref{cor:approx-equiv-invar}, $\rmh^0(\pi)$ is the result of applying Proposition~\ref{prop:additivity} to the direct sum in~\eqref{eq:big-direct-sum}. This gives $-\infty$ unless $\rmh^0(\kappa) = 0$ for every $\kappa \in S$. By Theorem~\ref{mainthm:tempered}, this holds only if $S$ is contained in $\hat{\G}_\rm{red}$, and by the analysis in Example~\ref{ex:left-reg-no-stiff} this implies that $\pi^{M^\perp}\lesssim_{\rm{a}} \l$. Having proved this, the remaining terms from our application of Proposition~\ref{prop:additivity} are those in~\eqref{eq:form-for-h0}. \end{proof} Combining Proposition~\ref{prop:form-for-h0} with Theorem~\ref{thm:three-entropies} and part (c) of Corollary~\ref{cor:det-form}, we find the following. \begin{cor}\label{cor:when-hann-positive} If $\phi \in \S_k(\G)$ and $\hann(\phi) > -\infty$, then \[\pi_\phi \cong (\hbox{stiff})\oplus (\hbox{tempered}),\] and its tempered part contains $\l^{(k)}$. \qed \end{cor} In addition to the regular character, we have met a rather large supply of positive definite maps with finite annealed AP entropy: the Haagerup functions and their matrix-valued generalizations (Example~\ref{ex:Haa-fns}). We have already seen that the C$\sp*$-algebras they generate are MF. The corollary above suggests that their minimal dilations should actually have a very tractable structure, at least outside the tempered part. I do not know a route to this insight that does not involve $\rmh^0$. \chapter{Some further directions and open problems} Previous chapters have already contained a selection of potential research problems that emerge naturally during the course of our work. This final chapter describes a few more directions that could be explored next. Some of these are rather open-ended. \subsection*{\emph{Other models of random unitary representations}} Uniform random representations are the simplest models of random finite-dimensional unitary representations of free groups. But many others have also been studied. Most famously, representations generated by independent random permutations of a large finite set enjoy many properties in parallel with independent tuples of Haar-distributed random unitary matrices. See, in particular,~\cite{BordCol19}, which establishes strong asymptotic freeness for this model and provides an overview of the previous literature. The restriction to permutation matrices makes the symmetries of this model `less transitive', promising to complicate such basic steps in our work as the rearrangement~\eqref{eq:probab-reform}. So a notion of annealed AP entropy along uniformly random permutation representations is probably quite a bit more complicated. Another option is to move away from uniform randomness by `planting' a small number of vectors with prescribed behaviour in each of the representations $\pi_n$. This would be the analog of planted models of random graphs, which have been widely studied in combinatorics and theoretical computer science (see~\cite{BolSco04} for a first example). Technically, `planting' in our context should probably mean choosing the generators $(\pi_n(s):\ s \in S)$ initially from a distribution that has the form we have already met on the right-hand side of~\eqref{eq:prod-meas}. So it could be that this model of randomness requires few new ideas compared to our work above. Among its benefits one could hope to find a definition and an interpretation of `relative' annealed AP entropy. 
These should be analogous to their predecessors in ergodic theory studied in~\cite{Shr23}. \subsection*{\emph{Tensor products}} A third source of examples results from combining a random choice of unitary-matrix generators with operations such as tensor products: see~\cite{BordCol24}, for example. A different construction with tensor products is also essential to Hayes' method in his recent proof of the Peterson--Thom conjecture in~\cite{Hay22}. That proof was initially conditional on strong asymptotic freeness of certain random tensor product representations, but this ingredient has now been provided: see~\cite{BelCap22} and also~\cite{BordCol--tensors}. Is any version of annealed AP entropy available in settings such as these? \subsection*{\emph{Fixed matrices}} Many of the known results about asymptotic freeness or strong asymptotic freeness gain extra strength because one can also allow a sequence of fixed matrices alongside the random ones, and show that the whole partially-random collections of matrices still satisfy `as much (strong) asymptotic freeness as possible'. See the full results in~\cite{ColMal14}, for example. Is there a variant of annealed AP entropy that can help to analyse these models? \subsection*{\emph{Orthogonal or symplectic representations}} Most of our work should be straightforward to adapt to independent random orthogonal matrices, and I doubt there are any surprises waiting. This is because the main results in Part~\ref{part:free} do not depend on the spectral theorem for unitary matrices anywhere. The case of symplectic matrices is not quite so clear to me. Note that the main result of Chapter~\ref{chap:random-orbits} already applies to any of these settings. \subsection*{\emph{Analogies with free probability}} Formally, our only appeal to free probability in Part~\ref{part:free} is through Theorems~\ref{thm:asymp-free1} and~\ref{thm:asymp-free2}. But other points of resemblance are widespread. Do other known ideas or constructions from the study of free entropy in free probability have adaptations to the study of annealed AP entropy? See~\cite[Chapter 6]{HiaPetSL} for an introduction to that theory. \subsection*{\emph{Other groups}} To construct our random finite-dimensional unitary representations of a free group, we simply choose a unitary matrix for each free generator independently from the Haar measure $m_{\rm{U}(n)}$. As pointed out to me by Uri Bader, the distribution of the resulting random representation does not depend on the chosen free generating set for the group. This is because any two sets of free generators are related by an automorphism of the free group, and any such automorphism is a composition of Nielsen transformations~\cite{Nie24}. An easy check shows that each Nielsen transformation converts one tuple of independent Haar-distributed unitary matrices into another, and so the same is true of any automorphism of the group by induction. (See, for example,~\cite{Gel08} and the references listed there for much more on this measure-preserving action.) It should be straightforward to generalize these random finite-dimensional representations to free products of other cyclic groups. However, beyond this class, it is more difficult to define `natural' measures on representation varieties. Some of the first successes are due to Goldman for surface groups~\cite{Gol84}. See~\cite{MulPen02} for more recent related work, or the discussion in~\cite{BureLaw22}. 
When such a natural finite measure exists, one can sensibly use it to define `annealed' AP entropies for those groups. I do not know how tractable or useful the resulting theory might be. For a Cartesian product $\G_1 \times \G_2$ of two other groups, this discussion asks for a natural way to generate a pair of commuting finite-dimensional representations of $\G_1$ and $\G_2$. While there may not be a single canonical measure on such pairs, one possibility is offered by Kronecker products of separate representations of $\G_1$ and $\G_2$ (see~\cite[Section 7.3]{FolAHA}). Studying these would bring us back to our questions about tensor products above. In general, questions about whether such tensor-product representations can effectively `see' all representations of $\G_1\times \G_2$ have long-known connections to Connes' embedding problem and related phenomena. Much more on these topics can be found in~\cite{PisOST},~\cite{BroOza08}, or~\cite{Oza13}, for example. \bibliography{bibfile}{} \bibliographystyle{abbrv} \begin{small} \noindent Mathematics Institute\\ Zeeman Building\\ University of Warwick\\ Coventry CV4 7AL\\ United Kingdom\\ \href{mailto:[email protected]}{\texttt{[email protected]}} \end{small} \end{document}
2412.13748v1
http://arxiv.org/abs/2412.13748v1
Physical Layer Security for Continuous-Aperture Array (CAPA) Systems
\documentclass[journal]{IEEEtran} \IEEEoverridecommandlockouts \usepackage{lineno} \usepackage{cite} \usepackage{hyperref} \usepackage{amsmath,amssymb,amsfonts} \usepackage{amsmath} \usepackage{amsthm} \DeclareMathOperator*{\argmax}{argmax} \usepackage{graphicx} \usepackage{textcomp} \usepackage{xcolor} \usepackage{graphicx} \usepackage{float} \usepackage{subfigure} \usepackage{amsmath} \usepackage{amsfonts,amssymb} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{algorithm} \usepackage{algorithmicx} \usepackage{algpseudocode} \usepackage{bm} \usepackage{multirow} \usepackage{array} \usepackage{amssymb} \usepackage{amsmath} \usepackage{cite} \usepackage{url} \usepackage{xcolor} \usepackage{cite,graphicx,amsmath,amssymb} \usepackage{subfigure} \usepackage{fancyhdr} \usepackage{mdwmath} \usepackage{mdwtab} \usepackage{caption} \usepackage{amsthm} \usepackage{setspace} \usepackage{bm} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{mathtools} \usepackage{dsfont} \usepackage{bbm} \usepackage{cases} \newtheorem{remark}{Remark} \theoremstyle{definition} \newtheorem{theorem}{Theorem} \newenvironment{theorembox} {\begin{theorem}}{\hfill \interlinepenalty500 $\Box$\end{theorem}} \newtheorem{lemma}{Lemma} \newenvironment{lemmabox} {\begin{lemma}}{\hfill \interlinepenalty500 $\Box$\end{lemma}} \newtheorem{corollary}{Corollary} \newenvironment{corollarybox} {\begin{corollary}}{\hfill \interlinepenalty500 $\Box$\end{corollary}} \newtheorem{proposition}{Proposition} \makeatletter \newcommand{\biggg}{\bBigg@{3}} \newcommand{\Biggg}{\bBigg@{3.5}} \makeatother \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} \title{Physical Layer Security for Continuous-Aperture Array (CAPA) Systems} \author{Boqun Zhao, Chongjun Ouyang, Xingqi Zhang, and Yuanwei Liu\vspace{-15pt} \thanks{B. Zhao and X. Zhang are with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton AB, T6G 2R3, Canada (email: \{boqun1, xingqi.zhang\}@ualberta.ca).} \thanks{C. Ouyang is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, U.K. (e-mail: [email protected]).} \thanks{Y. Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (email: [email protected]).} } \maketitle \begin{abstract} A continuous-aperture array (CAPA)-based secure transmission framework is proposed to enhance physical layer security. Continuous current distributions, or beamformers, are designed to maximize the secrecy transmission rate under a power constraint and to minimize the required transmission power for achieving a specific target secrecy rate. On this basis, the fundamental secrecy performance limits achieved by CAPAs are analyzed by deriving closed-form expressions for the maximum secrecy rate (MSR) and minimum required power (MRP), along with the corresponding optimal current distributions. To provide further insights, asymptotic analyses are performed for the MSR and MRP, which reveals that \romannumeral1) for the MSR, the optimal current distribution simplifies to maximal ratio transmission (MRT) beamforming in the low-SNR regime and to zero-forcing (ZF) beamforming in the high-SNR regime; \romannumeral2) for the MRP, the optimal current distribution simplifies to ZF beamforming in the high-SNR regime. The derived results are specialized to the typical array structures, e.g., planar CAPAs and planar spatially discrete arrays (SPDAs). 
The rate and power scaling laws are further analyzed by assuming an infinitely large CAPA. Numerical results demonstrate that: \romannumeral1) the proposed secure continuous beamforming design outperforms MRT and ZF beamforming in terms of both achievable secrecy rate and power efficiency; \romannumeral2) CAPAs achieve superior secrecy performance compared to conventional SPDAs. \end{abstract} \begin{IEEEkeywords} Continuous-aperture array (CAPA), maximum secrecy rate, minimum required power, physical layer security. \end{IEEEkeywords} \section{Introduction} Multiple-antenna technology is a fundamental pillar of modern wireless communication systems. Its core principle is to utilize a larger number of antenna elements in order to enhance spatial degrees of freedom (DoFs) and improve channel capacity \cite{tse2005fundamentals}. Traditionally, multiple-antenna systems are designed with a spatially discrete topology, where each antenna is represented as an individual point in space. Driven by the benefits of integrating more antennas, the concept of densely packed antenna arrays has gained significant attention in the field of communications. By reducing the spacing between elements within a fixed array aperture, more antennas can be accommodated, thereby enhancing spatial DoFs. This progression has given rise to many state-of-the-art array architectures, such as holographic multiple-input multiple-output (MIMO) systems \cite{holographic,holographic2}, large intelligent surface \cite{intelligent_surface}, dynamic metasurface antennas \cite{meta}. In these systems, antenna elements are arranged in an ultra-dense configuration with spacings of less than half a wavelength, leading to improved spectral efficiency \cite{pizzo2020spatially}. The holy grail of multiple-antenna systems is envisioned as the development of \emph{spatially continuous} electromagnetic (EM) apertures, referred to as \emph{continuous-aperture arrays (CAPAs)} \cite{liu2024capa}. A CAPA operates as a single electrically large-aperture antenna with a continuous current distribution, which comprises a (virtually) infinite number of radiating elements coupled with electronic circuits and a limited number of radio-frequency (RF) chains. On one hand, CAPAs fully utilize the entire aperture surface, which enables significant enhancements in spatial DoFs and array gains. On the other hand, they provide precise control over the amplitude and phase of the current across the aperture's surface. In summary, CAPAs can leverage spatial resources far more \emph{effectively} and \emph{flexibly} than traditional spatially discrete arrays (SPDAs). This capability allows them to approach the theoretical limits of channel capacity, positioning CAPAs as a cornerstone of next-generation wireless communications \cite{liu2024capa}. Unlike traditional arrays, which are modeled using spatially discrete methods, CAPA-based wireless transmission adopts a fundamentally different approach rooted in the EM field theory. Specifically, while the channels of conventional SPDAs are described using discrete matrices, the spatial response of a CAPA is modeled as a continuous operator in Hilbert space \cite{migliore2008electromagnetics}. This discrete-to-continuous transition is not merely a technical adjustment but a paradigm shift in the conceptualization and design of wireless transmission systems \cite{capa_single_0}. 
It fundamentally alters the analytical and design frameworks and renders conventional methods developed for SPDAs unsuitable for CAPAs \cite{liu2024capa}. Therefore, this shift necessitates the development of novel conceptual and mathematical tools tailored to address the continuous EM field interactions in CAPAs \cite{migliore2018horse}. \subsection{Prior Works} Recently, there has been growing research interest in the design and analysis of CAPA-based wireless communications. In \cite{capa_single_4}, the authors proposed a wavenumber-division multiplexing framework to enable multi-stream data transmission between two linear CAPAs. This concept was extended to CAPA-based multiuser MIMO channels in \cite{zhang2023pattern}, where a Fourier-based method was developed to maximize the downlink sum-rate by optimizing the current distribution used for modulating RF signals. Building on this approach, \cite{optimization} further studied beamforming design for uplink multiuser CAPA systems. More recently, \cite{wang2024beamforming} and \cite{wang2024optimal} proposed two calculus of variations (CoV)-based approaches for beamforming in unpolarized CAPA-based multiuser channels. Additionally, \cite{guo2024multi} applied deep learning techniques to design current distributions for multiuser CAPA systems. In addition to continuous beamforming design, the performance of CAPA-based wireless communication systems has also been analyzed. In \cite{capacity}, the channel capacity between two spherical dielectric CAPAs was studied. The authors of \cite{xie} discussed the effective DoFs and capacity between two linear CAPAs. In \cite{wan_2}, the Fredholm determinant was utilized to compare the mutual information of CAPA- and SPDA-based MIMO channels, and this analysis was further extended in \cite{wan_1} to incorporate the effects of non-white EM interference. Moreover, \cite{zhao2024continuous} analyzed the sum-rate capacity and capacity region of CAPA-based multiuser uplink and downlink channels. Extensions to CAPA-based fading channels were presented in \cite{ouyang2024diversity} with a focus on the diversity-multiplexing tradeoff in the high signal-to-noise (SNR) region. Additionally, \cite{yindi} evaluated the signal-to-interference-plus-noise ratio in uplink CAPA systems and proposed an adaptive interference mitigation method. Building on these advancements, a recent study derived the optimal linear receive beamformer for CAPA-based multiuser channels and analyzed the achieved performance in terms of both sum-rate and mean-squared error (MSE) \cite{ouyang2024performance}. \subsection{Motivation and Contributions} The aforementioned works demonstrate the superiority of CAPAs over SPDAs in enhancing wireless communication performance. However, these studies mainly focus on analyzing or optimizing system effectiveness, such as sum-rate \cite{zhang2023pattern,optimization,wang2024beamforming,wang2024optimal,guo2024multi}, or system reliability, such as outage probability \cite{ouyang2024diversity} and MSE \cite{ouyang2024performance}. Beyond these metrics, another critical issue for wireless communication systems is their security. Specifically, the broadcast nature of wireless channels exposes transmitted signals to potentially insecure environments, making them susceptible to interception by eavesdroppers \cite{chen2016survey}. This vulnerability emphasizes the crucial need for ensuring robust wireless security \cite{chen2016survey}. 
In the context of wireless security, secure channel coding has been theoretically proven as an effective approach to achieving nearly 100\% secure transmission at the physical layer \cite{chen2016survey}. This strategy, known as \emph{physical layer security (PLS)}, addresses the limitations of traditional cryptographic methods applied at higher layers (such as the network layer) by eliminating the need for additional spectral resources and reducing signaling overhead \cite{yang2015safeguarding}. The fundamental model for safeguarding information at the physical layer is Wyner's wiretap channel \cite{wyner}, which introduced the concept of secrecy transmission rate and secrecy channel capacity---the supremum of the achievable secure coding rate. In recent years, beamforming has been widely recognized as an effective method for enhancing secrecy capacity, thereby improving the PLS of wireless systems \cite{yang2015safeguarding}. Given that CAPAs are considered a promising architecture for achieving high beamforming gains, their application to PLS is particularly compelling. However, the use of CAPAs to enhance PLS remains underexplored in existing literature, which motivates this work. This article aims to analyze the fundamental limits of secrecy performance achieved by CAPA-based beamforming. Specifically, we focus on two key aspects: the \emph{maximum secrecy rate (MSR)} under a given power constraint and the \emph{minimum required power (MRP)} to achieve a specified secrecy rate target. The main contributions of this work are summarized as follows. \begin{itemize} \item We propose a CAPA-based transmission framework to enable secure communications with a legitimate user in the presence of an eavesdropper. Leveraging the EM theory and information theory, we introduce continuous operator-based signal and channel models to characterize CAPA-based secure communications. Within this framework, we define two critical metrics to evaluate secrecy performance: the MSR under a given power constraint and the MRP to achieve a specified secrecy rate target. \item For the problem of maximizing the secrecy rate, we derive a closed-form expression for the MSR and the optimal current distribution to achieve it. Additionally, we provide high-SNR and low-SNR approximations of the MSR and prove that the optimal current distribution simplifies to maximal ratio transmission (MRT) beamforming in the low-SNR regime and zero-forcing (ZF) beamforming in the high-SNR regime. Furthermore, we analyze the MSR achieved by a planar CAPA under a line-of-sight (LoS) model. To gain deeper insights, we conduct an asymptotic analysis of the achieved MSR by extending the aperture size to infinity and demonstrate that the asymptotic MSR adheres to the principle of energy conservation. \item For the problem of minimizing the required power to guarantee a target secrecy rate, we derive closed-form expressions for both the optimal current distribution and the MRP. On this basis, we demonstrate that the MRP outperforms that achieved by MRT beamforming and prove that the optimal current distribution simplifies to ZF beamforming when the target secrecy rate becomes infinitely large. Additionally, we derive a closed-form expression for the MRP achieved by a planar CAPA and characterize its asymptotic behavior as the aperture size approaches infinity. 
\item We provide numerical results to validate the effectiveness of CAPAs in enhancing PLS and demonstrate that: \romannumeral1) increasing the aperture size improves the secrecy performance achieved by CAPA-based secure beamforming; \romannumeral2) both the MSR and MRP converge to a finite constant as the aperture size of CAPA approaches infinity, which aligns with the principle of energy conservation; \romannumeral3) the proposed optimal continuous beamformers outperform the MRT and ZF-based schemes in terms of both MSR and MRP; and \romannumeral4) CAPA yields superior secrecy performance compared to conventional SPDAs. \end{itemize} The remainder of this paper is organized as follows. Section \ref{Section: System Model} introduces the CAPA-based secure transmission framework and defines the metrics of MSR and MRP. Sections \ref{sec_MSR} and \ref{sec_MRP} analyze the fundamental performance limits of the MSR and MRP achieved by CAPAs, respectively. Section \ref{numerical} presents numerical results to validate the theoretical findings. Finally, Section \ref{conclusion} concludes the paper. \subsubsection*{Notations} Throughout this paper, scalars, vectors, and matrices are denoted by non-bold, bold lower-case, and bold upper-case letters, respectively. For the matrix $\mathbf{A}$, ${\mathbf{A}}^{\mathsf{T}}$, ${\mathbf{A}}^{*}$ and ${\mathbf{A}}^{\mathsf{H}}$ denote its transpose, conjugate, and conjugate transpose, respectively. For the square matrix $\mathbf{B}$, $\det(\mathbf{B})$ denotes its determinant. The notations $\lvert a\rvert$ and $\lVert \mathbf{a} \rVert$ denote the magnitude and norm of scalar $a$ and vector $\mathbf{a}$, respectively. The sets $\mathbbmss{R}$ and $\mathbbmss{C}$ stand for the real and complex spaces, respectively, and the notation $\mathbbmss{E}\{\cdot\}$ represents mathematical expectation. We generally reserve the symbols $I(\cdot)$ for mutual information and $H(\cdot)$ for entropy. Finally, ${\mathcal{CN}}({\bm\mu},\mathbf{X})$ is used to denote the circularly-symmetric complex Gaussian distribution with mean $\bm\mu$ and covariance matrix $\mathbf{X}$. \begin{figure}[!t] \centering \includegraphics[height=0.25\textwidth]{CAPA.eps} \caption{Illustration of a CAPA-based wiretap channel.} \vspace{-7pt} \label{Figure: System_Model} \end{figure} \section{System Model}\label{Section: System Model} As illustrated in Fig.~\ref{Figure: System_Model}, we consider a wiretap channel consisting of a base station (BS), a legitimate user (Bob, denoted as $\rm{b}$), and an eavesdropper (Eve, denoted as $\rm{e}$). Each entity is equipped with a CAPA. Let $\mathcal{A}\subseteq{\mathbbmss{R}}^{3\times1}$ represent the aperture of the CAPA at the BS, with its size given by $\lvert \mathcal{A} \rvert=\int_{\mathcal{A}}{\rm{d}}{\mathbf{s}}$. Moreover, let ${\mathcal{A}}_{k}\subseteq{\mathbbmss{R}}^{3\times1}$ denote the aperture of the CAPA at each user $k\in\{\rm{b},\rm{e}\}$, with the aperture size $\lvert \mathcal{A}_k \rvert=\int_{{\mathcal{A}}_k}{\rm{d}}{\mathbf{r}}$. It is assumed that the aperture size of each user is significantly smaller than that of the BS, i.e., $\lvert \mathcal{A} \rvert\gg \lvert \mathcal{A}_k \rvert$ ($\forall k$). Furthermore, both Bob and Eve are assumed to have complete channel state information (CSI) regarding their respective effective channels. It is further assumed that Eve is a registered user, and thus, the BS obtains the CSI for both Eve and Bob during the channel training phase \cite{ly2010mimo}.
Under these conditions, Eve is expected to receive the \textit{common messages} broadcast across the network but remain uninformed about the \textit{confidential messages} intended solely for Bob. \subsection{CAPA-Based Transmission}\label{Section: System Model: CAPA-Based Transmission} The BS intends to transmit a \emph{confidential message} $W$ to Bob over $N$ channel uses, and ensures that it remains secret from Eve. To achieve this, the BS first encodes the confidential message $W$ into the codeword $[s(1),\ldots,s(N)]$ using a properly designed encoder \cite{leung1978gaussian}. The encoded symbols are Gaussian distributed with zero mean and unit variance, i.e., $s(n)\sim{\mathcal{CN}}(0,1)$ for $n\in{\mathcal{N}}\triangleq\{1,\ldots,N\}$. Next, the BS maps the encoded symbols over the time interval $n\in{\mathcal{N}}$, i.e., $s(n)$, into the transmit signal $x(\mathbf{s},n)\in{\mathbbmss{C}}$ by utilizing a source current $j({\mathbf{s}})$ for $\mathbf{s}\in\mathcal{A}$. This signal is radiated towards Bob, while also being overheard by Eve. The transmit signal at the $n$th time interval is given by \begin{equation}\label{Transmit_Signal} {x}(\mathbf{s},n)=j({\mathbf{s}})s(n),~n\in{\mathcal{N}}, \end{equation} where the source current is subject to the power constraint $\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}\leq P$. Therefore, the electric field excited by ${x}(\mathbf{s},n)$ at point $\mathbf{r}\in\mathcal{A}_k$ can be written as follows \cite{pizzo2022spatial}: \begin{subequations}\label{Electric_Field_Model} \begin{align} e_k(\mathbf{r},n)&=\int_{{\mathcal{A}}}h_k(\mathbf{r},{\mathbf{s}}){x}({\mathbf{s}},n){\rm{d}}{\mathbf{s}}\\ &=s(n)\int_{{\mathcal{A}}}h_k(\mathbf{r},{\mathbf{s}})j({\mathbf{s}}) {\rm{d}}{\mathbf{s}}, \end{align} \end{subequations} where $h_k(\mathbf{r},{\mathbf{s}})$ denotes user $k$'s spatial response from $\mathbf{s}$ to $\mathbf{r}$. The total observation of user $k$ at point $\mathbf{r}\in{\mathcal{A}}_k$ is the sum of the information-carrying electric field $e_k(\mathbf{r},n)$ and a random noise field $z_k(\mathbf{r},n)$, i.e., \begin{subequations}\label{Total_Electric_Field_Model} \begin{align} y_k(\mathbf{r},n)&=e_k(\mathbf{r},n)+z_k(\mathbf{r},n)\\ &=s(n)\int_{{\mathcal{A}}}h_k(\mathbf{r},{\mathbf{s}})j({\mathbf{s}}) {\rm{d}}{\mathbf{s}}+z_k(\mathbf{r},n), \end{align} \end{subequations} where $z_k(\mathbf{r},n)$ denotes the $n$th sample of additive white Gaussian noise (AWGN) at ${\mathbf{r}}$ with mean zero and variance (noise power) $\sigma_k^2$, i.e., $z_k(\mathbf{r},n)\sim {\mathcal{CN}}(0,\sigma_k^2)$. The SNR of user $k$ for decoding $s(n)$ is given by \cite{zhao2024continuous} \begin{subequations}\label{Received_SNR_Basic} \begin{align} \gamma_k&=\frac{{\mathbbmss{E}}\{\lvert s(n)\rvert^2\}\left\lvert\int_{\mathcal{A}_k}\int_{{\mathcal{A}}}h_k(\mathbf{r},{\mathbf{s}})j({\mathbf{s}}) {\rm{d}}{\mathbf{s}}{\rm{d}}{\mathbf{r}}\right\rvert^2}{\int_{\mathcal{A}_k}{\mathbbmss{E}}\{\lvert z_k(\mathbf{r},n)\rvert^2\}{\rm{d}}{\mathbf{r}}}\\ &=\frac{1}{\sigma_k^2 \lvert \mathcal{A}_k \rvert}{\left\lvert\int_{\mathcal{A}_k}\int_{{\mathcal{A}}}h_k(\mathbf{r},{\mathbf{s}})j({\mathbf{s}}) {\rm{d}}{\mathbf{s}}{\rm{d}}{\mathbf{r}}\right\rvert^2}.\label{Received_SNR_Basic_sub2} \end{align} \end{subequations} Given that the aperture size of each user is typically on the order of the wavelength, it is significantly smaller than both the propagation distance and the size of the BS aperture. 
Therefore, the variations in the channel response across the receive aperture are negligible, which yields \begin{equation} h_k(\mathbf{r},{\mathbf{s}})\approx h_k(\mathbf{r}_k,{\mathbf{s}})\triangleq h_k(\mathbf{s}). \end{equation} As a result, we can approximate \eqref{Received_SNR_Basic_sub2} as follows: \begin{align}\label{Received_SNR} \gamma_k\approx\frac{\lvert\mathcal{A}_k\rvert}{\sigma_k^2}{\left\lvert\int_{{\mathcal{A}}}h_k({\mathbf{s}})j({\mathbf{s}}) {\rm{d}}{\mathbf{s}}\right\rvert^2}. \end{align} \subsection{Secure Transmission}\label{Section: System Model: Secure Transmission} Bob makes an estimate $\hat{W}$ of $W$ based on the received output ${\mathbf{y}}_{\rm{b}}=[{{y}}_{\rm{b}}(1),\ldots,{{y}}_{\rm{b}}(N)]^{\mathsf{T}}$ from its channel, resulting in a block error rate $\epsilon_N=\Pr(W\ne \hat{W})$. The confidential message $W$ is also the input to Eve’s channel, and Eve has an average residual uncertainty $H(W|{\mathbf{y}}_{\rm{e}})$ after observing the output ${\mathbf{y}}_{\rm{e}}=[{{y}}_{\rm{e}}(1),\ldots,{{y}}_{\rm{e}}(N)]^{\mathsf{T}}$. Defining the transmission rate as $\mathcal{R}_N\triangleq H(W)/N$, and the fractional equivocation of Eve as $\Delta_N\triangleq H(W|{\mathbf{y}}_{\rm{e}})/H(W)$, the information-theoretic limits of secure transmission are described as follows \cite{leung1978gaussian}. \subsubsection*{Secure Coding Theorem} For any transmission rate ${\mathcal{R}}<{\mathcal{R}}_{j(\mathbf{s})}\triangleq\max\{\log_2(1+\gamma_{\rm{b}})-\log_2(1+\gamma_{\rm{e}}),0\}$, there exists an encoder-decoder pair such that as $N\rightarrow\infty$, the rate $\mathcal{R}_N\rightarrow\mathcal{R}$, the equivocation $\Delta_N\rightarrow1$, and the error probability $\epsilon_N\rightarrow0$. We comment that $\Delta_N\rightarrow1$ is equivalent to $H(W|{\mathbf{y}}_{\rm{e}})\rightarrow H(W)$, or $I(W;{\mathbf{y}}_{\rm{e}})=H(W)-H(W|{\mathbf{y}}_{\rm{e}})\rightarrow0$, meaning that Eve is unable to extract any information about the confidential message $W$. The above statements suggest that the maximum secure transmission rate for a given source current distribution $j({\mathbf{s}})$ is ${\mathcal{R}}_{j(\mathbf{s})}$. Below this rate, Bob is able to recover the confidential message with arbitrary precision, while Eve cannot obtain any information about $W$. Since the secrecy rate ${\mathcal{R}}_{j(\mathbf{s})}$ is a function of the current distribution $j({\mathbf{s}})$, we consider two key metrics to characterize the secrecy performance limits of the considered system.
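For illustration, the following minimal numerical sketch evaluates \eqref{Received_SNR} and the resulting secrecy rate ${\mathcal{R}}_{j(\mathbf{s})}$ for a given current distribution on a discretized aperture; the grid size, power budget, noise powers, and spatial responses used here are hypothetical placeholders rather than values taken from a physical channel model.
\begin{verbatim}
import numpy as np

# Discretize the transmit aperture A into G grid points with area element ds,
# so that int_A h_k(s) j(s) ds becomes a weighted sum over the grid.
rng = np.random.default_rng(0)
G = 512                       # number of grid points (assumption)
ds = 0.5 / G                  # area element of a hypothetical 0.5 m^2 aperture
P = 1.0                       # power budget
area_b, area_e = 1e-3, 1e-3   # receive-aperture sizes |A_b|, |A_e| (assumption)
sigma2_b, sigma2_e = 1e-2, 1e-2

# Hypothetical spatial responses h_b(s), h_e(s) sampled on the grid.
h_b = rng.standard_normal(G) + 1j * rng.standard_normal(G)
h_e = rng.standard_normal(G) + 1j * rng.standard_normal(G)

# Example current distribution: proportional to conj(h_b(s)), scaled so that
# int_A |j(s)|^2 ds = P.
j = np.conj(h_b)
j *= np.sqrt(P / (np.sum(np.abs(j) ** 2) * ds))

def snr(h, area_k, sigma2_k):
    # gamma_k ~ (|A_k| / sigma_k^2) * |int_A h_k(s) j(s) ds|^2
    return area_k / sigma2_k * np.abs(np.sum(h * j) * ds) ** 2

gamma_b, gamma_e = snr(h_b, area_b, sigma2_b), snr(h_e, area_e, sigma2_e)
R_j = max(np.log2(1 + gamma_b) - np.log2(1 + gamma_e), 0.0)
print(f"gamma_b={gamma_b:.4f}, gamma_e={gamma_e:.4f}, secrecy rate={R_j:.4f} bps/Hz")
\end{verbatim}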
\subsubsection{Maximum Secrecy Rate} The MSR, subject to the power budget $P$, is defined as follows: \begin{subequations} \begin{align} {\mathcal{R}}_{\star}&\triangleq\max_{\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}\leq P}{\mathcal{R}}_{j(\mathbf{s})}\\ &=\max_{\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}\leq P}\max\left\{\log_2\left(\frac{1+\gamma_{\rm{b}}}{1+\gamma_{\rm{e}}}\right),0\right\}.\label{Maximum_Rate_Definition} \end{align} \end{subequations} \subsubsection{Minimum Required Transmit Power} Another metric of interest is the MRP to ensure a target secrecy rate ${\mathsf{R}}_0>0$, which is defined as follows: \begin{subequations} \begin{align} {\mathcal{P}}_{\star}&\triangleq\min_{{\mathcal{R}}_{j(\mathbf{s})}\geq {\mathsf{R}}_0}{\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}}\\&=\min_{\log_2\left(\frac{1+\gamma_{\rm{b}}}{1+\gamma_{\rm{e}}}\right)\geq {\mathsf{R}}_0}{\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}}.\label{Minimum_Power_Definition} \end{align} \end{subequations} Achieving the MSR or the MRP is a fundamental objective in secure transmission. The MSR represents the highest rate at which data can be securely transmitted, while the MRP indicates the minimum power needed to achieve a desired level of security. In the following sections, we derive closed-form expressions for ${\mathcal{R}}_{\star}$ and ${\mathcal{P}}_{\star}$, as defined in \eqref{Maximum_Rate_Definition} and \eqref{Minimum_Power_Definition}, respectively, and analyze these two key metrics. \section{Analysis of the Maximum Secrecy Rate}\label{sec_MSR} In this section, we analyze the MSR by deriving its closed-form expression and the associated current distribution. \subsection{Problem Reformulation} Based on \eqref{Received_SNR} and \eqref{Maximum_Rate_Definition}, the problem of secrecy rate maximization can be formulated as follows: \begin{equation}\label{CAP_MISOSE_Problem_1} \max_{\int_{\mathcal{A}}{\left| j(\mathbf{s}) \right|^2}\mathrm{d}\mathbf{s}\le P} \frac{1+\frac{\lvert\mathcal{A}_{\mathrm{b}}\rvert}{\sigma _{\mathrm{b}}^{2}}\left| \int_{\mathcal{A}}{h_{\mathrm{b}}}(\mathbf{s})j(\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}{1+\frac{\lvert\mathcal{A}_{\mathrm{e}}\rvert}{\sigma _{\mathrm{e}}^{2}}\left| \int_{\mathcal{A}}{h_{\mathrm{e}}}(\mathbf{s})j(\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}. \end{equation} According to the monotonicity of the function $f(x)=\frac{1+ax}{1+bx}$ for $x>0$ and $a>b>0$, it can be easily shown that the optimal $j(\mathbf{s})$ satisfies $\int_{\mathcal{A}}{\left| j(\mathbf{s}) \right|^2}\mathrm{d}\mathbf{s}= P$. By defining $\overline{\gamma}_{k}\triangleq\frac{P\lvert\mathcal{A}_k\rvert}{{\sigma}_k^2} $ for $k\in\{\rm{b},\rm{e}\}$ and $u(\mathbf{s})\triangleq \frac{j(\mathbf{s})}{\sqrt{P}}$, we rewrite \eqref{CAP_MISOSE_Problem_1} as follows: \begin{equation}\label{CAP_MISOSE_Problem_2} \max_{\int_{\mathcal{A}}{\left| u(\mathbf{s}) \right|^2}\mathrm{d}\mathbf{s}=1} \frac{1+\overline{\gamma }_{\mathrm{b}}\left| \int_{\mathcal{A}}h_{\mathrm{b}}(\mathbf{s})u(\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}{1+\overline{\gamma }_{\mathrm{e}}\left| \int_{\mathcal{A}}{h_{\mathrm{e}}}(\mathbf{s})u(\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}.
\end{equation} Furthermore, for $k\in\{\rm{b},\rm{e}\}$, it holds that \begin{align} &1=\int_{\mathcal{A}}{\left\lvert u(\mathbf{s}) \right\rvert^2}\mathrm{d}\mathbf{s}\overset{\clubsuit}{=}\int_{\mathcal{A}}\int_{\mathcal{A}}u(\mathbf{s})\delta (\mathbf{s}-\mathbf{s}')u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}',\nonumber\\ &\left\lvert \int_{\mathcal{A}}{h_k(\mathbf{s})u(\mathbf{s})}\mathrm{d}\mathbf{s} \right\rvert^2=\int_{\mathcal{A}}\int_{\mathcal{A}}{{u(\mathbf{s})h_k(\mathbf{s})h_{k}^{*}(\mathbf{s}')u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}'}}\nonumber, \end{align} where $\delta(\cdot)$ is the Dirac delta function, step $\clubsuit$ follows from the fact that $\int_{{\mathcal{A}}}\delta({\mathbf{x}}-{\mathbf{x}}_0)f(\mathbf{x}){\rm{d}}{\mathbf{x}}=f({\mathbf{x}}_0)$ with $f(\cdot)$ being an arbitrary function defined on ${\mathcal{A}}$. Taken together, we obtain \begin{align} 1+\overline{\gamma }_k\left\lvert \int_{\mathcal{A}}{h_k(\mathbf{s})u(\mathbf{s})}\mathrm{d}\mathbf{s} \right\rvert^2 \!=\int_{\mathcal{A}}\int_{\mathcal{A}}{{u(\mathbf{s})A_k(\mathbf{s},\mathbf{s}^{\prime})}}u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}',\nonumber \end{align} where $A_k(\mathbf{s},\mathbf{s}^{\prime})\triangleq \delta (\mathbf{s}-\mathbf{s}^\prime)+\overline{\gamma }_{k}h_{k}(\mathbf{s})h_{k}^{*}(\mathbf{s}^\prime) $. Consequently, the objective function in \eqref{CAP_MISOSE_Problem_2} equals the generalized Rayleigh quotient given as follows: \begin{equation}\label{objective} {\mathsf{RQ}}(u(\mathbf{s}))\triangleq\frac{\int_{\mathcal{A}}\int_{\mathcal{A}}{u(\mathbf{s})A_{\rm{b}}(\mathbf{s},\mathbf{s}^{\prime})}u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}'} {\int_{\mathcal{A}}{\int_{\mathcal{A}}{u(\mathbf{s})A_{\rm{e}}(\mathbf{s},\mathbf{s}^{\prime})}}u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}'}. \end{equation} We note that the value of ${\mathsf{RQ}}(u(\mathbf{s}))$ is not affected by the norm of $u(\mathbf{s})$, i.e., $\int_{\mathcal{A}}{\left| u(\mathbf{s}) \right|^2\mathrm{d}\mathbf{s}}$. Therefore, the optimization problem in \eqref{CAP_MISOSE_Problem_1} is equivalent to the following one: \begin{equation}\label{objective_trans_final} \max_{u(\mathbf{s})}{\mathsf{RQ}}(u(\mathbf{s}))\Leftrightarrow\eqref{CAP_MISOSE_Problem_1}. \end{equation} To solve problem \eqref{objective_trans_final}, we define two functions as follows: \begin{align} Q\left( \mathbf{s},\mathbf{s}' \right)& \triangleq\delta (\mathbf{s}-\mathbf{s}')+\mu h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}'),\label{function_Q}\\ \hat{Q}\left( \mathbf{s},\mathbf{s}' \right)& \triangleq \delta (\mathbf{s}-\mathbf{s}')-\frac{\mu}{1+\mu g_{\mathrm{e}}}h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}'),\label{Q_invert} \end{align} where $\mu =-\frac{1}{g_{\mathrm{e}}}\pm \frac{1}{g_{\mathrm{e}}\sqrt{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}}$, and $g_{\mathrm{e}}=\int_{\mathcal{A}}{\left| h_{\mathrm{e}}(\mathbf{s}) \right|^2\mathrm{d}\mathbf{s}}$ represents the channel gain for Eve. Then, the following lemmas can be found. 
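Before stating these lemmas, we remark that, once the aperture is discretized, $Q\left( \mathbf{s},\mathbf{s}' \right)$ and $\hat{Q}\left( \mathbf{s},\mathbf{s}' \right)$ in \eqref{function_Q} and \eqref{Q_invert} become rank-one perturbations of the identity matrix, and their mutual inverse relationship (formalized in Lemma~\ref{lemma_1} below) reduces to the Sherman--Morrison identity. A minimal numerical sketch, with a randomly generated discrete response in place of $h_{\mathrm{e}}(\mathbf{s})$ and a hypothetical value of $\overline{\gamma }_{\mathrm{e}}$, is as follows.
\begin{verbatim}
import numpy as np

# Discrete analogues of Q and Q_hat: with G grid points and area element ds,
# the operator Q maps to the matrix I + mu*ds*h_e h_e^H, and Q_hat maps to
# I - mu/(1 + mu*g_e) * ds * h_e h_e^H, where g_e = ds * ||h_e||^2.
rng = np.random.default_rng(1)
G, ds = 256, 1.0 / 256
gamma_e_bar = 5.0                                  # hypothetical bar{gamma}_e
h_e = rng.standard_normal(G) + 1j * rng.standard_normal(G)
g_e = ds * np.sum(np.abs(h_e) ** 2)
mu = -1.0 / g_e + 1.0 / (g_e * np.sqrt(1 + gamma_e_bar * g_e))  # one root of mu

outer = ds * np.outer(h_e, np.conj(h_e))
Q = np.eye(G) + mu * outer
Q_hat = np.eye(G) - mu / (1 + mu * g_e) * outer

# Sherman-Morrison: Q_hat inverts Q (discrete counterpart of Lemma 1).
print(np.allclose(Q @ Q_hat, np.eye(G)))           # True
\end{verbatim}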
\vspace{-5pt} \begin{lemma}\label{lemma_1} Given $h_{\mathrm{e}}(\mathbf{s})$, $Q\left( \mathbf{s},\mathbf{s}' \right)$ and $\hat{Q}\left( \mathbf{s},\mathbf{s}' \right)$ are mutually invertible, i.e., they satisfy \begin{equation}\label{Q_Inversion} \begin{split} \int_{\mathcal{A}}Q\left( \mathbf{s},\mathbf{s}_1 \right)\hat{Q}\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}\mathbf{s}_1 &=\int_{\mathcal{A}}\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right)Q\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}\mathbf{s}_1\\ &=\delta(\mathbf{s}-\mathbf{s}'). \end{split} \end{equation} \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:A} for more details. \end{IEEEproof} \vspace{-5pt} \begin{lemma}\label{lemma_2} Given $h_{\mathrm{e}}(\mathbf{s})$, it holds that \begin{equation}\label{Indentity_Transform} \begin{split} \int_{\mathcal{A}}\!\int_{\mathcal{A}}\!Q\left( \mathbf{s}_1,\mathbf{s} \right)A_{\rm{e}}(\mathbf{s},\mathbf{s}') Q\left( \mathbf{s}' ,\mathbf{s}_1'\right){\rm{d}}\mathbf{s}{\rm{d}}\mathbf{s}'=\delta(\mathbf{s}_1-\mathbf{s}_1'). \end{split} \end{equation} \end{lemma} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:B} for more details. \end{IEEEproof} Motivated by the results in \textbf{Lemma \ref{lemma_2}}, we define the following function: \begin{align}\label{Transform_Secrecy_Rate_Current} {\nu }(\mathbf{s}_1)\triangleq\int_{{\mathcal{A}}}u({\mathbf{s}})\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right){\rm{d}}{\mathbf{s}}, \end{align} which, together with \eqref{Q_Inversion}, yields \begin{align} \int_{\mathcal{A}}{\nu }(\mathbf{s}_1)Q\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}{\mathbf{s}}_1 &=\int_{{\mathcal{A}}}\int_{\mathcal{A}}u({\mathbf{s}})\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right)Q\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}{\mathbf{s}}{\rm{d}}{\mathbf{s}}_1\nonumber\\ &=\int_{\mathcal{A}}u({\mathbf{s}})\delta(\mathbf{s}-\mathbf{s}'){\rm{d}}{\mathbf{s}}=u(\mathbf{s}'). \end{align} The above arguments imply that $u(\cdot)$ and ${\nu }(\cdot)$ can be mutually transformed. Therefore, we can transform the variable to be optimized in \eqref{objective_trans_final} from $u(\mathbf{s})$ to ${\nu }(\mathbf{s})$ by setting $u(\mathbf{s})=\int_{\mathcal{A}}{\nu }(\mathbf{s}_1)Q\left({\mathbf{s}}_1,\mathbf{s}\right){\rm{d}}{\mathbf{s}}_1$. As a result, the denominator of \eqref{objective} can be written as follows: \begin{equation}\label{Dominator_Transformed} \begin{split} &\int_{\mathcal{A}}{\int_{\mathcal{A}}{u(\mathbf{s})A_{\rm{e}}(\mathbf{s},\mathbf{s}^{\prime})}}u^*(\mathbf{s}')\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}'\\ &=\int_{\mathcal{A}}\int_{\mathcal{A}}{\nu }(\mathbf{s}_1){\hat{E}}(\mathbf{s}_1,\mathbf{s}_1'){\nu }^{*}(\mathbf{s}_1')\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_1', \end{split} \end{equation} where ${\hat{E}}(\mathbf{s}_1,\mathbf{s}_1')=\int_{\mathcal{A}}\int_{\mathcal{A}}Q\left({\mathbf{s}}_1,\mathbf{s}\right)A_{\rm{e}}(\mathbf{s},\mathbf{s}^{\prime}) Q^{*}\left({\mathbf{s}}_1',\mathbf{s}'\right)\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}'$. According to the definition of $Q(\cdot,\cdot)$, we have $Q^{*}\left({\mathbf{s}}_1',\mathbf{s}'\right)=Q\left(\mathbf{s}',{\mathbf{s}}_1'\right)$, which, together with \textbf{Lemma \ref{lemma_2}}, yields ${\hat{E}}(\mathbf{s}_1,\mathbf{s}_1')=\delta(\mathbf{s}_1-\mathbf{s}_1')$.
It follows that \begin{align} \int_{\mathcal{A}}\int_{\mathcal{A}}{\nu }(\mathbf{s}_1)\delta(\mathbf{s}_1-\mathbf{s}_1'){\nu }^{*}(\mathbf{s}_1')\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_1' =\int_{\mathcal{A}}{\left| \nu (\mathbf{s}_1) \right|^2\mathrm{d}\mathbf{s}_1}. \end{align} Taken together, we transform problem \eqref{objective_trans_final} into the following equivalent form: \begin{align}\label{CAP_MISOSE_Problem_4} \max_{\nu (\mathbf{s}_1)}\frac{\int_{\mathcal{A}}{\int_{\mathcal{A}}}\nu (\mathbf{s}_1)B(\mathbf{s}_1,\mathbf{s}_1^{\prime})\nu ^*(\mathbf{s}_1^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_1^{\prime}}{\int_{\mathcal{A}}{\left| \nu (\mathbf{s}_1) \right|^2}\mathrm{d}\mathbf{s}_1}, \end{align} where \begin{equation}\label{Semi_Def_Function_Original} B(\mathbf{s}_1,\mathbf{s}_1^{\prime})=\int_{\mathcal{A}}\int_{\mathcal{A}}Q\left( \mathbf{s}_1,\mathbf{s} \right)A_{\rm{b}}(\mathbf{s},\mathbf{s}') Q\left( \mathbf{s}' ,\mathbf{s}_1'\right){\rm{d}}\mathbf{s}{\rm{d}}\mathbf{s}'. \end{equation} Note that the objective function of problem \eqref{CAP_MISOSE_Problem_4} is not influenced by the norm of $\nu(\mathbf{s}_1)$, i.e., $\int_{\mathcal{A}}{\left| \nu(\mathbf{s}_1) \right|^2\mathrm{d}\mathbf{s}_1}$. Thus, problem \eqref{CAP_MISOSE_Problem_4} can be simplified by removing the denominator term therein. The final results are given as follows. \vspace{-5pt} \begin{theorem}\label{Theorem_CAP_MISOSE_Problem} The secrecy rate maximization problem defined in \eqref{CAP_MISOSE_Problem_1} is equivalent to the following: \begin{equation}\label{CAP_MISOSE_Problem_5} \nu^{\star}(\mathbf{s}_1)=\argmax_{\nu (\mathbf{s}_1)} \int_{\mathcal{A}}{\int_{\mathcal{A}}}{\nu } (\mathbf{s}_1)B(\mathbf{s}_1,\mathbf{s}_1^{\prime}){\nu } ^*(\mathbf{s}_1^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_1^{\prime}. \end{equation} After obtaining $\nu^{\star}(\mathbf{s}_1)$, the source current distribution that maximizes the secrecy rate can be calculated as follows: \begin{align}\label{optimal_u_rate_final_current} j_{\mathsf{msr}}\left( \mathbf{s} \right)=\frac{\sqrt{P}\int_{\mathcal{A}}{\nu }^{\star}(\mathbf{s}_1)Q\left({\mathbf{s}}_1,\mathbf{s}\right){\rm{d}}{\mathbf{s}}_1} {\sqrt{\int_{{\mathcal{A}}}\lvert\int_{\mathcal{A}}{\nu }^{\star}(\mathbf{s}_1)Q\left({\mathbf{s}}_1,\mathbf{s}\right){\rm{d}}{\mathbf{s}}_1\rvert^2{\rm{d}}{\mathbf{s}}}}. \end{align} \end{theorem} \vspace{-5pt} \begin{IEEEproof} The results follow directly from using \eqref{Transform_Secrecy_Rate_Current}. \end{IEEEproof} \subsection{Maximum Secrecy Rate \& Optimal Current Distribution} In this subsection, we aim to solve problem \eqref{CAP_MISOSE_Problem_5} and derive closed-form expressions for the optimal current distribution and the achieved MSR. \subsubsection{Maximum Secrecy Rate} The results in \textbf{Theorem \ref{Theorem_CAP_MISOSE_Problem}} suggest that the optimal solution to problem \eqref{CAP_MISOSE_Problem_5}, i.e., the rate-optimal source current distribution, corresponds to the principal eigenfunction of the operator $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$. Furthermore, the optimal objective value of problem \eqref{CAP_MISOSE_Problem_5} is given by the principal eigenvalue of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$.
Therefore, we have \begin{equation}\label{maximum_secrecy_rate_definition_final} {\mathcal{R}}_{\star}=\max_{\int_{\mathcal{A}}\lvert j({\mathbf{s}})\rvert^2 {\rm{d}}{\mathbf{s}}\leq P}{\mathcal{R}}_{j(\mathbf{s})}=\log_2(\lambda_{B}^{\max}), \end{equation} where $\lambda _{B}^{\max}$ denotes the principal eigenvalue of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$. To calculate each eigenvalue $\lambda_B$ of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$, we need to solve the following characteristic equation: \begin{equation}\label{character} \det(B(\mathbf{s}_1,\mathbf{s}_1^{\prime})-\lambda_B\delta(\mathbf{s}_1-\mathbf{s}_1^{\prime}))=0, \end{equation} where $\det(\cdot)$ here is utilized to calculate the Fredholm determinant of the operator $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})-\lambda_B\delta(\mathbf{s}_1-\mathbf{s}_1^{\prime})\triangleq{\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime})$, i.e., the product of all its eigenvalues \cite{gohberg2012traces}. Since directly solving equation \eqref{character} is challenging, we define \begin{align}\label{C_definition} C(\mathbf{s}_2,\mathbf{s}_2^{\prime})\triangleq&\int_{\mathcal{A}}{\int_{\mathcal{A}}}\hat{Q}(\mathbf{s}_2,\mathbf{s}_1){\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime}) \hat{Q}(\mathbf{s}^\prime_1,\mathbf{s}^\prime_2)\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}. \end{align} Then, the following lemma can be concluded. \vspace{-5pt} \begin{lemma}\label{Lemma_Fredholm} Given $h_{\mathrm{b}}(\mathbf{s})$ and $h_{\mathrm{e}}(\mathbf{s})$, it holds that \begin{equation}\label{C_derivation} \begin{split} C(\mathbf{s}_2,\mathbf{s}_2^{\prime})&=\overline{\gamma }_{\mathrm{b}}h_{\mathrm{b}}(\mathbf{s}_2)h_{\mathrm{b}}^{*}(\mathbf{s}_{2}^{\prime})-\lambda _B \overline{\gamma }_{\mathrm{e}} h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime})\\ &+(1-\lambda _B)\delta (\mathbf{s}_2-\mathbf{s}_{2}^{\prime}). \end{split} \end{equation} \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:C} for more details. \end{IEEEproof} According to \textbf{Lemma \ref{lemma_1}}, $\hat{Q}(\cdot,\cdot)$ is an invertible operator. Consequently, we have \begin{equation} \det({\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime}))=0\Leftrightarrow\det(C(\mathbf{s}_2,\mathbf{s}_2^{\prime}))=0, \end{equation} which means that we can equivalently transform equation \eqref{character} to the following equation: \begin{equation}\label{equation_trans} \det(C(\mathbf{s}_2,\mathbf{s}_2^{\prime})) =0. \end{equation} Note that the operator $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$ is Hermitian, i.e., $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})=C^*(\mathbf{s}_2^{\prime},\mathbf{s}_2)$, and thus its determinant is equal to the product of the eigenvalues, i.e., $\det(C(\mathbf{s}_2,\mathbf{s}_2^{\prime}))=\prod_{i=1}^{\infty}{\lambda _{C,i}}$ with $\lambda _{C,i}$ being the $i$th eigenvalue of $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$. Consequently, we can rewrite \eqref{equation_trans} as follows: \begin{equation}\label{equation_trans2} \det(C(\mathbf{s}_2,\mathbf{s}_2^{\prime})) =\prod\nolimits_{i=1}^{\infty}{\lambda _{C,i}}=0. \end{equation} By observing the mathematical structure of $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$ given in \eqref{C_derivation}, we conclude the following lemma.
\vspace{-5pt} \begin{lemma}\label{lem_eigenvalue} Given $h_{\mathrm{b}}(\mathbf{s})$ and $h_{\mathrm{e}}(\mathbf{s})$, the eigenvalues of $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$ are given as follows: \begin{subequations}\label{eigenvalue_C} \begin{align} &\lambda_{C,1}=\xi_1-\lambda_B+1,\ \lambda_{C,2}=\xi_2-\lambda_B+1,\\ &\lambda_{C,3}=\ldots=\lambda_{C,\infty}=-\lambda_B+1. \end{align} \end{subequations} Here, \begin{subequations} \begin{align} \xi_1&=\frac{\Delta+ \sqrt{\Delta^2+4\lambda _B \overline{\gamma}_{\mathrm{b}}\overline{\gamma}_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho})}}{2},\label{eigenvalue_C_solution1}\\ \xi_2&=\frac{\Delta- \sqrt{\Delta^2+4\lambda _B \overline{\gamma}_{\mathrm{b}}\overline{\gamma}_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho})}}{2},\label{eigenvalue_C_solution2} \end{align} \end{subequations} where $\Delta=\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}-\lambda _B \overline{\gamma}_{\mathrm{e}}g_{\mathrm{e}}$, $g_{\mathrm{b}}=\int_{\mathcal{A}}{\left| h_{\mathrm{b}}(\mathbf{s}) \right|^2\mathrm{d}\mathbf{s}}$ represents the channel gain for Bob, $\overline{\rho }=\frac{\left| \rho \right|^2}{g_{\mathrm{b}}g_{\mathrm{e}}}\in[ 0,1 ) $\footnote{We assume that the spatial responses of Bob and Eve are not parallel, i.e., $\overline{\rho }\ne1$. This condition arises when Bob and Eve are located at different positions, which represents the most general and practical scenario.} denotes the channel correlation factor between Bob and Eve, with $\rho \triangleq\int_{\mathcal{A}}{h_{\mathrm{b}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s})\mathrm{d}\mathbf{s}}$. \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:D} for more details. \end{IEEEproof} Substituting \eqref{eigenvalue_C} into \eqref{equation_trans2} gives \begin{equation}\label{equation_trans3} (\xi_1-\lambda_B+1)(\xi_2-\lambda_B+1)\prod\nolimits_{i=3}^{\infty}(-\lambda_B+1)=0. \end{equation} By solving the above equation with respect to $\lambda_B$, we can obtain all the eigenvalues of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$. The corresponding results are summarized as follows. \vspace{-5pt} \begin{theorem}\label{lem_principal} Given $h_{\mathrm{b}}(\mathbf{s})$ and $h_{\mathrm{e}}(\mathbf{s})$, the principal eigenvalue of operator $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$ can be expressed as follows: \begin{equation}\label{max_secrecy_rate_maximum_eigenvalue} \lambda _{B}^{\max}=1+\frac{\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) \right)}{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}. \end{equation} \end{theorem} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:E} for more details. \end{IEEEproof} Consequently, we derive the closed-form expression for the MSR in the following theorem. \vspace{-5pt} \begin{theorem} Given $h_{\mathrm{b}}(\mathbf{s})$ and $h_{\mathrm{e}}(\mathbf{s})$, the MSR is given by \begin{equation}\label{max_secrecy_rate} {\mathcal{R}}_{\star}=\log_2\left(1+\frac{\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) \right)}{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}\right) \triangleq{R}(g_{\mathrm{b}},g_{\mathrm{e}},\overline{\rho }). \end{equation} \end{theorem} \vspace{-5pt} \begin{IEEEproof} The results follow by inserting \eqref{max_secrecy_rate_maximum_eigenvalue} into \eqref{maximum_secrecy_rate_definition_final}. 
\end{IEEEproof} \vspace{-5pt} \begin{remark}\label{rem_gen} The results in \eqref{max_secrecy_rate} suggest that the MSR is determined by the channel gain of each user and their channel correlation factor. This expression is applicable to any aperture, regardless of its location, shape, and size. Further, the above derivations are not specific to any particular channel and can be directly extended to various channel types. \end{remark} \vspace{-5pt} \subsubsection{Optimal Current Distribution} Having calculated the MSR ${\mathcal{R}}_{\star}$, we turn to the source current that achieves it. \vspace{-5pt} \begin{lemma}\label{lem_optimal_u} The optimal solution to the simplified secrecy rate maximization problem defined in \eqref{CAP_MISOSE_Problem_5} is given by \begin{align}\label{optimal_u_rate_step0} \nu^{\star}(\mathbf{s}_1)=\int_{{\mathcal{A}}}u^{\star}({\mathbf{s}})\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right){\rm{d}}{\mathbf{s}}, \end{align} where $u^\star\left( \mathbf{s} \right)$ is the principal eigenfunction of $C(\mathbf{s},\mathbf{s}^{\prime})$. \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:F} for more details. \end{IEEEproof} Following the derivation steps outlined in Appendix \ref{Appendix:D}, the principal eigenfunction of $C(\mathbf{s},\mathbf{s}^{\prime})$ is given by \begin{equation}\label{optimal_u_rate_step1} u^{\star}\left( \mathbf{s} \right) ={h^*_{\mathrm{b}}(\mathbf{s})+\frac{\xi_1-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}h^*_{\mathrm{e}}(\mathbf{s})}, \end{equation} which is obtained by substituting the principal eigenvalue $\xi=\xi_1$ into \eqref{b_a}. By substituting \eqref{optimal_u_rate_step0} and \eqref{optimal_u_rate_step1} into \eqref{optimal_u_rate_final_current} and using the results in \textbf{Lemma \ref{lemma_2}} for simplification, the optimal current distribution that maximizes the secrecy rate is given in the following theorem. \vspace{-5pt} \begin{theorem} The optimal source current distribution that maximizes the secrecy transmission rate is given by \begin{equation}\label{optimal_j} j_{\mathsf{msr}}\left( \mathbf{s} \right)=\sqrt{P}\frac{h^*_{\mathrm{b}}(\mathbf{s})+\frac{\xi_1-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}h^*_{\mathrm{e}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})+\frac{\xi_1-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}h^*_{\mathrm{e}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}. \end{equation} \end{theorem} \vspace{-5pt} \subsubsection{Asymptotic Analysis}\label{Section: Asymptotic Analysis} To gain further insights into the system, we conduct an asymptotic analysis of the MSR in both the low-SNR and high-SNR regimes. In the low-SNR regime, i.e., $P\rightarrow0$, we have $\overline{\gamma }_{\mathrm{b}}\rightarrow0$ and $\overline{\gamma }_{\mathrm{e}}\rightarrow0$. According to the results in {\textbf{Theorem \ref{lem_principal}}}, we obtain \begin{align}\label{Asymptotic_Analysis_Important_Property} \xi_1-\lambda _{B}^{\max}+1=0\Rightarrow\xi_1= \frac{\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) \right)}{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}.
\end{align} It follows that \begin{align} \lim_{P\rightarrow0}\frac{\xi_1}{\overline{\gamma}_{\mathrm{b}}\rho}= \lim_{P\rightarrow0}\frac{g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) \right)}{\rho(1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}})}=\frac{g_{\mathrm{b}}}{\rho}, \end{align} which, together with the fact that $\lim_{P\rightarrow0}\frac{\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}=\frac{g_{\mathrm{b}}}{\rho}$, yields \begin{align} \lim_{P\rightarrow0}\frac{\xi_1-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}=\frac{g_{\mathrm{b}}}{\rho}-\frac{g_{\mathrm{b}}}{\rho}=0. \end{align} Therefore, in the low-SNR regime, the optimal source current distribution degenerates into the following form: \begin{align}\label{Optimal_Current_Rate_Low_SNR} \lim_{P\rightarrow0}j_{\mathsf{msr}}\left( \mathbf{s} \right)\simeq\sqrt{P}\frac{h^*_{\mathrm{b}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}, \end{align} which is parallel to Bob's spatial response, i.e., $h^*_{\mathrm{b}}(\mathbf{s})$. \vspace{-5pt} \begin{remark} The results in \eqref{Optimal_Current_Rate_Low_SNR} suggest that in the low-SNR regime, the rate-optimal source current distribution simplifies to MRT beamforming, which aims to maximize the legitimate user's signal power. \end{remark} \vspace{-5pt} Based on \eqref{Optimal_Current_Rate_Low_SNR}, the low-SNR MSR is given by \begin{align}\label{Optimal_Rate_Low_SNR} \lim_{P\rightarrow0}{\mathcal{R}}_{\star}\simeq\log _2\left( \frac{1+\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}}{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\overline{\rho }} \right)=\mathcal{R} _{\mathsf{mrt}}, \end{align} where $\mathcal{R} _{\mathsf{mrt}}$ represents the secrecy rate achieved by MRT beamforming, i.e., $j_{\mathsf{mrt}}\left( \mathbf{s} \right) =\sqrt{P}\frac{h^*_{\mathrm{b}}\left( \mathbf{s} \right)}{\sqrt{\int_{\mathcal{A}}{\left| h_{\mathrm{b}}\left( \mathbf{s} \right) \right|^2\mathrm{d}\mathbf{s}}}}$. In the high-SNR regime, i.e., $P\rightarrow\infty$, we have $\overline{\gamma }_{\mathrm{b}}\rightarrow\infty$ and $\overline{\gamma }_{\mathrm{e}}\rightarrow\infty$, which, together with \eqref{Asymptotic_Analysis_Important_Property}, leads to \begin{align} \lim_{P\rightarrow\infty}\frac{\xi_1}{\overline{\gamma}_{\mathrm{b}}\rho}= \lim_{P\rightarrow\infty}\frac{g_{\mathrm{b}}( 1\!+\!\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}( 1\!-\!\overline{\rho }))}{\rho(1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}})}=\frac{g_{\mathrm{b}}}{\rho}( 1-\overline{\rho } ). \end{align} Recalling that $\lim_{P\rightarrow\infty}\frac{\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}=\frac{g_{\mathrm{b}}}{\rho}$ yields \begin{align} \lim_{P\rightarrow\infty}\frac{\xi_1-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}=\frac{g_{\mathrm{b}}}{\rho}( 1-\overline{\rho } )-\frac{g_{\mathrm{b}}}{\rho}=-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho } . 
\end{align} Therefore, in the high-SNR regime, the optimal source current distribution degenerates into the following form: \begin{align}\label{Optimal_Current_Rate_High_SNR} \lim_{P\rightarrow\infty}j_{\mathsf{msr}}\left( \mathbf{s} \right)\simeq\sqrt{P}\frac{h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}. \end{align} We observe that \begin{subequations} \begin{align} &\int_{\mathcal{A}}\left(h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})\right)h_{\mathrm{e}}(\mathbf{s}){\rm{d}}{\mathbf{s}} =\rho^{*}-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }g_{\mathrm{e}},\label{ZF_Rate_Condition1}\\ &\int_{\mathcal{A}}\left(h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})\right)h_{\mathrm{b}}(\mathbf{s}){\rm{d}}{\mathbf{s}} =g_{\mathrm{b}}-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }\rho,\label{ZF_Rate_Condition2} \end{align} \end{subequations} which, together with the fact that $\rho =\int_{\mathcal{A}}{h_{\mathrm{b}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s})\mathrm{d}\mathbf{s}}$, yields \begin{align} \eqref{ZF_Rate_Condition1} = \rho^{*}-\frac{g_{\mathrm{b}}}{\rho}\frac{\left| \rho \right|^2}{g_{\mathrm{b}}g_{\mathrm{e}}}g_{\mathrm{e}}=0,~\eqref{ZF_Rate_Condition2}= g_{\mathrm{b}}(1-\overline{\rho }), \end{align} which suggests that $h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})$ is orthogonal to $h_{\mathrm{e}}(\mathbf{s})$. \vspace{-5pt} \begin{remark} The above arguments imply that in the high-SNR regime, the rate-optimal source current distribution simplifies to ZF beamforming, which aims to minimize the information leakage to the eavesdropper. \end{remark} \vspace{-5pt} Based on \eqref{Optimal_Current_Rate_High_SNR}, the high-SNR MSR is given by \begin{align}\label{Optimal_Rate_High_SNR} \lim_{P\rightarrow\infty}{\mathcal{R}}_{\star}\simeq\log_2\left( \overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1-\overline{\rho } \right) \right)=\mathcal{R} _{\mathsf{zf}}, \end{align} where $\mathcal{R} _{\mathsf{zf}}$ represents the secrecy rate achieved by ZF beamforming, i.e., $j_{\mathsf{zf}}\left( \mathbf{s} \right) =\sqrt{P}\frac{h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}$. These findings reveal that MRT and ZF beamforming represent two extremes of optimal secure beamforming strategies. At high SNR, where information leakage dominates over Gaussian noise, ZF beamforming performs optimally by nullifying Eve's signal. Conversely, at low SNR, where information leakage is negligible, MRT beamforming is preferred as it maximizes Bob's signal power. Therefore, the optimal current distribution resembles ZF beamforming in the high-SNR regime and MRT beamforming in the low-SNR regime. \subsection{Typical Cases}\label{discussion_MSR} As stated in \textbf{Remark~\ref{rem_gen}}, the derived expression for the MSR is applicable to arbitrary apertures and channel types. In this subsection, we specialize the aperture $\mathcal{A}$ to specific cases, such as planar CAPAs and planar SPDAs.
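Before doing so, the following minimal numerical sketch illustrates the behaviour discussed above. It uses randomly generated discrete responses and hypothetical SNR values (rather than a physical channel model), computes the MSR as the base-$2$ logarithm of the largest generalized eigenvalue obtained from a discretization of the quotient in \eqref{objective}, and compares it with the secrecy rates achieved by MRT and ZF beamforming: at low SNR the MSR essentially coincides with the MRT-based rate, while at high SNR it approaches the ZF-based rate.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Discretize the aperture into G points with area element ds, and draw
# hypothetical responses h_b, h_e at random.
rng = np.random.default_rng(3)
G, ds = 200, 1.0 / 200
h_b = rng.standard_normal(G) + 1j * rng.standard_normal(G)
h_e = rng.standard_normal(G) + 1j * rng.standard_normal(G)
g_b, g_e = ds * np.sum(np.abs(h_b)**2), ds * np.sum(np.abs(h_e)**2)
rho = ds * np.sum(h_b * np.conj(h_e))
rho_bar = np.abs(rho)**2 / (g_b * g_e)

def rates(gb_bar, ge_bar):
    # Discretized operators A_b, A_e (identity plus a rank-one term); the MSR
    # is log2 of the largest generalized eigenvalue of the pair (A_b, A_e).
    A_b = np.eye(G) + gb_bar * ds * np.outer(h_b, np.conj(h_b))
    A_e = np.eye(G) + ge_bar * ds * np.outer(h_e, np.conj(h_e))
    msr = np.log2(eigh(A_b, A_e, eigvals_only=True)[-1])
    # Secrecy rates achieved by the MRT and ZF current distributions.
    r_mrt = np.log2((1 + gb_bar * g_b) / (1 + ge_bar * g_e * rho_bar))
    r_zf = np.log2(1 + gb_bar * g_b * (1 - rho_bar))
    return msr, r_mrt, r_zf

for snr in (0.01, 1.0, 100.0):   # low, moderate, and high SNR (assumption)
    msr, r_mrt, r_zf = rates(snr, snr)
    print(f"snr={snr}: MSR={msr:.3f}, MRT={r_mrt:.3f}, ZF={r_zf:.3f}")
\end{verbatim}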
To facilitate theoretical investigations into fundamental performance limits and asymptotic behaviors, we consider LoS channels. In particular, the channel response between point $\mathbf{s}\in\mathcal{A}$ and user $k\in\{\rm{b},\rm{e}\}$ is modeled as follows \cite{zhao2024continuous}: \begin{equation}\label{channel_model} h_k\left( \mathbf{s} \right) =\frac{\mathrm{j}k_0\eta \mathrm{e}^{-\mathrm{j}k_0\left\| \mathbf{r}_k-\mathbf{s} \right\|}}{4\pi \left\| \mathbf{r}_k-\mathbf{s} \right\|}\sqrt{\frac{\left| \mathbf{e}^{\mathsf{T}}(\mathbf{s}-\mathbf{r}_k) \right|}{\left\| \mathbf{r}_k-\mathbf{s} \right\|}}. \end{equation} Here, the term $\sqrt{\frac{\left| \mathbf{e}^{\mathsf{T}}(\mathbf{s}-\mathbf{r}_k) \right|}{\left\| \mathbf{r}_k-\mathbf{s} \right\|}}$ models the effect of the projected aperture of the BS array, as indicated by the projection of the array's normal vector ${\mathbf{e}}\in{\mathbbmss{R}}^{3\times1}$ onto the wave propagation direction at point $\mathbf{s}$. Additionally, $\frac{\mathrm{j}k_0\eta \mathrm{e}^{-\mathrm{j}k_0\left\| \mathbf{r}_k-\mathbf{s} \right\|}}{4\pi \left\| \mathbf{r}_k-\mathbf{s} \right\|}$ represents the impact of free-space EM propagation \cite{ouyang2024impact}, where $\eta=120 \pi~\Omega$ denotes the impedance of free space, and $k_0=\frac{2\pi}{\lambda}$ with $\lambda$ being the wavelength represents the wavenumber. \begin{figure}[!t] \centering \includegraphics[height=0.24\textwidth]{CAP_MISOSE.eps} \caption{Illustration for a planar CAPA.} \vspace{-8pt} \label{planar_capa} \end{figure} \subsubsection{Planar CAPA} We consider that the BS employs a planar CAPA situated on the $x$-$z$ plane and centered at the origin, as depicted in Fig. \ref{planar_capa}. The edges of $\mathcal{A}$ are parallel to the coordinate axes, with physical dimensions $L_x$ and $L_z$ along the $x$- and $z$-axes, respectively. It follows that \begin{equation} {\mathcal{A}}=\{(x,0,z)\,|\,x\in\left[-{L_x}/{2},{L_x}/{2}\right],z\in\left[-{L_z}/{2},{L_z}/{2}\right]\}, \end{equation} with the size $\lvert \mathcal{A} \rvert=L_xL_z$. For each user $k \in \{\rm{b}, \rm{e}\}$, let $r_k$ denote the distance from the center of $\mathcal{A}$ to the center of $\mathcal{A}_k$, and let $\phi_k \in [0, \pi]$ and $\theta_k \in [0, \pi]$ represent the corresponding azimuth and elevation angles, respectively. Accordingly, $\mathcal{A}_k$ is centered at ${\mathbf{r}}_{k}=[r_k\Phi_k, r_k\Psi_k, r_k\Theta_k]^{\mathsf{T}}$, where $\Phi_k\triangleq\cos{\phi_k}\sin{\theta_k}$, $\Psi_k\triangleq\sin{\phi_k}\sin{\theta_k}$, and $\Theta_k\triangleq\cos{\theta_k}$. Consequently, for $\mathbf{s}=[x,0,z]^{\mathsf{T}}\in\mathcal{A}$, by inserting $\mathbf{e}=[0,1,0]^\mathsf{T}$ and $\left\| \mathbf{r}_k-\mathbf{s} \right\| =(x^2+z^2-2r_k\left( \Phi _kx+\Theta _kz \right) +r_{k}^{2})^{\frac{1}{2}}$ into \eqref{channel_model}, we derive the channel response for user $k$ as follows: \begin{equation}\label{case_stduy_G} \begin{split} h_k\left( \mathbf{s} \right) &=\frac{\mathrm{j}k_0\eta \sqrt{r_k\Psi _k}\mathrm{e}^{-\mathrm{j}k_0(x^2+z^2-2r_k\left( \Phi _kx+\Theta _kz \right) +r_{k}^{2})^{\frac{1}{2}}}}{\sqrt{4\pi}(x^2+z^2-2r_k\left( \Phi _kx+\Theta _kz \right) +r_{k}^{2})^{\frac{3}{4}}}\\ &\triangleq h_k(x,z). \end{split} \end{equation} To characterize the MSR, we first derive the channel gain for each user and the channel correlation factor, as given in the lemma below.
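As a simple cross-check, the channel gain $g_k=\int_{\mathcal{A}}\lvert h_k(\mathbf{s})\rvert^2{\rm{d}}{\mathbf{s}}$ can also be evaluated directly from \eqref{case_stduy_G} by numerical integration and compared with the closed-form expression stated in Lemma~\ref{lem_planar_capa} below. A minimal sketch is given here; the carrier wavelength, aperture dimensions, and user position are hypothetical values chosen only for illustration.
\begin{verbatim}
import numpy as np

# Hypothetical geometry and carrier parameters.
lam = 0.125                       # wavelength [m] (assumption)
k0, eta = 2 * np.pi / lam, 120 * np.pi
Lx = Lz = 2.0                     # aperture dimensions [m] (assumption)
r_k, phi_k, theta_k = 10.0, np.pi / 3, np.pi / 4
Phi = np.cos(phi_k) * np.sin(theta_k)
Psi = np.sin(phi_k) * np.sin(theta_k)
Theta = np.cos(theta_k)

# Mid-point grid over the aperture; |h_k(x,z)|^2 is smooth, so a simple
# Riemann sum is adequate for the integral.
N = 400
x = (np.arange(N) + 0.5) * (Lx / N) - Lx / 2
z = (np.arange(N) + 0.5) * (Lz / N) - Lz / 2
X, Z = np.meshgrid(x, z, indexing="ij")
dist2 = X**2 + Z**2 - 2 * r_k * (Phi * X + Theta * Z) + r_k**2
abs_h2 = k0**2 * eta**2 * r_k * Psi / (4 * np.pi * dist2**1.5)
g_numeric = np.sum(abs_h2) * (Lx / N) * (Lz / N)

# Closed-form expression: a sum of four arctangent terms.
g_closed = 0.0
for xs in (Lx / (2 * r_k) + Phi, Lx / (2 * r_k) - Phi):
    for zs in (Lz / (2 * r_k) + Theta, Lz / (2 * r_k) - Theta):
        g_closed += np.arctan(xs * zs / (Psi * np.sqrt(Psi**2 + xs**2 + zs**2)))
g_closed *= k0**2 * eta**2 / (4 * np.pi)

print(f"numerical: {g_numeric:.6e}   closed form: {g_closed:.6e}")
\end{verbatim}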
\vspace{-5pt} \begin{lemma}\label{lem_planar_capa} With the planar CAPA, the channel gain for user $k\in\{{\mathrm{b},\mathrm{e}}\}$ can be expressed as follows: \begin{equation}\label{g_planar_capa} g_k^{\mathrm{c}}=\frac{k_{0}^{2}\eta ^2}{4\pi}\sum_{x\in \mathcal{X} _k}{\sum_{z\in \mathcal{Z} _k}{\arctan}}\bigg( \frac{xz}{\Psi _k\sqrt{\Psi _{k}^{2}+x^2+z^2}} \bigg), \end{equation} where ${\mathcal{X}}_k\triangleq\{\frac{L_x}{2r_k}\pm \Phi_k\}$, and ${\mathcal{Z}}_k\triangleq\{\frac{L_z}{2r_k}\pm \Theta_k\}$. Additionally, the channel correlation factor can be approximated as follows: \begin{align}\label{rho_planar_capa} \begin{split} \rho _{\mathrm{c}} =&\frac{\pi ^2\left| \mathcal{A} \right|}{4T^2}\sum\nolimits_{t=1}^T{\sum\nolimits_{t^{\prime}=1}^T{\sqrt{\left( 1-\psi _{t}^{2} \right) \left( 1-\psi _{t^{\prime}}^{2} \right)}}}\\ &\times h_{\mathrm{b}}^{*}({L_x\psi _t}/{2},{L_z\psi _{t^{\prime}}}/{2})h_{\mathrm{e}}({L_x\psi _t}/{2},{L_z\psi _{t^{\prime}}}/{2}), \end{split} \end{align} where $T$ is a complexity-vs-accuracy tradeoff parameter, and $\psi _t=\cos \left( \frac{\left( 2t-1 \right) \pi}{2T} \right) $. \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:G} for more details. \end{IEEEproof} Given that the MSR in \eqref{max_secrecy_rate} is expressed as a function of the channel gain of each user and the channel correlation factor, after deriving $g_{k}^{\mathrm{c}}$ and $\rho _{\mathrm{c}}$, we can obtain the MSR achieved by the planar CAPA as $\mathcal{R}_{\mathrm{c}}=R\left( g_{\mathrm{b}}^{\mathrm{c}},g_{\mathrm{e}}^{\mathrm{c}},\overline{\rho} _{\mathrm{c}} \right) $, where $\overline{\rho }_{\mathrm{c}}=\frac{\left| \rho_{\mathrm{c}} \right|^2}{g_{\mathrm{b}}^{\mathrm{c}}g_{\mathrm{e}}^{\mathrm{c}}}$. To further elucidate the properties of planar CAPAs in secure transmission, we analyze the asymptotic MSR as $\mathcal{A}$ becomes infinitely large, i.e., $L_x,L_z\rightarrow\infty$. Under this condition, the channel gain for user $k\in\{{\mathrm{b},\mathrm{e}}\}$ satisfies \begin{equation}\label{asy_g_planarcapa} \lim_{L_x,L_z\rightarrow \infty} g_{k}^{\mathrm{c}}=\frac{k_{0}^{2}\eta ^2}{4\pi}\cdot\frac{4\pi}{2}=\frac{k_{0}^{2}\eta ^2}{2}, \end{equation} which is a constant value. Additionally, while $\lim_{L_x,L_z\rightarrow\infty}\overline{\rho }_{\mathrm{c}}$ is computationally intractable, the numerical results shown in \cite{boqun_jstsp,liu2024road} indicate that this limit is much less than one, i.e., $1-\lim_{L_x,L_z\rightarrow\infty}\overline{\rho }_{\mathrm{c}}\approx 1$. Based on these findings, we can determine the asymptotic MSR as follows. \vspace{-5pt} \begin{corollary}\label{cor_planar_capa} When $L_x,L_z\rightarrow\infty$, the asymptotic MSR for the planar CAPA is given by \begin{equation}\label{asyR_planar_capa} \lim_{L_x,L_z\rightarrow \infty} \mathcal{R} _{\mathrm{c}}\approx \log _2\left(1+\frac{\overline{\gamma }_{\mathrm{b}}k_{0}^{2}\eta ^2}{2}\right). \end{equation} \end{corollary} \vspace{-5pt} \begin{IEEEproof} The results are obtained by substituting $1-\overline{\rho }\approx 1$ and the results in \eqref{asy_g_planarcapa} into \eqref{max_secrecy_rate}. \end{IEEEproof} \vspace{-5pt} \begin{remark}\label{rem_current} For the infinitely large planar CAPA, the received power at Bob is maximized by the MRT source current, i.e., $j_{\mathsf{msr}}\left( \mathbf{s} \right) =j_{\mathsf{mrt}}\left( \mathbf{s} \right)$, while Eve's received power is minimized to zero, reaching the upper limit of secrecy performance.
\end{remark} \vspace{-5pt} \begin{remark}\label{rem_finite} The MSR will converge to an upper bound rather than increasing indefinitely with the aperture size, an observation that is intuitively reasonable from the standpoint of energy conservation. \end{remark} \vspace{-5pt} \subsubsection{Planar SPDA} For comparison, we next examine a case where the above planar CAPA is partitioned into $M=M_zM_x$ spatially discrete elements. Here, $M_{x}=2\tilde{M}_x+1$ and $M_{z}=2\tilde{M}_z+1$ represent the number of antenna elements along the $x$- and $z$-axes, respectively, as illustrated in Fig. \ref{planar_spda}. The size of each element is denoted by $\sqrt{{A}_{\mathrm{s}}}\times\sqrt{{A}_{\mathrm{s}}}$, and the inter-element distance is $d$, where $\sqrt{{A}_{\mathrm{s}}}\leq d\ll r_k$. Under this configuration, the central location of each element is $[m_xd,0,m_zd]^{\mathsf{T}}$, where $m_x\in \mathcal{M} _x\triangleq\{0,\pm1,\ldots,\pm\tilde{M}_x\}$ and $m_z\in \mathcal{M} _z\triangleq\{0,\pm1,\ldots,\pm\tilde{M}_z\}$. Additionally, we have $L_x\approx M_xd$, $L_z\approx M_zd$, and \begin{equation} \mathcal{A} =\left\{ (m_xd+\ell _x,0,m_zd+\ell _z)\left| \begin{array}{c} \ell _x,\ell _z\in [-\frac{\sqrt{A_{\mathrm{s}}}}{2},\frac{\sqrt{A_{\mathrm{s}}}}{2}],\\ m_x\in \mathcal{M} _x,m_z\in \mathcal{M} _z\\ \end{array} \right. \right\}. \end{equation} \begin{figure}[!t] \centering \includegraphics[height=0.24\textwidth]{SPD_MISOSE.eps} \caption{Illustration for a planar SPDA.} \vspace{-8pt} \label{planar_spda} \end{figure} For the planar SPDA, given that the size of each element is much smaller than the distance between the BS and the user, i.e., $\sqrt{{A}_{\mathrm{s}}}\ll r_k$, the variation in the channel across an element is negligible. Consequently, we can express the channel gain and correlation factor as follows: \begin{align} g_{k}^{\mathrm{s}}&=A_{\mathrm{s}}\sum_{m_x\in \mathcal{M} _x}^{}{\sum_{m_z\in \mathcal{M} _z}^{}{\left| h_k(m_xd,m_zd) \right|}^2},\label{g_planar_spda0}\\ \rho _{\mathrm{s}}&=A_{\mathrm{s}}\sum_{m_x\in \mathcal{M} _x}{\sum_{m_z\in \mathcal{M} _z}{h_{\mathrm{b}}^{*}(m_xd,m_zd)h_{\mathrm{e}}(m_xd,m_zd)}}. \end{align} Notably, the channel gain $g_{k}^{\mathrm{s}}$ can be calculated as follows. \vspace{-5pt} \begin{lemma}\label{lem_planar_spda} With the planar SPDA, the channel gain for user $k\in\{\mathrm{b},\mathrm{e}\}$ is given by \begin{equation}\label{g_planar_spda} g_k^{\mathrm{s}}=\!\frac{A_{\mathrm{s}}k_{0}^{2}\eta ^2}{4\pi d^2}\!\sum_{x\in \mathcal{X} _k}{\sum_{z\in \mathcal{Z} _k}\!{\arctan}}\bigg( \frac{xz}{\Psi _k\sqrt{\Psi _{k}^{2}\!+\!x^2\!+\!z^2}} \bigg)\!=\zeta _{\mathrm{oc}}g_k^{\mathrm{c}}, \end{equation} where $\zeta _{\mathrm{oc}}\triangleq\frac{A_{\mathrm{s}}}{d^2}$ represents the array occupation ratio (AOR). \end{lemma} \vspace{-5pt} \begin{IEEEproof} Please refer to Appendix \ref{Appendix:I} for more details. \end{IEEEproof} \vspace{-5pt} \begin{remark}\label{rem_aor} It can be observed from \eqref{g_planar_spda} that the channel gain for each user achieved by the SPDA converges to that of the CAPA when $\zeta _{\mathrm{oc}}=1$. This result is intuitive, as an SPDA effectively becomes a CAPA when the AOR equals $1$.
\end{remark} \vspace{-5pt} Having derived the MSR for the SPDA, viz., $\mathcal{R}_{\mathrm{s}}=R\left( g_{\mathrm{b}}^{\mathrm{s}},g_{\mathrm{e}}^{\mathrm{s}},\overline{\rho} _{\mathrm{s}} \right) $ with $\overline{\rho }_{\mathrm{s}}=\frac{\left| \rho_{\mathrm{s}} \right|^2}{g_{\mathrm{b}}^{\mathrm{s}}g_{\mathrm{e}}^{\mathrm{s}}}$, we can then obtain its asymptotic expression by following steps similar to the derivations for the planar CAPA. In particular, as the numbers of antenna elements $M_x$ and $M_z$ grow, the MSR achieved by the SPDA approaches the following upper bound: \begin{equation}\label{asy_SPDA} \lim_{M_x,M_z\rightarrow \infty} \mathcal{R} _{\mathrm{s}}\approx \log _2\left( 1+\frac{\overline{\gamma }_{\mathrm{b}}k_{0}^{2}\eta ^2\zeta _{\mathrm{oc}}}{2} \right) . \end{equation} \section{Analysis of the Minimum Required Power}\label{sec_MRP} In this section, we analyze the MRP. \subsection{Problem Reformulation} The problem in \eqref{Minimum_Power_Definition} can be rewritten as follows: \begin{subequations} \begin{align} \min_{{p},\kappa (\mathbf{s})}~~ &~p\\ {\rm{s.t.}}~~~&\frac{1+p\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}\left| \int_{\mathcal{A}}{h_{\mathrm{b}}}(\mathbf{s})\kappa (\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}{1+p\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}\left| \int_{\mathcal{A}}{h_{\mathrm{e}}}(\mathbf{s})\kappa (\mathbf{s})\mathrm{d}\mathbf{s} \right|^2}\geq2^{\mathsf{R}_0},\label{constraint_1}\\ &p> 0,~\int_{\mathcal{A}}{\left| \kappa (\mathbf{s}) \right|^2\mathrm{d}\mathbf{s}}=1, \end{align} \end{subequations} where $\int_{\mathcal{A}}{\left| j(\mathbf{s}) \right|^2\mathrm{d}\mathbf{s}}= p$ and $\kappa (\mathbf{s})= \frac{j(\mathbf{s})}{\sqrt{p}}$. After multiplying both sides of \eqref{constraint_1} by $1+p\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}\left| \int_{\mathcal{A}}{h_{\mathrm{e}}}(\mathbf{s})\kappa (\mathbf{s})\mathrm{d}\mathbf{s} \right|^2$ and performing some basic manipulations, we obtain \begin{equation}\label{mrp_problem_0} \int_{\mathcal{A}}{\int_{\mathcal{A}}{}p\kappa ^*(\mathbf{s})D\left( \mathbf{s},\mathbf{s}^{\prime} \right) \kappa (\mathbf{s}^{\prime})\mathrm{d}\mathbf{s}}\mathrm{d}\mathbf{s}^{\prime}\ge 2^{\mathsf{R}_0}-1, \end{equation} where \begin{equation}\label{mrp_problem_1} D\left( \mathbf{s},\mathbf{s}^{\prime} \right) =\frac{\left| \mathcal{A} _{\mathrm{b}} \right|}{\sigma _{\mathrm{b}}^{2}}h_{\mathrm{b}}(\mathbf{s})h_{\mathrm{b}}^{*}(\mathbf{s}^{\prime})-2^{\mathsf{R}_0}\frac{\left| \mathcal{A} _{\mathrm{e}} \right|}{\sigma _{\mathrm{e}}^{2}}h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}^{\prime}). \end{equation} Furthermore, we have \begin{equation}\label{mrp_problem_2} \int_{\mathcal{A}}{\int_{\mathcal{A}}{}p\kappa ^*(\mathbf{s})D\left( \mathbf{s},\mathbf{s}^{\prime} \right) \kappa (\mathbf{s}^{\prime})\mathrm{d}\mathbf{s}}\mathrm{d}\mathbf{s}^{\prime}\le p\lambda _{D}^{\max}, \end{equation} where $\lambda _{D}^{\max}$ denotes the principal eigenvalue of the operator $D\left( \mathbf{s},\mathbf{s}^{\prime} \right)$. By combining \eqref{mrp_problem_0} with \eqref{mrp_problem_2}, it follows that $p\geq\frac{2^{\mathsf{R}_0}-1}{\lambda _{D}^{\max}}>0$. This implies that the minimum value of $p$, i.e., the MRP, can be expressed as follows: \begin{equation} \mathcal{P}_{\star}=\frac{2^{\mathsf{R}_0}-1}{\lambda _{D}^{\max}}.
\end{equation} In this case, we have \begin{equation} \int_{\mathcal{A}}{\int_{\mathcal{A}}\kappa ^*(\mathbf{s})D\left( \mathbf{s},\mathbf{s}^{\prime} \right) \kappa (\mathbf{s}^{\prime})\mathrm{d}\mathbf{s}}\mathrm{d}\mathbf{s}^{\prime}=\lambda _{D}^{\max}, \end{equation} which indicates that the associated $\kappa(\mathbf{s})$ aligns with the normalized principal eigenfunction of $D\left( \mathbf{s},\mathbf{s}^{\prime} \right)$. \subsection{Minimum Required Power \& Optimal Current Distribution} The above arguments imply that the MRP and the associated current distribution correspond to the principal eigenvalue and eigenfunction of $D\left( \mathbf{s},\mathbf{s}^{\prime} \right)$, respectively. Their closed-form expressions are given as follows. \vspace{-5pt} \begin{theorem} The MRP for CAPA-based secure transmission to guarantee a target secrecy rate $\mathsf{R}_0$ can be expressed as follows: \begin{equation}\label{MRP_expression} \mathcal{P}_{\star} =\frac{2\left( 2^{\mathsf{R}_0}-1 \right)}{\alpha +\sqrt{\alpha ^{2}+\beta }}, \end{equation} where $\alpha =\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}-2^{\mathsf{R}_0}\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}g_{\mathrm{e}}$, and $\beta =2^{\mathsf{R}_0+2}\left| \mathcal{A} _{\mathrm{b}} \right|\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{b}}^{-2}\sigma _{\mathrm{e}}^{-2}g_{\mathrm{b}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) $. The optimal current distribution is given by \begin{equation} \begin{split} j_{\mathsf{mrp}}\left( \mathbf{s} \right) =\sqrt{\mathcal{P}_{\star}}\frac{h^*_{\mathrm{b}}(\mathbf{s})-\tau h^*_{\mathrm{e}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})-\tau h^*_{\mathrm{e}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}, \end{split} \end{equation} where $\tau=\frac{{\left| \mathcal{A} _{\mathrm{b}} \right|g_{\mathrm{b}}}{\sigma _{\mathrm{b}}^{-2}}+{\left| \mathcal{A} _{\mathrm{e}} \right|g_{\mathrm{e}}2^{\mathsf{R}_0}}{\sigma _{\mathrm{e}}^{-2}}-\sqrt{\alpha ^{2}+\beta}}{2\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}\rho }$. \end{theorem} \vspace{-5pt} \begin{IEEEproof} The proof is similar to that of \textbf{Lemma~\ref{lem_eigenvalue}}. \end{IEEEproof} \vspace{-5pt} \begin{remark}\label{rem_gen2} The MRP given in \eqref{MRP_expression} is determined by the channel gain for each user and the channel correlation factor. This expression is applicable to any aperture, regardless of its location, shape, and size. \end{remark} \vspace{-5pt} We next compare the MRP with the required power achieved by ZF and MRT beamforming. When MRT beamforming is utilized, we have $j({\mathbf{s}})=\sqrt{p}\frac{h^*_{\mathrm{b}}(\mathbf{s})}{{\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}}$ (as per \eqref{Optimal_Current_Rate_Low_SNR}), and the minimum required transmission power to guarantee a secrecy transmission rate of $\mathsf{R}_0$ is given by \begin{equation}\label{P_mrt} \mathcal{P} _{\mathsf{mrt}}=\frac{2^{\mathsf{R}_0}-1}{\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}-2^{\mathsf{R}_0}\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}g_{\mathrm{e}}\overline{\rho }}.
\end{equation} By noting the fact that $\overline{\rho }\in[0,1]$, we can prove that \begin{align} \frac{\mathcal{P} _{\mathsf{mrt}}}{\mathcal{P}_{\star}}=\frac{1}{2}\frac{\alpha +\sqrt{\alpha ^{2}+\beta }}{\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}-2^{\mathsf{R}_0}\left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}g_{\mathrm{e}}\overline{\rho }}>1. \end{align} Furthermore, we have the following observations. \vspace{-5pt} \begin{remark}\label{rem_MRP} It is observed that the proposed optimal current distribution outperforms MRT beamforming in terms of minimizing the transmit power. Besides, the optimal current distribution $j_{\mathsf{mrp}}\left( \mathbf{s} \right) $ can achieve an arbitrarily large target secrecy rate as long as the transmission power is sufficiently high. In contrast, MRT beamforming cannot achieve a secrecy rate greater than $\log _2\left( \left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}} \right) -\log _2\left( \left| \mathcal{A} _{\mathrm{e}} \right|\sigma _{\mathrm{e}}^{-2}g_{\mathrm{e}} \overline{\rho }\right)$, regardless of the transmission power level. \end{remark} \vspace{-5pt} We then consider ZF beamforming, which yields $j(\mathbf{s})=\sqrt{p}\frac{h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})} {\sqrt{\int_{\mathcal{A}}\left\lvert h^*_{\mathrm{b}}(\mathbf{s})-\frac{g_{\mathrm{b}}}{\rho}\overline{\rho }h^*_{\mathrm{e}}(\mathbf{s})\right\rvert^2{\rm{d}}{\mathbf{s}}}}$ (as per \eqref{Optimal_Current_Rate_High_SNR}), and the minimum required transmission power to guarantee a secrecy transmission rate of $\mathsf{R}_0$ is given by \begin{equation}\label{P_zf} \mathcal{P} _{\mathsf{zf}}=\frac{2^{\mathsf{R}_0}}{\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}\left( 1-\overline{\rho } \right)}. \end{equation} By letting ${\mathsf{R}_0}\rightarrow\infty$, we have \begin{equation} \begin{split} \lim_{{\mathsf{R}_0}\rightarrow\infty}\frac{1}{\alpha +\sqrt{\alpha ^{2}+\beta }} &=\lim_{{\mathsf{R}_0}\rightarrow\infty}\frac{\sqrt{\alpha ^{2}+\beta }-\alpha}{\beta}\\ &=\frac{1}{2\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}\left( 1-\overline{\rho } \right)}, \end{split} \end{equation} which suggests that \begin{align}\label{rem_MRP_zf_compare} \lim_{{\mathsf{R}_0}\rightarrow\infty}\mathcal{P}_{\star}\simeq\frac{2^{\mathsf{R}_0}}{\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}g_{\mathrm{b}}\left( 1-\overline{\rho } \right)} =\mathcal{P} _{\mathsf{zf}}. \end{align} \vspace{-5pt} \begin{remark}\label{rem_MRP_zf} The results in \eqref{rem_MRP_zf_compare} suggest that ZF beamforming is asymptotically optimal when achieving an infinitely large secrecy rate target. \end{remark} \vspace{-5pt} \subsection{Typical Cases} Since the MRP is expressed as a function of the channel gains and the channel correlation factor, the MRP for both the planar CAPA and SPDA under the LoS channel can be readily obtained based on the results in \textbf{Lemmas~\ref{lem_planar_capa}} and \textbf{\ref{lem_planar_spda}}, respectively. Therefore, in this subsection, we will focus on the asymptotic behaviors of $\mathcal{P}_{\star}$ as $L_x,L_z\rightarrow\infty$ or $M_x,M_z\rightarrow\infty$ to gain further insights, where the considered channel model and array structure are the same as those detailed in Section \ref{discussion_MSR}.
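Before specializing to concrete apertures, we note that the comparison among $\mathcal{P}_{\star}$, $\mathcal{P}_{\mathsf{mrt}}$, and $\mathcal{P}_{\mathsf{zf}}$ is straightforward to reproduce numerically. The following Python sketch is ours and is purely illustrative: it simply evaluates \eqref{MRP_expression}, \eqref{P_mrt}, and \eqref{P_zf} for placeholder channel parameters that are not taken from the system model or simulations of this paper.
\begin{verbatim}
import numpy as np

# Placeholder parameters (illustrative only, not taken from the paper).
A_b, A_e = 1.0, 1.0                # effective receive apertures |A_b|, |A_e|
sig_b2, sig_e2 = 1.0, 1.0          # noise powers sigma_b^2, sigma_e^2
g_b, g_e, rho_bar = 2.0, 1.0, 0.3  # channel gains and correlation factor

def mrp(R0):
    # Minimum required power (closed-form MRP above).
    alpha = A_b / sig_b2 * g_b - 2**R0 * A_e / sig_e2 * g_e
    beta = 2**(R0 + 2) * A_b * A_e / (sig_b2 * sig_e2) \
           * g_b * g_e * (1.0 - rho_bar)
    return 2.0 * (2**R0 - 1.0) / (alpha + np.sqrt(alpha**2 + beta))

def p_mrt(R0):
    # Required power of MRT beamforming;
    # infeasible once R0 exceeds the MRT rate ceiling.
    den = A_b / sig_b2 * g_b - 2**R0 * A_e / sig_e2 * g_e * rho_bar
    return (2**R0 - 1.0) / den if den > 0 else np.inf

def p_zf(R0):
    # Required power of ZF beamforming.
    return 2**R0 / (A_b / sig_b2 * g_b * (1.0 - rho_bar))

for R0 in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(R0, mrp(R0), p_mrt(R0), p_zf(R0))
\end{verbatim}
For such placeholder parameters, the sketch reproduces the trends discussed in \textbf{Remarks \ref{rem_MRP}} and \textbf{\ref{rem_MRP_zf}}: MRT beamforming becomes infeasible beyond its rate ceiling, whereas the ZF power approaches the MRP as $\mathsf{R}_0$ grows.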
\subsubsection{Planar CAPA} The asymptotic MRP for the planar CAPA with infinitely large aperture is given as follows. \vspace{-5pt} \begin{corollary}\label{cor_MRP_capa} When $L_x,L_z\rightarrow\infty$, the MRP satisfies \begin{equation} \lim_{L_x,L_z\rightarrow \infty} \mathcal{P}_{\star} \approx \frac{2\left( 2^{\mathsf{R}_0}-1 \right)}{k_{0}^{2}\eta ^2\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}}. \end{equation} \end{corollary} \vspace{-5pt} \begin{IEEEproof} Similar to the proof of \textbf{Corollary~\ref{cor_planar_capa}}. \end{IEEEproof} \vspace{-5pt} \begin{remark}\label{rem_lowerbound} The results of \textbf{Corollary~\ref{cor_MRP_capa}} suggest that as the aperture size increases, the MRP for the planar CAPA will converge to a lower bound that is larger than zero. \end{remark} \vspace{-5pt} \subsubsection{Planar SPDA} In the case of the planar SPDA, when the number of antenna elements grows to infinity, i.e., $M_x,M_z\rightarrow\infty$, the MRP approaches the following lower bound: \begin{equation} \lim_{M_x,M_z\rightarrow \infty} \mathcal{P}_{\star} \approx \frac{2\left( 2^{\mathsf{R}_0}-1 \right)}{k_{0}^{2}\eta ^2\left| \mathcal{A} _{\mathrm{b}} \right|\sigma _{\mathrm{b}}^{-2}\zeta _{\mathrm{oc}}}. \end{equation} We note that for extremely large arrays, the minimum transmit power required by an SPDA for a given target secrecy rate reduces to that of the CAPA when $\zeta _{\mathrm{oc}}=1$. \section{Numerical Results}\label{numerical} In this section, numerical results are presented to demonstrate the secrecy performance achieved by CAPAs. For clarity, the simulations employ the planar arrays specified in Section \ref{discussion_MSR}. Unless otherwise specified, the simulation parameters are set as follows: $\lambda=0.125$ m, $\left| \mathcal{A} _{\mathrm{b}} \right|=\left| \mathcal{A} _{\mathrm{e}} \right|=\frac{\lambda^2}{4\pi}$, $\frac{P}{\sigma^2_{\mathrm{b}}}=\frac{P}{\sigma^2_{\mathrm{e}}}=10$ dB, $L_x=L_z$, $(r_{\mathrm{b}},\theta_{\mathrm{b}},\phi_{\mathrm{b}})=(10 \, \rm{m},\frac{\pi}{6},\frac{\pi}{6})$, $(r_{\mathrm{e}},\theta_{\mathrm{e}},\phi_{\mathrm{e}})=(20 \, \rm{m},\frac{\pi}{3},\frac{\pi}{3})$, and $T=100$. \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.255\textwidth]{R_P.eps} \caption{Secrecy rates versus power budget $P$.} \vspace{-5pt} \label{R_P} \end{figure} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.254\textwidth]{R_size.eps} \caption{MSRs versus aperture size $\left| \mathcal{A}\right|$.} \vspace{-8pt} \label{R_size} \end{figure} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.25\textwidth]{R_aor.eps} \caption{MSRs versus AOR $\zeta_{\mathrm{oc}}$.} \vspace{-13pt} \label{R_aor} \end{figure} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.265\textwidth]{P_R0.eps} \caption{Required powers versus target secrecy rate $\mathsf{R}_0$.} \vspace{-8pt} \label{P_R0} \end{figure} \subsection{Maximum Secrecy Rate} {\figurename} \ref{R_P} illustrates the secrecy rate achieved by the CAPA with different aperture sizes as a function of the power budget. As observed, the derived closed-form expressions closely align with the simulation results, which validates the correctness of our previously derived results. For comparison, the secrecy rates achieved by MRT and ZF beamforming schemes are also presented.
The results show that the proposed optimal current distribution yields a higher secrecy rate than both the MRT and ZF-based schemes. Moreover, as the transmit power increases, the secrecy rate achieved by MRT beamforming converges to a finite constant, while the rates achieved by ZF beamforming and the optimal current distribution increase monotonically. This demonstrates that the high-SNR slope for the MSR is greater than that achieved by MRT beamforming. From {\figurename} \ref{R_P}, it can also be observed that in the low-SNR regime, the secrecy rate achieved by MRT beamforming is nearly identical to that achieved by the optimal current distribution. This is because, at low SNRs, Gaussian noise dominates over information leakage, making MRT beamforming more effective for enhancing the secrecy rate. In contrast, at high SNRs, the secrecy rate achieved by ZF beamforming closely approaches that of the optimal current distribution. This is because, in the high-SNR regime, information leakage becomes the dominant factor over Gaussian noise, making leakage cancellation essential for improving the secrecy rate. These observations align with the discussions in Section \ref{Section: Asymptotic Analysis}. \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.25\textwidth]{P_size.eps} \caption{MRPs versus aperture size $\left| \mathcal{A}\right|$.} \vspace{-5pt} \label{P_size} \end{figure} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{3pt} \includegraphics[height=0.25\textwidth]{P_aor.eps} \caption{MRPs versus AOR $\zeta_{\mathrm{oc}}$.} \vspace{-8pt} \label{P_aor} \end{figure} {\figurename} \ref{R_size} demonstrates that for the same aperture size, the CAPA achieves superior secrecy performance compared to the conventional SPDA. To further highlight the performance gap between CAPA and SPDA in terms of achievable MSR, we plot the MSR for SPDAs against the AOR in {\figurename} \ref{R_aor}. This graph illustrates that the MSR for an SPDA gradually converges to that of a CAPA as the AOR approaches one, which corroborates the statements in \textbf{Remark~\ref{rem_aor}}. \subsection{Minimum Required Power} {\figurename} \ref{P_R0} illustrates the MRP and the transmission power needed for ZF and MRT beamforming to achieve different target secrecy rates ${\mathsf{R}}_0$. It can be observed that while the required powers for MRT beamforming are nearly identical to the MRP for small values of $\mathsf{R}_0$, the gap becomes significant as $\mathsf{R}_0$ grows, with MRT requiring substantially more power than the MRP. Notably, when the target secrecy rate exceeds a certain threshold, MRT beamforming becomes incapable of achieving it, regardless of the available transmission power. In contrast, the current distribution designed to achieve the MRP can support arbitrarily high secrecy rates, provided sufficient transmission power is available. These findings align with the discussions in \textbf{Remark \ref{rem_MRP}}. Furthermore, we observe that when achieving a high target secrecy rate, ZF beamforming yields performance virtually identical to that of the optimal current distribution, which corroborates the results discussed in \textbf{Remark \ref{rem_MRP_zf}}. The above observations highlight the superiority of the proposed optimal current design in terms of minimizing transmit power while guaranteeing a target secrecy rate. {\figurename} \ref{P_aor} shows the MRP for both CAPAs and SPDAs as a function of the AOR.
Similar to the MSR, the MRP for an SPDA gradually converges to that of a CAPA as the AOR approaches $1$. \section{Conclusion}\label{conclusion} This article has developed a novel secure transmission framework using CAPAs and analyzed its fundamental secrecy performance limits. We derived closed-form expressions for the MSR under a power constraint and the MRP to achieve a target secrecy rate, along with the corresponding optimal continuous current distributions. We proved that the rate-optimal source current simplifies to MRT beamforming in the low-SNR region and to ZF beamforming in the high-SNR region. Additionally, we showed that the power-optimal current converges to ZF beamforming in the high-SNR regime. Through both theoretical analyses and numerical simulations, we demonstrated that CAPAs provide superior secrecy performance compared to conventional SPDAs. These findings underscore the potential of CAPAs as a promising paradigm for secure wireless communication. \begin{appendix} \setcounter{equation}{0} \renewcommand\theequation{A\arabic{equation}} \subsection{Proof of \textbf{Lemma \ref{lemma_1}}}\label{Appendix:A} Inserting \eqref{function_Q} and \eqref{Q_invert} into the left-hand side of \eqref{Q_Inversion} gives \setlength\abovedisplayskip{3pt} \setlength\belowdisplayskip{3pt} \begin{align}\label{invert} &\int_{\mathcal{A}}Q\left( \mathbf{s},\mathbf{s}_1 \right)\hat{Q}\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}\mathbf{s}_1 =\int_{\mathcal{A}}\left( \delta (\mathbf{s}-\mathbf{s}_1)+\mu h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}_1) \right)\notag\\ &\times\left( \delta (\mathbf{s}_1-\mathbf{s}')-\frac{\mu h_{\mathrm{e}}(\mathbf{s}_1)h_{\mathrm{e}}^{*}(\mathbf{s}')}{1+\mu g_{\mathrm{e}}} \right) \mathrm{d}\mathbf{s}_1 =\delta (\mathbf{s}-\mathbf{s}')\notag\\ &+\left( \mu -\frac{\mu+\mu ^2g_{\mathrm{e}}}{1+\mu g_{\mathrm{e}}} \right) h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}')=\delta (\mathbf{s}-\mathbf{s}'). \end{align} Following the same approach to obtain \eqref{invert}, we also obtain \begin{align} \int_{\mathcal{A}}\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right)Q\left( \mathbf{s}_1,\mathbf{s}' \right){\rm{d}}\mathbf{s}_1=\delta (\mathbf{s}-\mathbf{s}'). \end{align} This completes the proof of \textbf{Lemma~\ref{lemma_1}}. \subsection{Proof of \textbf{Lemma \ref{lemma_2}}}\label{Appendix:B} Inserting \eqref{function_Q} and \eqref{Q_invert} into the left-hand side of \eqref{Indentity_Transform} gives \begin{align}\label{Denominator_Calculation_Forward_00} E(\mathbf{s}_1,\mathbf{s}_1')=&\int_{\mathcal{A}}{\int_{\mathcal{A}}}(\delta (\mathbf{s}_1-\mathbf{s})+\mu h_{\mathrm{e}}(\mathbf{s}_1)h_{\mathrm{e}}^{*}(\mathbf{s}))A_{\rm{e}}(\mathbf{s},\mathbf{s}^{\prime})\nonumber\\ &\times (\delta (\mathbf{s}^{\prime}-\mathbf{s}_{1}^{\prime})+\mu h_{\mathrm{e}}(\mathbf{s}')h_{\mathrm{e}}^{*}(\mathbf{s}_{1}^{\prime}))\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}^{\prime}. \end{align} We then calculate $E(\mathbf{s}_1,\mathbf{s}^{\prime}_1)$ as follows. 
According to the fact that $\int_{{\mathcal{A}}}\delta({\mathbf{x}}-{\mathbf{x}}_0)f(\mathbf{x}){\rm{d}}{\mathbf{x}}=f({\mathbf{x}}_0)$, the integral with respect to ${{\mathbf{s}}}$ involved in \eqref{Denominator_Calculation_Forward_00} can be calculated as follows: \begin{align}\label{Denominator_Calculation_Forward_1} &\int_{\mathcal{A}}(\delta (\mathbf{s}_1-\mathbf{s})\!+\!\mu h_{\mathrm{e}}(\mathbf{s}_1)h_{\mathrm{e}}^{*}(\mathbf{s}))(\delta (\mathbf{s}-\mathbf{s}^{\prime})\!+\!\overline{\gamma }_{\mathrm{e}}h_{\mathrm{e}}(\mathbf{s})h_{\mathrm{e}}^{*}(\mathbf{s}^{\prime}))\mathrm{d}\mathbf{s}\notag\\ &=\delta (\mathbf{s}_1-\mathbf{s}^{\prime})+ (\mu+\overline{\gamma }_{\mathrm{e}}+\overline{\gamma }_{\mathrm{e}}\mu g_{\mathrm{e}})h_{\mathrm{e}}(\mathbf{s}_{1})h_{\mathrm{e}}^{*}(\mathbf{s}'). \end{align} We next calculate the integral with respect to ${{\mathbf{s}}}'$, which yields \begin{align}\label{Denominator_Calculation_Forward_2} &\int_{\mathcal{A}}\eqref{Denominator_Calculation_Forward_1}\times (\delta (\mathbf{s}^{\prime}-\mathbf{s}_{1}^{\prime})+\mu h_{\mathrm{e}}(\mathbf{s}')h_{\mathrm{e}}^{*}(\mathbf{s}_{1}^{\prime}))\mathrm{d}\mathbf{s}^{\prime}\notag\\ &=\!\delta (\mathbf{s}_1\!-\!\mathbf{s}_1^{\prime})\!+\chi h_{\mathrm{e}}(\mathbf{s}_1)h_{\mathrm{e}}^{*}(\mathbf{s}_1^{\prime}), \end{align} where $\chi =\overline{\gamma }_{\mathrm{e}}\mu ^2g_{\mathrm{e}}^{2}+\mu ^2g_{\mathrm{e}}+2\overline{\gamma }_{\mathrm{e}}\mu g_{\mathrm{e}}+2\mu +\overline{\gamma }_{\mathrm{e}}$. Inserting $\mu =-\frac{1}{g_{\mathrm{e}}}\pm \frac{1}{g_{\mathrm{e}}\sqrt{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}}$ into the expression of $\chi$, we can obtain $\chi=0$, which yields \begin{equation}\label{function_c} E(\mathbf{s}_1,\mathbf{s}^{\prime}_1)=\delta (\mathbf{s}_1-\mathbf{s}_1^{\prime}). \end{equation} This completes the proof of \textbf{Lemma \ref{lemma_2}}. \subsection{Proof of \textbf{Lemma \ref{Lemma_Fredholm}}}\label{Appendix:C} Equation \eqref{C_definition} can be written as follows: \begin{equation}\label{Eigenvalue_Equation_Basic2} \begin{split} &\!C(\mathbf{s}_2,\mathbf{s}_2^{\prime})\!=\!\underset{I_1}{\underbrace{\int_{\mathcal{A}}{\int_{\mathcal{A}}}\hat{Q}(\mathbf{s}_2,\mathbf{s}_1)B(\mathbf{s}_1,\mathbf{s}_{1}^{\prime})\hat{Q}(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}}}\\ &~~~-\lambda _B\underset{I_2}{\underbrace{\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}\hat{Q}(\mathbf{s}_2,\mathbf{s}_1)\delta (\mathbf{s}_1-\mathbf{s}_{1}^{\prime})\hat{Q}(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}}}. \end{split} \end{equation} Substituting \eqref{Semi_Def_Function_Original} into $I_1$ gives \begin{align}\label{I1} I_1&=\int_{\mathcal{A}}{\int_{\mathcal{A}}{A_{\rm{b}}(\mathbf{s},\mathbf{s}^{\prime})}}\int_{\mathcal{A}}\hat{Q}(\mathbf{s}_2,\mathbf{s}_1)Q\left(\mathbf{s}_1,\mathbf{s}\right) \mathrm{d}\mathbf{s}_1\notag\\ &\times\int_{\mathcal{A}}{Q\left(\mathbf{s}^\prime,\mathbf{s}^\prime_1 \right) \hat{Q}(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime})}\mathrm{d}\mathbf{s}_{1}^{\prime}\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}^{\prime}=\int_{\mathcal{A}} {\int_{\mathcal{A}}{A_{\rm{b}}(\mathbf{s},\mathbf{s}^{\prime})}}\nonumber\\ &\times\delta (\mathbf{s}_2-\mathbf{s})\delta (\mathbf{s}^\prime-\mathbf{s}_{2}^{\prime})\mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}^{\prime}=A_{\rm{b}}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime}).
\end{align} Moreover, $I_2$ can be calculated as follows: \begin{align} I_2&=\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}\Big( \delta (\mathbf{s}_2-\mathbf{s}_1)-\frac{\mu}{1+\mu g_{\mathrm{e}}}h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_1) \Big) \delta (\mathbf{s}_1-\mathbf{s}_{1}^{\prime})\notag\\ &\times\Big( \delta (\mathbf{s}_{1}^{\prime}-\mathbf{s}_{2}^{\prime})-\frac{\mu}{1+\mu g_{\mathrm{e}}}h_{\mathrm{e}}(\mathbf{s}_{1}^{\prime})h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime}) \Big) \mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}\notag\\ &=\int_{\mathcal{A}}{}\Big( \delta (\mathbf{s}_2-\mathbf{s}_{1}^{\prime})-\frac{\mu}{1+\mu g_{\mathrm{e}}}h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_{1}^{\prime}) \Big) \notag\\ &\times\Big( \delta (\mathbf{s}_{1}^{\prime}-\mathbf{s}_{2}^{\prime})-\frac{\mu}{1+\mu g_{\mathrm{e}}}h_{\mathrm{e}}(\mathbf{s}_{1}^{\prime})h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime}) \Big) \mathrm{d}\mathbf{s}_{1}^{\prime}\notag\\ &=\delta (\mathbf{s}_2-\mathbf{s}_{2}^{\prime})+\frac{\mu}{1+\mu g_{\mathrm{e}}}\left( \frac{\mu g_{\mathrm{e}}}{1+\mu g_{\mathrm{e}}}-2 \right) h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime})\notag. \end{align} Recalling that $\mu =-\frac{1}{g_{\mathrm{e}}}\pm \frac{1}{g_{\mathrm{e}}\sqrt{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}}$ yields \begin{align} \frac{\mu}{1+\mu g_{\mathrm{e}}}\left( \frac{\mu g_{\mathrm{e}}}{1+\mu g_{\mathrm{e}}}-2 \right) =\frac{\mu(-2-\mu g_{\mathrm{e}})}{(1+\mu g_{\mathrm{e}})^2}=\overline{\gamma }_{\mathrm{e}}, \end{align} which leads to \begin{align}\label{I2} I_2=\delta (\mathbf{s}_2-\mathbf{s}_{2}^{\prime})+\overline{\gamma }_{\mathrm{e}}h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime}) =A_{\rm{e}}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime}). \end{align} By inserting \eqref{I1} and \eqref{I2} into \eqref{Eigenvalue_Equation_Basic2}, the results of \eqref{C_derivation} follow immediately. \subsection{Proof of \textbf{Lemma \ref{lem_eigenvalue}}}\label{Appendix:D} Upon observing \eqref{C_derivation}, the Hermitian operator $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})\triangleq\overline{\gamma }_{\mathrm{b}}h_{\mathrm{b}}(\mathbf{s}_2)h_{\mathrm{b}}^{*}(\mathbf{s}_{2}^{\prime})-\lambda _B \overline{\gamma }_{\mathrm{e}} h_{\mathrm{e}}(\mathbf{s}_2)h_{\mathrm{e}}^{*}(\mathbf{s}_{2}^{\prime})$ is a function of $h_{\mathrm{b}}(\cdot)$ and $h_{\mathrm{e}}(\cdot)$. Therefore, it can be concluded that the eigenfunctions of $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})$ associated with its non-zero eigenvalues must lie in the subspace spanned by $h^*_{\mathrm{b}}(\cdot)$ and $h^*_{\mathrm{e}}(\cdot)$. This means that the eigenvalues and eigenfunctions of $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})$ can be determined from the following equation: \setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{5pt} \begin{equation}\label{D1} \begin{split} &\int_{\mathcal{A}}\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})(ah^*_{\mathrm{b}}(\mathbf{s}_2)-bh^*_{\mathrm{e}}(\mathbf{s}_2))\mathrm{d}\mathbf{s}_2\\ &=\xi(ah^*_{\mathrm{b}}(\mathbf{s}'_2)-bh^*_{\mathrm{e}}(\mathbf{s}'_2)), \end{split} \end{equation} where $\xi$ denotes the eigenvalue of $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})$. By performing some mathematical manipulations on the left-hand side of \eqref{D1}, we can transform \eqref{D1} as follows: \begin{equation}\label{Eigenvalue_Equation_Transform1} \begin{split} &\left( a\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\!-\!b\overline{\gamma }_{\mathrm{b}}\rho \right)\!
h^*_{\mathrm{b}}(\mathbf{s}'_2)\!-\!( a\lambda _B \overline{\gamma }_{\mathrm{e}}\rho ^*\!-\!b\lambda _B \overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} ) h^*_{\mathrm{e}}(\mathbf{s}'_2)\\ &=a\xi h^*_{\mathrm{b}}(\mathbf{s}'_2)-b\xi h^*_{\mathrm{e}}(\mathbf{s}'_2). \end{split} \end{equation} Based on \eqref{Eigenvalue_Equation_Transform1}, we have \begin{equation} a\xi = a\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}-b\overline{\gamma }_{\mathrm{b}}\rho,\ b\xi=a\lambda _B \overline{\gamma }_{\mathrm{e}}\rho ^*-b\lambda _B \overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}. \end{equation} It follows that \begin{equation}\label{b_a} \frac{\xi-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho}=-\frac{b}{a},\quad \frac{\xi+\lambda _B \overline{\gamma}_{\mathrm{e}}g_{\mathrm{e}}}{\lambda _B \overline{\gamma}_{\mathrm{e}}\rho^*}=\frac{a}{b}, \end{equation} which leads to the following equation with respect to $\xi$: \begin{equation} \frac{\xi-\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}}{\overline{\gamma}_{\mathrm{b}}\rho} \frac{\xi+\lambda _B \overline{\gamma}_{\mathrm{e}}g_{\mathrm{e}}}{\lambda _B \overline{\gamma}_{\mathrm{e}}\rho^*}+1=0, \end{equation} or equivalently, \begin{equation} \xi^2-(\overline{\gamma}_{\mathrm{b}}g_{\mathrm{b}}-\lambda _B \overline{\gamma}_{\mathrm{e}}g_{\mathrm{e}})\xi -\lambda _B \overline{\gamma}_{\mathrm{b}}\overline{\gamma}_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho}) =0. \end{equation} The solutions to the above equation are given by $\xi=\xi_1$ and $\xi=\xi_2$, where $\xi_1$ and $\xi_2$ are given in \eqref{eigenvalue_C_solution1} and \eqref{eigenvalue_C_solution2}, respectively. In other words, $\xi=\xi_1$ and $\xi=\xi_2$ are the non-zero eigenvalues of $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})$. The above arguments imply that the eigen-decomposition of $\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})$ can be written as follows: \begin{align} \breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})=\sum\nolimits_{i=1}^{2}\xi_i\varphi_i(\mathbf{s}_2)\varphi_i^{*}(\mathbf{s}_{2}^{\prime}), \end{align} where $\{\varphi_i(\cdot)\}_{i=1}^{2}$ represents an orthonormal basis on ${\mathcal{A}}$. Furthermore, it holds that \begin{align} \delta (\mathbf{s}_2-\mathbf{s}_{2}^{\prime})=\sum\nolimits_{i=1}^{\infty}\varphi_i(\mathbf{s}_2)\varphi_i^{*}(\mathbf{s}_{2}^{\prime}), \end{align} where $\{\varphi_i(\cdot)\}_{i=1}^{\infty}$ represents a complete orthonormal basis on ${\mathcal{A}}$. Taken together, we have \begin{equation} \begin{split} C(\mathbf{s}_2,\mathbf{s}_2^{\prime})&=\breve{C}(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})+(1-\lambda _B)\delta (\mathbf{s}_2-\mathbf{s}_{2}^{\prime})\\ &=\sum\nolimits_{i=1}^{2}(\xi_i+1-\lambda _B)\varphi_i(\mathbf{s}_2)\varphi_i^{*}(\mathbf{s}_{2}^{\prime})\\ &+\sum\nolimits_{i=3}^{\infty}(1-\lambda _B)\varphi_i(\mathbf{s}_2)\varphi_i^{*}(\mathbf{s}_{2}^{\prime}), \end{split} \end{equation} which represents the eigen-decomposition of $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$. As a result, the eigenvalues of $C(\mathbf{s}_2,\mathbf{s}_2^{\prime})$ are given by \begin{equation} \begin{split} &\lambda_{C,1}=\xi_1-\lambda_B+1,\ \lambda_{C,2}=\xi_2-\lambda_B+1,\\ &\lambda_{C,3}=\ldots=\lambda_{C,\infty}=-\lambda_B+1, \end{split} \end{equation} which completes the proof of \textbf{Lemma \ref{lem_eigenvalue}}. \subsection{Proof of \textbf{Theorem \ref{lem_principal}}}\label{Appendix:E} It follows from \eqref{equation_trans3} that $\lambda_B=1$ or $\lambda_B=\xi_1+1$ or $\lambda_B=\xi_2+1$. 
For $\lambda_B=\xi_1+1$ or $\lambda_B=\xi_2+1$, we have \begin{align} \lambda_B=\frac{\Delta\pm \sqrt{\Delta^2+4\lambda _B \overline{\gamma}_{\mathrm{b}}\overline{\gamma}_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho})}}{2}. \end{align} This leads to \begin{equation}\label{key} (2\lambda _B-\Delta )^2=\Delta ^2+4\lambda _B \overline{\gamma }_{\mathrm{b}}\overline{\gamma }_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho }), \end{equation} which can be further simplified as follows: \begin{equation} \left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right) (\lambda _{B}-1)^{2}-\Xi (\lambda _B-1)-\overline{\gamma }_{\mathrm{b}}\overline{\gamma }_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho })=0.\nonumber \end{equation} The solutions to the above equation are given by \begin{equation} \lambda _{B,1}=1+\frac{\Xi + \sqrt{\Xi ^2+\Gamma}}{2\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right)},\ \lambda _{B,2}=1+\frac{\Xi - \sqrt{\Xi ^2+\Gamma}}{2\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right)},\nonumber \end{equation} where $\Xi =\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}-\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}+\overline{\gamma }_{\mathrm{b}}\overline{\gamma }_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho })$, and $\Gamma =4\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right) \overline{\gamma }_{\mathrm{b}}\overline{\gamma }_{\mathrm{e}}g_{\mathrm{b}}g_{\mathrm{e}}(1-\overline{\rho })$. Since $\lambda _{B,1}\geq0\geq \lambda _{B,2}$, the principal eigenvalue of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$ is $\lambda _{B,1}$, which can be further simplified as follows: \begin{align} \lambda _{B}^{\max}=\lambda _{B,1}\!&=1+\frac{\Xi +\!\sqrt{\left( \overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1\!-\!\overline{\rho } \right) \right) +\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right) ^2}}{2\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}} \right)}\nonumber\\ &=1+\frac{\overline{\gamma }_{\mathrm{b}}g_{\mathrm{b}}\left( 1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}\left( 1-\overline{\rho } \right) \right)}{1+\overline{\gamma }_{\mathrm{e}}g_{\mathrm{e}}}. \end{align} This completes the proof of \textbf{Theorem \ref{lem_principal}}. \subsection{Proof of \textbf{Lemma \ref{lem_optimal_u}}}\label{Appendix:F} Given that the optimal solution to problem \eqref{CAP_MISOSE_Problem_5} corresponds to the principal eigenfunction of $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$, and noting that $B(\mathbf{s}_1,\mathbf{s}_1^{\prime})$ shares the same eigenfunctions as ${\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime})=B(\mathbf{s}_1,\mathbf{s}_1^{\prime})-\lambda_B\delta(\mathbf{s}_1-\mathbf{s}_1^{\prime})$, problem \eqref{CAP_MISOSE_Problem_5} can be reformulated as follows: \begin{equation}\label{CAP_MISOSE_Problem_6} \max_{\nu (\mathbf{s}_1)} \int_{\mathcal{A}}{\int_{\mathcal{A}}{}}\nu (\mathbf{s}_1){\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime})\nu ^*(\mathbf{s}_{1}^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}. \end{equation} Recalling the definition given in \eqref{C_definition} and the invertible relationship given in \eqref{Q_Inversion}, we obtain \begin{align}\nonumber {\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime})=\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}Q(\mathbf{s}_1,\mathbf{s}_2)C(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})Q(\mathbf{s}_{2}^{\prime},\mathbf{s}_{1}^{\prime})\mathrm{d}\mathbf{s}_2\mathrm{d}\mathbf{s}_{2}^{\prime}. 
\end{align} By substituting the above expression into \eqref{CAP_MISOSE_Problem_6} and using the fact that $Q(\mathbf{s}_{2}^{\prime},\mathbf{s}_{1}^{\prime})=Q^*(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime})$, the objective function of \eqref{CAP_MISOSE_Problem_6} can be rewritten as follows: \begin{equation} \begin{split} &\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}\nu (\mathbf{s}_1){\hat{B}}(\mathbf{s}_1,\mathbf{s}_1^{\prime})\nu ^*(\mathbf{s}_{1}^{\prime})\mathrm{d}\mathbf{s}_1\mathrm{d}\mathbf{s}_{1}^{\prime}\notag\\ &=\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}C(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})\int_{\mathcal{A}}{}\nu (\mathbf{s}_1)Q(\mathbf{s}_1,\mathbf{s}_2)\mathrm{d}\mathbf{s}_1\nonumber\\ &\times\int_{\mathcal{A}}{}\nu^*(\mathbf{s}_{1}^{\prime})Q^*(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime})\mathrm{d}\mathbf{s}_{1}^{\prime}\mathrm{d}\mathbf{s}_2\mathrm{d}\mathbf{s}_{2}^{\prime}\nonumber\\ &=\int_{\mathcal{A}}{\int_{\mathcal{A}}{}}u\left( \mathbf{s}_2 \right) C(\mathbf{s}_2,\mathbf{s}_{2}^{\prime})u^*\left( \mathbf{s}_{2}^{\prime} \right) \mathrm{d}\mathbf{s}_2\mathrm{d}\mathbf{s}_{2}^{\prime}, \end{split} \end{equation} where $u(\mathbf{s})=\int_{\mathcal{A}}{\nu }(\mathbf{s}_1)Q\left({\mathbf{s}}_1,\mathbf{s}\right){\rm{d}}{\mathbf{s}}_1$. Therefore, we can obtain the optimal $\nu\left( \mathbf{s} \right)$ by solving the following problem: \begin{equation}\label{Optimal_Rate_Current_Medium_Step} u^{\star}({\mathbf{s}})= \argmax_{u(\mathbf{s})} \int_{\mathcal{A}}{\int_{\mathcal{A}}{}}u\left( \mathbf{s} \right) C(\mathbf{s},\mathbf{s}^{\prime})u^*\left( \mathbf{s}^{\prime} \right) \mathrm{d}\mathbf{s}\mathrm{d}\mathbf{s}^{\prime}, \end{equation} and then performing the transformation $\nu^{\star}(\mathbf{s}_1)=\int_{{\mathcal{A}}}u^{\star}({\mathbf{s}})\hat{Q}\left( \mathbf{s},\mathbf{s}_1 \right){\rm{d}}{\mathbf{s}}$. It can be observed from \eqref{Optimal_Rate_Current_Medium_Step} that the optimal $u(\mathbf{s})$ is aligned with the principal eigenfunction of $C(\mathbf{s},\mathbf{s}^{\prime})$. The final results follow immediately. \subsection{Proof of \textbf{Lemma \ref{lem_planar_capa}}}\label{Appendix:G} With the planar CAPA, the channel gain for user $k$ can be written as follows: \begin{equation}\nonumber \begin{split} g_k&=\int_{-\frac{L_z}{2}}^{\frac{L_z}{2}}{\int_{-\frac{L_x}{2}}^{\frac{L_x}{2}}{h_{k}^{*}(x,z)h_k(x,z)\mathrm{d}x}\mathrm{d}z} =\frac{k_{0}^{2}\eta ^2r_k\Psi _k}{4\pi}\\ &\times\int_{-\frac{L_z}{2}}^{\frac{L_z}{2}}\int_{-\frac{L_x}{2}}^{\frac{L_x}{2}} \frac{\mathrm{d}x\mathrm{d}z}{(x^2+z^2-2r_k\left( \Phi _kx+\Theta _kz \right) +r_{k}^{2})^{\frac{3}{2}}}. \end{split} \end{equation} The inner integral can be calculated using \cite[Eq. (2.264.5)]{integral}, and the outer integral can be calculated using \cite[Eq. (2.284)]{integral}, leading to the results presented in \eqref{g_planar_capa}. The channel correlation factor is expressed as follows: \begin{equation} \rho =\int_{-\frac{L_z}{2}}^{\frac{L_z}{2}}{\int_{-\frac{L_x}{2}}^{\frac{L_x}{2}}{h_{\mathrm{b}}^{*}(x,z)h_{\mathrm{e}}(x,z)\mathrm{d}x}\mathrm{d}z}. \end{equation} We can further calculate the above integral by using the Chebyshev--Gauss quadrature rule, i.e., \begin{equation} \int_a^b{f\left( x \right) \mathrm{d}}x\approx \frac{b-a}{2}\sum_{t=1}^T{\frac{\pi}{T}\sqrt{1-\psi _{t}^{2}}f\left( \frac{b-a}{2}\psi _t+\frac{b+a}{2} \right)},\nonumber \end{equation} which yields the results presented in \eqref{rho_planar_capa}.
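For readers who wish to reproduce \eqref{rho_planar_capa} numerically, the following Python sketch (ours, purely illustrative) implements the tensor-product version of the Chebyshev--Gauss rule quoted above; the integrand used in the sanity check is a placeholder rather than the channel responses $h_{\mathrm{b}}$ and $h_{\mathrm{e}}$.
\begin{verbatim}
import numpy as np

def cheb_gauss_2d(f, Lx, Lz, T=100):
    # Tensor-product Chebyshev-Gauss rule for the double integral of
    # f(x,z) over [-Lx/2, Lx/2] x [-Lz/2, Lz/2], using the nodes
    # psi_t = cos((2t-1)pi/(2T)) quoted above.
    t = np.arange(1, T + 1)
    psi = np.cos((2 * t - 1) * np.pi / (2 * T))
    w = np.pi / T * np.sqrt(1.0 - psi**2)        # 1-D weights
    X, Z = np.meshgrid(Lx / 2 * psi, Lz / 2 * psi, indexing="ij")
    W = np.outer(w, w)                           # 2-D weights
    return Lx * Lz / 4.0 * np.sum(W * f(X, Z))

# Sanity check with a constant integrand: the result approaches the
# aperture area Lx*Lz as T grows.
print(cheb_gauss_2d(lambda x, z: np.ones_like(x), 2.0, 2.0))  # approx 4
\end{verbatim}
Replacing the placeholder integrand with $h_{\mathrm{b}}^{*}(x,z)h_{\mathrm{e}}(x,z)$ recovers the approximation \eqref{rho_planar_capa} of the channel correlation factor.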
\subsection{Proof of \textbf{Lemma \ref{lem_planar_spda}}}\label{Appendix:I} By defining $\epsilon_k=\frac{d}{r_k} \ll 1$, we can rewrite \eqref{g_planar_spda0} as follows: \begin{equation}\label{i1} g_{k}^{\mathrm{s}}=\frac{A_{\mathrm{s}}k_{0}^{2}\eta ^2\Psi _k}{4\pi r_{k}^{2}}\sum_{m_x\in \mathcal{M} _x}{\sum_{m_z\in \mathcal{M} _z}{f_{\mathrm{s}}}}\left( m_x\epsilon _k,m_z\epsilon _k \right). \end{equation} Here, $f_{\mathrm{s}}\left( x,z \right) \triangleq (x^2+z^2-2\Phi_kx-2\Theta_kz+1)^{-\frac{3}{2}}$ is a function defined over the square region ${\mathcal{S}}_k\triangleq\{ (x,z) \mid -\frac{M_x\epsilon _k}{2}\leq x\leq \frac{M_x\epsilon _k}{2},-\frac{M_z\epsilon _k}{2}\leq z\leq \frac{M_z\epsilon _k}{2} \} $, which is divided into $M_xM_z$ sub-squares, each with an area of $\epsilon _k^2$. Since $\epsilon _k\ll 1$, it follows that $f_{\mathrm{s}}\left( x,z \right) \approx f_{\mathrm{s}}\left( m_x\epsilon _k,m_z\epsilon _k \right) $ for all $\left( x,z \right) \in \left\{ \left( x,z \right) \mid \left( m_x-\frac{1}{2} \right) \epsilon _k\leq x\leq \left( m_x+\frac{1}{2} \right) \epsilon _k,\left( m_z-\frac{1}{2} \right) \epsilon _k\leq z \right.$ $\left.\leq \left( m_z+\frac{1}{2} \right) \epsilon _k \right\}$. Interpreting the sum as a Riemann sum of a double integral, we have \begin{align} \sum_{m_x\in \mathcal{M} _x}\sum_{m_z\in \mathcal{M} _z}\!{f_{\mathrm{s}}\left( m_x\epsilon _k,m_z\epsilon _k \right) \epsilon _k^2}\approx \iint_{{\mathcal{S}}_k}{f_{\mathrm{s}}\left( x,z \right) \mathrm{d}x\mathrm{d}z}. \nonumber \end{align} Therefore, \eqref{i1} can be rewritten as follows: \begin{equation} g_{k}^{\mathrm{s}}\approx \frac{\zeta _{\mathrm{oc}}k_{0}^{2}\eta ^2\Psi _k}{4\pi}\int_{-\frac{M_z\epsilon _k}{2}}^{\frac{M_z\epsilon _k}{2}}{\int_{-\frac{M_x\epsilon _k}{2}}^{\frac{M_x\epsilon _k}{2}}{}}f_{\mathrm{s}}\left( x,z \right) \mathrm{d}x\mathrm{d}z, \end{equation} which can be calculated with the aid of \cite[Eqs. (2.264.5) \& (2.284)]{integral}. The final results follow immediately. \end{appendix} \bibliographystyle{IEEEtran} \bibliography{mybib} \end{document}
2412.13759v1
http://arxiv.org/abs/2412.13759v1
Iterated relation systems on Riemannian manifolds
\documentclass[12pt,reqno]{amsart} \usepackage{hyperref} \usepackage{amscd,amssymb,amsfonts,amsbsy,amsmath,verbatim,color, mathrsfs} \usepackage{pst-node} \usepackage{tikz-cd} \usepackage{graphicx,cite} \usepackage{subfigure} \usepackage{float} \usepackage{tikz,ifthen,bm} \usepackage{verbatim,mathdots,caption,epstopdf} \usetikzlibrary{intersections} \usetikzlibrary{calc} \usepackage{amsmath} \newtheorem{thm}{Theorem}[section] \newtheorem{theo}[thm]{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{coro}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{lem}[thm]{Lemma} \newtheorem{ques}[thm]{Question} \newtheorem{prop}[thm]{Proposition} \newtheorem{exam}[thm]{Example} \newtheorem{exa}[thm]{Exmple} \theoremstyle{definition} \newtheorem{defi}{Definition}[section] \newtheorem{rema}[thm]{Remark} \numberwithin{equation}{section} \DeclareMathOperator{\RE}{Re} \DeclareMathOperator{\IM}{Im} \DeclareMathOperator{\ess}{ess} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\seq}[1]{\left<#1\right>} \newcommand{\inner}[1]{\left\langle#1\right\rangle} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\essnorm}[1]{\norm{#1}_{\ess}} \newcommand{\myint}[1][v]{\textup{int}_{#1}\,} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{{\mathcal B}} \newcommand{\D}{{\mathcal D}} \newcommand{\E}{{\mathcal E}} \newcommand{\F}{{\mathcal F}} \newcommand{\G}{{\mathcal G}} \newcommand{\cH}{{\mathcal H}} \providecommand{\I}{{\mathcal I}} \newcommand{\J}{\mathcal{J}} \newcommand{\K}{{\mathcal K}} \newcommand{\cL}{{\mathcal L}} \newcommand{\M}{{\mathcal M}} \newcommand{\cN}{{\mathcal N}} \def\cP{{\mathcal P}} \newcommand{\QQ}{{\mathcal Q}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cT}{{\mathcal T}} \newcommand{\V}{{\mathcal V}} \newcommand{\U}{{\mathcal U}} \newcommand{\W}{\mathcal{W}} \newcommand{\X}{\mathcal{X}} \newcommand{\Real}{\mathbb{R}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\Field}{\mathbb{F}} \newcommand{\RPlus}{\Real^{+}} \newcommand{\R}{\Real} \newcommand{\Z}{{\mathbb Z}} \newcommand{\bT}{{\mathbb T}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\N}{{\mathbb N}} \def\sign{\textup{sign}\,} \def\con{\textup{con\,}} \def\cP{\mathcal{P}} \def\be{{\mathbf e}} \def\dd{{\mathbf d}} \def\bb{{\boldsymbol b}} \def\bi{{\boldsymbol i}} \def\bj{{\boldsymbol j}} \def\bk{{\boldsymbol k}} \def\bl{{\boldsymbol l}} \def\ba{{\boldsymbol a}} \def\bp{{\boldsymbol p}} \def\bx{{\boldsymbol x}} \def\by{{\boldsymbol y}} \def\bz{{\boldsymbol z}} \def\bu{{\boldsymbol u}} \def\bv{{\boldsymbol v}} \def\bw{{\boldsymbol w}} \def\bs{{\boldsymbol s}} \def\bt{{\boldsymbol t}} \def\brho{{\bm \rho}} \def\bone{\textbf{1}} \def\bzero{\textbf{0}} \def\Sign{\textup{sign}\,} \def\biota{\boldsymbol{\iota}} \def\ep{\varepsilon} \def\eps{\ep} \def\To{\longrightarrow} \def\dis{\displaystyle} \def\pt{\partial} \def\dd{{\textrm{d}}} \def\supp{{\textup{\textrm{supp}}}} \def\diag{\textup{diag}} \DeclareMathOperator{\esssup}{ess\,sup} \renewcommand{\thefigure}{\arabic{figure}} \def\myhom{homeomorphism{}}\def\myhoms{\myhom s{}} \newcommand{\setcolsep}[1][2] {\setlength{\arraycolsep}{#1 pt}} \makeatletter \newcommand{\rmnum}[1]{\romannumeral #1} \newcommand{\Rmnum}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \newcommand{\insertjpg}[2][\textwidth] {\includegraphics[width=#1]{#2}} \textheight 9.5in \textwidth 6.35in \topmargin -0.75in \oddsidemargin 0in \evensidemargin 0in \parskip 1ex \def\green#1{{\color{green}#1}} \begin{document} \baselineskip=17pt 
\setcounter{figure}{0} \title[Iterated relation systems] {Iterated relation systems on Riemannian manifolds} \date{\today} \author[J. Liu]{Jie Liu} \address{Key Laboratory of High Performance Computing and Stochastic Information Processing (HPCSIP) (Ministry of Education of China), College of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, China.} \email{[email protected]} \author[S.-M. Ngai]{Sze-Man Ngai} \address{Beijing Institute of Mathematical Sciences and Applications, Huairou District, 101400, Beijing, China, and Key Laboratory of High Performance Computing and Stochastic Information Processing (HPCSIP) (Ministry of Education of China), College of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, China.} \email{[email protected]} \author[L. Ouyang]{Lei Ouyang} \address{Key Laboratory of High Performance Computing and Stochastic Information Processing (HPCSIP) (Ministry of Education of China), College of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, China.} \email{[email protected]} \subjclass[2010]{Primary: 28A80; Secondary: 28A78.} \keywords{Riemannian manifold; iterated relation system; graph iterated function system; graph open set condition; graph finite type condition; Hausdorff dimension.} \thanks{The authors are supported in part by the National Natural Science Foundation of China, grant 12271156, and Construct Program of the Key Discipline in Hunan Province.} \begin{abstract} For fractals on Riemannian manifolds, the theory of iterated function systems often does not apply well directly, as fractal sets are often defined by relations that are multivalued or non-contractive. To overcome this difficulty, we introduce the notion of iterated relation systems. We study the attractor of an iterated relation system and formulate a condition under which such an attractor can be identified with that of an associated graph-directed iterated function system. Using this method, we obtain dimension formulas for the attractor of an iterated relation system under the graph open set condition or the graph finite type condition. This method improves the one in [Ngai-Xu, J. Geom. Anal. {\bf 33} (2023), 262], which relies on knowing the specific structure of the attractor. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{S:IN} \setcounter{equation}{0} Fractals in Riemannian manifolds, especially in Lie groups, Heisenberg groups, and projective spaces, have been studied extensively by many authors (see Strichartz \cite{Strichartz_1992}, Balogh and Tyson \cite{Balogh-Tyson_2005}, Barnsley and Vince \cite{Barnsley-Vince_2012}, Hossain {\em et al.} \cite{Hossain-Akhtar-Navascues_2024}, etc.). A basic theory of iterated function systems (IFS) on Riemannian manifolds, including the $L^q$-spectrum, Hausdorff dimension, the weak separation condition (WSC), and the finite type condition (FTC), has been established by the second author and Xu (\cite{Ngai-Xu_2021,Ngai-Xu_2023}). In \cite{Ngai-Xu_2023}, a method for computing the Hausdorff dimension of a certain fractal set in a flat torus is obtained; the fractal set can be identified with the attractor of some IFS on $\mathbb{R}^2/\mathbb{Z}^2$. However, this method is cumbersome and not easy to generalize. The purpose of this paper is to establish a new method that can be used to calculate the Hausdorff dimensions of general fractal sets in Riemannian manifolds.
Unlike fractal sets in $\R^n$, many interesting fractals in Riemannian manifolds cannot be described by an IFS. For example, the relations generating a Sierpinski gasket on the cylindrical surface are multivalued (see Example \ref{exam:osctri}). To deal with this problem, we introduce the notion of iterated relation systems (IRS) (see Definition \ref{defi:2.1}). We will show that many interesting fractals can be described naturally by IRSs. Let $\{R_t\}_{t=1}^N$ be an IRS on a nonempty compact subset of a topological space, and let $K$ be the attractor of $\{R_t\}_{t=1}^N$ (see Definition \ref{defi:2.2}). Then \begin{align}\label{eq:K=RK} K=\bigcup_{t=1}^NR_t(K) \end{align} (see Proposition \ref{prop:2.2}). However, in general, a nonempty compact set $K$ satisfying \eqref{eq:K=RK} need not be unique, and therefore need not be the attractor of $\{R_t\}_{t=1}^N$; Example \ref{exa:notunique} illustrates this. To study the uniqueness of $K$, we introduce the notion of a graph-directed iterated function system (GIFS) associated to an IRS (see Definition \ref{defi:3.1}), and prove the uniqueness of $K$ under the assumption that such a GIFS exists. Since relations need not be single-valued or contractive, it might not be possible to construct a GIFS associated to an IRS (see Examples \ref{thm(a)}--\ref{notcon}). Our first goal is to study, under suitable conditions, the existence of a GIFS associated to an IRS (see Theorem \ref{thm:exi}). Let $\# F$ be the cardinality of a set $F$. The definition of a finite family of contractions decomposed from $\{R_t\}_{t=1}^N$ and the definition of a good partition that appear in our first main theorem below are given in Definition \ref{defi:good}. \begin{thm}\label{thm:exi} Let $X$ be a complete metric space and let $E_0\subseteq X $ be a nonempty compact set. Let $\{R_t\}_{t=1}^N$ ($N\geq 1$) be an IRS on $E_0$. For any $k\geq0$, let $E_{k+1}:=\bigcup_{t=1}^NR_t(E_k)$. Assume that $\{R_t\}_{t=1}^N$ satisfies the following conditions. \begin{enumerate} \item[(a)]There exists an integer $k_0\geq 1$ such that for any $t\in\{1,\ldots,N\}$ and any $x\in E_{k_0-1}$, $$\#\{R_t(x)\}<\infty.$$ \item[(b)] For any $t\in\{1,\ldots,N\}$, if $H_t:=\{x\in E_{k_0-1}|2\leq\#\{R_t(x)\}<\infty\}\neq \emptyset$, we require that the following conditions are satisfied. \begin{enumerate} \item[(i)] $r_t:=R_t|_{H_t}$ can be decomposed as a finite family of contractions $\{h _t^{l,i}:H_t^i\to h _t^{l,i}(H_t^i)\}_{l=1,i=1}^{n_t,m_t}$, where $\bigcup_{i=1}^{m_t}H_t^i=H_t$, and $\tilde{r}_t:=R_t|_{E_{k_0-1}\backslash H_t}$ can be decomposed as a finite family of contractions $\{h _t^{0,i}:J_t^i\to h _t^{0,i}(J_t^i)\}_{i=1}^{s_t}$, where $\bigcup_{i=1}^{s_t}J_t^i=E_{k_0-1}\backslash H_t$. \item[(ii)] There exists a good partition on $E_{k_0}$ with respect to some finite family of contractions decomposed from $\{R_t\}_{t=1}^N$ (not necessarily the one in $(i)$). \end{enumerate} \end{enumerate} Then there exists a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$. \end{thm} We give examples (see Examples \ref{thm(a)}--\ref{notgood}) to investigate the conditions in Theorem \ref{thm:exi}. Under the assumption that $\{R_t\}_{t=1}^N$ satisfies conditions (a) and (b)(i) in Theorem \ref{thm:exi}, we give a sufficient condition for the hypothesis (b)(ii) to be satisfied (see Theorem \ref{thm:exi2}). In Section \ref{S:GOSC}, we study IRS attractors under the assumption that an associated GIFS satisfies (GOSC). 
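As a simple illustration of the kind of results we are after (a toy example of our own, not one of the examples treated in the later sections), consider $E_0=[0,1]\subseteq\mathbb{R}$ and the single multivalued relation $R_1(x)=\{x/3,\,x/3+2/3\}$. Then $\{R_1\}$ is an IRS in the sense of Definition \ref{defi:2.1}, the sets $E_n$ are the stages of the middle-thirds Cantor construction, and the attractor is the Cantor set. Since $R_1$ decomposes into the two contractive similitudes $x\mapsto x/3$ and $x\mapsto x/3+2/3$, which satisfy (OSC), the attractor has Hausdorff dimension $\log 2/\log 3$ (cf. Theorem \ref{thm:main2}(b)); of course, this toy relation is simply the union of the two maps of the classical Cantor IFS, so it merely serves to fix ideas.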
Our second goal is to describe a method for computing the Hausdorff dimension of an IRS attractor in a Riemannian manifold under the assumption that an associated GIFS $G$ exists and $G$ satisfies (GOSC). The construction of the incidence matrices that appear in Theorems \ref{thm:main1}--\ref{thm:main2} will be given in Section \ref{S:GOSC} (see \eqref{eq:matrix}).
\begin{thm}\label{thm:main1} Let $M$ be a complete $n$-dimensional smooth orientable Riemannian manifold that is locally Euclidean. Let $\{R_t\}_{t=1}^N$ be an IRS on a nonempty compact subset of $M$, and let $K$ be the associated attractor. Assume that there exists a GIFS $G=(V,E)$ associated to $\{R_t\}_{t=1}^N$ and assume that $G$ consists of contractive similitudes. Suppose that $G$ satisfies (GOSC). Let $\lambda_\alpha$ be the spectral radius of the incidence matrix $A_\alpha$ associated to $G$. Then $$\dim_H(K)=\alpha,$$ where $\alpha$ is the unique number such that $\lambda_\alpha=1$. \end{thm}
The definitions of a simplified GIFS and a minimal simplified GIFS are given in Definitions \ref{simplified graph} and \ref{defi:min}, respectively.
\begin{thm}\label{thm:main2} Let $M$, $\{R_t\}_{t=1}^N$, $K$, and $G=(V,E)$ be as in Theorem \ref{thm:main1}. Let $\widehat{G}=(\widehat{V},\widehat{E})$ be a minimal simplified GIFS associated to $G$. Assume that $\widehat{G}$ satisfies (GOSC). Let $\widehat{\lambda}_\alpha$ be the spectral radius of the incidence matrix $\widehat{A}_\alpha$ associated to $\widehat{G}$. \begin{enumerate} \item [(a)] $\dim_H(K)=\alpha$, where $\alpha$ is the unique real number such that $\widehat{\lambda}_\alpha=1$. \item [(b)] In particular, assume that $\widehat{G}$ consists of contractive similitudes $\{f_t\}_{t=1}^m$ and suppose that $\#\widehat{V}=1$. If $\{f_t\}_{t=1}^m$ satisfies (OSC), then $\dim_H(K)$ is the unique number $\alpha$ satisfying $$\sum_{t=1}^m\rho_t^\alpha=1,$$ where $\rho_t$ is the contraction ratio of $f_t$. \end{enumerate} \end{thm}
It is worth pointing out that Balogh and Rohner \cite{Balogh-Rohner_2007} extended the Moran-Hutchinson theorem (see \cite{Hutchinson_1981}) to the more general setting of doubling metric spaces. Wu and Yamaguchi \cite{Wu-Yamaguchi_2017} generalized the results of Balogh and Rohner. For IFSs on $\mathbb{R}^n$, the finite type condition was first introduced by the second author and Wang \cite{Ngai-Wang_2001}, and was extended to the general finite type condition independently by Jin and Yau \cite{Jin-Yau_2005} and by Lau and the second author \cite{Lau-Ngai_2007}. The graph finite type condition (GFTC) was extended to Riemannian manifolds by the second author and Xu \cite{Ngai-Xu_2023}. In Section \ref{S:GFTC}, we study IRS attractors under the assumption that a GIFS associated to an IRS satisfies (GFTC). Our third goal is to obtain a method for computing the Hausdorff dimension of the attractor that improves the one in \cite{Ngai-Xu_2023}. The construction of the weighted incidence matrices that appear in the following theorems will be given in Section \ref{S:GFTC}.
\begin{thm}\label{thm:main4} Let $M$, $\{R_t\}_{t=1}^N$, $K$, and $G=(V,E)$ be as in Theorem \ref{thm:main1}. Assume that $G$ satisfies (GFTC). Let $\lambda_\alpha$ be the spectral radius of the weighted incidence matrix $A_\alpha$ associated to $G$. Then $$\dim_H(K)=\alpha,$$ where $\alpha$ is the unique real number such that $\lambda_\alpha=1$.
\end{thm}
\begin{thm}\label{thm:main5} Let $M$, $\{R_t\}_{t=1}^N$, $K$, and $G=(V,E)$ be as in Theorem \ref{thm:main1}. Let $\widehat{G}=(\widehat{V},\widehat{E})$ be a minimal simplified GIFS associated to $G$. Assume that $\widehat{G}$ satisfies (GFTC). Let $\widehat{\lambda}_\alpha$ be the spectral radius of the weighted incidence matrix $\widehat{A}_\alpha$ associated to $\widehat{G}$. Then $$\dim_H(K)=\alpha,$$ where $\alpha$ is the unique real number such that $\widehat{\lambda}_\alpha=1$. \end{thm}
This paper is organized as follows. In Section \ref{S:Pre}, we give the definition of an IRS and study the properties of its attractor. Section \ref{S:IFS} is devoted to the proof of Theorem \ref{thm:exi}. In Section \ref{S:GOSC}, we prove Theorems \ref{thm:main1}--\ref{thm:main2}. We also give examples of IRSs satisfying the conditions of Theorem \ref{thm:main2}, and compute the Hausdorff dimension of the associated attractors. In Section \ref{S:GFTC}, we prove Theorems \ref{thm:main4}--\ref{thm:main5} and provide examples to illustrate Theorem \ref{thm:main5}.
\section{Iterated relation systems}\label{S:Pre} \setcounter{equation}{0} Throughout this paper, we let $\Sigma:=\{1,\ldots, N\}$, $\Pi_t:=\{1,\ldots,n_t\}$, $\Delta_t:=\{1,\ldots,s_t\}$, $\Lambda_t:=\{1,\ldots,m_t\}$, and $\Psi_t^i:=\{1,\ldots,p_t^i\}$, where $N\in \N$. We first introduce the definition of an IRS.
\begin{defi}\label{defi:2.1} Let $ T $ be a topological space and let $E_0\subseteq T $ be a nonempty compact set. Let $\{R_t\}_{t=1}^N$ be a family of relations defined on $E_0$. For any $n\geq0$, let \begin{align}\label{E_n} E_{n+1}:=\bigcup_{t=1}^NR_t(E_n). \end{align} We call $\{R_t\}_{t=1}^N$ an \textit{iterated relation system (IRS)} if it satisfies the following conditions: \begin{enumerate} \item[(a)] for any nonempty compact set $F\subseteq E_0$ and $t\in \Sigma$, $R_t(F)$ is a nonempty compact set; \item[(b)] for any $t\in \Sigma$, $R_t(E_0)\subseteq E_0$. \end{enumerate} \end{defi}
Examples of IRSs are given in Examples \ref{exam:osc1}--\ref{exam:osctri} and \ref{exam:gftc1}--\ref{exam:gftctri}.
\begin{prop}\label{prop:2.1} Let $ T $ be a topological space, and let $E_0\subseteq T $ be a nonempty compact set. Let $\{R_t\}_{t=1}^N$ be an IRS on $E_0$. For any $n\geq 0$, let $E_n$ be defined as in \eqref{E_n}. Then $\bigcap_{n=0}^\infty E_n$ is a nonempty compact set. \end{prop}
\begin{proof} By Definition \ref{defi:2.1}(b), we have $E_1\subseteq E_0$. Assume that $E_{n+1}\subseteq E_n$ for some $n\geq0$. Then \begin{align*} E_{n+2}&=\bigcup_{t=1}^NR_t(E_{n+1})\subseteq\bigcup_{t=1}^NR_t(E_n)=E_{n+1}. \end{align*} Hence for any $n\geq0$, $E_{n+1}\subseteq E_n$. We know that $E_0$ is a nonempty compact set. Assume that $E_n$ is a nonempty compact set for some $n\geq0$. By Definition \ref{defi:2.1}(a), $R_t(E_n)$ is a nonempty compact set for any $t\in\Sigma$. Hence $E_{n+1}$ is a nonempty compact set. Therefore, $\{E_n\}_{n=0}^\infty$ is a decreasing sequence of nonempty compact subsets of $ T $, and hence $\bigcap_{n=0}^\infty E_n$ is a nonempty compact set. This proves the proposition. \end{proof}
\begin{defi}\label{defi:2.2} Let $ T $, $E_0$, and $\{R_t\}_{t=1}^N$ be as in Proposition \ref{prop:2.1}. For any $n\geq0$, let $E_n$ be defined as in \eqref{E_n}. We call $K:=\bigcap_{n=0}^\infty E_n$ the {\em invariant set} or \textit{attractor} of $\{R_t\}_{t=1}^N$. \end{defi}
Next, we introduce the following condition, which we will call Condition (C).
\begin{defi}\label{condi} Let $ T $, $E_0$, and $\{R_t\}_{t=1}^N$ be as in Proposition \ref{prop:2.1}.
For any $n\geq 0$, let $E_n$ be defined as in \eqref{E_n}. We say that $\{R_t\}_{t=1}^N$ satisfies {\em Condition (C)} if it has the following property. \begin{enumerate} \item[(C)] For any $x\in E_0$, if there exist some $l\in \Sigma$, a subsequence $\{n_k\}$ of $\{n\}$, and $y_{n_k}\in E_{n_k}$ satisfying $x\in R_{l}(y_{n_k})\subseteq R_{l}(E_{n_k})$ for all $k\geq 0$, then there exists a subsequence $\{y_{n_{k_j}}\}$ of $\{y_{n_k}\}$ converging to some $y\in\bigcap_{n=0}^\infty E_n$ satisfying $x\in R_{l}(y)$. \end{enumerate} \end{defi}
Condition (C) ensures that the inclusion $K\subseteq \bigcup_{t=1}^NR_t(K)$ holds.
\begin{prop}\label{prop:2.2} Let $ T $, $E_0$, and $\{R_t\}_{t=1}^N$ be defined as in Proposition \ref{prop:2.1}. Let $K$ be the attractor of $\{R_t\}_{t=1}^N$. \begin{enumerate} \item[(a)] $\bigcup_{t=1}^NR_t(K)\subseteq K$. \item[(b)] If Condition (C) holds, then $K\subseteq \bigcup_{t=1}^NR_t(K)$, and thus \eqref{eq:K=RK} holds. \end{enumerate} \end{prop}
\begin{proof} (a) By the definition of $K$, for any $n\geq0$, we have $K\subseteq E_n$. Hence, for any $t\in\Sigma$, $$R_t(K)\subseteq R_t(E_n)\subseteq\bigcup_{s=1}^NR_s(E_n)=E_{n+1}\subseteq E_n.$$ Thus, $\bigcup_{t=1}^NR_t(K)\subseteq\bigcap_{n=0}^\infty E_n=K.$
(b) We first prove the following claims.
\noindent{\em Claim 1. $\bigcap_{n=0}^\infty\bigcup_{t=1}^NR_t(E_n)\subseteq\bigcup_{t=1}^N\bigcap_{n=0}^\infty R_t(E_n)$.}
Let $x\in\bigcap_{n=0}^\infty\bigcup_{t=1}^NR_t(E_n)$. Then for any $n\geq 0$, there exists $l_n\in \Sigma$ such that $x\in R_{l_n}(E_{n})$. Hence there exist some $l\in \Sigma$ and a subsequence $\{n_k\}$ of $\{n\}$ such that $x\in R_{l}(E_{n_k})$ for all $k\geq 0$. Let $y_{n_k}\in E_{n_k}$ such that $x\in R_l(y_{n_k})$. By Condition (C), there exists a subsequence $\{y_{n_{k_j}}\}$ converging to some $$y\in\bigcap_{n=0}^\infty E_n\qquad \text{satisfying}\qquad x\in R_l(y)\subseteq \bigcap_{n=0}^\infty R_l(E_n).$$ Therefore, we have $x\in\bigcup_{t=1}^N\bigcap_{n=0}^\infty R_t(E_n)$.
\noindent{\em Claim 2. $\bigcup_{t=1}^N\bigcap_{n=0}^\infty R_t(E_n)\subseteq\bigcup_{t=1}^NR_t\big(\bigcap_{n=0}^\infty E_n\big)$.}
Let $x\in\bigcup_{t=1}^N\bigcap_{n=0}^\infty R_t(E_n)$. Then there exists $l\in\Sigma$ such that for any $n\geq0$, $x\in R_l(E_n)$. Let $y_n\in E_n$ such that $x\in R_l(y_n)$. By Condition (C), there exists a subsequence $\{y_{n_k}\}$ converging to some $$y\in\bigcap_{n=0}^\infty E_n \qquad \text{satisfying}\qquad x\in R_l(y)\subseteq R_l\Bigg(\bigcap_{n=0}^\infty E_n\Bigg).$$ Therefore, we have $x\in \bigcup_{t=1}^NR_t(\bigcap_{n=0}^\infty E_n)$.
By Claims 1 and 2, we have $$K=\bigcap_{n=0}^\infty\bigcup_{t=1}^N R_t(E_n)\subseteq\bigcup_{t=1}^N\bigcap_{n=0}^\infty R_t(E_n)\subseteq\bigcup_{t=1}^NR_t\Bigg(\bigcap_{n=0}^\infty E_n\Bigg)=\bigcup_{t=1}^NR_t(K).$$ Using the result of (a), we have $K= \bigcup_{t=1}^NR_t(K)$. This proves (b). \end{proof}
The following counterexample shows that Proposition \ref{prop:2.2}(b) fails if Condition (C) is not assumed.
\begin{exam}\label{defi(c)} Let $E_0:=\{0,1\}\bigcup\{1/2^n:n\in\mathbb{N}_+\}$ and let $R_1$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}) &:= \begin{cases} 1, & \,\, \boldsymbol{x}=0, \\ \{0,1/2^{n+1}\}, & \,\,\boldsymbol{x}=1/2^n,\text{ where }n\in\mathbb{N}_+,\\ 1, & \,\,\boldsymbol{x}=1.\\ \end{cases} \end{align*} Then $E_n=\{0,1\}\bigcup\{1/2^k:k\geq n+1\}$, where $n\in\mathbb{N}_+$. Let $x=0\in E_0$ and, for each $k\in\mathbb{N}_+$, let $n_k:=k$ and $y_{n_k}:=1/2^{k+1}\in E_{n_k}$, so that $0\in R_1(y_{n_k})$.
Then $\{y_{n_k}\}$ converges to $y:=0$, but $x\notin R_1(y)=\{1\}$. Hence $R_1$ does not satisfy Condition (C). Moreover, we have $K:=\bigcap_{n=0}^\infty E_n=\{0,1\}$ and $R_1(K)=\{1\}$. Hence $K\not\subseteq R_1(K)$. \end{exam}
For a general IRS, there could be more than one nonempty compact set $K$ satisfying \eqref{eq:K=RK}. The following example illustrates this.
\begin{exam}\label{exa:notunique} Let $E_0:=[0,1]\subseteq \R$, and let $\{R_t\}_{t=1}^2$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}) &:= (3/4)\boldsymbol{x};\\ R_2(\boldsymbol{x}) &:= \begin{cases} 0, & \boldsymbol{x}=0, \\ (1/4)\boldsymbol{x}+3/4, & 0<\boldsymbol{x}<1, \\ \{3/4,1\}, & \boldsymbol{x}=1. \end{cases} \end{align*} Then there exist at least two nonempty compact sets $K$ satisfying equation \eqref{eq:K=RK}. \end{exam}
\begin{proof} We have $E_n=[0,1]$ for all $n\geq0$, and hence $K=[0,1]$. By Proposition \ref{prop:2.2}, we have $K=\bigcup_{t=1}^2R_t(K)$. Now let $\widetilde{K}=\{0\}$. Note that $$\{0\}=R_1\big(\{0\}\big)\bigcup R_2\big(\{0\}\big).$$ Then $\widetilde{K}=\bigcup_{t=1}^2R_t(\widetilde{K})$, which completes the proof. \end{proof}
In the next section, we will study the uniqueness of a nonempty compact set $K$ satisfying the equality \eqref{eq:K=RK}.
\section{Associated graph-directed iterated function systems}\label{S:IFS} \setcounter{equation}{0} In this section, we let $ X $ be a complete metric space, $E_0\subseteq X $ be a nonempty compact set, and $\{R_t\}_{t=1}^N$ be an IRS on $E_0$. We introduce the notion of a graph-directed iterated function system (GIFS) associated to $\{R_t\}_{t=1}^N$, and prove Theorem \ref{thm:exi}.
{\em A graph-directed iterated function system} (GIFS) of contractions $\{f_e\}_{e\in E}$ on $X$ is an ordered pair $G = (V, E)$, where $V = \{1,\ldots, m\}$ is the set of vertices and $E$ is the set of all directed edges. We allow more than one edge between two vertices. A {\em directed path} in $G$ is a finite string $\mathbf{e} =(e_1,\ldots, e_q)$ of edges in $E$ such that the terminal vertex of each $e_i$ is the initial vertex of the edge $e_{i+1}$. For such a path, denote the length of $\mathbf{e}$ by $|\mathbf{e}|=q$. For any two vertices $i,j\in V$ and any positive integer $q$, let $E^{i,j}$ be the set of all directed edges from $i$ to $j$, $E^{i,j}_q$ be the set of all directed paths of length $q$ from $i$ to $j$, $E_q$ be the set of all directed paths of length $q$, and $E^{*}$ be the set of all directed paths, i.e., \begin{align*} E_q:=\bigcup_{i,j=1}^mE^{i,j}_q\qquad \text{and}\qquad E^{*}:=\bigcup_{q=1}^\infty E_q. \end{align*} Then there exists a unique collection of nonempty compact sets $\{\widetilde{K}_i\}_{i\in V}$ satisfying \begin{align*} \widetilde{K}_i=\bigcup_{j=1}^m\bigcup_{e\in E^{i,j}} f_e(\widetilde{K}_j), \qquad i\in V \end{align*} (see, e.g., \cite{Mauldin-Williams_1988,Edgar_1990,Olsen_1994}). Let $\widetilde{K}:=\bigcup_{i=1}^m \widetilde{K}_i$ be the {\em invariant set} or {\em attractor} of the GIFS. Recall that a GIFS $G = (V, E)$ is said to be {\em strongly connected} provided that for any $i,j\in V$, there exists a directed path from $i$ to $j$.
\begin{defi}\label{defi:3.1} Let $X$ be a complete metric space and let $E_0\subseteq X $ be a nonempty compact set. Let $\{R_t\}_{t=1}^N$ be an IRS on $E_0$. For any $k\geq0$, let $E_{k+1}$ be defined as in \eqref{E_n}. We assume that there exists an integer $k_0\geq 0$ such that $E_{k_0}=\bigcup_{j=1}^mW_j$, where each $W_j\subseteq E_{k_0}$ is compact.
Suppose that there exists a GIFS $G=(V,E)$ of contractions $\{f_e\}_{e\in E}$, where $V=\{1,\ldots,m\}$, such that for any $q\geq 1$, \begin{align}\label{eq:gifs} \bigcup_{t\in\Sigma^q}R_t(E_{k_0})=\bigcup_{i,j=1}^m\bigcup_{\mathbf{e}\in E_{q}^{i,j}}f_\mathbf{e}(W_j). \end{align} Then $G$ is called a \textit{graph-directed iterated function system (GIFS) associated to} $\{R_t\}_{t=1}^N$ on $E_{k_0}$. We call \begin{align*} \widetilde{K}:=\bigcap_{q=1}^\infty\bigcup_{i,j=1}^m\bigcup_{\mathbf{e}\in E_{q}^{i,j}}f_\mathbf{e}(W_j) \end{align*} the {\em attractor} generated by the GIFS $G$ associated to $\{R_t\}_{t=1}^N$. \end{defi} We have the following proposition. \begin{prop}\label{prop:3.1} Let $X$, $E_0$, and $\{R_t\}_{t=1}^N$ be as in Definition \ref{defi:3.1}. Let $K$ be the attractor of $\{R_t\}_{t=1}^N$. Assume that there exists a GIFS $G=(V,E)$ associated to $\{R_t\}_{t=1}^N$ and assume that $G$ consists of contractions $\{f_e\}_{e\in E}$. Let $\widetilde{K}$ be the attractor of G. Then $K=\widetilde{K}$. \end{prop} \begin{proof} By Definition \ref{defi:3.1}, for any $q\geq1$, we have \begin{align}\label{eq:q} \bigcap_{q=1}^\infty\bigcup_{t\in\Sigma^q}R_t(E_{k_0})=\bigcap_{q=1}^\infty\bigcup_{i,j=1}^m\bigcup_{\mathbf{e}\in E_{q}^{i,j}}f_\mathbf{e}(W_j). \end{align} By Definition \ref{defi:2.1} and \eqref{eq:q}, we have \begin{align*} K=\bigcap_{n=0}^\infty E_{n+1}=\bigcap_{q=1}^\infty E_{k_0+q}=\bigcap_{q=1}^\infty\bigcup_{t\in\Sigma^q}R_t(E_{k_0}) =\bigcap_{q=1}^\infty\bigcup_{i,j=1}^m\bigcup_{\mathbf{e}\in E_{q}^{i,j}}f_\mathbf{e}(W_j)= \widetilde{K}. \end{align*} \end{proof} \begin{prop}\label{prop:3.2} Let $\{R_t\}_{t=1}^N$ and $K$ be defined as in Definition \ref{defi:3.1}. Assume that there exists a GIFS $G=(V,E)$ associated to $\{R_t\}_{t=1}^N$ and assume that $G$ consists of contractions $\{f_e\}_{e\in E}$. Then there exists a unique nonempty compact set $K$ satisfying \eqref{eq:K=RK}. \end{prop} \begin{proof} Assume that there exists another nonempty compact set $K_1$ satisfying \eqref{eq:K=RK}. Note that \begin{align*} K_1=\bigcup_{t_1=1}^N\cdots\bigcup_{t_n=1}^NR_{t_1}\cdots R_{t_n}(K_1) \subseteq\bigcup_{t_1=1}^N\cdots\bigcup_{t_n=1}^NR_{t_1}\cdots R_{t_n}(E_0). \end{align*} Hence \begin{align*} & K_1\subseteq\bigcap_{n=1}^\infty\bigcup_{t_1=1}^N\cdots\bigcup_{t_n=1}^NR_{t_1}\cdots R_{t_n}(E_0) =\bigcap_{n=1}^\infty E_{n} =K. \end{align*} Next we show that $K\subseteq K_1$. We know that $$ K_1=\bigcap_{n=1}^\infty\bigcup_{t_1=1}^N\ldots\bigcup_{t_n=1}^NR_{t_1}\ldots R_{t_n}(K_1).$$ For any $e\in E$, let $\rho_e$ be a contraction ratio of $f_e$. Then for any $x\in\widetilde{K}$ and for any integer $m$, $n$, where $m>n>0$, we have \begin{align*} &\big|R_{t_1}\cdots R_{t_{m-k_0}}(R_{t_{m-k_0+1}}\cdots R_{t_{m}}x)-R_{t_1}\cdots R_{t_{n-k_0}}(R_{t_{n-k_0+1}}\cdots R_{t_{n}}x)\big|\\ =&\big|f_{e_{j_1}}\cdots f_{e_{j_{m-k_0}}}(R_{t_{m-k_0+1}}\cdots R_{t_{m}}x)-f_{e_{j_1}}\cdots f_{e_{j_{n-k_0}}}(R_{t_{n-k_0+1}}\cdots R_{t_{n}}x)\big|\\ \leq&\,\rho_{e_{j_1}}\cdots \rho_{e_{j_{n}}}|E_{k_0}|. \end{align*} Hence $\lim_{n\rightarrow\infty}R_{t_1}\cdots R_{t_{n-k_0}}(R_{t_{n-k_0+1}}\cdots R_{t_n}x)$ exists. Thus, for any $z\in K$, there exists $x^*\in K_1$ satisfying $$z=\lim_{n\rightarrow\infty}f_{e_{j_1}}\cdots f_{e_{j_{n-k_0}}}(x^*)\in K_1.$$ Therefore $K\subseteq K_1$. This proves the proposition. \end{proof} Let $G=(V,E)$ be a GIFS of contractions $\{f_e\}_{e\in E}$. 
We say that $\{U_i\}_{i=1}^m$ is an {\em invariant family} under $G$ if \begin{align*} \bigcup_{e\in E^{i,j}}f_e(U_j)\subseteq U_i\qquad \text{for all}\,\,i,j\in \{1,\ldots,m\}. \end{align*} \begin{defi}\label{defi:good} Let $ X $, $E_0$, $\{R_t\}_{t=1}^N$, and $k_0$ be defined as in Definition \ref{defi:3.1}. \begin{enumerate} \item[(a)] Assume that conditions (i) and (ii) below hold. \begin{enumerate} \item[(i)] For any $t\in\Sigma$ and $x\in E_{k_0-1}$, $\#\{R_t(x)\}<\infty.$ \item[(ii)] For any $t\in\Sigma$, there exists $F_t^i\in{\rm dom}(R_t)\subseteq E_{k_0-1}$ such that for each $l\in \Pi_t$ and $i\in\{1,\ldots,i_t\}$, there exists a contraction $f_t^{l,i}: F_t^i\to f_t^{l,i}(F_t^i)$ and the following holds $$\bigcup_{t=1}^N\bigcup_{i=1}^{i_t}R_t(F_t^i)=\bigcup_{t=1}^N\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{i_t}f_t^{l,i}(F_t^i).$$ \end{enumerate} Then we say that $\{f_t^{l,i}\}_{t=1,l=1,i=1}^{N,n_t,i_t}$ is {\em a finite family of contractions decomposed from $\{R_t\}_{t=1}^N$}, and that each $f_t^{l,i}$ is a {\em branch} of $R_t|_{F_t^i}$. \item[(b)] Let $\{W_t^{\alpha,\beta}\}_{t=1,\alpha=1,\beta=1}^{N,m_t,p_t}$ be a partition of $E_{k_0}$ satisfying $$\bigcup_{t=1}^N\bigcup_{\alpha=1}^{m_t}\bigcup_{\beta=1}^{p_t}W_t^{\alpha,\beta}=E_{k_0}.$$ Let $\mathcal{W}_{k_0}:=\{W_t^{\alpha,\beta}:t\in \Sigma, \alpha\in \Lambda_t,\beta\in \{1,\ldots,p_t\}\}$ be a collection of compact subsets of $E_{k_0}$. Let $\{g_t^{l,\alpha,\beta}:W_t^{\alpha,\beta}\to g_t^{l,\alpha,\beta}(W_t^{\alpha,\beta})\}_{t=1,l=1,\alpha=1,\beta=1}^{N,n_t,m_t,p_t}$ be a finite family of contractions decomposed from $\{R_t\}_{t=1}^N$. Fix $t\in \Sigma$. Assume that for any $\alpha\in \Lambda_t$ and $\beta\in \Psi_t$, $W_t^{\alpha,\beta}$ is invariant under $g_t^{l,\alpha,\beta}$ for all $l\in \Pi_t$. Then we call $\mathcal{W}_{k_0}$ {\em a good partition of $E_{k_0}$ with respect to $\{g_t^{l,\alpha,\beta}\}_{t=1,l=1,\alpha=1,\beta=1}^{N,n_t,m_t,p_t}$}. \end{enumerate} \end{defi} \begin{proof}[Proof of Theorem \ref{thm:exi}] To prove this theorem, we consider the following two cases. \noindent{\em Case 1. For any $t\in\Sigma$, $H_t=\emptyset$.} In this case, $\{R_t\}_{t=1}^N$ is an IFS. Hence there exists a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$, where $k_0\geq0$. \noindent{\em Case 2. For some $t\in\Sigma$, $H_t\neq\emptyset$.} In this case, in order to prove the existence of the GIFS associated to $\{R_t\}_{t=1}^N$, we first prove the following five claims. \noindent{\em Claim 1. For any $t\in\Sigma$, we have $R_t(E_{k_0-1})=\Big(\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\overline{h_t^{l,i}(H_t^i)}\Big)\bigcup\Big(\bigcup_{i=1}^{s_t}\overline{h_t^{0,i}(J_t^i)}\Big)$.} By (b)(i), we have $$R_t(E_{k_0-1})=\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}h_t^{l,i}(H_t^i)\bigcup \Big(\bigcup_{i=1}^{s_t} h_t^{0,i}(J_t^i)\Big)\subseteq\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\overline{h_t^{l,i}(H_t^i)}\bigcup\Big(\bigcup_{i=1}^{s_t}\overline{h_t^{0,i}(J_t^i)}\Big).$$ Since $R_t(E_{k_0-1})$ is compact, we have $\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\overline{h_t^{l,i}(H_t^i)}\bigcup\Big(\bigcup_{i=1}^{s_t}\overline{h_t^{0,i}(J_t^i)}\Big)\subseteq R_t(E_{k_0-1}).$ This proves Claim 1. \noindent{\em Claim 2. For any $t\in\Sigma$, let \begin{align*} & \underline{W}_t^{0,i}:=\overline{h_t^{0,i}(J_t^i)},\quad\text{where}\,\,i\in\Delta_t,\,\,\text{and}\\ &\underline{W}_t^{l,i}:=\overline{h_t^{l,i}(H_t^i)},\quad \underline{W}_t^{n_t+1,i}:=\overline{E_{k_0}\bigcap H_t^i},\quad\text{where}\,\,l\in\Pi_t\,\,\text{and}\,\, i\in\Lambda_t. 
\end{align*} Then \begin{align}\label{eq:E_k0} E_{k_0}=\bigcup_{t=1}^N\Bigg(\bigcup_{i=1}^{s_t}\underline{W}_t^{0,i}\bigcup\Big( \bigcup_{l=1}^{n_t+1}\bigcup_{i=1}^{m_t}\underline{W}_t^{l,i}\Big)\Bigg). \end{align} }
By Claim 1, we have \begin{align*} E_{k_0}=&\bigcup_{t=1}^NR_t(E_{k_0-1})=\bigcup_{t=1}^N\Bigg(\Big(\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\overline{h_t^{l,i}(H_t^i)}\Big)\bigcup\Big(\bigcup_{i=1}^{s_t}\overline{h_t^{0,i}(J_t^i)}\Big)\Bigg)\\ =&\bigcup_{t=1}^N\Bigg(\bigcup_{i=1}^{s_t}\underline{W}_t^{0,i}\bigcup \Big( \bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\underline{W}_t^{l,i}\Big)\Bigg). \end{align*} Note that for any $t\in\Sigma$ and $i\in\Lambda_t$, we have $\underline{W}_t^{n_t+1,i}=\overline{E_{k_0}\bigcap H_t^i}\subseteq E_{k_0}.$ Hence \eqref{eq:E_k0} holds.
\noindent{\em Claim 3. For any $t\in\Sigma$ and $i\in\Lambda_t$, let $\{x_n\}$ be a convergent sequence in $H_t^i$. Then for any $l\in\Pi_t$, the sequence $\{h_t^{l,i}(x_n)\}$ converges. If $H_t^i$ is an open set, then we can extend $h_t^{l,i}$ from $H_t^i$ to $\overline{H_t^i}$ and let $\tilde{h}_t^{l,i}:\overline{H_t^i}\rightarrow \underline{W}_t^{l,i}$ be defined as \begin{align}\label{eq:hl} \tilde{h}_t^{l,i}(x): = \begin{cases} h_t^{l,i}(x), & x\in H_t^i, \\ \lim_{n\rightarrow\infty}h_t^{l,i}(x_n), & x\in\overline{H_t^i}\backslash H_t^i, \end{cases} \end{align} where for any $x\in \overline{H_t^i}\backslash H_t^i$, $x_n\rightarrow x$ as $n\rightarrow\infty$. Moreover, $\tilde{h}_t^{l,i}$ is a surjection.}
In fact, by (b)(i), $h_t^{l,i}$ is uniformly continuous on $H_t^i$. Hence the sequence $\{h_t^{l,i}(x_n)\}$ converges, and thus we can define $h_t^{l,i}(x):=\lim_{n\rightarrow\infty}h_t^{l,i}(x_n)$ if $x\in \overline{H_t^i}\backslash H_t^i$. Note that for any $x\in\overline{H_t^i}\backslash H_t^i$, $h_t^{l,i}(x)$ is independent of the choice of the sequence $\{x_n\}$. Therefore we can extend $h_t^{l,i}$ from $H_t^i$ to $\overline{H_t^i}$ and let $\tilde{h}_t^{l,i}$ be defined as in \eqref{eq:hl}. To show that $\tilde{h}_t^{l,i}$ is a surjection, we let $y\in \underline{W}_t^{l,i}=\overline{h_t^{l,i}(H_t^i)}$. Then there exists a sequence $\{h_t^{l,i}(x_n)\}\subseteq h_t^{l,i}(H_t^i)$ such that \begin{align}\label{eq:6a} \lim_{n\rightarrow\infty} h_t^{l,i}(x_n)=y. \end{align} Since $\{x_n\}$ is contained in the compact set $E_{k_0-1}$, there exists a convergent subsequence $\{x_{n_k}\}$ such that \begin{align}\label{eq:6b} \lim_{k\rightarrow\infty} x_{n_k}=x\in\overline{H_t^i}. \end{align} Note that \begin{align}\label{eq:6c} |\tilde{h}_t^{l,i}(x)-y|\leq|\tilde{h}_t^{l,i}(x)-h_t^{l,i}(x_{n_k})|+|h_t^{l,i}(x_{n_k})-y|. \end{align} Let $\varepsilon>0$. Combining (b)(i) and the definition of $\tilde{h}_t^{l,i}$, there exists $ N_1\in\mathbb{N}$ such that for all $k>N_1$, \begin{align}\label{eq:6d} |h_t^{l,i}(x_{n_k})-\tilde{h}_t^{l,i}(x)|<\frac{\varepsilon}{2}. \end{align} By \eqref{eq:6a} and \eqref{eq:6b}, we have $\lim_{k\rightarrow\infty}h_t^{l,i}(x_{n_k})=y$, and thus there exists $N_2\in\mathbb{N}$ such that for all $k>N_2$, \begin{align}\label{eq:6e} |h_t^{l,i}(x_{n_k})-y|<\frac{\varepsilon}{2}. \end{align} Let $N:=\max\{N_1,N_2\}$. Then for any $k>N$, \eqref{eq:6d} and \eqref{eq:6e} hold. By \eqref{eq:6c}, we have $$|\tilde{h}_t^{l,i}(x)-y|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.$$ Since $\varepsilon>0$ is arbitrary, it follows that $\tilde{h}_t^{l,i}(x)=y$. This proves Claim 3.
\noindent{\em Claim 4. For any $t\in\Sigma$ and $i\in\Delta_t$, let $\{x_n'\}$ be a convergent sequence in $J_t^i$.
Then the sequence $\{h_t^{0,i}(x_n')\}$ converges. If $J_t^i$ is an open set, then we can extend $h_t^{0,i}$ from $J_t^i$ to $\overline{J_t^i}$, and let $\tilde{h}_t^{0,i}:\overline{J_t^i}\rightarrow \underline{W}_t^{0,i}$ be defined as \begin{align}\label{eq:h0} \tilde{h}_t^{0,i}(x): = \begin{cases} h_t^{0,i}(x), & x\in J_t^i,\\ \lim_{n\rightarrow\infty}h_t^{0,i}(x'_n), & x\in \overline{J_t^i}\backslash J_t^i, \end{cases} \end{align} where for any $x\in \overline{J_t^i}\backslash J_t^i$, $x'_n\rightarrow x$ as $n\rightarrow\infty$. Moreover, $\tilde{h}_t^{0,i}$ is a surjection.}
The proof of Claim 4 is similar to that of Claim 3; we omit the details.
\noindent{\em Claim 5. For any $t\in\Sigma$, \begin{align}\label{eq:9f} R_t(E_{k_0})=\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\tilde{h}_t^{l,i}\big(\overline{E_{k_0}\bigcap H_t^i}\big)\bigcup \Bigg(\bigcup_{i=1}^{s_t}\tilde{h}_t^{0,i}\big(\overline{J_t^i}\big)\Bigg). \end{align}}
In fact, by using a method similar to that for Claim 1, for any $t\in\Sigma$, we have \begin{align}\label{eq:9a} R_t(E_{k_0})=\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}\bigcup\Big(\bigcup_{i=1}^{s_t}\overline{h_t^{0,i}(J_t^i)}\Big). \end{align} As in the proof for Claim 3, for any $t\in\Sigma,i\in\Lambda_t$, and $l\in\Pi_t$, we have \begin{align}\label{eq:9b} \overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}\subseteq\tilde{h}_t^{l,i}(\overline{E_{k_0}\bigcap H_t^i}). \end{align} By Claim 3, for any $x\in(E_{k_0}\bigcap H_t^i)\subseteq H_t^i$, we have $$\tilde{h}_t^{l,i}(x)=h_t^{l,i}(x)\in h_t^{l,i}(E_{k_0}\bigcap H_t^i)\subseteq\overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}.$$ If $x\in\overline{E_{k_0}\bigcap H_t^i}\backslash(E_{k_0}\bigcap H_t^i)$, then $x\in\overline{H_t^i}\backslash H_t^i$. We know that there exists a sequence $\{x_n\}\subseteq(E_{k_0}\bigcap H_t^i)$, converging to $x$, such that \begin{align*} \tilde{h}_t^{l,i}(x)=\lim_{n\rightarrow\infty}h_t^{l,i}(x_n)\in \overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}. \end{align*} Hence for any $t\in\Sigma,i\in\Lambda_t$, and $l\in\Pi_t$, \begin{align}\label{eq:9c} \tilde{h}_t^{l,i}(\overline{E_{k_0}\bigcap H_t^i})\subseteq\overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}. \end{align} Combining \eqref{eq:9b} and \eqref{eq:9c}, for any $t\in\Sigma,i\in\Lambda_t$, and $l\in\Pi_t$, we have \begin{align}\label{eq:9d} \tilde{h}_t^{l,i}(\overline{E_{k_0}\bigcap H_t^i})=\overline{h_t^{l,i}(E_{k_0}\bigcap H_t^i)}. \end{align} By using a similar argument, for any $t\in \Sigma$ and $i\in\Delta_t$, we have \begin{align}\label{eq:9e} \tilde{h}_t^{0,i}(\overline{J_t^i})=\overline{h_t^{0,i}(J_t^i)}. \end{align} Combining \eqref{eq:9a}, \eqref{eq:9d}, and \eqref{eq:9e} proves \eqref{eq:9f}.
Next, fix $t\in\Sigma$. By (b)(ii) and Claim 2, we can rename the nonempty elements in $\{\underline{W}_t^{l,i}\}_{t=1,l=1,i=1}^{N,n_t+1,m_t}$ and $\{\underline{W}_t^{0,i}\}_{t=1,i=1}^{N,s_t}$ as $W_t^{s,i}$, where $s$ and $i$ satisfy the following conditions. \begin{enumerate} \item[(1)] For any $s\in\Psi_t^i$ and $i\in\Lambda_t$, $W_t^{s,i}\subseteq\overline{E_{k_0}\bigcap H_t^i}$. \item[(2)] For any $s\in\{p_t^i+1,\ldots,p_t^i+h_t^i\}$ and $i\in\Delta_t$, $W_t^{s,i}\subseteq\overline{J_t^i}$. \item[(3)] $\{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}$ is invariant under $g_t^{0,s,i}:=\tilde{h}_t^{0,i}|_{W_t^{s,i}}$, and $\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}$ is invariant under $g_t^{l,s,i}:=\tilde{h}_t^{l,i}|_{W_t^{s,i}}$, where $l\in \Pi_t$.
\end{enumerate} Note that $$\overline{E_{k_0}\bigcap H_t^i}=\bigcup_{s=1}^{p_t^i}W_t^{s,i} \qquad\text{and}\qquad \overline{J_t^i}=\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}W_t^{s,i}.$$ By \eqref{eq:9f}, for any $t\in\Sigma$, we have \begin{align*} R_t(E_{k_0})&=\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\tilde{h}_t^{l,i}\big(\overline{E_{k_0}\bigcap H_t^i}\big)\bigcup \Big(\bigcup_{i=1}^{s_t}\tilde{h}_t^{0,i}\big(\overline{J_t^i}\big)\Big)\\ &=\Bigg(\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\bigcup_{s=1}^{p_t^i}\tilde{h}_t^{l,i}\Big(W_t^{s,i}\Big)\Bigg)\bigcup\Bigg(\bigcup_{i=1}^{s_t}\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}\tilde{h}_t^{0,i}\Big(W_t^{s,i}\Big)\Bigg)\\ &=\Bigg(\bigcup_{l=1}^{n_t}\bigcup_{i=1}^{m_t}\bigcup_{s=1}^{p_t^i}g_t^{l,s,i}(W_t^{s,i})\Bigg)\bigcup\Bigg(\bigcup_{i=1}^{s_t}\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}g_t^{0,s,i}(W_t^{s,i})\Bigg). \end{align*} By the definitions of $g_t^{l,s,i}$ and $W_t^{s,i}$, for fixed $t$, $l$, $s$, $i$, and $g_u:=g_t^{l,s,i}$, $W_j:=W_t^{s,i}$, we have $$g_u:W_j\rightarrow W_k, \quad\text{for}\,\, j,k\in\{1,\ldots,p\},$$ where $W_k$ is an element of the partition containing $g_t^{l,s,i}(W_t^{s,i})$ and $u\in\{1,\ldots,L\}$. Let $e$ be the edge corresponding to $g_u$, and let $E^{k,j}$ be the set of all edges from $k$ to $j$. Let $$f_e:=g_u,\quad e\in E^{k,j}.$$ Then for any $e\in E$, $f_e$ is contractive. Hence $G:=(V,E)$ is a GIFS of contractions $\{f_e\}_{e\in E}$, where $V:=\{1,\ldots,p\}$ and $E=\bigcup_{k,j=1}^pE^{k,j}$ is the set of all edges. Finally, we show that $G$ is a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$. By \eqref{eq:9f} and the definition of $g_u$, we have \begin{align}\label{eq:10b} \bigcup_{t=1}^NR_t(E_{k_0}) =\Bigg(\bigcup_{u=1}^{L'}\bigcup_{j=1}^{p'}g_u(W_j)\Bigg)\bigcup\Bigg(\bigcup_{u=L'+1}^{L}\bigcup_{j=p'+1}^pg_u(W_j)\Bigg)=\bigcup_{k,j=1}^p\bigcup_{e\in E^{k,j}}f_e(W_j). \end{align} Now we show by induction that for $q\geq1$, \begin{align}\label{eq:10c} \bigcup_{t\in\Sigma^q}R_t(E_{k_0})=\bigcup_{k,j=1}^p\bigcup_{\mathbf{e}\in E_q^{k,j}}f_\mathbf{e}(W_j). \end{align} For $q=1$, \eqref{eq:10b} implies that \eqref{eq:10c} holds. Assume that \eqref{eq:10c} holds when $q=q_0$. For $q=q_0+1$, \begin{align*} &\bigcup_{t\in\Sigma^{q_0+1}}R_t(E_{k_0})=\bigcup_{t_1=1}^NR_{t_1}\Bigg(\bigcup_{k,j=1}^p\bigcup_{e\in E_{q_0}^{k,j}}f_e(W_j)\Bigg)\\ =&\Bigg(\bigcup_{u=1}^{L'}\bigcup_{i=1}^{m'}g_u\Bigg(\bigcup_{j=1}^p\bigcup_{\mathbf{e}\in E_{q_0}^{k,j}}f_\mathbf{e}(W_j)\Bigg)\Bigg)\bigcup\Bigg(\bigcup_{u=L'+1}^{L}\bigcup_{i=m'+1}^pg_u\Bigg(\bigcup_{j=1}^p\bigcup_{\mathbf{e}\in E_{q_0}^{k,j}}f_\mathbf{e}(W_j)\Bigg)\Bigg)\\ =&\bigcup_{k,s=1}^p\bigcup_{\mathbf{e}\in E^{s,k}}f_\mathbf{e}\Bigg(\bigcup_{j=1}^p\bigcup_{\mathbf{e}\in E_{q_0}^{k,j}}f_\mathbf{e}(W_j)\Bigg)\qquad (\text{by \eqref{eq:10b}})\nonumber\\ =&\bigcup_{k,j=1}^p\bigcup_{\mathbf{e}\in E_{q_0+1}^{k,j}}f_\mathbf{e}(W_j). \end{align*} Hence \eqref{eq:10c} holds. By definition, $G$ is a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$, where $k_0\geq1$. This proves Case 2. Combining Cases 1 and 2 completes the proof. \end{proof}
We give a counterexample to show that the conclusion of Theorem \ref{thm:exi} may fail if condition (a) is not satisfied.
\begin{exam}\label{thm(a)} Let $E_0:=[0,1]$, and let $\{R_t\}_{t=1}^2$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}) &:= (1/3)\,\boldsymbol{x}+2/3;\\ R_2(\boldsymbol{x}) &:= \begin{cases} (1/3)\boldsymbol{x}, & 0<\boldsymbol{x}\leq1, \\ \{1/2^p:p\in\mathbb{N}\}, & \boldsymbol{x}=0.
\end{cases} \end{align*} Then $\#\{R_2(0)\}=\infty.$ Thus, there does not exist any (finite) GIFS associated to $\{R_t\}_{t=1}^2$. \end{exam}
Examples \ref{notcon} and \ref{decomposed} show that condition (b)(i) in Theorem \ref{thm:exi} need not be satisfied.
\begin{exam}\label{notcon} Let $E_0:=[0,1]$. Let $\{R_t\}_{t=1}^2$ be an IRS on $E_0$ defined as \begin{align*} &R_1(\boldsymbol{x}): = \begin{cases} \{\boldsymbol{x},\boldsymbol{x}+1/4\}, & \boldsymbol{x}\in [0,1/2], \\ (1/2)\boldsymbol{x}+1/4, & \boldsymbol{x}\in (1/2,1];\\ \end{cases}\\ &R_2(\boldsymbol{x}): =\boldsymbol{x}/2. \end{align*} Then $H_1=[0,1/2]$, and thus for any $\boldsymbol{x}\in H_1$, $$h_1^{1,1}(\boldsymbol{x})=\boldsymbol{x}\qquad\text{and}\qquad h_1^{2,1}(\boldsymbol{x})=\boldsymbol{x}+1/4.$$ Therefore $h_1^{1,1}$ and $h_1^{2,1}$ are not contractions. \end{exam}
\begin{exam}\label{decomposed} Let $E_0:=[0,1]$. For any $i\in\mathbb{N}_+$, let $F^i:=[1-(1/2^{2i-2}),1-(1/2^{2i-1})]$, and define $$f^{1,i}(\boldsymbol{x}):=\frac{1}{2}\boldsymbol{x}+\frac{2^{2i-1}-2}{2^{2i}}\qquad\text{and}\qquad f^{2,i}(\boldsymbol{x}):=\frac{1}{3}\boldsymbol{x}+\frac{2^{2i+1}-8}{3\cdot2^{2i}}\qquad\text{for}\quad\boldsymbol{x}\in F^i.$$ Note that \begin{align*} &f^{1,i}(F^i)=\Big[\frac{1}{2}+\frac{2^{2i-1}-4}{2^{2i}},\,\frac{1}{2}+\frac{2^{2i-1}-3}{2^{2i}}\Big]\subseteq F^i\quad\text{and}\\ &f^{2,i}(F^i)=\Big[\frac{1}{3}+\frac{2^{2i+1}-12}{3\cdot2^{2i}},\,\frac{1}{3}+\frac{2^{2i+1}-10}{3\cdot2^{2i}}\Big]\subseteq F^i, \end{align*} where $i\in \mathbb{N}_+$ (see Figure \ref{fig.b(i)}). Let $R_1$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}): = \begin{cases} \{f^{1,i}(\boldsymbol{x}),f^{2,i}(\boldsymbol{x})\}, & \boldsymbol{x}\in F^i, \,\,i\in\mathbb{N}_+, \\ \boldsymbol{x}/4, & \boldsymbol{x}\in E_0\backslash (\bigcup_{i=1}^\infty F^i\bigcup\{1\}),\\ \{0,1\}, & \boldsymbol{x}=1. \end{cases} \end{align*} Then $H_1=(\bigcup_{i=1}^\infty F^i)\bigcup\{1\}$, and $r_1:=R_1|_{H_1}$ cannot be decomposed as a finite family of contractions.
\end{exam} \begin{figure}[htbp] \centering \begin{tikzpicture}[scale=0.9] \draw[red,ultra thick](0,0)--(5,0); \draw[red,ultra thick](7.5,0)--(8.75,0); \draw[red,ultra thick](9.375,0)--(9.6875,0); \draw[red,ultra thick](9.84375,0)--(9.9921875,0); \draw[red,ultra thick](9.9609375,0)--(9.98046875,0); \draw[red,ultra thick](9.990234375,0)--(9.995117188,0); \draw[red,ultra thick](9.997558594,0)--(9.998779297,0); \draw[blue,ultra thick](5,0)--(7.5,0); \draw[blue,ultra thick](8.75,0)--(9.375,0); \draw[blue,ultra thick](9.6875,0)--(9.84375,0); \draw[blue,ultra thick](9.9921875,0)--(9.9609375,0); \draw[blue,ultra thick](9.98046875,0)--(9.990234375,0); \draw[blue,ultra thick](9.995117188,0)--(9.997558594,0); \draw[red,ultra thick](0,-1.5)--(2.5,-1.5); \draw[red,ultra thick](0,-2.1)--(1.67,-2.1); \draw[red,ultra thick](7.5,-1.5)--(8.125,-1.5); \draw[red,ultra thick](7.5,-2.1)--(7.9166666667,-2.1); \draw[red,ultra thick](9.375,-1.5)--(9.53125,-1.5); \draw[red,ultra thick](9.375,-2.1)--(9.47916666667,-2.1); \draw[red,ultra thick](10,-1.5)--(10.05,-1.5); \draw[blue,ultra thick](1.25,-2.75)--(1.875,-2.75); \draw[blue,ultra thick](2.1875,-2.75)--(2.34375,-2.75); \draw[blue,ultra thick](2.421875,-2.75)--(2.4609375,-2.75); \draw[blue,ultra thick](2.498046875,-2.75)--(2.490234375,-2.75); \draw[blue,ultra thick](2.49117188,-2.75)--(2.497558594,-2.75); \draw[blue,ultra thick](2.498779297,-2.75)--(2.499389648,-2.75); \draw[blue](5,0)circle(.07); \draw[blue](7.5,0)circle(.07); \draw[blue](8.75,0)circle(.07); \draw[blue](9.375,0)circle(.07); \draw[blue](9.6875,0)circle(.07); \draw[blue](9.84375,0)circle(.07); \draw[blue](1.25,-2.75)circle(.07); \draw[blue](1.875,-2.75)circle(.07); \draw[blue](2.1875,-2.75)circle(.07); \draw[blue](2.34375,-2.75)circle(.07); \draw[fill=black](10,0.)circle(.05); \draw[fill=black](10,-1.5)circle(.03); \draw[fill=black](0.0,-1.5)circle(.03); \draw[black] (-1,0) node[right]{$E_0$}; \draw[black] (-1,-1.8) node[right]{$E_1$}; \draw[black] (2.5,0.3) node[right]{$F^1$}; \draw[black] (7.9,0.3) node[right]{$F^2$}; \draw[black] (9.3,0.3) node[right]{$F^3$}; \draw[black] (0.8,-1.25) node[right][scale=0.8]{$f^{1,1}(F^1)$}; \draw[black] (0.3,-1.85) node[right][scale=0.8]{$f^{2,1}(F^1)$}; \draw[black] (1.1,-2.5) node[right][scale=0.75]{$R_1(E_0\backslash H_1)$}; \draw[black] (-0.2,-0.25) node[right]{$0$}; \draw[black] (9.8,-0.25) node[right]{$1$}; \end{tikzpicture} \caption{The sets $E_1=R_1(E_0)$ and $H_1=(\bigcup_{i=1}F^i)\bigcup\{1\}$ in Example \ref{decomposed}.}\label{fig.b(i)} \end{figure} \begin{proof} By the definition of $R_1$, we have $H_1=(\bigcup_{i=1}F^i)\bigcup\{1\}$. We prove the following claim. \noindent{\em Claim 1. For any $i,$ $j\in\mathbb{N}_+$, let $f:F^i\bigcup F^j\rightarrow f(F^i\bigcup F^j)$ be a function decomposed from $R_1$. Then $f$ is not contractive.} To prove this claim, we let $x_1:=1-(1/2^{2i-2})\in F^i$ and $x_2:=1-(1/2^{2j-2})\in F^j$. Then $$f(x_1)=1-\frac{1}{2^{2i-2}}\qquad\text{and}\qquad f(x_2)=1-\frac{1}{2^{2j-2}}.$$ Note that $$d(f(x_1),f(x_2))=\Big|\frac{1}{2^{2i-2}}-\frac{1}{2^{2j-2}}\Big|=d(x_1,x_2).$$ Thus $f$ is not a contraction. Next, suppose that $r_1=R_1|_{H_1}$ can be decomposed as a finite family of contractions $\{h_1^{l,s}:H_1^s\to h_1^{l,s}(H_1^s)\}_{l=1,s=1}^{n,m}$. 
Then there would exist at least one $H_1^s$ such that \begin{align*} \bigcup_{i=q}^\infty F^i\subseteq H_{1}^s\,\,\text{ for some }\,\,\,q\geq 1,\qquad\text{and}\qquad h_1^{l,s}\Big|_{\big(\bigcup_{i=q}^\infty F^i\big)}\,\,\text{is contractive.} \end{align*}This contradicts Claim 1 and proves that $r_1$ cannot be decomposed as a finite family of contractions. \end{proof} The following example shows that condition (b)(ii) in Theorem \ref{thm:exi} need not be satisfied. \begin{exam}\label{notgood} Let $E_0:=[0,1]$. For any $i\in \mathbb{N}_+$, let \begin{align*} F^i:=\Big[1-\frac{1}{2^{i-1}},1-\frac{3}{5\cdot2^{i-1}}\Big],\quad I^i:=\Big(1-\frac{3}{5\cdot2^{i-1}},1-\frac{1}{2^{i}}\Big), \end{align*} and let $$f(\boldsymbol{x}):=\frac{5}{8}\boldsymbol{x}+\frac{3}{8}+\frac{3}{5\cdot2^{i+2}},\qquad\boldsymbol{x}\in F^i.$$ Note that $$f(F^i)=\Big[1-\frac{11}{5\cdot2^{i+1}},1-\frac{3}{5\cdot2^{i}}\Big]\subseteq I^i\bigcup F^{i+1}$$ (see Figure \ref{fig.b(ii)}). Let $R_1$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}) &:= \begin{cases} \big\{f(\boldsymbol{x}), \boldsymbol{x}/20\big\}, &\boldsymbol{x}\in F^i,\,\,i\in \mathbb{N}_+, \\ \boldsymbol{x}/20,& \boldsymbol{x}\in \bigcup_{i=1}^\infty I^i,\\ \{1/20,1\}, & \boldsymbol{x}=1. \end{cases} \end{align*} Let $E_1:=R_1(E_0)$. Then there does not exist a good partition of $E_1$. \end{exam} \begin{figure}[htbp] \centering \begin{tikzpicture}[scale=1] \draw[red,ultra thick](0,0.2)--(4,0.2); \draw[red,ultra thick](5,0.2)--(7,0.2); \draw[red,ultra thick](7.5,0.2)--(8.5,0.2); \draw[red,ultra thick](8.75,0.2)--(9.25,0.2); \draw[red,ultra thick](9.375,0.2)--(9.625,0.2); \draw[red,ultra thick](9.6875,0.2)--(9.8125,0.2); \draw[red,ultra thick](9.84375,0.2)--(9.90625,0.2); \draw[red,ultra thick](9.921875,0.2)--(9.953125,0.2); \draw[red,ultra thick](9.9609375,0.2)--(9.976525,0.2); \draw[red,ultra thick](9.9609375,0.2)--(9.976525,0.2); \draw[red,ultra thick](9.98046875,0.2)--(9.98828125,0.2); \draw[blue,ultra thick](4,0.2)--(5,0.2); \draw[blue,ultra thick](7,0.2)--(7.5,0.2); \draw[blue,ultra thick](8.5,0.2)--(8.75,0.2); \draw[blue,ultra thick](9.25,0.2)--(9.375,0.2); \draw[blue,ultra thick](9.625,0.2)--(9.6875,0.2); \draw[blue,ultra thick](9.8125,0.2)--(9.84375,0.2); \draw[blue,ultra thick](9.90625,0.2)--(9.921875,0.2); \draw[blue,ultra thick](9.976525,0.2)--(9.9609375,0.2); \draw[blue,ultra thick](9.976525,0.2)--(9.98046875,0.2); \draw[red,ultra thick](4.5,-1.5)--(7,-1.5); \draw[red,ultra thick](7.25,-1.5)--(8.5,-1.5); \draw[red,ultra thick](8.625,-1.5)--(9.25,-1.5); \draw[red,ultra thick](9.3125,-1.5)--(9.625,-1.5); \draw[red,ultra thick](9.65625,-1.5)--(9.8125,-1.5); \draw[red,ultra thick](9.828125,-1.5)--(9.90625,-1.5); \draw[red,ultra thick](10,-1.5)--(10.05,-1.5); \draw[magenta,ultra thick](0,-1.5)--(0.2,-1.5); \draw[magenta,ultra thick](0.25,-1.5)--(0.35,-1.5); \draw[magenta,ultra thick](0.375,-1.5)--(0.425,-1.5); \draw[magenta,ultra thick](0.4375,-1.5)--(0.4625,-1.5); \draw[magenta,ultra thick](0.46875,-1.5)--(0.48125,-1.5); \draw[magenta,ultra thick](0.484375,-1.5)--(0.490625,-1.5); \draw[magenta,ultra thick](0.4921875,-1.5)--(0.4953125,-1.5); \draw[blue,ultra thick](0.2,-1.5)--(0.25,-1.5); \draw[blue,ultra thick](0.35,-1.5)--(0.375,-1.5); \draw[blue,ultra thick](0.425,-1.5)--(0.4375,-1.5); \draw[blue,ultra thick](0.4625,-1.5)--(0.46875,-1.5); \draw[blue,ultra thick](0.46875,-1.5)--(0.484375,-1.5); \draw[blue,ultra thick](0.490625,-1.5)--(0.4921875,-1.5); \draw[blue](4,0.2)circle(.07); \draw[blue](5,0.2)circle(.07); 
\draw[blue](7,0.2)circle(.07); \draw[blue](7.5,0.2)circle(.07); \draw[blue](8.5,0.2)circle(.07); \draw[blue](8.75,0.2)circle(.07); \draw[blue](9.25,0.2)circle(.07); \draw[blue](9.375,0.2)circle(.07); \draw[fill=black](10,0.2)circle(.05); \draw[fill=black](10,-1.5)circle(.05); \draw[fill=black](0.5,-1.5)circle(.05); \draw[black] (-1,0.1) node[right]{$E_0$}; \draw[black] (-1,-1.5) node[right]{$E_1$}; \draw[black] (2,0.45) node[right]{$F^1$}; \draw[black] (5.8,0.45) node[right]{$F^2$}; \draw[black] (7.6,0.45) node[right]{$F^3$}; \draw[black] (4.2,0.45) node[right]{$I^1$}; \draw[black] (7,0.45) node[right]{$I^2$}; \draw[black] (8.35,0.45) node[right]{$I^3$}; \draw[black] (-0.2,-0.1) node[right]{$0$}; \draw[black] (-0.2,-1.78) node[right][scale=0.85]{$0$}; \draw[black] (9.8,-0.1) node[right]{$1$}; \draw[black] (9.8,-1.8) node[right]{$1$}; \draw[black] (0.2,-1.8) node[right][scale=0.8]{$1/20$}; \draw(5,-1.35)--(5,-1.65); \draw(5.4,-1.35)--(5.4,-1.65); \draw(5.72,-1.35)--(5.72,-1.65); \draw(5.976,-1.35)--(5.976,-1.65); \draw(6.1808,-1.35)--(6.1808,-1.65); \draw(6.34464,-1.35)--(6.34464,-1.65); \draw(6.475712,-1.35)--(6.475712,-1.65); \draw(6.5805696,-1.35)--(6.5805696,-1.65); \draw(6.66445568,-1.35)--(6.66445568,-1.65); \draw(6.731564544,-1.35)--(6.731564544,-1.65); \draw(6.798673408,-1.35)--(6.798673408,-1.65); \draw(6.84162308096,-1.35)--(6.84162308096,-1.65); \draw(6.87598281933,-1.35)--(6.87598281933,-1.65); \draw(6.90347061002,-1.35)--(6.90347061002,-1.65); \draw(6.92546084248,-1.35)--(6.92546084248,-1.65); \draw(6.95,-1.35)--(6.95,-1.65); \draw(6.97,-1.35)--(6.97,-1.65); \draw(6.98,-1.35)--(6.98,-1.65); \draw(6.99,-1.35)--(6.99,-1.65); \draw(8.75,-1.35)--(8.75,-1.65); \draw(9.375,-1.35)--(9.375,-1.65); \draw[black] (4.8,-1.2) node[right][scale=0.7]{$d_1^1$}; \draw[black] (5.2,-1.2) node[right][scale=0.6]{$d_2^1$}; \draw[black] (5.5,-1.2) node[right][scale=0.6]{$d_3^1$}; \draw[black] (7.25,-1.2) node[right][scale=0.6]{$d_1^2$}; \draw[black] (7.45,-1.2) node[right][scale=0.6]{$d_2^2$}; \draw[black] (8.55,-1.2) node[right][scale=0.6]{$d_1^3$}; \draw[black] (9.15,-1.2) node[right][scale=0.6]{$d_1^4$}; \draw[black] (5,-1.9) node[right][scale=0.8]{$h_1^{1,1}(F^1)$}; \draw[black] (7.3,-1.9) node[right][scale=0.8]{$h_1^{1,1}(F^2)$}; \draw(7.5,-1.35)--(7.5,-1.65); \draw(7.66,-1.35)--(7.66,-1.65); \draw(7.788,-1.35)--(7.788,-1.65); \draw(7.89,-1.35)--(7.89,-1.65); \draw(7.97,-1.35)--(7.97,-1.65); \draw(8.03,-1.35)--(8.03,-1.65); \draw(8.1,-1.35)--(8.1,-1.65); \draw(8.15,-1.35)--(8.15,-1.65); \draw(8.19,-1.35)--(8.19,-1.65); \draw(8.23,-1.35)--(8.23,-1.65); \draw(8.26,-1.35)--(8.26,-1.65); \draw(8.28,-1.35)--(8.28,-1.65); \draw(8.3,-1.35)--(8.3,-1.65); \draw(8.32,-1.35)--(8.32,-1.65); \draw(8.34,-1.35)--(8.34,-1.65); \draw(8.36,-1.35)--(8.36,-1.65); \draw(8.38,-1.35)--(8.38,-1.65); \draw(8.4,-1.35)--(8.4,-1.65); \draw(8.41,-1.35)--(8.41,-1.65); \draw(8.43,-1.35)--(8.43,-1.65); \draw(8.45,-1.35)--(8.45,-1.65); \draw(8.47,-1.35)--(8.47,-1.65); \draw(8.48,-1.35)--(8.48,-1.65); \draw(8.49,-1.35)--(8.49,-1.65); \end{tikzpicture} \caption{The sets $H_1=(\bigcup_{i=1}F^i)\bigcup\{1\}$ and $E_0\backslash H_1=\bigcup_{i=1}I^i$ in Example \ref{notgood}. $\{d_j^i\}_{i,j\in \mathbb{N}_+}$ represents the set of division points in $E_1$. 
Note that for all $i\in\mathbb{N}_+$, $\#\{d_j^i:j\in \mathbb{N}_+\}=\infty$.} \label{fig.b(ii)} \end{figure} \begin{proof} By the definition of $R_1$, we have $H_1^1=\bigcup_{i=1}^\infty F^i$, $H_1^2=\{1\},$ and $J_1^1=\bigcup_{i=1}^\infty I^i.$ Thus, for any $i\in \mathbb{N}_+$, \begin{align*} &h_1^{1,1}(\boldsymbol{x})=f(\boldsymbol{x}), \,\,\boldsymbol{x}\in F^i;\qquad h_1^{2,1}(\boldsymbol{x})=\boldsymbol{x}/20,\,\, \boldsymbol{x}\in H_1^1;\\ &h_1^{1,2}(\boldsymbol{x})=1/20,\,\, \boldsymbol{x}=1;\qquad\,\,\, h_1^{2,2}(\boldsymbol{x})=1,\,\, \boldsymbol{x}=1;\qquad\,\,\,\, h_1^{0,1}(\boldsymbol{x})=\boldsymbol{x}/20,\,\, \boldsymbol{x}\in J_1^1. \end{align*} Hence for $l$, $k\in\{1,2\}$, $h_1^{0,1}$ and $h_1^{l,k}$ are contractions. For any $i\in \mathbb{N}_+$, let \begin{align*} d_1^i:=1-\frac{1}{2^i}\quad\text{and}\quad \{d_j^i\}:=\big\{\big(h_1^{1,1}\big)^{-1}(d_{j-1}^{i+1})\big\}\bigcap h_1^{1,1}(F^i),\,j=2,3,\ldots \end{align*} be a family of division points in $h_1^{1,1}(F^i)$. Then for any $j\in\{2,3,\ldots\}$, \begin{align*} d_{j-1}^{i+1} \in h_1^{1,1}(F^{i+1})\qquad\text{and}\qquad F^{i+1}\subseteq h_1^{1,1}(F^i). \end{align*} Hence for any $i,j\in\mathbb{N}_+$, $\{d_j^i\}=\big\{\big(h_1^{1,1}\big)^{-1}(d_{j-1}^{i+1})\big\}\bigcap h_1^{1,1}(F^i)\neq\emptyset.$ Next we show that for any $i\in \mathbb{N}_+$, $d_j^i$ is an increasing function of $j$. By the definition of $h_1^{1,1}$, we have $d_1^i<d_2^i$, where $i\in \mathbb{N}_+$, and $\big(h_1^{1,1}\big)^{-1}$ is an increasing function on $h_1^{1,1}(H_1^1)$. Assume that for any $i\in \mathbb{N}_+$ and $j=n$, we have $d_n^i<d_{n+1}^i$. Then \begin{align}\label{eq:increa} \big(h_1^{1,1}\big)^{-1}(d_n^i)<\big(h_1^{1,1}\big)^{-1}(d_{n+1}^i). \end{align} We let $j=n+1$ and let \begin{align*} \{d_{n+1}^i\}=\big\{\big(h_1^{1,1}\big)^{-1}(d_{n}^{i+1})\big\}\bigcap h_1^{1,1}(F^i)\quad\text{and}\quad \{d_{n+2}^i\} :=\big\{\big(h_1^{1,1}\big)^{-1}\big(d_{n+1}^{i+1})\big\}\bigcap h_1^{1,1}(F^i). \end{align*} Then by \eqref{eq:increa}, $d_{n+1}^i<d_{n+2}^i$. By induction, for any $i\in \mathbb{N}_+$, $d_j^i$ is an increasing function of $j$. Hence for any $i\in \mathbb{N}_+$, $\#\{d_j^i:j\in \mathbb{N}_+\}=\infty$. It follows that for any $i\in \mathbb{N}_+$, \begin{align*} h_1^{1,1}(F^i)=&\Big[1-\frac{11}{5\cdot2^{i+1}},d_1^i\Big]\bigcup[d_1^i,d_2^i]\bigcup[d_2^i,d_3^i]\bigcup\cdots\\ =:&W_1^{1,i,1}\bigcup W_1^{1,i,2}\bigcup W_1^{1,i,3}\bigcup\cdots=\bigcup_{j=1}^\infty W_1^{1,i,j}. \end{align*} Note that for any $i\in \mathbb{N}_+$ and $j=2,3,\ldots$, we have $W_1^{1,i,1}\subseteq I^i$ and $W_1^{1,i,j}\subseteq F^{i+1}.$ \noindent{\em Claim 1. For any $i\in \mathbb{N}_+$ and any $j\in \{2,3,\ldots\}$, let $$W_1^{i,t}:=W_1^{1,i,1}\bigcup W_1^{1,i,j} \qquad \text{and}\qquad g_1^{1,1,i,t}:=h_1^{1,1}|_{W_1^{i,t}}.$$ Then $g_1^{1,1,i,t}$ is not contractive.} In fact, for any $x\in W_1^{1,i,1}$ and any $y\in W_1^{1,i,j}$, we have $d(x,y)\leq 1/4$, while $d(g_1^{1,1,i,t}(x),g_1^{1,1,i,t}(y))\geq 13/20$. This proves Claim 1. \noindent{\em Claim 2. Fix $i\in \mathbb{N}_+$. For any $p,q\in\mathbb{N}_+$ with $p<q$, let $$ W_1^{i,s}:=W_1^{1,i,p}\bigcup W_1^{1,i,q}\qquad\text{and}\qquad g_1^{1,1,i,s}:=h_1^{1,1}|_{W_1^{i,s}}.$$ Then it is not possible for $g_1^{1,1,i,s}$ to be contractive and $W_1^{i,s}$ to be invariant under $g_1^{1,1,i,s}$.} In fact, suppose that $g_1^{1,1,i,s}$ is contractive and $W_1^{i,s}$ is invariant under $g_1^{1,1,i,s}$. 
By the definitions of $W_1^{1,i,p}$ and $W_1^{1,i,q}$, we have $$W_1^{1,i,p}\subseteq W_1^{1,i+1,p-1}\qquad\text{and}\qquad W_1^{1,i,q}\subseteq W_1^{1,i+1,q-1}.$$ Hence $g_1^{1,1,i,s-1}$ is contractive and $W_1^{i+1,s-1}:=W_1^{1,i+1,p-1}\bigcup W_1^{1,i+1,q-1}$ is invariant under $g_1^{1,1,i,s-1}$. Similarly, we have $$W_1^{1,i+1,p-1}\subseteq W_1^{1,i+2,p-2}\qquad\text{and}\qquad W_1^{1,i+1,q-1}\subseteq W_1^{1,i+2,q-2}.$$ Hence $g_1^{1,1,i,s-2}$ is contractive and $W_1^{i+2,s-2}:=W_1^{1,i+2,p-2}\bigcup W_1^{1,i+2,q-2}$ is invariant under $g_1^{1,1,i,s-2}$. Continuing this process, we see that $g_1^{1,1,i,s-p+1}$ is contractive and $W_1^{i+p-1,s-p+1}:=W_1^{1,i+p-1,1}\bigcup W_1^{1,i+p-1,q-p+1}$ is invariant under $g_1^{1,1,i,s-p+1}$. This contradicts Claim 1. This proves Claim 2.
Let $$W_1^{2,1,1}:=h_1^{2,1}(H_1^1), \,\,W_1^{1,2,2}:=h_1^{1,2}(H_1^2),\,\, W_1^{2,2,2}:=h_1^{2,2}(H_1^2),\,\,\text{and}\,\,W_1^{0,1,1}:=\overline{h_1^{0,1}(J_1^1)}.$$ We can extend $h_1^{0,1}$ from $J_1^1$ to $\overline{J_1^1}$, and let $\tilde{h}_1^{0,1}:\overline{J_1^1}\to W_1^{0,1,1}$ be defined as $\tilde{h}_1^{0,1}(\boldsymbol{x}):=\boldsymbol{x}/20$. Hence \begin{align*} E_1=&\tilde{h}_1^{0,1}(\overline{J_1^1})\bigcup\Big(\bigcup_{l=1}^2\bigcup_{k=1}^2 h_1^{l,k}(H_1^k)\Big) =W_1^{0,1,1}\bigcup\Big(\bigcup_{i,j\in \mathbb{N}_+} W_1^{1,i,j}\Big)\bigcup W_1^{2,1,1}\bigcup W_1^{1,2,2}\bigcup W_1^{2,2,2}. \end{align*} We assume that $\{W_1^{s,t}\}_{s=1,t=1}^{p,n}$ is a good partition of $E_1$ with respect to $$\{g_1^{l,s,t,k}\}_{l=0,k=1,s=1,t=1}^{2,2,p,n}, \,\,\text{where}\,\, g_1^{l,k,s,t}:=h_1^{l,k}|_{W_1^{s,t}}.$$ Then there exists at least one $W_1^{\alpha,\beta}$, where $\alpha\in\{1,\ldots,p\}$ and $\beta\in\{1,\ldots,n\}$, such that for any $i\in \mathbb{N}_+$, $$\bigcup_{j=q_1}^\infty W_1^{1,i,j}\subseteq W_1^{\alpha,\beta}\,\,\text{for some} \,\,\,q_1\geq 1.$$ This contradicts Claim 2 and proves that there does not exist a good partition of $E_1$. \end{proof}
\begin{thm}\label{thm:exi2} Let $ X $ be a complete metric space and let $E_0\subseteq X $ be a nonempty compact set. Let $\{R_t\}_{t=1}^N$ be an IRS on $E_0$. For any $k\geq0$, let $E_{k+1}$ be defined as in \eqref{E_n}. Assume $\{R_t\}_{t=1}^N$ satisfies the following conditions. \begin{enumerate} \item[(a)] There exists an integer $k_0\geq 1$ such that for any $t\in\Sigma$ and any $x\in E_{k_0-1}$, $$\#\{R_t(x)\}<\infty.$$ \item[(b)] For any $t\in\Sigma$, if $H_t:=\{x\in E_{k_0-1}|2\leq\#\{R_t(x)\}<\infty\}\neq \emptyset$, we require that the following conditions are satisfied. \begin{enumerate} \item[(i)] $r_t:=R_t|_{H_t}$ can be decomposed as a finite family of contractions $\{h _t^{l,i}:H_t^i\to h _t^{l,i}(H_t^i)\}_{l=1,i=1}^{n_t,m_t}$, where $\bigcup_{i=1}^{m_t}H_t^i=H_t$, and $\tilde{r}_t:=R_t|_{E_{k_0-1}\backslash H_t}$ can be decomposed as a finite family of contractions $\{h _t^{0,i}:J_t^i\to h _t^{0,i}(J_t^i)\}_{i=1}^{s_t}$, where $\bigcup_{i=1}^{s_t}J_t^i=E_{k_0-1}\backslash H_t$. \item[(ii)] There exist $\alpha,\beta\in\Lambda_t$ or $\sigma,\tau\in\Delta_t$ such that for any $l\in \Pi_t$ and any $i\in\Lambda_t$, $$\overline{h _t^{l,i}(H_t^i)}\subseteq \overline{H_t^\alpha}\qquad \text{or}\qquad \overline{h _t^{l,i}(H_t^i)}\subseteq \overline{J_t^\sigma},$$ and for any $i\in\Delta_t$, $$\overline{h _t^{0,i}(J_t^i)}\subseteq \overline{H_t^\beta}\qquad \text{or}\qquad \overline{h _t^{0,i}(J_t^i)}\subseteq \overline{J_t^\tau}.$$ \end{enumerate} \end{enumerate} Then there exists a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$.
\end{thm} \begin{proof} For any $t\in\Sigma$, let \begin{align*} & \underline{W}_t^{0,i}:=\overline{h_t^{0,i}(J_t^i)},\quad\text{where}\,\,i\in\Delta_t,\quad\text{and}\\ &\underline{W}_t^{l,i}:=\overline{h_t^{l,i}(H_t^i)},\quad \underline{W}_t^{n_t+1,i}:=\overline{E_{k_0}\bigcap H_t^i},\quad\text{where}\,\,l\in\Pi_t \,\,\text{and}\,\, i\in\Lambda_t. \end{align*} Fix $t\in\Sigma$. By (b)(ii), we can rename the nonempty elements in $\{\underline{W}_t^{l,i}\}_{t=1,s=1,i=1}^{N,n_t+1,m_t}$ and $\{W_t^{0,i}\}_{t=1,i=1}^{N,s_t}$ as $W_t^{s,i}$, where $s$ and $i$ satisfy the following conditions: \begin{enumerate} \item[(a)] for any $s\in\Psi_t^i$ and $i\in\Lambda_t$, $W_t^{s,i}\subseteq\overline{E_{k_0}\bigcap H_t^i}$; \item[(b)] for any $s\in\{p_t^i+1,\ldots,p_t^i+h_t^i\}$ and $i\in\Delta_t$, $W_t^{s,i}\subseteq\overline{J_t^i}$. \end{enumerate} Note that $$\overline{E_{k_0}\bigcap H_t^i}=\bigcup_{s=1}^{p_t^i}W_t^{s,i} \qquad\text{and}\qquad \overline{J_t^i}=\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}W_t^{s,i}.$$ For $t\in \Sigma$, $l\in\Pi_t$, $i\in\Lambda_t$, and $s\in \Psi_t^i$, let $g_t^{l,s,i}:=\tilde{h}_t^{l,i}|_{W_t^{s,i}}$. For $i\in\Delta_t$ and $s\in\{p_t^i+1,\ldots, p_t^i+h_t^i\}$, let $g_t^{0,s,i}:=\tilde{h}_t^{0,i}|_{W_t^{s,i}}$. Here $\tilde{h}_t^{l,i}$ and $\tilde{h}_t^{0,i}$ are defined as in \eqref{eq:hl} and \eqref{eq:h0}, respectively. Then for any $W_t^{s,i}$, where $t\in\Sigma$, $i\in\Lambda_t$, and $s\in \Psi_t^i$, there exists some $W_t^{s_0,j}$, where $s_0\in \Psi_t^j$ and $j\in\Lambda_t$, such that $$g_t^{l,s,i}(W_t^{s,i})\subseteq W_t^{s_0,j}.$$ Similarly, for any $W_t^{s,i}$, where $t\in\Sigma$, $i\in\Delta_t$, and $s\in \{p_t^i+1,\ldots,p_t^i+h_t^i\}$, there exists some $W_t^{s_0,j}$, where $j\in\Delta_t$ and $s_0\in \Psi_t^j$, such that $$g_t^{0,s,i}(W_t^{s,i})\subseteq W_t^{s_0,j}.$$ Hence $\{\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}, \{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}\}$ is a good partition of $E_{k_0}$ with respect to $\{\{g_t^{l,s,i}\}_{t=1,l=1,s=1,i=1}^{N,n_t,p_t^i,m_t},\{g_t^{0,s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}\}$. By Theorem \ref{thm:exi}, there exists a GIFS associated to $\{R_t\}_{t=1}^N$ on $E_{k_0}$. \end{proof} \begin{defi}\label{simplified graph} Let $ X $ be a complete metric space, and let $G=(V,E)$ be a GIFS of contractions $\{f_e\}_{e\in E}$ on $ X $, where $V:=\{1,\ldots,p\}$ and $E$ is the set of all directed edges. Let $\{W_j\}_{j=1}^p$ be an invariant family under $G$. We call $\widetilde{G}=(\widetilde{V},\widetilde{E})$ a \textit{simplified graph-directed iterated function system} associated to $G$, if $\widetilde{G}$ satisfies the following conditions. \begin{enumerate} \item[(a)] $\widetilde{E}\subseteq E$ and $\{\widetilde{W}_j\}_{j=1}^{\widetilde{p}} \subseteq \{W_j\}_{j=1}^{p}$, where $p\geq\widetilde{p}$. \item[(b)] Let $\{f_e\}_{e\in\widetilde{E}}$ be contractions associated to $\widetilde{G}$, and let $\{\widetilde{W}_j\}_{j=1}^{\widetilde{p}}$ be an invariant family under $\widetilde{G}$. Then for any $q\geqslant1$, \begin{align}\label{eq:min} \bigcup_{i,j=1}^p\bigcup_{\mathbf{e}\in E_q^{i,j}}f_\mathbf{e}(W_j)=\bigcup_{i,j=1}^{\widetilde{p}}\bigcup_{\mathbf{e}\in\widetilde{E}_q^{i,j}}f_\mathbf{e}(\widetilde{W}_j). \end{align} \end{enumerate} \end{defi} By Definition \ref{simplified graph}, we know that the attractor of $G$ is equal to the attractor of $\widetilde{G}$. Note that the simplified GIFS is not unique. 
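To indicate why passing to a minimal simplified GIFS is useful (the following observation is meant only as an illustration), note that when $\#\widehat{V}=1$ and $\widehat{G}$ consists of contractive similitudes $\{f_t\}_{t=1}^m$ satisfying (OSC), the spectral radius condition $\widehat{\lambda}_\alpha=1$ in Theorem \ref{thm:main2} reduces to the Moran equation $\sum_{t=1}^m\rho_t^\alpha=1$ of Theorem \ref{thm:main2}(b). For example, for two similitudes with contraction ratio $1/3$, this equation becomes \begin{align*} 2\cdot\Big(\frac{1}{3}\Big)^{\alpha}=1,\qquad \text{i.e.,}\qquad \alpha=\frac{\log 2}{\log 3}, \end{align*} so the attractor has Hausdorff dimension $\log 2/\log 3$.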
\begin{defi}\label{defi:min} We say that a simplified GIFS $\widehat{G}$ composed of $\big(\{\widehat{W}_j\}_{j=1}^{\widehat{p}},\{f_e\}_{e\in \widehat{E}}\big)$ is a \textit{minimal simplified graph-directed iterated function system} if among all simplified GIFSs $\widetilde{G}=(\widetilde{V},\widetilde{E})$ composed of $\big(\{\widetilde{W}_j\}_{j=1}^{\widetilde{p}},\{f_e\}_{e\in \widetilde{E}}\big)$, we have $\widehat{p}\leq \widetilde{p}$, and among all those simplified GIFSs with $ \widetilde{p}=\widehat{p}$, we have $\#\{f_e\}_{e\in \widehat{E}}\leq \#\{f_e\}_{e\in \widetilde{E}}.$ \end{defi} \begin{prop}\label{prop:3.3} Assume that $\{R_t\}_{t=1}^N$ satisfies the conditions of Theorem \ref{thm:exi} and $G=(V,E)$ is a GIFS associated to $\{R_t\}_{t=1}^N$ guaranteed by Theorem \ref{thm:exi}, where $V=\{1,\ldots,p\}$ and $G$ consists of contractions $\{f_e\}_{e\in E}$. Then there exists a minimal simplified GIFS $\widehat{G}=(\widehat{V},\widehat{E})$ associated to $G$. \end{prop} \begin{proof} Let $\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}$ and $\{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}$ be defined as in the proof of Theorem \ref{thm:exi}. For fixed $t$, $s$, $i$, we write $W_j:=W_t^{s,i}$. Then $$\bigcup_{t=1}^N\Bigg(\bigcup_{i=1}^{m_t}\bigcup_{s=1}^{p_t^i}W_t^{s,i}\bigcup\Big(\bigcup_{i=1}^{s_t}\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}W_t^{s,i}\Big)\Bigg)=\bigcup_{j=1}^pW_j.$$ Fix $t\in\Sigma$. For any $W_t^{s,i}$, where $s\in \Psi_t^i$ and $i\in\Lambda_t$, if there exists $s_0\in\Psi_t^i$ with $s_0\neq s$ such that $W_t^{s,i} \subseteq W_t^{s_0,i}$, then we remove $W_t^{s,i}$. In particular, if $W_t^{s,i}=W_t^{s_0,i}$, then we remove one of them. If there are multiple elements in $\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}$ that are equal, then we keep one of them and remove the others. We rename the remaining $\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}$ as $\widetilde{W}_t^{s,i}$, where $s\in \{1,\ldots,\widetilde{p}_t^i\}$ and $i\in\{1,\ldots,\widetilde{m}_t\}$. We use a similar method to keep the elements in the set $\{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}$, and thus we rename the remaining $\{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}$ as $\widetilde{W}_t^{s,i}$, where $s\in \{\widetilde{p}_t^i+1,\ldots,\widetilde{p}_t^i+\widetilde{h}_t^i\}$ and $i\in\{1,\ldots,\widetilde{s}_t\}$. Note that $$\bigcup_{i=1}^{m_t}\bigcup_{s=1}^{p_t^i}W_t^{s,i}=\bigcup_{i=1}^{\widetilde{m}_t}\bigcup_{s=1}^{\widetilde{p}_t^i}\widetilde{W}_t^{s,i}\qquad\text{and}\qquad \bigcup_{i=1}^{s_t}\bigcup_{s=p_t^i+1}^{p_t^i+h_t^i}W_t^{s,i}=\bigcup_{i=1}^{\widetilde{s}_t}\bigcup_{s=\widetilde{p}_t+1}^{\widetilde{p}_t^i+\widetilde{h}_t^i}\widetilde{W}_t^{s,i}.$$ For any $t\in\Sigma$, we note that the number of elements removed from $\{W_t^{s,i}\}_{t=1,s=1,i=1}^{N,p_t^i,m_t}$ and $\{W_t^{s,i}\}_{t=1,s=p_t^i+1,i=1}^{N,p_t^i+h_t^i,s_t}$ is equal to the number of elements removed from $\{W_j\}_{j=1}^p$. We rename the remaining $\{W_j\}_{j=1}^p$ as $\widetilde{W}_j$, where $j\in\{1,\ldots,\widetilde{p}\}$. 
Note that $$E_{k_0}=\bigcup_{j=1}^pW_j=\bigcup_{j=1}^{\widetilde{p}}\widetilde{W}_j=\bigcup_{t=1}^N\Bigg(\bigcup_{i=1}^{\widetilde{m}_t}\bigcup_{s=1}^{\widetilde{p}_t^i}\widetilde{W}_t^{s,i}\bigcup\Big(\bigcup_{i=1}^{\widetilde{s}_t}\bigcup_{s=\widetilde{p}_t^i+1}^{\widetilde{p}_t^i+\widetilde{h}_t^i}\widetilde{W}_t^{s,i}\Big)\Bigg).$$ Let $\widetilde{G}=(\widetilde{V},\widetilde{E})$ be a GIFS of contractions $\{f_e\}_{e\in\widetilde{E}}$, where $\widetilde{V}=\{1,\ldots,\widetilde{p}\}$, $\widetilde{E}\subseteq E$, and $\{\widetilde{W}_j\}_{j=1}^{\widetilde{p}}$ is an invariant family under $\widetilde{G}$. For any $e\in E\backslash\widetilde{E}$, we have $f_e(W_t^{s,i})\subseteq W_t^{s',i}.$ Note that there exist $\widetilde{W}_t^{s,i}$ and $\widetilde{W}_t^{s',i}$ such that $W_t^{s,i}\subseteq \widetilde{W}_t^{s,i}$ and $W_t^{s',i}\subseteq\widetilde{W}_t^{s',i}.$ Hence $e\in\widetilde{E}$. Therefore, we have $E\backslash\widetilde{E}\subseteq\widetilde{E}$ and \begin{align*} \bigcup_{i,j=1}^p\bigcup_{e\in E^{i,j}}f_e(W_j)=\bigcup_{i,j=1}^{\widetilde{p}}\bigcup_{e\in\widetilde{E}^{i,j}}f_e(\widetilde{W}_j). \end{align*} By induction, for all $q\geq1$, we have \begin{align*} \bigcup_{i,j=1}^p\bigcup_{\mathbf{e}\in E_q^{i,j}}f_\mathbf{e}(W_j)=\bigcup_{i,j=1}^{\widetilde{p}}\bigcup_{\mathbf{e}\in \widetilde{E}_q^{i,j}}f_\mathbf{e}(\widetilde{W}_j). \end{align*} Therefore $\widetilde{G}=(\widetilde{V},\widetilde{E})$ is a simplified GIFS associated to $G$. Among all simplified GIFSs that have been constructed by the above process, we first select the subcollection with the smallest number of vertices. Then among members of this subcollection, we further select the subfamily with the smallest number of contractions. Members of this subfamily are minimal simplified GIFSs associated to $G$, denoted $\widehat{G}=(\widehat{V},\widehat{E})$. \end{proof} \section{Hausdorff dimension of graph self-similar sets without overlaps}\label{S:GOSC} \setcounter{equation}{0} In this section, we give the definition of the graph open set condition (GOSC) and prove Theorems \ref{thm:main1}--\ref{thm:main2}. Moreover, we give some examples of IRSs that satisfy the conditions of Theorem \ref{thm:main2}, and compute the Hausdorff dimension of the corresponding attractors. \subsection{Graph open set condition} \begin{defi}\label{defi:4.2} Let $ X $ be a complete metric space. Let $\{f_t\}_{t=1}^m$ be an IFS of contractions on $ X $. We say that $\{f_t\}_{t=1}^m$ satisfies the \textit{open set condition} (OSC) if there exists a nonempty bounded open set $U$ on $ X $ such that $$\bigcup_{t=1}^mf_t(U)\subseteq U\qquad \text{and}\qquad f_{t_1}(U)\bigcap f_{t_2}(U)=\emptyset\quad\text{for}\,\, t_1\neq t_2.$$ \end{defi} \begin{defi}\label{defi:4.1} Let $ X $ be a complete metric space. Let $G=(V,E)$ be a GIFS of contractions $\{f_e\}_{e\in E}$ on $ X $. We say that $G$ satisfies the \textit{graph open set condition} (GOSC) if there exists a family $\{U_i\}_{i=1}^m$ of nonempty bounded open sets on $ X $ such that for all $ i\in\{1,\ldots,m\}$, \begin{enumerate} \item[(a)] $\bigcup_{e\in E^{i,j}}f_e(U_j)\subseteq U_i$; \item[(b)] $f_e(U_{j_1})\bigcap f_{e'}(U_{j_2})=\emptyset$, for all distinct $e\in E^{i,j_1}$ and $e'\in E^{i,j_2}$. \end{enumerate} \end{defi} \begin{defi}\label{defi:rtgosc} Let $ X $ be a complete metric space and let $\{R_t\}_{t=1}^N$ be an IRS on a nonempty compact subset of $ X $. Assume that there exists a GIFS $G$ associated to $\{R_t\}_{t=1}^N$ and assume that $G$ consists of contractions. 
If $G$ satisfies (GOSC), then we say that {\em $\{R_t\}_{t=1}^N$ satisfies (GOSC) with respect to $G$}. If $G$ does not satisfy (GOSC), then we say that $\{R_t\}_{t=1}^N$ {\em has overlaps with respect to $G$}. \end{defi} Let $G=(V,E)$ be a GIFS of contractions $\{f_e\}_{e\in E}$ on $ X $. For any $e\in E^{i,j}$, let $\rho_e$ be the contraction ratio of $f_e$. Recall that the {\em incidence matrix} $A_\alpha$ associated with $G$ is the $m\times m$ matrix defined by \begin{align}\label{eq:matrix} A_\alpha= [\rho_e^\alpha]_{m\times m}, \end{align} where we set $\rho_e:=0$ for $i,j\in V$ and $e\notin E^{i,j}$. \begin{proof}[Proof of Theorem \ref{thm:main1}] By Proposition \ref{prop:3.1}, we know that $G$ and $\{R_t\}_{t=1}^N$ have the same attractor. If $G$ satisfies (GOSC), then $G$ satisfies (GFTC). The proof follows by using the results of \cite[Theorem 1.6]{Ngai-Xu_2023}; we omit the details. \end{proof} \begin{lem}\label{lem:4.1} Let $M$ be a complete $n$-dimensional smooth orientable Riemannian manifold with non-negative Ricci curvature. Let $K_M$ denote the sectional curvature of $M$, and assume that $K_M\leq b^2$. Let $\{V_i\}$ be a collection of disjoint open subsets of $M$ such that each $V_i$ contains a ball $B^M(p_1,a_1r)$ and is contained in a ball $B^M(p_2,a_2r)$. Then any ball $B^M(p,r)$ intersects at most $C(n)(r+2a_2r)^n/C(n,b,a_1r)$ of the $\overline{V_i}$, where $C(n)=\pi^{n/2}/\Gamma(1+n/2)$ is the volume of the unit ball in $\R^n$. \end{lem} \begin{proof} Let $B^M(p,r)\bigcap\overline{V}_i\neq\emptyset$. Then $V_i$ is contained in the ball $B^M(p,r+2a_2r)$, which is concentric with $B^M(p,r)$. Let $q$ be the number of $V_i$ such that $B^M(p,r)\bigcap\overline{V}_i\neq\emptyset$. Summing volumes, we obtain $$q {\rm Vol}_M(B^M(p_1,a_1r))\leq\sum_{B^M(p,r)\bigcap\overline{V}_i\neq\emptyset}{\rm Vol}_M(\overline{V}_i)\leq{\rm Vol}_M(B^M(p,r+2a_2r)).$$ By the Bishop-Gromov inequality (see, e.g., \cite{Bishop-Crittenden_1964}), we have $${\rm Vol}_M(B^M(p,r+2a_2r))\leq C(n)(r+2a_2r)^n.$$ Since $K_M\leq b^2$, we have ${\rm Vol}_M(B^M(p_1,a_1r))\geq C(n,b,a_1r).$ This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main2}] Combining Proposition \ref{prop:3.1} and Definition \ref{simplified graph}, we know that $G$ and $\widehat{G}$ have the same attractor. If $\widehat{G}$ satisfies (GOSC), then $\widehat{G}$ satisfies (GFTC). By using the results of \cite[Theorem 1.6]{Ngai-Xu_2023}, we can prove (a). As $M$ is locally Euclidean, the sectional curvatures and the Ricci curvatures of $M$ are everywhere zero. By using the results of Lemma \ref{lem:4.1} and a similar method as in \cite{Falconer_2003}, we can prove (b); we omit the details. \end{proof} \subsection{Examples} In this subsection, we provide three examples of IRSs satisfying the conditions of Theorem \ref{thm:main2}, and compute the Hausdorff dimension of the corresponding attractors. \begin{exam}\label{exam:osc1} Let $\mathcal{C}^2:=\mathbb{S}^1\times\R^1=\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,\pi],z\in[0,2\pi]\big\}$ be a cylindrical surface. Let $E_0:=\mathcal{C}^2$.
For $r\in [0,\pi/2)$, let $\boldsymbol{x}:=(\cos\theta,\sin\theta,z)\in E_0$, $H:=\{(-1,0,z):z\in[0,2\pi]\}$, and $\{R_t\}_{t=1}^3$ be an IRS on $E_0$ defined as \begin{align*} &R_1(\boldsymbol{x}):=\left\{ \begin{aligned} &(-\cos (\theta/2),-\sin (\theta/2),z/2+\pi/2),\quad \qquad \,\,&&\boldsymbol{x}\in E_0\backslash H,\\ &\{(0,-1,z/2+\pi/2), (0,1,z/2+\pi/2)\}\quad &&\boldsymbol{x}\in H;\\ \end{aligned} \right.\\ &R_2(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+\pi-r),\quad &&\boldsymbol{x}\in E_0\backslash H,\\ &\{(0,1,z/2+\pi-r),(0,-1,z/2+\pi-r) \}\quad &&\boldsymbol{x}\in H;\\ \end{aligned} \right.\\ &R_3(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+r),\qquad\qquad \qquad\,\, &&\boldsymbol{x}\in E_0\backslash H,\\ &\{(0,1,z/2+r),(0,-1,z/2+r)\} \qquad\qquad &&\boldsymbol{x}\in H. \end{aligned} \right. \end{align*} Let $K$ be the associated attractor (see Figure \ref{fig:osc1}). Then $$\dim_H(K)=\frac{\log3}{\log2}= 1.58496\ldots.$$ \end{exam} \begin{figure}[H] \centering \mbox{\subfigure[] { \includegraphics[scale=0.34]{IRS2.png}} }\quad \mbox{\subfigure[] { \includegraphics[scale=0.34]{gosc1.png}} } \quad \mbox{\subfigure[] { \includegraphics[scale=0.34]{gosc2.png}} } \\ \mbox{\subfigure[Front] { \includegraphics[scale=0.24]{osc1_zheng.png}} }\qquad\qquad \mbox{\subfigure[Back] { \includegraphics[scale=0.24]{osc1_fan.png}} } \caption{Figures for Example \ref{exam:osc1} with $r=\pi/4$. (a)--(c) are drawn on $\mathbb R^2$ and shrunk by $2\pi$. (a) The first iteration of $E_0$ under $\{R_t\}_{t=1}^3$, where $R_1(E_0)$ consists of the left and right rectangles, $R_2(E_0)$ is the top square, and $R_3(E_0)$ is the bottom square. (b) Vertices of the GIFS associated to $\{R_t\}_{t=1}^3$. (c) The first iteration of the vertices under the GIFS. (d)--(e) The attractor of $\{R_t\}_{t=1}^3$.} \label{fig:osc1} \end{figure} \begin{proof} For any $t\in\{1,2,3\}$, by the definition of $R_t$, we have $H_t^1=H_t:=\{(-1,0,z):z\in[0,2\pi]\}$. Let $r_t:=R_t|_{H_t}$ and $r_t(H_t)=\bigcup_{l=1}^2h_t^{l,1}(H_t^1)$. Then for any $\boldsymbol{x}\in H_t^1$, \begin{align*} &h_1^{1,1}(\boldsymbol{x})=(0,1,z/2+\pi/2),\quad &&h_1^{2,1}(\boldsymbol{x})=(0,-1,z/2+\pi/2),\\ &h_2^{1,1}(\boldsymbol{x})=(0,-1,z/2+\pi-r),\quad &&h_2^{2,1}(\boldsymbol{x})=(0,1,z/2+\pi-r),\\ &h_3^{1,1}(\boldsymbol{x})=(0,-1,z/2+r),\quad&& h_3^{2,1}(\boldsymbol{x})=(0,1,z/2+r). \end{align*} For any $t\in\{1,2,3\}$, let \begin{align*} &J_t^1:=\big\{(\cos\theta,\sin\theta,z):\theta\in(-\pi,0],z\in[0,2\pi]\big\}\quad \text{and}\\ &J_t^2:=\big\{(\cos\theta,\sin\theta,z):\theta\in(0,\pi),z\in[0,2\pi]\big\}. \end{align*} Then $E_0\backslash H_t=\bigcup_{i=1}^2J_t^i$. Let $\tilde{r}_t:=R_t|_{E_0\backslash H_t}$ and $\tilde{r}_t(E_0\backslash H_t)=\bigcup_{i=1}^2h_t^{0,i}(J_t^i)$. Then for any $\boldsymbol{x} \in J_t^i$, where $i\in\{1,2\}$, \begin{align*} &h_1^{0,i}(\boldsymbol{x})=(-\cos (\theta/2),-\sin (\theta/2),z/2+\pi/2),\\ &h_2^{0,i}(\boldsymbol{x})=(\cos (\theta/2),\sin (\theta/2),z/2+\pi-r),\\ &h_3^{0,i}(\boldsymbol{x})=(\cos (\theta/2),\sin (\theta/2),z/2+r). \end{align*} Hence for any $t\in \{1,2,3\}$, $h_t^{l,1}$ are contractions, where $l\in\{1,2\}$, and $h_t^{0,i}$ are contractions, where $i\in\{1,2\}$. 
Moreover, for $t=1$, we have \begin{align*} \overline{h_t^{1,1}(H_t)}\subseteq \overline{J_t^2},\qquad \overline{h_t^{2,1}(H_t)}\subseteq \overline{J_t^1},\qquad \overline{h_t^{0,1}(J_t^1)}\subseteq \overline{J_t^2},\qquad \overline{h_t^{0,2}(J_t^2)}\subseteq \overline{J_t^1}; \end{align*} for any $t\in\{2,3\}$, we have \begin{align*} \overline{h_t^{1,1}(H_t)}\subseteq \overline{J_t^1},\qquad \overline{h_t^{2,1}(H_t)}\subseteq \overline{J_t^2}, \qquad \overline{h_t^{0,i}(J_t^i)}\subseteq \overline{J_t^i},\,\,\text{where} \,\,i\in \{1,2\}. \end{align*} Hence $\{R_t\}_{t=1}^3$ satisfies the conditions of Theorem \ref{thm:exi2}. By Theorem \ref{thm:exi2} and Proposition \ref{prop:3.3}, we can find a minimal simplified GIFS $\widehat{G}=(\widehat{V},\widehat{E})$ with $\widehat{V}=\{1,\ldots,6\}$ and $\widehat{E}=\{e_1,\ldots,e_{18}\}$. The invariant family $\{\widehat{W}_i\}_{i=1}^6$ and the associated similitudes $\{f_e\}_{e\in\widehat{E}}$ are defined as \begin{align*} &\widehat{W}_1:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi,-\pi/2],\,\, z\in[\pi/2,3\pi/2]\},\\ &\widehat{W}_2:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi/2,0],\,\, z\in[\pi-r,2\pi-r]\},\\ &\widehat{W}_3:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi/2,0],\,\, z\in[r,\pi+r]\},\\ &\widehat{W}_4:=\{(\cos\theta,\sin\theta,z):\theta\in [0,\pi/2],\,\, z\in[\pi-r,2\pi-r]\},\\ &\widehat{W}_5:=\{(\cos\theta,\sin\theta,z):\theta\in [0,\pi/2],\,\, z\in[r,\pi+r]\},\\ &\widehat{W}_6:=\{(\cos\theta,\sin\theta,z):\theta\in [\pi/2,\pi],\,\, z\in[\pi/2,3\pi/2]\}, \end{align*} while $\widehat{E}^{i,j}$, $i,j\in\{1,\ldots,6\}$, and the associated similitudes $\{f_e\}_{e\in\widehat{E}}$ are defined as \begin{alignat*}{6} &e_1\in \widehat{E}^{1,4},\qquad &e_2\in \widehat{E}^{1,5},\qquad &e_3\in \widehat{E}^{1,6},\qquad &e_4\in \widehat{E}^{2,1},\qquad &e_5\in \widehat{E}^{2,2},\qquad&e_6\in \widehat{E}^{2,3},\\ &e_7\in\widehat{E}^{3,1},\qquad &e_8\in \widehat{E}^{3,2}, \qquad &e_9\in \widehat{E}^{3,3}, \qquad&e_{10}\in \widehat{E}^{4,4},\qquad &e_{11}\in \widehat{E}^{4,5}, \qquad&e_{12}\in \widehat{E}^{4,6},\\ &e_{13}\in \widehat{E}^{5,4}, \qquad&e_{14}\in \widehat{E}^{5,5},\qquad &e_{15}\in \widehat{E}^{5,6},\qquad &e_{16}\in \widehat{E}^{6,1},\qquad &e_{17}\in \widehat{E}^{6,2},\qquad &e_{18}\in \widehat{E}^{6,3}, \end{alignat*} and \begin{align*} &f_{e_{1}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_4}, \quad\,\, f_{e_{2}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_5}, \,\,\quad\,\,\, f_{e_{3}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_6},\quad\,\,\,\,\, f_{e_{4}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_1}, \quad \,\,\, f_{e_{5}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_2},\\ &f_{e_{6}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_3}, \quad\,\, f_{e_{7}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_1}, \quad\,\,\,\,\, f_{e_{8}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_2},\quad\,\,\,\,\, f_{e_{9}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_3}, \quad \,\,\, f_{e_{10}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_4},\\ &f_{e_{11}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_5}, \quad\,\, f_{e_{12}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_6}, \quad\,\, f_{e_{13}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_4},\quad\,\, f_{e_{14}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_5}, \quad \,\, f_{e_{15}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_6},\\ &f_{e_{16}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_1}, \quad\,\, f_{e_{17}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_2}, \quad\,\, f_{e_{18}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_3}, \end{align*} where $\tilde{h}_t^{l,i}$ and $\tilde{h}_t^{0,i}$ are defined as in \eqref{eq:hl} and \eqref{eq:h0}, 
respectively. Note that $\widehat{G}$ is strongly connected. Let \begin{align*} \underline{W}_1&:=\{(\cos\theta,\sin\theta,z):\theta\in (-\pi,-\pi/2),\,\, z\in(\pi/2+r,3\pi/2-r)\},\\ \underline{W}_2&:=\{(\cos\theta,\sin\theta,z):\theta\in (-\pi/2,0),\,\, z\in(3\pi/2-2r,2\pi-2r)\},\\ \underline{W}_3&:=\{(\cos\theta,\sin\theta,z):\theta\in (-\pi/2,0),\,\, z\in(2r,2r+\pi/2)\},\\ \underline{W}_4&:=\{(\cos\theta,\sin\theta,z):\theta\in (0,\pi/2),\,\, z\in(3\pi/2-2r,2\pi-2r)\},\\ \underline{W}_5&:=\{(\cos\theta,\sin\theta,z):\theta\in (0,\pi/2),\,\, z\in(2r,2r+\pi/2)\},\\ \underline{W}_6&:=\{(\cos\theta,\sin\theta,z):\theta\in (\pi/2,\pi),\,\, z\in(\pi/2+r,3\pi/2-r)\}. \end{align*} Let $\{\widetilde{K}_i\}_{i=1}^6$ be the invariant family of nonempty compact sets satisfying $$\widetilde{K}_i=\bigcup_{j=1}^6\bigcup_{e\in \widehat{E}^{i,j}}f_e(\widetilde{K}_j).$$ Then for $i\in\{1,\ldots,6\}$, $U_i:=\underline{W}_i\backslash \widetilde{K}_i$ is an open set. For all $i\in\{1,\ldots,6\}$, $\{f_e\}_{e\in \widehat{E}}$ satisfies \begin{align*} \bigcup_{e\in \widehat{E}^{i,j}}f_e(U_j)\subseteq U_i\quad\text{and}\quad f_{e_1}(U_{j_1})\bigcap f_{e_2}(U_{j_2})=\emptyset,\,\,\text{for all distinct}\,\,e_1\in \widehat{E}^{i,j_1}\,\,\text{and}\,\,e_2\in \widehat{E}^{i,j_2}. \end{align*} Hence $\widehat{G}$ satisfies (GOSC). The weighted incidence matrix associated to $\widehat{G}$ is $$A_\alpha=\Big(\frac{1}{2}\Big)^\alpha\footnotesize{\begin{bmatrix} \begin{array}{cccccc} 0 &0 &0 &1 &1 &1 \\ 1 &1 &1 &0 &0 &0 \\ 1 &1 &1 &0 &0 &0 \\ 0 &0 &0 &1 &1 &1 \\ 0 &0 &0 &1 &1 &1 \\ 1 &1 &1 &0 &0 &0 \\ \end{array} \end{bmatrix}.}$$ The spectral radius of $A_\alpha$ is $3(1/2)^\alpha$. Therefore, $$\dim_H(K)=\frac{\log 3}{\log 2}= 1.58496\ldots.$$ \end{proof} \begin{exam}\label{R_1} Let $E_0:=[0,1]$, and let $\{R_t\}_{t=1}^2$ be an IRS on $E_0$ defined as \begin{align*} R_1(\boldsymbol{x}) &:= (1/2)\boldsymbol{x}+1/2;\\ R_2(\boldsymbol{x}) &:= \begin{cases} \{(1/2)\boldsymbol{x},(1/2)\boldsymbol{x}+3/8\}, &\boldsymbol{x}\in[0,1/2], \\ (1/2)\boldsymbol{x}, & \boldsymbol{x}\in (1/2,1]. \end{cases} \end{align*} Let $K$ be the associated attractor (see Figure \ref{fig.2}). Then $$\dim_H(K)=\frac{\log \big((1+\sqrt 5)/2\big)}{\log2}= 0.694242\ldots.$$ \end{exam} The proof of this example is similar to that of Example \ref{exam:osc1} and is omitted.
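The dimension obtained in Example \ref{exam:osc1} can also be checked numerically. Writing $A_\alpha=(1/2)^\alpha B$, where $B$ is the $6\times 6$ zero-one matrix displayed in the proof above, the equation $\lambda_\alpha=1$ becomes $\rho(B)\,2^{-\alpha}=1$, that is, $\alpha=\log\rho(B)/\log 2$. The short Python sketch below is only a numerical sanity check and is not part of the argument; it assumes the NumPy library is available. It returns $\rho(B)=3$ and $\alpha=\log 3/\log 2\approx 1.58496$.
\begin{verbatim}
# Numerical check of the dimension in the example above:
# A_alpha = (1/2)^alpha * B, so lambda_alpha = 1 exactly when
# alpha = log(rho(B)) / log(2), rho(B) being the spectral radius of B.
import numpy as np

B = np.array([
    [0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
])

rho = max(abs(np.linalg.eigvals(B)))   # spectral radius, approximately 3
alpha = np.log(rho) / np.log(2.0)      # approximately 1.58496
print(rho, alpha)
\end{verbatim}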
\begin{figure}[htbp] \centering \begin{tikzpicture}[scale=0.8] \draw[black,very thick](0,0)--(10,0); \draw[red,thick](0,0.2)--(5,0.2); \draw[blue,semithick](5.05,0.2)--(10,0.2); \draw[red,thick](0,-1.5)--(2.5,-1.5); \draw[red,thick](3.75,-1.5)--(6.25,-1.5); \draw[blue,thick](6.3,-1.5)--(8.75,-1.5); \draw[black,very thick](5,-1.8)--(10,-1.8); \draw[black,thin,dashed](5,0.6)--(5,-2.3); \draw [thick] (0,-.1) node[below]{0} -- (0,0.1); \draw [thick] (10,-.1) node[below]{1}-- (10,0.1); \draw[blue](5,0.2)circle(.05); \draw[blue](6.25,-1.5)circle(.05); \draw[black] (-1,-0.1) node[right]{$E_0$}; \draw[black] (-1,-1.65) node[right]{$E_1$}; \draw[black] (2,0.45) node[right]{$H_2$}; \draw[black] (7,0.45) node[right]{$E_0\backslash H_2$}; \draw [red,->] (2,0.15) -- (1.75,-1.35); \draw [red,->] (4,0.15) -- (4.55,-1.35); \draw [blue,->] (8.5,0.15) -- (7.9,-1.35); \draw [black,->] (9.4,-0.1) -- (8.9,-1.7); \draw[red] (1.73,-0.8) node[right]{$R_2$}; \draw[red] (4.25,-0.8) node[right]{$R_2$}; \draw[blue] (7.5,-0.8) node[right]{$R_2$}; \draw[black] (8.95,-1.2) node[right]{$R_1$}; \end{tikzpicture} \caption{The sets $H_2=[0,1/2]$, $E_0\backslash H_2=(1/2,1]$, and $E_1=\bigcup_{t=1}^2R_t(E_0)$ in Example \ref{R_1}.}\label{fig.2} \end{figure} \begin{exam}\label{exam:osctri} Let $\mathcal{C}^2:=\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,\pi],z\in [0,2\pi]\big\}$ be a cylindrical surface. Let \begin{align*} E_0^1:=&\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,0],z\in [0,\sqrt{3}\,\theta+\sqrt{3}\,\pi]\big\},\\ E_0^2:=&\big\{(\cos\theta,\sin\theta,z):\theta\in[0,\pi],z\in [0,-\sqrt{3}\,\theta+\sqrt{3}\,\pi]\big\}, \end{align*} and $E_0:=E_0^1\bigcup E_0^2$. Let $\boldsymbol{x}:=(\cos\theta,\sin\theta,z)\in E_0$ and let $\{R_t\}_{t=1}^3$ be an IRS on $E_0$ defined as \begin{align*} &R_1(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2-\pi/2),\sin (\theta/2-\pi/2),z/2),\quad &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(1,0,0),(-1,0,0)\} \quad &&\boldsymbol{x}=(-1,0,0);\\ \end{aligned} \right.\\ &R_2(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2+\pi/2),\sin (\theta/2+\pi/2),z/2),\quad &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(-1,0,0),(1,0,0)\}, \quad &&\boldsymbol{x}=(-1,0,0);\\ \end{aligned} \right.\\ &R_3(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+\sqrt{3}\,\pi/2),\qquad\,\, &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(0,1,\sqrt{3}\,\pi/2),(0,-1,\sqrt{3}\,\pi/2)\} \qquad &&\boldsymbol{x}=(-1,0,0). \end{aligned} \right. \end{align*} Let $K$ be the associated attractor (see Figure \ref{fig:osctri}). Then $$\dim_H(K)=\frac{\log3}{\log2}= 1.58496\ldots.$$ \end{exam} The proof of this example is similar to that of Example \ref{exam:osc1}; and is again omitted. \begin{figure}[H] \centering \mbox{\subfigure[Front] { \includegraphics[scale=0.24]{osctri_zheng.png}} }\qquad\qquad \mbox{\subfigure[Back] { \includegraphics[scale=0.24]{osctri_fan.png}} } \caption{The attractor of $\{R_t\}_{t=1}^3$ in Example \ref{exam:osctri}.} \label{fig:osctri} \end{figure} \section{Hausdorff dimension of graph self-similar sets with overlaps}\label{S:GFTC} \setcounter{equation}{0} In this section, we study IRSs with overlaps (see Definition \ref{defi:rtgosc}). We give the definition of the graph finite type condition (GFTC) and prove Theorems \ref{thm:main4}--\ref{thm:main5}. Moreover, we illustrate our method for computing the Hausdorff dimension of the associated attractors by several examples. 
\subsection{Graph finite type condition} The definitions of an equivalence relation and a sequence of nested index sets that appear in the following definition can be found in \cite{Ngai-Wang-Dong_2010,Ngai-Xu_2023} and are included in Appendix A for completeness. \begin{defi}\label{defi:GFTC} Let $ X $ be a complete metric space. Let $G = (V, E)$ be a GIFS of contractions $\{f_e\}_{e\in E}$ on $X$, where $V=\{1,\ldots, m\}$. If there exists an invariant family of nonempty bounded open sets $\mathbf{U}=\{U_i\}_{i=1}^m$ with respect to some sequence of nested index sets $\{\mathcal{M}_k\}_{k=0}^\infty$ such that $\mathcal{V}/_\sim:=\{[\mathbf{v}]_{\mathbf{U}},\mathbf{v}\in \mathcal{V}\}$ is a finite set, where $\sim$ is the equivalence relation on $\mathcal{V}$ defined in Appendix A, and $\mathcal{V}$ is defined as in \eqref{eq:v}, then we say that $G=(V,E)$ satisfies {\em the graph finite type condition} (GFTC). We say that $\mathbf{U}$ is {\em a finite type condition family} of $G$. \end{defi} \begin{defi}\label{defi:rtgftc} Let $ X $ be a complete metric space. Let $\{R_t\}_{t=1}^N$ be an IRS on a nonempty compact subset of $ X $. Assume that there exists a GIFS $G$ associated to $\{R_t\}_{t=1}^N$ and assume that $G$ consists of contractions. If $G$ satisfies (GFTC), then we say that {\em $\{R_t\}_{t=1}^N$ satisfies (GFTC) with respect to $G$}. \end{defi} The following theorem provides a sufficient condition for a GIFS to satisfy the finite type condition. Recall that an algebraic integer $\beta > 1$ is called a {\em Pisot number} if all of its algebraic conjugates other than $\beta$ itself have modulus strictly less than one. \begin{thm}\label{thm:Pisot} Let $M$ be a complete smooth $n$-dimensional Riemannian manifold that is locally Euclidean. Let $G = (V, E)$ be a GIFS of contractive similitudes $\{f_e\}_{e\in E}$ on $M$. Let $\{W_i\}_{i=1}^m$ be an invariant family of nonempty compact sets, and let $\mathbf{U}:=\{U_i\}_{i=1}^m$ be an invariant family of nonempty bounded open sets with $\overline{U}_i=W_i$, $i=1,\ldots,m$. For each similitude $f_e$, $e\in E$, assume that there exists an isometry $$g_i:U_i\to U'_i\subseteq \R^n$$ such that for any $e'\in E'$, $f'_{e'}:=g_i\circ f_e\circ g_i^{-1}$ is a contractive similitude of the form $$f'_{e'}(x)=\beta^{-n_{e'}}R_{e'}(x)+b_{e'},$$ where $E'$ is a set of directed edges, $\beta>1$ is a Pisot number, $n_{e'}$ is a positive integer, $R_{e'}$ is an orthogonal transformation, and $b_{e'}\in \R^n$. Assume that $\{R_{e'}\}_{e'\in E'}$ generates a finite group $H$ and $$H\{b_{e'}|e'\in E'\}\subseteq r_1\Z[\beta]\times\cdots\times r_n\Z[\beta]$$ for some $r_1,\ldots,r_n\in \R$. Then $G$ is of finite type and $\mathbf{U}$ is a finite type condition family of $G$. \end{thm} \begin{proof} By the assumptions, we let $V':=\{1,\ldots,m\}$ be a set of vertices. Then $G' := (V', E')$ is a GIFS of contractive similitudes $\{f'_{e'}\}_{e'\in E'}$ on $\R^n$, and $\mathbf{U'}:=\{U'_i\}_{i=1}^m$ is an invariant family of nonempty bounded open sets for $G'$. By \cite[Theorem 2.7]{Das-Ngai_2004}, $G'$ is of finite type and $\mathbf{U'}$ is a finite type condition family for $G'$. Let $\mathbf{v}':=(f'_{e'},i,j,k)\in \mathcal{V'}$ be a vertex (see Appendix A), where $\mathcal{V'}$ is the set of all vertices of $\mathcal{G}'$, $e'\in \widetilde{\mathcal{M}}_k^{i,j}$, $1\leq i,j\leq m$, $k\geq 1$, and $\widetilde{\mathcal{M}}_k^{i,j}\subseteq E'$ is a sequence of nested index sets. Then $\{[\mathbf{v}']_{\mathbf{U'}},\mathbf{v}'\in \mathcal{V'}\}$ is a finite set.
Let $\mathbf{v}:=(f_{e},i,j,k)\in \mathcal{V}$, where $e\in \mathcal{M}_k^{i,j}$, $1\leq i,j\leq m$, and $ \mathcal{M}_k^{i,j}\subseteq E^*$ is a sequence of nested index sets. It follows from the definition of $f'_{e'}$ that $\{[\mathbf{v}]_{\mathbf{U}},\mathbf{v}\in \mathcal{V}\}=\{[\mathbf{v}']_{\mathbf{U'}},\mathbf{v}'\in \mathcal{V'}\}$ is a finite set. This proves the proposition. \end{proof} We let $\mathcal{T}_1,\ldots, \mathcal{T}_m$ denote the collection of all distinct neighbourhood types, with $[\mathbf{v}_{\text{root}}^i]$, $i=1,\ldots, m$, being the neighbourhood types of the root vertices. As in \cite{Lau-Ngai_2007}, for each $\alpha\geq 0$, we define a {\em weighted incidence matrix} $A_\alpha= (A_\alpha(i,j))_{i,j=1}^m$ as follows. Fix $i$ ($1\leq i\leq m$) and a vertex $\mathbf{v}\in \mathcal{V_R}$ such that $[\mathbf{v}] =\mathcal{T}_i$, let $\mathbf{u}_1,\ldots, \mathbf{u}_m$ be the offspring of $\mathbf{v}$ in $\mathcal{V_R}$ and let $\mathbf{k}_l$, $1\leq l\leq m$, be the unique edge in $\mathcal{G_R}$ connecting $\mathbf{v}$ to $\mathbf{u}_l$. Then we define \begin{align}\label{eq:mat} A_\alpha(i,j):=\sum\{\rho_{\mathbf{k}_l}^\alpha: \mathbf{v} \stackrel{\mathbf{k}_l}{\longrightarrow} \mathbf{u}_l,\, [\mathbf{u}_l]=\mathcal{T}_j\}. \end{align} \begin{proof}[Proof of Theorem \ref{thm:main4}] By Proposition \ref{prop:3.1}, we know that $G$ and $\{R_t\}_{t=1}^N$ have the same attractor. The proof follows by using the results of \cite[Theorem 1.6]{Ngai-Xu_2023}; we omit the details. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main5}] Combining Proposition \ref{prop:3.1} and Definition \ref{simplified graph}, we know that the attractor of $G$ is equal to that of $\widehat{G}$. The proof follows by using the results of Theorem \ref{thm:main4}. \end{proof} \subsection{Examples} In this subsection, we give three examples of IRSs with overlaps that satisfy (GFTC). \begin{exam}\label{exam:gftc1} Let $\mathcal{C}^2=\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,\pi],z\in[0,2\pi]\big\}$ be a cylindrical surface. Let $E_0:=\mathcal{C}^2$. For $r\in [0,\pi/2)$, we let $\boldsymbol{x}:=(\cos\theta,\sin\theta,z)$ and let $\{R_t\}_{t=1}^4$ be an IRS on $E_0$ defined as \begin{align*} &R_1(\boldsymbol{x}):=\left\{ \begin{aligned} &(-\cos (\theta/2),-\sin (\theta/2),z/2+\pi/2)\qquad\,\, &&\boldsymbol{x}\in E_0\backslash \{(-1,0,z):z\in[0,2\pi]\},\\ &\{(0,-1,z/2+\pi/2),(0,1,z/2+\pi/2)\} \quad\, &&\boldsymbol{x}\in\{(-1,0,z):z\in[0,2\pi]\};\\ \end{aligned} \right.\\ &R_2(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+\pi-r) &&\boldsymbol{x}\in E_0\backslash \{(-1,0,z):z\in[0,2\pi]\},\\ &\{(0,1,z/2+\pi-r),(0,-1,z/2+\pi-r)\} &&\boldsymbol{x}\in\{(-1,0,z):z\in[0,2\pi]\};\\ \end{aligned} \right.\\ &R_3(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+\pi/2)\, \quad\,\, &&\boldsymbol{x}\in E_0\backslash \{(-1,0,z):z\in[0,2\pi]\},\\ &\{(0,1,z/2+\pi/2),(0,-1,z/2+\pi/2)\} \quad\,\, &&\boldsymbol{x}\in\{(-1,0,z):z\in[0,2\pi]\};\\ \end{aligned} \right.\\ &R_4(\boldsymbol{x}):=\left\{ \begin{aligned} &(\cos (\theta/2),\sin (\theta/2),z/2+r)\quad\quad&&\boldsymbol{x}\in E_0\backslash \{(-1,0,z):z\in[0,2\pi]\},\\ &\{(0,1,z/2+r), (0,-1,z/2+r)\}\qquad\quad\,\,\,\, &&\boldsymbol{x}\in\{(-1,0,z):z\in[0,2\pi]\}. \end{aligned} \right. \end{align*} Let $K$ be the associated attractor (see Figure \ref{fig:gftc}). 
Then $$\dim_H(K)=\frac{\log(2+\sqrt2)}{\log2}= 1.77155\ldots.$$ \end{exam} \begin{proof} For any $t\in\{1,\ldots,4\}$, by the definition of $R_t$, we have $H_t^1=H_t:=\{(-1,0,z):z\in[0,2\pi]\}$. Let $r_t:=R_t|_{H_t}$ and $r_t(H_t)=\bigcup_{l=1}^2h_t^{l,1}(H_t^1)$. Then for any $\boldsymbol{x}\in H_t^1$, \begin{align*} &h_1^{1,1}(\boldsymbol{x})=(0,1,z/2+\pi/2),\quad &&h_1^{2,1}(\boldsymbol{x})=(0,-1,z/2+\pi/2),\\ &h_2^{1,1}(\boldsymbol{x})=(0,-1,z/2+\pi-r),\quad &&h_2^{2,1}(\boldsymbol{x})=(0,1,z/2+\pi-r),\\ &h_3^{1,1}(\boldsymbol{x})=(0,-1,z/2+\pi/2),\quad &&h_3^{2,1}(\boldsymbol{x})=(0,1,z/2+\pi/2),\\ &h_4^{1,1}(\boldsymbol{x})=(0,-1,z/2+r),\quad &&h_4^{2,1}(\boldsymbol{x})=(0,1,z/2+r). \end{align*} \begin{figure}[H] \centering \mbox{\subfigure[] { \includegraphics[scale=0.34]{IRS1.png}} }\quad \mbox{\subfigure[] { \includegraphics[scale=0.34]{gftc3.png}} } \quad \mbox{\subfigure[] { \includegraphics[scale=0.34]{gftc4.png}} } \\ \mbox{\subfigure[Front] { \includegraphics[scale=0.23]{gftc_zheng.png}} }\qquad\qquad \mbox{\subfigure[Back] { \includegraphics[scale=0.23]{gftc_fan.png}} } \caption{Figures for Example \ref{exam:gftc1} with $r=\pi/4$. (a)--(c) are drawn on $\mathbb R^2$ and shrunk by $2\pi$. (a) The first iteration of $E_0$ under $\{R_t\}_{t=1}^4$, where $R_1(E_0)$ consists of the left and right rectangles, $R_2(E_0)$ is the top square, $R_3(E_0)$ is the middle square, and $R_4(E_0)$ is the bottom square. (b) Vertices of the GIFS associated to $\{R_t\}_{t=1}^4$. (c) The first iteration of the vertices under the GIFS. (d)--(e) The attractor of $\{R_t\}_{t=1}^4$.} \label{fig:gftc} \end{figure} For any $t\in\{1,\ldots,4\}$, by the definition of $R_t$, we have \begin{align*} &J_t^1:=\big\{(\cos\theta,\sin\theta,z):\theta\in(-\pi,0],z\in[0,2\pi]\big\}\quad{and}\\ &J_t^2:=\big\{(\cos\theta,\sin\theta,z):\theta\in(0,\pi),z\in[0,2\pi]\big\}. \end{align*} Hence $E_0\backslash H_t=\bigcup_{i=1}^2 J_t^i$. Let $\tilde{r}_t:=R_t|_{E_0\backslash H_t}$ and $\tilde{r}_t(E_0\backslash H_t)=\bigcup_{i=1}^2h_t^{0,i}(J_t^i)$. Then for any $\boldsymbol{x} \in J_t^i$, where $i\in \{1,2\}$, \begin{align*} &h_1^{0,i}(\boldsymbol{x})=(-\cos (\theta/2),-\sin (\theta/2),z/2+\pi/2),\\ &h_2^{0,i}(\boldsymbol{x})=(\cos (\theta/2),\sin (\theta/2),z/2+\pi-r),\\ &h_3^{0,i}(\boldsymbol{x})=(\cos (\theta/2),\sin (\theta/2),z/2+\pi/2),\\ &h_4^{0,i}(\boldsymbol{x})=(\cos (\theta/2),\sin (\theta/2),z/2+r). \end{align*} Hence for any $t\in \{1,\ldots,4\}$ and $l\in\{1,2\}$, $h_t^{l,1}$ are contractions, and for any $i\in\{1,2\}$, $h_t^{0,i}$ are contractions. Moreover, for $t=1$, we have \begin{align*} \overline{h_t^{1,1}(H_t)}\subseteq \overline{J_t^2},\qquad \overline{h_t^{2,1}(H_t)}\subseteq \overline{J_t^1},\qquad \overline{h_t^{0,1}(J_t^1)}\subseteq \overline{J_t^2}, \qquad \overline{h_t^{0,2}(J_t^2)}\subseteq \overline{J_t^1}; \end{align*} for any $t\in\{2,3,4\}$, \begin{align*} \overline{h_t^{1,1}(H_t)}\subseteq \overline{J_t^1},\qquad \overline{h_t^{2,1}(H_t)}\subseteq \overline{J_t^2},\qquad \overline{h_t^{0,i}(J_t^i)}\subseteq \overline{J_t^i},\,\,\text{where}\,\,i\in \{1,2\}. \end{align*} Hence $\{R_t\}_{t=1}^4$ satisfies the conditions of Theorem \ref{thm:exi2}. By Theorem \ref{thm:exi2} and Proposition \ref{prop:3.3}, we can find a minimal simplified GIFS $\widehat{G}=(\widehat{V},\widehat{E})$ with $\widehat{V}=\{1,\ldots,8\}$ and $\widehat{E}=\{e_1,\ldots,e_{32}\}$. 
The invariant family $\{\widehat{W}_i\}_{i=1}^8$, the set of edges $\widehat{E}^{i,j}$ and the associated similitudes $\{f_e\}_{e\in\widehat{E}}$ are given defined as \begin{align*} \widehat{W}_1&:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi,-\pi/2],\,\, z\in[\pi/2,3\pi/2]\},\\ \widehat{W}_2&:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi/2,0],\,\, z\in[\pi-r,2\pi-r]\},\\ \widehat{W}_3&:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi/2,0],\,\, z\in[\pi/2,3\pi/2]\},\\ \widehat{W}_4&:=\{(\cos\theta,\sin\theta,z):\theta\in [-\pi/2,0],\,\, z\in[r,\pi+r]\}\\ \widehat{W}_5&:=\{(\cos\theta,\sin\theta,z):\theta\in [0,\pi/2],\,\, z\in[\pi-r,2\pi-r]\},\\ \widehat{W}_6&:=\{(\cos\theta,\sin\theta,z):\theta\in [0,\pi/2],\,\, z\in[\pi/2,3\pi/2]\},\\ \widehat{W}_7&:=\{(\cos\theta,\sin\theta,z):\theta\in [0,\pi/2],\,\, z\in[r,\pi+r]\},\\ \widehat{W}_8&:=\{(\cos\theta,\sin\theta,z):\theta\in [\pi/2,\pi],\,\, z\in[\pi/2,3\pi/2]\}, \end{align*} \begin{alignat*}{5} &e_1\in \widehat{E}^{1,5},\qquad &e_2\in \widehat{E}^{1,6},\qquad&e_3\in \widehat{E}^{1,7},\qquad&e_4\in \widehat{E}^{1,8},\qquad &e_5\in \widehat{E}^{2,1},\qquad&e_6\in \widehat{E}^{2,2},\\&e_7\in \widehat{E}^{2,3},\qquad&e_8\in \widehat{E}^{2,4},\qquad &e_9\in \widehat{E}^{3,1},\quad &e_{10}\in \widehat{E}^{3,2},\qquad&e_{11}\in \widehat{E}^{3,3},\qquad&e_{12}\in \widehat{E}^{3,4},\\ &e_{13}\in \widehat{E}^{4,1},\qquad&e_{14}\in \widehat{E}^{4,2},\qquad&e_{15}\in \widehat{E}^{4,3},\qquad&e_{16}\in \widehat{E}^{4,4},\qquad &e_{17}\in \widehat{E}^{5,5}, \qquad&e_{18}\in \widehat{E}^{5,6},\\&e_{19}\in \widehat{E}^{5,7},\qquad&e_{20}\in \widehat{E}^{5,8},\qquad &e_{21}\in \widehat{E}^{6,5},\qquad&e_{22}\in \widehat{E}^{6,6},\qquad&e_{23}\in \widehat{E}^{6,7},\qquad&e_{24}\in \widehat{E}^{6,8},\\ &e_{25}\in \widehat{E}^{7,5}, \qquad&e_{26}\in \widehat{E}^{7,6},\qquad&e_{27}\in \widehat{E}^{7,7},\qquad&e_{28}\in \widehat{E}^{7,8},\qquad &e_{29}\in \widehat{E}^{8,1},\qquad&e_{30}\in \widehat{E}^{8,2},\\&e_{31}\in \widehat{E}^{8,3},\qquad&e_{32}\in \widehat{E}^{8,4},\qquad&\qquad&\qquad&\qquad& \end{alignat*} and \begin{alignat*}{5} &f_{e_{1}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_5}, \quad& f_{e_{2}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_6}, \quad& f_{e_{3}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_7}, \quad& f_{e_{4}}:=\widetilde{h}_1^{0,2}|_{\widehat{W}_8}, \quad& f_{e_{5}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_1},\\& f_{e_{6}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_2}, \quad& f_{e_{7}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_3}, \quad& f_{e_{8}}:=\widetilde{h}_2^{0,1}|_{\widehat{W}_4}, \quad& f_{e_{9}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_1}, \quad& f_{e_{10}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_2}, \\& f_{e_{11}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_3}, \quad& f_{e_{12}}:=\widetilde{h}_3^{0,1}|_{\widehat{W}_4}, \quad & f_{e_{13}}:=\widetilde{h}_4^{0,1}|_{\widehat{W}_1}, \quad& f_{e_{14}}:=\widetilde{h}_4^{0,1}|_{\widehat{W}_2}, \quad& f_{e_{15}}:=\widetilde{h}_4^{0,1}|_{\widehat{W}_3},\\& f_{e_{16}}:=\widetilde{h}_4^{0,1}|_{\widehat{W}_4}, \quad& f_{e_{17}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_5}, \quad& f_{e_{18}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_6}, \quad& f_{e_{19}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_7}, \quad& f_{e_{20}}:=\widetilde{h}_2^{0,2}|_{\widehat{W}_8}, \\& f_{e_{21}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_5}, \quad& f_{e_{22}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_6}, \quad& f_{e_{23}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_7}, \quad& f_{e_{24}}:=\widetilde{h}_3^{0,2}|_{\widehat{W}_8}, \quad& f_{e_{25}}:=\widetilde{h}_4^{0,2}|_{\widehat{W}_5}, \\& 
f_{e_{26}}:=\widetilde{h}_4^{0,2}|_{\widehat{W}_6}, \quad& f_{e_{27}}:=\widetilde{h}_4^{0,2}|_{\widehat{W}_7}, \quad& f_{e_{28}}:=\widetilde{h}_4^{0,2}|_{\widehat{W}_8}, \quad& f_{e_{29}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_1}, \quad& f_{e_{30}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_2}, \\& f_{e_{31}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_3}, \quad& f_{e_{32}}:=\widetilde{h}_1^{0,1}|_{\widehat{W}_4}, \quad& \quad& \quad& \end{alignat*} where $\tilde{h}_t^{l,i}$ and $\tilde{h}_t^{0,i}$ are defined as in \eqref{eq:hl} and \eqref{eq:h0}, respectively. Let $\{U_i\}_{i=1}^8$ be an invariant family of nonempty bounded open sets with $\overline{U}_i=\widehat{W}_i$, $i\in\{1,\ldots,8\}$. By Theorem \ref{thm:Pisot}, $\widehat{G}$ is of finite type. For convenience, we let $f_{e_i}:=f_i$, $i\in\{1,\ldots,32\}$. Let $\mathcal{M}_k:=\{1,\ldots,32\}^k$ for $k\geq0$. Let $\mathcal{T}_1,\ldots,\mathcal{T}_8$ be the neighborhood types of the root neighborhoods $[U_1],\ldots,[U_8]$, respectively. All neighborhood types are generated after two iterations. To construct the weighted incidence matrix in the minimal simplified reduced GIFS $\mathcal{\widehat{G}_R}$ (see Appendix A). We note that $$\mathcal{V}_1=\{(f_1,1),\ldots,(f_{32},1)\}.$$ Denote by $\mathbf{v}_1,\ldots,\mathbf{v}_{32}$ the vertices in $\mathcal{V}_1$ according to the above order. Then \begin{align*} [\mathbf{v}_5]=[\mathbf{v}_9]=[\mathbf{v}_{13}]=[\mathbf{v}_{29}]=\mathcal{T}_1\qquad\text{and}\qquad [\mathbf{v}_4]=[\mathbf{v}_{20}]=[\mathbf{v}_{24}]=[\mathbf{v}_{28}]=\mathcal{T}_2. \end{align*} Let \begin{alignat*}{2} &\mathcal{T}_{9}:=[\mathbf{v}_6]=[\mathbf{v}_{10}]=[\mathbf{v}_{14}]=[\mathbf{v}_{30}],\qquad& \mathcal{T}_{10}:=[\mathbf{v}_7]=[\mathbf{v}_{11}]=[\mathbf{v}_{15}]=[\mathbf{v}_{31}],\\ &\mathcal{T}_{11}:=[\mathbf{v}_8]=[\mathbf{v}_{12}]=[\mathbf{v}_{16}]=[\mathbf{v}_{32}],\qquad& \mathcal{T}_{12}:=[\mathbf{v}_1]=[\mathbf{v}_{17}]=[\mathbf{v}_{21}]=[\mathbf{v}_{25}],\\ &\mathcal{T}_{13}:=[\mathbf{v}_2]=[\mathbf{v}_{18}]=[\mathbf{v}_{22}]=[\mathbf{v}_{26}],\qquad& \mathcal{T}_{14}:=[\mathbf{v}_3]=[\mathbf{v}_{19}]=[\mathbf{v}_{23}]=[\mathbf{v}_{27}]. \end{alignat*} Then \begin{align*} &\mathcal{T}_1\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13}+\mathcal{T}_{14},\qquad &&\mathcal{T}_2\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}+\mathcal{T}_{11},\\ &\mathcal{T}_3\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}+\mathcal{T}_{11},\qquad &&\mathcal{T}_4\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}+\mathcal{T}_{11},\\ &\mathcal{T}_5\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13}+\mathcal{T}_{14},\qquad &&\mathcal{T}_6\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13}+\mathcal{T}_{14},\\ &\mathcal{T}_7\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13}+\mathcal{T}_{14},\qquad &&\mathcal{T}_8\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}+\mathcal{T}_{11}. \end{align*} Since $f_6f_8=f_7f_6$, the edge $e_6e_8$ is removed in $\mathcal{\widehat{G}_R}$. $\mathbf{v}_6$ generates three offspring $$(f_6f_5,2),(f_6f_6,2),(f_6f_7,2)\in\mathcal{V}_2,$$ where $[(f_6f_5,2)]=\mathcal{T}_{1}$, $[(f_6f_6,2)]=\mathcal{T}_{9}$ and $[(f_6f_7,2)]=\mathcal{T}_{10}$. Hence $$\mathcal{T}_{9}\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}.$$ As $f_7f_8=f_8f_6$, the edge $e_7e_8$ is removed in $\mathcal{\widehat{G}_R}$. 
$\mathbf{v}_7$ generates three offspring $$(f_7f_5,2),(f_7f_6,2),(f_7f_7,2)\in\mathcal{V}_2,$$ with $[(f_7f_5,2)]=\mathcal{T}_{1}$, $[(f_7f_6,2)]=\mathcal{T}_{9}$ and $[(f_7f_7,2)]=\mathcal{T}_{10}$. Thus $$\mathcal{T}_{10}\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}.$$ $\mathbf{v}_8$ generates four offspring $$(f_8f_5,2),(f_8f_6,2),(f_8f_7,2),(f_8f_8,2)\in\mathcal{V}_2,$$ where $[(f_8f_5,2)]=\mathcal{T}_{1}$, $[(f_8f_6,2)]=\mathcal{T}_{9}$, $[(f_8f_7,2)]=\mathcal{T}_{10}$ and $[(f_8f_8,2)]=\mathcal{T}_{11}$. Therefore, $$\mathcal{T}_{11}\rightarrow\mathcal{T}_1+\mathcal{T}_{9}+\mathcal{T}_{10}+\mathcal{T}_{11}.$$ Using the same argument, we have \begin{align*} \mathcal{T}_{12}\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13},\,\,\,\,\mathcal{T}_{13}\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13},\,\,\,\,\mathcal{T}_{14}\rightarrow\mathcal{T}_2+\mathcal{T}_{12}+\mathcal{T}_{13}+\mathcal{T}_{14}. \end{align*} Since no new neighborhood types are generated, we conclude that the $\mathcal{\widehat{G}_R}$ is of finite type. The weighted incidence matrix is \begin{align}\label{eq:matrixtou} A_\alpha=\Big(\frac{1}{2}\Big)^\alpha \begin{bmatrix} \begin{array}{cccccccccccccc} 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &0 &0 &0\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &0 &0 &0\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &0 &0 &0\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &0 &0 &0\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &0 &0 &0 &0\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &0 &0 &0 &0\\ 1 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &0 &0 &0\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &0\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &0\\ 0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1\\ \end{array} \end{bmatrix}. \end{align} The spectral radius $\lambda_\alpha$ of $A_\alpha$ is $(2+\sqrt{2})/2^\alpha$, and by Theorem \ref{thm:main5}, $$\dim_H(K)=\alpha= 1.77155\ldots,$$ where $\alpha$ is the unique solution of the equation $\lambda_\alpha=1.$ \end{proof} The following example is from \cite[Example 7.6]{Ngai-Xu_2023}. Here we use the method in the present paper to compute the Hausdorff dimension of the same fractal. The method here is more systematic. \begin{exam}\label{exam:gftc2} Let $\mathbb{T}^2 := \mathbb{S}^1\times \mathbb{S}^1$ be a flat 2-torus, viewed as $[0, 1]\times[0, 1]$ with opposite sides identified, and $\mathbb{T}^2$ be endowed with the Riemannian metric induced from $\R^2$. We consider the following IFS with overlaps on $\R^2$: \begin{align*} g_1(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}+\Big(0,\frac{1}{4}\Big),\qquad g_2(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{1}{4}\Big),\\ g_3(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{1}{4}\Big),\qquad g_4(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{3}{4}\Big). \end{align*} Let $H:=\{0\}\times [0,1]\big)\bigcup \big([0,1]\times\{0\}\big)$. 
Iterations of $\{g_t\}_{t=1}^4$ induce an IRS $\{R_t\}_{t=1}^4$ on $E_0:=\mathbb{T}^2=\R^2/ \Z^2$, defined as \begin{align*} &R_1(\boldsymbol{x}):=\left\{ \begin{aligned} &\frac{1}{2}\boldsymbol{x}+\Big(0,\frac{1}{4}\Big),\qquad &&\boldsymbol{x}\in E_0\backslash H,\\ & \Big\{\frac{1}{2}\boldsymbol{x}+\Big(0,\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{1}{4}\Big) \Big\} \qquad &&\boldsymbol{x}\in \{0\}\times [0,1],\\ & \Big\{\frac{1}{2}\boldsymbol{x}+\Big(0,\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(0,\frac{3}{4}\Big) \Big\} \qquad &&\boldsymbol{x}\in [0,1]\times\{0\}; \end{aligned} \right.\\ &R_2(\boldsymbol{x}):=\left\{ \begin{aligned} &\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{1}{4}\Big),\quad &&\boldsymbol{x}\in E_0\backslash H,\\ &\Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(\frac{3}{4},\frac{1}{4}\Big) \Big\} \quad &&\boldsymbol{x}\in \{0\}\times [0,1],\\ & \Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{3}{4}\Big) \Big\} \qquad &&\boldsymbol{x}\in [0,1]\times\{0\}; \end{aligned} \right.\\ &R_3(\boldsymbol{x}):=\left\{ \begin{aligned} &\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{1}{4}\Big),\qquad &&\boldsymbol{x}\in E_0\backslash H,\\ &\Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(1,\frac{1}{4}\Big) \Big\} \qquad &&\boldsymbol{x}\in \{0\}\times [0,1],\\ & \Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{2},\frac{3}{4}\Big)\Big\} \qquad &&\boldsymbol{x}\in [0,1]\times\{0\}; \end{aligned} \right.\\ &R_4(\boldsymbol{x}):=\left\{ \begin{aligned} &\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{3}{4}\Big),\quad &&\boldsymbol{x}\in E_0\backslash H,\\ &\Big\{ \frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{3}{4}\Big), \frac{1}{2}\boldsymbol{x}+\Big(\frac{3}{4},\frac{3}{4}\Big) \Big\} \quad &&\boldsymbol{x}\in \{0\}\times [0,1/2],\\ &\Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},-\frac{1}{4}\Big),\frac{1}{2}\boldsymbol{x}+\Big(\frac{3}{4},\frac{1}{4}\Big) \Big\} \quad &&\boldsymbol{x}\in \{0\}\times [1/2,1],\\ &\Big\{\frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{1}{4}\Big), \frac{1}{2}\boldsymbol{x}+\Big(\frac{1}{4},\frac{3}{4}\Big) \Big\} \quad &&\boldsymbol{x}\in [0,1]\times\{0\}. \end{aligned} \right. \end{align*} Let $K$ be the associated attractor. Then $$\dim_H(K)=\frac{\log(2+\sqrt2)}{\log2}= 1.77155\ldots.$$ \end{exam} \begin{proof} The proof of this example is similar to that of Example \ref{exam:osc1}; we only give an outline. First, for any $t\in\{1,\ldots,4\}$, by the definition of $R_t$, we let $H_t^1=\{0\}\times [0,1/2]$, $H_t^2=\{0\}\times (1/2,1]$, and $H_t^3= [0,1]\times\{0\}$. Then $H_t=\bigcup_{i=1}^3 H_t^i$. Let $r_t:=R_t|_{H_t}$. Then $r_t(H_t)=\bigcup_{l=1}^2\bigcup_{i=1}^3h_t^{l,i}(H_t^i)$. For any $t\in\{1,\ldots,4\}$, let \begin{align*} J_t^1:=\big\{[0,1]\times [0,1/2]\big\}\qquad \text{and}\qquad J_t^2:=\big\{[0,1]\times [1/2,1]\big\}. \end{align*} Hence $E_0\backslash H_t=\bigcup_{i=1}^2J_t^i$. Let $\tilde{r}_t:=R_t|_{E_0\backslash H_t}$. Then $\tilde{r}_t(E_0\backslash H_t)=\bigcup_{i=1}^2h_t^{0,i}(J_t^i)$. We can show that for any $t\in \{1,\ldots,4\}$, $h_t^{l,i}$ are contractions, where $l,i\in\{1,2\}$, and that for $i\in\{1,2\}$, $h_t^{0,i}$ are contractions. 
Moreover, there exist $\alpha,\beta\in\{1,2,3\}$ or $\sigma,\tau\in\{1,2\}$ such that for any $l\in \{1,2\}$ and any $i\in\{1,2,3\}$, $$\overline{h_t^{l,i}(H_t^i)}\subseteq \overline{H_t^\alpha}\qquad \text{or}\qquad \overline{h_t^{l,i}(H_t^i)}\subseteq \overline{J_t^\sigma},$$ and for any $i\in\{1,2\}$, $$\overline{h_t^{0,i}(J_t^i)}\subseteq \overline{H_t^\beta}\qquad \text{or}\qquad \overline{h_t^{0,i}(J_t^i)}\subseteq \overline{J_t^\tau}.$$ Hence $\{R_t\}_{t=1}^4$ satisfies the conditions of Theorem \ref{thm:exi2}. By Theorem \ref{thm:exi2} and Proposition \ref{prop:3.3}, we can obtain a minimal simplified GIFS $\widehat{G}=(\widehat{V},\widehat{E})$, where $\widehat{V}=\{1,\ldots,8\}$ and $\widehat{E}=\{e_1,\ldots,e_{32}\}$, along with the invariant family $\{\widehat{W}_i\}_{i=1}^8$ and the associated similitudes $\{f_e\}_{e\in\widehat{E}}$. Let $\{U_i\}_{i=1}^8$ be an invariant family of nonempty bounded open sets with $\overline{U}_i=\widehat{W}_i$, $i=1,\ldots,8$. It follows from Theorem \ref{thm:Pisot} that the system $\widehat{G}$ is of finite type. Next, let $\mathcal{T}_1,\ldots,\mathcal{T}_8$ be the neighborhood types of the root neighborhoods $[U_1],\ldots,[U_8]$, respectively. All neighborhood types are generated after two iterations. The weighted incidence matrix happens to be the same as that in \eqref{eq:matrixtou}. Thus the spectral radius $\lambda_\alpha$ of $A_\alpha$ is $(2+\sqrt{2})/2^\alpha$, and by Theorem \ref{thm:main5}, we have $$\dim_H(K)=\alpha= 1.77155\ldots,$$ where $\alpha$ is the unique solution of the equation $\lambda_\alpha=1.$ \end{proof} \begin{exam}\label{exam:gftctri} Let $\mathcal{C}^2:=\mathbb{S}^1\times\R^1=\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,\pi],z\in [0,2\pi]\big\}$ be a cylindrical surface. Let \begin{align*} E_0^1:=&\big\{(\cos\theta,\sin\theta,z):\theta\in[-\pi,0],z\in [0,\sqrt{3}\,\theta+\sqrt{3}\,\pi]\big\},\\ E_0^2:=&\big\{(\cos\theta,\sin\theta,z):\theta\in[0,\pi],z\in [0,-\sqrt{3}\,\theta+\sqrt{3}\,\pi]\big\}, \end{align*} and $E_0:=E_0^1\bigcup E_0^2$. Let $\rho:=(\sqrt5-1)/2$, $\boldsymbol{x}:=(\cos\theta,\sin\theta,z)\in E_0$, and $\{R_t\}_{t=1}^3$ be an IRS on $E_0$ defined as \begin{align*} &R_1(\boldsymbol{x}):=\left\{ \footnotesize{\begin{aligned} &(\cos (\rho\theta-\rho^2\pi),\sin (\rho\theta-\rho^2\pi),\rho z),\qquad\qquad\quad\qquad\qquad\qquad \qquad &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(\cos(\rho^3\pi),\sin(\rho^3\pi),0), (-1,0,0)\}\qquad &&\boldsymbol{x}=(-1,0,0);\\ \end{aligned} } \right.\\ &R_2(\boldsymbol{x}):=\left\{ \footnotesize{ \begin{aligned} &(\cos (\rho\theta+\rho^2\pi),\sin (\rho\theta+\rho^2\pi),\rho z),\qquad\qquad\qquad\qquad\qquad\qquad\quad &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(-1,0,0),(\cos(-\rho^3\pi),\sin(-\rho^3\pi),0)\} \qquad&&\boldsymbol{x}=(-1,0,0);\\ \end{aligned} } \right.\\ &R_3(\boldsymbol{x}):=\left\{ \footnotesize{\begin{aligned} &(\cos (\rho^2\theta),\sin (\rho^2\theta),\rho z+ \sqrt3\rho\pi),\qquad &&\boldsymbol{x}\in E_0\backslash \{(-1,0,0)\},\\ &\{(\cos (\rho^2\pi),\sin (\rho^2\pi),\sqrt3\rho\pi),(\cos(-\rho^3\pi),\sin(-\rho^3\pi),\sqrt3\rho\pi)\} \qquad &&\boldsymbol{x}=(-1,0,0).\\ \end{aligned} } \right. \end{align*} Let $K$ be the associated attractor (see Figure \ref{fig:gftctri}). Then $$\dim_H(K)= 1.68239\ldots.$$ \end{exam} The proof of this example is similar to that of Example \ref{exam:gftc1}; we omit the proof.
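The same numerical check applies to Examples \ref{exam:gftc1} and \ref{exam:gftc2}: the weighted incidence matrix \eqref{eq:matrixtou} equals $(1/2)^\alpha B$ for the $14\times 14$ zero-one matrix $B$ below, so solving $\lambda_\alpha=1$ amounts to $\alpha=\log\rho(B)/\log 2$. The Python sketch below is again only a sanity check (it assumes NumPy); it returns $\rho(B)=2+\sqrt{2}\approx 3.41421$ and $\alpha\approx 1.77155$.
\begin{verbatim}
# Numerical check of the dimension computed from the 14 x 14 matrix in
# (eq:matrixtou): alpha = log(rho(B)) / log(2), B being its 0/1 part.
import numpy as np

rows = """
0 1 0 0 0 0 0 0 0 0 0 1 1 1
1 0 0 0 0 0 0 0 1 1 1 0 0 0
1 0 0 0 0 0 0 0 1 1 1 0 0 0
1 0 0 0 0 0 0 0 1 1 1 0 0 0
0 1 0 0 0 0 0 0 0 0 0 1 1 1
0 1 0 0 0 0 0 0 0 0 0 1 1 1
0 1 0 0 0 0 0 0 0 0 0 1 1 1
1 0 0 0 0 0 0 0 1 1 1 0 0 0
1 0 0 0 0 0 0 0 1 1 0 0 0 0
1 0 0 0 0 0 0 0 1 1 0 0 0 0
1 0 0 0 0 0 0 0 1 1 1 0 0 0
0 1 0 0 0 0 0 0 0 0 0 1 1 0
0 1 0 0 0 0 0 0 0 0 0 1 1 0
0 1 0 0 0 0 0 0 0 0 0 1 1 1
"""
B = np.array([[int(x) for x in line.split()]
              for line in rows.strip().splitlines()])

rho = max(abs(np.linalg.eigvals(B)))   # approximately 2 + sqrt(2) = 3.41421...
alpha = np.log(rho) / np.log(2.0)      # approximately 1.77155
print(rho, alpha)
\end{verbatim}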
\begin{figure}[H] \centering \mbox{\subfigure[Front] { \includegraphics[scale=0.24]{gftctri_zheng.png}} }\qquad\qquad \mbox{\subfigure[Back] { \includegraphics[scale=0.24]{gftctri_fan.png}} } \caption{The attractor of $\{R_t\}_{t=1}^3$ in Example \ref{exam:gftctri}.} \label{fig:gftctri} \end{figure} \begin{appendix}\section{Definitions related to (GFTC)}\label{app} \setcounter{equation}{0} The following definitions can be found in \cite{Lau-Ngai_2007,Ngai-Wang-Dong_2010,Ngai-Xu_2023}. For two directed paths $\mathbf{e},\mathbf{e}'\in E^*$, we write $\mathbf{e}\preceq \mathbf{e}'$ if $\mathbf{e}$ is an initial segment of $\mathbf{e}'$ or $\mathbf{e}=\mathbf{e}'$, and write $\mathbf{e}\npreceq \mathbf{e}'$ if $\mathbf{e}$ is not an initial segment of $\mathbf{e}'$. Let $\{\mathcal{M}_k\}_{k=0}^\infty$ be a sequence of index sets such that for any $k\geq 0$, $\mathcal{M}_k$ is a finite subset of $E^*$. We say that $\mathcal{M}_k$ is an {\em antichain} if for any $\mathbf{e},\mathbf{e}'\in \mathcal{M}_k$, $\mathbf{e}\npreceq \mathbf{e}'$ and $\mathbf{e}'\npreceq \mathbf{e}$. Let $$\underline{m}_k=\underline{m}_k(\mathcal{M}_k):=\min\{|\mathbf{e}|:\mathbf{e}\in \mathcal{M}_k\}\quad\text{and}\quad \overline{m}_k=\overline{m}_k(\mathcal{M}_k):=\max\{|\mathbf{e}|:\mathbf{e}\in \mathcal{M}_k\}.$$ \begin{defi}\label{defi:nes} We say that $\{\mathcal{M}_k\}_{k=0}^\infty$ is a {\em sequence of nested index sets} if it satisfies the following conditions: \begin{enumerate} \item[(1)] both $\{\underline{m}_k\}$ and $\{\overline{m}_k\}$ are nondecreasing, and $\lim_{k\to\infty} \underline{m}_k=\lim_{k\to\infty} \overline{m}_k=\infty$; \item[(2)] for each $k\geq 1$, $\mathcal{M}_k$ is an antichain in $E^*$; \item[(3)] for each $\mathbf{e}'\in E^*$ with $|\mathbf{e}'|>\overline{m}_k$, there exists $\mathbf{e}\in \mathcal{M}_k$ such that $\mathbf{e}\preceq \mathbf{e}'$; \item[(4)] for each $\mathbf{e}'\in E^*$ with $|\mathbf{e}'|<\underline{m}_k$, there exists some $\mathbf{e}\in \mathcal{M}_k$ such that $\mathbf{e}'\preceq \mathbf{e}$; \item[(5)] there exists a positive integer $L$, independent of $k$, such that for all $\mathbf{e}\in \mathcal{M}_k$ and $\mathbf{e}'\in \mathcal{M}_{k+1}$ with $\mathbf{e}\preceq \mathbf{e}'$, we have $|\mathbf{e}'|-|\mathbf{e}|\leq L$. \end{enumerate} (We allow $\mathcal{M}_k\bigcap \mathcal{M}_{k+1}\neq \emptyset$. Very often, $\bigcup_{k=1}^\infty\mathcal{M}_k$ is a proper subset of $E^*$.) \end{defi} Fix a sequence $\{\mathcal{M}_k\}_{k=0}^\infty$ of nested index sets. It is useful to partition $\mathcal{M}_k$ into $\mathcal{M}_k^{i,j}$, $1\leq i,j\leq m$, defined as $$\mathcal{M}_k^{i,j}:=\mathcal{M}_k\bigcap \Big(\bigcup_{p\geq 1}E_p^{i,j}\Big)=\big\{\mathbf{e}=(e_1,\ldots, e_p)\in \mathcal{M}_k:\mathbf{e}\in E_p^{i,j}\,\, \text{for some}\,\, p\geq 1\big\}.$$ That is, $\mathcal{M}_k^{i,j}$ is the subset of $\mathcal{M}_k$ consisting of all directed paths from vertex $i$ to vertex $j$. Note that $\mathcal{M}_k=\bigcup_{i,j=1}^m\mathcal{M}_k^{i,j}$. For $i,j=1,\ldots, m$, $k\geq 1$, define $$\mathcal{V}_k^{i,j}:=\{(f_\mathbf{e},i,j,k),\mathbf{e}\in \mathcal{M}_k^{i,j}\}\qquad \text{and}\qquad \mathcal{V}_k:=\bigcup_{i,j=1}^m\mathcal{V}_k^{i,j}.$$ For $\mathbf{e}\in \mathcal{M}_k^{i,j}$, we call $(f_\mathbf{e},i,j,k)$ (or simply $(f_\mathbf{e},k)$) a {\em vertex}. For convenience, we let $\mathcal{M}_0=\{1,\ldots,m\}$ and $\mathcal{V}_0:=\{\mathbf{v}_{\text{root}}^1,\ldots,\mathbf{v}_{\text{root}}^m\}$, where $\mathbf{v}_{\text{root}}^i:=(I,i,i,0)$, with $I$ being the identity map on $M$.
We call $\mathcal{V}_0$ the set of {\em root vertices}. The {\em vertex set} is the set of all vertices (not counting multiplicity) \begin{align}\label{eq:v} \mathcal{V}:=\bigcup_{k\geq 0}\mathcal{V}_k. \end{align} Let $\pi: \bigcup_{k\geq 0}\mathcal{M}_k\to \mathcal{V}$ be a map from the set of all directed paths in $\bigcup_{k\geq 0}\mathcal{M}_k$ to the vertex set $\mathcal{V}$ defined naturally as follows: \begin{align} \pi(\mathbf{e}):=\left\{ \begin{aligned} &(f_\mathbf{e},i,j,k),\qquad &&\text{if}\,\,\mathbf{e}\in \mathcal{M}_k^{i,j}\,\,\text{and}\,\,k\geq 1,\\ &\mathbf{v}_{\text{root}}^i,\qquad &&\text{if}\,\,\mathbf{e}=i\in \mathcal{M}_0. \end{aligned} \right. \end{align} Define a graph $\mathcal{G}:=(\mathcal{V},\mathcal{E})$, where $\mathcal{E}$ is the set of all directed edges of $\mathcal{V}$. Given two vertices $\mathbf{v}$ and $\mathbf{v}'$ in $\mathcal{G}$, suppose there exist directed paths $\mathbf{e}\in \mathcal{M}_k$, $\mathbf{e}'\in \mathcal{M}_{k+1}$ and $\mathbf{k}\in \mathcal{E}^*$ such that $\mathbf{v}=\pi(\mathbf{e})$, $\mathbf{v}'=\pi(\mathbf{e}')$ and $\mathbf{e}'=\mathbf{e}\mathbf{k}$ (the concatenation of $\mathbf{e}$ and $\mathbf{k}$). Then we add a directed edge $\mathbf{k}:\mathbf{v}\to \mathbf{v}'$. Note that if $\mathbf{k}:\mathbf{v}_1\to \mathbf{v}'$ and $\mathbf{k}:\mathbf{v}_2\to \mathbf{v}'$, then $\mathbf{v}_1=\mathbf{v}_2$. This way we obtain the set of all directed edges $\mathcal{E}$ of $\mathcal{G}$. We call $\mathbf{v}$ a {\em parent} of $\mathbf{v}'$ and $\mathbf{v}'$ an {\em offspring} of $\mathbf{v}$. To construct the graph $\mathcal{G_R}$, called the {\em reduced graph}, we first fix an order for $\mathcal{E}^*$. Here we use the lexicographical order induced on the index set. This means that if $\mathbf{e}=(e_{i_1},\ldots,e_{i_p})$ and $\mathbf{e}'=(e_{i_1'},\ldots,e_{i_p'})$, then $\mathbf{e}<\mathbf{e}'$ if and only if $(i_1,\ldots,i_p) <(i_1', \ldots,i_p')$ in the lexicographical order. Start with the vertex set $\mathcal{V}$. The set of directed edges $\mathcal{E_R}$ of $\mathcal{G_R}$ is obtained from $\mathcal{G}$ by removing all but the smallest directed edge going to a vertex. More precisely, for each vertex $\mathbf{v}$, let $\mathbf{k}_1,\ldots, \mathbf{k}_q$ be all the directed edges going from some vertex to $\mathbf{v}$. Suppose that $\mathbf{k}_1 < \mathbf{k}_2 < \cdots < \mathbf{k}_q$ in the order described above. Then we keep only the directed edge $\mathbf{k}_1$ and remove all other edges. This way we obtain $\mathcal{E_R}$. To finish the construction of the reduced graph, we remove all vertices that do not have offspring in $\mathcal{G_R}$, together with all the vertices and edges leading only to them (see \cite[Appendix A]{Lau-Ngai_2007} for an example of such a vertex). We denote the resulting graph by the same symbol $\mathcal{G_R}$ and write $\mathcal{G_R} = (\mathcal{V_R}, \mathcal{E_R})$, where $\mathcal{V_R}$ is the set of all vertices and $\mathcal{E_R}$ is the set of all edges. Let $\widehat{\mathcal{G}}:=(\widehat{\mathcal{V}},\widehat{\mathcal{E}})$ be a minimal simplified GIFS associated to $\mathcal{G}$. We can use a similar method to construct a corresponding reduced GIFS $\widehat{\mathcal{G}}_{\mathcal{R}}:=(\widehat{\mathcal{V}}_{\mathcal{R}},\widehat{\mathcal{E}}_{\mathcal{R}})$.
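The edge-pruning that produces $\mathcal{E_R}$ can be phrased algorithmically: for every vertex, keep only the lexicographically smallest directed edge coming into it. The toy Python sketch below illustrates just this step on a hypothetical edge list; it does not carry out the subsequent removal of offspring-less vertices and of the vertices and edges leading only to them.
\begin{verbatim}
# Toy sketch of the edge pruning defining E_R: for each vertex, keep only the
# lexicographically smallest incoming edge.  Vertex names and edge labels are
# hypothetical; in the text, edges are labelled by directed paths k in E*.

def reduce_edges(edges):
    # edges: list of (label, parent, child); labels are tuples of indices.
    best = {}
    for label, parent, child in edges:
        if child not in best or label < best[child][0]:
            best[child] = (label, parent, child)
    return sorted(best.values())

if __name__ == "__main__":
    edges = [
        ((1,), "v_root", "v1"),
        ((2,), "v_root", "v2"),
        ((1, 2), "v1", "v3"),
        ((2, 1), "v2", "v3"),   # larger label into v3: this edge is removed
    ]
    for e in reduce_edges(edges):
        print(e)
\end{verbatim}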
Fix an invariant family $\mathbf{U}:=\{U_i\}_{i=1}^m$ for $G=(V,E)$. Let $\mathbf{v}:=(f_\mathbf{e},i,j,k)\in \mathcal{V}_k$ with $\mathbf{e}\in \mathcal{M}^{i,j}_k$ and $\mathbf{v}'=(f_{\mathbf{e}'},i',j',k)\in \mathcal{V}_k$ with $\mathbf{e}'\in \mathcal{M}^{i',j'}_k$. We say that two vertices $\mathbf{v}$ and $\mathbf{v}'$ are {\em neighbours} (with respect to $\mathbf{U}$) if \begin{align*} i=i'\qquad\text{and}\qquad f_\mathbf{e}(U_j)\bigcap f_{\mathbf{e}'}(U_{j'})\neq \emptyset. \end{align*} We call the set \begin{align*} \mathcal{N}(\mathbf{v}):=\{\mathbf{v}':\mathbf{v}'\,\,\text{is a neighbour of}\,\, \mathbf{v}\} \end{align*} the {\em neighbourhood} of $\mathbf{v}$ (with respect to $\mathbf{U}$). We now define an equivalence relation on the set of vertices $\mathcal{V}$. Two vertices $\mathbf{v}\in \mathcal{V}_k$ and $\mathbf{v}'\in \mathcal{V}_{k'}$ are said to be {\em equivalent}, denoted $\mathbf{v}\sim\mathbf{v}'$ (or more precisely, $\mathbf{v}\sim_\tau\mathbf{v}'$), if $\#\mathcal{N}(\mathbf{v})=\#\mathcal{N}(\mathbf{v}')$ and $\tau=f_{\mathbf{v}'}\circ f_{\mathbf{v}}^{-1}:M\to M$ induces a bijection $g_\tau:\mathcal{N}(\mathbf{v})\to \mathcal{N}(\mathbf{v}')$ defined by \begin{align}\label{eq:equi} g_\tau(f_\mathbf{u},i,j,k)=(\tau\circ f_\mathbf{u},i',j',k') \end{align} so that the following conditions are satisfied: \begin{enumerate} \item[(a)] In \eqref{eq:equi}, $j=j'$. \item[(b)] For $\mathbf{u}\in \mathcal{N}(\mathbf{v})$ and $\mathbf{u}'\in\mathcal{N}(\mathbf{v}')$ such that $g_\tau(\mathbf{u})=\mathbf{u}'$, and for any positive integer $l\geq 1$, a directed path $\mathbf{e}\in E^*$ satisfies $(f_\mathbf{u}\circ f_\mathbf{e},k+l)\in \mathcal{V}_{k+l}$ if and only if it satisfies $(f_{\mathbf{u}'}\circ f_\mathbf{e},k'+l)\in \mathcal{V}_{k'+l}$. \end{enumerate} It is easy to check that $\sim$ is an equivalence relation. Denote the equivalence class of $\mathbf{v}$ by $[\mathbf{v}]$, and call it the \textit{neighborhood type} of $\mathbf{v}$ (with respect to $\mathbf{U}$). \end{appendix} \section*{Acknowledgement} Part of this work was carried out while the third author was visiting Beijing Institute of Mathematical Sciences and Applications (BIMSA). She thanks the institute for its hospitality and support. \begin{thebibliography}{XXX} \bibitem{Balogh-Rohner_2007} Z. M. Balogh and H. Rohner, Self-similar sets in doubling spaces, {\em Illinois J. Math.} {\bf 51} (2007), 1275--1297. \bibitem{Balogh-Tyson_2005} Z. M. Balogh and J. T. Tyson, Hausdorff dimensions of self-similar and self-affine fractals in the Heisenberg group, {\em Proc. London Math. Soc.} {\bf 91} (2005), 153--183. \bibitem{Barnsley-Vince_2012} M. F. Barnsley and A. Vince, Real projective iterated function systems, {\em J. Geom. Anal.} {\bf 22} (2012), 1137--1172. \bibitem{Bishop-Crittenden_1964} R. L. Bishop and R. J. Crittenden, {\em Geometry of Manifolds}, Academic Press, New York, 1964. \bibitem{Das-Ngai_2004} M. Das and S.-M. Ngai, Graph-directed iterated function systems with overlaps, {\em Indiana Univ. Math. J.} {\bf 53} (2004), 109--134. \bibitem{Edgar_1990} G. A. Edgar, {\em Measure, Topology, and Fractal Geometry}, Springer, New York, 2008. \bibitem{Falconer_2003} K. J. Falconer, {\em Fractal Geometry: Mathematical Foundations and Applications}, John Wiley, 2nd Ed., 2014. \bibitem{Hossain-Akhtar-Navascues_2024} A. Hossain, M. N. Akhtar, and M. A. Navascu\'es, Fractal interpolation on the real projective plane, {\em Numer. Algorithms} {\bf 96} (2024), 557--582. \bibitem{Hutchinson_1981} J. E. Hutchinson, Fractals and self-similarity, {\em Indiana Univ. Math.
J.} {\bf 30} (1981), 713--747. \bibitem{Jin-Yau_2005} N. Jin and S. S. T. Yau, General finite type IFS and M-matrix, {\em Commun. Anal. Geom.} {\bf 13} (2005), 821--843. \bibitem{Lau-Ngai_2007} K.-S. Lau and S.-M. Ngai, A generalized finite type condition for iterated function systems, {\em Adv. Math.} {\bf 208} (2007), 647--671. \bibitem{Mauldin-Williams_1988} R. D. Mauldin and S. C. Williams, Hausdorff dimension in graph directed constructions, {\em Trans. Am. Math. Soc.} {\bf 309} (1988), 811--829. \bibitem{Ngai-Wang_2001} S.-M. Ngai and Y. Wang, Hausdorff dimension of self-similar sets with overlaps, {\em J. Lond. Math. Soc.} {\bf 2} (2001), 655--672. \bibitem{Ngai-Wang-Dong_2010} S.-M. Ngai, F. Wang, and X. H. Dong, Graph-directed iterated function systems satisfying the generalized finite type condition, {\em Nonlinearity} {\bf 23} (2010), 2333--2350. \bibitem{Ngai-Xu_2021} S.-M. Ngai and Y. Y. Xu, Existence of $L^q$-dimension and entropy dimension of self-conformal measures on Riemannian manifolds, {\em Nonlinear Anal.} {\bf 230} (2023), Paper No. 113226. \bibitem{Ngai-Xu_2023} S.-M. Ngai and Y. Y. Xu, Separation conditions for iterated function systems with overlaps on Riemannian manifolds, {\em J. Geom. Anal.} {\bf 33} (2023), no. 8, Paper No. 262. \bibitem{Olsen_1994} L. Olsen, {\em Random Geometrically Graph Directed Self-Similar Multifractals}, Pitman Res. Notes Math. Ser., vol. 307, New York, 1994. \bibitem{Strichartz_1992} R. S. Strichartz, Self-similarity on nilpotent Lie groups, {\em Contemp. Math.}, {\bf 140} (1992), 123--157. \bibitem{Wu-Yamaguchi_2017} D. Wu and T. Yamaguchi, Hausdorff dimension of asymptotic self-similar sets, {\em J. Geom. Anal.} {\bf 4} (2017), 339--368. \end{thebibliography} \end{document}
2412.13906v1
http://arxiv.org/abs/2412.13906v1
Whitney Numbers of Rank-Metric Lattices and Code Enumeration
\documentclass[11pt,a4paper,reqno]{article} \usepackage{amssymb} \usepackage{latexsym} \usepackage{amsmath} \usepackage{graphicx} \usepackage{amsthm} \usepackage{empheq} \usepackage{bm} \usepackage{booktabs} \usepackage[dvipsnames]{xcolor} \usepackage{pagecolor} \usepackage{subcaption} \usepackage{tikz,lipsum,lmodern} \usepackage{enumitem} \usepackage{calligra} \usepackage{mathrsfs} \usepackage[margin=3cm]{geometry} \usepackage{authblk} \usepackage{enumitem} \setitemize{itemsep=0em} \setenumerate{itemsep=0em} \usepackage{hyperref} \hypersetup{ colorlinks = true, urlcolor = blue, linkcolor = purple, citecolor = ForestGreen } \numberwithin{equation}{section} \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{notation}[theorem]{Notation} \newtheorem{remark}[theorem]{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{construction}[theorem]{Construction} \newtheorem{question}{Question} \newtheorem{problem}{Problem} \renewcommand*{\theproblem}{\Alph{problem}} \newtheorem*{thnonumber}{Theorem} \newtheorem*{nnpb}{Problem} \newcommand\qbin[3]{\textnormal{bin}_{#3}(#1,#2)} \newcommand\bbq[1]{\bm{b}_q(#1)} \newcommand\Bq[1]{\bm{B}_q(#1)} \newcommand{\numberset}{\mathbb} \newcommand{\N}{\numberset{N}} \newcommand{\Z}{\numberset{Z}} \newcommand{\Q}{\numberset{Q}} \newcommand{\R}{\numberset{R}} \newcommand{\C}{\mathcal{C}} \newcommand{\K}{\numberset{K}} \newcommand{\F}{\numberset{F}} \newcommand{\A}{\numberset{A}} \newcommand{\Ol}{\mathcal{O}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cL}{\mathcal{L}} \newcommand{\fq}{\F_q} \newcommand{\fqn}{\F_{q^n}} \newcommand{\fqm}{\F_{q^m}} \newcommand{\cfq}{\overline{\F_q}} \newcommand{\fqnu}{\F_{q^{\nu}}} \newcommand{\HH}{\textnormal{H}} \newcommand{\id}{\textnormal{id}} \newcommand{\adj}{\textnormal{adj}} \newcommand{\Nl}{\textnormal{N}_{\textnormal{l}}} \newcommand{\Nm}{\textnormal{N}_{\textnormal{m}}} \newcommand{\Nr}{\textnormal{N}_{\textnormal{r}}} \newcommand{\Il}{\textnormal{I}_{\textnormal{l}}} \newcommand{\Imm}{\textnormal{I}_{\textnormal{m}}} \newcommand{\Ir}{\textnormal{I}_{\textnormal{r}}} \newcommand{\Tr}{\textnormal{Tr}} \newcommand{\mV}{\mathcal{V}} \newcommand{\mH}{\mathcal{H}} \newcommand{\mA}{\mathcal{A}} \newcommand{\mL}{\mathscr{L}} \newcommand{\mU}{\mathcal{U}} \newcommand{\I}{\mathcal I} \newcommand{\mM}{\mathbf{m}} \newcommand{\Pro}{\numberset{P}} \def\GammaL{\mathrm{\Gamma L}} \newcommand{\mC}{\mathcal{C}} \newcommand{\mS}{\mathcal{S}} \newcommand{\mG}{\mathcal{G}} \newcommand{\mD}{\mathcal{D}} \newcommand{\mF}{\mathcal{F}} \newcommand{\mW}{\mathcal{W}} \newcommand{\mX}{\mathcal{X}} \newcommand{\mI}{\mathcal{I}} \newcommand{\mB}{\mathcal{B}} \newcommand{\mE}{\mathcal{E}} \newcommand{\mT}{\mathcal{T}} \newcommand{\mN}{\mathbf{n}} \newcommand{\rk}{\textnormal{rk}} \newcommand{\avg}{\textnormal{avg}} \newcommand{\matavg}{\textnormal{matavg}} \newcommand{\mP}{\mathcal{P}} \newcommand{\lin}{\textnormal{Lin}} \newcommand{\ev}{\textnormal{ev}} \newcommand{\mO}{\mathcal{O}} \def\Aut{\mathrm{Aut}} \newcommand{\mat}{\F_q^{n \times m}} \renewcommand{\longrightarrow}{\to} \newcommand{\Ball}{B} \newcommand{\ball}{\bm{b}} \newcommand{\bH}{\ball^\HH} \newcommand{\aut}{\textnormal{Aut}} \newcommand{\brk}{\ball^\rk} \newcommand{\dH}{d^{\textnormal{H}}} \newcommand{\wH}{\omega^{\textnormal{H}}} \newcommand{\drk}{d^{\textnormal{rk}}} 
\newcommand{\rhork}{\rho^{\textnormal{rk}}} \newcommand{\rhoH}{\rho^{\textnormal{H}}} \newcommand{\wrk}{\omega^{\rk}} \newcommand{\WH}{W^{\HH}} \newcommand{\Wrk}{W^{\rk}} \newcommand{\BallH}{B^\HH} \newcommand{\Ballrk}{B^\rk} \newcommand{\pp}{\bm{p}} \newcommand{\GL}{\textnormal{GL}_m(q)} \newcommand{\PG}{\textnormal{PG}} \newcommand\p[3]{\pp(#1;#2,#3)} \newcommand\pH[3]{\pp^\HH(#1;#2,#3)} \newcommand\DD[2]{|#1| / |#2|} \def\P{\mathscr{P}} \def\At{\textup{At}} \def\cov{\mathrel{<\kern-.6em\raise.015ex\hbox{$\cdot$}}} \def\<{\left<} \def\>{\right>} \newcommand{\qqbin}[2]{\begin{bmatrix}{#1}\\ {#2}\end{bmatrix}_q} \newcommand{\qmqbin}[2]{\begin{bmatrix}{#1}\\ {#2}\end{bmatrix}_{q^m}} \newcommand{\MRD}{{\textnormal{MRD}}} \newcommand{\srk}{{\textnormal{srk}}} \newcommand{\Alt}{{\textnormal{Alt}}} \newcommand{\Sym}{{\textnormal{Sym}}} \newcommand{\Her}{{\textnormal{Her}}} \newcommand{\supp}{{\textnormal{supp}}} \newtheorem{claim}{Claim} \renewcommand*{\theclaim}{\Alph{claim}} \newcommand*{\myproofname}{Proof of the claim} \newenvironment{clproof}[1][\myproofname]{\begin{proof}[#1]\renewcommand*{\qedsymbol}{\(\blacktriangle\)}}{\end{proof}} \allowdisplaybreaks \title{\textbf{Whitney Numbers of Rank-Metric Lattices \\ and Code Enumeration}} \usepackage{setspace} \setstretch{1.05} \author[1]{Giuseppe Cotardo\thanks{G.C. was partially supported by the NSF grants DMS-2037833 and DMS-2201075, and by the Commonwealth Cyber Initiative.}} \affil[1]{Virginia Tech, Blacksburg, U.S.A.} \author[2]{Alberto Ravagnani\thanks{A. R. is supported by the Dutch Research Council through grants OCENW.KLEIN.539 and VI.Vidi.203.045.}} \affil[2]{Eindhoven University of Technology, the Netherlands} \author[3]{Ferdinando Zullo\thanks{F. Z. was partially supported by the project COMBINE from University of Campania and was partially supported by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM).}} \affil[3]{Universit\`a degli Studi della Campania ``Luigi Vanvitelli'', Caserta, Italy.} \date{} \usepackage{setspace} \setstretch{0.99} \begin{document} \maketitle \abstract{ We investigate the Whitney numbers of the first kind of rank-metric lattices, which are closely linked to the open problem of enumerating rank-metric codes having prescribed parameters. We apply methods from the theory of hyperovals and linear sets to compute these Whitney numbers for infinite families of rank-metric lattices. As an application of our results, we prove asymptotic estimates on the density function of certain rank-metric codes that have been conjectured in previous work. } \medskip \section*{Introduction} This paper applies methods from finite geometry to an open order theory problem, solving some new instances. The problem is the computation of the Whitney numbers (of the first kind) of rank-metric lattices. Rank-metric lattices were introduced in~\cite{cotardo2023rank} in connection with the problem of enumerating rank-metric codes with correction capability bounded from below~\cite{cooperstein1998external,delsarte1978bilinear,gabidulin1985theory,roth1991maximum}. They can be seen as the $q$-analogues of higher-weight Dowling lattices, introduced by Dowling in~\cite{dowling1971codes,dowling1973q} in connection with the \textit{packing problem} for the Hamming metric, which is the central question of classical coding theory~\cite{dowling1971codes}. Rank-metric lattices are indexed by four parameters. 
More precisely, $\mL_i(n,m;q)$ is the lattice of the subspaces of $\F_{q^m}^n$ generated by vectors of rank weight up to~$i$; see Section~\ref{sec:1} for a precise definition. In~\cite{cotardo2023rank}, the supersolvable rank-metric lattices were completely characterized, and their Whitney numbers were computed for some sporadic parameter sets. The Whitney numbers of $\mL_2(4,4;2)$ were determined under the assumption that they are polynomial expressions in the field size $q$. The techniques applied in~\cite{cotardo2023rank} do not seem to extend naturally to other rank-metric lattices. In this paper, we propose a new approach for computing the Whitney numbers of rank-metric lattices, which leverages techniques from finite geometry. We use the theory of hyperovals and linear sets, as well as the description of rank-metric codes as spaces of linearized polynomials. This allows us to compute the Whitney numbers of rank-metric lattices in several new instances, and to confirm the value conjectured in~\cite[Section~6]{cotardo2023rank} for the lattice~$\mL_2(4,4;2)$ under the polynomial assumption. Using the same methods, we also establish new results on the density function of rank-metric codes, which is another wide open problem in coding theory~\cite{antrobus2019maximal,byrne2020partition,gruica2022common,gruica2023rank,gruica2024densities,neri2018genericity}. The remainder of this paper is organized as follows. In Section~\ref{sec:1} we recall the definition of rank-metric lattices and outline the motivation for this work. In Section~\ref{sec:2}, we introduce the tools we will use throughout the paper: linear sets, hyperovals, rank-metric codes, and linearized polynomials. In Section~\ref{sec:3}, we establish the classification of two-dimensional MRD codes in $\F_{q^4}^4$ and $\F_{2^m}^m$, whose number we explicitly compute in Section~\ref{sec:4}. In Section~\ref{sec:5}, we count one-weight rank-metric codes using their multivariate polynomial description. Finally, in Section~\ref{sec:6}, we provide a recursive formula for computing the Whitney numbers of the first kind and give a closed formula for the ones of $\mL_2(n,3;q)$ with $n \in \{4,5,6\}$. \section{Motivation and Preliminaries} \label{sec:1} We start by recalling the definition of a lattice. We refer the reader to~\cite{stanley2011enumerative} for the necessary background on order theory and follow the notation therein. \begin{definition} A \textbf{finite lattice} $(\mL, \le)$ is a finite poset where every $s,t \in \mL$ have a join and a meet. In this case, join and meet can be seen as commutative and associative operations $\vee, \wedge : \mL \times \mL \to \mL$. In particular, the \textbf{join} (resp., the \textbf{meet}) of a non-empty subset $S \subseteq \mL$ is well-defined as the join (resp., the meet) of its elements and denoted by~$\vee S$ (resp., by~$\wedge S$). Furthermore, $\mL$ has a minimum and a maximum element ($0$ and~$1$, resp.). Finally, if $\mL$ is graded, then the \textbf{rank} of $\mL$ is~$\rk(\mL)=\rho(1)$, where $\rho$ is the rank function of $\mL$. \end{definition} \begin{definition} Let $\mL$ be a finite, graded lattice with rank function $\rho$ and M\"obius function~$\mu$.
The \textbf{characteristic polynomial} of $\mL$ is the element of $\Z[\lambda]$ defined as \begin{equation*} \chi(\mL;\lambda):=\sum_{s\in\mL}\mu(s)\lambda^{\rk(\mL)-\rho(s)}=\sum_{i=0}^{\rk(\mL)}w_i(\mL)\lambda^{\rk(\mL)-i}, \end{equation*} where \begin{equation*} w_i(\mL)=\sum_{\substack{s\in\mL\\\rho(s)=i}}\mu(s) \end{equation*} is the $i$-\textbf{th Whitney number of the first kind} of $\mL$. The $i$-\textbf{th Whitney number of the second kind} of $\mL$, denoted by $W_i(\mL)$, is the number of elements of $\mL$ of rank~$i$. \end{definition} The family of \textit{rank-metric lattices} was introduced in~\cite{cotardo2023rank}, in connection with the problem of enumerating rank-metric codes with prescribed parameters, a current open problem in coding theory~\cite{antrobus2019maximal,byrne2020partition,gruica2022common,gruica2023rank}. We recall their definition after establishing the notation for the rest of the paper. In the sequel, $q$ is a prime power and $\fq$ is the finite field with $q$ elements. We let $m,n$ be positive integers with $m \ge n$. \begin{definition} The \textbf{rank} (\textbf{weight}) of a vector $v\in\fqm^n$ is the dimension over $\fq$ of the $\fq$-span of its entries. We denote it by $\rk(v)$. The (\textbf{rank}) \textbf{distance} between vectors $v,w \in \F_{q^m}^n$ is $d(v,w)=\rk(v-w)$. A \textbf{rank-metric code} is a $k$-dimensional $\fqm$-linear subspace $C \le \fqm^n$ with $k \ge 1$. The \textbf{minimum distance} of $C$ is $d(C):=\min\{\rk(v):v\in C \textup{ with } v\neq 0\}$. We refer to $C$ as an $[n,k,d(C)]_{\fqm}$-code. \end{definition} Rank-metric lattices are defined as follows. \begin{definition} Let $i\in \{1, \ldots, n\}$. The \textbf{rank-metric lattice} (\textbf{RML} in short) $\mL_i(n,m;q)$ associated with the $4$-tuple $(i,n,m,q)$ is the geometric lattice whose atoms are the $1$-dimensional $\fqm$-linear subspaces of $\fqm^n$ generated by the vectors of rank at most $i$, i.e., \begin{align*} \mL_i(n,m;q):= \{\<v_1,\ldots,v_\ell\>_{\fqm} \colon \ell \ge 1, \, v_1,\ldots,v_\ell\in T_i(n,m;q)\}, \end{align*} where $T_i(n,m;q):=\{v\in\fqm^n:\rk(v)\leq i\}$. The order is given by the inclusion of $\fqm$-linear subspaces of $\fqm^n$. \end{definition} This paper focuses on the computation of the Whitney numbers of the first kind of rank-metric lattices. We follow the notation in~\cite{cotardo2023rank} and denote by $w_j(i,n,m;q)$ the $j$-th Whitney number of the first kind of the lattice $\mL_i(n,m;q)$. We apply various techniques, ranging from coding theory and order theory to finite geometry (hyperovals and linear sets). \section{Tools} \label{sec:2} In this section we recall some notions and results that will be needed throughout the paper. The section is divided into three subsections. \subsection{Hyperovals and Linear Sets} Let $\PG(r,q)$ denote the Desarguesian projective space of dimension $r$ and order $q$, and let $\PG(V,\fq)$ denote the projective space obtained from $V$, for some $\fq$-linear vector space $V$. Classical references are~\cite{hirschfeld1979projective,lavrauw2015field,polverino2010linear}. \begin{definition} A set $\mathcal{O}$ of $q+1$ points in $\mathrm{PG}(2,q)$ is called an \textbf{oval} if no three of its points are collinear. Moreover, we say that the ovals $\mathcal{O}_1$ and $\mathcal{O}_2$ are \textbf{equivalent} if there exists a collineation $\phi$ of the plane such that $\phi(\mathcal{O}_1)=\mathcal{O}_2$. \end{definition} Observe that conics are a family of ovals.
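For small fields this containment can be verified by brute force; the following minimal Python sketch (restricted to a prime $q$, so that arithmetic modulo $q$ suffices, and with function names of our own choosing) checks that the conic $\{(t,t^2,1)\colon t\in\F_q\}\cup\{(0,1,0)\}$ consists of $q+1$ points, no three of which are collinear.
\begin{verbatim}
# Verify that the conic x^2 = y*z in PG(2,q) is an oval for a small
# prime q: it has q+1 points and no three of them are collinear.
from itertools import combinations

def collinear(p1, p2, p3, q):
    (a, b, c), (d, e, f), (g, h, i) = p1, p2, p3
    det = (a * (e * i - f * h) - b * (d * i - f * g)
           + c * (d * h - e * g))
    return det % q == 0

q = 7  # any small prime
conic = [(t, (t * t) % q, 1) for t in range(q)] + [(0, 1, 0)]
assert len(conic) == q + 1
assert not any(collinear(*triple, q) for triple in combinations(conic, 3))
print("the conic is an oval in PG(2, %d)" % q)
\end{verbatim}
The same brute-force test applies verbatim to the hyperovals discussed next.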
Furthermore, by the Segre's Theorem we know that when $q$ is odd, all the ovals are equivalent to a conic; see~\cite{segre1955ovals}. When $q$ is even, every oval can be uniquely extended to a set of $q+2$ points satisfying again the condition that no three of its points are still collinear. These sets are known as \textbf{hyperovals}. We refer to~\cite{caullery2015classification,cherowitzo1996hyperovals,de2002arcs} for a list of known examples and classification results. We will be mainly interested in the case of translation hyperovals. \begin{definition} Let $\mathcal{H}$ be a hyperoval in $\PG(2,q)$, with $q$ even. We say that $\mathcal{H}$ is a \textbf{translation hyperoval} if there exists a secant line $\ell$ to $\mathcal{H}$ such that the group of elations having axis $\ell$ acts transitively on the points of $\mathcal{H}\setminus \ell$. \end {definition} The set of points of any hyperoval $\mathcal{H}$ in $\mathrm{PG}(2,q)$ is \[ \{ (x,f(x),1) \colon x \in \mathbb{F}_q \} \cup \{(1,0,0),(0,1,0)\}, \] where $f \in \mathbb{F}_q[x]$, up to equivalence. We will denote such a set of points by $\mathcal{H}_f$. It is not difficult to see that $\mathcal{H}_f$ is a translation hyperoval if and only if $f$ is an additive permutation polynomial and $f(x)/x$ is a permutation polynomial in $\F_{q}\setminus\{0\}$ as well. Translation hyperovals have been completely classified. \begin{theorem} (\cite[Main Theorem]{payne1971complete} and~\cite{hirschfeld1975ovals})\label{th:classhyper} Let \smash{$f \in \mathbb{F}_q[x]$} with $q=2^h$. Then $\mathcal{H}_f$ is a translation hyperoval if and only if \smash{$f(x)=ax^{2^j}$} for some $j \in \{1,\ldots,h-1\}$ with $\gcd(h,j)=1$. \end{theorem} In this paper, we mainly focus on $\fq$-linear sets in the projective line. \begin{definition}\label{def:linset} Let $V$ be an $\fqm$-linear vector space of dimension 2, let $U $ be an $\fq$-linear subspace of $V$, and let $\Lambda=\PG(V,\fqm)=\PG(1,q^m)$. The set \[ L_U=\{\langle \mathbf{u} \rangle_{\fqm} \, \colon \, \mathbf{u}\in U\setminus\{\mathbf{0}\}\} \] is called an $\fq$-\textbf{linear set} of \textbf{rank} $\dim_{\fq}(U)$. \end{definition} The following is an important concept related to linear sets. \begin{definition}\label{def:weight} The \textbf{weight} of a point $P=\langle \mathbf{v}\rangle_{\fqm} \in \Lambda$ in $L_U$ is $\dim_{\fq}(U \cap \langle \mathbf{v}\rangle_{\fqm})$. \end{definition} The maximum value for $|L_U|$ can be reached in the case where all the points of $L_U$ have weight one, therefore if $L_U$ is any $\fq$-linear set of rank $k$ we have \begin{equation}\label{eq:card} |L_U| \leq \frac{q^k-1}{q-1}. \end{equation} Linear sets attaining the bound in \eqref{eq:card} are called \textbf{scattered} and the $\fq$-linear subspace~$U$ is called a \textbf{scattered subspace}. They were originally introduced in~\cite{blokhuis2000scattered}. It is difficult, in general, to determine whether or not two linear sets are equivalent. Recall that~$L_{U_1}$ and~$L_{U_2}$ are \textbf{equivalent} if there exists $\varphi \in \mathrm{P\Gamma L}_1(q^m)$ with $\varphi(L_{U_1})=~L_{U_2}$. In~\cite{csajbok2018classes}, the authors introduced the following concept for detecting families of linear sets for which this problem is easier to address. 
\begin{definition}\label{def:simple} An $\fq$-linear set $L_U$ in $\PG(1,q^m)=\PG(V,\fqm)$ is \textbf{simple} if for each $\fq$-linear subspace $W$ of $V$, $L_W$ is $\mathrm{P\Gamma L}_2(q^m)$-equivalent to $L_U$ if and only if $U$ and~$W$ are in the same orbit of $\mathrm{\Gamma L}_2(q^m)$. We say that $L_U$ has \textbf{$\mathrm{GL}$-class one} if for each $\fq$-linear subspace~$W$ of~$V$, $L_W$ is $\mathrm{PG L}_2(q^m)$-equivalent to $L_U$ if and only if $U$ and $W$ are in the same orbit of~$\mathrm{G L}_2(q^m)$. \end{definition} \subsection{Polynomial Description of Rank-Metric Codes} We denote by $\fq[x_1,\ldots,x_\ell]$ the ring of polynomials in the variables $x_1,\ldots,x_\ell$ with coefficient in $\fq$. We let $\mathrm{GL}_k(q)$ and $\mathrm{\Gamma L}_k(q)$ denote the general linear and the general semilinear groups, respectively. We recall the connection between rank-metric codes and linearized polynomials. We start describing some properties of linearized polynomials and refer the reader to~\cite{wu2013linearized} for further details. \begin{definition} A (\textbf{linearized}) \textbf{$q$-polynomial} over $\F_{q^m}$ is a polynomial of the form $$ f:=\sum_{i \ge 0}f_i x^{q^i} \in \F_{q^m}[x].$$ The largest $i$ with $f_i \neq 0$ is the \textbf{$q$-degree} of $f$. The set of $q$-polynomials modulo~$x^{q^m}-x$ is denoted by $\mathcal{L}_{m,q}[x]$. \end{definition} Note that $\mathcal{L}_{m,q}[x]$ is an $\F_q$-algebra equipped with the operations of addition and composition of polynomials, and scalar multiplication by elements of $\F_q$. The elements of~$\cL_{m,q}[x]$ are in one-to-one correspondence with the $q$-polynomials of $q$-degree upper bounded by $m-1$. Throughout the paper, we abuse notation and denote an element of $\cL_{m,q}[x]$ as its unique representative of $q$-degree at most $m-1$. \begin{remark} \label{rem:end} We have $\F_q$-algebra automorphisms \begin{equation}\label{eq:isom_sigma} (\mathcal{L}_{m,q}[x],+,\circ)\cong(\mathrm{End}_{\fq}(\mathbb{F}_{q^m}),+,\circ) \cong (\mathbb{F}_q^{m \times m},+,\cdot), \end{equation} where in the 3-tuples we omitted the scalar multiplication by an element of~$\F_q$. We refer the reader to~\cite{wu2013linearized} and~\cite{gruica2023rank} for further details. \end{remark} In the following, we consider $\mathcal{L}_{m,q}[x]$ endowed with the metric \[ d_L(f,g)=\dim_{\fq}(\mathrm{Im}(f-g)), \] for any $f,g \in \mathcal{L}_{m,q}[x]$. It is not hard to check that $(\mathcal{L}_{m,q}[x],d_L)$ and $(\F_{q^m}^m,d)$ are isometric, where $d$ is the rank distance; see Section~\ref{sec:1}. As a consequence, an $\F_{q^m}$-linear rank-metric code in $\F_{q^m}^m$ corresponds to an $\F_{q^m}$-linear subspace of $\mathcal{L}_{m,q}[x]$. Therefore, we will sometimes say ``an $\F_{q^m}$-linear rank-metric code $\mC \le \mathcal{L}_{m,q}[x]$''. Interpreting rank-metric codes as spaces of $q$-polynomials will allow us to use results on linear sets and to count certain families of MRD codes. \begin{notation} If \smash{$f=\sum_{i=0}^{m-1}f_ix^{q^i} \in \cL_{m,q}[x]$} and $\rho \in \aut(\F_q)$ is a field automorphism, then we let \smash{$f^\rho:= \sum_{i=0}^{m-1}\rho(f_i)x^{q^i}$}. Furthermore, if \smash{$\mC \le \cL_{m,q}[x]$} is a rank-metric code and $\rho \in \aut(\F_q)$, we let $\mC^\rho:=\{f^\rho \mid f \in \mC\}$. \end{notation} We recall two notions of equivalence between rank-metric codes. 
\begin{definition} We say that the $\mathbb{F}_{q^m}$-linear rank-metric codes $\C_1, \C_2 \le \cL_{m,q}[x]$ are \textbf{equivalent} if there exist invertible \smash{$q$-polynomials} $f,g \in \cL_{m,q}[x]$ and a field automorphism \smash{$\rho \in \mathrm{Aut}(\fq)$} such that \[ \C_1=f \circ \C_2^\rho \circ g. \] The \textbf{automorphism group} of $\mC$ is \begin{align*} \aut(\mC) = \{(f_1,\rho,f_2) \in \GL \times \mathrm{Aut}(\fq) \times \GL \mid \C=f_1 \circ \C^\rho \circ f_2\}, \end{align*} where $\GL$ denotes the group of invertible linearized polynomials in $\mathcal{L}_{m,q}[x]$. \end{definition} \begin{remark} Note that, for $\C\leq \mathcal{L}_{m,q}[x]$, $f,g \in \cL_{m,q}[x]$ and a field automorphism \smash{$\rho \in \mathrm{Aut}(\fq)$}, the set $f \circ \C^\rho \circ g$ may not be an $\F_{q^m}$-subspace of $\mathcal{L}_{m,q}[x]$. \end{remark} We aim to count the number of $\mathbb{F}_{q^m}$-linear rank-metric codes in $\mathbb{F}_{q^m}^n$, and therefore the natural concept of equivalence in this context makes use of $\F_{q^m}$-linear isometries of $(\mathcal{L}_{m,q}[x],d_L)$. The following is the second notion of equivalence. \begin{definition}\label{def:equivrk} We say that $\mathbb{F}_{q^m}$-linear rank-metric codes $\C_1, \C_2 \le \cL_{m,q}[x]$ are \textbf{linearly equivalent} if there exists an invertible \smash{$q$-polynomial} $g \in \cL_{m,q}[x]$ such that \[ \C_1=\C_2 \circ g. \] The \textbf{linear automorphism group} of $\mC$ is \begin{align*} \aut\mathrm{lin}(\mC) = \{g \in \mathrm{GL}_m(q) \mid \C=\C \circ g\}. \end{align*} \end{definition} \begin{remark} The linear automorphism group of $\mC$ is usually called \emph{right idealizer} or \emph{middle nucleus}; see~\cite{liebhold2016automorphism,lunardon2017kernels}. \end{remark} \subsection{Examples of MRD codes} One of the most studied families of MRD codes is that of the \textbf{Gabidulin codes}. These were first introduced in~\cite{delsarte1978bilinear,gabidulin1985theory} and later generalized in~\cite{kshevetskiy2005new}. They are defined as follows. \begin{definition}\label{def:gengab} Let $k,m,s$ be three positive integers with $k\leq m$ and $\gcd(s,m)=1$. Then a \textbf{generalized Gabidulin code} is \begin{align*} \mathcal{G}_{k,s,m}=\langle x,x^{q^s},\ldots,x^{q^{s(k-1)}} \rangle_{\mathbb{F}_{q^m}}. \end{align*} \end{definition} In~\cite[Theorem 5.4]{neri2020equivalence} (see also~\cite{schmidt2018number}), it has been proved that the number of linearly inequivalent generalized Gabidulin codes in $\mathcal{L}_{m,q}[x]$ is $\varphi(m)/2$, where $\varphi$ is Euler's totient function. Their automorphism group has been fully determined; see~\cite{liebhold2016automorphism,sheekey2016new}. \begin{theorem}\cite[Theorem 4]{sheekey2016new}\label{th:autGab} Let $q=p^h$, with $p$ a prime and $h$ any positive integer. The automorphism group of $\mathcal{G}_{k,s,m}$ is the set of 3-tuples $\smash{(ax^{q^i}, \rho, bx^{q^{m-i}})}$ with $\smash{a,b \in \F_{q^m}^{\times}}$, $0 \le i \le m-1$, and $\rho \in \aut(\F_q)$. In particular, we have that $|\aut(\mathcal{G}_{k,s,m})| = hm(q^m-1)^2$. \end{theorem} We obtain the following by restricting to the linear automorphism group. \begin{corollary}\label{cor:autGab} The linear automorphism group of $\mathcal{G}_{k,s,m}$ is \[ \{ax \colon a \in \mathbb{F}_{q^m}^*\}. \] \end{corollary} Subsequently, Sheekey~\cite{sheekey2016new} extended the above family of MRD codes (see also~\cite{lunardon2018generalized}). We recall their definition in the $\mathbb{F}_{q^m}$-linear setting.
\begin{definition}\label{def:gentwigab} Let $k,m,s$ be three positive integers with $k\leq m$ and $\gcd(s,m)=1$, and let $\delta \in \mathbb{F}_{q^m}$ with $\mathrm{N}_{q^m/q}(\delta)\ne (-1)^{mk}$. Then a \textbf{generalized twisted Gabidulin code} is \begin{align*} \mathcal{T}_{k,s,m}(\delta)=\langle x^{q^s},\ldots,x^{q^{s(k-1)}},x+\delta x^{q^{sk}} \rangle_{\mathbb{F}_{q^m}}. \end{align*} \end{definition} Also, its automorphism group has been completely determined. \begin{theorem}(\cite[Theorem 7]{sheekey2016new} and~\cite[Theorem 4.4]{lunardon2018generalized})\label{th:autgentwi} Let $q=p^h$, with $p$ a prime and $h$ any positive integer. The automorphism group of $\mathcal{T}_{k,s,m}(\delta)$ is the set of 3-tuples $\smash{(ax^{q^i}, \rho, bx^{q^{m-i}})}$ with $\smash{a,b \in \F_{q^m}^{\times}}$, $0 \le i \le m-1$ such that \[ (b^{q^k-1})^{\rho q^i} \delta^{\rho q^i}=\delta, \] for some $\rho \in \aut(\F_q)$. \end{theorem} As before, we obtain the following by restricting to the linear automorphism group. \begin{corollary}\label{cor:autGabtwis} The linear automorphism group of $\mathcal{T}_{k,s,m}(\delta)$ is \[ \{ax \colon a \in (\mathbb{F}_{q^k}\cap \mathbb{F}_{q^m})\setminus\{0\}\}. \] \end{corollary} \section{Classification of $\mathbb{F}_{q^m}$-linear rank-metric codes of dimension $2$} \label{sec:3} Sheekey~\cite{sheekey2016new} established the first connection between rank-metric codes and linear sets, pointing out a relation between $\F_{q^m}$-linear rank-metric codes and linear sets in the projective line $\PG(1,q^m)$. By~\cite[Section 5]{sheekey2016new}, we have the following result. \begin{theorem}\label{th:connJohn} Let $\C=\langle f_1(x),f_2(x) \rangle_{\fqm}$ be a rank-metric code. Then $\C$ is an MRD code if and only if $U_{f_1,f_2}=\{ (f_1(x),f_2(x)) \colon x \in \fqm\}$ is a scattered $\fq$-linear subspace of dimension $m$ in $\fqm^2$ (or equivalently $L_{U_{f_1,f_2}}$ is scattered). Moreover, two MRD codes $\C=\langle f_1(x),f_2(x) \rangle_{\fqm}$ and $\C'=\langle g_1(x),g_2(x) \rangle_{\fqm}$ are equivalent if and only if~$U_{f_1,f_2}$ and~$U_{g_1,g_2}$ are $\mathrm{\Gamma L}_2(q^m)$-equivalent. \end{theorem} The previous result can be rephrased as follows in terms of linear equivalence of codes; see for instance~\cite{alfarano2022linear,randrianarisoa2020geometric}. \begin{corollary}\label{cor:connJohn} Let $\C=\langle f_1(x),f_2(x) \rangle_{\fqm}$ and $\C'=\langle g_1(x),g_2(x) \rangle_{\fqm}$ be MRD codes of dimension 2. We have that $\C$ and $\C'$ are linearly equivalent if and only if $U_{f_1,f_2}$ and $U_{g_1,g_2}$ are $\mathrm{GL}_2(q^m)$-equivalent. \end{corollary} In~\cite{csajbok2018maximum}, Csajb\'ok and Zanella refined the classification of scattered $\fq$-linear subspaces of dimension $4$ in $\mathbb{F}_{q^4}^2$ provided by Bonoli and Polverino in~\cite{bonoli2005fq}. Using Theorem~\ref{th:connJohn}, the above results can be read in terms of rank-metric codes as follows. \begin{theorem}\label{th:q4allMRD} Let $\C$ be an $\F_{q^4}$-linear MRD code in $\mathcal{L}_{4,q}[x]$ of dimension $2$. Then $\C$ is linearly equivalent to $\mathcal{T}_{2,1,4}(\delta)$ for some $\delta \in \mathbb{F}_{q^4}$ with $\mathrm{N}_{q^4/q}(\delta)\ne 1$.
\end{theorem} \begin{proof} In~\cite[Theorem 3.4]{csajbok2018maximum} it has been proved that if $L_U$ is a scattered $\fq$-linear set of rank $4$ in $\mathrm{PG}(1,q^4)$, then it is $\mathrm{P\Gamma L}_2(q^4)$-equivalent to $L_{U'}$, where \[ U'=\{ (x,x^q+\delta x^{q^3}) \colon x \in \mathbb{F}_{q^4} \},\] for some $\delta \in \F_{q^4}$ with $\mathrm{N}_{q^4/q}(\delta)\ne 1$. Since, by~\cite[Section 4]{csajbok2018classes}, $\fq$-linear sets of rank~$4$ in $\mathrm{PG}(1,q^4)$ are simple, and also using~\cite[Corollary 4.4]{csajbok2018maximum}, we find that $U$ and $U'$ are $\mathrm{GL}_2(q^4)$-equivalent. Therefore, applying Corollary~\ref{cor:connJohn} we find that $\C$ is linearly equivalent to the code $\langle x, x^q+\delta x^{q^3} \rangle_{\F_{q^4}}$. \end{proof} As mentioned in~\cite[Introduction]{bartoli2021conjecture}, scattered subspaces of dimension $m$ in $\mathbb{F}_{2^m}^2$ correspond to translation hyperovals, which have been classified by Payne in~\cite{payne1971complete}. As a consequence we obtain the following result in terms of MRD codes. \begin{theorem}\label{th:MRDq=2} Let $\C$ be an $\F_{2^m}$-linear MRD code in $\mathcal{L}_{m,2}[x]$ of either dimension $2$ or codimension $2$. Then $\C$ is linearly equivalent to $\mathcal{G}_{2,s,m}$. \end{theorem} \begin{proof} Let us first assume that $\C$ has dimension two. Let $U$ be a subspace of $\mathbb{F}_{2^m}^2$ associated with $\C$, as in Theorem~\ref{th:connJohn}. Up to $\mathrm{GL}_2(2^m)$-equivalence, we may assume that \[ U=\{(x,f(x)) \colon x \in \F_{2^m}\},\] for some $f(x)=\sum_{i=1}^{m-1} a_i x^{2^i}\in \mathcal{L}_{m,2}[x]$. Consider the graph of $f$ in $\mathrm{AG}(2,2^m)$ \[ \mathcal{G}(f)=\{(x , f(x)) \colon x \in \F_{2^m}\}, \] which can be identified in the projective plane with $\{\langle (x , f(x),1)\rangle_{\F_{2^m}} \colon x \in \F_{2^m}\}$, and the set of directions determined by $f$ \[ \mathcal{D}(f)=\{\langle(x , f(x) , 0)\rangle_{\F_{2^m}} \colon x \in \F_{2^m}\} \subset \ell_{\infty}. \] Then the pointset \[ \mathcal{S}=\mathcal{G}(f) \cup (\ell_{\infty} \setminus \mathcal{D}(f)) \] is a translation hyperoval of $\mathrm{PG}(2,2^m)$, as we now verify. It is clear that $\mathcal{S}$ has size $2^m+2$. The line at infinity meets $\mathcal{S}$ in exactly two points, since $\mathcal{D}(f)$ is contained in $\ell_{\infty}$ and has size~$2^m-1$, and hence its complement in $\ell_{\infty}$ consists of two points. We can divide the affine lines into two classes: \begin{itemize} \item the lines meeting $\ell_{\infty}\setminus \mathcal{D}(f)$ are $2$-secant to $\mathcal{S}$; \item the lines meeting $\mathcal{D}(f)$ are either external or $2$-secant to $\mathcal{S}$. \end{itemize} Indeed, any affine line $\ell$ intersects the line at infinity in exactly one point, namely $\langle (1,\alpha,0) \rangle_{\F_{2^m}}$ or~$\langle (0,1,0) \rangle_{\F_{2^m}}$. Suppose first that $\ell\cap \ell_{\infty} \notin \mathcal{D}(f)$. Then $|\ell \cap \mathcal{G}(f)|=1$, since $|\ell \cap \mathcal{G}(f)|\leq1$ for any line $\ell$ such that $\ell\cap \ell_{\infty} \notin \mathcal{D}(f)$ and $\mathcal{G}(f)$ has size $2^m$. Therefore, $\ell \cap \mathcal{S}$ has size two. Suppose now that $\ell\cap \ell_{\infty}=\langle (1,\alpha,0) \rangle_{\F_{2^m}} \in \mathcal{D}(f)$. Then either $|\ell \cap \mathcal{G}(f)|=0$ or $|\ell \cap \mathcal{G}(f)|\geq 2$: the affine points of $\mathcal{G}(f)$ on $\ell$ are the solutions of $f(x)-\alpha x=c$ for a suitable $c\in\F_{2^m}$, and, since $f$ is additive and the direction $\langle (1,\alpha,0) \rangle_{\F_{2^m}}$ is determined by $f$, this solution set is either empty or a coset of an $\F_2$-subspace of size at least two. By contradiction, assume that there exist pairwise distinct $u,v,z \in \F_{2^m}$ such that $\langle(u , f(u),1)\rangle_{\F_{2^m}}, \langle(v , f(v),1)\rangle_{\F_{2^m}} ,\langle(z , f(z),1)\rangle_{\F_{2^m}} \in \ell$.
Then $(u-v , f(u-v),0),(z-v,f(z-v),0) \in \langle (1,\alpha,0) \rangle_{\F_{2^m}} \cap \mathcal{D}(f)$ and, since $\mathcal{D}(f)$ is scattered, $u-v$ and $z-v$ must be $\F_{2}$-proportional, and hence $u=z$, a contradiction. Therefore, in this case we have $|\ell \cap \mathcal{S}|\in \{0,2\}$. By Theorem~\ref{th:classhyper}, $f(x)=a_j x^{2^j}$, for some positive integer $j\in \{1,\ldots,m-1\}$ such that $\gcd(j,m)=1$. Therefore, we get $U=\{(x,a_j x^{2^j}) \colon x \in \F_{2^m}\}$ and $\C$ is linearly equivalent to $\mathcal{G}_{2,j,m}$ (again by Corollary~\ref{cor:connJohn}). When the dimension of $\C$ is $m-2$, the result follows by a duality argument. \end{proof} \section{Counting the number of $\F_{q^m}$-linear MRD codes of dimension $2$} \label{sec:4} In this section, we apply the results derived in the previous sections to determine the number of $\F_{q^m}$-linear MRD codes in $\F_{q^m}^m$ of dimension $2$ in the cases $q=2$ and any~$m$, and $q>2$ and $m=4$. We use the orbit-stabilizer theorem to count the number of certain families of $\fqm$-linear MRD codes. In detail, if we determine the equivalence classes of the codes (say $\mathcal{C}_1,\ldots,\mathcal{C}_t$ is a set of representatives), then the number of codes we are looking for is \begin{equation}\label{eq:orbstab} \sum_{i=1}^t \frac{|\GL|}{|\mathrm{Autlin}(\mathcal{C}_i)|}. \end{equation} As a consequence, we derive some density results for these MRD codes. We define the density function of $\F_{q^m}$-linear rank-metric codes with parameters $[n,k,d]_{q^m/q}$ as the ratio between the number of codes with these parameters and the number of $\F_{q^m}$-linear subspaces of $\F_{q^m}^n$ having the same dimension, that is $$\delta^\rk_d( m,n, k; q):=\frac{|\{\mC \le \mathbb{F}_{q^m}^n \mid \dim(\mC)=k, \, \drk(\mC) \ge d\}|}{\qmqbin{n}{k}}.$$ It is currently an open problem in coding theory to determine the exact proportion of MRD codes~\cite{antrobus2019maximal,byrne2020partition,gruica2022common,gruica2023rank,gruica2024densities,neri2018genericity}. \subsection{Case $q=2$} We start by observing that two generalized Gabidulin codes $\mathcal{G}_{k,s,m}$ and $\mathcal{G}_{k,t,m}$ are equivalent if and only if $s\equiv \pm t \pmod{m}$, by~\cite{lunardon2018generalized}, and this is still true when considering linear equivalence. Therefore, we have the following. \begin{proposition}\label{prop:numberGab} The number of linearly inequivalent generalized Gabidulin codes of a fixed dimension in $\mathcal{L}_{m,q}[x]$ is $\frac{\varphi(m)}2$. \end{proposition} Therefore, combining~\eqref{eq:orbstab}, Proposition~\ref{prop:numberGab}, Theorem~\ref{th:MRDq=2}, and Corollary~\ref{cor:autGab}, we obtain the number of $\mathbb{F}_{2^m}$-linear MRD codes of dimension two in $\mathcal{L}_{m,2}[x]$. \begin{corollary} The number of $\mathbb{F}_{2^m}$-linear MRD codes of dimension two in $\mathcal{L}_{m,2}[x]$ is \[ \frac{\varphi(m)}2 \cdot \frac{|\mathrm{GL}_m(2)| }{(2^m-1)}. \] \end{corollary} The following result provides the density of MRD codes in $\F_{2^m}^m$ of dimension $2$ and its asymptotic behaviour. \begin{corollary} We have \[\delta^\rk_{m-1}( m,m, 2; 2)=\frac{2^{m^2} \cdot \varphi(m) \cdot \prod_{j=1}^m \left( 1-\frac{1}{2^j}\right)(2^{2m}-1)}{(2^{m^2}-1)(2^{m^2-m}-1)},\] and \[\delta^\rk_{m-1}( m,m, 2; 2)\in \mathcal{O}\left(m2^{-m^2+3m}\right) \textup{ as } m\longrightarrow+\infty.\] \end{corollary} \subsection{Case $m=4$ and $q>2$} In the remainder of this section, we assume $q>2$.
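Before proceeding, we record a minimal Python sketch (all function names are of our own choosing) that evaluates~\eqref{eq:orbstab} and the density $\delta^{\rk}$ directly from their definitions; as an illustration, it reproduces, for small $m$, the count of $\mathbb{F}_{2^m}$-linear MRD codes of dimension two obtained above from Proposition~\ref{prop:numberGab} and Corollary~\ref{cor:autGab}.
\begin{verbatim}
# Orbit-stabilizer count (eq:orbstab) and density via Gaussian binomials.
from math import gcd, prod

def gl_order(m, q):                      # |GL_m(q)|
    return prod(q**m - q**i for i in range(m))

def gauss_binomial(n, k, q):             # [n choose k]_q
    num = prod(q**(n - i) - 1 for i in range(k))
    den = prod(q**(k - i) - 1 for i in range(k))
    return num // den

def count_from_classes(stabilizer_sizes, m, q):
    # sum over representatives of |GL_m(q)| / |Autlin(C_i)|
    return sum(gl_order(m, q) // s for s in stabilizer_sizes)

# q = 2: phi(m)/2 classes of generalized Gabidulin codes, each with a
# linear automorphism group of size 2^m - 1.
for m in (3, 4, 5, 6):
    phi = sum(1 for j in range(1, m + 1) if gcd(j, m) == 1)
    mrd_codes = count_from_classes([2**m - 1] * (phi // 2), m, 2)
    density = mrd_codes / gauss_binomial(m, 2, 2**m)
    print(m, mrd_codes, density)
\end{verbatim}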
We now determine the number of equivalence classes of $\mathbb{F}_{q^4}$-linear MRD codes of dimension two in $\mathcal{L}_{4,q}[x]$, relying on the results of~\cite{csajbok2018maximum}. \begin{proposition}\label{prop:numbMRDq4} The number of linearly inequivalent $\mathbb{F}_{q^4}$-linear MRD codes of dimension two in $\mathcal{L}_{4,q}[x]$ is $q(q-1)/2$. \end{proposition} \begin{proof} Let us start by noting that the number of linearly inequivalent $\mathbb{F}_{q^4}$-linear MRD codes of dimension two in $\mathcal{L}_{4,q}[x]$ is exactly the number of $\mathrm{PGL}_2(q^4)$-inequivalent scattered $\fq$-linear sets of rank $4$ in $\PG(1,q^4)$. This is indeed true because of Corollary~\ref{cor:connJohn} and because the scattered $\fq$-linear sets of rank $4$ in $\PG(1,q^4)$ have $\mathrm{GL}$-class one. Finally,~\cite[Theorem 4.5]{csajbok2018maximum} implies that there are exactly $q(q-1)/2$ linearly inequivalent $\mathbb{F}_{q^4}$-linear MRD codes of dimension two in $\mathcal{L}_{4,q}[x]$. \end{proof} We can now give the number of $\mathbb{F}_{q^4}$-linear MRD codes of dimension two in $\mathcal{L}_{4,q}[x]$. \begin{corollary} The number $M(q)$ of $\mathbb{F}_{q^4}$-linear MRD codes of dimension 2 in $\mathcal{L}_{4,q}[x]$ is \[ \frac{1}2 q^7 (q^3-1)(q^2-1)(q-1)(q^3-q^2-q-1). \] \end{corollary} \begin{proof} By Corollary~\ref{cor:autGabtwis}, the linear automorphism group of $\mathcal{T}_{2,1,4}(\delta)$ with $\delta \ne 0$ is \[ \{ ax \colon a \in \mathbb{F}_{q^2}^* \} \] and hence has size $q^2-1$. If $\delta=0$, Corollary~\ref{cor:autGab} implies that the linear automorphism group is \[ \{ ax \colon a \in \mathbb{F}_{q^4}^* \} \] and hence it has size $q^4-1$. By Proposition~\ref{prop:numbMRDq4}, the number of inequivalent $\mathbb{F}_{q^4}$-linear MRD codes of dimension 2 in $\mathcal{L}_{4,q}[x]$ is $q(q-1)/2$, of which one is the Gabidulin code, while the remaining ones are twisted Gabidulin codes that are not equivalent to a Gabidulin code. Therefore, by \eqref{eq:orbstab} we have \[M(q) = \frac{|\mathrm{GL}_4(q)|}{q^4-1}+ \left(\frac{q(q-1)}2-1\right) \frac{|\mathrm{GL}_4(q)|}{q^2-1}, \] where the first summand corresponds to the orbit of $\mathcal{G}_{2,1,4}$. The statement follows by straightforward computations. \end{proof} \begin{remark}\label{rem:w2} Combining this result with~\cite[Proposition~53]{cotardo2023rank}, we obtain that $w_2(2,4,4;q)$ is a polynomial in $q$. A closed formula for $w_j(2,4,4;q)$, $j\in\{0,\ldots,4\}$, is given in~\cite[Theorem~59]{cotardo2023rank}. \end{remark} In the following result, we determine the density of 2-dimensional MRD codes in~$\F_{q^4}^4$ and its asymptotic behavior. At the time of writing this paper, this is one of the very few cases where the exact density can be computed. \begin{corollary} We have \[\delta^\rk_3( 4,4, 2; q)=\frac{1}2 \frac{q^7 (q^3-1)(q^2-1)(q-1)(q^3-q^2-q-1)(q^8-1)(q^4-1)}{(q^{16}-1)(q^{12}-1)}.\] In particular, \[\lim_{q\rightarrow \infty} \delta^\rk_3( 4,4, 2; q)=\frac{1}2.\] \end{corollary} The previous result shows that when randomly selecting a 2-dimensional code in $\F_{q^4}^4$, one has approximately a 50\% chance of picking an MRD code (for sufficiently large~$q$). \section{One-weight codes in $\F_{q^m}^{mk}$} \label{sec:5} In this section we determine the number of one-weight codes within $\F_{q^m}^{mk}$. We first establish a connection between non-square rank-metric codes and multivariate linearized polynomials. We later use such a relation to compute the desired quantity.
\subsection{Non-square rank-metric codes}\label{sec:nonsquare} As proved in~\cite{bartoli2022exceptional,polverino2023divisible}, any $\F_{q^m}$-linear rank-metric code in $\F_{q^m}^{m\ell}$ can be described as an $\F_{q^m}$-linear subspace of multivariate linearized polynomials. We let $\mathcal{L}_{m,q}[x_1,\ldots,x_{\ell}]$ (or $\mathcal{L}_{m,q}[\underline{x}]$) denote the $\F_{q^m}$-linear vector space of linearized polynomials over $\F_{q^m}$ in the indeterminates $\underline{x}=(x_1,\ldots,x_{\ell})$, that is, the $\F_{q^m}$-span in $\F_{q^m}[\underline{x}]$ of \[ \{ x_i^{q^j} \colon i \in \{1,\ldots,\ell\},\, j \in \mathbb{N}_0 \}, \] considered modulo the relations $x_1^{q^m}-x_1,\ldots,x_{\ell}^{q^m}-x_{\ell}$, in analogy with the univariate case. It is known that, as an $\fq$-linear vector space, $\mathcal{L}_{m,q}[\underline{x}]$ is isomorphic to the space of $\fq$-linear maps from $\F_{q^m}^\ell$ to $\F_{q^m}$, and hence to $\F_q^{m \times m\ell}$. Therefore, we define the rank of a $q$-polynomial $f \in {\mathcal{L}}_{m,q}[\underline{x}]$ as $\mathrm{rk}(f)=\dim_{\fq}(\mathrm{Im}(f))$. In particular, $\F_{q^m}$-linear rank-metric codes in $\F_{q^m}^{m\ell}$ correspond to $\F_{q^m}$-linear subspaces of ${\mathcal{L}}_{m,q}[\underline{x}]$. \begin{proposition}\label{prop:ev}\cite[Proposition 4.5]{polverino2023divisible} Let $\mathcal{B}=(a_1,\ldots,a_{m\ell})$ be an $\F_q$-basis of $\F_{q^m}^\ell$ (seen as an $\fq$-linear vector space). The map \begin{equation*} \begin{array}{cccc} ev_{\mathcal{B}} \colon & {\mathcal{L}}_{m,q}[x_1,\ldots,x_{\ell}] & \rightarrow & \F_{q^m}^{m\ell} \\ & f &\mapsto &(f(a_1),\ldots,f(a_{\ell m})) \end{array} \end{equation*} is an $\F_{q^m}$-linear isomorphism which preserves the rank, that is, $$\mathrm{rk}(f)=\dim_{\fq}(\langle f(a_1),\ldots,f(a_{\ell m}) \rangle_{\fq})\quad \textup{ for any } f\in\mathcal{L}_{m,q}[x_1,\ldots,x_{\ell}].$$ Moreover, if $\mathcal{C}=\langle f_1(\underline{x}),\ldots,f_k(\underline{x}) \rangle_{\F_{q^m}}$, then a generator matrix for $ev_{\mathcal{B}}(\C)$ is \[ G=\begin{pmatrix} f_1(a_1) & f_1(a_2) & \ldots & f_1(a_{\ell m})\\ f_2(a_1) & f_2(a_2) & \ldots & f_2(a_{\ell m})\\ \vdots & & & \\ f_k(a_1) & f_k(a_2) & \ldots & f_k(a_{\ell m}) \end{pmatrix}.\] \end{proposition} Therefore, the map $ev_{\mathcal{B}}$ allows us to study rank-metric codes as $\F_{q^m}$-linear subspaces of ${\mathcal{L}}_{m,q}[\underline{x}]$. The notion of equivalence of rank-metric codes can also be read in the context of linearized polynomials. Indeed, the linear equivalence of rank-metric codes in $\F_{q}^{m\times \ell m}$ is given by the natural action of the following group on $\F_{q}^{m\times \ell m}$ \[ \{ (A,B) \colon A \in \mathrm{GL}_m(q),B\in \mathrm{GL}_{\ell m}(q) \}. \] In terms of linearized polynomials, the elements of $\mathrm{GL}_{\ell m}(q)$ can be represented as invertible $\fq$-linear maps from $\F_{q^m}^\ell$ to itself, and hence as elements of $({\mathcal{L}}_{m,q} [\underline{x}])^\ell$. It follows that equivalent rank-metric codes can be defined via the natural action of the following group on ${\mathcal{L}}_{m,q}[\underline{x}]$ \[ \{ (g,f) \colon g \in {\mathcal{L}}_{m,q}[x], f=(f_1,\ldots,f_{\ell})\in({\mathcal{L}}_{m,q} [\underline{x}])^\ell \textup{ and } f,g \text{ invertible} \}.
\] Therefore, $\C_1, \C_2 \le \cL_{m,q}[\underline{x}]$ are \textbf{equivalent} (or \textbf{linearly equivalent}) if there exist invertible \smash{$q$-polynomials} $f \in \cL_{m,q}[x]$, $g=(g_1,\ldots,g_{\ell}) \in (\cL_{m,q}[\underline{x}])^{\ell}$ and a field automorphism \smash{$\rho \in \mathrm{Aut}(\fq)$} such that \[ \C_1=f \circ \C_2^\rho \circ g, \] ($\C_1=f \circ \C_2 \circ g$, respectively). The \textbf{automorphism group} of $\mC$ is \[ \mathrm{Aut}(\C)=\{ (g,\rho,f) \in \mathrm{GL}_m(q)\times \mathrm{Aut}(\fq)\times \mathrm{GL}_{\ell m}(q) \colon g \circ \C^\rho \circ f=\C \}, \] and the \textbf{linear automorphism group} of $\mC$ is \[ \mathrm{Autlin}(\C)=\{ f \in \mathrm{GL}_{\ell m}(q) \colon \C \circ f=\C \}. \] \subsection{Counting one-weight codes} The goal of this section is to establish the number of $[mk,k,m]_{\fqm}$-codes in $\F_{q^m}^{mk}$. We recall the following result. \begin{proposition}(\cite[Proposition 3.6]{alfarano2022linear}) Let $k\geq 2$ and let $\C$ be a $[km,k]_{\fqm}$-code. We have that $\C$ is a one-weight code (with weight $m$) if and only if $\C$ is linearly equivalent to a code with generator matrix \[ (I_k\mid\alpha I_k\mid\cdots \mid \alpha^{m-1} I_k), \] for some $\alpha \in \F_{q^m}$ such that $\F_{q^m}=\fq(\alpha)$. \end{proposition} We start by determining the equivalence classes of one-weight codes. \begin{proposition}\label{prop:oneclass} We have only one equivalence class of $[mk,k,m]_{q^m/q}$-codes in $\F_{q^m}^{mk}$. \end{proposition} \begin{proof} Let $\alpha,\beta \in \F_{q^m}$ be such that $\F_{q^m}=\fq(\alpha)=\fq(\beta)$, and consider the codes $\C$ and $\C'$ having generator matrices \[ G=(I_k\mid\alpha I_k\mid\cdots \mid \alpha^{m-1} I_k)\,\,\,\text{and}\,\,\,G'=(I_k\mid\beta I_k\mid\cdots \mid \beta^{m-1} I_k), \] respectively. For any $i \in \{0,1,\ldots,m-1\}$, write \[ \beta^i =a_{0,i}+a_{1,i} \alpha+\ldots+a_{m-1,i}\alpha^{m-1},\] with $a_{0,i},\ldots,a_{m-1,i} \in \fq$. Consider the block-matrix \[ A=\begin{pmatrix} a_{0,0} I_k & a_{0,1} I_k & \cdots & a_{0,m-1} I_k \\ a_{1,0} I_k & a_{1,1} I_k & \cdots & a_{1,m-1} I_k \\ \vdots & & & \\ a_{m-1,0} I_k & a_{m-1,1} I_k & \cdots & a_{m-1,m-1} I_k \\ \end{pmatrix} \in \mathbb{F}_{q}^{mk\times mk}, \] and note that it is invertible, since it is the Kronecker product of the matrix $(a_{i,j}) \in \mathbb{F}_q^{m\times m}$ with the identity matrix $I_k$, and $(a_{i,j})$ is the change-of-basis matrix between the ordered bases~$(1,\alpha,\ldots,\alpha^{m-1})$ and $(1,\beta,\ldots,\beta^{m-1})$ of $\F_{q^m}$ over $\fq$, hence invertible. Therefore, $A \in \mathrm{GL}_{km}(q)$. Also, $GA$ is equal to \begin{multline*} ( (a_{0,0}+a_{1,0}\alpha+\ldots+a_{m-1,0}\alpha^{m-1} )I_k\mid (a_{0,1}+a_{1,1}\alpha+\ldots+a_{m-1,1}\alpha^{m-1} )I_k \mid \cdots \\\mid (a_{0,m-1}+a_{1,m-1}\alpha+\ldots+a_{m-1,m-1}\alpha^{m-1} )I_k )=G', \end{multline*} that is, $\C$ and $\C'$ are linearly equivalent. \end{proof} \begin{remark} The above result can also be proved by using the geometric interpretation of $\F_{q^m}$-linear rank-metric codes established in~\cite{randrianarisoa2020geometric}, see also~\cite{alfarano2022linear}. \end{remark} In order to determine the number of $[mk,k,m]_{q^m/q}$-codes in $\F_{q^m}^{mk}$, we first determine the relevant linear automorphism group. Based on Section~\ref{sec:nonsquare} and on Proposition~\ref{prop:oneclass}, we may restrict our attention to the code \[\C=\langle x_1,\ldots,x_k \rangle_{\F_{q^m}} \subseteq \mathcal{L}_{m,q}[x_1,\ldots,x_k],\] since it is an $[mk,k,m]_{\fqm}$-code.
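As a concrete sanity check of the one-weight property, the following self-contained Python sketch (with a hand-rolled implementation of $\F_4$; all names are of our own choosing) verifies that, for $q=2$, $m=2$, $k=2$, every nonzero codeword of the code with generator matrix $(I_2\mid\alpha I_2)$ has rank weight $2$; in the notation of Proposition~\ref{prop:ev}, this corresponds to evaluating $\C=\langle x_1,x_2\rangle_{\F_4}$ at the $\F_2$-basis $((1,0),(0,1),(\alpha,0),(0,\alpha))$ of $\F_4^2$.
\begin{verbatim}
# F_4 = F_2(alpha) with alpha^2 = alpha + 1; elements are 2-bit integers.
def f4_mul(a, b):
    p = 0
    for i in range(2):            # carry-less (polynomial) multiplication
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:                 # reduce modulo x^2 + x + 1
        p ^= 0b111
    return p

def rank_weight(vec):             # F_2-dimension of the span of the entries
    span = {0}
    for e in vec:
        if e not in span:
            span |= {s ^ e for s in span}
    return len(span).bit_length() - 1

ALPHA = 0b10
G = [(1, 0, ALPHA, 0),            # generator matrix (I_2 | alpha*I_2)
     (0, 1, 0, ALPHA)]

for u1 in range(4):               # all codewords u1*row1 + u2*row2
    for u2 in range(4):
        cw = tuple(f4_mul(u1, g1) ^ f4_mul(u2, g2) for g1, g2 in zip(*G))
        if (u1, u2) != (0, 0):
            assert rank_weight(cw) == 2
print("all 15 nonzero codewords have rank weight 2")
\end{verbatim}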
\begin{theorem}\label{thm:autgrouponeweight} Let $k\geq 2$. The linear automorphism group of $\C=\langle x_1,\ldots,x_k \rangle_{\F_{q^m}}$ is \[\mathrm{Autlin}(\C)=\{ (a_{1,1}x_1+\ldots+a_{1,k}x_k,\ldots,a_{k,1}x_1+\ldots+a_{k,k}x_k) \colon a_{i,j}\in\F_{q^m},\, \mathrm{rk}((a_{i,j}))=k \}.\] In particular, we get \[ |\mathrm{Autlin}(\C)|=|\mathrm{GL}_k(q^m)|= (q^{mk}-1)(q^{mk}-q^m)\cdots (q^{mk}-q^{m(k-1)}). \] \end{theorem} \begin{proof} Recall that \[ \mathrm{Autlin}(\C)=\{ f \in \mathrm{GL}_{k m}(q) \colon \C \circ f=\C \}=\{ (f_1,\ldots,f_k)\in \mathrm{GL}_{k m}(q) \colon x_i \circ f \in \C,\,\,\forall i\in[k] \}. \] Therefore, since $x_i \circ f=f_i$, there exist $a_{i,j} \in \mathbb{F}_{q^m}$ such that \[f_i(\underline{x})=a_{i,1}x_1+\ldots+a_{i,k}x_k\] for any $i \in [k]$. Since the map defined by $(f_1,\ldots,f_k)$ is invertible if and only if the matrix~$\smash{(a_{i,j})}$ is invertible, the result follows. \end{proof} The following result follows by combining the orbit-stabilizer theorem, Proposition~\ref{prop:oneclass} and Theorem~\ref{thm:autgrouponeweight}. \begin{corollary}\label{cor:oneweightcode} The number of $[mk,k,m]_{\fqm}$-codes is \[ \frac{|\mathrm{GL}_{mk}(q)|}{(q^{mk}-1)(q^{mk}-q^m)\cdots (q^{mk}-q^{m(k-1)})}.\] \end{corollary} As a consequence, we can determine the density of MRD codes in $\F_{q^m}^{mk}$ of dimension~$k$, as well as their asymptotic behaviour. \begin{corollary} We have \[\delta^\rk_m( m,mk, k; q)= \frac{|\mathrm{GL}_{mk}(q)|}{(q^{mk}-1)(q^{mk}-q^m)\cdots (q^{mk}-q^{m(k-1)})\qmqbin{mk}{k}},\] and \[\lim_{q\rightarrow \infty} \delta^\rk_m( m,mk, k; q)=1.\] \end{corollary} \section{Explicit results on Whitney numbers} \label{sec:6} In this section, we establish some numerical results on the Whitney numbers of rank-metric lattices. In some cases we are able to provide closed formulas for them, which is in general a very hard task. We obtain these results as corollaries of the formulas established throughout the paper. The following result is a consequence of~\cite[Corollary~3.1]{ravagnani2022whitney} and Corollary~\ref{cor:oneweightcode}. \begin{corollary}\label{prop:i2m3} Let $i=2$ and $m=3$. For $n\in\{4,5\}$ and $j\in\{1,\ldots,n\}$ we have \begin{align*} w_j(2,n,3;q)=\qqbin{n}{3}\qmqbin{n-1}{j-1}(q^3-q)(q^3-q^2)(-1)^{j-1}q^{3\binom{j-1}{2}}. \end{align*} Moreover, for $n=6$ and $j\in\{1,\ldots,6\}$ we get \begin{align*} w_j(2,6,3;q)=&\qqbin{6}{3}\qmqbin{5}{j-1}(q^3-q)(q^3-q^2)(-1)^{j-1}q^{3\binom{j-1}{2}}\\+&\qmqbin{4}{j-2}(q^6-q)(q^6-q^2)(q^6-q^4)(q^6-q^5)(-1)^{j-2}q^{3\binom{j-2}{2}}. \end{align*} \end{corollary} \begin{proof} For $n\in\{4,5\}$, we have \begin{align*} w_j(2,n,3;q)=|\{\left<x\right>\le\F_{q^3}^n \colon \rk(x)=3\}|\qmqbin{n-1}{j-1}(-1)^{j-1}q^{3\binom{j-1}{2}} \end{align*} which implies the first part of the statement. For $n=6$,~\cite[Corollary~3.1]{ravagnani2022whitney} implies \begin{multline*} w_j(2,6,3;q)=|\{\left<x\right>\le\F_{q^3}^6\colon \rk(x)=3\}|\qmqbin{5}{j-1}(-1)^{j-1}q^{3\binom{j-1}{2}}\\ +|\{C\le\F_{q^3}^6\colon C \textup{ is a } [6,2,3]_{\mathbb{F}_{q^3}}\textup{-code}\}|\qmqbin{4}{j-2}(-1)^{j-2}q^{3\binom{j-2}{2}}. \end{multline*} The statement now follows from Corollary~\ref{cor:oneweightcode}. \end{proof} The next result is the analogue of~\cite[Theorem~5.1]{ravagnani2022whitney} and provides a recursive formula for computing the Whitney numbers of the first kind. \begin{theorem}\label{prop:recursion} If $1\leq ij\leq\min\{n,m\}$ then \begin{equation*} w_j(i,n,m;q)=\sum_{s=1}^{ij}\qqbin{n}{s}\sum_{t=1}^sw_j(i,t,m;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}.
\end{equation*} \end{theorem} \begin{proof} Fix $j\in\{1,\ldots,n\}$ and notice that, for all $X\in\mL_i(n,m;q)$ with $\dim(X)=j$, we have $\dim(\supp(X))\leq ij$. We get \begin{align*} w_j(i,n,m;q)&=\sum_{\substack{X\in\mL_i(n,m;q)\\\dim(X)=j}}\mu_{n,m,q}^{(i)}(X)\\&=\sum_{s=0}^{ij}\sum_{\substack{S\leq\fq^n\\\dim(S)=s}}\sum_{\substack{X\in\mL_i(n,m;q)\\\dim(X)=j\\\supp(X)=S}}\mu_{n,m,q}^{(i)}(X)\\&=\sum_{s=0}^{ij}\sum_{\substack{S\leq\fq^n\\\dim(S)=s}}f(S) \end{align*} where, for all $S\leq\fq^n$, we define \begin{equation*} f(S)=\sum_{\substack{X\in\mL_i(n,m;q)\\\dim(X)=j\\\supp(X)=S}}\mu_{n,m,q}^{(i)}(X). \end{equation*} For a subspace $S\leq\fq^n$ and for all $T\leq S$, define \begin{equation*} g(T)=\sum_{T'\leq T}f(T'). \end{equation*} Since $ij\geq 1$ we have $j\geq 1$ and $g(\left<0\right>)=0$ (sum over an empty index set). Moreover, observe that, for any $T\leq\fq^n$, the elements $X\in\mL_i(n,m;q)$ with $\supp(X)\leq T$ form a lattice isomorphic to $\mL_i(\dim(T),m;q)$, and this isomorphism preserves the M\"obius function. Therefore, for any $t$-dimensional $T\leq S$, we get \begin{equation*} g(T)=\sum_{\substack{X\in\mL_i(n,m;q)\\\dim(X)=j\\\supp(X)\leq T}}\mu_{n,m,q}^{(i)}(X)=w_j(i,t,m;q). \end{equation*} If $s=\dim(S)$, then using M\"obius inversion we get \begin{equation*} f(S)=\sum_{T\leq S}g(T)(-1)^{s-\dim(T)}q^{\binom{s-\dim(T)}{2}}=\sum_{t=1}^sw_j(i,t,m;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}. \end{equation*} Therefore, we obtain \begin{align*} w_j(i,n,m;q)&=\sum_{s=1}^{ij}\sum_{\substack{S\leq\fq^n\\\dim(S)=s}}f(S)\\ &=\sum_{s=1}^{ij}\sum_{\substack{S\leq\fq^n\\\dim(S)=s}}\sum_{t=1}^sw_j(i,t,m;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}\\ &=\sum_{s=1}^{ij}\qqbin{n}{s}\sum_{t=1}^sw_j(i,t,m;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}, \end{align*} which concludes the proof. \end{proof} One of the main differences with the Hamming-metric case is the computation of the quantity $f(\F_q^n)$, which could be explicitly determined in~\cite[Theorem~5.1]{ravagnani2022whitney}. As an immediate consequence of Theorem~\ref{prop:recursion}, we can compute the Whitney numbers of the first kind of the lattices $\mL_2(n,m;q)$ for $m\in\{3,4\}$. \begin{corollary} For any $n\geq 2$ and $j\in\{1,2,3\}$, we have \begin{equation*} w_j(2,n,3;q)=\sum_{s=1}^{2j}\qqbin{n}{s}\sum_{t=1}^sw_j(2,t,3;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}, \end{equation*} where the values of the $w_j(2,t,3;q)$'s are given in Corollary~\ref{prop:i2m3}. \end{corollary} \begin{corollary} For any $n\geq 2$ and $j\in\{1,2,3,4\}$, we have \begin{equation*} w_j(2,n,4;q)=\sum_{s=1}^{2j}\qqbin{n}{s}\sum_{t=1}^sw_j(2,t,4;q)\qqbin{s}{t}q^{\binom{s-t}{2}}(-1)^{s-t}, \end{equation*} where the values of the $w_j(2,4,4;q)$'s are given in~\cite[Theorem~59]{cotardo2023rank}. \end{corollary} \bigskip \bibliographystyle{abbrv} \bibliography{ourbib} \end{document}
2412.13898v1
http://arxiv.org/abs/2412.13898v1
On consistent estimation of dimension values
\documentclass[11pt,a4paper]{article} \usepackage[]{geometry} \usepackage[mathcal]{euscript} \usepackage{bm,url} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{graphicx} \usepackage{natbib} \usepackage{colortbl} \usepackage{booktabs} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{theorem}{Theorem} \newtheorem*{theo}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem*{prop}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{condition}{Condition} \newcommand{\R}[1]{\mathbb{R}^{#1}} \usepackage[english]{babel} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \usepackage{color} \usepackage{array} \usepackage{mathrsfs} \usepackage{setspace} \usepackage{todonotes} \usepackage{hyperref} \usepackage{float} \newcommand{\nat}{\mathbb{N}} \newcommand{\eps}{\varepsilon} \newcommand{\ind}{\mathbb{I}} \newcommand{\pr}{\mathbb{P}} \newcommand{\B}{\mathcal{B}} \newcommand{\A}{\mathcal{A}}\usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \newcommand{\E}{\mathbb{E}} \newcommand{\V}{\mathbb{V}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\e}{\mathcal{e}} \newcommand{\F}{\mathfrak{F}} \newcommand{\G}{\mathbb{G}} \newcommand{\M}{\mathcal{M}} \newcommand{\BC}{\dim_{\text{box}}} \newcommand{\C}{\mathcal{C}} \newcommand{\ds}{\displaystyle} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand{\D}{\mathcal{D}} \newcommand{\deb}{\stackrel{\mathcal{L}}{\longrightarrow}} \newcommand{\Vor}{\mbox{Vor}} \usepackage{ulem} \newcommand{\bea}[1]{\textcolor{blue}{#1}} \newcommand{\antonio}[1]{\textcolor{violet}{#1}} \DeclareMathOperator{\reach}{reach} \DeclareMathOperator{\unp}{Unp} \DeclareMathOperator{\dimm}{dim} \begin{document} \emergencystretch 3em \begin{center} \Large \bf On consistent estimation of dimension values \end{center} \normalsize \ \begin{center} Alejandro Cholaquidis$^a$, Antonio Cuevas$^b$, Beatriz Pateiro-Lopez$^{c}$, \\ \footnotesize $^a$ Centro de Matem\'atica, Universidad de la Rep\'ublica, Uruguay\\ $^b$ Departamento de Matem\'aticas, Universidad Aut\'onoma de Madrid\\ and Instituto de Ciencias Matem\'aticas ICMAT (CSIC-UAM-UCM-UC3M)\\ $^c$ Departamento de Estat\'{\i}stica, An\'alise Matem\'atica e Optimizaci\'on, Universidade \\ de Santiago de Compostela and Galician Center for Mathematical Research\\ and Technology (CITMAga). \end{center} \begin{abstract} The problem of estimating, from a random sample of points, the dimension of a compact subset $S$ of the Euclidean space is considered. The emphasis is put on consistency results in the statistical sense. That is, statements of convergence to the true dimension value when the sample size grows to infinity. Among the many available definitions of dimension, we have focused (on the grounds of its statistical tractability) on three notions: the Minkowski dimension, the correlation dimension and the, perhaps less popular, concept of pointwise dimension. We prove the statistical consistency of some natural estimators of these quantities. Our proofs partially rely on the use of an instrumental estimator formulated in terms of the empirical volume function $V_n(r)$, defined as the Lebesgue measure of the set of points whose distance to the sample is at most $r$. In particular, we explore the case in which the true volume function $V(r)$ of the target set $S$ is a polynomial on some interval starting at zero. 
An empirical study is also included. Our study aims to provide some theoretical support, and some practical insights, for the problem of deciding whether or not the set $S$ has a dimension smaller than that of the ambient space. This is a major statistical motivation of the dimension studies, in connection with the so-called ``Manifold Hypothesis''. \end{abstract} \section{Introduction} Let us assume that we have a random sample $X_1,\ldots,X_n$ drawn from a probability distribution $P$ whose support is an (unknown) compact set $S\subset {\mathbb R}^d$. Our aim here is to study the statistical estimation of the dimension of $S$, where this term is understood in three different senses: Minkowski, correlation and pointwise dimension. They will be defined below, alongside with the classical notion of Hausdorff dimension which remains, in several aspects, as a sort of ``golden standard'' though, unfortunately, rather unsuitable for statistical treatment. This leads to consider other ``proxy notions'' of dimension more appropriate for statistical treatment. They all agree with Hausdorff dimension in regular cases. A major statistical motivation for studying the estimation of the dimension of $S$ is to assess whether or not this dimension is that of the ambient space. So we place ourselves in the so-called ``Manifold Hypothesis'' setting, whose starting point is the empirical observation that many multivariate data sets found in practice are in fact confined into (or close to) a lower dimensional set. \\ \noindent \textit{On the Manifold Hypothesis and the notion of dimension} In the context of high-dimensional statistics, the so-called Manifold Hypothesis (MH) is fulfilled when a cloud of points in the Euclidean space ${\mathbb R}^d$ lies in fact in (or is close to) a set (often a manifold) ${\mathcal M}$ whose dimension is smaller than that of the ambient space. The case where ${\mathcal M}$ is assumed to be linear leads to the classical theory of Principal Components Analysis which is a topic routinely covered in undergraduate courses of multivariate analysis. But we are here concerned with the general, non-linear case. A deep study of MH, within the differential geometry framework, is given in \cite{fef16}; see Section 2 in that paper for an overview on Manifold Learning. Many other different strategies have been proposed to address, sometimes in an indirect fashion, the MH problem. These include (the list is largely non-exhaustive): \begin{itemize} \item[] Fitting lower dimensional structures (curves or surfaces) to the data cloud. \item[] Assessing lower-dimensionality (without explicit dimension estimation or surface fitting): \cite{aar17}, \cite{gen12a}, \cite{gen12b}, \cite{gen12c}. \item[] Estimating the dimension of $S$. This is the approach we will follow here. More precisely, we aim at identifying, with probability one as the sample size tends to infinity, whether the support $S$ of the underlying probability measure of the data has a Minkowski dimension smaller that $d$. The notion of Minkowski dimension has been chosen here, among the many available notions of dimension (see below for details), in account of its statistical tractability. \end{itemize} Our approach here follows this latter strategy. The contents of this work can be summarized as follows. In Section \ref{sec:back}, some background is given on a few required geometric and statistical notions. 
Section \ref{sec:notionsdim} provides a short account of a few notions of dimension currently used, with a particular focus on the aforementioned Minkowski, correlation and pointwise dimensions. In Section \ref{sec:estimators}, we define and motivate several estimators for the Minkowski, the correlation and the pointwise dimension of $S$. Some of them have been previously considered in the literature (see \cite{keg02}, \cite{you82}). Others, expressed in terms of volume functions, are mainly introduced as auxiliary tools in our asymptotic study. All of them depend on a suitable sequence $r_n$ of smoothing parameters. The main contributions of this paper are in Section \ref{sec:consistencia}, where the mentioned consistency results are established. An empirical study is included in Section \ref{sec:emp}. Some final conclusions are briefly highlighted in Section \ref{sec:conclusions}. \section{Some geometric and statistical background. Some notation}\label{sec:back} \noindent \textit{A few basic definitions} Given a set $S\subset \mathbb{R}^d$, we will denote by $\mathring{S}$ and $\partial S$ the interior and boundary of $S$, respectively, with respect to the usual topology of $\mathbb{R}^d$. The diameter of $S$ will be denoted as $\textnormal{diam}(S)$. The closed ball in $\mathbb{R}^d$, of radius $\varepsilon$, centred at $x$ will be denoted by $B(x,\varepsilon)$, and we write $\omega_d=\mu(B(0,1))$, $\mu$ being the Lebesgue measure on ${\mathbb R}^d$. With a slight notational abuse, we denote by $B(S,r)$ the $r$-parallel set of $S$, ${B(S,r)=\{x\in{\mathbb R}^d: \inf_{y\in S}\|x-y\|\leq r\} }$, $\|\cdot\|$ being the Euclidean norm in ${\mathbb R}^d$. Given two compact non-empty sets $A,C\subset{\mathbb R}^d$, the \it Hausdorff distance\/ \rm or \it \mbox{Hausdorff-Pompeiu} distance\/ \rm between $A$ and $C$ is defined by \begin{equation*} \rho_H(A,C)=\inf\{\eps>0:\ A\subset B(C,\eps)\, \mbox{ and } C\subset B(A,\eps)\}.\label{Hausdorff} \end{equation*} The following ``standardness'' notion appears, in slightly different versions, in the set estimation literature (see, e.g., \cite{cue04}): Given a probability measure $P$ with support $S\subset \mathbb{R}^d$, we say that $S$ is standard with respect to $P$ if there exist positive constants $r_0$, $\delta$ and $d'$ such that, for all $x\in S$ and all $r\in(0,r_0)$, \begin{equation} \label{estandar} P(B(x,r))\geq \delta r^{d'}. \end{equation} A useful tool in our approach will be the \textit{volume function} of $S$, which is defined, for $r\geq 0$, by $V(S,r)=V(r)=\mu(B(S,r))$. The volume function plays a relevant role in geometric measure theory, as commented below. \ \noindent \textit{Federer's reach, polynomial volume and polynomial reach} Following \cite{fed59}, let us define the \textit{reach} of $S$ as the supremum ${\mathbf r}$ of all values $R$ such that all points in $B(S,r)$ with $r\leq R$ have a unique metric projection onto $S$. In more formal terms, let $\unp(S)$ be the set of points $x\in \mathbb{R}^d$ with a unique metric projection onto $S$. For $x\in S$, let $\reach(S,x)=\sup\{r>0:\mathring{B}(x,r)\subset \unp(S)\}$. The \textit{reach} of $S$ is then defined by $\reach(S)=\inf\{\reach(S,x): x\in S\}$, and $S$ is said to be of positive reach if ${\mathbf r}=\reach(S)>0$.
In this case, it is shown in \cite{fed59} that the volume function $V(r)$ is a polynomial on the interval $[0,{\mathbf r}]$, \begin{equation} V(r)=\theta_0+\theta_1r+\ldots+\theta_dr^d,\ \mbox{for all } r\in[0,\mathbf r].\label{pvol} \end{equation} Also, the coefficients of this polynomial carry relevant geometric information about $S$: in particular, $\theta_{0}=\mu(S)$, $\theta_1$ is the (outer) Minkowski content of the boundary of $S$, $\theta_d$ is, up to a known factor, the Euler-Poincar\'{e} characteristic of $S$, and the remaining coefficients can be interpreted in terms of curvatures. Still, it is important to note that a polynomial volume expression for $V(r)$ can hold even if $\reach(S)=0$. For instance, it holds for the subset $S=[-1,1]^2\setminus [-1/2,1/2]^2$ of $\mathbb{R}^2$. This polynomial volume expression motivates the following definition given in \cite{ch23}; see also \cite{ch14}. \begin{definition}\label{def:polvol} Given a compact set $S\subset {\mathbb R}^d$ with volume function $V(r)=\mu(B(S,r))$, we define the \textit{polynomial reach} ${\mathbf R}$ of $S$ as \begin{equation*}\label{pol_reach} {\mathbf R}=\sup\{R\geq 0:V(r)\mbox{ is a polynomial of degree at most $d$ on $[0,R]$}\}. \end{equation*} \end{definition} \ \section{Different notions of dimension}\label{sec:notionsdim} Many proposals have been put forward to formally define the intuitively based notion of dimension of a set $S\subset {\mathbb R}^d$. A very short, partial account is included below. We start by mentioning the Hausdorff dimension (which, in several respects, is considered a standard reference). Then we focus on the Minkowski dimension (on account of its statistical tractability) and we also consider the notions of correlation dimension and pointwise dimension. The statistical estimation of these quantities will be addressed in subsequent sections. \ \noindent \textit{Hausdorff dimension} The Hausdorff dimension, first introduced by \cite{haus19}, is perhaps the most widely recognized member of the family of fractal dimensions whose aim is to quantify the complexity of the geometry of a set, its scaling and self-similarity properties; see \cite{fal04} for details. Its formal definition is based on the notion of $\alpha$-dimensional Hausdorff content, given by \[\mathcal{H}_\infty^\alpha(S)=\inf\left\{\sum_{i} \textnormal{diam}(U_i)^\alpha:\ S \subset\bigcup_i U_i \right\},\] where $\{U_i\}$ is a countable collection of sets that cover $S$. Then, the Hausdorff dimension is defined as \[\dim_\textnormal{H}(S)=\inf\{\alpha:\ \mathcal{H}_\infty^\alpha(S)=0\}.\] The Hausdorff dimension is, in several respects, a sort of ``ideal reference'' to define dimension, since it enjoys a number of desirable properties not necessarily shared by other dimension notions. However, in practice, $\dim_\textnormal{H}(S)$ could be very difficult to compute. Thus, in practical applications, there is a case for considering other dimension notions, such as those discussed below, as proxies for $\dim_\textnormal{H}$. \ \noindent \textit{The concept of Minkowski dimension. Some equivalent definitions} Let $S\subset \mathbb{R}^d$ be a bounded set. Define the \textit{covering number}, $N(S, r)$, to be the minimal number of sets of diameter at most $r$ required to cover $S$. Then, the Minkowski dimension, also known as Minkowski-Bouligand dimension, Kolmogorov capacity, box-counting dimension, or entropy dimension, is defined as \begin{equation}\label{bcdim} \dim(S)=\lim_{r\to 0 }\frac{\log(N(S, r))}{\log(1/r)}.
\end{equation} To motivate this definition in intuitive terms, note that \eqref{bcdim} means that the dimension is the exponent $k$ such that $N(S, 1/n)\approx Cn^k$, for some constant $C>0$. We will assume throughout that the limit in \eqref{bcdim} exists. If this were not the case, most results could be rewritten in terms of the upper and lower Minkowski dimension, defined in terms, respectively, of the upper or lower limit in \eqref{bcdim}; see, e.g., \cite{fal04}. Some alternative, equivalent expressions for the Minkowski dimension can be obtained by replacing in \eqref{bcdim} the covering number $N(S,r)$ with either the \textit{packing number} $N_{\textnormal{pac}}(S,r)$ or the \textit{separating number} $N_{\textnormal{sep}}(S,r)$ defined, respectively, as the maximum possible cardinality of a disjoint collection of closed $r$-balls with centres on $S$ and the maximum cardinality of an \textit{$r$-separated} subset of $S$ (where $X\subset S$ is said to be $r$-separated if $x, y \in X$, $x\neq y$, implies $\| x - y \| \geq r$). The equivalence of these alternative definitions follows from the relations \begin{equation}\label{enes} N(S,4r)\leq N_{\textnormal{sep}} (S,2r)\leq N(S,r)\ \mbox{and } N_{\textnormal{pac}}(S,r)=N_{\textnormal{sep}}(S,2r), \end{equation} which hold for any bounded set $S$ in the Euclidean space; see \citet[p. 67]{bis17}. A further alternative expression for $\dim(S)$ is \begin{equation}\label{bcdim2} \dim(S)=d-\lim_{r\to 0 }\frac{\log(\mu(B(S,r)))}{\log(r)}=d-\lim_{r\to 0 }\frac{\log(V(r))}{\log(r)}, \end{equation} provided that this limit exists (it can be $+\infty$). The equivalence between \eqref{bcdim} and \eqref{bcdim2} follows from \eqref{enes} together with the simple inequalities \begin{equation}\label{mun} \mu(B(S,r))\leq N(S,r)\mu(B(0,r)), \ \mu(B(S,r))\geq N_{\textnormal{sep}}(S,r)\mu(B(0,r/2)). \end{equation} The relationship between Minkowski dimension and Hausdorff dimension is given by $\dim_\textnormal{H}(S)\leq \dim(S)$, where strict inequality is possible. A simple example is $\mathbb{Q}\cap[0,1]$, the set of rational numbers in the unit interval, for which $\dim_\textnormal{H}(\mathbb{Q}\cap[0,1])=0<1= \dim(\mathbb{Q}\cap[0,1])$. \ \noindent \textit{The correlation dimension} Another popular method for measuring some sort of fractal dimension is the \textit{correlation dimension}, introduced by \cite{gras83}; see also \cite{cam16} for a survey and \cite{pes93} for some mathematical insights. In fact, the definition below follows the formal treatment in \cite{pes93}, rather than the original formulation in \cite{gras83}. Let $X_1,X_2$ be two independent and identically distributed (iid) copies of a random element $X$ in $\mathbb{R}^d$. Let us define \begin{equation*}\label{cd} p(r)=\mathbb{P}(\|X_1 - X_2\|<r). \end{equation*} Then, the correlation dimension of the distribution of $X$, $P$, is defined as \begin{equation}\label{cd2} \dim_\textnormal{cd}(P)=\lim_{r\to 0} \frac{\log(p(r))}{\log(r)}, \end{equation} provided that this limit exists (it can be $+\infty$). Although we restrict ourselves to \( \mathbb{R}^d \), the correlation dimension is formally defined in \cite{pes93} for a general metric space. A notable difference between the definition of correlation dimension in \eqref{cd2} and other notions of dimension is the presence of the probability measure $P$. Of course, in our case, to make \eqref{cd2} comparable to other dimension notions, we will focus on probability measures $P$ whose support $S$ is the set of interest.
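As a simple illustration of this definition, let $P$ be the uniform distribution on $S=[0,1]$. An elementary calculation gives, for $0\leq r\leq 1$, \begin{equation*} p(r)=\mathbb{P}(|X_1-X_2|<r)=1-(1-r)^2=r(2-r), \quad \mbox{so that}\quad \frac{\log(p(r))}{\log(r)}=1+\frac{\log(2-r)}{\log(r)}\to 1, \end{equation*} as $r\to 0$; that is, $\dim_\textnormal{cd}(P)=1$, in agreement with the Minkowski and Hausdorff dimensions of $[0,1]$.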
In the Appendix it is shown that in fact the limit in \eqref{cd2} is the same for all $P$ fulfilling some regularity conditions. It is perhaps worth noting that, when the support $S$ is a compact manifold in ${\mathbb R}^d$, under some regularity conditions, the norm $\Vert\cdot\Vert$ of the ambient space can be replaced with the geodesic distance $d_S$ in $S$, and the limit in \eqref{cd2} remains the same. Indeed, since in general $d_S(x,y)\geq \Vert x-y\Vert$, it suffices to have $d_S(x,y)\leq C\Vert x-y\Vert$ for some constant $C>0$. This is guaranteed under the above-mentioned condition of positive reach \citep{fed59}; see, e.g., \cite[Lemma 3]{gen12b}. \ \noindent \textit{The pointwise dimension} The so-called pointwise dimension $\dim_\textnormal{pw}(x)$ (see \cite{you82}, \cite{cam16}) differs from the Minkowski dimension $\dim(S)$ in at least two aspects: first, again, it depends on the underlying probability distribution of the data points, rather than simply on the support $S$. Second, it is defined point by point, so that it also takes local aspects into account. Thus $\dim_\textnormal{pw}(x)$ provides information about different regions of the data cloud to which the global Minkowski dimension is blind. If $P$ is a probability measure with support $S\subset {\mathbb R}^d$, we define the pointwise dimension of $P$ at $x\in S$ as \begin{equation}\label{eq:pw} \dim_{\textnormal{pw}}(x)=\lim_{r\to 0} \frac{\log(P(B(x,r)))}{\log(r)}, \end{equation} provided that this limit does exist (it can be $+\infty$). While, obviously, $\dim_{\textnormal{pw}}(x)$ depends on the probability $P$, it is clear that many different choices of $P$ will lead to the same value. For example, $\dim_{\textnormal{pw}}(x)$ will equal $q$ for all choices of $P$ such that $P(B(x,r))$ is of exact order $r^q$. If $S$ is a compact Riemannian manifold, a natural choice for $P$, aimed at giving a sort of ``intrinsic'' pointwise dimension notion for $S$, would be the uniform distribution with respect to the volume form; see \citet[Section 1.3]{pen06}. A natural way of deriving a ``global'' notion of dimension for a set $S$ from \eqref{eq:pw} would be just defining $\dim_\textnormal{pw}(S)=\sup_{x\in S}\dim_\textnormal{pw}(x)$; see Section \ref{sec:estimators} for further discussion on this. It can be seen \citep[Prop. 2.1]{you82} that when $P$ is a probability measure with support $S$, $\dim_\textnormal{H}(S)\leq \dim_\textnormal{pw}(S)$. Also, by \citep[Th. 4.4]{you82}, if $P$ is a probability measure on a compact Riemannian manifold and $\dim_\textnormal{pw}(x)=q$ almost surely, then $\dim_\textnormal{H}(S)=q$. \ \section{Some estimators of the considered dimension notions}\label{sec:estimators} Our basic aim here is to define consistent estimators for the different notions of dimension introduced in the previous section. All these estimators are based on a random sample $\aleph_n=\{X_1,\dots,X_n\}$ of points from a probability distribution whose support is $S\subset\mathbb{R}^d$. The consistency of these estimators will be established in Section \ref{sec:consistencia}. Here, the term ``consistency'' must be understood in the statistical sense: we want our estimators to converge (almost surely) to the corresponding true dimension value as $n$ grows to infinity. According to the usual paradigm in nonparametrics, all the proposed estimators will depend on a real sequence $r_n$ of smoothing parameters which must tend to zero slowly enough in order to achieve consistency.
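\ \noindent \textit{A computational remark.} Several of the estimators introduced below are expressed in terms of the empirical volume function $V_n(r)=\mu(B(\aleph_n,r))$, which has no simple closed form in general; in practice it can be approximated, for instance, by Monte Carlo integration over a box containing the dilated sample. The following minimal sketch is included for illustration purposes only (it is not the code used in the empirical study of Section \ref{sec:emp}, which relies on the R package \textit{Rdimtools}); it assumes the standard Python libraries \texttt{numpy} and \texttt{scipy}, and the function name \texttt{empirical\_volume} is ours.
\begin{verbatim}
# Illustrative sketch: Monte Carlo approximation of V_n(r) = mu(B(sample, r))
# and the corresponding volume-based estimate d - log(V_n(r))/log(r).
import numpy as np
from scipy.spatial import cKDTree

def empirical_volume(sample, r, n_mc=200000, seed=0):
    # Uniform Monte Carlo points on a box containing the r-dilated sample.
    rng = np.random.default_rng(seed)
    d = sample.shape[1]
    lo, hi = sample.min(axis=0) - r, sample.max(axis=0) + r
    box_volume = np.prod(hi - lo)
    points = lo + (hi - lo) * rng.random((n_mc, d))
    dist, _ = cKDTree(sample).query(points, k=1)  # distance to the sample
    return box_volume * np.mean(dist <= r)

# Example: n = 2500 points drawn uniformly on a 2-dimensional face of [0,1]^3.
rng = np.random.default_rng(1)
X = np.column_stack([rng.random((2500, 2)), np.zeros(2500)])
r_n = (np.log(2500) / 2500) ** (1 / 4)  # of the type (log n / n)^(1/d'), d' > d
dim_vol = 3 - np.log(empirical_volume(X, r_n)) / np.log(r_n)
print(dim_vol)  # biased upwards for moderate n: the error is of order 1/|log(r_n)|
\end{verbatim}
\noindent The slow, logarithmic rate visible in this toy example is consistent with the bounds obtained in Section \ref{sec:consistencia} under the polynomial volume assumption.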
\ \noindent \textbf{The capacity estimator.} A first natural estimator for the Minkowski dimension would result from definition \eqref{bcdim}, \begin{equation}\label{dimcap_est} \widehat{\dim}_{\textnormal{cap}}=-\frac{\log(N_{\textnormal{sep}}(\aleph_n,r_n))}{\log(r_n)}, \end{equation} where $N_{\textnormal{sep}}(\aleph_n,r_n)$ is the natural empirical estimator of the separating number $N_{\textnormal{sep}}(S,r)$, that is, the maximum cardinality of an $r_n$-separated set in the sample $\aleph_n$, and $r_n$ is an appropriate sequence of smoothing parameters $r_n\downarrow 0$. This estimator was previously considered in \cite{keg02}; see also \cite{cam16}. We keep the name ``capacity estimator'' and the subindex ``cap'' in \eqref{dimcap_est} to follow K\'egl's notation although, according to the general notation we have used so far (borrowed from \cite{bis17}), the subindex ``sep'' would be acceptable as well. In fact, the main contribution in \cite{keg02} is an efficient algorithm to calculate the estimator \eqref{dimcap_est} or, more precisely, a ``scale-dependent'' version of it. For simplicity and ease of presentation, this version will not be considered in our consistency results below, which, in any case, can be easily adapted to it. \ \noindent \textbf{A non-parametric volume-based estimator.} An alternative approach is obtained by simply replacing the volume function in \eqref{bcdim2} with its empirical counterpart ${V_n(r_n)=\mu(B(\aleph_n,r_n))}$, for some appropriate sequence of smoothing parameters $r_n\downarrow 0$. This leads to \begin{equation}\label{dimvol_est} \widehat{\dim}_{\textnormal{vol}}=d-\frac{\log(V_n(r_n))}{\log(r_n)}. \end{equation} Lemma \ref{lem1} below establishes that this estimator differs from K\'egl's estimator by a quantity of order $1/|\log(r_n)|$, up to known constants. \ \noindent \textbf{An estimator of the correlation dimension.} Let us define $$\widehat{p}_n(r)=\binom{n}{2}^{-1}\sum_{i< j}\mathbb{I}_{\{\|X_i-X_j\|<r\}},$$ where $\mathbb{I}$ denotes an indicator function. Observe that $\mathbb{E}(\widehat{p}_n(r))=p(r)$. Then, we can consider, as an estimator for the correlation dimension, \begin{equation}\label{dimcd_est} \widehat{\dim}_{\textnormal{cd}}=\frac{\log(\hat p_n(r_n))}{\log(r_n)}. \end{equation} \ \noindent \textbf{A plug-in estimator of the pointwise dimension.} An empirical version of the pointwise dimension $\dim_\textnormal{pw}$, defined in \eqref{eq:pw}, is given in a natural way by replacing the probability measure $P$ by its empirical counterpart, ${\mathbb P}_n$: \begin{equation}\label{estpw} \widehat{\dim}_{\textnormal{pw}}(x)= \frac{\log({\mathbb P}_n(B(x,r_n)))}{\log(r_n)}. \end{equation} \ As it follows from the discussion in Section \ref{sec:notionsdim}, a global estimator of the dimension of $S$ could be obtained from \eqref{estpw} simply defining $\widehat{\dim}_{\textnormal{pw}}(S)=\sup_{x\in S}\widehat{\dim}_{\textnormal{pw}}(x)$. However, in practice, we cannot calculate the estimator at all points of $S$. So, in the simulation outputs of Section \ref{sec:emp}, we have used the 0.9 quantile of the values $\widehat{\dim}_{\textnormal{pw}}(X_i)$. Of course, the motivation for this is to have some protection against outliers. \ \section{Consistency results}\label{sec:consistencia} \subsection{Consistency for the volume-based estimator and K\'egl's estimator}\label{sec:consistkegl} The following technical lemma establishes a relationship between K\'egl's estimator and the volume-based estimator.
It will be used in our two main theorems to derive results about K\'egl's estimator using the volume-based estimator. \begin{lemma} \label{lem1} Let $r_n$ be a sequence such that $0<r_n<1$ for all $n\in\mathbb{N}$ and $r_n\to 0$. Then, for any $n\in\mathbb{N}$, \begin{equation}\label{lem:ineq} \Big|\widehat{\dim}_{\textnormal{vol}}-\widehat{\dim}_{\textnormal{cap}}+\frac{\log(\omega_d)}{\log(r_n)} \Big|\leq -d\frac{\log(2)}{\log(r_n)}\qquad a.s. \end{equation} \end{lemma} \begin{proof} From \eqref{mun} and the inequalities $N(K,4r)\leq N_{\textnormal{sep}} (K,2r)\leq N(K,r)$, which are valid for any bounded set $K$ in the Euclidean space (see \citet[p. 67]{bis17}), we get $$N_{\textnormal{sep}}(\aleph_n,r_n)\omega_d (r_n/2)^d\leq V_n(r_n)\leq V_n(2r_n)\leq N_{\textnormal{sep}}(\aleph_n,r_n)\omega_d(2r_n)^d.$$ Then, \begin{multline*} \log(\omega_d)+ d\log(r_n) -d\log(2)\leq \log(V_n(r_n))-\log(N_{\textnormal{sep}}(\aleph_n,r_n)) \\ \leq \log(\omega_d) +d\log(r_n) +d\log(2). \end{multline*} Dividing all terms by $-\log(r_n)>0$ results in $$-\frac{\log(\omega_d)}{\log(r_n)} +d\frac{\log(2)}{\log(r_n)}\leq \widehat{\dim}_{\textnormal{vol}}-\widehat{\dim}_{\textnormal{cap}}\leq -\frac{\log(\omega_d)}{\log(r_n)} -d\frac{\log(2)}{\log(r_n)},$$ from which \eqref{lem:ineq} follows. \end{proof} The following result provides conditions for the (almost sure) consistency, as $n\to\infty$, of the estimators \eqref{dimcap_est} and \eqref{dimvol_est}. \begin{theorem}\label{th:general} Let $S\subset \mathbb{R}^d$ be a compact set such that $V(r)$ is Lipschitz in some interval $[0,\lambda]$ with $\lambda>0$. Let $\aleph_n=\{X_1,\dots,X_n\}$ be an iid sample from a distribution whose support is $S$. Let $\gamma_n=\rho_H(\aleph_n,S)$ and let $r_n\to 0$ be such that $\gamma_n<r_n$ and ${\gamma_n/(V(r_n-\gamma_n)\log(r_n))\to 0}$, almost surely (a.s.). Then, (a) the estimator $\widehat{\dim}_{\textnormal{vol}}$ defined in \eqref{dimvol_est} is almost surely consistent, that is \begin{equation*}\label{consist-dvol} \dim(S)=d-\lim_{n\to \infty }\frac{\log(V_n(r_n))}{\log(r_n)}\quad a.s., \end{equation*} where $V_n(r_n)=\mu(B(\aleph_n,r_n))$ is the natural empirical estimator of $V(r_n)$. (b) K\'egl's estimator $\widehat{\dim}_{\textnormal{cap}}$, defined in \eqref{dimcap_est}, is almost surely consistent as well, under the same conditions for $r_n$. \end{theorem} \begin{proof} (a) Let us write $$ \frac{\log(V_n(r_n))}{\log(r_n)}= \frac{\log(V(r_n))-\log(V(r_n)/V_n(r_n))}{\log(r_n)}.$$ From \eqref{bcdim2}, we only have to prove that \begin{equation}\label{eqth1} \frac{\log(V(r_n)/V_n(r_n))}{\log(r_n)}\to 0. \end{equation} Since $r_n>\gamma_n$, from the first equation in the proof of \citet[Prop. 4.2]{ch23} we have $V(r_n-\gamma_n)\leq V_n(r_n)$; moreover, $V_n(r_n)\leq V(r_n)$ because $\aleph_n\subset S$. Then, using that $\log(x)\leq x-1$, \begin{equation}\label{rgamma} \Bigg|\frac{\log(V(r_n)/V_n(r_n))}{\log(r_n)}\Bigg|\leq \Bigg|\frac{\log(V(r_n)/V(r_n-\gamma_n))}{\log(r_n)}\Bigg|\leq \Bigg|\frac{V(r_n)-V(r_n-\gamma_n)}{V(r_n-\gamma_n)\log(r_n)}\Bigg|. \end{equation} Now, from the Lipschitz assumption on $V$ in $[0,\lambda]$, there exists $L>0$ such that ${|V(r_n)-V(r_n-\gamma_n)|\leq L\gamma_n}$ for all $n$ large enough such that $r_n<\lambda$. So the right-hand side of \eqref{rgamma} is of order $\gamma_n/(V(r_n-\gamma_n)\log(r_n))$ and \eqref{eqth1} follows from ${\gamma_n/(V(r_n-\gamma_n)\log(r_n))\to 0}$. \ \noindent (b) The consistency of $\widehat{\dim}_{\textnormal{cap}}$ follows from (a) and Lemma \ref{lem1}.
\end{proof} \begin{remark}\label{rem:st} In order to see the true extent of the assumptions in Theorem \ref{th:general}, let us note that, under the standardness assumption \eqref{estandar}, it is proved in \citet[Th. 3]{cue04} that $$ \gamma_n=\rho_H(\aleph_n,S)=O\left(\left(\frac{\log n}{n}\right)^{1/d}\right), $$ with probability one. Therefore, the condition $\gamma_n<r_n$ a.s. would hold, for $n$ large enough, whenever $\left({\log n}/{n}\right)^{1/d'}=O(r_n)$ for some $d'>d$. The other assumption, $\gamma_n/(V(r_n-\gamma_n)\log(r_n))\to 0$ a.s., is a bit more delicate, as it involves the behaviour of $V$ near zero. \end{remark} \begin{remark}\label{rem:sobreVn} The volume-based estimator in Theorem \ref{th:general} above plays a somewhat instrumental role here, in order to obtain in part (b) the consistency result for $\widehat{\dim}_{\textnormal{cap}}$. Indeed, in principle, the computation of $\widehat{\dim}_{\textnormal{vol}}$ is more expensive, especially taking into account the efficient algorithm provided in \cite{keg02} to calculate $\widehat{\dim}_{\textnormal{cap}}$. \end{remark} \subsection{Some results under the polynomial volume assumption}\label{sec:polvol} If we assume that $V(r)=\mu(B(S,r))$ is a polynomial on some interval $[0,{\mathbf R}]$, the Minkowski dimension is always $d$ minus the order of the first non-null coefficient of the polynomial $V$. Also, under an additional shape restriction on $S$, we have a quite precise guide about the choice of the sequence of smoothing parameters $r_n$ in the plug-in consistent estimator $V_n(r_n)$ considered in Theorem \ref{th:general}. Finally, the polynomial volume assumption provides an alternative natural estimator of $V(r)$, denoted ${\mathcal P}_n(r)$, constructed by minimizing the $L^2$-distance between $V(r)$ and the empirical volume function $V_n(r)$ introduced in the previous subsection. These ideas are formalised in the following result. \begin{theorem}\label{th:polvol} Let $S\subset \mathbb{R}^d$ be a compact set with polynomial volume function $V(r)=\sum_{j=k}^d \theta_j r^{j}$, $r\in [0,\mathbf{R}]$, $\theta_k$ being the first non null coefficient in $V(r)$. Given a sample $\aleph_n=\{X_1,\ldots,X_n\}$ drawn on $S$, denote ${\mathcal P}_n(r)=\sum_{j=k}^d \hat\theta_j r^{j}$, where $\hat\theta_j$ stand for the minimum-distance estimators of $\theta_j$ based on the $L^2$-distance between $V_n(r)$ and $V(r)$ on the interval $[0,{\mathbf R}]$. Then, (a) $\dim(S)=d-k$. (b) If $\gamma_n=\rho_H(\aleph_n,S)$ and $r_n\to 0$ is such that $\gamma_n/r_n\to 0$, a.s., then we have that, for $n$ large enough, \begin{equation}\label{eq:conVn} \Big|\dim(S)-\widehat{\dim}_{\textnormal{vol}}\Big |\leq \frac{|\log(2\theta_k)|}{|\log(r_n)|}\qquad \mbox{a.s}. \end{equation} Moreover, under the standardness condition $P(B(x,r))\geq \delta r^d$ introduced in \eqref{estandar}, where $P$ stands here for the distribution of the $X_i$, condition $\gamma_n/r_n\to 0$ is fulfilled for any sequence $r_n$ of type $$ r_n=\Big(\frac{\log n}{n}\Big)^{1/d'} $$ with $d'>d$. As a consequence, for this choice of $r_n$ we also have $|\dim(S)-\widehat{\dim}_{\textnormal{vol}}|=O(1/\log(n))$. (c) Assuming again $r_n\to 0$ and $\gamma_n/r_n\to 0$, a.s., K\'egl's estimator \eqref{dimcap_est} fulfils \begin{equation*} \Big|\dim(S)-\widehat{\dim}_{\textnormal{cap}}\Big |\leq \frac{|\log(2\theta_k)|+|\log(\omega_d)|+d\log(2)}{|\log(r_n)|} \qquad \mbox{a.s}.
\end{equation*} (d) Let $f(x)=\lfloor x+1/2\rfloor$ be the function which maps any positive value $x>0$ to its nearest integer value. Then, there exists $r_0>0$ such that the estimator \begin{equation*}\label{esttilde} \widetilde{\dim}=f\Big(d-\frac{\log\big({\mathcal P}_n(r_0))}{\log (r_0)}\Big), \end{equation*} fulfils $\widetilde{\dim}=\dim(S)$ eventually almost surely, as $n\to\infty$. \end{theorem} \begin{proof} (a) Observe that a simple application of L'H\^opital's rule yields \begin{equation*}\label{bc1} \dim(S)=d-\lim_{r\to 0} \frac{\log(V(r))}{\log(r)}=d-\lim_{r\to 0}\frac{\log(\sum_{j=k}^d \theta_j r^{j})}{\log(r)}=d-k. \end{equation*} (b) By part (a) and the polynomial volume assumption, for $n$ large enough such that $r_n< \mathbf R$, \begin{align*} \Big|\dim(S)-\widehat{\dim}_{\textnormal{vol}}\Big|= & \Bigg| \frac{\log(V_n(r_n)/V(r_n))}{\log(r_n)} + \frac{\log\Big(\sum_{j=0}^{d-k} \theta_{j+k} r_n^{j}\Big)}{\log(r_n)}\Bigg|\\ \leq & \Bigg| \frac{\log(V_n(r_n)/V(r_n))}{\log(r_n)} \Bigg|+\Bigg|\frac{\log\Big(\sum_{j=0}^{d-k} \theta_{j+k} r_n^{j}\Big)}{\log(r_n)}\Bigg| \end{align*} Let us bound the first term, using the same bounds as in the proof of Theorem \ref{th:general} (a): $$\Bigg|\frac{\log(V_n(r_n)/V(r_n))}{\log(r_n)} \Bigg| \leq \Bigg|\frac{\log(V(r_n)/V(r_n-\gamma_n))}{\log(r_n)}\Bigg|.$$ From the polynomial volume assumption and writing $$(r_n-\gamma_n)^j=r_n^j+\sum_{l=0}^{j-1}\binom{j}{l} r_n^l\gamma_n^{j-l}(-1)^{j-l}\quad j=k,\dots,d,$$ we get that $V(r_n-\gamma_n)=V(r_n)+\mathcal{Q}_n(\gamma_n)$, where $\mathcal{Q}_n(\gamma_n)$ is a polynomial in $\gamma_n$, which depends on $n$, but whose degree is at most $d$. Observe also that the constant term of $\mathcal{Q}_n$ is 0. Now if we use that $\log(x)\leq x-1$, \begin{multline*} \Bigg|\frac{\log(V(r_n)/V(r_n-\gamma_n))}{\log(r_n)}\Bigg|\leq \frac{|\mathcal{Q}_n(\gamma_n)|}{|(V(r_n)+\mathcal{Q}_n(\gamma_n))\log(r_n)|}=\frac{1}{|(1 +V(r_n)/\mathcal{Q}_n(\gamma_n))\log(r_n)|}. \end{multline*} Now, since the constant term of $\mathcal{Q}_n$ is 0 and $r_n/\gamma_n\to \infty$, it follows that ${|(1 +V(r_n)/\mathcal{Q}_n(\gamma_n))|\to\infty}$ as $n\to\infty$. Then, for $n$ large enough, $$\frac{1}{|(1 +V(r_n)/\mathcal{Q}_n(\gamma_n))\log(r_n)|}+\Bigg|\frac{\log\Big(\sum_{j=0}^{d-k} \theta_{j+k} r_n^{j}\Big)}{\log(r_n)}\Bigg| \leq \frac{|\log(2\theta_k)|}{|\log(r_n)|}.$$ The statement concerning the standardness assumption follows from \citet[Th. 3]{cue04}. This result establishes that under condition \eqref{estandar}, ${\gamma_n=O\big((\log n/n)^{1/d}\big)}$, with probability one. It can be noted that for this choice of $r_n$, bound \eqref{eq:conVn} yields $ |\dim(S)-\widehat{\dim}_{\textnormal{vol}}|=O(1/\log(n))$. (c) The proof follows directly from part (b) and Lemma \ref{lem1}. (d) Assume $\dim(S)=d-k$. For all $0<r\leq \mathbf{R}$, $$V(r)= r^k\Bigg(\sum_{j=k}^d \theta_j r^{j-k}\Bigg).$$ Now, \begin{align*} f\Bigg(d-\frac{\log({\mathcal P}_n(r))}{\log(r)}\Bigg)=&f\Bigg(d-\frac{\log(V(r))-\log(V(r)/{\mathcal P}_n(r))}{\log(r)}\Bigg)\\ =&f\Bigg(d-k-\frac{\log(\sum_{j=k}^d \theta_j r^{j-k})}{\log(r)} +\frac{\log(V(r)/{\mathcal P}_n(r))}{\log(r)}\Bigg). \end{align*} Let us fix $0<r_0\leq \mathbf{R}$ small enough such that \begin{equation}\label{condition1} \Bigg|\frac{\log(\sum_{j=k}^d \theta_j r_0^{j-k})}{\log(r_0)}\Bigg|<1/4. \end{equation} From \citet[Th. 1]{cuepat18} we know ${\mathcal P}_n(r_0)\to V(r_0)$ as $n\to \infty$ a.s.
Then, with probability one for $n$ large enough, \begin{equation} \label{condition2} \Bigg|\frac{\log(V(r_0)/{\mathcal P}_n(r_0))}{\log(r_0)}\Bigg|<1/4. \end{equation} Combining \eqref{condition1} and \eqref{condition2}, with probability one for $n$ large enough, $$f\Bigg(d-\frac{\log({\mathcal P}_n(r_0))}{\log(r_0)}\Bigg)=d-k.$$ \end{proof} \begin{remark} Parts (b) and (c) are perhaps the most interesting conclusions of Theorem \ref{th:polvol} as they provide a wide class of sets $S$ (those with polynomial volume function) for which the assumptions imposed in Theorem \ref{th:general} to get consistency can just be replaced by the simpler conditions $\gamma_n/r_n\to 0$ a.s. and $r_n\to 0$. Part (d) has a rather conceptual, theoretical value. Indeed, our empirical results suggest that the estimator $\widetilde{\dim}$ considered in Theorem \ref{th:polvol} is not competitive in practice with the other estimators considered in Section \ref{sec:estimators}. Still, for this estimator, consistency can be obtained for a constant value $r_n=r_0$ of the tuning parameter. While such a value is not known in advance, as it depends on the unknown dimension value, equation \eqref{condition1} might provide some clues for an iterative procedure to select $r_0$. \end{remark} \subsection{Consistency for the correlation dimension estimator}\label{sec:(P_x)} The following theorem establishes a consistency result for the natural estimator of the correlation dimension. \begin{theorem} \label{th:cd} Assume that the distribution $P$ of $X$ is such that $\dim_\textnormal{cd}(P)$ exists and is finite. Then, the estimator $\widehat{\dim}_\textnormal{cd}$ defined in \eqref{dimcd_est} is almost surely consistent, that is $$\dim_\textnormal{cd}(P)=\lim_{n\to\infty}\frac{\log(\widehat{p}_n(r_n))}{\log(r_n)}\quad a.s.,$$ provided that we take \begin{equation}\label{eq:rncd} r_n=\Bigg(\frac{\log n}{n}\Bigg)^{\frac{1}{(1+\beta)\dim_\textnormal{cd}(P)}}, \end{equation} with $\beta>0$. \end{theorem} \begin{proof} If we use the bound $\log(x)\leq x-1$, for $n$ large enough, \begin{align*} \mathbb{P}\Bigg(\Bigg |\frac{\log(\widehat{p}_n(r_n))}{\log(r_n)}-\frac{\log(p(r_n))}{\log(r_n)}\Bigg|>\epsilon\Bigg)= & \mathbb{P}\Bigg(\Bigg | \log\Bigg (\frac{\widehat{p}_n(r_n)}{p(r_n)}\Bigg)\Bigg |>-\epsilon \log(r_n)\Bigg) \\ \leq & \mathbb{P}(|\widehat{p}_n(r_n)-p(r_n)|>-\epsilon \log(r_n)p(r_n)). \end{align*} We first note that $\widehat{p}_n(r_n)$ is a U-statistic. Then we will use the concentration inequality for U-statistics given by equation (2) in \citet[Th. A, p. 201]{serfling}. According to the notation of this book, the order of the statistic is $m=2$, the kernel function $h$ is $h(x_1,x_2)={\mathbb I}_{\{\|x_1-x_2\|< r_n\}}$, the bounds for the value of $h$ are $a=0$, $b=1$, the expectation of the U-statistic is $\theta=p(r_n)$ and the deviation value is $t=-\epsilon \log(r_n)p(r_n)>0$, for $n$ large enough. Thus, using the above mentioned inequality, applied to both tails, we get $$\mathbb{P}\Big( \left| \widehat{p}_n(r_n) - p(r_n) \right| > -\epsilon \log(r_n) p(r_n) \Big) \leq 2 \exp \left( \frac{-n \left( \epsilon \log(r_n) p(r_n) \right)^2}{4 \left( \sigma^2 - \frac{1}{3} \left( 1 - p(r_n) \right) \epsilon \log(r_n) p(r_n) \right) } \right),$$ where $\sigma^2=p(r_n)(1-p(r_n))$.
Observe that $$\frac{-n \left( \epsilon \log(r_n) p(r_n) \right)^2}{4 \left( \sigma^2 - \frac{1}{3} \left( 1 - p(r_n) \right) \epsilon \log(r_n) p(r_n) \right) }= -\frac{n\epsilon^2p(r_n)\log(r_n)}{4(1-p(r_n))(1/\log(r_n)-\epsilon/3)}.$$ Since $\dim_\textnormal{cd}(P)$ is finite, $p(r_n)\to 0$ as $n \to \infty$. Then we can take $n$ large enough such that $-4(1-p(r_n))(1/\log(r_n)-\epsilon/3)\leq 2\epsilon$. Then, \begin{equation}\label{eq:cantelli} \mathbb{P}\Big( \left| \widehat{p}_n(r_n) - p(r_n) \right| > -\epsilon \log(r_n) p(r_n) \Big)\leq 2\exp\Bigg(\frac{n\epsilon \log(r_n)p(r_n)}{2}\Bigg). \end{equation} From \eqref{cd2}, for all $0<\beta$, $p(r_n)>r_n^{(1+\beta)\dim_\textnormal{cd}(P)}$ for $n$ large enough. Finally, the desired conclusion is a direct application of the Borel-Cantelli lemma since, from \eqref{eq:rncd}, given $\epsilon>0$ and $\alpha>1$, we have for $n$ large enough, $$\frac{n\epsilon \log(r_n)r_n^{(1+\beta)\dim_\textnormal{cd}(P)}}{2}\leq -\alpha\log n.$$ Thus, the series whose general term is the left-hand side of \eqref{eq:cantelli} is bounded by the convergent series $\sum_n\frac{2}{n^\alpha}$. \end{proof} \begin{remark} It is clear from the proof of Theorem \ref{th:cd} that the conclusion stands valid for any sequence of smoothing parameters $r_n$ decreasing to zero not faster than a sequence of type \eqref{eq:rncd}, that is, for any $r_n\downarrow 0$ such that $$ \Bigg(\frac{\log n}{n}\Bigg)^{\frac{1}{(1+\beta)\dim_\textnormal{cd}(P)}}=O(r_n) $$ for some $\beta>0$. For a different approach to the consistent estimation of the correlation dimension, see \cite{qiu22}. \end{remark} \subsection{Pointwise dimension estimation}\label{sec:point} The notion of pointwise dimension defined in \eqref{eq:pw} is perhaps less popular than the Minkowski definition of this concept. However, besides their obvious differences, both dimension notions, pointwise and Minkowski, are somewhat complementary, since both are suitable for statistical estimation and both provide useful information about the Hausdorff dimension (see part (b) of Theorem \ref{th:pw} below), which is considerably harder to estimate in a direct fashion. In addition, as seen in Theorem \ref{th:pw} below, some standard methods in nonparametrics can be used to derive convergence rates for the estimator. Last but not least, the empirical results provided below suggest that the ``pointwise-based'' estimator proposed at the end of Section \ref{sec:estimators} could be a competitive choice in dimension assessment studies. \begin{theorem}\label{th:pw} Let $S\subset\mathbb{R}^d$ be a compact set and $\aleph_n=\{X_1,\dots,X_n\}$ be an iid sample distributed as $P$, whose support is $S$. (a) Let $x\in S$ be such that $\dim_\textnormal{pw}(x)$, defined in \eqref{eq:pw}, exists and the standardness condition \eqref{estandar} is fulfilled at $x$. Then, the estimator $\widehat{\dim}_{\textnormal{pw}}(x)$ is almost surely consistent, that is \begin{equation}\label{dpwconsist} {\dim}_{\textnormal{pw}}(x)=\lim_{n\to\infty}\frac{\log({\mathbb P}_n(B(x,r_n)))}{\log(r_n)}\quad a.s., \end{equation} for $r_n=(\log(n)/n)^{1/d'}$, $d'$ being as in \eqref{estandar}.\\ (b) Assume now that the ``global'' (for all $x\in S$) standardness condition \eqref{estandar} holds and $\dim_\textnormal{pw}(x)$ exists for all $x\in S$. Let \begin{equation} \label{2rn} r_n=\Bigg(\frac{\log n}{n}\Bigg)^{\frac{1}{2d'}}.
\end{equation} Then, if the convergence in the definition \eqref{eq:pw} of $\dim_{\textnormal{pw}}$ is uniform in $x$, the consistency in \eqref{dpwconsist} is uniform as well, that is \begin{equation}\label{dpwconsistunif} \sup_{x\in S}\Big| \widehat{\dim}_{\textnormal{pw}}(x)-\dim_{\textnormal{pw}}(x)\Big|\to 0,\ \mbox{almost\ surely}, \mbox{ as } n\to\infty. \end{equation} As a consequence, we also have, almost surely, as $n\to\infty$, \begin{align}\label{supdw} & \sup_{x\in S} \widehat{\dim}_{\textnormal{pw}}(x)\to \sup_{x\in S} \dim_{\textnormal{pw}}(x)\geq \dim_\textnormal{H}(S), \ \mbox{and}\\ & \inf_{x\in S} \widehat{\dim}_{\textnormal{pw}}(x)\to \inf_{x\in S} \dim_{\textnormal{pw}}(x)\leq \dim_\textnormal{H}(S) \notag \end{align} where $\dim_\textnormal{H}(S)$ denotes the Hausdorff dimension of $S$. \end{theorem} \begin{proof} (a) Let $B=B(x,r_n)$. \begin{multline}\label{bias-var0} \Bigg|\dim_{\textnormal{pw}}(x)-\widehat{\dim}_{\textnormal{pw}}(x)\Bigg|\leq \Bigg|\dim_{\textnormal{pw}}(x)-\frac{\log(P(B))}{\log(r_n)}\Bigg|+ \Bigg|\widehat{\dim}_{\textnormal{pw}}(x)-\frac{\log(P(B))}{\log(r_n)}\Bigg|=B_n+V_n \end{multline} Take $\epsilon>0$ and $n$ large enough such that $B_n<\epsilon/2$. Observe that $$\Big|\frac{\log({\mathbb P}_n(B))}{\log(r)}-\frac{ \log(P(B))}{\log(r)}\Big|=\Big|\frac{\log({\mathbb P}_n(B)/P(B))}{\log(r)}\Big|.$$ Then, using $\log(x)\leq x-1$ and $P(B)\geq \delta r_n^{d'}$, we get \begin{align*} \mathbb{P}\Big( |\log({\mathbb P}_n(B)/P(B))|>-\log(r_n)\epsilon/2\Big) \leq \mathbb{P}\Big( |{\mathbb P}_n(B)-P(B)|> -P(B)\log(r_n)\epsilon/2\Big). \end{align*} Let us now recall the well-known Bernstein inequality: if $Y_1,\ldots,Y_n$ are independent random variables with mean zero such that $|Y_i|\leq M$ for some constant $M$, we have $$ {\mathbb P}\Big(\left|\sum_{i=1}^nY_i\right|>\eta\Big)\leq 2\exp\Big(-\frac{\eta^2/2}{\sum_i{\mathbb E}(Y_i^2)\,+\,M\eta/3}\Big). $$ Using this for $\eta= -nP(B)\log(r_n)\epsilon/2$ and $Y_i={\mathbb I}_B(X_i)-P(B)$, \begin{equation}\label{eq:eta} \mathbb{P}\Big( |\log({\mathbb P}_n(B)/P(B))|>-\log(r_n)\epsilon/2\Big) \leq 2\exp\Bigg(-\frac{\frac{1}{8}\delta r_n^{d'}\log^2(r_n)\epsilon^2n^2}{n-\frac{1}{6}\log(r_n)n\epsilon}\Bigg). \end{equation} The series is convergent if $r_n=(\log(n)/n)^{1/d'}$. Indeed, for such $r_n$ the absolute value of the exponent in the right-hand side of \eqref{eq:eta} is of the order of $\log(n)|\log(r_n)|$ and, therefore, eventually larger than $\log(n^2)$. This ensures the convergence of the series whose general term is the left-hand side of \eqref{eq:eta}. Then, \eqref{dpwconsist} follows from the Borel-Cantelli lemma. \ (b) We have \begin{multline}\label{bias-var} \sup_{x\in S}\Bigg|\dim_{\textnormal{pw}}(x)-\widehat{\dim}_{\textnormal{pw}}(x)\Bigg|\leq \sup_{x\in S} \Bigg|\dim_{\textnormal{pw}}(x)-\frac{\log(P(B(x,r_n)))}{\log(r_n)}\Bigg|+\\ \sup_{x\in S} \Bigg|\widehat{\dim}_{\textnormal{pw}}(x)-\frac{\log(P(B(x,r_n)))}{\log(r_n)}\Bigg|=B_n+V_n \end{multline} The term $B_n$ is a sort of ``bias term''. Since the convergence in the definition \eqref{eq:pw} of $\dim_{\textnormal{pw}}$ is uniform in $x$ we have $B_n \to 0$. Regarding the ``variability term'' $V_n$, let us prove that $V_n\to 0$ a.s. We will make use of the celebrated Vapnik-Cervonenkis inequality. We will in fact use a particular case of this result; see e.g. \citet[Th. 12.8]{pattern} for a proof and more details. \ \noindent \it \textbf{[VC inequality].} Let $P$ be a Borel probability measure on $\mathbb{R}^d$. Let $\aleph_n=\{X_1,\dots,X_n\}$ be an iid sample from $P$.
Let ${\mathbb P}_n$ be the empirical measure corresponding to $\aleph_n$. Denote by $\mathcal A_n$ the class of all balls in $\mathbb{R}^d$ of radius $r_n$. Then, for any $n$ and $\epsilon \leq 1$, $$\mathbb{P}\Big\{\sup_{A \in \mathcal{A}_n} |{\mathbb P}_n(A) - P(A)| > \epsilon\Big\} \leq C (n^{2(d+2)}+1)e^{-2n\epsilon^2},$$ where $C$ is a constant that does not exceed $4 e^{4 \epsilon+4 \epsilon^2} \leq 4 e^8$. \ \rm Now, observe that $$\Big|\frac{\log({\mathbb P}_n(B))}{\log(r)}-\frac{ \log(P(B))}{\log(r)}\Big|=\Big|\frac{\log({\mathbb P}_n(B)/P(B))}{\log(r)}\Big|.$$ Then, for any given $r>0$, the VC inequality together with $\log(x)\leq x-1$ yields \begin{align*} & \mathbb{P}\Big(\sup_{B=B(x,r_n)\in \mathcal{A}_n}|\log({\mathbb P}_n(B)/P(B))|>-\log(r_n)\epsilon\Big)\\ \leq &\mathbb{P}\Big(\sup_{B\in \mathcal{A}_n}|{\mathbb P}_n(B)-P(B)|>\inf_{B\in \mathcal{A}_n}-P(B)\log(r_n)\epsilon\Big)\\ \leq & 4e^8(n^{2d+4}+1)\exp\Bigg(-2n\inf_{{B\in \mathcal{A}_n}}P^2(B)\log(r_n)^2\epsilon^2\Bigg) \end{align*} Using the lower boundedness assumption imposed on $P$ and denoting ${g(r_n)=2\delta^2\epsilon^2\log^2(r_n)}$, we get \begin{align*} & \mathbb{P}\Big( \sup_{B=B(x,r_n)\in \mathcal{A}_n}|\log({\mathbb P}_n(B)/P(B))|>-\log(r_n)\epsilon\Big)\\ \leq& 4e^8(n^{2d+4}+1)\exp\Big(-2n \delta^2 r_n^{2d'}(\log(r_n))^2\epsilon^2\Big)\\ =& 4e^8(n^{2d+4}+1)\exp\big(-g(r_n)\log(n)\big)\leq 8e^8\exp\Big(\big((2d+4)-g(r_n)\big)\log(n)\Big). \end{align*} Now, as $g(r_n)\to\infty$, we can take $n_0$ large enough such that $g(r_n)-(2d+4)>2$ for $n>n_0$. Then, from the Borel-Cantelli lemma, it follows that $V_n\to 0$ a.s. \ Finally, the proof of \eqref{supdw} follows from the uniform consistency \eqref{dpwconsistunif} and \citet[Prop. 2.1]{you82}, which establishes $$ \inf_x\liminf_{r\to 0}\frac{\log(P(B(x,r)))}{\log (r)}\leq \dim_\textnormal{H}(S)\leq \sup_x\limsup_{r\to 0}\frac{\log(P(B(x,r)))}{\log (r)}. $$ \end{proof} \begin{remark} Again, the conclusions (a) and (b) of Theorem \ref{th:pw} stand true if $r_n$ is any sequence decreasing to zero such that $(\log(n)/n)^{1/d'}=O(r_n)$ and $(\log n/n)^{\frac{1}{2d'}}=O(r_n)$, respectively, $d'$ being as in \eqref{estandar}. A simple sufficient condition to ensure the uniform convergence in \eqref{eq:pw} is the existence of positive constants $C_1<C_2$ and $\ell$ such that, for all $x\in S$, $C_1r^\ell\leq P(B(x,r))\leq C_2r^\ell$. \end{remark} \section{Empirical results}\label{sec:emp} In this section, we evaluate the practical performance of some of the discussed estimators on different sets $S\subset\mathbb{R}^d$ with differing Minkowski dimensions. For this, we use random samples $\aleph_n=\{X_1,\dots,X_n\}$ generated uniformly on $S$. First, we consider $S$ to be hypercubes of side length one, for different values of $d$ and $\dim(S)$ (details are given below). This allows us to assess how the estimators perform in scenarios where the ambient space dimension ranges from low to moderate, and the Minkowski dimension of $S$ ranges from equal to the ambient dimension to considerably lower. Then, following the approach of \cite{cam15}, we analyze the performance of the estimators on a synthetic benchmark. This benchmark comprises a set of 13 manifolds, linearly or nonlinearly embedded in higher dimensional spaces. Finally, we evaluate the performance of the considered estimators in the presence of noise in the data. Our objective is not to provide a comprehensive comparison of multiple existing methodologies for dimension estimation (for that, we refer the reader to the study by \cite{cam15}).
Instead, we focus on the capacity estimator \eqref{dimcap_est}, the correlation dimension estimator \eqref{dimcd_est} and the global estimator based on the plug-in estimation of the pointwise dimension \eqref{estpw}. Additionally, we include in our study the so-called box-counting estimator, commonly discussed in the literature when referring to fractal dimension estimators. This estimator arises from the fact that the Minkowski dimension in \eqref{bcdim} can be equivalently formulated by replacing the covering number $N(S, r)$ with the minimal number of boxes of side length $r$ required to cover the set, denoted as $N_{\textnormal{box}}(S, r)$ (hence the commonly used term ``box-counting dimension''). For a discussion on the equivalence of this definition, see \cite{fal04}. Thus, another natural estimator for the Minkowski dimension is \begin{equation}\label{dimbc_est} \widehat{\dim}_{\textnormal{bc}}=\frac{\log(N_{\textnormal{box}}(\aleph_n,r_n))}{\log(1/r_n)}, \end{equation} where $r_n$ is an appropriate sequence of smoothing parameters $r_n\downarrow 0$. From a practical perspective, algorithms have been developed in the literature to approximate $N_{\textnormal{box}}(\aleph_n,r_n)$. These box-counting algorithms typically involve placing a standard grid of boxes with side length $r_n$ over the embedding space and counting the number of boxes containing at least one point of the sample. The estimators $\widehat{\dim}_{\textnormal{bc}}$, $\widehat{\dim}_{\textnormal{cap}}$ and $\widehat{\dim}_{\textnormal{cd}}$ were computed using the R library {\textit{Rdimtools}}, see \cite{you22}, with the functions \verb|est.boxcount|, \verb|est.packing|, and \verb|est.correlation|, respectively. Regarding $\widehat{\dim}_{\textnormal{bc}}$, although \eqref{dimbc_est} requires evaluating a ratio of two terms for a carefully selected value of $r_n$, in practice, instead of directly evaluating this ratio, the box-counting dimension is typically estimated by determining the slope of a linear regression of $\log(N_{\textnormal{box}}(\aleph_n, r_n))$ versus $\log(1/r_n)$ over a suitable range of values of $r_n$. Thus, the implemented algorithm automatically selects the values of $r_n$ (50 by default) and handles extreme points internally, enhancing robustness. A similar approach is used for $\widehat{\dim}_{\textnormal{cd}}$, where, instead of computing the ratio in \eqref{dimcd_est} for a given value of $r_n$, the slope of a linear regression of $\log(\hat p_n(r_n))$ versus $\log(r_n)$ over a suitable range of values of $r_n$ is computed. In the case of $\widehat{\dim}_{\textnormal{cap}}$, the library {\textit{Rdimtools}} implements the scale-dependent capacity dimension estimator described in \cite{keg02}, where the values of $r_1$ and $r_2$ in the algorithm are also automatically selected. For further details on the implemented algorithms, we refer to the library's documentation. Regarding the pointwise dimension estimator in \eqref{estpw}, although it primarily provides a local measure, we have already noted that a global estimator can be derived from it, defined as $\widehat{\dim}_{\textnormal{pw}}(S)=\sup_{x\in S}\widehat{\dim}_{\textnormal{pw}}(x)$. In practice, we compute $\widehat{\dim}_{\textnormal{pw}}(X_i)$ for $i=1,\ldots,n$ and estimate the Minkowski dimension of $S$ as the 0.9 quantile of these values. Using a quantile leads to a more robust estimate compared to using the maximum, as it mitigates the influence of outliers and extreme values.
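To fix ideas, the following minimal sketch illustrates the log--log regression approach just described, in the case of the correlation dimension estimator. It is included for illustration purposes only: it is not the \textit{Rdimtools} implementation used in our experiments, it assumes the standard Python libraries \texttt{numpy} and \texttt{scipy}, and the function name \texttt{correlation\_dimension} as well as the chosen range of radii are merely illustrative.
\begin{verbatim}
# Illustrative sketch: correlation dimension estimated as the slope of
# log(p_hat(r)) versus log(r) over a range of radii.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(sample, radii):
    dists = pdist(sample)  # all pairwise Euclidean distances
    p_hat = np.array([np.mean(dists < r) for r in radii])
    keep = p_hat > 0       # discard radii with no pairs at all
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(p_hat[keep]), 1)
    return slope

rng = np.random.default_rng(2)
X = rng.random((2500, 2))  # uniform sample on the unit square
print(correlation_dimension(X, np.geomspace(0.01, 0.1, 25)))
# typically close to dim(S) = 2, slightly below it due to boundary effects
\end{verbatim}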
As in the other estimators discussed previously, $\widehat{\dim}_{\textnormal{pw}}(X_i)$ is computed for $i=1,\ldots,n$, by fitting a linear regression to $\log({\mathbb P}_n(B(X_i,r_n)))$ versus $\log(r_n)$ over a suitable range of values of $r_n$. Table \ref{tab:SQ} summarizes the results for hypercubes with side length one, for various values of $d$ and $\dim(S)$. Specifically, for each set $S\subset\mathbb{R}^d$, we generated $B=50$ samples of size $n=2500$ uniformly on $S$. We report the mean value of each estimator across the $B$ samples, using the following terminology for the columns in the tables: BC for the box-counting estimator, CAP for the capacity estimator, CD for the correlation dimension estimator and PW for the global estimator based on the pointwise dimension estimation. In parentheses, we also show the proportion of times each estimator correctly identifies the corresponding Minkowski dimension, approximating the estimates to the nearest integer, as the methods may yield non-integer values. We observe that both the box-counting estimator and the capacity estimator tend to underestimate the Minkowski dimension, especially in higher dimensions. The correlation dimension estimator provides more accurate estimates of the Minkowski dimension. The pointwise dimension estimator achieves the best results across all dimensions and is the only one to attain 100\% accuracy under all conditions. \begin{table}[ht] \centering \begin{tabular}{cccccc} \hline $d$ & $\dim(S)$ & BC & CAP & CD & PW \\ \hline 2 & 2 & 1.74 (1.00) & 1.74 (0.80) & 1.89 (1.00) & 2.15 (1.00) \\ \hline 3 & 3 & 2.44 (0.00) & 2.33 (0.26) & 2.86 (1.00) & 3.16 (1.00) \\ & 2 & 1.75 (1.00) & 1.74 (0.84) & 1.89 (1.00) & 2.16 (1.00) \\ \hline 4 & 4 & 3.04 (0.00) & 2.80 (0.02) & 3.77 (1.00) & 4.09 (1.00) \\ & 3 & 2.43 (0.00) & 2.34 (0.32) & 2.86 (1.00) & 3.16 (1.00) \\ & 2 & 1.75 (1.00) & 1.68 (0.78) & 1.89 (1.00) & 2.16 (1.00) \\ \hline 5 & 5 & 3.57 (0.00) & 3.05 (0.00) & 4.58 (0.80) & 5.01 (1.00) \\ & 4 & 2.92 (0.00) & 2.96 (0.04) & 3.76 (1.00) & 4.09 (1.00) \\ & 3 & 2.43 (0.00) & 2.40 (0.32) & 2.86 (1.00) & 3.16 (1.00) \\ & 2 & 1.75 (1.00) & 1.71 (0.78) & 1.90 (1.00) & 2.15 (1.00) \\ \hline 6 & 6 & 4.25 (0.00) & 3.41 (0.00) & 5.39 (0.10) & 5.86 (1.00) \\ & 5 & 3.44 (0.00) & 3.08 (0.00) & 4.60 (0.90) & 4.99 (1.00) \\ & 4 & 2.92 (0.00) & 2.83 (0.02) & 3.75 (1.00) & 4.10 (1.00) \\ & 3 & 2.43 (0.00) & 2.31 (0.20) & 2.86 (1.00) & 3.16 (1.00) \\ & 2 & 1.75 (1.00) & 1.76 (0.76) & 1.90 (1.00) & 2.15 (1.00) \\ \hline 7 & 7 & 5.33 (0.00) & 3.91 (0.00) & 6.17 (0.00) & 6.66 (1.00) \\ & 6 & 3.86 (0.00) & 3.33 (0.00) & 5.41 (0.20) & 5.85 (1.00) \\ & 5 & 3.44 (0.00) & 3.17 (0.00) & 4.59 (0.78) & 5.01 (1.00) \\ & 4 & 2.92 (0.00) & 2.90 (0.06) & 3.75 (1.00) & 4.10 (1.00) \\ & 3 & 2.43 (0.00) & 2.29 (0.30) & 2.85 (1.00) & 3.15 (1.00) \\ & 2 & 1.75 (1.00) & 1.61 (0.64) & 1.89 (1.00) & 2.16 (1.00) \\ \hline \end{tabular} \caption{Mean values of the box-counting estimator (BC), capacity estimator (CAP), correlation dimension estimator (CD) and the global estimator based on the pointwise dimension (PW) over $B=50$ samples of size $n=2500$. Samples are generated uniformly on hypercubes $S\subset\mathbb{R}^d$ for $d=2,\ldots,7$, with Minkowski dimension ${\dim(S)=2,\ldots,d}$. In parentheses, the proportion of times that each estimator correctly estimates $\dim(S)$.} \label{tab:SQ} \end{table} Table \ref{tab:HA} presents the results on the synthetic benchmark. 
In order to maintain the same conditions as in \cite{cam15}, we generated $B=20$ samples of size $n=2500$ uniformly on each manifold $\mathcal{M}_i$, $i =1,\ldots, 13$. The manifolds considered cover a diverse range of structures. They include a $(d-1)$-dimensional sphere linearly embedded ($\mathcal{M}_1$), affine spaces ($\mathcal{M}_2$ and $\mathcal{M}_9$), a 2-dimensional helix ($\mathcal{M}_5$), a Swiss-Roll ($\mathcal{M}_7$) and various nonlinear manifolds ($\mathcal{M}_4$, $\mathcal{M}_6$ and $\mathcal{M}_8$), among others. For a detailed description of the manifolds, we refer to Table 1 in \cite{cam15}, where the same notation is used to facilitate direct comparison. For generating the samples in the synthetic benchmark, we used the tool\footnote{Available at \url{https://www.ml.uni-saarland.de/code/IntDim/IntDim.html}} developed alongside \cite{hei05}. The box-counting estimator, which slightly outperforms in some instances the capacity estimator for low-dimensional manifolds, exhibits a more erratic behavior when applied to higher-dimensional manifolds. The correlation dimension estimator and the pointwise dimension estimator show the best performance overall. Even so, while both tend to underestimate the true dimension in high-dimensional cases, the pointwise dimension estimator seems to perform slightly better in some instances (e.g., $\mathcal{M}_1$, $\mathcal{M}_9$, $\mathcal{M}_{10}$, or $\mathcal{M}_{12}$). \begin{table}[ht] \centering \begin{tabular}{cccrrrr} \hline Manifold&$d$ & $\dim(\mathcal{M}_i)$ & \multicolumn{1}{c}{BC} & \multicolumn{1}{c}{CAP} & \multicolumn{1}{c}{CD} & \multicolumn{1}{c}{PW} \\ \hline $\mathcal{M}_1$ & 11 & 10 & 10.48 (0.55) & 6.65 (0.00) & 9.06 (0.00) & 9.99 (1.00) \\ $\mathcal{M}_2$ & 5 & 3 & 2.3 (0.00) & 2.2 (0.15) & 2.89 (1.00) & 3.15 (1.00) \\ $\mathcal{M}_3$ & 6 & 4 & 2.68 (0.00) & 2.7 (0.05) & 3.59 (0.90) & 4.36 (0.95) \\ $\mathcal{M}_4$ & 8 & 4 & 4.11 (1.00) & 4.07 (0.90) & 3.79 (1.00) & 4.36 (1.00) \\ $\mathcal{M}_5$ & 3 & 2 & 1.85 (1.00) & 1.68 (0.85) & 1.99 (1.00) & 2.15 (1.00) \\ $\mathcal{M}_6$ & 36 & 6 & 12.15 (0.00) & 5.56 (0.55) & 5.79 (0.95) & 7.27 (0.00) \\ $\mathcal{M}_7$ & 3 & 2 & 2.1 (1.00) & 2.41 (0.60) & 1.98 (1.00) & 2.15 (1.00) \\ $\mathcal{M}_8$ & 72 & 12 & 25.03 (0.00) & 8.54 (0.00) & 11.69 (0.90) & 16.02 (0.00) \\ $\mathcal{M}_9$ & 20 & 20 & 26.88 (0.00) & 8.74 (0.00) & 14.43 (0.00) & 16.51 (0.00) \\ $\mathcal{M}_{10}$ & 11 & 10 & 2.45 (0.00) & 5.43 (0.00) & 8.34 (0.00) & 9.12 (0.05) \\ $\mathcal{M}_{11}$ & 3 & 2 & 1.95 (1.00) & 2.65 (0.50) & 2.02 (1.00) & 2.16 (1.00) \\ $\mathcal{M}_{12}$ & 20 & 20 & 10.71 (0.00) & 7.16 (0.00) & 14.04 (0.00) & 18.95 (0.00) \\ $\mathcal{M}_{13}$ & 13 & 1 &0.96 (1.00) & 1.26 (0.95) & 1.25 (1.00) & 1.25 (1.00) \\ \hline \end{tabular} \caption{Mean values of the box-counting dimension estimator (BC), capacity dimension estimator (CAP), correlation dimension estimator (CD) and the global estimator based on the pointwise dimension (PW) over $B=20$ samples of size $n=2500$. Samples are generated uniformly on manifolds $\mathcal{M}_{i}$, $i=1,\ldots, 13$ in $\mathbb{R}^d$ with Minkowski dimension $\dim(\mathcal{M}_{i})$. In parentheses, the proportion of times that each estimator correctly estimates $\dim(\mathcal{M}_{i})$.} \label{tab:HA} \end{table} To conclude, we analyze the behavior of the estimators in a noisy context. More specifically, we again consider $S$ to be hypercubes of side length one, for different values of $d$ and $\dim(S)$. 
Unlike in the previous scenarios, this time we add \(d\)-dimensional Gaussian noise \(N_d(0, \sigma^2 I_d)\) to the uniform samples $\aleph_n=\{X_1,\dots,X_n\}$, for various values of \(\sigma\). The results are shown in Figure \ref{fig:noise}. Note that in the noiseless model with \(\sigma = 0\) (solid line), the points coincide with the values in Table \ref{tab:SQ}. Furthermore, in this context, a perfect estimate would result in the points lying on the diagonal in each plot. The pointwise dimension estimator comes closest to this ideal behavior, followed by the correlation dimension estimator. For the other two estimators, as previously mentioned, it can be observed that they both tend to underestimate the true dimension. On the other hand, as expected, when noise is introduced, the estimated values increase, indicating that the data now essentially live in a full-dimensional subset of the ambient space. Nonetheless, for low noise levels, the pointwise dimension estimator still provides reasonable estimates of \(\dim(S)\), demonstrating its robustness in the presence of noise. \begin{figure}[h!] \centering \includegraphics[width=0.83\textwidth]{noise_no_axes_3.pdf} \caption{Mean values of the box-counting estimator (BC), capacity estimator (CAP), correlation dimension estimator (CD) and the global estimator based on the pointwise dimension (PW) over $B=50$ samples of size $n=2500$. For the noiseless model (solid lines), samples are generated uniformly on hypercubes $S\subset\mathbb{R}^d$ for $d=2,\ldots,7$, with Minkowski dimension $\dim(S)=2,\ldots,d$. For the noisy settings, $d$-dimensional Gaussian noise $N_d(0,\sigma^2 I_d)$ is added to the samples with $\sigma=0.005$ (dashed lines), $\sigma=0.01$ (dotted lines), $\sigma=0.02$ (dot-dashed lines), and $\sigma=0.05$ (two-dashed lines).} \label{fig:noise} \end{figure} \section{Conclusions}\label{sec:conclusions} We have proved consistency results, and obtained convergence rates, for three estimators of the dimension of a set $S$, proposed in \cite{keg02}, \cite{gras83} and \cite{you82}, when the sample data consist of random observations on $S$. Here ``consistency'' must be understood in the statistical sense, meaning stochastic convergence to the true value as the sample size tends to infinity. Our methodology is based on techniques similar to those typically employed in nonparametrics, including the use of a sequence of smoothing parameters $r_n$ tending to zero ``slowly enough''. Our proofs crucially rely on some auxiliary ``volume-based'' estimators, defined in terms of the volume of the $r_n$-dilated sample. Under the (not very restrictive) assumption that the underlying set $S$ has a polynomial volume function on some compact interval $[0,\mathbf R]$, we are able to derive more informative results, with an explicit choice of the sequence of smoothing parameters. From a more practical point of view, our experimental results suggest that the pointwise dimension estimator proposed in \cite{you82} might be a competitive choice in the dimension estimation problem. It should be noted that the aim of the theoretical results in Section \ref{sec:consistencia} is different from (though supplemental to) that of the empirical study in Section \ref{sec:emp}. In Section \ref{sec:consistencia} we are concerned with asymptotic results. So, the goal is to make sure that we are in fact using consistent estimators and to establish the order of the smoothing parameters $r_n$ that ensures consistency. No attempt is made to establish optimality for the choice of $r_n$.
This would be a much more complicated issue, worthy of further study. In the empirical Section \ref{sec:emp} we analyse, via simulations, the practical performance of the considered estimators for a given sample size. Thus, the use of the data-driven smoothing methods implemented in the available software, and considered in the previous literature, appeared to be the most sensible choice. \section*{Appendix A} As pointed out above, the definition of $\dim_\textnormal{cd}$ depends (unlike other popular notions of dimension) on a given probability measure $P$. The following lemma states that the final output is in fact the same for all possible choices of $P$ fulfilling some regularity conditions. \begin{lemma}\label{lemma:cd} Assume that there exists a Borel measure $\nu$ and positive constants $C_1<C_2$, $\ell$ and $r_0$ such that, for all $x\in S$, $C_1r^\ell\leq \nu(B(x,r))\leq C_2r^\ell$, for all $r<r_0$. Assume also that $P$ fulfils the ``standardness'' condition $P(B(x,r))\geq \gamma \nu(B(x,r))$ for some $\gamma>0$ and $r$ small enough (say $r<r_0$). Suppose finally that $P$ has a bounded density, $f$, with respect to $\nu$. Then $ \dim_{\textnormal{cd}}(P)=\ell$. \end{lemma} \begin{proof} First, observe that \begin{align*} p(r)=&\int_S\mathbb{P}(\|X_1-x\|<r)\, dP(x)=\int_{S} \nu(B(x,r)) \frac{1}{\nu(B(x,r))} P(B(x,r))\, dP(x). \end{align*} Using the standardness assumption, we have for $r<r_0$ \begin{align*} \gamma C_1 r^\ell \leq p(r)&\leq C_2r^\ell \int_{S} \frac{1}{\nu(B(x,r))} P(B(x,r)) dP(x)\\ & =C_2r^\ell \int_{S} \Bigg[\frac{1}{\nu(B(x,r))} \int_{B(x,r)}f(y)d\nu(y)\Bigg]dP(x). \end{align*} Since $C_1r^\ell\leq \nu(B(x,r))\leq C_2r^\ell$, $\nu$ is a doubling measure. Then, by Theorem 1.8 in \cite{heinonen} $$\Bigg[\frac{1}{\nu(B(x,r))} \int_{B(x,r)}f(y)d\nu(y)\Bigg]\to f(x),$$ for almost all $x\in S$ with respect to $\nu$. Then, by the Dominated Convergence Theorem $$\int_{S} \Bigg[\frac{1}{\nu(B(x,r))} \int_{B(x,r)}f(y)d\nu(y)\Bigg]dP(x)\to \int_S f^2(x)d\nu(x)=L>0.$$ Finally, for $r$ small enough, $ \gamma C_1 r^\ell \leq p(r)\leq 2C_2Lr^\ell$, from which it follows that $\log(p(r))/\log(r)\to \ell$. \end{proof} \textbf{This manuscript was supported by Grants...} \begin{thebibliography}{9} \bibitem[\protect\citeauthoryear{Aaron, Cholaquidis, Cuevas}{2017}]{aar17} \rm{Aaron, C., Cholaquidis, A. and Cuevas, A.} (2017). Detection of low dimensionality and data denoising via set estimation techniques. \textit{Electronic Journal of Statistics}, \textbf{11}(2), 4596--4628. \bibitem[\protect\citeauthoryear{Bishop and Peres}{2017}]{bis17} Bishop, C. J., and Peres, Y. (2017). \textit{Fractals in probability and analysis} (Vol. 162). Cambridge University Press. \bibitem[\protect\citeauthoryear{Camastra and Staiano}{2016}]{cam16} Camastra, F., and Staiano, A. (2016). Intrinsic dimension estimation: Advances and open problems. \textit{Information Sciences}, \textbf{328}, 26--41. \bibitem[\protect\citeauthoryear{Campadelli et al}{2015}]{cam15} Campadelli, P., Casiraghi, E., Ceruti, C., and Rozza, A. (2015). Intrinsic Dimension Estimation: Relevant Techniques and a Benchmark Framework. \textit{Mathematical Problems in Engineering}, Article ID 759567, 1--21. \bibitem[\protect\citeauthoryear{Cholaquidis \textit{et al.}}{2024}]{ch23} \rm{Cholaquidis, A., Cuevas, A. and Moreno, L.} (2024). On the notion of polynomial reach: A statistical application. \textit{Electronic Journal of Statistics}, \textbf{18}, 3437--3460.
\bibitem[\protect\citeauthoryear{Berrendero \textit{et al.}}{2014}]{ch14} Berrendero, J. R., Cholaquidis, A., Cuevas, A., and Fraiman, R. (2014). A geometrically motivated parametric model in manifold estimation. \textit{Statistics}, \textbf{48}(5), 983--1004. \bibitem[\protect\citeauthoryear{Cuevas and Pateiro-L\'opez}{2018}]{cuepat18} \rm{Cuevas, A. and Pateiro-L\'opez, B.} (2018). Polynomial volume estimation and its applications. \textit{Journal of Statistical Planning and Inference}, \textbf{196}, 174--184. \bibitem[\protect\citeauthoryear{Cuevas and Rodr\'{i}guez-Casal}{2004}]{cue04} Cuevas, A. and Rodr\'{i}guez-Casal, A. (2004). On boundary estimation. \textit{Advances in Applied Probability}, \textbf{36}(2), 340--354. \bibitem[\protect\citeauthoryear{Devroye, Gy{\"o}rfi and Lugosi}{1996}]{pattern} Devroye, L., Gy{\"o}rfi, L. and Lugosi, G. (1996). \textit{A probabilistic theory of pattern recognition}. Springer. \bibitem[\protect\citeauthoryear{Falconer}{2004}]{fal04} Falconer, K. (2004). \textit{Fractal geometry: Mathematical Foundations and Applications}. Wiley. \bibitem[\protect\citeauthoryear{Federer}{1959}]{fed59} \rm{Federer, H.} (1959). Curvature measures. \textit{Trans. Amer. Math. Soc.}, \textbf{93}, 418--491. \bibitem[\protect\citeauthoryear{Federer}{1969}]{fed:69} \rm{Federer, H.} (1969). \textit{Geometric Measure Theory}. {Springer}. \bibitem[\protect\citeauthoryear{Fefferman et al.}{2016}]{fef16} Fefferman, C., Mitter, S. and Narayanan, H. (2016). Testing the manifold hypothesis. \textit{Journal of the American Mathematical Society}, \textbf{29}(4), 983--1049. \bibitem[\protect\citeauthoryear{Genovese et al.}{2012a}]{gen12a} Genovese, C.R., Perone-Pacifico, M., Verdinelli, I. and Wasserman, L. (2012a). The geometry of nonparametric filament estimation. \textit{Journal of the American Statistical Association}, \textbf{107}, 788--799. \bibitem[\protect\citeauthoryear{Genovese et al.}{2012b}]{gen12b} Genovese, C.R., Perone-Pacifico, M., Verdinelli, I. and Wasserman, L. (2012b). Minimax Manifold Estimation. \textit{Journal of Machine Learning Research}, \textbf{13}, 1263--1291. \bibitem[\protect\citeauthoryear{Genovese et al}{2012c}]{gen12c} Genovese, C.R., Perone-Pacifico, M., Verdinelli, I. and Wasserman, L. (2012c). Manifold estimation and singular deconvolution under Hausdorff loss. \textit{Annals of Statistics}, \textbf{40}, 941--963. \bibitem[\protect\citeauthoryear{Grassberger and Procaccia}{1983}]{gras83} Grassberger, P. and Procaccia, I. (1983). Measuring the Strangeness of Strange Attractors. \textit{Physica D: Nonlinear Phenomena}, \textbf{9}, 189--208. \bibitem[\protect\citeauthoryear{Hastie and Stuetzle}{1989}]{has89} Hastie, T. and Stuetzle, W. (1989). Principal curves. \textit{Journal of the American Statistical Association}, \textbf{84}, 502--516. \bibitem[\protect\citeauthoryear{Hausdorff}{1919}]{haus19} Hausdorff, F. (1919). Dimension und \"au\ss eres Ma\ss. \textit{Math. Ann.}, \textbf{79}, 157--179. \bibitem[\protect\citeauthoryear{Hein and Audibert}{2005}]{hei05} \rm{Hein, M. and Audibert, Y.} (2005). Intrinsic Dimensionality Estimation of Submanifolds in Euclidean space. \textit{Proceedings of the 22nd International Conference on Machine Learning}, 289--296. \bibitem[\protect\citeauthoryear{Heinonen}{2001}]{heinonen} \rm{Heinonen, J.} (2001). \textit{Lectures on Analysis on Metric Spaces}. Springer. \bibitem[\protect\citeauthoryear{K\'egl}{2002}]{keg02} K\'egl, B. (2002). Intrinsic dimension estimation using packing numbers.
\textit{Advances in Neural Information Processing Systems}, \textbf{15}. \bibitem[\protect\citeauthoryear{Pennec}{2006}]{pen06} Pennec, X. (2006). Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. \textit{Journal of Mathematical Imaging and Vision}, \textbf{25}, 127--154. \bibitem[\protect\citeauthoryear{Pesin}{1993}]{pes93} Pesin, Y. B. (1993). On rigorous mathematical definitions of correlation dimension and generalized spectrum for dimensions. \textit{Journal of Statistical Physics}, \textbf{71}, 529--547. \bibitem[\protect\citeauthoryear{Qiu et al.}{2022}]{qiu22} Qiu, H., Yang, Y. and Rezakhah, S. (2022). Intrinsic dimension estimation method based on correlation dimension and kNN method. \textit{Knowledge-Based Systems}, \textbf{235}, 107627. \bibitem[\protect\citeauthoryear{Rataj and Winter}{2010}]{rataj10} \rm{Rataj, J. and Winter, S.} (2010). On Volume and Surface Area of Parallel Sets. \textit{Indiana University Mathematics Journal}, \textbf{59}, 1661--1685. \bibitem[\protect\citeauthoryear{Serfling}{2009}]{serfling} Serfling, R. J. (2009). \textit{Approximation Theorems of Mathematical Statistics}. John Wiley \& Sons. \bibitem[\protect\citeauthoryear{Stach\'o, L. L.}{1976}]{st76} \rm{Stach\'o, L. L.} (1976). On the volume function of parallel sets. \textit{Acta Sci. Math.}, \textbf{38}, 365--374. \bibitem[\protect\citeauthoryear{You and Shung}{2022}]{you22} \rm{You, K. and Shung, D.} (2022). Rdimtools: An R Package for Dimension Reduction and Intrinsic Dimension Estimation. \textit{Software Impacts}, \textbf{100414}. \bibitem[\protect\citeauthoryear{Young}{1982}]{you82} Young, L. S. (1982). Dimension, entropy and Lyapunov exponents. \textit{Ergodic Theory and Dynamical Systems}, \textbf{2}(1), 109--124. \end{thebibliography} \end{document}
2412.13929v2
http://arxiv.org/abs/2412.13929v2
A novel necessary and sufficient condition for the stability of $2\times 2$ first-order linear hyperbolic systems
\documentclass[10pt]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[top=3cm, bottom=3cm, left=3cm, right=3cm]{geometry} \usepackage{amsmath,amssymb,amsfonts,amsthm} \usepackage{algorithmic} \usepackage{graphicx} \usepackage{textcomp} \usepackage{xcolor} \usepackage{float} \usepackage{tikz} \usepackage{pgf,tikz} \usepackage{graphicx} \usepackage{enumitem} \usepackage[noadjust]{cite} \usepackage{dsfont} \usepackage{amsmath} \usepackage{tabularx} \usepackage{booktabs} \usepackage{caption} \usepackage{array} \usepackage[normalem]{ulem} \usepackage{url} \usepackage{lmodern} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \theoremstyle{definition} \newtheorem{remark}{Remark} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\C}{\mathbb{C}} \newcommand{\diff}{\mathrm{d}} \DeclareMathOperator{\Ln}{Ln} \DeclareMathOperator{\real}{Re} \DeclareMathOperator{\imag}{Im} \DeclareMathOperator{\sign}{sgn} \begin{document} \title{A novel necessary and sufficient condition for the stability of $2\times 2$ first-order linear hyperbolic systems} \author{Ismaïla Balogoun\thanks{Université Paris-Saclay, CNRS, CentraleSupélec, Inria, Laboratoire des Signaux et Systèmes, 91190 Gif-sur-Yvette, France} \and Jean Auriol\footnotemark[1] \and Islam Boussaada\footnotemark[1] \thanks{IPSA Paris, 94200 Ivry-sur-Seine, France} \and Guilherme Mazanti\footnotemark[1] \thanks{Fédération de Mathématiques de CentraleSupélec, 91190, Gif-sur-Yvette, France}} \date{} \maketitle \begin{abstract} In this paper, we establish a necessary and sufficient stability condition for a class of two coupled first-order linear hyperbolic partial differential equations. Through a backstepping transform, the problem is reformulated as a stability problem for an integral difference equation, that is, a difference equation with distributed delay. Building upon a Stépán--Hassard argument variation theorem originally designed for time-delay systems of retarded type, we then introduce a theorem that counts the number of unstable roots of our integral difference equation. This leads to the expected necessary and sufficient stability criterion for the system of first-order linear hyperbolic partial differential equations. Finally, we validate our theoretical findings through simulations. \bigskip \noindent\textbf{Keywords.} Hyperbolic partial differential equations, integral difference equations, stability analysis, spectral methods. \end{abstract} \section{Introduction} Systems of first-order hyperbolic partial differential equations (PDEs) have been extensively studied over the years due to their application in modeling various physical phenomena. These include drilling devices \cite{auriol2020closed,auriol2022comparing}, water management systems \cite{diagne2017control}, aeronomy \cite{schunk1975transport}, cable vibration dynamics \cite{wang2020vibration} and pipelines \cite{rager2015simplified}, traffic networks~\cite{espitia2022traffic}. Comprehensive overviews of current research in this field can be found in \cite{bastin2016stability} and \cite{hayat2021boundary}. When there are no in-domain coupling terms, the method of characteristics can be used to relate the behavior of these systems to time-delay systems, a link extensively explored in the literature \cite{Chitour2024Approximate, Chitour2021One, Chitour2016Stability, cooke1968differential, slemrod1971nonexistence}. 
The situation becomes more complicated when in-domain coupling terms are present. To overcome this difficulty, a possible strategy, adopted for instance in \cite{saba2019stability, auriol2019explicit, auriol2019sensing}, is to make use of the backstepping transform for hyperbolic systems from \cite{krstic2008boundary} to convert these coupling terms into integral boundary terms, and then use the method of characteristics to relate the behavior of these systems to integral difference equations. Control questions concerning time-delay systems present intriguing and complex mathematical challenges. For linear time-invariant systems, stability and stabilization issues can be addressed through spectral methods, as detailed in \cite{hale1993introduction, michiels2014stability}. Similarly to linear time-invariant finite-dimensional systems, the stability of time-delay systems can be characterized through the position of its spectrum with respect to the imaginary axis. For instance, exponential stability is equivalent to the existence of $\alpha>0$ such that $\real s \leq-\alpha$ for every $s$ in the spectrum of the time-delay system. The spectrum of systems with finitely many discrete delays is made only of eigenvalues, which are infinite in number and can be characterized as the complex roots of a quasipolynomial. Quasipolynomials were extensively studied in the literature, and characterizing the location of their roots is a challenging problem that has attracted much research effort, both from theoretical and numerical points of view \cite{avellar1980zeros, berenstein2012complex, vyhlidal2014qpmr, stepan1989retarded, hassard1997counting}. Important insights on the stability of difference equations with finitely many discrete delays, including a discussion of robustness with respect to the delays, can be found in \cite[Chapter~9, Section~6]{hale1993introduction}, with extensions to systems with time-varying parameters provided in \cite{Chitour2016Stability}. One can also obtain necessary or sufficient stability conditions using Lyapunov--Krasovskii functionals, as done in \cite{campos2018necessary} for systems with finitely many discrete delays and in \cite{ortiz2022necessary} for integral difference equations. Cauchy's argument principle, which is a standard result in complex analysis, turns out to be an efficient way to investigate the stability of delay systems. It is used, in particular, in the St\'{e}p\'{a}n--Hassard argument variation approach, which allows one to count the number of eigenvalues with positive real parts. The corresponding counting formula, which was first introduced in \cite{stepan1989retarded} and then refined in \cite{hassard1997counting}, remains relevant not only for stability purposes but also for recent developments in the stabilization of time-delay systems, through the so-called \emph{partial pole placement} method. Indeed, this method consists in selecting the system parameters in order to enforce a finite number of prescribed eigenvalues of the system, and the dominance of these assigned eigenvalues is often shown by exploiting the St\'{e}p\'{a}n--Hassard counting formula, with stabilization being achieved when all the chosen eigenvalues have negative real parts \cite{boussaada2016characterizing, bedouhene2020real, Fueyo2023Pole, Boussaada2022Generic}. 
While the partial pole placement method has shown its effectiveness in the prescribed stabilization of scalar first-order hyperbolic partial differential equations as discussed in \cite{Boussaada2022Generic,ammari:hal-04200203,benarab:hal-04196450, schmoderer:hal-04194365}, the application of the underlying Hille oscillation Theorem \cite{hille1922} appears to be computationally cumbersome when dealing with systems of coupled hyperbolic partial differential equations. In this paper, we analyze the stability of a class of $2\times 2$ linear hyperbolic coupled PDEs. We use the backstepping transformation from \cite{auriol2018delay} to transform the original system into a target system that can be written as an integral difference equation. Then, we address the stability of the integral difference equation through spectral methods. Although the equation is not of the retarded type, we show that St\'{e}p\'{a}n--Hassard arguments from \cite{stepan1989retarded, hassard1997counting} can be adapted to count the number of roots with strictly positive real parts, assuming there are no roots on the imaginary axis. Combined with an analysis of the vertical asymptotes of the roots of the characteristic function, we obtain, as a consequence, necessary and sufficient conditions for the stability of the difference equation in question, which will yield the same results for the original hyperbolic system. This paper is organized as follows. Section~\ref{sec_description} presents the system under consideration and recalls the results of \cite{saba2019stability} transforming our system into an integral difference equation. Section~\ref{sec_main} contains our main results: Theorem~\ref{thm_open} uses the St\'{e}p\'{a}n--Hassard approach to count the number of unstable roots of our integral difference equation, Corollary~\ref{coro:stability} obtains as a consequence necessary and sufficient conditions for exponential stability, and Corollary~\ref{result:eqhyp} uses the relation between the integral difference equation and our original system in order to obtain necessary and sufficient conditions for exponential stability of the latter. A comparison between our results and some other results available in the literature is provided in Section~\ref{sec_comparaison}, while, in Section~\ref{sec_approximation}, we focus on the case where the in-domain coupling terms are constant and provide some numerical insights based on polynomial approximations of the integral kernel of the integral difference equation. The paper is concluded by final remarks and perspectives provided in Section~\ref{sec_conclusion}. \paragraph*{Notation} In this paper, the principal branch of the complex logarithm is denoted by $\Ln$, and the principal argument of a complex number is denoted $\arg$. Given a measurable subset $\Omega$ of $\mathbb{R}$ with positive measure, $L^2(\Omega,\R)$ denotes the set of (Lebesgue) measurable functions $f$ mapping the set $\Omega$ into $\mathbb{R}$ such that $\int_\Omega \lvert f(x) \rvert^2 \diff x < +\infty$, identifying functions which are equal almost everywhere. The associated norm is $\lVert f\rVert_{L^2(\Omega)}^2:= \int_\Omega \lvert f(x)\rvert^2 \diff x$. The Sobolev space $H^{1}(\Omega,\R)$ is defined as the set $\lbrace f \in L^2(\Omega,\R)\mid f^\prime\in L^2(\Omega,\R)\rbrace$, where the derivative is to be understood in the sense of distributions. 
Given a delay $\tau > 0$, a function $z\colon [-\tau, \infty) \mapsto \mathbb{R}$, and $t \geq 0$, the history function of $z$ at time $t$ is the function $z_{[t]} \colon [- \tau, 0] \to \mathbb R$ defined by $z_{[t]}(\theta) = z(t + \theta)$ for $\theta \in [-\tau, 0]$. For a set $I$, $\mathds{1}_I(x)$ is the function defined by \[ \mathds{1}_I(x) = \begin{cases} 1 & \text { if } x \in I, \\ 0 & \text { otherwise}. \end{cases} \] \section{Problem description} \label{sec_description} \subsection{System under consideration} We are interested in the stability analysis of the linear hyperbolic system \begin{equation} \label{eq:hyperbolic_couple} \left\{ \begin{aligned} &u_t(t, x)+\lambda u_x(t, x)=\sigma^{+}(x) v(t, x),~\,t>0,~ x \in[0,1], \\ &v_t(t, x)-\mu v_x(t, x)=\sigma^{-}(x) u(t, x),\,~t>0,~ x \in[0,1],\\ &u(t, 0)=q v(t, 0),\,~t>0,\\ &v(t, 1)=\rho u(t, 1),\,~t>0, \end{aligned} \right. \end{equation} where $(u(t,x), v(t,x))^T$ is the state of the system, the different arguments evolving in $\{(t,x) \mid t>0,~ x \in [0,1] \}$. The in-domain coupling terms $\sigma^{+}$ and $\sigma^{-}$ are assumed to be continuous functions, whereas the boundary coupling terms $q$ and $\rho$ and the velocities $\lambda>0$ and $\mu>0$ are assumed to be constant. We denote by $u_0(\cdot)=u(0,\cdot)$ and $v_0(\cdot)=v(0,\cdot)$ the initial conditions associated with~\eqref{eq:hyperbolic_couple}. We assume here that they belong to $H^1((0,1),\mathbb{R})$ and satisfy the compatibility conditions \begin{align} u_0(0)=qv_0(0),\quad v_0(1)=\rho u_0(1). \label{compatibility_condition_u_v} \end{align} As shown in \cite[Appendix~A]{bastin2016stability}, system~\eqref{eq:hyperbolic_couple} with initial condition $(u_0,v_0)$ in $H^1((0, 1), \mathbb R^2)$ satisfying the compatibility condition~\eqref{compatibility_condition_u_v} is well-posed, and its solution $(u, v)$ belongs to the space $C^1([0, +\infty),\allowbreak L^2((0, 1), \mathbb R^2)) \cap C^0([0, +\infty),\allowbreak H^1((0, 1), \mathbb R^2))$. In the sequel, we define the characteristic time $\tau$ of the system as \begin{equation}\label{tau} \tau=\frac{1}{\lambda}+\frac{1}{\mu}. \end{equation} Finally, we assume $q\neq 0$. Although the computations can be adjusted to deal with the case $q = 0$, we make this simplifying assumption here for the sake of clarity of presentation, and we direct the reader to \cite[Section~3.5]{coron2013local} for the scenario where $q=0$. \subsection{Objective and methodology} Our objective is to construct necessary and sufficient stability conditions that guarantee the exponential stability of system~\eqref{eq:hyperbolic_couple} in $L^2$ norm, namely, the existence of $\nu>0$ and $C\geq 0$ such that, for any $(u_0,v_0) \in H^1([0,1],\mathbb{R})\times H^1([0,1],\mathbb{R})$ satisfying the compatibility condition~\eqref{compatibility_condition_u_v}, the solution $(u,v)$ of system~\eqref{eq:hyperbolic_couple} satisfies \begin{align} \lVert(u(t,\cdot),v(t,\cdot))\rVert_{(L^2(0,1))^2} \leq C\mathrm{e}^{-\nu t} \lVert (u_0,v_0) \rVert_{(L^2(0,1))^2},~t\geq 0. \end{align} As explained in the introduction, stability conditions for systems of conservation and balance laws can be found in the literature~\cite{bastin2016stability}. Most of the existing results are based on (weighted $L^2$) Lyapunov functions and linear matrix inequalities (LMIs) and are, therefore, sufficient only conditions. 
It has been shown in~\cite{auriol2019explicit} that systems of first-order hyperbolic PDEs have stability properties equivalent to those of a class of integral difference equations (IDEs). This representation has been successfully used in~\cite{saba2019stability} to obtain a new stability condition. However, the proposed condition could not be easily verified and had to be relaxed to a sufficient-only condition to be implemented. In this paper, we use the same time-delay system framework to characterize the unstable roots of the system~\eqref{eq:hyperbolic_couple} and obtain \emph{implementable} necessary and sufficient stability conditions. \subsection{Equivalent integral difference equation} In this section, we adopt the approach presented in~\cite{saba2019stability, auriol2019explicit} to rewrite the PDE system~\eqref{eq:hyperbolic_couple} as an IDE with equivalent stability properties. To do so, we use a classical backstepping transformation. The detailed computations can be found in~\cite{saba2019stability}, and we only recall here the main results that will be of use to us in the sequel. Let us consider the Volterra change of coordinates defined in \cite{coron2013local}, given by \begin{equation}\label{backstepping} \begin{aligned} \alpha(t, x) & =u(t, x)-\int_0^x\left(K^{u u}(x, \xi) u(t, \xi)+K^{u v}(x, \xi) v(t, \xi)\right) \diff \xi, \\ \beta(t, x) & =v(t, x)-\int_0^x\left(K^{v u}(x, \xi) u(t, \xi)+K^{v v}(x, \xi) v(t, \xi)\right) \diff \xi, \end{aligned} \end{equation} where the kernels $K^{u u}, K^{u v}, K^{v u}, K^{v v}$ are defined on the triangular domain $\mathcal{T}=\{(x, \xi) \in [0,1]^2 \mid \xi \leq x\}$. They are bounded continuous functions defined by a set of hyperbolic PDEs given in~\cite{coron2013local}. The Volterra backstepping transformation \eqref{backstepping} is invertible~\cite{yoshida1960lectures} and the inverse transformation can be expressed as \begin{equation}\label{backstepping_inverse} \begin{aligned} u(t, x) & = \alpha(t, x)+\int_0^x\left(L^{\alpha \alpha}(x, \xi) \alpha(t, \xi)+L^{\alpha \beta}(x, \xi) \beta(t, \xi)\right) \diff \xi, \\ v(t, x) & = \beta(t, x)+\int_0^x\left(L^{\beta \alpha}(x, \xi) \alpha(t, \xi)+L^{\beta \beta}(x, \xi) \beta(t, \xi)\right) \diff \xi, \end{aligned} \end{equation} where the kernels $L^{\alpha \alpha}, L^{\alpha \beta}, L^{\beta \alpha}$, and $L^{\beta \beta}$ are bounded continuous functions defined on $\mathcal{T}$. The dynamics of the system in the new coordinates are \begin{equation}\label{new coordinates} \left\{ \begin{aligned} \alpha_t(t, x)+\lambda \alpha_x(t, x) & =0, \\ \beta_t(t, x)-\mu \beta_x(t, x) & =0, \end{aligned} \right. \end{equation} with boundary conditions \begin{equation}\label{boundary} \left\{ \begin{aligned} \alpha(t, 0) & = q \beta(t, 0), \\ \beta(t, 1) & = \rho \alpha(t, 1) + \int_0^1 \left(N^\alpha(\xi) \alpha(t, \xi)+N^\beta(\xi) \beta(t, \xi) \right)\diff \xi, \end{aligned} \right. \end{equation} with \begin{equation} \begin{aligned} N^\alpha(\xi) & = \rho L^{\alpha \alpha}(1, \xi) - L^{\beta \alpha}(1, \xi), \\ N^\beta(\xi) & = \rho L^{\alpha \beta}(1, \xi) - L^{\beta \beta}(1, \xi). \end{aligned} \end{equation} Using the method of characteristics on \eqref{new coordinates} yields, for all $x \in[0,1]$ and $t>\tau$, \begin{equation} \label{eq:alpha-beta-characteristics} \alpha(t, x)=q \beta\left(t-\frac{x}{\lambda}-\frac{1}{\mu}, 1\right), \quad \beta(t, x)=\beta\left(t-\frac{1-x}{\mu}, 1\right).
\end{equation} Consequently, combining this with the boundary conditions \eqref{boundary}, we get \begin{equation}\label{distributed delay} \beta(t, 1)=q \rho \beta(t-\tau, 1) + \int_0^\tau N(\nu) \beta(t-\nu, 1) \diff \nu, \end{equation} where $\tau$ is defined by \eqref{tau} and $N$ is defined by \begin{equation}\label{N} N(\nu)=q \lambda N^\alpha\left(\lambda \nu-\frac{\lambda}{\mu}\right) \mathds{1}_{\left[\frac{1}{\mu}, \tau\right]}(\nu)+\mu N^\beta(1-\mu \nu) \mathds{1}_{\left[0, \frac{1}{\mu}\right]}(\nu). \end{equation} Consequently, $z(t)=\beta(t,1)$ is the solution of an IDE. Note also that, since the solution $(u, v)$ of \eqref{eq:hyperbolic_couple} belongs to $C^0([0, +\infty), H^1((0, 1), \mathbb R^2))$, the same is also true for the pair $(\alpha, \beta)$ defined in \eqref{backstepping}, thanks to the equations satisfied by the kernels $K^{u u}, K^{u v}, K^{v u}, K^{v v}$ from \cite[(3.30)--(3.37)]{coron2013local} and the regularity of these functions. Now, from \eqref{eq:alpha-beta-characteristics}, we have that $\beta(t-h, 1) = \beta(t, 1 - \mu h)$ for every $(t, h)$ with $0 \leq h \leq \frac{1}{\mu}$ and $t \geq h$, and it thus follows that $\beta(\cdot, 1) \in H^1((t - \frac{1}{\mu}, t), \mathbb R)$ for every $t \geq \frac{1}{\mu}$, which yields that $z \in H^1_{\mathrm{loc}}([0, +\infty), \mathbb R)$. The following theorem, whose proof can be found in \cite[Theorem~6.1.3]{auriol2024contributions} or in~\cite{redaud2024domain}, shows how the $L^2$ stability properties of~$z$ relate to those of~$(\alpha, \beta)$ (and consequently to those of~$(u,v)$). \begin{theorem} \label{theorem_equiv_norm} There exist two positive constants~$\kappa_0$ and~$\kappa_1$ such that, for every~$t>\tau$, \begin{equation} \label{eq_ineq_norm} \kappa_0 \lVert z_{[t]} \rVert^2_{L^2(-\frac{1}{\lambda},0)} \leq \lVert (\alpha(t,\cdot), \beta(t,\cdot)) \rVert^2_{(L^2(0,1))^2} \leq \kappa_1 \lVert z_{[t]} \rVert^2_{L^2(-\tau,0)}. \end{equation} Moreover, the exponential stability of~$z_{[t]}$ in the sense of the~$L^2(-\tau, 0)$ norm is equivalent to the exponential stability of~$(\alpha,\beta)$ (or equivalently to $(u,v)$) in the sense of the~$L^2$ norm. \end{theorem} The fact that the norms are different on the two sides of \eqref{eq_ineq_norm} is related to the structure of the difference equation (see, for instance, the design of converse Lyapunov--Krasovskii functions~\cite{pepe2013converse}). The system~\eqref{distributed delay} can be seen as a \emph{comparison system} for the PDE system~\eqref{eq:hyperbolic_couple} (see, e.g.,~\cite{niculescu2001delay} and the references therein). In the rest of the paper, we will focus our stability analysis on the IDE~\eqref{distributed delay}. \section{Main results} \label{sec_main} \subsection{Stability conditions for difference equation with distributed delay} In light of the results presented in the previous section, we now focus on the stability analysis of the IDE \begin{equation}\label{distributed delay0} z(t) = \xi z(t - \tau) + \int_0^\tau N(\nu) z(t - \nu) \diff\nu, \end{equation} where $\tau$ is a positive known delay, $\xi \in \R$, $N\colon [0, \tau] \to \mathbb R$ is an integrable function, and the unknown function is $z\colon [-\tau, +\infty) \to \mathbb R$. 
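Although the analysis below is spectral, equation~\eqref{distributed delay0} can also be simulated directly in the time domain, which is useful for illustration purposes (see Section~\ref{sec_approximation}). The following minimal Python sketch is only illustrative and is not the implementation used for the figures of this paper: the kernel $N$ and the initial history are user-supplied, and the distributed-delay term is discretized by the trapezoidal rule.
\begin{verbatim}
# Minimal illustrative sketch: time-stepping of the IDE
#   z(t) = xi*z(t - tau) + int_0^tau N(nu)*z(t - nu) dnu,
# with the distributed-delay integral approximated by the trapezoidal rule.
import numpy as np

def simulate_ide(xi, tau, N, z0, T, m=200):
    """xi, tau: parameters; N: kernel on [0, tau] (vectorized); z0: initial
    history on [-tau, 0] (vectorized); T: final time; m: steps per delay."""
    h = tau / m
    nu = np.linspace(0.0, tau, m + 1)
    w = np.full(m + 1, h)
    w[0] = w[-1] = h / 2.0                      # trapezoidal weights
    Nv = N(nu)
    steps = int(T / h)
    z = np.empty(m + 1 + steps)
    z[:m + 1] = z0(np.linspace(-tau, 0.0, m + 1))
    for k in range(m + 1, len(z)):
        # The j = 0 quadrature node involves z(t_k) itself, so the update is
        # solved for z[k]; for small h, 1 - (h/2)*N(0) does not vanish.
        past = xi * z[k - m] + np.dot(w[1:] * Nv[1:], z[k - 1:k - m - 1:-1])
        z[k] = past / (1.0 - w[0] * Nv[0])
    return np.linspace(-tau, steps * h, len(z)), z

# Example with hypothetical data:
# t, z = simulate_ide(0.3, 1.0, lambda nu: 0.2 + 0.0*nu,
#                     lambda s: np.sin(np.pi*s), T=20.0)
\end{verbatim}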
Even though the analysis of \eqref{distributed delay0} is motivated in this paper through its link with \eqref{eq:hyperbolic_couple}, we highlight that the stability analysis of IDEs is of interest in itself and in connection with more general time-delay systems (see, e.g., \cite[Chapter~9]{hale1993introduction}). We assume that the initial data $z_{[0]} = z^0$ of $z$ is known, belongs to the space $H^1([-\tau,0],\mathbb{R})$, and satisfies the compatibility condition $z^0(0)=\xi z^0(-\tau)+\int_0^\tau N(\nu)z^0(-\nu)\diff\nu$. A function~$z\colon [-\tau, \infty) \rightarrow \mathbb{R}$ is called a \emph{solution} of the IDE~\eqref{distributed delay0} with initial condition $z^0$ if~$z_{[0]} = z^0$ and if equation~\eqref{distributed delay0} is satisfied for every~$t \geq 0$. We will also assume here that \begin{equation} \label{q_rho} \lvert \xi\rvert <1. \end{equation} This assumption is motivated by the fact that \eqref{distributed delay0} cannot be exponentially stable if $\lvert \xi\rvert>1$ \cite{henry1974linear,auriol2023robustification}, and amounts to assuming that the \emph{principal part} of the system~\eqref{distributed delay0} (that is, \eqref{distributed delay0} without the integral term corresponding to the distributed delay) is exponentially stable. However, due to the distributed delay term, system~\eqref{distributed delay0} may be unstable even under \eqref{q_rho}. We analyze the stability properties of \eqref{distributed delay0} through spectral methods. Its characteristic equation is \begin{equation}\label{characteristic equation} \Delta(s) = 1-\xi \mathrm{e}^{-s \tau}-\int_0^\tau N(\nu) \mathrm{e}^{-s \nu} \diff \nu=0, \end{equation} where $\Delta\colon \mathbb C \to \mathbb C$ is called the characteristic function of the IDE~\eqref{distributed delay0}. The next result shows how the properties of the function~$\Delta$ relate to the stability properties of the IDE~\eqref{distributed delay0}. \begin{lemma}[{\cite[Chapter~9, Theorem~3.5]{hale1993introduction}, \cite{henry1974linear}}] The IDE~\eqref{distributed delay0} is ex\-po\-nen\-tially stable in $L^2$ norm if and only if there exists $\eta>0$ such that all solutions $s$ of the characteristic equation~\eqref{characteristic equation} satisfy $\real(s) \leq -\eta$. \end{lemma} We start with the following lemma, which provides a necessary condition for the stability of \eqref{distributed delay0} by studying the behavior of nonnegative real roots of $\Delta$. \begin{lemma}\label{necessary_open} The system \eqref{distributed delay0} is not exponentially stable if \begin{equation*}\Delta(0)= 1-\xi -\int_0^\tau N(\nu) \diff \nu\leq 0. \end{equation*} \end{lemma} \begin{proof} If $\Delta(0)=0$, then zero is a root of $\Delta$, and \eqref{distributed delay0} is not exponentially stable. If $\Delta(0)<0$, then there exists at least one positive real root of $\Delta$ since \[ \lim_{\substack{s \to +\infty \\ s \in \R}}\Delta(s)=1 \] and $\Delta$ is continuous on $(0, \infty)$. Thus, system \eqref{distributed delay0} is not exponentially stable. \end{proof} The following lemma presents an interesting property, which is a consequence of~\cite[Theorem~2.1]{hale2002strong}. \begin{lemma}\label{finite roots} Assume that \eqref{q_rho} is satisfied. Then, for all $s_0 > \frac{1}{\tau}\ln \lvert \xi \rvert$, $\Delta$ has a finite number of roots in $\{s \in \mathbb C \mid \real(s) \geq s_0\}$. \end{lemma} This lemma implies that the function $\Delta$ can only have a finite number of roots on the imaginary axis.
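In practice, both $\Delta$ and the quantity $\Delta(0)$ appearing in Lemma~\ref{necessary_open} can be evaluated by numerical quadrature, and then combined with standard root-finding routines (see the remark after Corollary~\ref{coro:stability} below). The following minimal Python sketch is only illustrative; the kernel used in it is hypothetical and is not related to the kernel $N$ from \eqref{N}.
\begin{verbatim}
# Minimal illustrative sketch: numerical evaluation of the characteristic
# function Delta(s) = 1 - xi*exp(-s*tau) - int_0^tau N(nu)*exp(-s*nu) dnu
# by quadrature, together with the necessary condition Delta(0) > 0
# discussed above.  The kernel N used here is hypothetical.
import numpy as np
from scipy.integrate import quad

def Delta(s, xi, tau, N):
    # exp(-s*nu) = exp(-Re(s)*nu) * (cos(Im(s)*nu) - i*sin(Im(s)*nu))
    re = quad(lambda nu: N(nu) * np.exp(-s.real * nu) * np.cos(s.imag * nu),
              0.0, tau)[0]
    im = quad(lambda nu: -N(nu) * np.exp(-s.real * nu) * np.sin(s.imag * nu),
              0.0, tau)[0]
    return 1.0 - xi * np.exp(-s * tau) - (re + 1j * im)

xi, tau = 0.35, 2.0
N = lambda nu: 0.4 * (1.0 - nu / tau)
print(Delta(0j, xi, tau, N).real)   # equals 1 - xi - int_0^tau N(nu) dnu
\end{verbatim}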
Let $\rho_1, \ldots, \rho_m$ be the positive zeros of $M(\omega)=\real\left(\Delta(i\omega)\right)$, repeated according to their multiplicities and ordered so that $0<\rho_m \leq \cdots \leq \rho_1$. The following theorem gives the number of roots of the characteristic function $\Delta$ which lie in $\{s \in \mathbb C \mid \real(s) > 0\}$, counted with their multiplicities. \begin{theorem}\label{thm_open} Assume that equation~\eqref{q_rho} is satisfied, that $\Delta$ has no roots on the imaginary axis, and that \begin{equation}\label{nece_suf} \int_0^\tau N(\nu) \diff \nu< 1-\xi. \end{equation} Then the number of roots of the characteristic function $\Delta$ which lie in $\{s \in \mathbb C \mid \real(s)>0\}$, counted by multiplicity, is given by \begin{equation} \label{2cond} \Gamma := \sum_{j=1}^m(-1)^{j-1} \sign\left(S(\rho_j)\right), \end{equation} where $S\colon \mathbb R \to \mathbb R$ is the function given by $S(\omega)=\imag\left(\Delta(i\omega)\right)$. \end{theorem} \begin{proof} For any $R>0$, let $C_R$ be the positively oriented contour defined by the curves $g_1$ and $g_2$, with \[ g_1 \colon\left\{\begin{aligned} \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] & \to \mathbb C, \\ \theta & \mapsto R \mathrm e^{i \theta}, \end{aligned}\right. \quad\qquad g_2 \colon\left\{\begin{aligned} \left[-R, R\right] & \to \mathbb C, \\ \omega & \mapsto -i\omega. \end{aligned}\right. \] From Lemma~\ref{finite roots}, all zeros of $\Delta$ in $\{s \in \mathbb C \mid \real(s)>0\}$ are inside $C_R$, for sufficiently large $R$. By the argument principle, the number of zeros $n_0$ of $\Delta$ in $\{s \in \mathbb C \mid \real(s)>0\}$, counted with their multiplicities, is given by \begin{equation}\label{prin_gene} n_0=\frac{1}{2 \pi i} \oint_{C_R} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s=\frac{1}{2 \pi i} \int_{g_1} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s+\frac{1}{2 \pi i} \int_{g_2} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s. \end{equation} We now focus on computing the different integral terms. \medskip \noindent\uline{Value of the integral over $g_1$ in \eqref{prin_gene}:} From the Riemann--Lebesgue lemma, we have that \[ \lim_{\substack{\lvert s\rvert \to +\infty \\ \real(s) \geq 0}} \int_0^\tau N(\nu) \mathrm{e}^{-s \nu} \diff \nu = 0. \] Then, for all $\epsilon>0$, there exists $R_0>0$ such that, for all $R\geq R_0$ and $\theta \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$, \begin{equation} \real(\Delta(R\mathrm{e}^{i\theta}))\geq 1-\lvert \xi\rvert-\epsilon. \end{equation} In particular, for $\epsilon=\frac{1-\lvert \xi \rvert}{2}$, we have $ \real(\Delta(R\mathrm{e}^{i\theta}))>0$ for sufficiently large $R$ and all $\theta \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$. Then, for sufficiently large $R$, the function $s \mapsto \Ln(\Delta(s)) = \ln \lvert \Delta(s) \rvert + i \arg(\Delta(s))$ is an analytic function in a neighborhood of $g_1$, with $\frac{\mathrm d}{\mathrm d s} \Ln(\Delta(s)) = \frac{\Delta^\prime(s)}{\Delta(s)}$, and thus \begin{align} \frac{1}{2 \pi i} \int_{g_1} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s = \frac{\Ln\Delta(i R) - \Ln\Delta(-i R)}{2 \pi i} = \frac{1}{\pi} \arg(\Delta(i R)), \label{g1_gene} \end{align} where we used the fact that $\Delta(-i R) = \overline{\Delta(i R)}$. 
\medskip \noindent\uline{Value of the integral over $g_2$ in \eqref{prin_gene}:} Using the fact that \[\Delta(-i \omega) = \overline{\Delta(i \omega)} \quad \text{for every }\omega \in \mathbb R, \] we obtain that $\Delta^\prime(-i\omega) = \overline{\Delta^\prime(i \omega)}$ for every $\omega \in \mathbb R$, which implies \[ \frac{1}{2 \pi i} \int_{g_2} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s = -\frac{1}{\pi} \int_0^R \real\left(\frac{\Delta^\prime(i \omega)}{\Delta(i \omega)}\right) \diff \omega. \] Since $\Delta$ has no roots on the imaginary axis, we have $\Delta(i\omega)=A(\omega)\mathrm{e}^{i\phi(\omega)}$ for some differentiable functions $A\colon \mathbb R \to \mathbb R_+^\ast$ and $\phi\colon \mathbb R \to \mathbb R$ with $\phi(R) = \arg(\Delta(i R))$. Hence \begin{equation*} \frac{\Delta'(i\omega)}{\Delta(i\omega)}=-i\frac{A'(\omega)}{A(\omega)}+\phi'(\omega), \end{equation*} and we deduce that \[ \frac{1}{2 \pi i} \int_{g_2} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s = \frac{1}{\pi}(\phi(0) - \phi(R)). \] According to Condition~\eqref{nece_suf}, we have $M(0)>0$ and $S(0)=0$, where $M\colon \omega\in\R\mapsto \real\left(\Delta(i\omega)\right)$ and $S\colon \omega\in\R\mapsto \imag\left(\Delta(i\omega)\right)$. Then, as shown in \cite[Section~3.7]{hassard1997counting}, we can prove that \[\phi(0)=\pi\sum_{j=1}^m(-1)^{j-1} \sign\left(S(\rho_j)\right).\] Thus \begin{equation}\label{int_g2} \frac{1}{2 \pi i} \int_{g_2} \frac{\Delta'(s)}{\Delta(s)} \diff s=\sum_{j=1}^m(-1)^{j-1} \sign\left(S(\rho_j)\right)-\frac{\arg(\Delta(i R))}{\pi} . \end{equation} Combining~\eqref{g1_gene} and \eqref{int_g2}, we finally obtain that \begin{align}\label{prin_final} \frac{1}{2 \pi i} \oint_{C_R} \frac{\Delta^{\prime}(s)}{\Delta(s)} \diff s=\sum_{j=1}^m(-1)^{j-1} \sign\left(S(\rho_j)\right). \end{align} Consequently, the number of roots of the characteristic function $\Delta$ which lie in $\real(s)>0$, counted by multiplicity, is given by \begin{equation} \sum_{j=1}^m(-1)^{j-1} \sign\left(S(\rho_j)\right), \end{equation} as required. \end{proof} Combining Lemma~\ref{necessary_open}, Theorem~\ref{thm_open}, and Lemma~\ref{finite roots}, we obtain at once the following exponential stability result for \eqref{distributed delay0}. \begin{corollary} \label{coro:stability} Assume that equation~\eqref{q_rho} is satisfied and that $\Delta$ has no roots on the imaginary axis. Then, the system \eqref{distributed delay0} is exponentially stable if and only if \eqref{nece_suf} holds and $\Gamma=0$. \end{corollary} \begin{remark} Note that the use of this result requires finding the roots of $\Delta$ on the imaginary axis. This is equivalent to finding the common roots of the functions $M$ and $S$, where $M\colon \omega \in \mathbb{R} \mapsto \real(\Delta(i\omega))$ and $S\colon \omega \in \mathbb{R} \mapsto \imag(\Delta(i\omega))$. For this purpose, there are many numerical solvers available, such as those in numerical libraries like Scipy (with functions such as \textbf{scipy.optimize.root} and \textbf{scipy.optimize.newton}) and Matlab (with functions like \textbf{fsolve} and \textbf{roots}). These tools and methods enable efficient handling of the task of finding roots of real-valued functions defined on the set of real numbers. \end{remark} \begin{remark} One might wonder whether condition \eqref{nece_suf} is sufficient to guarantee that the function $\Delta$ has no root on the imaginary axis. 
To see that this is not the case, consider the function $N$ given by \[N(\nu)=-\frac{3\pi^2}{16-4\pi}\nu + \frac{\pi^2+2\pi}{16 - 4\pi},\] with $\tau=1$ and $\xi=\frac{1}{2}$. Then, we obtain \[\int_0^\tau N(\nu) \diff \nu=\frac{\pi}{8}<\frac{1}{2},\] which shows that condition \eqref{nece_suf} is satisfied. On the other hand, we compute \[ \Delta(i\tfrac{\pi}{2}) = 1 + \frac{i}{2} - \int_0^1 N(\nu) \mathrm e^{-i \nu \frac{\pi}{2}} \diff\nu = 0, \] showing that $i\frac{\pi}{2}$ is a root of $\Delta$ (and so is its complex conjugate $-i\frac{\pi}{2}$). Figure~\ref{fig:_1} shows the roots\footnote{Computations of the roots were performed with Python's \texttt{cxroots} package \cite{parini2018cxroots}, and the search of roots was limited to the rectangle $\{s \in \mathbb{C} \mid \lvert \real(s) \rvert \leq 5,\lvert \imag(s)\rvert \leq 20\}$.} of $\Delta$ and we can see the presence of roots of $\Delta$ on the imaginary axis. \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{Figures/Poly1.pdf} \caption{Roots of $\Delta$ with $N(\nu)=-\frac{3\pi^2}{16-4\pi}\nu + \frac{\pi^2+2\pi}{16 - 4\pi}$.} \label{fig:_1} \end{figure} \end{remark} \subsection{Application to the stability analysis of the system~\eqref{eq:hyperbolic_couple}} Since the PDE system \eqref{eq:hyperbolic_couple} has equivalent stability properties to those of~\eqref{distributed delay0} with $\xi=\rho q$, we directly obtain the following corollary \begin{corollary} \label{result:eqhyp} Assume that $\lvert q\rho\rvert<1$ and that the characteristic equation $\Delta$ (defined by equation~\eqref{characteristic equation} with $\xi=\rho q$) has no roots on the imaginary axis. Then, the system \eqref{eq:hyperbolic_couple} is exponentially stable if and only if equation \eqref{nece_suf} holds and $\Gamma=0$. \end{corollary} \section{Comparison with other criteria and numerical validation}\label{sec_comparaison} Our next goal is to compare the stability criterion from Corollary~\ref{result:eqhyp} to similar results in the literature. More precisely, we will compare our results to the stability criteria from \cite{bastin2016stability, saba2019stability} for the case where the in-domain coupling terms $\sigma^+$ and $\sigma^-$ are constant. In this particular situation, it is shown in \cite{saba2019stability} that the function $N$ from \eqref{N} is given by \begin{equation} \label{eq:explicit-N-Bessel} N(\nu) = \left(\frac{a}{\tau} + \frac{d(\nu)}{\tau^2}\right) J_0\left(2 \sqrt{h(\nu)}\right) + \frac{d(\nu)}{\tau^2} J_2\left(2 \sqrt{h(\nu)}\right), \quad \nu \in [0, \tau], \end{equation} where \[ \begin{aligned} h(\nu) & = \frac{R}{\tau^2}\nu(\tau-\nu), & \quad d(\nu) & = R(\tau-\nu b), \\ a & = \frac{q}{\mu}\sigma^- + \frac{\rho}{\lambda}\sigma^+, & R & = \frac{\sigma^+\sigma^-}{\lambda\mu}, &\quad b & = 1 + q\rho, \end{aligned} \] and $J_0$ and $J_2$ are Bessel functions of the first kind (see, e.g., \cite{olver2010nist}). Using the series expansion of Bessel functions from \cite[(10.2.2)]{olver2010nist}, the above expression of $N$ can be rewritten as \begin{equation} \label{eq:explicit-N-series} N(\nu) = \left(\frac{a}{\tau} + \frac{d(\nu)}{\tau^2}\right) \sum_{p=0}^{\infty} \frac{(-1)^p}{(p!)^2} \left(h(\nu)\right)^{p} + \frac{d(\nu)}{\tau^2} \sum_{p=0}^{\infty} \frac{(-1)^p}{p! (p + 2) !} \left(h(\nu)\right)^{p + 1}, \quad \nu \in[0, \tau]. \end{equation} Let us now recall the stability criteria from \cite{bastin2016stability, saba2019stability} with which we will compare our Corollary~\ref{result:eqhyp}. 
We first present the Lyapunov-based exponential stability condition for linear systems of balance laws from~\cite[Theorem~5.4]{bastin2016stability}. \begin{proposition}[{\cite[Theorem~5.4]{bastin2016stability}}] \label{prop:criterion-bastin-coron} Assume that $\sigma^+$ and $\sigma^{-}$ are constants and define \[M=\begin{pmatrix} 0&-\sigma^+\\-\sigma^{-}&0 \end{pmatrix}, \quad \mathbf{K}=\begin{pmatrix} 0&q\\\rho&0 \end{pmatrix}, \quad \text{ and } \quad \Lambda=\begin{pmatrix} \lambda&0\\0&-\mu \end{pmatrix}.\] If there exists a $2\times 2$ positive diagonal real matrix $P$ such that \[ M^{\top} P+P M \text { is positive semi-definite } \] and \[ \left\lVert \delta \mathbf{K} \delta^{-1}\right\rVert<1 \] with $\delta \triangleq \sqrt{P\lvert \Lambda\rvert}$ (the operations here being understood component-wise), then system \eqref{eq:hyperbolic_couple} is exponentially stable for the $L^2$ norm. \end{proposition} Using the fact that the stability of the system is equivalent to having $\lvert \Delta(s)\rvert>0$ for all $s \in \mathbb{C}$ such that $\real(s)\geq 0$, the following sufficient stability condition was obtained in~\cite{saba2019stability}. \begin{proposition}[{\cite[Proposition~3]{saba2019stability}}] \label{prop:criterion-jean-et-al} Assume that the coefficients $\sigma^+$ and $\sigma^{-}$ are constants. If the constant parameters of system \eqref{eq:hyperbolic_couple} satisfy any of the following sets of inequalities \begin{enumerate} \item $\sigma^+ \sigma^{-} \geq 0$, $\rho q \geq 0$, and \[\lvert a\rvert+\lvert R\rvert\left(\frac{1}{1+\lvert \rho q\rvert}-\frac{1-\lvert \rho q\rvert}{2}\right)<1-\lvert \rho q\rvert;\] \item $\sigma^+ \sigma^{-} \geq 0$, $\rho q < 0$, and \[\lvert a\rvert+\lvert R\rvert \frac{1+\lvert \rho q\rvert}{2}<1-\lvert \rho q\rvert;\] \item $\sigma^+ \sigma^{-} < 0$, $\rho q \geq 0$, and \[\lvert a\rvert I_0(\sqrt{\lvert R\rvert})+\lvert R\rvert\left(\frac{1}{1+\lvert \rho q\rvert}-\frac{1-\lvert \rho q\rvert}{2}\right) \left[I_0(\sqrt{\lvert R\rvert})-I_2(\sqrt{\lvert R\rvert})\right]<1-\lvert \rho q\rvert;\] \item $\sigma^+ \sigma^{-} <0$, $\rho q<0$, and \[ \lvert a\rvert I_0(\sqrt{\lvert R\rvert})+\lvert R\rvert \frac{1+\lvert \rho q\rvert}{2} \left[I_0(\sqrt{\lvert R\rvert})-I_2(\sqrt{\lvert R\rvert})\right]<1-\lvert \rho q\rvert;\] \end{enumerate} where $I_0$ and $I_2$ are modified Bessel functions of the first kind (see, e.g., \cite[Section~10.25]{olver2010nist}), then system \eqref{eq:hyperbolic_couple} is exponentially stable for the $L^2$ norm. \end{proposition} Let us also mention that, using recent ISS results and a small-gain property, another sufficient stability condition can be found in~\cite{karafyllisinput}. When applied to system~\eqref{eq:hyperbolic_couple}, this condition requires the existence of a constant $K>0$, with $(\lvert \rho\rvert+\lvert q\rvert)\mathrm{e}^{-K}<1$, such that \begin{equation} \label{eq:criterion-iss} \left(\sqrt{\frac{\mathrm{e}^{2K}-\mathrm{e}^K}{K\lambda}\lvert \sigma^-\rvert}+\sqrt{\lvert \rho\rvert}\right)\left(\sqrt{\frac{\mathrm{e}^{2K}-\mathrm{e}^K}{K\mu}\lvert \sigma^+\rvert}+\sqrt{\lvert q\rvert}\right)<1. \end{equation} However, it has been shown in~\cite{saba2019stability} that this condition is not satisfied for numerous examples (and in particular, the ones we consider below). We refer to \cite[Section~V]{saba2019stability} for a more detailed comparison between \cite[Proposition~3]{saba2019stability} and \eqref{eq:criterion-iss}.
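For completeness, the following minimal Python sketch (illustrative only; the function name is ours, and the parameter ordering follows the first column of Table~\ref{tab:comparison} below) shows how the inequalities of Proposition~\ref{prop:criterion-jean-et-al} can be checked numerically using the modified Bessel functions available in SciPy.
\begin{verbatim}
# Minimal illustrative sketch: checking the sufficient condition of
# [saba2019stability, Proposition 3] for constant sigma+ and sigma-.
# Parameter ordering: (sigma+, sigma-, 1/lambda, 1/mu, rho, q).
import numpy as np
from scipy.special import iv   # modified Bessel functions of the first kind

def saba_sufficient_condition(sigp, sigm, inv_lam, inv_mu, rho, q):
    lam, mu = 1.0 / inv_lam, 1.0 / inv_mu
    a = q * sigm / mu + rho * sigp / lam
    R = sigp * sigm / (lam * mu)
    qr = rho * q
    x = np.sqrt(abs(R))
    if sigp * sigm >= 0 and qr >= 0:
        lhs = abs(a) + abs(R) * (1.0/(1.0 + abs(qr)) - (1.0 - abs(qr))/2.0)
    elif sigp * sigm >= 0 and qr < 0:
        lhs = abs(a) + abs(R) * (1.0 + abs(qr)) / 2.0
    elif qr >= 0:   # sigma+ * sigma- < 0
        lhs = (abs(a)*iv(0, x)
               + abs(R)*(1.0/(1.0 + abs(qr)) - (1.0 - abs(qr))/2.0)
               * (iv(0, x) - iv(2, x)))
    else:           # sigma+ * sigma- < 0, rho*q < 0
        lhs = (abs(a)*iv(0, x)
               + abs(R)*(1.0 + abs(qr))/2.0 * (iv(0, x) - iv(2, x)))
    return lhs < 1.0 - abs(qr)

# False: the sufficient condition is not satisfied, consistently with the
# last row of the comparison table below.
print(saba_sufficient_condition(2.3, -3.5, 0.8, 1.1, 0.5, -0.7))
\end{verbatim}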
\begin{table}[ht] \centering \footnotesize \caption{Comparison with other criteria} \label{tab:comparison} \begin{tabular}{@{\hspace*{0.01\textwidth}} >{\centering} m{0.34\textwidth} @{\hspace*{0.02\textwidth}} >{\centering} m{0.22\textwidth} @{\hspace*{0.02\textwidth}} >{\centering} m{0.15\textwidth} @{\hspace*{0.02\textwidth}} >{\centering} m{0.15\textwidth} @{\hspace*{0.01\textwidth}}} \toprule Example of system \eqref{eq:hyperbolic_couple} \\ $\bigl(\sigma^+,\allowbreak \sigma^-,\allowbreak \frac{1}{\lambda},\allowbreak \frac{1}{\mu},\allowbreak \rho,\allowbreak q\bigr)$ & Conditions of Corollary~\ref{result:eqhyp} & Conditions of Proposition~\ref{prop:criterion-bastin-coron} & Conditions of Proposition~\ref{prop:criterion-jean-et-al} \tabularnewline \midrule $(1.1,\allowbreak 0.4,\allowbreak 1,\allowbreak 1.2,\allowbreak 0.4,\allowbreak -0.5)$ & Satisfied & Not satisfied & Satisfied \tabularnewline $(-0.8,\allowbreak 0.7,\allowbreak 1,\allowbreak 1.2,\allowbreak 0.4,\allowbreak 0.25)$ & Satisfied & Satisfied & Satisfied \tabularnewline $(1.3,\allowbreak -0.95,\allowbreak 1.8,\allowbreak 0.44,\allowbreak 0.45,\allowbreak 0.25)$ & Satisfied & Not satisfied & Satisfied \tabularnewline $(1.3,\allowbreak -1.2,\allowbreak 1.8,\allowbreak 1.5,\allowbreak 0.45,\allowbreak 0.25)$ & Satisfied & Satisfied & Not satisfied \tabularnewline $(2.3,\allowbreak -3.5,\allowbreak 0.8,\allowbreak 1.1,\allowbreak 0.5,\allowbreak -0.7)$ & Satisfied & Not satisfied & Not satisfied \tabularnewline \bottomrule \end{tabular} \end{table} To compare our criterion with Propositions~\ref{prop:criterion-bastin-coron} and \ref{prop:criterion-jean-et-al} above, we selected five sets of parameters $\sigma^+$, $\sigma^-$, $\lambda$, $\mu$, $\rho$, and $q$, given in the first column of Table~\ref{tab:comparison}. We selected the same sets of parameters as in \cite[Table~II]{saba2019stability} as they provide showcase scenarios for which the criteria specified in \cite[Theorem~5.4]{bastin2016stability} and \cite[Proposition~3]{saba2019stability} are not always satisfied. To verify our criterion from Corollary~\ref{result:eqhyp}, we first verify that the inequalities $\lvert q \rho \rvert < 1$ and \eqref{nece_suf} are satisfied. Then, to compute the quantity $\Gamma$ from \eqref{2cond}, we first compute numerically the positive zeros $\rho_1, \dotsc, \rho_m$ of the function $M$ given by $M(\omega) = \real (\Delta(i \omega))$. In all the examples from Table~\ref{tab:comparison}, the inequalities $\lvert q \rho \rvert < 1$ and \eqref{nece_suf} are satisfied, and $M$ turns out to have no real roots, which immediately implies that $\Delta$ has no zeros on the imaginary axis, $\Gamma = 0$, and hence system \eqref{eq:hyperbolic_couple} is exponentially stable by Corollary~\ref{result:eqhyp}. Our verifications of these conditions were performed using MATLAB, using the built-in Bessel function and numerical integration. As an example, with the parameters $\left(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\right) = (2.3, -3.5, 0.8, 1.1, 0.5, -0.7)$ (corresponding to the last row of Table~\ref{tab:comparison}), we get $\int_0^\tau N(\nu) \diff\nu \approx 1.3143 < 1 + 0.5 \times 0.7 = 1.35$ and, from the plot of the function $M$ shown in Figure~\ref{M_plot}, we observe that $M$ has no positive real root (note that $M(0)\approx 0.0357 \neq 0$). Therefore, $\Gamma = 0$ and, by Corollary~\ref{coro:stability}, the system is exponentially stable for these parameters. 
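The verification described above can be reproduced, for instance, with the following minimal Python sketch (illustrative only: our own computations were carried out in MATLAB, and the scanning interval chosen below for $M$ is an arbitrary choice).
\begin{verbatim}
# Minimal illustrative sketch: build N from the Bessel formula for constant
# in-domain couplings, evaluate int_0^tau N, and scan M(w) = Re(Delta(i w))
# for sign changes on a grid (a coarse check of the absence of positive
# zeros of M, which gives Gamma = 0).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv          # Bessel functions of the first kind

sigp, sigm, inv_lam, inv_mu, rho, q = 2.3, -3.5, 0.8, 1.1, 0.5, -0.7
lam, mu = 1.0 / inv_lam, 1.0 / inv_mu
tau = 1.0 / lam + 1.0 / mu
a = q * sigm / mu + rho * sigp / lam
R = sigp * sigm / (lam * mu)
b = 1.0 + q * rho

def N(nu):
    h = R * nu * (tau - nu) / tau**2
    d = R * (tau - b * nu)
    z = 2.0 * np.sqrt(complex(h))     # J_k(2*sqrt(h)) is real even when h < 0
    return ((a / tau + d / tau**2) * jv(0, z) + d / tau**2 * jv(2, z)).real

print(quad(N, 0.0, tau)[0], "<", 1.0 - q * rho)   # approx. 1.3143 < 1.35

def M(w):                             # M(w) = Re(Delta(i w)), with xi = q*rho
    return (1.0 - q * rho * np.cos(w * tau)
            - quad(lambda nu: N(nu) * np.cos(w * nu), 0.0, tau)[0])

ws = np.linspace(0.0, 60.0, 4001)
vals = np.array([M(w) for w in ws])
print(np.all(vals > 0))   # True on this grid: no positive zero of M detected
\end{verbatim}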
\begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{Figures/M.pdf} \caption{Graph of the function $M$ when $\left(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\right)=(2.3,-3.5,0.8,1.1,0.5,-0.7)$.} \label{M_plot} \end{figure} To illustrate this exponential stability property, we represent in Figure~\ref{fig_L2_norm} the evolution in time of the $L^2$ norm in space of the solution $(u, v)$ of~\eqref{eq:hyperbolic_couple} with the parameters $\left(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\right) = (2.3, -3.5, 0.8, 1.1, 0.5, -0.7)$. We simulated our system with a time horizon of $500$, a standard first-order upwind scheme for the discretization of the space derivative, a time step of $\Delta t = 10^{-4}$, and different space discretization steps $\Delta x_u$ and $\Delta x_v$ for $u$ and $v$ in such a way that the CFL conditions $\frac{\lambda \Delta t}{\Delta x_u} \leq 1$ and $\frac{\mu \Delta t}{\Delta x_v} \leq 1$ are satisfied as close to equality as possible (a simplified illustrative sketch of such a discretization is given below). The initial condition $u_0$ was chosen as a constant equal to $1$, while the initial condition $v_0$ was chosen as the unique affine function such that the zero-order compatibility condition \eqref{compatibility_condition_u_v} is fulfilled. The result of the simulation, presented in Figure~\ref{fig_L2_norm}, shows the exponential convergence to the origin of the considered solution in $L^2$ norm, as expected. \begin{figure}[htp] \centering \resizebox{0.6\textwidth}{!}{\input{Figures/norm_L2_solution.pgf}} \caption{$L^2$ norm of the solution $(u,v)$ of~\eqref{eq:hyperbolic_couple} with parameters $\left(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\right)=(2.3,\allowbreak -3.5,\allowbreak 0.8,\allowbreak 1.1,\allowbreak 0.5,\allowbreak -0.7)$.} \label{fig_L2_norm} \end{figure} As the conditions from Corollary~\ref{result:eqhyp} are necessary and sufficient, we are able to conclude on the stability of systems for which the criteria from Propositions~\ref{prop:criterion-bastin-coron} or \ref{prop:criterion-jean-et-al} are not satisfied, as illustrated in Table~\ref{tab:comparison}. In addition, despite the potential difficulties in the computation of the zeros of $M$ and of the value of $\Gamma$, a numerical test of our necessary and sufficient condition from Corollary~\ref{result:eqhyp} seems less complex and easier to verify than the numerical test for the necessary and sufficient condition in \cite[Condition~(31)]{saba2019stability}. \section{Numerical explorations of truncations of the function $N$} \label{sec_approximation} One of the difficulties in the analysis of the case where $\sigma^+$ and $\sigma^-$ are constant is that the function $N$ from \eqref{eq:explicit-N-Bessel} depends on the Bessel functions $J_0$ and $J_2$, which are transcendental functions. On the other hand, when the function $N$ from \eqref{characteristic equation} is a polynomial, the function $\Delta$ becomes a quasipolynomial (more precisely, a quasipolynomial divided by a power of $s$), whose properties are easier to analyze. A natural idea to simplify the analysis in the case where $\sigma^+$ and $\sigma^-$ are constant is thus to approximate $N$ by truncating the series in \eqref{eq:explicit-N-series} and study the quasipolynomial $\Delta$ obtained by this procedure. The aim of this section is to provide numerical explorations around this idea.
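Before turning to these truncations, we note that the simulation of Figure~\ref{fig_L2_norm} can be reproduced along the following lines. The Python sketch below is only illustrative and uses a simplified variant of the discretization described in Section~\ref{sec_comparaison}: a single spatial grid for $u$ and $v$, CFL numbers strictly smaller than one, and a shorter time horizon.
\begin{verbatim}
# Minimal illustrative sketch of a first-order upwind discretization of the
# 2x2 hyperbolic system under study (simplified variant: one spatial grid
# for u and v, CFL < 1, shorter horizon); not the code used for the figure.
import numpy as np

sigp, sigm, inv_lam, inv_mu, rho, q = 2.3, -3.5, 0.8, 1.1, 0.5, -0.7
lam, mu = 1.0 / inv_lam, 1.0 / inv_mu

nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / max(lam, mu)          # CFL condition for both characteristics
cu, cv = lam * dt / dx, mu * dt / dx

u = np.ones(nx)                        # u0 = 1
v = 1.0/q + (rho - 1.0/q) * x          # affine v0: u0(0)=q v0(0), v0(1)=rho u0(1)

l2 = []
for n in range(int(50.0 / dt)):
    un, vn = u.copy(), v.copy()
    # u travels rightward (backward difference), v leftward (forward difference)
    u[1:] = un[1:] - cu * (un[1:] - un[:-1]) + dt * sigp * vn[1:]
    v[:-1] = vn[:-1] + cv * (vn[1:] - vn[:-1]) + dt * sigm * un[:-1]
    u[0], v[-1] = q * v[0], rho * u[-1]          # boundary conditions
    l2.append(np.sqrt(dx * np.sum(u**2 + v**2)))  # discrete L2 norm vs. time
\end{verbatim}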
For $p \in \N$, let $N_p$ denote the function obtained by truncating the sums in \eqref{eq:explicit-N-series} so that each sum has only its first $p+1$ terms, i.e., \begin{equation*} N_p(\nu) = \left(\frac{a}{\tau} + \frac{d(\nu)}{\tau^2}\right) \sum_{p^\prime=0}^{p} \frac{(-1)^{p^\prime}}{(p^\prime !)^2} \left(h(\nu)\right)^{p^\prime} + \frac{d(\nu)}{\tau^2} \sum_{p^\prime=0}^{p} \frac{(-1)^{p^\prime}}{p^\prime ! (p^\prime + 2) !} \left(h(\nu)\right)^{p^\prime + 1}, \quad \nu \in[0, \tau]. \end{equation*} Note that \begin{equation}\label{expression N0} N_0(\nu)=\frac{R^2b}{2\tau^4}\nu^3-\frac{R^2(1+b)}{2\tau^3}\nu^2+\left(\frac{R^2}{2\tau^2}-\frac{Rb}{\tau^2}\right)\nu+\frac{a+R}{\tau} \end{equation} and that, for $p \in \N$, we have the recurrence relation \begin{equation}\label{expression Np+1} N_{p+1}(\nu)= N_p(\nu) + f_p(\nu), \end{equation} where \begin{equation}\label{expression fp} f_p(\nu) = (-1)^{p+1}\Bigg(\left(\frac{a}{\tau}+\frac{d(\nu)}{\tau^2}\right) \frac{(h(\nu))^{p+1}}{((p+1)!)^2}+\frac{(h(\nu))^{p+2}d(\nu)}{\tau^2(p+1)!(p+3)!}\Bigg) \end{equation} and $h$, $d$, $a$, $R$, and $b$ are defined as in Section~\ref{sec_comparaison}. For $p \in \N$, let $\Delta_p$ be the characteristic function associated with $N_p$, i.e., we set \begin{equation} \label{eq:Delta-p} \Delta_{p}(s)= 1-q\rho \mathrm{e}^{-s \tau}-\int_0^\tau N_{p}(\nu) \mathrm{e}^{-s \nu} \diff \nu. \end{equation} Rewriting $N_p$ as $N_p(\nu)=\sum_{k=0}^{2p+3}a_k\nu^k$ for suitable coefficients $a_0, \dotsc, a_{2 p + 3}$, then, for $s\neq 0$, we can rewrite $\Delta_p$ as $\Delta_p(s)=P_0(s)+P_1(s)\mathrm{e}^{-s \tau}$ with \begin{equation} \label{eq:P0P1truncation} \left\{ \begin{aligned} P_0(s) & = \frac{s^{2p+4}-\sum^{2p+3}_{k=1}(2p+3-k)!a_{2p+3-k}s^k-(2p+3)!a_{2p+3}}{s^{2p+4}}, \\ P_1(s) & = \frac{-q\rho s^{2p+4}+\sum^{2p+3}_{k=0}s^k N_p^{(2p + 3 - k)}(\tau)}{s^{2p+4}}. \end{aligned} \right. \end{equation} \subsection{Investigation of imaginary roots} As Corollary~\ref{result:eqhyp} requires one to check whether the function $\Delta$ admits roots on the imaginary axis, we now describe in more detail some conditions for the existence of imaginary roots of the function $\Delta_p$ obtained by the truncation $N_p$ of $N$. \subsubsection{The case $p=0$} Let us obtain a necessary condition for the existence of nontrivial roots in the imaginary axis for $\Delta_0$. For $\omega \in \R$, the imaginary number $i \omega$ is a root of $\Delta_0$ if and only if \[ P_0(i\omega)\mathrm{e}^{i\omega\tau} = - P_1(i\omega), \] where $P_0$ and $P_1$ are as in \eqref{eq:P0P1truncation} with $p = 0$. The latter condition implies that \[ \lvert P_0(i\omega)\rvert^2=\lvert P_1(i\omega)\rvert^2, \] which, using the expressions from \eqref{eq:P0P1truncation}, is equivalent to \begin{equation} \label{egalté P_P1} \omega^4\Bigg((1-q^2\rho^2)\omega^{4}+\omega^{2}\left( a_0^2+2a_1-N_0(\tau)^2-2q\rho N_0'(\tau)\right)-12a_3\Delta_0(0)\Bigg)=0. \end{equation} If we assume that $N_0$ satisfies \eqref{nece_suf} then, $\Delta_0(0)>0$. Consequently, we obtain that the solutions of~\eqref{egalté P_P1} verify $\omega\neq 0$. Moreover, according to \eqref{q_rho}, we have $a_3\geq 0$. Then, we obtain from \eqref{egalté P_P1} that \begin{equation} \label{eq:explicit-omega-0} \omega=\pm\sqrt{\frac{-\left( a_0^2+2a_1-N_0(\tau)^2-2q\rho N_0'(\tau)\right)+\sqrt{D}}{2(1-q^2\rho^2)}} \end{equation} with \[ D=\left( a_0^2+2a_1-N_0(\tau)^2-2q\rho N_0'(\tau)\right)^2+48a_3(1-q^2\rho^2)\Delta_0(0). 
\] This means that, if \eqref{nece_suf} is satisfied for $N_0$, then $\Delta_0$ has at most two roots on the imaginary axis, and these roots can only be the ones from \eqref{eq:explicit-omega-0}. Depending on the parameter values, $\Delta_0$ may have either exactly two imaginary roots or no imaginary roots. \subsubsection{The case $p=1$} Proceeding as previously, a straightforward computation shows that, if $\omega\in \R$ is such that $\Delta_1(i \omega) = 0$, then necessarily \begin{multline} \label{or1_egalté P_P1} \omega^6 \Biggl( \Bigl(1-q^2\rho^2\Bigr) \omega^{6} + \Bigl(a_0^2+2a_1-N_1(\tau)^2-2q\rho N_1'(\tau)\Bigr) \omega^{4} \\ + \Bigl(a_1^2-12a_3-4a_0a_2-N_1'(\tau)^2+2q\rho N_1^{'''}(\tau)+2N_1(\tau)N_1''(\tau)\Bigr)\omega^2 + 240\Delta_1(0)\Biggr)=0, \end{multline} with \begin{equation} \begin{aligned} a_0 & = \frac{a+R}{\tau}, & \quad a_1 & = -\frac{R^2+2R(a+b)}{2\tau^2}, \\ a_2 & = \frac{6aR+3R^2(b+1)-R^3}{6\tau^3}, & a_3 & = \frac{-3R^2b+R^3(b+2)}{6\tau^4}. \end{aligned} \end{equation} Assume that \eqref{q_rho} and \eqref{nece_suf} are satisfied with $N$ replaced by the truncation $N_1$. If $\delta<0$, then $\Delta_1$ has at most two roots on the imaginary axis, where \begin{equation} \left\{ \begin{aligned} &\delta=18ABCE-4B^3E+B^2C^2-4AC^3-27A^2E^2,\\ &A=1-q^2\rho^2,\, B= a_0^2+2a_1-N_1(\tau)^2-2q\rho N_1'(\tau),\\ &C=a_1^2-12a_3-4a_0a_2-N_1'(\tau)^2+2q\rho N_1^{'''}(\tau)+2N_1(\tau)N_1''(\tau),\\ &E=240\Delta_1(0). \end{aligned}\right. \end{equation} In addition, if $Q\geq 0$, then $\Delta_1$ has no root on the imaginary axis, where \begin{equation} Q=\frac{B}{27 A}\left(\frac{2 B^2}{A^2}-\frac{9 C}{A}\right)+\frac{E}{A} . \end{equation} For these first two truncations, we can thus conclude that finding the roots of $\Delta_p$ on the imaginary axis amounts to searching for the roots of a polynomial with real coefficients and finite degree, which is an algebraic problem. \subsection{Stability of the truncated system} We now numerically compare the stability properties of \eqref{distributed delay} when $N$ is given by \eqref{eq:explicit-N-Bessel} or by one of its truncations $N_p$, $p \in \mathbb N$. \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{Figures/Nq-07.pdf} \caption{$N$, $N_0$, $N_1$, $N_2$, and $N_3$ with $\Bigl(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\Bigr)=(2.3,-3.5,0.8,1.1,0.5,-0.7)$.} \label{fig:_2} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{Figures/State_1_JA_V2.pdf} \caption{Solution of \eqref{distributed delay} with $N$, $N_0$, $N_1$, $N_2$, and $N_3$ with $\Bigl(\sigma^+,\allowbreak \sigma^-,\allowbreak \frac{1}{\lambda},\allowbreak \frac{1}{\mu},\allowbreak \rho,\allowbreak q\Bigr)=(2.3,\allowbreak -3.5,\allowbreak 0.8,\allowbreak 1.1,\allowbreak 0.5,\allowbreak -0.7)$.} \label{fig:_3} \end{figure} We start by considering \eqref{distributed delay} with parameters $\Bigl(\sigma^+,\allowbreak \sigma^-,\allowbreak \frac{1}{\lambda},\allowbreak \frac{1}{\mu},\allowbreak \rho,\allowbreak q\Bigr)=(2.3,\allowbreak -3.5,\allowbreak 0.8,\allowbreak 1.1,\allowbreak 0.5,\allowbreak -0.7)$. For this set of parameters, Figure~\ref{fig:_2} shows the function $N$ and its truncations $N_0$, $N_1$, $N_2$, $N_3$. We observe that these low-order approximations are already visually close to the graph of $N$, in particular $N_2$ and $N_3$.
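These truncations, and the integrals $\int_0^\tau N_p(\nu)\diff \nu$ entering condition~\eqref{nece_suf}, are straightforward to evaluate numerically; the following minimal Python sketch (illustrative only, for the parameters above) computes them directly from the truncated series, and its output can be compared with the values reported in the next paragraph.
\begin{verbatim}
# Minimal illustrative sketch: truncations N_p (first p+1 terms of each
# series in the expansion of N) and the integrals int_0^tau N_p(nu) dnu,
# to be compared with 1 - q*rho.
import numpy as np
from math import factorial
from scipy.integrate import quad

sigp, sigm, inv_lam, inv_mu, rho, q = 2.3, -3.5, 0.8, 1.1, 0.5, -0.7
lam, mu = 1.0 / inv_lam, 1.0 / inv_mu
tau = 1.0 / lam + 1.0 / mu
a = q * sigm / mu + rho * sigp / lam
R = sigp * sigm / (lam * mu)
b = 1.0 + q * rho
h = lambda nu: R * nu * (tau - nu) / tau**2
d = lambda nu: R * (tau - b * nu)

def N_p(nu, p):
    s1 = sum((-1)**k / factorial(k)**2 * h(nu)**k for k in range(p + 1))
    s2 = sum((-1)**k / (factorial(k) * factorial(k + 2)) * h(nu)**(k + 1)
             for k in range(p + 1))
    return (a / tau + d(nu) / tau**2) * s1 + d(nu) / tau**2 * s2

for p in range(4):
    print(p, quad(lambda nu: N_p(nu, p), 0.0, tau)[0], 1.0 - q * rho)
\end{verbatim}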
Solutions of \eqref{distributed delay} with initial condition $\beta(t, 1) =\sin (\pi t)$ for $t \in [-\tau, 0]$ for $N$ and for the truncations $N_0$, $N_1$, $N_2$, and $N_3$ are represented in Figure~\ref{fig:_3}, in which we observe that these solutions diverge for the truncations $N_0$, $N_1$, and $N_2$, but converge to $0$ for $N_3$ and $N$. Indeed, \eqref{nece_suf} is not satisfied for $N_0$, $N_1$ and $N_2$, since \begin{align*} \int_0^\tau N_0(\nu) \diff \nu & \approx 1.6561>1.35=1-q\rho, \\ \int_0^\tau N_1(\nu) \diff \nu & \approx 1.6117>1.35=1-q\rho, \\ \intertext{and} \int_0^\tau N_2(\nu) \diff \nu & \approx 1.3768>1.35=1-q\rho. \end{align*} \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{Figures/Nq-05.pdf} \caption{$N$, $N_0$, $N_1$, $N_2$, and $N_3$ with $\Bigl(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\Bigr)=(1.1,0.4,1,1.2,0.4,-0.5)$.} \label{fig:_4} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{Figures/beta_q-05.pdf} \caption{Solution of \eqref{distributed delay} with $N$, $N_0$, $N_1$, $N_2$, and $N_3$ with $\Bigl(\sigma^+,\allowbreak \sigma^-,\allowbreak \frac{1}{\lambda},\allowbreak \frac{1}{\mu},\allowbreak \rho,\allowbreak q\Bigr)=(1.1,\allowbreak 0.4,\allowbreak 1,\allowbreak 1.2,\allowbreak 0.4,\allowbreak -0.5)$.} \label{fig:_5} \end{figure} We also consider \eqref{distributed delay} with parameters $\Bigl(\sigma^+,\allowbreak \sigma^-,\allowbreak \frac{1}{\lambda},\allowbreak \frac{1}{\mu},\allowbreak \rho,\allowbreak q\Bigr)=(1.1,\allowbreak 0.4,\allowbreak 1,\allowbreak 1.2,\allowbreak 0.4,\allowbreak -0.5)$, representing the corresponding functions $N$, $N_0$, $N_1$, $N_2$, and $N_3$ in Figure~\ref{fig:_4} and the solutions of \eqref{distributed delay} with the same initial condition as before for these functions in Figure~\ref{fig:_5}. We observe that, for these parameters, a lower-order truncation already provides a good approximation for $N$, and all represented solutions converge, even for the lowest-order truncation $N_0$ of $N$. For this set of parameters, we also represent in Figure~\ref{fig:_6} the roots of the characteristic function $\Delta$ from \eqref{characteristic equation} for $N$, $N_0$, and $N_1$, which also confirm the stability observed in Figure~\ref{fig:_5}. \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{Figures/Stable1.pdf} \caption{Roots of $\Delta$ with $N$, $N_0$, and $N_1$ with $\Bigl(\sigma^+, \sigma^-, \frac{1}{\lambda}, \frac{1}{\mu}, \rho, q\Bigr)=(1.1,\allowbreak 0.4,\allowbreak 1,\allowbreak 1.2,\allowbreak 0.4,\allowbreak -0.5)$.} \label{fig:_6} \end{figure} We conclude this section with the following result, which provides a sufficient stability criterion for \eqref{eq:hyperbolic_couple} and \eqref{distributed delay} with constant parameters $\sigma^+$ and $\sigma^-$ (and hence, with $N$ given by \eqref{eq:explicit-N-Bessel}) in terms of properties of the function $\Delta_p$ from \eqref{eq:Delta-p} for some suitable $p$. \begin{proposition}\label{proposition} Assume that there exist $p_0\in \N$ and $\epsilon_0>0$ such that \begin{equation} \label{Del_del_p} \left\{ \begin{aligned} \inf_{\real(s)\geq 0} \lvert \Delta_{p_0}(s)\rvert & >\epsilon_0, \\ \left\lVert N-N_{p_0}\right\rVert_{L^1(0, \tau)} & <\epsilon_0, \end{aligned} \right. \end{equation} where $\Delta_{p_0}$ is defined from $N_{p_0}$ as in \eqref{eq:Delta-p}. Then \begin{equation} \inf_{\real(s)\geq 0} \lvert \Delta(s)\rvert>0.
\end{equation} \end{proposition} \begin{proof} For all $s\in\C$, we have \begin{equation} \Delta(s)= \Delta_{p_0}(s)-\int_0^\tau \left(N(\nu)-N_{p_0}(\nu)\right) \mathrm{e}^{-s \nu} \diff \nu. \end{equation} Thus, for all $s\in\C$ with $\real(s) \geq 0$, we obtain \begin{align} \lvert \Delta(s)\rvert&\geq\lvert \Delta_{p_0}(s)\rvert-\left\lvert \int_0^\tau \left(N(\nu)-N_{p_0}(\nu)\right) \mathrm{e}^{-s \nu} \diff \nu\right\rvert\\& \geq\lvert \Delta_{p_0}(s)\rvert- \left\lVert N-N_{p_0}\right\rVert_{L^1(0,\tau)}. \end{align} Then, from \eqref{Del_del_p}, we obtain \begin{equation} \inf_{\real(s)\geq 0} \lvert \Delta(s)\rvert>0, \end{equation} as required. \end{proof} \section{Conclusion} \label{sec_conclusion} In this paper, we presented a novel necessary and sufficient condition for $2\times 2$ first-order linear hyperbolic systems. First, the problem was reformulated as a stability problem for an associated integral difference equation. Then, we provided a result counting the number of unstable roots of the latter system, yielding, as a consequence, a necessary and sufficient stability criterion for the system of first-order linear hyperbolic partial differential equations. In future work, we plan to study non-scalar systems and investigate the stability of integral difference equations with multiple delays. \bibliographystyle{abbrv} \bibliography{biblio} \end{document}
2412.13926v1
http://arxiv.org/abs/2412.13926v1
Finite groups with coprime non-linear codegrees
\documentclass{amsart} \usepackage{amsmath,amsthm,amsfonts,amssymb,latexsym,mathrsfs,graphicx,tikz,array} \usepackage{subcaption} \usepackage{hyperref} \usepackage{xpatch} \makeatletter \xpatchcmd{\@thm}{\thm@headpunct{.}}{\thm@headpunct{}}{}{} \makeatother \usepackage{enumerate} \usepackage[shortlabels]{enumitem} \usepackage{color} \headheight=7pt \textheight=574pt \textwidth=432pt \topmargin=14pt \oddsidemargin=18pt \evensidemargin=18pt ll} {\rm \hbox{\vrule height 0.2 cm width 0.2cm}}} \renewcommand{\qed}{\cvd} \headheight=5pt \textheight=600pt \textwidth=450pt \topmargin=14pt \oddsidemargin=11pt\evensidemargin=14pt \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \renewcommand{\baselinestretch}{1.1} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{thm}{Theorem} \renewcommand{\thethm}{\Alph{thm}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{defn}{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{nota}[theorem]{Notation} \newtheorem*{ThmA}{Theorem A} \newtheorem*{ThmB}{Theorem B} \newtheorem*{ThmC}{Theorem C} \newtheorem*{Thm}{Theorem} \newtheorem*{Thmm}{Main Theorem} \newtheorem*{CorB}{Corollary B} \newtheorem*{CorC}{Corollary C} \newtheorem*{Cor}{Corollary} \newtheorem{Question}[theorem]{Question} \newenvironment{enumeratei}{\begin{enumerate}[\upshape (a)]} {\end{enumerate}} \newcommand{\irr}{{\mathrm {Irr}}} \newcommand{\Mult}{{\mathrm{Mult}}} \newcommand{\cd}{{\mathrm {cd}}} \newcommand{\cod}{{\mathrm {cod}}} \newcommand{\acd}{{\mathrm {acd}}} \newcommand{\Aut}{{\mathrm {Aut}}} \newcommand{\Out}{{\mathrm {Out}}} \newcommand{\Centralizer}{{\mathrm {C}}} \newcommand{\Center}{{\mathbf {Z}}} \newcommand{\PSL}{{\mathrm {PSL}}} \newcommand{\Suz}{{\mathrm {Suz}}} \newcommand{\PSp}{{\mathrm {PSp}}} \newcommand{\PSU}{{\mathrm {PSU}}} \newcommand{\SL}{{\mathrm {SL}}} \newcommand{\GL}{{\mathrm{GL}}} \newcommand{\SU}{{\mathrm {SU}}} \newcommand{\PGL}{{\mathrm {PGL}}} \newcommand{\PU}{{\mathrm {PU}}} \newcommand{\Proj}{{\mathrm {P}}} \newcommand{\GG}{{\mathscr{G}}} \newcommand{\DD}{\mathscr{D}} \newcommand{\In}{\mathscr{In}} \newcommand{\MT}{Moret$\'{o}$-Tiep's~Condition} \newcommand{\syl}{\mathrm{Syl}} \newcommand{\Cay}{\mathrm{Cay}} \newcommand{\Fit}{\mathbf{F}} \newcommand{\orb}{\mathrm{orb}} \newcommand{\OO}{\mathrm{o}} \newcommand{\Al}{\mathrm{A}} \newcommand{\sg}{|G|} \newcommand{\nn}{n_{2}(G)} \newcommand{\nnn}{n_{3}(G)} \newcommand{\I}{\mathrm{I}} \makeatletter \newcommand*{\rom}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \begin{document} \title{Finite Groups With coprime non-linear codegrees} \author[A. Zarezadeh]{Ashkan ZareZadeh$^{1}$} \author[B. Khosravi]{Behrooz Khosravi$^1$} \author[Z. Akhlaghi]{Zeinab Akhlaghi$^{1,2}$} \address{$^{1}$ Faculty of Mathematics and Computer Science, Amirkabir University of Technology (Tehran Polytechnic), 15914 Tehran, Iran.} \address{$^{2}$ School of Mathematics, Institute for Research in Fundamental Science (IPM) P.O. Box:19395-5746, Tehran, Iran. } \email{\newline \text{(Z. Akhlaghi) }z\[email protected] \newline \text{(B. Khosravi) }[email protected] \newline \text{(A. Zarezadeh) }[email protected]} \begin{abstract} Given a finite group $G$ with an irreducible character $\chi\in\irr(G)$, the codegree of $\chi$ is defined by $\cod(\chi):=|G:\ker\chi|/\chi(1)$. 
The set of non-linear irreducible character codegrees of $G$ is denoted by $\cod(G|G')$. In this note, we classify all finite groups $G$ such that $|\cod(G|G')|>1$ and any two distinct elements $m, n\in \cod(G|G')$ are coprime. \end{abstract} \keywords{} \subjclass[2000]{05C25, 05C69, 94B25} \thanks{The third author is supported by a Grant from IPM (no. 1402200112 ) } \maketitle \section{Introduction} Throughout this note, $G$ is a finite group, $\irr(G)$ is the set of all irreducible complex characters of $G$, and $\pi(G)$ is the set of all prime divisors of $|G|$. Recall that $G$ is said to be a $2$-\textit{Frobenius group} if $G$ has a normal series $1<K<H<G$ such that $G/K$ is a Frobenius group with $H/K$ as the Frobenius kernel, and $K$ is the Frobenius kernel of the Frobenius group $H$. In \cite{Qian}, the codegree of $\chi\in\irr(G)$ is defined as $\cod(\chi) = |G: \ker\chi|/\chi(1)$. Let $N$ be a normal subgroup of $G$. The set of irreducible characters of $G$ whose kernel does not contain $N$ is denoted by $\irr(G|N)$, i.e.\ $\irr(G|N)=\irr(G)\setminus\irr(G/N)$. Therefore, $\irr(G|G')$ is the set of non-linear irreducible characters of $G$. Let $\cd(G|N)=\{\chi(1)\mid \chi\in\irr(G|N)\}$, and $\cod(G|N)=\{\cod(\chi)\mid \chi\in\irr(G|N)\}$. In \cite{Qian}, the authors studied the graph $\Gamma(G|N)$, whose vertices are the prime divisors of the elements in $\cod(G|N)$, and two distinct primes $p$ and $q$ are adjacent if there exists some $\chi\in\irr(G|N)$ such that $pq\mid \cod(\chi)$. For simplicity, $\Gamma(G|G)$ is denoted by $\Gamma(G)$. They showed that $\Gamma(G)$ is disconnected if and only if $G$ is either a Frobenius group or a $2$-Frobenius group. It is natural to ask \textit{whether the disconnectedness of $\Gamma(G|G')$ implies the disconnectedness of $\Gamma(G)$}. In general, the answer is negative: using GAP \cite{GAP}, for $G=\text{SmallGroup}(480, 1188)$, we see that $\cod(G|G')=\{5,15,32\}$ and $\cod(G)=\{2,3,5,6,15,32\}$, which yields that $\Gamma(G|G')$ is disconnected while $\Gamma(G)$ is connected. In this note, we show that if $\Gamma(G|G')$ is disconnected and $|\cod(G|G')|=2$, then $\Gamma(G)$ is disconnected. So, the answer to the above question is affirmative when $|\cod(G|G')|=2$. In \cite{Herzog}, non-nilpotent finite groups with $|\cd(G|G')|=1$ were determined. Analogously, in \cite{QianF}, non-nilpotent groups with $|\cod(G|G')|=1$ are classified. It is therefore an interesting problem to find codegree analogues of known results on character degrees. In \cite{T}, the finite groups whose non-linear degrees are coprime and $|\cd(G|G')|=2$ were studied. We will see that our results classify all the finite groups whose non-linear codegrees are coprime and $|\cod(G|G')|=2$. Note that $|\cod(G|G')|=2$ and $\Gamma(G|G')$ is disconnected if and only if $\cod(G|G')$ consists of two coprime integers. Obviously, if $\cod(G|G')$ consists of $m>1$ mutually coprime integers, then $\Gamma(G|G')$ has exactly $m$ components, and by \cite[Theorem B]{Qian}, we get that $m=2$. In what follows, a finite group $G$ such that $|\cod(G|G')|>1$ and the elements of $\cod(G|G')$ are mutually coprime is called a \textit{$*$-group}. So, we prove the following general theorem: \begin{ThmA} Let $G$ be a finite group.
If $|\cod(G|G')|>1$ and $\cod(G|G')$ consists of mutually coprime integers, then one of the following holds: \begin{itemize} \item $G$ is a Frobenius group and $G\cong C_p^k\rtimes Q_8$, for some prime $p$, and integer $k$: \item $G$ is a $2$-Frobenius group and if $1<K<H<G$ is a normal series such that $G/K=H/K\rtimes R$ and $H=K\rtimes J$ are Frobenius groups with Frobenius complements $R\leq G/K$ and $J\leq H$, respectively, then: \begin{enumerate} \item $R$ is cyclic; \item $J\cong C_p$, for some prime $p$; \item $K$ is the unique minimal normal subgroup of $G$; \item For each $1\not=x\in K$, $C_G(x)/K\cong R_0$, where $R_0\leq R$ and $\pi(R_0)=\pi(R)$. \end{enumerate} \end{itemize} \end{ThmA} If $N\unlhd G$ and $\theta\in {\rm Irr}(N)$, then $\I_G(\theta)$ denotes the inertia subgroup of $\theta$ in $G$, and ${\rm Irr}(\theta^G)$ denotes the set of all irreducible constituents of $\theta^G$. For natural number $n=p^km$, where $p$ is a prime, $k$ and $m$ are integers such that $(p,m)=1$, define $n_p=p^k$ and $n_{p'}=m$. For the rest of the notations, we follow \cite{Isaacs}. \section{preliminary results} Let $G$ be a finite group and $N\trianglelefteq G$ be an elementary abelian $p$-group. We know that the action of $G$ on $\irr(N)$ is equivalent to the action of $G$ on $N$. Accordingly $\{C_G(x)|x\in N\}=\{\I_G(\lambda)|\lambda\in\irr(N)\}$, so for each $x\in N$, there exists $\lambda_x\in\irr(N)$, such that $C_G(x)=\I_G(\lambda_x)$. We use this fact frequently and without any further reference. The following lemma is a corollary of Theorem A and the discussion following it in \cite{QianF}. \begin{lemma}\label{Qian} Let $G$ be a finite group and $|\cod(G|G')|=1$. Then $G$ is either a $p$-group, for some prime $p$, or $G=G'\rtimes R$ is a Frobenius group with $G'\cong C_q^k$, for some prime $q$, as the Frobenius kernel and $R$ is cyclic, also $\cod(G|G')=\{q^k\}$, for some integer $k$ and $q^k$ is cardinal of every minimal normal subgroup of $G$. \end{lemma} \begin{theorem}\cite[Theorem A]{Qian}\label{Qian1} Let $G$ be a non-abelian finite group and $p\in\pi(G)$. If $p$ does not divide any element in $\cod(G|G')$, then $P\in\syl_p(G)$ acts Frobeniusly on $G'$. \end{theorem} \begin{lemma}\cite[Lemma 2.1]{Qian}\label{subnormality} Let $G$ be a finite group and $\chi\in\irr(G)$. If $M$ is subnormal in $G$ and $\psi\in\irr(\chi_M)$, then $\cod(\psi)$ divides $\cod(\chi)$. \end{lemma} \begin{lemma}\cite[Lemma 2.7]{KLG}\label{2.7} Let $K$ be a normal abelian subgroup of $G$ and $\chi\in \irr(G)$. If $\ker(\chi)\cap K=1$, then $|K|$ divides $\cod(\chi)$. \end{lemma} Let $G$ be a finite group acting on a module $M$ over a finite field, and $q$ be a prime divisor of the order of $G/C_G(M)$. We say that the pair$ (G, M)$ satisfies $N_q$ if, for every $v \in M\setminus \{0\},$ $C_G(v)$ contains a Sylow $q$-subgroup of G as a normal subgroup. \begin{theorem}\cite[Proposition 8]{carlo}\label{sak} If $(G,M) $ satisfies $N_q$, then $(|M|-1)/(|C_M(Q)|-1) = n_q(G)$, where $n_q(G)$ is the number of Sylow $q$-subgroups of $G$ and $Q\in \syl_q(G)$. \end{theorem} \begin{theorem}\cite[Theorem B]{Isaacs2}\label{Solve} Let $N$ be a normal subgroup of a group $G$ and suppose that $|\cd(G|N)|\le 2$, then $G$ is solvable. \end{theorem} \begin{lemma}\label{*-group} Let $G$ be a finite group. If $|\cod(G|G')|\le 2$, then $G$ is solvable. \end{lemma} \begin{proof} On the contrary, assume that $G$ is a counterexample of minimal order. 
If $G$ is simple, then $|\cd(G)|=|\cod(G|G')|+1\le 3$, which contradicts \cite[Theorems 12.6 and 12.15]{Isaacs}. Hence, $G$ is not simple. Since $G$ is a counterexample of minimal order, $G$ has a unique minimal normal subgroup, say $K$, which is non-solvable. Observe that $K\le G'$, and so $\irr(G|K)\subseteq \irr(G|G')$ and each character in $\irr(G|K)$ is faithful, and so $|\cd(G|K)|=|\cod(G|K)|\le |\cod(G|G')|\le 2$. Now Theorem \ref{Solve} shows that $G$ is solvable, which is a contradiction. \end{proof} The proof of the previous lemma mimics that of \cite[Theorem 2.3]{Zeng}; in particular, it shows that $G$ is solvable whenever $|\cod(G|G')|\le 2$. By Lemma \ref{*-group}, every *-group is solvable. \begin{lemma}\label{nil} Let $G=C_p\times L$, where $L$ is a non-abelian $p'$-group, for some prime $p$. Then there exist $\chi, \psi \in \irr(G|G')$ such that $\cod(\psi)=p\cdot \cod(\chi)$, and so $(\cod(\chi),\cod(\psi))\ne 1$. \end{lemma} \begin{proof} Let $\chi\in \irr(L|L')$ and $\phi\in \irr(C_p|C_p)$, and set $\psi=\phi\times \chi$. Obviously, $\cod(\chi)$ and $\cod(\psi)$ belong to $\cod(G|G')$. We can see that $\cod(\psi)=\frac{|G:1\times \ker\chi |}{\psi(1)}=\frac{p|L:\ker\chi|}{\chi(1)}=p\cdot\cod(\chi)$. \end{proof} \begin{lemma}\label{nilp} Each *-group is non-nilpotent. \end{lemma} \begin{proof} On the contrary, assume that $G$ is a nilpotent *-group. Clearly, $|\pi(G)|\geq 2$, and since $G$ is non-abelian, some Sylow subgroup of $G$ is non-abelian. Let $Q\in\syl_q(G)$ be non-abelian and let $p\in\pi(G)\setminus\{q\}$. We know that $\cod(C_p\times Q\mid (C_p\times Q)')\subseteq \cod(G|G')$, and by Lemma \ref{nil}, we get that $C_p\times Q$ is not a *-group, implying that $G$ is not a *-group, a contradiction. \end{proof} \begin{lemma}\label{Fro} Let $G=N\rtimes H$ be a Frobenius group with Frobenius kernel $N$. If $|\pi(N)|>1$, then $G$ is not a *-group. \end{lemma} \begin{proof} Suppose, for a contradiction, that $G$ is a *-group. Write $N=P\times T$, where $P\in \syl_p(G)$ and $T$ is the normal Hall $p'$-subgroup of $N$, for some $p\in\pi(N)$. Let $\psi\in\irr(P/P')$ and $\phi\in\irr(T/T')$ be non-principal, view $\psi$ as a character of $N$ with $T\le\ker\psi$, and let $\chi=\psi\times \phi$. Obviously, $\cod(\psi^G)=\frac{|G:\ker\psi^G|}{|G:N|}=|N:\ker\psi^G|$; since $T\le \ker\psi^G$ and $P\nleq\ker\psi^G$, we get that $\cod(\psi^G)$ is a non-trivial power of $p$. Similarly, since $P\nleq \ker\chi^G$ and $T\nleq \ker\chi^G$, the codegree $\cod(\chi^G)=|N:\ker\chi^G|$ is divisible by $p$ and by a prime different from $p$. Hence $\cod(\psi^G)$ and $\cod(\chi^G)$ are distinct elements of $\cod(G|G')$ which are not coprime, a contradiction. \end{proof} As a quick note, let $G=Q_{2^k}$ be a generalized quaternion group of order $2^k$, where $k\ge 3$. We know that $\cd(G)=\{1,2\}$ and $G$ has a faithful character, so $2^{k-1}\in\cod(G|G')$. Now, if $k\ge 4$, then ${\bf Z}(G)<G'$, so $G$ has a non-faithful non-linear character, and hence there exists $\alpha\in\cod(G|G')$ with $\alpha\ne 2^{k-1}$. Since $Q_8$ has a unique non-linear irreducible character, it follows that $|\cod(G|G')|=1$ if and only if $G\cong Q_8$. This fact is used in the following lemma. \begin{lemma}\label{Frob} If $G$ is a Frobenius *-group, then $G\cong C_p^k\rtimes Q_8$, for some prime $p$ and integer $k\ge 2$. \end{lemma} \begin{proof} Let $G=N\rtimes H$, where $N$ and $H$ are the Frobenius kernel and Frobenius complement, respectively. By Lemma \ref{Fro}, $N$ is a $p$-group, for some prime $p$. We know that for every $1_N\ne\phi\in\irr(N)$, $p$ divides $\cod(\phi^G)=|N:\ker\phi^G|/\phi(1)$. Therefore $|\cod(H|H')|=1$. Since $H$ is not a Frobenius group, by Lemma \ref{Qian}, $H$ is a $q$-group, for some prime $q$. Now since $H$ is non-cyclic, as we discussed above, $H\cong Q_8$, which yields that $N$ is abelian (the unique involution of $Q_8$ inverts every element of $N$). If $N$ is a minimal normal subgroup, then $N$ is an elementary abelian $p$-group, and we are done.
So we may assume that $N$ is not minimal normal. Assume, for a contradiction, that $G$ has a unique minimal normal subgroup $M$. Choose $\theta\in\irr(N|M)$ and a non-principal $\psi\in\irr(N/M)$. Obviously, $\theta^G$ is faithful and we can see that $$\cod(\theta^G)=\frac{\sg}{|G:N|}=|N|\quad \text{and} \quad \cod(\psi^G)=\frac{|G:\ker\psi^G|}{|G:N|}=|N:\ker\psi^G|<|N|,$$ which is a contradiction. Hence, there exist two distinct minimal normal subgroups $M_1$ and $M_2$. By induction, $N/M_1$ and $N/M_2$ are elementary abelian, which means that $\Phi(N)\le M_1\cap M_2=1$, and so $N$ is elementary abelian. \end{proof} We recall that a finite group $G$ is called a \textit{$2$-Frobenius group} if $G$ has a normal series $1<K<H<G$, such that $G/K$ is a Frobenius group with $H/K$ as the Frobenius kernel, and $K$ is the Frobenius kernel of the Frobenius group $H$. Recall that in such a $2$-Frobenius group, normal subgroups are either contained in $K$ or contain $H$; this fact will be useful in the following theorem. \begin{theorem}\label{2Fro} Let $G$ be a $2$-Frobenius group. Let $1<K<H<G$ be a normal series such that $G/K=H/K\rtimes R$ and $H=K\rtimes J$ are Frobenius groups with Frobenius complements $R\leq G/K$ and $J\leq H$, respectively. Then $G$ is a $*$-group if and only if all of the following hold: \begin{enumerate} \item $G/H\cong R$ is cyclic; \item $H/K\cong J\cong C_p$, for some prime $p$; \item $K$ is the unique minimal normal subgroup of $G$; \item For each $1\not =x\in K$, $C_G(x)/K\cong R_0$, where $R_0\le R$, and $\pi(R_0)=\pi(R)$. \end{enumerate} Moreover, in this case $\cod(G|G')=\{p,|K||R_0|\}$. \end{theorem} \begin{proof} First assume that $G$ satisfies conditions (1)--(4). Then $G'=H$, and every non-linear character $\chi \in \irr(G)$ satisfies either $\chi \in \irr(G/K|H/K)$ or $\chi \in \irr(G|K)$. If the former case occurs, then $\cod(\chi)=p$. Now, assume that the latter case occurs. Then $\chi=\theta^G$, where $\lambda\in \irr(K)$ is non-principal and $\theta\in\irr(\I_G(\lambda))$ lies over $\lambda$. Thus $\cod(\chi)=|\I_G(\lambda)|/(\theta(1)|\ker\chi|)$. Since $K$ is the only minimal normal subgroup of $G$, $\ker\chi=1$. Also $\I_G(\lambda)/K=C_G(x)/K\cong R_0$, for some $1\not =x\in K$. Thus $\theta$ is an extension of $\lambda$, leading to $\cod(\chi)=|\I_G(\lambda)|=|K||R_0|$. Hence $\cod(G|G')=\{p, |K||R_0|\}$ and $G$ is a *-group. Now, we assume that $G$ is a *-group. Note that $|\cod((G/K)|(G/K)')|\ge 1$, and $H/K$ is isomorphic to a Frobenius complement of $H$. If $G/K$ is a *-group, then by Lemma \ref{Frob}, $H/K\cong C_p^n$, for some prime $p$ and integer $n\ge 2$, which is a contradiction, as $H/K$ is isomorphic to a Frobenius complement of $H$ (and a Frobenius complement contains no subgroup isomorphic to $C_p\times C_p$). So $|\cod((G/K)|(G/K)')|= 1$ and $G/K$ is isomorphic to a group described in Lemma \ref{Qian}, and so $(G/K)'=H/K\cong J \cong C_p$, for some prime $p$; moreover, $p\in\cod(G|G')$ and $G/H\cong R$ is cyclic. Assume that $\cod(G|G')=\{p,\beta\}$, for some integer $\beta$. Now since $p=|H:K|=|G'K:K| \mid |G'|$, and $H$ is a Frobenius group, we get that $G'=H$. If there exists $p\ne s\in\pi(R)\setminus\pi(\beta)$, then by Theorem \ref{Qian1}, a Sylow $s$-subgroup of $G$ acts Frobeniusly on $G'=H$, which is impossible, as $H$ is a Frobenius group. Hence $\pi(R)\subseteq\pi(\beta)$. Next we prove that $K$ is a $q$-group, for some prime $q$. Note that $K$ is nilpotent. On the contrary and without loss of generality, assume that $K=S\times Q$, where $S\in\syl_s(K)$ and $Q\in\syl_q(K)$ are minimal normal subgroups of $G$, for some distinct prime numbers $s$ and $q$.
Since $\irr(G|S)\subseteq \irr(G|G')$, by Lemma \ref{2.7}, we get that $|S|$ divides $\beta$. Note that as $G/S$ is not a Frobenius group, $\cod((G/S)|(G/S)')=\cod(G|G')=\{p,\beta\}$, and so $\beta$ divides $|G/S|$, hence for $S_1\in\syl_s(R)$, $|S|\le(\beta)_s\le |S_1|$. However since $R$ acts Frobeniusly on $H/K\cong C_p$ and $H/K\cong C_p$ acts Frobeniusly on $K$, $|S_1|<p<|S|$, a contradiction, and so $K$ is a $q$-group. Now we aim to prove that $K$ is a minimal normal subgroup of $G$. Let $K/L$ be a chief factor of $G$, for some $L<K$. Note that since $G/L$ is not a Frobenius group there exists $1\ne x_0\in K/L$, such that $K/L< C_{G/L}(x_0)$. Assume that $\frac{C_{G/L}(x_0)}{K/L}\cong R_0\le R$. Let $\lambda\in\irr(K/L)$ be non-principal, such that $\I_{G/L}(\lambda)=C_{G/L}(x_0)$. As $R_0$ is cyclic, by \cite[Theorem 11.22]{Isaacs} $\lambda$ extends to $\lambda_0\in\irr(\I_{G/L}(\lambda))$. Now since $\ker(\lambda_0^{G/L})=1$, we get that $\cod(\lambda_0^{G/L})=|\I_{G/L}(\lambda)|=|K/L||R_0|=\beta$. Notice that by Lemma \ref{2.7}, $|K/L|\mid \cod(\chi)$ for all $\chi \in \irr((G/L)|(K/L))$, and so by the same discussion, for each $1\not =x\in K/L$, we have $\beta=|C_{G/L}(x)|=|K/L||R_0|$, which implies that $\frac{C_{G/L}(x)}{K/L}\cong R_0$, for each $1\not=x\in K/L$. Now we prove that $|R_0|_q=|R|_q$. Let $Q\in\syl_q(G/L)$. Choose $1\ne x\in {\bf Z}(Q)\cap K/L$, and non-principal $\lambda\in\irr(K/L)$, such that $C_{G/L}(x)=I_{G/L}(\lambda)$. Obviously, $Q\le C_{G/L}(x)$. Now since $\beta=|I_{G/L}(\lambda)|=|R_0||K/L|$, and $|Q|$ divides $|I_{G/L}(\lambda)|$, we get that $|R_0|_q=|R|_q$, and also $(\beta)_q=|Q|$. Let $K/T$ be another chief factor of $G$. By the discussion we just had, $\beta=|K/T||R_1|=|K/L||R_0|$, for some $R_1\le R$, and also $|R_1|_q=|R|_q$. Note that $(\beta)_q=|Q|=|K/T||R_1|_q=|K/L||R_0|_q$, and since $|R_1|_q=|R|_q=|R_0|_q$, we get that $|K/T|=|K/L|$. In other words, each chief factor $K/W$ of $G$, has order $|K/L|$. Let $G_0/K\le G/K$ such that $G_0/K\cong C_p\rtimes R_0$. Since $G/K$ is a Frobenius group and $R$ is cyclic, $G_0$ is unique. Looking at the action of $G_0/K\cong \frac{G_0/L}{K/L}$ on $K/L$, by the above argument, we see that $C_{G_0/K}(x)$ is a non-trivial $p'$-subgroup of $G_0/K$, for all $1\ne x\in K/L $. Observe that $C_{G_0/K}(x)$ is cyclic, hence every subgroup of $C_{G_0/K}(x)$ is normal in $C_{G_0/K}(x)$. Note that since $G_0/K$ is a Frobenius group, we get that for $s\in\pi(C_{G_0/K}(x))$, $n_s(G_0/K)=p$. Now by Theorem \ref{sak}, there exists non-trivial $K_0/L< K/L$, such that $\frac{|K/L|-1}{|K_0/L|-1}=p$. Now we prove that $L=1$. On the contrary, let $L/L_0$ be a chief factor of $G$. We consider the following two cases: $\bullet$ Case 1) Assume that for each non-principal $\lambda\in \irr(L/L_0)$ and $\chi\in \irr(\lambda^{G/L_0})$, $\ker(\chi)=L_0$.\\ Let $Q\in\syl_q(G/L_0)$. Notice that since $|R|_q=|R_0|_q$, we get that $|Q|=|K/L_0||R_0|_q$. Now let $1\ne x\in {\bf Z}(Q) \cap L/L_0$. Choose non-principal $\lambda_2\in\irr(L/L_0)$, such that $\I_{G/L_0}(\lambda_2)=C_{G/L_0}(x)$, which clearly contains $Q$. Let $\chi\in\irr(\lambda_2^{G/L_0})$. Obviously, $\chi=\lambda_0^{G/L_0}$, for some $\lambda_0\in\irr(\lambda_2^{\I_{G/L_0}(\lambda_2)})$. Now since $\ker\chi=L_0$, we get that $\cod(\chi)=|\I_{G/L_0}(\lambda_2)|/\lambda_0(1)=|R_0||K/L|=\beta$. Observe that $|\I_{G/L_0}(\lambda_2)|_q=|L/L_0||K/L||R_0|_q$, and so $\lambda_0(1)_q=|L/L_0|$, for each $\lambda_0\in\irr(\lambda_2^{\I_{G/L_0}(\lambda_2)})$. 
Recall that $$|\I_{G/L_0}(\lambda_2):L/L_0|=\sum\limits_{\lambda_0\in\irr(\lambda_2^{\I_{G/L_0}(\lambda_2)})}\lambda_0^2(1),$$ and so $|\lambda_0(1)|_q^2=|L/L_0|^2$ divides $|\I_{G/L_0}(\lambda_2):L/L_0|_q$, hence $|L/L_0|^3\mid |\I_{G/L_0}(\lambda_2)|_q$. On the other hand, since $H$ is a Frobenius group $H/K\cong C_p$ acts Frobeniusly on $L/L_0$, whence $p\mid |L/L_0|-1$. Assume that $|L/L_0|\ge |K/L|$. Since $G/K$ is a Frobenius group, $|R_0|_q=|R|_q\mid p-1$. Thus $|R_0|_q<|L/L_0|$, implying that $|I_{G/L_0}(\lambda_2)|_q=|L/L_0||K/L||R_0|_q<|L/L_0|^3$, which is a contradiction. Hence $|L/L_0|< |K/L|$. Considering that $\frac{|K/L|-1}{|K_0/L|-1}=p$ and $p\mid |L/L_0|-1$, we deduce $|K/L|-1$ does not have a primitive prime divisor, and so Zsigmondy's Theorem asserts that we have the following two cases:\\ (1) $|K/L|=q^2$, where $q=2^a-1$ is a Mersenne prime, which is impossible. \\ (2) $|K/L|=2^6$. However $\frac{2^6-1}{2^2-1}=21$ and $\frac{2^6-1}{2^3-1}=3^2$ are not prime, a contradiction . $\bullet$ Case 2) Assume that there exists non-principal $\lambda\in\irr(L/L_0)$ and $\chi\in \irr(\lambda^{G/L_0})$, such that $L_0<\ker(\chi)$.\\ Note that since $G$ is a $2$-Frobenius group and $H\nless \ker(\chi)$, we get that $\ker\chi\le K$, and so $K/\ker\chi$ is a chief factor of $G$, hence $|K/\ker\chi|=|K/L|$. Obviously, $\ker\chi\cap L=L_0$, and so $K/\ker\chi=L\ker\chi/\ker\chi\cong L/(L\cap \ker\chi)=L/L_0$. Thus, $|K/L|=|K/\ker\chi|=|L/L_0|$. Let $\nu\in\irr(L/L_0)$ be non-principal, $\nu_1=\nu\times 1 \in\irr(L/L_0\times \ker\chi/L_0)=\irr(K/L_0)$, and $\mu\in\irr(\nu_1)$. Obviously, $\cod(\mu)=|K/L||R_0|$. On the other hand, $\mu=\theta^{G/L_0}$, for some linear $\theta\in\irr((\nu_1)^{\I_{G/L_0}(\nu_1)})$, and so $\cod(\theta^{G/L_0})=|\I_{G/L_0}(\nu_1):\ker\theta^{G/L_0}|=|\I_{G/L_0}(\nu_1)|/|\ker\chi:L_0|=|K/L||R_0|$. We get that $|\I_{G/L_0}(\nu)|=|\I_{G/L_0}(\nu_1)|=|K/L|^2|R_0|=|K/L_0||R_0|$, and so we have that $|\I_{G/L_0}(\lambda):K/L_0|=|R_0|$, for all non-principal $\lambda\in \irr(L/L_0)$. With the same argument for each non-principal $\zeta\in\irr(\ker\chi/L_0)$, we have that $|\I_{G/L_0}(\zeta):K/L_0|=|R_0|$. Now choose non-principals $\nu\in \irr(L/L_0)$ and $\zeta\in \irr(\ker\chi/L_0)$, and let $\omega=\nu\times \zeta$. Note that $\frac{\I_{G/L_0}(\omega)}{K/L_0}=\frac{\I_{G/L_0}(\nu)\cap \I_{G/L_0}(\zeta)}{K/L_0}=R_0^g$, for some $g\in G$ or $\frac{\I_{G/L_0}(\omega)}{K/L_0}=1$, as the intersection of distinct conjugates of $R_0$ is trivial. Assume that $\I_{G/L_0}(\omega)=K/L_0$, and so $\gamma=\omega^{G/L_0}\in \irr(\omega^{G/L_0})$. We have $\cod(\gamma)=$ $|K/L_0|/|\ker\gamma/L_0|$ $=|K/\ker\gamma|=|R_0||K/L|$, which implies that $|K/L|<|K/\ker\gamma|$, and so $K/\ker\gamma$ is not a chief factor of $G$. Now since $L_0\le \ker\gamma<K$, we get that $\ker\gamma=L_0$. Hence, $|L/L_0|=|K/L|=|R_0|$. Since $|R_0|\mid (p-1)$ and $p\mid (|K/L|-1)$, we get a contradiction. Therefore, for each non-principal $\nu\in \irr(L/L_0)$ and non-principal $\zeta\in \irr(\ker\chi/L_0)$, we have that $\frac{\I_{G/L_0}(\nu)}{K/L_0}=\frac{\I_{G/L_0}(\zeta)}{K/L_0}=R_0$, which means for all $1\ne x\in K/L_0$, $\frac{C_{G/L_0}(x)}{K/L_0}=R_0^g$, for some $g\in G$, and so $R_0^g\le \frac{C_{G/L_0}(K/L_0)}{K/L_0}\trianglelefteq \frac{G/L_0}{K/L_0}$. Now since $\frac{G/L_0}{K/L_0}\cong G/K$, which is a Frobenius group, we get that $\frac{H/L_0}{K/L_0}\le \frac{C_{G/L_0}(K/L_0)}{K/L_0}$, a contradiction. 
Now since both cases lead to contradiction, we get that $L=1$ and $K$ is a minimal normal subgroup of $G$, as desired, and also $\beta=|K||R_0|$. Now since $\pi(R)\subseteq \pi(|K||R_0|)=\{q\}\cup \pi(R_0)$, and $|R_0|_q=|R|_q$, we get that $\pi(R)=\pi(R_0)$. \end{proof} \begin{lemma}\label{TH} Let $G=P\rtimes R$, where $P\in\syl_p(G)$ and $R$ is a Hall $p'$-subgroup of $G$, for some prime $p$. Let $N<P$ be a minimal normal subgroup of $G$ such that $G/N$ is a Frobenius group, with Frobenius kernel $P/N$. If $N$ is not a direct factor of $P$, then: \begin{enumerate} \item for each $\chi\in\irr(G|N)$, $(|\ker\chi|,|R|)=1$, \item $G$ is a Frobenius group or there exists $\chi\in\irr(G|N)$ such that $(\cod(\chi),|R|)\ne 1$. \end{enumerate} \end{lemma} \begin{proof} (1) Assume that $\chi\in\irr(G|N)$. We know that $\ker\chi=P_0\rtimes R_0$, where $P_0<P$ and $R_0\le R$. If $R_0\ne 1$, then since $G/N$ is a Frobenius group and $N\ker\chi/N$ is not a $p$-subgroup of $G/N$, we have that $P/N\le N\ker\chi/N$ and we conclude that $P\le NP_0=N\times P_0$, which is a contradiction, and so $R_0=1$. \\ (2) Assume that $(\cod(\chi),|R|)= 1$, for each $\chi\in\irr(G|N)$. Let $\lambda\in\irr(\chi_N)$ for some $\chi\in \irr(G|N)$. We know that $\chi=\theta^G$ for some $\theta\in\irr(\lambda^{\I_G(\lambda)})$. Thus $\cod(\chi)=\frac{|G:\ker\chi|}{|G:\I_{G}(\lambda)|\theta(1)}=\frac{|\I_G(\lambda):\ker\theta^G|}{\theta(1)}$ and so by part (1), $|\I_G(\lambda)|_{p'}=(\theta(1))_{p'}$. Let $k=(\theta(1))_{p'}$. Recall that $|\I_G(\lambda):N|=\sum\limits_{\theta\in\irr(\lambda^{\I_G(\lambda)})}\theta(1)^2$, which is divisible by $k^2$, thus $(\theta(1))_{p'}=k=1$, and so $|\I_G(\lambda)|_{p'}=1$, which means $\I_G(\lambda)\le P$, for each $1_N\ne \lambda\in \irr(N)$. Thus, for each $1\ne x\in N$, we have that $C_R(x)=1$, and so $R$ acts Frobeniusly on $N$ and since $R$ acts Frobeniusly on $P/N$, we conclude that $G$ is a Frobenius group. \end{proof} \begin{lemma}\label{1p} Let $G$ be a *-group and $N$ a minimal normal $q$-subgroup, for some prime $q$. If $G/N$ is a $p$-group, for some prime $p$, then $G$ is a Frobenius group. \end{lemma} \begin{proof} Obviously, $G=N\rtimes P$, where $P\in\syl_p(G)$ and $\cod(G|G')=\{p^m,q^k\}$, for some $m$ and $k$. If $C_N(P)=N$, then $G=N\times P$, which contradicts Lemma \ref{nilp}. Hence, as $N$ is minimal normal, we get that $C_N(P)=1$, and so for each $1\ne x\in N$, there exists a $p$-subgroup $P_x\notin\syl_p(G)$ such that $ C_G(x)=N\rtimes P_x< G$. Now we prove that for each $1\ne x\in N$, $C_G(x)\le N$. On the contrary, assume that there exists $1\ne z\in N$ such that $P_z\ne 1$. Choose $\lambda\in \irr(N)$ such that $C_G(z)=\I_G(\lambda)$. By \cite[Problem 6.18]{Isaacs}, we get that $\lambda$ extends to $\lambda_0\in\irr(\I_G(\lambda))$. Note that $\ker\lambda_0^G\le P_z $ and by Lemma \ref{2.7}, $|N|$ divides $\cod(\lambda_0^G)=|\I_G(\lambda):\ker\lambda_0^G|$. As $\cod(\lambda_0^G)=q^k$ we conclude that $\ker\lambda_0^G=P_z$ and so $\I_G(\lambda)=N\times P_z$. Now choose $\theta\in\irr(P_z/P_z')$. We observe that $\lambda$ extends to $\chi=\lambda\times\theta$. Now it is easy to see that $pq\mid |\I_G(\lambda):\ker\chi^G|=\cod(\chi^G)$, a contradiction. So $C_G(x)=N$ for each $1\ne x\in N$, which means $P$ acts Frobeniusly on $N$. \end{proof} \begin{lemma}\label{1Q} Let $G$ be a *-group and $N$ be a minimal normal $q$-subgroup of $G$ such that $\overline{G}=G/N$ is non-nilpotent and $|\cod(\overline{G}|\overline{G}')|=1$. Then $G$ is a $2$-Frobenius group. 
\end{lemma} \begin{proof} By Lemma \ref{Qian}, $G/N=K/N\rtimes R$ is a Frobenius group, where $K/N=(G/N)'$ is an elementary abelian $p$-group and $R$ is a cyclic Hall $p'$-subgroup of $G/N$. Note that $p^k\in\cod(\overline{G}|\overline{G}')\subset \cod(G|G')$, for some $k$. We study the following two cases. \bigskip Case 1) $p=q$.\\ In this case, $NG'\in \syl_p(G)$ is normal in $G$. We claim that $N$ is not a direct factor of $G'N$. Otherwise, $G'N$ is an elementary abelian $p$-group and so by Maschke's theorem there exists $L\le G'$ such that $G'N=N\times L$ and $L\trianglelefteq G$. We may assume that $L$ is a minimal normal subgroup of $G$. Note that if $G$ is a Frobenius group, then by Lemma \ref{Frob}, $R\cong Q_8$, which is a contradiction. Hence we get that $R$ does not act Frobeniusly on $N$, thus $N$ and $L$ are not $G$-isomorphic and so $N$ and $L$ are the only minimal normal subgroups of $G$. We choose $1_N\ne\psi\in\irr(N)$ and $1_L\ne\phi\in\irr(L)$. Let $\theta=\psi\times\phi$ and $\chi\in\irr(\theta^G)$. Note that since $N\times L=G'N\nleq \ker\chi$, we get that $\chi$ is faithful and non-linear. Now by Lemma \ref{2.7}, $|G'N|\mid \cod(\chi)$, and since $p^k<|G'N|$, we get a contradiction. Therefore $N$ is not a direct factor of $G'N$. Now by Lemma \ref{TH}(2), there exists $\theta \in\irr(G|N)\subset \irr(G|G'N)$ such that $(\cod(\theta),|R|)\ne 1$. So $\cod(G|G'N)=\{p^k,\cod(\theta)\}$; however, by Lemma \ref{2.7}, $p \mid \cod(\theta)$, which is a contradiction. \bigskip Case 2) $p\ne q$.\\ In this case, $K=N\rtimes L$, for some $L\in\syl_p(K)$. Our goal is to show that $L$ acts Frobeniusly on $N$, which shows that $G$ is a $2$-Frobenius group. First of all, since $K'\le N$, we get that $K'=1$ or $K'=N$. Assume that $K$ is abelian. Hence $K=N\times L$. Note that for each $1\ne x\in L$, $\overline{C_G(x)}=C_{\overline{G}}(\overline{x})=\overline{K}$, and so $C_G(x)=K$; hence $\I_G(\lambda)=K$, for each $1_L\ne \lambda\in\irr(L)$. Let $1_N\ne \lambda_0\in\irr(N)$ and $\chi=\lambda_0\times \lambda$. Then $\chi\in\irr(K)$ and $\chi_L=\lambda$. Now as $pq\mid \cod(\chi)$, by Lemma \ref{subnormality}, we conclude that $pq\mid \cod(\chi^G)=|K:\ker\chi^G|$, a contradiction. So $K'=N$. We claim that $p$ does not divide any element in $\cod(K|K')$. On the contrary, assume that $p\mid \cod(\theta)$, for some $\theta\in\irr(K|K')$. Note that $q\mid |K:\ker\theta|$ and, by Ito's theorem, $\theta(1)$ divides $|K:N|=|L|$, which means that $pq\mid \frac{|K|}{\theta(1)|\ker\theta|}=\cod(\theta)$; since $K\trianglelefteq G$, Lemma \ref{subnormality} then gives $pq\mid\cod(\chi)$ for every $\chi\in\irr(\theta^G)$, which is a contradiction. So $p$ does not divide any element in $\cod(K|K')$. Now Theorem \ref{Qian1} asserts that $L$ acts Frobeniusly on $K'=N$ and $K$ is a Frobenius group, as we wanted. \end{proof}
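As an illustration of Theorem \ref{2Fro}, we record a small example; it is included only as a sanity check and is not used in the sequel.

\begin{example} Let $G=S_4$, with the normal series $1<K<H<G$ given by $K=V_4$ (the Klein four-group) and $H=A_4$. Then $H=K\rtimes J$ with $J\cong C_3$, and $G/K\cong S_3=H/K\rtimes R$ with $R\cong C_2$, so $G$ is a $2$-Frobenius group. The non-linear irreducible characters of $S_4$ have degrees $2$, $3$, $3$; the character of degree $2$ has kernel $V_4$, so its codegree is $|S_4:V_4|/2=3$, while the two characters of degree $3$ are faithful, with codegree $24/3=8$. Hence $\cod(G|G')=\{3,8\}$ consists of two coprime integers and $G$ is a $*$-group. Conditions (1)--(4) of Theorem \ref{2Fro} indeed hold: $R\cong C_2$ is cyclic, $H/K\cong J\cong C_3$, $K=V_4$ is the unique minimal normal subgroup of $S_4$, and for each $1\ne x\in K$ the centralizer $C_G(x)$ is dihedral of order $8$, so that $C_G(x)/K\cong C_2\cong R_0=R$ and $\pi(R_0)=\pi(R)=\{2\}$. Accordingly, $\cod(G|G')=\{p,|K||R_0|\}=\{3,4\cdot 2\}=\{3,8\}$. \end{example}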
2412.13975v1
http://arxiv.org/abs/2412.13975v1
The number of descendants in a preferential attachment graph
\documentclass[11pt,reqno,tbtags]{amsart} \usepackage[utf8]{inputenc} \usepackage[a4paper,width=150mm,top=25mm,bottom=25mm]{geometry} \usepackage{mathtools} \usepackage{suffix} \usepackage{enumerate} \usepackage{enumitem} \usepackage{listings} \renewcommand{\baselinestretch}{1} \newcommand{\cyan}[1]{\textcolor{cyan}{#1}} \newcommand{\magenta}[1]{\textcolor{magenta}{#1}} \makeatletter \newcommand*{\rom}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \usepackage{pgf, tikz} \usepackage{subcaption} \usetikzlibrary{arrows, automata} \usepackage{float} \usepackage{parskip} \setlength{\parindent}{2em} \setlength{\oddsidemargin}{5mm} \setlength{\evensidemargin}{5mm} \usepackage{amsmath,amsthm,amssymb} \numberwithin{equation}{section} \newcommand\mycom[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \DeclareRobustCommand{\stirling}{\genfrac\{\}{0pt}{}} \allowdisplaybreaks \usepackage{bbm} \usepackage[makeroom]{cancel} \usepackage{xcolor} \definecolor{coolblack}{rgb}{0.0, 0.18, 0.39} \usepackage[breaklinks=true]{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, urlcolor=blue, citecolor=black } \title{The number of descendants in a preferential attachment graph} \author{Svante Janson, Tiffany Y.\ Y.\ Lo} \thanks{Supported by the Knut and Alice Wallenberg Foundation, Ragnar Söderberg Foundation, the Swedish Research Council (VR), and Sverker Lerheden Foundation. } \address{Department of Mathematics, Uppsala University, PO Box 480, SE-751~06 Uppsala, Sweden} \email{[email protected] } \address{Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden} \email{[email protected]} \date{18 December, 2024} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{question}[theorem]{Question} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjexample}[theorem]{Conjectural Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{case}[theorem]{Case} \newtheorem{condition}[theorem]{Condition} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \DeclareRobustCommand*{\vea}{\overrightarrow{v}_s(\al)} \DeclareRobustCommand*{\oea}{\overrightarrow{e}(\al)} \newcommand{\al}{\alpha} \newcommand{\IP}{\mathbbm{P}} \newcommand\E{\operatorname{\mathbb E}} \newcommand{\F}{\mathcal{F}} \newcommand{\Pa}{\pi_\alpha} \newcommand{\Pb}{\pi_\beta} \newcommand{\nz}{n_0} \newcommand{\G}{\Gamma} \newcommand{\Gt}{\mathcal{G}_t} \newcommand{\Lx}{\mathcal{L}_X} \newcommand{\ld}{\ell+\delta} \newcommand{\Nn}{N_n} \newcommand{\limn}{\underset{n\rightarrow \infty}{\mathrm{lim}}} \newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}} \newcommand{\wt}{\widetilde} \newcommand{\normx}{\lVert \mathbf{x}\rVert} \newcommand{\Zd}{\mathbbm{Z}^d} \newcommand{\be}{\mathbf{e}} \newcommand{\bB}{\mathbf{B}} \newcommand{\bx}{\mathbf{x}} \newcommand{\wh}{\widehat} \newcommand{\tone}{\mathbf{1}} \newcommand{\normxe}{\lVert \mathbf{x}+\mathbf{e}\rVert} \newcommand{\bone}{\mathbbm{1}} \newcommand{\rp}{\mathbbm{R}_+} \newcommand{\IZ}{\mathbbm{Z}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cV}{\mathcal{V}} \newcommand{\tand}{\text{and}} \newcommand{\tbj}{\textbf{j}} \newcommand{\tbk}{\textbf{k}} \newcommand{\cT}{\mathcal{T}} \newcommand{\var}{\mathrm{Var}} \newcommand{\cov}{\mathrm{Cov}} 
\newcommand{\dtv}{\mathop{d_{\mathrm{TV}}}} \newcommand{\dw}{\mathop{d_{\mathrm{W}}}} \newcommand{\dk}{\mathop{d_{\mathrm{K}}}} \newcommand{\law}{\mathcal{L}} \newcommand{\toinf}{\to\infty} \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\phi}{\varphi} \newcommand{\eps}{\varepsilon} \newenvironment{romenumerate}[1][-10pt]{\addtolength{\leftmargini}{#1}\begin{enumerate} \renewcommand{\labelenumi}{\textup{(\roman{enumi})}} \renewcommand{\theenumi}{\textup{(\roman{enumi})}} }{\end{enumerate}} \renewcommand{\le}{\leq} \renewcommand{\ge}{\geq} \newcommand{\refT}[1]{Theorem~\ref{#1}} \newcommand{\refTs}[1]{Theorems~\ref{#1}} \newcommand{\refC}[1]{Corollary~\ref{#1}} \newcommand{\refCs}[1]{Corollaries~\ref{#1}} \newcommand{\refL}[1]{Lemma~\ref{#1}} \newcommand{\refLs}[1]{Lemmas~\ref{#1}} \newcommand{\refR}[1]{Remark~\ref{#1}} \newcommand{\refRs}[1]{Remarks~\ref{#1}} \newcommand{\refS}[1]{Section~\ref{#1}} \newcommand{\refSs}[1]{Sections~\ref{#1}} \newcommand{\refApp}[1]{Appendix~\ref{#1}} \newcommand{\refP}[1]{Proposition~\ref{#1}} \newcommand{\refD}[1]{Definition~\ref{#1}} \newcommand{\refE}[1]{Example~\ref{#1}} \newcommand{\refEs}[1]{Examples~\ref{#1}} \newcommand{\refConj}[1]{Conjecture~\ref{#1}} \newcommand{\refStep}[1]{Step~\ref{#1}} \newcommand{\refSteps}[1]{Steps~\ref{#1}} \newcommand\ga{\alpha} \newcommand\gb{\beta} \newcommand\gd{\delta} \newcommand\gD{\Delta} \newcommand\gf{\varphi} \newcommand\GF{\Phi} \newcommand\gam{\gamma} \newcommand\gamm{\gamma^2} \newcommand\gG{\Gamma} \newcommand\gk{\varkappa} \newcommand\kk{\kappa} \newcommand\gl{\lambda} \newcommand\gL{\Lambda} \newcommand\go{\omega} \newcommand\gO{\Omega} \newcommand\gs{\sigma} \newcommand\gS{\Sigma} \newcommand\gss{\sigma^2} \newcommand\gt{\tau} \newcommand\gth{\theta} \newcommand\gu{\upsilon} \newcommand\gU{\Upsilon} \newcommand\cA{\mathcal A} \newcommand\cB{\mathcal B} \newcommand\cI{\mathcal I} \newcommand\cM{\mathcal M} \newcommand\cU{\mathcal U} \newcommand\cX{\mathcal X} \newcommand\tcB{\widetilde{\mathcal B}} \newcommand\tU{\widetilde{U}} \newcommand\sU{\mathsf{U}} \newcommand\hA{\widehat{A}} \newcommand\xD{\widehat{D}} \newcommand\hF{\widehat{F}} \newcommand\hH{\widehat{H}} \newcommand\hP{\widehat{P}} \newcommand\hT{\widehat{T}} \newcommand\hV{\widehat{V}} \newcommand\suma{\sum_{\nu\in\cI}} \newcommand\sumin{\sum_{i=1}^n} \newcommand\sumi{\sum_{i=1}^\infty} \newcommand\sumn{\sum_{n=1}^\infty} \newcommand\xx[1]{^{(#1)}} \newcommand\set[1]{\ensuremath{\{#1\}}} \newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}} \newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}} \newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}} \newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}} \newcommand\xpar[1]{(#1)} \newcommand\bigpar[1]{\bigl(#1\bigr)} \newcommand\Bigpar[1]{\Bigl(#1\Bigr)} \newcommand\biggpar[1]{\biggl(#1\biggr)} \newcommand\lrpar[1]{\left(#1\right)} \newcommand\bigsqpar[1]{\bigl[#1\bigr]} \newcommand\sqpar[1]{[#1]} \newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]} \newcommand\biggsqpar[1]{\biggl[#1\biggr]} \newcommand\lrsqpar[1]{\left[#1\right]} \newcommand\abs[1]{\lvert#1\rvert} \newcommand\bigabs[1]{\bigl\lvert#1\bigr\rvert} \newcommand\Bigabs[1]{\Bigl\lvert#1\Bigr\rvert} \newcommand\biggabs[1]{\biggl\lvert#1\biggr\rvert} \newcommand\lrabs[1]{\left\lvert#1\right\rvert} \newcommand\downto{\searrow} \newcommand\upto{\nearrow} \newcommand{\tend}{\longrightarrow} \newcommand\dto{\overset{\mathrm{d}}{\tend}} \newcommand\pto{\overset{\mathrm{p}}{\tend}} \newcommand\asto{\overset{\mathrm{a.s.}}{\tend}} 
\newcommand\ktoo{\ensuremath{{k\to\infty}}} \newcommand\ntoo{\ensuremath{{n\to\infty}}} \newcommand\Ntoo{\ensuremath{{N\to\infty}}} \newcommand\ttoo{\ensuremath{{t\to\infty}}} \newcommand\Po{\operatorname{Po}} \newcommand\Bi{\operatorname{Bi}} \newcommand\Bin{\operatorname{Bin}} \newcommand\Be{\operatorname{Be}} \newcommand\Ge{\operatorname{Ge}} \newcommand\NBi{\operatorname{NegBin}} \newcommand\GGx{\Gamma^*} \newcommand\aut{\operatorname{aut}} \renewcommand\P{\IP} \newcommand\Var{\operatorname{Var}} \newcommand\Cov{\operatorname{Cov}} \newcommand\xdots{\cdots} \newcommand\xnot{\text{not }} \newcommand\bbN{\mathbb N} \newcommand\bbR{\mathbb R} \newcommand\jq{q} \newcommand\gab{\ga\gb} \newcommand\gaxb{\ga{\cdot}\gb} \newcommand\gabxcc{{\ga{\cdot}\gb{*}\gamma_1\gamma_2}} \newcommand\gabcc{{\ga{\cdot}\gb{\cdot}\gamma_1\gamma_2}} \newcommand\gaxcc{{\ga{*}\gamma_1\gamma_2}} \newcommand\gacc{{\ga{\cdot}\gamma_1\gamma_2}} \newcommand\ttone{\tilde{\tone}} \newcommand\qw{^{-1}} \newcommand\qww{^{-2}} \newcommand\qq{^{1/2}} \newcommand\qqw{^{-1/2}} \newcommand\lrceil[1]{\left\lceil#1\right\rceil} \newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor} \newcommand\aaa{^{(a)}} \newcommand\logn[1]{\log^{#1}n} \newcommand\WW[1]{W^{(#1)}} \newcommand\WWn[1]{W_n^{(#1)}} \newcommand\WIJ[1]{W'_{j,i,#1}} \newcommand\WKJ[1]{W'_{j,k,#1}} \newcommand\Sl{S_\ell} \newcommand\Sli{S_{\ell-1}} \newcommand\intoo{\int_0^\infty} \newcommand\dd{\,\mathrm{d}} \newcommand\ddx{\mathrm{d}} \newcommand\ddd[1]{\frac{\ddx}{\ddx#1}} \newcommand\eqd{\overset{\mathrm{d}}{=}} \newcommand\intoi{\int_0^1} \newcommand\lhs{left-hand side} \newcommand\rhs{right-hand side} \newcommand\hcY{\widehat{\mathcal Y}} \newcommand\cY{\mathcal{Y}} \newcommand\nn{^{(n)}} \newcommand\xfrac[2]{#1/#2} \newcommand\whp{w.h.p.} \newcounter{steps} \newcommand\stepp{\par\noindent\refstepcounter{steps} \emph{Step \arabic{steps}. }\noindent} \newcommand\steppx[1]{\par\noindent\refstepcounter{steps} \emph{Step \arabic{steps}. 
#1}\noindent} \newcommand\resetsteps{\setcounter{steps}{0}} \newcommand\oi{\ensuremath{[0,1]}} \newcommand\nxoo{_{n=1}^\infty} \newcommand\Beta{\mathrm{Beta}} \newcommand\GAMMA{\mathrm{Gamma}} \newcommand\Phix{\widehat\Psi} \newcommand\xM{\mathfrak M} \newcommand\tM{\widetilde M} \newcommand\gln{\gl_n} \newcommand\tgb{\tilde\beta} \newcommand\op{o_{\mathrm p}} \newcommand\Op{O_{\mathrm p}} \newcommand\Mx{M_*} \newcommand\Mxx{\Mx} \newcommand\bignorm[1]{\bigl\lVert#1\bigr\rVert} \newcommand\Bignorm[1]{\Bigl\lVert#1\Bigr\rVert} \newcommand\lrnorm[1]{\left\lVert#1\right\rVert} \newcommand\MM{\widehat M} \begingroup \count255=\time \divide\count255 by 60 \count1=\count255 \multiply\count255 by -60 \advance\count255 by \time \ifnum \count255 < 10 \xdef\klockan{\the\count1.0\the\count255} \endgroup \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \def\note#1{\par\smallskip\noindent\llap{$\boldsymbol\Longrightarrow$}\fbox{\vtop{\hsize=0.98\hsize\parindent=0cm\small\rm #1}}\rlap{$\boldsymbol\Longleftarrow$}\par\smallskip} \def\given{\typeout{Command 'given' should only be used within bracket command}} \newcounter{@bracketlevel} \def\@bracketfactory#1#2#3#4#5#6{ \expandafter\def\csname#1\endcsname##1{\addtocounter{@bracketlevel}{1}\global\expandafter\let\csname @middummy\alph{@bracketlevel}\endcsname\given\global\def\given{\mskip#5\csname#4\endcsname\vert\mskip#6}\csname#4l\endcsname#2##1\csname#4r\endcsname#3\global\expandafter\let\expandafter\given\csname @middummy\alph{@bracketlevel}\endcsname \addtocounter{@bracketlevel}{-1}}} \def\bracketfactory#1#2#3{\@bracketfactory{#1}{#2}{#3}{relax}{1mu plus 0.25mu minus 0.25mu}{0.6mu plus 0.15mu minus 0.15mu} \@bracketfactory{b#1}{#2}{#3}{big}{1mu plus 0.25mu minus 0.25mu}{0.6mu plus 0.15mu minus 0.15mu} \@bracketfactory{bb#1}{#2}{#3}{Big}{2.4mu plus 0.8mu minus 0.8mu}{1.8mu plus 0.6mu minus 0.6mu} \@bracketfactory{bbb#1}{#2}{#3}{bigg}{3.2mu plus 1mu minus 1mu}{2.4mu plus 0.75mu minus 0.75mu} \@bracketfactory{bbbb#1}{#2}{#3}{Bigg}{4mu plus 1mu minus 1mu}{3mu plus 0.75mu minus 0.75mu} } \bracketfactory{clc}{\lbrace}{\rbrace} \bracketfactory{clr}{(}{)} \bracketfactory{cls}{[}{]} \bracketfactory{abs}{\lvert}{\rvert} \bracketfactory{norm}{\Vert}{\Vert} \bracketfactory{floor}{\lfloor}{\rfloor} \bracketfactory{ceil}{\lceil}{\rceil} \bracketfactory{angle}{\langle}{\rangle} \begin{document} \begin{abstract} We study the number $X^{(n)}$ of vertices that can be reached from the last added vertex $n$ via a directed path (the descendants) in the standard preferential attachment graph. In this model, vertices are sequentially added, each born with outdegree $m\ge 2$; the endpoint of each outgoing edge is chosen among previously added vertices with probability proportional to the current degree of the vertex plus some number $\rho$. We show that $X^{(n)}/n^\nu$ converges in distribution as $n\to\infty$, where $\nu$ depends on both $m$ and $\rho$, and the limiting distribution is given by a product of a constant factor and the $(1-\nu)$-th power of a $\GAMMA(m/(m-1),1)$ variable. The proof uses a P\'olya urn representation of preferential attachment graphs, and the arguments of Janson (2024) where the same problem was studied in uniform attachment graphs. Further results, including convergence of all moments and analogues for the version with possible self-loops are provided. 
\end{abstract} \maketitle \section{Introduction} Preferential attachment models have emerged as a popular class of random graphs since it was proposed in \cite{BA1997} as an explanation for the power-law degree sequences observed in real-world networks. There are several versions of these models, differing in minor details, see e.g.\ \cite{vdh2017}; we will use the version defined below, which is the sequential model in \cite{Berger2014}. In this version, self-loops are not allowed but multiple edges are possible. The graph is often treated as undirected, but we regard it as directed, with all edges directed from the younger vertex (with larger label) to the older vertex (with smaller label). \begin{definition}[Preferential attachment graph]\label{de:pa} Fix an integer $m\geq 2$ and a real number $\rho>-m$, and let $(G_n)_{n\geq 1}$ be the sequence of random graphs that are generated as follows; $G_n$ has $n$ vertices with labels in $[n]:=\{1,\dots,n\}$. The initial graph $G_1$ consists of a single vertex (labelled 1) with no edges. Given $G_{n-1}$, we construct $G_{n}$ from $G_{n-1}$ by adding the new vertex with label $n$, and sequentially attaching $m$ edges between vertex~$n$ and at most $m$ vertices in $G_{n-1}$ as follows. Let {$d_j(n)$} be the degree of vertex $j$ in $G_n$. If $n\ge2$, each outgoing edge of vertex $n$ is attached to vertex $j\in[n-1]$ with probability proportional to $\rho$ + the current degree of vertex~$j$. (In particular, if $n=2$, we add $m$ edges from vertex~2 to vertex 1.) This means that the first {outgoing} edge of vertex $n$ is attached to vertex $j\in[n-1]$ with probability \begin{align}\label{eq:pa1} \frac{d_j(n-1)+\rho}{2m(n-2)+(n-1)\rho}; \end{align} noting that $\sum^{n-1}_{k=1}d_k(n-1)=2m(n-2)$ and $d_j(n-1)+\rho\ge m+\rho >0$. Furthermore, given that the first $1\leq k\leq m-1$ outgoing edges of vertex $n$ have been added to the graph, the $(k+1)$th edge of vertex $n$ is attached to vertex $j\in{[n-1]}$ with probability \begin{align}\label{eq:pa2} \frac{d_j(n-1)+\sum^k_{\ell=1}\tone[n\overset{\ell}{\to} j]+\rho}{2m(n-2)+k+(n-1)\rho}, \end{align} where $n\overset{\ell}{{\to}} j$ is shorthand for the event that the $\ell$-th outgoing edge of vertex $n$ is attached to vertex $j$. The resulting graph $G_n$ is a preferential attachment graph with $n$ vertices with parameters~$m$ and $\rho$, and we denote its law by $\mathrm{PA}(n,m,{\rho})$. \end{definition} The formulation of the sequential model in \cite{Berger2014} is somewhat different, but is easily seen to be equivalent. Note also that \cite{Berger2014} assume (in our notation) $\rho\ge 0$, but in the formulation above, only $\rho>-m$ is needed. The definition above is valid also for $m=1$ (in which case the graph is a tree), but we do not consider this case in the present paper; see Remark \ref{Rm=1} below for a further discussion. Since \cite{Bollobas2001} proved that the degree sequence of a certain class of preferential attachment models indeed has a power-law behaviour, many other properties of the model above and its variants have been investigated over the last two decades. These results include for example, vertex degrees, distance and local weak convergences; and we refer to the books \cite{vdh2017,vdh2024} for a comprehensive overview. In this paper, we study the number of vertices that can be reached from the lastly added vertex $n$ via a directed path in the preferential attachment graph. 
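To make the sequential construction concrete, the following minimal Python sketch, included purely as an illustration (the function names are ours and nothing below is used in the proofs), samples a graph according to \eqref{eq:pa1}--\eqref{eq:pa2} and counts the vertices reachable from the last vertex along directed edges.
\begin{verbatim}
import random

def pa_graph(n, m, rho):
    """Sample the out-edges of PA(n, m, rho) by the sequential rule."""
    assert m >= 2 and rho > -m
    deg = [0] * (n + 1)                 # deg[j] = current degree of vertex j
    out = {v: [] for v in range(1, n + 1)}
    for v in range(2, n + 1):
        for _ in range(m):
            if v == 2:
                j = 1                   # all m edges of vertex 2 go to vertex 1
            else:
                weights = [deg[j] + rho for j in range(1, v)]
                j = random.choices(range(1, v), weights=weights)[0]
            out[v].append(j)            # directed edge v -> j
            deg[v] += 1
            deg[j] += 1
    return out

def reachable(out, n):
    """Number of vertices reachable from vertex n (including n itself)."""
    seen, stack = {n}, [n]
    while stack:
        for w in out[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen)

out = pa_graph(n=10**4, m=2, rho=0.0)
print(reachable(out, 10**4))
\end{verbatim}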
We refer to these vertices (including vertex $n$) as the \emph{descendants} of $n$ and their count as $X^{(n)}$, even though all of them (apart from vertex $n$ itself) are added to $G_n$ before $n$. The problem was first considered in \cite[Exercise 7.2.2.3-371 and 372]{Knuth} for a uniform attachment graph, where each vertex has $m\ge 2$ outgoing edges and the endpoints of these edges are chosen uniformly among the existing vertices. (\cite{Knuth} uses drawing without replacement, thus avoiding multiple edges, but as shown in \cite{Janson2023}, this makes no difference asymptotically.) This uniform attachment version is studied in \cite{Janson2023}, where it is shown that as $n\to\infty$, if $\nu=(m-1)/m$, then $X^{(n)}/n^{\nu}$ converges in distribution, and the limiting distribution is given by a product of a constant factor and the $(1-\nu)$-th power of a $\GAMMA(m/(m-1),1)$ variable. The main result of the present paper is that for the preferential attachment graph defined above, $X^{(n)}$ behaves similarly, but with a different exponent $\nu$ which furthermore depends on both $m$ and $\rho$. As in previous works such as \cite{Berger2014, Mori2003, PPR2017}, the analysis in this work is hinged on a connection between P\'olya urns and the preferential attachment mechanism. We use, in particular, the P\'olya urn representation of \cite{Berger2014} that was originally devised to study the local weak limit of preferential attachment graphs. As we show later, this representation result enables us to adapt the framework of \cite{Janson2023} to study the problem in the preferential attachment setting. We state our main results in the next subsection. \subsection{Main results} The parameters $m\ge2$ and $\rho>-m$ are fixed throughout the paper. We define \begin{align}\label{de:nu} \nu := \frac{(m-1)(m+\rho)}{m(m+\rho+1)} \in(0,1) .\end{align} The proofs of the results below are developed in \refSs{se:pu}--\ref{Smom}, and as by-products of the proofs, we also prove some results on the structure of the subgraph of descendants of $n$. In \refS{Sloop} we show that the following results hold also for a preferential attachment model with possible self-loops. \begin{theorem}\label{Tmain} As \ntoo, \begin{align}\label{tmain} n^{-\nu} X \dto \frac{\G\bigpar{\frac{(m-1)(m+\rho)}{m(m+\rho+1)}} \G\bigpar{\frac{m+\rho}{m(m+\rho+1)}+1}} {\G\bigpar{\frac{m+\rho}{m+\rho+1}}} \bbclr{\frac{(m+\rho+1)(m-1)}{2m+\rho}\xi_1 }^{1-\nu}, \end{align} where $\xi_1\in\GAMMA(m/(m-1),1)$. \end{theorem} \begin{theorem}\label{Tmom} All moments converge in \eqref{tmain}. In other words, for any $p>0$, as \ntoo, \begin{align}\label{tmom} \E[X^p]/n^{p\nu} &\to \lrpar{\frac{\G\bigpar{\frac{(m-1)(m+\rho)}{m(m+\rho+1)}} \G\bigpar{\frac{m+\rho}{m(m+\rho+1)}+1}} {\G\bigpar{\frac{m+\rho}{m+\rho+1}}} \lrpar{\frac{(m+\rho+1)(m-1)}{2m+\rho}}^{1-\nu}}^p \notag\\& \hskip4em\cdot \frac{\gG(p(1-\nu)+\frac{m}{m-1})}{\gG(\frac{m}{m-1})} . \end{align} \end{theorem} \begin{remark} In the special case $\rho=0$, \eqref{de:nu} and \eqref{tmain} simplify to $\nu=(m-1)/(m+1)$ and \begin{align}\label{nov6} n^{-\nu} X \dto \frac{1}{m+1} \frac{\G\bigpar{\frac{m-1}{m+1}} \G\bigpar{\frac{1}{m+1}}} {\G\bigpar{\frac{m}{m+1}}} \bbclr{\frac{m^2-1}{2m}\xi_1 }^{2/(m+1)}. 
\end{align} If we specialize further to the case $m=2$ and $\rho=0$, we get $\nu=1/3$, and \eqref{tmain} simplifies further to \begin{align}\label{nov7} n^{-1/3} X \dto \frac{\G\bigpar{\frac13}^2} {2^{4/3}3^{1/3} \G\bigpar{\frac23}} \xi_1 ^{2/3} = \frac{3^{1/6}\G\bigpar{\frac13}^3} {2^{7/3}\pi} \xi_1 ^{2/3} ,\end{align} with $\xi_1\in\GAMMA(2,1)$ and \begin{align}\label{RX} \frac{\G\bigpar{\frac13}^2} {2^{4/3}3^{1/3} \G\bigpar{\frac23}} \doteq 1.45833. \end{align} In this case, \eqref{tmom} yields, for example, \begin{align}\label{tmom20} \E[X]/ n^{1/3} \to \frac{\G\bigpar{\frac13}^2} {2^{4/3}3^{1/3} \G\bigpar{\frac23}} \gG\bigpar{2+\tfrac23} = \frac{5\,\G\bigpar{\frac13}^2} {2^{1/3}3^{7/3}} \doteq 2.19416 .\end{align} \end{remark} \begin{remark}\label{Rm=1} Definition \ref{de:pa} is valid also for $m=1$, and then defines a random tree; such preferential attachment trees have been studied by many authors. In this case, $X\nn$ equals 1 + the depth of vertex $n$, and it is known that $X\nn$ grows like $\log n$, in contrast to the case $m\ge2$ studied in the present paper, where we show that $X\nn$ grows as a power of $n$. More precisely, as \ntoo, \begin{align}\label{m=1} X\nn/\log n \pto \frac{1+\rho}{2+\rho}, \end{align} and precise results are known on the exact distribution, Poisson approximation, and a central limit theorem, see \cite{Dobrow-Smythe1996}, \cite[Theorem 6]{PP2007}, and \cite[Theorem 3]{Kuba-Wagner2010}. (Papers on preferential attachment trees usually use a slightly different definition, where the attachment probabilities depend on the outdegree rather than the degree as in \eqref{de:pa}; apart from a shift in the parameter $\rho$, this makes a difference only at the root. This minor difference ought not to affect asymptotic result; for $X\nn$ this follows rigorously by the bijection in \cite{Kuba-Wagner2010} which yields both exact and asymptotic results, and in particular \eqref{m=1}, by straightforward calculations for both versions of the definition.) \end{remark} We mention also an open problem, which we have not studied, where the same methods might be useful. \begin{problem} Study the asymptotic behaviour of $\max\{X^{(n+1)},\dots,X^{(n+i)}\}$ for a fixed $i\ge2$, in both uniform and preferential attachment graphs. Perhaps also do the same for $i=i(n)$ growing with $n$ at some rate. \end{problem} \subsection{Notation} \label{Snot} As above, $k\overset{\ell}{\to}i$ (where $1\le i <k\le n$ and $\ell\in[m]$) denotes that in $G_n$ the $\ell$-th outgoing edge of vertex $k$ is attached to vertex $i$. We say that vertex $i$ is a \emph{child} of vertex $k$ if there is such an edge. As usual, empty sums are 0, and empty products are 1. Convergence in distribution, in probability, and a.s.\ (almost surely) are denoted by $\dto$, $\pto$, and $\asto$, respectively. Equality in distribution is denoted by $\eqd$, and w.h.p. (with high probability) is short for ``with probability tending one as $n\toinf$''. We frequently use two standard probability distributions. The $\GAMMA(a,b)$ distribution, with $a,b>0$, has density $\G(a)^{-1}b^{-a} x^{a-1} e^{-x/b}$ on $(0,\infty)$. The Beta$(a,b)$ distribution, with $a,b>0$, has density $\frac{\gG(a+b)}{\gG(a)\gG(b)}x^{a-1}(1-x)^{b-1}$ on $(0,1)$. Most quantities defined below depend on $n$. We sometimes indicate this by a superscript ${}\nn$, but usually we omit this to simplify the notation. We may in proofs sometimes tacitly assume that $n$ is large enough. 
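The numerical constants appearing in \eqref{RX} and \eqref{tmom20} are easy to reproduce; the following few lines of Python, included only for the reader's convenience and relying on nothing beyond the displayed formulas, recover the approximations given there.
\begin{verbatim}
from math import gamma

# Constant in (RX): Gamma(1/3)^2 / (2^(4/3) * 3^(1/3) * Gamma(2/3)).
c = gamma(1/3)**2 / (2**(4/3) * 3**(1/3) * gamma(2/3))
print(round(c, 5))                    # 1.45833

# Constant in (tmom20): multiply by Gamma(2 + 2/3).
print(round(c * gamma(2 + 2/3), 5))   # 2.19416
\end{verbatim}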
$C[a,b]$, $C[0,\infty)$ and $C(0,\infty)$ denote the spaces of continuous functions on the indicated intervals, equipped with the topology of uniform convergence on compact subsets. These spaces are complete separable metric spaces. Note that a sequence of random functions in $C[0,\infty)$ or $C(0,\infty)$ converges (a.s., in probability, or in distribution) if and only if it converges in the same sense in $C[a,b]$ for each compact interval $[a,b]$ in $[0,\infty)$ or $(0,\infty)$, respectively. (For $C[0,\infty)$ it is obviously equivalent to consider intervals $[0,b]$ only.) The case $C[0,\infty)$ is treated in detail in \cite{Whitt1970}; the case $C(0,\infty)$ is similar. $C$ denotes positive constants (not depending on $n$) that may vary from one occasion to another. The constants may depend on the parameters $m$ and $\rho$; we indicate dependence on other parameters (if any) by writing e.g.\ $C_a$. \section{P\'olya urn representation}\label{se:pu} We shall use a celebrated result of \cite{Berger2014}, which states that the dynamics of the preferential attachment graph can be encoded in a collection of classical P\'olya urns; see also \cite[Chapter 5]{vdh2024} for more details. In a classical P\'olya urn with initially $a$ red balls and $b$ black balls, a ball is randomly sampled from the urn at each {step}, and is then returned to the urn with another ball of the same colour. (The ``numbers'' of balls are not necessarily integers; any positive real numbers are allowed.) In the preferential attachment graph, for each $i\ge2$, the weight of vertex $i$, defined as the degree + $\rho$, and the total weight of the first $i-1$ vertices evolve like the numbers of red and black balls in a classical P\'olya urn. The initial numbers of red and black balls are {$a=m+\rho$ and $b=(2i-3)m+(i-1)\rho$}, which are the weights of vertex $i$ and the first $i-1$ vertices before the edges of vertex $i+1$ are added to the graph. When one of the first $i$ vertices is chosen as a recipient of a newly added edge, the number of red balls in the urn increases by one if vertex $i$ is the recipient; otherwise we add a new black ball to the urn. It is well-known, for example as a consequence of exchangeability and de Finetti's theorem, that the proportion of red balls a.s.\ converges to a random number $\beta\in \mathrm{Beta}(a,b)$, and that conditioned on $\beta$, the indicators that a red ball is chosen at each step are distributed as conditionally independent Bernoulli variables with parameter $\beta$. Consequently, by conditioning on suitable beta variables, the preferential attachment graph can instead be generated using independent steps. The model and the theorem below are easy variations of their counterparts in \cite[Section 2.2]{Berger2014}. The only difference is that $\rho$ is allowed to be negative here. \begin{definition}[P\'olya urn representation, \cite{Berger2014}]\label{de:PUR} Given the integer $m\geq 2$ {and the real number $\rho>-m$}, let $(B_j)^{\infty}_{j =1}$ be independent random variables such that $B_1=1$ and \begin{align}\label{de:betas} B_j\in\mathrm{Beta}(m+\rho, (2j-3)m+(j-1)\rho), \qquad j\ge2. 
\end{align} Given $(B_j)^{\infty}_{j =1}$, construct for each $n\ge1$ a (directed) graph $G_n$ on $n$ vertices (labelled by $[n]$) such that each vertex $2\leq k\leq n$ has $m$ outgoing edges, and the recipient of each outgoing edge of $k$ is $i\in[k-1]$ with probability \begin{align}\label{pb} B_i\prod_{j=i+1}^{k-1}(1-B_j), \end{align} with the endpoints of all edges in $G_n$ chosen (conditionally) independently. The law of $G_n$ is denoted by $\mathrm{PU}(n,m,{\rho}),$ where PU is short for P\'olya Urn. \end{definition} \begin{remark}\label{Rstop} The probabilities \eqref{pb} can be interpreted as follows, which will be useful below: Given $(B_j)_{j=1}^\infty$, each edge from vertex $k$ tries to land at $k-1$, $k-2$, \dots{} successively; at each vertex $j$ it stops with probability $B_j$, and otherwise it continues to the next vertex. (All random choices are independent, given $(B_j)_{j=1}^\infty$.) \end{remark} \begin{remark}\label{RBerger} The construction in \cite{Berger2014} is actually formulated in the following somewhat different way, which obviously is equivalent; we will use this version too below. Define \begin{align}\label{de:S} S_{n,j}= \prod^{n-1}_{i=j+1} (1-B_i)\qquad\text{for $0\leq j\leq n-1$}. \end{align} (In particular, $S_{n,0}=0$ and $S_{n,n-1}=1$.) Conditioned on $(B_j)^{n-1}_{j=2}$, let $(U_{k,\ell})^{n,m}_{k=2,\ell=1}$ be independent random variables with \begin{equation}\label{de:Uij} U_{k,\ell}\in \sU[0,S_{n,k-1}). \end{equation} For each vertex $2\le k\le n$, add the $m$ outgoing edges such that \begin{align}\label{kil} k \overset{\ell}\to i \iff U_{k,\ell} \in [S_{n,i-1},S_{n,i}), \qquad \ell\in[m],\; i\in[k-1]. \end{align} Note also that a natural way to achieve \eqref{de:Uij} is to let $(\widetilde U_{k,\ell})^{n,m}_{k=2,\ell=1}$ be independent $\sU[0,1]$ variables, independent of $(B_i)^{n-1}_{i=2}$, and set \begin{align}\label{eq:sU} U_{k,\ell} := S_{n,k-1} \widetilde U_{k,\ell}. \end{align} \end{remark} \begin{theorem}[\cite{Berger2014}, Theorem 2.1] For all integers $n\geq2$, $m\geq 2$ and real $\rho>-m$, $\mathrm{PA}(n,m, {\rho})=\mathrm{PU}(n,m,{\rho})$. \end{theorem} In view of this theorem, it is enough to consider the P\'olya urn representation instead of the preferential attachment graph. We shall do so in the subsequent analysis and always have $G_n\in \mathrm{PU}(n,m, {\rho})$. \begin{remark}\label{Runiform} The uniform directed acyclic graph studied in \cite{Janson2023}, where each new edge from $k$ is attached uniformly to a vertex in $[k-1]$, can be seen as the limit as $\rho\to\infty$ of the construction above; it can be constructed by the same procedure, except that we let $B_j:=1/j$ (deterministically). This may help in seeing the similarities and differences in the arguments below and in \cite{Janson2023}. Not surprisingly, formally taking the limit $\rho\to\infty$ in \eqref{tmain} yields the main result of \cite{Janson2023}. \end{remark} \begin{remark} Unless we say otherwise, we use the same sequence $(B_i)_{i=1}^\infty$ for every $n$. (But see \refS{Sconv} for an exception.) \end{remark} \section{Preliminaries}\label{Sprel} For convenience, we define the positive constants \begin{align} \theta&:=2m+\rho,\label{de:theta} \\\label{de:chi} \chi &:= \frac{m+\rho}{2m+\rho} = \frac{m+\rho}{\theta}, \end{align} noting that if $\rho=0$, then $\chi=1/2$ for any $m$.
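For the reader's convenience we also note that the P\'olya urn representation is straightforward to simulate. The following minimal Python sketch (purely illustrative and not used in any proof; the function name and interface are ours) generates $G_n\in\mathrm{PU}(n,m,\rho)$ by sampling the beta variables \eqref{de:betas} and then using the variables $S_{n,j}$ and the uniforms of Remark~\ref{RBerger}.
\begin{verbatim}
import numpy as np

def sample_pu(n, m=2, rho=0.0, rng=None):
    # Illustrative sketch of the Polya urn construction above (n >= 2, rho > -m).
    # Returns the directed edges (k, i) with i < k; vertices are labelled 1, ..., n.
    rng = np.random.default_rng() if rng is None else rng
    B = np.ones(n)                          # B[j-1] stores B_j; B_1 = 1
    for j in range(2, n + 1):
        B[j - 1] = rng.beta(m + rho, (2 * j - 3) * m + (j - 1) * rho)
    S = np.ones(n)                          # S[j] stores S_{n,j} = prod_{i=j+1}^{n-1} (1 - B_i)
    for j in range(n - 2, -1, -1):
        S[j] = S[j + 1] * (1.0 - B[j])      # B[j] is B_{j+1}
    edges = []
    for k in range(2, n + 1):               # each vertex k >= 2 sends out m edges
        for _ in range(m):
            U = rng.uniform(0.0, S[k - 1])  # U_{k,l} uniform on [0, S_{n,k-1})
            i = int(np.searchsorted(S[:k], U, side='right'))  # S_{n,i-1} <= U < S_{n,i}
            edges.append((k, i))
    return edges
\end{verbatim}
For instance, \texttt{sample\_pu(1000)} returns the $1998$ edges of one realization with $m=2$ and $\rho=0$.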
Recall that if $B\in\Beta(a,b)$, it follows by evaluating a beta integral that the moments are given by \begin{align}\label{betamom} \E B^s = \frac{\gG(a+b)\gG(a+s)}{\gG(a)\gG(a+b+s)} =\frac{\gG(a+s)/\gG(a)}{\gG(a+b+s)/\gG(a+b)}, \qquad s>0. \end{align} Recall also that for any fixed real (or complex) $a$ and $b$, and $x>0$ (with $x+a\notin\set{0,-1,-2,\dots}$), \begin{align}\label{gg} \frac{ \gG(x+a)}{\gG(x+b)} = x^{a-b}\bigpar{1+O\bigpar{x\qw}}, \end{align} which follows readily from Stirling's formula; see also \cite[5.11.13]{NIST}. Similarly to the definition of $S_{n,j}$ in \eqref{de:S}, we also define \begin{align}\label{de:phi} \Phi_k =\prod^k_{j=1} (1+(m-1)B_j), \quad \text{for $k\geq 0$} .\end{align} We collect here some simple results for these variables that will be used later. \begin{lemma}\label{LB1} For $2\leq i<\infty$, let $B_i$ be as in \eqref{de:betas}; and for $1\le i<\infty$, let $\Phi_i$ be as in \eqref{de:phi}. We then have for $2\le i<\infty$, \begin{gather*} \E( B_i) = \frac{m+\rho}{\theta i-2m} =\frac{\chi}{i-2m/\gth} =\frac{\chi}{i}+O\Bigpar{\frac{1}{i^2}}, \numberthis\label{eq:Bmean}\\ \E(B_i^2) = \frac{(m+\rho+1)(m+\rho)}{(\theta i-2m+1)(\theta i-2m)} =O\Bigpar{\frac{1}{i^2}}. \numberthis \label{eq:Ebetasq} \end{gather*} Furthermore, for $2\leq j\leq k<\infty$, \begin{align*} \prod^k_{i=j}\E(1+(m-1)B_i) &= \frac{\G\bclr{k+1+[(m-1)(m+\rho)-2m]/\theta}\G(j-2m/\theta)}{\G(k+1-2m/\theta)\G\bclr{j+[(m-1)(m+\rho)-2m]/\theta}}\notag\\ &= \bbclr{\frac{k}{j}}^{(m-1)\chi} \bclr{1+O\bigpar{j^{-1}}}\numberthis \label{eq:mprodbeta} \end{align*} and \begin{align*} \E \Phi_k = \frac{m\cdot \G\bclr{2-2m/\theta}}{\G\big(2+[(m-1)(m+\rho)-2m]/\theta\big)} k^{(m-1)\chi} \big(1+O\bigpar{k^{-1}}\big). \numberthis\label{eq:meanphik} \end{align*} Finally, there is a positive constant $C$ such that, for $2\leq j\leq k<\infty$, \begin{align}\label{eq:trunphi2bd} \prod^k_{i=j}\E(1+(m-1)B_i)^2 \leq C\bbclr{\frac{k}{j}}^{2(m-1)\chi} \end{align} and, for $2\le k<\infty$, \begin{align}\label{eq:phibd} \E(\Phi_k^2)\leq Ck^{2(m-1)\chi}, \quad \E(\Phi^{-1}_k)\leq Ck^{-(m-1)\chi}, \quad \E(\Phi^{-2}_k)\leq Ck^{-2(m-1)\chi}. \end{align} \end{lemma} \begin{remark} If $m=2$ and $\rho=0$, then $\theta=4$ and so in \eqref{eq:meanphik}, \begin{align} \frac{m\cdot \G\bclr{2-2m/\theta}}{\G\big(2+[(m-1)(m+\rho)-2m]/\theta\big)} = \frac{2}{\G(3/2)} = \frac{4}{\sqrt{\pi}}. \end{align} \end{remark} \begin{proof} The equalities in \eqref{eq:Bmean} and \eqref{eq:Ebetasq} follow from \eqref{betamom}, recalling \eqref{de:theta}--\eqref{de:chi}. For \eqref{eq:mprodbeta}, we use \eqref{eq:Bmean} {to obtain} \begin{align}\label{eq:Eprodbeta} \prod^k_{i=j}\E(1+(m-1)B_i) &= \prod^k_{i=j}\frac{i+[(m-1)(m+\rho)-2m]/\theta}{i-2m/\theta}\notag\\ &= \frac{\G\bclr{k+1+[(m-1)(m+\rho)-2m]/\theta}\G(j-2m/\theta)}{\G(k+1-2m/\theta)\G\bclr{j+[(m-1)(m+\rho)-2m]/\theta}}, \end{align} and so \eqref{eq:mprodbeta} follows from \eqref{gg} and \eqref{de:chi}. The formula \eqref{eq:meanphik} follows similarly by taking $j=2$ in \eqref{eq:Eprodbeta} and using $B_1=1$. To prove \eqref{eq:trunphi2bd}, we write for $i\geq 2$, using \eqref{eq:Bmean}--\eqref{eq:Ebetasq}, \begin{align*} \E(1+(m-1)B_i)^2 &= 1 + 2(m-1)\E B_i + (m-1)^2\E B_i^2\\ &=1+2(m-1)\frac{\chi}{i}+O\bigpar{i^{-2}}\\ &=: 1 + y_i.
\end{align*} Taking logarithms, \begin{align*} \log\biggpar{\prod^{{k}}_{i={j}} (1+y_i)} &= \sum^{k}_{i={j}} \log(1+y_i) \leq \sum^{k}_{i=j} {y_i} = 2(m-1)\chi\sum^{k}_{i=j} \big(i^{-1}+O(i^{-2})\big)\\ &= 2(m-1)\chi\log\bbclr{\frac{k}{j}} + O(j^{-1}).\numberthis\label{eq:Ebetasq2} \end{align*} This implies the inequality in \eqref{eq:trunphi2bd}. The bound on $\E(\Phi^2_k)$ in \eqref{eq:phibd} follows from the definition in \eqref{de:phi} and applying \eqref{eq:trunphi2bd} with $j=2$. The upper bound on $\E(\Phi_k^{-2})$ in \eqref{eq:phibd} can be proved similarly, where we can use $(1+x)^{-2}\leq 1-2x+3x^2$ for $x\geq 0$, and thus \begin{align}\label{dx0} \E\bigsqpar{(1+(m-1)B_i)^{-2}} & \leq 1-2(m-1)\E B_i+3(m-1)^2\E B_i^2 \notag\\& = 1- 2(m-1)\chi i^{-1} + O(i^{-2}), \end{align} together with $\log(1-x)\leq -x$. Finally, by the Cauchy--Schwarz inequality and the just proven $\E(\Phi_k^{-2})\leq Ck^{-2(m-1)\chi}$, \begin{align} \E(\Phi_k^{-1})\leq \sqrt{\E(\Phi_k^{-2})}\leq Ck^{-(m-1)\chi}, \end{align} which completes the proof of all three inequalities claimed in \eqref{eq:phibd}. \end{proof} \subsection{An infinite product} \begin{lemma} \label{LB2} The infinite product \begin{equation}\label{lb2a} \beta:=\prod^\infty_{k=1}\frac{1+(m-1)B_k}{\E(1+(m-1)B_k)} =\lim_\ktoo\frac{\Phi_k}{\E\Phi_k} \end{equation} exists a.s.\ and in $L^p$ for every $p<\infty$. Furthermore, $\E\beta=1$ and $\beta>0$ a.s. We have also, as $k\to\infty$, \begin{align}\label{lb2b} k^{-(m-1)\chi}\Phi_k\asto \tgb:= \frac{m\cdot \G\bclr{2-2m/\theta}}{\G\big(2+[(m-1)(m+\rho)-2m]/\theta\big)}\gb. \end{align} \end{lemma} \begin{proof} Define for $k\ge1$ \begin{align}\label{eq:betamg} \tM_k := \frac{\Phi_k}{\E (\Phi_k)} = \prod^k_{i=1} \frac{1+(m-1)B_i}{\E(1+(m-1)B_i)}. \end{align} This is a product of independent random variables with mean 1, and thus a martingale. For every fixed integer $r>1$, we have by the binomial theorem, $|B_k|\le1$, and \eqref{eq:Bmean}--\eqref{eq:Ebetasq}, \begin{align}\label{freja} \E(1+(m-1)B_k)^r &= \sum_{j=0}^r\binom{r}j{(m-1)^j}\E B_k^j = 1 + r(m-1) \E B_k + O(\E B_k^2) \notag\\ &= 1 + r(m-1)\E B_k + O(k^{-2})\notag\\ &= (1 + (m-1)\E B_k)^r + O(k^{-2}). \end{align} Hence, for every $k\ge1$, \begin{align} \E \tM_k^r = \prod_{i=1}^k\frac{\E(1+(m-1)B_i)^r}{(\E(1+(m-1)B_i))^r} = \prod_{i=1}^k\bigpar{1+O(i^{-2})} \le C_r \end{align} and thus the martingale $\tM_k$ is $L^r$-bounded; consequently it converges in $L^r$, and thus in $L^p$ for all real $0<p\le r$. Since $r$ is arbitrary, this holds for all $p>0$. In particular, $\tM_k\to\gb$ in $L^1$, which shows that $\E\gb=\lim_\ktoo\E\tM_k=1$. The event $\set{\gb=0}$ is independent of any finite number of $B_1,B_2,\dots$, and is thus a tail event. The Kolmogorov zero-one law, see e.g.\ \cite[Theorem 1.5.1]{Gut}, thus shows that $\P(\gb=0)=0$ or 1, but $\P(\gb=0)=1$ is impossible since $\E\gb=1$. Hence, $\gb>0$ a.s. Finally, \eqref{lb2b} follows by \eqref{lb2a} and \eqref{eq:meanphik}. \end{proof} \subsection{Estimates for $S_{n,k}$} Below, let $\psi_n$ be a positive function such that $\psi_n\le n-1$ and $\psi_n\to\infty$ as $n\to\infty$ (we later choose $\psi_n=n/\log n$). The next lemma shows that w.h.p., for all $k\ge \psi_n$, the random variables $S_{n,k}$ are close enough to the constants $(k/n)^\chi$. \begin{lemma}\label{le:Sest} Let $S_{n,k}$ be as in \eqref{de:S} and $\psi_n$ be as above. Define $\delta_n = \psi_n^{-\eps}$ for some $\eps\in(0,1/2)$. 
Then, there is a positive constant $C$ such that \begin{align} \IP\bigg[\max_{\ceil{\psi_n}\leq k<n}\bigg|S_{n,k}-\bbclr{\frac{k}{n}}^\chi\bigg|\geq 2\delta_n\bigg]\leq {C}{\psi_n}^{2\eps-1}. \end{align} \end{lemma} The proof of Lemma \ref{le:Sest} is based on a standard martingale argument that is similar to \cite{lo2024} (see also \cite{Berger2014}), but we present it here for completeness. To prepare for the main proof, we start by estimating $\E(S_{n,k})$. \begin{lemma}\label{le:smom} Let $S_{n,k}$ be as in \eqref{de:S}. For every $1\leq k\leq n-1$, we have \begin{align}\label{eq:meanS} \E(S_{n,k}) = \frac{\G\bclr{n-(3m+\rho)/\theta}}{\G\bclr{n-2m/\theta}}\frac{\G\bclr{k+1-2m/\theta}}{\G\bclr{ k+1-(3m+\rho)/\theta}}. \end{align} \end{lemma} \begin{proof} Recalling that $(B_j)^{n-1}_{j=2}$ are independent, we obtain from \eqref{eq:Bmean} \begin{align*} \E S_{n,k} &=\prod^{n-1}_{j=k+1} \E(1-B_j) = \prod^{n-1}_{j=k+1} \frac{\theta j -3m-\rho}{\theta j -2m} = \prod^{n-1}_{j=k+1} \frac{j-(3m+\rho)/\theta}{j-2m/\theta} \\&= \frac{\G\bclr{n-(3m+\rho)/\theta}}{\G\bclr{n-2m/\theta}}\frac{\G\bclr{k+1-2m/\theta}}{\G\bclr{ k+1-(3m+\rho)/\theta}},\numberthis \end{align*} as claimed in the lemma. \end{proof} \begin{lemma}\label{le:smean} Let $S_{n,k}$ be as in \eqref{de:S}. Then, there is a positive constant $C$ such that, for $1\leq k \leq n-1$, \begin{align}\label{eq:smean} \bigg|\E(S_{n,k})-\bbclr{\frac{k}{n}}^{\chi}\bigg|\leq {\frac{C}{n^\chi k^{1-\chi}}.} \end{align} \end{lemma} \begin{proof} By \eqref{eq:meanS} and \eqref{gg}, we have, recalling \eqref{de:chi}, \begin{align} \E(S_{n,k})= n^{-\chi}\bigpar{1+O(n\qw)}k^{\chi}\bigpar{1+O(k\qw)} =\bbclr{\frac{k}{n}}^{\chi}\bigpar{1+O(k\qw)} , \end{align} which yields \eqref{eq:smean}. \end{proof} \begin{proof}[Proof of Lemma \ref{le:Sest}] By \refL{le:smean}, for $n$ large enough and any $k\in[\psi_n, n)$, \begin{align} \biggabs{\E(S_{n,k})-\bbclr{\frac{k}{n}}^\chi} \le \frac{C}{{n^\chi k^{1-\chi}}} \le \frac{C}{k} \le \frac{C}{\psi_n}<\gd_n. \end{align} Hence, \begin{align*} \IP\bigg[\max_{\ceil{\psi_n}\leq k{<} n}\bigg|S_{n,k}-\bbclr{\frac{k}{n}}^\chi\bigg|\geq 2\delta_n\bigg] &\leq \IP\bigg[\max_{\ceil{\psi_n}\leq k{<} n}\big|S_{n,k}-\E(S_{n,k})\big| \geq \delta_n\bigg] \\ &\leq \IP\bigg[\max_{\ceil{\psi_n}\leq k{<} n}\big|S_{n,k}/\E(S_{n,k})-1\big| \geq \delta_n\bigg], \numberthis\label{hw1} \end{align*} noting that $\E S_{n,k}\leq 1$ for $k\geq 1$. To bound the right-hand side of \eqref{hw1}, we first observe that for $k\geq {0}$, \begin{align} \MM_k := \prod^{{n-1}}_{{j=n-k}} \frac{1-B_{j}}{\E(1-B_{j})} = \frac{S_{{n,n-1-k}}}{\E S_{n,n-1-k} } \end{align} is a martingale with respect to {the} $\sigma$-algebras generated by {$(B_j)^{n-1}_{j=n-k}$}, with $\E \MM_k=1$. Now, by Doob's inequality for the submartingale $(\MM_k-1)^2$, see e.g.\ \cite[Theorem 10.9.1]{Gut}, \begin{align*}\label{eq:mdoob} \IP\bigg[\max_{\ceil{\psi_n}\leq k<n}\big|S_{n,k}/\E(S_{n,k})-1\big| \geq \delta_n\bigg] &= {\IP\bigg[\max_{0\leq k\leq n-1-\ceil{\psi_n}}|\MM_k-1| \geq \delta_n\bigg]} \\ &\leq \delta_n^{-2}\var\big(\MM_{n-1-\ceil{\psi_n}}\big).
\numberthis \end{align*} Using $\E \MM_{n-1-\ceil{\psi_n}}=1$ and the independence of the beta variables, we have \begin{align} \var\big(\MM_{n-1-\ceil{\psi_n}}\big) &= \E\big(\MM_{n-1-\ceil{\psi_n}}^2\big) -1 = \prod^{n-1}_{k = \ceil{\psi_n}+1}\frac{\E[(1-B_k)^2]}{(\E[1-B_k])^2}-1, \end{align} and by \eqref{de:betas}, \eqref{betamom}, and simplifying, we get \begin{align*} \var\big(\MM_{n-1-\ceil{\psi_n}}\big) &= \prod^{n-1}_{k=\ceil{\psi_n}+1 } {\bbclr{\frac{\theta k-3m-\rho+1}{\theta k-3m-\rho}\cdot \frac{\theta k-2m}{\theta k-2m+1}} }-1\\ &\leq \prod^{n-1}_{k={\ceil{\psi_n}+1} }(1+Ck^{-2}) -1 \leq \frac{C}{\psi_n}. \numberthis\label{eq:mvar} \end{align*} Applying \eqref{eq:mvar} {and $\delta_n=\psi_n^{-\eps}$} to \eqref{hw1} and \eqref{eq:mdoob} yields \begin{align*} \IP\bigg[\max_{\ceil{\psi_n}\leq k<n}\bigg|S_{n,k}-\bbclr{\frac{k}{n}}^\chi\bigg|\geq 2\delta_n\bigg] \leq \gd_n^{-2}\frac{C}{\psi_n} = C \psi_n^{2\eps-1}, \end{align*} hence proving the lemma. \end{proof} \subsection{Asymptotics of two sums} Fix a sequence $\gl_n\to\infty$ (we will later choose $\gl_n=n^\nu$) and define, for $y\ge0$ and $0\le k\le \ell<\infty$, \begin{align}\label{lh1} H^y_{k,\ell}&:=\sum_{i=k+1}^\ell\bigsqpar{(1-B_i)^{\gln y}-1+\gln yB_i}, \qquad I^y_{k,\ell} := \sum_{i=k+1}^\ell \bigsqpar{1-(1-B_i)^{\gln y}} \end{align} and, for $y\ge0$ and $0\le s\le t<\infty$, \begin{align}\label{lh2} \hH^y_{s,t}:=\gln\qw H^y_{\floor{s\gln},\floor{t\gln}},\qquad \wh I^y_{s,t}:=\gln\qw I^y_{\floor{s\gln},\floor{t\gln}}. \end{align} \begin{lemma}\label{LH} Let\/ $0<s\le t<\infty$. Then, for every $y\ge0$, as \ntoo, \begin{align}\label{lh3} \hH^y_{s,t}&\pto \int_s^t \Bigpar{\frac{(\theta u)^{m+\rho}} {(\theta u+y)^{m+\rho}}-1+\frac{\chi y}{u}}\dd u = \int_s^t \Bigpar{\frac{1}{(1+y/(\theta u))^{m+\rho}}-1+\frac{\chi y}{u}}\dd u \end{align} and \begin{align}\label{lh34} \wh I ^y_{s,t}\pto \int_s^t \Bigpar{1-\frac{(\theta u)^{m+\rho}}{(\theta u+y)^{m+\rho}}}\dd u = \int_s^t \Bigpar{1-\frac{1}{(1+y/(\theta u))^{m+\rho}}}\dd u. \end{align} \end{lemma} \begin{proof} Denote the summand in \eqref{lh1} by $\gD H_i$. Then $-1\le \gD H_i\le \gln y B_i$, and thus, using \eqref{eq:Ebetasq}, for $\floor{s\gln}<i\le\floor{\gln t}$, \begin{align}\label{lh4} \Var (\gD H_i) \le \E (\gD H_i)^2 \le C + C\gln^2\E B_i^2 \le C_s. \end{align} The summands $\gD H_i$ are independent, and thus \eqref{lh1}--\eqref{lh2} and \eqref{lh4} yield \begin{align}\label{lh5} \Var(\hH^y_{s,t}) =\gln^{-2}\sum_{i=\floor{s\gln}+1}^{\floor{t\gln}}\Var\bigpar{\gD H_i} \le {C_{s,t}} \gln^{-1} =o(1). \end{align} Hence, it suffices to show that the expectation $\E \hH^y_{s,t}$ converges to the limit in \eqref{lh3}. We have, applying \eqref{betamom} to {$1-B_i\in\Beta(\theta i-3m-\rho,m+\rho)$} and using \eqref{gg}, uniformly for $s\gln < i \le t\gln$, \begin{align}\label{lh6} \E(1-B_i)^{\gln y}& = \frac{\gG(\theta i-2m)\gG(\theta i-3m-\rho+\gln y)}{\gG(\theta i-3m-\rho)\gG(\theta i-2m+\gln y)} = \bbclr{\frac{\theta i}{\theta i +\gln y}}^{m+\rho} +o(1)\notag\\ &= \bbclr{\frac{\theta i/\gln}{\theta i/\gln + y}}^{m+\rho} +o(1). \end{align} Hence, using also \eqref{eq:Bmean}, if $i=u\gln$ with $u\in(s,t]$, \begin{align}\label{lh7} \E[\gD H_i] &= \E(1-B_i)^{\gln y}-1+\gln y \E B_i \notag\\ &=\bbclr{\frac{\theta i/\gln}{\theta i/\gln + y}}^{m+\rho} -1 + \frac{\chi}{i}\gln y +o(1). \end{align} It follows that $\E\hH^y_{s,t}$ is $o(1)$ plus a Riemann sum of the integral in \eqref{lh3}. 
The proof of \eqref{lh34} is similar, where we now replace $\Delta H_i$ with \begin{align} \Delta I_i = 1- (1-B_i)^{\lambda_n y} \end{align} and, for $s\lambda_n <i\leq t\lambda_n$, use the estimates $0\le\gD I_i\le1$ and thus, using \eqref{lh6}, \begin{align} & \var( \Delta I_i) \le \E (\Delta I_i)^2 \le1 ,\\ & \E[\gD I_i] = 1-\E(1-B_i)^{\gln y} = 1- \bbclr{\frac{\theta i/\gln}{\theta i/\gln + y}}^{m+\rho} + o(1) \end{align} to proceed. \end{proof} \section{Basic analysis}\label{se:basicanalysis} In this and subsequent sections, we follow the framework (and hence the notation) in \cite{Janson2023}. To concentrate on the important aspects of the proof, we assume that $m=2$ and $\rho=0$; note that then $\chi=\frac12$ and $\gth=2m=4$. The minor modifications for the general case are discussed in \refS{Sgen}. \subsection{The stochastic recursion.}\label{sse:sr} Let $D_n$ be the subgraph in $G_n$, consisting of vertex~$n$, all vertices that can be reached from $n$ via a directed path, and all the edges between them. We think of the vertices and edges in $D_n$ as coloured red. We use the following stochastic recursion to construct $D_n$. It is similar to the recursion used in \cite{Janson2023}, with differences that stem from the differences between the models. \begin{enumerate} \item Sample the beta variables $(B_j)^{n-1}_{j=2}$ defined in \eqref{de:betas}. \item Declare vertex $n$ to be red and all others black. Initiate the recursion by setting $k:=n$. \item If vertex $k$ is red, choose the recipients of the two outgoing edges from vertex $k$ according to the construction given in Definition \ref{de:PUR}. After sampling the recipients, declare them red.\\ If vertex $k$ is black, delete $k$ and do nothing else. \item If $k=2$ then stop; otherwise let $k:=k-1$ and repeat from {(3)}. \end{enumerate} For integers $0\leq k\leq n-1$, let $Y_k$ be the number of edges in $D_n$ that start from $\clc{k+1,\dots,n}$ and end in $\clc{1,\dots,k}$. Define $Z_k$ as the number of these edges that end in~$k$. Note that we have the boundary conditions $Y_{n-1}=2$ and $Y_0=0$, as well as $Z_1=Y_1$ and $Z_0=0$. For $1\leq k\leq n-1$, denote the indicator that at least one of the edges counted by $Y_k$ ends at $k$ by \begin{align} J_k=\tone[Z_k\geq 1], \end{align} which is the same as the indicator that $k$ is red. Thus summing $J_k$ over $k\in[n-1]$ gives the number $X\nn$ of red vertices. For $2\leq k\leq n-1$, the number of edges that start at $k$ is $2J_k$, and we thus have \begin{align}\label{eq:stocrecur} Y_{k-1} = Y_k - Z_k + 2\cdot J_k = Y_k -Z_k + 2\cdot \tone[Z_k\geq 1]. \end{align} As in \cite{Janson2023}, we use a modified version of the procedure above, where we use the construction in \refR{Rstop}. In (3) above, we thus do not choose the recipients of the outgoing edges; we just note that they have endpoints in $[k-1]$. We then toss, at each subsequent vertex, a coin for each edge with an unassigned endpoint to decide whether it ends there or not. This yields the following equivalent version of the construction. \begin{enumerate} \item Sample the beta variables $(B_j)^{n-1}_{j=2}$ defined in \eqref{de:betas}. \item Declare vertex $n$ to be red and all others black. Initiate the recursion by setting $k:=n$. \item If vertex $k$ is red, add two outgoing edges from vertex $k$, with as yet undetermined endpoints in $[k-1]$; mark these edges \emph{incomplete}. \item Let $k:=k-1$. \item For each incomplete edge, toss a coin with heads probability $B_k$, independently given $B_k$.
If the outcome is heads, the edge ends at $k$ and is marked complete; furthermore, vertex $k$ is coloured red. Otherwise do nothing (so the edge is still incomplete). \item If $k=1$ then stop; otherwise repeat from (3). \end{enumerate} Let $\cF_k$ be the $\sigma$-field generated by all beta variables $(B_j)^{n-1}_{j=2}$ and the coin tosses at vertices $n-1,\dots,k+1$. Then $\cF_1,\dots,\cF_{n-1}$ forms a decreasing sequence of $\sigma$-fields, and $Y_{n-1},\dots, Y_k$ are measurable with respect to $\cF_k$. Moreover, conditioned on $\cF_k$, we have \begin{align}\label{eq:distZk} Z_k\mid \cF_k\in \mathrm{Bin}(Y_k, B_k)\quad{\text{for $1\leq k\leq n-1$.} } \end{align} Thus, in view of the recursion \eqref{eq:stocrecur}, we have for $2\leq k\leq n-1$, \begin{align} \E(Y_{k-1}\mid \cF_k)&= Y_k - \E(Z_k\mid \cF_k) + 2\cdot\IP(Z_k\geq 1\mid \cF_k) \notag\\ &=Y_k -B_k Y_k +2\bclr{1-(1-B_k)^{Y_k}}.\label{eq:exact} \end{align} By Markov's inequality, we also have {for $2\leq k\leq n-1$,} \begin{align*} \E[Y_{k-1}\mid\cF_k]&\leq Y_k - \E[Z_k\mid \cF_k] + 2\cdot\E[Z_k\mid \cF_k] = Y_k+\E[Z_k\mid \cF_k]\\ &{=} (1+B_k)Y_k. \numberthis\label{eq:markov} \end{align*} Define, recalling \eqref{de:phi}, \begin{align}\label{de:Wk} W_k = \Phi_k Y_k\quad \text{{for $0\leq k\leq n-1$,}} \end{align} noting that $W_0=2Y_0=0$. Using \eqref{eq:markov} and \eqref{de:phi}, we find {for $2\leq k\leq n-1$,} \begin{align}\label{eq:Wk} \E(W_{k-1}\mid \cF_k) &= \Phi_{k-1}\E(Y_{k-1}\mid \cF_k) \leq \Phi_{k-1} (1+B_k) Y_k =\Phi_k Y_k = W_k; \end{align} and so $W_0,\dots,W_{n-1}$ is a reverse supermartingale. The initial value is \begin{align} W_{n-1} = \Phi_{n-1} Y_{n-1}=2\Phi_{n-1}. \end{align} By Doob's decomposition, \begin{align}\label{eq:doobdecom} W_k = M_k-A_k, \quad 0\leq k \leq n-1, \end{align} where \begin{align}\label{eq:mg} M_k := 2\Phi_{n-1} + \sum^{n-1}_{j=k+1} (W_{j-1} - \E(W_{j-1}\mid\cF_j)) \end{align} is a reverse martingale and \begin{align}\label{eq:incp} A_k := \sum^{n-1}_{j=k+1} (W_j-\E(W_{j-1}\mid\cF_j)) \end{align} is positive and reverse increasing. To see these properties of $A_k$, we note $A_{n-1}=0$ and by \eqref{eq:Wk}, \begin{align}\label{eq:Akdiff} A_{k-1} - A_k = W_k - \E(W_{k-1}\mid \cF_k) \geq 0\quad \text{for $1\leq k \leq n-1$.} \end{align} Hence, for $0\leq k\leq n-1$, we have $0\le W_k\leq M_k$. From the exact formula \eqref{eq:exact}, \begin{align} \E (W_{k-1}\mid \cF_k) = \Phi_{k-1}\E(Y_{k-1}\mid \cF_k) = \Phi_{k-1} (1-B_k) Y_k + 2\Phi_{k-1}\bclr{1-(1-B_k)^{Y_k}}, \end{align} and so \eqref{eq:Akdiff} can be written as \begin{align} A_{k-1}-A_k &= 2B_k\Phi_{k-1}Y_k - 2\Phi_{k-1}\bclr{1-(1-B_k)^{Y_k}} \notag\\ &=2\Phi_{k-1} \bclr{(1-B_k)^{Y_k}-1+B_kY_k}.\label{eq:incA} \end{align} Following the steps in \cite[equation (2.16)]{Janson2023} for evaluating $\var(Y_{k-1}\mid \cF_{k})$ in the {uniform attachment case}, here we have, for $1\leq k\leq n-1$, \begin{align}\label{nov1} &\var(Y_{k-1}\mid \cF_{k})\notag\\ &\quad =\Var(Z_k-2\cdot \mathbf{1}[Z_k\ge 1]\mid \cF_{k})\le 2\Var(Z_k \mid \cF_k) + 2 \Var(2\cdot \mathbf{1}[Z_k\ge 1] \mid \cF_{k})\notag\\ & \quad \le 2 B_k Y_k + 8 \IP(Z_k\ge 1\mid \cF_{k})\le 2 B_k Y_k + 8 \E(Z_k\mid \cF_k) = 10 B_k Y_k. \end{align} Thus, \begin{align}\label{eq:ubvarW} \var(W_{k-1}\mid \cF_k) = \Phi^2_{k-1} \var(Y_{k-1}\mid\cF_k) \leq 10\Phi^2_{k-1} B_k Y_k. \end{align} Let $\bB$ be the $\sigma$-field generated by the beta variables $(B_j)^{n-1}_{j=2}$, and let $\E_\bB$ and $\var_\bB$ denote conditional expectation and variance with respect to $\bB$. 
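Purely as an illustration (and not used in the proofs), the recursion \eqref{eq:stocrecur} together with the conditional distribution \eqref{eq:distZk} can be simulated directly, without generating the whole graph $G_n$. The following minimal Python sketch (function name and interface ours) does this; the beta variables are sampled lazily, which is equivalent since $Y_k$ does not depend on $B_k$. The recursion above is stated for $m=2$; the sketch uses the obvious extension with $m$ outgoing edges per red vertex.
\begin{verbatim}
import numpy as np

def simulate_Y(n, m=2, rho=0.0, rng=None):
    # Illustrative sketch of the coin-tossing recursion of this subsection (n >= 3).
    # Returns (Y, X) with Y[k] = Y_k for 0 <= k <= n-1 and X = sum of J_k over k in [n-1].
    rng = np.random.default_rng() if rng is None else rng
    Y = np.zeros(n, dtype=np.int64)
    Y[n - 1] = m                       # the m incomplete edges leaving vertex n
    X = 0
    for k in range(n - 1, 0, -1):      # k = n-1, ..., 1
        B_k = 1.0 if k == 1 else rng.beta(m + rho, (2 * k - 3) * m + (k - 1) * rho)
        Z_k = rng.binomial(Y[k], B_k)  # Z_k given F_k is Bin(Y_k, B_k)
        J_k = int(Z_k >= 1)            # vertex k is red iff some incomplete edge stops here
        X += J_k
        if k >= 2:                     # vertex 1 has no outgoing edges, and Y_0 = 0
            Y[k - 1] = Y[k] - Z_k + m * J_k
    return Y, X
\end{verbatim}
Averaging the returned value $X$ over independent runs gives a quick numerical sanity check of the $n^{1/3}$ growth rate established below (compare \eqref{tmom20}). We now return to the analysis conditioned on $\bB$.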
Note that $M_0,\dots,M_{n-1}$ is a reverse martingale also conditioned on $\bB$, since $\bB=\cF_{n-1}\subseteq \cF_k$ for every $k$. In particular, \begin{align}\label{eq:ubmeanW} \E_\bB W_k \leq \E_\bB M_k = M_{n-1} = 2\Phi_{n-1}\quad \text{{for $0\leq k\leq n-1$}}. \end{align} Hence, by applying \eqref{eq:mg}, the reverse martingale property, \eqref{eq:ubvarW}, \eqref{de:phi}, \eqref{de:Wk}, and then \eqref{eq:ubmeanW}, for $0\le k\le n-1$, \begin{align} \var_\bB (M_k) &= \E_\bB (M_k - 2\Phi_{n-1})^2 = \sum^{n-1}_{j=k+1}\E_\bB \var(W_{j-1}\mid \cF_j) \notag\\ &\leq 10 \sum^{n-1}_{j=k+1} \Phi^2_{j-1}B_j \E_\bB(Y_j) = 10 \sum^{n-1}_{j=k+1} \frac{B_j}{1+B_j}\Phi_{j-1}\E_\bB (W_j) \notag\\ & \leq {20} \sum^{n-1}_{j=k+1} \frac{B_j}{1+B_j}\Phi_{j-1}\Phi_{n-1} = {20} \sum^{n-1}_{j=k+1} \Phi^2_{j-1} B_j \prod^{n-1}_{i=j+1}(1+B_i).\numberthis\label{eq:varM} \end{align} \subsection{Some estimates}\label{sse:est} Below we provide several estimates for $W_k$, $M_k$, $Z_k$ and $A_k$ that we need later. The results are analogous to \cite[Lemmas 2.1--2.3]{Janson2023}. Recall that $\chi=1/2$ for $m=2$ and $\rho=0$. \begin{lemma}\label{le:doob} For $1\leq k\leq n-1$, we have \begin{align}\label{eq:meanW2} \E_\bB W_k^2 \leq \E_\bB M_k^2\leq 20\sum^{n-1}_{j=k+1}\Phi_{j-1}^2 B_j \prod^{n-1}_{i=j+1}(1+B_i) + 4\Phi_{n-1}^{2}. \end{align} Furthermore, \begin{align}\label{mW2} \E_\bB \max_{0\le k\le n-1} W_k^2 \le \E_\bB \max_{0\leq k \leq n-1} M_k^2 \le 4 \E_\bB M_0^2 \end{align} and there is a positive constant $C$ such that \begin{align}\label{eq:maximalineq} \E \max_{0\leq k \leq n-1} W_k^2 \le \E \max_{0\leq k \leq n-1} M_k^2 \leq Cn. \end{align} \end{lemma} \begin{proof} The first inequalities in \eqref{eq:meanW2} and \eqref{mW2} follow from $0\le W_k\leq M_k$. For the second inequality in \eqref{eq:meanW2}, note that \begin{align}\label{eq:notdoob} \E_\bB M_k^2 = \var_\bB(M_k) + (\E_\bB M_k)^2, \end{align} and the inequality follows from this by the inequality \eqref{eq:varM} and the equalities in \eqref{eq:ubmeanW}. The second inequality in \eqref{mW2} follows from Doob's inequality. By \eqref{mW2}, \eqref{eq:maximalineq} follows from showing $\E M_0^2 \le Cn$. For every $0\leq k\leq n-1$, we use \eqref{eq:varM} and the independence of the beta variables to obtain \begin{align} \E\big(\var_\bB(M_k)\big) &\leq 20 \E\sum^{n-1}_{j=k+1} \Phi^2_{j-1} B_j \prod^{n-1}_{i=j+1}(1+B_i) \notag\\ &=20\sum^{n-1}_{j=k+1}\E\Phi_{j-1}^2 \E B_j \prod^{n-1}_{i=j+1}\E(1+B_i). \end{align} So applying \eqref{eq:Bmean}, \eqref{eq:mprodbeta} and \eqref{eq:phibd}, we get \begin{align}\label{eq:varMbd} \E\big(\var_\bB(M_k)\big) \leq C \sum^{n-1}_{j=k+1} \frac{j}{j}\cdot\bbclr{\frac{n}{j}}^{1/2} \leq C n^{1/2}\sum^{n-1}_{j=k+1} \frac{1}{j^{1/2}} \leq C n . \end{align} Using the {equalities} in \eqref{eq:ubmeanW} and the estimate in \eqref{eq:phibd}, we also have \begin{align}\label{eq:EM2} \E\bigsqpar{( \E_\bB M_k)^2} = \E\bclr{M_{n-1}^2} = 4 \E(\Phi_{n-1}^2)\leq Cn \quad \text{{for $0\leq k\leq n-1$}}. \end{align} Applying \eqref{eq:varMbd} and \eqref{eq:EM2} to \eqref{eq:notdoob}, with $k=0$, we thus have \begin{align} \E M_0^2 \leq {\E}\bigsqpar{\var_\bB(M_0)} + {\E}\bigsqpar{(\E_\bB M_0)^2}\leq C n, \end{align} which together with \eqref{mW2} implies \eqref{eq:maximalineq}. \end{proof} \begin{lemma}\label{le:Zk} There is a positive constant $C$ such that, for $1\leq k\leq n-1$, \begin{align} \IP(Z_k\geq 1) &\leq C\frac{n^{1/2}}{k^{{3/2}}}; \label{eq:Zkg1}\\ \IP(Z_k\geq 2) &\leq C\frac{n}{k^3} \label{eq:Zkg2}. 
\end{align} \end{lemma} \begin{proof} We start by proving \eqref{eq:Zkg1}. Firstly, it follows from \eqref{eq:distZk} and \eqref{de:Wk} that \begin{equation} \E(Z_k\mid \cF_k)=Y_k B_k=\Phi_k^{-1} B_k W_k, \end{equation} which, along with \eqref{eq:ubmeanW}, imply that \begin{equation}\label{MarZ} \E_\bB(Z_k) = \Phi^{-1}_{k} B_k \E_\bB(W_k)\leq {2}\Phi^{-1}_{k} B_k \Phi_{n-1} = {2}B_k\prod^{n-1}_{i=k+1} (1+B_i). \end{equation} Using the independence of $(B_k)^{n-1}_{k=2}$, \eqref{eq:Bmean} and \eqref{eq:mprodbeta}, we therefore have \begin{align}\label{eq:meanZk} \E(Z_k)&{\leq 2}\E(B_k)\prod^{n-1}_{i=k+1}\E(1+B_i) \leq\frac{C}{k}\bbclr{\frac{n}{k}}^{1/2}= C\frac{n^{1/2}}{k^{{3/2}}} , \end{align} and \eqref{eq:Zkg1} follows from Markov's inequality. The proof for \eqref{eq:Zkg2} is similar, this time we observe that by Markov's inequality, \begin{align}\label{eq:Markov2} \IP(Z_k\geq 2\mid \cF_k) \leq \E\bbbclr{\binom{Z_k}{2}\biggm\vert \cF_k} = \binom{Y_k}{2} B_k^2 \leq B_k^2 Y^2_k = B_k^2\Phi_k^{-2} W_k^2. \end{align} By \eqref{eq:meanW2} of Lemma \ref{le:doob} and \eqref{de:phi}, we have \begin{align*} &\E\big(B_k^2\Phi_k^{-2} W_k^2\big) = \E\big(B_k^2\Phi_k^{-2} \E_\bB(W_k^2)\big)\\ & \leq 20 \E\bbbclr{B_k^2\Phi_k^{-2}\sum^{n-1}_{j=k+1}\Phi_{j-1}^2 B_j \prod^{n-1}_{i=j+1}(1+B_i)} + 4\E\bclr{B^2_k\Phi_k^{-2}\Phi_{n-1}^2}\\ &= 20 \E\bbbclr{B_k^2\sum^{n-1}_{j=k+1} B_j\prod^{j-1}_{i=k+1}(1+B_i)^2\prod^{n-1}_{l=j+1}(1+B_l)} + 4\E\bbclr{B^2_k\prod^{n-1}_{i=k+1}(1+B_i)^2}\\ &=20 \E(B_k^2)\sum^{n-1}_{j=k+1} \E(B_j)\prod^{j-1}_{i=k+1}\E(1+B_i)^2\prod^{n-1}_{l=j+1}\E(1+B_l) + 4\E(B_k^2) \prod^{n-1}_{i=k+1}\E(1+B_i)^2 \numberthis \label{eq:EZk2} \end{align*} Applying the estimates in \eqref{eq:Ebetasq}, \eqref{eq:mprodbeta} and \eqref{eq:trunphi2bd} to \eqref{eq:EZk2}, we find \begin{align}\label{eq:BphiW2} \E\big(B_k^2\Phi_k^{-2} W_k^2\big)\leq \frac{C}{k^2}\sum^{n-1}_{j=k+1}\frac{1}{j} \cdot\frac{j}{k}\cdot\bbclr{\frac{n}{j}}^{1/2} + \frac{Cn}{k^3} \leq \frac{Cn}{k^3}. \end{align} Taking the expectation in \eqref{eq:Markov2} and plugging in \eqref{eq:BphiW2} yields \eqref{eq:Zkg2}. \end{proof} \begin{lemma}\label{le:Ak} For $1\leq k\leq n-1$, \begin{equation}\label{eq:incAsim} A_{k-1} -A_k \leq (W_kB_k)^2\Phi^{-1}_{k} \end{equation} and there is a positive constant $C$ such that \begin{equation}\label{eq:EAk} \E A_k \leq \frac{Cn}{k^{3/2}}. \end{equation} \end{lemma} \begin{proof} By \eqref{eq:incA}, Taylor's formula, the increasing property of $\Phi_k$, and \eqref{de:Wk}, \begin{align}\label{eq:Abd} A_{k-1}-A_k&=2\Phi_{k-1}\bclr{(1-B_k)^{Y_k}-1+B_kY_k}\leq 2\Phi_{k-1} \binom{Y_k}{2} B_k^2 \notag\\ &\leq Y_k^2\Phi_{k-1} B_k^2 \leq Y_k^2\Phi_{k} B_k^2 =W_k^2 \Phi^{-1}_{k}B_k^2 , \end{align} yielding \eqref{eq:incAsim}. 
To prove \eqref{eq:EAk}, we note that by a telescoping argument, \eqref{eq:incAsim} implies \begin{align}\label{eq:tele} A_k \leq \sum^{n-1}_{i=k+1} \bclr{W_i B_i}^{2} \Phi^{-1}_{i} \end{align} Furthermore, \eqref{eq:meanW2} of Lemma \ref{le:doob} and \eqref{de:phi} together yield \begin{align*} &\E_\bB\bclr{W^2_i B_i^{2} \Phi^{-1}_{i}}\\ &\leq 20 B_i^2 \Phi^{-1}_{i} \sum^{n-1}_{j=i+1} \Phi_{j-1}^2 B_j\prod^{n-1}_{h=j+1}(1+B_h) + 4 B_i^2 \Phi^{-1}_{i} \Phi^2_{n-1}\\ &=20 B_i^2 \Phi_i\sum^{n-1}_{j=i+1} B_j \prod^{j-1}_{l=i+1} (1+B_l)^2 \prod^{n-1}_{h=j+1}(1+B_h) + 4 B_i^2 \Phi_{i} \prod^{n-1}_{h=i+1}(1+B_h)^2\\ &\leq 40 B_i^2 \Phi_{i-1}\sum^{n-1}_{j=i+1} B_j \prod^{j-1}_{l=i+1} (1+B_l)^2 \prod^{n-1}_{h=j+1}(1+B_h) + 8 B_i^2 \Phi_{i-1} \prod^{n-1}_{h=i+1}(1+B_h)^2 .\numberthis \end{align*} Taking expectation and again using the independence of $(B_k)^{n-1}_{k=2}$, we have \begin{align*} \E\bclr{W^2_i B_i^{2} \Phi^{-1}_{i}} &\leq 40 \E(B_i^2)\E(\Phi_{i-1}) \sum^{n-1}_{j=i+1} \E(B_j) \prod^{j-1}_{l=i+1} \E(1+B_l)^2 \prod^{n-1}_{h=j+1}\E(1+B_h)\\ &\qquad + 8 \E(B_i^2) \E(\Phi_{i-1}) \prod^{n-1}_{h=i+1}\E(1+B_h)^2. \numberthis \end{align*} Applying \eqref{eq:Bmean}, \eqref{eq:Ebetasq}, \eqref{eq:mprodbeta}, \eqref{eq:meanphik} and \eqref{eq:trunphi2bd} to the last display gives \begin{align}\label{eq:Asummand} \E\bclr{W^2_i B_i^{2} \Phi^{-1}_{i}}\leq \frac{C}{i^{3/2}} \sum^{n-1}_{j=i+1} \frac{1}{2j-2} \cdot \frac{j}{i}\cdot \bbclr{\frac{n}{j}}^{1/2} + \frac{Cn}{i^{5/2}}\leq \frac{Cn}{i^{5/2}}. \end{align} Thus, in view of \eqref{eq:Asummand} and \eqref{eq:tele}, we deduce that for $1\leq k\leq n-1$, \begin{align} \E (A_k) \leq \sum^{n-1}_{i=k+1} \frac{Cn}{i^{5/2}}\leq \frac{Cn}{k^{3/2}}, \end{align} hence proving \eqref{eq:EAk}. \end{proof} \section{The early part and a Yule process}\label{se:Yule} We continue to study the case $m=2$ and $\rho=0$, and recall that then $\chi=1/2$. We show that the early part of the growth of $D_n$ can be closely coupled to the same time-changed Yule process as in \cite{Janson2023}, and use this coupling to study $Y_k$ and $W_k$. We start by presenting its construction and key features, following the description in \cite{Janson2023}. Let $\mathcal{Y}$ be a Yule process starting with two particles, and let $\mathcal{Y}_t$ be the number of (living) particles at time $t$ (thus $\mathcal{Y}_0=2$). Note that $\mathcal{Y}_t$ has the same distribution as the sum of two copies of the standard Yule process, which starts with a single particle. (See e.g.\ \cite[Section III.5]{Athreya-Ney1972} for definition and some basic properties.) To better compare the process to $D_n$, it is convenient too to view the Yule process $\mathcal{Y}$ as a tree, where the root $\gamma_0:=0$ marks the beginning of the process and the vertex $\gamma_i$ marks the time of the $i$-th particle split in the process. Note also these split times are a.s.\ distinct. In this way, each particle can be represented by an edge from its time of birth to its time of death. The time-changed Yule tree $ \mathcal{\widehat Y}$ appearing in \cite{Janson2023} is obtained by applying the mapping $t\to e^{-t}$, so that the vertices in $ \mathcal{\widehat Y}$ have labels $e^{-\gamma_i}\in(0,1]$. Hence, the root in $\hcY$ has label 1, and a particle in the original Yule process that is born at time $\gamma_i$ and has lifetime $\tau\in \mathrm{Exp}(1)$ is now represented by an edge from $x=e^{-\gamma_i}$ to $e^{-{(\gamma_i+\tau)}}=xe^{-\tau}=xU$, where $U:=e^{-\tau}\in\sU(0,1)$. 
In light of this, as well as that $e^{-\gamma_0}=1$ and all lifetimes in the original Yule process are independent and have the $\mathrm{Exp}(1)$ distribution, any vertex in $\mathcal{\widehat Y}$ that is $d$ generations away from the root therefore has a label of the form $\widehat U_1\cdots \widehat U_d$, where $\widehat U_1,\dots,\widehat U_d\in\sU(0,1)$ are independent. Let $\xD_n$ be the random red graph $D_n$ with each label $k$ replaced by $(k/n)^\chi$, so that the labels now take values in $(0,1]$. We regard $D_n$ as rooted at $n$; thus the root of $\xD_n$ has label $1$. We shall compare the time-changed Yule tree $\mathcal{\widehat Y}$ to $ \xD_n$, considering only vertices with large enough labels. In preparation, let \begin{equation}\label{eq:n1} n_1=n_1^{(n)}:=\floor{n/\log n}. \end{equation} We will use the construction of $G_n$ in \eqref{de:S}--\eqref{eq:sU}, using the variables $S_{n,k}$ defined there; in particular, recall that $\tU_{k,\ell}$ are independent $\sU(0,1)$ random variables. \begin{lemma}\label{LU} Let\/ $\kappa_n:=\log n/n^{1/3}$. With probability at least $1-C\log n/n^{1/3}$, the following hold: \begin{romenumerate} \item \label{LUa} For every path in $D_n$ between vertex $n$ (the root) and a vertex $k>n_1$ consisting of $d+1\ge2$ red vertices $n=v_0>v_1>\dots>v_{d-1}>v_d=k$ such that $v_i\overset{\ell_i}\to v_{i+1}$ for $0\le i< d$, we have \begin{align}\label{lu1} \bigabs{\widetilde U_{v_0, \ell_0}\cdots \widetilde U_{v_{d-1}, \ell_{d-1}} - \bigpar{\tfrac{k}{n}}^\chi } \le 3d\kk_n. \end{align} \item \label{LUb} For every such path in $D_n$ between vertex $n$ and a vertex $k\le n_1$, we have \begin{align}\label{lu2} \widetilde U_{v_0, \ell_0}\cdots \widetilde U_{v_{d-1}, \ell_{d-1}} \le \bigpar{\tfrac{n_1}{n}}^\chi + 3d\kk_n. \end{align} \end{romenumerate} \end{lemma} \begin{proof} We may assume that $n$ is large enough such that $n_1\qw < \kk_n$, since the result is trivial for small $n$ by choosing $C$ large enough. Firstly, it follows from \eqref{kil} and \eqref{eq:sU} that if $k\overset\ell\to i$, then \begin{align}\label{eq:Sinterval} S_{n,i-1}\leq \widetilde U_{k,\ell} S_{n,k-1}<S_{n,i}. \end{align} Secondly, again assuming that $n$ is large, we take $\eps=1/3$ and $\psi_n=n_1$ in Lemma \ref{le:Sest} and find that there is a positive constant $C$ such that with probability at least $1-C\log n/n^{1/3}$, \begin{align}\label{eq:Sest} \max_{n_1\leq j< n} \big| S_{n,j}-\bclr{\tfrac{j}{n}}^\chi \big| \leq 2 n_1^{-1/3} \le 3 \log^{1/3} n/n^{1/3} \leq \kappa_n. \end{align} We assume in the rest of the proof that \eqref{eq:Sest} holds, and show first that \eqref{lu1} follows by induction on $d$. Note first that if $j> n_1$, then \eqref{eq:Sest} implies \begin{align}\label{eq:Sest-} S_{n,j-1} \ge \bclr{\tfrac{j-1}{n}}^\chi -\kk_n& = \bclr{\tfrac{j}{n}}^\chi \bclr{1-\tfrac{1}{j}}^\chi -\kk_n \ge \bclr{\tfrac{j}{n}}^\chi \bclr{1-\tfrac{1}{j}} -\kk_n \ge \bclr{\tfrac{j}{n}}^\chi -\tfrac{1}{j} -\kk_n \notag\\& \ge \bclr{\tfrac{j}{n}}^\chi -2\kk_n. \end{align} For the base case $d=1$ we have by \eqref{eq:Sinterval}, \eqref{eq:Sest}, \eqref{eq:Sest-}, and recalling $S_{n,n-1}=1$, \begin{align} \tU_{n,\ell_0} \le S_{n,k} \le \bclr{\tfrac{k}{n}}^\chi +\kk_n \label{eq:basel} \end{align} and \begin{align} \tU_{n,\ell_0} \ge S_{n,k-1} \ge \bclr{\tfrac{k}{n}}^\chi -2\kk_n, \label{eq:base2} \end{align} which show \eqref{lu1} in this case.
For $d\ge2$, we use induction and find, using the induction hypothesis and \eqref{eq:Sinterval}--\eqref{eq:Sest-}, \begin{align}\label{lu3} \widetilde U_{v_0, \ell_0}\cdots \widetilde U_{v_{d-1}, \ell_{d-1}} & \le \Bigpar{\bigpar{\tfrac{v_{d-1}}{n}}^\chi +3(d-1)\kk_n}\tU_{v_{d-1}, \ell_{d-1}} \notag\\& \le \bigpar{S_{n,v_{d-1}-1}+2\kk_n+(3d-3)\kk_n}\tU_{v_{d-1}, \ell_{d-1}} \notag\\& \le S_{n,v_{d-1}-1}\tU_{v_{d-1}, \ell_{d-1}} + (3d-1)\kk_n \notag\\& < S_{n,k} + (3d-1)\kk_n \notag\\& \le \bigpar{\tfrac{k}{n}}^\chi +3d\kk_n \end{align} and similarly, using also $S_{n,v_{d-1}}\ge S_{n,v_{d-1}-1}$, \begin{align}\label{lu4} \widetilde U_{v_0, \ell_0}\cdots \widetilde U_{v_{d-1}, \ell_{d-1}} & \ge \Bigpar{\bigpar{\tfrac{v_{d-1}}{n}}^\chi -3(d-1)\kk_n}\tU_{v_{d-1}, \ell_{d-1}} \notag\\& \ge \bigpar{S_{n,v_{d-1}}-\kk_n-(3d-3)\kk_n}\tU_{v_{d-1}, \ell_{d-1}} \notag\\& \ge S_{n,v_{d-1}-1}\tU_{v_{d-1}, \ell_{d-1}} - (3d-2)\kk_n \notag\\& \ge S_{n,k-1} - (3d-2)\kk_n \notag\\& \ge \bigpar{\tfrac{k}{n}}^\chi -3d\kk_n .\end{align} These inequalities prove \eqref{lu1}, which completes the proof of \ref{LUa}. To prove \ref{LUb}, assume first $v_{d-1}> n_1\ge k$. Then, using \eqref{lu1}, the first lines of \eqref{lu3} still hold and yield \begin{align}\label{lu5} \widetilde U_{v_0, \ell_0}\cdots \widetilde U_{v_{d-1}, \ell_{d-1}} & < S_{n,k} + (3d-1)\kk_n .\end{align} Furthermore, by $n_1\ge k$ and \eqref{eq:Sest}, \begin{align}\label{lu6} S_{n,k} \le S_{n,n_1} \le \bigpar{\tfrac{n_1}{n}}^\chi +\kk_n, \end{align} and \eqref{lu2} follows by \eqref{lu5} and \eqref{lu6}. Finally, in the remaining case $v_{d-1}\le n_1$, we use the trivial $\tU_{v_0, \ell_0}\cdots\tU_{v_{d-1}, \ell_{d-1}} \le \tU_{v_0, \ell_0}\cdots\tU_{v_{d-2}, \ell_{d-2}} $ and induction on $d$. \end{proof} Recall that a vertex in $\mathcal{\widehat Y}$ that is $d$ generations away from the root has label of the form $\widehat U_1\cdots \widehat U_d$, where $\widehat U_i\in \sU[0,1]$ and are independent. In view of \eqref{lu1}, we couple $\mathcal{\widehat Y}$ and $\xD_n$ by generating them together as follows, where we also construct a mapping $\Psi$ of the vertices of $\hcY$ to the vertices of $\xD_n$. In the construction below, $\hcY$ and $\xD_n$ will be finite subsets of the final Yule tree and digraph, and $\Psi$ maps the current $\hcY$ onto the current $\xD_n$. Recall that $\hcY$ and $\xD_n$ determine $\cY$ and $D_n$ by (deterministic) relabelling. \begin{enumerate} \item Sample the beta variables $(B_j)^{n-1}_{j=2}$ defined in \eqref{de:betas}. This defines also $S_{n,j}$ by \eqref{de:S}. \item Start the construction by letting $\hcY$ and $\xD_n$ just consist of their roots, both labelled 1. Let $\Psi$ map the root of $\hcY$ to the root of $\xD_n$. \item Let $x$ be the vertex in the constructed part of $\hcY$ that has the largest label among all vertices that do not yet have children assigned. Give $x$ children $xU'_{x,1}$ and $xU'_{x,2}$ (which are added to the current $\hcY$), where $U'_{x,\ell}$ are independent $\sU[0,1]$ variables that are independent of all other variables. The vertex $x$ is mapped to some vertex $\Psi(x)=(k/n)^\chi$ in $\xD_n$, which thus corresponds to vertex $k$ in $D_n$. There are three cases: \begin{enumerate} \item If $k>1$ and $\Psi(x)$ has not yet got any children, define $\tU_{k,\ell}:=U'_{x,\ell}$ for $\ell=1,2$. 
This defines by \eqref{kil}--\eqref{eq:sU} the edges from $k$ and thus the children of $k$ in $D_n$; if these children are $k_1$ and $k_2$, the corresponding children in $\xD_n$ are $(k_1/n)^\chi$ and $(k_2/n)^\chi$; we add them to $\xD_n$ and we define $\Psi(x_{\ell}):=(k_\ell/n)^\chi$, thus mapping the children of $x$ in $\hcY$ to the children of $\Psi(x)$ in $\xD_n$. \item If $k>1$ and $\Psi(x)$ already has children (because it equals $\Psi(y)$ for some $y>x$), then we just extend $\Psi$ by mapping the children of $x$ to the children of $\Psi(x)$ (in any order). \item If $k=1$, so $x$ maps to $v=(1/n)^\chi$ (which has no children in the final $\xD_n$), we extend $\Psi$ by mapping also the children of $x$ to $v$. \end{enumerate} \item Repeat from (3) (\emph{ad infinitum}). \end{enumerate} It is easy to see that running this ``algorithm'' an infinite number of iterations yields $\hcY$ and $\xD_n$ with the right distributions, together with a map $\Psi$ of the vertices of $\hcY$ onto the vertices of $\xD_n$ such that every path from the root in $\xD_n$ is the image of a path from the root in $\hcY$. $\Psi$ is obviously not injective since $\hcY$ is an infinite tree. Nevertheless, we show that restricted to rather large labels, the mapping $\Psi$ is \whp\ a bijection which perturbs the vertex labels with small errors. \begin{theorem}\label{th:coupling} Let $n_1:=\floor{n/\log n}$. We may w.h.p.\ couple the $\xD_n$ and the time-changed Yule tree $\mathcal{\widehat Y}$, such that considering only vertices with labels in $((n_1/n)^\chi,1]$ and edges with the starting points in this set, there is a bijection between these sets of vertices in the two models which displaces each label by at most $3\log^2n/n^{1/3}$, and a corresponding bijection between the edges (preserving the incidence relations). In particular, \whp{} \begin{equation}\label{eq:Yulecoupling} Y^{(n)}_{n_1} = \mathcal{\widehat Y}_{(n_1/n)^\chi}, \end{equation} where $\hcY_x=\cY_{-\log x}$ is the number of edges in $\hcY$ alive at time $x$. \end{theorem} \begin{proof} The proof is similar to that of \cite[Theorem 3.1]{Janson2023}, but with several technical complications. We use the coupling constructed before the theorem. Let $\gd_n:= 3\log^2n/n^{1/3}=(\log n)\kk_n$, with $\kk_n$ as in \refL{LU}. \stepp\label{TCO1} We first note that if some vertex in $\xD_n$ with label in $[(n_1/n)^\chi,1]$ is the image of two or more vertices in $\hcY$, then the corresponding vertex $k\ge n_1$ in $D_n$ can be reached from $n$ by at least two different paths in $D_n$, and if we let $k$ be maximal with this property, then its indegree $Z_k\ge2$. Consequently, the probability that this happens is at most, using \eqref{eq:Zkg2} of Lemma \ref{le:Zk}, \begin{align}\label{medges} \sum^{n-1}_{k=n_1}\IP(Z_k\geq 2) \leq C n \sum^{n-1}_{k=n_1}k^{-3}=O(n/n^2_1) = {O(\log^2 n/n)} =o(1). \end{align} Hence, \whp\ the mapping $\Psi$ from $\hcY$ to $\xD_n$ is injective at every vertex in $\xD_n\cap[(n_1/n)^\chi,1]$. We may thus in the sequel assume that this injectivity holds. Note that this implies that in the construction of the mapping $\Psi$ above, we have $\tU_{k,\ell}=U'_{x,\ell}$ for every $k\ge n_1$ and vertex $x\in\hcY$ such that $\Psi(x)=(k/n)^\chi$. \stepp\label{TCO2} As in \cite{Janson2023}, $\mathcal{\widehat Y}_x=\mathcal{Y}_{-\log x}$ for every $x\in (0,1]$, so by standard properties of the Yule process (see e.g.\ \cite[Section 3]{Janson2023}) \begin{align}\label{eq:Yule1} \E \mathcal{\widehat Y}_x = \E \mathcal{Y}_{-\log x} = 2e^{-\log x} = 2/x. 
\end{align} In $\mathcal{\widehat Y}$, there are $\mathcal{\widehat Y}_x-1$ vertices with labels in $[x,1]$. Thus, \eqref{eq:Yule1} implies that w.h.p.\ there are less than $\floor{\log n}$ such vertices for $x=(n_1/n)^\chi \sim \log^{-\chi} n$. It follows trivially that w.h.p., in $\hcY$ the number of generations from the root to any point in $[(n_1/n)^\chi,1]$ is less than $\floor{\log n}$. Hence, we may assume this property too. \stepp\label{TCO3} The expected number of vertices in $\mathcal{\widehat Y}$ that are within $\gd_n$ from $(n_1/n)^\chi$ is, by \eqref{eq:Yule1}, \begin{align*} \E\bclr{\hcY_{(n_1/n)^\chi-\gd_n} - \hcY_{(n_1/n)^\chi+\gd_n}} &= \frac{2}{(n_1/n)^\chi-\gd_n} - \frac{2}{(n_1/n)^\chi+\gd_n} \\& = O\bbclr{\frac{\gd_n}{(n_1/n)^{2\chi}}} =O\bclr{\gd_n \log^{2\chi}n} =o(1). \numberthis \label{eq:displacement} \end{align*} Hence, \whp\ there are no vertices $x$ in $\hcY$ with $|x-(n_1/n)^\chi|\le\gd_n$. We may in the sequel assume this. \stepp\label{TCO4} Consequently, \whp{} the properties in \refSteps{TCO1}--\ref{TCO3} hold, and also the conclusions \ref{LUa} and \ref{LUb} of \refL{LU}. We assume this for the rest of the proof. Suppose that $k> n_1$ is a vertex of $D_n$, and let $v=(k/n)^\chi$ be the corresponding vertex of $\xD_n$. By \refStep{TCO1}, $v=\Psi(x)$ for a unique vertex $x\in\hcY$. If $x\in((n_1/n)^\chi,1]$, then the number of generations from the root to $x$ in $\hcY$ is at most $\log n$ by \refStep{TCO2}. The number of generations to $\Psi(x)$ in $\xD_n$ is the same, and it follows from \eqref{lu1} and the equality $\tU_{k,\ell}=U'_{x,\ell}$ in \refStep{TCO1} that, denoting the path from $n$ to $k$ as in \refL{LU}, \begin{align}\label{tco4} |x-\Psi(x)|= \bigabs{\tU_{v_0, \ell_0}\cdots \tU_{v_{d-1}, \ell_{d-1}} -\bigpar{\tfrac{k}{n}}^\chi} \le 3 (\log n) \kk_n = 3\log^2n/n^{1/3} = \gd_n. \end{align} It remains to show only that no vertex $x$ in $\hcY$ is pushed over the boundary $(n_1/n)^\chi$ (in any direction) by $\Psi$. \stepp\label{TCO5} Suppose that there exists a vertex $x\le (n_1/n)^\chi$ in $\hcY$ such that $\Psi(x)> (n_1/n)^\chi$. Let $y>x$ be the parent of $x$ in $\hcY$, so that $\Psi(y)>\Psi(x)$. Assume also $y>(n_1/n)^\chi$. By \refStep{TCO2}, it follows that the number of generations from the root to $y$ is less than $\floor{\log n}$, and thus the number of generations to $x$ is at most $\floor{\log n}$. Consequently, \eqref{lu1} shows, similarly to \eqref{tco4}, that \begin{align} |x-\Psi(x)| \le 3 (\log n) \kk_n =\gd_n \end{align} and hence \begin{align} x\ge \Psi(x)-\gd_n \ge (n_1/n)^\chi-\gd_n. \end{align} However, we have also $x\le(n_1/n)^\chi$, so $|x-(n_1/n)^\chi|\le\gd_n$, and by \refStep{TCO3}, there is no such vertex $x$ in $\hcY$. If $y\le(n_1/n)^\chi$, we may instead replace $x$ by $y$ (and repeat this if necessary) until we encounter a vertex $x$ with parent $y$ such that $x\le(n_1/n)^\chi$, $\Psi(x)>(n_1/n)^\chi$ and $y>(n_1/n)^\chi$. However, we have shown that such a pair cannot exist. \stepp\label{TCO6} Suppose that there exists a vertex $x>(n_1/n)^\chi$ in $\hcY$ such that $\Psi(x)\le (n_1/n)^\chi$. By \refStep{TCO2}, it follows that the number of generations from the root to $x$ is less than $\log n$. This time \eqref{lu2} applies and shows that \begin{align} x\le \bigpar{\tfrac{n_1}{n}}^\chi + 3 (\log n) \kk_n = \bigpar{\tfrac{n_1}{n}}^\chi + \gd_n. \end{align} However, by \refStep{TCO3} again, there is no such vertex $x$ in $\hcY$. 
The various claims proved in the steps above show that with the coupling and mapping $\Psi$ constructed before the theorem, \whp\ $\Psi$ yields a bijection with the stated properties. In particular, \whp\ $Y_{n_1}$ equals the number of edges in $\hcY$ that are alive at $(n_1/n)^\chi$, meaning that they start in $J:=[(n_1/n)^\chi,1]$ and end outside $J$. (Note that a.s.\ $\hcY$ has no point exactly at $(n_1/n)^\chi$, so it does not matter whether we include that point in $J$ or not.) Finally, note that for any $x\in(0,1)$, the number of edges of $\hcY$ that are alive at $x$ equals the number of edges in $\cY$ that are alive at $-\log x$, which equals the number $\cY_{-\log x}$ of particles at $-\log x$, since each edge represents the lifeline of one particle. \end{proof} We define \begin{equation}\label{de:Xi} \Xi=\Xi^{(n)}:=\frac{W^{(n)}_{n_1}}{n^{\chi}}. \end{equation} This random variable plays the same role as $\Xi$ in \cite{Janson2023}, but note the different scaling. Recall also $\tgb$ defined in \eqref{lb2a}--\eqref{lb2b}. \begin{lemma}\label{LXi} As $n\to\infty$, \begin{equation}\label{lxi} \Xi^{(n)}\dto \tgb\cdot \xi, \end{equation} where $\tgb$ is given by \eqref{lb2b}, and $\xi\in\mathrm{Gamma}(2,1)$ is independent of $\tgb$. \end{lemma} \begin{proof} As in \cite[Lemma 3.2]{Janson2023}, we first generate the Yule process $\cY$, and then for each $n$ separately, we construct $D_n$ by the construction given before \refT{th:coupling}. This yields for each $n$ the coupling in \refT{th:coupling}. In particular, \eqref{eq:Yulecoupling} holds w.h.p. By standard properties of the Yule process (see e.g.\ \cite[Section III.5 and Problem III.2]{Athreya-Ney1972}), \begin{equation}\label{eq:Yulepop} x\mathcal{\widehat Y}_{x} = x\mathcal{Y}_{-\log x} \asto \xi \quad \text{as $x\to 0$}, \end{equation} with $\xi\in\mathrm{Gamma}(2,1)$. Therefore, \eqref{eq:Yulecoupling} and \eqref{eq:Yulepop} together imply that \begin{equation}\label{eq:Yconvergence} \bbclr{\frac{n_1}{n}}^{\chi} Y^{(n)}_{n_1} \pto \xi. \end{equation} Hence, using also \eqref{de:Xi}, \eqref{de:Wk}, and \eqref{lb2b}, \begin{align}\label{lxi3} \Xi\nn = n^{-\chi}\Phi_{n_1}Y_{n_1} = n_1^{-\chi}\Phi_{n_1}\bbclr{\frac{n_1}{n}}^{\chi} Y_{n_1} \pto \tgb\xi. \end{align} Finally, $Y\nn_{n_1}$ is a function of $(B_i)_{i>n_1}$ and the coin tosses made for $k>n_1$ in the construction. Hence, for any fixed $K$, $Y\nn_{n_1}$ is independent of $(B_i)_{i=1}^K$ for large enough $n$. Consequently, \eqref{eq:Yconvergence} implies that $\xi$ is independent of $(B_i)_{i=1}^K$ for every $K<\infty$, and thus $\xi$ and $\gb$ are independent. \end{proof} \section{The flat middle part}\label{se:flat} Let $n_2$ be any sequence of integers satisfying $n^{1/3}\ll n_2 \leq n_1$. We show that similar to the case of uniform attachment, the variable $W_k$ does not fluctuate much in the range $n_1\geq k\geq n_2$. Below we give analogues of \cite[Lemmas 4.1--4.2 and Theorem 4.3]{Janson2023}, where we recall the definitions of $W_k$, $M_k$, $A_k$ and $\Xi^{(n)}$ in \eqref{de:Wk}, \eqref{eq:doobdecom} and \eqref{de:Xi}. The results and proofs are again similar, but with a different scaling. \begin{lemma}\label{le:maxAk} As $n\to\infty$, \begin{equation}\label{eq:IImaxAk} \max_{n_2\leq k\leq n-1} \bigg|\frac{A_k}{n^{1/2}}\bigg| = \frac{A_{n_2}}{n^{1/2}}\pto 0. \end{equation} \end{lemma} \begin{proof} The first equality in \eqref{eq:IImaxAk} follows from the fact that $A_k$ is reverse increasing. 
By \eqref{eq:EAk} in Lemma \ref{le:Ak} and the choice of $n_2$, \begin{equation} \E \frac{A_{n_2}}{n^{1/2}} \leq C\frac{n}{n^{1/2}n_2^{3/2}} =o(1), \end{equation} implying the convergence in probability in \eqref{eq:IImaxAk}. \end{proof} \begin{lemma}\label{le:maxMk} As $n\to\infty$, \begin{equation}\label{lfM} \max_{0\leq k \leq n_1} \bigg|\frac{M_k}{n^{{1/2}}}-\Xi^{(n)}\bigg| \pto 0. \end{equation} \end{lemma} \begin{proof} Recall that $M_k$ is a reverse martingale. Hence we obtain by Doob's inequality, \eqref{eq:mg}, \eqref{eq:ubvarW}, \eqref{de:Wk}, \eqref{de:phi}, and \eqref{eq:ubmeanW}, \begin{align*} \E \max_{0\leq k \leq n_1} |M_k-M_{n_1}|^2 &\leq 4\E|M_0-M_{n_1}|^2 = 4 \sum^{n_1}_{i=1} \E \var(W_{i-1}\mid \cF_i)\\ &\leq 40 \sum^{n_1}_{i=1} \E\bclr{\Phi_{i-1}^{2}B_iY_i}=40 \sum^{n_1}_{i=1} \E\bclr{\Phi_{i-1}W_iB_i(1+B_i)^{-1}} \\ &= 40 \sum^{n_1}_{i=1}\E\bclr{\Phi_{i-1}B_i(1+B_i)^{-1}\E_\bB(W_i)}\\ &\le 80 \sum^{n_1}_{i=1} \E\bclr{\Phi_{i-1}B_i(1+B_i)^{-1}\Phi_{n-1}} \\ & = 80 \sum^{n_1}_{i=1} \E\bbclr{\Phi^2_{i-1}B_i\prod^{n-1}_{j=i+1}(1+B_j)}. \numberthis\label{hw2} \end{align*} Using the independence of $(B_i)^{n-1}_{i=2}$, \eqref{eq:Bmean}, \eqref{eq:mprodbeta} and \eqref{eq:phibd}, it follows from \eqref{hw2} that \begin{align*} \E \big( \max_{0\leq k \leq n_1} |M_k-M_{n_1}|\big)^2 &= \E \max_{0\leq k \leq n_1} |M_k-M_{n_1}|^2 \leq C \sum^{n_1}_{i=1} \frac{i}{i}\bbclr{\frac{n}{{i}}}^{1/2}\\ &\leq C(n n_1)^{1/2} = o(n).\numberthis \label{mmm} \end{align*} Thus, by \eqref{de:Xi}, \eqref{eq:doobdecom}, the triangle inequality, Lemma \ref{le:maxAk}, and \eqref{mmm}, \begin{align} \max_{0\leq k \leq n_1} \bigg|\frac{M_k}{n^{1/2}}-\Xi^{(n)}\bigg| &\leq \max_{0\leq k \leq n_1} \bigg|\frac{M_k}{n^{1/2}}-\frac{M_{n_1}}{n^{1/2}}\bigg| + \frac{A_{n_1}}{n^{1/2}} \pto 0, \end{align} as required. \end{proof} \begin{theorem}\label{TF1} As $n\to\infty$, \begin{equation}\label{tf1} \max_{n_2\leq k\leq n_1} \bigg|\frac{W_k}{n^{1/2}}-\Xi^{(n)}\bigg|\pto 0. \end{equation} \end{theorem} \begin{proof} The result is a direct consequence of \eqref{eq:doobdecom}, the triangle inequality, and Lemmas \ref{le:maxAk} and~\ref{le:maxMk}. \end{proof} \section{The final part: tightness}\label{Stig} Most vertices in $D_n$ turn out to have labels of the order $n^{1/3}$. To study this region in detail, we extend the processes $W_k$, $M_k$ and $A_k$ to real arguments $t\in[0,n-1]$ by linear interpolation. We for convenience extend them further to $t\in[0,\infty)$ by defining them to be constant on $[n-1,\infty)$. The aim of this and the next section is to prove convergence of $W_{tn^{1/3}}$ and $Y_{tn^{1/3}}$ as $n\to\infty$, after suitable rescaling. A key ingredient is the tightness of the random function \begin{align}\label{hA} \hA\nn_t:=n^{-1/2} A_{t n^{1/3}}\nn, \qquad t\ge0. \end{align} Recall that $C[a,b]$ is the space of continuous functions on $[a,b]$. \begin{lemma}\label{LA2} Let $0<a <b<\infty$. Then the stochastic processes $\hA\nn_t$, $n\ge1$, are tight in $C[a,b]$. \end{lemma} The proof of \refL{LA2} is more complicated than for the corresponding Lemma 5.2 in \cite{Janson2023}, and we show first two other lemmas. We begin by stating a simple general lemma on tightness in the space $C[a,b]$. (See e.g.\ \cite{Billingsley} for a background.) \begin{lemma}\label{LC} Let $-\infty<a<b<\infty$. Let\/ $(X_n(t))\nxoo$ and\/ $(Y_n(t))\nxoo$ be two sequences of random continuous functions on $[a,b]$. 
Suppose that there exists a sequence $(Z_n)\nxoo$ of random variables such that for every $n$ and $s,t\in[a,b]$, we have \begin{align}\label{lc} |X_n(t)-X_n(s)| \le Z_n |Y_n(t)-Y_n(s)|. \end{align} If the sequences $(X_n(a))\nxoo$ and $(Z_n)\nxoo$ are tight, and $(Y_n(t))\nxoo$ is tight in $C[a,b]$, then the sequence $(X_n(t))\nxoo$ is tight in $C[a,b]$. \end{lemma} \begin{proof} We may for convenience assume $[a,b]=\oi$. We define for any function $f$ on \oi{} its modulus of continuity \begin{align} \go(f;\gd):=\sup_{s,t\in\oi;\; |s-t|<\gd}|f(s)-f(t)|, \qquad \gd>0. \end{align} Then \cite[Theorem 8.2]{Billingsley} says that a sequence $(X_n(t))\nxoo$ in $C\oi$ is tight if and only if \begin{romenumerate} \item\label{bill1} the sequence $(X_n(0))_n$ is tight, and \item\label{bill2} for each positive $\eps$ and $\eta$, there exists $\gd>0$ such that, for every $n$, \begin{align} \P\bigpar{\go(X_n;\gd)\ge\eps}\le\eta. \end{align} \end{romenumerate} We have already assumed \ref{bill1}. Moreover, the assumption \eqref{lc} implies \begin{align}\label{lc4} \go(X_n;\gd)\le Z_n\go(Y_n;\gd) \end{align} for every $\gd$. Let $\eps,\eta>0$ be given. Since the sequence $(Z_n)\nxoo$ is tight, there exists a number $K>0$ such that $\P(|Z_n|>K) <\eta/2$ for every $n$. Hence, \eqref{lc4} implies \begin{align}\label{lc5} \P\bigpar{\go(X_n;\gd)\ge\eps}& \le \P(|Z_n|>K) + \P\bigpar{K\go(Y_n;\gd)\ge \eps} \notag\\& \le \eta/2 + \P\bigpar{\go(Y_n;\gd)\ge \eps/K}. \end{align} Since $(Y_n(t))\nxoo$ is tight, conditions \ref{bill1}--\ref{bill2} hold for $Y_n(t)$; in particular, there exists $\gd>0$ such that $\P\bigpar{\go(Y_n;\gd)\ge\eps/K}\le\eta/2$ for every $n$. Then \eqref{lc5} shows that \ref{bill2} holds. Consequently, $(X_n(t))\nxoo$ is tight. \end{proof} Recall that \refL{le:Ak} shows that \begin{align}\label{ha2} 0\le A_{k-1}-A_k \le W_k^2\Phi_k\qw B_k^2 \le M_k^2\Phi_k\qw B_k^2, \qquad 1\le k\le n-1. \end{align} We define the simpler \begin{align}\label{ha3} V_k:=\sum_{j=1}^k B_j^2,\quad T_k :=\sum_{j=1}^k B_j, \qquad 0\le k\le n-1; \end{align} we extend also $V_k$ and $ T_k$ by linear interpolation to real arguments, and define \begin{align}\label{hV} \hV\nn_t:=n^{1/3} V_{t n^{1/3}}\nn, \quad\wh T\nn_t := T\nn_{tn^{1/3}}, \qquad t\ge0. \end{align} The proof of \refL{LA2} only uses $\hV\nn_t$, but $\wh T\nn_t$ is needed later when we prove \eqref{nov7}. \begin{lemma}\label{LV2} Let $0<a<b<\infty$. Then the stochastic processes $\hV\nn_t-\hV\nn_a$ and $\wh T\nn_t- \wh T\nn_a$, $n\ge1$, are tight in $C[a,b]$. \end{lemma} \begin{proof} We start with $\hV\nn_t-\hV\nn_a$, $n\geq 1$. If $1\le k\le \ell\le n-1$, then \begin{align}\label{hdj1} \E|V_\ell- V_k|^2 =\E\Bigpar{ \sum_{i,j=k+1}^\ell B_i^2 B_j^2} = \sum_{i,j=k+1}^\ell\E\bigsqpar{ B_i^2 B_j^2} .\end{align} If $i\neq j$, then $B_i$ and $B_j$ are independent, and thus, by \eqref{eq:Ebetasq}, $\E\bigsqpar{ B_i^2 B_j^2}=\E[B_i^2]\,\E[B_j^2] = O\bigpar{i^{-2}j^{-2}}$. On the other hand, if $i=j$, then, by \eqref{betamom}, recalling that $B_i\in \Beta(2,4i-6)$, \begin{align}\label{hdj2} \E B_i^4 = \frac{2\cdot3\cdot4\cdot5}{(4i-4)(4i-3)(4i-2)(4i-1)} = O\bigpar{i^{-4}}. \end{align} Consequently, \eqref{hdj1} yields \begin{align}\label{hdj3} \E|V_\ell- V_k|^2 \le \sum_{i,j=k+1}^\ell C i^{-2}j^{-2} \le C (\ell-k)^2 k^{-4} .\end{align} This trivially holds for $\ell>n-1$ too, since then $V_\ell=V_{n-1}$ by definition. 
Furthermore, writing \eqref{hdj3} as $\norm{V_\ell-V_k}_{L^2}\le C (\ell-k)k^{-2}$, it follows by Minkowski's inequality that we can interpolate between integer arguments and conclude that \eqref{hdj3} holds for all real $k$ and $\ell$ with $1\le k\le \ell$. Let $s$ and $t$ be real numbers with $0<s\le t$. Then \eqref{hV} and \eqref{hdj3} yield, with $k:=sn^{1/3}$ and $\ell:=tn^{1/3}$, \begin{align}\label{hdj4} \E\bigabs{\hV\nn_t-\hV\nn_s}^2 = n^{2/3} \E |V_\ell-V_k|^2 \le C n^{2/3}|\ell-k|^2 k^{-4} = C|t-s|^2 s^{-4}. \end{align} For the restriction to $[a,b]$ we thus have \begin{align}\label{hdj4b} \E\bigabs{\hV\nn_t-\hV\nn_s}^2 \le C_a|t-s|^2 , \qquad a\le s\le t\le b, \end{align} which shows the tightness of $\hV\nn_t-\hV\nn_a$ by \cite[Theorem 12.3]{Billingsley}. Tightness of $\wh T\nn_t-\wh T\nn_a $ can be shown by using \begin{align} \E| T_k- T_\ell|^2 =\sum^\ell_{i,j=k+1} \E(B_iB_j) \leq C(\ell-k)^2 k^{-2}, \quad 1\leq k\leq \ell\leq n-1, \end{align} instead of \eqref{hdj3} and proceeding similarly. \end{proof} \begin{proof}[Proof of \refL{LA2}] Note first that \eqref{eq:EAk} implies that \begin{align} \E \hA\nn_a \le n^{-1/2}\frac{Cn}{(an^{1/3})^{3/2}} = Ca^{-3/2}, \end{align} and thus the sequence $\hA\nn_a$ is tight. Let \begin{align}\label{hdj0} \xM_n:=n^{-1/2}\max_{k\ge1} M_k \qquad\text{and}\qquad \Phix_n:= n^{1/6}\Phi\qw_{\floor{a n^{1/3}}}. \end{align} Then \refL{le:doob} shows that $\E\xM_n^2\le C$, and \eqref{eq:phibd} yields $\E\Phix_n=O(1)$; hence the sequences $(\xM_n)\nxoo$ and $(\Phix_n)\nxoo$ are tight. By \eqref{ha2} and \eqref{ha3}, we have for any integer $k$ with $k\ge a n^{1/3}$, since $\Phi_k$ is increasing by the definition \eqref{de:phi}, using \eqref{hdj0}, \begin{align}\label{hdj01} | A_k- A_{k-1}| & \le M_k^2\Phi_k\qw (V_k-V_{k-1}) \le n\xM_n^2 \Phi\qw_{\floor{a n^{1/3}}} (V_k-V_{k-1}) \notag\\& = n^{5/6}\xM_n^2 \Phix_n (V_k-V_{k-1}). \end{align} Thus, if $\floor{a n^{1/3}}\le k\le \ell$, \begin{align}\label{hdj5} | A_\ell- A_{k}| \le n^{5/6}\xM_n^2 \Phix_n (V_\ell-V_{k}), \end{align} We can interpolate between integer arguments and conclude that \eqref{hdj5} holds for all real $k$ and $\ell$ with $\floor{a n^{1/3}}\le k\le \ell$. By \eqref{hA} and \eqref{hV}, this shows that if $a\le s\le t$, then \begin{align}\label{hdj6} \bigabs{\hA\nn_t-\hA\nn_s} \le \xM_n^2 \Phix_n \bigpar{\hV\nn_t-\hV\nn_s}. \end{align} The result now follows from \refLs{LC} and \ref{LV2}, taking $X_n(t):=\hA\nn_t$, $Y_n(t):=\hV\nn_t-\hV\nn_a$, and $Z_n:=\xM_n^2\Phix_n$, noting that $(Z_n)\nxoo$ is tight since $(\xM_n)\nxoo$ and $(\Phix_n)\nxoo$ are. \end{proof} \section{The final part: convergence}\label{Sconv} Recall the definition of the spaces $C(0,\infty)$ and $C[0,\infty)$ in Section \ref{Snot}. \begin{theorem}\label{th:cvg} As $n\to\infty$ we have \begin{align}\label{cvg3} \frac{W_{tn^{1/3}}}{n^{1/2}}&\dto 4\tilde \beta \bclr{\bclr{t^{9/2}+\tfrac{3}{4}\xi t^{3}}^{1/3}-t^{3/2}} \qquad\text{in $C[0,\infty)$}, \intertext{and} \label{cvg4} \frac{Y_{tn^{1/3}}}{n^{1/3}}&\dto 4 \bclr{\bclr{t^{3}+\tfrac{3}{4}\xi t^{3/2}}^{1/3}-t} \qquad\text{in $C(0,\infty)$}, \end{align} with $\tilde\beta$ as in \eqref{lb2b} and\/ $\xi\in\mathrm{Gamma}(2,1)$ independent of $\tilde \beta$. \end{theorem} \begin{remark} We believe that \eqref{cvg4} holds also in $C[0,\infty)$, but we see no simple proof so we leave this as an open problem. \end{remark} \begin{proof} The proof is similar to that of \cite[Theorem 5.3]{Janson2023}, but with several technical complications. 
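Before turning to the details, it may help to record an elementary consistency check on the limit in \eqref{cvg3}: as $t\to\infty$,
\begin{align*}
4\tilde \beta \bclr{\bclr{t^{9/2}+\tfrac{3}{4}\xi t^{3}}^{1/3}-t^{3/2}}
=4\tilde \beta t^{3/2}\bclr{\bclr{1+\tfrac{3}{4}\xi t^{-3/2}}^{1/3}-1}
\to 4\tilde \beta\cdot\tfrac{\xi}{4}=\tilde \beta\xi,
\end{align*}
which is consistent with \refT{TF1} and the limit $\tilde\beta\xi$ of $\Xi^{(n)}$ from \refL{LXi}, while the right-hand side of \eqref{cvg3} vanishes as $t\to0$.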
\resetsteps \steppx{Convergence in $C(0,\infty)$ for a subsequence.} As in \cite{Janson2023}, \refL{LA2} implies that by considering a subsequence, we may assume that the processes $\hA\nn_t$ converge in distribution in every space $C[a,b]$ with $0<a<b$ to some continuous stochastic process $\cA(t)$ on $(0,\infty)$; in other words, as \ntoo, \begin{align} \label{aw1} \hA\nn_t \to \cA(t) \end{align} holds in distribution in the space $C(0,\infty)$. Furthermore, also as in \cite{Janson2023}, we may by the Skorohod coupling theorem \cite[Theorem~4.30]{Kallenberg}, assume that this convergence holds a.s.; in other words (as $\ntoo$ along the subsequence) a.s.\ \eqref{aw1} holds uniformly on every interval $[a,b]$. We use such a coupling until further notice. Note that this means that we consider all random variables as defined separately for each $n$ (with some unknown coupling); in particular, this means that we have potentially a different sequence $B_i\nn$ ($i\ge1$) for each $n$, and thus different limits $\gb\nn$ and $\tgb\nn$. (The variables $B_i\nn$ for a fixed $n$ are independent, but we do not know how they are coupled for different $n$.) Hence, we cannot directly use the a.s.\ convergence results in \refL{LB2}. Instead we note that \eqref{lb2b} (which holds for each $n$, with the distributions of the variables the same for all $n$) implies, for any coupling, \begin{align}\label{jup1} \sup_{k\ge\log n}\bigabs{k^{-1/2}\Phi\nn_k- \tgb\nn}\pto0. \end{align} Furthermore, trivially (since the distributions are the same) \begin{align}\label{jup2} \tgb\nn\dto\tgb \end{align} for some random variable $\tgb$ with, by \refL{LB2}, $\tgb>0$ a.s.; note also that \eqref{jup2} holds jointly with \eqref{lxi}, since this is true for the coupling used in the proof of \refL{LXi}, where $B_i$ does not depend on $n$ and thus trivially $\tgb\nn\to\tgb$ holds together with \eqref{lxi3}. We may select the subsequence above such that \eqref{aw1} (in distribution), \eqref{jup2} and \eqref{lxi} hold jointly (with some joint distribution of the limits). We may then assume that \eqref{aw1}, \eqref{jup1}, \eqref{jup2}, \eqref{lxi}, \eqref{lfM}, and \eqref{tf1} all hold a.s., by redoing the application of the Skorohod coupling theorem and including all these limits. It then follows from \eqref{eq:doobdecom}, \eqref{hA}, \eqref{lfM}, \eqref{aw1}, and \eqref{lxi} that a.s. \begin{align}\label{aw2} n^{-1/2}W_{tn^{1/3}}= n^{-1/2}M_{tn^{1/3}}-\hA\nn_t =\Xi\nn -\cA(t) + o(1) \to \cB(t):=\tgb\xi-\cA(t) \end{align} uniformly on every compact interval in $(0,\infty)$. In other words, \eqref{aw2} holds a.s.\ in $C(0,\infty)$. From \eqref{aw2} we obtain by \eqref{de:Wk}, \eqref{jup1}, and \eqref{jup2}, letting $k:=\floor{tn^{1/3}}$, a.s. \begin{align}\label{aw3} Y_{\floor{tn^{1/3}}}=\frac{W_k}{\GF_k}=\frac{W_k}{k^{1/2}}(\tgb\nn+o(1))\qw =\frac{n^{1/2}}{k^{1/2}}\bigpar{\cB(t)+o(1)}\bigpar{\tgb+o(1)}\qw \end{align} and thus \begin{align}\label{aw4} n^{-1/3} Y_{\floor{tn^{1/3}}} \to t^{-1/2}\tgb\qw\cB(t), \end{align} again uniformly on every compact interval in $(0,\infty)$, and thus in $C(0,\infty)$. \steppx{Identifying the limit.} Fix $0<s<t$ and let $k:=\floor{sn^{1/3}}$ and $\ell:=\floor{tn^{1/3}}$. It follows from \eqref{aw1} that $\hA\nn_s-\hA\nn_{n^{-1/3}k}\to0$ and $\hA\nn_t-\hA\nn_{n^{-1/3}\ell}\to0$ a.s. 
Hence, \eqref{hA} and \eqref{eq:incA} imply that \begin{align}\label{aw5} \hA\nn_s-\hA\nn_t &= \sum_{i=k+1}^\ell n^{-1/2}(A_i-A_{i-1})+o(1) \notag\\& =n^{-1/2}\sum_{i=k+1}^\ell2\GF_{i-1}\bigsqpar{(1-B_i)^{Y_i}-1+Y_iB_i}+o(1) .\end{align} For any $B\in(0,1)$, with $x:=-\log(1-B)$, and any $y\ge1$, \begin{align}\label{aw6} \frac{\ddx}{\ddx y}\bigpar{(1-B)^{y}-1+yB} &= \frac{\ddx}{\ddx y}\bigpar{e^{-xy}-1+y(1-e^{-x})} =-xe^{-xy}+1-e^{-x} \notag\\& \ge 1-(1+x)e^{-x}>0. \end{align} Hence $(1-B)^{y}-1+yB$ is an increasing function of $y\ge1$. Let $\eps>0$, and let \begin{align}\label{aw7} y_+:=\max\set{u^{-1/2}\tgb\qw\cB(u): u\in[s,t]}+\eps. \end{align} Then \eqref{aw4} implies that, for large enough $n$, $Y_{i}\le y_+n^{1/3}$ when $k\le i\le\ell$, and hence \eqref{aw5} implies, noting that $\GF_i$ is increasing in $i$, \begin{align}\label{aw8} \hA\nn_s-\hA\nn_t & \le2 n^{-1/2}\GF_{\ell}\sum_{i=k+1}^\ell \bigsqpar{(1-B_i)^{n^{1/3}y_+}-1+n^{1/3}y_+B_i}+o(1) .\end{align} Using the notation \eqref{lh1}--\eqref{lh2}, we thus have by \eqref{lh3} and \eqref{lb2b} \begin{align}\label{aw9} \hA\nn_s-\hA\nn_t & \le2 n^{-1/6}\GF_{\ell}\hH^{y_+}_{s,t}+o(1) \notag\\& = 2 t^{1/2}\tgb \int_s^t \Bigpar{\frac{1}{(1+y_+/(4u))^2}-1+\frac{y_+}{2u}}\dd u +\op(1) .\end{align} Similarly, defining \begin{align}\label{aw7-} y_-:=\min\set{u^{-1/2}\tgb\qw\cB(u): u\in[s,t]}-\eps \end{align} (adjusted to 0 if \eqref{aw7-} becomes negative), we obtain a lower bound \begin{align}\label{aw9-} \hA\nn_s-\hA\nn_t & \ge 2 s^{1/2}\tgb \int_s^t \Bigpar{\frac{1}{(1+y_-/4u)^2}-1+\frac{y_-}{2u}}\dd u +\op(1) .\end{align} We now subdivide $[s,t]$ into a large number $N$ of small subintervals of equal length and apply \eqref{aw9} and \eqref{aw9-} for each subinterval $[s_i,t_i]$. Since $u^{-1/2}\tgb\qw\cB(u)$ is continuous, we may choose $N$ such that with probability $>1-\eps$, for each subinterval, the corresponding values of $y_+$ and $y_-$ differ by at most $3\eps$, and also that $t_i/s_i<1+\eps$. We may then, for each subinterval, replace $y_+$ and $y_-$ in \eqref{aw9} and \eqref{aw9-} by $u^{-1/2}\tgb\qw\cB(u)$ with a small error, and by summing over all subintervals it finally follows by letting $\eps\to0$ (we omit the routine details) that \begin{align}\label{aw10} \hA\nn_s-\hA\nn_t & = 2 \tgb \int_s^t u^{1/2}\Bigpar{\bigpar{1+\tfrac{1}{4}u^{-3/2}\tgb\qw\cB(u)}^{-2} -1+\tfrac12u^{-3/2}\tgb\qw\cB(u)}\dd u +\op(1) .\end{align} Since we also assume \eqref{aw1}, the left-hand side converges a.s.\ to $\cA(s)-\cA(t)$, and thus we have, a.s., \begin{align}\label{aw11} \cA(s)-\cA(t) & = 2 \tgb \int_s^t u^{1/2}\Bigpar{\bigpar{1+\tfrac{1}{4}u^{-3/2}\tgb\qw\cB(u)}^{-2} -1+\tfrac12u^{-3/2}\tgb\qw\cB(u)}\dd u .\end{align} This holds a.s.\ simultaneously for every pair of rational $s$ and $t$, and thus by continuity a.s.\ for every real $s$ and $t$ with $0< s\le t$. Consequently, $\cA(t)$ is a.s.\ continuously differentiable on $(0,\infty)$, with \begin{align}\label{aw12} \cA'(t) & = -2 \tgb t^{1/2}\Bigpar{\bigpar{1+\tfrac{1}{4}t^{-3/2}\tgb\qw\cB(t)}^{-2} -1+\tfrac12t^{-3/2}\tgb\qw\cB(t)} .\end{align} By the definition of $\cB(t)$ in \eqref{aw2}, this implies that also $\cB(t)$ is continuously differentiable and \begin{align}\label{aw12B} \cB'(t) & =-\cA'(t) = 2 \tgb t^{1/2}\Bigpar{\bigpar{1+\tfrac{1}{4}t^{-3/2}\tgb\qw\cB(t)}^{-2} -1+\tfrac12t^{-3/2}\tgb\qw\cB(t)} .\end{align} We may simplify a little by defining \begin{align}\label{tcB} \tcB(t):=\tgb\qw\cB(t). 
\end{align} Then \eqref{aw12B} becomes \begin{align}\label{aw13} \tcB'(t) & = 2 t^{1/2}\Bigpar{\bigpar{1+\tfrac{1}{4}t^{-3/2}\tcB(t)}^{-2} -1+\tfrac12t^{-3/2}\tcB(t)} .\end{align} By definition, $\hA\nn_t$ is decreasing on $[0,\infty)$, and thus \eqref{aw1} shows that $\cA(t)$ is decreasing, and thus $\cB(t)$ is increasing by \eqref{aw2}. (This also follows from \eqref{aw13}, since the right-hand side is positive.) Furthermore, \eqref{aw1}, \eqref{hA}, \eqref{eq:EAk}, and Fatou's inequality yield, for every $t>0$, \begin{align} \E \cA(t) \le\liminf_\ntoo\E\hA\nn_t \le\liminf_\ntoo n^{-1/2}\frac{Cn}{(tn^{1/3})^{3/2}} =\frac{C}{t^{3/2}}. \end{align} Hence, by dominated convergence, \begin{align} \E\lim_{t\to\infty}\cA(t)=\lim_{t\to\infty}\E\cA(t)=0. \end{align} Consequently, a.s.\ $\cA(t)\to0$ as \ttoo, and thus by \eqref{tcB} and \eqref{aw2} \begin{align}\label{aw16} \tcB(t)\upto\xi, \qquad\text{as \ttoo}. \end{align} We show in \refApp{Sdiff} below, see in particular \eqref{eq:de3} and \eqref{eq:gc0}, that the differential equation \eqref{aw13} has a unique solution satisfying the boundary condition \eqref{aw16}, viz.\ \begin{align}\label{tBt} \tcB(t) = 4t^{3/2} \bclr{\bclr{1+\tfrac{3}{4} \xi t^{-3/2}}^{1/3}-1}, \qquad t>0. \end{align} (It can easily be verified by differentiation that this is a solution; \refApp{Sdiff} shows how the solution may be found, and that it is unique.) Hence, by \eqref{tcB}, \begin{align*} \cB(t) &= \tgb\tcB(t) = 4\tilde \beta t^{3/2} \bclr{\bclr{1+\tfrac{3}{4} \xi t^{-3/2}}^{1/3}-1}\\ &=4\tilde \beta\bclr{\bclr{t^{9/2}+\tfrac{3}{4} \xi t^{3}}^{1/3}-t^{3/2}}.\numberthis\label{Bt} \end{align*} \steppx{Convergence in $C[0,\infty)$.} Note that \eqref{Bt} shows that $\cB(t)$ extends to a continuous function on $[0,\infty)$ with $\cB(0)=0$; hence it follows from \eqref{aw2} that also $\cA(t)$ extends to a continuous function on $[0,\infty)$ with $\cA(0)=\tilde \beta \xi$. Using $A\nn_0 =M\nn_0 - W\nn_0\leq M\nn_0$, and the assumed a.s.\ versions of \eqref{lfM} and \eqref{lxi}, we have that, a.s., \begin{align} \limsup_{n\to\infty} n^{-1/2} A\nn_0 \leq \limsup_{n\to\infty} n^{-1/2} M\nn_0 =\tilde\beta \xi = \cA(0). \end{align} By the reverse increasing property of $A\nn_k$, we also have, for every $t>0$, \begin{align} \liminf_{n\to\infty} n^{-1/2} A\nn_0\geq \liminf_{n\to\infty} n^{-1/2} A\nn_{tn^{1/3}} =\cA(t). \end{align} Sending $t\downto 0$ and thus $\cA(t)\upto \cA(0)$ then yields \begin{align} \liminf_{n\to\infty} n^{-1/2} A\nn_0\geq \cA(0). \end{align} It follows that, $\wh A\nn_0 \to \cA(0)$ a.s., and thus \eqref{aw1} holds a.s.\ for every fixed $t\ge0$. Since $\hA\nn_t$ and $\cA(t)$ are decreasing in $t$, and $\cA(t)$ is continuous, this implies that \eqref{aw1} holds a.s.\ uniformly for every interval $[0,b]$ with $0<b<\infty$. It then follows that also \eqref{aw2} holds a.s.\ uniformly on every compact interval in $[0,\infty)$; in other words, \eqref{aw2} holds a.s.\ in $C[0,\infty)$, with $\cB(t)$ as in \eqref{Bt}. \steppx{Conclusion.} We have so far considered a subsequence, and a special coupling, and have then shown \eqref{aw2} in $C[0,\infty)$ and \eqref{aw4} in $C(0,\infty)$, which by \eqref{Bt} yields \eqref{cvg3} and \eqref{cvg4}. Since \eqref{cvg3} and \eqref{cvg4} use convergence in distribution, they hold in general along the subsequence, also without the special coupling used in the proof. 
Moreover, the limits in \eqref{cvg3} and \eqref{cvg4} do not depend on the subsequence, and the proof shows that every subsequence has a subsequence such that the limits in distribution \eqref{cvg3} and \eqref{cvg4} hold. As is well known, this implies that the full sequences converge in distribution, see e.g.\ \cite[Section 5.7]{Gut}.
\end{proof}
\begin{remark}
The argument above using possibly different $B_i\nn$ is rather technical. A more elegant, and perhaps more intuitive, version of the argument would be to assume that $B_i$ is the same for all $n$, and then condition on $(B_i)_{i=1}^\infty$ before applying the Skorohod coupling theorem. However, while intuitively clear, this seems technically more difficult to justify, and it seems to require that we prove that earlier convergence results hold also conditioned on $(B_i)_{i=1}^\infty$, a.s. We therefore prefer the somewhat clumsy argument above.
\end{remark}
\section{The number of descendants}\label{se:desc}
Let $X=X^{(n)}$ be the number of red vertices in the preferential attachment graph. Vertex $n$ is red by definition, and, for $k<n$, $J_k=\tone[Z_k\geq 1]$ is the indicator that vertex $k$ is red; thus
\begin{equation}
X=1+\sum^{n-1}_{k=1} J_k,
\end{equation}
noting that $J_k$ is $\cF_{k-1}$-measurable. We now set out to prove \eqref{nov7} (a special case of \refT{Tmain} with $m=2$ and $\rho=0$), which, for convenience, we restate below as a separate theorem.
\begin{theorem}\label{th:X}
As $n\to\infty$,
\begin{equation}\label{eX}
\frac{X^{(n)}}{n^{1/3}}\dto 2^{-\xfrac43}3^{-\xfrac13}\frac{\gG(\frac13)^2}{\gG(\frac23)} \xi^{2/3} ,
\end{equation}
where $\xi\in \mathrm{Gamma}(2,1)$.
\end{theorem}
\begin{remark}
Unlike in \eqref{cvg3} and \eqref{cvg4}, $\tgb$ does not appear in the distributional limit of $X^{(n)}/n^{1/3}$. This is because $\beta$ in \eqref{lb2a} is essentially determined by the $B_k$ corresponding to the first few vertices; and in the red subgraph $D_n$, the number of these vertices is insignificant, since most vertices have labels of the order $n^{1/3}$.
\end{remark}
As in \cite{Janson2023}, we use the Doob decomposition
\begin{equation}\label{eq:X}
X=1+L_0+P_0,
\end{equation}
where for $ 0\leq k\leq n-1$,
\begin{equation}\label{eq:Xm}
L_k := \sum^{n-1}_{i=k+1}\bigpar{J_i-\E(J_i\mid \cF_i)}
\end{equation}
is a reverse martingale, and by \eqref{eq:distZk},
\begin{equation}\label{eq:Xpred}
P_k := \sum^{n-1}_{i=k+1} \E(J_i\mid \cF_i) = \sum^{n-1}_{i=k+1} \IP(Z_i\geq 1\mid \cF_i) = \sum^{n-1}_{i=k+1} \bbclr{1-(1-B_i)^{Y_i}}
\end{equation}
is positive and reverse increasing. Furthermore, by Markov's inequality and \eqref{eq:distZk},
\begin{equation}\label{eq:Xpred1}
P_{k-1}-P_k =\IP(Z_k\geq 1\mid \cF_k) \leq B_kY_k, \quad 1\leq k\leq n-1.
\end{equation}
By \eqref{eq:Xpred} and Lemma \ref{le:Zk}, for $1\leq k\leq n-1$,
\begin{equation}\label{eq:EPk}
\E P_k = \sum^{n-1}_{i=k+1} \IP(Z_i\geq 1) \leq \sum^{n-1}_{i=k+1} \frac{Cn^{1/2}}{i^{3/2}} \leq \frac{Cn^{1/2}}{k^{1/2}}.
\end{equation}
Using also the crude bound $0\leq J_i\leq 1$, we have $P_0-P_k\leq k$. Choosing $k=\floor{n^{1/3}}$ and applying \eqref{eq:EPk} thus yield
\begin{equation}\label{eq:EP0}
\E P_0 \leq \E P_{\floor{n^{1/3}}}+ \floor{n^{1/3}} \leq Cn^{1/3}.
\end{equation}
Moreover, it follows from the reverse martingale property of $L_k$, $\var(J_i\mid \cF_i)\leq \E(J_i\mid \cF_i)$, and \eqref{eq:EP0} that
\begin{equation}\label{eq:L02}
\E L_0^2 = \sum^{n-1}_{i=1} \E[\var(J_i\mid \cF_i)] \le\sum^{n-1}_{i=1} \E J_i =\E P_0 \leq Cn^{1/3}.
\end{equation} This in turn implies that as $n\to\infty$, \begin{equation}\label{eq:L0} \frac{L_0}{n^{1/3}} \overset{p}{\longrightarrow} 0, \end{equation} and thus by \eqref{eq:X}, it is enough to show that as $n\to\infty$, $n^{-1/3}P_0$ converges in distribution to the \rhs{} of \eqref{eX}. To this end, we extend also $P_k$ to real arguments by linear interpolation and define \begin{equation}\label{de:hP} \wh P^{(n)}_t=n^{-1/3} P^{(n)}_{tn^{1/3}},\qquad t\ge 0. \end{equation} \begin{lemma}\label{le:Pt} Let $0<a<b<\infty$. Then the stochastic processes $\wh P^{(n)}_t$, $n\geq 1$, are tight in $C[a,b]$. \end{lemma} \begin{proof} The proof is very similar to that of Lemma \ref{LA2}. First, by \eqref{de:hP} and \eqref{eq:EP0}, \begin{align} \E \hP\nn_a = n^{-1/3}\E P\nn_{an^{1/3}} \le n^{-1/3}\E P\nn_0 \le C, \end{align} and thus the sequence $(\hP\nn_a)\nxoo$ is tight. Let $\xM_n$ and $\Phix_n$ be as in \eqref{hdj0}, $T_k$ and $\wh T\nn_k$ be as in \eqref{ha3} and \eqref{hV}. From \eqref{eq:Xpred1} and \eqref{de:Wk}, $W_k\leq M_k$, the increasing property of $\Phi_k$ and also \eqref{ha3}, for any integer $k\geq an^{1/3}$, \begin{align}\label{eq:hP} |P_k-P_{k-1}| \leq B_k Y_k = B_k\Phi_k^{-1} W_k \leq B_k\Phi_{\floor{an^{1/3}}}^{-1}\max_{k\geq 1} M_k = n^{1/3} \xM_n \Phix_n ( T_k- T_{k-1}). \end{align} Extending \eqref{eq:hP} to real arguments and using \eqref{de:hP}, we thus have \begin{align} |\wh P\nn_t-\wh P\nn_s|\leq \xM_n \Phix_n (\wh T\nn_t-\wh T\nn_s)\quad \text{if $a\leq s\leq t$.} \end{align} The result then follows from Lemmas \ref{LC} and \ref{LV2}, this time taking $X_n(t):=\wh P\nn_t$, $Y_n(t):=\wh T\nn_t-\wh T\nn_a$ and $Z_n:=\xM_n \Phix_n$; tightness of $\xM_n \Phix_n$ follows from that of $(\xM_n)^\infty_0$ and $ (\Phix_n)^\infty_0$. \end{proof} \begin{proof}[Proof of \refT{th:X}] In view of Lemma \ref{le:Pt}, by considering a subsequence, we may assume that the processes $\wh P\nn_t$ converge in distribution in every space $C[a,b]$ for $0<a<b<\infty$, and thus in $C(0,\infty)$, to some stochastic process $\mathcal{P}(t)$ on $(0,\infty)$. Again using the Skorohod coupling theorem, we can assume that all a.s.\ convergence results in the proof of Theorem \ref{th:cvg} hold and also \begin{equation}\label{eq:Pc} \wh P\nn_t \to \mathcal{P}(t) \end{equation} a.s.\ uniformly on every interval $[a,b]$. From \eqref{aw4}, \begin{equation} n^{-1/3} Y_{\floor{tn^{1/3}}} \to t^{-1/2}\tilde \beta\qw \mathcal{B}(t) = t^{-1/2}\tcB(t) \end{equation} a.s.\ uniformly on each compact interval in $(0,\infty)$. Let $s,t$ be real numbers with $0<s<t$, and let $k=\floor{sn^{1/3}}$ and $\ell=\floor{tn^{1/3}}$. By the same argument leading to \eqref{aw5}, now using \eqref{eq:Pc} and \eqref{eq:Xpred}, we have \begin{align} \wh P\nn_s - \wh P\nn_t =n^{-1/3} \sum^\ell_{i=k+1}\bclr{1-(1-B_i)^{Y_i}} + o(1) \end{align} Let $y_+$ and $y_-$ be as in \eqref{aw7} and \eqref{aw7-}. By \eqref{lh34} of Lemma \ref{LH}, we can therefore conclude that \begin{align}\label{eq:P+} \wh P\nn_s - \wh P\nn_t \leq \int^t_s \bbclr{1-\frac{1}{(1+y_+/(4u))^2}}\dd u +\op(1) \end{align} and \begin{align}\label{eq:P-} \wh P\nn_s - \wh P\nn_t \geq \int^t_s \bbclr{1-\frac{1}{(1+y_-/(4u))^2}}\dd u +\op(1). 
\end{align} The sandwich argument in the proof of \eqref{aw10}, the bounds in \eqref{eq:P+} and \eqref{eq:P-}, together with \eqref{tBt}, imply that for $0<s<t<\infty$, \begin{align} \wh P\nn_s - \wh P\nn_t &=\int_s^t\bbclr{1-\frac{1}{(1+u^{-1/2}\tcB(u)/(4u))^2}}\dd u +\op(1) \notag\\& =\int^t_s \bbclr{1-\frac{1}{\bclr{1+\tfrac{3}{4} \xi u^{-3/2}}^{2/3}}}\dd u +\op(1). \end{align} In light of \eqref{eq:Pc}, we thus have a.s., \begin{align}\label{eq:Pc1} \mathcal{P}(t)- \mathcal{P}(s) =\int^t_s \bbclr{1-\frac{1}{\bclr{1+\tfrac{3}{4} \xi u^{-3/2}}^{2/3}}}\dd u. \end{align} Furthermore, \begin{equation} \frac{\dd}{\dd s} \wh P\nn_s = -\E(J_k\mid \cF_k) = (1-B_k)^{Y_k} -1, \end{equation} where $k=\ceil{sn^{1/3}}$. Thus $ \big| \tfrac{\dd}{\dd s} \wh P\nn_s\big|\le 1$, which in turn implies that \begin{align}\label{eq:P1} \big| \wh P\nn_0- \wh P\nn_s\big| \leq s. \end{align} For $t\geq 1$, it follows from the reverse increasing property of $P_k$ and \eqref{eq:EPk} that \begin{equation}\label{eq:P2} \E \wh P\nn_t = n^{-1/3} \E P\nn_{tn^{1/3}} \leq n^{-1/3} \E P\nn_{\floor{tn^{1/3}}} \leq \frac{Cn^{1/6}}{\floor{tn^{1/3}}^{1/2}} \leq Ct^{-1/2}. \end{equation} Sending $s\to 0$ and $t\to\infty$, we deduce from \eqref{eq:P1} and \eqref{eq:P2} that $\wh P\nn_0-(\wh P\nn_s - \wh P\nn_t)\pto 0$, uniformly in $n$. Combining this and \eqref{eq:Pc1} with a standard argument gives \begin{equation}\label{eq:Pc2} \wh P\nn_0 \pto \int^\infty_0 \bbclr{1-\frac{1}{\bclr{1+\tfrac{3}{4} \xi u^{-3/2}}^{2/3}}}\dd u . \end{equation} The change of variable $x=3\xi u^{-3/2}/4$ yields, using \refL{Lnora}, \begin{align}\label{px} \int^\infty_0 \bbclr{1-\frac{1}{\bclr{1+\tfrac{3}{4} \xi u^{-3/2}}^{2/3}}}\dd u &= \frac{2}{3}\Bigpar{\frac{3\xi}{4}}^{2/3} \int^\infty_0 \bbclr{1-\frac{1}{(1+x)^{2/3}}}x^{-5/3}\dd x \notag\\& =\frac{2}{3}\Bigpar{\frac{3}{4}}^{2/3}\xi^{2/3}\frac{-\gG(-\frac23)\gG(\frac43)}{\gG(\frac23)} \notag\\& =2^{-\xfrac43}3^{-\xfrac13}\frac{\gG(\frac13)^2}{\gG(\frac23)}\xi^{2/3}. \end{align} The result thus follows from \eqref{eq:X}, \eqref{eq:L0}, \eqref{de:hP}, \eqref{eq:Pc2}, and \eqref{px}. \end{proof} \section{The general case}\label{Sgen} Here we consider the general case. The argument is similar to the case where $m=2$ and $\rho=0$, so we give only the main changes here. \subsection{The stochastic recursions and new estimates} Define $Y_k, J_k, Z_k, \cF_k$ as in Section \ref{se:basicanalysis}, but now with the boundary condition $Y_{n-1}=m$. We use the stochastic recursions in \refS{sse:sr} to obtain the subgraph $D_n$, where we now sample $m$ outgoing edges instead of two. The recursion in \eqref{eq:stocrecur} now becomes \begin{align}\label{eq:sr} Y_{k-1} = Y_k - Z_k + m\cdot J_k=Y_k - Z_k + m\cdot\mathbf{1}[Z_k\ge 1], \qquad 2 \le k \le n-1. \end{align} As \eqref{eq:distZk} still holds, we have \begin{align} \E(Y_{k-1}\mid \cF_{k}) &= Y_k - \E(Z_k\mid \cF_k) + m \IP(Z_k\ge 1\mid \cF_k)\notag \\ &= Y_k - B_k Y_k + m \big(1-(1-B_k)^{Y_k}\big), \end{align} and, again by Markov's inequality, \begin{align}\label{eq:gmy} \E(Y_{k-1}\mid \cF_{k}) \le Y_k - B_k Y_k + m B_k Y_k = (1+(m-1)B_k) Y_k. \end{align} Thus, with $\Phi_k$ as in \eqref{de:phi}, we can define \begin{align}\label{de:gmw} W_k := \Phi_k Y_k,\qquad 0\le k\le n-1. 
\end{align}
It follows from \eqref{eq:gmy} and \eqref{de:gmw} that $W_0,\dots,W_{n-1}$ is a reverse supermartingale with
\begin{align}
W_{n-1} = \Phi_{n-1} Y_{n-1} = m\Phi_{n-1}.
\end{align}
We again consider the Doob decomposition $W_k=M_k-A_k$, with
\begin{align}\label{yngve}
M_k := m\Phi_{n-1} + \sum^{n-1}_{j=k+1} (W_{j-1}-\E(W_{j-1}\mid \cF_j) ),
\end{align}
and $A_k$ as in \eqref{eq:incp}. Analogously to \eqref{eq:incA} and \eqref{eq:ubvarW},
\begin{align}\label{eq:AA}
A_{k-1}-A_k = m\Phi_{k-1} \big((1-B_k)^{Y_k}-1+B_k Y_k \big)
\end{align}
and
\begin{align}\label{eq:gvw}
\var(W_{k-1}\mid \cF_k ) \le C\Phi_{k-1}^2 B_k Y_k.
\end{align}
Using \eqref{eq:gvw} and arguing as in \eqref{eq:varM}, we obtain
\begin{align} \label{eq:vm}
\var_\bB (M_k) \leq C \sum^{n-1}_{j=k+1} \Phi_{j-1}^2 B_j \prod^{n-1}_{i=j+1} \bclr{1+(m-1)B_i}.
\end{align}
By the same proofs as for Lemmas \ref{le:doob}--\ref{le:Ak}, again using \eqref{eq:Bmean}, \eqref{eq:Ebetasq}, \eqref{eq:mprodbeta}, \eqref{eq:meanphik} and \eqref{eq:phibd}, we get
\begin{gather}
\E\max_{0\le k\le n-1} W_k^2 \le \E\max_{0\le k\le n-1} M_k^2 \le Cn^{2(m-1)\chi}; \label{eq:e1}\\
\IP(Z_k\ge 1) \le \frac{Cn^{(m-1)\chi}}{k^{1+(m-1)\chi}},\qquad \IP(Z_k\ge 2) \le \frac{Cn^{2(m-1)\chi}}{k^{2+2(m-1)\chi}};\label{eq:e2}\\
A_{k-1}-A_k \le C \Phi_k^{-1} W_k^2 B_k^2, \qquad \E A_{k} \le \frac{Cn^{2(m-1)\chi}}{k^{1+(m-1)\chi}}.\label{eq:e3}
\end{gather}
\subsection{The branching process}\label{se:Yule1}
As in Section \ref{se:Yule}, we can couple the early part of $D_n$ to a suitable time-changed branching process. Let $\cY$ be a branching process that starts with $m$ particles at time 0, in which each particle has an independent $\mathrm{Exp}(1)$ lifetime and then splits into $m$ new particles. Let $\wh \cY$ be the time-changed counterpart of $\cY$, again by the mapping $t\mapsto e^{-t}$; thus $\wh \cY_x=\cY_{-\log x}$ is the number of particles in $\wh \cY$ alive at time $x$. By standard properties of branching processes (see \cite[Section 8]{Janson2023} and e.g.\ \cite[Chapter III]{Athreya-Ney1972}),
\begin{align}\label{eq:NB}
\E \cY_t = me^{(m-1)t}
\end{align}
and, as $\ttoo$ and thus $x=e^{-t}\to 0$,
\begin{align}\label{eq:GG}
x^{m-1}\hcY_{x}=e^{-(m-1)t}\cY_t \asto \xi\in \mathrm{Gamma}\bbclr{\frac{m}{m-1},m-1}.
\end{align}
The statements of \refL{LU} and \refT{th:coupling} hold with the same $n_1$ and $\kappa_n$, but $\chi$ is now as in \eqref{de:chi}, and $\delta_n=3\log^m n/n^{1/3}$. The analogue of Lemma \ref{LU} can be proved using entirely the same argument, but several straightforward modifications are needed to obtain the analogue of \refT{th:coupling}. For instance, in Step \ref{TCO1}, we use \eqref{eq:e2} and $n_1=\floor{n/\log n}$ to show that
\begin{align}
\sum^{n-1}_{k=n_1} \IP(Z_k\ge 2) = O\bclr{n^{-1}(\log n)^{1+2(m-1)\chi}}=o(1).
\end{align}
In Step \ref{TCO2}, it follows from \eqref{eq:NB} that
\begin{align}
\E \wh \cY_x = \E \cY_{-\log x} = \frac{m}{x^{m-1}}, \quad 0<x\le1,
\end{align}
and so for $x=(n_1/n)^\chi\sim \log^{-\chi} n$, w.h.p.\ there are at most $\log^{m-1} n$ generations from the root $1$ to any point in $[(n_1/n)^\chi,1]$. Adjusting the remaining steps accordingly then yields the desired conclusion.
Redefine
\begin{align}\label{Xi+}
\Xi^{(n)} := \frac{W^{(n)}_{n_1}}{n^{(m-1)\chi}}.
\end{align}
It follows from \eqref{Xi+}, \eqref{de:Wk},
\eqref{de:phi}, \eqref{lb2b}, \eqref{eq:Yulecoupling}, and \eqref{eq:GG}, that the statement of \refL{LXi} holds, with $\tilde \beta$ as in \eqref{lb2b}, and $\xi\in \mathrm{Gamma}(m/(m-1),m-1)$, independent of $\tilde \beta$. \subsection{The flat middle part}\label{se:flat1} We first note that $\nu$ defined in \eqref{de:nu} also satisfies, by \eqref{de:chi} and a simple calculation, \begin{align}\label{de:nu2} \nu = \frac{(m-1)\chi}{1+(m-1)\chi} \end{align} and thus \begin{align}\label{de:nu3} (1-\nu)(m-1)\chi = \nu. \end{align} We now choose $n^{\nu}\ll n_2\le n_1:=\floor{n/\log n}$. Then, as in Section \ref{se:flat}, \eqref{eq:e3} and \eqref{de:nu2} yield \begin{equation}\label{eq:IImaxAk+} \E \max_{n_2\leq k\leq n-1} \bigg|\frac{A_k}{n^{(m-1)\chi}}\bigg| = \frac{\E A_{n_2}}{n^{(m-1)\chi}} \le C\frac{n^{(m-1)\chi}}{n_2^{1+(m-1)\chi}} = C\Bigpar{\frac{n^\nu}{n_2}}^{1+(m-1)\chi} = o(1). \end{equation} Hence \refL{le:maxAk} holds, with denominators $n^{(m-1)\chi}$. Similarly, the proofs in Section \ref{se:flat}, now using \eqref{eq:vm}, \eqref{eq:mprodbeta}, \eqref{eq:phibd}, and \eqref{Xi+}, yield the conclusion that as $n\to\infty$, \begin{align} \max_{0\le k\le n_1} \bigg|\frac{M_k}{n^{(m-1)\chi}}-\Xi^{(n)}\bigg| &\pto 0; \label{eq:cm} \\ \max_{n_2\le k\le n_1}\bigg|\frac{W_k}{n^{(m-1)\chi}}-\Xi^{(n)}\bigg| &\pto 0. \label{eq:cw} \end{align} \subsection{The final part} Let $V_k$ and $T_k$ be as in \eqref{ha3}. As before, we extend $W_k, M_k, A_k, V_k, T_k$ to real arguments by linear interpolation. Now let, for $t\ge0$, \begin{align} \wh A\nn_t &:= n^{-(m-1)\chi} A\nn_{tn^\nu}, \label{hA+} \\ \wh V\nn_t&:= n^\nu V\nn_{tn^\nu},\label{hV+} \\ \wh T\nn_t&:= T\nn_{tn^\nu}.\label{hT+} \end{align} Note that \refL{LV2} holds in this more general setting (with the exponent $1/3$ replaced by $\nu$ in the proof). Moreover, fix $a>0$ and define also \begin{align} \xM_n:=n^{-(m-1)\chi}\max_{k\ge1} M_k \qquad\text{and}\qquad \Phix_n:= n^{\nu (m-1)\chi}\Phi\qw_{\floor{a n^{\nu}}}. \end{align} In view of \eqref{eq:e3}, we have, for all real $\floor{an^\nu}\le k\le \ell$, \begin{align}\label{eq:A1} |A_\ell-A_k| \le C n^{(m-1)\chi(2-\nu)}\xM_n^2 \Phix_n (V_\ell-V_{k}). \end{align} Since $(2-\nu)(m-1)\chi - (m-1)\chi=\nu$ by \eqref{de:nu3}, it follows from \eqref{eq:A1} that if $a\le s\le t$, then \begin{align} |\wh A\nn_t-\wh A\nn_s| \le C\xM_n^2 \Phix_n (\wh V\nn_t-\wh V\nn_s). \end{align} By \eqref{hA+}, \eqref{eq:e3}, and \eqref{de:nu2}, $\E \wh A\nn_a\le C_a$, which implies that the sequence $(\wh A\nn_a)\nxoo$ is tight. From \eqref{eq:e1} and \eqref{eq:phibd}, $\E \xM^2_n\le C$ and $\E \Phix_n=O(1)$, implying that $( \xM_n)^\infty_{n=1}$ and $(\Phix_n)^\infty_{n=1}$ are tight also. Following the proof of \refL{LA2}, with the ingredients above, we then conclude that the stochastic processes $\wh A\nn_t$, $n\ge 1$, are tight in $C[a,b]$ for $0<a<b<\infty$. Therefore, arguing as in the beginning of the proof of \refT{th:cvg}, we may assume (by considering a subsequence and a special coupling) that \eqref{aw1} holds a.s.\ in $C(0,\infty)$ together with \eqref{jup2}, \eqref{lxi}, \eqref{eq:cm}, \eqref{eq:cw}, and, instead of \eqref{jup1}, \begin{align}\label{jup1+} \sup_{k\ge\log n}\bigabs{k^{-(m-1)\chi}\Phi\nn_k- \tgb\nn}\pto0. 
\end{align}
Then, analogously to \eqref{aw2} and \eqref{aw4},
\begin{align}\label{gaw1}
n^{-(m-1)\chi} W_{tn^\nu} \to \cB(t):=\tilde\beta \xi -\cA(t)
\end{align}
and, again using \eqref{de:nu3},
\begin{align}\label{gaw2}
n^{-\nu} Y_{\floor{tn^\nu}} \to t^{-(m-1)\chi}\tilde \beta^{-1} \cB(t)
\end{align}
a.s.\ uniformly on every compact interval in $(0,\infty)$. Let $k:=\floor{sn^\nu}$ and $\ell:=\floor{tn^\nu}$ for some $0<s<t$. Similarly to \eqref{aw5}, we have by \eqref{aw1} and \eqref{eq:AA}
\begin{align}\label{gaii1}
\wh A\nn_s-\wh A\nn_t = mn^{-(m-1)\chi}\sum^\ell_{i=k+1}\Phi_{i-1} [(1-B_i)^{Y_i}-1+Y_iB_i]+ o(1).
\end{align}
Following \eqref{aw6}--\eqref{aw10} with minor adjustments (in particular, choosing $\lambda_n=n^\nu$ in \refL{LH} and using again \eqref{de:nu3}) leads to, a.s.\ for every real $0<s\le t$,
\begin{multline}\label{gaii2}
\cA(s)-\cA(t) = m\tilde \beta \int^t_s u^{(m-1)\chi}\Bigl(\bclr{1+\tfrac{1}{\theta} u^{-(1+(m-1)\chi)}\tilde\beta^{-1}\cB(u)}^{-(m+\rho)} \\
\qquad -1+\chi u^{-(1+(m-1)\chi)}\tilde\beta^{-1}\cB(u)\Bigr)\dd u.
\end{multline}
For convenience, recalling \eqref{de:nu2}, let
\begin{align}\label{de:al}
\al := 1+(m-1)\chi =\frac{1}{1-\nu} .\end{align}
Then, by \eqref{gaii2}, a.s.\ $\cA(t)$ is differentiable on $(0,\infty)$ and
\begin{equation}\label{gaii3}
\cA'(t) = -m\tilde \beta t^{(m-1)\chi} \Bigl( \bclr{1+\tfrac{1}{\theta} t^{-\ga}\tilde\beta^{-1}\cB(t)}^{-(m+\rho)} -1+\chi t^{-\ga}\tilde\beta^{-1}\cB(t)\Bigr),
\end{equation}
and $\cB'(t)=-\cA'(t)$ by \eqref{gaw1}. Define again
\begin{align}\label{eq:bb}
\wt \cB(t) = \tilde\beta^{-1}\cB(t),
\end{align}
so that
\begin{align}\label{eq:de}
\wt \cB'(t) = mt^{(m-1)\chi} \bbclr{\bclr{1+\tfrac{1}{\theta}t^{-\ga}\wt \cB(t)}^{-(m+\rho)}-1+\chi t^{-\ga}\wt \cB(t)}.
\end{align}
Moreover, $\E \cA(t)\le Ct^{-\ga}$ for $t\ge1$, say, by \eqref{hA+}, \eqref{eq:e3}, and Fatou's lemma, and dominated convergence further implies that $\E\lim_{t\to\infty}\cA(t)=0$. Hence $\cA(t)\to 0$ a.s.\ as $t\to\infty$, and thus we have from \eqref{gaw1} and \eqref{eq:bb} that
\begin{align}\label{eq:aw16}
\tcB(t)\upto\xi \qquad\text{as \ttoo}.
\end{align}
As shown in detail in \refApp{Sdiff}, see \eqref{eq:de3} and \eqref{eq:gc0}, the unique solution to \eqref{eq:de} satisfying \eqref{eq:aw16} is given by
\begin{align}\label{eq:tB}
\wt \cB(t) = \theta t^\al \bbclr{\bclr{1+\tfrac{m+\rho+1}{\theta} \xi t^{-\al}}^{\frac{1}{m+\rho+1}}-1}.
\end{align}
Hence, by \eqref{eq:bb} and \eqref{eq:tB},
\begin{align}\label{eq:b}
\cB(t)=\tgb \wt \cB(t)=\theta \tgb t^\al \bbclr{\bclr{1+\tfrac{m+\rho+1}{\theta}\xi t^{-\al}}^{\frac{1}{m+\rho+1}}-1}.
\end{align}
Now, proceeding as in the remaining steps of the proof of \refT{th:cvg} yields the conclusion that as $\ntoo$,
\begin{equation}\label{cvg3+}
n^{-(m-1)\chi} W_{tn^{\nu}} \dto \theta \tgb t^\al \bbclr{\bclr{1+\tfrac{m+\rho+1}{\theta}\xi t^{-\al}}^{\frac{1}{m+\rho+1}}-1} \qquad\text{in $C[0,\infty)$},
\end{equation}
and
\begin{equation}\label{cvg4+}
n^{-\nu}Y_{tn^\nu} \dto\theta t\bbclr{\bclr{1+\tfrac{m+\rho+1}{\theta}\xi t^{-\al}}^{\frac{1}{m+\rho+1}}-1} \qquad\text{in $C(0,\infty)$}.
\end{equation}
\subsection{The number of descendants}
As in Section \ref{se:desc}, let $X=X^{(n)}$ be the number of red vertices, and define $L_k$ and $P_k$ as in \eqref{eq:Xm} and \eqref{eq:Xpred}. As in \eqref{eq:EPk}, it follows from \eqref{eq:e2} that
\begin{align}
\E P_k \le C \bbclr{\frac{n}{k}}^{(m-1)\chi},\qquad 1\le k\le n-1.
\end{align} Furthermore, arguing as in \eqref{eq:EP0} with the cutoff $n^\nu$ yields, recalling \eqref{de:nu3}, \begin{align}\label{EP0} \E P_0 \le C n^\nu. \end{align} The argument for \eqref{eq:L02} now yields \begin{align}\label{EL0} \E L_0^2 \le C n^\nu, \end{align} which implies that \begin{align}\label{eq:cL} n^{-\nu}L_0 \pto 0\qquad \text{as $\ntoo$}. \end{align} As before, we extend $P_k$ to real arguments by linear interpolation, but now let \begin{align}\label{nov2} \wh P\nn_t = n^{-\nu} P\nn_{tn^\nu}, \qquad t\ge 0. \end{align} The same proof as for \refL{le:Pt} then shows that for $0<a<b<\infty$, the sequences $\wh P\nn_t $, $n\ge 1$ are tight in $C[a,b]$. Proceeding as in the proof of \refT{th:X}, where we use the Skorohod coupling theorem again, we get \begin{align}\label{eq:cP} \wh P\nn_0 &\pto \int^\infty_0 \bbclr{1-\bclr{1+\gth\qw u^{-\ga}\tcB(u)}^{-(m+\rho)} }\dd u. \notag\\&\qquad = \int^\infty_0 \bbclr{1-\bclr{1+\tfrac{m+\rho+1}{\theta} \xi u^{-\al}}^{-\frac{m+\rho}{m+\rho+1} } }\dd u. \end{align} By the change of variable $v=\theta^{-1}(m+\rho+1)\xi u^{-\al}$, \begin{align}\label{nov5} &\int^\infty_0 \bbclr{1-\bclr{1+\tfrac{m+\rho+1}{\theta} \xi u^{-\al}}^{-\frac{m+\rho}{m+\rho+1} } }\dd u \notag\\&\qquad =\frac{1}{\al} \bbclr{\frac{m+\rho+1}{\theta}\xi}^{1/\al} \int^\infty_0 \bigg(1-{(1+v)^{-\tfrac{m+\rho}{m+\rho+1}}}\bigg) v^{-(1+1/\al)} \dd v. \end{align} We take $a=-1/\al$ and $b={(m+\rho)}/({m+\rho+1})$ in \refL{Lnora}, and note that $1/\ga=1-\nu$ by \eqref{de:al}, and thus by \eqref{de:nu}, \begin{align} b-a= \frac{m+\rho}{m+\rho+1}+\frac{1}{\ga} =\frac{m+\rho}{m+\rho+1}-\nu+1 =\frac{m+\rho}{m(m+\rho+1)}+1 .\end{align} Hence, \eqref{nov5} and \refL{Lnora} yield \begin{align} &\int^\infty_0 \bbclr{1-\bclr{1+\tfrac{m+\rho+1}{\theta} \xi u^{-\al}}^{-\frac{m+\rho}{m+\rho+1}} }\dd u \notag\\ &\qquad = -\frac{1}{\al} \cdot \frac{\G(-\tfrac{1}{\al})\G\big(\tfrac{m+\rho}{m(m+\rho+1)}+1\big)}{\G\big(\frac{m+\rho}{m+\rho+1}\big)} \bbclr{\frac{m+\rho+1}{\theta}\xi}^{1/\al} \notag\\ &\qquad =\frac{\G(1-\frac{1}{\al})\G\big(\frac{m+\rho}{m(m+\rho+1)}+1\big)}{\G\big(\tfrac{m+\rho}{m+\rho+1}\big)} \bbclr{\frac{m+\rho+1}{\theta}\xi }^{1/\al} . \label{eq:ir} \end{align} Finally, \eqref{eq:X}, \eqref{eq:cL}, \eqref{nov2}, \eqref{eq:cP}, and \eqref{eq:ir} together imply that as $\ntoo$, \begin{align}\label{nov3} n^{-\nu} X\dto \frac{\G(1-\tfrac{1}{\al})\G\big(\frac{m+\rho}{m(m+\rho+1)}+1\big)}{\G\big(\tfrac{m+\rho}{m+\rho+1}\big)} \bbclr{\frac{m+\rho+1}{\theta}\xi }^{1/\al}. \end{align} We here note that, by \eqref{de:al} and \eqref{de:nu}, \begin{align}\label{nova} 1-\frac{1}{\ga}&=\nu= \frac{(m-1)(m+\rho)}{m(m+\rho+1)} .\end{align} We write also $\xi=(m-1)\xi_1$, with $\xi_1\in\GAMMA(m/(m-1),1)$, and recall that $\gth=2m+\rho$. Hence, \eqref{nov3} can be written as \eqref{tmain}. \qed \section{Moment convergence}\label{Smom} In this section, we prove \refT{Tmom} on moment convergence; we use the standard method of proving uniform moment estimates and thus uniform integrability. This time we choose to treat general $m$ and $\rho$ from the beginning. We consider first the reverse martingale $M_k$, recalling that $M_k\ge W_k\ge 0$. 
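\begin{remark}
As a consistency check on the general formulas, consider again the case $m=2$, $\rho=0$, where $\theta=4$, $\chi=\frac12$, $\nu=\frac13$, $\al=\frac32$, $m+\rho+1=3$, and $\xi\in\mathrm{Gamma}(2,1)$. Then \eqref{cvg3+} and \eqref{cvg4+} reduce to \eqref{cvg3} and \eqref{cvg4}, and \eqref{nov3} reduces to \eqref{eX}: using $\gG(\frac43)=\frac13\gG(\frac13)$ and $(\frac34)^{2/3}=3^{2/3}2^{-4/3}$,
\begin{align*}
\frac{\gG\bigpar{1-\tfrac{1}{\al}}\gG\bigpar{\frac{m+\rho}{m(m+\rho+1)}+1}}{\gG\bigpar{\frac{m+\rho}{m+\rho+1}}} \bbclr{\frac{m+\rho+1}{\theta}\xi }^{1/\al}
=\frac{\gG(\frac13)\gG(\frac43)}{\gG(\frac23)}\Bigpar{\frac{3\xi}{4}}^{2/3}
=2^{-\xfrac43}3^{-\xfrac13}\frac{\gG(\frac13)^2}{\gG(\frac23)}\xi^{2/3}.
\end{align*}
\end{remark}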
We denote the maximal function by \begin{align}\label{mx} \Mx:=\max_{n-1\ge k\ge 0} M_k, \end{align} and define the martingale differences, for $n-1\ge k\ge 1$, recalling \eqref{yngve}, \eqref{de:gmw}, \eqref{eq:sr}, and that $Y_k$ is $\cF_k$-measurable, \begin{align}\label{gdm} \gD M_k&:= M_{k-1}-M_k =W_{k-1}-\E\bigpar{W_{k-1}\mid\cF_k} \notag\\&\phantom: =\GF_{k-1} \bigpar{Y_{k-1}-\E\bigpar{Y_{k-1}\mid\cF_k}} \notag\\&\phantom: =-\GF_{k-1} \bigpar{Z_k-\E\bigpar{Z_k\mid\cF_k}} +m\GF_{k-1} \bigpar{J_k-\E\bigpar{J_k\mid\cF_k}} .\end{align} We define also the conditional square function \begin{align}\label{ssm} s(M):= \lrpar{\sum_{i=1}^{n-1}\E\bigpar{\xpar{\gD M_i}^2\mid\cF_i}}\qq. \end{align} Let for convenience \begin{align}\label{gk} \gk:=(m-1)\chi. \end{align} (Thus, in the case $m=2$, $\rho=0$, we have $\gk=\chi=\frac12$.) We use also the standard notation, for any random variable $\cX$, \begin{align} \norm{\cX}_p:=\bigpar{\E[|\cX|^p]}^{1/p}. \end{align} Note that for any $p>0$, \eqref{de:betas}, \eqref{betamom}, and \eqref{gg} yield, cf.\ \eqref{eq:Bmean}--\eqref{eq:Ebetasq}, \begin{align}\label{brage} \E[B_k^p]\le C_p k^{-p}. \end{align} \begin{lemma}\label{LpM} For every $p>0$, \begin{align}\label{lpm} \E[\Mx^p] \le C_p n^{p\gk}. \end{align} \end{lemma} \begin{proof} We assume in the proof for simplicity that $p\ge2$ is an integer; the general case follows by Lyapunov's inequality. We use as in \cite{Janson2023} one of Burkholder's martingale inequalities \cite[Theorem 21.1]{Burkholder1973}, \cite[Corollary 10.9.1]{Gut} on the reverse martingale $M_k-M_{n-1}=M_k-m\GF_{n-1}$, which yields \begin{align}\label{burk} \E [\Mxx^p]& \le C_p\E[\GF_{n-1}^p] + C_p\E \bigsqpar{\bigpar{\max_k|M_k-M_{n-1}|}^p} \notag\\& \le C_p \E[\GF_{n-1}^p] +C_p \E [s(M)^p] + C_p \E \bigsqpar{\max_k|\gD M_k|^p} \notag\\& \le C_p\E[\GF_{n-1}^p]+C_p \E \bigsqpar{s(M)^p} + C_p \sum_{k=1}^{n-1}\E \bigsqpar{|\gD M_k|^p} .\end{align} We estimate the three terms on the \rhs{} separately. First, we have by the independence of $B_i$, \eqref{freja}, and \eqref{eq:Bmean}, similarly to \eqref{eq:trunphi2bd}--\eqref{eq:phibd}, \begin{align}\label{frej} \E [\GF_k^p]& =\prod_{i=1}^k \E{(1+(m-1)B_i)^p} =\prod_{i=1}^k \Bigpar{1+p(m-1)\frac{\chi}{i}+O\bigpar{i^{-2}}} \notag\\& =\exp\Bigpar{\sum_{i=1}^k \Bigpar{\frac{p\gk}{i}+O\bigpar{i^{-2}}}} =\exp\Bigpar{p\gk \log k+O\bigpar{1}} \notag\\& \le C_pk^{p\gk} .\end{align} Next, by \eqref{gdm}, \eqref{eq:gvw}, and \eqref{de:gmw}, \begin{align}\label{sw1} \E\bigsqpar{\xpar{\gD M_k}^2\mid\cF_k} &=\Var\bigsqpar{W_{k-1}\mid\cF_k} \le C\GF_{k-1}^2 B_kY_k \le C \GF_{k-1}B_kW_k \notag\\& \le C \GF_{k-1}B_kM_k \le C \GF_{k-1}B_k\Mx. \end{align} Note that $\GF_k-\GF_{k-1}=(1+(m-1)B_k-1)\GF_{k-1}=(m-1)B_k\GF_{k-1}$. Hence, \eqref{ssm} and \eqref{sw1} yield \begin{align}\label{sw2} s(M)^2\le C\sum_{k=1}^{n-1}(\GF_k-\GF_{k-1})\Mx\le C\GF_{n-1}\Mx. \end{align} H\"older's inequality (or Cauchy--Schwarz's) and \eqref{frej} thus yield \begin{align}\label{sw3} \E[s(M)^p] \le C_p\E\bigsqpar{\GF_{n-1}^{p/2}\Mxx^{p/2}} \le C_p\Bigpar{\E\sqpar{\GF_{n-1}^{p}}\E[\Mxx^{p}]}\qq \le C_p n^{p\gk/2}\norm{\Mx}_p^{p/2}. \end{align} For the final term in \eqref{burk}, we use the decomposition of $\gD M_k$ in \eqref{gdm} and treat the two terms on the last line there separately. 
We use as in \cite[(7.9)]{Janson2023} the well-known general estimate for a binomial random variable $\zeta\in\Bin(N,q)$: \begin{align}\label{binp} \E |\zeta-\E\zeta|^p \le C_p (Nq)^{p/2} + C_pNq .\end{align} Conditioned on $\cF_k$, we have $Z_k\in\Bin(Y_k,B_k)$ by \eqref{eq:distZk}, and thus \eqref{binp} yields \begin{align}\label{sw4} \E \bigpar{\bigabs{Z_k-\E(Z_k\mid\cF_k)}^p\mid\cF_k}& \le C_p (Y_kB_k)^{p/2} + C_p Y_kB_k. \end{align} Similarly, since $J_k=\tone[Z_k\geq 1]$ has a conditional Bernoulli distribution, \begin{align}\label{sw7} \E \bigpar{\bigabs{J_k-\E(J_k\mid\cF_k)}^p\mid\cF_k}& \le C_p \E \bigpar{\abs{J_k}^p\mid\cF_k} = C_p \E \bigpar{J_k\mid\cF_k} \notag\\& \le C_p \E \bigpar{Z_k\mid\cF_k} = C_p Y_kB_k. \end{align} Hence, \eqref{gdm}, \eqref{sw4}, and \eqref{sw7} yield, \begin{align}\label{qw4} \E \bigsqpar{|\gD M_k|^p\mid\cF_k} &\le C_p\GF_{k-1}^p\bigsqpar{ \E \bigpar{\bigabs{Z_k-\E(Z_k\mid\cF_k)}^p\mid\cF_k} + \E \bigpar{\bigabs{J_k-\E(J_k\mid\cF_k)}^p\mid\cF_k} } \notag\\& \le C_p \GF_{k-1}^pY_k^{p/2}B_k^{p/2} + C_p\GF_{k-1}^p Y_kB_k \notag\\& \le C_p \GF_{k-1}^{p/2}W_k^{p/2}B_k^{p/2} + C_p\GF_{k-1}^{p-1} W_kB_k \notag\\& \le C_p \GF_{k-1}^{p/2}\Mx^{p/2}B_k^{p/2} + C_p\GF_{k-1}^{p-1} \Mx B_k. \end{align} Hence, using H\"older's inequality, the independence of $\GF_{k-1}$ and $B_k$, \eqref{frej}, and \eqref{brage}, \begin{align}\label{sw5} \E \bigsqpar{|\gD M_k|^p}& \le C_p \E\bigsqpar{\GF_{k-1}^{p/2}B_k^{p/2}\Mxx^{p/2}} +C_p \E\bigsqpar{\GF_{k-1}^{p-1}B_k\Mxx} \notag\\& \le C_p \Bigpar{\E\bigsqpar{\GF_{k-1}^{p}B_k^{p}}\E \bigsqpar{\Mxx^{p}}}\qq +C_p \Bigpar{\E\bigsqpar{\GF_{k-1}^{2p-2}B_k^{2}}\E \bigsqpar{\Mxx^{2}}}\qq \notag\\& \le C_p k^{p\gk/2-p/2}\norm{\Mx}_p^{p/2} +C_p k^{(p-1)\gk-1}\norm{\Mx}_2 \notag\\& \le C_p k^{p\gk/2-1}\norm{\Mx}_p^{p/2} +C_p k^{(p-1)\gk-1}\norm{\Mx}_p .\end{align} Consequently, \begin{align}\label{sw6} \sum_{k=1}^{n-1} \E \bigsqpar{|\gD M_k|^p}& \le C_p n^{p\gk/2}\norm{\Mx}_p^{p/2} +C_p n^{(p-1)\gk}\norm{\Mx}_p .\end{align} Finally, \eqref{burk} yields, collecting the estimates \eqref{frej}, \eqref{sw3}, and \eqref{sw6}, \begin{align}\label{sw9} \E[\Mx^p]& \le C_p n^{p\gk}+C_p n^{p\gk/2}\norm{\Mx}_p^{p/2}+C_p n^{(p-1)\gk}\norm{\Mx}_p .\end{align} It follows trivially from the definitions that for every $n$, $\Mx$ is deterministically bounded by some constant (depending on $n$), and thus $\norm{\Mx}_p<\infty$. Let $x:=\norm{\Mx}_p/n^{\gk}\in(0,\infty)$; then \eqref{sw9} can be written as \begin{align}\label{sw99} x^p \le C_p + C_p x^{p/2} + C_p x. \end{align} Since $p>1$, it follows that $x\le C_p$, which is the same as \eqref{lpm}. Alternatively, we can proceed as in \cite{Janson2023} to consider only $p=2^j$, with $j$ being positive integers. The conclusion \eqref{lpm} then follows from an induction over $j$, \eqref{sw9} and the base case $(p=2)$ proved in \eqref{eq:e1}. \end{proof} We use the decomposition $X=1+L_0+P_0$ in \eqref{eq:X}, and estimate the terms $L_0$ and $P_0$ separately. \begin{lemma}\label{LpP} For every $p>0$, \begin{align}\label{lpp} \E[P_0^p] \le C_p n^{p\nu}. \end{align} \end{lemma} \begin{proof} We may by Lyapunov's inequality assume that $p\ge1$ is an integer. By \eqref{eq:Xpred} and \eqref{eq:distZk}, \begin{align}\label{dk1} P_k = \sum^{n-1}_{i=k+1} \E(J_i\mid \cF_i) \le \sum^{n-1}_{i=k+1}Y_i B_i = \sum^{n-1}_{i=k+1}\GF_i\qw W_i B_i \le \Mx \sum^{n-1}_{i=k+1}\GF_{i-1}\qw B_i. 
\end{align} Hence, by H\"older's and Minkowski's inequalities, \begin{align}\label{dk2} \norm{P_k}_p \le \norm{\Mx}_{2p} \lrnorm{\sum^{n-1}_{i=k+1}\GF_{i-1}\qw B_i}_{2p} \le \norm{\Mx}_{2p} \sum^{n-1}_{i=k+1}\bignorm{\GF_{i-1}\qw B_i}_{2p}. \end{align} We have $(1+x)^{-p}\le 1-px+C_px^2$ for all $x\ge0$, and thus by \eqref{eq:Bmean}--\eqref{eq:Ebetasq} and \eqref{gk}, generalizing \eqref{dx0}, \begin{align}\label{dx1} \E\bigsqpar{(1+(m-1)B_i)^{-p}} & \leq 1-p(m-1)\E [B_i]+C_p\E [B_i^2 ] \notag\\& = 1- p\gk i^{-1} + O(i^{-2}). \end{align} Hence, by the same argument as for \eqref{frej}, for any integers $p\ge1$ and $k\ge1$, \begin{align}\label{dx2} \E [\GF_k^{-p}]& =\prod_{i=1}^k \E\bigsqpar{(1+(m-1)B_i)^{-p}} =\prod_{i=1}^k \Bigpar{1-\frac{p\gk}{i}+O\bigpar{i^{-2}}} \notag\\& =\exp\Bigpar{-\sum_{i=1}^k \Bigpar{\frac{p\gk}{i}+O\bigpar{i^{-2}}}} =\exp\Bigpar{-p\gk \log k+O\bigpar{1}} \notag\\& \le C_pk^{-p\gk} .\end{align} In other words, $\norm{\GF_k\qw}_p\le C_p k^{-\gk}$. Furthermore, $\norm{B_k}_p\le C_p k\qw$ by \eqref{brage}. Since $\GF_{i-1}$ and $B_i$ are independent, it follows that, for $i\ge2$, \begin{align}\label{dx3} \bignorm{\GF_{i-1}\qw B_i}_{p}= \norm{\GF_{i-1}\qw}_p\norm{B_i}_{p} \le C_p i^{-\gk-1}. \end{align} We may here replace $p$ by $2p$, and it follows from \eqref{dk2} and \eqref{lpm} that, for $k\ge1$, \begin{align}\label{dx4} \norm{P_k}_p \le \norm{\Mx}_{2p}\sum^{n-1}_{i=k+1}\bignorm{\GF_{i-1}\qw B_i}_{2p} \le C_pn^{\gk}\sum^{n-1}_{i=k+1} i^{-\gk-1} \le C_pn^{\gk} k^{-\gk} .\end{align} Furthermore, as in \refS{se:desc}, we have $P_0-P_k \le k$ for any $k\ge0$, and thus Minkowski's inequality and \eqref{dx4} yield, choosing $k:=\floor{n^\nu}$ and noting that \eqref{de:nu2} and \eqref{gk} imply $\gk(1-\nu)=\nu$, \begin{align} \norm{P_0}_p \le \norm{P_k}_p+k \le C_p n^{\gk-\gk\nu} +n^{\nu} \le C_p n^{\nu} ,\end{align} which completes the proof. \end{proof} \begin{lemma}\label{LpL} For every $p>0$, \begin{align}\label{lpl} \E[|L_0|^p] \le C_p n^{p\nu/2}. \end{align} \end{lemma} \begin{proof} Recall that $(L_k)_{k=0}^{n-1}$ is a reverse martingale. By \eqref{eq:Xm}, its conditional square function is given by \begin{align} s(L)^2:= \sum^{n-1}_{i=1}\E\bigsqpar{\bigpar{J_i-\E(J_i\mid \cF_i)}^2\mid\cF_i} =\sum^{n-1}_{i=1}\Var\sqpar{J_i\mid\cF_i} \le \sum^{n-1}_{i=1}\E\sqpar{J_i\mid\cF_i} =P_0, \end{align} where the inequality follows because $J_i$ has a conditional Bernoulli distribution. Furthermore, again using \eqref{eq:Xm}, the martingale differences $\gD L_k:=L_{k-1}-L_k$ are bounded by \begin{align} |\gD L_k| = \bigabs{J_k-\E(J_k\mid \cF_k)} \le 1. \end{align} Hence, Burkholder's inequality yields, similarly to \eqref{burk}, using also \refL{LpP}, \begin{align} \E [L_0^p] \le C_p \E [s(L)^p] + C_p \E \bigsqpar{\max_k|\gD L_k|^p} \le C_p \E [P_0^{p/2}] + C_p \le C_p n^{p\nu/2}, \end{align} which completes the proof. \end{proof} \begin{proof}[Proof of \refT{Tmom}] It follows from \eqref{eq:X} and \refLs{LpP} and \ref{LpL} that, for any $p>0$, \begin{align} \E [X^p] \le C_p + C_p \E[L_0^p]+C_p\E[P_0^p] \le C_p n^{p\nu}. \end{align} In other words, $\E[(X\nn/n^\nu)^p] \le C_p$ for every $p>0$. By a standard argument, see{} e.g.\ \cite[Theorems 5.4.2 and 5.5.9]{Gut}, this implies uniform integrability of the sequence $|X\nn/n^\nu|^p$ for every $p>0$ and thus the convergence in distribution in \eqref{tmain} implies convergence of all moments. 
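For the final step we recall the standard moment formula for gamma variables: if $\zeta\in\GAMMA(a,1)$ and $s>0$, then
\begin{align*}
\E \zeta^{s}=\frac{\gG(a+s)}{\gG(a)}.
\end{align*}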
Since $\xi_1\in\GAMMA\bigpar{\frac{m}{m-1},1}$, \begin{align} \E \bigsqpar{\xi_1^{p(1-\nu)}} =\frac{\gG(p(1-\nu)+\frac{m}{m-1})}{\gG(\frac{m}{m-1})} ,\end{align} and thus the explicit formula \eqref{tmom} follows. \end{proof} \section{The model with self-loops} \label{Sloop} In this section, we consider a variation of the preferential attachment graph in \refD{de:pa}, where self-loops are possible. We use the version in \cite[Section 8.2]{vdh2017} (see also \cite{Bollobas2004,Bollobas2001}) and start with a single vertex 1 with $m$ self-loops. For $n\ge2$, each outgoing edge of vertex $n$ is now attached to a vertex $j\in[n]$, again with probability proportional to $\rho$ + the current degree of vertex~$j$, where we define the current degree of vertex $n$ when we add the $(k+1)$th edge from it to be $k$ + $1$ + the number of loops attached to $n$ so far. (We thus count all outgoing edges up to the $(k+1)$th; a loop contributes 2 to the degree.) Hence, recalling that $d_j(n)$ is the degree of vertex $j$ in $G_n$, when adding vertex $n\ge 2$ to $G_{n-1}$, the $(k+1)$-th outgoing edge of vertex $n$ attaches to vertex $j\in [n]$ with probability \begin{align}\label{eq:pa3} \begin{cases} \frac{d_j(n-1)+\sum^k_{\ell=1} \mathbf{1}[n\overset{\ell}{\rightarrow}j] +\rho} {2(n-1)m+2k+1+n\rho}, &j<n, \\ \frac{k+1+\sum^k_{\ell=1} \mathbf{1}[n\overset{\ell}{\rightarrow}j] +\rho} {2(n-1)m+2k+1+n\rho}, &j=n. \end{cases} \end{align} \begin{remark} The details of the model can be modified without affecting the following asymptotic result, with only straightforward changes to its proof. For example, we may again start with $m$ edges between vertices 1 and 2, and thus no loops there, or we may include all $m$ outgoing edges in the weight of vertex $n$ when we add edges from it. We leave the details to the reader. \end{remark} \begin{theorem}\label{Tloop} Let $X\nn$ be the number of descendants of vertex $n$ in the model above. Then, the statements of \refTs{Tmain} and \ref{Tmom} hold. \end{theorem} The proof of \refT{Tloop} is largely similar to those of \refTs{Tmain} and \ref{Tmom} so we only sketch the main differences here. First, let $N_i$ be the number of self-loops at vertex $i$. When we add the $m$ edges from a new vertex $i$, the weight of vertex $i$ and the total weight of the first $i-1$ vertices evolve like a P\'olya urn $\cU'_i$ with initially $1+\rho$ red and $(2i-2)m+(i-1)\rho$ black balls, where we add 2 new balls at each draw: 2 red balls when a red ball is drawn, and one ball of each colour when a black ball is drawn; $N_i$ is the number of times a red ball is drawn. Note that this urn does not depend on what has happened when the edges from earlier vertices were added, and in particular not on $N_1,\dots,N_{i-1}$. Consequently, the random numbers $(N_i)^\infty_{i=1}$ are independent. Furthermore, if we condition on the entire sequence $(N_i)^\infty_{i=1}$, then the non-loop edges are added from each new vertex $n\ge2$ to $[n-1]$ by the same random procedure as in \refD{de:pa}, except that now we add $m-N_n$ new edges from $n$, and that the degrees of the vertices include also any existing loops. This means that after we have added vertex $j\ge2$, the weight of vertex $j$ and the total weight of the first $j-1$ vertices evolve like a standard P\'olya urn $\cU''_j$ with initially $m+N_j+\rho$ red and $(2j-1)m-N_j+(j-1)\rho$ black balls, after each draw adding one ball of the same colour as the drawn ball. 
As a consequence, the proportion of red balls converges a.s.\ to a random number $B_j$ with the (conditional) beta distribution
\begin{align}\label{nB}
B_j\mid(N_i)_{i=1}^\infty \in\mathrm{Beta}(m+N_j+\rho, (2j-1)m-N_j+(j-1)\rho), \qquad j\ge2.
\end{align}
Moreover, conditioned on $(N_i)_{i=1}^\infty$, we can again construct the preferential attachment graph by the P\'olya urn representation in \refD{de:PUR}--\refR{RBerger}, using (conditionally) independent $B_j$ with the distributions \eqref{nB}. (As before, we also let $B_1:=1$.) In particular, note that since $(N_i)_{i=2}^\infty$ are independent, the random variables $(B_i)_{i=2}^\infty$ are independent, and so are the pairs of random variables $(N_i,B_i)$, $i\ge2$. The distribution of each $B_j$ is thus a mixed beta distribution, but we do not need exact expressions.
We will show that all estimates in \refS{Sprel} still hold (possibly with different constants $C$). Note first that in the urn $\cU'_i$ used to determine $N_i$, we make $m$ draws and thus the number of red balls is at most $m+\rho=O(1)$; hence the probability of drawing a red ball is $O(1/i)$ for each draw, and thus
\begin{align}\label{ENi}
\P(N_i>0) \le \E N_i = O(1/i).
\end{align}
Recall $\theta$ and $\chi$ in \eqref{de:theta} and \eqref{de:chi}. Using $0\le N_i\le m$, \eqref{nB}, \eqref{ENi}, and \eqref{betamom}, it is easy to show that
\begin{align}
\E B_i&= \frac{m+\rho+\E N_i}{\theta i} = \frac{\chi}{i} + O(i^{-2})\label{bn}, \\
\E B_i^r &\le \prod^{r-1}_{j=0}\frac{2m+\rho+j}{\theta i +j} \le C_ri^{-r} ,\quad r\ge 2.\label{bn1}
\end{align}
Similarly, we have, by first conditioning on $N_i$,
\begin{align}\label{enb1}
\E [N_iB_i] = \frac{\E[N_i(m+\rho+N_i)] }{\theta i} \le \frac{\E[N_i(2m+\rho)] }{\theta i} = O(i^{-2}),
\end{align}
and the bound
\begin{align}\label{enb2}
\E \bigsqpar{N^r_i B_i^r} =O\big(i^{-(r+1)}\big), \qquad \text{for each }r\ge 2 .
\end{align}
Define $\Phi_i$ and $S_{n,i}$ by \eqref{de:phi} and \eqref{de:S} as before. Then \eqref{bn} and a little calculation using \eqref{gg} show that, for $2\le j\le k<\infty$,
\begin{align*}
\prod^k_{i=j}\E(1+(m-1)B_i) &= \prod^k_{i=j}\frac{i+(m-1)\chi}{i} \cdot\prod^k_{i=j}\frac{\E(1+(m-1)B_i)}{1+(m-1)\chi/i} \\
&= \frac{\G\bclr{k+1+(m-1)\chi}\G(j)}{\G\bclr{j+(m-1)\chi}\G(k+1)} \prod^k_{i=j}\Bigpar{1+O\bigpar{i^{-2}}} \notag\\
&= \bbclr{\frac{k}{j}}^{(m-1)\chi} \bclr{1+O(j^{-1})}\numberthis \label{eq:mprodbetaN} ,
\end{align*}
as in \eqref{eq:mprodbeta}, and, recalling that $B_i$ are independent and taking $j=1$ in \eqref{eq:mprodbetaN},
\begin{align*}
\E \Phi_k &=\prod^k_{i=1}\E(1+(m-1)B_i) = \frac{\G\bclr{k+1+(m-1)\chi}}{\G\bclr{1+(m-1)\chi}\G(k+1)} \prod^k_{i=1}\frac{\E(1+(m-1)B_i)}{1+(m-1)\chi/i} \\
&=Q k^{(m-1)\chi} \bclr{1+O(k^{-1})}\numberthis \label{pb0} ,
\end{align*}
where
\begin{align}\label{asa}
Q:=\frac{\prod^\infty_{i=1}\frac{\E(1+(m-1)B_i)}{1+(m-1)\chi/i}}{\G\bclr{1+(m-1)\chi}};
\end{align}
note that the infinite product in \eqref{asa} converges as a consequence of \eqref{bn}. Using \eqref{bn} and \eqref{bn1}, the upper bounds \eqref{eq:trunphi2bd} and \eqref{eq:phibd} follow by the same proof as before. The statements in Lemmas \ref{LB2} and \ref{le:Sest} hold exactly, except for \eqref{lb2b}, which, in view of \eqref{pb0}, is now replaced with
\begin{align}\label{ntb}
\tilde\beta :=Q\beta.
\end{align}
Let $Y_k, Z_k, J_k, W_k$ be as in \refS{se:basicanalysis}.
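We illustrate how the estimates \eqref{bn}--\eqref{enb2} follow from \eqref{nB} by verifying the first equality in \eqref{bn}: the two parameters of the beta distribution in \eqref{nB} sum to
\begin{align*}
(m+N_j+\rho)+\bclr{(2j-1)m-N_j+(j-1)\rho}=2jm+j\rho=\theta j,
\end{align*}
so that $\E\bigpar{B_j\mid (N_i)_{i=1}^\infty}=(m+\rho+N_j)/(\theta j)$; taking expectations and using \eqref{ENi} together with $\chi\theta=m+\rho$ then gives \eqref{bn}.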
To streamline the arguments, from here onwards we concentrate on the $m=2$, $\rho=0$ case, and leave the general case (with modifications as in \refS{Sgen}) to the reader. Once we have sampled the self-loops at every vertex, the stochastic recursions for obtaining $D_n$ are similar to the ones in \refS{sse:sr}: we sample $(B_i)^{n-1}_{i=1}$ according to \eqref{nB}, and for each red vertex $k$, we add $2-N_k$ outgoing edges and proceed as before. The boundary conditions are the same, except now we have $Y_{n-1} = 2 - N_n$. For $2\le k\le n-1$, the recursion takes the form \begin{align}\label{nY} Y_{k-1} = Y_k - Z_k + (2 - N_k) J_k , \end{align} and because $0\le N_k \le 2$, we also have \begin{align}\label{sr1} Y_{k-1} \le Y_k - Z_k + 2 J_k. \end{align} Let $\cF_k$ be the $\sigma$-algebra generated by $(N_i)^n_{i=2}$, $(B_i)^{n-1}_{i=2}$ and the coin tosses at vertices $n-1,\dots,k+1$ in the stochastic recursion. Note that \eqref{eq:distZk} holds, and in view of \eqref{sr1}, also \eqref{eq:markov}--\eqref{eq:Akdiff} and \eqref{nov1}--\eqref{eq:varM} hold, with the number $2$ in \eqref{eq:mg} and \eqref{eq:varM} replaced with $2-N_n$, and with the last equality in \eqref{eq:ubmeanW} replaced with $\le$. Instead of \eqref{eq:incA}, from \eqref{nY} we have \begin{align}\label{iA} A_{k-1} -A_k &= W_k - \E (W_{k-1}\mid \cF_k)\notag\\ &= 2 \Phi_{k-1} ((1- B_k)^{Y_k}-1+B_k Y_k) + \Phi_{k-1} N_k \E(J_k\mid \cF_k). \end{align} Now, let $\bB$ be the $\sigma$-field generated by $(B_i)^{n-1}_{i=2}$ and $(N_i)^{n}_{i=2}$. As the upper bounds in Section \ref{Sprel} still hold, and $(B_i)^{n-1}_{i=2}$ are independent, \refLs{le:doob} and \ref{le:Zk} hold. The probability that vertex $k\ge 2$ is red and has at least one self-loop is \begin{align} \IP(Z_k\ge 1, N_k\ge 1) = \E\bigsqpar{\mathbf{1}\set{N_k\ge 1} \IP_{\bB}(Z_k\ge 1)}; \end{align} and so by Markov's inequality and \eqref{MarZ}, \begin{align} \IP(Z_k\ge 1, N_k\ge 1)\le 2 \E \bbclr{N_kB_k \prod^{n-1}_{i=k+1}(1+B_i) }. \end{align} By the independence of the pairs $(B_i,N_i)$, \eqref{enb1}, and \eqref{eq:mprodbeta}, this yields \begin{align}\label{eq:zn} \IP(Z_k\ge 1, N_k\ge 1)&\le 2 \E (N_kB_k) \prod^{n-1}_{i=k+1}\E(1+B_i) \le C\frac{n^{1/2}}{k^{5/2}}. \end{align} In view of \eqref{iA} and Markov's inequality, \eqref{eq:incAsim} in \refL{le:Ak} is replaced with \begin{align}\label{iA1} A_{k-1} -A_k \le (W_kB_k)^{2} \Phi_k^{-1} + \Phi_{k} N_k B_k Y_k = (W_kB_k)^{2} \Phi_k^{-1} + N_k B_k W_k .\end{align} Using \eqref{eq:ubmeanW}, we have \begin{align}\label{iA3} \E_\bB \sqpar{N_k B_k W_k} = N_k B_k \E_\bB \sqpar{W_k} \le 2 N_k B_k \Phi_{n-1}. \end{align} Since the pairs $(B_i,N_i)$ are independent, it follows from \eqref{iA3} and \eqref{de:phi} that \begin{align} \E \sqpar{N_k B_k W_k} \le 2 \E \bigsqpar{N_k B_k(1+B_k)} \prod^{n-1}_{\substack{i=1\\i\neq k}} \E (1+B_i) \le 4 \E [N_k B_k] \E\Phi_{n-1} ,\end{align} and applying \eqref{enb1} and \eqref{pb0}, we get \begin{align}\label{iA2} \E \bigsqpar{N_k B_k W_k} \le C\frac{n^{1/2}}{k^{2}} \le C\frac{n}{k^{5/2}}. \end{align} With \eqref{iA1} and \eqref{iA2}, we may proceed as in the proof of \refL{le:Ak} to show that \eqref{eq:EAk} holds. \refL{LU} follows from \refL{le:Sest}. Thus, the early part of the growth of $D_n$ can be coupled to the same time-changed Yule process $\wh \cY$ with some extra modifications. Recall that $\Psi(x)$ is the mapping of vertex $x$ in $\wh \cY$ to a vertex $k$ in $D_n$ (or vertex $(k/n)^\chi$ in $\xD_n$). 
In Step (1) of the coupling, we sample $(N_i)^n_{i=1}$ and then $(B_i)^{n-1}_{i=1}$ as in \eqref{nB}. If $\Psi$ maps $x$ to some $k$ that has at least one self-loop, we extend the mapping $\Psi$ from Section \ref{se:Yule} by mapping all children of $x$ to $k$ (so all other descendants of $x$ are also mapped to $k$). To prove that \refT{th:coupling} also holds in this case, we need to show that the extended mapping above is w.h.p.\ injective at every vertex in $\xD_n\cap [(n_1/n)^\chi,1]$. By \eqref{eq:zn} and \eqref{ENi}, the probability that some vertex in $\xD_n\cap [(n_1/n)^\chi,1]$ has at least one self-loop is at most
\begin{multline}\label{loopp} \IP(N_n\ge 1) + \sum^{n-1}_{k=n_1} \IP(Z_k\ge 1, N_k\ge 1) \le \frac{C}{n} + \sum^{n-1}_{k=n_1} \frac{Cn^{1/2}}{k^{5/2}} = O(\log^{3/2} n/n) = o(1). \end{multline}
The same argument as in Step \ref{TCO1} in the proof of \refT{th:coupling}, combined with \eqref{loopp}, then gives the desired claim. The remaining steps of the proof can be applied without any changes. \refL{le:maxAk}, \refL{le:maxMk} and \refT{TF1} hold with the same proofs as before, since we have shown that \eqref{eq:EAk} and the various other estimates that we use there still hold. Let $\wh A\nn_t$ be as in \eqref{hA}. When proving tightness of $\wh A\nn_t$ in $C[a,b]$ for $0<a<b<\infty$, we have to use \eqref{iA1} instead of \eqref{eq:incAsim}. Let $V_k$, $T_k$, $\wh V\nn_t$, $\hT\nn_t$, $\xM_n$, and $\Phix_n$ be as in \eqref{ha3}, \eqref{hV} and \eqref{hdj0}. Using \eqref{iA1} and the crude bound $N_k\le m$, we obtain, instead of \eqref{hdj01},
\begin{align}\label{tyr} | A_k- A_{k-1}| & \le M_k^2\Phi_k\qw (V_k-V_{k-1})+ m M_k(T_k-T_{k-1}) \notag\\& \le n^{5/6}\xM_n^2 \Phix_n (V_k-V_{k-1}) + m n^{1/2}\xM_n(T_k-T_{k-1}) \end{align}
and thus, arguing as for \eqref{hdj6}, for real numbers $s,t$ such that $a\le s\le t$,
\begin{align}\label{balder} |\wh A\nn_t - \wh A\nn_s | &\le \Phix_n \xM_n^2 (\hV\nn_t-\hV\nn_s) + m\xM_n (\hT\nn_t- \hT\nn_s). \end{align}
We have already shown in Section \ref{Stig} that the processes $\wh V\nn_t-\wh V\nn_a$ and $\hT\nn_t- \hT\nn_a$ are tight in $C[a,b]$ (\refL{LV2}) and that the sequences $(\xM_n)^\infty_{n=1}$ and $(\Phix_n)^\infty_{n=1}$ are tight. Hence, by simple applications of \refL{LC}, the processes $\Phix_n \xM_n^2 (\hV\nn_t-\hV\nn_a)$ and $m\xM_n (\hT\nn_t- \hT\nn_a)$, $n\ge1$, are tight in $C[a,b]$. If $(X_n(t))^\infty_{n=1}$ and $(Y_n(t))^\infty_{n=1}$ are any two sequences of random continuous functions on $[a,b]$ that are both tight in $C[a,b]$, then so is the sequence $((X_n(t)+Y_n(t)))^\infty_{n=1}$. Hence, the sequence $\Phix_n \xM_n^2 (\hV\nn_t-\hV\nn_a)+m\xM_n (\hT\nn_t- \hT\nn_a)$, $n\ge1$, is tight in $C[a,b]$; finally \eqref{balder} and another application of \refL{LC} (now with $Z_n=1$) show that $\wh A\nn_t$, $n\ge1$, are tight in $C[a,b]$, so \refL{LA2} still holds. Some minor adjustments are also required to yield the same result as in \refT{th:cvg}. When applying the Skorohod coupling theorem, note that the variables $N\nn_i$ are potentially different for each $n$. Let $0<s<t$ and define $k:=\floor{sn^{1/3}}$ and $\ell:=\floor{tn^{1/3}}$.
By \eqref{hA} and \eqref{iA}, \begin{multline} \hA\nn_s-\hA\nn_t =n^{-1/2}\sum_{i=k+1}^\ell2\GF_{i-1}\bigsqpar{(1-B_i)^{Y_i}-1+Y_iB_i} \\ + n^{-1/2}\sum_{i=k+1}^\ell \GF_{i-1} N_i \E(J_i\mid \cF_i) +o(1) .\end{multline} However, by Markov's inequality and \eqref{iA2}, \begin{align}\label{at1} \E \sum_{i=k+1}^\ell \GF_{i-1} N_i \E(J_i\mid \cF_i)& \le \E \sum_{i=k+1}^\ell \GF_{i-1} N_iB_iY_i \le \E \sum_{i=k+1}^\ell N_iB_iW_i \notag\\& \le \sum_{i=k+1}^\ell C\frac{n^{1/2}}{i^2} \le C\frac{n^{1/2}}{k} \le C_sn^{1/6}, \end{align} implying that \begin{align} n^{-1/2}\sum_{i=k+1}^\ell \GF_{i-1} N_i \E(J_i\mid \cF_i)\pto 0. \end{align} The remainder of the proof of \refT{th:cvg} is then the same as before. \begin{proof}[Proof of \refT{Tloop}] With the preparations above, the same argument as in \refS{se:desc} yields \refT{th:X} for this model too; with modifications as in \refS{Sgen} we obtain \refT{Tmain}. Similarly, the arguments in \refS{Smom} still hold, and thus \refT{Tmom} holds. \end{proof} \appendix \section{The differential equations in \eqref{aw13} and \eqref{eq:de}} \label{Sdiff} We rewrite the equation in \eqref{eq:de} as \begin{align}\label{eq:de1} f'(t) = mt^{\ga-1} \bbclr{\bclr{1+\tfrac{1}{\theta}t^{-\ga}f(t)}^{-(m+\rho)}-1+\chi t^{-\ga}f(t)}. \end{align} where, as above, \begin{align}\label{ga2} \ga:=1+(m-1)\chi. \end{align} Note that in the special case $m=2$ and $\rho=0$, we have $\chi=1/2$, $\theta=4$, and $\ga=3/2$, so the above yields the differential equation in \eqref{aw13}. We define \begin{align}\label{eq:gsg1} g(t) := \theta^{-1} t^{-\ga} f(t) \end{align} so that \eqref{eq:de1} simplifies to, recalling $\chi\gth=m+\rho$, see \eqref{de:chi}, \begin{align}\label{eq:de2} g'(t) = -\frac{\ga}{t}g(t) + \frac{m}{\theta t} \bbclr{(1+g(t))^{-(m+\rho)}-1+(m+\rho)g(t)}. \end{align} Letting \begin{align}\label{eq:gsh1} h(x):=g(e^{(m+\rho)x}) \end{align} then yields, using \eqref{ga2}, \begin{align}\label{eq:gsh2} h'(x)&=-(m+\rho)(1+(m-1)\chi) h(x) + \chi m\bclr{(1+h(x))^{-(m+\rho)}-1+(m+\rho)h(x)}\notag\\ &= \chi m (1+h(x))^{-(m+\rho)} -\chi m+ (m+\rho)(\chi -1)h(x)\notag\\ &=\chi m\bclr{(1+h(x))^{-(m+\rho)} -1 - h(x)}, \end{align} where the last equality follows from $(m+\rho)(1-\chi)=\chi m$, see again \eqref{de:chi}. The autonomous differential equation in \eqref{eq:gsh2} can be integrated to \begin{align}\label{eq:gsh3} \frac{1}{\chi m}\int \frac{1}{(1+h)^{-(m+\rho)}-1-h} \dd h = \int 1 \dd x \end{align} Furthermore, with $v:=1+h$, \begin{align} \frac{1}{\chi m}\int \frac{1}{(1+h)^{-(m+\rho)}-1-h} \dd h & = \frac{1}{\chi m} \int \frac{(1+h)^{m+\rho}}{1-(1+h)^{m+\rho+1}}\dd h \notag\\& =- \frac{1}{\chi m} \int \frac{v^{m+\rho}}{v^{m+\rho+1}-1}\dd v. \end{align} The change of variable $u=v^{m+\rho+1}-1$ then gives \begin{align} \frac{1}{\chi m} \int \frac{v^{m+\rho}}{v^{m+\rho+1}-1}\dd v = \frac{1}{\chi m(m+\rho+1)} \int \frac{1}{u}\dd u = \frac{1}{\chi m(m+\rho+1)} \log u + C. \end{align} Thus, reverting back to the original variable $h$, \eqref{eq:gsh3} is equivalent to \begin{align} -\frac{1}{\chi m(m+\rho+1)} \log\bclr{(1+h)^{m+\rho+1}-1} = x+C, \end{align} which yields the solution \begin{align}\label{eq:gsh4} h(x) = \bclr{1+ce^{-\chi m(m+\rho+1)x} }^{\frac{1}{m+\rho+1}} -1, \qquad \text{for some $c\in \bbR$.} \end{align} From \eqref{eq:gsg1} and \eqref{eq:gsh1}, \begin{align}\label{eq:gsh5} f(t) = \theta t^{1+(m-1)\chi} h\bclr{\tfrac{1}{m+\rho}\log t}. 
\end{align} so plugging in \eqref{eq:gsh4} into \eqref{eq:gsh5}, and using \begin{align}\label{eq:amr} \al = 1+(m-1)\chi = 1+\frac{(m-1)(m+\rho)}{2m+\rho} = \frac{\chi m(m+\rho+1)}{m+\rho}, \end{align} we get \begin{align}\label{eq:de3} f(t) = \theta t^\al \bbclr{\bclr{1+c t^{-\al}}^{\frac{1}{m+\rho+1}}-1}. \end{align} Using L'H\^opital's rule (or a Taylor expansion) and \eqref{eq:de3}, we obtain \begin{align} f(\infty) :=\lim_{t\to\infty}f(t) = \frac{\theta c}{m+\rho+1}\lim_{t\to\infty}\bclr{1+c t^{-\al}}^{\frac{1}{m+\rho+1}-1} = \frac{\theta c}{m+\rho+1} . \end{align} Hence, the unique solution $f$ to \eqref{eq:de1} with a given $f(\infty)$ is given by \eqref{eq:de3} with \begin{align}\label{eq:gc0} c = \frac{m+\rho+1}{\theta}f(\infty). \end{align} \section{{A beta integral}} Recall the standard beta integral \cite[5.12.3]{NIST} \begin{align}\label{erika} \intoo\frac{x^{a-1}}{(1+x)^b}\dd x &= \frac{\gG(a)\gG(b-a)}{\gG(b)} \end{align} when $0<\Re a<\Re b$. We use the following less well-known extension; it is not new but we give a proof for completeness. \begin{lemma}\label{Lnora} If\/ $-1<\Re a<0$ and\/ $\Re b>0$, then \begin{align}\label{nore} \intoo\Bigpar{\frac{1}{(1+x)^b}-1}x^{a-1}\dd x =\frac{\gG(a)\gG(b-a)}{\gG(b)}. \end{align} \end{lemma} \begin{proof} We consider a more general integral. Assume first $\Re a>0$ and $\Re b>0$, and let $\Re c>\Re a$. Then, by using \eqref{erika} twice, \begin{align}\label{nora} \intoo\Bigpar{\frac{1}{(1+x)^{b+c}}-\frac{1}{(1+x)^{c}}}x^{a-1}\dd x &= \frac{\gG(a)\gG(b+c-a)}{\gG(b+c)} -\frac{\gG(a)\gG(c-a)}{\gG(c)}. \end{align} For fixed $b$ and $c$ with $\Re b,\Re c>0$, the \lhs{} converges for $-1<\Re a<\Re c$, and defines an analytic function of $a$ in this strip. Hence, by analytic continuation, \eqref{nora} holds throughout this range. Similarly, if $\Re a>-1$ and $\Re b>0$, then the \lhs{} of \eqref{nora} is an analytic function of $c$ in the domain $\Re c>\Re a$, and thus \eqref{nora} holds whenever $-1<\Re a<\Re c$ and $\Re b>0$. For $-1<\Re a<0$ we thus may take $c=0$ in \eqref{nora} which yields \eqref{nore}. (Recall that $1/\gG(0)=0$.) \end{proof} \begin{remark} Note that \eqref{erika} and \eqref{nore} give the same formula, but for different ranges of $a$. The integrals can be interpreted as the Mellin transforms of $(1+x)^{-b}$ and $(1+x)^{-b}-1$, respectively, and thus this is an instance of a general phenomenon when considering the Mellin transforms of a function $f(x)$ and of the difference $f(x)-p(x)$ where, for example, $p(x)$ is a finite Taylor polynomial at $0$, see \cite[p.~19]{FGD}. \end{remark} \begin{thebibliography}{99} \bibitem{Athreya-Ney1972} Krishna B.\ Athreya and Peter E.\ Ney: \emph{Branching Processes}. Springer-Verlag, Berlin, 1972. \bibitem{BA1997} Albert-L\'aszl\'o Barab\'asi and Reka Albert: Emergence of scaling in random networks. \emph{Science} \textbf{286} (1999), no.\ 5439, pp.\ 509--512. \bibitem{Berger2014} Noam Berger, Christian Borgs, Jennifer T.\ Chayes, and Amin Saberi: Asymptotic behavior and distributional limits of preferential attachment graphs. \emph{Ann.\ Probab.} \textbf{42} (2014), no.\ 1, pp.\ 1--40. \bibitem{Billingsley} Patrick Billingsley: \emph{Convergence of Probability Measures}. Wiley, New York, 1968. \bibitem{Bollobas2004} B\'ela Bollob\'as and Oliver Riordan: The diameter of a scale-free random graph. \emph{Combinatorica} \textbf{24} (2004), no. 1, pp. 5--34. 
\bibitem{Bollobas2001} B\'ela Bollob\'as, Oliver Riordan, Joel Spencer, and G\'abor Tusn\'ady: The degree sequence of a scale-free random graph process. \emph{Random Structures Algorithms} \textbf{18} (2001), no.\ 3, pp.\ 279--290.
\bibitem{Burkholder1973} Donald L.\ Burkholder: Distribution function inequalities for martingales. \emph{Ann.\ Probab.} \textbf{1} (1973), no.\ 1, pp.\ 19--42.
\bibitem{Dobrow-Smythe1996} Robert P.\ Dobrow and Robert T.\ Smythe: Poisson approximations for functionals of random trees. \emph{Random Structures Algorithms} \textbf{9} (1996), no.\ 1--2, pp.\ 79--92.
\bibitem{FGD} Philippe Flajolet, Xavier Gourdon, and Philippe Dumas: Mellin transforms and asymptotics: harmonic sums. \emph{Theoret.\ Comput.\ Sci.} \textbf{144} (1995), no.\ 1--2, pp.\ 3--58.
\bibitem{Gut} Allan Gut: \emph{Probability: A Graduate Course}, 2nd ed., Springer, New York, 2013.
\bibitem{vdh2017} Remco van der Hofstad: \emph{Random Graphs and Complex Networks, Vol.\ 1}, Cambridge Univ.\ Press, Cambridge, 2017.
\bibitem{vdh2024} Remco van der Hofstad: \emph{Random Graphs and Complex Networks, Vol.\ 2}, Cambridge Univ.\ Press, Cambridge, 2024.
\bibitem{Janson2023} Svante Janson: The number of descendants in a random directed acyclic graph. \emph{Random Structures Algorithms} \textbf{64} (2024), no.\ 3, pp.\ 768--803.
\bibitem{Kallenberg} Olav Kallenberg: \emph{Foundations of Modern Probability}. 2nd ed., Springer, New York, 2002.
\bibitem{Knuth} Donald E.\ Knuth: \emph{The Art of Computer Programming}, Section 7.2.2.3. Preliminary draft, 29 January 2023. (To appear in Volume 4, Fascicle 7.)
\bibitem{Kuba-Wagner2010} Markus Kuba and Stephan Wagner: On the distribution of depths in increasing trees. \emph{Electron.\ J.\ Combin.} \textbf{17} (2010), no.\ 1, Research Paper 137, 9 pp.
\bibitem{lo2024} Tiffany Y.\ Y.\ Lo: Local weak limit of preferential attachment random trees with additive fitness. \emph{Advances in Applied Probability} \textbf{56} (2024), no.\ 3, pp.\ 785--824.
\bibitem{Mori2003} Tam\'as F.\ M\'ori: The maximum degree of the Barab\'asi--Albert random tree. \emph{Combinatorics, Probability and Computing} \textbf{14} (2005), no.\ 3, pp.\ 339--348.
\bibitem{NIST} \emph{NIST Handbook of Mathematical Functions}. Edited by Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert and Charles W. Clark. Cambridge Univ.\ Press, 2010. \\ Also available as \emph{NIST Digital Library of Mathematical Functions}, \url{http://dlmf.nist.gov/}
\bibitem{PP2007} Alois Panholzer and Helmut Prodinger: Level of nodes in increasing trees revisited. \emph{Random Structures Algorithms} \textbf{31} (2007), no.\ 2, pp.\ 203--226.
\bibitem{PPR2017} Erol Pek\"oz, Adrian R\"ollin and Nathan Ross: Joint degree distributions of preferential attachment random graphs. \emph{Advances in Applied Probability} \textbf{49} (2017), no.\ 2, pp.\ 368--387.
\bibitem{Whitt1970} Ward Whitt: Weak convergence of probability measures on the function space $C[0,\infty)$. \emph{Annals of Mathematical Statistics} \textbf{41} (1970), no.\ 3, pp.\ 939--944.
\end{thebibliography}
\end{document}
2412.14059v1
http://arxiv.org/abs/2412.14059v1
Bessel functions and Weyl's law for balls and spherical shells
\documentclass[11pt]{amsart} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{subfigure} \usepackage{amsthm} \usepackage{enumerate} \usepackage[mathscr]{eucal} \usepackage{mathrsfs} \usepackage{verbatim} \usepackage{yhmath} \usepackage{epstopdf} \usepackage{color} \usepackage{hyperref} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \numberwithin{equation}{section} \numberwithin{figure}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem{defn}[theorem]{Definition} \theoremstyle{plain} \newtheorem{thmsub}{Theorem}[subsection] \newtheorem{lemmasub}[thmsub]{Lemma} \newtheorem{corollarysub}[thmsub]{Corollary} \newtheorem{propositionsub}[thmsub]{Proposition} \newtheorem{defnsub}[thmsub]{Definition} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{remarks}{Remarks} \renewcommand\thefootnote{\fnsymbol{footnote}} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\vol}{vol} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\tr}{tr} \allowdisplaybreaks[4] \begin{document} \date{} \title[Bessel functions and Weyl's law]{Bessel functions and Weyl's law for balls and spherical shells} \author{Jingwei Guo} \address{School of Mathematical Sciences\\ University of Science and Technology of China\\ Hefei, 230026\\ P.R. China} \email{[email protected]} \author{Tao Jiang} \address{School of Mathematics and Statistics\\ Anhui Normal University\\ Wuhu, 241002\\ P.R. China} \email{[email protected]} \author{Zuoqin Wang} \address{School of Mathematical Sciences\\ University of Science and Technology of China\\ Hefei, 230026\\ P.R. China} \email{[email protected]} \author{Xuerui Yang} \address{Department of Mathematics\\ University of Illinois at Urbana-Champaign\\ Urbana, IL, 61801\\USA} \email{[email protected]} \subjclass[2020]{35P20, 42B20, 11P21, 33C10} \keywords{Weyl's law, Dirichlet/Neumann Laplacian eigenvalues, decoupling theory, weighted lattice point counting, (cross-products of) ultraspherical Bessel functions.} \begin{abstract} The purpose of this paper is twofold. One is to investigate the properties of the zeros of cross-products of Bessel functions or derivatives of ultraspherical Bessel functions, as well as the properties of the zeros of the derivative of the first-kind ultraspherical Bessel function. The properties we study include asymptotics (with uniform and nonuniform remainder estimates), upper and lower bounds and so on. In addition, we provide the number of zeros of a certain cross-product within a large circle and show that all its zeros are real and simple. These results may be of independent interest. The other is to investigate the Dirichlet/Neumann Laplacian on balls and spherical shells in $\mathbb{R}^d$ ($d\geq 2$) and the remainder of the associated Weyl's law. We obtain new upper bounds in all dimensions, both in the Dirichlet and Neumann cases. The proof relies on our studies of Bessel functions and the latest development in the Gauss circle problem, which was driven by the application of the emerging decoupling theory of harmonic analysis. 
\end{abstract} \maketitle \tableofcontents
\section{Introduction} \label{intro}
Consider the Laplacian associated with a bounded Euclidean domain $\mathscr{D}\subset \mathbb{R}^d$ ($d\geq 2$), with either Dirichlet or Neumann boundary conditions. Denote by $\mathscr{N}_\mathscr{D}(\mu)=\#\{j \mid \lambda_j \le \mu\}$ the corresponding eigenvalue counting function, where $\lambda_j^2$ are the Dirichlet/Neumann eigenvalues. The Weyl remainder $\mathscr{R}_\mathscr{D}(\mu)$ is defined to be the quantity in the following expression:
\begin{equation*} \mathscr{N}_\mathscr{D}(\mu)=\frac{\omega_d}{(2\pi)^{d}} \left|\mathscr{D}\right|\mu^d\mp \frac{\omega_{d-1}}{4(2\pi)^{d-1}}\left|\partial\mathscr{D}\right| \mu^{d-1}+\mathscr{R}_\mathscr{D}(\mu), \end{equation*}
where $\omega_k$ denotes the volume of the unit ball in $\mathbb{R}^k$ and the sign ``$-$'' (resp., ``$+$'') refers to the Dirichlet (resp., Neumann) boundary condition. The study of such an eigenvalue counting function was initiated by Weyl \cite{weyl:1912, weyl:1913}. Weyl's conjecture claims that the remainder $\mathscr{R}_\mathscr{D}(\mu)$ is of order $o(\mu^{d-1})$ as $\mu \to \infty$. Melrose \cite{Mel:1980} and Ivrii \cite{Ivrii1980} proved this conjecture for manifolds with boundary under a certain ``non-periodicity condition'' on the billiard flow, namely that the set of periodic billiard trajectories has measure zero. It is still unknown whether the non-periodicity condition holds for every bounded Euclidean domain (with sufficiently nice boundary). A natural question is to study the asymptotic order of the remainder $\mathscr{R}_\mathscr{D}(\mu)$. It is known that there is no universal constant $\kappa<d-1$ so that $\mathscr{R}_\mathscr{D}(\mu)=O(\mu^\kappa)$ holds for all $\mathscr{D}\subset \mathbb{R}^d$. In fact, for each $\kappa <1$, Lazutkin and Terman \cite{Lazu:1982} constructed convex planar domains with specific billiard dynamics so that the remainder is not $O(\mu^\kappa)$. In the other direction, however, the remainder can be of much smaller order than $\mu^{d-1}$ for specific domains in $\mathbb{R}^d$. For example, for the Dirichlet Laplacian on disks in $\mathbb{R}^2$, Kuznetsov and Fedosov \cite{kuz:1965} and Colin de Verdi\`ere \cite{colin:2011} showed that the remainder is $O(\mu^{2/3})$. This bound was recently improved to $O(\mu^{2/3-1/495})$ in \cite{GWW2018} and to
\begin{equation*} O\left(\mu^{\frac{131}{208}}(\log \mu)^{\frac{18627}{8320}}\right) \end{equation*}
in \cite{GMWW:2019}. Huxley \cite{Huxley:2024} obtained the same bound for both the Dirichlet and Neumann Laplacian on disks. Kuznetsov proved the bound $O(\mu^{2/3})$ for ellipses in \cite{Kuznecov:1965} and considered planar domains of separable variable type in \cite{Kuznecov:1966}. In \cite{GMWW:2019} we studied the Dirichlet Laplacian on annuli and obtained the bound $O(\mu^{2/3})$ in general and the bound $O(\mu^{131/208}(\log \mu)^{18627/8320})$ under the rationality assumption on the ``slope'' $\arccos(r/R)/\pi$, where $r$ and $R$ are the inner and outer radii of the annulus. The Dirichlet Laplacian on balls in $\mathbb{R}^d$ ($d\geq 3$) was studied in \cite{Guo} and a bound
\begin{equation*} O\left(\mu^{d-2+\frac{131}{208}}(\log \mu)^{\frac{18627}{8320}} \right) \end{equation*}
was obtained. In this paper, we study the Dirichlet/Neumann Laplacian on balls and spherical shells in $\mathbb{R}^d$.
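For orientation, we note what the two-term expansion above gives in the simplest case: if $\mathscr{D}$ is the unit disk in $\mathbb{R}^2$, then $\omega_2=\pi$, $\left|\mathscr{D}\right|=\pi$, $\omega_1=2$ and $\left|\partial\mathscr{D}\right|=2\pi$, so that
\begin{equation*} \mathscr{N}_\mathscr{D}(\mu)=\frac{\mu^2}{4}\mp \frac{\mu}{2}+\mathscr{R}_\mathscr{D}(\mu), \end{equation*}
and the bounds for disks quoted above concern precisely the remainder $\mathscr{R}_\mathscr{D}(\mu)$ in this expansion.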
Throughout this paper we denote by
\begin{equation*} \mathbb{S}=\mathbb{S}_{r,R}^d=\{x\in\mathbb{R}^d : r<|x|<R \} \end{equation*}
a spherical shell in $\mathbb{R}^d$, where $r$ and $R$ are positive numbers with $r<R$, by
\begin{equation*} \mathbb{B}=\mathbb{B}_R^d=\{x\in\mathbb{R}^d : |x|<R\} \end{equation*}
a ball in $\mathbb{R}^d$, and by
\begin{equation*} \theta^*=0.3144831759741\cdots \end{equation*}
the negative of the unique solution in the interval $[-0.35,-0.3]$ to the equation \eqref{definition of theta} (see Theorem \ref{expo sum} below). We obtain the following bounds.
\begin{theorem}\label{specthm} Let $\mathscr{R}_\mathscr{D}(\mu)$ be the Weyl remainder associated with the domain $\mathscr{D}$ for either Dirichlet or Neumann eigenvalues. For $d\geq 2$ and $\epsilon>0$, we have
\begin{enumerate}
\item \begin{equation*} \mathscr{R}_\mathbb{B}(\mu)=O_{\epsilon}\left(\mu^{d-2+2\theta^*+\epsilon}\right); \end{equation*}
\item \begin{equation*} \mathscr{R}_\mathbb{S}(\mu)=O\left(\mu^{d-2+\frac{2}{3}}\right) \end{equation*} which can be improved to \begin{equation*} \mathscr{R}_\mathbb{S}(\mu)=O_{\epsilon}\left(\mu^{d-2+2\theta^*+\epsilon}\right) \end{equation*} if $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$.
\end{enumerate}
\end{theorem}
\begin{remark} For comparison between these bounds and previous ones, it is worth noting that $2\theta^*=0.628966\cdots$, whereas $131/208=0.629807\cdots$. By using Huxley's bound in \cite[Proposition 3]{Huxley:2003} (which has previously been used in \cite{GMWW:2019, Guo, Huxley:2024}), and combining the work presented in Sections \ref{zeros}--\ref{sec5}, we can already obtain new results. Specifically, we can extend the results in \cite{GMWW:2019} from annuli to spherical shells in any dimension, covering both the Dirichlet and Neumann scenarios. Furthermore, we can extend the results in \cite{Guo} for balls to include the Neumann case. In fact, we can go even further. By using the latest development in the Gauss circle problem, which was driven by the application of the emerging decoupling theory of harmonic analysis, rather than relying on Huxley's \cite[Proposition 3]{Huxley:2003}, we can obtain new bounds in all dimensions, both in the Dirichlet and Neumann cases, by improving the exponent from $131/208$ to $2\theta^*$. \end{remark}
We would like to take this opportunity to explain one difference between the cases $d=2$ and $d>2$. It is well known that the eigenvalue counting problem for certain Euclidean domains (whose billiard flows are completely integrable) can be converted into some ``almost lattice point problems'' (with each problem associated with a different domain of the same dimension), essentially involving the counting of lattice points subject to certain translations. Even though the planar domain for the corresponding lattice point problem could be bad (non-convex, with cusp points, etc.) and despite the potential presence of complicated translations, it is still possible to adapt the methods/arguments developed in number theory and harmonic analysis for the Gauss circle problem (which counts lattice points within disks) over the past 100 years, and achieve a satisfactory asymptotic bound. As a result, one gets an equally nice bound for the eigenvalue counting problem, as alluded to above. For the case $d=2$, this idea was used by many authors (cf.
\cite{kuz:1965, Kuznecov:1965, Kuznecov:1966, colin:2011, GWW2018, GMWW:2019, FLPS:2023, Huxley:2024}) to study the Dirichlet/Neumann eigenvalue counting function of disks, annuli, ellipses, etc. In \cite{RWY} Rudnick, Wigman and Yesha also studied the Robin eigenvalue counting function of the disk via this method. For the case $d>2$, one may hope to mimic the aforementioned strategy to convert the eigenvalue counting problem for balls and spherical shells to some almost lattice point problems associated with specific domains in $\mathbb R^d$, and then by adapting existing techniques in handling lattice point problems within $d$-dimensional domains (like balls, ellipsoids, etc.) to get equally nice asymptotic bounds (e.g., achieving $O(\mu^{d-2})$ for $d>4$). However, while it is true that the eigenvalues still correspond to ``almost lattice points'', it seems that the existing techniques employed to address lattice point problems within d-dimensional domains are not applicable to the specific domains encountered here. In fact, according to Eswarathasan, Polterovich and Toth's \cite[Proposition 1.8]{EPT:2016}, there is a ``lower bound on average" for balls $\mathbb{B}$ in $\mathbb{R}^d$ ($d\geq 2$), \begin{equation*} \frac{1}{\mu}\int_{\mu}^{2\mu}\left|\mathscr{R}_\mathbb{B}(\tau) \right|\,\textrm{d}\tau\ge c \mu^{d-2+\frac{1}{2}}, \end{equation*} which is much larger than the well-known ``lattice point counting remainder" for balls (e.g., $O_\varepsilon(\mu^{d-2+\varepsilon})$ for $d=4$ and $O(\mu^{d-2})$ for $d>4$). An interesting question would be whether the true order of the error term for the eigenvalue counting problem for balls is, for any $\varepsilon>0$, \begin{equation*} O_{\varepsilon}\left(\mu^{d-2+\frac{1}{2}+\varepsilon} \right). \end{equation*} So instead of searching for a $d$-dimensional solution, we will convert the eigenvalue counting problems under consideration to some ``weighted'' planar lattice point counting problems. This method was used by the first author \cite{Guo} to study the Dirichlet eigenvalue counting function for balls, and by Filonov, Levitin, Polterovich and Sher \cite{FLPS:2023} in confirming Polya's conjecture for balls of dimension $d \ge 3$ (in the Dirichlet case). More specifically, the procedure of proving Theorem \ref{specthm} is as follows. We first derive approximations of eigenvalues with \textit{uniform} error estimates based on our investigation on the zeros of various expressions of Bessel related functions. Following this, we relate the eigenvalue counting problems to certain weighted lattice point counting problems associated with two types of special planar domains, with weights coming from multiplicities of eigenvalues. At last we count lattice points by applying estimates obtained from analytic number theory and harmonic analysis, and conclude the problems with satisfactory remainder estimates. One major difficulty we encounter lies in approximating eigenvalues with \textit{uniform} error terms. The eigenvalues we aim to study can be determined as the squares of the zeros of certain cross-products of Bessel functions or derivatives of ultraspherical Bessel functions, as well as the squares of the zeros of the derivative of the ultraspherical Bessel function of the first kind. Notice that Bessel functions and ultraspherical Bessel functions are widely used in mathematics, physics and engineering science to analyze boundary value problems with spherical or cylindrical geometry. 
Extensive research results have been obtained regarding them. To mention a few examples, McMahon \cite{mcmahon:1894} in 1894 gave asymptotics of the zeros of the Bessel and certain related functions (see also \cite[P. 371 and 441]{abram:1972}); Cochran \cite{cochran:1964, Cochran:1966a, Cochran:1966} in the 1960s examined properties (including asymptotics, analyticity, etc.) of the zeros of cross-products of Bessel functions and their derivatives (see also \cite[P. 374]{abram:1972}); Filonov, Levitin, Polterovich and Sher \cite{FLPS:2024} very recently obtained some nice results on uniform enclosures for the phase and zeros of Bessel functions and their derivatives. However, to our knowledge the cross-product of derivatives of ultraspherical Bessel functions has not been well studied so far, although its applications in physics are becoming increasingly important. We also find that the known asymptotics of the aforementioned zeros, when available, are not of the type we require, namely with uniform error terms. In this paper, the properties of zeros we study include asymptotics (with both uniform and nonuniform remainder estimates), upper and lower bounds and so on. Here we will briefly explain some of the results we have obtained by taking the zeros $x''_{\nu,k}$ of the cross-product of derivatives of ultraspherical Bessel functions $j_{\nu,\delta}'(Rx)y_{\nu,\delta}'(rx)-j_{\nu,\delta}'(rx)y_{\nu,\delta}'(Rx)$ (see \eqref{eigenequation1}) as an example. For detailed results, please refer to the subsequent sections. For any fixed $\nu\geq |\delta|$ and sufficiently large $k$, we show in Theorem \ref{thm2.20} that
\begin{equation*} x''_{\nu,k}=\frac{\pi}{R-r}k+O_{\nu}\left(\frac{1}{k} \right). \end{equation*}
(See Theorem \ref{thm4.12} for an analogous result for the zeros of the derivative of the ultraspherical Bessel function.) In the special case when $\delta=0$ (that is, when the ultraspherical Bessel functions are simply the usual Bessel functions), this asymptotics can be derived directly from \cite[9.5.28--9.5.31 on P. 374]{abram:1972}. The implicit constant in the error term is not uniform in $\nu$. However, the uniformity of the implicit constant in $\nu$ and $k$ is vital for the eigenvalue counting problems. After a somewhat long and technical computation, we manage to achieve such uniformity. For example, we show in Theorem \ref{approximation} that
\begin{equation*} x''_{\nu, k}=F(\nu,k)+O\left((\nu+k)^{-1}\right) \end{equation*}
in a certain range of $x''_{\nu,k}$, where $F$ is a fixed function homogeneous of degree one. Theorem \ref{approximation} contains results in all ranges of $x''_{\nu,k}$. See Theorem \ref{thm4.11} for an asymptotics of the zeros of the derivative of the ultraspherical Bessel function with uniform error terms. Besides asymptotics, we also obtain upper and lower bounds for the zeros. See Propositions \ref{case0}, \ref{prop2}, and \ref{prop4.10}. In addition, we provide in Section \ref{sec3} the number of zeros of the cross-product of derivatives of ultraspherical Bessel functions within a large circle and show that all its zeros are real and simple. Apart from the difficulty of approximating eigenvalues, another difficulty of extending planar results (like those in \cite{GMWW:2019}) to higher-dimensional ones lies in handling the varying multiplicities of eigenvalues. Our resolution to this (as was done in \cite{Guo}) is to transfer the multiplicities to weights on the corresponding lattice points; that is, different lattice points may be counted different numbers of times.
As a result, we have to deal with certain weighted planar lattice point counting problems. We then solve them by decomposing them into finitely many standard lattice point counting problems without weights but associated with planar domains of decreasing sizes. For details, please refer to Section \ref{reduction-sec}. The novel exponent $2\theta^*$ in Theorem \ref{specthm} arises from the application of the latest development in the Gauss circle and Dirichlet divisor problems, achieved by Li and the last author in \cite{LY2023}, to the lattice point counting problems encountered in Section \ref{sec5}. Inspired by new ideas presented by Bourgain in \cite{Bourgain:2017} and by Bourgain and Watt in \cite{BW2018, BWpreprint}, Li and the last author combined recent advancements of the decoupling theory, made by Guth and Maldague \cite{GM:2022}, with results on some diophantine counting problems to improve results on the first spacing problem of the circle and divisor problems. Furthermore, by incorporating Huxley's work in \cite{Huxley:2003} on the second spacing problem, they obtained their improved exponential sum estimates in \cite[Theorem 4.2]{LY2023}. See Section \ref{sec6} for more elaboration. Based on their work, we have formulated an estimate for the rounding error sums, which holds within a limited range but under weaker assumptions. This result, presented independently in Section \ref{sec6}, is particularly applicable to our problems. \emph{Notations:} For functions $f$ and $g$ with $g$ taking nonnegative real values, $f\lesssim g$ means $|f|\leqslant Cg$ for some constant $C$. If $f$ is nonnegative, $f\gtrsim g$ means $g\lesssim f$. The notation $f\asymp g$ means that $f\lesssim g$ and $g\lesssim f$. If we write a subscript (for instance $\lesssim_{\sigma}$), we emphasize that the implicit constant depends on that specific subscript. We set $\mathbb{Z}_+:=\mathbb{N}\cup\{0\}$. \section{Zeros of cross-products of Bessel functions}\label{zeros} Let $0<r<R<\infty$ be two given numbers. For any $\nu\geq 0$ we would like to study positive zeros of cross-product combinations of Bessel functions \begin{equation} \mathfrak{f}_{\nu}(x):=J_{\nu}(Rx)Y_{\nu}(rx)-J_{\nu}(rx)Y_{\nu}(Rx), \label{eigenequation} \end{equation} \begin{equation} \mathfrak{g}_{\nu}(x):=J_{\nu}'(Rx)Y_{\nu}'(rx)-J_{\nu}'(rx)Y_{\nu}'(Rx) \label{eigenequation2} \end{equation} and \begin{equation} \mathfrak{h}_{\nu,\delta}(x):=j_{\nu}'(Rx)y_{\nu}'(rx)-j_{\nu}'(rx)y_{\nu}'(Rx), \label{eigenequation1} \end{equation} where $J_{\nu}$ and $Y_{\nu}$ are Bessel functions of the first and second kind of order ${\nu}$, \begin{equation*} j_{\nu}(x)=j_{\nu,\delta}(x):=x^{-\delta}J_{\nu}(x) \end{equation*} and \begin{equation*} y_{\nu}(x)=y_{\nu,\delta}(x):=x^{-\delta}Y_{\nu}(x) \end{equation*} with $\delta\in\mathbb{R}$ and $\nu\geq |\delta|$.\footnote{See Remark \ref{rm111} for the reason why we only consider the case $\nu\geq |\delta|$.} In particular when $\delta=0$ the functions $j_{\nu}$ and $y_{\nu}$ coincide with the Bessel functions hence $\mathfrak{g}_{\nu}=\mathfrak{h}_{\nu,0}$; when $\delta=d/2-1$ and $\nu=n+d/2-1$ with integer $n\geq 0$ and dimension $d\geq 3$, the functions $j_{\nu}$ and $y_{\nu}$ are ultraspherical Bessel functions of the first and second kind that we will deal with in the eigenvalue counting problems. The motivation for studying the cross-product $\mathfrak{h}_{\nu,\delta}$ (resp., $\mathfrak{f}_{\nu}$) lies in the Neumann (resp., Dirichlet) Laplacian on spherical shells. 
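We briefly recall, for orientation, the standard separation-of-variables computation behind this. A solution of $-\Delta w=\lambda^2 w$ on $\mathbb{S}$ of the form $w(x)=u(|x|)\,Y_n(x/|x|)$, with $Y_n$ a spherical harmonic of degree $n\geq 0$, has radial part
\begin{equation*} u(\rho)=\rho^{-(d/2-1)}\left(A J_{\nu}(\lambda \rho)+B Y_{\nu}(\lambda \rho)\right)=\lambda^{\delta}\left(A j_{\nu,\delta}(\lambda \rho)+B y_{\nu,\delta}(\lambda \rho)\right), \qquad \nu=n+d/2-1,\quad \delta=d/2-1, \end{equation*}
for some constants $A$ and $B$. The Dirichlet conditions $u(r)=u(R)=0$ admit a nontrivial pair $(A,B)$ exactly when $\mathfrak{f}_{\nu}(\lambda)=0$, while the Neumann conditions $u'(r)=u'(R)=0$ admit one exactly when $\mathfrak{h}_{\nu,\delta}(\lambda)=0$.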
In this section, we primarily focus on the study of $\mathfrak{h}_{\nu,\delta}$ (including $\mathfrak{g}_{\nu}$), as $\mathfrak{f}_{n}$ with integer $n$ has already been investigated in \cite{GMWW:2019}, and the generalization of those results from $\mathfrak{f}_{n}$ to $\mathfrak{f}_{\nu}$ is essentially the same in nature. For completeness, we still list below results for $\mathfrak{f}_{\nu}$, though without proofs. One main goal of this section is to find approximations of positive zeros of $\mathfrak{f}_{\nu}(x)$ and $\mathfrak{h}_{\nu,\delta}(x)$ with uniform error terms, which are vital in our study of the two-term Weyl's law. To achieve this goal, we will put much effort into establishing asymptotics of the aforementioned cross-products. The desired approximations will be presented in Theorem \ref{approximation}. We will also provide approximations with nonuniform error terms, upper and lower bounds and so on. The study of the Dirichlet/Neumann Laplacian on balls is relatively easier. One needs to investigate the zeros of the derivative of the first-kind ultraspherical Bessel function. Analogous results will be presented in Section \ref{subsec4.2}. Throughout this paper we denote \begin{equation*} g(x)=\left(\sqrt{1-x^2}-x\arccos x\right)/\pi, \end{equation*} \begin{equation*} G(x)=\left\{ \begin{aligned} &Rg(x/R)-rg(x/r)\;\; &\mathrm{for}&\;0\leq x\leq r,\\ &Rg(x/R)\;\; &\mathrm{for}&\;r\leq x\leq R, \end{aligned} \right. \end{equation*} and \begin{equation*} \mathcal{G}_\nu(x)=x G(\nu/x). \end{equation*} These functions naturally arise in the asymptotics of Bessel functions and their cross-products, respectively. See Lemma \ref{app-1} and the following lemmas. \subsection{Asymptotics of cross-products} We first study $\mathfrak{f}_{\nu}(x)$ and $\mathfrak{g}_{\nu}(x)$, then $\mathfrak{h}_{\nu,\delta}(x)$ (based on results of $\mathfrak{g}_{\nu}(x)$). \begin{lemma}\label{case111} For any $c>0$ and all $\nu\ge0$, if $rx\geq \max\{(1+c)\nu, 10\}$ then \begin{equation}\label{case111-1} \mathfrak{f}_{\nu}(x)=-\frac{2}{\pi}\frac{\sin\left( \pi \mathcal{G}_\nu(x)\right)+O_c\left(x^{-1}\right)}{\left(\left(Rx\right)^2-\nu^2\right)^{1/4} \left(\left(rx\right)^2-\nu^2\right)^{1/4}} \end{equation} and \begin{equation}\label{case111-1NC} \mathfrak{g}_{\nu}(x)=-\frac{2}{\pi Rr}\frac{\sin\left( \pi \mathcal{G}_\nu(x)\right)+O_c\left(x^{-1}\right)}{x^2\left(\left(Rx\right)^2-\nu^2\right)^{-1/4} \left(\left(rx\right)^2-\nu^2\right)^{-1/4}}. \end{equation} \end{lemma} \begin{proof} We apply the asymptotics of Bessel functions from Lemma \ref{app-1} to all factors in $\mathfrak{f}_{\nu}(x)$ and $\mathfrak{g}_{\nu}(x)$, and subsequently utilize the angle difference formula for the sine function. \end{proof} \begin{lemma}\label{case222} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ and all sufficiently large $\nu$, if $\nu+\nu^{1/3+\varepsilon}\leq rx\leq (1+c)\nu$ then \begin{equation} \mathfrak{f}_{\nu}(x)=-\frac{2}{\pi}\frac{\sin\left( \pi \mathcal{G}_\nu(x)\right)+O\left(z^{-3/2}\right)}{\left(\left(Rx\right)^2-\nu^2\right)^{1/4} \left(\left(rx\right)^2-\nu^2\right)^{1/4}} \label{case222-1} \end{equation} and \begin{equation} \label{case222-1NC} \mathfrak{g}_{\nu}(x)=-\frac{2}{\pi Rr}\frac{\sin\left( \pi \mathcal{G}_\nu(x)\right)+O\left(z^{-3/2}\right)}{x^2 \left(\left(Rx\right)^2-\nu^2\right)^{-1/4} \left(\left(rx\right)^2-\nu^2\right)^{-1/4}}, \end{equation} where $z$ is determined by the equation $rx=\nu+z \nu^{1/3}$. 
\end{lemma} \begin{proof} Notice that $Rx>rx\geq \nu+\nu^{1/3+\varepsilon}$ implies that $Rx\geq (1+c')\nu$ with some constant $c'>0$. If $c$ is small then $\nu/rx$ is close to $1$, \begin{equation*} rxg\left(\frac{\nu}{rx}\right)\asymp rx\left( 1-\frac{\nu}{rx}\right)^{3/2}=\left(\frac{\nu}{rx} \right)^{1/2}z^{3/2} \asymp z^{3/2}\geq \nu^{\frac{3}{2}\varepsilon} \end{equation*} and \begin{equation*} x^{-1}\lesssim z^{-3/2}. \end{equation*} Applying Lemma \ref{app-1} to $J_{\nu}(Rx)$, $Y_{\nu}(Rx)$, $J_{\nu}'(Rx)$ and $Y_{\nu}'(Rx)$, Lemma \ref{app-2} to $J_{\nu}(rx)$, $Y_{\nu}(rx)$, $J_{\nu}'(rx)$ and $Y_{\nu}'(rx)$ and then the angle difference formula readily yields the desired asymptotics. \end{proof} \begin{lemma} \label{case2.5} There exist strictly decreasing real-valued $C^1$ functions $\psi_i$: $\mathbb{R} \rightarrow (0, 1/4)$, $i=1,2$, such that $\psi_i(0)=1/12$, $\lim_{x\rightarrow -\infty}\psi_i(x)=1/4$, $\lim_{x\rightarrow \infty}\psi_i(x)=0$ and the images of $\psi'_i$ are bounded intervals. For any $\varepsilon>0$ and all sufficiently large $\nu$, if $\nu-\nu^{1/3+\varepsilon}\leq rx \leq \nu+\nu^{1/3+\varepsilon}$ then \begin{equation} \mathfrak{f}_\nu(x)=-\frac{2^{5/6}}{\pi^{1/2}}\frac{\sin\left(\pi \mathcal{G}_\nu(x)+\pi \psi_1\left(z\right)\right)+O\left(\nu^{-2/3+2.5\varepsilon}\right)}{\nu^{1/3}\left(\left(Rx\right)^2-\nu^2\right)^{1/4} \left(\mathrm{Ai}^2+\mathrm{Bi}^2\right)^{-1/2}\left(-2^{1/3}z\right)}\label{case2.5-1} \end{equation} and \begin{equation}\label{case2.5-1NC} \mathfrak{g}_\nu(x)\!=-\frac{2^{7/6}}{\pi^{1/2}}\frac{\sin\left(\pi \mathcal{G}_\nu(x)-\pi \psi_2\left(z\right)\right)+O\left(\nu^{-2/3+2.75\varepsilon}\right)}{\nu^{2/3}Rx\left(\left(Rx\right)^2-\nu^2\right)^{-1/4} \left(\mathrm{Ai'}^2+\mathrm{Bi'}^2\right)^{-1/2}\left(-2^{1/3}z\right)}, \end{equation} where $z$ is determined by the equation $rx=\nu+z \nu^{1/3}$. \end{lemma} \begin{proof} We only prove the $\mathfrak g_{\nu}$ part; for the $\mathfrak{f}_{\nu}$ part see \cite[Lemma 2.3]{GMWW:2019}. Notice that $Rx>rx\geq \nu-\nu^{1/3+\varepsilon}$ implies $Rx>(1+c')\nu$ for some constant $c'>0$ whenever $\nu$ is sufficiently large. Denote \begin{equation*} rx=\nu+z\nu^{1/3} \quad \textrm{with $-\nu^\varepsilon\leq z\leq \nu^\varepsilon$}. \end{equation*} Applying Lemma \ref{app-1} to $J'_\nu(Rx)$ and $Y'_\nu(Rx)$ and Lemma \ref{9.3.4analogue} to $J'_\nu(rx)$ and $Y'_\nu(rx)$ yields \begin{align} \mathfrak{g}_\nu(x)&=-\frac{2^{7/6}\left(\left(Rx\right)^2-\nu^2\right)^{1/4}\sqrt{\mathrm{Ai}'^2+\mathrm{Bi}'^2}(-2^{1/3}z)} {\pi^{1/2}Rx\nu^{2/3}} \cdot \nonumber \\ &\bigg[\sin\!\left(\! \pi Rx g\!\left(\frac{\nu}{Rx}\right)\!-\frac{3\pi}{4}\!\right) \! \frac{\mathrm{Ai}'}{\sqrt{\mathrm{Ai}'^2+\mathrm{Bi}'^2}}\left(-2^{1/3}z\right)+ \label{equ1NC}\\ & \ \cos\!\left(\! \pi R x g\!\left(\frac{\nu}{Rx}\right)\!-\frac{3\pi}{4}\!\right) \! \frac{\mathrm{Bi}'}{\sqrt{\mathrm{Ai}'^2+\mathrm{Bi}'^2}}\left(-2^{1/3}z\right)\!+O\left(\nu^{-2/3+2.75\varepsilon}\right)\!\bigg], \label{equ2NC} \end{align} where we have used facts that $\mathrm{Ai}'^2(x)+\mathrm{Bi}'^2(x)$ has an absolute positive lower bound (see \cite[10.4.10 and 10.4.80]{abram:1972}) and $x\asymp \nu$. Let $t_0=-\infty$ and $t_m$ ($m\in\mathbb{N}$) be the $m$th zero of the function $\textrm{Ai}'(-x)$. 
Let \begin{equation*} \mathcal{A}(x)=\left\{ \begin{array}{ll} -(m-2)\pi+\arctan\left( \frac{\mathrm{Bi'}}{\mathrm{Ai'}}(-x)\right), & \textrm{$x\in (t_{m-1}, t_m)$, $m\in\mathbb{N}$,}\\ -(m-2)\pi-\frac{1}{2}\pi, & \textrm{$x=t_m$, $m\in\mathbb{N}$,} \end{array}\right. \end{equation*} be a continuous branch of the inverse tangent function $\arctan\left(\frac{\mathrm{Bi'}}{\mathrm{Ai'}}(-x)\right)$ and \begin{equation*} \beta(z)=\frac{1}{\pi} \mathcal{A}\left(2^{1/3}z\right). \end{equation*} We can then use this function $\beta$ and the angle sum formula to rewrite \eqref{equ1NC} and \eqref{equ2NC} as \begin{equation} \left[\sin\left( \pi R x g\left(\frac{\nu}{Rx}\right)+\pi \beta(z)-\frac{3}{4}\pi\right)+O\left(\nu^{-2/3+2.75\varepsilon}\right)\right].\label{case2.5-3NC} \end{equation} Set \begin{equation*} \psi_2(z)=\left\{ \begin{array}{ll} -\beta(z)-\frac{2\sqrt{2}}{3\pi}z^{3/2}+\frac{3}{4}, & \textrm{$z\geq 0$},\\ -\beta(z)+\frac{3}{4}, & \textrm{$z\leq 0$}. \end{array}\right. \end{equation*} By rewriting \eqref{case2.5-3NC} with this $\psi_2$ and the function $\mathcal{G}_\nu$ and using the asymptotics \begin{equation*} rx g\left(\frac{\nu}{rx}\right)=\frac{2\sqrt{2}}{3\pi}z^{3/2}+O\left(z^{2.5}\nu^{-2/3}\right) \quad \textrm{for $z\geq 0$}, \end{equation*} we get \eqref{case2.5-1NC}. It remains to check the properties of $\psi_2$. We first show that $\psi_2'(z)\leq 0$ with the equality holding only at $z=0$. Indeed, by using properties of Airy functions (10.4.1 and 10.4.10 in \cite[P. 446]{abram:1972}) we have \begin{equation*} \mathcal{A}'(x)=-\frac{1}{\pi}\frac{x}{(\mathrm{Ai}'^2+\mathrm{Bi}'^2 )(-x)}. \end{equation*} Hence $\psi_2'(z)\leq 0$ if $z\leq 0$ while the equality holds whenever $z=0$. For $z>0$, $\psi_2'(z)<0$ is equivalent to \begin{equation*} \pi z^{-1/2} \left(\mathrm{Ai}'^2+\mathrm{Bi}'^2 \right)(-z)>1 \quad \textrm{for all $z>0$} \end{equation*} which follows from the following two facts. Firstly, \begin{equation*} \lim_{z\rightarrow+\infty} \pi z^{-1/2} \left(\mathrm{Ai}'^2+\mathrm{Bi}'^2 \right)(-z)=1, \end{equation*} which is an easy consequence of the asymptotics of $\mathrm{Ai}'$ and $\mathrm{Bi}'$ (see \cite[P. 449]{abram:1972}). Secondly, for $z>0$ \begin{equation*} z^{-1/2}\left(\mathrm{Ai}'^2+\mathrm{Bi}'^2\right)(-z)=\frac{1}{2}\xi\left(J_{2/3}^2+Y_{2/3}^2\right)(\xi), \quad \textrm{with $\xi=\frac{2}{3}z^{3/2}$}, \end{equation*} is a decreasing function of $z$ (see \S 7.3 in \cite[P. 342]{olver:1997}). Note that the above identity follows from 9.1.3 in \cite[P. 358]{abram:1972} and 10.4.28 in \cite[P. 447]{abram:1972}. The continuity of $\psi_2'$ is obvious. The limit of $\psi_2$ at $\infty$ is easy to get by using asymptotics \cite[10.4.81]{abram:1972} of $\mathcal{A}(x)$, while the limit at $-\infty$ can be obtained by straightforward computation. Since $\psi_2'(z)\rightarrow 0$ as $|z|\rightarrow \infty$, its image must be a bounded interval. 
\end{proof} \begin{lemma}\label{case3} For any $\varepsilon>0$ and all sufficiently large $\nu$, if $r\nu/R<rx\leq \nu-\nu^{1/3+\varepsilon}$ then \begin{equation}\label{case3-1} \mathfrak f_\nu(x)=Y_\nu(rx)\frac{\mathrm{Ai}\!\left(-\left(\frac{3\pi}{2} \mathcal{G}_\nu(x)\right)^{2/3}\right)+O\!\left(\nu^{-4/3}\max\!\left\{1, \mathcal{G}_\nu(x)^{1/6}\right\}\!\right)} {\left(\left(Rx\right)^2-\nu^2\right)^{1/4}\left(12\pi \mathcal{G}_\nu(x)\right)^{-1/6}} \end{equation} and \begin{equation}\label{case3-1NC} \mathfrak g_\nu(x)=-2Y'_\nu(rx)\frac{\mathrm{Ai'}\!\left(-\left(\frac{3\pi}{2} \mathcal{G}_\nu(x)\right)^{2/3}\right) \!+\!O\!\left(\nu^{-2/3}\min\!\left\{1, \mathcal{G}_\nu(x)^{-1/6}\right\}\!\right)} {Rx\left(\left(Rx\right)^2-\nu^2\right)^{-1/4}\left(12\pi \mathcal{G}_\nu(x)\right)^{1/6}}, \end{equation} where $Y_\nu(rx)<0$ and $Y'_\nu(rx)>0$. If we further assume that $\mathcal{G}_\nu(x)>1$, then \begin{equation} \mathfrak f_\nu(x)=\sqrt{\frac{2}{\pi}} Y_\nu(rx)\frac{\sin\left( \pi \mathcal{G}_\nu(x)+\frac{\pi}{4}\right)+O\left(\mathcal{G}_\nu(x)^{-1} \right)}{\left(\left(Rx\right)^2-\nu^2\right)^{1/4}} \label{case3-2} \end{equation} and \begin{equation} \mathfrak g_\nu(x)=-\sqrt{\frac{2}{\pi}} Y'_\nu(rx)\frac{\sin\left( \pi \mathcal{G}_\nu(x)-\frac{\pi}{4}\right)+O\left(\mathcal{G}_\nu(x)^{-1} \right)}{Rx\left(\left(Rx\right)^2-\nu^2\right)^{-1/4}}. \label{case3-2NC} \end{equation} \end{lemma} \begin{remark}\label{333} It is trivial to follow from the asymptotics \eqref{case3-1NC} and \eqref{case3-2NC} to get that if $\mathcal{G}_\nu(x)\leq 1$ then \begin{equation}\label{case3-1hnuNC} \mathfrak g_\nu(x)=-2Y'_\nu(rx)\frac{\mathrm{Ai'}\!\left(-\left(\frac{3\pi}{2} \mathcal{G}_\nu(x)\right)^{2/3}\right) \!+\!O\!\left(\nu^{-2/3}\right)} {Rx\left(\left(Rx\right)^2-\nu^2\right)^{-1/4}\left(12\pi \mathcal{G}_\nu(x)\right)^{1/6}}, \end{equation} and, if $\mathcal{G}_\nu(x)>1$ then \begin{equation} \mathfrak g_\nu(x)=-\sqrt{\frac{2}{\pi}} Y'_\nu(rx)\frac{\sin\left( \pi \mathcal{G}_\nu(x)-\frac{\pi}{4}\right)+O\left(\max\left\{\mathcal{G}_\nu(x)^{-1}, \nu^{-\frac{2}{3}-\frac{\varepsilon}{2}}\right\} \right)}{Rx\left(\left(Rx\right)^2-\nu^2\right)^{-1/4}} \label{111} \end{equation} Below we will use \eqref{case3-1hnuNC} and \eqref{111} instead of \eqref{case3-1NC} and \eqref{case3-2NC}. In particular the error term of \eqref{111} is trivially weaker than that of \eqref{case3-2NC} but good enough for later application. The reason for this treatment is that we will reduce the study of $\mathfrak{h}_{\nu,\delta}(x)$ to that of $\mathfrak g_\nu(x)$, but the generalization of \eqref{case3-2NC} from $\mathfrak g_\nu(x)$ to $\mathfrak{h}_{\nu,\delta}(x)$ is not valid while that of \eqref{111} is. We will illustrate this point later. \end{remark} \begin{proof}[Proof of Lemma \ref{case3}] We focus on $\mathfrak{g}_{\nu}$ below; for $\mathfrak{f}_{\nu}$ see \cite[Lemma 2.6]{GMWW:2019}. Notice that \begin{equation*} \mathfrak{g}_\nu(x)=Y'_\nu(rx)\left(J'_\nu(Rx)-\frac{J'_\nu(rx)}{Y'_\nu(rx)}Y'_\nu(Rx)\right). \end{equation*} We will find the asymptotics of $J'_\nu(Rx)$ and show $(J'_\nu(rx)/Y'_\nu(rx))Y'_\nu(Rx)$ is relatively small. For the convenience of using Olver's asymptotics \eqref{jnuse111NC} and \eqref{ynuse111NC}, we denote \begin{equation*} Rx=\nu z_{R} \quad \textrm{and}\quad rx=\nu z_{r}. \end{equation*} We have two useful estimates. 
First, since $1<z_R< R/r$, the number $\zeta_R:=\zeta(z_R)$, determined by \eqref{def-zeta1}, is negative such that \begin{equation*} 0<(-\zeta_R)^{3/2}\lesssim 1. \end{equation*} Second, it follows from $r/R< z_r\leq 1-\nu^{-2/3+\varepsilon}$ and \eqref{bound-zeta-} that the number $\zeta_r:=\zeta(z_r)$, determined by \eqref{def-zeta2}, is positive such that \begin{equation*} \nu^{-1+1.5\varepsilon}\lesssim \zeta_r^{3/2}\lesssim 1 \end{equation*} whenever $\nu$ is sufficiently large. With \begin{equation*} \nu^{\varepsilon}\lesssim \nu^{2/3}\zeta_r\lesssim \nu^{2/3}, \end{equation*} applying \eqref{jnuse111NC}, \eqref{ynuse111NC} and asymptotics of Airy functions with positive arguments (see \cite[P. 448--449]{abram:1972}) yields \begin{equation*} J'_\nu(rx)=\left(2\pi\right)^{-1/2}(rx)^{-1}\left(\nu^2-(rx)^2\right)^{1/4}e^{-\frac{2}{3}\nu\zeta_r^{3/2}} \left(1+O\left(\nu^{-1}\zeta_r^{-3/2}\right)\right) \end{equation*} and \begin{equation*} Y'_\nu(rx)=\left(2/\pi\right)^{1/2}(rx)^{-1}\left(\nu^2-(rx)^2\right)^{1/4}e^{\frac{2}{3}\nu\zeta_r^{3/2}} \left(1+O\left(\nu^{-1}\zeta_r^{-3/2}\right)\right). \end{equation*} Thus $Y'_\nu(rx)>0$ and \begin{equation*} \frac{J'_\nu(rx)}{Y'_\nu(rx)}=\frac{1}{2}e^{-\frac{4}{3}\nu\zeta_r^{3/2}}\left(1+O\left(\nu^{-1}\zeta_r^{-3/2}\right)\right) =O\left(e^{-\nu^{\varepsilon}}\right). \end{equation*} Therefore \begin{equation*} \mathfrak{g}_\nu(x)=Y'_\nu(rx)\left(J'_\nu(\nu z_{R})+Y'_\nu(\nu z_{R})O\left(e^{-\nu^{\varepsilon}}\right)\right). \end{equation*} We next apply \eqref{jnuse111NC} and \eqref{ynuse111NC} to $J'_\nu(\nu z_{R})$ and $Y'_\nu(\nu z_{R})$ respectively. By using a simple identity \begin{equation}\label{222} \nu^{2/3}\left(-\zeta_R\right)=\left(\frac{3\pi}{2} \mathcal{G}_\nu(x)\right)^{2/3}, \end{equation} we readily obtain the main term in \eqref{case3-1NC}. Since we only have \begin{equation*} 0<\nu^{2/3}(-\zeta_R)\lesssim \nu^{2/3}, \end{equation*} the quantity $\nu^{2/3}(-\zeta_R)$ is not necessarily large. To obtain the error term in \eqref{case3-1NC}, we discuss depending on whether $\nu^{2/3}(-\zeta_R)>1$ or not. If it is greater than $1$, using bounds of Airy functions with negative arguments (see \cite[P. 448--449]{abram:1972}) yields a bound $O(\nu^{-2/3}(\nu^{2/3}|\zeta_R|)^{-1/4})$. Otherwise, using trivial bounds of Airy functions yields a bound $O(\nu^{-2/3})$. With these two bounds we obtain \eqref{case3-1NC}. Applying the asymptotics of $\mathrm{Ai}'(-r)$ to \eqref{case3-1NC} yields \eqref{case3-2NC}. \end{proof} We now study $\mathfrak{h}_{\nu,\delta}(x)$. Expanding derivatives in $\mathfrak{h}_{\nu,\delta}(x)$ and factoring out $x^{-2\delta}$ yields \begin{equation} \mathfrak{h}_{\nu,\delta}(x)=(Rr)^{-\delta}x^{-2\delta}\widetilde{\mathfrak{h}}_{\nu,\delta}(x), \label{444} \end{equation} where \begin{equation} \widetilde{\mathfrak{h}}_{\nu,\delta}(x):=\mathfrak{g}_{\nu}(x)+\mathscr{E}_{\nu,\delta}(x) \label{555} \end{equation} with \begin{align*} \mathscr{E}_{\nu,\delta}(x)=&\delta^2 (Rr)^{-1}x^{-2}\mathfrak{f}_{\nu}(x)-\delta R^{-1}x^{-1}\left(J_{\nu}(Rx)Y_{\nu}'(rx)-J_{\nu}'(rx)Y_{\nu}(Rx) \right)\\ &-\delta r^{-1}x^{-1}\left(J_{\nu}'(Rx)Y_{\nu}(rx)- J_{\nu}(rx)Y_{\nu}'(Rx)\right). \end{align*} It is obvious that positive zeros of $\mathfrak{h}_{\nu,\delta}(x)$ are exactly positive zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}(x)$. 
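To make the algebra behind \eqref{444}--\eqref{555} explicit, note that, directly from the definitions of $j_{\nu,\delta}$ and $y_{\nu,\delta}$,
\begin{equation*} j_{\nu,\delta}'(x)=x^{-\delta}\left(J_{\nu}'(x)-\delta x^{-1}J_{\nu}(x)\right) \quad \textrm{and} \quad y_{\nu,\delta}'(x)=x^{-\delta}\left(Y_{\nu}'(x)-\delta x^{-1}Y_{\nu}(x)\right); \end{equation*}
inserting these (with arguments $Rx$ and $rx$) into \eqref{eigenequation1}, pulling out the common factor $(Rx)^{-\delta}(rx)^{-\delta}=(Rr)^{-\delta}x^{-2\delta}$ from each product and expanding yields \eqref{444} with $\widetilde{\mathfrak{h}}_{\nu,\delta}$ and $\mathscr{E}_{\nu,\delta}$ as in \eqref{555}.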
One can check that the remainder $\mathscr{E}_{\nu,\delta}(x)$ can be absorbed into error terms of asymptotics \eqref{case111-1NC}, \eqref{case222-1NC}, \eqref{case2.5-1NC}, \eqref{case3-1hnuNC} and \eqref{111} of $\mathfrak{g}_{\nu}(x)$. The computation is routine and tedious. For instance, we provide partial computation related to the term $|J_{\nu}'(Rx)Y_{\nu}(rx)|x^{-1}$. If $x\geq \max\{(1+c)\nu, 10\}$ then $J_{\nu}(x)$, $Y_{\nu}(x)$, $J_{\nu}'(x)$ and $Y_{\nu}'(x)$ are all of size $O(x^{-1/2})$. Under assumptions of Lemma \ref{case111} we have $|J_{\nu}'(Rx)Y_{\nu}(rx)|x^{-1}=O(x^{-2})$, which can be absorbed in the error term $O(x^{-1})$ of the asymptotics \eqref{case111-1NC} of $\mathfrak{g}_{\nu}(x)$. Under assumptions of Lemma \ref{case3} together with $\mathcal{G}_\nu(x)\leq 1$, we can use \eqref{bound-zeta+} and \eqref{222} to obtain that \begin{equation*} \mathcal{G}_\nu(x)\asymp \nu^{-1/2}(Rx-\nu)^{3/2}, \end{equation*} $0<Rx-\nu\lesssim \nu^{1/3}$ and $0<rx-r\nu/R\lesssim \nu^{1/3}$. With these estimates it is easy to show that $|J_{\nu}'(Rx)Y_{\nu}(rx)|x^{-1}$ can be absorbed in the error term of \eqref{case3-1hnuNC}. Under assumptions of Lemma \ref{case3} but with $\mathcal{G}_\nu(x)>1$, we have \begin{equation*} \left(\left(Rx\right)^2-\nu^2\right)^{-1/4}\frac{|J_{\nu}'(Rx)Y_{\nu}(rx)|}{|Y'_\nu(rx)|}\lesssim \left(\nu^2-(rx)^2\right)^{-1/2}. \end{equation*} This may not be majorized by the error $O(\mathcal{G}_\nu(x)^{-1})$ in \eqref{case3-2NC}. Hence we add its trivial bound $O(\nu^{-2/3-\varepsilon/2})$ into \eqref{case3-2NC} to rewrite it into the form \eqref{111}. \begin{lemma} \label{777} The asymptotics \eqref{case111-1NC}, \eqref{case222-1NC}, \eqref{case2.5-1NC}, \eqref{case3-1hnuNC} and \eqref{111} in Lemmas 2.1--2.4 still hold with $\mathfrak{g}_{\nu}(x)$ replaced by $\widetilde{\mathfrak{h}}_{\nu,\delta}(x)$. \end{lemma} \begin{remark} Obviously error terms of asymptotics of $\widetilde{\mathfrak{h}}_{\nu,\delta}(x)$ may also depend on $\delta$. \end{remark} Through Lemma \ref{777}, roughly speaking, the study of the positive zeros of $\mathfrak{h}_{\nu,\delta}(x)$ can be reduced to that of $\mathfrak{g}_{\nu}(x)$. \subsection{Properties of zeros of cross-products} \label{subsec2.2} We study positive zeros of $\mathfrak{f}_{\nu}$, $\mathfrak{g}_{\nu}$ and $\mathfrak{h}_{\nu,\delta}$ in this subsection. It is well-known that $\mathfrak{f}_{\nu}$, $\mathfrak{g}_{\nu}$ are even functions whose zeros are all real and simple. See Cochran \cite{cochran:1964}. For each nonnegative $\nu$, we denote the sequence of positive zeros of $\mathfrak{f}_{\nu}$ by $0<x_{\nu, 1}<\cdots<x_{\nu, k}<\cdots$, and similarly denote positive zeros of $\mathfrak{g}_{\nu}$ by $x'_{\nu, k}$ (with the convention of beginning with $k=0$ rather than with $k=1$ if $\nu>0$). We know that \begin{equation*} Rx_{\nu, k}>\nu \textrm{ and } R x'_{\nu, k}>\nu. \end{equation*} The former is an extension of \cite[Lemma 2.5]{GMWW:2019} from integer $n$ to nonnegative $\nu$. The latter is a consequence of \cite[Lemma 5]{Cochran:1966}. In fact we know that $x_{\nu, k}$ and $x'_{\nu, k}$ both go to infinity as $\nu$ (or $k$) goes to infinity. This will be useful later. \begin{proposition} \label{case0} For all real $\nu\geq 0$ and integer $k\geq 0$ with $\nu+k\geq 1$, we have \begin{equation*} x_{\nu,k}, x'_{\nu,k}>\frac{1}{R}\sqrt{\nu^2+\pi^2\left(k-\frac{1}{4} \right)^2}. \end{equation*} \end{proposition} \begin{proof} See \cite[Lemma 2.5]{GMWW:2019} for the proof of the lower bound of $x_{\nu,k}$. 
Concerning $x'_{0, k}$, since $J'_0=-J_1$ and $Y'_0=-Y_1$ (\cite[P. 361]{abram:1972}), it is also a zero of $\mathfrak f_1$ which implies \begin{equation*} Rx'_{0,k}>\sqrt{1+\pi^2\left(k-\frac{1}{4}\right)^2}>\sqrt{\pi^2\left(k-\frac{1}{4}\right)^2}, \end{equation*} as desired. It remains to consider $x'_{\nu, k}$ with $\nu>0$. If $\mathtt{j}'_{\nu,k+1}$ denotes the $(k+1)$-th positive zero of $J'_\nu$, we have \begin{equation*} x'_{\nu, k}\geq \mathtt{j}'_{\nu,k+1}/R, \end{equation*} as a consequence of results in \cite[Theorem 4]{Cochran:1966} and \cite[P. 38]{Kline:1948}. Note that \begin{equation*} \mathtt{j}'_{\nu,k+1}> \mathtt{j}_{\nu,k} \end{equation*} for $k\geq 1$ (\cite[P. 370]{abram:1972}) with $\mathtt{j}_{\nu,k}$ the $k$-th positive zero of $J_\nu$. Combining the above two inequalities with McCann's \cite[Corollary, P. 102]{McCann:1977} gives the desired bound for $k\geq 1$. When $k=0$ we have \begin{equation*} Rx'_{\nu, 0}\geq \mathtt{j}'_{\nu,1}>\sqrt{\nu(\nu+2)}>\sqrt{\nu^2+\left(\frac{\pi}{4}\right)^2} \end{equation*} as desired, where we have used the inequality (3) in \cite[P. 486]{watson:1966} and the assumption on the sum of $\nu$ and $k$. \end{proof} The following propositions reveal certain correspondence between the zero $x_{\nu,k}$ ($x'_{\nu,k}$) and the index $k$. \begin{proposition} \label{thm111} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ and all sufficiently large $\nu$ the positive zeros of $\mathfrak{f}_{\nu}$ and $\mathfrak{g}_\nu$ satisfy the following \begin{equation}\label{thm111-1} \mathcal{G}_{\nu}(x_{\nu,k})= \left\{\begin{array}{ll} \!\! k+O(x_{\nu,k}^{-1}), & \textrm{if $rx_{\nu,k}\ge (1+c)\nu$,}\\ \!\! k+O(z_{\nu,k}^{-\frac{3}{2}}), &\textrm{if $\nu+\nu^{\frac{1}{3}+\varepsilon} \le rx_{\nu,k}<(1+c)\nu$,}\\ \!\! k-\psi_1(z_{\nu,k})+O(\nu^{-\frac{2}{3}+3\varepsilon}),& \textrm{if $\nu-\nu^{\frac{1}{3}+\varepsilon}\!<\!rx_{\nu,k}<\nu+\nu^{\frac{1}{3}+\varepsilon}$,}\\ \!\! k-\frac{1}{4}+E_{\nu,k},& \textrm{if $rx_{\nu,k}\le \nu-\nu^{\frac{1}{3}+\varepsilon}$,} \end{array}\right. \end{equation} and \begin{equation}\label{thm111-1NC} \mathcal{G}_{\nu}(x'_{\nu,k})= \left\{\begin{array}{ll} \!\! k+O((x'_{\nu,k})^{-1}), & \textrm{if $rx'_{\nu,k}\ge (1+c)\nu$,}\\ \!\! k+O((z'_{\nu,k})^{-\frac{3}{2}}), &\textrm{if $\nu+\nu^{\frac{1}{3}+\varepsilon} \!\le\! rx'_{\nu,k}\!<\!(1+c)\nu$,}\\ \!\! k+\psi_2(z'_{\nu,k})+O(\nu^{-\frac{2}{3}+3\varepsilon}), &\textrm{if $\nu-\nu^{\frac{1}{3}+\varepsilon}\!<\!rx'_{\nu,k}\!<\!\nu+\nu^{\frac{1}{3}+\varepsilon}$,}\\ \!\! k+\frac{1}{4}+E'_{\nu,k}, & \textrm{if $rx'_{\nu,k}\le \nu-\nu^{\frac{1}{3}+\varepsilon}$,} \end{array}\right. \end{equation} where $z_{\nu,k}$ and $z'_{\nu,k}$ are determined by $rx_{\nu,k}=\nu+z_{\nu,k} \nu^{1/3}$ and $rx'_{\nu,k}=\nu+z'_{\nu,k} \nu^{1/3}$, $\psi_1$ and $\psi_2$ are the functions appearing in Lemma \ref{case2.5}, and remainders $E_{\nu,k}$ and $E'_{\nu,k}$ satisfy \begin{equation*} |E_{\nu,k}|<\min\left\{3/8, O\left(\mathcal{G}_{\nu}(x_{\nu,k})^{-1}\right)\right\},\quad k\in\mathbb{N}, \end{equation*} \begin{equation*} |E'_{\nu,0}|<1/8, \end{equation*} \begin{equation*} |E'_{\nu,k}|<\min\left\{3/8, O\left(\mathcal{G}_{\nu}(x'_{\nu,k})^{-1}+\nu^{-\frac{2}{3}-\frac{\varepsilon}{2}}\right)\right\}, \quad k\in\mathbb{N}. \end{equation*} \end{proposition} \begin{proof} We only prove the $\mathfrak{g}_\nu$ part. 
It is not hard to check that the function \begin{equation*} \mathcal{G}_\nu: [\nu/R, \infty)\rightarrow [0, \infty) \end{equation*} is continuous, strictly increasing and mapping $(\nu/R, (s+1/2)\pi/(R-r))$ onto $(0, s+1/2+O(\nu^{-1}))$ for any integer $s>\nu^3$. Therefore for each integer $0\leq k\leq s$ there exists an interval $(a_k, b_k)\subset (\nu/R, (s+1/2)\pi/(R-r))$ such that $\mathcal{G}_\nu$ maps $(a_0, b_0)$ to $(1/6, 3/8)$ and maps $(a_k, b_k)$ to $(k-1/8, k+3/8)$ for $1\le k\le s$ bijectively. All these intervals $(a_k, b_k)$'s are clearly disjoint. We claim that if $\nu$ is sufficiently large then for each $0\leq k\leq s$ \begin{equation} \mathfrak g_\nu(a_k)\mathfrak g_\nu(b_k)<0.\label{IVT-conditionNC} \end{equation} Assuming \eqref{IVT-conditionNC}, we know that there exists at least one zero of $\mathfrak g_\nu$ in each $(a_k, b_k)$. On the other hand Cochran's \cite[Theorem on P. 583]{cochran:1964} shows that there are exactly $2s+2$ zeros of $\mathfrak g_\nu$ within the disk $B(0,(s+1/2)\pi/(R-r))$ if $s$ is sufficiently large. Since $\mathfrak g_\nu$ is even, we conclude that $\mathfrak g_\nu$ has one and only one zero in each $(a_k, b_k)$, which is $x'_{\nu,k}$ by definition. At each $x'_{\nu,k}\in (a_k, b_k)$ we have $\mathfrak g_\nu(x'_{\nu,k})=0$. We thus have $\mathcal{G}_\nu(x'_{\nu,k})\in (\mathcal{G}_\nu(a_k), \mathcal{G}_\nu(b_k))$ which implies $|\mathcal{G}_\nu(x'_{\nu,0})-1/4|<1/8$ and $|\mathcal{G}_\nu(x'_{\nu,k})-(k+1/4)|<3/8$ for $k\in\mathbb{N}$. In particular we have $\mathcal{G}_\nu(x'_{\nu,k})>7/8$. We apply either \eqref{case111-1NC}, \eqref{case222-1NC}, \eqref{case2.5-1NC} or \eqref{111}, and conclude that each factor involving both the sine function and $\mathcal{G}_\nu(x'_{\nu,k})$ equals zero. Since $\mathcal{G}_\nu(x'_{\nu,k})-\Delta$ is contained in the interval $[k-3/8, k+3/8]$ for any $0\leq \Delta\leq 1/4$, we apply the arcsine function to obtain all desired asymptotics in \eqref{thm111-1NC}. It remains to verify \eqref{IVT-conditionNC}. When $(a_k, b_k)$ is contained in $\{x>\nu/R : \mathcal{G}_\nu(x)>C+0.5\}$ with a sufficiently large constant $C\in\mathbb{N}$, we can readily verify \eqref{IVT-conditionNC} by using the asymptotics \eqref{case111-1NC}, \eqref{case222-1NC}, \eqref{case2.5-1NC} or \eqref{111} since \begin{equation*} \mathcal{G}_\nu(a_k)-\Delta_1 \!\in\! \left[k-\frac{3}{8}, k-\frac{1}{8}\right] \textrm{ and }\mathcal{G}_\nu(b_k)-\Delta_2\!\in \!\left[k+\frac{1}{8}, k+\frac{3}{8}\right] \end{equation*} for all $0\leq \Delta_1, \Delta_2\leq 1/4$. When $(a_k, b_k)$ is contained in $\{x>\nu/R : \mathcal{G}_\nu(x)\leq C+0.5\}$, we use the asymptotics \eqref{case3-1hnuNC}. The sign of $\mathfrak g_\nu$ depends on that of \begin{equation} \mathrm{Ai}'\left(-\left(3\pi \mathcal{G}_\nu(x)/2 \right)^{2/3}\right)+O_C\left(\nu^{-2/3}\right). \label{theorem-1NC} \end{equation} If $k=0$ then \begin{equation*} \mathrm{Ai}'\left(-\left(3\pi \mathcal{G}_\nu(a_0)/2 \right)^{2/3}\right)=\mathrm{Ai}'\left(-\left(\pi /4 \right)^{2/3}\right)<0 \end{equation*} but \begin{equation*} \mathrm{Ai}'\left(-\left(3\pi \mathcal{G}_\nu(b_0)/2 \right)^{2/3}\right)=\mathrm{Ai}'\left(-\left(9\pi /16 \right)^{2/3}\right)>0. \end{equation*} If $k\ge 1$, as in the proof of Lemma \ref{case2.5}, we denote by $t_k$ the $k$th zero of the function $\textrm{Ai}'(-x)$. One can derive that \begin{equation*} t_k=\left[\frac{3\pi}{2}\left(k-\frac{3}{4}+\beta'_k\right)\right]^{2/3} \end{equation*} with $|\beta'_k|<0.05$ by using results in \cite[P. 
214 \& 405]{olver:1997}. Therefore \begin{align} t_k\in &\left(\left[\frac{3\pi}{2}\left(k-0.8\right)\right]^{2/3}, \left[\frac{3\pi}{2}\left(k-0.7\right)\right]^{2/3} \right) \nonumber \\ &\subsetneq \left(\left[\frac{3\pi}{2}\mathcal{G}_\nu(a_k)\right]^{2/3}, \left[\frac{3\pi}{2}\mathcal{G}_\nu(b_k)\right]^{2/3} \right). \label{theorem-2NC} \end{align} Since $\textrm{Ai}'(-x)$ oscillates around zero for positive $x$ and the intervals in \eqref{theorem-2NC} are pairwise disjoint, the signs of \eqref{theorem-1NC} at $x=a_k$ and $b_k$ must be opposite whenever $\nu$ is sufficiently large, which ensures \eqref{IVT-conditionNC} in this case. \end{proof} For relatively small $\nu$ we have the following. \begin{proposition} \label{thm222} For any $V\in\mathbb{N}$ there exists a constant $K>0$ such that if $0\leq \nu\leq V$ and $k\geq K$ then the positive zeros of $\mathfrak f_\nu$ and $\mathfrak g_\nu$ satisfy \begin{equation} \mathcal{G}_\nu(x_{\nu,k})=k+O\left(x_{\nu,k}^{-1}\right) \label{thm222-2} \end{equation} and \begin{equation}\label{thm222-2NC} \mathcal{G}_\nu(x'_{\nu,k})=k+O\left((x'_{\nu,k})^{-1}\right). \end{equation} \end{proposition} \begin{proof} We only prove the $\mathfrak{g}_\nu$ part. Note that the asymptotics \eqref{case111-1NC} of $\mathfrak g_\nu$ is valid for all nonnegative $\nu$. For $0\leq \nu\leq V$ and $x>C_V$, with $C_V$ a sufficiently large constant, we apply \eqref{case111-1NC} with $c=1$, whose error term $O_c(x^{-1})$ is then less than $1/100$, in particular on the interval \begin{equation} \left[\frac{(k-1/2)\pi}{R-r}, \frac{(k+1/2)\pi}{R-r}\right), \label{thm222-1NC} \end{equation} for any sufficiently large $k$. We observe that the interval $(a_k, b_k)$ (appearing in the proof of Proposition \ref{thm111}) is a subinterval of \eqref{thm222-1NC} if $k$ is sufficiently large. Indeed, this follows from $\mathcal{G}_\nu((a_k, b_k))=(k-1/8, k+3/8)$ and $\mathcal{G}_\nu((k\pm 1/2)\pi/(R-r))=k\pm 1/2+O(V^2/k)$. With \eqref{case111-1NC} it is obvious that $\mathfrak g_\nu(a_k)\mathfrak g_\nu(b_k)<0$. Hence $\mathfrak g_\nu$ has at least one zero in $(a_k, b_k)$. On the other hand, Cochran \cite{cochran:1964} tells us that there exists at most one zero in the interval \eqref{thm222-1NC}. Thus $\mathfrak g_\nu$ has exactly one zero in $(a_k, b_k)$, which is $x'_{\nu,k}$, and hence \begin{equation*} \sin\left( \pi\mathcal{G}_\nu(x'_{\nu,k})\right)+O\left((x'_{\nu,k})^{-1}\right)=0. \end{equation*} Applying the arcsine function yields the desired result. \end{proof} \begin{remark} A by-product of the above argument is the following pair of bounds: if $k$ is sufficiently large then \begin{equation*} \frac{\pi(k-1/2)}{R-r}<\mathcal{G}^{-1}_\nu\left(k-\frac{3}{8}\right)<x_{\nu,k}<\mathcal{G}^{-1}_\nu\left(k+\frac{1}{8}\right)<\frac{\pi(k+1/2)}{R-r} \end{equation*} and \begin{equation*} \frac{\pi(k-1/2)}{R-r}<\mathcal{G}^{-1}_\nu\left(k-\frac{1}{8}\right)<x'_{\nu,k}<\mathcal{G}^{-1}_\nu\left(k+\frac{3}{8}\right)<\frac{\pi(k+1/2)}{R-r}. \end{equation*} In particular, these bounds are better than Proposition \ref{case0} as $k$ goes to infinity. \end{remark} By Theorem \ref{thm444} (which will be proved independently in Section \ref{sec3}), for each fixed real $\delta$ and all $\nu\geq |\delta|$, the zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ are all real and simple.
We denote positive zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ (which are exactly positive zeros of $\mathfrak{h}_{\nu,\delta}$) by $x''_{\nu, k}$ (with the convention of beginning with $k=0$ if $\nu>|\delta|$ and with $k=1$ if $\nu=|\delta|$). The following two propositions reveal correspondence between the zero $x''_{\nu,k}$ and the index $k$. Their proofs rely on Theorem \ref{thm333} and are essentially the same as those of Propositions \ref{thm111} and \ref{thm222} since $\mathfrak{g}_{\nu}$ and $\widetilde{\mathfrak{h}}_{\nu,\delta}$ have formally the same asymptotics (by Lemma \ref{777}). \begin{proposition}\label{estofhx''} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$, $\delta\in\mathbb{R}$ and all sufficiently large $\nu$, \begin{equation}\label{thm111-1hnuNC} \mathcal{G}_{\nu}(x''_{\nu,k})= \left\{\begin{array}{ll} \!\! k+O((x''_{\nu,k})^{-1}), & \!\textrm{if $rx''_{\nu,k}\ge (1+c)\nu$,}\\ \!\! k+O((z''_{\nu,k})^{-\frac{3}{2}}), & \!\textrm{if $\nu+\nu^{\frac{1}{3}+\varepsilon} \!\le\! rx''_{\nu,k}\!<\!(1+c)\nu$,}\\ \!\! k+\psi_2(z''_{\nu,k})+O(\nu^{-\frac{2}{3}+3\varepsilon}), & \! \textrm{if $\nu-\nu^{\frac{1}{3}+\varepsilon}\!<\!rx''_{\nu,k}\!<\!\nu+\nu^{\frac{1}{3}+\varepsilon}$,}\\ \!\! k+\frac{1}{4}+E''_{\nu,k}, & \textrm{if $rx''_{\nu,k}\le \nu-\nu^{\frac{1}{3}+\varepsilon}$,} \end{array}\right. \end{equation} where $z''_{\nu,k}$ is determined by $rx''_{\nu,k}=\nu+z''_{\nu,k} \nu^{1/3}$, $\psi_2$ is the function appearing in Lemma \ref{case2.5}, and the remainder $E''_{\nu,k}$ satisfies $|E''_{\nu,0}|<1/8$ and \begin{equation*} |E''_{\nu,k}|<\min\left\{3/8, O\left(\mathcal{G}_{\nu}(x''_{\nu,k})^{-1}+\nu^{-\frac{2}{3}-\frac{\varepsilon}{2}}\right)\right\}, \quad k\in \mathbb{N}. \end{equation*} \end{proposition} \begin{proposition}\label{prop1} For any $\delta\in\mathbb{R}$ and $V\in\mathbb{N}$ there exists a constant $K>0$ such that if $|\delta|\leq \nu\leq V$ and $k\geq K$ then \begin{equation}\label{thm222-2hnuNC} \mathcal{G}_{\nu}(x''_{\nu,k})=k+O\left((x''_{\nu,k})^{-1}\right). \end{equation} \end{proposition} As a by-product of the argument, we have the following. \begin{proposition}\label{prop2} We have the following bounds for $x''_{\nu,k}$. \begin{enumerate} \item If $\nu\geq|\delta|$, $k\geq 2$ and $\max\{\nu, k\}$ is sufficiently large, then \begin{equation*} x''_{\nu,k}>\frac{1}{R}\sqrt{\nu^2+\pi^2\left(k-\frac{5}{4} \right)^2}. \end{equation*} \item If $\nu\geq|\delta|$ and $\nu$ is sufficiently large, then \begin{equation*} x''_{\nu,k}>\nu/R. \end{equation*} \item If $\nu\geq|\delta|$ and $k$ is sufficiently large, then \begin{equation*} \frac{\pi(k-1/2)}{R-r}<\mathcal{G}^{-1}_\nu\left(k-\frac{1}{8}\right)<x''_{\nu,k}<\mathcal{G}^{-1}_\nu\left(k+\frac{3}{8}\right)<\frac{\pi(k+1/2)}{R-r}. \end{equation*} \end{enumerate} \end{proposition} \begin{proof} We observe that \begin{equation*} x''_{\nu,k}>x'_{\nu,k-1} \end{equation*} since $x''_{\nu,k}\in(a_k, b_k)$ and $x'_{\nu,k-1}\in (a_{k-1}, b_{k-1})$ where $\mathcal{G}_\nu((a_k, b_k))=(k-1/8, k+3/8)$. The first inequality follows immediately from Proposition \ref{case0}. The second inequality holds since $x''_{\nu,k}>a_k>a_0>\nu/R$. The third inequality holds since the interval $(a_k, b_k)$ is a subset of the interval \eqref{thm222-1NC} if $k$ is sufficiently large. 
\end{proof} As a consequence of the six propositions above, we can readily derive the following bounds on the gap between adjacent zeros of $\mathfrak f_{\nu}$, $\mathfrak g_{\nu}$ and $\mathfrak{h}_{\nu,\delta}$ respectively. \begin{corollary}\label{cor1} Given any sufficiently large $\nu$ and any constant $\sigma$ with $0<\sigma<R$, for all $x_{\nu,k}$'s that are greater than $\nu/\sigma$ we have \begin{equation*} 1 \lesssim x_{\nu,k+1}-x_{\nu,k}\lesssim_{\sigma} 1, \end{equation*} and the dependence of the implicit constant on $\sigma$ can be removed if $0<\sigma\leq r$. For any $V\in\mathbb{N}$ if $0\leq \nu\leq V$ and $k$ is sufficiently large then \begin{equation*} x_{\nu,k+1}-x_{\nu,k}\asymp 1. \end{equation*} The same type of results also hold for $x'_{\nu,k}$ and $x''_{\nu,k}$. \end{corollary} We have the same type of results as those stated in \cite[Corollary 2.11]{GMWW:2019}. \begin{corollary}\label{cor2} The error terms in \eqref{thm111-1} are of size \begin{align*} &O\left(\frac{1}{\nu+k}\right) \quad \textrm {if $rx_{\nu,k}\ge (1+c)\nu$};\\ &O\left(\nu^{1/2}\left(k-\frac{G(r)}{r}\nu\right)^{-3/2}\right)\quad \textrm {if $\nu+\nu^{1/3+\varepsilon}\le rx_{\nu,k}< (1+c)\nu$};\\ & O\left(\nu^{-2/3+3\varepsilon}\right) \quad \textrm {if $\nu-\nu^{1/3+\varepsilon}< rx_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$};\\ &O\left(\frac{1}{k}\right)\quad \textrm {if $rx_{\nu,k}\le\nu-\nu^{1/3+\varepsilon}$}. \end{align*} The error terms in \eqref{thm111-1NC} and \eqref{thm111-1hnuNC} have the same bounds as above except that \begin{equation*} E'_{\nu,k}, E''_{\nu,k}=O\left(\frac{1}{k+1}+\nu^{-\frac{2}{3}-\frac{\varepsilon}{2}}\right),\quad k\geq 0. \end{equation*} The error terms in \eqref{thm222-2}, \eqref{thm222-2NC} and \eqref{thm222-2hnuNC} are all of size $O\left((\nu+k)^{-1}\right)$. \end{corollary} \begin{remark}\label{cor2-3} The second bound is small when $\nu$ is large, since \begin{equation*} \nu^{-1/3}\left(k-\frac{G(r)}{r}\nu\right)\asymp z_{\nu,k}\geq\nu^{\varepsilon}. \end{equation*} \end{remark} Let us now discuss the approximations of zeros. We already know that approximations of zeros $x_{\nu,k}$ and $x'_{\nu,k}$ can be derived directly from \cite[9.5.27--9.5.31 on P. 374]{abram:1972}, provided that we allow the implicit constants in the error terms to depend on $\nu$. Here we provide an analogous one-term asymptotics for the zeros $x''_{\nu,k}$, which is an easy consequence of Propositions \ref{estofhx''} and \ref{prop1} combined with a Taylor expansion of the function $G$ at $0$. \begin{theorem} \label{thm2.20} For any fixed $\nu\geq |\delta|$ and sufficiently large $k$, we have \begin{equation*} x''_{\nu,k}=\frac{\pi}{R-r}k+O_{\nu}\left(\frac{1}{k} \right). \end{equation*} \end{theorem} \begin{remark} See Theorem \ref{thm4.12} below for a similar result of the zeros of $j'_{\nu,\delta}$ and also Remark \ref{remark4.13} for a discussion on generalizing this type of one-term asymptotics to McMahon-type asymptotics. \end{remark} However, the type of approximations discussed above is insufficient for our applications to the eigenvalue counting problems. It is crucial for us to derive approximations where the error terms include implicit constants that are independent of both $\nu$ and $k$. Indeed, we have obtained such approximations for zeros of $\mathfrak{f}_{\nu}$ and $\mathfrak{h}_{\nu,\delta}$ in the following theorem. This result generalizes \cite[Corollary 2.14]{GMWW:2019}. 
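Before stating this theorem, we note that the bounds and gaps obtained above are easy to test numerically. The following sketch (in Python, using SciPy's Bessel routines) locates the first positive zeros of the cross-products and compares them with the lower bound of Proposition \ref{case0} and with the limiting gap $\pi/(R-r)$ suggested by Corollary \ref{cor1} and Theorem \ref{thm2.20}. It assumes the standard cross-products $\mathfrak f_\nu(x)=J_\nu(Rx)Y_\nu(rx)-J_\nu(rx)Y_\nu(Rx)$ and $\mathfrak g_\nu(x)=J_\nu'(Rx)Y_\nu'(rx)-J_\nu'(rx)Y_\nu'(Rx)$ (any overall sign or normalization is immaterial for the zeros), and the values of $R$, $r$ and $\nu$ below are arbitrary samples; it is an illustration only, not part of the proofs.
\begin{verbatim}
# Numerical sanity check (illustration only): zeros of the cross-products
# f_nu and g_nu, their lower bound, and the limiting gap pi/(R - r).
import numpy as np
from scipy.special import jv, yv, jvp, yvp
from scipy.optimize import brentq

R, r, nu = 2.0, 1.0, 5.0

def f_cross(x):   # assumed form of f_nu (sign/normalization irrelevant)
    return jv(nu, R*x)*yv(nu, r*x) - jv(nu, r*x)*yv(nu, R*x)

def g_cross(x):   # assumed form of g_nu
    return jvp(nu, R*x)*yvp(nu, r*x) - jvp(nu, r*x)*yvp(nu, R*x)

def zeros_of(func, a, b, n_grid=40000):
    """Zeros of func in (a, b), located from sign changes on a fine grid."""
    xs = np.linspace(a, b, n_grid)
    vals = func(xs)
    return np.array([brentq(func, xs[i], xs[i+1])
                     for i in range(n_grid - 1) if vals[i]*vals[i+1] < 0])

x_k = zeros_of(f_cross, nu/R + 1e-9, 50.0)    # x_{nu,k}, k = 1, 2, ...
xp_k = zeros_of(g_cross, nu/R + 1e-9, 50.0)   # x'_{nu,k}, k = 0, 1, ...

# lower bound of the proposition above (for f_nu the index starts at k = 1)
bounds = np.sqrt(nu**2 + np.pi**2*(np.arange(1, len(x_k) + 1) - 0.25)**2)/R
print("min margin over the lower bound:", np.min(x_k - bounds))

# consecutive gaps approach pi/(R - r)
print("last gaps:", np.diff(x_k)[-3:], "pi/(R-r) =", np.pi/(R - r))
\end{verbatim}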
Let $F: [0, \infty)\times [0, \infty)\setminus \{O\}\rightarrow \mathbb{R}$ be the function homogeneous of degree $1$ satisfying $F\equiv1$ on the graph of $G$, that is, $F$ is the Minkowski functional of the graph of $G$. Recall that we have computed and estimated two partial derivatives of $F$ in \cite[Lemma 2.13]{GMWW:2019}. \begin{theorem}\label{approximation} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ there exists a positive integer $V$ such that if $\nu>V$ then the positive zeros of $\mathfrak f_\nu$ satisfy \begin{equation}\label{approximation1} x_{\nu, k}=F(\nu,k-\tau_{\nu,k})+R_{\nu,k}, \end{equation} where \begin{equation*} \tau_{\nu,k}=\left\{ \begin{array}{ll} 0, & \textrm{if $rx_{\nu,k}\geq \nu+\nu^{1/3+\varepsilon}$,}\\ \psi_1\left(z_{\nu,k}\right), & \textrm{if $\nu-\nu^{1/3+\varepsilon}< rx_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$,}\\ 1/4, & \textrm{if $rx_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$,} \end{array} \right. \end{equation*} and \begin{equation*} R_{\nu,k}\!=\!\left\{ \begin{array}{ll} \!O\left((\nu+k)^{-1}\right), & \textrm{if $rx_{\nu,k}\geq (1+c)\nu$,}\\ \!O\left(\nu^{1/2}\left(k-\frac{G(r)}{r}\nu\right)^{-3/2}\right), & \textrm{if $\nu+\nu^{1/3+\varepsilon}\leq rx_{\nu,k}<(1+c)\nu$,}\\ \!O\left(\nu^{-2/3+3\varepsilon}\right), & \textrm{if $\nu-\nu^{1/3+\varepsilon}< rx_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$,}\\ \!O\left(\nu^{1/3}k^{-4/3}\right), &\textrm{if $rx_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$.} \end{array} \right. \end{equation*} If $0\leq \nu\leq V$, there exists a positive integer $K$ such that if $k>K$ then \eqref{approximation1} holds with \begin{equation*} \tau_{\nu,k}=0 \end{equation*} and \begin{equation*} R_{\nu,k}=O\left((\nu+k)^{-1}\right). \end{equation*} Similar results hold for $\mathfrak{h}_{\nu,\delta}$. There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ there exists an integer $V>|\delta|$ such that if $\nu>V$ then the positive zeros of $\mathfrak{h}_{\nu,\delta}$ satisfy \begin{equation}\label{approximation1NC} x''_{\nu, k}=F(\nu,k+\widetilde\tau_{\nu,k})+\widetilde R_{\nu,k}, \end{equation} where \begin{equation}\label{translationNC} \widetilde\tau_{\nu,k}=\left\{ \begin{array}{ll} 0, & \textrm{if $rx''_{\nu,k}\geq \nu+\nu^{1/3+\varepsilon}$,}\\ \psi_2\left(z''_{\nu,k}\right), & \textrm{if $\nu-\nu^{1/3+\varepsilon}< rx''_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$,}\\ 1/4, & \textrm{if $rx''_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$,} \end{array} \right. \end{equation} and \begin{equation*} \widetilde R_{\nu,k}\!=\!\left\{ \begin{array}{ll} \!O\left((\nu+k)^{-1}\right), & \textrm{if $rx''_{\nu,k}\geq (1+c)\nu$,}\\ \!O\left(\nu^{1/2}\left(k-\frac{G(r)}{r}\nu\right)^{-3/2}\right), & \textrm{if $\nu+\nu^{1/3+\varepsilon}\leq rx''_{\nu,k}<(1+c)\nu$,}\\ \!O\left(\nu^{-2/3+3\varepsilon}\right), & \textrm{if $\nu-\nu^{1/3+\varepsilon}< rx''_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$,}\\ \!O\left(\frac{\nu^{1/3}}{(k+1)^{4/3}}+\frac{\nu^{-1/3-\varepsilon/2}}{(k+1)^{1/3}}\right), &\textrm{if $rx''_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$.} \end{array} \right. \end{equation*} If $|\delta|\leq \nu\leq V$, there exists a positive integer $K$ such that if $k>K$ then \eqref{approximation1NC} holds with \begin{equation*} \widetilde \tau_{\nu,k}=0 \end{equation*} and \begin{equation*} \widetilde R_{\nu,k}=O\left((\nu+k)^{-1}\right). \end{equation*} \end{theorem} \begin{remark} We define $\tau_{\nu,k}=\widetilde\tau_{\nu,k}=1/4$ when $\nu\leq V$ and $k\leq K$. This will be used in the next section. 
\end{remark} \begin{proof}[Proof of Theorem \ref{approximation}] If $\nu$ is sufficiently large, then $\nu/x''_{\nu, k}<R$ and \begin{equation*} x''_{\nu, k}=F\left(\nu, \mathcal{G}_{\nu}(x''_{\nu,k})\right). \end{equation*} If $rx''_{\nu,k}>\nu-\nu^{1/3+\varepsilon}$ then $x''_{\nu,k}>2\nu/(R+r)$ for sufficiently large $\nu$. By using the asymptotics in \eqref{thm111-1hnuNC} and the monotonicity of $\mathcal{G}_{\nu}$, we have \begin{equation*} \frac{k+\widetilde\tau_{\nu,k}}{\nu}\geq \frac{\mathcal{G}_{\nu}(x''_{\nu,k})}{2\nu}\geq \frac{1}{R+r}G\left(\frac{R+r}{2}\right), \end{equation*} which, by \cite[Lemma 2.13]{GMWW:2019}, ensures that $\partial_y F(\nu, \theta)\asymp 1$ with $\theta$ between $k+\widetilde\tau_{\nu,k}$ and $\mathcal{G}_{\nu}(x''_{\nu,k})$. Then \eqref{approximation1NC} follows from the mean value theorem, \eqref{thm111-1hnuNC} and Corollary \ref{cor2}. If $rx''_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$ then $x''_{\nu,k}<\nu/r$. In this case we can obtain \eqref{approximation1NC} similarly if we notice that \begin{equation*} \frac{k+\widetilde\tau_{\nu,k}}{\nu}\asymp \frac{\mathcal{G}_{\nu}(x''_{\nu,k})}{\nu}\leq \frac{G(r)}{r} \end{equation*} implies, by \cite[Lemma 2.13]{GMWW:2019} no matter whether $\theta/\nu$ is small or not, that \begin{equation*} \partial_y F(\nu, \theta)\lesssim \nu^{1/3}\theta^{-1/3}\lesssim \nu^{1/3}(k+1)^{-1/3} \end{equation*} with $\theta$ between $k+\widetilde\tau_{\nu,k}$ and $\mathcal{G}_{\nu}(x''_{\nu,k})$. The case $|\delta|\leq \nu\leq V$ follows easily from the mean value theorem, Proposition \ref{prop1}, Corollary \ref{cor2} and \cite[Lemma 2.13]{GMWW:2019}. The proof of \eqref{approximation1} is similar. \end{proof} \section{Properties of zeros of the cross-product \texorpdfstring{$\widetilde{\mathfrak{h}}_{\nu,\delta}$}{}} \label{sec3} In this independent section we prove analogous results of Cochran \cite{cochran:1964} for the function $\widetilde{\mathfrak{h}}_{\nu,\delta}$. Recall that $\widetilde{\mathfrak{h}}_{\nu,\delta}(x)$ is defined by \eqref{444} originally for positive argument $x$. However, in view of its expression \eqref{555}, it is natural to extend its domain to the complex plane $\mathbb{C}$. \begin{theorem} \label{thm333} There exists a large constant $C$ such that for any $\nu\ge0$ if $s\in\mathbb{N}$ satisfies $s>C(\nu^3+1)$, then within the circle $|z|=(s+1/2)\pi/(R-r)$ the function $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ has $2s+2$ or $2s$ zeros according to $\nu\neq |\delta|$ or $\nu=|\delta|$. \end{theorem} \begin{proof} We follow Cochran's strategy in \cite{cochran:1964} to prove this theorem. We first observe that: if $\nu\neq |\delta|$ then $\widetilde{\mathfrak{h}}_{\nu,\delta}$ is analytic in $\mathbb{C}\setminus\{0\}$ with a pole at $0$ of the second order; if $\nu=|\delta|$ then it is an entire function with $\widetilde{\mathfrak{h}}_{\nu,\delta}(0)\neq 0$. Indeed, the regularity of the function $\widetilde{\mathfrak{h}}_{\nu,\delta}$ in $\mathbb{C}\setminus\{0\}$ follows from the known regularity of the four cross-products of Bessel functions appearing in its definition (see \cite[P. 580]{cochran:1964} and \cite[P. 699]{Horsley}). Its behaviour about the origin is clear if we write it into a Laurent series. 
For instance, for all non-integer $\nu\geq 0$ we have \begin{align*} \widetilde{\mathfrak{h}}_{\nu,\delta}(z)=\frac{\left(\nu^2-\delta^2\right)\left(R^{2\nu}-r^{2\nu}\right)}{\nu\pi r^{\nu+1}R^{\nu+1}}\frac{1}{z^2} +C_{\widetilde{\mathfrak{h}}_{\nu,\delta}}+\cdots \end{align*} with a constant term \begin{align*} C_{\widetilde{\mathfrak{h}}_{\nu,\delta}}= &\frac{\left(\nu^2-2\nu-\delta^2+2\delta\right)\left(R^{2\nu-2}-r^{2\nu-2}\right)}{4\nu(\nu-1)\pi r^{\nu-1}R^{\nu-1}}\\ &+\frac{\left(-\nu^2-2\nu+\delta^2-2\delta\right)\left(R^{2\nu+2}-r^{2\nu+2}\right)}{4\nu(\nu+1)\pi r^{\nu+1}R^{\nu+1}}, \end{align*} by using the ascending series 9.1.10 and relations 9.1.2 and 9.1.27 in \cite[P. 358--361]{abram:1972}. The proofs for all integer $\nu\geq 0$ are similar, except that in addition to 9.1.10 and 9.1.27, we also need to use 9.1.5 and 9.1.11--9.1.13 in \cite{abram:1972}. We next derive an asymptotics of $\widetilde{\mathfrak{h}}'_{\nu,\delta}/\widetilde{\mathfrak{h}}_{\nu,\delta}$ on the circle $\sigma:|z|=(s+1/2)\pi/(R-r)$ and apply the argument principle to $\widetilde{\mathfrak{h}}_{\nu,\delta}$ to count its zeros in this circle. Let $s\in\mathbb{N}$ satisfy $s>C(\nu^3+1)$ with $C>0$ to be determined below. We apply Hankel's expansions (13.01) and (13.04) in \cite[P. 266--267]{olver:1997} (with $n=4$) to $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ for $-\pi/2\le \arg z\le \pi/2$. In this process we also use relations between $J_{\nu}$, $Y_{\nu}$ and the Hankel functions and their recurrence relations. We then get an asymptotics of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ as follows \begin{align*} \widetilde{\mathfrak{h}}_{\nu,\delta}(z)=&\frac{-2}{\sqrt{Rr}\pi z}\Bigg(\sin((R-r)z)\left(1+\frac{\alpha_2}{z^2}+O\left(\frac{\mu^4+1}{|z|^4}\right)\right)\\ & -\cos((R-r)z)\left(\frac{\alpha_1}{z}+\frac{\alpha_3}{z^3}+O\left(\frac{\mu^4+1}{|z|^4}\right)\right)\Bigg) \end{align*} with $\mu=4\nu^2$ and coefficients $\alpha_l=\alpha_l(\delta,R,r,\mu)=O(\mu^l+1)$ for $l=1,2,3$. By the evenness of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ the asymptotics of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ above holds for all large $|z|$. A straightforward calculation then shows that, for $z\in\sigma$, \begin{align*} \begin{split} \frac{\widetilde{\mathfrak{h}}'_{\nu,\delta}(z)}{\widetilde{\mathfrak{h}}_{\nu,\delta}(z)}=&(R-r)\Bigg(\cot((R-r)z)+\frac{\alpha_1}{z}\left(\cot^2((R-r)z)+1\right)\\ &+\frac{\alpha_1}{z^2}\left(\left(\frac{1}{R-r}+\alpha_1\right)\cot((R-r)z)+\alpha_1\cot^3((R-r)z)\right)\\ &+\!\frac{1}{z^3}\!\left(\!\left(\!\alpha_3\!-\!\alpha_1\alpha_2\!+\!\alpha_1^3\!+\!\frac{\alpha_1^2}{R-r}\!\right)\!\cot^2((R-r)z)\!+\!\alpha_1^3\cot^4((R-r)z)\!\right)\\ &+\frac{1}{z^3}\left(\alpha_3-\alpha_1\alpha_2-\frac{2\alpha_2}{R-r}\right)\Bigg)-\frac{1}{z}+O\left(\frac{\mu^4+1}{|z|^4}\right), \end{split} \end{align*} where we have used the fact that $\cot((R-r)z)$ is bounded above and $\sin((R-r)z)$ is bounded away from zero on $\sigma$. We integrate this equality over the contour $\sigma$ and use the residue theorem to evaluate the right hand side. For example, the first term $ \cot((R-r)z)$ has poles at $z=m\pi/(R-r)$ of order $1$ with $m=0,\pm1,\pm2,\ldots,\pm s$ within the contour. Thus \begin{equation*} \frac{1}{2\pi i}\int_{\sigma}\cot((R-r)z)\,\mathrm dz =\sum_{m=-s}^s\frac{1}{R-r} =\frac{2s+1}{R-r}. \end{equation*} Other parts can be computed similarly. 
To sum up, we obtain \begin{equation*} \begin{split} \frac{1}{2\pi i}\int_{\sigma}\frac{ \widetilde{\mathfrak{h}}'_{\nu,\delta}(z)}{ \widetilde{\mathfrak{h}}_{\nu,\delta}(z)}\,\mathrm dz =2s+O\left(\frac{\mu+1}{s}\right)+O\left(\frac{\mu^4+1}{s^3}\right). \end{split} \end{equation*} By the argument principle, the left hand side is equal to the number of zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ in $\sigma$ minus the number of its poles in $\sigma$. If $C$ is sufficiently large, then the right hand side has to be equal to $2s$. Recalling that $\widetilde{\mathfrak{h}}_{\nu,\delta}$ has only one pole at $0$ of the second order if and only if $\nu\neq|\delta|$, we get the desired result of the number of zeros. \end{proof} \begin{remark} As a consequence of Theorem \ref{thm333} and the evenness of $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$, by using the argument in the proof of Proposition \ref{thm111}, one can show for sufficiently large $\nu$ that zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}$ are all real and simple, and that positive zeros are all greater than $\nu/R$. One cannot show for relatively small $\nu$ the same properties by a similar argument, since the asymptotics \eqref{case111-1NC} provides little information for small $x$. However, by using Theorem \ref{thm333} and the study on ultraspherical Bessel functions in Filonov, Levitin, Polterovich and Sher \cite{FLPS:2024}, we manage to show, even for small $\nu$, that $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ has only real and simple zeros . \end{remark} \begin{theorem} \label{thm444} For all $\delta\in\mathbb{R}$ and $\nu\geq |\delta|$, zeros of $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ are all real and simple. \end{theorem} \begin{proof} As treated in \cite[\S 3.2]{FLPS:2024}, we write \begin{equation*} j_{\nu}'(x)+iy_{\nu}'(x)=L_{\nu,\delta}(x)\left(\cos (\psi_{\nu,\delta}(x))+i\sin (\psi_{\nu,\delta}(x))\right), \textrm{ $x>0$}, \end{equation*} with a positive modulus function $L_{\nu,\delta}$ and a continuous real phase function $\psi_{\nu,\delta}$ with the initial condition \begin{equation*} \lim_{x\rightarrow 0+}\psi_{\nu,\delta}(x)=\frac{\pi}{2}. \end{equation*} Therefore, \begin{equation*} \widetilde{\mathfrak{h}}_{\nu,\delta}(x)=-(Rr)^{\delta}x^{2\delta}L_{\nu,\delta}(Rx)L_{\nu,\delta}(rx)\sin\left(\psi_{\nu,\delta}(Rx)-\psi_{\nu,\delta}(rx)\right),\textrm{ $x>0$}, \end{equation*} with a continuous new phase function $\psi_{\nu,\delta}(Rx)-\psi_{\nu,\delta}(rx)$ such that \begin{equation*} \lim_{x\rightarrow 0+}\left(\psi_{\nu,\delta}(Rx)-\psi_{\nu,\delta}(rx)\right)=0. \end{equation*} By \cite[Lemma 3.2]{FLPS:2024}, \begin{equation*} \psi_{\nu,\delta}(Rx)-\psi_{\nu,\delta}(rx)=\left(s+\frac{1}{2}\right)\pi+O\left(s^{-1}\right) \textrm{ with $x=\frac{(s+1/2)\pi}{R-r}$}. \end{equation*} By \cite[Lemma 3.1]{FLPS:2024}, when $\nu=|\delta|$ the phase function is always positive on the positive real line and thus its image of the interval $(0, (s+1/2)\pi/(R-r))$ for any large integer $s$ must contain points $\pi,2\pi,\ldots, s\pi$. When $\nu>|\delta|$, however, the phase function is negative near $0$ and hence its image contains $0,\pi,2\pi,\ldots, s\pi$. This means that $\widetilde{\mathfrak{h}}_{\nu,\delta}$ has in the interval $(0, (s+1/2)\pi/(R-r))$ at least $s$ distinct zeros if $\nu=|\delta|$ and $s+1$ distinct zeros if $\nu>|\delta|$. Note that $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ is an even function. The desired result then follows from Theorem \ref{thm333}. 
\end{proof} \begin{remark}\label{rm111} When $\nu<|\delta|$ we still do not know how to prove that $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ has only real and simple zeros. The above argument shows that $\widetilde{\mathfrak{h}}_{\nu,\delta}(z)$ has at least $2s$ distinct real zeros within the circle $|z|=(s+1/2)\pi/(R-r)$, however, there are $2s+2$ zeros inside by Theorem \ref{thm333}. So two zeros are undetermined. On the other hand, in applications we will take $\delta=d/2-1$ and $\nu=n+d/2-1$ with integers $d\geq 2$ and $n\geq 0$, which do satisfy $\nu\geq|\delta|$. These are reasons why we restrict our study of $\mathfrak{h}_{\nu,\delta}$ and $\widetilde{\mathfrak{h}}_{\nu,\delta}$ only to the case $\nu\geq |\delta|$. \end{remark} \section{From spectrum counting to lattice counting}\label{reduction-sec} In Subsection \ref{subsec4.1} we deal with the shell case. We first transfer problems of counting spectrum into certain problems of counting lattice points with various translations; we next transform the latter problems into problems of counting lattice points with uniform translations. In Subsection \ref{subsec4.2} we present analogous results for the ball case, which are considerably easier to obtain. \subsection{The shell case}\label{subsec4.1} One can check by the standard separation of variables that for $d\geq 2$ the spectrum of the Dirichlet Laplacian associated with the shell $\mathbb{S}$ consists of the numbers $\omega_{n,k}^2$, $n\in\mathbb{Z}_+$, $k\in\mathbb{N}$, where $\omega_{n,k}$'s are positive zeros of the function $\mathfrak f_\nu(x)$ with \begin{equation*} \nu=n+\frac{d}{2}-1 \end{equation*} \textbf{which we denote for short throughout Sections \ref{reduction-sec} and \ref{sec5}}. In other words \begin{equation} \omega_{n,k}=x_{\nu,k}.\label{s4-1} \end{equation} For each pair $(n,k)$ the number $\omega_{n,k}^2$ appears $m_n^d$ times in the spectrum, where the number $m_n^d$ is defined by \begin{align*} m_n^d: =\left\{\begin{array}{cc} 1 & \text{if $n=0$}, \\ d & \text{if $n=1$},\\ \binom{n+d-1}{d-1}-\binom{n+d-3}{d-1} &\text{if $n\ge 2$}, \end{array} \right. \end{align*} $m_{-1}^d:=0$ and we follow the convention that $\binom{l}{m}=0$ if $m>l$. See Gurarie~\cite[\S4.5]{Gur:1992}. In the Neumann case, the corresponding spectrum consists of the squares of positive zeros of $\mathfrak{h}_{\nu,\delta}$ with \begin{equation*} \delta=\frac{d}{2}-1, \end{equation*} that is, it consists of the numbers $\widetilde \omega_{n,k}^2$, $n,k\in \mathbb{Z}_+$, where \begin{equation} \widetilde \omega_{n,k}=x''_{\nu,k}\label{s4-2} \end{equation} with an additional definition $\widetilde \omega_{0,0}:=0$. For each pair $(n,k)$ the number $\widetilde \omega_{n,k}^2$ appears $m_n^d$ times in the spectrum. We will apply to \eqref{s4-1} and \eqref{s4-2} Theorem \ref{approximation} which roughly tells us that $\omega_{n,k}$ and $\widetilde\omega_{n,k}$ correspond to points $(\nu, k-\tau_{\nu,k})$ and $(\nu, k+\widetilde\tau_{\nu,k})$ in $\mu\Omega$ respectively, where $\Omega$ denotes the closed domain bounded by the graph of $G$ and the axes. See Figure \ref{domainOmega}. 
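Before introducing the corresponding counting functions, we record a small numerical sketch of the correspondence just described. For $d=3$ and the Dirichlet case it assembles $\mathscr{N}_\mathbb{S}(\mu)$ directly from the zeros of $\mathfrak f_\nu$ with $\nu=n+d/2-1$, weighted by the multiplicities $m_n^d$, again under the assumption $\mathfrak f_\nu(x)=J_\nu(Rx)Y_\nu(rx)-J_\nu(rx)Y_\nu(Rx)$; the leading Weyl term $(2\pi)^{-d}\omega_d|\mathbb{S}|\mu^d$, with $\omega_d$ the volume of the unit ball, is printed only for comparison. The chosen $R$, $r$ and $\mu$ are arbitrary sample values.
\begin{verbatim}
# Minimal sketch (d = 3, Dirichlet case): assemble N_S(mu) from the zeros of
# the cross-product f_nu, nu = n + d/2 - 1, weighted by the multiplicities
# m_n^d.  Illustration only.
import numpy as np
from math import comb, gamma, pi
from scipy.special import jv, yv

R, r, d, mu = 2.0, 1.0, 3, 25.0

def mult(n, d):
    """Multiplicity m_n^d of spherical harmonics of degree n."""
    if n == 0:
        return 1
    if n == 1:
        return d
    return comb(n + d - 1, d - 1) - comb(n + d - 3, d - 1)

def n_zeros(nu, mu, n_grid=20000):
    """Number of zeros of f_nu in (0, mu], counted via sign changes."""
    if nu/R >= mu:
        return 0          # all zeros of f_nu exceed nu/R
    xs = np.linspace(nu/R + 1e-9, mu, n_grid)
    vals = jv(nu, R*xs)*yv(nu, r*xs) - jv(nu, r*xs)*yv(nu, R*xs)
    return int(np.sum(vals[:-1]*vals[1:] < 0))

N = sum(mult(n, d)*n_zeros(n + d/2 - 1, mu)
        for n in range(int(R*mu - d/2 + 1) + 1))

omega_d = pi**(d/2)/gamma(d/2 + 1)                 # volume of the unit ball
weyl = omega_d**2*(R**d - r**d)*mu**d/(2*pi)**d    # leading Weyl term
print(N, weyl)
\end{verbatim}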
\begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{domainOmega.pdf} \caption{The domain $\Omega$.} \label{domainOmega} \end{figure} In the Dirichlet case, we write \begin{equation*} \mathscr{N}_\mathbb{S}(\mu)=\mathscr{N}_\mathbb{S}^{\mathtt{D}}(\mu)=\sum_{l=0}^{\infty} \left(m_{l}^d-m_{l-1}^d\right)\mathscr{N}_l^{\mathtt{D}}(\mu), \end{equation*} where \begin{equation*} \mathscr{N}_l^{\mathtt{D}}(\mu):=\#\{(n,k)\in \mathbb{Z}_+\times\mathbb{N} : \omega_{n,k}\leq \mu, n\geq l\}. \end{equation*} The superscript ``$\mathtt{D}$'' represents Dirichlet while ``$\mathtt{N}$'' represents Neumann. Correspondingly, in this case we define a weighted lattice point counting function (associated with the domain $\Omega$) \begin{equation*} \mathscr{P}_{\Omega}(\mu)=\mathscr{P}_{\Omega}^{\mathtt{D}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{P}_l^{\mathtt{D}}(\mu), \end{equation*} where \begin{equation*} \mathscr{P}_l^{\mathtt{D}}(\mu):=\#\left\{ \left(\nu, k-\tau_{\nu,k}\right)\in \mu\Omega : n\in \mathbb{Z}_+, n\geq l, k\in\mathbb{N}\right\}. \end{equation*} In the Neumann case, we have analogously \begin{equation*} \mathscr{N}_\mathbb{S}(\mu)=\mathscr{N}_\mathbb{S}^{\mathtt{N}}(\mu)=\sum_{l=0}^{\infty} \left(m_{l}^d-m_{l-1}^d\right)\mathscr{N}_l^{\mathtt{N}}(\mu), \end{equation*} where \begin{equation*} \mathscr{N}_l^{\mathtt{N}}(\mu):=\#\{(n,k)\in \mathbb{Z}_+\times\mathbb{Z}_+ : \widetilde\omega_{n,k}\leq \mu, n\geq l\}, \end{equation*} and we define \begin{equation*} \mathscr{P}_{\Omega}(\mu)=\mathscr{P}_{\Omega}^{\mathtt{N}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{P}_l^{\mathtt{N}}(\mu), \end{equation*} where \begin{equation*} \mathscr{P}_l^{\mathtt{N}}(\mu):=\#\left\{ \left(\nu, k+\widetilde\tau_{\nu,k}\right)\in \mu\Omega : n\in \mathbb{Z}_+, n\geq l, k\in\mathbb{Z}_+\right\}. \end{equation*} We observe that the summands in the above four sums are all equal to zero if $l> R\mu-\frac{d-2}{2}$, and that $m_{l}^2-m_{l-1}^2$ equals $1$ if $l=0,1$ and $0$ otherwise, and that for $d\geq 3$ and $l\in\mathbb{Z}_+$, \begin{equation}\label{bino-bound} m_{l}^d-m_{l-1}^d=\binom{l+d-2}{d-2}-\binom{l+d-4}{d-2} =O_d\left(l^{d-3}+1\right). \end{equation} As was done for ``the boundary parts'' in \cite[Theorem 3.1]{colin:2011}, we can control the difference between the number of eigenvalues and the corresponding number of lattice points. \begin{lemma}\label{lemma4.1} There exists a constant $C>0$ such that \begin{equation*} \left|\mathscr{N}_l^*(\mu)-\mathscr{P}_l^*(\mu)\right|\leq \mathscr{P}_l^*\left(\mu+C\mu^{-0.4}\right)-\mathscr{P}_l^*\left(\mu-C\mu^{-0.4}\right)+O\left(\mu^{0.6}\right) \end{equation*} for all $l\in \mathbb{Z}_+$ and $*\in\{ \mathtt{D},\mathtt{N}\}$. \end{lemma} \begin{proof} We only prove the Neumann case here, as the Dirichlet case is similar. For $k\in\mathbb{Z}_+$, we define \begin{equation} \mathscr{N}^{\mathtt{N}}_{l,k}(\mu):=\#\{n\in \mathbb{Z}_+ : x''_{\nu,k}\leq \mu, n\geq l\}\label{countfcn1NC} \end{equation} and \begin{equation} \mathscr{P}^{\mathtt{N}}_{l,k}(\mu):=\#\left\{ n\in \mathbb{Z}_+ : \left(\nu, k+ \widetilde\tau_{\nu,k}\right)\in \mu\Omega, n\geq l\right\}.\label{countfcn2NC} \end{equation} Then $\mathscr{N}^{\mathtt{N}}_l(\mu)$ and $\mathscr{P}^{\mathtt{N}}_l(\mu)$ are sums of \eqref{countfcn1NC} and \eqref{countfcn2NC} respectively over (finitely many) $k\in \mathbb{Z}_+$.
Using Theorem \ref{approximation} and properties of $F$ we get \begin{align} &\left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu) \right|\leq \#\bigg(\left\{n\in\mathbb{Z}_+ : \max\{n,k\}>A\right\}\cap \nonumber\\ &\quad \left\{n\in\mathbb{Z}_+ : \mu-|\widetilde R_{\nu,k}|\leq F(\nu,k+\widetilde \tau_{\nu,k})\leq\mu+|\widetilde R_{\nu,k}|, n\geq l \right\}\bigg),\label{setNC} \end{align} where $A$ is a fixed sufficiently large constant such that when $\max\{n,k\}>A$ the asymptotics of $x''_{\nu,k}$ in Theorem \ref{approximation} applies. We next use Theorem \ref{approximation} and estimates of derivatives of $F$ (in \cite[Lemma 2.13]{GMWW:2019}) to estimate $|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu)|$ by estimating sizes of the set in \eqref{setNC}. We need to determine the form of $\widetilde R_{\nu,k}$ in different ranges of $k$. Note that we always have $\nu\leq R\mu$ by Proposition \ref{prop2}. If $0\leq k\leq \mu^{1/4}$ then \begin{equation} \left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu)\right|\lesssim \mu^{1/3}(k+1)^{-4/3}.\label{444NC} \end{equation} Indeed, since $k/\mu$ is less than $G(r)$ we have $\nu\asymp \mu$. This, together with Proposition \ref{estofhx''} and Corollary \ref{cor2}, gives \begin{equation*} \frac{G(\nu/x''_{\nu,k})}{\nu/x''_{\nu,k}}=\frac{k+O(1)}{\nu}\lesssim \mu^{-3/4}, \end{equation*} which implies that $\nu/x''_{\nu,k}\rightarrow R-$ as $\mu\rightarrow \infty$ and thus $rx''_{\nu,k}\leq \nu-\nu^{1/3+\varepsilon}$. Therefore $\widetilde R_{\nu,k}=O(\nu^{1/3}(k+1)^{-4/3})$ and $\widetilde \tau_{\nu,k}=1/4$. Using the estimate of $\partial_x F$ we get \eqref{444NC}. If $\mu^{1/4}< k\leq \mu^{4/7}$, by a similar argument we have $\widetilde R_{\nu,k}=O(\nu^{1/3}k^{-4/3})=O(1)$, $\widetilde \tau_{\nu,k}=1/4$ and thus \begin{equation*} \left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu) \right|\lesssim 1. \end{equation*} If $\mu^{4/7}< k\leq G(r)\mu-C_1$ for a sufficiently large constant $C_1$ then \begin{equation}\label{666NC} \left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu)\right|\le \mathscr{P}^{\mathtt{N}}_{l,k}(\mu+C\mu^{-3/7})-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu-C\mu^{-3/7}) \end{equation} for some constant $C$. Indeed, from the definition of the set \eqref{setNC} we observe that the point $(\nu,k+\widetilde\tau_{\nu,k})$ is contained in a tubular neighborhood of $\mu\partial \Omega$ of width much less than $1$ (because of $\nu\asymp \mu$ and Remark \ref{cor2-3}). For large $\mu$ the tubular neighborhood between $y=G(r)\mu$ and $y=G(r)\mu-C_1$ is close to a parallelogram. Hence if $k\leq G(r)\mu-C_1$ for a sufficiently large $C_1$ then $\nu\geq r\mu$. As a result, \begin{equation} \frac{k}{\nu}\leq \frac{G(r)}{r}-\frac{C_1}{\nu}.\label{555NC} \end{equation} However, if $rx''_{\nu,k}\geq \nu+\nu^{1/3+\varepsilon}$ then, by Proposition \ref{estofhx''} and the monotonicity of $G$, we have \begin{equation*} \frac{k}{\nu}=\frac{G(\nu/x''_{\nu,k})}{\nu/x''_{\nu,k}}+O\left(\nu^{-1-\frac{3}{2}\varepsilon} \right)>\frac{G(r)}{r}+O\left(\nu^{-1-\frac{3}{2}\varepsilon}\right), \end{equation*} which contradicts with \eqref{555NC}. Thus $rx''_{\nu,k}< \nu+\nu^{1/3+\varepsilon}$. This implies that $\widetilde R_{\nu,k}$ is either $O(\nu^{-2/3+3\varepsilon})$ or $O(\nu^{1/3}k^{-4/3}+\nu^{-1/3}k^{-1/3})$, both of which are of size $O(\mu^{-3/7})$. We immediately get \eqref{666NC}. 
If $G(r)\mu-C_1< k\leq G(r)\mu+\mu^{0.6}$, we still have $\nu\asymp \mu$. By using the trivial estimate $\widetilde R_{\nu,k}=O(1)$, \cite[Lemma 2.13]{GMWW:2019} and the mean value theorem we get \begin{equation*} \left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu) \right|\lesssim 1. \end{equation*} If $k>G(r)\mu+\mu^{0.6}$ then \begin{equation} \left|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu) \right|\leq \mathscr{P}^{\mathtt{N}}_{l,k}(\mu+C\mu^{-0.4})-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu-C\mu^{-0.4})\label{888NC} \end{equation} for some constant $C$. As argued in the proof of \eqref{666NC}, we conclude that $\nu<r\mu$. Then \begin{equation*} k-\frac{G(r)}{r}\nu>\mu^{0.6}. \end{equation*} With this lower bound, we can show that \begin{equation} \widetilde R_{\nu,k}=O(\mu^{-0.4}),\label{ine1} \end{equation} which gives the inequality \eqref{888NC} immediately. To obtain \eqref{ine1}, we first notice that $\widetilde R_{\nu,k}$ has possibly four forms (see Theorem \ref{approximation}). The first one $O((\nu+k)^{-1})=O(\mu^{-1})$ is small enough. The second one is of size $O(\mu^{-0.4})$ by the above lower bound. We next observe that if $\nu/\mu$ is sufficiently small, by Proposition \ref{estofhx''} and the monotonicity of $G$, then $rx''_{\nu,k}\geq (1+c)\nu$ and $\widetilde R_{\nu,k}$ is of the first form. Therefore, it remains to consider the case when the ratio $\nu/\mu\gtrsim 1$. Hence the third and fourth forms of $\widetilde R_{\nu,k}$ are $\lesssim \mu^{-2/3+3\varepsilon}<\mu^{-0.4}$. Summing the above bounds of $|\mathscr{N}^{\mathtt{N}}_{l,k}(\mu)-\mathscr{P}^{\mathtt{N}}_{l,k}(\mu)|$ over $k$ yields the desired inequality. \end{proof} One can then transfer the spectrum counting problem to a lattice point counting problem via the following result. By using Lemma \ref{lemma4.1} and the bound in \eqref{bino-bound}, we readily get the following in both the Dirichlet and Neumann cases. \begin{proposition}\label{difference1} There exists a constant $C>0$ such that \begin{equation*} \left|\mathscr{N}_\mathbb{S}(\mu)-\mathscr{P}_{\Omega}(\mu)\right|\leq \mathscr{P}_{\Omega}(\mu+C\mu^{-0.4})-\mathscr{P}_{\Omega}(\mu-C\mu^{-0.4})+O\left(\mu^{d-2+0.6}\right). \end{equation*} \end{proposition} \begin{remark} In spectral theory problems of counting eigenvalues are sometimes transformed into problems of counting lattice points. However, the argument in the proof of Lemma \ref{lemma4.1} reminds us that the reverse is also possible. For example, we can derive an inequality using eigenvalue counting to obtain lattice point counting: \begin{equation*} \left|\mathscr{N}_\mathbb{S}(\mu)-\mathscr{P}_{\Omega}(\mu)\right|\leq \mathscr{N}_\mathbb{S}(\mu+C\mu^{-0.4})-\mathscr{N}_\mathbb{S}(\mu-C\mu^{-0.4})+O\left(\mu^{d-2+0.6}\right). \end{equation*} This provides us with another possible novel way to study certain specific problems of lattice point counting. \end{remark} Notice that $\mathscr{P}_{\Omega}(\mu)$ is about counting lattice points with weights and different translations. We will transfer it into standard lattice point problems with the same translations and meanwhile estimate the differences caused by such transformations. In fact, we will move every point $(\nu, k-\tau_{\nu,k})$ to $(\nu, k-1/4)$ and every $(\nu, k+\widetilde\tau_{\nu,k})$ to $(\nu, k+1/4)$. 
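Before doing so, the following sketch illustrates numerically why these uniform translations are the natural choice: for a moderately large $\nu$ it evaluates $\mathcal{G}_\nu$ at the Dirichlet zeros $x_{\nu,k}$ and shows the values settling near $k-1/4$ when $rx_{\nu,k}<\nu$ and near $k$ once $rx_{\nu,k}$ is well above $\nu$, in accordance with Proposition \ref{thm111}. It assumes $\mathcal{G}_\nu(x)=\pi^{-1}\bigl(p(Rx)-p(rx)\bigr)$ with $p(y)=\sqrt{y^2-\nu^2}-\nu\arccos(\nu/y)$ for $y>\nu$ and $p(y)=0$ otherwise, together with the cross-product $\mathfrak f_\nu$ as before; both are reconstructions of definitions given earlier in the paper and should be read as assumptions of this illustration.
\begin{verbatim}
# Illustration of the translations 1/4 (below the transition) and 0 (well
# above it) for the Dirichlet zeros x_{nu,k}.  G_nu and f_nu as assumed in
# the text above; R, r, nu are sample values.
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

R, r, nu = 2.0, 1.0, 60.0

def f_cross(x):
    return jv(nu, R*x)*yv(nu, r*x) - jv(nu, r*x)*yv(nu, R*x)

def p(y):
    return np.sqrt(y*y - nu*nu) - nu*np.arccos(nu/y) if y > nu else 0.0

def G_nu(x):
    return (p(R*x) - p(r*x))/np.pi

xs = np.linspace(nu/R + 1e-6, 2.5*nu, 200000)
vals = f_cross(xs)
zeros = [brentq(f_cross, xs[i], xs[i+1])
         for i in range(len(xs) - 1) if vals[i]*vals[i+1] < 0]

for k, x in enumerate(zeros, start=1):
    shift = 0.25 if r*x < nu else 0.0    # expected translation tau_{nu,k}
    print(k, "below" if r*x < nu else "above",
          round(G_nu(x) - (k - shift), 3))
\end{verbatim}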
Accordingly, we define the following new counting functions: firstly in the Dirichlet case \begin{equation*} \mathscr{Q}_{\Omega}(\mu)=\mathscr{Q}_{\Omega}^{\mathtt{D}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{Q}_l^{\mathtt{D}}(\mu), \end{equation*} where \begin{equation*} \mathscr{Q}_l^{\mathtt{D}}(\mu):=\#\{ (\nu,k-1/4)\in \mu\Omega : n\in \mathbb Z_+,n\ge l,k\in\mathbb{N}\}, \end{equation*} and secondly in the Neumann case \begin{equation*} \mathscr{Q}_{\Omega}(\mu)=\mathscr{Q}_{\Omega}^{\mathtt{N}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{Q}^{\mathtt{N}}_l(\mu), \end{equation*} where \begin{equation*} \mathscr{Q}_l^{\mathtt{N}}(\mu):=\#\{ (\nu,k+1/4)\in \mu\Omega : n\in \mathbb Z_+,n\ge l,k\in \mathbb Z_+\}. \end{equation*} To quantify the difference between $\mathscr{P}_{\Omega}(\mu)$ and $\mathscr{Q}_{\Omega}(\mu)$, we need to count points in certain bands of width $1/4$. For $0<L\leq R\mu$, we define bands on $[0, L]$ as follows \begin{equation*} \mathcal{B}_{L}^{\mathtt{D}}=\left\{(x,y)\in\mathbb{R}^2 : 0\leq x\leq L, \, \mu G\left(\frac{x}{\mu} \right)< y\leq \mu G\left(\frac{x}{\mu} \right)+\frac{1}{4} \right\} \end{equation*} and \begin{equation*} \mathcal{B}_{L}^{\mathtt{N}}=\left\{(x,y)\in\mathbb{R}^2 : 0\leq x\leq L,\,\mu G\left(\frac{x}{\mu} \right)-\frac{1}{4}< y\leq \mu G\left(\frac{x}{\mu} \right) \right\}. \end{equation*} These bands are formed by translating part of the boundary $\mu\partial\Omega$ up or down by $1/4$. We define associated counting functions \begin{equation*} \mathscr{Q}_{\mathcal{B}_{r\mu}^{\mathtt{D}},l}=\# \left\{(\nu,k)\in \mathcal{B}_{r\mu}^{\mathtt{D}}:n\in \mathbb{Z}_+,n\ge l, k\in \mathbb N \right\} \end{equation*} and \begin{equation*} \mathscr{Q}_{\mathcal{B}_{r\mu}^{\mathtt{N}},l}=\# \left\{(\nu,k)\in \mathcal{B}_{r\mu}^{\mathtt{N}} : n\in \mathbb{Z}_+,n\ge l, k\in \mathbb{Z}_+ \right\}. \end{equation*} Both functions are trivially equal to zero when $l+d/2-1>r\mu$. \begin{lemma} \label{difference3} \begin{equation*} \mathscr{Q}_l^{*}(\mu)=\mathscr{P}_l^{*}(\mu)\pm \mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}+O(\mu^{1/3+\varepsilon}) \end{equation*} for $l\in \mathbb Z_+$ and $*\in\{ \mathtt{D},\mathtt{N}\}$, where we take the sign ``$+$'' (resp., ``$-$'') when $*=\mathtt{D}$ (resp., $*=\mathtt{N}$). \end{lemma} \begin{remark} From the proof of this lemma, we observe that $\mathscr{Q}_l^{*}(\mu)=\mathscr{P}_l^{*}(\mu)$ if $l+d/2-1>r\mu+C'\mu^{1/3+\varepsilon}$. \end{remark} \begin{proof}[Proof of Lemma \ref{difference3}] We will only give a brief proof of the Neumann case. See \cite[Proposition 3.2]{GMWW:2019} for a proof of the Dirichlet case. In the process of moving the points $(\nu, k+\widetilde\tau_{\nu,k})$ up to $(\nu, k+1/4)$, some of these points may leave the domain $\mu\Omega$, but none enters it. The difference $\mathscr{P}_l^{\mathtt{N}}(\mu)-\mathscr{Q}_l^{\mathtt{N}}(\mu)$ equals the number of points $(\nu, k+\widetilde\tau_{\nu,k})\in \mathcal{B}_{R\mu}^{\mathtt{N}}$ with $n\geq l$ that leave the domain $\mu\Omega$. The points $(\nu, k+\widetilde\tau_{\nu,k})\in \mathcal{B}_{R\mu}^{\mathtt{N}}$ with $\widetilde\tau_{\nu,k}=0$ are all above the line $OJ$ (see Figure \ref{domainOmega}) because $k/\nu>G(r)/r$, as a consequence of Proposition \ref{estofhx''} and Corollary \ref{cor2}. These points surely leave $\mu\Omega$.
The points $(\nu, k+\widetilde\tau_{\nu,k})\in \mathcal{B}_{R\mu}^{\mathtt{N}}$ with $\widetilde\tau_{\nu,k}=1/4$ are all below the line $OJ$ because $(k+1/4)/\nu<G(r)/r$, again by Proposition \ref{estofhx''} and Corollary \ref{cor2}. These points remain unmoved. The points $(\nu, k+\widetilde\tau_{\nu,k})\in \mathcal{B}_{R\mu}^{\mathtt{N}}$ with $0<\widetilde\tau_{\nu,k}<1/4$ may leave $\mu\Omega$. We do not know whether they are above or below the line $OJ$ but we do know that they satisfy $|\nu-r\mu|\leq C'\mu^{1/3+\varepsilon}$ for some large constant $C'$. Indeed, by Theorem \ref{approximation}, if $\nu-\nu^{1/3+\varepsilon}<rx''_{\nu,k}<\nu+\nu^{1/3+\varepsilon}$ then \begin{equation*} x''_{\nu, k}=F(\nu,k+\widetilde\tau_{\nu,k})+O\left(\nu^{-2/3+3\varepsilon}\right)=\mu+O(1), \end{equation*} where the last equality follows from the fact that $(\nu, k+\widetilde\tau_{\nu,k})$ must be contained in a cone (in the first quadrant with its vertex at the origin) away from the axes (by Proposition \ref{estofhx''}) and hence in $\mu\Omega\setminus (\mu-O(1))\Omega$. Plugging this formula into the above inequality of $x''_{\nu,k}$ yields the desired range of $\nu$. Based on the above facts, we conclude that the points $(\nu,k+\widetilde\tau_{\nu,k})$ in the band $\mathcal{B}_{r\mu-C'\mu^{1/3+\varepsilon}}^{\mathtt{N}}$ all have $\widetilde\tau_{\nu,k}=0$, and the points $(\nu,k+\widetilde\tau_{\nu,k})$ in $\mathcal{B}_{R\mu}^{\mathtt{N}}\setminus \mathcal{B}_{r\mu+C'\mu^{1/3+\varepsilon} }^{\mathtt{N}}$ all have $\widetilde\tau_{\nu,k}=1/4$. Some of the points $(\nu,k+\widetilde\tau_{\nu,k})\in \mathcal{B}_{R\mu}^{\mathtt{N}}$ with $|\nu-r\mu|< C'\mu^{1/3+\varepsilon}$ may leave $\mu\Omega$, but their number is relatively small, of size $O(\mu^{1/3+\varepsilon})$. Collecting all these facts leads to the desired equality. \end{proof} As a consequence of Lemma \ref{difference3} and the bound in \eqref{bino-bound}, we obtain the following easily. \begin{proposition}\label{difference4} \begin{equation*} \mathscr{P}_{\Omega}^*(\mu)=\mathscr{Q}_{\Omega}^*(\mu)\mp \sum_{0\leq l\leq r\mu-\frac{d-2}{2}}\left(m_l^d-m_{l-1}^d\right)\mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}+O\left(\mu^{d-2+\frac{1}{3}+\varepsilon}\right), \end{equation*} where $*\in\{ \mathtt{D},\mathtt{N}\}$ and we take the sign ``$-$'' (resp., ``$+$'') when $*=\mathtt{D}$ (resp., $*=\mathtt{N}$). \end{proposition} By Propositions \ref{difference1} and \ref{difference4}, in order to obtain an asymptotics of $\mathscr{N}_{\mathbb{S}}^*(\mu)$ with $*\in\{ \mathtt{D},\mathtt{N}\}$, we just need to study the lattice point counting functions $\mathscr{Q}_{\Omega}^*(\mu)$ and $\mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}$. We will do that in Section \ref{sec5}. \subsection{The ball case} \label{subsec4.2} As in Section \ref{intro}, we let $\mathscr{N}_\mathbb{B}(\mu)$ denote the eigenvalue counting function for the Dirichlet/Neumann Laplacian associated with the ball $\mathbb{B}$. Let $\Omega_0$ denote the closed domain bounded by the axes and the graph of \begin{equation*} G_0(x)=Rg(x/R),\quad 0\leq x\leq R. \end{equation*} See Figure \ref{domainOmega0}.
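For the disk ($d=2$, so $\nu=n$ and $\delta=0$) the correspondence between eigenvalues and lattice points is particularly transparent and can be checked directly: classically, the Dirichlet eigenvalues of the disk are the squares of the zeros $j_{n,k}$ of $J_n$, and the points $(n, j_{n,k}\,g(n/j_{n,k}))$ cluster around $(n, k-1/4)$, so that they land in $\mu\Omega_0$ essentially when $j_{n,k}\leq \mu$. The short check below assumes $g(t)=\pi^{-1}\bigl(\sqrt{1-t^2}-t\arccos t\bigr)$ on $[0,1]$, a reconstruction of the definition given earlier in the paper, and is meant only as an illustration.
\begin{verbatim}
# Disk illustration (d = 2): the zeros j_{n,k} of J_n satisfy
# j_{n,k} * g(n / j_{n,k}) ~ k - 1/4, i.e. they correspond to the lattice
# points (n, k - 1/4) used in the Dirichlet counting function below.
import numpy as np
from scipy.special import jn_zeros

def g(t):
    t = np.clip(t, 0.0, 1.0)
    return (np.sqrt(1.0 - t*t) - t*np.arccos(t))/np.pi

for n in (0, 5, 20):
    zeros = jn_zeros(n, 8)             # first 8 positive zeros of J_n
    phases = zeros*g(n/zeros)          # should be close to k - 1/4
    print(n, np.round(phases - (np.arange(1, 9) - 0.25), 3))
\end{verbatim}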
\begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{domainOmega0.pdf} \caption{The domain $\Omega_0$.} \label{domainOmega0} \end{figure} We define a weighted lattice point counting function (associated with the domain $\Omega_0$) corresponding to the Dirichlet Laplacian \begin{equation*} \mathscr{Q}_{\Omega_0}(\mu)=\mathscr{Q}_{\Omega_0}^{\mathtt{D}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{Q}_{\Omega_0,l}^{\mathtt{D}}(\mu), \end{equation*} where \begin{equation*} \mathscr{Q}_{\Omega_0,l}^{\mathtt{D}}(\mu):=\#\left\{ \left(\nu, k-1/4\right)\in \mu\Omega_0 : n\in \mathbb{Z}_+, n\geq l, k\in\mathbb{N}\right\}. \end{equation*} We define an analogous version corresponding to the Neumann Laplacian \begin{equation*} \mathscr{Q}_{\Omega_0}(\mu)=\mathscr{Q}_{\Omega_0}^{\mathtt{N}}(\mu):=\sum_{l=0}^{\infty}\left(m_{l}^d-m_{l-1}^d\right)\mathscr{Q}_{\Omega_0,l}^{\mathtt{N}}(\mu), \end{equation*} where \begin{equation*} \mathscr{Q}_{\Omega_0,l}^{\mathtt{N}}(\mu):=\#\left\{ \left(\nu, k+1/4\right)\in \mu\Omega_0 : n\in \mathbb{Z}_+, n\geq l, k\in\mathbb{Z}_+\right\}. \end{equation*} We have the following ``ball'' version of Proposition \ref{difference1}. \begin{proposition} \label{prop4.7} There exists a constant $C>0$ such that \begin{equation*} \left|\mathscr{N}_\mathbb{B}(\mu)-\mathscr{Q}_{\Omega_0}(\mu)\right|\leq \mathscr{Q}_{\Omega_0}(\mu+C\mu^{-3/7})-\mathscr{Q}_{\Omega_0}(\mu-C\mu^{-3/7})+O\left(\mu^{d-2+4/7}\right). \end{equation*} \end{proposition} The Dirichlet case has already been proved in \cite[Theorem 3.2]{Guo}. The proof for the Neumann case follows a similar pattern. Below, we will outline the proof and highlight the key differences. We may assume $R=1$ without loss of generality. In the Neumann case, we need to study positive zeros of \begin{equation*} j_{\nu,\delta}'(x)=x^{-\delta}\left( J_{\nu}'(x)-\delta x^{-1}J_{\nu}(x) \right) \quad \textrm{with $\delta\in\mathbb{R}$ and $\nu\geq |\delta|$}. \end{equation*} The term $\delta x^{-1}J_{\nu}(x)$ can be absorbed into error terms of the asymptotics of $J_{\nu}'(x)$. To verify this, particularly when $\nu<x<(1+c)\nu$ for any sufficiently small $c>0$, we need to use $xg(\nu/x)\asymp \nu^{-1/2}(x-\nu)^{3/2}$ by \eqref{bound-zeta+}. \begin{lemma} For any $c>0$ and $\nu\ge 0$, if $x\geq \max\{(1+c)\nu, 10\}$ then \begin{equation*} j_{\nu,\delta}'(x)=-\left(\frac{2}{\pi}\right)^{1/2} \frac{\left(x^2-\nu^2\right)^{1/4}}{x^{1+\delta}} \left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)-\frac{\pi}{4}\right)+O_c\left(x^{-1}\right)\right). \end{equation*} For any sufficiently small $c>0$ and sufficiently large $\nu$, if $\nu<x<(1+c)\nu$ then \begin{equation*} j_{\nu,\delta}'(x)=\frac{-2^{2/3}}{(3\pi)^{1/6}}\frac{\left(x^2-\nu^2\right)^{1/4}}{x^{1+\delta}\left(x g\left(\nu/x \right)\right)^{1/6}} \left(\!\mathrm{Ai}'\left(\!-\left(\frac{3\pi}{2} x g\left(\frac{\nu}{x} \right)\right)^{\!\frac{2}{3}}\right)+O\left(\nu^{-\frac{2}{3}}\right)\!\right) \end{equation*} when $xg(\nu/x)\leq 1$, and \begin{equation*} j_{\nu,\delta}'(x)=-\left(\frac{2}{\pi}\right)^{\!1/2} \frac{\left(x^2-\nu^2\right)^{1/4}}{x^{1+\delta}} \left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)-\frac{\pi}{4}\right)+O\left(\!\left(xg\left(\frac{\nu}{x}\right)\right)^{\!-1}\right)\!\right) \end{equation*} when $xg(\nu/x)>1$. \end{lemma} We denote positive zeros of $j_{\nu,\delta}'$ by $a'_{\nu, \delta, k}$ (with the convention of beginning with $k=0$ if $\nu>|\delta|$ and with $k=1$ if $\nu=|\delta|$). 
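Heuristically, the main term of the first asymptotics in the lemma above vanishes when $\pi x g(\nu/x)-\pi/4$ is an integer multiple of $\pi$, that is, when $xg(\nu/x)$ is close to $k+1/4$. The sketch below checks this numerically for a sample pair $(\nu,\delta)$ with $\nu>|\delta|$: it locates the first zeros of $xJ_\nu'(x)-\delta J_\nu(x)$ (a positive multiple of $j'_{\nu,\delta}(x)$, hence with the same positive zeros) and prints the deviation of $a'_{\nu,\delta,k}\,g(\nu/a'_{\nu,\delta,k})$ from $k+1/4$. As before, $g(t)=\pi^{-1}(\sqrt{1-t^2}-t\arccos t)$ is an assumed reconstruction, and the indexing starts at $k=0$ in accordance with the convention above.
\begin{verbatim}
# Numerical check of the heuristic above: zeros of x*J_nu'(x) - delta*J_nu(x)
# versus the phase condition x*g(nu/x) = k + 1/4, with
# g(t) = (sqrt(1-t^2) - t*arccos(t))/pi (assumed).
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

nu, delta = 20.5, 0.5        # e.g. d = 3: nu = n + 1/2, delta = 1/2

def w(x):                     # x^(1+delta) * j'_{nu,delta}(x), same zeros
    return x*jvp(nu, x) - delta*jv(nu, x)

def g(t):
    return (np.sqrt(1.0 - t*t) - t*np.arccos(t))/np.pi

xs = np.linspace(nu + 1e-6, nu + 40.0, 100000)
vals = w(xs)
zeros = [brentq(w, xs[i], xs[i+1])
         for i in range(len(xs) - 1) if vals[i]*vals[i+1] < 0]

# enumerate the zeros found above nu; for nu > |delta| the convention in the
# text starts the index at k = 0
for k, a in enumerate(zeros):
    print(k, round(a, 4), round(a*g(nu/a) - (k + 0.25), 3))
\end{verbatim}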
We remark that $j_{\nu,\delta}'$ has only real zeros when $\nu\geq |\delta|$, since $zJ_{\nu}'(z)-\delta J_{\nu}(z)$ has only real zeros (see Watson \cite[P. 482]{watson:1966}), and that all its nonzero zeros are simple by Dixon's theorem (see Watson \cite[P. 480]{watson:1966}). \begin{proposition} \label{prop4.9} There exists a constant $c\in (0,1)$ such that for any $\delta\in\mathbb{R}$ and all sufficiently large $\nu$, \begin{equation*} a'_{\nu, \delta, k}g\left(\frac{\nu}{a'_{\nu, \delta, k}}\right)=k+\frac{1}{4}+R'_{\nu,k}, \end{equation*} where the remainder $R'_{\nu,k}$ satisfies $|R'_{\nu,0}|<1/8$, $|R'_{\nu,k}|<1/4$, $k\in \mathbb{N}$ and \begin{equation*} R'_{\nu,k}= \left\{\begin{array}{ll} O((\nu+k)^{-1}), & \textrm{if $a'_{\nu, \delta, k}\ge (1+c)\nu$,}\\ O((k+1)^{-1}), & \textrm{if $\nu<a'_{\nu, \delta, k}< (1+c)\nu$.} \end{array}\right. \end{equation*} For any $\delta\in\mathbb{R}$ and $V\in\mathbb{N}$ there exists a constant $K>0$ such that if $|\delta|\leq \nu\leq V$ and $k\geq K$ then \begin{equation*} a'_{\nu, \delta, k}g\left(\frac{\nu}{a'_{\nu, \delta, k}}\right)=k+\frac{1}{4}+O\left(\frac{1}{\nu+k}\right). \end{equation*} \end{proposition} The proof of this proposition is merely a repetition of the proofs of Propositions \ref{thm111} and \ref{thm222}, except that it uses the following two facts: if $s\in\mathbb{N}$ is sufficiently large, then in the interval $(0, \pi(s+\frac{1}{2}\nu+\frac{1}{2}))$ the function $j_{\nu,\delta}'(x)$ has $s+1$ or $s$ zeros according as $\nu>|\delta|$ or $\nu=|\delta|$; if $\nu\geq |\delta|$, $k\in\mathbb{N}$ and $\max\{\nu,k\}$ is sufficiently large, then \begin{equation*} a'_{\nu, \delta, k}\gtrsim \nu+k. \end{equation*} The first fact follows easily from \cite[Lemmas 3.1 and 3.2]{FLPS:2024}. The second one follows from the known lower bound on the positive zeros of $J_{\nu}$ (McCann \cite{McCann:1977}) and the fact that $j_{\nu,\delta}$ and $j'_{\nu,\delta}$ have interlacing positive zeros by Dixon's theorem (see Watson \cite[P. 480]{watson:1966}). We can then derive the following bounds and approximations of the zeros $a'_{\nu, \delta, k}$, analogous to those in Proposition \ref{prop2} and Theorems \ref{thm2.20} and \ref{approximation}. Proposition \ref{prop4.10} has actually been proved implicitly in the proof of the previous proposition. \begin{proposition} \label{prop4.10} We have the following bounds for $a'_{\nu, \delta, k}$. \begin{enumerate} \item If $\nu\geq|\delta|$ and $k\geq 2$, then \begin{equation*} a'_{\nu, \delta, k}>\sqrt{\nu^2+\pi^2\left(k-\frac{5}{4} \right)^2}. \end{equation*} \item If $\nu\geq|\delta|$ and $k$ is sufficiently large, then \begin{equation*} \pi\left(k+\frac{\nu}{2}-\frac{1}{2}\right)<h^{-1}_\nu\left(k\right)<a'_{\nu, \delta, k}<h^{-1}_\nu\left(k+\frac{3}{8}\right)<\pi\left(k+\frac{\nu}{2}+\frac{1}{2}\right), \end{equation*} where $h_{\nu}(x)=xg(\nu/x)$. \end{enumerate} \end{proposition} McMahon gave an asymptotics of $a'_{\nu, \delta, k}$ when $\nu=n+\frac{1}{2}$ and $\delta=\frac{1}{2}$ with $n$ fixed and $k$ large (see \cite[P. 441]{abram:1972}). But we could not find in the literature any generalization of McMahon's expansion of $a'_{\nu, \delta, k}$ for other values of $\nu$ and $\delta$. Here we give such a generalization, a one-term asymptotics. It follows directly from a Taylor expansion of the function $g$ at $0$ and Proposition \ref{prop4.9}.
\begin{theorem}\label{thm4.12} For any fixed $\nu\geq |\delta|$ and sufficiently large $k$, we have \begin{equation*} a'_{\nu, \delta, k}=\pi \left(k+\frac{1}{2}\nu+\frac{1}{4} \right)+O_{\nu}\left(\frac{1}{k} \right). \end{equation*} \end{theorem} \begin{remark}\label{remark4.13} In fact, we believe that a slight modification of the argument in this paper can give us a McMahon-type asymptotics of $a'_{\nu, \delta, k}$ rather than just a one-term asymptotics. \end{remark} Theorem \ref{thm4.11} follows from the mean value theorem and Proposition \ref{prop4.9}. \begin{theorem} \label{thm4.11} Let $F_g$ be the Minkowski functional of the graph of $g$. There exists a small constant $c>0$ and an integer $V>|\delta|$ such that if $\nu>V$ then \begin{equation} a'_{\nu, \delta, k}=F_g(\nu,k+1/4)+\mathfrak{R}'_{\nu,k},\label{rem} \end{equation} where \begin{equation*} \mathfrak{R}'_{\nu,k}=\left\{ \begin{array}{ll} O\left((\nu+k)^{-1}\right), & \textrm{if $a'_{\nu, \delta, k}\geq (1+c)\nu$,}\\ O\left(\nu^{1/3}(k+1)^{-4/3}\right), &\textrm{if $a'_{\nu, \delta, k}<(1+c)\nu$.} \end{array} \right. \end{equation*} If $|\delta|\leq \nu\leq V$ and $k$ is sufficiently large then \eqref{rem} holds with \begin{equation*} \mathfrak{R}'_{\nu,k}=O\left((\nu+k)^{-1}\right). \end{equation*} \end{theorem} The spectrum under consideration consists of squares of \begin{equation*} \omega'_{n,k}:=a'_{\nu,\delta,k}, \quad n,k\in \mathbb{Z}_+, \end{equation*} with $\nu=n+\frac{d}{2}-1$, $\delta=\frac{d}{2}-1$, and an additional definition $\omega'_{0,0}:=0$. For each pair $(n,k)$ the number $(\omega'_{n,k})^2$ appears $m_n^d$ times in the spectrum. Repeating the argument in \cite[Section 3]{Guo} gives the Neumann case of Proposition \ref{prop4.7}. \section{Lattice counting and Weyl's law}\label{sec5} \subsection{Weyl's law for shells}\label{sec5.1} The main task in this subsection is to study two associated lattice point problems, $\mathscr{Q}_{\Omega}^*(\mu)$ and $\mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}$, defined in Subsection \ref{subsec4.1}. The shell part of Theorem \ref{specthm} follows immediately from Proposition \ref{difference1} and Theorem \ref{thm Ndu}. We first study the lattice counting associated with both the domain $\Omega$ (when $r>0$; see Figure \ref{domainOmega}) and the domain $\Omega_0$ (when $r=0$; see Figure \ref{domainOmega0}). In the following lemma, when $r=0$, as a slight abuse of notation, $\Omega$, $G$ and $\mathscr{Q}_l^{*}(\mu)$ represent $\Omega_0$, $G_0$ and $\mathscr{Q}_{\Omega_0,l}^*(\mu)$ respectively. The results for $\Omega_0$ will be used in Subsection \ref{sec5.2}. Let $d\geq 2$ be a fixed integer. For $l\in\mathbb{Z}_+$ with $l>2-d$, we define \begin{equation*} (\mu\Omega)_l=\{(x,y)\in\mu\Omega: x\ge l+(d-3)/2\} \end{equation*} to be a subset of $\mu\Omega$ in $\mathbb{R}^2$; when $d=2$ and $l=0$, we define $(\mu\Omega)_0$ to be the union of $\mu\Omega$ and a rectangle $[-1/2,0]\times[0,\mu G(0)]$. Then $\mathscr{Q}_l^{\mathtt{D}}(\mu)$ and $\mathscr{Q}_l^{\mathtt{N}}(\mu)$ are the number of points $(\nu,k-1/4)$ and $(\nu,k+1/4)$ in $(\mu\Omega)_l$, respectively. \begin{lemma}\label{theorem:no-in-D} For $0\leq r < R$ and $d\ge 2$, we have \begin{equation}\label{final-statmentNC} \mathscr{Q}_l^{*}(\mu)= \left|(\mu\Omega)_l\right|+\left(c_*- \frac{1}{2}\right)\left(R\mu-l-\frac{d-3}{2}\right)+O\left(\mu^{2/3}\right) \end{equation} with $0\le l<R\mu-d/2+1$, $*\in\{ \mathtt{D},\mathtt{N}\}$, $c_{\mathtt{D}}=1/4$ and $c_{\mathtt{N}}=3/4$. 
If either $l\ge r\mu-d/2+1$ or the boundary curve of $\Omega$ has a tangent in $J$ with rational slope (i.e. $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$), the remainder estimate can be improved to \begin{equation*} O_{\epsilon}\left(\mu^{2\theta^*+\epsilon}\right). \end{equation*} \end{lemma} \begin{remark} The remainder $O(\mu^{2/3})$ is a consequence of a standard second derivative estimate of van der Corput. The above-mentioned improved estimate follows from Theorem \ref{expo sum} which requires the boundedness of certain first derivatives. This condition is not satisfied if we count lattice points near to the boundary point $\mu J$ along lines parallel to the axes. However, this obstacle can be avoided if we count them along lines parallel to the tangent in $J$ with rational slope. See also \cite[Section 4]{GMWW:2019}. \end{remark} \begin{proof}[Proof of Lemma \ref{theorem:no-in-D}] For $l\in \mathbb{Z}_+$ and $c\in[0,1)$, we study the following counting function \begin{equation*} \mathscr{Q}_l(\mu):=\#\{(\nu,k-c)\in \mu\Omega : n\in \mathbb Z_+, n\ge l, k\in \mathbb N\}, \end{equation*} that is, we count the number of points $(\nu,k-c)$ in $\mu \Omega$. In particular $\mathscr{Q}_l(\mu)$ with $c=1/4$ (resp., $3/4$) gives $\mathscr{Q}_l^{\mathtt{D}}(\mu)$ (resp., $\mathscr{Q}_l^{\mathtt{N}}(\mu)$). If we define \begin{align*} \Omega_1&:=\{(x,y)\in\Omega: y\leq G(r)\},\\ \Omega_2&:=\{(x,y)\in\Omega: y>G(r)\} \end{align*} and \begin{equation} \mathscr{Q}_{\Omega_i,l}(\mu):=\#\{(\nu,k-c)\in \mu\Omega_i:n\in \mathbb Z_+,n\ge l,k\in \mathbb N\},\label{countfcn3} \end{equation} then \begin{equation*} \mathscr{Q}_l(\mu)=\mathscr{Q}_{\Omega_1,l}(\mu)+\mathscr{Q}_{\Omega_2,l}(\mu). \end{equation*} We do this splitting because the boundary curve (given by the graph of $G$) consists of two parts when $r>0$. When $r=0$ we have $\Omega_2=\emptyset$ and hence $\mathscr{Q}_l(\mu)=\mathscr{Q}_{\Omega_1,l}(\mu)$. We claim that \begin{equation} \begin{split} \mathscr{Q}_{\Omega_1,l}(\mu)=&\left|\{(x,y)\in(\mu \Omega)_l :y\leq G(r)\mu \}\right|-L_{l}\\ &+\left(c-\frac{1}{2}\right)\left(R\mu-l-\frac{d-3}{2}\right)+ O_{\epsilon}\left(\mu^{2\theta^*+\epsilon}\right), \end{split}\label{ND1NC} \end{equation} where \begin{equation*} \begin{split} L_{l}=\left\{\begin{array}{ll} 0&\text{if $l+d/2-1\geq r\mu$}, \\ \psi(G(r)\mu+c)\left(r\mu-l-\frac{d-3}{2}\right) &\text{if $l+d/2-1< r\mu$}. \end{array} \right. \end{split} \end{equation*} When $r>0$ we also claim that if $l+d/2-1\geq r\mu$ then $\mathscr{Q}_{\Omega_2,l}(\mu)=0$; if $l+d/2-1< r\mu$ then \begin{equation} \mathscr{Q}_{\Omega_2,l}(\mu) =\left|\{(x,y)\in(\mu \Omega)_l :y> G(r)\mu \}\right|+L_{l}+O\left(\mu^{2/3}\right),\label{ND2NC} \end{equation} where the remainder can be improved to $O_{\epsilon}(\mu^{2\theta^*+\epsilon})$ in the rational case (i.e. $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$). It is easy to verify results of this lemma based on these claims. It remains to prove these claims. We first prove \eqref{ND1NC} when $l+d/2-1<r\mu$. In $\mu\Omega_1$ we count lattice points along lines parallel to the $x$-axis. Then \begin{equation} \mathscr{Q}_{\Omega_1,l}(\mu)=\sum_{c<k\leq G(r)\mu +c}\left(\left\lfloor\mu H\left(\frac{k-c}{\mu}\right)-\frac{d}{2}+1\right\rfloor-l+1\right), \label{sum3} \end{equation} where $H:[0,G(r)]\to[r,R]$ represents the inverse function of $G$ restricted to $[r,R]$. 
With \begin{equation*} \psi(t)=t-1/2-\lfloor t\rfloor \end{equation*} the sawtooth function, $\mathscr{Q}_{\Omega_1,l}(\mu)$ is equal to the difference of \begin{equation} \sum_{c<k\leq G(r)\mu +c}\left(\mu H\left(\frac{k-c}{\mu}\right)-l-\frac{d-3}{2}\right) \label{sum1} \end{equation} and \begin{equation} \sum_{c<k\leq G(r)\mu +c}\psi\left(\mu H\left(\frac{k-c}{\mu}\right)-\frac{d}{2}+1\right). \label{sum2} \end{equation} Applying the Euler--Maclaurin summation formula to \eqref{sum1} yields \begin{align*} \eqref{sum1}=&\int_1^{G(r)\mu+c}\!\!\! \mu H\left(\frac{y-c}{\mu}\right)-l-\frac{d-3}{2} \textrm{d}y+\int_1^{G(r)\mu+c} \!\!\!\psi(y) H'\left(\frac{y-c}{\mu}\right) \textrm{d}y\\ &+\frac{1}{2}\left(\mu H\left(\frac{1-c}{\mu}\right)-l-\frac{d-3}{2} \right)-\psi\left(G(r)\mu+c\right)\left(r\mu-l-\frac{d-3}{2} \right). \end{align*} By the asymptotics \begin{equation*} H(y)=R+O\left(y^{2/3}\right) \textrm{ as $y\rightarrow 0+$}, \end{equation*} the first integral is equal to \begin{equation*} \left|\{(x,y)\in(\mu \Omega)_l :y\leq G(r)\mu \}\right|+(c-1)\left(R\mu-l-\frac{d-3}{2}\right)+O(\mu^{1/3}). \end{equation*} By the second mean value theorem and $H'(y)\asymp y^{-1/3}$ (see \cite[Lemma 4.6]{GMWW:2019}) the second integral is of size $O(\mu^{1/3})$. Hence \begin{align*} \eqref{sum1}=&\left|\{(x,y)\in(\mu \Omega)_l :y\leq G(r)\mu \}\right|+\left(c-\frac{1}{2}\right)\left(R\mu-l-\frac{d-3}{2}\right)\\ &-L_l+O(\mu^{1/3}). \end{align*} For the sum \eqref{sum2}, the part with $k\leq \mu^{2\theta^*}$ is of size $O(\mu^{2\theta^*})$ and the rest part is divided into sums of the form \begin{equation} \sum_{M<k\leq M'\leq 2M}\psi\left(NF\left(\frac{k}{M}\right)\right),\label{psisumNC} \end{equation} where $M=2^j\mu^{2\theta^*}\lesssim\mu$, $N=M^{2/3}\mu^{1/3}$ and \begin{equation} F(x)=\left(\frac{\mu}{M}\right)^{2/3}H\left(\frac{M}{\mu} x-\frac{c}{\mu}\right)-\frac{1}{N}\left(\frac{d}{2}-1\right).\label{F1} \end{equation} By Theorem \ref{expo sum} with $T=MN$, \begin{equation*} \eqref{psisumNC}\lesssim_{\epsilon} \left(M^{5/3}\mu^{1/3}\right)^{\theta^*+\epsilon}. \end{equation*} Summing over $j$ yields \begin{equation*} \eqref{sum2}\lesssim_{\epsilon} \mu^{2\theta^*+2\epsilon}. \end{equation*} Combining the above results of \eqref{sum1} and \eqref{sum2} yields \eqref{ND1NC} when $l+d/2-1<r\mu$. To prove \eqref{ND1NC} for $l+d/2-1\geq r\mu$, we just need to replace the summation domain in \eqref{sum3} by $c<k\leq G(\frac{l+d/2-1}{\mu})\mu+c$ and argue similarly as above. We omit the details. If $r>0$ we also need to compute $\mathscr{Q}_{\Omega_2,l}(\mu)$. It is obvious that $\mathscr{Q}_{\Omega_2,l}(\mu)=0$ if $l+d/2-1\geq r\mu$. We will assume that $l+d/2-1<r\mu$ below. We first prove \eqref{ND2NC} with the error term $O(\mu^{2/3})$. We now count lattice points along lines parallel to the $y$-axis. Then \begin{align} \mathscr{Q}_{\Omega_2,l}(\mu) =&\sum_{l\le n< r\mu-d/2+1}\left(\left\lfloor\mu G\left(\frac{n+d/2-1}{\mu}\right)+c\right\rfloor- \left\lfloor G(r)\mu+c\right\rfloor\right)\nonumber\\ =&\sum_{l\le n< r\mu-d/2+1}\left(\mu G\left(\frac{n+d/2-1}{\mu}\right)-\mu G\left(r\right) \right)\label{sum4}\\ &-\sum_{l\le n< r\mu-d/2+1}\psi\left(\mu G\left(\frac{n+d/2-1}{\mu}\right)+c\right)+L_l+O(1).\label{sum5} \end{align} Applying the Euler--Maclaurin summation formula to the sum in \eqref{sum4} and the second mean value theorem yields \begin{equation*} \eqref{sum4}=\left|\{(x,y)\in(\mu \Omega)_l :y> G(r)\mu \}\right|+O(1). 
\end{equation*} Applying van der Corput's second derivative estimate (see \cite[Satz 5]{corput:1923}) to the sum in \eqref{sum5} with $f(x)=\mu G(\frac{x+d/2-1}{\mu})+c$ yields that the sum is bounded by \begin{align*} & \int_{l}^{r\mu-d/2}\left|f''(x)\right|^{\frac{1}{3}}\,\textrm{d}x+\max_{l\leq x\leq r\mu-d/2}|f''(x)|^{-\frac{1}{2}}+1\\ \leq & \mu^{2/3}\int_{0}^{r}\left|G''(x)\right|^{\frac{1}{3}}\,\textrm{d}x+\mu^{1/2}\max_{0\leq x\leq r}|G''(x)|^{-\frac{1}{2}}+1\lesssim \mu^{2/3}. \end{align*} We therefore get \eqref{ND2NC} easily. We next prove \eqref{ND2NC} with the error term $O_{\epsilon}(\mu^{2\theta^*+\epsilon})$ in the rational case. Denote the rational slope of the tangent line at $J$ by $G'(r)=-a/q<0$ with $a$, $q$ positive and relatively prime. Let \begin{equation*} \mathcal{T}:=\left\{(x,y)\in\mathbb{R}^2: 0\leq x<r, G(r)< y\leq G(r)+aq^{-1}(r-x)\right\} \end{equation*} denote the triangle bounded by the $y$-axis, $y=G(r)$ and the tangent line at $J$, and \begin{equation*} \Omega^*_2:=\left\{(x,y)\in\mathbb{R}^2: 0\leq x<r,G(x)< y\leq G(r)+aq^{-1}(r-x)\right\} \end{equation*} be the domain $\mathcal{T}\setminus \Omega_2$. Thus \begin{equation}\label{label-ND2NC} \mathscr{Q}_{\Omega_2,l}(\mu)=\mathscr{Q}_{\mathcal{T},l}(\mu)-\mathscr{Q}_{\Omega_2^*,l}(\mu), \end{equation} where the counting functions $\mathscr{Q}_{\mathcal T,l}(\mu)$ and $\mathscr{Q}_{\Omega_2^{*},l}(\mu)$ are both defined by \eqref{countfcn3} with $\Omega_i$ replaced by $\mathcal T$ and $\Omega_2^{*}$ respectively. Concerning $\mathscr{Q}_{\Omega_2^{*},l}(\mu)$, we count points along lines $l_t$: \begin{equation*} a(x-d/2+1)+q(y+c)=t,\quad t\in \mathbb{Z}. \end{equation*} Observe that $l_t$ contains points from the lattice $\mathbb{Z}^2+(d/2-1,-c)$ if and only if $t\in \mathbb{Z}$. The line $l_t$ intersects the lower boundary curve of $\mu\Omega_2^*$ between endpoints $(l+d/2-1,\mu G((l+d/2-1)/\mu))$ and $(r\mu,G(r)\mu)$ at a unique point if $t\in[\mu q\beta_l,\mu q\gamma]$, where \begin{equation*} \beta_l=G\left(\frac{l+d/2-1}{\mu}\right)+\left(c+\frac{a}{q}l\right)\frac{1}{\mu} \end{equation*} and \begin{equation*} \gamma=G(r)+\frac{a}{q}r+\left(c+\frac{a}{q}\left(1-\frac{d}{2}\right)\right)\frac{1}{\mu}. \end{equation*} We denote the $x$-coordinate of the intersection point by $\mu T(t/(\mu q))$, where $T$ is a strictly increasing function from $[\beta_l,\gamma]$ to $[(l+d/2-1)/\mu,r]$. It satisfies the equation \begin{align}\label{definition-TNC} G(T(y))+\frac aq T(y)+\left(c+\frac{a}{q}\left(1-\frac{d}{2}\right)\right)\frac{1}{\mu}=y, \qquad y\in[\beta_l,\gamma]. \end{align} For every $t_0\in\{0,\dots,q-1\}$ let $x_0\in\{0,\dots,q-1\}$ be such that $ax_0\equiv t_0\pmod q$. If $t\equiv t_0\pmod q$, then the points $(n+d/2-1,k-c)$ on $l_t$ are those with $n=x_0+qm$, $m\in\mathbb{Z}$. Therefore the number of $(\nu,k-c)$ with $n\ge l$ in $\mu\Omega_2^*\cap l_t$ is equal to the number of integers $m$ such that \begin{equation*} \frac{l-x_0}{q}\leq m<\frac{1}{q}\left(\mu T\left(\frac{t}{\mu q}\right)-x_0-\frac{d}{2}+1\right).
\end{equation*} Thus \begin{equation*} \mathscr{Q}_{\Omega_2^*,l}(\mu)=S_1+S_2+S_3 \end{equation*} with \begin{equation*} S_1:=\sum_{\mu q\beta_l<t\leq\mu q \gamma} \left(\frac{\mu}{q}T\left(\frac{t}{\mu q}\right)-\frac{l+d/2-1}{q}\right), \end{equation*} \begin{equation*} S_2:=\sum_{t_0=0}^{q-1}\psi\left(\frac{x_0-l}{q}\right)\left(\mu(\beta_l-\gamma)+O(1)\right) \end{equation*} and \begin{equation*} S_3:=\sum_{t_0=0}^{q-1}\sum_{\mu \beta_l-t_0q^{-1}<m\leq\mu \gamma-t_0q^{-1}} \psi\left(-\frac{\mu}{q} T\left(\frac{t_0+qm}{\mu q}\right)+\frac{x_0+d/2-1}{q}\right). \end{equation*} Applying the Euler--Maclaurin summation formula and \eqref{definition-TNC} to the sum $S_1$ yields that \begin{align*} S_1=&\left|\left\{(x,y)\in \mu \Omega_2^* : x\geq l+\frac{d}{2}-1\right\}\right|-\frac{\psi(\mu q \gamma)}{q}\left(r\mu-\left(l+\frac{d}{2}-1\right)\right)\\ &+\frac{1}{q^2}\int_{\mu q\beta_l}^{\mu q \gamma}T'\left(\frac{x}{\mu q}\right)\psi(x)\,\mathrm dx. \end{align*} The last integral is of order $O(\mu^{1/3})$. Indeed, by splitting it into two parts over $[\mu q\beta_l,\mu q \gamma-1)$ and $[\mu q \gamma-1,\mu q \gamma]$ respectively and using the second mean value theorem and \cite[Lemma 4.7 and (4.15) on P. 33]{GMWW:2019}, we have it bounded by \begin{equation*} \sup_{\beta_l\leq y\leq\gamma-1/(\mu q)}\left|T'(y)\right|+\mu\left(T(\gamma)-T\left(\gamma-\frac1{\mu q}\right)\right)\lesssim\mu^{1/3}. \end{equation*} Since \begin{align}\label{complete-psi-sumNC} \sum_{m=0}^{q-1}\psi\left((x+m)/q\right)=\psi(x), \end{align} we have \begin{equation*} S_2=\mu(\gamma-\beta_l)/2+O(1). \end{equation*} In the inner sum of $S_3$, the part with $\lfloor \mu \gamma\rfloor-\mu^{2\theta^*}<m\leq \mu \gamma$ is of order $O(\mu^{2\theta^*})$ trivially and the rest part is divided into sums of the form \begin{equation} \sum_{\lfloor \mu \gamma\rfloor-2M\leq \lfloor \mu \gamma\rfloor-M'< m\leq \lfloor \mu \gamma\rfloor-M}\psi\left(-\frac{\mu}{q} T\left(\frac{t_0+qm}{\mu q}\right)+\frac{x_0+d/2-1}{q}\right) \label{sum7} \end{equation} with $M=2^j\mu^{2\theta^*}\lesssim \mu$. Do a substitution $\widetilde{m}:=\lfloor\mu \gamma\rfloor- m$ and rewrite it as \begin{equation*} \eqref{sum7}=\sum_{M\leq \widetilde{m}<M'\leq 2M}\psi\left(NF\left(\frac{\widetilde{m}}{M}\right)\right), \end{equation*} where $N=\mu^{1/3}M^{2/3}q^{-1}$ and \begin{equation} F(x)=-\left(\frac{\mu}{M}\right)^{2/3}T\left(\gamma-\frac{M}{\mu}x+\frac{c_0}{\mu}\right)+\frac{x_0+d/2-1}{qN}, \quad x\in[1,2], \label{F2} \end{equation} with $c_0=t_0/q+\lfloor\gamma\mu\rfloor-\gamma\mu$. Based on the size of derivatives of $T$ (in \cite[Lemma 4.7]{GMWW:2019}), we have $|F^{(j)}(x)|\asymp 1$ for $j=1,2,3$. By Theorem \ref{expo sum} with $T=MN$, we have \begin{equation*} \eqref{sum7}\lesssim_{\epsilon} \left(M^{5/3}\mu^{1/3}\right)^{\theta^*+\epsilon}. \end{equation*} Summing over $j$ gives \begin{equation*} S_3\lesssim_{\epsilon} \mu^{2\theta^*+2\epsilon}. \end{equation*} Combining results of $S_1$, $S_2$ and $S_3$ yields \begin{equation} \begin{split}\label{ND2-label2NC} \mathscr{Q}_{\Omega_2^*,l}(\mu)=&\left|\left\{(x,y)\in \mu \Omega_2^* : x\geq l+\frac{d}{2}-1\right\}\right|-\frac{r\mu-l}{q}\psi(\mu q \gamma)\\ &+\frac{\mu}{2}(\gamma-\beta_l)+O_{\epsilon}\left(\mu^{2\theta^*+\epsilon}\right). 
\end{split} \end{equation} Concerning $\mathscr{Q}_{\mathcal{T},l}(\mu)$, we notice that \begin{align*} \mathscr{Q}_{\mathcal{T},l}(\mu)=&\sum_{l\le n< r\mu-d/2+1}\left(\left\lfloor G(r)\mu-\frac{a}{q}(\nu-r\mu)+c\right\rfloor- \left\lfloor G(r)\mu+c\right\rfloor\right)\\ =&\sum_{l\leq n<r\mu-d/2+1}\left(\frac{a}{q}\left(r\mu-\nu \right)-\psi\left(\gamma\mu-\frac{a}{q}n\right)\right)+L_l+O(1). \end{align*} By the Euler--Maclaurin formula, the first sum is equal to \begin{equation*} \left|\left\{(x,y)\in \mu\mathcal{T} : x\geq l+\frac{d}{2}-1\right\}\right|-\frac{a}{2q}\left(l+\frac{d}{2}-1-r\mu\right)+O(1). \end{equation*} By the relation \eqref{complete-psi-sumNC}, the second sum is equal to $(r\mu-l)q^{-1}\psi(\mu q\gamma)+O(1)$. Hence \begin{equation} \begin{split} \mathscr{Q}_{\mathcal{T},l}(\mu)=&\left|\left\{(x,y)\in \mu\mathcal{T} : x\geq l+\frac{d}{2}-1\right\}\right|-\frac{r\mu-l}{q}\psi(\mu q\gamma)\\ &-\frac{a}{2q}\left(l+\frac{d}{2}-1-r\mu\right)+L_l+O(1). \end{split} \label{sum6} \end{equation} By \eqref{label-ND2NC}, \eqref{ND2-label2NC} and \eqref{sum6} we readily get \eqref{ND2NC} with the error term $O_{\epsilon}(\mu^{2\theta^*+\epsilon})$ in the rational case. This finishes the proof of Lemma \ref{theorem:no-in-D}. \end{proof} We next study the lattice counting function $\mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}$ associated with the band. \begin{lemma}\label{no-in-band} For $r>0$ and $d\geq 2$, we have \begin{equation*} \mathscr{Q}_{\mathcal{B}_{r\mu}^{*},l}=\frac{1}{4}\left(r\mu-l-\frac{d-3}{2}\right)+O\left(\mu^{2/3} \right) \end{equation*} with $0\leq l\leq r\mu-d/2+1$ and $*\in\{ \mathtt{D},\mathtt{N}\}$. If the boundary curve of $\Omega$ has a tangent in $J$ with rational slope, the remainder estimate can be improved to \begin{equation*} O_{\epsilon}\left(\mu^{2\theta^*+\epsilon}\right). \end{equation*} \end{lemma} \begin{proof} It suffices to find the size of the set \begin{equation*} \left\{(\nu,k-c)\in \mathcal{B}_{r\mu}^{\mathtt{D}}:n\in \mathbb{Z}_+,n\ge l, k\in \mathbb N \right\}, \end{equation*} since by taking $c=0$ and $3/4$ we get $\mathscr{Q}_{\mathcal{B}_{r\mu}^{\mathtt{D}},l}$ and $\mathscr{Q}_{\mathcal{B}_{r\mu}^{\mathtt{N}},l}$ respectively. We just need to count the number of points $(\nu,k-c)$ in $\mu\Omega_2$ and in $\mu\Omega_2\cup \mathcal{B}_{r\mu}^{\mathtt{D}}$, and then find their difference. However, the former number is already given by \eqref{ND2NC}, while the latter number can be easily obtained by repeating the computation of \eqref{ND2NC} in the proof of Lemma \ref{theorem:no-in-D}. \end{proof} In the last part of this subsection, we use Lemma \ref{theorem:no-in-D} to derive an asymptotics of $\mathscr{Q}_{\Omega}(\mu)$; combining it with Lemma \ref{no-in-band} and Proposition \ref{difference4}, we then derive an asymptotics of $\mathscr{P}_{\Omega}(\mu)$. To conclude, the shell part of Theorem \ref{specthm} follows immediately from the asymptotics of $\mathscr{P}_{\Omega}(\mu)$ and Proposition \ref{difference1}.
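We also record a quick consistency check of the leading coefficient in the theorem below: the $d$-dimensional shell $\{x\in\mathbb{R}^d: r<|x|<R\}$ has volume $\omega_d(R^d-r^d)$, where $\omega_d=\pi^{d/2}/\Gamma(d/2+1)$ is the volume of the unit ball, so the classical Weyl main term for the shell is
\begin{equation*}
\frac{\omega_d}{(2\pi)^{d}}\,\omega_d\left(R^d-r^d\right)\mu^d=\frac{R^d-r^d}{2^{d}(\Gamma(d/2+1))^{2}}\,\mu^d,
\end{equation*}
which is exactly the first term appearing below.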
\begin{theorem}\label{thm Ndu} For $0<r<R$ and $d\geq 2$, we have \begin{equation*} \mathscr{Q}_{\Omega}^*(\mu)=\frac{R^d-r^d}{2^{d}(\Gamma(d/2+1))^{2} }\mu^d\mp\frac{R^{d-1}}{2\cdot(d-1)!}\mu^{d-1}+O\left(\mu^{d-2+\frac{2}{3}}\right) \end{equation*} and \begin{equation*} \mathscr{P}_{\Omega}^*\textbf{}(\mu)=\frac{R^d-r^d}{2^{d}(\Gamma(d/2+1))^{2} }\mu^d\mp\frac{R^{d-1}+r^{d-1}}{2\cdot(d-1)!}\mu^{d-1}+O\left(\mu^{d-2+\frac{2}{3}}\right), \end{equation*} where we take the sign ``$-$'' (resp., ``$+$'') when $*=\mathtt{D}$ (resp., $*=\mathtt{N}$). If the boundary curve of $\Omega$ has a tangent in $J$ with rational slope, both remainder estimates can be improved to \begin{equation*} O_{\epsilon}\left(\mu^{d-2+2\theta^*+\epsilon}\right). \end{equation*} \end{theorem} \begin{proof} We claim that the following asymptotics hold \begin{equation} \sum_{0\leq l<R\mu-\frac{d-2}{2}} \left(m_{l}^d-m_{l-1}^d\right)\left|(\mu\Omega)_l\right|=\frac{R^d-r^d}{2^{d}\left(\Gamma\left(d/2+1\right)\right)^{2}} \mu^d+O\left(\mu^{d-2} \right) \label{mainterm} \end{equation} and \begin{equation} \sum_{0\leq l< a\mu-\frac{d-2}{2}}\left(m_{l}^d-m_{l-1}^d\right)\left(a\mu-l-\frac{d-3}{2}\right)=\frac{2(a\mu)^{d-1}}{(d-1)!}+O\left( \mu^{d-2}\right) \label{area} \end{equation} with $a>0$. It is easy to check that, for $d\geq 3$ and $l\geq 1$, \begin{equation} m_{l}^d-m_{l-1}^d=\frac{2}{(d-3)!}\left(l+\frac{d-3}{2}\right)^{d-3}+ \mathcal{E}_l, \label{999} \end{equation} where $\mathcal{E}_l=0$ if $3\leq d\leq 5$ and $\mathcal{E}_l=O(l^{d-5})$ if $d\geq 6$, and that $m_{0}^d-m_{-1}^d=1$. We also know that $m_{l}^2-m_{l-1}^2$ equals $1$ if $l=0,1$, and $0$ otherwise. Based on the above claim (with $a=R,r$) and estimates, Lemmas \ref{theorem:no-in-D} and \ref{no-in-band} and Proposition \ref{difference4}, it is straightforward to verify the desired asymptotics of $\mathscr{Q}_{\Omega}(\mu)$ and $\mathscr{P}_{\Omega}(\mu)$. We will assume $d\geq 3$ below. When $d=2$ the above claim follows directly from Lemma \ref{theorem:no-in-D} and the fact $|\Omega|=(R^2-r^2)/8$. We first verify the asymptotics \eqref{mainterm}. By \eqref{999} its left side is equal to \begin{equation} \sum_{0<l< R\mu-\frac{d-2}{2}}f(l)+\sum_{0< l< R\mu-\frac{d-2}{2}} \mathcal{E}_l \left|(\mu\Omega)_l\right|+|\Omega|\mu^2+O(\mu), \label{mainterm1} \end{equation} where \begin{equation*} f(x)=\frac{2}{(d-3)!}\left(x+\frac{d-3}{2}\right)^{d-3}\int_{x+\frac{d-3}{2}}^{R\mu} \mu G\left(\frac{t}{\mu} \right) \, \textrm{d}t. \end{equation*} It is trivial that the second sum in \eqref{mainterm1} is at most $O(\mu^{d-2})$. By the Euler--Maclaurin summation formula, the first sum in \eqref{mainterm1} is equal to \begin{equation} \int_{0}^{R\mu-\frac{d-2}{2}} f(t) \, \textrm{d}t+\int_{0}^{ R\mu-\frac{d-2}{2}} \psi(t)f'(t) \, \textrm{d}t-\frac{1}{2}f(0)+O\left(\mu^{d-3}\right).\label{mainterm2} \end{equation} By using integration by parts, a change of variables and properties of the beta and gamma functions, we get \begin{align*} \int_{0}^{R\mu-\frac{d-2}{2}} f(t) \, \textrm{d}t&=\frac{2\mu^d}{(d-2)!}\int_{0}^{R} t^{d-2}G(t)\, \textrm{d}t+O\left(\mu^{d-2} \right) \\ &=\frac{R^d-r^d}{2^{d}\left(\Gamma\left(d/2+1\right)\right)^{2}} \mu^d+O\left(\mu^{d-2} \right). \end{align*} By the second mean value theorem, the second term in \eqref{mainterm2} is of size $O(\mu^{d-2})$. It is also clear that $f(0)/2$ is of size $O(\mu^2)$ if $d\geq 4$ and equal to $|\Omega|\mu^2$ if $d=3$. Plugging these results in \eqref{mainterm1} gives us \eqref{mainterm}. 
It remains to verify the asymptotics \eqref{area}. By \eqref{bino-bound} and summation by parts, the left side of \eqref{area} is equal to \begin{equation*} 2\sum_{0\leq l\leq \lfloor a\mu-\frac{d+2}{2}\rfloor}\binom{l+d-2}{d-2}+O\left( \mu^{d-2}\right). \end{equation*} Simplifying the sum of binomial coefficients reduces this to \begin{equation*} 2\binom{\lfloor a\mu-\frac{d+2}{2}\rfloor+d-1}{d-1}+O\left( \mu^{d-2}\right)=\frac{2(a\mu)^{d-1}}{(d-1)!}+O\left( \mu^{d-2}\right), \end{equation*} as desired. \end{proof} \subsection{Weyl's law for balls}\label{sec5.2} In order to obtain the asymptotics of $\mathscr{N}_\mathbb{B}(\mu)$, based on the results of Subsection \ref{subsec4.2}, we need to study the lattice counting function $\mathscr{Q}_{\Omega_0}(\mu)$. By Lemma \ref{theorem:no-in-D} with $r=0$, as argued in the proof of Theorem \ref{thm Ndu}, we obtain the following. \begin{theorem} \label{thm5.5} For $d\geq 2$, we have \begin{equation*} \mathscr{Q}_{\Omega_0}^*(\mu)=\frac{R^d}{2^{d}(\Gamma(d/2+1))^{2} }\mu^d\mp\frac{R^{d-1}}{2\cdot(d-1)!}\mu^{d-1}+O_{\epsilon}\left(\mu^{d-2+2\theta^*+\epsilon}\right), \end{equation*} where we take the sign ``$-$'' (resp., ``$+$'') when $*=\mathtt{D}$ (resp., $*=\mathtt{N}$). \end{theorem} Finally, the ball part of Theorem \ref{specthm} follows immediately from Theorem \ref{thm5.5} and Proposition \ref{prop4.7}. \begin{remark} As a final remark on Weyl's law for disks, we would like to point out some misunderstandings in a recent paper \cite{Huxley:2024} by Huxley. On \cite[P. 106]{Huxley:2024}, Huxley wrote ``\textit{A recent paper [6] by Guo, M\"{u}ller, Wang and Wang considers the more difficult problem of the Dirichlet eigenvalues of an annulus for which the ratio $r/R$ of the radii is a rational number... They claim that the estimate for Dirichlet eigenvalues of the disc follows by letting $r$ tend to zero...}''. This is not what Guo, M\"{u}ller, Wang and Wang claimed in their paper. On the contrary, they stated clearly (on \cite[P. 5]{GMWW:2019}, right above Theorem 1.4) that ``\textit{Its rigorous proof relies on the reduction step from the eigenvalue counting to the lattice point counting (see [6, Section 3], [10, Section 6] and its variant in Section 3), Theorem 4.1 (together with the symmetry of the domain D) and...}''. In fact, the focus in Guo, M\"{u}ller, Wang and Wang \cite{GMWW:2019} is the annulus case rather than the disk case, whose proof is similar but much simpler. Hence \cite{GMWW:2019} did not provide a detailed proof for the disk case but only a one-sentence sketch. A subsequent paper \cite{Guo} (which was not quoted in Huxley \cite{Huxley:2024}) focuses on extending the result on disks to high-dimensional balls for the Dirichlet Laplacian. A detailed proof for the disk case is essentially contained therein. We also note that \cite{GMWW:2019} obtained an improved remainder estimate for annuli under the assumption that $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$ rather than under the assumption $r/R\in\mathbb{Q}$ as Huxley stated. \end{remark} \section{Estimates of rounding error sums} \label{sec6} This section is devoted to estimating sums of rounding errors. By applying the result of Theorem \ref{expo sum} to the analysis in the previous section, we can obtain the refined bounds presented in this paper. Recently, Li and the last author \cite{LY2023} made progress on the Gauss circle problem (among other results) by improving Huxley's exponent $131/208=0.6298\cdots$ (in \cite{Huxley:2003}) to $2\theta^*=0.6289\cdots$.
Their work contains a new exponential sum estimate \cite[Theorem 4.2]{LY2023}. Let us explain how the estimate of exponential sums was obtained. It was first observed by Bourgain \cite{Bourgain:2017} that the decoupling theory of harmonic analysis can be brought into the study of the first spacing problem. He derived an essentially sharp $L^{12}$ estimate of an exponential sum whose Fourier support is a curve in $\mathbb R^4$. This exponent $12$ is essentially optimal, indicating that the first spacing problem in estimating $|\zeta(\frac{1}{2}+it)|$ cannot be further improved. Then, in \cite[Section 5]{BW2018}, Bourgain and Watt showed that the double large sieve inequality can be generalized by combining the locally constant property of exponential sums and H\"older's inequality. Next, in \cite{BWpreprint}, the same authors tried to bring this new idea of using the variant double large sieve inequality into the study of the first spacing problem of the circle and divisor problems. However, there was a technical issue with their (3.11): a positive power of $N$ was missing when they applied the broad-and-narrow dichotomy argument, and thus their result on the first spacing problem in \cite{BWpreprint} does not hold. Subsequently, in \cite{LY2023}, Li and the last author applied Bourgain and Watt's variant of the double large sieve to generalize the setting from $G_4$ to $G_q$, and obtained a key estimate of $G_q$ with $q$ slightly larger than $4$ (while in contrast Huxley used a bound of $G_4$ in \cite{Huxley:2003}). They combined recent advances in the small cap decoupling theory for cones (made by Guth and Maldague \cite{GM:2022}) with results on some diophantine counting problems to obtain better bounds in the first spacing problem. Finally, by combining this with Huxley's work on the second spacing problem, they obtained their improved pointwise estimate of two-dimensional exponential sums. We intended to apply \cite[Theorem 4.2]{LY2023} to the lattice point counting problems encountered in this paper. Unfortunately, the assumption \begin{equation} \label{ex cond} \left| F'(x)F^{(3)}(x)-3F''(x)^2\right| \gtrsim 1 \textrm{ for $x\asymp 1$} \end{equation} is not satisfied by the functions $F$ we have in \eqref{F1} and \eqref{F2}. There is no such problem in \cite{LY2023} because the functions $F$, encountered in the study of the circle and divisor problems, are variants of the reciprocal function $1/x$ that do satisfy the assumption \eqref{ex cond}. Moreover, assuming \eqref{ex cond} helps reduce the number of cases to be discussed. It is noted that Bourgain and Watt also made this assumption in their estimate of exponential sums in \cite{BWpreprint} for the same reason. To overcome this obstacle, inspired by Huxley's \cite[Proposition 3]{Huxley:2003}, we examined all the details of the proof of \cite[Theorem 4.2]{LY2023} and discovered that, even without the assumption \eqref{ex cond}, its conclusion still holds within a certain limited range, which suffices for our needs in this paper. Based on this discovery, the following theorem can be formulated. Essentially, the proof of this theorem is the same as those of \cite[Theorems 1.2 and 4.2]{LY2023}. For completeness, a sketch of the proof will be provided below, with a particular emphasis on clarifying why the assumption \eqref{ex cond} is unnecessary in the range \eqref{range a}. For more details, we refer the interested readers to \cite[Section 5]{BWpreprint} and \cite[Sections 4 and 5]{LY2023}.
Let $T,M$ be two large positive parameters and $F$ a real-valued function defined on $[1/2,2]$, three times continuously differentiable, satisfying $|F^{(j)}(x)|\asymp 1$ for $j=1,2,3$. We have the following bound for sums of rounding errors formed with the sawtooth function $\psi$. \begin{theorem} \label{expo sum} If $M$ is in the range \begin{equation} \label{range a} T^{\frac{141}{328}+c}\le M \le T^{\frac{1}{2}} \end{equation} for some small absolute constant $c>0$, then for any $\epsilon>0$ we have \begin{equation} \label{stf} \sum_{m=M}^{M_2} \psi\left( \frac{T}{M} F\left( \frac{m}{M} \right) \right)=O_\epsilon \left(T^{\theta^*+\epsilon}\right), \end{equation} where $M_2$ is an integer in the range $M\le M_2\le 2M-1$ and $\theta^*=0.314483\cdots$ (as defined in \cite[Definition 1.1]{LY2023}) is the negative of the unique solution in the interval $[-0.35,-0.3]$ to the equation \begin{equation} \label{definition of theta} -\frac{8}{25}x-\frac{1}{200}\left(\sqrt{2(1-14x)}-5\sqrt{-1-8x}\right)^2+\frac{51}{200}=-x. \end{equation} \end{theorem} \begin{remark} One can compare this result with Huxley's \cite[Case (A) of Proposition 3]{Huxley:2003} with $\kappa=3/10$. If $M$ falls within the range \eqref{range a}, then it lies within the range (1.8) but not within (1.12) of Huxley's Proposition 3. Consequently, the bound \eqref{stf} offers an improvement over Huxley's bound (1.25). \end{remark} \begin{proof}[Proof of Theorem \ref{expo sum}] One can follow the approach of handling the sums in \cite[(5.1) and (5.2)]{LY2023} to treat the sum in equation \eqref{stf}. First of all, one may apply to \eqref{stf} the truncated Fourier expansion of the sawtooth function \begin{equation*} \psi(t)=\text{Im} \sum_{1\le h\le Y} \frac{e(ht)}{\pi h}+O\left(\frac{1}{1+\|t\|Y}\right) \,\, \textrm{with $Y=MT^{-\theta^*}$}, \end{equation*} and then one is left to estimate the exponential sum \begin{equation*} S=\sum_{h} \phi\left(\frac{h}{H}\right) \sum_{m} \chi\left(\frac{m}{M}\right) e\left(\frac{hT}{M} F\left(\frac{m}{M}\right)\right), \end{equation*} where $H$ is a dyadic number in $[1,Y]$ and $\phi, \chi$ are simply indicator functions of subintervals of $[1/2,2]$. An application of the Bombieri--Iwaniec method transforms the estimate into that of a bilinear form \begin{equation*} \mathop{\sum}\limits_{\substack{L\le l\le 2L \\K\le k\le 2K}} e\left( \vec{x}_{\frac{a}{r}}\cdot \vec{y}_{(k,l)} \right), \end{equation*} where \begin{equation*} \vec{x}_{\frac{a}{r}}=\left( \frac{\overline{a}}{r}, \frac{\overline{a}c}{r}, \frac{1}{\sqrt{\mu r^3}}, \frac{\kappa}{\sqrt{\mu r^3}} \right) \end{equation*} (see \cite[Subsection 4.4]{LY2023} for the meaning of the notation) and \begin{equation*} \vec{y}_{(k,l)}=\left(k,lk,l\sqrt{k},\frac{l}{\sqrt{k}}\right) \end{equation*} are vectors in $\mathbb R^4$. Here we have omitted constant multiples in front and negligible error terms. Applying Bourgain and Watt's variant of the double large sieve inequality in \cite[Section 5]{BW2018} reduces the problem to the following first and second spacing problems.
The first spacing problem asks for a bound of \begin{equation*} G_q:=\left\|\sum_{k\sim K}\sum_{l\sim L} a_{kl}e(lx_1+klx_2+l\sqrt{k}x_3)\right\|_{L^q_{\#}\left[|x_1|\le 1,|x_2|\le 1,|x_3|\le \frac{1}{\eta L\sqrt{K}}\right]}, \end{equation*} where $a_{kl}$ are arbitrary coefficients such that $|a_{kl}|\le 1$, $q\ge 4$, the $L^q_{\#}$-norm is given by \begin{equation*} \|f\|_{L^p_{\#}(B)}=\left(\frac{1}{|B|}\int_{B}|f|^p\right)^{1/p}, \end{equation*} the parameters $K,L$ are integers, $\eta>0$, and they satisfy \begin{equation*} 1\le L<K\le \frac{1}{\eta}\le KL. \end{equation*} One can directly resort to \cite[Proposition 3.1]{LY2023} for this part, which does not involve the assumption \eqref{ex cond} at all. It is in this part that one can combine Guth and Maldague's work on the small cap decoupling theory for cones in \cite{GM:2022} with results on some diophantine counting problems. The second spacing problem asks for the number of pairs $(a/r, a_1/r_1)$ with $a,a_1\asymp A$ and $r,r_1\asymp Q$ such that \begin{align*} \left\|\frac{\overline{a}}{r}-\frac{\overline{a_1}}{r_1} \right\| & \lesssim \frac{1}{KL}, \\ \left\|\frac{\overline{a}c}{r}-\frac{\overline{a_1}c_1}{r_1} \right\| & \lesssim \frac{1}{L}, \\ \left| \frac{1}{\sqrt{\mu r^3}}-\frac{1}{\sqrt{\mu_1 r_1^3}} \right| & \lesssim \frac{1}{L\sqrt{K}}, \\ \left| \frac{\kappa}{\sqrt{\mu r^3}}-\frac{\kappa_1}{\sqrt{\mu_1 r_1^3}} \right| & \lesssim \frac{\sqrt{K}}{L}. \end{align*} They can be further simplified into the form \begin{align*} \left\|\frac{\overline{a}}{r}-\frac{\overline{a_1}}{r_1} \right\| & \le \Delta_1 \textrm{ with $\Delta_1$ much less than $1$,} \\ \left\|\frac{\overline{a}c}{r}-\frac{\overline{a_1}c_1}{r_1} \right\| & \le \Delta_2, \\ \left| \frac{\mu_1 r_1^3}{\mu r^3} -1 \right| & \le \Delta_3, \\ | \kappa-\kappa_1 | & \le \Delta_4 \end{align*} with parameters $\Delta_i$ properly chosen. One can then use Huxley's \cite[Lemmas 3.3 and 3.4]{Huxley:2003} for this part. As a matter of fact, the assumption \eqref{ex cond} is only needed in the case $M\gtrsim T^{\frac{181}{328}}(\log T)^{\frac{2907}{45920}}$ to ensure that these results of Huxley on the second spacing problem hold. Clearly, the parameter $M$ in \eqref{range a} falls outside this specific range. Hence this part does not rely on the assumption \eqref{ex cond} either. As a final step, by combining the results for the two parts above, one can prove \eqref{stf} in the same way as Li and the last author did for \cite[Theorem 1.2]{LY2023}. \end{proof} \appendix \section{Useful asymptotics of Bessel functions} For the convenience of the reader, we first quote Olver's uniform asymptotic expansions of Bessel functions of large order (see \cite[p. 368--369]{abram:1972} or \cite{olver:1954}): \begin{equation} J_\nu(\nu z)\sim \left(\frac{4\zeta}{1-z^2}\right)^{\!\! 1/4}\! \! \left(\frac{\mathrm{Ai}(\nu^{2/3}\zeta)}{\nu^{1/3}} \sum_{k=0}^{\infty}\frac{a_k(\zeta)}{\nu^{2k}}+\frac{\mathrm{Ai}^{\prime}(\nu^{2/3}\zeta)}{\nu^{5/3}} \sum_{k=0}^{\infty}\frac{b_k(\zeta)}{\nu^{2k}}\right), \label{jnuse111} \end{equation} \begin{equation} Y_\nu(\nu z) \sim -\left(\frac{4\zeta}{1-z^2}\right)^{\!\! 1/4}\!\!\left(\frac{\mathrm{Bi}(\nu^{2/3}\zeta)} {\nu^{1/3}}\sum_{k=0}^{\infty}\frac{a_k(\zeta)}{\nu^{2k}}+\frac{\mathrm{Bi}^{\prime}(\nu^{2/3}\zeta)}{\nu^{5/3}} \sum_{k=0}^{\infty}\frac{b_k(\zeta)}{\nu^{2k}}\right), \label{ynuse111} \end{equation} \begin{equation}\label{jnuse111NC} J_\nu'(\nu z)\sim -\frac{2}{z}\left(\frac{1-z^2 }{4\zeta}\right)^{\!\!
1/4}\!\!\left(\frac{\mathrm{Ai}^{\prime}(\nu^{2/3}\zeta)}{\nu^{2/3}} \sum_{k=0}^{\infty}\frac{d_k(\zeta)}{\nu^{2k}}+\frac{\mathrm{Ai}(\nu^{2/3}\zeta)}{\nu^{4/3}} \sum_{k=0}^{\infty}\frac{c_k(\zeta)}{\nu^{2k}}\right) \end{equation} and \begin{equation}\label{ynuse111NC} Y_\nu'(\nu z) \sim \frac{2}{z}\left(\frac{1-z^2 }{4\zeta}\right)^{\! \! 1/4}\!\!\left(\frac{\mathrm{Bi}^{\prime}(\nu^{2/3}\zeta)}{\nu^{2/3}} \sum_{k=0}^{\infty}\frac{d_k(\zeta)}{\nu^{2k}}+\frac{\mathrm{Bi}(\nu^{2/3}\zeta)} {\nu^{4/3}}\sum_{k=0}^{\infty}\frac{c_k(\zeta)}{\nu^{2k}}\right), \end{equation} where the notation $\sim$ is as defined by 3.6.15 in \cite[P. 15]{abram:1972} and $\zeta=\zeta(z)$ is given by \begin{equation} \frac{2}{3}(-\zeta)^{3/2}=\int_{1}^z\frac{\sqrt{t^2-1}}{t}\,\mathrm{d}t= \sqrt{z^2-1}-\arccos\left(\frac{1}{z}\right)\label{def-zeta1} \end{equation}or \begin{equation} \frac{2}{3}\zeta^{3/2}=\int_{z}^1\frac{\sqrt{1-t^2}}{t}\,\mathrm{d}t=\ln \frac{1+\sqrt{1-z^2}}{z}-\sqrt{1-z^2}.\label{def-zeta2} \end{equation} Here the branches are chosen so that $\zeta$ is real when $z$ is positive. $\mathrm{Ai}$ and $\mathrm{Bi}$ denote the Airy functions of the first and second kind. For the definitions and sizes of the coefficients $a_k$, $b_k$, $c_k$ and $d_k$ see \cite[p. 368--369]{abram:1972}. In particular it is known that $a_0(\zeta)=d_0(\zeta)=1$, $b_0(\zeta)$ is bounded, and if $\zeta$ is bounded above then $c_0(\zeta)$ is bounded. It is easy to check the following expansions of \eqref{def-zeta1} and \eqref{def-zeta2}. If $z\rightarrow 1+$ then \begin{equation}\label{bound-zeta+} (-\zeta(z))^{3/2}=\sqrt{2}(z-1)^{3/2}-\frac{9\sqrt{2}}{20}(z-1)^{5/2}+O\left((z-1)^{7/2}\right). \end{equation} If $z\rightarrow 1-$ then \begin{equation}\label{bound-zeta-} (\zeta(z))^{3/2}=\sqrt{2}(1-z)^{3/2}+\frac{9\sqrt{2}}{20}(1-z)^{5/2}+O\left((1-z)^{7/2}\right). \end{equation} We also have the following three lemmas about asymptotics of Bessel functions. Recall that $g(x)=\left(\sqrt{1-x^2}-x\arccos x\right)/\pi$. \begin{lemma} \label{app-1} For any $c>0$ and $\nu\ge 0$, if $x\geq \max\{(1+c)\nu, 10\}$ then \begin{equation}\label{jnasy} J_{\nu}(x)=\left(\frac{2}{\pi}\right)^{1/2}\left(x^2-\nu^2\right)^{-1/4} \left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)+\frac{\pi}{4}\right)+O_c\left(x^{-1}\right)\right), \end{equation} \begin{equation}\label{ynasy} Y_{\nu}(x)=\left(\frac{2}{\pi}\right)^{1/2}\left(x^2-\nu^2\right)^{-1/4} \left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)-\frac{\pi}{4}\right)+O_c\left(x^{-1}\right)\right), \end{equation} \begin{equation}\label{jnasyNC} J_{\nu}'(x)=\left(\frac{2}{\pi}\right)^{\! 1/2}\!\! x^{-1}\!\left(x^2-\nu^2\right)^{1/4} \!\left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)+\frac{3\pi}{4}\right)+O_c\left(x^{-1}\right)\right) \end{equation} and \begin{equation}\label{ynasyNC} Y_{\nu}'(x)=\left(\frac{2}{\pi}\right)^{\! 1/2}\!\!x^{-1}\!\!\left(x^2-\nu^2\right)^{1/4} \!\!\left(\sin\left( \pi xg\left(\frac{\nu}{x}\right)+\frac{\pi}{4}\right)+O_c\left(x^{-1}\right)\right). \end{equation} \end{lemma} These standard asymptotics can be proved by using the method of stationary phase. All proofs are similar. For example see \cite[Lemma 2.1]{Guo} for a proof of \eqref{jnasy} and \cite[Appendix A]{GMWW:2019} for a proof of \eqref{ynasy} with non-negative integers $\nu$. 
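As a quick consistency check of \eqref{jnasy}, note that for fixed $\nu$ and $x\rightarrow\infty$ the Taylor expansion of $g$ at $0$ gives $\pi x g\left(\frac{\nu}{x}\right)=x-\frac{\pi\nu}{2}+O_{\nu}\left(x^{-1}\right)$, so \eqref{jnasy} recovers the classical Hankel asymptotics
\begin{equation*}
J_{\nu}(x)=\left(\frac{2}{\pi x}\right)^{1/2}\left(\cos\left(x-\frac{\nu\pi}{2}-\frac{\pi}{4}\right)+O_{\nu}\left(x^{-1}\right)\right).
\end{equation*}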
\begin{lemma} \label{app-2} For any $c>0$ and all sufficiently large $\nu$, if $\nu < x\le(1+c)\nu$ and $xg(\nu/x)\geq 1$ then \begin{equation}\label{Bessel22} J_{\nu}(x)=\frac{(2/\pi )^{1/2}}{(x^2-\nu^2)^{1/4}}\left(\sin\left(\pi xg\left(\frac{\nu}{x}\right)+\frac{\pi}{4}\right)+O\left(\left(xg\left(\frac{\nu}{x}\right)\right)^{-1}\right)\right), \end{equation} \begin{equation}\label{Bessel221} Y_{\nu}(x)=\frac{(2/\pi )^{1/2}}{(x^2-\nu^2)^{1/4}}\left(\sin\left(\pi xg\left(\frac{\nu}{x}\right)-\frac{\pi}{4}\right)+O\left(\left(xg\left(\frac{\nu}{x}\right)\right)^{-1}\right)\right), \end{equation} \begin{equation}\label{Bessel22NC} J'_{\nu}(x)=\left(\frac{2}{\pi}\right)^{\!\! 1/2}\frac{(x^2-\nu^2)^{1/4}}{x}\left(\sin\left(\pi xg\left(\frac{\nu}{x}\right)+\frac{3\pi}{4}\right)+O\left(\left(xg\left(\frac{\nu}{x}\right)\right)^{-1}\right)\right) \end{equation} and \begin{equation}\label{Bessel221NC} Y'_{\nu}(x)=\left(\frac{2}{\pi}\right)^{\!\! 1/2}\frac{(x^2-\nu^2)^{1/4}}{x}\left(\sin\left(\pi xg\left(\frac{\nu}{x}\right)+\frac{\pi}{4}\right)+O\left(\left(xg\left(\frac{\nu}{x}\right)\right)^{-1}\right)\right). \end{equation} \end{lemma} The asymptotics in Lemma \ref{app-2} are consequences of Olver's asymptotics of Bessel functions and well-known asymptotics of Airy functions. The proofs are similar. For example see \cite[Lemma 2.2]{Guo} for a proof of \eqref{Bessel22}. The following is an analogue of the formula 9.3.4 in \cite[p. 366]{abram:1972}. \begin{lemma}\label{9.3.4analogue} For any $\epsilon>0$ and all sufficiently large $\nu$, \begin{enumerate} \item if $0\leq w\leq \nu^{\epsilon}$ then \begin{align*} J_\nu\left(\nu+w\nu^{1/3}\right)&=2^{1/3}\nu^{-1/3}\mathrm{Ai}\left(-2^{1/3}w\right)+O\left(\nu^{-1+2.25\epsilon}\right),\\ Y_\nu\left(\nu+w\nu^{1/3}\right)&=-2^{1/3}\nu^{-1/3}\mathrm{Bi}\left(-2^{1/3}w\right)+O\left(\nu^{-1+2.25\epsilon}\right),\\ J'_\nu\left(\nu+w\nu^{1/3}\right)&=-2^{2/3}\nu^{-2/3}\mathrm{Ai}'\left(-2^{1/3}w\right)+O\left(\nu^{-4/3+2.75\epsilon}\right),\\ Y'_\nu\left(\nu+w\nu^{1/3}\right)&=2^{2/3}\nu^{-2/3}\mathrm{Bi}'\left(-2^{1/3}w\right)+O\left(\nu^{-4/3+2.75\epsilon}\right); \end{align*} \item if $-\nu^{\epsilon}\leq w\leq 0$ then \begin{align*} J_\nu\left(\nu+w\nu^{1/3}\right)&=2^{1/3}\nu^{-1/3}\mathrm{Ai}\left(-2^{1/3}w\right)\left(1+O\left(\nu^{-2/3+2.5\epsilon}\right)\right),\\ Y_\nu\left(\nu+w\nu^{1/3}\right)&=-2^{1/3}\nu^{-1/3}\mathrm{Bi}\left(-2^{1/3}w\right)\left(1+O\left(\nu^{-2/3+2.5\epsilon}\right)\right),\\ J'_\nu\left(\nu+w\nu^{1/3}\right)&=-2^{2/3}\nu^{-2/3}\mathrm{Ai}'\left(-2^{1/3}w\right)\left(1+O\left(\nu^{-2/3+2.5\epsilon}\right)\right),\\ Y'_\nu\left(\nu+w\nu^{1/3}\right)&=2^{2/3}\nu^{-2/3}\mathrm{Bi}'\left(-2^{1/3}w\right)\left(1+O\left(\nu^{-2/3+2.5\epsilon}\right)\right). \end{align*} \end{enumerate} \end{lemma} \begin{proof} All these asymptotics can be proved similarly. As an example we sketch the proof of $J'_\nu$ below. See \cite[Lemma A.1]{GMWW:2019} for the proof of $J_{\nu}$ and $Y_{\nu}$. Let us first consider the case $-\nu^{\epsilon}\leq w<0$. (If $w=0$, the desired asymptotics follows immediately from \eqref{jnuse111NC}.) If we write $z=1+w\nu^{-2/3}$, then by \eqref{jnuse111NC} we have \begin{equation*} \begin{split} &J_{\nu}'\left(\nu+w\nu^{1/3}\right)=J_{\nu}'(\nu z)= \\ &-\frac{2}{z}\left(\frac{1-z^2 }{4\zeta}\right)^{\!\!
1/4}\nu^{-2/3}\left(\mathrm{Ai}'\left(\nu^{2/3}\zeta\right) \left(1+O\left(\nu^{-2}\right) \right)+\mathrm{Ai}\left(\nu^{2/3}\zeta\right)O\left(\nu^{-2/3}\right)\right), \end{split} \end{equation*} where $\zeta=\zeta(z)>0$, determined by \eqref{def-zeta2}, is such that \begin{equation} \nu^{2/3}\zeta=-2^{1/3}w+\frac{3}{10}2^{1/3}w^2 \nu^{-2/3}\left(1+O\left(|w|\nu^{-2/3}\right)\right) \label{app-3} \end{equation} implied by \eqref{bound-zeta-}. Thus \begin{equation} \left(\frac{1-z^2}{4\zeta}\right)^{1/4}=2^{-1/3}\left(1+O\left(|w|\nu^{-2/3}\right) \right).\label{firstfactor} \end{equation} By using the well-known asymptotics of $\mathrm{Ai}(r)$ and $\mathrm{Ai}'(r)$ for $r>0$ (see \cite[p. 448]{abram:1972}), we get \begin{equation*} \left|\mathrm{Ai}''\left(\eta\right)\right|=\left|\eta\mathrm{Ai}\left(\eta\right) \right|\lesssim |w|^{1/2}\left|\mathrm{Ai}'\left(-2^{1/3}w\right)\right| \end{equation*} for $-2^{1/3}w\leq \eta\leq \nu^{2/3}\zeta$. Hence, by the mean value theorem, we get \begin{equation} \mathrm{Ai}'\left(\nu^{2/3}\zeta\right)=\mathrm{Ai}'\left(-2^{1/3}w\right)+ O\left(\mathrm{Ai}'\left(-2^{1/3}w\right)\nu^{-2/3+2.5\epsilon} \right). \label{app-4} \end{equation} Applying \eqref{app-3}, \eqref{firstfactor} and \eqref{app-4} to $J_{\nu}'\left(\nu+w\nu^{1/3}\right)$ gives the desired asymptotics. To handle the case $0< w\leq \nu^{\epsilon}$ we just need to slightly modify the first part. Again we write $z=1+w\nu^{-2/3}$. The corresponding $\zeta=\zeta(z)<0$, determined by \eqref{def-zeta1}, still satisfies \eqref{app-3} and \eqref{firstfactor}. By using the well-known asymptotics of $\mathrm{Ai}(-r)$ and $\mathrm{Ai}'(-r)$ for $r>0$ (see \cite[p. 448--449]{abram:1972}), we get \begin{equation*} \mathrm{Ai}'\left(\nu^{2/3}\zeta\right)=\mathrm{Ai}'\left(-2^{1/3}w\right)+O\left(\nu^{-2/3+2.75\epsilon} \right). \end{equation*} We then easily get the desired asymptotics. \end{proof} \subsection*{Acknowledgments} Jingwei Guo would like to thank Wolfgang M\"uller for helpful communication. J. Guo is supported by the NSFC (no. 12341102). T. Jiang is supported by the University Annual Scientific Research Plan of Anhui Province (2022AH050173) and the Talent Cultivation Foundation of Anhui Normal University (2022xjxm036). Z. Wang is supported by the National Key R\&D Program of China (2020YFA0713100) and the NSFC (no. 12171446). X. Yang is partially supported by the Campus Research Board Award RB24028 of UIUC. \begin{thebibliography}{00} \bibitem{abram:1972} Abramowitz, M. and Stegun, I. A., \textit{Handbook of mathematical functions with formulas, graphs, and mathematical tables}, National Bureau of Standards Applied Mathematics Series, 55, For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C., 1964. \bibitem{Bourgain:2017} Bourgain, J., \textit{Decoupling, exponential sums and the Riemann zeta function}, J. Amer. Math. Soc. 30, no. 1, 205--224, 2017. \bibitem{BWpreprint} Bourgain, J. and Watt, N., \textit{Mean square of zeta function, circle problem and divisor problem revisited}, arXiv:1709.04340v1. \bibitem{BW2018} Bourgain, J. and Watt, N., \textit{Decoupling for perturbed cones and the mean square of $|\zeta(\frac{1}{2}+it)|$}, Int. Math. Res. Not. IMRN 2018, no. 17, 5219--5296, 2018. \bibitem{cochran:1964} Cochran, J. A., \textit{Remarks on the zeros of cross-product Bessel functions}, J. Soc. Indust. Appl. Math. 12, 580--587, 1964. \bibitem{Cochran:1966a} Cochran, J.
A.,\textit{The asymptotic nature of zeros of cross-product Bessel functions}, Quart. J. Mech. Appl. Math. 19, 511--522, 1966. \bibitem{Cochran:1966} Cochran, J. A., \textit{The analyticity of cross-product Bessel function zeros}, Proc. Cambridge Philos. Soc. 62, 215--226, 1966. \bibitem{colin:2011} Colin de Verdi\`ere, Y., \textit{On the remainder in the Weyl formula for the Euclidean disk}, S\'eminaire de th\'eorie spectrale et g\'eom\'etrie 29, 1--13, 2010--2011. \bibitem{corput:1923} van der Corput, J. G., \textit{Zahlentheoretische Absch\"{a}tzungen mit Anwendung auf Gitterpunktprobleme (German)}, Math. Z. 17, no. 1, 250--259, 1923. \bibitem{EPT:2016} Eswarathasan, S., Polterovich, I. and Toth, J.A., \textit{Smooth billiards with a large Weyl remainder}, Int. Math. Res. Not. IMRN 2016, no. 12, 3639--3677, 2016. \bibitem{FLPS:2023} Filonov, N., Levitin, M., Polterovich, I. and Sher, D., \textit{P\'{o}lya's conjecture for Euclidean balls}, Invent. Math. 234, no.1, 129--169, 2023. \bibitem{FLPS:2024} Filonov, N., Levitin, M., Polterovich, I. and Sher, D., \textit{Uniform enclosures for the phase and zeros of Bessel functions and their derivative}, SIAM J. Math. Anal. 56, no. 6, 7644--7682, 2024. \bibitem{Guo} Guo, J., \textit{A note on the Weyl formula for balls in $\mathbb{R}^d$}, Proc. Amer. Math. Soc. 149, no. 4, 1663--1675, 2021. \bibitem{GMWW:2019} Guo, J., M\"{u}ller, W., Wang, W. and Wang, Z., \textit{The Weyl formula for planar annuli}, J. Funct. Anal. 281, no. 4, Paper No. 109063, 38 pp., 2021. \bibitem{GWW2018} Guo, J., Wang, W. and Wang, Z., \textit{An improved remainder estimate in the Weyl formula for the planar disk}, J. Fourier Anal. Appl. 25, no. 4, 1553--1579, 2019. \bibitem{Gur:1992} Gurarie, D., \textit{Symmetries and Laplacians. Introduction to harmonic analysis, group representations and applications}, North-Holland Mathematics Studies, 174, North-Holland Publishing Co., Amsterdam, 1992. \bibitem{GM:2022} Guth, L. and Maldague, D., \textit{Amplitude dependent wave envelope estimates for the cone in $\mathbb{R}^3$}, arXiv:2206.01093. \bibitem{Horsley} Horsley, D. E., \textit{Bessel phase functions: calculation and application}, Numer. Math. 136, no. 3, 679--702, 2017. \bibitem{Huxley:2003} Huxley, M. N., \textit{Exponential sums and lattice points. III}, Proc. London Math. Soc. (3) 87, 591--609, 2003. \bibitem{Huxley:2024} Huxley, M. N., \textit{Lattice points and Weyl's formula for the disc}, Acta Arith. 214, 89--107, 2024. \bibitem{Ivrii1980} Ivrii, V., \textit{The second term of the spectral asymptotics for a Laplace-Beltrami operator on manifolds with boundary (Russian)}, English translation in Funct. Anal. Appl. 14, 98--106, 1980. \bibitem{Kline:1948} Kline, M., \textit{Some Bessel equations and their application to guide and cavity theory}, J. Math. Physics 27, 37--48, 1948. \bibitem{Kuznecov:1965} Kuznecov, N. V., \textit{Asymptotic formulae for eigenvalues of an elliptic membrane. (Russian)}, Dokl. Akad. Nauk SSSR 161, 760--763, 1965. \bibitem{Kuznecov:1966} Kuznecov, N. V., \textit{Asymptotic distribution of eigenfrequencies of a plane membrane in the case of separable variables. (Russian)}, Differ. Uravn. 2, 1385--1402, 1966. \bibitem{kuz:1965} Kuznetsov, N. V. and Fedosov, B. V., \textit{An asymptotic formula for eigenvalues of a circular membrane}, Differ. Uravn. 1, 1682--1685, 1965. \bibitem{Lazu:1982} Lazutkin, V. F. and Terman, D. Ya., \textit{On the estimate of the remainder term in a formula of H. Weyl (Russian)}, Funct. Anal. Appl. 15, 299--300, 1982. 
\bibitem{LY2023} Li, X. and Yang, X., \textit{An improvement on Gauss's Circle Problem and Dirichlet's Divisor Problem}, arXiv:2308.14859. \bibitem{McCann:1977} McCann, R. C., \textit{Lower bounds for the zeros of Bessel functions}, Proc. Amer. Math. Soc. 64, 101--103, 1977. \bibitem{mcmahon:1894} McMahon, J., \textit{On the roots of the Bessel and certain related functions}, Ann. of Math. 9, 23--30, 1894/95. \bibitem{Mel:1980} Melrose, R. B., \textit{Weyl's conjecture for manifolds with concave boundary}, Geometry of the Laplace operator (Proc. Sympos. Pure Math., Univ. Hawaii, Honolulu, Hawaii, 1979), pp. 257--274, Proc. Sympos. Pure Math. XXXVI, Amer. Math. Soc., Providence, R.I., 1980. \bibitem{olver:1954} Olver, F. W. J., \textit{The asymptotic expansion of Bessel functions of large order}, Philos. Trans. Roy. Soc. London. Ser. A. 247, 328--368, 1954. \bibitem{olver:1997} Olver, F. W. J., \textit{Asymptotics and Special Functions}, Academic Press, New York, 1974; reprinted by A. K. Peters, Wellesley, MA, 1997. \bibitem{RWY} Rudnick, Z., Wigman, I. and Yesha, N., \textit{Differences between Robin and Neumann eigenvalues}, Comm. Math. Phys. 388, no. 3, 1603--1635, 2021. \bibitem{watson:1966} Watson, G. N., \textit{A treatise on the theory of Bessel functions}, Reprint of the second (1944) edition, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1995. \bibitem{weyl:1912} Weyl, H., \textit{Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). (German)}, Math. Ann. 71, 441--479, 1912. \bibitem{weyl:1913} Weyl, H., \textit{\"{U}ber die Randwertaufgabe der Strahlungstheorie und asymptotische Spektralgeometrie. (German)}, J. Reine Angew. Math. 143, 177--202, 1913. \end{thebibliography} \end{document}
2412.14125v2
http://arxiv.org/abs/2412.14125v2
$η$-Ricci solitons and $η$-Einstein metrics on weak $β$-Kenmotsu $f$-manifolds
\documentclass[11pt,a4paper,twoside]{article} \usepackage{amsthm, amsfonts,amsmath} \topmargin=-18 true mm \oddsidemargin=-4 true mm \evensidemargin=-4 true mm \setlength{\textheight}{247 true mm} \setlength{\textwidth}{166 true mm} \newtheorem{theorem}{Theorem}\newtheorem{corollary}{Corollary}\newtheorem{definition}{Definition} \newtheorem{example}{Example}\newtheorem{lemma}{Lemma}\newtheorem{proposition}{Proposition}\newtheorem{remark}{Remark} \def\Ric{\operatorname{Ric}} \title{$\eta$-Ricci solitons and $\eta$-Einstein metrics \\ on weak $\beta$-Kenmotsu $f$-manifolds} \author{Vladimir Rovenski \footnote{Department of Mathematics, University of Haifa, Mount Carmel, 3498838 Haifa, Israel \newline e-mail: {\tt [email protected]}} } \begin{document} \date{} \maketitle \begin{abstract} The study is motivated by the interest in metric $f$-contact geometry and Ricci-type solitons in theoretical physics and geometry. Weak $f$-structures on a smooth manifold $M^{2n+s}\ (s>1)$ have been introduced by V.~Rovenski and R.~Wolak as a generalization of $f$-structures $(f,\xi_i,\eta^i,g)$ by K.~Yano. In~this paper, we introduce a new structure of this kind, called the weak $\beta$-Kenmotsu $f$-structure, as a generalization of the concept by K.~Kenmotsu, and explore its properties and geometrical interpretations. We show that a weak $\beta$-Kenmotsu $f$-manifold is locally a twisted product of $\mathbb{R}^s$ and a weak K\"{a}hler manifold. Our main results show that such manifolds with $\beta=const$ and equipped with an $\eta$-Ricci soliton structure whose potential vector field is either a contact vector field or collinear with $\sum_i\xi_i$ are $\eta$-Einstein~manifolds of constant scalar curvature. \textbf{Keywords}: Twisted product, $\beta$-Kenmotsu $f$-manifold, $\eta$-Einstein manifold, $\eta$-Ricci soliton \end{abstract} \section{Introduction} Contact Riemannian geometry as well as Ricci solitons -- self-similar solutions of the Ricci flow equation $\partial g/\partial t = -2\Ric_{g}$, which naturally generalize Einstein metrics -- are of growing interest to mathematicians due to their important role in theoretical physics. Some compact manifolds admit no Einstein metrics, which motivates the study of more general metrics. Cho and Kimura \cite{cho2009ricci} generalized the notion of a Ricci soliton, i.e., $\frac{1}{2}\,\pounds_{V}\,g +{\rm Ric} = \lambda\,g$ with $\lambda\in\mathbb{R}$, to an $\eta$-\textit{Ricci~soliton}: \begin{align}\label{1.1} \frac{1}{2}\,\pounds_{V}\,g +{\rm Ric} = \lambda\,g +\mu\,\eta \otimes\eta ,\quad \lambda,\mu\in\mathbb{R}, \end{align} where ${\rm Ric}$ is the Ricci tensor, $\pounds_{V}$ is the Lie derivative in the direction of the vector field $V$ and $\eta$ is a one-form. If $V$ is a Killing vector field, then \eqref{1.1} reduces to an $\eta$-\textit{Einstein metric}, \begin{align}\label{Eq-2.10s} {\rm Ric}= \lambda\,g + \mu\,\eta\otimes\eta. \end{align} Many recent papers have been motivated by the question of how interesting Ricci solitons might be for the geometry of contact metric manifolds. Some authors consider the following question, see~\cite{cho2009ricci,G2023,G-D-2020}: when does an almost contact metric manifold (e.g., a Kenmotsu manifold, see \cite{kenmotsu1972class,olszak1991normal}) equipped with an $\eta$-Ricci soliton carry an Einstein-type~metric?
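As a simple illustration of \eqref{1.1} (here we assume that $\eta$ is the contact form of a $(2n+1)$-dimensional almost contact metric structure, so that $\eta(X)=g(\xi,X)$ for a unit vector field $\xi$), taking the $g$-trace and using $\operatorname{trace}_g(\pounds_{V}\,g)=2\operatorname{div}V$ gives
\begin{align*}
\operatorname{div}V + r = (2n+1)\,\lambda+\mu,
\end{align*}
where $r$ is the scalar curvature; tracing \eqref{Eq-2.10s} in the same way gives $r=(2n+1)\lambda+\mu$.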
\smallskip A metric $f$-structure on a $(2n+s)$-dimensional smooth manifold, see~\cite{b1970,yano-1961}, is a higher dimensional analog of a contact metric structure, defined by a skew-symmetric (1,1)-tensor $f$ of rank $2n$ and orthonormal vector fields $\{\xi_i\}_{1\le i\le s}$ spanning the $s$-dimensional distribution $\ker f$, such~that \begin{align*} g({f} X,{f} Y)= g(X, Y) -\sum\nolimits_{\,i}{\eta^i}(X)\,{\eta^i}(Y),\quad \eta^i(X) = g(\xi_i, X), \quad X,Y\in\mathfrak{X}_M . \end{align*} A special class of metric $f$-manifolds, known as $\beta$-Kenmotsu $f$-manifolds (Kenmotsu manifolds when $s=1$ and $\beta=1$), can be characterized in terms of twisted products of $\mathbb{R}^s$ and a K\"{a}hler manifold, see~\cite{FP06,FP07,SV-2016}. Twisted and warped products are popular in geometry as well as in general~relativity. \smallskip In \cite{RWo_2}, we introduced new metric structures on smooth manifolds that generalize the metric $f$-structure and its satellites such as ${\cal C}$-, ${\cal S}$-, K- and $f$-K-contact structures. Such so-called ``weak'' structures (i.e., the complex structure on the contact distribution is replaced by a nonsingular skew-symmetric tensor) are useful for studying totally geodesic and totally umbilical foliations, Killing fields and $\xi$-sectional curvature, and allow us to take a fresh look at the theory of classical structures and to find new applications. In \cite{rst-43} we proved that the ${\cal S}$-structure is rigid in the class of weak ${\cal S}$-manifolds, and that a weak metric $f$-structure with parallel tensor $f$ is a weak~${\cal C}$-structure. In~\cite{Rov-splitting}, we obtained a topological obstruction to the existence of weak $f$-K-contact manifolds. In \cite{rov-127}, the concepts of $\eta$-Einstein manifolds and $\eta$-Ricci solitons were generalized for the weak metric $f$-structure. \begin{definition} \rm A weak metric $f$-manifold $M^{2n+s}({f},Q,{\xi_i},{\eta^i},g)$ (see Definition~\ref{D-basic}) is said to be \textit{$\eta$-Einstein}~if \begin{align}\label{Eq-2.10} \Ric = a\,g + b\sum\nolimits_{\,i}\eta^i\otimes\eta^i + (a+b)\sum\nolimits_{\,i\ne j}\eta^i\otimes\eta^j\ \ \mbox{for some}\ \ a,b\in C^\infty(M). \end{align} A weak metric $f$-manifold $M^{2n+s}({f},Q,{\xi_i},{\eta^i},g)$ represents an \textit{$\eta$-Ricci soliton} if \begin{align}\label{Eq-1.1} (1/2)\,\pounds_V\,g + \Ric = \lambda\,g +\mu\sum\nolimits_{\,i}\eta^i\otimes\eta^i +(\lambda+\mu)\sum\nolimits_{\,i\ne j}\eta^i\otimes\eta^j , \end{align} where $V$ is a smooth vector field on $M$ and $\lambda,\mu$ are real constants. \end{definition} In this paper, we study the question of when a weak $\beta$-Kenmotsu $f$-manifold (see Definition~\ref{D-wK}) equipped with an $\eta$-Ricci soliton structure \eqref{Eq-1.1} carries an $\eta$-Einstein metric \eqref{Eq-2.10} of constant scalar curvature, and we prove theorems generalizing some results in \cite{RP-1,rov-126}, where $s=1$. \begin{remark}\rm If $V$ is a Killing vector field, i.e., $\pounds_V\,g=0$, then \eqref{Eq-1.1} reduces to \eqref{Eq-2.10} with $a=\lambda$ and $b=\mu$. Taking the trace of \eqref{Eq-2.10} gives the scalar curvature $r = (2\,n + s)\,a + s\,b$. For $s=1$ and $Q={\rm id}$, definitions \eqref{Eq-1.1} and \eqref{Eq-2.10} are the well-known ones for almost contact metric~manifolds: \eqref{Eq-1.1}~reduces to an $\eta$-Ricci soliton \eqref{1.1}, and \eqref{Eq-2.10} reduces to an $\eta$-Einstein metric $\Ric = a\,g + b\,\eta\otimes\eta$. \end{remark} The paper consists of an introduction and three sections.
In~Section~\ref{sec:01}, we review the basics of the weak metric $f$-structure. In Section~\ref{sec:02-f-beta}, we introduce the weak $\beta$-Kenmotsu $f$-structure (the {weak Kenmotsu $f$-structure} when $\beta\equiv1$, and the {weak ${\mathcal C}$-structure} when $\beta\equiv0$), derive its fundamental properties (Theorem~\ref{T-2.0}), give its geometrical interpretation in terms of the twisted product structure (Theorem~\ref{T-2.1}), and show that a weak Kenmotsu $f$-manifold equipped with an $\eta$-Einstein metric has constant scalar curvature (Theorem~\ref{thm3.1A}). In~Section~\ref{sec:03-f-beta}, we study the interaction of the weak $\beta$-Kenmotsu $f$-structure with the $\eta$-Ricci soliton structure. We~show that if a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$ represents an $\eta$-Ricci soliton whose non-zero potential vector field $V$ is either a contact vector field (Theorem~\ref{thm3.3}) or collinear with $\sum_i\xi_i$ (Theorem~\ref{thm3.4}), then it is an $\eta$-Einstein manifold of constant scalar curvature. The~results in Sections~\ref{sec:02-f-beta}--\ref{sec:03-f-beta} can be extended to the case where $\beta$ is a smooth function on $M$. \section{Preliminaries: weak metric $f$-manifolds} \label{sec:01} Here we will review the basics of a weak metric $f$-structure as a higher dimensional analog of the weak almost contact metric structure, see \cite{RWo_2,rst-43}. First, we generalize the notion of a framed $f$-structure \cite{b1970,gy-1970,yano-1961}, called an $f.pk$-structure in \cite{FP07}. \begin{definition}\label{D-basic}\rm A~\textit{weak metric $f$-structure} on a smooth manifold $M^{2n+s}\ (n>0,\,s>1)$ is a set $({f},Q,{\xi_i},{\eta^i},g)$, where ${f}$ is a $(1,1)$-tensor of rank $2\,n$, $Q$ is a nonsingular $(1,1)$-tensor, ${\xi_i}\ (1\le i\le s)$ are characteristic vector fields and ${\eta^i}$ are 1-forms, and $g$ is a Riemannian metric on $M$, satisfying \begin{align}\label{2.1} {f}^2 = -Q + \sum\nolimits_{\,i}{\eta^i}\otimes {\xi_i},\quad {\eta^i}({\xi_j})=\delta^i_j,\quad Q\,{\xi_i} = {\xi_i}, \\ \label{2.2} g({f} X,{f} Y)= g(X,Q\,Y) -\sum\nolimits_{\,i}{\eta^i}(X)\,{\eta^i}(Y),\quad X,Y\in\mathfrak{X}_M, \end{align} and $M^{2n+s}({f},Q,{\xi_i},{\eta^i},g)$ is called a \textit{weak metric $f$-manifold}. \end{definition} Assume that the distribution ${\cal D}:=\bigcap_{\,i}\ker{\eta^i}$ is ${f}$-invariant, thus ${\cal D}=f(TM)$, $\dim{\cal D}=2\,n$ and \[ {f}\,{\xi_i}=0,\quad {\eta^i}\circ{f}=0,\quad \eta^i\circ Q=\eta^i,\quad [Q,\,{f}]=0 . \] By the above, the distribution ${\cal D}^\bot=\ker f$ is spanned by $\{\xi_1,\ldots,\xi_s\}$ and is invariant for $Q$. Define a (1,1)-tensor $\tilde Q$ by $Q = {\rm id} + \tilde{Q}$, and note that $[\tilde{Q},{f}]=0$ and $\eta^i\circ\widetilde Q=0$. We also obtain ${f}^3+{f} = -\tilde{Q}\,{f}$. A weak metric $f$-structure $({f},Q,{\xi_i},{\eta^i})$ is called {\it normal} if the following tensor is zero: \begin{align*} {\cal N}^{\,(1)}(X,Y) = [{f},{f}](X,Y) + 2\sum\nolimits_{\,i}d{\eta^i}(X,Y)\,{\xi_i},\quad X,Y\in\mathfrak{X}_M , \end{align*} where the Nijenhuis torsion of a (1,1)-tensor ${S}$ and the exterior derivative of a 1-form ${\omega}$ are given~by \begin{align*}\notag & [{S},{S}](X,Y) = {S}^2 [X,Y] + [{S} X, {S} Y] - {S}[{S} X,Y] - {S}[X,{S} Y],\quad X,Y\in\mathfrak{X}_M, \\ & d\omega(X,Y) = \frac12\,\{X({\omega}(Y)) - Y({\omega}(X)) - {\omega}([X,Y])\},\quad X,Y\in\mathfrak{X}_M.
\end{align*} Using the Levi-Civita connection $\nabla$ of $g$, one can rewrite $[S,S]$ as \begin{align}\label{4.NN} [{S},{S}](X,Y) = ({S}\nabla_Y{S} - \nabla_{{S} Y}{S}) X - ({S}\nabla_X{S} - \nabla_{{S} X}{S}) Y . \end{align} The following tensors ${\cal N}^{\,(2)}_i, {\cal N}^{\,(3)}_i$ and ${\cal N}^{\,(4)}_{ij}$ on weak metric $f$-manifolds, see \cite{rst-43, rov-127}, are well known in the classical theory, see \cite{b1970}: \begin{align*} {\cal N}^{\,(2)}_i(X,Y) &= (\pounds_{{f} X}\,{\eta^i})(Y) - (\pounds_{{f} Y}\,{\eta^i})(X) =2\,d{\eta^i}({f} X,Y) - 2\,d{\eta^i}({f} Y,X) , \\ {\cal N}^{\,(3)}_i(X) &= (\pounds_{{\xi_i}}{f})X = [{\xi_i}, {f} X] - {f} [{\xi_i}, X],\\ {\cal N}^{\,(4)}_{ij}(X) &= (\pounds_{{\xi_i}}\,{\eta^j})(X) = {\xi_i}({\eta^j}(X)) - {\eta^j}([{\xi_i}, X]) = 2\,d{\eta^j}({\xi_i}, X) . \end{align*} \begin{remark} \rm Let $M^{2n+s}(f,Q,\xi_i,\eta^i,g)$ be a weak metric $f$-manifold. Consider the product manifold $\bar M = M^{2n+s}\times\mathbb{R}^s$, where $\mathbb{R}^s$ is a Euclidean space with a basis $\partial_1,\ldots,\partial_s$. Define tensor $J$ on $\bar M$ putting $J(X, \sum\nolimits_{\,i}a^i\partial_i) = (fX - \sum\nolimits_{\,i}a^i\xi_i, \sum\nolimits_{\,j}\eta^j(X)\partial_j)$, where $a_i\in C^\infty(M)$. The~tensors ${\cal N}^{\,(1)}, {\cal N}^{\,(2)}_i, {\cal N}^{\,(3)}_i, {\cal N}^{\,(4)}_{ij}$ appear when we derive the integrability condition $[J, J]=0$ and express the normality condition ${\cal N}^{\,(1)}=0$. \end{remark} For a weak metric $f$-structure, the tensor ${f}$ is skew-symmetric and $Q$ is self-adjoint. Putting $Y={\xi_i}$ in \eqref{2.2} and using $Q\,{\xi_i}={\xi_i}$, we get ${\eta^i}(X)=g(X,{\xi_i})$. Thus, ${\xi_i}$ is $g$-orthogonal to ${\cal D}$. Therefore, $TM$ splits as complementary orthogonal sum of its subbundles ${\cal D}$ and ${\cal D}^\bot$. A~distribution ${\cal D}^\bot\subset TM$ (integrable or not) is {totally geodesic} if and only if its second fundamental form vanishes, i.e., $\nabla_X Y+\nabla_Y X\in{\cal D}^\bot$ for any vector fields $X,Y\in{\cal D}^\bot$ -- this is the case when {any geodesic of $M$ that is tangent to ${\cal D}^\bot$ at one point is tangent to ${\cal D}^\bot$ at all its points}. According to the Frobenius theorem, any involutive distribution is integrable, i.e., it is tangent to the leaves of the foliation. Any integrable and totally geodesic distribution determines a totally geodesic~foliation. \begin{proposition}[see \cite{rst-43}] The normality condition ${\cal N}^{\,(1)}=0$ for a weak metric $f$-structure implies \begin{align}\label{Eq-normal} & {\cal N}^{\,(3)}_i = {\cal N}^{\,(4)}_{ij} = 0,\quad {\cal N}^{\,(2)}_i(X,Y) = \eta^i([\widetilde QX, fY]), \\ \label{Eq-normal-2} & \nabla_{\xi_i}\,\xi_j\in{\cal D},\quad [X,\xi_i]\in{\cal D}\quad (X\in{\cal D}). \end{align} Moreover, ${\cal D}^\bot$ is a totally geodesic distribution. \end{proposition} The {fundamental $2$-form} $\Phi$ on $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$ is defined by \[ \Phi(X,Y)=g(X,{f} Y),\quad X,Y\in\mathfrak{X}_M. \] Recall the co-boundary formula for exterior derivative $d$ on a $2$-form $\Phi$, \begin{align}\label{E-3.3} 3\,d\Phi(X,Y,Z) &= X\,\Phi(Y,Z) + Y\,\Phi(Z,X) + Z\,\Phi(X,Y) \notag\\ &-\Phi([X,Y],Z) - \Phi([Z,X],Y) - \Phi([Y,Z],X) . 
\end{align} \begin{proposition}[see \cite{rst-43}] For a weak metric $f$-structure we get \begin{align}\label{3.1-new} \notag & 2\,g((\nabla_{X}{f})Y,Z) = 3\,d\Phi(X,{f} Y,{f} Z) - 3\, d\Phi(X,Y,Z) + g({\cal N}^{\,(1)}(Y,Z),{f} X)\notag\\ \notag & +\sum\nolimits_{\,i}\big({\cal N}^{\,(2)}_i(Y,Z)\,\eta^i(X) + 2\,d\eta^i({f} Y,X)\,\eta^i(Z) - 2\,d\eta^i({f} Z,X)\,\eta^i(Y)\big) \notag\\ & + {\cal N}^{\,(5)}(X,Y,Z), \end{align} where ${\cal N}^{\,(5)}$ is the tensor field acting as \begin{align*} & {\cal N}^{\,(5)}(X,Y,Z) = {f} Z\,(g(X, \widetilde QY)) - {f} Y\,(g(X, \widetilde QZ)) \\ & + g([X, {f} Z], \widetilde QY) - g([X,{f} Y], \widetilde QZ) + g([Y,{f} Z] -[Z, {f} Y] - {f}[Y,Z],\ \widetilde Q X). \end{align*} \end{proposition} The curvature tensor is given by $R_{X, Y} = [\nabla_{X},\nabla_{Y}] - \nabla_{[X,Y]},\ X,Y\in\mathfrak{X}_M$, and ${\rm Ric}({X},{Y})={\rm trace}(Z\to R_{Z,X}Y)$ is the Ricci tensor. The Ricci operator $\Ric^\sharp$ is given by $g(\Ric^\sharp X,Y)={\rm Ric}(X,Y)$. The scalar curvature of $g$ is given by $r={\rm trace}_g \Ric$. The following formulas are known, e.g., \cite[Eqs. (6) and (7)]{ghosh2019ricci} and \cite[p.~23]{yano1970integral}: \begin{align} \label{3.21} & (\pounds_{V}\nabla)(X, Y) = \pounds_{V}(\nabla_{X} Y) - \nabla_{X}\,(\pounds_{V} Y) - \nabla_{[V,X]}Y , \\ \label{Eq-6} & (\nabla_X\pounds_V\,g)(Y,Z) = g((\pounds_V\nabla)(X,Y), Z) + g((\pounds_V\nabla)(X,Z), Y), \\ \label{2.6} & (\pounds_{V}\nabla_{Z}\,g - \nabla_{Z}\,\pounds_{V}\,g - \nabla_{[V,Z]}\,g)(X,Y) = -g((\pounds_{V}\nabla)(Z,X),Y) -g((\pounds_{V}\nabla)(Z,Y),X), \\ \label{Eq-7} & (\pounds_V\,R)_{X,Y} Z = (\nabla_X\pounds_V\nabla)(Y,Z) - (\nabla_Y\pounds_V\nabla)(X,Z). \end{align} \section{Geometry of weak $\beta$-Kenmotsu $f$-manifolds} \label{sec:02-f-beta} In the following definition, we generalize the notions of $\beta$-Kenmotsu manifolds ($s=1$) and Kenmotsu $f$-manifolds ($\beta=1,\ s>1$), see \cite{FP06,FP07,ghosh2019ricci,SV-2016}, and weak $\beta$-Kenmotsu manifolds ($s=1$), see \cite{rov-126}. \begin{definition}\label{D-wK} \rm A normal (i.e., ${\cal N}^{\,(1)} = 0$) weak metric $f$-manifold $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$ will be called a \textit{weak $\beta$-Kenmotsu $f$-manifold} (a \textit{weak Kenmotsu $f$-manifold} when $\beta\equiv1$, and a \textit{weak ${\mathcal C}$-manifold} when $\beta\equiv0$) if \begin{align}\label{2.3-f-beta} (\nabla_{X}\,{f})Y=\beta\{g({f} X, Y)\,\bar\xi -\bar\eta(Y){f} X\}\quad (X,Y\in\mathfrak{X}_M), \end{align} where $\bar\xi=\sum\nolimits_{\,i}\xi_i$, $\bar\eta=\sum\nolimits_{\,i}\eta^i$, and $\beta$ is a smooth function on $M$. \end{definition} Note that $\bar\eta(\xi_i)=\eta^i(\bar\xi)=1$ and $\bar\eta(\bar\xi)=s$. Taking $X=\xi_j$ in \eqref{2.3-f-beta} and using ${f}\,\xi_j=0$, we get $\nabla_{\xi_j}{f}=0$, which implies $f(\nabla_{\xi_i}\,\xi_j)=0$, and so $\nabla_{\xi_i}\,\xi_j\in {\cal D}^\bot$. This and the 1st equality in \eqref{Eq-normal-2} give \begin{align}\label{Eq-normal-3} \nabla_{\xi_i}\,\xi_j =0\quad (1\le i,j\le s), \end{align} thus, ${\cal D}^\bot$ of a weak $\beta$-Kenmotsu $f$-manifold is tangent to a foliation with flat totally geodesic~leaves. \begin{lemma} For a weak $\beta$-Kenmotsu $f$-manifold the following formula holds: \begin{align}\label{2.3b} & \nabla_{X}\,\xi_i = \beta\{X -\sum\nolimits_{\,j}\eta^j(X)\,\xi_j\}\quad (1\le i\le s,\quad X\in\mathfrak{X}_M),\\ \label{2.3c} & (\nabla_{X}\,\eta^i)(Y) = \beta\{g(X,Y) -\sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Y)\}\quad (1\le i\le s,\quad X,Y\in\mathfrak{X}_M) . 
\end{align}
\end{lemma}

\begin{proof}
Taking $Y=\xi_i$ in \eqref{2.3-f-beta} and using $g({f} X, \xi_i)=0$ and $\bar\eta(\xi_i)=1$, we get ${f}(\nabla_X\,\xi_i -\beta X)=0$. Since ${f}$ is non-degenerate on ${\cal D}$ and has rank $2\,n$, we get $\nabla_X\,\xi_i -\beta X = \sum\nolimits_{\,p}c^p\xi_p$ for some functions $c^p$. The inner~product with $\xi_j$ gives $c^j = g(\nabla_X\,\xi_i,\xi_j) - \beta\,g(X,\xi_j)$. Using \eqref{Eq-normal-2} and \eqref{Eq-normal-3}, we find $g(\nabla_X\,\xi_i,\xi_j)=g(\nabla_{\xi_i}X,\xi_j)=0$; hence, $c^j=-\beta\,\eta^j(X)$. This proves~\eqref{2.3b}. Using $(\nabla_{X}\,\eta^i)(Y) = g(\nabla_{X}\,\xi_i, Y)$ and \eqref{2.3b}, we get \eqref{2.3c}.
\end{proof}

The following result generalizes Theorem~3.4 in \cite{SV-2016}.

\begin{theorem}\label{T-2.0}
A weak metric $f$-manifold is a weak $\beta$-Kenmotsu $f$-mani\-fold if and only if the following conditions are valid:
\begin{align}\label{Eq-almost-K}
{\cal N}^{\,(1)} = 0,\quad d\eta^i = 0,\quad d\Phi = 2\,\beta\,\bar\eta\wedge\Phi,\quad {\cal N}^{\,(5)}(X,Y,Z)=2\,\beta\,\bar\eta(X) g(f Y, \widetilde Q Z).
\end{align}
\end{theorem}

\begin{proof}
Using \eqref{2.3b}, we obtain
\begin{align}\label{2.4A}
(\nabla_X\,\eta^i)Y = X g(\xi_i, Y) -g(\xi_i, \nabla_X\,Y) = g(\nabla_{X}\,\xi_i, Y) = \beta\{g(X,Y) -\sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Y)\}
\end{align}
for all $X,Y\in\mathfrak{X}_M$. By \eqref{2.4A}, $(\nabla_X\,\eta^i)Y=(\nabla_Y\,\eta^i)X$ is true. Thus, for $X,Y\in {\cal D}$ we obtain
\[
0 = (\nabla_X\,\eta^i)Y-(\nabla_Y\,\eta^i)X = -\beta\,g([X,Y], \xi_i),
\]
which means that the distribution ${\cal D}$ is integrable, or equivalently, $d\eta^i(X,Y)=0$ for all $i=1,\ldots,s$ and $X,Y\in {\cal D}$. By this and ${\cal N}^{\,(4)}_{ij}=0$, see \eqref{Eq-normal}, we find $ d\eta^i = 0$. Using \eqref{2.3-f-beta} and \eqref{E-3.3}, we get
\[
3\,d\Phi(X,Y,Z) = 2\,\beta\{ \bar\eta(X) g({f}Z, Y) +\bar\eta(Y) g({f}X, Z) +\bar\eta(Z) g({f}Y, X) \}.
\]
On the other hand, we have
\[
3(\bar\eta\wedge\Phi)(X,Y,Z) = \bar\eta(X) g({f}Z, Y) +\bar\eta(Y) g({f}X, Z) +\bar\eta(Z) g({f}Y, X).
\]
Thus, $d\Phi = 2\,\beta\,\bar\eta\wedge\Phi$ is valid. By \eqref{4.NN} with $S=f$, and \eqref{2.3-f-beta}, we get $[{f},{f}]=0$; thus ${\cal N}^{\,(1)} = 0$. Finally, from \eqref{3.1-new}, using \eqref{2.1} and \eqref{2.2}, we~obtain
\begin{align*}
& g((\nabla_{X}{f})Y,Z) - \frac12\,{\cal N}^{\,(5)}(X,Y,Z) = 3\,\beta\big\{ (\bar\eta\wedge\Phi)(X,fY,fZ) - (\bar\eta\wedge\Phi)(X,Y,Z) \big\} \\
& = \beta\big\{ -\bar\eta(X) g(QZ, fY) + \bar\eta(X) g(Z, fY) - \bar\eta(Y) g(fX, Z) - \bar\eta(Z) g(X, fY)\big\} \\
& = \beta\big\{ \bar\eta(Z) g(fX, Y) - \bar\eta(Y) g(fX, Z) - \bar\eta(X) g(fY, \widetilde Q Z) \big\}.
\end{align*}
From this, using \eqref{2.3-f-beta}, we get ${\cal N}^{\,(5)}(X,Y,Z)=2\,\beta\,\bar\eta(X) g(f Y, \widetilde Q Z)$.

Conversely, using \eqref{2.1} and \eqref{Eq-almost-K} in \eqref{3.1-new}, we obtain
\begin{align*}
& 2\,g((\nabla_{X}{f})Y,Z) = 6\,\beta\,(\bar\eta\wedge\Phi)(X,{f} Y,{f} Z) - 6\,\beta\,(\bar\eta\wedge\Phi)(X,Y,Z) +2\,\beta\,\bar\eta(X) g(\widetilde Q f Y, Z) \\
& = 2\,\beta\,\big\{ -\bar\eta(X) g(fY, QZ) - \bar\eta(X) g({f}Z, Y) -\bar\eta(Y) g({f}X, Z) -\bar\eta(Z) g({f}Y, X) +\bar\eta(X) g(f Y, \widetilde QZ) \big\} \\
& = 2\,\beta\{g({f} X, Y)\,g(\bar\xi, Z) -\bar\eta(Y) g({f}X, Z)\} ,
\end{align*}
thus \eqref{2.3-f-beta} is true.
\end{proof} \begin{definition}[see \cite{rov-126}] \rm A Riemannian manifold $(\bar M, \bar g)$ equipped with a skew-symmetric {\rm (1,1)}-tensor $J$ (other than the complex structure) is called a \textit{weak K\"{a}hler manifold} if the tensor $J^{\,2}$ is negative definite and $\bar\nabla J=0$, where $\bar\nabla$ is the Levi-Civita connection of $\bar g$. \end{definition} Let $(\bar M,\bar g)$ be a Riemannian manifold. A \textit{twisted product} $\mathbb{R}^s\times_\sigma\bar M$ is the product $M=\mathbb{R}^s\times\bar M$ with the metric $g=dt^2\oplus \sigma^2(t_1,\ldots,t_s)\,\bar g$, where $\sigma>0$ is a smooth function on $M$. Set $\xi_i=\partial_{\,t_i}$. The~Levi-Civita connections, $\nabla$ of $g$ and $\bar\nabla$ of $\bar g$, are related as follows: \noindent\ \ (i) $\nabla_{\xi_i}\,\xi_j= 0$, $\nabla_X\,\xi_i=\nabla_{\xi_i}X = \xi_i(\log\sigma)X$ for $X\perp Span\{\xi_1,\ldots,\xi_s\}$. \noindent\ \ (ii) $\pi_{1*}(\nabla_XY) = -g(X,Y)\,\pi_{1*}(\nabla\log\sigma)$, where $\pi_1: M \to\mathbb{R}^s$ is the orthoprojector. \noindent\ \ (iii) $\pi_{2*}(\nabla_XY)$ is the lift of $\bar\nabla_XY$, where $\pi_2: M \to\bar M$ is the orthoprojector. \noindent If $\sigma>0$ is a smooth function on $\mathbb{R}^s$, then we get a \textit{warped product} $\mathbb{R}^s\times_\sigma\bar M$. \begin{theorem}\label{T-2.1} A weak $\beta$-Kenmotsu $f$-manifold is locally a twisted product $\mathbb{R}^s\times_\sigma\bar M$ $($a warped product when $X(\beta)=0$ for any $X\in{\cal D})$, where $\bar M(\bar g, J)$ is a weak K\"{a}hler manifold. \end{theorem} \begin{proof} By \eqref{Eq-normal-3}, the distribution ${\cal D}^\bot$ is tangent to a foliation with flat totally geodesic~leaves, and by the second equality of \eqref{Eq-normal-2}, the distribution ${\cal D}$ is tangent to a foliation. By \eqref{2.3b}, the Weingarten operator $A_{\xi_i}=-(\nabla\,\xi_i)^\bot\ (1\le i\le s)$ on ${\cal D}$ is conformal: $A_{\xi_i}X = -\beta X\ (X\in{\cal D})$. Hence, ${\cal D}$ is tangent to a totally umbilical foliation with the mean curvature vector $H=-\beta\,\bar\xi$. By \cite[Theorem~1]{pr-1993}, our manifold is locally a twisted product. If $X(\beta)=0\ (X\in{\cal D})$, then we get locally a warped product, see \cite[Proposition~3]{pr-1993}. By \eqref{2.2}, the (1,1)-tensor $J=f|_{\,\cal D}$ is skew-symmetric and $J^2$ is negative definite. To show $\bar\nabla J=0$, using \eqref{2.3-f-beta} we find $(\bar\nabla_X J)Y=\pi_{2*}((\nabla_X{f})Y)=0$ for $X,Y\in{\cal D}$. \end{proof} \begin{example}\rm Let $\bar M(\bar g,J)$ be a weak K\"{a}hler manifold and $\sigma=c\,e^{\,\beta\sum t_i}$ a function on Euclidean space $\mathbb{R}^s(t_1,\ldots,t_s)$, where $c\ne0$ and $\beta$ are constants. Then the warped product manifold $M=\mathbb{R}^s\times_\sigma\bar M$ has a weak metric $f$-structure which satisfies \eqref{2.3-f-beta}. Using \eqref{4.NN} with $S=J$, for a weak K\"{a}hler manifold, we get $[{J},{J}]=0$; hence, ${\cal N}^{\,(1)}=0$ is true. \end{example} \begin{corollary} A weak Kenmotsu $f$-manifold $M^{2n+s}(f,Q,\xi_i,\eta^i,g)$ is locally a warped product $\mathbb{R}^s\times_\sigma \bar M$, where $\sigma=c\,e^{\,\sum t_i}\ (c=const\ne0)$ and $\bar M(\bar g,J)$ is a weak K\"{a}hler manifold. \end{corollary} To simplify the calculations in the rest of the paper, we assume that $\beta=const$. 
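As a simple illustration of Theorem~\ref{T-2.1} and the above corollary, we recall the classical special case $s=1$, $Q={\rm id}$; it is included only for orientation.
\begin{example}\rm
For $s=1$, $Q={\rm id}$ and $\beta\equiv1$, Theorem~\ref{T-2.1} recovers the description of Kenmotsu manifolds as warped products $\mathbb{R}\times_\sigma\bar M$ with $\sigma=c\,e^{\,t}$ and $\bar M(\bar g,J)$ a K\"ahler manifold. For instance, taking $\bar M=\mathbb{C}^n$ with the flat K\"ahler metric and $\sigma=e^{\,t}$, the metric $g=dt^2+e^{2t}g_{\mathbb{C}^n}$ on $\mathbb{R}\times\mathbb{C}^n$ is a Kenmotsu metric of constant sectional curvature $-1$, i.e., a horospherical model of the hyperbolic space $\mathbb{H}^{2n+1}$; cf.~\cite{kenmotsu1972class}.
\end{example}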
\begin{proposition} For a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$, we have \begin{align}\label{2.4} & R_{X, Y}\,\xi_i = \beta^2\big\{\bar\eta(X)Y-\bar\eta(Y)X +\sum\nolimits_{\,j}\big(\bar\eta(Y)\eta^j(X) - \bar\eta(X)\eta^j(Y)\big)\xi_j\big\} \quad (X,Y\in\mathfrak{X}_M),\\ \label{2.5-f-beta} & \Ric^\sharp \xi_i = -2\,n\,\beta^2\bar\xi , \\ \label{3.1} & (\nabla_{\xi_i}\Ric^\sharp)X = - 2\,\beta\Ric^\sharp X - 4\,n\,\beta^3\big\{ s X - s\sum\nolimits_{\,j}\eta^j(X)\,\xi_j + \bar\eta(X)\,\bar\xi\,\big\} \quad (X\in\mathfrak{X}_M) ,\\ \label{3.1A-f-beta} & \xi_i(r) = -2\,\beta\{r + 2\,s\,n(2\,n + 1)\,\beta^2\}\quad (1\le i\le s). \end{align} Taking covariant derivative of \eqref{2.5-f-beta} along $X$ and using \eqref{2.3b} gives \begin{align}\label{E-L-02a} (\nabla_X\Ric^\sharp)\,\xi_i= - \beta\Ric^\sharp X - 2\,n\,\beta^3\big\{ s X - s\sum\nolimits_{\,j}\eta^j(X)\,\xi_j + \bar\eta(X)\,\bar\xi\,\big\}\quad (X\in\mathfrak{X}_M) . \end{align} \end{proposition} \begin{proof} Taking covariant derivative of \eqref{2.3b} along $Y\in \mathfrak{X}_M$, we get \begin{align*} \nabla_Y\nabla_X\,\xi_i = -\beta^2\,\big\{ \big( g(X,Y) - \sum\nolimits_{\,q}\eta^q(Y)\eta^q(X)\big)\,\bar\xi + \bar\eta(X)\big(Y - \sum\nolimits_{\,p}\eta^p(Y)\xi_p\big)\big\}. \end{align*} Repeated application of \eqref{2.3b} and the foregoing equation in the {curvature~tensor} $R$ of the Riemannian manifold, we get \eqref{2.4}. Using a local orthonormal basis $(e_q)$ of the manifold, and the equality $\sum\nolimits_{\,p,q}\big(\bar\eta(Y)\eta^p(e_q) - \bar\eta(e_q)\eta^p(Y)\big) \eta^p(e_q)= (s-1)\,\bar\eta(Y)$, we derive from \eqref{2.4} \begin{align*} & g(\Ric^\sharp \xi_i,Y)= \sum\nolimits_{\,q} g(R_{e_q, Y}\,\xi_i, e_q) \\ & = \beta^2\sum\nolimits_{\,q}\big\{\bar\eta(e_q)g(Y,e_q)-\bar\eta(Y)g(e_q,e_q) +\big(\bar\eta(Y)\eta^p(e_q) - \bar\eta(e_q)\eta^p(Y)\big) \eta^p(e_q)\big\} \\ & = \beta^2 \big\{ g(Y, \bar\xi)-(2\,n+s)\bar\eta(Y) +(s-1)\,\bar\eta(Y) \big\} = -2\,n\,\beta^2 g(\bar\xi, Y), \end{align*} from which we get \eqref{2.5-f-beta}. Next, using \eqref{2.3b}, we get \begin{align}\label{Eq-9} (\pounds_{\xi_i}\,g)(Y,Z) = g(\nabla_Y\,\xi_i, Z)+g(\nabla_Z\,\xi_i, Y) = 2\,\beta\big(g(Y,Z)-\sum\nolimits_{\,j}\eta^j(Y)\,\eta^j(Z)\big). \end{align} Taking covariant derivative of \eqref{Eq-9} along $X$ and using \eqref{2.3b} gives \begin{align*} (\nabla_X\pounds_{\xi_i}\,g)(Y,Z) = 2\,\beta^2\big\{\sum\nolimits_{\,j}\eta^j(X)\big(\eta^j(Y)\,\bar\eta(Z) + \bar\eta(Y)\,\eta^j(Z)\big) -g(X,Y)\,\bar\eta(Z) -g(X,Z)\,\bar\eta(Y)\big\} \end{align*} for all $X,Y,Z\in\mathfrak{X}_M$. Using this in \eqref{Eq-6} with $V=\xi_i$, we obtain \begin{align}\label{Eq-9b} \nonumber & g((\pounds_{\xi_i}\nabla)(X,Y), Z) + g((\pounds_{\xi_i}\nabla)(X,Z), Y) \\ & = 2\,\beta^2\big\{\sum\nolimits_{\,j}\eta^j(X)\big(\eta^j(Y)\,\bar\eta(Z) + \bar\eta(Y)\,\eta^j(Z)\big) -g(X,Y)\,\bar\eta(Z) -g(X,Z)\,\bar\eta(Y)\big\}. \end{align} By a combinatorial computation, we find \begin{align*} & g((\pounds_{\xi_i}\nabla)(Y,Z), X) + g((\pounds_{\xi_i}\nabla)(Y,X), Z) \\ & = 2\,\beta^2\big\{\sum\nolimits_{\,j}\eta^j(Y)\big(\eta^j(Z)\,\bar\eta(X) + \bar\eta(Z)\,\eta^j(X)\big) -g(Y,Z)\,\bar\eta(X) -g(Y,X)\,\bar\eta(Z) \big\},\\ & g((\pounds_{\xi_i}\nabla)(Z,X), Y) + g((\pounds_{\xi_i}\nabla)(Z,Y), X) \\ & = 2\,\beta^2\big\{\sum\nolimits_{\,j}\eta^j(Z)\big(\eta^j(X)\,\bar\eta(Y) + \bar\eta(X)\,\eta^j(Y)\big) -g(Z,X)\,\bar\eta(Y) -g(Z,Y)\,\bar\eta(X) \big\}. 
\end{align*} Subtracting \eqref{Eq-9b} from the sum of the last two equations gives \begin{align}\label{Eq-10} (\pounds_{\xi_i}\nabla)(Y,Z) = 2\,\beta^2\big\{\sum\nolimits_{\,j}\eta^j(Y)\,\eta^j(Z) - g(Y,Z)\big\}\,\bar\xi\qquad (Y,Z\in\mathfrak{X}_M). \end{align} Taking covariant derivative of \eqref{Eq-10} along $X$ and using \eqref{2.3b} gives \begin{align*} & (\nabla_X\pounds_{\xi_i}\nabla)(Y,Z) = 2\,\beta^3\big\{ \big[ \big( g(X,Y) - \sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Y)\big)\,\bar\eta(Z) \\ & + \big( g(X,Z) - \sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Z) \big)\,\bar\eta(Y) \big]\bar\xi +s\big(\sum\nolimits_{\,j}\eta^j(Y)\,\eta^j(Z) - g(Y,Z)\big)\,\big(X - \sum\nolimits_{\,p}\eta^p(X)\,\xi_p\big) \big\} . \end{align*} Using this in \eqref{Eq-7} with $V=\xi_i$, we obtain \begin{align}\label{Eq-11} \nonumber & (\pounds_{\xi_i}R)_{X,Y} Z = 2\,\beta^3 \big\{ \big( g(X,Z) {-} \sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Z) \big)\big(\bar\eta(Y)\,\bar\xi + s(Y-\sum\nolimits_{\,q}\eta^q(Y)\,\xi_q)\big) \\ & - \big( g(Y,Z) - \sum\nolimits_{\,j}\eta^j(Y)\,\eta^j(Z) \big)\big(\bar\eta(X)\,\bar\xi + s(X-\sum\nolimits_{\,q}\eta^q(X)\,\xi_q)\big)\big\}. \end{align} Contracting \eqref{Eq-11} over $X$, we deduce \begin{align}\label{Eq-12} (\pounds_{\xi_i}\Ric)(Y,Z) = \sum\nolimits_{\,a}g((\pounds_{\xi_i}R)_{e_a,Y} Z, e_a) = -4\,s\,n\,\beta^3 \big( g(Y,Z)- \sum\nolimits_{\,j}\eta^j(Y)\,\eta^j(Z)\big). \end{align} Taking the Lie derivative of equality $\Ric(Y,Z)= g(\Ric^\sharp Y, Z)$, we obtain \begin{align}\label{Eq-13} (\pounds_{\xi_i}\Ric)(Y,Z) = (\pounds_{\xi_i}\,g)(\Ric^\sharp Y,Z) +g((\pounds_{\xi_i}\Ric^\sharp)Y, Z). \end{align} On the other hand, replacing $Y$ by $\Ric^\sharp Y$ in \eqref{Eq-9} and using \eqref{2.5-f-beta}, we obtain \begin{align}\label{Eq-14} \nonumber (\pounds_{\xi_i}\,g)(\Ric^\sharp Y,Z) & = 2\,\beta\big(g(\Ric^\sharp Y, Z) - g(\Ric^\sharp \xi_j, Y)\,\eta^j(Z)\big) \\ &= 2\,\beta\big(g(\Ric^\sharp Y, Z) +2\,n\beta^2\bar\eta(Y)\,\bar\eta(Z)\big) . \end{align} Applying \eqref{Eq-13} and \eqref{Eq-14} in \eqref{Eq-12} we get \begin{align}\label{3.1b} (\pounds_{\xi_i}\Ric^\sharp)Y = -2\,\beta\Ric^\sharp Y - 4\,n\,\beta^3\big\{ sY - s\sum\nolimits_{\,j}\eta^j(Y)\,\xi_j + \bar\eta(Y)\bar\xi\,\big\} . \end{align} Using \eqref{2.3b}, we calculate \begin{align*} & (\pounds_{\xi_i}\Ric^\sharp)Y = \pounds_{\xi_i}(\Ric^\sharp Y) -\Ric^\sharp\pounds_{\xi_i} Y \\ & =\nabla_{\xi_i}(\Ric^\sharp Y) -\nabla_{\Ric^\sharp Y}\,{\xi_i} -\Ric^\sharp\nabla_{\xi_i} Y +\Ric^\sharp\nabla_Y\,{\xi_i} = (\nabla_{\xi_i}\Ric^\sharp) Y. \end{align*} Using this in \eqref{3.1b}, gives \eqref{3.1}. Contracting \eqref{3.1}, we get \eqref{3.1A-f-beta}. \end{proof} \begin{theorem}\label{thm3.1A} For an $\eta$-Einstein \eqref{Eq-2.10} weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$, we obtain \begin{align}\label{3.16B} {\rm Ric}^\sharp X & = -2\,n\beta^2\big\{s\,X - (s-1)\sum\nolimits_{\,j}\eta^j(X)\,\xi_j + \sum\nolimits_{\,i\ne j} \eta^i(X)\,\xi_j\big\} ,\\ \label{3.16D} r & = -2\,s\,n(2n+1)\beta^2. \end{align} \end{theorem} \begin{proof} Tracing \eqref{Eq-2.10} gives $r=(2\,n+s)\,a+sb$. Putting $X=Y=\xi_i$ in \eqref{Eq-2.10} and using \eqref{2.5-f-beta}, yields $a+b=-2\,n\,\beta^2$. 
Thus, $a = s\,\beta^2+\frac{r}{2\,n}$ and $b = -(2\,n+s)\,\beta^2 - \frac{r}{2\,n}$, and \eqref{Eq-2.10} gives \begin{align}\label{3.16} {\rm Ric}^\sharp X = \big( s\,\beta^2{+}\frac{r}{2\,n} \big)X {-}\big((2\,n{+}s)\,\beta^2 {+} \frac{r}{2\,n}\big)\sum\nolimits_{\,j}\eta^j(X)\,\xi_j {-}2\,n\beta^2\sum\nolimits_{\,i\ne j} \eta^i(X)\,\xi_j \quad (X\in\mathfrak{X}_M). \end{align} Taking covariant derivative of \eqref{3.16} along $Y$ and using \eqref{2.3b}, we~get \begin{align}\label{AA} \nonumber & (\nabla_Y \Ric^\sharp)X = \frac{Y(r)}{2n}\big\{X-\sum\nolimits_{\,j}\eta^j(X)\xi_j\big\} \\ \nonumber & -\big((2\,n+s)\,\beta^2 + \frac{r}{2\,n} \big)\beta\big\{\,g(X,Y)\,\bar\xi + \bar\eta(X)\big( Y - \sum\nolimits_{\,j}\eta^j(Y)\xi_j\big) -\sum\nolimits_{\,i}\eta^i(X)\,\eta^i(Y)\,\bar\xi\,\big\} \\ & - 2(s-1)n\beta^3\big\{ \big(g(X,Y) -\sum\nolimits_{\,p}\eta^p(X)\,\eta^p(Y)\big)\,\bar\xi + \bar\eta(X)\big(Y - \sum\nolimits_{\,p}\eta^p(Y)\xi_p \big) \big\}. \end{align} Contracting \eqref{AA} over $Y$, we get \begin{align}\label{AA-2} (2\,n-1)\,X(r) = -\sum\nolimits_{\,i}\eta^i(X)\,\xi_i(r) - 2\,n\beta\big\{r + 2\,s\,n(2\,n + 1)\,\beta^2 \big\}\,\bar\eta(X). \end{align} Therefore, $X(r) = 0$ for all $X\in{\cal D}$. Taking $X=\xi_i$ in \eqref{AA-2}, we obtain \begin{align}\label{AA-2B} \xi(r) = - \beta\big\{r + 2\,s\,n(2\,n + 1)\,\beta^2 \big\}. \end{align} Comparing \eqref{AA-2B} with \eqref{3.1A-f-beta}, we find \eqref{3.16D}: $r= -2\,s\,n(2n+1)\beta^2$. Then by \eqref{3.16}, we obtain \eqref{3.16B}. Note that $(M,g)$ is an $\eta$-Einstein manifold with $a=-2\,s\,n\beta^2$ and $b=2(s-1)n\beta^2$ in \eqref{Eq-2.10}. \end{proof} \begin{corollary} An $\eta$-Einstein \eqref{Eq-2.10s} weak $\beta$-Kenmotsu manifold $M^{2n+1}({f},Q,\xi,\eta,g)$ with $\beta=const$, is an Einstein manifold with ${\rm Ric} = -2\,n\beta^2 g$ and $r = -2\,n(2\,n+1)\beta^2$. \end{corollary} The following theorem generalizes \cite[Theorem~1]{ghosh2019ricci} with $\beta\equiv1$ and $Q={\rm id}$. \begin{theorem}Let a weak $\beta$-Kenmotsu $f$-manifold $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$ with $\beta=const$ satisfy $\nabla_{\xi_i}\Ric^\sharp=0$. Then $(M,g)$ is an $\eta$-Einstein manifold \eqref{Eq-2.10} of scalar curvature $r=-2\,s\,n(2\,n+1)\,\beta^2$. \end{theorem} \begin{proof} By conditions and \eqref{3.1}, \[ \Ric^\sharp X = -2\,n\,\beta^2\big\{ sX - s\sum\nolimits_{\,j}\eta^j(X)\,\xi_j + \bar\eta(X)\bar\xi\,\big\}, \] is valid; hence, $r=-2\,s\,n(2\,n+1)\,\beta^2$. Since \eqref{Eq-2.10} with $a=-2\,s\,n\,\beta^2$ and $b=2\,(s-1)\,n\,\beta^2$ is true, see also \eqref{3.16B}, $(M,g)$ is an $\eta$-Einstein manifold. \end{proof} \section{$\eta$-Ricci solitons on weak $\beta$-Kenmotsu $f$-manifolds} \label{sec:03-f-beta} Here, we study $\eta$-Ricci solitons on weak $\beta$-Kenmotsu $f$-manifolds and generalize some results in~\cite{RP-1}. First, we derive the following three lemmas. \begin{lemma}\label{lem3.3} Let $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$ be a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$. If $g$ represents an $\eta$-Ricci soliton \eqref{Eq-1.1}, then $\lambda+\mu=-2\,n\,\beta^2$ is true. \end{lemma} \begin{proof} For a weak $\beta$-Kenmotsu $f$-manifold equipped with an $\eta$-Ricci soliton \eqref{Eq-1.1}, using \eqref{Eq-normal-2}, we get \begin{align}\label{Lie-V-g} (\pounds_V\,g)(\xi_i,\xi_j)=g(\xi_i, [V,\xi_j])=0. \end{align} Thus, using \eqref{Eq-1.1} in the Lie derivative of $g(\xi_i, \xi_j) = \delta_{ij}$, we obtain $\Ric(\xi_i,\xi_j) = \lambda +\mu$. 
Finally, using the equality $\Ric(\xi_i,\xi_j)=-2\,n\,\beta^2$, see \eqref{2.5-f-beta}, we achieve the required result. \end{proof} \begin{proposition}Let $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$ be a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$. If $g$ represents an $\eta$-Ricci soliton \eqref{Eq-1.1} with the potential vector field $V$, then \begin{align}\label{3.7} & (\pounds_{V} \nabla)(X,\xi_i) = 2\,\beta\Ric^\sharp X + 4\,n\,\beta^3\big\{s X - s\sum\nolimits_{\,j}\eta^j(X)\,\xi_j + \bar\eta(X)\bar\xi\,\big\},\\ \label{3.9trace} & (\pounds_V R)_{X,\,\xi_j}\,\xi_i = 0\quad (X\in TM,\ 1\le i,j\le s). \end{align} \end{proposition} \begin{proof} Taking the covariant derivative of \eqref{Eq-1.1} along $Z\in\mathfrak{X}_M$ and using \eqref{2.3c}, we~get \begin{align}\label{3.3A-f-beta} \nonumber & \frac12\,(\nabla_Z\,\pounds_{V}\,g)(X,Y) = -(\nabla_{Z}\,{\rm Ric})(X,Y) +\beta[\mu+(\lambda+\mu)(s-1)]\times\\ & \times\big\{ \big(g(X, Z) - \sum\nolimits_{\,j}\eta^j(X)\eta^j(Z)\big)\,\bar\eta(Y) + \big(g(Y, Z) - \sum\nolimits_{\,j}\eta^j(Y)\eta^j(Z)\big)\,\bar\eta(X) \big\} \end{align} for all $X,Y\in\mathfrak{X}_M$. Since Riemannian metric tensor is parallel, $\nabla g=0$, it follows from \eqref{2.6} that \begin{align}\label{3.4} (\nabla_{Z}\,\pounds_{V}\,g)(X,Y) = g((\pounds_{V}\nabla)(Z,X),Y) + g((\pounds_{V}\nabla)(Z,Y),X). \end{align} Plugging \eqref{3.4} into (\ref{3.3A-f-beta}), we obtain \begin{align}\label{3.5} \nonumber & g((\pounds_{V}\nabla)(Z,X),Y) + g((\pounds_{V}\nabla)(Z,Y),X) = -2(\nabla_{Z}\,{\rm Ric})(X,Y) +\beta[\mu+(\lambda+\mu)(s-1)]\times\\ & \times\big\{ \big(g(X, Z) - \sum\nolimits_{\,j}\eta^j(X)\eta^j(Z)\big)\,\bar\eta(Y) + \big(g(Y, Z) - \sum\nolimits_{\,j}\eta^j(Y)\eta^j(Z)\big)\,\bar\eta(X) \big\} \end{align} for all $X,Y,Z\in\mathfrak{X}_M$. Cyclically rearranging $X,Y$ and $Z$ in \eqref{3.5}, we obtain \begin{align}\label{3.6} \nonumber g((\pounds_{V}\nabla)(X,Y),Z) & = (\nabla_{Z}\,{\rm Ric})(X,Y) - (\nabla_{X}\,{\rm Ric})(Y,Z) - (\nabla_{Y}\,{\rm Ric})(Z,X)\\ & +2\,\beta[\mu +(s-1)(\lambda+\mu)]\big(g(X, Y) {-} \sum\nolimits_{\,j}\eta^j(X)\eta^j(Y)\big)\,\bar\eta(Z). \end{align} Substituting $Y=\xi_i$ in \eqref{3.6} yields the following: \begin{align*} g((\pounds_{V}\nabla)(X,\xi_i),Z) & = (\nabla_{Z}\,{\rm Ric})(X,\xi_i) - (\nabla_{X}\,{\rm Ric})(\xi_i,Z) - (\nabla_{\xi_i}\,{\rm Ric})(Z,X) . \end{align*} Applying \eqref{3.1} and \eqref{E-L-02a} to this, we obtain \eqref{3.7} for any $X\in\mathfrak{X}_M$. Next, using (\ref{2.3b}) in the covariant derivative of (\ref{3.7}) along $Y\in\mathfrak{X}_M$, and calculating $\nabla_Y\big(s\sum\nolimits_{\,j}\eta^j(X)\,\xi_j - \bar\eta(X)\bar\xi\,\big) = 0$, yields \begin{align*} & (\nabla_Y(\pounds_V \nabla))(X, \xi_i) + \beta(\pounds_V \nabla)(X, Y) = 2\,\beta(\nabla_Y\Ric^\sharp)X + 2\,\beta^2\,\bar\eta(Y)\Ric^\sharp X \\ & + 4\,n\,\bar\eta(Y)\beta^4\,\big\{s X - s\sum\nolimits_{\,j}\eta^j(X)\xi_j + \bar\eta(X)\,\bar\xi\,\big\} \end{align*} for any $X\in\mathfrak{X}_M$. Plugging this in \eqref{Eq-7} with $Z=\xi_i$, we obtain \eqref{3.9} \begin{align}\label{3.9} \notag &(\pounds_V R)_{X,Y}\,\xi_i = 2\,\beta\{(\nabla_X\Ric^\sharp)Y - (\nabla_Y\Ric^\sharp)X\} + 2\,\beta^2\{\bar\eta(X)\Ric^\sharp Y - \bar\eta(Y)\Ric^\sharp X\} \\ &\quad + 4\,n\beta^4\big\{\big[s Y {-} s\sum\nolimits_{\,j}\eta^j(Y)\xi_j +\bar\eta(Y)\,\bar\xi\,\big]\,\bar\eta(X) -[s X {-} s\sum\nolimits_{\,j}\eta^j(X)\xi_j+\bar\eta(X)\,\bar\xi\,]\,\bar\eta(Y) \big\} \end{align} for all $X,Y\in\mathfrak{X}_M$. 
Substituting $Y=\xi_j$ in \eqref{3.9} gives \begin{align}\label{3.9b} \notag (\pounds_V R)_{X,\xi_j}\,\xi_i & = 2\,\beta\{(\nabla_X\Ric^\sharp)\xi_j - (\nabla_{\xi_j}\Ric^\sharp)X\} + 2\,\beta^2\{\bar\eta(X)\Ric^\sharp \xi_j - \Ric^\sharp X\} \\ & - 4\,s\,n\,\beta^4\,\big(X - \sum\nolimits_{\,j}\eta^j(X)\xi_j\big) . \end{align} Then using \eqref{2.5-f-beta}, \eqref{3.1} and \eqref{E-L-02a} in \eqref{3.9b}, yields \eqref{3.9trace}. \end{proof} \begin{definition} \rm A vector field $X$ on a weak metric $f$-manifold $M({f},Q,\xi_i,\eta^i,g)$ is called a {\it contact vector field}, if the flow of $X$ preserves the forms $\eta^i$, i.e., there exists a function $\rho\in C^\infty(M)$ such~that \begin{align}\label{3.20} \pounds_{X}\eta^i=\rho\,\eta^i, \end{align} and if $\rho=0$, then $X$ is said to be a \textit{strict contact vector field}. \end{definition} We consider the interaction of a weak $\beta$-Kenmotsu $f$-structure with an $\eta$-Ricci soliton whose potential vector field $V$ is a contact vector field, or $V$ is collinear to $\bar\xi$. \begin{theorem}\label{thm3.3} Let $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$, be a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$. If $g$ represents an $\eta$-Ricci soliton \eqref{Eq-1.1} with a contact potential vector field $V$, then $V$ is strict contact and the manifold is $\eta$-Einstein \eqref{Eq-2.10} with $a=-2\,s\,n\beta^2,\ b=2(s-1)n\beta^2$ of constant scalar curvature $r=-2\,s\,n(2\,n+1)\beta^2$. \end{theorem} \begin{proof} Taking Lie derivative of $\eta^i(X)=g(X,\xi_i)$ along $V$ and using \eqref{3.20} and \eqref{Lie-V-g}, we obtain $\pounds_V\,\xi_i=\rho\,\xi_i$. Then, using $\pounds_V\,\xi_i\in{\cal D}$, see \eqref{Eq-normal-2}, we get $\rho=0$. Therefore, $\pounds_V\,\xi_i=0$ and $V$ is a strict contact vector field. Also, \eqref{3.20} gives $\pounds_{V}\,\eta^j=0$. Setting $Y=\xi_i$ in \eqref{3.21} and using \eqref{2.3b} and the equality $(\pounds_{V}\,\eta^j)(X) = V(\eta^j(X)) - \eta^j([V, X])$, we find \begin{align}\label{3.22} \nonumber & (\pounds_V \nabla)(X, \xi_i) = \beta\pounds_V(X-\sum\nolimits_{\,p}\eta^p(X)\xi_p) -\beta(\pounds_V X -\sum\nolimits_{\,p}\eta^p(\pounds_V X)\,\xi_p) \\ & = -\beta\sum\nolimits_{\,p}\{(\pounds_V\,\eta^p)(X)\xi_p +\eta^p(X)\pounds_V\,\xi_p\} + \beta\sum\nolimits_{\,p}\eta^p(\pounds_V X)\,\xi_p) , \end{align} From \eqref{3.22}, since $\pounds_V\,\eta^p=\pounds_V\,\xi_p=0$ is true and the distribution ${\cal D}$ is involutive, i.e., $\pounds_Y X\in{\cal D}\ (X,Y\in{\cal D})$, we obtain $(\pounds_V \nabla)(X, \xi_i)=0$. Using \eqref{3.7}, we get \begin{align}\label{3.22b} \Ric^\sharp X = - 2\,n\,\beta^2\big\{ s X - s\sum\nolimits_{\,j}\eta^j(X)\xi_j + \bar\eta(X)\,\bar\xi\,\big\}. \end{align} Therefore, our $(M,g)$ is an $\eta$-Einstein manifold \eqref{Eq-2.10} with $a=-2\,s\,n\beta^2,\ b=2(s-1)n\beta^2$. Taking the trace of \eqref{3.22b} gives $r=-2\,s\,n(2\,n+1)\beta^2$. \end{proof} \begin{theorem}\label{thm3.4} Let $M^{2n+s}({f},Q,\xi_i,\eta^i,g)$, be a weak $\beta$-Kenmotsu $f$-manifold with $\beta=const$. If~$g$ represents an $\eta$-Ricci soliton \eqref{Eq-1.1} with a potential vector field $V$ collinear to $\bar\xi$: $V=\delta\,\bar\xi$ for a smooth function $\delta\ne0$ on $M$, then $\delta=const$ and the manifold is $\eta$-Einstein \eqref{Eq-2.10} with $a=-2\,s\,n\beta^2$ and $b=2(s-1)n\beta^2$ of constant scalar curvature $r=-2\,s\,n(2\,n+1)\beta^2$. 
\end{theorem} \begin{proof} Using \eqref{2.3-f-beta} in the covariant derivative of $V=\delta\,\bar\xi$ with any $X\in\mathfrak{X}_M$ yields \[ \nabla_X V = X(\delta)\,\bar\xi +\delta\,\beta(X-\sum\nolimits_{\,j}\eta^j(X)\,\xi_j)\quad (X\in\mathfrak{X}_M). \] Using this and calculations \begin{align*} & (\pounds_{\delta\,\bar\xi}\,g)(X,Y)=\delta(\pounds_{\bar\xi}\,g)(X,Y) +X(\delta)\,\bar\eta(Y) +Y(\delta)\,\bar\eta(X),\\ & (\pounds_{\bar\xi}\,g)(X,Y)=2\,s\,\beta\{g(X,Y) -\sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Y)\}, \end{align*} we transform the $\eta$-Ricci soliton equation \eqref{Eq-1.1} into \begin{align}\label{3.23} \notag & 2\,{\rm Ric}(X,Y) = -X(\delta)\,\bar\eta(Y) -Y(\delta)\,\bar\eta(X) +2(\lambda-\delta\beta)\,g(X,Y) \\ & +2(\delta\beta+\mu)\,\sum\nolimits_{\,j}\eta^j(X)\,\eta^j(Y) -4\,n\beta^2\,\sum\nolimits_{\,i\ne j}\eta^i(X)\,\eta^j(Y),\quad X,Y\in\mathfrak{X}_M. \end{align} Inserting $X=Y=\xi_i$ in \eqref{3.23} and using \eqref{2.5-f-beta} and $\lambda+\mu=-2\,n\,\beta^2$, see Lemma~\ref{lem3.3}, we get $\xi_i(\delta)=0$. It~follows from \eqref{3.23} and \eqref{2.5-f-beta} that $X(\delta)=0\ (X\in{\cal D})$. Thus $\delta$ is constant on $M$, and \eqref{3.23} reads~as \begin{align*} {\rm Ric}= (\lambda-\delta\beta)\,g +(\delta\beta+\mu)\sum\nolimits_{\,j}\eta^j\otimes\eta^j -2\,n\beta^2\,\sum\nolimits_{\,i\ne j}\eta^i\otimes\eta^j. \end{align*} This shows that $(M,g)$ is an $\eta$-Einstein manifold with $a=\lambda-\delta\beta$ and $b=\mu+\delta\beta$ in \eqref{Eq-2.10}. Therefore, from Theorem~\ref{thm3.1A} we conclude that $\lambda=\delta\beta-2\,s\,n\beta^2$, $\mu=-\delta\beta+2(s-1)n\beta^2$, and the scalar curvature of $(M,g)$ is $r=-2\,s\,n(2\,n+1)\beta^2$. \end{proof} \baselineskip=12.7pt \begin{thebibliography}{00} \bibitem{b1970} Blair, D.\,E. Geometry of manifolds with structural group $U(n)\times O(s)$, {J. Diff. Geom.} {4} (1970), 155--167 \bibitem{cho2009ricci} Cho, J. and Kimura, M. Ricci solitons and real hypersurfaces in a complex space form, Tohoku Math. J. 61(2), 205--212 (2009) \bibitem{FP06} Falcitelli, M. and Pastore, A.M. $f$-Structures of Kenmotsu type, Mediterr. J. Math. 3 (2006), 549--564 \bibitem{FP07} Falcitelli, M. and Pastore, A.M. Almost Kenmotsu $f$-manifolds, Balkan J. Geom. Appl., 12 (1) (2007) 32--43 \bibitem{ghosh2019ricci} Ghosh, A. Ricci soliton and Ricci almost soliton within the framework of Kenmotsu manifold, Carpathian Math. Publ., 11(1), 59--69, (2019) \bibitem{G2023} Ghosh, A. K-contact and $(k,\mu)$-contact metric as a generalized $\eta$-Ricci soliton, {Math. Slovaca 73}, No. 1, 185--194 (2023) \bibitem{G-D-2020} Ghosh, G. and De, U.\,C. Generalized Ricci soliton on K-contact manifolds, {Math. Sci. Appl. E-Notes}, {8} (2020), 165--169 \bibitem{gy-1970} Goldberg, S.\,I. and Yano, K. On normal globally framed $f$-manifolds, Tohoku Math. J. 22 (1970), 362--370 \bibitem{kenmotsu1972class} Kenmotsu, K. A class of almost contact Riemannian manifolds, T\^{o}hoku Math. J., 24 (1972), 93--103 \bibitem{olszak1991normal} Olszak, Z. Normal locally conformal almost cosymplectic manifolds, Publ. Math. Debrecen, 39(3-4) (1991), 315--323 \bibitem{RP-1} Patra, D.S. and Rovenski, V. Almost $\eta$-Ricci solitons on Kenmotsu manifolds, European J. of Mathematics, 7 (2021), 1753--1766 \bibitem{rov-126} Patra, D.S. and Rovenski, V. Weak $\beta$-Kenmotsu manifolds and $\eta$-Ricci solitons, pp. 53--72. In: Rovenski, V., Walczak, P., Wolak, R. (eds) \textit{Differential Geometric Structures and Applications}, 2023. 
Springer Proceedings in Mathematics and Statistics, 440. Springer, Cham \bibitem{pr-1993} Ponge, R. and Reckziegel, H. {Twisted products in pseudo-Riemannian geometry}, Geom. Dedicata, {48} (1993), 15--25 \bibitem{RWo_2} Rovenski, V. and Wolak, R. {New metric structures on $\mathfrak{g}$-foliations}, Indagationes Mathema\-ticae, 33 (2022), 518--532 \bibitem{rst-43} Rovenski, V. Metric structures that admit totally geodesic foliations, J. Geom. (2023) 114:32. \bibitem{Rov-splitting} Rovenski, V. On the splitting tensor of the weak $f$-contact structure, {Symmetry} 2023, 15(6), 1215. https://doi.org/10.3390/sym15061215 \bibitem{rov-127} Rovenski, V. Einstein-type metrics and Ricci-type solitons on weak $f$-K-contact manifolds, pp. 29--51. In: Rovenski, V., Walczak, P., Wolak, R. (eds) \textit{Differential Geometric Structures and Applications}, 2023. Springer Proceedings in Mathematics and Statistics, 440. Springer, Cham \bibitem{SV-2016} Sari, R. and Turgut Vanli, A. Generalized Kenmotsu manifolds, Communications in Mathematics and Applications, 7, No. 4, 311--328, 2016 \bibitem{yano1970integral} Yano, K. \textit{Integral formulas in Riemannian geometry}, Vol. 1. M. Dekker, 1970 \bibitem{yano-1961} Yano, K. On a structure $f$ satisfying $f^3+f=0$, Technical Report No. 12, University of Washington, 1961 \end{thebibliography} \end{document}
2412.14277v1
http://arxiv.org/abs/2412.14277v1
Quadratically enriched binomial coefficients over a finite field
\documentclass{amsproc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{tikz} \usetikzlibrary{decorations.markings} \usepackage{tikz-cd} \usepackage{slashed} \usepackage{braket} \usepackage{extarrows} \usepackage[margin=1.3in]{geometry} \usepackage{textcomp} \usepackage{mathtools} \usepackage{hyperref} \usepackage{mathrsfs} \usepackage{tabularx} \usepackage[title]{appendix} \usetikzlibrary{backgrounds} \allowdisplaybreaks \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\Gal}{\mathrm{Gal}} \newcommand{\GW}{\mathrm{GW}} \newcommand{\Ad}{\mathrm{Ad}} \newcommand{\Sp}{\mathrm{Sp}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\SL}{\mathrm{SL}} \newcommand{\ord}{\mathrm{ord}} \newcommand{\bP}{\mathbb P} \newcommand{\bC}{\mathbb C} \newcommand{\bA}{\mathbb A} \newcommand{\bQ}{\mathbb Q} \newcommand{\bR}{\mathbb R} \newcommand{\bZ}{\mathbb Z} \newcommand{\bF}{\mathbb F} \newcommand{\bV}{\mathbb V} \newcommand{\bn}{\mathbf n} \newcommand{\bk}{\mathbf k} \newcommand{\fsp}{\mathfrak{sp}} \newcommand{\fp}{\mathfrak{p}} \newcommand{\fO}{\mathfrak{O}} \newcommand{\fm}{\mathfrak{m}} \newcommand{\Spec}{\mathrm{Spec}} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\Mod}{\mathrm{Mod}} \newcommand{\cM}{\mathcal M} \newcommand{\cN}{\mathcal N} \newcommand{\cD}{\mathcal D} \newcommand{\cI}{\mathcal I} \newcommand{\cF}{\mathcal F} \newcommand{\cO}{\mathcal O} \newcommand{\sT}{\mathscr T} \newcommand{\Span}{\mathrm{Span}} \newcommand{\Ext}{\mathrm{Ext}} \newcommand{\Sym}{\mathrm{Sym}} \newcommand{\Der}{\mathrm{Der}} \newcommand{\Neck}{\mathrm{Neck}} \newcommand{\Emb}{\mathrm{Emb}} \newcommand{\Orb}{\mathrm{Orb}} \newcommand{\tfO}{\widetilde\mathfrak{O}} \newcommand{\SN}{\Orb(C_n,\Neck(n,j))} \newcommand{\SoN}{\Orb_{odd}(C_n,\Neck(n,j))} \newcommand{\SeN}{\Orb_{even}(C_n,\Neck(n,j))} \newcommand{\SNhalf}{\Orb(C_{\frac n2},\Neck(\frac n2,\frac j2))} \newcommand{\SoNhalf}{\Orb_{odd}(C_{\frac n2},\Neck(\frac n2,\frac j2))} \newcommand{\SeNhalf}{\Orb_{even}(C_{\frac n2},\Neck(\frac n2,\frac j2))} \newcommand{\SNt}{\Orb(C_{2j}, \Neck(2j,j)^\tau)} \newcommand{\SoNt}{\Orb_{odd}(C_{2j}, \Neck(2j,j)^\tau)} \newcommand{\SeNt}{\Orb_{even}(C_{2j}, \Neck(2j,j)^\tau)} \newcommand{\SNjj}{\Orb(C_{2j}, \Neck(2j,j))} \newcommand{\SoNjj}{\Orb_{odd}(C_{2j}, \Neck(2j,j))} \newcommand{\SeNjj}{\Orb_{even}(C_{2j}, \Neck(2j,j)} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \begin{document} \title{Quadratically enriched binomial coefficients over a finite field} \author{Chongyao Chen} \address{Duke University, Durham, NC, USA} \curraddr{} \email{[email protected]} \thanks{CC was partially supported by National Science Foundation Awards DMS-2304981 and DMS-2405191} \author{Kirsten Wickelgren} \address{Duke University, Durham, NC, USA} \curraddr{} \email{[email protected]} \thanks{KW was partially supported by National Science Foundation Awards DMS-2103838 and DMS-2405191} \subjclass[2020]{Primary 05A10, 11E81, 14F42} \date{December 2024} \begin{abstract} We compute an analogue of Pascal's triangle enriched in bilinear forms over a finite field. 
This gives an arithmetically meaningful count of the ways to choose $j$ embeddings into an algebraic closure from an \'etale extension of degree $n$. We also compute a quadratic twist. These (twisted) enriched binomial coefficients are defined in joint work of Brugall\'e and the second-named author, building on work of Garibaldi, Merkurjev, and Serre. Such binomial coefficients support curve counting results over non-algebraically closed fields, using $\mathbb{A}^1$-homotopy theory.
\end{abstract}

\maketitle

\section{Introduction}
We consider combinatorics enriched in bilinear forms, in the sense that an integer $n$ is replaced by the class of a symmetric, non-degenerate bilinear form on a vector space of dimension $n$. The resulting binomial coefficients arose in \cite{Brugalle-WickelgrenABQ} in the context of curve counting over non-algebraically closed fields: to count curves on surfaces, one is led to certain degeneration formulas in which curves of lower degrees are glued together. To perform such gluings, one chooses closed points. However, over non-algebraically closed fields, the points have different residue fields. To obtain a count retaining arithmetic information, it is effective to replace integer-valued counts with ones enriched in bilinear forms. The counts now take values in the Grothendieck--Witt group of the base field, defined to be the group completion of the monoid of isomorphism classes of symmetric, non-degenerate bilinear forms. See for example \cite{cubicsurface}, \cite{Levine-EC}, \cite{Wendt-oriented_Schubert_calculus}, \cite{PMPR-tropical_GW_invts}, \cite{McKean-Bezout}, \cite{Cotterill-Darago-Han}. In \cite{Brugalle-WickelgrenABQ}, Erwan Brugall\'e and the second-named author obtained a wall-crossing formula for $\mathbb{A}^1$ Gromov--Witten invariants using combinatorial identities enriched in bilinear forms. Here we systematically compute the analogue of Pascal's triangle over a finite field of odd characteristic, as well as a quadratic twist, both arising in the enumerative context of \cite{Brugalle-WickelgrenABQ}, \cite{degree}.

Let $k = \bF_q$ be a finite field with odd characteristic. Let $\GW(k)$ denote the Grothendieck--Witt group of $k$ and let $u \in \GW(\bF_q)$ denote the class of the bilinear form $\bF_q \times \bF_q \to \bF_q$ sending $(x,y)$ to $\mu x y$, where $\mu$ is a nonsquare in $\bF_q^*$. There is a ring structure on $\GW(k)$ induced from the tensor product of forms, and the unit, denoted by $1$, is represented by the bilinear form $\bF_q \times \bF_q \to \bF_q$ sending $(x,y)$ to $xy$. For an \'etale $k$-algebra $L$, the paper \cite{Brugalle-WickelgrenABQ} defines $\binom{L/k}{j}$ in $\GW(k)$ to be the trace form of the \'etale $k$-algebra associated via the Galois correspondence to the set of subsets of size $j$ of the set of $k$-maps of $L$ into $\overline{k}$. Here, $\overline{k}$ denotes the algebraic closure of $k$. For a quadratic extension $Q$ of $k$ and $[L:k] = 2j$, this set has a twisted action, where $\Gal(Q/k)$ acts by taking the complement of the subset. This defines a quadratically twisted binomial coefficient $\binom{L[Q]/k}{j}$ in $\GW(k)$. We give these definitions in detail in Section~\ref{Section:Quadratically_enriched_binomial_coefficients}.

In this paper, we show the following closed formula for the quadratically enriched binomial coefficients over $\bF_q$.
\begin{theorem}\label{thm:closedformula-1}
Let $q$ be an odd prime power and let $j$ be a non-negative integer. Let $L/k$ be the finite extension of $\bF_q$ of degree $n$.
Then
\[
\binom{L/k}{j} = \binom{n}{j} - (1 -u)\cdot \binom{\frac{n-2}{2}}{\frac{j-1}{2}} \in \GW(\bF_q),
\]
where $u$ is the non-square class in $\GW(k)$ and our convention is that $\binom{a}{b}:=0$ if either $a$ or $b$ is not in $\bZ$.
\end{theorem}

We also compute the quadratically twisted enriched binomial coefficients over a finite field:
\begin{theorem}\label{thm:closedformula-2}
Let $q$ be an odd prime power and let $j$ be a non-negative integer. Let $L/k$ be the finite field extension of $\bF_q$ of degree $2j$, and let $Q/k$ be the degree $2$ field extension of $\bF_q$. Then
\[
\binom{L[Q]/k}{j}= \frac12\binom{2j}{j}\cdot(1+u) \in \GW(\bF_q).
\]
\end{theorem}

\subsection{Summary of the proof}
The idea of the proofs is first to rephrase the problems in purely combinatorial form. In Section \ref{subsection:necklace_interpretation}, we show that the proofs of these two theorems reduce to computing the parity of the number of even-cardinality orbits of necklaces under different cyclic group actions.

\subsubsection{Untwisted case}
The proof of Theorem~\ref{thm:closedformula-1} in the case when $n$ is odd is very straightforward, as shown in Proposition \ref{prop:case1}. In Section \ref{subsection:nevenjodd}, we prove Theorem \ref{thm:closedformula-1} when $j$ is odd. The proof involves a direct calculation using M\"obius inversion and Lucas's theorem. The remaining task is to show that when both $n$ and $j$ are even, the enumeration is always even, which is established combinatorially.

In Section \ref{subsection:comb1}, we use a $C_2$-action, referred to as flipping, on the set of cyclic orbits of necklaces. Since we are concerned only with the parity of the enumeration, it suffices to count the $C_2$-fixed points. Next, in Section \ref{subsection:comb2}, we introduce the concept of symmetry axes for the orbits of necklaces, which allows us to rewrite the set of $C_2$-fixed (symmetric) orbits as a non-disjoint union of two subsets: the set of symmetric orbits with a symmetry axis passing through two beads (we call this a symmetry axis of type 1) and those with a symmetry axis passing between two pairs of beads (type 2). The enumeration methods for these two subsets differ. In Section \ref{subsection:comb2}, we also decompose each cyclic orbit of necklaces into two smaller necklace orbits. This decomposition exhibits special properties when the orbit is symmetric, with different types of symmetry axes yielding different properties. The decomposition induces a map whose codomain is the symmetric product of the sets of two smaller cyclic orbits of necklaces. The enumeration is carried out by summing the cardinalities of the fibers of this map over the codomain. At the end of Section \ref{subsection:comb2}, we provide the enumeration of symmetric orbits with a type 1 symmetry axis when both $n$ and $j$ are even.

Using this approach, in Section \ref{subsection:case2}, we enumerate the symmetric orbits with a type 2 symmetry axis for the case where $n \equiv 2 \mod 4$ and $j$ is even. Combining this result with the results in Section \ref{subsection:comb2}, we prove Theorem \ref{thm:closedformula-1} for the case where $n \equiv 2 \mod 4$ and $j$ is even by calculating the parity of the enumeration using Kummer's theorem. In Section \ref{subsection:case3}, we consider the case where $n \equiv 0 \mod 4$. For a symmetric orbit with a type 2 symmetry axis, we show how to reduce it to a symmetric orbit with a type 1 symmetry axis.
Since $n - 2 \equiv 2 \mod 4$, we can now utilize the results from Section \ref{subsection:comb2} and Section \ref{subsection:case2} to conclude the proof of Theorem \ref{thm:closedformula-1} for the case where $n \equiv 0 \mod 4$ and $j$ is even. This completes the proof of the untwisted case.

\subsubsection{Twisted case}
We prove Theorem \ref{thm:closedformula-2} by reducing it to an untwisted enumeration that we have already studied in detail. In Section \ref{subsection:reduction}, we first provide a description of the twisted orbits in terms of untwisted orbits. Next, we construct a $C_2$-action, called swapping, on the set of twisted orbits and reduce the problem to the enumeration of the swapping-fixed points. Then, by studying the swapping-fixed twisted orbits, we reduce the problem to an enumeration of untwisted orbits satisfying certain conditions. In Section \ref{subsection:relatetopart}, we discuss another way to encode an untwisted cyclic orbit of necklaces, namely as a marked cyclic equivalence class of partitions. The condition on the untwisted orbits we need to enumerate can be rephrased in terms of properties of the partitions, which allows us to calculate the parity of the enumeration using partition theory. This concludes the proof of Theorem \ref{thm:closedformula-2}.

More explicit forms of the untwisted and twisted binomial coefficients over a finite field are presented in Section \ref{subsection:expval1} and Section \ref{subsection:expval}, respectively.

\subsection{Acknowledgements}
We heartily thank De'Asia Brodie and Zoe Valentine for collaboration on the cases $j=1,2,3$ of Theorem~\ref{thm:closedformula-1}, and Erwan Brugall\'e and Rena Chu for useful discussions. KW received support from NSF DMS-2103838 and NSF DMS-2405191. CC is partially supported by DMS-2304981 and DMS-2405191.

\section{Quadratically enriched binomial coefficients over a field}\label{Section:Quadratically_enriched_binomial_coefficients}
The paper \cite{Brugalle-WickelgrenABQ} defined quadratically enriched (twisted) binomial coefficients over a base scheme. We recall the definition in the case where the base scheme is a field. Let $k$ be a field, and fix a separable closure $k^s$ of $k$. For a finite separable field extension $k \subseteq L$, let $\Emb_k(L,k^s)$ denote the set of maps of fields from $L$ into $k^s$ over $k$,
\[
\Emb_k(L,k^s) = \left\{ f : \begin{tikzcd} L \arrow[rr, "f"] && k^s \\ &k \arrow[lu, ""] \arrow[ru, ""] \end{tikzcd} \right\}.
\]
Recall that a finite \'etale $k$-algebra $k \to E$ has an associated trace map
\[
\Tr_{E/k}: E \to k
\]
which takes an element $e$ to the trace of the matrix associated to multiplication by $e$, viewed as an endomorphism of the finite-dimensional $k$-vector space $E$. This trace map determines an element of $\GW(k)$ denoted $\Tr_{E/k} \langle 1 \rangle$ and defined to be the class of the bilinear form
\[
E \times E \to E \to k
\]
where the first map is multiplication on $E$ and the second is $\Tr_{E/k}$. Recall that the Galois correspondence
\[
E \mapsto \Emb_k(E,k^s)
\]
gives an anti-equivalence of categories between finite \'etale $k$-algebras and finite sets equipped with a $\Gal(k^s/k)$-action.

To enrich binomial coefficients, we introduce the following notation. Let $\Emb^j_k(L,k^s)$ denote the set of subsets of $\Emb_k(L,k^s)$ of size $j$.
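Before giving the definitions, we record a trivial special case of this correspondence and of the trace form; it is included only for orientation and is not used later.
\begin{example}
For the split \'etale $k$-algebra $E = k\times\dots\times k$ with $n$ factors, the set $\Emb_k(E,k^s)$ consists of the $n$ coordinate projections, with trivial $\Gal(k^s/k)$-action, and
\[
\Tr_{E/k}\langle 1 \rangle = \langle 1\rangle + \dots + \langle 1\rangle = n \in \GW(k).
\]
Thus, whenever the Galois action on $\Emb^j_k(L,k^s)$ happens to be trivial, the enriched binomial coefficients defined below reduce to the ordinary binomial coefficients.
\end{example}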
\begin{definition}\label{df:binomialcoef}\cite{Brugalle-WickelgrenABQ} Let $$\binom{ L/k}{j} \in \GW(k)$$ denote the class of the trace form of the finite \'etale $k$-algebra corresponding to the finite set $\Emb^j_k(L,k^s)$ with $\Gal(k^s/k)$-action induced from the canonical action on $\Emb_k(L,k^s)$. \end{definition} For $j=2,3$, these forms appeared in \cite[30.12-30.14]{Garibaldi-Serre-Merkurjev}. In \cite{Brugalle-WickelgrenABQ}, it was important to twist such quadratically enriched binomial coefficients by a degree $2$-field extension $k \subset Q$ as follows. There is a canonical isomorphism $\Gal(Q/k) \cong C_2$, where $C_2$ denotes the cyclic group of order $2$. Let $$q_Q: \Gal(k^s/k) \to \Gal(Q/k) \cong C_2$$ denote the corresponding quotient map. For a set $S$ of size $2j$, the set of subsets of $S$ of size $j$ has an action of $C_2$ given by taking a subset to its complement. If a group $G$ acts on $S$, the set of subsets of size $j$ inherits an action of $G \times C_2$ because taking a set to its complement commutes with any automorphism of $S$. \begin{definition}\label{df:twistedbincoef}\cite{Brugalle-WickelgrenABQ} For a finite \'etale $k$-algebra $L$, let $${ L[Q]/k \choose j} \in \GW(k)$$ denote the class of the trace form of the \'etale $k$-algebra corresponding to $\Emb^j_k(L,k^s)$ with the $\Gal(k^s/k)$-action given by the homomorphism $$\Gal(k^s/k)\stackrel{(1,q_Q)}{\to} \Gal(k^s/k) \times C_2$$ and the canonical action of $\Gal(k^s/k) \times C_2$ on $\Emb^j_k(L,k^s)$. \end{definition} \section{Untwisted case - Proof of Theorem \ref{thm:closedformula-1}} Let $k = \bF_q$ be a finite field of odd characteristic. We have \[ \GW(\bF_q) \cong \frac{\bZ[u]}{(u^2-1, 2-2u)} \cong \frac{\bZ\cdot 1\oplus\bZ\cdot u}{2-2u} \cong\bZ\times \bF_q^*/(\bF^*_q)^2, \] where the first isomorphism is an isomorphism of rings, the second and third only respect group structures, and the third isomorphism is the product of the rank and the discriminant. Note that $\bF_q^*/(\bF^*_q)^2\cong \bZ/2\bZ$. Here as above, $u$ denotes the class of the bilinear form $k \times k \to k$ given by $(x,y) \mapsto \mu xy$ where $\mu$ in $\bF_q$ is a non-square. See for example \cite[Theorem 3.5, Corollary 3.6]{lam05}. In particular, the class of any element in $\GW(\bF_q)$ is determined by the rank and the discriminant. Let $L/k$ be a finite extension of degree $n$. Then $\Gal(L/k)\cong C_{n}$, where $C_n$ is the cyclic group of order $n$ that is generated by the Frobenius automorphism $\varphi:x\to x^q$. It is a classical fact that for $[L:k] = n$, the class of the trace form of $L$ over $k$ in $\GW(k)$ is given: \begin{equation}\label{eqn:tracecompfield} \Tr_{L/k}\braket{1} = \epsilon(n):=\left\{\begin{array}{cc} n-1 + u, & n\equiv 0\mod 2 \\ n, & n\equiv 1\mod2 \end{array} \right. \end{equation} Indeed, the rank of $\Tr_{L/k}\braket{1}$ is $n$, so it suffices to compute the discriminant of $\Tr_{L/k}\braket{1}$. The discriminant of the trace form is the discriminant of the minimal polynomial of a generator. Since the Fr\"obenius acts as a $n$ cycle on the roots of a minimal polynomial for a generator of $k \subseteq L$, we have that $\Tr_{L/k}\braket{1}$ has square discriminant if and only if an $n$ cycle has even sign, which occurs if and only if $n$ is odd. For more on trace forms, see \cite{Conner-Perlis}. A proof of the classical fact \eqref{eqn:tracecompfield} can be found in Lemma 58 of the first ArXiv version of \cite{cubicsurface}. 
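The boundary cases of the enriched Pascal's triangle can be read off directly from the definitions; we record them as a consistency check, not needed in the sequel.
\begin{example}
For $[L:k]=n$ we have $\binom{L/k}{0}=\binom{L/k}{n}=1$, since $\Emb^0_k(L,k^s)$ and $\Emb^n_k(L,k^s)$ are one-element sets with trivial action, corresponding to the \'etale algebra $k$. Moreover, $\Emb^1_k(L,k^s)=\Emb_k(L,k^s)$ equivariantly, so $\binom{L/k}{1}=\Tr_{L/k}\braket{1}=\epsilon(n)$. This agrees with Theorem~\ref{thm:closedformula-1}: for $j=1$ the correction term $(1-u)\binom{\frac{n-2}{2}}{0}$ equals $1-u$ when $n$ is even and vanishes when $n$ is odd, so the formula gives $n-1+u$ and $n$, respectively.
\end{example}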
Let $E^j_k(L,k^s)$ denote the finite \'etale $k$-algebra that is associated to $\Emb^j_k(L,k^s)$ under the Galois correspondence. Here, $\Emb^j_k(L,k^s)$ denotes the set of subsets of size $j$ of the set of embeddings of $L$ into $k^s$ over $k$, as in Definition~\ref{df:binomialcoef}. In particular, ${ L/k \choose j} = \Tr_{E^j_k(L,k^s)/k} \langle 1 \rangle$. We have $\dim_k E^j_k(L,k^s) = |\Emb^j_k(L,k^s)| =\binom{n}{j}$. For a group $G$ acting on a set $S$, let $\Orb(G,S)$ denote the set of orbits. The set $\Emb_k^j(L,k^s)$ can be decomposed into Galois orbits $\Emb_k^j(L,k^s) = \amalg_{i\in I} \mathfrak O_i$ with $$I=\Orb(\Gal(k^s/k), \Emb_k^j(L,k^s))$$ being a finite set. Let $H_i\triangleleft \Gal(k^s/k)$ be the stabilizer subgroup of $\mathfrak O_i$ and let \[ d_i = [\Gal(k^s/k): H_i] = \vert \mathfrak O_i \vert \] be the number of elements in the orbit $\mathfrak O_i$. Then $E^j_k(L,k^s) = \prod_{i\in I}(k^s)^{H_i}$, and \[ \Tr_{E^j_k(L,k^s)/k}\braket{1} = \sum_{i\in I}\Tr_{(k^s)^{H_i}/k}\braket{1} = \sum_{i \in I} \epsilon(d_i). \] For a group $G$ acting on a set $S$, let $\Orb_{even}(G,S) \subseteq \Orb(G,S)$ denote the subset consisting of those orbits with an even cardinality. Define $\Orb_{odd}(G,S) $ similarly. The set $\Orb (\Gal(k^s/k), \Emb_k^j(L,k^s) )$ decomposes \[ \Orb(\Gal(k^s/k),\Emb_k^j(L,k^s))=GO_{odd}(L/k,j)\amalg GO_{even}(L/k,j),\] where we use the abbreviations \begin{align*} GO_{odd}(L/k,j):=&\Orb_{odd} (\Gal(k^s/k), \Emb_k^j(L,k^s) ),\\ GO_{even}(L/k,j):=&\Orb_{even} (\Gal(k^s/k), \Emb_k^j(L,k^s) ). \end{align*} Let $\Delta(n,j)$ denote the difference $\Delta(n,j):=\binom{L/k}{j}-\binom{n}{j}$. The following lemma follows immediately from \eqref{eqn:tracecompfield}. \begin{lemma}\label{lem:deltaGoeven} $\Delta(n,j) = (u-1)\cdot |GO_{even}(L/k,j)|$. \end{lemma} Therefore, to determine the value of $\Delta(n,j)$, one only needs to determine the mod 2 residue class of $|GO_{even}(L/k,j)|$. \begin{proposition}[Theorem \ref{thm:closedformula-1}: case for $n$ odd]\label{prop:case1} $\binom{L/k}{j} = \binom{n}{j}$ for $n$ odd. \end{proposition} \begin{proof} As $(k^s)^{H_i}$ is a subfield of $L$, we have $d_i|n$. Thus, if $n$ is odd, $GO_{even}(L/k,j)=\emptyset$. \end{proof} \subsection{Necklace interpretation}\label{subsection:necklace_interpretation} Let $\Neck(n,j)$ denote the set of necklaces consisting of $j$ blue beads and $n-j$ red beads, with one bead designated as the bead at the top of the necklace. One could alternately view this set as the set of necklaces with $j$ blue beads and $n-j$ red beads, lying on the plane, beads equally spaced on the unit circle with the top bead at $(0,1)$. It will be useful to consider the natural action of the dihedral group $D_n = C_n \rtimes C_2 = \langle r, f : r^n = 1, f^2 =1, rfr = f \rangle$ on $\Neck(n,j)$ in which $r$ acts by rotating the necklace one bead counterclockwise and $f$ acts by flipping the necklace over the vertical line through the top bead. As above, let $k \subseteq L$ be the degree $n$ field extension $\bF_q \subseteq \bF_{q^n}$. Fix an embedding $p \in \Emb_k(L,k^s)$. Then \[ \Emb_k(L,k^s) = \{p, \varphi p, \varphi^2 p\,\ldots, \varphi^{n-1} p\}, \] where $\varphi$ denotes the Fr\"obenius $\varphi \colon k^s \to k^s$, $\varphi (x) = x^q$ of $k$. 
It follows that we can identify $\Emb^j_k(L,k^s)$ with the set $\Neck(n,j)$,
\[
\Emb^j_k(L,k^s) \cong \Neck(n,j),
\]
in such a way that the action of $\Gal(k^s/k)$ on $\Emb^j_k(L,k^s)$ is given by the homomorphism $\Gal(k^s/k) \to \Gal(L/k)\cong C_n \to D_n$ and the above action of $D_n$ on $\Neck(n,j)$. In particular, $GO_{odd}(L/k,j)$ can be identified with the subset of orbits of $\Neck(n,j)$ under the rotation action of $C_n = \langle r : r^n =1 \rangle$ consisting of those orbits with odd cardinality. A similar statement holds for $GO_{even}(L/k,j)$ as well, giving a canonical bijection
\[
GO_{even}(L/k,j) \cong \Orb_{even}(C_n, \Neck(n,j)).
\]
Moreover, Lemma~\ref{lem:deltaGoeven} says that
\[
{L/k \choose j} = {n \choose j} + (u - 1) \vert \Orb_{even}(C_n, \Neck(n,j))\vert.
\]

For the twisted binomial coefficients, we let $D_{2j} \times C_2$ act on $\Neck(2j,j)$ by defining the $D_{2j}$ action to be as above and defining the (commuting) action of $C_2 = \langle e : e^2 =1 \rangle$ so that $e$ exchanges the colors of all the beads, turning the red beads of a necklace to blue, and the blue beads of a necklace to red. Define the {\em twisted} action of $C_{2j}$ on $\Neck(2j,j)$ by the map $C_{2j} \to D_{2j} \times C_2$ defined by
\[
r \mapsto (r,e)
\]
and the $D_{2j} \times C_2$ action just defined. To distinguish between the twisted and untwisted actions, when we view $\Neck(2j,j)$ with its twisted $C_{2j}$-action, we will write $\Neck(2j,j)^{\tau}$. Otherwise, $\Neck(2j,j)$ has the $C_{2j}$-action from the morphism $C_{2j} \to D_{2j}$, given by $r \mapsto r$ as above. We have
\[
{L[Q]/k \choose j} = {2j \choose j} + (u - 1) \vert \Orb_{even}(C_{2j}, \Neck(2j,j)^{\tau})\vert,
\]
and we will denote
\begin{equation}\label{eqn:defdeltaprime}
\Delta'(2j,j) : = {L[Q]/k \choose j}-{2j \choose j} = (u - 1) \vert \Orb_{even}(C_{2j}, \Neck(2j,j)^{\tau})\vert.
\end{equation}

\subsection{M\"obius inversion}
Let $N(n,j)$ denote the number of $C_n$-orbits in $\Neck(n,j)$ (under the rotation action) whose stabilizer is trivial,
\[
N(n,j) := |\big\{[l]\in\Orb(C_n,\Neck(n,j)):\mathbf{Stab}_{C_n} ([l])= \{1\}\big\}|,
\]
where $[l]$ denotes the rotation orbit of a necklace $l$ and $\mathbf{Stab}_{C_n}([l])$ is the common stabilizer of its elements. By the necklace interpretation of Section~\ref{subsection:necklace_interpretation}, we have $$N(n,j)=|\big\{[l]\in\Orb(\Gal(L/k),\Emb_k^j(L,k^s)):\mathbf{Stab}_{\Gal(L/k)} ([l]) = \{1\}\big\}|.$$ Note that the numbers $N(n,j)$ and $|\Emb^j_k(L,k^s)| = \binom{n}{j}$ only depend on $n$ and $j$.

As above, let $I=\Orb(C_n, \Neck(n,j))$ index the orbits of the action of $C_n$ on $\Neck(n,j)$ and, for $i$ in $I$, let $d_i$ denote the cardinality of the corresponding orbit. Let $\rho:=\frac{j}{n}$ denote the fraction of beads which are blue. Given an orbit $i \in I$ of cardinality $d_i$, we can take $d_i$ adjacent beads in the necklace and form a new necklace with $d_i$ beads, $\rho d_i$ of which are blue, and which has a trivial stabilizer under the rotation action of $C_{d_i}$; the resulting necklace is well defined up to rotation, so this construction sends $C_n$-orbits of cardinality $d_i$ to free $C_{d_i}$-orbits. This process can be run in reverse, creating a necklace with $n$ beads from one with $d_i$ beads. It follows that
\[
|\{i\in I,d_i = d\}|= N(d,\rho d).
\]
By the necklace interpretation, $I$ is in canonical bijection with an indexing set for the orbits of the $\Gal(L/k)$ action on $\Emb^j_k(L,k^s)$. Combining with the above, we have
\[
\binom{n}{j} = |\Emb^j_k(L,k^s)|=\sum_{d|n}d~|\{i\in I,d_i = d\}| = \sum_{d|n}d~N(d,\rho d),\quad \rho:=\frac{j}{n},
\]
under the convention $N(d,\rho d) = 0$ if $\rho d\notin \bZ$.
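For small $n$, the identity above, and the parity of $|GO_{even}(L/k,j)|$ that Theorem~\ref{thm:closedformula-1} predicts (by Lemma~\ref{lem:deltaGoeven} and the relation $2-2u=0$ in $\GW(\bF_q)$, the theorem is equivalent to the congruence $|GO_{even}(L/k,j)|\equiv\binom{\frac{n-2}{2}}{\frac{j-1}{2}} \bmod 2$, with the convention of Theorem~\ref{thm:closedformula-1}), can be checked by brute force by listing the rotation orbits of $j$-subsets of $\bZ/n\bZ$. The following short Python script is included only as an illustration; the helper names are ad hoc.
\begin{verbatim}
from itertools import combinations
from math import comb

def even_orbit_count(n, j):
    # number of C_n-rotation orbits of even cardinality on Neck(n, j),
    # modelled as j-subsets of Z/nZ (the positions of the blue beads)
    seen, even = set(), 0
    for S in combinations(range(n), j):
        S = frozenset(S)
        if S in seen:
            continue
        orbit = {frozenset((x + r) % n for x in S) for r in range(n)}
        seen |= orbit
        if len(orbit) % 2 == 0:
            even += 1
    return even

def predicted_parity(n, j):
    # parity of C((n-2)/2, (j-1)/2), with the convention that the
    # binomial coefficient vanishes unless both entries are integers
    if n % 2 == 0 and j % 2 == 1:
        return comb((n - 2) // 2, (j - 1) // 2) % 2
    return 0

for n in range(1, 13):
    for j in range(n + 1):
        assert even_orbit_count(n, j) % 2 == predicted_parity(n, j), (n, j)
print("parity check of Theorem 1.1 passed for all n <= 12")
\end{verbatim}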
We can now apply the M\"obius inversion formula \cite[16.4]{HardyWright}, which yields \[ |N(d,b)| = \frac{1}{d}\sum_{j|d}\mu(j)\binom{\frac{d}{j}}{\frac{b}{j}}, \] where $\mu(j)$ is the M\"obius function \[ \mu(j) = \begin{cases} 1 & \text{if } j = 1, \\ 0 & \text{if } j \text{ has a squared prime factor}, \\ (-1)^v & \text{if } j \text{ is a product of } v \text{ distinct prime numbers}. \end{cases} \] Therefore \begin{equation}\label{eqn:sizeGaloisorbit} |GO_{even}(L/k,j)| = \sum_{d|n,2|d}\frac{1}{d}\sum_{j|d}\mu(j)\binom{\frac{d}{j}}{\frac{\rho d}{j}}. \end{equation} \subsection{Lucas and Kummer's theorems} For a prime $p$, let $\nu_p:\bZ \to \bZ_{\geq0} $ denote the $p$-adic valuation map. Lucas' classical theorem calculates the mod $p$ residue class for binomial coefficients. \begin{theorem}\label{thm:lucastheorem}[Lucas's theorem \cite{lucastheorem}] For $x,y\in\bZ_{\geq 0}$, and $p$ a prime \[ \binom{x}{y} \equiv \prod_{i}\binom{x_i}{y_i}\mod p \] where $x_i,y_i$ are the coefficients of the $p$-adic expansions of $x$ and $y$, $x = \sum_{i}x_ip^i$, $y = \sum_i y_i p^i$. \end{theorem} Define the $p$-adic carrier function for an integer $x$ as $S_p(x) = \sum_i{x_i}$, where $x = \sum_{i}x_ip^i$ is the $p$-adic expansion. Kummer's classical theorem calculates the $p$-adic valuation of binomial coefficients. \begin{theorem}[Kummer's theorem \cite{Kummertheorem}] The $p$-adic valuation of $\binom{n}{m}$ is \[ \nu_p\binom{n}{m} = \frac{S_p(m)+S_p(n-m)-S_p(n)}{p-1}. \] \end{theorem} Some easy but useful corollaries that are relevant to our purpose are the following. \begin{corollary}\label{cor:corKummer1} For $n,j\in\bZ$, we have $\nu_2\binom{n}{j} = \nu_2\binom{2n}{2j} = \nu_2\binom{2n+1}{2j+1}$. \end{corollary} \begin{corollary}\label{cor:corKummer2} For $n,j\in2\bZ$ we have \[ \nu_2\binom{n}{j} = \nu_2\binom{n+1}{j}. \] \end{corollary} \begin{corollary}\label{cor:corKummer3} For $j\in\bZ_{>0}$, we have $\nu_2\binom{2j}{j}\geq1$, and the equality holds if and only if $j= 2^m$, $m\in\bZ_{>0}$. \end{corollary} \subsection{Proof of Theorem \ref{thm:closedformula-1}\label{subsection:nevenjodd} when $\nu_2(n)\geq1$ and $\nu_2(j)=0$ } Define a partial order $\prec$ on $\bQ_{\geq0}$ as follows. \begin{definition}\label{def:prec} For $x,y\in \bQ_{\geq0}$ written as 2-adic numbers $x = \sum_{i} x_i\cdot 2^i$, $y = \sum_{i} y_i\cdot 2^i$, then $x\prec y$ if and only if $x,y\in\bZ_{\geq 0}$ and $x_i\leq y_i$ for all $i$. \end{definition} \begin{proposition}\label{prop:case2} For $\nu_2(n)\geq1$ and $\nu_2(j)=0$ we have \[ \Delta(n,j) = \left\{ \begin{array}{cc} -1+u, & \frac{j-1}{2}\prec \frac{n-2}{2} \\ 0, & \mathrm{else}. \end{array} \right. \] \end{proposition} \begin{proof} As above, define $\rho = j/n$. From \eqref{eqn:sizeGaloisorbit}, we have \begin{align*} |GO_{even}(L/k,j)| &= \sum_{d|n,2|d}\frac{1}{d}\sum_{l|d}\mu(l) \binom{\frac{d}{l}}{\frac{\rho d}{l}} .\\ &= \sum_{d|n,2|d}\frac{1}{d}\sum_{l|d}\mu(l) \frac{1}{\rho} \binom{\frac{d}{l} -1}{\frac{\rho d}{l}-1} .\\ &= \frac{1}{j}\sum_{d|n,2|d}\frac{n}{d}\sum_{l|d} \mu(l)\binom{\frac{d}{l}-1}{\frac{\rho d}{l}-1}. \end{align*} By Lemma \ref{lem:deltaGoeven}, $\Delta(n,j)$ only depends on the mod 2 residue of $|GO_{even}(L/k,j)|$. As $j$ is odd by our assumption, \[ |GO_{even}(L/k,j)|\equiv \sum_{d|n, 2|d}\frac{n}{d}\sum_{l|d} \mu(l)\binom{\frac{d}{l}-1}{\frac{\rho d}{l}-1} \mod 2. 
\]
Further, as $\mu(l)\equiv 1\mod2$ if and only if $l$ is square free, we have
\begin{align}
\notag |GO_{even}(L/k,j)| & \equiv \sum_{d|n,2|d}\frac{n}{d}\sum_{\substack{l | d \\ l \text{ square free}} }\binom{\frac{d}{l}-1}{\frac{\rho d}{l}-1} \\
\label{eq:GOevennevenjoddsumswpowers2} &\equiv\sum_{(2^{\nu_2(n)}d)|n}\sum_{\substack{l|(2^{\nu_2(n)}d)\\ l \text{ square free}} }\binom{\frac{2^{\nu_2(n)}d}{l}-1}{\frac{\rho 2^{\nu_2(n)}d}{l}-1}\mod 2.
\end{align}
By Corollary~\ref{cor:corKummer1}, we have mod $2$ equalities
\[\binom{x-1}{y-1} \equiv \binom{2x-2}{2y-2}\equiv\binom{2x-1}{2y-1}\mod 2,\quad \forall x,y\in\bZ_{\geq0}.\]
Thus the mod 2 residue of each of the summands of \eqref{eq:GOevennevenjoddsumswpowers2} only depends on $\frac dl$ and $\frac{\rho d}{l}$. Let $P$ denote the set of pairs $(d,l)$ with $2^{\nu_2(n)}d$ dividing $n$ and $l$ square free dividing $2^{\nu_2(n)}d$. Define an equivalence relation on $P$ by declaring $(d,l) \sim (d',l')$ when $\frac{d}{l} = \frac{d'}{l'}$. Suppose $(d,l)\in P$ with $l$ odd. Then we can divide both $l$ and $d$ by $l$, obtaining a new pair $(d',l')$ with the same ratio $\frac{d}{l} = \frac{d'}{l'}$ and $l'=1$. The number of pairs with this given ratio is then equal to $2^m$, where $m$ is the number of distinct odd prime factors of $n/d'$. It follows that all these equivalence classes have even cardinality, except for the class of $(n/2^{\nu_2(n)},1)$. Now suppose $(d,l)\in P$ with $l$ even. Then we can divide both $l$ and $d$ by $l/2$, obtaining an equivalent pair $(d',2)$. The number of pairs equivalent to $(d',2)$ is then $2^m$, where $m$ is the number of distinct odd prime factors of $n/d'$. Thus all these equivalence classes have even cardinality, except for the class of $(n/2^{\nu_2(n)},2)$. However, for $(d,l)=(n/2^{\nu_2(n)},2)$, the binomial coefficient $\binom{\frac{2^{\nu_2(n)}d}{l}-1}{\frac{\rho 2^{\nu_2(n)}d}{l}-1} = \binom{\frac{n}{2}-1}{\frac{j}{2}-1}$, which is $0$ because $\frac{j}{2}$ is not an integer and, by convention (see Theorem \ref{thm:lucastheorem}), binomial coefficients with fractional lower terms are $0$. Thus,
\[
|GO_{even}(L/k,j)| \equiv \binom{n-1}{j-1}\mod2.
\]
By Lucas's theorem (Theorem~\ref{thm:lucastheorem}), $|GO_{even}(L/k,j)|$ is odd if and only if $j-1 \prec n-1$. Finally, because $n$ is even and $j$ is odd, it follows that $j-1 \prec n-1$ if and only if $j-1 \prec n-2$. Then since both $j-1$ and $n-2$ are even, we have $j-1 \prec n-2$ if and only if $\frac{j-1}{2} \prec \frac{n-2}{2}$, which gives the desired formula.
\end{proof}
\subsection{Combinatorics of symmetric orbits}
\label{subsection:comb1}
We will use the necklace interpretation for the rest of the paper. Note that the $C_2$ generated by the flip $f$ acts on the set $\Orb(C_n, \Neck(n,j))$. For a set $S\subset \SN$, let $S^f$ denote the subset of $S$ that is fixed by the flipping action. Note that if $l$ is a necklace with orbit $[l] \in \Orb(C_n, \Neck(n,j))^f$, then $f\cdot l= r^m\cdot l$ for some $m$, which implies that $l$ has an axis of symmetry: the axis rotated $\frac{m\pi}{n}$ from the vertical counterclockwise. Since we will decompose a necklace with a symmetry axis using such an axis, it will be important to consider orbits under rotation of pairs consisting of a necklace and a symmetry axis.
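Before turning to symmetric orbits, we record a quick numerical illustration of Proposition~\ref{prop:case2} (again only as a consistency check). For $n=8$ and $j=3$ we have $\frac{j-1}{2}=1\prec 3=\frac{n-2}{2}$, so $\Delta(8,3)=u-1$ and ${L/k \choose 3}=\binom{8}{3}+u-1=55+u$, whereas for $n=6$ and $j=3$ we have $1\not\prec 2$, so ${L/k \choose 3}=\binom{6}{3}=20$; both values agree with the triangle displayed in Section~\ref{subsection:expval1}.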
Consider the 2-dimensional faithful representation $\Phi:D_n\hookrightarrow O(2,\bR)$, \[ \Phi(r) = \left(\begin{array}{cc} \cos(\frac{2\pi i}{n}) & -\sin(\frac{2\pi i}{n}) \\ \sin(\frac{2\pi i}{n})& \cos(\frac{2\pi i}{n}) \end{array}\right), \quad \Phi(f) = \left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right). \] Let $ x\in \bR^2,x = (0,1)$ be a fixed point, then $l\in\Neck(n,j)$ can be represented by a subset of size $j$ in $\Phi(D_n)\cdot x$. If $[l]\in \SN^f$, then for every representative $l\in \Neck(n,j)$ there exists $\sigma\in\bP^1(\bR)$ such that $l$ is invariant under the reflection $f_\sigma\in O(2,\bR)$ with respect to $\sigma$. Any such linear space $\sigma$ is called a symmetry axis of $l$. A symmetry axis $[(l,\sigma)]$ for $[l]\in \SN^f$ is the $C_n$-orbit of the pair $(l,\sigma)$. \begin{figure} \centering \begin{tikzpicture} \def\radius{2cm} \def\numBeads{6} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i+1)} \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \draw[dashed] (0,1.3*\radius) -- (0,-1.3*\radius); \end{tikzpicture} \caption{An element in $\Orb(C_6,\Neck(6,4))^f$ with a unique symmetry axis.} \end{figure} \begin{definition} For any $[l]\in \SN$, define the period of $[l]$ as $\pi([l]):= |C_n\cdot l| $, which is well defined as $|C_n\cdot l|$ is independent from the choice of representative of $[l]$. \end{definition} \begin{definition} For two symmetry axes $[(l_1,\sigma_1)]$ and $[(l_2,\sigma_2)]$ of $[l]\in\SN^f$, define their distance as \[ d([(l_1,\sigma_1)],[(l_2,\sigma_2)]) = \min\{|m|:r^m\cdot \sigma_1 =\sigma_2,(l_i,\sigma_i)\in [(l_i,\sigma_i)],i=1,2 \}, \] where $m$ is allowed to be half-integers in the sense that $\Phi(r)^{1/2}$ is the counterclockwise rotation by $\frac{2 \pi i}{2n}$. \end{definition} \begin{lemma}\label{lem:2axes} Let $[l]$ be in $\SN^f$. If $\frac{n}{\pi([l])}$ is odd, then there is a unique symmetry axis. Otherwise, there are two distinct symmetry axes $[(l_1,\sigma_1)]$ and $[(l_2,\sigma_2)]$ with $d([(l_1,\sigma_1)],[(l_2,\sigma_2)]) = \frac{\pi([l])}{2}$. \end{lemma} \begin{proof} Let $[(l_1,\sigma_1)]$ and $[(l_2,\sigma_2)]$ be two symmetry axes of $[l]$. As the composition $f_{\sigma_2}\circ f_{\sigma_1}\in \Phi(C_n)$ is $r$ raised to the power of $2d([(l_1,\sigma_1)],[(l_2,\sigma_2)])$, we have \[\pi([l])~ |~2d([(l_1,\sigma_1)],[(l_2,\sigma_2)]).\] On the other hand, by definition $d([(l_1,\sigma_1)],[(l_2,\sigma_2)])<\pi([l])$. Thus $d([(l_1,\sigma_1)],[(l_2,\sigma_2)]) = \frac{\pi([l])}{2}$ or $0$. Moreover, in the first case, we must have $\frac{n}{\pi([l])}$ is even. \end{proof} For example, Figure \ref{fig:twosymmetryaxes} shows an element $[l]\in\Orb(C_6,\Neck(6,4))^f$ with two different symmetry axes. In this example, $\pi([l]) =3$, whence $\frac{6}{\pi([l])} = 2$, and the distance between the two axes of symmetry is $\frac{3}{2}$. \begin{definition} A symmetry axis $[(l,\sigma)]$ for $[l]\in \SN^f$ is called {\em type 1} if $\sigma$ does not intersect $\Phi(D_n)\cdot x$. It is called {\em type 2} if $\sigma$ intersects at least one element of $\Phi(D_n)\cdot x$. 
\end{definition} If $n$ is odd, then the only symmetry axis for $[l]\in\SN^f$ is of type 2. When $n$ is even, the symmetry axes for the example in Figure \ref{fig:twosymmetryaxes} have different types. \begin{figure} \centering \begin{tikzpicture} \def\radius{2cm} \def\numBeads{6} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i + 1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=6 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \draw[dashed] (0,1.3*\radius) -- (0,-1.3*\radius); \draw (180:1.3*\radius) -- (0:1.3*\radius); \end{tikzpicture} \caption{An element in $\Orb(C_6,\Neck(6,4))^f$ with two symmetry axes. The dashed line is a symmetry axis of type 1 and the solid line is of type 2.}\label{fig:twosymmetryaxes} \end{figure} For $S\subset \SN^f$, denote $S_1$ as the subset of $S_i$ that has a symmetry axis of type $i$, for $i=1,2$. We have $\SN^{f} = \SN_1^{f}\bigcup \SN_2^{f}$. \begin{lemma}\label{lem:intersection} For $\nu_2(n)\geq 1$, \[\SN_1^{f}\cap \SN_2^{f} = \SoN^{f}.\] In other words, \[ \SeN_1^{f}\cap \SeN_2^{f}=\emptyset. \] \end{lemma} \begin{proof} If $[l]\in \SN^f$ has two axes $[(l_1,\sigma_1)],[(l_2,\sigma_2)]$ of different type, then $d([(l_1,\sigma_1)],[(l_2,\sigma_2)])$ is an half integer. By Lemma \ref{lem:2axes}, we have $\pi([l]) = 2d([(l_1,\sigma_1)],[(l_2,\sigma_2)])$ is odd. \end{proof} Consider again the necklace in Figure \ref{fig:twosymmetryaxes}. This necklace has period $3$, and is contained in both sides of the first equation in Lemma \ref{lem:intersection}. An immediate corollary of Lemma~\ref{lem:intersection} is the following. \begin{corollary} For $\nu_2(n)\geq 1$, we have \begin{align*} |\SeN| \equiv & |\SeN^f| \mod 2\\ = & |\SeN^f_1| +|\SeN^f_2|. \\ \end{align*} \end{corollary} \subsection{Combinatorics of type 1 symmetric orbits} \label{subsection:comb2} Now assume $n$ is even. By choosing every other bead of a representative necklace $l$, any $[l]\in \SN$ can be uniquely decomposed into an unordered pair of necklace orbits (under the rotation action) with $\frac n2$ beads. This constructs a well-defined map \begin{equation}\label{defn:phi} \begin{split} \phi:\SN&\to \bigcup_{j_1+j_2 = j}\mathrm{Sym}(\Orb(C_{\frac{n}{2}},\Neck(\frac n 2,j_1), \Orb(C_{\frac{n}{2}},\Neck(\frac n 2,j_2))\\ [l]& \mapsto ([l_1],[l_2]), \end{split} \end{equation} where $\mathrm{Sym}$ denotes the symmetric product. 
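For example (purely as an illustration), for $n=4$ and $j=2$ the orbit of the necklace whose two blue beads are adjacent is sent by $\phi$ to the pair $([l_1],[l_1])$, where $[l_1]$ is the unique orbit in $\Orb(C_2,\Neck(2,1))$, while the orbit of the necklace whose two blue beads are opposite is sent to the pair consisting of the all-blue orbit in $\Orb(C_2,\Neck(2,2))$ and the all-red orbit in $\Orb(C_2,\Neck(2,0))$; this is why the target of $\phi$ is a union over all decompositions $j_1+j_2=j$.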
\begin{figure} \centering \begin{tikzpicture}[scale=0.8] \begin{scope}[shift={(0, 0)}] \def\radius{1.5cm} \def\numBeads{6} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=6 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \node at (3, 0) { $\xrightarrow{\phi}$}; \begin{scope}[shift={(5.5, 0)}] \def\radius{1cm} \def\numBeads{3} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \node at (8, 0) { $\times$}; \begin{scope}[shift={(10, 0)}] \def\radius{1cm} \def\numBeads{3} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \end{tikzpicture} \caption{An element in $\Orb(C_6,\Neck(6,4))^f$ decomposes to a symmetric product in $\Orb(C_3,\Neck(3,2)^f$ under $\phi$.} \end{figure} \begin{lemma} For $[l]\in \SN_1^f$, we have $\phi([l])=([l_1], f\cdot [l_1])$, where $l_1\in \Neck(\frac{n}{2},\frac j2)$. \end{lemma} \begin{proof} For $[l]\in \SN^f_1$, let $[(l,\sigma)]$ denote a symmetry axis of type 1. The action of $f$ on $\SN^f_1$ can be induced by the reflection $f_\sigma$. Since $f_\sigma$ exchanges the two smaller necklaces, we must have $[l_2]=f\cdot[l_1]$. \end{proof} Moreover, when restricted to $\SN_1^f$, $\phi$ surjects to $ \mathrm{Sym}(\SNhalf,f\cdot \SNhalf$. Therefore, we have \[ \SN^f_1 = \bigcup_{([l],f\cdot[l]))\in\mathrm{Sym}(\SNhalf,f\cdot \SNhalf}\phi^{-1}([l],f\cdot[l]). \] \begin{lemma}\label{lem:fibperiod1} Consider the restriction of $\phi$ to $\SN_1^f$. Then \begin{enumerate} \item $|\phi^{-1}([l],f\cdot [l])| = \pi([l])$, if $[l]\notin \SNhalf^f$, \item $|\phi^{-1}([l],[l])| = \frac{\pi([l])+1}{2}$ if $[l]\in \SNhalf^f$ and $\pi([l])\equiv 1\mod 2$, \item $|\phi^{-1}([l], [l])| = \frac{\pi([l])}{2}$ if $[l]\in \SNhalf^f$ and $\pi([l])\equiv 0\mod 2$. \end{enumerate} \end{lemma} \begin{proof} $|\phi^{-1}([l],f\cdot [l])|$ is the number of elements in $\SN_1^f$ that can be formed by interweaving $[l]$ and $f\cdot [l]$. The above statement can be seen by considering a fixed representative of $[l]$, and counting how many different ways one can insert $f\cdot l$. \end{proof} For example, the element shown in Figure \ref{fig:orb52} has period 5, which can generate 3 distinct elements by Lemma \ref{lem:fibperiod1}.(2), and these three elements are shown in Figure \ref{fig:orb104}. 
\begin{figure} \centering \begin{tikzpicture} \def\radius{1.5cm} \def\numBeads{5} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i+1)} \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{tikzpicture} \caption{An element in $\Orb(C_5,\Neck(5,2))^f$.}\label{fig:orb52} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=0.8] \begin{scope}[shift={(0, 0)}] \def\radius{1.5cm} \def\numBeads{10} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \begin{scope}[shift={(5, 0)}] \def\radius{1.5cm} \def\numBeads{10} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i+2)} \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=7 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=9 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \begin{scope}[shift={(10, 0)}] \def\radius{1.5cm} \def\numBeads{10} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=6 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \end{tikzpicture} \caption{The three distinct elements in $\Orb(C_{10},\Neck(10,4))^f$ that can be generated by the same element in $\Orb(C_5,\Neck(5,2))^f$.}\label{fig:orb104} \end{figure} \begin{lemma}\label{lem:fibperiod2} Let $[l]\in \SN_1^f$ be such that $\phi([l])=([l_1],[l_1])$. Then there is one such element $[l]$ such that $[l_1]$ in $\SNhalf^f$ with $\pi([l_1])\equiv 1 \mod 2$, which has $\pi([l]) = \pi([l_1])$. Otherwise $\pi([l]) = 2\pi([l_1])$. \end{lemma} \begin{proof} The exceptional case is the following. View $l_1$ as a subset of $\Phi(D_n)\cdot x$. Then flip $l_1$ with respect to $\sigma_0\in \bP^1(\bR)$, where $\sigma_0$ is perpendicular to a symmetry axis of $l_1$. Then $[l]$ is the $C_n$-orbit of $l_1\cup s_{\sigma_0}\cdot l_1$. \end{proof} For example, the three elements shown in Figure \ref{fig:orb104} have period $10,5,10$ respectively, where the middle one is the exceptional case in Lemma \ref{lem:fibperiod2}. Therefore, for $[l]\in \Orb_{odd}(C_{\frac n2},\Neck(\frac n2,\frac j2))^f$, we have \begin{equation} \phi^{-1}([l],[l])\cap \SeN= \frac{\pi([l])-1}{2}. 
\end{equation}
\begin{proposition}\label{prop:S1}
For $\nu_2(n)\geq 1$,
\[|\SeN_1^f| = \frac12 \left(\binom{\frac n2}{\frac j2}-|\SoNhalf^f|\right).\]
\end{proposition}
\begin{proof}
By Lemma \ref{lem:fibperiod1} and Lemma \ref{lem:fibperiod2}, we have
\begin{align*}
|\SeN_1^f| = & \frac 12\sum_{[l]\notin \SNhalf^f}\pi([l]) + \sum_{[l]\in \SeNhalf^f}\frac{\pi([l])}{2}\\
 & + \sum_{[l]\in\SoNhalf^f}\frac{\pi([l])-1}{2}\\
 = & \frac12 \left(\sum_{[l]\in \SNhalf}\pi([l]) - |\SoNhalf^f|\right)\\
 = & \frac12\left(\sum_{d|\frac n2}dN(d,\rho d)-|\SoNhalf^f|\right)\\
 = & \frac12 \left(\binom{\frac n2}{\frac j2}-|\SoNhalf^f|\right).
\end{align*}
\end{proof}
\begin{lemma}\label{lem:ooddvalue}
For any $n,j\in \bZ_{\geq0}$, we have
$$|\SoN^f| = \left\{
\begin{array}{cc}
 \binom{\frac{n}{2^{\nu_2(n)+1}}-\frac12}{\frac{j}{2^{\nu_2(n)+1}}},   & \textrm{ when } \nu_2(j)>\nu_2(n),\\
 \\
 \binom{\frac{n}{2^{\nu_2(n)+1}}-\frac12}{\frac{j}{2^{\nu_2(n)+1}}-\frac12}, &\textrm{ when } \nu_2(j) = \nu_2(n),\\
 \\
 0, & \textrm{ when } \nu_2(j)<\nu_2(n).
\end{array}\right.$$
\end{lemma}
\begin{proof}
Notice that for any $n,j\in 2\bZ$, we have
$$\SoN^f = \SoNhalf^f.$$
Then, the problem reduces to the case when $\nu_2(n)\cdot\nu_2(j) =0$. The counting in this case is clear: if $n$ is odd, every orbit has odd cardinality and every $f$-fixed orbit has a unique symmetry axis passing through exactly one bead, so such orbits are counted by choosing the colors of the beads on one side of the axis; if instead $n$ is even and $j$ is odd, there are no odd orbits at all.
\end{proof}
\subsection{Proof of Theorem \ref{thm:closedformula-1} when $\nu_2(n)=1$ and $\nu_2(j)\geq1$}\label{subsection:case2}
Consider the restriction of the map $\phi$ in \eqref{defn:phi} to $\SN_2^f$. As $n\equiv 2\mod4$, the image is contained in
\[\bigcup_{j_1+j_2=j}\mathrm{Sym}(\Orb(C_{\frac n2},\Neck(\frac n2,j_1))^f,\Orb(C_{\frac n2},\Neck(\frac n2,j_2))^f).\]
For an example of $([l], \phi([l]))$ in this case, see Figure \ref{fig:examplephi2}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\begin{scope}[shift={(0, 0)}]
\def\radius{1.5cm}
\def\numBeads{10}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)+90}
  \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=2 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=10 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=6 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; \fi\fi\fi\fi
}
\end{scope}
\node at (3, 0) { $\xrightarrow{\phi}$};
\begin{scope}[shift={(5.5, 0)}]
\def\radius{1cm}
\def\numBeads{5}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)+90}
  \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; \fi
}
\end{scope}
\node at (8, 0) { $\times$};
\begin{scope}[shift={(10, 0)}]
\def\radius{1cm}
\def\numBeads{5}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)-90}
  \ifnum\i=4 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; \fi\fi\fi
}
\end{scope}
\end{tikzpicture}
\caption{An element in $\Orb(C_{10},\Neck(10,4))^f_2$ decomposes under $\phi$ into a symmetric product of two elements in $\Orb(C_{5},\Neck(5,1))^f$ and $\Orb(C_{5},\Neck(5,3))^f$, respectively.}\label{fig:examplephi2}
\end{figure}
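To illustrate the two statements above in the smallest case relevant to this subsection (a consistency check only), take $n=6$ and $j=4$. The set $\Orb(C_6,\Neck(6,4))$ consists of three orbits, of cardinalities $6$, $6$ and $3$, and all three are fixed by the flip. Hence $|\Orb_{odd}(C_6,\Neck(6,4))^f|=1$, the orbit of the necklace of Figure~\ref{fig:twosymmetryaxes}, which agrees with Lemma~\ref{lem:ooddvalue}: $\binom{\frac{6}{4}-\frac12}{\frac44}=\binom{1}{1}=1$. Proposition~\ref{prop:S1} then gives $|\Orb_{even}(C_6,\Neck(6,4))_1^f|=\frac12\big(\binom{3}{2}-1\big)=1$, namely the orbit whose two red beads are adjacent. Finally, $|\Orb_{even}(C_6,\Neck(6,4))|=2$ is even, so Lemma~\ref{lem:deltaGoeven} already gives $\Delta(6,4)=0$, consistent with the entry $\binom{6}{4}=15$ of the triangle in Section~\ref{subsection:expval1}.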
Moreover, since $\frac n2$ is odd, any symmetry axis for $[l]\in \Orb(C_{\frac n2},\Neck(\frac n2,j_1))^f$ is of type 2 and passes through a single bead. It follows that the restriction of $\phi$ is a bijection onto this union, and we have
\begin{equation}\label{eqn:ob2}
\begin{split}
|\SN_2^f| = & \sum_{j_1+j_2=j,j_1< j_2}|\Orb(C_{\frac n2},\Neck(\frac n2,j_1))^f|\cdot |\Orb(C_{\frac n2},\Neck(\frac n2,j_2))^f| \\
& + \frac12|\SNhalf^f|\cdot(|\SNhalf^f|+1).\\
\end{split}
\end{equation}
Next, we have
\begin{lemma}
Let $[l]\in \SN_2^f$. Then
\begin{enumerate}
\item $\pi([l]) = \pi([l_1])$ if $\phi([l]) = ([l_1], [l_1])$,
\item $\pi([l]) = 2\,\mathrm{lcm}(\pi([l_1]),\pi([l_2]))$ if $\phi([l]) = ([l_1],[l_2])$, $[l_1]\neq [l_2]$.
\end{enumerate}
\end{lemma}
\begin{proof}
$l$ has a stabilizer $r^k$ with $k\equiv 1\mod2$ under the $C_n$-action if and only if $\phi([l]) = ([l_1],[l_1])$.
\end{proof}
Figure \ref{fig:examplephi2} provides an example of the second case.
Since $\frac n2$ is odd, we have
\[
|\SoN^f| = |\SoNhalf^f|.
\]
Thus
\begin{equation}\label{eqn:ob3}
|\SeN^f_2| = |\SN_2^f| - |\SoNhalf^f|.
\end{equation}
\begin{proposition}\label{prop:oevenfixvalue}
\begin{align*}
|\SeN^f| = \left\{
\begin{array}{cc}
\binom{\frac n2}{\frac j2} - \binom{\frac{n-2}{4}}{\frac{j-2}{4}}, & j\equiv 2\mod4, \\
\binom{\frac n2}{\frac j2}-\binom{\frac{n-2}{4}}{\frac{j}{4}}, & j\equiv 0\mod4.
\end{array}\right.
\end{align*}
\end{proposition}
\begin{proof}
Since $\frac n2$ is odd, we have $\SoNhalf^f = \SNhalf^f$. Therefore, combining \eqref{eqn:ob2}, \eqref{eqn:ob3}, and Proposition \ref{prop:S1}, and noting Lemma \ref{lem:intersection}, we have
\begin{align*}
|\SeN^f| = & |\SeN_1^f| + |\SeN_2^f| \\
= &\frac{1}{2} \sum_{j_1+j_2=j}|\Orb(C_{\frac n2},\Neck(\frac n2,j_1))^f|\cdot |\Orb(C_{\frac n2},\Neck(\frac n2,j_2))^f|\\
&+\frac 12\binom{\frac n2}{\frac j2} -|\SNhalf^f|,
\end{align*}
where the sum now runs over ordered pairs $(j_1,j_2)$. Since $j$ is even, we have $j_1\equiv j_2\mod 2$. As $\frac{n}{2}$ is odd, we further have either $\nu_2(j_i)>\nu_2(\frac n2)=0$ for $i=1,2$, or $\nu_2(j_i)=\nu_2(\frac n2)=0$ for $i=1,2$. Thus by Lemma \ref{lem:ooddvalue}, we have
\begin{align*}
& \frac{1}{2} \sum_{j_1+j_2=j}|\Orb(C_{\frac n2},\Neck(\frac n2,j_1))^f|\cdot |\Orb(C_{\frac n2},\Neck(\frac n2,j_2))^f|\\
= &\frac12 \sum_{i=0}^{\frac j2-1}\binom{\frac{n-2}{4}}{i}\binom{\frac{n-2}{4}}{\frac j2-1-i}+\frac12\sum_{i=0}^{\frac j2}\binom{\frac{n-2}{4}}{i}\binom{\frac{n-2}{4}}{\frac j2-i}\\
= & \frac12\left(\binom{\frac{n}{2}-1}{\frac j2-1}+\binom{\frac n2-1}{\frac j2}\right) = \frac12\binom{\frac n2}{\frac j2},
\end{align*}
where in the last line we have used the Vandermonde identity and then the Pascal identity. Thus
\begin{align*}
|\SeN^f| = & \frac12\left(\binom{\frac n2}{\frac j2}+\binom{\frac n2}{\frac j2}\right)-|\SNhalf^f|\\
= & \left\{
\begin{array}{cc}
\binom{\frac n2}{\frac j2} - \binom{\frac{n-2}{4}}{\frac{j-2}{4}}, & j\equiv 2\mod4, \\
\binom{\frac n2}{\frac j2}-\binom{\frac{n-2}{4}}{\frac{j}{4}}, & j\equiv 0\mod4.
\end{array}\right.
\end{align*}
\end{proof}
\begin{proposition}\label{prop:casenu2n1nu2jgeq1}
For $\nu_2(n)=1$ and $\nu_2(j)\geq1$, we have $\Delta(n,j) =0$.
\end{proposition}
\begin{proof}
By Corollary \ref{cor:corKummer1} of Kummer's theorem, for $j\equiv 2\mod4$, we have
\[
\nu_2\binom{\frac{n-2}{4}}{\frac{j-2}{4}} = \nu_2\binom{\frac{n}{2}-1}{\frac j2-1}=\nu_2\binom{\frac{n}{2}}{\frac j2},
\]
and for $j\equiv 0\mod4$,
\[
\nu_2\binom{\frac{n-2}{4}}{\frac{j}{4}} = \nu_2\binom{\frac{n}{2}-1}{\frac j2}=\nu_2\binom{\frac{n}{2}}{\frac j2}.
\] By Proposition \ref{prop:oevenfixvalue}, \[ |\SeN^f|\equiv 0\mod2. \] Finally, by Lemma \ref{lem:deltaGoeven}, \[ \Delta(n,j) = (u-1)\cdot|\SeN^f| =0. \] \end{proof} \subsection{Proof of Theorem \ref{thm:closedformula-1} when $\nu_2(n)\geq 2$ and $\nu_2(j)\geq 1$}\label{subsection:case3} Since both $n$ and $j$ are even, for any $[l]\in \SN_2^f$ there are two beads on any type 2 symmetry axis, and they have to be of the same color. Denote $[l']$ as the subset formed by removing the two beads on a chosen type 2 symmetry axis for $l\in [l]$. If the two beads are blue, then \[ [l']\in \Orb(C_{n-2},\Neck(n-2,j-2))_1^f. \] Otherwise the two beads are red and \[ [l']\in \Orb(C_{n-2},\Neck(n-2,j))_1^f. \] \begin{figure} \centering \begin{tikzpicture}[scale=0.8] \begin{scope}[shift={(0, 0)}] \def\radius{2cm} \def\numBeads{8} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i +1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \draw[dashed] (90:\radius*1.5) -- (270:\radius*1.5); \end{scope} \node at (3.5, 0) { $\xrightarrow{\psi}$}; \begin{scope}[shift={(7, 0)}] \def\radius{2cm} \def\numBeads{8} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)} \ifnum\i=3 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=7 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \end{tikzpicture} \caption{By removing the two beads on the symmetry axis, $\psi$ maps an element in $\Orb(C_{8},\Neck(8,2))^f_2$ to $\Orb(C_{6},\Neck(6,0))^f_1$.}\label{fig:examplepsi} \end{figure} Now, if we start with $[l']\in \Orb(C_{n-2},\Neck(n-2,j-2))_1^f$, then we can recover an element in $\SN_2^f$ by picking a symmetry axis for $e\cdot[l]\in[l']$ and adding back two blue beads. We can also do a similar procedure for $[l']\in \Orb(C_{n-2},\Neck(n-2,j))_1^f$ by adding two red beads. However, the associated map \[ \psi:\Orb(C_{n-2},\Neck(n-2,j-2))_1^f\amalg \Orb(C_{n-2},\Neck(n-2,j))_1^f \to \SN^f_2 \] is surjective but not injective\footnote{For an example of how does the map $\psi$ works, see Figure \ref{fig:examplepsi}. }. In other words, if we count $\SN^f_2$ via counting the domain, then there will be over-counting. Figure \ref{fig:examplepsi} and Figure \ref{fig:examplepsi2} provide an example of the over-counting phenomenon. To understand the fibers of $\psi$, i.e., the over-counting, we first notice the following. \begin{lemma}\label{lem:analysisofaxes} Every element in $\Orb(C_{n-2},\Neck(n-2,j-2))_1^f\amalg \Orb(C_{n-2},\Neck(n-2,j))_1^f$ has exactly one type 1 symmetry axis if $\pi([l])$ is even, and two symmetry axes of different type if $\pi([l])$ is odd. \end{lemma} \begin{proof} Take $[l]\in \Orb(C_{n-2},\Neck(n-2,j-2))_1^f\amalg \Orb(C_{n-2},\Neck(n-2,j))_1^f]$. By Lemma \ref{lem:2axes}, if $\frac{n-2}{\pi([l])}$ is odd, then $[l]$ has exactly one symmetry axis, which has to pass between beads as $\pi([l])$ is even. On the other hand, if $\frac{n-2}{\pi([l])}$ is even, since $n-2\equiv 2\mod 4$, $\pi([l])$ is odd, thus the two axes are of a different type, as their distance is a half-integer. 
\end{proof} \begin{figure} \centering \begin{tikzpicture}[scale=0.8] \begin{scope}[shift={(0, 0)}] \def\radius{2cm} \def\numBeads{8} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i +1)} \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \draw[dashed] (0:\radius*1.5) -- (180:\radius*1.5); \end{scope} \node at (3.5, 0) { $\xrightarrow{\psi}$}; \begin{scope}[shift={(7, 0)}] \def\radius{2cm} \def\numBeads{8} \foreach \i in {1,...,\numBeads} { \pgfmathsetmacro{\angle}{360/\numBeads * (\i + 1)} \ifnum\i=3 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=7 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; } \end{scope} \end{tikzpicture} \caption{The element in Figure \ref{fig:examplepsi} has another symmetry axis, which maps to $\Orb(C_{6},\Neck(6,2))^f_1$ under $\psi$.}\label{fig:examplepsi2} \end{figure} The only elements $[l]\in \SN_2^f$ that have $|\psi^{-1}([l])|=2$ are the ones with two different axes both passing through two beads. Viewing on the domain, these are the elements in $\Orb(C_{n-2},\Neck(n-2,j-2))_1^f$ and $\Orb(C_{n-2},\Neck(n-2,j))_1^f$ that have two axes of different types. By Lemma \ref{lem:analysisofaxes}, these are the exactly the elements that have odd period. However, the over-counting is not simply \[ |\Orb_{odd}(C_{n-2},\Neck(n-2,j-2))_1^f|+| \Orb_{odd}(C_{n-2},\Neck(n-2,j))_1^f| \] since there would be an over-counting of the over-counting. This happens in the following situation. Denote the two symmetry axes for $[l]$ in the domain with $n-2$ beads as $[(l,\sigma_1)]$, $[(l,\sigma_2)]$, which is of type 1 and type 2, respectively. After adding the two beads along $[(l,\sigma_1)]$, the two symmetry axes $[(l,\sigma_1)]$, $[(l,\sigma_2)]$ will both become type 2 symmetry axis in $\psi([l])\in \SN^f_2$. Therefore, the over-counting of over-counting will happen if $[(l,\sigma_1)]$, $[(l,\sigma_2)]$ become the same axis in $\psi([l])$. A little thought combining with Lemma \ref{lem:2axes} shows this will happen if and only if $\psi([l])\in \SoN^f$. For example, if one want to construct an element in $\Orb(C_8,\Neck(8,4))^f_2$ from the right-hand side of Figure \ref{fig:examplepsi2}, then one needs to add two blue beads to the vacant spots. This becomes a new symmetry axis. However, as shown in Figure \ref{fig:examplepsi3}, this axis is the same as the dashed one from the left-hand side. 
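As a small check of the resulting count (not needed for the proof), take $n=8$ and $j=2$, the case of Figures~\ref{fig:examplepsi}--\ref{fig:examplepsi3}. The domain of $\psi$ is $\Orb(C_{6},\Neck(6,0))_1^f\amalg \Orb(C_{6},\Neck(6,2))_1^f$, which has $1+2=3$ elements; exactly two of them have odd period (the all-red necklace and the necklace whose two blue beads are opposite), and $\Orb_{odd}(C_8,\Neck(8,2))^f$ is empty. The discussion above therefore gives $|\Orb(C_8,\Neck(8,2))_2^f| = 3-\frac12(2-0) = 2$, in agreement with Proposition~\ref{prop:orb22} below, which predicts $\frac12\binom{4}{1}+\frac12\cdot 0 = 2$.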
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\begin{scope}[shift={(0, 0)}]
\def\radius{2cm}
\def\numBeads{8}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i + 1)}
  \ifnum\i=3 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=7 \node[draw, dashed, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; \fi\fi\fi\fi
}
\draw[dashed] (90:\radius*1.5) -- (270:\radius*1.5);
\end{scope}
\node at (3.5, 0) { $\to$};
\begin{scope}[shift={(7, 0)}]
\def\radius{2cm}
\def\numBeads{8}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i + 1)}
  \ifnum\i=3 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=7 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=5 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \ifnum\i=1 \node[draw, fill=blue, circle, minimum size=10pt] at (\angle:\radius) {}; \else
  \node[draw, fill=red, circle, minimum size=10pt] at (\angle:\radius) {}; \fi\fi\fi\fi
}
\draw[dashed] (90:\radius*1.5) -- (270:\radius*1.5);
\draw[dash dot] (0:\radius*1.5) -- (180:\radius*1.5);
\end{scope}
\end{tikzpicture}
\caption{An example of the over-counting of over-counting.}\label{fig:examplepsi3}
\end{figure}
\begin{proposition}\label{prop:orb22}
For $\nu_2(n)\geq 2$ and $\nu_2(j)\geq 1$, we have
\[
|\SN_2^f| = \frac{1}{2}\binom{\frac n2}{\frac j2}+ \frac12 |\SoN^f|.
\]
\end{proposition}
\begin{proof}
From the earlier argument in this section, we have
\begin{align*}
|\SN_2^f| = & |\Orb(C_{n-2},\Neck(n-2,j-2))_1^f|+| \Orb(C_{n-2},\Neck(n-2,j))_1^f| \\
& - \frac12\left(|\Orb_{odd}(C_{n-2},\Neck(n-2,j-2))_1^f|+| \Orb_{odd}(C_{n-2},\Neck(n-2,j))_1^f|\right.\\
& - \left.|\SoN^f|\right). \\
\end{align*}
By Proposition \ref{prop:S1}, we get
\begin{align*}
|\SN_2^f| = & \frac{1}{2}\left(\binom{\frac n2-1}{\frac j2-1}+\binom{\frac n2-1}{\frac j2}+|\Orb_{odd}(C_{\frac n2-1},\Neck(\frac n2-1,\frac j2-1))_1^f|\right. \\
& + | \Orb_{odd}(C_{\frac n2-1},\Neck(\frac n2-1,\frac j2))_1^f|-|\Orb_{odd}(C_{n-2},\Neck(n-2,j-2))_1^f|\\
&-\left.| \Orb_{odd}(C_{n-2},\Neck(n-2,j))_1^f| + |\SoN^f| \right)\\
= & \frac{1}{2}\binom{\frac n2}{\frac j2}+ \frac12 |\SoN^f|
\end{align*}
where in the second identity, we have used the Pascal identity, and the same argument as in the first sentence of the proof of Lemma \ref{lem:ooddvalue}.
\end{proof}
\begin{proposition}\label{prop:case4}
For $\nu_2(n)\geq 2$ and $\nu_2(j)\geq1$, we have $\Delta(n,j) =0$.
\end{proposition}
\begin{proof}
Combining Lemma \ref{lem:intersection}, Proposition \ref{prop:S1}, and Proposition \ref{prop:orb22}, we have
\begin{align*}
|\SeN^f| = \binom{\frac n2}{\frac j2} - |\SoN^f|.
\end{align*}
Now, applying Lemma \ref{lem:ooddvalue}, we have
\begin{enumerate}
\item If $\nu_2(n)> \nu_2(j)$,
\[
|\SeN^f| = \binom{\frac n2}{\frac j2}.
\]
\item If $\nu_2(n)= \nu_2(j)$,
\[
|\SeN^f| = \binom{\frac n2}{\frac j2}- \binom{\frac{n}{2^{\nu_2(n)+1}}-\frac12}{\frac{j}{2^{\nu_2(n)+1}}-\frac12}.
\]
\item If $\nu_2(n) <\nu_2(j)$,
\[
|\SeN^f| = \binom{\frac n2}{\frac j2}-\binom{\frac{n}{2^{\nu_2(n)+1}}-\frac12}{\frac{j}{2^{\nu_2(n)+1}}}.
\]
\end{enumerate}
In case (1), by Lucas's theorem, we have
\[
|\SeN^f| \equiv 0\mod 2.
\] In case (2), by Corollary \ref{cor:corKummer1} of Kummer's theorem, \[ \nu_2\binom{\frac{n}{2^{\nu_2(n)+1}}-\frac12}{\frac{j}{2^{\nu_2(n)+1}}-\frac12} = \nu_2\binom{\frac{n}{2^{\nu_2(n)}}}{\frac{j}{2^{\nu_2(n)}}} = \nu_2\binom{\frac n2}{\frac j2}. \] Therefore, $ |\SeN^f| \equiv 0\mod 2.$ In case (3), since $\frac{j}{2^{\nu_2(n)}}$ is even, and $\frac{n}{2^{\nu_2(n)}}$ is odd, then by Corollary \ref{cor:corKummer1} and \ref{cor:corKummer2} of Kummer's theorem \[ \nu_2\binom{\frac{n}{2^{\nu_2(n)}}-1}{\frac{j}{2^{\nu_2(n)}}} = \nu_2\binom{\frac{n}{2^{\nu_2(n)}}}{\frac{j}{2^{\nu_2(n)}}} = \nu_2\binom{\frac n2}{\frac j2}. \] Therefore, we have \[ |\SeN^f| \equiv 0\mod 2, \] and by Lemma \ref{lem:deltaGoeven}, \[ \Delta(n,j)=(u-1)\cdot|\SeN^f|=0. \] \end{proof} \subsection{Conclusion of the proof of Theorem \ref{thm:closedformula-1}}\label{subsection:expval1} Combining Proposition \ref{prop:case1}, Proposition \ref{prop:case2}, Proposition \ref{prop:casenu2n1nu2jgeq1}, and Proposition \ref{prop:case4}, then invoking Lemma \ref{lem:deltaGoeven}, we have \[ \binom{L/k}{j} = \binom{n}{j} - (1 -u)\cdot \delta(n,j), \] where $\delta(n,j) = \left\{ \begin{array}{cc} 1, & \frac{j-1}{2}\prec \frac{n-2}{2}, \\ 0, & \mathrm{else} \end{array} \right.$ and $\prec$ is as in Definition~\ref{def:prec} in the beginning of Section~\ref{subsection:nevenjodd}. Finally, we can conclude the proof of Theorem \ref{thm:closedformula-1} by Lucus's theorem. The first few rows of the untwisted Pascal triangle are the following: \[ \begin{array}{ccccccccccccccccccc} & & & & & & & & & & 1 & & & & & & & & \\ & & & & & & & & & 1 & & 1 & & & & & & & \\ & & & & & & & & 1 & & 1+u & & 1 & & & & & & \\ & & & & & & & 1 & & 3 & & 3 & & 1 & & & & & \\ & & & & & & 1 & & 3+u & & 6 & & 3+u & & 1 & & & & \\ & & & & & 1 & & 5 & & 10 & & 10 & & 5 & & 1 & & & \\ & & & & 1 & & 5+u & & 15 & & 20 & & 15 & & 5+u & & 1 & & \\ & & & 1 & & 7 & & 21 & & 35 & & 35 & & 21 & & 7 & & 1 & \\ & & 1 & & 7+u & & 28 & & 55+u & & 70 & & 55+u & & 28 & & 7+u & & 1 \\ \end{array}. \] \section{Twisted case - Proof of Theorem \ref{thm:closedformula-2}} The strategy for the proof of Theorem~\ref{thm:closedformula-2} is to rewrite $\Orb_{even}(C_n,\Neck(n,j)^{\tau})$ in terms of the orbits for the untwisted action that we have studied in the previous sections. \subsection{Reduction to the untwisted case}\label{subsection:reduction} \begin{lemma} The elements of $\Orb(C_n,\Neck(n,j)^{\tau})$ can be interpreted as triples \[\SNt \cong \{([l],[l_1],[l_2]):[l]\in\SN,\phi([l]) = ([l_1],[l_2])\}/\sim,\] where in $([l],[l_1],[l_2])$, $[l_1]$ and $[l_2]$ are ordered. The equivalence relation is defined as $([l],[l_1],[l_2])\sim(e\cdot [l],e\cdot [l_2],e\cdot [l_1])$. \end{lemma} \begin{proof} An element in $\SNt$ is uniquely determined by an element $[l]\in\SN$ together with a chosen bead in $[l]$. On the other hand, two different chosen beads in $[l]$ will give rise to the same $\SNt$ orbits if and only if they both lie in the $[l_1]$ or $[l_2]$, where $\phi([l]) = ([l_1],[l_2])$. Therefore, an element in $\SNt$ can be denoted as $([l],[l_1],[l_2])$, where the second entry $[l_1]$ means the chosen bead lie in $[l_1]$. However, both $([l],[l_1],[l_2])$ and $(e\cdot [l],e\cdot [l_2],e\cdot [l_1])$ give rise to the same element in $\SNt$. By quotienting out this equivalence relation, we arrive at the desired isomorphism. 
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[shift={(0, 0)}]
\def\radius{2cm}
\def\numBeads{12}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)}
  \ifnum\i=1 \def\fillColor{blue} \else
  \ifnum\i=3 \def\fillColor{blue} \else
  \ifnum\i=5 \def\fillColor{blue} \else
  \ifnum\i=6 \def\fillColor{blue} \else
  \ifnum\i=9 \def\fillColor{blue} \else
  \ifnum\i=10 \def\fillColor{blue} \else
  \def\fillColor{red} \fi\fi\fi\fi\fi\fi
  \ifodd\i \node[draw, fill=\fillColor, minimum size=12pt] at (\angle:\radius) {};
  \else \node[draw, fill=\fillColor, circle, minimum size=12pt] at (\angle:\radius) {}; \fi
}
\end{scope}
\begin{scope}[shift={(7, 0)}]
\def\radius{2cm}
\def\numBeads{12}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i -2)}
  \ifnum\i=1 \def\fillColor{red} \else
  \ifnum\i=3 \def\fillColor{red} \else
  \ifnum\i=5 \def\fillColor{red} \else
  \ifnum\i=6 \def\fillColor{red} \else
  \ifnum\i=9 \def\fillColor{red} \else
  \ifnum\i=10 \def\fillColor{red} \else
  \def\fillColor{blue} \fi\fi\fi\fi\fi\fi
  \ifodd\i \node[draw, fill=\fillColor, minimum size=12pt] at (\angle:\radius) {};
  \else \node[draw, fill=\fillColor, circle, minimum size=12pt] at (\angle:\radius) {}; \fi
}
\end{scope}
\node at (3.5, 0) { $\sim$};
\end{tikzpicture}
\caption{An example of an element in $\Orb(C_{12},\Neck(12,6)^\tau)$, drawn in two equivalent ways.}\label{fig:untwistedorb}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[shift={(0, 0)}]
\def\radius{2cm}
\def\numBeads{12}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)}
  \ifnum\i=1 \def\fillColor{blue} \else
  \ifnum\i=3 \def\fillColor{blue} \else
  \ifnum\i=5 \def\fillColor{blue} \else
  \ifnum\i=6 \def\fillColor{blue} \else
  \ifnum\i=9 \def\fillColor{blue} \else
  \ifnum\i=10 \def\fillColor{blue} \else
  \def\fillColor{red} \fi\fi\fi\fi\fi\fi
  \ifodd\i \node[draw, fill=\fillColor, minimum size=12pt] at (\angle:\radius) {};
  \else \node[draw, fill=\fillColor, circle, minimum size=12pt] at (\angle:\radius) {}; \fi
}
\end{scope}
\begin{scope}[shift={(7, 0)}]
\def\radius{2cm}
\def\numBeads{12}
\foreach \i in {1,...,\numBeads} {
  \pgfmathsetmacro{\angle}{360/\numBeads * (\i - 1)}
  \ifnum\i=1 \def\fillColor{blue} \else
  \ifnum\i=3 \def\fillColor{blue} \else
  \ifnum\i=5 \def\fillColor{blue} \else
  \ifnum\i=6 \def\fillColor{blue} \else
  \ifnum\i=9 \def\fillColor{blue} \else
  \ifnum\i=10 \def\fillColor{blue} \else
  \def\fillColor{red} \fi\fi\fi\fi\fi\fi
  \ifodd\i \node[draw, fill=\fillColor, circle, minimum size=12pt] at (\angle:\radius) {};
  \else \node[draw, fill=\fillColor, minimum size=12pt] at (\angle:\radius) {}; \fi
}
\end{scope}
\node at (3.5, 0) { $\xrightarrow{s}$};
\end{tikzpicture}
\caption{An example of swapping.}\label{fig:swapping}
\end{figure}
One way to visualize the triple is shown in Figure \ref{fig:untwistedorb}. Denote this element by $([l],[l_1],[l_2])\in\Orb(C_{12},\Neck(12,6)^\tau)$. The circle-shaped beads form $[l_1]\in \Orb(C_6,\Neck(6,2))$, whereas the square-shaped beads form $[l_2]\in \Orb(C_6,\Neck(6,4))$.
From now on, we will use the triple $([l],[l_1],[l_2])$ to denote elements in $\SNt$. Define the twisted period as $\pi'([l],[l_1],[l_2]) := |([l],[l_1],[l_2])|$, i.e., the size of the twisted $C_{2j}$-orbit.
Next, we have a $C_2$ action on $\SNt$, which acts by exchanging $[l_1]$ and $[l_2]$. Combinatorially, this action interchanges the two potentially different twisted orbits that could come from the same underlying untwisted orbit $[l]$. We will call this action swapping, and an example is shown in Figure \ref{fig:swapping}.
\begin{lemma}\label{lem:deltaprimevalue}
$\Delta'(2j,j) = (u-1)|\SNt^s|$, where $\SNt^s$ is the subset of $\SeNt$ that is fixed by swapping.
\end{lemma}
\begin{proof}
Since the swapping action does not change the twisted period, by \eqref{eqn:defdeltaprime} and $2(u-1)=0$, we obtain the desired formula.
\end{proof} \begin{lemma}\label{lem:twistedfixed} In $\Orb(C_n,\Neck(n,j)^{\tau})$, $([l],[l_1],[l_2])= ([l],[l_2],[l_1])$ if and only if either $[l_1]=[l_2]$ or $[l] = e\cdot [l]$. \end{lemma} \begin{proof} Definition chasing. \end{proof} \begin{corollary}\label{cor:deltatwistedodd} $\Delta'(2j,j) =0$, for $\nu_2(j)=0$. \end{corollary} \begin{proof} Since $j$ is odd, $[l_1]\neq [l_2]$ or $e\cdot[l_2]$ for all elements $([l],[l_1],[l_2])\in \SNt$. Thus the $C_2$-action of swapping acts freely, so $|\SNt^s|\equiv0\mod2$. \end{proof} From now on, we will assume $j\equiv0\mod2$. Applying the above results to $$|\SNt^s|,$$ we have \begin{align*} |\SNt^s| = & |([l],[l_1],[l_1])\in \SeNt, [l]\neq e\cdot[l] | \\ & + |([l],[l_1],[l_2])\in \SeNt,[l]=e\cdot[l] |. \\ \end{align*} For $([l],[l_1],[l_1])\in \SeNt$, $[l]\neq e\cdot [l]\Leftrightarrow [l_1]\neq e\cdot [l_1]$, and notice that if $[l_1] \neq e\cdot[l_1]$, then $2|\pi'([l],[l_1],[l_1])$. Therefore, \begin{align*} |\SNt^s| = & \sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{|\phi^{-1}([l],[l])|}{2} \\ &+ |([l],[l_1],[l_2])\in \SeNt,[l]=e\cdot[l]| \end{align*} To further simplify the second term, we notice the following facts. \begin{lemma} $\pi([l]) \equiv 0\mod 2$, $[l]\in \SNt$. \end{lemma} \begin{proof} $\nu_2(2j)>\nu_2(j)$. \end{proof} \begin{lemma} If $[l] = e\cdot [l]$, then for all $l\in \Neck(2j,j)$, we have \[ e\cdot l = r^{\frac{\pi([l])}{2}}\cdot l. \] \end{lemma} \begin{proof} Apparently $e= r^{m}$ for some $m\in\bZ/\pi([l])\bZ$. Since $e$ changes $l$, we can take $0<m<\pi([l])$. On the other hand, since $e^2 = 1$, one has $\pi([l])|2m$, thus $m = \frac{\pi([l])}{2}$. \end{proof} \begin{proposition} $\pi'([l],[l_1],[l_2])=\pi([l])$ unless $l = e\cdot[l]$, and $\nu_2(\pi([l])) =1$, in which case $\pi'([l],[l_1],[l_2])=\frac{\pi([l])}{2}\equiv 1\mod 2$. \end{proposition} \begin{proof} Since $\pi([l])$ is even, $r^{\pi([l])} = e^{\pi([l])}$, and we always have $\pi'([l],[l_1],[l_2])\leq\pi([l])$. By the definition of the twisted action, if $\pi'([l],[l_1],[l_2])$ is even, we have $\pi'([l],[l_1],[l_2])=\pi([l])$. Therefore, the only possibility for $\pi'([l],[l_1],[l_2])<\pi([l])$ is when $\pi'([l],[l_1],[l_2])$ is odd. In this case, we have $2\pi'([l],[l_1],[l_2]) = \pi([l])$, and thus $\nu_2([l])=1$. \end{proof} Therefore \begin{align*} |\SNt^s| = &\sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{|\phi^{-1}([l],[l])|}{2} \\ & + |([l],[l_1],[l_2])\in \SNt,[l]=e\cdot[l],\nu_2(\pi([l]))>1|\\ = & \sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{|\phi^{-1}([l],[l])|}{2} \\ & + |l\in \SNjj,[l]=e\cdot[l],\nu_2(\pi([l]))>1| \end{align*} where the last formula is an enumeration purely in terms of untwisted orbits. \subsection{Relations to partitions}\label{subsection:relatetopart} Consider an element $[l]\in\SN$. One can uniquely write $[l]$ as the cyclic orbit of a partition of $n$ as follows. Call a maximal consecutive segment of beads of $l$ of the same color a ``cluster." By replacing each cluster of $l$ by its number of beads, one obtains the corresponding partition on $n$. For example, fix $l\in [l]$, if the beads in positions 1 and 5 are red but everything in between is blue, we will replace the 3 blue beads with the number 3. If the cluster is red, we will mark it by adding an underline to distinguish it from being blue. We will use the notation $(\cdots)$ to denote the cyclic equivalent class of partitions. 
So $[l]$ can be uniquely written as the cyclic orbit of a partition of $n$ as \[ [l] = (\underline{r_1}b_1\underline{r_2}b_2\cdots \underline{r_m}b_m),\quad r_i,b_i\in\bZ_{>0}, \] with $\sum_{i}(r_i+b_i)=n,\sum_{i}b_i = j.$ Under this notation, the action of $e$ is simply \[ e\cdot (\underline{r_1}b_1\underline{r_2}b_2\cdots \underline{r_m}b_m) = (r_1\underline{b_1}r_2\underline{b_2}\cdots r_m \underline{b_m}). \] Apparently, the length of the partition has to be even. For example, the three elements in Figure \ref{fig:orb104} can be written as $(\underline{6} 4),(\underline{2}1\underline{1}1\underline{2}1\underline{1}1)$, $(\underline{1}1\underline{4}1\underline{1}2)$, respectively. Denote the set of even length partitions of $n$ with an underlined marking for every other entry such that the sum of unmarked entries is $j$ as $\mathrm{Part}(n,j,\underline{n-j})$. We further denote the set of its cyclic equivalent classes as $\Orb(C,\mathrm{Part}(n,j,\underline{n-j}))$. For an element $[p]\in \Orb(C,\mathrm{Part}(n,j,\underline{n-j}))$, we denote its length as $|p| := \mathrm{length}(p)$, and define its period as $\varpi([p]) = |C_{|p|}\cdot p|$\footnote{Note there is no direct relation between $\varpi$ and $\pi'$.}. Denote the set of partition of $j$ as $\mathrm{Part}(j)$, and similarly denote the set of its cyclic equivalent class and orbit as in the marked case, then we have \begin{lemma}\label{lem:neckorbtopart} \[ \{[l]\in \SNjj,l=e\cdot[l]\} \cong \{[p]\in\Orb(C,\mathrm{Part}(j)),\nu_2(\varpi([p]))=0\} \] \end{lemma} \begin{proof} Let $[l]\in \SNjj$, and $[p]= (\underline{r_1}b_1\underline{r_2}b_2\cdots \underline{r_m}b_m)$ be its corresponding element in $\Orb(C,\mathrm{Part}(2j,j,\underline{j}))$. If $[l]=e\cdot[l]$, then \[ (\underline{r_1}b_1\underline{r_2}b_2\cdots \underline{r_m}b_m) = (r_1\underline{b_1}r_2\underline{b_2}\cdots r_m \underline{b_m}), \] and apparently we need $(r_1\cdots r_m) = (b_1\cdots b_m)$. Moreover, by passing to the smaller partition that is formed by taking $\varpi([p])$ consecutive parts, without losing of generality we can assume the partition is non-periodic, i.e., $\varpi([p]) = 2m$. Then the only element in $\Orb(C,\mathrm{Part}(2j,j,\underline{j}))$ of the form $(\underline{r_1},r_{i_1},\underline{r_2},r_{i_2},\cdots,\underline{r_{m}},r_{i_m})$ that satisfy $[p] = e\cdot [p]$ is \[ (\underline{r_1}r_{\frac{m+3}{2}}\underline{r_2}r_{\frac{m+5}{2}}\cdots r_{m}\underline{r_{\frac{m+1}2}}r_1\underline{r_{\frac{m+3}{2}}}\cdots\underline{r_{m}}r_{\frac{m+1}{2}}), \] and $m$ has to be odd. \end{proof} Recall the total number of partitions of $j\in\bZ_{>0}$ is $2^{j-1}$. Then \begin{equation}\label{eqn:decomppart} 2^{j-1} = \sum_{d}d |[p]\in\Orb(C,\mathrm{Part}(j)),\varpi([p])=d|, \end{equation} \begin{lemma} For $j>1$, we have $|l\in \SNjj,[l]=e\cdot[l],\nu_2(\pi([l]))>1| \equiv |l\in \SNjj,[l]=e\cdot[l],\nu_2(\pi([l]))=1|\mod2$. \end{lemma} \begin{proof} By \eqref{eqn:decomppart}, we have \begin{align*} |[p]\in\Orb(C,\mathrm{Part}(j)),\nu_2(\varpi([p]))=0| = &\sum_{\nu_2(d)=0}d|[p]\in\Orb(C,\mathrm{Part}(j)),\varpi([p])=d|\\ \equiv & \sum_{d}d|[p]\in\Orb(C,\mathrm{Part}(j)),\varpi([p])=d|\mod2\\ = & 2^{j-1}\equiv 0\mod2. \end{align*} Then by Lemma \ref{lem:neckorbtopart}, we have \[ |l\in \SNjj,[l]=e\cdot[l]|\equiv 0\mod2. 
\]
\end{proof}
Therefore, we have
\begin{equation}\label{eqn:secondtothelast}
\begin{split}
|\SNt^s| \equiv & \sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{|\phi^{-1}([l],[l])|}{2} \\
& + |l\in \SNjj,[l]=e\cdot[l],\nu_2(\pi([l]))=1|\mod2.
\end{split}
\end{equation}
\subsection{Conclusion of the proof of Theorem \ref{thm:closedformula-2}}\label{subsection:conc2}
First, we have:
\begin{lemma}
For $[l]\in \Orb(C_{j},\Neck(j,\frac j2))$, we have $|\phi^{-1}([l],[l])| = \frac{\pi([l])}{2}$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{lem:fibperiod1}.
\end{proof}
Then
\begin{align*}
\sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{|\phi^{-1}([l],[l])|}{2} = &\sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]\neq e\cdot[l]}\frac{\pi([l])}{2}\\
= & \sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2))}\frac{\pi([l])}{2} - \sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]= e\cdot[l]}\frac{\pi([l])}{2}.
\end{align*}
Since $\pi([l])\equiv0\mod2$, we have
\[
\sum_{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]= e\cdot[l]}\frac{\pi([l])}{2}\equiv |[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]= e\cdot[l],\nu_2(\pi([l]))=1|\mod 2.
\]
As
\[
\{[l]\in \Orb(C_{j},\Neck(j,\frac j2)),[l]= e\cdot[l],\nu_2(\pi([l]))=1\}
=\{[l]\in \Orb(C_{2j},\Neck(2j, j)),[l]= e\cdot[l],\nu_2(\pi([l]))=1\},
\]
we can conclude the following.
\begin{proposition}\label{prop:deltatwistedjeven}
$\Delta'(2j,j) = \frac12\binom{2j}{j}\cdot (u-1) $, for $j\equiv 0\mod2$.
\end{proposition}
\begin{proof}
By \eqref{eqn:secondtothelast} and the calculations in this subsection, we have
\[
|\SNt^s| \equiv \frac12\binom{j}{\frac j2}\mod 2.
\]
Then by Lemma \ref{lem:deltaprimevalue} and Corollary \ref{cor:corKummer1} of Kummer's theorem, we have
\[
\Delta'(2j,j) = \frac12\binom{2j}{j}\cdot (u-1).
\]
\end{proof}
The proof of Theorem \ref{thm:closedformula-2} is concluded by combining Proposition \ref{prop:deltatwistedjeven} and Corollary \ref{cor:deltatwistedodd}.
\subsection{Explicit value of twisted binomial coefficients}\label{subsection:expval}
By Corollary \ref{cor:corKummer3} of Kummer's theorem, we have
\[
{L[Q]/k \choose j} = {2j \choose j}+(u-1)\delta(j),
\]
where $\delta(j)=\left\{\begin{array}{cc}
1 & j = 2^m,m\in\bZ_{>0} \\
0 & \textrm{else}
\end{array}\right.$.
The first few terms of the sequence ${L[Q]/k \choose j}$ are
\[2,5+u,20,69+u,252,924,3432,12869+u,\dots.\]
\input{Bibli.bbl}
\end{document}
2412.14266v1
http://arxiv.org/abs/2412.14266v1
Stability analysis of geodesics in dynamical Chern-Simons black holes: a geometrical perspective
\documentclass[pdflatex,sn-mathphys-num]{sn-jnl} \usepackage{graphicx}\usepackage{multirow}\usepackage{amsmath,amssymb,amsfonts}\usepackage{amsthm}\usepackage{mathrsfs}\usepackage[title]{appendix}\usepackage{xcolor}\usepackage{textcomp}\usepackage{manyfoot}\usepackage{booktabs}\usepackage{algorithm}\usepackage{algorithmicx}\usepackage{algpseudocode}\usepackage{listings}\usepackage{lmodern} \usepackage{fix-cm} \usepackage[T1]{fontenc} \usepackage{anyfontsize} \newtheoremstyle{thmstyleone} {3pt} {3pt} {\itshape} {} {\bfseries} {.} { } {\thmname{#1}\thmnumber{ #2}\thmnote{ (#3)}} \theoremstyle{thmstyleone} \newtheorem{theorem}{Theorem}[section] \raggedbottom \begin{document} \title[Stability analysis of geodesics in dynamical Chern-Simons black holes: geometrical perspective]{Stability analysis of geodesics in dynamical Chern-Simons black holes: a geometrical perspective} \author[1]{\fnm{Tonatiuh} \sur{Tiscareño}} \email{[email protected]} \author[1]{\fnm{Benito } \sur{Rodríguez}} \email{[email protected]} \author[1]{\fnm{Javier} \sur{Chagoya}} \email{[email protected]} \affil*[1]{\orgdiv{Unidad Académica de Física}, \orgname{Universidad autónoma de Zacatecas}, \orgaddress{\street{Calzada Solidaridad esquina con Paseo a la Bufa S/N}, \postcode{98060}, \state{Zacatecas}, \country{México}}} \abstract{We apply the Kosambi-Cartan-Chern theory to perform an extensive examination of Jacobi stability of geodesics around rotating black hole solutions to dynamical Chern-Simons gravity, a theory that introduces modifications to General Relativity via a scalar field non-minimally coupled to curvature scalars. We present a comparative study between Jacobi and Liapunov stability, pointing out the advantages of the more geometrical method over the usual Liapunov approach.} \keywords{Dynamical systems, Geodesics, Lyapunov stability, Jacobi Stability} \maketitle \section{Introduction}\label{sec1} This study focuses on timelike geodesics around black holes in the context of modifications to General Relativity (GR) that are introduced through a scalar field that is non-minimally coupled to curvature scalars~\cite{Yunes_2011}. Specifically, we work with dynamical Chern-Simons (dCS) gravity \cite{Jackiw_2003}, a notable variant of these theories, which has been extensively explored due to its deviations from GR in rotating black hole solutions. Although spherically symmetric spacetimes remain unchanged in dCS gravity, axially symmetric solutions deviate from those in GR. The groundbreaking work by Yunes and Pretorius \cite{Yunes_2009} presented the first rotating black hole solution under the slow rotation and small coupling approximations in dCS gravity. This solution introduces parametric modifications to the Kerr metric, resulting in corrections to the positions of photon orbits due to the presence of a scalar field or ``scalar hair'' \cite{Grumiller_2008,Alexander2009}. To address the complexity of the geodesic equations in dCS spacetime, we begin by studying linear stability (Lyapunov) and phase portraits to investigate the behavior of the dynamical system. However; due to the nature of the system, nonlinear stability methods are essential for a more comprehensive understanding of its global behavior, therefore we complement the linear Lyapunov analysis with the non-linear Kosambi-Cartan-Chern (KCC)~\footnote{Sometimes also known as Jacobi stability.} theory for a thorough analysis. 
The KCC theory, developed by Kosambi, Cartan, and Chern~\cite{Chern,Kosambi1933,Cartan1933}, is a robust framework for studying the non-linear stability of dynamical systems of higher dimension and even non-autonomous systems; it has multiple applications in gravitation and cosmology~\cite{Cardoso_2009,boemer2010,aceña2019circulargeodesicsstabilitystatic}. Furthermore, in~\cite{abolghasem} a comparative study between Jacobi stability, a core aspect of KCC theory, and Lyapunov stability is presented, identifying cases where the two approaches yielded differing results.
This paper is structured as follows. Section 2 provides a foundational introduction to dynamical systems theory and Kosambi-Cartan-Chern theory, and discusses the relation between these mathematical tools. Section 3 provides an overview of Chern-Simons modified gravity and the slow rotation approximation to black holes within this framework. In Section 4, we present a dynamical system analysis of the conditions under which circular orbits remain stable in this modified gravity scenario, deepening our understanding of the dynamics of black holes within dCS. Section 5 is devoted to a discussion of our main results, concluding that for the system under consideration, the Lyapunov and KCC stability criteria are in complete agreement.
\section{Mathematical foundations}\label{sec2}
The purpose of this section is to provide the fundamental mathematical framework necessary to analyze the stability of the problem in question, thereby enabling us to conduct a deep analysis of the underlying physics. This section is divided into three parts: the first focuses on the linear theory of dynamical systems, the second on the non-linear Kosambi-Cartan-Chern theory, and in the third we comment on the differences between the two approaches.
\subsection{Elements of Lyapunov stability}
In this subsection, only the key results that are directly applicable to our analysis are presented. For a more comprehensive understanding of the topic, additional references are suggested for further reading \cite{Perko,Verhulst,KuznetzovEOABT,Kl1,Kl2}.
We are interested in differential equations expressed in the form
\begin{equation}
    \dot{x}=f(x)\label{system},
\end{equation}where $x\in E$, $E\subseteq\mathbb{R}^n$ is an open set, and $f: E\to \mathbb{R}^n$. Critical points $x_c$ of equation \eqref{system} are those that satisfy $f(x_c)=0$.
A second order differential equation of the form
\begin{equation}
    \ddot{y}+a\dot{y}+by=0,\label{eq2orden}
\end{equation}
where $a, b$ are constants, can be written in vector form using the transformation $x_1=y$ and $x_2=\dot{x}_1=\dot{y}$. Then the differential equation \eqref{eq2orden} is equivalent to the system of equations
\begin{equation}
    \left.\begin{array}{lcl}
\dot{x}_1=x_2 , \\\dot{x}_2=-bx_1-ax_2
\end{array}\right\}\label{ODEsystem}
.\end{equation}
Equation \eqref{ODEsystem} is in the form of eq. \eqref{system} with $x=(x_1,x_2)^T$, i.e., we are in $\mathbb{R}^2$. If $x(t)=(x_1(t),x_2(t))$ is a solution of \eqref{ODEsystem} then $y=x_1(t)$ is a solution of \eqref{eq2orden}. The critical points correspond to equilibrium solutions of the system, that is, $x(t)=x_c$ satisfies the equation for all time. The study of these points and of the solutions in their neighborhood is of particular interest. Through linearization around a critical point, equation~\eqref{system} can be approximated by the linear equation
\begin{equation}
    \dot{x}=Ax \label{linealeq}.
\end{equation} The next theorem establishes the existence and uniqueness of solutions for this kind of system (see, e.g.~\cite{Perko}). \begin{theorem}[Fundamental theorem of linear systems]\label{tthm1} Consider a linear operator $A$ on $\mathbb{R}^n$; then the solution of the problem with initial conditions \begin{equation}\dot{x}=Ax;\hspace{0.5cm}x(0)=x_0\in\mathbb{R}^{n},\end{equation}\textit{is:} \begin{equation} x(t) = e^{tA}x_0, \end{equation} and there are no other solutions. \end{theorem} The algebraic method of diagonalization can be used to reduce the coupled system \eqref{linealeq} to an uncoupled linear system. For systems in $\mathbb{R}^2$, it can be shown that under a suitable linear transformation of coordinates, the system \eqref{linealeq} is equivalent to $\dot{x}=Bx$, where $B$ has one of the following forms, \begin{equation} B=\begin{pmatrix}\omega & 0 \\ 0 & \mu \\ \end{pmatrix} , \hspace{0.5cm} B=\begin{pmatrix}\omega & 1 \\ 0 & \omega \\ \end{pmatrix}, \hspace{0.5cm} B=\begin{pmatrix}a& -b \\ b & a \\ \end{pmatrix} . \end{equation} Following the fundamental theorem for linear systems, it can be shown that solutions of the initial value problem $\dot{x}=Bx$ with $x(0)=x_0$ are given respectively by \begin{equation} x(t)=\begin{pmatrix}e^{\omega t} & 0 \\ 0 & e^{\mu t} \\ \end{pmatrix}x_0 , \hspace{0.25cm} x(t)=e^{\omega t}\begin{pmatrix}1 & t \\ 0 & 1 \\ \end{pmatrix}x_0, \hspace{0.25cm} x(t)=e^{at}\begin{pmatrix}\cos( bt) & -\sin (bt) \\ \sin (bt) & \cos (bt) \\ \end{pmatrix} x_0. \end{equation} As a consequence of this, from the eigenvalues of $A$ we can obtain important qualitative information about the behavior of solution curves near critical points in the phase space of Eq.~\eqref{linealeq}. In the two dimensional case, the eigenvalues of $A$ can be written in terms of $\det A=\delta$ and $\rm{Tr\,} A$ as \begin{equation}\lambda_{12}=\frac{1}{2}(\rm{Tr\,} A\pm\sqrt{(\rm{Tr\,} A)^2-4\delta}).\label{ec eigenvlores}\end{equation} The phase portrait of the system \eqref{linealeq} is obtained from the phase portrait of $\dot{x}=Bx$ under a linear transformation of coordinates; in other words, the phase diagram of \eqref{linealeq} is topologically equivalent to those in Fig.~\ref{figure1}, and these results are summarized in Table~\ref{table1}. It should be noticed that this classification excludes cases where one or both eigenvalues are equal to $0$; such critical points are called \textit{nonhyperbolic}, and require further analysis by determining the flow of the center manifold or using the center manifold theorem for higher-dimensional systems \footnote{The center manifold theorem is a fundamental result in the study of the stability of non-hyperbolic critical points, providing the necessary conditions to determine the behavior of the center manifold. See p. 196 of \cite{Verhulst}.}. For systems in $\mathbb{R}^3$, critical points have their three-dimensional counterparts of nodes, foci, centers, and so on. In the case of $\mathbb{R}^n$ with $n>3$, this classification is harder to visualize and the stability of systems is characterized in terms of attractive, repulsive, or periodic critical points and the existence of (un)stable manifolds~\cite{Perko,H/S}. { The Hartman-Grobman theorem is a fundamental result in the local qualitative theory of ordinary differential equations. It states that near a hyperbolic equilibrium point, the non-linear system \eqref{system} has the same qualitative behavior as the linearized system \eqref{linealeq} with $Df(x_0)=A$ (cf.
\cite{Perko}).} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{figs/clasificacion.pdf} \caption{Classification of phase portraits for critical points in two dimensional systems, depicted in terms of $\det A$, $\mathrm{Tr}\, A$ and $\Delta=(\mathrm{Tr}\, A)^2-4\delta$. Based on Figure 6, Section 1.5 of \cite{Perko}. } \label{figure1} \end{figure} \begin{table}[htbp] \caption{Classification of critical points for two dimensional systems. Based on main results of \cite{Perko,Verhulst}.}\label{table1}\begin{tabular}{@{}lll@{}} \toprule Eigenvalues&Classification of critical point&Stability\\ \hline $\lambda_1,\lambda_2<0$ & Sink& Stable \\ $\lambda_1,\lambda_2>0$&Source & Unstable\\ $\lambda_1<0<\lambda_2$&Saddle&Unstable\\ $\lambda_{1,2}=a\pm ib, a<0$&Spiral sink&Stable\\ $\lambda_{1,2}=a\pm ib, a>0$&Spiral source&Unstable\\ $\lambda_{1,2}=\pm ib$&Center&Stable\\ \botrule \end{tabular} \end{table} \subsection{KCC Stability theory: Geometrization of arbitrary dynamical systems} KCC theory is a mathematical framework for studying the stability of solutions of second-order differential equations. The central idea of KCC theory is that the evolution and behavior of the trajectories of the system of strongly nonlinear equations \eqref{ec.KCC 2 orden} can be analyzed using tools of the differential geometry of Finsler spaces~\cite{antonelli2014handbook, bao2012introduction, Harko_2016}; this is made possible by the analogy between paths of the Euler-Lagrange equations and geodesics in a Finsler geometry. Finsler geometry is a generalization of Riemannian geometry, where the distance function, called the Finsler metric $ds=F(x, y)$, depends on both the position $x$ and the direction $y$ at each point. Unlike Riemannian geometry, which is isotropic (direction-independent) with a quadratic metric $ds=\sqrt{g_{ij}dx^{i}dx^{j}}$~\footnote{Riemannian geometry is a special case of Finsler geometry where the Finsler metric \( F(x,y) \) reduces to $F(x,y) = \sqrt{g_{ij}(x) y^i y^j}$. Here, the metric depends only on position $x$ and is quadratic in $y$, resulting in isotropic behavior.}, Finsler geometry facilitates the examination of anisotropic spaces, where distances can vary with direction; this flexibility makes Finsler geometry particularly useful for studying spaces with directional properties, such as those found in certain physical theories with preferred directions or complex movement costs. KCC theory~\cite{Antonelli2001,antonelli2014handbook,Antonellicelldivision2002} (we adopt Antonelli's notation) offers an alternative way to analyze the stability of systems with a more geometrical perspective, in which stability is described in terms of five invariants. Of these invariants, the second is particularly significant as it determines the stability of the system. These results are founded on the equivalence between the Euler-Lagrange equations and a system of second-order ordinary, strongly nonlinear differential equations as described by: \begin{equation} \frac{d^{2}x^{i}}{dt^{2}}+2G^{i}(x^{j}, y^{j}, t)=0, \hspace{0.3cm} i=1,2,...,n,\label{ec.KCC 2 orden} \end{equation} where $G^i$ are the geodesic coefficients with $x^i=(x^1,x^2,...,x^n)$ and $y^i=(y^1,y^2,...,y^n)$ a set of dynamical variables that are defined in a real smooth $n$-dimensional manifold $\mathcal{M}$ and $t$ is the usual time coordinate.
It is assumed that each function $G^i(x^j,y^j,t)$ is smooth in the neighborhood of some initial conditions $(x_0,y_0,t_0)$ defined in $\mathcal{TM}$~\footnote{The tangent bundle of $\mathcal{M}$ is denoted by $\mathcal{TM}$. Usually $\mathcal{M}$ is considered as $\mathbb{R}^n$, $\mathcal{M}=\mathbb{R}^n$, and therefore $\mathcal{TM}=\mathcal{T}\mathbb{R}^n=\mathbb{R}^{n}\times\mathbb{R}^{n}\cong\mathbb{R}^{2n}$. }. The geodesic coefficients $G^i$ are analogous to the geodesic spray coefficients in Finsler geometry~\footnote{A spray is a special vector field on the tangent bundle $\mathcal{TM}$ that encodes the geodesics of the Finsler manifold.}. This geometrical approach allows for a deeper understanding of the stability properties of dynamical systems, particularly in contexts where anisotropic or directional dependencies play a critical role. The next equations represent the central mathematical result of the KCC theory. The KCC-covariant derivative of an arbitrary vector field $\chi$ is defined in terms of the second KCC invariant $P^i_j$ as \begin{equation} \frac{D^{2}\chi^{i}}{dt^{2}}=P^{i}_j \chi^{j},\label{kcc-jacobi covariant} \end{equation} where \begin{equation} P^{i}_j = -2\frac{\partial G^{i}}{\partial x^{j}} - 2G^{l}G^{i}_{jl} + y^{l}\frac{\partial N^{i}_j}{\partial x^{l}} + N^{i}_lN^{l}_j + \frac{\partial N^{i}_j}{\partial t},\label{kcc. curvature tensor}\end{equation} the non-linear connection $N^i_j$ is defined as \begin{equation} N^{i}_{j}=\frac{\partial G^{i}}{\partial y^{j}}, \end{equation} and the Berwald connection is \begin{equation} G^{i}_{jl}\equiv \frac{\partial N^{i}_j}{\partial y^{l}}. \label{kcc.berwald connection}\end{equation} The second KCC invariant $P^i_j$ is also called the deviation curvature tensor~\footnote{The deviation tensor measures the infinitesimal separation between nearby geodesics and is influenced by the directional curvature properties of the Finsler space. }. It is the fundamental quantity in the KCC theory and in the Jacobi stability method. If we assume that the system \eqref{ec.KCC 2 orden} corresponds to the geodesic motion of a physical system, then Eq.~\eqref{kcc-jacobi covariant} gives the Jacobi field equation. The trace $P=P^i_i$ of the curvature deviation tensor is a scalar invariant and can be calculated from \begin{equation} P=P^{i}_i=-2\frac{\partial G^{i}}{\partial x^{i}}-2G^{l}G^{i}_{il}+y^{l}\frac{\partial N^{i}_i}{\partial x^{l}}+N^{i}_lN^{l}_i + \frac{\partial N^{i}_i}{\partial t}. \label{kcc.traza tensor curvatura} \end{equation} The analysis of the deviation tensor is essential for understanding the Jacobi stability or instability of geodesics, as it reveals how perturbations to geodesics behave over time in a direction-sensitive manner. Other important invariants can also be constructed and are introduced according to the definitions \begin{equation} P^{i}_{jk}\equiv \frac{1}{3} \left(\frac{\partial P^{i}_j}{\partial y^{k}} - \frac{\partial P^{i}_k}{\partial y^{j}}\right), \hspace{0.3cm} P^{i}_{jkl}\equiv \frac{\partial P^{i}_{jk}}{\partial y^{l}}, \hspace{0.3cm} D^{i}_{jkl}\equiv \frac{\partial G^{i}_{jk}}{\partial y^{l}}. \end{equation} From a geometrical perspective, $P^i_{jk}$ can be described as a torsion tensor, $P^i_{jkl}$ represents the equivalent of the Riemann-Christoffel curvature tensor and $D^i_{jkl}$ is the Douglas tensor.
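To make the preceding definitions concrete, the following short symbolic sketch (our own illustration, written with the \texttt{sympy} Python library; it is not part of the original analysis) applies them to the one-dimensional linear oscillator of Eq.~\eqref{eq2orden}, written in the form \eqref{ec.KCC 2 orden} with $x$ the position, $y=\dot{x}$, and $2G^{1}=ay+bx$. The computation returns $P^{1}_{1}=a^{2}/4-b$, whose sign governs the Jacobi stability criterion discussed below.
\begin{lstlisting}[language=Python]
# Minimal sympy sketch (illustrative): KCC quantities for y'' + a y' + b y = 0,
# rewritten as x'' + 2 G(x, y) = 0 with y = x' and 2 G = a*y + b*x.
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')

G = sp.Rational(1, 2)*(a*y + b*x)            # geodesic coefficient G^1
N = sp.diff(G, y)                            # nonlinear connection N^1_1 = a/2
Berwald = sp.diff(N, y)                      # Berwald connection G^1_11 = 0
P = (-2*sp.diff(G, x) - 2*G*Berwald          # deviation curvature tensor P^1_1
     + y*sp.diff(N, x) + N*N + sp.diff(N, t))

print(sp.simplify(P))                        # prints a**2/4 - b
\end{lstlisting}
For this example, $P^{1}_{1}<0$ (Jacobi stability) holds precisely in the underdamped regime $a^{2}<4b$.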
The stability of trajectories in the vicinity of stationary solutions of a system of differential equations \eqref{ec.KCC 2 orden}, assuming that these trajectories represent smooth curves in the Euclidean space $\mathbb{R}^n$, can be determined by the following condition: \begin{align} P^i_{j} & < 0 \quad \text{(stable)},\\ P_{j}^{i} & > 0 \quad \text{(unstable)}. \end{align} More precisely, the trajectories are designated as Jacobi stable if and only if the real parts of the eigenvalues of the deviation curvature tensor $P^i_j$ are strictly negative. On the other hand, if the real parts of the eigenvalues of the deviation curvature tensor $P^i_j$ are strictly positive, the trajectories of the dynamical system are designated as Jacobi unstable. This definition is an alternative to the standard linear analysis to investigate the stability of second-order differential equations. \subsection{Comments about stability} In this section, we briefly compare linear (Lyapunov) stability with KCC (Jacobi) stability, with the goal of establishing a more robust framework that fortifies the theoretical foundations of dynamical systems. For a more detailed discussion, see~\cite{boemer2010, abolghasem, Paul, Strogratz}. As mentioned above, various types of stability can be considered when talking about dynamical systems. Lyapunov theory covers stability near critical points $x_c$ of \eqref{system} and is one of the most important ones. If all trajectories that begin near $x_c$ approach this point as $t\to\infty$, i.e., $\lim_{t\to \infty}x(t)=x_c$ for all $\|x(0)-x_c\|<\delta$, then $x_c$ is called an \textit{attractor}: every trajectory beginning within a distance $\delta$ from $x_c$ eventually converges to it. A different notion of stability takes into account the behavior of trajectories for all $t$, not only when $t\to\infty$. The point $x_c$ is \textit{Lyapunov stable} if all trajectories that begin near $x_c$ remain near $x_c$ at all times. There can be critical points that are Lyapunov stable but not attractors. This situation occurs frequently and such a point $x_c$ is called \textit{neutrally stable}. Trajectories near $x_c$ are neither attracted to nor repelled from a neutrally stable point. Finally, a point $x_c$ is \textit{asymptotically stable} if it is an \textit{attractor} and \textit{Lyapunov stable}. In contrast, Jacobi stability, as studied in KCC theory, takes a global and geometric approach, examining trajectories as geodesics in a Finsler space, evaluating how small perturbations influence the system's entire path via the deviation tensor. Jacobi stability examines the robustness of the system to parameter changes and considers non-linear effects across a broader range of the system's behavior. Lyapunov stability analysis and KCC theory both provide valuable insights into the stability of dynamical systems under perturbations, but they approach stability from distinct perspectives. Although different in scope -- Lyapunov stability being local and algebraic and Jacobi stability being global and geometric -- these methods complement each other. Lyapunov stability identifies local conditions for stability around equilibrium points, while Jacobi stability provides a broader view of trajectory behavior and long-term robustness. Together, they offer a comprehensive framework for understanding system stability, combining local precision with global insights.
In conclusion, Lyapunov stability and Jacobi stability serve as complementary methodologies for assessing the stability of dynamical systems. By integrating both local and global perspectives, these approaches provide a more profound understanding of the system's behavior, which is crucial for precise stability analysis in both physical and theoretical contexts. \section{Formulation of Chern-Simons modified gravity}\label{section3} Chern-Simons modified gravity (CS) is a four-dimensional extension of General Relativity (GR) that covers a variety of theories, each characterized by specific couplings $\alpha$ and $\beta$. Two significant formulations are identified: the non-dynamical framework, where $\beta = 0$; and the dynamical framework, where both $\alpha$ and $\beta$ are arbitrary but nonzero. In this analysis, we consider the dynamical framework and utilize the notation of~\cite{Yunes_2009,Alexander_2009}. The action defining this theory is \begin{equation}\label{actioncs} S_\text{total}=S_\text{EH}+S_\text{CS}+S_{\vartheta}+S_\text{M}. \end{equation} The first term corresponds to the Einstein-Hilbert action, \begin{equation} S_\text{EH}= \kappa \int d^{4}x\sqrt{- g }R, \end{equation} where $\kappa=\left ( 16 \pi\right)^{-1}$ is the dimensionless gravitational coupling constant ($c$ and $G$ are normalized to 1,) $R$ is the Ricci scalar, and $g$ represents the determinant of the metric tensor $g_{\mu\nu}$. The expression $S_\text{CS}$ for the CS action is \begin{equation} S_\text{CS} = \frac{\alpha}{4}\int d^{4}x \sqrt{-g}\vartheta {}^{\star}\!R R, \end{equation} where $\alpha$ represents the CS coupling constant measured in units of length squared, $\vartheta$ is a dimensionless scalar field referred to as the Chern-Simons coupling field and ${}^{\star}\!R$ denotes the dual Riemann tensor that constructs the Pontryagin density \begin{equation}\label{pontry} {}^{\star}R R = {}^{\star}R^{\mu}{}_{\nu}{}^{\rho \sigma} R^{\nu}{}_{\mu}{}_{\rho \sigma} = \frac{1}{2}\epsilon^{ \rho \sigma \delta\tau } R^{\mu}{}_{\nu}{}_{\delta\tau}R^{\nu}{}_{\mu}{}_{\rho \sigma}. \end{equation} with $\epsilon^{ \rho \sigma \delta \tau }$ being the 4-dimensional Levi-Civita tensor. The action for the scalar field is expressed as \begin{equation} S_{\vartheta}=-\frac{\beta}{2}\int d^{4}x\sqrt{-g}\left[g^{\mu\nu}\nabla_{\mu}\vartheta \nabla_{\nu}\vartheta +2V(\vartheta) \right], \end{equation} where $\nabla_{\mu}$ represents the covariant derivative, $\beta$ denotes a dimensionless coupling constant, and $V(\vartheta)$ is a potential for the scalar field, which is set to zero in the rest of this work. Finally, the matter action $S_M$ is also set to zero. When the scalar field $\vartheta$ is constant, CS modified gravity becomes equivalent to GR. This happens because the Pontryagin term~\eqref{pontry} can be written as a divergence of the Chern-Simons topological current, \begin{equation} \nabla_{\mu}K^{\mu}=\frac{1}{2}{}^{\star}R R, \end{equation} where $ K^{\mu}$ is given by \begin{equation} K^{\mu}=\epsilon^{ \mu \rho \sigma \tau }\Gamma^{\upsilon}_{\rho\psi}\left ( \partial_{\sigma}\Gamma^{\psi}_{\tau \upsilon}+\frac{2}{3}\Gamma^{\psi}_{\sigma\xi}\Gamma^{\xi}_{\tau \upsilon} \right ), \end{equation} with $\Gamma^{\psi}_{\sigma\xi}$ representing the Christoffel symbols. The field equations are derived by varying the action with respect to the metric and the CS coupling field. 
The variation with respect to the metric results in \begin{equation} G_{\mu\nu}+\frac{\alpha}{\kappa}C_{\mu\nu}=\frac{1}{2\kappa}T_{\mu\nu}, \end{equation} where $G_{\mu\nu}$ is the Einstein tensor and $C_{\mu\nu}$ is the trace-free C-tensor~\footnote{Parentheses around the indices indicate symmetrization, for instance, $B_{(\mu\nu)}:=(B_{\mu\nu} + B_{\nu\mu})/2$.} \begin{equation} C^{\mu\nu} = \left( \nabla_{\sigma}\vartheta \right )\epsilon^{\sigma \delta \alpha ( \mu} \nabla_{\alpha}R^{\nu)}{}_{\delta}+\left (\nabla_{\sigma}\nabla_{\delta}\vartheta \right ) {}^{\star}\!R^{\delta (\mu \nu )\sigma}. \end{equation} Variation with respect to the CS coupling field results in the Klein-Gordon equation for a massless scalar field with a source term, \begin{equation} \beta\Box\vartheta=-\frac{\alpha}{4}\,{}^{\star}\!R R, \end{equation} where $\Box:=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$ is the d’Alembert operator. \subsection{Black holes in dCS modified gravity: slow rotation approximation}\label{sec:hjcs} Adding a CS term to the Einstein-Hilbert action modifies the rotating solution of General Relativity~\cite{Jackiw_2003,Yunes_2009,Grumiller_20008,Konno_2007,Konno_2009,visser2008kerrspacetimebriefintroduction,Chandrasekhar1985kt}. The metric for the solution in the modified theory under the slow rotation approximation is given by \begin{equation}\label{a5} ds^{2}= ds^{2}_\text{Kerr}+ \frac{5}{4} \frac{\alpha^{2}}{\beta \kappa} \frac{a}{r^{4}} \left(1+\frac{12}{7}\frac{m}{r}+ \frac{27}{10}\frac{m^{2}}{r^{2}} \right)\sin^{2}{\theta} d\phi dt, \end{equation} where in Boyer-Lindquist coordinates the geometry of the Kerr metric is \begin{align} ds^{2}_\text{Kerr} = &-\left(1 - \frac{2mr}{\rho^2} \right) dt^{2} - \left(\frac{4mar\sin^{2}{\theta}}{\rho^2}\right) dtd\phi + \left(\frac{\rho^2}{\Delta}\right) dr^2 + \rho^2 d\theta^2 \nonumber \\ &+ \sin^{2}{\theta} \left(r^{2} + a^{2} + \frac{2ma^{2}r\sin^{2}{\theta}}{\rho^2}\right) d\phi^{2}, \end{align} where \begin{align} \rho^2 &= r^{2} + a^{2} \cos^2{\theta}, \\ \Delta &= r^{2} + a^{2} - 2mr. \end{align} The scalar field compatible with this solution is \begin{equation} \vartheta = \frac{5\alpha a \cos{\theta}}{8\beta m r^{2}}\left( 1+\frac{2m}{r}+ \frac{18m^{2}}{5r^{2}} \right). \end{equation} Notice that the modified metric remains asymptotically flat at infinity. In equation~\eqref{a5}, $ds^{2}_\text{Kerr}$ represents the Kerr metric in the slow rotation approximation, where $m$ denotes the black hole's geometrized mass and $a$ its specific angular momentum. The constants $\alpha$ and $\beta$ are the coupling constants that appear in the action. In the following analysis, we simplify the expression in Eq.~(\ref{a5}) by neglecting the second and third terms enclosed by parentheses. This approximation is justified because, for small rotation and small CS coupling parameters, we expect the smallest circular matter orbit to be near $r\sim 6m$, where the terms that we are neglecting are already smaller than the first term in the parentheses. Hence, we use the metric \begin{equation}\label{approx} ds^{2}= ds^{2}_\text{Kerr}+ \frac{5}{4} \frac{\alpha^{2}}{\beta \kappa} \frac{a}{r^{4}} \sin^{2}{\theta} d\phi dt, \end{equation} and perform our calculations primarily at leading order. For the purpose of testing the theory, it is beneficial to define the parameter $\xi$, which is measured in units of $[\text{length}]^{4}$, and its dimensionless version $\zeta$, \begin{align} \xi &:= \frac{\alpha^{2}}{\beta \kappa}, \ \ \ \zeta:= \frac{\xi}{m^{4}}.
\end{align} The solution for a rotating black hole in dCS gravity simplifies to the Kerr metric when $\alpha=0$, or equivalently when $\xi=0$ or $\zeta=0$. In this work, the values of $\xi$ are restricted by the conditions delineated in \cite{rodríguez2024shadowsblackholesdynamical}, where compatibility with the black hole shadow reported by the EHT~\cite{2019m87,Event_Horizon_Telescope_Collaboration_2022} is analyzed. \section{Stability Analysis for timelike geodesics in dCS}\label{section4} In this section, we perform a stability analysis of the equations using the mathematical frameworks reviewed in the previous sections. To precisely compute the pertinent physical quantities, starting from the one-dimensional geodesic equation, we initially determine the radial dependence of the angular velocity $\Omega$, the specific energy $E$, and the specific angular momentum $L$ of particles in circular orbits~\cite{Harko_2010, Harko_2009}, and we restrict our examination to the equatorial plane $\theta = \frac{\pi}{2}$. In this plane, the radial geodesic equation reads \begin{equation} g_{rr}\left( \frac{dr}{d\tau}\right)^{2} = -1+\frac{E^{2}g_{\phi\phi}+2ELg_{t\phi}+L^{2}g_{tt}}{g^{2}_{t\phi}-g_{tt}g_{\phi\phi}}. \end{equation} From this equation, we can identify the effective potential for timelike geodesics in the context of the Chern-Simons slow rotation approximation~\eqref{approx} as \begin{equation} V_\text{Eff}(r)=\frac{g_{t\phi}^2 - E^2 g_{\phi\phi} - 2 E g_{t\phi} L - g_{tt} (g_{\phi\phi} + L^2)}{g_{rr} (g_{t\phi}^2 - g_{tt} g_{\phi\phi})}, \end{equation} such that \begin{equation} \ddot{r}=-\frac{1}{2}V'_\text{Eff}(r).\label{Ec2orden r potencial} \end{equation} In this specific instance, we factored out the constant ${1}/{2}$ from the effective potential since it influences neither the Lyapunov analysis nor the Jacobi analysis. However, it is recommended that future studies take this factor into consideration. The effective potential is decomposed into a Kerr-related component and a Chern-Simons correction term, \begin{align} V_\text{Eff}(r) &= V(r)_\text{K} + \xi V(r)_\text{CS}, \label{potenciald}\\ V(r)_\text{K} &= 1 - \frac{2 (-a E + L)^2m + a^2 (-1 + E^2) r - L^2 r + 2 m r^2 + E^2 r^3}{r^3}, \\ V(r)_\text{CS} &= -\frac{5 a \big(2 a E m + L (-2 m + r)\big) \big(-2 a L m + E r^3 + a^2 E (\mathcal{F}_{-2})\big)}{4 r^8 \big(a^2 + r \mathcal{F}_{2}\big)}, \end{align} with the functions $\mathcal{F}_{n}$ defined as follows, \begin{align} \mathcal{F}_{n} & =r-nm, \\ \mathcal{F}^{*}_{n} & =(n-5)r-nm, \\ \mathcal{F}^{**}_{n} & =3nm^{2}-3(n+1)mr-nr^{2}, \end{align} where we have also defined the functions $\mathcal{F}^{*}_{n}$ and $\mathcal{F}^{**}_{n}$ for future use.
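As an independent cross-check of the decomposition above (ours, not part of the original derivation; all variable names are illustrative), the short \texttt{sympy} sketch below assembles $V_\text{Eff}$ directly from the equatorial ($\theta=\pi/2$) components of the truncated metric~\eqref{approx} and expands it to first order in $\xi$; up to algebraic rearrangement, the two resulting pieces can be compared with $V(r)_\text{K}$ and $V(r)_\text{CS}$ given above.
\begin{lstlisting}[language=Python]
# Sketch (illustrative): build V_Eff from the equatorial metric functions of the
# truncated dCS metric and split it into the Kerr part and the O(xi) correction.
import sympy as sp

r, m, a, E, L, xi = sp.symbols('r m a E L xi', positive=True)

Delta  = r**2 + a**2 - 2*m*r
g_tt   = -(1 - 2*m/r)
g_tphi = -2*m*a/r + sp.Rational(5, 8)*xi*a/r**4   # half of the dt dphi coefficient
g_rr   = r**2/Delta
g_pp   = r**2 + a**2 + 2*m*a**2/r

det2  = g_tphi**2 - g_tt*g_pp
V_eff = (g_tphi**2 - E**2*g_pp - 2*E*L*g_tphi - g_tt*(g_pp + L**2))/(g_rr*det2)

V_K  = sp.simplify(V_eff.subs(xi, 0))               # Kerr-related component
V_CS = sp.simplify(sp.diff(V_eff, xi).subs(xi, 0))  # coefficient of xi
print(V_K)
print(V_CS)
\end{lstlisting}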
\begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{figs/Veff1Kerr.pdf} \includegraphics[width=0.3\textwidth]{figs/Veff1.3.pdf} \includegraphics[width=0.3\textwidth]{figs/Veff1.7.pdf} \caption{The effective potential for GR and dCS spacetimes concerning timelike geodesics characterized by various spin parameters $a$ and $\xi$.} \label{fig:CSdVeff} \end{figure} \subsection{Linear stability} With the purpose of analyzing the stability of equation \eqref{Ec2orden r potencial}, a system of equations can be constructed in $\mathbb{R}^2$ using the coordinate transformation $\dot{r}=p$ and $\dot{p}=\ddot{r}$, \begin{equation} \begin{array}{lcl} \dot{r}=p , \\\dot{p}=-V'_\text{Eff}(r), \end{array}\label{DS chs} \end{equation} where \begin{align} V'_\text{Eff}(r) &= V'(r)_\text{K} + \xi V'(r)_\text{CS}, \\ V'(r)_\text{K} &= \frac{6 (-a E + L)^2 m + 2 r \big(a^2 (-1 + E^2) - L^2\big) + 2 m r^2}{r^4}, \\ V'(r)_\text{CS} &= \frac{15 a E L r^3 + 5 a^2 m \left(-9 L^2 + \frac{E^2 \mathcal{F}^{*}_{12} r^3}{\mathcal{F}_{2}^2}\right)}{2 r^{10}}. \end{align} Eqs. \eqref{DS chs} may be reformulated in vector notation with $x=(r,p)^T$, thereby constituting a nonlinear differential equation of the form $\dot{x}=f(x)$, where \begin{equation} f(x,t)=\begin{pmatrix} p \\ -V'_\text{Eff}(r) \end{pmatrix}.\label{VecFun} \end{equation} The critical points $x_c$ of the system are determined by the condition $f(r,p)=0$, i.e., $p=0$ and $V'_\text{Eff}(r)=V'(r)_\text{K}+\xi V'(r)_\text{CS}=0$. Solving for $r$, the critical radius values $r_{*}$ are found as $r_{*}=r_\text{K*}+\xi r_\text{CS*}$, where~\footnote{The critical points by definition are real, so only the real radii are taken.} \begin{align} r_\text{K*}&= \frac{-a^2 (1 - E^2) + L^2 \mp \sqrt{(a^2 - a^2 E^2 + L^2)^2 - 12 (-a E + L)^2 m^2}}{2 m}, \\ r_\text{CS*}&=\frac{5 a \left(3 E L \, r_\text{K*}^3 (\mathcal{F}_\text{2K*})^2 + a m \left(-9 L^2 (\mathcal{F}_\text{2K*})^2 + E^2 r_\text{K*}^3 (\mathcal{F}^{*}_\text{12K*})\right)\right)}{4 \, r_\text{K*}^5 (\mathcal{F}_\text{2K*})^2 \left(12 (-a E + L)^2 m + 3 a^2 (1 - E^2) r_\text{K*} - 3 L^2 r_\text{K*} + 2 m r_\text{K*}^2\right)}. \end{align} The subscript $K^*$ in the functions $\mathcal{F}$ indicates that these are evaluated at $r_\text{K*}$. The system exhibits critical points $x_{c_i} = (0, r_{*})$, around which stability analyses are performed. For these analyses, and by the Hartman-Grobman theorem, the derivative of the function $f(r, p)$, specifically the Jacobian matrix, is required, \begin{align} Df(x,t) = \begin{pmatrix} \frac{\partial f_1}{\partial r} & \frac{\partial f_1}{\partial p} \\ \frac{\partial f_2}{\partial r} & \frac{\partial f_2}{\partial p} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -V''_\text{Eff}(r) & 0 \end{pmatrix}. \end{align} The eigenvalues of this matrix, evaluated at $r_*$, determine the stability of each critical point, \begin{equation} \lambda_{12}=\pm\sqrt{-V''_\text{Eff}(r_{*})}\label{Eigenvalues}. \end{equation} For the cases where $a=0$, $0.2$ and $0.3$ and $\xi=0$, $0.0336$ and $0.0574$, Tables~\ref{tab2},~\ref{tab3} and~\ref{tab4} show the behavior of $r_{*}$ and $V''_{\text{Eff}}$ for fixed values of parameters $m=1$, $E=1$, $L=4$. Table~\ref{tab5} provides the range of stable time-like circular orbits. These radii are used to determine the innermost stable circular orbit (ISCO), which corresponds to the inner boundary of the accretion disk.
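The critical radii and their classification can also be checked numerically. The short sketch below (ours, purely illustrative) reproduces the $a=0$ rows of Table~\ref{tab2}: it locates the roots of $V'_\text{Eff}(r)=0$ for $m=1$, $E=1$, $L=4$ in the Schwarzschild limit and classifies each root through the sign of $V''_\text{Eff}(r_{*})$, in line with Eq.~\eqref{Eigenvalues}.
\begin{lstlisting}[language=Python]
# Cross-check (illustrative) of the a = 0 rows of Table 2.
import sympy as sp

r = sp.symbols('r', positive=True)
m, E, L = 1, 1, 4

# Equatorial effective potential in the Schwarzschild limit, V_Eff = -(dr/dtau)^2
V = -(E**2 - 1 + 2*m/r - L**2/r**2 + 2*m*L**2/r**3)

for rc in sorted(sp.solve(sp.diff(V, r), r)):        # roots r = 4 and r = 12
    V2 = sp.diff(V, r, 2).subs(r, rc)
    kind = 'saddle (unstable)' if V2 < 0 else 'center (stable)'
    print(rc, float(V2), kind)       # -0.0625 at r = 4, ~0.00077 at r = 12
\end{lstlisting}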
The marginally stable orbit surrounding the central object can be determined by imposing the additional condition $V''_{\text{Eff}}(r) = 0$, which yields the following relation~\cite{Harko_2009} \begin{equation} E^{2}g''_{\phi\phi}+2E L g''_{t\phi}+L^{2}g''_{tt}-\left ( g^{2}_{t\phi}-g_{tt}g_{\phi\phi} \right )''=0, \end{equation} where the derivatives are with respect to $r$. By solving this equation for $r$, the radii of the marginally stable orbits are determined. For dCS gravity, the location of the ISCO is \begin{equation} r_\text{ISCO}=6 m \mp 4 \sqrt{\frac{2}{3}} a - \frac{7 a^2}{18 m} \pm \frac{25 a \, \xi}{432 \sqrt{6} \, m^4}, \end{equation} where the upper signs correspond to co-rotating geodesics, while the lower signs are for counter-rotating ones. \begin{table}[htbp] \caption{Values of $V''(r_{*})$ evaluated at critical radius $r_{*}$ for $a=0$ and different values of coupling parameter $\xi$.}\label{tab2}\begin{tabular}{@{}lllll@{}} \toprule $\xi$ & $r_1$ & $r_2$ &$V''(r_1)$ & $V''(r_2)$\\ \midrule 0 & 4 &12 &-0.0625&0.00077\\ 0.0336&4& 12 &-0.0625&0.00077\\ 0.0574 &4 &12 & -0.0625& 0.00077\\ \botrule \end{tabular} \end{table} \begin{table}[htbp] \caption{Values of $V''(r_{*})$ evaluated at critical radius $r_{*}$ for $a=0.2$ and different values of coupling parameter $\xi$.}\label{tab3} \begin{tabular}{@{}lllll@{}} \toprule $\xi$ & $r_1$ & $r_2$ &$V''(r_1)$ & $V''(r_2)$\\ \midrule 0 & 3.45247&12.54752 &-0.12803&0.00073\\ 0.0336&3.45275& 12.54752&-0.12801&0.00073\\ 0.0574&3.45293 &12.54752 &-0.12800& 0.00073\\ \botrule \end{tabular} \end{table} \begin{table}[htbp] \caption{Values of $V''(r_{*})$ evaluated at critical radius $r_{*}$ for $a=0.3$ and different values of coupling parameter $\xi$.}\label{tab4} \begin{tabular}{@{}lllll@{}} \toprule $\xi$ & $r_1$ & $r_2$ &$V''(r_1)$ & $V''(r_2)$\\ \midrule 0 & 3.21147 &12.78852 &-0.18007&0.00072\\ 0.0336&3.21194 & 12.78852 &-0.18003&0.00072\\ 0.0574 &3.21228 &12.78851 & -0.17999& 0.00072\\ \botrule \end{tabular} \end{table} The eigenvalues associated with the critical points are given by Eq.~\eqref{Eigenvalues}; from Table~\ref{tab2} we observe that there are two critical points. For $a=0$ and $\xi=0$, $\xi=0.0336$ and $\xi=0.0574$, at the critical point $x_1=(0,r_1)$ the eigenvalues are real and opposite in sign, thus $x_1$ is an unstable saddle point. At the critical point $x_2=(0,r_2)$, the eigenvalues are purely imaginary, therefore $x_2$ is a stable center. The phase portraits are sketched in Fig.~\ref{fig3}. Phase portraits exhibit similar behavior for $a=0.2$ and $a=0.3$, as can be observed in Figs.~\ref{fig4} and~\ref{fig5}, derived from the eigenvalues obtained from Tables~\ref{tab3} and \ref{tab4}. Stability depends entirely on the parameters' values $a, E, L, m,$ and $\xi$.
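A quick numerical evaluation of the co-rotating branch of the ISCO expression above (our own check, not part of the original analysis) reproduces the entries of Table~\ref{tab5} for $a=0.2$:
\begin{lstlisting}[language=Python]
# Illustrative check of r_ISCO (co-rotating, upper signs) against Table 5.
import math

def r_isco(a, xi, m=1.0):
    return (6*m - 4*math.sqrt(2.0/3.0)*a - 7*a**2/(18*m)
            + 25*a*xi/(432*math.sqrt(6)*m**4))

for xi in (0.0, 0.0336, 0.0574):
    print(xi, round(r_isco(0.2, xi), 5))   # 5.33125, 5.33141, 5.33152
\end{lstlisting}
The same function with $a=0.3$ reproduces the last row of the table.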
\begin{figure}[ht] \centering \includegraphics[width=0.31\textwidth]{figs/DF,a=0,xi=0.03.pdf} \caption{Phase diagram in the $r-\dot{r}$ plane for timelike geodesics in Schwarzschild spacetime, shown for spin parameter $a=0$ and coupling parameter value $\xi=0$; the red highlighted point $x_1=(0,r_1)$ is a saddle point, and nearby solutions (dashed) are unstable; the blue highlighted critical point $x_2=(0,r_2)$ is a stable center, and nearby solutions (solid) are stable closed orbits.} \label{fig3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.31\textwidth]{figs/DF,a=0.2,xi=0.pdf} \includegraphics[width=0.31\textwidth]{figs/DF,a=0.2,xi=0.03.pdf} \includegraphics[width=0.31\textwidth]{figs/DF,a=0.2,Xi=0.05.pdf} \caption{Phase diagrams in the $r-\dot{r}$ plane for timelike geodesics in dCS spacetime, shown for spin parameter $a=0.2$ and, from left to right, coupling parameter $\xi=0$, $0.0336$, $0.0574$; the red highlighted point $x_1=(0,r_1)$ is a saddle point, and nearby solutions (dashed red) are unstable; the blue highlighted critical point $x_2=(0,r_2)$ is a stable center, and nearby solutions (solid gray) are stable closed orbits. It is observed that the critical points exhibit only a slight displacement along the $r$ axis as the parameters vary.} \label{fig4} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.31\textwidth]{figs/DF,a=0.3,xi=0.pdf} \includegraphics[width=0.31\textwidth]{figs/DF,a=0.3,xi=0.03.pdf} \includegraphics[width=0.31\textwidth]{figs/DF,a=0.3,Xi=0.05.pdf} \caption{Phase diagrams in the $r-\dot{r}$ plane for timelike geodesics in dCS spacetime, shown for spin parameter $a=0.3$ and, from left to right, coupling parameter $\xi=0$, $0.0336$, $0.0574$; the red highlighted point $x_1=(0,r_1)$ is a saddle point, and nearby solutions (dashed orange) are unstable; the blue highlighted critical point $x_2=(0,r_2)$ is a stable center, and nearby solutions (solid purple) are stable closed orbits. It is observed that the critical points exhibit only a slight displacement along the $r$ axis as the parameters vary.} \label{fig5} \end{figure} \begin{table}[htbp] \caption{Range of stable circular orbits for timelike particles corresponding to various values of $a$ and $\xi$. Stable circular orbits exist at radii $r > r^{*}_{+}$, with $r^{*}_{+}$ designating the innermost stable circular orbit, while circular orbits with $r < r^{*}_{+}$ are unstable.}\label{tab5}\begin{tabular}{@{}llll@{}} \toprule \textbf{a} & \textbf{\(\xi = 0\)} & \textbf{\(\xi = 0.0336\)} & \textbf{\(\xi = 0.0574\)} \\ \hline 0 (Sch) & $r^{*}_{+} > 6$ & $r^{*}_{+} > 6$ & $r^{*}_{+} > 6$ \\ 0.2 & $r^{*}_{+} > 5.33125$ & $r^{*}_{+} > 5.33141$ & $r^{*}_{+} >5.33152 $ \\ 0.3 & $r^{*}_{+} > 4.9852$ & $r^{*}_{+} >4.98544 $ & $r^{*}_{+} >4.98561$ \\ \botrule \end{tabular} \end{table} \subsection{KCC stability} Our goal in this analysis is to establish the second KCC invariant, referred to as the deviation curvature tensor, in order to assess Jacobi stability.
By employing the derivative of the effective potential of the dCS spacetime, $V'_{\text{Eff}}$, the radial equation of motion can be written as \begin{equation} \ddot{r} + V'_{\text{Eff}}(r)=0. \end{equation} Contrasting the previous equation with the general second-order differential equation used in KCC theory, \begin{equation} \frac{d^2 x^i}{dt^2} + 2G^i(x,y) = 0, \end{equation} it is evident that the two equations have the same form, with $2G^{1}(r,p)=V'_{\text{Eff}}(r)$, thus it can be inferred that \begin{equation} G^{1}(r,p) =G^{1}(r,p)_\text{K}+\xi G^{1}(r,p)_\text{CS}, \end{equation} with \begin{align} G^{1}(r,p)_\text{K}&=\frac{3 (-a E + L)^2 m + (a^2 (-1 + E^2) - L^2) r + m r^2}{ r^4},\\ G^{1}(r,p)_\text{CS}&= \frac{15 a E L r^3 + 5 a^2 m \left(-9 L^2 + \frac{E^2 \mathcal{F}^{*}_{12} r^3}{\mathcal{F}_{2}^2}\right)}{4 r^{10}}. \end{align} The derivatives of these functions are \begin{align} G'^{1}(r,p) &= G'^{1}(r,p)_\text{K}+\xi G'^{1}(r,p)_\text{CS},\\ G'^{1}(r,p)_\text{K}&=\frac{24 a E L m + 3 L^2 \mathcal{F}_{4} - 2 m r^2 - 3 a^2 (-r + E^2 \mathcal{F}_{4})}{r^5}, \\ G'^{1}(r,p)_\text{CS}&=-\frac{5 a \left(-90 a L ^2 m \mathcal{F}_{2}^3 +21 E L \mathcal{F}_{2}^3 r^3 + 8 a E^2 m r^3 \mathcal{F}^{**}_{7}\right) }{-4 r^{11}\mathcal{F}_{2}^3}. \end{align} The nonlinear connection for this system is determined by $N_{1}^{1} = \frac{\partial G^1}{\partial p} = 0,$ and the Berwald connection is established as $G_{11}^1 = \frac{\partial N_{1}^{1}}{\partial p} = 0 $. The equation for the deviation curvature tensor \eqref{kcc. curvature tensor} takes the form \begin{equation} P_1^1=-2\frac{\partial G^1}{\partial r}-2G^1G_{11}^1+p\frac{\partial N_1^1}{\partial r}+N_1^1N^1_1. \end{equation} Having derived all the ingredients, we are prepared to formulate the second KCC invariant as follows: \begin{align} P_1^1&=P^1_\text{1 K}+\xi P^1_\text{1 CS}, \\ P^1_\text{1 K}&=\frac{-48 a E L m + 4 m r^2 - 6 L^2 \mathcal{F}_{4} + 6 a^2 (-r - E^2 \mathcal{F}_{4})}{r^5}, \\ P^1_\text{1 CS}&=\frac{5 a \left(-90 a L^2 m \mathcal{F}_{2}^3 +21 E L \mathcal{F}_{2}^3 r^3 + 8 a E^2 m r^3 \mathcal{F}^{**}_{7}\right) }{-2 r^{11} \mathcal{F}_{2}^3}. \end{align} Evaluating the second KCC invariant at the critical point $(0,r_{*})$, the Jacobi stability is determined. The results are illustrated in Fig.~\ref{fig:seconkccVSr}. \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth]{figs/KCCg10.2.pdf} \includegraphics[width=0.4\linewidth]{figs/KCCg20.3.pdf} \caption{The panel on the left illustrates the second KCC invariant $P^{1}_{1}$, maintaining constant parameters for spin $a$, energy $E$, and the coupling constant $\xi$ while varying the angular momentum $L$. Conversely, the panel on the right depicts the second KCC invariant $P^{1}_{1}$, where the parameters for spin $a$, angular momentum $L$, and coupling constant $\xi$ are held constant as the energy $E$ is varied.} \label{fig:seconkccVSr} \end{figure} The behavior of the second KCC invariant was analyzed as a function of the radial coordinate $r$ with varying angular momentum ($L$) and energy ($E$). The left panel of Fig.~\ref{fig:seconkccVSr} shows the behavior of the second KCC invariant for several values of $L$, with energy, spin parameter, and coupling constant maintained at fixed values, namely $E=1.0$, $a=0.2$, and $\xi=0.0574$, respectively. The analysis yielded the following insights: at lower values of $L$, the second KCC invariant predominantly displayed positive values throughout the domain of $r$, indicating an inherent instability.
With the increase of $L$, the second KCC invariant started to present negative values over specific intervals of $r$, signifying the development of stable regions. The presence of higher angular momentum brought about a stabilizing effect, as it introduced additional terms that mitigate the destabilizing influences stemming from the radial and coupling-dependent components of the second KCC invariant. This results in an expanded region of $r$ where the stability conditions delineated by the second KCC invariant are fulfilled. For intermediate values of $L$, the second KCC invariant oscillated between stable and unstable regions across the domain, crossing zero at critical radii. These transitions imply potential bifurcations or alterations in the nature of the trajectory dynamics. The effects of energy $E$ on stability were examined in the right panel of Fig.~\ref{fig:seconkccVSr}, where the influence of varying $E$ was investigated while the constants $L=3.0$, $a=0.2$, and $\xi=0.0574$ remained unchanged. At lower energy levels, the second KCC invariant reveals extensive regions of negative values, indicating stability across a broad spectrum of $r$. With an increase in $E$, the magnitude of the positive terms within the second KCC invariant escalates, resulting in a reduction of the stable region and a shift in the critical radius at which stability transitions occur. The compression of stability zones is observed for higher values of $E$, leading to a considerable reduction in stable regions and an increase in positivity across the majority of the domain, indicating heightened instability. This analysis underscores the destabilizing impact of energy, which enhances the influence of centrifugal and radial forces, thereby overpowering the stabilizing contributions from other terms. These findings suggest that systems characterized by moderate to high angular momentum and low energy are predisposed to stable trajectories, similar to the behavior observed in the Kerr case. Conversely, high-energy systems with low angular momentum are likely to exhibit instability across a broad range of $r$. Our results indicate that stability is not significantly impacted by variations in the coupling constant $\xi$. Instead, changes in $\xi$ primarily alter the location where the second KCC invariant intersects the $r$-axis, as shown in Figure~\ref{fig:seconkccVSr}. This behavior is observed as a lateral displacement of the critical points, depending on the value of $\xi$, without affecting the overall stability of the system. Such displacements can be interpreted as shifts of the second KCC invariant, analogous to the behavior observed when varying the energy, spin, and angular momentum. These findings underscore that while $\xi$ modifies the geometric characteristics of the system, it does not inherently enhance or diminish its stability. \section{Discussion and conclusions} In this paper, we have studied the stability of time-like geodesics in the equatorial plane of the slowly rotating dCS black hole by analyzing the effect of the spin parameter $a$ and the coupling parameter $\xi$ on the effective potential. We used linear stability, KCC stability analysis, and phase portrait analysis.
With the aim of aligning with realistic modified gravity scenarios, the coupling parameter $\xi$ is restricted using observational data of black hole shadows. From the analysis of Eq.~\eqref{potenciald}, a stability analysis of the critical points was carried out, allowing us to determine stable critical points and a range of stable orbits of the nonlinear system. {We find that, regardless of the values of the spin and CS coupling parameters -- within the ranges used in this work -- the critical point $x_1=(0,r_1)$ is consistently Lyapunov unstable, while $x_2=(0,r_2)$ remains stable, and this behavior can also be observed in the phase diagrams. This indicates a robust saddle-point behavior for this configuration. } Although the values of $V''(r_1)$ at the first critical point $x_1=(0,r_1)$ become increasingly negative as $a$ increases, indicating a worsening instability, the value of $V''(r_2)$ at $x_2=(0,r_2)$ remains consistently positive, suggesting that perturbations will tend to stabilize the system at this point. The analysis reveals that the system consistently exhibits saddle-point behavior for different values of $a$ and $\xi$. Although $r_1$ is Lyapunov unstable in all scenarios, $r_2$ is still stable. Furthermore, the analysis of the second KCC invariant reveals how the spin and the Chern-Simons correction influence the stability of black hole solutions. Although Schwarzschild black holes remain stable, increasing spin and introducing dCS corrections can lead to instability, as evidenced by the upward shifts and crossings of the second KCC invariant into positive values. The following summarizes the derived results: \begin{itemize} \item Schwarzschild: The system is stable, with the second KCC invariant consistently negative, showing strong Jacobi stability. \item Kerr: Introducing a small spin reduces stability but does not necessarily lead to instability. With higher spin, the system becomes more unstable. \item dCS: Increasing both the spin and the dCS corrections gradually makes the system less stable, with the second KCC invariant approaching zero or turning positive in some regions. \item In the range of parameters analyzed, the stability exhibits negligible sensitivity to changes in $\xi$. \end{itemize} Both methods indicate that $x_1=(0,r_1)$ remains unstable in all scenarios analyzed. The increasing negativity of $V''(r_1)$ as $a$ increases aligns with the behavior of the second KCC invariant, which shows an increasing instability as the spin increases. The stability observed at $x_2=(0,r_2)$ in the Lyapunov analysis is corroborated by the second KCC invariant remaining negative for Schwarzschild configurations. The transition from stability to instability as the parameters are varied is consistently represented in both analyses. For Kerr solutions with higher spin ($a = 0.3$), both methods indicate an increased instability: Lyapunov's analysis shows that $V''(r_1)$ becomes more negative, whereas the second KCC invariant signifies instability by moving upwards to positive values. This suggests that both methods are compatible and can be used together to offer a more comprehensive understanding of the stability characteristics of black hole solutions under varying conditions of spin and dCS corrections. We observe some benefits of using the KCC approach. It provides a more comprehensive framework for understanding the robustness of solutions to second-order differential equations.
Unlike classical stability methods, which may focus primarily on linearization and local behaviors near equilibrium, the KCC stability extends to the geometric and structural properties of the system. This simplifies the identification of how small changes in initial conditions or internal parameters gradually change the stability properties of the system, while also giving a more nuanced view of the system's global structure, beyond just equilibrium points. As a result, this method is well suited for systems where classical linear stability analyses may fall short or provide insufficient insight into the long-term behavior of the system under perturbations. \section{Acknowledgments} The authors appreciate the support provided by CONAHCyT through grants 932448, 519369 and DCF-320821, which made this research possible. \bibliography{refs} \end{document}
2412.14345v1
http://arxiv.org/abs/2412.14345v1
Coxeter-type quotients of surface braid groups
\documentclass[12pt,british,reqno]{amsart} \usepackage{babel} \usepackage[font=small,format=plain,labelfont=bf,up,textfont=it,up]{caption} \usepackage[utf8]{inputenc} \newcommand{\fecho}[1]{\ensuremath{\langle\langle #1 \rangle\rangle}} \usepackage{cancel} \usepackage{listings} \usepackage{faktor} \usepackage[T1]{fontenc} \DeclareFontFamily{T1}{calligra}{} \DeclareFontShape{T1}{calligra}{m}{n}{<->s*[1.44]callig15}{} \DeclareMathAlphabet\mathrsfso {U}{rsfso}{m}{n} \usepackage{amsthm,amssymb,amsmath} \usepackage{enumerate} \usepackage[all]{xy} \usepackage{mathtools} \makeatletter \def\@map#1#2[#3]{\mbox{$#1 \colon\thinspace #2 \longrightarrow #3$}} \def\map#1#2{\@ifnextchar [{\@map{#1}{#2}}{\@map{#1}{#2}[#2]}} \g@addto@macro\@floatboxreset\centering \makeatother \renewcommand{\epsilon}{\ensuremath{\varepsilon}} \renewcommand{\phi}{\ensuremath{\varphi}} \newcommand{\vide}{\ensuremath{\varnothing}} \renewcommand{\to}{\ensuremath{\longrightarrow}} \renewcommand{\mapsto}{\ensuremath{\longmapsto}} \newcommand{\R}{\ensuremath{\mathbb R}} \newcommand{\N}{\ensuremath{\mathbb N}} \newcommand{\Z}{\ensuremath{\mathbb{Z}}} \newcommand{\dt}{\ensuremath{\mathbb D}^{2}} \newcommand{\St}[1][2]{\ensuremath{\mathbb S}^{#1}} \newcommand{\FF}{\ensuremath{\mathbb F}} \newcommand{\F}[1][n]{\ensuremath{\FF_{{#1}}}} \newcommand{\rp}{\ensuremath{\mathbb{R}P^2}} \newcommand{\sn}[1][n]{\ensuremath{S_{{#1}}}} \newcommand{\an}[1][n]{\ensuremath{A_{{#1}}}} \newcommand{\hyparr}[1]{\ensuremath{\mathrsfso{#1}}} \newcommand{\brak}[1]{\ensuremath{\left\{ #1 \right\}}} \newcommand{\ang}[1]{\ensuremath{\left\langle #1\right\rangle}} \newcommand{\set}[2]{\ensuremath{\left\{#1 \,\mid\, #2\right\}}} \newcommand{\setang}[2]{\ensuremath{\ang{#1 \,\mid\, #2}}} \newcommand{\setangr}[2]{\ensuremath{\ang{#1 \,\left\lvert \, #2 \right.}}} \newcommand{\setangl}[2]{\ensuremath{\ang{\left. #1 \,\right\rvert \, #2}}} \newcommand{\ord}[1]{\ensuremath{\left\lvert #1\right\rvert}} \newcommand{\setr}[2]{\ensuremath{\brak{#1 \,\left\lvert \, #2 \right.}}} \newcommand{\setl}[2]{\ensuremath{\brak{\left. 
#1 \,\right\rvert \, #2}}} \newtheoremstyle{theoremm}{}{}{\itshape}{}{\scshape}{.}{ }{} \theoremstyle{theoremm} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheoremstyle{remark}{}{}{}{}{\scshape}{.}{ }{} \theoremstyle{remark} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{rems}[thm]{Remarks} \newtheorem{exm}[thm]{Example} \newcommand{\inn}[1]{\ensuremath{\operatorname{\text{Inn}}\left({#1}\right)}} \newcommand{\aut}[1]{\ensuremath{\operatorname{\text{Aut}}\left({#1}\right)}} \renewcommand{\ker}[1]{\ensuremath{\operatorname{\text{Ker}}\left({#1}\right)}} \newcommand{\kernb}{\ensuremath{\operatorname{\text{Ker}}}} \newcommand{\redefn}[1]{Definition~\protect\ref{defn:#1}} \newcommand{\rethm}[1]{Theorem~\protect\ref{thm:#1}} \newcommand{\relem}[1]{Lemma~\protect\ref{lem:#1}} \newcommand{\reprop}[1]{Proposition~\protect\ref{prop:#1}} \newcommand{\recor}[1]{Corollary~\protect\ref{cor:#1}} \newcommand{\resec}[1]{Section~\protect\ref{sec:#1}} \newcommand{\rerem}[1]{Remark~\protect\ref{rem:#1}} \newcommand{\rerems}[1]{Remarks~\protect\ref{rem:#1}} \newcommand{\req}[1]{equation~(\protect\ref{eq:#1})} \newcommand{\reqref}[1]{(\protect\ref{eq:#1})} \setlength{\textwidth}{16.1cm} \setlength{\textheight}{24cm} \setlength{\oddsidemargin}{0.6cm} \setlength{\evensidemargin}{0.6cm} \setlength{\topmargin}{-1.0cm} \numberwithin{equation}{section} \allowdisplaybreaks \newcommand{\abs}[1]{\lvert#1\rvert} \usepackage[pdftex,bookmarks=true, breaklinks=true, bookmarksnumbered = true,colorlinks= true,urlcolor= green, anchorcolor = yellow, citecolor=blue,]{hyperref} \usepackage[x11names]{xcolor} \newcommand{\como}[1]{\textcolor{orange}{\textbf{!!!O!!!~#1}}} \newcommand{\comp}[1]{\textcolor{Purple2}{\textbf{!!!P!!!~#1}}} \newcommand{\comr}[1]{\textcolor{red}{\textbf{!!!R!!!~#1}}} \begin{document} \title[Coxeter-type quotients of surface braid groups]{Coxeter-type quotients of surface braid groups} \author[R.~Diniz]{Renato Diniz} \address{Universidade Federal do Rec\^oncavo da Bahia - CFP, Av. Nestor de Melo Pita, 535, CEP:45.300.000 - Amargosa - BA - Brasil} \email{[email protected]} \author[O.~Ocampo]{Oscar Ocampo} \address{Universidade Federal da Bahia, Departamento de Matem\'atica - IME, Av.~Milton Santos~S/N, CEP:~40170-110 - Salvador - BA - Brazil} \email{[email protected]} \author[P.~C.~C.~Santos J\'unior]{Paulo Cesar Cerqueira dos Santos J\'unior} \address{Secretaria da Educa\c{c}\~ao do Estado da Bahia, SEC-BA, $5^{a}$ Avenida N$^\circ 550$, centro administrativo da Bahia - CAB, CEP:~41745-004 - Salvador - BA - Brazil} \email{[email protected]} \subjclass[2020]{Primary: 20F36; Secondary: 20F05.} \date{\today} \keywords{Artin braid group, Surface braid group, Finite group.} \date{\today} \begin{abstract} \noindent Let $M$ be a closed surface, $q\geq 2$ and $n\geq 2$. In this paper, we analyze the Coxeter-type quotient group $B_n(M)(q)$ of the surface braid group $B_{n}(M)$ by the normal closure of the element $\sigma_1^q$, where $\sigma_1$ is the classic Artin generator of the Artin braid group $B_n$. Also, we study the Coxeter-type quotient groups obtained by taking the quotient of $B_n(M)$ by the commutator subgroup of the respective pure braid group $[P_n(M),P_n(M)]$ and adding the relation $\sigma_1^q=1$, when $M$ is a closed orientable surface or the disk. 
\end{abstract} \maketitle \section{Introduction}\label{sec:intro} The braid groups of the $2$-disk, or Artin braid groups, were introduced by Artin in 1925 and further studied in 1947~\cite{A1,A2}. Surface braid groups were initially studied by Zariski~\cite{Z}, and were later generalized by Fox and Neuwirth to braid groups of arbitrary topological spaces using configuration spaces as follows~\cite{FoN}. Let $S$ be a compact, connected surface, and let $n\in \mathbb N$. The \textit{$n$th ordered configuration space of $S$}, denoted by $F_{n}(S)$, is defined by: \begin{equation*} F_n(S)=\left\{(x_{1},\ldots,x_{n})\in S^{n} \mid x_{i}\neq x_{j}\,\, \text{if}\,\, i\neq j;\,i,j=1,\ldots,n\right\}. \end{equation*} The \textit{$n$-string pure braid group $P_n(S)$ of $S$} is defined by $P_n(S)=\pi_1(F_n(S))$. The symmetric group $S_{n}$ on $n$ letters acts freely on $F_{n}(S)$ by permuting coordinates, and the \textit{$n$-string braid group $B_n(S)$ of $S$} is defined by $B_n(S)=\pi_1(F_n(S)/S_{n})$. This gives rise to the following short exact sequence: \begin{equation}\label{eq:ses} 1 \to P_{n}(S) \to B_{n}(S) \stackrel{\sigma}{\longrightarrow} S_{n} \to 1. \end{equation} The map $\map{\sigma}{B_{n}(S)}[S_{n}]$ is the standard homomorphism that associates to each braid its induced permutation. We note the following: \begin{enumerate} \item When $S=D^2$ is the disk, $B_n(D^2)$ (resp.\ $P_n(D^2)$) is the classical Artin braid group denoted by $B_n$ (resp.\ the classical pure Artin braid group denoted by $P_n$). \item It follows from the definition that $F_1(S)=S$ for any surface $S$, so the groups $P_1(S)$ and $B_1(S)$ are isomorphic to $\pi_1(S)$. For this reason, braid groups over the surface $S$ may be seen as generalizations of the fundamental group of $S$. \end{enumerate} For more information on general aspects of surface braid groups we recommend \cite{Ha} and also the survey \cite{GPi}, in particular its Section~2, where equivalent definitions of these groups are given, showing different viewpoints of them. We recall that the Artin braid group $B_n$ admits the following presentation~\cite{A1}: \begin{equation}\label{eq:presbn} \bigg\langle \sigma_1, \ldots\, , \sigma_{n-1} \ \bigg\vert \ \begin{matrix} \sigma_{i} \sigma_j = \sigma_j \sigma_{i} &\text{for} &\vert i-j\vert > 1\\ \sigma_{i} \sigma_j \sigma_{i} = \sigma_j \sigma_{i} \sigma_j &\text{for} &\vert i-j\vert = 1 \end{matrix} \ \bigg\rangle. \end{equation} It is well known that the symmetric group $S_n$ admits the following presentation: $$ S_n=\left\langle \sigma_1,\ldots,\sigma_{n-1} \mid \begin{array}{l} \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1} \textrm{ for } 1\leq i\leq n-2\\ \sigma_i\sigma_j=\sigma_j\sigma_i \textrm{ for } \left|i-j\right|\geq 2\\ \sigma_1^2=1 \end{array} \right\rangle. $$ Let $\fecho{g}$ denote the normal closure of an element $g$ in a group $G$. Hence, from \eqref{eq:presbn} it is clear that $\faktor{B_n}{\fecho{ \sigma_1^2}}$ is isomorphic to $S_n$. Let $B_n(2)=\faktor{B_n}{\fecho{ \sigma_1^2}}$. Notice that $B_n(2)$ is a finite group, while the braid group $B_n$ is an infinite torsion-free group. The question that naturally appears is whether the groups $B_n(k) = \faktor{B_n}{\fecho{ \sigma_1^k}}$ are finite for every $k\geq3$.
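Before recalling Coxeter's answer, we remark that quotients of this type can be explored computationally for small parameters. The following short sketch (ours, purely illustrative, written with the \texttt{sympy} Python library) uses Todd--Coxeter coset enumeration to compute the order of $B_3(k)=\faktor{B_3}{\fecho{\sigma_1^k}}$ in the cases where this group turns out to be finite.
\begin{lstlisting}[language=Python]
# Illustrative computation: the order of B_3(k) = B_3 / <<sigma_1^k>>
# via Todd-Coxeter coset enumeration (finite cases only).
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, s1, s2 = free_group("s1, s2")
braid = s1*s2*s1*(s2*s1*s2)**-1        # the braid relation of B_3

for k in (2, 3, 4, 5):                 # for k >= 6 the group B_3(k) is infinite
    G = FpGroup(F, [braid, s1**k])
    print(k, G.order())                # prints 6, 24, 96, 600
\end{lstlisting}
The values obtained agree with the orders recalled in Theorem~\ref{thm:coxeter} below.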
The answer to this problem was given by Coxeter \cite{Co}, who used classical geometry to establish an unexpected connection between braid groups and Platonic solids (see Figure~\ref{fig:platonics}), showing that $B_n(k)$ is finite if and only if $(k-2)(n-2)<4$, see Theorem~\ref{thm:coxeter} (see also \cite[Chapter~5, Proposition~2.2]{MK}). The complete statement of Coxeter's result is formulated in Subsection~\ref{sec:coxeter}. It is worth noting that it was proved differently by Assion \cite{As} using Burau's representation of the braid groups. Assion also gave a presentation of some symplectic groups as quotients of braid groups, which was improved by Wajnryb \cite{W}, who gave a braid-like presentation of the symplectic group $Sp(n,p)$. More recently, in \cite{BDOS} Coxeter's result is used to study the relationship between level $m$ congruence subgroups $B_n[m]$ and the normal closure of the element $\sigma_1^m$. In particular, they characterize when the normal closure of the element $\sigma_1^m$ has finite index in $B_n[m]$ and provide explicit generators for the finite quotients. Motivated by Coxeter's work on Artin braid groups, we are interested in this problem for surface braid groups. From now on, let $B_{n}(M)(q)$ denote the quotient of the surface braid group $B_{n}(M)$ by the normal closure of the element $\sigma_1^q$, where $\sigma_1$ is the classic Artin generator of the Artin braid group $B_n$ permuting the first two strands \cite{A1}. Our main purpose here is to study the Coxeter-type quotient of surface braid groups $B_n(M)(q)$. In contrast to the classical case of the disk, in this paper, we show that for every closed surface different from the sphere and the projective plane, the quotient group $B_n(M)(q)$ is infinite for all $n,q \geq 3$. In Subsection~\ref{subsec:kpi1} we prove the following result. \begin{thm} \label{thm:mainsurface} Let $q\geq 3$ and $n\geq 2$ be integers. Let $M$ be a closed surface different from the sphere and the projective plane. \begin{enumerate} \item\label{item:mainsurface1} If $M$ is orientable then the abelianization of the group $B_n(M)(q)$ is isomorphic to ${\mathbb Z_q} \oplus H_1(M)$. \item\label{item:mainsurface2} If $M$ is non-orientable then the abelianization of the group $B_n(M)(q)$ is isomorphic to $$ \begin{cases} H_1(M) & \text{if $q$ is odd},\\ {\mathbb Z_2} \oplus H_1(M) & \text{if $q$ is even}. \end{cases} $$ \item\label{item:mainsurface3} For any surface $M$ different from the sphere and the projective plane, the group $B_n(M)(q)$ is infinite. \end{enumerate} \end{thm} We note that Theorem~\ref{thm:mainsurface} is also true for $q=2$. For instance, in \cite[P.~226]{GMP}, the authors claimed that for closed orientable surfaces of genus $g\geq 1$, the quotient group $B_n(M)(2)$ is isomorphic to $\pi_1(M)^n\rtimes S_n$. So, it is infinite. In Subsection~\ref{subsec:s2} we analyze the cases where $M$ is the sphere or the projective plane. We compute the abelianization of $B_n(M)(q)$ and prove the following result for sphere braid groups with few strings. \begin{thm} \label{thm:s2} Let $q\geq 3$. \begin{enumerate} \item $B_2(\mathbb S^2)(q)= \begin{cases} \Z_2 & \text{if $q$ is even},\\ \{1\} & \text{if $q$ is odd}. \end{cases} $ \item $B_3(\mathbb S^2)(q)\cong \begin{cases} B_3(\mathbb S^2) & \text{if $gcd(4,q)=4$},\\ S_3 & \text{if $gcd(4,q)=2$},\\ \{1\} & \text{if $gcd(4,q)=1$}. \end{cases} $ \item $B_4(\mathbb S^2)(q)$ is an infinite group if and only if $q\geq 6$.
\end{enumerate} \end{thm} Finally, in Section~\ref{sec:cryst} we show that the quotient group $\faktor{B_n(M)}{[P_n(M), P_n(M)]}(q)$ is finite when $M$ is the disk, see Theorem~\ref{Coxeimpar}, and that it is infinite when $M$ is a closed orientable surface of genus $g\geq 1$, see Proposition~\ref{prop:surfcrystcoxeter}, where $q\geq 3$, $n \geq 2$ and $[P_n(M), P_n(M)]$ is the commutator subgroup of the pure braid group of the surface $M$. \subsection*{Acknowledgments} The second named author would like to thank Eliane Santos, all HCA staff, Bruno Noronha, Luciano Macedo, Marcio Isabela, Andreia de Oliveira Rocha, Andreia Gracielle Santana, Ednice de Souza Santos, and Vinicius Aiala for their valuable help since July 2024, without whose support this work could not have been completed. O.O.~was partially supported by National Council for Scientific and Technological Development - CNPq through a \textit{Bolsa de Produtividade} 305422/2022-7. \section{Coxeter-type quotients of surface braid groups}\label{sec:surfaces} Our main purpose is to study the Coxeter-type quotient of surface braid groups $B_n(M)(q)$ obtained by considering $\sigma_1^q=1$, for $q\geq 3$, where $\sigma_1$ is the classical Artin generator, see \cite{A1}. We will use presentations of surface braid groups that have, in the set of generators, the Artin generators. We start this section with the following elementary result that will be useful in this work. \begin{lem} \label{lem:bezout} Let $a$ and $b$ be positive integers and let $g$ be an element of a group $G$. If $g^a=1$ and $g^b=1$ then $g^d=1$, where $d=gcd(a, b)$ denotes the greatest common divisor of the integers $a$ and $b$. \end{lem} \begin{proof} This result is a consequence of B\'ezout's identity: if $a$ and $b$ are integers (not both $0$), then there exist integers $u$ and $v$ such that $gcd(a, b) = au + bv$, see \cite[Theorem~1.7, Section~1.2]{JJ}. \end{proof} \subsection{Coxeter's result for the disk}\label{sec:coxeter} In this section we recall Coxeter's result for braid groups over the disk that strongly motivates this paper. Let $P$ denote one of the five Platonic polyhedra (see Figure~\ref{fig:platonics}) and let $\Sigma$ be one of the faces of $P$, which is a regular polygon. \begin{figure}[!htb] \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=0.3\columnwidth]{tetraedro.png} \caption*{Tetrahedron} \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=0.3\columnwidth]{icosaedro.png} \caption*{Icosahedron} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=0.3\columnwidth]{dodecaedro.png} \caption*{Dodecahedron} \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=0.3\columnwidth]{octaedro.png} \caption*{Octahedron} \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=0.3\columnwidth]{cubo.png} \caption*{Cube} \end{minipage} \caption{The five regular polyhedra} \label{fig:platonics} \end{figure} We numerically code $P$ by means of a pair of integers $(n,p)$, where \begin{itemize} \item $n$ is the number of edges of $\Sigma$, \item $p$ is the number of polygons $\Sigma$ that meet at each vertex of $P$. \end{itemize} The integer pair $(n,p)$ is called the type of $P$. Now we state the unexpected result obtained by Coxeter about the groups $B_n(p)$.
\begin{thm}[\cite{Co}] \label{thm:coxeter} Let $p\geq3$ and let $B_n(p)$ be the quotient group obtained from the $n$-strand braid group $B_n$ by adding one and only one relation $\sigma_1^p=1$, that is, $$ B_n(p)=\left\langle \sigma_1,\ldots,\sigma_{n-1} \mid \begin{array}{l} \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1} \textrm{ for } 1\leq i\leq n-2\\ \sigma_i\sigma_j=\sigma_j\sigma_i \textrm{ for } \left|i-j\right|\geq 2\\ \sigma_1^p=1 \end{array} \right\rangle. $$ Then the quotient group $B_n(p)$ is a finite group if and only if $(n,p)$ corresponds to the type of one of the five Platonic solids (regular polyhedra). Furthermore, the order of the finite group $B_n(p)$ is given by $$ \left|B_n(p)\right|=\left(\frac{f}{2}\right)^{n-1} n! $$ where $f$ is the number of faces of the Platonic solid of type $(n,p)$. \end{thm} Therefore, it follows from Theorem~\ref{thm:coxeter} that there are only five finite groups $B_n(p)$ with $n,p\geq3$, namely: \begin{table}[htb] \centering \begin{tabular}{c|c || c|c} \hline Polyhedron & Type $(n,p)$ & Quotient group & Order \\ \hline tetrahedron & (3,3) & $B_3(3)$ & 24\\ hexahedron (cube) & (4,3) & $B_4(3)$ & 648\\ octahedron & (3,4) & $B_3(4)$ & 96 \\ dodecahedron & (5,3) & $B_5(3)$ & 155520 \\ icosahedron & (3,5) & $B_3(5)$ & 600 \\ \end{tabular} \caption{Types of Platonic solids and the finite groups $B_n(p)$} \end{table} Motivated by this unexpected result from Coxeter's work on the classical braid groups, we are interested in exploring these quotients for surface braid groups, as we show in the following subsections. \subsection{Braid groups over surfaces different from the sphere and the projective plane} \label{subsec:kpi1} Let $n\geq 2$ and let $B_n(M)$ denote the braid group over a surface $M$. In contrast with the case of the disk (see \cite{Co}), the group $B_n(M)(q)$ is infinite for every integer $q\geq 3$ when $M$ is a closed surface different from the sphere and the projective plane. In this subsection we prove Theorem~\ref{thm:mainsurface}, where $H_1(M)$ denotes the first homology group of the surface $M$. We will use presentations of surface braid groups whose generating set contains the Artin generators. Given a group $G$ we denote its abelianization by $G^{Ab}$. \begin{proof}[Proof of Theorem~\ref{thm:mainsurface}] Let $q\geq 3$ and $n\geq 3$ be integers and let $M$ be a closed surface different from the sphere and the projective plane. \begin{enumerate} \item The proof of this item follows using a presentation of the braid group over orientable surfaces given in \cite[Theorem~1.4]{S}. Since the argument is similar in both cases (orientable and non-orientable), we give more details for the non-orientable case below. \item Let $M=\underbrace{\mathbb RP^2\# \cdots \# \mathbb RP^2}_{g \textrm{ projective planes}}$ where $g\geq 2$ is the genus of the non-orientable surface $M$. We give a presentation of the abelianization of the group $B_n(M)(q)$. To do this, we use the presentation of $B_n(M)$ given by Scott \cite[Theorem~1.2]{S}: \begin{itemize} \item Generators: $\sigma_1,\ldots, \sigma_{n-1}$ and $\rho_{i,j}$ where $1\leq i\leq n$, $1\leq j\leq g$. \item Relations: all generators commute. From this, and using Scott's presentation, we get the following information: \end{itemize} \begin{enumerate} \item From \cite[Theorem~1.2, I(ii)]{S} it follows that $\sigma_i =\sigma_{i+1}$, for $i=1,\ldots,n-2$. \item From \cite[Theorem~1.2, III(ii)]{S} we get $\rho_{i,k} =\rho_{i+1,k}$, for $1\leq i\leq n-1$, $1\leq k\leq g$.
\item In \cite[Theorem~1.2, II]{S}, elements $A_{i,j}$ and $B_{i,j}$ were defined, for all $1\leq i < j\leq n$, as conjugates of $\sigma_i^2$. From \cite[Theorem~1.2, II(iii)]{S} (see also \cite[Theorem~1.1, II(iii)]{S}) we obtain, for all $1\leq i < j\leq n$, $B_{i,j}=1$ in $\left( B_n(M)(q) \right)^{Ab}$. So, in $\left( B_n(M)(q) \right)^{Ab}$ it holds that $\sigma_i^2=1$, for all $1\leq i\leq n-1$, as well as $A_{i,j}=1$, for all $1\leq i < j\leq n$. \item As a consequence of the previous item and \cite[Theorem~1.2, II(i)]{S} (see also \cite[Theorem~1.1, II(i)]{S}) we get $\rho_{i,g}^2\rho_{i,g-1}^2\cdots \rho_{i,1}^2 = 1$, for all $i=1,\ldots, n-1$. \end{enumerate} The other relations in \cite[Theorem~1.2]{S} do not contribute further information about $\left( B_n(M)(q) \right)^{Ab}$. Since $\sigma_1^2=1$ and $\sigma_1^q=1$, it follows from Lemma~\ref{lem:bezout} that $\sigma_1^d=1$, where $d=gcd(2,q)$. Therefore, a presentation of the abelianization of $B_n(M)(q)$ is given by: \begin{itemize} \item Generators: $\sigma_1$ and $\rho_{1,j}$ for $1\leq j\leq g$. \item Relations: \end{itemize} \begin{enumerate} \item all generators commute, \item $\sigma_1^d=1$, where $d=gcd(2,q)$, \item $\rho_{1,g}^2\rho_{1,g-1}^2\cdots \rho_{1,1}^2 = 1$. \end{enumerate} We recall that a presentation of the fundamental group of the non-orientable surface $M$ of genus $g$ is given by \begin{equation}\label{eq:presfundMnon} \pi_1(M) = \bigg\langle \rho_{1}, \ldots , \rho_{g} \ \bigg\vert \ \rho_{g}^2\rho_{g-1}^2\cdots \rho_{1}^2 = 1 \ \bigg\rangle. \end{equation} Hence, from the computations given above, we obtain $$ \left( B_n(M)(q) \right)^{Ab} \cong {\mathbb Z_d} \oplus H_1(M), $$ where $d=gcd(2,q)$, proving this item. \item Since the first homology group of a closed surface different from the sphere and the projective plane is infinite, namely $$ H_1(M)\cong \begin{cases} {\mathbb Z}^{2g} & \text{if $M$ is orientable of genus $g$},\\ {\mathbb Z}^{g-1}\oplus{\mathbb Z_2} & \text{if $M$ is non-orientable of genus $g$}, \end{cases} $$ we conclude that the Coxeter-type quotient $B_n(M)(q)$ is infinite. \end{enumerate} \end{proof} \subsection{The sphere and the projective plane} \label{subsec:s2} Now, we exhibit some information about $B_n(M)(q)$ when $M$ is either the sphere or the projective plane. From \cite{FVB} we know that the sphere braid group with $n$ strings, $B_n(\mathbb S^2)$, admits a presentation with generators $\sigma_i$ for $i=1,2,\dots,n-1$ and relations as in \eqref{eq:presbn} plus: \begin{itemize} \item the surface relation $\sigma_1\cdots \sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_1=1$. \end{itemize} Recall that a perfect group $G$ is a group such that $G=[G,G]$. \begin{prop} Let $q\geq 2$ and $n\geq 3$ be integers. Let $d=gcd(q,\, 2(n-1))$. \begin{enumerate} \item The abelianization of $B_n(\mathbb S^2)(q)$ is isomorphic to the cyclic group $\mathbb Z_d$. \item If $q$ and $2(n-1)$ are coprime, then $B_n(\mathbb{S}^2)(q)$ is perfect. \end{enumerate} \end{prop} \begin{proof} Let $q\geq 2$ and $n\geq 3$ be integers and let $d=gcd(q,\, 2(n-1))$. Using the presentation of $B_n(\mathbb{S}^2)$ we conclude that the abelianization of the quotient group $B_n(\mathbb{S}^2)(q)$ has the presentation $$ \setang{\sigma_1}{\sigma_1^q=1,\, \sigma_1^{2(n-1)}=1}, $$ where the second relation comes from the surface relation.
Lemma~\ref{lem:bezout} implies that the order of $\sigma_1\in \left(B_n(\mathbb{S}^2)(q)\right)^{Ab}$ is equal to $d$, where $d=gcd(q, 2(n-1))$. This proves the first item. From the first item of this result and the hypothesis of the second item, we get $\sigma_1=1$ in the abelianization. Since the abelianization of $B_n(\mathbb{S}^2)(q)$ is the trivial group, we conclude that $B_n(\mathbb{S}^2)(q)$ is perfect, proving the second item. \end{proof} For the special case of few strings, Theorem~\ref{thm:s2} gives the result for the Coxeter-type quotient of the sphere braid group, which we prove below. When analyzing the case of four strings, we use triangle groups as defined in \cite[Appendix~I, Section~7]{MK}, see also \cite{M}. \begin{proof}[Proof of Theorem~\ref{thm:s2}] Let $q\geq 3$. \begin{enumerate} \item Since the group $B_2(\mathbb S^2)=\Z_2$ is generated by $\sigma_1$, the result of this item follows immediately from Lemma~\ref{lem:bezout}. \item Recall from \cite[Third Lemma on p.248]{FVB} (see also \cite[Proposition~2.4, Chapter~11]{MK}) that $B_3(\mathbb S^2)$ has order 12 and the elements $\sigma_1$ and $\sigma_2$ have order 4. So, from Lemma~\ref{lem:bezout}, in $B_3(\mathbb S^2)$ it holds $$ \begin{cases} \sigma_1^4=1, & \text{if $gcd(4,q)=4$},\\ \sigma_1^2=1, & \text{if $gcd(4,q)=2$},\\ \sigma_1=1, & \text{if $gcd(4,q)=1$}. \end{cases} $$ From this, it is clear that $B_3(\mathbb S^2)(q)\cong B_3(\mathbb S^2)$ if $gcd(4,q)=4$, and that $B_3(\mathbb S^2)(q)$ is the trivial group $\{1\}$ if $gcd(4,q)=1$. Finally, suppose that $gcd(4,q)=2$. Then it follows from the proof of \cite[Third Lemma on p.248]{FVB} (see also the proof of \cite[Proposition~2.4, Chapter~11]{MK}) that $B_3(\mathbb S^2)(q)\cong S_3$ in this last case, completing the proof of this item. \item The group $B_4(\mathbb S^2)(q)$ admits the following presentation: \begin{equation}\label{eq:presb4s2} B_4(\mathbb S^2)(q) = \bigg\langle \sigma_1, \sigma_2 , \sigma_{3} \ \bigg\vert \ \begin{matrix} \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2, \sigma_2\sigma_3\sigma_2=\sigma_3\sigma_2\sigma_3, \sigma_1\sigma_3=\sigma_3\sigma_1, \\ \sigma_1\sigma_2\sigma_3^2\sigma_2\sigma_1=1, \sigma_1^q=1 \end{matrix} \ \bigg\rangle. \end{equation} We used the GAP System \cite{GAP} to show that $B_4(\mathbb S^2)(q)$ is a finite group in the following cases: \begin{itemize} \item[(q=3):] The group $B_4(\mathbb S^2)(3)$ is isomorphic to the alternating group $A_4$. \item[(q=4):] In this case the group $B_4(\mathbb S^2)(4)$ has order 192. \item[(q=5):] The group $B_4(\mathbb S^2)(5)$ is isomorphic to the alternating group $A_5$. \end{itemize} We illustrate the routine used in the GAP computations for the case $B_4(\mathbb S^2)(3)$; the other cases are similar: \begin{lstlisting}[language=GAP] f3 := FreeGroup( "a", "b", "c" );; gens:= GeneratorsOfGroup(f3);; a:= gens[1];;b:= gens[2];;c:= gens[3];; B4S23:= f3/[ a*b*a*b^-1*a^-1*b^-1, b*c*b*c^-1*b^-1*c^-1, a*c*a^-1*c^-1, a^3, b^3, c^3, a*b*c^2*b*a ]; Order (B4S23); StructureDescription (B4S23); \end{lstlisting} Now, for $q\geq 6$, we show that the group $B_4(\mathbb S^2)(q)$ is infinite. Let $\langle\langle \sigma_1\sigma_3^{-1} \rangle\rangle$ be the normal closure of the element $\sigma_1\sigma_3^{-1}$ in $B_4(\mathbb S^2)(q)$. Then $$ B_4(\mathbb S^2)(q)/\langle\langle \sigma_1\sigma_3^{-1} \rangle\rangle =\langle \sigma_1, \sigma_2 \mid \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2,\, (\sigma_1\sigma_2)^3=1,\, \sigma_1^q=1\rangle.
$$ Taking $a=\sigma_1\sigma_2\sigma_1$ and $b=\sigma_1\sigma_2$, we have $a^2=(\sigma_1\sigma_2)^3=1$ in this quotient, hence $ab=a^{-1}b=\sigma_1^{-1}$, and so $$ B_4(\mathbb S^2)(q)/\langle\langle \sigma_1\sigma_3^{-1} \rangle\rangle =\langle a,\, b \mid a^2=b^3=(ab)^q=1 \rangle. $$ Hence $B_4(\mathbb S^2)(q)/\langle\langle \sigma_1\sigma_3^{-1} \rangle\rangle$ is isomorphic to the triangle group $T(2,3,q)$, which is infinite if and only if $q\geq 6$, see \cite[Theorem~7.1,\, Appendix~I]{MK}. \end{enumerate} \end{proof} Now we move to the case of the projective plane. We recall a presentation of the braid group of the projective plane. \begin{prop}[Section~III of \cite{VB}]\label{apB_n(P2)} The braid group of the projective plane on $n$ strings, $B_n(\R P^2)$, admits the following presentation: \begin{description} \item[Generators:] $\sigma_1,\sigma_2,\dots,\sigma_{n-1},\rho_1,\rho_2,\dots, \rho_n$. \item[Relations:] \ \item[I] $\sigma_i\sigma_j=\sigma_j\sigma_i$ if $|i-j|\ge 2$. \medskip \item[II] $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ for $i=1,\dots,n-2$. \medskip \item[III] $\sigma_i\rho_j=\rho_j\sigma_i$ for $j\neq i,i+1$. \medskip \item[IV] $\rho_i=\sigma_i\rho_{i+1}\sigma_i$ for $i=1,\dots,n-1$. \medskip \item[V] $\rho_{i+1}^{-1}\rho_i^{-1}\rho_{i+1}\rho_i=\sigma_i^2$. \medskip \item[VI] $\rho_1^2=\sigma_1\sigma_2\cdots \sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots\sigma_2\sigma_1$. \end{description} \end{prop} For the case of braid groups over the projective plane we have the following. \begin{prop} Let $q\geq 2$ and $n\geq 2$ be integers. The abelianization of the group $B_n(\R P^2)(q)$ is isomorphic to $\Z_2$ if $q$ is odd, otherwise it is the Klein four group $\Z_2\oplus \Z_2$. \end{prop} \begin{proof} We obtain the result from Lemma~\ref{lem:bezout} and the presentation of $B_n(\R P^2)$ given by Van Buskirk in \cite{VB} (see Proposition~\ref{apB_n(P2)} and also \cite[page~202, Theorem~4.1]{MK}). \end{proof} \begin{rem} Except for the information in Theorem~\ref{thm:s2}, we do not know under which conditions on $n$ and $q$ the groups $B_n(M)(q)$ are finite when $M$ is either the sphere or the projective plane. \end{rem} \section{Coxeter-type quotients and crystallographic surface braid groups}\label{sec:cryst} The quotients of surface braid groups $B_n(M)$ by the commutator subgroup of the respective pure braid group $[P_n(M),P_n(M)]$ considered in this section were studied in depth in \cite{GGO} for the case of the disk and in \cite{GGOP} for the case of closed surfaces, in both cases exploring their connection with crystallographic groups. In what follows, we analyze the Coxeter-type quotient groups $\faktor{B_n(M)}{[P_n(M),P_n(M)]}(q)$ obtained by adding to the presentation of $\faktor{B_n(M)}{[P_n(M), P_n(M)]}$ the relation $\sigma_1^q=1$, for braid groups over closed orientable surfaces and also for the disk. \subsection{Braid groups over the disk} Unlike the case of the Coxeter quotient of the Artin braid group \cite{Co}, see Theorem~\ref{thm:coxeter}, for all $n,q \ge 3$ the Coxeter-type quotient $\faktor{B_n}{[P_n,P_n]}(q)$ is finite. The following result is part of the dissertation of the third author, see \cite[Theorem~3.3]{Sa}. \begin{thm} \label{Coxeimpar} Let $n,q \ge 3$ be integers and let $k\in\N$. The group $\faktor{B_n}{[P_n,P_n]}(q)$ is finite. More precisely: \begin{enumerate} \item [(a)] If $q=2k+1$, then $\faktor{B_n}{[P_n,P_n]}(q)$ is isomorphic to $\Z_q$. \item [(b)] If $q=2k$, then $\faktor{B_n}{[P_n,P_n]}(q)$ has order $k^{\frac{n(n-1)}{2}}\cdot n!$.
\end{enumerate} \end{thm} \begin{proof} Let $n,q \ge 3$ and suppose that $\sigma_1^q=1$. Write $q=2k+r$, with $0\leq r\leq 1$ and $r,k\in\N$. For item~$(a)$, as a consequence of the presentation of the Artin braid group $B_n$ given in \eqref{eq:presbn} we get $\sigma_i^{-1}\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}$, for all $1\le i\le n-2$, so all the generators $\sigma_i$ are conjugate and hence $\sigma_i^q=1$, for all $1\le i\le n-1$. Hence, $\sigma_i=\sigma_i^{-2k}=A_{i,i+1}^{-k}$, for all $1\le i\le n-1$, where $A_{i,j}$ is an Artin generator of the pure Artin braid group. So, in the group $\faktor{B_n}{[P_n,P_n]}(q)$ it holds that $[\sigma_i,\sigma_j]=1$, for all $1\le i<j\le n-1$. Therefore, \begin{align*} \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}&\iff \sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_{i+1}\sigma_{i}\\ &\iff \sigma_i=\sigma_{i+1}, \end{align*} for all $1\le i\le n-2$. Then, $\faktor{B_n}{[P_n,P_n]}(q)$ is isomorphic to $\langle \sigma_1\mid\sigma_1^q=1\rangle$, proving item~$(a)$. Now we prove item~$(b)$. By hypothesis, we have $\sigma_1^{2k}=1$. As before, we may conclude that $\sigma_i^{2k}=1$, for all $1\le i\le n-1$, so $A_{i,i+1}^k=1$, for all $1\le i\le n-1$. Recall the definition of the pure Artin generator $A_{i,j}=\sigma_{j-1}\sigma_{j-2}\cdots\sigma_{i}^2\cdots\sigma_{j-2}^{-1}\sigma_{j-1}^{-1}$. So, $A_{i,j}^k=1$, for all $1\le i<j\le n$. We recall that the group $\faktor{P_n}{[P_n,P_n]}$ is free abelian with a basis given by the classes of the pure Artin generators $\{ A_{i,j} \mid 1\leq i<j\leq n \}$. Hence, in $\faktor{B_n}{[P_n,P_n]}(q)$ the natural projection of the group $\faktor{P_n}{[P_n,P_n]}\leq \faktor{B_n}{[P_n,P_n]}$ is isomorphic to $\underbrace{\Z_k\times\cdots\times\Z_k}_\frac{n(n-1)}{2}$. From the above we get the following short exact sequence $$ 1\longrightarrow \underbrace{\Z_k\times\cdots\times\Z_k}_\frac{n(n-1)}{2}{\longrightarrow} \faktor{B_n}{[P_n,P_n]}(q) {\longrightarrow} S_n\longrightarrow 1. $$ Therefore the middle group $\faktor{B_n}{[P_n,P_n]}(q)$ has finite order $k^{\frac{n(n-1)}{2}}\cdot n!$, and with this we verify item~(b). From items~$(a)$ and~$(b)$ it follows that, for any integer $q\geq 3$, the group $\faktor{B_n}{[P_n,P_n]}(q)$ is finite. \end{proof} \subsection{Braid groups over orientable surfaces} Let $M$ be a compact, orientable surface without boundary of genus $g\geq 1$, and let $n\geq 2$. We will use the presentation of $\faktor{B_n(M)}{[P_n(M), P_n(M)]}$ given in \cite{GGOP}. \begin{prop}{\cite[Proposition~9]{GGOP}}\label{prop:pres_quo} Let $M$ be a compact, orientable surface without boundary of genus $g\geq 1$, and let $n\geq 1$. The quotient group $\faktor{B_n(M)}{[P_n(M), P_n(M)]}$ has the following presentation: \noindent Generators: $\sigma_{1},\ldots, \sigma_{n-1}, a_{i,r},\,1\leq i \leq n,\, 1\leq r \leq 2g$. \noindent Relations: \begin{enumerate}[(a)] \item\label{it:pq1} the Artin relations \begin{equation*}\label{eq:artin} \begin{cases} \sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i} & \text{for all $1\leq i,j\leq n-1$, $\left\lvert i-j\right\rvert \geq 2$}\\ \sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1} & \text{for all $1\leq i\leq n-2$.} \end{cases} \end{equation*} \item\label{it:pq2} $\sigma^{2}_{i}=1$, for all $i=1,\ldots, n-1$. \item\label{it:pq3} $[a_{i,r},a_{j,s}]=1$, for all $i,j=1,\ldots,n$ and $r,s=1,\ldots, 2g$. \item\label{it:pq4} $\sigma_{i}a_{j,r}\sigma^{-1}_{i}= a_{\tau_{i}(j),r}$ for all $1\leq i\leq n-1$, $1\leq j\leq n$ and $1\leq r\leq 2g$, where $\tau_{i}$ denotes the transposition $(i,\, i+1)$.
\end{enumerate} \end{prop} In \cite[Figure~9]{GM} one can see a geometric description of the elements $a_{i,r}$ of Proposition~\ref{prop:pres_quo}. We have the following result about Coxeter-type quotients and the quotient groups considered in \cite{GGOP}. \begin{prop} \label{prop:surfcrystcoxeter} Let $M$ be a compact, orientable surface without boundary of genus $g\geq 1$, let $q\geq 3$ and let $n \geq 2$. The group $\faktor{B_n(M)}{[P_n(M), P_n(M)]}(q)$ is infinite. \begin{enumerate} \item\label{it:cryst1} If $q$ is odd, the Coxeter-type quotient $\faktor{B_n(M)}{[P_n(M), P_n(M)]}(q)$ is isomorphic to a free abelian group of rank $2g$. \item\label{it:cryst2} If $q$ is even, the group $\faktor{B_n(M)}{[P_n(M), P_n(M)]}(q)$ is isomorphic to the crystallographic group quotient $\faktor{B_n(M)}{[P_n(M), P_n(M)]}$. \end{enumerate} \end{prop} \begin{proof} Let $M$ be a compact, orientable surface without boundary of genus $g\geq 1$, and let $n \geq 2$. We shall use the presentation of the quotient group $\faktor{B_n(M)}{[P_n(M), P_n(M)]}$ given in Proposition~\ref{prop:pres_quo}. From Proposition~\ref{prop:pres_quo}\eqref{it:pq2} we have that in $\faktor{B_n(M)}{[P_n(M), P_n(M)]}(q)$ it holds $\sigma_i^2=1$, for all $1\leq i\leq n-1$. Hence, for all $1\leq i\leq n-1$, it is true that $\sigma_i^2=1$ and $\sigma_i^q=1$. If $q$ is odd, then from Lemma~\ref{lem:bezout} we get $\sigma_i=1$, for all $1\leq i\leq n-1$. In this case relation \eqref{it:pq4} of Proposition~\ref{prop:pres_quo} identifies $a_{j,r}$ with $a_{1,r}$ for all $1\leq j\leq n$, so the quotient is generated by $a_{1,1},\ldots,a_{1,2g}$ subject only to the commuting relations \eqref{it:pq3}, and therefore it is free abelian of rank $2g$, proving item~\eqref{it:cryst1}. If $q$ is even, then the relation $\sigma_1^q=1$ already follows from $\sigma_1^2=1$, so adding it does not change the group, which gives item~\eqref{it:cryst2}. \end{proof} \begin{thebibliography}{99} {\small \bibitem[A1]{A1} E.~Artin, Theorie der Z\"opfe, \emph{Abh.\ Math.\ Sem.\ Univ.\ Hamburg} \textbf{4} (1925), 47--72. \bibitem[A2]{A2} E.~Artin, Theory of braids, \emph{Ann.\ Math.} \textbf{48} (1947), 101--126. \bibitem[As]{As} J.~Assion, A proof of a theorem of Coxeter, \emph{C.~R.\ Math.\ Rep.\ Acad.\ Sci.\ Canada} \textbf{1} (1978/79), no.~1, 41--44. \bibitem[BDOS]{BDOS} P.~Bellingeri, C.~Damiani, O.~Ocampo, and C.~Stylianakis, Powers of half-twists and congruence subgroups of braid groups, in preparation. \bibitem[Co]{Co} H.~S.~M.~Coxeter, Factor groups of the braid groups, \emph{Proc.\ Fourth Canad.\ Math.\ Congress} (1957), 95--122. \bibitem[FN]{FoN} R.~H.~Fox and L.~Neuwirth, The braid groups, \emph{Math.\ Scandinavica} \textbf{10} (1962), 119--126. \bibitem[FVB]{FVB} E.~Fadell and J.~Van Buskirk, The braid groups of $\mathbb{E}^2$ and $\mathbb{S}^2$, \emph{Duke Math.\ J.} \textbf{29} (1962), 243--257. \bibitem[GAP]{GAP} The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming, Version 4.12.2}; 2022, \url{https://www.gap-system.org}. \bibitem[GGO]{GGO} D.~L.~Gon\c{c}alves, J.~Guaschi and O.~Ocampo, A quotient of the Artin braid groups related to crystallographic groups, \emph{J.~Algebra} \textbf{474} (2017), 393--423. \bibitem[GGOP]{GGOP} D.~L.~Gon\c calves, J.~Guaschi, O.~Ocampo and C.~M.~Pereiro, Crystallographic groups and flat manifolds from surface braid groups, \emph{Topol.\ Appl.} \textbf{293} (2021), Article 107560. \bibitem[GM]{GM} J.~Gonz\'alez-Meneses, New presentations of surface braid groups, \emph{J.~Knot Th.\ Ramif.} \textbf{10} (2001), 431--451. \bibitem[GMP]{GMP} J.~Gonz\'alez-Meneses and L.~Paris, Vassiliev invariants for braids on surfaces, \emph{Trans.\ Amer.\ Math.\ Soc.} \textbf{356} (2004), 219--243. \bibitem[Ha]{Ha} V.~L.~Hansen, Braids and Coverings: selected topics, London Math.\ Soc.\ Student Texts~\textbf{18}, Cambridge University Press, 1989.
\bibitem[GPi]{GPi} J.~Guaschi and D.~Juan-Pineda, A survey of surface braid groups and the lower algebraic K-theory of their group rings, in: ``Handbook of group actions, Vol.~II'' (L.~Ji, A.~Papadopoulos, S.-T.~Yau, editors), Adv.\ Lect.\ Math.~\textbf{32}, Int.\ Press, Somerville, MA, 2015, 23--75. \bibitem[JJ]{JJ} G.~A.~Jones and J.~M.~Jones, Elementary Number Theory, Springer Undergraduate Mathematics Series, Springer-Verlag London, 1998. \bibitem[M]{M} W.~Magnus, Non-Euclidean Tesselations and Their Groups, Academic Press, New York, 1974. \bibitem[MK]{MK} K.~Murasugi and B.~I.~Kurpita, A study of braids, Mathematics and its Applications~\textbf{484}, Kluwer Academic Publishers, Dordrecht, 1999. \bibitem[Sa]{Sa} P.~C.~C.~Santos~J\'unior, Um quociente do grupo de tran\c cas de Artin relacionado aos grupos cristalogr\'aficos, Dissertation thesis, Universidade Federal da Bahia, Salvador, Brazil, March 2019. \bibitem[Sc]{S} G.~P.~Scott, Braid groups and the group of homeomorphisms of a surface, \emph{Proc.\ Camb.\ Phil.\ Soc.} \textbf{68} (1970), 605--617. \bibitem[VB]{VB} J.~Van Buskirk, Braid groups of compact 2-manifolds with elements of finite order, \emph{Trans.\ Amer.\ Math.\ Soc.} \textbf{122} (1966), 81--97. \bibitem[W]{W} B.~Wajnryb, A braidlike presentation of $Sp(n,p)$, \emph{Israel J.~Math.} \textbf{76} (1991), no.~3, 265--288. \bibitem[Z]{Z} O.~Zariski, The topological discriminant group of a Riemann surface of genus $p$, \emph{Amer.\ J.~Math.} \textbf{59} (1937), 335--358. } \end{thebibliography} \end{document}
2412.14410v1
http://arxiv.org/abs/2412.14410v1
The proper geometric dimension of the mapping class group of an orientable surface with punctures
\documentclass[a4paper,12pt,leqno]{amsart} \input{Preamble} \usepackage{verbatim} \begin{document} \title[]{The proper geometric dimension of the mapping class group of an orientable surface with punctures} \author[N. Colin]{Nestor Colin} \author[R. Jiménez Rolland]{Rita Jiménez Rolland} \author[P. L. León Álvarez ]{Porfirio L. León Álvarez} \address{Instituto de Matemáticas, Universidad Nacional Autónoma de México. Oaxaca de Juárez, Oaxaca, México 68000} \email{[email protected]} \email{[email protected]} \email{[email protected]} \author[L. J. Sánchez Saldaña]{Luis Jorge S\'anchez Salda\~na} \address{Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México} \email{[email protected]} \subjclass{57K20, 20J05, 20F65.} \date{} \keywords{Mapping class groups, proper geometric dimension, classifying space for proper actions} \begin{abstract} We show that the {\it full} mapping class group of any orientable closed surface with punctures admits a cocompact classifying space for proper actions of dimension equal to its virtual cohomological dimension. This was proved for closed orientable surfaces and for {\it pure} mapping class groups by Aramayona and Martínez Pérez. As a consequence of our result we also obtain the proper geometric dimension of {\it full} spherical braid groups.\end{abstract} \maketitle \section{Introduction} Let \( S_g \) be an orientable closed connected surface of genus \( g\geq 0 \) and for \( n\geq 1 \) consider \( \{p_1, \ldots, p_n\} \) a collection of distinguished points in \( S_g\). The {\it mapping class group} $\Mod_g^n$ is the group of isotopy classes of all orientation-preserving homeomorphisms $f:S_g^n\rightarrow S_g^n$, where $S_g^{n}:=S_g-\{p_1, \ldots, p_n\}$. The group $\modgn$ permutes the set $\{p_1, \ldots, p_n\}$ and the kernel of such action is the {\it pure mapping class group} $\PMod_g^n$. When $n=0$, we remove it from the notation. In this paper, we compute the minimal dimension $\gdfin (\Mod_g^n)$, often called the \emph{proper geometric dimension}, of a classifying space $\underline{E}\Mod_g^n$ for proper actions of $\Mod_g^n$. Recall that, for a discrete group $G$, the space $\underline{E}G$ is a contractible $G$-CW-complex on which G acts properly, and such that the fixed point set of a subgroup $H < G$ is contractible if $H$ is finite, and is empty otherwise. Since $\Mod_g^n$ is virtually torsion-free, its virtual cohomological dimension $\vcd(\Mod_g^n)$ is a lower bound for $\gdfin(\Mod_g^n)$. Our main result is the following: \begin{theorem}\label{thm:main} For any $g\geq 0$ and $n\geq 1$, the proper geometric dimension of $\Mod_g^n$ is $$\gdfin(\Mod_g^n)=\vcd(\Mod_g^n).$$ Moreover, there exists a cocompact $\underline{E}\Mod_g^n$ of dimension equal to $\vcd(\Mod_g^n)$. \end{theorem} The computation of the proper geometric dimension follows from \cref{thm:main:genus:zero} for $g=0$, \cref{thm:main:genus:one} for $g=1$, \cref{thm:main:genus:2} for $g=2$, and \cref{thm:main:genus:atleast3} for $g\geq 3$. These theorems are the main results of Sections \ref{sec:zero}, \ref{sec:one}, \ref{sec:two}, and \ref{sec:atleast3} respectively, which provides a fair overview of the structure of the paper. For $2g+n>2$, the Teichmüller space $\Tgn$ is known to be a model for $\underline E \Mod_g^n$; see for instance \cite[Proposition 2.3]{WolpertJi} and \cite[Section 4.10]{Lu05}. 
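For example, for $g=0$ and $n=6$ Harer's computation (recalled in \eqref{vcd:mcg} below) gives $\vcd(\Mod_0^6)=6-3=3$, so \cref{thm:main} provides a cocompact model for $\underline{E}\Mod_0^6$ of dimension $3$, whereas the corresponding Teichmüller space has real dimension $6g-6+2n=6$.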
In fact, Ji and Wolpert proved in \cite[Theorem 1.2]{WolpertJi} that the {\it truncated} Teichmüller space $\Tgn(\epsilon)$ is a cocompact classifying space for proper actions for $\epsilon$ sufficiently small. Hence, the {\it moreover} part of the main result follows from our computation of $\gdfin(\Mod_g^n)$ and a result of Lück \cite[Theorem 13.19]{Lu89} (see \cref{thm:Luck} below). \subsection{Known results and filling a gap in the literature} In \cite[Section 2]{Ha86} Harer constructed a {\it spine} of Teichmüller space, that is, a $\PMod_g^n$-equivariant deformation retract of $\Tgn$, provided $n>0$. This spine gives a model for $\underbar{E}\PMod_g^n$, which is of minimal dimension for the mapping class group $\Mod_g^1$ of an orientable surface with exactly one puncture; see \cite[Theorem 2.1]{Ha86} and \cite[Introduction]{LaEspinayLos4}. On the other hand, Aramayona and Martínez Pérez proved that the mapping class group $\Mod_g$ of any closed orientable surface \cite[Theorem 1.1]{AMP14} admits a cocompact classifying space for proper actions of dimension equal to its virtual cohomological dimension. They use their result and the Birman exact sequence to show that this is also the case for the {\it pure} mapping class group $\PMod^n_g$ \cite[Corollary 1.3]{AMP14}; see also arXiv:1302.4238v2 and \cite[Corollary 3.5]{LaEspinayLos4}. However, \cite[Theorem 1.1]{AMP14} does not imply the result for the {\it full} mapping class group $\Mod_g^n$ when $n\geq 2$, and \cite[Theorem 1.1]{AMP14} has been misquoted in the literature. For instance, the existence of a model of minimal dimension for the {\it full} mapping class group is used in the proofs of \cite[Theorem 1 $\&$ Main Theorem]{JPT16}, \cite[Proposition 5.3 $\&$ Theorem 1.4]{Nucinkis:Petrosyan}, \cite[Theorem 1.1]{AJPTN18}, and \cite[Theorem 1.5]{JRLASS24}. Our \cref{thm:main} fills this {\it gap}, and together with \cite[Theorem 1.1]{AMP14} it gives the necessary ingredients to prove the following result. \begin{corollary}[Proposition 5.3 of \cite{Nucinkis:Petrosyan}] Let $S$ be a closed (possibly disconnected) surface with possibly a finite number of punctures. Then there is a cocompact model for $\underline{E}\Mod(S)$ of dimension $\gdfin (\Mod(S)) = \vcd (\Mod(S))$. \end{corollary} \subsection{Proper geometric dimension of spherical braid groups} The {\it full $n$-th strand spherical braid group $B_n(S_0)$} is the fundamental group of the $n$th unordered configuration space of the sphere $S_0$. It is the trivial group for $n=1$ and a cyclic group of order $2$ for $n=2$. For all $n\geq 1$ these groups are virtually torsion free and their virtual cohomological dimension is $\vcd(B_n(S_0))=\max\{0,n-3\}$ \cite[Theorem 5]{VcdBraid}. For $n\geq 3$ the mapping class group $\Mod_0^n$ is related to $B_n(S_0)$ by the following central extension (see for example \cite[Section 2.4]{BraidSurvey}) \[ 1\rightarrow \mathbb{Z}/2\rightarrow B_n(S_0)\rightarrow \Mod(S_0^n)\rightarrow 1.\] Note that any cocompact model for $\underline E \Mod(S_0^n)$ is, via the above short exact sequence, a cocompact model for $\underline E B_n(S_0)$. It follows from \cite[Lemma 5.8]{Lu05} that $\gdfin(B_n(S_0))=\gdfin(\Mod(S_0^n))$, and \cref{thm:main} implies: \begin{corollary} For any $n\geq 1$, $\gdfin(B_n(S_0))=\vcd(B_n(S_0))=\max\{0,n-3\}$. Moreover, there exists a cocompact model for $\underline E B_n(S_0)$ of dimension $\vcd(B_n(S_0))$.
\end{corollary} \subsection{Overview of the proof and the paper} In order to prove our main result \cref{thm:main}, we follow the general strategy of \cite{AMP14} and obtain the proper geometric dimension $\gdfin(\Mod_g^n)$ by computing its algebraic counterpart $\cdfin(\Mod_g^n)$. This computation amounts to verifying that, for any finite subgroup $F$ of $\Mod_g^n$, the inequality $\vcd(NF) +\lambda (F)\leq \vcd(\Mod_g^n)$ holds, and then using \cite[Theorem 3.3]{AMP14} (see \cref{thm:aramayona:martinezperez} below). Here $NF$ denotes the {\it normalizer} of $F$ in $\Mod_g^n$ and $\lambda (F)$ is the {\it length} of $F$. In \cref{sec:dimensions} we introduce the necessary definitions and details. An application of Nielsen realization and \cref{lemma:Maher}, due to Maher, implies that $\vcd(NF)=\vcd(\Mod_{g_F}^{n_F})$ for any finite subgroup $F$ of $\Mod_g^n$. The parameters $g_F$ and $n_F$ are invariants of the orbifold quotient $S_g^n/F$, as described in \cref{mcg}. Most of the present paper deals with either computing or upper-bounding the parameters $n_F$ and $g_F$. Once $g_F$ and $n_F$ are computed, we can use Harer's computation of the $\vcd$ of mapping class groups (\cref{vcd:mcg}) to obtain $\vcd(\Mod_{g_F}^{n_F})$. On the other hand, $\lambda(F)$ can be bounded by the number of factors in the prime decomposition of $|F|$. For $g\geq 3$, we obtain \cref{thm:main:genus:atleast3} as a straightforward application of \cite[Proposition 4.4]{AMP14}, which we explain in \cref{sec:atleast3}. However, for genus $g=0$, $g=1$ and $g=2$, an independent and careful analysis of the finite subgroups of $\Mod_g^n$ is required. This is done in Sections \ref{sec:zero}, \ref{sec:one} and \ref{sec:two}, respectively.\medskip \noindent{\bf Acknowledgments.} The authors thank Omar Antolín Camarena and Jesús Nuñez Zimbrón for useful discussions. The first author was funded by CONAHCyT through the program \textit{Estancias Posdoctorales por México.} The third author's work was supported by UNAM \textit{Posdoctoral Program (POSDOC)}. All authors are grateful for the financial support of DGAPA-UNAM grant PAPIIT IA106923. \section{Preliminaries} \subsection{Proper geometric and cohomological dimensions}\label{sec:dimensions} There are several notions of dimension defined for a given group $G$. We recall in this section the notions that will be used in this paper. A model for the {\it classifying space of $G$ for proper actions $\underbar{E}G$} is a $G$-CW-complex $X$ such that the fixed point set $X^H$ of a subgroup $H < G$ is contractible if $H$ is finite, and is empty otherwise. Such a model always exists and is unique up to proper $G$-homotopy. The {\it proper geometric dimension of $G$} is defined to be $$\gdfin(G)=\min\{ n\in\mathbb{Z}_{\geq 0}:\ \text{there exists an $n$-dimensional model for }\underbar{E}G\}.$$ On the other hand, let {\sc Fin} be the collection of finite subgroups of $G$, and consider the restricted orbit category $\mathcal{O}_{\text{\sc Fin}}G$, which has as objects the homogeneous $G$-spaces $G/H$, $H\in${\sc Fin}, and whose morphisms are $G$-maps. An $\mathcal{O}_{\text{\sc Fin}}G$-module is a contravariant functor from $\mathcal{O}_{\text{\sc Fin}}G$ to the category of abelian groups, and a morphism between two $\mathcal{O}_{\text{\sc{Fin}}}G$-modules is a natural transformation of the underlying functors. The category $\mathcal{O}_{\text{\sc{Fin}}}G$-mod of $\mathcal{O}_{\text{\sc{Fin}}}G$-modules is an abelian category with enough projectives; see for example \cite[p.7]{MV03}.
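For instance, if $G$ is a finite group, then a single point is a model for $\underline{E}G$ and hence $\gdfin(G)=0$; at the other extreme, if $G$ is torsion-free, then {\sc Fin} consists only of the trivial subgroup and $\underline{E}G$ is just the usual classifying space $EG$.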
The {\it proper cohomological dimension of $G$} is defined to be \[\cdfin(G)=\min\left\{ n\in\mathbb{Z}_{\geq 0}:\ \exists\text{ a projective resolution of the $\mathcal{O}_{\text{\sc{Fin}}}G$-module $\mathbb{Z}_{\text{\sc{Fin}}}$ of length $n$}\right\},\] where $\mathbb{Z}_{\text{\sc{Fin}}}$ is the constant $\mathcal{O}_{\text{\sc{Fin}}}G$-module given by $\mathbb{Z}_{\text{\sc{Fin}}} (G/H) = \mathbb{Z}$ for all $H\in{\text{\sc{Fin}}}$, and every morphism of $\mathcal{O}_{\text{\sc{Fin}}}G$ goes to the identity function. The {\it cohomological dimension $\cd(H)$} of a group $H$ is the length of the shortest projective resolution, in the category of $H$-modules, of the trivial $H$-module $\mathbb{Z}$. If $G$ is virtually torsion free, then $G$ contains a torsion free subgroup $H$ of finite index, and the {\it virtual cohomological dimension of $G$} is defined to be $\vcd(G) = \cd(H)$. A well-known theorem of Serre establishes that $\vcd(G)$ is well defined, that is, it does not depend on the choice of the finite index torsion free subgroup $H$ of $G$. For every virtually torsion free group $G$, we have the following inequalities (see \cite[Theorem 2]{BLN01}) \begin{equation} \vcd(G)\leq \cdfin(G) \leq \gdfin(G) \leq \max\{3, \cdfin(G)\}. \end{equation} Following the general strategy of \cite{AMP14}, we will determine the proper geometric dimension $\gdfin(\Mod_g^n)$ by computing the algebraic counterpart $\cdfin(\Mod_g^n)$. We recall here the two main properties of $\cdfin(G)$ that we will need to obtain \cref{thm:main}. First, the following theorem is a consequence of \cite[Theorem 13.19]{Lu89}, as explained in \cite[Section 3]{AMP14}; see also \cite[Theorem 1]{BLN01}. \begin{theorem}\label{thm:Luck} Let $G$ be a group with $\cdfin(G)=d\geq 3$. Then there is a $d$-dimensional $\underbar{E}G$. Moreover, if $G$ has a cocompact model for $\underbar{E}G$, then it also admits a cocompact $\underbar{E}G$ of dimension $d$. \end{theorem} Now, let $F$ be a finite subgroup of $G$. We denote by $Z_G(F)$, $N_G(F)$, and $W_G(F)=N_G(F)/F$ the centralizer, the normalizer, and the {\it Weyl group of} $F$, respectively. If there is no risk of confusion we will omit the parentheses and subscripts, i.e., we will use the notation $ZF$, $NF$, and $WF$. The \emph{length} $\lambda(F)$ of a finite group $F$ is the largest $i\geq 0$ for which there is a sequence \[1=F_0<F_1 < \cdots < F_i=F.\] The second useful result towards our \cref{thm:main} is the following theorem by Aramayona and Martínez Pérez, which gives us a criterion to compute the proper cohomological dimension. It is a consequence of a result of Connolly and Kozniewski \cite[Theorem A]{CK86}. \begin{theorem}\label{thm:aramayona:martinezperez} Let $G$ be a virtually torsion free group such that, for any finite subgroup $F\leq G$, $\vcd(WF)+\lambda(F)\leq \vcd(G)$. Then $\cdfin(G)= \vcd(G)$. \end{theorem} \begin{remark} If $F$ is a finite subgroup of $G$, then the normalizer $NF$ and the Weyl group $WF$ are commensurable, and therefore $\vcd(NF)=\vcd(WF)$. In our analysis below, we will obtain upper bounds for $\vcd(NF)$ in order to apply \cref{thm:aramayona:martinezperez}. \end{remark} The following lemma will be useful in our computations below. For a given $g\in G$, we denote (respectively) by $Z(g)$, $N(g)$ and $W(g)$ the centralizer, the normalizer and the Weyl group of the cyclic subgroup $\langle g\rangle$ in $G$. \begin{lemma}\label{lem:length:vcdWF} Let $G$ be a virtually torsion free group and let $F$ be a finite subgroup.
Then \begin{enumerate} \item $\lambda(F)\leq \log_2(|F|)$, and \item $\vcd(WF)\leq \min \{\vcd(Z(g)) \mid g\in F\}$. \end{enumerate} \end{lemma} \begin{proof} Item (1) is \cite[Lemma 3.1]{HSST}. For item (2), note that $ZF$ is commensurable with $WF$, hence they have the same $\vcd$. The conclusion follows from the fact that $ZF=\cap_{g\in F}Z(g)$ and the monotonicity of the $\vcd$. \end{proof} \subsection{Mapping class groups}\label{mcg} For any $g\geq 0$ and $n\geq 0$, the group $\Mod_{g}^n$ is virtually torsion free and its virtual cohomological dimension was computed by J.~L.~Harer in \cite{Ha86}: \begin{equation}\label{vcd:mcg} \vcd(\modgn) = \begin{cases} 0& \text{if } g=0 \text{ and } n< 3\\ n-3 & \text{if } g=0 \text{ and } n\geq 3 \\ 1 & \text{if } g=1 \text{ and } n= 0 \\ 4g-5 & \text{if } g\geq 2 \text{ and } n=0\\ 4g-4+n & \text{if } g\geq 1 \text{ and } n\geq 1. \end{cases} \end{equation} Let $F$ be a finite subgroup of $\modgn$. By Nielsen realization (\cite[Theorem 7.2]{FM12},\cite{KER}), when $\chi(S_g^n)<0$ there is a hyperbolic structure on $S_g^n$ for which $F$ acts by orientation-preserving isometries. By a slight abuse of notation, we denote by $S_g^n/F$ both the orbifold and the underlying topological surface. In what follows we will use the following notation: \begin{itemize} \item $g_F$ is the genus of $S_g^n/F$, \item $o_F$ is the number of elliptic (orbifold) points of $S_g^n/F$, \item $n_F$ is the number $o_F$ of elliptic points plus the number of orbits of punctures of this action, \item $(g_F;p_1^F,\ldots,p_{o_F}^F)$ is often called the {\it signature of $F$}, where $p_1^F,\ldots,p_{o_F}^F$ are the orders of the elliptic points of $S_g^n/F$. \end{itemize} Notice that $n_F\leq \frac{n}{|F|}+o_F$. The following lemma is due to Maher. Together with Harer's computation \cref{vcd:mcg}, it gives us a way to compute the $\vcd$ of the Weyl group $WF$ of any finite subgroup $F$ of $\Mod_g^n$. \begin{lemma}\label{lemma:Maher} \cite[Proposition 2.3]{Ma11} Let $F\leq \Mod_g^n$ be a finite subgroup with $g_F$ and $n_F$ as before. Then $NF$ has finite index in $\Mod_{g_F}^{n_F}$. In particular, $$\vcd(WF) =\vcd (NF)= \vcd(\Mod_{g_F}^{n_F}).$$ \end{lemma} \section{Genus 0}\label{sec:zero} The main result of this section is \cref{thm:main:genus:zero}, which proves the case $g=0$ of \cref{thm:main}. In the first half of this section we study the finite subgroups of $\Mod_0^n$ and their actions on $S_0^n$; our analysis has as a starting point Stukow's classification of maximal finite subgroups of $\Mod_0^n$ given in \cite{STUKOW}. In the second half of the section we apply the results of the first half to prove \cref{thm:main:genus:zero}. It is worth saying that we dealt with the cases $5\leq n\leq 11$ one by one, and all the computations of $\vcd(NF)$ and $\lambda(F)$ are summarized in \cref{appendixA} by means of several tables. \subsection{Finite subgroups of $\Mod_0^n$ and their realization} We start by recalling Stukow's classification of maximal finite subgroups of $\Mod_0^n$.
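As an illustration of the statement below, for $n=5$ it gives that every maximal finite subgroup of $\Mod_0^5$ is isomorphic to $\Z/4$, $D_{10}$ or $D_{6}$: cases (4)--(6) do not occur since $5$ is not congruent to any of the listed residues.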
\begin{theorem}\cite[Theorem 3]{STUKOW}\label{thm:classification:stukow} A finite subgroup $F$ of $\Mod_0^n$ is a maximal finite subgroup of $\Mod_0^n$ if and only if $F$ is isomorphic to one of the following: \begin{enumerate} \item a cyclic group $\Z/(n-1)$ if $n\neq 4$, \item the dihedral group $D_{2n}$ of order $2n$, \item the dihedral group $D_{2(n-2)}$ of order $2(n-2)$ if $n=5$ or $n\geq 7$, \item the alternating group $A_4$ if $n\equiv 4$ or $10\ (\mathrm{ mod}\ 12)$, \item the symmetric group $\mathfrak{S}_4$ if $n\equiv 0,2,6,8,12,14,18$ or $20\ (\mathrm{ mod }\ 24)$, \item the alternating group $A_5$ if $n\equiv 0,2,12,20,30,32,42$ or $50\ (\mathrm{ mod }\ 60)$. \end{enumerate} \end{theorem} Stukow also studied the conjugacy classes of finite subgroups of $\Mod_0^n$. \begin{theorem}\cite[Theorem 4, Corollary 5]{STUKOW}\label{thm:maximal:finite:stukow} Let $F$ be a finite subgroup of $\Mod_0^n$. Then the set of conjugacy classes of subgroups of $\Mod_0^n$ isomorphic to $F$ has two elements if \begin{enumerate} \item $F\cong \Z/2$ with $n$ even, or \item $F\cong D_{2m}$ with $2m|n$ or $2m|n-2$, \end{enumerate} and one element otherwise. In particular, any two maximal finite subgroups of $\Mod_0^n$ are conjugate if and only if they are isomorphic. \end{theorem} Since, for a given finite subgroup $F$ of $\Mod_0^n$, the number $n_F$ only depends on the conjugacy class of $F$, we only have to work with a representative of each conjugacy class. Let us describe representatives of the conjugacy classes of maximal finite subgroups of $\Mod_0^n$ of types (1)--(3), following the numeration of \Cref{thm:classification:stukow}. For this, let us denote by $\Nte$ and $\Sur$ the north and south poles of $S_0$, respectively. Also, given any group isomorphic to $D_{2m}$, recall that it contains an index $2$ cyclic subgroup of order $m$; we call this subgroup the \emph{subgroup of rotations}, and every element which does not belong to the subgroup of rotations is called a \emph{reflection} (all such elements have order 2). \begin{enumerate} \item Up to a self-homeomorphism of $S_0^n$, the action of $\Z/(n-1)$ can be realized as follows: we place one of the punctures at $\Nte$ and the remaining $n-1$ on the equator equidistantly. The generator of $\Z/(n-1)$ acts on $S_0^n$ by a rotation of angle $2\pi/(n-1)$ that fixes $\Nte$ and $\Sur$. \item Up to a self-homeomorphism of $S_0^n$, the action of $D_{2n}$ can be realized as follows: we place all the punctures along the equator equidistantly. As in the previous case we have a natural action of the subgroup of rotations $\Z/n$ on $S_0^n$, and a reflection can be realized as a half turn that interchanges $\Nte$ and $\Sur$ and leaves the set of punctures invariant. \item The realization of $D_{2(n-2)}$ is completely analogous to the previous case; the only difference is that we place two of the punctures at $\Nte$ and $\Sur$ and the remaining $n-2$ on the equator. \end{enumerate} The numeration in the following proposition is designed to be compatible with the numeration in \cref{thm:classification:stukow}. We will use the above descriptions of the actions of maximal finite groups in the proof of the following statement. \begin{proposition}\label{prop:nF} Let $F$ be a finite subgroup of $\Mod_0^n$. Then the following holds: \begin{enumerate} \item Let $F\cong \Z/m\leq \Z/(n-1)$. Then \[n_F=2+\frac{n-1}{m}.\] \item[(2.1)] Let $F\cong \Z/m$ be a subgroup of the rotation subgroup $\Z/n$ of $D_{2n}$.
Then \[n_F=2+\frac{n}{m}.\] \item[(2.2)] Let $F\cong D_{2m}\leq D_{2n}$ with $m>1$.\\ If $n$ is odd, then \[n_F=1+\frac{n+3m}{2m}.\] If $n$ is even and $2m|n$, then \[n_F=\begin{cases} 1+\frac{n+2m}{2m}\\ 1+\frac{n+4m}{2m} \end{cases}\] corresponding to the two conjugacy classes of subgroups isomorphic to $D_{2m}$.\\ If $n$ is even and $2m\not |n$, then \[n_F=1+\frac{n+3m}{2m}.\] \item[(3.1)] Let $F\cong \Z/m$ be a subgroup of the rotation subgroup $\Z/(n-2)$ of $D_{2(n-2)}$. Then \[n_F=2+\frac{n-2}{m}.\] \item[(3.2)] Let $F\cong D_{2m}\leq D_{2(n-2)}$ with $m>1$.\\ If $n$ is odd, then \[n_F=1+\frac{n-2+3m}{2m}.\] If $n$ is even and $2m|(n-2)$, then \[n_F=\begin{cases} 1+\frac{n-2+2m}{2m}\\ 1+\frac{n-2+4m}{2m} \end{cases}\] corresponding to the two conjugacy classes of subgroups isomorphic to $D_{2m}$.\\ If $n$ is even and $2m\not |(n-2)$, then \[n_F=1+\frac{n-2+3m}{2m}.\] \end{enumerate} \end{proposition} \begin{proof} We proceed by cases: \begin{enumerate} \item[(1)] We can realize the $\Z/m$-action by placing a puncture at $\Nte$ and the remaining $n-1$ punctures equidistantly on the equator. Hence $\{\Nte\}$ and $\{\Sur\}$ each contribute one to $n_F$ (the first as an orbit of punctures, the second as an elliptic point), while the punctures on the equator fall into exactly $(n-1)/m$ orbits. \item[(2.1)] The $\Z/m$-action can be realized by placing all punctures on the equator equidistantly. The conclusion is completely analogous to case (1). \item[(2.2)] Recall that $D_{2m}$ is generated by the rotation subgroup $\Z/m\leq \Z/n$ and a half turn $\tau$ that interchanges $\Nte$ and $\Sur$. In this case we have the orbit $\{\Nte,\Sur\}$. On the other hand, note that the $D_{2n}$-action is free away from the poles and the equator. Hence any elliptic point other than $\Nte$ and $\Sur$ is on the equator. Let $X$ be the set of all punctures and all the elliptic points on the equator. Note that, among the elements of $D_{2m}$, the identity element fixes all points of $X$, the nontrivial rotations fix no points of $X$, and each of the $m$ reflections fixes two points of $X$; hence, by Burnside's lemma, the number of orbits of $X$ is $\frac{|X|+2m}{2m}$. Therefore, if $e$ is the number of elliptic points in $X$ we have \[n_F=1+\frac{n+e+2m}{2m}.\] Assume $n$ is odd. Then the axis points of $\tau$ consist of exactly one puncture $p$ and its antipode $-p$, which is an elliptic point. Therefore there are $m$ elliptic points, that is, $e=m$. Assume $n$ is even. Then the axes of the reflections in $D_{2m}$ are obtained by $\Z/m$-rotating the axis of $\tau$ and taking the bisectors of any two consecutive rotations of the axis of $\tau$. If $2m$ divides $n$, then $n/m$ is even. Let $p$ be a fixed puncture and let $t$ be a fixed generator of $\Z/m$. Then between $p$ and $tp$ there are $n/m-1$ punctures, that is, an odd number of punctures. If $\tau$ fixes two punctures, then each bisector also fixes two punctures. Therefore there are no elliptic points and $e=0$. If $\tau$ fixes two elliptic points, then each bisector also fixes two elliptic points. Therefore $e=2m$. If $2m$ does not divide $n$, then $n/m$ is odd and $m$ is even. Let $p$ be a fixed puncture and let $t$ be a fixed generator of $\Z/m$. Then between $p$ and $tp$ there are $n/m-1$ punctures, that is, an even number of punctures. Hence, whenever $\tau$ fixes two punctures, each bisector fixes two elliptic points, and whenever $\tau$ fixes two elliptic points, each bisector fixes two punctures. In both scenarios we conclude $e=m$.
\end{enumerate} Cases (3.1) and (3.2) are analogous to cases (2.1) and (2.2), respectively, and are left as an exercise to the reader. \end{proof} \subsection{Proof of the main theorem for genus 0} In this subsection we compute the proper geometric dimension of $\Mod_0^n$; see \cref{thm:main:genus:zero} below. \begin{lemma}\label{lemma:vcdNF:sphere} Let $F$ be a finite subgroup of $\Mod_0^n$, then $\vcd(NF)=\max\{n_F-3,0\}$. \end{lemma} \begin{proof} By Nielsen realization, $F$ acts on $S_0^n$ by orientation-preserving homeomorphisms. As a straightforward application of the Riemann-Hurwitz theorem, we have that $S_0^n/F$ is homeomorphic to a sphere, that is, $g_F=0$. Now the result is a consequence of \Cref{lemma:Maher} and Harer's computation of the virtual cohomological dimension of mapping class groups given in \cref{vcd:mcg}. \end{proof} \begin{proposition}\label{prop:Weyl:cyclic} Let $n\geq 3$ and let $g\in \Mod_0^n$ be an element of finite order $r\geq 2$. Then $r\leq n$ and \[\vcd(W(g))\leq \frac{n}{r}-1\leq \frac{n}{2}-1.\] \end{proposition} \begin{proof} By \cite[Theorem 4, Corollary 5]{STUKOW} the element $g$ is contained in a unique (up to isotopy) maximal finite cyclic subgroup of order $n$, $n-1$ or $n-2$, which by \cref{thm:maximal:finite:stukow} and \cref{thm:classification:stukow} is the rotation subgroup of one of the maximal subgroups $\Z/(n-1)$, $D_{2n}$ or $D_{2(n-2)}$. Hence, up to conjugation, $g$ can be realized as a rotation that fixes $\Nte$ and $\Sur$, with the punctures lying on the equator and possibly at one or both poles. Therefore $n_F\leq 2 + n/r$, and the first inequality in our statement follows from \cref{lemma:vcdNF:sphere}. The second inequality is clear. \end{proof} \begin{proposition}\label{prop:vcdpluslegth:cyclic:case} Let $n\geq 11$ and $g\in \Mod_0^n$ be an element of order $r\geq 2$. Then \[\vcd(W(g))+\lambda(\langle g \rangle)\leq \vcd(\Mod_0^n).\] \end{proposition} \begin{proof} By \cref{prop:Weyl:cyclic} and \cref{lem:length:vcdWF} we get \begin{align*} \vcd(W(g))+\lambda(\langle g \rangle)&\leq \frac{n}{r}-1+\log_2(r)\\ &\leq \frac{n}{r}-1+\log_2(n)\\ &\leq \frac{n}{2}-1+\log_2(n). \end{align*} Since $\frac{n}{2}-2-\log_2(n)$ is an increasing function of $n$ for $n\geq 3$ and is nonnegative at $n=11$, it follows that \[\frac{n}{2}-1+\log_2(n)\leq n-3\] for all $n\geq 11$. \end{proof} \begin{theorem}\label{thm:vcdWF:lambdaF:inequality} Let $n= 5$ or $n\geq 7$, and let $F$ be a finite subgroup of $\Mod_0^n$. Then \[\vcd(WF)+\lambda(F)\leq \vcd(\Mod_0^n)=n-3.\] \end{theorem} \begin{proof} For $n=5$ or $7\leq n \leq 13$, the conclusion follows from the tables in \cref{appendixA}, where $\vcd(WF)$ is computed using \cref{prop:nF} and \cref{lemma:vcdNF:sphere}. The remainder of the proof deals with the case $n\geq 14$. By \cref{thm:classification:stukow} and \cref{thm:maximal:finite:stukow}, up to conjugation, $F$ is a subgroup of one of six possibilities: \begin{enumerate} \item $F\leq \Z/(n-1)$. The conclusion follows directly from \cref{prop:vcdpluslegth:cyclic:case}, which applies for $n\geq 11$. \item $F\leq D_{2n}$. If $F$ is cyclic, the result follows from \cref{prop:vcdpluslegth:cyclic:case} for $n\geq 11$. Assume that $F\cong D_{2m}$ with $m|n$, and let $g$ be a generator of its rotation subgroup $\Z/m$. Then by \cref{lem:length:vcdWF} and \cref{prop:Weyl:cyclic} we get \[ \vcd(WF)\leq \vcd(W(g))\leq \frac{n}{m}-1\quad\text{and}\quad \lambda(F)\leq \log_2(m)+1. \] Thus \[\vcd(WF)+\lambda(F) \leq \frac{n}{m} + \log_2(m)\leq \frac{n}{2}+\log_2(n).\] Now note that $\frac{n}{2}+\log_2(n)\leq n-3$ for $n\geq 14$. \item $F\leq D_{2(n-2)}$.
This case is analogous to the previous one. \item $F\leq A_4$. We have $\lambda(F)\leq \lambda(A_4)=3$. By \cref{lem:length:vcdWF} and \cref{prop:Weyl:cyclic} we get \[\vcd(WF)+\lambda(F)\leq \frac{n}{2}-1+3=\frac{n}{2}+2.\] Note that $\frac{n}{2}+2\leq n-3$ for $n\geq 10$. \item $F\leq \mathfrak{S}_4$. Analogous to case (4). \item $F\leq A_5$. Analogous to case (4). \end{enumerate} \end{proof} \begin{theorem}\label{thm:main:genus:zero} Let $n\geq 0$. Then \[\gdfin(\Mod_0^n)= \cdfin(\Mod_0^n)= \vcd(\Mod_0^n)=\max\{n-3,0\}.\] \end{theorem} \begin{proof} We divide the proof into cases. Let $n=0,1,2,3$. In these cases $\Mod_0^n$ is a finite group and the conclusion follows. Let $n=4$. Since $\vcd(\Mod_0^4)=1$, we know that $\Mod_0^4$ is virtually finitely generated free. Hence, by a well-known theorem of Stallings, $\Mod_0^4$ acts properly on a tree, and this action provides a model for $\underline E \Mod_0^4$ of dimension 1; therefore $\gdfin(\Mod_0^4)=\cdfin(\Mod_0^4)=1$. Let $n=6$. In this case, by Birman-Hilden theory we have a central extension \[1\to \Z/2\to \Mod_2 \to \Mod_0^6\to 1,\] where the kernel is generated by the hyperelliptic involution of $S_2$. By \cite[Proposition 4.3]{Ji14} there is a model $X$ for $\underline E \Mod_2$ of dimension 3. Therefore $X^{\Z/2}$ is a model for $\underline E \Mod_0^6$ of dimension at most 3. We conclude that $\gdfin(\Mod_0^6)=\cdfin(\Mod_0^6)=3$. Let $n=5$ or $n\geq 7$. From \cref{thm:vcdWF:lambdaF:inequality} and \cref{thm:aramayona:martinezperez} we obtain the proper cohomological dimension. By the Eilenberg-Ganea theorem, we have that $\gdfin(\Mod_0^n)=\cdfin(\Mod_0^n)$, except possibly when $n=5$. To rule out that possibility, we consider the following central extension that arises from Birman-Hilden theory $$1\rightarrow \mathbb{Z}/2\rightarrow \Mod_1^2\rightarrow \Mod_0^5\rightarrow 1,$$ where the kernel is generated by the hyperelliptic involution of $S_1^2$. It follows from \cite[Lemma 5.8]{Lu05} that $\gdfin(\Mod_0^5)=\gdfin(\Mod_1^2)$. Hence, $\gdfin(\Mod_0^5)=2$ follows from our computation $\gdfin(\Mod_1^2)=\cdfin(\Mod_1^2)=2$ in the proof of \cref{thm:main:genus:one}. \end{proof} \section{Genus 1}\label{sec:one} In this section we prove \cref{thm:main:genus:one}, which gives the case $g=1$ of \cref{thm:main}. First we consider finite subgroups of $\Mod_1^n$ and their actions on $S_1^n$. \begin{theorem}\label{thm:classification:finite:torus} Let $n\geq 0$, and let $F$ be a finite group of orientation-preserving diffeomorphisms of $S_1^n$. Then $F$ is isomorphic to $(\Z/s\times \Z/t)\rtimes \Z/m$, where $s$ and $t$ are positive integers such that $st|n$ and $m=1,2,3,4$ or $6$. Moreover, the quotient $S_1/F$ only depends on $m$, and we have the following descriptions of these orbifold quotients: \begin{itemize} \item for $m=1$, $S_1/F$ is a torus with no elliptic points, \item for $m=2$, $S_1/F$ is a sphere with four elliptic points of orders $(2,2,2,2)$, \item for $m=3$, $S_1/F$ is a sphere with three elliptic points of orders $(3,3,3)$, \item for $m=4$, $S_1/F$ is a sphere with three elliptic points of orders $(2,4,4)$, and \item for $m=6$, $S_1/F$ is a sphere with three elliptic points of orders $(2,3,6)$. \end{itemize} \end{theorem} \begin{proof} Let $F$ be as in the statement. By capping the punctures, the action of $F$ on $S_1^n$ extends to an action of $F$ on the torus $S_1$. By the uniformization theorem, there is a metric of constant curvature 0 on $S_1$ such that the action of $F$ is by isometries.
Next we can lift this action to the universal covering; that is, there is a group $\widetilde{F}$ acting by orientation-preserving Euclidean isometries on $\mathbb R^2$ that is a central extension of $F$ by a rank $2$ group of translations, and such that $\mathbb{R}^2/\widetilde{F}$ is diffeomorphic to $S_1/F$. Hence $\widetilde{F}$ is a wallpaper group without reflections or glide reflections, and therefore $\widetilde{F}$ is isomorphic to $\Z^2\rtimes \Z/m$ for $m=1,2,3,4,$ or $6$. This implies that there are positive integers $s$ and $t$ such that $F\cong (\Z^2\rtimes \Z/m)/(s\Z \oplus t\Z)\cong (\Z/s\times \Z/t)\rtimes \Z/m$. Furthermore, note that $\Z/s\times \Z/t$ acts freely on $S_1$ and preserves the set of punctures in $S_1^n$, which is a set with $n$ elements; we conclude that $st|n$. For the \textit{moreover} part of the statement, observe that $S_1/F$ is diffeomorphic to $\mathbb R^2/(\Z^2\rtimes \Z/m)$, and the latter quotients are well known, see for instance \cite[Theorem 13.3.6]{ThurstonBook}. \end{proof} \begin{proposition}\label{prop:inequality:genus1} Let $n\geq 2$. Then for every finite subgroup $F$ of $\Mod_1^n$ we have \[\vcd(WF)+\lambda(F)\leq \vcd(\Mod_1^n)=n.\] \end{proposition} \begin{proof} The conclusion is clear when $F$ is the trivial subgroup. Let $F$ be a nontrivial finite subgroup of $\Mod_1^n$. By the Nielsen realization theorem, we can realize $F$ as a finite group of orientation-preserving diffeomorphisms of $S_1^n$. We proceed by cases, following the notation and conclusions of \Cref{thm:classification:finite:torus}. \begin{itemize} \item $F\cong \Z/s\times \Z/t$ with $st|n$. In this case we have $n_F=n/st$ and $S_1^n/F$ is diffeomorphic to $S_1$ with $n/st$ punctures. Hence by \Cref{lem:length:vcdWF}(1) we get \[\vcd(WF)+\lambda(F)\leq \vcd(\Mod_1^{n_F})+\lambda(F)\leq \frac{n}{st}+\log_2(n)\leq \frac{n}{2}+\log_2(n).\] Now notice that $n/2+\log_2(n)\leq n$ for all $n\geq 4$. \item $F\cong ( \Z/s\times \Z/t)\rtimes \Z/2$. In this case $S_1/F$ is diffeomorphic to a sphere with 4 elliptic points. Hence we conclude that $n_F\leq 4+\frac{n}{2st}\leq 4+\frac{n}{2}$, and $\vcd(WF)=n_F-3$. Therefore \[\vcd(WF)+\lambda(F)\leq \frac{n}{2}+1+\log_2(2st)\leq \frac{n}{2}+2+\log_2(n). \] Now notice that $n/2+2+\log_2(n)\leq n$ for all $n\geq 11$. \item $F\cong (\Z/s\times \Z/t)\rtimes \Z/m$ with $m=3,4,6$. In this case $S_1/F$ is diffeomorphic to a sphere with 3 elliptic points. Hence we conclude that $n_F\leq 3+\frac{n}{mst}\leq 3+\frac{n}{3}$, and $\vcd(WF)=n_F-3$. Therefore \[\vcd(WF)+\lambda(F)\leq \frac{n}{3}+\log_2(6st)\leq \frac{n}{3}+3+\log_2(n). \] Now notice that $n/3+3+\log_2(n)\leq n$ for all $n\geq 11$. \end{itemize} To finish the proof we only have to verify the statement for $2\leq n\leq 10$. From \Cref{thm:classification:finite:torus} we can describe explicitly the finite subgroups of $\Mod_1^n$ and their quotients, and the analysis can be done case by case; the details are left to the reader. \end{proof} \begin{theorem}\label{thm:main:genus:one} Let $n\geq 1$. Then \[\gdfin(\Mod_1^n)=\cdfin(\Mod_1^n)= \vcd(\Mod_1^n)=n.\] \end{theorem} \begin{proof} For $n=1$, the result follows since $\Mod_1^1\cong\text{SL}(2,\mathbb{Z})$ is virtually free. For $n\geq 2$, \cref{prop:inequality:genus1} and \cref{thm:aramayona:martinezperez} imply $\cdfin(\Mod_1^n)= \vcd(\Mod_1^n)$. By the Eilenberg-Ganea theorem, we have that $\gdfin(\Mod_1^n)=\cdfin(\Mod_1^n)=n$, except possibly for $n=2$. Let us rule out this possibility.
We have the following short exact sequence \[1\to B_2(S_1)/Z \to \Mod_1^2 \to \Mod_1 \to 1,\] where $B_2(S_1)$ is the braid group over the torus on 2 strands, and $Z$ is its center. On the other hand $B_2(S_1)/Z\cong \Z/2*\Z/2*\Z/2$, see for instance \cite[Proposition 4.4(i)]{bellingeri}. Note that $\Z/2*\Z/2*\Z/2$ is virtually finitely generated free, hence every finite extension $G$ of this group admits a proper action on a tree, which provides a 1-dimensional model for $\underline E G$. The group $\Mod_1$ also admits a 1-dimensional model for $\underline E \Mod_1$, hence by \cite[Theorem 5.16]{Lu05}, there is a 2-dimensional model for $\underline E \Mod_1^2$. This establishes that $\gdfin(\Mod_1^2)=\cdfin(\Mod_1^2)=2$. \end{proof} \section{Genus 2}\label{sec:two} In order to compute the proper geometric dimension of $\Mod_2^n$, with $n\geq 1$, we use Broughton's complete classification, up to topological equivalence, of finite group actions on a genus $2$ surface \cite[Theorem 4.1 $\&$ Table 4]{FiniteGenus2}, see \cref{appendixB}. Notice that there are only finitely many conjugacy classes of finite groups that act on $S_2$ by homeomorphisms. Hence, by the Nielsen realization theorem, given $n\geq 0$, any finite subgroup $F$ of $\Mod_2^n$ can be realized, up to conjugation, by one of these finitely many groups. This makes the genus 2 case different in nature from the cases of genus 0 and 1, where the isomorphism types of finite subgroups of the corresponding mapping class groups depend strongly on $n$. \begin{theorem} \label{thm:main:genus:2} Let $n\geq 1$. Then for every non-trivial finite subgroup $F$ of $\Mod_2^n$ we have \[\vcd(WF)+\lambda(F)\leq \vcd(\Mod_2^n)=n+4.\] In particular $\cdfin(\Mod_2^n)=\gdfin(\Mod_2^n)=\vcd(\Mod_2^n)=n+4$. \end{theorem} \begin{proof} Since $\cdfin(\Mod_2^n)\geq \vcd(\Mod_2^n)\geq 5$, we have $\cdfin(\Mod_2^n)=\gdfin(\Mod_2^n)$ for every $n\geq 1$. Let $F$ be a non-trivial finite subgroup of $\Mod_2^n$. By the Nielsen realization theorem, $F$ acts on $S_2^n$ by orientation-preserving homeomorphisms. Recall from \cref{mcg} that $n_F$ is bounded above by $\frac{n}{|F|}+o_F$. According to Broughton's classification the quotient $S^n_2/F$ can only have genus $g_F=0$ or $g_F=1$. When $g_F=0$, we have that $\vcd(WF)=\vcd(\Mod_{0}^{n_F})=n_F-3$ and when $g_F=1$, then $\vcd(WF)=\vcd(\Mod_{1}^{n_F})=n_F$. In \cref{Genus2} we recall the classification from \cite[Table 4]{FiniteGenus2}, and we give explicit upper bounds for $\lambda(F)$, $n_F$ and $\vcd(WF)$. From this we can see that the inequality $\vcd(WF)+\lambda(F)\leq \vcd(\Mod_2^n)=n+4$ holds for almost all finite subgroups $F$ of $\Mod_2^n$ and $n\geq 1$. The only possible exception is the case $F=\text{GL}_2(4)$ and $n=1$, which does not occur, since $\text{GL}_2(4)$ cannot be realized as a subgroup of $\Mod_2^1$: the unique puncture would have to be a fixed point of the action, but, as we can read off from the signature of this action in \cref{Genus2}, the action has no fixed points. \end{proof} \begin{remark} From \cref{Genus2} we can see that there are several finite subgroups $F$ of $\Mod_2$ such that $\vcd(WF)+\lambda(F)\not\leq \vcd(\Mod_2)=3$. For instance, $\vcd(WF)+\lambda(F)=4$ when $F=D_{2(6)}$. Hence, we cannot use this strategy to obtain $\gdfin(\Mod_2)$. \end{remark} \section{Genus at least 3}\label{sec:atleast3} In this section we promote the results in \cite{AMP14} for $\Mod_g^0$ with $g\geq 3$, to $\Mod_g^n$ for $g\geq 3$ and $n\geq 1$, using a simple argument.
\begin{proposition}\cite[Proposition 4.4]{AMP14}\label{AMP:g:atleast:three} For any $g\geq 3$ and any finite subgroup $F$ of $\mathrm{Mod}_g^0$ we have \[\vcd(WF)+\lambda(F)\leq \vcd(\mathrm{Mod}_g^0).\] \end{proposition} \begin{theorem}\label{thm:main:genus:atleast3} For any $g\geq 3$, $n\geq 1$ and any finite subgroup $F$ of $\mathrm{Mod}_g^n$ we have \[\vcd(WF)+\lambda(F)\leq \vcd(\mathrm{Mod}_g^n).\] In particular $\gdfin(\mathrm{Mod}_g^n)=\cdfin(\mathrm{Mod}_g^n)=\vcd(\mathrm{Mod}_g^n)$. \end{theorem} \begin{proof} Let us consider the Birman short exact sequence \cite[Theorem 4.3]{BIRMAN} \[1\to B_n(S_g) \to \modgn \xrightarrow{\varphi} \mathrm{Mod}_g^0\to 1\] where $B_n(S_g)$ is the full braid group over $S_g$ on $n$ strings. Let $F$ be any finite subgroup of $\modgn$. Then by restriction of the above sequence we obtain the following short exact sequence \[1\to B_n(S_g)\cap NF \to NF \xrightarrow{\varphi} \varphi(NF)\to 1\] and note that $\varphi(NF)\leq N\varphi(F)$. Therefore \begin{align*} \vcd(NF)+\lambda(F) &\leq \vcd(\varphi(NF))+\vcd(B_n(S_g)\cap NF)+\lambda(F),\text{ by subaditivity of $\vcd$}\\ &\leq \vcd(N(\varphi(F)))+\vcd(B_n(S_g))+\lambda(\varphi(F)),\text{ by monotonicity of $\vcd$}\\ &\leq \vcd(\mathrm{Mod}_g^0)+\vcd(B_n(S_g)), \text{ by \cref{AMP:g:atleast:three}}\\ &\leq 4g-5+n+1=4g+n-4=\vcd(\modgn), \text{by \cite[Theorem 1.2]{MR3869010}.} \end{align*} Now the result follows. The \textit{in particular} part follows directly from \cref{thm:aramayona:martinezperez}. \end{proof} \clearpage \input{appendix} \bibliographystyle{alpha} \bibliography{mybib} \end{document} \usepackage{graphicx} \usepackage{pinlabel} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{mathtools} \usepackage[all,cmtip]{xy} \usepackage{amsthm} \usepackage{tikz-cd} \usepackage{comment} \usepackage{enumerate} \usepackage{url} \usepackage{epsfig} \usepackage{hyperref} \usepackage[utf8]{inputenc} \usepackage[capitalize]{cleveref} \usepackage{geometry} \usepackage{subcaption} \captionsetup[subfigure]{labelfont=rm} \usepackage{booktabs} \geometry{ a4paper, total={170mm,257mm}, left=20mm, top=20mm, } \usepackage{mathrsfs} \usepackage{xcolor} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{fact}[theorem]{Fact} \newtheorem{step}{Step} \newtheorem*{thma}{Theorem A} \newtheorem*{thmb}{Theorem B} \newtheorem*{thmc}{Theorem C} \newcommand{\mycomment}[1]{} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}{Remark} \newtheorem{question}[theorem]{Question} \newtheorem{notation}[theorem]{Notation} \newcommand{\dbA}{{\mathbb A}} \newcommand{\dbC}{\mathbb{C}} \newcommand{\dbE}{\mathbb{E}} \newcommand{\dbF}{\mathbb{F}} \newcommand{\dbH}{{\mathbb H}} \newcommand{\dbK}{\mathbb{K}} \newcommand{\dbN}{\mathbb{N}} \newcommand{\dbQ}{\mathbb{Q}} \newcommand{\dbR}{\mathbb{R}} \newcommand{\dbP}{\mathbb{P}} \newcommand{\dbS}{\mathbb{S}} \newcommand{\dbT}{\mathbb{T}} \newcommand{\dbZ}{\mathbb{Z}} \newcommand{\undZ}{\underline{\mathbb{Z}}} \newcommand{\sub}{SUB} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\J}{J} \newcommand{\I}{I} \newcommand{\X}{\mathfrak X} \newcommand{\calA}{{\mathcal A}} \newcommand{\calB}{{\mathcal B}} \newcommand{\calE}{{\mathcal E}} \newcommand{\calF}{{\mathcal F}} 
\newcommand{\calFn}{{\mathcal F}^n} \newcommand{\calG}{{\mathcal G}} \newcommand{\calH}{{\mathcal H}} \newcommand{\calI}{{\mathcal I}} \newcommand{\calK}{{\mathcal K}} \newcommand{\calL}{{\mathcal L}} \newcommand{\calM}{{\mathcal M}} \newcommand{\calN}{{\mathcal N}} \newcommand{\calO}{{\mathcal O}} \newcommand{\calP}{{\mathcal P}} \newcommand{\calQ}{{\mathcal Q}} \newcommand{\calS}{{\mathcal S}} \newcommand{\calT}{{\mathcal T}} \newcommand{\calY}{\mathcal Y} \newcommand{\asA}{{\mathbf A}} \newcommand{\asB}{{\mathbf B}} \newcommand{\M}{\mathscr M} \newcommand{\T}{\mathscr T} \newcommand{\orf}[1]{O_\calF #1} \newcommand{\orvc}[1]{Or(#1,\vcyc)} n)} \newcommand{\tr}{\text{T\tiny{\textit{r}}}} \newcommand{\vcyc}{V\text{\tiny{\textit{CYC}}}} \newcommand{\fbc}{F\text{\tiny{\textit{BC}}}} \newcommand{\inftyvcyc}{\vcyc_\infty } n}{F\text{\tiny{\textit{IN}}}} \newcommand{\all}{A\text{\tiny{\textit{LL}}}} n\subseteq \vcyc}} \newcommand{\LW}[4]{\xymatrix{ #1 \ar[r] \ar[d] & #2 \ar[d] \\ #3 \ar[r] & #4 }} \newcommand{\func}[3]{#1:#2\to#3} \newcommand{\inv}[1]{ #1^{-1}} \newcommand{\cita}{\textquotedblleft} \newcommand{\nbeq}{\begin{equation}} \newcommand{\neeq}{\end{equation}} \newcommand{\beq}{\begin{equation*}} \newcommand{\eeq}{\end{equation*}} \newcommand{\seq}[3]{1\to #1 \to #2 \to #3 \to 1} \newcommand{\NGammaH}{N_\Gamma[H]} \newcommand{\NGammah}{N_\Gamma(H)} \newcommand{\SGammah}{S_\Gamma(H)} \newcommand{\barG}{\overline{G}} \newcommand{\rips}[3]{\Gamma_{#1}^{\blacktriangle}(#2,#3)} \newcommand{\ripsgraph}[3]{\Gamma_{#1}(#2,#3)} \newcommand{\fp}[1]{\mathrm{FP}_{#1}} \newcommand{\fpn}{\mathrm{FP}_{n}} \newcommand{\f}[1]{\mathrm{F}_{#1}} \newcommand{\fn}{\mathrm{F}_{n}} \newcommand{\esstriv}[2]{$\pi_{#1}$-$#2$-essentially trivial} \newcommand{\ffp}[1]{\calF \text{-}\mathrm{FP}_{#1}} \newcommand{\ffpn}{\calF\text{-}\mathrm{FP}_{n}} \newcommand{\ff}[1]{\calF \text{-}\mathrm{F}_{#1}} \newcommand{\ffn}{\calF\text{-}\mathrm{F}_{n}} \newcommand{\hvc}{\underline{\underline{\mathrm{H}}}} \newcommand{\hfin}{\underline{\mathrm{H}}} \newcommand{\evc}{\underline{\underline{E}}} \DeclareMathOperator{\vertex}{vert} \DeclareMathOperator{\edge}{edge} \DeclareMathOperator{\vrt}{vert} \DeclareMathOperator{\hd}{hd} \DeclareMathOperator{\cd}{cd} \DeclareMathOperator{\pd}{pd} \DeclareMathOperator{\vcd}{vcd} \DeclareMathOperator{\cdfin}{\underline{cd}} \DeclareMathOperator{\cdvc}{\underline{\underline{cd}}} \DeclareMathOperator{\cdfh}{cd_{\calF[H]}} \DeclareMathOperator{\cdfn}{\cd_{\calF_n}} \DeclareMathOperator{\cdpn}{\cd_{\calP_n}} \DeclareMathOperator{\dist}{\mathsf{dist}} \DeclareMathOperator{\Hdist}{\mathsf{hdist}} \DeclareMathOperator{\gd}{gd} \DeclareMathOperator{\gdfin}{\underline{gd}} \DeclareMathOperator{\gdvc}{\underline{\underline{gd}}} \DeclareMathOperator{\gdfh}{gd_{\calF[H]}} \DeclareMathOperator{\gdfn}{\gd_{\calF_n}} \DeclareMathOperator{\gdpn}{\gd_{\calP_n}} \newcommand{\slnq}{SL_n(\dbQ)} \newcommand{\slnqp}{SL_n(\dbQ_p)} \newcommand{\slnz}{SL_n(\dbZ)} \newcommand{\glnq}{GL_n(\dbQ)} \newcommand{\glnz}{GL_n(\dbZ)} \newcommand{\pslnz}{PSL_n(\dbZ)} \newcommand{\pslnr}{PSL_n(\dbR)} \newcommand{\sldosz}{SL_2(\dbZ)} \newcommand{\sldosez}{SL^e_2(\dbZ)} \newcommand{\psldosez}{PSL^e_2(\dbZ)} \newcommand{\slniz}{SL_{n_i}(\dbZ)} \newcommand{\glniz}{GL_{n_i}(\dbZ)} \newcommand{\gldosz}{GL_2(\dbZ)} \newcommand{\slnzpinv}{SL_n(\dbZ[\frac{1}{p}])} \newcommand{\glnzpinv}{GL_n(\dbZ[\frac{1}{p}])} \newcommand{\slnzp}{SL_n(\dbZ_p)} \newcommand{\slnzs}{SL_n(\dbZ_S)} \newcommand{\deltapn}{\Delta^p_n} 
\newcommand{\glnzp}{GL_n(\dbZ_p)} \newcommand{\glnzs}{GL_n(\dbZ_S)} \newcommand{\deltasn}{\Delta^S_n} \newcommand{\stabgammaL}{\stab_{\Gamma}(L)} \newcommand{\transgammaL}{\trans_{\Gamma}(L)} \DeclareMathOperator{\Mod}{Mod} \DeclareMathOperator{\PMod}{PMod} \newcommand{\extmod}{\mathrm{Mod}^{\pm}} \newcommand{\modgs}{\Mod_{g}^{s}} \newcommand{\modgn}{\Mod_{g}^{n}} \newcommand{\pmodgs}{\PMod_{g}^{s}} \newcommand{\Ngnb}{N_{g,n}^{b}} \newcommand{\calNgnb}{\calN_{g,n}^{b}} \newcommand{\Sgnb}{S_{g,n}^{b}} \newcommand{\Gammagnb}{\Gamma_{g,n}^{b}} \newcommand{\Gammagn}{\Gamma_{g,n}} \newcommand{\Ngn}[3][ ]{N_{#2,#3}^{#1}} \newcommand{\calNgn}[3][ ]{\calN_{#2,#3}^{#1}} \newcommand{\calSgn}[3][ ]{\Gamma_{#2,#3}^{#1}} \newcommand{\vcdNgoverF} {\frac{\vcd(\calN_g)+1}{|F|}} \newcommand{\Tgn}{\T_{g}^{n}} \DeclareMathOperator{\Min}{Min} \DeclareMathOperator{\Out}{Out} \DeclareMathOperator{\Inn}{Inn} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Diff}{Diff} \DeclareMathOperator{\cat}{CAT} \DeclareMathOperator{\stab}{Stab} x}{Fix} \DeclareMathOperator{\iso}{Iso} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\trans}{Trans} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\cok}{coker} \DeclareMathOperator{\sing}{sing} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Ima}{Im} \DeclareMathOperator{\mor}{mor} \DeclareMathOperator{\Nte}{\text{\sc N}} \DeclareMathOperator{\Sur}{\text{\sc S}} \newcommand{\comn}[1]{\textcolor{blue}{Nestor: #1}} \newcommand{\cuadro}[8]{\xymatrix @C=1.3cm @M=2mm { {#1} \ar[r]^-{#2} \ar[d]_-{#4} & {#3} \ar[d]^-{#5}\\ {#6} \ar[r]_-{#7} & {#8} }} \newcommand{\Sgmn}{S_g^{n-m}} \newcommand{\AS}{\calA(S^{n-m}_g)} \newcommand{\ASigma}{\calA^{\sigma}(S^{n-m}_g)} \newcommand{\abs}[1]{\lvert#1\rvert} \appendix \section{$\lambda(F)$ and $\vcd(WF)$ for finite subgroups of $\Mod_0^n$ for $5\leq n\leq 13$}\label{appendixA} In this appendix we describe $n_F$, $\vcd(WF)$ and $\lambda(F)$ for all the conjugacy classes of finite subgroups $F$ of $\Mod_0^n$ when $5\leq n\leq 13$.\\ \noindent{\bf Polyhedral subgroups of $\Mod_0^n$.} We keep the numeration from \cref{thm:classification:stukow} to analyze the polyhedral subgroups of $\Mod_0^n$ for $n=6,8,10,12$: \begin{itemize} \item[(3)] For $n=10$, up to conjugacy, there is a maximal finite subgroup $F\cong A_4$ of $\Mod_0^{n}$, which can be realized as the symmetry group of a tetrahedron. There are three orbits of points with nontrivial stabiliser in $F$: the centers of faces, the centers of edges and the vertices with orbits of length $4$, $6$ and $4$, respectively. Notice that for $n=10$ there must be two orbits of punctures (one of length $4$ and another of length $6$) and one orbit of elliptics, hence $n_F=3$. \item[(4)] For $n=6,8,12$, up to conjugacy, there is a maximal finite subgroup $F\cong \mathfrak{S}_4$ of $\Mod_0^{n}$, which can be realized as the symmetry group of a cube (octahedron). There are three orbits of points with nontrivial stabiliser in $F$: the centers of faces, the centers of edges and the vertices with stabilisers of order $4$, $2$ and $3$, respectively. Notice that for $n=6$ we must place the punctures in the centers of the faces (unique orbit of length $6$), for $n=8$ we must place the punctures in the vertices of the cube (unique orbit of length $8$) and for $n=12$ the punctures can only be placed in the centers of the edges (unique orbit of length $12$). Hence, in $S_0^n/F$ there is only one orbit of punctures and two orbits of elliptics and $n_F=3$. Now let $F\cong A_4\leq \mathfrak{S}_4$. 
It can be realized as subgroup of the symmetry group of a tetrahedron embedded into a cube. For $n=6$, there are two orbits of vertices, one orbit of centers of faces (where we have placed the punctures) and the centers of edges are no longer elliptics, hence $n_F=3$. For $n=8$, there are two orbits of vertices (where we have placed the punctures), one orbit of centers of faces and the centers of edges are no longer elliptics, hence $n_F=3$. Finally, for $n=10$ there are two orbits of vertices, one orbit of centers of faces and one orbit of the centers of edges (where we have placed the punctures), , hence $n_F=4$. \item[(5)] For $n=12$, up to conjugacy, there is a maximal finite subgroup $F\cong A_5$ of $\Mod_0^{n}$, which can be realized as the symmetry group of a dodecahedron (icosahedron). There are three orbits of points with nontrivial stabiliser in $F$: the centers of faces, the centers of edges and the vertices. For $n=12$ we must place the punctures in the centers of the faces (unique orbit of length $12$). Hence, in $S_0^n/F$ there is only one orbit of punctures and two orbits of elliptics and $n_F=3$. \end{itemize} We summarize the conclusions in \cref{Poly}. Notice that $\vcd(WF)+\lambda(F)\leq \vcd(\Mod_0^n)=n-3$ for $n=8,10,12$. However, $\vcd(W\mathfrak{S}_4)+\lambda(\mathfrak{S}_4)=4\not\leq \vcd(\Mod_0^6)=3$. {\footnotesize \begin{table}[ht] \centering {\caption{\small Polyhedral subgroups of $\Mod_0^n$, for $n=6,8,10,12$.}\label{Poly}} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] $n$& $F$ & \textbf{${n_F}$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$}\\ &&&&\\[-0.5em] \hline $6$& $\mathfrak{S}_4$ & $3$ & $0$ & $4$ \\ &$A_4$ & $3$ & $0$& $3$ \\ \hline $8$& $\mathfrak{S}_4$ & $3$ & $0$ & $4$ \\ &$A_4$ & $3$ & $0$& $3$ \\ \hline $10$& $A_4$ & $3$ & $0$& $3$ \\ \hline $12$& $\mathfrak{S}_4$ & $3$ & $0$ & $4$ \\ & $A_4$ & $4$ & $1$& $3$ \\ & $A_5$ & $3$ & $0$& $4$ \\ \hline \end{tabular} \end{table} }\medskip \clearpage \noindent{\bf Cyclic and dihedral subgroups of $\Mod_0^n$.} The following tables for $5\leq n\leq 13$ come directly form \cref{prop:nF}. Note that in all these cases $\vcd(WF)+\lambda(F)\leq \vcd(\Mod_0^n)=n-3$. 
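To indicate how the $\lambda$ column of these tables can be checked (assuming, as in the rest of the paper, that $\lambda(F)$ denotes the maximal length of a strictly increasing chain of subgroups $1=F_0<F_1<\dots<F_{\lambda(F)}=F$), consider for instance the chains \[1<\mathbb{Z}/2<\mathbb{Z}/4<\mathbb{Z}/8,\qquad 1<\mathbb{Z}/5<D_{2(5)},\qquad 1<\mathbb{Z}/2<\mathbb{Z}/2\times\mathbb{Z}/2<A_4<\mathfrak{S}_4.\] Since the length of any chain of subgroups of $F$ is at most the number of prime factors of $|F|$ counted with multiplicity, these chains have maximal length, giving $\lambda(\mathbb{Z}/8)=3$, $\lambda(D_{2(5)})=2$ and $\lambda(\mathfrak{S}_4)=4$, in agreement with the entries of \cref{Poly} and of the tables below.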
{\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^5$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] \cref{prop:nF} type & $F$ & \textbf{$n_F$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$}\\ &&&&\\[-0.5em] \hline (1) & $\mathbb{Z}/4$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/2$ & $4$ & $1$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/5$ & $3$ & $0$ & $1$ \\ \hline (2.2) & $D_{2(5)}$ & $3$ & $0$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/3$ & $3$ & $0$ & $1$ \\ \hline (3.2) & $D_{2(3)}$ & $3$ & $0$ & $2$ \\ \hline \end{tabular} \end{table} } {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^6$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] \cref{prop:nF} type & $F$ & \textbf{$n_F$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$}\\ &&&&\\[-0.5em] \hline (1) & $\mathbb{Z}/5$ & $3$ & $0$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/6$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $5$ & $2$ & $1$ \\ \hline (2.2) & $D_{2(6)}$ & $3$ & $0$ & $3$ \\ & $D_{2(3)}$ & $3$ & $0$ & $2$ \\ & & $4$ & $1$ & $2$ \\ & $D_{2(2)}$ & $4$ & $1$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/4$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/2$ & $4$ & $1$ & $1$ \\ \hline (3.2) & $D_{2(4)}$ & $3$ & $0$ & $3$ \\ & $D_{2(2)}$ & $3$ & $0$ & $2$ \\ & & $4$ & $1$ & $2$ \\ \hline \end{tabular} \end{table} } {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^7$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] \cref{prop:nF} type & $F$ & \textbf{$n_F$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$}\\ &&&&\\[-0.5em] \hline (1) & $\mathbb{Z}/6$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $5$ & $2$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/7$ & $3$ & $0$ & $1$ \\ \hline (2.2) & $D_{2(7)}$ & $3$ & $0$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/5$ & $3$ & $0$ & $1$ \\ \hline (3.2) & $D_{2(5)}$ & $3$ & $0$ & $2$ \\ \hline \end{tabular} \end{table} } {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^8$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] \cref{prop:nF} type & $F$ & \textbf{$n_F$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$}\\ &&&&\\[-0.5em] \hline (1) & $\mathbb{Z}/7$ & $3$ & $0$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/8$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/4$ & $4$ & $1$ & $2$ \\ & $\mathbb{Z}/2$ & $6$ & $3$ & $1$ \\ \hline (2.3) & $D_{2(8)}$ & $3$ & $0$ & $4$ \\ & $D_{2(4)}$ & $3$ & $0$ & $3$ \\ & & $4$ & $1$ & $3$ \\ & $D_{2(2)}$ & $4$ & $1$ & $2$ \\ & & $5$ & $2$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/6$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $5$ & $2$ & $1$ \\ \hline (3.2) & $D_{2(6)}$ & $3$ & $0$ & $3$ \\ & $D_{2(3)}$ & $3$ & $0$ & $2$ \\ & & $4$ & $1$ & $2$ \\ & $D_{2(2)}$ & $4$ & $1$ & $2$ \\ \hline \end{tabular} \end{table} } {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^9$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\[-0.5em] \cref{prop:nF} type & $F$ & \textbf{$n_F$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$} \\ &&&&\\[-0.5em] \hline (1) & $\mathbb{Z}/8$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/4$ & $4$ & $1$ & $2$ \\ & $\mathbb{Z}/2$ & $6$ & $3$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/9$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $5$ & $2$ & $1$ \\ \hline (2.2) & $D_{2(9)}$ & $3$ & $0$ & $3$ \\ & $D_{2(3)}$ & $4$ & $1$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/7$ & $3$ & $0$ & $1$ \\ \hline (3.2) & $D_{2(7)}$ & $3$ & $0$ & 
$2$ \\ \hline \end{tabular} \end{table} } {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^{10}$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&& \\[-0.5em] \text{\cref{prop:nF} type} & $F$ & \textbf{${n_F}$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$} \\ &&&& \\[-0.5em] \hline (1) & $\mathbb{Z}/9$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $5$ & $2$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/10$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/5$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $7$ & $4$ & $1$ \\ \hline (2.2) & $D_{2(10)}$ & $3$ & $0$ & $3$ \\ & $D_{2(5)}$ & $3$ & $0$ & $2$ \\ & & $4$ & $1$ & $2$ \\ & $D_{2(2)}$ & $5$ & $2$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/8$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/4$ & $4$ & $1$ & $2$ \\ & $\mathbb{Z}/2$ & $6$ & $3$ & $1$ \\ \hline (3.2) & $D_{2(8)}$ & $3$ & $0$ & $4$ \\ & $D_{2(4)}$ & $3$ & $0$ & $3$ \\ & & $4$ & $1$ & $3$ \\ & $D_{2(2)}$ & $4$ & $0$ & $2$ \\ & & $5$ & $2$ & $2$ \\ \hline \end{tabular} \end{table}} {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^{11}$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&& \\[-0.5em] \text{\cref{prop:nF} type} & $F$ & \textbf{${n_F}$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$} \\ &&&& \\[-0.5em] \hline (1) & $\mathbb{Z}/10$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/5$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $7$ & $4$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/11$ & $3$ & $0$ & $1$ \\ \hline (2.2) & $D_{2(11)}$ & $3$ & $0$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/9$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/3$ & $5$ & $2$ & $1$ \\ \hline (3.2) & $D_{2(9)}$ & $3$ & $0$ & $2$ \\ & $D_{2(3)}$ & $4$ & $1$ & $1$ \\ \hline \end{tabular} \end{table}} {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^{12}$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&& \\[-0.5em] \text{\cref{prop:nF} type} & $F$ & \textbf{${n_F}$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$} \\ &&&& \\[-0.5em] \hline (1) & $\mathbb{Z}/11$ & $3$ & $0$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/12$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/6$ & $4$ & $1$ & $2$ \\ & $\mathbb{Z}/4$ & $5$ & $2$ & $2$ \\ & $\mathbb{Z}/3$ & $6$ & $3$ & $1$ \\ & $\mathbb{Z}/2$ & $8$ & $5$ & $1$ \\ \hline (2.2) & $D_{2(12)}$ & $3$ & $0$ & $4$ \\ & $D_{2(6)}$ & $3$ & $0$ & $3$ \\ & & $4$ & $1$ & $3$ \\ & $D_{2(4)}$ & $4$ & $1$ & $3$ \\ & $D_{2(3)}$ & $4$ & $1$ & $2$ \\ & & $5$ & $2$ & $2$ \\ & $D_{2(2)}$ & $5$ & $2$ & $2$ \\ & & $6$ & $3$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/10$ & $3$ & $0$ & $2$ \\ & $\mathbb{Z}/5$ & $4$ & $1$ & $1$ \\ & $\mathbb{Z}/2$ & $7$ & $4$ & $1$ \\ \hline (3.2) & $D_{2(10)}$ & $3$ & $0$ & $3$ \\ & $D_{2(5)}$ & $3$ & $0$ & $2$ \\ & & $4$ & $1$ & $2$ \\ & $D_{2(2)}$ & $5$ & $2$ & $2$ \\ \hline \end{tabular} \end{table}} {\footnotesize \begin{table}[ht] \centering \caption{\small Cyclic and dihedral subgroups of $\Mod_0^{13}$.} \begin{tabular}{|c|c|c|c|c|} \hline &&&& \\[-0.5em] \text{\cref{prop:nF} type} & $F$ & \textbf{${n_F}$} & \textbf{$\vcd(WF)$} & \textbf{$\lambda(F)$} \\ &&&& \\[-0.5em] \hline (1) & $\mathbb{Z}/12$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/6$ & $3$ & $0$ & $3$ \\ & $\mathbb{Z}/4$ & $5$ & $2$ & $2$ \\ & $\mathbb{Z}/2$ & $8$ & $5$ & $1$ \\ \hline (2.1) & $\mathbb{Z}/13$ & $3$ & $0$ & $1$ \\ \hline (2.2) & $D_{2(13)}$ & $3$ & $0$ & $2$ \\ \hline (3.1) & $\mathbb{Z}/11$ & $3$ & $0$ & $1$ \\ \hline (3.2) & $D_{2(11)}$ & $3$ & $0$ & $2$ \\ \hline \end{tabular} \end{table}} \clearpage \section{$\lambda(F)$ and $\vcd(WF)$ for finite subgroups of 
$\Mod_2^n$.}\label{appendixB} In this appendix we use Broughton's classification \cite[Theorem 4.1 $\&$ Table 4]{FiniteGenus2} to give upper bounds for $n_F$, $\vcd(WF)$ and $\lambda(F)$ for all the conjugacy classes of non-trivial finite subgroups $F$ of $\Mod_2^n$ when $n\geq 0$. {\footnotesize \begin{table}[ht] \centering { \caption{\small Finite non-trivial subgroups of $\Mod_2^n$.}\label{Genus2}} \begin{tabular}{|c|c|c|c|c|c|} \hline &&&&&\\[-0.5em] $F$ & $\rvert F\rvert$ & $S_2/F$ & \textbf{${n_F}\leq$} & \textbf{$\vcd(WF)\leq$} & \textbf{$\lambda(F)\leq$}\\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/2$ & $2$ & $(S_0;2,2,2,2,2,2)$ & $\frac{n}{2}+6$ & $\frac{n}{2}+3$ & $1$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/2$ & $2$ & $(S_1;2,2)$ & $\frac{n}{2}+2$ & $\frac{n}{2}+2$ & $1$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/3$ & $3$ & $(S_0;3,3,3,3)$ & $\frac{n}{3}+4$ & $\frac{n}{3}+1$ & $1$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\ \mathbb{Z}/2\times\mathbb{Z}/2\ $ & $4$ & $(S_0;2,2,2,2,2)$ & $\frac{n}{4}+5$ & $\frac{n}{4}+2$ & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/4$ & $4$ & $(S_0;2,2,4,4)$ & $\frac{n}{4}+4$ & $\frac{n}{4}+1$ & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/5$ & $5$ & $(S_0;5,5,5)$ & $\frac{n}{5}+3$ & $\frac{n}{5}$ & $1$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/6$ & $6$ & $(S_0;3,6,6)$ & $\frac{n}{6}+3$ & $\frac{n}{6}$ & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/6$ & $6$ & $(S_0;2,2,3,3)$ & $\frac{n}{6}+4$ & $\frac{n}{6}$+1 & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $D_{2(3)}$ & $6$ & $(S_0;2,2,3,3)$ & $\frac{n}{6}+4$ & $\frac{n}{6}$+1 & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/8$ & $8$ & $(S_0;2,8,8)$ & $\frac{n}{8}+3$ & $\frac{n}{8}$ & $3$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\widetilde{D_2}$ & $8$ & $(S_0;4,4,4)$ & $\frac{n}{8}+3$ & $\frac{n}{8}$ & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $D_{2(4)}$ & $8$ & $(S_0;2,2,2,4)$ & $\frac{n}{8}+4$ & $\frac{n}{8}$+1 & $3$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\mathbb{Z}/10$ & $10$ & $(S_0;2,5,10)$ & $\frac{n}{10}+3$ & $\frac{n}{10}$ & $2$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\ \mathbb{Z}/2\times\mathbb{Z}/6\ $ & $12$ & $(S_0;2,6,6)$ & $\frac{n}{12}+3$ & $\frac{n}{12}$ & $3$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $D_{4,3,-1}$ & $12$ & $(S_0;3,4,4)$ & $\frac{n}{12}+3$ & $\frac{n}{12}$ & $3$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $D_{2(6)}$ & $12$ & $(S_0;2,2,2,3)$ & $\frac{n}{12}+4$ & $\frac{n}{12}+1$ & $3$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $D_{2,8,3}$ & $16$ & $(S_0;2,4,8)$ & $\frac{n}{16}+3$ & $\frac{n}{16}$ & $4$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\ \mathbb{Z}/2\rtimes (\mathbb{Z}/2\times\mathbb{Z}/2\times \mathbb{Z}/3)\ $ & $24$ & $(S_0;2,4,6)$ & $\frac{n}{24}+3$ & $\frac{n}{24}$ & $4$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\text{SL}_2(3)$ & $24$ & $(S_0;3,3,4)$ & $\frac{n}{24}+3$ & $\frac{n}{24}$ & $4$ \\ &&&&&\\[-0.5em] \hline &&&&&\\[-0.5em] $\text{GL}_2(4)$ & $48$ & $(S_0;2,3,8)$ & $\frac{n}{48}+3$ & $\frac{n}{48}$ & $5$ \\ &&&&&\\ \hline \end{tabular} \end{table}} \clearpage
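To illustrate how the bounds of \cref{Genus2} combine with \cref{thm:main:genus:2}, here is a sample verification using the two extreme rows of the table. For $F=\Z/2$ with quotient signature $(S_0;2,2,2,2,2,2)$ we get \[\vcd(WF)+\lambda(F)\leq \Big(\frac{n}{2}+3\Big)+1=\frac{n}{2}+4\leq n+4\qquad\text{for all } n\geq 0,\] while for $F=\text{GL}_2(4)$, of order $48$, the table only gives \[\vcd(WF)+\lambda(F)\leq \frac{n}{48}+5\leq n+4\qquad\text{for } n\geq 2,\] which is why the case $n=1$ has to be ruled out by a separate argument in the proof of \cref{thm:main:genus:2}.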
2412.14389v1
http://arxiv.org/abs/2412.14389v1
Rigidity of the hyperbolic marked energy spectrum and entropy for $k$-surfaces
\documentclass[a4paper,11pt,reqno]{amsart} \usepackage[utf8]{inputenc} \RequirePackage[l2tabu, orthodox]{nag} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{amsthm,amssymb,amsmath} \usepackage{latexsym} \usepackage{graphicx} \usepackage{subfigure} \usepackage{tikz-cd} \usetikzlibrary{arrows} \usetikzlibrary{calc} \usepackage[linktocpage=true]{hyperref} \usepackage[left=3.8cm, right=3.8cm, top=3cm, bottom=3cm]{geometry} \setcounter{tocdepth}{1} \usepackage{color} \usepackage{here} \usepackage{mathrsfs} \usepackage{pgf,tikz} \usetikzlibrary{decorations.pathreplacing, shapes.multipart, arrows, matrix, shapes} \usetikzlibrary{patterns} \usepackage{caption} \usepackage[frenchb,english]{babel} \usepackage[all] {xy} \usepackage{enumitem} \usepackage{array} \makeatletter \newcommand\footnoteref[1]{\protected@xdef\@thefnmark{\ref{#1}}\@footnotemark} \makeatother \hypersetup{ colorlinks = true, urlcolor = blue, linkcolor = blue, citecolor = blue } \date{\today} \def\SS{{\mathbb S}} \def\o{{\omega}} \def\O{{\Omega}} \def\L{{\Lambda}} \def\t{{\theta}} \def\l{{\lambda}} \def\vt{{\vartheta}} \def\p{{\prime}} \def\pa{{\partial}} \def\d{{\delta}} \def\b{{\beta}} \def\a{{\alpha}} \def\e{{\varepsilon}} \def\beq{\begin{equation}} \def\eeq{\end{equation}} \def\Sc{S_{\lambda,E}} \def\tF{\widetilde{F}} \def\SL{\mathrm{SL}_2(\mathbb{R})} \def\PSL{\mathrm{PSL}_2(\mathbb{R})} \def\SLc{\mathrm{SL}_2(\mathbb{C})} \def\PSLc{\mathrm{PSL}_2(\mathbb{C})} \def\Hyp{\mathbb{H}^3} \def\H2{\mathbb{H}^2} \def\tor{{\mathbb{R}/\mathbb{Z}}} \def\tU{\widetilde U} \def\tW{\widetilde W} \def\acts{\curvearrowright} \def\CP{\C\PP^1} \def\RP{\R\PP^1} \def\restriction#1#2{\mathchoice {\setbox1\hbox{${\displaystyle #1}_{\scriptstyle #2}$} \end{enumerate} \restrictionaux{#1}{#2}} {\setbox1\hbox{${\textstyle #1}_{\scriptstyle #2}$} \restrictionaux{#1}{#2}} {\setbox1\hbox{${\scriptstyle #1}_{\scriptscriptstyle #2}$} \restrictionaux{#1}{#2}} {\setbox1\hbox{${\scriptscriptstyle #1}_{\scriptscriptstyle #2}$} \restrictionaux{#1}{#2}}} \def\restrictionaux#1#2{{#1\,\smash{\vrule height .8\ht1 depth .85\dp1}}_{\,#2}} \DeclareMathOperator{\dLeb}{dLeb} \def\a{\alpha} \def\k{\kappa} \def\tE{\widetilde{E}} \def\e{\epsilon} \def\tB{\widetilde{B}} \def\tZ{\widetilde{Z}} \def\Sp{\Sigma_{\lambda V,\alpha}} \newcommand{\wt}{\widetilde} \newcommand*{\transp}[2][-3mu]{\ensuremath{\mskip1mu\prescript{\smash{\mathrm t\mkern#1}}{}{\mathstrut#2}}} \newcommand{\dif}{{\operatorname{Diff}^2(\mathbb{T}^3)}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\R}{{\mathbb R}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\C}{{\mathbb C}} \newcommand{\D}{{\mathbb D}} \newcommand{\T}{{\mathbb T}} \newcommand{\E}{{\mathbb E}} \newcommand{\N}{{\mathbb N}} \newcommand{\PP}{{\mathbb P}} \newcommand{\HH}{{\mathbb H}} \newcommand{\A}{{\mathbb A}} \newcommand{\TT}{{\mathbb{T}^3}} \newcommand{\loc}{{\operatorname{loc}}} \newcommand{\annotation}[1]{\marginpar{\tiny #1}} \newcommand{\cA}{{\mathcal A}} \newcommand{\cC}{{\mathcal C}} \newcommand{\cD}{{\mathcal D}} \newcommand{\cE}{{\mathcal E}} \newcommand{\cF}{{\mathcal F}} \newcommand{\ctF}{{\widetilde{\mathcal F}}} \newcommand{\cG}{{\mathcal G}} \newcommand{\ctG}{{\widetilde{\mathcal G}}} \newcommand{\cB}{{\mathcal B}} \newcommand{\cH}{{\mathcal H}} \newcommand{\cI}{{\mathcal I}} \newcommand{\cL}{{\mathcal L}} \newcommand{\cM}{{\mathcal M}} \newcommand{\cN}{{\mathcal N}} \newcommand{\cP}{{\mathcal P}} \newcommand{\cX}{{\mathcal X}} \newcommand{\cV}{{\mathcal V}} \newcommand{\cW}{{\mathcal W}} \newcommand{\cWcu}{{\mathcal W}^{cu}} 
\newcommand{\cWcs}{{\mathcal W}^{cs}} \newcommand{\cWc}{{\mathcal W}^{c}} \newcommand{\cWu}{{\mathcal W}^{u}} \newcommand{\cWs}{{\mathcal W}^{s}} \newcommand{\cPH}{{\mathcal{PH}}} \newcommand{\cS}{{\mathcal S}} \newcommand{\cY}{{\mathcal Y}} \newcommand{\cZ}{{\mathcal Z}} \newcommand{\cU}{{\mathcal U}} \newcommand{\cK}{{\mathcal K}} \newcommand{\cO}{{\mathcal O}} \newcommand{\cJ}{{\mathcal J}} \newcommand{\cR}{{\mathcal R}} \newcommand{\tX}{{\widetilde X}} \newcommand{\hC}{{\widehat{\mathbb{C}}}} \newcommand{\Keywords}[1]{\par\noindent {\small{\em Keywords\/}: #1}} \newcommand{\eps}{\varepsilon} \newcommand{\card}{\operatorname{card}} \newcommand{\length}{\operatorname{length}} \newcommand{\eqdef}{\stackrel{\scriptscriptstyle\rm def.}{=}} \def\dans{\mathop{\subset}} \newcommand{\Leb}{\mathrm{Leb}} \newcommand{\sect}{\mathrm{sect}} \newcommand{\QF}{\mathrm{QF}} \newcommand{\QC}{\mathrm{QC}} \newcommand{\KD}{\mathrm{KD}} \newcommand{\MKD}{\mathrm{MKD}} \newcommand{\UKD}{\mathrm{UKD}} \newcommand{\FKD}{\mathrm{FKD}} \newcommand{\JC}{\mathrm{JC}} \newcommand{\rD}{\mathrm{D}} \newcommand{\rC}{\mathrm{C}} \newcommand{\rW}{\mathrm{W}} \newcommand{\Vol}{\mathrm{Vol}} \newcommand{\tM}{\widetilde{M}} \newcommand{\bord}{\partial_{\infty}} \newcommand{\moins}{\setminus} \newcommand{\Lip}{\mathrm{Lip}} \newcommand{\Ext}{\mathrm{Ext}} \newcommand{\Ent}{\mathrm{Ent}} \newcommand{\Id}{\mathrm{Id}} \newcommand{\Isom}{\mathrm{Isom}} \newcommand{\Area}{\mathrm{Area}} \newcommand{\dist}{\mathrm{dist}} \newcommand{\pr}{\mathrm{proj}} \def\To{\mathop{\longrightarrow}} \newtheorem{theorem}{Theorem}[section] \newtheorem{thmdefn}[theorem]{Theorem \& Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{defi}[theorem]{Definition} \newtheorem{prop}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem{question}[theorem]{Question} \newtheorem{theoalph}{Theorem} \renewcommand\thetheoalph{\Alph{theoalph}} \newtheorem{propalph}[theoalph]{Proposition} \sloppy \newtheorem{thmh}{Theorem} \renewcommand{\thethmh}{A} \newtheorem{thmha}{Theorem} \renewcommand{\thethmha}{H'} \newtheorem{thmB}{Theorem} \renewcommand{\thethmB}{B} \newtheorem{condition}{C}[section] \def\clr{ \color{red}} \def\clb{ \color{black}} \def\clg{ \color{green}} \def\cly{ \color{yellow}} \def\clblue{ \color{blue}} \catcode`\@=11 \def\triplealign#1{\null\,\vcenter{\openup1\jot \m@th \ialign{\strut\hfil$\displaystyle{##}\quad$&$\displaystyle{{}##}$\hfil&$\displaystyle{{}##}$\hfil\crcr#1\crcr}}\,} \def\multiline#1{\null\,\vcenter{\openup1\jot \m@th \ialign{\strut$\displaystyle{##}$\hfil&$\displaystyle{{}##}$\hfil\crcr#1\crcr}}\,} \catcode`\@=12 \begin{document} \title[]{Rigidity of the hyperbolic marked energy spectrum and entropy for $k$-surfaces} \author[]{Sébastien Alvarez, Ben Lowe, Graham Smith} \address{} \email{} \date{\today} \begin{abstract} Labourie raised the question of determining the possible asymptotics for the growth rate of compact $k$-surfaces, counted according to energy, in negatively curved $3$-manifolds, indicating the possibility of a theory of thermodynamical formalism for this class of surfaces. Motivated by this question and by analogous results for the geodesic flow, we prove a number of results concerning the asymptotic behavior of high energy $k$-surfaces, especially in relation to the curvature of the ambient space. 
First, we determine a rigid upper bound for the growth rate of quasi-Fuchsian $k$-surfaces, counted according to energy, and with asymptotically round limit set, subject to a lower bound on the sectional curvature of the ambient space. We also study the marked energy spectrum for $k$-surfaces, proving a number of domination and rigidity theorems in this context. Finally, we show that the marked area and energy spectra for $k$-surfaces in $3$-dimensional manifolds of negative curvature are asymptotic if and only if the sectional curvature is constant. \end{abstract} \maketitle \tableofcontents \section{Introduction} \subsection{Context} This paper continues our study of the dynamical properties of the space of immersed surfaces of constant, positive extrinsic curvature - henceforth called $k$-surfaces - in closed, negatively-curved $3$-manifolds. The study of this space was initiated by Labourie in a remarkable series of papers \cite{LabourieGAFA,LabourieInvent,LabourieAnnals} (see also \cite{Labourie_phase_space}). There, building on Gromov's theory of foliated Plateau problems elaborated in \cite{Gromov_FolPlateau1,Gromov_FolPlateau2}, he showed that, for ambient manifolds of sectional curvature bounded above by $-1$, and for $0<k<1$, the space of marked $k$-surfaces exhibits many of the hyperbolic properties of the geodesic flow. More precisely: \begin{enumerate} \item it has a natural compactification which is laminated by Riemann surfaces; \item it is independent of the ambient metric up to leaf-preserving homeomorphism, in analogy to Gromov's geodesic rigidity theorem \cite{Gromov3} (a local version was proven by Labourie in the geometrically finite case \cite{LabourieInvent}, and a global version was recently proven in the general case by the third author in \cite{Smith_asymp}); \item the generic leaf is dense, and the union of closed leaves is also dense; \item the lamination admits an infinite family of mutually singular, ergodic, totally invariant measures of full support obtained by a coding procedure. \end{enumerate} In this vein, Labourie suggested a possible analogy between the thermodynamic properties of the geodesic flow in negative curvature and the thermodynamic properties of the space of $k$-surfaces \cite{LabourieInvent,Labourie_phase_space}. \subsection{Marked area and energy spectra for $k$-surfaces} Before stating our main results, we first require a few definitions. Let $(X,h_0):=\Bbb{H}^3/\Pi$ be a compact, $3$-dimensional hyperbolic manifold, where $\Pi$ is a discrete, torsion-free subgroup of $\Isom^+(\Hyp)=\PSLc$. Given $C\geq 1$, we say that a compact surface subgroup $\Gamma$ of $\Pi$ is $C$-\emph{quasi-Fuchsian} whenever its limit set is a $C$-quasicircle. The existence of large families of quasi-Fuchsian subgroups was proven by Kahn--Markovi\'c in \cite{KM1,KM2} (see also \cite{HamenstadtKM,Kahn_Labourie_Mozes}). Let $\QF(C)$ denote the set of conjugacy classes of oriented $C$-quasi-Fuchsian subgroups of $\Pi$, and denote \begin{equation*} \QF:=\bigcup_{C\geq 1}\QF(C)\ . \end{equation*} Let $h$ be another riemannian metric on $X$ of sectional curvature less than $-k$, for some $k>0$. It follows from the main results of \cite{LabourieInvent} (see also \cite{Smith_asymp}) that, for every oriented conjugacy class $[\Gamma]\in\QF$, there exists a unique closed $k$-surface in $X$ representing $[\Gamma]$, which we henceforth denote by $S_{k,h}([\Gamma])$. 
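For orientation, let us recall what this surface is in the model case, using only the standard computation for equidistant surfaces in $\Hyp$. If $h=h_0$ and the limit set of $\Gamma$ is a round circle, then $\Gamma$ preserves a totally geodesic plane $P\dans\Hyp$, and the surface at constant distance $r$ from $P$, on the side determined by the orientation, with $\tanh(r)=\sqrt{k}$, is totally umbilical with both principal curvatures equal to $\sqrt{k}$, hence of constant extrinsic curvature $k$; by uniqueness, its quotient by $\Gamma$ is $S_{k,h_0}([\Gamma])$. In this case the mean curvature is identically $\sqrt{k}$ and, by the Gauss equation and the Gauss--Bonnet formula, \begin{equation*} \Area_{h_0}\big(S_{k,h_0}([\Gamma])\big)=\frac{2\pi(2g-2)}{1-k}\ , \end{equation*} where $g$ denotes the genus of $S_{k,h_0}([\Gamma])$.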
We find that our results are best contextualized by the following reformulation, in terms of the marked area spectrum, of the main result of \cite{ALS}. We define the \emph{marked (quasi-Fuchsian) $k$-surface area spectrum} to be the map $\text{MAS}_{k,h}:\QF\rightarrow]0,\infty[$ given by \begin{equation*} \text{MAS}_{k,h}([\Gamma]) := \Area_h(S_{k,h}([\Gamma]))\ . \end{equation*} The main result of \cite{ALS} can now be expressed as follows. \begin{theorem}[Rigidity of the hyperbolic marked area spectra for $k$-surfaces]\label{th.rigidity_area} Let $(X,h_0):=\Bbb{H}^3/\Pi$ be a closed hyperbolic $3$-manifold, and let $h$ be another riemannian metric on $X$ of sectional curvature bounded above by $-1$. For all $0<k<1$, \begin{equation}\label{eqn.rigidity_area} \mathrm{MAS}_{k,h}\leq\mathrm{MAS}_{k,h_0}\ , \end{equation} with equality holding if and only if $h$ is hyperbolic. \end{theorem} \begin{remark} In particular, by Mostow's rigidity theorem \cite{Mostow}, equality holds if and only if $h$ is isometric to $h_0$. \end{remark} For any compact, locally strictly convex immersed surface $S\dans (X,h)$, and for all $p\geq 0$, we now define the \emph{$p$-energy} of $S$ by \begin{equation}\label{eq.energy} W^p_h(S):=\int_SH^pd\Area_h\ , \end{equation} where $H$ denotes the mean curvature of $S$ (which, by local strict convexity, we may take to be positive). Note that the $0$-energy of $S$ is simply its area, whilst its $1$-energy is the area with respect to the Sasaki metric of its \emph{Gauss lift} to the unit sphere bundle $S_hX$ (see Section \ref{ss.as_Plateau_definitions}). We define the \emph{marked (quasi-Fuchsian) $k$-surface $p$-energy spectrum} to be the map $\text{MES}_h^p:\QF\rightarrow]0,\infty[$ given by \begin{equation}\label{eq.MarkedEnergySpectrum} \text{MES}_h^p([\Gamma]) := W^p_h(S_{k,h}([\Gamma]))\ . \end{equation} In particular, the marked $0$-energy spectrum is just the marked area spectrum defined above. Our first main theorem is the following domination and rigidity result for marked energy spectra. \begin{theorem}[Domination by the hyperbolic marked $p$-energy spectrum]\label{thm.Rigidity_hyp_energy_spectrum} Let $(X,h_0):=\Bbb{H}^3/\Pi$ be a closed hyperbolic $3$-manifold, and let $h$ be another riemannian metric on $X$ of sectional curvature pinched between $-1$ and $-a$, for some $0<a<1$. For all $0<k<a$, \begin{equation}\label{eqn.rigidity_energy} \mathrm{MES}^p_{k,h_0}\leq\mathrm{MES}^p_{k,h}\ , \end{equation} with equality holding if and only if $h$ is hyperbolic. \end{theorem} \begin{remark} Note that this domination result requires that the sectional curvature of $h$ be bounded from \emph{below} rather than from above, which has been the case for all previous results of this kind (c.f. \cite{ALS,CMN}). \end{remark} The proof makes use of the following straightforward but interesting fact: \emph{in constant sectional curvature, nearly Fuchsian $k$-surfaces are almost totally umbilical}. It turns out that this property characterizes metrics of constant curvature. Indeed, we define the \emph{normalized $p$-energy} of a closed $k$-surface $S\dans X$ by \begin{equation*} \overline{W}^{p}_h(S) := k^{-\frac{p}{2}}\int_S H^p d\Area_h\ . 
\end{equation*} For $p\neq q\geq 0$ we say that the normalized marked $k$-surface $p$- and $q$-energy spectra are \emph{asymptotic} whenever, for every sequence $(\eps_n)_{n\in\N}$ of positive numbers converging to $0$, and every sequence $(S_n)_{n\in\N}$ of closed $(1+\eps_n)$-quasi-Fuchsian $k$-surfaces, \begin{equation*} \lim_{n\to\infty}\frac{\overline{W}^{p}_h(S_n)}{\overline{W}^{q}_h(S_n)}=1\ . \end{equation*} \begin{theorem}[Asymptotic energy spectra]\label{thm.asymptotic_spectra} Let $(X,h)$ be a closed $3$-manifold of sectional curvature bounded above by $-a$ for some $a>0$. For all $0<k<a$, the following properties are equivalent. \begin{enumerate} \item $h$ has constant sectional curvature; \item for all $p\neq q\geq 0$ the normalized marked $k$-surface $p$- and $q$-energy spectra of $k$-surfaces are asymptotic; and \item for some $p\neq q\geq 0$ the normalized marked $k$-surface $p$- and $q$-energy spectra of $k$-surfaces are asymptotic. \end{enumerate} \noindent In particular, $h$ has constant sectional curvature if and only if its normalized marked $k$-surface energy and area spectra are asymptotic. \end{theorem} We point out that, besides the curvature upper bound of $-a$, which is necessary to ensure that $k$-surfaces exist in the first place, the previous theorem, unlike the other theorems stated before, makes no further assumptions on the sectional curvature. \subsection{Area and energy entropies for $k$-surfaces} The methods of proof of the results above are similar to those of \cite{ALS}. We use the solution of foliated Plateau problems, and the equidistribution properties of closed quasi-Fuchsian $k$-surfaces. This is the method outlined by Labourie in his influential Bourbaki seminar \cite{LabourieBourbaki} in the case of minimal surfaces. These results can also be phrased in terms of counting. We will define in \S \ref{sss.Modified_entropy} an \emph{entropy} similar to those appearing in \cite{ALS,CMN}. It is actually analogous to the \emph{modified entropy} defined by Marques-Neves in \cite{MN_currents} and is a number, denoted by $\Ent^p_{k,\hat m}(X,h)$, that counts, according to their $p$-energies, the closed quasi-Fuchsian $k$-surfaces whose limit quasicircles become more and more round \emph{and equidistribute} to $\hat m$, the unique ergodic fully supported \emph{conformal current in the space $\cC^+$ of round circles}. We refer to \S \ref{sss.conf_currents_PSL_inv_laminar_measures} for the definition of conformal currents as defined by Labourie in \cite{LabourieBourbaki} (see also \cite{ALS,MN_currents}) and to \S \ref{sss.Modified_entropy} for the definition of entropy. The analogue of Theorem \ref{thm.Rigidity_hyp_energy_spectrum} for entropy is a rigid inequality. \begin{theorem}[Rigid inequality]\label{th.rigidity_entropy} Let $(X,h_0)$ be a closed hyperbolic $3$-manifold and let $h$ be a Riemannian metric on $X$ with sectional curvature $-1\leq\sect_h\leq -a$ for some $a>0$. Let $k\in (0,a)$ and $p\geq 0$. Then $\Ent^p_{k,\hat m}(X,h)\leq\Ent^p_{k,\hat m}(X,h_0)$ and the equality holds if and only if $h$ and $h_0$ are isometric. \end{theorem} The analogue of Theorem \ref{thm.asymptotic_spectra} is the following theorem, proving entropy rigidity (we refer to \S \ref{sss.equality_modified_entropies} for the definition of $\Ent^\mathrm{Area}_{k,\hat m}(X,h)$, the modified area entropy for $k$-surfaces). \begin{theorem}[Entropy rigidity]\label{equality_energy_area} Let $(X,h)$ be a closed $3$-manifold with sectional curvature $\sect_h\leq -a$ for some $a>0$.
Let $k\in (0,a)$. Then the following properties are equivalent. \begin{enumerate} \item The sectional curvature of $h$ is constant. \item For some $p\neq q\geq 0$ we have $$k^{-p/2} \Ent^p_{k,\hat m}(X,h)=k^{-q/2}\Ent^q_{k,\hat m}(X,h).$$ \item The modified area entropy and modified $p$-energy entropy coincide for some $p\neq 0$ $$\Ent^\mathrm{Area}_{k,\hat m}(X,h)=k^{-p/2}\Ent^p_{k,\hat m}(X,h).$$ \end{enumerate} \end{theorem} These results give the first counting result of closed $k$-surfaces according to their energy, which we view as partialy addressing Labourie's original question \cite[Question (ii) p.243]{LabourieInvent}. We comment that the previous two results continue to hold with the modified entropies replaced by the standard entropies that count $k$-surfaces whose limit sets become more and more round but without an equidistribution condition, under a topological condition on $X$: that $X$ contain no closed totally geodesic surfaces in its hyperbolic metric (see Question \ref{question:intro}, Remark \ref{remarkattheend}). Examples of closed hyperbolic 3-manifolds without closed totally geodesic surfaces are given in \cite{MR03}. Let us note that many hyperbolic knot complements have no totally geodesic surfaces: see \cite{Basilio_Lee_Malionek,Reid} and references therein. \subsection{Background and Future Directions} \subsubsection{Marked length spectrum and domination} Our work is motivated in part by the theory developed around \textit{marked length spectrum rigidity.} Recall that, for any closed $3$-dimensional Riemannian manifold $(X,h)$ with negative sectional curvature, we define its marked length spectrum to be the function $\text{MLS}_h$ which associates to every conjugacy class of $\Pi=\pi_1(X)$ the length of its geodesic representative. The following conjecture (see \cite[Problem 3.1]{BurnsKatok}) remains open. \begin{conjecture}[Rigidity of the marked length spectrum] Let $h_1$ and $h_2$ be two Riemannian metrics with negative sectional curvature on a closed manifold $X$. Then $\mathrm{MLS}_{h_1}=\mathrm{MLS}_{h_2}$ if and only if $h_1$ and $h_2$ are isometric, that is, if and only if there exists a diffeomorphism $\phi:X\to X$ such that $\phi^\ast h_2=h_1$. \end{conjecture} This conjecture has been proved for surfaces by Otal \cite{Otal} (see also \cite{Croke} for results in non-positive curvature). In dimension $3$ it has been established in two particular cases \begin{enumerate} \item if one of the metrics has constant sectional curvature: see \cite{besson1996minimal,HamenstadtMLS}; \item if the two metrics are close enough: see \cite{GuillarmouLefeuvre}. \end{enumerate} Recently, some variations on the marked length spectrum rigidity problem have been considered. For example there is the problem of understanding the consequences of the property of \emph{domination} of marked length spectra $\text{MLS}_h\geq\text{MLS}_{h'}$. Conjecturally this should imply that for two metrics $h$ and $h'$ of negative sectional curvatures, with the sectional curvatures of $h$ bounded below by those of $h'$, that $\Vol(X,h)\geq\Vol(X,h')$ (see \cite{Croke_Dairbekov,Croke_Dairbekov_Sharafutdinov} as well as \cite{Gogolev_Reber} for related rigidity results). This domination property was studied in the context of representation of surface groups in \cite{Deroin_Tholozan,Gueritaud_Kassel_Wolff} and later generalized to other contexts, such as harmonic maps, higher Teichm\"uller theory and partial hyperbolicity \cite{Alvarez_Yang,Barman_Gupta,Dai_Li,Li,Sagman}. 
A direct analogue of marked length spectrum rigidity for $k$-surfaces is marked energy spectrum rigidity for $\pi_1$-injective $k$-surfaces, that is, where we consider the function which assigns to each conjugacy class of surface subgroups of $\Pi$ the energy of the corresponding $k$-surface in the given negatively curved metric on $X$. Establishing marked energy spectrum rigidity even infinitesimally is an interesting and hard problem. If instead of the energy, one considers the map which assigns to each conjugacy class of surface subgroups of $\Pi$ the quotient of the $p$ and $q$-energies of the corresponding $k$-surface, for some $p \neq q$, then Theorem \ref{thm.asymptotic_spectra} implies the corresponding marked spectral rigidity statement for metrics of negative sectional curvature, one of which has constant curvature. \subsubsection{Thermodynamical formalism for $k$-surfaces} The marked length spectrum rigidity and domination results for the geodesic flow fit into the theory of Hölder cocycles developed by Ledrappier and Hamenst\"adt \cite{HamenstadtMLS,Ledrappier_bord}. In this theory, one associates two objects to every real-valued Hölder function $F$ over the sphere bundle $S_hX$, henceforth called a \emph{potential}. The first is a function, called the \emph{spectrum}, mapping every conjugacy class of $\Pi$ to the integral of $F$ over its unique geodesic representative, and the second is a probability measure, defined by the \emph{variational principle}, and invariant under the geodesic flow, which we refer to as an \emph{equilibrium state}. By Liv\v{s}ic's theorem (see \cite{Guillemin_Kazhdan,Livsic}), up to a suitable normalization, two potentials have the same periods if and only if they are cohomologous, which in turn holds if and only if they have the same equilibrium states. It is classical to consider three potentials in $S_hX$: the constant potential (associated to the maximal entropy measure), the geometric potential (associated to the Liouville measure) and the harmonic potential (associated to the harmonic measure). In dimension $2$, it is known (see \cite{Katok_entropy_closed,Katok4,Ledrappier_har_BM}) that if any two of these three potentials have the same periods then the ambient metric has constant curvature (the only known results in higher dimension hold for perturbations of hyperbolic metrics and are due to Flaminio \cite{Flaminio} and Humbert \cite{humbert2024katok}). It is natural to ask whether there is a similar theory of thermodynamical formalism applying to $k$-surfaces containing Theorems \ref{thm.asymptotic_spectra} and \ref{equality_energy_area}. Indeed, a similar question is raised by Labourie in \cite[Question 8]{Labourie_phase_space}. A concrete starting point for such a theory would be the analogue of Liv\v{s}ic's theorem in the context of $k$-surfaces. \begin{question} Let $X$ be a closed negatively curved manifold, let $F$ be a Hölder function defined on the space of marked frame bundles $FS$ of quasi-Fuchsian $k$-surfaces $S$ of $X$. Assume that for every quasi-Fuchsian $k$-surface $S$ the integral of $F$ over $FS$ vanishes. What can be concluded about $F$? \end{question} This question seems to belong to a cohomological theory of $\PSL$-actions, though it is not immediately clear what should be considered as a coboundary in this context. One of the main difficulties is the absence in the case of $k$-surfaces of a theory of hyperbolic dynamics, which has proven invaluable in the study of the geodesic flow. 
A key challenge lies in identifying what may serve as a substitute. The main respect in which our work is less general than the setting proposed by Labourie's question is that we focus on quasi-Fuchsian surfaces which, in particular, are $\pi_1$-injective. It would be interesting to determine the extent to which the results of this paper hold for entropy functionals defined using larger families of compact $k$-surfaces. \subsubsection{Closed totally umbilical surfaces} Finally we state a question whose affirmative answer would allow us to extend the results of Section \ref{sss.Modified_entropy} to the modified entropy functionals studied there. \begin{question} \label{question:intro} Let $X$ be a closed hyperbolizable riemannian 3-manifold with negative sectional curvature greater than or equal to $-1$. Suppose that $X$ contains a closed totally umbilical surface $S$ with mean curvature $k^{1/2}$ and with constant curvature $-1+k$ in its intrinsic metric (that is, the sectional curvature of $X$ through every tangent plane of $S$ is equal to $-1$). Does it follow that $X$ is hyperbolic? \end{question} \noindent For $k=0$, an affirmative answer follows from \cite[Corollary 1.4]{ln21}. \subsection*{Acknowledgements} {\footnotesize S.A. was supported by CSIC, via the Grupo I+D 149 ``Geometr\'ia y acciones de Grupos''. He was also supported by the IRL-2030 IFUMI, Laboratorio del Plata. B.L. was supported by the NSF under grant DMS-2202830} \section{Foliated Plateau problems and invariant measures} \subsection{Foliated Plateau problems and actions of $\PSL$}\label{ss.as_Plateau} \subsubsection{Setting and definitions}\label{ss.as_Plateau_definitions} Throughout the sequel, $X$ will denote a closed, connected, hyperbolic $3$-manifold, $h_0$ will denote its hyperbolic metric, $\tilde{X}$ will denote its universal cover, and $\Pi$ will denote its fundamental group. We identify $\tilde{X}$ with hyperbolic $3$-space $\Hyp$, and we identify $\Pi$ with a discrete subgroup of $\Isom^+(\Hyp)=\PSLc$. $\tilde{X}=\Hyp$ is naturally compactified by its ideal boundary $\bord\Hyp$, which in turn identifies with the Riemann sphere $\CP$. We thus denote this compactification by $\tilde{X}\cup\CP$. By Mostow's rigidity theorem \cite{Mostow}, $h_0$ is unique up to diffeomorphism. Our aim is to characterize this metric within the space of riemannian metrics on $X$ of sectional curvature pinched between $-b$ and $-a$, for $0<a<b$. Let $h$ be such a riemannian metric, and let $h$ also denote its lift to the universal cover $\tilde{X}$. We will be interested in the following two bundles over $\tilde{X}$. \begin{itemize} \item \emph{The unit sphere bundle} $S_h\tilde{X}\dans T\tilde{X}$, whose fibre at the point $x$ is \begin{equation*} S_h\tilde{X}_x:=\{v\ |\ \|v\|_h=1\}\ . \end{equation*} \item \emph{The frame bundle} $F_h\tilde{X}\dans T\tilde{X}\oplus T\tilde{X}\oplus T\tilde{X}$, whose fibre at the point $x$ is \begin{equation*} F_h\tilde{X}_x:=\{\xi:=(v,e_1,e_2)\ |\ (v,e_1,e_2)\ \text{oriented and $h$-orthonormal}\}\ . \end{equation*} \end{itemize} There is natural projection $p:F_h\tilde{X}\rightarrow S_h\tilde{X}$ given by \begin{equation*} p(v,e_1,e_2):=v\ , \end{equation*} making $F_h\tilde{X}$ into a bundle over $S_h\tilde{X}$ with fibre $\Bbb{S}^1$. Furthermore, since $\Pi$ preserves $h$, it acts on both $S_h\tilde{X}$ and $F_h\tilde{X}$ by differentials of isometries, and the projection $p$ is equivariant with respect to this action. 
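As a guiding example, for the hyperbolic metric $h_0$ the group $\Isom^+(\Hyp)=\PSLc$ acts freely and transitively on oriented $h_0$-orthonormal frames, so that any choice of base frame $\xi_0\in F_{h_0}\tilde{X}$ yields an identification \begin{equation*} \PSLc\longrightarrow F_{h_0}\tilde{X}\ ,\qquad g\longmapsto Dg\cdot\xi_0\ , \end{equation*} under which the left action of $\Pi$ on $F_{h_0}\tilde{X}$ corresponds to left multiplication in $\PSLc$.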
We denote by $S_hX$ and $F_hX$ the respective quotients of these bundles, and we observe that they are respectively the unit sphere bundle and the unit frame bundle of $(X,h)$. Given an oriented, embedded disk $D\dans\tilde{X}$, with unit normal vector field $n:D\rightarrow S_h\tilde{X}$, we define its \emph{Gauss lift} $\hat{D}\dans S_h\tilde{X}$ by \begin{equation*} \hat{D} := \{n(p)\ |\ p\in D\}\ . \end{equation*} Given a point $p\in D$, we define a \emph{frame} of $D$ at $p$ to be a triple $\xi:=(n(p),e_1,e_2)$, where $(e_1,e_2)$ is an oriented, orthonormal pair of tangent vectors of $D$ at $p$. We define the \emph{frame bundle} $FD\dans F_h\tilde{X}$ by \begin{equation*} FD := \{\xi(p)\ |\ \xi\ \text{a frame of $D$ at some point $p$}\}\ . \end{equation*} The Gauss lift is naturally diffeomorphic to $D$, whilst the frame bundle is naturally diffeomorphic to the unit circle bundle $SD$. We define a \emph{marked disk} in $\tilde{X}$ to be a pair $(D,p)$ where $D\dans\tilde{X}$ is an oriented disk, and $p\in D$. It will be convenient to identify each marked disk $(D,p)$ with its marked \emph{Gauss lift} $(\hat{D},n(p))$. Likewise, we define a \emph{marked frame bundle} of a disk $D$ to be a pair $(FD,\xi)$ where $FD$ is its frame bundle, and $\xi:=(n(p),e_1,e_2)$ is a frame of $FD$ based at some point $p\in D$. \subsubsection{The space of $k$-disks}\label{sss.space_k_discs} Fix $0<k<a$. We define a \emph{$k$-disk} in $(\tilde{X},h)$ to be a smoothly embedded, oriented disk $D\dans \tilde{X}$, whose extrinsic curvature with respect to $h$ is constant and equal to $k$, and whose closure in $\tilde{X}\cup\CP$ is a closed topological disk. Note that since $D$ has positive extrinsic curvature, it is locally strictly convex, and we choose its orientation such that its unit normal vector field $n$ points outwards from the convex set that it bounds. In particular, with this convention, its mean curvature $H$ is everywhere positive. Let $\KD^+_{h,k}$ denote the space of all \emph{oriented} $k$-disks in $(\tilde{X},h)$, endowed with the $C^\infty_{loc}$-topology. Recall that the \emph{intrinsic} and \emph{extrinsic curvatures} of any immersed surface $D$ in $(\tilde{X},h)$ are related by \emph{Gauss' equation} \begin{equation} \kappa_{int}=\kappa_{ext}+\sect_h|_{TD}\ . \end{equation} In particular, the intrinsic curvature of any $k$-disk satisfies \begin{equation}\label{eq_intr_curv} k-b\leq\kappa_{int}\leq k-a<0\ , \end{equation} so that every $k$-disk is of hyperbolic conformal type. \subsubsection{Asymptotic Plateau problem}\label{sss.asymptotic_Plateau_problem} We denote by $\JC^+$ the space of all oriented Jordan curves in $\CP$ endowed with the Hausdorff topology. Given $c\in\JC^+$, we say that $c$ \emph{spans} a $k$-disk $D$ whenever $\bord D=c$, with $\bord D$ furnished with the orientation that it inherits from $D$. We will require the following result from \cite[Theorem \& Definition 2.1.1]{ALS}, which is a corollary of the main results of \cite{LabourieInvent} (in the geometrically finite case) and \cite{Smith_asymp} (in the general case). \begin{theorem}[Asymptotic Plateau problem]\label{th.As_plateau_problem} For every $0<k<a$, and for every $c\in\JC^+$ there exists a unique $k$-disk $D:=\rD_{k,h}(c)$ which spans $c$. Furthermore, the function $c\in\JC^+\mapsto\rD_{k,h}(c)\in\KD^+_{k,h}$ is continuous. \end{theorem} \subsubsection{Quasicircles}\label{sss.PSL_quasicircles} Let $C\geq 1$. 
Recall that a Jordan curve $c$ in $\CP$ is a $C$-\emph{quasicircle} whenever it is the image of the real projective line $\RP$ under the action of some $C$-quasiconformal map.\footnote{We refer the reader to Ahlfors' book \cite{Ahlfors} for the basic properties of quasiconformal mappings.} Let $\QC^+(C)$ denote the space of oriented $C$-quasicircles. In what follows, we will use the following properties of quasicircles and quasiconformal maps. \begin{enumerate} \item The composition of a $C$-quasiconformal map with a M\"obius transformation, on the left or the right, is also a $C$-quasiconformal map, and the image of any $C$-quasicircle under a M\"obius transformation is also a $C$-quasicircle; \item $1$-quasiconformal maps are M\"obius transformations and $1$-quasicircles are round circles of $\CP$; \item Any closed subset of $\QC^+(C)$ is compact if and only if it does not accumulate on a singleton. \end{enumerate} In particular, by $(1)$, for all $C$, the group $\Pi$ sends $\QC^+(C)$ to itself. \subsubsection{Space of $k$-disks spanned by quasicircles} For $C\geq 1$ and $0<k<a$, we will use the following two spaces. \begin{itemize} \item The space $\FKD_{k,h}(C)$, formed of all marked frame bundles $(FD,\xi)$, where $D$ is an oriented $k$-disk spanned by a $C$-quasicircle and $\xi\in FD$. We furnish this space with the $C^\infty_\loc$-topology. \item The space $\MKD_{k,h}(C)$, formed of all Gauss lifts $(\hat D,n)$, where $D$ is an oriented $k$-disk spanned by a $C$-quasicircle and $n\in\hat D$. We likewise furnish this space with the $C^\infty_\loc$-topology. \end{itemize} $\Pi$ acts on these spaces cocompactly on the left (see \cite{ALS,LabourieGAFA}), and the canonical projection $p:\FKD_{k,h}(C)\to\MKD_{k,h}(C)$ is continuous, surjective, and $\Pi$-equivariant. We will also use the following boundary maps \begin{align*} &\bord:\FKD_{k,h}(C)\to\QC^+(C);(FD,\xi)\mapsto\bord D\ \text{and}\\ &\bord:\MKD_{k,h}(C)\to\QC^+(C);(\hat{D},n)\mapsto\bord D\ . \end{align*} In \cite{ALS}, we show that these maps are continuous. \subsubsection{Laminations, simultaneous uniformization, and the action of $\PSL$} Since $k$-disks have intrinsic curvature bounded above by $k-a<0$ (see \S \ref{sss.space_k_discs}), they are uniformized by the hyperbolic plane $\H2$. The space $\text{MKD}_{k,h}(C)$ therefore carries a natural lamination by hyperbolic surfaces, whose leaves are the fibres of $\bord$, and which we denote by $\cL_{k,h}$. Since $\Pi$ sends leaves to leaves, the quotient $\text{MKD}_{k,h}(C)/\Pi$ also carries a hyperbolic surface lamination which, furthermore, is compact (see \cite{LabourieGAFA}). We denote this lamination also by $\cL_{k,h}$ when no ambiguity arises. In a similar manner, the space $\FKD_{k,h}(C)$ carries a natural lamination by $3$-manifolds, whose leaves are likewise the fibres of $\bord$, and which we denote by $F\cL_{k,h}$. We also denote the natural lamination of the quotient $\FKD_{k,h}(C)/\Pi$ by $F\cL_{k,h}$ when no ambiguity arises. Finally, note that, in both cases, the leaves of $F\cL_{k,h}$ are the unit circle bundles of the leaves of $\cL_{k,h}$. Let $\UKD_{k,h}(C)$ denote the space of uniformizing maps of oriented $k$-disks inside $\tilde{X}$ (see \cite[Section 3]{ALS}). We identify $\FKD_{k,h}(C)$ with $\UKD_{k,h}(C)$ as follows.
First, we identify $\H2$ with the upper half-plane in $\Bbb{C}$ in the standard manner, and we define the oriented frame $(e_1^0,e_2^0)$ of this space at the point $i:=\sqrt{-1}$ by \begin{equation*} e_1^0 := 1\qquad\text{and}\qquad e_2^0 := i\ . \end{equation*} Given a $k$-disk $D$ and a frame $\xi:=(n(p),e_1,e_2)\in FD$, we define $u_\xi:\H2\rightarrow D$ to be the unique uniformizing map such that \begin{equation*} u_\xi(i)=p\qquad\text{and}\qquad Du_\xi(e_1^0)\propto e_1\ . \end{equation*} The map $\xi\mapsto u_\xi$ trivially defines a bijection from $\FKD_{k,h}(C)$ onto $\UKD_{k,h}(C)$. By Candel's simultaneous uniformization theorem \cite{Candel} (in particular, the version given in \cite[Theorem 2.7]{AlvarezSmith}), this identification is in fact a homeomorphism. Furthermore, this identification is smooth over every fibre, and varies continuously in the $C^\infty_{\mathrm{loc}}$ sense transversally to every fibre. The above identification allows us to define a right action of $\PSL$ on $\FKD_{k,h}(C)$. Indeed, for all $g\in\PSL$, there exists a unique function $\alpha_k(g):\FKD_{k,h}(C)\rightarrow\FKD_{k,h}(C)$ such that, for every $k$-disk $D$, and for every frame $\xi=(n(p),e_1,e_2)\in FD$, \begin{equation*} u_\xi\circ g = u_{\alpha_k(g)(\xi)}\ . \end{equation*} More explicitly, \begin{equation*} \alpha_k(g)(\xi) = (n(q),e_1',e_2')\ , \end{equation*} where \begin{equation*} q := u_\xi(g(i))\qquad\text{and}\qquad e_1'\propto Du_\xi\cdot Dg\cdot e_1^0\ . \end{equation*} It is straightforward to show that this action of $\PSL$ on $\FKD_{k,h}(C)$ is free, transitive on each fibre of $\bord$, and commutes with the left action of $\Pi$ on this space. Furthermore, by Candel's simultaneous uniformization theorem again, this action is continuous, and $\alpha_k(g)$ is a homeomorphism for all $g$. Furthermore, as before, this action is also smooth over every fibre, and varies continuously in the $C^\infty_{\mathrm{loc}}$ sense transversally to every fibre. Using this action of $\PSL$ on $\FKD_{k,h}(C)$, in \cite[Theorems 3.1.2 and 3.2.2]{ALS} we proved the following result. \begin{theorem}[Fibered Plateau problem]\label{t.fibered_PP} For all $C\geq 1$, the map \begin{equation*} \bord :\FKD_{k,h}(C)\to\mathrm{QC}^+(C)\ , \end{equation*} is a continuous principal $\PSL$-bundle over $\QC^+(C)$, and the map \begin{equation*} \bord :\mathrm{MKD}_{k,h}(C)\to\mathrm{QC}^+(C)\ , \end{equation*} is a continuous disk-bundle, which is an associated bundle of $\FKD_{k,h}(C)$. \end{theorem} \subsubsection{Foliated Plateau problem}\label{sss.FPP} Consider now the case where $C=1$. Let $\cC^+=\text{QC}^+(1)$ denote the set of oriented round circles in $\CP$. In this case, the spaces $\FKD_{k,h}(1)$ and $\MKD_{k,h}(1)$ identify respectively with the frame bundle $F_h\tilde{X}$ and the sphere bundle $S_h\tilde{X}$, as the following theorem shows. \begin{thmdefn}[Foliated Plateau problem]\label{th_foliated_Plateau} The Gauss lifts of oriented $k$-disks spanned by round circles of $\CP$ form a smooth foliation $\cF_{k,h}$ of $S_h\tilde{X}$, and their frames form a smooth foliation $F\cF_{k,h}$ of $F_h\tilde{X}$. Furthermore, these foliations are $\Pi$-invariant. \end{thmdefn} We call the above foliations the $k$-\emph{surface foliation} of $S_h\tilde{X}$ and the $k$-\emph{frame foliation} of $F_h\tilde{X}$ respectively. By $\Pi$-invariance, they descend respectively to foliations of $S_h X$ and $F_h X$ whose respective leaves have dimension $2$ and $3$. The above theorem has the following consequences.
\begin{enumerate} \item The laminated space $(\MKD_{k,h}(1),\cL_{k,h})$ identifies with the $k$-surface foliation of the sphere bundle $(S_h\tilde{X},\cF_{k,h})$. This identification is equivariant under the natural left action of $\Pi$. \item The laminated space $(\FKD_{k,h}(1),F\cL_{k,h})$ identifies with the foliation of the frame bundle $(F_h\tilde{X},F\cF_{k,h})$. This identification is also equivariant with respect to the natural left actions of $\Pi$. \item $F_h\tilde{X}$ inherits a free right-action of $\PSL$, which we also denote by $\alpha_k$. The orbit foliation of this action identifies with the $k$-frame foliation $F\cF_{k,h}$. \end{enumerate} We emphasize the strong analogies between the preceding constructions and those arising from geodesics. Indeed, we view the action of $\PSL$ on $F_h\tilde X$ as a higher-dimensional analogue of the geodesic flow. We likewise view the $k$-surface foliation of $S_h\tilde{X}$ as a higher-dimensional analogue of the geodesic foliation of this manifold. This analogy is further justified, for example, by the fact that the $k$-surface foliation is independent of the metric in the following sense. For $k<a$, if $h$ and $h'$ are two negatively curved metrics on $X$ with sectional curvatures bounded above by $-a<0$, then there exists a canonical homeomorphism from $S_h\tilde{X}$ to $S_{h'}\tilde{X}$ conjugating the $k$-surface foliations (see \cite[Theorem 2.1.5.]{ALS}). We view this as the analogue in the present context of the \emph{geodesic rigidity theorem} proven by Gromov in \cite{Gromov3}. \begin{remark} The analogous foliated Plateau problem for minimal surfaces has a solution for perturbations $h$ of the hyperbolic metric, as well as for metrics in a slightly larger neighborhood of the hyperbolic metric subject to strong curvature conditions (see \cite{Gromov_FolPlateau1,LoweGAFA}). However, such foliations do not always exist. Indeed, in \cite{LoweGAFA}, the second author constructs negatively-curved metrics $h$ for which $S_h\tilde{X}$ admits no foliation by lifts of minimal surfaces equivalent to the totally geodesic foliation of the unit tangent bundle of the hyperbolic metric by a leaf-preserving homeomorphism. It remains an open problem to determine necessary and sufficient conditions under which the foliated Plateau problem for minimal surfaces admits a solution. \end{remark} \begin{remark}We note that the above laminated spaces are all sublaminations of the laminated space of all suitably complete $k$-surfaces studied by Labourie in \cite{LabourieGAFA,LabourieInvent,LabourieAnnals} (see also \cite{Smith_asymp}). These sublaminations arise naturally in the study of quasi-Fuchsian $k$-surfaces. \end{remark} \subsubsection{The homogeneous case} In the case where $h=h_0$ is the hyperbolic metric, the foliation $F\cF_{k,h_0}$ and the action $\alpha_k$ described in Theorem \ref{th_foliated_Plateau} are smoothly conjugated to a straightforward homogeneous model as follows. Note first that the natural action of $\Isom^+(\Hyp)\simeq\PSLc$ on the frame bundle $F_{h_0}\tilde{X}$ is free and transitive, so that, up to a choice of base-frame, $F_{h_0}\tilde{X}\simeq\PSLc$. Right-multiplication then defines a natural, free, right-action of $\PSL$ on $F_{h_0}\tilde{X}$, which we denote by $\alpha_0$. The orbit under this action of any point $\xi:=(v,e_1,e_2)$ is then simply the frame bundle $FP$ of the oriented totally geodesic plane $P$ passing through the base point $p$ of $\xi$, with outward-pointing normal $v$.
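It will be shown below that the action $\alpha_0$ is conjugate to $\alpha_k$ via the relation $k=\tanh^2(R)$. For the reader's convenience, we briefly recall the standard computation underlying this relation; the Fermi coordinates used in this sketch appear nowhere else in the paper. In Fermi coordinates about a totally geodesic plane $P\dans\Hyp$, the hyperbolic metric takes the form
\begin{equation*}
h_0 = dR^2 + \cosh^2(R)\,g_{\H2}\ ,
\end{equation*}
where $R$ denotes the signed distance to $P$. The second fundamental form of the equidistant surface $\{R=\mathrm{const}\}$ is $\cosh(R)\sinh(R)\,g_{\H2}$, so that its shape operator is $\tanh(R)\,\mathrm{Id}$ and both of its principal curvatures equal $\tanh(R)$. Its extrinsic curvature is therefore $k:=\tanh^2(R)$, its mean curvature is $\tanh(R)=k^{\frac{1}{2}}$ and, by Gauss' equation, its intrinsic curvature is $-1+\tanh^2(R)=k-1$ (compare Question \ref{question:intro}). In particular, as $R$ varies over $(0,\infty)$, $k$ varies over $(0,1)$, and the equidistant surface at distance $R$ from $P$ is a $k$-disk whose ideal boundary is the round circle $\bord P$.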
Alternatively, we may identify $F_{h_0}\tilde{X}$ with the set of uniformizing maps of oriented totally geodesic planes of $(\tilde{X},h_0)$. This right action $\alpha_0$ is smoothly conjugated to the right action $\alpha_k$. Indeed, let $FP$ be the frame bundle of an oriented totally geodesic plane $P$. Write $k=\tanh^2(R)$ and consider the time $R$ map of the \emph{frame flow} $\hat G_R:F_{h_0}\tilde X\to F_{h_0}\tilde X,(p,v,e_1,e_2)\mapsto(G_R(p,v),e_1',e_2')$, where $G_R$ denotes the time $R$ map of the geodesic flow and $(e_1',e_2')$ denote the parallel transport of $(e_1,e_2)$ along this flow. The image of the frame bundle $FP$ of the totally geodesic plane $P$ is simply the frame bundle $FD$ of the $k$-disk $D$ spanned by $\bord P$, and $\hat G_R$ is simply a uniformizing map for $D$. It follows that $\hat G_R$ conjugates $\alpha_0$ and $\alpha_k$. In summary, we have the following identification. \begin{theorem}[The homogeneous model]\label{t.homogeneous_model} The right action $\alpha_k$ of $\PSL$ on $F_{h_0}\tilde{X}$ is smoothly conjugated to the homogeneous action of $\PSL$ on $\PSLc$ by multiplication on the right. \end{theorem} Since $p:F_{h_0}\tilde{X}\rightarrow S_{h_0}\tilde{X}$ is a principal $\text{SO}(2)$-bundle, $S_{h_0}\tilde{X}$ likewise identifies with the quotient space $\PSLc/\text{SO}(2)$. The images of the orbits of the homogeneous $\PSL$-action under this identification are then the Gauss lifts of totally umbilic planes. It follows that the $k$-surface foliation of $S_{h_0}\tilde{X}$ is likewise smoothly conjugate to the homogeneous foliation of $\PSLc/\text{SO}(2)$ by $\PSL$-orbits. \subsection{Quasi-Fuchsian surfaces and topological counting} \subsubsection{Quasi-Fuchsian surfaces} Given $C\geq 1$, we say that a surface subgroup $\Gamma\leq\Pi$ is $C$-\emph{quasi-Fuchsian} whenever $\partial_\infty\Gamma\dans\C\PP^1$ is a $C$-quasicircle. Let $\QF(C)$ denote the set of conjugacy classes of $C$-quasi-Fuchsian surface subgroups of $\Pi$. For any such conjugacy class $[\Gamma]$, we define the $k$-surface $S_{k,h}([\Gamma])$ by \begin{equation}\label{eq.rep_surface} S_{k,h}([\Gamma]):=\rD_{k,h}(\partial_\infty\Gamma)/\Gamma\ , \end{equation} where $\Gamma$ is some representative of $[\Gamma]$. Note that this quotient is a closed $k$-surface in $(X,h)$ whose fundamental group identifies with $\Gamma$. Note, furthermore, that it is independent of the representative chosen. We thus call $S_{k,h}([\Gamma])$ the $k$-surface \emph{representing} $[\Gamma]$. \subsubsection{Topological counting of quasi-Fuchsian surfaces} It follows from a theorem of Thurston (see, for example, \cite[Theorem 4.5.]{LabourieBourbaki} for a minimal-surfaces based proof) that the set of conjugacy classes of quasi-Fuchsian subgroups $\Gamma\leq\Pi$ of genus $\text{g}(\Gamma)$ less than some fixed $g$ is finite. In \cite{KM2} (see also \cite{CMN}), Kahn--Markovi\'c provide a precise asymptotic estimate of the number of such conjugacy classes as a function of their genus, showing, in particular, that such classes are abundant. \begin{theorem}[Kahn--Markovi\'c's topological counting \cite{KM2} (see also \cite{CMN})]\label{th.KM2} If $(X,h_0)$ is a closed hyperbolic $3$-manifold then, for every $C\geq 1$, \begin{equation}\label{eq.topo_counting}\lim_{g\to\infty}\frac{1}{2g\log(g)}\log\#\{[\Gamma]\in\mathrm{QF}(C);\,\mathrm{g}(\Gamma)\leq g\}=1.
\end{equation} \end{theorem} \begin{remark}To be precise, Kahn--Markovi\'c prove the above result only for $C=\infty$. In \cite[Theorem 4.2.]{CMN}, Calegari--Marques--Neves use the formula proven by M\"uller--Puchta in Lemma \ref{l.Muller_Puchta} to show that the above estimate also holds for all $C>1$. \end{remark} \subsection{Equidistribution of $k$-surfaces and $\PSL$-invariant measures} \subsubsection{Conformal currents, $\PSL$-invariant measures and laminar measures}\label{sss.conf_currents_PSL_inv_laminar_measures} The terminology of fibered Plateau problems introduced in \S \ref{sss.PSL_quasicircles} allows us to generalize to higher dimensions the identification between geodesic currents and invariant measures of the geodesic flow studied, for example, by Bonahon in \cite{Bonahon_laminations}. The following construction was first suggested by Labourie in \cite{LabourieBourbaki}, and further developed by the present authors in \cite{ALS} (see also the recent works \cite{Brody_Reyes,MN_currents}). Consider the bundles $\bord:\text{FKD}_{k,h}(C)\to\text{QC}^+(C)$ and $\bord:\text{MKD}_{k,h}(C)\to\text{QC}^+(C)$ defined in \S \ref{sss.PSL_quasicircles}. Recall that the space $\QC^+(C)$ is separable and locally compact. We say that a Borel regular measure $\mu$ on $\FKD_{k,h}(C)$ (resp. $\MKD_{k,h}(C)$) is $\PSL$-invariant whenever there exists a Borel regular measure $m$ on $\QC^+(C)$ such that, in every trivializing chart $U\times F$ of $\bord$, \begin{equation*} \mu|_{U\times F}=m|_U\times\lambda_F\ , \end{equation*} where $\lambda_F$ denotes the Haar measure on $\PSL$ (resp. the hyperbolic area of $\D$). Borel regular $\PSL$-invariant measures on $\FKD(C)$ (resp. $\MKD(C)$) are trivially in one-to-one correspondence with Borel regular measures on $\QC^+$ (see \cite[Lemma 4.2.1.]{ALS}). We call $m$ the \emph{factor} of $\mu$, and we call $\mu$ the \emph{lift} of $m$. By construction, a $\PSL$-invariant measure $\mu$ is also $\Pi$-invariant if and only if its factor $m$ is. In other words, we have natural bijective correspondences between each the following classes of objects (see \cite[\S 4.2]{ALS} for a slightly different formalism). \begin{enumerate} \item $\Pi$-invariant Borel regular measures $\hat m$ on the space $\text{QC}^+(C)$ of oriented $C$-quasicircles; \emph{these are the analogues in our context of geodesic currents}. \item $(\Pi,\PSL)$ bi-invariant Borel measures $\hat\nu$ over $\FKD_{k,h}(C)$; \emph{these are the analogues in our context of measures invariant under the geodesic flow}. \item $\Pi$-invariant Borel measures $\hat\mu$ on $\text{MKD}_{k,h}(C)$ which are totally invariant for the lamination $\cL_{k,h}$, that is, whose disintegration on each leaf of $\cL_{k,h}$ is a multiple of the hyperbolic area; \emph{these are the analogues in our context of totally invariant measures of the geodesic foliation}. \end{enumerate} According to Labourie's terminology (see \cite{LabourieBourbaki,MN_currents}), measures of the first class are called \emph{conformal currents}, measures of the second are called \emph{$(\Pi,\PSL)$-bi-invariant measures}, and measures of the third are called \emph{laminar measures}. This is summarized in the following table. \begin{center} \begin{tabular}{|>{\raggedright}p{5cm}|>{\raggedright}p{3.5cm}|>{\raggedright}p{3.5cm}|} \hline \small\noindent{\bf Measure type}& \small\noindent{\bf Geodesic flow analogue}& \small\noindent {\bf AKA. 
(see \cite{LabourieBourbaki,MN_currents})} \tabularnewline \hline \small\noindent $\Pi$ invariant over $\text{QC}^+(C)$ & \small\noindent geodesic currents & \small\noindent conformal currents \tabularnewline \small\noindent $(\Pi,\PSL)$ bi-invariant over $\FKD_{k,h}(C)$ & \small\noindent invariant measures of geodesic flow & \small\noindent bi-invariant measures \tabularnewline \small\noindent $\Pi$ invariant over $\text{MKD}_{k,h}(C)$ and invariant wrt. $\cL_{k,h}$& \small\noindent invariant measures of geodesic foliation & \small\noindent laminar measures \tabularnewline \hline \end{tabular} \end{center} \begin{defi}[Ergodic conformal currents] A conformal current $\hat m$ in $\QC^+(C)$ is said to be \emph{ergodic} if every $\Pi$-invariant measurable function on $\QC^+(C)$ is constant $\hat m$-almost everywhere. \end{defi} \subsubsection{Lifts of closed quasi-Fuchsian surfaces}\label{sss.lifts} Choose $C\geq 1$, let $[\Gamma]$ be a conjugacy class in $\text{QF}(C)$ and let $S:=S_{k,h}([\Gamma])$. We define the measure $\mu_S$ on the sphere bundle $S_h X$ such that, for every Borel subset $E$ of $S_h X$, \begin{equation}\label{eq.inv_measure} \mu_S(E)=\frac{1}{2\pi|\chi(S)|}\lambda(\pi(E\cap\hat S))\ , \end{equation} where $\hat S$ denotes the Gauss lift of $S$, and $\lambda$ denotes the area form of the Poincar\'e metric of $S$.\\ Denote $c:=\bord\Gamma$, and let $\delta(c)$ denote the Dirac mass on $\text{QC}^+(C)$ supported on $c$. We now introduce the following conformal measure, bi-invariant measure, and laminated measure.\\ \noindent\emph{The conformal current associated to $S$}. We define \begin{equation*} \hat m(c)=\frac{1}{2\pi|\chi(S)|}\sum_{\gamma\in\Pi/\Gamma}\delta(\gamma\cdot c)\ , \end{equation*} where $\gamma\cdot c$ abusively denotes the image of $c$ under any representative of $\gamma\in\Pi/\Gamma$. By construction, $m(c)$ is infinite, atomic and $\Pi$-invariant. We call it the \emph{conformal current} associated to $c$.\\ \noindent\emph{The bi-invariant measure associated to $S$}. Recall that the frame bundle $FD_c$ of $D_c$ naturally identifies, up to a choice of base point, with $\PSL$. Let $\omega_{k,h}(c)$ denote the image of the Haar measure under this identification. We define the $(\Pi,\PSL)$-bi-invariant measure associated to $S$ over $\FKD_{k,h}(C)$ by \begin{equation*} \hat\nu_{k,h}(c)=\frac{1}{2\pi|\chi(S)|}\sum_{\gamma\in\Pi/\Gamma}\omega_{k,h}(\gamma.c)\ . \end{equation*} This measure is analogous to the lift to $S_h \tilde X$ of the Dirac mass supported by a periodic orbit of the geodesic flow.\\ \noindent \emph{The laminar measure associated to $S$.} Let $\lambda_{k,h}(c)$ denote the area measure of the Poincar\'e metric of the disk $D_c$. The laminar measure associated to $S$ is the measure \begin{equation*} \hat\mu_{k,h}(c)=\frac{1}{2\pi|\chi(S)|}\sum_{\gamma\in\Pi/\Gamma}\lambda_{k,h}(\gamma.c)\ . \end{equation*} The quotient of this measure under the action of $\Pi$ yields a Borel regular measure $\mu_{k,h}$ over $\MKD_{k,h}(C)/\Pi$ supported on $\hat S$, where here $\hat S$ is viewed as a closed leaf of $\cL_{k,h}$. We identify the measure $\mu_{k,h}$ with $\mu_S$ as follows. Note first that leaves of $\cL_{k,h}$ are identified with Gauss lifts of $k$-surfaces of $X$. It follows that any continuous function $\phi:\MKD_{k,h}(C)/\Pi\to\R$ restricts to a continuous function $\phi:\hat S\to\R$. We then have \begin{equation}\label{eq.ident_measures} \int_{\MKD_{k,h}(C)/\Pi} \phi d\mu_{k,h}(c)=\int_{\hat S}\phi d\mu_S\ . 
\end{equation} Note that the conformal current depends neither on the metric $h$, nor on the constant $k$, but that bi-invariant measures and laminar measures do depend on these parameters.\\ \noindent \emph{The area measure.} Finally, we define the measure $\overline{\mu}_{S,h}$ on $S_hX$ by \begin{equation}\label{eq.area_measure} \overline{\mu}_{S,h}(E)=\frac{1}{2\pi|\chi(S)|}\Area_h(\pi(E\cap\hat S))\ , \end{equation} where $\Area_h$ denotes the area measure of the restriction to $S$ of the metric $h$. The relationship between the measures $\mu_S$ and $\overline\mu_{S,h}$ is given by the following result. \begin{lemma}[\cite{ALS}, Lemma 4.2.2]\label{lem_comparison_area_laminar} If $-b\leq \sect_h\leq -a<0$ over $X$, and if $0<k<a$, then, for every closed, quasi-Fuchsian $k$-surface $S\dans X$, \begin{equation*} (a-k)\overline{\mu}_{S,h}\leq\mu_S\leq(b-k)\overline{\mu}_{S,h}. \end{equation*} \end{lemma} \subsubsection{Classification of ergodic $(\Pi,\PSL)$-bi-invariant measures} Following the main idea of \cite{CMN} (see also \cite{LabourieBourbaki}), we use Ratner's theory to study limits of measures of the form $\hat\mu_{k,h}(c)$ as $c$ tends to some round circle $c_0$. Note that the space $\cC^+$ of round circles in $\CP$ is a $3$-dimensional manifold carrying a natural Haar measure which we denote by $\Leb$. \begin{theorem}[Classification of ergodic conformal currents] \label{thm.Ratner} For every ergodic conformal current $\hat m$ on $\cC^+$, the following dichotomy holds. \begin{itemize} \item Either $\hat m$ is proportional to $\Leb$. \item Or $\hat m$ is the conformal current of a closed Fuchsian surface. \end{itemize} \end{theorem} \begin{proof} We work with the hyperbolic metric $h:=h_0$ on $\tilde{X}$. Let $\hat\nu_0$ be an ergodic measure over $\PSLc\simeq F_{h_0}\tilde{X}$ which is both invariant under the left action of $\Pi$, and under the right action by multiplication of $\PSL$. By Ratner's classification theorem \cite{Ratner_Duke} (see \cite{Einsiedler} for a simple proof for the case of $\PSL$-actions), this measure is \emph{homogeneous}. That is, it is the Haar measure supported on some closed orbit $v_0G$ of some closed, connected Lie subgroup $G\leq\PSLc$ containing $\PSL$. Since the only closed and connected Lie subgroups of $\PSLc$ containing $\PSL$ are $\PSL$ and $\PSLc$ itself, this yields the following dichotomy: \begin{itemize} \item either $\nu_0$ is the Haar measure of $F_{h_0}\tilde X/\Pi$; \item or $\nu_0$ is the Haar measure of the frame bundle of some closed, totally-umbilic surface of $\tilde X/\Pi$. \end{itemize} The result now follows. \end{proof} Theorem \ref{thm.Ratner} also yields the following dichotomy for ergodic laminar measures over $S_hX$. \begin{theorem}[Dichotomy for laminar measures]\label{thm.dichotomy_laminar} There exists a unique ergodic laminar probability measure $\mu$ on $S_hX$ with full support. All other ergodic laminar probability measures on $S_hX$ are associated to closed Fuchsian $k$-surfaces. \end{theorem} \begin{remark} Measures which arise as limits of random minimal surfaces are studied by Kahn--Markovi\'c--Smilga in \cite{KMS}. We expect analogous results to hold for limits of random $k$-surfaces. \end{remark} \begin{remark} When $\Pi$ contains no Fuchsian subgroup, there exists a unique laminar probability measure on $S_hX$ which is fully supported. \end{remark} \begin{proof} Let $\hat\mu$ denote the lift of $\mu$ to $S_h\tilde{X}$ and let $\hat m$ denote its associated conformal current on $\cC^+$. 
If $\mu$ has full support, then so too does $\hat m$ so that, by Theorem \ref{thm.Ratner}, $\hat m$ is proportional to $\Leb$, and the first assertion follows. Likewise, if $\mu$ does not have full support, then, by Theorem \ref{thm.Ratner} again, $\hat\mu$ is the laminar measure associated to some closed Fuchsian $k$-surface, as in \S \ref{sss.lifts}, and the second assertion follows. \end{proof} \subsubsection{Equidistribution and Kahn--Markovi\'c's sequences} The following facts are key to proving the main results of \cite{ALS}. \begin{lemma}\label{lem.conv_fuchsian} Let $(\eps_n)_{n\in\mathbb{N}}$ be a sequence of positive numbers converging to zero. For all $n$, let $\Gamma_n$ be a $(1+\eps_n)$-quasi-Fuchsian subgroup of $\Pi$, denote $c_n:=\bord\Gamma_n$ and let $S_n=S_{k,h}([\Gamma_n])$ denote the $k$-surface representing $[\Gamma_n]$ in $(X,h)$. Then every accumulation point of $(\hat\mu_{k,h}(c_n))_{n\in\mathbb{N}}$ is a laminar measure on $\MKD_{k,h}(1)\simeq S_{h}\tilde{X}$. \end{lemma} Kahn--Markovi\'c's construction \cite{KM1} of quasi-Fuchsian subgroups of $\Pi$ yields a sequence of surfaces with remarkable properties. \begin{theorem}[Equidistribution of Kahn--Markovi\'c sequences]\label{KM_equidistrib} There exists a sequence $(\eps_n)_{n\in\mathbb{N}}$ of positive numbers converging to zero, and a sequence $(\Gamma_n)_{n\in\mathbb{N}}$ of quasi-Fuchsian subgroups of $\Pi$ satisfying the following properties. \begin{enumerate} \item For all $n$, $\Gamma_n$ is $(1+\eps_n)$-quasi-Fuchsian; and \item the sequence of laminar measures $(\hat \mu_{k,h}(\bord\Gamma_n))_{n\in\mathbb{N}}$ converges to a laminar measure $\hat \mu_\infty$ on $S_h\tilde{X}$ with full support. \end{enumerate} \end{theorem} We provide a proof of this theorem in \cite{ALS} by showing that the $k$-surfaces representing the subgroups constructed by Kahn--Markovi\'c in \cite{KM1} do not accumulate on closed surfaces, from which Theorem \ref{KM_equidistrib} follows as a consequence of Theorem \ref{thm.dichotomy_laminar}. In \cite[Proposition 6.1]{ln21}, in collaboration with A. Neves, the second author proves the following stronger result. \begin{theorem}[Lowe--Neves]\label{th.lowe_neves} There exists a sequence $(\eps_n)_{n\in\mathbb{N}}$ of positive numbers converging to zero, and a sequence $(\Gamma_n)_{n\in\mathbb{N}}$ of quasi-Fuchsian subgroups of $\Pi$ satisfying the following properties. \begin{enumerate} \item For all $n$, $\Gamma_n$ is $(1+\eps_n)$-quasi-Fuchsian; and \item the sequence of laminar measures $(\hat \mu_{k,h}(\bord\Gamma_n))_{n\in\mathbb{N}}$ converges to a fully supported and ergodic laminar measure on $S_h\tilde{X}$. \end{enumerate} \end{theorem} \section{Fuchsian, almost Fuchsian and totally umbilical $k$-surfaces} In this section we study some geometric properties of $k$-surfaces in closed $3$-manifolds of negative sectional curvature bounded below by $-1$. We define the $p$-energy of such a surface, and we prove a $k$-surface analogue of the result \cite{Seppi} of Seppi, namely that almost-Fuchsian $k$-surfaces in hyperbolic $3$-space are almost totally umbilical. \subsection{$p$-energy of surfaces} Recall that $(X,h)$ is a closed $3$-dimensional riemannian manifold of sectional curvature pinched between $-b$ and $-a$, where $0<a<b$.
Given a closed, oriented immersed surface $S\dans X$, and a real number $p\geq 0$, we define the \emph{$p$-energy} of $S$ by \begin{equation*} W_h^{p}(S):=\int_S H^p d\Area_h\ , \end{equation*} and we define its \emph{normalized $p$-energy} by \begin{equation*} \overline{W}_h^{p}(S):=k^{-p/2}W_h^p(S)=k^{-p/2}\int_S H^p d\Area_h\ . \end{equation*} \begin{lemma}\label{lem_lower_bounds_energy} If $-1\leq\sect_h\leq -a$, $0<k<a$, $p\geq 0$, then, for every closed $k$-surface $S$, \begin{enumerate} \item $\Area_h(S)\geq 2\pi|\chi(S)|/(1-k)$ with equality holding if and only if the sectional curvature of $h$ equals $-1$ along $TS$; \item $W_h^{p}(S)\geq k^{p/2}\Area_h(S)$ with equality holding (for $p>0$) if and only if $S$ is totally umbilical. \end{enumerate} \end{lemma} \begin{proof} Indeed, by Gauss' equation and the Gauss--Bonnet theorem, \begin{equation*} 2 \pi \chi(S) = \int_{S} \kappa_{int}d\Area_h \geq \int_{S} (-1 + k)d\Area_h=\Area_h(S) (k-1)\ . \end{equation*} Hence \begin{equation*} \Area_h(S) \geq \frac{2\pi\left|\chi(S)\right|}{(1-k)}\ , \end{equation*} with equality holding if and only if $\sect_h=-1$ over $TS$. This proves the first assertion. Since the surfaces studied here are oriented in such a way that their principal curvatures are always positive, the arithmetic-geometric mean inequality yields \begin{equation*} H\geq k^{\frac{1}{2}}\ , \end{equation*} so that \begin{equation*} W_h^p(S) \geq k^{p/2}\Area_h(S)\ . \end{equation*} Furthermore, when $p>0$, equality holds if and only if $H$ is constant and equal to $k^{\frac{1}{2}}$, that is, if and only if $S$ is totally umbilical, and this proves the second assertion. \end{proof} \subsection{Almost totally umbilical $k$-disks in hyperbolic space}\label{ss.totally_umb} Consider first the case where $h:=h_0$ is the hyperbolic metric on $\tilde{X}$. Fix $k\in(0,1)$ and $p>0$. We show that, as $\eps$ tends to $0$, $k$-disks in $(\tilde{X},h_0)$ spanned by $(1+\eps)$-quasicircles become uniformly close to totally umbilical surfaces in the sense that their principal curvatures become uniformly close to $k^{\frac{1}{2}}$. This is the $k$-surface analogue of the result \cite{Seppi} of Seppi concerning minimal surfaces spanned by almost round quasicircles. However, in the present case, the proof is simpler. \begin{lemma}\label{l.almost_tot_umbilical} For all $\delta>0$ there exists $\eps>0$ having the property that, for every oriented $(1+\eps)$-quasicircle $c$, if $H$ denotes the mean curvature of the unique $k$-disk in $(\tilde{X},h_0)$ spanned by $c$, then \begin{equation*} k^{\frac{1}{2}}\leq H\leq k^{\frac{1}{2}}(1+\delta)\ . \end{equation*} \end{lemma} \begin{proof} The first inequality follows immediately from the arithmetic-geometric mean inequality. Suppose now that the second inequality does not hold. Then there exist $\delta>0$, a sequence $(\eps_n)_{n\in\N}$ converging to $0$, and a sequence $(D_n,p_n)_{n\in\N}$ of marked $k$-disks such that, for all $n$, the curve $c_n:=\bord D_n$ is a $(1+\eps_n)$-quasicircle, and the mean curvature $H_n$ of $D_n$ satisfies \begin{equation}\label{eq.umbilical} \liminf_{n\to\infty} H_n(p_n)>(1+\delta)k^{\frac{1}{2}}\ . \end{equation} There exists a sequence $(\gamma_n)_{n\in\N}$ of isometries of $(\tilde{X},h_0)$ such that, for all $n$, $\gamma_n\cdot p_n$ lies within some fixed compact set.
Upon extracting a subsequence, we may suppose that the sequence $(\gamma_n\cdot c_n)_{n\in\N}$ of quasicircles converges in the Hausdorff sense, either to a round circle $c_\infty$, say, or to a single ideal point $q_\infty$, say. The latter is impossible, for otherwise the sequence $(\gamma_n\cdot D_n)_{n\in\N}$ would converge to a $k$-surface in $(\tilde{X},h_0)$ with ideal boundary $\{q_\infty\}$, which is absurd by \cite[Proposition 2.5.3]{LabourieInvent}. By Theorem \ref{th.As_plateau_problem}, the sequence $(\gamma_n\cdot D_n)_{n\in\N}$ then converges smoothly to a $k$-disk $D_\infty$, say, of $(\tilde{X},h_0)$ whose ideal boundary is the round circle $c_\infty$. In particular, $D_\infty$ is totally umbilical, so that its mean curvature is everywhere equal to $k^{\frac{1}{2}}$ (see \S \ref{sss.FPP}). This contradicts \eqref{eq.umbilical}, and the result follows. \end{proof} By Gauss' equation, every $k$-surface of $(\tilde{X},h_0)$ has intrinsic curvature equal to $(k-1)$. Upon combining Lemma \ref{l.almost_tot_umbilical} with the Gauss--Bonnet theorem, we therefore obtain the following result. \begin{corollary}\label{corollarouille} For all $0<k<1$, for all $p>0$, for every sequence $(\eps_n)_{n\in\N}$ of positive numbers that tends to zero, and every sequence $(S_n)_{n\in\N}$ of closed $(1+\eps_n)$-quasi-Fuchsian $k$-surfaces in $(X,h_0)$, with respective mean curvatures $H_n$, \begin{equation*} \lim_{n\to\infty}\frac{k^{-p/2}}{\Area_{h_0}(S_n)}\int_{S_n} H_n^p d\Area_{h_0}=1\ , \end{equation*} and \begin{equation*} \lim_{n\to\infty}\frac{(1-k)k^{-p/2}}{2\pi|\chi(S_n)|}\int_{S_n} H_n^p d\Area_{h_0}=1\ . \end{equation*} That is, the marked normalized $p$-energy spectrum of $k$-surfaces in $(X,h_0)$ is asymptotic to the marked area spectrum of $k$-surfaces in this space. \end{corollary} \section{The marked energy spectrum} \subsection{Proof of Theorem \ref{th.rigidity_area}}\label{ss.Rigidity} Now let $h$ be any riemannian metric on $X$ with sectional curvature pinched between $-1$ and $-a$ for some $0<a<1$. Suppose, furthermore, that the marked $p$-energy spectrum of $k$-surfaces of $h$ is equal to that of $h_0$, that is, that equality holds in \eqref{eqn.rigidity_energy}. We will prove that $h$ is isometric to $h_0$. By Mostow's rigidity theorem \cite{Mostow}, it suffices to show that $h$ has constant sectional curvature equal to $-1$. \subsubsection{Sectional curvature approaches $-1$ over asymptotically Fuchsian $k$-surfaces} Let $([\Gamma_n])_{n\in\N}\in(\QF^+)^\N$ be a sequence of conjugacy classes of quasi-Fuchsian surface subgroups of $\Pi$. For all $n$, let $c_n$ denote the ideal boundary of $\Gamma_n$. Suppose that there exists a sequence $(\eps_n)_{n\in\N}$ converging to $0$ such that, for all $n$, $c_n$ is a $(1+\eps_n)$-quasicircle. Suppose, furthermore, that $(c_n)_{n\in\N}$ converges in the Hausdorff sense to some round circle $c_\infty$, say. For all $n$, denote \begin{equation*} S_n=\rD_{k,h}(c_n)/\Gamma_n\dans X\ , \end{equation*} and let $H_n$ denote the mean curvature of this surface. Let $\sigma:S_hX\to\R$ denote the map which associates to every $v\in S_hX$ the sectional curvature of $h$ along the plane $v^\perp$. Recall the definition of the area measure $\overline{\mu}_{S_n,h}$ from \S \ref{sss.lifts}. \begin{lemma}\label{l.sect_approaches-1} \begin{equation*} \lim_{n\to\infty} \int_{S_hX}(1+\sigma)d\overline{\mu}_{S_n,h}=0\ . \end{equation*} \end{lemma} \begin{remark} Note that we use here the fact that the sectional curvature of $h$ is bounded below by $-1$.
\end{remark} \begin{proof} Indeed, for all $n$, denote \begin{equation*} S_n^0=\text D_{k,h_0}(c_n)/\Gamma_n\ , \end{equation*} and let $H_n^0$ denote the mean curvature of this surface with respect to the hyperbolic metric $h_0$. By hypothesis, the marked $p$-energy spectra of $h$ and $h_0$ coincide so that, for all $n$, \begin{equation*} \int_{S^0_n}(H_n^0)^p d\Area_{h_0}=\int_{S_n} H_n^p d\Area_{h}\ , \end{equation*} and it follows by Corollary \ref{corollarouille} that \begin{equation}\label{eq.rigidity_marked_spectra} \lim_{n\to\infty}\frac{1}{2\pi|\chi(S_n)|}\int_{S_n} H_n^p d\Area_{h}=\frac{k^{p/2}}{(1-k)}\ . \end{equation} On the other hand, the arithmetic-geometric mean inequality yields \begin{eqnarray*} \int_{S_n} H_n^p d\Area_{h}&\geq&\int_{S_n} k^{p/2}d\Area_{h}\\ &=&\frac{k^{p/2}}{(1-k)}\bigg(\int_{S_n}(1+\sect_h|_{TS_n})d\Area_{h}\\ & &\qquad\qquad + \int_{S_n} (-\sect_h|_{TS_n} - k)d\Area_{h}\bigg)\ . \end{eqnarray*} By Gauss' equation, for all $n$, the intrinsic curvature of $S_n$ is equal to $k+\sect_h|_{TS_n}$, so that, by the Gauss--Bonnet theorem, the above inequality becomes \begin{equation*} \int_{S_n} H_n^p d\Area_{h} \geq \frac{k^{p/2}}{(1-k)}\bigg(\int_{S_n}(1+\sect_h|_{TS_n})d\Area_{h} - 2\pi\chi(S_n)\bigg)\ , \end{equation*} and dividing by $2\pi\left|\chi(S_n)\right|$ yields \begin{equation}\label{eq.GB} \frac{1}{2\pi|\chi(S_n)|}\int_{S_n} H_n^p d\Area_{h}\geq \frac{k^{p/2}}{(1-k)}\bigg(\int_{S_hX}(1+\sigma)d\overline\mu_{S_n,h}+1\bigg)\ . \end{equation} Taking limits in \eqref{eq.GB}, and bearing in mind \eqref{eq.rigidity_marked_spectra}, we therefore obtain \begin{equation*} \limsup_{n\rightarrow\infty}\int_{S_hX}(1+\sigma)d\overline\mu_{S_n,h}\leq 0\ . \end{equation*} However, by hypothesis, \begin{equation*} (1+\sigma)\geq 0\ . \end{equation*} Each of the above integrals is therefore non-negative, and the result follows. \end{proof} \subsubsection{Using the equidistribution} We now make use of the equidistribution result of Theorem \ref{KM_equidistrib}. Indeed, let $([\Gamma_n])_{n\in\N}$ be a sequence of conjugacy classes of oriented, quasi-Fuchsian subgroups of $\Pi$ as in Theorem \ref{KM_equidistrib}. \begin{itemize} \item By Lemma \ref{lem_comparison_area_laminar}, for all $n$, $(a-k)\overline\mu_{S_n,h}\leq\mu_{S_n}\leq(1-k)\overline\mu_{S_n,h}$; \item By Theorem \ref{KM_equidistrib}, the weak-$\ast$ limit of the sequence $\mu_{S_n}$ is a probability measure $\mu_\infty$ on $S_hX$ of full support. \end{itemize} By Lemma \ref{l.sect_approaches-1}, \begin{equation*} \int(1+\sigma)d\mu_\infty=0\ , \end{equation*} and since $1+\sigma\geq 0$, \begin{equation*} \sigma=-1\qquad \mu_\infty\text{-a.e.} \end{equation*} in $S_hX$. Since $\mu_\infty$ has full support and $\sigma$ is continuous, it follows that $\sigma=-1$ over $S_hX$. In other words, $h$ has sectional curvature everywhere equal to $-1$, so that, by Mostow's rigidity theorem, $h$ and $h_0$ are isometric. \subsection{Proof of Theorem \ref{thm.asymptotic_spectra}}\label{ss.proof_of_thm_B} Suppose now that the sectional curvature of $X$ is pinched between $-b$ and $-a$ for some $0<a\leq b$. We first assume that the marked normalized energy spectrum is asymptotic to the marked area spectrum, that is, \begin{equation}\label{eq.hypothesis} \lim_{n\to\infty}\frac{1}{\Area_h(S_n)}\int_{S_n}\big(k^{-\frac{1}{2}}H_n-1\big)d\Area_h=0\ , \end{equation} for every sequence $(S_n)_{n \in\N}$ of $(1+\eps_n)$-quasi-Fuchsian $k$-surfaces such that $\eps_n\to 0$ as $n\to\infty$.
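As a brief sanity check, we note that the hyperbolic metric itself satisfies \eqref{eq.hypothesis}; the following observation is recorded only for the reader's convenience. Applying Corollary \ref{corollarouille} with $p=1$ gives, for any such sequence $(S_n)_{n\in\N}$ in $(X,h_0)$,
\begin{equation*}
\lim_{n\to\infty}\frac{1}{\Area_{h_0}(S_n)}\int_{S_n}\big(k^{-\frac{1}{2}}H_n-1\big)d\Area_{h_0}
=\lim_{n\to\infty}\frac{k^{-\frac{1}{2}}}{\Area_{h_0}(S_n)}\int_{S_n}H_n\,d\Area_{h_0}-1=0\ ,
\end{equation*}
so that the hypothesis \eqref{eq.hypothesis} is non-vacuous.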
Given $C\geq 1$, let $H:\MKD_{k,h}(C)\to\R$ denote the map which associates to every $(D,p)\in \MKD_{k,h}(C)$ the mean curvature of the oriented $k$-disk $D$ at the point $p$. This map is obviously $\Pi$-invariant, and we denote the function that it defines over the quotient $\MKD_{k,h}(C)/\Pi$ also by $H$. When $C=1$, $H$ is the map which associates to each $v\in S_hX$ the mean curvature of the leaf of $\cF_{k,h}$ containing $v$. Theorem \ref{thm.asymptotic_spectra} will follow from the following result. \begin{prop}\label{p.tot_asymptotic} $H=k^{\frac{1}{2}}$ over $S_hX$. \end{prop} \begin{proof}[Proposition \ref{p.tot_asymptotic} implies Theorem \ref{thm.asymptotic_spectra}] Indeed, let $\hat L$ be a leaf of $\cF_{k,h}$, and let $L$ denote its projection in $S_hX$. Let $v$ be a point of $\hat L$, and let $p$ denote its base point. If $H=k^{\frac{1}{2}}$ at $v$ then both principal curvatures of $L$ at $p$ coincide, that is $p$ is an umbilic point of $L$. Proposition \ref{p.tot_asymptotic} therefore implies that the projection to $X$ of every leaf of $\cF_{k,h}$ is totally umbilical and has constant mean curvature. The manifold $X$ therefore satisfies the \emph{axiom of $2$-spheres} of Leung--Nomizu (see \cite{Leung_Nomizu}), namely \emph{for every point $p$ of $X$ and every plane $P$ of $T_pX$, there exists a totally umbilical, constant mean curvature surface $S$ tangent to $P$ at $p$.} It is proven in \cite{Leung_Nomizu} that $X$ has constant sectional curvature whenever this property is satisfied, and Theorem \ref{thm.asymptotic_spectra} therefore follows. \end{proof} \begin{proof}[Proof of Proposition \ref{p.tot_asymptotic}] We apply \eqref{eq.hypothesis} to the Kahn--Markovi\'c sequence of Theorem \ref{KM_equidistrib}. For all $n$, let $S_n,\Gamma_n$ and $c_n$ be as in that theorem. Let $\lambda_{S_n}$ denote the area measure of the \emph{Poincar\'e metric} on $S_n$. By \eqref{eq.ident_measures} and the Gauss--Bonnet theorem, \begin{equation*} \int_{\MKD_{k,h}(1+\eps_n)/\Pi}\big(k^{-\frac{1}{2}}H-1\big)d\mu_{k,h}(c_n)= \frac{1}{\lambda_{S_n}(S_n)}\int_{S_n}\big(k^{-\frac{1}{2}}H-1\big)d\lambda_{S_n}\ . \end{equation*} Thus, by Lemma \ref{lem_comparison_area_laminar}, \begin{equation*} \multiline{ \int_{\MKD_{k,h}(1+\eps_n)/\Pi}\big(k^{-\frac{1}{2}}H-1\big)d\mu_{k,h}(c_n)\cr \qquad\qquad\qquad\qquad\leq\frac{(b-k)}{(a-k)}\frac{1}{\Area_h(S_n)}\int_{S_n}\big(k^{-\frac{1}{2}}H_n-1\big)d\Area_h\ .\cr} \end{equation*} By \eqref{eq.hypothesis}, the last integral converges to $0$. However, for the Kahn--Markovi\'c sequence, $(\mu_{k,h}(c_n))_{n\in\mathbb{N}}$ converges to a measure $\mu_\infty$, say, of full support over $\MKD_{k,h}(1)/\Pi\simeq S_hX$. Hence, \begin{equation*} \int_{S_hX}\big(k^{-\frac{1}{2}}H-1\big)d\mu_\infty = 0\ . \end{equation*} Since $H$ is continuous, and since $\mu_\infty$ has full support, it follows that $H=k^{\frac{1}{2}}$ everywhere, and this completes the proof. \end{proof} For the general case, it suffices to observe that the same argument also shows that the sectional curvature is constant whenever the normalized marked $p$- and $q$-energy spectra are asymptotic. Indeed, for any closed surface $S$, \begin{equation*} \multiline{ \frac{\int_S k^{-\frac{p}{2}}H^p d\Area_h}{\int_S k^{-\frac{q}{2}}H^qd\Area_h}-1\cr \qquad\qquad\qquad\qquad=\frac{1}{\int_Sk^{-\frac{q}{2}}H^qd\Area_h}\int_S\big(k^{-\frac{p-q}{2}}H^{p-q}-1\big)k^{-\frac{q}{2}} H^qd\Area_h\ ,\cr} \end{equation*} and the result follows as before.
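For completeness, we spell out the elementary identity used above; it is a one-line computation, included only for the reader's convenience. Since
\begin{equation*}
k^{-\frac{p}{2}}H^{p}=\big(k^{-\frac{p-q}{2}}H^{p-q}\big)k^{-\frac{q}{2}}H^{q}\ ,
\end{equation*}
integrating over $S$, subtracting $\int_S k^{-\frac{q}{2}}H^{q}d\Area_h$ from both sides, and dividing by this same quantity yields the formula displayed above.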
\section{Asymptotic counting according to energy} \subsection{Energy entropy for $k$-surfaces} \subsubsection{The energy entropy} Given $p\geq 0$, we define \begin{equation*} \Ent^p_{k}(X,h)=\lim_{\eps\to 0}\liminf_{w\to\infty}\frac{1}{w\log w}\log\#\left\{[\Gamma]\in\QF(1+\eps)\ \bigg|\ W^{p}_{h}(S_{k,h}([\Gamma]))\leq w\right\}\ . \end{equation*} This quantity is analogous to the minimal surface entropy studied in \cite{CMN,ln21}. The \emph{area entropy} used in \cite{ALS} is more elementary since it does not involve a double limit. However, as in the case of the area entropy of minimal surfaces, it is hard to imagine how to compute the energy entropy without using such a double limit, even for the hyperbolic metric. \begin{lemma}[Entropy in the hyperbolic case] If $h_0$ is the hyperbolic metric on $X$, then for all $p\geq 0$ and $0<k<1$, \begin{equation*} \Ent^p_{k}(X,h_0)=\frac{k^{-\frac{p}{2}}(1-k)}{2\pi}\ . \end{equation*} \end{lemma} \begin{proof} We first prove the upper bound \begin{equation*} \Ent^p_{k}(X,h_0)\leq\frac{k^{-\frac{p}{2}}(1-k)}{2\pi}\ . \end{equation*} Indeed, by Lemma \ref{lem_lower_bounds_energy}, for every $[\Gamma]\in\QF(1+\eps)$, \begin{equation*} W_{h_0}^{p}(S_{k,h_0}([\Gamma]))\geq k^{\frac{p}{2}}\Area_{h_0}(S_{k,h_0}([\Gamma]))=\frac{4\pi(g-1)k^{\frac{p}{2}}}{(1-k)}\ , \end{equation*} where $g$ denotes the genus of $S_{k,h_0}([\Gamma])$. It follows that if \begin{equation*} W_{h_0}^{p}(S_{k,h_0}([\Gamma])) \leq w\ , \end{equation*} then \begin{equation*} g \leq \frac{k^{-\frac{p}{2}}(1-k)w}{4\pi} + 1\ , \end{equation*} and the upper bound now follows by Kahn--Markovi\'c's topological counting result (Theorem \ref{th.KM2}). We now use Lemma \ref{l.almost_tot_umbilical} to bound the entropy from below: \begin{equation*} \Ent^p_{k}(X,h_0)\geq\frac{k^{-\frac{p}{2}}(1-k)}{2\pi}\ . \end{equation*} Fix $\delta>0$ and let $\eps_0>0$ be as in Lemma \ref{l.almost_tot_umbilical}. For all $\eps<\eps_0$ and for all $[\Gamma]\in\QF(1+\eps)$, \begin{equation*} W^{p}_{h_0}(S_{k,h_0}([\Gamma]))\leq (1+\delta)^p\frac{4\pi(g-1)k^{\frac{p}{2}}}{(1-k)}\ . \end{equation*} It follows that if \begin{equation*} g \leq \frac{(1-k)w k^{-{\frac{p}{2}}}}{4\pi(1+\delta)^p} + 1\ , \end{equation*} then \begin{equation*} W_{h_0}^{p}(S_{k,h_0}([\Gamma])) \leq w\ , \end{equation*} and, proceeding as before, we show that \begin{equation*} \Ent^{p}_{k}(X,h_0)\geq\frac{(1-k)k^{-\frac{p}{2}}}{2\pi(1+\delta)^p}\ . \end{equation*} Since $\delta$ is arbitrary, the result follows. \end{proof} Note that in order to obtain the upper bound in the previous proof, we only used the inequality $\sect_h\geq -1$. We therefore also have the following result. \begin{lemma}\label{l_inequality_entropy} Let $(X,h_0)$ be a closed, $3$-dimensional hyperbolic manifold. Let $h$ be another riemannian metric on $X$ of sectional curvature pinched between $-1$ and $-a$, for some $0<a<1$. For all $p\geq 0$, and for all $0<k<a$, \begin{equation*} \Ent^{p}_{k}(X,h)\leq\frac{k^{-\frac{p}{2}}(1-k)}{2\pi}=\Ent^{p}_{k}(X,h_0)\ . \end{equation*} \end{lemma} \subsubsection{Entropy and closed Fuchsian $k$-surfaces} It is worth noting that the $k$-surface energy entropy counts \emph{conjugacy classes} of closed surface subgroups. This has the following consequence. \begin{theorem}\label{th.totally_umb_implies_big_entropy} Assume that $-1\leq\sect_h\leq-a$, $p\geq 0$ and $0<k<a$.
Assume furthermore that there exists a closed \emph{Fuchsian} surface subgroup $\Gamma\leq\Pi$ such that \begin{equation*} W^{p}_{h}(S_{k,h}([\Gamma]))=W^{p}_{h_0}(S_{k,h_0}([\Gamma]))\ . \end{equation*} Then \begin{equation*} \Ent^{p}_{k}(X,h)=\Ent^{p}_{k}(X,h_0). \end{equation*} \end{theorem} \begin{remark} If $p>0$, one can check that $W^{p}_{h}(S_{k,h}([\Gamma]))=W^{p}_{h_0}(S_{k,h_0}([\Gamma]))$ if and only if $[\Gamma]$ is represented in $(X,h)$ by a totally umbilic surface with constant mean curvature $k^{1/2}$ (compare Question \ref{question:intro}). \end{remark} The theorem relies on Lemma \ref{l.Muller_Puchta}, below, which is a consequence of Formula (3) of \cite{Muller_Puchta}. We first require the following definition. \begin{defi}\label{d.conj_classes} Let $\Gamma$ be a surface subgroup of $\Pi$. We denote by $\sigma(\Gamma)$ the set of $\Pi$-conjugacy classes of finite-index subgroups of $\Gamma$. For every $[\Gamma']\in\sigma(\Gamma)$, we denote by $\mathrm{g}(\Gamma')$ the genus of any subgroup $\Gamma'$ representing the class. \end{defi} \begin{lemma}[M\"uller--Puchta's lower bound]\label{l.Muller_Puchta} For every compact surface subgroup $\Gamma$ of $\Pi$, \begin{equation}\label{eq.lower_bound_MP} \liminf_{g\rightarrow\infty}\frac{\log\big(\#\{[\Gamma']\in\sigma(\Gamma)\ |\ \mathrm{g}(\Gamma')\leq g\}\big)}{2g\log(g)} \geq 1\ . \end{equation} \end{lemma} \begin{proof} We provide a proof of this estimate since \cite{Muller_Puchta} in fact counts \emph{subgroups}, not \emph{conjugacy classes}. For convenience, we suppose that $\Gamma$ is the stabilizer in $\Pi$ of its boundary curve $\partial_\infty\Gamma$. Upon applying Stirling's approximation to Equation (3) of \cite{Muller_Puchta}, we find that \begin{equation*} \liminf_{g\rightarrow\infty}\frac{\log\big(\#\{\Gamma'\subset\Gamma\ |\ \mathrm{g}(\Gamma')\leq g\}\big)}{2g\log(g)} \geq 1\ . \end{equation*} In terms of the Euler characteristic, this becomes \begin{equation*} \liminf_{x\rightarrow\infty}\frac{\log\big(\#\{\Gamma'\subset\Gamma\ |\ \left|\chi(\Gamma')\right|\leq x\}\big)}{x\log(x)} \geq 1\ . \end{equation*} Let $x_0:=\left|\chi(\Gamma)\right|$ denote the absolute value of the Euler characteristic of $\Gamma$. Let $\Gamma'$ be a finite-index subgroup of $\Gamma$ with \begin{equation*} \left|\chi(\Gamma')\right| \leq x\ . \end{equation*} Then $\Gamma'$ corresponds to an $N$-fold cover, where \begin{equation*} N\leq x/x_0\ . \end{equation*} We now claim that the number of $\Pi$-conjugates of $\Gamma'$ contained in $\Gamma$ is bounded above by $(x/x_0)$. Indeed, let $\Gamma''$ be a $\Pi$-conjugate of $\Gamma'$ contained in $\Gamma$. Let $\gamma\in\Pi$ be such that $\Gamma''=\gamma\Gamma'\gamma^{-1}$. Since $\Gamma'$ and $\Gamma''$ are both finite-index subgroups of $\Gamma$, we have $\partial_\infty\Gamma'=\partial_\infty\Gamma''=\partial_\infty\Gamma$; since $\partial_\infty\Gamma''=\gamma\cdot\partial_\infty\Gamma'$, it follows that $\gamma$ preserves $\partial_\infty\Gamma$, and is therefore an element of $\mathrm{Stab}_\Pi(\partial_\infty\Gamma)=\Gamma$. It follows that $\Gamma''$ is in fact a $\Gamma$-conjugate of $\Gamma'$. However, the number of distinct $\Gamma$-conjugates of $\Gamma'$ in $\Gamma$ is bounded above by the index of $\Gamma'$ in $\Gamma$. By considering fundamental domains, we see that this index is bounded above by $N\leq (x/x_0)$, and the assertion follows. Consequently, \begin{equation*} \#\{[\Gamma']\in\sigma(\Gamma)\ |\ \left|\chi(\Gamma')\right| \leq x\} \geq \#\{\Gamma'\subset\Gamma\ |\ \left|\chi(\Gamma')\right| \leq x\}/(x/x_0)\ .
\end{equation*} Hence \begin{equation*} \liminf_{x\rightarrow\infty}\frac{\log\big(\#\{[\Gamma']\in\sigma(\Gamma)\ |\ \left|\chi(\Gamma')\right|\leq x\}\big)}{x\log(x)} \geq 1\ , \end{equation*} and the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{th.totally_umb_implies_big_entropy}.] It suffices to prove that \begin{equation*} \Ent^{p}_{k}(X,h)\geq\Ent^{p}_{k}(X,h_0)\ , \end{equation*} since the reverse inequality holds by Lemma \ref{l_inequality_entropy}. Let $\Gamma$ be a Fuchsian surface subgroup of $\Pi$ and suppose that \begin{equation*} W^{p}_{h}(S_{k,h}([\Gamma]))=W^{p}_{h_0}(S_{k,h_0}([\Gamma]))=\frac{4\pi k^{\frac{p}{2}}}{(1-k)}(\mathrm{g}(\Gamma)-1)\ , \end{equation*} where the last equality follows from Lemma \ref{lem_lower_bounds_energy}. By Lemma \ref{lem_lower_bounds_energy} again, the surface $S=S_{k,h}([\Gamma])$ must be totally umbilical with constant mean curvature equal to $k^{\frac{1}{2}}$, and $\sect_h$ must equal $-1$ along $TS$. Let $[\Gamma']$ be an element of $\sigma(\Gamma)$, as in Definition \ref{d.conj_classes}. Any subgroup $\Gamma'$ representing the class is quasi-Fuchsian with $\bord \Gamma=\bord \Gamma'$, so that $[\Gamma']\in\QF(1+\eps)$ for all $\eps>0$, and \begin{equation*} S_{k,h}([\Gamma'])=\mathrm{D}_{k,h}(\bord \Gamma)/\Gamma'\ . \end{equation*} In particular, $S_{k,h}([\Gamma'])$ is just a finite cover of $S_{k,h}([\Gamma])$, and is therefore also totally umbilical with constant mean curvature equal to $k^{\frac{1}{2}}$, with $\sect_h=-1$ along its tangent planes. Consequently, by Lemma \ref{lem_lower_bounds_energy}, \begin{equation*} W^{p}_{h}(S_{k,h}([\Gamma']))=\frac{4\pi k^{\frac{p}{2}}}{(1-k)}(\mathrm{g}(\Gamma')-1)\ . \end{equation*} We conclude that, for every $\eps,w>0$, \begin{equation*} \#\{[\Gamma']\in\QF(1+\eps):W^{p}_{h}(S_{k,h}([\Gamma']))\leq w\}\geq \#\{[\Gamma']\in\sigma(\Gamma):W^{p}_{h}(S_{k,h}([\Gamma']))\leq w\}\ . \end{equation*} Thus, by Lemma \ref{l.Muller_Puchta}, \begin{equation*} \Ent^{p}_{k}(X,h)\geq\frac{(1-k)k^{-\frac{p}{2}}}{2\pi}=\Ent^{p}_{k}(X,h_0)\ , \end{equation*} and the result follows. \end{proof} \subsubsection{The modified energy entropy}\label{sss.Modified_entropy} In light of Theorem \ref{th.totally_umb_implies_big_entropy}, it is reasonable to exclude the counting of quasi-Fuchsian surfaces that accumulate to a Fuchsian one. This motivates modifying our energy entropy for $k$-surfaces, in line with the modified entropy of \cite{MN_currents}. This modified entropy will focus on counting only those quasi-Fuchsian surfaces that accumulate to the unique ergodic fully supported laminar measure in the sphere bundle $S_hX$. Let $\hat m$ denote the Haar measure of the set $\cC^+$ of round circles of $\C\PP^1$. Recall from Theorem \ref{thm.Ratner} that this is, up to scale, the unique fully supported, ergodic conformal current in $\cC^+$. Note now that every conformal current is determined by its restriction to some compact fundamental domain, and it follows that, for all $C\geq 1$, the space of conformal currents over $\QC^+(C)$ is metrizable. Let $\rho$ be such a metric. Given $\eta>0$, we define \begin{equation*} \QF_{\hat m}(\eta):=\left\{[\Gamma]\in\QF:\rho(\hat m(\bord\Gamma),\hat m)<\eta\right\}\ . \end{equation*} Following \cite{MN_currents} we define the \emph{modified $p$-energy entropy} by \begin{equation*} \Ent^p_{k,\hat m}(X,h):=\lim_{\eta\to 0}\liminf_{w\to\infty}\frac{1}{w\log w}\log\#\left\{[\Gamma]\in\QF_{\hat m}(\eta):W^{p}_{h}(S_{k,h}([\Gamma]))\leq w\right\}\ .
\end{equation*} This entropy counts only quasi-Fuchsian $k$-surfaces that accumulate to the fully-supported, ergodic laminar measure. In particular, when $\Pi$ has no Fuchsian subgroups (see, for example, \cite[\S 5.3]{MR03}), \begin{equation*} \Ent^{p}_{k}(X,h)=\Ent^{p}_{k,\hat m}(X,h)\ . \end{equation*} \begin{theorem} Assume that $-1\leq\sect_h\leq-a$, $p\geq 0$ and $0<k<a$. Then \begin{equation*} 0<\Ent^{p}_{k,\hat m}(X,h)\leq\Ent^{p}_{k}(X,h)\ . \end{equation*} \end{theorem} \begin{proof} Let $(\Gamma_n)_{n\in\mathbb{N}}$ be a sequence of quasi-Fuchsian subgroups of $\Pi$ as in Theorem \ref{th.lowe_neves}. By compactness of $S_hX$, the mean curvatures of the leaves of the foliation $\cF_{k,h}$ are uniformly bounded. An argument by contradiction similar to that used in Lemma \ref{l.almost_tot_umbilical} then shows that there exists a uniform upper bound $H>0$ for the mean curvatures of all Lowe--Neves $k$-surfaces $S_n=S_{k,h}([\Gamma_n])$. Arguing as in Lemma \ref{lem_lower_bounds_energy}, we then obtain, for every $n\in\N$, \begin{equation}\label{eq.upperbound_energy} W^{p}_{h}(S_n)\leq \frac{4\pi H^p}{(a-k)}(\mathrm{g}(\Gamma_n)-1)\ . \end{equation} Now fix $\eta>0$ and $n_0$ such that $[\Gamma_{n_0}]\in\QF_{\hat m}(\eta)$. Reasoning as in the proof of Theorem \ref{th.totally_umb_implies_big_entropy}, we show that, for every $[\Gamma']\in\sigma(\Gamma_{n_0})$, \begin{equation*} [\Gamma']\in\QF_{\hat m}(\eta)\ . \end{equation*} Furthermore, since we have the same bounds on the mean and extrinsic curvatures, \eqref{eq.upperbound_energy} also holds for $S'=S_{k,h}([\Gamma'])$. This implies that, for sufficiently large $g>0$, \begin{equation*} \#\{[\Gamma']\in\sigma(\Gamma_{n_0})\ \big|\ \mathrm{g}(\Gamma')\leq g\}\leq\#\{[\Gamma]\in\QF_{\hat m}(\eta)\ \big|\ W^{p}_{h}(S_{k,h}([\Gamma]))\leq \frac{4\pi H^p}{(a-k)}(g-1)\}\ . \end{equation*} Lemma \ref{l.Muller_Puchta} now yields a positive lower bound for the quantity \begin{equation*} \liminf_{w\to\infty}\frac{1}{w\log(w)}\log\#\{[\Gamma]\in\QF_{\hat m}(\eta)\ \big|\ W^{p}_{h}(S_{k,h}([\Gamma]))\leq w\} \end{equation*} which only depends on $H$ and $a$, but not on $\eta$. This yields $0<\Ent^p_{k,\hat m}(X,h)$, as desired. \end{proof} \subsection{Rigidity of the modified entropy} \subsubsection{Proof of Theorem \ref{th.rigidity_entropy}} We assume from now on that $-1\leq\sect_h\leq -a <0$. Take $p\geq 0$ and $0<k<a$. We first need to prove the following lemma. \begin{lemma}\label{lem_converging_sequence} Suppose \begin{equation*} \Ent^{p}_{k,\hat m}(X,h)=\Ent^{p}_{k,\hat m}(X,h_0)\ . \end{equation*} Then there exists a sequence $(\eta_n)_{n\in\mathbb{N}}$ converging to zero, and a sequence $(\Gamma_n)_{n\in\mathbb{N}}$ of quasi-Fuchsian subgroups of $\Pi$ such that, \begin{itemize} \item for all $n$, $[\Gamma_n]\in\QF_{\hat m}(\eta_n)$; and \item \begin{equation*} \lim_{n\to\infty}\frac{W^{p}_{h}(S_{k,h}([\Gamma_n]))}{W^{p}_{h_0}(S_{k,h_0}([\Gamma_n]))}=1\ . \end{equation*} \end{itemize} \end{lemma} \begin{proof} Suppose the contrary. Then there exists $\delta>0$ such that for all sufficiently small $\eta>0$, and for all $[\Gamma]\in\QF_{\hat m}(\eta)$, \begin{equation*} W^{p}_{h}(S_{k,h}([\Gamma]))\geq (1+\delta) W^{p}_{h_0}(S_{k,h_0}([\Gamma]))\ . \end{equation*} This immediately implies \begin{equation*} \Ent^p_{k,\hat m}(X,h)\leq\frac{1}{(1+\delta)}\Ent^{p}_{k,\hat m}(X,h_0)\ , \end{equation*} which is absurd, and the result follows. \end{proof} We now prove Theorem \ref{th.rigidity_entropy}.
\begin{proof}[Proof of Theorem \ref{th.rigidity_entropy}] Assume that $\Ent^{p}_{k,\hat m}(X,h)=\Ent^{p}_{k,\hat m}(X,h_0)$. Then Lemma \ref{lem_converging_sequence} provides a sequence $(\eta_n)_{n\in\mathbb{N}}$ converging to zero, as well as a sequence $([\Gamma_n])_{n\in\mathbb{N}}$ with $[\Gamma_n]\in\QF_{\hat m}(\eta_n)$ for all $n$, such that \begin{equation*} \lim_{n\to\infty}\frac{W^{p}_{h}(S_{k,h}([\Gamma_n]))}{W^{p}_{h_0}(S_{k,h_0}([\Gamma_n]))}=1\ . \end{equation*} By definition of the sequence $([\Gamma_n])_{n\in\mathbb{N}}$, the sequence $(\mu_{k,h}(\bord \Gamma_n))_{n\in\mathbb{N}}$ of laminar measures converges to $\mu$, the unique ergodic laminar measure that has full support in $S_hX$. By reproducing the argument given in \S \ref{ss.Rigidity}, we get that $\sect_h=-1$ $\mu$-almost everywhere. Since $\mu$ has full support and $\sect_h$ is continuous, it follows that $\sect_h=-1$ everywhere, so that $h$ is isometric to $h_0$ by Mostow's rigidity theorem. \end{proof} \subsubsection{Case of equality between modified energy and area entropies} \label{sss.equality_modified_entropies} As in \cite{ALS,MN_currents} we can define the \emph{modified area entropy} as \begin{equation*} \Ent^{\text{Area}}_{k,\hat m}(X,h)=\lim_{\eta\to 0}\liminf_{A\to\infty}\frac{1}{A\log A}\log\#\left\{[\Gamma]\in\QF_{\hat m}(\eta)\ \bigg|\ \Area_h(S_{k,h}([\Gamma]))\leq A\right\}\ . \end{equation*} We give the proof of our last theorem. \begin{proof}[Proof of Theorem \ref{equality_energy_area}.] We will first assume that $\Ent^{\text{Area}}_{k,\hat m}(X,h)=\Ent^{1}_{k,\hat m}(X,h)$. Then an argument similar to Lemma \ref{lem_converging_sequence} gives the existence of a sequence $(\eta_n)_{n\in\mathbb{N}}$ converging to zero, as well as a sequence $([\Gamma_n])_{n\in\mathbb{N}}$ with $[\Gamma_n]\in\QF_{\hat m}(\eta_n)$ for all $n$, such that \begin{equation*} \lim_{n\to\infty}\frac{k^{-\frac{1}{2}}W^{1}_{h}(S_{k,h}([\Gamma_n]))}{\Area_h(S_{k,h}([\Gamma_n]))}=1\ . \end{equation*} Here again, by definition of the sequence $([\Gamma_n])_{n\in\mathbb{N}}$, the sequence $(\mu_{k,h}(\bord \Gamma_n))_{n\in\mathbb{N}}$ of laminar measures converges to the ergodic laminar measure $\mu$, which has full support in $S_hX$. Hence, by reproducing the argument in \S \ref{ss.proof_of_thm_B}, we obtain that the projection to $X$ of every leaf of the $k$-surface foliation of $S_hX$ is totally umbilical with constant mean curvature. So by applying the axiom of $2$-spheres we conclude that the sectional curvature of $h$ is constant. The other implications follow from the same argument. \end{proof} \begin{remark} \label{remarkattheend} We comment that the results of this section also hold with the modified entropy functionals replaced by entropy functionals that count $k$-surfaces whose limit sets in the hyperbolic metric become more and more round, but without a condition on equidistribution, provided that $X$ contains no closed totally geodesic surfaces. The condition that there be no closed totally geodesic surfaces causes the equidistribution to hold automatically. (See \cite{MN_currents} for a closely analogous situation for entropy functionals counting minimal surfaces rather than $k$-surfaces.) If the following question had a positive answer, it would be possible to drop the condition that $X$ contain no closed totally geodesic surface in its hyperbolic metric. If it had a negative answer, it would be possible to give counterexamples to statements involving the unmodified $k$-surface entropy by taking covers of a totally umbilic surface as in the statement below.
\end{remark} \begin{question} Suppose that $X$ has sectional curvature greater than or equal to $-1$, and that $X$ contains a closed totally umbilic surface $S$ with mean curvature $k^{1/2}$ and with constant curvature $-1 +k$ in its intrinsic metric (i.e., the sectional curvature of $X$ through every tangent plane to $S$ is $-1$.) Then must $X$ be hyperbolic? \end{question} \bibliographystyle{plain} \begin{thebibliography}{10} \bibitem{Ahlfors} L.~Ahlfors. \newblock {\em Lectures on quasiconformal mappings}. \newblock The Wadsworth \& Brooks/Cole Mathematics Series. Wadsworth \& Brooks/Cole Advanced Books \& Software, Monterey, CA, 1987. \newblock With the assistance of Clifford J. Earle, Jr., Reprint of the 1966 original. \bibitem{ALS} S.~Alvarez, B.~Lowe, and G.~Smith. \newblock Foliated {P}lateau problems and asymptotic counting of surface subgroups. \newblock {\em to appear in {A}nn. {S}ci. {\'E}cole {N}orm. {S}up.}, 2024. \bibitem{AlvarezSmith} S.~Alvarez and G.~Smith. \newblock Prescription de courbure des feuilles des laminations: retour sur un th\'{e}or\`eme de {C}andel. \newblock {\em Ann. Inst. Fourier (Grenoble)}, 71(6):2549--2593, 2021. \bibitem{Alvarez_Yang} S.~Alvarez and J.~Yang. \newblock Physical measures for the geodesic flow tangent to a transversally conformal foliation. \newblock {\em Ann. Inst. H. Poincar\'e{} C Anal. Non Lin\'eaire}, 36(1):27--51, 2019. \bibitem{Barman_Gupta} P.~Barman and S.~Gupta. \newblock Dominating surface-group representations via {F}ock-{G}oncharov coordinates. \newblock {\em Preprint, [arXiv:2405.15378]}, 2024. \bibitem{Basilio_Lee_Malionek} B.~Basilio, C.~Lee, and J.~Malionek. \newblock Totally geodesic surfaces in hyperbolic 3-manifolds: Algorithms and examples. \newblock {\em Preprint, [arXiv:2403.12397]}, 2024. \bibitem{BCG} G.~Besson, G.~Courtois, and S.~Gallot. \newblock Entropies et rigidit\'{e}s des espaces localement sym\'{e}triques de courbure strictement n\'{e}gative. \newblock {\em Geom. Funct. Anal.}, 5(5):731--799, 1995. \bibitem{besson1996minimal} G.~Besson, G.~Courtois, and S.~Gallot. \newblock Minimal entropy and mostow's rigidity theorems. \newblock {\em Ergodic Theory and Dynamical Systems}, 16(4):623--649, 1996. \bibitem{Bonahon_laminations} F.~Bonahon. \newblock Geodesic laminations with transverse {H}\"{o}lder distributions. \newblock {\em Ann. Sci. \'{E}cole Norm. Sup. (4)}, 30(2):205--240, 1997. \bibitem{Brody_Reyes} N.~Brody and E.~Reyes. \newblock Approximating hyperbolic lattices by cubulations. \newblock {\em Preprint, [arXiv:2404.01511]}, 2024. \bibitem{BurnsKatok} K.~Burns and A.~Katok. \newblock Manifolds with nonpositive curvature. \newblock {\em Ergodic Theory Dynam. Systems}, 5(2):307--317, 1985. \bibitem{CMN} D.~Calegari, F.~Marques, and A.~Neves. \newblock Counting minimal surfaces in negatively curved 3-manifolds. \newblock {\em Duke Math. J.}, 171(8):1615--1648, 2022. \bibitem{Candel} A.~Candel. \newblock Uniformization of surface laminations. \newblock {\em Ann. Sci. \'{E}cole Norm. Sup. (4)}, 26(4):489--516, 1993. \bibitem{Croke} C.~Croke. \newblock Rigidity for surfaces of nonpositive curvature. \newblock {\em Comment. Math. Helv.}, 65(1):150--169, 1990. \bibitem{Croke_Dairbekov} C.~Croke and N.~Dairbekov. \newblock Lengths and volumes in {R}iemannian manifolds. \newblock {\em Duke Math. J.}, 125(1):1--14, 2004. \bibitem{Croke_Dairbekov_Sharafutdinov} C.~Croke, N.~Dairbekov, and V.~Sharafutdinov. \newblock Local boundary rigidity of a compact {R}iemannian manifold with curvature bounded above. 
\newblock {\em Trans. Amer. Math. Soc.}, 352(9):3937--3956, 2000. \bibitem{Dai_Li} S.~Dai and Q.~Li. \newblock Domination results in {$n$}-{F}uchsian fibers in the moduli space of {H}iggs bundles. \newblock {\em Proc. Lond. Math. Soc. (3)}, 124(4):427--477, 2022. \bibitem{Deroin_Tholozan} B.~Deroin and N.~Tholozan. \newblock Dominating surface group representations by {F}uchsian ones. \newblock {\em Int. Math. Res. Not.}, (13):4145--4166, 2016. \bibitem{Einsiedler} M.~Einsiedler. \newblock Ratner's theorem for $\text{SL}(2,\mathbb{R})$-invariant measures. \newblock {\em Preprint, [arXiv:0603483]}, 2006. \bibitem{Flaminio} L.~Flaminio. \newblock Local entropy rigidity for hyperbolic manifolds. \newblock {\em Comm. Anal. Geom.}, 3(3-4):555--596, 1995. \bibitem{Gogolev_Reber} A.~Gogolev and J.M. Reber. \newblock A counteremample to marked length spectrum semi-rigidity. \newblock {\em Preprint, [arXiv:2309.10882]}, 2023. \bibitem{Gromov_FolPlateau1} M.~Gromov. \newblock Foliated {P}lateau problem. {I}. {M}inimal varieties. \newblock {\em Geom. Funct. Anal.}, 1(1):14--79, 1991. \bibitem{Gromov_FolPlateau2} M.~Gromov. \newblock Foliated {P}lateau problem. {II}. {H}armonic maps of foliations. \newblock {\em Geom. Funct. Anal.}, 1(3):253--320, 1991. \bibitem{Gromov3} M.~Gromov. \newblock Three remarks on geodesic dynamics and fundamental group. \newblock {\em Enseign. Math. (2)}, 46(3-4):391--402, 2000. \bibitem{Gueritaud_Kassel_Wolff} F.~Gu\'eritaud, F.~Kassel, and M.~Wolff. \newblock Compact anti--de {S}itter 3-manifolds and folded hyperbolic structures on surfaces. \newblock {\em Pacific J. Math.}, 275(2):325--359, 2015. \bibitem{GuillarmouLefeuvre} C.~Guillarmou and T.~Lefeuvre. \newblock The marked length spectrum of {A}nosov manifolds. \newblock {\em Ann. of Math. (2)}, 190(1):321--344, 2019. \bibitem{Guillemin_Kazhdan} V.~Guillemin and D.~Kazhdan. \newblock Some inverse spectral results for negatively curved {$2$}-manifolds. \newblock {\em Topology}, 19(3):301--312, 1980. \bibitem{HamenstadtMLS} U.~Hamenst\"{a}dt. \newblock Cocycles, symplectic structures and intersection. \newblock {\em Geom. Funct. Anal.}, 9(1):90--140, 1999. \bibitem{HamenstadtKM} U.~Hamenst\"{a}dt. \newblock Incompressible surfaces in rank one locally symmetric spaces. \newblock {\em Geom. Funct. Anal.}, 25(3):815--859, 2015. \bibitem{humbert2024katok} T.~Humbert. \newblock Katok's entropy conjecture near real and complex hyperbolic metrics. \newblock {\em Preprint, [arXiv:2409.11197]}, 2024. \bibitem{Kahn_Labourie_Mozes} J.~Kahn, F.~Labourie, and S.~Mozes. \newblock Surface groups in uniform lattices of some semi-simple groups. \newblock {\em Acta Math.}, 232:79--220, 2024. \bibitem{KM2} J.~Kahn and V.~Markovi\'{c}. \newblock Counting essential surfaces in a closed hyperbolic three-manifold. \newblock {\em Geom. Topol.}, 16(1):601--624, 2012. \bibitem{KM1} J.~Kahn and V.~Markovi\'c. \newblock Immersing almost geodesic surfaces in a closed hyperbolic three manifold. \newblock {\em Ann. of Math. (2)}, 175(3):1127--1190, 2012. \bibitem{KMS} J.~Kahn, V.~Markovic, and I.~Smilga. \newblock Geometrically and topologically random surfaces in a closed hyperbolic three manifold. \newblock {\em Preprint, [arXiv:2309.02847]}. \bibitem{Katok_entropy_closed} A.~Katok. \newblock Entropy and closed geodesics. \newblock {\em Ergodic Theory Dynam. Systems}, 2(3-4):339--365, 1982. \bibitem{Katok4} A.~Katok. \newblock Four applications of conformal equivalence to geometry and dynamics. \newblock {\em Ergodic Theory Dynam. 
Systems}, 8$^*$:139--152, 1988. \bibitem{LabourieGAFA} F.~Labourie. \newblock Probl\`emes de {M}onge-{A}mp\`ere, courbes holomorphes et laminations. \newblock {\em Geom. Funct. Anal.}, 7(3):496--534, 1997. \bibitem{LabourieInvent} F.~Labourie. \newblock Un lemme de {M}orse pour les surfaces convexes. \newblock {\em Invent. Math.}, 141(2):239--297, 2000. \bibitem{Labourie_phase_space} F.~Labourie. \newblock The phase space of {$k$}-surfaces. \newblock In {\em Rigidity in dynamics and geometry ({C}ambridge, 2000)}, pages 295--307. Springer, Berlin, 2002. \bibitem{LabourieAnnals} F.~Labourie. \newblock Random {$k$}-surfaces. \newblock {\em Ann. of Math. (2)}, 161(1):105--140, 2005. \bibitem{LabourieBourbaki} F.~Labourie. \newblock Asymptotic counting of minimal surfaces and of surface groups in hyperbolic 3-manifolds. \newblock {\em S\'eminaire Bourbaki, expos{\'e} 1179 Ast\'{e}risque}, (430):Exp. No. 1179, 425--457, 2021. \bibitem{Ledrappier_har_BM} F.~Ledrappier. \newblock Harmonic measures and {B}owen-{M}argulis measures. \newblock {\em Israel J. Math.}, 71(3):275--287, 1990. \bibitem{Ledrappier_bord} F.~Ledrappier. \newblock Structure au bord des vari\'{e}t\'{e}s \`a courbure n\'{e}gative. \newblock In {\em S\'{e}minaire de {T}h\'{e}orie {S}pectrale et {G}\'{e}om\'{e}trie, {N}o. 13, {A}nn\'{e}e 1994--1995}, volume~13 of {\em S\'{e}min. Th\'{e}or. Spectr. G\'{e}om.}, pages 97--122. Univ. Grenoble I, Saint-Martin-d'H\`eres, 1995. \bibitem{Leung_Nomizu} D.~Leung and K.~Nomizu. \newblock The axiom of spheres in {R}iemannian geometry. \newblock {\em J. Differential Geometry}, 5:487--489, 1971. \bibitem{Li} Q.~Li. \newblock Harmonic maps for {H}itchin representations. \newblock {\em Geom. Funct. Anal.}, 29(2):539--560, 2019. \bibitem{Livsic} A.~N. Liv\v{s}ic. \newblock Certain properties of the homology of {$Y$}-systems. \newblock {\em Mat. Zametki}, 10:555--564, 1971. \bibitem{LoweGAFA} B.~Lowe. \newblock Deformations of totally geodesic foliations and minimal surfaces in negatively curved 3-manifolds. \newblock {\em Geom. Funct. Anal.}, 31(4):895--929, 2021. \bibitem{ln21} B.~Lowe and A.~Neves. \newblock Minimal surface entropy and average area ratio. \newblock {\em to appear in Journal of Differential Geometry}, 2024. \bibitem{MR03} C.~Maclachlan and A.~Reid. \newblock {\em The arithmetic of hyperbolic 3-manifolds}, volume 219 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, 2003. \bibitem{MN_currents} F.~Marques and A.~Neves. \newblock Conformal currents and the entropy of negatively curved three-manifolds. \newblock {\em Preprint, [arXiv:2405.16302]}, 2024. \bibitem{Mostow} G.~D. Mostow. \newblock Quasi-conformal mappings in {$n$}-space and the rigidity of hyperbolic space forms. \newblock {\em Inst. Hautes \'{E}tudes Sci. Publ. Math.}, (34):53--104, 1968. \bibitem{Muller_Puchta} T.~M\"uller and J.-C. Puchta. \newblock Character theory of symmetric groups and subgroup growth of surface groups. \newblock {\em J. London Math. Soc. (2)}, 66(3):623--640, 2002. \bibitem{Otal} J.-P. Otal. \newblock Le spectre marqu\'{e} des longueurs des surfaces \`a courbure n\'{e}gative. \newblock {\em Ann. of Math. (2)}, 131(1):151--162, 1990. \bibitem{Ratner_Duke} M.~Ratner. \newblock Raghunathan's topological conjecture and distributions of unipotent flows. \newblock {\em Duke Math. J.}, 63(1):235--280, 1991. \bibitem{Reid} A.~Reid. \newblock Totally geodesic surfaces in hyperbolic {$3$}-manifolds. \newblock {\em Proc. Edinburgh Math. Soc. (2)}, 34(1):77--88, 1991. 
\bibitem{Sagman} N.~Sagman. \newblock Infinite energy equivariant harmonic maps, domination, and anti--de {S}itter 3-manifolds. \newblock {\em J. Differential Geom.}, 124(3):553--598, 2023. \bibitem{Seppi} A.~Seppi. \newblock Minimal discs in hyperbolic space bounded by a quasicircle at infinity. \newblock {\em Comment. Math. Helv.}, 91(4):807--839, 2016. \bibitem{Smith_asymp} G.~Smith. \newblock On the asymptotic {P}lateau problem in {C}artan--{H}adamard manifolds. \newblock {\em Preprint, [arXiv:2107.14670]}, 2021. \end{thebibliography} \begin{flushleft} {\scshape S\'ebastien Alvarez}\\ CMAT, Facultad de Ciencias, Universidad de la Rep\'ublica\\ Igua 4225 esq. Mataojo. Montevideo, Uruguay.\\ email: \texttt{[email protected]} \vspace{0.2cm} {\scshape Ben Lowe}\\ Department of Mathematics, University of Chicago, Chicago IL 60637, USA.\\ email: \texttt{[email protected]} \vspace{0.2cm} {\scshape Graham Smith}\\ Departamento de Matem\'atica PUC-Rio, Marqu\^es de S\~ao Vicente 225, G\'avea, Rio de Janeiro 225453-900, Brazil.\\ email: \texttt{[email protected]} \vspace{0.2cm} \end{flushleft} \end{document}
2412.14423v2
http://arxiv.org/abs/2412.14423v2
Cross-Validation with Antithetic Gaussian Randomization
\documentclass[11pt]{article} \newcommand{\blind}{1} \usepackage[letterpaper, left=1.2truein, right=1.2truein, top = 1.2truein, bottom = 1.2truein]{geometry} \usepackage[blocks, affil-it]{authblk} \usepackage[toc,page]{appendix} \RequirePackage{amsthm,amsmath,amsfonts,amssymb, enumitem} \RequirePackage[authoryear]{natbib} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{graphicx} \usepackage{sidecap} \usepackage{multirow} \usepackage{float} \usepackage{mathtools} \usepackage{color} \usepackage{xfrac} \usepackage{bigints} \usepackage{caption,subcaption} \usepackage{bbm} \usepackage{array} \usepackage{booktabs} \usepackage{siunitx, tabularx} \usepackage{adjustbox} \usepackage{xr} \usepackage{arydshln,,leftidx} \usepackage{verbatim} \usepackage{ upgreek } \usepackage{algorithm,algpseudocode} \usepackage{amssymb} \usepackage{epstopdf} \usepackage{bm} \usepackage{bigints} \usepackage{enumitem} \usepackage{layouts} \usepackage{todonotes} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{corollary}{Corollary} \newtheorem{remark}{Remark} \newtheorem{Example}{Example}[section] \newtheorem{rmk}{Remark}[section] \newtheorem{assumption}{Assumption} \newcommand{\h}[1]{\widehat{#1}} \newcommand{\Stacked}[1]{\mathbf{#1}} \newcommand{\StackedSymbol}[1]{\ensuremath{\boldsymbol{#1}}} \newcommand{\til}[1]{\widetilde{#1}} \newcommand{\Mb}{{\widehat{\boldsymbol\beta}}^{\text{\;MLE}}} \newcommand{\InvFI}{{\widehat{\boldsymbol{\mathsf{I}}}}^{\; -1}} \newcommand{\obs}[1]{{#1}_{\text{obs}}} \newcommand\indep{\protect\mathpalette{\protect\independenT}{\perp}} \def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} \newcommand{\numberthis}{\addtocounter{equation}{1}\tag{\theequation}} \newcommand{\CR}{Coverage} \newcommand{\AL}{Bias} \newcommand{\var}{\mathrm{Var}} \newcommand{\cov}{\mathrm{Cov}} \newcommand{\grad}{{\nabla}} \newcommand{\one}{\mathbbm{1}} \def\argmin{\mathop{\rm argmin}\limits} \newcommand{\EE}[2][]{\mathbb{E}_{#1}\left[#2\right]} \newcommand{\Cov}[2][]{\operatorname{Cov}_{#1}\left[#2\right]} \newcommand{\Var}[2][]{\operatorname{Var}_{#1}\left[#2\right]} \newcommand{\iid}{\stackrel{i.i.d.}{\sim}} \newcommand{\om}{\omega} \newcommand{\tran}{^\intercal} \newcommand{\tr}{\operatorname{tr}} \newcommand{\N}{\mathcal{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Pp}{{\mathbb P}} \newcommand{\ep}{\varepsilon} \newcommand{\cP}{{\mathcal{P}}} \newcommand{\cE}{{\mathcal{E}}} \newcommand{\cZ}{{\mathcal{Z}}} \newcommand{\cS}{{\mathcal{S}}} \newcommand{\cA}{{\mathcal{A}}} \newcommand{\cU}{{\mathcal{U}}} \newcommand{\cO}{{\mathcal{O}}} \newcommand{\cV}{{\mathcal{V}}} \newcommand{\calL}{{\mathcal{L}}} \newcommand{\bbP}{{\mathbb{P}}} \newcommand{\rZ}{{\mathrm{z}}} \newcommand{\ty}{{\tilde{y}}} \newcommand{\tY}{{\tilde{Y}}} \newcommand{\rd}{\mathrm{d}} \newcommand{\indc}[1]{{\mathbf{1}_{\left\{{#1}\right\}}}} \newcommand{\Indc}[1]{{\mathbf{1}\left\{{#1}\right\}}} \newcommand{\barr}{\operatorname{Barr}} \newcommand{\logdet}{\log\det} \newcommand{\Dg}{\text{Diag}} \newcommand{\mappy}[1]{\overset{#1}{\longmapsto}} \newcommand{\pdev}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\ind}[1]{\mathbf{1}_{\{#1\}}} \newcommand{\bGn}{\operatorname{sign}} \newcommand{\tp}{\intercal} \newcommand{\que}{\mathord{?}} \newcommand{\PE}{\mathrm{PE}} \newcommand{\cv}{\mathrm{CV}} \newcommand{\CB}{\mathrm{CB}} 
\newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \newcommand{\hatPE}{\widehat{\text{PE}}} \renewcommand{\vec}[1]{\mathbf{#1}} \renewcommand{\hat}[1]{\widehat{#1}} \renewcommand{\tilde}[1]{\widetilde{#1}} \newcommand*{\Scale}[2][4]{\scalebox{#1}{$#2$}} \newcommand{\twofigs}[2]{ \hbox to\hsize{\hss \vbox{\psfig{figure=#1,width=2.7in,height=2.0in}}\qquad \vbox{\psfig{figure=#2,width=2.7in,height=2.0in}} \hss}} \newcommand{\Rom}[1]{\uppercase\expandafter{\romannumeral #1\relax}} \newcommand{\rom}[1]{\lowercase\expandafter{\romannumeral #1\relax}} \newcommand{\frakA}{{\mathfrak{A}}} \newcommand{\frakg}{{\mathfrak{g}}} \newcommand{\frakL}{{\mathfrak{L}}} \newcommand{\calT}{{\mathcal{T}}} \newcommand{\bbQ}{{\mathbb{Q}}} \makeatletter \newcommand\semiHuge{\@setfontsize\semiHuge{16.5}{22}} \makeatother \usepackage{setspace} \onehalfspacing \begin{document} \date{December, 2024} \def\spacingset#1{\renewcommand{\baselinestretch}{#1}\small\normalsize} \spacingset{1.3} \if1\blind { \title{Cross-Validation with Antithetic Gaussian Randomization} \author[1]{Sifan Liu} \author[2]{Snigdha Panigrahi\thanks{The author acknowledges support from NSF CAREER Award DMS-2337882.}\hspace{.03cm}} \author[3]{Jake A. Soloff} \affil[1]{Center for Computational Mathematics, Flatiron Institute} \affil[2]{Department of Statistics, University of Michigan} \affil[3]{Department of Statistics, University of Chicago} \maketitle } \fi \if0\blind { \bigskip \bigskip \bigskip \begin{center} {\bf Cross-validation with antithetic Gaussian randomization} \end{center} \medskip } \fi \begin{abstract} We introduce a new cross-validation method based on an equicorrelated Gaussian randomization scheme. The method is well-suited for problems where sample splitting is infeasible, such as when data violate the assumption of independent and identical distribution. Even when sample splitting is possible, our method offers a computationally efficient alternative for estimating the prediction error, achieving comparable or even lower error than standard cross-validation in a few train-test repetitions. Drawing inspiration from recent techniques like data-fission and data-thinning, our method constructs train-test data pairs using externally generated Gaussian randomization variables. The key innovation lies in a carefully designed correlation structure among the randomization variables, which we refer to as \emph{antithetic Gaussian randomization}. In theory, we show that this correlation is crucial in ensuring that the variance of our estimator remains bounded while allowing the bias to vanish. Through simulations on various data types and loss functions, we highlight the advantages of our antithetic Gaussian randomization scheme over both independent randomization and standard cross-validation, where the bias-variance tradeoff depends heavily on the number of folds. \end{abstract} \newpage \spacingset{1.15} \section{Introduction} \label{sec:1} Estimating prediction error is a fundamental task in statistics and machine learning, essential for assessing how well a model generalizes to unseen data, selecting tuning parameters during estimation, and comparing different models. Cross-validation is one of the most widely used tools for this purpose. In its standard form, the data is partitioned into independent subsamples or ``folds'' and prediction error is obtained by averaging the empirical errors from the test folds.
The popularity of cross-validation is easy to understand---it is versatile and applies to a wide range of loss functions and data types, due to its assumption-light nature. The standard form of cross-validation is, however, not suitable for all types of data, especially when the assumptions of independent and identically distributed observations are not satisfied. For example, in regression settings with influential observations, a subset of samples may fail to adequately represent the full dataset. When dealing with categorical response variables or covariates, sample splitting may lead to imbalanced folds, potentially omitting rare categories from some folds entirely. For time series or spatially correlated data, splitting the data can disrupt the inherent temporal or spatial structure. In such cases, standard cross-validated estimators of prediction error can be misleading and can result in unreliable models for downstream tasks. In this paper, we address this issue by introducing a novel cross-validation method that eliminates the need for sample splitting. Instead, the train-test folds in our method are created with externally generated Gaussian randomization variables. The method is governed by two user-specified parameters, $\alpha$ and $K$. The first parameter, $\alpha\in \mathbb{R}^+$, is akin to the proportion of held-out samples in standard cross-validation. The second parameter, $K\in \mathbb{N}$, specifies the number of train-test repetitions over which estimates of prediction error are averaged. The proposed method is as follows: we generate $K$ randomization variables from an equicorrelated and degenerate normal distribution with a zero-sum constraint. By adding a $\sqrt\alpha$-scaled version of these randomization variables to the sufficient statistics, we create $K$ train-test data pairs. Prediction error is then estimated using these pairs in a manner similar to standard cross-validation. For example, consider normal data $Y \in \R^n$ with a covariance matrix $\sigma^2 I_n$. In this case, the train-test data for the $k$-th repetition are constructed as \begin{align}\label{eq:simple-split} Y_{\text{train}}^{(k)} =Y + \sqrt\alpha\omega^{(k)},\quad Y_{\text{test}}^{(k)}= Y - \frac{1}{\sqrt\alpha}\omega^{(k)}, \end{align} where $\omega^{(k)}\sim \N(0,\sigma^2 I_n)$, for $k\in [K]=\{1,2,\ldots, K\}$, are equicorrelated Gaussian randomization variables that sum to zero. In this paper, we extend this approach to handle a wide range of loss functions and data types, as long as the sufficient statistics for the unknown parameters in the loss function are asymptotically normal. \subsection{Highlights of our method} The performance of any cross-validation method, measured by mean squared error (MSE), depends on the bias-variance tradeoff, which is influenced by both the proportion of held-out data during training and the number of train-test repetitions. In standard cross-validation, this tradeoff is controlled by the number of folds. Our cross-validation method is particularly appealing because it provides two distinct levers to control the bias and variance of the associated estimator for prediction error. This is outlined below: \begin{enumerate}[leftmargin=*] \item \textbf{Direct control of bias via $\boldsymbol{\alpha}$:} The parameter $\alpha$ controls the bias introduced by estimating the prediction function on noisier training data, with the bias decaying to $0$ as $\alpha$ decreases. 
Unlike standard cross-validation, where bias is controlled by the number of folds, the parameter $\alpha$ in our method is independent of the number of train-test repetitions, $K$. This separation provides a significant advantage: by averaging empirical estimates of prediction error over just $K$ train-test repetitions---where $K$ can be as few as two---our method, with a small $\alpha$, can achieve a bias comparable to that of leave-one-out (LOO) cross-validation. Thus, even when sample splitting is feasible, the new cross-validated estimator offers a computationally efficient alternative for estimating prediction error. \item \textbf{Stable variance for finite $\mathbf{K}$:} A key strength of the proposed estimator, as supported by our theoretical analysis, is its stable variance for any finite $K$, even as the bias decays to zero with decreasing $\alpha$. This contrasts with standard cross-validation, where reducing bias often results in increased variance. The stability of the variance is due to the carefully designed correlation structure of the external Gaussian randomization variables. Following the literature on variance reduction techniques for Monte Carlo methods, e.g., \cite{craiu2005multiprocess}, we view our randomization approach as an ``extreme antithesis'', where the correlation between any pair of randomization variables takes the most negative value possible. \end{enumerate} To the best of our knowledge, this work is the first to investigate the potential of an antithetic Gaussian randomization approach for cross-validation. It provides a unique and a computationally efficient solution for reducing bias in the estimation of prediction errors, while maintaining a stable variance. Figure~\ref{fig: isotonic mse} showcases the performance of our new cross-validated estimator by comparing its mean squared error (MSE) against that of standard cross-validation estimators. In this example, we focus on estimating the prediction error for an isotonic regression problem. Our method uses only two train-test repetitions ($K=2$) with $\alpha=0.01$, while classic cross-validation is performed with $K=2$ folds and $K=100$ folds, the latter corresponding to leave-one-out (LOO) cross-validation. Remarkably, our estimator achieves a smaller MSE than LOO cross-validation while being $50$ times more computationally efficient. More details about this example, along with extensive numerical results that examine the effects of $\alpha$ and $K$, are presented in Section~\ref{sec: experiments}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{isotonic_mse.pdf} \caption{Mean squared error (MSE) for estimating prediction error in an isotonic regression problem using a simulated dataset. From left to right, the methods shown are classic 2-fold CV, LOO CV, and the proposed method with $K=2$ and $\alpha=0.01$. Additional details are provided in Section~\ref{sec: experiments}.} \label{fig: isotonic mse} \end{figure} \subsection{Related work and contributions} Our cross-validation proposal is inspired by several recently introduced randomized methods that provide alternatives to traditional sample splitting for tasks such as model validation, selective inference, and risk estimation. 
These alternatives include data-fission and data-thinning techniques by \cite{rasines2023splitting, leiner2023data, neufeld2024data, dharamshi2024generalized}, methods employing Gaussian randomization for selective inference tasks, as considered in \cite{dai2023fdr, TianTaylor2018, PanigrahiTaylor2022, huang2023selective}, and randomized methods by \cite{oliveira2021unbiased, oliveira2022unbiased, fry2023unbiased} for unbiased estimation of risk and prediction errors. Our cross-validation method, like data fission or data thinning techniques, is naturally suited for problems where sample splitting is infeasible. However, unlike these existing methods, which use different randomization schemes tailored to specific parametric distributions, our approach employs the same Gaussian randomization scheme for different loss functions and justifies their use within a relatively assumption-light framework. In fact, the idea of employing alternative forms of randomization for cross-validation is by no means new. For example, \cite{brown2013poisson} described a ``nonstandard cross-validation method'' for the Gaussian sequence model. They propose using a single train-test split of the form~\eqref{eq:simple-split} for estimation and hyperparameter tuning. This construction is closely related to our proposal when we only use two ``folds'' and it is also a key motivating example of data fission \citep{leiner2023data}. Similarly, the multifold thinning approach in \cite{neufeld2024data} proposed the use of correlated Gaussian randomization variables for cross-validation in the normal means problem. However, their correlation structure differs from the antithetic randomization scheme proposed in our work, a distinction that we highlight in our concluding discussion. Similar randomization schemes, where Gaussian noise is added to the sufficient statistic, have been prominent in the selective inference literature. For example, in the randomized lasso estimators by \cite{PanigrahiTaylor2022, panigrahi2024exact} and the randomized group lasso estimators by \cite{panigrahi2023approximate}, Gaussian noise is added to the objective function of the optimization problem. This randomized scheme is indeed equivalent to adding normal variables to the sufficient statistic in Gaussian regression models. The randomization framework for generalized linear models (GLMs) developed by \cite{liu2023selective} for selective inference with distributed data employs the same antithetic approach as presented in this paper, though it serves a different purpose. As a natural by-product, our proposal here can also be seen to offer a way to perform cross-validation in these randomized problems, particularly for selecting optimal tuning parameters that determine the amount of sparsity in the selected model. Among the methods reviewed, the one most closely related to our work is the coupled bootstrap (CB) estimator proposed by \cite{oliveira2021unbiased} for normal data, which we discuss in detail in the next section. The CB estimator computes prediction error using randomized train-test data constructed with independent Gaussian randomization variables. A key advantage of our cross-validated estimator over the CB estimator lies in its substantial variance reduction, achieved by deliberately using an antithetic Gaussian randomization scheme. 
Here is a summary of our main contributions in the remainder of the paper: \begin{enumerate}[leftmargin=*] \item In Section~\ref{sec:2}, we review the CB estimator for the normal means problem with a quadratic loss function and introduce our cross-validated estimator, based on antithetic Gaussian randomization variables. \item In Section~\ref{sec: theory}, we analyze the mean squared error of the proposed estimator as $\alpha$, the parameter controlling bias, approaches zero. Our theory demonstrates that we can obtain unbiased estimates of prediction error as $\alpha \to 0$, while ensuring that the variance of our estimator remains stable even with vanishingly small $\alpha$. In contrast to the CB estimator, which requires increasing $K$ as $\alpha$ decreases, our method can achieve the same variance with significantly smaller $K$. This analysis highlights the benefits of employing a carefully chosen antithetic randomization scheme instead of an independent randomization scheme. \item In Section~\ref{sec: SURE}, we establish connections between the proposed estimator and classical risk estimators, such as Stein's Unbiased Risk Estimator (SURE) and its variants for exponential families. Notably, our estimator can be viewed as replacing the divergence term in SURE by the divergence of a Gaussian-smoothed version of the prediction function. \item In Section \ref{sec:glm}, we extend our cross-validation framework to accommodate more general loss functions, including those commonly used in fitting GLMs, such as logistic regression. Under the assumption that the sufficient statistics are asymptotically normal and satisfy certain regularity conditions, we demonstrate that the mean squared error analysis generalizes to a broader class of loss functions. \item In Section~\ref{sec: experiments}, we provide simulation results comparing our proposed framework to standard cross-validation, the coupled bootstrap, and SURE. The proposed method performs effectively across various data types, loss functions, and prediction algorithms. It eliminates the need for sample splitting, manual tuning of the bias-variance tradeoff, or differentiating the prediction function. Additionally, the method is computationally efficient, requiring us to conduct only a small number of train-test repetitions. \item In Section~\ref{sec: conclusion}, we conclude with a discussion of potential extensions and new directions for the proposed method. \end{enumerate} \section{Basic setup and the proposed estimator} \label{sec:2} Here, we outline the setup of our problem. We assume that the response vector $Y=(Y_1,\ldots,Y_n)\tran\in\R^n$ is drawn from a distribution $\bbP_n$, while the predictors or covariates are treated as fixed. A prediction function $g$ is trained on this data. Given a loss function $\calL:\R^n\times \R^n\to\R$, our goal is to evaluate the performance of this prediction function on unseen test data $\tY$, which is an independent copy of the observed data $Y$. Our estimand of interest is the expected prediction error, defined as \begin{equation*} \PE(g)=\EE{\calL(g(Y), \tY ) }, \end{equation*} where the expectation is taken over both the training data $Y$ and the testing data $\tY$. The most common approach to estimating prediction error involves splitting the sample space. In this approach, the $n$ observations $(Y_1,\ldots,Y_n)$ are randomly divided into two non-overlapping subsets, $Y^{(1)}$ and $Y^{(2)}$. 
The prediction function $g$ is trained on the first subset $Y^{(1)}$, and its performance is evaluated on the second subset $Y^{(2)}$, resulting in the prediction error estimator \begin{align} \label{equ: train test splitting} \calL\left(g(Y^{(1)}), Y^{(2)}\right). \end{align} A more data-efficient approach to the same problem employs the $K$-fold cross-validation (CV), where the $n$ observations are randomly partitioned into $K$ non-overlapping folds, denoted by $Y^{(k)}$ for $k\in [K]$. Each fold is used for both training and testing, and the prediction error is finally estimated as \begin{align*} \frac1K\sum_{k=1}^K \calL(g(Y^{(-k)}), Y^{(k)}). \end{align*} Here, $Y^{(-k)}$, the complement of the $k$-th fold $Y^{(k)}$, is used for training the prediction function $g$, and the held-out fold, $Y^{(k)}$, is used for evaluating the predictive performance of $g$ in the $k$-th repetition. The bias-variance tradeoff in standard cross-validation depends on the number of folds $K$, and practitioners often face the challenge of selecting the optimal value of $K$ to achieve an effective tradeoff between the bias and variance of the resulting estimator. This paper introduces a novel approach to cross-validation that constructs train-test data using external randomization variables. Unlike standard cross-validation, our method addresses the bias-variance tradeoff by controlling two separate parameters: $\alpha$, which controls bias, and $K$, which controls variance. The advantages of this new form of cross-validation, with two user-specified parameters, will become evident through our analysis of the mean squared error. Before presenting our method, we first review the coupled bootstrap (CB) estimator proposed by \cite{oliveira2021unbiased}, which also utilizes external randomization variables to construct train-test data. \subsection{Review of coupled bootstrap (CB)} The CB estimator \citep{oliveira2021unbiased} aims to estimate the risk in the normal means problem, where the response vector $Y\in\R^n$ is assumed to follow the normal distribution $\N(\theta,\sigma^2I_n)$, with a known variance $\sigma^2$. In this work, we focus on the prediction error for a prediction function $g$, defined as \begin{equation} \label{pred:error} \PE(g)= \EE{\|g(Y)- \tY\|_2^2}, \end{equation} where $\tY \sim \N(\theta, \sigma^2 I_n)$ is an independent copy of $Y$. Note that our estimand differs from the risk by a constant in the normal means problem. To estimate $\PE(g)$, the CB method generates $K$ independent Gaussian randomization variables $$ \tilde\om^{(1)}, \tilde\om^{(2)}, \ldots, \tilde\om^{(K)}\iid \N(0, \sigma^2 I_n). 
$$ For each $k \in [K]$ and a parameter $\alpha \in \mathbb{R}^+$, two randomized copies of $Y$ are constructed as \begin{equation} \label{CB:train:test} \tilde{Y}^{(k)}_{\text{train}}= Y + \sqrt{\alpha}\tilde\om^{(k)}, \quad \tilde{Y}^{(k)}_{\text{test}}=Y- \dfrac{1}{\sqrt{\alpha}}\tilde\om^{(k)}, \end{equation} where, by construction, the two vectors are distributed as $$\begin{pmatrix} \widetilde{Y}^{(k)}_{\text{train}} \\ \widetilde{Y}^{(k)}_{\text{test}}\end{pmatrix} \sim \N\left(\begin{pmatrix}\theta \\ \theta \end{pmatrix}, \begin{bmatrix}\sigma^2 (1+\alpha) I_n & 0_{n, n} \\ 0_{n,n} & \sigma^2(1+\alpha^{-1}) I_n\end{bmatrix} \right).$$ The prediction error based on the $k$-th train-test pair is computed as \begin{equation} \label{CB:est} {\text{CB}}_{\alpha}^{(k)}= \|\tilde{Y}^{(k)}_{\text{test}} - g(\tilde{Y}^{(k)}_{\text{train}})\|_2^2- \frac{1}{\alpha}\|\tilde\om^{(k)}\|_2^2, \end{equation} where the second term, $\|\tilde\om^{(k)}\|_2^2/\alpha$, adjusts for the difference between the variance of the randomized test data and the variance of the original data $Y$. Finally, the CB estimator is obtained by averaging over $K$ independent draws of the Gaussian randomization variables $${\text{CB}}_{\alpha} = \frac{1}{K} \sum_{k=1}^K{\text{CB}}_{\alpha}^{(k)}.$$ Since $\tY^{(k)}_{\text{train}}\sim\N(\theta,(1+\alpha)\sigma^2 I_n)$, straightforward calculations show that the CB estimator is unbiased for a noise-inflated version of the prediction error \begin{align*} \PE_\alpha(g)=\EE{\|g(Y) - \tY\|_2^2 },\text{ where }Y\sim \N(\theta, (1+\alpha)\sigma^2 I_n ),\; \tY\sim \N(\theta,\sigma^2 I_n). \end{align*} This estimand corresponds to the prediction error when $g$ is trained on noisier data, with variance inflated by a factor of $(1+\alpha)$. The estimator $\CB_\alpha$ is, therefore, biased for the true prediction error $\PE(g)$, defined in Equation~\eqref{pred:error}. However, the bias---the difference between the noise-inflated prediction error $\PE_{\alpha}(g)$ and the original estimand $\PE(g)$---converges to zero as the parameter $\alpha$ approaches zero. Nevertheless, as in standard train-test splitting, a bias-variance tradeoff arises here: reducing the bias by decreasing $\alpha$ comes at the expense of increased variance. As shown in \cite{oliveira2021unbiased}, the variance of the CB estimator is of order $O((K\alpha)^{-1})$. This implies that, for any finite $K$, the variance of the CB estimator becomes unbounded as the bias decreases to $0$. We address this limitation of the CB estimator by introducing a randomization scheme with a carefully chosen correlation structure, which we refer to as an ``antithetic'' randomization scheme. \subsection{Antithetic randomization} In our antithetic randomization scheme, we generate $K$ ($K>1$) randomization variables as follows: \begin{equation} \om^{(1)},\ldots,\om^{(K)}\sim \N(0,\sigma^2 I_n), \text{ where } \text{Cov}(\om^{(j)},\om^{(k)})=-\frac{\sigma^2}{K-1}I_n \text{ for }j\neq k. \label{antithetic:rand} \end{equation} We make two important observations about this distribution. First, the normal distribution in \eqref{antithetic:rand} is degenerate. This is because the variance of the sum of the randomization variables is zero, i.e., $\text{Var}\left(\sum_{k=1}^K \om^{(k)}\right)=0$. Combined with the fact that the randomization variables have zero mean, this imposes the following zero-sum constraint on these randomization variables: \begin{equation} \sum_{k=1}^K \om^{(k)}=0.
\label{zero:sum} \end{equation} Second, for a $K$-by-$K$ correlation matrix where all off-diagonal entries are equal, the range of possible correlation values is $$[-\frac{1}{K-1}, 1].$$ Therefore, our randomization scheme takes the most negative correlation possible, which is why we refer to it as ``antithetic''. For a fixed $\alpha\in \mathbb{R}^+$, we construct randomized train-test copies of the data $Y$ as \begin{align*} \begin{pmatrix} Y^{(k)}_{\text{train}} \\ Y^{(k)}_{\text{test}} \end{pmatrix} = \begin{pmatrix} Y- \sqrt{\alpha}\displaystyle\sum_{j\neq k}\om^{(j)} \\ Y- \dfrac{1}{\sqrt{\alpha}}\om^{(k)} \end{pmatrix} = \begin{pmatrix} Y + \sqrt{\alpha}\om^{(k)} \\ Y- \dfrac{1}{\sqrt{\alpha}}\om^{(k)}\end{pmatrix},\;\text{ for } k\in[K], \end{align*} where the second equality is due to the zero-sum constraint in \eqref{zero:sum}. This approach mimics standard $K$-fold cross-validation in that, when pooling the train (or test) data from all $K$ folds, the randomization variables cancel out, thereby recovering the original data $Y$. Our cross-validated estimator $\cv_\alpha$ is then defined as \begin{align}\label{equ: def cv} {\text{CV}}_{\alpha}= \frac{1}{K}\sum_{k=1}^K {\text{CV}}_{\alpha}^{(k)}, \end{align} where \begin{equation*} \begin{aligned} {\text{CV}}_{\alpha}^{(k)} &= \|Y^{(k)}_{\text{test}} - g(Y^{(k)}_{\text{train}})\|_2^2- \frac{1}{\alpha}\|\om^{(k)}\|_2^2. \end{aligned} \end{equation*} The key distinction between the CB estimator and the proposed estimator lies in the randomization scheme. In the coupled bootstrap method, the randomization variables $\tilde\omega^{(1)},\ldots,\tilde\omega^{(K)}$ are independent. In contrast, our method employs correlated randomization variables \sloppy{$\omega^{(1)},\ldots,\omega^{(K)}$}. As will be shown in the next section, this correlation leads to a significant variance reduction, ensuring that the variance of our cross-validated estimator remains bounded as $\alpha\to 0$, at which point the bias of our estimator also vanishes. \section{Mean squared error analysis} \label{sec: theory} In this section, we analyze the mean squared error (MSE) of the proposed estimator $\cv_\alpha$~\eqref{equ: def cv} for estimating the prediction error $\PE(g)$~\eqref{pred:error} in the normal means problem. The MSE can be decomposed into bias and variance as \begin{align*} \EE{(\cv_\alpha -\PE(g) )^2 } &= \left\{\EE{\cv_\alpha} -\PE(g) \right\}^2 + \Var{\cv_\alpha}\\ &= \left\{\EE{\cv_\alpha} -\PE(g) \right\}^2 + \EE{\Var{\cv_\alpha\mid Y}} + \Var{\EE{\cv_\alpha\mid Y }}.\numberthis\label{equ: MSE decomposition} \end{align*} We study the bias $\EE{\cv_\alpha} -\PE(g)$ in Section~\ref{sec: bias}, and the reducible variance $\EE{\Var{\cv_\alpha\mid Y}}$ and irreducible variance $\Var{\EE{\cv_\alpha\mid Y }}$ in Section~\ref{sec: variance}. \subsection{Bias}\label{sec: bias} We show that the bias $\EE{\cv_\alpha} -\PE(g)$ can be made arbitrarily small as $\alpha$ approaches zero, under the mild condition that $\|g(Y)\|_2^2$ is integrable. This result follows directly from the ``approximation to the identity'' property of the Gaussian density, as stated in Lemma \ref{lem: approximation to identity} below. Let $\varphi_{\sigma^2}$ denote the density of the normal distribution $\N(0, \sigma^2 I_n)$. Let $f * \varphi_{\sigma^2}$ denote the convolution of an integrable function $f$ with $\varphi_{\sigma^2}$, which is defined as \begin{align*} f*\varphi_{\sigma^2}(y):=\int f(y-z)\varphi_{\sigma^2}(z)\rd z.
\end{align*} \begin{lemma}[Approximation to the identity] \label{lem: approximation to identity} Let $f$ be a function that is integrable with respect to the Gaussian distribution $\N(\theta, \sigma^2 I_n)$. Then \begin{align*} f*\varphi_{\alpha\sigma^2}(Y)\stackrel{L_1}{\to} f(Y) \text{ as }\alpha\to 0. \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem: approximation to identity}] This is a direct application of Lemma~\ref{lem: log p condition} and Lemma~\ref{lem: L1} in the Appendix. \end{proof} Lemma \ref{lem: approximation to identity} states that the convolution of a function with $\varphi_{\alpha\sigma^2}$ is close to the original function in the $L_1$ sense as $\alpha\to0$. In the context of our problem, this lemma implies that $$\EE{g(Y+\sqrt\alpha\omega)\mid Y}\stackrel{L_1}{\to} g(Y)$$ as $\alpha\to0$, which is the key to showing that the bias of our estimator vanishes in this limit. The result is formalized in the following theorem. \begin{theorem}[Bias]\label{thm: bias} Assume that $\EE{\|g(Y)\|_2^2}<\infty$. Then we have \begin{align*} \lim_{\alpha\to0} \EE{\cv_\alpha } =\PE(g). \end{align*} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm: bias}] Since $\EE{\cv_\alpha}=\EE{\cv_\alpha^{(k)}}$, it is sufficient to compute the expectation of $\cv_\alpha^{(k)}$. Observe that \begin{equation*} \begin{aligned} \EE{\cv_\alpha^{(k)}}&=\EE{\|Y-\frac{1}{\sqrt\alpha}\omega^{(k)} - g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 - \frac{\|\omega^{(k)}\|_2^2}{\alpha} } \\ &=\EE{\|g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 - 2(Y-\frac{1}{\sqrt\alpha}\omega^{(k)})\tran g(Y+\sqrt\alpha\omega^{(k)}) }\\ & \ \ \ \ + \EE{\|Y-\frac{1}{\sqrt\alpha}\omega^{(k)}\|_2^2} - \EE{\frac{\|\omega^{(k)} \|_2^2}{\alpha}}\\ &=\EE{\|g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 } -2\EE{(Y-\frac{1}{\sqrt\alpha}\omega^{(k)}) } \tran \EE{g(Y+\sqrt\alpha\omega^{(k)})} + \EE{\|Y\|_2^2}\\ &=\EE{\|g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 } -2\EE{Y } \tran \EE{g(Y+\sqrt\alpha\omega^{(k)})}+ \EE{\|Y\|_2^2}, \end{aligned} \end{equation*} where we have used the facts that $Y+\sqrt\alpha\omega^{(k)} \indep Y-\frac{1}{\sqrt\alpha}\omega^{(k)}$, $Y\indep \omega^{(k)}$, and $\EE{\omega^{(k)}}=0$. Note that $$\EE{\|g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 \mid Y } = \|g\|_2^2 * \varphi_{\alpha\sigma^2} (Y),$$ which converges in $L_1$ to $\|g(Y)\|_2^2$ as $\alpha\to0$, by Lemma~\ref{lem: approximation to identity}. Similarly, applying Lemma~\ref{lem: approximation to identity} to the function $g_i(Y)$ for $1\leq i\leq n$ shows that $\EE{g(Y+\sqrt\alpha\omega^{(k)})\mid Y }$ converges in $L_1$ to $g(Y)$. This establishes that, as $\alpha\to0$, \begin{align*} \EE{\cv_\alpha^{(k)}} \to \EE{\|g(Y)\|_2^2} - 2\EE{Y}\tran \EE{g(Y)} + \EE{\|Y\|_2^2}. \end{align*} The right-hand side equals $\PE(g)=\EE{\|\tilde Y-g(Y)\|_2^2 }$, where $\tilde Y$ is an independent copy of $Y$. This completes the proof. \end{proof} Consequently, the proposed estimator $\cv_\alpha$ has vanishingly small bias when $\alpha$ is chosen to be small. In standard $K$-fold cross-validation, reducing bias typically requires increasing $K$, which leads to higher computational costs and often greater variance. In contrast, our estimator achieves low bias by simply using a small $\alpha$, without the need to increase $K$. More importantly, as we will demonstrate next, unlike the coupled bootstrap method, decreasing $\alpha$ does not increase the variance of our estimator.
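To make the construction concrete, the following Python sketch gives a minimal, self-contained illustration of the estimator in the normal means problem; the soft-thresholding rule used for $g$ and all simulation settings are arbitrary placeholder choices, so the sketch should be read as one possible implementation under these assumptions rather than as a prescribed algorithm. The antithetic draws in \eqref{antithetic:rand} are obtained here by centering $K$ independent Gaussian vectors of variance $\sigma^2 K/(K-1)$, which produces marginal variance $\sigma^2$, pairwise covariance $-\sigma^2/(K-1)$, and an exact zero sum.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def antithetic_draws(K, n, sigma, rng):
    # center K i.i.d. N(0, sigma^2*K/(K-1) I_n) vectors: marginal variance
    # sigma^2, pairwise covariance -sigma^2/(K-1), rows summing to zero
    Z = rng.normal(scale=sigma * np.sqrt(K / (K - 1)), size=(K, n))
    return Z - Z.mean(axis=0)

def g(y, lam=1.0):
    # placeholder prediction rule: coordinatewise soft-thresholding
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def cv_alpha(y, sigma, alpha=0.01, K=2, rng=rng):
    # antithetic cross-validated estimate of the prediction error PE(g)
    W = antithetic_draws(K, len(y), sigma, rng)
    est = 0.0
    for w in W:
        y_train = y + np.sqrt(alpha) * w     # noisier training copy
        y_test = y - w / np.sqrt(alpha)      # independent test copy
        est += np.sum((y_test - g(y_train)) ** 2) - np.sum(w ** 2) / alpha
    return est / K

# toy check of the bias result: average CV_alpha over simulated data sets
# and compare with a Monte Carlo estimate of PE(g)
n, sigma = 100, 1.0
theta = np.concatenate([np.full(10, 3.0), np.zeros(n - 10)])
cv_vals, pe_vals = [], []
for _ in range(2000):
    y = theta + sigma * rng.normal(size=n)
    y_new = theta + sigma * rng.normal(size=n)
    cv_vals.append(cv_alpha(y, sigma))
    pe_vals.append(np.sum((y_new - g(y)) ** 2))
print(np.mean(cv_vals), np.mean(pe_vals))    # the two averages should be close
\end{verbatim}
With a small $\alpha$, the first printed average should track the second, in line with Theorem~\ref{thm: bias}, while the next subsection explains why the variance of each evaluation of $\cv_\alpha$ nevertheless remains under control.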
\subsection{Variance reduction with antithetic randomization} \label{sec: variance} To analyze the variance of the proposed estimator $\cv_\alpha$, we impose a mild smoothness condition on the prediction function $g$. This condition is the weak differentiability assumption considered in the classical SURE estimator~\citep{stein1981estimation}. \begin{assumption}[Weak differentiability]\label{assump: weakly differentiable} All components $g_i$ ($1\leq i\leq n$) of $g$ are weakly differentiable. That is, there exists a function $\nabla g_i:\R^n\to\R^n$, the weak derivative of $g_i$, such that \begin{align*} g_i(y+z) - g_i(y) = \int_0^1 z\cdot \nabla g_i(y+tz)\rd t, \end{align*} for almost all $y, z\in\R^n$. Denote the Jacobian matrix of $g$ as $\nabla g\in \R^{n\times n}$, where the $i$-th row is equal to $\nabla g_i$. \end{assumption} This class of functions encompasses many well-known estimators, including the ridge estimator, the lasso estimator, the group lasso estimator, and the generalized lasso estimator; see, for example, the paper by \cite{tibshirani2012degrees}. The following theorem provides the expression for the reducible variance of $\cv_\alpha$ as $\alpha$ approaches zero. \begin{theorem}[Reducible variance]\label{thm: reducible variance} Suppose that Assumption~\ref{assump: weakly differentiable} holds. Furthermore, let $\EE{\|g(Y)\|_2^4}<\infty$, $\EE{\|\nabla g(Y)\|_F^2}<\infty$. Then, we have that \begin{align*} \lim_{\alpha\to0} \EE{\Var{\cv_\alpha\mid Y}}= \frac{4\sigma^4}{K-1}\EE{\|\nabla g(Y) \|_F^2 + \tr(\nabla g(Y)^2 )}. \end{align*} \end{theorem} \begin{rmk} Theorem \ref{thm: reducible variance} implies that the reducible variance of our cross-validated estimator remains bounded for any fixed $K>1$, even as $\alpha\to0$. In contrast, the CB estimator, based on independent randomization variables, has a reducible variance of order $O(\frac{1}{K\alpha})$, which diverges to $\infty$ as $\alpha\to 0$ for any finite $K$. \end{rmk} We provide a sketch of the proof here to illustrate the role of antithetic randomization in achieving this reduction in variance, with the detailed proof deferred to Appendix~\ref{prf: thm reducible variance}. \begin{proof}[Proof sketch of Theorem~\ref{thm: reducible variance}] We first write \begin{align*} \cv_\alpha&=\frac1K\sum_{k=1}^K \|Y-\frac{1}{\sqrt\alpha}\omega^{(k)} - g(Y +\sqrt\alpha\omega^{(k)} )\|_2^2 - \frac{1}{\alpha}\|\omega^{(k)}\|_2^2\\ &=\underbrace{\frac1K\sum_{k=1}^K \|Y-g(Y+\sqrt\alpha\omega^{(k)})\|_2^2}_{(\Rom{1})} + \underbrace{\frac1K\sum_{k=1}^K \frac{2}{\sqrt\alpha}\langle \omega^{(k)} , g(Y+\sqrt\alpha\omega^{(k)})\rangle}_{(\Rom{2})} \numberthis\label{equ: CV decomp} \\ &\qquad \qquad - \underbrace{\frac2K\sum_{k=1}^K \langle Y, \frac{1}{\sqrt\alpha} \omega^{(k)} \rangle}_{=0} . \end{align*} Note that the last term is 0 because of the zero-sum property of the antithetic randomization variables, i.e., $\sum_{k=1}^K \omega^{(k)}=0$. Note that $$ \Var{\cv_\alpha \mid Y} = \Var{(\Rom{1}) \mid Y} + \Var{(\Rom{2}) \mid Y} + 2 \cov[{(\Rom{1}), (\Rom{2})\mid Y}].$$ For the first summation $(\Rom{1})$, we show that $$\Var{(\Rom{1}) \mid Y} \stackrel{L_1}{\to} 0.$$ This is because we can write this conditional variance as the convolution of an integrable function with the Gaussian density $\varphi_{\alpha\sigma^2}$, which converges in $L_1$ to 0, by the ``approximation to identity property of the Gaussian density", as stated in Lemma~\ref{lem: approximation to identity}. 
For the second summation $(\Rom{2})$, we have by the definition of weak differentiability that \begin{align*} (\Rom{2}) &=\frac{2}{K\sqrt\alpha } \sum_{k=1}^K \langle \omega^{(k)}, g(Y) + \int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(k)})\tran (\sqrt\alpha\omega^{(k)}) \rd t \rangle\\ &=\frac{2}{K}\sum_{k=1}^K {\omega^{(k)}}\tran \left[\int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(k)})\rd t\right] \omega^{(k)}.\numberthis\label{equ: second term decomp} \end{align*} The last equality is due to the fact that $\sum_{k=1}^K \omega^{(k)}=0$, which forces the term $$\frac{2}{K\sqrt\alpha } \sum_{k=1}^K \langle \omega^{(k)}, g(Y) \rangle$$ term to vanish. The ``approximation to identity property" is applied again to show that $$ \Var{(\Rom{2}) \mid Y} \stackrel{L_1}{\to} \Var{\frac{2}{K} \sum_{k=1}^K {\omega^{(k)}}\tran \nabla g(Y) \omega^{(k)}\mid Y }. $$ The right-hand-side in the last display is the variance of a quadratic form of the Gaussian vector $(\omega^{(1)}, \ldots,\omega^{(K)})$, which has a closed form as given in the statement of the Theorem. Lastly, $\cov[{(\Rom{1}), (\Rom{2})\mid Y}]\stackrel{L_1}{\to} 0$ by noting that \begin{equation*} \begin{aligned} \EE{\cov[{(\Rom{1}), (\Rom{2})\mid Y}]} &\leq \EE{\sqrt{\Var{(\Rom{1}) \mid Y} \Var{(\Rom{2}) \mid Y}}}\\ &\leq \sqrt{\EE{\Var{(\Rom{1}) \mid Y}}}\sqrt{\EE{\Var{(\Rom{2}) \mid Y}}}. \end{aligned} \end{equation*} The first inequality in the above display follows by applying the Cauchy-Schwarz inequality $$\cov[{(\Rom{1}), (\Rom{2})\mid Y}] \leq \sqrt{\Var{(\Rom{1}) \mid Y} \Var{(\Rom{2}) \mid Y}}.$$ \end{proof} Finally, to complete the analysis of variance of our estimator, we provide the limit of the irreducible variance. \begin{theorem}[Irreducible variance]\label{thm: irreducible variance} Under the same assumptions as in Theorem~\ref{thm: reducible variance}, we have that \begin{align*} \lim_{\alpha\to0}\Var{\EE{\cv_\alpha \mid Y }} = \Var{\|Y - g(Y)\|_2^2 + 2\sigma^2 \tr(\nabla g(Y)) }. \end{align*} \end{theorem} The proof is provided in Appendix~\ref{prf: irreducible}. Combining the bias-variance results in Theorem \ref{thm: bias}, \ref{thm: reducible variance} and \ref{thm: irreducible variance}, we find that, as $\alpha\to0$, \begin{align*} \text{MSE}(\cv_{\alpha}) \to \Var{\|Y - g(Y)\|_2^2 + 2\sigma^2 \tr(\nabla g(Y)) } + \frac{4\sigma^4}{K-1}\EE{\|\nabla g(Y) \|_F^2 + \tr(\nabla g(Y)^2 )}. \end{align*} Recall that the MSE of the CB estimator is dominated by a term of order $O(1/\alpha)$ as $\alpha\to0$ for any finite $K$. In contrast, the MSE of the proposed estimator remains bounded, leading to the following corollary. \begin{corollary} \label{cor:dominate CB} Under the same assumptions as in Theorem~\ref{thm: reducible variance}, for any finite $K>1$, we have that \begin{align*} \lim_{\alpha \to 0} \left\{\mathrm{MSE}(\cv_{\alpha}) - \mathrm{MSE}(\mathrm{CB}_{\alpha})\right\} = -\infty. \end{align*} \end{corollary} This result indicates that our cross-validated estimator offers an infinite efficiency gain over the coupled bootstrap method. Moreover, by selecting a small $\alpha$, we can make the bias arbitrarily small while ensuring that the variance does not blow up. This stability in variance underscores the advantages of the proposed antithetic randomization scheme. 
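The contrast with independent randomization described in Corollary~\ref{cor:dominate CB} can also be checked empirically. The short sketch below, which is again purely illustrative and reuses the same placeholder soft-thresholding rule for $g$, holds a single data set fixed and re-draws the randomization many times; the Monte Carlo variance of the independently randomized estimator grows roughly like $1/\alpha$, whereas the variance of the antithetic estimator stays essentially flat as $\alpha$ decreases.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def g(y, lam=1.0):
    # same placeholder soft-thresholding rule as in the previous sketch
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def randomized_estimate(y, sigma, alpha, K, rng, antithetic=True):
    # K randomized train-test splits: antithetic=True gives the proposed
    # CV_alpha (correlated, zero-sum draws); antithetic=False gives the
    # coupled-bootstrap CB_alpha (independent draws)
    n = len(y)
    if antithetic:
        Z = rng.normal(scale=sigma * np.sqrt(K / (K - 1)), size=(K, n))
        W = Z - Z.mean(axis=0)               # rows sum exactly to zero
    else:
        W = sigma * rng.normal(size=(K, n))
    est = 0.0
    for w in W:
        resid = y - w / np.sqrt(alpha) - g(y + np.sqrt(alpha) * w)
        est += np.sum(resid ** 2) - np.sum(w ** 2) / alpha
    return est / K

# randomization variance on a single fixed data set, as alpha shrinks
n, sigma, K = 100, 1.0, 2
theta = np.concatenate([np.full(10, 3.0), np.zeros(n - 10)])
y = theta + sigma * rng.normal(size=n)
for alpha in [1.0, 0.1, 0.01]:
    cv = [randomized_estimate(y, sigma, alpha, K, rng, True) for _ in range(2000)]
    cb = [randomized_estimate(y, sigma, alpha, K, rng, False) for _ in range(2000)]
    print(alpha, np.var(cv), np.var(cb))     # Var(CB) grows; Var(CV) does not
\end{verbatim}
The only difference between the two settings is the sign of the correlation among the randomization draws, which is precisely the design choice that the antithetic scheme exploits.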
\section{Connection with SURE} \label{sec: SURE} For the normal means problem, a well-known method for risk estimation is Stein's Unbiased Risk Estimator (SURE) \citep{stein1981estimation}, which is defined as \begin{align*} \mathrm{SURE}(g)= \|Y-g(Y)\|_2^2 + 2\sigma^2\nabla\cdot g(Y), \end{align*} where the divergence of $g$ is given by $\nabla\cdot g(Y)=\tr(\nabla g(Y))$. SURE is commonly used to estimate the quadratic risk $\EE{\|\theta-g(Y)\|_2^2}$. In the normal means problem, the quadratic risk and the prediction error differ only by a constant $n\sigma^2$. Therefore, we analyze SURE here as an estimator of the prediction error $\PE(g)$. Under Assumption~\ref{assump: weakly differentiable}, along with the conditions that $\EE{\|g(Y)\|_2^2} < \infty$ and $\EE{|\nabla_i g_i(Y)|} < \infty$, the SURE estimator is unbiased for the prediction error $\PE(g)$. The unbiased-ness of SURE follows directly from Stein's identity for Gaussian distributions: $$ \EE{(Y-\theta)\tran g(Y)}=\sigma^2 \EE{\nabla\cdot g(Y)}. $$ We argue that our estimator $\cv_\alpha$ closely resembles SURE, despite being motivated from a completely different perspective. Recall from Equation~\eqref{equ: CV decomp} that our estimator can be expressed as \begin{align}\label{equ: cv decomp 2} \cv_\alpha = \frac1K\sum_{k=1}^K \|Y - g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 +\frac1K\sum_{k=1}^K \frac{2}{\sqrt\alpha}(\omega^{(k)})\tran g(Y+\sqrt\alpha\omega^{(k)}). \end{align} For small $\alpha$, we claim that $$ \EE{\cv_\alpha\mid Y} \approx \|Y-g(Y)\|_2^2 + 2\sigma^2\nabla\cdot g(Y)=\mathrm{SURE}(g). $$ This is due to the following reasons. By Lemma~\ref{lem: approximation to identity}, the conditional expectation of the first term in \eqref{equ: cv decomp 2}, $\EE{\|Y-g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 \mid Y }$, converges in $L_1$ as $\alpha\to0$ to $\|Y-g(Y)\|_2^2$, which is the first term in $\text{SURE}(g)$. Moreover, according to Equation~\eqref{equ: second term decomp}, the second term in \eqref{equ: cv decomp 2} equals \begin{align*} \frac1K \sum_{k=1}^K \frac{2}{\sqrt\alpha}(\omega^{(k)})\tran g(Y+\sqrt\alpha\omega^{(k)}) &= \frac{2}{K}\sum_{k=1}^K {\omega^{(k)}}\tran \left[\int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(k)})\rd t\right] \omega^{(k)}, \end{align*} By a reasoning similar to Lemma~\ref{lem: approximation to identity}, we can show that as $\alpha\to0$ \begin{align*} &\EE{\frac{2}{K}\sum_{k=1}^K {\omega^{(k)}}\tran \left[\int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(k)})\rd t\right] \omega^{(k)} \mid Y} \stackrel{L_1}{\to} 2\sigma^2\nabla\cdot g(Y), \end{align*} which corresponds to the second term in $\text{SURE}(g)$. Consequently, after integrating out the randomization variables, the proposed estimator $\cv_\alpha$ converges to SURE$(g)$ in $L_1$ as $\alpha\to0$. Furthermore, even for a positive $\alpha$, the proposed estimator remains closely related to SURE. In fact, we argue that the proposed estimator corresponds to the SURE applied to a convolution-smoothed version of the prediction function $g$. To see this, consider the expression for $\cv_\alpha$ in Equation~\eqref{equ: cv decomp 2}, and replace the term $g(Y+\sqrt\alpha\omega^{(k)})$ with its conditional expectation $\EE{g(Y+\sqrt\alpha\omega)\mid Y}$, where the expectation is over $\omega\sim\N(0,\sigma^2 I_n)$. 
This leads to the noise-free version of our estimator: \begin{align} \overline{\cv}_\alpha= \|Y - \EE{g(Y+\sqrt\alpha\omega)\mid Y }\|_2^2 + \frac{2}{\sqrt\alpha}\EE{\omega\tran g(Y+\sqrt\alpha\omega) \mid Y}, \label{noise:free:CV} \end{align} In other words, $\overline{\cv}_\alpha$ corresponds to $\cv_\alpha$ with the randomness from $\omega^{(k)}$'s marginalized out. The following result states that the noise-free version $\overline{\cv}_\alpha$ of the proposed estimator, coincides with the SURE when $g$ is replaced by its convolution-smoothed version $g*\varphi_{\alpha\sigma^2}$. \begin{proposition}[Connection with SURE]{\label{prop: SURE}} It holds that \begin{align}\label{equ: smoothed cv} \overline{\cv}_\alpha = \mathrm{SURE}(g * \varphi_{\alpha\sigma^2} ). \end{align} \end{proposition} The proof is provided in Appendix~\ref{prf: prop SURE}. Two remarks are in order. \begin{rmk} When SURE is applicable, the proposed estimator behaves similarly to SURE when $\alpha$ is small. Our estimator, however, does not require computing the divergence term $\nabla \cdot g$, which may not be available in closed form for many estimators. This makes $\cv_\alpha$ a more practical choice in such scenarios. \end{rmk} \begin{rmk} When SURE is not applicable, such as when $g$ is not weakly differentiable, the proposed estimator $\cv_\alpha$ remains well-defined. In these cases, $\cv_\alpha$ behaves as though applying SURE to the infinitely differentiable, convolution-smoothed estimator $g*\varphi_{\alpha\sigma^2}$. This connection with SURE provides further justification for the proposed method, providing a solution in settings where SURE is not applicable. \end{rmk} \subsection{Generalization to exponential families} Given the connection between $\cv_\alpha$ and SURE, we can naturally generalize our estimator to other exponential families, using the more general version of Stein's identity for this larger family of distributions. Suppose $Y\in\R^n$ follows the exponential family distribution with density \begin{align*} p(Y)=\exp(\theta\tran Y - A(\theta) )\cdot h(Y), \end{align*} where $\theta\in\R^n$ is the natural parameter, $A(\theta)$ is the log-partition function, and $h$ is the base measure. Let $g(Y)$ be an estimator of $\theta$. Our goal is to estimate the risk under the quadratic loss $\EE{\|\theta - g(Y)\|_2^2}$. Since $\|\theta\|_2^2$ is a constant not depending on the estimator and $\EE{\|g(Y)\|_2^2}$ can be estimated by $\|g(Y)\|_2^2$, the task reduces to estimating the cross term $\EE{\theta\tran g(Y)}$. Stein's identity (see, for example, \cite{eldar2008generalized}): \begin{align}\label{equ: stein identity} \EE{\theta\tran g(Y) }=-\EE{\nabla\cdot g(Y) + g(Y)\tran \nabla \log h(Y)} \end{align} implies that $$- \nabla\cdot g(Y) - g(Y)\tran \nabla\log h(Y) $$ is an unbiased estimator of $\EE{\theta\tran g(Y)}$. However, this estimator involves the divergence term $\nabla\cdot g(Y)$, which is often unavailable. In line with our earlier arguments, we propose to approximate the divergence term $\nabla\cdot g$ by its convolution-smoothed version $\nabla\cdot (g*\varphi_{\alpha\sigma^2})$. This term can then be estimated using the Monte Carlo estimator \begin{align*} \frac{1}{K\sqrt{\alpha}}\sum_{k=1}^K {\omega^{(k)}}\tran g(y+\sqrt\alpha\omega^{(k)}), \end{align*} where $$ \omega^{(k)}\sim \N(0, I_n), \ \Cov{\omega^{(j)},\omega^{(k)}}=-\frac{1}{K-1}I_n \text{ for } j\neq k. 
$$ The advantages of using antithetic randomization extend here as well, ensuring that the variance remains bounded even as $\alpha\to0$, at which point the bias also vanishes. \section{Extensions beyond the quadratic loss} \label{sec:glm} In this section, we extend our cross-validation method to handle more general loss functions, where the sufficient statistic in the loss function is asymptotically normal. To emphasize the dependence on the sample size, we add subscripts $n$ to the data, the estimand, and the estimator. Later in the section, we analyze the bias and variance of the proposed estimator in the asymptotic regime as $n \to \infty$. Suppose the data $Y=Y_n$ is generated from an exponential family with density: \begin{equation*} p_n(Y_n \mid \theta_n) = \exp\left\{\sqrt{n}(\theta_n\tran S_n(Y_n) - A_n(\theta_n))\right\}\cdot h_n(Y_n), \label{gen:density} \end{equation*} where $\theta_n$ is the $p$-dimensional natural parameter. Note that, in this formulation, the sufficient statistic $S_n=S_n(Y_n)$ and the log-partition function $A_n(\theta_n)$ are scaled by $1/\sqrt n$. We consider a loss function derived from the negative log-likelihood of this density, which is given by \begin{equation} \calL(\theta_n, Y_n)= A_n(\theta_n)-\theta_n\tran S_n(Y_n) - \frac{1}{\sqrt n}\log h_n(Y_n) . \label{gen:loss} \end{equation} This setup accommodates the loss functions typically used in fitting generalized linear models (GLM). Throughout this section, we assume the existence of a sequence of $p\times p$ positive definite matrices $H_n$ and vectors $\mu_n\in\R^p$ such that \begin{equation} H_n^{-1/2}(S_n-\mu_n) \stackrel{d}{\Rightarrow} \N(0, I_p). \label{asymptotic:normal:stats} \end{equation} The asymptotic normality assumption holds in GLMs under regularity conditions as established in \cite{fahrmeir1985consistency}. \subsection{Cross-validated estimator} Suppose that $g(S_n)$ is an estimator of $\theta_n$, which depends on the data only through the sufficient statistic $S_n$. As before, we define the prediction error as the expectation of the loss function: \begin{align*} \mathrm{PE}_n(g)=\EE{\calL(g(S_n), \tilde Y_n ) }= \EE{A_n(g(S_n)) - g(S_n)\tran \tilde{S}_n - n^{-1/2}\log h_n(\tY_n)}, \end{align*} where $\tilde Y_n$ is an independent copy of $Y_n$, and $\tilde{S}_n= S_n(\tilde{Y}_n)$ is the sufficient statistic of $\tilde{Y}_n$. We define the rescaled sufficient statistics as $$ T_n = H_n^{-1/2} S_n, \quad \tilde T_n=H_n^{-1/2} \tilde{S}_n. $$ By Equation~\eqref{asymptotic:normal:stats}, the asymptotic distributions of $T_n-H_n^{-1/2}\mu_n$ and $\tilde T_n-H_n^{-1/2}\mu_n$ are $\N(0, I_p)$. Let $$ \mathfrak{g}_n(T_n)= (H_n^{1/2})\tran g(H_n^{1/2} T_n), \quad \mathfrak{A}_n(T_n)= A_n(g(H_n^{1/2}T_n)), $$ such that $$ A_n(g(S_n))=\mathfrak{A}_n(T_n),\quad g(S_n)\tran \tilde S_n=\mathfrak{g}_n(T_n)\tran \tilde T_n. $$ With these notations, we can rewrite the prediction error as \begin{equation} \mathrm{PE}_n(g)=\EE{\mathfrak{A}_n(T_n) - \mathfrak{g}_n(T_n) \tran \tilde T_n} -\EE{n^{-1/2}\log h_n(Y_n)}. \label{PE:general} \end{equation} The second expectation in our estimand, $\EE{n^{-1/2}\log h_n(Y_n)}$, can be easily estimated by $n^{-1/2}\log h_n(Y_n)$. The first expectation is taken over $T_n$ and $\tilde T_n$, which are asymptotically normal with identity covariance. Thus, the problem reduces to a form analogous to the normal means example discussed earlier, except that $T_n$ is not exactly normal but asymptotically normal.
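For concreteness, the antithetic randomization variables used in this section, which have $\N(0, I_p)$ marginals, pairwise correlation $-1/(K-1)$, and sum to zero, can be generated by centering $K$ independent Gaussian vectors and rescaling by $\sqrt{K/(K-1)}$. The short Python sketch below illustrates this; the centering construction and the function name \texttt{antithetic\_gaussians} are our own illustrative choices rather than a prescribed implementation, and in the normal means problem the draws would additionally be multiplied by $\sigma$. These are the draws $\omega^{(k)}$ used to form the randomized train-test pairs below.
\begin{verbatim}
import numpy as np

def antithetic_gaussians(K, p, rng=None):
    # Draw omega^(1), ..., omega^(K) in R^p with N(0, I_p) marginals,
    # Cov(omega^(j), omega^(k)) = -I_p / (K - 1) for j != k, and
    # sum_k omega^(k) = 0.  Centering construction (illustrative).
    rng = np.random.default_rng(rng)
    Z = rng.standard_normal((K, p))
    W = np.sqrt(K / (K - 1)) * (Z - Z.mean(axis=0, keepdims=True))
    return W  # rows are the antithetic draws; they sum to zero exactly

# Monte Carlo sanity check of the covariance structure
K, p = 4, 3
sims = np.stack([antithetic_gaussians(K, p, rng=s) for s in range(20000)])
print(np.cov(sims[:, 0, 0], sims[:, 1, 0])[0, 1])  # approximately -1/(K-1)
\end{verbatim}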
We apply the same idea as before, constructing the train-test pair of randomized data as \begin{align*} T_n + \sqrt\alpha\omega\quad \text{and} \quad T_n-\frac{1}{\sqrt\alpha} \omega, \quad \text{where } \omega\sim \N(0, I_p), \end{align*} for $\alpha \in \mathbb{R}^+$. Clearly, the train-test data are asymptotically independent. We train the prediction function on $T_n+\sqrt\alpha\omega $ and evaluate its performance on $T_n-\frac{1}{\sqrt\alpha}\omega$, leading to the following estimate of $\PE_n(g)$: \begin{align*} \frakA_n(T_n+\sqrt\alpha\omega) - \frakg_n(T_n + \sqrt\alpha\omega )\tran (T_n - \frac{1}{\sqrt\alpha}\omega) - n^{-1/2}\log h_n(Y_n). \end{align*} We propose to repeat this procedure $K>1$ times, with randomization variables $\omega^{(1)},\ldots,\omega^{(K)}$ generated using the antithetic scheme described in \eqref{antithetic:rand}, i.e., \begin{align}\label{equ: antithetic 2} \omega^{(k)}\sim\N(0,I_p),\quad \Cov{\omega^{(j)}, \omega^{(k)} } = \frac{-1}{K-1}I_p\, \text{ for } j\neq k. \end{align} Averaging over the $K$ draws of randomization, we obtain the cross-validated estimator \begin{equation} \begin{aligned} \cv_{n,\alpha}=\frac1{K}\sum_{k=1}^K&\Big\{\mathfrak A_n( T_n+\sqrt\alpha\omega^{(k)}) - \mathfrak g_n(T_n + \sqrt\alpha\omega^{(k)} )\tran (T_n - \frac{1}{\sqrt\alpha}\omega^{(k)}) \Big\}\\ &\quad - n^{-1/2} \log h_n(Y_n). \end{aligned} \label{CV:general} \end{equation} Note that we could equivalently work with the sufficient statistics $S_n$ directly, without rescaling them to $T_n$. In this case, the randomization variables would be introduced with a marginal covariance matrix equal to $H_n$, while maintaining the same antithetic correlation structure used throughout our method. With the rescaling of the sufficient statistics, we instead work with randomization variables that have an identity covariance matrix, which simplifies the presentation. As we demonstrate next, the proposed estimator exhibits similar desirable bias-variance properties to those in the normal means problem. Specifically, the asymptotic bias vanishes as $\alpha\to 0$, and the variance remains bounded, which is again a consequence of the antithetic randomization scheme. \subsection{Mean squared error analysis} To conduct the mean squared error analysis of our cross-validated estimator $\cv_{n,\alpha}$, we require some additional assumptions on the sufficient statistics $T_n$. For a weakly differentiable $\mathbb{R}^p$-valued function $g$ and a $p$-dimensional vector $\mu$, define \begin{align*} (\calT_{\mu} g)(x)=\langle g(x),\mu-x \rangle + \nabla\cdot g(x). \end{align*} For a normal random variable $X\sim \mathcal{N}(\mu, I_p)$, it follows that $\EE{(\calT_\mu g)(X) }=0$, which recovers Stein's identity. Let $\mathbb{Q}_n$ represent the distribution of the rescaled sufficient statistics, $T_n$, with density $q_n$ and expectation $m_n= H_n^{-1/2}\mu_n$. \begin{assumption}\label{assump: stein discrepancy} Assume that \begin{align*} \lim_{n\to\infty}\EE{(\calT_{m_n} g_n) (T_n) } = 0 \end{align*} where $$ (\calT_{m_n} g)(x)= \langle g(x), m_n-x\rangle + \nabla\cdot g(x). $$ \end{assumption} Under a distribution $\mathbb{Q}_n$ that is not normal, note that $\EE{(\calT_{m_n} g_n) (T_n) }$ is no longer exactly zero. This quantity, known as Stein's measure of non-normality, forms the basis for the notion of Stein's discrepancy; see, for example, the paper by \cite{gorham2015measuring}. 
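To make the operator $\calT_{\mu}$ concrete, the short Python check below verifies Stein's identity $\EE{(\calT_\mu g)(X)}=0$ by Monte Carlo for an exactly normal $X$; the componentwise $\tanh$ map playing the role of $g$ is our own illustrative choice, picked only because its divergence is available in closed form. For a non-normal $X$, the same empirical average would instead approximate the Stein discrepancy that Assumption~\ref{assump: stein discrepancy} requires to vanish.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = 5
mu = np.linspace(-1.0, 1.0, p)

def g(x):                    # illustrative smooth map from R^p to R^p
    return np.tanh(x)

def div_g(x):                # its exact divergence, sum_i d g_i / d x_i
    return np.sum(1.0 - np.tanh(x) ** 2)

def stein_op(x):             # (T_mu g)(x) = <g(x), mu - x> + div g(x)
    return g(x) @ (mu - x) + div_g(x)

X = mu + rng.standard_normal((100000, p))            # X ~ N(mu, I_p)
vals = np.array([stein_op(x) for x in X])
print(vals.mean(), vals.std() / np.sqrt(len(vals)))  # mean close to 0
\end{verbatim}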
Assumption \ref{assump: stein discrepancy} requires that the sufficient statistics exhibit vanishingly small Stein's discrepancy as $n$ goes to infinity. For example, given that the sufficient statistics are asymptotically normal, this condition holds if $\|T_n\|_q^q$ is also uniformly integrable, and both functions $\langle g(x), x\rangle$, $\nabla\cdot g(x)$ grow slower than $\|x\|_q^q$ for some $q>0$. \begin{assumption}\label{assump: log density q_n} Assume that there exist constants $N_0>0$ and $C>0$ such that, for all $n\geq N_0$, the density $q_n$ of $T_n$ satisfies \begin{align*} |\log q_n(x) -\log q_n(x')| \leq C \|x-x'\|_2^2. \end{align*} \end{assumption} The condition in Assumption \ref{assump: log density q_n} is automatically satisfied if the density of the sufficient statistics converges to a normal density. Now we are ready to show that the bias and variance results established in Section~\ref{sec: theory} for exactly normal data carry over to our estimator based on asymptotically normal sufficient statistics. In particular, we show that the asymptotic bias is 0 as $\alpha\to0$ and $n\to\infty$. Moreover, the variance remains bounded as $\alpha\to0$. \begin{theorem}[Bias]\label{thm: glm bias} Let Assumptions~\ref{assump: weakly differentiable}, \ref{assump: stein discrepancy}, and \ref{assump: log density q_n} hold. In addition, assume that \sloppy{$\EE{|\frakA_n(T_n)|}<\infty$}, $\EE{\|\frakg_n(T_n)\|_2^2}<\infty$, and $\EE{|\nabla\frakg_n(T_n)|}<\infty$. Then \begin{align*} \lim_{n\to\infty} \lim_{\alpha\to0} \Big|\EE{\cv_{n,\alpha}} - \PE_n(g)\Big| = 0. \end{align*} \end{theorem} \begin{theorem}[Reducible variance]\label{thm: glm var} Let Assumptions~\ref{assump: weakly differentiable} and \ref{assump: log density q_n} hold. In addition, assume that $\EE{\frakA_n(T_n)^2}<\infty$, $\EE{\|\frakg_n(T_n)\|_2^4}<\infty$, and $\EE{\|\nabla\frakg_n(T_n)\|_F^2}<\infty$. When $n\geq N_0$, we have \begin{align*} \lim_{\alpha\to0} \EE{\Var{\cv_{n,\alpha} \mid Y_n }}=\frac{1}{K-1}\EE{\|\nabla\frakg_n(T_n)\|_F^2 + \tr(\nabla\frakg_n(T_n)^2) }. \end{align*} \end{theorem} The proofs are provided in Appendices~\ref{prf: glm bias} and \ref{prf: glm var}. We conclude this section by presenting the widely used logistic regression for classification as an example within our setup. Simulations in the next section illustrate the performance of our cross-validated estimator in this example. \begin{example}[Logistic regression] In logistic regression, the negative log-likelihood is given by \begin{align*} \sum_{i=1}^n - y_i x_i\tran \theta + \log(1+e^{x_i\tran\theta}), \end{align*} where the data follow the model \begin{align*} y_i\sim \mathrm{Bernoulli}(\pi(x_i\tran \theta_n) ) , \end{align*} and $\pi(x)=\frac{1}{1+\exp(-x)}$ is the sigmoid function. In the above-described setup, we scale the log-likelihood by $1/\sqrt{n}$, resulting in the loss function \begin{align*} &\calL(\theta_n, Y_n)= -\theta_n\tran S_n + A_n(\theta_n), \end{align*} where $$ S_n=\frac{1}{\sqrt n} X_n\tran Y_n,\quad A_n(\theta_n)= \frac{1}{\sqrt{n}}\sum_{i=1}^n\log(1+e^{x_i^\top \theta_n}). $$ Under mild regularity conditions, such as those outlined in \cite{fahrmeir1985consistency}, it can be shown that the sufficient statistic $S_n$ follows an asymptotically normal distribution. The covariance matrix in this limiting normal distribution is equal to $H_n=\dfrac{1}{n}X_n\tran W_n X_n$, where $W_n$ is a diagonal matrix with entries $\pi(x_i\tran \theta_n)(1-\pi(x_i\tran \theta_n))$.
Clearly, this covariance depends on the true underlying parameter. However, our method performs effectively as long as we can estimate this matrix, as demonstrated in the simulations presented in Section \ref{sec: experiments}. \end{example} \section{Numerical experiments} \label{sec: experiments} In this section, we evaluate our method empirically on simulated data and compare it with competing estimators of the prediction error. \subsection{Effects of $\alpha$ and $K$} In our first example, we examine the effects of $\alpha$ and $K$, the two parameters that control the bias and variance of the proposed cross-validation estimator. We consider an isotonic regression problem with $n=100$ observations, in which the one-dimensional predictors are drawn independently from $\textnormal{Uniform}(0,1)$ and are treated as fixed throughout this experiment. The responses are then generated independently as $Y_i \sim \N(f^*(X_i), \sigma^2)$, where the true function $f^*(x) = 2\cdot\lceil 5x\rceil-6$ and the noise level $\sigma^2 = 1$. Visualizations of the function $f^*$ and simulated data points are provided in Figure~\ref{fig:true-monotone-function}. We apply isotonic regression to the data to estimate the true prediction function $f^*(x)$. A snapshot of this example was previously presented in the introduction to illustrate comparisons between our method and the standard CV method. Throughout this section, we will refer to our method as ``Antithetic CV". \begin{figure}[t] \centering \includegraphics[width=.5\textwidth]{monotone-ex.pdf} \caption{In the isotonic regression simulations, the data are generated based on the function $f^*$, shown as a solid line. An example of a simulated dataset is displayed as scatter points.} \label{fig:true-monotone-function} \end{figure} We compare the MSE of the proposed Antithetic CV method and the Coupled Bootstrap (CB) method across various values of $\alpha$ and $K$. In the left panel of Figure~\ref{fig: isotonic alpha K}, $\alpha$ is fixed at 0.1, and $K$ is varied from 2 to 32. In the right panel, $K$ is fixed at 8, and $\alpha$ is varied from 0.005 to 0.5. In all comparisons, standard CV uses the same value of $K$ for its train-test splits as the other two methods. From this figure, we make a few key observations: \begin{enumerate} \item The antithetic CV method consistently outperforms standard $K$-fold CV and the coupled bootstrap method across all configurations of $(\alpha, K)$. \item In the left panel of Figure~\ref{fig: isotonic alpha K}, the MSE of all three methods decreases as the number of train-test repetitions $K$ increases. For the coupled bootstrap and antithetic CV methods, this reduction can be attributed to the decrease in reducible variance with increasing $K$. \item In the right panel of Figure~\ref{fig: isotonic alpha K}, we observe that the MSE of CB decreases as $\alpha$ increases. This is because the MSE of CB is dominated by the variance when $\alpha$ is small. As a result, even though the bias of CB decreases with smaller $\alpha$, the increase in variance outweighs this benefit, leading to a higher overall MSE. In contrast, for antithetic CV, we note that the MSE either decreases or does not increase as $\alpha$ becomes smaller. This is expected, as our approach achieves bias reduction without paying a price in terms of the variance.
\end{enumerate} \begin{figure} \centering \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{isotonic_vary_K.pdf} \end{subfigure} \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{isotonic_vary_alpha.pdf} \end{subfigure} \caption{Mean squared error of standard $K$-fold CV, coupled bootstrap with $K$ repetitions, and the proposed Antithetic CV methods with $K$ repetitions. Left panel: MSE with $\alpha=0.1$ and varying $K$. Right panel: MSE with $K=8$ and varying $\alpha$. } \label{fig: isotonic alpha K} \end{figure} \subsection{Logistic regression} We illustrate the effectiveness of the proposed method for estimating prediction errors in GLMs, as described in Section~\ref{sec:glm}. We consider a logistic regression example with both continuous and categorical predictors and $n=100$. The $4$ continuous features $X^{\text{cts}}$ are generated from the standard Gaussian distribution independently, and each of the $2$ categorical features, $X^{\text{cat}}$ has 3 classes drawn with probability $[0.1, 0.1, 0.8]$ independently. Given the predictors, the response $Y_i$ is drawn from a Bernoulli distribution with mean parameter $(1+e^{-\eta_i})^{-1}$, where \[ \eta_i =\beta\tran X_i^{\text{cts}} + \sum_{j=1}^{2}\sum_{k=1}^{3} \gamma_{jk}\mathbf{1}\{X_{ij}^{\text{cat}} =k\}, \] with $\beta = (1, -1, 1, -1)$ and $\gamma_{j} = (1/2, -1/2, 0)$ for $j=1,2$. The regression coefficients are estimated by fitting a logistic regression model, where we have used one-hot encoding for the categorical features. The goal is to estimate the prediction error, which in this case is the expectation of the negative log likelihood of the logistic regression model. The proposed antithetic CV approach, like thinning methods for Gaussian data, assumes knowledge of the covariance matrix of the noise. Here, we demonstrate empirically that the proposed method continues to perform quite well even when we use a plug-in estimate for the covariance matrix. Specifically, we estimate the unknown covariance matrix of the sufficient statistic $H_n$ by $\hat H_n=\frac1n X\tran \hat W X$, where $\hat W$ is the diagonal matrix whose $i$-th diagonal entry is $\pi(x_i\tran \hat\theta ) (1-\pi(x_i\tran \hat\theta))$ and where $\hat\theta$ is the logistic regression solution. We then use the antithetic CV method introduced in Section~\ref{sec:glm} with $H_n$ replaced by $\hat H_n$. We also consider an alternative method that substitutes the antithetic Gaussian variables in our approach with independent Gaussian variables. This method mirrors the CB method in the normal means problem, and is referred to as ``independent randomization". We set $\alpha=0.1$ for both the antithetic CV method and the method based on a Gaussian randomization scheme with independent variables. In addition, similar to our previous comparisons, we also apply the standard $K$-fold CV estimator in this example. We consider two commonly used values, $K=10$ and $K=20$, which are standard choices in practice. The MSE of all three estimators is presented in Figure~\ref{fig: logistic vary K}. The proposed antithetic CV method achieves a much smaller MSE compared to the standard $K$-fold CV. Furthermore, we observe again a clear advantage of antithetic Gaussian randomization over the independent randomization scheme. These results highlight the substantial improvement in the quality of prediction error estimation that can be achieved through the carefully chosen correlated randomization scheme. 
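For readers who wish to reproduce the plug-in step described above, the sketch below fits the logistic regression, forms $\hat W$ from the fitted probabilities, and returns $\hat H_n = \frac1n X\tran \hat W X$ together with the rescaled statistic $\hat H_n^{-1/2} S_n$, which is then randomized with the antithetic draws exactly as in Equation~\eqref{CV:general}. The use of scikit-learn's \texttt{LogisticRegression} and the specific options shown are illustrative choices, not necessarily the exact configuration of our simulations.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def plugin_rescaled_statistic(X, y):
    # Plug-in estimate H_hat = (1/n) X^T W_hat X and T_n = H_hat^{-1/2} S_n
    # for logistic regression; the settings below are illustrative, and
    # H_hat is assumed to be positive definite.
    n = X.shape[0]
    # penalty=None gives an unpenalized fit (penalty='none' on older sklearn)
    fit = LogisticRegression(penalty=None, fit_intercept=False).fit(X, y)
    probs = fit.predict_proba(X)[:, 1]        # pi(x_i^T theta_hat)
    w_hat = probs * (1.0 - probs)             # diagonal entries of W_hat
    H_hat = (X.T * w_hat) @ X / n
    S_n = X.T @ y / np.sqrt(n)                # sufficient statistic
    evals, evecs = np.linalg.eigh(H_hat)      # H_hat^{-1/2} via eigendecomposition
    H_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return H_hat, H_inv_sqrt @ S_n            # (H_hat, T_n)
\end{verbatim}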
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{logistic_vary_K_alpha_0.1_pluginH.pdf} \caption{MSE in estimating the prediction error in the logistic regression example. We set $\alpha=0.1$ and consider $K=10$ and 20. } \label{fig: logistic vary K} \end{figure} \subsection{Multi-layer perceptron regression} In this example, we apply the antithetic CV method to estimate the prediction error of a black-box neural network. The responses are drawn independently as $y_i\sim \N(f^*(x_i) , \sigma^2)$, where $f^*$ is the Friedman \#1 function \citep{friedman1991multivariate}, defined as: \begin{align*} f^*(x) = 10\sin(\pi x_1x_2) + 20(x_3-\frac12)^2 + 10x_4 + 5x_5, \end{align*} and $\sigma^2=1$. We generate $10$-dimensional predictors $x_i$ independently and uniformly from $[0,1]^{10}$. We fix $n=1000$. A multi-layer perceptron (MLP) prediction model is fitted using the Python package scikit-learn \citep{scikit-learn}. The MLP consists of two hidden layers, each with 64 units. The training configuration includes a maximum of $2000$ epochs, a batch size of $64$, and an $L_2$ regularization parameter of $0.01$, with all other parameters set to their default values. We evaluate the proposed antithetic CV method alongside the coupled bootstrap (CB) method described in Section~\ref{sec:2}, with $\alpha=0.1$. As done in the previous instance of logistic regression, we report results for $K=10$ and $K=20$. Consistent with the previous results, from Figure~\ref{fig: mlp vary K}, the proposed cross-validation method achieves the smallest MSE when compared against classic $K$-fold CV and the CB method. While the performance of CB improves with a larger $K$, this comes at a much higher computational cost. To elaborate further, hyperparameter tuning is a key step in nearly all machine learning methods and is typically performed using grid search. However, as the number of hyperparameters increases, this process can quickly become computationally prohibitive. In such cases, we believe that our antithetic cross-validation method provides a more computationally efficient alternative to standard cross-validation, making it particularly well-suited for hyperparameter tuning in resource-constrained environments. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{mlp_vary_K_alpha_0.1.pdf} \caption{MSE in estimating the prediction error in the MLP regression example. We consider $K=10$ and 20. For CB and antithetic CV, $\alpha$ is set to be 0.1. } \label{fig: mlp vary K} \end{figure} \section{Discussion} \label{sec: conclusion} We conclude by discussing related research directions and potential applications of the antithetic randomization scheme in our work. \paragraph{Multi-fold data thinning} As noted in the introduction, a related concept called multi-fold data thinning (MDT) was proposed in \cite{neufeld2024data}. For a vector $Y\sim\N(\theta,\sigma^2 I_n)$, the MDT method generates $X^{1},\ldots,X^K$ such that $\sum_{k=1}^K X^k=Y$, $\EE{X^{(k)}\mid Y }=\ep_k Y$, $\Var{X^{(k)} \mid Y}=\ep_k(1-\ep_k)\sigma^2$, and $\Cov{X^{(j)}, X^{(k)} \mid Y}=-\ep_j\ep_{k}\sigma^2$, where $\ep_k> 0$ and $\sum_{k=1}^K \ep_k=1$. This method treats $Y^{(k)}_{\mathrm{MDT}}=Y-X^{(k)}$ as the training data and $X^{(k)}$ as the testing data. 
If we take $\ep_k=1/K$ for all $k$, then the training data $Y^{(k)}_{\mathrm{MDT}}$ follow a joint Gaussian distribution with: \begin{align*} \EE{Y^{(k)}_{\mathrm{MDT}}} = \frac{K-1}{K} \theta,\quad \Cov{Y^{(j)}_{\mathrm{MDT}}, Y^{(k)}_{\mathrm{MDT}}} = \left(1 -\frac{2}{K}+ \frac{\delta_{j,k}}{K}\right) \sigma^2 I_n. \end{align*} Instead, the training data in the $k$-th fold of our cross-validation method equals $Y_{\mathrm{train}}^{(k)}=Y+\sqrt\alpha\omega^{(k)}$, where the joint distribution of the training data across the folds is equal to: \begin{align*} \EE{Y_{\mathrm{train}}^{(k)}} = \theta,\quad \Cov{ Y_{\mathrm{train}}^{(j)}, Y_{\mathrm{train}}^{(k)}} = \left(1-\frac{\alpha}{K-1} + \frac{\alpha K}{K-1} \delta_{j,k} \right) \sigma^2 I_n. \end{align*} Even if we multiply $Y^{(k)}_{\mathrm{MDT}}$ by $\frac{K}{K-1}$ to match the mean, the covariance \begin{align*} \Cov{\frac{K}{K-1} Y^{(j)}_{\mathrm{MDT}},\, \frac{K}{K-1} Y^{(k)}_{\mathrm{MDT}}} = \left(\frac{K(K-2)}{(K-1)^2}+ \frac{K}{(K-1)^2}\delta_{j,k}\right) \sigma^2 I_n \end{align*} is different from that in our method. More importantly, our proposed method introduces an additional parameter, $\alpha$, enabling direct control over the bias of the estimator. In contrast, the MDT method relies solely on a single parameter, $K$, to balance the bias-variance tradeoff, like the standard cross-validation. Moreover, \citet{neufeld2024data} did not provide a quantitative analysis of the bias and variance of the MDT estimator. Our paper focuses on problems where the sufficient statistics are either normal or asymptotically normal. As future work, it would be interesting to investigate whether similar antithetic randomization schemes could be extended to other convolution-closed distributions, as discussed in the aforementioned reference. \paragraph{Simultaneous perturbation} The randomization approach proposed in this paper is also connected to simultaneous perturbation methods commonly employed in black-box optimization. Suppose that the objective function is $F(\theta)$ for $\theta\in\R^d$, whose gradient is not available. Consider its Gaussian-smoothed version $\EE[\omega\sim\N(0,I)]{F(\theta+\sigma\omega)}$, whose gradient takes the form \begin{align*} \nabla_\theta \EE[\omega\sim\N(0,I)]{F(\theta+\sigma\omega)} = \frac{1}{\sigma}\EE[\omega\sim\N(0,I_d)]{F(\theta+\sigma\omega)\omega }. \end{align*} This expectation can be then approximated by sampling $\omega^{(k)}\sim\N(0,I_d)$ and computing \begin{align*} \hat g(\theta)= \frac{1}{K\sigma}\sum_{k=1}^K F(\theta + \sigma \omega^{(k)}) \omega^{(k)}. \end{align*} The estimated gradient $\hat g(\theta)$ can be used in any gradient-based optimization algorithm. This method is known as simultaneous perturbation stochastic approximation \citep{spall1992multivariate} or random gradient-free optimization \citep{nesterov2017random}, and it is widely used in black-box optimization tasks such as reinforcement learning \citep{salimans2017evolution} and fine tuning large language models \citep{malladi2023fine}. In these methods, the random perturbations $\omega^{(k)}$ are sampled independently of one another. It is known that the bias of the estimator vanishes as $\sigma\to 0$, while the variance increases as $\sigma$ decreases. The antithetic randomization approach introduced in this paper could be directly applied to gradient estimation in this context, potentially improving the performance of downstream optimization tasks. We leave this as a potential direction for future work. 
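To indicate what such an extension might look like, the sketch below forms the gradient estimate $\hat g(\theta)$ with the perturbations drawn from the antithetic scheme of this paper instead of independently. Because the perturbations sum to zero, the $F(\theta)$-level term cancels exactly, mirroring the mechanism that keeps the variance of $\cv_\alpha$ bounded; this snippet is only an illustration of the suggested direction, not a method analyzed or evaluated here.
\begin{verbatim}
import numpy as np

def antithetic_grad_estimate(F, theta, sigma=0.1, K=8, rng=None):
    # (1/(K*sigma)) * sum_k F(theta + sigma*w_k) w_k with antithetic w_k
    # (N(0, I_d) marginals, pairwise correlation -1/(K-1), zero sum).
    # Illustrative variant of SPSA-type estimators; not analyzed in the paper.
    rng = np.random.default_rng(rng)
    d = theta.shape[0]
    Z = rng.standard_normal((K, d))
    W = np.sqrt(K / (K - 1)) * (Z - Z.mean(axis=0, keepdims=True))
    vals = np.array([F(theta + sigma * w) for w in W])
    # since sum_k w_k = 0, the F(theta)-level term drops out automatically
    return (vals[:, None] * W).sum(axis=0) / (K * sigma)

# usage on a toy quadratic objective, where grad F(theta) = theta
F = lambda th: 0.5 * np.sum(th ** 2)
theta = np.array([1.0, -2.0, 0.5])
print(antithetic_grad_estimate(F, theta, sigma=1e-3, K=8, rng=1))
\end{verbatim}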
\bibliographystyle{apalike} \bibliography{citations.bib} \newpage \appendix \section{Proofs for Section~\ref{sec: theory}} \subsection{Proof of Theorem~\ref{thm: reducible variance}} \label{prf: thm reducible variance} \begin{proof}[Proof of Theorem~\ref{thm: reducible variance}] We first write \begin{align*} \cv_\alpha&=\frac1K\sum_{k=1}^K \|Y-\frac{1}{\sqrt\alpha}\omega^{(k)} - g(Y +\sqrt\alpha\omega^{(k)} )\|_2^2 - \frac{1}{\alpha}\|\omega^{(k)}\|_2^2\\ &= \underbrace{\frac1K\sum_{k=1}^K \left[ \|Y-g(Y+\sqrt\alpha\omega^{(k)})\|_2^2 \right]}_{(\Rom{1})} + \underbrace{\frac1K\sum_{k=1}^K \frac{2}{\sqrt\alpha}\langle \omega^{(k)} , g(Y+\sqrt\alpha\omega^{(k)})\rangle}_{(\Rom{2})}. \end{align*} By Lemma~\ref{lem: first term}, $\Var{(\Rom{1}) \mid y } $ converges in $L_1$ to 0. By Lemma~\ref{lem: second term}, $\Var{(\Rom{2})\mid Y } $ converges in $L_1$ to $\Var{\frac{2}{K}\sum_{k=1}^K (\omega^{(k)})\tran \nabla g(Y) \omega^{(k)} \mid Y }$. When $j\neq k$, $\Cov{\omega^{(j)}, \omega^{(k)} }=\rho \sigma^2 I$ where $\rho=-\frac{1}{K-1} $. So we have \begin{align*} &\Var{\frac{1}{K}\sum_k (\omega^{(k)})\tran \nabla g(Y) \omega^{(k)} \mid Y }\\ &\qquad =\frac{1}{K^2}\left(K\cdot \Var{\omega\tran \nabla g(Y)\omega } + K(K-1) \Cov{(\omega^{(1)})\tran \nabla g(Y) \omega^{(1)}, (\omega^{(2)})\tran \nabla g(Y) \omega^{(2)} } \right). \end{align*} By Lemma~\ref{lem: gaussian quadratic covariance}, \begin{align*} &\Var{\omega\tran \nabla g(Y)\omega }=\sigma^4 (\|\nabla g(Y) \|_F^2 + \tr(\nabla g(Y)^2 ) ),\\ &\Cov{(\omega^{(1)})\tran \nabla g(Y) \omega^{(1)}, (\omega^{(2)})\tran \nabla g(Y) \omega^{(2)} } =\frac{1}{(K-1)^2} \Var{\omega\tran \nabla g(Y)\omega }. \end{align*} Therefore, \begin{align*} \Var{\frac{1}{K}\sum_k (\omega^{(k)})\tran \nabla g(Y) \omega^{(k)} \mid Y } &=\frac{1}{K^2}\left(K + K(K-1) \frac{1}{(K-1)^2} \right) \Var{\omega\tran \nabla g(Y)\omega } \\ &=\frac{\sigma^4}{K-1}(\|\nabla g(Y) \|_F^2 + \tr(\nabla g(Y)^2 ) ). \end{align*} This completes the proof. \end{proof} \begin{lemma}[First term $(\Rom{1})$]\label{lem: first term} Assume that $\EE{\|g(Y)\|_2^4}<\infty$. Then as $\alpha\to0$, \begin{align*} \Var{ \frac1K\sum_{k=1}^K \|Y -g(Y + \sqrt\alpha\omega^{(k)}) \|_2^2 \mid Y }\stackrel{L_1}{\to} 0 . \end{align*} \end{lemma} \begin{proof} Because $\Var{X_1+\ldots+X_K}\leq K\cdot(\Var{X_1}+\ldots+\Var{X_K})$, it suffices to show that as $\alpha\to0$, \begin{align*} \Var{ \|Y - g(Y + \sqrt\alpha\omega)\tran \|_2^2 \mid Y } \stackrel{L_1}{\to} 0, \end{align*} where $\omega\sim \N(0, \sigma^2 I_n)$. It suffices to show that $\EE{\|Y - g(Y+\sqrt\alpha\omega) \|_2^4\mid Y }\stackrel{L_1}{\to}\|Y-g(Y)\|_2^4$, which is equivalent to showing that $\EE{\|g(Y+\sqrt\alpha\omega)\|_2^4 \mid Y }\stackrel{L_1}{\to}\|g(Y)\|_2^4$. This is true by applying Lemma~\ref{lem: approximation to identity} with $f(y)=\|g(y)\|_2^4$. \end{proof} \begin{lemma}[Second term $(\Rom{2})$]\label{lem: second term} Assume that $\EE{\|\nabla g(Y)\|_F^2}<\infty$. Then as $\alpha\to0$, \begin{align*} \Var{\frac2K\sum_{k=1}^K \langle \frac{1}{\sqrt\alpha}\omega^{(k)}, g(Y+\sqrt\alpha\omega^{(k)})\rangle \mid Y }\stackrel{L_1}{\to}\Var{\frac{2}{K}\sum_{k=1}^K (\omega^{(k)})\tran \nabla g(Y) \omega^{(k)} \mid Y}. 
\end{align*} \end{lemma} \begin{proof} Since all the components of $g$ are almost differentiable, we have \begin{align*} \langle\frac{1}{\sqrt\alpha} \omega, g(Y+\sqrt\alpha\omega)\rangle&=\frac{1}{\sqrt\alpha}\sum_{i=1}^n \omega_i g_i(Y + \sqrt\alpha\omega)\\ &=\frac{1}{\sqrt\alpha}\sum_{i=1}^n \omega_i \left[g_i(Y) + \int_0^1 \sqrt\alpha\omega \cdot \nabla g_i(Y+t\sqrt\alpha\omega )\rd t \right] \\ &=\frac{1}{\sqrt\alpha}\omega\tran g(Y) + \sum_{i=1}^n \omega_i \int_0^1 \omega\cdot \nabla g_i(Y+t\sqrt\alpha\omega )\rd t\\ &=\frac{1}{\sqrt\alpha}\omega\tran g(Y) + \omega\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega )\rd t\, \omega, \end{align*} where $\nabla g$ is the $n\times n$ matrix with the $i$-th row being $\nabla g_i$. Averaging over $k=1,\ldots,K$ and using the fact that $\sum_{k=1}^K\omega^{(k)}=0$, we have \begin{align*} \frac1K\sum_{k=1}^K \langle \frac{1}{\sqrt\alpha}\omega^{(k)}, g(Y+\alpha\omega^{(k)})\rangle = \frac{1}{K}\sum_{k=1}^K (\omega^{(k)})\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(k)})\rd t\, \omega^{(k)}. \end{align*} Denote each summand as $\Gamma_\alpha(\omega^{(k)}, Y)$, where $\Gamma_\alpha(\omega,Y)=\omega\tran\int_0^1 \nabla g(Y+t\sqrt\alpha\omega )\rd t \omega $ and $\Gamma_0(\omega, Y)=\omega\tran \nabla g(Y) \omega$. The claim of the lemma is equivalent to \begin{align*} \Var{\frac1K\sum_{k=1}^K \Gamma_\alpha(\omega^{(k)}, Y )\mid Y }\stackrel{L_1}{\to}\Var{\frac1K\sum_{k=1}^K \Gamma_0(\omega^{(k)}, Y ) \mid Y },\text{ as }\alpha\to0. \end{align*} It suffices to show that as $\alpha\to0$, \begin{align*} \Var{\Gamma_\alpha(\omega,Y)\mid Y} &\stackrel{L_1}{\to} \Var{\Gamma_0(\omega,Y)\mid Y},\\ \Cov{\Gamma_\alpha(\omega^{(1)}, Y), \Gamma_\alpha(\omega^{(2)}, Y)\mid Y} &\stackrel{L_1}{\to} \Cov{\Gamma_0(\omega^{(1)}, Y), \Gamma_0(\omega^{(2)}, Y)\mid Y}. \end{align*} The conditional variance and conditional covariance involve the conditional expectation of the following terms: \begin{align*} &\omega\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega)\rd t\, \omega = \sum_{i,j=1}^n \int_0^1 \nabla_{j} g_i(Y+t\sqrt\alpha\omega) \omega_i\omega_j \rd t\\ &(\omega\tran\int_0^1 \nabla g(Y+t\sqrt\alpha\omega)\rd t\omega)^2 =\\ &\qquad\qquad \sum_{i,j,k,l}\int_{[0,1]^2} \nabla_j g_i (Y+t_1\sqrt\alpha\omega) \nabla_l g_k(Y+t_2\sqrt\alpha\omega) \omega_i\omega_j\omega_k\omega_l\rd t_1 \rd t_2\\ &({\omega^{(1)}}\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(1)})\rd t \omega^{(1)})({\omega^{(2)}}\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega^{(2)})\rd t \omega^{(2)})=\\ &\qquad\qquad\sum_{i,j,k,l}\int_{[0,1]^2} \nabla_j g_i(Y+t_1\sqrt\alpha\omega^{(1)}) \nabla_l g_k(Y+t_2\sqrt\alpha\omega^{(2)}) \omega_i^{(1)}\omega_j^{(1)}\omega_k^{(2)}\omega_l^{(2)}\rd t_1 \rd t_2 \end{align*} Consider the quantities \begin{align*} &\omega_i\omega_j \int_0^1 \nabla_j g_i (Y+t\sqrt\alpha\omega)\rd t\\ & \omega_i\omega_j\omega_k\omega_l \int_0^1\int_0^1 \nabla_j g_i (Y+t_1\sqrt\alpha\omega) \nabla_l g_k(Y+t_2\sqrt\alpha\omega)\rd t_1\rd t_2\\ &\omega_i^{(1)}\omega_j^{(1)}\omega_k^{(2)}\omega_l^{(2)} \int_0^1\int_0^1 \nabla_j g_i(Y+t_1\sqrt\alpha\omega^{(1)}) \nabla_l g_k(Y+t_2\sqrt\alpha\omega^{(2)})\rd t_1\rd t_2 . \end{align*} We need to show that their conditional expectation $\EE{\cdot\mid Y}$ converges in $L_1$ to their corresponding expected value at $\alpha=0$. For the first term, we apply Lemma~\ref{lem: L1 t} with $f=\nabla_j g_i $ and $h(\omega)=\omega_i\omega_j$. 
For the second term, we apply Lemma~\ref{lem: L1 t2} with $f_1=\nabla_j g_i$, $f_2= \nabla_l g_k $, and $h(\omega)=\omega_i\omega_j\omega_k\omega_l$. For the third term, we can apply Lemma~\ref{lem: L1 t2} as well. \end{proof} \subsection{Proof of Theorem~\ref{thm: irreducible variance}} \label{prf: irreducible} \begin{proof}[Proof of Theorem \ref{thm: irreducible variance}] Note that \begin{align*} \EE{\cv_\alpha\mid Y} &=\|Y\|_2^2 +\EE{\|g(Y+\sqrt\alpha\omega)\|_2^2 \mid Y } -2Y\tran\EE{g(Y+\sqrt\alpha\omega)\mid Y} \\ &+ \frac{2}{\sqrt\alpha}\EE{\omega\tran g(Y+\sqrt\alpha\omega) \mid Y }. \end{align*} It suffices to show that as $\alpha\to0$, \begin{align*} \EE{\|g(Y+\sqrt\alpha\omega)\|_2^2 \mid Y} &\stackrel{L_2}{\to} \|g(Y)\|_2^2,\\ \EE{g(Y+\sqrt\alpha\omega)\tran Y \mid Y} &\stackrel{L_2}{\to} g(Y)\tran Y,\\ \EE{\frac{1}{\sqrt\alpha}\omega\tran g(Y+\sqrt\alpha\omega)\mid Y } &\stackrel{L_2}{\to} \sigma^2 \tr(\nabla g(Y)). \end{align*} Applying Lemma~\ref{lem: L1} with $f(y)=\|g(y)\|_2^4$ and $h(\omega)=1$, and with $f(y)=g_i(y)g_j(y)$ and $h(\omega)=1$ ($1\leq i,j\leq n$), proves the first two lines. For the third limit, recall that \begin{align*} \frac{1}{\sqrt\alpha}\omega\tran g(Y+\sqrt\alpha\omega)=\frac{1}{\sqrt\alpha}\omega\tran g(Y) + \omega\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega )\rd t \omega. \end{align*} Because $\EE{\omega\tran g(Y)\mid Y }=0$, we have \begin{align*} \EE{\frac{1}{\sqrt\alpha}\omega\tran g(Y+\sqrt\alpha\omega) \mid Y} &= \EE{\omega\tran \int_0^1 \nabla g(Y+t\sqrt\alpha\omega )\rd t \omega \mid Y }\\ &=\sum_{i,j=1}^n \EE{\int_0^1 \nabla_j g_i(Y+t\sqrt\alpha\omega )\rd t \omega_i\omega_j \mid Y}. \end{align*} Applying Lemma~\ref{lem: L1 t} with $f(y)=\nabla_j g_i(y)\nabla_l g_k(y) $ and $h(\omega)=\omega_i\omega_j\omega_k\omega_l $ ($1\leq i,j,k,l\leq n$) proves the third line. \end{proof} \section{Proofs for Section~\ref{sec: SURE}} \subsection{Proof of Proposition~\ref{prop: SURE}} \label{prf: prop SURE} \begin{proof}[Proof of Proposition~\ref{prop: SURE}] Note that \begin{align*} \overline{\cv}_\alpha &= \|Y - g*\varphi_{\alpha\sigma^2}(Y)\|_2^2 + \frac{2}{\sqrt\alpha}\EE{ \omega\tran g(Y+\sqrt\alpha\omega)\mid Y },\\ \mathrm{SURE}(g*\varphi_{\alpha\sigma^2}) &= \|Y - g*\varphi_{\alpha\sigma^2}(Y)\|_2^2 + 2\sigma^2\nabla\cdot (g*\varphi_{\alpha\sigma^2})(Y). \end{align*} It suffices to show that \begin{align*} \sigma^2 \nabla\cdot (g*\varphi_{\alpha\sigma^2})(Y) = \frac{1}{\sqrt\alpha}\EE{\omega\tran g(Y+\sqrt\alpha\omega)\mid Y }. \end{align*} This is true because \begin{align*} \nabla\cdot (g*\varphi_{\alpha\sigma^2})(y) &= \sum_{i=1}^n \nabla_{y_i} g_i*\varphi_{\alpha\sigma^2}(y)=\sum_{i=1}^n \nabla_{y_i} \int_{\R^n} g_i(x) \varphi_{\alpha\sigma^2}(y-x)\rd x\\ &=\sum_{i=1}^n \int_{\R^n} g_i(x)(-\frac{y_i-x_i }{\alpha\sigma^2}) \varphi_{\alpha\sigma^2}(y-x)\rd x\\ &=-\frac{1}{\alpha\sigma^2} \int g(x)\tran (y-x) \varphi_{\alpha\sigma^2}(y-x)\rd x\\ &=\frac{1}{\alpha\sigma^2} \int g(y+\ep)\tran \ep \varphi_{\alpha\sigma^2}(\ep)\rd \ep\\ &=\frac{1}{\alpha\sigma^2} \EE[\ep\sim\N(0,\alpha\sigma^2 I)]{g(y+\ep)\tran \ep }\\ &=\frac{1}{\sqrt{\alpha}\sigma^2} \EE[\omega\sim\N(0,\sigma^2I)] {g(y+\sqrt\alpha\omega)\tran\omega }.
\end{align*} \end{proof} \section{Proofs for Section~\ref{sec:glm}} \subsection{Proof of Theorem~\ref{thm: glm bias}} \label{prf: glm bias} \begin{proof}[Proof of Theorem~\ref{thm: glm bias}] The bias can be decomposed as \begin{align*} &|\EE{\cv_{n,\alpha}} - \PE_n(g)|\\ &\qquad= \Big |\EE{\frakA_n(T_n+\sqrt\alpha\omega)-\frakg_n(T_n+\sqrt\alpha\omega)\tran (T_n-\frac{1}{\sqrt\alpha}\omega) -\frakA_n(T_n) + \frakg_n(T_n)\tran m_n }\Big|\\ &\qquad \leq \Big|\EE{\frakA_n(T_n+\sqrt\alpha\omega)} - \EE{\frakA_n(T_n)}\Big| + \Big|\EE{\frakg_n(T_n+\sqrt\alpha\omega)\tran T_n - \frakg_n(T_n)\tran T_n } \Big| \\ &\qquad\qquad + \Big| \EE{\frac{1}{\sqrt\alpha}\omega\tran \frakg_n(T_n+\sqrt\alpha\omega) -\frakg_n(T_n)\tran (T_n - m_n) }\Big|.\numberthis\label{equ: glm bias prf 1} \end{align*} Because the density $q_n$ of $T_n$ satisfies Assumption~\ref{assump: log density q_n}, and $\frakg_n$, $\frakA_n$ are integrable w.r.t. $q_n$, we can apply Lemma~\ref{lem: L1} to show that, for $n\geq N_0$, \begin{align*} &\lim_{\alpha\to0}\EE{\frakA_n(T_n + \sqrt\alpha\omega) } = \EE{\frakA_n(T_n)},\\ &\lim_{\alpha\to0}\EE{\frakg_n(T_n + \sqrt\alpha\omega)\tran T_n } = \EE{\frakg_n(T_n)\tran T_n}. \end{align*} Thus, the first two terms in the upper bound~\eqref{equ: glm bias prf 1} vanish as $\alpha\to0$. It remains to show that \begin{align}\label{equ: glm bias prf 2} \lim_{n\to\infty}\lim_{\alpha\to0}\EE{\frac{1}{\sqrt\alpha}\omega\tran \frakg_n(T_n+\sqrt\alpha\omega) -\frakg_n(T_n)\tran (T_n - m_n) }=0. \end{align} Because $\frakg_n$ is weakly differentiable, we can write \begin{align*} \EE{\frac{1}{\sqrt\alpha}\omega\tran \frakg_n(T_n+\sqrt\alpha\omega)}=\EE{\frac{1}{\sqrt\alpha}\omega\tran \frakg_n(T_n)} + \EE{\omega\tran \int_0^1 \nabla \frakg_n(T_n+t\sqrt\alpha\omega) \rd t \omega}. \end{align*} The first expectation is 0. For the second term, Lemma~\ref{lem: L1 t} shows that \begin{align*} \lim_{\alpha\to0}\EE{\omega\tran \int_0^1 \nabla \frakg_n(T_n+t\sqrt\alpha\omega) \rd t \omega} = \EE{\omega\tran \nabla \frakg_n(T_n) \omega} =\EE{\nabla\cdot\frakg_n(T_n)}. \end{align*} By Assumption~\ref{assump: stein discrepancy}, we have \begin{align*} \lim_{n\to\infty}\EE{\nabla\cdot \frakg_n(T_n)} - \EE{\frakg_n(T_n)\tran (T_n-m_n) }=0. \end{align*} This proves Equation~\eqref{equ: glm bias prf 2}. \end{proof} \subsection{Proof of Theorem~\ref{thm: glm var}} \label{prf: glm var} \begin{proof}[Proof of Theorem~\ref{thm: glm var}] We write \begin{align*} \cv_{n,\alpha}=(\Rom{1}) + (\Rom{2}), \end{align*} where \begin{align*} (\Rom{1})&=\frac1{K}\sum_{k=1}^K\Big\{\mathfrak A_n( T_n+\sqrt\alpha\omega^{(k)}) - \mathfrak g_n(T_n + \sqrt\alpha\omega^{(k)} )\tran T_n \Big\} - n^{-1/2} \log h_n(Y_n)\\ (\Rom{2})&=\frac1K\sum_{k=1}^K \frac{1}{\sqrt\alpha}\frakg_n(T_n+\sqrt\alpha\omega^{(k)})\tran \omega^{(k)} \end{align*} Because $\frakA_n^2,\|\frakg_n\|_2^4$ are integrable under $q_n$, and $q_n$ satisfies Assumption~\ref{assump: log density q_n}, we can apply Lemma~\ref{lem: L1} with $f(y)=\frakA_n^2(y)$ and $f(y)=\|\frakg_n\|_2^4 $ to show that, when $n\geq N_0$, \begin{align*} \lim_{\alpha\to0}\EE{\Var{(\Rom{1}) \mid Y_n} }=0. 
\end{align*} For the second term, because $\frakg_n$ is weakly differentiable, we can write \begin{align*} (\Rom{2})&=\frac1K\sum_{k=1}^K \frac{1}{\sqrt\alpha}(\omega^{(k)})\tran \frakg_n(T_n) +\frac1K\sum_{k=1}^K \int_0^1 \omega^{(k)}\cdot \nabla\frakg_n(T_n+t\sqrt\alpha\omega^{(k)}) \rd t \omega^{(k)} \\ &=\frac1K\sum_{k=1}^K \int_0^1 \omega^{(k)}\cdot \nabla\frakg_n(T_n+t\sqrt\alpha\omega^{(k)}) \rd t \omega^{(k)}, \end{align*} where the second equality is due to the zero-sum constraint $\sum_{k=1}^K\omega^{(k)}=0$. The rest of the proof is similar to the proof of Theorem~\ref{thm: reducible variance}. \end{proof} \section{Technical lemmas} \begin{lemma}\label{lem: log p condition} Let $p$ be a continuous density on $\R^n$ with respect to the Lebesgue measure. Suppose there exist a constant $L>0$ such that for all $x,x'\in\R^n$, \begin{align*} |\log p(x)-\log p(x')|\leq L \|x-x'\|_2^2. \end{align*} Let $\varphi$ be the density of the standard normal distribution $\N(0,I_n)$ with respect to the Lebesgue measure. Then the following results hold: \begin{enumerate}[label=(\roman*)] \item Let $f:\R^n\to\R$ be $L_1$-integrable with respect to $p$. Let $h:\R^n\to\R$ be be integrable w.r.t. $\N(0,(1+\delta_0)I_n)$ for some $\delta_0>0$. Then there exists $\ep_0>0$, such that for all $\ep\in[0,\ep_0]$, \begin{align*} |h(\omega)| \varphi(\omega)\int_{\R^n} |f(x+\ep\omega)|p(x)\rd x \leq q(\omega), \end{align*} where $q$ is an $L_1$-integrable function on $\R^n$. \item Let $f_1,f_2$ be $L_2$-integrable functions with respect to $p$. Let $h:\R^n\to\R$ be integrable w.r.t. $\N(0,(1+\delta_0)I_n)$ for some $\delta_0>0$. Then there exists $\ep_0>0$, such that when $\ep_1,\ep_2\in[0,\ep_0]$, \begin{align*} |h(\omega)| \varphi(\omega)\int |f_1(x+\ep_1\omega)f_2(x+\ep_2\omega)| p(x)\rd x \leq q(\omega), \end{align*} where $q$ is an $L_1$-integrable function on $\R^n$. \end{enumerate} In particular, if $p$ is a Gaussian density, then $\log p$ is a quadratic function, so the condition on $\log p$ in the lemma is satisfied. \end{lemma} \begin{proof} By the assumption on $\log p$, we have \begin{align*} |\log p(x-\ep\omega) - \log p(x)|\leq L \ep^2\|\omega\|_2^2. \end{align*} Then \begin{align*} \frac{p(x-\ep\omega)}{p(x)}=\exp(\log p(x-\ep\omega) - \log p(x) )\leq \exp(L\ep^2 \|\omega\|_2^2 ). \end{align*} Thus, we obtain \begin{align*} \varphi(\omega)\frac{p(x-\ep\omega)}{p(x)}\leq \frac{1}{(2\pi)^{n/2}}\exp(-(\frac12 - L\ep^2 )\|\omega\|_2^2 ). \end{align*} Take $\ep_0= (\frac{\delta_0}{4L(1+\delta_0)} )^{1/2}$. When $\ep\leq \ep_0$, we have $\frac12- L\ep^2\geq \frac{1}{2}-L\ep_0^2=\frac{1+\delta_0/2}{2(1+\delta_0)} > \frac{1}{2(1+\delta_0)}$. Define $q(\omega):= |h(\omega)| \frac{1}{(2\pi)^{n/2}} \exp(-\frac{1}{2(1+\delta_0)}\|\omega\|_2^2 )$, which is an integrable function by the assumption. We obtain \begin{align*} |h(\omega)| \varphi(\omega) p(x-\ep\omega)&\leq |h(\omega)|\frac{1}{(2\pi)^{n/2}}\exp(-(\frac12 - L\ep^2 )\|\omega\|_2^2 )\cdot p(x)\\ &\leq |h(\omega)|\frac{1}{(2\pi)^{n/2}}\exp(-\frac{1}{2(1+\delta_0)} \|\omega\|_2^2 ) \cdot p(x)\\ &=q(\omega)p(x). \end{align*} Therefore, \begin{align*} |h(\omega) |\varphi(\omega)\int |f(x+\ep\omega)| p(x)\rd x &=|h(\omega) |\varphi(\omega)\int |f(x)| p(x-\ep\omega)\rd x\\ &=\int |f(x)| |h(\omega)| \varphi(\omega) p(x-\ep\omega)\rd x\\ &\leq \int |f(x)| q(\omega) p(x)\rd x\\ &=q(\omega)\cdot \int |f(x)| p(x)\rd x, \end{align*} which is integrable because $q$ is integrable and $\int |f(x)| p(x)\rd x<\infty$. This proves the first claim. 
Next, suppose $f_1^2,f_2^2$ are integrable functions with respect to $p$. From the first claim, there exists $\ep_0$, such that for all $\ep_1,\ep_2\in[0,\ep_0]$, \begin{align*} |h(\omega)| \varphi(\omega)\int f_1(x+\ep_1\omega)^2 p(x)\rd x\leq q_1(\omega),\\ |h(\omega)|\varphi(\omega)\int f_2(x+\ep_2\omega)^2 p(x)\rd x\leq q_2(\omega), \end{align*} where $q_1,q_2$ are integrable functions. By the Cauchy-Schwarz inequality, we have \begin{align*} &|h(\omega)| \varphi(\omega)\int |f_1(x+\ep_1\omega)f_2(x+\ep_2\omega) |p(x)\rd x\\ &\qquad\leq \left(|h(\omega)| \varphi(\omega) \int f_1(x+\ep_1\omega)^2 p(x)\rd x \right)^{1/2} \cdot \left(|h(\omega)| \varphi(\omega) \int f_2(x+\ep_2\omega)^2 p(x)\rd x \right)^{1/2}\\ &\qquad \leq q_1(\omega)^{1/2}q_2(\omega)^{1/2}, \end{align*} which is $L_1$ integrable, proving the second claim. \end{proof} \begin{lemma} \label{lem: L1} Let $p$ be a density on $\R^n$ satisfying the condition in Lemma~\ref{lem: log p condition}. Let $f:\R^n\to\R$ be a function that is integrable w.r.t. the density $p$. Let $h:\R^n\to\R$ be a function that is integrable w.r.t. $\N(0,(1+\delta_0)I_n)$ for some $\delta_0>0$. Then as $\alpha\downarrow 0$, \begin{align*} \EE{f(Y+\sqrt{\alpha}\omega ) h(\omega) \mid Y}\stackrel{L_1}{\rightarrow} f(Y)\EE{h(\omega)}, \end{align*} where the expectation is taken over $\omega\sim\N(0,I_n)$, and the $L_1$ convergence is with respect to $Y\sim p$. \end{lemma} \begin{proof} We start from the inequality \begin{align*} \Big |\int (f(y+\sqrt\alpha\omega)h(\omega) - f(y)h(\omega) ) \varphi(\omega)\rd\omega\Big| &\leq \int |f(y+\sqrt\alpha\omega) - f(y) |\cdot |h(\omega)| \varphi(\omega) \rd\omega. \end{align*} Multiplying by $p(y)$, integrating over $y$, and applying Fubini's theorem, we have \begin{align*} &\int \Big|\int (f(y+\sqrt\alpha\omega)h(\omega) -f(y)h(\omega) )\varphi(\omega)\rd \omega \Big| p(y)\rd y\\ &\qquad \leq \int\int |f(y+\sqrt\alpha\omega) - f(y) |\cdot |h(\omega)| \varphi(\omega)\rd\omega p(y)\rd y\\ &\qquad =\int |h(\omega)| \varphi(\omega) \int |f(y+\sqrt\alpha\omega)-f(y) | p(y)\rd y\rd\omega . \end{align*} By Lemma~\ref{lem: log p condition}, there exists $\ep_0>0$ such that, for all $\ep\in[0,\ep_0]$, \begin{align*} |h(\omega)| \varphi(\omega) \int |f(y + \ep \omega)| p(y)\rd y \leq \bar f(\omega), \end{align*} where $\bar f(\omega)$ is $L_1$ integrable. When $\sqrt\alpha\leq \ep_0$, we have \begin{align*} |h(\omega)|\varphi(\omega) \int |f(y+\sqrt\alpha\omega) - f(y)| p(y) \rd y&\leq 2 \bar f(\omega). \end{align*} Thus, $|h(\omega)|\varphi(\omega) \int |f(y+\sqrt\alpha\omega) - f(y)| p(y) \rd y$ is dominated by an integrable function. Moreover, $\lim_{\alpha\to 0} \int |f(y+\sqrt\alpha\omega)-f(y)| p(y)\rd y=0$ (Lebesgue differentiation theorem or mean continuity theorem; proof by the fact that compactly supported continuous functions are dense in $L_1$). Applying the dominated convergence theorem, we have \begin{align*} &\lim_{\alpha\to0}\int |h(\omega)| \varphi(\omega) \int |f(y+\sqrt\alpha\omega) -f(y)| p(y)\rd y\rd \omega \\ &\quad= \int|h(\omega)| \varphi(\omega) \lim_{\alpha\to0}\int |f(y+\sqrt\alpha\omega) -f(y)| p(y)\rd y\rd \omega\\ &\quad =0. 
\end{align*} \end{proof} \begin{lemma} \label{lem: L1 t} Under the same condition as in Lemma~\ref{lem: L1}, as $\alpha\downarrow 0$, we have \begin{align*} \EE{\int_0^1 f(Y+t\sqrt{\alpha}\omega ) \rd t\cdot h(\omega) \mid Y}\stackrel{L_1}{\rightarrow} f(Y) \EE{h(\omega)}, \end{align*} where the expectation is taken over $\omega\sim\N(0,I_n)$, and the $L_1$ convergence is with respect to $Y\sim p$. \end{lemma} \begin{proof} We start from \begin{align*} &\Big|\int_0^1 \int (f(y+t\sqrt\alpha\omega) h(\omega)-f(y)h(\omega) )\varphi(\omega)\rd \omega \rd t \Big| \\ &\qquad\leq \int_0^1\int |f(y+t\sqrt\alpha\omega)-f(y) |\cdot|h(\omega)| \varphi(\omega)\rd\omega\rd t. \end{align*} Integrating over $y$ and applying Fubini's theorem, we have \begin{align*} &\int \Big|\int_0^1 \int (f(y+t\sqrt\alpha\omega)h(\omega) -f(y)h(\omega) ) \varphi(\omega) \rd\omega \rd t \Big| p(y) \rd y \\ &\qquad\leq \int_0^1 \int |h(\omega)| \varphi(\omega) \int |f(y+t\sqrt\alpha\omega) -f(y)| p(y)\rd y\rd \omega\rd t . \end{align*} When $\sqrt\alpha<\ep_0$, $t\sqrt\alpha<\ep_0$ for all $t\in[0,1]$. Following a similar argument as in the proof of Lemma~\ref{lem: L1}, we have $|h(\omega)|\varphi(\omega) \int |f(y+t\sqrt\alpha\omega) - f(y)| p(y) \rd y\leq 2 \bar f(\omega)$, which is integrable. Moreover, $\lim_{\alpha\to 0} \int |f(y+t\sqrt\alpha\omega)-f(y)|p(y)\rd y=0$ (mean continuity theorem). By dominated convergence theorem, the last display converges to 0 as $\alpha\to0$. \end{proof} \begin{lemma} \label{lem: L1 t2} Suppose $p$ is a density satisfying the condition in Lemma~\ref{lem: log p condition}. Suppose $f_1(y)^2,f_2(y)^2$ are integrable functions w.r.t. $p$. Suppose $h$ is integrable w.r.t. $\N(0, (1+\delta_0)I_n)$ for some $\delta_0>0$. Then as $\alpha\downarrow0$, we have \begin{align*} \EE{\int_0^1\int_0^1 f_1(Y+t_1\sqrt{\alpha}\omega ) f_2(Y+t_2\sqrt{\alpha}\omega )\rd t_1\rd t_2 \cdot h(\omega) \mid Y}\stackrel{L_1}{\rightarrow} f_1(Y)f_2(Y)\EE{h(\omega)} , \end{align*} where the expectation is taken over $\omega\sim\N(0,I_n)$, and the $L_1$ convergence is with respect to $Y\sim p$. \end{lemma} \begin{proof} Because $f_1^2,f_2^2$ are integrable w.r.t. $p$, applying the second claim in Lemma~\ref{lem: log p condition}, there exists $\ep_0$ such that, when $\ep_1,\ep_2\in[0,\ep_0]$, \begin{align*} |h(\omega)|\varphi(\omega) \int |f_1(y+\ep_1\omega) f_2(y+\ep_2\omega)| p(y)\rd y\leq \bar f(\omega), \end{align*} where $\bar f(\omega)$ is integrable. Following the same argument as in the proof of Lemma~\ref{lem: L1 t}, we have \begin{align*} &\int \bigg|\int_0^1\int_0^1 \int (f_1(y+t_1\sqrt\alpha\omega)f_2(y+t_2\sqrt\alpha\omega)h(\omega) -f_1(y)f_2(y)h(\omega) ) \varphi(\omega) \rd\omega \rd t_1\rd t_2 \bigg| p(y) \rd y \\ &\quad\leq \int_0^1\int_0^1 \int |h(\omega)| \varphi(\omega) \int \big|f_1(y+t_1\sqrt\alpha\omega)f_2(y+t_2\sqrt\alpha\omega)-f_1(y)f_2(y)\big| p(y)\rd y\rd \omega\rd t_1 \rd t_2 . \end{align*} Following a similar argument as in the proof of Lemma~\ref{lem: L1 t}, when $\sqrt\alpha <\ep_0$, the function \begin{align*} &|h(\omega)|\varphi(\omega) \int |f_1(y+t_1\sqrt\alpha\omega)f_2(y+t_2\sqrt\alpha\omega) - f_1(y)f_2(y)|p(y) \rd y \end{align*} is dominated by an integrable function. Since $\lim_{\alpha\to 0} \int |f_1(y+t_1\sqrt\alpha\omega)f_2(y+t_2\sqrt\alpha\omega)-f_1(y)f_2(y)|p(y)\rd y=0$ (mean continuity theorem), the proof is completed by applying the dominated convergence theorem. 
\end{proof} \begin{lemma} \label{lem: gaussian quadratic covariance} Suppose $x,y\sim\N(0,I_n)$ and $\Cov{x,y}=\rho I_n$. For a matrix $A$, we have \begin{align*} \Cov{x\tran Ax, y\tran Ay}=\rho^2\Var{x\tran Ax}=\rho^2(\|A\|_F^2 + \tr(A^2) ). \end{align*} \end{lemma} \begin{proof} Let $x,z\iid \N(0,I_n)$ and $y=\rho x + \tilde\rho z$ where $\tilde \rho=\sqrt{1-\rho^2}$. Then $y\sim\N(0,I_n)$ and $\Cov{x,y}=\rho I_n$. We have \begin{align*} \EE{(x\tran Ax)(y\tran Ay)}&=\EE{(x\tran Ax)(\rho^2 x\tran Ax + \tilde\rho^2 z\tran Az ) }\\ &=\rho^2 \EE{(x\tran Ax)^2} + \tilde\rho^2 \EE{(x\tran Ax)}\EE{(z\tran Az)}\\ &=\rho^2 \EE{(x\tran Ax)^2} + \tilde\rho^2 \tr(A)^2. \end{align*} Note that \begin{align*} \EE{(x\tran Ax)^2}&=\EE{\sum_{i,j,k,l}A_{ij}A_{kl}x_ix_jx_kx_l }\\ &=\EE{\sum_{i} A_{ii}^2x_i^4 + \sum_{i\neq j}A_{ij}^2 x_i^2x_j^2 + \sum_{i\neq j} A_{ij}A_{ji}x_i^2x_j^2 + \sum_{i\neq j} A_{ii}A_{jj}x_i^2x_j^2} \\ &=3\sum_{i}A_{ii}^2 + \sum_{i\neq j}A_{ij}^2 + \sum_{i\neq j}A_{ij}A_{ji} + \sum_{i\neq j}A_{ii}A_{jj}\\ &=\sum_{i,j}A_{ij}^2 + \sum_{i,j}A_{ij}A_{ji} + \sum_{i,j}A_{ii}A_{jj}\\ &=\tr(A\tran A) + \tr(AA) + \tr(A)^2. \end{align*} Moreover, $\EE{x\tran Ax}=\tr(A)$. So we have \begin{align*} \Cov{x\tran Ax, y\tran Ay}&=\rho^2\EE{(x\tran Ax)^2} + \tilde\rho^2\tr(A)^2 - \tr(A)^2\\ &=\rho^2(\tr(A\tran A) + \tr(AA) + \tr(A)^2) + \tilde\rho^2\tr(A)^2 - \tr(A)^2\\ &=\rho^2(\tr(A\tran A) + \tr(AA))\\ &=\rho^2\Var{x\tran Ax}. \end{align*} \end{proof} \end{document}
2412.14709v1
http://arxiv.org/abs/2412.14709v1
Minimal rank of primitively $n$-universal integral quadratic forms over local rings
\documentclass[11pt]{amsart} \usepackage{amssymb, amsmath, latexsym, bm, mathrsfs} \usepackage{multirow, xcolor, float} \usepackage[shortlabels]{enumitem} \usepackage{tasks} \usepackage[numbers,sort&compress]{natbib} \usepackage{soul} \sodef\lightspacing{}{.09em}{.2em plus .2em}{.5em plus .1em minus .1em} \renewcommand{\baselinestretch}{\baselinestretch} \renewcommand{\baselinestretch}{1.1} \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{rmk}[thm]{Remark} \newtheorem{exam}[thm]{Example} \newcommand{\gen}{\operatorname{gen}} \newcommand{\spn}{\operatorname{spn}} \newcommand{\ord}{\operatorname{ord}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\n}{\mathbb{N}} \newcommand{\z}{\mathbb{Z}} \newcommand{\q}{\mathbb{Q}} \newcommand{\f}{\mathbb{F}} \newcommand{\lglue}{\bm[\!\![} \newcommand{\rglue}{]\!\!\bm]} \newcommand{\cls}{\operatorname{cls}} \newcommand{\oline}{\overline} \newcommand{\well}{\widetilde{\ell}} \newcommand{\fs}{\mathfrak s} \newcommand{\ring}{\mathfrak o} \newcommand{\ideal}{\mathfrak a} \newcommand{\vol}{\mathfrak v} \newcommand{\norm}{\textnormal{Nr}} \newcommand{\trace}{\textnormal{Tr}} \newcommand{\p}{\mathfrak p} \newcommand{\qq}{\mathfrak q} \newcommand{\pr}{\textnormal{pr}} \newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)} \newcommand{\ind}{\operatorname{ind}} \newcommand{\bbA}{\mathbb{A}} \newcommand{\bbH}{\mathbb{H}} \newcommand{\kn}{\mathfrak{n}} \newcommand{\ko}{\mathfrak{o}} \newcommand{\kp}{\mathfrak{p}} \newcommand{\ks}{\mathfrak{s}} \newcommand{\smod}[1]{\ \mathrm{mod}\ #1} \newcommand{\spmod}[1]{\ (\mathrm{mod}\ #1)} \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} \newcommand{\ang}[1]{\left\langle #1 \right\rangle} \begin{document} \title[Minimal rank of P$n$U quadratic forms over local fields]{Minimal rank of primitively $n$-universal integral quadratic forms over local rings} \author{Byeong-Kweon Oh and Jongheun Yoon} \address{Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University, Seoul 08826, Korea} \email{[email protected]} \thanks{This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT)(RS-2024-00342122) and (NRF-2020R1A5A1016126). } \address{Department of Mathematical Sciences, Seoul National University, Seoul 08826, Korea} \email{[email protected]} \subjclass[2020]{Primary 11E08, 11E20} \keywords{primitively $n$-universal integral quadratic forms} \begin{abstract} Let $F$ be a local field and let $R$ be its ring of integers. For a positive integer $n$, an integral quadratic form defined over $R$ is called primitively $n$-universal if it primitively represents all quadratic forms of rank $n$. It was proved in \cite{EG21rj} that the minimal rank of primitively $1$-universal quadratic forms over the $p$-adic integer ring $\z_p$ is $2$ if $p$ is odd, and $3$ otherwise. In this article, we completely determine the minimal rank of primitively $n$-universal quadratic forms over $R$ for any positive integer $n$ and any local ring $R$ such that $2$ is a unit or a prime. 
\end{abstract} \maketitle \section{Introduction} \label{sec:intro} In $1770$, Lagrange proved that the quadratic form $x^2 + y^2 + z^2 + w^2$ represents all positive integers, that is, the diophantine equation \[ x^2 + y^2 + z^2 + w^2=n \] has an integer solution $x,y,z$, and $w$ for any positive integer $n$. Any (positive definite and integral) quadratic form having this property is called \emph{universal}, as introduced by Dickson and Ross. The concept of universality has been generalized in many directions, including the $n$-universality and the primitive $n$-universality over various rings. Among those, the primitive $n$-universality over local rings is of particular interest in this article, whose precise definition will be given in what follows. Let $F$ be a local field at place $\kp$ and let $R$ be the ring of integers in $F$. We generally assume that the characteristic of $F$ is not two. A quadratic form of rank $n$ defined over $F$ is a quadratic homogeneous polynomial \[ f(x_1, \dotsc, x_n) = \sum_{i, j = 1}^n f_{ij}x_i x_j \qquad \text{($f_{ij} = f_{ji}\in F$),} \] where the corresponding symmetric matrix $M_f = (f_{ij})$, which is called the Gram matrix of $f$, is nondegenerate. We say that $f$ is classically integral if the Gram matrix $M_f$ of $f$ is an integral matrix, that is, $f_{ij} = f_{ji} \in R$ for any $i$, $j$. Whereas, $f$ is called non-classically integral if $f$ is an integral polynomial, that is, $f_{ii}$ and $2f_{ij}$ are in $R$ for any $i$, $j$. For two (integral) quadratic forms $f$ and $g$ of rank $n$ and $m$, respectively, we say that $f$ is represented by $g$ over $R$ if there is an integral matrix $T\in M_{n,m}(R)$ such that \[ M_f = T^t M_g T\text. \] We say that $f$ is isometric to $g$ if the above integral matrix $T$ is invertible. We say that $f$ is primitively represented by $g$ if the above integral matrix $T$ can be extended to an invertible matrix in $GL_m(R)$ by adding suitable $(m-n)$ columns. In particular, an integer $k \in R$ is primitively represented by $g$ if and only if there are integers $x_1$, \dots, $x_m\in R$ such that \[ k = g(x_1, \dotsc, x_m) \text{\, and at least one of $x_i$ is a unit.} \] For a positive integer $n$, an integral quadratic form is called (\emph{primitively}) $n$-\emph{universal} if it (primitively, respectively) represents all integral quadratic forms of rank $n$. A complete classification of $n$-universal quadratic forms is recently published in \cite{He23}, \cite{HH23}, and \cite{HHX23}. Finding primitively $1$-universal quadratic forms was first considered, as far as the authors know, by Budarina in \cite{Bu10}. She classified all primitively $1$-universal quaternary quadratic forms when $R = \z_p$ for any odd prime $p$, and also obtained some partial classification when $R = \z_2$. Recently, Earnest and Gunawardana provided a complete classification of all primitively $1$-universal quadratic forms over $\z_p$ for any prime $p$, including results for both classically integral and non-classically integral forms when $p=2$. Budarina also classified in \cite{Bu11} primitively $2$-universal quadratic forms having squarefree odd discriminant. The subsequent discussion will be conducted in a more suitable language of quadratic spaces and lattices. A quadratic $F$-space, simply a space, is a finite dimensional vector space $V$ over $F$ equipped with a symmetric bilinear form \[ B : V\times V\to F\text, \] and the corresponding quadratic form $Q: V\to F$ is defined by $Q(v) = B(v, v)$ for any $v\in V$. 
A quadratic $R$-lattice, simply a lattice, is a finitely generated $R$-submodule of a quadratic $F$-space. The scale $\ks L$ of a quadratic $R$-lattice $L$ is the $R$-submodule $B(L, L)$ of $F$, and the norm $\kn L$ of $L$ is the $R$-submodule of $F$ generated by $Q(L) = \{ Q(v) \mid v\in L\}$. An $R$-lattice $L$ is called integral if $\ks L\subseteq R$. Throughout this article, we always assume that any lattice is integral, unless stated otherwise. An (integral) $R$-lattice $L$ is called even if $\kn L\subseteq 2R$. Given a basis $e_1, \dotsc, e_n$ for a space $V$, the corresponding symmetric $n\times n$ matrix $M_V = (B(e_i, e_j))$ is called the Gram matrix of $V$, and in this case, we write \[ V\cong (B(e_i, e_j))\quad\text{in}\quad e_1, \dotsc, e_n\text. \] The discriminant $dV$ of a space $V$ is an element of the quotient monoid $F / (F^\times)^2$ defined by $dV = (\det M_V)(F^\times)^2$. We call a space $V$ nondegenerate if $M_V$ is nondegenerate. If a space $V$ has the zero Gram matrix, then $V$ is called totally isotropic. For a nondegenerate space $V$, all maximal totally isotropic subspace of $V$ has the same dimension. Such a dimension is denoted by $\ind V$, called the Witt index of $V$. One may easily show that if $\ind V\ge r$, then $V$ is orthogonally split by a $2r$-dimensional hyperbolic space. Given a basis $e_1, \dotsc, e_n$ for a lattice $L$, the corresponding symmetric $n\times n$ matrix $M_L = (B(e_i, e_j))$ is called the Gram matrix of $L$, and in this case, we write \[ L\cong (B(e_i, e_j))\quad\text{in}\quad e_1, \dotsc, e_n\text. \] The discriminant $dL$ of a lattice $L$ is an element of the quotient monoid $F / (R^\times)^2$ defined by $dL = (\det M_L)(R^\times)^2$. We call a lattice $L$ nondegenerate if $M_L$ is nondegenerate. Throughout this article, we always assume that a lattice is nondegenerate, unless stated otherwise. For a symmetric $n\times n$ matrix $N$ over $R$, we let $\langle N\rangle$ (or simply $N$) stand for any $R$-lattice whose Gram matrix is $N$. Moreover, for $\alpha_1$, \dots, $\alpha_n\in R$, the notation \[ \langle \alpha_1, \dotsc, \alpha_n \rangle \] stands for the $R$-lattice having a diagonal Gram matrix $\diag(\alpha_1, \dotsc, \alpha_n)$. Let $\Delta = 1 - 4\rho$ be a fixed nonsquare unit in $R$ such that $\rho\in R^\times$. The symbols $\mathbb{H}$ and $\mathbb{A}$ stand for binary $R$-lattices whose Gram matrices are \[ \mathbb{H}\cong \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right) \quad\text{and}\quad \mathbb{A}\cong \left(\begin{smallmatrix}2&1\\1&2\rho\end{smallmatrix}\right), \ \text{respectively.} \] In fact, $\mathbb{H}$ is the isotropic even unimodular lattice, and $\mathbb{A}$ is the anisotropic even unimodular lattice. Scalings of $\bbH$ and $\bbA$ by an integer $\alpha\in R$ are denoted by $\alpha\bbH\cong \left(\begin{smallmatrix} 0 & \alpha \\ \alpha & 0\end{smallmatrix}\right)$ and $\alpha\bbA\cong \left(\begin{smallmatrix} 2\alpha & \alpha \\ \alpha & 2\rho\alpha \end{smallmatrix}\right)$. For a positive integer $n$, $\bbH^n$ denotes the lattice $\bbH \mathbin{\perp} \dotsb \mathbin{\perp} \bbH$ with rank $2n$. We also use the same symbol $\mathbb{H}$ which stands for the hyperbolic plane $F\mathbb{H}$, if no confusion arises. For two $R$-lattices $L$ and $M$, a representation from $L$ to $M$ is a linear map $\sigma : L \to M$ such that \[ B(\sigma(v), \sigma(v')) = B(v, v') \] for any $v$, $v'\in L$. 
In addition, if $\sigma(L)$ is a direct summand of $M$, then such a representation is called \emph{primitive}. If there exists a (primitive) representation $\sigma : L \to M$, then we say that $L$ is (primitively, respectively) represented by $M$. If such a representation $\sigma$ is bijective then we say that $L$ is isometric to $M$, denoted by $\sigma: L\cong M$. An $R$-lattice $M$ is called (\emph{primitively}) $n$-\emph{universal} if $M$ (primitively, respectively) represents all $R$-lattices of rank $n$. We denote by $u_R(n)$ the minimal rank of $n$-universal $R$-lattices, and denote by $u^\ast_R(n)$ the minimal rank of primitively $n$-universal (P$n$U, for short) $R$-lattices. For $u_R(n)$, the followings are well-known (see, for instance, \cite{He23, HH23, HHX23}): If $R$ is nondyadic, then \[ u_R(n) = \begin{cases} 2n & \text{if $1\le n\le 2$,}\\ n+3 & \text{if $n\ge 3$;} \end{cases} \] If $R$ is unramified dyadic, then \[ u_R(n) = \begin{cases} 2n+1 & \text{if $1\le n\le 2$,}\\ n+3 & \text{if $n\ge 3$.} \end{cases} \] The aim of this article is to determine $u^\ast_R(n)$ completely for any integer $n$ and any local ring $R$ such that $2$ is a unit or a prime. Let $I$ be an ideal in $R$. For any $\alpha$, $\beta\in R$, the congruence $\alpha \equiv \beta \spmod{I}$ takes on its usual meaning, namely $\alpha - \beta\in I$. For any $\gamma\in R$, we write \[ \alpha \equiv \beta \pmod{\gamma} \] instead of $\alpha \equiv \beta \spmod{\gamma R}$. Moreover, for any subset $S$, $T$ of $R$, the congruence $S \equiv T \spmod{I}$ means that $\alpha \equiv \beta \spmod{I}$ for any $\alpha\in S$, $\beta\in T$. For instance, we have $(\z_2^\times)^2 \equiv 1\spmod8$. Any unexplained notation and terminology can be found in \cite{Ki} or \cite{OM}. \section{Witt indices of spaces admitting a P$n$U lattice} \label{sec:general} In this section, we prove some necessary condition for a quadratic space having a primitively $n$-universal $R$-lattice. More precisely, we prove that if $M$ is a primitively $n$-universal $R$-lattice, then the space $FM$ represents a $2n$-dimensional hyperbolic space. In particular, we have $u^\ast_R(n)\ge 2n$ for any positive integer $n$. In fact, the result follows from an easy application of a multivariable version of Hensel's lemma given below. Let $K$ be a complete field with respect to an absolute value $\abs{\cdot}$ satisfying the strong triangle inequality, and let $\ko := \{x\in K: \abs{x}\le 1\}$. Let $n\ge 1$ and define a norm of $\mathbf{c} = (c_1, \dots, c_n)\in K^n$ by $\lVert \mathbf{c} \rVert := \max_i \abs{c_i}$. Denote the derivative matrix and Jacobian of $\mathbf{f} = (f_1(\mathbf{X}), \dots, f_n(\mathbf{X})) \in K[X_1, \dots, X_n]^n$ by \[ (D\mathbf{f})(\mathbf{X}) = \left(\frac{\partial f_i}{\partial X_j}\right)_{1\le i, j\le n} \quad \text{and} \quad J_\mathbf{f}(\mathbf{X}) = \det((D\mathbf{f})(\mathbf{X})). \] \begin{thm} \label{con1} Let $\mathbf{f} \in \ko[\mathbf{X}]^n$ and let $\mathbf{a} \in \ko^n$ satisfy $\lVert \mathbf{f}(\mathbf{a}) \rVert < \abs{J_\mathbf{f}(\mathbf{a})}^2$. Then there is a unique $\bm{\alpha}\in\ko^n$ such that $\mathbf{f}(\bm{\alpha}) = \mathbf{0}$ and $\lVert \bm{\alpha} - \mathbf{a} \rVert < \abs{J_\mathbf{f}(\mathbf{a})}$. \end{thm} \begin{proof} See Theorem 3.3 of \cite{Co}. 
\end{proof} \begin{thm} \label{con2} For $m\ge n\ge 1$, let $\mathbf{f}(\mathbf{X}) = (f_1, \dots, f_n)\in \ko[X_1, \dots, X_m]^n$ and $\mathbf{a} = (a_1, \dots, a_m)\in \ko^m$ satisfy $\lVert \mathbf{f}(\mathbf{a}) \rVert < \abs{J_{\mathbf{f}, n}(\mathbf{a})}^2$, where $J_{\mathbf{f}, n}(\mathbf{a}) = \det\left(\frac{\partial f_i}{\partial X_j}\right)_{1\le i, j\le n}$. Then there is an $\bm{\alpha}\in\ko^n$ such that \[ \mathbf{f}(\alpha_1, \dots, \alpha_n, a_{n+1}, \dots, a_m) = \mathbf{0} \] and $\abs{\alpha_i - a_i} < \abs{J_{\mathbf{f},n}(\mathbf{a})}$ for $i = 1, \dots, n$. \end{thm} \begin{proof} See Theorem 3.8 of \cite{Co}. \end{proof} \begin{lem} For $m\ge n\ge 1$, let $G = (g_{ij})$ and $H$ be symmetric matrices over $\ko$ of size $n$ and $m$, respectively, and let $A = (\mathbf{a}_1, \dots, \mathbf{a}_n) = (a_{ij})_{m\times n}$ be a matrix over $\ko$ such that $G = A^t H A$. Suppose that $H$ is nondegenerate and $A$ is primitive. Then, for any $(\gamma_1, \dots, \gamma_n)\in\ko^n$ satisfying \[ \max_{1\le i\le n} \abs{g_{in} - \gamma_i} < 4\abs{\det H}^2\text{,} \] there is an $\bm{\alpha}\in\ko^m$ such that $A_1 = (\mathbf{a}_1, \dots, \mathbf{a}_{n-1}, \bm{\alpha})$ is also primitive, and $(g'_{ij}) = A_1^t H A_1$ satisfies \[ g'_{ij} = \begin{cases} \gamma_j & \text{if }i = n\text{,}\\ \gamma_i & \text{if }j = n\text{,}\\ g_{ij} & \text{otherwise.} \end{cases} \] \end{lem} \begin{proof} We regard\, $g_{1n} - \gamma_1$, \,\dots, \,$g_{n-1, n} - \gamma_{n-1}$,\, and\, $g_{nn} - \gamma_n$ as $n$ polynomials in $m$ variables $a_{1n}, \dots, a_{mn}$. Note that $g_{in} - \gamma_i = g_{ni} - \gamma_i$ for $1\le i\le n$. To apply Theorem \ref{con2}, it suffices to show that the derivative matrix has an $n\times n$ subdeterminant whose absolute value is greater than or equal to $2\abs{\det H}$. First, note that the derivative matrix is $\diag(1, \dots, 1, 2) A^t H$. Hence, it suffices to show that $A^t H$ has an $n\times n$ subdeterminant whose absolute value is greater than or equal to $\abs{\det H}$. By the primitivity of $A$, we may complete $A^t$ to an element of $GL_m(\ko)$, namely \[ U = \begin{pmatrix}A^t \\\hline C\end{pmatrix}\text{.} \] By the theory of modules over a principal ideal domain, there is a $V\in GL_m(\ko)$ such that $U H V = T = (t_{ij})_{m\times m}$ is lower triangular. Clearly, $A^t H V = (t_{ij})_{n\times m}$ has a submatrix (the leftmost one) whose determinant is $d = \prod_{i=1}^n t_{ii}$. Then, \[ \abs{d} = \abs{\prod_{i=1}^n t_{ii}} \ge \abs{\prod_{i=1}^m t_{ii}} = \abs{\det UHV} = \abs{\det H}\text. \] Now, observe that $d$ is a linear combination of $n\times n$ subdeterminants of $A^t H$. Hence, there exists at least one with absolute value greater than or equal to $\abs{\det H}$. This proves the lemma. \end{proof} From now until the end of this section, we do not assume that a lattice is nondegenerate. \begin{cor} Let $M$ be a nondegenerate $R$-lattice. Let $N$ and $N'$ be $R$-lattices of the same rank $n$ with Gram matrices $G$, $G'$, respectively, such that $G - G' \in M_n (4(dM)^2 \kp)$. Then $N$ is primitively represented by $M$ if and only if $N'$ is primitively represented by $M$. \end{cor} \begin{cor} Let $M$ be a nondegenerate $R$-lattice.
Then the following are equivalent: \begin{enumerate}[\textup{(\arabic{enumi})}] \item $\ind FM \ge n$; \item some lattice $N$ of rank $n$ with $\ks N \subseteq 4(dM)^2 \kp$ is primitively represented by $M$; \item all lattices $N$ of rank $n$ with $\ks N \subseteq 4(dM)^2 \kp$ are primitively represented by $M$. \end{enumerate} \end{cor} \begin{thm}\label{thm:indgen} If $M$ is a primitively $n$-universal nondegenerate $R$-lattice, then $\ind FM \ge n$. In particular, $u^\ast_R(n)\ge 2n$. \end{thm} \begin{proof} This is a direct consequence of the corollary given above. \end{proof} \begin{cor}\label{cor:corindgen} Let $M$ be a primitively $n$-universal $R$-lattice of rank $m\ge 2n$. We have the following: \noindent \textup{(a)} If $m = 2n$, then $FM$ is hyperbolic. \noindent \textup{(b)} If $m = 2n+1$, then $FM \cong \bbH^n\mathbin{\perp}\ang{(-1)^n dM}$. \noindent \textup{(c)} If $m = 2n+2$ and $dM = (-1)^{n+1}$, then $FM$ is hyperbolic. \end{cor} As a direct consequence of the above corollary, any primitively $n$-universal $R$-lattice of rank $2n$ is an $n$-universal $R$-lattice on the hyperbolic space $\bbH^n$. Furthermore, any primitively $n$-universal $R$-lattice $M$ of rank $2n+1$ is an $n$-universal $R$-lattice on the space $\bbH^n\mathbin{\perp}\ang{(-1)^n dM}$. \begin{rmk} Suppose that an $R$-lattice $M$ has a Jordan splitting $M = M_1 \mathbin{\perp} \dotsb \mathbin{\perp} M_r$, where $R \supseteq \ks M_1 \supsetneq \cdots \supsetneq \ks M_r$. It can be shown that if $M$ primitively represents an $R$-lattice $N$ of rank $n$ with $\ks N \subsetneq \ks M_r$, then $\rank M\ge 2n$ \textup{(for the proof, see \cite[Lemma~5.5]{CO23})}. \end{rmk} \section{Minimal rank of P$n$U lattices over nondyadic local rings} \label{sec:podd} In this section, we prove that $u^\ast_R(n) = 2n$ for any nondyadic local ring $R$ and any positive integer $n$. Furthermore, it is shown that, up to isometry, there exist exactly one primitively $2$-universal lattice of rank $4$ and exactly two primitively $3$-universal lattices of rank $6$ over any nondyadic local ring. Recall that $\bbH\cong \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ is the even unimodular $R$-lattice on the hyperbolic plane. Let $x, y$ be a basis for $\bbH$ such that $\bbH\cong \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ in this basis. Then, $Q(\alpha x + \beta y) = 2\alpha\beta$ for any $\alpha$, $\beta\in R$. Now, the proof of the following lemma is quite straightforward. \begin{lem}\label{lem:easypnu} \textup{(a)} The $R$-lattice $\bbH$ primitively represents all even integers. In particular, if $R$ is nondyadic, then $\bbH$ is primitively $1$-universal. \noindent \textup{(b)} If an $R$-lattice $J$ primitively represents a lattice $K$ of rank $k$, then $\bbH\mathbin{\perp} J$ primitively represents all lattices of rank $k+1$ of the form $\ang{\alpha}\mathbin{\perp} K$ for any $\alpha\in 2R$. In particular, if $R$ is nondyadic, then $\bbH^n$ is primitively $n$-universal. \end{lem} The above lemma shows that, for any nondyadic local ring $R$, $\bbH^n$ is a primitively $n$-universal $R$-lattice of rank $2n$. By combining it with Corollary~\ref{cor:corindgen}, we conclude that the minimal rank of primitively $n$-universal quadratic $R$-lattices is exactly $2n$. \begin{prop} Let $R$ be a nondyadic local ring and let $\pi\in R$ be a prime. \noindent \textup{(a)} Any primitively $2$-universal $R$-lattice of rank $4$ is isometric to $\bbH^2$.
\noindent \textup{(b)} Any primitively $3$-universal $R$-lattice of rank $6$ is isometric to either $\bbH^3$ or $\bbH^2\mathbin{\perp} \pi\bbH$. \end{prop} \begin{proof} (a) According to \cite[Proposition~3.3]{HHX23} and Corollary~\ref{cor:corindgen}, any $2$-universal quaternary lattice on the hyperbolic space $\bbH^2$ is isometric to $I_4\cong \bbH^2$. (b) According to \cite[Proposition~3.4]{HHX23} and Corollary~\ref{cor:corindgen}, any $3$-universal senary lattice on the hyperbolic space $\bbH^3$ is isometric to either $\bbH^3$ or $\bbH^2\mathbin{\perp} \pi\bbH$. It can be easily shown that $\bbH^2\mathbin{\perp} \pi\bbH$ is also primitively $3$-universal. \end{proof} \section{Minimal rank of P$n$U non-classically integral lattices over unramified dyadic local rings} \label{sec:peven} Hereafter, we assume that $R$ is unramified dyadic. Recall that an $R$-lattice $L$ is called non-classically integral if $\kn L\subseteq R$. A non-classically integral lattice is called primitively $n$-universal if it primitively represents all non-classically integral lattices of rank $n$. In this section, we prove that the minimal rank of primitively $n$-universal non-classically integral lattices is $2n$. Furthermore, it can be shown that, up to isometry, there exists exactly one primitively $2$-universal non-classically integral lattice of rank $4$. Clearly, the study of primitively $n$-universal non-classically integral lattices is equivalent to the study of even lattices that primitively represent all even lattices of rank $n$. The following is a supplement of Lemma \ref{lem:easypnu} for even lattices over unramified dyadic local ring $R$. \begin{lem}\label{lem:easypnueven} \textup{(a)} If an $R$-lattice $J$ is isotropic, then $\bbH\mathbin{\perp} J$ primitively represents all binary lattices of the form $2^a \bbH$ for any nonnegative integer $a$. \noindent \textup{(b)} If an $R$-lattice $J$ primitively represents $2^{a+1}\epsilon$ for a nonnegative integer $a$ and a unit $\epsilon\in R$, then $\bbH\mathbin{\perp} J$ primitively represents all binary lattices of the form $2^a \bbH$ and $2^a \bbA$. In particular, $\bbH^2$ primitively represents all binary lattices of the form $2^a \bbH$ and $2^a \bbA$ for any nonnegative integer $a$. Hence, $\bbH^n$ primitively represents all even lattices of rank $n$. \end{lem} \begin{proof} We let $x, y$ be a basis for $\bbH$ such that $Q(\alpha x + \beta y) = 2\alpha\beta$ for any $\alpha$, $\beta\in R$. (a) Since $J$ is isotropic, there is a primitive vector $z\in J$ with $Q(z) = 0$. Then, we have $R[x, 2^a y + z] \cong 2^a \bbH$. (b) Let $z\in J$ be a primitive vector with $Q(z) = 2^{a+1}\epsilon$. Then, we have \[ R[\epsilon^{-1} x, - x + 2^a\epsilon y + z] \cong 2^a \bbH \quad\text{and}\quad R[x + 2^a\epsilon y, x + z] \cong 2^a\epsilon \bbA\text. \] Note that $2^a\epsilon \bbA \cong 2^a \bbA$ by \cite[93:11]{OM}. These facts combined with Lemma~\ref{lem:easypnu} imply the lemma. \end{proof} The above lemma shows that $\bbH^n$ is an $R$-lattice of rank $2n$ that primitively represents all even lattices of rank $n$. By combining it with Corollary~\ref{cor:corindgen}, we conclude that the minimal rank of even $R$-lattices that primitively represents all even $R$-lattices is exactly $2n$. \begin{prop} Let $R$ be an unramified dyadic local ring. Any even $R$-lattice of rank $4$ that primitively represents all even $R$-lattices of rank $2$ is isometric to $\bbH^2$. 
\end{prop} \begin{proof} According to \cite[Theorem~1.2]{HH23} and Corollary~\ref{cor:corindgen}, any lattice on the hyperbolic space $\bbH^2$ that represents all even lattices of rank $2$ is isometric to $\bbH^2$. \end{proof} \section{Minimal rank of P$n$U classically integral lattices over unramified dyadic local rings $R\ne\z_2$} \label{sec:pnotz2} In this section, we continue to assume that the local ring $R$ is unramified dyadic, and we prove that \[ \text{If\, $R \ne \z_2$,\, then\, $u^\ast_R(2) = 5$\, and\, $u^\ast_R(n) = 2n$\, for\, $n\ge 3$.} \] The following are supplements of Lemma \ref{lem:easypnu} for general lattices over $R$. We mean by $Q^\ast(L)$ the set of all $Q(v)$'s, where $v$ runs over all primitive vectors in $L$. \begin{lem}\label{lem:1-1} Let $L\cong\ang{1, -1}$ be a binary $R$-lattice. Then, we have \[ Q(L) = R^\times \cup 4R\text, \qquad Q^\ast(L) = \begin{cases} Q(L) & \text{if $R\ne \z_2$,}\\ R^\times \cup 8R & \text{if $R = \z_2$.} \end{cases} \] \end{lem} \begin{proof} Define $g(x, y) = x(x+2y)$. Since $\ang{1, -1} \cong \left(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right)$, we have \begin{align*} Q(L) & = \left\{g(x, y) \mathrel{}\middle|\mathrel{} (x, y)\in R^2\right\}\text,\\ Q^\ast (L) & = \left\{g(x, y) \mathrel{}\middle|\mathrel{} (x, y)\in R^2\text{ is primitive}\right\}\text. \end{align*} If $x$ is a unit in $R$, then so is $g(x, y)$. If $x$ is even, then $g(x, y)$ is divisible by $4$. Hence, $Q(L)\subseteq R^\times\cup 4R$. Let $\epsilon$ be any unit. Then, there exists a unit $\eta$ such that $\eta^2 = \epsilon - 2\alpha$ for some $\alpha\in R$, which implies that $g(\eta, \eta^{-1}\alpha) = \epsilon$. For any nonnegative integer $a$, note that $g(2^{a+2}, \epsilon - 2^{a+1}) = 2^{a+3}\epsilon$. Suppose that $R = \z_2$. In this case, it remains to show that $Q^\ast(L)\cap 4 R^\times = \varnothing$. Assume that $g(x, y)\in 4R^\times$. Since $x$ must be even, we have $y\in R^\times$. Furthermore, since $y\equiv 1\spmod2$, this implies that $g(x, y)$ is divisible by $8$, which is absurd. Now, suppose that $R \ne \z_2$. In this case, it remains to show that $4R^\times \subseteq Q^\ast(L)$. There exists a unit $\eta$ such that $\eta^2 \not\equiv \epsilon\spmod2$. Then, $g(2\eta, \eta^{-1}\epsilon - \eta) = 4\epsilon$. This completes the proof. \end{proof} \begin{lem}\label{lem:easy2pnu} \textup{(a)} For any unit $\epsilon\in R$, $\bbH\mathbin{\perp}\ang{\epsilon}$ is isometric to $\ang{1, -1, \epsilon}$. Hence, $\bbH\mathbin{\perp}\ang{\epsilon}$ primitively represents all binary $R$-lattices of the form $\ang{\alpha, \epsilon}$ for any $\alpha\in R$. In particular, $\bbH\mathbin{\perp}\ang{\epsilon}$ is primitively $1$-universal. \noindent \textup{(b)} If an $R$-lattice $J$ primitively represents an $R$-lattice $K$ of rank $k$ as well as a unit in $R$, then $\bbH^n \mathbin{\perp} J$ primitively represents all $R$-lattices of rank $n+k$ of the form $\ell\mathbin{\perp} K$ for any $R$-lattice $\ell$ of rank $n$. In particular, $\bbH^n \mathbin{\perp} J$ is primitively $n$-universal. \end{lem} \begin{proof} (a) The isometry $\bbH\mathbin{\perp}\ang{\epsilon} \cong \ang{1, -1, \epsilon}$ follows from \cite[93:16]{OM}. Now, apply Lemma~\ref{lem:1-1}. (b) If $J$ primitively represents $\epsilon\in R^\times$, then $J \cong \ang{\epsilon}\mathbin{\perp} J'$ for some $R$-lattice $J'$. Now, apply (a) inductively. \end{proof} By Lemma~\ref{lem:easy2pnu}, $\bbH^n\mathbin{\perp}\ang{\epsilon}$ is primitively $n$-universal for any unit $\epsilon$ in $R$.
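The dependence on the residue field in Lemma~\ref{lem:1-1} can be seen concretely on the element $4$. Over $\z_2$, the lemma shows that $4$ admits no primitive representation by $\ang{1,-1}$. In contrast, if $R = \z_2[\omega]$ with $\omega^2 + \omega + 1 = 0$ is the ring of integers of the unramified quadratic extension of $\q_2$, then \[ 4 = 1^2 - (2\omega+1)^2, \] since $(2\omega+1)^2 = 4(\omega^2+\omega) + 1 = -3$, and both $1$ and $2\omega+1$ are units of $R$; hence $4\in Q^\ast(\ang{1,-1})$ in this case. This contrast is one of the reasons why the case $R = \z_2$, treated in Section~\ref{sec:pz2}, behaves differently.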
Combining Lemma~\ref{lem:easy2pnu} with Corollary~\ref{cor:corindgen}, we conclude that $2n\le u^\ast_R(n)\le 2n+1$. We first settle the case when $R\ne\z_2$. In the remainder of this section, we assume that the local ring $R$ is unramified dyadic such that $R\ne\z_2$. Note that \cite[Theorem~1.3]{HH23} implies that $u_R(2) = 5$. Thus, $u^\ast_R(2) \ge u_R(2) = 5$. Therefore, the minimal rank of primitively $2$-universal lattices is five. Next, we prove that $u^\ast_R(n) = 2n$ for any $n\ge 3$. \begin{thm} For any $n\ge 3$, the $R$-lattice $\bbH^{n-1} \mathbin{\perp} \ang{1, -1}$ is primitively $n$-universal. In particular, $u^\ast_R(n) = 2n$ for any $n\ge 3$. \end{thm} \begin{proof} Let $L\cong \bbH^{n-1} \mathbin{\perp} \ang{1,-1}$ and let $\ell$ be any $R$-lattice of rank $n$. Since $Q^\ast(\ang{1,-1}) = R^\times \cup 4R$, Lemmas~\ref{lem:easypnueven} and \ref{lem:easy2pnu} imply that $\ell$ is primitively represented by $L$ unless $\ell$ satisfies the following condition (*): \begin{itemize} \item[(*)] $\ell$ is even unimodular or proper $2$-modular, or $\ell$ has a Jordan splitting $\ell\cong \ell_1 \mathbin{\perp} \ell_2$, where $\ell_1$ is even unimodular and $\ell_2$ is proper $2$-modular. \end{itemize} Suppose that $\ell$ satisfies the condition (*). If $\ell$ has a unimodular component, then actually $\ell \cong \bbH \mathbin{\perp} \ell'$ for some even lattice $\ell'$ of rank $n-2$. Since $\ell'$ is primitively represented by $\bbH^{n-2}$, $\ell$ is primitively represented by $L$. Otherwise, $\ell$ is proper $2$-modular. Then, $\ell$ is split by $2\bbH$ or $2\bbA$, either of which is primitively represented by $\bbH \mathbin{\perp} \ang{1,-1}$. Hence, $\ell$ is primitively represented by $L$. \end{proof} \section{Minimal rank of P$n$U classically integral lattices over $\z_2$} \label{sec:pz2} In this section, we prove that \[ u^\ast_{\z_2}(n) = \begin{cases} 2n+1 & \text{if $n\le 4$,}\\ 2n & \text{if $n\ge 5$.} \end{cases} \] Recall that over $\z_2$, $Q(\ang{1,-1}) = \z_2^\times \cup 4\z_2$ and $Q^\ast(\ang{1,-1}) = \z_2^\times \cup 8\z_2$ by Lemma~\ref{lem:1-1}. We make a particular choice $\rho = 1$ for $R = \z_2$ so that $\mathbb{A}\cong \left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$. We know that $u_{\z_2}(2) = 5$ according to \cite[Theorem~1.3]{HH23} or \cite[Lemma~2.3]{Oh03}\footnote{It has been brought to our attention that \cite{Oh03} contains some errors. Specifically: (1) While \cite[Theorem~2.8]{Oh03} remains valid, its proof requires partial correction, and the corrected version is available. (2) Contrary to the original statement, $\langle 1, 1, 2, 4, 7 \rangle$ does not have an exceptional core $\z$-lattice. As a result, the findings in Section 3 need to be revised, and $\langle 1, 1, 2, 4, 7 \rangle$ should be recategorized as almost $2$-universal, correcting its misclassification. The full corrigendum can be provided upon request from the authors.}. Thus, we have $u^\ast_{\z_2}(2) \ge u_{\z_2}(2) = 5$. Therefore, the minimal rank of primitively $2$-universal $\z_2$-lattices is five. Next, we prove that $u^\ast_{\z_2}(3) = 7$. Note that $6 \le u^\ast_{\z_2}(3) \le 7$.
According to \cite[Theorem~6.16]{HH23} and Corollary~\ref{cor:corindgen}, any $3$-universal senary $\z_2$-lattice on the hyperbolic space $\bbH^3$ is isometric to a $\z_2$-lattice obtained from one of the following six $\z_2$-lattices by a unit scaling: \vspace{1ex} \begin{tasks}[label=(\Alph*),label-width=1.5em](3) \task $\bbH^2\mathbin{\perp}\ang{1, -1}$, \task $\bbH^2\mathbin{\perp}\ang{-1, 4}$, \task $\bbH\mathbin{\perp}\ang{1, -1, 2, -2}$, \task $\bbH\mathbin{\perp}\ang{1, -1, -2, 8}$, \task $\bbH\mathbin{\perp}\ang{-1, 2, -2, 4}$, \task $\bbH\mathbin{\perp}\ang{-1, -2, 4, 8}$. \end{tasks} \begin{thm}\label{thm:36} No $\z_2$-lattice of rank $6$ is primitively $3$-universal. Therefore, we have $u^\ast_{\z_2}(3) = 7$. \end{thm} \begin{proof} It suffices to show that none of the above six lattices is primitively $3$-universal. First, Let $L$ be one of (A), (B), or (E). Since $L$ is $2$-universal, $\bbA$ is represented by $L$. For any sublattice $M\cong\bbA$ of $L$, $M$ splits $L$ and \[ M^\perp\cong \ang{1,1,1,5}\text{, }\ang{5,1,1,4}\text{, or }\ang{5,2,2,4} \] by \cite[Theorem~91:9]{OM} and \cite[Corollary~93:14a]{OM}. Next, let $L$ be one of (C), (D), or (F). Then $\ang{1,3}$ is primitively represented by $L$. For any sublattice $M\cong\ang{1, 3}$ of $L$, $M$ splits $L$ and \[ M^\perp\cong \ang{1,3,2,-2}\text{, }\ang{1,3,-2,8}\text{, or }\ang{3,-2,4,8} \] by \cite[Theorem~91:9]{OM} and \cite[Corollary~93:14a]{OM}. In any case, if $L$ were primitively $3$-universal, then $M^\perp$ must be primitively $1$-universal. However, according to \cite[Theorem~5.2]{EG21jnt}, $M^\perp$ is not primitively $1$-universal. Therefore, $L$ is not primitively $3$-universal. \end{proof} Now, we prove that $u^\ast_{\z_2}(4) = 9$. Note that $8 \le u^\ast_{\z_2}(4) \le 9$. According to \cite[Theorem~1.3]{HH23} and Corollary~\ref{cor:corindgen}, any $4$-universal octonary $\z_2$-lattice on the hyperbolic space $\bbH^4$ is isometric to a $\z_2$-lattice obtained from one of the following $\z_2$-lattices by a unit scaling for some nonnegative integer $t$: \vspace{1ex} \begin{tasks}[label=(\Alph*),label-width=1.5em](2) \task $\bbH^3\mathbin{\perp}\ang{-1, 2^{2t}}$, \task $\bbH^2\mathbin{\perp}\ang{1, -1, -2, 2^{2t+1}}$, \task $\bbH^2\mathbin{\perp}\ang{-1, -1, 4, 2^{2t+2}}$, \task $\bbH^2\mathbin{\perp}\ang{-1, 2, -2, 2^{2t+2}}$, \task $\bbH^2\mathbin{\perp}\ang{-1, -2, 4, 2^{2t+3}}$, \task $\bbH^2\mathbin{\perp}\ang{-1, -2, 8, 2^{2t+4}}$, \task $\bbH^2\mathbin{\perp}\ang{-1, 2, -8, 2^{2t+4}}$. \end{tasks} \vspace{2ex} \begin{lem}\label{lem:no4mod8} Let $N\cong \bbA\mathbin{\perp}\ang{1, -1}$ be the $\z_2$-lattice. \noindent \textup{(a)} No integer congruent to $4$ modulo $8$ is primitively represented by $N$. \noindent \textup{(b)} Let $M = M_1\mathbin{\perp} \cdots\mathbin{\perp} M_r$ be a Jordan splitting of a $\z_2$-lattice $M$ such that \[ \z_2 \supseteq \ks M_1 \supsetneq \cdots \supsetneq \ks M_r = \kn M_r = 2^{2s}\z_2\text{,} \] for some positive integer $s$. Suppose that $\q_2 M \cong \q_2 N$ and $M' := M_1\mathbin{\perp} \cdots\mathbin{\perp} M_{r-1}$ is anisotropic. Then no integer congruent to $2^{2s+2}$ modulo $2^{2s+3}$ is primitively represented by $M$. \end{lem} \begin{proof} The assertion (a) follows directly from Lemma~\ref{lem:1-1}. Hence, we may assume that $r\ge 2$ in (b). Let $x = x_1 + \dotsb + x_r$ be a typical vector in $M$, where $x_i\in M_i$ for each $i$ with $1\le i\le r$. If $\ord Q(x)\ge 2s$, then \[ \ord Q(x_1 + \dotsb + x_{r-1})\ge 2s\text. 
\] Let $N' = \{y\in M' \mid \ord Q(y)\ge 2s\}$. It suffices to show that no integer congruent to $4\cdot 4^s$ modulo $8\cdot 4^s$ is primitively represented by $N'\mathbin{\perp} M_r$. Since $N'\mathbin{\perp} M_r\cong 4^s\bbA \mathbin{\perp} \ang{4^s, -4^s}$, the assertion (b) follows directly from (a). \end{proof} Let $N\cong \langle \epsilon_1, \dots, \epsilon_n \rangle$ be a unimodular $\z_2$-lattice, where $\epsilon_i$ is a unit in $\z_2$ for any $i$ with $1\le i\le n$. Note that, if $N\cong \langle \delta_1, \dots, \delta_n \rangle$ for some other units $\delta_i$ in $\z_2$ for $1\le i\le n$, then \[ \sum_{i=1}^n \delta_i \equiv \sum_{i=1}^n \epsilon_i \mod8\text. \] Hence, the residue class $(\sum_{i=1}^n \epsilon_i) + 8\z_2$ contained in $\z_2/8\z_2$ is an invariant of $N$, which is called the $2$-signature of $N$. (for this, see \cite[Chapter~15]{CS}). \begin{lem}\label{lem:eus} Let $L$ be a $\z_2$-lattice, and let $L = L_0\mathbin{\perp} L_1\mathbin{\perp} \dotsb \mathbin{\perp} L_t$ \textup($t\ge 0$\textup) be a fixed Jordan splitting of $L$, where $L_i = 0$ or $L_i$ is $2^i$-modular for each $i$ with $0\le i\le t$. For any integer $k$ with $0\le k\le t$, define $L_{\ge k} = L_k\mathbin{\perp} L_{k+1}\mathbin{\perp} \dotsb \mathbin{\perp} L_t$. Let $x = \sum_{i=0}^t x_i$ be a vector in $L$ such that $x_i\in L_i$. Assume that $\ks L = \z_2$, $x_0$ is primitive in $L_0$, and $Q(x)$ is even. \noindent \textup{(a)} Suppose that $\kn L = \z_2$. Write $L_0 \cong \langle \epsilon_1, \dots, \epsilon_n \rangle$, where $\epsilon_i$ are units in $\z_2$ for $1\le i\le n$. Let $\gamma = \sum_{i=1}^n \epsilon_i$ be the $2$-signature of $L_0$. Assume that one of the following conditions holds: \begin{enumerate}[\textup{(\roman{enumi})}] \item $\kn (L_{\ge 1}) \subseteq 8\z_2$ and $Q(x) \not\equiv \gamma \spmod8$; \item $\kn (L_{\ge 1}) \subseteq 4\z_2$ and $Q(x) \not\equiv \gamma \spmod4$; \item $L_1 \cong \ang{2\epsilon}$, $\kn (L_{\ge 2}) \subseteq 8\z_2$, and $Q(x) - \gamma \not\equiv 0$, $2\epsilon \spmod8$; \item $L_1 \cong \ang{2\epsilon, 2\epsilon'}$, $dL_1 \equiv 4\spmod{16}$, $\kn (L_{\ge 2}) \subseteq 8\z_2$, and \[ Q(x) - \gamma \not\equiv 0\text{, }2\epsilon\text{, }4 \pmod8\text; \] \item $L_1 \cong \ang{2\epsilon, 2\epsilon'}$, $dL_1 \equiv -4\spmod{16}$, $\kn (L_{\ge 2}) \subseteq 8\z_2$, and \[ Q(x) - \gamma \not\equiv 0\text{, }{\pm 2} \pmod8\text; \] \item $Q(x) \not\equiv \gamma \spmod2$. \end{enumerate} Then, there exists a binary even unimodular sublattice $M$ of $L$ containing $x$. In particular, $\rank L_0\ge 3$. \noindent \textup{(b)} In addition to the assumptions in \textup{(a)}, suppose further that $Q(x)\equiv 2\spmod4$. Then, in addition to the conclusions in \textup{(a)}, there exists another binary even unimodular sublattice $M'$ of $L$ containing $x$ such that $M'\not\cong M$ if and only if $\rank L_0\ge 5$, $\kn (L_{\ge 1}) = 2\z_2$, or $\rank L_0 = 4$ and $dL_0 \equiv 3\spmod4$. \noindent \textup{(c)} Suppose that $\kn L \subseteq 2\z_2$. Then, there exists a binary even unimodular sublattice $M$ of $L$ containing $x$. Suppose further that $Q(x)\equiv 2\spmod4$. Then, there exists another binary even unimodular sublattice $M'$ of $L$ containing $x$ such that $M'\not\cong M$ if and only if $\rank L_0\ge 4$ or $\kn (L_{\ge 1}) = 2\z_2$. \end{lem} \begin{proof} (a) Suppose that $L_0 \cong \ang{\epsilon_1, \dotsc, \epsilon_n}$ in $e_1, \dotsc, e_n$. Write $x_0 = \xi_1 e_1 + \dotsb + \xi_n e_n$. Since $x_0$ is primitive, we may assume that $\xi_1$ is odd. 
We claim that at least one among $\xi_2$, \dots, $\xi_n$ is even. Suppose on the contrary that $\xi_2 \dotsm \xi_n$ is odd. Note that \[ Q(x_0) = \epsilon_1 \xi_1^2 + \epsilon_2 \xi_2^2 + \dotsb + \epsilon_n \xi_n^2 \equiv s(L_0) \pmod8\text, \] which contradicts the hypothesis (i) through (vi). Hence, we may assume that $\xi_n$ is even. Now, $M = \z_2[x, e_1 + e_n]$ is an even unimodular binary $\z_2$-lattice containing $x$. (b) We continue to suppose that $L_0 \cong \ang{\epsilon_1, \dotsc, \epsilon_n}$ in $e_1, \dotsc, e_n$, and $x_0 = \xi_1 e_1 + \dotsb + \xi_n e_n$. By renumbering indices if necessary, we may assume that $\xi_1 \dotsm \xi_s$ is odd and $\xi_{s+1} \equiv \cdots \equiv \xi_n \equiv 0\spmod2$ for some $s$ with $1\le s < n$. As in (a), we let $M = \z_2[x, e_1 + e_n]$. First, we assume that $\kn (L_{\ge 1}) = 2\z_2$. Then, there exists $y\in L_{\ge 1}$ with $Q(y) \equiv 2\spmod4$. Note that $B(x, y) = B(x-x_0, y) \equiv 0\spmod2$. Hence, $M$ and $M' = \z_2[x, e_1 + e_n + y]$ satisfy all the required conditions. From now on, we assume that $\kn (L_{\ge 1}) \subseteq 4\z_2$. Assume that $\rank L_0 \ge 5$ or $\rank L_0 = 4$ and $dL_0 \equiv 3\spmod4$. If there exists an index $i$ with $1 < i \le s$ such that $\epsilon_1 \not\equiv \epsilon_i\spmod4$, then $M$ and $M' = \z_2[x, e_i + e_n]$ satisfy all the desired properties. The same conclusion can be obtained for the case when there exists an index $j$ with $s+1\le j < n$ such that $\epsilon_j \not\equiv \epsilon_n\spmod4$. Now, we assume that \[ \epsilon_1 \equiv \cdots \equiv \epsilon_s \spmod4 \quad \text{and} \quad \epsilon_{s+1} \equiv \cdots \equiv \epsilon_n \spmod4\text. \] If $\rank L_0 = 4$, then the above conditions on $\epsilon_i$'s imply that $dL_0\equiv 1\spmod4$, which is absurd. Hence, we must have $\rank L_0\ge 5$, which implies that either $s\ge 3$ or $n-s\ge 3$. If we let \[ M' = \begin{cases} \z_2[x, e_1 + e_2 + e_3 + e_n] & \text{if $s\ge 3$},\\ \z_2[x, e_1 + e_{n-2} + e_{n-1} + e_n] & \text{if $n-s\ge 3$}, \end{cases} \] then $M$ and $M'$ satisfy all the required conditions. Finally, assume that $\rank L_0 = 4$ and $dL_0 \equiv 1\spmod4$ or $\rank L_0 = 3$. Suppose that there exists an even unimodular binary sublattice $M'$ of $L$. Since we are assuming that $\kn (L_{\ge 1}) \subseteq 4\z_2$, $M'$ is represented by $L_0$ by virtue of \cite[Proposition~20]{OM58}. Note that \[ \bbH \mathbin{\perp} \ang{1, -1} \not\cong \bbA \mathbin{\perp} \ang{1, 3}\text, \quad \bbH \mathbin{\perp} \ang{1, 3} \not\cong \bbA \mathbin{\perp} \ang{1, -1}\text, \] and $\bbH \mathbin{\perp} \ang{\epsilon} \not\cong \bbA \mathbin{\perp} \ang{5\epsilon}$ for any $\epsilon \in \z_2^\times$. Hence, if $M'$ is represented by $L_0$, then $M'\cong M$. This proves the lemma. (c) Suppose that $L_0 \cong J_1 \mathbin{\perp} \dotsb \mathbin{\perp} J_n$, where $J_i \cong \bbH$ or $\bbA$ in $e_{2i-1}, e_{2i}$ for $1\le i\le n$. Write $x_0 = \xi_1 e_1 + \dotsb + \xi_{2n} e_{2n}$, and assume that $\xi_1$ is odd. Let $M = \z_2[x, e_2]$. Then, $M$ satisfies all the desired properties for the former assertion. Now, suppose that $Q(x)\equiv 2\spmod4$. First, assume that $\kn (L_{\ge 1}) = 2\z_2$. Then, there exists $y\in L_{\ge 1}$ with $Q(y) \equiv 2\spmod4$. Note that $B(x, y) = B(x-x_0, y) \equiv 0\spmod2$. Hence, $M$ and $M' = \z_2[x, e_2 + y]$ satisfy all the required conditions. From now on, we assume that $\kn (L_{\ge 1}) \subseteq 4\z_2$. Assume that $\rank L_0\ge 4$. 
If $\xi_3 \equiv \xi_4 \equiv 0\spmod2$, then $M$ and $M' = \z_2[x, e_2 + e_3 + e_4]$ satisfy all the desired properties. Hence, we may assume that $\xi_3$ is odd. If $J_1 \not\cong J_2$, then $M$ and $M' = \z_2[x, e_4]$ satisfy all the required conditions. If $J_1 \cong J_2$, then the same conclusion can be obtained by considering the isometry $\bbH \mathbin{\perp} \bbH \cong \bbA \mathbin{\perp} \bbA$. Now, assume that $\rank L_0 = 2$. Suppose that there exists an even binary unimodular sublattice $M'$ of $L$. Since we are assuming that $\kn (L_{\ge 1}) \subseteq 4\z_2$, $M'$ is represented by $L_0$ by \cite[Proposition~20]{OM58}. Hence, $M' \cong L_0 \cong M$. This proves the lemma. \end{proof} \begin{lem}\label{lem:isoiso} Let $L$ be a binary $\z_2$-lattice such that $\ks L = \z_2$ and $\q_2 L \cong \bbH$. Let $s$ and $t$ be integers such that $t > 2s = \ord_2 dL$. Let $z$ be a primitive vector in $L$ such that $\ord_2 Q(z) = t+1$, and let $w$ be a vector in $L$ such that $\ord_2 B(z, w) = t$. Then, $\ord_2 Q(w) \ge t+2$. \end{lem} \begin{proof} First, we claim that $\ord_2 Q(w) \ge t$. Assume that $L$ is even. Let $e_1, e_2$ be a basis for $L$ such that $L \cong \bbH$ in $e_1, e_2$. Write $z = z_1 e_1 + z_2 e_2$. We may assume that $z_1 = 1$. Then, $z_2 \equiv 2^t \spmod{2^{t+1}}$. Write $w = w_1 e_1 + w_2 e_2$. Since \[ 0 \equiv B(z, w) = z_1 w_2 + z_2 w_1 \equiv w_2 \pmod{2^t}\text, \] we have $Q(w) \equiv 0 \spmod{2^{t+1}}$, as desired. Now, we assume that $\kn L = \z_2$. Let $e_1, e_2$ be a basis for $L$ such that $L \cong \left(\begin{smallmatrix}\epsilon & \alpha \\ \alpha & 0\end{smallmatrix}\right)$ in $e_1, e_2$, where $\epsilon$ is a unit in $\z_2$ and $\alpha$ is an integer in $\z_2$ such that $\ord_2 \alpha = s$. Note that $Q(\zeta_1 e_1 + \zeta_2 e_2) = \zeta_1(\epsilon\zeta_1 + 2\alpha\zeta_2)$. Hence, \[ \ord_2 Q(\zeta_1 e_1 + \zeta_2 e_2) \begin{cases} = 2\ord_2 \zeta_1 & \text{if $\ord_2 \zeta_1 \le s$,}\\ \ge 2s+2 & \text{if $\ord_2 \zeta_1 \ge s+1$.} \end{cases} \] Write $z = z_1 e_1 + z_2 e_2$. Since $Q(z)$ is even, $z_1$ cannot be a unit. This implies that $z_2$ is a unit, and we have \[ \ord_2 Q(z) \begin{cases} = 2\ord_2 z_1 & \text{if $\ord_2 z_1 \le s$,}\\ \ge 2s+3 & \text{if $\ord_2 z_1 = s+1$,}\\ = s+1 + \ord_2 z_1 & \text{if $\ord_2 z_1 \ge s+2$.} \end{cases} \] Since $t > 2s$, we have $\ord_2 z_1 \ge s+1$ and $t \ge 2s+2$. Note that we have $\ord_2 z_1 = s+1$ or $t-s$. If $\ord_2 z_1 = s+1$, then $\ord_2 (\epsilon z_1 + 2\alpha z_2) = t-s$. Hence, by changing basis from $e_1, e_2$ to $e_1, -2\alpha e_1 + \epsilon e_2$ and converting coordinates $z_1$, $z_2$, $w_1$, $w_2$ accordingly if necessary, we may assume that $\ord_2 z_1 = t-s$. Write $w = w_1 e_1 + w_2 e_2$. Note that $B(z, w) = \epsilon z_1 w_1 + \alpha (z_1 w_2 + z_2 w_1)$ and $\ord_2(\epsilon z_1 + \alpha z_2) = s$. Since \[ 0 \equiv B(z, w) \equiv (\epsilon z_1 + \alpha z_2) w_1 \pmod{2^t}\text, \] we have $w_1 \equiv 0 \spmod{2^{t-s}}$. Hence, $\ord_2 Q(w) = \ord_2 w_1 + \ord_2(\epsilon w_1 + 2\alpha w_2) \ge t$, as desired. Now, assuming that $\ord_2 Q(w) \ge t$, we prove that $\ord_2 Q(w) \ge t+2$. Write $Q(z) = 2^{t+1} \epsilon$ and $Q(w) = 2^t \alpha$ for some $\epsilon \in \z_2^\times$ and $\alpha \in \z_2$. Then, $d(\z_2[z, w]) \equiv 2^{2t}(2\epsilon\alpha - 1) \spmod{2^{2t+3}}$. Since $z$ and $w$ are linearly independent, $\q_2[z, w] \cong \bbH$, which implies that $d(\q_2[z, w]) = -1$. Hence, we must have $\alpha \equiv 0\spmod4$, and the result follows directly from this. 
\end{proof} \begin{thm}\label{thm:48} No $\z_2$-lattice of rank $8$ is primitively $4$-universal. Therefore, we have $u^\ast_{\z_2}(4) = 9$. \end{thm} \begin{proof} It suffices to show that the $\z_2$-lattice $\ell$ is not primitively represented by the $\z_2$-lattice $L$ for each pair $(\ell, L)$ which is given in the table below. Let $(u, M)$ be the pair given in the same row with the pair $(\ell, L)$ in the table. Let $\ell = \ell_0 \mathbin{\perp} \ell'$, where $\ell_0$ is the unimodular component of $\ell$, and $\ell' \cong 2^{2t+u+1}\bbA$. For any representation $\phi : \ell_0 \to L$, the orthogonal complement of $\phi(\ell_0)$ in $L$ is always isometric to $M$ by cancellation laws (see \cite[Proposition~93:14a]{OM} and \cite[Theorem~5.3.6]{Ki}), independently of the choice of the representation $\phi$. Hence, it suffices to show that the $\z_2$-lattice $\ell'$ is not primitively represented by the $\z_2$-lattice $M$. \begin{center} \noindent\begin{tabular}{|c|c|c|c|c|} \hline & $\ell$ & $L$ & $M$ & $u$\\ \hline (A) & $\bbA\mathbin{\perp} 2^{2t+1}\bbA$ & $\bbH^3\mathbin{\perp}\ang{-1, 2^{2t}}$ & $\bbH\mathbin{\perp}\ang{1, 1, 5, 2^{2t}}$ & $0$\\ \hline (B) & $\ang{1,3}\mathbin{\perp} 2^{2t+2}\bbA$ & $\bbH^2\mathbin{\perp}\ang{1,-1,-2, 2^{2t+1}}$ & $\bbH\mathbin{\perp}\ang{1, 1, 10, 2^{2t+1}}$ & $1$\\ \hline (C) & $\bbA\mathbin{\perp} 2^{2t+3}\bbA$ & $\bbH^2\mathbin{\perp}\ang{-1,-1,4, 2^{2t+2}}$ & $\bbH\mathbin{\perp}\ang{1, 1, 20, 2^{2t+2}}$ & $2$\\ \hline (D) & $\bbA\mathbin{\perp} 2^{2t+3}\bbA$ & $\bbH^2\mathbin{\perp}\ang{-1, 2, -2, 2^{2t+2}}$ & $\bbH\mathbin{\perp}\ang{5, 2, 2, 2^{2t+2}}$ & $2$\\ \hline (E) & $\ang{1,3}\mathbin{\perp} 2^{2t+4}\bbA$ & $\bbH^2\mathbin{\perp}\ang{-1, -2, 4, 2^{2t+3}}$ & $\bbH\mathbin{\perp}\ang{1, 10, 4, 2^{2t+3}}$ & $3$\\ \hline (F) & $\bbA\mathbin{\perp} 2^{2t+5}\bbA$ & $\bbH^2\mathbin{\perp}\ang{-1, -2, 8, 2^{2t+4}}$ & $\bbH\mathbin{\perp}\ang{5, 2, 8, 2^{2t+4}}$ & $4$\\ \hline (G) & $\bbA\mathbin{\perp} 2^{2t+5}\bbA$ & $\bbH^2\mathbin{\perp}\ang{-1, 2, -8, 2^{2t+4}}$ & $\bbH\mathbin{\perp}\ang{1, 6, -8, 2^{2t+4}}$ & $4$\\ \hline \end{tabular} \end{center} Suppose that there exists a primitive representation from $\ell'$ to $M$. Then, there are vectors $z$, $w\in M$ such that \[ Q(z) = Q(w) = 2^{2t+u+2}\text,\quad B(z, w) = 2^{2t+u+1}\text, \] and $\z_2[z, w]$ is a primitive sublattice of $M$. Suppoose that there exists a basis $e_1, \dots, e_6$ for $M$ that satisfies all of the following conditions: \begin{enumerate}[(\roman{enumi})] \item $\ord_2 Q(e_6) = 2t+u$, and $\z_2 e_6$ splits $M$; \item $z = z_1 e_1 + z_2 e_2$, where $J = \z_2[e_1, e_2]$ is isotropic, splits $M$, and satisfies \[ 2t+u+1 > \ord_2 dJ - \ord_2 \ks J\text; \] \item For $w = \sum_{i=1}^6 w_i e_i$, the vector $\sum_{i=3}^6 w_i e_i$ is primitive in $M$. \end{enumerate} By the condition (ii) and Lemma~\ref{lem:isoiso}, we have \[ Q(w_1 e_1 + w_2 e_2) \equiv 0\spmod{2^{2t+u+3}}\text. \] Hence, by the condition (iii), $\sum_{i=3}^6 w_i e_i$ is a primitive vector in $\z_2[e_3, \dots, e_6]$ such that $Q(\sum_{i=3}^6 w_i e_i) \equiv 2^{2t+u+2}\spmod{2^{2t+u+3}}$. However, this is impossible by the condition (i) and Lemma \ref{lem:no4mod8}. This proves the theorem. Now, we prove the existence of a basis for $M$ that satisfies the conditions (i), (ii), and (iii). Since all the other cases can be proved in similar manners, we only provide the proof of the case (C). Assume that $M\cong\ang{1, 1, 1, 3, 4, 2^{2t+2}}$ in $e_1, \dots, e_6$. 
Write $z = \sum_{i=1}^6 z'_i e'_i$ and $w = \sum_{i=1}^6 w'_i e'_i$. First, assume that $t=0$. Write $z^\dag = z'_5 e'_5 + z'_6 e'_6$, $w^\dag = w'_5 e'_5 + w'_6 e'_6$. Suppose if possible that $\z_2[z^\dag, w^\dag]$ is primitive. Since $\z_2[z^\dag, w^\dag] \cong \ang{4, 4}$, exactly one among the following conditions holds: \begin{enumerate}[(\roman{enumi})] \item $Q(z^\dag)\equiv 4\spmod{32}$, $B(z^\dag, w^\dag)\equiv 0\spmod{16}$, $Q(w^\dag)\equiv 4\spmod{32}$; \item $Q(z^\dag)\equiv 4\spmod{32}$, $B(z^\dag, w^\dag)\equiv 8\spmod{16}$, $Q(w^\dag)\equiv 20\spmod{32}$; \item $Q(z^\dag)\equiv 20\spmod{32}$, $B(z^\dag, w^\dag)\equiv 8\spmod{16}$, $Q(w^\dag)\equiv 4\spmod{32}$; \item $Q(z^\dag)\equiv 20\spmod{32}$, $B(z^\dag, w^\dag)\equiv 0\spmod{16}$, $Q(w^\dag)\equiv 20\spmod{32}$; \item $Q(z^\dag)\equiv 4\spmod{16}$, $B(z^\dag, w^\dag)\equiv 4\spmod8$, $Q(w^\dag)\equiv 8\spmod{32}$; \item $Q(z^\dag)\equiv 8\spmod{32}$, $B(z^\dag, w^\dag)\equiv 4\spmod8$, $Q(w^\dag)\equiv 4\spmod{16}$. \end{enumerate} Since \begin{multline*} \quad\quad\quad\begin{pmatrix} Q(z-z^\dag) & B(z-z^\dag, w-w^\dag) \\ B(z-z^\dag, w-z^\dag) & Q(w-w^\dag) \end{pmatrix}\\ = \begin{pmatrix} Q(z) & B(z, w) \\ B(z, w) & Q(w) \end{pmatrix} - \begin{pmatrix} Q(z^\dag) & B(z^\dag, w^\dag) \\ B(z^\dag, w^\dag) & Q(w^\dag) \end{pmatrix}\text,\quad\quad\quad \end{multline*} in any case, we must have $\z_2[z-z^\dag, w-w^\dag] \cong \ang{12, -4}$, which contradicts the fact that $\ang{3, -1}$ is not represented by $\ang{1, 1, 1, 3}$ over $\q_2$. Hence, we may assume that at least one among $z'_1$, \dots, $z'_4$ is odd. Note that $Q(z) \equiv 0 \not\equiv 1+1+1+3 \spmod4$. Hence, we may apply Lemma~\ref{lem:eus} to $M$ and $z$ to obtain a basis $e_1, \dots, e_6$ for $M$ such that $M \cong \bbH \mathbin{\perp} \ang{1, 5, 4, 4}$ in the basis and such that $z = z_1 e_1 + z_2 e_2$. We may assume that $z_1 = 1$. Then, $z_2$ is even. Write $w = \sum_{i=1}^6 w_i e_i$. Since \[ B(z, w) = z_1 w_2 + z_2 w_1 \] is even, $w_2$ also is even. Therefore, $e_1, \dots, e_6$ is the basis which we are seeking for. Next, assume that $t\ge 1$. We may assume that at least one among $z'_1$, \dots, $z'_5$ is odd. If $z'_i$ is odd for some $1\le i\le 4$, then we may apply Lemma~\ref{lem:eus} to $M$ and $z$ to obtain the basis which we are seeking for. Suppose that $z'_1 \equiv \cdots \equiv z'_4 \equiv 0\spmod2$ and $z'_5$ is odd. Let $e_1 = z'_5 e'_5 + z'_6 e'_6$ and $e_2 = \frac12(z - e_1)$. Since $z'_5$ is odd, we have $Q(e_1) \equiv 4\spmod{16}$. Hence, $\z_2 e_1$ splits $N^\dag = \z_2[e'_5, e'_6]$. Note that \[ Q(e_1)\ \equiv\ 4\ \text{ or }\ 20\pmod{32}\text{,} \] where the latter may occur only when $t=1$. Thus, there exists a basis $e_1, e_6$ for $N^\dag$ such that $N^\dag \cong \ang{Q(e_1), 2^{2t+2}}$ or $\ang{Q(e_1), 80}$ in $e_1, e_6$. Since $Q(e_1) \equiv 4\spmod{16}$, we have $Q(e_2) \equiv -1\spmod4$. Since $Q(e_1) + 4Q(e_2) = Q(e_1 + 2e_2) = Q(z)\equiv 0\spmod{32}$, we have \[ Q(e_2)\ \equiv\ -1\ \text{ or }\ 3\pmod8\text{,} \] according as $Q(e_1) \equiv 4$ or $20\spmod{32}$. Hence, in any case, $\z_2 e_2$ splits $N = \z_2[e'_1, \dots, e'_4]$. Thus, there exists a basis $e_2, \dots, e_5$ for $N$ such that $N \cong \ang{Q(e_2), 1, 1, 5}$ or $\ang{Q(e_2), 1, 1, 1}$ in this basis. So far, we found a basis $e_1, \dots, e_6$ for $M$ such that \[ M\ \cong\ \ang{Q(e_1), Q(e_2), 1, 1, 5, 2^{2t+2}}\ \text{ or }\ \ang{Q(e_1), Q(e_2), 1, 1, 1, 80} \] in this basis and such that $z = e_1 + 2 e_2$. Note that $2t+u+1 \ge 5 > 2 = \ord_2 Q(e_1)$. 
Write $w = \sum_{i=1}^6 w_i e_i$. Since $B(z, w) = Q(e_1) w_1 + 2 Q(e_2) w_2 \equiv 0\spmod4$, $w_2$ also is even. Therefore, $e_1, \dots, e_6$ is the basis which we are seeking for. \end{proof} Finally, we prove that $u^\ast_{\z_2}(n) = 2n$ for any $n\ge 5$. Recall that, for a local ring $R$ and for an $R$-lattice $L$, $Q^\ast(L)$ means the set of all $Q(v)$, where $v$ runs over all primitive vectors in $L$. \begin{lem}\label{lem:easypnucap} Let $R$ be an unramified dyadic local ring and let $\ell$ be an even $R$-lattice of rank $n\ge 1$. If $J$ is an $R$-lattice such that $Q^\ast(\ell) \cap Q^\ast(J) \ne \varnothing$, then $\ell$ is primitively represented by $\bbH^{n-1} \mathbin{\perp} J$. \end{lem} \begin{proof} Assume that $\ell \cong K_1 \mathbin{\perp} \dotsb \mathbin{\perp} K_r$, where $K_i$ is either a (proper modular) lattice of rank $1$ or an improper modular lattice of rank $2$ for each $i$ with $1\le i\le r$. Let $s_0 = 0$ and let $s_i = (\rank K_1) + \dotsb + (\rank K_i)$ for $1\le i\le r$. Note that $s_r = n$. Fix $\beta\in Q^\ast(\ell)\cap Q^\ast(J)$ and find primitive vectors $w\in \ell$ and $z\in J$ such that $Q(w) = \beta = Q(z)$. Write $w = w_1 + \dotsb + w_r$, where $w_i \in K_i$ for $1\le i\le r$. By renumbering indices if necessary, we may assume that $w_r$ is primitive in $K_r$. Assume that $L \cong \bbH^{n-1} \mathbin{\perp} J$ in $e_1, e_2, \dots, e_m$. Note that $K_i$ is isometric to one of $\langle 2\alpha \rangle$, $\alpha\bbH$, or $\alpha\bbA$ for some integer $\alpha\in R$ for $1\le i\le r$. We define vectors $x_j$, $y_j\in L$ and integers $\beta_j \in R$ for $1\le j\le n-1$ as follows. Let $1\le i\le r-1$. First, suppose that $K_i \cong \langle 2\alpha \rangle$. Then, let \begin{align*} x_{s_i} & = e_{2s_i - 1} + \alpha e_{2s_i}\text, & y_{s_i} & = e_{2s_i - 1} - \alpha e_{2s_i}\text. \end{align*} Note that $B(x_{s_i}, y_{s_i}) = 0$ and there is $\sigma: Rx_{s_i} \cong K_i$. Write $w_i = \beta_{s_i} \sigma(x_{s_i})$. Note that $Q(\beta_{s_i} y_{s_i}) = -Q(w_i)$. Now, suppose that $K_i \cong \alpha\bbH$. Then, let \begin{align*} x_{s_i-1} & = e_{2s_i - 3}\text, & y_{s_i-1} & = e_{2s_i - 3} - \alpha e_{2s_i}\text,\\ x_{s_i} & = \alpha e_{2s_i - 2} + e_{2s_i - 1}\text, & y_{s_i} & = e_{2s_i - 1}\text. \end{align*} Note that $B(R[x_{s_i-1}, x_{s_i}], R[y_{s_i-1}, y_{s_i}]) = 0$ and there is $\sigma: R[x_{s_i-1}, x_{s_i}] \cong K_i$. Write $w_i = \beta_{s_i-1} \sigma(x_{s_i-1}) + \beta_{s_i} \sigma(x_{s_i})$. Note that $Q(\beta_{s_i-1} y_{s_i-1} + \beta_{s_i} y_{s_i}) = -Q(w_i)$. Next, suppose that $K_i \cong \alpha\bbA$. Then, let \begin{align*} x_{s_i-1} & = e_{2s_i - 3} + \alpha e_{2s_i - 2}\text, & y_{s_i-1} & = e_{2s_i - 3} - \alpha e_{2s_i - 2} + \alpha e_{2s_i}\text,\\ x_{s_i} & = e_{2s_i - 3} + e_{2s_i - 1} + \rho\alpha e_{2s_i}\text, & y_{s_i} & = - e_{2s_i - 1} + \rho\alpha e_{2s_i}\text. \end{align*} Note that $B(R[x_{s_i-1}, x_{s_i}], R[y_{s_i-1}, y_{s_i}]) = 0$ and there is $\sigma: R[x_{s_i-1}, x_{s_i}] \cong K_i$. Write $w_i = \beta_{s_i-1} \sigma(x_{s_i-1}) + \beta_{s_i} \sigma(x_{s_i})$. Note that $Q(\beta_{s_i-1} y_{s_i-1} + \beta_{s_i} y_{s_i}) = -Q(w_i)$. Finally, if $K_r \cong \alpha\bbH$, then let $x_{n-1} = e_{2n-3}$, $y_{n-1} = \alpha e_{2n-2}$, and $\beta_{n-1} = 1$. If $K_r \cong \alpha\bbA$, note that $Q(w_r) = 2\epsilon\alpha$ for some unit $\epsilon$ in $R$. In that case, let $x_{n-1} = \rho \epsilon^{-1} e_{2n-3} + \alpha e_{2n-2}$, $y_{n-1} = e_{2n-3}$, and $\beta_{n-1} = 1$. 
Now, note that \[ R[x_1, \dots, x_{n-1}, \beta_1 y_1 + \dotsb + \beta_{n-1} y_{n-1} + z] \] is a primitive sublattice of $L$ that is isometric to $\ell$. \end{proof} \begin{lem}\label{lem:48excep} The $\z_2$-lattice $\bbH^3\mathbin{\perp}\ang{1, -1}$ primitively represents all $\z_2$-lattices of rank $4$ except for the quaternary $\z_2$-lattice $\bbA\mathbin{\perp} 2\bbA$. \end{lem} \begin{proof} Let $L \cong \bbH^3\mathbin{\perp}\ang{1, -1}$ and let $\ell$ be any $\z_2$-lattice of rank $4$ not isometric to $\bbA\mathbin{\perp} 2\bbA$. Since $Q^\ast(\ang{1, -1}) = \z_2^\times \cup 8\z_2$, $\ell$ is primitively represented by $L$ unless any Jordan decomposition $\ell = \ell_1 \mathbin{\perp} \dotsb \mathbin{\perp} \ell_t$ ($\ell_i$ is nonzero for any $1\le i\le t$) of $\ell$ satisfies $2\z_2 \supseteq \kn\ell_1 \supseteq \cdots \supseteq \kn\ell_t \supseteq 4\z_2$, by Lemmas~\ref{lem:easypnueven} and \ref{lem:easy2pnu}. Suppose that $\ell$ has such a Jordan decomposition. Assume that $\ell$ is orthogonally split by $2^a\bbH$ for some nonnegative integer $a$. Since $2^a\bbH$ is primitively represented by $\bbH\mathbin{\perp}\ang{1, -1}$, $\ell$ is primitively represented by $L$. Hence, we may assume that $\ell$ has a proper component. Assume that $\ell \cong \ang{2^{a_1}\epsilon_1, 2^{a_2}\epsilon_2, 2^{a_3}\epsilon_3, 2^{a_4}\epsilon_4}$. Since $a_1, a_2, a_3, a_4\in\{1, 2\}$, up to rearrangement, at least one among \[ 2^{a_1}\epsilon_1 + 2^{a_2}\epsilon_2\text{,} \quad 2^{a_1}\epsilon_1 + 2^{a_2}\epsilon_2 + 2^{a_3}\epsilon_3\text{,} \quad 2^{a_1}\epsilon_1 + 2^{a_2}\epsilon_2 + 2^{a_3}\epsilon_3 + 2^{a_4}\epsilon_4 \] is a multiple of $8$, so that it is primitively represented by $\ang{1, -1}$. Hence, $\ell$ is primitively represented by $L$ by Lemma~\ref{lem:easypnucap}. Finally, assume that $\ell \cong \ang{2^{a_1}\epsilon_1, 2^{a_2}\epsilon_2}\mathbin{\perp} 2^{a_3}\bbA$. Note that $a_1, a_2\in\{1, 2\}$ and $a_3\in\{0, 1\}$. If $a_1 = a_3 + 1$, then $\ell$ is split by $2^{a_3}\bbH$. If $a_1 = a_3 = 1$, then $\ell$ is an orthogonal sum of proper components. Hence, we may assume that $a_1 = a_2 = 2$ and $a_3 = 0$. Then, $2^{a_1}\epsilon_1 + 2^{a_2}\epsilon_2$ is a multiple of $8$. Hence, $\ell$ is primitively represented by $L$ by Lemma~\ref{lem:easypnucap}. \end{proof} \begin{thm} For any $n\ge 5$, the $\z_2$-lattice $\bbH^{n-1} \mathbin{\perp} \ang{1, -1}$ is primitively $n$-universal. In particular, $u^\ast_{\z_2}(n) = 2n$ for any $n\ge 5$. \end{thm} \begin{proof} Suppose that $n=5$. Any $\z_2$-lattice of rank $5$ is split by a $\z_2$-lattice of rank $1$. Hence, by Lemma \ref{lem:48excep}, it suffices to show that any $\z_2$-lattice $\ell$ of the form $\bbA\mathbin{\perp} 2\bbA\mathbin{\perp}\ang{2^a\epsilon}$ is primitively represented by $\bbH^4\mathbin{\perp}\ang{1,-1}$. If $a=0$ or $a\ge 3$, then $\ang{2^a\epsilon}$ is primitively represented by $\ang{1,-1}$. If $a = 2$, then $\ell\cong \bbA\mathbin{\perp} 2\bbH\mathbin{\perp}\ang{20\epsilon}$. If $a = 1$, then $\ell\cong \bbH\mathbin{\perp} 2\bbA\mathbin{\perp}\ang{10\epsilon}$. Suppose that $n=6$. Any $\z_2$-lattice of rank $6$ either has a component of rank $1$ or is an orthogonal sum of improper $\z_2$-lattices of rank $2$. For the former case, the primitive representability follows directly from the case when $n=5$ by Lemma~\ref{lem:easy2pnu}. For the latter case, consider a $\z_2$-lattice of the form $\ell_1\mathbin{\perp} \ell_2\mathbin{\perp} \ell_3$, where $\ell_i$'s are binary improper $\z_2$-lattices. 
Clearly, it is impossible that $\ell_1\mathbin{\perp} \ell_2 \cong \ell_1\mathbin{\perp} \ell_3 \cong \ell_2\mathbin{\perp} \ell_3 \cong \bbA\mathbin{\perp} 2\bbA$. Hence, we may assume that $\ell_1\mathbin{\perp} \ell_2 \not\cong \bbA\mathbin{\perp} 2\bbA$. Therefore, $\ell_1\mathbin{\perp} \ell_2$ is primitively represented by $\bbH^3\mathbin{\perp}\ang{1,-1}$, and $\ell_3$ is primitively represented by $\bbH^2$. The case when $n\ge 7$ follows by an induction on $n$. It follows from the case of $n-1$ for any odd integer $n$, from the cases of $n-1$ and $n-2$ for any even integer $n$, by Lemmas~\ref{lem:easypnueven} and \ref{lem:easy2pnu}. This completes the proof. \end{proof} \begin{thebibliography}{[00]} \bibitem{Bu10} N. V. Budarina, \emph{On primitively universal quadratic forms}, Lith. Math. J. \textbf{50}(2010), 140--163. \bibitem{Bu11} N. V. Budarina, \emph{On primitively 2-universal quadratic forms} (Russian. Russian summary), Algebra i Analiz 23, 3 (2011), 31--62; translation in St. Petersburg Math. J. \textbf{23}(2012), 435--458. \bibitem{CO23} W. K. Chan and B.-K. Oh, \emph{Can we recover an integral quadratic form by representing all its subforms?}, Adv. Math. \textbf{433}(2023), Article 109317. \bibitem{Co} K. Conrad, \emph{A multivariable Hensel's lemma}, available online at {\tt \lightspacing{https://kconrad.math.uconn.edu/blurbs/gradnumthy/} multivarhensel.pdf}. \bibitem{CS} J. H. Conway and N. J. A. Sloane, \emph{Sphere packings, lattices and groups} (third edition), Springer, 1993. \bibitem{EG21jnt} A. G. Earnest and B. L. K. Gunawardana, \emph{Local criteria for universal and primitively universal quadratic forms}, J. Number Theory \textbf{225}(2021), 260--280. \bibitem{EG21rj} A. G. Earnest and B. L. K. Gunawardana, \emph{On locally primitively universal quadratic forms}, Ramanujan J. \textbf{55}(2021), 1145--1163. \bibitem{He23} Z. He, \emph{On classic $n$-universal quadratic forms over dyadic local fields}, Manuscripta Math. \textbf{174}(2024), 559--595. \bibitem{HH23} Z. He and Y. Hu, \emph{On $k$-universal quadratic lattices over unramified dyadic local fields}, Journal of Pure and Applied Algebra \textbf{227}(2023), Article 107334. \bibitem{HHX23} Z. He, Y. Hu, and F. Xu, \emph{On indefinite $k$-universal quadratic forms over number fields}, Math. Z. \textbf{304}(2023), Article 20. \bibitem{Ki} Y. Kitaoka, \emph{Arithmetic of quadratic forms}, Cambridge University Press, 1993. \bibitem{Oh03} B.-K. Oh, \emph{The representation of quadratic forms by almost universal forms of higher rank}, Math. Z. \textbf{244}(2003), 399--413. \bibitem{OM58} O. T. O'Meara, \emph{The integral representations of quadratic forms over local fields}, Amer. J. Math. \textbf{80}(1958), 843--878. \bibitem{OM} O. T. O'Meara, \emph{Introduction to quadratic forms}, Springer, 1963. \end{thebibliography} \end{document}
2412.14792v1
http://arxiv.org/abs/2412.14792v1
The asymptotic estimation of prime ideals in imaginary quadratic fields and Chebyshev's bias
\documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage[mathscr]{eucal} \usepackage{amscd} \usepackage{tikz} \usepackage{tkz-graph} \usepackage{multicol} \usepackage{float} \usepackage{authblk} \pagestyle{headings} \newtheorem{theorem}{THEOREM}[section] \newtheorem{lemma}[theorem]{LEMMA} \newtheorem{definition}[theorem]{DEFINITION} \newtheorem{example}[theorem]{EXAMPLE} \newtheorem{proposition}[theorem]{PROPOSITION} \newtheorem{corollary}[theorem]{COROLLARY} \newtheorem{conjecture}[theorem]{CONJECTURE} \newtheorem{remark}[theorem]{REMARK} \newcommand*{\dif}{\,\mathrm{d}} \newcommand*{\mo}{\,\mathrm{mod}\,} \newcommand*{\fr}{F} \title{The asymptotic estimation of prime ideals in imaginary quadratic fields and Chebyshev’s bias } \author{Chen Lin$^1$, Chenhao Tang$^2$, Xuejun Guo$^3$\footnote{Corresponding author.}} \affil{{\small {$^{1,2,3}$School of Mathematics, Nanjing University, Nanjing 210093, China}\\ $^[email protected], $^[email protected], $^[email protected]} } \date{} \voffset -2cm \marginparwidth 0pt \oddsidemargin 32pt \topmargin 20pt \textheight 21.5 truecm \textwidth 14.5 truecm \def\baselinestretch{1.1} \begin{document} \maketitle \begin{abstract} We study the asymptotic estimation of prime ideals that satisfy certain congruence and argument conditions in imaginary quadratic fields. We also discuss the phenomenon of Chebyshev's bias in the distribution of prime ideals among different residue classes. \end{abstract} \noindent { 2020\it Mathematics Subject Classification:} 11N05, 11K70\\[1mm] \noindent {\it Keywords: }asymptotic estimation, quadratic forms, Chebyshev’s bias \section{Introduction} Fermat proved that every prime congruent to $1$ modulo $4$ is a sum of two squares. Assume an odd prime $p$ such that $p=a_p^2+b_p^2$, where $a_p>b_p>0$ are integers. In 1919, Hecke demonstrated that $\theta_p=\arctan \frac{b_p}{a_p}$ exhibits a uniform distribution as the prime $p$ varies. Coleman established the result of argument equidistribution of prime ideals in imaginary quadratic fields in \cite{Coleman1990TheDO} and \cite{Coleman1992}. To state Coleman's theorem, we first recall some notations. Let $K=\mathbb{Q}(\sqrt{\Delta})$ be the imaginary quadratic number field with $\Delta$ a negative square-free integer. We write $\mathfrak{f},\mathfrak{a}$ for ideals and $\mathfrak{p}$ for a prime ideal. Let $N(\mathfrak{a})$ denote the norm of $\mathfrak{a}$. Given a non-zero ideal $\mathfrak{f}$, let $g=g(\mathfrak{f})$ be the number of units $\epsilon$ such that $\epsilon\equiv 1\pmod{\mathfrak{f}}$. We write $Cl(K,\mathfrak{f})$ for the ideal class group $\mo\mathfrak{f}$ and $h_{\mathfrak{f}}$ for its cardinality. Let $C$ denote an ideal class $\mo\mathfrak{f}$ and $(\xi)$ denote the principal fractional ideal generated by some $\xi\in K$. Assume that for each $C\in Cl(K,\mathfrak{f})$, there has been fixed an ideal $\mathfrak{a}_0 \in C^{-1}$. Then for $\mathfrak{a}\in C$, we can find some $\xi_{\mathfrak{a}}\in K$ satisfying $(\xi_{\mathfrak{a}})=\mathfrak{a}\mathfrak{a}_0$ and $\xi_{\mathfrak{a}}\equiv 1 \pmod{\mathfrak{f}}$. We write $\lambda(\mathfrak{a})=\left(\frac{\xi_{\mathfrak{a}}}{|\xi_{\mathfrak{a}}|}\right)^g$. 
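For instance, take $K=\mathbb{Q}(i)$ and $\mathfrak{f}=\mathcal{O}_K$. Then $h_{\mathfrak{f}}=1$, every unit is congruent to $1\pmod{\mathfrak{f}}$, so that $g=4$, and we may take $\mathfrak{a}_0=\mathcal{O}_K$. For a prime ideal $\mathfrak{p}=(a+bi)$ lying above a rational prime $p\equiv 1\pmod{4}$, we may take $\xi_{\mathfrak{p}}=a+bi$, and then $$\lambda(\mathfrak{p})=\left(\frac{a+bi}{\sqrt{p}}\right)^4=e^{4i\theta},\qquad \theta=\arg(a+bi);$$ the fourth power removes the ambiguity coming from the four units $\pm 1,\pm i$, so $\lambda(\mathfrak{p})$ is well defined. In this example, the equidistribution of $\arg(\lambda(\mathfrak{p}))$ amounts to Hecke's equidistribution of the angles $\theta_p$ recalled above.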
\begin{theorem}[Coleman, Theorem 2.1 of \cite{Coleman1990TheDO}] \label{Coleman} Given $0\leq\varphi_1 \leq\varphi_2 \leq 2\pi,0\leq y\leq x$ and an ideal class $C\in Cl(K,\mathfrak{f})$, we define $$ S=S(x,y,C,\varphi_1,\varphi_2)=\{\mathfrak{a}\in C\mid x-y\leq N(\mathfrak{a})\leq x,\varphi_1 \leq arg(\lambda(\mathfrak{a}))\leq \varphi_2 \}. $$ and $$ T=T(x,y,C,\varphi_1,\varphi_2)=\{\mathfrak{p}\in S\mid N(\mathfrak{p})=p,prime\}. $$ Let $\epsilon>0$ be given. We have the asymptotic result, $$ \sum_{\mathfrak{p}\in T(x,y,C,\varphi_1,\varphi_2)}1=\frac{(\varphi_2-\varphi_1) y}{2\pi h_{\mathfrak{f}} \log x }\left(1+O\left(\frac{1}{\log x}\right)\right). $$ for $\varphi_2-\varphi_1>x^{-\frac{5}{24}+\epsilon}, y>x^{\frac{19}{24}+\epsilon}, x>x_{\epsilon}$. \end{theorem} A special case of Theorem \ref{Coleman} shows that the prime ideals of $\mathbb{Q}(i)$, whose norms are rational primes, are argument equidistributed. A natural question arises: among these prime ideals, are those satisfying $N(\mathfrak{p})\equiv 1\pmod{8}$ argument equidistributed? The main theorem of this paper answers this question and provides a generalization of Coleman's work. We present the argument equidistribution and asymptotic estimation of prime ideals that satisfy a given congruence relation in imaginary quadratic fields. \begin{theorem} \label{main theorem} Using the same notation as in Theorem \ref{Coleman}, fix two positive integers $m,M$ satisfying $\mathrm{gcd}(m,M)=1$, we define $$ P(x,y,C,\varphi_1,\varphi_2,m,M)=\{\mathfrak{p}\in T\mid N(\mathfrak{p})\equiv m\pmod{M}\}. $$ Let $\epsilon>0$ be given. We have the asymptotic result, $$ \sum_{\mathfrak{p}\in P(x,y,C,\varphi_1,\varphi_2,m,M)}1=\frac{A(m,M,C,\mathfrak{a}) (\varphi_2-\varphi_1) y}{2\pi h_{\mathfrak{a}} \varphi(M) \log x }\left(1+O\left(\frac{1}{\log x}\right)\right). $$ for $\varphi_2-\varphi_1>x^{-\frac{5}{24}+\epsilon}, y>x^{\frac{19}{24}+\epsilon}, x>x_{\epsilon}$, where $\varphi$ is the Euler's totient function and $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(m)\psi(C), $$ the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}, \psi\in\widehat{Cl(K,\mathfrak{a})}$ satisfying $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. \end{theorem} The complicated coefficient $A(m,M,C,\mathfrak{a})$ will be explained in Subsection \ref{section 7}. Many results which essentially depends on the equidistribution of prime ideals can be quickly generalized by Theorem \ref{main theorem}. As an application of Theorem \ref{main theorem}, we generalize C. Elsholtz and G. Harman's work \cite{MR3467390} on the conjectures of T. Ordowski and Z.-W. Sun \cite{sun2017conjectures}. T. Ordowski and Z.-W. Sun \cite{sun2017conjectures} considered all pairs of positive integers uniquely representing a prime number by a specific quadratic form, and found that the limit of certain coordinate-wise defined functions converges. C. Elsholtz and G. Harman \cite{MR3467390} discussed these conjectures and generalized the result to cover all positive definite quadratic forms that can represent prime numbers by using Coleman's result as stated above. This prompts us to consider the limit for all primes $p\equiv m\pmod{M}$ that can be represented by a given quadratic form $Q(x,y)$, where $\mathrm{gcd}(m,M)=1$. However, the result from Coleman's work used in C. Elsholtz and G. Harman's article only gives an asymptotic evaluation of the number of all primes in a polar box. 
Fortunately, using Theorem \ref{main theorem}, we can prove the following result for primes $p\equiv m\pmod{M}$ with $\mathrm{gcd}(m,M)=1$. Let $P_{m,M}=\{p\mid p\equiv m\pmod{M}, p \text{ is prime}\}$, where $m,M$ are coprime integers. Some definitions and notation in the following theorem will be explained later in Section \ref{pf of application}. \begin{theorem} \label{main} Given a primitive positive definite binary quadratic form $Q(x,y)=ax^2+bxy+cy^2$, where $a,b,c$ are integers, let $D=-\Delta=4ac-b^2>0$. For coprime integers $m,M$ such that $P_{m,M}\cap\{Q(x,y)\mid x,y \text{ are integers}\}$ is not a finite set, consider all pairs $(a_p,b_p)$ with $a_p>b_p$ such that $Q(a_p,b_p)=p\in P_{m,M}$. Then we have $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\in P_{m,M}}a_p^k}{\sum\limits_{p\leq N,p\in P_{m,M}}b_p^k}=\frac{\int_0^\beta{s(k,\theta)}\dif\theta}{\int_0^\beta{t(k,\theta)}\dif\theta},$$ where $$s(k,\theta)=(\sqrt{D}\cos\theta-b\sin\theta)^k,$$ $$t(k,\theta)=(2a)^k \sin^k\theta,$$ and $$\beta=\begin{cases} \frac{1}{2}\pi &\text{if } b+2a=0,\\ \arctan\left(\sqrt{D}/(b+2a)\right) &\text{otherwise}. \end{cases} $$ \end{theorem} Chebyshev noticed the famous phenomenon that primes congruent to 3 modulo 4 tend to predominate over those congruent to 1 modulo 4. To be more precise, let $\pi(x;q,a)=|\{p<x\mid p\equiv a\pmod{q} \text{ and } p \text{ is a prime number}\}|$. Then the inequality $\pi(x;4,3)>\pi(x;4,1)$ holds for most values of $x$. Littlewood \cite{littlewood1914distribution} showed that the sign of the function $\pi(x;4,3)-\pi(x;4,1)$ changes infinitely many times. Such phenomena and their generalizations are called ``Chebyshev's bias". Rubinstein and Sarnak \cite{em/1048515870} studied this phenomenon and explained it using logarithmic density. We are interested in whether the coordinate-wise defined functions in our theorem show a similar bias. Using the same notations as in Theorem \ref{main}, let $$\fr(N;M,m)=\frac{\sum\limits_{p\leq Pr(N),p\in P_{m,M}}a_p}{\sum\limits_{p\leq Pr(N),p\in P_{m,M}}b_p},$$ where $Pr(N)$ denotes the $N$-th prime number. Now using certain congruence conditions, we can divide all the primes representable by a given quadratic form into two sets, and investigate the function defined above with respect to the two sets. For $Q(x,y)=x^2+y^2$, we compute $\fr(N;8,1)$ and $\fr(N;8,5)$ with $N$ up to $5\times 10^6$. For $Q(x,y)=x^2+xy+y^2$, we compute $\fr(N;12,1)$ and $\fr(N;12,7)$ with $N$ up to $1\times 10^6$. As shown in the figures in Subsection \ref{fr}, in each case, although the two functions converge to the same limit, they exhibit oscillations, repeatedly intersect and alternately surpass each other. Since only oscillations and intersections matter, we consider the following new function $$R(N;M,m)=\frac{\fr(N;M,m)}{\fr(N)},$$ where $\fr(N)=\frac{\sum_{p\leq Pr(N)}a_p}{\sum_{p\leq Pr(N)}b_p}.$ We compute $R(N;8,1)$ and $R(N;8,5)$ with $N$ up to $5\times 10^6$, and the result is shown in Figure \ref{R1-R2}. Since $\fr(N;8,1),\fr(N;8,5)$ and $\fr(N)$ have the same limit as $N\rightarrow \infty$, Figure \ref{R1-R2} shows very nice symmetry. The oscillations and repeated intersections in these figures reveal the Chebyshev bias phenomenon and infinitely many sign changes of the two functions (similar to Littlewood's result) in this case. On the other hand, these figures also show similarity to the phenomenon called ``murmuration". ``Murmuration" is a phenomenon of oscillating behaviour \cite{lee2024murmurations}. In addition, L.
Devin \cite{devin2021discrepancies} conjectured that in logarithmic scale, more than half of the primes below $x$ can be written as a sum of two squares with the even square larger than the odd one. To study the separated cases $1\pmod{8}$ and $5\pmod{8}$, we define the differences of counting functions {\small $$D_1(x)=|\{p<x\mid p=a^2+4b^2,|a|>|2b|,p\in P_{1,8}\}|-|\{p<x\mid p=a^2+4b^2,|a|<|2b|,p\in P_{1,8}\}|,$$ $$D_2(x)=|\{p<x\mid p=a^2+4b^2,|a|>|2b|,p\in P_{5,8}\}|-|\{p<x\mid p=a^2+4b^2,|a|<|2b|,p\in P_{5,8}\}|,$$} and compute these two functions with $x$ up to $1.5\times 10^6$. As is shown in Figure \ref{D1D2}, the two functions both show a bias towards negative value. \section{Proof of Theorem \ref{main theorem}}\label{cor of coleman} In this section, we generalize Coleman's result Theorem \ref{Coleman} to the main result Theorem \ref{main theorem}. In Subsection \ref{section 2} and \ref{section 3}, we state some theorems of equidistribution with rough remainder estimations, which are not restricted to the case of imaginary quadratic field. Subsection \ref{section 4}, \ref{section 5} and \ref{section 6} give the proof. The main idea is to construct special Hecke L-functions to identify the prime ideal satisfying the given conditions. In Subsection \ref{section 7}, we use class field theory to translate the question to the side of Galois group and give another proof by $\check{\mathrm{C}}$ebotarev density theory. Especially, we verify that the results given by these two proofs are consistent. In Subsection \ref{section 8}, we use Coleman's result to improve the remainder estimation and obtain Theorem \ref{main theorem}. \subsection{Statement of theorem in general case} \label{section 2} Let $K$ be a number field. For an integer ideal $\mathfrak{a}$ of $K$, we denote by $Cl(K,\mathfrak{a})$ the ideal class group mod $\mathfrak{a}$ and $h_{\mathfrak{a}}$ its cardinality, which is known to be finite. Take two positive integers $m,M$ with $\mathrm{gcd}(m,M)=1$, an integer ideal $\mathfrak{a}$ and an ideal class $C$ $\mo\mathfrak{a}$. For any $x>0$, define $$ \mathscr{P}(x,m,M,C,\mathfrak{a})=\{\mathfrak{p}\in C\arrowvert N(\mathfrak{p})\leq x, N(\mathfrak{p})\equiv m\pmod{M}\}, $$ where $N$ is the norm of ideals. \begin{theorem} \label{2.1} We have the asymptotic result $$ \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1=\frac{A(m,M,C,\mathfrak{a}) x}{h_{\mathfrak{a}} \varphi(M) \log x }+o\left(\frac{x}{\log x}\right) $$ as $x\to +\infty$, where $\varphi$ is the Euler's totient function and $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(m)\psi(C), $$ the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}, \psi\in\widehat{Cl(K,\mathfrak{a})}$ satisfying $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. \end{theorem} Our theorem can derive some classical result. \begin{corollary}[Prime Ideals Theorem] \label{2.2} For any $x>0$, define \[\mathscr{P}(x,C,\mathfrak{a})=\{\mathfrak{p}\in C\arrowvert N(\mathfrak{p})\leq x\}. \] Then as $x\to +\infty$, we have the asymptotic result \[\sum_{\mathfrak{p}\in\mathscr{P}(x,C,\mathfrak{a})}1=\frac{x}{h_{\mathfrak{a}} \log x }+o\left(\frac{x}{\log x}\right). 
\] \end{corollary} \begin{proof} Using the orthogonality relations of characters, we have $$ \begin{aligned} \sum_{\mathfrak{p}\in\mathscr{P}(x,C,\mathfrak{a})}1&=\sum_{m\in(\mathbb{Z}/M\mathbb{Z})^{\times}}\sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1+O(1)\\ &=\sum_{m\in(\mathbb{Z}/M\mathbb{Z})^{\times}}\frac{A(m,M,C,\mathfrak{a}) x}{h_{\mathfrak{a}} \varphi(M) \log x }+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \varphi(M) \log x }\sum_{m\in(\mathbb{Z}/M\mathbb{Z})^{\times}}A(m,M,C,\mathfrak{a})+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \varphi(M) \log x }\sum_{\phi,\psi}\sum_{m\in(\mathbb{Z}/M\mathbb{Z})^{\times}}\phi(m)\psi(C)+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \log x }\sum_{\psi,\psi(\overline{\mathfrak{p}})=1}\psi(C)+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \log x }+o\left(\frac{x}{\log x}\right) \end{aligned} $$ as $x\to +\infty$. The result follows. \end{proof} \begin{corollary} \label{2.3} For any $x>0$, define \[\mathscr{P}(x,m,M)=\{\mathfrak{p}\arrowvert N(\mathfrak{p})\leq x, N(\mathfrak{p})\equiv m\pmod{M}\}. \] Then as $x\to +\infty$, we have the asymptotic result \[\sum_{\mathfrak{p}\in\mathscr{P}(x,m,M)}1=\frac{A(m,M) x}{\varphi(M) \log x }+o\left(\frac{x}{\log x}\right), \] where $A(m,M)=\sum_{\phi}\phi(m)$, the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}$ satisfying $\phi(N(\mathfrak{p}))=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. \end{corollary} \begin{proof} Using the orthogonality relations of characters, we have $$ \begin{aligned} \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M)}1&=\sum_{C\in Cl(K,\mathfrak{a})}\sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1+O(1)\\ &=\sum_{C\in Cl(K,\mathfrak{a})}\frac{A(m,M,C,\mathfrak{a}) x}{h_{\mathfrak{a}} \varphi(M) \log x }+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \varphi(M) \log x }\sum_{C\in Cl(K,\mathfrak{a})}A(m,M,C,\mathfrak{a})+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{h_{\mathfrak{a}} \varphi(M) \log x }\sum_{\phi,\psi}\sum_{C\in Cl(K,\mathfrak{a})}\phi(m)\psi(C)+o\left(\frac{x}{\log x}\right)\\ &=\frac{x}{\varphi(M) \log x }\sum_{\phi}\phi(m)+o\left(\frac{x}{\log x}\right)\\ &=\frac{A(m,M) x}{\varphi(M) \log x }+o\left(\frac{x}{\log x}\right) \end{aligned} $$ as $x\to +\infty$. The result follows. \end{proof} \subsection{Statement of the theorem for imaginary quadratic fields} \label{section 3} Let $K=\mathbb{Q}(\sqrt{\Delta})$ be an imaginary quadratic field where $\Delta$ is a square-free negative integer. We denote by $w$ the number of roots of unity in $K$; $w=4$ if $\Delta=-1$, $w=6$ if $\Delta=-3$ and $w=2$ if $\Delta\neq -1,-3$. Let $I(K)$ be the group of fractional ideals of $K$ and $P(K)$ the group of principal fractional ideals. Define a homomorphism \[ \lambda:P(K)\longrightarrow S^{1}, (a)\longmapsto\left(\frac{a}{|a|}\right)^{w}. \] To consider the argument equidistribution of prime ideals, we first extend $\lambda$ to $I(K)$. By the structure theorem of finite abelian groups, we have \[ Cl(K)\cong\prod_{i=1}^{r}C_{i}, \] where $C_{i}$ is a cyclic group with cardinality $s_i$. For each $i$, choose an integer ideal $\mathfrak{b}_i$ such that $\overline{\mathfrak{b}_{i}}$ is a generator of $C_i$. Take $\lambda(\mathfrak{b}_i)=\lambda(\mathfrak{b}_{i}^{s_i})^{\frac{1}{s_i}}$. Then for any $\mathfrak{a}\in I(K), \mathfrak{a}=(a)\prod_{i=1}^{r}(\mathfrak{b_i})^{t_i}$, take $\lambda(\mathfrak{a})=\lambda((a))\prod_{i=1}^{r}\lambda(\mathfrak{b_i})^{t_i}$.
It is easy to check the definition is well-defined. Now take two positive integers $m,M$ with $\mathrm{gcd}(m,M)=1$, an integer ideal $\mathfrak{a}$ and an ideal class $C$, $\mo\mathfrak{a}$. Denote by $\overline{C}$ the image of $C$ under the canonical projection $Cl(K,\mathfrak{a})\to Cl(K)$. Fix a fractional ideal $\mathfrak{a}_0\in \overline{C}^{-1}$. Then for any $x>0$ and $ 0\leq\varphi_1 \leq\varphi_2 \leq 2\pi$, define \[\mathscr{P}(x,m,M,C,\mathfrak{a},\varphi_1,\varphi_2)=\{\mathfrak{p}\in \mathscr{P}(x,m,M,C,\mathfrak{a})\arrowvert \varphi_1 \leq \mathrm{arg}\lambda(\mathfrak{p} \mathfrak{a}_0)\leq\varphi_2 \}. \] \begin{theorem} \label{3.1} The arguments of prime ideals is equidistributed, i.e. we have the asymptotic result \[\sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a},\varphi_1,\varphi_2)}1=\frac{(\varphi_2-\varphi_1) A(m,M,C,\mathfrak{a}) x}{2\pi h_{\mathfrak{a}} \varphi(M) \log x }+o\left(\frac{x}{\log x}\right) \] as $x\to +\infty$, where $\varphi$ is the Euler's totient function and $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(m)\psi(C), $$ the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}, \psi\in\widehat{Cl(K,\mathfrak{a})}$ satisfying $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. \end{theorem} \begin{corollary}[Argument Equidistribution of Prime Ideals] \label{3.2} For any $x>0$ and $ 0\leq\varphi_1 \leq\varphi_2 \leq 2\pi$, define \[\mathscr{P}(x,C,\mathfrak{a},\varphi_1,\varphi_2)=\{\mathfrak{p}\in \mathscr{P}(x,C,\mathfrak{a})\arrowvert \varphi_1 \leq \mathrm{arg}\lambda(\mathfrak{p} \mathfrak{a}_0)\leq\varphi_2 \}. \] Then as $x\to +\infty$, we have the asymptotic result \[\sum_{\mathfrak{p}\in\mathscr{P}(x,C,\mathfrak{a},\varphi_1,\varphi_2)}1=\frac{(\varphi_2-\varphi_1) x}{2\pi h_{\mathfrak{a}} \log x }+o\left(\frac{x}{\log x}\right). \] \end{corollary} \begin{corollary} \label{3.3} For any $x>0$, define \[\mathscr{P}(x,m,M,\varphi_1,\varphi_2)=\{\mathfrak{p}\in\mathscr{P}(x,m,M)\arrowvert\varphi_1 \leq \mathrm{arg}\lambda(\mathfrak{p} \mathfrak{a}_0)\leq\varphi_2 \}. \] Then as $x\to +\infty$, we have the asymptotic result \[\sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,\varphi_1,\varphi_2)}1=\frac{(\varphi_2-\varphi_1) A(m,M) x}{2\pi \varphi(M) \log x }+o\left(\frac{x}{\log x}\right), \] where $A(m,M)=\sum_{\phi}\phi(m)$, the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}$ satisfying $\phi(N(\mathfrak{p}))=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. \end{corollary} \subsection{Some lemmas} \label{section 4} To prove our theorem, we need some preparations about equidistribution and $L$-function. \begin{lemma}[\cite{Serre1989AbelianLR}] \label{4.1} Let $G$ be a compact group and $X$ the space of conjugacy classes of $G$. Then a sequence $(x_n)$ of elements of $X$ is equidistributed for the normalized Haar measure of $G$ if and only if for any irreducible character $\chi\neq1$ of $G$, we have: $$ \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\chi(x_i)=0. $$ \end{lemma} \begin{lemma}[\cite{Serre1989AbelianLR}] \label{4.2} Let $K$ be a number field, $G$ be a compact group and $X$ the space of conjugacy classes of $G$. Suppose a mapping $f: \mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}\to X$ is given. 
We make the following hypotheses: $(\star)$ Let $\rho$ be an irreducible representation of $G$ with character $\chi$, and put $$ L(\rho,s)=\prod_{\mathfrak{p}}\frac{1}{\det(1-\rho(f(\mathfrak{p})) (N(\mathfrak{p}))^{-s})}. $$ Then this product converges for $\Re(s)>1$, and extends to a meromorphic function on $\Re(s)\geq1$ having neither zero nor pole except possibly for $s=1$. The order of $L(\rho,s)$ at $s=1$ will be denoted by $-c_{\chi}$. Under these assumptions, for any irreducible character $\chi$ of $G$, we have: $$ \sum_{N(\mathfrak{p})\leq x}\chi(f(\mathfrak{p}))=c_{\chi} \frac{x}{\log x}+o\left(\frac{x}{\log x}\right) $$ as $x\to +\infty$. Moreover, the elements $f(\mathfrak{p})$ $(\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\})$ are equidistributed for the normalized Haar measure of $G$ if and only if $c_\chi=0$ for every irreducible character $\chi\neq1$ of $G$, i.e., if and only if the $L$-functions relative to the nontrivial irreducible characters of $G$ are holomorphic and nonzero at $s=1$. \end{lemma} \subsection{Proof of Theorem \ref{2.1}} \label{section 5} Let $K$ be a number field. We denote by $\mathbb{A}_K$ the adele ring, $\mathbb{A}_{K}^{\times}$ the idele group and $\mathbb{C}_{K}=\mathbb{A}_{K}^{\times}/K^{\times}$ the idele class group of $K$. For a place $v$ of $K$, we denote by $K_v$ the local field of $K$ associated with $v$ and by $\mathfrak{o}_v$ the ring of integers of $K_v$ when $v$ is finite. A Hecke character of $K$ is a continuous character of $\mathbb{C}_{K}$, or equivalently, a continuous character of $\mathbb{A}_{K}^{\times}$ which is trivial when restricted to $K^{\times}$. Let $\psi$ be a unitary Hecke character of $K$. For a place $v$ of $K$, let $\psi_v=\psi\circ i_v$ be the character of $K_{v}^{\times}$ where $i_v: K_{v}^{\times}\to\mathbb{A}_{K}^{\times}$ is the canonical embedding. Then we have $\psi=\otimes_{v}^{'} \psi_v$. For a finite place $v$, $\psi_v$ is said to be unramified if $\psi_v|_{\mathfrak{o}_{v}^{\times}}$ is trivial, and ramified otherwise. Let $S$ be the finite set consisting of all infinite places of $K$ and all finite places $v$ at which $\psi_v$ is ramified. The Hecke $L$-function associated with the Hecke character $\psi$ is defined as $$ L(\psi,s)=\prod_{v\notin S}(1-\psi_{v}(\pi_v) (Nv)^{-s})^{-1}, $$ where $\pi_v$ is a uniformizer of $K_v$ and $N$ is the norm of ideals. \begin{lemma}[Theorem 7.14 of \cite{MR1728620}] \label{5.1} The Hecke $L$-function $L(\psi,s)$ extends to a meromorphic function on the whole complex plane with $L(\psi,1+it)\neq 0$ for $t\in\mathbb{R}$. It has poles if and only if $\psi(x)=|x|_{\mathbb{A}_{K}}^{\lambda_0}$ for all $x\in\mathbb{A}_{K}^{\times}$, where $|\cdot|_{\mathbb{A}_{K}}$ is the adelic absolute value and $\lambda_0$ is some fixed purely imaginary number. In this case, $L(\psi,s)=\zeta_K(s+\lambda_0)$, where $\zeta_K$ is the Dedekind $\zeta$-function of $K$.
\end{lemma} \begin{proof}[Proof of Theorem 2.1] Using orthogonal relation of characters, we have $$ \begin{aligned} \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1&=\frac{1}{\varphi(M)}\sum_{\mathfrak{p}\in\mathscr{P}(x,C,\mathfrak{a})}\sum_{\phi\in\widehat{(\mathbb{Z}/m\mathbb{Z})^{\times}}}\overline{\phi}(m)\phi(N(\mathfrak{p}))\\ &=\frac{1}{\varphi(M) h_\mathfrak{a}}\sum_{\phi\in\widehat{(\mathbb{Z}/m\mathbb{Z})^{\times}}}\overline{\phi}(m)\sum_{N(\mathfrak{p})\leq x}\phi(N(\mathfrak{p}))\sum_{\psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}}\overline{\psi}(C)\psi(\overline{\mathfrak{p}})\\ &=\frac{1}{\varphi(M) h_\mathfrak{a}}\sum_{\phi\in\widehat{(\mathbb{Z}/m\mathbb{Z})^{\times}}}\sum_{\psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}}\overline{\phi}(m)\overline{\psi}(C)\sum_{N(\mathfrak{p})\leq x}\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}}). \end{aligned} $$ Here, if $(N(\mathfrak{p}),M)>1$, take $\phi(N(\mathfrak{p}))=0$ and similarly if $\mathfrak{p}$ divides $\mathfrak{a}$, take $\psi(\overline{\mathfrak{p}})=0$. For fixed $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}, \psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}$, define an $L$-function $$ L(\phi,\psi,s)=\prod_{\mathfrak{p}}(1-\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})(N(\mathfrak{p}))^{-s})^{-1}. $$ We will show $L(\phi,\psi,s)$ is a Hecke $L$-function up to a finite number of terms. First, we lift $\phi$ to a unitary Hecke character $\widetilde{\phi}$ of $K$. Define $\widetilde{\phi}$ to be the following composite mapping $$ \mathbb{C}_K\stackrel{\widetilde{N_{K/\mathbb{Q}}}}{\longrightarrow}\mathbb{C}_{\mathbb{Q}}\longrightarrow\mathbb{Q}^{\times}\backslash\mathbb{A}_{\mathbb{Q}}^{\times}/\mathbb{R}_{>0}\cong\prod_{p}\mathbb{Z}_{p}^{\times}\longrightarrow(\mathbb{Z}/M\mathbb{Z})^{\times}\stackrel{\phi^{-1}}{\longrightarrow}\mathbb{C}^{\times}. $$ Here the isomorphism is due to the strong approximation of the idele group $\mathbb{A}_{K}^{\times}$ and $\widetilde{N_{K/\mathbb{Q}}}$ is induced by the global norm mapping $$ N_{K/\mathbb{Q}}:\mathbb{A}_K\longrightarrow\mathbb{A}_{\mathbb{Q}}, (x_w)_w\longmapsto\left(\prod_{w|v}N_{w|v}(x_w)\right)_v, $$ where $N_{w|v}: K_{w}\longrightarrow \mathbb{Q}_v$ is the norm mapping of local fields. For the finite place $v$ of $K$ satisfying $(Nv,M)=1$, it is obvious from definition that $\widetilde{\phi}_v|_{\mathfrak{o}_{v}^{\times}}$ is trivial, i.e., $\widetilde{\phi}_v$ is unramified. Moreover, in this case, for a uniformizer $\pi_v$ of $K_v$, we have $\widetilde{\phi}_v(\pi_v)=\phi(Nv)$. Therefore the condition that $\widetilde{\phi}_v$ is unramified and $\widetilde{\phi}_v(\pi_v)=\phi(Nv)$ holds for almost all finite place $v$ of $K$. Next we lift $\psi$ to a unitary Hecke character $\widetilde{\psi}$ of $K$. It is well-known that the ideal class group $Cl(K,\mathfrak{a})$ can be realized as a quotient group of the idele class group $\mathbb{C}_K$. More concretely, we have an isomorphism $\mathbb{C}_{K}/\overline{U(\mathfrak{a})}\cong Cl(K,\mathfrak{a})$ where $\overline{U(\mathfrak{a})}$ is the image of $U(\mathfrak{a})$ in $\mathbb{C}_{K}$ and $U(\mathfrak{a})=\prod_{v}U_{v}(\mathfrak{a})\subset \mathbb{A}_{K}^{\times}$, $U_{v}(\mathfrak{a})=\ker(\mathfrak{o}_{v}^{\times}\to(\mathfrak{o}_{v}/\mathfrak{a}\mathfrak{o}_v)^{\times})$ if $v$ is a finite place, $U_{v}(\mathfrak{a})=\mathbb{C}^{\times}$ if $v$ is a complex place and $U_{v}(\mathfrak{a})=\mathbb{R}_{>0}$ if $v$ is a real place. 
Define $\widetilde{\psi}$ as the composition of $\psi$ with this isomorphism. For a finite place $v$ of $K$ not dividing $\mathfrak{a}$, it is obvious from the definition that $\widetilde{\psi}_v|_{\mathfrak{o}_{v}^{\times}}$ is trivial, i.e., $\widetilde{\psi}_v$ is unramified. Moreover, in this case, for a uniformizer $\pi_v$ of $K_v$, we have $\widetilde{\psi}_v(\pi_v)=\psi(\overline{v})$ because the isomorphism $\mathbb{C}_{K}/\overline{U(\mathfrak{a})}\cong Cl(K,\mathfrak{a})$ maps $i_v(\pi_v)$ to $\overline{v}$. Therefore the condition that $\widetilde{\psi}_v$ is unramified and $\widetilde{\psi}_v(\pi_v)=\psi(\overline{v})$ holds for almost all finite places $v$ of $K$. The discussion above shows that $L(\phi,\psi,s)$ is equal to the Hecke $L$-function $L(\widetilde{\phi} \widetilde{\psi},s)$ up to a finite number of terms. Thus they have the same analytic properties. By Lemma \ref{5.1}, $L(\widetilde{\phi} \widetilde{\psi},s)$ has poles if and only if $\widetilde{\phi}(x) \widetilde{\psi}(x)=|x|_{\mathbb{A}_{K}}^{\lambda_0}$ for all $x\in\mathbb{A}_{K}^{\times}$, where $\lambda_0$ is some fixed purely imaginary number. In this case, take an infinite place $v$; then we have $\widetilde{\phi}_v(x) \widetilde{\psi}_v(x)=|x|_{v}^{\lambda_0}$ for all $x\in K_{v}^{\times}$. However, by the definition of $\widetilde{\phi}$ and $\widetilde{\psi}$, it is easy to see that $\widetilde{\phi}_v \widetilde{\psi}_v$ is trivial when $v$ is a complex place and $\widetilde{\phi}_v \widetilde{\psi}_v|_{\mathbb{R}_{>0}}$ is trivial when $v$ is a real place, and both cases force $\lambda_0=0$. As a result, $L(\widetilde{\phi} \widetilde{\psi},s)$ has poles if and only if $\widetilde{\phi} \widetilde{\psi}$ is trivial, and in this case $L(\widetilde{\phi} \widetilde{\psi},s)=\zeta_{K}(s)$. The last condition is equivalent to $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$, because a Hecke character is determined by its local multiplicative characters at almost all places (more strongly, at any set of places of density strictly greater than $\frac{1}{2}$). Consequently, the $L$-function $L(\phi,\psi,s)$ converges for $\Re(s)>1$, and extends to a meromorphic function on $\Re(s)\geq1$ having neither zero nor pole except possibly a simple pole at $s=1$, and this occurs if and only if $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. Taking $G=(\mathbb{Z}/M\mathbb{Z})^{\times}\times Cl(K,\mathfrak{a})$ and $f(\mathfrak{p})=(N(\mathfrak{p}),\overline{\mathfrak{p}})$ in Lemma \ref{4.2}, we obtain $$ \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1=\frac{A(m,M,C,\mathfrak{a}) x}{h_{\mathfrak{a}} \varphi(M) \log x }+o\left(\frac{x}{\log x}\right) $$ as $x\to +\infty$. The result follows. \end{proof} \subsection{Proof of Theorem \ref{3.1}} \label{section 6} \begin{proof}[Proof of Theorem \ref{3.1}] First notice that $\{\varphi_{n}:x\mapsto x^n,n\in\mathbb{Z}\}$ are all the irreducible characters of the circle $S^1$. Therefore, by Lemma \ref{4.1} and Theorem \ref{2.1}, it suffices to show $$ \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}\lambda^{n}(\mathfrak{p})=o\left(\frac{x}{\log x}\right), \quad n\neq 0, $$ as $x\to +\infty$.
As in the proof of Theorem \ref{2.1}, using the orthogonality relations of characters, we have $$ \begin{aligned} \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}\lambda^{n}(\mathfrak{p})&=\frac{1}{\varphi(M)}\sum_{\mathfrak{p}\in\mathscr{P}(x,C,\mathfrak{a})}\sum_{\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}}\overline{\phi}(m)\phi(N(\mathfrak{p}))\lambda^{n}(\mathfrak{p})\\ &=\frac{1}{\varphi(M) h_\mathfrak{a}}\sum_{\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}}\overline{\phi}(m)\sum_{N(\mathfrak{p})\leq x}\phi(N(\mathfrak{p}))\sum_{\psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}}\overline{\psi}(C)\psi(\overline{\mathfrak{p}})\lambda^{n}(\mathfrak{p})\\ &=\frac{1}{\varphi(M) h_\mathfrak{a}}\sum_{\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}}\sum_{\psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}}\overline{\phi}(m)\overline{\psi}(C)\sum_{N(\mathfrak{p})\leq x}\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})\lambda^{n}(\mathfrak{p}). \end{aligned} $$ For fixed $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}$, $\psi\in\widehat{(Cl(K,\mathfrak{a}))^{\times}}$ and $n\neq 0$, define an $L$-function $$ L(\phi,\psi,\lambda^n,s)=\prod_{\mathfrak{p}}(1-\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})\lambda^{n}(\mathfrak{p})(N(\mathfrak{p}))^{-s})^{-1}. $$ We will show that $L(\phi,\psi,\lambda^n,s)$ is a Hecke $L$-function with no pole on $\Re(s)=1$ up to a finite number of terms, and the result follows by taking $G=(\mathbb{Z}/M\mathbb{Z})^{\times}\times Cl(K,\mathfrak{a})\times S^1$ and $f(\mathfrak{p})=(N(\mathfrak{p}),\overline{\mathfrak{p}},\lambda(\mathfrak{p}))$ in Lemma \ref{4.2}. Lift $\phi$, $\psi$ to unitary Hecke characters $\widetilde{\phi}$, $\widetilde{\psi}$ of $K$ as in the proof of Theorem \ref{2.1}. It remains to lift $\lambda$ to a unitary Hecke character $\widetilde{\lambda}$ of $K$. For a finite place $v$ of $K$, define a character $\widetilde{\lambda}_v$ of $K_{v}^{\times}$ by letting $\widetilde{\lambda}_v|_{\mathfrak{o}_{v}^{\times}}$ be trivial and $\widetilde{\lambda}_v(\pi_v)=\lambda(v)$ for any uniformizer $\pi_v$ of $K_{v}$. For the unique complex place $v=\infty$, define a character $\widetilde{\lambda}_{\infty}$ of $\mathbb{C}^{\times}$ by $\widetilde{\lambda}_{\infty}(x)=\lambda(x)^{-1}$ for any $x\in\mathbb{C}^{\times}$. Let $\widetilde{\lambda}=\otimes_{v}^{'} \widetilde{\lambda}_v$. It is easy to check that $\widetilde{\lambda}$ is indeed a Hecke character of $K$ and, from the definition, $\widetilde{\lambda}_v$ is unramified and $\widetilde{\lambda}_v(\pi_v)=\lambda(v)$ holds for all finite places $v$ of $K$. Now $L(\phi,\psi,\lambda^{n},s)$ is equal to the Hecke $L$-function $L(\widetilde{\phi} \widetilde{\psi} {\widetilde{\lambda}}^n,s)$ up to a finite number of terms. By Lemma \ref{5.1}, $L(\widetilde{\phi} \widetilde{\psi} {\widetilde{\lambda}}^n,s)$ has poles if and only if $\widetilde{\phi}(x) \widetilde{\psi}(x) {\widetilde{\lambda}}^{n}(x)=|x|_{\mathbb{A}_{K}}^{\lambda_0}$ for all $x\in\mathbb{A}_{K}^{\times}$, where $\lambda_0$ is some fixed purely imaginary number. In this case, we have $|x|_{\infty}^{\lambda_0}=\widetilde{\phi}_{\infty}(x) \widetilde{\psi}_{\infty}(x) {\widetilde{\lambda}}_{\infty}^n(x)=\left(\frac{x}{|x|}\right)^{-w n}$ for all $x\in \mathbb{C}^{\times}$, which is impossible unless $n=0$.
\end{proof} \subsection{View of $\check{\mathrm{C}}$ebotarev density theory} \label{section 7} In this subsection, we realize Theorem \ref{2.1} as a corollary of $\check{\mathrm{C}}$ebotarev density theory and Theorem \ref{3.1} as a corollary of a result slightly different from $\check{\mathrm{C}}$ebotarev density theory. Let $K$ be a number field. Take an integer ideal $\mathfrak{a}$ of $K$, an ideal class $C\in Cl(K,\mathfrak{a})$ and two positive integers $m,M$ with $\mathrm{gcd}(m,M)=1$. Recall that $$ \mathscr{P}(x,m,M,C,\mathfrak{a})=\{\mathfrak{p}\in C\arrowvert N(\mathfrak{p})\leq x, N(\mathfrak{p})\equiv m\pmod{M}\}. $$ We denote by $K^{ab}$ the maximal abelian extension of $K$. By global class field theory, we have the global Artin mapping $$ \rho_K:\mathbb{C}_K\longrightarrow {\rm{Gal}}(K^{ab}/K) $$ which induces an isomorphism $$ \widetilde{\rho_K}:Cl(K,\mathfrak{a})\longrightarrow {\rm{Gal}}(K(\mathfrak{a})/K), $$ where $K(\mathfrak{a})$ is the fixed field of $\rho_K(\overline{U(\mathfrak{a})})$ under the Galois correspondence. For a prime ideal $\mathfrak{p}$ not dividing $\mathfrak{a}$, the isomorphism takes the class $\overline{\mathfrak{p}}$ to $\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)$, the Frobenius element of $\mathfrak{p}$. Thus the condition $\overline{\mathfrak{p}}=C$ can be translated into $\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)=\widetilde{\rho_K}(C)$. Take a prime ideal $\mathfrak{p}$ of $K$ such that $(N(\mathfrak{p}),M)=1$. Let $\zeta_M$ be a primitive $M$-th root of unity, $\mathbb{Q}(\zeta_M)$ the $M$-th cyclotomic field and $$ \widetilde{\rho_{\mathbb{Q}}}:(\mathbb{Z}/M\mathbb{Z})^{\times}\longrightarrow {\rm{Gal}}(\mathbb{Q}(\zeta_M)/\mathbb{Q}) $$ the canonical isomorphism. Let $L_1=K\cap\mathbb{Q}(\zeta_M)$. We have a canonical isomorphism $$ {\rm{Gal}}(K(\zeta_M)/K)\stackrel{Res}{\longrightarrow}{\rm{Gal}}(\mathbb{Q}(\zeta_M)/L_1). $$ Suppose $\mathfrak{p}\cap L_1=\mathfrak{q}$ and $\mathfrak{p}\cap \mathbb{Z}=p\mathbb{Z}$. By the functorial properties of Frobenius, we have: $$ \begin{aligned} \left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)\bigg|_{\mathbb{Q}(\zeta_M)}&=\left(\frac{\mathbb{Q}(\zeta_M)/L_1}{\mathfrak{q}}\right)^{f(\mathfrak{p}|\mathfrak{q})}\\ &=\left(\frac{\mathbb{Q}(\zeta_M)/\mathbb{Q}}{p}\right)^{f(\mathfrak{q}|p) f(\mathfrak{p}|\mathfrak{q})}\\ &=\widetilde{\rho_{\mathbb{Q}}}(p)^{f(\mathfrak{p}|p)}\\ &=\widetilde{\rho_{\mathbb{Q}}}(N(\mathfrak{p})). \end{aligned} $$ Thus if $\widetilde{\rho_{\mathbb{Q}}}(m)\notin {\rm{Gal}}(\mathbb{Q}(\zeta_M)/L_1)$, there is no $\mathfrak{p}$ satisfying $N(\mathfrak{p})\equiv m\pmod{M}$. Suppose $\widetilde{\rho_{\mathbb{Q}}}(m)\in {\rm{Gal}}(\mathbb{Q}(\zeta_M)/L_1)$ and still denote by $\widetilde{\rho_{\mathbb{Q}}}(m)$ its preimage in ${\rm{Gal}}(K(\zeta_M)/K)$ under the canonical isomorphism. Then the condition $N(\mathfrak{p})\equiv m\pmod{M}$ can be translated into $\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)=\widetilde{\rho_{\mathbb{Q}}}(m)$. Take $L_2=K(\mathfrak{a})\cap K(\zeta_M)$, $\widetilde{L_2}=L_2\cap \mathbb{Q}(\zeta_M)$ and $L_3=K(\mathfrak{a}) K(\zeta_M)$. Note that $\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)\bigg|_{L_2}=\left(\frac{L_2/K}{\mathfrak{p}}\right)=\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)\bigg|_{L_2}$.
Thus if $\widetilde{\rho_K}(C)|_{L_2}\neq \widetilde{\rho_{\mathbb{Q}}}(m)|_{L_2}$, or equivalently, $\widetilde{\rho_K}(C)|_{\widetilde{L_2}}\neq \widetilde{\rho_{\mathbb{Q}}}(m)|_{\widetilde{L_2}}$, there is no $\mathfrak{p}$ satisfying $\overline{\mathfrak{p}}=C$ and $N(\mathfrak{p})\equiv m\pmod{M}$. Suppose $\widetilde{\rho_K}(C)|_{L_2}= \widetilde{\rho_{\mathbb{Q}}}(m)|_{L_2}$. Then an element $\sigma_{m,C}\in {\rm{Gal}}(L_3/K)$ is uniquely determined up by $\widetilde{\rho_K}(C)$ and $\widetilde{\rho_{\mathbb{Q}}}(m)$ while the element determined up by $\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)$ and $\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)$ is $\left(\frac{L_3/K}{\mathfrak{p}}\right)$. Hence the condition $\overline{\mathfrak{p}}=C$, $N(\mathfrak{p})\equiv m\pmod{M}$ can be translated into $\sigma_{m,C}=\left(\frac{L_3/K}{\mathfrak{p}}\right)$. By $\check{\mathrm{C}}$ebotarev density theory, we have $$ \begin{aligned} \sum_{\mathfrak{p}\in\mathscr{P}(x,m,M,C,\mathfrak{a})}1&=\frac{1}{[L_3:K]} \frac{x}{\log x}+o\left(\frac{x}{\log x}\right)\\ &=\frac{[L_2:K]}{[K(\zeta_M):K] h(\mathfrak{a})} \frac{x}{\log x}+o\left(\frac{x}{\log x}\right)\\ &=\frac{[\widetilde{L_2}:\mathbb{Q}]}{\varphi(M) h(\mathfrak{a})} \frac{x}{\log x}+o\left(\frac{x}{\log x}\right). \end{aligned} $$ as $x\to +\infty$. Now we explain that the result here is consistent with Theorem \ref{2.1}. Recall $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(m)\psi(C), $$ the sum ranging over $\phi\in\widehat{(\mathbb{Z}/M\mathbb{Z})^{\times}}, \psi\in\widehat{Cl(K,\mathfrak{a})}$ satisfying $\phi(N(\mathfrak{p}))\psi(\overline{\mathfrak{p}})=1$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. Equate $\phi$ with character of ${\rm{Gal}}(\mathbb{Q}(\zeta_M)/\mathbb{Q})$ through $\widetilde{\rho_Q}$ while $\psi$ with character of ${\rm{Gal}}(K(\mathfrak{a})/K))$ through $\widetilde{\rho_K}$. Then $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(\widetilde{\rho_Q}(m))\psi(\widetilde{\rho_K}(C)), $$ the sum ranging over $\phi\in\widehat{{\rm{Gal}}(\mathbb{Q}(\zeta_M)/\mathbb{Q})}, \psi\in\widehat{{\rm{Gal}}(K(\mathfrak{a})/K))}$ satisfying $$\phi\left(\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)\bigg|_{\mathbb{Q}(\zeta_M)}\right)\psi\left(\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)\right)=1$$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$. For fixed $\phi$, define character $\widetilde{\phi}$ to be the following composite mapping $$ {\rm{Gal}}(K(\zeta_M)/K))\stackrel{Res}{\longrightarrow}{\rm{Gal}}(\mathbb{Q}(\zeta_M)/L_1)\stackrel{\phi}{\longrightarrow}\mathbb{C}^{\times}. $$ Then $$ \begin{aligned} &\widetilde{\phi}\left(\left(\frac{L_3/K}{\mathfrak{p}}\right)\bigg|_{K(\zeta_M)}\right)\psi\left(\left(\frac{L_3/K}{\mathfrak{p}}\right)\bigg|_{K(\mathfrak{a})}\right)\\ &=\widetilde{\phi}\left(\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)\right)\psi\left(\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)\right)\\ &=\phi\left(\left(\frac{K(\zeta_M)/K}{\mathfrak{p}}\right)\bigg|_{\mathbb{Q}(\zeta_M)}\right)\psi\left(\left(\frac{K(\mathfrak{a})/K}{\mathfrak{p}}\right)\right)=1 \end{aligned} $$ for almost all $\mathfrak{p}\in\mathrm{Spec}(\mathcal{O}_K)\backslash\{0\}$, i.e. $$ \widetilde{\phi}(\sigma|_{K(\zeta_M)})\psi(\sigma|_{K(\mathfrak{a})})=1, \forall\sigma\in {\rm{Gal}}(K(\mathfrak{a})/K). 
$$ The last condition is equivalent to that both $\widetilde{\phi}|_{{\rm{Gal}}(K(\zeta_M)/L_2)}$ and $\psi|_{{\rm{Gal}}(K(\mathfrak{a})/L_2)}$ is trivial and when view $\widetilde{\phi}, \psi$ as characters of ${\rm{Gal}}(L_2/K)$ naturally, $\widetilde{\phi} \psi$ is trivial. Therefore we have $$ A(m,M,C,\mathfrak{a})=\sum_{\phi,\psi}\phi(\widetilde{\rho_Q}(m)|_{\widetilde{L_2}})\psi(\widetilde{\rho_K}(C)|_{\widetilde{L_2}}), $$ the sum ranging over $\phi\in\widehat{{\rm{Gal}}(\widetilde{L_2}/\mathbb{Q})}, \psi\in\widehat{{\rm{Gal}}(\widetilde{L_2}/L_1)}$ satisfying $\phi \psi|_{{\rm{Gal}}(\widetilde{L_2}/L_1)}$ is trivial. A direct calculation shows $$ \begin{aligned} A(m,M,C,\mathfrak{a})&=\sum_{\phi,\psi}\phi(\widetilde{\rho_Q}(m)|_{\widetilde{L_2}})\psi(\widetilde{\rho_K}(C)|_{\widetilde{L_2}})\\ &=\sum_{\phi\in\widehat{{\rm{Gal}}(\widetilde{L_2}/\mathbb{Q})}}\phi(\widetilde{\rho_Q}(m)|_{\widetilde{L_2}})\overline{\phi}(\widetilde{\rho_K}(C)|_{\widetilde{L_2}})\\ &=\sum_{\phi\in\widehat{{\rm{Gal}}(\widetilde{L_2}/\mathbb{Q})}}\phi(\widetilde{\rho_Q}(m)|_{\widetilde{L_2}} (\widetilde{\rho_K}(C)|_{\widetilde{L_2}})^{-1}). \end{aligned} $$ By the orthogonal relation of characters, the value equals to $[\widetilde{L_2}:\mathbb{Q}]$ if $\widetilde{\rho_K}(C)|_{\widetilde{L_2}}= \widetilde{\rho_{\mathbb{Q}}}(m)|_{\widetilde{L_2}}$ and $0$ else. We point out that $\check{\mathrm{C}}$ebotarev density theory is a corollary of Lemma \ref{4.2} by taking $G={\rm{Gal}}(L/K)$ for a Galois extension $L$ of a number field $K$, $f(\mathfrak{p})=\left(\frac{L/K}{\mathfrak{p}}\right)$ for any prime ideal $\mathfrak{p}$ of $K$ unramified in $L$ and $\rho$ an irreducible representation of $G={\rm{Gal}}(L/K)$. The corresponding $L$-function is indeed the Artin $L$-function of $K$ up to a finite number of terms. To realize Theorem \ref{3.1} through a similar way of Theorem \ref{2.1} above, take in Lemma \ref{4.2} $G={\rm{Gal}}(L/K)\times S^1$ for a Galois extension of an imaginary quadratic field $K$, $f(\mathfrak{p})=\left(\left(\frac{L/K}{\mathfrak{p}}\right), \lambda(\mathfrak{p})\right)$ for any prime ideal $\mathfrak{p}$ of $K$ unramified in $L$ and $\rho\otimes\varphi_n$ an irreducible representation of $G$ where $\rho$ is an irreducible representation of ${\rm{Gal}}(L/K)$ and $\varphi_{n}:x\mapsto x^n,n\in\mathbb{Z}$. \subsection{Proof of Theorem \ref{main theorem}} \label{section 8} \begin{proof}[Proof of Theorem \ref{main theorem}] We use the notation of Theorem \ref{Coleman}, Theorem \ref{main theorem} and Subsection \ref{section 7}. The idea of the proof is to translate the restrictive conditions of the prime ideals to their associated Frobenius elements of the Galois group through class field theory, and translate it back. As the discussion of Subsection \ref{section 7}, if $\widetilde{\rho_K}(C)|_{L_2}\neq \widetilde{\rho_{\mathbb{Q}}}(m)|_{L_2}$, there is no $\mathfrak{p}$ satisfying $\overline{\mathfrak{p}}=C$ and $N(\mathfrak{p})\equiv m\pmod{M}$, and in this case we have $A(m,M,C,\mathfrak{a})=0$. The result is trivial. Suppose now $\widetilde{\rho_K}(C)|_{L_2}= \widetilde{\rho_{\mathbb{Q}}}(m)|_{L_2}$ and $\sigma_{m,C}$ is the element of ${\rm{Gal}}(L_3/K)$ uniquely determined up by $\widetilde{\rho_K}(C)$ and $\widetilde{\rho_{\mathbb{Q}}}(m)$. The condition $\overline{\mathfrak{p}}=C$, $N(\mathfrak{p})\equiv m\pmod{M}$ can be translated into $\left(\frac{L_3/K}{\mathfrak{p}}\right)=\sigma_{m,C}$. 
Note that $L_3/K$ is an abelian extension, and a basic result of class field theory, as a generalization of the classical Kronecker-Weber theorem, states that there exists an ideal $\mathfrak{b}$ of $K$ satisfying $L_3\subset K(\mathfrak{b})$, where $K(\mathfrak{b})$ is the fixed field of $\rho_K(\overline{U(\mathfrak{b})})$ under the Galois correspondence. Now the condition $\overline{\mathfrak{p}}=C$, $N(\mathfrak{p})\equiv m\pmod{M}$ can be translated into the condition that $\left(\frac{K(\mathfrak{b})/K}{\mathfrak{p}}\right)$ is one of the elements of ${\rm{Gal}}(K(\mathfrak{b})/K)$ whose restriction to $L_3$ is precisely $\sigma_{m,C}$. Denote $[K(\mathfrak{b}):L_3]$ by $s$ and the $s$ elements of ${\rm{Gal}}(K(\mathfrak{b})/K)$ extending $\sigma_{m,C}$ by $\sigma_i, 1\leq i\leq s$. The isomorphism provided by class field theory $$ \overline{\rho_K}:Cl(K,\mathfrak{b})\longrightarrow {\rm{Gal}}(K(\mathfrak{b})/K) $$ translates the restrictive conditions back to $\overline{\mathfrak{p}}\in\{\overline{\rho_K}^{-1}(\sigma_i)\mid 1\leq i\leq s\}$. Since $h_{\mathfrak{b}}=[K(\mathfrak{b}):K]=[K(\mathfrak{b}):L_3] [L_3:K]=s [L_3:K]$, the result is immediate from Theorem \ref{Coleman} and the discussion in Subsection \ref{section 7}. \end{proof} \section{An application} \label{pf of application} As an application of Theorem \ref{main theorem}, we generalize C. Elsholtz and G. Harman's work \cite{MR3467390} on the conjectures of T. Ordowski and Z.-W. Sun \cite{sun2017conjectures}. For better illustration, we explain some definitions here. Note that in this article we only consider binary quadratic forms with integer coefficients. A binary quadratic form $Q(x,y)=ax^2+bxy+cy^2$ is called \textit{primitive} if the coefficients of the form are relatively prime, i.e., $\mathrm{gcd}(a,b,c)=1$. We say that a quadratic form can represent a prime number $p$ if there are some integers $a_p$ and $b_p$ such that $p=Q(a_p,b_p)$. It is easy to see that a quadratic form can represent prime numbers only if the form is primitive. The \textit{discriminant} of a quadratic form $Q(x,y)=ax^2+bxy+cy^2$ is $\Delta=b^2-4ac$. A quadratic form $Q(x,y)=ax^2+bxy+cy^2$ is said to be \textit{positive definite} if $Q(x,y)>0$ for all real numbers $(x,y)\neq(0,0)$. A binary quadratic form is positive definite if and only if $a>0$ and its discriminant is negative. For a given primitive positive definite binary quadratic form, C. Elsholtz and G. Harman considered all representable primes in their work \cite{MR3467390}. Now we would like to consider part of these representable primes. Let $P_{m,M}=\{p\mid p\equiv m\pmod{M}, p \text{ is prime}\}$, where $m,M$ are coprime integers. For example, considering the binary quadratic form $Q(x,y)=x^2+y^2$, a prime $p$ can be represented by $Q(x,y)$ and $p=Q(a_p,b_p)=a_p^2+b_p^2$ with $a_p>b_p$ if and only if $p\equiv1\pmod{4}$. By the theorem proved by C. Elsholtz and G. Harman \cite[Theorem 1.4]{MR3467390}, for all such primes and these pairs $(a_p,b_p)$, we have $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv1 \pmod{4}}a_p}{\sum\limits_{p\leq N,p\equiv1 \pmod{4}}b_p}=1+\sqrt{2}.$$ Now let $p\equiv1\pmod{8}$ be a prime. Then $p$ can be uniquely represented as a sum of two squares, that is, $p=a_p^2+b_p^2$ with $a_p>b_p$. We find that only considering $p\equiv1\pmod{8}$ does not change the limit. This still holds for $p\equiv5\pmod{8}$.
That is, $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv1\pmod{4}}a_p}{\sum\limits_{p\leq N,p\equiv1\pmod{4}}b_p}=\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv1\pmod{8}}a_p}{\sum\limits_{p\leq N,p\equiv1\pmod{8}}b_p}=\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv5\pmod{8}}a_p}{\sum\limits_{p\leq N,p\equiv5\pmod{8}}b_p}=1+\sqrt{2}.$$ This prompts us to consider Theorem \ref{main} and prove it. C. Elsholtz and G. Harman \cite{MR3467390} concentrated on primes with modulus in polar boxes. They used Coleman's result Theorem \ref{Coleman} to asymptotically evaluate $\sum_{p\leq N}a_p^k$ and $\sum_{p\leq N}b_p^k$ by dissecting certain sectors into polar boxes and summing up over all the intervals, and hence they canceled common factors and proved their result. The proof of Theorem \ref{main} is a straightforward generalization of the proof given by C. Elsholtz and G. Harman, and the only difficulty here is to get the asymptotic evaluation. Fortunately, it suffices to apply our main result Theorem \ref{main theorem} and use the similar method of C. Elsholtz and G. Harman for asymptotically evaluating $\sum_{p\leq N,p\in P_{m,M}}a_p^k$ and $\sum_{p\leq N,p\in P_{m,M}}b_p^k$. Now, we have proved Theorem \ref{main}. Moreover, we generalize the result by substitute $a_p$ and $b_p$ with polynomial functions $f(a_p,b_p)$ and $g(a_p,b_p)$. In the following theorem, for a bivariate polynomial $f(x,y)$ with $\deg{f}(x,y)=n$, we define $$\tilde{f}(x,y)=\frac{1}{n!} \frac{\dif^n f(xt,yt)}{\dif t^n}.$$ \begin{theorem} \label{polynomial} Using the same notation as in Theorem \ref{main}. Suppose $f(x,y),g(x,y)$ are two bivariate polynomials over $\mathbb{R}$, $\deg{f(x,y)}=\deg{g(x,y)}=n$. Then $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N}f(a_p,b_p)}{\sum\limits_{p\leq N}g(a_p,b_p)}=\frac{\int_0^{\beta}{\tilde{f}(s(n,\theta),t(n,\theta))\dif\theta}}{\int_0^{\beta}{\tilde{g}(s(n,\theta),t(n,\theta))\dif\theta}}.$$ \end{theorem} \begin{remark} \label{rek} \begin{enumerate} \item[(1)] This theorem holds for only considering all primes $p\in P_{m,M}$ with coprime integers $m,M$, if $P_{m,M}\cap\{Q(x,y)\mid x,y \text{ are integers}\}$ is not a finite set, that is, $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\in P_{m,M}}f(a_p,b_p)}{\sum\limits_{p\leq N,p\in P_{m,M}}g(a_p,b_p)}=\frac{\int_0^{\beta}{\tilde{f}(s(n,\theta),t(n,\theta))\dif\theta}}{\int_0^{\beta}{\tilde{g}(s(n,\theta),t(n,\theta))\dif\theta}}.$$ \item[(2)] This theorem also holds for homogeneous functions $f(x,y)$ and $g(x,y)$ with degree $n$. In this case, we take $\tilde{f}(x,y)=f(x,y)$ and $\tilde{g}(x,y)=g(x,y)$. \end{enumerate} \end{remark} The proof of Theorem \ref{polynomial} and Remark \ref{rek} is another straightforward result of the proof above, since the asymptotic evaluation of the numerator and denominator is easy to find using Theorem \ref{Coleman} or Theorem \ref{main theorem}. \section{Some conjectures} Inspired by the Chebyshev’s bias in \cite{em/1048515870} and ``murmuration” in \cite{lee2024murmurations}, we investigate the difference of distributions between primes congruent to $1 \pmod{8}$ and primes congruent to $5\pmod{8}$ and their representations by quadratic forms. We find some fascinating phenomena which we can not explain. We list two conjectures. \subsection{Chebyshev’s bias} \label{fr} Recall $P_{m,M}=\{p\mid p\equiv m\pmod{M},p\text{ is prime}\}$ with $m,M$ positive integers and $\mathrm{gcd}(m,M)=1$. 
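All numerical data in this section were obtained by direct computation. For the form $Q(x,y)=x^2+y^2$, a routine of the following type suffices; the listing below is a minimal illustrative sketch in Python (not the exact program used to produce the figures), which accumulates the partial sums of $a_p$ and $b_p$ in the residue classes $1$ and $5$ modulo $8$ that enter the functions studied below.
\begin{verbatim}
# Illustrative sketch (not the exact program used for the figures):
# partial sums of a_p and b_p for Q(x,y) = x^2 + y^2, split by p mod 8.
import math

LIMIT = 10**6  # upper bound for the prime sieve

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i in range(n + 1) if sieve[i]]

def two_squares(p):
    # the unique pair a > b > 0 with a^2 + b^2 = p, for p = 1 (mod 4)
    for b in range(1, math.isqrt(p // 2) + 1):
        a = math.isqrt(p - b * b)
        if a * a + b * b == p:
            return a, b

sums = {1: [0, 0], 5: [0, 0]}   # residue of p mod 8 -> [sum of a_p, sum of b_p]
for p in primes_up_to(LIMIT):
    if p % 4 == 1:
        a, b = two_squares(p)
        sums[p % 8][0] += a
        sums[p % 8][1] += b

# with N the number of primes up to LIMIT, these ratios are F(N;8,1)
# and F(N;8,5); both approach 1 + sqrt(2) as LIMIT grows
print(sums[1][0] / sums[1][1], sums[5][0] / sums[5][1])
\end{verbatim}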
Using the same notations as in Theorem \ref{main}, for a given quadratic form $Q(x,y)$, take $$\fr(N;M,m)=\frac{\sum\limits_{p\leq Pr(N),p\in P_{m,M}}a_p}{\sum\limits_{p\leq Pr(N),p\in P_{m,M}}b_p},$$ where $Q(a_p,b_p)=p$ with $a_p>b_p$ and $Pr(N)$ denotes the $N$-th prime. For $Q(x,y)=x^2+y^2$, we have proved that $$\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv1\pmod{8}}a_p}{\sum\limits_{p\leq N,p\equiv1\pmod{8}}b_p}=\lim_{N\to\infty}\frac{\sum\limits_{p\leq N,p\equiv5\pmod{8}}a_p}{\sum\limits_{p\leq N,p\equiv5\pmod{8}}b_p}=1+\sqrt{2},$$ that is, $$\lim_{N\to\infty}\fr(N;8,1)=\lim_{N\to\infty}\fr(N;8,5)=1+\sqrt{2}.$$ To investigate the difference between $\fr(N;8,1)$ and $\fr(N;8,5)$, we compute these two functions with numerical data up to $5\times10^6$, and the graph is presented in Figure \ref{p1-p5}. \begin{figure}[H] \centering \includegraphics[width=10cm]{5x10_6.png} \caption{$\fr(N;8,1)$ and $\fr(N;8,5)$ for $N\in[100,5\times10^6]$:} $\fr(N;8,1)$ is marked blue, $\fr(N;8,5)$ is marked red. We only mark points with $100|N$. \label{p1-p5} \end{figure} Similarly, for another quadratic form $Q(x,y)=x^2+xy+y^2$, we consider $\fr(N;12,1)$ and $\fr(N;12,7)$ and compute these two functions with numerical data up to $1\times 10^6$. Then we have the graph in Figure \ref{1(12)vs7(12)}. \begin{figure}[H] \centering \includegraphics[width=10cm]{xx+xy+yy.png} \caption{$\fr(N;12,1)$ and $\fr(N;12,7)$ for $N\in[100,1\times10^6]$:} $\fr(N;12,1)$ is marked blue, $\fr(N;12,7)$ is marked red. We only mark points with $100|N$. \label{1(12)vs7(12)} \end{figure} We are interested in the phenomenon of oscillation and repeated intersections of two function as is shown in the above two figures. Now we define another function, which generates a graph that preserves oscillatory behavior and intersections while exhibiting nice symmetry. This symmetry facilitates more effective observation and analysis. For a given quadratic form $Q(x,y)$, take $R(N;M,m)=\frac{\fr(N;M,m)}{\fr(N)}$, where $\fr(N)=\frac{\sum_{p\leq Pr(N)}a_p}{\sum_{p\leq Pr(N)}b_p}$. For $Q(x,y)=x^2+y^2$, we consider $R(N;8,1)$ and $R(N;8,5)$ and compute these two functions with numerical data up to $5\times 10^6$, and the graph is presented in Figure \ref{R1-R2}. \begin{figure}[H] \centering \includegraphics[width=13cm]{R1R2.png} \caption{$R(N;8,1)$ and $R(N;8,5)$ for $N\in[100,5\times10^6]$:} $R(N;8,1)$ is marked blue, $R(N;8,5)$ is marked red. We only mark points with $100|N$. This graph show very nice symmetry since $\fr(N;8,1)$, $\fr(N;8,5)$ and $\fr(N)$ have the same limit as $N\to\infty$. \label{R1-R2} \end{figure} These three figures above all show that although the two series converge to the same limit, they repeatedly intersect and alternately surpass each other as $N$ increasing. Based on this phenomenon, we propose the following conjecture. \begin{conjecture} The two differences $\fr(N;8,1)-\fr(N;8,5) $ and $\fr(N;12,1)-\fr(N;12,7) $ will change signs infinitely many times as $N$ approaches infinity. In general, for a given primitive positive definite binary quadratic form, if we divide all its representable prime numbers into two reasonable sets $P_1$ and $P_2$, based on certain congruence conditions, then $\fr(N;P_1)$ and $\fr(N;P_2)$ alternately surpass each other as $N$ increases, with the pattern of intertwining likely to persist. \end{conjecture} \subsection{Counting functions} L. 
Devin \cite{devin2021discrepancies} conjectured that in logarithmic scale, more than half of the primes below $x$ can be written as a sum of two squares with the even square larger than the odd square. We would like to study the separated cases $ p\equiv 1\pmod{8}$ and $ p\equiv 5\pmod{8}$, and compare these two cases. Recall that $P_{m,M}=\{p\mid p\equiv m\pmod{M},p\text{ is prime}\}$ with $m,M$ positive integers and $\mathrm{gcd}(m,M)=1$. Then the differences of counting functions are denoted by{\small $$D_1(x)=|\{p<x\mid p=a^2+4b^2,|a|>|2b|,p\in P_{1,8}\}|-|\{p<x\mid p=a^2+4b^2,|a|<|2b|,p\in P_{1,8}\}|,$$ $$D_2(x)=|\{p<x\mid p=a^2+4b^2,|a|>|2b|,p\in P_{5,8}\}|-|\{p<x\mid p=a^2+4b^2,|a|<|2b|,p\in P_{5,8}\}|.$$} We compute these two functions with numerical data up to $1.5\times10^6$, and the graph is presented in Figure \ref{D1D2}. \begin{figure}[H] \centering \includegraphics[width=15cm]{D1D2.png} \caption{$D_1(x)$ and $D_2(x)$ for $x\in [2,1.5\times10^6]$:} $D_1(x)$ is marked blue, $D_2(x)$ is marked red. \label{D1D2} \end{figure} Based on Figure \ref{D1D2}, we propose the following conjecture. \begin{conjecture} There is a bias towards negative values in the distribution of the values of the function $D_1(x)$ and $D_2(x)$. \end{conjecture} \section*{Acknowledgments} The authors are supported by National Nature Science Foundation of China (Nos. 11971226, 12231009, 124B1010). \newpage \begin{thebibliography}{10} \bibitem{Coleman1990TheDO} M. D. Coleman, \newblock The distribution of points at which binary quadratic forms are prime, \newblock {\em Proceedings of The London Mathematical Society} (3) 61 (1990) 433--456. \bibitem{Coleman1992} M. D. Coleman, \newblock The distribution of points at which norm-forms are prime, \newblock {\em Journal of Number Theory} 41 (1992) 359--378. \bibitem{devin2021discrepancies} L. Devin, \newblock Discrepancies in the distribution of gaussian primes, arXiv:2105.02492. \bibitem{MR3467390} C. Elsholtz and G. Harman, \newblock On conjectures of {T}. {O}rdowski and {Z}. {W}. {S}un concerning primes and quadratic forms, \newblock {\em Analytic number theory}, 65--81. Springer, Cham, 2015. \bibitem{MR1728620} K. Kato, N. Kurokawa and T. Saito, \newblock {\em Number theory. 1}, volume 186 of {\em Translations of Mathematical Monographs}. \newblock American Mathematical Society, Providence, RI, 2000. \newblock Fermat's dream, Translated from the 1996 Japanese original by Masato Kuwata, Iwanami Series in Modern Mathematics. \bibitem{lee2024murmurations} K-H Lee, T. Oliver and A. Pozdnyakov, \newblock Murmurations of Dirichlet characters, arXiv:2307.00256. \bibitem{littlewood1914distribution} J. E Littlewood, \newblock Sur la distribution des nombres premiers, \newblock {\em CR Acad. Sci. Paris} 158 (1914) 1869--1872. \bibitem{em/1048515870} M. Rubinstein and P. Sarnak, \newblock {Chebyshev's bias}, \newblock {\em Experimental Mathematics} 3(3) (1994) 173--197. \bibitem{Serre1989AbelianLR} J. P. Serre, \newblock Abelian l-adic representation and elliptic curves, \newblock {\em Advanced book classics}, 1989. \bibitem{sun2017conjectures} Z-W. Sun, \newblock Conjectures on representations involving primes, arxiv:1211.1588v26. \end{thebibliography} \end{document}
2412.14813v2
http://arxiv.org/abs/2412.14813v2
Solutions of stationary McKean-Vlasov equation on a high-dimensional sphere and other Riemannian manifolds
\documentclass[12pt]{article} \input{header-2} \usepackage{graphicx} \usepackage[pdftex]{pict2e} \newcommand\ANDRE[2][]{{\color{orange}{\textbf{#1}}#2}} \let\AS\ANDRE \newcommand\ASpar[2][]{\marginpar{\color{orange}{\textbf{#1}}#2}} \newcommand\ANNA[2][]{{\color{blue}{\textbf{#1}}#2}} \renewcommand{\#}{\sharp} \newcommand{\dist}{\mathrm{dist}} \newcommand{\proj}{\mathrm{proj}} \newcommand{\grd}{\mathrm{grad}} \newcommand{\divr}{\mathrm{div}} \makeatletter \let\@fnsymbol\@arabic \makeatother \begin{document} \title{Solutions of stationary McKean-Vlasov equation on a high-dimensional sphere and other Riemannian manifolds} \author{Anna Shalova\thanks{\href{mailto:[email protected]}{[email protected]}} \quad Andr\'e Schlichting\thanks{\href{mailto:[email protected]}{[email protected]}}} \date{\normalsize ${}^1$Department of Mathematics and Computer Science,\\ Eindhoven University of Technology \\ ${}^2$Institute of Applied Analysis, Ulm University} \maketitle \def\ourkeywords{McKean-Vlasov equation, bifurcations, phase transition, nonlocal PDEs, interacting particle systems, PDEs on manifolds.} \begin{abstract} We study stationary solutions of McKean-Vlasov equation on a high-dimensional sphere and other compact Riemannian manifolds. We extend the equivalence of the energetic problem formulation to the manifold setting and characterize critical points of the corresponding free energy functional. On a sphere, we employ the properties of spherical convolution to study the bifurcation branches around the uniform state. We also give a sufficient condition for an existence of a discontinuous transition point in terms of the interaction kernel and compare it to the Euclidean setting. We illustrate our results on a range of system, including the particle system arising from the transformer models and the Onsager model of liquid crystals. \par\medskip \noindent\textbf{Keywords and phrases. } \ourkeywords \end{abstract} \tableofcontents \section{Introduction} McKean-Vlasov equation arises as a mean-field limit of various stochastic interacting particles systems. Such systems describe phenomena of different nature and have applications in fields varying from liquid crystals \cite{carrillo2020long, Vollmer2017} and statistical mechanics \cite{MartzelAslangul2001} to opinion dynamics \cite{HegselmannKrause2002}, mathematical biology \cite{KellerSegel1971, BurgerCapassoMorale2007}, galactic dynamics~\cite{binney2008}, droplet growth~\cite{ConlonSchlichting2019}, plasma physics~\cite{bittencourt1986fund}, and synchronisation~\cite{kuramoto1981rhythms}. In addition, recently, interacting particles systems found a whole set of applications in theoretical machine learning \cite{sirignano2020mean, rotskoff2022trainability, geshkovski2024mathematical}. Several of the above-mentioned applications are set on Riemannian manifolds, dominantly on a high-dimensional sphere~\cite{Vollmer2017, geshkovski2024mathematical}. Even though the solutions of the McKean-Vlasov equation are relatively well-studied in~$\bbR^n$ or the flat torus, the scope of work concerning McKean-Vlasov equation in a manifold setting is very limited. In this paper we characterize the set of measure-valued solutions $\rho \in \calP_{ac}(\calM)$ of the stationary McKean-Vlasov equation: \begin{equation} \label{eq:mckean-vlasov} \gamma^{-1}\Delta\rho + \divr(\rho \nabla_x W(x, \cdot) *\rho) =0, \end{equation} on a compact Riemannian manifold $\calM$ in general and on sphere $\calM =\bbS^{n-1}$ of arbitrary dimension bin particular. 
Solutions of this equation correspond to the densities which balance the first, \emph{diffusion} term and the second, \emph{interaction} term. The function $W: \calM \times \calM \to \bbR$ is called an \emph{interaction kernel} and is assumed to be symmetric $W(x,y) = W(y,x)$ throughout this paper. Depending on the direction of $\nabla W$, the interaction term can model both \emph{attractive} or \emph{repulsive} forces. The parameter $\gamma \in \bbR_+$, called \emph{inverse temperature}, expresses how much priority is given to the diffusion term. Formally, for $\gamma \to 0$ the impact of the interaction term becomes negligible; and as a result, we expect that the set of solutions of \eqref{eq:mckean-vlasov} will coincide with the kernel of the Laplace-Beltrami on $\calM$, which are constant with respect to the volume measure. Similarly, for $\gamma \to \infty$ the priority is given to the interaction term and the structure of the set of the solutions can vary depending on the properties of the interaction kernel $W$. We study the case of small $\gamma$ for a general compact Riemannian manifold. In case of $\calM=\bbS^{n-1}$ the knowledge of a suitable basis of $L_2(\bbS^{n-1})$ and its behavior under convolution operations allows us to characterize the behaviour of certain solutions for a larger range of $\gamma \in \bbR_+$. We begin our analysis by establishing equivalence between solutions of the stationary McKean-Vlasov equation \eqref{eq:mckean-vlasov} and critical points of the free energy functional $\calF_\gamma: \calP(\calM) \to \bbR$ (see Proposition~\ref{prop:equivalence}) which for any admissible $\calM$ consists of \begin{equation} \label{eq:free-energy} \calF_\gamma(\mu) := \gamma^{-1}\calE(\mu) + \calI(\mu) \,. \end{equation} where $\calE$ is the relative entropy with respect to the normalized volume measure $m$: \begin{equation} \label{eq:entropy} \calE(\mu) := \begin{cases} \int_{\calM} \rho \log \rho \,d{m} & \text{ if } \mu \text{ admits a positive density } \rho \text{ w.r.t. } m, \\ +\infty &\text{otherwise.} \end{cases} \end{equation} The second term $\calI: \calP(\calM) \to \bbR$ is called the interaction energy and denoted by \begin{equation} \label{eq:interaction-energy} \calI(\mu) := \frac12\int_{\calM\times \calM} W(x, y )d\mu(x)d\mu(y). \end{equation} Using this equivalence we prove existence of solutions for arbitrary $\gamma\in\bbR_+$ and give a sufficient condition for the uniqueness of the solution for small $\gamma$. Additional symmetry assumptions on the space $\calM$ and the interaction kernel $W$ can help to give a more explicit characterization of the solutions of \eqref{eq:mckean-vlasov} like it was done in case of a torus in \cite{carrillo2020long}. In \cite{carrillo2020long}, the authors showed that for an interaction kernel of form $W(x, y) = W(x-y)$ on a torus $\bbT^{n}$ the Fourier decomposition of the interaction kernel $W$ can be used to establish existence of bifurcation branches as well as characterize the phase transition of \eqref{eq:mckean-vlasov}. In this work we employ similar techniques to study the solutions of the stationary McKean-Vlasov equation on a sphere of arbitrary dimension $\calM=\bbS^{n-1}$. We study the bifurcation branches around the uniform state $\bar\rho$ and give a sufficient condition for the existence of a discontinuous transition point in terms of the spherical harmonics decomposition of the interaction kernel in case of a radially-symmetric kernel $W(x, y) = W(\left<x, y\right>)$. 
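Let us record the elementary observation behind the focus on the uniform state: if $W(x, y) = W(\left<x, y\right>)$ and $\bar\rho \equiv 1$ denotes the density of the normalized uniform measure $\sigma$ on $\bbS^{n-1}$, then \[ (W*\bar\rho)(x) = \int_{\bbS^{n-1}} W(\left<x, y\right>) \,d\sigma(y) \] is independent of $x$ by rotation invariance, so that $\nabla_x W(x,\cdot)*\bar\rho = 0$ and both terms in \eqref{eq:mckean-vlasov} vanish. Hence $\bar\rho$ is a stationary solution for every $\gamma \in \bbR_+$, and the relevant question is for which values of $\gamma$ non-uniform solutions branch off from $\bar\rho$.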
To characterize non-trivial stationary measures of the McKean-Vlasov equation we use another equivalent formulation (see Proposition~\ref{prop:equivalence}), namely the characterization of the invariant measures to~\eqref{eq:mckean-vlasov} in terms of the zeroes of the Gibbs-map $F: \bbR_+ \times L^2(\calM) \to L^2(\calM)$: \begin{equation} \label{eq:gibbs-map} F(\gamma, \rho) = \rho - \frac{1}{Z(\gamma, \rho)}e^{-\gamma W*\rho} \,, \end{equation} where $Z(\gamma, \rho)$ is a normalization constant $Z(\gamma, \rho) = \int_{\calM}e^{-\gamma W*\rho}dm$. Applying results from the bifurcation theory to the Gibbs map, we show that the bifurcation points can be expressed in terms of the spherical harmonics decomposition of $W$ and the corresponding invariant measures can be characterized in terms of the corresponding spherical basis functions. The same decomposition in combination with the known structure of the spherical harmonics allows us to study the behaviour of minimizers around the phase transition point. We apply our findings to a number of models of different nature. We begin by studying so-called noisy transformer model, which can be interpreted as stochastically perturbed continuous-time self-attention model \cite{geshkovski2024mathematical}. Self-attention is a key building block of transformers, the state-of-the-art large language models. We characterize invariant measures of the noisy transformers as well as calculate the critical noise ratio above which no prior information is preserved. We also study the Onsager model for liquid crystals, which also arises in mathematical biology, and generalize findings of \cite{WachsmuthThesis06,Vollmer2017} to the case of the unit sphere of an arbitrary dimension. Finally, we study the noisy Hegselmann–Krause model for opinion dynamics adapted to the spherical domain. All of the models can formally be interpreted as mean-filed limits of the corresponding particles system~\cite{McKean1966,Oelschlaeger1984,oelschlager1989derivation}. The corresponding evolution equation for the law has the structure: \[ \partial_t\rho = \nabla \cdot\left(\rho \nabla \frac{\delta \calF_\gamma}{\delta\rho}\right), \] where $\frac{\delta \calF_\gamma}{\delta\rho}$ is the Fréchet derivative of the free energy functional from~\eqref{eq:free-energy}. PDEs of this form posed on the space of probability measures with bounded second moments belong to a larger class of systems, namely gradient flows. We refer the reader to \cite{ambrosio2005gradient, santambrogio2015optimal} for the general theory of gradient flows on the state space $\R^d$. On manifolds the general theory is not fully developed, but it is expected to carry over. For instance on manifolds of positive curvature \cite{erbar2010heat} establishes the gradient flow formulation of the heat equation driven by relative entropy, albeit without interaction term. Due to the regular structure of the sphere, we argue that the same approaches might be applicable to rigorously prove the limiting behavior of the interacting particles systems posed on a sphere. In this paper we treat the stationary version of the McKean-Vlasov equation but the convexity properties established in Section~\ref{sec:convexity}, generalizing results from~\cite{sturm2005convex}, may also be of use for the characterization of the gradient-flow solutions of the non-stationary equation. \subsection{Main results} In this section we give an overview our main contributions. 
Our results are two-fold: we first study the solutions of the stationary McKean-Vlasov equation \eqref{eq:mckean-vlasov} on a compact connected Riemannian manifold without boundary, and in the second part we employ the symmetry properties of the unit sphere endowed with the natural topology to give a more explicit characterization of the solutions in terms of the spherical harmonics basis.
\paragraph{Compact Riemannian manifold.} Let $\calM$ be a compact connected Riemannian manifold without boundary and let the interaction kernel $W: \calM\times\calM \to \bbR$ be continuous; then the following result holds (see Theorem~\ref{th:convexity-M} and Corollary~\ref{cor:convergence-min}).
\begin{theorem}[Existence and uniqueness of solutions] For any $\gamma \in \bbR_+$ there exists a solution $\rho_\gamma$ of \eqref{eq:mckean-vlasov}, and $\rho_\gamma \in H^1(\calM) \cap \calP_{ac}(\calM)$. In addition, if the curvature of the manifold is bounded from below, $\operatorname{Ric}(\calM) \geq \lambda$, $W$ is twice differentiable and there exists $\alpha > -\gamma^{-1}\lambda$ such that $W$ satisfies \[ \partial^2_t W\left(\exp_x vt, \exp_y ut\right) \geq \alpha (\|v\|^2 + \|u\|^2) \] for all $x, y \in \calM, \ v\in T_x\calM, u \in T_y\calM$, then $\rho_\gamma$ is the unique solution of \eqref{eq:mckean-vlasov}. \end{theorem}
In fact, we do not require $W$ to be everywhere twice differentiable but only need a bound on its lower second derivative. The proof relies on a geodesic convexity condition for the free energy functional \eqref{eq:free-energy}.
\paragraph{Sphere $\bbS^{n-1}$.} In the case of the high-dimensional sphere we impose more assumptions on the interaction kernel: we ask $W$ to be rotationally symmetric, that is, by abuse of notation, to take the form $W(x,y) = W(\left<x, y\right>)$ with $W:[-1,1]\to \R$. In this case, due to the symmetric structure of the unit sphere and of the interaction kernel, one can show that the uniform state $\bar\rho$ is always a solution of \eqref{eq:mckean-vlasov}. Employing the properties of the spherical convolution, we are able to characterize non-trivial branches of solutions in terms of the spherical harmonics decomposition of the kernel. The components of the spherical harmonics decomposition are projections of the kernel onto the symmetric spherical harmonics basis functions $Y_{k,0}$. An explicit form is given in Definition~\ref{def:spherical-decomposition}.
\begin{definition}[Spherical harmonics decomposition, see Definition \ref{def:spherical-decomposition}] \label{def:sph-decomposition-intro} Let $W:\bbS^{n-1}\times \bbS^{n-1} \to \bbR$ be a rotationally symmetric kernel, then the spherical harmonics decomposition of $W$ is defined as \[ \hat{W}_k = \alpha_k \int_{\bbS^{n-1}}W(\skp{x_0,\cdot}) Y_{k, 0} \,d\sigma, \] where $\sigma$ is the uniform measure on the sphere, $x_0\in \bbS^{n-1}$ is an arbitrary reference point, $Y_{k, 0}$ are the spherical harmonics and $\alpha_k$ is a normalization constant for $k\in \bbN$. \end{definition}
We show that if a component of the spherical harmonics decomposition is negative, then, under certain structural assumptions which we discuss in Section \ref{ssec:InteractionSphere}, there exist bifurcation curves around the uniform state. Our result can be summarized in the following formal theorem (for more details see Theorem \ref{th:bifurcations}).
\begin{theorem}[Bifurcations] \label{th:bifurcations-intro} Let $W \in C_b \cap H^1$ be a rotationally symmetric interaction kernel.
If there exists $k\in \bbN$ with a unique negative value $\hat W_k < 0$, that is, $\hat W_j\ne \hat W_k$ for all $j\in \bbN\setminus\set{k}$, then there exists a non-trivial branch of solutions $\rho_\gamma \in L_2(\bbS^{n-1})$ of the form \[ \rho_\gamma(t) = \bar\rho + f(t)Y_{k, 0} + o(f(t)), \qquad \gamma(t) = \gamma_k + \mu(t), \] on some neighborhood $t \in (-\delta, \delta)$ around the bifurcation point $\gamma_k = -\frac{1}{\hat W_k}$, where $\bar\rho$ is the uniform state, $Y_{k, 0}$ is the corresponding spherical harmonic and $f, \mu$ are continuous functions on $(-\delta, \delta)$ satisfying $f(0) = 0, \ \mu(0) =0$. \end{theorem}
Bifurcation theory describes continuous curves of solutions branching from the uniform state. These solutions, however, are not guaranteed to be (global) minimizers of the free energy functional \eqref{eq:free-energy}. Indeed, it may be the case that above a certain value $\gamma_c$, that is for $\gamma > \gamma_c$, the uniform measure is no longer a global minimizer of \eqref{eq:free-energy} and a different configuration is preferable from the energy-minimization perspective. This phenomenon is called a phase transition, and the value $\gamma_c$ at which the uniform state stops being the unique minimizer of the free energy is called a phase transition point (see Definition~\ref{def:transition-point}). We characterize the phase transition of the stationary McKean-Vlasov equation \eqref{eq:mckean-vlasov} for a certain class of interaction kernels. We give a simplified version of the sufficient condition for a discontinuous phase transition here; see the detailed description in Assumption \ref{assum:pt-general} and Theorem \ref{th:pt}.
\begin{assumption}[Competitor in spherical harmonics] \label{assum:resonance-intro} Let $W$ be a rotationally symmetric interaction kernel and let $k\in \bbN$ be such that $\hat W_k= \min_l \hat W_l$ is a smallest component of the spherical harmonics decomposition of $W$. Let $N_{\hat W_k}$ be the set of indices of all components with $\hat W_n = \hat W_k$: \[ N_{\hat W_k}= \{n\in \bbN: \hat W_n = \hat W_k\}. \] The interaction potential $W$ satisfies the resonance condition if there exists a linear combination $v = \sum_{l\in N_{\hat W_k}} \alpha_l Y_{l,0}$ satisfying $ \int v^3 \,d\sigma \neq 0. $ \end{assumption}
In particular, we show that the above assumption is satisfied, for example, whenever the minimum is achieved for $k = 2$ or $k=4$, which is the case in the examples of Sections~\ref{ssec:Onsager},~\ref{ssec:opinion} and~\ref{ssec:localized}. In this sense, single modes can resonate with themselves. Under the above assumption we are able to prove the existence of a discontinuous transition point.
\begin{theorem}[Phase transitions] Let the interaction kernel satisfy the resonance Assumption~\ref{assum:resonance-intro}. Then there exists a discontinuous phase transition point $0<\gamma_c < -\frac{1}{\min_{n\in\bbN} \hat W_n}$. \end{theorem}
Note that in this case $\gamma_c$ is strictly smaller than any of the bifurcation points characterized in Theorem \ref{th:bifurcations-intro}, implying that at the bifurcation points the uniform measure is not a global minimizer of the free energy functional \eqref{eq:free-energy}.
\subsection{Literature Review}
\paragraph{McKean-Vlasov equation as a mean-field limit.} Mean-field limits of particle systems are a vast area of research; we refer to several recent results in this direction. A number of works treat interaction and diffusion systems separately.
Namely, the mean-field convergence of the Vlasov system (without noise) under various assumptions is reviewed in \cite{jabin2014review}. Convergence of the system of interacting particles (with noise) goes back to~\cite{McKean1966}, with rigorous derivations for more and more singular interaction kernels in~\cite{Oelschlaeger1984,oelschlager1989derivation,Stevens2000} and quantitative limits in~\cite{duerinckx2016mean, Serfaty2020mean} for Riesz and Coulomb-type (repulsive) interactions; see also the overview \cite{golse2016dynamics} and the recent work~\cite{bresch2023mean} for mean-field limits with singular kernels. Recent innovations consider the question of uniform-in-time propagation of chaos in the mean-field limit of interacting diffusions with smooth kernels, as for instance in~\cite{monmarche2017long} and references therein and up to the bifurcation point in~\cite{DelgadinoGvalaniPavliotisSmith2023}, optimal quantitative results as first established in~\cite{Lacker2023}, or revisit the connection to large deviation principles~\cite{DawsonGaertner1989,hoeksema2024large}.
\paragraph{PDEs and free energies on manifolds.} Well-posedness of pure interaction systems on Riemannian manifolds has been studied in \cite{fetecau2021well, wu2015nonlocal}. Under a bounded curvature assumption, the long-term behaviour of the same system has been established in \cite{fetecau2023long}. A relaxation of the manifold-restricted aggregation model has been introduced and studied in \cite{patacchini2021nonlocal}. On a sphere, well-posedness of the aggregation model is established in \cite{fetecau2021intrinsic}. In \cite{fetecau2023equilibria} the authors study the aggregation PDE on Cartan-Hadamard (hyperbolic) manifolds. For manifolds with negative curvature it is also possible to establish well-posedness of the aggregation model in the presence of a diffusion term. Stationary solutions of the McKean-Vlasov equation on hyperbolic manifolds are characterized in \cite{fetecau2023equilibria, fetecau2023ground, carrillo2024existence}. A few relevant results concern the free energies corresponding to the evolution equations on manifolds. The geodesic convexity of the entropic term and of the potential energy is established in \cite{otto2005eulerian, sturm2005convex}. We give a more detailed description of~\cite{sturm2005convex} in Section~\ref{sec:convexity}. In \cite{erbar2010heat}, the author shows existence and uniqueness of gradient flow solutions of the heat equation on manifolds of positive curvature. The general formalism of gradient flows for internal energies on the space of measures over a Riemannian manifold is discussed in~\cite{Villani2008}.
\paragraph{Bifurcations and phase transitions.} Bifurcation theory dates back to the results formulated in \cite{CrandallRabinowitz1971}; for a general theoretical overview we refer the reader to the book of Kielhoefer \cite{Kielhoefer2012}. On the torus, bifurcations of the free energy functional \eqref{eq:free-energy} have been studied in \cite{carrillo2020long}, and in the presence of two local minima the existence of a saddle point was proven in~\cite{GvalaniSchlichting2020}. See also~\cite{CarrilloGvalani2021} for a generalization to nonlinear diffusion-aggregation equations. On $\bbS^2$, bifurcations of the Onsager energy are characterized in~\cite{fatkullin2005critical, WachsmuthThesis06, lucia2010exact, Vollmer2017}.
Phenomenon of phase transition has been show to appear in systems of different nature, see for example \cite{PoschNarenhoferThirring1990,BarbaroCanizoCarrilloDegond2016, DegondFrouvelleLiu2015,Tugaut2014, Vollmer2017}. Phase transition of the McKean-Vlasov equation on a torus has been studied in \cite{ChayesPanferov2010}, the authors introduce concepts of continuous and discontinuous transition points and study their properties in terms of the interaction kernel. Explicit conditions of continuous and discontinuous phase transition in terms of the Fourier decomposition of the kernel are introduced in \cite{carrillo2020long}. Phase transition of McKean-Vlasov equation of weakly coupled Hodgkin-Huxley oscillators is characterized in \cite{vukadinovic2023phase}. In \cite{delgadino2021diffusive}, the authors discuss the mean-field behaviour of systems exhibiting phase transition. \subsection*{Acknowledgments} The authors are grateful to Hugo Melchers for the help concerning calculations in Section~\ref{sec:examples}. The authors are also thankful to Rishabh Gvalani, Jasper Hoeksema, Greg Pavliotis, Mark Peletier and Jim Portegies for helpful discussions. Andr\'e Schlichting is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics M\"unster: Dynamics--Geometry--Structure. Anna Shalova is supported by the Dutch Research Council (NWO), in the framework of the program ‘Unraveling Neural Networks with Structure-Preserving Computing’ (file number OCENW.GROOT.2019.044). \section{Compact Riemannian manifold} \label{sec:general} Throughout this section we assume that $\calM$ is a compact connected Riemannian manifold without boundary. We study the weak solutions on $\calM$ of the stationary McKean-Vlasov equation~\eqref{eq:mckean-vlasov}, that is \begin{equation*} \gamma^{-1}\Delta\rho + \divr(\rho \nabla_x W(x, \cdot) *\rho) =0 \,, \end{equation*} where the operators $\nabla, \ \divr \text{ and } \Delta$ are manifold gradient, divergence and Laplace-Beltrami operator respectively and are rigorously defined in Appendix~\ref{sec:geometry} and $*$ denotes the measure convolution \[ (W*\rho)(x) = \int_{\calM} W(x, y)\rho(y)dm. \] For a Riemannian manifold with metric $g$, given the interaction kernel $W\in H^1(\calM\times\calM)$ (see Appendix~\ref{ssec:SobolevMfds} for the notion of Sobolev spaces) the weak solutions are defined in the following sense. \begin{definition}[Weak solution]\label{def:weak:mv} A function $\rho\in H^1(\calM) \cap \calP_{ac}(\calM)$ is a weak solution of \eqref{eq:mckean-vlasov} if for every $\phi \in H^1(\calM)$ it satisfies \[ \gamma^{-1}\int_{\calM}g(\nabla \rho, \nabla \phi)d\sigma + \int_{\calM} g(\rho \nabla\phi, \nabla_x W(x,\cdot) *\rho) d\sigma =0. \] \end{definition} The structure of this section is the following: we first establish three equivalence formulations for weak solution in the sense of Definition~\ref{def:weak:mv} in Section~\ref{sec:formulations}. We then proceed by proving existence of minimizers of the free energy functional $\calF$ in Section~\ref{sec:existence}. Finally, we introduce a convexity criterion for $\calF$ and derive a sufficient condition for the uniqueness of the minimizers in Section~\ref{sec:convexity}. 
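Before proceeding, we record a simple consistency check of Definition~\ref{def:weak:mv}; the following formal computation is included only for illustration. For the constant density $\bar\rho = 1/m(\calM)$ the diffusion term vanishes, since $\nabla\bar\rho = 0$, and integrating by parts in the interaction term gives, for every $\phi \in H^1(\calM)$,
\[
\int_{\calM} g\bigl(\bar\rho\,\nabla\phi, \nabla_x W(x,\cdot)*\bar\rho\bigr)\,dm
= -\bar\rho\int_{\calM} \phi\,\Delta\bigl(W*\bar\rho\bigr)\,dm,
\]
which vanishes for all test functions if and only if $W*\bar\rho$ is harmonic, hence constant on the compact connected manifold $\calM$. For a general kernel this fails, so the uniform state need not be a stationary solution; on the sphere with a rotationally symmetric kernel, $W*\bar\rho$ is constant by symmetry, which is why $\bar\rho$ is always a solution in the spherical setting discussed in the introduction.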
\subsection{Equivalent characterizations of stationary states} \label{sec:formulations} In this section we reformulate the problem of solving the stationary McKean-Vlasov equation as a fixed-point problem of the Gibbs map $F$ as defined in \eqref{eq:gibbs-map} and as a minimization problem of the free energy functional defined in \eqref{eq:free-energy}. First we note that due to the smoothing effect of the convolution all the zeros of the Gibbs map are smooth, namely the following Lemma holds. \begin{lemma} \label{lemma:gibbs-H1} Let $\gamma \in \bbR_+$ and let $W \in C_b(\calM \times\calM) \cap H^1(\calM \times\calM)$, then any $\rho \in L^1(\calM)$ satisfying $F(\rho, \gamma) = 0$ is an $H^1(\calM)$ function.\end{lemma} \begin{proof} We begin by showing $\rho \in L^2(\calM)$. From the boundedness of the kernel we obtain the following estimate \[ \|W * \rho \|_\infty = \left\|\int W(x, y)\rho(y)dm(y)\right\|_\infty \leq \|W\|_{L_\infty(\calM\times\calM)} \|\rho\|_{L_1(\calM)}. \] Any zero of the Gibbs map satisfies almost everywhere \[ \rho(x) = \frac{1}{Z(\gamma, \rho)} e^{-\gamma (W *\rho)(x)}, \] implying that \begin{equation} \label{eq:rho-infty} \|\rho\|_\infty = \left\|\frac{1}{Z(\gamma, \rho)} e^{-\gamma W *\rho}\right\|_\infty = \frac{1}{Z(\gamma, \rho)}\left\| e^{-\gamma W *\rho}\right\|_\infty \leq \frac{1}{Z(\gamma, \rho)}e^{\gamma \|W \|_\infty} = m(\calM)^{-1}e^{2\gamma \|W \|_\infty}, \end{equation} where we used that $Z(\gamma, \rho)\geq \int e^{-\gamma \|W \|_\infty}dm = m(\calM)e^{-\gamma \|W \|_\infty} > 0$. As a result we conclude that $\rho$ is square integrable $\|\rho\|_2 \leq m(\calM)\|\rho\|^2_\infty < \infty$. Now, we show that $\nabla \rho \in L_2(T\calM)$. First of all note that the gradient exists and satisfies \begin{align*} \nabla \rho(x) &= \frac{1}{Z(\gamma, \rho)} \nabla e^{-\gamma (W *\rho)(x)} = - \frac{\gamma e^{-\gamma (W *\rho)(x)}}{Z(\gamma, \rho)} \int_\calM \nabla_x W(x, y) \rho(y)dm(y)\\ &= - \frac{\gamma e^{-\gamma (W *\rho)(x)}}{Z(\gamma, \rho)} (\nabla_x W\ast \rho)(x) \,. \end{align*} As a result we get the following bound \begin{align} \MoveEqLeft \int_{\calM}g(\nabla \rho, \nabla \rho)dm \leq \frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{Z(\gamma, \rho)^2} \int_{\calM}g_x\bra*{(\nabla_x W\ast \rho)(x), (\nabla_x W\ast \rho)(x)} dm(x) \notag \\ &\leq\frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{Z(\gamma, \rho)^2}\|\rho\|^2_{\infty}\int_{\calM^3}\mkern-4mu g_x\bigl( \nabla_x W(x, y), \nabla_x W(x, z)\bigr) (dm)^3 \notag\\ &\leq \frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{2Z(\gamma, \rho)^2}\|\rho\|^2_{\infty} \int_{\calM^3} \Bigl(g_x\bigl( \nabla_x W(x, y), \nabla_x W(x, y) \bigr) \notag \\ &\hspace{16em}+ g_x\bigl( \nabla_x W(x, z), \nabla_x W(x, z) \bigr)\Bigr)(dm)^3 \notag\\ &\leq \frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{2Z(\gamma, \rho)^2}\|\rho\|^2_{\infty} m(\calM) \int_{\calM^3}\Bigl(g_x\bigl( \nabla_x W(x, y), \nabla_x W(x, y) \bigr) \notag \\ &\hspace{16em} + g_y\bigl( \nabla_y W(x, y), \nabla_y W(x, y) \bigr)\Bigr)(dm)^3 \notag\\ &\leq \frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{2Z(\gamma, \rho)^2}\|\rho\|^2_{\infty} m(\calM) \int_{\calM\times \calM} g^{\calM\times \calM} (\nabla W(x, y), \nabla W(x, y))(dm)^2 \notag \\ &\leq\frac{\gamma^2e^{2\gamma\|W*\rho\|_{\infty}}}{2 Z(\gamma, \rho)^2}\|\rho\|^2_{\infty} m(\calM)\|W\|_{H^1} \,\label{eq:rho-h1} \end{align} where we use the product metric tensor $g^{\calM\times \calM}$ in the second last estimate (see Appendix~\ref{ssec:ProductMfds}). 
\end{proof} \begin{remark} In Euclidean setting the solutions of \eqref{eq:mckean-vlasov} are smooth functions $\rho \in C^\infty$, see for example \cite[Theorem 2.3]{carrillo2020long}. We argue that the same reasoning applies to the Riemannian manifold case and the solutions have in fact higher regularity. The main argument of the proof is the regularity of the 'convolution' which can be carried out in charts. Since it is not the main focus of the paper and is not required for the further analysis we do not provide the proof here. \end{remark} Estimates derived in the proof of Lemma \ref{lemma:gibbs-H1} also allow to characterize the limiting behavior of the minimizers for $\gamma \to 0$. \begin{corollary} \label{cor:gibbs-gamma0} Let $W \in C_b(\calM \times\calM) \cap H^1(\calM \times\calM)$, and assume that for all $\gamma \in [0, M)$ there exists $\rho_\gamma \in H^1$ such that $(\gamma,\rho_\gamma)$ is a zero of the Gibbs map \eqref{eq:gibbs-map}, then \[ \lim_{\gamma\to 0} \|\rho_\gamma - \bar \rho\|_{H^1} = 0 \,, \] where $\bar \rho = \frac{1}{m(\calM)}$ is the uniform state. \end{corollary} \begin{proof} Since $\bar\rho$ is a constant function, expanding $\|\rho_\gamma - \bar \rho\|_{H^1}$ we get \[ \|\rho_\gamma - \bar \rho\|_{H^1} = \|\rho_\gamma - \bar \rho\|_{L_2} + \|\nabla\rho_\gamma \|_{L_2(T\calM)}. \] Analogously to \eqref{eq:rho-infty}, we also have the lower bound on $\|\rho_\gamma\|_\infty$: \begin{equation*} \|\rho_\gamma\|_\infty \geq \frac{1}{Z(\gamma, \rho)}e^{-\gamma \|W \|_\infty} = m(\calM)^{-1}e^{-2\gamma \|W \|_\infty}. \end{equation*} and as a result the $L_2$ norm can be bounded as \[ \|\rho_\gamma - \bar \rho\|^2_{L_2} \leq m(\calM)\|\rho_\gamma - \bar \rho\|^2_\infty \leq \bar\rho \left((1 - e^{-2\gamma \|W \|_\infty})^2 + (e^{2\gamma \|W \|_\infty}-1)^2\right) \leq 16\gamma^2\bar\rho^2\|W \|_\infty^2\,, \] which vanishes for $\gamma\to 0$. In addition, the bound \eqref{eq:rho-h1} combined with the upper bound on~$\|\rho_\gamma\|_\infty$ gives $\|\nabla\rho_\gamma \|_{L_2(T\calM)} \to 0$. \end{proof} We are now ready to establish equivalence between weak solutions of the stationary McKean-Vlasov equation from Definition~\ref{def:weak:mv}, the zeros of the Gibbs map \eqref{eq:gibbs-map} and critical points of~$\calF_\gamma$. \begin{proposition} \label{prop:equivalence} For $\rho\in H^1(\calM) \cap \calP_{ac}^+(\calM)$ and $\gamma \in \bbR_+$ the following statements are equivalent: \begin{enumerate} \item $\rho$ is a weak solution of the stationary McKean-Vlasov equation \eqref{eq:mckean-vlasov} in the sense of Definition~\ref{def:weak:mv}, \item $(\rho, \gamma)$ is a solution of $ F(\rho, \gamma) = 0$, where $F$ is the Gibbs map defined in \eqref{eq:gibbs-map}. \item $\rho$ is a critical point of the free energy functional $\calF_\gamma$ \eqref{eq:free-energy}. \end{enumerate} \end{proposition} \begin{proof} \textbf{(2)$\to$(1)} Let $\rho \in L_1(\calM)$ be a solution of $F(\rho, \gamma) = 0$. By Lemma \ref{lemma:gibbs-H1}, $\rho \in H^1(\calM)$ and by differentiating $F(\rho, \gamma)$ we obtain \[ \nabla F(\rho, \gamma) = \nabla \rho -\gamma\frac{e^{-\gamma (W *\rho)(x)}}{Z(\rho, \gamma)}\nabla_x W(x, \cdot) * \rho =\nabla \rho -\gamma \rho \nabla_x W(x, \cdot) * \rho = 0. \] Testing against $\psi \in L_2(T\calM)$ shows that $\rho$ is a weak solution of McKean-Vlasov equation. 
\textbf{(1)$\to$(2)} Let $\rho \in H^1(\calM)$ be a weak solution of \eqref{eq:mckean-vlasov}, then $v = \rho$ is a solution of a "frozen" linear equation \begin{equation} \label{eq:mv-frozen} \gamma^{-1}\int_{\calM}g(\nabla v, \nabla \phi)dm + \int_{\calM} g(v \nabla\phi, \nabla_x W(x,\cdot) *\rho) dm =0, \end{equation} for every $\phi \in H^1(\calM)$. Let $T\psi := \frac{1}{Z(\gamma, \psi)} e^{-\gamma W *\psi}$. In Lemma \ref{lemma:gibbs-H1} we have shown that $\|W*\rho\|_\infty <\infty$ and therefore $T\rho$ is uniformly bounded away from zero \[ (T\rho)(x) \geq \frac{e^{-\gamma\|W*\rho\|_\infty}}{m(\calM)e^{\gamma\|W*\rho\|_\infty}} > 0 \] for any $\rho \in L_1(\calM)\cap \calP_{ac}(\calM)$. Consider the change of variables $h(x) = v(x)/(T\rho)(x)$ and note that $h$ satisfies \[ \nabla v(x) = (T\rho)(x)\nabla h(x) + h(x)\nabla(T\rho)(x). \] Using the fact that $\nabla(T\rho)(x) =-\gamma (T\rho)(x)(\nabla_xW(x,\cdot)*\rho)(x)$ one can see that \eqref{eq:mv-frozen} for any $\phi \in H^1(\calM)$ rewrites as \begin{equation} \label{eq:elliptic-PDE} \int_{\calM} g(\nabla\phi, T\rho \nabla h) dm =0. \end{equation} Recall from the proof of Lemma \ref{lemma:gibbs-H1} that $\|T\rho \|_\infty <\infty$ and thus \eqref{eq:elliptic-PDE} is weak formulation of a uniform-elliptic PDE \[ -\divr(T\rho\nabla h)=0. \] Similar to the Euclidean case, the only solutions satisfy $\nabla h = 0$ in $L_2(T\calM)$ sense and thus are constant functions $h = const$. By definition of $h$ we obtain for some $c>0$ that \[ \rho = v = c \; T\rho\,. \] and since $\|T\rho\|_{L_1} = 1$ we conclude that the only solution is $\rho = T\rho$. \textbf{(2)$\to$(3)} Let $\rho$ be a zero of the Gibbs map, take arbitrary $\rho' \in \calP_{ac}(\calM)$ and consider the curve $\rho_s = s\rho' + (1-s)\rho$ for $s\in[0,1]$. Applying $\calF_\gamma$ to $\rho_s$ and differentiating with respect to $s$ we obtain \[ \frac{d}{ds}\calF_\gamma(\rho_s)\Big|_{s=0} = \int_\calM \left(\gamma^{-1}\log \rho + W*\rho \right)(\rho' - \rho)dm. \] Since $\rho$ is a zero of the Gibbs map we know that $\rho = \frac{1}{Z(\gamma, \rho)} e^{-\gamma (W *\rho)(x)}$ and thus the above integral takes the form \begin{equation} \label{eq:2to3} \int_\calM \left(\gamma^{-1}\log \rho + W*\rho \right)(\rho' - \rho)dm= -\int_\calM \gamma^{-1}\log Z(\gamma, \rho) (\rho' - \rho)dm =0, \end{equation} so $\rho$ is a critical point of $\calF_\gamma$. \textbf{(3)$\to$(2)} Since $\rho \in H^1$, there exists a gradient of $\rho$ almost everywhere and thus it is almost everywhere continuous. Take an arbitrary point of continuity $x_0 \in \calM$, we show that \[ \gamma^{-1}\log \rho (x_0) + (W*\rho)(x_0) = \frac{1}{m(\calM)}\int_\calM \bigl(\gamma^{-1}\log \rho + W*\rho \bigr)dm = \text{const.} \, . \] First assume that there exist $\alpha_0 >0$ such that $\rho(x) \geq \alpha_0$ and we can take a sequence of positive densities $(\rho_n')_{n\in\bbN}$ of the form \[ \rho'_n(x) = \begin{cases} \rho(x) + \frac{\alpha_0}{m(B(x_0, 1/(n +R)))} \qquad &\text{if } x\in B(x_0, 1/(n+R)), \\ \rho(x) - \frac{\alpha_0}{m(\calM)- m(B(x_0, 1/(n+R)))}\qquad &\text{otherwise,} \end{cases} \] for some $R >0$. 
Then from \eqref{eq:2to3} we obtain \begin{align} \MoveEqLeft\frac{\alpha_0}{m(B(x_0, 1/(n +R)))}\int_{B(x_0, 1/(n+R))} \left(\gamma^{-1}\log \rho + W*\rho \right)dm \label{eq:3to2-left}\\ &= \frac{\alpha_0}{m(\calM)- m(B(x_0, 1/(n+R)))}\int_{\calM\backslash B(x_0, 1/(n+R))} \left(\gamma^{-1}\log \rho + W*\rho \right)dm.\label{eq:3to2-right} \end{align} Since $x_0$ is a point of continuity, the limit of the \eqref{eq:3to2-left} is simply the point evaluation \[ \lim_{n\to \infty}\frac{\alpha_0}{m(B(x_0, 1/(n +R)))}\int_{B(x_0, 1/(n+R))} \mkern-20mu \left(\gamma^{-1}\log \rho + W*\rho \right)dm = \bigl(\alpha_0\gamma^{-1}\log \rho + (W*\rho)\bigr)(x_0), \] and by the same argument the right hand side \eqref{eq:3to2-right} equals to the integral with respect to the volume measure \begin{align*} \MoveEqLeft\lim_{n\to \infty}\frac{\alpha_0}{m(\calM)- m(B(x_0, 1/(n+R)))}\int_{\calM\backslash B(x_0, 1/(n+R))} \left(\gamma^{-1}\log \rho + W*\rho \right)dm\\ &= \alpha_0\int_{\calM}\left(\gamma^{-1}\log \rho + (W*\rho)\right)dm. \end{align*} As a result we conclude that $\gamma^{-1}\log \rho + (W*\rho) = \text{const.}$\@ $m$-almost everywhere, and since $\rho$ is a probability measure we get the scaling \[ \rho = \frac{1}{Z(\gamma, \rho)}e^{-\gamma(W*\rho)}. \] If $\rho$ is not bounded away from zero, we can choose an arbitrary small $\alpha_\varepsilon \in \bbR_+$ and show that the expression $\gamma^{-1}\log \rho + W*\rho$ is constant on every set of form $A_{\varepsilon} := \{x\in \calM: \rho(x) \geq \alpha_\varepsilon\}$. Since $\alpha_\varepsilon$ is arbitrary, we get the result. \end{proof} \begin{remark} Proposition~\ref{prop:equivalence} shows that the invariant measures do not depend on the induced metric $g$ but only on the interaction kernel $W$. Because we have the formulation of solutions of \eqref{eq:mckean-vlasov} in terms of the Gibbs map, one can see that for two different parametrization of the manifold $\calM: x = x_1(\theta_1) = x_2(\theta_2)$ the sets of solutions will be identical, assuming that they induce the same volume measure $m$ and that the interaction kernel is independent of the parametrization in the sense that $W(x_1(\theta_1), y_1(\theta_1)) = W(x_2(\theta_2), y_2(\theta_2))$ for all pairs of points $x, y \in \calM$. Using the energetic interpretation of the stationary measures, one can say that an invariant measure stays invariant under any re-parametrization which does not affect the interaction between particles. \end{remark} Finally, using the established equivalence and the $H^1$ convergence proved in Corollary~\ref{cor:gibbs-gamma0} we see that the solutions of the stationary McKean-Vlasov equation converge to the kernel of the Laplace-Beltrami operator, consisting just of constants, in the limit of infinitely small interaction $\gamma \to 0$. \begin{corollary} \label{cor:convergence-min} Let the sequence of parameters $(\gamma_n)_{n\in\bbN}$ be such that $\gamma_n \in \bbR_+$ and $\gamma_n \to 0$. Let $W: \calM\times\calM \to \bbR$ be a continuous $H^1$ function on $\calM\times\calM$ satisfying $W(x,y)=W(y,x)$, then the sequence of solutions of \eqref{eq:mckean-vlasov}, if they exist, converges in $H^1$ to $\bar\rho$ \[ \rho_\gamma \stackrel{H^1}{\to} \bar \rho, \] where $\bar \rho = \frac{1}{m(\calM)}$ is the unique (up to rescaling) solution of $\Delta \rho = 0$. \end{corollary} We show existence of minimizers in the next section. 
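Proposition~\ref{prop:equivalence} also suggests a simple numerical procedure for locating stationary states: iterate the self-consistency relation $\rho \mapsto e^{-\gamma W*\rho}/Z(\gamma, \rho)$ on a discretization of $\calM$ and look for a fixed point. The following Python sketch is our own illustration and is not part of the analysis of this paper; the grid, the kernel $W(\theta, \theta') = -\cos(\theta - \theta')$ on $\bbS^1$ and the value of $\gamma$ are assumptions chosen only for demonstration, and convergence of the iteration is not claimed in general.
\begin{verbatim}
import numpy as np

# Fixed-point iteration for the Gibbs map on S^1 (illustration only).
# Grid, kernel and gamma below are assumptions chosen for demonstration.
n = 256
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dm = 1.0 / n                                  # normalized volume element, m(S^1) = 1
W = -np.cos(theta[:, None] - theta[None, :])  # attractive, rotationally symmetric kernel
gamma = 3.0                                   # inverse temperature (assumed value)

rho = 1.0 + 0.1 * np.cos(theta)               # perturbation of the uniform state
for _ in range(1000):
    conv = (W * rho[None, :]).sum(axis=1) * dm   # (W * rho)(theta_i)
    new = np.exp(-gamma * conv)
    new /= new.sum() * dm                        # normalize so that rho integrates to 1
    if np.max(np.abs(new - rho)) < 1e-12:
        break
    rho = new

print("min/max of the computed density:", rho.min(), rho.max())
\end{verbatim}
Starting from a small perturbation of the uniform state, the iteration either returns to the uniform density or settles on a non-uniform profile, depending on the value of $\gamma$; a dichotomy of this type is what the bifurcation and phase transition analysis later in the paper makes precise on the sphere.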
The small noise limit $\gamma \to \infty$ is more involved, since the number and the structure of the solutions of the pure interaction PDE strongly depend on the interaction potential $W$, so it is only possible to show convergence up to a subsequence. In addition, for $\gamma = \infty$ solutions of \eqref{eq:mckean-vlasov} are no longer guaranteed to be $H^1$ functions, so we are only able to show convergence in the weak sense, see Proposition \ref{prop:gamma-infty}.
\subsection{Existence of minimizers} \label{sec:existence} Let $m$ be the normalized volume measure, so that $m(\calM) = 1$. We consider the free energy functional of the form \eqref{eq:free-energy} with a continuous interaction kernel $W: \calM\times\calM \to \bbR$. We show that for an arbitrary value of $\gamma \in\bbR_+$ there exists a minimizer of the free energy functional on the space of probability measures $\calP(\calM)$, that the minimizer admits a density, and that the density is an $L_2$ function.
\begin{theorem} \label{th:minimizers} Let $\calF_\gamma$ be as defined in \eqref{eq:free-energy} and $W: \calM\times\calM \to \bbR$ be a continuous function on $\calM\times\calM$ satisfying $W(x,y)=W(y,x)$. Then there exists at least one minimizer $\mu^*$ in the space of probability measures $\calP(\calM)$, \[ \mu^* \in \argmin_{\mu\in \calP(\calM)}\calF(\mu). \] Moreover, every minimizer $\mu^*$ admits a density w.r.t. the normalized volume measure, $d\mu^* = \rho^* dm$, and the density is a square-integrable function, $\rho^* \in L_2(\calM)$.\end{theorem}
\begin{proof} As follows from the compactness of $\calM$, the interaction kernel $W$ is bounded on its domain; we will denote its minimum and maximum by $W_{\min} = \min_{x, y \in \calM} W(x, y)$ and $W_{\max} = \max_{x, y \in \calM}W(x, y)$. The proof is divided into two steps. In the first step we show existence of minimizers in the space of positive measures absolutely continuous with respect to the volume measure $\calP_{ac}^+(\calM)$, where \[ \calP_{ac}^+(\calM) = \set*{\mu\in \calP(\calM): d\mu = \rho dm, \ \int \rho(x)dm(x) = 1, \ \rho(x)> 0 \ m-\text{a.e.}}. \] It is easy to see that, for a bounded interaction kernel, the interaction energy is bounded for any $\mu \in \calP(\calM)$ and the entropy is finite only on $\calP^+_{ac}(\calM)$; thus, if a minimizer $\rho^*$ exists, it is an element of $\calP_{ac}^+(\calM)$. In the second step we show the existence of an upper bound on the minimizer, $C \in \bbR_+$ with $\rho^*(x) \leq C $ for $m$-a.e. $x$. It then naturally follows that $\rho^*$ is square-integrable, \[ \int_{\calM} \rho^*(x)^2 dm(x) \leq C^2\int_{\calM} dm(x) = C^2, \] in other words, $\rho^* \in L_2(\calM)$.
\paragraph*{Existence of minimizers:} Take a minimizing sequence $(\rho_n)_{n\in \bbN}$, $\rho_n \in \calP_{ac}^+(\calM)$, \[ \inf_{\calP_{ac}^+(\calM)}\calF(\rho) = \lim_{n\to\infty}\calF(\rho_n). \] Since $\calM$ is a compact space, every sequence in $\calP_{ac}^+(\calM) \subset \calP(\calM)$ is tight and, by Prokhorov's theorem, relatively weakly compact in $\calP(\calM)$. Take a convergent subsequence $\rho_{n_k} \stackrel{w}{\to} \rho^* \in \calP(\calM)$ of $(\rho_n)_{n\in \bbN}$. The entropy term is a weakly lower-semicontinuous functional on the space of measures $\calP(\calM)$ (see for example \cite[Lemma 1.4.3]{dupuis2011weak}). Using \cite[Lemma 7.3]{santambrogio2015optimal} we get weak convergence of the product measures along the convergent subsequence $\rho_{n_k}$: \[ \rho_{n_k} \otimes\rho_{n_k} \stackrel{w}{\to} \rho^* \otimes\rho^*.
\] Using the above and the boundedness of the interaction kernel we prove the continuity of the interaction energy \eqref{eq:interaction-energy}: \[ \calI(\rho_{n_k})= \int_{\calM\times\calM} \mkern-10mu W(x, y )\rho_{n_k}(x)\rho_{n_k}(y)dm(x)dm(y) \to \int_{\calM\times\calM} \mkern-10mu W(x, y )\rho^*(x)\rho^*(y)dm(x)dm(y). \] As a result, $\calF$ is weakly lower-semicontinuous on $\calP(\calM)$ as a sum of lower-semicontinuous functionals. Moreover, since $\calF_\gamma(\rho^*) <\infty$ we conclude that $\rho^* \in \calP_{ac}(\calM)$ and by direct method of calculus of variations \[ \calF_\gamma(\rho^*) =\argmin_{\rho \in \calP(\calM)} \calF_\gamma(\rho) = \argmin_{\rho \in \calP_{ac}^+(\calM)} \calF_\gamma(\rho). \] \textbf{Upper bound:} The construction follows a similar approach from~\cite{vollmer2018bifurcation}, where this is done on the sphere $\bbS^2$. Let $\rho^*$ be a minimizer of $\calF$. Let $C = \exp(12\gamma(W_{\max} - W_{\min}) +4)$ and assume that there exist set $A_{>C} := \{x\in \calM: \rho^*(x)> C\}$ of positive measure $m(A_{>C}) > 0$. Let $A_{<2} = \{x\in \calM: \rho^*(x)< 2\}$, and note that $A_{<2}$ has a positive measaure because \begin{align*} 1 &= \int_{\calM}\rho^*(x)dm(x) \geq \int_{\calM \backslash A_{<2}}\rho^*(x)dm(x) \geq 2(1-m(A_{<2})) \end{align*} which after rearranging gives \[ m(A_{<2}) \geq \frac{1}{2}. \] Define a density $\hat \rho^* \in \calP_{ac}^+(\calM)$: \[ \hat \rho^*(x) = \begin{cases} C ,\quad &x\in A_{>C}, \\ \rho^*(x), \quad &x\in \calM\backslash (A_{>C}\cup A_{<2}), \\ \rho^*(x) + \delta, &x\in A_{<2}, \end{cases} \] where $\delta =\frac{\int_{A_{>C}}(\rho^*(x) - C)dm(x)}{m(A_{<2})} \leq 2$. We will show that $\calF(\hat \rho^* ) <\calF(\rho^* ) $, implying that $\rho^*$ can not be a minimizer. For the entropy we have \begin{align*} \MoveEqLeft \int_{\calM}\mkern-4mu\bra*{\rho^*\log \rho^* - \hat \rho^*\log\hat \rho^*}dm = \int_{A_{>C}}\mkern-8mu\bra*{\rho^*\log \rho^* - \hat \rho^*\log\hat \rho^*}dm + \int_{A_{<1}}\mkern-8mu\bra*{\rho^*\log \rho^* - \hat \rho^*\log\hat \rho^*} dm \\ &\geq(\log C+1)\int_{A_{>C}} (\rho^* - C)dm - \delta\int_{A_{<1}} \left(\log(\rho^* +\delta) + 1 \right)dm \\ &\geq(\log C+1)\int_{A_{>C}} (\rho^* - C)dm - \delta m(A_{<2}) \left(\log(1 +\delta) + 1 \right) \\ &= \delta m(A_{<2})\left(\log C - \log(1+\delta)\right) \\ &\geq \frac12\delta \left(\log C - \log 3\right). \end{align*} And the difference of the interaction energy can be naively bounded as follows \begin{align} \MoveEqLeft \int_{\calM\times\calM}W(x, y)\rho^*(x)\rho^*(y)dm(x)dm(y) - \int_{\calM\times\calM}W(x, y)\hat \rho^*(x)\hat \rho^*(y)dm(x)dm(y) \notag \\ &=\int_{\calM\times\calM}(W(x, y)- W_{\min})\rho^*(x)\rho^*(y)dm(x)dm(y) \notag \\ &\qquad- \int_{\calM\times\calM}(W(x, y)- W_{\min})\hat \rho^*(x)\hat \rho^*(y)dm(x)dm(y)\notag \\ &= \int_{A_{>C}\times A_{>C}}(W(x, y)- W_{\min})(\rho^*(x)\rho^*(y) - C^2)dm(x)dm(y) \label{eq:interact:cc}\\ &+\int_{(\calM \backslash A_{>C})\times (\calM \backslash A_{>C})}(W(x, y)- W_{\min})(\rho^*(x)\rho^*(y) - \hat \rho^*(x)\hat \rho^*(y))dm(x)dm(y) \label{eq:interact:22}\\ &+2\int_{A_{>C}\times (\calM \backslash A_{>C})}(W(x, y)- W_{\min})(\rho^*(x)\rho^*(y) - C\hat \rho^*(y))dm(x)dm(y). \label{eq:interact:2c} \end{align} The first term \eqref{eq:interact:cc} is non-negative because on the set $A_{>C}$ we have $\rho^* > C$. 
For the second term \eqref{eq:interact:22} we use the fact that on $\calM \backslash A_{>C}$ the difference between the densities $\rho^*, \hat\rho^*$ is bounded $\rho^* - \hat \rho^* \leq \delta$ to get the estimate: \begin{align*} \eqref{eq:interact:22} &\geq (W_{\max}-W_{\min})\int_{(\calM \backslash A_{>C})\times (\calM \backslash A_{>C})} \mkern-16mu \bigl(\rho^*(x)\rho^*(y) - (\rho^*(x)+\delta)(\rho^*(y) + \delta)\bigr)dm(x)dm(y) \\ &= -2\delta(W_{\max}-W_{\min})\int_{\calM \backslash A_{>C}}\left(\frac12\delta+\rho^*(x)\right)dm(x) \\ &\geq -2\delta(W_{\max}-W_{\min})\left(m(\calM \backslash A_{>C}) + \int_{\calM \backslash A_{>C}}\rho^*(x)dm(x)\right) \geq -4\delta(W_{\max}-W_{\min}). \end{align*} Finally, the last term \eqref{eq:interact:2c} can be estimated as \begin{align*} \eqref{eq:interact:2c} &=2\int_{A_{>C}\times A_{<2}}(W(x, y)- W_{\min})(\rho^*(x)\rho^*(y) - C\rho^*(y) - C\delta)dm(x)dm(y) \\ &\quad +2\int_{A_{>C}\times (\calM \backslash (A_{>C}\cup A_{<2}))}(W(x, y)- W_{\min})(\rho^*(x)\rho^*(y) - C\rho^*(y))dm(x)dm(y) \\ &\geq 2\int_{A_{>C}\times A_{<2}}(W(x, y)- W_{\min})(\rho^*(x)- C)\rho^*(y) dm(x)dm(y) \\ &\quad -2\delta(W_{\max}- W_{\min})\int_{A_{>C}\times (\calM \backslash (A_{>C}\cup A_{<2}))} C dm(x)dm(y) \\ &\geq 0 - 2\delta(W_{\max}- W_{\min})m\left(\calM \backslash (A_{>C}\cup A_{<2})\right)\int_{A_{>C}} C dm(x) \geq -2\delta(W_{\max}- W_{\min}). \end{align*} Combining the above estimates we conclude that \[ \calF_\gamma(\rho^* ) - \calF_\gamma(\hat \rho^* ) \geq \delta\gamma^{-1} \left(\frac12\log C - \frac12\log 3\right) - 6\delta(W_{\max}-W_{\min})\geq 0, \] implying that any minimizer $\rho^*$ is uniformly bounded by $C$, which completes the proof. \end{proof} \subsection{Limit of small noise} \label{sec:large-gamma} In this section we study the limiting behavior of the minimizers of the free energy functional~\eqref{eq:free-energy} in the small noise regime $\gamma\to \infty$. Intuitively, as the noise ratio gets smaller, the resulting PDE tends to recover the behaviour of the pure interaction system. We consider a sequence of parameter values $(\gamma_n)_{n\in \bbN}$ with $\gamma_n \to \infty$. Since there always exist a minimizer we then consider a sequence of such minimizers $(\rho_n)_{n\in\bbN}$, where $\rho_n \in \argmin \calF_{\gamma_n}$. Using the theory of $\Gamma$-convergence (see Appendix~\ref{appendix:Gamma}) we show that all the limiting points of such a sequence are the minimizers of the interaction energy $\calI$. \begin{proposition} \label{prop:gamma-infty} Let $\calF_\gamma$ be as defined in \eqref{eq:free-energy} and $W: \calM\times\calM \to \bbR$ be a continuous function on $\calM\times\calM$ satisfying $W(x,y)=W(y,x)$. Let $(\gamma_n)_{n\in \bbN}$ be a positive, increasing sequence satisfying $\gamma_n \to \infty$. Let $(\rho_n)_{n\in \bbN}$ be a sequence of minimizers of $\calF_{\gamma_n}$, then there exist a weakly convergent subsequence $\rho_{n_k}$ such that $\rho_{n_k} \stackrel{w}{\to} \rho^*$ and $\rho^*$ is the minimizer of the interaction energy \[ \rho^* \in \argmin_{\rho \in \calP(\calM)} \calI(\rho). \] \end{proposition} \begin{proof} Consider a shifted functional $\bar\calF_\gamma = \calF_\gamma - \gamma^{-1}\calE(\bar\rho)$, since the last term is a constant, minimizers of $\bar\calF_\gamma$ coincide with the minimizers of $\calF_\gamma$. 
At the same time for $\gamma_1 > \gamma_2 > 0$ and arbitrary $\rho \in \calP(\calM)$ we have \[ \bar\calF_{\gamma_1}(\rho) = \calI(\rho) + \gamma_1^{-1}\left(\calE(\rho) - \calE(\bar\rho)\right) \leq \calI(\rho) + \gamma_2^{-1}\left(\calE(\rho) - \calE(\bar\rho)\right) = \bar\calF_{\gamma_2}(\rho), \] so the sequence $(\bar\calF_{\gamma_n})_{n\in\bbN}$ is decreasing. At the same time, the pointwise limit of $\bar\calF_{\gamma_n}$ is \[ \bar \calF =\lim_{n\to\infty}\bar\calF_{\gamma_n}(\rho) = \begin{cases} \calI(\rho), \qquad &\rho \in \calP_{ac}^+(\calM), \\ +\infty &\text{otherwise.} \end{cases} \] By Proposition \ref{prop:gamma-decreasing} $\bar\calF_{\gamma_n} \stackrel{\Gamma}{\to} \text{lsc}(\bar \calF)$, where the lower-semicontinuous envelope of $\bar \calF$ is exactly~$\calI$. As shown in Theorem \ref{th:minimizers}, $\calI$ is a weakly lower-semicontinuous functional, so we only need to show that there exists no lower-semicontinuous functional $\calG\neq \bar\calF$ satisfying $\calI \leq \calG\leq \bar\calF$. Since $\bar\calF = \calI$ on $\calP_{ac}^+(\calM)$ we only need to consider $\rho \in \calP(\calM) \backslash \calP_{ac}^+(\calM)$. The space of measures absolutely continuous w.r.t. the volume measure $\calP_{ac}(\calM)$ is dense in $\calP(\calM)$ and by simple construction $\calP_{ac}^+(\calM)$ is dense in $\calP(\calM)$. Taking a sequence $\rho_n \stackrel{w}{\to} \rho$, where $\rho_n \in \calP_{ac}^+(\calM)$ we conclude that $\text{lsc}(\bar\calF)(\rho) \leq \calI(\rho)$ and thus $\text{lsc}(\bar\calF) = \calI$. Applying the fundamental theorem of $\Gamma$-convergence (Theorem \ref{th:gamma-coonvergence}) we get the result. \end{proof} \begin{remark}[Limitations] Note that for the small noise limit we only show convergence of the minimizers of the free energy functional, while the stationary solutions of the McKean-Vlasov equations are all of the critical points. We also do not answer the reverse question, namely whether every minimizer of the interaction energy can be approximated by the minimizers of the free energy functional with (infinitely)-large $\gamma$. \end{remark} \subsection{Geodesic convexity} \label{sec:convexity} In this section we use the approach adapted from \cite{sturm2005convex} to characterize the convexity of the free energy functional \eqref{eq:free-energy}. The idea of generalizing the convexity criterion for the interaction potential on $\bbR^d$ to the manifold setting has been discussed in \cite[Chapter 17]{Villani2008}, but since we found no rigorous formulation in the literature we prove such a criterion in this Section. With a slight abuse of notation we will use $\calE(\rho)$ instead of $\calE(\mu)$ if $\mu$ admits density $\rho$. A functional is geodesically convex if it satisfies the following definition. \begin{definition}[Geodesic convexity] A functional $F: \calX \to \bbR$ on a metric space $(\calX, d)$ is geodesically $\lambda$-convex for $\lambda\in \bbR$ if for any geodesic $\gamma: [0,1] \to \calX$ it holds that \[ F(\gamma(s)) \leq (1-s)F(\gamma(0)) + sF(\gamma(1)) -\frac{\lambda}{2} s(1-s) d(\gamma(0), \gamma(1)), \quad \forall s\in [0,1]. \] \end{definition} For a lower-semicontinuous function $f:[0,1] \to \bbR$ define the lower centered second derivative \[ \underline{\partial_t^2}f(t) = \lim\inf_{s\to 0} \frac1{s^2}\left[f(t+s)+ f(t-s) - 2f(t)\right]. 
\] Then a functional is $\lambda$-convex if and only if it is lower semicontinuous along geodesics and if for each geodesic $\gamma:[0,1] \to \calX$ with $F(\gamma(0)), F(\gamma(1)) < \infty$, it holds that $ F(\gamma(s)) \leq \infty$ for all $s\in (0,1)$ and \[ \underline{\partial_s^2}F(\gamma(s)) \geq \lambda d(\gamma(0), \gamma(1))^2. \] We give a sufficient condition for $\lambda$-convexity of the functional \eqref{eq:free-energy} on the space of probability measures on a Riemannian manifold $\calM$ with finite second moment \[ \calP_2(\calM) := \{\mu \in \calP(\calM): \int \dist(x, x_0)^2d\mu <\infty\}, \] for some $x_0 \in \calM$, equipped with Wasserstein metric $\fw_2$. For any two measures $\mu, \nu \in \calP_2(\calM)$ the $\fw_2$ distance is \[ \fw_2(\mu, \nu) := \inf_{\pi \in \Pi(\mu, \nu)}\left(\int \dist(x, y)^2d\pi(x, y)\right)^{1/2}, \] where infimum is taken with respect to all possible couplings $\pi$ with first and second marginals being $\mu$ and $\nu$ respectively. Note that since $\calM$ is compact $\calP(\calM) = \calP_2(\calM)$, we continue using $\calP_2$ in this section to emphasise the usage of the Wasserstein-2 topology. We begin by stating some relevant results. \begin{lemma}[Lemma 3.1 \cite{sturm2005convex}] Let $\mu_0, \mu_1 \in \calP_2(\calM)$ admit densities $\rho_1, \rho_2 > 0$ w.r.t. the volume measure $m$. Then there exists a unique geodesic $\mu: [0,1] \to \calP_2(\calM)$ such that $\mu(0) = \mu_0, \ \mu(1) = \mu_1$ and for all $s \in [0,1]$ $\mu(s)$ is absolutely continuous w.r.t. $m$. Moreover, there exists a vector field $\Phi:\calM \to T\calM$ such that $\mu(s)$ is the push forward of $\mu_0$ under the map \[ F_s: \calM \to \calM \quad\text{with} \quad F_s(x)=\exp_x(s\Phi). \] \end{lemma} Note that by definition of the push forward the above implies that for any measurable function $u:\calM\to \R$ it holds that \[ \int_\calM u(x)d\mu_s(x) = \int_\calM u(F_s(x))d\mu_0(x). \] \begin{lemma}[Corollary 1.5 \cite{sturm2005convex}] \label{lemma:entropy-convexity} Consider the entropy $\calE$ defined in \eqref{eq:entropy}. Then the lower second derivative of $\calE$ along geodesic $\rho_t$, with $\calE(\rho_0), \calE(\rho_1) < \infty$, satisfies \[ \underline{\partial_t^2}\calE = \int \operatorname{Ric}_x(\dot{F_t}, \dot{F_t})\rho_0(x)dm(x) \] Moreover, $\calE$ is $\lambda$-convex for $\lambda\in\R$ if and only if $\forall x \in \calM, \ v\in T_x\calM$ \[ \operatorname{Ric}_x(v, v) \geq \lambda\|v\|^2. \] \end{lemma} Extending the result to the free energy functional $\calF_\gamma$ with the interaction term \eqref{eq:free-energy} we get the following sufficient condition for the geodesic convexity of $\calF_\gamma$. \begin{theorem} \label{th:convexity-M} Consider the free energy $\calF_\gamma$ as defined in \eqref{eq:free-energy}. Assume that there exist $\alpha, \lambda \in \bbR$ such that $W$ satisfies \[ \underline{\partial^2_t} W\left(\exp_x vt, \exp_y ut\right) \geq \alpha(\|v\|^2 + \|u\|^2) \] and $\calM$ is such that \[ \operatorname{Ric}_x(v, v) \geq \lambda\|v\|^2 \] for all $x, y \in \calM, \ v\in T_x\calM, u \in T_y\calM$, then $\calF_\gamma$ is an $(\gamma^{-1}\lambda + \alpha)$-convex functional. In particular, if $\underline{\partial^2_t} W\left(\exp_x vt, \exp_y ut\right) \geq 0$, $\calF_\gamma$ is $\gamma^{-1}\lambda$-convex. \end{theorem} \begin{proof} Recall that \eqref{eq:free-energy} is a sum of entropy and interaction energy $\calF = \gamma^{-1}\calE + \calI$. 
By definition of the lower second derivative we get \[ \underline{\partial_t^2}\calF \geq \gamma^{-1}\underline{\partial_t^2}\calE + \underline{\partial_t^2}\calI. \] Let $\rho_t$ be a geodesic with boundary values satisfying $\calE(\rho_0), \calE(\rho_1) < \infty$. We calculate the lower second derivative of the interaction energy along $\rho_t$. We begin by rewriting the interaction energy in term of the map $F_t$, namely \[ \calI(\mu_t) = \frac{1}{2}\int_{\calM \times\calM} W(x, y )d\mu_t(x)d\mu_t(y) = \frac{1}{2}\int_{\calM \times\calM} W(F_t(x), F_t(y) )d\mu_0(x)d\mu_0(y). \] Then by definition of the lower second derivative we get \begin{align*} \underline{\partial_t^2}\calI &= \lim\inf_{s\to 0} \frac1{s^2}\left[f(t+s)+ f(t-s) - 2f(t)\right] \\ &=\lim\inf_{s\to 0}\frac1{s^2}\int_{\calM \times\calM}\Big[W(F_{t+s}(x), F_{t+s}(y)) + W(F_{t-s}(x), F_{t-s}(y)) \\ &\hspace{110pt}-2W(F_t(x), F_t(y))\Big]d\mu_0(x)d\mu_0(y) \\ &\geq \int_{\calM \times\calM} \underline{\partial_t^2} W(F_t(x), F_t(y))d\mu_0(x)d\mu_0(y) \\ &\geq \alpha \int_{\calM \times\calM} \left( \|\dot{F}_t(x)\|^2+ \|\dot{F}_t(y)\|^2\right)d\mu_0(x)d\mu_0(y) = 2\alpha\int_{\calM}\|\dot{F}_0\|d\mu_0 = 2\alpha \fw_2^2(\mu_0, \mu_1). \end{align*} Combining the estimate with the bound from Lemma \ref{lemma:entropy-convexity} we get the result. \end{proof} \begin{remark} In the Euclidean case, $\calM = \bbR^d$, the criterion from Theorem \ref{th:convexity-M} reduces to $\alpha$-convexity of the interaction kernel $W: \bbR^{2d} \to \bbR$. As remarked in \cite[Proposition 7.25]{santambrogio2015optimal}, it is a sufficient but not necessary condition for the convexity of the corresponding interaction potential $S$. \end{remark} \begin{remark}[Gradient flow solutions] Formally, from the convexity properties one can also deduce existence (and uniqueness in case of $\lambda>0$) of a \emph{gradient flow solution} of the corresponding non-stationary McKean-Vlasov equation. For a separable Hilbert space $X$, such result for a large class of functionals on Wasserstein space $\calP_2(X)$ is rigorously established in \cite[Section 11.2]{ambrosio2005gradient}. On a manifold of positive curvature similar result was proved for the relative entropy (without the interaction term) in \cite{erbar2010heat}. \end{remark} \begin{remark}[Functional inequalities] In Euclidean space the uniform geodesic convexity has been shown to be equivalent to the log-Sobolev inequality \cite{Villani2003}. We expect the same arguments to hold on smooth manifolds. On the equivalence of functional inequalities in Riemannian setting see \cite{otto2000generalization}. Logarithmic Sobolev inequality in the special case $\calM = \bbS^{n-1}$ is studied in \cite{brigati2023logarithmic} \end{remark} \paragraph*{The case of the sphere $\calM = \bbS^{n-1}$} Consider a special case, namely $\calM = \bbS^{n-1}$. Note that any element of a unit sphere $x\in \bbS^{n-1}$ can be identified with a unit vector in $\bbR^{n}$. For any pair of points on a sphere $x, y \in \bbS^{n-1}$ we denote by $\left<x, y\right>$ a Euclidean scalar product between the corresponding vectors in $\bbR^n$. We now establish a sufficient condition for a convexity of an interaction energy for an interaction potential that defined in terms of the scalar product $W(x, y) = W(\left<x, y\right>)$ with now $W:[-1,1]\to\R$ by an abuse of notation. 
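To preview the quantitative content of the convexity criterion established below (Theorem~\ref{th:convexity-sph} and Corollary~\ref{cor:convexity-sph}), consider the quadratic kernel $W(t) = -\kappa t^2$ with $\kappa > 0$, chosen here only as an illustration. It is rotationally symmetric, and on $[-1,1]$ it satisfies
\[
|W'(t)| = 2\kappa |t| \leq 2\kappa, \qquad |W''(t)| = 2\kappa,
\]
so the constant $C$ below can be taken as any value larger than $2\kappa$. This yields $\lambda$-convexity of $\calF_\gamma$ for every $\lambda < \gamma^{-1}(n-2) - 8\kappa$ and hence, in particular, a unique minimizer whenever $\gamma < (n-2)/(8\kappa)$.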
\begin{remark}[Choice of parametrization] For a general manifold $\calM$ a natural choice for introducing the interaction potential is in terms of the squared geodesic distance (cf.~\cite{fetecau2021well}) \[ W(x, y) = W(\dist(x,y)^2). \] This choice is inconvenient in the case of a sphere, where the geodesic distance is equal to \[ \dist(x,y) = \arccos(\left<x, y\right>). \] The examples later are directly parametrized in terms of $\skp{x,y}$. Also, one can see that $\arccos$ is not differentiable at $\pm 1$, and in using the scalar product $\skp{x,y}$ we avoid dealing with regularity issues of the distance function at the endpoints. \end{remark}
\begin{theorem} \label{th:convexity-sph} Consider the free energy functional $\calF_\gamma$ as defined in \eqref{eq:free-energy} on the sphere $\bbS^{n-1}\subset\bbR^n$. Let the interaction kernel satisfy Assumption \ref{assum:sym-kernel} with some $W \in C^2((-1,1), \bbR)$ and let $\|W'\|_\infty, \|W''\|_\infty \leq C$. In addition, let $W'(\pm 1)$ denote the left/right derivative at $\pm 1$, respectively, and assume that $|W'(\pm 1)|<C$. Then $\calF_\gamma$ is $\lambda$-convex, where $\lambda = \gamma^{-1}(n-2)-4C$. \end{theorem}
\begin{proof} For any $x \in \bbS^{n-1}, \ v\in T_x\bbS^{n-1}$ let $\gamma_{x, v}: [0,1] \to \bbS^{n-1}$ be a constant speed geodesic with $\gamma_{x, v}(0) = x$ and $\dot\gamma_{x, v}(0) = v$. Introduce $g(x, y; v, u, t) = \left<\gamma_{x,v}(t), \gamma_{y,u}(t)\right>$; then its first and second derivatives with respect to $t$ are: \begin{align*} \partial_t g(x, y; v, u, t) &=\left<\gamma_{x,v}(t), \dot\gamma_{y,u}(t)\right> + \left<\dot \gamma_{x,v}(t), \gamma_{y,u}(t)\right> \\ \partial^2_t g(x, y; v, u; t) &=2\left<\dot\gamma_{x,v}(t), \dot\gamma_{y,u}(t)\right> + \left<\ddot \gamma_{x,v}(t), \gamma_{y,u}(t)\right> + \left<\gamma_{x,v}(t), \ddot\gamma_{y,u}(t)\right> \end{align*} As $W$ is twice differentiable on $(-1, 1)$, for $g(x, y; v, u, t) \neq \pm 1$ by the chain rule we get \begin{align*} \MoveEqLeft \underline{\partial^2_t} W\left(\gamma_{x,v}(t), \gamma_{y, u}(t)\right) = \partial^2_t W(g(x, y; u, v; t)) = W''(g(\cdot; t))\left(\partial_tg(\cdot; t)\right)^2 + W'(g(\cdot; t))\partial^2_tg(\cdot; t). \end{align*} Using that for constant speed geodesics on a sphere $\|\dot\gamma_{x,v}(t)\| = \|v\|$ and $\|\ddot\gamma_{x,v}(t)\| = \|v\|^2$, we obtain the bound \begin{align*} \underline{\partial^2_t} W\left(\gamma_{x,v}(t), \gamma_{y, u}(t)\right) \geq -C\left((\|u\| +\|v\|)^2 + \|u\|^2 + \|v\|^2\right) \geq -3C\left(\|u\|^2 +\|v\|^2\right). \end{align*} Note that $g(x, y; v, u, t) =1$ if and only if $\gamma_{x, v}(t) = \gamma_{y, u}(t)$, so for the case $g(x, y; v, u, t) =1$ we calculate \begin{align*} \MoveEqLeft \underline{\partial^2_t} W\left(\gamma_{x,v}(t), \gamma_{y, u}(t)\right) = \lim\inf_{s\to 0} \frac1{s^2}\Big[W(\left<\gamma_{x,v}(t+s), \gamma_{y, u}(t+s)\right>) \\ &\qquad+ W(\left<\gamma_{x,v}(t-s), \gamma_{y, u}(t-s)\right>) - 2W(\left<\gamma_{x,v}(t), \gamma_{y, u}(t)\right>)\Big] \\ &=\lim\inf_{s\to 0} \frac1{s^2}\Big[W\left(1 -s^2\|\dot\gamma_{x,v}(t)-\dot\gamma_{y,u}(t)\|^2 + o(s^2)\right) \\ &\qquad+ W\left(1 -s^2\|\dot\gamma_{x,v}(t)-\dot\gamma_{y,u}(t)\|^2+ o(s^2) \right) - 2W(1)\Big] \\ &=\lim\inf_{s\to 0} \frac1{s^2}\Bigl[ 2W(1) - 2s^2W'(1)\|\dot\gamma_{x,v}(t)-\dot\gamma_{y,u}(t)\|^2 - 2W(1) +o(s^2)\Bigr] \\ &= -2W'(1)\|\dot\gamma_{x,v}(t)-\dot\gamma_{y,u}(t)\|^2.
\end{align*} Estimating the above expression we conclude that for $g(x, y; v, u, t) =1$: \[\begin{split} \underline{\partial^2_t} W\left(\gamma_{x,v}(t), \gamma_{y, u}(t)\right) &\geq -4|W'(1)|\left(\|\dot\gamma_{x,v}(t)\|^2+\|\dot\gamma_{y,u}(t)\|^2\right) \\ &\geq -4C\left(\|\dot\gamma_{x,v}(t)\|^2 + \|\dot\gamma_{y,u}(t)\|^2\right). \end{split} \] Analogous result holds for $g(x, y; v, u, t) =-1$. As a result, the assumption of Theorem~\ref{th:convexity-M} is satisfied for $\alpha = -4C$. \end{proof} Note that if $\lambda >0$ the above Theorems~\ref{th:minimizers} and~\ref{th:convexity-sph} guarantee strict convexity of the free-energy functional, and by the properties of strictly convex functionals we can conclude that $\calF_\gamma$ has a unique minimizer. \begin{corollary} \label{cor:convexity-sph} Consider free energy functional $\calF_\gamma$ with the interaction kernel $W$ as in Theorem \ref{th:convexity-sph}. Let $\gamma < \frac{(n-2)}{4C}$, then there exists a unique minimizer of $\calF_\gamma$. \end{corollary} \begin{proof} The statement follows directly from Theorems~\ref{th:minimizers} and~\ref{th:convexity-sph}. \end{proof} \section{The structure of \texorpdfstring{$L_2(\bbS^{n-1})$}{the Lebesgue space on the sphere}} We collect some properties of spherical harmonics in Section~\ref{sec:harmonics}, the convolution and interaction on a sphere in Sections~\ref{sec:spherical-conv} and~\ref{ssec:InteractionSphere}, respectively. \subsection{Spherical harmonics} \label{sec:harmonics} In our further analysis we will imploy the Hilbertian structure of $L_2(\bbS^{n-1})$. For that we will need some results concerning spherical harmonics, an orthonormal basis of square-integrable functions on a sphere. In this section we introduce the spherical harmonics basis and relate it to the eigenspaces of the Laplace-Beltrami operator. In the next section we give a definition of the convolution on a sphere and introduce the analog of the convolution theorem for $L_2(\bbS^{n-1})$. For more details on the topic we refer the reader to \cite[Chapters 1 and 2]{dai2013approximation}. We will work with the Hilbert space of square-integrable functions on $\bbS^{n-1}$ equipped with the scalar product \[ \left<f, g\right> = \frac{1}{\omega_n} \int_{\bbS^{n-1}}f(x)g(x)d\sigma(x), \] where $\sigma$ is the spherical measure (a uniform measure on $\bbS^{n-1}$) and $\omega_n = \sigma(\bbS^{n-1}) = \frac{2\pi^{n/2}}{\Gamma(n/2)}$ is the normalization constant. We will also refer to the normalized version of the spherical measure as the uniform measure or uniform state $\bar\rho = \frac1{\omega_n}\sigma$. Spherical measure is induced by the metric generated by the standard Euclidean product in $\bbR^{n}$. It is easy to verify that $\sigma$ is invariant under the rotation group: for any set $A \subseteq \bbS^{n-1}$ and any $g\in SO(n)$ it holds that $\sigma(gA) = \sigma(A)$, where $gA = \{ga: a\in A\}$. The basis of $L_2(\bbS^{n-1})$ can be constructed with the spherical harmonics. \begin{definition}[$l$-th spherical harmonics $\calH_l$] For $l \in \bbN_0$ let $H_l \subset \ker \Delta$ be a linear space of $l$-homogeneous polynomials $p: \bbR^n \to \bbR$ \[ H_l = \{p(x): \Delta p =0, \ p(\lambda x) = \lambda^lp(x)\}. \] The linear space of $l$-th spherical harmonics $\calH_l$ is the space of the elements of $H_l$ restricted to the unit sphere: \[ \calH_l = \{\tilde p(x): \exists p \in H_l \ \text{s.t.} \ \forall x \in \bbS^{n-1} \ \tilde p(x) = p(x)\}, \] where $\tilde p:\bbS^{n-1} \to \bbR$. 
\end{definition}
It can be verified that the subspaces $\calH_l, \ \calH_m$ are orthogonal for $l\neq m$ (see \cite[Theorem 1.1.2]{dai2013approximation}) and that every subspace is finite-dimensional (see \cite[Corollary 1.1.4]{dai2013approximation}). Moreover, all elements of the same subspace $\calH_l$ can be shown to be eigenfunctions of the Laplace-Beltrami operator corresponding to the same eigenvalue.
\begin{theorem}[$\calH_l$ are eigenfunctions of $\Delta_{\bbS^{n-1}}$ {\cite[Theorem 1.4.5] {dai2013approximation}}]
\label{th:sph-egenfunctions}
Let $\Delta_{\bbS^{n-1}}$ be the Laplace-Beltrami operator on $\bbS^{n-1}$, then for any $p \in \calH_l$ it holds
\[
\Delta_{\bbS^{n-1}} p = -l(n+l-2)p.
\]
\end{theorem}
An orthonormal basis of $\calH_l$ has an explicit form in spherical coordinates $\theta_1, \ldots, \theta_{n-1}$:
\begin{align*}
x_1 &= r\sin \theta_{n-1} \cdots\sin\theta_{2}\sin\theta_{1}, \\
x_2 &= r\sin \theta_{n-1} \cdots \sin\theta_{2}\cos\theta_{1}, \\
&\vdots\\
x_n &= r\cos\theta_{n-1}.
\end{align*}
The spherical measure in spherical coordinates takes the form
\[
d\sigma = \prod_{i=1}^{n-1} (\sin\theta_{n-i})^{n-i-1}d\theta_{n-i},
\]
and the corresponding basis of spherical harmonics then has the form:
\begin{equation}
\label{eq:harmonics}
Y_{l, K}(\theta) = e^{ik_{n-2}\theta_1}A_K^l \prod_{j=0}^{n-3} C_{k_j-k_{j+1}}^{\frac{n-j-2}{2}+k_{j+1}}(\cos \theta_{n-j-1}) (\sin \theta_{n-j-1})^{k_{j+1}},
\end{equation}
where $K\in \bbK_l$ is a multi-index satisfying
\begin{equation}\label{eq:MultiIndex:admissible}
\bbK_l := \set*{K = (k_0, k_1, \ldots, k_{n-2})\in \bbN_0^{n-2}\times \bbZ: l = k_0 \geq k_1 \geq \cdots \geq k_{n-3} \geq |k_{n-2}|\geq 0},
\end{equation}
where $A_K^l$ is a normalization constant and $C^\lambda_k$ are the Gegenbauer polynomials of degree $k$. The Gegenbauer polynomials satisfy the following recurrence relation:
\begin{equation}
\label{eq:gegenbauer}
(k+2)C_{k+2}^\lambda(t) = 2(\lambda+k+1)tC_{k+1}^\lambda(t) -(2\lambda+k)C_{k}^\lambda(t),
\end{equation}
and the first two polynomials are given by $C_0^\lambda(t) = 1$ and $C_1^\lambda(t) = 2\lambda t$. It can be shown that the $Y_{l, K}$ form an orthonormal basis of the square-integrable functions on the sphere. For any $l \in \bbN_0$ define the projection of $L_2(\bbS^{n-1})$ onto $\calH_l$ such that for any $f\in L_2(\bbS^{n-1})$:
\[
\proj_l f := \sum_{K \in \bbK_l}f_{l,K} Y_{l,K}, \quad\text{with}\quad f_{l,K} := \left<f, Y_{l, K}\right>_{L_2(\bbS^{n-1})};
\]
then the following theorem holds.
\begin{theorem}[Fourier decomposition on $\bbS^{n-1}$ {\cite[Theorem 2.2.2] {dai2013approximation}}]
\label{th:sph-decomposition}
Let $Y_{l, K}$ be the orthonormal basis of $\calH_l$ as defined in \eqref{eq:harmonics}, then the set
\[
\calY = \{ Y_{l,K}: l \in \bbN_0, K \in \bbK_l\}
\]
is an orthonormal basis of $L_2(\bbS^{n-1})$. In particular, for any $f \in L_2(\bbS^{n-1})$ the following identity holds
\[
f =\sum_{l\in \bbN_0} \proj_l f,
\]
in the sense that $\lim_{N\to \infty} \|f - \sum_{l=0}^N \proj_l f\|_{L_2} =0$.
\end{theorem}
Combining Theorem \ref{th:sph-decomposition} with Theorem \ref{th:sph-egenfunctions}, one can see that $\calH_l$ coincides with the $l$-th eigenspace of the Laplace-Beltrami operator $\Delta_{\bbS^{n-1}}$. We will further use the spherical harmonics basis functions $Y_{l,K}$, which are explicitly defined for a fixed coordinate system. Since the choice of the coordinate system is arbitrary, the results also hold for an arbitrary spherical harmonics basis. For simplicity, we therefore fix the coordinate system for the rest of the paper.
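For the reader's convenience, the following short Python sketch (a numerical illustration only, assuming the \texttt{numpy} and \texttt{scipy} libraries are available) checks the recurrence relation \eqref{eq:gegenbauer} against the Gegenbauer polynomials implemented in \texttt{scipy.special}.
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer

# Check the three-term recurrence
# (k+2) C_{k+2}^lam = 2(lam+k+1) t C_{k+1}^lam - (2 lam + k) C_k^lam
lam = 1.5                      # lam = (n-2)/2 corresponds to n = 5
t = np.linspace(-1.0, 1.0, 11)
for k in range(8):
    lhs = (k + 2) * eval_gegenbauer(k + 2, lam, t)
    rhs = 2 * (lam + k + 1) * t * eval_gegenbauer(k + 1, lam, t) \
          - (2 * lam + k) * eval_gegenbauer(k, lam, t)
    assert np.allclose(lhs, rhs)
\end{verbatim}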
\subsection{Spherical convolution}
\label{sec:spherical-conv}
Before we proceed to the convolution theorem on a sphere, we give some intuition on how it is related to the well-known convolution theorem in Euclidean space. Consider $L_2([-\pi, \pi]^d)$, the space of square-integrable functions on $[-\pi, \pi]^d$, where any $f\in L_2([-\pi, \pi]^d)$ is identified with its $2\pi$-periodic extension to $\bbR^d$. Then the set of Fourier harmonics $(w_k)_{k\in \bbZ^d}$, $k = (k_1, \ldots, k_d)$, $k_i \in \bbZ$, defined as
\[
w_k(x) = \alpha_k\prod_{i=1}^d w_{k_i}(x_i),
\]
where $w_{k_i}: [-\pi, \pi] \to \bbR$ takes the form
\[
w_{k_i}(t) =
\begin{cases}
\cos(k_i t), &\quad k_i > 0, \\
1, &\quad k_i = 0, \\
\sin(k_i t), &\quad k_i < 0,
\end{cases}
\]
and $\alpha_k$ is a normalization constant, is an orthonormal basis of $L_2([-\pi, \pi]^d)$. For any function $f \in L_2([-\pi, \pi]^d)$ denote by $\hat f(k)$ the $k$-th component of its Fourier decomposition:
\[
\hat f(k) := \left<f, w_k\right> = \int_{[-\pi, \pi]^d} f(x)w_k(x)dx.
\]
For any two functions $f, g \in L_2([-\pi, \pi]^d)$ define their convolution as
\begin{equation}
\label{eq:conv-rd}
(f * g)(x) = \int_{[-\pi, \pi]^d} f(y) g(x-y) dy;
\end{equation}
then by the convolution theorem it holds that
\[
\widehat{f*g}(k) = \hat f(k) \cdot \hat g(k).
\]
In particular, the above representation implies that the convolution in Euclidean space is a symmetric operation: if the sum $\sum_{k\in \bbZ^d}\widehat{f*g}(k)$ is well-defined, it holds that $f*g = g*f$.
\begin{remark}
Note that the definition \eqref{eq:conv-rd} requires the domain of the function $g$ to be a vector space, so it cannot be directly applied to $L_2(\bbS^{n-1})$. At the same time, the function $h_y(x) = x-y$ can be interpreted as a shift operator, which allows us to formally rewrite \eqref{eq:conv-rd} as
\[
(f * g)(x) = \int f(y) g(h_y(x)) dh_y,
\]
so that the integral is defined with respect to the measure $dh_y$ over the group of shifts. This definition can be generalized to a general group. In particular, convolution on a sphere can be defined as an integration over the special orthogonal group \cite{dokmanic2009convolution}, which can be formally introduced as:
\begin{equation}
\label{eq:conv-rd2}
(f * g)(x) = \int_{SO(n)} f(h) g(hx) dh,
\end{equation}
where $dh$ is a measure on $SO(n)$ and $f$ is defined on $SO(n)$ instead of $\bbS^{n-1}$.
\end{remark}
We will now define a convolution on the sphere. Similarly to the approach of \cite{dokmanic2009convolution}, we change the domain of the second argument $g$, making the convolution a non-symmetric operation. We identify any element of $\bbS^{n-1}$ with the corresponding unit-length vector in $\bbR^n$. This allows us to define the convolution in a slightly more restrictive setting.
\begin{definition}[Convolution on $\bbS^{n-1}$]
\label{def:conv-sn}
Let $f\in L_2(\bbS^{n-1})$ and let $g: [-1, 1] \to \bbR$ satisfy
\begin{equation}\label{eq:spherical:intergrability}
\int_{[-1, 1]} |g(t)|(1 - t^2)^{\frac{n-3}{2}}dt < \infty,
\end{equation}
then the convolution of $f$ with $g$ is defined as
\[
(g*f)(x) := \int_{\bbS^{n-1}} f(y)g(\left<x, y\right>)d\sigma(y).
\]
\end{definition}
We note that for $g:[-1,1]\to\bbR$ satisfying the integrability condition~\eqref{eq:spherical:intergrability}, the function $g(\skp{x_0,\cdot})$ belongs to $L_1(\bbS^{n-1})$ for any $x_0\in \bbS^{n-1}$.
For such admissible $g:[-1,1]\to\bbR$, we define its spherical harmonics decomposition as follows.
\begin{definition}[Spherical harmonics decomposition]
\label{def:spherical-decomposition}
Consider $\calM = \bbS^{n-1}$ and let $g: [-1, 1] \to \bbR$ satisfy the integrability condition~\eqref{eq:spherical:intergrability}. Then the sequence $(\hat g_k)_{k\in \bbN}$ is called the \emph{spherical harmonics decomposition} of $g$, where
\[
\hat g_k = g_{k, 0} = c_{\frac{n-2}{2}}\int_{-1}^1 g(t)\frac{C_k^{\frac{n-2}{2}}(t)}{C_k^{\frac{n-2}{2}}(1)}(1-t^2)^{\frac{n-3}{2}}dt,
\]
with $C^\alpha_k$ the Gegenbauer polynomials defined in \eqref{eq:gegenbauer} and $c_\lambda$ the normalization constant
\[c_\lambda = \left(\int_{-1}^1(1-t^2)^{\lambda - \frac{1}{2}}dt\right)^{-1} = \frac{\Gamma(\lambda+1)}{\sqrt{\pi}\Gamma(\lambda+\frac12)}.
\]
\end{definition}
The spherical harmonics decomposition allows us to represent any admissible $g$ as a linear combination of Gegenbauer polynomials.
\begin{lemma}[\cite{dai2013approximation}]
\label{lemma:gegegnbauer-decomposition}
Let $g: [-1, 1] \to \bbR$ satisfy the integrability condition~\eqref{eq:spherical:intergrability}. Then $g$ has the following representation in terms of the Gegenbauer polynomials:
\[
g = \sum_{k} \hat g_k \frac{2k +n-2}{n-2} C_k^{\frac{n-2}{2}},
\]
where $(\hat g_k)_{k\in\bbN}$ is the spherical harmonics decomposition of $g$, $C_k^{\frac{n-2}{2}}$ are the Gegenbauer polynomials, and the equality holds in the $L_2(\bbS^{n-1})$ sense.
\end{lemma}
\begin{remark}
Note that Definition \ref{def:spherical-decomposition} is an explicit formulation, in terms of the Gegenbauer polynomials, of the (simplified) Definition \ref{def:sph-decomposition-intro} used in the Introduction. The coefficients $\hat W_k$ are properly scaled projections onto the spherical harmonics basis functions $Y_{k,0}$; see Section \ref{sec:harmonics} for the explicit expressions of the spherical harmonics $Y_{k,0}$.
\end{remark}
The following theorem shows that the eigenspaces $\calH_p$ of the Laplace-Beltrami operator are invariant under the convolution with any admissible $g$.
\begin{theorem}[Invariance of $\calH_p$, {\cite[Theorem 2.1.3]{dai2013approximation}}]
Let $f, g$ be as in Definition \ref{def:conv-sn}, then
\[
\proj_p (g * f) = \omega_n \hat g_p \proj_p f,
\]
where $\hat g_p$ denotes the spherical harmonics decomposition of $g$ as in Definition \ref{def:spherical-decomposition}.
\end{theorem}
As a result of the above theorem we can formulate an analogue of the convolution theorem on $\bbS^{n-1}$.
\begin{corollary}[Convolution theorem on $\bbS^{n-1}$]
\label{cor:convolution-theorem}
Let $f, g$ be as in Definition \ref{def:conv-sn}. Then for any $p\in \bbN$ and $k \in \bbK_p$, the $(p, k)$-th spherical harmonics coefficient of the convolution $g*f$ satisfies
\[
\left<g*f, Y_{p, k}\right>_{L_2(\bbS^{n-1})} = \omega_n\hat g_p\left<f, Y_{p, k}\right>_{L_2(\bbS^{n-1})}.
\]
\end{corollary}
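As an illustration of Definition~\ref{def:spherical-decomposition} and Lemma~\ref{lemma:gegegnbauer-decomposition}, the following Python sketch (numerical only, assuming \texttt{numpy} and \texttt{scipy}; the helper name \texttt{hat\_coeff} is ours) computes the coefficients $\hat g_k$ by quadrature and checks the Gegenbauer reconstruction for a smooth test function, up to quadrature and truncation error.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma

def hat_coeff(g, k, n):
    """k-th spherical harmonics coefficient of g (Definition above)."""
    lam = (n - 2) / 2
    c = gamma(lam + 1) / (np.sqrt(np.pi) * gamma(lam + 0.5))
    f = lambda t: g(t) * eval_gegenbauer(k, lam, t) \
                  / eval_gegenbauer(k, lam, 1.0) * (1 - t**2)**(lam - 0.5)
    return c * quad(f, -1.0, 1.0)[0]

n = 4                               # sphere S^{n-1} embedded in R^n
g = np.exp                          # test function g(t) = exp(t)
coeffs = [hat_coeff(g, k, n) for k in range(13)]
# partial sum of the Gegenbauer series from the Lemma above
recon = lambda t: sum(c * (2 * k + n - 2) / (n - 2)
                      * eval_gegenbauer(k, (n - 2) / 2, t)
                      for k, c in enumerate(coeffs))
for t in (-0.7, 0.0, 0.3, 0.9):
    assert abs(recon(t) - g(t)) < 1e-6
\end{verbatim}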
In a given coordinate system $\theta_1, \ldots, \theta_{n-1}$ we define the space of rotationally symmetric square-integrable functions as follows.
\begin{definition}[Space $L_2^s(\bbS^{n-1})$]\label{def:L2s}
Let $\theta_1, \ldots, \theta_{n-1}$ be a coordinate system on $\bbS^{n-1}$. The space $L_2^s(\bbS^{n-1})$ is the closed span of the spherical harmonics $Y_{l, 0}$ as defined in \eqref{eq:harmonics}:
\[
L_2^s(\bbS^{n-1}) = \overline{\operatorname{span}}^{L_2}\{Y_{l, 0}: l\in\bbN\}.
\]
\end{definition}
\begin{remark}
By the definition of the spherical harmonics it is easy to see that any $f\in L_2^s(\bbS^{n-1})$ is a function of $\theta_{n-1}$ only, and that any harmonic $Y_{l, p}$ with $p\neq 0$ is orthogonal to $f$.
\end{remark}
\subsection{Admissible interaction kernels on a sphere}\label{ssec:InteractionSphere}
We endow the unit sphere $\bbS^{n-1}$ with the natural topology and assume that the interaction kernel satisfies the following assumption.
\begin{assumption}[Rotational symmetry]
\label{assum:sym-kernel}
The interaction kernel $W: \bbS^{n-1}\times \bbS^{n-1} \to \bbR$ takes the form $W(x, y) = W(\left<x, y\right>)$, where, by abuse of notation, $W:[-1,1]\to \bbR$ and $\left<\cdot, \cdot\right>$ is the standard Euclidean product on $\bbR^n$.
\end{assumption}
In this case the measure convolution reduces to the spherical convolution defined in Section~\ref{sec:spherical-conv}. We discussed the spherical harmonics basis in Section~\ref{sec:harmonics} and the properties of the convolution with a rotationally symmetric function in Section~\ref{sec:spherical-conv}. Properties of the spherical convolution are essential for the spectral analysis of the Gibbs map, which is the main component of the bifurcation analysis. For a kernel satisfying Assumption \ref{assum:sym-kernel} we define its spherical harmonics decomposition following Definition~\ref{def:spherical-decomposition}. For rotationally symmetric kernels the spherical harmonics decomposition is the analogue of the Fourier decomposition in the Euclidean case. In particular, the set of all spherical harmonics is an orthonormal basis of $L_2(\bbS^{n-1})$, and the convolution with a rotationally symmetric kernel $W$ can be expressed solely in terms of the spherical harmonics decomposition of $W$. For any $u \in L_2(\bbS^{n-1})$ such that $u = \sum_{l, K} \hat u_{l, K} Y_{l, K}$, the convolution takes, thanks to Corollary~\ref{cor:convolution-theorem}, the form
\begin{equation}
\label{eq:intro-conv}
(W*u)(x) = \int_{\bbS^{n-1}} W(\left<x,y\right>) u(y)d\sigma(y) =\omega_n\sum_{l\in\bbN, K\in \bbK_l} \hat W_l \hat u_{l, K} Y_{l, K}(x),
\end{equation}
where $Y_{l, K}$ are the spherical harmonics from Section~\ref{sec:harmonics}.
\begin{definition}[Stable kernels]\label{def:stable-kernel}
A symmetric kernel $W$ satisfying the condition
\[
\calI(u) = \frac12\int_{\bbS^{n-1}\times \bbS^{n-1}} W(\left<x,y\right>) u(x)u(y)d\sigma(x)d\sigma(y) \geq 0 \quad \forall u \in L_2(\bbS^{n-1})
\]
is called stable, and the space of all stable kernels is denoted by $\calW_s$.
\end{definition}
\begin{lemma}[Spherical decomposition of a stable kernel]
Let $W \in \calW_s$ be a stable kernel. Then $\hat W_k \geq 0$ for all $k \in \bbN$, where $(\hat{W}_k)_{k\in\bbN}$ is the spherical harmonics decomposition of $W$.
\end{lemma}
\begin{proof}
Using the relation \eqref{eq:intro-conv} we obtain for any $u \in L_2(\bbS^{n-1})$:
\begin{align*}
\MoveEqLeft\calI(u) = \frac12\int_{\bbS^{n-1}\times \bbS^{n-1}} W(\left<x,y\right>) u(x)u(y)d\sigma(x)d\sigma(y) \\
&= \frac{1}{2\bar\rho}\int_{\bbS^{n-1}}d\sigma(x) \sum_{l, K}\hat W_l \hat u_{l, K} Y_{l, K}(x) \sum_{l, K} \hat u_{l, K} Y_{l, K}(x) =\frac{1}{2\bar\rho} \sum_{l, K} \hat W_l \hat u_{l, K}^2,
\end{align*}
where we used the orthonormality of the spherical harmonics basis. Assume that there exists $l \in\bbN$ such that $\hat W_l < 0$; by taking $u^* = Y_{l, 0}$ we obtain $\calI(u^*) = \frac{1}{2\bar\rho}\hat W_l <0$, a contradiction.
\end{proof}
\begin{remark}
Note that the concept of a stable kernel on a sphere is equivalent to the characterization of $H$-stability of the interaction kernel on a torus used in \cite{carrillo2020long}.
\end{remark}
As established in \cite{GatesPenrose1970} in the Euclidean setting, a phase transition is only possible if the interaction kernel is $H$-unstable. Similarly, in this work, for the bifurcation analysis and the characterization of the phase transitions we require the interaction kernel to be unstable, and in particular to satisfy the following assumption.
\begin{assumption}
\label{assum:non-stable-kernel}
The interaction kernel $W: \bbS^{n-1}\times \bbS^{n-1} \to \bbR$ satisfies Assumption~\ref{assum:sym-kernel} and has at least one negative component of the spherical harmonics decomposition, i.e. $W \not\in \calW_s$.
\end{assumption}
\begin{remark}[Convexity of the interaction energy for stable kernels]
\label{remark:stable-kernels}
As noted in \cite[Corollary 2.7]{ChayesPanferov2010}, the interaction energy $\calI$ with $W\in \calW_s$ is a positive-semidefinite bilinear map on $L_2(\bbS^{n-1})$, so the uniform state $\bar\rho$ is the unique minimizer of the free energy $\calF_\gamma$ for arbitrary~$\gamma \in \bbR$.
\end{remark}
\section{Sphere: Bifurcation analysis}\label{sec:bifurcations}
As shown before, for large enough noise the uniform measure is the unique minimizer of the free energy functional. We would like to know what happens to the set of minimizers as we decrease the noise. For a fixed interaction kernel $W$ and certain values of $\gamma$, the convexity parameter $\lambda$ becomes negative, which may result in non-uniqueness of minimizers. Under certain conditions, the set of minimizers can start `branching' around a certain state $\rho$. This phenomenon is called a bifurcation. In this section we give the necessary background information on bifurcation theory in Hilbert spaces and apply it to the Gibbs map defined in \eqref{eq:gibbs-map}.
\subsection{Background on general theory}
We study the behaviour of the set of solutions of the parametrized family of fixed point equations
\begin{equation}
\label{eq:fixed-point}
F(x, \lambda)=0,
\end{equation}
for some $F: X\times \bbR \to X$, where $X$ is a Banach space and $\lambda$ is the parameter. A parameter value $\lambda_0$, together with a solution $x_0$ of \eqref{eq:fixed-point} at which the solution set $X_{\lambda_0} = \{x: F(x, \lambda_0) =0\}$ starts branching, is called a bifurcation point. In particular, by branching we understand the existence of solutions of \eqref{eq:fixed-point} different from $(x_0, \lambda_0)$ in any small neighbourhood of $(x_0, \lambda_0)$.
\begin{definition}[Bifurcation point]
Consider a Banach space $X$ with $U \subset X$ an open neighbourhood of $0$, $V\subseteq \bbR$ open, and a nonlinear $C^2$ map $F: U \times V \rightarrow X$. A point $(x_0, \lambda_0) \in U \times V$ is a \emph{bifurcation point} of the equation $F(x, \lambda) = 0$ if
\[
(x_0, \lambda_0) \in \text{cl}\left\{ (x, \lambda): F(x, \lambda) = 0, \ x \neq x_0, \ \lambda \neq \lambda_0\right\}.
\]
\end{definition}
If the set of $(x, \lambda)$ forms a curve in $X \times \bbR$, it is called a \emph{bifurcation curve}.
\begin{definition}[Bifurcation curve]
Let $(x_0, \lambda_0)$ be a bifurcation point of $F$.
If there exist $\varepsilon >0$ and a continuous map $S: (-\varepsilon, \varepsilon) \to X \times \bbR$ such that $S(0) = (x_0, \lambda_0)$ and for all $t \in (-\varepsilon, \varepsilon)$:
\[
F(S(t)) = 0,
\]
then $(x(t), \lambda(t)) = S(t)$ is called a bifurcation curve.
\end{definition}
Bifurcation theory often relies on the finite-dimensional inverse function theorem and thus imposes additional assumptions on the operator $F$. Let $F$ take the form
\begin{equation}
\label{eq:k+g}
F(x, \lambda) =x - \lambda Kx + G(x, \lambda),
\end{equation}
for some linear operator $K$ and a non-linear part $G$ with bounded growth. The exact conditions will be made explicit later. To be able to reduce the problem to a finite-dimensional setting, we will require the operator $I - \lambda K$ determined by the linear part to be Fredholm.
\begin{definition}
Let $X, Y$ be Banach spaces and let $F: X \to Y$ be a bounded linear operator. $F$ is a \emph{Fredholm} operator of index $k$ if $\dim \ker F = d_1 <\infty$, $\dim (Y/\operatorname{im}(F)) = d_2 <\infty$ and $k = d_1 - d_2$, where $\operatorname{im}(F) =\{F(x): x\in X\}$.
\end{definition}
We will make use of the following properties of Fredholm operators.
\begin{lemma}[Lemma 4.3.3 \cite{davies2007linear}]
\label{lemma:Fredholm-compact}
Let $X$ be a Banach space and let $A: X\to X$ be a compact operator. Then for any $\lambda \neq 0$ the operator $\lambda I - A$ is Fredholm.
\end{lemma}
As is clear from the above lemma, the operator $\frac1\lambda(\lambda I - A) = I - \frac1\lambda A$ is also a Fredholm operator of the same index.
\begin{lemma}[Theorem 4.3.11 \cite{davies2007linear}]
\label{lemma:fredholm-continuity}
If $A: X\to Y$ is a Fredholm operator, then there exists $\varepsilon > 0$ such that every bounded operator $B: X\to Y$ with $\|A- B\| < \varepsilon$ is also Fredholm with $\operatorname{index}(B) = \operatorname{index}(A)$.
\end{lemma}
\begin{corollary}
\label{cor:fredholm-family}
Let $(A_t)_{t\in [0,1]}$ be a norm-continuous family of Fredholm operators $A_t: X\to Y$. If $A_0$ is a Fredholm operator of index $k$, then $\operatorname{index}(A_t) = k$ for all $t\in [0,1]$.
\end{corollary}
\begin{proof}
Consider the map $I: [0,1] \to \bbZ$ defined as $I(t):= \operatorname{index}(A_t)$. If there exists $t$ such that $I(t) \neq k$, then there exists at least one point of discontinuity of $I$. Take such a discontinuity point $t_0$. By Lemma \ref{lemma:fredholm-continuity} there exists $\varepsilon >0$ such that the index is constant on the ball $B(A_{t_0}, \varepsilon)$ in the operator norm. As the family $A_t$ is norm-continuous, there exists $\delta > 0$ such that for all $t \in (t_0 -\delta, t_0+\delta)$ we have $\|A_t - A_{t_0}\| < \varepsilon$, so the map $I$ is constant on $(t_0 -\delta, t_0+\delta)$, a contradiction.
\end{proof}
We are now ready to formulate a sufficient condition for the existence of a bifurcation curve around $(x_0, \lambda_0)$, which we apply afterwards.
\begin{theorem}[Existence of a bifurcation curve {\cite[Theorem 28.3]{deimling2013nonlinear}}]
\label{th:bifurcation-curve}
Let $X$ be a real Banach space and let $F: U \times V \to X $ be an operator of the form \eqref{eq:k+g}, where $U \times V \subseteq X\times \bbR$ is a neighborhood of $(0, \lambda_0)$. Let $K$ be a bounded linear operator and let $G: U \times V \to X$ be such that $G_x, G_\lambda, G_{x,\lambda}$ are continuous on $U \times V$. In addition suppose that
\begin{itemize}
\item $I - \lambda_0 K$ is a Fredholm operator of index zero and $1/\lambda_0$ is a simple characteristic value of $K$ with the corresponding eigenvector $v_0$.
\item $G(x, \lambda) =o(\|x\|)$ as $x\to 0$, uniformly in $\lambda$ on $U \times V$.
\end{itemize}
Then $(0, \lambda_0)$ is a bifurcation point of $F(x, \lambda)$. In addition, there exist $\delta > 0$ and a bifurcation curve $S: (-\delta, \delta) \to X \times \bbR$, $S(t) = (x(t), \lambda(t))$, such that $x(t), \lambda(t)$ take the form
\[
x(t) = tv_0 + tz(t), \qquad \lambda(t) = \lambda_0 + \mu(t),
\]
where $\left<z(t), v_0\right> =0 $ for all $|t| <\delta$ and $z, \mu$ are continuous functions satisfying $z(0) =0, \ \mu(0) = 0$.
\end{theorem}
\subsection{Main result}
In the previous section we established the equivalence between the set of solutions of the stationary McKean-Vlasov equation and the set of zeros of the Gibbs map. In this section we study bifurcations of the Gibbs map on the unit sphere $\bbS^{n-1}$ around the uniform measure $\bar \rho$. First of all, note that because of the symmetry of the problem the uniform measure is a solution of $F(x, \gamma)=0$ for any $\gamma\in\bbR_+$.
\begin{lemma}
\label{lemma:barrho}
Let $W$ satisfy Assumption \ref{assum:sym-kernel}, then for any $\gamma\in\bbR_+$ the pair $(\bar\rho, \gamma)$ is a zero of the Gibbs map, i.e.:
\[
\bar\rho - \frac{1}{Z(\gamma, \bar \rho)}e^{-\gamma W*\bar \rho} = 0.
\]
\end{lemma}
\begin{proof}
Note that by definition for any $x, y \in \bbS^{n-1}$ it holds that $\bar \rho(x) = \bar \rho(y)$, implying that for any interaction kernel of the form $W(x, y) = W(\left<x, y\right>)$ the following equality is satisfied: $e^{-\gamma (W*\bar \rho)(x)} = e^{-\gamma (W*\bar \rho)(y)}$. Finally, note that for any $x_0 \in \bbS^{n-1}$
\[
\int_{\bbS^{n-1}}\frac{1}{Z(\gamma, \bar \rho)}e^{-\gamma W*\bar \rho}d\sigma = \frac{e^{-\gamma (W*\bar \rho)(x_0)}\sigma(\bbS^{n-1})}{Z(\gamma, \bar \rho)} =1 = \bar{\rho}(x_0)\sigma(\bbS^{n-1}).
\]
The statement of the lemma immediately follows.
\end{proof}
In fact, due to the symmetric structure of the sphere, all the minimizers of the free energy functional can be shown to admit certain symmetries. A similar result has been established in~\cite{BaernsteinTaylor1976} using a symmetrization argument; see also~\cite[Chapter 4]{hayman1994multivalent}. Define $\hat F: L_2(\bbS^{n-1})\times \bbR_+ \to L_2(\bbS^{n-1})$ as a shifted version of $F$:
\begin{equation}
\label{eq:hatF}
\hat F(u, \gamma) = F(\bar \rho + u, \gamma) = \bar \rho + u - \frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)},
\end{equation}
where $Z( \gamma, u) = \int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + u)}d\sigma $. Then the following trivial representation of $\hat F$ holds:
\[
\hat F(u, \gamma) = D_u \hat F(0, \gamma) [u] + \left(\hat F(u, \gamma) - D_u \hat F(0, \gamma) [u]\right),
\]
where $D_u \hat F(0, \gamma)$ is the Fréchet derivative of $\hat F$ at $(0, \gamma)$ and $\left(\hat F(u, \gamma) - D_u \hat F(0, \gamma) [u]\right)$ is the non-linear term. Note that for any bifurcation curve $\hat S(t) = (\hat x(t), \hat \gamma(t))$ of the shifted functional $\hat F$ there exists a corresponding bifurcation curve of $F$, which takes the form $S(t) = (\bar\rho + \hat x(t), \hat \gamma(t))$; thus it is sufficient to establish bifurcation points of $\hat F$. Recall that for a rotationally symmetric interaction kernel, according to Corollary \ref{cor:convolution-theorem}, for any $u\in L_2(\bbS^{n-1})$ it holds that
\[
\left<W*u, Y_{k, p}\right>_{L_2(\bbS^{n-1})} = \omega_n \hat W_k\left<u, Y_{k, p}\right>_{L_2(\bbS^{n-1})},
\]
where $(\hat W_k)_{k\in \bbN}$ is the spherical harmonics decomposition of $W$. Using this property we are able to prove the following theorem.
\begin{theorem}
\label{th:bifurcations}
Let $W \in C_b(\bbS^{n-1} \times \bbS^{n-1})$ satisfy Assumption \ref{assum:non-stable-kernel}. If $\hat W_k < 0$ is a simple (non-repeated) component of the spherical harmonics decomposition $(\hat W_k)_{k\in \bbN}$, i.e. $\hat W_l \neq \hat W_k$ for all $l\neq k$, then $(0, \gamma_k)$ with $\gamma_k =- \frac{1}{\hat W_k}$ is a bifurcation point of $\hat F$. Moreover, there exist $\delta>0$ and at least one bifurcation curve $\hat S(t) = (\hat x(t), \gamma(t))$ of the form
\[
\hat x(t) = tY_{k, 0} + tz(t), \qquad \gamma(t) = \gamma_k + \mu(t),
\]
where $z, \mu$ are continuous functions satisfying $z(0)=0, \ \mu(0)= 0$ and $\hat x(t) \in L_2^s(\bbS^{n-1})$ for all $t \in (-\delta, \delta)$, with $L^s_2(\bbS^{n-1})$ from Definition~\ref{def:L2s}.
\end{theorem}
In order to prove the above theorem we require the following technical lemmas, whose proofs are provided in Appendix \ref{appendix:bifurcations}.
\begin{lemma}
\label{lemma:G}
Let $W \in C_b(\bbS^{n-1} \times \bbS^{n-1}) $ satisfy Assumption \ref{assum:non-stable-kernel} and let $G:L_2(\bbS^{n-1}) \times \bbR_+ \to L_2(\bbS^{n-1})$ be the non-linear operator defined in \eqref{eq:G}. Then for any $\gamma\in\bbR_+$ there exists a neighborhood $U \times V \subseteq L_2(\bbS^{n-1})\times \bbR$ of $(0, \gamma)$ such that $G_u, G_\gamma, G_{u\gamma}$ are continuous on $U \times V$. Moreover, $G(w, \gamma) =o(\|w\|_{L_2})$ as $w\to 0$, uniformly in $\gamma$ on $U \times V$.
\end{lemma}
\begin{lemma}
\label{lemma:compactness}
Let $(\calM, \dist)$ be a compact smooth Riemannian manifold, let $\sigma$ be the uniform measure on $\calM$ and let $W\in C_b(\calM\times\calM, \bbR)$. Then the operators $A, B: L_2(\calM) \to L_2(\calM)$,
\[
Au:= W * u \quad \text{ and } \quad Bu:= \sigma \int_\calM W * u\,d\sigma,
\]
are compact.
\end{lemma}
We are now ready to prove the main theorem of this section.
\begin{proof}[Proof of Theorem~\ref{th:bifurcations}]
The structure of the proof is the following: we first give an explicit form of both the linear and the non-linear parts of $\hat F$. Then we show that the space of symmetric functions $L_2^s$ is invariant under $\hat F$, so that we can study the restriction of $\hat F$ to $L_2^s$. Finally, we show that the restricted operator satisfies the assumptions of Theorem \ref{th:bifurcation-curve}.
\smallskip\noindent
\textbf{Linear and non-linear parts of $\hat F$:} By the definition of the Fréchet derivative we have
\begin{align*}
\hat{F}(u+w, \gamma) - \hat{F}(u, \gamma) = D_u \hat F(u, \gamma) [w] + o(\|w\|).
\end{align*}
Expanding the left hand side we get
\begin{align*}
\MoveEqLeft \hat{F}(u+w, \gamma) - \hat{F}(u, \gamma) - w= \frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)} - \frac{1}{Z(\gamma, u + w)} e^{-\gamma W* (\bar \rho + u + w)} \\
&=\frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)}\left(1 - \frac{Z(\gamma, u)}{Z(\gamma, u + w)}e^{-\gamma W* w}\right) \\
&=\frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)}\cdot \gamma W* w + \frac{e^{-\gamma W* (\bar \rho + u)}}{Z(\gamma, u)^2} \left(Z(\gamma, u + w) - Z(\gamma, u)\right) + o(\|w\|) \\
&=\frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)}\cdot \gamma W* w - \frac{ \gamma e^{-\gamma W* (\bar \rho + u)}}{Z(\gamma, u)^2}\int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + u)} \cdot W* w \, d\sigma + o(\|w\|).
\end{align*}
Then the first Fréchet derivative $D_u \hat F$ applied to a variation $w \in L_2(\bbS^{n-1})$ satisfies:
\begin{equation}
\label{eq:DwF}
D_u \hat F(u, \gamma) [w] = w + \frac{1}{Z(\gamma, u)} e^{-\gamma W* (\bar \rho + u)} \cdot \gamma W* w - \frac{ \gamma e^{-\gamma W* (\bar \rho + u)}}{Z(\gamma, u)^2}\int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + u)} \cdot W* w \, d\sigma;
\end{equation}
in particular, for $u = 0$ we have
\[
D_u \hat F(0, \gamma) [w] = w + \gamma \bar \rho \cdot W* w - \gamma \bar\rho^2 \int_{\bbS^{n-1}} W* w \, d\sigma,
\]
and we can define the linear and non-linear parts as
\begin{align}
Kw&:= -\bar \rho \cdot W* w + \bar\rho^2 \int_{\bbS^{n-1}} W* w \, d\sigma, \label{eq:K} \\
G(w, \gamma) &:=\bar \rho - \frac{1}{Z(\gamma, w)} e^{-\gamma W* (\bar \rho + w)} - \gamma \bar \rho \cdot W* w + \gamma \bar\rho^2 \int_{\bbS^{n-1}} W* w \, d\sigma. \label{eq:G}
\end{align}
\smallskip\noindent
\textbf{Invariance of $L_2^s$:} Recall that the subset of symmetric spherical harmonics $(Y_{l, 0})_{l\in\bbN}$ is an orthonormal basis of $L_2^s(\bbS^{n-1})$. In addition, we prove that $L_2^s(\bbS^{n-1})$ is invariant under $K$. For any $w \in L_2^s(\bbS^{n-1})$ we obtain:
\begin{align*}
\MoveEqLeft \norm[\bigg]{ Kw - \sum_{l\in\bbN}\left<Kw, Y_{l,0}\right>Y_{l,0}}_{L_2} = \norm[\bigg]{Kw - \sum_{l\in\bbN}\proj_l Kw }_{L_2} = 0,
\end{align*}
where we used that for any $m \in \bbN, \ p\neq 0$:
\begin{align*}
\MoveEqLeft \left<Kw, Y_{m,p}\right> = \left<\proj_m Kw, Y_{m,p}\right> = \biggl<\proj_m K\sum_{l\in\bbN}\left<w, Y_{l,0}\right>Y_{l, 0}, Y_{m,p}\biggr> \\
&= -\int \bar \rho \cdot \proj_m \biggl( W* \sum_{l\in\bbN}\left<w, Y_{l,0}\right>Y_{l, 0} \biggr)Y_{m, p}\,d\sigma \\
&\qquad + \int_{\bbS^{n-1}}\bar\rho^2 \cdot \proj_m\biggl(\int_{\bbS^{n-1}} W* w\,d\sigma \biggr)Y_{m, p} \, d\sigma \\
&=-\int \bar \rho \cdot\proj_m \biggl(\sum_{l\in\bbN}\hat W_l \left<w, Y_{l,0}\right>Y_{l, 0}\biggr) Y_{m, p}\,d\sigma +0 \\
&= -\int \bar \rho \cdot\hat W_m \left<w, Y_{m,0}\right>Y_{m, 0} Y_{m, p}\,d\sigma = 0.
\end{align*}
\smallskip\noindent
\textbf{Properties of $K$:} We now show that $I -\gamma K$ is a Fredholm operator of index zero, as an operator on the space $L_2^s(\bbS^{n-1})$, for any $\gamma \in \bbR_+$. By Lemma \ref{lemma:compactness} the operator $K$ is compact on $L_2(\bbS^{n-1})$. Since $L_2^s(\bbS^{n-1})$ is invariant under $K$ and is a closed subspace of $L_2(\bbS^{n-1})$, the restriction $K:L_2^s(\bbS^{n-1}) \to L_2^s(\bbS^{n-1})$ is also compact. By Lemma \ref{lemma:Fredholm-compact}, $I -\gamma K$ is Fredholm. Moreover, the family $(I-\gamma K)_{\gamma \in \bbR_+}$ is norm-continuous and, as a result, $\operatorname{index}(I-\gamma K) = \operatorname{index}(I) = 0$ by Corollary \ref{cor:fredholm-family}.
We then calculate, for $Y_{0,0}$,
\[
KY_{0, 0} = -\frac{1}{\omega_n} W*Y_{0, 0} + \frac{1}{\omega_n^2} \int_{\bbS^{n-1}} W* Y_{0, 0} \, d\sigma = -\hat W_0 Y_{0,0} + \frac{1}{\omega_n} Y_{0,0}\int_{\bbS^{n-1}}\hat W_0 Y_{0,0}\,d\sigma=0,
\]
and for the other harmonics $Y_{l, 0}$, $l\in\bbN$,
\[
KY_{l, 0} = -\frac{1}{\omega_n} W*Y_{l, 0} + \frac{1}{\omega_n^2} Y_{0,0} \int_{\bbS^{n-1}} W*Y_{l, 0} \, d\sigma = -\hat W_l Y_{l,0}+ 0= -\hat W_l Y_{l,0}.
\]
So in the orthonormal basis $(Y_{l,0})_l$ of symmetric spherical harmonics, $K$ is a diagonal operator,
\[
KY_{l, 0} =
\begin{cases}
0, \qquad &l=0, \\
-\hat W_lY_{l, 0}, \qquad &l\in \bbN,
\end{cases}
\]
which under the assumption of the theorem implies that $-1/\hat W_k$ is a simple characteristic value of $K$ with corresponding eigenvector $Y_{k, 0}$. Thus we can apply Theorem \ref{th:bifurcation-curve}, where the necessary regularity of the nonlinear term $G$ follows from Lemma \ref{lemma:G}.
\end{proof}
\section{Sphere: Phase transitions}
As shown in Lemma \ref{lemma:barrho}, the uniform measure is always a critical point of the free energy functional. Moreover, since the sphere has positive curvature, if $\gamma$ is small enough then $\calF_\gamma$ is strictly geodesically convex and thus the uniform measure is the unique global minimizer. As shown in Section~\ref{sec:bifurcations}, if the kernel $W$ is unstable, the set of stationary points of $\calF_\gamma$ behaves non-trivially for larger values of $\gamma$, so $\bar \rho$ is not guaranteed to be the global minimizer anymore. The value of $\gamma$ at which the uniform measure stops being the global minimizer of the free energy functional is called the \emph{transition point}. In this section we give a necessary and sufficient condition for the existence of a transition point $\gamma_c$ and characterize the minimizers of $\calF_\gamma$ around it.
\subsection{Background}
\begin{definition}[Transition point]\label{def:transition-point}
A parameter value $\gamma_c \in\bbR_+$ is a \emph{transition point} of the free energy functional \eqref{eq:free-energy} if the following holds:
\begin{enumerate}
\item for all $\gamma \in (0,\gamma_c)$ the uniform measure $\bar\rho$ is the unique (global) minimizer of $\calF_\gamma$,
\item for $\gamma =\gamma_c$ the uniform measure is a minimizer of $\calF_\gamma$,
\item for any $\gamma>\gamma_c$ there exists at least one probability measure $\rho_\gamma \in \calP_{ac}^+(\bbS^{n-1})$, different from $\bar\rho$, minimizing $\calF_\gamma$.
\end{enumerate}
\end{definition}
To give a characterization of the phase transition in the spherical case, we introduce a result analogous to the analysis of phase transitions of the McKean-Vlasov equation on a torus from \cite{ChayesPanferov2010}. Since the proof does not rely on the structure of the domain, all the arguments from \cite{ChayesPanferov2010} directly translate to the case of a (high-dimensional) sphere.
\begin{lemma}
\label{lemma:properties}
Let $\calF_\gamma$ be the free energy functional~\eqref{eq:free-energy} with fixed $\gamma>0$ and let $W \in \mathcal{W}_s^c$ be an unstable interaction potential (see Definition~\ref{def:stable-kernel}). Suppose that for some $\gamma_T <\infty$ there exists a minimizer $\rho_T \in \calP_{ac}(\bbS^{n-1})$, not equal to the uniform measure $\bar\rho$, such that
\[
\calF_{\gamma_T}(\rho_T) \leq \calF_{\gamma_T}(\bar \rho).
\]
Then, for all $\gamma > \gamma_T$ the uniform measure no longer minimises the free energy.
\end{lemma}
The existence of a phase transition point depends on the properties of the interaction kernel~$W$.
Namely, instability of the interaction kernel is a necessary and sufficient condition for a phase transition. This result dates back to \cite{GatesPenrose1970}, where the analogue of the following proposition is proved in the Euclidean setting; for completeness we provide a short proof below.
\begin{proposition}[Existence of a phase transition point]
\label{prop:existence-pt}
Consider the free energy functional \eqref{eq:free-energy} with a rotationally symmetric interaction kernel~$W$. The system exhibits a phase transition if and only if $W\in\calW_s^c$.
\end{proposition}
\begin{proof}
Necessity follows immediately from Remark \ref{remark:stable-kernels}, since for $W\in\calW_s$ the uniform state is the unique minimizer of the free energy functional for every $\gamma$, so no phase transition can occur. We now prove sufficiency. Since $W\in\calW_s^c$, there exists $k\in \bbN$ such that the corresponding coefficient of the spherical harmonics decomposition satisfies $\hat W_k <0$. Choose such a $k$ and consider the competitor $\rho_k = \bar\rho + \varepsilon Y_{k, 0}$ for some $\varepsilon >0$ small enough to ensure positivity of the density, $\rho_k > 0$. Calculating the interaction energy of $\rho_k$ we obtain
\[
\calI(\rho_k) = \frac12\int W(x, y)(\bar\rho + \varepsilon Y_{k, 0}(x)) (\bar\rho + \varepsilon Y_{k, 0}(y))d\sigma(x)d\sigma(y) = \calI(\bar\rho) + \frac{\varepsilon^2}{2\bar\rho}\hat W_k < \calI(\bar\rho).
\]
Since the spherical harmonics are continuous functions, $\rho_k$ is continuous and thus bounded above by some $\varrho \in \bbR_+$. As a result we can estimate the entropy of $\rho_k$ as
\[
\calE(\rho_k) \leq \int \rho_k\log \rho_k d\sigma \leq \int \rho_k \log \varrho \, d\sigma= \log \varrho.
\]
Combining the estimates, we conclude that the difference $\calF_\gamma(\rho_k) -\calF_\gamma(\bar\rho)$ has the following upper bound:
\[
\calF_\gamma(\rho_k) -\calF_\gamma(\bar\rho) = \gamma^{-1}(\calE(\rho_k)- \calE(\bar \rho)) +\calI(\rho_k)- \calI(\bar \rho) \leq \frac{\varepsilon^2}{2\bar\rho}\hat W_k + \gamma^{-1}(\log \varrho - \log \bar \rho).
\]
Choosing $\tilde\gamma > -\frac{2\bar\rho(\log \varrho - \log \bar \rho)}{\varepsilon^2\hat W_k}$ we get $\calF_{\tilde\gamma}(\rho_k) -\calF_{\tilde\gamma}(\bar\rho) <0$, and thus $\bar \rho$ is not a minimizer of the free energy functional $\calF_{\tilde\gamma}$.
We now show that there exists a parameter value $\gamma_c $ satisfying Definition \ref{def:transition-point}. Since $\bar\rho$ is always a critical point of $\calF_\gamma$, from the convexity properties of $\calF_\gamma$ we know that for small enough $\gamma$ the uniform state $\bar \rho$ is the unique minimizer of the free energy. Since for any $\gamma\in\bbR_+$ there exists a minimizer of $\calF_\gamma$, by Lemma \ref{lemma:properties} we conclude that there exists a point $\gamma_c \leq \tilde\gamma$ such that for every $\gamma > \gamma_c$ there exists a minimizer of the free energy functional $\rho_\gamma \neq \bar\rho$. Let
\[\gamma^*_c = \sup\{\gamma: \bar\rho \text{ is a minimizer of } \calF_\gamma\};
\]
then for $\gamma < \gamma^*_c$ the uniform state is the unique minimizer of the free energy. We will show that $\gamma^*_c$ is a phase transition point. If $\bar\rho$ is a minimizer of $\calF_{\gamma^*_c}$, then by Theorem \ref{th:minimizers} for every $\gamma > \gamma^*_c$ there exists a minimizer of the free energy different from $\bar\rho$, so $\gamma^*_c$ is a transition point. We thus only need to show that $\bar\rho$ is a minimizer of $\calF_{\gamma^*_c}$.
Assume that $\bar\rho$ is not a minimizer of $\calF_{\gamma^*_c}$. Then by Theorem \ref{th:minimizers} there exists $\rho_{\gamma^*_c} \in L_2(\bbS^{n-1})$, $\rho_{\gamma^*_c}\neq \bar\rho$, minimizing $\calF_{\gamma^*_c}$. For a fixed $\rho \in L_2(\bbS^{n-1})$ the map $\varGamma_\rho: \bbR_+ \to \bbR$,
\[
\varGamma_\rho(\gamma ) = \calF_\gamma(\rho ),
\]
is continuous, so the map
\[
f(\gamma) :=\varGamma_{\bar\rho}(\gamma) - \varGamma_{\rho_{\gamma^*_c}}(\gamma)
\]
is also continuous on $\bbR_+$. Since for $\gamma < \gamma^*_c$ the uniform state is the unique minimizer, we have $f(\gamma) <0$ on $(0, \gamma^*_c)$. At the same time, by our assumption $\calF_{\gamma^*_c}(\bar\rho) > \calF_{\gamma^*_c}(\rho_{\gamma^*_c})$, so $f(\gamma^*_c) > 0$, which contradicts continuity. Thus $\bar\rho$ is a minimizer of $\calF_{\gamma^*_c}$, and therefore $\gamma^*_c$ is a point of phase transition of the corresponding free energy functional.
\end{proof}
Phase transition points are called \emph{continuous} or \emph{discontinuous} depending on the structure of the set of minimizers around $\gamma_c$.
\begin{definition}[Continuous and discontinuous transition points]
A transition point $\gamma_c$ is called a \emph{continuous} transition point of the free energy functional \eqref{eq:free-energy} if $\bar\rho$ is the unique minimizer for $\gamma = \gamma_c$ and for any family of minimizers $\{\rho_\gamma \in \calP_{ac}(\bbS^{n-1})\mid \gamma > \gamma_c\}$ it holds that
\[
\lim_{\gamma\downarrow \gamma_c}\|\rho_\gamma - \bar\rho\|_{L_1} = 0.
\]
A \emph{discontinuous} transition point is any transition point violating at least one of the above conditions.
\end{definition}
\begin{remark}
Our definition of a continuous transition point is equivalent to the definition used in \cite{ChayesPanferov2010} and \cite{carrillo2020long}: since the norm is non-negative, the $\limsup$ used there coincides with the limit.
\end{remark}
The following lemma relates continuous transition points to the point of linear stability, namely the parameter value $\gamma_\#$ at which the linearization of \eqref{eq:mckean-vlasov} at $\rho=\bar\rho$ becomes unstable (admits a positive eigenvalue). We give a formal definition as well as the necessary linear stability results in the following subsection.
\begin{lemma}[\cite{ChayesPanferov2010}]
\label{lemma:tp_lsp} Let $\calF_\gamma$ be the free energy functional with fixed $\gamma$ and interaction potential $W \in C_b$. Assume that there exists at least one $k\in \bbN_0$ such that $\hat W_k <0$ and assume that $\calF_\gamma$ exhibits a continuous transition point at some $\gamma_c < \infty$. Then the phase transition point coincides with the point of linear stability, $\gamma_c = \gamma_\#$.
\end{lemma}
\subsection{Linear stability analysis}
We define the linear operator $\bar \calL: L_2(\bbS^{n-1}) \to L_2(\bbS^{n-1})$ as the linearization of the non-linear PDE \eqref{eq:mckean-vlasov} at $\rho = \bar\rho$, namely
\[
\bar \calL w := \Delta w +\gamma \bar\rho \Delta (W * w),
\]
for every $w\in L_2(\bbS^{n-1}) $. Using the properties of the spherical convolution, we conclude that
\[
\bar \calL Y_{l, K} = -l(n+l-2)(1 + \gamma\hat W_l)Y_{l, K}.
\]
In other words, the spherical harmonics of order $l$ span the eigenspace of $\bar \calL$ corresponding to the eigenvalue
\[
\lambda_l = -l(n+l-2)(1 + \gamma\hat W_l).
\]
If there exists at least one negative element $\hat W_k <0$ of the spherical harmonics decomposition of $W$, there also exists a parameter value $\gamma_k = -\frac{1}{\hat W_k}$ at which the corresponding eigenvalue $\lambda_k$ changes sign. The minimum of such values,
\begin{equation}\label{eq:def:gamma_sharp}
\gamma_\# = -\frac{1}{\min_{k\in \bbN}\hat W_k},
\end{equation}
is called the \emph{point of linear stability}.
\begin{remark}[Uniform state is an unstable solution for $\gamma > \gamma_\#$]
Mirroring the stability analysis of the linearized PDE, we remark that for $\gamma > \gamma_\#$ the uniform measure $\bar\rho$ is an unstable saddle point of the free energy functional $\calF_\gamma$ (namely, $D^2_\rho \calF_\gamma(\bar\rho)$ is not a positive semidefinite bilinear form). First recall that $\bar \rho$ is always a critical point by the symmetry argument. To show the latter claim, let $l \in \argmin_{k\in \bbN}\hat W_k$ and consider the function $f(t) = \calF_\gamma(\bar\rho + tY_{l, 0})$. Differentiating $f(t)$ twice we obtain
\[
\frac{d^2}{dt^2}f(t)\Big|_{t=0} = D^2_{\rho} \calF_\gamma(\bar\rho)[Y_{l, 0}, Y_{l, 0}] =\frac{1}{\bar\rho}(1+ \gamma\hat W_l) <0,
\]
so $\bar\rho$ is an unstable critical point for any $\gamma > \gamma_\#$.
\end{remark}
\subsection{Main result}
\label{ssec:transition}
In this section we present a sufficient condition for the existence of a discontinuous phase transition point of the McKean-Vlasov equation on a high-dimensional sphere. Let $\hat W_k$ be the minimal component of the spherical harmonics decomposition of $W$, which determines the point of linear stability discussed in the previous section. We introduce a criterion which relies on the properties of the eigenfunctions corresponding to $\hat W_k$, and we show that this criterion is trivially satisfied if $k = 2$ or $k=4$. In this section we require the interaction potential to satisfy the (relaxed) resonance condition, Assumption~\ref{assum:pt-almost} below. To draw a parallel with the analysis of phase transitions on a torus \cite{carrillo2020long} we call it a \emph{resonance condition}, even though on a sphere it may be satisfied even if the corresponding eigenspace is one-dimensional.
\begin{assumption}[Relaxed resonance condition]
\label{assum:pt-almost}
Let $W$ be an interaction potential satisfying Assumption \ref{assum:non-stable-kernel} and denote the set of $\delta$-resonating modes by
\[
K_{\sharp,\delta} = \set*{k \in \bbN: \hat W_k \leq -\frac{1-\delta}{\gamma_{\#}} }.
\]
The kernel $W$ satisfies the relaxed resonance condition with bandwidth $\delta$ if the following holds:
\begin{itemize}
\item there exists a linear combination of spherical harmonics $u = \sum_{l\in K_{\sharp, \delta}} c_l Y_{l, 0}$ such that
\begin{equation}\label{eq:delta-resonant-U3}
\int_{\bbS^{n-1}} u^3d\sigma = U_3 \neq 0, \quad \text{and} \quad |u|_\infty \leq 1 \,;
\end{equation}
\item the bandwidth is small enough, namely $ \delta < \min\Bigl(\frac14, \frac{U_3^2}{49\sigma(\bbS^{n-1})^2}\Bigr)$ for some $u$ satisfying~\eqref{eq:delta-resonant-U3}.
\end{itemize}
\end{assumption}
For the convenience of the reader we also provide a hierarchy of stricter resonance conditions. We start by introducing the (strict) resonance condition, namely the special case of Assumption \ref{assum:pt-almost} with $\delta= 0$.
\begin{assumption}[Resonance condition]
\label{assum:pt-general}
Let $W$ be an interaction potential satisfying Assumption \ref{assum:non-stable-kernel} and denote $K_\sharp = \{k \in \bbN: \hat W_k = -\gamma_{\#}^{-1} \}$ (note~\eqref{eq:def:gamma_sharp}).
We say that $W$ satisfies the resonance condition if there exists a finite linear combination $u = \sum_{l\in K_{\sharp}} c_l Y_{l, 0}$ such that
\[
\int_{\bbS^{n-1}} u^3d\sigma = U_3 \neq 0,
\]
where $Y_{l, 0}$ are the corresponding spherical harmonics.
\end{assumption}
Note that the supremum norm of the competitor $u$ is uniformly bounded for any finite linear combination after a proper rescaling, which is not necessarily true for an arbitrary $u$ due to the lack of a uniform (in $x$ and $k$) bound on the absolute value of the basis functions $Y_{k, 0}(x)$. That is why in Assumption \ref{assum:pt-general} we restrict the set of possible competitors to finite linear combinations. In particular, it may be enough to have a single eigenfunction corresponding to the minimal eigenvalue. In this case Assumption~\ref{assum:pt-general} reduces to Assumption \ref{assum:pt}.
\begin{assumption}[Single component resonance]
\label{assum:pt}
Let $W$ be an interaction potential satisfying Assumption \ref{assum:non-stable-kernel} and let $k_{\min} = \argmin_{k\in \bbN}\hat W_k$. We say that $W$ satisfies the resonance condition if
\[
\int_{\bbS^{n-1}} Y_{k_{\min}, 0}^3d\sigma \neq 0,
\]
where $Y_{k_{\min}, 0}$ is the corresponding spherical harmonic.
\end{assumption}
Assumption~\ref{assum:pt} can only be satisfied when $k_{\min}$ is even. Indeed, from the recurrence relation we obtain that for odd $l$ the harmonic $Y_{l,0}$ is an odd function, implying that $Y_{l,0}^3$ is also odd and thus integrates to zero. A calculation for $l=2$ gives
\begin{align*}
Y_{2, 0}(\theta) &= A_0^2 C_{2}^{\frac{n-2}{2}}\left(\cos \theta_{n-1}\right), \\
\int Y_{2, 0}^3d\sigma &= (A_0^2)^3\int_{-1}^1\left( C_{2}^{\frac{n-2}{2}}(t)\right)^3(1-t^2)^{\frac{n-3}{2}}dt = \frac{4(A_0^2)^3(n-2)^3\sqrt{\pi}\Gamma(\frac{n+1}{2})}{(n+2)(n+4)\Gamma(\frac{n}{2}-1)} \neq 0,
\end{align*}
and similarly for $l=4$
\begin{align*}
\int Y_{4, 0}^3d\sigma &= \frac{(A_0^4)^3(n-2)^3n^4(n^2-4)\sqrt{\pi}\Gamma(\frac{n+5}{2})}{64\Gamma(\frac{n}{2}+6)},
\end{align*}
where $\Gamma$ is the Gamma function and $A_K^l$ are the normalizing coefficients as in \eqref{eq:harmonics}. So in both cases Assumption \ref{assum:pt} is satisfied for a sphere of arbitrary dimension $n \geq 3$. We conjecture that it is satisfied for all even values of $l$.
\begin{theorem}
\label{th:pt}
Let the interaction potential $W \in C_b$ satisfy Assumption \ref{assum:pt-almost}. Then there exists a discontinuous transition point $\gamma_c \in (0, \gamma_\#)$ of \eqref{eq:mckean-vlasov}.
\end{theorem}
\begin{proof}
By Proposition \ref{prop:existence-pt} the system exhibits a phase transition. We aim to show that the phase transition does not occur at the point of linear stability, i.e. $\gamma_c \neq \gamma_\#$; then by Lemma \ref{lemma:tp_lsp} it must be a discontinuous transition point. By Lemma \ref{lemma:competitor} below, $\bar\rho$ is not a minimizer of $\calF_{\gamma_\#}$, so the phase transition occurs at some $\gamma_c < \gamma_\#$ and is discontinuous.
\end{proof}
\begin{lemma}
\label{lemma:competitor}
Let the interaction potential $W \in C_b$ satisfy Assumption \ref{assum:pt-almost} for some~$u$ and $\delta\in[0,1)$. Then there exists a state $\rho_\# \in \calP_{ac}(\bbS^{n-1})$ with lower free energy than the uniform state $\bar\rho$ at the point of linear stability: $\calF_{\gamma_\#}(\rho_\#) < \calF_{\gamma_\#}(\bar\rho)$.
\end{lemma}
\begin{proof}
Take $u$ as in Assumption \ref{assum:pt-almost} and denote
\[
\xi = \sign\bra[\bigg]{\int u^3d\sigma};
\]
we show that there exists $\varepsilon >0$ such that the statement of the lemma holds for the competitor $ \rho_\#$ of the form
\[
\rho_\# = \bar\rho(1 + \varepsilon \xi u).
\]
Consider the Taylor expansions of $\calE$ and $\calI$ around the uniform state:
\begin{align*}
\calE(\rho_\#) &= \calE(\bar\rho) +\frac12 \varepsilon^2\|u\|^2 - \frac{1}{6}\xi\bar\rho\varepsilon^3\int u^3d\sigma + \frac{1}{12}\varepsilon^4\bar\rho \int\frac{u^4}{(1+\varepsilon\xi \tilde u)^3}d\sigma,
\end{align*}
for some $\tilde u$ satisfying $ |\tilde u(x)| \in [0, |u(x)|]$ for all $x\in \bbS^{n-1}$, and
\begin{align*}
\calI(\rho_\#) &= \calI(\bar\rho) + \frac12\varepsilon^2\bar\rho^2\int_{\bbS^{n-1}\times \bbS^{n-1}} \sum_{l\in K_{\sharp,\delta}} c_l^2 W(x,y)Y_{l, 0}(x)Y_{l, 0}(y)d\sigma(x)d\sigma(y) \\
&= \calI(\bar\rho) + \frac12\varepsilon^2 \bar\rho \sum_{l\in K_{\sharp,\delta}}\hat W_l c_l^2 \|Y_{l,0}\|^2 \leq \calI(\bar\rho) - \frac{1}{2\gamma_\#}\varepsilon^2 \bar\rho \|u\|^2 + \frac{1}{2\gamma_\#}\varepsilon^2 \bar\rho \delta\|u\|^2.
\end{align*}
Using the above expansions and the definition of $\gamma_\#$ in~\eqref{eq:def:gamma_sharp}, we obtain a cancellation of the second order terms and arrive at the following estimate for the free energy difference:
\begin{align*}
\calF_{\gamma_\#}(\rho_\#) - \calF_{\gamma_\#}(\bar\rho) &\leq - \frac{1}{6\gamma_{\#}}\xi\bar\rho\varepsilon^3\int u^3d\sigma + \frac{1}{12\gamma_{\#}}\varepsilon^4\bar\rho \int\frac{u^4}{(1+\varepsilon\xi \tilde u)^3}d\sigma + \frac{1}{2\gamma_{\#}}\varepsilon^2 \bar\rho \delta\|u\|^2.
\end{align*}
If $\delta = 0$, we obtain
\[
\calF_{\gamma_\#}(\rho_\#) - \calF_{\gamma_\#}(\bar\rho) \leq - \frac{1}{6\gamma_{\#}}\bar\rho\varepsilon^3\left(|U_3| - \frac{1}{2}\varepsilon \int\frac{u^4}{(1+\varepsilon\xi \tilde u)^3}d\sigma\right).
\]
Recall that $|u|_\infty \leq 1$; taking $\varepsilon < 1/2$ allows us to bound the integral above as
\[
\frac{1}{2} \int\frac{u^4}{(1+\varepsilon\xi \tilde u)^3}d\sigma \leq 4\sigma(\bbS^{n-1}).
\]
Choosing $\varepsilon < \min\left(\frac{1}{2}, \frac{|U_3|}{4\sigma(\bbS^{n-1})}\right)$ we obtain $ \calF_{\gamma_\#}(\rho_\#) - \calF_{\gamma_\#}(\bar\rho) < 0$, and thus the statement of the lemma is satisfied. In the case $\delta >0$, by choosing $\varepsilon = \sqrt{\delta}$ and using $|u|_\infty \leq 1$, we similarly conclude that the energy difference is negative,
\[
\calF_{\gamma_\#}(\rho_\#) - \calF_{\gamma_\#}(\bar\rho) \leq - \frac{1}{6\gamma_{\#}}\bar\rho\delta^{3/2}\left(|U_3| - 4\delta^{1/2} \sigma(\bbS^{n-1}) - 3\delta^{1/2}\sigma(\bbS^{n-1})\right) < 0,
\]
where the last inequality holds thanks to the bandwidth condition $\delta < \min\bigl(\frac14, \frac{U_3^2}{49\sigma(\bbS^{n-1})^2}\bigr)$ in Assumption~\ref{assum:pt-almost}. Hence, the statement of the lemma holds for the corresponding $u$.
\end{proof}
\begin{remark}
\label{remark:near-resonance}
As studied in \cite{carrillo2020long}, the existence of a discontinuous phase transition of the McKean-Vlasov equation on a torus can be proven under a resonance condition between at least three different components of the Fourier decomposition of $W$. In contrast, on a sphere, due to the structure of the basis functions of $L_2^s$, a single negative mode $\hat W_k$ may be enough for a discontinuous phase transition.
\end{remark}
\begin{remark}
We also remark that in the spherical setting we restrict the set of possible competitors to finite linear combinations because the basis functions are not guaranteed to be uniformly bounded.
As a result, the residual term in the Taylor expansion can only be made arbitrarily small for finite linear combinations.
\end{remark}
\begin{remark}[Continuous phase transition]
As follows from Lemma~\ref{lemma:tp_lsp}, to show continuity of the phase transition point it is enough to prove that the phase transition happens at the point of linear stability, $\gamma_c = \gamma_\#$. In \cite{carrillo2020long}, the authors use a criterion ensuring the uniform growth of the entropy under all perturbations corresponding to the negative components of the kernel decomposition. The difficulty in generalizing this approach to the spherical case lies in the fact that spherical harmonics are not uniformly bounded.
\end{remark}
\section{Examples}
\label{sec:examples}
We apply our results to several interaction kernels, including the exponential kernel (transformer model), the Onsager model of liquid crystals and an adaptation of the Hegselmann-Krause model to a high-dimensional sphere. This section heavily relies on the properties of the Gegenbauer polynomials, for which we refer the reader to \cite{abramowitz1968handbook}, and on their integrals, which can be found in~\cite{gradshteyn2014table}.
\subsection{Noisy Transformers}
The transformer is a neural network architecture which revolutionized the field of natural language processing; it was first introduced by Vaswani et al.\ \cite{vaswani2017attention}. In \cite{geshkovski2024mathematical} the authors propose to use an interacting particle perspective to study the behavior of transformers. In particular, it is shown that the underlying inference dynamics of the transformer model is related to the gradient flow, in the space of probability measures on a high-dimensional sphere $\calP(\bbS^{n-1})$, of the interaction energy with the exponential kernel $W_\beta$:
\begin{equation}
\label{eq:kernel-transformer}
W_\beta(x, y) := -\frac{1}{\beta}e^{\beta\left<x, y\right>}.
\end{equation}
In this example we consider a system of identical interacting particles on a high-dimensional sphere in the presence of noise,
\[
d X_i = -\nabla_{X_i}\Bigl(\frac{1}{N}\sum_{j=1}^N W_\beta(X_i, X_j)\Bigr)dt + \sqrt{2\gamma^{-1}}dB_i,
\]
where $\{B_i\}_{i\leq N}$ are independent Brownian motions on the sphere. This system can be seen as a proxy of the transformer model with a perturbation of order $\gamma^{-1}$. We call the corresponding McKean-Vlasov equation the \emph{noisy transformer model} and study the structure of the set of invariant measures of this system for various values of the parameters $\beta, \gamma \in \bbR_+$. Clearly $W_\beta$ is bounded and rotationally symmetric, so the results of Section \ref{sec:general} apply. In particular, note that $|\partial W_\beta|_\infty = e^\beta$ and $|\partial^2 W_\beta|_\infty = \beta e^\beta$, so using Corollary \ref{cor:convexity-sph} we conclude that for the inverse noise ratio $\gamma < \gamma_o = \frac{(n-2)}{4 \max(\beta e^{\beta}, e^\beta)}$ the uniform state is the unique invariant measure of the noisy transformer model. Since it is unique, the uniform state is the long-term limit of the solution for arbitrary admissible initial conditions. For the transformer model this implies that for noise with amplitude larger than $\gamma_o^{-1}$ the model does not preserve any information about the initial conditions, which is strongly undesirable from the application perspective.
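The threshold $\gamma_o$ is straightforward to evaluate numerically; the following Python sketch (an illustration only, assuming \texttt{numpy}; the function name \texttt{gamma\_o} is ours) evaluates the bound of Corollary~\ref{cor:convexity-sph} for the exponential kernel \eqref{eq:kernel-transformer}.
\begin{verbatim}
import numpy as np

def gamma_o(n, beta):
    # C bounds |W'| and |W''| on [-1,1]; for W_beta(t) = -exp(beta t)/beta
    # the suprema are e^beta and beta*e^beta respectively.
    t = np.linspace(-1.0, 1.0, 10_001)
    C = max(np.abs(-np.exp(beta * t)).max(),          # |W'|
            np.abs(-beta * np.exp(beta * t)).max())   # |W''|
    return (n - 2) / (4 * C)

# below these values the uniform state is the unique invariant measure
print(gamma_o(3, 1.0), gamma_o(3, 2.0))
\end{verbatim}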
We are also able to recover the bifurcation branches at every level of the spherical harmonics, namely:
\begin{proposition}[Noisy transformer on $\bbS^{n-1}$]
Consider the McKean-Vlasov equation \eqref{eq:mckean-vlasov} with the exponential interaction kernel defined in \eqref{eq:kernel-transformer}. Then for every $\beta\in \bbR_+$ and every $k \in \bbN_+$ there exists a bifurcation branch around $(\bar\rho, \gamma_{k})$, where
\[
\gamma_{k} = \frac{\sqrt{\beta^n}}{2^{\frac{n-2}{2}}\Gamma(\frac{n}{2})I_{k+\frac{n-2}{2}}(\beta)},
\]
and $I_{\alpha}$ is the modified Bessel function of the first kind.
\end{proposition}
\begin{proof}
The Rodrigues formula for Gegenbauer polynomials (see~\cite[p. 303]{Andrews_Askey_Roy_1999} or~\cite[p.~143f]{chihara2011introduction}) provides the representation
\begin{equation}\label{eq:Rodrigues}
C_k^\alpha(t) = \frac{(-1)^k\Gamma(\alpha+\frac12)\Gamma(k+2\alpha)}{2^k k!\Gamma(2\alpha)\Gamma(\alpha+k+\frac12)}(1-t^2)^{-\alpha+\frac12}\frac{d^k}{dt^k}\left[(1-t^2)^{k+\alpha-\frac12}\right].
\end{equation}
Using the equality $C_k^\alpha(1) = \frac{\Gamma(k+2\alpha)}{\Gamma(2\alpha)\Gamma(k+1)}$ and the Rodrigues formula~\eqref{eq:Rodrigues}, after integrating by parts $k$ times we obtain the following expression for the spherical harmonics decomposition of $W_\beta$:
\begin{align*}
\hat W_{\beta,k} &= -\frac{c_{\frac{n-2}{2}}}{\beta C_k^{\frac{n-2}{2}}(1)}\int_{-1}^1e^{\beta t}C^{\frac{n-2}{2}}_k(t)(1-t^2)^{\frac{n-3}{2}}dt \\
&= \frac{(-1)^{k+1}\Gamma(\frac{n}{2})}{2^k\beta\sqrt{\pi}\Gamma(k+\frac{n-1}{2})}\int_{-1}^1 e^{\beta t}\frac{d^k}{dt^k}\left[(1-t^2)^{k+\frac{n-3}{2}}\right] dt \\
&= \frac{(-1)^{2k+1}\Gamma(\frac{n}{2})\beta^{k-1}}{2^k\sqrt{\pi}\Gamma(k+\frac{n-1}{2})}\int_{-1}^1 e^{\beta t}(1-t^2)^{k+\frac{n-3}{2}} dt = -2^{\frac{n-2}{2}}\beta^{-\frac{n}{2}}\Gamma\left(\frac{n}{2}\right)I_{k+\frac{n-2}{2}}(\beta).
\end{align*}
Modified Bessel functions $I_\alpha(x)$ are decreasing in $\alpha$ for any fixed $x \in \bbR_+$, so all the components of the spherical harmonics decomposition of the exponential kernel are distinct. Applying Theorem \ref{th:bifurcations} we obtain the result.
\end{proof}
\begin{remark}[Phase transition]
Since the modified Bessel functions $I_\alpha(x)$ are decreasing in $\alpha$ for any fixed $x \in \bbR_+$, for all admissible $\beta$ the smallest component of the spherical harmonics decomposition of the interaction kernel $W_\beta$ corresponds to $k_{\min} = 1$. At the same time, the corresponding harmonic $Y_{1, 0}$ is odd and thus its cube integrates to zero, so the resonance Assumption \ref{assum:pt} is not satisfied and a finer analysis is required to establish the type of the phase transition of the noisy transformer model.
\end{remark}
\begin{remark}
As studied in \cite{bruno2024emergence, geshkovski2024dynamic}, in the noiseless case $\gamma = \infty$ the transformer model exhibits metastable behavior. We expect the noisy model to have similar dynamics for large $\gamma$. Since the number of bifurcation points grows with the inverse temperature $\gamma$, we expect the model to have more saddles and more evident metastable behavior as $\gamma$ increases.
\end{remark}
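As a sanity check, the following Python sketch (numerical only; it assumes \texttt{numpy} and \texttt{scipy}, and the helper name \texttt{hat\_W} is ours) compares the coefficients $\hat W_{\beta,k}$ computed by quadrature from Definition~\ref{def:spherical-decomposition} with the Bessel-function expression obtained in the proof above, and evaluates the corresponding bifurcation values $\gamma_k = -1/\hat W_{\beta,k}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma, iv

def hat_W(kernel, k, n):
    """k-th spherical harmonics coefficient of a rotationally symmetric kernel."""
    lam = (n - 2) / 2
    c = gamma(lam + 1) / (np.sqrt(np.pi) * gamma(lam + 0.5))
    f = lambda t: kernel(t) * eval_gegenbauer(k, lam, t) \
                  / eval_gegenbauer(k, lam, 1.0) * (1 - t**2)**(lam - 0.5)
    return c * quad(f, -1.0, 1.0)[0]

n, beta = 5, 2.0
W_beta = lambda t: -np.exp(beta * t) / beta
for k in range(1, 6):
    numeric = hat_W(W_beta, k, n)
    closed = -2**((n - 2) / 2) * beta**(-n / 2) * gamma(n / 2) * iv(k + (n - 2) / 2, beta)
    print(k, numeric, closed, -1.0 / closed)   # the two coefficient columns agree
\end{verbatim}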
Liquid crystals are assumed to consist of oriented interacting particles, and during the nematic phase the main contribution to the interaction energy comes from the orientation of the molecules (not from their spatial coordinates); thus the model is naturally posed on $\bbS^{n-1}$. The Onsager interaction kernel has the form
\begin{equation}
\label{eq:kernel-onsager}
W(x, y) := \sqrt{1 - \left<x, y\right>^2}.
\end{equation}
Note that in this case correlated molecules $\left<x, y\right> = 1$ have the same contribution as anticorrelated ones $\left<x, y\right> = -1$. The noise parameter $\gamma$ in this case plays the role of the inverse temperature or particle 'mobility'. Existence of solutions and bifurcation branches of the Onsager model on~$\bbS^2$ have been established in \cite{WachsmuthThesis06,vollmer2018bifurcation}. In this section we generalize these results to the sphere of arbitrary dimension $\bbS^{n-1}$ and characterize the phase transition of the corresponding model.
\begin{proposition}[Onsager model on $\bbS^{n-1}$]
Consider the McKean-Vlasov equation with the interaction kernel \eqref{eq:kernel-onsager}. Then
\begin{itemize}
\item the model undergoes a discontinuous phase transition,
\item for every $k = 2l, \ l\in \bbN$ there exists a bifurcation branch around $(\bar\rho, \gamma_{2l})$, where
\[
\gamma_{2l} = s_n^{-1}\frac{\Gamma(l+\frac{n+1}{2})\Gamma(l+n-2)}{ \Gamma(l- \frac12 )\Gamma(l+ \frac{n-2}{2})}
\]
and $s_n = \frac{(n-2)\Gamma(\frac{n}{2})\Gamma(n-2)}{4\sqrt{\pi}\Gamma(\frac{n-1}{2})}$.
\end{itemize}
\end{proposition}
\begin{proof}
First of all, note that both the kernel and the marginal of the spherical measure $\sigma$ are even functions of $\theta_{n-1}$. At the same time, since Gegenbauer polynomials of odd degree are odd functions, the corresponding coefficients of the spherical harmonics decomposition of the Onsager kernel are zero, namely $\hat W_{2l+1} = 0$ for all $l\in\bbN$. To calculate the even components of the decomposition we use the representation of the Gegenbauer polynomials in the monomial basis:
\begin{align*}
C_{k}^{\frac{n-2}{2}}(t) = \sum_{j=0}^{\lfloor k/2 \rfloor}(-1)^j \frac{\Gamma(k - j + \frac{n}{2} - 1)}{\Gamma(\frac{n}{2} - 1)j!(k - 2j)!}(2t)^{k - 2j}.
\end{align*}
Integrating the monomials we obtain
\begin{align*}
\int_{-1}^{1} t^{2m}\left( 1 - t^2 \right)^{(n - 2) / 2}~dt= \int_{0}^{\pi} \cos^{2m}\theta \sin^{n-1}\theta ~d\theta = \frac{\Gamma(m+ \frac{1}{ 2})\Gamma(\frac{n}{2})}{\Gamma(m +\frac{n+1}{2})},
\end{align*}
and combining the above formulas we get the following expression for the even coefficients $\hat W_{2k}:$
\begin{align*}
\hat W_{2k} &= \frac{c_{\frac{n-2}{2}}}{C_{2k}^{\frac{n-2}{2}}(1)}\int_{-1}^1\sum_{j=0}^{k}(-1)^j \frac{\Gamma(2k - j + \frac{n}{2} - 1)}{\Gamma(\frac{n}{2} - 1)j!(2k - 2j)!}4^{k-j}t^{2(k - j)}(1-t^2)^\frac{n-2}{2}dt \\
&= \frac{(\frac{n}{2} - 1)\Gamma(\frac{n}{2})\Gamma(n-2)\Gamma(k+1)}{\Gamma(\frac{n-1}{2})\Gamma(k+n-2)} \sum_{j=0}^{k}(-1)^j \frac{\Gamma(2k - j + \frac{n}{2} - 1)}{j!(k - j)!\Gamma(k - j + \frac{n+ 1}{2})} \\
&=-\frac{\sqrt{\pi}(\frac{n}{2}-1)\Gamma(\frac{n}{2})\Gamma(n-2)}{2\Gamma(\frac{n-1}{2})\Gamma(-\frac{3}{2})\Gamma(\frac{5}{2})} \times \frac{\Gamma(k-\frac{1}{2})\Gamma(k+\frac{n}{2}-1)}{\Gamma(k+\frac{n+1}{2})\Gamma(k+n-2)} \\
&= -s_n \cdot \frac{\Gamma(k-\frac{1}{2})\Gamma(k+\frac{n}{2}-1)}{\Gamma(k+\frac{n+1}{2})\Gamma(k+n-2)},
\end{align*}
where $s_n$ is a dimension-dependent constant. Note that $s_n >0$ for $n\geq 3$, since all the factors in its expression are positive in this case.
Also note that the coefficients are decreasing in absolute value in $k$ for any $n\geq 3$. Namely, we have
\begin{align*}
\frac{\hat W_{2k+2}}{\hat W_{2k}} &= \frac{\Gamma(k+\frac12 )\Gamma(k+\frac{n}{2})\Gamma(k+n-2)\Gamma(k+\frac{n+1}{2})}{\Gamma(k+1+\frac{n+1}{2})\Gamma(k+n-1)\Gamma(k- \frac12 )\Gamma(k - 1+\frac{n}{2})}\\
&= \frac{(k-\frac12)(k+\frac{n}{2} - 1)}{(k +\frac{n+1}{2})(k+n-2)} = \frac{k-\frac12}{k+n-2} \times \frac{k+\frac{n-2}{2}}{k+\frac{n+1}{2}} <1,
\end{align*}
implying that for any dimension $n\geq 3$ the minimal component corresponds to $k =2$. As shown in Section \ref{ssec:transition}, the second spherical harmonic $Y_{2,0}$ satisfies the self-resonance Condition \ref{assum:pt}, and thus the system has a discontinuous transition point. Moreover, since all the components are distinct, applying Theorem \ref{th:bifurcations} we recover all the bifurcation branches of the corresponding free energy functional.
\end{proof}
\begin{remark}
For $n = 3$ we get $s_n = 1/8$, so the bifurcation points correspond to the values
\[
\gamma_{2l} = \frac{8\Gamma(l+2)\Gamma(l+1)}{ \Gamma(l- \frac12 )\Gamma(l+ \frac{1}{2})},
\]
and we recover the result of \cite{vollmer2018bifurcation}.
\end{remark}
\subsection{Opinion dynamics}\label{ssec:opinion}
We introduce a spherical analogue of the Hegselmann-Krause model of opinion dynamics proposed in \cite{HegselmannKrause2002}. In this model, posed on $\bbR$, the particles (or 'agents') attract each other if they are in close proximity and have no effect on each other otherwise. We propose the following interaction kernel as a spherical alternative to the Hegselmann-Krause model:
\begin{equation}
\label{eq:kernel-opinion}
W_p(x, y) = -(1+\left<x, y\right>)^p = -(2 - (1-\left<x, y\right>))^p,
\end{equation}
where $p \in \bbR_+$. Note that the expression $(1-\left<x, y\right>)$ plays the role of the 'distance' between two vectors, and thus the kernel is an alternative to the kernel of the form $W(x,y) = -(2 - \|x- y\|)_+^p$ on $\bbR$. Note that localization in this model corresponds to large values of $p$. The kernel is an element of $C_b$ for any $p\geq 0$ and an element of $H^1$ for $p> \frac12 -\frac{n-3}{2}$, so the corresponding existence (Theorem \ref{th:minimizers}) and regularity (Proposition \ref{prop:equivalence}) results apply. For $p\geq 2$, the second derivative is also bounded, and we conclude that, similarly to the noisy transformer model, for the inverse noise ratio $\gamma <\gamma_o = \frac{(n-2)}{4\max(2^{p-1}p, 2^{p-2}p(p-1))}$ the free energy is geodesically convex and thus the model has a unique invariant measure, the uniform state $\bar\rho$. We can also characterize the set of bifurcation branches of the model on a sphere of arbitrary dimension.
\begin{proposition}[Bifurcations for the opinion dynamics model on $\bbS^{n-1}$]
Consider the McKean-Vlasov equation with the rational interaction kernel introduced in \eqref{eq:kernel-opinion}. Then
\begin{itemize}
\item for $p\in \bbN$ the model has at most $p$ bifurcation branches,
\item if $p\in \bbN$ and $p\leq n-1$, then for every $k \in \{1,\dots,p\}$ there exists a bifurcation branch around the uniform state,
\item for $p\in \bbR_+\backslash \bbN$ the model has a countable number of bifurcation branches, namely for every $k \in \bbN$ such that $\frac{(-1)^k\Gamma(k-p)}{\Gamma(-1-p)} < 0$ there exists a bifurcation branch at the $k$-th level of the spherical harmonics basis,
\item if there exists a bifurcation branch at the $k$-th level, the corresponding $\gamma_k$ takes the form
\[
\gamma_k = \frac{\sqrt{\pi}\Gamma(p+1-k)\Gamma(n+k-1)}{2^{n-2+p}\Gamma(\frac{n}{2})\Gamma(\frac{n-1}{2}+p)\Gamma(p+1)}.
\]
\end{itemize}
\end{proposition}
\begin{proof}
First note that for $p\in \bbN$ the kernel is of polynomial type and thus can be exactly represented by a linear combination of Gegenbauer polynomials of degree at most $p$, implying that $\hat W_{p, k} = 0$ for $k > p$. In the general case, the spherical harmonics decomposition of $W_p$ takes the form
\begin{align*}
\hat W_{p, k} &= -\frac{c_{\frac{n-2}{2}}}{C_k^{\frac{n-2}{2}}(1)}\int_{-1}^1(1+t)^pC^{\frac{n-2}{2}}_k(t)(1-t^2)^{\frac{n-3}{2}}dt \\
&=-\frac{\Gamma(\frac{n}{2})\Gamma(n-2)\Gamma(k+1)}{\sqrt{\pi}\Gamma(\frac{n-1}{2})\Gamma(k+n-2)} \int_{-1}^1C^{\frac{n-2}{2}}_k(t)(1-t)^{\frac{n-3}{2}}(1+t)^{\frac{n-3}{2}+p}dt \\
&= -\frac{2^{n-2+p}\Gamma(\frac{n}{2})\Gamma(\frac{n-1}{2}+p)\Gamma(p+1)}{\sqrt{\pi}\Gamma(p+1-k)\Gamma(n+k-1)} \sim -\frac{1}{\Gamma(p+1-k)\Gamma(n+k-1)},
\end{align*}
where we used the value of the definite integral from \cite[p. 795]{gradshteyn2014table}. Using the properties of the Gamma function we then obtain
\[
-\frac{1}{\Gamma(p+1-k)\Gamma(n+k-1)}= \frac{(-1)^k\Gamma(k-p)}{\Gamma(-1-p)\Gamma(p+2)\Gamma(n+k-1)},
\]
and thus for $k> p$ the sign of $\hat W_{p, k}$ is the sign of $\frac{(-1)^k}{\Gamma(-1-p)}$, implying that either every even or every odd coefficient for $k>p$ is negative and all the components are distinct. Applying Theorem \ref{th:bifurcations} we get the result. Finally, note that for $p\in \bbN$ all nonzero components of the spherical harmonics decomposition are negative. Calculating the ratio we obtain
\begin{equation}\label{eq:HK:W:ratio}
\frac{\hat W_{p, k+1}}{\hat W_{p, k}} = \frac{\Gamma(p+1 - k)\Gamma(n+k-1)}{\Gamma(p-k)\Gamma(n+k)} = \frac{p-k}{n-1 +k },
\end{equation}
so for $p \leq n-1$ the ratio satisfies $\frac{\hat W_{p, k+1}}{\hat W_{p, k}} < 1$ for $k\geq 1$, implying that all the coefficients are distinct and thus, by Theorem \ref{th:bifurcations}, there exists a bifurcation branch for every $k \in \{1,\dots,p\}$.
\end{proof}
First of all, the above result shows that the sign of the coefficients of the spherical harmonics decomposition of $W_p$ does not depend on the dimension of the sphere but only on the parameter $p$. In particular, for $k=1$ the above criterion gives
\[
-\frac{\Gamma(1-p)}{\Gamma(-1-p)} = -(-p)(-1-p) = -p(1+p) < 0,
\]
so the bifurcation branch can exist for any positive $p$. A similar calculation for $k=2$ gives
\[
\frac{\Gamma(2-p)}{\Gamma(-1-p)} = (1-p)p(1+p) <0,
\]
implying that the branch can exist only if $p>1$. An analogous analysis can be conducted for any $k\in \bbN_+$.
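
The sign criterion above is straightforward to check numerically. The following Python sketch is included only as an illustration: the helper name and the parameter values are ours, and the positive normalization constant $c_{\frac{n-2}{2}}$ appearing in the decomposition is omitted, so only the signs (and ratios) of the computed values are meaningful. It evaluates the Gegenbauer coefficients of $W_p$ by quadrature and compares their signs with the criterion $\frac{(-1)^k\Gamma(k-p)}{\Gamma(-1-p)} < 0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer, gamma

def W_hat_unnormalized(n, p, k):
    # int_{-1}^{1} (1+t)^p C_k^{(n-2)/2}(t) (1-t^2)^{(n-3)/2} dt, with the
    # overall minus sign of hat W_{p,k}; the positive constant
    # c_{(n-2)/2} / C_k^{(n-2)/2}(1) is dropped, so only the sign matters.
    Ck = gegenbauer(k, (n - 2) / 2.0)
    integrand = lambda t: (1.0 + t) ** p * Ck(t) * (1.0 - t * t) ** ((n - 3) / 2.0)
    value, _ = quad(integrand, -1.0, 1.0)
    return -value

n, p = 4, 2.5
for k in range(1, 6):
    criterion = (-1) ** k * gamma(k - p) / gamma(-1 - p)
    print(k, W_hat_unnormalized(n, p, k), "branch predicted:", criterion < 0)

# for integer p the coefficients with k > p vanish (up to quadrature error)
print([round(W_hat_unnormalized(4, 3, k), 8) for k in range(4, 8)])
\end{verbatim}
The printed signs should agree with the criterion, and for integer $p$ the higher coefficients should vanish up to quadrature error, in line with the proposition.
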
\begin{proposition}[Discontinuous phase transition for opinion dynamics]\label{prop:HK:discont}
The McKean-Vlasov equation with the rational interaction kernel introduced in \eqref{eq:kernel-opinion} has a discontinuous phase transition on $\bbS^{n-1}$ for some $\gamma_c<\gamma_{\#}$ provided that $p=n+2$.
\end{proposition}
\begin{proof}
We verify Assumption~\ref{assum:pt} for $k_{\min}=2$, so we have to ensure that $k=2$ is indeed the most negative mode. This can be observed immediately from~\eqref{eq:HK:W:ratio} with the choice $p=n+2$, which then becomes $\hat W_{n+2,k+1}/\hat W_{n+2,k}=(n+2-k)/(n-1+k)$. Indeed, for $k=1$ we have $\hat W_{n+2,k+1}/\hat W_{n+2,k}=\frac{n+1}{n} > 1$, and for $k\geq 2$ we get
\[
\frac{\hat W_{n+2,k+1}}{\hat W_{n+2,k}}=\frac{n-1+k+3-2k}{n-1+k}=1- \frac{2k-3}{n-1+k}<1.
\]
Hence, $k_{\min}=2$ and we obtain a discontinuous phase transition from Theorem~\ref{th:pt}.
\end{proof}
\subsection{Localized spherical Gaussian kernel}\label{ssec:localized}
The Hegselmann-Krause model of opinion dynamics on $\bbR^n$ has a localized nature, while in our spherical analogue introduced in the previous section we only found a very special range of kernels for which we can prove a discontinuous phase transition (see Proposition~\ref{prop:HK:discont}). For this reason, we study a system with a localized kernel, for which Assumption~\ref{assum:pt} is verifiable. A canonical choice is the McKean-Vlasov equation with the exact hyperspherical heat kernel $W_\varepsilon(x, y) = - u(\varepsilon, x, y)$, where $u(\varepsilon, x, y)$ is the fundamental solution of the heat equation on $\bbS^{n-1}$. As follows from \cite{zhao2018exact}, $W_\varepsilon$ has an explicit representation in terms of the Gegenbauer polynomials:
\begin{equation}
\label{eq:heat-kernel}
W_\varepsilon(x,y) = -\sum_k e^{-k(k+n-2)\varepsilon}\frac{2k+n-2}{n-2}\frac{\Gamma(\frac{n}{2})}{2\sqrt{\pi^n}}C_k^{\frac{n-2}{2}}(\skp{x,y}).
\end{equation}
Using Lemma \ref{lemma:gegegnbauer-decomposition}, we can recover the bifurcation branches of the corresponding McKean-Vlasov equation.
\begin{proposition}[Localized interaction model on $\bbS^{n-1}$]
\label{prop:localized}
Consider the McKean-Vlasov equation with the localized interaction kernel as in \eqref{eq:heat-kernel}. Then for every $k\in \bbN$ there exists a bifurcation branch around $(\bar \rho, \gamma_k)$, where $\gamma_k$ has the form
\[
\gamma_k = \frac{2\sqrt{\pi^n}}{\Gamma(\frac{n}{2})}e^{k(k+n-2)\varepsilon}.
\]
\end{proposition}
\begin{proof}
Using Lemma \ref{lemma:gegegnbauer-decomposition} we conclude that the spherical harmonics decomposition of $W_\varepsilon$ takes the form
\[
\hat W_{\varepsilon, k} = -\frac{\Gamma(\frac{n}{2})}{2\sqrt{\pi^n}}e^{-k(k+n-2)\varepsilon}.
\]
Note that all the coefficients $\hat W_{\varepsilon, k}$ are distinct for arbitrary $\varepsilon$ and arbitrary $n$. Applying Theorem~\ref{th:bifurcations} we recover all the bifurcation branches of the corresponding system.
\end{proof}
\begin{proposition}[Discontinuous phase transition of the localized interaction model]
Consider the McKean-Vlasov equation with the localized interaction kernel \eqref{eq:heat-kernel}. Then there exists $\varepsilon_0 > 0$ such that for all $\varepsilon \in (0, \varepsilon_0)$ the model undergoes a discontinuous phase transition.
\end{proposition}
\begin{proof}
It follows from the proposition above that all the components of the spherical harmonics decomposition are different and $k_{\min} = 1$.
However, for small $\varepsilon$ the difference between the first and the second components decays with $\varepsilon$:
\[
\hat W_{\varepsilon, 2} - \hat W_{\varepsilon, 1} \sim (n+1)\varepsilon,
\]
and thus we can show that for small enough $\varepsilon$ the model satisfies the relaxed resonance Condition \ref{assum:pt-almost}. Making this observation rigorous, we obtain
\[
\hat W_{\varepsilon, 2} - \hat W_{\varepsilon, 1} = \frac{\Gamma(\frac{n}{2})}{2\sqrt{\pi^n}}\left(e^{-(n-1)\varepsilon} - e^{-2n\varepsilon}\right) = \frac{\Gamma(\frac{n}{2})e^{-(n-1)\varepsilon}}{2\sqrt{\pi^n}} \left(1 - e^{-(n+1)\varepsilon}\right) \leq \frac{3\varepsilon(n+1)\Gamma(\frac{n}{2})}{4\sqrt{\pi^n}},
\]
provided that $\varepsilon(n+1) < 1$. Taking $u = \frac{Y_{2, 0}}{|Y_{2, 0}|_\infty}$, we can then check that for $\varepsilon < \frac{C_0\Gamma\left(\frac{n}{2}\right)U_3^2}{(n-1)\sqrt{\pi^n}} =\varepsilon_0$, where $C_0$ is a constant independent of $n$, the bandwidth $\delta_\varepsilon = \hat W_{\varepsilon, 2} - \hat W_{\varepsilon, 1}$ is sufficiently small for $W_\varepsilon$ to satisfy Condition \ref{assum:pt-almost}. Thus, according to Theorem~\ref{th:pt}, the model undergoes a discontinuous phase transition for any $\varepsilon < \varepsilon_0$.
\end{proof}
\appendix
\section{Differential forms}
\label{sec:geometry}
In this section we give some background information on Riemannian geometry and define the differential operators used in this paper. We consider a smooth compact Riemannian manifold $(\calM, g)$ without boundary, whose metric $g$ assigns to every point $x\in \calM$ a symmetric positive-definite bilinear form on the tangent space, $g_x: T_x\calM \times T_x\calM \to \bbR_+$. For a smooth curve $\gamma: [0,1] \to \calM$ we define its length as
\[
L(\gamma) := \int_0^1 \sqrt{g_{\gamma(s)}[\gamma'(s),\gamma'(s)]}ds,
\]
where
\[
\gamma'(s) = \lim_{\delta \to 0}\frac{1}{2\delta} [\gamma(s+\delta) - \gamma(s-\delta)] \in T_{\gamma(s)}\calM,
\]
for $s \in (0,1)$, with the corresponding one-sided limits at the end points. Then the distance between any two points $x, y \in \calM$ is defined as
\[
\dist(x,y) = \inf_{\gamma \in \Gamma_{x,y}}L(\gamma),
\]
where $\Gamma_{x,y}$ is the set of all smooth curves $\gamma$ satisfying $\gamma(0)= x, \ \gamma(1)= y$. Given a connection $\nabla$ on $\calM$, a constant-speed geodesic $\gamma$ is a curve satisfying the zero-acceleration condition
\begin{equation}
\label{eq:geodesic}
\nabla_{\gamma'(s)}\gamma'(s) = 0.
\end{equation}
In this work we will always consider the Levi-Civita connection, the unique torsion-free and metric-preserving connection on a Riemannian manifold. For two smooth vector fields $X, Y$ the map $\nabla_XY: \calM \to T\calM$ is called the covariant derivative of $Y$ in the direction of $X$, and its evaluation at $\xi\in\calM$ describes the change of the vector $Y_\xi = Y(\xi)$ in the direction of $X_\xi= X(\xi)$. For any fixed connection on $\calM$, for every point $x\in \calM$ and every tangent vector $v\in T_x\calM$ there exists a unique geodesic $\gamma_{x,v}: [0,1] \to \calM$ with initial condition $\gamma(0) = x, \ \gamma'(0) = v$. The exponential map is then given by the end point of this curve:
\[
\exp_x(v) = \gamma_{x,v}(1).
\]
For a smooth function $f: \calM \to \bbR$, its differential at a point $x\in\calM$ is the linear map $df_x: T_x\calM \to \bbR$ such that for any smooth curve satisfying $\gamma(0) = x, \ \gamma'(0) = v$ it holds that
\[
df_x(\gamma'(0)) = (f\circ \gamma)'(0),
\]
where the expression $f\circ \gamma$ on the right-hand side is a curve in $\bbR$.
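
Since all the examples in this paper are posed on the unit sphere, the following small Python sketch may help to make the notions above concrete in that special case. It uses the standard closed-form expressions $\exp_x(v) = \cos(\|v\|)\,x + \sin(\|v\|)\,v/\|v\|$ and $\dist(x,y)=\arccos(\left<x, y\right>)$ on $\bbS^{n-1}$; these formulas are classical facts about the round sphere and are assumed here rather than derived in this appendix.
\begin{verbatim}
import numpy as np

def exp_map_sphere(x, v, eps=1e-12):
    # exponential map on the unit sphere: x is a unit vector and v a tangent
    # vector at x (i.e. <x, v> = 0); standard closed form for S^{n-1}
    nv = np.linalg.norm(v)
    if nv < eps:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def dist_sphere(x, y):
    # geodesic distance dist(x, y) = arccos(<x, y>)
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.3, 0.4])               # tangent vector at x, |v| = 0.5
y = exp_map_sphere(x, v)
print(np.linalg.norm(y), dist_sphere(x, y))  # ~1.0 and ~0.5 = |v|
\end{verbatim}
Running the sketch confirms that $\dist(x, \exp_x(v)) = \|v\|$ for tangent vectors with $\|v\|\leq\pi$, consistent with $\exp_x$ following the constant-speed geodesic started at $x$ with velocity $v$.
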
The gradient of a smooth function $f: \calM \to \bbR$ is the vector field $\grad f$ which for any vector field $Z$ on $\calM$ and any point $x\in \calM$ satisfies
\[
g_x((\grad f)_x, Z_x) = df_x(Z_x).
\]
\begin{example}
On the unit sphere $\calM = \bbS^{n-1}$ equipped with the distance $\dist(x, y) = \arccos(\left<x, y\right>)$, the manifold gradient $\grad_{\bbS^{n-1}} f$ in Euclidean coordinates is equal to the projection of the Euclidean gradient onto the tangent space at $x$:
\[
\grad_{\bbS^{n-1}} f_x = \nabla_{\bbR^n} f_x - \left<\nabla_{\bbR^n} f_x, x\right> x,
\]
where $\left<\cdot, \cdot \right>$ is the Euclidean scalar product and $\nabla_{\bbR^n} f_x = \left(\frac{\partial f(x)}{\partial x_1}, \dots, \frac{\partial f(x)}{\partial x_n} \right)$.
\end{example}
\begin{remark}
In the special case when the metric $g$ is induced by the scalar product of an ambient Euclidean space $\bbR^{n'}$ with $n'>n$, the scalar product satisfies
\[
g(X, Y) = \left<X, P_{T\calM}Y\right>_{\bbR^{n'}},
\]
and as a result the manifold gradient is the projection of the Euclidean gradient onto the tangent space, $\grad_\calM f = P_{T\calM}\grad_{\bbR^{n'}}f$.
\end{remark}
The divergence of a smooth vector field $X$ on a manifold is the trace of the covariant derivative $\nabla X$ with respect to the Levi-Civita connection:
\[
\divr X := \tr (\nabla X),
\]
where $\nabla X$ is the operator which for every smooth vector field $Y$ satisfies $\nabla X(Y) = \nabla_Y X$. In particular, if $\{e_i\}$ is an orthonormal basis of the tangent bundle $T\calM$, then
\[
\divr X = \sum_i \left<\nabla_{e_i} X, e_i\right> = \sum_i g(\nabla_{e_i} X, e_i).
\]
For an $n$-dimensional Riemannian manifold we can define a unique volume measure $m$ which in local coordinates takes the form
\[
dm = \sqrt{\det g_{ij}}dx,
\]
where $g_{ij}$ is the metric tensor in local coordinates and $dx$ is the Lebesgue volume element in $\bbR^n$. As a result, for any compact manifold without boundary $(\calM, g)$ we get the following integration by parts rule
\[
\int \phi \cdot \divr X dm = -\int g(\grad \phi, X )dm
\]
for any $\phi \in C^\infty(\calM)$ and any smooth vector field $X$. The Laplace-Beltrami operator is a generalization of the Laplace operator to the manifold setting, namely for any smooth function $f: \calM \to \bbR$ such that $\grad f$ is a smooth vector field, the action of the Laplace-Beltrami operator is defined as
\[
\Delta f := \divr (\grad f).
\]
\begin{example}[Corollary 1.4.3 {\cite{dai2013approximation}}]
On the unit sphere $\calM = \bbS^{n-1}$ equipped with the distance $\dist(x, y) = \arccos(\left<x, y\right>)$, the Laplace-Beltrami operator $\Delta f$ coincides on $\bbS^{n-1}$ with the Euclidean Laplacian of the function $\tilde f : \bbR^n \to \bbR$ defined as $\tilde f (x) = f(x/\|x\|)$:
\[
\Delta_{\bbS^{n-1}} f = \Delta_{\bbR^n} \tilde f,
\]
where $\Delta_{\bbR^n} = \sum_i \frac{\partial^2}{\partial x_i^2}$.
\end{example}
\begin{remark}(Laplace-Beltrami is the generator of Brownian motion)
Consider a smooth Riemannian manifold $\calM$ and let $p(x, y, t)$ for $x, y \in \calM, \ t \in [0,T)$ be the (unique) solution of the following initial value problem:
\begin{align*}
u_t - \Delta u &= 0, \\
\lim_{t\to +0} u(x, t) &= \delta_y(x), \\
p(y, x, t) &= u(x, t).
\end{align*}
Define the $\calM$-valued stochastic process $B_t$ with transition density $p(y, x, t)$, that is, $\bbP(B_t \in A \mid B_0 = y) = \int_A p(y, x, t) dm(x)$. Then one can show that the Laplace-Beltrami operator generates $B_t$, namely for every admissible $f$ it holds that
\[
\bbE f(B_t) = \bbE f(B_0) + \frac12\int_0^t \bbE\Delta f(B_s)ds.
\]
We will call $B_t$ a Brownian motion on $\calM$.
\end{remark}
\subsection{Sobolev spaces on manifolds}
\label{ssec:SobolevMfds}
We define the Sobolev space $H^1(\calM)$ on a manifold $\calM$ as the space of all functions $f\in L_2(\calM)$ such that the gradient $\nabla f$ exists $m$-almost everywhere and is an element of $L_2(T\calM)$:
\[
H^1(\calM) := \{f: \calM \to \bbR: \int_\calM|f|^2dm < \infty, \ \int_\calM g(\nabla f, \nabla f)dm < \infty\},
\]
equipped with the norm
\[
\|f\|^2_{H^1(\calM)} = \int_\calM |f|^2dm + \int_\calM g(\grad f , \grad f)dm.
\]
Higher-order Sobolev spaces can be defined in terms of higher-order covariant derivatives; we refer the reader to \cite{hebey1996sobolev} for details. Similarly to the case of $\bbR^d$, Sobolev spaces on a manifold can also be defined as the closure of the space of smooth functions with respect to the corresponding norm. On the equivalence between the two definitions see also \cite{chan2024meyers}. For any $f\in H^1(\calM)$ we say that $u: \calM \to \bbR$ satisfies $u = \Delta f$ in the weak sense if for any $\phi \in C^\infty(\calM)$ we have
\[
\int \phi \cdot u dm = -\int g(\grad \phi, \grad f )dm.
\]
Finally, note that by definition $H^1(\calM) \subseteq L_2(\calM)$.
\subsection{Product manifolds}\label{ssec:ProductMfds}
When working with a product space $\calM\times \calM$ we will consider the product distance $\dist_{\calM \times \calM}(x,y)^2 = \dist_{\calM}(x_1,y_1)^2 + \dist_{\calM}(x_2,y_2)^2$, which implies that at any point there exist local coordinates such that the metric tensor $g_{\calM \times \calM}$ has a block-diagonal form and the scalar product satisfies $g_{\calM\times\calM}(X, Y) = g_{\calM}(X_1, Y_1) + g_{\calM}(X_2, Y_2)$. Then the gradient can also be decomposed into two components $\nabla f(x)= (\nabla_{x_1}f, \nabla_{x_2}f)$ and the volume measure is a product measure $dm_{\calM\times \calM}(x)=dm(x_1)dm(x_2)$.
\section{\texorpdfstring{$\Gamma$}{Gamma}-convergence}\label{appendix:Gamma}
In this section we give some background information on $\Gamma$-convergence; we refer the reader to \cite{braides2006handbook} for the proofs.
\begin{definition}[$\Gamma$-convergence]
A sequence of functionals $F_n: X \to \bbR$ on some metric space $(X,d)$ is said to converge to a functional $F:X \to \bbR$ in the sense of $\Gamma$-convergence, denoted by $F_n \stackrel{\Gamma}{\to} F$, if the following two conditions are satisfied:
\begin{itemize}
\item \emph{(Liminf inequality)} For every convergent sequence $x_n \to \hat x$ it holds
\[
F(\hat x) \leq \liminf_{n\to\infty} F_n(x_n),
\]
\item \emph{(Limsup inequality)} For every $\hat x \in X$ there exists a convergent sequence $x_n \to \hat x$ such that
\[
F(\hat x) \geq \limsup_{n\to\infty} F_n(x_n).
\]
\end{itemize}
\end{definition}
Note that $\Gamma$-convergence depends on the topology of the underlying space $X$. Remarkably, a constant sequence $F_n =F$ converges to $F$ in the sense of $\Gamma$-convergence only if $F$ is a lower-semicontinuous functional. In the general case, the $\Gamma$-limit of a constant sequence $F_n = F$ is the lower-semicontinuous envelope of $F$.
\begin{definition}[Lower-semicontinuous envelope]
For a functional $F: X\to \bbR$ defined on a metric space $(X, d)$ we define its \emph{lower-semicontinuous envelope} $\text{lsc}(F): X\to \bbR$ as the largest lower-semicontinuous functional not exceeding $F$:
\[
\text{lsc}(F) := \sup\{ G: \ G \leq F, \ G : X \to \bbR \text{ is $d$-lower semi-continuous} \}.
\]
\end{definition}
We will use the following proposition.
\begin{proposition}[$\Gamma$-limit of decreasing sequence]
\label{prop:gamma-decreasing}
Let $(F_n)_{n\in\bbN}$ be a decreasing sequence of functionals and let $F_n \to F$ pointwise. Then $F_n \stackrel{\Gamma}{\to} \text{\emph{lsc}}(F)$.
\end{proposition}
$\Gamma$-convergence is an important tool for the characterization of the minimizers of the limiting functionals. The fundamental theorem of $\Gamma$-convergence establishes the convergence of minimizers for an equi-coercive sequence of functionals.
\begin{definition}[Equicoercivity]
A sequence of functionals $(F_n)_{n\in\bbN}$ is equi-coercive if for all $\xi \in \bbR$ there exists a compact set $K_\xi\subseteq X$ such that $\{x: F_n(x) < \xi\} \subset K_\xi$ for all $n \in \bbN$.
\end{definition}
\begin{theorem}[Fundamental theorem of $\Gamma$-convergence]
\label{th:gamma-coonvergence}
Let $(F_n)_{n\in\bbN}$ be an equi-coercive sequence of functionals and let $F_n \stackrel{\Gamma}{\to} F$ with $\inf F = \alpha_0$. Then
\begin{enumerate}
\item $\lim_{n\to\infty} \inf F_n =\alpha_0$,
\item if $x_n \to \hat x$ and $F_n(x_n)\to \alpha_0$, then $\hat x$ is a minimizer of $F$,
\item every sequence $(x_n)_{n\in\bbN}$ satisfying $F_n(x_n)\to \alpha_0$ has a convergent subsequence $x_{n_k} \to \hat x$ and every accumulation point $\hat x$ is a minimizer of $F$.
\end{enumerate}
\end{theorem}
\section{Proofs of auxiliary Lemmas for Section \ref{sec:bifurcations}}
\label{appendix:bifurcations}
\begin{proof}[Proof of Lemma \ref{lemma:G}]
\textbf{Uniform convergence of $G(w, \gamma)$.} We begin by showing that $G(w, \gamma)$ converges to zero as $\|w\|_{L_2} \to 0$, uniformly in $\gamma$. Let
\begin{align*}
A_\gamma(w)&:=\frac{1}{Z(\gamma, w)} e^{-\gamma W* (\bar \rho + w)} = B_\gamma(w)C_\gamma(w), \quad \text{where}\\
B_\gamma(w) &:= \frac{1}{Z(\gamma, w)}, \\
C_\gamma(w) &:= e^{-\gamma W* (\bar \rho + w)}.
\end{align*}
Using Taylor expansion we get the following estimates for $ B_\gamma(w)$ and $C_\gamma(w)$ as $\|w\|_{L_2} \to 0$:
\begin{align*}
B_\gamma(w) &= \frac{1}{Z(\gamma, 0) + (Z(\gamma, w) - Z(\gamma, 0))}= \frac{1}{Z(\gamma, 0)}\left(1 + \frac{(Z(\gamma, 0) - Z(\gamma, w))}{Z(\gamma, 0)} + f_1(w)\right) \\
&= \frac{1}{Z(\gamma, 0)}\left(1 + \bar\rho\int\left(-\gamma W*w + \frac{1}{2}e^{-\gamma\xi_x}(\gamma W*w)^2\right)d\sigma + f_1(w)\right), \\
C_\gamma(w) &=e^{-\gamma W*\bar\rho}\left(1 -\gamma W* w + \frac{1}{2}e^{-\gamma\xi_x}(\gamma W*w)^2\right),
\end{align*}
where $\xi_x$ is the intermediate point in the Lagrange form of the remainder; by construction, for every $x\in\bbS^{n-1}$ the absolute value of $\xi_x$ is bounded by the absolute value of the convolution at $x$: $|\xi_x| \leq |(W*w)(x)|$. In Lemma \ref{lemma:gibbs-H1} we have shown the uniform bound on the convolution $\|W*w\|_\infty \leq \|W\|_\infty\|w\|_{L_1}$, and since $\|w\|_{L_2} <\infty$ and the sphere is a compact domain we can bound the $L_1$-norm of $w$ in terms of the $L_2$-norm: $\|w\|_{L_1} \leq \|w\|_{L_2}^2 + \sigma(\bbS^{n-1})$. As a result, we conclude that $|\xi_x|$ is uniformly bounded and so is the exponential pre-factor: $|e^{-\gamma\xi_x}| < C_\xi$. We use this argument to estimate the residual term $f_1(w)$, which by Taylor's theorem can be bounded by
\[
|f_1(w)|\leq \beta_f\left(\bar\rho\int\left(-\gamma W*w + \frac{1}{2}e^{-\gamma\xi_x}(\gamma W*w)^2\right)d\sigma\right),
\]
for some $\beta_f(x) =O(x^2)$ as $x \to 0$.
At the same time, estimating the square of the argument of $\beta_f$ by the Cauchy-Schwarz inequality, we obtain
\begin{align}
\label{eq:long-bound}
\MoveEqLeft\left(\int\left(-\gamma W*w + \frac{1}{2}e^{-\gamma\xi_x}(\gamma W*w)^2\right)d\sigma\right)^2 \leq \gamma^2\|W\|^2_\infty\|w\|_{L_1}^2 +\frac14C_\xi^2\gamma^2 \|W\|^4_\infty\|w\|_{L_1}^4 \\
&\qquad \leq \gamma^2(C_1 \|w\|^2_{L_2} + C_2\|w\|^4_{L_2}) = O(\|w\|^2_{L_2}),\notag
\end{align}
uniformly in $\gamma$ on any neighborhood $(\gamma-\delta, \gamma+\delta)\subset \bbR_+$. As follows from the bound \eqref{eq:long-bound} and the uniform bound obtained in Lemma \ref{lemma:G}, both $A_\gamma$ and $B_\gamma$ are also uniformly bounded on $L_2$-bounded sets; take $C_\infty \in \bbR$ such that $A_\gamma(w), B_\gamma(w) <C_\infty$ for all $w$ with $\|w\|_{L_2}< C_w$. Then, combining the above estimates and using the fact that $\bar\rho = e^{-\gamma W*\bar\rho}/Z(\gamma, 0)$, we obtain the following bound for the non-linear operator $G$:
\begin{align*}
\MoveEqLeft \|G(w, \gamma)\|^2_{L_2} = \Big\|\bar \rho - \gamma \bar \rho \cdot W* w + \gamma \bar\rho^2 \int_{\bbS^{n-1}} W* w dx -A_\gamma(w)\Big\|^2_{L_2} \\
&\leq C_\infty^2\left(|f_1(w)|^2 +e^{-2\gamma W*\bar\rho}\frac14C_\xi^2\|(\gamma W*w)^2\|_{L_2}^2\right) + \bar\rho^2\|\gamma W*w \int_{\bbS^{n-1}} W* w d\sigma\|_{L_2}^2 \\
&\leq \tilde C_1|f_1(w)|^2 + \tilde C_2\|(\gamma W*w)^2\|_{L_2}^2 = o(\|w\|^2_{L_2}).
\end{align*}
\textbf{Higher derivatives.} We now calculate the Fréchet derivatives of the non-linear term $G$. For $D_w G$, using \eqref{eq:DwF} we get
\begin{align*}
\MoveEqLeft D_w G(w, \gamma)[v] = D_w\hat{F}(w, \gamma)[v] -v + \gamma Kv \\
&= \gamma Kv +\frac{1}{Z( \gamma, w)} e^{-\gamma W* (\bar \rho + w)} \cdot \gamma W* v - \frac{ \gamma e^{-\gamma W* (\bar \rho + w)}}{Z( \gamma, w)^2}\int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + w)} \cdot W* v d\sigma.
\end{align*}
Define the operators $Q_1(w):= Z(\gamma, w)^{-1}$, $Q_2(w) := e^{-\gamma W* (\bar \rho + w)}$ and $Q_3(w) := \int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + w)} \cdot W* v \, d\sigma$ for a fixed $v\in L_2(\bbS^{n-1})$; we show that all of them are continuous. To begin with, by the Cauchy-Schwarz inequality we get the following uniform bound
\[
|W * w|_\infty \leq \|W\|_{L_2} \|w\|_{L_2}.
\]
Using it, for $Q_2$ we obtain for sufficiently small $\|w_1\|_{L_2}, \|w_2\|_{L_2} < c_w$ the bound
\begin{align*}
\|Q_2(w_1) - Q_2(w_2)\|^2_{L_2} &= \left\|e^{-\gamma W* (\bar \rho + w_2)} - e^{-\gamma W* (\bar \rho + w_1)}\right\|^2_{L_2} \\
&= \int e^{-2\gamma W* (\bar \rho + w_2)}\left(1 - e^{-\gamma W*(w_1 - w_2)}\right)^2 d\sigma \\
&\leq 2\gamma \sigma(\bbS^{n-1}) \|W\|^2_{L_2}e^{2\gamma\|W\|_{L_2}(1 + c_w)}\|w_1 - w_2\|_{L_2}^2\\
&= O(\|w_1 - w_2\|_{L_2}^2).
\end{align*}
Similarly for $Q_1$ we then get the estimate
\begin{align*}
|Q_1(w_1) - Q_1(w_2)| &= \frac{\left|\int e^{-\gamma W* (\bar \rho + w_2)} - e^{-\gamma W* (\bar \rho + w_1)}\right|}{\int e^{-\gamma W* (\bar \rho + w_1)} \cdot \int e^{-\gamma W* (\bar \rho + w_2)}} = \frac{\int e^{-\gamma W* (\bar \rho + w_2)}\left(1 - e^{-\gamma W*(w_1 - w_2)}\right)}{\int e^{-\gamma W* (\bar \rho + w_2)} \cdot \int e^{-\gamma W* (\bar \rho + w_1)}}\\
&\leq C e^{3\|W\|_{L_2}(1+c_w)}\int\left|1 - e^{-\gamma W*(w_1 - w_2)}\right| \\
&\leq 2\gamma C \sigma(\calM)\|W\|_{L_2} e^{3\|W\|_{L_2}(1+c_w)} \|w_1-w_2\|_{L_2} = O(\|w_1 - w_2\|_{L_2}).
\end{align*}
Finally, for $Q_3$, assuming that $\|v\|_{L_2}<1$ we obtain
\begin{align*}
|Q_3(w_1) - Q_3(w_2)| &= \int (Q_2(w_1) - Q_2(w_2)) \cdot W*v \, d\sigma \leq \|W\|_{L_2}\int|Q_2(w_1) - Q_2(w_2)| \, d\sigma \\
&\leq 2\gamma \sigma(\bbS^{n-1}) \|W\|_{L_2}e^{\gamma\|W\|_{L_2}(1 + c_w)}\|w_1 - w_2\|_{L_2} = O(\|w_1 - w_2\|_{L_2}).
\end{align*}
The derivative $D_w G(w, \gamma)[v]$ then takes the form
\[
D_w G(w, \gamma)[v] =\gamma Kv +\gamma Q_1(w)Q_2(w)\cdot W*v - \gamma Q_1(w)^2Q_2(w)Q_3(w),
\]
and, as directly follows from the above estimates, it is continuous in $w$ for sufficiently small perturbations $w$ and any finite $\gamma$. Continuity in $\gamma$ follows in a similar manner. Calculating $D_\gamma G(w, \gamma)$ and $D_{w,\gamma} G(w, \gamma)[v]$ we obtain the following expressions.
\begin{align*}
D_\gamma G(w,\gamma) &= D_\gamma\left(\frac{1}{Z(\gamma, w)} e^{-\gamma W* (\bar \rho + w)} \right)+ \bar \rho \cdot W* w - \bar\rho^2 \int_{\bbS^{n-1}} W* w d\sigma \\
&=e^{-\gamma W* (\bar \rho + w)} D_\gamma\left(\frac{1}{Z(\gamma, w)}\right) + \frac{1}{Z(\gamma, w)} D_\gamma\left(e^{-\gamma W* (\bar \rho + w)}\right) \\
&\qquad + \bar \rho \cdot W* w - \bar\rho^2 \int_{\bbS^{n-1}} W* w d\sigma \\
&=e^{-\gamma W* (\bar \rho + w)} \frac{e^{-\gamma W* (\bar \rho + w)} }{Z(\gamma, w)^2} \int e^{-\gamma W*(\bar\rho +w) }\cdot W * (\bar\rho + w) -\frac{\gamma e^{-\gamma W* (\bar \rho + w)}}{Z(\gamma,w)} \\
&\qquad + \bar \rho \cdot W* w - \bar\rho^2 \int_{\bbS^{n-1}} W* w d\sigma.
\end{align*}
And analogously for the mixed derivative:
\begin{align*}
D_{w,\gamma} G(w,\gamma)[v] &= Kv +\frac{1}{Z( \gamma, w)} e^{-\gamma W* (\bar \rho + w)} \cdot W* v + \gamma W* v\cdot D_\gamma\left(\frac{1}{Z( \gamma, w)} e^{-\gamma W* (\bar \rho + w)}\right) \\
&\qquad- \frac{e^{-\gamma W* (\bar \rho + w)}}{Z( \gamma, w)^2}\int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + w)} \cdot W* v d\sigma \\
&\qquad+\gamma D_\gamma\left(\frac{e^{-\gamma W* (\bar \rho + w)}}{Z( \gamma, w)^2}\int_{\bbS^{n-1}} e^{-\gamma W* (\bar \rho + w)} \cdot W* v d\sigma\right).
\end{align*}
A more explicit expression for $D_{w,\gamma} G(w,\gamma)[v]$ can be obtained using the product rule. Continuity of both $D_\gamma G(w,\gamma)$ and $D_{w,\gamma} G(w,\gamma)[v]$ follows from arguments analogous to the estimates for $D_w G(w, \gamma)$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:compactness}]
Consider a bounded set $U \subset L_2(\calM)$; we aim to show that the images of $U$ under both $A$ and $B$ are relatively compact by applying the Arzelà–Ascoli theorem. We start by showing the pointwise boundedness. Using the argument from Lemma \ref{lemma:gibbs-H1} and Hölder's inequality we obtain
\begin{align*}
\|Au\|_\infty &= \|W*u\|_\infty \leq \|W\|_\infty \|u\|_{L_1} \leq \|W\|_\infty \sqrt{\sigma(\calM)}\|u\|_{L_2}, \\
\|Bu\|_\infty &= \left\|\sigma \int_\calM W * ud\sigma\right\|_\infty \leq \left\| \int_\calM \|W\|_\infty\sqrt{\sigma(\calM)}\|u\|_{L_2} d\sigma\right\|_\infty = \|W\|_\infty \sigma(\calM)^{3/2}\|u\|_{L_2}.
\end{align*}
Note that $Bu$ is a constant function for every $u\in L_2(\calM)$, so the set $B(U)$ is equicontinuous.
For $A(U)$ we estimate, for $x_1, x_2 \in \calM$,
\begin{align*}
|(Au)(x_1) - (Au)(x_2)|^2 &= \left|\int_\calM W(x_1, y)u(y)d\sigma(y) - \int_\calM W(x_2, y)u(y)d\sigma(y) \right|^2 \\
&= \left|\int_\calM \left(W(x_1, y) - W(x_2, y)\right)u(y)d\sigma(y) \right|^2 \\
&\leq \int_\calM \left|W(x_1, y) - W(x_2, y)\right|^2 |u(y)|^2d\sigma(y) \\
&\leq \|u\|_{L_2}^2\sup_{y\in \calM}\left|W(x_1, y) - W(x_2, y)\right|^2.
\end{align*}
Since $W$ is continuous on $\calM\times \calM$ and $\calM$ is compact, $W$ is uniformly continuous. Thus for every $\varepsilon >0$ there exists $\delta>0$ such that for all $x_1, x_2$ with $\dist(x_1, x_2) <\delta$ it holds that $\left|W(x_1, y) - W(x_2, y)\right| < \varepsilon$, which guarantees that
\[
|(Au)(x_1) - (Au)(x_2)|^2 \leq \varepsilon^2\|u\|_{L_2}^2,
\]
so $A(U)$ is equicontinuous. Thus by the Arzelà–Ascoli theorem both $A(U)$ and $B(U)$ are relatively compact sets in the topology of uniform convergence. Since $\calM$ is compact, $A(U)$ and $B(U)$ are also relatively compact in $L_2(\calM)$, implying that both operators are compact.
\end{proof}
\bibliographystyle{myalpha}
\bibliography{biblio}
\end{document}
2412.15096v2
http://arxiv.org/abs/2412.15096v2
Castelnuovo-Mumford regularity of finite schemes
\documentclass[12pt,reqno]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amscd} \usepackage{xypic} \usepackage{hyperref} \textwidth=16.00cm \textheight=22cm \topmargin=0.00cm \oddsidemargin=0.00cm \evensidemargin=0.00cm \headheight=14.4pt \headsep=1cm \numberwithin{equation}{section} \hyphenation{semi-stable} \emergencystretch=10pt \def\A{{\mathbb A}} \def\B{{\mathbb B}} \def\C{{\mathbb C}} \def\D{{\mathbb D}} \def\E{{\mathbb E}} \def\F{{\mathbb F}} \def\G{{\mathbb G}} \def\K{{\mathbb K}} \def\N{{\mathbb N}} \def\O{{\mathbb O}} \def\P{{\mathbb P}} \def\Q{{\mathbb Q}} \def\R{{\mathbb R}} \def\Z{{\mathbb Z}} \setcounter{MaxMatrixCols}{20} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem{addendum}[theorem]{Addendum} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{convention}[theorem]{Convention} \newtheorem{notation and conventions}[theorem]{Notation and Conventions} \newtheorem{convention and reminder}[theorem]{Convention and Reminder} \newtheorem{convention and remark}[theorem]{Convention and Remark} \newtheorem{definition and remark}[theorem]{Definition and Remark} \newtheorem{reminder}[theorem]{Reminder} \newtheorem{reminders and definition}[theorem]{Reminders and Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{remark and notations}[theorem]{Remark and Notations} \newtheorem{notation and remark}[theorem]{Notation and Remark} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{acknowledgement}{Acknowledgement} \renewcommand{\theacknowledgement}{} \newtheorem{algorithm}[theorem]{Algorithm} \newcommand\Proj{\operatorname{Proj}} \newcommand\Spec{\operatorname{Spec}} \newcommand\Hom{\operatorname{Hom}} \newcommand\Ext{\operatorname{Ext}} \newcommand\Tor{\operatorname{Tor}} \newcommand\height{\operatorname{height}} \newcommand\depth{\operatorname{depth}} \newcommand\codim{\operatorname{codim}} \newcommand\beg{\operatorname{beg}} \newcommand\reg{\operatorname{reg}} \newcommand\Ker{\operatorname{\Ker}} \newcommand\Coker{\operatorname{Coker}} \newcommand\e{\operatorname{e}} \newcommand\Supp{\operatorname{Supp}} \newcommand\Ass{\operatorname{Ass}} \newcommand\Nor{\operatorname{Nor}} \newcommand\Sec{\operatorname{Sec}} \newcommand\Tan{\operatorname{Tan}} \newcommand\NZD{\operatorname{NZD}} \newcommand\Sing{\operatorname{Sing}} \newcommand\rank{\operatorname{rank}} \newcommand\Aut{\operatorname{Aut}} \newcommand\PGL{\operatorname{PGL}} \newcommand\im{\operatorname{Im}} \def\acts{\curvearrowright} \makeatletter \newcommand{\stars}{}\DeclareRobustCommand{\stars}[1]{\stars@{#1}} \newcommand{\stars@}[1]{ \ifcase#1\relax\or\stars@one\or\stars@two\or\stars@three\or\stars@four } \newcommand{\stars@char}{$\scriptstyle*$} \newcommand{\stars@base}[1]{ $\m@th\vcenter{\offinterlineskip\ialign{\hfil##\hfil\cr#1\crcr}}$} \newcommand{\stars@one}{ \stars@base{\stars@char}} \newcommand{\stars@two}{ \stars@base{\stars@char\cr\stars@char}} \newcommand{\stars@three}{ \stars@base{\stars@char\cr\stars@char\stars@char}} \newcommand{\stars@four}{ \stars@base{\stars@char\stars@char\cr\stars@char\stars@char}} \makeatother \def\ds{\displaystyle} \def\kk{{\Bbbk}} \def\QR{\mathsf{QR}} \title{Castelnuovo-Mumford regularity of finite schemes} \begin{document} 
\author{Donghyeop Lee}
\address{Department of Mathematics, Korea University, Seoul 136-701, Korea}
\email{[email protected]}
\author{Euisung Park}
\address{Department of Mathematics, Korea University, Seoul 136-701, Korea}
\email{[email protected]}
\date{Seoul, \today}
\subjclass[2010]{14N05, 51N35}
\keywords{Finite scheme, Castelnuovo-Mumford regularity, Rational normal curve}
\thispagestyle{empty}
\begin{abstract}
Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme of degree $d$. Then the Castelnuovo-Mumford regularity ${\rm reg} ({\Gamma})$ of $\Gamma$ is at most $\left\lceil \frac{d-n-1}{t(\Gamma)} \right\rceil +2$, where $t(\Gamma)$ is the smallest integer $t$ such that $\Gamma$ admits a $(t+2)$-secant $t$-plane. In this paper, we show that ${\rm reg} ({\Gamma})$ is close to this upper bound if and only if there exists a unique rational normal curve $C$ of degree $t(\Gamma)$ such that $\reg (\Gamma \cap C) = \reg (\Gamma)$.
\end{abstract}
\maketitle
\setcounter{page}{1}
\section{Introduction}
\noindent Throughout this paper, we work over an algebraically closed field $\mathbb{K}$ of characteristic $0$. A closed subscheme $\Gamma \subset \P^n$, defined by a sheaf of ideals $\mathcal{I}_{\Gamma}$ in $\mathcal{O}_{\P^n}$, is said to be $p$-regular in the sense of Castelnuovo-Mumford if
\begin{equation*}
H^{i}(\mathbb{P}^n,\mathcal{I}_{\Gamma}(p-i))=0 \ \text{ for all } \ i\geq{1}.
\end{equation*}
The interest in this concept stems partly from the fact that $\Gamma$ is $p$-regular if and only if for every $j \geq 0$ the minimal generators of the $j$-th syzygy module of the saturated homogeneous ideal $I (\Gamma)$ of $\Gamma$ in the homogeneous coordinate ring $R := \mathbb{K} [x_0 , x_1 , \ldots, x_n ]$ of $\P^n$ occur in degree $\leq p+j$ (cf. \cite{EG}). In particular, $I (\Gamma)$ can be generated by forms of degree $\leq p$. The Castelnuovo-Mumford regularity ${\rm reg} ({\Gamma})$ of ${\Gamma}$ is defined to be the least integer $p$ such that ${\Gamma}$ is $p$-regular. In this paper, we study the regularity of finite subschemes of $\mathbb{P}^n$. Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme of degree $d \geq n+3$. There is a well-known upper bound on ${\rm reg}(\Gamma)$ in terms of some basic invariants of $\Gamma$. To state it precisely, recall that a subspace $\Lambda$ of $\P^n$ is said to be a $(t+2)$-secant $t$-plane to $\Gamma$ if it is a $t$-dimensional subspace of $\P^n$ such that
\begin{equation*}
{\rm length} (\mathcal{O}_{\mathbb{P}^n} / (\mathcal{I}_{\Lambda} +\mathcal{I}_{{\Gamma}})) \geq t+2
\end{equation*}
where $\mathcal{I}_{\Lambda}$ is the sheaf of ideals of $\Lambda$ in $\P^n$. We denote by $t(\Gamma)$ the smallest integer $t$ such that $\Gamma$ admits a $(t+2)$-secant $t$-plane. Then $1 \leq t(\Gamma) \leq n$. Also, $\Gamma$ is said to be \textit{in linearly general position} if $t(\Gamma)$ is equal to $n$. It always holds that
\begin{equation}\label{ineq:kwak}
{\rm reg}(\Gamma) \leq \left\lceil \frac{d-n-1}{t(\Gamma)} \right\rceil +2
\end{equation}
where $\lceil{a}\rceil$ is the smallest integer greater than or equal to $a$ (cf. \cite[Proposition 2.1]{K}). Briefly speaking, this shows that in order for ${\rm reg}(\Gamma)$ to become large, the value of $t(\Gamma)$ must be small.
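
As a purely illustrative aside, not used in any of the arguments of this paper, the regularity of a reduced finite set of points can be computed numerically from its Hilbert function, using the fact, recalled in Section 2, that a finite scheme is $(m+1)$-regular if and only if it is $m$-normal. The following Python sketch does this by evaluating monomials at the points and taking ranks; the helper names and the random test configurations are ours and are chosen only for illustration.
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def hilbert_function(points, m):
    # h_Gamma(m) = rank of the matrix of all degree-m monomials in the n+1
    # homogeneous coordinates, evaluated at the (reduced) points
    n1 = points.shape[1]
    monomials = list(combinations_with_replacement(range(n1), m))
    M = np.array([[np.prod(p[list(mono)]) for mono in monomials] for p in points])
    return np.linalg.matrix_rank(M)

def regularity(points):
    # least p such that Gamma is (p-1)-normal, i.e. h_Gamma(p-1) = deg(Gamma);
    # for finite schemes this is reg(Gamma) (see Section 2)
    d, m = points.shape[0], 0
    while hilbert_function(points, m) < d:
        m += 1
    return m + 1

n, d = 3, 14
rng = np.random.default_rng(1)
general = np.hstack([np.ones((d, 1)), rng.uniform(-1, 1, size=(d, n))])
ts = rng.uniform(-1, 1, size=(d, 1))
rnc = np.hstack([ts ** j for j in range(n + 1)])   # points on a rational normal curve
print("general points:", regularity(general))
print("points on a RNC:", regularity(rnc), "vs ceil((d-1)/n)+1 =", -(-(d - 1) // n) + 1)
\end{verbatim}
For points chosen on a rational normal curve the computed value agrees with $\left\lceil\frac{d-1}{n}\right\rceil+1$ (compare Theorem \ref{thm:Nagel} below), while general points have smaller regularity.
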
Along this line, our paper aims to explore answers to the following problem.\\
\begin{enumerate}
\item[$(*)$] When ${\rm reg}(\Gamma)$ is close to the upper bound $\left\lceil \frac{d-n-1}{t(\Gamma)} \right\rceil +2$ in (\ref{ineq:kwak}), find some elementary geometric reasons why $\Gamma$ is not $({\rm reg}(\Gamma)-1)$-regular.\\
\end{enumerate}
\noindent In the case of $t(\Gamma)=1$, the answer to this problem was obtained in \cite{LPW}, where it is proved that if
\begin{equation*}
\left\lceil \frac{d-n-1}{2} \right\rceil +3 \leq {\rm reg}(\Gamma) \leq d-n+1
\end{equation*}
(and hence $t(\Gamma) =1$), then $\Gamma$ admits a ${\rm reg}(\Gamma)$-secant line. This result provides an intuitive and geometric explanation for why $\Gamma$ fails to be $({\rm reg}(\Gamma)-1)$-regular. Regarding the problem $(*)$ and the above result in \cite{LPW}, our first main result in this paper is as follows:
\begin{theorem}\label{thm:main1}
Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme of degree $d$ such that
\begin{equation}\label{eq:1}
\left\lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil +3 \leq {\rm reg}(\Gamma) \leq \left\lceil \frac{d-n-1}{t(\Gamma)} \right\rceil +2.
\end{equation}
Then there exists a unique $t(\Gamma)$-dimensional subspace $\Lambda$ of $\P^n$ such that
\begin{enumerate}
\item[$(i)$] $\Gamma \cap \Lambda$ is nondegenerate and in linearly general position as a subscheme of $\Lambda$
\end{enumerate}
and
\begin{enumerate}
\item[$(ii)$] ${\rm reg}(\Gamma \cap \Lambda) = {\rm reg}(\Gamma)$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{thm:main1} is in Section 3. Note that if ${\rm reg}(\Gamma)$ satisfies the inequalities in (\ref{eq:1}), then $d$ is at least $n+ t(\Gamma)+2$. Theorem \ref{thm:main1} generalizes the previously mentioned result in \cite{LPW}. That is, if
\begin{equation*}
\left\lceil \frac{d-n-1}{2} \right\rceil +3 \leq {\rm reg}(\Gamma) \leq d-n+1
\end{equation*}
(and hence $t(\Gamma) =1$), then Theorem \ref{thm:main1} says that there exists a unique line $\Lambda = \P^1$ such that ${\rm reg}(\Gamma \cap \Lambda) ={\rm reg}(\Gamma)$. That is, $\Gamma$ admits a unique ${\rm reg}(\Gamma)$-secant line.\\
Theorem \ref{thm:main1} gives us a partial answer to the problem $(*)$. That is, if ${\rm reg}(\Gamma)$ satisfies the inequality (\ref{eq:1}) then there exists a subscheme $\Gamma \cap \Lambda$ of $\Gamma$ which is in linearly general position in $\Lambda$ and which satisfies ${\rm reg}(\Gamma \cap \Lambda) = {\rm reg}(\Gamma)$. This leads us to study finite schemes in linearly general position whose regularity is close to the upper bound in (\ref{ineq:kwak}). We begin by recalling the following definition.
\begin{definition}
A nondegenerate finite set $\Gamma \subset \P^n$ of $d$ points is said to be \textit{in uniform position} if
\begin{equation*}
h_{Y} (\ell) = \min \{ |Y| , h_{\Gamma} (\ell) \}
\end{equation*}
for any subset $Y$ of $\Gamma$ and any $\ell \geq 0$, where $h_{\Gamma} (\ell)$ and $h_{Y} (\ell)$ are the Hilbert functions of $\Gamma$ and $Y$, respectively.
\end{definition}
Any finite set in uniform position is always in linearly general position. When $\Gamma$ is in uniform position and $d$ is large enough, the maximal regularity case was classified as follows.
\begin{theorem}[N. V. Trung and G. Valla in \cite{TV} and U. Nagel in \cite{N}]\label{thm:Nagel}
Let ${\Gamma} \subset \mathbb{P}^n$ be a finite set of $d$ points in uniform position.
Then \renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\textrm{#1}} \begin{description} \setlength{\labelwidth}{13mm} \setlength{\labelsep}{1.5mm} \setlength{\itemindent}{0mm} \item[{\rm (1)}] Suppose that $d>(n+1)^2$. Then $${\rm{reg}}(\Gamma)=\left\lceil\frac{d-1}{n}\right\rceil+1 \text{ if and only if }{\Gamma} \text{ lies on a rational normal curve}.$$ \item[{\rm (2)}] Suppose that $n\geq{3}$, $d\geq min(3n+1,2n+5)$ and ${\Gamma}$ does not lie on a rational normal curve. Then ${\rm reg}({\Gamma}) \leq \tau (n,d)$ where $$\tau (n,d) := \begin{cases} \left\lceil\frac{d}{n+1}\right\rceil+2 \quad\text{ if } n+1 \text{ divides } d, \text{and}\\ \left\lceil\frac{d}{n+1}\right\rceil+1 \quad\text{ otherwise}. \end{cases}$$ \end{description} \end{theorem} Theorem \ref{thm:Nagel}.$(1)$ was shown using the classical Castelnuovo Lemma. Theorem \ref{thm:Nagel} says that if ${\Gamma}$ is in uniform position and $d>(n+1)^2$, then either $\Gamma$ lies on a rational normal curve and so ${\rm{reg}}(\Gamma)$ is maximal or else ${\rm{reg}}(\Gamma)$ is at most $\tau (n,d)$.\\ From now on, let ${\Gamma} \subset \mathbb{P}^n$ be a finite scheme in linearly general position of degree $d \geq n+3$. We first define the integer ${\rho}({\Gamma})$ as \begin{equation*} \rho (\Gamma)={\rm max} \{\vert {\Gamma} \cap C \vert ~|~ C \text{ is a rational normal curve in } \P^n \}. \end{equation*} By \cite[Theorem 1]{EH}, any finite scheme of degree $n+3$ in linearly general position is contained in a unique rational normal curve. Therefore, it always holds that $$n+3\leq {\rho}({\Gamma})\leq d.$$ Also, Theorem \ref{thm:Nagel}.$(1)$ says that if ${\Gamma}$ is in uniform position and $d \geq (n+1)^2$, then ${\rm{reg}}(\Gamma)$ is maximal if and only if $\rho (\Gamma)=d$. Along this line, our second main result in the present paper is as follows. \begin{theorem}\label{thm:main2} Let ${\Gamma} \subset \P^n$, $n \geq 2$, be a finite subscheme of degree $d$ in linearly general position such that \begin{equation}\label{eq:assumption on reg} d \geq 4n^2+6n+1 \quad \mbox{and} \quad {\rm reg}({\Gamma}) \geq \left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3. \end{equation} Then \begin{equation}\label{eq:intersection with RNC} \rho(\Gamma)> d-(m+1)n \quad \mbox{where} \quad m := \left\lceil\frac{d-1}{n}\right\rceil+1 - {\rm reg}({\Gamma}). \end{equation} Furthermore, there is a unique rational normal curve $C$ of degree $n$ such that $$\rho(\Gamma)=\vert\Gamma\cap{C}\vert \quad\mbox{and} \quad {\rm reg}(\Gamma) = {\rm reg}(\Gamma \cap C).$$ \end{theorem} The proof of Theorem \ref{thm:main2} is in Section 4. Considering the inequalities in (\ref{ineq:kwak}) and (\ref{eq:assumption on reg}), the following condition is required: \begin{equation*} \left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3 \leq \left\lceil\frac{d-1}{n}\right\rceil+1 \end{equation*} Note that our assumption $d\geq4n^2+6n+1$ ensures that this inequality always holds. Contrary to the results of Theorem \ref{thm:Nagel}, it can happen that if $\Gamma \subset \P^n$ is in linearly general position but not in uniform position then \begin{equation*} \left\lceil\frac{d}{n+1}\right\rceil+2 < {\rm{reg}}(\Gamma) \leq \left\lceil\frac{d-1}{n}\right\rceil+1. \end{equation*} Theorem \ref{thm:main2} provides a very precise description of such finite subschemes. In particular, it says that if ${\rm{reg}}(\Gamma)$ is close to the maximal possible value then there exists a unique rational normal curve which contains most of $\Gamma$. 
Thus Theorem \ref{thm:main2} clearly shows what properties hold when the condition on $\Gamma$ is relaxed from ``uniform position'' to ``linearly general position''. Next, we apply Theorem \ref{thm:main2} to finite subschemes in linearly general position having maximal regularity. This can be compared to Theorem \ref{thm:Nagel}.$(1)$.
\begin{corollary}\label{cor:maximal case}
Let ${\Gamma} \subset \P^n$, $n \geq 2$, be a finite subscheme of degree $d \geq 4n^2+6n+1$ in linearly general position. Write $d = nq+r+2$ for some $0 \leq r \leq n-1$. Then
\begin{equation*}
{\rm reg}({\Gamma}) = \left\lceil\frac{d-1}{n}\right\rceil+1 \ \text{ if and only if } \ {\rho}({\Gamma})\geq d-r.
\end{equation*}
In particular, if $r=0$ and $\reg (\Gamma) = \left\lceil\frac{d-1}{n}\right\rceil+1$ then $\Gamma$ lies on a rational normal curve and hence it is in uniform position.
\end{corollary}
The proof of Corollary \ref{cor:maximal case} is in Section 5.\\
We finish this section by providing an answer to the previous problem $(*)$ by combining Theorem \ref{thm:main1} and Theorem \ref{thm:main2}.
\begin{theorem}\label{thm:main3}
Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme of degree $d$ with $t(\Gamma) = t$. If
\begin{equation*}
\left \lceil{\frac{d-1}{t+\frac{t}{2t +2} }}\right \rceil+2 < {\rm{reg}}(\Gamma) \leq \left\lceil \frac{d-n-1}{t} \right\rceil +2,
\end{equation*}
then there exist a unique subspace $\P^{t}$ of $\P^n$ and a unique rational normal curve $C$ in $\P^{t}$ such that ${\rm{reg}}(\Gamma\cap C)={\rm{reg}}(\Gamma)$.
\end{theorem}
The proof of Theorem \ref{thm:main3} is in Section 4. Regarding Problem $(*)$, if $\Gamma \subset \P^n$ is as in Theorem \ref{thm:main3} then it fails to be $({\rm reg}(\Gamma)-1)$-regular since there exists a rational normal curve $C$ in $\P^{t}$ such that ${\rm{reg}}(\Gamma\cap C)={\rm{reg}}(\Gamma)$. \\
\noindent {\bf Organization of the paper.} In $\S 2$, we review some basic facts to study the Castelnuovo-Mumford regularity of finite schemes. In $\S 3$ and $\S 4$, we give a proof of Theorem \ref{thm:main1} and a proof of Theorem \ref{thm:main2}, respectively. Finally, in $\S 5$ we prove more properties of finite schemes in linearly general position which have maximal regularity.\\
\noindent {\bf Acknowledgement.} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1002784).
\section{Preliminaries}
\noindent We fix a few notations, which we use throughout this paper.
\begin{notation and conventions}\label{notrmk:basicfacts}
Let ${\Gamma} \subset \P^n$ be a (possibly degenerate) finite subscheme, defined by a sheaf of ideals $\mathcal{I}_{\Gamma}$ in $\mathcal{O}_{\P^n}$. Also, let
$$I (\Gamma) := \bigoplus_{\ell \in \mathbb{Z}} H^0 (\P^n ,\mathcal{I}_{\Gamma} (\ell))$$
be the saturated homogeneous ideal of $\Gamma$ in the homogeneous coordinate ring $R := \mathbb{K} [x_0 , x_1 , \ldots, x_n ]$ of $\P^n$.
\renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\textrm{#1}}
\begin{description}
\setlength{\labelwidth}{13mm}
\setlength{\labelsep}{1.5mm}
\setlength{\itemindent}{0mm}
\item[$(1)$] The length of $\Gamma$, defined to be the length of $\mathcal{O}_\Gamma = \mathcal{O}_{\P^n} /\mathcal{I}_{\Gamma}$, is equal to $h^{0}(\Gamma ,\mathcal{O}_\Gamma)$. It is also called the degree of $\Gamma$. We will denote it by $\vert \Gamma \vert$.
\item[$(2)$] For each $m \in \mathbb{Z}$, let \[ \rho_m : H^0(\mathbb{P}^n , \mathcal{O}_{\mathbb{P}^n} (m))\rightarrow H^0({\Gamma} , \mathcal{O}_{{\Gamma}} (m)) \] be the natural restriction map. The Hilbert function $h_{\Gamma}$ of ${\Gamma}$ is defined by $$h_{\Gamma} (m)= \dim_{\mathbb{K}} {\rm Im} ( \rho_m ).$$ We say that $\Gamma$ is $m$-\textit{normal} if $\rho_m$ is surjective, or equivalently, if $\Gamma$ is $(m+1)$-regular. \item[$(3)$] Let $V$ be a hypersurface of $\P^n$. The intersection ${\Gamma}\cap{V}$ is defined to be the scheme defined by the ideal sheaf $(\mathcal I_{{\Gamma}}+\mathcal I_V )$. Also, \textit{the residual scheme} of ${\Gamma}$ with respect to $V$, denoted by ${\Gamma} : V$, means the scheme defined by the ideal sheaf $(\mathcal I_{{\Gamma}} : \mathcal I_V )$. \item[$(4)$] If $\dim\langle\Gamma\rangle\geq{1}$, we define $t(\Gamma)$ to be the largest integer $k$ such that $\dim \langle \Gamma'\rangle=\vert\Gamma'\vert-1$ for any subscheme $\Gamma'$ of $\Gamma$ of degree $\vert\Gamma'\vert\leq k+1$. It is elementary to see that $1\leq t(\Gamma)\leq\dim \langle \Gamma\rangle$. For example, if $\vert\Gamma\vert\geq 3$, then the statement that $t(\Gamma)=1$ is equivalent to the statement that $\Gamma$ has a trisecant line. \end{description} \end{notation and conventions} Next, we list a few well-known facts about finite subschemes in projective space. \begin{proposition}\label{Basic facts} Let ${\Gamma} \subset \P^n$ be a finite subscheme of degree $d$. Then \renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\textrm{#1}} \begin{description} \setlength{\labelwidth}{13mm} \setlength{\labelsep}{1.5mm} \setlength{\itemindent}{0mm} \item[$(1)$] If ${\Gamma}$ is $m$-normal, then any subscheme ${\Gamma}'$ of ${\Gamma}$ is also $m$-normal. \item[$(2)$] If ${\Gamma}'$ is a subscheme of ${\Gamma}$ such that $\dim~\langle{{\Gamma}'} \rangle = n'$ and $\vert{\Gamma}'\vert=d'$, then $$\dim ~\langle{{\Gamma}}\rangle \leq \min \{ n'+d-d' , n\}.$$ \item[$(3)$] Let ${\Gamma}_0$ be a subscheme of ${\Gamma}$ such that $\vert{\Gamma}_0\vert=d_0$. Then there exist subschemes ${\Gamma}_1$, ${\Gamma}_2$,$\ldots$, ${\Gamma}_{d-d_0-1}$ of $\Gamma$ such that \begin{equation*} {\Gamma}_0 \subsetneq {\Gamma}_1 \subsetneq \cdots \subsetneq {\Gamma}_{d-d_0-1} \subsetneq {\Gamma} \quad \mbox{and} \quad |{\Gamma}_i| = |{\Gamma}_0| + i. \end{equation*} \item[$(4)$] Let $V$ be a hypersurface of $\mathbb{P}^n$. Then $$|{\Gamma}| = |{\Gamma} \cap V| + |{\Gamma}:V|.$$ \item[$(5)$] $($La méthode d’Horace, \cite{Hi}$)$ Let $V$ be a hypersurface of degree $k$. If ${\Gamma} \cap V$ is $m$-normal and ${\Gamma}:V$ is $(m-k)$-normal, then ${\Gamma}$ is $m$-normal. \item[$(6)$] If $\Gamma$ is in linearly general position and lies on a rational normal curve of degree $n$, then \[ {\rm reg}({\Gamma}) =\left\lceil\frac{d-1}{n}\right\rceil+1. \] \end{description} \end{proposition} \begin{proof} For $(1) \sim (3)$, we refer the reader to \cite[Proposition 2.2]{LPW}. The proof of $(4)$ comes immediately from the following exact sequence \begin{equation*} 0 \rightarrow \big( R/(I_{{\Gamma}}:V) \big) (-k) \rightarrow R/I_{{\Gamma}} \rightarrow R/ \langle I_{{\Gamma}},V \rangle \rightarrow 0 \end{equation*} where $k$ is the degree of $V$. For $(5)$, see \cite[p.352]{Hi}. For $(6)$, see \cite[Proposition 3.4]{P} (cf. \cite[Proposition 2.2]{N}, \cite[Proposition 2.3]{NP}). \end{proof} \begin{remark}\label{remark:Horace} In this paper, we will apply the above-mentioned Horace method as follows.
Let $\Gamma$ and $V$ be as in Proposition \ref{Basic facts}.$(5)$ such that ${\rm reg}({\Gamma}) > {\rm reg}({\Gamma\cap{V}})$. Then $${\rm reg}({\Gamma}:V ) \geq {\rm reg}(\Gamma )-k.$$ Indeed, if not then $\Gamma \cap V$ is $({\rm reg}({\Gamma})-2)$-normal and $\Gamma:V$ is (${\rm reg}({\Gamma})-k-2)$-normal. Thus it follows by Proposition \ref{Basic facts}.$(5)$ that $\Gamma$ is (${\rm reg}({\Gamma})-2$)-normal, which is obviously a contradiction. \end{remark} \section{Proof of Theorem \ref{thm:main1}} \noindent This section is devoted to giving a proof of Theorem \ref{thm:main1}. We begin with a definition. \begin{definition}\label{def:RPH condition} We say that a nondegenerate finite subscheme $\Gamma \subset \P^n$ satisfies condition \textmd{RPH} (=regularity preserving hyperplane) if it admits a regularity preserving hyperplane section in the sense that there exists a hyperplane $H = \P^{n-1}$ such that the following two statements hold: \begin{enumerate} \item[$(i)$] $\Gamma\cap{H}$ is a nondegenerate subscheme of $H$; \item[$(ii)$] the equality ${\rm reg}(\Gamma \cap H) = {\rm reg}(\Gamma)$ holds. \end{enumerate} \end{definition} \begin{remark} \label{rm:RPH condition2} $(1)$ Let $H$ be a hyperplane such that ${\rm reg}(\Gamma\cap{H})={\rm reg}(\Gamma)$ but $\Gamma\cap{H}$ fails to span $H$. In such a case, we can choose another hyperplane $H'$ containing $\Gamma \cap {H}$ and such that $\Gamma\cap{H'}$ is a nondegenerate subscheme of $H'$. Then \[ {\rm reg}(\Gamma)={\rm reg}(\Gamma\cap{H})\leq {\rm reg}(\Gamma\cap{H}')\leq {\rm reg}(\Gamma). \] That is, condition \textmd{RPH} holds for $\Gamma$ if there is a hyperplane which satisfies only condition $(ii)$ in Definition \ref{def:RPH condition}. \noindent $(2)$ For example, if a finite subscheme $\Gamma \subset \P^n$ is in linearly general position and $|\Gamma| \geq n+2$, then ${\rm reg}(\Gamma)\geq 3$ and any nondegenerate hyperplane section of $\Gamma$ is $2$-regular. Thus, $\Gamma$ fails to satisfy condition \textmd{RPH}. \end{remark} Our first goal in this section is to verify that $\Gamma \subset \P^n$ in Theorem \ref{thm:main1} satisfies condition \textmd{RPH}. \begin{lemma}\label{lem:not preserving hyperplane section} Let $\Gamma \subset \P^n$ be as in Theorem \ref{thm:main1} and let $H = \P^{n-1}$ be a hyperplane such that \begin{equation*} |\Gamma \cap H | \geq n+1 \quad \mbox{and} \quad {\rm reg}(\Gamma \cap H ) < {\rm reg}(\Gamma). \end{equation*} Then the following statements hold. \smallskip \renewcommand{\descriptionlabel}[1] {\hspace{\labelsep}\textrm{#1}} \begin{description} \setlength{\labelwidth}{13mm} \setlength{\labelsep}{1.5mm} \setlength{\itemindent}{0mm} \item[$(1)$] ${\rm reg}(\Gamma ) \geq 4$. \item[$(2)$] ${\rm reg}(\Gamma:H) \geq {\rm reg}(\Gamma) -1$. \item[$(3)$] $|\Gamma:H| - \dim \langle\Gamma:H \rangle + t(\Gamma:H) \leq d-n-1$. \item[$(4)$] $t(\Gamma:H) = t(\Gamma)$. \end{description} \end{lemma} \begin{proof} $(1)$ Since $d \geq n+t(\Gamma)+2$, we have ${\rm reg}(\Gamma) \geq 4$ from the first inequalities in (\ref{eq:1}). \noindent $(2)$ See Remark \ref{remark:Horace}. \noindent $(3)$ It always holds that $t(\Gamma:H) \leq \dim \langle \Gamma:H \rangle$. This shows that \begin{align*} n-\dim \langle\Gamma:H \rangle + t(\Gamma:H) &< n+1 \\ &\leq |\Gamma \cap H| \\ &= d- |\Gamma:H|. \end{align*} Thus we get the desired inequality $|\Gamma:H| - \dim \langle\Gamma:H \rangle + t(\Gamma:H) \leq d-n-1$. \noindent $(4)$ First, note that $\dim\langle{\Gamma:H}\rangle\geq 1$ since ${\rm reg}(\Gamma:H)>1$. 
By inequality (\ref{ineq:kwak}) and Lemma \ref{lem:not preserving hyperplane section}.$(3)$, we have \begin{align*} {\rm reg}(\Gamma:H) &\leq \left\lceil \frac{|\Gamma:H|-\dim \langle \Gamma:H \rangle -1 }{t(\Gamma:H)} \right\rceil +2 \\ &\leq \left\lceil \frac{d-n-1-t(\Gamma:H)}{t(\Gamma:H)} \right\rceil +2 = \left\lceil \frac{d-n-1}{t(\Gamma:H)} \right\rceil+1. \end{align*} Then, we can combine this result with inequalities (\ref{eq:1}) and Lemma \ref{lem:not preserving hyperplane section}.$(2)$ to deduce that \begin{equation*} \left\lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil +2 \leq {\rm reg}(\Gamma)-1 \leq {\rm reg}(\Gamma:H) \leq \left\lceil \frac{d-n-1}{t(\Gamma:H)} \right\rceil+1. \end{equation*} Here, the inequality \begin{equation*} \left\lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil +2 \leq \left\lceil \frac{d-n-1}{t(\Gamma:H)} \right\rceil+1 \end{equation*} implies that $t(\Gamma:H) \leq t(\Gamma)$. It remains to show that $t(\Gamma:H) \geq t(\Gamma)$. To this end, suppose that $t(\Gamma:H) < t(\Gamma)$ for the sake of contradiction. Since $\dim\langle{\Gamma:H}\rangle\geq 1$, we have $t(\Gamma:H) \leq \dim \langle \Gamma:H \rangle$ by the definition of $t(\Gamma:H)$. Remark that \[ |\Gamma:H|> \dim\langle\Gamma:H\rangle+1 \] since ${\rm reg}(\Gamma:H)\geq 3$. If $t(\Gamma:H) < \dim \langle \Gamma:H \rangle$, then $\Gamma:H$ admits a $t(\Gamma:H)$-dimensional subspace $\Lambda$ such that $\vert\Lambda\cap{(\Gamma:H)}\vert\geq t(\Gamma:H)+2$, and hence so does $\Gamma$. This is impossible since $t(\Gamma:H)<t(\Gamma)$. Thus, we may assume that $t(\Gamma:H) = \dim \langle \Gamma:H \rangle$. Then, this immediately implies that $\dim \langle \Gamma:H \rangle$ is strictly less than $t(\Gamma)$. If $\vert \Gamma:H \vert \geq \dim \langle \Gamma:H \rangle+2$, then $t(\Gamma)\leq t(\Gamma:H)$, which contradicts our assumption. This implies that $\vert \Gamma:H \vert = \dim \langle \Gamma:H \rangle+1$ and hence ${\rm reg}(\Gamma:H)=2$. This is a contradiction since \[ {\rm reg}(\Gamma:H) \geq {\rm reg}(\Gamma)-1 \geq 3 \] by Lemma \ref{lem:not preserving hyperplane section}.$(1)$ and $(2)$. Therefore, we can deduce that $t(\Gamma:H) = t(\Gamma)$. \end{proof} \begin{theorem}\label{thm:Thm RPH} Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme of degree $d$. If $t(\Gamma) \leq n-1$ and \begin{equation}\label{eq:Thm RPH} \left\lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil +3 \leq {\rm reg}(\Gamma) \leq \left\lceil \frac{d-n-1}{t(\Gamma)} \right\rceil +2, \end{equation} then $\Gamma \subset \P^n$ satisfies condition \rm\textmd{RPH}. \end{theorem} \begin{proof} It suffices to show that there exists a hyperplane $H$ such that ${\rm reg}(\Gamma\cap{H})={\rm reg}(\Gamma)$ by $(1)$ in Remark \ref{rm:RPH condition2}. Since ${\rm reg}(\Gamma)$ satisfies the inequalities in (\ref{eq:Thm RPH}), we have $d \geq n+ t(\Gamma)+2$. Keeping this in mind, we will use induction on $d$. First, consider the initial case that $d=n+t(\Gamma)+2$. We can choose a hyperplane $H$ such that $\vert\Gamma \cap H\vert \geq n+1$ since $\Gamma$ is not in linearly general position (i.e., $t(\Gamma)\leq n-1$). Now, it is easy to see that $\vert\Gamma:H\vert \leq d-n-1 = t(\Gamma)+1$ and hence $$\dim \langle \Gamma:H \rangle \leq |\Gamma:H|-1 \leq t(\Gamma).$$ The inequality $\dim \langle \Gamma:H \rangle \leq t(\Gamma)$ implies that $\dim \langle \Gamma:H \rangle = |\Gamma:H|-1$ by the definition of $t(\Gamma)$. Thus, we conclude that $1\leq {\rm reg}(\Gamma:H)\leq 2$. On the other hand, it follows from the inequalities in (\ref{eq:Thm RPH}) that ${\rm reg}(\Gamma)=4$.
Therefore, we deduce that ${\rm reg}(\Gamma \cap H ) =4$ by the Horace method (cf. Proposition \ref{Basic facts}.(5)). Next, we assume that $d > n+t(\Gamma)+2$. In order to derive a contradiction, we suppose that there is no hyperplane $H$ such that ${\rm reg}(\Gamma\cap{H})={\rm reg}(\Gamma)$. As before, we can choose a hyperplane $H_0$ such that $\vert\Gamma \cap H_0\vert \geq n+1$ since $\Gamma$ is not in linearly general position. We have ${\rm reg}(\Gamma)-1\leq {\rm reg}(\Gamma:H_0)$ by the Horace method. We claim that there is a hyperplane $H' = \P^{n-1}$ such that ${\rm reg}(\Gamma \cap H' ) = {\rm reg}(\Gamma) -1$. To see this, consider the case that $\dim \langle \Gamma:H_0 \rangle < n$. In this case, we can choose a hyperplane $H'$ containing the subscheme $\Gamma:H_0$. Then, \begin{equation*} {\rm reg}(\Gamma)-1 \leq {\rm reg}(\Gamma:H_0) \leq {\rm reg}(\Gamma \cap H' ) < {\rm reg}(\Gamma) \end{equation*} and hence we get the desired equality \begin{equation*} {\rm reg}(\Gamma \cap H' ) = {\rm reg}(\Gamma)-1. \end{equation*} Next, consider the other case that $\dim \langle \Gamma:H_0 \rangle = n$. From Lemma \ref{lem:not preserving hyperplane section}.$(3)$, we have \begin{equation*} |\Gamma:H_0| - \dim \langle \Gamma:H_0 \rangle + t(\Gamma:H_0) \leq d-n-1. \end{equation*} Remark that $t(\Gamma:H_0) = t(\Gamma)$ by Lemma \ref{lem:not preserving hyperplane section}.$(4)$. Then we obtain \begin{equation*} \frac{|\Gamma:H_0| - \dim \langle \Gamma:H_0 \rangle -1}{t(\Gamma)+1} +1 \leq \frac{d-n-1}{t(\Gamma)+1}. \end{equation*} Thus it holds that \begin{equation*} \left \lceil \frac{|\Gamma:H_0| - \dim \langle \Gamma:H_0 \rangle -1}{t(\Gamma)+1} \right\rceil +2 \leq \left \lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil +1\leq {\rm reg}(\Gamma)-2 \leq {\rm reg}(\Gamma:H_0)-1. \end{equation*} In particular, it is shown that \begin{equation*} \left \lceil \frac{|\Gamma:H_0| - \dim \langle \Gamma:H_0 \rangle -1}{t(\Gamma)+1} \right\rceil +3 \leq {\rm reg}(\Gamma:H_0). \end{equation*} By the induction hypothesis, there is a hyperplane $H' = \P^{n-1}$ such that \[ {\rm reg}((\Gamma:H_0) \cap H' ) = {\rm reg}(\Gamma:H_0). \] It follows that \begin{equation*} {\rm reg}(\Gamma)-1 \leq {\rm reg}(\Gamma:H_0)= {\rm reg}( (\Gamma:H_0)\cap H') \leq {\rm reg}(\Gamma \cap H' ) \leq {\rm reg}(\Gamma)-1 \end{equation*} and hence we get the desired equality ${\rm reg}(\Gamma \cap H' ) = {\rm reg}(\Gamma)-1$. Moreover, we may assume that $\Gamma\cap{H'}$ is nondegenerate in $H'$ (cf. Remark \ref{rm:RPH condition2}). For this hyperplane $H'$, we obtain the following inequalities: \begin{equation*} \left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil+2\leq {\rm reg}(\Gamma)-1 \leq {\rm reg}(\Gamma:{H'}) \leq \left \lceil{\frac{\vert{\Gamma:{H'}}\vert-\dim\langle \Gamma:{H'}\rangle-1}{t(\Gamma:{H'})}}\right \rceil+2 \end{equation*} (cf. inequalities (\ref{eq:Thm RPH}) and Lemma \ref{lem:not preserving hyperplane section}.$(2)$). From the inequality \begin{equation*} \left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil+2\leq \left \lceil{\frac{\vert{\Gamma:{H'}}\vert-\dim\langle \Gamma:{H'}\rangle-1}{t(\Gamma:{H'})}}\right \rceil+2, \end{equation*} we can derive the inequality \begin{equation*} \frac{d-n-1}{t(\Gamma)+1} < \frac{|\Gamma:{H'}|- \dim \langle \Gamma:{H'} \rangle -1}{t(\Gamma)}+1 \end{equation*} since $t(\Gamma)=t(\Gamma:{H'})$ by Lemma \ref{lem:not preserving hyperplane section}.$(4)$.
Since $t(\Gamma) = t(\Gamma:{H'}) \leq \dim \langle \Gamma:{H'} \rangle$ and $|\Gamma:{H'}| = d-|\Gamma\cap{H'}|$, we can derive the following inequalities: \begin{equation*} |\Gamma\cap{H'}| < n-\dim \langle \Gamma:{H'} \rangle +t(\Gamma)+\frac{d-n-1}{t(\Gamma)+1} \leq n+ \left\lceil \frac{d-n-1}{t(\Gamma)+1} \right\rceil. \end{equation*} Note that ${\rm reg}(\Gamma\cap{H'}) \leq |\Gamma\cap{H'}|-(n-1)+1$ from the inequality (\ref{ineq:kwak}). Thus we can deduce that \begin{equation*} {\rm reg}(\Gamma) = {\rm reg}(\Gamma\cap{H'}) +1 \leq |\Gamma\cap{H'}|-(n-1)+2 < 3+\left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil, \end{equation*} which contradicts our assumption (\ref{eq:Thm RPH}). This shows that $\Gamma \subset \P^n$ satisfies condition \textmd{RPH}. \end{proof} $\newline$ \noindent {\bf Proof of Theorem \ref{thm:main1}.} By Theorem \ref{thm:Thm RPH}, there is a hyperplane $H$ such that $\Gamma\cap{H}$ is a nondegenerate subscheme of $H$ and ${\rm reg}(\Gamma\cap{H})={\rm reg}(\Gamma)$. Then, we have \begin{align*} \left \lceil{\frac{\vert\Gamma\cap{H}\vert-\dim\langle\Gamma\cap{H}\rangle-1}{t(\Gamma\cap{H})+1}}\right \rceil+2 &\leq\left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil+2 \\ &<{\rm reg}(\Gamma)\\ &={\rm reg}(\Gamma\cap{H}) \\ &\leq\left \lceil{\frac{\vert\Gamma\cap{H}\vert-\dim\langle\Gamma\cap{H}\rangle-1}{t(\Gamma\cap{H})}}\right \rceil+2. \end{align*} Since $\vert\Gamma\cap{H}\vert-\dim\langle\Gamma\cap{H}\rangle-1\leq d-n-1$, the inequality \[ \left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil+2 <\left \lceil{\frac{\vert\Gamma\cap{H}\vert-\dim\langle\Gamma\cap{H}\rangle-1}{t(\Gamma\cap{H})}}\right \rceil+2 \] implies that ${t(\Gamma\cap{H})}\leq t(\Gamma)$. As in the proof of Lemma \ref{lem:not preserving hyperplane section}.(4), we see that $t(\Gamma\cap{H})=t(\Gamma)$. If $t(\Gamma\cap{H})<n-1$, then we can use Theorem \ref{thm:Thm RPH} again for $\Gamma\cap{H}$. Therefore, we can obtain the desired existence by using Theorem \ref{thm:Thm RPH} $n-t(\Gamma)$ times. To prove the uniqueness, we suppose that there are two distinct $t(\Gamma)$-dimensional linear spaces $\Lambda_1$ and $\Lambda_2$ satisfying conditions $(i)$ and $(ii)$ in Theorem \ref{thm:main1}. Let $d_1$ and $d_2$ denote $\vert\Gamma\cap\Lambda_1\vert$ and $\vert\Gamma\cap\Lambda_2\vert$, respectively. Then \begin{equation} \left \lceil{\frac{d-n-1}{t(\Gamma)+1}}\right \rceil+3\leq {\rm reg}(\Gamma)= {\rm reg}(\Gamma\cap\Lambda_i) \leq \left \lceil{\frac{d_i-1}{t(\Gamma)}}\right \rceil+2 \end{equation} for $i=1,2$. From this, we get the inequalities $\frac{d-n-1}{t(\Gamma)+1}+1<\frac{d_i-1}{t(\Gamma)}$ for $i=1,2$. These imply that \begin{equation}\label{ieq:uniqueness} t(\Gamma)\left(\frac{d-n+t(\Gamma)}{t(\Gamma)+1}\right)+1<d_i \end{equation} for $i=1,2$. Let $r$ denote the dimension of $\Lambda_1\cap\Lambda_2$, with the convention that $r=-1$ when $\Lambda_1\cap\Lambda_2=\emptyset$. Thus, the dimension of the linear space spanned by $\Lambda_1\cup\Lambda_2$ is $2t(\Gamma)-r$. Since $\Gamma\cap\Lambda_1$ is in linearly general position in $\Lambda_1$, the degree of $\Gamma\cap\Lambda_1\cap\Lambda_2$ is less than or equal to $r+1$. We can obtain an inequality \begin{equation*} d_1+d_2-(r+1)\leq\vert(\Gamma\cap\Lambda_1)\cup(\Gamma\cap\Lambda_2)\vert\leq\vert\Gamma\cap\langle\Lambda_1\cup\Lambda_2\rangle\vert. \end{equation*} Note that $\vert\Gamma\vert-\vert\Gamma\cap(\Lambda_1\cup\Lambda_2)\vert\geq n-\dim\langle\Lambda_1\cup\Lambda_2\rangle$ since $\Gamma$ is nondegenerate in $\mathbb{P}^n$.
Then, we can deduce that \[ d-(d_1+d_2-(r+1))\geq n-(2t(\Gamma)-r). \] Combined with (\ref{ieq:uniqueness}), we can derive an inequality \[ 2t(\Gamma)\left(\frac{d-n+t(\Gamma)}{t(\Gamma)+1}\right)+1+n-2t(\Gamma)<d. \] Then we have $2t(\Gamma)(d-n+t(\Gamma))+(1+n-2t(\Gamma))(t(\Gamma)+1)<d(t(\Gamma)+1)$, and hence \[ 0<(d-n-1)(1-t(\Gamma)). \] Since $d>n+1$, we have $t(\Gamma)<1$. Obviously, this is a contradiction. This completes the proof. \qed \section{Proof of Theorem \ref{thm:main2}} \noindent This section is primarily devoted to giving a proof of Theorem \ref{thm:main2}, but also includes a proof of Theorem \ref{thm:main3}. We begin with the following interesting observation. \begin{lemma}\label{lem:lemma1} Let ${\Gamma} \subset \mathbb{P}^n$ be a finite subscheme of degree $d$ in linearly general position and let $m$ be a nonnegative integer such that $$d \geq (m+3)n \quad \mbox{and} \quad {\rm reg}({\Gamma}) = \left\lceil\frac{d-1}{n}\right\rceil-{m}+1.$$ Let $Q \subset \mathbb{P}^n$ be a quadric such that $\vert{\Gamma}\cap{Q}\vert\geq ({m}+3)n$. Then $${\rm reg}({\Gamma}\cap{Q})={\rm reg}({\Gamma}) \quad \mbox{and} \quad \vert{\Gamma}\cap{Q}\vert>d-({m}+1)n.$$ \end{lemma} \begin{proof} If $\Gamma \subset Q$, then we are done. Suppose that $\Gamma \nsubseteq Q$, and hence $\vert{\Gamma}:{Q}\vert >0$. Note that ${\Gamma}\cap{Q}$ and ${\Gamma}:Q$ are in linearly general position. Also, $$d=|\Gamma| = |\Gamma \cap Q| + |\Gamma :Q | \geq (m+3)n+|\Gamma:Q|$$ and hence $${\rm reg}({\Gamma}) = \left\lceil\frac{d-1}{n}\right\rceil-{m}+1 \geq \left\lceil\frac{|\Gamma:Q|-1}{n}\right\rceil+4.$$ Concerning ${\Gamma}:Q$, all cases can be divided into the following three cases:\\ \begin{enumerate} \item[$(i)$] $\vert{\Gamma}:{Q}\vert = 1$; or \item[$(ii)$] $2 \leq \vert{\Gamma}:{Q}\vert \leq n$; or \item[$(iii)$] $n+1\leq\vert{\Gamma}:{Q}\vert$.\\ \end{enumerate} \noindent In case $(i)$, we have ${\rm reg}({\Gamma}:{Q})=1$ and ${\rm reg}({\Gamma}) \geq{4}$. In case $(ii)$, we have ${\rm reg}({\Gamma}:{Q})=2$ and ${\rm reg}({\Gamma}) \geq 5$. In case $(iii)$, it holds that $${\rm reg}(\Gamma : Q ) \leq \left\lceil\frac{d-|\Gamma \cap Q|-1}{n}\right\rceil+1 \leq \left\lceil\frac{d-(m+3)n-1}{n}\right\rceil+1 = {\rm reg}(\Gamma)-3. $$ Thus, by Proposition \ref{Basic facts}.$(5)$ and Remark \ref{remark:Horace}, it must be true that $${\rm reg}({\Gamma}\cap{Q})={\rm reg}({\Gamma}).$$ To prove the remaining part, let us suppose on the contrary that $$\vert{\Gamma}\cap{Q}\vert\leq d-({m}+1)n.$$ Then, we have \begin{equation*} {\rm reg}({\Gamma})= {\rm reg}({\Gamma}\cap{Q}) \leq \left\lceil \frac{d-({m}+1)n-1}{n}\right\rceil+1 = {\rm reg}({\Gamma})-1. \end{equation*} This is a contradiction, and so the desired inequality $\vert{\Gamma}\cap{Q}\vert>d-({m}+1)n$ holds. \end{proof} \begin{lemma}\label{lem:4.2} Let ${\Gamma} \subset \P^n$ be a nondegenerate finite subscheme of degree $d$ in linearly general position such that $$\left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq {\rm reg}({\Gamma}).$$ Let $Q \subset \P^n$ be a quadric such that $\vert Q\cap{\Gamma}\vert\geq 2n+1$ and ${\rm reg}({\Gamma}\cap{Q})<{\rm reg}({\Gamma})$. Then $$\left\lceil\frac{\vert{\Gamma}:Q\vert-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq {\rm reg}({\Gamma}:Q).$$ Also, let $m$ and $m'$ be nonnegative integers such that $${\rm reg}({\Gamma})=\left\lceil\frac{d-1}{n}\right\rceil-{m}+1 \quad \text{and} \quad {\rm reg}({\Gamma}:Q)=\left\lceil\frac{\vert{\Gamma}:{Q}\vert-1}{n}\right\rceil-{m'}+1.$$ Then $m \geq m'$. 
\end{lemma} \begin{proof} Since ${\rm reg}({\Gamma}\cap{Q})<{\rm reg}({\Gamma})$, Proposition \ref{Basic facts}.$(5)$ implies that ${\rm reg}({\Gamma}:Q)\geq{\rm reg}({\Gamma})-2$. Also, it follows from the assumption $\vert{\Gamma}\cap{Q}\vert\geq 2n+1$ that $\vert{\Gamma}:Q\vert\leq d-2n-1$. Hence we have the following : \begin{align*} {\rm reg}({\Gamma}:Q) & \geq {\rm reg}({\Gamma})-2 \\ & \geq \left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+1 = \left\lceil\frac{d-2(n+\frac{n}{2n+2})-1}{n+\frac{n}{2n+2}}\right\rceil+3 \\ & \geq \left\lceil\frac{d-2n-1-1}{n+\frac{n}{2n+2}}\right\rceil+3 \\ & \geq \left\lceil\frac{\vert{\Gamma}:Q\vert-1}{n+\frac{n}{2n+2}}\right\rceil+3. \end{align*} Next, we will verify the second statement. We first note that $$\left\lceil\frac{\vert{\Gamma}:Q\vert}{n}\right\rceil-{m}+1\leq\left\lceil\frac{d-1}{n}\right\rceil-{m}-1 ={\rm reg}({\Gamma})-2 $$ since $\vert{\Gamma}:Q\vert\leq d-2n-1$. Now, it follows from the inequality ${\rm reg}({\Gamma})-2\leq {\rm reg}({\Gamma}:Q)$ that $$\left\lceil\frac{\vert{\Gamma}:Q\vert}{n}\right\rceil-{m}+1 \leq {\rm reg}({\Gamma}:Q)=\left\lceil\frac{\vert{\Gamma}:{Q}\vert-1}{n}\right\rceil-{m'}+1.$$ Obviously, this implies the desired inequality ${m} \geq {m}'$. \end{proof} Now, we are ready to give a proof of Theorem \ref{thm:main2}.\\ \noindent {\bf Proof of Theorem \ref{thm:main2}.} First, note that we can write $${\rm reg}({\Gamma})=\left\lceil\frac{d-1}{n}\right\rceil-{m}+1 \mbox{ for a nonnegative integer} \ m$$ since ${\rm reg}({\Gamma})\leq\left\lceil\frac{d-1}{n}\right\rceil+1$. Then, it follows from $$\left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq {\rm reg}({\Gamma})=\left\lceil\frac{d-1}{n}\right\rceil-{m}+1$$ that \begin{equation*} \frac{d-1}{n+\frac{n}{2n+2}}+3<\frac{d-1}{n}-{m}+2. \end{equation*} Thus, we can deduce that \begin{equation}\label{ineq:3.1} (1+{m})(2n^2+3n)+1<d. \end{equation} Next, we will show that \\ \begin{enumerate} \item[$(**)$] there is a unique rational normal curve $C$ of degree $n$ such that ${\rm reg}(\Gamma) = {\rm reg}(\Gamma \cap C)$.\\ \end{enumerate} \noindent Before proving the above statement $(**)$, note that $(**)$ implies the two desired properties $$\rho(\Gamma)> d-(m+1)n \quad \mbox{and} \quad \rho(\Gamma)=\vert\Gamma\cap{C}\vert.$$ Indeed, we have \[ \left\lceil\frac{d-1}{n}\right\rceil-m+1={\rm reg}({\Gamma})={\rm reg}(\Gamma\cap{C})\leq\left\lceil\frac{\vert \Gamma\cap{C}\vert-1}{n}\right\rceil+1. \] Hence the first statement, $\rho(\Gamma)> d-(m+1)n$, comes immediately from \[ \frac{d-1}{n}-m+1\leq\frac{\vert \Gamma\cap{C}\vert-1}{n}+2. \] For the second statement, $\rho(\Gamma)=\vert\Gamma\cap{C}\vert$, assume that there is a rational normal curve $C'$ of degree $n$ such that $\vert\Gamma\cap{C'}\vert>\vert\Gamma\cap{C}\vert$. Then, we have \[ {\rm reg}({\Gamma}) = {\rm reg}(\Gamma\cap{C}) = \left\lceil\frac{\vert \Gamma\cap{C}\vert-1}{n}\right\rceil+1 \leq \left\lceil\frac{\vert \Gamma\cap{C'}\vert-1}{n}\right\rceil+1 = {\rm reg}(\Gamma\cap{C'}) \leq {\rm reg}(\Gamma). \] Thus, it follows immediately that $${\rm reg}(\Gamma\cap{C'})={\rm reg}(\Gamma).$$ This contradicts the uniqueness of $C$. As a result, it follows that $\rho(\Gamma)=\vert\Gamma\cap{C}\vert$. Therefore, it suffices to prove $(**)$. From now on, we will prove $(**)$ by showing the existence and the uniqueness of the rational normal curve $C$. 
\\ \noindent \underbar{\textmd{Existence :}} To prove the existence part in $(**)$, we will proceed in three steps.\\ \textmd{Step 1.}\quad We will construct a subscheme ${\Gamma}'$ of ${\Gamma}$ with $\vert{\Gamma}'\vert \geq 2{m}{n}+4n+3$ such that \\ \begin{enumerate} \item[]{(\stars{3})} ${\rm reg}({\Gamma}'\cap{Q})={\rm reg}({\Gamma}')$ for any quadric $Q \subset \mathbb{P}^n$ with $\vert Q\cap{\Gamma}'\vert \geq 2n+1$.\\ \end{enumerate} If ${\Gamma}$ satisfies the property $($\stars{3}$)$, then we are done. Suppose that ${\Gamma}$ fails to satisfy $($\stars{3}$)$. That is, there exists a quadric $Q_0$ such that $$\vert{\Gamma}\cap{Q_0}\vert \geq 2n+1 \quad \mbox{and} \quad {\rm reg}({\Gamma}) > {\rm reg}({\Gamma}\cap{Q_0}).$$ Let ${\Gamma}_1$ denote subscheme ${\Gamma}:{Q_0}$. If $\Gamma_1$ fails to satisfy $($\stars{3}$)$, then there exists a quadric $Q_1$ such that $$\vert{\Gamma_1}\cap{Q_1} \vert \geq 2n+1 \quad \mbox{and} \quad {\rm reg}({\Gamma_1}) > {\rm reg}({\Gamma_1}\cap{Q_1}).$$ In this way, setting ${\Gamma}={\Gamma}_0$, we can obtain a subscheme ${\Gamma}_i$ of ${\Gamma}$ and a quadric $Q_{i-1}$ inductively. Indeed, if ${\Gamma}_{i-1}$ fails to satisfy $($\stars{3}$)$ then there exists a quadric $Q_{i-1}$ such that $$\vert{\Gamma}_{i-1}\cap{Q_{i-1}}\vert \geq 2n+1 \quad \mbox{and} \quad {\rm reg}({\Gamma}_{i-1}) > {\rm reg}({\Gamma}_{i-1}\cap{Q_{i-1}}).$$ Then we define ${\Gamma}_i$ as the subscheme ${\Gamma}_{i-1}:Q_{i-1}$ of $\Gamma_{i-1}$. From our construction, it holds that $$\vert{\Gamma}_{i}\vert\leq{d-(2n+1)i}.$$ Also, it holds by Proposition \ref{Basic facts}.$(5)$ that $${\rm reg}({\Gamma}_i) \geq {\rm reg}({\Gamma}_{i-1})-2.$$ It follows that ${\rm reg}({\Gamma}_i) \geq {\rm reg}({\Gamma})-2i$, and hence we obtain \begin{align*} \left\lceil\frac{d-1}{n}\right\rceil-{m}+1-2i & \quad = \quad {\rm reg}({\Gamma})-2{i} \\ & \quad \leq \quad {\rm reg}({\Gamma}_i) \\ & \quad \leq \quad \left\lceil\frac{|\Gamma_i|-1}{n}\right\rceil+1 \\ & \quad \leq \quad \left\lceil\frac{d-(2n+1)i-1}{n}\right\rceil+1 \\ & \quad = \quad \left\lceil\frac{d-i-1}{n}\right\rceil-2i+1. \end{align*} The inequality $$\left\lceil\frac{d-1}{n}\right\rceil-{m}-2i+1\leq\left\lceil\frac{d-i-1}{n}\right\rceil-2i+1$$ implies that $i<({m}+1)n$. That is, there exists an integer $i<({m}+1)n$ such that the subscheme ${\Gamma}_i$ of ${\Gamma}$ satisfies the property $($\stars{3}$)$. With this $i$ fixed, we need to show that \begin{equation}\label{ineq:Gamma i} \vert{\Gamma}_i\vert \geq 2mn+4n+3. \end{equation} To show this, we first claim that \begin{equation}\label{ineq:4.2} \vert{\Gamma}_i\vert \geq d-(2n+1)(({m}+1)n-1) \end{equation} (It is clear that the number $d-(2n+1)(({m}+1)n-1)>0$ by our assumptions). For the sake of contradiction, suppose that \begin{equation*} \vert{\Gamma}_i\vert \leq d-(2n+1)(({m}+1)n-1)-1=d-(2n+1)({m}+1)n+2n. \end{equation*} Then we have \begin{equation}\label{ineq:3.4} {\rm reg}({\Gamma}_i)\leq\left\lceil\frac{\vert{\Gamma}_i\vert-1}{n}\right\rceil+1\leq\left\lceil\frac{d-1}{n}\right\rceil-m-2n(m+1)+2. \end{equation} On the other hand, the inequalities ${\rm reg}({\Gamma}_i) \geq {\rm reg}({\Gamma})-2{i}$ and $i<(m+1)n$ imply that \begin{equation}\label{ineq:3.5} {\rm reg}({\Gamma}_i) \geq {\rm reg}({\Gamma})-2i = \left\lceil\frac{d-1}{n}\right\rceil-{m}-2i+1 \geq \left\lceil\frac{d-1}{n}\right\rceil-{m}-2n(m+1)+3. \end{equation} It is obvious that (\ref{ineq:3.4}) and (\ref{ineq:3.5}) contradict each other. 
Now, it follows from inequalities in (\ref{ineq:3.1}) and (\ref{ineq:4.2}) that $$\vert{\Gamma}_i\vert \geq (1+{m})(2n^2+3n)+2-(2n+1)(({m}+1)n-1)=2{m}{n}+4n+3.$$ Therefore, this ${\Gamma}_i$ is the set ${\Gamma}'$ we are looking for.\\ \textmd{Step 2.}\quad We will show that\\ \begin{enumerate} \item[]{\bf(\stars{4})} ${\rm reg}({\Gamma}\cap{Q})={\rm reg}({\Gamma})$ for any quadric $Q$ in $\mathbb{P}^n$ such that $\vert Q\cap{\Gamma}'\vert\geq 2n+1$.\\ \end{enumerate} \noindent To prove $($\stars{4}$)$, we will apply Lemma \ref{lem:lemma1} to ${\Gamma}'$ constructed in Step 1. Since ${\Gamma}'$ is in linearly general position in $\mathbb{P}^n$, we can write $${\rm reg}({\Gamma}')=\left\lceil\frac{\vert{\Gamma}'\vert-1}{n}\right\rceil-{m'}+1$$ for some nonnegative integer $m'$. Let $Q$ be a quadric in $\mathbb{P}^n$ such that $\vert{Q\cap{{\Gamma}'}}\vert\geq 2n+1$. Then we have ${\rm reg}({\Gamma}'\cap{Q})={\rm reg}({\Gamma}')$ by $($\stars{3}$)$. Also, we have $$\vert{\Gamma}'\cap{Q}\vert>\vert{\Gamma}'\vert-({m}'+1)n$$ by Lemma \ref{lem:lemma1}. Since $\vert{\Gamma}'\vert\geq2{m}{n}+4n+3$, it follows that $$\vert{\Gamma}'\cap{Q}\vert>2{m}{n}-{m}'{n}+3n+3.$$ Next, we will show that ${m}\geq{m}'$. For this purpose, we first recall the construction of ${\Gamma}'$ described in Step 1. If we write $${\rm reg}({\Gamma}_j)=\left\lceil\frac{\vert{\Gamma}_{j}\vert-1}{n}\right\rceil-m_{j}+1$$ for some nonnegative integer $m_j$, then $m_{j+1}\leq m_{j}\leq m$ by Lemma \ref{lem:4.2}. This shows that ${m}\geq{m}'$. Thus, we have $$\vert{\Gamma}\cap{Q}\vert\geq\vert{\Gamma}'\cap{Q}\vert>{m}{n}+3n+3.$$ Now, we apply Lemma \ref{lem:lemma1} once again to deduce that ${\rm reg}({\Gamma})={\rm reg}({\Gamma}\cap{Q})$.\\ \textmd{Step 3.}\quad Setting $X_0={\Gamma}$, we will choose inductively a subscheme $X_i$ of ${\Gamma}$ such that \begin{equation}\label{ineq:Xi Construction} \left\lceil\frac{\vert{X_i}\vert-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq {\rm reg}(X_i)={\rm reg}({\Gamma}) \end{equation} as follows. Let $X_i$ be a subscheme of ${\Gamma}$ satisfying (\ref{ineq:Xi Construction}). If $X_i$ is contained in a rational normal curve, then the proof is completed. Suppose not. Then, we have $$h_{X_i}(2)>2n+1$$ by \cite[Theorem 3.2 and Theorem 4]{EH}. Also, if we write $${\rm reg}(X_i) = \left\lceil\frac{\vert{X_i}\vert-1}{n}\right\rceil-m_i+1,$$ then $$\vert X_i\vert > (1+{m}_{i})(2n^2+3n)+1$$ by (\ref{ineq:3.1}). By Step 1 and Step 2, there exists a subscheme $X_i'$ of $X_i$ such that $${\rm reg}(X_i\cap{Q})={\rm reg}(X_i) \quad \mbox{for any quadric} \quad Q \subset \mathbb{P}^n \quad \mbox{with} \quad \vert Q\cap{X_i'}\vert \geq 2n+1.$$ Choose a subscheme $A_{i}$ of $X_i'$ with $\vert A_i\vert=2n+1$. Then $h_{A_i}(2)=2n+1$ since $A_i$ is $3$-regular. Thus, we may deduce that there exists a quadric $Q_i$ such that $Q_i$ contains $A_i$ but not $X_i$. We set $$X_{i+1}:=X_i\cap{Q_i}.$$ Since $2n+1\leq\vert Q_i\cap{X_i'}\vert$, we have $$\left\lceil\frac{\vert{X}_{i+1}\vert-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq\left\lceil\frac{\vert{X}_i\vert-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq {\rm reg}({X}_{i})={\rm reg}({X}_{i+1}).$$ Note that the sequence $\{\vert X_i\vert\}$ is strictly decreasing for $i$. However, $\{\vert X_i\vert\}$ has a lower bound since \[ \left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3\leq \left\lceil\frac{d-1}{n}\right\rceil-m+1={\rm reg}({\Gamma})={\rm reg}(X_i)\leq\left\lceil\frac{\vert X_i\vert-1}{n}\right\rceil+1. 
\] After a finite number of steps (this number is less than or equal to $mn+n-1$), $X_i$ should lie on a rational normal curve and ${\rm reg}(X_i)={\rm reg}({\Gamma})$.\\ \noindent \underbar{\textmd{Uniqueness :}} Suppose that there are two different rational normal curves $C_1$ and $C_2$ in $\mathbb{P}^n$ such that \[ {\rm reg}({\Gamma}\cap{C_1})={\rm reg}({\Gamma}\cap{C_2}) = {\rm reg}({\Gamma}). \] Then \[ \left\lceil\frac{\vert{{\Gamma}\cap{C_i}}\vert-1}{n}\right\rceil+1 ={\rm reg}({\Gamma}\cap{C_i}) = {\rm reg}({\Gamma}) \geq \left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3 \] for $i=1,2$. Thus, it holds that $$\frac{\vert{{\Gamma}\cap{C_i}}\vert-1}{n}+2 > \frac{d-1}{n+\frac{n}{2n+2}}+3$$ and hence $$\vert{{\Gamma}\cap{C_i}}\vert \geq \frac{(2n+2)(d-1)}{2n+3}+n+2.$$ Since the degree of the scheme-theoretic intersection ${{C_1}\cap{C_2}}$ is at most $n+2$ (cf. \cite[Theorem2.1]{EH1}), we have $$\vert{{\Gamma}\cap{C_1}\cap{C_2}}\vert\leq n+2.$$ Then, we have \begin{align*} \vert{ ( \Gamma \cap C_1 ) \cup ( \Gamma \cap C_2 ) } \vert & \quad = \quad \vert{{\Gamma}\cap{C_1}}\vert+\vert{{\Gamma}\cap{C_2}}\vert-\vert{{\Gamma}\cap{C_1}\cap{C_2}}\vert \\ & \quad \geq \quad \frac{2(2n+2)(d-1)}{2n+3}+n+2. \end{align*} Here the last term on the right is strictly bigger than $d$, which is a contradiction. This completes the proof. \qed\\ \noindent {\bf Proof of Theorem \ref{thm:main3}.} If $t(\Gamma)=n$, then we apply Theorem \ref{thm:main2} to $\Gamma$. Now, suppose that $t(\Gamma)<n$. Then we first apply Theorem \ref{thm:main1} to $\Gamma$. Thus there exists a unique subspace $\mathbb{P}^{t(\Gamma)}$ of $\mathbb{P}^n$ such that $${\rm{reg}}(\Gamma)={\rm{reg}}(\Gamma\cap\mathbb{P}^{t(\Gamma)}).$$ Then the proof is completed by applying Theorem \ref{thm:main2} to $\Gamma\cap{\mathbb{P}}^{t(\Gamma)}$. \qed\\ \section{Finite schemes in linearly general position having maximal regularity} \noindent This section is devoted to studying further properties of finite schemes in linearly general position whose regularities are maximal. We begin with proving Corollary \ref{cor:maximal case}.\\ \noindent {\bf Proof of Corollary \ref{cor:maximal case}.} Suppose that ${\rm{reg}}({\Gamma})=\left\lceil\frac{d-1}{n}\right\rceil+1$. Since $d \geq 4n^2 +6n+1$, it holds that $$\left\lceil\frac{d-1}{n+\frac{n}{2n+2}}\right\rceil+3 \leq{\rm reg}({\Gamma}) \leq \left\lceil\frac{d-1}{n}\right\rceil+1 $$ and hence Theorem \ref{thm:main2} says that there is a unique rational normal curve $C$ such that $\rho (\Gamma) = \vert\Gamma\cap{C}\vert$ and ${\rm{reg}}({\Gamma})={\rm{reg}}({\Gamma\cap{C}})$. Then we have \[ \left\lceil\frac{d-1}{n}\right\rceil+1={\rm{reg}}({\Gamma})={\rm{reg}}({\Gamma\cap{C}}) = \left\lceil\frac{\vert\Gamma\cap{C}\vert-1}{n}\right\rceil+1. \] Since we write $d=nq+r+2$ for $0\leq{r}\leq{n-1}$, it follows that $\rho{(\Gamma)} = \vert\Gamma\cap{C}\vert \geq d-r$. Conversely, suppose that $\rho(\Gamma)\geq d-r$ and so there exists a rational normal curve $C$ such that $\vert\Gamma\cap{C}\vert \geq d-r$. 
Since $d=nq+r+2$ for $0\leq{r}\leq{n-1}$, it follows that $${\rm{reg}}({\Gamma\cap{C}}) = \left\lceil\frac{\vert\Gamma\cap{C}\vert-1}{n}\right\rceil+1 \geq \left\lceil\frac{d-r-1}{n}\right\rceil+1 = \left\lceil\frac{d-1}{n}\right\rceil+1.$$ But it also holds that $${\rm{reg}}({\Gamma\cap{C}})\leq{\rm{reg}}({\Gamma})\leq\left\lceil\frac{d-1}{n}\right\rceil+1.$$ Therefore, it is shown that $$ {\rm{reg}}({\Gamma}) = {\rm{reg}}({\Gamma\cap{C}}) = \left\lceil\frac{d-1}{n}\right\rceil+1.$$ This completes the proof. \qed\\ Let $\Gamma \subset \P^n$ be a nondegenerate finite subscheme. When we add some points to $\Gamma$, its Castelnuovo-Mumford regularity may either increase or remain the same. In the following theorem, we find a condition under which the latter case occurs. \begin{theorem}\label{thm:add several points} Let $m \geq 1$ be an integer and let $\Gamma \subset \P^n$ be a finite scheme of degree $$d \geq 4n^2 + 6n+1+ 2(n+1)m$$ which is contained in a rational normal curve $C$ of degree $n$. If $A \subset \P^n \setminus C$ is a finite set such that $|A| \leq m$ and $\Gamma \cup A$ is in linearly general position, then $$\reg (\Gamma \cup A ) = \reg (\Gamma) = \left\lceil\frac{d-1}{n}\right\rceil+1.$$ \end{theorem} To give a proof of Theorem \ref{thm:add several points}, we begin with the following lemma. \begin{lemma}\label{lem:adding points} Let ${\Gamma}$ be a nondegenerate finite subscheme of degree $d\geq{4n^2+6n+1}$ in linearly general position in $\mathbb{P}^n$. If there is a rational normal curve $C$ in $\mathbb{P}^n$ such that $$\frac{2n+2}{2n+3}(d-1)+2n+1\leq\vert{\Gamma\cap{C}}\vert,$$ then ${\rm reg}({\Gamma}\cap{C})={\rm reg}({\Gamma})$. \end{lemma} \begin{proof} Our assumptions imply that $$\left\lceil\frac{2n+2}{2n^2+3n}(d-1)\right\rceil+3 \leq \left\lceil\frac{\vert{\Gamma\cap{C}}\vert-1}{n}\right\rceil+1 = {\rm reg}({\Gamma}\cap{C})\leq{\rm reg}({\Gamma}).$$ Then we can apply Theorem \ref{thm:main2}, and get a rational normal curve $C'$ such that $${\rm reg}({\Gamma}\cap{C'})={\rm reg}({\Gamma}).$$ Now, one can show that the two rational normal curves $C$ and $C'$ are equal by the same argument as in the proof of the uniqueness part of Theorem \ref{thm:main2}. Thus, we get the desired equality ${\rm reg}({\Gamma}\cap{C})={\rm reg}({\Gamma})$. \end{proof} \noindent {\bf Proof of Theorem \ref{thm:add several points}.} By Lemma \ref{lem:adding points}, it suffices to show that $$\frac{2n+2}{2n+3}(d+|A|-1)+2n+1 \leq d.$$ This inequality is equivalent to $$4n^2 + 6n+1 +2(n+1)|A| \leq d,$$ which holds by our assumption on $d$ since $|A| \leq m$. This completes the proof.\qed \\ \begin{example} Let $C \subset \P^5$ be a rational normal curve of degree $5$ and let $\Gamma$ be a subscheme of $C$ of length $10000$. Theorem \ref{thm:add several points} says that if $A \subset \P^5 \setminus C$ is a finite set such that $|A| \leq 822$ and $\Gamma \cup A \subset \P^5$ is in linearly general position, then $\reg (\Gamma \cup A ) = \reg (\Gamma) =2001$. \end{example} Finally, we provide a cohomological characterization of finite schemes in uniform position of maximal regularity. More precisely, let $\Gamma \subset \P^n$ be a finite scheme in linearly general position of degree $d \geq 4n^2 +6n+1$ such that $\reg (\Gamma) =\left\lceil\frac{d-1}{n}\right\rceil+1$. If we write \begin{equation*} d = nq+r+2 \quad \mbox{for some} \quad 0 \leq r \leq n-1, \end{equation*} then $\reg (\Gamma) =q+2$ and hence $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) > 0$.
Furthermore, if $\Gamma$ lies on a rational normal curve, then one can easily show that $$h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1.$$ The following theorem shows that the converse is also true. \begin{theorem}\label{thm:coh char max reg} Let ${\Gamma} \subset \P^n$, $n \geq 2$, be a finite subscheme of degree $d \geq 4n^2+6n+1$ in linearly general position. Write $d = nq+r+2$ for some $0 \leq r \leq n-1$. Then $$h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) \leq r+1.$$ Moreover, the following two statements are equivalent: \begin{enumerate} \item[$(i)$] $\Gamma$ lies on a rational normal curve. \item[$(ii)$] $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1$. \end{enumerate} In particular, if $\Gamma$ is a finite set such that $h^1 (\P^n , \mathcal{I}_{\Gamma} (q))>0$, then the following two statements are equivalent: \begin{enumerate} \item[$(i)$] $\Gamma$ is in uniform position. \item[$(ii)$] $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1$. \end{enumerate} \end{theorem} \begin{proof} Note that $h^1 (\P^n , \mathcal{I}_{\Gamma} (q))=0$ if and only if ${\rm reg}({\Gamma})<\left\lceil\frac{d-1}{n}\right\rceil+1$. So if $h^1 (\P^n , \mathcal{I}_{\Gamma} (q))=0$, then we are done. Now, we assume that $h^1 (\P^n , \mathcal{I}_{\Gamma} (q))>0$, or equivalently, that ${\rm reg}({\Gamma}) = \left\lceil\frac{d-1}{n}\right\rceil+1$. Let $C \subset \P^n$ be the rational normal curve of degree $n$ such that $\rho (\Gamma) = | \Gamma_0 |$ where $\Gamma_0 = \Gamma \cap C$ (cf. Theorem \ref{thm:main2}). Then $|\Gamma_0 | = nq+2+s$ for some $0 \leq s \leq r$ (cf. Corollary \ref{cor:maximal case}), and we have the exact sequence $$0 \rightarrow \mathcal{I}_C \rightarrow \mathcal{I}_{\Gamma_0} \rightarrow \mathcal{O}_{C} (-\Gamma_0 ) \rightarrow 0,$$ which enables us to verify that $$I(C)_q = I(\Gamma_0 )_q \quad \mbox{and} \quad h^1 (\P^n , \mathcal{I}_{\Gamma_0} (q))=s+1.$$ Also, we have the exact sequence $$0 \rightarrow \mathcal{I}_{\Gamma} \rightarrow \mathcal{I}_{\Gamma_0} \rightarrow \mathcal{O}_{\Gamma} (-\Gamma_0 ) \rightarrow 0$$ of coherent sheaves on $\P^n$. This induces the following cohomology long exact sequence: $$H^0 (\P^n , \mathcal{I}_{\Gamma_0} (q)) \rightarrow H^0 (\Gamma , \mathcal{O}_{\Gamma} (-\Gamma_0 ) \otimes \mathcal{O}_{\P^n} (q)) \rightarrow H^1 (\P^n , \mathcal{I}_{\Gamma} (q)) \rightarrow H^1 (\P^n , \mathcal{I}_{\Gamma_0} (q)) \rightarrow 0$$ Then it follows that $$h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) \leq h^0 (\Gamma , \mathcal{O}_{\Gamma} (-\Gamma_0 ) \otimes \mathcal{O}_{\P^n} (q)) + h^1 (\P^n , \mathcal{I}_{\Gamma_0} (q)) = (r-s)+(s+1) = r+1.$$ Moreover, $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1$ if and only if the homomorphism $$H^0 (\P^n , \mathcal{I}_{\Gamma_0} (q)) \rightarrow H^0 (\Gamma , \mathcal{O}_{\Gamma} (-\Gamma_0 ) \otimes \mathcal{O}_{\P^n} (q))$$ is the zero map. Since $\Gamma_0 = \Gamma \cap C$ and $I(C)_q = I(\Gamma_0 )_q$, this can happen exactly when $\Gamma = \Gamma_0$ and hence $\Gamma$ is contained in the rational normal curve $C$. For the last statement, first assume that $\Gamma$ is in uniform position. Then $\Gamma$ lies on a rational normal curve of degree $n$ by Theorem \ref{thm:Nagel}. Thus, $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1$. Conversely, assume that $h^1 (\P^n , \mathcal{I}_{\Gamma} (q)) = r+1$. Then ${\rm reg}({\Gamma}) = \left\lceil\frac{d-1}{n}\right\rceil+1$ and hence $\Gamma$ lies on a rational normal curve of degree $n$ by Theorem \ref{thm:Nagel}.$(1)$. 
Obviously, any finite subset of a rational normal curve is in uniform position, which completes the proof. \end{proof} \begin{thebibliography}{0000000} \bibitem[C]{C} G. Castelnuovo, {\em Ricerche di geometria sulle curve algebraiche}, Atti R. Accad. Sci. Torino 24 (1889), 196--223. \bibitem[EG]{EG} D. Eisenbud and S. Goto, {\em Linear free resolutions and minimal multiplicity}, J. Algebra 88 (1984), no. 1, 89--133. \bibitem[EH]{EH} D. Eisenbud and J. Harris, {\em Finite projective schemes in linearly general position}, J. Algebraic Geom. 1 (1992), no. 1, 15--30. \bibitem[Ha]{Ha} J. Harris, {\em Curves in projective space. With the collaboration of David Eisenbud}, Sém. Math. Sup., 85. Presses de l'Université de Montréal, Montreal, QC, 1982. 138 pp. \bibitem[Hi]{Hi} A. Hirschowitz, {\em La m\'{e}thode d'Horace pour l'interpolation \`{a} plusieurs variables}, Manuscripta Math. 50 (1985), 337--388. \bibitem[K]{K} S. Kwak, {\em Generic projections, the equations defining projective varieties and Castelnuovo regularity}, Math. Zeit. 234 (2000), no. 3, 413--434. \bibitem[LPW]{LPW} W. Lee, E. Park and Y. Woo, {\em Regularity and multisecant lines of finite schemes}, Int. Math. Res. Not. 2019 (2019), no. 6, 1725--1743. \bibitem[N]{N} U. Nagel, {\em Arithmetically Buchsbaum divisors on varieties of minimal degree}, Trans. Amer. Math. Soc. 351 (1999), no. 11, 4381--4409. \bibitem[NP]{NP} U. Nagel and Y. Pitteloud, {\em On graded Betti numbers and geometrical properties of projective varieties}, Manuscripta Math. 84 (1994), no. 3-4, 291--314. \bibitem[P]{P} E. Park, {\em On syzygies of divisors on rational normal scrolls}, Math. Nachr. 287 (2014), no. 11-12, 1383--1393. \bibitem[TV]{TV} N. V. Trung and G. Valla, {\em Degree bounds for the defining equations of arithmetically Cohen-Macaulay varieties}, Math. Ann. 281 (1988), 209--218. \end{thebibliography} \end{document}
2412.15366v1
http://arxiv.org/abs/2412.15366v1
Capacity and PAPR Analysis for MIMO Faster-than-Nyquist Signaling with High Acceleration
\documentclass[journal]{IEEEtran} \IEEEoverridecommandlockouts \usepackage{makecell} \usepackage{cite} \usepackage{amsmath,amssymb,amsfonts} \DeclareMathOperator*{\argmax}{arg\,max} \usepackage{algorithmic} \usepackage{graphicx} \usepackage{textcomp} \usepackage{xcolor} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \usepackage{bbm} \usepackage{bm} \newtheorem{proof}{Proof} \newtheorem{corollary}{Corollary} \usepackage{bbm} \begin{document} \setlength{\textfloatsep}{0.11cm} \setlength{\baselineskip}{0.42cm} \setlength{\abovecaptionskip}{1mm} \title{Capacity and PAPR Analysis for MIMO Faster-than-Nyquist Signaling with High Acceleration} \author{Zichao~Zhang,~\IEEEmembership{Student Member,~IEEE,} Melda~Yuksel,~\IEEEmembership{Senior Member,~IEEE,} Gokhan M. Guvensen, Halim~Yanikomeroglu,~\IEEEmembership{Fellow,~IEEE} \thanks{This work was funded in part by the Scientific and Technological Research Council of Turkey, TUBITAK, under grant 122E248, and in part by a Discovery Grant awarded by the Natural Sciences and Engineering Research Council of Canada (NSERC).} \thanks{Z. Zhang and H. Yanikomeroglu are with the Department of Systems and Computer Engineering at Carleton University, Ottawa, ON, K1S 5B6, Canada e-mail: [email protected], [email protected].} \thanks{M. Yuksel and G. Guvensen are with the Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, 06800, Turkey, e-mail: {ymelda, guvensen}@metu.edu.tr.} } \maketitle \begin{abstract} Faster-than-Nyquist (FTN) signaling is a non-orthogonal transmission technique {offering} a promising solution for future generations of communications. {This paper studies the capacity of FTN signaling in multiple-input multiple-output (MIMO) channels for high acceleration factors. In our previous study \cite{zhang2022faster}, we found the capacity for MIMO FTN channels if the acceleration factor is larger than a certain threshold, which depends on the bandwidth of the pulse shape used. In this paper we extend the capacity analysis to acceleration factors smaller than this mentioned threshold.} In addition to capacity, we conduct peak-to-average power ratio (PAPR) analysis and simulation for MIMO FTN for varying acceleration factors for both Gaussian and QPSK symbol sets. Our analysis reveals important insights about transmission power and received signal-to-noise ratio (SNR) variation in FTN. As the acceleration factor approaches 0, if the transmission power is fixed, the received SNR diminishes, or if the received SNR is fixed, PAPR at the transmitter explodes. \end{abstract} \begin{IEEEkeywords} Channel capacity, faster-than-Nyquist, multiple-input multiple-output, peak to average power ratio. \end{IEEEkeywords} \section{Introduction} Faster-than-Nyquist (FTN) transmission emerges as a groundbreaking technology in modern communication systems, poised to redefine data transmission capabilities \cite{5gand6g}. Departing from the traditional Nyquist criterion, which sets limits on the maximum data rate achievable without inter-symbol interference (ISI), FTN challenges conventional wisdom by introducing intentional ISI. By cleverly exploiting this intentional ISI, FTN enables communication rates that are unattainable by Nyquist transmission. In classical communication theory, consecutive symbols are transmitted in an orthogonal fashion and do not result in ISI. 
The Nyquist limit refers to the maximum signaling rate beyond which orthogonality between consecutive symbols can no longer be maintained. In contrast, in \cite{Mazo}, Mazo demonstrated that FTN signals can be transmitted at a rate of $1/\delta$ without an impact on the minimum distance of binary sinc pulses if the acceleration factor $\delta \in [0.802,1]$. In other words, approximately 25\% more bits than Nyquist signaling can be sent within the same bandwidth and at the same signal-to-noise ratio (SNR) without compromising the bit error rate (BER), given ideal processing at the receiver. Numerous studies have explored the capacity of single-input single-output (SISO) FTN systems. In \cite{rusek} and \cite{property}, the authors propose an achievable rate assuming independent transmitted symbols. However, optimal input distributions may introduce correlation to mitigate ISI, as investigated in \cite{bajcsy} and \cite{rusek12}, which propose a waterfilling-based power allocation strategy. These works, however, do not fully address ISI’s impact under transmit power constraints, differing from orthogonal signaling. Transmit precoding offers another approach, as shown in \cite{linearprecoding}, with eigen-decomposition-based precoding schemes proposed in \cite{svd}, \cite{eigendecomposition}, and \cite{chaki}. In contrast, \cite{ganji} provides the complete SISO FTN capacity expression, accounting for ISI in the power constraint and covering all acceleration factors $\delta \in (0,1]$. Similarly, \cite{asympftn} shows that binary FTN signaling is asymptotically optimal as $\delta$ tends to 0. Using multiple antennas at both the transmitter and receiver significantly enhances channel capacity \cite{telatar}. Since \cite{telatar}, the field has expanded to massive multiple-input multiple-output (MIMO) \cite{lu2014overview} and cell-free massive MIMO \cite{elhoushy2021cell}. Integrating FTN signaling into MIMO systems is a natural next step. Rusek \cite{rusek2009existence} explored the Mazo limit in MIMO, while Modenini et al. \cite{andrea} demonstrated FTN performance gains in large-scale antenna systems with simplified receivers. Yuhas et al. \cite{michael} studied MIMO FTN capacity with independent inputs, but this assumption is limited, as optimal input distribution may not be independent. The ergodic capacity of MIMO FTN over triply-selective Rayleigh fading channels was analyzed in \cite{MIMOFTNfad} without transmitter channel state information. In \cite{zhang2022faster}, we examined FTN capacity in MIMO systems for both frequency-flat and frequency-selective channels, but our analysis did not address small acceleration factors. Specifically, the capacity for $\delta < 1/(1+\beta)$, when using root raised cosine (RRC) pulses with roll-off factor $\beta \in [0,1]$, remains unknown. This work addresses that gap by establishing FTN capacity for MIMO systems at small $\delta$ values, providing new insights into transmit power, peak-to-average power ratio (PAPR), and received SNR in FTN systems. In FTN signaling, small $\delta$ leads to significant pulse overlap, often resulting in high transmission power peaks. Thus, examining PAPR performance in FTN signaling is crucial. Several studies have investigated PAPR in FTN systems. For instance, \cite{petitpied2018} compared FTN and Nyquist signaling in terms of spectral efficiency, bandwidth, SNR, and PAPR, showing FTN’s optimality over a range of spectral efficiencies. 
The paper \cite{liu2018peak} demonstrated that multi-carrier FTN signaling could achieve lower PAPR than Nyquist systems. However, these studies focused on specific roll-off and acceleration factors and did not provide a universal analysis of FTN PAPR performance compared to Nyquist signaling. FTN has also been proposed as an effective solution to PAPR issues in satellite communications. In \cite{lucciardi2016}, FTN was demonstrated to enhance spectral efficiency under non-linear amplification. The paper \cite{delamotte2017faster} showed that FTN could achieve lower PAPR than Nyquist transmission when symbol constellations and rates are appropriately chosen for small roll-off factors. Similarly, \cite{liq2020} examined FTN PAPR performance in DVB-S2X systems using a high power amplifier model. While previous studies have made practical assumptions and examined PAPR behavior in FTN signaling, this paper aims to analyze PAPR across different acceleration factors both for fixed average transmission power and for received SNR. This analysis will offer general design guidelines for practical FTN transmission. Additionally, there is no existing PAPR performance analysis for FTN under extreme accelerations in the literature. We will demonstrate that there is a trade-off between PAPR behavior and received SNR. If received SNR is fixed, this can imply an exploding PAPR behavior. Consequently, BER and/or spectral efficiency calculations should consider PAPR effects. The organization of the paper is as follows. For MIMO FTN, we define the system model in Section~\ref{sec:systemmodel}. We develop the mutual information expression in Section~\ref{sec:capacityderivation}. In Section~\ref{sec:paprsec}, we derive the theoretical expression for the PAPR distribution for FTN transmission with Gaussian signaling as well as QPSK symbols. In Section~\ref{sec:simresult}, we present our numerical results. Finally, in Section~\ref{sec:conclusion} we conclude the paper. We use the following notations in the paper. The superscript $*$ means complex conjugate and $\star$ means convolution. The superscript $T$ is transpose, the operation $\otimes$ is the Kronecker product. The operation $\mathbb{E}$ is expectation and the superscript $\dagger$ is the Hermitian conjugate. The indicator function is denoted as $\mathbbm{1}$. An identity matrix of size $L\times L$ is shown as $\bm{I}_L$, and the trace operation is $\text{tr}(\cdot)$. Finally, $(a)^+$ means $\max(0,a)$. \vspace{-0.35cm} \section{System Model}\label{sec:systemmodel} In this paper, we assume a communication scenario where the transmitter is equipped with $K$ antennas and the receiver is equipped with $L$ antennas. Assuming that $N$ symbols {are} transmitted from each transmit antenna, we use $a_k[n], n=0, \dots, N-1, k=1, \dots, K$, to denote the $n$th transmitted symbol from the $k$th transmit antenna. The symbols of each antenna go through the pulse shaping filter, which we denote as $p(t)$, and we let all transmit antennas have the same pulse shaping filter. {For FTN transmission, the symbols are transmitted every $\delta T$ seconds, where $T$ is the sampling period in which there will be no ISI at sampling instants.} Therefore, we write the expression of the transmitted signal $x_k(t)$ from the $k$th antenna as \begin{equation} x_k(t)=\sum_{m=0}^{N-1}a_k[m]p(t-m\delta T). \label{eqn:xt} \end{equation} After the signal is sent to the wireless channel, each transmission link experiences fading. 
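Before specifying the channel model further, we note that the FTN waveform in \eqref{eqn:xt} is straightforward to synthesize numerically. The following minimal Python sketch (with an RRC pulse $p(t)$, roll-off factor $\beta=0.3$, and otherwise illustrative, non-optimized parameter values) generates one realization of \eqref{eqn:xt} for QPSK symbols and reports the empirical PAPR of that realization; varying \texttt{delta} gives a quick feel for how the pulse overlap changes with the acceleration factor.
\begin{verbatim}
import numpy as np

def rrc(t, T=1.0, beta=0.3):
    # Unit-energy root raised cosine pulse p(t); removable
    # singularities at t = 0 and |t| = T/(4*beta) handled below.
    t = np.asarray(t, dtype=float)
    x = t / T
    with np.errstate(divide='ignore', invalid='ignore'):
        num = (np.sin(np.pi * x * (1 - beta))
               + 4 * beta * x * np.cos(np.pi * x * (1 + beta)))
        den = np.pi * x * (1 - (4 * beta * x) ** 2)
        p = num / den / np.sqrt(T)
    p[np.isclose(t, 0.0)] = (1 + beta * (4 / np.pi - 1)) / np.sqrt(T)
    if beta > 0:
        p[np.isclose(np.abs(x), 1 / (4 * beta))] = (beta / np.sqrt(2 * T)) * (
            (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
            + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    return p

def ftn_waveform(a, delta, T=1.0, beta=0.3, os_factor=16):
    # x(t) = sum_m a[m] p(t - m*delta*T), sampled with os_factor points per T.
    t = np.arange(-4 * T, (len(a) - 1) * delta * T + 4 * T, T / os_factor)
    x = np.zeros_like(t, dtype=complex)
    for m, am in enumerate(a):
        x += am * rrc(t - m * delta * T, T, beta)
    return t, x

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(2, 64))
qpsk = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
_, x = ftn_waveform(qpsk, delta=0.5)   # delta = 0.5: strong pulse overlap
papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print("empirical PAPR of this realization:", 10 * np.log10(papr), "dB")
\end{verbatim}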
{We assume in the paper that the communication suffers from frequency-flat fading} and denote the channel coefficient for the link from the $k$th transmit antenna to the $l$th receive antenna, $l=1,\dots, L$, as {$h_{lk}\in\mathbb{C}$}. On the receiver side, {the signal at the $l$th receive antenna, together with the circularly symmetric complex Gaussian noise, which we denote as $\xi_l(t)$, goes through the matched filter.} By definition, the matched filter is $p^*(-t)$. The output of the matched filter at the $l$th receive antenna, $y_l(t)$, is \begin{equation} y_l(t)=\sum_{k=1}^{K}h_{lk}\sum_{m=0}^{N-1}a_k[m]g(t-m\delta T)+\eta_l(t), \end{equation} where $g(t)=p(t)\star p^*(-t)$. Moreover, $\eta_l(t)$ can be written as $\eta_l(t)=\xi_l(t)\star p^*(-t)$. After the matched filter, we sample the output $y_l(t)$ every $\delta T$ seconds. We write the samples $y_l[n], n=0,\dots, N-1,$ as \begin{align} y_l[n]&=y_l(n\delta T) \notag \\ &=\sum_{k=1}^{K}h_{lk}\sum_{m=0}^{N-1}a_k[m]g((n-m)\delta T)+\eta_l(n\delta T) \notag \\ & =\sum_{k=1}^{K}h_{lk}\sum_{m=0}^{N-1}a_k[m]g[n-m]+\eta_l[n].\label{eq:revsamp} \end{align} In FTN signaling, we accelerate the symbol rate without changing the pulse shape or the bandwidth. This inevitably leads to ISI since the Nyquist zero-ISI criterion is violated. This can be seen from the fact that $g((n-m)\delta T)\neq 0$ when $n\neq m$; thus, at sampling instant $n\delta T$, the symbol $a_k[n]$ receives interference from other symbols due to the non-zero $g((n-m)\delta T)$. We can write \eqref{eq:revsamp} in a vector form as \begin{equation} \bm{y}_l=\sum_{k=1}^{K}h_{lk}\bm{G}\bm{a}_k+\bm{\eta}_l, \end{equation} where $\bm{y}_l=[y_l[0],\dots, y_l[N-1]]^T, \bm{a}_k=[a_k[0],\dots, a_k[N-1]]^T,$ and $\bm{\eta}_l=[\eta_l[0],\dots, \eta_l[N-1]]^T$. The $N \times N$ matrix $\bm{G}$ is formed by $(\bm{G})_{n,m}=g[n-m]$. It is easy to see that the $\bm{G}$ matrix is Hermitian. By collecting the samples from all the receive antennas, we can write the input-output model for the MIMO FTN channel as \begin{equation} \bm{Y}=\left(\bm{H}\otimes\bm{G}\right)\bm{A}+\bm{\Omega}, \end{equation} where $\bm{Y}=[\bm{y}_1^T, \bm{y}_2^T,\dots,\bm{y}_{L}^T]^T, \bm{A}=[\bm{a}_1^T, \bm{a}_2^T,\dots,\bm{a}_{K}^T]^T$, and $\bm{\Omega}=[\bm{\eta}_1^T, \bm{\eta}_2^T,\dots,\bm{\eta}_{L}^T]^T$. The channel matrix $\bm{H}$ contains the channel coefficients for all the transmission links and is defined by $(\bm{H})_{l,k}=h_{lk}$. For ease of notation, we denote the matrix $\bm{H}\otimes \bm{G}$ as $\Tilde{\bm{H}}$. The Gaussian noise vector $\bm{\eta}_l, l=1,\dots, L,$ follows the distribution $\mathcal{CN}\left(\bm{0}_N, \sigma_0^2\bm{G}\right)$, where $\bm{0}_N$ is the zero vector of size $N\times 1$ and $\sigma_0^2$ is the power spectral density (PSD) of $\xi_l(t)$. This shows that, due to FTN, the additive noise becomes correlated; in Nyquist signaling, by contrast, the $\bm{G}$ matrix reduces to the identity matrix and the output noise terms become independent. {Since the noise terms $\xi_l(t)$ are independent of each other for all $l$, the matched filter output noise terms $\eta_l(t)$ are also independent of each other for all $l$.} Therefore, the noise vector $\bm{\Omega}$ has the distribution $\mathcal{CN}\left(\bm{0}_{LN}, \sigma_0^2(\bm{I}_L\otimes \bm{G})\right)$. \vspace{-0.2cm} \section{Capacity Derivation} \label{sec:capacityderivation} {In this section, we first derive the time-domain mutual information expression between the channel input $\bm{A}$ and the channel output $\bm{Y}$.
Then we apply the generalized Szeg\"o's theorem to convert the expression into the frequency domain and form the capacity problem in the frequency domain. Eventually, we find the solution to the optimization problem. \vspace{-0.2cm} \subsection{Mutual Information Derivation} In order to find the capacity, we write the mutual information between the output $\bm{Y}$ and the input $\bm{A}$, $I(\bm{Y};\bm{A})$, as \begin{align} I(\bm{Y};\bm{A})&=h(\bm{Y})-h(\bm{Y}|\bm{A}) \notag \\ =&\log_2\det\left(\bm{\Sigma_\Omega}+\Tilde{\bm{H}}\bm{\Sigma_A}\Tilde{\bm{H}}^\dagger\right)-\log_2\det\left(\bm{\Sigma_\Omega}\right). \label{eqn:IYA} \end{align} Here, $h(\cdot)$ is the differential entropy. The matrices $\bm{\Sigma_\Omega}=\mathbb{E}\left[\bm{\Omega}\bm{\Omega}^\dagger\right]$ and $\bm{\Sigma_A}=\mathbb{E}\left[\bm{A}\bm{A}^\dagger\right]$ are respectively the covariance matrices of the Gaussian noise $\bm{\Omega}$ and the data symbols $\bm{A}$. Since the noise process $\eta_l(t)$ is a stationary zero-mean Gaussian process, the optimal input is also a stationary zero-mean Gaussian process \cite{Cover}. Recall that $\bm{\Sigma_\Omega}=\sigma_0^2(\bm{I}_L\otimes \bm{G})$. We write the input covariance matrix $\bm{\Sigma_A}$ as \begin{equation} \bm{\Sigma_A}=\left[\begin{matrix} \bm{\Sigma}_{1,1} & \bm{\Sigma}_{1,2} &\dots & \bm{\Sigma}_{1,K} \\ \bm{\Sigma}_{2,1} & \bm{\Sigma}_{2,2} &\dots & \bm{\Sigma}_{2,K} \\ \vdots & \vdots & \ddots & \vdots \\ \bm{\Sigma}_{K,1} & \bm{\Sigma}_{K,2} &\dots & \bm{\Sigma}_{K,K} \end{matrix}\right], \end{equation} where $\bm{\Sigma}_{i,j}=\mathbb{E}\left[\bm{a}_i\bm{a}_j^\dagger\right], i,j=1,\dots,K$; note that $\bm{\Sigma_A}$ is a block matrix. Since the input processes to the transmit antennas are zero-mean Gaussian processes, the input to each antenna is itself stationary, and the inputs of any pair of transmit antennas are jointly stationary. In other words, the autocorrelation function $R_{k,k}[n,m]=\mathbb{E}[a_k[n]a^*_k[m]]=R_{k,k}[n-m]$ and the cross-correlation function $R_{k,l}[n,m]=\mathbb{E}[a_k[n]a^*_l[m]]=R_{k,l}[n-m]$. Therefore, each block inside the block matrix $\bm{\Sigma_A}$ is a Toeplitz matrix. The entries of an $N\times N$ Toeplitz matrix $\bm{R}$ have the property that $\left(\bm{R}\right)_{i,j}=r_{i-j}, i,j=0,\dots, {N-1}$; in other words, a Toeplitz matrix has the same value on each diagonal. Furthermore, a block Toeplitz matrix is a block matrix where each of its blocks is a Toeplitz matrix. Another important concept we will be using is the generating function of a Toeplitz matrix. It is defined as \begin{equation} \mathcal{G}(\bm{R})=\sum_{k=-\infty}^{\infty}r_ke^{j2\pi f_nk}, f_n\in \left[-\frac{1}{2},\frac{1}{2}\right].\label{eqn:defgenfunc} \end{equation} The capacity of the MIMO FTN channel can be obtained by finding the maximum of the asymptotic average mutual information over all input distributions $p(\bm{A})$, namely, \begin{equation} C_{FTN}=\underset{p(\bm{A})}{\max}\underset{N\rightarrow\infty}{\lim}\frac{1}{N}I(\bm{Y};\bm{A}).\label{eqn:capfirst} \end{equation} The capacity can be found by first taking the limit and then finding the optimal input distribution. In order to calculate \eqref{eqn:capfirst}, we first need to manipulate the expression \eqref{eqn:IYA}.
According to \cite{zhang2022faster}, \eqref{eqn:IYA} can be written as \begin{equation} I(\bm{Y},\bm{A})=\log_2\det\left(\bm{I}_{KN}+\sigma_0^{-2}\bm{\Sigma_A}\left(\bm{H}^\dagger\bm{H}\otimes\bm{G}\right)\right), \label{eqn:mutinfsimp} \end{equation} where the detailed derivation can be found in \cite[(21)-(29)]{zhang2022faster}. It is straightforward to see that the matrix $\left(\bm{H}^\dagger\bm{H}\otimes\bm{G}\right)$ is a block Toeplitz matrix since $\bm{G}$ itself is Toeplitz. We know from \cite[Theorem 2]{jesus} that the product of block Toeplitz matrices is asymptotically Toeplitz, so the matrix product $\bm{\Sigma_A}\left(\bm{H}^\dagger\bm{H}\otimes\bm{G}\right)$ inside the determinant in \eqref{eqn:mutinfsimp} is asymptotically Toeplitz. Similarly, the matrix $\bm{I}_{KN}+\sigma_0^{-2}\bm{\Sigma_A}\left(\bm{H}^\dagger\bm{H}\otimes\bm{G}\right)$ is also asymptotically Toeplitz. Using this fact, we can find the limit $\underset{N\rightarrow\infty}{\lim}\frac{1}{N}I(\bm{Y};\bm{A})$ by invoking the generalized Szeg\"o's theorem \cite[Theorem 3]{ganji2018novel}, which is given in Lemma \ref{lem:gensze}. \begin{lemma}{\cite[Theorem 3]{ganji2018novel}} \label{lem:gensze} Assume $\bm{T}$ is an $NK\times NK$ block Toeplitz matrix with the structure \begin{equation} \bm{T}=\left[\begin{matrix} \bm{T}_{1,1} & \bm{T}_{1,2} &\dots & \bm{T}_{1,K} \\ \bm{T}_{2,1} & \bm{T}_{2,2} &\dots & \bm{T}_{2,K} \\ \vdots & \vdots & \ddots & \vdots \\ \bm{T}_{K,1} & \bm{T}_{K,2} &\dots & \bm{T}_{K,K} \end{matrix}\right], \end{equation} where $\bm{T}_{i,j}$ are $N\times N$ Toeplitz matrices. We have \begin{equation} \underset{N\rightarrow\infty}{\lim} \frac{1}{N}\sum_{i=1}^{KN}F\left(\lambda_i(\bm{T})\right)=\int_{-\frac{1}{2}}^{\frac{1}{2}}\sum_{k=1}^KF\left(\lambda_k(\bm{T}(f_n))\right)df_n, \end{equation} where $\lambda_i(\cdot)$ denotes the $i$th eigenvalue of the matrix inside the parenthesis, and $F(\cdot)$ is a continuous function defined on the range of the eigenvalues. With a slight abuse of notation (which should not be confused with the matrix $\bm{T}$ itself), we denote the generating matrix of the block Toeplitz matrix $\bm{T}$ as $\bm{T}(f_n)$. The $K\times K$ matrix $\bm{T}(f_n)$ is composed of the generating functions of the Toeplitz matrices $\bm{T}_{i,j}, i,j=1,\dots, K$. The entries of $\bm{T}(f_n)$ are calculated as $\left(\bm{T}(f_n)\right)_{i,j}=\mathcal{G}(\bm{T}_{i,j})$. \end{lemma} \begin{lemma}{\cite[Theorem 2]{jesus}}\label{lem:toepmprod} The generating matrix of the product of two block Toeplitz matrices is the product of the generating matrices of these two matrices. \end{lemma} \begin{lemma}\label{lem:toepmsum} The generating matrix of the sum of two block Toeplitz matrices is the sum of the generating matrices of these two matrices. \end{lemma} By applying Lemma \ref{lem:gensze}, we have \begin{align} \underset{N\rightarrow\infty}{\lim}\frac{1}{N}I(\bm{Y};\bm{A})=&\int_{-\frac{1}{2}}^{\frac{1}{2}}\log_2\det\big(\bm{I}_{KN}(f_n)\notag\\ &+\sigma_0^{-2}\bm{\Sigma_A}(f_n)\bm{H}^\dagger\bm{H}G_d(f_n)\big)df_n, \label{eqn:limend} \end{align} which follows from Lemmas \ref{lem:toepmprod} and \ref{lem:toepmsum}, together with the fact that $\bm{I}_{KN}(f_n)$, $\bm{\Sigma_A}(f_n)$, and $\bm{H}^\dagger\bm{H}G_d(f_n)$ are respectively the generating matrices of $\bm{I}_{KN}$, $\bm{\Sigma_A}$, and $\bm{H}^\dagger\bm{H}\otimes\bm{G}$.
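As a numerical sanity check of Lemma \ref{lem:gensze} (here in the scalar case $K=1$, applied to the ISI matrix $\bm{G}$), the following Python sketch compares the eigenvalue average on the left-hand side with the frequency-domain integral on the right-hand side; the generating function of $\bm{G}$ is the folded pulse spectrum discussed next. The raised-cosine expressions and all parameter values are our own illustrative assumptions, not part of the paper's derivation.
\begin{verbatim}
import numpy as np

T, beta, delta, N, c = 0.01, 0.5, 0.8, 400, 10.0   # illustrative values

def rc_pulse(t, T, beta):
    # Raised-cosine pulse g(t) = p(t) * p^*(-t), normalized so that g(0) = 1.
    t = np.asarray(t, float)
    x = 2 * beta * t / T
    core = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    with np.errstate(divide="ignore", invalid="ignore"):
        g = np.where(np.isclose(np.abs(x), 1.0),
                     (np.pi / 4) * np.sinc(1 / (2 * beta)),
                     core / (1 - x ** 2))
    return g

def rc_spectrum(f, T, beta):
    # Fourier transform G(f) of g(t); the pulse p(t) has unit energy.
    f = np.abs(np.asarray(f, float))
    G = np.zeros_like(f)
    G[f <= (1 - beta) / (2 * T)] = T
    roll = (f > (1 - beta) / (2 * T)) & (f <= (1 + beta) / (2 * T))
    G[roll] = T / 2 * (1 + np.cos(np.pi * T / beta * (f[roll] - (1 - beta) / (2 * T))))
    return G

# Toeplitz ISI matrix with entries (G)_{n,m} = g((n - m) delta T).
idx = np.arange(N)
Gmat = rc_pulse((idx[:, None] - idx[None, :]) * delta * T, T, beta)

# Left-hand side of the Szego limit for F(x) = log2(1 + c x).
lhs = np.mean(np.log2(1 + c * np.linalg.eigvalsh(Gmat)))

# Right-hand side: integral of F over the generating function (the folded spectrum).
fn = np.linspace(-0.5, 0.5, 20001)
Gd = sum(rc_spectrum((fn - m) / (delta * T), T, beta) for m in range(-3, 4)) / (delta * T)
rhs = np.mean(np.log2(1 + c * Gd))   # the integration interval has unit length

print(lhs, rhs)   # the two values approach each other as N grows
\end{verbatim}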
From \eqref{eqn:defgenfunc} we know that the generating matrix of $\bm{I}_{KN}$ is $\bm{I}_K$, and the generating function of $\bm{G}$ is $G_d(f_n)$, the definition of which can be found in \cite[(90)]{zhang2022faster}. The function $G_d(f_n)$ is also called the folded spectrum in the FTN literature \cite{rusek}. The folded spectrum is periodic with period $1$, and its shape changes with the acceleration factor $\delta$. When an RRC pulse with roll-off factor $\beta$ is used for $p(t)$, the spectrum $G(f)$, which is the continuous-time Fourier transform of $g(t)$, is duplicated and shifted by $\frac{m}{\delta T}, m\in\mathbb{Z}$; the resulting spectrum is scaled in amplitude by $\frac{1}{\delta T}$ and the frequency axis is normalized as $f_n=f\delta T$. When $p(t)$ is an RRC pulse, the spectrum $G(f)$ is non-zero from $-\frac{(1+\beta)}{2T}$ to $\frac{(1+\beta)}{2T}$. Therefore, as $\delta$ becomes smaller than $\frac{1}{1+\beta}$, the support of the folded spectrum $G_d(f_n)$ becomes $\left[-\frac{\delta(1+\beta)}{2},\frac{\delta(1+\beta)}{2}\right]$, and there will be zero parts in $[-\frac{1}{2},\frac{1}{2}]$. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fold_spect.eps} \caption{Folded spectrum for RRC pulse with roll-off factor $\beta$.} \label{fig:foldspec} \end{figure} In Fig. \ref{fig:foldspec}, we give an example of $G_d(f_n)$ where an RRC pulse is used with roll-off factor $\beta$. For ease of exposition, we assume throughout this paper that the pulse $p(t)$ is an RRC pulse with roll-off factor $\beta$. We next define $\bm{W}=\bm{H}^\dagger\bm{H}$. Since $\bm{W}$ is a Hermitian matrix, it has the eigenvalue decomposition $\bm{W}=\bm{U}\bm{\Gamma}\bm{U}^\dagger$, where $\bm{U}$ is a unitary matrix and $\bm{\Gamma}$ is a diagonal matrix. The eigenvalues of $\bm{W}$, $\tau_i, i=1,\dots, K$, are on the main diagonal, i.e., $\bm{\Gamma}=\text{diag}\left[\tau_1, \tau_2,\dots, \tau_{K}\right]$. Then we can upper bound \eqref{eqn:limend} by applying the generalized Hadamard inequality as in \eqref{eqn:ub}. \begin{eqnarray} \lefteqn{\underset{N\rightarrow\infty}{\lim}\frac{1}{N}I(\bm{Y};\bm{A})} \nonumber \\ &=&\int_{-\frac{1}{2}}^{\frac{1}{2}}\log_2\det\left(\bm{I}_K+\sigma_0^{-2}G_d(f_n)\bm{U}^\dagger\bm{\Sigma_A}(f_n)\bm{U}\bm{\Gamma}\right)df_n \notag\\ &\leq& \int_{-\frac{1}{2}}^{\frac{1}{2}}\log_2\det\left(\bm{I}_K+\sigma_0^{-2}G_d(f_n)\bm{\Phi}(f_n)\bm{\Gamma}\right)df_n. \label{eqn:ub} \end{eqnarray} The upper bound is achieved if $\bm{\Sigma_A}(f_n)$ can be diagonalized as $\bm{\Sigma_A}(f_n)=\bm{U}\bm{\Phi}(f_n)\bm{U}^\dagger$, where $\bm{\Phi}(f_n)=\text{diag}\left[\phi_1(f_n), \phi_2(f_n), \dots,\phi_{K}(f_n)\right]$. We refer to the functions $\phi_k(f_n), k=1,\dots, K$, as the data spectra, since they depend only on the distribution of the input data symbols. Therefore, \eqref{eqn:ub} can be written as \begin{align} \lefteqn{ \underset{N\rightarrow\infty}{\lim}\frac{1}{N}I(\bm{Y};\bm{A})} \nonumber \\ &=\sum_{k=1}^K\int_{-\frac{1}{2}}^{\frac{1}{2}}\log_2\left(1+\sigma_0^{-2}G_d(f_n)\phi_k(f_n)\tau_k\right)df_n. \label{eqn:obj} \end{align} We then proceed to obtain the power constraint. Assume that the transmission has the power limit of $P$. The power constraint expression for transmitting $N$ symbols is \begin{align} P_{TX}&=\mathbb{E}\left[\frac{1}{N\delta T}\sum_{k=1}^{K}\int_{-\infty}^{\infty}|x_k(t)|^2dt\right] \\ &=\frac{1}{N\delta T}\text{tr}\left(\left(\bm{I}\otimes\bm{G}\right)\bm{\Sigma_A}\right)\leq P.
\label{eqn:powconstres} \end{align} The detailed derivation of \eqref{eqn:powconstres} can be found in (33)-(37) of \cite{zhang2022faster}. As the number of symbols goes to infinity, we can apply Szeg\"o's theorem again, \begin{eqnarray} \lefteqn{\underset{N\rightarrow\infty}{\lim}\frac{1}{N\delta T}\text{tr}\left(\left(\bm{I}\otimes\bm{G}\right)\bm{\Sigma_A}\right)} \nonumber \\ &=&\frac{1}{\delta T}\int_{-\frac{1}{2}}^{\frac{1}{2}}\sum_{k=1}^KG_d(f_n)\phi_k(f_n)df_n.\label{eqn:powcons} \end{eqnarray} Eventually, we are able to form the optimization problem for the channel capacity by combining \eqref{eqn:obj} and \eqref{eqn:powcons}, and write \begin{align} \lefteqn{C_{FTN}(P,\delta)}\\ &=\underset{\substack{\phi_k(f_n),\\k=1,\dots,K}}{\max}\sum_{k=1}^K\int_{-\frac{1}{2}}^{\frac{1}{2}}\log_2\left(1+\sigma_0^{-2}G_d(f_n)\phi_k(f_n)\tau_k\right)df_n \notag\\ &s.t.~\frac{1}{\delta T}\int_{-\frac{1}{2}}^{\frac{1}{2}}\sum_{k=1}^KG_d(f_n)\phi_k(f_n)df_n\leq P. \notag \end{align} Note that when $\delta<\frac{1}{1+\beta}$, the folded spectrum has zero parts in $\left[-\frac{1}{2},\frac{1}{2}\right]$. Therefore, we should perform the optimization on the support of the folded spectrum. We denote the support of $G_d(f_n)$ in $\left[-\frac{1}{2},\frac{1}{2}\right]$ as $\mathcal{S}$, and the optimization problem becomes \begin{align} \lefteqn{C_{FTN}(P,\delta)}\label{eqn:capobj}\\ &=\underset{\substack{\phi_k(f_n),\\k=1,\dots,K}}{\max}\sum_{k=1}^K\int_{\mathcal{S}}\log_2\left(1+\sigma_0^{-2}G_d(f_n)\phi_k(f_n)\tau_k\right)df_n \notag \\ &s.t.~\frac{1}{\delta T}\int_{\mathcal{S}}\sum_{k=1}^KG_d(f_n)\phi_k(f_n)df_n\leq P.\notag \end{align} We write the Karush–Kuhn–Tucker (KKT) conditions as \begin{align} \frac{\sigma_0^{-2}G_d(f_n)\tau_k}{1+\sigma_0^{-2}G_d(f_n)\phi_k(f_n)\tau_k}-\mu G_d(f_n)-v_k(f_n)&=0 \\ \mu\left(\frac{1}{\delta T}\int_{\mathcal{S}}\sum_{k=1}^KG_d(f_n)\phi_k(f_n)df_n- P\right)&=0 \\ v_k(f_n)\phi_k(f_n)&=0, \end{align} where $\mu$ and the $v_k(f_n)$'s are the Lagrange multipliers. The solution to this problem is \begin{equation} \bar{\phi}_k(f_n)=\begin{cases} \frac{\delta T}{G_d(f_n)}\left(\frac{1}{\mu}-\frac{1}{\tau_k}\right)^+,& \frac{1}{1+\beta}\leq\delta\leq1 \\ \frac{ T}{G_d(f_n)(1+\beta)}\left(\frac{1}{\mu}-\frac{1}{\tau_k}\right)^+,& 0<\delta<\frac{1}{1+\beta} \end{cases}, \label{eqn:optsolu} \end{equation} for $k=1,\dots,K, f_n\in \mathcal{S}$, where $\bar{\phi}_k(f_n)$ denotes the optimum $\phi_k(f_n)$. The Lagrange multiplier $\mu$ can be found by solving \begin{equation} \sum_{k=1}^K\left(\frac{1}{\mu}-\frac{1}{\tau_k}\right)^+=P,\label{eqn:mu} \end{equation} which makes the power constraint in \eqref{eqn:capobj} hold with equality. In order to find the channel capacity, we need to plug the optimum solution \eqref{eqn:optsolu} back into \eqref{eqn:capobj}. For ease of representation, we denote the power allocated to the $k$th eigen-channel, $\left(\frac{1}{\mu}-\frac{1}{\tau_k}\right)^+$, as $\sigma^2_k$, so that $\sum_{k=1}^K\sigma^2_k=P$. We denote the Lebesgue measure of $\mathcal{S}$ as $|\mathcal{S}|$. When $\delta\geq\frac{1}{1+\beta}$, $|\mathcal{S}|=1$; then, combining \eqref{eqn:capobj} and \eqref{eqn:optsolu}, the capacity expression is written as \begin{equation} C_{FTN}(P,\delta)=\sum_{k=1}^K\log_2\left(1+\frac{\sigma^2_k\delta T\tau_k}{\sigma_0^2}\right), \text{for}~\frac{1}{1+\beta}\leq\delta\leq1. \label{eqn:capbdbps} \end{equation} For RRC pulses with roll-off factors that satisfy $\delta<\frac{1}{1+\beta}$, the Lebesgue measure is $|\mathcal{S}|=\delta(1+\beta)$ and $|\mathcal{S}|<1$.
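The water-filling step can be made concrete with a short numerical sketch. The following Python code (our own illustration; the bisection routine, the random channel, and all parameter values are assumptions, not part of the derivation) solves \eqref{eqn:mu} for the water level $1/\mu$ and evaluates the resulting capacity \eqref{eqn:capbdbps} in bits/symbol for $\frac{1}{1+\beta}\leq\delta\leq1$.
\begin{verbatim}
import numpy as np

def waterfill(tau, P, iters=100):
    # Spatial water-filling: find sigma_k^2 = (1/mu - 1/tau_k)^+ with sum equal to P.
    lo, hi = 0.0, P + 1.0 / np.min(tau)      # bracket for the water level 1/mu
    for _ in range(iters):
        w = 0.5 * (lo + hi)
        if np.sum(np.maximum(w - 1.0 / tau, 0.0)) > P:
            hi = w
        else:
            lo = w
    return np.maximum(0.5 * (lo + hi) - 1.0 / tau, 0.0)

def ftn_capacity_bits_per_symbol(H, P, sigma0_sq, delta, T):
    # Capacity expression (eqn:capbdbps), valid for 1/(1+beta) <= delta <= 1.
    tau = np.linalg.eigvalsh(H.conj().T @ H)     # eigenvalues of H^dagger H
    tau = np.maximum(tau, 1e-12)                 # guard against rounding
    sig = waterfill(tau, P)                      # power per eigen-channel
    return np.sum(np.log2(1.0 + sig * delta * T * tau / sigma0_sq))

# Example: a 2x2 Rayleigh channel, delta = 0.67, SNR_tx = 20 dB (sigma0^2 = 1).
rng = np.random.default_rng(1)
K = L = 2
H = (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K))) / np.sqrt(2 * K)
print(ftn_capacity_bits_per_symbol(H, P=100.0, sigma0_sq=1.0, delta=0.67, T=0.01))
\end{verbatim}
The bisection exploits the fact that $\sum_k(w-1/\tau_k)^+$ is non-decreasing in the water level $w=1/\mu$, so the solution of \eqref{eqn:mu} is unique.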
When $0<\delta<\frac{1}{1+\beta}$, the capacity expression becomes \begin{equation} C_{FTN}(P,\delta)=\delta(1+\beta)\sum_{k=1}^K\log_2\left(1+\frac{\sigma^2_k T\tau_k}{\sigma_0^2(1+\beta)}\right). \label{eqn:capsdbps} \end{equation} Both capacities in \eqref{eqn:capbdbps} and \eqref{eqn:capsdbps} are in bits/symbol. We normalize the capacity to convert the unit to bits/s/Hz and obtain the following theorem. \begin{theorem} \label{thm:them1} The capacity (in bits/s/Hz) of the MIMO FTN channel with $K$ transmit and $L$ receive antennas, employing RRC pulses with roll-off factor $\beta$, is equal to \begin{align} \lefteqn{C_{FTN}(P,\delta)}\notag\\ &=\begin{cases} \frac{1}{\delta(1+\beta)}\sum_{k=1}^K\log_2\left(1+\frac{\sigma^2_k\delta T\tau_k}{\sigma_0^2}\right),& ~\frac{1}{1+\beta}\leq\delta\leq1 \\ \sum_{k=1}^K\log_2\left(1+\frac{\sigma^2_k T\tau_k}{\sigma_0^2(1+\beta)}\right),& ~0<\delta<\frac{1}{1+\beta} \end{cases}.\vspace{-5mm} \end{align} \end{theorem} \begin{remark} The power constraint in \eqref{eqn:powconstres} means fixed average transmission power. For this constraint, when $0<\delta<\frac{1}{1+\beta}$, the capacity is independent of $\delta$. The capacity for $\frac{1}{1+\beta}\leq\delta\leq1$ is identical to the result in \cite{zhang2022faster}. Therefore, the capacity increases as $\delta$ decreases, and when $\delta$ reaches the threshold value $\frac{1}{1+\beta}$, the capacity attains its maximum and stays the same as $\delta$ keeps decreasing toward 0. As $\delta$ decreases below the threshold, the amount of information each symbol can carry also decreases due to severe ISI. However, since the signaling rate increases at the same time, it compensates for the decreased information per symbol. In other words, although the capacity decreases in bits/symbol, after normalization by the signaling rate, the resulting capacity in bits/s/Hz stays the same. \end{remark} \begin{remark} For FTN, the capacity is 0 for $\delta=0$. If $\delta=0$, all the symbols are sent at once and it is impossible to sample each symbol separately. This case then becomes equivalent to sending only one symbol, and the capacity of sending a single symbol is zero. \end{remark} \begin{remark} The capacity-achieving input distribution is obtained by the combination of spatial-domain water-filling and frequency-domain spectrum inversion. The power allocated to the $k$th eigen-channel in the optimum data spectrum in \eqref{eqn:optsolu} is determined by spatial-domain water-filling, and the shape of the spectrum for each eigen-channel is determined by the frequency-domain inverted spectrum $\frac{1}{G_d(f_n)}$. \end{remark} \subsection{Different Power Allocation Schemes}\label{sec:diffpowallo} We have obtained the optimal power allocation over the spatial and frequency domains in \eqref{eqn:optsolu}, which corresponds to water-filling in the spatial domain and channel inversion in the frequency domain. We denote this scheme as $O_sO_f$, meaning optimal power allocation in both the spatial and the frequency domains. We proceed to investigate other power allocation schemes. \subsubsection{Suboptimal in space and optimal in frequency}\label{sec:ssof} In the second scheme, we study uniform power allocation in the spatial domain and optimal power allocation in the frequency domain.
We denote this scheme as $S_sO_f$, i.e., suboptimal in the spatial domain and optimal in the frequency domain. In this case, for $f_n\in \mathcal{S}$, the data spectrum $\phi_k(f_n), k=1,\dots, K$, follows \begin{equation} \phi_k(f_n)=\left\{ \begin{matrix} \frac{P}{K}\frac{\delta T}{G_d(f_n)}, &\frac{1}{1+\beta}\leq\delta\leq1 \\ \frac{P}{K}\frac{ T}{G_d(f_n)(1+\beta)}, &0<\delta<\frac{1}{1+\beta} \end{matrix}\right. . \label{eqn:ssof} \end{equation} \subsubsection{Optimal in space and suboptimal in frequency}\label{sec:ossf} In the third scheme, the data spectrum follows \begin{equation} \phi_k(f_n)=\left\{ \begin{matrix} P_k\delta T, &\frac{1}{1+\beta}\leq\delta\leq1 \\ \frac{P_kT}{1+\beta}, &0<\delta<\frac{1}{1+\beta} \end{matrix}\right. , \label{eqn:ossf} \end{equation} for $f_n\in \mathcal{S}$, where $P_k$ is the power allocated to the $k$th eigen-channel by spatial-domain water-filling. We denote this scheme as $O_sS_f$, meaning optimal in the spatial domain and suboptimal in the frequency domain. \subsubsection{Suboptimal in space and suboptimal in frequency}\label{sec:uniformpowallo} Finally, in the fourth scheme, we perform uniform power allocation in both the spatial and frequency domains. It is denoted as $S_sS_f$, and for $f_n\in \mathcal{S}$ its data spectrum is given as \begin{equation} \phi_k(f_n)=\left\{ \begin{matrix} \frac{P\delta T}{K}, &\frac{1}{1+\beta}\leq\delta\leq1 \\ \frac{PT}{K(1+\beta)}, &0<\delta<\frac{1}{1+\beta} \end{matrix}\right. . \label{eqn:sssf} \end{equation} We compare these three schemes with the optimal scheme in Section~\ref{sec:simresult}, where we present the numerical results. \section{Peak to Average Power Ratio Analysis} \label{sec:paprsec} Due to the acceleration, the pulses of an FTN transmission are tightly packed; the resulting pulse overlaps can cause high PAPR. In practice, the power amplifier at the transmitter has a certain threshold output value called the saturation point, which limits the maximum amplitude of the output signal. In order to maintain linear performance at the power amplifier, the input power needs to be reduced to accommodate signal peaks. The reduction amount is referred to as the back-off value and is measured in decibels (dB). Without any compensation techniques, the required power amplifier back-off approximately equals the PAPR of the input signal. In FTN signaling, with transmit power $P$, transmitting $N$ symbols takes $N\delta T$ seconds and the overall energy is equal to $NP\delta T$. Therefore, the average energy per symbol is $E=P\delta T$. In FTN signaling, the energy of each symbol thus decreases as $\delta$ decreases for fixed transmission power $P$. For practical constellations, this implies smaller minimum Euclidean distances and results in higher error probabilities. In FTN, we need two definitions of SNR. We define the transmit SNR as $SNR_{tx}=\frac{P}{\sigma_0^2}$, which is the transmit power over the noise variance. We also define the received SNR as $SNR_{rx}=\frac{E/T}{\sigma_0^2}=\frac{P\delta }{\sigma_0^2}$. It is important to make these two definitions separately because we will compare system performance for different $\delta$. For Nyquist transmission, $\delta=1$, the two definitions coincide, and the distinction is unnecessary. In this paper, we study the PAPR behavior of FTN under two types of power configurations: fixed transmit SNR or fixed receive SNR. Fixed transmit SNR means that the transmit power $P$ is fixed for all choices of $\delta$.
This implies that as $\delta$ gets smaller, the received SNR also becomes smaller. Note that this can imply inferior performance, for example in bit error rate, in practical systems. On the other hand, fixed received SNR refers to a fixed symbol energy $E$ for all choices of $\delta$. In other words, for this latter case, we fix $P\delta $ instead of $P$ itself, implying larger transmit power $P$ for smaller $\delta$. However, increasing $P$ indefinitely is practically impossible due to limitations in linear power amplification. From the distribution of the instant power values, we can calculate the probability that the instant power exceeds the back-off value; we define this probability as the outage probability. In this section, we study the PAPR behavior of FTN signaling for the uniform power allocation derived in Section \ref{sec:uniformpowallo}. Uniform power in both space and frequency implies that $a_k[m]$ is independent of $a_k[n]$ for $m\neq n$. Moreover, the real and imaginary parts of $a_k[m]$, which are denoted as $a_{r,k}[m]$ and $a_{i,k}[m]$ respectively, are i.i.d. as well. Without loss of generality, we assume that $N=2M+1$ symbols are transmitted and $x_k(t)$ in \eqref{eqn:xt} can be written as \begin{align} x_k(t) &=\sum_{m=-M}^{M}\left(a_{r,k}[m]+ja_{i,k}[m]\right)p(t-m\delta T)\\ &= x_{r,k}(t)+jx_{i,k}(t), \end{align} where $x_{r,k}(t)$ and $x_{i,k}(t)$ are the real and imaginary parts of $x_k(t)$. In this paper, we define the PAPR as \begin{equation} \text{PAPR}=\frac{|x_k(t)|^2}{P_k}.\label{eqn:defpapr} \end{equation} As in \cite{zhang2022faster}, it is easy to see that the signal $x_k(t)$ is a cyclostationary random process with period $\delta T$; throughout, we assume that $N$ is large enough for this cyclostationary model to hold. Therefore, with large enough $N$, it is sufficient to study the statistical distribution of the power for each $t$ within only one period. We limit our time index $t$ to $[0,\delta T)$. For each time instant $t$, the distribution is different because the coefficients $p(t-m\delta T)$ are time-varying. The complementary cumulative distribution function (CCDF) represents the probability that a random variable exceeds a specific threshold, providing crucial insights into its tail distribution. The CCDF of the instantaneous power $|x_k(t)|^2$ with respect to $t$ can be defined as \begin{equation} \mathcal{C}(\gamma;t)=\text{Pr}\left[|x_k(t)|^2\geq\gamma\right]. \label{eqn:defccdf} \end{equation} The expression \eqref{eqn:defccdf} is still a function of time $t$. It is more important to investigate the average behavior of the CCDF within one period; thus, we take its time average and define the average CCDF as \begin{equation} \bar{\mathcal{C}}(\gamma)=\frac{1}{\delta T}\int_0^{\delta T}\mathcal{C}(\gamma;t)dt. \label{eqn:aveccdfdef} \end{equation} Utilizing the techniques in \cite[Appendix]{exactpapr}, the expression of the average CCDF can be computed as \begin{equation} \bar{\mathcal{C}}(\gamma)=1-\frac{1}{\delta T} \int_0^{\delta T}\sqrt{\gamma}\int_0^\infty D(\zeta;t)J_1(\sqrt{\gamma}\zeta)d\zeta dt, \label{eqn:aveccdfexp} \end{equation} where $J_1(\cdot)$ is the Bessel function of the first kind of order 1, and $D(\zeta;t)$ is written as \begin{equation} D(\zeta;t)=\frac{1}{2\pi}\int_0^{2\pi}\Phi(\zeta\cos\phi,\zeta\sin\phi;t)d\phi.
\label{eqn:intchar} \end{equation} In \eqref{eqn:intchar}, $\Phi(u,v;t_0)=\mathbb{E}\left[e^{j(ux_{r,k}(t_0)+vx_{i,k}(t_0))}\right]$ is the joint characteristic function of the real part $x_{r,k}(t)$ and the imaginary part $x_{i,k}(t)$ of the process $x_k(t)$, evaluated at $t_0$. \begin{remark} \label{rem:inspowccdfconvert} The expression in \eqref{eqn:aveccdfexp} describes the distribution of the instant power. According to the definition of the PAPR in \eqref{eqn:defpapr}, we can simply set $\gamma$ as $\gamma' P_k$ in \eqref{eqn:aveccdfdef} to obtain the average CCDF of the PAPR of FTN signaling, namely, \begin{align} \bar{\mathcal{C}}(\gamma'P_k)&=\frac{1}{\delta T}\int_0^{\delta T}\mathcal{C}(\gamma'P_k;t)dt\\ &=\frac{1}{\delta T}\int_0^{\delta T}\text{Pr}\left[\frac{|x_k(t)|^2}{P_k}\geq\gamma' \right]dt. \end{align} We conclude that replacing $\gamma$ with $\gamma' P_k$ is merely a scaling operation and the average CCDF of the PAPR has the same behavior as the average CCDF of the instant power. \end{remark} We then derive the exact instant power distributions for FTN signaling with both Gaussian and QPSK symbols in the following subsections. \subsection{PAPR Distribution for Gaussian Symbols} \label{sec:gausthy} \subsubsection{Average CCDF of instant power with $SNR_{tx}$ fixed} Assume that the data symbols $a_k[m]$ follow a complex Gaussian distribution, namely, $a_k[m]\sim\mathcal{CN}(0, P_k\delta T)$. Note that the variance $P_k\delta T$ is also the symbol energy. Moreover, $a_{r,k}[m]$ and $a_{i,k}[m]$ are i.i.d. real Gaussian random variables with zero mean and variance $P_k\delta T/2$. Since $x_k(t)$ is a linear combination of complex Gaussian random variables, it is also a complex Gaussian random variable, and $x_{r,k}(t)$ and $x_{i,k}(t)$ are real Gaussian random variables. We then have the following theorem. \begin{theorem} \label{thm:gaustxsnr} If $SNR_{tx}$ is fixed and Gaussian symbols are used, as $\delta$ approaches zero, the average CCDF of the instant power $\bar{\mathcal{C}}(\gamma)$ in \eqref{eqn:aveccdfexp} does not change with $\delta$ and is equal to \begin{equation} \bar{\mathcal{C}}(\gamma)=\exp\left(-\frac{\gamma}{P_k\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}G(f)df}\right). \label{eqn:gausthytx} \end{equation} \end{theorem} \begin{proof} The proof is shown in Appendix \ref{prof:gaus}. \end{proof} The CCDF does not change with $\delta$ because the integral $\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}G(f)df$ is constant for $\delta<\frac{1}{1+\beta}$: the integration range is larger than the support of $G(f)$, which is $[-\frac{1+\beta}{2T}, \frac{1+\beta}{2T}]$. \subsubsection{Average CCDF of instant power with $SNR_{rx}$ fixed} When $SNR_{rx}$ is fixed, we replace $P_k \delta T$ with $E$, which is a constant with respect to $\delta$. The behavior of the average CCDF of the instant power is then different from the $SNR_{tx}$ fixed case. Equation \eqref{eqn:gausthytx} becomes \begin{equation} \bar{\mathcal{C}}(\gamma)=\exp\left(-\frac{\gamma}{\frac{E}{\delta T}\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}G(f)df}\right). \label{eqn:gausrxsnrthy} \end{equation} As $\delta$ goes to zero, the average CCDF $\bar{\mathcal{C}}(\gamma)$ of the instant power approaches 1 asymptotically; that is, for fixed $SNR_{rx}$ with Gaussian symbols, the average CCDF curve of the instant power approaches a horizontal line as $\delta \rightarrow 0$.
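The two closed-form expressions \eqref{eqn:gausthytx} and \eqref{eqn:gausrxsnrthy} are easy to evaluate numerically. The short Python sketch below (our own illustration; the raised-cosine spectrum helper and all parameter values are assumptions) prints the average CCDF at a fixed threshold for several $\delta$, showing a $\delta$-independent value for fixed $SNR_{tx}$ and values approaching 1 for fixed $SNR_{rx}$.
\begin{verbatim}
import numpy as np

def rc_spectrum(f, T, beta):
    # Raised-cosine spectrum G(f) of the unit-energy pulse (as in the earlier sketch).
    f = np.abs(np.asarray(f, float))
    G = np.zeros_like(f)
    G[f <= (1 - beta) / (2 * T)] = T
    roll = (f > (1 - beta) / (2 * T)) & (f <= (1 + beta) / (2 * T))
    G[roll] = T / 2 * (1 + np.cos(np.pi * T / beta * (f[roll] - (1 - beta) / (2 * T))))
    return G

def gaussian_ccdf(gamma, delta, T, beta, Pk=None, E=None):
    # (eqn:gausthytx) when Pk is given, (eqn:gausrxsnrthy) when E is given.
    f = np.linspace(-1 / (2 * delta * T), 1 / (2 * delta * T), 200001)
    area = np.sum(rc_spectrum(f, T, beta)) * (f[1] - f[0])   # integral of G(f)
    mean_power = Pk * area if Pk is not None else (E / (delta * T)) * area
    return np.exp(-gamma / mean_power)

T, beta, gamma = 0.01, 0.5, 10.0
for d in (0.5, 0.1, 0.01):
    print(d,
          gaussian_ccdf(gamma, d, T, beta, Pk=1.0),      # fixed SNR_tx (P_k = 1)
          gaussian_ccdf(gamma, d, T, beta, E=1.0 * T))   # fixed SNR_rx (E = P*T)
\end{verbatim}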
\begin{remark} Equations \eqref{eqn:gausthytx} and \eqref{eqn:gausrxsnrthy} are average CCDFs of the instant power. According to Remark \ref{rem:inspowccdfconvert}, we conclude that the average CCDF of the PAPR for FTN with Gaussian symbols, for either fixed $SNR_{tx}$ or fixed $SNR_{rx}$, has the same behavior as the average CCDF of the instant power. \end{remark} \vspace{-0.2cm} \subsection{PAPR Distribution with QPSK Symbol Set} \label{sec:thyqpsk} Gaussian signaling is mainly of theoretical interest. In practice, we also need to investigate the PAPR behavior of practical constellations such as PSK or QAM. For simplicity, in this paper we study the QPSK symbol set. The analysis can be extended to higher-order PSK or QAM constellations similarly. \subsubsection{Average CCDF of instant power with $SNR_{tx}$ fixed} \label{sec:qpsktxsnrccdf} In this section, we adopt the analysis methodology in \cite{exactpapr} to find the distribution of the instant power of FTN signaling. We assume that the data symbols $a_k[m]$ are i.i.d. and the constellation has energy $P_k\delta T$ for fixed $SNR_{tx}$, where $k$ is the antenna index. For fixed transmit SNR, the physical transmission power of FTN is the same for all $\delta$, including Nyquist transmission for $\delta=1$. This means that as $\delta$ decreases from 1 to 0, the physical transmission power does not change, but the average constellation energy decreases. Similar to Gaussian signaling, the constellation points $a_k[m]=a_{r,k}[m]+ja_{i,k}[m]$ are composed of real and imaginary parts, which are independent of each other\footnote{However, this is not necessarily true for all constellations; for example, in high-order PSK constellations, the real and imaginary parts of the symbol are highly correlated.} and are drawn uniformly from the set $a_{r,k}[m], a_{i,k}[m]\in\mathcal{A}=\left\{+\sqrt{P_k\delta T/2}, -\sqrt{P_k\delta T/2}\right\}$ for QPSK transmission. Then we have the following theorem. \begin{theorem} \label{thm:txsnrpaprqpsk} If $SNR_{tx}$ is fixed and the QPSK symbol set is used, as $\delta$ approaches $0$, the average CCDF of the instant power $\bar{\mathcal{C}}(\gamma)$ in \eqref{eqn:aveccdfexp} asymptotically approaches $1$. Namely, \begin{equation} \underset{\delta\rightarrow0}{\lim}\bar{\mathcal{C}}(\gamma)=1. \end{equation} \end{theorem} \begin{proof} The proof is provided in Appendix \ref{app:proofthmpaprtxqpsk}. \end{proof} In other words, as $\delta$ decreases, the instant power of the FTN transmission takes larger values. Loosely speaking, in the limit the probability density function of the instant power of $x_k(t)$ approaches a Dirac delta function located at infinity. On the other hand, since the average power $P_k$ is fixed, according to Remark \ref{rem:inspowccdfconvert}, the distribution of the PAPR $|x_k(t)|^2/P_k$ follows the same behavior as the distribution of the instant power. \subsubsection{Average CCDF of instant power with $SNR_{rx}$ fixed} We now investigate the behavior of the average CCDF of the instant power, and of the PAPR, when the received SNR is fixed. In the received SNR fixed scenario, the symbol energy $E$ is kept the same for all $\delta$, namely, the product $P_k\delta$ is fixed regardless of the value of $\delta$. We then have the following theorem. \begin{theorem} \label{thm:rxsnrqpsk} If $SNR_{rx}$ is fixed and the QPSK symbol set is used, as $\delta$ approaches $0$, the average CCDF of the instant power $\bar{\mathcal{C}}(\gamma)$ asymptotically approaches $1$ as well.
We have \begin{equation} \underset{\delta\rightarrow0}{\lim}\bar{\mathcal{C}}(\gamma)=1. \end{equation} \end{theorem} \begin{proof} The proof is provided in Appendix \ref{prof:qpskrxsnr}. \end{proof} To characterize the PAPR in this scenario, we use the same CCDF definition as in \eqref{eqn:defccdf} and apply a similar argument as in the fixed transmit SNR case. We replace $\gamma$ with $\gamma' P_k$ in \eqref{eqn:defccdf}, and observe that the average CCDF of the PAPR of FTN transmission for fixed $SNR_{rx}$ has the same behavior as its $SNR_{tx}$ fixed counterpart. \begin{remark} Assume $PT = E$ for all $\delta$, so that the two configurations coincide at $\delta=1$. As \(\delta\) decreases, the symbol energy in the \(SNR_{rx}\) fixed case remains constant at \(E\), whereas the symbol energy in the \(SNR_{tx}\) fixed case, \(P_k \delta T\), decreases because \(P_k\) remains constant. When comparing their physical power, the \(SNR_{rx}\) fixed FTN has higher physical power, given by \(\frac{E}{\delta T}\), while the \(SNR_{tx}\) fixed FTN maintains a constant physical power equal to \(P_k\). Examining the CCDF of the instantaneous power, we observe that for the same threshold \(\gamma\), the \(SNR_{rx}\) fixed FTN is more likely to exhibit instantaneous power values larger than \(\gamma\), due to its higher symbol energy. Consequently, the CCDF curve for \(SNR_{rx}\) fixed FTN lies above that of \(SNR_{tx}\) fixed FTN and approaches the horizontal line faster as \(\delta \to 0\). This shows that keeping the symbol energy constant under faster acceleration deteriorates the PAPR performance faster than maintaining constant physical transmission power. \end{remark} \vspace{-0.2cm} \subsection{Asymptotic Behavior of FTN Signaling} In Theorem \ref{thm:gaustxsnr}, we learned that the average CCDF of the PAPR of FTN signaling for fixed $SNR_{tx}$ with Gaussian symbols does not change with $\delta$ as $\delta$ goes to zero. Meanwhile, the average CCDF of the PAPR of FTN signaling for fixed $SNR_{tx}$ with QPSK symbols behaves differently: it approaches 1 as $\delta\rightarrow0$, as shown in Theorem~\ref{thm:txsnrpaprqpsk}. Intuitively, as $\delta$ approaches 0, the signal $x_k(t)$ with QPSK symbols approaches \begin{equation} x_k(t)\approx\left(\sum_{m=-M}^{M}a_k[m]\right)p(t), \label{eqn:llnequ} \end{equation} due to the fact that the transmitted pulses are highly packed and the signal starts to look like a one-shot transmission in which all the symbols are sent at once. One might expect the process in \eqref{eqn:llnequ} to approach a Gaussian process by an appeal to the central limit theorem. However, this is not the case, even when the number of symbols is large; the process $x_k(t)$ remains a cyclostationary, non-Gaussian process. Therefore, in this section, we verify the result obtained in Section \ref{sec:qpsktxsnrccdf} by showing that the asymptotic behavior of the average CCDF for FTN signaling with QPSK symbols and fixed $SNR_{tx}$ does not resemble that with Gaussian symbols. \begin{figure} \centering \includegraphics[width=1\linewidth]{ftnhighaccdemo.eps} \caption{A demonstration of the transmitted FTN signal for two distinct $\delta$ values, showcasing only three symbols as an example.} \label{fig:demosmalltau} \end{figure} As $\delta$ decreases, the symbol-modulated transmit pulses are packed more tightly. Fig. \ref{fig:demosmalltau} shows a simple demonstration of the signal $x_k(t)$. For smaller $\delta$, the ISI at the sampling times becomes more severe.
However, the samples of the process $x_k(t_0)=\sum_{m=-M}^{M}a_k[m]p(t_0-m\delta T)$ do not converge to Gaussian random variables. Even with a sufficiently large number of symbols, certain symbols within $x_k(t_0)$ are multiplied by coefficients significantly smaller than others. For instance, as shown in Fig. \ref{fig:demosmalltau}, when $\delta=0.1$ and sampling occurs at $t_0=0$, symbols located far from the sampling time contribute minimally to the sample. Therefore, to determine whether the FTN signal approaches a Gaussian process, it is necessary to examine the Lindeberg condition \cite{lindeberg1922neue}, which generalizes the classical central limit theorem (CLT). \begin{lemma}\cite[Lindeberg condition]{lindeberg1922neue}\label{thm:lindeberg} Let $X_l, l=1,\dots, S$, be $S$ mutually independent random variables, each with zero mean and variance $\sigma^2_l$. Let $Z_S=\sum_{l=1}^{S}X_l$ be the sum of the $X_l$ with variance $\sigma^2_{Z,S}=\sum_{l=1}^{S}\sigma_l^2$. If the random variables satisfy \begin{align} \underset{S\rightarrow\infty}{\lim}\frac{1}{\sigma^2_{Z,S}}\sum_{l=1}^{S}\mathbb{E}\left[X_l^2\mathbbm{1}_{\left\{|X_l|>\epsilon \sigma_{Z,S}\right\}}\right]=0\label{eqn:lindebergdef} \end{align} for all $\epsilon>0$, then the variable $\frac{Z_S}{\sigma_{Z,S}}$ converges in distribution to a standard normal random variable as $S\rightarrow\infty$. \end{lemma} The main idea behind this condition is that the CLT still holds as long as no single $X_l$, or small group of the $X_l$'s, dominates the variance of the sum. The ISI mainly comes from neighboring symbols; indeed, the variance of the random variable $x_k(t_0)$, namely $P_k\delta T\sum_{m=-M}^Mp^2(t_0-m\delta T)$, is dominated by the terms corresponding to the neighboring symbols. Therefore, it is necessary to investigate whether the Lindeberg condition is satisfied for $x_k(t_0)$. As $\delta$ approaches zero (while remaining non-zero), we have the following lemma. \begin{lemma} The variance of the random variable $x_k(t_0)$ converges for arbitrarily small but non-zero $\delta$ as the number of symbols $N$ goes to infinity. \end{lemma} \begin{proof} We observe that the variance $P_k\delta T\sum_{m=-M}^Mp^2(t_0-m\delta T)$ resembles a Riemann sum, so when $\delta$ is small enough, we can replace $\delta T$ with $\Delta$, which corresponds to partitioning the time axis into small intervals of length $\delta T$. Also, recall that $N=2M+1$. We then have the following \begin{align} \underset{M\rightarrow\infty}{\lim}P_k\sum_{m=-M}^Mp^2(t_0-m\delta T)\Delta\approx P_k \int_{-\infty}^{+\infty}p^2(t)dt. \end{align} Since we assumed that $p(t)$ has unit energy, we conclude that the variance of $x_k(t_0)$ converges to $P_k$ for small $\delta$. \end{proof} \begin{theorem} \label{thm:qpsknogaus} FTN signaling with QPSK symbols does not approach a Gaussian process for arbitrarily small non-zero $\delta$, even when the number of symbols $N$ is arbitrarily large. \end{theorem} \begin{proof} Inserting $x_k(t)$ into Lindeberg's condition, and defining $s^2_N=P_k\delta T\sum_{m=-\frac{N-1}{2}}^{\frac{N-1}{2}}p^2(t_0-m\delta T)$, the left-hand side of \eqref{eqn:lindebergdef} becomes \begin{equation} \hspace{-0.2cm}\underset{N\rightarrow\infty}{\lim}\frac{1}{s^2_N}\sum_{m=-\frac{N-1}{2}}^{\frac{N-1}{2}}\mathbb{E}\left[|a_k[m]p(t_0-m\delta T)|^2\mathbbm{1}_{\left\{|a_k[m]p(t_0-m\delta T)|>\epsilon s_N\right\}}\right].
\label{eqn:lindeberg} \end{equation} We know that as $N\rightarrow\infty$, $s^2_N$ converges to $P_k$. We therefore choose $\epsilon$ such that $\epsilon s_N<\sqrt{P_k\delta T}\,|p(t_0)|$ while $\epsilon s_N$ is larger than $|a_k[m]p(t_0-m\delta T)|$ for $m\neq 0$. As a result, \eqref{eqn:lindeberg} becomes \begin{align} \frac{\mathbb{E}\left[|a_k[0]p(t_0)|^2\right]}{P_k}=\frac{P_k\delta Tp^2(t_0)}{P_k}, \end{align} which is not zero. Thus Lindeberg's condition is not satisfied. We conclude that the CLT cannot be invoked, and the process $x_k(t)$ does not converge to a Gaussian process in distribution. \end{proof} \begin{remark} Since the average CCDF of the instant power has the same behavior as the average CCDF of the PAPR, we can infer from Theorem \ref{thm:qpsknogaus} that as $\delta$ keeps decreasing, the average CCDF of the PAPR for FTN using QPSK symbols with $SNR_{tx}$ fixed will keep increasing and will not converge to the average CCDF of the PAPR using Gaussian symbols with $SNR_{tx}$ fixed. \end{remark} \begin{figure}[t] \centering \includegraphics[scale=0.47]{fig1txsnrvscap.eps} \caption{Capacity vs transmit SNR for four different power allocation schemes, $O_sO_f, S_sO_f, O_sS_f$, and $S_sS_f$, with different $\delta$ values. The MIMO size is $2\times 2$.} \label{fig:txsnrvsc} \centering \includegraphics[scale=0.47]{fig2rxsnrcap.eps} \caption{Capacity vs receive SNR for four different power allocation schemes, $O_sO_f, S_sO_f, O_sS_f$, and $S_sS_f$, with different $\delta$ values. The MIMO size is $2\times 2$.} \label{fig:rxsnrvsc} \end{figure} \section{Simulation Results}\label{sec:simresult} In this section, we show the capacity and PAPR of MIMO FTN signaling under different power allocation schemes with either transmit or receive power constraints. In the simulations conducted in this section, we set the symbol period $T=0.01$. The simulation results are averaged over 1000 random channel realizations. In our simulations, the MIMO channel coefficients $h_{l,k}$ are i.i.d. and complex Gaussian distributed according to $\mathcal{CN}\left(0,\frac{1}{K}\right)$. We assume RRC pulse shaping with roll-off factor $\beta=0.5$. Note that for this $\beta$, the threshold is $\delta=\frac{1}{1+\beta}\approx0.67$. For the capacity results in Figs. \ref{fig:txsnrvsc}-\ref{fig:tauvscrxsnr}, we apply the power allocation schemes $O_sO_f$, $O_sS_f$, $S_sO_f$, and $S_sS_f$ described in Section \ref{sec:capacityderivation}. For the PAPR simulations in Figs.~\ref{fig:gaustxsnr}-\ref{fig:figBqpsk}, we transmit $1000$ symbols. \begin{figure}[t] \centering \includegraphics[scale=0.50]{fig3taucaptx.eps} \caption{Capacity vs $\delta$ for four different power allocation schemes with transmit SNR $SNR_{tx}=20$ dB. The MIMO size is $2\times 2$ with the threshold $\delta$ value equal to 0.67.} \label{fig:tauvsctxsnr} \end{figure} Fig.~\ref{fig:txsnrvsc} shows the MIMO FTN capacity versus $SNR_{tx}$ for various $\delta$ values and for the four power allocation schemes defined in \eqref{eqn:optsolu}, \eqref{eqn:ssof}, \eqref{eqn:ossf}, and \eqref{eqn:sssf}. For $\delta=0.67$ and $\delta=0.1$, the curves overlap as predicted by Theorem \ref{thm:them1}, since the MIMO FTN capacity saturates below $\delta = \frac{1}{1+\beta}$. At $\delta=1$, the curves for $O_sO_f$ and $S_sO_f$ align with $O_sS_f$ and $S_sS_f$, as channel inversion yields the same result as uniform power allocation for Nyquist transmission. Compared to Nyquist signaling with uniform power allocation ($\delta=1$ and $S_sS_f$), FTN signaling with optimal power allocation ($\delta=0.67$ and $O_sO_f$) boosts rates by about $50\%$.
However, uniform power allocation in frequency still performs well, as seen by comparing $O_sO_f$ and $O_sS_f$ at $\delta=0.67$. Therefore, in practice, uniform power allocation in the frequency domain can still be applied to avoid the extreme values brought by spectrum inversion. Fig.~\ref{fig:rxsnrvsc} examines the MIMO FTN capacity versus $SNR_{rx}$. The $\delta=1$ curves show the worst performance, as smaller $\delta$ enables higher symbol rates for equal symbol energy. At high SNR, the $O_sO_f$ and $O_sS_f$ curves converge with $S_sO_f$ and $S_sS_f$, as the impact of water-filling diminishes. FTN transmission with optimal power allocation significantly outperforms Nyquist signaling ($\delta=1$), with $O_sO_f$ and $O_sS_f$ at $\delta=0.67$ achieving a $50\%$ improvement. The advantage is even greater for fixed receive SNR, where higher transmission rates amplify the benefits of FTN while the received symbol energy is kept the same. However, unlike in Fig.~\ref{fig:txsnrvsc}, where the $\delta=0.1$ and $\delta=0.67$ curves overlap for all power allocation schemes, this is not observed in Fig.~\ref{fig:rxsnrvsc}. For fixed $SNR_{rx}$, although the symbol energy $P\delta T$ remains constant across accelerations, smaller $\delta$ requires higher physical transmission power $P$ to maintain the same symbol energy, amplifying the benefits of FTN. \begin{figure}[t] \centering \includegraphics[scale=0.50]{fig4taucaprx.eps} \caption{Capacity vs $\delta$ for four different power allocation schemes with receive SNR $SNR_{rx}=20$ dB. The MIMO size is $2\times 2$ with the threshold $\delta$ value equal to 0.67.} \label{fig:tauvscrxsnr} \end{figure} In Fig. \ref{fig:tauvsctxsnr}, we show the capacity of MIMO FTN with respect to $\delta$ for fixed transmission power. As we explained in Section \ref{sec:capacityderivation}, the capacity increases as $\delta$ decreases until $\delta$ reaches $\frac{1}{1+\beta}$, and then it remains fixed. We can also see that the curves with the optimal frequency-domain power allocation scheme $O_f$ grow faster as $\delta$ decreases. We also notice that spatial-domain water-filling $O_s$ provides a more significant gain than frequency-domain inversion over the support. Meanwhile, in Fig.~\ref{fig:tauvscrxsnr} we again plot the capacity of MIMO FTN with respect to $\delta$, but with fixed $SNR_{rx}$ instead. In this case, as $\delta$ decreases, the capacity keeps increasing. Moreover, as $SNR_{rx}$ increases, the improvement brought by spatial-domain water-filling, $O_s$, is less significant. To evaluate the practicality of MIMO FTN signaling, we next simulate the PAPR performance under various signaling rates and plot the empirical CCDF of the PAPR to analyze the outage probability and the threshold $\gamma$, particularly at high acceleration. The definition of the average CCDF of the instant power can be found in \eqref{eqn:aveccdfdef}; we then replace $\gamma$ with $\gamma' P_k$ to obtain the average CCDF of the PAPR, as explained in Remark \ref{rem:inspowccdfconvert}. Fig.~\ref{fig:gaustxsnr} shows the CCDF of SISO FTN with a Gaussian symbol set at a fixed transmit SNR of $SNR_{tx}=20$ dB. As discussed in Section \ref{sec:gausthy}, the PAPR distribution is unaffected by $\delta$. The theoretical distribution from \eqref{eqn:gausthytx} closely matches the simulation results. Fig.~\ref{fig:gausrxsnr} presents the average CCDF of SISO FTN with Gaussian symbols for fixed $SNR_{rx}$, showing that decreasing $\delta$ degrades the PAPR performance when the received SNR is kept fixed.
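For reproducibility, empirical curves of this type can be generated with a short Monte Carlo sketch such as the one below (our own illustration, valid for $\delta\leq\frac{1}{1+\beta}$; the RRC helper, the block length, and all parameter values are assumptions). It estimates the average CCDF of the PAPR for SISO FTN with QPSK or Gaussian symbols, for either fixed $SNR_{tx}$ or fixed $SNR_{rx}$.
\begin{verbatim}
import numpy as np

T, beta = 0.01, 0.5
rng = np.random.default_rng(2)

def rrc(t, T, beta):
    # Unit-energy RRC pulse; a tiny offset sidesteps the removable singularities.
    u = (np.asarray(t, float) + 1e-9 * T) / T
    num = np.sin(np.pi * u * (1 - beta)) + 4 * beta * u * np.cos(np.pi * u * (1 + beta))
    return num / (np.pi * u * (1 - (4 * beta * u) ** 2)) / np.sqrt(T)

def papr_ccdf(delta, gammas, n_blocks=2000, N=257, n_t=8, qpsk=True, fixed_rx=False):
    # Monte Carlo estimate of the average CCDF of PAPR = |x_k(t)|^2 / P_k.
    Pk = 1.0
    Es = Pk * T if fixed_rx else Pk * delta * T            # symbol energy E
    m = np.arange(N) - N // 2
    t0 = np.linspace(0.0, delta * T, n_t, endpoint=False)  # one period of the process
    pulses = rrc(t0[:, None] - m[None, :] * delta * T, T, beta)   # shape (n_t, N)
    shape = (n_blocks, N)
    if qpsk:
        sym = (rng.choice([-1.0, 1.0], shape)
               + 1j * rng.choice([-1.0, 1.0], shape)) * np.sqrt(Es / 2)
    else:
        sym = (rng.normal(size=shape)
               + 1j * rng.normal(size=shape)) * np.sqrt(Es / 2)
    power = np.abs(sym @ pulses.T) ** 2       # |x_k(t0)|^2 for each block and t0
    papr = power / (Es / (delta * T))         # average power is E/(delta*T) here
    return np.array([(papr >= g).mean() for g in gammas])

gammas = np.arange(0.0, 12.0, 1.0)
print(papr_ccdf(0.5, gammas))                    # QPSK, fixed SNR_tx
print(papr_ccdf(0.1, gammas, fixed_rx=True))     # QPSK, fixed SNR_rx
print(papr_ccdf(0.5, gammas, qpsk=False))        # Gaussian symbols, fixed SNR_tx
\end{verbatim}
Sweeping $\gamma'$ on a finer grid and overlaying the theoretical curves \eqref{eqn:gausthytx} and \eqref{eqn:extdist} should reproduce the qualitative behavior reported in Figs.~\ref{fig:gaustxsnr}-\ref{fig:qpskrxsnr}.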
\begin{figure}[t] \centering \includegraphics[scale=0.50]{gaustxsnr.eps} \caption{CCDF of SISO FTN for different $\delta$ with uniform power allocation in both space and frequency. The Gaussian symbol set is used and $SNR_{tx}$ is fixed. } \label{fig:gaustxsnr} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.50]{gausrxsnr.eps} \caption{CCDF of SISO FTN for different $\delta$ with uniform power allocation in both space and frequency. The Gaussian symbol set is used and $SNR_{rx}$ is fixed. The theoretical CCDF in \eqref{eqn:gausrxsnrthy} is also plotted. } \label{fig:gausrxsnr} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.50]{qpsktxsnr.eps} \caption{CCDF of SISO FTN with different $\delta$ values with uniform power allocation in both space and frequency. The QPSK symbol set is used and $SNR_{tx}$ is fixed. The theoretical curve in \eqref{eqn:extdist} is also plotted.} \label{fig:qpsktxsnr} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.50]{qpskrxsnr.eps} \caption{CCDF of SISO FTN for different $\delta$ with uniform power allocation in both space and frequency. The QPSK symbol set is used and $SNR_{rx}$ is fixed. The theoretical CCDF in \eqref{eqn:extdist} is also plotted. } \label{fig:qpskrxsnr} \end{figure} Fig.~\ref{fig:qpsktxsnr} shows the CCDF for the QPSK symbol set at $SNR_{tx}=20$ dB, with the theoretical distribution from \eqref{eqn:extdist} aligning with the simulation results. The CCDF rises as $\delta$ decreases, consistent with Section \ref{sec:thyqpsk}. Similarly, Fig.~\ref{fig:qpskrxsnr} illustrates the average CCDF of SISO FTN with QPSK at $SNR_{rx}=20$ dB, highlighting the impact of different acceleration factors. As $\delta$ decreases, the PAPR distribution worsens. Since the symbol energy is fixed for different $\delta$ values, higher acceleration increases the pulse overlap and causes more PAPR spread. However, QPSK generally outperforms the Gaussian symbol set in terms of PAPR distribution. \begin{figure}[t] \centering \includegraphics[scale=0.50]{figBgaus.eps} \caption{Outage threshold $\gamma$ versus $\delta$ for the Gaussian symbol set. } \label{fig:figBgaus} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.50]{figBqpsk.eps} \caption{Outage threshold $\gamma$ versus $\delta$ for the QPSK symbol set. } \label{fig:figBqpsk} \end{figure} Finally, we simulate the outage threshold $\gamma$ for a fixed outage probability of 0.01. Fig.~\ref{fig:figBgaus} shows the effect of $\delta$ on $\gamma$ in SISO FTN using the Gaussian symbol set with uniform power allocation $S_sS_f$, comparing $SNR_{tx}=20$~dB and $SNR_{rx}=20$~dB. With fixed transmit SNR, the outage threshold stays at a high but constant level, while for fixed receive SNR, $\gamma$ increases as $\delta$ decreases, in agreement with Fig.~\ref{fig:gausrxsnr}. Fig.~\ref{fig:figBqpsk} presents similar results for the QPSK symbol set, showing that QPSK achieves better performance, with smaller $\gamma$ values than the Gaussian symbol set. Note that we report PAPR results only for the frequency-domain uniform power allocation schemes, $S_f$, in Figs. \ref{fig:figBgaus} and \ref{fig:figBqpsk}, since these yield stable results: the inversion of the frequency spectrum magnifies close-to-zero values, which are highly sensitive to $\delta$, the number of symbols $N$, and the simulation length. \section{Conclusion}\label{sec:conclusion} In this paper, we investigate the capacity of FTN signaling over frequency-flat MIMO channels for arbitrarily high acceleration.
The frequency-domain expression is derived for the MIMO FTN capacity calculations. The optimum power allocation scheme is spatial-domain water-filling combined with frequency-domain channel inversion for all $\delta$. For RRC pulses with roll-off factor $\beta$ and fixed transmit power, the capacity is independent of $\delta$ and remains constant for $\delta<\frac{1}{1+\beta}$. Moreover, we derive the theoretical PAPR distribution of FTN for Gaussian and QPSK signaling. The theoretical distributions closely align with the simulation results. We find that as the acceleration increases, the PAPR grows without bound in the fixed $SNR_{rx}$ case for both Gaussian and QPSK symbols, and also for QPSK in the fixed $SNR_{tx}$ case. Overall, we show that MIMO FTN significantly improves spectral efficiency with respect to Nyquist transmission, and practical FTN designs should take PAPR into account. As future work, we plan to perform a comprehensive analysis of BER and spectral efficiency in conjunction with PAPR evaluations. The real potential of FTN will reveal itself when different modulation and coding schemes at different acceleration factors are compared under a given PAPR constraint. \appendices \section{Proof for Theorem \ref{thm:gaustxsnr}} \label{prof:gaus} We can find the variance of $x_{r,k}(t)$ and $x_{i,k}(t)$ as \begin{align} \mathbb{E}\left[|x_{r,k}(t)|^2\right]&=\sum_{i=-M}^M\mathbb{E}\left[|a_{r,k}[i]|^2\right]p^2_{i,\delta}(t) \notag\\ &=\frac{P_k\delta T}{2}\sum_{i=-M}^Mp^2_{i,\delta}(t)=\mathbb{E}\left[|x_{i,k}(t)|^2\right], \end{align} where $p_{i,\delta}(t)$ denotes $p(t-i\delta T)$. The random variable $|x_k(t)|^2=|x_{r,k}(t)|^2+|x_{i,k}(t)|^2$ follows a scaled chi-squared distribution with two degrees of freedom, which is a Gamma distribution with parameters $1$ and $P_k\delta T\sum_{i=-M}^Mp^2_{i,\delta}(t)$, i.e., an exponential distribution with this mean. The average CCDF $\bar{\mathcal{C}}(\gamma)$ is written as \begin{align} \bar{\mathcal{C}}(\gamma)=\frac{1}{\delta T}\int_0^{\delta T}e^{-\frac{\gamma}{P_k\delta T\sum_{m=-M}^Mp^2_{m,\delta}(t)}}dt. \label{eqn:ccdfgaus} \end{align} We assume that the energy of $p(t)$ is concentrated in a finite time period. Since we assumed that the number of symbols is large enough, we can approximate $\sum_{i=-M}^Mp^2_{i,\delta}(t)$ by $\sum_{i=-\infty}^{+\infty}p^2_{i,\delta}(t)$. Since we focus on the asymptotic behavior of the CCDF as $\delta$ goes to zero, we might as well assume $\delta\leq\frac{1}{1+\beta}$.
\begingroup \setlength{\belowdisplayskip}{-15pt} \begin{figure*} \begin{align} \sum_{n=-\infty}^{\infty}\left|p(t-n\delta T)\right|^2 &=\int_{-\frac{1}{2}}^{\frac{1}{2}}\left|\frac{1}{\delta T}\sum_{n=-\infty}^{\infty}\sqrt{G\left(\frac{f_n-n}{\delta T}\right)}e^{-j2\pi \left(\frac{f_n-n}{\delta T}\right)t}\right|^2df_n \label{eqn:longpaprbeg}\\ &=\delta T\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}\left|\frac{1}{\delta T}\sum_{n=-\infty}^{\infty}\sqrt{G\left(f-\frac{n}{\delta T}\right)}e^{-j2\pi \left(f-\frac{n}{\delta T}\right)t}\right|^2df \\ &=\frac{1}{\delta T}\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\sqrt{G\left(f-\frac{n}{\delta T}\right)}\sqrt{G\left(f-\frac{m}{\delta T}\right)}e^{-j2\pi \frac{(n-m)}{\delta T}t}df \\ &=\frac{1}{\delta T}\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}\sum_{n=-1}^{1}\sum_{m=-1}^{1}\sqrt{G\left(f-\frac{n}{\delta T}\right)}\sqrt{G\left(f-\frac{m}{\delta T}\right)}e^{-j2\pi \frac{(n-m)}{\delta T}t}df \label{eqn:leaveone}\\ &=\frac{1}{\delta T}\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}} \left(G(f)+2\left(\sqrt{G\left(f-\frac{1}{\delta T}\right)G(f)}+\sqrt{G\left(f+\frac{1}{\delta T}\right)G(f)}\right)\cos\left(\frac{2\pi t}{\delta T}\right)\right)df \label{eqn:longpaprend} \end{align} \end{figure*} \endgroup We then obtain the derivations in \eqref{eqn:longpaprbeg}-\eqref{eqn:longpaprend}. Here \eqref{eqn:leaveone} is because we use RRC for $p(t)$, and inside the integration interval $[-\frac{1}{2\delta T},\frac{1}{2\delta T}]$, only the closest shifted versions of $G(f)$ will be present; i.e. $m=\pm1$ and $n=\pm1$. Furthermore, if $\delta(1+\beta)\leq 1$, there will only be one spectrum $G(f)$ inside the interval. Then in \eqref{eqn:longpaprend}, as $\delta\leq\frac{1}{1+\beta}$, the term $2\left(\sqrt{G\left(f-\frac{n}{\delta T}\right)G(f)}+\sqrt{G\left(f+\frac{n}{\delta T}\right)G(f)}\right)\cos\left(\frac{2\pi t}{\delta T}\right)$ will disappear in the integration. Therefore, we have \begin{align} \sum_{i=-\infty}^{+\infty}p^2_i(t)&=\sum_{i=-\infty}^{+\infty}|p_i(t)|^2=\frac{1}{\delta T}\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}G(f)df, \end{align} and \eqref{eqn:ccdfgaus} becomes \begin{equation} \bar{\mathcal{C}}(\gamma)=\exp\left(-\frac{\gamma}{P_k\int_{-\frac{1}{2\delta T}}^{\frac{1}{2\delta T}}G(f)df}\right). \label{eqn:gausthytxapp} \end{equation} \vspace{-0.31cm} \section{Proof for Theorem \ref{thm:txsnrpaprqpsk}} \label{app:proofthmpaprtxqpsk} In this section, we prove that the average CCDF for the process $x_k(t)$ approaches $1$ as $\delta$ approaches $0$ using the QPSK symbol set with $SNR_{tx}$ fixed. The random variable $x_k(t_0)=\sum_{m=-M}^{M}a_k[m]p(t_0-m\delta T)$ can be viewed as the linear combination of $2M-1$ independent random variables. For ease of notation, we denote $p(t-m\delta T)$ as $p_{m,\delta}(t)$. Therefore, we can decompose $\Phi(u,v;t_0)$ into \begin{equation} \Phi(u,v;t_0)=\prod_{m=-M}^M\Phi(p_{m,\delta}(t_0)u,p_{m,\delta}(t_0)v), \end{equation} where $\Phi(p_{m,\delta}(t_0)u,p_{m,\delta}(t_0)v)$ is the joint characteristic function of $p_{m,\delta}(t_0)x_{r,k}[m]$ and $p_{m,\delta}(t_0)x_{i,k}[m]$. 
Considering the fact that $x_{r,k}[m]$ and $x_{i,k}[m]$ are independent, $\Phi(p_{m,\delta}(t_0)u,p_{m,\delta}(t_0)v)$ can be decomposed into the multiplication of the characteristic functions of $p_{m,\delta}(t_0)x_{r,k}[m]$ and $p_{m,\delta}(t_0)x_{i,k}[m]$, namely, \begin{equation} \Phi(p_{m,\delta}(t_0)u,p_{m,\delta}(t_0)v)=\Phi(p_{m,\delta}(t_0)u)\Phi(p_{m,\delta}(t_0)v). \end{equation} Since the symbols $a_k[m]$ are drawn uniformly from the constellation, we know that \begin{align} \Phi(p_{m,\delta}(t_0)u) &=\int_{-\infty}^{+\infty}\left(\frac{1}{2}\delta(\nu-\sqrt{P_k\delta T/2}p_{m,\delta}(t_0)) \right.\notag\\ &~\quad\quad\quad+\left.\frac{1}{2}\delta(\nu+\sqrt{P_k\delta T/2}p_{m,\delta}(t_0))\right)e^{j\nu u}d\nu\notag \\ &=\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t_0)u). \end{align} Similarly, we have \begin{equation} \Phi(p_{m,\delta}(t_0)v)=\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t_0)v). \end{equation} Eventually, we obtain \begin{align} &\bar{\mathcal{C}}(\gamma)=1-\frac{1}{2\pi\delta T} \int_0^{\delta T}\sqrt{\gamma}\int_0^\infty J_1(\sqrt{\gamma}\zeta) \int_0^{2\pi}\prod_{m=-M}^M \notag\\ &\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\cos\phi)\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\sin\phi)d\phi d\zeta dt. \label{eqn:extdist} \end{align} Note that it is possible that for some $m$ and $t$, $p_{m,\delta}(t)$ equals zero, for the corresponding $m$ and $t$, the term $\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t)\sqrt{\zeta}\cos\phi)\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t)\sqrt{\zeta}\sin\phi)$ becomes $1$. However, because of the high acceleration, other values of $m$ and $t$ will still lead to non-zero $p_{m,\delta}(t)$. Therefore for those $m$ and $t$ that $p_{m,\delta}(t)=0$, their contribution to the whole product $\prod_{m=-M}^M$ $\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\cos\phi)\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\sin\phi)$ is $1$. Meanwhile, if $\delta=1$, it is possible that for some $m$ and $t$ there is only one term left in the product, for example, when $t=0$. We now proceed to discuss the behavior of the distribution at extreme $\delta$ values. Assuming that $\delta$ is sufficiently small, we are able to approximate the product $\prod_{m=-M}^M$ $\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\cos\phi)\cos(\sqrt{P_k\delta T/2}p_{m,\delta}(t){\zeta}\sin\phi)$ with its Taylor expansion. For ease of notation, we denote \begin{align} c_0= \sqrt{P_k\delta T/2}{\zeta}\cos\phi ,\quad c_1=\sqrt{P_k\delta T/2}{\zeta}\sin\phi. \end{align} We define the index set $\mathcal{I}=\{-M,-M+1, \dots, M-1, \allowbreak M\}$. We use Taylor expansion about $t$ around point $0$ and approximate the multiplication $\Pi_{m=-M}^M\allowbreak\cos(c_0p_{m,\delta}(t))\cos(c_1p_{m,\delta}(t))$ by keeping only the zero-order and the first-order items. Eventually, we get the approximation in \eqref{eqn:approx}. 
\begingroup\setlength{\belowdisplayskip}{-10pt} \begin{figure*} \begin{align} &\prod_{m=-M}^M\cos(c_0p_{m,\delta}(t))\cos(c_1p_{m,\delta}(t)) \notag\\ &\approx \prod_{m=-M}^M\cos(c_0p_{m,\delta}(0))\cos(c_1p_{m,\delta}(0)) - t \bigg(\sum_{m=-M}^Mc_0p'_m(0)\sin(c_0p_{m,\delta}(0))\prod_{n\in\mathcal{I}/m}\cos(c_0p_n(0))\prod_{n=-M}^M\cos(c_1p_n(0)) \notag \\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\sum_{m=-M}^Mc_0p'_m(0)\sin(c_1p_{m,\delta}(0))\prod_{n=-M}^M\cos(c_0p_n(0))\prod_{n\in\mathcal{I}/m}\cos(c_1p_n(0))\bigg)\label{eqn:approx} \end{align} \end{figure*} \endgroup The result becomes a linear function of $t$, we plug \eqref{eqn:approx} back into \eqref{eqn:extdist} and get the approximation of average CCDF in \eqref{eqn:distapprox}, where the $\frac{\delta T}{2}$ term comes from the integration $\int_0^{\delta T}tdt$. \begingroup \setlength{\belowdisplayskip}{-11pt} \begin{figure*} \begin{align} \hline &\bar{\mathcal{C}}(\gamma)\approx1-\frac{\sqrt{\gamma}}{2\pi} \int_0^\infty J_1(\sqrt{\gamma}\zeta)\int_0^{2\pi} \prod_{m=-M}^M\cos(c_0p_{m,\delta}(0))\cos(c_1p_{m,\delta}(0))-\frac{\delta T}{2}\bigg(\sum_{m=-M}^Mc_0p'_m(0)\sin(c_0p_{m,\delta}(0))\times \notag\\ &\prod_{n\in\mathcal{I}/m}\cos(c_0p_n(0))\prod_{n=-M}^M\cos(c_1p_n(0)) +\sum_{m=-M}^Mc_1p'_m(0)\sin(c_1p_{m,\delta}(0))\prod_{n=-M}^M\cos(c_0p_n(0))\prod_{n\in\mathcal{I}/m}\cos(c_1p_n(0))\bigg)d\phi d\zeta \label{eqn:distapprox} \end{align} \end{figure*} \endgroup This form allows us to investigate the asymptotic behavior, as $\delta$ approaches zero, the cosine and sine terms approach $1$ and $0$ respectively. As $\int_0^\infty J_1(x)dx=0$, the average CCDF $\bar{\mathcal{C}}(\gamma)$ asymptotically becomes \begin{align} \underset{\delta\rightarrow0}{\lim}\bar{\mathcal{C}}(\gamma)=1- \sqrt{\gamma}\int_0^\infty J_1(\sqrt{\gamma}\zeta)d\zeta =1. \end{align} \vspace{-0.2cm} \section{Proof for Theorem \ref{thm:rxsnrqpsk}} \label{prof:qpskrxsnr} As $\delta$ approaches zero, \eqref{eqn:distapprox}, which is shown on the next page, becomes \begin{align} &\bar{\mathcal{C}}(\gamma)\approx1-\frac{\sqrt{\gamma}}{2\pi} \int_0^\infty J_1(\sqrt{\gamma}\zeta)\int_0^{2\pi} \prod_{m=-M}^M\cos(\sqrt{P_\delta T/2} \notag\\ &p_{m,\delta}(0){\zeta}\cos\phi)\cos(\sqrt{P_\delta T/2}p_{m,\delta}(0){\zeta}\sin\phi)d\phi d\zeta. \label{eqn:rxsnrapprox} \end{align} As $\delta$ becomes small enough, the samples $p_{m,\delta}(t)$ can be considered equal to each other. Without loss of generality, we assume they are equal to $p_0(0)$. Then we have \begin{align} &\bar{\mathcal{C}}(\gamma)\approx1-\frac{\sqrt{\gamma}}{2\pi} \int_0^\infty J_1(\sqrt{\gamma}\zeta)\int_0^{2\pi}\cos^{2M+1}(c\cos\phi) \notag\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\cos^{2M+1}(c\sin\phi)d\phi d\zeta, \label{eqn:beforeexpandcos} \end{align} where we denote $\sqrt{P_\delta T/2}p_0(0){\zeta}$ as $c$. 
\begingroup \setlength{\belowdisplayskip}{-20pt} \begin{figure*} \begin{align} \hline \cos^{2M+1}(c\cos\phi)\cos^{2M+1}(c\sin\phi)&=\left(\sum_{i=0}^{2M}\left(\begin{matrix} 2M\\i \end{matrix}\right)\cos((2M+1-2i)c\cos\phi\right)\left(\sum_{j=0}^{2M}\left(\begin{matrix} 2M\\j \end{matrix}\right)\cos((2M+1-2j)c\sin\phi\right) \label{eqn:rxsnrcosexpandstart}\\ &=\sum_{i=0}^{2M}\sum_{j=0}^{2M}\left(\begin{matrix} 2M\\i \end{matrix}\right)\left(\begin{matrix} 2M\\j \end{matrix}\right)\cos((2M+1-2i)c\cos\phi) \cos((2M+1-2j)c\sin\phi) \\ &=\frac{1}{2}\sum_{i=0}^{2M}\sum_{j=0}^{2M}\left(\begin{matrix} 2M\\i \end{matrix}\right)\left(\begin{matrix} 2M\\j \end{matrix}\right)\left(\cos(d_{i,j}c\cos(\phi+\theta_{i,j})) + \cos(d_{i,j}c\cos(\phi-\theta_{i,j}))\right)\label{eqn:rxsnrcosexpandend} \end{align} \end{figure*} \endgroup By applying trigonometric identities, we obtain the derivations in \eqref{eqn:rxsnrcosexpandstart}-\eqref{eqn:rxsnrcosexpandend}, where $d_{k,l}=\sqrt{(2M+1-2i)^2+(2M+1-2j)^2}$ and $\cos\theta_{k,l}=\frac{2M+1-2k}{\sqrt{(2M+1-2k)^2+(2M+1-2l)^2}}$. We then integrate \eqref{eqn:rxsnrcosexpandend} over $[0, 2\pi)$. Note that this integration is the first kind of Bessel function of order zero, namely, \begin{align} \frac{1}{2\pi}\int_0^{2\pi}\cos(d_{k,l}c\cos(\phi+\theta_{k,l}))d\phi=J_0(d_{k,l}c). \label{eqn:int1} \end{align} Similarly, we have \begin{align} \frac{1}{2\pi}\int_0^{2\pi}\cos(d_{k,l}c\cos(\phi-\theta_{k,l}))d\phi=J_0(d_{k,l}c). \label{eqn:int2} \end{align} We plug \eqref{eqn:int1} and \eqref{eqn:int2} back in \eqref{eqn:beforeexpandcos}. Because of the orthogonality of Bessel functions \cite{bowman1958introduction}, \eqref{eqn:beforeexpandcos} can be written as \begin{align} \bar{\mathcal{C}}(\gamma)&\approx1-\sqrt{\gamma}\sum_{k=0}^{2M}\sum_{l=0}^{2M}\left(\begin{matrix} 2M\\k \end{matrix}\right)\left(\begin{matrix} 2M\\l \end{matrix}\right) \notag\\ &\quad\quad\times\int_0^\infty J_1(\sqrt{\gamma}\zeta)J_0(d_{k,l}\sqrt{P_\delta T/2}p_0(0){\zeta})d\zeta \\ &\overset{(a)}{=}1. \end{align} \vspace{-1cm} \bibliographystyle{IEEEtran} \bibliography{main} \end{document}
2412.15561v1
http://arxiv.org/abs/2412.15561v1
Spirals, Tic-Tac-Toe Partition, and Deep Diagonal Maps
\documentclass[12pt]{article} \usepackage[margin=1in]{geometry} \usepackage{amsfonts,latexsym,amsthm,amssymb,amsmath,amscd,esint,dsfont} \usepackage{graphicx} \graphicspath{ {./figure/} } \usepackage[margin=10pt,font=small,labelfont=bf]{caption} \newcommand{\incfig}[2]{ \def\svgwidth{#2} \input{figure/#1.pdf_tex} } \usepackage{cases} \usepackage{textcomp, gensymb} \usepackage{xcolor} \usepackage{tabularx} \usepackage{array} \makeatletter \def\namedlabel#1#2{\begingroup \def\@currentlabel{#2} \label{#1}\endgroup } \makeatother \usepackage[parfill]{parskip} \setlength{\parindent}{15pt} \usepackage{fancyhdr} \newcommand{\ind}{{\mathds{1}}} \renewcommand{\AA}{\mathbb{A}} \newcommand{\BB}{\mathbb{B}} \newcommand{\CC}{\mathbb{C}} \newcommand{\DD}{\mathbb{D}} \newcommand{\EE}{\mathbb{E}} \newcommand{\FF}{\mathbb{F}} \newcommand{\GG}{\mathbb{G}} \newcommand{\HH}{\mathbb{H}} \newcommand{\II}{\mathbb{I}} \newcommand{\JJ}{\mathbb{J}} \newcommand{\KK}{\mathbb{K}} \newcommand{\LL}{\mathbb{L}} \newcommand{\MM}{\mathbb{M}} \newcommand{\NN}{\mathbb{N}} \newcommand{\OO}{\mathbb{O}} \newcommand{\PP}{\mathbb{P}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\RR}{\mathbb{R}} \renewcommand{\SS}{\mathbb{S}} \newcommand{\TT}{\mathbb{T}} \newcommand{\UU}{\mathbb{U}} \newcommand{\VV}{\mathbb{V}} \newcommand{\WW}{\mathbb{W}} \newcommand{\XX}{\mathbb{X}} \newcommand{\YY}{\mathbb{Y}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cU}{\mathcal{U}} \newcommand{\cV}{\mathcal{V}} \newcommand{\cW}{\mathcal{W}} \newcommand{\cX}{\mathcal{X}} \newcommand{\cY}{\mathcal{Y}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\nn}{\mathbb N} \newcommand{\zz}{\mathbb Z} \newcommand{\qq}{\mathbb Q} \newcommand{\cc}{\mathbb C} \newcommand{\ff}{\mathbb F} \newcommand{\M}{\mathbf{M}} \newcommand{\rp}{\mathbb{R}\mathrm{P}} \newcommand{\cp}{\mathbb{C}P} \newcommand{\power}{\mathcal{P}} \newcommand{\vspan}{\text{span}} \newcommand{\vnull}{\text{null}} \newcommand{\range}{\text{range}} \newcommand{\re}{\text{Re}} \newcommand{\im}{\text{Im}} \newcommand{\erf}{\text{erf}} \newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\pdvv}[2]{\frac{\partial^2 #1}{\partial #2}} \newcommand{\tr}{\operatorname{Tr}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\ortho}{\operatorname{O}} \newcommand{\conv}{\operatorname{conv}} \newcommand{\Int}{\operatorname{int}} \newcommand{\Aff}{\operatorname{Aff}} \newcommand{\sgn}{\operatorname{sgn}} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{statement}[theorem]{Statement} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{example}[definition]{Example} 
\newtheorem{remark}[definition]{Remark} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, pdftitle={Overleaf Example}, pdfpagemode=FullScreen, } \usepackage[ backend=bibtex, style=alphabetic, sorting=ynt ]{biblatex} \addbibresource{main.bib} \title{Spirals, Tic-Tac-Toe Partition, and Deep Diagonal Maps} \author{Zhengyu Zou} \date{\today} \begin{document} \maketitle \begin{abstract} The deep diagonal map $T_k$ acts on planar polygons by connecting the $k$-th diagonals and intersecting them successively. The map $T_2$ is the pentagram map and $T_k$ is a generalization. We study the action of $T_k$ on two special subsets of the so-called twisted polygons, which we name \textit{$k$-spirals of type $\alpha$ and $\beta$}. Both types of $k$-spirals are twisted $n$-gons that resemble the shape of inward spirals on the affine patch under suitable projective normalization. We show that for $k \geq 3$, $T_{k}$ preserves both types of $k$-spirals. In particular, we show that the two types of $3$-spirals have precompact forward and backward $T_3$ orbits, and these special orbits in the moduli space are partitioned into squares of a $3 \times 3$ tic-tac-toe grid. This establishes the action of $T_k$ on $k$-spirals as a nice geometric generalization of $T_2$ on convex polygons. \end{abstract} \input{1intro.tex} \input{2background.tex} \input{3spiral.tex} \input{4partition.tex} \input{5precompact.tex} \newpage \printbibliography \end{document} \section{Introduction} \label{sec:intro} \subsection{Context} Given a polygon $P$ on the projective plane, let $T_k$ be the map that connects its $k$-th diagonals and intersects them successively to form another polygon $P'$. The map $T_2$ is called the \textit{pentagram map}. It's a well-studied discrete dynamical system. See \cite{Schwartz1992}, \cite{Schwartz2001recurrent}, \cite{schwartz2008discretemonodromypentagramsmethod}, \cite{Ovsienko2010}. A well-known result is that $T_2$ preserves the space of convex projective polygons.\footnote{A projective polygon is \textit{convex} if some projective transformation maps it to a planar convex polygon in the affine patch} The $T_2$-orbit of a convex polygon sits on a flat torus in the moduli space of projective equivalent convex polygons. See Figure \ref{fig:delta demo} for the action of $T_2$ on an arbitrary convex heptagon. On the other hand, the geometry of the map $T_k$ is less well-behaved--images of convex polygons may not even be embedded. Figure \ref{fig:delta demo} demonstrates an example of $T_3$ taking a convex heptagon to a polygon that is not even embedded. Previous results of $T_k$ often had a combinatorial and algebraic flavor. A sequence of works \cite{schwartz2008discretemonodromypentagramsmethod}, \cite{Ovsienko2010}, \cite{Soloviev_2013}, \cite{Ovsienko2013Integrability} led to the complete integrability of $T_2$ action on the moduli space of projective convex polygons. The connection between cluster transforms and $T_2$ was discovered in \cite{glick2011Ypatterns}, which led to studies of deeper diagonal maps $T_k$ on objects that are combinatorially analogous to the $T_2$ action. See \cite{gekhtman2012higherpentagrammapsweighted} for integrability results of $T_k$. There are numerous integrability results for these higher-dimensional analogs. See \cite{Khesin2013}, \cite{Beffa2013AGDflows}, \cite{Beffa2015IntegrableGeneralizations}, \cite{Khesin2016}, \cite{izosimov2023longdiagonalpentagrammaps}. 
\begin{figure}[ht] \centering \scriptsize \incfig{delta_demo}{0.6\columnwidth} \caption{Left: $T_2$ acting on a convex heptagon. Right: $T_3$ doesn't preserve embeddedness.} \label{fig:delta demo} \end{figure} The map $T_k$ has many applications and connections to other fields, such as octahedral recurrence \cite{schwartz2008discretemonodromypentagramsmethod} \cite{difrancesco2012tsystemsboundariesnetworksolutions}, the condensation method of computing determinants \cite{schwartz2008discretemonodromypentagramsmethod} \cite{Glick2020limit}, cluster algebra \cite{glick2011Ypatterns} \cite{gekhtman2012higherpentagrammapsweighted} \cite{difrancesco2012tsystemsboundariesnetworksolutions}, Poisson Lie groups \cite{Fock2016} \cite{Izosimov2022poisson}, $T$-systems \cite{KEDEM2015233} \cite{difrancesco2012tsystemsboundariesnetworksolutions}, Grassmannians \cite{Felipe2019Grassmannians}, algebraically closed fields \cite{Weinreich2023}, Poncelet polygons \cite{Schwartz2007Poncelet} \cite{schwartz2021textbookcasepentagramrigidity} \cite{Izosimov2022poncelet} \cite{schwartz2024pentagramrigiditycentrallysymmetric}, and integrable partial differential equations \cite{schwartz2008discretemonodromypentagramsmethod} \cite{Ovsienko2010}\cite{Nackan2021kdv}. The geometric aspects of deeper diagonal maps on planar polygons remain relatively underexplored. There have been efforts to generalize the invariance of geometric properties, such as convexity from \cite{Schwartz1992}, to deep diagonal maps and their variants. One such attempt is the \textit{$k$-birds} and the map $\Delta_{k+1}$, which connects the $(k+1)$-th diagonal of a polygon and intersects the diagonals that are $k$ clicks apart. In \cite{schwartz2024flappingbirdspentagramzoo}, R.\ Schwartz showed that the $k$-birds are invariant under both $\Delta_{k+1}$ and $\Delta_{k+1}^{-1}$. The orbits of the birds experimentally resemble points on a flat tori, where $\Delta_{k+1}$ acts as an isometry with respect to a certain flat metric, suggesting integrability. In this paper, I searched for geometric objects in $\RR\PP^2$ with similar torus-shaped precompact orbits under $T_k$ for $k \geq 3$. These objects are based on the concept of \textit{twisted polygons}: bi-infinite sequences $P: \ZZ \rightarrow \RR\PP^2$ such that no three consecutive points are collinear, and $P_{i+n} = M(P_i)$ for some fixed projective transformation $M$ called the \textit{monodromy}. Denote $\cP_n$ the moduli space of projective equivalent twisted $n$-gons. This paper contains the following two results: \begin{itemize} \item A certain orientation of triangles and configurations of the four points $P_i$, $P_{i+1}$, $P_{i+k}$, $P_{i+k+1}$ are preserved under $T_k$. Twisted polygons satisfying these conditions are called \textit{k-spirals}. There are two types of $k$-spirals: \textit{type-$\alpha$ and type-$\beta$}. The action of $T_k$ preserves each of the two types. \item The orbits of 3-spirals under $T_3$ and $T_3^{-1}$ are precompact. This suggests that the ($T_k$, $k$-spirals) pair is a nice geometric analogue to the ($T_2$, convex polygons) pair and the ($\Delta_{k+1}$, $k$-birds) pair. \end{itemize} \subsection{The \texorpdfstring{$k$}{k}-Spirals under the Map \texorpdfstring{$T_k$}{T(k)}} \label{subsec:spiral polygons} Here we describe the geometric picture of a $k$-spiral. For the formal definition, see $\S$\ref{subsec:construct Sk}. 
Geometrically, $[P] \in \cP_n$ is a \textit{$k$-spiral} if for all $N \in \ZZ$, we can find a representative $P$ such that $\{P_{i}\}_{i \geq N}$ lies in the affine patch, and the triangles $(P_i, P_{i+1}, P_{i+2})$ and $(P_i, P_{i+1}, P_{i+k})$ have positive orientation for all $i \geq N$. We call $P$ an \textit{$N$-representative} of $[P]$. \begin{figure}[hb] \centering \incfig{spiral_gallery}{0.5\columnwidth} \caption{A gallery of $5$-spirals. Left: Examples of $\cS_{5,n}^\alpha$. Middle \& right: Examples of $\cS_{5,n}^\beta$.} \label{fig:spiral example} \end{figure} There are two types of $k$-spirals, which we call \textit{type-$\alpha$} and \textit{type-$\beta$}. They are distinguished by the arrangement of the four points $P_{i}$, $P_{i+1}$, $P_{i+k}$, $P_{i+k+1}$. For type-$\alpha$ spirals, we require $P_{i+k}$ to be contained in the interior of the triangle $(P_{i}, P_{i+1}, P_{i+k+1})$. For type-$\beta$ spirals, $P_{i+k+1}$ needs to be contained in the interior of the triangle $(P_{i}, P_{i+1}, P_{i+k})$. We say $P$ is an \textit{$N$-representative of type $\alpha$ or type $\beta$}. Saying that $P$ is a $k$-spiral of type $\alpha$ (resp.\ $\beta$) is the same as saying it admits an $N$-representative of type $\alpha$ (resp.\ $\beta$) for all $N \in \ZZ$. Let $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ denote the space of $k$-spirals of type $\alpha$ and $\beta$ modulo projective equivalence. These are subspaces of $\cP_n$, the moduli space of \textit{twisted $n$-gons}. Figure \ref{fig:spiral example} illustrates many examples of representatives of $\cS_{5,n}^\alpha$ and $\cS_{5,n}^\beta$. Here we choose not to specify the value of $n$ because it's not essential. It turns out that each of $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ is invariant under both $T_{k}$ and $T_k^{-1}$. Figure \ref{fig:T5 action on S5} shows a black polygonal arc that consists of finitely many points of a representative $P$ of some $[P] \in \cS_{5,n}^\alpha$, and the red arc $P'$ is the image of $P$ under $T_5$. On the right we joined the points of $P$ that are 5 clicks apart. This gives us five different polygonal arcs, which we call \textit{the transversals of $P$}. We will show that these transversals all have the same orientation, and we will use these facts to prove our main theorem. \begin{theorem} \label{thm:spiral polygon invariance} For all $n \geq 2$ and $k \geq 3$, both $T_k(\cS_{k,n}^\alpha) = \cS_{k,n}^\alpha$ and $T_k(\cS_{k,n}^\beta) = \cS_{k,n}^\beta$. \begin{figure}[ht] \centering \scriptsize \incfig{spiral_action}{0.5\columnwidth} \caption{Left: $T_5$ acting on a representative $P$ of $[P] \in \cS_{k,n}^\beta$. Right: transversals of $P$.} \label{fig:T5 action on S5} \end{figure} \end{theorem} \subsection{Tic-Tac-Toe Partition and Precompact \texorpdfstring{$T_3$}{T(3)} Orbits} \label{subsec:tic tac toe} The $k$-spirals are important because they are good analogs, for the map $T_k$, of convex polygons. One of the behaviors I observed experimentally is that $k$-spirals have precompact orbits under $T_k$. I was able to prove the case $k = 3$. \begin{theorem} \label{thm:spiral precompact} For $n \geq 2$, both the forward and backward $T_3$-orbits of any $[P] \in \cS_{3,n}^{\alpha}$ and $[Q] \in \cS_{3,n}^\beta$ are precompact in $\cP_n$. \end{theorem} The sets $\cS_{3,n}^\alpha$ and $\cS_{3,n}^\beta$ fit well with a local parameterization of $\cP_n \rightarrow \RR^{2n}$ called \textit{corner invariants}. The invariant sets of $\cP_n$ under $T_3$ are partitioned by linear boundaries in the parameter space.
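Concretely, the boundaries are the thresholds $0$ and $1$ in each coordinate. The small sketch below is an illustration only; it assumes the $2n$ corner invariants, defined in $\S$\ref{sec:background}, are already available as a list of real numbers, and it reports the square containing the polygon, or that the polygon does not sit in a single square.
\begin{verbatim}
# Classify a twisted n-gon by its corner invariants (x_0, ..., x_{2n-1}).
# I = (-inf, 0), J = (0, 1), K = (1, inf); S_n(A, B) collects the polygons
# whose even invariants all lie in A and whose odd invariants all lie in B.

def interval(x):
    if x < 0:
        return 'I'
    if 0 < x < 1:
        return 'J'
    if x > 1:
        return 'K'
    return None            # on a boundary line (x in {0, 1})

def square(corner_invariants):
    even = {interval(x) for x in corner_invariants[0::2]}
    odd = {interval(x) for x in corner_invariants[1::2]}
    if len(even) == 1 and len(odd) == 1 and None not in even | odd:
        return even.pop(), odd.pop()
    return None            # the invariants do not sit in a single square

print(square([1.7, 0.4] * 4))         # ('K', 'J'): a candidate point of S_4(K, J)
print(square([0.5, -2.0, 0.5, 3.0]))  # None: odd invariants straddle two squares
\end{verbatim}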
The boundary lines give a grid pattern that resembles the board of the game ``tic-tac-toe.'' Each of the four ``side-squares'' is invariant under $T_3$. \begin{figure}[ht] \centering \incfig{tic-tac-toe_grid}{0.4\columnwidth} \caption{The partition of $\RR^2$ into a $3 \times 3$ grid, and the four side-squares of our interest.} \label{fig:tic-tac-toe grid} \end{figure} To construct the tic-tac-toe board, consider the three intervals $I, J, K$ of $\RR$ given by $I = (-\infty, 0)$, $J = (0, 1)$, $K = (1, \infty)$. The squares are of the form $I \times I$, $I \times J$, $I \times K$, $J \times I$, etc..\ Given a twisted $n$-gon $P$, we can classify $P$ according to the squares that contain its odd and even corner invariants. The four side-squares are marked $S_n(J,I)$, $S_n(I,J)$, $S_n(K,J)$, $S_n(J,K)$ in the order top-left-right-bottom. See Figure \ref{fig:tic-tac-toe grid} for a visualization of the tic-tac-toe grid. Figure \ref{fig:orbit demo} shows vertices of a representative $P$ of $[P] \in S_n(K,J)$ and the image $P' = T_3(P)$. On the right, we have the projection of the first $2^{11}$ iterations of the orbit of $P$ under $T_3$. Each point corresponds to $P^{(m)}_3$ after normalizing $(P_{-2}^{(m)}, P_{-1}^{(m)}, P_{0}^{(m)}, P_1^{(m)})$ to the unit square. The projection suggests that the orbit lies on a flat torus, where the map $T_3$ is a translation on the flat metric. Twisted polygons that are assigned to these squares have geometric implications. For example, the closed convex polygons always lie in the center square; two of the side-squares are $\cS_{3,n}^\alpha$ and $\cS_{3,n}^\beta$; the other two side-squares are obtained by reverting the indexing of $\cS_{3,n}^\alpha$ and $\cS_{3,n}^\beta$. These facts will be proved in $\S$\ref{sec:formula}. The proof of Theorem \ref{thm:spiral precompact} uses algebraic properties of $T_3$. In $\S$\ref{sec:precompact} I show that $T_3$ is a birational map on the corner invariants, and I use certain algebraic invariants of $T_3$ to prove the precompactness of $T_3$ orbits of the tic-tac-toe squares. \begin{figure}[ht] \centering \scriptsize \incfig{T3_action_on_S4}{0.35\columnwidth} \ \ \ \ \includegraphics[width=0.35\columnwidth]{4_spiral_traj_3.pdf} \caption{Left: $T_3$ acting on a representative of $[P] \in S_n(K,J)$. Right: The orbit projection.} \label{fig:orbit demo} \end{figure} \subsection{Paper Organization} This paper is organized as follows: \begin{itemize} \item In $\S$\ref{sec:background}, we provide background knowledge on projective geometry, triangle orientations, and twisted polygons. \item In $\S$\ref{sec:spiral}, we define the $k$-spirals formally and prove Theorem \ref{thm:spiral polygon invariance}. \item In $\S$\ref{sec:formula}, we define the tic-tac-toe partition and identify them with the $3$-spirals. \item In $\S$\ref{sec:precompact}, we derive a formula for $T_3$ in corner invariants and show four algebraic invariants of $T_3$. Then, we prove Theorem \ref{thm:spiral precompact}. \end{itemize} \subsection{Accompanying Program} I wrote a web-based program to visualize the orbits of twisted polygons under $T_k$. Readers can access it from the following link: \begin{center} \href{https://zzou9.github.io/pentagram-map}{\textbf{https://zzou9.github.io/pentagram-map}} \end{center} I discovered most of the results by computer experiments using this program. The paper contains rigorous proofs of the beautiful pictures I observed from it. 
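To give a sense of what the program computes at its core, here is a minimal sketch (not the accompanying program itself, and only a finite-window approximation of the bi-infinite picture) of the map $T_k$ in homogeneous coordinates: the join of two points and the meet of two lines are both cross products, and we use the labeling convention $P'_i = P_iP_{i+k} \cap P_{i+1}P_{i+k+1}$ adopted in $\S$\ref{sec:spiral}.
\begin{verbatim}
import numpy as np

def join(a, b):
    # Line through two points of RP^2 given by homogeneous coordinates.
    return np.cross(a, b)

def meet(l, m):
    # Intersection point of two projective lines.
    return np.cross(l, m)

def T(points, k):
    # points: a finite window of homogeneous coordinate vectors P_i; returns
    # the window of P'_i = P_i P_{i+k} meet P_{i+1} P_{i+k+1}, which is
    # k+1 entries shorter than the input window.
    return [meet(join(points[i], points[i + k]),
                 join(points[i + 1], points[i + k + 1]))
            for i in range(len(points) - k - 1)]

def affine(v):
    # Back to the affine patch [x : y : 1], assuming v is not at infinity.
    return v[:2] / v[2]

# A sample arc: points on an inward spiral, lifted to homogeneous coordinates.
ts = 0.35 * np.arange(20)
arc = [np.array([np.exp(-0.08 * t) * np.cos(t),
                 np.exp(-0.08 * t) * np.sin(t), 1.0]) for t in ts]
image = T(arc, 3)
print(affine(image[0]), affine(image[1]))
\end{verbatim}
The same two helpers, applied to the formula for $T_k^{-1}$ given in $\S$\ref{subsec:proof of backward invariance}, produce the inverse map on a finite window.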
\subsection{Acknowledgements} This paper was supported by a Brown University SPRINT/UTRA summer research program grant. I would like to thank my advisor, Richard Evan Schwartz, for introducing the concept of deep diagonal maps, providing extensive insights throughout the project, and offering guidance during the writing process. I would also like to thank Anton Izosimov and Sergei Tabachnikov for their insightful discussions on the tic-tac-toe partition. \section{Background} \label{sec:background} \subsection{Projective Geometry} The real projective plane $\RR\PP^2$ is the space of 1-dimensional subspaces of $\RR^3$. \textit{Points} of $\RR\PP^2$ are lines in $\RR^3$ that go through the origin. We say that $[x:y:z]$ is a \textit{homogeneous coordinate} of $V \in \RR\PP^2$ if the vector $\tilde V = (x, y, z)$ is a representative of $V$. Given two distinct points $V_1, V_2 \in \RR\PP^2$, the \textit{line} $l = V_1V_2$ connecting $V_1$ and $V_2$ is the 2-dimensional subspace spanned by the two 1-dimensional subspaces. Let $l_1,l_2$ be two lines in $\RR\PP^2$. The \textit{point of intersection} $l_1 \cap l_2$ is the 1-dimensional subspace given by the intersection of the two 2-dimensional subspaces. In $\RR\PP^2$, there exists a unique line connecting each pair of distinct points and a unique point of intersection of two distinct lines. We call a collection of points $V_1, V_2, \ldots, V_n \in \RR\PP^2$ \textit{in general position} if no three of them are collinear. The \textit{affine patch} $\AA^2$ is the collection of points in $\RR\PP^2$ with homogeneous coordinate $[x:y:1]$. We call this canonical choice of coordinate $(x,y,1)$ the \textit{affine coordinate} of a point $V \in \AA^2$. There is a diffeomorphism $\Phi: \RR^2 \rightarrow \AA^2$ given by $\Phi(x,y) = [x:y:1]$. We often identify $\AA^2$ as a copy of $\RR^2$ in $\RR\PP^2$. The line $\RR\PP^2 - \AA^2$ is called the \textit{line at infinity}. A map $\phi: \RR\PP^2 \rightarrow \RR\PP^2$ is a \textit{projective transformation} if it maps points to points and lines to lines. Algebraically, the group of projective transformations is $\PGL_3(\RR) = \GL_3(\RR) / \RR^* I$, where we are modding by the subgroup $\RR^* I = \{\lambda I: \lambda \in \RR^*\}$. We state a classical result regarding projective transformations without proof below: \begin{theorem} \label{thm:proj transform} Given two 4-tuples of points $(V_1,V_2,V_3,V_4)$ and $(W_1,W_2,W_3,W_4)$ in $\RR\PP^2$, both in general position, there exists a unique $\phi \in \PGL_3(\RR)$ such that $\phi(V_i) = W_i$ for $i = 1, 2, 3, 4$. \end{theorem} The group of \textit{affine transformations} $\Aff_2(\RR)$ on $\AA^2$ is the subgroup of projective transformations that fixes the line at infinity. It is isomorphic to a semidirect product of $\GL_2(\RR)$ and $\RR^2$. Elements of $\Aff_2(\RR)$ can be uniquely expressed as a tuple $(M',v)$ where $M' \in \GL_2(\RR)$ and $v \in \RR^2$. Denote $\Aff_2^+(\RR)$ the subgroup of $\Aff_2(\RR)$ where $(M',v) \in \Aff_2^+(\RR)$ iff $\det(M') > 0$. These are the orientation-preserving affine transformations. \subsection{Orientation of Affine Triangles} \label{subsec:triangle orientation} Given an ordered 3-tuple $(V_1,V_2,V_3)$ of points in $\RR^2$ or $\AA^2$, denote $\Int(V_1, V_2, V_3)$ the interior of the affine triangle with vertices $V_1, V_2, V_3$. There is a canonical way to define the orientation of an ordered 3-tuple. Let $\tilde V_i$ be the affine coordinate of $V_i$.
We associate a number $\cO(V_1,V_2,V_3)$ to the oriented triangle: \begin{equation} \label{eqn:triangle orientation} \cO(V_1,V_2,V_3) = \det(\tilde V_1, \tilde V_2, \tilde V_3). \end{equation} The determinant is evaluated on the $3 \times 3$ matrix with column vectors $\tilde V_i$. We say an ordered 3-tuple $(V_1, V_2, V_3)$ is \textit{positive} if $\cO(V_1,V_2,V_3)>0$. Figure \ref{fig:triangle orientation} shows an example of a positive 3-tuple. \begin{figure}[ht] \centering \footnotesize \incfig{triangle_orientation}{0.8\columnwidth} \caption{A positive 3-tuple of affine points $(V_1, V_2, V_3)$.} \label{fig:triangle orientation} \end{figure} Here is another way to compute $\cO$ using the $\RR^2$ coordinates of $V_1, V_2, V_3$: \begin{equation} \label{eqn:vector orientations} \begin{aligned} \cO(V_1,V_2,V_3) &= \det(V_1,V_2) + \det(V_2,V_3) + \det(V_3,V_1) \\ &= \det(V_i - V_{i-1}, V_{i+1} - V_i) \end{aligned} \end{equation} where the determinant is evaluated on the $2\times2$ matrix. $\cO$ interacts with the action of $\Aff_2^+(\RR)$ and the symmetric group $S_3$ on planar/affine triangles in the following way: Given $M \in \Aff_2^+(\RR)$, denote $V'_i = M(V_i)$. One can show that $(V_1, V_2, V_3)$ is positive iff $(V_1', V_2', V_3')$ is positive. On the other hand, for all $\sigma \in S_3$, $\cO(V_{\sigma(1)}, V_{\sigma(2)}, V_{\sigma(3)}) = \sgn(\sigma) \cO(V_1, V_2, V_3)$, so $\cO(V_{\sigma(1)}, V_{\sigma(2)}, V_{\sigma(3)}) = \cO(V_1, V_2, V_3)$ when $\sigma$ is a 3-cycle. Below is a useful equivalence conditions for the positivity of $(V_1, V_2, V_3)$. The proof is elementary, so we will omit. \begin{proposition} \label{prop:orientation lemma} Given $V_1, V_2, V_3 \in \RR^2$ in general position, and $W \in \Int(V_1,V_2,V_3)$, the followings are equivalent: \begin{enumerate} \item $(V_1, V_2, V_3)$ is positive. \item $(V_i, V_{i+1}, W)$ is positive for some $i \in \{1,2,3\}$. \item $(V_i, V_{i+1}, W)$ is positive for all $i \in \{1,2,3\}$. \item $\det(V_i - V_{i-1}, V_{i+1} - W) > 0$ for some $i \in \{1,2,3 \}$. \item $\det(V_i - V_{i-1}, V_{i+1} - W) > 0$ for all $i \in \{1,2,3 \}$. \end{enumerate} \end{proposition} \subsection{The Inverse Cross Ratio} Given four colinear points $A, B, C, D$ on the line $\omega \subset \RR\PP^2$, there is a projective transformation $\psi$ that maps $\omega$ to the $x$-axis of $\AA^2$. Let $a, b, c, d$ be the $x$-coordinates of $\psi(A)$, $\psi(B)$, $\psi(C)$, $\psi(D)$. We define the \textit{inverse cross ratio} to be the following quantity: \begin{equation} \label{eqn:chi definition} \chi(A, B, C, D) := \frac{(a - b)(c - d)}{(a - c)(b - d)} . \end{equation} If $A$ lies on the line at infinity, we define $\chi(A, B, C, D)$ as follows: $\chi(A, B, C, D) = \frac{c - d}{b - d}$. $\chi$ is invariant under projective transformation. One can check that given any $\phi \in \PGL_3(\RR)$, \begin{equation*} \chi(A, B, C, D) = \chi(\phi(A), \phi(B), \phi(C), \phi(D)) . \end{equation*} We define the inverse cross ratio for four projective lines analogously. Let $l, m, n, k$ be four lines intersecting at a common point $O$. Let $\omega$ be a line that does not go through $O$. Denote $A = l \cap \omega$, $B = m \cap \omega$, $C = n \cap \omega$, $D = k \cap \omega$. Then, we define \begin{equation} \label{eqn:chi definition lines} \chi(l, m, n, k) \overset{\omega}{=} \chi(A, B, C, D) . \end{equation} Since $\chi$ is invariant under projective transformations, Equation \ref{eqn:chi definition lines} is independent of the choice of $\omega$. 
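As a quick numerical illustration of this invariance, the following sketch (an illustration only; it assumes the randomly drawn matrix is invertible and sends none of the four points to the line at infinity) computes the inverse cross ratio of four collinear points from an affine parameter along their common line and checks that a random projective transformation leaves it unchanged.
\begin{verbatim}
import numpy as np

def chi(a, b, c, d):
    # Inverse cross ratio of four parameters along a line.
    return (a - b) * (c - d) / ((a - c) * (b - d))

rng = np.random.default_rng(1)

# Four collinear affine points: base + t_i * direction.
base, direction = np.array([1.0, 2.0]), np.array([3.0, -1.0])
ts = np.array([-1.0, 0.5, 2.0, 4.0])
pts = [np.append(base + t * direction, 1.0) for t in ts]   # homogeneous lift

M = rng.normal(size=(3, 3))                # a random projective transformation
imgs = [M @ p for p in pts]
imgs = [q[:2] / q[2] for q in imgs]        # back to the affine patch

# Affine parameters of the images along their common line.
d = imgs[1] - imgs[0]
ss = [np.dot(q - imgs[0], d) for q in imgs]

print(chi(*ts), chi(*ss))                  # equal up to round-off
\end{verbatim}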
Suppose there is another line $\omega'$ that intersects $l, m, n, k$ at $A', B', C', D'$. Then \begin{equation} \label{eqn:chi prop} \chi(A, B, C, D) \overset{\omega}{=} \chi(l, m, n, k) \overset{\omega'}{=} \chi(A', B', C', D'). \end{equation} \begin{figure}[ht] \centering \scriptsize \incfig{inverse_cross}{0.5\columnwidth} \caption{The configuration in Equation \ref{eqn:chi prop}.} \label{fig:inverse cross} \end{figure} \subsection{Twisted Polygons, Corner Invariants} In $\S$\ref{sec:formula} we study the spirals in the context of \textit{twisted polygons} to work in finite dimensions. Introduced in \cite{schwartz2008discretemonodromypentagramsmethod}, a \textit{twisted $n$-gon} is a bi-infinite sequence $P : \ZZ \rightarrow \RR\PP^2$, along with a projective transformation $M \in \PGL_3(\RR)$ called the \textit{monodromy}, such that every three consecutive points of $P$ are in general position, and $P_{i+n} = M(P_i)$ for all $i \in \ZZ$. When $M$ is the identity, we get an ordinary closed $n$-gon. Two twisted $n$-gons $P, Q$ are \textit{equivalent} if there exists $\phi \in \PGL_3(\RR)$ such that $\phi(P_i) = Q_i$ for all $i \in \ZZ$. The two monodromies $M_p$ and $M_q$ satisfy $M_q = \phi M_p \phi^{-1}$. Denote $\cP_n$ the space of twisted $n$-gons modulo projective equivalence. The inverse cross ratio allows us to parameterize $\cP_n$ with coordinates in $\RR^{2n}$. Given a twisted $n$-gon $P$, the \textit{corner invariants} of $P$ are the $2n$-tuple $(x_0, \ldots, x_{2n-1})$ given by \begin{equation} \label{eqn:corner invariant def} \left\{ \begin{aligned} x_{2i} = \chi(P_{i-2}, \, P_{i-1}, \, P_{i-2} P_{i-1} \cap P_{i} P_{i+1}, \, P_{i-2} P_{i-1} \cap P_{i+1} P_{i+2}) ; \\ x_{2i+1} = \chi(P_{i+2}, \, P_{i+1}, \, P_{i+2} P_{i+1} \cap P_{i} P_{i-1}, \, P_{i+2} P_{i+1} \cap P_{i-1} P_{i-2}). \end{aligned} \right. \end{equation} \begin{figure}[ht] \centering \scriptsize \incfig{corner_inv}{0.5\columnwidth} \caption{The corner invariants $x_{2i} = \chi(P_{i-2}, P_{i-1}, A, O)$; $x_{2i+1} = \chi(P_{i+2}, P_{i+1}, B, O)$.} \label{fig:corner inv} \end{figure} Since $\chi$ is invariant under projective transformations, we always have $x_i = x_{i+2n}$, so a $2n$-tuple of corner invariants is enough to fully determine the projective equivalence class of a twisted $n$-gon. To obtain the corner invariants of some $[P] \in \cP_n$, one can just choose a representative $P$ and compute its corner invariants. Equations (19) and (20) of \cite{schwartz2008discretemonodromypentagramsmethod} show that one can also reverse the process and recover a representative twisted polygon of the equivalence class from its corner invariants. \section{The Spirals and \texorpdfstring{$T_k$}{Tk}-Orbit Invariance} \label{sec:spiral} $T_k$ acts on polygons, closed or twisted, by joining vertices that are $k$ clicks apart and intersecting consecutive diagonal lines. As shown in $\S$\ref{sec:intro}, when $k\geq3$, $T_k$ often does not preserve the convexity and embeddedness of closed polygons, but it has an interesting dynamic on the spiral-like shapes generated by infinitely long sequences of points. We call these infinite polygonal arcs \textit{$k$-spirals}. We will construct two types of $k$-spirals, defined by the configuration of the four vertices $P_i, P_{i+1}, P_{i+k}, P_{i+k+1}$. \subsection{The Geometry of \texorpdfstring{$k$}{k}-Spirals} \label{subsec:construct Sk} A twisted polygon $P$ is called \textit{$k$-nice} if the four points $P_i, P_{i+1}, P_{i+k}, P_{i+k+1}$ are in general position for all $i \in \ZZ$.
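In coordinates, both the corner invariants of the previous subsection and the $k$-nice condition just defined reduce to small determinant computations. The following sketch is an illustration only, carried out on a finite window of vertices in homogeneous coordinates, and it assumes none of the intermediate intersection points lands on the line at infinity.
\begin{verbatim}
import numpy as np

def join(a, b):  return np.cross(a, b)     # line through two points
def meet(l, m):  return np.cross(l, m)     # intersection of two lines

def general_position(pts):
    # No three of the given homogeneous points are collinear.
    n = len(pts)
    return all(abs(np.linalg.det(np.column_stack([pts[a], pts[b], pts[c]]))) > 1e-9
               for a in range(n) for b in range(a + 1, n) for c in range(b + 1, n))

def k_nice_window(P, k):
    # Check the k-nice condition on a finite window of vertices.
    return all(general_position([P[i], P[i + 1], P[i + k], P[i + k + 1]])
               for i in range(len(P) - k - 1))

def chi(a, b, c, d):
    return (a - b) * (c - d) / ((a - c) * (b - d))

def chi_points(A, B, C, D):
    # Inverse cross ratio of four collinear points, via an affine parameter
    # along their common line.
    A, B, C, D = [p[:2] / p[2] for p in (A, B, C, D)]
    d = B - A
    return chi(*(np.dot(X - A, d) for X in (A, B, C, D)))

def corner_invariants(P, i):
    # (x_{2i}, x_{2i+1}) from five consecutive vertices, following the definition.
    l = join(P[i - 2], P[i - 1])
    m = join(P[i + 2], P[i + 1])
    x_even = chi_points(P[i - 2], P[i - 1],
                        meet(l, join(P[i], P[i + 1])),
                        meet(l, join(P[i + 1], P[i + 2])))
    x_odd = chi_points(P[i + 2], P[i + 1],
                       meet(m, join(P[i], P[i - 1])),
                       meet(m, join(P[i - 1], P[i - 2])))
    return x_even, x_odd

# A short test window: points on an inward spiral, lifted to homogeneous coordinates.
P = [np.array([np.exp(-0.1 * t) * np.cos(t), np.exp(-0.1 * t) * np.sin(t), 1.0])
     for t in 0.4 * np.arange(12)]
print(k_nice_window(P, 3), corner_invariants(P, 2))
\end{verbatim}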
The $k$-nice condition is projective invariant. Denote $\cP_{k,n}$ the space of $k$-nice twisted $n$-gons modulo projective equivalence. \begin{definition} \label{def:spiral polygon} Given an integer $k \geq 3$, and an integer $N \in \ZZ$, we say that $[P] \in \cP_n$ is a \textit{$k$-spiral of type $\alpha$ or $\beta$} if it is $k$-nice, and for all $N \in \ZZ$, there exists a representative $P$ that satisfies the following: For all $i \geq N$, $P_i \in \AA^2$, and $(P_{i}, P_{i+1}, P_{i+2})$ is positive. Furthermore, \begin{itemize} \item If $[P]$ is \textit{type $\alpha$}, then $(P_i, P_{i+1}, P_{i+k+1})$ is positive, and $P_{i+k} \in \Int(P_i, P_{i+1}, P_{i+k+1})$; \item If $[P]$ is \textit{type $\beta$}, then $(P_i, P_{i+1}, P_{i+k})$ is positive, and $P_{i+k+1} \in \Int(P_i, P_{i+1}, P_{i+k})$. \end{itemize} We call $P$ an \textit{$N$-representative} of $[P]$. Saying that $[P]$ is a $k$-spiral of type $\alpha$ or $\beta$ means that $[P]$ admits an $N$-representative of the corresponding type for all $N \in \ZZ$. \end{definition} \begin{remark} One may attempt to define the $k$-spirals on $k$-nice bi-infinite sequences of points in $\RR\PP^2$ with no periodic constraints. Readers will see that the proof of Theorem \ref{thm:spiral polygon invariance} makes no use of the periodicity nature of a twisted polygon. We stick to twisted polygons because it's a finite-dimensional space, which allows us to keep track of the $T_k$ orbits. \end{remark} Let $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ denote the space of $k$-spirals of type $\alpha$ or $\beta$. These are subspaces of $\cP_{k,n}$. See Figure \ref{fig:spiral witness} for examples of 6-spirals of type $\alpha$ and $\beta$. \begin{figure}[ht] \centering \incfig{spiral_witness}{0.6\columnwidth} \caption{Left: A representative $P$ of $[P] \in \cS_{6,n}^\alpha$. Right: A representative $P$ of $[P] \in \cS_{6,n}^\beta$.} \label{fig:spiral witness} \end{figure} \subsection{Transversals of the Spirals} The next result is important for the proof of Theorem \ref{thm:spiral polygon invariance}, and it carries along interesting geometric implications. It says that we can find $(k-1)$ many locally convex polygonal arcs of the same orientation by joining the vertices of a $k$-spiral that are $k$-clicks apart. We call these polygonal arcs \textit{transversals}. It turns out that the orientation of the transversals is determined by the type of spirals we are given. See Figure \ref{fig:spiral witness flags} for one of the $(k-1)$ transversals from the two representatives in Figure \ref{fig:spiral witness}. Observe that transversals for type-$\alpha$ spirals are oriented counterclockwise, whereas transversals for type-$\beta$ are oriented clockwise. \begin{figure}[ht] \centering \incfig{spiral_witness_flags}{0.6\columnwidth} \caption{Transversals of two representatives from Figure \ref{fig:spiral witness}.} \label{fig:spiral witness flags} \end{figure} \begin{lemma} \label{lem:orientation of pentagon} Given five points $O,A,B,C,D \in \AA^2$. Suppose $(A,O,B)$, $(A,O,D)$, $(B,O,C)$, $(C,O,D)$ are all positive. Then, $(A,O,C)$ is positive iff $(B,O,D)$ is positive. \end{lemma} \begin{proof} For the forward direction, we may normalize with $\Aff_2^+(\RR)$ if necessary so that $O = (0,0)$ and $A = (-1,0)$. Notice that the normalization we applied doesn't change the sign of the orientation. Denote $B = (x_b, y_b)$, $C = (x_c, y_c)$, and $D = (x_d,y_d)$. 
Since $(A,O,B)$ is positive, Equation \ref{eqn:vector orientations} gives us \begin{equation*} \cO(A,O,B) = \det(O - A,B - O) = \det(-A, B) = y_b > 0. \end{equation*} Similarly, positivity of $(A,O,C)$ and $(A,O,D)$ gives us $y_c,y_d > 0$. Next, observe that \begin{equation*} \begin{aligned} \cO(B,O,C) = \det(-B, C) = -x_by_c + x_cy_b; \\ \cO(B,O,D) = \det(-B, D) = -x_by_d + x_dy_b; \\ \cO(C,O,D) = \det(-C, D) = -x_cy_d + x_dy_c. \end{aligned} \end{equation*} Since $y_b,y_c,y_d > 0$, $\frac{y_b}{y_c}, \frac{y_d}{y_c} > 0$, which gives us \begin{equation*} \cO(B, O, D) = -x_by_d + x_d y_b = \frac{y_b}{y_c} \, \cO(C,O,D) + \frac{y_d}{y_c} \, \cO(B,O,C) > 0. \end{equation*} This shows positivity of $(B,O,D)$. \begin{figure}[ht] \centering \tiny \incfig{quadrilateral_config}{0.5\columnwidth} \caption{Examples of $O, A, B, C, D$ in Lemma \ref{lem:orientation of pentagon}.} \label{fig:quadrilateral config} \end{figure} The backward direction follows from essentially the same argument. Normalize with $\Aff_2^+(\RR)$ if necessary so that $O = (0,0)$ and $D = (1,0)$. Denote $A = (x_a,y_a)$, $B = (x_b,y_b)$, $C = (x_c,y_c)$. Since $(A,O,D)$ is positive, Equation \ref{eqn:vector orientations} gives us \begin{equation*} \cO(A,O,D) = \det(O - A,D - O) = \det(-A, D) = y_a > 0. \end{equation*} Similarly, positivity of $(B,O,D)$ and $(C,O,D)$ gives us $y_b,y_c > 0$. Next, observe that \begin{equation*} \begin{aligned} \cO(A,O,B) = \det(-A, B) = -x_ay_b + x_by_a; \\ \cO(A,O,C) = \det(-A, C) = -x_ay_c + x_cy_a; \\ \cO(B,O,C) = \det(-B, C) = -x_by_c + x_cy_b. \end{aligned} \end{equation*} Since $y_a,y_b,y_c > 0$, $\frac{y_a}{y_b}, \frac{y_c}{y_b} > 0$, which gives us \begin{equation*} \cO(A, O, C) = -x_ay_c + x_cy_a = \frac{y_c}{y_b} \, \cO(A,O,B) + \frac{y_a}{y_b} \, \cO(B,O,C) > 0. \end{equation*} This shows positivity of $(A,O,C)$. \end{proof} \begin{proposition} \label{prop:whirlpool} Let $P$ be an $N$-representative of a $k$-spiral $[P]$. If $[P]$ is type-$\alpha$, then $(P_i, P_{i+k}, P_{i+2k})$ is positive for all $i > N$. If $[P]$ is type-$\beta$, then $(P_{i+2k}, P_{i+k}, P_{i})$ is positive for all $i > N$. \end{proposition} \begin{figure}[ht] \centering \footnotesize \incfig{spiral_flags}{0.8\columnwidth} \caption{Left: $\cS_{k,n}^\alpha$ configuration. Right: $\cS_{k,n}^\beta$ configuration.} \label{fig:spiral flags} \end{figure} \begin{proof} The proof applies Lemma \ref{lem:orientation of pentagon} with suitable choices of $O,A,B,C,D$. See Figure \ref{fig:spiral flags} for the configuration of points involved. We start with $P$ of type $\alpha$. Consider the following choices of vertices: \begin{equation*} O = P_{i+k}; \ A = P_{i}; \ B = P_{i+k-1}; \ C = P_{i+2k}; \ D = P_{i+k+1}. \end{equation*} It follows immediately from the definition of an $N$-representative of type $\alpha$ that $(B,O,C)$ and $(B,O,D)$ are positive. The other conditions follow from applications of Proposition \ref{prop:orientation lemma}. Apply \ref{prop:orientation lemma} with $(P_{i-1}, P_{i}, P_{i+k})$ positive and $P_{i+k-1} \in \Int(P_{i-1}, P_{i}, P_{i+k})$ to get positivity of $(A, O, B)$. Apply \ref{prop:orientation lemma} with $(P_{i}, P_{i+1}, P_{i+k+1})$ positive and $P_{i+k} \in \Int(P_{i}, P_{i+1}, P_{i+k+1})$ to get positivity of $(A, O, D)$. Apply \ref{prop:orientation lemma} with $(P_{i+k}, P_{i+k+1}, P_{i+2k+1})$ positive and $P_{i+2k} \in \Int(P_{i+k}, P_{i+k+1}, P_{i+2k+1})$ to get positivity of $(C, O, D)$.
Then, the backward direction of Lemma \ref{lem:orientation of pentagon} implies $(P_{i}, P_{i+k}, P_{i+2k})$ is positive. Next, we will prove $P$ of type $\beta$. Consider the following choices of vertices: \begin{equation*} O = P_{i+k}; \ A = P_{i+k-1}; \ B = P_{i+2k}; \ C = P_{i+k+1}; \ D = P_{i}. \end{equation*} Again, it follows immediately from the definition of an $N$-representative of type $\beta$ that $(A,O,C)$ and $(B,O,C)$ are positive. Apply \ref{prop:orientation lemma} with $(P_{i+k-1}, P_{i+k}, P_{i+2k-1})$ positive and $P_{i+2k} \in \Int(P_{i+k-1}, P_{i+k}, P_{i+2k-1})$ to get $(A, O, B)$ positive. Apply \ref{prop:orientation lemma} with $(P_{i-1}, P_{i}, P_{i+k-1})$ positive and $P_{i+k} \in \Int(P_{i-1}, P_{i}, P_{i+k-1})$ to get positivity of $(A, O, D)$. Apply \ref{prop:orientation lemma} with $(P_{i}, P_{i+1}, P_{i+k})$ positive and $P_{i+k+1} \in \Int(P_{i}, P_{i+1}, P_{i+k})$ to get positivity of $(C, O, D)$. Then, the forward direction of Lemma \ref{lem:orientation of pentagon} implies $(P_{i+2k}, P_{i+k}, P_{i})$ is positive. \end{proof} \subsection{Invariance of Forward Orbit} \label{subsec:proof of forward invariance} In this section, we prove that $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ are $T_k$-invariant. We start by defining how $T_k$ acts on $\cP_{k,n}$. Geometrically, $T_k$ connects lines that join vertices $k$ clicks apart, and intersects consecutive lines. Algebraically, this action of $T_k$ on $P: \ZZ \rightarrow \RR\PP^2$ depends on the choice of labeling of the image $P' = T_k(P)$. We use the following labeling convention: \begin{equation} \label{eqn:delta k,1 formula} P'_i = P_i P_{i+k} \cap P_{i+1} P_{i+k+1}. \end{equation} Observe that the $k$-nice condition on $P$ ensures that each $P'_i$ is well-defined. Also, the image of a $k$-nice twisted polygon under $T_k$ is again $k$-nice. To see this, notice that $P_{i+k+1}$ is the intersection of $P'_i P'_{i+1}$ and $P'_{i+k}P'_{i+k+1}$, but clearly $P_{i+k+1}$ is distinct from the four points $P'_i, P'_{i+1}, P'_{i+k}, P'_{i+k+1}$, so $P'$ must be $k$-nice. Since $T_k$ commutes with projective transformations, it induces a well-defined map on $\cP_{k,n}$. We proceed to prove the $T_k$-invariance of $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ separately. We start with the following lemma. \begin{lemma} \label{lem:triangle lemma} Given four points $A, B, C, D$ on $\RR^2$ in general position where $D \in \Int(A,B,C)$. Let $O = AB \cap CD$. Then, there exist $s \in (0,1)$ and $t \in (1,\infty)$ such that $O = (1-s)A + sB = (1-t)C + tD$. \end{lemma} \begin{proof} Since $D \in \Int(A,B,C)$, there exists $\lambda_1, \lambda_2, \lambda_3 \in (0,1)$ such that \begin{equation*} \lambda_1+\lambda_2+\lambda_3 = 1; \ \ D = \lambda_1 A + \lambda_2 B + \lambda_3 C. \end{equation*} Taking $s = \frac{\lambda_2}{1 - \lambda_3}$ and $t = \frac{1}{1 - \lambda_3}$ gives us the desired result. \end{proof} \begin{proposition} \label{prop:Sk alpha invariance} For all $k \geq 3$ and $n \geq 2$, $T_k(\cS_{k,n}^\alpha) \subset \cS_{k,n}^\alpha$. \end{proposition} \begin{proof} Given an $N$-representative $P$ of some $[P] \in \cS_{k,n}^\alpha$, we will show that $P' = T_k(P)$ is an $N$-representative of $[T_k(P)]$ of type $\alpha$. Recall that we have already shown $P'$ is $k$-nice, so it suffices to show that for all $i \geq N$, $(P'_{i}, P'_{i+1}, P'_{i+2})$ is positive, $(P'_{i}, P'_{i+1}, P'_{i+k+1})$ is positive, and $P'_{i+k} \in \Int (P'_i, P'_{i+1}, P'_{i+k+1})$. Let $i \geq N$ be fixed. 
Since $P$ is an $N$-representative of type $\alpha$, $P_{j+k} \in \Int(P_{j}, P_{j+1}, P_{j+k+1})$ for all $j \geq N$. Applying Lemma \ref{lem:triangle lemma} with Equation \ref{eqn:delta k,1 formula} on $P'_{j}$ for $j \in \{i,i+1,i+2,i+k,i+k+1\}$ gives us \begin{equation} \label{eqn:Sk alpha invariance} \begin{aligned} & P'_{i} = (1-s_1) P_{i+1} + s_1 P_{i+k+1}; & & P'_{i+1} = (1-t_1) P_{i+1} + t_1 P_{i+k+1}; \\ & P'_{i+1} = (1-s_2) P_{i+2} + s_2 P_{i+k+2}; & & P'_{i+2} = (1-t_2) P_{i+2} + t_2 P_{i+k+2}; \\ & P'_{i+k} = (1-s_3) P_{i+k+1} + s_3 P_{i+2k+1}; & & P'_{i+k+1} = (1-t_3) P_{i+k+1} + t_3 P_{i+2k+1}, \end{aligned} \end{equation} where $s_1,s_2,s_3 \in (0,1)$ and $t_1,t_2,t_3 \in (1,\infty)$. We first show that $(P'_{i}, P'_{i+1}, P'_{i+2})$ is positive. Equation \ref{eqn:vector orientations} and \ref{eqn:Sk alpha invariance} give us \begin{equation*} \begin{aligned} \cO(P'_{i}, P'_{i+1}, P'_{i+2}) &= \det(P'_{i+1} - P'_{i}, P'_{i+2} - P'_{i+1}) \\ &= (t_1 - s_1)(t_2 - s_2) \det(P_{i+k+2} - P_{i+2}, P_{i+1} - P_{i+k+1}). \end{aligned} \end{equation*} Then, since $\cO(P_{i+1}, P_{i+2}, P_{i+k+2}) > 0$ and $P_{i+k+1} \in \Int (P_{i+1}, P_{i+2}, P_{i+k+2})$, Proposition \ref{prop:orientation lemma} implies $\det(P_{i+k+2} - P_{i+2}, P_{i+1} - P_{i+k+1}) > 0$, so $\cO(P'_{i}, P'_{i+1}, P'_{i+2}) > 0$. Next, we show that $P'_{i+k} \in \Int(P'_{i}, P'_{i+1}, P'_{i+k+1})$. Let $r_1 = \frac{1 - s_1}{t_1 - s_1}$ and $r_2 = \frac{s_3}{t_3}$. Equation \ref{eqn:Sk alpha invariance} implies $r_1, r_2 \in (0,1)$ and \begin{equation*} P'_{i+k} = (1 - r_2)(1 - r_1) P'_{i} + (1 - r_2)r_1 P'_{i+1} + r_2 P'_{i+k+1}. \end{equation*} In particular, the coefficients $(1-r_2)(1-r_1)$, $(1-r_2)r_1$, and $r_2$ are all in $(0,1)$ and sum up to $1$, so $P'_{i+k} \in \Int(P'_{i}, P'_{i+1}, P'_{i+k+1})$. Finally, we check $(P'_{i}, P'_{i+1}, P'_{i+k+1})$ is positive. Applying Equation \ref{eqn:vector orientations} and \ref{eqn:Sk alpha invariance} gives us \begin{equation*} \begin{aligned} \det(P'_{i+1} - P'_i, P'_{i+k+1} - P'_{i+k}) &= (t_1 - s_1)(t_3 - s_3) \det(P_{i+k+1} - P_{i+1}, P_{i+2k+1} - P_{i+k+1}) \\ &= (t_1 - s_1)(t_3 - s_3) \cO(P_{i+1}, P_{i+k+1}, P_{i+2k+1}). \end{aligned} \end{equation*} Proposition \ref{prop:whirlpool} implies $\cO(P_{i+1}, P_{i+k+1}, P_{i+2k+1}) > 0$, so $\det(P'_{i+1} - P'_i, P'_{i+k+1} - P'_{i+k}) > 0$ Finally, since $P'_{i+k} \in \Int(P'_{i}, P'_{i+1}, P'_{i+k+1})$, Proposition \ref{prop:orientation lemma} implies $\cO(P'_{i}, P'_{i+1}, P'_{i+k+1}) > 0$. We conclude that $P'$ is an $N$-representative of type $\alpha$. \end{proof} \begin{figure}[ht] \centering \scriptsize \incfig{forward_invariance}{\columnwidth} \caption{Left: Proposition \ref{prop:Sk alpha invariance} configuration. Right: Proposition \ref{prop:Sk beta invariance} configuration.} \label{fig:forward invariance} \end{figure} \begin{proposition} \label{prop:Sk beta invariance} For all $k \geq 3$ and $n \geq 2$, $T_k(\cS_{k,n}^{\beta}) \subset \cS_{k,n}^{\beta}$. \end{proposition} \begin{proof} Given an $N$-representative $P$ of some $[P] \in \cS_{k,n}^\beta$, we will show that $P' = T_k(P)$ is an $N$-representative of $[T_k(P)]$ of type $\beta$. Similar to the proof Proposition \ref{prop:Sk alpha invariance}, it suffices to show that $(P'_{i}, P'_{i+1}, P'_{i+2})$ is positive, $(P'_{i}, P'_{i+1}, P'_{i+k})$ is positive, and $P'_{i+k+1} \in \Int (P'_i, P'_{i+1}, P'_{i+k})$ for all $i \geq N$. Let $i \geq N$ be fixed. 
Since $P$ is an $N$-representative of type $\beta$, $P_{j+k+1} \in \Int(P_{j}, P_{j+1}, P_{j+k})$ for all $j \geq N$. Applying Lemma \ref{lem:triangle lemma} with Equation \ref{eqn:delta k,1 formula} on $P'_{j}$ for $j \in \{i,i+1,i+2,i+k,i+k+1\}$ gives us \begin{equation} \label{eqn:Sk beta invariance} \begin{aligned} & P'_{i} = (1-t_1) P_{i+1} + t_1 P_{i+k+1}; & & P'_{i+1} = (1-s_1) P_{i+1} + s_1 P_{i+k+1}; \\ & P'_{i+1} = (1-t_2) P_{i+2} + t_2 P_{i+k+2}; & & P'_{i+2} = (1-s_2) P_{i+2} + s_2 P_{i+k+2}; \\ & P'_{i+k} = (1-t_3) P_{i+k+1} + t_3 P_{i+2k+1}; & & P'_{i+k+1} = (1-s_3) P_{i+k+1} + s_3 P_{i+2k+1}, \end{aligned} \end{equation} where $s_1,s_2,s_3 \in (0,1)$ and $t_1,t_2,t_3 \in (1,\infty)$. We first show that $(P'_i, P'_{i+1}, P'_{i+2})$ is positive. Equation \ref{eqn:vector orientations} and \ref{eqn:Sk beta invariance} give us \begin{equation*} \begin{aligned} \cO(P'_{i}, P'_{i+1}, P'_{i+2}) &= \det(P'_{i+1} - P'_i, P'_{i+2} - P'_{i+1}) \\ &= (t_1 - s_1)(t_2 - s_2) \det(P_{i+1} - P_{i+k+1}, P_{i+2} - P_{i+k+2}). \end{aligned} \end{equation*} Then, since $\cO(P_{i+1}, P_{i+2}, P_{i+k+1}) > 0$ and $P_{i+k+2} \in \Int (P_{i+1}, P_{i+2}, P_{i+k+1})$, Proposition \ref{prop:orientation lemma} implies $\det(P_{i+1} - P_{i+k+1}, P_{i+2} - P_{i+k+2}) > 0$, so $\cO(P'_{i}, P'_{i+1}, P'_{i+2}) > 0$. Next, we show that $P'_{i+k+1} \in \Int(P'_{i}, P'_{i+1}, P'_{i+k})$. Let $r_1 = \frac{t_1 - 1}{t_1 - s_1}$ and $r_2 = \frac{s_3}{t_3}$. Equation \ref{eqn:Sk beta invariance} implies $r_1, r_2 \in (0,1)$ and \begin{equation*} P'_{i+k+1} = (1 - r_2)(1 - r_1) P'_{i} + (1 - r_2)r_1 P'_{i+1} + r_2 P'_{i+k}. \end{equation*} In particular, the coefficients $(1-r_2)(1-r_1)$, $(1-r_2)r_1$, and $r_2$ are all in $(0,1)$ and sum up to $1$, so $P'_{i+k+1} \in \Int(P'_{i}, P'_{i+1}, P'_{i+k})$. Finally, we check $(P'_{i}, P'_{i+1}, P'_{i+k})$ is positive. Applying Equation \ref{eqn:vector orientations} and \ref{eqn:Sk beta invariance} gives us \begin{equation*} \begin{aligned} \det(P'_{i+1} - P'_i, P'_{i+k} - P'_{i+k+1}) &= (t_1 - s_1)(t_3 - s_3) \det(P_{i+1} - P_{i+k+1}, P_{i+2k+1} - P_{i+k+1}) \\ &= (t_1 - s_1)(t_3 - s_3)\cO(P_{i+2k+1}, P_{i+k+1}, P_{i+1}). \end{aligned} \end{equation*} Proposition \ref{prop:whirlpool} implies $\cO(P_{i+2k+1}, P_{i+k+1}, P_{i+1}) > 0$, so $\det(P'_{i+k+1} - P'_{i+k}, P'_{i+1} - P'_i) > 0$. Finally, since $P'_{i} \in \Int(P'_{i+1}, P'_{i+k}, P'_{i+k+1})$, Proposition \ref{prop:orientation lemma} implies $\cO(P'_{i+1}. P'_{i+k}, P'_{i+k+1}) > 0$. We conclude that $P'$ is an $N$-representative of type $\beta$. \end{proof} \subsection{Invariance of Backward Orbit} \label{subsec:proof of backward invariance} In this section we complete the proof of Theorem \ref{thm:spiral polygon invariance} by showing that $\cS_{k,n}^\alpha$ and $\cS_{k,n}^\beta$ are $T_k^{-1}$-invariant. One can derive a formula for $T_k^{-1}$ from Equation \ref{eqn:delta k,1 formula}. For $P' \in \cP_{k,n}$, $P = T_k^{-1}(P')$ is given by \begin{equation} \label{eqn:Tk inverse formula} P_i = P'_{i-k-1} P'_{i-k} \cap P'_{i-1} P'_{i}. \end{equation} Again, $T_k^{-1}$ is well-defined on $k$-nice twisted polygons, and one can check that the image is $k$-nice. \begin{proposition} \label{prop:Sk alpha inverse invariance} For all $k \geq 3$ and $n \geq 2$, $T_k^{-1}(S_{k,n}^\alpha) \subset S_{k,n}^\alpha$. \end{proposition} \begin{proof} Given $P'$ an $N$-representative of type $\alpha$, we will show that $P = T_k^{-1}(P')$ is a $(N+k+1)$-representative of type $\alpha$. 
We know that $P$ is $k$-nice, so it suffices to show that for all $i \geq N+k+1$, $(P_i, P_{i+1}, P_{i+2})$ is positive, $(P_i, P_{i+1}, P_{i+k+1})$ is positive, and $P_{i+k} \in \Int(P_i, P_{i+1}, P_{i+k+1})$. Let $i \geq N+k+1$ be fixed. Since $P'$ is an $N$-representative of type $\alpha$, we must have $P'_{j+k} \in \Int(P'_j, P'_{j+1}, P'_{j+k+1})$ for all $j \geq N$. Applying Lemma \ref{lem:triangle lemma} with Equation \ref{eqn:Tk inverse formula} on $P_j$ for $j \in \{i, i+1, i+2, i+k, i+k+1\}$ gives us \begin{equation} \label{eqn:Sk alpha inverse formula} \begin{aligned} & P_i = (1 - s_1) P'_{i-k} + s_1 P'_{i-k-1}; & & P_i = (1 - t_1) P'_{i} + t_1 P'_{i-1}; \\ & P_{i+1} = (1 - s_2) P'_{i-k+1} + s_2 P'_{i-k}; & & P_{i+1} = (1 - t_2) P'_{i+1} + t_3 P'_{i}; \\ & P_{i+2} = (1 - s_3) P'_{i-k+2} + s_3 P'_{i-k+1}; & & P_{i+k} = (1 - s_4) P'_i + s_4 P'_{i-1}; \\ & P_{i+k+1} = (1 - s_5) P'_{i+1} + s_5 P'_i, & & \end{aligned} \end{equation} where $s_1, s_2, s_3, s_4, s_5 \in (0,1)$ and $t_1, t_2 \in (1, \infty)$. We first show that $(P_i, P_{i+1}, P_{i+k+1})$ is positive. From Equation \ref{eqn:Sk alpha inverse formula} we have \begin{equation*} \cO(P_i, P_{i+1}, P_{i+k+1}) = (t_1t_2(1 - s_5) - t_1(1 - t_2)s_5) \cO(P'_{i-1}, P'_i, P'_{i+1}). \end{equation*} It follows that $\cO(P_i, P_{i+1}, P_{i+k+1}) > 0$, so $(P_i, P_{i+1}, P_{i+k+1})$ is positive. Next, we show that $P_{i+k} \in \Int(P_i, P_{i+1}, P_{i+k+1})$. Let $r_1 = \frac{t_2 - 1}{t_2 - s_5}$ and $r_2 = \frac{s_4}{t_1}$. Equation \ref{eqn:Sk alpha inverse formula} implies $r_1, r_2 \in (0,1)$ and \begin{equation*} P_{i+k} = (1 - r_2)(1 - r_1) P_{i+1} + (1 - r_2)r_1 P_{i+k+1} + r_2 P_{i}. \end{equation*} In particular, the coefficients $(1 - r_2)(1 - r_1)$, $(1 - r_2)r_1$, and $r_2$ are all in $(0,1)$ and sum up to 1, so $P_{i+k} \in \Int(P_i, P_{i+1}, P_{i+k+1})$. Finally, we check $(P_i, P_{i+1}, P_{i+2})$ is positive. We will invoke Lemma \ref{lem:orientation of pentagon}. Consider the following choices of vertices: \begin{equation*} O = P_{i+1}; \ \ A = P_{i}; \ \ B = P_{i+k+1}; \ \ C = P_{i+2}; \ \ D = P'_{i-k+1}. \end{equation*} We have shown that $(A, O, B)$ is positive. The positivity of $(B, O, C)$ follows from applying Proposition \ref{prop:orientation lemma} with $(P_{i+1}, P_{i+2}, P_{i+k+2})$ positive and $P_{i+k+1} \in \Int(P_{i+1}, P_{i+2}, P_{i+k+2})$. Next, observe that \begin{equation*} \begin{aligned} & \cO(A, O, D) = s_1 s_2 \cO(P'_{i-k-1}, P'_{i-k}, P'_{i-k+1}); \\ & \cO(C, O, D) = (1 - s_3) s_2 \cO(P'_{i-k}, P'_{i-k+1}, P'_{i-k+2}); \\ & \cO(B, O, D) = s_2(1 - s_5) \cO(P'_{i-k}, P'_{i-k+1}, P'_{i+1}) + s_2s_5 \cO(P'_{i-k}, P'_{i-k+1}, P'_{i}); \end{aligned} \end{equation*} Then, positivity of $(A, O, D)$ and $(C, O, D)$ follows from positivity of $(P'_{i-k-1}, P'_{i-k}, P'_{i-k+1})$ and $(P'_{i-k}, P'_{i-k+1}, P'_{i-k+2})$. To see that $(B, O, D)$ is positive, apply Proposition \ref{prop:orientation lemma} on $(P'_{i-k}, P'_{i-k+1}, P'_{i+1})$ positive and $P'_{i} \in \Int(P'_{i-k}, P'_{i-k+1}, P'_{i+1})$ to get $(P'_{i-k}, P'_{i-k+1}, P'_i)$ positive. The backward direction of Lemma \ref{lem:orientation of pentagon} then implies $(P_i, P_{i+1}, P_{i+2})$ is positive. We conclude that $P$ is a $(N+k+1)$-representative of type $\alpha$. \end{proof} \begin{figure}[ht] \centering \scriptsize \incfig{backward_invariance}{0.85\columnwidth} \caption{Left: Proposition \ref{prop:Sk alpha inverse invariance} configuration. 
Right: Proposition \ref{prop:Sk beta inverse invariance} configuration.} \label{fig:backward invariance} \end{figure} \begin{proposition} \label{prop:Sk beta inverse invariance} For all $k \geq 3$ and $n \geq 2$, $T_k^{-1}(S_{k,n}^\beta) \subset S_{k,n}^\beta$. \end{proposition} \begin{proof} The proof is similar. Given $P'$ an $N$-representative of type $\beta$, we will show that $P = T_k^{-1}(P')$ is an $(N+k+1)$-representative of type $\beta$. We know that $P$ is $k$-nice, so it suffices to show that for all $i \geq N+k+1$, $(P_i, P_{i+1}, P_{i+2})$ is positive, $(P_i, P_{i+1}, P_{i+k})$ is positive, and $P_{i+k+1} \in \Int(P_i, P_{i+1}, P_{i+k})$. Let $i \geq N+k+1$ be fixed. Since $P'$ is an $N$-representative of type $\beta$, we must have $P'_{j+k+1} \in \Int(P'_j, P'_{j+1}, P'_{j+k})$ for all $j \geq N$. Applying Lemma \ref{lem:triangle lemma} with Equation \ref{eqn:Tk inverse formula} on $P_j$ for $j \in \{i, i+1, i+2, i+k, i+k+1\}$ gives us \begin{equation} \label{eqn:Sk beta inverse formula} \begin{aligned} & P_i = (1 - s_1) P'_{i-k} + s_1 P'_{i-k-1}; & & P_i = (1 - t_1) P'_{i-1} + t_1 P'_{i}; \\ & P_{i+1} = (1 - s_2) P'_{i-k+1} + s_2 P'_{i-k}; & & P_{i+1} = (1 - t_2) P'_{i} + t_2 P'_{i+1}; \\ & P_{i+2} = (1 - s_3) P'_{i-k+2} + s_3 P'_{i-k+1}; & & P_{i+k} = (1 - s_4) P'_{i-1} + s_4 P'_{i}; \\ & P_{i+k+1} = (1 - s_5) P'_{i} + s_5 P'_{i+1}, & & \end{aligned} \end{equation} where $s_1, s_2, s_3, s_4, s_5 \in (0,1)$ and $t_1, t_2 \in (1, \infty)$. We first show that $(P_i, P_{i+1}, P_{i+k})$ is positive. From Equation \ref{eqn:Sk beta inverse formula} we have \begin{equation*} \cO(P_i, P_{i+1}, P_{i+k}) = (t_1t_2(1 - s_4) - (1-t_1)t_2s_4) \cO(P'_{i-1}, P'_i, P'_{i+1}). \end{equation*} It follows that $\cO(P_i, P_{i+1}, P_{i+k}) > 0$, so $(P_i, P_{i+1}, P_{i+k})$ is positive. Next, we show that $P_{i+k+1} \in \Int(P_i, P_{i+1}, P_{i+k})$. Let $r_1 = \frac{1 - s_4}{t_1 - s_4}$ and $r_2 = \frac{s_5}{t_2}$. Equation \ref{eqn:Sk beta inverse formula} implies $r_1, r_2 \in (0,1)$ and \begin{equation*} P_{i+k+1} = (1 - r_2)(1 - r_1) P_{i+k} + (1 - r_2)r_1 P_{i} + r_2 P_{i+1}. \end{equation*} In particular, the coefficients $(1 - r_2)(1 - r_1)$, $(1 - r_2)r_1$, and $r_2$ are all in $(0,1)$ and sum up to 1, so $P_{i+k+1} \in \Int(P_i, P_{i+1}, P_{i+k})$. Finally, we check $(P_i, P_{i+1}, P_{i+2})$ is positive. We will invoke Lemma \ref{lem:orientation of pentagon}. Consider the following choices of vertices: \begin{equation*} O = P_{i+1}; \ \ A = P_{i}; \ \ B = P_{i+k+1}; \ \ C = P_{i+2}; \ \ D = P'_{i-k+1}. \end{equation*} We have shown that $(B, O, C)$ is positive. The positivity of $(A, O, B)$ follows from applying Proposition \ref{prop:orientation lemma} with $(P_{i}, P_{i+1}, P_{i+k})$ positive and $P_{i+k+1} \in \Int(P_{i}, P_{i+1}, P_{i+k})$. Next, observe that \begin{equation*} \begin{aligned} & \cO(A, O, D) = s_1 s_2 \cO(P'_{i-k-1}, P'_{i-k}, P'_{i-k+1}); \\ & \cO(C, O, D) = (1 - s_3) s_2 \cO(P'_{i-k}, P'_{i-k+1}, P'_{i-k+2}); \\ & \cO(B, O, D) = s_2 (1 - s_5) \cO(P'_{i-k}, P'_{i-k+1}, P'_{i}) + s_2 s_5 \cO(P'_{i-k}, P'_{i-k+1}, P'_{i+1}); \\ \end{aligned} \end{equation*} Then, positivity of $(A, O, D)$ and $(C, O, D)$ follows from positivity of $(P'_{i-k-1}, P'_{i-k}, P'_{i-k+1})$ and $(P'_{i-k}, P'_{i-k+1}, P'_{i-k+2})$. To see that $(B, O, D)$ is positive, apply Proposition \ref{prop:orientation lemma} on $(P'_{i-k}, P'_{i-k+1}, P'_{i})$ positive and $P'_{i+1} \in \Int(P'_{i-k}, P'_{i-k+1}, P'_{i})$ to get $(P'_{i-k}, P'_{i-k+1}, P'_{i+1})$ positive.
The backward direction of Lemma \ref{lem:orientation of pentagon} then implies $(P_i, P_{i+1}, P_{i+2})$ is positive. We conclude that $P$ is an $(N+k+1)$-representative of type $\beta$. \end{proof} We conclude this section by observing that Propositions \ref{prop:Sk alpha invariance}, \ref{prop:Sk beta invariance}, \ref{prop:Sk alpha inverse invariance}, and \ref{prop:Sk beta inverse invariance} together prove Theorem \ref{thm:spiral polygon invariance}. \section{Coordinate Representation of 3-Spirals} \label{sec:formula} \subsection{The Tic-Tac-Toe Grids} \label{subsec:definition of tic tac toe} Recall the intervals $I = (-\infty, 0)$, $J = (0,1)$, $K = (1,\infty)$ from $\S$\ref{subsec:tic tac toe}. Products of these intervals partition $\RR^2$, away from the lines where a coordinate equals $0$ or $1$, into a $3 \times 3$ grid. See Figure \ref{fig:tic-tac-toe grid}. We make the following definition: \begin{definition} \label{def:checkerboard polygon} For $n \geq 2$, let $S_n(I, J)$ (resp.\ $S_n(K, J)$, $S_n(J, I)$, $S_n(J, K)$) be the subset of $\cP_n$ that satisfies the following: given $[P] \in S_n(I, J)$ (resp.\ $S_n(K, J)$, $S_n(J, I)$, $S_n(J, K)$), for all $i \in \{0, \ldots, n-1\}$, $(x_{2i}, x_{2i+1}) \in I \times J$ (resp.\ $K \times J$, $J \times I$, $J \times K$). \end{definition} The following symmetries of the four grids follow directly from Definition \ref{def:checkerboard polygon}. \begin{proposition} \label{prop:tic tac toe symmetry} For $i \in \ZZ$, define a map $\sigma_i: \ZZ \rightarrow \ZZ$ by $\sigma_i(x) = x + i$. Define a map $\iota: \ZZ \rightarrow \ZZ$ by $\iota(x) = -x$. Given $[P] \in \cP_n$, the following are true: \begin{itemize} \item If $[P] \in S_n(I,J)$ \textnormal{(}resp.\ $S_n(K, J)$, $S_n(J, I)$, $S_n(J, K)$\textnormal{)}, then $[P \circ \sigma_i] \in S_n(I,J)$ \textnormal{(}resp.\ $S_n(K, J)$, $S_n(J, I)$, $S_n(J, K)$\textnormal{)} for all $i \in \ZZ$. \item $[P] \in S_n(I,J)$ if and only if $[P \circ \iota] \in S_n(J,I)$. \item $[P] \in S_n(K,J)$ if and only if $[P \circ \iota] \in S_n(J,K)$. \end{itemize} \end{proposition} To understand the geometry implied by the corner invariants, we need to examine what happens when the corner invariants $(x_{2i}, x_{2i+1})$ take values in $\{0, 1, \infty\}$. \begin{proposition} \label{prop:P2 position and corner invariants} Let $P$ be a representative of $[P] \in \cP_n$. Let $(x_{0}, \ldots, x_{2n-1})$ be its corner invariants. For all $i$, suppose the four points $P_{i-2}$, $P_{i-1}$, $P_{i}$, $P_{i+1}$ are fixed. Then, we have the following equivalence between the position of $P_{i+2}$ and the corner invariants $(x_{2i}, x_{2i+1})$: \begin{center} \begin{tabular}{c|c||c|c} Configuration & Coordinates & Configuration & Coordinates \\ \hline $P_{i+2} \in P_{i+1} P_{i}$ & $x_{2i} = 0$ & $P_{i+2} \in P_{i-1} P_{i+1}$ & $x_{2i+1} = 0$ \\ $P_{i+2} \in P_{i+1} P_{i-2}$ & $x_{2i} = 1$ & $P_{i+2} \in P_{i-1} P_{i-2}$ & $x_{2i+1} = 1$ \\ $P_{i+2} \in P_{i+1} P_{i-1}$ & $x_{2i} = \infty$ & $P_{i+2} \in P_{i-1} P_{i}$ & $x_{2i+1} = \infty $ \end{tabular} \end{center} \end{proposition} \begin{proof} Consider the following lines: \begin{center} \begin{tabular}{c c c c} $l_1 = P_{i+1} P_{i-2}$; & $l_2 = P_{i+1} P_{i-1}$; & $l_3 = P_{i+1} P_{i}$; & $l_4 = P_{i+1} P_{i+2}$; \\ $m_1 = P_{i-1} P_{i+2}$; & $m_2 = P_{i-1} P_{i+1}$; & $m_3 = P_{i-1} P_{i}$; & $m_4 = P_{i-1} P_{i-2}$. \end{tabular} \end{center} Equations \ref{eqn:chi definition lines} and \ref{eqn:corner invariant def} imply $x_{2i} = \chi(l_1,l_2,l_3,l_4)$ and $x_{2i+1} = \chi(m_1,m_2,m_3,m_4)$.
This yields \begin{center} \begin{tabular}{c|c|c||c|c|c} Configuration & Lines & Coordinates & Configuration & Lines & Coordinates \\ \hline $P_{i+2} \in P_{i+1} P_{i}$ & $l_4 = l_3$ & $x_{2i} = 0$ & $P_{i+2} \in P_{i-1} P_{i+1}$ & $m_1 = m_2$ & $x_{2i+1} = 0$ \\ $P_{i+2} \in P_{i+1} P_{i-2}$ & $l_4 = l_1$ & $x_{2i} = 1$ & $P_{i+2} \in P_{i-1} P_{i-2}$ & $m_1 = m_4$ & $x_{2i+1} = 1$ \\ $P_{i+2} \in P_{i+1} P_{i-1}$ & $l_4 = l_2$ & $x_{2i} = \infty$ & $P_{i+2} \in P_{i-1} P_{i}$ & $m_1 = m_3$ & $x_{2i+1} = \infty$ \end{tabular} \end{center} which is precisely the relationship described in the proposition. \end{proof} \begin{remark} \label{rmk:P2 position and corner invariants} Proposition \ref{prop:P2 position and corner invariants} also gives us a way to determine the position of $P_{i+2}$ when $(x_{2i}, x_{2i+1})$ do not take value in $0,1,\infty$. Suppose the four points $P_{i-2}, P_{i-1}, P_i, P_{i+1}$ are in general position. For $i, j, k \in \{1,2,3\}$ distinct, we define $U_{i,j}$ to be the connected component of $\RR\PP^2 - (l_i \cup l_j)$ that has trivial intersection with $l_k$. For $i,j,k \in \{2,3,4\}$ distinct, we define $V_{i,j}$ to be the connected component of $\RR\PP^2 - (m_i \cup m_j)$ that has trivial intersection with $m_k$. By Proposition \ref{prop:P2 position and corner invariants} and continuity of $\chi$, we have the following: \begin{table}[h!] \centering \begin{tabular}{c|c||c|c} Configuration & Coordinates & Configuration & Coordinates \\ \hline $P_{i+2} \in U_{2,3}$ & $x_{2i} = I$ & $P_{i+2} \in V_{2,3}$ & $x_{2i+1} = I$ \\ $P_{i+2} \in U_{1,3}$ & $x_{2i} = J$ & $P_{i+2} \in V_{2,4}$ & $x_{2i+1} = J$ \\ $P_{i+2} \in U_{1,2}$ & $x_{2i} = K$ & $P_{i+2} \in V_{3,4}$ & $x_{2i+1} = K $ \end{tabular} \end{table} \end{remark} Proposition \ref{prop:P2 position and corner invariants} has the following nice application: \begin{corollary} \label{cor:grids are 3 nice} Given $P \in \cP_n$ with corner invariants $(x_0,\ldots, x_{2n-1})$. If none of the corner invariants takes value from $\{0,1,\infty\}$, then $P$ is $3$-nice. In particular, every four consecutive points of $P$ are in general position. \end{corollary} \begin{proof} Using Proposition \ref{prop:P2 position and corner invariants} we may check that \begin{table}[h!] \centering \begin{tabular}{c|c||c|c} Collinearity & Coordinates & Collinearity & Coordinates \\ \hline $P_{i-2}$, $P_{i-1}$, $P_{i+1}$ & $x_{2i-1} = \infty$ & $P_{i-1}$, $P_{i}$, $P_{i+2}$ & $x_{2i+1} = \infty$ \\ $P_{i-2}$, $P_{i-1}$, $P_{i+2}$ & $x_{2i+1} = 1$ & $P_{i-1}$, $P_{i+1}$, $P_{i+2}$ & $x_{2i+1} = 0$ \\ $P_{i-2}$, $P_{i+1}$, $P_{i+2}$ & $x_{2i} = 1$ & $P_{i}$, $P_{i+1}$, $P_{i+2}$ & $x_{2i} = 0$ \\ $P_{i-1}$, $P_{i}$, $P_{i+1}$ & $x_{2i-2} = 0$ & & \\ \end{tabular} \end{table} All seven cases contradict the assumption in the corollary. Therefore, the four points $P_{i-2}$, $P_{i-1}$, $P_{i+1}$, $P_{i+2}$ are in general position, and the four consecutive points $P_{i-1}$, $P_i$, $P_{i+1}$, $P_{i+2}$ are in general position for all $i \in \ZZ$. This shows $P$ is 3-nice, and every four consecutive points of $P$ are in general position. \end{proof} Our goal of this section is to prove the following correspondence theorem: \begin{theorem} \label{thm:tic-tac-toe and 3-spirals} For all $n \geq 2$, $\cS_{3,n}^\alpha = S_n(I,J)$, $\cS_{3,n}^\beta = S_n(K,J)$. \end{theorem} This theorem immediately produces the following important corollary. 
\begin{corollary} \label{cor:tic-tac-toe and 3-spirals} For all $n \geq 2$, the four grids $S_n(I,J)$, $S_n(K,J)$, $S_n(J,I)$, $S_n(J,K)$ are invariant under $T_3$. \end{corollary} \begin{proof} The cases $S_n(I,J)$ and $S_n(K,J)$ follow immediately from Theorems \ref{thm:spiral polygon invariance} and \ref{thm:tic-tac-toe and 3-spirals}. For the other two grids, recall the maps $\sigma_i$ and $\iota$ from Proposition \ref{prop:tic tac toe symmetry}. Observe that Equation \ref{eqn:delta k,1 formula} implies $T_3([P \circ \iota]) = [T_3(P) \circ \iota \circ \sigma_4]$. Then, by Proposition \ref{prop:tic tac toe symmetry}, $[P] \in S_n(J,I)$ implies $[P \circ \iota] \in S_n(I,J)$, which was shown to imply $T_3([P \circ \iota]) \in S_n(I,J)$. Finally, observe that \begin{equation*} T_3([P]) = T_3([(P \circ \iota \circ \sigma_4) \circ (\sigma_{-4} \circ \iota)]). \end{equation*} It follows that $T_3([P]) \in S_n(J,I)$. The case $S_n(J,K)$ is completely analogous. \end{proof} \subsection{The Correspondence of \texorpdfstring{$\cS_{3,n}^\alpha$}{S3n alpha} and \texorpdfstring{$S_n(I,J)$}{Sn(I,J)}} Here we show that $\cS_{3,n}^\alpha$ coincides with $S_n(I,J)$. We will first show that the corner invariants of a $0$-representative $P$ of any $[P] \in \cS_{3,n}^\alpha$ satisfy the defining condition of $S_n(I,J)$. Then, we will show that we can find $N$-representatives of type $\alpha$ for all $N \in \ZZ$ given any $[P] \in S_n(I,J)$. \begin{lemma} \label{lem:S3 alpha in S(I,J)} If $P$ is an $N$-representative of $[P] \in \cS_{3,n}^\alpha$, then $P_{i+3} \in \Int(P_i, P_{i+1}, P_{i+2})$ for all $i > N$. \end{lemma} \begin{proof} Since $P$ is an $N$-representative, $(P_i, P_{i+1}, P_{i+2})$ is positive. We may normalize with $\Aff_2^+(\RR)$ if necessary so that $P_i = (-1,0)$, $P_{i+1} = (0,0)$, and $P_{i+2} = (0,1)$. Denote $P_{i+3} = (x,y)$. It suffices to show that $x < 0$, $y > 0$, and $y - x < 1$. First, since $(P_{i+1}, P_{i+2}, P_{i+3})$ is positive, we must have $x < 0$. Next, since $P_{i+3} \in \Int(P_{i}, P_{i+1}, P_{i+4})$ and $(P_i, P_{i+1}, P_{i+4})$ is positive, Proposition \ref{prop:orientation lemma} implies $(P_i, P_{i+1}, P_{i+3})$ is positive, which gives us $y > 0$. Finally, since $i-1 \geq N$, $(P_{i-1}, P_{i}, P_{i+3})$ is positive, and $P_{i+2} \in \Int (P_{i-1}, P_{i}, P_{i+3})$. Again, by Proposition \ref{prop:orientation lemma}, $(P_{i+2}, P_{i}, P_{i+3})$ is positive, which gives us $y - x < 1$ as desired. We conclude that $P_{i+3} \in \Int(P_{i}, P_{i+1}, P_{i+2})$. \end{proof} \begin{proposition} \label{prop:S3 alpha in S(I,J)} For all $n \geq 2$, $\cS_{3,n}^\alpha \subset S_n(I,J)$. \end{proposition} \begin{proof} Let $P$ be a $(-3)$-representative of $[P] \in \cS_{3,n}^\alpha$. Let $(x_0, \ldots, x_{2n-1})$ be its corner invariants. We would like to show that $(x_{2i}, x_{2i+1}) \in I \times J$ for all $i$. By Lemma \ref{lem:S3 alpha in S(I,J)}, $P_{i+1} \in \Int (P_{i-2}, P_{i-1}, P_i)$ for all $i \geq 0$. Denote $O = P_{i-2} P_{i+1} \cap P_{i-1} P_i$. Since $P_{i+1} \in \Int (P_{i-2}, P_{i-1}, P_{i+2})$ and $P_{i+2} \in \Int (P_{i-1}, P_{i}, P_{i+1})$, we see that $P_{i+2}$ must be in the intersection of the halfplane above $P_{i-2} P_{i+1}$ and $\Int (P_{i-1}, P_{i}, P_{i+1})$, which is precisely $\Int(O, P_i, P_{i+1})$ (see the shaded region in Figure \ref{fig:alpha IJ}). By Remark \ref{rmk:P2 position and corner invariants}, $(x_{2i}, x_{2i+1}) \in I \times J$ if $P_{i+2} \in \Int(O, P_i, P_{i+1})$. This concludes the proof.
\end{proof} \begin{figure}[ht] \centering \footnotesize \incfig{alpha_IJ}{0.4\columnwidth} \caption{Configuration of Proposition \ref{prop:S3 alpha in S(I,J)} and Lemma \ref{lem:S3 alpha induction}.} \label{fig:alpha IJ} \end{figure} \begin{lemma} \label{lem:S3 alpha induction} Given a $3$-nice sequence $P: \ZZ \rightarrow \RR\PP^2$ and an integer $i \in \ZZ$, let $(x_{2i}, x_{2i+1})$ be the corner invariants of $P_i$ as in Equation \ref{eqn:corner invariant def}. If $(P_{i-2}, P_{i-1}, P_i)$, $(P_{i-1}, P_i, P_{i+1})$ are both positive, $P_{i+1} \in \Int(P_{i-2}, P_{i-1}, P_i)$, and $(x_{2i}, x_{2i+1}) \in I \times J$, then $P_{i+2} \in \AA^2$, $(P_{i-2}, P_{i-1}, P_{i+2})$, $(P_{i}, P_{i+1}, P_{i+2})$ are both positive, $P_{i+1} \in \Int(P_{i-2}, P_{i-1}, P_{i+2})$, and $P_{i+2} \in \Int(P_{i-1}, P_i, P_{i+1})$. \end{lemma} \begin{proof} We will give a geometric argument. Figure \ref{fig:alpha IJ} shows a configuration of vertices $P_{i-2}$, $P_{i-1}$, $P_i$, $P_{i+1}$ that satisfies the assumption, where $O = P_{i-2} P_{i+1} \cap P_{i-1} P_i$. Since $(x_{2i}, x_{2i+1}) \in I \times J$, Remark \ref{rmk:P2 position and corner invariants} implies we must have $P_{i+2} \in \Int(O, P_{i},P_{i+1}) \subset \AA^2$, in which case all the conclusions of this lemma hold. \end{proof} \begin{proposition} \label{prop:S3 alpha equals in S(I,J)} For all $n \geq 2$, $\cS_{3,n}^\alpha = S_n(I,J)$. \end{proposition} \begin{proof} From the conclusion of Proposition \ref{prop:S3 alpha in S(I,J)} we only need to show $\cS_{3,n}^\alpha \supset S_n(I,J)$. Given $[P] \in S_n(I,J)$, let $P$ be a representative that satisfies $P_{N} = (-1,0)$, $P_{N+1} = (1,0)$, $P_{N+2} = (0,2)$, $P_{N+3} = (0,1)$. Notice that $P$ exists since, by Corollary \ref{cor:grids are 3 nice}, any four consecutive points of any representative of $[P]$ must be in general position. One may first obtain an arbitrary representative, say using the reconstruction formula in \cite{schwartz2008discretemonodromypentagramsmethod}, and then normalize by a projective transformation as in Theorem \ref{thm:proj transform}. We claim that $P$ is an $N$-representative of type $\alpha$. To start with, Corollary \ref{cor:grids are 3 nice} implies $P$ is 3-nice. Also, observe that $(P_{N}, P_{N+1}, P_{N+2})$, $(P_{N+1}, P_{N+2}, P_{N+3})$ are both positive, $P_{N+3} \in \Int(P_{N}, P_{N+1}, P_{N+2})$, and $(x_{2N+4},x_{2N+5}) \in I \times J$, so the conditions of Lemma \ref{lem:S3 alpha induction} hold. We may inductively apply Lemma \ref{lem:S3 alpha induction} to get $(P_{i}, P_{i+1}, P_{i+2})$, $(P_{i}, P_{i+1}, P_{i+4})$ positive and $P_{i+3} \in \Int(P_i, P_{i+1}, P_{i+4})$ for all $i \geq N$. This implies $P$ is an $N$-representative of type $\alpha$, so $[P] \in \cS_{3,n}^\alpha$. \end{proof} \subsection{The Correspondence of \texorpdfstring{$\cS_{3,n}^\beta$}{S3n beta} and \texorpdfstring{$S_n(K,J)$}{Sn(K,J)}} Here we show that $\cS_{3,n}^\beta$ coincides with $S_n(K,J)$. The ideas behind the proofs are essentially the same as the ones in the previous subsection. We just need to modify the details of our constructions to suit the type $\beta$ spirals and $S_n(K,J)$. \begin{lemma} \label{lem:S3 beta in S(K,J)} If $P$ is an $N$-representative of $[P] \in \cS_{3,n}^\beta$, then the quadrilateral with vertices $(P_i, P_{i+1}, P_{i+2}, P_{i+3})$ is convex for all $i > N$. \end{lemma} \begin{proof} Since $P$ is an $N$-representative, we know that $(P_i, P_{i+1}, P_{i+2})$ is positive.
We may normalize with $\Aff_2^+(\RR)$ if necessary so that $P_i = (-1,0)$, $P_{i+1} = (0,0)$, and $P_{i+2} = (0,1)$. Denote $P_{i+3} = (x,y)$. It suffices to show that $x < 0$, $y > 0$, and $y - x > 1$. First, since $(P_{i+1}, P_{i+2}, P_{i+3})$ and $(P_{i}, P_{i+1}, P_{i+3})$ are positive, we must have $x < 0$ and $y > 0$. Next, since $i - 1 \geq N$, $(P_{i-1}, P_i, P_{i+2})$ is positive, and $P_{i+3} \in \Int(P_{i-1}, P_i, P_{i+2})$. By Proposition \ref{prop:orientation lemma}, $(P_i, P_{i+2}, P_{i+3})$ is positive, which implies $y - x > 1$. We conclude that the quadrilateral $(P_i, P_{i+1}, P_{i+2}, P_{i+3})$ is convex. \end{proof} \begin{proposition} \label{prop:S3 beta in S(K,J)} For all $n \geq 2$, $\cS_{3,n}^\beta \subset S_n(K,J)$. \end{proposition} \begin{proof} We mimic the proof of Proposition \ref{prop:S3 alpha in S(I,J)}. Let $P$ be a $(-3)$-representative of $[P] \in \cS_{3,n}^\beta$. Let $(x_0, \ldots, x_{2n-1})$ be its corner invariants. We would like to show that $(x_{2i}, x_{2i+1}) \in K \times J$ for all $i$. By Lemma \ref{lem:S3 beta in S(K,J)}, the quadrilateral $(P_{i-2}, P_{i-1}, P_i, P_{i+1})$ must be convex. Next, since $P$ is a $(-3)$-representative of type $\beta$, $P_{i+2} \in \Int(P_{i-2}, P_{i-1}, P_{i+1})$ for all $i \geq 0$ (see Figure \ref{fig:beta KJ}). Referring back to Remark \ref{rmk:P2 position and corner invariants}, the convexity of $(P_{i-2}, P_{i-1}, P_i, P_{i+1})$ implies that the line $P_{i} P_{i+1}$ does not pass through the interior of the triangle $(P_{i-2}, P_{i-1}, P_{i+1})$, so $(x_{2i}, x_{2i+1}) \in K \times J$ whenever $P_{i+2} \in \Int(P_{i-2}, P_{i-1}, P_{i+1})$. \end{proof} \begin{lemma} \label{lem:S3 beta induction} With the same setup as Lemma \ref{lem:S3 alpha induction}, if $(P_{i-2}, P_{i-1}, P_i)$, $(P_{i-1}, P_i, P_{i+1})$ are both positive, the quadrilateral $(P_{i-2}, P_{i-1}, P_i, P_{i+1})$ is convex, and $(x_{2i}, x_{2i+1}) \in K \times J$, then $P_{i+2} \in \Int(P_{i-2}, P_{i-1}, P_{i+1})$, in which case the quadrilateral $(P_{i-1}, P_i, P_{i+1}, P_{i+2})$ is convex, and $(P_{i}, P_{i+1}, P_{i+2})$, $(P_{i-1}, P_{i}, P_{i+2})$ are positive. \end{lemma} \begin{proof} Recall from the proof of Proposition \ref{prop:S3 beta in S(K,J)} that if the quadrilateral $(P_{i-2}, P_{i-1}, P_i, P_{i+1})$ is convex, then the line $P_i P_{i+1}$ does not pass through the interior of the triangle $(P_{i-2}, P_{i-1}, P_{i+1})$. Then, from $(x_{2i}, x_{2i+1}) \in K \times J$, Remark \ref{rmk:P2 position and corner invariants} implies $P_{i+2} \in \Int(P_{i-2}, P_{i-1}, P_{i+1})$, in which case all the conclusions of this lemma will hold. See Figure \ref{fig:beta KJ} for a visualization of the configurations. \end{proof} \begin{figure}[ht] \centering \footnotesize \incfig{beta_KJ}{0.3\columnwidth} \caption{Configuration of Proposition \ref{prop:S3 beta in S(K,J)} and Lemma \ref{lem:S3 beta induction}.} \label{fig:beta KJ} \end{figure} \begin{proposition} \label{prop:S3 beta equals S(J,K)} For all $n \geq 2$, $\cS_{3,n}^\beta = S_n(K,J)$. \end{proposition} \begin{proof} From Proposition \ref{prop:S3 beta in S(K,J)} it suffices to show $\cS_{3,n}^\beta \supset S_n(K,J)$. Given $[P] \in S_n(K,J)$, by Corollary \ref{cor:grids are 3 nice} we can find a representative $P$ that satisfies $P_{N} = (0,0)$, $P_{N+1} = (1,0)$, $P_{N+2} = (1,1)$, $P_{N+3} = (0,1)$. We will show that $P$ is an $N$-representative of type $\beta$. Corollary \ref{cor:grids are 3 nice} shows that $P$ is 3-nice.
To see that $(P_i, P_{i+1}, P_{i+2})$, $(P_{i}, P_{i+1}, P_{i+3})$ are positive, and $P_{i+4} \in \Int(P_{i}, P_{i+1}, P_{i+3})$, we apply an inductive argument using Lemma \ref{lem:S3 beta induction}. Observe that for the base case we have $(P_{N}, P_{N+1}, P_{N+2})$, $(P_{N+1}, P_{N+2}, P_{N+3})$ both positive, the quadrilateral $(P_N, P_{N+1}, P_{N+2}, P_{N+3})$ is convex (it is a square), and $(x_{2i}, x_{2i+1}) \in K \times J$ for all $i$ by the assumption that $[P] \in S_n(K,J)$. The conditions of Lemma \ref{lem:S3 beta induction} thus all hold, so we may inductively apply Lemma \ref{lem:S3 beta induction} to get that $(P_i, P_{i+1}, P_{i+2})$, $(P_{i}, P_{i+1}, P_{i+3})$ are positive, and $P_{i+4} \in \Int(P_{i}, P_{i+1}, P_{i+3})$ for all $i \geq N$. This implies $P$ is an $N$-representative of type $\beta$, so $[P] \in \cS_{3,n}^\beta$. \end{proof} \section{The Precompactness of \texorpdfstring{$T_3$}{T(3)} Orbits} \label{sec:precompact} In this section, we will establish four algebraic invariants of $T_3$. We will then use them to prove Theorem \ref{thm:spiral precompact}. Having Theorem \ref{thm:tic-tac-toe and 3-spirals} in hand, we may fully work with the coordinate partitions $S_n(I,J)$ and $S_n(K,J)$. Our strategy is to establish a coordinate formula, find invariant quantities, and use them to bound the orbit inside a compact set. \subsection{The Formula and the Invariants} \label{subsec:T3 map formula and invariants} Let $P$ be a twisted $n$-gon. Denote $P' = T_3(P)$. In this section, we will use a different labeling convention: \begin{equation} \label{eqn:delta 3,1 formula} P'_i = P_{i-2} P_{i+1} \cap P_{i-1} P_{i+2}. \end{equation} In terms of the corner invariants, $T_3$ is a birational map. We found the formula below using computer algebra and the reconstruction formula from \cite{schwartz2008discretemonodromypentagramsmethod}. \begin{proposition} \label{prop:coordinate formula} Let $[P] \in \cP_n$ be a twisted $n$-gon with $(x_0, x_1, \ldots, x_{2n-1})$ as its corner invariants. Let $P' = T_3(P)$ have corner invariants $(x'_0, x'_1, \ldots, x'_{2n-1})$. Then, the following formula holds (indices are taken modulo $2n$): \begin{equation} \label{eqn:31 corner} \left\{ \begin{aligned} x'_{2i} = x_{2i-2} \cdot \frac{(x_{2i-4} + x_{2i-1} - 1)}{x_{2i-2}x_{2i-1} - (1 - x_{2i+1})(1 - x_{2i-4})}; \\ x'_{2i+1} = x_{2i+3} \cdot \frac{(x_{2i+2} + x_{2i+5} - 1)}{x_{2i+2}x_{2i+3} - (1 - x_{2i+5})(1 - x_{2i})}. \end{aligned} \right. \end{equation} \end{proposition} One can verify Equation \ref{eqn:31 corner} with the following procedure: Given the corner invariants of $[P]$, use the reconstruction formula from \cite{schwartz2008discretemonodromypentagramsmethod} to obtain a representative $P$. Apply $T_3$ on $P$ as in Equation \ref{eqn:delta 3,1 formula} to get $P' = T_3(P)$. Then, compute the corner invariants of $P'$. One could also prove Equation \ref{eqn:31 corner} geometrically using cross ratio identities and Equation \ref{eqn:corner invariant def}. Next, we present the algebraic invariants of $T_3$. \begin{proposition} \label{prop:31 invariants} Given $[P] \in \cP_n$ with corner invariants $(x_0, x_1, \ldots, x_{2n-1})$, consider the following four quantities: \begin{equation} \label{eqn:31 energies} \cF_1 = \prod_{i=0}^{n-1} \frac{x_{2i}}{x_{2i} - 1} ; \ \ \cF_2 = \prod_{i=0}^{n-1} \frac{x_{2i+1}}{x_{2i+1} - 1} ; \ \ \cF_3 = \prod_{i=0}^{n-1} \frac{x_{2i}}{x_{2i+1}}; \ \ \cF_4 = \prod_{i=0}^{n-1} \frac{1 - x_{2i}}{1 - x_{2i+1}}. \end{equation} Then, $\cF_i$ is invariant under $T_3$ for $i = 1, 2, 3, 4$.
\end{proposition} To verify that they are $T_3$-invariant, one can simply plug them into Equation \ref{eqn:31 corner}. \begin{remark} It turns out that $\cF_1$ and $\cF_2$ correspond to the product of one of the six conjugates of the inverse cross ratio: $\lambda \mapsto \frac{\lambda}{\lambda - 1}$. $\cF_3$ is the ratio of the two $T_2$ invariants $\frac{O}{E}$. For discussions on $\cF_3$, see \cite{schwartz2024pentagramrigiditycentrallysymmetric}. Finally, $\cF_4 = \frac{\cF_2 \cF_3}{\cF_1}$ via an easy calculation. The four invariants in Proposition \ref{prop:31 invariants} are not algebraically independent. The differential fails to be full-rank when the corner invariants are taken to be alternating sequences of the form $(a,b,a,b\ldots,a,b)$ for some $a,b \in \RR$. \end{remark} \subsection{The Case \texorpdfstring{$S_n(I,J)$}{Sn(I,J)}} We will denote $[P]^{(m)} = T_3^m [P]$ for $[P] \in \cP_n$ and $m \in \ZZ_{\geq 0}$. Let $[n]$ be the set of indices $\{1, \ldots, n\}$ modulo $n$. Let $(x_0^{(m)}, \ldots, x_{2n-1}^{(m)})$ be the corner invariants of $[P]^{(m)}$. We will also denote $\cF_i^{(m)}$ to be the invariants $\cF_i$ of $[P]^{(m)}$ in Equation \ref{eqn:31 energies}, which by Proposition \ref{prop:31 invariants} should equal $\cF_i^{(0)}$. \begin{lemma} \label{lem:compact J in (I, J)} Given $[P] \in S_n(I, J)$ on which $T_3$ is defined generically. Then, there exists some $a, b \in J$, $a < b$, such that $x_{2i+1}^{(m)} \in [a, b]$ for all $i \in [n]$ and $m \in \ZZ_{\geq0}$. \end{lemma} \begin{proof} We first show that fixing $i$, there exists $b_i < 1$ such that $x_{2i+1}^{(m)} \leq b_i$ for all $m \in \ZZ_{\geq0}$. Suppose not, then there exists a subsequence of $\{x_{2i+1}^{(m)}\}$ that converges to $1$, which implies $\{1 - x_{2i+1}^{(m)}\}$ converges to $0$ on the same subsequence. From Corollary \ref{cor:tic-tac-toe and 3-spirals} we have $P^{(m)} \in S_n(I, J)$ for all $m \in \ZZ_{\geq0}$, so $1 - x_{2j}^{(m)} \in (1, \infty)$ and $1 / (1 - x_{2j+1}^{(m)}) \in (1, \infty)$ for all $j\in [n]$. These together with $1 - x_{2i+1}^{(m)} \rightarrow 0$ on a subsequence implies $\cF_4^{(m)} \rightarrow \infty$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $b_i = \sup_m \{x_{2i+1}^{(m)}\} < 1$. Next, we show that fixing $i$, there exists $a_i > 0$ such that $x_{2i+1}^{(m)} \geq a_i$ for all $m$. Suppose not, then there exists a subsequence of $\{x_{2i+1}^{(m)}\}$ that converges to $0$, so $x_{2i+1}^{(m)} / (x_{2i+1}^{(m)} - 1) \rightarrow 0$ on a subsequence. From the argument above, for all $j\in [n]$ we can find $b_j<1$ such that $x_{2j+1}^{(m)} \leq b_j$ for all $m$, which gives us $|x_{2j+1}^{(m)} / (x_{2j+1}^{(m)} - 1)| \leq |\frac{b_j}{b_j-1}|$, so the sequence is uniformly bounded for all $j \neq i$. This together with $|x_{2i+1}^{(m)} / (x_{2i+1}^{(m)} - 1)| \rightarrow 0$ on a subsequence implies $|\cF_2^{(m)}| \rightarrow 0$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $a_i = \inf_m \{x_{2i+1}^{(m)}\} > 0$. Finally, take $a = \min_{i} a_i$ and $b = \max_i b_i$. The condition $x_{2i+1}^{(m)} \in [a, b]$ follows directly from our choice of $a_i$ and $b_i$. Also, since $i$ indexes a finite set, $a, b \in J$. This produces $a, b$ as desired. 
\end{proof} \begin{lemma} \label{lem:compact I in (I, J)} With the same notation as in Lemma \ref{lem:compact J in (I, J)}, there exist some $c < d < 0$ such that $x_{2i}^{(m)} \in [c, d]$ for all $i \in [n]$ and $m \in \ZZ_{\geq0}$. \end{lemma} \begin{proof} We first show that fixing $i$, there exists $d_i < 0$ such that $x_{2i}^{(m)} \leq d_i$ for all $m \in \ZZ_{\geq0}$. Suppose not, then there exists a subsequence of $\{x_{2i}^{(m)}\}$ that converges to $0$ from below, so $x_{2i}^{(m)} / (x_{2i}^{(m)} - 1) \rightarrow 0$ on a subsequence. From Corollary \ref{cor:tic-tac-toe and 3-spirals} we have $[P]^{(m)} \in S_n(I, J)$ for all $m$, so $x_{2j}^{(m)} / (x_{2j}^{(m)} - 1) < 1$ for all $j \in [n]$. This together with $x_{2i}^{(m)} / (x_{2i}^{(m)} - 1) \rightarrow 0$ on a subsequence implies $\cF_1^{(m)} \rightarrow 0$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we must have $d_i = \sup_m \{x_{2i}^{(m)}\} < 0$. Next, we show that fixing $i$, there exists $c_i$ such that $x_{2i}^{(m)} \geq c_i$ for all $m$. Suppose not, then there exists a subsequence of $\{x_{2i}^{(m)}\}$ that diverges, so the same subsequence for $\{1 - x_{2i}^{(m)}\}$ also diverges. Lemma \ref{lem:compact J in (I, J)} and $x_{2i}^{(m)} \leq d_i < 0$ together imply that if $\{x_{2i}^{(m)}\}$ diverges on a subsequence, then $\cF_4^{(m)}$ also diverges on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we have $c_i = \inf_m \{x_{2i}^{(m)}\} \in \RR$. Finally, we can take $c = \min_i c_i$ and $d = \max_i d_i$. The condition $x_{2i}^{(m)} \in [c, d]$ follows directly from our choice of $c_i$ and $d_i$, and $c < d< 0$ because $i$ indexes a finite set. This produces $c$ and $d$ as desired. \end{proof} \begin{lemma} \label{lem:IJ precompact} Given $[P] \in S_n(I, J)$, the forward $T_3$-orbit of $[P]$ is precompact in $\cP_n$. \end{lemma} \begin{proof} Let $[a,b] \subset J$, $[c, d] \subset I$ be compact intervals derived from Lemma \ref{lem:compact J in (I, J)} and Lemma \ref{lem:compact I in (I, J)}. Then, the sequence $(x_0^{(m)}, \ldots, x_{2n-1}^{(m)})$ is contained in the product $\prod_{i=0}^{n-1} [c,d] \times [a,b]$, which is compact. \end{proof} \begin{corollary} \label{cor:JI precompact} Given $[P] \in S_n(J,I)$, the forward $T_3$-orbit of $[P]$ is precompact in $\cP_n$. \end{corollary} \begin{proof} If a sequence of $n$-tuples $\{(x_{m1}, \ldots, x_{mn})\}_{m = 1}^\infty$ converges to some $(y_1, \ldots, y_n)$, then given any permutation $\sigma \in S_n$, $\{(x_{m\sigma(1)}, \ldots, x_{m\sigma(n)})\}_{m = 1}^\infty$ converges to $(y_{\sigma(1)}, \ldots, y_{\sigma(n)})$. The result then follows from Proposition \ref{prop:tic tac toe symmetry}. \end{proof} \subsection{The Case \texorpdfstring{$S_n(K,J)$}{Sn(K,J)}} The proofs for the case $S_n(K,J)$ are analogous to those of Lemmas \ref{lem:compact J in (I, J)} and \ref{lem:compact I in (I, J)}. \begin{lemma} \label{lem:compact J in (K, J)} Given $[P] \in S_n(K, J)$ on which $T_3$ is defined generically. There exists some $a, b \in J$, $a < b$, such that $x_{2i+1}^{(m)} \in [a, b]$ for all $i \in [n]$ and $m \in \ZZ_{\geq0}$. \end{lemma} \begin{proof} We first show that fixing $i \in [n]$, there exists $a_i > 0$ such that $x_{2i+1}^{(m)} \geq a_i$ for all $m \in \ZZ_{\geq0}$. Suppose not, then there exists a subsequence of $\{x_{2i+1}^{(m)}\}$ that converges to $0$.
From Corollary \ref{cor:tic-tac-toe and 3-spirals} we have $[P]^{(m)} \in S_n(K, J)$ for all $m$, so $x_{2j}^{(m)} \in (1, \infty)$ and $1 / x_{2j+1}^{(m)} \in (1, \infty)$ for all $j \in [n]$. These together with $x_{2i+1}^{(m)} \rightarrow 0$ on a subsequence implies $\cF_3^{(m)} \rightarrow \infty$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $a_i = \inf_m \{x_{2i+1}^{(m)}\} > 0$. Next, we show that fixing $i \in [n]$, there exists $b_i < 1$ such that $x_{2i+1}^{(m)} \leq b_i$ for all $m \in \ZZ_{\geq0}$. Suppose not, then there exists a subsequence of $\{x_{2i+1}^{(m)}\}$ that converges to $1$, so $x_{2i+1}^{(m)} / (x_{2i+1}^{(m)} - 1) \rightarrow -\infty$ on a subsequence. From the argument above, for all $j \in [n]$ we can find $a_j>0$ such that $x_{2j+1}^{(m)} \geq a_j$ for all $m$, which gives us $|x_{2j+1}^{(m)} / (x_{2j+1}^{(m)} - 1)| \geq |\frac{a_j}{a_j-1}|$, so the sequence $\{ |\cF_2^{(m)}| \}$ is uniformly bounded below by $a_0\ldots a_{2n-1} > 0$. This together with $|x_{2i+1}^{(m)} / (x_{2i+1}^{(m)} - 1)| \rightarrow \infty$ on a subsequence implies $|\cF_2^{(m)}| \rightarrow \infty$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $b_i = \sup_m \{x_{2i+1}^{(m)}\} < 1$. Finally, we can take $a = \min_{i} a_i$ and $b = \max_i b_i$. The condition $x_{2i+1}^{(m)} \in [a, b]$ follows directly from our choice of $a_i$ and $b_i$. Also, since $i$ indexes a finite set, we have $a, b \in J$. This produces $a, b$ as desired. \end{proof} \begin{lemma} \label{lem:compact K in (K, J)} With the same notation as in Lemma \ref{lem:compact J in (K, J)}, there exists some $1 < c < d$, such that $x_{2i}^{(m)} \in [c, d]$ for all $i \in [n]$ and $m \in \ZZ_{\geq0}$. \end{lemma} \begin{proof} We first show that fixing $i \in [n]$, there exists $c_i > 1$ such that $x_{2i}^{(m)} \geq c_i$ for all $m \in \ZZ_{\geq 0}$. Suppose not, then there exists a subsequence of $\{x_{2i}^{(m)}\}$ that converges to $1$, so $x_{2i}^{(m)} / (x_{2i}^{(m)} - 1) \rightarrow \infty$ on a subsequence. From Corollary \ref{cor:tic-tac-toe and 3-spirals} we have $[P]^{(m)} \in S_n(K, J)$ for all $m$, so $x_{2j}^{(m)} / (x_{2j}^{(m)} - 1) > 1$ for all $j \in [n]$. This together with $x_{2i}^{(m)} / (x_{2i}^{(m)} - 1) \rightarrow \infty$ on a subsequence implies $\cF_1^{(m)} \rightarrow \infty$ on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $c_i = \inf_m \{x_{2i}^{(m)}\} > 1$. Next, we show that fixing $i$, there exists $d_i$ such that $x_{2i}^{(m)} \leq d_i$ for all $m \in \ZZ_{\geq0}$. Suppose not, then there exists a subsequence of $\{x_{2i}^{(m)}\}$ that diverges. Lemma \ref{lem:compact J in (K, J)} and $x_{2i}^{(m)} > 1$ together implies that if $\{x_{2i}^{(m)}\}$ diverges on a subsequence, then $\cF_3^{(m)}$ also diverges on the same subsequence, but that contradicts Proposition \ref{prop:31 invariants}. Therefore, we can take $d_i = \sup_m \{x_{2i}^{(m)}\} \in \RR$. Finally, we can take $c = \min_i c_i$ and $d = \max_i d_i$. The condition $x_{2i}^{(m)} \in [c, d]$ follows directly from our choice of $c$ and $d$, and $1 < c < d$ because $i$ indexes a finite set. This produces $c$ and $d$ as desired. \end{proof} \begin{lemma} \label{lem:KJ precompact} Given $[P] \in S_n(K, J)$, the forward $T_3$-orbit of $[P]$ is precompact in $\cP_n$. 
\end{lemma} \begin{proof} Let $[a,b] \subset J$, $[c, d] \subset K$ be compact intervals derived from Lemma \ref{lem:compact J in (K, J)} and Lemma \ref{lem:compact K in (K, J)}. Then, the sequence $(x_0^{(m)}, \ldots, x_{2n-1}^{(m)})$ is contained in the product $\prod_{i=0}^{n-1} [c,d] \times [a,b]$, which is compact. \end{proof} \begin{corollary} \label{cor:JK precompact} Given $[P] \in S_n(J,K)$, the forward $T_3$-orbit of $[P]$ is precompact in $\cP_n$.\footnote{The proof of the corollary is the same as that of Corollary \ref{cor:JI precompact}, so we omit it.} \end{corollary} \subsection{Precompactness of Backward Orbits} We mentioned in $\S$\ref{subsec:T3 map formula and invariants} that Equation \ref{eqn:31 corner} is birational. Below is its inverse, which gives us a coordinate formula for $T_3^{-1}$: \begin{equation} \label{eqn:T3 inverse corner} \left\{ \begin{aligned} x_{2i} = x_{2i+2} \cdot \frac{(x_{2i+4} + x_{2i+1} - 1)}{x_{2i+1}x_{2i+2} - (1 - x_{2i-1})(1 - x_{2i+4})}; \\ x_{2i+1} = x_{2i-1} \cdot \frac{(x_{2i-3} + x_{2i} - 1)}{x_{2i}x_{2i-1} - (1 - x_{2i+2})(1 - x_{2i-3})}. \end{aligned} \right. \end{equation} \begin{proposition} \label{prop:31 inverse invariants} The four invariants $\cF_1, \cF_2, \cF_3, \cF_4$ from Proposition \ref{prop:31 invariants} are invariant under $T_3^{-1}$. \end{proposition} The proof is again a direct computation using Equation \ref{eqn:T3 inverse corner}. \begin{corollary} \label{cor:31 inverse invariants} The forward $T_3^{-1}$-orbits of $S_n(I,J)$, $S_n(K,J)$, $S_n(J,I)$, and $S_n(J,K)$ are precompact. \end{corollary} \begin{proof} Corollary \ref{cor:tic-tac-toe and 3-spirals} implies all four grids are backward invariant under $T_3$, so $T_3^{-m}[P]$ is well-defined for all $m \in \ZZ_{\geq 0}$ and $[P]$ in any of the four grids. Since $\cF_1, \cF_2, \cF_3, \cF_4$ remain invariant under $T_3^{-1}$, one can simply mimic the proofs of Lemmas \ref{lem:compact J in (I, J)}, \ref{lem:compact I in (I, J)}, \ref{lem:compact J in (K, J)}, and \ref{lem:compact K in (K, J)}. \end{proof} We conclude this section with the proof of Theorem \ref{thm:spiral precompact}. \begin{proof}[Proof of Theorem \ref{thm:spiral precompact}] The theorem follows directly from Lemmas \ref{lem:KJ precompact} and \ref{lem:IJ precompact}, Corollary \ref{cor:31 inverse invariants}, and the identification $\cS_{3,n}^\alpha = S_n(I,J)$, $\cS_{3,n}^\beta = S_n(K,J)$ from Theorem \ref{thm:tic-tac-toe and 3-spirals}. \end{proof}
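As an independent sanity check of Proposition \ref{prop:31 invariants} (and, with the obvious modification, of Proposition \ref{prop:31 inverse invariants}), the short script below applies the coordinate formula of Equation \ref{eqn:31 corner} to randomly sampled corner invariants lying in $S_n(I,J)$ and compares the four quantities of Equation \ref{eqn:31 energies} before and after one iteration. It is only a numerical sketch in Python/NumPy; the value of $n$ and the sampling ranges are arbitrary choices, and the check is no substitute for the direct algebraic computation mentioned after Proposition \ref{prop:31 invariants}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 7
x = np.empty(2 * n)
x[0::2] = rng.uniform(-3.0, -0.5, n)   # even entries sampled in I = (-inf, 0)
x[1::2] = rng.uniform(0.1, 0.9, n)     # odd entries sampled in J = (0, 1)

def T3_corner(x):
    """One step of T_3 in corner invariants (Equation (31 corner))."""
    N = len(x)
    g = lambda k: x[k % N]             # indices taken modulo 2n
    y = np.empty(N)
    for i in range(N // 2):
        y[2*i] = g(2*i-2) * (g(2*i-4) + g(2*i-1) - 1) / (
            g(2*i-2) * g(2*i-1) - (1 - g(2*i+1)) * (1 - g(2*i-4)))
        y[2*i+1] = g(2*i+3) * (g(2*i+2) + g(2*i+5) - 1) / (
            g(2*i+2) * g(2*i+3) - (1 - g(2*i+5)) * (1 - g(2*i)))
    return y

def F(x):
    """The four products F_1, F_2, F_3, F_4 of Equation (31 energies)."""
    ev, od = x[0::2], x[1::2]
    return np.array([np.prod(ev / (ev - 1)), np.prod(od / (od - 1)),
                     np.prod(ev / od), np.prod((1 - ev) / (1 - od))])

print(np.allclose(F(x), F(T3_corner(x))))   # expected output: True
\end{verbatim}

Up to floating-point error, the four products agree before and after the step, consistent with Proposition \ref{prop:31 invariants}.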
2412.15626v1
http://arxiv.org/abs/2412.15626v1
Stationary states for stable processes with partial resetting
\documentclass[11pt]{amsart} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsxtra} \usepackage{dsfont} \usepackage{color} \usepackage[compress, sort]{cite} \usepackage{enumitem} \usepackage{graphicx} \usepackage[type1]{newtxtext} \usepackage{newtxmath} \usepackage[english,polish]{babel} \usepackage[T1]{fontenc} \usepackage[margin=2.5cm, centering]{geometry} \usepackage[colorlinks,citecolor=blue,urlcolor=blue,bookmarks=true]{hyperref} \hypersetup{ pdfpagemode=UseNone, pdfstartview=FitH, pdfdisplaydoctitle=true, pdfborder={0 0 0}, pdftitle={Stationary states for stable processes with resetting}, pdfauthor={Tomasz Grzywny and Zbigniew Palmowski and Karol Szczypkowski and Bartosz Trojan}, pdflang=en-US } \newcommand{\A}{\mathbb{\Omega}} \newcommand{\eqdistr}{\stackrel{D}{=}} \newcommand{\CC}{\mathbb{C}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\sS}{\mathbb{S}} \newcommand{\NN}{\mathbb{N}} \newcommand{\RR}{\mathbb{R}} \newcommand{\PP}{\mathbb{P}} \newcommand{\EE}{\mathbb{E}} \newcommand{\TT}{\mathcal{T}} \newcommand{\calW}{\mathcal{W}} \newcommand{\calR}{\mathcal{R}} \newcommand{\calC}{\mathcal{C}} \newcommand{\calL}{\mathcal{L}} \newcommand{\calS}{\mathcal{S}} \newcommand{\calF}{\mathcal{F}} \newcommand{\calG}{\mathcal{G}} \newcommand{\calO}{\mathcal{O}} \newcommand{\calM}{\mathcal{M}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calV}{\mathcal{V}} \newcommand{\calA}{\mathcal{A}} \newcommand{\calN}{\mathcal{N}} \newcommand{\calX}{\mathcal{X}} \newcommand{\calY}{\mathcal{Y}} \newcommand{\calD}{\mathcal{D}} \newcommand{\calH}{\mathcal{H}} \newcommand{\calI}{\mathcal{I}} \newcommand{\calT}{\mathcal{T}} \newcommand{\calE}{\mathcal{E}} \newcommand{\scrD}{\mathscr{D}} \newcommand{\halmos}{{\mbox{\, \vspace{3mm}}} \hfill \mbox{$\Box$}} \newcommand{\itp}{\mathit{p}} \newcommand{\bE}{\mathbf{E}} \newcommand{\Id}{\operatorname{Id}} \newcommand{\dvg}{\operatorname{div}} \newcommand{\sign}[1]{\operatorname{sign}({#1})} \newcommand{\per}{\mathrm{per}} \newcommand{\WUSC}[3]{\operatorname{WUSC}_0({#1}, {#2}, {#3})} \newcommand{\WLSC}[3]{\operatorname{WLSC}_0({#1}, {#2}, {#3})} \newcommand{\WUSCINF}[3]{\operatorname{WUSC}_\infty({#1}, {#2}, {#3})} \newcommand{\WLSCINF}[3]{\operatorname{WLSC}_\infty({#1}, {#2}, {#3})} \newcommand{\pl}[1]{\foreignlanguage{polish}{#1}} \renewcommand{\labelenumi}{(\roman{enumi})} \newcommand{\qnorm}[1]{\lVert {#1} \rVert} \newcommand{\norm}[1]{\lvert {#1} \rvert} \newcommand{\abs}[1]{\lvert {#1} \rvert} \newcommand{\sprod}[2]{\langle {#1}, {#2} \rangle} \newcommand{\bx}{{\mathbf x}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\ad}{\operatornamewithlimits{ad}} \newcommand{\Ad}{\operatorname{Ad}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\discr}{\operatorname{discr}} \newcommand{\ind}[1]{{\mathds{1}_{{#1}}}} \newcommand{\vphi}{\vartheta} \newcommand{\dm}{{\: \rm d}m} \newcommand{\db}{{\: \rm d}b} \newcommand{\ud}{{\: \rm d}} \newcommand{\ue}{\textrm{e}} \newcommand{\supp}{\operatornamewithlimits{supp}} \newcommand{\quadra}[1]{\langle {#1} \rangle} \newcommand{\Log}{\operatorname{Log}} \newcommand{\Mod}{\Xi} \renewcommand{\atop}[2]{\genfrac{}{}{0pt}2{#1}{#2}} \newcommand{\qbinom}[3]{\genfrac{[}{]}{0pt}{}{{#1}}{{#2}}_{{#3}}} \newcounter{thm} \renewcommand{\thethm}{\Alph{thm}} \newtheorem{main_theorem}[thm]{Theorem} \newtheorem{claim}{Claim} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \numberwithin{equation}{section} 
\theoremstyle{definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}{Definition} \title{ Stationary states for stable processes with partial resetting} \date{\today} \author{Tomasz Grzywny} \address{ \pl{ Tomasz Grzywny\\ Wydzia\l{} Matematyki, Politechnika Wroc\l{}awska\\ Wyb. Wyspia\'{n}skiego 27\\ 50-370 Wroc\l{}aw\\ Poland} } \email{[email protected]} \author{Zbigniew Palmowski} \address{ \pl{ Zbigniew Palmowski\\ Wydzia{\l{}} Matematyki, Politechnika Wroc\l{}awska\\ Wyb. Wyspia\'{n}skiego 27\\ 50-370 Wroc\l{}aw\\ Poland} } \email{[email protected]} \author{Karol Szczypkowski} \address{ \pl{ Karol Szczypkowski\\ Wydzia\l{} Matematyki, Politechnika Wroc\l{}awska\\ Wyb. Wyspia\'{n}skiego 27\\ 50-370 Wroc\l{}aw\\ Poland} } \email{[email protected]} \author{Bartosz Trojan} \address{ \pl{ Bartosz Trojan\\ Wydzia\l{} Matematyki, Politechnika Wroc\l{}awska\\ Wyb. Wyspia\'{n}skiego 27\\ 50-370 Wroc\l{}aw\\ Poland} } \email{[email protected]} \subjclass[2020]{60G10, 60J35, 60K40, 82C05, 82C31, 35K08,60J65, 60G51, 60G52} \keywords{asymptotic behavior, Brownian motion, ergodic measure, Fokker--Planck equation, heat kernel, non-equilibrium stationary state, transition density} \begin{document} \selectlanguage{english} \begin{abstract} We study a $d$-dimensional stochastic process $\mathbf{X}$ which arises from a L\'evy process $\mathbf{Y}$ by partial resetting, that is the position of the process $\mathbf{X}$ at a Poisson moment equals $c$ times its position right before the moment, and it develops as $\mathbf{Y}$ between these two consecutive moments, $c \in (0, 1)$. We focus on $\mathbf{Y}$ being a strictly $\alpha$-stable process with $\alpha\in (0,2]$ having a transition density: We analyze properties of the transition density $p$ of the process $\mathbf{X}$. We establish a series representation of $p$. We prove its convergence as time goes to infinity (ergodicity), and we show that the limit $\rho_{\mathbf{Y}}$ (density of the ergodic measure) can be expressed by means of the transition density of the process $\mathbf{Y}$ starting from zero, which results in closed concise formulae for its moments. We show that the process $\mathbf{X}$ reaches a non-equilibrium stationary state. Furthermore, we check that $p$ satisfies the Fokker--Planck equation, and we confirm the harmonicity of $\rho_{\mathbf{Y}}$ with respect to the adjoint generator. In detail, we discuss the following cases: Brownian motion, isotropic and $d$-cylindrical $\alpha$-stable processes for $\alpha \in (0,2)$, and $\alpha$-stable subordinator for $\alpha\in (0,1)$. We find the asymptotic behavior of $p(t;x,y)$ as $t\to +\infty$ while $(t,y)$ stays in a certain space-time region. For Brownian motion, we discover a phase transition, that is a change of the asymptotic behavior of $p(t;0,y)$ with respect to $\rho_{\mathbf{Y}}(y)$. \end{abstract} \maketitle \section{Introduction} \label{sec:Intro} We consider a semigroup density $p(t;x,y)$ corresponding to a $d$-dimensional L\'evy process with partial resetting, that is, a L\'evy process with additional proportional jumps realized at independent Poisson epochs. The process solves the following stochastic differential equation \[{\mathrm d} X_t=(c-1)X_{t-}{\mathrm d} N_t +{\mathrm d} Y_t\] where $\mathbf{Y}=(Y_t : t \geq 0)$ is a L\'evy process, $\mathbf{N}=(N_t : t \geq 0)$ is an independent Poisson process and $c\in (0,1)$ is a constant. 
Focusing on $\mathbf{Y}$ being a strictly $\alpha$-stable process with $\alpha\in (0,2]$, we give a representation of $p$ in terms of splines satisfying a certain recursion. With the help of this representation we prove the convergence of $p(t;x,y)$ as $t\to +\infty$ to a density $\rho_{\mathbf{Y}}$. We describe $\rho_{\mathbf{Y}}$; in particular, we provide formulas for its moments. Later, we show that the process under consideration has a non-equilibrium stationary state, that is, we prove that the infinitesimal generator related to $p$ on $L^2(\RR^d, \rho_{\mathbf{Y}}(y) {\rm d} y)$ is not self-adjoint. Let us recall that classical ergodic theory concerns the convergence of $p(t;x,y)$ as $t\to +\infty$ for fixed $x,y\in \mathbb{R}^d$. Moreover, one of our main results gives the space-time regions where the uniform asymptotic behavior of $p(t;0,y)$ as $t\to +\infty$ is precisely described. In particular, we find the regions where $p(t;0,y)$ is weakly equivalent to $\rho_{\mathbf{Y}}$. Additionally, in the case of Brownian motion we show that there is a phase transition in behavior along the curve $|y|=2t$. Let us motivate the study of the process with partial resetting. In the past decade, due to various applications, models that accommodate the resetting mechanism have been extensively studied. One of them appears in simulating results of procedures dealing with missing packets in the transmission control protocol (TCP), see \cite{MR1895332, MR2023017}. In the ideal TCP congestion avoidance algorithm, when a congestion signal is received, e.g.\ when missing packets are detected, the transmission window size is proportionally decreased and the retransmission starts. Otherwise, it grows at constant speed. In \cite{Kemperman} it was shown that the evolution of the window size may be approximated by a continuous time process: a linear drift with partial resetting. More precisely, the process grows linearly in time and at Poisson epochs experiences downward jumps proportional to its position right before the epoch. This one-dimensional process is also known as the additive-increase multiplicative-decrease (AIMD) process, or the growth-collapse process. For these processes, the main questions addressed in the literature concerned stability conditions, the form of the steady-state laws, and the identification of first-passage times, see \cite{MR4546112, MR2840300, MR2576022}. Due to possible perturbations during data transmission, instead of the constant drift process, it is reasonable to consider models based on $\alpha$-stable subordinators, which, among other things, motivates our studies. Another important application where resetting occurs is related to searching for a static target by a method based on two mechanisms: slow local movements and a relocation procedure. This strategy is widely used in nature, for example, by foraging animals, biomolecules searching for proteins on DNA, or people looking for an object in a crowd. The corresponding model consists of a stochastic process representing the first phase, and partial resetting that mimics the relocation, see \cite{19} and \cite{Bel, Ben, Evans, White} for an extensive list of references. This motivates us to study multi-dimensional L\'evy processes that are subject to resetting. Let us explain the resetting procedure in detail.
Given a $d$-dimensional L\'evy process $\mathbf{Y}$ a stochastic process $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting if at each Poisson moment the position of the process $\mathbf{X}$ equals a point obtained by multiplying the position of the process right before that moment by a factor $c\in(0,1)$, and that it develops according to the process $\mathbf{Y}$ between these two consecutive moments. To be more precise, let $\mathbf{N}$ be a Poisson process with intensity $1$ independent of $\mathbf{Y}$. Let us denote by $(T_j : j \in \NN)$ the Poisson arrival moments (Poisson epochs) of $\mathbf{N}$. We define $\mathbf{X}$ as \begin{equation} \label{eq:18} X_t = \begin{cases} Y_t, & \text{if } t<T_1 , \\ c X_{T_n^-} + Y_t - Y_{T_n}, & \text{for } t \in [T_n, T_{n+1}),\, n\in\NN. \end{cases} \end{equation} We say that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Throughout the paper we use the following notation \begin{equation} \label{def:m} m = c^\alpha. \end{equation} It has already been observed by a group of physicists that introducing the resetting to a one-dimensional diffusive movement of a single particle turns it into a process with a stationary measure, see \cite{MR4525953, Gupta}. The existence of such a measure is a desired feature, for example, in the context of thermodynamics of certain physical systems, in optimizing the efficiency of stochastic heat engines, or in modeling search processes. Before we state our first result, let us recall the $q$-Pochhammer symbol, \begin{align*} (a; q)_0 = 1,\qquad (a; q)_n = \prod_{j = 0}^{n-1} (1-aq^j),\qquad (a; q)_\infty = \prod_{j = 0}^\infty (1 - a q^j), \end{align*} and $q$-Gamma function, \[ \Gamma_q(x)=(1-q)^{1-x}\frac{(q;q)_{\infty}}{(q^x;q)_{\infty}}\,, \qquad \qquad x\notin -\mathbb{N}. \] The following theorem concerns the ergodicity of the process $\mathbf{X}$. \begin{main_theorem} \label{thm:B} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha\in(0,2]$, with a transition density $p_0$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then the process $\mathbf{X}$ has a transition density denoted by $p$, such that for each $x, y \in \RR^d$, \begin{equation} \label{eq:4} \rho_{\mathbf{Y}}(y)=\lim_{t\to+\infty} p(t;x,y) \end{equation} where \[ \rho_{\mathbf{Y}}(y)= \frac{1}{(m; m)_\infty}\sum_{k=0}^\infty (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m; m)_k} \, \int_0^\infty e^{-m^{-k} u} p_0(u;0,y) {\: \rm d}u. \] Furthermore, $\rho_{\mathbf{Y}} \in \calC_0^\infty(\RR^d)$, and for every $\gamma \in \RR$, \begin{equation} \label{eq:3} \int_{\RR^d} |y|^{\gamma} \rho_{\mathbf{Y}}(y) {\: \rm d}y = \frac{\Gamma(\gamma/\alpha+1)}{\Gamma_m(\gamma/\alpha+1)} (1-m)^{-\gamma/\alpha}\, \mathbb{E}|Y_1|^\gamma. \end{equation} \end{main_theorem} For a proper interpretation of the quotient $\Gamma(\gamma+1)/\Gamma_m(\gamma+1)$ for $\gamma \in -\NN$, see \eqref{eq:G/G_m}. The limit \eqref{eq:4} is a consequence of Theorem~\ref{thm:lim_p_t_infty}. The smoothness of $\rho_{\mathbf{Y}}$ as well as its moments are studied in Proposition \ref{prop:6}. We also check that $p$ solves the \emph{Fokker--Planck equation}, and $\rho_{\mathbf{Y}}$ is \emph{harmonic} with respect to the operator $L^2(\RR^d, {\rm d}y)$-adjoint to the generator of the process $\mathbf{X}$, see Theorem~\ref{thm:H+F-P}. 
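Although no simulations are needed for the proofs, the construction \eqref{eq:18} and the moment identity \eqref{eq:3} are easy to illustrate numerically. The following minimal Monte Carlo sketch (Python/NumPy, Euler time stepping, arbitrary parameter choices) simulates a one-dimensional standard Brownian motion with partial resetting started at $x = 0$ and compares the long-time empirical second moment of $X_t$ with the value $(1-m)^{-1}\mathbb{E}|Y_1|^2 = 1/(1-c^2)$ predicted by \eqref{eq:3} for $\gamma = 2$ and $\alpha = 2$ (for standard Brownian motion $\mathbb{E}|Y_1|^2 = 1$, $m = c^2$, and $\Gamma(2) = \Gamma_m(2) = 1$).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
c, dt, T, n_paths = 0.5, 1e-3, 20.0, 10_000   # illustrative parameters only
X = np.zeros(n_paths)                          # X_0 = Y_0 = 0
for _ in range(int(T / dt)):
    X += np.sqrt(dt) * rng.standard_normal(n_paths)  # increment of Y (standard BM)
    reset = rng.random(n_paths) < dt                 # rate-1 Poisson epochs (prob ~ dt)
    X[reset] *= c                                    # partial resetting: X -> c * X
print(np.mean(X**2), 1.0 / (1.0 - c**2))  # empirical vs. predicted E|X_t|^2, t large
\end{verbatim}

For the parameters above the two printed numbers agree to within a few percent. The same balance can be read off heuristically: in the stationary regime the diffusive growth of $\mathbb{E}|X_t|^2$ at unit rate is compensated by resets occurring at rate $1$, each of which multiplies the second moment by $c^2$.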
To the best of our knowledge in this context the only rigorously studied process is a linear drift with partial resetting \cite{14}. Since this process has values in the half-line, a natural tool to study its distribution is the Laplace transform. For a one-dimensional Brownian motion with partial resetting in \cite{jaifizycy} some results are obtained using the Fourier transform under the assumption that $\rho_{\mathbf{Y}}$ exists. In both cases the resulting formulas are obtained with the help of inversion theorems. We tried to apply the same reasoning in the multidimensional case, but it led to expressions that are highly nontrivial to analyze. In this paper, we develop another approach: The derivation of Theorem \ref{thm:B} begins with establishing a series representation of $p$ valid for general L\'evy processes having densities. To be more precise, if $p_0$ is the density of a L\'evy process $\mathbf{Y}$, then \[ p(t; x, y) =e^{-t} p_0(t; x, y) + \int_0^t \int_{\RR^d} e^{-s} p_0(s; x, z) p(t-s; cz, y) {\: \rm d} z {\: \rm d} s, \] and therefore \[ p(t; x, y) = e^{-t} \sum_{j = 0}^\infty p_j(t; x, y), \quad \text{for all } x,y \in \RR^d, t > 0 \] where $(p_n : n \in \NN)$ satisfies the recursion \[ p_{n+1}(t; x, y) = \int_0^t \int_{\RR^d} p_0(s; x, z) p_n(t-s; cz, y) {\: \rm d}z {\: \rm d} s, \quad\text{for all }x, y \in \RR^d, t >0, n \in \NN_0. \] Assuming additionally that $\mathbf{Y}$ is a strictly stable process, we are able to simplify the representation and we express it by means of an auxiliary family of one-dimensional splines $(P_j : j \in \NN)$. Namely, we get \begin{equation} \label{eq:36} p(t; x, y)=e^{-t}p_0(t; 0, y-x)+e^{-t}\sum_{j=1}^\infty t^j \int_0^1 p_0(tu;0,y-c^jx) P_j(u) {\: \rm d} u \end{equation} where $(P_j)$ are given by recursive formulas \eqref{eq:P1u} and \eqref{Pnu}. To simplify the exposition we restrict our attention to $x=0$. In this case \eqref{eq:36} takes the form \begin{equation} \label{eq:40} p(t;0,y)= \int_0^\infty p_0(u;0,y) \: \mu_t({\rm d} u), \quad\text{for all } y \in \RR^d, t > 0 \end{equation} where $\mu_t$ is a probability measure constructed from splines $(P_j)$ as in \eqref{def:mu_t}. Clearly, \[ p(t;0,0)=p_0(1;0,0)\int_0^\infty u^{-d/\alpha} \: \mu_t( {\rm d} u) \] which motivates the analysis of the moments of $\mu_t$. To do so, we first compute $\gamma$ moments for $P_j$ which satisfy a two-parameter recursive equation, see \eqref{eq:19}. Namely, $\gamma$ moment of $P_j$ is expressed as a linear combination of $\gamma$ moment of $P_{j+1}$ and $(\gamma-1)$ moment of $P_{j+1}$. Solving the equation for non-natural $\gamma$ is nontrivial because it connects $\gamma+\ZZ$ moments, but there is no a priori known value in this collection. To solve this problem we introduce scaled moments and we show that they do have a limit as $\gamma$ tends to minus infinity. It is not hard to compute zero moments. Then to find negative integer moments with large absolute value we express them, with the help of the recurrence relation, as a combination of moments of larger orders. However, the recurrence breaks down for $\gamma=0$ which makes it impossible to use any initial condition. To overcome this difficulty we use an epsilon trick to reach $\epsilon$ moment. Rough estimates on the moments together with continuity in $\epsilon$ allow us to conclude. Having the negative integer moments computed we use them to evaluate the limit as $\gamma$ tends to minus infinity. Next, we deal with non-integer moments. 
The previous steps permit us to iterate the scaled recursion infinitely many times which reduces the problem to computing the value of a certain series. For this purpose we use the $q$-binomial theorem. The missing integer moments are obtained by continuity. Having all moments of $P_j$'s we find the corresponding moments of the measures $\mu_t$. This gives the tightness of the family $(\mu_t : t > 0)$ while the convergence of natural moments to explicit quantities allows us to deduce the weak convergence of $(\mu_t : t > 0)$ to certain absolutely continuous probability measure $\mu$. In fact, all the moments of $(\mu_t : t > 0)$ converge to the corresponding moments of $\mu$ and are given explicitly, see Corollary \ref{cor:m-2} and Theorem \ref{thm:weak_conv}. The weak convergence together with the convergence of moments and the absolute continuity lead to \eqref{eq:4} for $x=0$, that is, \begin{equation} \label{eq:42} \rho_{\mathbf{Y}}(y) = \int_0^{\infty} p_0(u;0,y) \: \mu({\rm d} u). \end{equation} The general case requires additional work because we have to deal with \eqref{eq:36} in place of \eqref{eq:40}. To prove the regularity of $\rho_{\mathbf{Y}}$ we use \eqref{eq:42} together with the finiteness of all moments of $\mu$ and the properties of the density $p_0$ of the stable process $\mathbf{Y}$. Since $\mathbf{X}$ has the stationary measure, one may check its equilibrium. Let us recall that a stochastic process reaches equilibrium stationary state if a time-reversed process has the same distribution as $\mathbf{X}$, see e.g. \cite{e21090884, Floreani, Derrida}. Otherwise we say that it reaches the non-equilibrium stationary state (abbreviated as NESS). One of commonly used tests to determine whether the process reaches NESS is to check if its generator is \emph{not} self-adjoint in $L^2(\RR^d, \rho_{\mathbf{Y}}(x) {\rm d} x)$. In Theorem \ref{thm:NESS}, by this method we prove that $\mathbf{X}$ reaches NESS. The convergence \eqref{eq:4}, can also be written in the following form \begin{equation} \label{eq:5} \lim_{t\to+\infty}\frac{p(t;x,y)}{\rho_{\mathbf{Y}}(y)}=1, \end{equation} for each $x,y \in \RR^d$, such that $\rho_{\mathbf{Y}}(y)>0$. To better understand the behavior of the transition density $p$ we seek for possibly largest space-time region $\calD \subset \RR_+ \times \RR^d$ such that \eqref{eq:5} holds true uniformly with respect to $(t, y) \in \calD$ while $t$ tends to infinity (\footnote{$\RR_+ = (0, \infty)$}). \begin{main_theorem} \label{thm:C} Suppose that $\mathbf{Y}$ is an isotropic $\alpha$-stable process in $\RR^d$, $\alpha\in(0,2)$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then for each $\kappa \in (0, 1)$, the transition density of $\mathbf{X}$ satisfies \begin{equation} \label{eq:12} \lim_{\atop{t \to \infty}{\norm{y} \to \infty}} \sup_{\norm{x} \leq \kappa \norm{y}} \bigg| \frac{p(t; x, y)}{\rho_{\mathbf{Y}}(y)} - 1 \bigg| = 0. \end{equation} \end{main_theorem} Theorem \ref{thm:C} is a direct consequence of Theorem \ref{thm:ius} and Corollary \ref{cor:ius}. In fact, in Theorem \ref{thm:ius} we also investigate uniform limits with respect to $c \in (0, 1)$. Similar theorems are obtained for $\alpha$-stable subordinators $\alpha \in (0, 1)$, see Theorem \ref{thm:s-s}, and $d$-cylindrical $\alpha$-stable processes $\alpha \in (0, 2)$, see Theorem \ref{thm:cylindrical}. To the best of our knowledge, the limit of the form as in Theorem \ref{thm:C} has never been studied before in this context. 
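Before turning to the proofs of the asymptotic results, let us record a small numerical illustration of the $q$-series identities at work here. Formally integrating the formula for $\rho_{\mathbf{Y}}$ in Theorem \ref{thm:B} over $y \in \RR^d$ (using $\int_{\RR^d} p_0(u;0,y)\,{\rm d}y = 1$) and then over $u$ reduces the claim $\int_{\RR^d}\rho_{\mathbf{Y}}(y)\,{\rm d}y = 1$ to the identity $\sum_{k\ge 0}(-1)^k m^{k(k-1)/2}\, m^k /(m;m)_k = (m;m)_\infty$, an instance of Euler's $q$-exponential identity (a limiting case of the $q$-binomial theorem mentioned above). The short Python sketch below, with an arbitrary value of $m$ and an arbitrary truncation level, checks this numerically; it is included for the reader's convenience only.

\begin{verbatim}
import numpy as np

def qpoch(a, q, n):
    """Finite q-Pochhammer symbol (a; q)_n."""
    return float(np.prod([1.0 - a * q**j for j in range(n)]))

m, K = 0.4, 80      # m = c^alpha, arbitrary value in (0, 1); K-term truncation
lhs = sum((-1)**k * m**(k*(k-1)//2) * m**k / qpoch(m, m, k) for k in range(K))
rhs = qpoch(m, m, K)                    # truncation of (m; m)_infinity
print(lhs, rhs, abs(lhs/rhs - 1.0))     # relative error should be tiny
\end{verbatim}

In particular, up to truncation error the weights in Theorem \ref{thm:B} integrate to $1$, consistent with $\rho_{\mathbf{Y}}$ being a probability density.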
The proof of \eqref{eq:12} proceeds as follows: We first consider the quotient $(1-m)p(t;x,y)/\nu(y)$ where $\nu$ is the density of the L\'{e}vy measure of the isotropic $\alpha$-stable process. For simplicity of the exposition, let us consider $x=0$ only. By \eqref{eq:40}, to prove Theorem \ref{thm:C} we study the asymptotic behavior of the integral \[ \int_0^\infty \frac{p_0(u;0,y)}{\nu(y)} \: \mu_t({\rm d} u). \] To do so we use the well-known asymptotic behavior of $p_0(u;0,y)/(u \nu(y))$ as $u |y|^{-\alpha}$ tends to $0$, and the splitting of the integral into two parts: the one that carries most of the mass, this is where the asymptotic is used, and the remaining one which is negligible as $t$ goes to infinity. The explicit forms of the first and the second moments of the measure $\mu_t$ are essential, especially to obtain results uniform in the parameter $c$. Let us observe that Theorem \ref{thm:C} does not cover the Brownian motion case. In fact, the analysis for $\alpha = 2$ is more delicate. However, there is a large space-time region where uniform convergence occurs. We get the following result. \begin{main_theorem} \label{thm:D} Suppose that $\mathbf{Y}$ is Brownian motion in $\RR^d$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. For each $\delta > 0$, the transition density of $\mathbf{X}$ satisfies \begin{equation} \label{eq:16} p(t; 0, y) = \rho_{\mathbf{Y}}(y) \big(1 + \calO\big(t^{-1}\big)\big) \end{equation} as $t$ tends to infinity, uniformly in the region \begin{equation} \label{eq:14} \Big\{(t, y) \in \RR_+ \times \RR^d : m^2 +\delta \leq \frac{\norm{y}^2}{4t^2} \leq 1 - \delta \Big\}. \end{equation} \end{main_theorem} Theorem \ref{thm:D} is implied by Theorem \ref{thm:6} combined with Lemma \ref{lem:densities}. Currently, we do not know how to get the asymptotic behavior of $p(t; 0, y)$ in the whole space-time region below $m^2 + \delta$, but we expect that \eqref{eq:16} is uniform in the region \[ \Big\{(t, y) \in \RR_+ \times \RR^d : \frac{\norm{y}^2}{4t^2} \leq 1 - \delta \Big\}. \] We plan to return to this problem in the future. The following theorem shows that if $\norm{y}$ stays above $2t$, the asymptotic behavior of $p(t; 0, y)$ is totally different. \begin{main_theorem} \label{thm:F} Suppose that $\mathbf{Y}$ is a Brownian motion in $\RR^d$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. For each $\delta > 0$, the transition density of $\mathbf{X}$ satisfies \[ p(t; 0, y) = e^{-t} (4\pi t)^{-\frac{d}{2}} e^{-\frac{|y|^2}{4t}} \bigg\{1 + \bigg(\frac{4t^2}{\norm{y}^2}\bigg) \vphi\bigg(\frac{4t^2}{\norm{y}^2}\bigg)+ \calO\bigg(\frac{t}{\norm{y}^2}\bigg) \bigg\} \] as $t$ tends to infinity, uniformly in the region \begin{equation} \label{eq:83} \Big\{(t, y) \in \RR_+ \times \RR^d : \frac{|y|^2}{4t^2} \geq 1 +\delta \Big\} \end{equation} where \[ \vphi(x) = \sum_{j = 0}^\infty \frac{1}{(m; m)_{j+1}} x^j, \qquad \norm{x} < 1. \] \end{main_theorem} Theorem \ref{thm:F} is proved in Theorem \ref{thm:5}. Most of the existing papers focus on analyzing one-dimensional Brownian motion subject to \emph{total resetting}, that is the process is put to zero at the Poisson moments. In this case one can explore the regenerative structure of Brownian motion with total resetting which is not available when $c \in (0, 1)$. 
Let us also emphasize that for total resetting the transition density $p$ can be written explicitly which makes the asymptotic analysis straightforward, for example by using the large deviation theory. In particular, in \cite{MR3476293} the authors showed the asymptotic behavior of $p(t; 0, y)$ as $t$ goes to infinity while $|y|/t$ stays constant. Based on certain simulations in dimensions $1$ and $2$, the change in the asymptotic behavior has been predicted by physicists, see e.g. \cite{MR4093464, Tal}. An attempt to understand the case of multi-dimensional Brownian motion was done in \cite{MR3225982} for total resetting. To prove Theorems \ref{thm:D} and \ref{thm:F} we use the representation \eqref{eq:rep-p-0} of $p$, and the properties of the splines $P_j$ to show that for $\norm{y} > 2 t m$, \[ p(t; 0, y) = e^{-t} (4\pi t)^{-\frac{d}{2}} \Big( e^{-\frac{|y|^2}{4t}} + I(t, y) + \text{negligible term}\Big) \] where \[ I(t, y) = t \int_m^1 e^{\psi(t, y; u)} {\: \rm d} u \] for certain concave function $\psi(t, y; \cdot)$. If $(t, y)$ belongs to the region \eqref{eq:14}, the function $\psi(t, y; \cdot)$ has the unique critical point in $[m, 1)$. To get the asymptotic behavior of $I(t, y)$ in the uniform manner we use a variant of the steepest descent method keeping track of the interplay between $t$ and $\norm{y}$. If $(t, y)$ belongs to the region \eqref{eq:83}, the function $\psi(t, y; \cdot)$ may have the critical point arbitrarily close to or above $1$. In this case a careful study of the integral leads to a complete description of the asymptotic behavior of $p(t; 0, y)$ in \eqref{eq:83}. Our paper is organized as follows: In Section \ref{sec:2} we introduce the splines $(P_j : j \in \NN)$ and measures $(\mu_t : t > 0)$. We then computed their moments in Section \ref{sec:2.1} and Section \ref{sec:2.2}, respectively. We show that the measures weakly converge to the probability measure $\mu$, see Section \ref{sec:mu_t}. Finally, in Section \ref{sec:2.4} we define and study basic properties of the function $\rho_{\mathbf{Y}}$. In Section \ref{sec:stationary} we provide a rigorous definition of the resetting. Then, with help of the splines $(P_j)$, we construct the representation \eqref{eq:rep-p-0.1} for processes obtained by partial resetting from strictly $\alpha$-stable processes with densities. Next, we prove that the function $\rho_{\mathbf{Y}}$ is the density of the ergodic measure for the process $\mathbf{X}$. In the following Section \ref{sec:3.3} we study the density of $\mathbf{X}$. In Section \ref{sec:3.4} we prove that the process $\mathbf{X}$ reaches NESS. Section \ref{sec:4} is devoted to the study of the asymptotic behavior of the transition density of $\mathbf{X}$. Finally, in Appendix \ref{appendix:A} we collect basic properties of strictly $\alpha$-stable processes. In Appendix \ref{appendix:B} we put further comments about the resetting and connections with the existing literature. \subsection*{Notation} We denote by $\NN$ positive integers and $\NN_0 = \NN \cup \{0\}$. We write $f \approx g$ on $U$ or $f(x) \approx g(x)$ for $x \in U$, if there is a constant $C > 0$ such that $C^{-1} g \leq f \leq C g$ for all $x \in U$. As usual $a \land b= \min\{a,b\}$, $a \vee b=\max\{a,b\}$. By $\lceil x\rceil$ and $\lfloor x \rfloor$ we denote the ceiling and the floor of a real number $x$. An open ball of radius $r > 0$ centered at $x$ is denoted by $B_r(x)$, and abbreviated to $B_r$ if $x=0$. 
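Before passing to Section \ref{sec:2}, let us illustrate the steepest descent step used in the proofs of Theorems \ref{thm:D} and \ref{thm:F} on a toy integral. The profile $g$ below is \emph{not} the function $\psi(t, y; \cdot)$ appearing in the proofs of Section \ref{sec:4}; it is only a generic smooth concave function with an interior maximum, chosen to show how an integral of the form $t \int_a^b e^{t g(u)}\,{\rm d}u$ is captured by the Laplace approximation $t\, e^{t g(u_*)} \sqrt{2\pi/(t |g''(u_*)|)}$, with a relative error that decays as $t$ grows.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Toy concave profile with an interior maximum at u* = 0.6 (a stand-in only;
# the actual psi(t, y; .) of Section 4 is different).
a, b = 0.3, 1.0
g = lambda u: -(u - 0.6) ** 2
u_star, g2 = 0.6, -2.0          # critical point and g''(u*)

for t in (10.0, 100.0, 1000.0):
    exact, _ = quad(lambda u: t * np.exp(t * g(u)), a, b)
    laplace = t * np.exp(t * g(u_star)) * np.sqrt(2.0 * np.pi / (t * abs(g2)))
    print(f"t = {t:7.1f}   exact / Laplace = {exact / laplace:.6f}")
\end{verbatim}
The uniform estimates needed in Section \ref{sec:4} additionally keep track of how the critical point and the second derivative depend on $t$ and $\norm{y}$, which is where the actual work lies.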
\section{Splines $P_j$ and measures $\mu_t$}
\label{sec:2}
In this section we introduce a sequence of splines on $[0, 1]$ which is the building block for the representation of the transition density of stable processes after resetting. Given $c \in (0, 1)$ and $\alpha \in (0, 2]$, let us consider a sequence $(W_n : n \in \NN)$ of functions on $\RR_+ \times \RR$ defined as
\begin{align*}
W_1(t, u) &= \frac{1}{1-m} \ind{(mt, t]}(u), \\
W_{n+1}(t, u) &= \ind{(m^{n+1} t, t]}(u) \int^{\frac{t-u}{1- m^{n+1}}}_{\frac{m^{n} t - u}{m^n - m^{n+1}} \vee 0} W_n(t - s, u - m^{n+1} s) {\: \rm d} s, \quad \text{for } n \in \NN
\end{align*}
where $m = c^\alpha$. Observe that $W_n$ is a homogeneous function of degree $n-1$.
\begin{proposition}
\label{prop:3}
For every $n \in \NN$ and $\lambda > 0$,
\[
W_n(\lambda t, \lambda u) = \lambda^{n-1} W_n(t, u), \quad\text{for all } t, u \geq 0.
\]
\end{proposition}
\begin{proof}
We argue by induction. There is nothing to prove for $n = 1$. Next, by the change of variables, we obtain
\begin{align*}
W_{n+1}(\lambda t, \lambda u) &= \ind{(m^{n+1}\lambda t, \lambda t]}(\lambda u) \int^{\frac{\lambda t - \lambda u}{1-m^{n+1}}}_{\frac{m^n \lambda t - \lambda u}{m^n-m^{n+1}} \vee 0} W_n(\lambda t - s, \lambda u - m^{n+1} s) {\: \rm d} s \\
&= \lambda \ind{(m^{n+1} t, t]}(u) \int^{\frac{t - u}{1-m^{n+1}}}_{\frac{m^n t - u}{m^n-m^{n+1}} \vee 0} W_n(\lambda t - \lambda s, \lambda u - m^{n+1} \lambda s) {\: \rm d} s.
\end{align*}
Now, by the inductive assumption
\[
W_{n+1}(\lambda t, \lambda u) = \lambda \ind{(m^{n+1} t, t]}(u) \int^{\frac{t - u}{1-m^{n+1}}}_{\frac{m^n t - u}{m^n-m^{n+1}} \vee 0} \lambda^{n-1} W_n(t - s, u - m^{n+1} s) {\: \rm d} s = \lambda^n W_{n+1}(t, u),
\]
and the proposition follows.
\end{proof}
For each $n \in \NN$, we set
\begin{equation}
\label{eq:21}
P_n(u) = W_n(1, u), \quad u \geq 0.
\end{equation}
\begin{proposition}
\label{prop:1}
The sequence $(P_n : n \in \NN)$ satisfies
\begin{align}
P_1(u) &= \frac{1}{1-m} \ind{(m, 1]}(u), \label{eq:P1u}\\
P_{n+1}(u) &= \big(u-m^{n+1}\big)_+^n \int_u^1 \frac{P_n(v)}{(v-m^{n+1})^{n+1}} {\: \rm d}v, \quad \text{for } n \in \NN. \label{Pnu}
\end{align}
In particular, $P_n$ is supported on $[m^n, 1]$.
\end{proposition}
\begin{proof}
For $u \in (m^{n+1}, 1]$, we have
\begin{align*}
P_{n+1}(u) = W_{n+1}(1, u) &= \int_{\frac{m^n-u}{m^n-m^{n+1}} \vee 0}^{\frac{1-u}{1-m^{n+1}}} W_n(1-s, u - m^{n+1} s) {\: \rm d} s \\
&= \int_{\frac{m^n-u}{m^n-m^{n+1}} \vee 0}^{\frac{1-u}{1-m^{n+1}}} (1-s)^{n-1} P_n\bigg(\frac{u-m^{n+1}s }{1 - s} \bigg) {\: \rm d} s.
\end{align*}
Setting
\[
w = \frac{u-m^{n+1} s }{1-s} = \frac{u-m^{n+1}}{1-s} + m^{n+1},
\]
we obtain
\begin{align*}
P_{n+1}(u) &= \int_{u \vee m^n}^1 \bigg(\frac{u-m^{n+1}}{w - m^{n+1}} \bigg)^{n-1} P_n(w) \frac{u-m^{n+1}}{(w-m^{n+1})^2} {\: \rm d} w,
\end{align*}
which, since $P_n$ vanishes off $[m^n, 1]$, coincides with \eqref{Pnu}.
\end{proof}
Later we will need the following fact.
\begin{proposition}
\label{prop:2}
For each $n \in \NN$, $P_n$ is a spline supported on $[m^n, 1]$, such that
\begin{equation}
\label{eq:8}
P_n(u) = \frac{1}{(n-1)!} \frac{1}{(m; m)_n} (1-u)^{n-1}, \quad \text{for all } u \in [m, 1],
\end{equation}
and
\begin{equation}
\label{eq:9}
P_n(u) \leq \frac{1}{(n-1)!} \frac{1}{(m; m)_n} (1-u)^{n-1}, \quad \text{for all } u \in [0, 1].
\end{equation}
\end{proposition}
\begin{proof}
Let us recall that for $a<b$, $n\in \NN$ and $v>a$ we have
\[
\int \frac{(v-b)^{n-1}}{(v-a)^{n+1}}{\: \rm d} v = \frac1{n}\frac1{b-a} (v-b)^n(v-a)^{-n}.
\] Hence, taking $a=m^{n+1}$ and $b=1$, for all $n \geq 1$ and $u \in [m, 1]$ we get \begin{align} \label{eq:integral_m} (u - m^{n+1})^n \int_u^1 \frac{(1-v)^{n-1}}{(v-m^{n+1})^{n+1}} {\: \rm d} v = \frac{1}{n} \frac{1}{1-m^{n+1}} (1-u)^n. \end{align} The proof of \eqref{eq:8} is by induction with respect to $n \in \NN$. For $n = 1$ the formula trivially holds true. Next, using the inductive hypothesis and Proposition \ref{prop:1} we can write \begin{align*} P_{n+1}(u) &= (u - m^{n+1})^n \int_u^1 \frac{P_n(v)}{(v-m^{n+1})^{n+1}} {\: \rm d} v \\ &= \frac{1}{(n-1)!} \frac{1}{(m; m)_{n}} (u - m^{n+1})^n \int_u^1 \frac{(1-v)^{n-1}}{(v-m^{n+1})^{n+1}} {\: \rm d} v \\ &= \frac{1}{n!} \frac{1}{(m; m)_{n+1}} (1-u)^n \end{align*} where the last equality is a consequence of \eqref{eq:integral_m}. Similarly, one can prove the estimates \eqref{eq:9}. \end{proof} In Section \ref{sec:repr}, we prove that the transition density of the process $\mathbf{X}$ obtained from strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, by resetting with factor $c \in (0, 1)$, can be written in a closed form with help of measures $(\mu_t : t > 0)$ where \begin{align} \label{def:mu_t} \mu_t({\rm d} u) =e^{-t}\delta_{t}({\rm d} u) + e^{-t} \sum_{j=1}^\infty t^j P_j(u/t) \frac{{\rm d} u}{t}. \end{align} Note that $\mu_t$ is a probability measure supported on $[0, t]$. Our aim is to compute the moments of $\mu_t$. To do so we start by computing the moments of $P_j$'s. \subsection{Moments of $P_j$'s} \label{sec:2.1} In this section we compute moments of splines $P_j$'s. The main result of this section is Theorem \ref{thm:all-moments}. For $\gamma \in \RR$ and $j \in \NN$, we set \begin{equation} \label{eq:28b} \mathbb{A}(\gamma, j) = \int_0^1 u^{\gamma} P_j(u) {\: \rm d} u. \end{equation} We start by proving several auxiliary lemmas. \begin{lemma} \label{lem:2} For all $\gamma \in \RR$ and $j \in \NN$, \begin{equation} \label{eq:19} (j+1+\gamma) \mathbb{A}(\gamma, j+1) = \mathbb{A}(\gamma, j) + \gamma m^{j+1} \mathbb{A}(\gamma-1, j+1). \end{equation} \end{lemma} \begin{proof} For the proof, we write \begin{align*} \mathbb{A}(\gamma, j+1) &= \int_{m^{j+1}} ^1 u^{\gamma} \big(u - m^{j+1}\big)^j \int_u^1 \frac{P_j(v)}{(v-m^{j+1})^{j+1}} {\: \rm d} v {\: \rm d}u \\ &= \int_{m^{j+1}}^1 \frac{P_j(v)}{(v-m^{j+1})^{j+1}} \int_{m^{j+1}}^v u^{\gamma} \big(u - m^{j+1}\big)^j {\: \rm d} u {\: \rm d} v. \end{align*} Next, by the integration by parts, we obtain the following \begin{align*} \int_{m^{j+1}}^v u^{\gamma} \big(u-m^{j+1}\big)^j {\: \rm d} u &= \frac{1}{j+1} v^{\gamma} \big(v-m^{j+1}\big)^{j+1} - \frac{\gamma}{j+1} \int_{m^{j+1}}^v u^{\gamma-1} \big(u-m^{j+1}\big)^{j+1} {\: \rm d} u \\ &= \frac{1}{j+1} v^{\gamma} \big(v-m^{j+1}\big)^{j+1} - \frac{\gamma}{j+1} \int_{m^{j+1}}^v u^{\gamma} \big(u-m^{j+1}\big)^j {\: \rm d} u \\ &\phantom{=\frac{1}{j+1} v^{-\gamma} \big(v-m^{j+1}\big)^{j+1}} + m^{j+1} \frac{\gamma}{j+1} \int_{m^{j+1}}^v u^{-\gamma-1} \big(u-m^{j+1}\big)^j {\: \rm d} u \end{align*} which leads to \begin{align*} (j+1 + \gamma) \int_{m^{j+1}}^v u^{\gamma} \big(u-m^{j+1}\big)^j {\: \rm d} u = v^{\gamma} \big(v-m^{j+1}\big)^{j+1} + \gamma m^{j+1} \int_{m^{j+1}}^v u^{\gamma-1} \big(u-m^{j+1}\big)^j {\: \rm d} u \end{align*} and the proposition follows. \end{proof} \begin{corollary} \label{cor:A0} For each $n\in\NN$, \[ \mathbb{A}(0, n)=\frac1{n!}. \] \end{corollary} We next introduce scaled moments. 
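Before doing so, let us record a small numerical sanity check of the recursion \eqref{Pnu}, of Corollary \ref{cor:A0} and of formula \eqref{eq:8}; it plays no role in the proofs. The Python sketch below evaluates $P_2, P_3, P_4$ on a grid by a straightforward trapezoidal quadrature of \eqref{Pnu}, and compares $\int_0^1 P_n(u)\,{\rm d}u$ with $1/n!$ as well as $P_n(u)$ at a point of $[m, 1]$ with $(1-u)^{n-1}/((n-1)!\,(m;m)_n)$. The value of $m$ and the grid size are arbitrary illustrative choices.
\begin{verbatim}
import math
import numpy as np

m = 0.4                                   # m = c^alpha, illustrative value
grid = np.linspace(0.0, 1.0, 20001)

def qpoch(q, n):
    return float(np.prod([1.0 - q ** k for k in range(1, n + 1)]))

def integral(vals):
    # trapezoidal rule over the whole grid
    return float(np.sum((vals[1:] + vals[:-1]) / 2.0 * np.diff(grid)))

# P_1 = (1 - m)^{-1} on (m, 1], zero elsewhere
P = [(grid > m).astype(float) / (1.0 - m)]

# P_{n+1}(u) = (u - m^{n+1})_+^n  int_u^1 P_n(v) (v - m^{n+1})^{-(n+1)} dv
for n in range(1, 4):
    a = m ** (n + 1)
    integrand = np.zeros_like(grid)
    mask = grid > a + 1e-12
    integrand[mask] = P[-1][mask] / (grid[mask] - a) ** (n + 1)
    cum = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0
                                           * np.diff(grid))))
    tail = cum[-1] - cum                  # tail[i] ~ int_{grid[i]}^1 integrand dv
    P.append(np.maximum(grid - a, 0.0) ** n * tail)

i0 = int(np.searchsorted(grid, (1.0 + m) / 2.0))   # a point of [m, 1]
for n, Pn in enumerate(P, start=1):
    closed = (1.0 - grid[i0]) ** (n - 1) / (math.factorial(n - 1) * qpoch(m, n))
    print(n, integral(Pn), 1.0 / math.factorial(n), Pn[i0], closed)
\end{verbatim}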
For $\gamma \in \RR$ and $n \in \NN$, we set \begin{align} \label{defG} \mathbb{B}(\gamma, n)= \bigg(\prod_{k=1}^n \frac{k+\gamma}{1-m^{k+\gamma}}\bigg) \int_0^1 u^{\gamma} P_n(u)\: {\rm d}u. \end{align} If $\gamma$ is a negative integer the value of the product is understood in the limiting sense. Namely, if $\gamma \in -\NN$ and $n \geq \abs{\gamma}$, then \begin{equation} \label{eq:43} \begin{aligned} \prod_{k = 1}^n \frac{k+\gamma}{1-m^{k+\gamma}} &= \lim_{\epsilon \to 0^+} \prod_{k = 1}^n \frac{k+\gamma+\epsilon}{1-m^{k+\gamma+\epsilon}} \\ &=\frac{1}{-\log m} \prod_{\stackrel{k = 1}{k \neq \abs{\gamma}}}^n \frac{k+\gamma}{1-m^{k+\gamma}}. \end{aligned} \end{equation} Clearly, for every $n\in\NN$ the function $\RR \ni \gamma \mapsto \mathbb{B}(\gamma, n)$ is continuous. \begin{lemma} \label{lem:C_lim_-infty} For every $n\in\NN$, \[ \lim_{\gamma \to -\infty} \mathbb{B}(\gamma,n+1)= m^{-\frac{n(n-1)}{2}} \frac{n!}{(1-m)^n} P_{n+1}(m^n). \] \end{lemma} \begin{proof} Given two real functions $f$, $g$ defined on $(-\infty, a)$, $a \in \RR$, we write $f \sim g$ as $x \to -\infty$, if \[ \lim_{x \to -\infty} \frac{f(x)}{g(x)} = 1. \] Let us observe that \begin{equation} \label{eq:prod_beh} \prod_{k=1}^{n+1} \frac{k+\gamma}{1-m^{k+\gamma}} \sim (-\gamma)^{n+1} m^{-\gamma (n+1) -\frac{(n+2)(n+1)}{2}} \quad\text{as } \gamma \to -\infty. \end{equation} Since for $\gamma<0$, \[ \int_{m^n}^1 u^{\gamma} P_{n+1}(u)\: {\rm d}u \leq (m^n)^{\gamma} \int_0^1 P_{n+1}(u)\: {\rm d}u=\frac{(m^n)^{\gamma}}{(n+1)!}, \] we get \[ \lim_{\gamma \to -\infty} \int_{m^n}^1 u^{\gamma} P_{n+1}(u)\: {\rm d}u = 0. \] Using now Proposition~\ref{prop:1} we obtain \begin{align} \label{eq:main_part} \int_{m^{n+1}}^{m^n}u^\gamma P_{n+1}(u) \: {\rm d}u &= \int_{m^{n+1}}^{m^n} u^\gamma \big(u-m^{n+1}\big)^n {\: \rm d}u \frac{P_{n+1}(m^n)}{(m^n-m^{n+1})^n}. \end{align} For $\gamma < -n -1$, we can write \begin{align*} \int_{m^{n+1}}^{m^n} u^\gamma \big(u-m^{n+1}\big)^n \: {\rm d}u &= (m^{n+1})^{\gamma+n+1} \int_m^1 u^{-\gamma-n-2}(1-u)^n \: {\rm d}u\\ &= (m^{n+1})^{\gamma+n+1} \bigg(\frac{\Gamma(-\gamma-n-1)\Gamma(n+1)}{\Gamma(-\gamma)} + \int_0^m u^{-\gamma-n-2}(1-u)^n \: {\rm d}u \bigg) \end{align*} where in the last equality we expressed the beta function in terms of the gamma function. Since for $\gamma < -n -2$, \[ \int_0^m u^{-\gamma-n-2}(1-u)^n {\: \rm d}u \leq m^{-\gamma -n-1}, \] and \[ \frac{\Gamma(-\gamma-n-1)}{\Gamma(-\gamma)} =(-1)^{n+1}\bigg(\prod_{k=1}^{n+1} (k+\gamma)\bigg)^{-1} \sim (-\gamma)^{-n-1} \quad\text{as } \gamma \to -\infty, \] we conclude that \[ \int_{m^{n+1}}^{m^n} u^\gamma \big(u-m^{n+1}\big)^n \: {\rm d}u \sim (m^{n+1})^{\gamma+n+1} (-\gamma)^{-n-1} \Gamma(n+1), \quad\text{as } \gamma \to -\infty, \] which together with \eqref{eq:prod_beh} and \eqref{eq:main_part} leads to \begin{align*} \mathbb{B}(\gamma, n+1) &\sim m^{-\gamma (n+1) -\frac{(n+2)(n+1)}{2}} (m^{n+1})^{\gamma+n+1} \Gamma(n+1) \frac{P_{n+1}(m^n)}{(m^n-m^{n+1})^n} \quad\text{as } \gamma \to -\infty. \end{align*} This completes the proof. \end{proof} Let us recall that for $q > 0$, the $q$-bracket of $x \in \RR$ is defined as \[ [x]_q = \frac{1-q^x}{1-q}. \] For $1 \leq k \leq n$, the $q$-binomial coefficient is \[ \qbinom{n}{k}{q} = \frac{[n]_q!}{[k]_q! [n-k]_q!} \] where \begin{align*} [n]_q! &= [1]_q [2]_q \ldots [n]_q, \quad n \in \NN,\\ [0]_q! &= 1. 
\end{align*} \begin{lemma} \label{lem:C_neg_int_gamma} For all $n\in\NN$ and $\gamma\in-\NN$ satisfying $\gamma\leq -(n+1)$, \begin{equation} \label{eq:22} \mathbb{B}(\gamma,n)=\frac1{(m; m)_n}. \end{equation} \end{lemma} \begin{proof} Let $\gamma \in \RR \setminus \{-1\}$. By Lemma \ref{lem:2}, for all $n \in \NN$, we have \[ (1-m^{n+1+\gamma+1})\mathbb{B}(\gamma+1,n+1)=\mathbb{B}(\gamma+1,n)+(1-m^{\gamma+1}) m^{n+1} \, \mathbb{B}(\gamma,n+1), \] or equivalently, \begin{align} \label{eq:C_rec} \mathbb{B}(\gamma,n+1) =- \frac{\mathbb{B}(\gamma+1,n)}{(1-m^{\gamma+1}) m^{n+1}} + \frac{[n+1+\gamma+1]_m}{[\gamma+1]_m } \frac1{m^{n+1}} \mathbb{B}(\gamma+1,n+1). \end{align} Therefore, if $\gamma \in \RR \setminus \{-1, -2\}$, \begin{align*} \mathbb{B}(\gamma,n+1) &= - \frac{\mathbb{B}(\gamma+1,n)}{(1-m^{\gamma+1}) m^{n+1}} + \frac{[n+1+\gamma+1]_m}{[\gamma+1]_m } \frac1{m^{n+1}} \mathbb{B}(\gamma+1,n+1) \\ &= -\frac{\mathbb{B}(\gamma+1,n)}{(1-m^{\gamma+1})m^{n+1}} -\frac{[n+1+\gamma+1]_m}{[\gamma+1]_m } \frac1{m^{n+1}} \frac{\mathbb{B}(\gamma+2,n)}{(1-m^{\gamma+1})m^{n+1}} \\ &\phantom{=} + \frac{[n+1+\gamma+1]_m}{[\gamma+1]_m } \frac{[n+1+\gamma+2]_m}{[\gamma+2]_m } \Big(\frac1{m^{n+1}}\Big)^2 \mathbb{B}(\gamma+2,n+1). \end{align*} Hence, if $\gamma \in \RR \setminus \{-1, -2, \ldots, -r\}$, for $r \in \NN$, we can iterate \eqref{eq:C_rec} to get \begin{equation} \label{eq:23} \begin{aligned} \mathbb{B}(\gamma, n+1) &=- \sum_{k=0}^{r-1} \bigg\{\prod_{\ell=1}^k \frac{[n+1+\gamma+\ell]_m}{[\gamma+\ell]_m} \bigg\} \Big(\frac1{m^{n+1}}\Big)^k \frac{\mathbb{B}(\gamma+k+1,n)}{(1-m^{\gamma+k+1})m^{n+1}}\\ &\phantom{=} + \bigg\{ \prod_{\ell=1}^r \frac{[n+1+\gamma+\ell]_m}{[\gamma+\ell]_m}\bigg\}\Big(\frac1{m^{n+1}}\Big)^r \mathbb{B}(\gamma+r,n+1). \end{aligned} \end{equation} Now, to prove \eqref{eq:22} we proceed by induction with respect to $n \in \NN$. Let $n = 1$ and $\gamma \leq -2$. By \eqref{eq:P1u}, we get \begin{align*} \mathbb{B}(\gamma, 1) &= \frac{1 + \gamma}{1 - m^{\gamma+1}} \int_0^1 u^\gamma P_1(u) {\: \rm d} u \\ &= \frac{1 + \gamma}{1 - m^{\gamma+1}} \frac{1}{1-m} \int_m^1 u^{\gamma} {\: \rm d} u = \frac{1}{1-m}. \end{align*} Suppose that \eqref{eq:22} holds true for $n \in \NN$. Setting $\gamma_\epsilon = -(n+2) + \epsilon$ for $\epsilon \in (0,1)$, by continuity we have \[ \mathbb{B}(-(n+2),n+1) = \lim_{\epsilon\to 0^+} \mathbb{B}(-(n+2)+\epsilon, n+1). \] Using \eqref{eq:23} with $r=n+2$ we can write \[ \mathbb{B}(-(n+2), n+1) = I_1+I_2+I_3+I_4 \] where \begin{align*} I_1&= -\lim_{\epsilon\to 0^+} \frac{\mathbb{B}(-n-1+\epsilon,n)}{(1-m^{-n-1+\epsilon})m^{n+1}},\\ I_2&= -\lim_{\epsilon\to 0^+} \sum_{k=1}^n \bigg\{\prod_{\ell=1}^k \frac{[-1+\epsilon+\ell]_m}{[\gamma_\epsilon+\ell]_m} \bigg\} \Big(\frac1{m^{n+1}}\Big)^k \frac{\mathbb{B}(-n-1+\epsilon+k,n)}{(1-m^{-n-1+\epsilon+k})m^{n+1}},\\ I_3 &= -\lim_{\epsilon\to 0^+} \bigg\{\prod_{\ell=1}^{n+1} \frac{[n+1+\gamma_\epsilon+\ell]_m}{[\gamma_\epsilon+\ell]_m}\bigg\} \Big(\frac1{m^{n+1}}\Big)^{n+1} \frac{\mathbb{B}(\epsilon,n)}{(1-m^{\epsilon})m^{n+1}},\\ \intertext{and} I_4 &=\lim_{\epsilon\to 0^+} \bigg\{ \prod_{\ell=1}^{n+2} \frac{[n+1+\gamma_\epsilon+\ell]_m}{[\gamma_\epsilon+\ell]_m}\bigg\} \Big(\frac1{m^{n+1}}\Big)^{n+2} \mathbb{B}(\epsilon,n+1). \end{align*} Thanks to the inductive hypothesis, we get \[ I_1= - \frac{\mathbb{B}(-n-1,n)}{(1-m^{-n-1})m^{n+1}}=\frac1{(m;m)_{n+1}}. \] Since $\lim_{\epsilon \to 0^+} [\epsilon]_m = 0$, we also have $I_2 = 0$. 
Furthermore, \[ I_3=- \bigg\{\prod_{\ell=2}^{n+1} \frac{[n+1+\gamma_0+\ell]_m}{[\gamma_0+\ell]_m}\bigg\} \Big(\frac1{m^{n+1}}\Big)^{n+2} \frac{\mathbb{B}(0,n)}{1-m^{-n-1}}, \] and \[ I_4= \bigg\{ \prod_{\ell=2}^{n+1} \frac{[n+1+\gamma_0+\ell]_m}{[\gamma_0+\ell]_m}\bigg\} \Big(\frac1{m^{n+1}}\Big)^{n+2}\frac{1-m^{n+1}}{1-m^{-n-1}} \mathbb{B}(0,n+1). \] In view of Corollary~\ref{cor:A0} we have $-\mathbb{B}(0,n) + (1-m^{n+1}) \mathbb{B}(0,n+1) = 0$, thus $I_3 + I_4 = 0$. Summarizing, we obtain \[ \mathbb{B}(-(n+2),n+1)=\frac1{(m;m)_{n+1}}. \] Next, we claim that for all $k \in \NN$, \[ \mathbb{B}(-(n+1+k),n+1)=\frac1{(m;m)_{n+1}}. \] Indeed, if the formula holds true for $k \in \NN$, then by \eqref{eq:C_rec} we can write \begin{align*} \mathbb{B}(-(n+1+k+1),n+1) &=-\frac{\mathbb{B}(-(n+1+k),n)}{m^{n+1}-m^{-k}}+\frac{1-m^{-k}}{m^{n+1}-m^{-k}}\mathbb{B}(-(n+1+k),n+1)\\ &=\frac1{m^{n+1}-m^{-k}} \bigg(\frac{-1}{(m;m)_n}+\frac{1-m^{-k}}{(m;m)_{n+1}} \Big) = \frac1{(m;m)_{n+1}}, \end{align*} as claimed. This completes the proof of the lemma. \end{proof} Combining Lemmas~\ref{lem:C_lim_-infty} and \ref{lem:C_neg_int_gamma} one can compute the value of $P_{n+1}(m^n)$ explicitly. \begin{corollary} For $n\in\NN$, \[ P_{n+1}(m^n)= m^{\frac{n(n-1)}{2}} \frac1{n!} \frac{(1-m)^n}{(m;m)_{n+1}}. \] \end{corollary} We are now ready to compute moments of $P_n$. \begin{theorem} \label{thm:all-moments} For all $n\in\NN$ and $\gamma\in \RR$, \begin{align*} \int_0^1 u^{\gamma} P_n(u)\: {\rm d}u = \frac1{(m;m)_n} \bigg\{\prod_{k=1}^n \frac{1-m^{k+\gamma}}{k+\gamma}\bigg\}. \end{align*} If $\gamma \in -\NN$ the value of the product is understood in the limiting sense, see \eqref{eq:43}. \end{theorem} \begin{proof} In view of \eqref{defG} our aim is to prove that \begin{equation} \label{eq:25} \mathbb{B}(\gamma,n)=\frac1{(m;m)_{n}} \end{equation} for all $n \in \NN$ and $\gamma \in \RR$. The reasoning is by induction with respect to $n \in \NN$. For $n = 1$, thanks to Proposition \ref{prop:1}, the formula holds true. Suppose that it holds for $n \geq 1$. By Lemma~\ref{lem:C_lim_-infty} the limit $\lim_{\gamma\to -\infty} \mathbb{B}(\gamma,n+1)$ exists. Furthermore, by Lemma~\ref{lem:C_neg_int_gamma} we have the equality \begin{align} \label{eq:C_lim_-infty_value} \lim_{\gamma\to -\infty} \mathbb{B}(\gamma,n+1)=\frac1{(m;m)_{n+1}}. \end{align} Let us first consider $\gamma \in \RR \setminus \ZZ$. By \eqref{eq:19}, we have \begin{equation} \label{eq:24} \mathbb{B}(\gamma,n+1) =\frac{\mathbb{B}(\gamma,n)}{(1-m^{n+1+\gamma})}+\frac{[\gamma]_m}{[n+1+\gamma]_m} m^{n+1} \mathbb{B}(\gamma-1,n+1). \end{equation} Hence, by repeated application of \eqref{eq:24} for $r \in \NN$ we get \begin{align*} \mathbb{B}(\gamma,n+1) &= \sum_{k=0}^{r-1} \bigg\{\prod_{\ell=0}^{k-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m} \bigg\} (m^{n+1})^k \frac{\mathbb{B}(\gamma-k,n)}{(1-m^{n+1+\gamma-k})}\\ &\phantom{=} + \bigg\{\prod_{\ell=0}^{r-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m}\bigg\} (m^{n+1})^r \mathbb{B}(\gamma-r,n+1). \end{align*} Notice that \begin{align} \nonumber \prod_{\ell=0}^{r-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m} &= \frac{[n+1+\gamma-r]_m \ldots [1+\gamma-r]_m}{[n+1+\gamma]_m\ldots [1+\gamma]_m } \\ \label{eq:prod_unified} &= \frac{(m^{1+\gamma-r};m)_{n+1}}{(m^{1+\gamma};m)_{n+1}}. 
\end{align} Therefore, by \eqref{eq:C_lim_-infty_value}, \begin{align} \label{eq:C-remainder} \lim_{r\to +\infty} \bigg\{ \prod_{\ell=0}^{r-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m}\bigg\} (m^{n+1})^r \mathbb{B}(\gamma-r,n+1) = \frac{m^{\frac{(n+1)n}{2}} (- m^{1+\gamma})^{n+1}}{(m^{1+\gamma};m)_{n+1}} \frac1{(m;m)_{n+1}}. \end{align} Similarly, by \eqref{eq:prod_unified}, for $k\in \NN$, \[ \bigg\{\prod_{\ell=0}^{k-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m} \bigg\} \frac1{(1-m^{n+1+\gamma-k})}=\frac{(m^{1+\gamma -k};m)_n}{(m^{1+\gamma};m)_{n+1}}. \] Hence, using the inductive hypothesis and the $q$-binomial theorem, \begin{align} \lim_{r\to \infty} &\sum_{k=0}^{r-1} \bigg\{\prod_{\ell=0}^{k-1} \frac{[\gamma-\ell]_m}{[n+1+\gamma-\ell]_m} \bigg\} (m^{n+1})^k \frac{\mathbb{B}(\gamma-k,n)}{(1-m^{n+1+\gamma-k})} \nonumber \\ &= \frac1{(m^{1+\gamma};m)_{n+1}} \frac1{(m;m)_n} \sum_{k=0}^\infty (m^{1+\gamma -k};m)_n (m^{n+1})^k \nonumber \\ &= \frac1{(m^{1+\gamma};m)_{n+1}} \frac1{(m;m)_n} \sum_{k=0}^\infty \bigg( \sum_{\ell=0}^n m^{\frac{\ell(\ell-1)}{2}} \qbinom{n}{\ell}{m} (-m^{1+\gamma-k})^\ell \bigg) (m^{n+1})^k \nonumber \\ &= \frac1{(m^{1+\gamma};m)_{n+1}} \frac1{(m;m)_n} \sum_{\ell=0}^n m^{\frac{\ell(\ell-1)}{2}} \qbinom{n}{\ell}{m} (-m^{1+\gamma})^\ell \Big(\sum_{k=0}^\infty (m^{n+1-\ell})^k\Big) \nonumber \\ &= \frac1{(m^{1+\gamma};m)_{n+1}} \frac1{(m;m)_{n+1}} \sum_{\ell=0}^n m^{\frac{\ell(\ell-1)}{2}} \qbinom{n+1}{\ell}{m} (-m^{1+\gamma})^\ell. \label{eq:C-series} \end{align} Adding \eqref{eq:C-remainder} and \eqref{eq:C-series}, and using the $q$-binomial theorem we obtain \eqref{eq:25} for $\gamma \in \RR \setminus \ZZ$, which by continuity holds true for all $\gamma \in \RR$. \end{proof} We are going to derive alternative formulations of Theorem~\ref{thm:all-moments} that will be useful in Section~\ref{sec:2.2}. For this purpose let us recall the generalized binomial coefficient and its $q$-version, $0<q<1$: For $x,y\in\RR$ such that $x,x-y,y \notin -\NN$, we set \[ \binom{x}{y}=\frac{\Gamma(x+1)}{\Gamma(y+1)\Gamma(x-y+1)}, \qquad \mbox{and} \qquad\quad \qbinom{x}{y}{q}=\frac{\Gamma_q(x+1)}{\Gamma_q(y+1)\Gamma_q(x-y+1)} \] where \[ \Gamma_q(x)=(1-q)^{1-x}\frac{(q;q)_{\infty}}{(q^x;q)_{\infty}}. \] Notice that $x\Gamma(x)=\Gamma(x+1)$ and $[x]_q \Gamma_q(x)=\Gamma_q(x+1)$ for $x\notin -\NN$. Therefore, for each $\gamma \in \RR \setminus (-\NN)$ and $N \in \NN_0$, \[ \frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} = (1-m)^{-N} \bigg\{\prod_{k = 1}^{N} \frac{1 - m^{\gamma+k}}{\gamma + k} \bigg\} \frac{\Gamma(\gamma+N+1)}{\Gamma_m(\gamma+N+1)}. \] We can thus continuously extend $\Gamma(\gamma+1)/\Gamma_m(\gamma+1)$ to all $\gamma \in \RR$, by setting \begin{align} \label{eq:G/G_m} \frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} = (1-m)^{\gamma} \bigg\{\prod_{k=1}^{|\gamma|-1} \frac{1-m^{k+\gamma}}{k+\gamma}\bigg\} \log(1/m), \quad\text{for } \gamma \in -\NN. \end{align} In particular, one can extend the natural domain of \[ \frac{\qbinom{n+\gamma}{\gamma}{m}}{\binom{n+\gamma}{\gamma}} \] to all $\gamma \in \RR$. \begin{corollary} \label{cor:m-1} For all $n\in\NN$ and $\gamma\in \RR$, \begin{align*} \int_0^1 u^\gamma P_n(u) {\: \rm d} u &= \frac{1}{n!} \frac{\qbinom{n+\gamma}{\gamma}{m}}{\binom{n+\gamma}{\gamma}} = \frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} \frac{\Gamma_m(n+\gamma+1)}{\Gamma(n+\gamma+1)} \frac{1}{\Gamma_m(n+1)}. \end{align*} If $\gamma \in -\NN$, the value of the right-hand side is understood in the limiting sense, see \eqref{eq:G/G_m}. 
Furthermore, if $\gamma \in \RR \setminus (-\NN)$, then \[ \int_0^1 u^\gamma P_n(u) {\: \rm d} u =\frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} (1-m)^{-\gamma} \frac1{\Gamma(n+\gamma+1)} \frac{(m^{n+1};m)_{\infty}}{(m^{n+\gamma+1};m)_{\infty}}. \] \end{corollary} \subsection{Moments of $\mu_t$} \label{sec:2.2} In this section we compute the moments of $\mu_t$. For each $\gamma \in \RR$ and $t > 0$, by \eqref{def:mu_t} \begin{align} \label{eq:moments-mu_t-P_j} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) = e^{-t} t^\gamma + e^{-t} t^\gamma \sum_{j=1}^\infty t^j \int_0^1 u^\gamma P_j(u){\: \rm d} u. \end{align} Hence, by Corollary \ref{cor:m-1}, we immediately get the following statement. \begin{corollary} \label{cor:m-2} For all $t>0$ and $\gamma\in \RR$, \[ \int_0^\infty u^\gamma \mu_t({\rm d} u)= e^{-t} t^\gamma \frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} \sum_{j=0}^\infty \frac{t^{j}}{\Gamma_m(j+1)} \frac{\Gamma_m(j+\gamma+1)}{\Gamma(j+\gamma+1)}. \] If $\gamma \in -\NN$, the value of the right-hand side is understood in the limiting sense, see \eqref{eq:G/G_m}. \end{corollary} \begin{corollary} \label{cor:m-3} For all $t>0$ and $k\in \NN$, \begin{align*} \int_0^\infty u^k \mu_t({\rm d} u) &= e^{-t} \frac{k!}{(m;m)_k} \int_{mt}^t \int_{m u_{k-1}}^{u_{k-1}} \ldots \int_{m u_1}^{u_1} e^{u_0} {\: \rm d} u_0 \ldots {\: \rm d} u_{k-1}\\ &= k! \sum_{j=0}^k \bigg\{\prod_{\stackrel{i=0}{i\neq j}}^k \frac1{m^j-m^i} \bigg\} e^{-(1-m^j) t}. \end{align*} \end{corollary} \begin{proof} For $k\in\NN$, $\gamma \in \ZZ \setminus\{-1,\ldots,-k\}$, and $t>0$, \begin{align} \label{eq:multi-integral} \int_{m t}^t \int_{m u_{k-1}}^{u_{k-1}} \ldots \int_{m u_1}^{u_1} u_0^{\gamma} {\: \rm d} u_0 \ldots {\: \rm d} u_{k-1} = t^{\gamma+k} \prod_{i=1}^k \frac{1-m^{\gamma+i}}{\gamma+i}. \end{align} Using Corollary~\ref{cor:m-2} and \eqref{eq:multi-integral} we get \begin{align*} \int_0^\infty u^k \mu_t({\rm d} u) &= e^{-t} \frac{k!}{[k]_m!} \sum_{j = 0}^\infty \frac{t^{j+k}}{(j+k)!} \frac{[j+k]_m!}{[j]_m!}\\ &= e^{-t} \frac{k!}{(m;m)_k} \sum_{j = 0}^\infty \frac{t^{j+k}}{j!} \bigg\{ \prod_{i=1}^k \frac{1-m^{j+i}}{j+i}\bigg\}\\ &= e^{-t} \frac{k!}{(m;m)_k} \sum_{j = 0}^\infty \frac1{j!} \int_{m t}^t \int_{m u_{k-1}}^{u_{k-1}} \ldots \int_{m u_1}^{u_1} u_0^j {\: \rm d} u_0 \ldots {\: \rm d} u_{k-1}. \end{align*} Now it suffices to show that \begin{align*} \int_{mt}^t \int_{m u_{k-1}}^{u_{k-1}} \ldots \int_{m u_1}^{u_1} e^{u_0} {\: \rm d} u_0 \ldots {\: \rm d} u_{k-1} =(m;m)_k \sum_{j=0}^k \bigg\{\prod_{\stackrel{i=0}{i \neq j}} ^k \frac1{m^j-m^i} \bigg\} e^{m^j t} \end{align*} which one can prove by a straightforward but tedious induction with respect to $k \in \NN$. \end{proof} Next, we compute the limits of moments. \begin{proposition} \label{prop:m-1b} For all $\kappa \in (0, 1)$ and $\gamma\in \RR$, \begin{align} \label{lim:moments} \lim_{t\to +\infty} (1-m)^{\gamma} \frac{\Gamma_m(\gamma+1)}{\Gamma(\gamma+1)} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) =1, \end{align} uniformly with respect to $m \in (0, \kappa]$ where for $\gamma \in -\NN$, the ratio is understood in the limiting sense, see \eqref{eq:G/G_m}. Moreover, for all $t_0 > 0$ and $\gamma \in \RR$, \begin{align} \label{ineq:sup-finite} \sup_{t\geq t_0} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) < \infty. \end{align} \end{proposition} \begin{proof} Let $\gamma \in \RR$. If $\gamma>0$ we have \[ 1 \geq \frac{(m^{j+1};m)_{\infty}}{(m^{j+\gamma+1};m)_{\infty}} \geq (m^{j+1};m)_{\lceil \gamma \rceil} \geq (\kappa^{j+1};\kappa)_{\lceil \gamma \rceil}. 
\] Similarly, for $\gamma<0$, and $j \geq \lfloor -\gamma \rfloor$ we get \[ 1 \leq \frac{(m^{j+1};m)_{\infty}}{(m^{j+\gamma+1};m)_{\infty}} \leq \frac1{(m^{j+\gamma+1};m)_{\lceil -\gamma \rceil}} \leq \frac1{(\kappa^{j+\gamma+1};\kappa)_{\lceil -\gamma \rceil}}. \] Therefore for a fixed $\epsilon \in (0, 1)$, there is $N \geq \lfloor -\gamma \rfloor$ which depends only on $\kappa$ and $\gamma$, such that for all $j \geq N$, \[ \bigg| \frac{(m^{j+1};m)_{\infty}}{(m^{j+\gamma+1};m)_{\infty}} - 1 \bigg| \leq \epsilon. \] Using \eqref{eq:moments-mu_t-P_j}, we write \begin{align*} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) = e^{-t} t^\gamma + e^{-t} t^{\gamma} \sum_{j = 1}^{N-1} t^j \int_0^1 u^\gamma P_j(u) {\: \rm d} u + I(t) \end{align*} where \[ I(t) = e^{-t} t^{\gamma} \sum_{j = N+1}^{\infty} t^j \int_0^1 u^\gamma P_j(u) {\: \rm d} u. \] Therefore, by Corollary \ref{cor:m-1} \[ \lim_{t \to +\infty} \frac{\Gamma_m(\gamma+1) (1-m)^{\gamma}}{\Gamma(\gamma+1)} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) = \lim_{t \to +\infty} \frac{\Gamma_m(\gamma+1) (1-m)^{\gamma}}{\Gamma(\gamma+1)} I(t), \] uniformly with respect to $m \in (0, \kappa]$. Next, by Corollary \ref{cor:m-2} we have \begin{equation} \label{eq:88} \frac{\Gamma_m(\gamma+1) (1-m)^{\gamma}}{\Gamma(\gamma+1)} I(t) = e^{-t} t^\gamma\sum_{j=N+1}^\infty \frac{t^j}{\Gamma(j+\gamma+1)} \frac{(m^{j+1};m)_{\infty}}{(m^{j+\gamma+1};m)_{\infty}}. \end{equation} Let us recall the Mittag-Leffler function, that is \[ E_{\alpha, \beta}(t) = \sum_{n = n_0}^\infty \frac{t^n}{\Gamma(\alpha n + \beta)}, \qquad t \in \RR \] where $n_0 \in \NN_0$ is any nonnegative integer such that $\alpha n_0 + \beta > 0$. Since \[ E_{\alpha, \beta}(t) = \sum_{n = 0}^\infty \frac{t^{n+n_0}}{\Gamma(\alpha n + \beta + n_0\alpha)} =t^{n_0} E_{\alpha, \beta + n_0 \alpha}(t), \] by \cite[Theorem 4.3]{MR4179587}, for $\alpha \in (0, 2)$ we get \begin{equation} \label{eq:89} \lim_{t \to +\infty} t^{\beta-1} e^{-t} E_{\alpha, \beta}(t^\alpha) = \lim_{t \to +\infty} t^{\beta+ n_0 \alpha - 1} e^{-t} E_{\alpha, \beta+n_0\alpha}(t^{\alpha}) =\frac1\alpha. \end{equation} Hence, by \eqref{eq:88}, \begin{align*} \bigg| \frac{\Gamma_m(\gamma+1) (1-m)^{\gamma}}{\Gamma(\gamma+1)} I(t) - e^{-t} t^\gamma \mathit{E}_{1, \gamma+1}(t) \bigg| &\leq e^{-t}t^\gamma \sum_{j=N+1}^\infty \frac{t^j}{\Gamma(j+\gamma+1)} \bigg| \frac{(m^{j+1};m)_{\infty}}{(m^{j+\gamma+1};m)_{\infty}} -1\bigg|\\ &\leq \epsilon e^{-t}t^\gamma \mathit{E}_{1,\gamma+1}(t), \end{align*} which by \eqref{eq:89} leads to \eqref{lim:moments}. \end{proof} \subsection{Weak convergence of $\mu_t$} \label{sec:mu_t} In this section we show that family of measures $(\mu_t : t > 0)$ converges weakly. \begin{theorem} \label{thm:weak_conv} The family of probability measures $(\mu_t : t>0)$ on $[0,\infty)$ converges weakly as $t\to+\infty$ to a probability measure $\mu$ which is uniquely characterized by its moments: \[ \int_0^{\infty} u^k \mu({\rm d}u)=\frac{k!}{(m;m)_k},\qquad k\in \NN_0. \] The measure $\mu$ has finite moments of all orders $\gamma\in \RR$, and \begin{equation} \label{eq:55} \lim_{t \to +\infty} \int_0^{\infty} u^\gamma \mu_t({\rm d} u) = \int_0^{\infty} u^\gamma \mu({\rm d} u)= \frac{\Gamma(\gamma+1)}{\Gamma_m(\gamma+1)} (1-m)^{-\gamma}. \end{equation} The value of the right-hand side for $\gamma \in -\NN$ is understood in the limiting sense, see \eqref{eq:G/G_m}. 
\end{theorem}
\begin{proof}
By Proposition \ref{prop:m-1b}, for each $k \in \NN$,
\begin{equation}
\label{eq:85}
M_k = \lim_{t \to \infty} \int_0^\infty u^k \mu_t({\rm d} u) = \frac{k!}{(m;m)_k}.
\end{equation}
By Stirling's formula there is $C > 0$ such that
\[
\bigg(\frac{k!}{(m; m)_k} \bigg)^{\frac{1}{2k}} \leq C \sqrt{k},
\]
thus Carleman's condition is satisfied, that is
\begin{equation}
\label{eq:84}
\sum_{k = 1}^\infty M_k^{-\frac{1}{2k}} = \infty.
\end{equation}
Consequently, the Stieltjes moment problem is determinate, i.e. the limit measure is unique if it exists. Next, by Corollary \ref{cor:m-2}, each measure $\mu_t$ is a probability measure on $[0, \infty)$. By Chebyshev's inequality, for all $\epsilon > 0$ and $t > 0$,
\[
1- \mu_t\big(\big\{ \abs{u} < \epsilon^{-1} \big\}\big) = \mu_t\big(\big\{ \abs{u} \geq \epsilon^{-1} \big\}\big) \leq \epsilon \int_0^\infty \abs{u} \: \mu_t({\rm d} u)
\]
which is uniformly bounded thanks to Proposition \ref{prop:m-1b}. Hence, the family $(\mu_t : t > 0)$ is tight. Since the moment problem is determinate, tightness implies that there is a measure $\mu$ such that $\mu_t$ converges weakly to $\mu$ as $t$ tends to infinity, see e.g. \cite[Theorem 25.10]{MR1324786}. Recall that if a sequence of random variables converges in distribution and has uniformly bounded $(p+\delta)$-moments, then its $p$-moments converge as well, see e.g. \cite[Theorem 25.12]{MR1324786}. Hence, all non-negative moments of $(\mu_t : t > 1)$ converge to the moments of $\mu$ as $t$ tends to infinity. Lastly, notice that by the weak convergence, for each $\epsilon > 0$,
\begin{align*}
\mu(\{0\}) \leq \mu((-\infty,\epsilon)) &\leq \liminf_{t\to +\infty} \mu_t((-\infty,\epsilon)) \\
&\leq \sup_{t\geq 1} \int _0^\epsilon (u/\epsilon)^{-1}\mu_t({\rm d} u) \\
&\leq \epsilon \sup_{t \geq 1} \int_0^\infty u^{-1} \mu_t({\rm d} u),
\end{align*}
hence by Proposition \ref{prop:m-1b} we obtain $\mu(\{0\})=0$. Consequently, we can use \cite[Theorem 25.7]{MR1324786} with $h(u)=|u|^{-1}$ to conclude that $\mu_t h^{-1}$ converges weakly to $\mu h^{-1}$. Hence, all positive real moments of $(\mu_t h^{-1} : t \geq 1)$ converge to those of $\mu h^{-1}$ as $t \to +\infty$, which corresponds to negative real moments of $(\mu_t : t\geq 1)$ and $\mu$, respectively. The exact values of the moments of $\mu$ follow from Proposition \ref{prop:m-1b}.
\end{proof}
We record the following corollary for later use.
\begin{corollary}
For $t>0$ and $m\in (0,1)$,
\begin{align}
\label{eq:1-moment}
\frac{(1-m)e^t}{e^t -e^{mt}} \int_0^\infty u\: \mu_t({\rm d} u) = 1,
\end{align}
and
\begin{align}
\label{ineq:2-moment}
\frac{(1-m)^2 e^t}{e^t-e^{mt}} \int_0^\infty u^2 \: \mu_t({\rm d} u) \leq \frac2{1+m}.
\end{align}
\end{corollary}
\begin{proof}
The equality \eqref{eq:1-moment} directly follows from Corollary~\ref{cor:m-3} with $k=1$. To prove \eqref{ineq:2-moment}, we observe that
\[
\int_{mt}^t \int_{m u_1}^{u_1} e^{u_0} {\: \rm d} u_0 \: {\rm d} u_1 \leq e^t-e^{mt},
\]
thus the inequality is a consequence of Corollary~\ref{cor:m-3} with $k=2$.
\end{proof}
\begin{lemma}
\label{lem:mu_t_erg}
The family of probability measures $(\mu_t : t > 0)$ converges to $\mu$ in total variation distance, i.e.,
\[
\lim_{t \to +\infty} \|\mu_t-\mu\|_{TV}= \lim_{t \to +\infty} \sup_{B \in \mathcal{B}(\RR)} \left| \mu_t(B)-\mu(B) \right| = 0
\]
where $\calB(\RR)$ denotes the $\sigma$-field of Borel sets in $\RR$.
\end{lemma}
\begin{proof}
Let us observe that by Corollary \ref{cor:m-3}, for each $t > 0$ and $k \in \NN_0$ we have
\[
M_k(t) = \int_0^\infty u^k \mu_t({\rm d} u) \leq M_k
\]
where $M_k$ is defined in \eqref{eq:85}. Then by \eqref{eq:84}, we obtain
\[
\sum_{k = 1}^\infty M_k(t)^{-\frac{1}{2k}} = \infty.
\]
Consequently, there is a unique measure on $(0, \infty)$ with moments $(M_k(t) : k \in \NN_0)$. Let $Y_t \equiv t$ for $t > 0$, and let $\mathbf{X}^{\rm TCP}$ be the process obtained from $\mathbf{Y}$ by partial resetting with factor $c$. The probability distribution of $X^{\rm TCP}_t$ equals $\mu_t$ since they have the same moments, see \cite[Theorem 3]{MR2426601}. Therefore proving the lemma is equivalent to proving ergodicity of the process $\mathbf{X}^{\rm TCP}$. The latter is a consequence of \cite[Theorem 1(3)]{MR1157423} together with \cite[Theorem 2.1 and Remark B]{MR1956829}. More precisely, the process $\mathbf{X}^{\rm TCP}$ admits an embedded recursive chain $\mathbf{Z}^{\rm TCP} = (Z_n^{\rm TCP} : n \in \NN_0)$ where
\[
Z^{\rm TCP}_n=X^{\rm TCP}_{T_n}, \qquad n \in \NN_0.
\]
Since $\mathbf{Z}^{\rm TCP}$ satisfies
\[
Z^{\rm TCP}_{n+1}=c Z^{\rm TCP}_n + (T_{n+1}-T_n), \qquad n \in \NN_0,
\]
it is a recursive chain whose driver $(T_{n+1}-T_n : n \in \mathbb{N}_0)$ consists of i.i.d. exponential random variables, see \cite[Definition 1]{MR1157423}. Hence, by \cite[Theorem 1(3)]{MR1157423}, it is enough to show sc-convergence of $\mathbf{Z}^{\rm TCP}$, see \cite[Def. 4, p. 23]{MR1157423} for the definition of sc-convergence. In view of \cite[Theorem 8]{MR1157423} it is enough to check conditions I--III given on page 18 of \cite{MR1157423} (notice that the assumption $\mathbb{E} \tau_V(x)<\infty$ is superfluous since it is a part of condition I). We notice that the conditions I--III hold true if $\mathbf{Z}^{\rm TCP}$ converges in total variation to its stationary law, see \cite[Theorem 2]{MR1157423}. Now, let us observe that the Markov chain $\mathbf{Z}^{\rm TCP}$ is an autoregressive process of order $1$, and it is positive Harris recurrent, see \cite[Theorem 2.1 and Remark B]{MR1956829}. Since $Z^{\rm TCP}_n$ has an absolutely continuous distribution, see e.g. \cite[Theorem 2.8]{MR1894253}, we immediately conclude that $\mathbf{Z}^{\rm TCP}$ converges in total variation.
\end{proof}
\begin{remark}
There is a purely analytic proof of Lemma \ref{lem:mu_t_erg}, however it would require preparatory steps which go beyond the scope of this article. The details will appear elsewhere. For the sake of completeness, we have provided a probabilistic argument here.
\end{remark}
\begin{remark}
By Theorem~\ref{thm:weak_conv}, the measure $\mu$ is uniquely characterized by its moments. Since it has the same moments as the random variable $Z$ in \cite[(5.15)]{Kemperman}, the density of $\mu$ can be read off from \cite[(5.9)]{Kemperman}, that is
\begin{equation}
\label{eq:81}
\mu(u)= \frac{1}{(m;m)_\infty}\sum_{k=0}^\infty (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m;m)_k}e^{-m^{-k}u}, \quad u \geq 0.
\end{equation}
\end{remark}
The following lemma will be important in Section \ref{sec:cyl}.
\begin{lemma}
\label{lem:inf_mu_t}
For all $t_0 > 0$ and $\delta_2 > \delta_1 > 0$ such that $\delta_1 < t_0$ we have
\[
\inf_{t \in [t_0,\infty)} \mu_t\big([\delta_1,\delta_2]\big) > 0.
\]
\end{lemma}
\begin{proof}
Since the function $\mu(u)$ is the density of the probability measure $\mu({\rm d} u)$, it is non-negative. By \eqref{eq:81}, $\mu$ has a holomorphic extension to $\big\{z \in \CC : \Re z > 0\big\}$.
Therefore, it has finitely many zeros in $(\delta_1,\delta_2)$. In particular, $\mu([\delta_1,\delta_2]) > 0$. Hence, by Theorem~\ref{thm:weak_conv} there is $T > 0$ such that for all $t > T$, $\mu_t([\delta_1,\delta_2]) > 0$. It remains to deal with $t \in [t_0, T]$. Let $j \geq 2$ be such that $m^j < \delta_2 T^{-1}$. Using \eqref{def:mu_t}, for all $t \in [t_0, T]$ we get
\begin{align}
\nonumber
\mu_t\big([\delta_1,\delta_2]\big) &\geq e^{-t} t^j \int_{(\delta_1,\delta_2)} P_j(u/t) \frac{{\rm d} u}{t} \\
\label{eq:82}
&= e^{-t} t^j \int_{(\delta_1/t,\delta_2/t)} P_j(u) \: {\rm d} u.
\end{align}
Let us observe that \eqref{eq:82} defines a continuous function on $[t_0, T]$. Moreover, by Proposition \ref{prop:1}, $P_j$ is positive on the interior of its support $[m^j, 1]$. Thus \eqref{eq:82} has a positive infimum on $[t_0, T]$. This completes the proof.
\end{proof}
In the remaining part of this section we prove auxiliary lemmas which are helpful in studying ergodicity of processes with resetting.
\begin{lemma}
\label{lem:unif_conv-1}
Fix $C, \gamma > 0$ and let $\mathcal{G}$ be a family of functions $g : (0,\infty) \to \RR$, such that
\[
0 \leq g(u) \leq C\big(u+ u^{-1}\big)^\gamma, \quad\text{for all } u>0.
\]
Then
\[
\lim_{t\to+\infty} \sup_{g \in \mathcal{G}} \bigg| \int_0^{\infty}g(u) \: \mu_t({\rm d} u) - \int_0^{\infty}g(u)\: \mu({\rm d} u)\bigg|=0.
\]
\end{lemma}
\begin{proof}
Let $\epsilon \in(0,1)$. By \eqref{ineq:sup-finite}, for sufficiently large $M > 1$, we have
\[
\int_{(0,1/M] \cup [M,\infty)} g(u) \: \mu_t({\rm d} u) \leq \frac{C}{ M+ M^{-1}} \sup_{t \geq 1} \int_0^{\infty} \big(u+u^{-1}\big)^{\gamma+1} \mu_t({\rm d} u) \leq \epsilon/3.
\]
A similar inequality holds true for $\mu$ in place of $\mu_t$. Notice that $M$ may be chosen so that the inequality holds true uniformly for all $g \in \mathcal{G}$ and $t \geq 1$. Next we write
\begin{align*}
&\bigg| \int_{(1/M, M)} g(u) \: \mu_t({\rm d} u) - \int_{(1/M,M)} g(u) \: \mu({\rm d} u) \bigg| \\
&\qquad \leq \int_{(1/M,M)} g(u) |\mu_t-\mu| ({\rm d} u) \\
&\qquad\leq C \big(M+ M^{-1}\big)^{\gamma} 2 \|\mu_t-\mu\|_{TV}
\end{align*}
where $|\mu_t-\mu|({\rm d}u)$ denotes the total variation measure. Finally, by Lemma~\ref{lem:mu_t_erg}, there is $T \geq 1$ such that for all $t\geq T$ we have
\[
C \big(M+ M^{-1}\big)^{\gamma} 2 \|\mu_t-\mu\|_{TV} \leq \epsilon/3
\]
which completes the proof.
\end{proof}
Given $c \in (0, 1)$, for $\delta > 0$ and $t > 0$, we set
\begin{equation}
\label{eq:52}
\mathscr{X}(t; \delta) = \bigg\{x \in \RR^d : -\log_c \norm{x} \leq \frac{t}{1+\delta} \bigg\}.
\end{equation}
\begin{lemma}
\label{lem:unif_conv-2}
Let $c \in (0, 1)$, $\alpha \in (0, 2]$ and $m = c^\alpha$. Fix $n \in \NN$ and $C, \gamma > 0$. Let $\mathcal{F}$ denote the family of functions $f : (0,\infty)\times \RR^n \to \RR$, such that for all $x \in \RR^n$ and $u>0$,
\begin{align}
\label{eq:78}
0 \leq f(u,x) \leq C \big(u+u^{-1} \big)^\gamma,\qquad | f(u, x)-f(u,0)|\leq C |x| \big(u+u^{-1} \big)^\gamma.
\end{align}
Then for each $\delta > 0$, we have
\begin{align*}
\lim_{t \to +\infty} \sup_{\stackrel{x \in \mathscr{X}(t; \delta)}{f \in \mathcal{F}}} \bigg| e^{-t}f(t,x)+e^{-t}\sum_{j=1}^\infty t^j \int_0^1 f(tu,c^jx) P_j(u) \: {\rm d}u -\int_0^{\infty} f(u,0) \: \mu({\rm d} u) \bigg|=0.
\end{align*} \end{lemma} \begin{proof} In view of Lemma \ref{lem:unif_conv-1}, it is enough to show that \begin{equation} \lim_{t \to +\infty} \sup_{\stackrel{x \in \mathscr{X}(t; \delta)}{f \in \mathcal{F}}} \bigg| e^{-t} f(t, x) + e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 f(tu, c^j x) P_j(u) {\: \rm d} u - \int_0^\infty f(u, 0) \mu_t({\rm d} u) \bigg|=0. \end{equation} Given $\epsilon > 0$ we set \[ N = \max\big\{\gamma + 1, -\log_c (\norm{x}/\epsilon ) \big\}. \] By \eqref{def:mu_t}, we can write \begin{align*} & \bigg| e^{-t} f(t, x) + e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 f(tu, c^j x) P_j(u) {\: \rm d} u - \int_0^\infty f(u, 0) {\: \rm d} \mu_t({\rm d} u) \bigg|\\ &\qquad\qquad \leq e^{-t} \big( \abs{f(t, x)} + \abs{f(t, 0)}\big) + e^{-t} \sum_{1 \leq j \leq N} t^j \int_0^1 \big(\abs{f(tu, c^j x)} + \abs{f(tu, 0)} \big) P_j(u) {\: \rm d} u \\ &\qquad\qquad\phantom{\leq} + e^{-t} \sum_{j > N} t^j \int_0^1 \big| f(tu, c^j x) - f(tu, 0) \big| P_j(u) {\: \rm d} u. \end{align*} First, let us observe that if $x \in \mathscr{X}(t; \delta)$, then for $j > N$, $c^j \norm{x} \leq c^N \norm{x} \leq \epsilon$, and thus \begin{align*} e^{-t} \sum_{j > N} t^j \int_0^1 \big| f(tu, c^j x) - f(tu, 0) \big| P_j(u) {\: \rm d} u &\leq C e^{-t} \sum_{j > N} t^j \int_0^1 c^j\norm{x} \Big( tu+\frac1{tu}\Big)^\gamma P_j(u) \: {\rm d}u \\ &\leq \epsilon C \int_0^\infty \big(u + u^{-1}\big)^\gamma \mu_t({\rm d} u) \end{align*} which by Proposition \ref{prop:m-1b} is uniformly bounded by a constant multiply of $\epsilon$. Next, by Theorem \ref{thm:all-moments}, for each $j \in \NN$, \begin{align} \label{eq:56} \int_0^1 \abs{f(tu,c^jx)} P_j(u) \: {\rm d}u \leq C \frac{2^{\lceil \gamma \rceil - 1}}{(m;m)_\infty} &\bigg( t^{\lceil \gamma \rceil} \bigg\{\prod_{k=1}^j \frac{1-m^{k+\lceil \gamma \rceil}}{k+\lceil \gamma \rceil}\bigg\} + t^{-\lceil \gamma \rceil} \bigg\{\prod_{k=1}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil}\bigg\} \bigg). \end{align} If $j \geq \lceil \gamma \rceil$, the second product is understood in the limiting sense, that is \[ \prod_{k=1}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil} = (-\log m) \prod_{\stackrel{k=1}{k \neq \lceil \gamma \rceil}}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil}. \] Now, using \eqref{eq:56}, we get \[ \sum_{1 \leq j \leq N} t^j \int_0^1 \abs{f(tu, c^j x)} P_j(u) {\: \rm d} u \leq C \frac{2^{\lceil \gamma \rceil-1}}{(m; m)_\infty} \big(I_1 + I_2\big) \] where \[ I_1=\sum_{1 \leq j \leq N} t^{j+\lceil \gamma \rceil} \bigg\{\prod_{k=1}^j \frac{1-m^{k+\lceil \gamma \rceil}}{k+\lceil \gamma \rceil}\bigg\}, \quad\text{and}\quad I_2 = \sum_{1 \leq j \leq N} t^{j-\lceil \gamma \rceil} \bigg\{\prod_{k=1}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil}\bigg\}. \] There is $c_1 > 0$ such that \[ I_1 \leq c_1 \sum_{1 \leq j \leq N} \frac{t^{j+\lceil \gamma \rceil}}{(j+\lceil \gamma \rceil)!} \leq c_1 S(t, x) \] where \[ S(t, x)= \sum_{0 \leq j \leq N+\lceil \gamma \rceil}\frac{t^j}{j!}. 
\]
To bound $I_2$ for $t \geq 1$, we write
\begin{align*}
I_2 &\leq \sum_{j = 1}^{\lceil \gamma \rceil-1} t^{j-\lceil \gamma \rceil} \bigg\{ \prod_{k=1}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil}\bigg\} + (-\log m) \sum_{\lceil \gamma \rceil \leq j \leq N} t^{j-\lceil \gamma \rceil} \bigg\{ \prod_{\stackrel{k=1}{k \neq \lceil \gamma \rceil}}^j \frac{1-m^{k-\lceil \gamma \rceil}}{k-\lceil \gamma \rceil} \bigg\} \\
&\leq c_2 + c_3 \sum_{0 \leq j \leq N-\lceil \gamma \rceil} \frac{t^j}{j!}
\end{align*}
for certain $c_2, c_3 > 0$. Hence, there is $c_4 > 0$ such that
\[
\abs{f(t, x)} + \sum_{1 \leq j \leq N} t^j \int_0^1 \abs{f(tu, c^j x)} P_j(u) {\: \rm d} u \leq c_4 S(t, x).
\]
It remains to show that
\begin{equation}
\label{eq:50}
\lim_{t \to +\infty} e^{-t} \sup_{x \in \mathscr{X}(t; \delta)} S(t, x) = 0.
\end{equation}
Let $T = \lceil N + \gamma \rceil$. If $-\log_c (\norm{x}/\epsilon) \leq \gamma+1$, then $N = \gamma+1$, and so for $t > T$,
\[
S(t, x) \leq e t^{2\gamma+1}.
\]
Otherwise, $N = - \log_c (\norm{x}/\epsilon)$. Since $x \in \mathscr{X}(t; \delta)$, if
\[
t \geq \frac{(1+\delta)^2}{\delta} (\log_c \epsilon + \gamma + 1),
\]
we have $t > (1+\delta)T$, and so
\[
S(t, x) = \sum_{j = 0}^T \frac{t^j}{j!} \leq (T+1) \frac{t^T}{T!}.
\]
The function $[1, \infty) \ni s \mapsto s^{-1} \log s$ is decreasing, hence for $t \geq (1+\delta) T$, we have
\[
T \log \frac{t}{T} \leq \frac{\log (1+\delta)}{1+\delta} t.
\]
Now, by Stirling's formula
\begin{align*}
T \frac{t^T}{T!} e^{-t} &\leq c_5 \sqrt{T} \exp\big\{ T \log t - T \log T + T - t\big\} \\
&\leq c_5 \sqrt{\frac{t}{1+\delta}} \exp\Big\{t \frac{\log(1+\delta)-\delta}{1+\delta}\Big\},
\end{align*}
and because
\[
\frac{\log(1+\delta)-\delta}{1+\delta} < 0,
\]
we obtain \eqref{eq:50} and the lemma follows.
\end{proof}
\subsection{The function $\rho_{\mathbf{Y}}$}
\label{sec:2.4}
Let $(\mu_t : t > 0)$ be the family of probability measures defined by \eqref{def:mu_t} for a certain $c \in (0, 1)$. In view of Theorem \ref{thm:weak_conv}, $\mu_t$ converges weakly to the probability measure $\mu$ as $t$ tends to infinity. For a given strictly $\alpha$-stable process $\mathbf{Y}$ in $\RR^d$, $\alpha \in (0, 2]$, with a transition density $p_0$, we define a function
\begin{equation}
\label{representationrho}
\rho_{\mathbf{Y}} (y) = \int_0^{\infty} p_0(u;0,y) \: \mu({\rm d} u), \qquad y \in \RR^d.
\end{equation}
In this section we investigate the properties of $\rho_{\mathbf{Y}}$.
\begin{proposition}
\label{prop:6}
Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Then $\rho_{\mathbf{Y}} \in \calC_0^\infty(\RR^d)$. Moreover, for all $\gamma \in \RR$,
\begin{equation}
\label{eq:57}
\int_{\RR^d} |y|^{\gamma} \rho_{\mathbf{Y}}(y) {\: \rm d}y = \frac{\Gamma(\gamma/\alpha+1)}{\Gamma_m(\gamma/\alpha+1)} (1-m)^{-\gamma/\alpha}\, \mathbb{E}|Y_1|^\gamma.
\end{equation}
\end{proposition}
\begin{proof}
The regularity of $\rho_{\mathbf{Y}}$ follows from Lemma~\ref{lem:A3}\eqref{en:3:2} and the finiteness of all moments of $\mu$, see Theorem~\ref{thm:weak_conv}.
Next, by the scaling Lemma \ref{lem:A3}\eqref{en:3:1} and Tonelli's theorem, we get
\begin{align*}
\int_{\RR^d} |y|^{\gamma} \rho_{\mathbf{Y}}(y) {\: \rm d}y &= \int_0^\infty u^{-d/\alpha} \int_{\RR^d}|y|^\gamma p_0(1;0, u^{-1/\alpha} y){\: \rm d} y \: \mu({\rm d}u)\\
&=\int_0^\infty u^{\gamma/\alpha}\mu({\rm d}u) \int_{\RR^d}|z|^\gamma p_0(1;0, z){\: \rm d} z,
\end{align*}
which in view of \eqref{eq:55} completes the proof.
\end{proof}
\begin{remark}
Let us recall that a function $f: \RR^d \to \RR$ is homogeneous of degree $\gamma \in \RR$ if $f(sx)=s^\gamma f(x)$ for all $x\in\RR^d$ and $s > 0$. Reasoning similar to the proof of \eqref{eq:57} leads to the following statement: Under the assumptions of Proposition \ref{prop:6}, let $f$ be a homogeneous function of degree $\gamma \in \RR$, such that
\[
\sup_{|\theta|=1} |f(\theta)|<\infty.
\]
If
\[
\gamma > \begin{cases} -\frac{\alpha}{d} & \text{if } \alpha \in (0, 2), \\ -d & \text{if } \alpha = 2, \end{cases}
\]
then
\[
\int_{\RR^d} f(y) \rho_{\mathbf{Y}}(y){\: \rm d}y = \frac{\Gamma(\gamma/\alpha+1)}{\Gamma_m(\gamma/\alpha+1)} (1-m)^{-\gamma/\alpha} \: \mathbb{E} f(Y_1)
\]
and the integral converges absolutely.
\end{remark}
There are examples of strictly $\alpha$-stable processes in $\RR$ for which explicit expressions for all moments are known, see e.g. \cite[Proposition 1.4]{Hardin}.
\begin{example}[{see \cite{Shanbhag} and \cite[Example 25.10]{MR1739520}}]
If $d=1$ and $\mathbf{Y}$ is an $\alpha$-stable subordinator with $\alpha \in (0,1)$, then for $\gamma<\alpha$,
\[
\mathbb{E}|Y_1|^\gamma= \frac{\Gamma\left(1-\frac{\gamma}{\alpha}\right)}{\Gamma\left(1-\gamma\right)}.
\]
\end{example}
\begin{example}[see \cite{Shanbhag}]
If $d = 1$ and $\mathbf{Y}$ is a symmetric $\alpha$-stable process with $\alpha \in (0,2)$, then for $-1<\gamma<\alpha$,
\[
\mathbb{E}|Y_1|^\gamma =\frac{2^\gamma \Gamma (\tfrac{1+\gamma}{2})\Gamma (1-\tfrac{\gamma}{\alpha})}{\sqrt{\pi}\,\Gamma(1-\tfrac{\gamma}{2})}.
\]
\end{example}
\begin{example}
If $d = 1$ and $\mathbf{Y}$ is a Brownian motion, then for $\gamma>-1$ one can see that
\[
\mathbb{E}|Y_1|^\gamma=\frac{2^{\gamma}\Gamma \big(\tfrac{1+\gamma}{2}\big)}{\sqrt{\pi}}.
\]
In particular, by \eqref{eq:57} for $k \in \NN$,
\[
\int_{\RR} y^{2k-1} \rho_{\mathbf{Y}}(y){\: \rm d}y=0, \quad\text{ and }\quad \int_{\RR} y^{2k} \rho_{\mathbf{Y}}(y){\: \rm d}y= \frac{(2k)!}{(m;m)_k}.
\]
\end{example}
\begin{lemma}
\label{lem:densities}
Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Then, for all $y \in \RR^d$,
\begin{equation}
\label{eq:66}
\rho_{\mathbf{Y}}(y) =\frac{1}{(m; m)_\infty} \sum_{k=0}^\infty (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m; m)_k} \, U^{(m^{-k})}(y)
\end{equation}
where $U^{(\beta)}(y)=\int_0^\infty e^{-\beta u} \,p_0(u;0,y) {\: \rm d}u$, $\beta > 0$. Moreover, if $\mathbf{Y}$ is a Brownian motion in $\RR^d$, then
\begin{equation}
\label{eq:67}
\lim_{\norm{y} \to +\infty} \frac{\rho_{\mathbf{Y}}(y)}{|y|^{-\frac{d-1}{2}}e^{-|y|}} = \frac12 \frac{1}{(m; m)_\infty} (2\pi)^{-\frac{d-1}{2}}.
\end{equation}
\end{lemma}
\begin{proof}
The formula \eqref{eq:66} follows immediately from \eqref{representationrho} and \eqref{eq:81}. Indeed, we get
\[
\rho_{\mathbf{Y}}(y) = \int_0^\infty p_0(u; 0, y) \mu({\rm d} u) = \frac{1}{(m; m)_\infty} \sum_{k = 0}^\infty (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m; m)_k} \int_0^\infty e^{-m^{-k} u} p_0(u; 0, y) {\: \rm d} u.
\] To show \eqref{eq:67}, let us recall that \begin{equation} \label{eq:U_Bessel} U^{(\beta)}(y)=(2\pi)^{-d/2}\left(\frac{|y|}{\sqrt{\beta}}\right)^{1-d/2} K_{d/2-1}(\sqrt{\beta}|y|) \end{equation} where $K_{d/2-1}$ is the modified Bessel function of the second type. Hence, by \eqref{eq:66} we get \[ \frac{\rho_{\mathbf{Y}}(y)}{|y|^{-\frac{d-1}{2}}e^{-|y|}} = \frac{1}{(m; m)_\infty} \left( \frac{U^{(1)}(y)}{{|y|^{-\frac{d-1}{2}}e^{-|y|}}}\right) \sum_{k=0}^\infty (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m; m)_k} \frac{U^{(m^{-k})}(y)}{U^{(1)}(y)}. \] Because \[ \lim_{|y|\to +\infty} \left(\frac{U^{(1)}(y)}{{|y|^{-\frac{d-1}{2}}e^{-|y|}}}\right) = \frac12 (2\pi)^{-\frac{d-1}{2}}, \] and \[ 0\leq \frac{U^{(m^{-k})}(y)}{U^{(1)}(y)}\leq 1 \] it is enough to treat sums of the form \[ 1+ \sum_{k=1}^N (-1)^k \frac{m^{\frac{1}{2}k(k-1)}}{(m; m)_k} \frac{U^{(m^{-k})}(y)}{U^{(1)}(y)} \] for a fixed $N$. For each $k \in \{1, \ldots, N\}$, by \eqref{eq:U_Bessel} and the asymptotic behavior of $K_{d/2-1}$, we obtain \[ \lim_{|y|\to +\infty} \frac{U^{(m^{-k})}(y)}{U^{(1)}(y)}=0, \] which completes the proof. \end{proof} \begin{remark} Under the assumptions of Lemma \ref{lem:densities} for $d = 1$, we get the following formula \[ \rho_{\mathbf{Y}} (y)= \frac12 \frac{1}{(m; m)_\infty}\sum_{k=0}^\infty (-1)^k \frac{m^{\frac{k^2}{2}}}{(m; m)_k} e^{-m^{-\frac{k}2 }|y|}, \] and \[ \lim_{|y|\to +\infty} \rho_{\mathbf{Y}}(y) \,e^{|y|} = \frac12 \frac{1}{(m; m)_\infty}. \] \end{remark} \section{Stable processes with resetting} \label{sec:stationary} In this section, we describe the resetting procedure for a large class of L{\'e}vy processes. Let $\mathbf{Y}$ be a L{\'e}vy process in $\RR^d$ and let $c \in (0, 1)$. We consider another L\'{e}vy process $\big((Y_t, N_t) : t \geq 0\big)$ with a natural filtration $\big(\tilde{\calF}_t : t \geq 0\big)$ where $\mathbf{N}$ is a Poisson process independent of $\mathbf{Y}$ with intensity $1$. Then the Poisson arrival moments $(T_k : k \geq 0)$ are Markov times relative to $(\tilde{\calF}_t : t\geq 0)$ where $T_0=0$. Let $\mathbf{X}$ be a process with resetting defined in \eqref{eq:18}. Then for all $t > 0$ and every Borel set $A \subset \RR^d$, we have \begin{align*} & \PP^{(x,0)} (X_t \in A) \\ &\qquad= \sum^\infty_{n=0} \PP^{(x, 0)}( X_t\in A, N_t=n) \\ &\qquad= \sum^\infty_{n=0} \PP^{(x,0)} \bigg(\sum^n_{k=0} c^{n-k} \Big(Y_{t \wedge T_{k+1}}-Y_{T_k}\Big)+c^n x \in A, T_n \leq t <T_{n+1}\bigg)\\ &\qquad= \PP^{(x,0)}(Y_t \in A) \PP^{(x, 0)}(N_t=0) + \sum^\infty_{n=1} \PP^{(x,0)}\bigg(\sum^n_{k=1}c^{n-k} \Big(Y_{t\wedge T_{k+1}}-Y_{T_k}\Big)+c^nY_{T_1}\in A,T_n\leq t <T_{n+1}\bigg). \end{align*} Hence, if for each $t > 0$, $Y_t$ has an absolutely continuous distribution, then for each $t > 0$ also $X_t$ has an absolutely continuous distribution. Let us denote by $p_0$ and $p$ the transition densities of $\mathbf{Y}$ and $\mathbf{X}$, respectively. Using the strong Markov property we get $\big( (\tilde{Y}_t, \tilde{N}_t) : t \geq 0\big) = \big((Y_{t+T_1}-Y_{T_1}, N_{t+T_1}-1) : t \geq 0\big)$ is a L\'evy process with the same distribution as $\big((Y_t,N_t) : t \geq 0\big)$ and independent of $Y_{T_1}$. 
Hence, \begin{equation} \label{eq:pomoc} \begin{aligned} \PP^{(x,0)}(X_t\in A) &= \PP^{(x,0)}(Y_t\in A)e^{-t}\\ &\phantom{=} + \sum^\infty_{n=1} \PP^{(x,0)}\bigg(\sum^{n-1}_{k=0}c^{n-(k+1)} \Big(\tilde{Y}_{(t-T_1) \wedge \tilde{T}_{k+1}}-\tilde{Y}_{\tilde{T}_k}\Big) +c^nY_{T_1}\in A,\tilde{T}_{n-1}\leq t-T_1 <\tilde{T}_{n}\bigg)\\ &=\PP^{(x,0)}(Y_t\in A)e^{-t}+\PP^{(x,0)}\Big(T_1\leq t,\PP^{(cY_{T_1},0)}(\tilde{X}_{t-T_1}\in A)\Big), \end{aligned} \end{equation} that is the transition densities satisfy \begin{equation} \label{eq:6} p(t; x, y) = e^{-t} p_0(t; x, y) + \int_0^t \int_{\RR^d} e^{-s} p_0(s; x, z) p(t-s; cz, y) {\: \rm d} z {\: \rm d} s, \qquad x, y \in \RR^d, t > 0. \end{equation} Similarly, the natural filtration of $\textbf{X}$ is a subfiltration of $(\tilde{\calF}_t : t \geq 0)$, thus $\PP^{(x,0)}(X_{t+u}\in A | \tilde{\calF_t})$ is a function of $X_t$, that is $\mathbf{X}$ is a Markov process. \begin{proposition} \label{prop:5} Suppose that $\mathbf{Y}$ is a L{\'e}vy process in $\RR^d$ with a transition density $p_0$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then the process $\mathbf{X}$ has the transition density $p$ satisfying \begin{equation} \label{eq:2} p(t; x, y) = e^{-t} \sum_{j = 0}^\infty p_j(t; x, y), \quad \text{for all } x,y \in \RR^d, t > 0 \end{equation} where $(p_n : n \in \NN_0)$ is a sequence of functions that satisfy the recursion \begin{equation} \label{eq:15} p_{n+1}(t; x, y) = \int_0^t \int_{\RR^d} p_0(s; x, z) p_n(t-s; cz, y) {\: \rm d}z {\: \rm d} s, \quad\text{for all }x, y \in \RR^d, t >0. \end{equation} \end{proposition} \begin{proof} First, let us observe that for all $j \in \NN_0$, $x \in \RR^d$ and $t > 0$, \begin{equation} \label{eq:20} \int_{\RR^d} p_j(t; x, y) {\: \rm d} y = \frac{t^j}{j!}. \end{equation} To see this, we reason by induction over $j \in \NN_0$. For $j = 0$ the formula trivially holds true. Next, \begin{align*} \int_{\RR^d} p_{j+1}(t; x, y) {\: \rm d} y &= \int_{\RR^d} \int_0^t \int_{\RR^d} p_0(s; x, z) p_j(t-s; cz, y) {\: \rm d} z {\: \rm d} s {\: \rm d} y \\ &= \frac{1}{j!} \int_0^t \int_{\RR^d} p_0(s; x, z) (t-s)^j {\: \rm d}z {\: \rm d} s \\ &= \frac{1}{j!} \int_0^t (t-s)^j {\: \rm d} s = \frac{t^{j+1}}{(j+1)!}, \end{align*} as claimed. Since all the terms of the series \eqref{eq:2} are positive, by Tonelli's theorem \eqref{eq:20} we get \begin{align*} \int_{\RR^d} e^{-t} \sum_{j = 0}^\infty p_j(t; x, y) {\: \rm d} y &= e^{-t} \sum_{j = 0}^\infty \int_{\RR^d} p_j(t; x, y) {\: \rm d} y \\ &= e^{-t} \sum_{j = 0}^\infty \frac{t^j}{j!} = 1. \end{align*} Hence, \eqref{eq:2} defines actual solution of \eqref{eq:6}. Next, we observe that it is also the unique probabilistic solution of \eqref{eq:6}. Indeed, if $\tilde{p}(t; x , y)$ would be another solution, then \[ e^{-t} p_0(t; x, y) \leq \tilde{p}(t; x, y), \quad \text{for all } x, y \in \RR^d, t > 0. \] Thus reasoning by induction one can prove that for all $N \in \NN$, \[ e^{-t} \sum_{j = 0}^N p_j(t; x, y) \leq \tilde{p}(t; x, y), \quad \text{for all } x, y \in \RR^d, t > 0. \] Consequently, \[ p(t; x, y) \leq \tilde{p}(t; x, y), \quad \text{for all } x, y \in \RR^d, t > 0. 
\] Since both $p(t; x, \cdot)$ and $\tilde{p}(t; x, \cdot)$ integrate to $1$, for each $\epsilon > 0$ by Chebyshev's inequality we obtain \[ \big|\big\{y \in \RR^d : \tilde{p}(t; x, y) \geq p(t; x, y) + \epsilon \big\}\big| \leq \epsilon^{-1} \int_{\RR^d} (\tilde{p}(t; x, y) - p(t; x, y)) {\: \rm d} y = 0, \] that is, $p(t; x, y) = \tilde{p}(t; x, y)$ for almost every $y \in \RR^d$. This completes the proof. \end{proof} Let us observe that certain properties of $p_0$ translate to $p_n$. \begin{proposition} \label{prop:4} Suppose that $\mathbf{Y}$ is a L\'evy process on $\RR^d$ with a transition density $p_0$. \begin{enumerate}[label=\rm (\roman*), start=1, ref=\roman*] \item \label{en:2:1} For all $n \in \NN$, \[ p_n(t; x, y) = p_n(t; 0, y - c^n x). \] \item \label{en:2:2} If there is $\Upsilon \in \GL(\RR, d)$ such that \[ p_0(t; x, y) = p_0(t; \Upsilon x, \Upsilon y), \quad \text{for all } x, y \in \RR^d, t > 0, \] then for each $j \in \NN$, \[ p_j(t; x, y) = p_j(t; \Upsilon x, \Upsilon y), \quad \text{for all } x, y \in \RR^d, t > 0. \] \end{enumerate} \end{proposition} \begin{proof} The proof of \eqref{en:2:1} is by induction with respect to $n \in \NN$. For $n = 0$ the formula trivially holds true. Using the inductive hypothesis and the change of variables we can write \begin{align*} p_{n+1}(t; x, y) &= \int_0^t \int_{\RR^d} p_0(s; x, z) p_n(t-s; cz, y) {\: \rm d}z {\: \rm d} s\\ &= \int_0^t \int_{\RR^d} p_0(s; 0, z-x) p_n(t-s; cz, y) {\: \rm d} z {\: \rm d} s \\ &= \int_0^t \int_{\RR^d} p_0(s; 0, z) p_n(t-s; c(z +x), y) {\: \rm d} z {\: \rm d} s \\ &= \int_0^t \int_{\RR^d} p_0(s; 0, z) p_n(t-s; cz , y - c^{n+1} x) {\: \rm d} z {\: \rm d} s \\ &= p_{n+1}(t; 0, y - c^{n+1} x), \end{align*} completing the proof of the first part of the proposition. To show \eqref{en:2:2} we again use induction to get \begin{align*} p_{j+1}(t; \Upsilon x, \Upsilon y) &= \int_0^t \int_{\RR^d} p_0(s; \Upsilon x, z) p_j(t-s; cz, \Upsilon y) {\: \rm d}z {\: \rm d} s \\ &= \int_0^t \int_{\RR^d} p_0(s; x, \Upsilon^{-1} z) p_j(t-s; c\Upsilon^{-1} z, y) (\det \Upsilon)^{j} {\: \rm d}z {\: \rm d} s \\ &= (\det \Upsilon)^{j+1} p_{j+1}(t; x, y). \end{align*} Note that $p(t; x, y)$ integrates to $1$, hence $\det \Upsilon =1$ and the proposition follows. \end{proof} \begin{remark} Due to Proposition \ref{prop:4}\eqref{en:2:2} we can easily see that $p_n$ is isotropic if $p_0$ is isotropic. Moreover, if $p_0$ is the transition density of an isotropic unimodal L\'evy process, then $\RR^d \ni y \mapsto p_0(t; 0, y)$ is radial and non-increasing, thus $\RR^d \ni y \mapsto p_j(t; 0, y)$ is radial and non-increasing too. Indeed, by Proposition \ref{prop:4}\eqref{en:2:1}, \begin{align*} p_{j+1}(t; 0, y) &= \int_0^t \int_{\RR^d} p_0(s; 0, z) p_j(t-s; cz, y) {\: \rm d} z {\: \rm d} s \\ & = \int_0^t \int_{\RR^d} p_0(s; 0, z) p_j\big(t-s; 0, c^{j+1} (c^{-j-1}y - z) \big) {\: \rm d} z {\: \rm d} s. \end{align*} Consequently, the process $\mathbf{X}$ is isotropic and unimodal if $\mathbf{Y}$ is such. \end{remark} \subsection{Representation of the transition density} \label{sec:repr} We are now ready to prove the key representation of the density. From this point we assume that the original process is $\alpha$-stable for $\alpha \in (0, 2]$. \begin{theorem} \label{thm:rep-gen-st} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density $p_0$. Suppose that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$.
Then the functions $(p_n : n \in \NN)$ from \eqref{eq:15} satisfy \begin{align} \label{eq:7} p_n(t; 0, y) = t^n \int_0^1 p_0(t u; 0, y) P_n(u) {\: \rm d} u, \quad \text{for all } n \in \NN, y \in \RR^d, t > 0. \end{align} Moreover, the transition density of $\mathbf{X}$ satisfies, for all $x, y \in \RR^d$ and $t > 0$, \begin{align} \label{eq:rep-p-x} p(t; x, y)=e^{-t}p_0(t; 0, y-x)+e^{-t}\sum_{j=1}^\infty t^j \int_0^1 p_0(tu;0,y-c^jx) P_j(u) {\: \rm d} u. \end{align} \end{theorem} \begin{proof} First, we link $W_n$ with the functions $p_n$ defined by \eqref{eq:15}. Namely, we claim that \begin{equation} \label{eq:32} p_n(t; 0, y) = \int_{m^n t}^t p_0(u; 0, y) W_n(t, u) {\: \rm d} u, \quad \text{for all } y \in \RR^d, t > 0. \end{equation} The proof is by induction with respect to $n \in \NN$. For $n = 1$, we have \begin{align*} p_1(t; 0, y) &= \int_0^t \int_{\RR^d} p_0(s; 0, z) p_0(t - s; m^{\frac{1}{\alpha}} z, y) {\: \rm d} z {\: \rm d} s \\ &= \int_0^t \int_{\RR^d} p_0(s; 0, z m^{-\frac{1}{\alpha}}) p_0(t - s; z, y) m^{-\frac{d}{\alpha}} {\: \rm d} z {\: \rm d} s. \end{align*} Since $p_0$ is the density of an $\alpha$-stable process, it satisfies the scaling property (see \cite[Theorems 14.2 and 14.7]{MR1739520}), hence \begin{align*} p_1(t; 0, y) &= \int_0^t \int_{\RR^d} p_0(m s; 0, z) p_0(t - s; z, y) {\: \rm d} z {\: \rm d} s \\ &= \int_0^t p_0(t + ms - s; 0, y) {\: \rm d} s \\ &= \int^t_{m t} p_0(s; 0, y) \frac{1}{1-m} {\: \rm d} s. \end{align*} Next, by \eqref{eq:15}, Proposition \ref{prop:4}\eqref{en:2:1} and the inductive hypothesis, we write \begin{align*} p_{n+1}(t; 0, y) &= \int_0^t \int_{\RR^d} p_0(s; 0, z) p_n(t-s; 0, y- m^{\frac{n+1}{\alpha}} z) {\: \rm d} z {\: \rm d} s \\ &= \int_0^t \int_{\RR^d} p_0(s; 0, z) \int_{m^n (t-s)}^{t-s} p_0(u; 0, y - m^{\frac{n+1}{\alpha}} z) W_n(t-s, u) {\: \rm d} u {\: \rm d} z {\: \rm d} s \\ &= \int_0^t \int_{m^n (t-s)}^{t-s} \int_{\RR^d} p_0(s; 0, z) p_0(u; 0, y-m^{\frac{n+1}{\alpha}} z) {\: \rm d} z W_n(t-s, u) {\: \rm d} u {\: \rm d} s \\ &= \int_0^t \int_{m^n (t-s)}^{t-s} \int_{\RR^d} m^{-d\frac{n+1}{\alpha}} p_0(s; 0, m^{-\frac{n+1}{\alpha}} z) p_0(u; 0, y - z) {\: \rm d} z W_n(t-s, u) {\: \rm d} u {\: \rm d} s \\ &= \int_0^t \int_{m^n (t-s)}^{t-s} \int_{\RR^d} p_0(m^{n+1} s; 0, z) p_0(u; z, y) {\: \rm d} z W_n(t-s, u) {\: \rm d} u {\: \rm d} s \end{align*} where in the last equality we have used the scaling property of the $\alpha$-stable density. Hence, \begin{align*} p_{n+1}(t; 0, y) &= \int_0^t \int_{m^n (t-s)}^{t-s} p_0(u + m^{n+1} s; 0, y) W_n(t - s, u) {\: \rm d} u {\: \rm d} s \\ &= \int_0^t \int^{t-s + m^{n+1}s}_{m^n (t-s)+m^{n+1}s} p_0(u; 0, y) W_n(t-s, u - m^{n+1}s) {\: \rm d} u {\: \rm d} s. \end{align*} Now, we observe that \begin{align*} & \Big\{ (s, u) \in \RR^2 : 0 \leq s \leq t \text{ and } m^n(t-s) + m^{n+1} s \leq u \leq t-s+m^{n+1}s \Big\} \\ &\qquad= \bigg\{(s, u) \in \RR^2 : m^{n+1} t \leq u \leq t, \text{ and } \frac{m^n t - u}{m^n - m^{n+1}} \vee 0 \leq s \leq \frac{t - u}{1- m^{n+1}} \bigg\}. \end{align*} Thus changing the order of integration we obtain \begin{align*} p_{n+1}(t; 0, y) &= \int_{m^{n+1} t}^t \int_{\frac{m^n t- u}{m^n - m^{n+1}} \vee 0}^{\frac{t-u}{1-m^{n+1}}} p_0(u; 0, y) W_n(t-s, u - m^{n+1}s) {\: \rm d} s {\: \rm d} u \end{align*} as claimed. Having proved \eqref{eq:23}, by Proposition \ref{prop:3} and \eqref{eq:21}, we can write \begin{align*} p_n(t; 0, y) &= \int_{m^n t}^t p_0(u; 0, y) W_n(t, u) {\: \rm d} u \\ &= \int_{m^n t}^t p_0(u; 0, y) t^{n-1} P_n(u t^{-1}) {\: \rm d} u = t^n \int_{m^n}^1 p_0(t u; 0, y) P_n(u) {\: \rm d} u. \end{align*} Now, \eqref{eq:rep-p-x} follows by Proposition \ref{prop:5}, Proposition~\ref{prop:4}\eqref{en:2:1} and \eqref{eq:7}.
\end{proof} \begin{corollary} \label{cor:rep-1} Under the assumptions of Theorem \ref{thm:rep-gen-st}, the transition density of the process $\mathbf{X}$ satisfies \begin{align} \label{eq:rep-p-0.1} p(t;0,y)= \int_0^\infty p_0(u;0,y) \: \mu_t({\rm d} u), \quad\text{for all } y \in \RR^d, t > 0 \end{align} where $\mu_t$ is the measure defined by the formula \eqref{def:mu_t}. \end{corollary} \begin{corollary} \label{cor:rep-2} Under the assumptions of Theorem \ref{thm:rep-gen-st}, the transition density of the process $\mathbf{X}$ satisfies \begin{align} \label{eq:rep-p-0} p(t;0,y)= e^{-t}p_0(t;0,y)+e^{-t} \int_0^1 p_0(tu;0,y) \Phi(t,u) {\: \rm d} u, \quad\text{for all } y \in \RR^d, t > 0 \end{align} where \begin{align} \label{def:Phi} \Phi(t, u)= \sum_{j = 1}^\infty t^j P_j(u),\qquad u \in [0, 1],\, t > 0. \end{align} \end{corollary} \subsection{Ergodicity of $\mathbf{X}$} In this section we show that the process obtained by partial resetting of $\alpha$-stable process, $\alpha \in (0, 2]$, is ergodic, see Theorem \ref{thm:lim_p_t_infty}. In fact, we are going to prove that $p(t; x, y)$ converges uniformly in a certain space-time region in $(t, y)$ as $t$ tends to infinity. \begin{theorem} \label{thm:lim_p_t_infty} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Then for each $\delta > 0$, \begin{align} \label{eq:lim_p_t_infty-unif} \lim_{t\to+\infty} \sup_{y \in \RR^d}\sup_{x \in \mathscr{X}(\delta; t)} | p(t;x,y) - \rho_{\mathbf{Y}}(y)|=0 \end{align} where $\mathscr{X}(\delta; t)$ is defined in \eqref{eq:52} and $\rho_{\mathbf{Y}}$ is defined in \eqref{representationrho}. In particular, for all $x, y \in \RR^d$, \begin{equation} \label{eq:64} \lim_{t\to+\infty} p(t;x,y)=\rho_{\mathbf{Y}}(y). \end{equation} \end{theorem} \begin{proof} The convergence \eqref{eq:lim_p_t_infty-unif} is a consequence of Lemma \ref{lem:unif_conv-2} applied to the family of functions \[ \mathcal{F}=\big\{p_0(u; 0, y-x) : y\in\RR^d \big\}. \qedhere \] \end{proof} By Theorem \ref{thm:lim_p_t_infty} together with Scheff\'e's lemma, see e.g. \cite{MR2728440}, we easily get the following corollary. \begin{corollary} \label{cor:1} Under the assumptions of Theorem \ref{thm:lim_p_t_infty}, for all $x \in \RR^d$, \begin{equation} \label{eq:65} \lim_{t\to+\infty} \int_{\RR^d}|p(t;x,y)-\rho_{\mathbf{Y}}(y)| {\: \rm d}y=0. \end{equation} In particular, the measure $\rho_{\mathbf{Y}}(y)$ is an invariant measure of the process $\mathbf{X}$. \end{corollary} \subsection{The transition density of $\mathbf{X}$} \label{sec:3.3} We study the regularity of the transition density of the process obtained by a partial resetting from $\alpha$-stable process. \begin{lemma} \label{lem:p_reg} Let $\mathbf{Y}$ be a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density $p_0$. Suppose that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Then $p$ the transition density of $\mathbf{X}$ belongs to $\calC^{\infty}\big((0,\infty)\times\RR^d\times\RR^d\big)$. 
Moreover, for all $\ell \in \NN_0$, $\mathbf{a}, \mathbf{b} \in \NN_0^d$, $x, y \in \RR^d$ and $t>0$, both functions \[ z \mapsto \partial_t^\ell \partial_{x}^{\mathbf{a}}\partial_{y}^\mathbf{b} p(t; z,y), \quad \text{and} \quad z \mapsto \partial_t^\ell \partial_{x}^\mathbf{a} \partial_{y}^\mathbf{b} \,p(t;x,z) \] belong to $\calC_0^\infty(\RR^d)$. \end{lemma} \begin{proof} First, using \eqref{eq:rep-p-x} and induction with respect to the order of differentiation, one can prove that $\partial_t^\ell \partial_{x}^{\mathbf{a}}\partial_{y}^{\mathbf{b}} p(t;x,y)$ equals $\partial_t^\ell \partial_{x}^{\mathbf{a}}\partial_{y}^{\mathbf{b}} (e^{-t} p_0(t;0,y-x))$ plus a linear combination of expressions of the form \begin{align} \label{expr:aux} e^{-t}\sum_{j=1}^\infty c_j t^{j-\ell_1} \int_0^1 u^{\ell_2} f(tu,y-c^jx) P_j(u) \: {\rm d}u \end{align} where $\ell_1 + \ell_2 = \ell$, \[ c_j= \begin{cases} \frac{j!}{(j-\ell_1)!}(-c^j)^{|\mathbf{a}|} & \text{if } j \geq \ell_1, \\ 0 & \text{if } j<\ell_1, \end{cases} \] and \[ f(u,y)= \big(\partial_u^{\ell_2} \,\partial_{y}^{\mathbf{a}+\mathbf{b}}\, p_0\big)(u;0,y). \] The inductive step will be verified once we justify the differentiation under both the series and the integral sign. By Lemma~\ref{lem:A3}\eqref{en:3:2} we have \begin{align} \label{bound:aux} |\partial_u f(u,y)| +|\nabla_y f(u,y)|+|f(u,y)| \leq C\big(u+u^{-1}\big)^\gamma, \end{align} for certain $C,\gamma > 0$. Recall that each $P_j$ has support bounded away from zero, see Proposition~\ref{prop:1}. Thus, due to \eqref{bound:aux}, any partial sum of the series in \eqref{expr:aux} may be differentiated as many times as one needs. Let us consider $j \geq \ell_1+\lceil \gamma \rceil+1$. By \eqref{bound:aux} each difference quotient may be bounded by terms of the form \begin{align*} \sum_{j=\ell_1+\lceil \gamma \rceil+1}^\infty \frac{j!}{(j-k)!} (2t)^{j-k} \int_0^1 u^{\ell_2-\lceil\gamma\rceil}\left(t^{\lceil \gamma \rceil}+ t^{-\lceil\gamma\rceil}\right) P_j(u) \: {\rm d}u \end{align*} where $k = \ell_1$ or $k = \ell_1+1$, which are finite by Theorem~\ref{thm:all-moments}. Hence, to conclude, it is enough to invoke the dominated convergence theorem. Lastly, the convergence to zero as $\norm{x} + \norm{y}$ tends to infinity follows by reasoning similar to the proof of Lemma~\ref{lem:A3}\eqref{en:3:2}, together with \eqref{expr:aux} and \eqref{bound:aux}. \end{proof} \begin{remark} The Markov property of $\mathbf{X}$ implies that \[ P_t f(x) = \int_{\RR^d} p(t; x, y) f(y) {\: \rm d} y, \qquad x \in \RR^d \] forms a semigroup on nonnegative bounded functions. In view of the regularity of $p$, the Chapman--Kolmogorov equation holds true, that is \begin{equation} \label{eq:90} p(t + s; x, y) = \int_{\RR^d} p(t; x, z) p(s; z, y) {\: \rm d} z, \qquad\text{ for all } s, t > 0 \text{ and } x, y \in \RR^d. \end{equation} \end{remark} \subsection{Non-equilibrium stationary state of $\mathbf{X}$} \label{sec:3.4} In this section our aim is to show that $\mathbf{X}$ has a non-equilibrium stationary state. To do so, we prove that the infinitesimal generator of $\mathbf{X}$ on $L^2(\RR^d, \rho_{\mathbf{Y}}(y) {\rm d} y)$ is \emph{not} self-adjoint, see Theorem \ref{thm:NESS}. We also show that $\rho_{\mathbf{Y}}$ is harmonic with respect to the $L^2$-adjoint of the infinitesimal generator of $\mathbf{X}$, see Theorem \ref{thm:H+F-P}.
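Before introducing the relevant operators, let us record a brief numerical illustration of the role of $\rho_{\mathbf{Y}}$ as the limiting density, cf.\ Theorem \ref{thm:lim_p_t_infty} and Corollary \ref{cor:1}. The sketch below (an illustration only, not part of any proof) samples $X_t$ at a large time $t$ for a one-dimensional Brownian driver and compares the empirical histogram with the series formula for $\rho_{\mathbf{Y}}$ from the remark following Lemma \ref{lem:densities}; we assume here that this formula applies to the Gaussian kernel $(4\pi t)^{-d/2}e^{-|y-x|^2/(4t)}$ with $m = c^2$, and the sample size, time step and the truncation of the $q$-Pochhammer products are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def qpoch(m, n):
    # finite q-Pochhammer (m; m)_n = prod_{k=1}^{n} (1 - m**k)
    return float(np.prod(1.0 - m ** np.arange(1, n + 1))) if n > 0 else 1.0

def rho(y, c, terms=30):
    # series from the d = 1 remark, with m = c**2 (alpha = 2)
    m = c ** 2
    s = sum((-1) ** k * m ** (k ** 2 / 2.0) / qpoch(m, k)
            * np.exp(-m ** (-k / 2.0) * np.abs(y)) for k in range(terms))
    return 0.5 * s / qpoch(m, 200)      # (m; m)_infty approximated by (m; m)_200

def sample_X(c, n_paths=5000, t_max=30.0, dt=1e-3):
    # Euler scheme: Gaussian increments of Y plus resetting at rate 1
    x = np.zeros(n_paths)
    for _ in range(int(t_max / dt)):
        x += np.sqrt(2.0 * dt) * rng.standard_normal(n_paths)
        x[rng.random(n_paths) < dt] *= c    # probability ~ dt of a Poisson arrival
    return x

c = 0.5
hist, edges = np.histogram(sample_X(c), bins=40, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(hist - rho(mid, c))))   # expected to be small
\end{verbatim}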
For $f \in \calC_0^2(\RR^d)$ we set \[ \begin{aligned} \mathscr{L} f(x)= \sum_{j,k=1}^d a_{jk} \partial_{x_j} \partial_{x_k} f(x) &+ \sprod{\nabla f(x)}{\gamma} \\ &+\int_{\RR^d} \big(f(x+z)-f(x)-\ind{\{|z|<1\}} \sprod{\nabla f(x)}{z} \big) \: \nu({\rm d} z) \end{aligned} \] where $(A, \gamma, \nu)$ with $A=[a_{jk}]_{j,k=1}^d$ is the L{\'e}vy triplet of $\mathbf{Y}$, see Appendix \ref{appendix:A}. Furthermore, we set \[ \begin{aligned} \mathscr{L}^*f(x) = \sum_{j,k=1}^d a_{jk} \partial_{x_j} \partial_{x_k} f (x) &+ \sprod{\nabla f(x)}{\gamma^*} \\ &+\int_{\RR^d} \big(f(x+z)-f(x)-\ind{\{|z|<1\}} \sprod{\nabla f(x)}{z}\big) \: \nu^*({\rm d} z) \end{aligned} \] where $\gamma^*=-\gamma$ and $\nu^*(B)=\nu(-B)$, for a Borel set $B \subset \RR^d$. Accordingly, we define \[ \mathscr{A} f(x) = \mathscr{L} f(x) + f(cx) - f(x), \qquad\text{and}\qquad \mathscr{A}^* f(x) = \mathscr{L}^* f(x) + \frac{1}{c} f\Big(\frac{x}{c}\Big) - f(x). \] Given $r \in [1, \infty]$, we let \[ \mathcal{B}_r = \begin{cases} L^r(\RR^d) & \text{if } r \in [1, \infty), \\ \calC_0(\RR^d) & \text{if } r = \infty. \end{cases} \] In view of Proposition \ref{prop:9}, $\mathbf{Y}$ generates a strongly continuous contractive semigroup on $\mathcal{B}_r$. Let $\mathscr{L}_r$ be the infinitesimal generator of this semigroup, and let $D(\mathscr{L}_r)$ be the domain of $\mathscr{L}_r$. \begin{theorem} \label{thm:1} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Let $r \in [1, \infty]$. Then the process $\mathbf{X}$ generates a strongly continuous semigroup of bounded operators on $\mathcal{B}_r$. Its infinitesimal generator is \begin{equation} \label{eq:87} \mathscr{A}_r f(x) = \mathscr{L}_r f(x) + f(cx) - f(x), \qquad f \in D(\mathscr{A}_r). \end{equation} Moreover, $D(\mathscr{A}_r) = D(\mathscr{L}_r)$. \end{theorem} \begin{proof} Fix $r \in [1, \infty]$. For $f \in \mathcal{B}_r$ and $t > 0$ we set \[ P_t f(x) = \int_{\RR^d} p(t; x, y) f(y) {\: \rm d} y, \qquad\text{ for } x \in \RR^d. \] By Lemma \ref{lem:p_reg} and H\"older's inequality, the integral is well-defined. The proof that $(P_t : t > 0)$ is a strongly continuous semigroup of bounded operators on $\mathcal{B}_r$ follows the same lines as the identification of its infinitesimal generator \eqref{eq:87}, which we prove as follows. First, using \eqref{eq:rep-p-x}, we write \begin{align*} \frac{P_t f(x) - f(x)}{t} &= e^{-t} \frac{1}{t} \int_{\RR^d} p_0(t; x, y) \big(f(y) - f(x)\big) {\: \rm d} y \\ &\phantom{=} + e^{-t} \sum_{j = 1}^\infty t^{j-1} \int_{\RR^d} \int_0^1 p_0(tu; 0, y- c^j x) \big(f(y) -f(x)\big) P_j(u) {\: \rm d} u {\: \rm d} y. \end{align*} Let us observe that in the sum the terms for $j \geq 2$ tend to zero in $\mathcal{B}_r$. Indeed, we have \begin{align*} & \bigg\| \int_{\RR^d} \int_0^1 p_0(tu; 0, y - c^j x) \big(f(y) - f(x) \big) P_j(u) {\: \rm d} u {\: \rm d} y \bigg\|_{\mathcal{B}_r(x)} \\ &\qquad= \bigg\| \int_{\RR^d} \int_0^1 p_0(tu; 0, y) \big(f(y+c^j x) - f(x) \big) P_j(u) {\: \rm d} u {\: \rm d} y \bigg\|_{\mathcal{B}_r(x)} \\ &\qquad\leq \int_{\RR^d} \int_0^1 p_0(tu; 0, y) \big\|f(y+c^j x) - f(x) \big\|_{\mathcal{B}_r(x)} {\: \rm d} u {\: \rm d} y \\ &\qquad\leq \frac{1 + c^{-d j/r}}{j!} \|f\|_{\mathcal{B}_r} \end{align*} where in the last inequality we have used Corollary \ref{cor:A0}.
Hence, \begin{align*} \bigg\| \sum_{j = 2}^\infty t^{j-1} \int_{\RR^d} \int_0^1 p_0(tu; 0, y- c^j x) \big(f(y) -f(x)\big) P_j(u) {\: \rm d} u {\: \rm d} y \bigg\|_{\mathcal{B}_r(x)} \leq \|f\|_{\mathcal{B}_r} t \sum_{j = 0}^\infty \frac{1 + c^{-d (j+2)/r}}{(j+2)!} t^j. \end{align*} For $j = 1$, we write \begin{align*} &\bigg\| \int_{\RR^d} \int_0^1 p_0(tu; 0, y) f(y + c x) P_1(u) {\: \rm d} u {\: \rm d} y -f(cx) \bigg\|_{\mathcal{B}_r(x)} \\ &\qquad\leq \int_0^1 \bigg\| \int_{\RR^d} p_0(tu; 0, y) \big( f(y + c x) - f(cx)\big) {\: \rm d} y \bigg\|_{\mathcal{B}_r(x)} P_1(u) {\: \rm d} u, \end{align*} thus by the dominated convergence together with the strong continuity on $\mathcal{B}_r$, we conclude that \[ \lim_{t \to 0^+} \int_{\RR^d} \int_0^1 p_0(tu; 0, y) f(y + c x) P_1(u) {\: \rm d} u {\: \rm d} y = f(cx), \] and the theorem follows. \end{proof} \begin{corollary} \label{cor:2} Under the assumptions of Theorem \ref{thm:1}, for every $r \in [1, \infty]$, if $f \in \calC_0^2(\RR^d)$ is such that $\partial_x^{\mathbf{a}} f \in \mathcal{B}_r$ for each $\mathbf{a} \in \NN_0^d$, $\abs{\mathbf{a}} \leq 2$, then $f \in D(\mathscr{A}_r)$ and \[ \mathscr{A}_r f = \mathscr{A} f. \] \end{corollary} Let $\mathscr{A}^*_2$ denote the $L^2(\RR^d)$-adjoint operator to $\mathscr{A}_2$. Recall that $g \in D(\mathscr{A}_2^*)$ if and only if there is $h \in L^2(\RR^d)$ such that \[ \sprod{\mathscr{A}_2 f}{g} = \sprod{f}{h} \qquad\text{for all } f \in D(\mathscr{A}_2). \] Then $\mathscr{A}_2^* g = h$. We have the following lemma. \begin{lemma} \label{lem:3} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. If $g \in \calC_0^2(\RR^d)$ is such that $\partial^{\mathbf{a}} g \in L^2(\RR^d)$ for each $\mathbf{a} \in \NN_0^d$, $\abs{\mathbf{a}} \leq 2$, then $g \in D(\mathscr{A}_2^*)$ and $\mathscr{A}_2^* g = \mathscr{A}^* g$. \end{lemma} \begin{proof} Let $f \in D(\mathscr{A}_2)$. By Proposition \ref{prop:9}, we have \begin{align*} \sprod{\mathscr{L}_2 f}{g} &= \lim_{t\to 0^+} \int_{\RR^d} \frac{\EE\big[f(Y_t + x)\big]-f(x)}{t} g(x) \: {\rm d} x\\ &= \lim_{t\to 0^+} \int_{\RR^d} f(x) \frac{\EE\big[g(-Y_t + x)\big] -g(x)}{t} \: {\rm d} x = \sprod{f}{\mathscr{L}^*g}. \end{align*} Hence, by Theorem \ref{thm:1}, \begin{align*} \sprod{\mathscr{A}_2 f}{g} &= \sprod{\mathscr{L}_2 f}{g}+ \int_{\RR^d} \big(f(cx) - f(x)\big) g(x) \: {\rm d}x \\ &= \sprod{f}{\mathscr{L}^*g}+ \int_{\RR^d} f(x) \bigg(\frac{1}{c} g(cx)-g(x)\bigg) \: {\rm d}x = \sprod{f}{\mathscr{A}^*g}. \end{align*} Since $\mathscr{A}^*g \in L^2(\RR^d)$, we conclude that $\mathscr{A}_2^* g = \mathscr{A}^* g$. \end{proof} \begin{theorem} \label{thm:H+F-P} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Then $\rho_{\mathbf{Y}}$ defined in \eqref{representationrho} belongs to $D(\mathscr{A}_2^*)$ and \begin{equation} \label{eq:69} \mathscr{A}_2^* \rho_{\mathbf{Y}} = \mathscr{A}^* \rho_{\mathbf{Y}}= 0. \end{equation} Moreover, \[ \partial_t p = \mathscr{A}_x p =\mathscr{A}_y^* p \] \end{theorem} \begin{proof} In view of Proposition \ref{prop:6} and Lemma \ref{lem:p_reg}, $\rho_{\mathbf{Y}}$ and $p$ are regular, respectively. Let $(P_t : t > 0)$ be the semigroup on $\calC_0(\RR^d)$ generated by $\mathbf{X}$. 
By Corollary \ref{cor:2}, for $f \in \calC_c^\infty(\RR^d)$, \[ \partial_t P_t f(x) = P_t \mathscr{A} f(x). \] Hence, by Lemma \ref{lem:p_reg}, \begin{align*} \int_{\RR^d} f(y) \partial_t p(t; x, y) {\: \rm d} y &= \partial_t P_t f(x) = P_t \mathscr{A} f(x) \\ &= \int_{\RR^d} \mathscr{A} f(y) p(t; x, y) {\: \rm d} y = \int_{\RR^d} f(y) \mathscr{A}^*_y p(t; x, y) {\: \rm d} y \end{align*} proving $\partial_t p = \mathscr{A}_y^* p$. To show that $\partial_t p = \mathscr{A}_x p$, we observe that for $f \in \calC_0^\infty(\RR^d)$, \[ \partial_t P_t f(x) = \mathscr{A} P_t f(x) \] provided that $P_t f \in \calC_0^\infty(\RR^d)$. Therefore, in view of the Chapman--Kolmogorov equation \eqref{eq:90}, it is enough to take $f(x) = p(s; x, y)$, since the required regularity is a consequence of Lemma \ref{lem:p_reg}. To prove \eqref{eq:69}, by Corollary \ref{cor:1}, for $f \in \calC_c^\infty(\RR^d)$ we have \begin{align*} 0 = \int_{\RR^d} \frac{P_t f(x) - f(x)}{t} \rho_{\mathbf{Y}}(x) {\: \rm d} x, \end{align*} thus, \[ 0=\sprod{\mathscr{A} f}{ \rho_{\mathbf{Y}}} = \sprod{f}{\mathscr{A}^* \rho_{\mathbf{Y}}} = \sprod{f}{\mathscr{A}_2^* \rho_{\mathbf{Y}}} \] where the last equation follows by Lemma \ref{lem:3}, which completes the proof. \end{proof} Notice that $\mathbf{X}$ generates a strongly continuous semigroup of contractive operators on $L^2(\RR^d, \rho_{\mathbf{Y}}(y) {\rm d} y)$. Let us denote by $\mathscr{A}_\rho$ its infinitesimal generator. Since $\rho_{\mathbf{Y}}(y) {\rm d} y$ is a probability measure, for $f \in \calC_0^\infty(\RR^d) \subset D(\mathscr{A}_\rho)$, we have $\mathscr{A}_\rho f = \mathscr{A}_\infty f = \mathscr{A} f$. Let $\mathscr{A}_\rho^*$ denote the operator adjoint to $\mathscr{A}_\rho$ on $L^2(\RR^d, \rho_{\mathbf{Y}}(y) {\rm d} y)$. If $g \in \calC_c^\infty(\RR^d)$ belongs to the domain of $\mathscr{A}^*_\rho$, then \begin{align*} \int_{\RR^d} \mathscr{A}_\rho^* g (y) f(y) \: \rho_{\mathbf{Y}}(y) {\rm d} y &= \int_{\RR^d} g(y) \mathscr{A}_\rho f(y) \: \rho_{\mathbf{Y}} (y) {\rm d} y \\ &= \int_{\RR^d} \rho_{\mathbf{Y}} (y) g(y) \mathscr{A} f(y) {\rm d} y \\ &= \int_{\RR^d} \mathscr{A}^* \big(\rho_{\mathbf{Y}} g\big)(y) f(y) {\rm d} y \end{align*} for all $f \in \calC_c^\infty(\RR^d)$. Hence, \begin{equation} \label{eq:91} \rho_{\mathbf{Y}} \: \mathscr{A}_\rho^* g = \mathscr{A}^* \big(\rho_{\mathbf{Y}} \: g\big). \end{equation} \begin{theorem} \label{thm:NESS} Suppose that $\mathbf{Y}$ is a strictly $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2]$, with a transition density. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. There exists $g \in D(\mathscr{A}_\rho)$ such that if $g \in D(\mathscr{A}_\rho^*)$ then $\mathscr{A}_\rho g \neq \mathscr{A}_\rho^* g$. \end{theorem} \begin{proof} We are going to construct $g \in \calC_c^\infty(\RR^d)$. Since for $g \in \calC_c^\infty(\RR^d)$ both $\mathscr{A}_\rho g$ and $\mathscr{A}_\rho^* g$ are at least continuous, it suffices to show that $\mathscr{A}_\rho g(x) \neq \mathscr{A}_\rho^* g(x)$ for some $x\in \RR^d$ such that $\rho_{\mathbf{Y}}(x)>0$. Let $B_r(x)$ denote the open Euclidean ball of radius $r > 0$ and centered at $x \in \RR^d$. Fix $x_0 \in \RR^d \setminus\{0\}$ such that $\rho_{\mathbf{Y}}(x_0) > 0$. Let us consider a function $g_r \in \calC_c^\infty(\RR^d)$ such that \[ \ind{B_{r/2}(c x_0)} \leq g_r \leq \ind{B_r(c x_0)}.
\] If $(1-c) \norm{x_0} > r > 0$, then \begin{align*} \mathscr{A}_{\rho} g_r(x_0) &= \mathscr{A} g_r(x_0) \\ &=\mathscr{L} g_r(x_0)+g_r(cx_0)-g_r(x_0) \\ &=\int_{\RR^d} g_r(x_0+z) \: \nu({\rm d}z) + 1. \end{align*} Moreover, by \eqref{eq:91} \begin{align*} \mathscr{A}_\rho^* g_r(x_0) &=\frac1{\rho_{\mathbf{Y}}(x_0)} \mathscr{A}^*(\rho_{\mathbf{Y}} g_r)(x_0)\\ &=\frac1{\rho_{\mathbf{Y}}(x_0)}\bigg( \mathscr{L}^*(\rho_{\mathbf{Y}} g_r)(x_0) +\frac1{c}(\rho_{\mathbf{Y}} g_r)\Big(\frac{x_0}{c}\Big)-(\rho_{\mathbf{Y}} g_r)(x_0)\bigg)\\ &=\frac1{\rho_{\mathbf{Y}}(x_0)} \int_{\RR^d} \rho_{\mathbf{Y}}(x_0+z)g_r(x_0+z) \: \nu^*({\rm d}z). \end{align*} Since $\nu$ and $\nu^*$ both are measures without atoms, see \eqref{eq:nu-str_st}, we obtain \[ \lim_{r \to 0^+} \mathscr{A}_\rho g_r(x_0) = 1, \qquad \text{ and }\qquad \lim_{r\to 0^+} \mathscr{A}_\rho^* g_r(x_0)=0. \] Therefore, there is $r > 0$ such that $\mathscr{A}_\rho g_r(x_0) \neq \mathscr{A}_\rho^* g_r(x_0)$. \end{proof} \section{Asymptotic and estimates of transition density}\label{sec:4} In this section we study the asymptotic behavior of the transition densities processes obtained by partial resetting from $\alpha$-stable processes such as: isotropic $\alpha$-stable processes (see Section \ref{sec:stable}), $\alpha$-stable subordinators (see Section \ref{sec:sub}) and $d$-cylindrical $\alpha$-stable processes (see Section \ref{sec:cyl}). \subsection{Isotropic $\alpha$-stable process with $\alpha\in(0,2)$ and resetting} \label{sec:stable} Let $\nu$ denote the L{\'e}vy measure of an isotropic $\alpha$-stable process with $\alpha\in (0,2)$. Namely, \begin{align} \label{eq:lm} \nu(y)=\frac{2^{\alpha}\Gamma((d+\alpha)/2)}{\pi^{d/2}|\Gamma(-\alpha/2)|} |y|^{-d-\alpha}, \end{align} see Example~\ref{ex:names:2}. In view of the construction of the resetting, the transition density of the process $\mathbf{X}$ depends on the parameter $c \in (0, 1)$. Since in the following theorem we also study the asymptotic behavior of $p(t; x, y)$ uniformly with respect to the parameter $c$, we write $p^{(c)}(t; x, y)$ instead of $p(t; x, y)$, to indicate the dependence on $m = c^\alpha$. \begin{theorem} \label{thm:ius} Suppose that $\mathbf{Y}$ is an isotropic $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2)$, with a transition density $p_0$. Assume that $\mathbf{X}^{(c)}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then for each $\kappa_1, \kappa_2 \in (0, 1)$, the transition density $p^{(c)}$ of $\mathbf{X}^{(c)}$ satisfies \begin{align} \label{eq:ius-1} \lim_{\atop{|y| \to +\infty}{t \to +\infty}} \, \sup_{\substack{c\in(0,\kappa_1] \\ |x| \leq \kappa_2 |y|}} \left| (1-m)\frac{p^{(c)}(t; x, y)}{\nu(y)}-1 \right|= 0. \end{align} Furthermore, for any fixed $K>0$, we have \[ \lim_{\substack{ |y| \to +\infty \\ t \to+\infty}}\, \sup_{\substack{c\in (0,1)\\ |x|\leq K |y|}} \left| \frac{(1-m)t}{1 -e^{-(1-m)t}}\frac{p^{(c)}(t; x, t^{1/\alpha} y)}{t\nu(t^{1/\alpha}y)} -1 \right| = 0, \] and \[ \lim_{\substack{|y| \to +\infty \\ t \to+\infty} } \sup_{\substack{c\in (0,1)\\ |x|\leq K }} \left| \frac{(1-m)t}{1 -e^{-(1-m)t}}\frac{p^{(c)}(t; x, (1-m)^{-1/\alpha}y)}{t\nu((1-m)^{-1/\alpha}y)} - 1 \right| = 0. \] \end{theorem} \begin{proof} It is well known that (see \cite{BlumenthalGetoor}), \begin{align} \label{approx:is} p_0(s;0,w)\approx \min\bigg\{ s^{-d/\alpha}, \frac{s}{|w|^{d+\alpha}}\bigg\} \end{align} uniformly with respect to $s > 0$ and $w \in \RR^d$. 
In particular, there is $C > 0$ such that for all $s > 0$ and $w \in \RR^d$, \begin{align} \label{ineq:2} p_0(s;0,w)\leq C s \nu(w). \end{align} We are going to prove all three statements simultaneously. We set \[ \tilde{y}=y\, \qquad\qquad \tilde{y}=t^{1/\alpha}y\,, \qquad\qquad \tilde{y}=(1-m)^{-1/\alpha}y, \] respectively. Fix $\kappa_1, \kappa_2 \in (0,1)$ and $K>0$. By \eqref{eq:rep-p-x}, we get \begin{equation} \label{eq:70} p^{(c)}(t; x, \tilde{y}) = e^{-t} p_0(t; 0, \tilde{y}-x) + e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 p_0(t u; 0, \tilde{y}-c^j x) P_j(u) {\: \rm d} u. \end{equation} We emphasize that the splines $(P_j)$ depend on $c$. If $|y| \geq K/\kappa_2$ and $t \geq (K/\kappa_2)^{\alpha}$, then $|x| \leq \kappa_2 |\tilde{y}|$. Hence, for $j \geq \NN_0$, \begin{align} \label{ineq:1} \frac{\nu(\tilde{y}-c^jx)}{\nu(\tilde{y})} \leq (1-\kappa_2)^{-d-\alpha}. \end{align} Let us observe that for $u > 0$, \[ \frac{u}{1-e^{-u}} = u + \frac{u}{e^u - 1} \leq u + 1. \] Therefore, by \eqref{ineq:2} and \eqref{ineq:1}, we get \[ \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t}\frac{p_0(t;0,\tilde{y}-x)}{t\nu(\tilde{y})} \leq C (1 - \kappa_2)^{-d-\alpha} (1+t) e^{-t}, \] and so, the first term in \eqref{eq:70} uniformly tends to zero. Next, by \eqref{ineq:2}, \eqref{ineq:1} and Theorem \ref{thm:all-moments} we obtain \begin{align} \nonumber &\frac{(1-m) t}{1 -e^{-(1-m)t}} e^{-t} t^j \int_0^1 \bigg( \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}+ 1 \bigg) u P_j(u) {\: \rm d} u \\ \nonumber &\qquad\qquad\leq (1+t) e^{-t} t^j \int_0^1 \Bigg(\frac{\nu(\tilde{y} - c^j x)}{\nu(\tilde{y})} + 1\bigg) u P_j(u) {\: \rm d} u \\ \label{ineq:b-2} &\qquad\qquad\leq \big(C (1 - \kappa_2)^{-d-\alpha} + 1 \big) (1+t) e^{-t} \frac{t^j}{j!}. \end{align} Next, let us recall that by \cite{BlumenthalGetoor} \[ \lim_{s |w|^{-\alpha} \to 0^+} \frac{p_0(s; 0, w)}{s \nu(w)} = 1. \] For $j \in \NN$, we can write \begin{align*} \bigg|\frac{p_0(s; 0,\tilde{y}-c^j x)}{s \nu(\tilde{y})} - 1 \bigg| \leq \bigg|\frac{p_0(s; 0, \tilde{y}-c^j x)}{s \nu(\tilde{y}-c^j x)} - 1 \bigg|\cdot \frac{\nu(\tilde{y}-c^j x)}{\nu(\tilde{y})} +\bigg|\frac{\nu(\tilde{y}-c^j x)}{\nu(\tilde{y})} -1\bigg|. \end{align*} Since $|x|\leq \kappa_2|\tilde{y}|$, we get \[ \frac{s}{|\tilde{y}-c^jx|^{\alpha}} \leq \frac{s |\tilde{y}|^{-\alpha}}{(1-\kappa_2)^{\alpha}}. \] Hence, \begin{itemize} \item if $|x|\leq \kappa_2|y|$, then \[ \frac{\nu(\tilde{y})}{(1+\kappa_1^j\kappa_2)^{d+\alpha}} \leq \nu(\tilde{y}-c^j x) \leq \frac{\nu(\tilde{y})}{(1-\kappa_1^j\kappa_2)^{d+\alpha}} \quad\text{ for all } c \in (0,\kappa_1]; \] \item if $|x|\leq K |y|$, then \[ \frac{\nu(\tilde{y})}{(1+Kt^{-1/\alpha})^{d+\alpha}} \leq \nu(\tilde{y}-c^jx) \\ \leq \frac{\nu(\tilde{y})}{(1-Kt^{-1/\alpha})^{d+\alpha}} \quad \text{ for all }t > K^\alpha; \] \item if $|x|\leq K$, then \[ \frac{\nu(\tilde{y})}{(1+K/|y|)^{d+\alpha}} \leq \nu(\tilde{y}-c^jx) \leq \frac{\nu(\tilde{y})}{(1-K/|y|)^{d+\alpha}} \quad \text{ for all } |y|>K. 
\] \end{itemize} Therefore, for a given $\epsilon>0$ there are $\delta > 0$ and $n_0\in\NN$ depending only on $d$, $\alpha$, $\epsilon$, $\kappa_1$, $\kappa_2$ and $K$, such that \begin{align} \label{ineq:3} \bigg|\frac{p_0(s; 0, \tilde{y}-c^j x)}{s \nu(\tilde{y})} - 1 \bigg| \leq \epsilon \end{align} \begin{itemize} \item for all $j \geq n_0$, and $c \in (0, \kappa_1]$ provided that $s |\tilde{y}|^{-\alpha} \leq \delta$; \item for all $j \in \NN$, $c \in (0, 1)$ and $t \geq \delta^{-1}$ provided that $\norm{x} \leq K \norm{y}$; \item for all $j \in \NN$, $c \in (0, 1)$ and $\norm{y} \geq \delta^{-1}$ provided that $\norm{x} \leq K$. \end{itemize} In all three cases, we further investigate (see \eqref{def:Phi} for the definition of $\Phi$) \begin{equation} \label{eq:26} \begin{aligned} & \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 \frac{p_0(t u; 0, \tilde{y}-c^j x)}{t\nu(\tilde{y})} P_j(u) {\: \rm d} u - \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t}\int_0^1 u \Phi(t, u) {\: \rm d} u\\ &\qquad\qquad =\frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t} \sum_{j =1}^{n_0} t^j \int_0^1 \bigg( \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}- 1 \bigg) u P_j(u) {\: \rm d} u \\ &\qquad\qquad\phantom{=} + \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t} \sum_{j =n_0+1}^{\infty} t^j \int_{\frac{\delta |\tilde{y}|^{\alpha}}{t} \land 1}^1 \bigg( \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}- 1 \bigg) u P_j(u) {\: \rm d} u\\ &\qquad\qquad\phantom{=} +\frac{(1-m)t}{1 -e^{-(1-m)t}}e^{-t} \sum_{j =n_0+1}^{\infty} t^j \int_0^{\frac{\delta |\tilde{y}|^{\alpha}}{t} \land 1} \bigg( \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}- 1 \bigg) u P_j(u) {\: \rm d} u. \end{aligned} \end{equation} The first term is small due to \eqref{ineq:b-2}. Furthermore, in all considered cases we can assure that $|y| \geq \delta$ which together with \eqref{ineq:3} and \eqref{eq:1-moment} (see \eqref{def:mu_t} for the definition of $\mu_t$) allows us to bound the absolute value of third term by \begin{align*} &\frac{(1-m)t}{1 -e^{-(1-m)t}}e^{-t}\sum_{j =n_0+1}^{\infty} \int_0^{\frac{\delta |\tilde{y}|^{\alpha}}{t} \land 1} t^j \, \bigg| \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}- 1 \bigg| \, u P_j(u) {\: \rm d} u \\ &\qquad\qquad \leq \frac{(1-m) \epsilon}{1-e^{-(1-m)t}} \int_0^\infty u \: \mu_t({\rm d} u) = \epsilon. \end{align*} Next, the middle term in \eqref{eq:26} equals zero in the case when $\tilde{y}=t^{1/\alpha}y$. If $\tilde{y}=y$ or $\tilde{y}=(1-m)^{-1/\alpha}y$, by \eqref{ineq:2}, \eqref{ineq:1} and \eqref{ineq:2-moment} we obtain \begin{align*} &\frac{(1-m)t}{1 -e^{-(1-m)t}}e^{-t} \sum_{j =1}^\infty t^j \int_{\frac{\delta |\tilde{y}|^{\alpha}}{t} \land 1}^1 \bigg( \frac{p_0(t u; 0, \tilde{y}-c^j x)}{tu\nu(\tilde{y})}+ 1 \bigg) u P_j(u) {\: \rm d} u \\ &\qquad\qquad \leq \frac{1-m}{1 -e^{-(1-m)t}} \big(C(1-\kappa_2)^{-d-\alpha}+1\big) \int_{\delta |\tilde{y}|^{\alpha} \land t}^t u \: \mu_t({\rm d} u) \\ &\qquad\qquad\leq \frac{1-m}{1 -e^{-(1-m)t}} \Big(C(1-\kappa_2)^{-d-\alpha}+1\Big) \frac1{\delta |\tilde{y}|^{\alpha}} \int_0^\infty u^2 \: \mu_t({\rm d} u) \\ &\qquad\qquad\leq \frac{2 \big(C(1-\kappa_2)^{-d-\alpha}+1\big)}{\delta(1-m)|\tilde{y}|^{\alpha}}. \end{align*} Now we use either $(1-m)\geq (1-\kappa_1^\alpha)$ or $(1-m)|\tilde{y}|^{\alpha}=|y|$. To conclude the proof, it enough to notice that by \eqref{eq:1-moment} the expression \[ \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t}\int_0^1 u \Phi(t, u) {\: \rm d} u = 1 - \frac{(1-m)t}{1 -e^{-(1-m)t}} e^{-t} \] converges to the stated limits. 
\end{proof} \begin{corollary} \label{cor:ius} Under the assumptions of Theorem~\ref{thm:ius} we have \begin{equation} \label{eq:27a} \lim_{|y|\to +\infty} \frac{\rho_{\mathbf{Y}}(y)}{\nu(y)}= \frac1{1-m}, \end{equation} and \begin{equation} \label{eq:27b} \rho_{\mathbf{Y}} \approx 1 \land \nu, \qquad\text{on}\quad \RR^d. \end{equation} Furthermore, for all $\delta,r>0$ \begin{align} \label{approx:p_rho} p \approx \rho_{\mathbf{Y}}, \qquad\text{on}\quad [\delta,\infty)\times B_r \times \RR^d. \end{align} \end{corollary} \begin{proof} The equality \eqref{eq:27a} follows from \eqref{eq:ius-1} by passing with $t$ to infinity and using \eqref{representationrho}. The integral representation \eqref{representationrho} implies that $\rho_{\mathbf{Y}}$ is bounded from above by $\rho_{\mathbf{Y}}(0)$. Moreover, it is bounded from below by a positive constant, on every compact subset of $\RR^d$. From \eqref{eq:27a} we easily deduce that $\rho_{\mathbf{Y}}(y) \approx 1 \land \nu(y)$ on $\RR^d$. Since $\inf_{y\in B_{2r}} \rho_{\mathbf{Y}}(y) > 0$, it follows from \eqref{eq:lim_p_t_infty-unif} that there is $T > 0$ such that $p \approx \rho_{\mathbf{Y}}$ on $[T,\infty)\times B_r \times B_{2r}$ Now, by positivity and continuity, see Lemma~\ref{lem:p_reg}, $p \approx \rho_{\mathbf{Y}}$ on $[\delta,T]\times B_r \times B_{2r}$. It remains to investigate $(t, x, y) \in [\delta,\infty)\times B_r \times (\RR^d \setminus B_{2r})$. Using the representation \eqref{eq:rep-p-x} and \eqref{approx:is}, there is $C > 0$ such that \begin{equation} \label{eq:86} C^{-1} p(t; 0, y) \leq p(t;x,y) \leq C p(t;0,y) \end{equation} for all $(t, x, y) \in (0,\infty)\times B_r \times (\RR^d \setminus B_{2r})$. Now, by \eqref{eq:rep-p-0.1}, \eqref{ineq:2} and \eqref{eq:1-moment}, for all $t>0$, $y\in\RR^d$, \[ \frac{p(t;0,y)}{\nu(y)} \leq C \int_0^\infty u \: \mu_t({\rm d} u) \leq \frac{C}{1-m}\,. \] Using again \eqref{eq:1-moment} and \eqref{ineq:2-moment}, we obtain that for all $t>0$, \begin{align} \label{ineq:cut-M} \int_{[M,\infty)} u \: \mu_t({\rm d} u) \leq \frac12 \int_0^\infty u \: \mu_t({\rm d} u) \end{align} where $M= 4/(1-m^2)$. Thus, by \eqref{approx:is} we have $p_0(s;0,w) \geq C^{-1} s \nu(w)$ provided that $|w| \geq r (s/M)^{1/\alpha}$. Hence, by \eqref{eq:rep-p-0.1}, for all $t \geq \delta$, $|y|\geq 2r$, \begin{align*} \frac{p(t;0,y)}{\nu(y)} \geq C^{-1} \int_{(0,M]} u \: \mu_t({\rm d} u) &\geq \frac{C^{-1}}2 \int_0^\infty u \: \mu_t({\rm d} u) \\ &\geq \frac{C^{-1}}2 \frac{1-e^{-(1-m)\delta}}{1-m}, \end{align*} which together with \eqref{eq:27b} and \eqref{eq:86} completes the proof. \end{proof} \subsection{$\alpha$-stable subordinator with $\alpha\in (0,1)$ and resetting} \label{sec:sub} Let $\nu$ be the density of the L{\'e}vy measure of an $\alpha$-stable subordinator, $\alpha \in (0, 1)$. Namely, \[ \nu(s)=\frac{\alpha}{\Gamma\left(1-\alpha\right)} \frac{\ind{(0,\infty)}(s)}{s^{1+\alpha}}, \] see Example~\ref{ex:names:1}. A similar result to Theorem \ref{thm:ius} holds in the present setting as well. \begin{theorem} \label{thm:s-s} Suppose that $\mathbf{Y}$ is an $\alpha$-stable subordinator with $\alpha \in (0, 1)$. Assume that $\mathbf{X}^{(c)}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. 
Then for each $\kappa_1, \kappa_2 \in (0, 1)$, the transition density $p^{(c)}$ of $\mathbf{X}^{(c)}$ satisfies \begin{align} \label{eq:s-s-1} \lim_{\atop{y \to +\infty}{t \to +\infty}} \, \sup_{\substack{ c \in(0,\kappa_1] \\ |x| \leq \kappa_2 y}} \left| (1-m)\frac{p^{(c)}(t; x, y)}{\nu(y)}-1 \right|= 0\,. \end{align} Furthermore, for all $K>0$, \[ \lim_{\substack{ y \to +\infty \\ t \to+\infty}}\, \sup_{\substack{c \in (0,1)\\ |x|\leq K y}} \left| \frac{(1-m)t}{1 -e^{-(1-m)t}}\frac{p^{(c)}(t; x, t^{1/\alpha} y)}{t\nu(t^{1/\alpha}y)} - 1 \right| = 0, \] and \[ \lim_{\substack{y \to +\infty \\ t \to+\infty} } \sup_{\substack{c \in (0,1)\\ |x|\leq K }} \left| \frac{(1-m)t}{1 -e^{-(1-m)t}}\frac{p^{(c)}(t; x, (1-m)^{-1/\alpha}y)}{t\nu((1-m)^{-1/\alpha}y)} - 1 \right| = 0. \] \end{theorem} \begin{proof} First, let us recall that \begin{equation} \label{eq:34} \lim_{s w^{-\alpha} \to 0^+} \frac{p_0(s;0,w)}{s\nu(w)}=1, \end{equation} see e.g. \cite[Theorem 37.1]{MR0344810} or \cite[Theorem 2.5.6]{MR854867}. In particular, \eqref{eq:34} describes the asymptotic behavior of $p_0(1/\tilde{c}; 0, y)$ with $\tilde{c}=\cos(\pi\alpha/2)$, cf. \cite[Example 24.12]{MR1739520}. By the formula \cite[(9)]{MR2013738} we also have \begin{align} \label{ineq:s-s-2} p_0(s;0,w) \leq C s \nu(w). \end{align} Now the arguments follows by the same line of reasoning as the proof of Theorem~\ref{thm:ius}. \end{proof} \begin{corollary} \label{cor:s-s} Under the assumptions of Theorem~\ref{thm:s-s} we have \begin{equation} \label{eq:35a} \lim_{y\to +\infty} \frac{\rho_{\mathbf{Y}}(y)}{\nu(y)} = \frac1{1-m}. \end{equation} Moreover, for each $\delta > 0$, \begin{equation} \label{eq:35b} \rho_{\mathbf{Y}} \approx \nu \qquad\text{on } [\delta, \infty). \end{equation} Furthermore, for each $\delta, r>0$, \begin{align} \label{approx:p_rho-2} p \approx \rho_{\mathbf{Y}} \qquad\text{on } [\delta,\infty)\times (-r,r)\times [\delta,\infty). \end{align} \end{corollary} \begin{proof} The equality \eqref{eq:35a} follows from \eqref{eq:s-s-1} by passing with $t$ to infinity and using \eqref{representationrho}. Since both $\rho_{\mathbf{Y}}$ and $\nu$ are continuous and positive on $(0,\infty)$, by \eqref{eq:35a} we get $\rho_{\mathbf{Y}} \approx \nu$ for $y \in [\delta, \infty)$. Because $\inf_{y\in (\delta,2r)} \rho_{\mathbf{Y}}(y)>0$, by \eqref{eq:lim_p_t_infty-unif} there is $T > 0$ such that $p \approx \rho_{\mathbf{Y}}$ on $[T,\infty)\times (-r,r) \times [\delta, 2r)$. We notice that $p(t;x,y)>0$ for all $t>0$, $x\in\RR$ and $y>0$, see \eqref{eq:rep-p-x}. Since $p$ is continuous, see Lemma \ref{lem:p_reg}, the comparability $p \approx \rho_{\mathbf{Y}}$ holds on $[\delta,T] \times (-r,r) \times [\delta,2r)$. Therefore, it remains to consider $(t, x, y) \in [\delta,\infty)\times (-r,r)\times [2r,\infty)$. Let us observe that for $|x|\leq r$, and $y \geq 2r$, \[ \nu(y-c^jx)\leq 2^{1+\alpha}\nu(y). \] Hence, using the representation \eqref{eq:rep-p-x} and \eqref{ineq:s-s-2}, for all $(t, x, y) \in [\delta,\infty)\times (-r,r)\times [2r,\infty)$, we get \begin{align*} p(t;x,y) &\leq 2^{1+\alpha} \nu(y)\, C \int_0^\infty u \: \mu_t({\rm d} u) \\ &\leq 2^{1+\alpha}\frac{C}{1-m} \nu(y) \end{align*} where the last estimate is a consequence of \eqref{eq:1-moment}. In view of the formula \cite[(10)]{MR2013738}, for $w \geq (s/M)^{1/\alpha}$ with $M=4/(1-m^2)$, and $s > 0$, \[ p_0(s;0,w) \geq C^{-1} s \nu(w). 
\] Moreover, if $|x| \leq r$, $y \geq 2r$ and $tu \leq M$, then \[ y -c^jx \geq r (tu/M)^{1/\alpha}, \] thus, using \eqref{eq:rep-p-x} together with \eqref{ineq:cut-M} and \eqref{eq:1-moment}, we get \begin{align*} p(t;x,y) &\geq e^{-t} p_0(t;0,y-x) \ind{(0,M]}(t)+e^{-t}\sum_{j=1}^\infty t^j \int_0^1 p_0(tu;0,y-c^jx) \ind{(0,M]}(tu) P_j(u) {\: \rm d} u \\ &\geq 2^{-1-\alpha} \nu(y)\, C^{-1} \int_{(0,M]} u \: \mu_t({\rm d} u) \\ &\geq 2^{-1-\alpha} \nu(y)\,\frac{C^{-1}}2 \int_0^\infty u \: \mu_t({\rm d} u)\\ &\geq 2^{-1-\alpha} \frac{C^{-1}}2 \frac{1-e^{-(1-m)\delta}}{1-m} \nu(y), \end{align*} which together with \eqref{eq:35b} completes the proof. \end{proof} \subsection{$d$-Cylindrical $\alpha$-stable process with $\alpha\in(0,2)$ and resetting} \label{sec:cyl} In this section we study the $d$-dimensional cylindrical $\alpha$-stable process, $\alpha \in (0, 2)$, namely a L\'evy process with the transition density \[ p_0(t; x, y) = \prod_{j=1}^d q(t; x_j, y_j) \] where $q$ denotes the transition density of the symmetric one-dimensional $\alpha$-stable process. Let $\nu$ be the density of the L{\'e}vy measure of the one-dimensional process, that is \[ \nu(r)=\frac{2^{\alpha}\Gamma((1+\alpha)/2)}{\pi^{1/2}|\Gamma(-\alpha/2)|} |r|^{-1-\alpha}, \qquad r \in \RR. \] The function $\nu$ should not be confused with the L{\'e}vy measure of the $d$-cylindrical $\alpha$-stable process, which is singular, see Example~\ref{ex:names:3} for details. Given $\theta\in\RR^d$ let us define \[ S_1=\{i \in \{1, \ldots, d\} : \theta_i\neq 0\}\,,\qquad d_1=\#S_1\,,\qquad S_0=\{1, \ldots, d\} \setminus S_1. \] Recall that $p(t;x,y)$ depends on $c$ (or equivalently on $m$), see \eqref{def:m}. \begin{theorem} \label{thm:cylindrical} Suppose that $\mathbf{Y}$ is a cylindrical $\alpha$-stable process in $\RR^d$, $\alpha \in (0, 2)$, with a transition density $p_0$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then for each $\theta\in \RR^d$, $\kappa\in (0,1)$, $K\geq 0$ and $\delta>0$, the transition density of $\mathbf{X}$ satisfies \begin{align} \label{eq:c-1} \lim_{\substack{ r \to +\infty \\ t \to+\infty}} \sup_{y \in \mathscr{Y}} \sup_{x\in \mathscr{X}(r,t)} \bigg| \frac{p(t;x,r\theta+y)}{\prod_{i\in S_1}\nu(r\theta_i)}- L \bigg|=0 \end{align} where \begin{align*} \mathscr{X}(r,t) &= \left\{x\in\RR^d : \begin{aligned} \abs{x_i} &\leq \kappa \abs{r \theta_i}, \quad \text{for all}\quad i \in S_1, \\ \log_{1/c} \abs{x_i} &\leq \frac{t}{1+\delta}, \quad \text{for all}\quad i \in S_0, \end{aligned} \right\} \\ \mathscr{Y}&=\Big\{y\in\RR^d : |y_i|\leq K \quad \text{for all}\quad i \in S_1\Big\}, \end{align*} and \[ L= \int_0^{\infty} u^{d_1} \prod_{i\in S_0} q(u;0,y_i) \: \mu({\rm d}u)\,. \] \end{theorem} \begin{proof} Let $d_0=\#S_0$. For $s > 0$ and $w \in \RR^d$, \begin{align*} p_0(s;0,w)= \bigg\{ \prod_{i\in S_1} q(s;0,w_i) \bigg\} \bigg\{\prod_{i\in S_0} q(s;0,w_i)\bigg\}. \end{align*} Recall that for all $s>0$, $v\in\RR$, (see \cite{BlumenthalGetoor}) \begin{align} \label{approx:is-c} q(s;0,v) \approx \min\bigg\{ s^{-1/\alpha}, \frac{s}{|v|^{1+\alpha}}\bigg\}. \end{align} In particular, \begin{align} \label{ineq:2-c} q(s;0,v) & \leq C \min\big\{ s^{-1/\alpha}, s \nu(v)\big\}. \end{align} Using \eqref{eq:rep-p-x} we write \begin{align*} I_1 &= \frac{p(t; x, r\theta + y)}{\prod_{i\in S_1}\nu(r\theta_i)} \\ &= e^{-t} t^{d_1} \frac{p_0(t;0,r\theta+y-x)}{\prod_{i\in S_1}t \nu(r\theta_i)} + e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 \frac{p_0(t u; 0,r\theta+ y-c^j x)}{\prod_{i\in S_1} (tu) \nu(r\theta_i)}(tu)^{d_1} P_j(u) {\: \rm d} u.
\end{align*} First, we show that \[ \lim_{\substack{ r \to +\infty \\ t \to+\infty}} \sup_{y \in \mathscr{Y}} \sup_{x\in \mathscr{X}(r,t)} \abs{I_1-I_2}=0 \] where \[ I_2=e^{-t}f(t,x)+e^{-t}\sum_{j=1}^\infty t^j \int_0^1 f(tu,c^jx) P_j(u) \: {\rm d}u, \] and \[ f(u,x)= u^{d_1} \prod_{i\in S_0} q(u;0,y_i-x_i). \] Let \[ r > 2 \frac{K}{1-\kappa}\max\big\{\abs{\theta_i}^{-1} : i \in S_1\big\}. \] Since for $i \in S_1$, \begin{equation} \label{eq:71} |x_i| \leq \kappa r|\theta_i|, \qquad\text{ and }\qquad r |\theta_i| \geq |y_i| \frac{2}{1-\kappa}, \end{equation} for each $j \in \NN_0$, we have \begin{align} \label{ineq:1-c} \frac{\nu(r\theta_i+y_i-c^jx_i)}{\nu(r\theta_i)} \leq \left(\frac{1-\kappa}{2}\right)^{-1-\alpha}. \end{align} Now using \eqref{ineq:2-c} and \eqref{ineq:1-c} we obtain \[ e^{-t} t^{d_1} \frac{p_0(t;0,r\theta+y-x)}{\prod_{i\in S_1} t \nu(r\theta_i)} \leq \tilde{C}^{d_1} C^{d_0} e^{-t} t^{d_1-d_0/\alpha}, \] and \[ e^{-t}f(t,x) \leq C^{d_0} e^{-t} t^{d_1-d_0/\alpha} \] where $\tilde{C} = 2^{1+\alpha} C (1-\kappa)^{-1-\alpha}$. Hence, the first terms in $I_1$ as well as in $I_2$ are negligible. Furthermore, by \eqref{ineq:2-c} and \eqref{ineq:1-c} for $j\in\NN$ we have \begin{align} e^{-t} t^j \int_0^1 \bigg( \frac{p_0(tu;0,r\theta+y-c^j x)}{\prod_{i\in S_1} (tu)\nu(r\theta_i)}(tu)^{d_1}+ f(tu,c^j x)\bigg) P_j(u) {\: \rm d} u \nonumber \\ \leq \big( \tilde{C}^{d_1}+1\big) C^{d_0} e^{-t} t^j \int_0^1 (tu)^{d_1-d_0/\alpha} P_j(u) {\: \rm d} u. \label{ineq:b-2-c} \end{align} Let us recall that (see e.g. \cite{BlumenthalGetoor}) \begin{equation} \label{eq:72} \lim_{s /|v|^{\alpha} \to 0^+} \frac{q(s; 0, v)}{s \nu(v)} = 1. \end{equation} For $j \in \NN$, we write \begin{equation} \label{eq:75} \begin{aligned} \bigg|\frac{q(s; 0,r\theta_i+ y_i-c^j x_i)}{s \nu(r\theta_i)} - 1 \bigg| &\leq \bigg|\frac{q(s; 0, r\theta_i+ y_i-c^j x_i)}{s \nu(r\theta_i+ y_i-c^j x_i)} - 1 \bigg| \frac{\nu(r\theta_i+ y_i-c^j x_i)}{\nu(r\theta_i)}\\ &\phantom{\leq}+\bigg|\frac{\nu(r\theta_i+ y_i-c^j x_i)}{\nu(r\theta_i)} -1\bigg|. \end{aligned} \end{equation} By \eqref{eq:71}, for $i \in S_1$ and $j \in \NN$, \begin{equation} \label{eq:73} \frac{s}{|r\theta_i +y_i-c^jx_i|^{\alpha}} \leq 2^{\alpha} (1 - \kappa)^{-\alpha} \frac{s}{\abs{r \theta_i}^\alpha}, \end{equation} and \begin{equation} \label{eq:74} \frac{\nu(r\theta_i)}{\big(1+K/|r\theta_i|+c^j\kappa\big)^{1+\alpha}} \leq \nu(r\theta_i+ y_i-c^j x_i) \leq \frac{\nu(r\theta_i)}{\big(1-K/|r\theta_i|-c^j\kappa\big)^{1+\alpha}}. \end{equation} Let $\epsilon>0$. By \eqref{eq:75}, in view of \eqref{eq:72}, \eqref{eq:73} and \eqref{eq:74}, we deduce that there are $\tilde{\delta} > 0$ and $n_0 \in \NN$ such that if $s r^{-\alpha} \leq \tilde{\delta}$ and $r \geq \tilde{\delta}^{-1}$, then for all $j > n_0$ and $i \in S_1$, \begin{equation} \label{ineq:3-c} \bigg|\frac{q(s; 0, r\theta_i+ y_i-c^j x_i)}{s \nu(r\theta_i)} - 1 \bigg| \leq \epsilon. \end{equation} Both $\tilde{\delta}$ and $n_0$ depend only on $\alpha$, $\epsilon$, $\kappa$, $K$ and $\theta$.
Now, we write \begin{equation} \label{eq:77} \begin{aligned} &e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 \frac{p_0(t u; 0,r\theta+ y-c^j x)}{\prod_{i\in S_1} (tu) \nu(r\theta_i)}(tu)^{d_1} P_j(u) {\: \rm d} u -e^{-t}\sum_{j=1}^\infty t^j \int_0^1 f(tu,c^jx) P_j(u) \: {\rm d}u\\ &\qquad= e^{-t} \sum_{j=1}^{n_0} t^j \int_0^1 \bigg( \frac{p_0(tu;0,r\theta+y-c^j x)}{\prod_{i\in S_1} (tu)\nu(r\theta_i)}(tu)^{d_1} - f(tu,c^j x)\bigg) P_j(u) {\: \rm d} u\\ &\qquad\phantom{=} +e^{-t} \sum_{j=n_0+1}^\infty t^j\int_{\frac{\tilde{\delta} r^\alpha}{t}\land 1}^1 \bigg( \frac{p_0(tu;0,r\theta+y-c^j x)}{\prod_{i\in S_1} (tu)\nu(r\theta_i)}(tu)^{d_1} - f(tu,c^j x) \bigg) P_j(u) {\: \rm d} u \\ &\qquad\phantom{=} + e^{-t} \sum_{j=n_0+1}^\infty t^j \int_0^{\frac{\tilde{\delta} r^\alpha}{t}\land 1} \bigg( \frac{p_0(tu;0,r\theta+y-c^j x)}{\prod_{i\in S_1} (tu)\nu(r\theta_i)}(tu)^{d_1} - f(tu,c^j x)\bigg) P_j(u) {\: \rm d} u. \end{aligned} \end{equation} By \eqref{ineq:b-2-c}, the first term on the right-hand side of \eqref{eq:77} is small, see also Theorem \ref{thm:all-moments}. The absolute value of the third term can be bounded by \begin{align*} e^{-t} \sum_{j = n_0+1}^\infty t^j \int_0^{\frac{\tilde{\delta} r^\alpha}{t} \land 1} \bigg| \prod_{i\in S_1} \frac{q(tu;0,r\theta_i+y_i-c^jx_i)}{tu \nu(r\theta_i)} - 1\bigg| \bigg( \prod_{i\in S_0} q(tu;0,y_i-c^jx_i)\bigg) (tu)^{d_1} P_j(u) {\: \rm d} u. \end{align*} In view of \eqref{ineq:2-c}, \eqref{ineq:1-c} and \eqref{ineq:3-c} the last displayed formula is bounded by \begin{align*} \epsilon e^{-t} \sum_{j = 1}^\infty t^j \int_0^1 \big(\tilde{C}\vee 1 \big)^{d_1-1} C^{d_0} (tu)^{d_1-d_0/\alpha} P_j(u) {\: \rm d} u \leq \epsilon \big( \tilde{C}\vee 1 \big)^{d_1-1} C^{d_0} \sup_{t\geq 1} \int_0^\infty u^{d_1-d_0/\alpha} \mu_t({\rm d}u) \end{align*} which is small by \eqref{ineq:sup-finite}. Next, using \eqref{ineq:2-c} and \eqref{ineq:1-c}, the absolute value of the second term in \eqref{eq:77} is controlled as follows \begin{align*} &e^{-t} \sum_{j=1}^\infty t^j\int_{\frac{\tilde{\delta} r^\alpha}{t}\land 1}^1 \bigg( \frac{p_0(tu;0,r\theta+y-c^j x)}{\prod_{i\in S_1} (tu)\nu(r\theta_i)}(tu)^{d_1} + f(tu,c^j x)\bigg) P_j(u) {\: \rm d} u\\ &\qquad \leq \big(\tilde{C}^{d_1}+1\big) C^{d_0} \int_{\tilde{\delta} r^\alpha \land t}^t u^{d_1-d_0/\alpha} \mu_t({\rm d}u)\\ &\qquad \leq \big( \tilde{C}^{d_1}+1\big) C^{d_0} \frac1{\tilde{\delta} r^\alpha} \sup_{t\geq 1} \int_0^\infty u^{d_1-d_0/\alpha+1} \mu_t({\rm d}u). \end{align*} Consequently, $|I_1-I_2|$ converges to zero as desired. Therefore, it suffices to show that \[ \lim_{\substack{r \to +\infty \\ t \to +\infty}} \sup_{y \in \mathscr{Y}} \sup_{x \in \mathscr{X}(r, t)} \big|I_2 - L \big| = 0. \] Without loss of generality we assume that $S_0=\{1,\ldots, d_0\}$. Since $\prod_{i\in S_0} q(s;0,w_i)$ may be regarded as a transition density of a strictly $\alpha$-stable process in $\RR^{d_0}$, it follows from Lemma~\ref{lem:A3} that the family \[ \mathcal{F}=\Big\{ f : \RR_+ \times \RR^d \rightarrow \RR : f(u, x) = u^{d_1} \prod_{i\in S_0} q(u;0,y_i-x_i), \, y\in \RR^{d_0} \Big\} \] satisfies \eqref{eq:78}. Notice that for each $f \in \mathcal{F}$ we have $\int_0^\infty f(u,0)\: \mu({\rm d} u)=L$. Applying now Lemma \ref{lem:unif_conv-2} with $\delta$ replaced by $\delta/2$, we get \[ \lim_{t\to +\infty} \sup_{f\in \mathcal{F}} \sup_{x \in \mathscr{X}_{\delta/2}(t)} |I_2-L|=0. \] In particular, $I_2 \rightarrow L$ uniformly with respect to $y \in \RR^{d_0}$. 
Furthermore, since $x \in \mathscr{X} (r,t)$ implies that $x \in \mathscr{X}_{\delta/2}(t)$ provided that \[ t \geq (1+\delta/2)(1+\delta^{-1}) \log_{1/c} d_0, \] the theorem follows. \end{proof} In addition to studying the asymptotic behavior of $p$, we also show its upper and lower estimates. \begin{proposition} \label{prop:cylindrical} Under the assumptions of Theorem~\ref{thm:cylindrical}, for all $\theta\in\RR^d$ and $K\geq 0$, \begin{equation} \label{eq:79} \lim_{r\to\infty} \sup_{y\in\mathscr{Y}} \bigg| \frac{\rho_{\mathbf{Y}}(r\theta+y)}{\prod_{i\in S_1}\nu(r\theta_i)} - L\bigg|=0 \,. \end{equation} Furthermore, for all $\delta, r>0$, \begin{equation} \label{eq:80} p \approx \rho_{\mathbf{Y}} \qquad\text{on } [\delta,\infty) \times B_r \times \RR^d, \end{equation} and \[ \rho_{\mathbf{Y}}(y) \approx \prod_{i=1}^d 1\land \nu(y_i) \qquad\text{for } y \in \RR^d. \] \end{proposition} \begin{proof} To get \eqref{eq:79}, it is enough to use \eqref{representationrho} and take $t$ tending to infinity in \eqref{eq:c-1}. Now, to prove \eqref{eq:80}, it is enough to show that \[ \frac{p(t;x,y)}{\prod_{i=1}^d 1\land \nu(y_i)} \approx 1, \qquad\text{on } [\delta,\infty)\times B_r \times \RR^d. \] Without loss of generality we assume that $\delta < 1$. Given $y\in\RR^d$ and $r > 0$ we set \begin{align*} S_1(y, r)&=\{i \in \{1, \ldots, d\} : |y_i|> 2r\}, &\qquad d_1(y, r)&=\#S_1(y, r), \\ S_0(y, r)&=\{i \in \{1, \ldots, d\} : |y_i|\leq 2r\}, &\qquad d_0(y, r)&=\#S_0(y, r). \end{align*} The reasoning is analogous to the proof of Theorem \ref{thm:cylindrical}. For $s > 0$ and $y, w \in \RR^d$, we write \[ p_0(s;0,y-w)= \bigg\{ \prod_{i\in S_1(y, r)} q(s;0,y_i-w_i) \bigg\} \bigg\{\prod_{i\in S_0(y, r)} q(s;0,y_i-w_i)\bigg\}. \] Let $t \geq \delta$ and $r \geq \norm{x}$. For $y \in \RR^d$, by \eqref{ineq:2-c}, we have \begin{align*} \frac{q(s;0,y_i-c^jx_i)}{s \nu(y_i)} &\leq 2^{1+\alpha} C, \qquad\text{ for all } i \in S_1(y, r), \intertext{and} q(s;0,y_i-c^jx_i) &\leq C s^{-1/\alpha}, \qquad\text{ for all } i \in S_0(y, r), \end{align*} hence by \eqref{eq:rep-p-x} \begin{align*} p(t; x,y) &\leq 2^{d_1(y, r) (1+\alpha)} C^d \bigg(\prod_{i\in S_1(y, r)} \nu(y_i)\bigg) \int_0^\infty u^{d_1(y,r)-d_0(y,r)/\alpha} \: \mu_t({\rm d} u). \end{align*} Since the latter integral is bounded by the sum of moments of $\mu_t$ of orders $-d/\alpha$ and $d$, using \eqref{ineq:sup-finite} we obtain the desired upper bound. Next, \eqref{approx:is-c} implies that \begin{align*} q(s;,0,v) &\geq C^{-1} s \nu(v), \qquad\text{if } |v|\geq r s^{1/\alpha}, \intertext{and} q(s;,0,v)&\geq C^{-1}, \qquad \text{if } \norm{v} \leq 3r, s \in [\delta/2,1]. \end{align*} Therefore, for $s \in [\delta/2,1]$, \begin{align*} \frac{q(s;0,y_i-c^jx_i)}{s\nu(y_i)} &\geq 2^{-1-\alpha}C^{-1}, \qquad\text{for all } i \in S_1(y, r), \intertext{and} q(s;0,y_i-c^jx_i) &\geq C^{-1}, \qquad\text{for all } i \in S_0(y, r). \end{align*} Therefore, by \eqref{eq:rep-p-x} we get \begin{align*} p(t;x,y) &\geq e^{-t} p_0(t;0,y-x) \ind{[\delta/2,1]}(t)+ e^{-t}\sum_{j=1}^\infty t^j \int_0^1 p_0(tu;0,y-c^jx) \ind{[\delta/2,1]}(tu) P_j(u) {\: \rm d} u\\ &\geq 2^{-d_1(1+\alpha)} C^{-d} \bigg(\prod_{i\in S_1(y, r) } \nu(y_i)\bigg) \int_{\delta/2}^1 u^{d_1(y, r)} \mu_t({\rm d}u)\\ &\geq 2^{-d_1(y, r) (1+\alpha)} C^{-d} (\delta/2)^d \bigg(\prod_{i\in S_1(y, r)} \nu(y_i)\bigg) \inf_{t \in [t_0,\infty)} \mu_t\big([\delta/2,1]\big). \end{align*} To conclude the proof, we invoke Lemma~\ref{lem:inf_mu_t}. 
\end{proof} \subsection{Brownian motion with resetting} In this section we consider the case where $\mathbf{Y}$ is a Brownian motion with a transition density \begin{align} \label{p0gauss} p_0(t;x,y)=(4\pi t)^{-\frac{d}{2}} e^{-\frac{|y-x|^2}{4t}}. \end{align} We are going to study the asymptotic behavior of $p(t; 0, y)$ when $t$ tends to infinity while $(t, y)$ stays in certain space-time regions. By Corollary \ref{cor:rep-2} and \eqref{p0gauss}, we have \begin{align} \label{eq:p_gauss} p(t; 0, y) = e^{-t} (4\pi t)^{-\frac{d}{2}} e^{-\frac{|y|^2}{4t}} + t e^{-t}(4\pi t)^{-\frac{d}{2}} \int_0^1 u^{-\frac{d}{2}} e^{-\frac{|y|^2}{4 tu}+\log \frac{\Phi(t, u)}{t}} {\: \rm d} u. \end{align} Let us define \[ \phi(t) = \sum_{j = 0}^\infty \frac{1}{j!} \frac{1}{(m; m)_{j+1}} t^j, \qquad t \geq 0. \] In view of \eqref{def:Phi} and Proposition \ref{prop:2} \begin{equation} \label{eq:29} \Phi(t, u) \leq t \phi(t(1-u)), \qquad \text{for all } t >0,\, u \in [0, 1], \end{equation} and \begin{equation} \label{eq:30} \Phi(t, u) = t \phi(t(1-u)), \qquad \text{for all } t > 0,\, u \in [m, 1]. \end{equation} We first show a few key properties of the function $\phi(t)$. Note that for $k\in\NN_0$, \begin{equation} \label{eq:31} \phi^{(k)}(t) = \sum_{j = 0}^\infty \frac{t^j}{j!} \frac{1}{(m; m)_{j+k+1}}, \qquad t \geq 0. \end{equation} \begin{proposition} \label{prop:7} For each $k \in \NN_0$, and $t > 0$, \begin{equation} \label{eq:10} \frac{e^t}{(m; m)_{k+1}} \leq \phi^{(k)}(t) \leq \frac{e^t}{(m; m)_\infty}, \end{equation} and \begin{equation} \label{eq:11} \phi(t) \leq \phi^{(k)}(t) \leq \frac{1}{(m; m)_k} \phi(t). \end{equation} \end{proposition} \begin{proof} Since \[ \frac{1}{(m; m)_{k+1}} \leq \frac{1}{(m; m)_{j+k+1}} \leq \frac{1}{(m; m)_{\infty}}, \] we easily get \eqref{eq:10}. Moreover, \[ \frac{1}{(m; m)_{j+1}} \leq \frac{1}{(m; m)_{j+k+1}} = \frac{1}{(m; m)_{j+1}} \prod_{\ell = 0}^{k-1} \frac{1}{1-m^{j+2+\ell}} \leq \frac{1}{(m; m)_{j+1}} \frac{1}{(m; m)_{k}} \] which leads to \eqref{eq:11}. \end{proof} \begin{proposition} \label{prop:8} For each $n \in \NN$ and $k \in \NN$, \begin{equation} \label{lim:phi-1} \lim_{t \to \infty} t^n \frac{\phi^{(k+1)}(t) - \phi^{(k)}(t)}{\phi^{(k)}(t)} = 0, \end{equation} and \begin{align} \label{lim:phi-2} \lim_{t \to \infty} t^n \Big(\frac{\phi(t)}{e^t} - \frac1{(m;m)_\infty}\Big) = 0. \end{align} \end{proposition} \begin{proof} Using \eqref{eq:31}, we get \[ \phi^{(k+1)}(t) = \sum_{j = 0}^\infty \frac{t^j}{j!} \frac{1}{(m; m)_{j+k+1}} \frac{1}{1-m^{j+k+2}}. \] Thus \begin{align*} \phi^{(k+1)}(t) &= \sum_{j = 0}^\infty \frac{t^j}{j!} \frac{1}{(m; m)_{j+k+1}} \sum_{\ell = 0}^\infty m^{(j+k+2)\ell} \\ &=\sum_{\ell=0}^\infty m^{(k+2)\ell} \phi^{(k)}\big(m^\ell t\big). \end{align*} By Proposition \ref{prop:7}, for $\ell \in \NN$, \[ \frac{\phi^{(k)}(m^\ell t)}{\phi^{(k)}(t)} \leq \frac{(m; m)_{k+1}}{(m; m)_\infty} e^{-(1-m^\ell) t}, \] hence \[ \lim_{t \to +\infty} t^n \frac{\phi^{(k)}(m^\ell t)}{\phi^{(k)}(t)} = 0. \] Since for each $\ell\in\NN$, \begin{align*} t^n \frac{\phi^{(k)}(m^\ell t)}{\phi^{(k)}(t)} &\leq \frac{(m; m)_{k+1}}{(m; m)_\infty} n! \big(1-m^\ell\big)^{-n} \\ &\leq \frac{(m; m)_{k+1}}{(m; m)_\infty} n! (1-m)^{-n}, \end{align*} by the Lebesgue's dominated convergence theorem \[ \lim_{t \to +\infty} \sum_{\ell = 1}^\infty m^{(k+2)\ell} t^n \frac{\phi^{(k)}(m^\ell t)}{\phi^{(k)}(t)} = 0, \] which completes the proof of \eqref{lim:phi-1}. 
Next, we write \[ \frac{1}{(m; m)_\infty} e^{t} - \phi(t) = \sum_{j = 0}^\infty \frac{t^j}{j!} \frac{1}{(m; m)_\infty} \Big(1 - \prod_{k = j+2}^\infty (1-m^k)\Big). \] By the generalized Bernoulli's inequality, \[ 1 - \prod_{k = j+2}^\infty (1 - m^k) \leq \sum_{k = j+2}^\infty m^k, \] hence \begin{align*} \frac{1}{(m; m)_\infty} e^{t} - \phi(t) &\leq \sum_{j = 0}^\infty \frac{t^j}{j!} \frac{1}{(m; m)_\infty} \sum_{k = j+2}^\infty m^k \\ &\leq \frac{1}{(m; m)_\infty} \frac{m^2}{1-m} e^{m t}. \end{align*} Lastly, \begin{align*} 0 \leq \Big(\frac{1}{(m; m)_\infty} e^{t} - \phi(t)\Big)e^{-t} \leq \frac{1}{(m; m)_\infty} \frac{m^2}{1-m} e^{-(1-m)t} \end{align*} which completes the proof of \eqref{lim:phi-2}. \end{proof} \begin{lemma} \label{lem:1} For all $t \geq 0$, \[ \frac{\phi''(t)}{\phi(t)} - \bigg(\frac{\phi'(t)}{\phi(t)} \bigg)^2 < 0. \] \end{lemma} \begin{proof} Using \eqref{eq:31}, for $s,t\geq 0$, we obtain \[ \phi'(s) \phi(t) - \phi'(t) \phi(s) = \sum_{k = 0}^\infty \sum_{\stackrel{j_1 + j_2 = k}{0 \leq j_1, j_2}} \frac{s^{j_1} t^{j_2}}{j_1! j_2!} \frac{1}{(m; m)_{j_1+1}} \frac{1}{(m; m)_{j_2+1}} \Big(\frac{1}{1-m^{j_1+2}} - \frac{1}{1-m^{j_2+2}}\Big). \] Observe that we can assume that $j_1 \neq j_2$. If $0 \leq j_1 < j_2$, then \[ \frac{1}{1-m^{j_1+2}} - \frac{1}{1-m^{j_2+2}} = \frac{m^{j_1+2} - m^{j_2+2}}{(1-m^{j_1+2}) (1-m^{j_2+2})} > 0. \] Hence, \begin{align*} & \sum_{\stackrel{j_1 + j_2 = k}{0 \leq j_1, j_2}} \frac{s^{j_1} t^{j_2}}{j_1! j_2!} \frac{1}{(m; m)_{j_1+1}} \frac{1}{(m; m)_{j_2+1}} \Big(\frac{1}{1-m^{j_1+2}} - \frac{1}{1-m^{j_2+2}}\Big) \\ &\qquad= \sum_{\stackrel{j_1 + j_2 = k}{0 \leq j_1 < j_2}} \frac1{j_1! j_2!} \frac{1}{(m; m)_{j_1+1}} \frac{1}{(m; m)_{j_2+1}} \Big(\frac{1}{1-m^{j_1+2}} - \frac{1}{1-m^{j_2+2}}\Big) \Big(s^{j_1} t^{j_2} - s^{j_2} t^{j_1}\Big). \end{align*} Therefore, $\phi''(s) \phi(t) - \phi'(t) \phi'(s)$ equals \begin{align*} \sum_{k = 1}^\infty \sum_{\stackrel{j_1 + j_2 = k}{0 \leq j_1 < j_2}} \frac1{j_1! j_2!} \frac{1}{(m; m)_{j_1+1}} \frac{1}{(m; m)_{j_2+1}} \Big(\frac{1}{1-m^{j_1+2}} - \frac{1}{1-m^{j_2+2}}\Big) \Big(j_1 s^{j_1-1} t^{j_2} - j_2 s^{j_2-1} t^{j_1}\Big). \end{align*} Now, to conclude the proof it suffices to put $s=t$ and use that $j_1<j_2$. \end{proof} \begin{proposition} \label{prop:log_phi} For each $k, n \in \NN$, $k \geq 2$, there is $C_{n, k} > 0$ such that for all $t \geq 0$, \[ t^n \Big| \frac{{\rm d}^k}{{\rm d} t^k} \log \phi(t) \Big| \leq C_{n, k}. \] \end{proposition} \begin{proof} Let us recall that for a positive smooth real function $f$, by Fa\'a di Bruno's formula, there are positive constants $C_{\ell}$ such that \[ \frac{{\rm d}^k }{{\rm d} t^k} \log f(t) = \sum_{j = 1}^k j! (-1)^{j+1} \frac{1}{(f(t))^j} \sum_{\ell} C_\ell \prod_{i = 1}^j \frac{{\rm d}^{\ell_i}}{{\rm d} t^{\ell_i}} f(t) \] where in the inner sum $\ell$ runs over all sequences $\ell = (\ell_1, \ldots, \ell_j)$, $\ell_i \in \NN$, with $\ell_1+\ldots+\ell_j = k$. Since the coefficients $C_\ell$ are independent of $f$, by taking $f(t) = e^t$ and $k \geq 2$, we get \[ 0 = \sum_{j = 1}^k j! (-1)^{j+1} \sum_{\ell} C_\ell. \] Hence, for $f \equiv \phi$, and $k \geq 2$, we obtain \begin{align*} \frac{{\rm d}^k}{{\rm d} t^k} \log \phi(t) &= \sum_{j = 1}^k j! (-1)^{j+1} \sum_{\ell} C_\ell \prod_{i = 1}^j \bigg(\frac{\phi^{(\ell_i)}(t)}{\phi(t)} - 1 + 1\bigg) -\sum_{j = 1}^k j! (-1)^{j+1} \sum_{\ell} C_\ell \\ &= \sum_{j = 1}^k j! 
(-1)^{j+1} \sum_{\ell} C_\ell \sum_{\stackrel{I \subset \{1, \ldots, j\}}{I \neq \emptyset}} \prod_{i \in I} \bigg(\frac{\phi^{(\ell_i)}(t)}{\phi(t)} - 1\bigg). \end{align*} Now, the claim follows by \eqref{lim:phi-1} and \eqref{eq:11}. \end{proof} In view of \eqref{eq:p_gauss}, \eqref{eq:29}, and \eqref{eq:30} our aim is to understand the critical points of the function \begin{equation} \label{psi} \psi(u) = -\frac{d}{2} \log u -\frac{L}{u} + \log \phi\big(t(1-u)\big), \qquad u \in (0, 1) \end{equation} where \begin{equation}\label{A} L = \frac{|y|^2}{4 t}. \end{equation} We note that \begin{equation} \label{eq:33} \psi'(u) = -\frac{d}{2} \frac{1}{u} + \frac{L}{u^2} - t \frac{\phi'\big(t(1-u)\big)}{\phi\big(t(1-u)\big)}, \end{equation} and \begin{equation} \label{eq:37} \psi''(u) = \frac{d}{2} \frac{1}{u^2} - 2 \frac{L}{u^3} + t^2\bigg\{ \frac{\phi''\big(t(1-u)\big)}{\phi\big(t(1-u)\big)} - \bigg(\frac{\phi'\big(t(1-u)\big)}{\phi\big(t(1-u)\big)}\bigg)^2 \bigg\}. \end{equation} The following lemma is instrumental in treating the integral \eqref{eq:p_gauss} restricted to the interval $(0, m)$. \begin{lemma} \label{lem:psi_prim} If \[ \frac{L}{t} \geq m^2 + \frac{C}{t} \] for certain $C > \frac{d}{2} m$, then $\psi'$ is decreasing and $\psi'(m)>0$. Moreover, there is $T_0 > 0$ such that for all $t \geq T_0$, \[ \int_0^m u^{-\frac{d}{2}} e^{-\frac{|y|^2}{4 tu}+\log \frac{\Phi(t, u)}{t}} {\: \rm d} u \leq m e^{\psi(m)}. \] \end{lemma} \begin{proof} In view of \eqref{eq:37} and Lemma \ref{lem:1}, $\psi''(u) < 0$ for all $u \in (0, 1)$ provided that $L > d/4$. Next, by \eqref{eq:33} we have \[ \psi'(m) = -\frac{d}{2} \frac{1}{m} + t\bigg(1 - \frac{\phi'\big(t(1-m)\big)}{\phi\big(t(1-m)\big)}\bigg) + \frac{t}{m^2} \bigg(\frac{L}{t} - m^2\bigg). \] Since \[ -\frac{d}{2} \frac{1}{m} + \frac{t}{m^2} \Big(\frac{L}{t} - m^2\Big) \geq -\frac{d}{2} \frac{1}{m}+\frac{C}{m^2}>0, \] by \eqref{lim:phi-1}, we get $\psi'(m) > 0$ provided that $t$ is sufficiently large. Lastly, by \eqref{eq:29} and \eqref{psi} we get \begin{align*} \int_0^m u^{-\frac{d}{2}} e^{-\frac{|y|^2}{4 tu}+\log \frac{\Phi(t, u)}{t}} {\: \rm d} u &\leq \int_0^m e^{\psi(u)} {\: \rm d} u \end{align*} which is bounded by $m e^{\psi(m)}$. \end{proof} Before we formulate the next result let us define \[ \vphi(r) = \int_0^\infty e^{-s} \phi(r s) {\: \rm d} s, \qquad\text{ for } r \in [0, 1). \] Using the definition of the function $\phi$ we can write \begin{align*} \vphi(r) &= \sum_{j = 0}^\infty \frac{1}{j!} \frac{r^j}{(m; m)_{j+1}} \int_0^\infty s^j e^{-s} {\: \rm d} s \\ &= \sum_{j = 0}^\infty \frac{r^j}{(m; m)_{j+1}}. \end{align*} \begin{theorem} \label{thm:5} Suppose that $\mathbf{Y}$ is Brownian motion in $\RR^d$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0,1)$. Then for each $\delta > 0$, the transition density of $\mathbf{X}$ satisfies \[ p(t; 0, y) = e^{-t} (4\pi t)^{-\frac{d}{2}} e^{-\frac{|y|^2}{4t}} \bigg\{ 1 + \bigg(\frac{4t^2}{|y|^2}\bigg) \vphi\bigg(\frac{4t^2}{\norm{y}^2}\bigg)+ \calO\bigg(\frac{t}{\norm{y}^2}\bigg) \bigg\} \] as $t$ tends to infinity, uniformly in the region \begin{equation} \label{rg1} \Big\{(t, y) \in \RR_+ \times \RR^d : \frac{|y|^2}{4t^2} \geq 1 +\delta \Big\}. 
\end{equation} \end{theorem} \begin{proof} Thanks to \eqref{eq:p_gauss}, we can write \begin{equation} \label{eq:58} e^t (4 \pi t)^{\frac{d}{2}} p(t; 0, y) = e^{-\frac{\norm{y}^2}{4 t}} + t \int_0^m u^{-\frac{d}{2}} e^{-\frac{\norm{y}^2}{4tu} + \log \frac{\Phi(t, u)}{t}} {\: \rm d} u +t I \end{equation} where we have set \[ I = \int_m^1 u^{-\frac{d}{2}} e^{-\frac{|y|^2}{4 tu}+\log \frac{\Phi(t, u)}{t}} {\: \rm d} u. \] Using \eqref{eq:30}, we get \[ I= \int_m^1 u^{-\frac{d}{2}} e^{-\frac{L}{u}} \phi\big(t(1-u)\big) {\: \rm d} u. \] Let us observe that for $(t, y)$ in the region \eqref{rg1} we have $L \geq (1+\delta) t$. By the change of variable $v = L(u^{-1}-1)$, we obtain \begin{align*} \int_m^1 u^{-\frac{d}{2}} e^{-\frac{L}{u}} \phi\big(t(1-u)\big) {\: \rm d} u = \frac{1}{L} e^{-L} \int_0^{L (m^{-1}-1)} e^{-v} \Big(1 + \frac{v}{L} \Big)^{\frac{d}{2}-2} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) {\: \rm d} v. \end{align*} By \eqref{eq:10}, for $v > 0$ we have \begin{align*} e^{-v} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) &\leq \frac{1}{(m; m)_\infty} \exp\Big\{-v + v \frac{t}{L} \frac{L}{v+L} \Big\} \\ &\leq \frac{1}{(m; m)_\infty} \exp\Big\{-v + v \frac{1}{1+\delta} \Big\} \\ &= \frac{1}{(m; m)_\infty} e^{-\frac{\delta}{1+\delta} v}. \end{align*} Moreover, for $v \in [0, L(m^{-1} - 1) ]$, \begin{align*} \Big|\Big(1 + \frac{v}{L} \Big)^{\frac{d}{2}-2} - 1 \Big| \leq C \frac{v}{L}. \end{align*} Hence, \begin{align*} &\bigg| \int_0^{L(m^{-1}-1)} e^{-v} \Big(1 + \frac{v}{L} \Big)^{\frac{d}{2}-2} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) {\: \rm d} v - \int_0^{L(m^{-1}-1)} e^{-v} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) {\: \rm d} v \bigg| \\ &\qquad\qquad \leq \int_0^{L(m^{-1}-1)} e^{-v} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) \Big|\Big(1 + \frac{v}{L} \Big)^{\frac{d}{2}-2} - 1 \Big| {\: \rm d} v \\ &\qquad\qquad\leq \frac{C}{(m;m)_\infty} \frac1{L} \int_0^\infty e^{-\frac{\delta}{1+\delta} v} v {\: \rm d} v. \end{align*} Since $\phi'$ is increasing, by the mean value theorem and \eqref{eq:10}, \begin{align*} \bigg| \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) - \phi\Big(\frac{t}{L} v \Big) \bigg| &\leq \phi'\Big(\frac{t}{L} v \Big) \Big|\frac{t}{L} v \frac{L}{v+L} - \frac{t}{L} v \Big| \\ &\leq \frac{v}{(m;m)_\infty} \exp\Big\{\frac{t}{L} v\Big\} \Big|\frac{L}{v+L} - 1\Big| \\ &\leq \frac{v^2}{(m;m)_\infty} \frac1{L} \exp\Big\{\frac{t}{L} v\Big\} \\ &\leq \frac1{(m;m)_\infty} \frac1{L} v^2 e^{\frac{1}{1+\delta} v}. \end{align*} Therefore, \begin{align*} &\bigg| \int_0^{L(m^{-1}-1)} e^{-v} \phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) {\: \rm d} v - \int_0^{L(m^{-1}-1)} e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v \bigg| \\ &\qquad\qquad\leq \int_0^{L(m^{-1}-1)} e^{-v} \bigg|\phi\Big(\frac{t}{L} \frac{L v}{v+L} \Big) - \phi\Big(\frac{t}{L}v\Big) \bigg| {\: \rm d} v \\ &\qquad\qquad\leq \frac1{(m;m)_\infty} \frac1{L} \int_0^\infty e^{-\frac{\delta}{1+\delta} v} v^2 {\: \rm d} v. 
\end{align*} Lastly, we can write \begin{align*} \int_0^{L(m^{-1}-1)} e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v &= \int_0^\infty e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v - \int_{L(m^{-1}-1)}^\infty e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v \\ &= \vphi\Big(\frac{t}{L}\Big) - \int_{L(m^{-1}-1)}^\infty e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v \end{align*} and so, by \eqref{eq:10}, \begin{align*} \int_{L(m^{-1}-1)}^\infty e^{-v} \phi\Big(\frac{t}{L}v\Big) {\: \rm d} v &\leq \frac1{(m;m)_\infty} \int_{L(m^{-1}-1)}^\infty \exp\Big\{-\Big(1-\frac{t}{L}\Big) v\Big\} {\: \rm d} v \\ &\leq \frac1{(m;m)_\infty} \int_{L(m^{-1}-1)}^\infty e^{-\frac{\delta}{1+\delta} v} {\: \rm d} v \\ &\leq \frac1{(m;m)_\infty} e^{-L (1-m) \frac{\delta}{1+\delta}} \int_0^\infty e^{-(1-m) \frac{\delta}{1+\delta} v} {\: \rm d} v. \end{align*} Summarizing, we obtain \begin{align*} I= e^{-L} \Big( \frac{1}{L}\Big) \Big\{ \vphi\Big(\frac{t}{L}\Big) + \calO\big(L^{-1}\big)\Big\}. \end{align*} Hence, by \eqref{eq:58} and Lemma \ref{lem:psi_prim} \begin{align*} \Big| e^t (4 \pi t)^{\frac{d}{2}} p(t; 0, y) - e^{-L} - t I \Big| &\leq t \int_0^m u^{-\frac{d}{2}} e^{-\frac{L}{u} + \log \frac{\Phi(t, u)}{t}} {\: \rm d} u \\ &\leq m t e^{\psi(m)}, \end{align*} provided that $t$ is sufficiently large. To complete the proof we need to show that \[ t e^{\psi(m)+L}=\calO\big(L^{-1}\big). \] To see this, we apply \eqref{eq:10} to get \begin{align*} t e^{\psi(m)+L} &\leq C_1 t \exp\Big\{-\frac{L}{m} +t(1-m)+L\Big\}\\ &=C_1 t \exp\Big\{ -L (1-m) \Big(\frac{1}{m} - \frac{t}{L}\Big) \Big\}\\ &\leq C_1 t \exp\Big\{ -L (1-m) \Big(\frac{1}{m} -1\Big) \Big\}, \end{align*} and since $L \geq (1+\delta) t$, the right-hand side is indeed $\calO\big(L^{-1}\big)$; the theorem follows. \end{proof} \begin{corollary} Under the assumptions of Theorem \ref{thm:5}, \[ \lim_{t \to +\infty} \frac{p(t;0,y)}{\abs{y}^{-\frac{d-1}{2}} e^{-\abs{y}}} = 0 \] uniformly in the region \eqref{rg1}. \end{corollary} \begin{proof} Since $2\sqrt{L t} = |y|$, by Theorem~\ref{thm:5} we get \[ \lim_{t \to +\infty} \frac{p(t;0,y)}{\abs{y}^{-\frac{d-1}{2}} e^{-\abs{y}}} = \lim_{t \to +\infty} t^{-\frac12} \exp\Big\{-t-L -\frac{d-1}{2}\log t +2\sqrt{L t} +\frac{d-1}{2}\log |y|\Big\}. \] Now, it is sufficient to observe that in the region \eqref{rg1}, \[ -t \Big(\sqrt{\frac{L}{t}}-1\Big)^2 +\frac{d}{2} \log \sqrt{\frac{L}{t}} \] is uniformly bounded from above. \end{proof} \begin{theorem} \label{thm:6} Suppose that $\mathbf{Y}$ is a Brownian motion in $\RR^d$. Assume that $\mathbf{X}$ is obtained from $\mathbf{Y}$ by partial resetting with factor $c\in(0,1)$. Then for each $\delta > 0$, the transition density of $\mathbf{X}$ satisfies \[ p(t; 0, y) = \frac{1}{2} \frac1{(m;m)_\infty} (2\pi)^{-\frac{d-1}{2}} \abs{y}^{-\frac{d-1}{2}} e^{-\abs{y}} \Big(1 + \calO\big(t^{-1}\big)\Big) \] as $t$ tends to infinity, uniformly in the region \begin{equation} \label{eq:38} \Big\{(t, y) \in \RR_+ \times \RR^d : m^2 +\delta \leq \frac{\norm{y}^2}{4t^2} \leq 1 - \delta \Big\}. \end{equation} \end{theorem} \begin{proof} Notice that for $(t, y)$ in the region \eqref{eq:38}, $L$ is comparable to $t$. Now, let us observe that \[ \psi'(1) = -\frac{d}{2} + t \bigg(\frac{L}{t} - \frac{\phi'(0)}{\phi(0)}\bigg), \quad\text{ and }\quad \frac{\phi'(0)}{\phi(0)}=\frac1{1-m^2}, \] thus $\psi'(1)<0$. Using now Lemma~\ref{lem:psi_prim}, we conclude that $\psi$ has a unique critical point $u_0$. Moreover, $u_0$ belongs to $(m, 1)$. In fact, we claim that $u_0 \in (m, 1-\delta')$ for some $\delta' > 0$. Indeed, suppose, on the contrary, that $u_0$ tends to $1$. 
We can assume that $t(1-u_0)$ converges to $g \in [0, \infty]$. Since $u_0$ satisfies \eqref{eq:33}, we have \[ \frac{1}{u_0^2} - 1 = \frac{d}{2 L} \frac{1}{u_0} + \frac{t}{L} \bigg( \frac{\phi'\big(t(1-u_0) \big)}{\phi\big(t(1-u_0) \big)} - \frac{\phi'(g)}{\phi(g)}\bigg) + \frac{t}{L}\bigg(\frac{\phi'(g)}{\phi(g)} - \frac{L}{t}\bigg). \] Hence \[ \frac{t}{L}\bigg(\frac{\phi'(g)}{\phi(g)} - \frac{L}{t}\bigg) = o(1) \] which leads to a contradiction because according to \eqref{eq:11}, \[ 1 \leq \frac{\phi'(t)}{\phi(t)} \leq \frac{1}{1-m^2} \] for all $t \geq 0$. This proves the claim. Next, we determine the asymptotic behavior of $u_0$. Since $u_0$ solves \eqref{eq:33}, we obtain \[ u_0 = \frac{2L}{\dfrac{d}{2} + \sqrt{\Big(\dfrac{d}{2}\Big)^2 + 4 L t \dfrac{\phi'(t(1-u_0))}{\phi(t(1-u_0))}}}. \] Because $t(1-u_0)$ tends to infinity, by \eqref{lim:phi-1}, \[ \frac{\phi'\big(t(1-u_0) \big)}{\phi\big(t(1-u_0)\big)} = 1 + o\big(t^{-2}\big), \] and consequently \begin{align} \nonumber \frac{1}{u_0} &= \frac{d}{4 L} +\sqrt{\Big(\frac{d}{2}\Big)^2 \frac{1}{4 L^2} + \frac{t}{L} + \frac{t}{L} \Big(\frac{\phi'(t(1-u_0))}{\phi(t(1-u_0))} - 1\Big)} \\ \nonumber &= \frac{d}{4 L} +\sqrt{\frac{t}{L}} \sqrt{ 1+ \Big(\frac{d}{2}\Big)^2 \frac{1}{4L t} + \Big(\frac{\phi'(t(1-u_0))}{\phi(t(1-u_0))} - 1\Big)} \\ \nonumber &= \frac{d}{4 L}+\sqrt{\frac{t}{L}} \Big(1 + \calO\big(t^{-2}\big)\Big) \\ \label{eq:48} &= \frac{d}{4 L} + \sqrt{\frac{t}{L}} + \calO\big(t^{-2}\big). \end{align} Hence \begin{align*} u_0 &= \frac{1}{ \frac{d}{4L } + \sqrt{\frac{t}{L}} + \calO\big(t^{-2}\big)} \\ &= \sqrt{\frac{L}{t}} \frac{1}{1+\frac{d}{4 \sqrt{t L}} + \calO\big(t^{-2}\big)} \\ &= \sqrt{\frac{L}{t}} \Big(1 - \frac{d}{4 \sqrt{t L}} + \calO\big(t^{-2}\big)\Big), \end{align*} and so \begin{equation} \label{eq:47} u_0 = \sqrt{\frac{L}{t}} - \frac{d}{4 t} + \calO\big(t^{-2}\big). \end{equation} We now establish the asymptotic behavior of $\psi(u_0)$ and $\psi''(u_0)$. By \eqref{lim:phi-2}, \[ \frac{\phi\big(t(1-u_0)\big)}{e^{t(1-u_0)}} = \frac1{(m;m)_\infty} + o\big(t^{-1}\big), \] and \[ \log \phi\big(t(1-u_0)\big) = \log \frac1{(m;m)_\infty} + t(1-u_0) + o\big(t^{-1}\big). \] By \eqref{eq:48} and \eqref{eq:47} we obtain \begin{align*} \frac{L}{u_0} &= \frac{d}{4} + \sqrt{L t} + \calO\big(t^{-1}\big), \\ t u_0 &= \sqrt{L t} - \frac{d}{4} + \calO\big(t^{-1}\big), \\ \log u_0 &=\log \sqrt{\frac{L}{t}}+\calO\big(t^{-1}\big). \end{align*} Therefore, \[ \log \phi\big(t(1-u_0)\big) = \log \frac1{(m;m)_\infty}+ t - \sqrt{L t} + \frac{d}{4} + \calO\big(t^{-1}\big), \] and \begin{align} \nonumber \psi(u_0) &= -\frac{d}{2} \log u_0 - \frac{L}{u_0} + \log \phi\big(t(1-u_0)\big) \\ \label{eq:39} &= \log \frac1{(m;m)_\infty} -\frac{d}{2} \log \sqrt{\frac{L}{t}} - 2 \sqrt{L t} + t + \calO\big(t^{-1}\big). \end{align} Using \eqref{eq:48} and \eqref{lim:phi-1} supported by \eqref{eq:11} we can write \begin{align} \psi''(u_0) &= \frac{d}{2} \frac{1}{u_0^2} - 2 \frac{L}{u_0^3} + t^2 \bigg\{\frac{\phi''\big(t(1-u_0)\big)}{\phi\big(t(1-u_0)\big)} - \bigg(\frac{\phi'\big(t(1-u_0)\big)}{\phi\big(t(1-u_0)\big)} \bigg)^2\bigg\} \nonumber \\ \nonumber &= \frac{d}{2} \frac{t}{L} - 2 t \sqrt{\frac{t}{L}} + \calO\big(t^{-1}\big) \\ &= -2t \sqrt{\frac{t}{L}} \Big( 1 + \calO\big(t^{-1}\big)\Big).\label{eq:psi_bis_asymp} \end{align} We are now ready to study the asymptotic behavior of the transition density. 
By \eqref{eq:p_gauss}, \eqref{eq:30} and \eqref{psi}, we can write \begin{equation} \label{eq:61} e^t (4 \pi t)^{\frac{d}{2}} p(t; 0, y) = e^{-L} + t \int_0^m u^{-\frac{d}{2}} e^{-\frac{|y|^2}{4 tu}+\log \frac{\Phi(t, u)}{t}} {\: \rm d} u + t I \end{equation} where \[ I = \int_m^1 e^{\psi(u)} {\: \rm d} u. \] Since \[ m^2 + \delta \leq \frac{L}{t} \leq 1 - \delta, \] we have $L > d/2$ for all sufficiently large $t$, and thus, by Lemma \ref{lem:1}, for all $u \in (m, 1)$, \begin{align} \label{eq:53} \psi''(u) &< \frac{d}{2} \frac{1}{u^2} - \frac{L}{u^2}- L\leq - L \leq -m^2 t. \end{align} Our aim is to find the asymptotic behavior of $I$. Let us first focus on the integral over $(m, u_0 - \eta) \cup (u_0 + \eta, 1)$, for some $\eta > 0$. By Taylor's theorem and \eqref{eq:53}, for $u \in (m, 1)$ and some $\lambda \in (0,1)$ depending on $u$, we get \begin{align*} \psi(u) &= \psi(u_0) + \frac{1}{2} \psi''\big(\lambda u + (1-\lambda) u_0 \big) (u - u_0)^2 \\ &\leq \psi(u_0) - \frac{m^2}{2} t (u-u_0)^2. \end{align*} Therefore, using \eqref{eq:psi_bis_asymp}, \begin{align} \nonumber \bigg(\int_m^{u_0-\eta } + \int_{u_0+\eta}^1 \bigg) e^{\psi(u)} {\: \rm d} u &\leq e^{\psi(u_0)} \bigg(\int_m^{u_0-\eta } + \int_{u_0+\eta}^1 \bigg) e^{-\frac{m^2}{2}t (u-u_0)^2} {\: \rm d} u \\ \nonumber & \leq e^{\psi(u_0)} e^{-\frac{m^2 \eta^2}{2} t}\\ &\leq C t^{-1} e^{\psi(u_0)} (-\psi''(u_0))^{-\frac12}. \label{eq:51} \end{align} Next, we treat the integral over $(u_0-\eta,u_0+\eta)$. We write \begin{equation} \label{eq:45} \int_{\abs{u-u_0} \leq \eta} e^{\psi(u)} {\: \rm d} u = e^{\psi(u_0)} \int_{\abs{u - u_0} \leq \eta} e^{\frac{1}{2} \psi''(u_0) (u-u_0)^2} e^{\theta(u)} {\: \rm d} u \end{equation} where \[ \theta(u) = \psi(u) - \psi(u_0) - \frac{1}{2} \psi''(u_0) (u-u_0)^2. \] For $k \geq 2$, we compute \[ \psi^{(k)}(u) = (-1)^{k} (k-1)! \frac{d}{2} \frac{1}{u^k} + (-1)^{k+1} k! \frac{L}{u^{k+1}} + (-1)^k t^k (\log \phi)^{(k)}\big(t(1-u)\big). \] Hence, by Proposition~\ref{prop:log_phi}, there is $C_k$ such that for all $u \in (m, u_0+\eta)$ and $t \geq 1$, \begin{equation} \label{eq:41} \big| \psi^{(k)}(u) \big| \leq C_k t, \end{equation} thus by Taylor's theorem \begin{align} \label{eq:54} \abs{\theta(u)} \leq \frac{C_3}{3!} t \abs{u-u_0}^3. \end{align} If $\abs{u - u_0} \leq \eta$, by \eqref{eq:53}, we obtain \begin{equation} \label{eq:46} \abs{\theta(u)} \leq -\frac{1}{4} \psi''(u_0) \abs{u - u_0}^2 \end{equation} provided that \[ \eta \leq m^2 \frac{3}{2 C_3}. \] Another application of Taylor's theorem together with \eqref{eq:41} gives \begin{equation} \label{eq:44} \Big| \theta(u) - \frac{1}{3!} \psi'''(u_0) (u-u_0)^3 \Big| \leq \frac{C_4}{4!} t \abs{u-u_0}^4. \end{equation} Now, we write \begin{align*} e^{\theta(u)} &= \Big(e^{\theta(u)} - 1 - \theta(u)\Big) \\ &\phantom{=}+ \Big(\theta(u) - \frac{1}{3!} \psi'''(u_0)(u-u_0)^3\Big) \\ &\phantom{=}+ \frac{1}{3!} \psi'''(u_0)(u-u_0)^3 + 1, \end{align*} and we split the integral \eqref{eq:45} into four corresponding integrals. Observe that for $x \in \RR$, \begin{equation} \label{eq:49} \big| e^x - 1 - x\big| \leq \frac{1}{2} x^2 e^{|x|}. \end{equation} Hence, by \eqref{eq:54} and \eqref{eq:46}, the first integrand can be bounded as follows: \begin{align*} \Big|e^{\theta(u)} - 1 - \theta(u) \Big| \leq C t^2 \abs{u-u_0}^6 e^{-\frac{1}{4} \psi''(u_0)(u-u_0)^2}. 
\end{align*} Therefore, by \eqref{eq:53}, \begin{align*} &\bigg| \int_{\abs{u-u_0} \leq \eta} e^{\frac{1}{2}\psi''(u_0) (u-u_0)^2} \big(e^{\theta(u)} - 1 - \theta(u) \big) {\: \rm d} u \bigg| \\ &\qquad\qquad\leq C t^2 \int_{\abs{u - u_0} \leq \eta} e^{\frac{1}{4} \psi''(u_0)(u-u_0)^2} \abs{u-u_0}^6 {\: \rm d} u \\ &\qquad\qquad = C t^2 (-\psi''(u_0))^{-3-\frac{1}{2}} \int_{\abs{u} \leq \eta \sqrt{-\psi''(u_0)}} e^{-\frac{1}{4} \abs{u}^2} \abs{u}^6 {\: \rm d} u \\ &\qquad\qquad\leq C t^{-1} (-\psi''(u_0))^{-\frac{1}{2}}. \end{align*} For the second integral, by \eqref{eq:44} and \eqref{eq:53} we obtain \begin{align*} &\bigg| \int_{\abs{u - u_0} \leq \eta} e^{\frac{1}{2}\psi''(u_0) (u-u_0)^2} \Big(\theta(u) - \frac{1}{3!} \psi'''(u_0)(u-u_0)^3 \Big) {\: \rm d} u \bigg|\\ &\qquad\qquad\leq C t \int_{\abs{u-u_0} \leq \eta} e^{\frac{1}{2}\psi''(u_0) (u-u_0)^2} \abs{u-u_0}^4 {\: \rm d} u\\ &\qquad\qquad\leq C t (-\psi''(u_0))^{-2 - \frac{1}{2}} \int_{\abs{u} \leq \eta \sqrt{-\psi''(u_0)}} e^{-\frac{1}{2} \abs{u}^2} \abs{u}^4 {\: \rm d} u \\ &\qquad\qquad\leq C t^{-1} (-\psi''(u_0))^{-\frac{1}{2}}. \end{align*} The third integral equals zero. Lastly, by \eqref{eq:53}, we have \begin{align*} \int_{\abs{u-u_0} \leq \eta}e^{\frac{1}{2}\psi''(u_0) (u-u_0)^2} {\: \rm d}u &=(-\psi''(u_0))^{-\frac{1}{2}} \int_{\abs{u} \leq \eta \sqrt{-\psi''(u_0)}} e^{-\frac{1}{2} \abs{u}^2} {\: \rm d} u\\ &=(-\psi''(u_0))^{-\frac{1}{2}} \left(\sqrt{2\pi} - \int_{\abs{u} \geq \eta \sqrt{-\psi''(u_0)}} e^{-\frac{1}{2} \abs{u}^2} {\: \rm d} u \right)\\ &= (-\psi''(u_0))^{-\frac{1}{2}} \sqrt{2\pi} \Big(1 + \calO(t^{-1})\Big). \end{align*} Summarizing, we get \begin{align*} I = e^{\psi(u_0)} (-\psi''(u_0))^{-\frac{1}{2}} \sqrt{2\pi} \Big(1 + \calO(t^{-1})\Big). \end{align*} Hence, by \eqref{eq:39} and \eqref{eq:psi_bis_asymp}, \begin{align*} I &= \frac1{(m;m)_\infty} \Big(\frac{L}{t}\Big)^{-\frac{d}{4}} e^{- 2 \sqrt{L t} + t} \Big( 2t \sqrt{\frac{t}{L}} \Big)^{-\frac12} \sqrt{2\pi} \Big(1 + \calO(t^{-1})\Big), \end{align*} and consequently, since $2\sqrt{Lt}=\abs{y}$ and $\sqrt{L/t}=\abs{y}/(2t)$, \begin{align*} t e^{-t} (4 \pi t)^{-\frac{d}{2}} I &= \frac{1}{2} \frac1{(m;m)_\infty} (2 \pi)^{-\frac{d-1}{2}} |y|^{-\frac{d-1}{2}}e^{-|y|} \Big(1 + \calO(t^{-1})\Big). \end{align*} Moreover, by \eqref{eq:61} and Lemma \ref{lem:psi_prim} we get \begin{align*} \Big| p(t; 0, y) - e^{-t} (4 \pi t)^{-\frac{d}{2}}e^{-L} - t e^{-t} (4 \pi t)^{-\frac{d}{2}} I \Big| \leq m t^{1-\frac{d}{2}} e^{-t+\psi(m)}. \end{align*} Therefore, to finish the proof we need to show that \begin{equation} \label{eq:62} t^{-\frac{d}{2}}e^{-t-L} = \abs{y}^{-\frac{d-1}{2}} e^{-\abs{y}} \calO(t^{-1}), \end{equation} and \begin{equation} \label{eq:63} t^{1-\frac{d}{2}}e^{-t+\psi(m)} = \abs{y}^{-\frac{d-1}{2}} e^{-\abs{y}} \calO(t^{-1}). \end{equation} Notice that $t^{-\frac{d}{2}}$ is comparable to $t^{-\frac12}|y|^{-\frac{d-1}{2}}$. Since $L \leq t(1-\delta)$, we have \[ \Big(\sqrt{\frac{L}{t}} - 1 \Big)^2 \geq (1-\sqrt{1-\delta})^2 \] which implies that \[ -t - L \leq -\abs{y}- (1-\sqrt{1-\delta})^2 t \] proving \eqref{eq:62}. Since $t^{1-\frac{d}{2}}$ is comparable to $t^{\frac12}|y|^{-\frac{d-1}{2}}$, and $L \geq t(m^2 + \delta)$, we get \[ \Big(\sqrt{\frac{L}{t}} - m \Big)^2 \geq (\sqrt{m^2+\delta}-m)^2 \] which implies that \[ - \frac{L}{m}- m t \leq -\abs{y}- \frac{(\sqrt{m^2+\delta}-m)^2}{m} t. \] In view of \eqref{psi} and \eqref{eq:10}, we have \[ -t+\psi(m)\leq -\frac{L}{m} -mt +C, \] which proves \eqref{eq:63}, and the theorem follows. 
\end{proof} \begin{remark} We note that the asymptotic behavior of $p(t; 0, y)$ in Theorem~\ref{thm:6} remains valid if $m^2+\delta \leq L/t$ is replaced by \[ \Big(m+\sqrt{\frac{\log f(t)}{t}} \Big)^2 \leq \frac{L}{t} \] where $f$ is a fixed positive function such that \[ \liminf_{t\to\infty} t^{-\frac{3m}{2}} f(t) >0. \] Indeed, Lemma~\ref{lem:psi_prim} still applies, and the only change required in the proof is to show that \[ t^{\frac12} e^{-t+\psi(m)}=e^{-|y|}\calO(t^{-1}), \] or equivalently \[ t^{\frac12} \exp\Big\{-\frac{t}{m}\Big( \sqrt{\frac{L}{t}} -m\Big)^2\Big\} =\calO(t^{-1}), \] which is guaranteed by the proposed condition. \end{remark} \begin{remark} \label{nessgeneralgaussian} Theorems~\ref{thm:5} and~\ref{thm:6} can be easily generalized to the case of $Y_t=\Upsilon B_t$ for $\Upsilon \in \GL(\mathbb{R}, d)$. In such a case, the counterpart of \eqref{eq:p_gauss} is given by \[ p(t; 0, y) = e^{-t} |\det \Upsilon|^{-1} (4\pi t)^{-\frac{d}{2}} e^{-\frac{|\Upsilon^{-1} y|^2}{4t}} +t e^{-t} |\det \Upsilon|^{-1} (4\pi t)^{-\frac{d}{2}} \int_0^1 u^{-\frac{d}{2}} e^{-\frac{|\Upsilon^{-1} y|^2}{4 tu}+ \log \frac{\Phi(t, u)}{t}} {\: \rm d} u. \] Then \eqref{psi} takes the form \[ \psi(u) = -\frac{d}{2} \log u -\frac{L}{u} + \log \phi\big(t(1-u)\big) \] where \[ L = \frac{|\Upsilon^{-1}y|^2}{4 t}, \] cf. \eqref{A}. The statements of Theorems~\ref{thm:5} and~\ref{thm:6} remain valid if $|y|$ is replaced by $|\Upsilon^{-1}y|$ (also in the description of the regions) and the resulting expressions for $p(t;0,y)$ are multiplied by $|\det \Upsilon|^{-1}$. All the proofs are the same. \end{remark} \appendix \section{Strictly stable processes} \label{appendix:A} We begin with a general observation on L\'evy processes in $\mathbb{R}^d$. By \cite[Theorem 8.1]{MR1739520} there is a one-to-one correspondence between L{\'e}vy processes and generating triplets $(A, \nu, \gamma)$ where $A$ is a nonnegative definite real $d \times d$ matrix, $\nu$ is a Borel measure on $\RR^d$ such that \[ \int_{\RR^d} \big(1 \land \norm{x}^2\big) \nu({\rm d} x) < \infty, \] and $\gamma \in \RR^d$. The triplet is helpful in identifying the generator of the process, see \cite[Theorem 31.5]{MR1739520}, which for sufficiently smooth functions $f$ is given by the following expression: \begin{align*} \mathscr{L} f(x) &= \sum_{j, k = 1}^d a_{jk} \partial_{x_j} \partial_{x_k} f(x) + \sprod{\nabla f(x)}{\gamma} + \int_{\RR^d} \Big(f(x+z) - f(x) - \ind{\{\abs{z} < 1\}} \sprod{\nabla f(x)}{z}\Big) \nu({\rm d} z). \end{align*} We are going to describe the infinitesimal generator of the semigroup associated with the process considered on $\calC_0(\RR^d)$ as well as $L^r(\RR^d)$, $r \in [1, \infty)$. Let us recall that \[ \mathcal{B}_r = \begin{cases} L^r(\RR^d) &\text{if } r \in [1, \infty), \\ \calC_0(\RR^d) &\text{if } r = \infty. \end{cases} \] \begin{proposition} \label{prop:9} Let $\mathbf{Y}$ be a L{\'e}vy process in $\mathbb{R}^d$ and $r \in [1, \infty]$. Then the family of operators defined on $\mathcal{B}_r$ as \[ \Big(f \mapsto \EE[f(Y_t + \cdot)]: t > 0\Big) \] forms a strongly continuous contraction semigroup. Let $\mathscr{L}_r$ be its infinitesimal generator with domain $D(\mathscr{L}_r)$. Moreover, if $f \in \calC_0^2(\RR^d)$ is such that $\partial_x^{\mathbf{a}} f \in \mathcal{B}_r$ for each $\mathbf{a} \in \NN_0^d$, $\abs{\mathbf{a}} \leq 2$, then $f \in D(\mathscr{L}_r)$ and \[ \mathscr{L}_r f = \mathscr{L} f. 
\] \end{proposition} \begin{proof} It is well-known that the contractivity and strong continuity are consequences of Minkowski's integral inequality and the continuity of the shift operator, while the semigroup property follows by the Markov property of the process. Since for $f \in \calC_0^2(\RR^d)$, \[ \big\|\mathscr{L}f \big\|_{\mathcal{B}_r} \leq c \sum_{|\mathbf{a}|\leq 2} \left\|\partial^{\mathbf{a}} f \right \|_{ \mathcal{B}_r} \] we have $\mathscr{L}f \in \mathcal{B}_r$, because $\partial^{\mathbf{a}} f \in \mathcal{B}_r$ for each $\mathbf{a} \in \NN_0^d$, $\abs{\mathbf{a}} \leq 2$. Next, by \cite[Theorem 31.5]{MR1739520}, we get \[ \bigg\| \frac{\EE\big[f(Y_t + x)\big]-f(x)}{t} - \mathscr{L} f(x) \bigg\|_{\mathcal{B}_r(x)} \leq \frac1{t} \int_0^t \Big\| \EE\big[\mathscr{L}f(Y_s + x)- \mathscr{L} f(x) \big] \Big\|_{\mathcal{B}_r(x)} {\rm d}s. \] By the strong continuity in $\mathcal{B}_r(x)$, the right hand-side in the last inequality converges to zero as $t$ tends to zero. \end{proof} L{\'e}vy processes in $\RR^d$ are in one-to-one correspondence with their characteristic exponents, see e.g. \cite[Theorems 8.1 and 7.10, and Corollaries 8.3 and 11.6]{MR1739520}, that is \[ \EE e^{i \sprod{x}{Y_t}}=e^{-t\Psi(x)}. \] Non-zero strictly stable processes in $\RR^d$ form a subclass of L{\'e}vy processes in $\RR^d$ indexed by $\alpha\in(0,2]$. For every non-zero strictly $\alpha$-stable process $\mathbf{Y}$ in $\RR^d$, $\alpha \in (0, 2]$, there are unique $A$, $\lambda$ and $\gamma$ such that the characteristic exponent $\Psi$ has the form: \begin{enumerate}[label=\rm (A.\roman*), start=1, ref=A.\roman*] \item \label{en:4:1} if $\alpha=2$, then \[ \Psi(x)=\sprod{x}{Ax} \] where $A$ is a non-zero $d\times d$ symmetric non-negative definite real matrix; \item \label{en:4:2} if $\alpha \in (1, 2)$, then \[ \Psi(x)=\int_{\mathbb{S}} \lambda({\rm d}\xi) \int_0^\infty \big(1-e^{i\sprod{x}{r\xi}} +i \sprod{x}{r\xi} \big) \frac{{\rm d}r}{r^{1+\alpha}} \] where $\lambda$ is a non-zero finite measure on the unit sphere $\mathbb{S}$; \item \label{en:4:3} if $\alpha=1$, \[ \Psi(x)=\int_{\mathbb{S}}\lambda({\rm d}\xi) \int_0^\infty \big(1-e^{i\sprod{x}{r\xi}} + i \sprod{x}{r\xi} \ind{(0,1]}(r)\big) \frac{{\rm d}r}{r^{1+\alpha}}-i\sprod{\gamma}{x} \] where either $\gamma\in\RR^d$ and $\lambda$ is a non-zero finite measure on the unit sphere $\mathbb{S}$ satisfying \[ \int_{\mathbb{S}} \xi\, \lambda({\rm d}\xi)=0 \] or $\gamma \neq 0$ and $\lambda\equiv 0$; \item \label{en:4:4} if $\alpha \in (0, 1)$, \[ \Psi(x)=\int_{\mathbb{S}}\lambda({\rm d}\xi) \int_0^\infty \big(1-e^{i\sprod{x}{r\xi}} \big) \frac{{\rm d}r}{r^{1+\alpha}} \] where $\lambda$ is a non-zero finite measure on the unit sphere $\mathbb{S}$. \end{enumerate} Conversely, for every function $\Psi$ of the form as in \eqref{en:4:1}--\eqref{en:4:4} there is a unique non-zero strictly $\alpha$-stable process $\mathbf{Y}$ in $\RR^d$ with $\alpha\in(0,2]$. Let us observe that the only non-zero trivial L{\'e}vy process is a non-zero constant drift, that is $\alpha=1$ with $\gamma\neq 0$ and $\lambda \equiv 0$. Thus, an alternative representation of characteristic exponents of non-zero strictly stable processes is a consequence of \cite[Theorem 14.10 and Example 18.8]{MR1739520}. The classification \eqref{en:4:1}--\eqref{en:4:4} is a summary of the following results: \cite[Theorem 14.2 and Example 2.10]{MR1739520} for $\alpha=2$ and \cite[Theorem 14.7, Remarks 14.6 and 14.4]{MR1739520} for $\alpha \in (0, 2)$. 
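The following short numerical sketch is purely illustrative and is not used anywhere in the arguments of this appendix. Assuming, for instance, a hypothetical discrete spectral measure $\lambda$ with two atoms on the unit circle and a fixed $\alpha \in (0,1)$, it approximates $\Re \Psi(x)$ directly from the representation in \eqref{en:4:4} and checks numerically the $\alpha$-homogeneity $\Re \Psi(tx) = t^{\alpha}\, \Re \Psi(x)$; the choice of the measure, of $\alpha$, and of the test points is arbitrary.
\begin{verbatim}
# Illustrative sketch only: approximate Re Psi(x) for a two-atom spectral
# measure lambda on the unit circle and alpha in (0,1), following (A.iv),
#   Re Psi(x) = sum_i w_i * int_0^infty (1 - cos(<x, xi_i> r)) r^{-1-alpha} dr,
# and check the homogeneity Re Psi(t x) = t^alpha Re Psi(x).
import numpy as np
from scipy.integrate import quad

alpha = 0.7
atoms = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # hypothetical atoms xi_i
weights = [0.5, 0.5]                                   # hypothetical masses w_i

def radial(s):
    # int_0^infty (1 - cos(s r)) r^{-1-alpha} dr, split at r = 1; on [1, infty)
    # it equals 1/alpha minus the cosine tail int_1^infty cos(s r) r^{-1-alpha} dr.
    s = abs(s)
    if s == 0.0:
        return 0.0
    head, _ = quad(lambda r: (1.0 - np.cos(s * r)) * r ** (-1.0 - alpha), 0.0, 1.0)
    tail, _ = quad(lambda r: r ** (-1.0 - alpha), 1.0, np.inf, weight='cos', wvar=s)
    return head + 1.0 / alpha - tail

def re_psi(x):
    return sum(w * radial(np.dot(x, xi)) for xi, w in zip(atoms, weights))

x = np.array([0.3, -1.2])
for t in (2.0, 5.0):
    # both printed values should agree up to quadrature error
    print(t, re_psi(t * x), t ** alpha * re_psi(x))
\end{verbatim}
For this particular choice of $\lambda$, $\Re \Psi(x)$ is proportional to $|x_1|^{\alpha}+|x_2|^{\alpha}$, cf.\ Example~\ref{ex:names:3} below.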
For the precise definitions of a strictly stable process see \cite[Definitions 13.2 and 13.16]{MR1739520}, and of a non-zero and nontrivial process see \cite[Definition 13.6]{MR1739520}. Let us recall that for a given non-zero strictly $\alpha$-stable process, $\alpha \in (0, 2]$, the generating triplet is identified as follows: \begin{enumerate}[label=\rm (A.\roman*), start=1, ref=A.\roman*] \item if $\alpha = 2$, then take $(A, 0, 0)$; \item if $\alpha \in (1, 2)$, then take $(0, \nu, \gamma)$ where \[ \gamma = -\int_{\{\norm{z} > 1\}} z \nu({\rm d} z); \] \item if $\alpha = 1$, then take $(0, \nu, \gamma)$; \item if $\alpha \in (0, 1)$, then take $(0, \nu, \gamma)$ where \[ \gamma = \int_{\{ |z| \leq 1 \}}z \nu({\rm d}z). \] \end{enumerate} In view of \cite[Theorem 14.3]{MR1739520}, \begin{equation} \label{eq:nu-str_st} \nu(B)=\int_{\mathbb{S}}\lambda({\rm d}\xi) \int_0^\infty \ind{B} (r\xi) \frac{{\rm d}r}{r^{1+\alpha}}, \qquad B \in \mathcal{B}(\RR^d). \end{equation} In our studies, we shall assume that for each $t > 0$, the distribution of $Y_t$ is absolutely continuous with respect to the Lebesgue measure. In fact, for strictly stable processes, this is equivalent to the absolute continuity of the distribution of $Y_1$. The proof of the following lemma is inspired by \cite[Proposition 24.20 and Example 37.19]{MR1739520}. For the definition of a non-degenerate process see \cite[Definitions 24.16 and 24.18]{MR1739520}. \begin{lemma} \label{lem:density} Let $\mathbf{Y}$ be a strictly $\alpha$-stable process in $\RR^d$, $\alpha\in(0,2]$. The following statements are equivalent: \begin{enumerate}[label=\rm (\roman*), start=1, ref=\roman*] \item \label{en:1:1} the distribution of $Y_1$ is absolutely continuous with respect to the Lebesgue measure; \item \label{en:1:2} $\mathbf{Y}$ is non-degenerate; \item \label{en:1:3} if $\alpha = 2$, then $\det A \neq 0$; otherwise for each $x \in \RR^d \setminus \{0\}$, \begin{equation} \label{eq:60} \int_{\mathbb{S}} |\sprod{x}{\xi}| \: \lambda({\rm d}\xi) \neq 0; \end{equation} \item \label{en:1:4} $\Re ( \Psi(x)) \approx |x|^{\alpha}$. \end{enumerate} \end{lemma} \begin{proof} Each of the conditions \eqref{en:1:1}--\eqref{en:1:4} implies that $\mathbf{Y}$ is non-zero. Clearly, \eqref{en:1:1} $\implies$ \eqref{en:1:2} follows from the definition of non-degeneracy. Next, let us assume that \eqref{en:1:2} holds true. If $\alpha=2$, then by \cite[Proposition 24.17]{MR1739520} we must have $\det(A)\neq 0$. If $\alpha\in(0,2)$, by \cite[Theorem 14.10 and Example 18.8]{MR1739520}, we have that \begin{equation} \label{eq:59} \Re (\Psi(x)) = \int_{\mathbb{S}} |\sprod{x}{\xi}|^{\alpha} \lambda_1({\rm d}\xi) \end{equation} where $\lambda_1$ is a positive measure proportional to $\lambda$. Suppose, contrary to our claim, that there is $x_0 \in \RR^d \setminus\{0\}$ such that $\Re (\Psi(x_0)) = 0$. Hence, the measure $\lambda_1$, as well as the L{\'e}vy measure $\nu$, is supported on $x_0^\perp$. This contradicts the non-degeneracy of $\mathbf{Y}$, see \cite[Proposition 24.17]{MR1739520}. This proves \eqref{en:1:2} $\implies$ \eqref{en:1:3}. Let us observe that for $x \in \RR^d \setminus \{0\}$, \[ \Re \Psi(x) = \norm{x}^\alpha \Re \Psi\Big(\tfrac{x}{\norm{x}} \Big). \] Indeed, this follows immediately from \eqref{en:4:1} for $\alpha = 2$, and from \eqref{eq:59} for $\alpha \in (0, 2)$. By continuity and the compactness of the unit sphere, $\Re \Psi$ attains its extrema on $\mathbb{S}$. 
Therefore, to show that \eqref{en:1:3} $\implies$ \eqref{en:1:4}, it is enough to prove that the minimum of $\Re \Psi$ on $\mathbb{S}$ is positive. For $\alpha \in (0, 2)$, this is an easy consequence of \eqref{eq:60}. If $\alpha = 2$, since the matrix $A$ is symmetric and non-negative definite, the condition $\det A \neq 0$ implies that it is in fact positive definite, proving \eqref{en:1:4}. Finally, \eqref{en:1:4} $\implies$ \eqref{en:1:1} easily follows from the Fourier inversion formula. \end{proof} \begin{remark} \label{rem:density} If $d=1$, the only non-zero strictly $\alpha$-stable process which is \emph{degenerate} is a non-zero drift, that is, $\alpha=1$, $\gamma \neq 0$ and $\lambda \equiv 0$. Hence, by Lemma~\ref{lem:density}, for every non-zero strictly $\alpha$-stable process other than a drift, $Y_1$ is absolutely continuous with respect to the Lebesgue measure. \end{remark} If any of the conditions \eqref{en:1:1}--\eqref{en:1:4} in Lemma \ref{lem:density} is satisfied, then for each $t > 0$ the distribution of the random variable $Y_t$ is absolutely continuous with density $p_0(t; 0, \cdot)$. Moreover, $p_0(t; 0, \cdot)$ belongs to $\calC_0^\infty(\RR^d)$, and \begin{align} \label{eq:p_0} p_0(t;x,y)= \bigg(\frac{1}{2\pi}\bigg)^d \int_{\RR^d} e^{-i\sprod{y-x}{z}} e^{-t \Psi(z)} {\: \rm d}z, \end{align} see \cite{Hartman, MR3010850, MR4140542} and \cite[Section 6]{MR3996792} for a broader context. \begin{lemma} \label{lem:A3} Let $\mathbf{Y}$ be a strictly $\alpha$-stable process in $\RR^d$, $\alpha\in(0,2]$, with a density $p_0$. \begin{enumerate}[label=\rm (\roman*), start=1, ref=\roman*] \item \label{en:3:1} For all $x,y\in\RR^d$ and $t,u>0$, \[ p_0(tu; x, y)=t^{-d/\alpha}p_0(u;t^{-1/\alpha}x,t^{-1/\alpha}y). \] \item \label{en:3:2} For every $\ell \in \NN_0$, $\mathbf{a} \in \NN_0^d$ there is a constant $C > 0$ such that for all $y \in \RR^d$ and $u>0$, \[ \partial_u^\ell \,\partial_y^\mathbf{a} p_0(u;0,\cdot) \in \calC_0^\infty\big(\RR^d\big), \] and \[ \big| \partial_u^\ell \partial_y^\mathbf{a} p_0(u;0,y)\big| \leq C u^{-\ell-(d+|\mathbf{a}|)/\alpha}. \] \item \label{en:3:3} There is a constant $C > 0$ such that for all $x,y,z\in\RR^d$ and $u>0$, \begin{align*} p_0(u;0,y-x) \leq C u^{-d/\alpha}, \end{align*} and \begin{align*} \big|p_0(u;0,y+z)-p_0(u;0,y)\big| \leq C |z| u^{-(d+1)/\alpha}. \end{align*} \end{enumerate} \end{lemma} \begin{proof} The proof of \eqref{en:3:1} follows from \eqref{eq:p_0} and the scaling property of $\Psi$, that is $t\Psi(z)=\Psi(t^{1/\alpha}z)$. By Lemma~\ref{lem:density} and \cite[Proposition~2.17]{MR3156646} we obtain that $\Re\Psi(z) \approx |z|^{\alpha}$ and $|\Psi(z)|\leq c (1+|z|^2)$, respectively. Thus the integrands are absolutely integrable, and we can differentiate under the integral sign in \eqref{eq:p_0}. This proves the regularity in \eqref{en:3:2}. Now using the scaling of $\Psi$ we get \[ \partial_u^{\ell} \partial_y^\mathbf{a} p_0(u;0,y)= u^{-\ell-(d+|\mathbf{a}|)/\alpha}\big(\partial_u^\ell \,\partial_y^\mathbf{a} \, p_0\big)\big(1;0,u^{-1/\alpha}y\big) \] which together with the aforementioned estimates on $\Psi$ leads to the upper bound in \eqref{en:3:2}. Finally, \eqref{en:3:3} easily follows from \eqref{en:3:2}. \end{proof} \begin{example} \label{ex:names:1} Let $\mathbf{Y}$ be an $\alpha$-stable subordinator with $\alpha\in (0,1)$, and let $d = 1$. If $\lambda({\rm d}\xi)= \frac{\alpha}{\Gamma\left(1-\alpha\right)} \delta_{1}({\rm d}\xi)$, then \[ \nu({\rm d}s)=\frac{\alpha}{\Gamma\left(1-\alpha\right)} \ind{(0, \infty)}(s) \frac{{\rm d} s}{s^{1+\alpha}}. 
\] Moreover, the Laplace transform is $\EE e^{- u Y_t}= e^{-t u^{\alpha}}$, $u \geq 0$. See \cite[Example 24.12]{MR1739520}. \end{example} \begin{example} \label{ex:names:2} Let $\mathbf{Y}$ be an isotropic $\alpha$-stable process with $\alpha\in (0,2)$. If $\lambda({\rm d}\xi)=c \lambda_0({\rm d}\xi)$ (and $\gamma=0$ for $\alpha=1$) where $\lambda_0$ is the uniform probability measure on the unit sphere $\mathbb{S}$, and \[ c^{-1}=c_0(d,\alpha)=2^{-\alpha}\frac{\Gamma(d/2)\Gamma((2-\alpha)/2)}{\alpha \Gamma((\alpha+d)/2)}. \] Then \[ \nu({\rm d}z)= \mathscr{A}_{d,\alpha} |z|^{-d-\alpha} {\: \rm d}z, \quad\text{ and }\quad \mathscr{A}_{d,\alpha} =\frac{2^\alpha \Gamma((d+\alpha)/2)}{\pi^{d/2} |\Gamma(-\alpha/2)|}, \] and $\EE e^{i\sprod{x}{Y_t}}=e^{-t|x|^\alpha}$. See \cite[Example 18.9]{MR1739520}. \end{example} \begin{example} \label{ex:names:3} Let $\mathbf{Y}$ be a cylindrical $\alpha$-stable process with $\alpha\in (0,2)$. If $d \geq 2$, then \[ \lambda ({\rm d\xi})=\sum_{k=1}^d \Big( \frac{c}{2} \big(\delta_{-1}({\rm d}\xi_k) +\delta_{1}({\rm d}\xi_k)\big) \prod_{\substack{j=1 \\ j\neq k}}^d \delta_0 ({\rm d}\xi_j)\Big) \] where $c^{-1} = c_0(1, \alpha)$. If $\alpha = 1$, we also have $\gamma = 0$. Then \[ \nu({\rm d}z) = \sum_{k=1}^d \Big( \mathscr{A}_{1,\alpha} |z_k|^{-1-\alpha} {\: \rm d}z_k \prod_{\substack{j=1 \\ j \neq k}}^d \delta_0 ({\rm d} z_j) \Big), \] and $\EE e^{i\sprod{x}{Y_t}}=\exp\{-t (|x_1|^\alpha+\ldots +|x_d|^\alpha)\}$. \end{example} \begin{example} Let $\mathbf{Y}$ be Brownian motion. If $A=I$ (identity matrix), then \[ p_0(t;x,y)=(4\pi t)^{-d/2}e^{-\frac{|y-x|^2}{4t}}, \] and $\EE e^{i\sprod{x}{Y_t}} = \exp\{-t \norm{x}^2\}$. \end{example} There is also a formula for the transition density of $\frac{1}{2}$-stable subordinator, see e.g. \cite[Example 2.13]{MR1739520}. Namely, \[ p_0(t;x,y)= (4\pi)^{-1/2} t (y-x)^{-3/2} e^{- \frac{t^2}{4(y-x)}} \ind{\{x<y\}}. \] Another example covered in the article with an explicit formula for the transition density is the Cauchy process, i.e., a L{\'e}vy process in $\RR^d$ with the characteristic function \[ \EE e^{i\sprod{x}{Y_t}}=e^{-t(|x|- i \sprod{\gamma}{x})}. \] For $\gamma=0$, the formula reduces to the isotropic $1$-stable process, called the \emph{symmetric Cauchy process}. For general $\gamma\in\RR^d$ it is a strictly $1$-stable process with the density, see e.g. \cite[Example 2.12]{MR1739520}, \[ p_0(t;x,y)= \frac{\Gamma\big(\tfrac{d+1}{2}\big)}{\pi^{\frac{d+1}{2}}} \frac{t}{\big(|y-x-\gamma t|^2+t^2\big)^\frac{d+1}{2} }. \] \section{Further comments} \label{appendix:B} \subsection{Literature overview} Resetting describes an evolutionary pattern encountered in a variety of physical phenomena (see \cite{Gupta, MR4093464} for survey). Apart from the topics already described in Introduction, see Section \ref{sec:Intro}, it models reset algorithms (see e.g. \cite{Avrach, MR2023017}), and appears as the fluid limit for some queuing models with binomial catastrophe rates (see e.g. \cite{Artalejo}). Such processes can be viewed as particular examples of the so-called shot noise model. They are used in modeling earthquakes or layers of sedimentary material accumulated in depositional environments, but not subject to subsequent erosion (see e.g. \cite{18}). It can also model snow avalanches or neuron firings (see e.g. \cite{MR1780992, MR2213972, MR2044928,MR1102872}). Applications of resetting procedure have also been discussed in the context of backtrack recovery in RNA polymerase (see e.g. 
\cite{4, 9}), enzymatic velocity (see e.g. \cite{16}), pollination strategies (see e.g. \cite{17}), enzymatic inhibition (see e.g. \cite{2}), stochastic thermodynamics (see e.g. \cite{21}), and quantum mechanics (see e.g. \cite{22}). The experimental realization of resetting has also been confirmed using switching holographic optical tweezers (see \cite{8} for details). Partial resetting also models disasters and catastrophes (see e.g. \cite{MR4189343}), for example, in the context of population dynamics (see e.g. \cite{Gerber, Assaf, 33, 34}). The risk process with partial resetting has been found to be an indispensable part of the so-called microinsurance policies (see e.g. \cite{Corina}). In \cite{Rev} it has been pointed out that a process with resetting can be used to model certain chemical reactions. Additionally, the partial resetting mechanism has also been observed in granular gases, where collisions quasi-instantaneously reduce the relative speed of two particles to a fraction of its value (see e.g. \cite{27}), and in soil contamination, where rainfall (or other) events leach the contaminant to a fraction of its level (see e.g. \cite{28, 29, 30, 18}). Other possible applications of the partial resetting procedure include vegetation biomass with fires (see e.g. \cite{31, 32}), modeling of financial crashes (see e.g. \cite{35, 36, Mendoza, Sor}), and queueing with random clearing events (see e.g. \cite{38}). Much attention has been devoted to the fact that resetting can produce non-equilibrium steady states (see e.g. \cite{Eule, Malakar, Gupta, Evans, Evans1, MR4093464, MR3476293, Pal, White, frenchNESS}). In most of these papers, the density of the stationary measure of the process with resetting is described either numerically or in explicit form (e.g. in the case of Brownian motion with total resetting), and arguments are then given to show that the so-called balance conditions are violated. Roughly speaking, these conditions check whether the time-reversed process with resetting, viewed from its invariant law, has the same law as the original process started from the invariant law. In \cite{frenchNESS}, in the context of NESS, the authors discussed the duality of the generator for the symmetric exclusion process with reservoirs. \subsection{Fourier transforms} Let $\mathbf{Y}$ be an isotropic strictly $\alpha$-stable process. Let $\mathbf{X}$ be obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Let $p$ denote the transition density of the process $\mathbf{X}$. Following \cite{jaifizycy}, we can formally compute the Laplace--Fourier transform of $p$ as well as the Fourier transform of the density $\rho_{\mathbf{Y}}$ of the stationary measure for $\mathbf{X}$. Indeed, applying the Laplace--Fourier transform to both sides of the second equation of Theorem \ref{thm:H+F-P} with $\mathscr{L}^*f(x)=\mathscr{L}f(x)=\Delta^{\alpha/2}f(x)$ (with the usual understanding that for $\alpha=2$ we have $\Delta^{\alpha/2}=\Delta$) we obtain \begin{equation*} \label{Fp} r\mathcal{F}(p)(\theta, r)-1= -\mathcal{F}(p)(\theta, r)\big(|\theta|^\alpha+1\big)+ \mathcal{F}(p)(c\theta, r), \quad\text{for } r\geq 0, \theta \in \RR \end{equation*} where \[ \mathcal{F}(p)(\theta, r)=\int_0^\infty e^{-rt}\mathcal{F}(p(t; 0,\cdot ))(\theta) {\: \rm d} t. \] Hence, it is expected that \begin{equation*} \label{Fpa} \mathcal{F}(p)(\theta, r)=\sum_{k=0}^\infty \prod_{j=0}^k \frac{1}{1+|\theta|^\alpha m^{j}+r}. 
\end{equation*} Similarly, using the first equation in Theorem \ref{thm:H+F-P} we obtain \begin{equation*} \label{Fpi} -\mathcal{F}(\rho_{\mathbf{Y}})(\theta)\big(|\theta|^\alpha+1\big)+ \mathcal{F}(\rho_{\mathbf{Y}})(c\theta)=0, \end{equation*} and thus \begin{equation*} \label{Fpb} \mathcal{F}(\rho_{\mathbf{Y}})(\theta)=\prod_{j=0}^\infty \frac{1}{1+|\theta|^\alpha m^{j}}. \end{equation*} Note that from Theorem~\ref{thm:H+F-P} it follows that $\rho_{\mathbf{Y}}$ satisfies a pantograph-type functional-differential equation, which has a rich structure (see e.g. \cite{iserles_1993, WISNIEWOLSKI2024600}). \subsection{L\'evy process with partial resetting and autoregressive process of order $1$} Let $\mathbf{Y}$ be a one-dimensional L\'evy process with a transition density $p_0$. Let $\mathbf{X}$ be obtained from $\mathbf{Y}$ by partial resetting with factor $c \in (0, 1)$. Then the stationary distribution of $\mathbf{X}$ exists, has a density, and coincides with the stationary distribution of the process $\mathbf{X}$ observed at the Poisson epochs only. Indeed, according to \cite[Definition 1]{MR1157423}, $\mathbf{X}$ admits an embedded recursive chain $\mathbf{Z}=(Z_n =X_{T_n}: n \in \NN_0)$, which is an autoregressive process of order $1$ defined via \[ \begin{cases} Z_0 = 0, \\ Z_{n+1} =c Z_n + Y_{T_{n+1}-T_n}, &\quad n \in \NN_0. \end{cases} \] By \cite[Theorem 1(3), Theorem 8 and Theorem 2]{MR1157423}, it is enough to show that $\mathbf{Z}$ converges in total variation towards its stationary law. The latter follows from \cite[Theorem 2.1 and Remark B]{MR1956829} and \cite[Theorem 2.8]{MR1894253}, since $\mathbf{Z}$ is an aperiodic, positive Harris recurrent Markov chain that has an absolutely continuous distribution (see also \cite[Theorem 13.0.1]{MR1287609}). Furthermore, denoting by $X_\infty$ a random variable whose distribution is the invariant law of $\mathbf{X}$, we obtain the affine equation \begin{equation} \label{affine} X_\infty \eqdistr cX_\infty+ Y_{T_1}, \end{equation} where on the right-hand side $X_\infty$ and $Y_{T_1}$ are independent. In particular, if $\mathbf{Y}$ is a one-dimensional isotropic strictly $\alpha$-stable process, then by \cite{Grinc} and \cite[Lemma A.3]{MikoschSamrodnitsky} we get \begin{enumerate} \item if $\alpha \in (0, 2)$, then \begin{align*} \mathbb{P} (\pm X_\infty >y) &\sim (1-c^{\alpha})^{-1}\mathbb{P}( \pm Y_{T_1}>y) \\ &\sim \frac{1}{2\alpha \Gamma(-\alpha)\cos \left(\frac{(2-\alpha)\pi}{2}\right)}y^{-\alpha}, \qquad \text{as } y \rightarrow + \infty; \end{align*} \item if $\alpha = 2$, then \[ \mathbb{P} (X_\infty >y) \sim \frac{1}{(m;m)_\infty}\mathbb{P} (Y_{T_1} >y) \qquad\text{as } y \rightarrow +\infty. \] \end{enumerate} \subsection{Generalized Ornstein--Uhlenbeck process and exponential functional} Observe that the one-dimensional process $\mathbf{X}$ obtained from the L\'evy process $\mathbf{Y}$ by partial resetting is a solution of the stochastic differential equation \begin{equation} \label{sde1} \mathrm{d}X_t=(c-1)X_{t-}\;\mathrm{d}N_t+{\rm d} Y_t, \end{equation} hence it is a special case of the so-called generalized Ornstein--Uhlenbeck process. Therefore, $X_\infty$, being the solution of \eqref{affine}, has the same law as the following exponential functional \[ D= \int_0^\infty e^{-\log (1/c)N_{s-}}\,\mathrm{d} Y_s = \int_0^\infty c^{N_{s-}}\,\mathrm{d} Y_s, \] see e.g. \cite{Behme}. \section*{Acknowledgement} The second author acknowledges that this research was partially supported by the Polish National Science Centre grant No.~2021/41/B/HS4/00599. 
\section*{Data availability} We do not analyse or generate any datasets, because our work proceeds within a theoretical and mathematical approach. \section*{Conflict of interest} On behalf of all authors, the corresponding author states that there is no conflict of interest. \begin{thebibliography}{10} \bibitem{Corina} J.~Akahori, C.~Imamura~Y. Constantinescu, and H.~Pham, \emph{An application of risk theory to mortgage lending}, Scand. Actuar. J. \textbf{5} (2022), 447--469. \bibitem{MR1956829} G.~Alsmeyer, \emph{On the {H}arris recurrence of iterated random {L}ipschitz functions and related convergence rate results}, J. Theoret. Probab. \textbf{16} (2003), no.~1, 217--247. \bibitem{Artalejo} J.R Artalejo, A.~Economou, and M.J. Lopez-Herrero, \emph{Evaluating growth measures in populations subject to binomial and geometric catastrophes}, Math. Biosci. Eng. \textbf{4} (2006), no.~4, 573--94. \bibitem{Assaf} M.~Assaf, A.~Kamenev, and B.~Meerson, \emph{Population extinction risk in the aftermath of a catastrophic event}, Phys. Rev. E \textbf{79} (2009), 011127. \bibitem{Avrach} K.~Avrachenkov, A.~Piunovskiy, and Y.~Zhang, \emph{Markov processes with restart}, J. Appl. Probab. \textbf{50} (2013), no.~4, 960--968. \bibitem{MR1894253} B.~Basrak, R.A. Davis, and T.~Mikosch, \emph{Regular variation of {GARCH} processes}, Stochastic Process. Appl. \textbf{99} (2002), no.~1, 95--115. \bibitem{Behme} Anita Behme, \emph{Generalized {Ornstein--Uhlenbeck} processes and extensions}, Ph.D. thesis, Braunschweig Universite, 2011. \bibitem{Bel} W.J. Bell, \emph{Searching behaviour: The behavioural ecology of finding resources}, Chapman and Hall, 1991. \bibitem{Ben} O.~B\'enichou, M.~Coppey, M.~Moreau, P-H. Suet, and R.~Voituriez, \emph{Optimal search strategies for hidden targets}, Phys. Rev. Lett. \textbf{94} (2005), 198101. \bibitem{19} O.~B\'enichou, C.~Loverdo, Moreau M., and R.~Voituriez, \emph{Intermittent search strategies}, Rev. Mod. Phys. \textbf{83} (2011), 81. \bibitem{16} A.~Berezhkovskii, A.~Szabo, R.~Urbakh, and A.~Kolomeisky, \emph{Dependence of the enzymatic velocity on the substrate dissociation rate}, J. Phys. Chem. B \textbf{121} (2016), 3437. \bibitem{MR1324786} P.~Billingsley, \emph{Probability and measure}, third ed., Wiley Series in Probability and Mathematical Statistics, John Wiley \& Sons, Inc., New York, 1995, A Wiley-Interscience Publication. \bibitem{BlumenthalGetoor} R.M. Blumenthal and R.K. Getoor, \emph{Some theorems on stable processes}, Trans. Amer. Math.Soc. \textbf{95} (1960), 263--273. \bibitem{MR2013738} K.~Bogdan, A.~St\'{o}s, and P.~Sztonyk, \emph{Harnack inequality for stable processes on {$d$}-sets}, Studia Math. \textbf{158} (2003), no.~2, 163--198. \bibitem{MR1157423} A.A. Borovkov and S.G. Foss, \emph{Stochastically recursive sequences and their generalizations}, Siberian Adv. Math. \textbf{2} (1992), no.~1, 16--81. \bibitem{MR1780992} K.~Borovkov and D.~Vere-Jones, \emph{Explicit formulae for stationary distributions of stress release processes}, J. Appl. Probab. \textbf{37} (2000), no.~2, 315--321. \bibitem{MR3156646} B.~B\"ottcher, R.~Schilling, and J.~Wang, \emph{L\'evy matters. {III}}, Lecture Notes in Mathematics, vol. 2099, Springer, Cham, 2013. \bibitem{MR2213972} O.~Boxma, D.~Perry, W.~Stadje, and Sh. Zacks, \emph{A {M}arkovian growth-collapse model}, Adv. in Appl. Probab. \textbf{38} (2006), no.~1, 221--243. \bibitem{27} N.V. Brilliantov and T.~Poschel, \emph{Kinetic theory of granular gases}, Oxford University Press, 2004. 
\end{thebibliography} \end{document}
\documentclass[11pt]{article} \usepackage{amssymb,amsfonts,amsmath,latexsym,epsf,tikz,url} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{observation}[theorem]{Observation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{rem}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \newcommand{\proof}{\noindent{\bf Proof.\ }} \newcommand{\qed}{\hfill $\square$\medskip} \textwidth 14.5cm \textheight 21.0cm \oddsidemargin 0.4cm \evensidemargin 0.4cm \voffset -1cm \begin{document} \title{On the number of connected edge cover sets in a graph} \author{Mahsa Zare$^1$ \and Saeid Alikhani$^{1,}$\footnote{Corresponding author} \and Mohammad Reza Oboudi$^2$} \date{\today} \maketitle \begin{center} $^1$Department of Mathematical Sciences, Yazd University, 89195-741, Yazd, Iran\\ {\tt [email protected][email protected]}\\ $^2$Department of Mathematics, College of Science, Shiraz University, Shiraz, Iran {\tt mr\[email protected]} \end{center} \begin{abstract} Let $G=(V,E)$ be a simple graph of order $n$ and size $m$. A connected edge cover set of a graph is a subset $S$ of edges such that every vertex of the graph is incident to at least one edge of $S$ and the subgraph induced by $S$ is connected. We initiate the study of the number of connected edge cover sets of a graph $G$ with cardinality $i$, denoted by $e_{c}(G,i)$, and consider the generating function for $e_{c}(G,i)$, which is called the connected edge cover polynomial of $G$. After obtaining some general results for this polynomial, we compute it for certain families of graphs. \end{abstract} \noindent{\bf Keywords:} Edge cover number, connected edge cover number, cubic graphs. \medskip \noindent{\bf AMS Subj.\ Class.}: 05C30, 05C69. \section{Introduction} Let $G=(V,E)$ be a simple graph. The {\it order} and the {\it size} of $G$ are the number of vertices and the number of edges of $G$, respectively. For every graph $G$ with no isolated vertex, an edge covering of $G$ is a set of edges of $G$ such that every vertex is incident with at least one edge of the set. In other words, an edge covering of a graph is a set of edges which together meet all vertices of the graph. A minimum edge covering is an edge covering of the smallest possible size. The edge covering number of $G$ is the size of a minimum edge covering of $G$ and is denoted by $\rho(G)$. We let $\rho(G)=0$ if $G$ has isolated vertices. For a detailed treatment of these parameters, the reader is referred to~\cite{saeid1,JAS,bond,GRo}. Let $\mathcal{E}(G,i)$ be the family of all edge coverings of a graph $G$ with cardinality $i$ and let $e(G,i)=|{\mathcal{E}}(G,i)|$. The {\it edge cover polynomial} $E(G,x)$ of $G$ is defined as \[ E(G, x)=\sum_{ i=\rho(G)}^{m} e(G, i) x^{i}, \] where $\rho(G)$ is the edge covering number of $G$. Also, for a graph $G$ with some isolated vertices we define $E(G, x) = 0$. Let $E(G, x) = 1$ when both the order and the size of $G$ are zero (see \cite{saeid1}). In \cite{saeid1}, the authors characterized all graphs whose edge cover polynomials have exactly one or two distinct roots and, moreover, proved that these roots are contained in the set $\{-3,-2,-1, 0\}$. In \cite{JAS}, the authors constructed some infinite families of graphs whose edge cover polynomials have only the roots $-1$ and $0$. Also, they studied the edge coverings and edge cover polynomials of cubic graphs of order $10$. As a consequence, they showed that all cubic graphs of order $10$ (in particular, the Petersen graph) are determined uniquely by their edge cover polynomials.
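For example, the only edge coverings of the path $P_{4}$ are its two pendant edges together and its whole edge set, so $E(P_{4},x)=x^{2}+x^{3}$; similarly, every pair of edges of the triangle $C_{3}$ covers all three vertices, so $E(C_{3},x)=3x^{2}+x^{3}$.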
Motivated by the edge cover number, we consider the following definition. \begin{definition} A {\it connected edge cover set} of a graph $G$ is a subset $S$ of edges such that every vertex of $G$ is incident to at least one edge of $S$ and the subgraph induced by $S$ is connected. The connected edge cover number of $G$, denoted by $\rho_{c}(G)$, is the minimum cardinality of a connected edge cover set of $G$. \end{definition} Also, we state the following definition for the connected edge cover polynomial. \medskip \begin{definition} The {\it connected edge cover polynomial} of $G$ is the polynomial \[ E_{c}(G,x)=\sum_{i=1}^{m} e_{c}(G,i)x^{i}, \] where $e_{c}(G,i)$ is the number of connected edge cover sets of size $i$. \end{definition} For two graphs $G$ and $H$, the corona $G\circ H$ is the graph arising from the disjoint union of $G$ with $|V(G)|$ copies of $H$, by adding edges between the $i$th vertex of $G$ and all vertices of the $i$th copy of $H$. The corona $G\circ K_1$, in particular, is the graph constructed from a copy of $G$, where for each vertex $v\in V(G)$, a new vertex $u$ and a pendant edge $\{v, u\}$ are added. It is easy to see that the corona operation of two graphs is not commutative. \medskip The generalized friendship graph, usually denoted by $F_{n,m}$, is a collection of $n$ cycles (all of order $m$) meeting at a common vertex. \medskip Two graphs $G$ and $H$ are said to be connected edge covering equivalent, or simply {\it ${\mathcal{E}_{c}}$-equivalent}, written $G\sim_{c}H$, if $E_{c}(G,x)=E_{c}(H,x)$. It is evident that the relation $\sim_{c}$ of ${\mathcal{E}_{c}}$-equivalence is an equivalence relation on the family ${\cal G}$ of graphs, and thus ${\cal G}$ is partitioned into equivalence classes, called the {\it ${\mathcal{E}_{c}}$-equivalence classes}. Given $G\in {\cal G}$, let \[ [G]=\{H\in {\cal G}:H\sim_{c} G\}. \] We call $[G]$ the equivalence class determined by $G$. A graph $G$ is said to be connected edge covering unique, or simply {\it $E_{c}$-unique}, if $[G]=\{G\}$. \medskip In this paper, we obtain the connected edge cover polynomial for certain graphs. \section{Connected edge cover polynomial} Here, we state some new results on the connected edge cover number and the connected edge cover polynomial. The following theorem is easy to obtain: \begin{theorem} For every natural number $n\geq 3$, \begin{enumerate} \item [(i)] $E_{c}(K_{n},x)=E(K_{n},x)-\sum_{ i=\lceil n/2\rceil}^{n-2} e(K_{n}, i) x^{i}$. \item[(ii)] $\rho_{c}(C_{n})=n-1$ and $E_{c}(C_{n},x)=\sum_{ i=n-1}^{n} {n \choose i} x^{i}$. \item[(iii)] For every natural number $n\geq 5$, $E_{c}(P_{n},x)= x^{n-1}$. \end{enumerate} \end{theorem} \medskip \begin{theorem} For all natural numbers $n$ and $m\geq 3$, $E_{c}(F_{n,m},x)=\sum_{i=0}^{n} {n \choose i} m^{i} x^{mn-i}$. \end{theorem} \begin{proof} The graph $F_{n,m}$ has $mn$ edges and $\rho_{c}(F_{n,m})=n(m-1)$. To construct a connected edge cover set of $F_{n,m}$ with cardinality $mn-i$, it is enough to choose, in each of $i$ of the $n$ cycles, $m-1$ of its $m$ edges, and to take all edges of the remaining cycles. So $e_c(F_{n,m},mn-i)={n \choose i} m^{i}$, and the result follows. \qed \end{proof}
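\begin{example} A small case of the previous theorem can be checked directly: the bowtie graph $F_{2,3}$ consists of two triangles sharing a common vertex, and a subset of its six edges is a connected edge cover set exactly when it omits at most one edge from each triangle. Hence \[ E_{c}(F_{2,3},x)=\sum_{i=0}^{2}{2 \choose i}3^{i}x^{6-i}=x^{6}+6x^{5}+9x^{4}. \] \end{example}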
\begin{theorem} If $G$ is a graph of order $n$ and $E_{c}(G,x)=E_{c}(K_{n},x)$, then $G=K_{n}$. \end{theorem} \begin{proof} Since the degree of $E_{c}(K_{n},x)$ is $m=\frac{n(n-1)}{2}$ and $E_{c}(G,x)=E_{c}(K_{n},x)$, the graph $G$ has size $m$. On the other hand, the only connected graph of order $n$ and size $m=\frac{n(n-1)}{2}$ is $K_{n}$. Therefore $G=K_{n}$.\qed \end{proof} Here, we obtain a recursive formula for the connected edge cover polynomial of graphs. Let $u\in V(G)$. By $N_u$ we mean the set of all edges of $G$ incident with $u$. \begin{theorem}\label{main} Let $G$ be a graph, $u, v\in V(G)$ and $uv\in E(G)$. Then $$ E_{c}(G, x)=(x+1)E_{c}(G\setminus uv, x)+xE_{c}(G\setminus v, x)+xE_{c}(G\setminus u, x) .$$ \end{theorem} \begin{proof} If $G$ has an isolated vertex, then $G$ is a disconnected graph, so there is nothing to prove. Suppose that $\delta(G)\geq1$ and $S$ is a connected edge covering set of $G$ of size $i$. \begin{itemize} \item If $uv\notin S$, then we have two cases: \begin{enumerate} \item[(1)] $\deg(v)=1$ or $\deg(u)=1$. Then $u$ or $v$ is not covered by $S$, so this case cannot occur. \item[(2)] $\deg(v)>1$ and $\deg(u)>1$. So $S$ is a connected edge covering set of $G\setminus uv$ with size $i$. \end{enumerate} \item If $uv\in S$, then we have the following cases: \begin{enumerate} \item[(i)] $|S\cap N_{u}|=|S\cap N_{v}|=1$. In this case the subgraph induced by $S$ is disconnected, which is impossible. \item[(ii)] $|S\cap N_{u}|>1$ and $|S\cap N_{v}|=1$. Therefore $S\setminus uv$ is a connected edge covering set of $G\setminus v$ with size $i-1$. \item[(iii)] $|S\cap N_{u}|= 1$ and $|S\cap N_{v}|>1$. Therefore $S\setminus uv$ is a connected edge covering set of $G\setminus u$ with size $i-1$. \item[(iv)] $|S\cap N_{u}|>1$ and $|S\cap N_{v}|>1$. Therefore $S\setminus uv$ is a connected edge covering set of $G\setminus uv$ with size $i-1$. \end{enumerate} \end{itemize} So we have $$ e_{c}(G, i)= e_{c}(G\setminus uv, i)+ e_{c}(G\setminus v, i-1)+ e_{c}(G\setminus u, i-1)+ e_{c}(G\setminus uv, i-1), $$ and so we have the result. \qed \end{proof} \medskip By Theorem \ref{main}, we have the following corollary: \begin{corollary} \begin{enumerate} \item[(i)] For every natural number $n\geq 3$, $E_{c}(P_{n}, x)= xE_{c}(P_{n-1}, x)$. \item[(ii)] For every natural number $n\geq 4$, $E_{c}(C_{n}, x)= xE_{c}(C_{n-1}, x)+x^{n-1}$. \end{enumerate} \end{corollary} Here, we consider the connected edge cover number and the connected edge cover polynomial for the corona of some graphs. \begin{theorem} \begin{enumerate} \item [(i)] For any connected graph $G$ of order $n$, $\rho_{c}(G\circ K_{1})=2n-1$. \item[(ii)] For any natural number $n\geq3$, and for every $i$, $2n-1\leq i\leq n+\frac{n(n-1)}{2}$, $$ e_{c}(K_{n}\circ K_{1}, i)={\frac{n(n-1)}{2} \choose i-n}-n{n-1 \choose i-n} .$$ \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item [(i)] If $S$ is a connected edge covering of $G\circ K_{1}$, then $S$ contains the $n$ pendant edges which connect the vertices of $G$ to the copies of $K_{1}$, and at least $n-1$ edges of $G$ are needed to connect the vertices of $G$ to each other. So we have $|S|\geq 2n-1$, and the result follows. \item[(ii)] Any connected edge cover set of $K_{n}\circ K_{1}$ of size $i$ must contain the $n$ pendant edges. The remaining $i-n$ edges are chosen from the edges of $K_{n}$ in such a way that the resulting subgraph is connected. Therefore, we have the result. \qed \end{enumerate} \end{proof}
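\begin{example} A small case of part (i) can be checked directly: for the path $P_{3}$, the corona $P_{3}\circ K_{1}$ is the comb on six vertices, and a connected edge cover set must contain the three pendant edges together with both edges of $P_{3}$. Hence $E_{c}(P_{3}\circ K_{1},x)=x^{5}$ and $\rho_{c}(P_{3}\circ K_{1})=5=2\cdot 3-1$, as predicted by the theorem. \end{example}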
\medskip \begin{theorem} Let $G$ be a connected graph of order $n$ and size $m$. If $E_{c}(G,x)=\sum_{i=1}^{m} e_{c}(G,i)x^{i}$, then the following hold: \begin{enumerate} \item[(i)] $E_{c}(G, x)$ is a monic polynomial of degree $m$. \item[(ii)] $n\leq \rho_{c}(G)+1$. \item[(iii)] For $i\geq m-\delta+1$, $e_{c}(G, i)={m \choose i}$. Moreover, if $i_{0}=\min \lbrace i \mid e_{c}(G, i)={m \choose i}\rbrace$, then $\delta=m-i_{0}+1$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item[(i)] Since $E(G)$ is the unique connected edge covering of $G$ of size $m$, the result follows. \item[(ii)] A connected edge cover set of $G$ covers all $n$ vertices and induces a connected subgraph, so it contains at least $n-1$ edges. Hence $\rho_{c}(G)\geq n-1$, that is, $n\leq \rho_{c}(G)+1$. \item[(iii)] Let $i\geq m-\delta+1$. So every subset $S\subseteq E(G)$ of size $i$ is a connected edge covering of $G$. Now, suppose that $i \leq m-\delta$. Consider a vertex $v$ of degree $\delta$. Let $A\subseteq E(G)\setminus N_{v}$ be such that $|A|=i$. Clearly, $A$ does not cover $v$, so it is not a connected edge covering of $G$. Hence $e_c(G,i)<{m\choose i}$. \qed \end{enumerate} \end{proof} \medskip \begin{corollary} Let $G$ and $H$ be two connected graphs of size $m_{1}$ and $m_{2}$, respectively. If $E_{c}(H, x)=E_{c}(G, x)$, then $\rho_{c}(G)=\rho_{c}(H)$, $m_{1}=m_{2}$ and $\delta(G)=\delta(H)$. \end{corollary} \medskip \section{Cubic graphs of order $6$, $8$ and the Petersen graph} In this section, we compute the number of connected edge cover sets of size $\rho_{c}$ for the cubic graphs of order $6$ and $8$, and for the Petersen graph. Domination polynomials of cubic graphs of order $10$ were studied in \cite{turk}, and coalitions of cubic graphs of order at most $10$ were studied in \cite{CCO}. The cubic graphs of order $6$ are shown in Figure \ref{1}. \medskip \begin{figure}[h!] \centering \includegraphics[scale=0.8]{C6} \caption{Cubic graphs of order 6} \label{1} \end{figure} The following result gives $e_c(G_1, \rho_{c}(G_1))$ and $e_c(G_2, \rho_{c}(G_2))$ for the cubic graphs of order $6$. \begin{theorem} \label{cub6} $e_{c}(G_{1},5)= e_{c}(G_{2}, 5)=81$. \end{theorem} \begin{proof} Consider the graphs $G_1$ and $G_2$ in Figure \ref{1}. To construct a connected edge covering set $S$ of size $5$: \noindent $\bullet$ Choose $5$ edges from the cycle $\{ \{ 1,2 \},\{ 2,3 \},\{ 3,4 \},\{ 4,5 \},\{ 5,6 \},\{ 6,1\} \}$ in Figure \ref{1}. So we have $6$ distinct sets. \noindent $\bullet$ Choose $4$ edges from the cycle $\{ \{ 1,2 \},\{ 2,3 \},\{ 3,4 \},\{ 4,5 \},\{ 5,6 \},\{ 6,1\} \}$ and one more edge one of whose end-vertices is not on the $4$ chosen edges. So we have ${6 \choose 4}{1 \choose 1}=15$ distinct connected edge covering sets. \noindent $\bullet$ Choose $3$ edges from the cycle $\{ \{ 1,2 \},\{ 2,3 \},\{ 3,4 \},\{ 4,5 \},\{ 5,6 \},\{ 6,1\} \}$ and $2$ edges from $\{ \{ 1,4 \}, \{ 2,6 \}, \{ 3,5 \} \}$, except for the cases in which the $3$ chosen edges of the cycle are consecutive. So in this case, we have ${6 \choose 3}{3 \choose 2}-{6 \choose 1}\times2=48$ distinct connected edge covering sets. \noindent $\bullet$ Choose $3$ edges from $\{ \{ 1,4 \}, \{ 2,6 \}, \{ 3,5 \}\}$ and $2$ edges from $\{ \{ 1,2 \},\{ 2,3 \},\{ 3,4 \},\{ 4,5 \},\{ 5,6 \},\{ 6,1\} \}$, except for the three pairs $\{\{1,2\},\{6,1\}\}$, $\{\{2,3\},\{5,6\}\}$ and $\{\{3,4\},\{4,5\}\}$. So in this case we have ${3 \choose 3}\times [{6 \choose 2}-3]=12$ distinct connected edge covering sets. Therefore, by the addition principle, $e_{c}(G_{1},5)=81$. \qed \end{proof}
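For small graphs, counts of this kind can also be obtained by exhaustive enumeration. The following short Python sketch is included only as an illustration of the definition of $e_{c}(G,i)$: it checks every $i$-subset of the edge set for coverage and connectivity, and is therefore feasible only when the number of edges is small. For instance, for the bowtie graph $F_{2,3}$ and $i=4$ it returns $9$, in agreement with the formula for generalized friendship graphs.
\begin{verbatim}
# Illustrative brute-force sketch (not part of the counting arguments above):
# count the connected edge cover sets of a small graph exhaustively.
from itertools import combinations

def connected_edge_covers(vertices, edges, i):
    # e_c(G, i): number of i-subsets of `edges` that cover every vertex
    # and induce a connected subgraph.
    vertices = set(vertices)
    count = 0
    for subset in combinations(edges, i):
        covered = {v for e in subset for v in e}
        if covered != vertices:
            continue
        # connectivity check by depth-first search on the chosen edges
        adj = {v: set() for v in covered}
        for u, v in subset:
            adj[u].add(v)
            adj[v].add(u)
        stack, seen = [next(iter(covered))], set()
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)
        count += (seen == covered)
    return count

# Bowtie graph F_{2,3}: two triangles sharing the vertex 0.
bowtie = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(connected_edge_covers(range(5), bowtie, 4))   # prints 9
\end{verbatim}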
Similar to the proof of Theorem \ref{cub6}, we can compute the other coefficients of the connected edge cover polynomials of the cubic graphs of order $6$, and we have the following result: \begin{theorem} If $G_1$ and $G_2$ are the two cubic graphs of order $6$ (Figure \ref{1}), then $$ E_{c}(G_{1}, x)=E_{c}(G_{2}, x)=x^{9}+{9 \choose 8}x^{8}+{9 \choose 7}x^{7}+{9 \choose 6}x^{6}+81x^{5}.$$ \end{theorem} \begin{figure}[ht] \centering \includegraphics[scale=0.8]{C8} \caption{Cubic graphs of order 8} \label{2} \end{figure} Here, we obtain the number of connected edge covering sets of size $\rho_c$ of the cubic graphs of order $8$, which are shown in Figure \ref{2}. \begin{theorem}\label{cube8} \begin{enumerate} \item[(i)] $e_{c}(G_{1},7)=324$. \item[(ii)] $e_{c}(G_{2}, 7)=338$. \item[(iii)] $e_{c}(G_{3}, 7)= e_{c}(G_{4}, 7)=332$. \item[(iv)] $e_{c}(G_{5}, 7)=344$. \end{enumerate} \end{theorem} \begin{proof} The proofs of all parts are easy and similar. For instance, we state the proof of Part (i). To construct a connected edge covering of size $7$ of $G_1$, we have some cases: \begin{enumerate} \item[(1)] Choose seven edges from $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$. So we have ${8 \choose 7}=8$ distinct connected edge covering sets. \item[(2)] Choose six edges from $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$ and one more edge one of whose end-vertices is not on the $6$ chosen edges. So we have ${8 \choose 6}{1 \choose 1}=28$ distinct connected edge covering sets. \item[(3)] Choose five edges from $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$. We have the following subcases: \textbf{Case 1}. Choose five edges of the cycle $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$ inducing a subgraph $P_{6}$, and choose both of the $2$ edges that are connected to the vertices which are not on the $5$ chosen edges. So, we have $8\times {2 \choose 2}=8$ distinct connected edge covering sets. \textbf{Case 2}. Choose five edges from the outer $C_8$ such that a vertex of $C_8$, say $v$, is not incident to any of the chosen edges. Then we can choose two edges of $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$ such that the distance between an end-vertex and the vertex $v$ is two; that is, we choose one of the edges with end-vertex $v$ and one of the edges whose induced subgraph has a leaf. Therefore, in this case there are ${8 \choose 1}{1 \choose 1}{4 \choose 3}{1 \choose 1}=32$ connected edge cover sets. \textbf{Case 3}. If the subgraph induced by the $5$ chosen edges has no isolated vertex, then we subtract the two previous cases from the total number of choices. Moreover, two pairs of the $6$ vertices of degree one are joined by repeated edges, so we are left with $4$ vertices and we need to choose $2$ edges connected to them. Thus, in this case there are $[{8 \choose 5}-(8+32)]\times {4 \choose 2}=96$ connected edge cover sets. \item[(4)] Choose four edges from $\{ \{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{5,6\}, \{6,7\}, \{7,8\}, \{8,1\} \}$. We have the following subcases: \textbf{Case 1}. Consider an isolated vertex (in the induced subgraph), say $v$. Choose one of the edges with end-vertex $v$, one edge from the edges that are adjacent to an edge with end-vertex $v$, and one of the two edges that are adjacent to a $K_2$ component of the induced subgraph.
So in this case, there are ${8 \choose 1}{1 \choose 1}{1 \choose 1}{2 \choose 1}=16$ connected edge cover sets. \textbf{Case 2}. Choose two isolated vertices (in the induced subgraph), both of the $2$ edges that are connected to the isolated vertices, and one edge from the $2$ other edges. Thus, in this case there are ${8 \choose 2}{2 \choose 2}{2 \choose 1}=56$ connected edge cover sets. \textbf{Case 3}. Choose three isolated vertices (in the induced subgraph) and all $3$ edges that are connected to the isolated vertices. Therefore, in this case there are ${8 \choose 3}{3 \choose 3}=56$ connected edge cover sets. \item[(5)] Choose four edges from $\{ \{1,7\}, \{2,4\}, \{3,5\}, \{6,8\} \}$. We have the following subcases: \textbf{Case 1}. Choose two edges from $\{ \{1,2\}, \{5,6\} \}$ and one edge from the other $6$ edges. So in this case there are ${4 \choose 4}{2 \choose 2}{6 \choose 1}=6$ connected edge cover sets. \textbf{Case 2}. Choose one edge from $\{ \{1,2\}, \{5,6\} \}$, one edge from $\{ \{2,3\}, \{3,4\}, \{4,5\} \}$ and one edge from $\{ \{6,7\}, \{7,8\}, \{8,1\} \}$. So in this case there are ${4 \choose 4}{2 \choose 1}{3 \choose 1}{3 \choose 1}=18$ connected edge cover sets. According to the addition principle, $e_{c}(G_{1}, 7)=324$. \qed \end{enumerate} \end{proof} Similar to the proof of Theorem \ref{cube8}, we can obtain the other coefficients of the connected edge cover polynomials of the cubic graphs of order $8$, and so we have the following result: \begin{theorem} If $G_{1},\ldots,G_{5}$ are the cubic graphs of order $8$ (Figure \ref{2}), then \begin{enumerate} \item[(i)] $E_{c}(G_{1}, x)=x^{12}+{12 \choose 11}x^{11}+[{12 \choose 10}-1]x^{10}+[{12 \choose 9}-6]x^{9}+[{12 \choose 8}-6]x^{8}+324x^{7}$, \item[(ii)] $E_{c}(G_{2}, x)=x^{12}+{12 \choose 11}x^{11}+{12 \choose 10}x^{10}+{12 \choose 9}x^{9}+[{12 \choose 8}-2]x^{8}+338x^{7}$, \item[(iii)] $E_{c}(G_{3}, x)=E_{c}(G_{4}, x)=x^{12}+{12 \choose 11}x^{11}+{12 \choose 10}x^{10}+{12 \choose 9}x^{9}+[{12 \choose 8}-4]x^{8}+332x^{7}$, \item[(iv)] $E_{c}(G_{5}, x)=x^{12}+{12 \choose 11}x^{11}+{12 \choose 10}x^{10}+{12 \choose 9}x^{9}+{12 \choose 8}x^{8}+344x^{7}$. \end{enumerate} \end{theorem} \begin{figure}[h!] \centering \includegraphics[scale=1]{P2} \caption{The Petersen graph} \label{Pet} \end{figure} \medskip There are exactly $21$ cubic graphs of order $10$ (see \cite{JAS}); among them, the Petersen graph $P$ is the most well-known. Here, we compute the number of connected edge cover sets of $P$ of size $9$. \begin{lemma} If $P$ is the Petersen graph, then $e_{c}(P, 9)=235$. \end{lemma} \begin{proof} To construct a connected edge covering of $P$ of size $9$ (Figure \ref{Pet}), we have some cases: \begin{enumerate} \item[(i)] Choose four edges from the cycle $\{ \{1,2\},\{2,3\},\{3,4\},\{4,5\},\{5,1\} \}$, one edge from $\{ \{1,6\}, \{2,7\}, \{3,8\}, \{4,9\}, \{5,10\} \}$ and four edges from $\{ \{6,8\}, \{6,9\}, \{7,9\}, \{7,10\}, \{8,10\} \}$. So in this case, there are ${5 \choose 4}{5 \choose 1}{5 \choose 4}=125$ connected edge cover sets. \item[(ii)] Choose four edges from the cycle $\{ \{1,2\},\{2,3\},\{3,4\},\{4,5\},\{5,1\} \}$ and four edges from $\{ \{1,6\}, \{2,7\}, \{3,8\}, \{4,9\}, \{5,10\} \}$; there are then four ways to connect the remaining isolated vertex to the other vertices. Therefore, in this case there are ${5 \choose 4}{5 \choose 4}\times 4=100$ connected edge cover sets.
\item[(iii)] Choose five edges from $\{ \{1,6\}, \{2,7\}, \{3,8\}, \{4,9\}, \{5,10\} \}$ and four edges from $\{ \{6,8\}, \{6,9\}, \{7,9\}, \{7,10\}, \{8,10\} \}$. So in this case there are ${5 \choose 5}{5 \choose 4}=5$ connected edge cover sets. \item[(iv)] Choose four edges from the cycle $\{ \{1,2\},\{2,3\},\{3,4\},\{4,5\},\{5,1\} \}$ and five edges from $\{ \{1,6\}, \{2,7\}, \{3,8\}, \{4,9\}, \{5,10\} \}$. So in this case there are ${5 \choose 4}{5 \choose 5}=5$ connected edge cover sets. \end{enumerate} Therefore, by the addition principle, we have the result. \qed \end{proof} \section{Conclusion} In this paper, we considered the connected edge cover sets of a graph and initiated the study of the number of connected edge cover sets with cardinality $i$. We introduced the connected edge cover polynomial of a graph and found some of its properties. We computed this polynomial for the cubic graphs of order $6$ and $8$, and we also computed the number of connected edge cover sets of size $9$ of the Petersen graph. There are many open problems for future work. We state some of them: \begin{enumerate} \item [(i)] Study the roots of the connected edge cover polynomial of a graph. \item[(ii)] What is the formula for the connected edge cover polynomial of graph products, such as the Cartesian product, corona, join and lexicographic product? \item[(iii)] Examine the effects on $\rho(G)$, $\rho_c(G)$ and $E_c(G,x)$ when $G$ is modified by vertex and edge operations. \end{enumerate} \begin{thebibliography}{9} \bibitem{saeid1} S. Akbari, M.R. Oboudi, {\it On the edge cover polynomial of a graph}, Europ. J. Combin. 34 (2013) 297--321. \bibitem{JAS} S. Alikhani, S. Jahari, {\it On the edge cover polynomials of certain graphs}, J. Alg. Sys. 2 (2) (2014) 97--108. \bibitem{turk} S. Alikhani, Y.H. Peng, {\it Domination polynomials of cubic graphs of order 10}, Turk. J. Math. 35 (2011) 355--366. \bibitem{CCO} S. Alikhani, H.R. Golmohammadi, E.V. Konstantinova, {\it Coalition of cubic graphs of order at most $10$}, Commun. Comb. Optim. 9 (3) (2024) 437--450. \bibitem{bond} J.A. Bondy, U.S.R. Murty, {\it Graph Theory}, Graduate Texts in Mathematics, 244. Springer, New York, 2008. \bibitem{GRo} M. Grohe, D. Marx, {\it Constraint solving via fractional edge covers}, in: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 06, New York, NY, USA, 2006, pp. 289--298. \end{thebibliography} \end{document}